Jan 20 13:16:02 localhost kernel: Linux version 5.14.0-661.el9.x86_64 (mockbuild@x86-05.stream.rdu2.redhat.com) (gcc (GCC) 11.5.0 20240719 (Red Hat 11.5.0-14), GNU ld version 2.35.2-69.el9) #1 SMP PREEMPT_DYNAMIC Fri Jan 16 09:19:22 UTC 2026
Jan 20 13:16:02 localhost kernel: The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com.
Jan 20 13:16:02 localhost kernel: Command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-661.el9.x86_64 root=UUID=22ac9141-3960-4912-b20e-19fc8a328d40 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Jan 20 13:16:02 localhost kernel: BIOS-provided physical RAM map:
Jan 20 13:16:02 localhost kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 20 13:16:02 localhost kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 20 13:16:02 localhost kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 20 13:16:02 localhost kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdafff] usable
Jan 20 13:16:02 localhost kernel: BIOS-e820: [mem 0x00000000bffdb000-0x00000000bfffffff] reserved
Jan 20 13:16:02 localhost kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 20 13:16:02 localhost kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 20 13:16:02 localhost kernel: BIOS-e820: [mem 0x0000000100000000-0x000000023fffffff] usable
Jan 20 13:16:02 localhost kernel: NX (Execute Disable) protection: active
Jan 20 13:16:02 localhost kernel: APIC: Static calls initialized
Jan 20 13:16:02 localhost kernel: SMBIOS 2.8 present.
Jan 20 13:16:02 localhost kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Jan 20 13:16:02 localhost kernel: Hypervisor detected: KVM
Jan 20 13:16:02 localhost kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 20 13:16:02 localhost kernel: kvm-clock: using sched offset of 3902502170 cycles
Jan 20 13:16:02 localhost kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 20 13:16:02 localhost kernel: tsc: Detected 2800.000 MHz processor
Jan 20 13:16:02 localhost kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 20 13:16:02 localhost kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 20 13:16:02 localhost kernel: last_pfn = 0x240000 max_arch_pfn = 0x400000000
Jan 20 13:16:02 localhost kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 20 13:16:02 localhost kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Jan 20 13:16:02 localhost kernel: last_pfn = 0xbffdb max_arch_pfn = 0x400000000
Jan 20 13:16:02 localhost kernel: found SMP MP-table at [mem 0x000f5ae0-0x000f5aef]
Jan 20 13:16:02 localhost kernel: Using GB pages for direct mapping
Jan 20 13:16:02 localhost kernel: RAMDISK: [mem 0x2d426000-0x32a0afff]
Jan 20 13:16:02 localhost kernel: ACPI: Early table checksum verification disabled
Jan 20 13:16:02 localhost kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Jan 20 13:16:02 localhost kernel: ACPI: RSDT 0x00000000BFFE16BD 000030 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 20 13:16:02 localhost kernel: ACPI: FACP 0x00000000BFFE1571 000074 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 20 13:16:02 localhost kernel: ACPI: DSDT 0x00000000BFFDFC80 0018F1 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 20 13:16:02 localhost kernel: ACPI: FACS 0x00000000BFFDFC40 000040
Jan 20 13:16:02 localhost kernel: ACPI: APIC 0x00000000BFFE15E5 0000B0 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 20 13:16:02 localhost kernel: ACPI: WAET 0x00000000BFFE1695 000028 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 20 13:16:02 localhost kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1571-0xbffe15e4]
Jan 20 13:16:02 localhost kernel: ACPI: Reserving DSDT table memory at [mem 0xbffdfc80-0xbffe1570]
Jan 20 13:16:02 localhost kernel: ACPI: Reserving FACS table memory at [mem 0xbffdfc40-0xbffdfc7f]
Jan 20 13:16:02 localhost kernel: ACPI: Reserving APIC table memory at [mem 0xbffe15e5-0xbffe1694]
Jan 20 13:16:02 localhost kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1695-0xbffe16bc]
Jan 20 13:16:02 localhost kernel: No NUMA configuration found
Jan 20 13:16:02 localhost kernel: Faking a node at [mem 0x0000000000000000-0x000000023fffffff]
Jan 20 13:16:02 localhost kernel: NODE_DATA(0) allocated [mem 0x23ffd5000-0x23fffffff]
Jan 20 13:16:02 localhost kernel: crashkernel reserved: 0x00000000af000000 - 0x00000000bf000000 (256 MB)
Jan 20 13:16:02 localhost kernel: Zone ranges:
Jan 20 13:16:02 localhost kernel:   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Jan 20 13:16:02 localhost kernel:   DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Jan 20 13:16:02 localhost kernel:   Normal   [mem 0x0000000100000000-0x000000023fffffff]
Jan 20 13:16:02 localhost kernel:   Device   empty
Jan 20 13:16:02 localhost kernel: Movable zone start for each node
Jan 20 13:16:02 localhost kernel: Early memory node ranges
Jan 20 13:16:02 localhost kernel:   node   0: [mem 0x0000000000001000-0x000000000009efff]
Jan 20 13:16:02 localhost kernel:   node   0: [mem 0x0000000000100000-0x00000000bffdafff]
Jan 20 13:16:02 localhost kernel:   node   0: [mem 0x0000000100000000-0x000000023fffffff]
Jan 20 13:16:02 localhost kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000023fffffff]
Jan 20 13:16:02 localhost kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 20 13:16:02 localhost kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 20 13:16:02 localhost kernel: On node 0, zone Normal: 37 pages in unavailable ranges
Jan 20 13:16:02 localhost kernel: ACPI: PM-Timer IO Port: 0x608
Jan 20 13:16:02 localhost kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 20 13:16:02 localhost kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 20 13:16:02 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 20 13:16:02 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 20 13:16:02 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 20 13:16:02 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 20 13:16:02 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 20 13:16:02 localhost kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 20 13:16:02 localhost kernel: TSC deadline timer available
Jan 20 13:16:02 localhost kernel: CPU topo: Max. logical packages:   8
Jan 20 13:16:02 localhost kernel: CPU topo: Max. logical dies:       8
Jan 20 13:16:02 localhost kernel: CPU topo: Max. dies per package:   1
Jan 20 13:16:02 localhost kernel: CPU topo: Max. threads per core:   1
Jan 20 13:16:02 localhost kernel: CPU topo: Num. cores per package:     1
Jan 20 13:16:02 localhost kernel: CPU topo: Num. threads per package:   1
Jan 20 13:16:02 localhost kernel: CPU topo: Allowing 8 present CPUs plus 0 hotplug CPUs
Jan 20 13:16:02 localhost kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 20 13:16:02 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
Jan 20 13:16:02 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Jan 20 13:16:02 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
Jan 20 13:16:02 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Jan 20 13:16:02 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xbffdb000-0xbfffffff]
Jan 20 13:16:02 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfeffbfff]
Jan 20 13:16:02 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Jan 20 13:16:02 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Jan 20 13:16:02 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Jan 20 13:16:02 localhost kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Jan 20 13:16:02 localhost kernel: Booting paravirtualized kernel on KVM
Jan 20 13:16:02 localhost kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 20 13:16:02 localhost kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:8 nr_cpu_ids:8 nr_node_ids:1
Jan 20 13:16:02 localhost kernel: percpu: Embedded 64 pages/cpu s225280 r8192 d28672 u262144
Jan 20 13:16:02 localhost kernel: pcpu-alloc: s225280 r8192 d28672 u262144 alloc=1*2097152
Jan 20 13:16:02 localhost kernel: pcpu-alloc: [0] 0 1 2 3 4 5 6 7 
Jan 20 13:16:02 localhost kernel: kvm-guest: PV spinlocks disabled, no host support
Jan 20 13:16:02 localhost kernel: Kernel command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-661.el9.x86_64 root=UUID=22ac9141-3960-4912-b20e-19fc8a328d40 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Jan 20 13:16:02 localhost kernel: Unknown kernel command line parameters "BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-661.el9.x86_64", will be passed to user space.
Jan 20 13:16:02 localhost kernel: random: crng init done
Jan 20 13:16:02 localhost kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Jan 20 13:16:02 localhost kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 20 13:16:02 localhost kernel: Fallback order for Node 0: 0 
Jan 20 13:16:02 localhost kernel: Built 1 zonelists, mobility grouping on.  Total pages: 2064091
Jan 20 13:16:02 localhost kernel: Policy zone: Normal
Jan 20 13:16:02 localhost kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 20 13:16:02 localhost kernel: software IO TLB: area num 8.
Jan 20 13:16:02 localhost kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1
Jan 20 13:16:02 localhost kernel: ftrace: allocating 49417 entries in 194 pages
Jan 20 13:16:02 localhost kernel: ftrace: allocated 194 pages with 3 groups
Jan 20 13:16:02 localhost kernel: Dynamic Preempt: voluntary
Jan 20 13:16:02 localhost kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 20 13:16:02 localhost kernel: rcu:         RCU event tracing is enabled.
Jan 20 13:16:02 localhost kernel: rcu:         RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=8.
Jan 20 13:16:02 localhost kernel:         Trampoline variant of Tasks RCU enabled.
Jan 20 13:16:02 localhost kernel:         Rude variant of Tasks RCU enabled.
Jan 20 13:16:02 localhost kernel:         Tracing variant of Tasks RCU enabled.
Jan 20 13:16:02 localhost kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 20 13:16:02 localhost kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=8
Jan 20 13:16:02 localhost kernel: RCU Tasks: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Jan 20 13:16:02 localhost kernel: RCU Tasks Rude: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Jan 20 13:16:02 localhost kernel: RCU Tasks Trace: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Jan 20 13:16:02 localhost kernel: NR_IRQS: 524544, nr_irqs: 488, preallocated irqs: 16
Jan 20 13:16:02 localhost kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 20 13:16:02 localhost kernel: kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
Jan 20 13:16:02 localhost kernel: Console: colour VGA+ 80x25
Jan 20 13:16:02 localhost kernel: printk: console [ttyS0] enabled
Jan 20 13:16:02 localhost kernel: ACPI: Core revision 20230331
Jan 20 13:16:02 localhost kernel: APIC: Switch to symmetric I/O mode setup
Jan 20 13:16:02 localhost kernel: x2apic enabled
Jan 20 13:16:02 localhost kernel: APIC: Switched APIC routing to: physical x2apic
Jan 20 13:16:02 localhost kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Jan 20 13:16:02 localhost kernel: Calibrating delay loop (skipped) preset value.. 5600.00 BogoMIPS (lpj=2800000)
Jan 20 13:16:02 localhost kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 20 13:16:02 localhost kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jan 20 13:16:02 localhost kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jan 20 13:16:02 localhost kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 20 13:16:02 localhost kernel: Spectre V2 : Mitigation: Retpolines
Jan 20 13:16:02 localhost kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jan 20 13:16:02 localhost kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Jan 20 13:16:02 localhost kernel: RETBleed: Mitigation: untrained return thunk
Jan 20 13:16:02 localhost kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 20 13:16:02 localhost kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 20 13:16:02 localhost kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jan 20 13:16:02 localhost kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jan 20 13:16:02 localhost kernel: x86/bugs: return thunk changed
Jan 20 13:16:02 localhost kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jan 20 13:16:02 localhost kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 20 13:16:02 localhost kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 20 13:16:02 localhost kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 20 13:16:02 localhost kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Jan 20 13:16:02 localhost kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jan 20 13:16:02 localhost kernel: Freeing SMP alternatives memory: 40K
Jan 20 13:16:02 localhost kernel: pid_max: default: 32768 minimum: 301
Jan 20 13:16:02 localhost kernel: LSM: initializing lsm=lockdown,capability,landlock,yama,integrity,selinux,bpf
Jan 20 13:16:02 localhost kernel: landlock: Up and running.
Jan 20 13:16:02 localhost kernel: Yama: becoming mindful.
Jan 20 13:16:02 localhost kernel: SELinux:  Initializing.
Jan 20 13:16:02 localhost kernel: LSM support for eBPF active
Jan 20 13:16:02 localhost kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 20 13:16:02 localhost kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 20 13:16:02 localhost kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Jan 20 13:16:02 localhost kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Jan 20 13:16:02 localhost kernel: ... version:                0
Jan 20 13:16:02 localhost kernel: ... bit width:              48
Jan 20 13:16:02 localhost kernel: ... generic registers:      6
Jan 20 13:16:02 localhost kernel: ... value mask:             0000ffffffffffff
Jan 20 13:16:02 localhost kernel: ... max period:             00007fffffffffff
Jan 20 13:16:02 localhost kernel: ... fixed-purpose events:   0
Jan 20 13:16:02 localhost kernel: ... event mask:             000000000000003f
Jan 20 13:16:02 localhost kernel: signal: max sigframe size: 1776
Jan 20 13:16:02 localhost kernel: rcu: Hierarchical SRCU implementation.
Jan 20 13:16:02 localhost kernel: rcu:         Max phase no-delay instances is 400.
Jan 20 13:16:02 localhost kernel: smp: Bringing up secondary CPUs ...
Jan 20 13:16:02 localhost kernel: smpboot: x86: Booting SMP configuration:
Jan 20 13:16:02 localhost kernel: .... node  #0, CPUs:      #1 #2 #3 #4 #5 #6 #7
Jan 20 13:16:02 localhost kernel: smp: Brought up 1 node, 8 CPUs
Jan 20 13:16:02 localhost kernel: smpboot: Total of 8 processors activated (44800.00 BogoMIPS)
Jan 20 13:16:02 localhost kernel: node 0 deferred pages initialised in 15ms
Jan 20 13:16:02 localhost kernel: Memory: 7763888K/8388068K available (16384K kernel code, 5797K rwdata, 13916K rodata, 4200K init, 7192K bss, 618360K reserved, 0K cma-reserved)
Jan 20 13:16:02 localhost kernel: devtmpfs: initialized
Jan 20 13:16:02 localhost kernel: x86/mm: Memory block size: 128MB
Jan 20 13:16:02 localhost kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 20 13:16:02 localhost kernel: futex hash table entries: 2048 (131072 bytes on 1 NUMA nodes, total 128 KiB, linear).
Jan 20 13:16:02 localhost kernel: pinctrl core: initialized pinctrl subsystem
Jan 20 13:16:02 localhost kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 20 13:16:02 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL pool for atomic allocations
Jan 20 13:16:02 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jan 20 13:16:02 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jan 20 13:16:02 localhost kernel: audit: initializing netlink subsys (disabled)
Jan 20 13:16:02 localhost kernel: audit: type=2000 audit(1768914960.411:1): state=initialized audit_enabled=0 res=1
Jan 20 13:16:02 localhost kernel: thermal_sys: Registered thermal governor 'fair_share'
Jan 20 13:16:02 localhost kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 20 13:16:02 localhost kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 20 13:16:02 localhost kernel: cpuidle: using governor menu
Jan 20 13:16:02 localhost kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 20 13:16:02 localhost kernel: PCI: Using configuration type 1 for base access
Jan 20 13:16:02 localhost kernel: PCI: Using configuration type 1 for extended access
Jan 20 13:16:02 localhost kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 20 13:16:02 localhost kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 20 13:16:02 localhost kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 20 13:16:02 localhost kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 20 13:16:02 localhost kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 20 13:16:02 localhost kernel: Demotion targets for Node 0: null
Jan 20 13:16:02 localhost kernel: cryptd: max_cpu_qlen set to 1000
Jan 20 13:16:02 localhost kernel: ACPI: Added _OSI(Module Device)
Jan 20 13:16:02 localhost kernel: ACPI: Added _OSI(Processor Device)
Jan 20 13:16:02 localhost kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 20 13:16:02 localhost kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 20 13:16:02 localhost kernel: ACPI: Interpreter enabled
Jan 20 13:16:02 localhost kernel: ACPI: PM: (supports S0 S3 S4 S5)
Jan 20 13:16:02 localhost kernel: ACPI: Using IOAPIC for interrupt routing
Jan 20 13:16:02 localhost kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 20 13:16:02 localhost kernel: PCI: Using E820 reservations for host bridge windows
Jan 20 13:16:02 localhost kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Jan 20 13:16:02 localhost kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 20 13:16:02 localhost kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
Jan 20 13:16:02 localhost kernel: acpiphp: Slot [3] registered
Jan 20 13:16:02 localhost kernel: acpiphp: Slot [4] registered
Jan 20 13:16:02 localhost kernel: acpiphp: Slot [5] registered
Jan 20 13:16:02 localhost kernel: acpiphp: Slot [6] registered
Jan 20 13:16:02 localhost kernel: acpiphp: Slot [7] registered
Jan 20 13:16:02 localhost kernel: acpiphp: Slot [8] registered
Jan 20 13:16:02 localhost kernel: acpiphp: Slot [9] registered
Jan 20 13:16:02 localhost kernel: acpiphp: Slot [10] registered
Jan 20 13:16:02 localhost kernel: acpiphp: Slot [11] registered
Jan 20 13:16:02 localhost kernel: acpiphp: Slot [12] registered
Jan 20 13:16:02 localhost kernel: acpiphp: Slot [13] registered
Jan 20 13:16:02 localhost kernel: acpiphp: Slot [14] registered
Jan 20 13:16:02 localhost kernel: acpiphp: Slot [15] registered
Jan 20 13:16:02 localhost kernel: acpiphp: Slot [16] registered
Jan 20 13:16:02 localhost kernel: acpiphp: Slot [17] registered
Jan 20 13:16:02 localhost kernel: acpiphp: Slot [18] registered
Jan 20 13:16:02 localhost kernel: acpiphp: Slot [19] registered
Jan 20 13:16:02 localhost kernel: acpiphp: Slot [20] registered
Jan 20 13:16:02 localhost kernel: acpiphp: Slot [21] registered
Jan 20 13:16:02 localhost kernel: acpiphp: Slot [22] registered
Jan 20 13:16:02 localhost kernel: acpiphp: Slot [23] registered
Jan 20 13:16:02 localhost kernel: acpiphp: Slot [24] registered
Jan 20 13:16:02 localhost kernel: acpiphp: Slot [25] registered
Jan 20 13:16:02 localhost kernel: acpiphp: Slot [26] registered
Jan 20 13:16:02 localhost kernel: acpiphp: Slot [27] registered
Jan 20 13:16:02 localhost kernel: acpiphp: Slot [28] registered
Jan 20 13:16:02 localhost kernel: acpiphp: Slot [29] registered
Jan 20 13:16:02 localhost kernel: acpiphp: Slot [30] registered
Jan 20 13:16:02 localhost kernel: acpiphp: Slot [31] registered
Jan 20 13:16:02 localhost kernel: PCI host bridge to bus 0000:00
Jan 20 13:16:02 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Jan 20 13:16:02 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Jan 20 13:16:02 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 20 13:16:02 localhost kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jan 20 13:16:02 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x240000000-0x2bfffffff window]
Jan 20 13:16:02 localhost kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 20 13:16:02 localhost kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Jan 20 13:16:02 localhost kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Jan 20 13:16:02 localhost kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
Jan 20 13:16:02 localhost kernel: pci 0000:00:01.1: BAR 4 [io  0xc140-0xc14f]
Jan 20 13:16:02 localhost kernel: pci 0000:00:01.1: BAR 0 [io  0x01f0-0x01f7]: legacy IDE quirk
Jan 20 13:16:02 localhost kernel: pci 0000:00:01.1: BAR 1 [io  0x03f6]: legacy IDE quirk
Jan 20 13:16:02 localhost kernel: pci 0000:00:01.1: BAR 2 [io  0x0170-0x0177]: legacy IDE quirk
Jan 20 13:16:02 localhost kernel: pci 0000:00:01.1: BAR 3 [io  0x0376]: legacy IDE quirk
Jan 20 13:16:02 localhost kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Jan 20 13:16:02 localhost kernel: pci 0000:00:01.2: BAR 4 [io  0xc100-0xc11f]
Jan 20 13:16:02 localhost kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Jan 20 13:16:02 localhost kernel: pci 0000:00:01.3: quirk: [io  0x0600-0x063f] claimed by PIIX4 ACPI
Jan 20 13:16:02 localhost kernel: pci 0000:00:01.3: quirk: [io  0x0700-0x070f] claimed by PIIX4 SMB
Jan 20 13:16:02 localhost kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Jan 20 13:16:02 localhost kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
Jan 20 13:16:02 localhost kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref]
Jan 20 13:16:02 localhost kernel: pci 0000:00:02.0: BAR 4 [mem 0xfeb90000-0xfeb90fff]
Jan 20 13:16:02 localhost kernel: pci 0000:00:02.0: ROM [mem 0xfeb80000-0xfeb8ffff pref]
Jan 20 13:16:02 localhost kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 20 13:16:02 localhost kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jan 20 13:16:02 localhost kernel: pci 0000:00:03.0: BAR 0 [io  0xc080-0xc0bf]
Jan 20 13:16:02 localhost kernel: pci 0000:00:03.0: BAR 1 [mem 0xfeb91000-0xfeb91fff]
Jan 20 13:16:02 localhost kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref]
Jan 20 13:16:02 localhost kernel: pci 0000:00:03.0: ROM [mem 0xfeb00000-0xfeb7ffff pref]
Jan 20 13:16:02 localhost kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Jan 20 13:16:02 localhost kernel: pci 0000:00:04.0: BAR 0 [io  0xc000-0xc07f]
Jan 20 13:16:02 localhost kernel: pci 0000:00:04.0: BAR 1 [mem 0xfeb92000-0xfeb92fff]
Jan 20 13:16:02 localhost kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref]
Jan 20 13:16:02 localhost kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
Jan 20 13:16:02 localhost kernel: pci 0000:00:05.0: BAR 0 [io  0xc0c0-0xc0ff]
Jan 20 13:16:02 localhost kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref]
Jan 20 13:16:02 localhost kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Jan 20 13:16:02 localhost kernel: pci 0000:00:06.0: BAR 0 [io  0xc120-0xc13f]
Jan 20 13:16:02 localhost kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref]
Jan 20 13:16:02 localhost kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 20 13:16:02 localhost kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 20 13:16:02 localhost kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 20 13:16:02 localhost kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 20 13:16:02 localhost kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Jan 20 13:16:02 localhost kernel: iommu: Default domain type: Translated
Jan 20 13:16:02 localhost kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 20 13:16:02 localhost kernel: SCSI subsystem initialized
Jan 20 13:16:02 localhost kernel: ACPI: bus type USB registered
Jan 20 13:16:02 localhost kernel: usbcore: registered new interface driver usbfs
Jan 20 13:16:02 localhost kernel: usbcore: registered new interface driver hub
Jan 20 13:16:02 localhost kernel: usbcore: registered new device driver usb
Jan 20 13:16:02 localhost kernel: pps_core: LinuxPPS API ver. 1 registered
Jan 20 13:16:02 localhost kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Jan 20 13:16:02 localhost kernel: PTP clock support registered
Jan 20 13:16:02 localhost kernel: EDAC MC: Ver: 3.0.0
Jan 20 13:16:02 localhost kernel: NetLabel: Initializing
Jan 20 13:16:02 localhost kernel: NetLabel:  domain hash size = 128
Jan 20 13:16:02 localhost kernel: NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
Jan 20 13:16:02 localhost kernel: NetLabel:  unlabeled traffic allowed by default
Jan 20 13:16:02 localhost kernel: PCI: Using ACPI for IRQ routing
Jan 20 13:16:02 localhost kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 20 13:16:02 localhost kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jan 20 13:16:02 localhost kernel: e820: reserve RAM buffer [mem 0xbffdb000-0xbfffffff]
Jan 20 13:16:02 localhost kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Jan 20 13:16:02 localhost kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Jan 20 13:16:02 localhost kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 20 13:16:02 localhost kernel: vgaarb: loaded
Jan 20 13:16:02 localhost kernel: clocksource: Switched to clocksource kvm-clock
Jan 20 13:16:02 localhost kernel: VFS: Disk quotas dquot_6.6.0
Jan 20 13:16:02 localhost kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 20 13:16:02 localhost kernel: pnp: PnP ACPI init
Jan 20 13:16:02 localhost kernel: pnp 00:03: [dma 2]
Jan 20 13:16:02 localhost kernel: pnp: PnP ACPI: found 5 devices
Jan 20 13:16:02 localhost kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 20 13:16:02 localhost kernel: NET: Registered PF_INET protocol family
Jan 20 13:16:02 localhost kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jan 20 13:16:02 localhost kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Jan 20 13:16:02 localhost kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 20 13:16:02 localhost kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 20 13:16:02 localhost kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Jan 20 13:16:02 localhost kernel: TCP: Hash tables configured (established 65536 bind 65536)
Jan 20 13:16:02 localhost kernel: MPTCP token hash table entries: 8192 (order: 5, 196608 bytes, linear)
Jan 20 13:16:02 localhost kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jan 20 13:16:02 localhost kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jan 20 13:16:02 localhost kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 20 13:16:02 localhost kernel: NET: Registered PF_XDP protocol family
Jan 20 13:16:02 localhost kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Jan 20 13:16:02 localhost kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Jan 20 13:16:02 localhost kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 20 13:16:02 localhost kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Jan 20 13:16:02 localhost kernel: pci_bus 0000:00: resource 8 [mem 0x240000000-0x2bfffffff window]
Jan 20 13:16:02 localhost kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Jan 20 13:16:02 localhost kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Jan 20 13:16:02 localhost kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Jan 20 13:16:02 localhost kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x160 took 72832 usecs
Jan 20 13:16:02 localhost kernel: PCI: CLS 0 bytes, default 64
Jan 20 13:16:02 localhost kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Jan 20 13:16:02 localhost kernel: software IO TLB: mapped [mem 0x00000000ab000000-0x00000000af000000] (64MB)
Jan 20 13:16:02 localhost kernel: ACPI: bus type thunderbolt registered
Jan 20 13:16:02 localhost kernel: Trying to unpack rootfs image as initramfs...
Jan 20 13:16:02 localhost kernel: Initialise system trusted keyrings
Jan 20 13:16:02 localhost kernel: Key type blacklist registered
Jan 20 13:16:02 localhost kernel: workingset: timestamp_bits=36 max_order=21 bucket_order=0
Jan 20 13:16:02 localhost kernel: zbud: loaded
Jan 20 13:16:02 localhost kernel: integrity: Platform Keyring initialized
Jan 20 13:16:02 localhost kernel: integrity: Machine keyring initialized
Jan 20 13:16:02 localhost kernel: Freeing initrd memory: 87956K
Jan 20 13:16:02 localhost kernel: NET: Registered PF_ALG protocol family
Jan 20 13:16:02 localhost kernel: xor: automatically using best checksumming function   avx       
Jan 20 13:16:02 localhost kernel: Key type asymmetric registered
Jan 20 13:16:02 localhost kernel: Asymmetric key parser 'x509' registered
Jan 20 13:16:02 localhost kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Jan 20 13:16:02 localhost kernel: io scheduler mq-deadline registered
Jan 20 13:16:02 localhost kernel: io scheduler kyber registered
Jan 20 13:16:02 localhost kernel: io scheduler bfq registered
Jan 20 13:16:02 localhost kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Jan 20 13:16:02 localhost kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Jan 20 13:16:02 localhost kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Jan 20 13:16:02 localhost kernel: ACPI: button: Power Button [PWRF]
Jan 20 13:16:02 localhost kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Jan 20 13:16:02 localhost kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Jan 20 13:16:02 localhost kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Jan 20 13:16:02 localhost kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 20 13:16:02 localhost kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 20 13:16:02 localhost kernel: Non-volatile memory driver v1.3
Jan 20 13:16:02 localhost kernel: rdac: device handler registered
Jan 20 13:16:02 localhost kernel: hp_sw: device handler registered
Jan 20 13:16:02 localhost kernel: emc: device handler registered
Jan 20 13:16:02 localhost kernel: alua: device handler registered
Jan 20 13:16:02 localhost kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Jan 20 13:16:02 localhost kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Jan 20 13:16:02 localhost kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Jan 20 13:16:02 localhost kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c100
Jan 20 13:16:02 localhost kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14
Jan 20 13:16:02 localhost kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Jan 20 13:16:02 localhost kernel: usb usb1: Product: UHCI Host Controller
Jan 20 13:16:02 localhost kernel: usb usb1: Manufacturer: Linux 5.14.0-661.el9.x86_64 uhci_hcd
Jan 20 13:16:02 localhost kernel: usb usb1: SerialNumber: 0000:00:01.2
Jan 20 13:16:02 localhost kernel: hub 1-0:1.0: USB hub found
Jan 20 13:16:02 localhost kernel: hub 1-0:1.0: 2 ports detected
Jan 20 13:16:02 localhost kernel: usbcore: registered new interface driver usbserial_generic
Jan 20 13:16:02 localhost kernel: usbserial: USB Serial support registered for generic
Jan 20 13:16:02 localhost kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 20 13:16:02 localhost kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 20 13:16:02 localhost kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 20 13:16:02 localhost kernel: mousedev: PS/2 mouse device common for all mice
Jan 20 13:16:02 localhost kernel: rtc_cmos 00:04: RTC can wake from S4
Jan 20 13:16:02 localhost kernel: rtc_cmos 00:04: registered as rtc0
Jan 20 13:16:02 localhost kernel: rtc_cmos 00:04: setting system clock to 2026-01-20T13:16:01 UTC (1768914961)
Jan 20 13:16:02 localhost kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Jan 20 13:16:02 localhost kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jan 20 13:16:02 localhost kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 20 13:16:02 localhost kernel: usbcore: registered new interface driver usbhid
Jan 20 13:16:02 localhost kernel: usbhid: USB HID core driver
Jan 20 13:16:02 localhost kernel: drop_monitor: Initializing network drop monitor service
Jan 20 13:16:02 localhost kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Jan 20 13:16:02 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Jan 20 13:16:02 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Jan 20 13:16:02 localhost kernel: Initializing XFRM netlink socket
Jan 20 13:16:02 localhost kernel: NET: Registered PF_INET6 protocol family
Jan 20 13:16:02 localhost kernel: Segment Routing with IPv6
Jan 20 13:16:02 localhost kernel: NET: Registered PF_PACKET protocol family
Jan 20 13:16:02 localhost kernel: mpls_gso: MPLS GSO support
Jan 20 13:16:02 localhost kernel: IPI shorthand broadcast: enabled
Jan 20 13:16:02 localhost kernel: AVX2 version of gcm_enc/dec engaged.
Jan 20 13:16:02 localhost kernel: AES CTR mode by8 optimization enabled
Jan 20 13:16:02 localhost kernel: sched_clock: Marking stable (1657006660, 145822700)->(1876534820, -73705460)
Jan 20 13:16:02 localhost kernel: registered taskstats version 1
Jan 20 13:16:02 localhost kernel: Loading compiled-in X.509 certificates
Jan 20 13:16:02 localhost kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 04453f216699002fd63185eeab832de990bee6d7'
Jan 20 13:16:02 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Jan 20 13:16:02 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Jan 20 13:16:02 localhost kernel: Loaded X.509 cert 'RH-IMA-CA: Red Hat IMA CA: fb31825dd0e073685b264e3038963673f753959a'
Jan 20 13:16:02 localhost kernel: Loaded X.509 cert 'Nvidia GPU OOT signing 001: 55e1cef88193e60419f0b0ec379c49f77545acf0'
Jan 20 13:16:02 localhost kernel: Demotion targets for Node 0: null
Jan 20 13:16:02 localhost kernel: page_owner is disabled
Jan 20 13:16:02 localhost kernel: Key type .fscrypt registered
Jan 20 13:16:02 localhost kernel: Key type fscrypt-provisioning registered
Jan 20 13:16:02 localhost kernel: Key type big_key registered
Jan 20 13:16:02 localhost kernel: Key type encrypted registered
Jan 20 13:16:02 localhost kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 20 13:16:02 localhost kernel: Loading compiled-in module X.509 certificates
Jan 20 13:16:02 localhost kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 04453f216699002fd63185eeab832de990bee6d7'
Jan 20 13:16:02 localhost kernel: ima: Allocated hash algorithm: sha256
Jan 20 13:16:02 localhost kernel: ima: No architecture policies found
Jan 20 13:16:02 localhost kernel: evm: Initialising EVM extended attributes:
Jan 20 13:16:02 localhost kernel: evm: security.selinux
Jan 20 13:16:02 localhost kernel: evm: security.SMACK64 (disabled)
Jan 20 13:16:02 localhost kernel: evm: security.SMACK64EXEC (disabled)
Jan 20 13:16:02 localhost kernel: evm: security.SMACK64TRANSMUTE (disabled)
Jan 20 13:16:02 localhost kernel: evm: security.SMACK64MMAP (disabled)
Jan 20 13:16:02 localhost kernel: evm: security.apparmor (disabled)
Jan 20 13:16:02 localhost kernel: evm: security.ima
Jan 20 13:16:02 localhost kernel: evm: security.capability
Jan 20 13:16:02 localhost kernel: evm: HMAC attrs: 0x1
Jan 20 13:16:02 localhost kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd
Jan 20 13:16:02 localhost kernel: Running certificate verification RSA selftest
Jan 20 13:16:02 localhost kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Jan 20 13:16:02 localhost kernel: Running certificate verification ECDSA selftest
Jan 20 13:16:02 localhost kernel: Loaded X.509 cert 'Certificate verification ECDSA self-testing key: 2900bcea1deb7bc8479a84a23d758efdfdd2b2d3'
Jan 20 13:16:02 localhost kernel: clk: Disabling unused clocks
Jan 20 13:16:02 localhost kernel: Freeing unused decrypted memory: 2028K
Jan 20 13:16:02 localhost kernel: Freeing unused kernel image (initmem) memory: 4200K
Jan 20 13:16:02 localhost kernel: Write protecting the kernel read-only data: 30720k
Jan 20 13:16:02 localhost kernel: Freeing unused kernel image (rodata/data gap) memory: 420K
Jan 20 13:16:02 localhost kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
Jan 20 13:16:02 localhost kernel: Run /init as init process
Jan 20 13:16:02 localhost kernel:   with arguments:
Jan 20 13:16:02 localhost kernel:     /init
Jan 20 13:16:02 localhost kernel:   with environment:
Jan 20 13:16:02 localhost kernel:     HOME=/
Jan 20 13:16:02 localhost kernel:     TERM=linux
Jan 20 13:16:02 localhost kernel:     BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-661.el9.x86_64
Jan 20 13:16:02 localhost kernel: usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
Jan 20 13:16:02 localhost kernel: usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
Jan 20 13:16:02 localhost kernel: usb 1-1: Product: QEMU USB Tablet
Jan 20 13:16:02 localhost kernel: usb 1-1: Manufacturer: QEMU
Jan 20 13:16:02 localhost kernel: usb 1-1: SerialNumber: 28754-0000:00:01.2-1
Jan 20 13:16:02 localhost kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:01.2/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input5
Jan 20 13:16:02 localhost kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:00:01.2-1/input0
Jan 20 13:16:02 localhost systemd[1]: systemd 252-64.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Jan 20 13:16:02 localhost systemd[1]: Detected virtualization kvm.
Jan 20 13:16:02 localhost systemd[1]: Detected architecture x86-64.
Jan 20 13:16:02 localhost systemd[1]: Running in initrd.
Jan 20 13:16:02 localhost systemd[1]: No hostname configured, using default hostname.
Jan 20 13:16:02 localhost systemd[1]: Hostname set to <localhost>.
Jan 20 13:16:02 localhost systemd[1]: Initializing machine ID from VM UUID.
Jan 20 13:16:02 localhost systemd[1]: Queued start job for default target Initrd Default Target.
Jan 20 13:16:02 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Jan 20 13:16:02 localhost systemd[1]: Reached target Local Encrypted Volumes.
Jan 20 13:16:02 localhost systemd[1]: Reached target Initrd /usr File System.
Jan 20 13:16:02 localhost systemd[1]: Reached target Local File Systems.
Jan 20 13:16:02 localhost systemd[1]: Reached target Path Units.
Jan 20 13:16:02 localhost systemd[1]: Reached target Slice Units.
Jan 20 13:16:02 localhost systemd[1]: Reached target Swaps.
Jan 20 13:16:02 localhost systemd[1]: Reached target Timer Units.
Jan 20 13:16:02 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Jan 20 13:16:02 localhost systemd[1]: Listening on Journal Socket (/dev/log).
Jan 20 13:16:02 localhost systemd[1]: Listening on Journal Socket.
Jan 20 13:16:02 localhost systemd[1]: Listening on udev Control Socket.
Jan 20 13:16:02 localhost systemd[1]: Listening on udev Kernel Socket.
Jan 20 13:16:02 localhost systemd[1]: Reached target Socket Units.
Jan 20 13:16:02 localhost systemd[1]: Starting Create List of Static Device Nodes...
Jan 20 13:16:02 localhost systemd[1]: Starting Journal Service...
Jan 20 13:16:02 localhost systemd[1]: Load Kernel Modules was skipped because no trigger condition checks were met.
Jan 20 13:16:02 localhost systemd[1]: Starting Apply Kernel Variables...
Jan 20 13:16:02 localhost systemd[1]: Starting Create System Users...
Jan 20 13:16:02 localhost systemd[1]: Starting Setup Virtual Console...
Jan 20 13:16:02 localhost systemd[1]: Finished Create List of Static Device Nodes.
Jan 20 13:16:02 localhost systemd[1]: Finished Apply Kernel Variables.
Jan 20 13:16:02 localhost systemd-journald[302]: Journal started
Jan 20 13:16:02 localhost systemd-journald[302]: Runtime Journal (/run/log/journal/870b1f1cf19c477bb282ee6eeba50974) is 8.0M, max 153.6M, 145.6M free.
Jan 20 13:16:02 localhost systemd-sysusers[307]: Creating group 'users' with GID 100.
Jan 20 13:16:02 localhost systemd-sysusers[307]: Creating group 'dbus' with GID 81.
Jan 20 13:16:02 localhost systemd-sysusers[307]: Creating user 'dbus' (System Message Bus) with UID 81 and GID 81.
Jan 20 13:16:02 localhost systemd[1]: Finished Create System Users.
Jan 20 13:16:02 localhost systemd[1]: Started Journal Service.
Jan 20 13:16:02 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Jan 20 13:16:02 localhost systemd[1]: Starting Create Volatile Files and Directories...
Jan 20 13:16:02 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Jan 20 13:16:02 localhost systemd[1]: Finished Create Volatile Files and Directories.
Jan 20 13:16:02 localhost systemd[1]: Finished Setup Virtual Console.
Jan 20 13:16:02 localhost systemd[1]: dracut ask for additional cmdline parameters was skipped because no trigger condition checks were met.
Jan 20 13:16:02 localhost systemd[1]: Starting dracut cmdline hook...
Jan 20 13:16:02 localhost dracut-cmdline[323]: dracut-9 dracut-057-102.git20250818.el9
Jan 20 13:16:02 localhost dracut-cmdline[323]: Using kernel command line parameters:    BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-661.el9.x86_64 root=UUID=22ac9141-3960-4912-b20e-19fc8a328d40 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Jan 20 13:16:02 localhost systemd[1]: Finished dracut cmdline hook.
Jan 20 13:16:02 localhost systemd[1]: Starting dracut pre-udev hook...
Jan 20 13:16:02 localhost kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 20 13:16:02 localhost kernel: device-mapper: uevent: version 1.0.3
Jan 20 13:16:02 localhost kernel: device-mapper: ioctl: 4.50.0-ioctl (2025-04-28) initialised: dm-devel@lists.linux.dev
Jan 20 13:16:02 localhost kernel: RPC: Registered named UNIX socket transport module.
Jan 20 13:16:02 localhost kernel: RPC: Registered udp transport module.
Jan 20 13:16:02 localhost kernel: RPC: Registered tcp transport module.
Jan 20 13:16:02 localhost kernel: RPC: Registered tcp-with-tls transport module.
Jan 20 13:16:02 localhost kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Jan 20 13:16:02 localhost rpc.statd[441]: Version 2.5.4 starting
Jan 20 13:16:02 localhost rpc.statd[441]: Initializing NSM state
Jan 20 13:16:02 localhost rpc.idmapd[446]: Setting log level to 0
Jan 20 13:16:02 localhost systemd[1]: Finished dracut pre-udev hook.
Jan 20 13:16:02 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Jan 20 13:16:02 localhost systemd-udevd[459]: Using default interface naming scheme 'rhel-9.0'.
Jan 20 13:16:02 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Jan 20 13:16:02 localhost systemd[1]: Starting dracut pre-trigger hook...
Jan 20 13:16:02 localhost systemd[1]: Finished dracut pre-trigger hook.
Jan 20 13:16:02 localhost systemd[1]: Starting Coldplug All udev Devices...
Jan 20 13:16:02 localhost systemd[1]: Created slice Slice /system/modprobe.
Jan 20 13:16:02 localhost systemd[1]: Starting Load Kernel Module configfs...
Jan 20 13:16:02 localhost systemd[1]: Finished Coldplug All udev Devices.
Jan 20 13:16:02 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 20 13:16:02 localhost systemd[1]: Finished Load Kernel Module configfs.
Jan 20 13:16:02 localhost systemd[1]: nm-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Jan 20 13:16:02 localhost systemd[1]: Reached target Network.
Jan 20 13:16:02 localhost systemd[1]: nm-wait-online-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Jan 20 13:16:02 localhost systemd[1]: Starting dracut initqueue hook...
Jan 20 13:16:03 localhost kernel: virtio_blk virtio2: 8/0/0 default/read/poll queues
Jan 20 13:16:03 localhost kernel: virtio_blk virtio2: [vda] 167772160 512-byte logical blocks (85.9 GB/80.0 GiB)
Jan 20 13:16:03 localhost kernel:  vda: vda1
Jan 20 13:16:03 localhost kernel: libata version 3.00 loaded.
Jan 20 13:16:03 localhost kernel: ata_piix 0000:00:01.1: version 2.13
Jan 20 13:16:03 localhost systemd-udevd[479]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 13:16:03 localhost kernel: scsi host0: ata_piix
Jan 20 13:16:03 localhost kernel: scsi host1: ata_piix
Jan 20 13:16:03 localhost kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc140 irq 14 lpm-pol 0
Jan 20 13:16:03 localhost kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc148 irq 15 lpm-pol 0
Jan 20 13:16:03 localhost systemd[1]: Mounting Kernel Configuration File System...
Jan 20 13:16:03 localhost systemd[1]: Mounted Kernel Configuration File System.
Jan 20 13:16:03 localhost systemd[1]: Found device /dev/disk/by-uuid/22ac9141-3960-4912-b20e-19fc8a328d40.
Jan 20 13:16:03 localhost systemd[1]: Reached target Initrd Root Device.
Jan 20 13:16:03 localhost systemd[1]: Reached target System Initialization.
Jan 20 13:16:03 localhost systemd[1]: Reached target Basic System.
Jan 20 13:16:03 localhost kernel: ata1: found unknown device (class 0)
Jan 20 13:16:03 localhost kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Jan 20 13:16:03 localhost kernel: scsi 0:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Jan 20 13:16:03 localhost kernel: scsi 0:0:0:0: Attached scsi generic sg0 type 5
Jan 20 13:16:03 localhost kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Jan 20 13:16:03 localhost kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 20 13:16:03 localhost kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0
Jan 20 13:16:03 localhost systemd[1]: Finished dracut initqueue hook.
Jan 20 13:16:03 localhost systemd[1]: Reached target Preparation for Remote File Systems.
Jan 20 13:16:03 localhost systemd[1]: Reached target Remote Encrypted Volumes.
Jan 20 13:16:03 localhost systemd[1]: Reached target Remote File Systems.
Jan 20 13:16:03 localhost systemd[1]: Starting dracut pre-mount hook...
Jan 20 13:16:03 localhost systemd[1]: Finished dracut pre-mount hook.
Jan 20 13:16:03 localhost systemd[1]: Starting File System Check on /dev/disk/by-uuid/22ac9141-3960-4912-b20e-19fc8a328d40...
Jan 20 13:16:03 localhost systemd-fsck[553]: /usr/sbin/fsck.xfs: XFS file system.
Jan 20 13:16:03 localhost systemd[1]: Finished File System Check on /dev/disk/by-uuid/22ac9141-3960-4912-b20e-19fc8a328d40.
Jan 20 13:16:03 localhost systemd[1]: Mounting /sysroot...
Jan 20 13:16:03 localhost kernel: SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
Jan 20 13:16:03 localhost kernel: XFS (vda1): Mounting V5 Filesystem 22ac9141-3960-4912-b20e-19fc8a328d40
Jan 20 13:16:03 localhost kernel: XFS (vda1): Ending clean mount
Jan 20 13:16:03 localhost systemd[1]: Mounted /sysroot.
Jan 20 13:16:03 localhost systemd[1]: Reached target Initrd Root File System.
Jan 20 13:16:03 localhost systemd[1]: Starting Mountpoints Configured in the Real Root...
Jan 20 13:16:03 localhost systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 20 13:16:03 localhost systemd[1]: Finished Mountpoints Configured in the Real Root.
Jan 20 13:16:03 localhost systemd[1]: Reached target Initrd File Systems.
Jan 20 13:16:03 localhost systemd[1]: Reached target Initrd Default Target.
Jan 20 13:16:03 localhost systemd[1]: Starting dracut mount hook...
Jan 20 13:16:04 localhost systemd[1]: Finished dracut mount hook.
Jan 20 13:16:04 localhost systemd[1]: Starting dracut pre-pivot and cleanup hook...
Jan 20 13:16:04 localhost rpc.idmapd[446]: exiting on signal 15
Jan 20 13:16:04 localhost systemd[1]: var-lib-nfs-rpc_pipefs.mount: Deactivated successfully.
Jan 20 13:16:04 localhost systemd[1]: Finished dracut pre-pivot and cleanup hook.
Jan 20 13:16:04 localhost systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Jan 20 13:16:04 localhost systemd[1]: Stopped target Network.
Jan 20 13:16:04 localhost systemd[1]: Stopped target Remote Encrypted Volumes.
Jan 20 13:16:04 localhost systemd[1]: Stopped target Timer Units.
Jan 20 13:16:04 localhost systemd[1]: dbus.socket: Deactivated successfully.
Jan 20 13:16:04 localhost systemd[1]: Closed D-Bus System Message Bus Socket.
Jan 20 13:16:04 localhost systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 20 13:16:04 localhost systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Jan 20 13:16:04 localhost systemd[1]: Stopped target Initrd Default Target.
Jan 20 13:16:04 localhost systemd[1]: Stopped target Basic System.
Jan 20 13:16:04 localhost systemd[1]: Stopped target Initrd Root Device.
Jan 20 13:16:04 localhost systemd[1]: Stopped target Initrd /usr File System.
Jan 20 13:16:04 localhost systemd[1]: Stopped target Path Units.
Jan 20 13:16:04 localhost systemd[1]: Stopped target Remote File Systems.
Jan 20 13:16:04 localhost systemd[1]: Stopped target Preparation for Remote File Systems.
Jan 20 13:16:04 localhost systemd[1]: Stopped target Slice Units.
Jan 20 13:16:04 localhost systemd[1]: Stopped target Socket Units.
Jan 20 13:16:04 localhost systemd[1]: Stopped target System Initialization.
Jan 20 13:16:04 localhost systemd[1]: Stopped target Local File Systems.
Jan 20 13:16:04 localhost systemd[1]: Stopped target Swaps.
Jan 20 13:16:04 localhost systemd[1]: dracut-mount.service: Deactivated successfully.
Jan 20 13:16:04 localhost systemd[1]: Stopped dracut mount hook.
Jan 20 13:16:04 localhost systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 20 13:16:04 localhost systemd[1]: Stopped dracut pre-mount hook.
Jan 20 13:16:04 localhost systemd[1]: Stopped target Local Encrypted Volumes.
Jan 20 13:16:04 localhost systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 20 13:16:04 localhost systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Jan 20 13:16:04 localhost systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 20 13:16:04 localhost systemd[1]: Stopped dracut initqueue hook.
Jan 20 13:16:04 localhost systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 20 13:16:04 localhost systemd[1]: Stopped Apply Kernel Variables.
Jan 20 13:16:04 localhost systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 20 13:16:04 localhost systemd[1]: Stopped Create Volatile Files and Directories.
Jan 20 13:16:04 localhost systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 20 13:16:04 localhost systemd[1]: Stopped Coldplug All udev Devices.
Jan 20 13:16:04 localhost systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 20 13:16:04 localhost systemd[1]: Stopped dracut pre-trigger hook.
Jan 20 13:16:04 localhost systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Jan 20 13:16:04 localhost systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 20 13:16:04 localhost systemd[1]: Stopped Setup Virtual Console.
Jan 20 13:16:04 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Jan 20 13:16:04 localhost systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jan 20 13:16:04 localhost systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 20 13:16:04 localhost systemd[1]: Finished Cleaning Up and Shutting Down Daemons.
Jan 20 13:16:04 localhost systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 20 13:16:04 localhost systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Jan 20 13:16:04 localhost systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 20 13:16:04 localhost systemd[1]: Closed udev Control Socket.
Jan 20 13:16:04 localhost systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 20 13:16:04 localhost systemd[1]: Closed udev Kernel Socket.
Jan 20 13:16:04 localhost systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 20 13:16:04 localhost systemd[1]: Stopped dracut pre-udev hook.
Jan 20 13:16:04 localhost systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 20 13:16:04 localhost systemd[1]: Stopped dracut cmdline hook.
Jan 20 13:16:04 localhost systemd[1]: Starting Cleanup udev Database...
Jan 20 13:16:04 localhost systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 20 13:16:04 localhost systemd[1]: Stopped Create Static Device Nodes in /dev.
Jan 20 13:16:04 localhost systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 20 13:16:04 localhost systemd[1]: Stopped Create List of Static Device Nodes.
Jan 20 13:16:04 localhost systemd[1]: systemd-sysusers.service: Deactivated successfully.
Jan 20 13:16:04 localhost systemd[1]: Stopped Create System Users.
Jan 20 13:16:04 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Jan 20 13:16:04 localhost systemd[1]: run-credentials-systemd\x2dsysusers.service.mount: Deactivated successfully.
Jan 20 13:16:04 localhost systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 20 13:16:04 localhost systemd[1]: Finished Cleanup udev Database.
Jan 20 13:16:04 localhost systemd[1]: Reached target Switch Root.
Jan 20 13:16:04 localhost systemd[1]: Starting Switch Root...
Jan 20 13:16:04 localhost systemd[1]: Switching root.
Jan 20 13:16:04 localhost systemd-journald[302]: Journal stopped
Jan 20 13:16:04 localhost systemd-journald[302]: Received SIGTERM from PID 1 (systemd).
Jan 20 13:16:04 localhost kernel: audit: type=1404 audit(1768914964.358:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295 enabled=1 old-enabled=1 lsm=selinux res=1
Jan 20 13:16:04 localhost kernel: SELinux:  policy capability network_peer_controls=1
Jan 20 13:16:04 localhost kernel: SELinux:  policy capability open_perms=1
Jan 20 13:16:04 localhost kernel: SELinux:  policy capability extended_socket_class=1
Jan 20 13:16:04 localhost kernel: SELinux:  policy capability always_check_network=0
Jan 20 13:16:04 localhost kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 20 13:16:04 localhost kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 20 13:16:04 localhost kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 20 13:16:04 localhost kernel: audit: type=1403 audit(1768914964.491:3): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 20 13:16:04 localhost systemd[1]: Successfully loaded SELinux policy in 136.739ms.
Jan 20 13:16:04 localhost systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 27.750ms.
Jan 20 13:16:04 localhost systemd[1]: systemd 252-64.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Jan 20 13:16:04 localhost systemd[1]: Detected virtualization kvm.
Jan 20 13:16:04 localhost systemd[1]: Detected architecture x86-64.
Jan 20 13:16:04 localhost systemd-rc-local-generator[634]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 13:16:04 localhost systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 20 13:16:04 localhost systemd[1]: Stopped Switch Root.
Jan 20 13:16:04 localhost systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 20 13:16:04 localhost systemd[1]: Created slice Slice /system/getty.
Jan 20 13:16:04 localhost systemd[1]: Created slice Slice /system/serial-getty.
Jan 20 13:16:04 localhost systemd[1]: Created slice Slice /system/sshd-keygen.
Jan 20 13:16:04 localhost systemd[1]: Created slice User and Session Slice.
Jan 20 13:16:04 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Jan 20 13:16:04 localhost systemd[1]: Started Forward Password Requests to Wall Directory Watch.
Jan 20 13:16:04 localhost systemd[1]: Set up automount Arbitrary Executable File Formats File System Automount Point.
Jan 20 13:16:04 localhost systemd[1]: Reached target Local Encrypted Volumes.
Jan 20 13:16:04 localhost systemd[1]: Stopped target Switch Root.
Jan 20 13:16:04 localhost systemd[1]: Stopped target Initrd File Systems.
Jan 20 13:16:04 localhost systemd[1]: Stopped target Initrd Root File System.
Jan 20 13:16:04 localhost systemd[1]: Reached target Local Integrity Protected Volumes.
Jan 20 13:16:04 localhost systemd[1]: Reached target Path Units.
Jan 20 13:16:04 localhost systemd[1]: Reached target rpc_pipefs.target.
Jan 20 13:16:04 localhost systemd[1]: Reached target Slice Units.
Jan 20 13:16:04 localhost systemd[1]: Reached target Swaps.
Jan 20 13:16:04 localhost systemd[1]: Reached target Local Verity Protected Volumes.
Jan 20 13:16:04 localhost systemd[1]: Listening on RPCbind Server Activation Socket.
Jan 20 13:16:04 localhost systemd[1]: Reached target RPC Port Mapper.
Jan 20 13:16:04 localhost systemd[1]: Listening on Process Core Dump Socket.
Jan 20 13:16:04 localhost systemd[1]: Listening on initctl Compatibility Named Pipe.
Jan 20 13:16:04 localhost systemd[1]: Listening on udev Control Socket.
Jan 20 13:16:04 localhost systemd[1]: Listening on udev Kernel Socket.
Jan 20 13:16:04 localhost systemd[1]: Mounting Huge Pages File System...
Jan 20 13:16:04 localhost systemd[1]: Mounting POSIX Message Queue File System...
Jan 20 13:16:04 localhost systemd[1]: Mounting Kernel Debug File System...
Jan 20 13:16:04 localhost systemd[1]: Mounting Kernel Trace File System...
Jan 20 13:16:04 localhost systemd[1]: Kernel Module supporting RPCSEC_GSS was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Jan 20 13:16:04 localhost systemd[1]: Starting Create List of Static Device Nodes...
Jan 20 13:16:04 localhost systemd[1]: Starting Load Kernel Module configfs...
Jan 20 13:16:04 localhost systemd[1]: Starting Load Kernel Module drm...
Jan 20 13:16:04 localhost systemd[1]: Starting Load Kernel Module efi_pstore...
Jan 20 13:16:04 localhost systemd[1]: Starting Load Kernel Module fuse...
Jan 20 13:16:04 localhost systemd[1]: Starting Read and set NIS domainname from /etc/sysconfig/network...
Jan 20 13:16:04 localhost systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 20 13:16:04 localhost systemd[1]: Stopped File System Check on Root Device.
Jan 20 13:16:04 localhost systemd[1]: Stopped Journal Service.
Jan 20 13:16:04 localhost systemd[1]: Starting Journal Service...
Jan 20 13:16:04 localhost systemd[1]: Load Kernel Modules was skipped because no trigger condition checks were met.
Jan 20 13:16:04 localhost systemd[1]: Starting Generate network units from Kernel command line...
Jan 20 13:16:04 localhost systemd[1]: TPM2 PCR Machine ID Measurement was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jan 20 13:16:04 localhost systemd[1]: Starting Remount Root and Kernel File Systems...
Jan 20 13:16:04 localhost systemd[1]: Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 20 13:16:04 localhost systemd[1]: Starting Apply Kernel Variables...
Jan 20 13:16:04 localhost kernel: fuse: init (API version 7.37)
Jan 20 13:16:04 localhost systemd[1]: Starting Coldplug All udev Devices...
Jan 20 13:16:04 localhost kernel: xfs filesystem being remounted at / supports timestamps until 2038 (0x7fffffff)
Jan 20 13:16:04 localhost systemd[1]: Mounted Huge Pages File System.
Jan 20 13:16:04 localhost systemd[1]: Mounted POSIX Message Queue File System.
Jan 20 13:16:04 localhost systemd[1]: Mounted Kernel Debug File System.
Jan 20 13:16:04 localhost systemd-journald[675]: Journal started
Jan 20 13:16:04 localhost systemd-journald[675]: Runtime Journal (/run/log/journal/85ac68c10a6e7ae08ceb898dbdca0cb5) is 8.0M, max 153.6M, 145.6M free.
Jan 20 13:16:04 localhost systemd[1]: Queued start job for default target Multi-User System.
Jan 20 13:16:04 localhost systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 20 13:16:04 localhost systemd[1]: Started Journal Service.
Jan 20 13:16:04 localhost systemd[1]: Mounted Kernel Trace File System.
Jan 20 13:16:04 localhost systemd[1]: Finished Create List of Static Device Nodes.
Jan 20 13:16:04 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 20 13:16:04 localhost systemd[1]: Finished Load Kernel Module configfs.
Jan 20 13:16:04 localhost systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 20 13:16:04 localhost systemd[1]: Finished Load Kernel Module efi_pstore.
Jan 20 13:16:04 localhost systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 20 13:16:04 localhost systemd[1]: Finished Load Kernel Module fuse.
Jan 20 13:16:04 localhost systemd[1]: Finished Read and set NIS domainname from /etc/sysconfig/network.
Jan 20 13:16:04 localhost systemd[1]: Finished Generate network units from Kernel command line.
Jan 20 13:16:04 localhost systemd[1]: Finished Remount Root and Kernel File Systems.
Jan 20 13:16:04 localhost systemd[1]: Finished Apply Kernel Variables.
Jan 20 13:16:05 localhost systemd[1]: Mounting FUSE Control File System...
Jan 20 13:16:05 localhost systemd[1]: First Boot Wizard was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Jan 20 13:16:05 localhost systemd[1]: Starting Rebuild Hardware Database...
Jan 20 13:16:05 localhost systemd[1]: Starting Flush Journal to Persistent Storage...
Jan 20 13:16:05 localhost systemd[1]: Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 20 13:16:05 localhost systemd[1]: Starting Load/Save OS Random Seed...
Jan 20 13:16:05 localhost systemd[1]: Starting Create System Users...
Jan 20 13:16:05 localhost systemd[1]: Mounted FUSE Control File System.
Jan 20 13:16:05 localhost systemd-journald[675]: Runtime Journal (/run/log/journal/85ac68c10a6e7ae08ceb898dbdca0cb5) is 8.0M, max 153.6M, 145.6M free.
Jan 20 13:16:05 localhost systemd-journald[675]: Received client request to flush runtime journal.
Jan 20 13:16:05 localhost systemd[1]: Finished Flush Journal to Persistent Storage.
Jan 20 13:16:05 localhost systemd[1]: Finished Load/Save OS Random Seed.
Jan 20 13:16:05 localhost systemd[1]: First Boot Complete was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Jan 20 13:16:05 localhost kernel: ACPI: bus type drm_connector registered
Jan 20 13:16:05 localhost systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 20 13:16:05 localhost systemd[1]: Finished Load Kernel Module drm.
Jan 20 13:16:05 localhost systemd[1]: Finished Create System Users.
Jan 20 13:16:05 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Jan 20 13:16:05 localhost systemd[1]: Finished Coldplug All udev Devices.
Jan 20 13:16:05 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Jan 20 13:16:05 localhost systemd[1]: Reached target Preparation for Local File Systems.
Jan 20 13:16:05 localhost systemd[1]: Reached target Local File Systems.
Jan 20 13:16:05 localhost systemd[1]: Starting Rebuild Dynamic Linker Cache...
Jan 20 13:16:05 localhost systemd[1]: Mark the need to relabel after reboot was skipped because of an unmet condition check (ConditionSecurity=!selinux).
Jan 20 13:16:05 localhost systemd[1]: Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 20 13:16:05 localhost systemd[1]: Update Boot Loader Random Seed was skipped because no trigger condition checks were met.
Jan 20 13:16:05 localhost systemd[1]: Starting Automatic Boot Loader Update...
Jan 20 13:16:05 localhost systemd[1]: Commit a transient machine-id on disk was skipped because of an unmet condition check (ConditionPathIsMountPoint=/etc/machine-id).
Jan 20 13:16:05 localhost systemd[1]: Starting Create Volatile Files and Directories...
Jan 20 13:16:05 localhost bootctl[695]: Couldn't find EFI system partition, skipping.
Jan 20 13:16:05 localhost systemd[1]: Finished Automatic Boot Loader Update.
Jan 20 13:16:05 localhost systemd[1]: Finished Create Volatile Files and Directories.
Jan 20 13:16:05 localhost systemd[1]: Starting Security Auditing Service...
Jan 20 13:16:05 localhost systemd[1]: Starting RPC Bind...
Jan 20 13:16:05 localhost systemd[1]: Starting Rebuild Journal Catalog...
Jan 20 13:16:05 localhost systemd[1]: Finished Rebuild Dynamic Linker Cache.
Jan 20 13:16:05 localhost auditd[701]: audit dispatcher initialized with q_depth=2000 and 1 active plugins
Jan 20 13:16:05 localhost auditd[701]: Init complete, auditd 3.1.5 listening for events (startup state enable)
Jan 20 13:16:05 localhost systemd[1]: Started RPC Bind.
Jan 20 13:16:05 localhost systemd[1]: Finished Rebuild Journal Catalog.
Jan 20 13:16:05 localhost augenrules[706]: /sbin/augenrules: No change
Jan 20 13:16:05 localhost augenrules[721]: No rules
Jan 20 13:16:05 localhost augenrules[721]: enabled 1
Jan 20 13:16:05 localhost augenrules[721]: failure 1
Jan 20 13:16:05 localhost augenrules[721]: pid 701
Jan 20 13:16:05 localhost augenrules[721]: rate_limit 0
Jan 20 13:16:05 localhost augenrules[721]: backlog_limit 8192
Jan 20 13:16:05 localhost augenrules[721]: lost 0
Jan 20 13:16:05 localhost augenrules[721]: backlog 0
Jan 20 13:16:05 localhost augenrules[721]: backlog_wait_time 60000
Jan 20 13:16:05 localhost augenrules[721]: backlog_wait_time_actual 0
Jan 20 13:16:05 localhost augenrules[721]: enabled 1
Jan 20 13:16:05 localhost augenrules[721]: failure 1
Jan 20 13:16:05 localhost augenrules[721]: pid 701
Jan 20 13:16:05 localhost augenrules[721]: rate_limit 0
Jan 20 13:16:05 localhost augenrules[721]: backlog_limit 8192
Jan 20 13:16:05 localhost augenrules[721]: lost 0
Jan 20 13:16:05 localhost augenrules[721]: backlog 4
Jan 20 13:16:05 localhost augenrules[721]: backlog_wait_time 60000
Jan 20 13:16:05 localhost augenrules[721]: backlog_wait_time_actual 0
Jan 20 13:16:05 localhost augenrules[721]: enabled 1
Jan 20 13:16:05 localhost augenrules[721]: failure 1
Jan 20 13:16:05 localhost augenrules[721]: pid 701
Jan 20 13:16:05 localhost augenrules[721]: rate_limit 0
Jan 20 13:16:05 localhost augenrules[721]: backlog_limit 8192
Jan 20 13:16:05 localhost augenrules[721]: lost 0
Jan 20 13:16:05 localhost augenrules[721]: backlog 0
Jan 20 13:16:05 localhost augenrules[721]: backlog_wait_time 60000
Jan 20 13:16:05 localhost augenrules[721]: backlog_wait_time_actual 0
Jan 20 13:16:05 localhost systemd[1]: Started Security Auditing Service.
Jan 20 13:16:05 localhost systemd[1]: Starting Record System Boot/Shutdown in UTMP...
Jan 20 13:16:05 localhost systemd[1]: Finished Record System Boot/Shutdown in UTMP.
Jan 20 13:16:05 localhost systemd[1]: Finished Rebuild Hardware Database.
Jan 20 13:16:05 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Jan 20 13:16:05 localhost systemd[1]: Starting Update is Completed...
Jan 20 13:16:05 localhost systemd[1]: Finished Update is Completed.
Jan 20 13:16:05 localhost systemd-udevd[729]: Using default interface naming scheme 'rhel-9.0'.
Jan 20 13:16:05 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Jan 20 13:16:05 localhost systemd[1]: Reached target System Initialization.
Jan 20 13:16:05 localhost systemd[1]: Started dnf makecache --timer.
Jan 20 13:16:05 localhost systemd[1]: Started Daily rotation of log files.
Jan 20 13:16:05 localhost systemd[1]: Started Daily Cleanup of Temporary Directories.
Jan 20 13:16:05 localhost systemd[1]: Reached target Timer Units.
Jan 20 13:16:05 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Jan 20 13:16:05 localhost systemd[1]: Listening on SSSD Kerberos Cache Manager responder socket.
Jan 20 13:16:05 localhost systemd[1]: Reached target Socket Units.
Jan 20 13:16:05 localhost systemd[1]: Starting D-Bus System Message Bus...
Jan 20 13:16:05 localhost systemd[1]: TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jan 20 13:16:05 localhost systemd[1]: Condition check resulted in /dev/ttyS0 being skipped.
Jan 20 13:16:05 localhost systemd[1]: Starting Load Kernel Module configfs...
Jan 20 13:16:05 localhost systemd-udevd[741]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 13:16:05 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 20 13:16:05 localhost systemd[1]: Finished Load Kernel Module configfs.
Jan 20 13:16:05 localhost systemd[1]: Started D-Bus System Message Bus.
Jan 20 13:16:05 localhost systemd[1]: Reached target Basic System.
Jan 20 13:16:05 localhost dbus-broker-lau[761]: Ready
Jan 20 13:16:05 localhost systemd[1]: Starting NTP client/server...
Jan 20 13:16:05 localhost systemd[1]: Starting Cloud-init: Local Stage (pre-network)...
Jan 20 13:16:05 localhost systemd[1]: Starting Restore /run/initramfs on shutdown...
Jan 20 13:16:05 localhost systemd[1]: Starting IPv4 firewall with iptables...
Jan 20 13:16:05 localhost systemd[1]: Started irqbalance daemon.
Jan 20 13:16:05 localhost systemd[1]: Load CPU microcode update was skipped because of an unmet condition check (ConditionPathExists=/sys/devices/system/cpu/microcode/reload).
Jan 20 13:16:05 localhost systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 20 13:16:05 localhost systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 20 13:16:05 localhost systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 20 13:16:05 localhost systemd[1]: Reached target sshd-keygen.target.
Jan 20 13:16:05 localhost kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Jan 20 13:16:05 localhost kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Jan 20 13:16:05 localhost kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Jan 20 13:16:05 localhost systemd[1]: System Security Services Daemon was skipped because no trigger condition checks were met.
Jan 20 13:16:05 localhost systemd[1]: Reached target User and Group Name Lookups.
Jan 20 13:16:05 localhost kernel: input: PC Speaker as /devices/platform/pcspkr/input/input6
Jan 20 13:16:05 localhost systemd[1]: Starting User Login Management...
Jan 20 13:16:05 localhost systemd[1]: Finished Restore /run/initramfs on shutdown.
Jan 20 13:16:05 localhost chronyd[796]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Jan 20 13:16:05 localhost chronyd[796]: Loaded 0 symmetric keys
Jan 20 13:16:05 localhost chronyd[796]: Using right/UTC timezone to obtain leap second data
Jan 20 13:16:05 localhost chronyd[796]: Loaded seccomp filter (level 2)
Jan 20 13:16:05 localhost systemd[1]: Started NTP client/server.
Jan 20 13:16:05 localhost kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Jan 20 13:16:05 localhost kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Jan 20 13:16:05 localhost systemd-logind[783]: Watching system buttons on /dev/input/event0 (Power Button)
Jan 20 13:16:05 localhost systemd-logind[783]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Jan 20 13:16:05 localhost kernel: Console: switching to colour dummy device 80x25
Jan 20 13:16:05 localhost kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Jan 20 13:16:05 localhost kernel: [drm] features: -context_init
Jan 20 13:16:05 localhost systemd-logind[783]: New seat seat0.
Jan 20 13:16:05 localhost systemd[1]: Started User Login Management.
Jan 20 13:16:05 localhost kernel: [drm] number of scanouts: 1
Jan 20 13:16:05 localhost kernel: [drm] number of cap sets: 0
Jan 20 13:16:05 localhost kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0
Jan 20 13:16:05 localhost kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Jan 20 13:16:05 localhost kernel: Console: switching to colour frame buffer device 128x48
Jan 20 13:16:05 localhost kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Jan 20 13:16:05 localhost kernel: Warning: Deprecated Driver is detected: nft_compat will not be maintained in a future major release and may be disabled
Jan 20 13:16:05 localhost kernel: Warning: Deprecated Driver is detected: nft_compat_module_init will not be maintained in a future major release and may be disabled
Jan 20 13:16:05 localhost kernel: kvm_amd: TSC scaling supported
Jan 20 13:16:05 localhost kernel: kvm_amd: Nested Virtualization enabled
Jan 20 13:16:05 localhost kernel: kvm_amd: Nested Paging enabled
Jan 20 13:16:05 localhost kernel: kvm_amd: LBR virtualization supported
Jan 20 13:16:06 localhost iptables.init[777]: iptables: Applying firewall rules: [  OK  ]
Jan 20 13:16:06 localhost systemd[1]: Finished IPv4 firewall with iptables.
Jan 20 13:16:06 localhost cloud-init[838]: Cloud-init v. 24.4-8.el9 running 'init-local' at Tue, 20 Jan 2026 13:16:06 +0000. Up 6.36 seconds.
Jan 20 13:16:06 localhost kernel: ISO 9660 Extensions: Microsoft Joliet Level 3
Jan 20 13:16:06 localhost kernel: ISO 9660 Extensions: RRIP_1991A
Jan 20 13:16:06 localhost systemd[1]: run-cloud\x2dinit-tmp-tmpqx1mrguw.mount: Deactivated successfully.
Jan 20 13:16:06 localhost systemd[1]: Starting Hostname Service...
Jan 20 13:16:06 localhost systemd[1]: Started Hostname Service.
Jan 20 13:16:06 np0005588919.novalocal systemd-hostnamed[852]: Hostname set to <np0005588919.novalocal> (static)
Jan 20 13:16:06 np0005588919.novalocal systemd[1]: Finished Cloud-init: Local Stage (pre-network).
Jan 20 13:16:06 np0005588919.novalocal systemd[1]: Reached target Preparation for Network.
Jan 20 13:16:06 np0005588919.novalocal systemd[1]: Starting Network Manager...
Jan 20 13:16:07 np0005588919.novalocal NetworkManager[856]: <info>  [1768914967.0272] NetworkManager (version 1.54.3-2.el9) is starting... (boot:017a3b90-38ab-4863-8e66-991c4844fcc7)
Jan 20 13:16:07 np0005588919.novalocal NetworkManager[856]: <info>  [1768914967.0278] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Jan 20 13:16:07 np0005588919.novalocal NetworkManager[856]: <info>  [1768914967.0354] manager[0x55ec69009000]: monitoring kernel firmware directory '/lib/firmware'.
Jan 20 13:16:07 np0005588919.novalocal NetworkManager[856]: <info>  [1768914967.0391] hostname: hostname: using hostnamed
Jan 20 13:16:07 np0005588919.novalocal NetworkManager[856]: <info>  [1768914967.0392] hostname: static hostname changed from (none) to "np0005588919.novalocal"
Jan 20 13:16:07 np0005588919.novalocal NetworkManager[856]: <info>  [1768914967.0396] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Jan 20 13:16:07 np0005588919.novalocal NetworkManager[856]: <info>  [1768914967.0496] manager[0x55ec69009000]: rfkill: Wi-Fi hardware radio set enabled
Jan 20 13:16:07 np0005588919.novalocal NetworkManager[856]: <info>  [1768914967.0497] manager[0x55ec69009000]: rfkill: WWAN hardware radio set enabled
Jan 20 13:16:07 np0005588919.novalocal systemd[1]: Listening on Load/Save RF Kill Switch Status /dev/rfkill Watch.
Jan 20 13:16:07 np0005588919.novalocal NetworkManager[856]: <info>  [1768914967.0546] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-team.so)
Jan 20 13:16:07 np0005588919.novalocal NetworkManager[856]: <info>  [1768914967.0548] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Jan 20 13:16:07 np0005588919.novalocal NetworkManager[856]: <info>  [1768914967.0549] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Jan 20 13:16:07 np0005588919.novalocal NetworkManager[856]: <info>  [1768914967.0550] manager: Networking is enabled by state file
Jan 20 13:16:07 np0005588919.novalocal NetworkManager[856]: <info>  [1768914967.0552] settings: Loaded settings plugin: keyfile (internal)
Jan 20 13:16:07 np0005588919.novalocal NetworkManager[856]: <info>  [1768914967.0567] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-settings-plugin-ifcfg-rh.so")
Jan 20 13:16:07 np0005588919.novalocal NetworkManager[856]: <info>  [1768914967.0595] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Jan 20 13:16:07 np0005588919.novalocal NetworkManager[856]: <info>  [1768914967.0608] dhcp: init: Using DHCP client 'internal'
Jan 20 13:16:07 np0005588919.novalocal NetworkManager[856]: <info>  [1768914967.0611] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Jan 20 13:16:07 np0005588919.novalocal NetworkManager[856]: <info>  [1768914967.0628] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 20 13:16:07 np0005588919.novalocal NetworkManager[856]: <info>  [1768914967.0638] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Jan 20 13:16:07 np0005588919.novalocal NetworkManager[856]: <info>  [1768914967.0650] device (lo): Activation: starting connection 'lo' (f1b29bda-3a6a-4be0-8c9c-6df9359cf4c4)
Jan 20 13:16:07 np0005588919.novalocal NetworkManager[856]: <info>  [1768914967.0661] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Jan 20 13:16:07 np0005588919.novalocal NetworkManager[856]: <info>  [1768914967.0666] device (eth0): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 20 13:16:07 np0005588919.novalocal NetworkManager[856]: <info>  [1768914967.0724] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Jan 20 13:16:07 np0005588919.novalocal NetworkManager[856]: <info>  [1768914967.0730] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Jan 20 13:16:07 np0005588919.novalocal NetworkManager[856]: <info>  [1768914967.0735] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Jan 20 13:16:07 np0005588919.novalocal NetworkManager[856]: <info>  [1768914967.0739] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Jan 20 13:16:07 np0005588919.novalocal NetworkManager[856]: <info>  [1768914967.0743] device (eth0): carrier: link connected
Jan 20 13:16:07 np0005588919.novalocal NetworkManager[856]: <info>  [1768914967.0749] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Jan 20 13:16:07 np0005588919.novalocal NetworkManager[856]: <info>  [1768914967.0759] device (eth0): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Jan 20 13:16:07 np0005588919.novalocal NetworkManager[856]: <info>  [1768914967.0767] policy: auto-activating connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Jan 20 13:16:07 np0005588919.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 20 13:16:07 np0005588919.novalocal NetworkManager[856]: <info>  [1768914967.0776] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Jan 20 13:16:07 np0005588919.novalocal NetworkManager[856]: <info>  [1768914967.0777] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 20 13:16:07 np0005588919.novalocal NetworkManager[856]: <info>  [1768914967.0782] manager: NetworkManager state is now CONNECTING
Jan 20 13:16:07 np0005588919.novalocal NetworkManager[856]: <info>  [1768914967.0785] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 20 13:16:07 np0005588919.novalocal NetworkManager[856]: <info>  [1768914967.0794] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 20 13:16:07 np0005588919.novalocal systemd[1]: Started Network Manager.
Jan 20 13:16:07 np0005588919.novalocal NetworkManager[856]: <info>  [1768914967.0802] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 20 13:16:07 np0005588919.novalocal systemd[1]: Reached target Network.
Jan 20 13:16:07 np0005588919.novalocal systemd[1]: Starting Network Manager Wait Online...
Jan 20 13:16:07 np0005588919.novalocal NetworkManager[856]: <info>  [1768914967.0848] dhcp4 (eth0): state changed new lease, address=38.102.83.169
Jan 20 13:16:07 np0005588919.novalocal NetworkManager[856]: <info>  [1768914967.0858] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Jan 20 13:16:07 np0005588919.novalocal systemd[1]: Starting GSSAPI Proxy Daemon...
Jan 20 13:16:07 np0005588919.novalocal NetworkManager[856]: <info>  [1768914967.0881] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 20 13:16:07 np0005588919.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 20 13:16:07 np0005588919.novalocal NetworkManager[856]: <info>  [1768914967.0996] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Jan 20 13:16:07 np0005588919.novalocal NetworkManager[856]: <info>  [1768914967.0998] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 20 13:16:07 np0005588919.novalocal NetworkManager[856]: <info>  [1768914967.1003] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Jan 20 13:16:07 np0005588919.novalocal NetworkManager[856]: <info>  [1768914967.1013] device (lo): Activation: successful, device activated.
Jan 20 13:16:07 np0005588919.novalocal NetworkManager[856]: <info>  [1768914967.1021] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 20 13:16:07 np0005588919.novalocal NetworkManager[856]: <info>  [1768914967.1027] manager: NetworkManager state is now CONNECTED_SITE
Jan 20 13:16:07 np0005588919.novalocal NetworkManager[856]: <info>  [1768914967.1031] device (eth0): Activation: successful, device activated.
Jan 20 13:16:07 np0005588919.novalocal NetworkManager[856]: <info>  [1768914967.1039] manager: NetworkManager state is now CONNECTED_GLOBAL
Jan 20 13:16:07 np0005588919.novalocal NetworkManager[856]: <info>  [1768914967.1046] manager: startup complete
Jan 20 13:16:07 np0005588919.novalocal systemd[1]: Started GSSAPI Proxy Daemon.
Jan 20 13:16:07 np0005588919.novalocal systemd[1]: RPC security service for NFS client and server was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Jan 20 13:16:07 np0005588919.novalocal systemd[1]: Reached target NFS client services.
Jan 20 13:16:07 np0005588919.novalocal systemd[1]: Reached target Preparation for Remote File Systems.
Jan 20 13:16:07 np0005588919.novalocal systemd[1]: Reached target Remote File Systems.
Jan 20 13:16:07 np0005588919.novalocal systemd[1]: TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jan 20 13:16:07 np0005588919.novalocal systemd[1]: Finished Network Manager Wait Online.
Jan 20 13:16:07 np0005588919.novalocal systemd[1]: Starting Cloud-init: Network Stage...
Jan 20 13:16:07 np0005588919.novalocal cloud-init[916]: Cloud-init v. 24.4-8.el9 running 'init' at Tue, 20 Jan 2026 13:16:07 +0000. Up 7.54 seconds.
Jan 20 13:16:07 np0005588919.novalocal cloud-init[916]: ci-info: +++++++++++++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++++++++++++
Jan 20 13:16:07 np0005588919.novalocal cloud-init[916]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Jan 20 13:16:07 np0005588919.novalocal cloud-init[916]: ci-info: | Device |  Up  |           Address            |      Mask     | Scope  |     Hw-Address    |
Jan 20 13:16:07 np0005588919.novalocal cloud-init[916]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Jan 20 13:16:07 np0005588919.novalocal cloud-init[916]: ci-info: |  eth0  | True |        38.102.83.169         | 255.255.255.0 | global | fa:16:3e:b5:2a:39 |
Jan 20 13:16:07 np0005588919.novalocal cloud-init[916]: ci-info: |  eth0  | True | fe80::f816:3eff:feb5:2a39/64 |       .       |  link  | fa:16:3e:b5:2a:39 |
Jan 20 13:16:07 np0005588919.novalocal cloud-init[916]: ci-info: |   lo   | True |          127.0.0.1           |   255.0.0.0   |  host  |         .         |
Jan 20 13:16:07 np0005588919.novalocal cloud-init[916]: ci-info: |   lo   | True |           ::1/128            |       .       |  host  |         .         |
Jan 20 13:16:07 np0005588919.novalocal cloud-init[916]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Jan 20 13:16:07 np0005588919.novalocal cloud-init[916]: ci-info: +++++++++++++++++++++++++++++++++Route IPv4 info+++++++++++++++++++++++++++++++++
Jan 20 13:16:07 np0005588919.novalocal cloud-init[916]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Jan 20 13:16:07 np0005588919.novalocal cloud-init[916]: ci-info: | Route |   Destination   |    Gateway    |     Genmask     | Interface | Flags |
Jan 20 13:16:07 np0005588919.novalocal cloud-init[916]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Jan 20 13:16:07 np0005588919.novalocal cloud-init[916]: ci-info: |   0   |     0.0.0.0     |  38.102.83.1  |     0.0.0.0     |    eth0   |   UG  |
Jan 20 13:16:07 np0005588919.novalocal cloud-init[916]: ci-info: |   1   |   38.102.83.0   |    0.0.0.0    |  255.255.255.0  |    eth0   |   U   |
Jan 20 13:16:07 np0005588919.novalocal cloud-init[916]: ci-info: |   2   | 169.254.169.254 | 38.102.83.126 | 255.255.255.255 |    eth0   |  UGH  |
Jan 20 13:16:07 np0005588919.novalocal cloud-init[916]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Jan 20 13:16:07 np0005588919.novalocal cloud-init[916]: ci-info: +++++++++++++++++++Route IPv6 info+++++++++++++++++++
Jan 20 13:16:07 np0005588919.novalocal cloud-init[916]: ci-info: +-------+-------------+---------+-----------+-------+
Jan 20 13:16:07 np0005588919.novalocal cloud-init[916]: ci-info: | Route | Destination | Gateway | Interface | Flags |
Jan 20 13:16:07 np0005588919.novalocal cloud-init[916]: ci-info: +-------+-------------+---------+-----------+-------+
Jan 20 13:16:07 np0005588919.novalocal cloud-init[916]: ci-info: |   1   |  fe80::/64  |    ::   |    eth0   |   U   |
Jan 20 13:16:07 np0005588919.novalocal cloud-init[916]: ci-info: |   3   |  multicast  |    ::   |    eth0   |   U   |
Jan 20 13:16:07 np0005588919.novalocal cloud-init[916]: ci-info: +-------+-------------+---------+-----------+-------+
Jan 20 13:16:10 np0005588919.novalocal useradd[985]: new group: name=cloud-user, GID=1001
Jan 20 13:16:10 np0005588919.novalocal useradd[985]: new user: name=cloud-user, UID=1001, GID=1001, home=/home/cloud-user, shell=/bin/bash, from=none
Jan 20 13:16:10 np0005588919.novalocal useradd[985]: add 'cloud-user' to group 'adm'
Jan 20 13:16:10 np0005588919.novalocal useradd[985]: add 'cloud-user' to group 'systemd-journal'
Jan 20 13:16:10 np0005588919.novalocal useradd[985]: add 'cloud-user' to shadow group 'adm'
Jan 20 13:16:10 np0005588919.novalocal useradd[985]: add 'cloud-user' to shadow group 'systemd-journal'
Jan 20 13:16:11 np0005588919.novalocal cloud-init[916]: Generating public/private rsa key pair.
Jan 20 13:16:11 np0005588919.novalocal cloud-init[916]: Your identification has been saved in /etc/ssh/ssh_host_rsa_key
Jan 20 13:16:11 np0005588919.novalocal cloud-init[916]: Your public key has been saved in /etc/ssh/ssh_host_rsa_key.pub
Jan 20 13:16:11 np0005588919.novalocal cloud-init[916]: The key fingerprint is:
Jan 20 13:16:11 np0005588919.novalocal cloud-init[916]: SHA256:oqb1FipA3QDYdW0c9PWThFmXe/VtUdIEP0u5rZRnI7M root@np0005588919.novalocal
Jan 20 13:16:11 np0005588919.novalocal cloud-init[916]: The key's randomart image is:
Jan 20 13:16:11 np0005588919.novalocal cloud-init[916]: +---[RSA 3072]----+
Jan 20 13:16:11 np0005588919.novalocal cloud-init[916]: |.o... .+o.  .+=+*|
Jan 20 13:16:11 np0005588919.novalocal cloud-init[916]: |. .. .  +. .oo B+|
Jan 20 13:16:11 np0005588919.novalocal cloud-init[916]: |  . o  .  .   ++B|
Jan 20 13:16:11 np0005588919.novalocal cloud-init[916]: | . . .        .+X|
Jan 20 13:16:11 np0005588919.novalocal cloud-init[916]: |.     . S    oo==|
Jan 20 13:16:11 np0005588919.novalocal cloud-init[916]: |.    ...     .++.|
Jan 20 13:16:11 np0005588919.novalocal cloud-init[916]: | .  +. .     E.  |
Jan 20 13:16:11 np0005588919.novalocal cloud-init[916]: |  .+...          |
Jan 20 13:16:11 np0005588919.novalocal cloud-init[916]: |  .. ..          |
Jan 20 13:16:11 np0005588919.novalocal cloud-init[916]: +----[SHA256]-----+
Jan 20 13:16:11 np0005588919.novalocal cloud-init[916]: Generating public/private ecdsa key pair.
Jan 20 13:16:11 np0005588919.novalocal cloud-init[916]: Your identification has been saved in /etc/ssh/ssh_host_ecdsa_key
Jan 20 13:16:11 np0005588919.novalocal cloud-init[916]: Your public key has been saved in /etc/ssh/ssh_host_ecdsa_key.pub
Jan 20 13:16:11 np0005588919.novalocal cloud-init[916]: The key fingerprint is:
Jan 20 13:16:11 np0005588919.novalocal cloud-init[916]: SHA256:bF791DUbc28TlJd8ya+nF4sItgSmR/HZqSG3bZ7vYoI root@np0005588919.novalocal
Jan 20 13:16:11 np0005588919.novalocal cloud-init[916]: The key's randomart image is:
Jan 20 13:16:11 np0005588919.novalocal cloud-init[916]: +---[ECDSA 256]---+
Jan 20 13:16:11 np0005588919.novalocal cloud-init[916]: |              o.+|
Jan 20 13:16:11 np0005588919.novalocal cloud-init[916]: |       .      .*o|
Jan 20 13:16:11 np0005588919.novalocal cloud-init[916]: |        o o .  =*|
Jan 20 13:16:11 np0005588919.novalocal cloud-init[916]: |       * = +   .@|
Jan 20 13:16:11 np0005588919.novalocal cloud-init[916]: |      + S * . .++|
Jan 20 13:16:11 np0005588919.novalocal cloud-init[916]: |     . + B o o.oo|
Jan 20 13:16:11 np0005588919.novalocal cloud-init[916]: |      . = = o ooo|
Jan 20 13:16:11 np0005588919.novalocal cloud-init[916]: |       E o * ....|
Jan 20 13:16:11 np0005588919.novalocal cloud-init[916]: |          o +o . |
Jan 20 13:16:11 np0005588919.novalocal cloud-init[916]: +----[SHA256]-----+
Jan 20 13:16:11 np0005588919.novalocal cloud-init[916]: Generating public/private ed25519 key pair.
Jan 20 13:16:11 np0005588919.novalocal cloud-init[916]: Your identification has been saved in /etc/ssh/ssh_host_ed25519_key
Jan 20 13:16:11 np0005588919.novalocal cloud-init[916]: Your public key has been saved in /etc/ssh/ssh_host_ed25519_key.pub
Jan 20 13:16:11 np0005588919.novalocal cloud-init[916]: The key fingerprint is:
Jan 20 13:16:11 np0005588919.novalocal cloud-init[916]: SHA256:Fs+VCqBtNhnhalplMMEUB65JjqLKHDp0zW5usKnNHDA root@np0005588919.novalocal
Jan 20 13:16:11 np0005588919.novalocal cloud-init[916]: The key's randomart image is:
Jan 20 13:16:11 np0005588919.novalocal cloud-init[916]: +--[ED25519 256]--+
Jan 20 13:16:11 np0005588919.novalocal cloud-init[916]: |   oB+=.         |
Jan 20 13:16:11 np0005588919.novalocal cloud-init[916]: |   ..B +     .   |
Jan 20 13:16:11 np0005588919.novalocal cloud-init[916]: |  . o X o   o    |
Jan 20 13:16:11 np0005588919.novalocal cloud-init[916]: | + o * . = o     |
Jan 20 13:16:11 np0005588919.novalocal cloud-init[916]: |oE+ *   S +      |
Jan 20 13:16:11 np0005588919.novalocal cloud-init[916]: |o.o* o .         |
Jan 20 13:16:11 np0005588919.novalocal cloud-init[916]: |o.o.=            |
Jan 20 13:16:11 np0005588919.novalocal cloud-init[916]: |* =o.+           |
Jan 20 13:16:11 np0005588919.novalocal cloud-init[916]: |o=.++.           |
Jan 20 13:16:11 np0005588919.novalocal cloud-init[916]: +----[SHA256]-----+
Jan 20 13:16:11 np0005588919.novalocal systemd[1]: Finished Cloud-init: Network Stage.
Jan 20 13:16:11 np0005588919.novalocal systemd[1]: Reached target Cloud-config availability.
Jan 20 13:16:11 np0005588919.novalocal systemd[1]: Reached target Network is Online.
Jan 20 13:16:11 np0005588919.novalocal systemd[1]: Starting Cloud-init: Config Stage...
Jan 20 13:16:11 np0005588919.novalocal systemd[1]: Starting Crash recovery kernel arming...
Jan 20 13:16:11 np0005588919.novalocal systemd[1]: Starting Notify NFS peers of a restart...
Jan 20 13:16:11 np0005588919.novalocal systemd[1]: Starting System Logging Service...
Jan 20 13:16:11 np0005588919.novalocal sm-notify[1001]: Version 2.5.4 starting
Jan 20 13:16:11 np0005588919.novalocal systemd[1]: Starting OpenSSH server daemon...
Jan 20 13:16:11 np0005588919.novalocal systemd[1]: Starting Permit User Sessions...
Jan 20 13:16:11 np0005588919.novalocal systemd[1]: Started Notify NFS peers of a restart.
Jan 20 13:16:11 np0005588919.novalocal sshd[1003]: Server listening on 0.0.0.0 port 22.
Jan 20 13:16:11 np0005588919.novalocal sshd[1003]: Server listening on :: port 22.
Jan 20 13:16:11 np0005588919.novalocal systemd[1]: Started OpenSSH server daemon.
Jan 20 13:16:11 np0005588919.novalocal systemd[1]: Finished Permit User Sessions.
Jan 20 13:16:11 np0005588919.novalocal systemd[1]: Started Command Scheduler.
Jan 20 13:16:11 np0005588919.novalocal systemd[1]: Started Getty on tty1.
Jan 20 13:16:11 np0005588919.novalocal systemd[1]: Started Serial Getty on ttyS0.
Jan 20 13:16:11 np0005588919.novalocal systemd[1]: Reached target Login Prompts.
Jan 20 13:16:11 np0005588919.novalocal crond[1006]: (CRON) STARTUP (1.5.7)
Jan 20 13:16:11 np0005588919.novalocal crond[1006]: (CRON) INFO (Syslog will be used instead of sendmail.)
Jan 20 13:16:11 np0005588919.novalocal crond[1006]: (CRON) INFO (RANDOM_DELAY will be scaled with factor 45% if used.)
Jan 20 13:16:11 np0005588919.novalocal crond[1006]: (CRON) INFO (running with inotify support)
Jan 20 13:16:11 np0005588919.novalocal rsyslogd[1002]: [origin software="rsyslogd" swVersion="8.2510.0-2.el9" x-pid="1002" x-info="https://www.rsyslog.com"] start
Jan 20 13:16:11 np0005588919.novalocal systemd[1]: Started System Logging Service.
Jan 20 13:16:11 np0005588919.novalocal rsyslogd[1002]: imjournal: No statefile exists, /var/lib/rsyslog/imjournal.state will be created (ignore if this is first run): No such file or directory [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2040 ]
Jan 20 13:16:11 np0005588919.novalocal systemd[1]: Reached target Multi-User System.
Jan 20 13:16:11 np0005588919.novalocal systemd[1]: Starting Record Runlevel Change in UTMP...
Jan 20 13:16:11 np0005588919.novalocal systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Jan 20 13:16:11 np0005588919.novalocal systemd[1]: Finished Record Runlevel Change in UTMP.
Jan 20 13:16:11 np0005588919.novalocal rsyslogd[1002]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 20 13:16:11 np0005588919.novalocal kdumpctl[1015]: kdump: No kdump initial ramdisk found.
Jan 20 13:16:11 np0005588919.novalocal kdumpctl[1015]: kdump: Rebuilding /boot/initramfs-5.14.0-661.el9.x86_64kdump.img
Jan 20 13:16:12 np0005588919.novalocal cloud-init[1126]: Cloud-init v. 24.4-8.el9 running 'modules:config' at Tue, 20 Jan 2026 13:16:11 +0000. Up 12.06 seconds.
Jan 20 13:16:12 np0005588919.novalocal systemd[1]: Finished Cloud-init: Config Stage.
Jan 20 13:16:12 np0005588919.novalocal systemd[1]: Starting Cloud-init: Final Stage...
Jan 20 13:16:12 np0005588919.novalocal dracut[1262]: dracut-057-102.git20250818.el9
Jan 20 13:16:12 np0005588919.novalocal sshd-session[1282]: Unable to negotiate with 38.102.83.114 port 45860: no matching host key type found. Their offer: ssh-ed25519,ssh-ed25519-cert-v01@openssh.com [preauth]
Jan 20 13:16:12 np0005588919.novalocal cloud-init[1284]: Cloud-init v. 24.4-8.el9 running 'modules:final' at Tue, 20 Jan 2026 13:16:12 +0000. Up 12.49 seconds.
Jan 20 13:16:12 np0005588919.novalocal sshd-session[1287]: Unable to negotiate with 38.102.83.114 port 45876: no matching host key type found. Their offer: ecdsa-sha2-nistp384,ecdsa-sha2-nistp384-cert-v01@openssh.com [preauth]
Jan 20 13:16:12 np0005588919.novalocal sshd-session[1289]: Unable to negotiate with 38.102.83.114 port 45892: no matching host key type found. Their offer: ecdsa-sha2-nistp521,ecdsa-sha2-nistp521-cert-v01@openssh.com [preauth]
Jan 20 13:16:12 np0005588919.novalocal cloud-init[1298]: #############################################################
Jan 20 13:16:12 np0005588919.novalocal cloud-init[1301]: -----BEGIN SSH HOST KEY FINGERPRINTS-----
Jan 20 13:16:12 np0005588919.novalocal dracut[1264]: Executing: /usr/bin/dracut --quiet --hostonly --hostonly-cmdline --hostonly-i18n --hostonly-mode strict --hostonly-nics  --mount "/dev/disk/by-uuid/22ac9141-3960-4912-b20e-19fc8a328d40 /sysroot xfs rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota" --squash-compressor zstd --no-hostonly-default-device --add-confdir /lib/kdump/dracut.conf.d -f /boot/initramfs-5.14.0-661.el9.x86_64kdump.img 5.14.0-661.el9.x86_64
Jan 20 13:16:12 np0005588919.novalocal cloud-init[1309]: 256 SHA256:bF791DUbc28TlJd8ya+nF4sItgSmR/HZqSG3bZ7vYoI root@np0005588919.novalocal (ECDSA)
Jan 20 13:16:12 np0005588919.novalocal sshd-session[1280]: Connection closed by 38.102.83.114 port 45850 [preauth]
Jan 20 13:16:12 np0005588919.novalocal cloud-init[1316]: 256 SHA256:Fs+VCqBtNhnhalplMMEUB65JjqLKHDp0zW5usKnNHDA root@np0005588919.novalocal (ED25519)
Jan 20 13:16:12 np0005588919.novalocal cloud-init[1323]: 3072 SHA256:oqb1FipA3QDYdW0c9PWThFmXe/VtUdIEP0u5rZRnI7M root@np0005588919.novalocal (RSA)
Jan 20 13:16:12 np0005588919.novalocal cloud-init[1325]: -----END SSH HOST KEY FINGERPRINTS-----
Jan 20 13:16:12 np0005588919.novalocal cloud-init[1327]: #############################################################
Jan 20 13:16:12 np0005588919.novalocal sshd-session[1285]: Connection closed by 38.102.83.114 port 45872 [preauth]
Jan 20 13:16:12 np0005588919.novalocal sshd-session[1342]: Unable to negotiate with 38.102.83.114 port 45934: no matching host key type found. Their offer: ssh-rsa,ssh-rsa-cert-v01@openssh.com [preauth]
Jan 20 13:16:12 np0005588919.novalocal cloud-init[1284]: Cloud-init v. 24.4-8.el9 finished at Tue, 20 Jan 2026 13:16:12 +0000. Datasource DataSourceConfigDrive [net,ver=2][source=/dev/sr0].  Up 12.66 seconds
Jan 20 13:16:12 np0005588919.novalocal sshd-session[1356]: Unable to negotiate with 38.102.83.114 port 45940: no matching host key type found. Their offer: ssh-dss,ssh-dss-cert-v01@openssh.com [preauth]
Jan 20 13:16:12 np0005588919.novalocal sshd-session[1292]: Connection closed by 38.102.83.114 port 45908 [preauth]
Jan 20 13:16:12 np0005588919.novalocal sshd-session[1311]: Connection closed by 38.102.83.114 port 45918 [preauth]
Jan 20 13:16:12 np0005588919.novalocal chronyd[796]: Selected source 149.56.19.163 (2.centos.pool.ntp.org)
Jan 20 13:16:12 np0005588919.novalocal chronyd[796]: System clock wrong by 1.122361 seconds
Jan 20 13:16:13 np0005588919.novalocal chronyd[796]: System clock was stepped by 1.122361 seconds
Jan 20 13:16:13 np0005588919.novalocal chronyd[796]: System clock TAI offset set to 37 seconds
Jan 20 13:16:13 np0005588919.novalocal systemd[1]: Finished Cloud-init: Final Stage.
Jan 20 13:16:13 np0005588919.novalocal systemd[1]: Reached target Cloud-init target.
Jan 20 13:16:14 np0005588919.novalocal dracut[1264]: dracut module 'systemd-networkd' will not be installed, because command 'networkctl' could not be found!
Jan 20 13:16:14 np0005588919.novalocal dracut[1264]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd' could not be found!
Jan 20 13:16:14 np0005588919.novalocal dracut[1264]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd-wait-online' could not be found!
Jan 20 13:16:14 np0005588919.novalocal dracut[1264]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Jan 20 13:16:14 np0005588919.novalocal dracut[1264]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Jan 20 13:16:14 np0005588919.novalocal dracut[1264]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Jan 20 13:16:14 np0005588919.novalocal dracut[1264]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Jan 20 13:16:14 np0005588919.novalocal dracut[1264]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Jan 20 13:16:14 np0005588919.novalocal dracut[1264]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Jan 20 13:16:14 np0005588919.novalocal dracut[1264]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Jan 20 13:16:14 np0005588919.novalocal dracut[1264]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Jan 20 13:16:14 np0005588919.novalocal dracut[1264]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Jan 20 13:16:14 np0005588919.novalocal dracut[1264]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Jan 20 13:16:14 np0005588919.novalocal dracut[1264]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Jan 20 13:16:14 np0005588919.novalocal dracut[1264]: Module 'ifcfg' will not be installed, because it's in the list to be omitted!
Jan 20 13:16:14 np0005588919.novalocal dracut[1264]: Module 'plymouth' will not be installed, because it's in the list to be omitted!
Jan 20 13:16:14 np0005588919.novalocal dracut[1264]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Jan 20 13:16:14 np0005588919.novalocal dracut[1264]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Jan 20 13:16:14 np0005588919.novalocal dracut[1264]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Jan 20 13:16:14 np0005588919.novalocal dracut[1264]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Jan 20 13:16:14 np0005588919.novalocal dracut[1264]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Jan 20 13:16:14 np0005588919.novalocal dracut[1264]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Jan 20 13:16:14 np0005588919.novalocal dracut[1264]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Jan 20 13:16:14 np0005588919.novalocal dracut[1264]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Jan 20 13:16:14 np0005588919.novalocal dracut[1264]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Jan 20 13:16:14 np0005588919.novalocal dracut[1264]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Jan 20 13:16:14 np0005588919.novalocal dracut[1264]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Jan 20 13:16:14 np0005588919.novalocal dracut[1264]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Jan 20 13:16:14 np0005588919.novalocal dracut[1264]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Jan 20 13:16:14 np0005588919.novalocal dracut[1264]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Jan 20 13:16:14 np0005588919.novalocal dracut[1264]: Module 'resume' will not be installed, because it's in the list to be omitted!
Jan 20 13:16:14 np0005588919.novalocal dracut[1264]: dracut module 'biosdevname' will not be installed, because command 'biosdevname' could not be found!
Jan 20 13:16:14 np0005588919.novalocal dracut[1264]: Module 'earlykdump' will not be installed, because it's in the list to be omitted!
Jan 20 13:16:14 np0005588919.novalocal dracut[1264]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Jan 20 13:16:14 np0005588919.novalocal dracut[1264]: memstrack is not available
Jan 20 13:16:14 np0005588919.novalocal dracut[1264]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Jan 20 13:16:14 np0005588919.novalocal dracut[1264]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Jan 20 13:16:14 np0005588919.novalocal dracut[1264]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Jan 20 13:16:14 np0005588919.novalocal dracut[1264]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Jan 20 13:16:14 np0005588919.novalocal dracut[1264]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Jan 20 13:16:14 np0005588919.novalocal dracut[1264]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Jan 20 13:16:14 np0005588919.novalocal dracut[1264]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Jan 20 13:16:14 np0005588919.novalocal dracut[1264]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Jan 20 13:16:14 np0005588919.novalocal dracut[1264]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Jan 20 13:16:14 np0005588919.novalocal dracut[1264]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Jan 20 13:16:14 np0005588919.novalocal dracut[1264]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Jan 20 13:16:14 np0005588919.novalocal dracut[1264]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Jan 20 13:16:14 np0005588919.novalocal dracut[1264]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Jan 20 13:16:14 np0005588919.novalocal dracut[1264]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Jan 20 13:16:14 np0005588919.novalocal dracut[1264]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Jan 20 13:16:14 np0005588919.novalocal dracut[1264]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Jan 20 13:16:14 np0005588919.novalocal dracut[1264]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Jan 20 13:16:14 np0005588919.novalocal dracut[1264]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Jan 20 13:16:14 np0005588919.novalocal dracut[1264]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Jan 20 13:16:14 np0005588919.novalocal dracut[1264]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Jan 20 13:16:14 np0005588919.novalocal dracut[1264]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Jan 20 13:16:15 np0005588919.novalocal dracut[1264]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Jan 20 13:16:15 np0005588919.novalocal dracut[1264]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Jan 20 13:16:15 np0005588919.novalocal dracut[1264]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Jan 20 13:16:15 np0005588919.novalocal dracut[1264]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Jan 20 13:16:15 np0005588919.novalocal dracut[1264]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Jan 20 13:16:15 np0005588919.novalocal dracut[1264]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Jan 20 13:16:15 np0005588919.novalocal dracut[1264]: memstrack is not available
Jan 20 13:16:15 np0005588919.novalocal dracut[1264]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Jan 20 13:16:15 np0005588919.novalocal dracut[1264]: *** Including module: systemd ***
Jan 20 13:16:15 np0005588919.novalocal dracut[1264]: *** Including module: fips ***
Jan 20 13:16:15 np0005588919.novalocal dracut[1264]: *** Including module: systemd-initrd ***
Jan 20 13:16:15 np0005588919.novalocal dracut[1264]: *** Including module: i18n ***
Jan 20 13:16:16 np0005588919.novalocal dracut[1264]: *** Including module: drm ***
Jan 20 13:16:16 np0005588919.novalocal dracut[1264]: *** Including module: prefixdevname ***
Jan 20 13:16:16 np0005588919.novalocal dracut[1264]: *** Including module: kernel-modules ***
Jan 20 13:16:16 np0005588919.novalocal kernel: block vda: the capability attribute has been deprecated.
Jan 20 13:16:17 np0005588919.novalocal irqbalance[778]: Cannot change IRQ 25 affinity: Operation not permitted
Jan 20 13:16:17 np0005588919.novalocal irqbalance[778]: IRQ 25 affinity is now unmanaged
Jan 20 13:16:17 np0005588919.novalocal irqbalance[778]: Cannot change IRQ 31 affinity: Operation not permitted
Jan 20 13:16:17 np0005588919.novalocal irqbalance[778]: IRQ 31 affinity is now unmanaged
Jan 20 13:16:17 np0005588919.novalocal irqbalance[778]: Cannot change IRQ 28 affinity: Operation not permitted
Jan 20 13:16:17 np0005588919.novalocal irqbalance[778]: IRQ 28 affinity is now unmanaged
Jan 20 13:16:17 np0005588919.novalocal irqbalance[778]: Cannot change IRQ 32 affinity: Operation not permitted
Jan 20 13:16:17 np0005588919.novalocal irqbalance[778]: IRQ 32 affinity is now unmanaged
Jan 20 13:16:17 np0005588919.novalocal irqbalance[778]: Cannot change IRQ 30 affinity: Operation not permitted
Jan 20 13:16:17 np0005588919.novalocal irqbalance[778]: IRQ 30 affinity is now unmanaged
Jan 20 13:16:17 np0005588919.novalocal irqbalance[778]: Cannot change IRQ 29 affinity: Operation not permitted
Jan 20 13:16:17 np0005588919.novalocal irqbalance[778]: IRQ 29 affinity is now unmanaged
Jan 20 13:16:17 np0005588919.novalocal dracut[1264]: *** Including module: kernel-modules-extra ***
Jan 20 13:16:17 np0005588919.novalocal dracut[1264]:   kernel-modules-extra: configuration source "/run/depmod.d" does not exist
Jan 20 13:16:17 np0005588919.novalocal dracut[1264]:   kernel-modules-extra: configuration source "/lib/depmod.d" does not exist
Jan 20 13:16:17 np0005588919.novalocal dracut[1264]:   kernel-modules-extra: parsing configuration file "/etc/depmod.d/dist.conf"
Jan 20 13:16:17 np0005588919.novalocal dracut[1264]:   kernel-modules-extra: /etc/depmod.d/dist.conf: added "updates extra built-in weak-updates" to the list of search directories
Jan 20 13:16:17 np0005588919.novalocal dracut[1264]: *** Including module: qemu ***
Jan 20 13:16:17 np0005588919.novalocal dracut[1264]: *** Including module: fstab-sys ***
Jan 20 13:16:17 np0005588919.novalocal dracut[1264]: *** Including module: rootfs-block ***
Jan 20 13:16:17 np0005588919.novalocal dracut[1264]: *** Including module: terminfo ***
Jan 20 13:16:17 np0005588919.novalocal dracut[1264]: *** Including module: udev-rules ***
Jan 20 13:16:18 np0005588919.novalocal dracut[1264]: Skipping udev rule: 91-permissions.rules
Jan 20 13:16:18 np0005588919.novalocal dracut[1264]: Skipping udev rule: 80-drivers-modprobe.rules
Jan 20 13:16:18 np0005588919.novalocal dracut[1264]: *** Including module: virtiofs ***
Jan 20 13:16:18 np0005588919.novalocal dracut[1264]: *** Including module: dracut-systemd ***
Jan 20 13:16:18 np0005588919.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 20 13:16:18 np0005588919.novalocal dracut[1264]: *** Including module: usrmount ***
Jan 20 13:16:18 np0005588919.novalocal dracut[1264]: *** Including module: base ***
Jan 20 13:16:18 np0005588919.novalocal dracut[1264]: *** Including module: fs-lib ***
Jan 20 13:16:18 np0005588919.novalocal dracut[1264]: *** Including module: kdumpbase ***
Jan 20 13:16:19 np0005588919.novalocal dracut[1264]: *** Including module: microcode_ctl-fw_dir_override ***
Jan 20 13:16:19 np0005588919.novalocal dracut[1264]:   microcode_ctl module: mangling fw_dir
Jan 20 13:16:19 np0005588919.novalocal dracut[1264]:     microcode_ctl: reset fw_dir to "/lib/firmware/updates /lib/firmware"
Jan 20 13:16:19 np0005588919.novalocal dracut[1264]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel"...
Jan 20 13:16:19 np0005588919.novalocal dracut[1264]:     microcode_ctl: configuration "intel" is ignored
Jan 20 13:16:19 np0005588919.novalocal dracut[1264]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-2d-07"...
Jan 20 13:16:19 np0005588919.novalocal dracut[1264]:     microcode_ctl: configuration "intel-06-2d-07" is ignored
Jan 20 13:16:19 np0005588919.novalocal dracut[1264]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4e-03"...
Jan 20 13:16:19 np0005588919.novalocal dracut[1264]:     microcode_ctl: configuration "intel-06-4e-03" is ignored
Jan 20 13:16:19 np0005588919.novalocal dracut[1264]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4f-01"...
Jan 20 13:16:19 np0005588919.novalocal dracut[1264]:     microcode_ctl: configuration "intel-06-4f-01" is ignored
Jan 20 13:16:19 np0005588919.novalocal dracut[1264]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-55-04"...
Jan 20 13:16:19 np0005588919.novalocal dracut[1264]:     microcode_ctl: configuration "intel-06-55-04" is ignored
Jan 20 13:16:19 np0005588919.novalocal dracut[1264]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-5e-03"...
Jan 20 13:16:19 np0005588919.novalocal dracut[1264]:     microcode_ctl: configuration "intel-06-5e-03" is ignored
Jan 20 13:16:19 np0005588919.novalocal dracut[1264]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8c-01"...
Jan 20 13:16:19 np0005588919.novalocal dracut[1264]:     microcode_ctl: configuration "intel-06-8c-01" is ignored
Jan 20 13:16:19 np0005588919.novalocal dracut[1264]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-0xca"...
Jan 20 13:16:19 np0005588919.novalocal dracut[1264]:     microcode_ctl: configuration "intel-06-8e-9e-0x-0xca" is ignored
Jan 20 13:16:19 np0005588919.novalocal dracut[1264]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-dell"...
Jan 20 13:16:19 np0005588919.novalocal dracut[1264]:     microcode_ctl: configuration "intel-06-8e-9e-0x-dell" is ignored
Jan 20 13:16:19 np0005588919.novalocal dracut[1264]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8f-08"...
Jan 20 13:16:19 np0005588919.novalocal dracut[1264]:     microcode_ctl: configuration "intel-06-8f-08" is ignored
Jan 20 13:16:19 np0005588919.novalocal dracut[1264]:     microcode_ctl: final fw_dir: "/lib/firmware/updates /lib/firmware"
Jan 20 13:16:19 np0005588919.novalocal dracut[1264]: *** Including module: openssl ***
Jan 20 13:16:19 np0005588919.novalocal dracut[1264]: *** Including module: shutdown ***
Jan 20 13:16:19 np0005588919.novalocal dracut[1264]: *** Including module: squash ***
Jan 20 13:16:19 np0005588919.novalocal dracut[1264]: *** Including modules done ***
Jan 20 13:16:19 np0005588919.novalocal dracut[1264]: *** Installing kernel module dependencies ***
Jan 20 13:16:20 np0005588919.novalocal dracut[1264]: *** Installing kernel module dependencies done ***
Jan 20 13:16:20 np0005588919.novalocal dracut[1264]: *** Resolving executable dependencies ***
Jan 20 13:16:21 np0005588919.novalocal dracut[1264]: *** Resolving executable dependencies done ***
Jan 20 13:16:22 np0005588919.novalocal dracut[1264]: *** Generating early-microcode cpio image ***
Jan 20 13:16:22 np0005588919.novalocal dracut[1264]: *** Store current command line parameters ***
Jan 20 13:16:22 np0005588919.novalocal dracut[1264]: Stored kernel commandline:
Jan 20 13:16:22 np0005588919.novalocal dracut[1264]: No dracut internal kernel commandline stored in the initramfs
Jan 20 13:16:22 np0005588919.novalocal dracut[1264]: *** Install squash loader ***
Jan 20 13:16:23 np0005588919.novalocal dracut[1264]: *** Squashing the files inside the initramfs ***
Jan 20 13:16:24 np0005588919.novalocal dracut[1264]: *** Squashing the files inside the initramfs done ***
Jan 20 13:16:24 np0005588919.novalocal dracut[1264]: *** Creating image file '/boot/initramfs-5.14.0-661.el9.x86_64kdump.img' ***
Jan 20 13:16:24 np0005588919.novalocal dracut[1264]: *** Hardlinking files ***
Jan 20 13:16:24 np0005588919.novalocal dracut[1264]: Mode:           real
Jan 20 13:16:24 np0005588919.novalocal dracut[1264]: Files:          50
Jan 20 13:16:24 np0005588919.novalocal dracut[1264]: Linked:         0 files
Jan 20 13:16:24 np0005588919.novalocal dracut[1264]: Compared:       0 xattrs
Jan 20 13:16:24 np0005588919.novalocal dracut[1264]: Compared:       0 files
Jan 20 13:16:24 np0005588919.novalocal dracut[1264]: Saved:          0 B
Jan 20 13:16:24 np0005588919.novalocal dracut[1264]: Duration:       0.000840 seconds
Jan 20 13:16:24 np0005588919.novalocal dracut[1264]: *** Hardlinking files done ***
Jan 20 13:16:24 np0005588919.novalocal dracut[1264]: *** Creating initramfs image file '/boot/initramfs-5.14.0-661.el9.x86_64kdump.img' done ***
Jan 20 13:16:25 np0005588919.novalocal kdumpctl[1015]: kdump: kexec: loaded kdump kernel
Jan 20 13:16:25 np0005588919.novalocal kdumpctl[1015]: kdump: Starting kdump: [OK]
Jan 20 13:16:25 np0005588919.novalocal systemd[1]: Finished Crash recovery kernel arming.
Jan 20 13:16:25 np0005588919.novalocal systemd[1]: Startup finished in 1.990s (kernel) + 2.471s (initrd) + 20.223s (userspace) = 24.686s.
Jan 20 13:16:35 np0005588919.novalocal sshd-session[4298]: Accepted publickey for zuul from 38.102.83.114 port 53988 ssh2: RSA SHA256:zhs3MiW0JhxzckYcMHQES8SMYHj1iGcomnyzmbiwor8
Jan 20 13:16:35 np0005588919.novalocal systemd[1]: Created slice User Slice of UID 1000.
Jan 20 13:16:35 np0005588919.novalocal systemd[1]: Starting User Runtime Directory /run/user/1000...
Jan 20 13:16:35 np0005588919.novalocal systemd-logind[783]: New session 1 of user zuul.
Jan 20 13:16:35 np0005588919.novalocal systemd[1]: Finished User Runtime Directory /run/user/1000.
Jan 20 13:16:35 np0005588919.novalocal systemd[1]: Starting User Manager for UID 1000...
Jan 20 13:16:35 np0005588919.novalocal systemd[4302]: pam_unix(systemd-user:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 20 13:16:35 np0005588919.novalocal systemd[4302]: Queued start job for default target Main User Target.
Jan 20 13:16:35 np0005588919.novalocal systemd[4302]: Created slice User Application Slice.
Jan 20 13:16:35 np0005588919.novalocal systemd[4302]: Started Mark boot as successful after the user session has run 2 minutes.
Jan 20 13:16:35 np0005588919.novalocal systemd[4302]: Started Daily Cleanup of User's Temporary Directories.
Jan 20 13:16:35 np0005588919.novalocal systemd[4302]: Reached target Paths.
Jan 20 13:16:35 np0005588919.novalocal systemd[4302]: Reached target Timers.
Jan 20 13:16:35 np0005588919.novalocal systemd[4302]: Starting D-Bus User Message Bus Socket...
Jan 20 13:16:35 np0005588919.novalocal systemd[4302]: Starting Create User's Volatile Files and Directories...
Jan 20 13:16:35 np0005588919.novalocal systemd[4302]: Listening on D-Bus User Message Bus Socket.
Jan 20 13:16:35 np0005588919.novalocal systemd[4302]: Reached target Sockets.
Jan 20 13:16:35 np0005588919.novalocal systemd[4302]: Finished Create User's Volatile Files and Directories.
Jan 20 13:16:35 np0005588919.novalocal systemd[4302]: Reached target Basic System.
Jan 20 13:16:35 np0005588919.novalocal systemd[4302]: Reached target Main User Target.
Jan 20 13:16:35 np0005588919.novalocal systemd[4302]: Startup finished in 158ms.
Jan 20 13:16:35 np0005588919.novalocal systemd[1]: Started User Manager for UID 1000.
Jan 20 13:16:35 np0005588919.novalocal systemd[1]: Started Session 1 of User zuul.
Jan 20 13:16:35 np0005588919.novalocal sshd-session[4298]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 20 13:16:36 np0005588919.novalocal python3[4384]: ansible-setup Invoked with gather_subset=['!all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 13:16:38 np0005588919.novalocal systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 20 13:16:40 np0005588919.novalocal python3[4414]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 13:16:47 np0005588919.novalocal python3[4472]: ansible-setup Invoked with gather_subset=['network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 13:16:48 np0005588919.novalocal python3[4512]: ansible-zuul_console Invoked with path=/tmp/console-{log_uuid}.log port=19885 state=present
Jan 20 13:16:51 np0005588919.novalocal python3[4538]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC/GVaHfRonG9ohfZBeZsfGsPAY5Ua/gRcfFAsYYpV+pfGGgyLPk7GpRkk4pr+e8jNRtdcfblMAicASH+5mJlHBm4eUbFYKtcwEXZXv6pyuCU3Ecns8qj50vHni0ryqqxTyg09WqOLv2u9xctOgas5b8y8tPl7bs2/uwlGFud/NxTxRMamezw0jUgKB9f6nJj6TiaAzomayQwqBx0/0kk8Cc6o4JsrOc92YyIsAjs+grfO5gO6MLYaAFWaCv28+Yvj3G37RUIAILUpORm4vyFNvxLGV+iIKd8ZYqqV6cczJ2tM7MGlfjYz9lTXL7WHkY2Knel8HDycvHH85Ydujv3gyD8d/m+dy4VHhMoU3HR1Syxx5e1GxOjU6NV7ZtEMjYtqE6zUdCNY1zXUU4uGxxPK7dF2Zzx5ODWpS7ssrJVRsLzDPf1YiIyi/g3OHzO95EzucQchqJsVh3MJI8D/C2CjI432eipKKcQAYY9sD9/mpPwBqI0PKwfSGTpsps60NwhM= zuul-build-sshkey manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 20 13:16:51 np0005588919.novalocal python3[4562]: ansible-file Invoked with state=directory path=/home/zuul/.ssh mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:16:52 np0005588919.novalocal python3[4661]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 20 13:16:52 np0005588919.novalocal python3[4732]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1768915011.712395-252-194271861623093/source dest=/home/zuul/.ssh/id_rsa mode=384 force=False _original_basename=db0b5efc11684c95b5a6c3da9b48c4c5_id_rsa follow=False checksum=3ee7ffdf9f2bde9aa4c9d676d061c45199023a01 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:16:53 np0005588919.novalocal python3[4855]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa.pub follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 20 13:16:53 np0005588919.novalocal python3[4926]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1768915013.2588813-307-146443798583455/source dest=/home/zuul/.ssh/id_rsa.pub mode=420 force=False _original_basename=db0b5efc11684c95b5a6c3da9b48c4c5_id_rsa.pub follow=False checksum=c665db3a39036994c79fbfd6a268cbf34e365958 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:16:55 np0005588919.novalocal python3[4974]: ansible-ping Invoked with data=pong
Jan 20 13:16:56 np0005588919.novalocal python3[4998]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 13:16:58 np0005588919.novalocal python3[5056]: ansible-zuul_debug_info Invoked with ipv4_route_required=False ipv6_route_required=False image_manifest_files=['/etc/dib-builddate.txt', '/etc/image-hostname.txt'] image_manifest=None traceroute_host=None
Jan 20 13:16:59 np0005588919.novalocal python3[5088]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:16:59 np0005588919.novalocal python3[5112]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:17:00 np0005588919.novalocal python3[5136]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:17:00 np0005588919.novalocal python3[5160]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:17:00 np0005588919.novalocal python3[5184]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:17:00 np0005588919.novalocal python3[5208]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:17:02 np0005588919.novalocal sudo[5232]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qggpqwckurrncaruaxyshovvdydjejxe ; /usr/bin/python3'
Jan 20 13:17:02 np0005588919.novalocal sudo[5232]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:17:02 np0005588919.novalocal python3[5234]: ansible-file Invoked with path=/etc/ci state=directory owner=root group=root mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:17:02 np0005588919.novalocal sudo[5232]: pam_unix(sudo:session): session closed for user root
Jan 20 13:17:03 np0005588919.novalocal sudo[5310]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dtotboageicixjupnxivzlowtmgifpli ; /usr/bin/python3'
Jan 20 13:17:03 np0005588919.novalocal sudo[5310]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:17:03 np0005588919.novalocal python3[5312]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/mirror_info.sh follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 20 13:17:03 np0005588919.novalocal sudo[5310]: pam_unix(sudo:session): session closed for user root
Jan 20 13:17:03 np0005588919.novalocal sudo[5383]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-markfsipuvovleiteeaiuzgzkpryhwvn ; /usr/bin/python3'
Jan 20 13:17:03 np0005588919.novalocal sudo[5383]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:17:03 np0005588919.novalocal python3[5385]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/mirror_info.sh owner=root group=root mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1768915022.9187856-33-265290447230422/source follow=False _original_basename=mirror_info.sh.j2 checksum=92d92a03afdddee82732741071f662c729080c35 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:17:03 np0005588919.novalocal sudo[5383]: pam_unix(sudo:session): session closed for user root
Jan 20 13:17:04 np0005588919.novalocal python3[5433]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA4Z/c9osaGGtU6X8fgELwfj/yayRurfcKA0HMFfdpPxev2dbwljysMuzoVp4OZmW1gvGtyYPSNRvnzgsaabPNKNo2ym5NToCP6UM+KSe93aln4BcM/24mXChYAbXJQ5Bqq/pIzsGs/pKetQN+vwvMxLOwTvpcsCJBXaa981RKML6xj9l/UZ7IIq1HSEKMvPLxZMWdu0Ut8DkCd5F4nOw9Wgml2uYpDCj5LLCrQQ9ChdOMz8hz6SighhNlRpPkvPaet3OXxr/ytFMu7j7vv06CaEnuMMiY2aTWN1Imin9eHAylIqFHta/3gFfQSWt9jXM7owkBLKL7ATzhaAn+fjNupw== arxcruz@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 20 13:17:04 np0005588919.novalocal python3[5457]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDS4Fn6k4deCnIlOtLWqZJyksbepjQt04j8Ed8CGx9EKkj0fKiAxiI4TadXQYPuNHMixZy4Nevjb6aDhL5Z906TfvNHKUrjrG7G26a0k8vdc61NEQ7FmcGMWRLwwc6ReDO7lFpzYKBMk4YqfWgBuGU/K6WLKiVW2cVvwIuGIaYrE1OiiX0iVUUk7KApXlDJMXn7qjSYynfO4mF629NIp8FJal38+Kv+HA+0QkE5Y2xXnzD4Lar5+keymiCHRntPppXHeLIRzbt0gxC7v3L72hpQ3BTBEzwHpeS8KY+SX1y5lRMN45thCHfJqGmARJREDjBvWG8JXOPmVIKQtZmVcD5b mandreou@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 20 13:17:05 np0005588919.novalocal python3[5481]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC9MiLfy30deHA7xPOAlew5qUq3UP2gmRMYJi8PtkjFB20/DKeWwWNnkZPqP9AayruRoo51SIiVg870gbZE2jYl+Ncx/FYDe56JeC3ySZsXoAVkC9bP7gkOGqOmJjirvAgPMI7bogVz8i+66Q4Ar7OKTp3762G4IuWPPEg4ce4Y7lx9qWocZapHYq4cYKMxrOZ7SEbFSATBbe2bPZAPKTw8do/Eny+Hq/LkHFhIeyra6cqTFQYShr+zPln0Cr+ro/pDX3bB+1ubFgTpjpkkkQsLhDfR6cCdCWM2lgnS3BTtYj5Ct9/JRPR5YOphqZz+uB+OEu2IL68hmU9vNTth1KeX rlandy@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 20 13:17:05 np0005588919.novalocal python3[5505]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFCbgz8gdERiJlk2IKOtkjQxEXejrio6ZYMJAVJYpOIp raukadah@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 20 13:17:05 np0005588919.novalocal python3[5529]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBqb3Q/9uDf4LmihQ7xeJ9gA/STIQUFPSfyyV0m8AoQi bshewale@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 20 13:17:05 np0005588919.novalocal python3[5553]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0I8QqQx0Az2ysJt2JuffucLijhBqnsXKEIx5GyHwxVULROa8VtNFXUDH6ZKZavhiMcmfHB2+TBTda+lDP4FldYj06dGmzCY+IYGa+uDRdxHNGYjvCfLFcmLlzRK6fNbTcui+KlUFUdKe0fb9CRoGKyhlJD5GRkM1Dv+Yb6Bj+RNnmm1fVGYxzmrD2utvffYEb0SZGWxq2R9gefx1q/3wCGjeqvufEV+AskPhVGc5T7t9eyZ4qmslkLh1/nMuaIBFcr9AUACRajsvk6mXrAN1g3HlBf2gQlhi1UEyfbqIQvzzFtsbLDlSum/KmKjy818GzvWjERfQ0VkGzCd9bSLVL dviroel@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 20 13:17:06 np0005588919.novalocal python3[5577]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLOQd4ZLtkZXQGY6UwAr/06ppWQK4fDO3HaqxPk98csyOCBXsliSKK39Bso828+5srIXiW7aI6aC9P5mwi4mUZlGPfJlQbfrcGvY+b/SocuvaGK+1RrHLoJCT52LBhwgrzlXio2jeksZeein8iaTrhsPrOAs7KggIL/rB9hEiB3NaOPWhhoCP4vlW6MEMExGcqB/1FVxXFBPnLkEyW0Lk7ycVflZl2ocRxbfjZi0+tI1Wlinp8PvSQSc/WVrAcDgKjc/mB4ODPOyYy3G8FHgfMsrXSDEyjBKgLKMsdCrAUcqJQWjkqXleXSYOV4q3pzL+9umK+q/e3P/bIoSFQzmJKTU1eDfuvPXmow9F5H54fii/Da7ezlMJ+wPGHJrRAkmzvMbALy7xwswLhZMkOGNtRcPqaKYRmIBKpw3o6bCTtcNUHOtOQnzwY8JzrM2eBWJBXAANYw+9/ho80JIiwhg29CFNpVBuHbql2YxJQNrnl90guN65rYNpDxdIluweyUf8= anbanerj@kaermorhen manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 20 13:17:06 np0005588919.novalocal python3[5601]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3VwV8Im9kRm49lt3tM36hj4Zv27FxGo4C1Q/0jqhzFmHY7RHbmeRr8ObhwWoHjXSozKWg8FL5ER0z3hTwL0W6lez3sL7hUaCmSuZmG5Hnl3x4vTSxDI9JZ/Y65rtYiiWQo2fC5xJhU/4+0e5e/pseCm8cKRSu+SaxhO+sd6FDojA2x1BzOzKiQRDy/1zWGp/cZkxcEuB1wHI5LMzN03c67vmbu+fhZRAUO4dQkvcnj2LrhQtpa+ytvnSjr8icMDosf1OsbSffwZFyHB/hfWGAfe0eIeSA2XPraxiPknXxiPKx2MJsaUTYbsZcm3EjFdHBBMumw5rBI74zLrMRvCO9GwBEmGT4rFng1nP+yw5DB8sn2zqpOsPg1LYRwCPOUveC13P6pgsZZPh812e8v5EKnETct+5XI3dVpdw6CnNiLwAyVAF15DJvBGT/u1k0Myg/bQn+Gv9k2MSj6LvQmf6WbZu2Wgjm30z3FyCneBqTL7mLF19YXzeC0ufHz5pnO1E= dasm@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 20 13:17:06 np0005588919.novalocal python3[5625]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHUnwjB20UKmsSed9X73eGNV5AOEFccQ3NYrRW776pEk cjeanner manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 20 13:17:07 np0005588919.novalocal python3[5649]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDercCMGn8rW1C4P67tHgtflPdTeXlpyUJYH+6XDd2lR jgilaber@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 20 13:17:07 np0005588919.novalocal python3[5673]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAMI6kkg9Wg0sG7jIJmyZemEBwUn1yzNpQQd3gnulOmZ adrianfuscoarnejo@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 20 13:17:07 np0005588919.novalocal python3[5697]: ansible-authorized_key Invoked with user=zuul state=present key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPijwpQu/3jhhhBZInXNOLEH57DrknPc3PLbsRvYyJIFzwYjX+WD4a7+nGnMYS42MuZk6TJcVqgnqofVx4isoD4= ramishra@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 20 13:17:07 np0005588919.novalocal python3[5721]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGpU/BepK3qX0NRf5Np+dOBDqzQEefhNrw2DCZaH3uWW rebtoor@monolith manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 20 13:17:08 np0005588919.novalocal python3[5745]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDK0iKdi8jQTpQrDdLVH/AAgLVYyTXF7AQ1gjc/5uT3t ykarel@yatinkarel manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 20 13:17:08 np0005588919.novalocal python3[5769]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF/V/cLotA6LZeO32VL45Hd78skuA2lJA425Sm2LlQeZ fmount@horcrux manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 20 13:17:08 np0005588919.novalocal python3[5793]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDa7QCjuDMVmRPo1rREbGwzYeBCYVN+Ou/3WKXZEC6Sr manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 20 13:17:08 np0005588919.novalocal python3[5817]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCfNtF7NvKl915TGsGGoseUb06Hj8L/S4toWf0hExeY+F00woL6NvBlJD0nDct+P5a22I4EhvoQCRQ8reaPCm1lybR3uiRIJsj+8zkVvLwby9LXzfZorlNG9ofjd00FEmB09uW/YvTl6Q9XwwwX6tInzIOv3TMqTHHGOL74ibbj8J/FJR0cFEyj0z4WQRvtkh32xAHl83gbuINryMt0sqRI+clj2381NKL55DRLQrVw0gsfqqxiHAnXg21qWmc4J+b9e9kiuAFQjcjwTVkwJCcg3xbPwC/qokYRby/Y5S40UUd7/jEARGXT7RZgpzTuDd1oZiCVrnrqJNPaMNdVv5MLeFdf1B7iIe5aa/fGouX7AO4SdKhZUdnJmCFAGvjC6S3JMZ2wAcUl+OHnssfmdj7XL50cLo27vjuzMtLAgSqi6N99m92WCF2s8J9aVzszX7Xz9OKZCeGsiVJp3/NdABKzSEAyM9xBD/5Vho894Sav+otpySHe3p6RUTgbB5Zu8VyZRZ/UtB3ueXxyo764yrc6qWIDqrehm84Xm9g+/jpIBzGPl07NUNJpdt/6Sgf9RIKXw/7XypO5yZfUcuFNGTxLfqjTNrtgLZNcjfav6sSdVXVcMPL//XNuRdKmVFaO76eV/oGMQGr1fGcCD+N+CpI7+Q+fCNB6VFWG4nZFuI/Iuw== averdagu@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 20 13:17:09 np0005588919.novalocal python3[5841]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDq8l27xI+QlQVdS4djp9ogSoyrNE2+Ox6vKPdhSNL1J3PE5w+WCSvMz9A5gnNuH810zwbekEApbxTze/gLQJwBHA52CChfURpXrFaxY7ePXRElwKAL3mJfzBWY/c5jnNL9TCVmFJTGZkFZP3Nh+BMgZvL6xBkt3WKm6Uq18qzd9XeKcZusrA+O+uLv1fVeQnadY9RIqOCyeFYCzLWrUfTyE8x/XG0hAWIM7qpnF2cALQS2h9n4hW5ybiUN790H08wf9hFwEf5nxY9Z9dVkPFQiTSGKNBzmnCXU9skxS/xhpFjJ5duGSZdtAHe9O+nGZm9c67hxgtf8e5PDuqAdXEv2cf6e3VBAt+Bz8EKI3yosTj0oZHfwr42Yzb1l/SKy14Rggsrc9KAQlrGXan6+u2jcQqqx7l+SWmnpFiWTV9u5cWj2IgOhApOitmRBPYqk9rE2usfO0hLn/Pj/R/Nau4803e1/EikdLE7Ps95s9mX5jRDjAoUa2JwFF5RsVFyL910= ashigupt@ashigupt.remote.csb manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 20 13:17:09 np0005588919.novalocal python3[5865]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOKLl0NYKwoZ/JY5KeZU8VwRAggeOxqQJeoqp3dsAaY9 manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 20 13:17:09 np0005588919.novalocal python3[5889]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIASASQOH2BcOyLKuuDOdWZlPi2orcjcA8q4400T73DLH evallesp@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 20 13:17:10 np0005588919.novalocal python3[5913]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILeBWlamUph+jRKV2qrx1PGU7vWuGIt5+z9k96I8WehW amsinha@amsinha-mac manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 20 13:17:10 np0005588919.novalocal python3[5937]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIANvVgvJBlK3gb1yz5uef/JqIGq4HLEmY2dYA8e37swb morenod@redhat-laptop manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 20 13:17:10 np0005588919.novalocal python3[5961]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDZdI7t1cxYx65heVI24HTV4F7oQLW1zyfxHreL2TIJKxjyrUUKIFEUmTutcBlJRLNT2Eoix6x1sOw9YrchloCLcn//SGfTElr9mSc5jbjb7QXEU+zJMhtxyEJ1Po3CUGnj7ckiIXw7wcawZtrEOAQ9pH3ExYCJcEMiyNjRQZCxT3tPK+S4B95EWh5Fsrz9CkwpjNRPPH7LigCeQTM3Wc7r97utAslBUUvYceDSLA7rMgkitJE38b7rZBeYzsGQ8YYUBjTCtehqQXxCRjizbHWaaZkBU+N3zkKB6n/iCNGIO690NK7A/qb6msTijiz1PeuM8ThOsi9qXnbX5v0PoTpcFSojV7NHAQ71f0XXuS43FhZctT+Dcx44dT8Fb5vJu2cJGrk+qF8ZgJYNpRS7gPg0EG2EqjK7JMf9ULdjSu0r+KlqIAyLvtzT4eOnQipoKlb/WG5D/0ohKv7OMQ352ggfkBFIQsRXyyTCT98Ft9juqPuahi3CAQmP4H9dyE+7+Kz437PEtsxLmfm6naNmWi7Ee1DqWPwS8rEajsm4sNM4wW9gdBboJQtc0uZw0DfLj1I9r3Mc8Ol0jYtz0yNQDSzVLrGCaJlC311trU70tZ+ZkAVV6Mn8lOhSbj1cK0lvSr6ZK4dgqGl3I1eTZJJhbLNdg7UOVaiRx9543+C/p/As7w== brjackma@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 20 13:17:10 np0005588919.novalocal python3[5985]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKwedoZ0TWPJX/z/4TAbO/kKcDZOQVgRH0hAqrL5UCI1 vcastell@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 20 13:17:11 np0005588919.novalocal python3[6009]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEmv8sE8GCk6ZTPIqF0FQrttBdL3mq7rCm/IJy0xDFh7 michburk@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 20 13:17:11 np0005588919.novalocal python3[6033]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICy6GpGEtwevXEEn4mmLR5lmSLe23dGgAvzkB9DMNbkf rsafrono@rsafrono manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 20 13:17:14 np0005588919.novalocal sudo[6057]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-osqpuvqclzvgwinjaxqyleboohmsxyzy ; /usr/bin/python3'
Jan 20 13:17:14 np0005588919.novalocal sudo[6057]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:17:14 np0005588919.novalocal python3[6059]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Jan 20 13:17:14 np0005588919.novalocal systemd[1]: Starting Time & Date Service...
Jan 20 13:17:14 np0005588919.novalocal systemd[1]: Started Time & Date Service.
Jan 20 13:17:14 np0005588919.novalocal systemd-timedated[6061]: Changed time zone to 'UTC' (UTC).
Jan 20 13:17:14 np0005588919.novalocal sudo[6057]: pam_unix(sudo:session): session closed for user root
Jan 20 13:17:14 np0005588919.novalocal sudo[6088]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rylbiuvlscbkicuvpofnspgjwjdpirnx ; /usr/bin/python3'
Jan 20 13:17:14 np0005588919.novalocal sudo[6088]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:17:14 np0005588919.novalocal python3[6090]: ansible-file Invoked with path=/etc/nodepool state=directory mode=511 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:17:14 np0005588919.novalocal sudo[6088]: pam_unix(sudo:session): session closed for user root
Jan 20 13:17:15 np0005588919.novalocal python3[6166]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 20 13:17:15 np0005588919.novalocal python3[6237]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes src=/home/zuul/.ansible/tmp/ansible-tmp-1768915035.0741973-252-85085486583027/source _original_basename=tmp032vp1re follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:17:16 np0005588919.novalocal python3[6337]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 20 13:17:16 np0005588919.novalocal python3[6408]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes_private src=/home/zuul/.ansible/tmp/ansible-tmp-1768915035.9946544-302-84865711002240/source _original_basename=tmpbj4_07ef follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:17:17 np0005588919.novalocal sudo[6508]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fxijbtnpcoawbsecmvibiqsurhxoibie ; /usr/bin/python3'
Jan 20 13:17:17 np0005588919.novalocal sudo[6508]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:17:17 np0005588919.novalocal python3[6510]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/node_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 20 13:17:17 np0005588919.novalocal sudo[6508]: pam_unix(sudo:session): session closed for user root
Jan 20 13:17:17 np0005588919.novalocal sudo[6581]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-exsoaaucdcalqpjjjnfqpimtnlnufotx ; /usr/bin/python3'
Jan 20 13:17:17 np0005588919.novalocal sudo[6581]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:17:17 np0005588919.novalocal python3[6583]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/node_private src=/home/zuul/.ansible/tmp/ansible-tmp-1768915037.221259-383-160566660996014/source _original_basename=tmpbeaznhtf follow=False checksum=7a82bff5b5e9039ad1ac15f6a7286925b777bf85 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:17:17 np0005588919.novalocal sudo[6581]: pam_unix(sudo:session): session closed for user root
Jan 20 13:17:18 np0005588919.novalocal python3[6631]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa /etc/nodepool/id_rsa zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 13:17:18 np0005588919.novalocal python3[6657]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa.pub /etc/nodepool/id_rsa.pub zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 13:17:19 np0005588919.novalocal sudo[6735]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hweqbnptohloimkkblryzeltjzjzjikt ; /usr/bin/python3'
Jan 20 13:17:19 np0005588919.novalocal sudo[6735]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:17:19 np0005588919.novalocal python3[6737]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/zuul-sudo-grep follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 20 13:17:19 np0005588919.novalocal sudo[6735]: pam_unix(sudo:session): session closed for user root
Jan 20 13:17:19 np0005588919.novalocal sudo[6808]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vnecmgqginjgvrpblmtbnbngqjpiktus ; /usr/bin/python3'
Jan 20 13:17:19 np0005588919.novalocal sudo[6808]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:17:19 np0005588919.novalocal python3[6810]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/zuul-sudo-grep mode=288 src=/home/zuul/.ansible/tmp/ansible-tmp-1768915038.9616592-452-111445218516075/source _original_basename=tmpd8rwe82l follow=False checksum=bdca1a77493d00fb51567671791f4aa30f66c2f0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:17:19 np0005588919.novalocal sudo[6808]: pam_unix(sudo:session): session closed for user root
Jan 20 13:17:20 np0005588919.novalocal sudo[6859]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tawoewbficigrpnmjuyqtgigkryuyxxh ; /usr/bin/python3'
Jan 20 13:17:20 np0005588919.novalocal sudo[6859]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:17:20 np0005588919.novalocal python3[6861]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/visudo -c zuul_log_id=fa163efc-24cc-d383-642d-00000000001f-1-compute1 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 13:17:20 np0005588919.novalocal sudo[6859]: pam_unix(sudo:session): session closed for user root
Jan 20 13:17:20 np0005588919.novalocal python3[6889]: ansible-ansible.legacy.command Invoked with executable=/bin/bash _raw_params=env
                                                       _uses_shell=True zuul_log_id=fa163efc-24cc-d383-642d-000000000020-1-compute1 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None creates=None removes=None stdin=None
Jan 20 13:17:22 np0005588919.novalocal python3[6917]: ansible-file Invoked with path=/home/zuul/workspace state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:17:44 np0005588919.novalocal systemd[1]: systemd-timedated.service: Deactivated successfully.
Jan 20 13:17:48 np0005588919.novalocal sudo[6943]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-upsaioibkcarlpzshrotjopiqrfeywsr ; /usr/bin/python3'
Jan 20 13:17:48 np0005588919.novalocal sudo[6943]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:17:48 np0005588919.novalocal python3[6945]: ansible-ansible.builtin.file Invoked with path=/etc/ci/env state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:17:48 np0005588919.novalocal sudo[6943]: pam_unix(sudo:session): session closed for user root
Jan 20 13:18:43 np0005588919.novalocal systemd[4302]: Starting Mark boot as successful...
Jan 20 13:18:43 np0005588919.novalocal systemd[4302]: Finished Mark boot as successful.
Jan 20 13:18:49 np0005588919.novalocal sshd-session[4311]: Received disconnect from 38.102.83.114 port 53988:11: disconnected by user
Jan 20 13:18:49 np0005588919.novalocal sshd-session[4311]: Disconnected from user zuul 38.102.83.114 port 53988
Jan 20 13:18:49 np0005588919.novalocal sshd-session[4298]: pam_unix(sshd:session): session closed for user zuul
Jan 20 13:18:49 np0005588919.novalocal systemd-logind[783]: Session 1 logged out. Waiting for processes to exit.
Jan 20 13:18:55 np0005588919.novalocal kernel: pci 0000:00:07.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jan 20 13:18:55 np0005588919.novalocal kernel: pci 0000:00:07.0: BAR 0 [io  0x0000-0x003f]
Jan 20 13:18:55 np0005588919.novalocal kernel: pci 0000:00:07.0: BAR 1 [mem 0x00000000-0x00000fff]
Jan 20 13:18:55 np0005588919.novalocal kernel: pci 0000:00:07.0: BAR 4 [mem 0x00000000-0x00003fff 64bit pref]
Jan 20 13:18:55 np0005588919.novalocal kernel: pci 0000:00:07.0: ROM [mem 0x00000000-0x0007ffff pref]
Jan 20 13:18:55 np0005588919.novalocal kernel: pci 0000:00:07.0: ROM [mem 0xc0000000-0xc007ffff pref]: assigned
Jan 20 13:18:55 np0005588919.novalocal kernel: pci 0000:00:07.0: BAR 4 [mem 0x240000000-0x240003fff 64bit pref]: assigned
Jan 20 13:18:55 np0005588919.novalocal kernel: pci 0000:00:07.0: BAR 1 [mem 0xc0080000-0xc0080fff]: assigned
Jan 20 13:18:55 np0005588919.novalocal kernel: pci 0000:00:07.0: BAR 0 [io  0x1000-0x103f]: assigned
Jan 20 13:18:55 np0005588919.novalocal kernel: virtio-pci 0000:00:07.0: enabling device (0000 -> 0003)
Jan 20 13:18:55 np0005588919.novalocal NetworkManager[856]: <info>  [1768915135.5815] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Jan 20 13:18:55 np0005588919.novalocal systemd-udevd[6948]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 13:18:55 np0005588919.novalocal NetworkManager[856]: <info>  [1768915135.5987] device (eth1): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 20 13:18:55 np0005588919.novalocal NetworkManager[856]: <info>  [1768915135.6026] settings: (eth1): created default wired connection 'Wired connection 1'
Jan 20 13:18:55 np0005588919.novalocal NetworkManager[856]: <info>  [1768915135.6031] device (eth1): carrier: link connected
Jan 20 13:18:55 np0005588919.novalocal NetworkManager[856]: <info>  [1768915135.6033] device (eth1): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Jan 20 13:18:55 np0005588919.novalocal NetworkManager[856]: <info>  [1768915135.6040] policy: auto-activating connection 'Wired connection 1' (3c0fe307-c5e5-33c6-a0c4-a240cdee9616)
Jan 20 13:18:55 np0005588919.novalocal NetworkManager[856]: <info>  [1768915135.6045] device (eth1): Activation: starting connection 'Wired connection 1' (3c0fe307-c5e5-33c6-a0c4-a240cdee9616)
Jan 20 13:18:55 np0005588919.novalocal NetworkManager[856]: <info>  [1768915135.6046] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 20 13:18:55 np0005588919.novalocal NetworkManager[856]: <info>  [1768915135.6049] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 20 13:18:55 np0005588919.novalocal NetworkManager[856]: <info>  [1768915135.6054] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 20 13:18:55 np0005588919.novalocal NetworkManager[856]: <info>  [1768915135.6059] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Jan 20 13:18:57 np0005588919.novalocal sshd-session[6951]: Accepted publickey for zuul from 38.102.83.114 port 54426 ssh2: RSA SHA256:r50QbT7bSKscUimrVpe816OyonJCbpigaVSUx3I8hI8
Jan 20 13:18:57 np0005588919.novalocal systemd-logind[783]: New session 3 of user zuul.
Jan 20 13:18:57 np0005588919.novalocal systemd[1]: Started Session 3 of User zuul.
Jan 20 13:18:57 np0005588919.novalocal sshd-session[6951]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 20 13:18:57 np0005588919.novalocal python3[6978]: ansible-ansible.legacy.command Invoked with _raw_params=ip -j link zuul_log_id=fa163efc-24cc-d6d1-215a-00000000018f-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 13:19:07 np0005588919.novalocal sudo[7057]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-edytcusmkehwhgshjvgljugevdtgxces ; OS_CLOUD=vexxhost /usr/bin/python3'
Jan 20 13:19:07 np0005588919.novalocal sudo[7057]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:19:07 np0005588919.novalocal python3[7059]: ansible-ansible.legacy.stat Invoked with path=/etc/NetworkManager/system-connections/ci-private-network.nmconnection follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 20 13:19:07 np0005588919.novalocal sudo[7057]: pam_unix(sudo:session): session closed for user root
Jan 20 13:19:07 np0005588919.novalocal sudo[7130]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bwjsvygwrczqiecldgwqpcklnlbpdzdp ; OS_CLOUD=vexxhost /usr/bin/python3'
Jan 20 13:19:07 np0005588919.novalocal sudo[7130]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:19:08 np0005588919.novalocal python3[7132]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1768915147.3703582-155-6186936698920/source dest=/etc/NetworkManager/system-connections/ci-private-network.nmconnection mode=0600 owner=root group=root follow=False _original_basename=bootstrap-ci-network-nm-connection.nmconnection.j2 checksum=b4d6cf2779d273a35a4489895da20bdddd958aca backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:19:08 np0005588919.novalocal sudo[7130]: pam_unix(sudo:session): session closed for user root
Jan 20 13:19:08 np0005588919.novalocal sudo[7180]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-avzlnhxknegpnjcrzdsihymqsctacieo ; OS_CLOUD=vexxhost /usr/bin/python3'
Jan 20 13:19:08 np0005588919.novalocal sudo[7180]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:19:08 np0005588919.novalocal python3[7182]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 20 13:19:08 np0005588919.novalocal systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Jan 20 13:19:08 np0005588919.novalocal systemd[1]: Stopped Network Manager Wait Online.
Jan 20 13:19:08 np0005588919.novalocal systemd[1]: Stopping Network Manager Wait Online...
Jan 20 13:19:08 np0005588919.novalocal systemd[1]: Stopping Network Manager...
Jan 20 13:19:08 np0005588919.novalocal NetworkManager[856]: <info>  [1768915148.6841] caught SIGTERM, shutting down normally.
Jan 20 13:19:08 np0005588919.novalocal NetworkManager[856]: <info>  [1768915148.6855] dhcp4 (eth0): canceled DHCP transaction
Jan 20 13:19:08 np0005588919.novalocal NetworkManager[856]: <info>  [1768915148.6855] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 20 13:19:08 np0005588919.novalocal NetworkManager[856]: <info>  [1768915148.6856] dhcp4 (eth0): state changed no lease
Jan 20 13:19:08 np0005588919.novalocal NetworkManager[856]: <info>  [1768915148.6858] manager: NetworkManager state is now CONNECTING
Jan 20 13:19:08 np0005588919.novalocal NetworkManager[856]: <info>  [1768915148.6939] dhcp4 (eth1): canceled DHCP transaction
Jan 20 13:19:08 np0005588919.novalocal NetworkManager[856]: <info>  [1768915148.6939] dhcp4 (eth1): state changed no lease
Jan 20 13:19:08 np0005588919.novalocal NetworkManager[856]: <info>  [1768915148.7011] exiting (success)
Jan 20 13:19:08 np0005588919.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 20 13:19:08 np0005588919.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 20 13:19:08 np0005588919.novalocal systemd[1]: NetworkManager.service: Deactivated successfully.
Jan 20 13:19:08 np0005588919.novalocal systemd[1]: Stopped Network Manager.
Jan 20 13:19:08 np0005588919.novalocal systemd[1]: NetworkManager.service: Consumed 1.295s CPU time, 10.0M memory peak.
Jan 20 13:19:08 np0005588919.novalocal systemd[1]: Starting Network Manager...
Jan 20 13:19:08 np0005588919.novalocal NetworkManager[7192]: <info>  [1768915148.7883] NetworkManager (version 1.54.3-2.el9) is starting... (after a restart, boot:017a3b90-38ab-4863-8e66-991c4844fcc7)
Jan 20 13:19:08 np0005588919.novalocal NetworkManager[7192]: <info>  [1768915148.7885] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Jan 20 13:19:08 np0005588919.novalocal NetworkManager[7192]: <info>  [1768915148.7948] manager[0x5610a1ace000]: monitoring kernel firmware directory '/lib/firmware'.
Jan 20 13:19:08 np0005588919.novalocal systemd[1]: Starting Hostname Service...
Jan 20 13:19:08 np0005588919.novalocal systemd[1]: Started Hostname Service.
Jan 20 13:19:08 np0005588919.novalocal NetworkManager[7192]: <info>  [1768915148.8925] hostname: hostname: using hostnamed
Jan 20 13:19:08 np0005588919.novalocal NetworkManager[7192]: <info>  [1768915148.8926] hostname: static hostname changed from (none) to "np0005588919.novalocal"
Jan 20 13:19:08 np0005588919.novalocal NetworkManager[7192]: <info>  [1768915148.8935] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Jan 20 13:19:08 np0005588919.novalocal NetworkManager[7192]: <info>  [1768915148.8942] manager[0x5610a1ace000]: rfkill: Wi-Fi hardware radio set enabled
Jan 20 13:19:08 np0005588919.novalocal NetworkManager[7192]: <info>  [1768915148.8943] manager[0x5610a1ace000]: rfkill: WWAN hardware radio set enabled
Jan 20 13:19:08 np0005588919.novalocal NetworkManager[7192]: <info>  [1768915148.8987] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-team.so)
Jan 20 13:19:08 np0005588919.novalocal NetworkManager[7192]: <info>  [1768915148.8988] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Jan 20 13:19:08 np0005588919.novalocal NetworkManager[7192]: <info>  [1768915148.8989] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Jan 20 13:19:08 np0005588919.novalocal NetworkManager[7192]: <info>  [1768915148.8989] manager: Networking is enabled by state file
Jan 20 13:19:08 np0005588919.novalocal NetworkManager[7192]: <info>  [1768915148.8993] settings: Loaded settings plugin: keyfile (internal)
Jan 20 13:19:08 np0005588919.novalocal NetworkManager[7192]: <info>  [1768915148.8999] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-settings-plugin-ifcfg-rh.so")
Jan 20 13:19:08 np0005588919.novalocal NetworkManager[7192]: <info>  [1768915148.9037] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Jan 20 13:19:08 np0005588919.novalocal NetworkManager[7192]: <info>  [1768915148.9052] dhcp: init: Using DHCP client 'internal'
Jan 20 13:19:08 np0005588919.novalocal NetworkManager[7192]: <info>  [1768915148.9057] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Jan 20 13:19:08 np0005588919.novalocal NetworkManager[7192]: <info>  [1768915148.9064] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 20 13:19:08 np0005588919.novalocal NetworkManager[7192]: <info>  [1768915148.9072] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Jan 20 13:19:08 np0005588919.novalocal NetworkManager[7192]: <info>  [1768915148.9085] device (lo): Activation: starting connection 'lo' (f1b29bda-3a6a-4be0-8c9c-6df9359cf4c4)
Jan 20 13:19:08 np0005588919.novalocal NetworkManager[7192]: <info>  [1768915148.9095] device (eth0): carrier: link connected
Jan 20 13:19:08 np0005588919.novalocal NetworkManager[7192]: <info>  [1768915148.9104] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Jan 20 13:19:08 np0005588919.novalocal NetworkManager[7192]: <info>  [1768915148.9112] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Jan 20 13:19:08 np0005588919.novalocal NetworkManager[7192]: <info>  [1768915148.9113] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Jan 20 13:19:08 np0005588919.novalocal NetworkManager[7192]: <info>  [1768915148.9124] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Jan 20 13:19:08 np0005588919.novalocal NetworkManager[7192]: <info>  [1768915148.9137] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Jan 20 13:19:08 np0005588919.novalocal NetworkManager[7192]: <info>  [1768915148.9147] device (eth1): carrier: link connected
Jan 20 13:19:08 np0005588919.novalocal NetworkManager[7192]: <info>  [1768915148.9154] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Jan 20 13:19:08 np0005588919.novalocal NetworkManager[7192]: <info>  [1768915148.9162] manager: (eth1): assume: will attempt to assume matching connection 'Wired connection 1' (3c0fe307-c5e5-33c6-a0c4-a240cdee9616) (indicated)
Jan 20 13:19:08 np0005588919.novalocal NetworkManager[7192]: <info>  [1768915148.9163] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Jan 20 13:19:08 np0005588919.novalocal NetworkManager[7192]: <info>  [1768915148.9170] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Jan 20 13:19:08 np0005588919.novalocal NetworkManager[7192]: <info>  [1768915148.9181] device (eth1): Activation: starting connection 'Wired connection 1' (3c0fe307-c5e5-33c6-a0c4-a240cdee9616)
Jan 20 13:19:08 np0005588919.novalocal systemd[1]: Started Network Manager.
Jan 20 13:19:08 np0005588919.novalocal NetworkManager[7192]: <info>  [1768915148.9190] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Jan 20 13:19:08 np0005588919.novalocal NetworkManager[7192]: <info>  [1768915148.9196] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Jan 20 13:19:08 np0005588919.novalocal NetworkManager[7192]: <info>  [1768915148.9199] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Jan 20 13:19:08 np0005588919.novalocal NetworkManager[7192]: <info>  [1768915148.9202] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Jan 20 13:19:08 np0005588919.novalocal NetworkManager[7192]: <info>  [1768915148.9205] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Jan 20 13:19:08 np0005588919.novalocal NetworkManager[7192]: <info>  [1768915148.9210] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Jan 20 13:19:08 np0005588919.novalocal NetworkManager[7192]: <info>  [1768915148.9212] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Jan 20 13:19:08 np0005588919.novalocal NetworkManager[7192]: <info>  [1768915148.9215] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Jan 20 13:19:08 np0005588919.novalocal NetworkManager[7192]: <info>  [1768915148.9220] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Jan 20 13:19:08 np0005588919.novalocal NetworkManager[7192]: <info>  [1768915148.9230] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Jan 20 13:19:08 np0005588919.novalocal NetworkManager[7192]: <info>  [1768915148.9235] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 20 13:19:08 np0005588919.novalocal NetworkManager[7192]: <info>  [1768915148.9249] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Jan 20 13:19:08 np0005588919.novalocal NetworkManager[7192]: <info>  [1768915148.9254] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Jan 20 13:19:08 np0005588919.novalocal NetworkManager[7192]: <info>  [1768915148.9283] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Jan 20 13:19:08 np0005588919.novalocal NetworkManager[7192]: <info>  [1768915148.9289] dhcp4 (eth0): state changed new lease, address=38.102.83.169
Jan 20 13:19:08 np0005588919.novalocal NetworkManager[7192]: <info>  [1768915148.9297] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Jan 20 13:19:08 np0005588919.novalocal NetworkManager[7192]: <info>  [1768915148.9310] device (lo): Activation: successful, device activated.
Jan 20 13:19:08 np0005588919.novalocal NetworkManager[7192]: <info>  [1768915148.9327] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Jan 20 13:19:08 np0005588919.novalocal systemd[1]: Starting Network Manager Wait Online...
Jan 20 13:19:08 np0005588919.novalocal NetworkManager[7192]: <info>  [1768915148.9417] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Jan 20 13:19:08 np0005588919.novalocal NetworkManager[7192]: <info>  [1768915148.9442] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Jan 20 13:19:08 np0005588919.novalocal NetworkManager[7192]: <info>  [1768915148.9445] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Jan 20 13:19:08 np0005588919.novalocal NetworkManager[7192]: <info>  [1768915148.9452] manager: NetworkManager state is now CONNECTED_SITE
Jan 20 13:19:08 np0005588919.novalocal NetworkManager[7192]: <info>  [1768915148.9456] device (eth0): Activation: successful, device activated.
Jan 20 13:19:08 np0005588919.novalocal NetworkManager[7192]: <info>  [1768915148.9464] manager: NetworkManager state is now CONNECTED_GLOBAL
Jan 20 13:19:08 np0005588919.novalocal sudo[7180]: pam_unix(sudo:session): session closed for user root
Jan 20 13:19:09 np0005588919.novalocal python3[7267]: ansible-ansible.legacy.command Invoked with _raw_params=ip route zuul_log_id=fa163efc-24cc-d6d1-215a-0000000000c8-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 13:19:19 np0005588919.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 20 13:19:38 np0005588919.novalocal systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 20 13:19:54 np0005588919.novalocal NetworkManager[7192]: <info>  [1768915194.0175] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Jan 20 13:19:54 np0005588919.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 20 13:19:54 np0005588919.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 20 13:19:54 np0005588919.novalocal NetworkManager[7192]: <info>  [1768915194.0535] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Jan 20 13:19:54 np0005588919.novalocal NetworkManager[7192]: <info>  [1768915194.0541] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Jan 20 13:19:54 np0005588919.novalocal NetworkManager[7192]: <info>  [1768915194.0547] device (eth1): Activation: successful, device activated.
Jan 20 13:19:54 np0005588919.novalocal NetworkManager[7192]: <info>  [1768915194.0557] manager: startup complete
Jan 20 13:19:54 np0005588919.novalocal NetworkManager[7192]: <info>  [1768915194.0559] device (eth1): state change: activated -> failed (reason 'ip-config-unavailable', managed-type: 'full')
Jan 20 13:19:54 np0005588919.novalocal NetworkManager[7192]: <warn>  [1768915194.0567] device (eth1): Activation: failed for connection 'Wired connection 1'
Jan 20 13:19:54 np0005588919.novalocal NetworkManager[7192]: <info>  [1768915194.0582] device (eth1): state change: failed -> disconnected (reason 'none', managed-type: 'full')
Jan 20 13:19:54 np0005588919.novalocal systemd[1]: Finished Network Manager Wait Online.
Jan 20 13:19:54 np0005588919.novalocal NetworkManager[7192]: <info>  [1768915194.0689] dhcp4 (eth1): canceled DHCP transaction
Jan 20 13:19:54 np0005588919.novalocal NetworkManager[7192]: <info>  [1768915194.0691] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Jan 20 13:19:54 np0005588919.novalocal NetworkManager[7192]: <info>  [1768915194.0692] dhcp4 (eth1): state changed no lease
Jan 20 13:19:54 np0005588919.novalocal NetworkManager[7192]: <info>  [1768915194.0716] policy: auto-activating connection 'ci-private-network' (1877dc82-ca8e-52d6-b413-dd9d07823d2d)
Jan 20 13:19:54 np0005588919.novalocal NetworkManager[7192]: <info>  [1768915194.0723] device (eth1): Activation: starting connection 'ci-private-network' (1877dc82-ca8e-52d6-b413-dd9d07823d2d)
Jan 20 13:19:54 np0005588919.novalocal NetworkManager[7192]: <info>  [1768915194.0725] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 20 13:19:54 np0005588919.novalocal NetworkManager[7192]: <info>  [1768915194.0732] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 20 13:19:54 np0005588919.novalocal NetworkManager[7192]: <info>  [1768915194.0743] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 20 13:19:54 np0005588919.novalocal NetworkManager[7192]: <info>  [1768915194.0757] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 20 13:19:54 np0005588919.novalocal NetworkManager[7192]: <info>  [1768915194.0810] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 20 13:19:54 np0005588919.novalocal NetworkManager[7192]: <info>  [1768915194.0814] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 20 13:19:54 np0005588919.novalocal NetworkManager[7192]: <info>  [1768915194.0824] device (eth1): Activation: successful, device activated.
Jan 20 13:20:04 np0005588919.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 20 13:20:09 np0005588919.novalocal sshd-session[6954]: Received disconnect from 38.102.83.114 port 54426:11: disconnected by user
Jan 20 13:20:09 np0005588919.novalocal sshd-session[6954]: Disconnected from user zuul 38.102.83.114 port 54426
Jan 20 13:20:09 np0005588919.novalocal sshd-session[6951]: pam_unix(sshd:session): session closed for user zuul
Jan 20 13:20:09 np0005588919.novalocal systemd[1]: session-3.scope: Deactivated successfully.
Jan 20 13:20:09 np0005588919.novalocal systemd[1]: session-3.scope: Consumed 1.763s CPU time.
Jan 20 13:20:09 np0005588919.novalocal systemd-logind[783]: Session 3 logged out. Waiting for processes to exit.
Jan 20 13:20:09 np0005588919.novalocal systemd-logind[783]: Removed session 3.
Jan 20 13:20:31 np0005588919.novalocal sshd-session[7296]: error: kex_exchange_identification: read: Connection reset by peer
Jan 20 13:20:31 np0005588919.novalocal sshd-session[7296]: Connection reset by 176.120.22.52 port 59675
Jan 20 13:20:52 np0005588919.novalocal sshd-session[7297]: Accepted publickey for zuul from 38.102.83.114 port 39546 ssh2: RSA SHA256:r50QbT7bSKscUimrVpe816OyonJCbpigaVSUx3I8hI8
Jan 20 13:20:52 np0005588919.novalocal systemd-logind[783]: New session 4 of user zuul.
Jan 20 13:20:52 np0005588919.novalocal systemd[1]: Started Session 4 of User zuul.
Jan 20 13:20:52 np0005588919.novalocal sshd-session[7297]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 20 13:20:52 np0005588919.novalocal sudo[7376]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sgyhgtsywpjebfthzctxnizxvczysafc ; OS_CLOUD=vexxhost /usr/bin/python3'
Jan 20 13:20:52 np0005588919.novalocal sudo[7376]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:20:52 np0005588919.novalocal python3[7378]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/env/networking-info.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 20 13:20:52 np0005588919.novalocal sudo[7376]: pam_unix(sudo:session): session closed for user root
Jan 20 13:20:52 np0005588919.novalocal sudo[7449]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gpvftbqyuizwdetwzgdemdfxmofvuzac ; OS_CLOUD=vexxhost /usr/bin/python3'
Jan 20 13:20:52 np0005588919.novalocal sudo[7449]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:20:52 np0005588919.novalocal python3[7451]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/env/networking-info.yml owner=root group=root mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1768915252.179773-373-102212201474035/source _original_basename=tmpn2ooqnbo follow=False checksum=a06d82404ae9ae38c6111e54a4021096121ff7ef backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:20:52 np0005588919.novalocal sudo[7449]: pam_unix(sudo:session): session closed for user root
Jan 20 13:20:56 np0005588919.novalocal sshd-session[7300]: Connection closed by 38.102.83.114 port 39546
Jan 20 13:20:56 np0005588919.novalocal sshd-session[7297]: pam_unix(sshd:session): session closed for user zuul
Jan 20 13:20:56 np0005588919.novalocal systemd[1]: session-4.scope: Deactivated successfully.
Jan 20 13:20:56 np0005588919.novalocal systemd-logind[783]: Session 4 logged out. Waiting for processes to exit.
Jan 20 13:20:56 np0005588919.novalocal systemd-logind[783]: Removed session 4.
Jan 20 13:21:43 np0005588919.novalocal systemd[4302]: Created slice User Background Tasks Slice.
Jan 20 13:21:43 np0005588919.novalocal systemd[4302]: Starting Cleanup of User's Temporary Files and Directories...
Jan 20 13:21:43 np0005588919.novalocal systemd[4302]: Finished Cleanup of User's Temporary Files and Directories.
Jan 20 13:26:22 np0005588919.novalocal sshd-session[7481]: Accepted publickey for zuul from 38.102.83.114 port 40646 ssh2: RSA SHA256:r50QbT7bSKscUimrVpe816OyonJCbpigaVSUx3I8hI8
Jan 20 13:26:22 np0005588919.novalocal systemd-logind[783]: New session 5 of user zuul.
Jan 20 13:26:22 np0005588919.novalocal systemd[1]: Started Session 5 of User zuul.
Jan 20 13:26:22 np0005588919.novalocal sshd-session[7481]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 20 13:26:22 np0005588919.novalocal sudo[7508]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ypwmqkrqxxnfaxvzdusxhicwbmassiip ; /usr/bin/python3'
Jan 20 13:26:22 np0005588919.novalocal sudo[7508]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:26:22 np0005588919.novalocal python3[7510]: ansible-ansible.legacy.command Invoked with _raw_params=lsblk -nd -o MAJ:MIN /dev/vda
                                                       _uses_shell=True zuul_log_id=fa163efc-24cc-67c0-97af-000000000ca4-1-compute1 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 13:26:22 np0005588919.novalocal sudo[7508]: pam_unix(sudo:session): session closed for user root
Jan 20 13:26:22 np0005588919.novalocal sudo[7536]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rsizmxtbsyhrzbgafaiuuibohbtjkcir ; /usr/bin/python3'
Jan 20 13:26:22 np0005588919.novalocal sudo[7536]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:26:23 np0005588919.novalocal python3[7538]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/init.scope state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:26:23 np0005588919.novalocal sudo[7536]: pam_unix(sudo:session): session closed for user root
Jan 20 13:26:23 np0005588919.novalocal sudo[7563]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bvyueapunobgylpnjgjikygaextwtpin ; /usr/bin/python3'
Jan 20 13:26:23 np0005588919.novalocal sudo[7563]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:26:23 np0005588919.novalocal python3[7565]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/machine.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:26:23 np0005588919.novalocal sudo[7563]: pam_unix(sudo:session): session closed for user root
Jan 20 13:26:23 np0005588919.novalocal sudo[7589]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-stfckwkxealobtpamaqgwwaijqoignmc ; /usr/bin/python3'
Jan 20 13:26:23 np0005588919.novalocal sudo[7589]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:26:23 np0005588919.novalocal python3[7591]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/system.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:26:23 np0005588919.novalocal sudo[7589]: pam_unix(sudo:session): session closed for user root
Jan 20 13:26:23 np0005588919.novalocal sudo[7615]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pjejjzudowjljvtlfhjboklaziqtpjje ; /usr/bin/python3'
Jan 20 13:26:23 np0005588919.novalocal sudo[7615]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:26:23 np0005588919.novalocal python3[7617]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/user.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:26:23 np0005588919.novalocal sudo[7615]: pam_unix(sudo:session): session closed for user root
Jan 20 13:26:24 np0005588919.novalocal sudo[7641]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dkoxgzfepjfsigobzaccgphofyrafaxo ; /usr/bin/python3'
Jan 20 13:26:24 np0005588919.novalocal sudo[7641]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:26:24 np0005588919.novalocal python3[7643]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system.conf.d state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:26:24 np0005588919.novalocal sudo[7641]: pam_unix(sudo:session): session closed for user root
Jan 20 13:26:24 np0005588919.novalocal sudo[7719]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-arkucmgxsatgwfmmskksbjigiibegwwk ; /usr/bin/python3'
Jan 20 13:26:24 np0005588919.novalocal sudo[7719]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:26:24 np0005588919.novalocal python3[7721]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system.conf.d/override.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 20 13:26:24 np0005588919.novalocal sudo[7719]: pam_unix(sudo:session): session closed for user root
Jan 20 13:26:25 np0005588919.novalocal sudo[7792]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ylucrdznixyhrubyrcuvznoqcljmbdir ; /usr/bin/python3'
Jan 20 13:26:25 np0005588919.novalocal sudo[7792]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:26:25 np0005588919.novalocal python3[7794]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system.conf.d/override.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1768915584.5801682-365-201103252402798/source _original_basename=tmp5oxa2ha2 follow=False checksum=a05098bd3d2321238ea1169d0e6f135b35b392d4 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:26:25 np0005588919.novalocal sudo[7792]: pam_unix(sudo:session): session closed for user root
Jan 20 13:26:26 np0005588919.novalocal sudo[7842]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rfwzxwhrhwrnoawqumrfxxexkdulgloi ; /usr/bin/python3'
Jan 20 13:26:26 np0005588919.novalocal sudo[7842]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:26:26 np0005588919.novalocal python3[7844]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 20 13:26:26 np0005588919.novalocal systemd[1]: Reloading.
Jan 20 13:26:26 np0005588919.novalocal systemd-rc-local-generator[7861]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 13:26:26 np0005588919.novalocal sudo[7842]: pam_unix(sudo:session): session closed for user root
Jan 20 13:26:28 np0005588919.novalocal sudo[7897]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-glwkekpaidzntgzisqjrhzscgmgngjrc ; /usr/bin/python3'
Jan 20 13:26:28 np0005588919.novalocal sudo[7897]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:26:28 np0005588919.novalocal python3[7899]: ansible-ansible.builtin.wait_for Invoked with path=/sys/fs/cgroup/system.slice/io.max state=present timeout=30 host=127.0.0.1 connect_timeout=5 delay=0 active_connection_states=['ESTABLISHED', 'FIN_WAIT1', 'FIN_WAIT2', 'SYN_RECV', 'SYN_SENT', 'TIME_WAIT'] sleep=1 port=None search_regex=None exclude_hosts=None msg=None
Jan 20 13:26:28 np0005588919.novalocal sudo[7897]: pam_unix(sudo:session): session closed for user root
Jan 20 13:26:28 np0005588919.novalocal sudo[7923]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-paxskawhrzhbtrvedqjnimhredjqnuvi ; /usr/bin/python3'
Jan 20 13:26:28 np0005588919.novalocal sudo[7923]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:26:28 np0005588919.novalocal python3[7925]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/init.scope/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 13:26:28 np0005588919.novalocal sudo[7923]: pam_unix(sudo:session): session closed for user root
Jan 20 13:26:29 np0005588919.novalocal sudo[7951]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iwjqqjczxaualswxhhpsvhodtyavisjb ; /usr/bin/python3'
Jan 20 13:26:29 np0005588919.novalocal sudo[7951]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:26:29 np0005588919.novalocal python3[7953]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/machine.slice/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 13:26:29 np0005588919.novalocal sudo[7951]: pam_unix(sudo:session): session closed for user root
Jan 20 13:26:29 np0005588919.novalocal sudo[7979]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pjeplwzzozlyggeuyxwjwmgwesbmrzge ; /usr/bin/python3'
Jan 20 13:26:29 np0005588919.novalocal sudo[7979]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:26:29 np0005588919.novalocal python3[7981]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/system.slice/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 13:26:29 np0005588919.novalocal sudo[7979]: pam_unix(sudo:session): session closed for user root
Jan 20 13:26:29 np0005588919.novalocal sudo[8007]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nwfqslahlfqulzhldmkplopjmufbkihk ; /usr/bin/python3'
Jan 20 13:26:29 np0005588919.novalocal sudo[8007]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:26:29 np0005588919.novalocal python3[8009]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/user.slice/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 13:26:29 np0005588919.novalocal sudo[8007]: pam_unix(sudo:session): session closed for user root
Jan 20 13:26:30 np0005588919.novalocal python3[8036]: ansible-ansible.legacy.command Invoked with _raw_params=echo "init";    cat /sys/fs/cgroup/init.scope/io.max; echo "machine"; cat /sys/fs/cgroup/machine.slice/io.max; echo "system";  cat /sys/fs/cgroup/system.slice/io.max; echo "user";    cat /sys/fs/cgroup/user.slice/io.max;
                                                       _uses_shell=True zuul_log_id=fa163efc-24cc-67c0-97af-000000000cab-1-compute1 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 13:26:30 np0005588919.novalocal python3[8066]: ansible-ansible.builtin.stat Invoked with path=/sys/fs/cgroup/kubepods.slice/io.max follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 20 13:26:34 np0005588919.novalocal sshd-session[7484]: Connection closed by 38.102.83.114 port 40646
Jan 20 13:26:34 np0005588919.novalocal sshd-session[7481]: pam_unix(sshd:session): session closed for user zuul
Jan 20 13:26:34 np0005588919.novalocal systemd[1]: session-5.scope: Deactivated successfully.
Jan 20 13:26:34 np0005588919.novalocal systemd[1]: session-5.scope: Consumed 4.518s CPU time.
Jan 20 13:26:34 np0005588919.novalocal systemd-logind[783]: Session 5 logged out. Waiting for processes to exit.
Jan 20 13:26:34 np0005588919.novalocal systemd-logind[783]: Removed session 5.
Jan 20 13:26:36 np0005588919.novalocal sshd-session[8070]: Accepted publickey for zuul from 38.102.83.114 port 46396 ssh2: RSA SHA256:r50QbT7bSKscUimrVpe816OyonJCbpigaVSUx3I8hI8
Jan 20 13:26:36 np0005588919.novalocal systemd-logind[783]: New session 6 of user zuul.
Jan 20 13:26:36 np0005588919.novalocal systemd[1]: Started Session 6 of User zuul.
Jan 20 13:26:36 np0005588919.novalocal sshd-session[8070]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 20 13:26:36 np0005588919.novalocal sudo[8097]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-beaekemnofjruybpexrjaojynhgxociu ; /usr/bin/python3'
Jan 20 13:26:36 np0005588919.novalocal sudo[8097]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:26:36 np0005588919.novalocal python3[8099]: ansible-ansible.legacy.dnf Invoked with name=['podman', 'buildah'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Jan 20 13:26:44 np0005588919.novalocal setsebool[8139]: The virt_use_nfs policy boolean was changed to 1 by root
Jan 20 13:26:44 np0005588919.novalocal setsebool[8139]: The virt_sandbox_use_all_caps policy boolean was changed to 1 by root
Jan 20 13:26:57 np0005588919.novalocal kernel: SELinux:  Converting 385 SID table entries...
Jan 20 13:26:57 np0005588919.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Jan 20 13:26:57 np0005588919.novalocal kernel: SELinux:  policy capability open_perms=1
Jan 20 13:26:57 np0005588919.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Jan 20 13:26:57 np0005588919.novalocal kernel: SELinux:  policy capability always_check_network=0
Jan 20 13:26:57 np0005588919.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 20 13:26:57 np0005588919.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 20 13:26:57 np0005588919.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 20 13:27:07 np0005588919.novalocal kernel: SELinux:  Converting 388 SID table entries...
Jan 20 13:27:07 np0005588919.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Jan 20 13:27:07 np0005588919.novalocal kernel: SELinux:  policy capability open_perms=1
Jan 20 13:27:07 np0005588919.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Jan 20 13:27:07 np0005588919.novalocal kernel: SELinux:  policy capability always_check_network=0
Jan 20 13:27:07 np0005588919.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 20 13:27:07 np0005588919.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 20 13:27:07 np0005588919.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 20 13:27:25 np0005588919.novalocal dbus-broker-launch[771]: avc:  op=load_policy lsm=selinux seqno=4 res=1
Jan 20 13:27:25 np0005588919.novalocal systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 20 13:27:25 np0005588919.novalocal systemd[1]: Starting man-db-cache-update.service...
Jan 20 13:27:25 np0005588919.novalocal systemd[1]: Reloading.
Jan 20 13:27:25 np0005588919.novalocal systemd-rc-local-generator[8909]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 13:27:25 np0005588919.novalocal systemd[1]: Queuing reload/restart jobs for marked units…
Jan 20 13:27:27 np0005588919.novalocal sudo[8097]: pam_unix(sudo:session): session closed for user root
Jan 20 13:27:30 np0005588919.novalocal python3[13062]: ansible-ansible.legacy.command Invoked with _raw_params=echo "openstack-k8s-operators+cirobot"
                                                        _uses_shell=True zuul_log_id=fa163efc-24cc-45b7-a25c-00000000000c-1-compute1 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 13:27:31 np0005588919.novalocal kernel: evm: overlay not supported
Jan 20 13:27:31 np0005588919.novalocal systemd[4302]: Starting D-Bus User Message Bus...
Jan 20 13:27:31 np0005588919.novalocal dbus-broker-launch[13894]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +31: Eavesdropping is deprecated and ignored
Jan 20 13:27:31 np0005588919.novalocal dbus-broker-launch[13894]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +33: Eavesdropping is deprecated and ignored
Jan 20 13:27:31 np0005588919.novalocal systemd[4302]: Started D-Bus User Message Bus.
Jan 20 13:27:31 np0005588919.novalocal dbus-broker-lau[13894]: Ready
Jan 20 13:27:31 np0005588919.novalocal systemd[4302]: selinux: avc:  op=load_policy lsm=selinux seqno=4 res=1
Jan 20 13:27:31 np0005588919.novalocal systemd[4302]: Created slice Slice /user.
Jan 20 13:27:31 np0005588919.novalocal systemd[4302]: podman-13874.scope: unit configures an IP firewall, but not running as root.
Jan 20 13:27:31 np0005588919.novalocal systemd[4302]: (This warning is only shown for the first unit using IP firewalling.)
Jan 20 13:27:31 np0005588919.novalocal systemd[4302]: Started podman-13874.scope.
Jan 20 13:27:31 np0005588919.novalocal systemd[4302]: Started podman-pause-51d78d5d.scope.
Jan 20 13:27:32 np0005588919.novalocal sudo[14054]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kjswseremrqggmtjsrztflqwfromeurr ; /usr/bin/python3'
Jan 20 13:27:32 np0005588919.novalocal sudo[14054]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:27:32 np0005588919.novalocal python3[14066]: ansible-ansible.builtin.blockinfile Invoked with state=present insertafter=EOF dest=/etc/containers/registries.conf content=[[registry]]
                                                       location = "38.102.83.233:5001"
                                                       insecure = true path=/etc/containers/registries.conf block=[[registry]]
                                                       location = "38.102.83.233:5001"
                                                       insecure = true marker=# {mark} ANSIBLE MANAGED BLOCK create=False backup=False marker_begin=BEGIN marker_end=END unsafe_writes=False insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:27:32 np0005588919.novalocal python3[14066]: ansible-ansible.builtin.blockinfile [WARNING] Module remote_tmp /root/.ansible/tmp did not exist and was created with a mode of 0700, this may cause issues when running as another user. To avoid this, create the remote_tmp dir with the correct permissions manually
Jan 20 13:27:32 np0005588919.novalocal sudo[14054]: pam_unix(sudo:session): session closed for user root
Jan 20 13:27:33 np0005588919.novalocal sshd-session[8073]: Connection closed by 38.102.83.114 port 46396
Jan 20 13:27:33 np0005588919.novalocal sshd-session[8070]: pam_unix(sshd:session): session closed for user zuul
Jan 20 13:27:33 np0005588919.novalocal systemd[1]: session-6.scope: Deactivated successfully.
Jan 20 13:27:33 np0005588919.novalocal systemd[1]: session-6.scope: Consumed 46.854s CPU time.
Jan 20 13:27:33 np0005588919.novalocal systemd-logind[783]: Session 6 logged out. Waiting for processes to exit.
Jan 20 13:27:33 np0005588919.novalocal systemd-logind[783]: Removed session 6.
Jan 20 13:27:47 np0005588919.novalocal irqbalance[778]: Cannot change IRQ 27 affinity: Operation not permitted
Jan 20 13:27:47 np0005588919.novalocal irqbalance[778]: IRQ 27 affinity is now unmanaged
Jan 20 13:27:53 np0005588919.novalocal sshd-session[22377]: Connection closed by 38.102.83.230 port 38704 [preauth]
Jan 20 13:27:53 np0005588919.novalocal sshd-session[22386]: Connection closed by 38.102.83.230 port 38710 [preauth]
Jan 20 13:27:53 np0005588919.novalocal sshd-session[22379]: Unable to negotiate with 38.102.83.230 port 38718: no matching host key type found. Their offer: ssh-ed25519 [preauth]
Jan 20 13:27:53 np0005588919.novalocal sshd-session[22384]: Unable to negotiate with 38.102.83.230 port 38730: no matching host key type found. Their offer: sk-ecdsa-sha2-nistp256@openssh.com [preauth]
Jan 20 13:27:53 np0005588919.novalocal sshd-session[22382]: Unable to negotiate with 38.102.83.230 port 38736: no matching host key type found. Their offer: sk-ssh-ed25519@openssh.com [preauth]
Jan 20 13:27:58 np0005588919.novalocal sshd-session[24181]: Accepted publickey for zuul from 38.102.83.114 port 39868 ssh2: RSA SHA256:r50QbT7bSKscUimrVpe816OyonJCbpigaVSUx3I8hI8
Jan 20 13:27:58 np0005588919.novalocal systemd-logind[783]: New session 7 of user zuul.
Jan 20 13:27:58 np0005588919.novalocal systemd[1]: Started Session 7 of User zuul.
Jan 20 13:27:58 np0005588919.novalocal sshd-session[24181]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 20 13:27:58 np0005588919.novalocal python3[24286]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCxDwznlRTnwVs4thRw4BwvCKpJwxh+kvQMz22TwtomAycQeWyoRQIrmHPtGJrxAVwBD4Z4rIf2j/gxUloZVamc= zuul@np0005588917.novalocal
                                                        manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 20 13:27:58 np0005588919.novalocal sudo[24509]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cyfzrqrmhncdxtopsjimmgsvlzdwwbbg ; /usr/bin/python3'
Jan 20 13:27:58 np0005588919.novalocal sudo[24509]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:27:59 np0005588919.novalocal python3[24519]: ansible-ansible.posix.authorized_key Invoked with user=root key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCxDwznlRTnwVs4thRw4BwvCKpJwxh+kvQMz22TwtomAycQeWyoRQIrmHPtGJrxAVwBD4Z4rIf2j/gxUloZVamc= zuul@np0005588917.novalocal
                                                        manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 20 13:27:59 np0005588919.novalocal sudo[24509]: pam_unix(sudo:session): session closed for user root
Jan 20 13:27:59 np0005588919.novalocal sudo[24934]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lyijbcpnbzjexdkhmlkrachzvkkvlkkh ; /usr/bin/python3'
Jan 20 13:27:59 np0005588919.novalocal sudo[24934]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:28:00 np0005588919.novalocal python3[24943]: ansible-ansible.builtin.user Invoked with name=cloud-admin shell=/bin/bash state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005588919.novalocal update_password=always uid=None group=None groups=None comment=None home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None
Jan 20 13:28:00 np0005588919.novalocal useradd[25026]: new group: name=cloud-admin, GID=1002
Jan 20 13:28:00 np0005588919.novalocal useradd[25026]: new user: name=cloud-admin, UID=1002, GID=1002, home=/home/cloud-admin, shell=/bin/bash, from=none
Jan 20 13:28:00 np0005588919.novalocal sudo[24934]: pam_unix(sudo:session): session closed for user root
Jan 20 13:28:00 np0005588919.novalocal sudo[25229]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zduvhywazdchsxtkbyommxopeachpdns ; /usr/bin/python3'
Jan 20 13:28:00 np0005588919.novalocal sudo[25229]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:28:00 np0005588919.novalocal python3[25234]: ansible-ansible.posix.authorized_key Invoked with user=cloud-admin key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCxDwznlRTnwVs4thRw4BwvCKpJwxh+kvQMz22TwtomAycQeWyoRQIrmHPtGJrxAVwBD4Z4rIf2j/gxUloZVamc= zuul@np0005588917.novalocal
                                                        manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 20 13:28:00 np0005588919.novalocal sudo[25229]: pam_unix(sudo:session): session closed for user root
Jan 20 13:28:01 np0005588919.novalocal sudo[25474]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yjewntwfrguhudhojcriquiijftuyape ; /usr/bin/python3'
Jan 20 13:28:01 np0005588919.novalocal sudo[25474]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:28:01 np0005588919.novalocal python3[25484]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/cloud-admin follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 20 13:28:01 np0005588919.novalocal sudo[25474]: pam_unix(sudo:session): session closed for user root
Jan 20 13:28:01 np0005588919.novalocal sudo[25768]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vwvjdntipprvmzrggvistfxgcyluotzl ; /usr/bin/python3'
Jan 20 13:28:01 np0005588919.novalocal sudo[25768]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:28:01 np0005588919.novalocal python3[25775]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/cloud-admin mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1768915681.0524058-168-160457724055178/source _original_basename=tmprcct7y06 follow=False checksum=e7614e5ad3ab06eaae55b8efaa2ed81b63ea5634 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:28:01 np0005588919.novalocal sudo[25768]: pam_unix(sudo:session): session closed for user root
Jan 20 13:28:02 np0005588919.novalocal sudo[26156]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mekowbemtevaqjlajdgcjmrxnjckiuxv ; /usr/bin/python3'
Jan 20 13:28:02 np0005588919.novalocal sudo[26156]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:28:02 np0005588919.novalocal python3[26164]: ansible-ansible.builtin.hostname Invoked with name=compute-1 use=systemd
Jan 20 13:28:02 np0005588919.novalocal systemd[1]: Starting Hostname Service...
Jan 20 13:28:02 np0005588919.novalocal systemd[1]: Started Hostname Service.
Jan 20 13:28:02 np0005588919.novalocal systemd-hostnamed[26290]: Changed pretty hostname to 'compute-1'
Jan 20 13:28:02 compute-1 systemd-hostnamed[26290]: Hostname set to <compute-1> (static)
Jan 20 13:28:02 compute-1 NetworkManager[7192]: <info>  [1768915682.9095] hostname: static hostname changed from "np0005588919.novalocal" to "compute-1"
Jan 20 13:28:02 compute-1 systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 20 13:28:02 compute-1 systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 20 13:28:02 compute-1 sudo[26156]: pam_unix(sudo:session): session closed for user root
Jan 20 13:28:03 compute-1 sshd-session[24229]: Connection closed by 38.102.83.114 port 39868
Jan 20 13:28:03 compute-1 sshd-session[24181]: pam_unix(sshd:session): session closed for user zuul
Jan 20 13:28:03 compute-1 systemd[1]: session-7.scope: Deactivated successfully.
Jan 20 13:28:03 compute-1 systemd[1]: session-7.scope: Consumed 2.395s CPU time.
Jan 20 13:28:03 compute-1 systemd-logind[783]: Session 7 logged out. Waiting for processes to exit.
Jan 20 13:28:03 compute-1 systemd-logind[783]: Removed session 7.
Jan 20 13:28:12 compute-1 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 20 13:28:14 compute-1 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 20 13:28:14 compute-1 systemd[1]: Finished man-db-cache-update.service.
Jan 20 13:28:14 compute-1 systemd[1]: man-db-cache-update.service: Consumed 57.700s CPU time.
Jan 20 13:28:14 compute-1 systemd[1]: run-rbd08d54a4cf34a8ab2a2ef74cd4a0100.service: Deactivated successfully.
Jan 20 13:28:32 compute-1 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 20 13:30:39 compute-1 sshd-session[29926]: Invalid user admin from 116.99.171.211 port 56574
Jan 20 13:30:39 compute-1 sshd-session[29926]: Connection closed by invalid user admin 116.99.171.211 port 56574 [preauth]
Jan 20 13:30:50 compute-1 sshd-session[29928]: Invalid user installer from 116.99.171.211 port 49616
Jan 20 13:30:50 compute-1 sshd-session[29928]: Connection closed by invalid user installer 116.99.171.211 port 49616 [preauth]
Jan 20 13:30:53 compute-1 sshd-session[29930]: Invalid user user from 116.99.171.211 port 53404
Jan 20 13:30:54 compute-1 sshd-session[29930]: Connection closed by invalid user user 116.99.171.211 port 53404 [preauth]
Jan 20 13:31:33 compute-1 sshd-session[29932]: Connection closed by authenticating user root 116.99.171.211 port 55440 [preauth]
Jan 20 13:31:33 compute-1 systemd[1]: Starting Cleanup of Temporary Directories...
Jan 20 13:31:33 compute-1 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
Jan 20 13:31:33 compute-1 systemd[1]: Finished Cleanup of Temporary Directories.
Jan 20 13:31:33 compute-1 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully.
Jan 20 13:33:00 compute-1 sshd-session[29939]: Accepted publickey for zuul from 38.102.83.230 port 40378 ssh2: RSA SHA256:r50QbT7bSKscUimrVpe816OyonJCbpigaVSUx3I8hI8
Jan 20 13:33:00 compute-1 systemd-logind[783]: New session 8 of user zuul.
Jan 20 13:33:00 compute-1 systemd[1]: Started Session 8 of User zuul.
Jan 20 13:33:00 compute-1 sshd-session[29939]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 20 13:33:01 compute-1 python3[30015]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 13:33:03 compute-1 sudo[30129]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mizaaamjbarjikigpretgblfxjnznxrk ; /usr/bin/python3'
Jan 20 13:33:03 compute-1 sudo[30129]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:33:03 compute-1 python3[30131]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 20 13:33:03 compute-1 sudo[30129]: pam_unix(sudo:session): session closed for user root
Jan 20 13:33:03 compute-1 sudo[30202]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hbcfcpoybszqxjbyfatuudwutvdmwmqi ; /usr/bin/python3'
Jan 20 13:33:03 compute-1 sudo[30202]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:33:03 compute-1 python3[30204]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1768915982.8691642-34066-233922643037694/source mode=0755 _original_basename=delorean.repo follow=False checksum=0f7c85cc67bf467c48edf98d5acc63e62d808324 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:33:03 compute-1 sudo[30202]: pam_unix(sudo:session): session closed for user root
Jan 20 13:33:03 compute-1 sudo[30228]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-larrmrvljlghafnkfmuhccizyqegqlvx ; /usr/bin/python3'
Jan 20 13:33:03 compute-1 sudo[30228]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:33:03 compute-1 python3[30230]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean-antelope-testing.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 20 13:33:03 compute-1 sudo[30228]: pam_unix(sudo:session): session closed for user root
Jan 20 13:33:04 compute-1 sudo[30301]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ezrxotdegmsoqmaegvuunqddljlulxls ; /usr/bin/python3'
Jan 20 13:33:04 compute-1 sudo[30301]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:33:04 compute-1 python3[30303]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1768915982.8691642-34066-233922643037694/source mode=0755 _original_basename=delorean-antelope-testing.repo follow=False checksum=4ebc56dead962b5d40b8d420dad43b948b84d3fc backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:33:04 compute-1 sudo[30301]: pam_unix(sudo:session): session closed for user root
Jan 20 13:33:04 compute-1 sudo[30327]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-snrhjgtocfxmuusxggspetlktorvgebr ; /usr/bin/python3'
Jan 20 13:33:04 compute-1 sudo[30327]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:33:04 compute-1 python3[30329]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-highavailability.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 20 13:33:04 compute-1 sudo[30327]: pam_unix(sudo:session): session closed for user root
Jan 20 13:33:04 compute-1 sudo[30400]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eaxhnapktjduohrjzcxnfrkdaysucdsm ; /usr/bin/python3'
Jan 20 13:33:04 compute-1 sudo[30400]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:33:04 compute-1 python3[30402]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1768915982.8691642-34066-233922643037694/source mode=0755 _original_basename=repo-setup-centos-highavailability.repo follow=False checksum=55d0f695fd0d8f47cbc3044ce0dcf5f88862490f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:33:04 compute-1 sudo[30400]: pam_unix(sudo:session): session closed for user root
Jan 20 13:33:05 compute-1 sudo[30426]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hovxwwkqqtoozcmgvpjiglzvuiclajjp ; /usr/bin/python3'
Jan 20 13:33:05 compute-1 sudo[30426]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:33:05 compute-1 python3[30428]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-powertools.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 20 13:33:05 compute-1 sudo[30426]: pam_unix(sudo:session): session closed for user root
Jan 20 13:33:05 compute-1 sudo[30499]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gyzluvsqxofmktihrlnocgwhlmjunfcs ; /usr/bin/python3'
Jan 20 13:33:05 compute-1 sudo[30499]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:33:05 compute-1 python3[30501]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1768915982.8691642-34066-233922643037694/source mode=0755 _original_basename=repo-setup-centos-powertools.repo follow=False checksum=4b0cf99aa89c5c5be0151545863a7a7568f67568 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:33:05 compute-1 sudo[30499]: pam_unix(sudo:session): session closed for user root
Jan 20 13:33:05 compute-1 sudo[30525]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oinydcqqifirguwioopvyzakoqiswqmk ; /usr/bin/python3'
Jan 20 13:33:05 compute-1 sudo[30525]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:33:05 compute-1 python3[30527]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-appstream.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 20 13:33:05 compute-1 sudo[30525]: pam_unix(sudo:session): session closed for user root
Jan 20 13:33:06 compute-1 sudo[30598]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lbkgqqymrcstmnoxbmufbjuvrtuncxwi ; /usr/bin/python3'
Jan 20 13:33:06 compute-1 sudo[30598]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:33:06 compute-1 python3[30600]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1768915982.8691642-34066-233922643037694/source mode=0755 _original_basename=repo-setup-centos-appstream.repo follow=False checksum=e89244d2503b2996429dda1857290c1e91e393a1 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:33:06 compute-1 sudo[30598]: pam_unix(sudo:session): session closed for user root
Jan 20 13:33:06 compute-1 sudo[30624]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-otihdgodldvsmjaymqurynnsqztbrhel ; /usr/bin/python3'
Jan 20 13:33:06 compute-1 sudo[30624]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:33:06 compute-1 python3[30626]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-baseos.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 20 13:33:06 compute-1 sudo[30624]: pam_unix(sudo:session): session closed for user root
Jan 20 13:33:06 compute-1 sudo[30697]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zzqbhjjbeziideybhkndytlotdzolbwd ; /usr/bin/python3'
Jan 20 13:33:06 compute-1 sudo[30697]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:33:07 compute-1 python3[30699]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1768915982.8691642-34066-233922643037694/source mode=0755 _original_basename=repo-setup-centos-baseos.repo follow=False checksum=36d926db23a40dbfa5c84b5e4d43eac6fa2301d6 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:33:07 compute-1 sudo[30697]: pam_unix(sudo:session): session closed for user root
Jan 20 13:33:07 compute-1 sudo[30723]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rsfmstqjjalogsyjbpdjqmucmcrfuicf ; /usr/bin/python3'
Jan 20 13:33:07 compute-1 sudo[30723]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:33:07 compute-1 python3[30725]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo.md5 follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 20 13:33:07 compute-1 sudo[30723]: pam_unix(sudo:session): session closed for user root
Jan 20 13:33:07 compute-1 sudo[30796]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-weepkatvezwdrpabkeatvblvzdxtblia ; /usr/bin/python3'
Jan 20 13:33:07 compute-1 sudo[30796]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:33:07 compute-1 python3[30798]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1768915982.8691642-34066-233922643037694/source mode=0755 _original_basename=delorean.repo.md5 follow=False checksum=2583a70b3ee76a9837350b0837bc004a8e52405c backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:33:07 compute-1 sudo[30796]: pam_unix(sudo:session): session closed for user root
Jan 20 13:33:08 compute-1 sshd-session[29937]: Invalid user ubnt from 116.99.171.211 port 39064
Jan 20 13:33:10 compute-1 sshd-session[29937]: Connection closed by invalid user ubnt 116.99.171.211 port 39064 [preauth]
Jan 20 13:33:20 compute-1 python3[30846]: ansible-ansible.legacy.command Invoked with _raw_params=hostname _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 13:34:39 compute-1 sshd-session[30849]: Invalid user config from 116.99.171.211 port 38142
Jan 20 13:34:40 compute-1 sshd-session[30849]: Connection closed by invalid user config 116.99.171.211 port 38142 [preauth]
Jan 20 13:34:51 compute-1 sshd-session[30851]: Invalid user support from 116.99.171.211 port 56500
Jan 20 13:34:55 compute-1 sshd-session[30851]: Connection closed by invalid user support 116.99.171.211 port 56500 [preauth]
Jan 20 13:35:14 compute-1 sshd-session[30853]: Connection closed by authenticating user root 116.99.171.211 port 58980 [preauth]
Jan 20 13:36:07 compute-1 sshd-session[30855]: Invalid user admin from 116.99.171.211 port 44688
Jan 20 13:36:08 compute-1 sshd-session[30855]: Connection closed by invalid user admin 116.99.171.211 port 44688 [preauth]
Jan 20 13:36:19 compute-1 sshd-session[30857]: Connection closed by authenticating user root 116.99.171.211 port 57774 [preauth]
Jan 20 13:36:25 compute-1 sshd-session[30859]: Invalid user system from 116.99.171.211 port 44882
Jan 20 13:36:26 compute-1 sshd-session[30859]: Connection closed by invalid user system 116.99.171.211 port 44882 [preauth]
Jan 20 13:37:19 compute-1 sshd-session[30862]: Invalid user test from 116.99.171.211 port 57772
Jan 20 13:37:20 compute-1 sshd-session[30862]: Connection closed by invalid user test 116.99.171.211 port 57772 [preauth]
Jan 20 13:37:27 compute-1 sshd-session[30864]: Invalid user guest from 116.99.171.211 port 33368
Jan 20 13:37:30 compute-1 sshd-session[30864]: Connection closed by invalid user guest 116.99.171.211 port 33368 [preauth]
Jan 20 13:38:06 compute-1 sshd-session[30866]: Connection closed by authenticating user root 116.99.171.211 port 38732 [preauth]
Jan 20 13:38:20 compute-1 sshd-session[29942]: Received disconnect from 38.102.83.230 port 40378:11: disconnected by user
Jan 20 13:38:20 compute-1 sshd-session[29942]: Disconnected from user zuul 38.102.83.230 port 40378
Jan 20 13:38:20 compute-1 sshd-session[29939]: pam_unix(sshd:session): session closed for user zuul
Jan 20 13:38:20 compute-1 systemd-logind[783]: Session 8 logged out. Waiting for processes to exit.
Jan 20 13:38:20 compute-1 systemd[1]: session-8.scope: Deactivated successfully.
Jan 20 13:38:20 compute-1 systemd[1]: session-8.scope: Consumed 5.699s CPU time.
Jan 20 13:38:20 compute-1 systemd-logind[783]: Removed session 8.
Jan 20 13:39:19 compute-1 sshd-session[30868]: Invalid user admin from 116.99.171.211 port 56706
Jan 20 13:39:19 compute-1 sshd-session[30868]: Connection closed by invalid user admin 116.99.171.211 port 56706 [preauth]
Jan 20 13:39:51 compute-1 sshd-session[30870]: Invalid user admin from 116.99.171.211 port 56936
Jan 20 13:39:52 compute-1 sshd-session[30870]: Connection closed by invalid user admin 116.99.171.211 port 56936 [preauth]
Jan 20 13:39:57 compute-1 sshd-session[30872]: Invalid user admin from 116.99.171.211 port 56962
Jan 20 13:39:57 compute-1 sshd-session[30872]: Connection closed by invalid user admin 116.99.171.211 port 56962 [preauth]
Jan 20 13:40:03 compute-1 sshd-session[30874]: Invalid user admin from 116.99.171.211 port 56978
Jan 20 13:40:04 compute-1 sshd-session[30874]: Connection closed by invalid user admin 116.99.171.211 port 56978 [preauth]
Jan 20 13:40:26 compute-1 sshd-session[30876]: Invalid user admin from 116.99.171.211 port 51832
Jan 20 13:40:28 compute-1 sshd-session[30876]: Connection closed by invalid user admin 116.99.171.211 port 51832 [preauth]
Jan 20 13:41:08 compute-1 sshd-session[30878]: Invalid user admin from 116.99.171.211 port 54912
Jan 20 13:41:08 compute-1 sshd-session[30878]: Connection closed by invalid user admin 116.99.171.211 port 54912 [preauth]
Jan 20 13:41:19 compute-1 sshd-session[30880]: Invalid user user from 116.99.171.211 port 39230
Jan 20 13:41:21 compute-1 sshd-session[30880]: Connection closed by invalid user user 116.99.171.211 port 39230 [preauth]
Jan 20 13:41:28 compute-1 sshd-session[30883]: Invalid user admin from 116.99.171.211 port 57122
Jan 20 13:41:31 compute-1 sshd-session[30883]: Connection closed by invalid user admin 116.99.171.211 port 57122 [preauth]
Jan 20 13:43:26 compute-1 sshd-session[30886]: Connection closed by authenticating user root 116.99.171.211 port 40234 [preauth]
Jan 20 13:44:04 compute-1 sshd-session[30889]: Invalid user support from 116.99.171.211 port 52972
Jan 20 13:44:06 compute-1 sshd-session[30891]: Connection closed by authenticating user ftp 116.99.171.211 port 38726 [preauth]
Jan 20 13:44:07 compute-1 sshd-session[30889]: Connection closed by invalid user support 116.99.171.211 port 52972 [preauth]
Jan 20 13:44:49 compute-1 sshd-session[30893]: Connection closed by authenticating user root 116.99.171.211 port 40034 [preauth]
Jan 20 13:44:56 compute-1 sshd-session[30895]: Connection closed by authenticating user operator 116.99.171.211 port 49584 [preauth]
Jan 20 13:45:43 compute-1 sshd-session[30897]: Accepted publickey for zuul from 192.168.122.30 port 34518 ssh2: ECDSA SHA256:Yw0kyD5N4lqNgr1J3b5cYIIxKFrTRY8zW6kk+n6imz4
Jan 20 13:45:43 compute-1 systemd-logind[783]: New session 9 of user zuul.
Jan 20 13:45:43 compute-1 systemd[1]: Started Session 9 of User zuul.
Jan 20 13:45:43 compute-1 sshd-session[30897]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 20 13:45:45 compute-1 python3.9[31050]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 13:45:46 compute-1 sudo[31229]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yjzojsnxdmukhiifkodojkeavpykjitd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768916745.797542-57-88418379594171/AnsiballZ_command.py'
Jan 20 13:45:46 compute-1 sudo[31229]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:45:46 compute-1 python3.9[31231]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail
                                            pushd /var/tmp
                                            curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz
                                            pushd repo-setup-main
                                            python3 -m venv ./venv
                                            PBR_VERSION=0.0.0 ./venv/bin/pip install ./
                                            ./venv/bin/repo-setup current-podified -b antelope
                                            popd
                                            rm -rf repo-setup-main
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 13:45:52 compute-1 sshd-session[31246]: Invalid user 1234 from 116.99.171.211 port 43072
Jan 20 13:45:52 compute-1 sshd-session[31246]: Connection closed by invalid user 1234 116.99.171.211 port 43072 [preauth]
Jan 20 13:45:53 compute-1 sudo[31229]: pam_unix(sudo:session): session closed for user root
Jan 20 13:45:54 compute-1 sshd-session[30900]: Connection closed by 192.168.122.30 port 34518
Jan 20 13:45:54 compute-1 sshd-session[30897]: pam_unix(sshd:session): session closed for user zuul
Jan 20 13:45:54 compute-1 systemd[1]: session-9.scope: Deactivated successfully.
Jan 20 13:45:54 compute-1 systemd[1]: session-9.scope: Consumed 8.316s CPU time.
Jan 20 13:45:54 compute-1 systemd-logind[783]: Session 9 logged out. Waiting for processes to exit.
Jan 20 13:45:54 compute-1 systemd-logind[783]: Removed session 9.
Jan 20 13:46:10 compute-1 sshd-session[31293]: Accepted publickey for zuul from 192.168.122.30 port 48042 ssh2: ECDSA SHA256:Yw0kyD5N4lqNgr1J3b5cYIIxKFrTRY8zW6kk+n6imz4
Jan 20 13:46:10 compute-1 systemd-logind[783]: New session 10 of user zuul.
Jan 20 13:46:10 compute-1 systemd[1]: Started Session 10 of User zuul.
Jan 20 13:46:10 compute-1 sshd-session[31293]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 20 13:46:11 compute-1 python3.9[31446]: ansible-ansible.legacy.ping Invoked with data=pong
Jan 20 13:46:12 compute-1 python3.9[31620]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 13:46:13 compute-1 sudo[31770]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-blkhyuuicqgpxfybgurbtwlylutpqqgj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768916773.235243-94-26396854004870/AnsiballZ_command.py'
Jan 20 13:46:13 compute-1 sudo[31770]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:46:13 compute-1 python3.9[31772]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 13:46:13 compute-1 sudo[31770]: pam_unix(sudo:session): session closed for user root
Jan 20 13:46:14 compute-1 sudo[31923]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uvjkcfrqmfunaavgntqwrtgbjekliapk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768916774.3340526-130-230121523016713/AnsiballZ_stat.py'
Jan 20 13:46:14 compute-1 sudo[31923]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:46:14 compute-1 python3.9[31925]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 20 13:46:14 compute-1 sudo[31923]: pam_unix(sudo:session): session closed for user root
Jan 20 13:46:15 compute-1 sudo[32076]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ymumigvtjdhuywmadumzjpaxtjhnoalc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768916775.1526153-154-264760680687259/AnsiballZ_file.py'
Jan 20 13:46:15 compute-1 sudo[32076]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:46:15 compute-1 python3.9[32078]: ansible-ansible.builtin.file Invoked with mode=755 path=/etc/ansible/facts.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:46:15 compute-1 sudo[32076]: pam_unix(sudo:session): session closed for user root
Jan 20 13:46:16 compute-1 sudo[32229]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-otlceemonxwibgabsybmdvinsishdbsv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768916776.5429342-178-141990721373655/AnsiballZ_stat.py'
Jan 20 13:46:16 compute-1 sudo[32229]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:46:17 compute-1 python3.9[32231]: ansible-ansible.legacy.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 13:46:17 compute-1 sudo[32229]: pam_unix(sudo:session): session closed for user root
Jan 20 13:46:17 compute-1 sudo[32352]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rpnegbexwnbkhchbuovfcmakesdnfcqf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768916776.5429342-178-141990721373655/AnsiballZ_copy.py'
Jan 20 13:46:17 compute-1 sudo[32352]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:46:17 compute-1 python3.9[32354]: ansible-ansible.legacy.copy Invoked with dest=/etc/ansible/facts.d/bootc.fact mode=755 src=/home/zuul/.ansible/tmp/ansible-tmp-1768916776.5429342-178-141990721373655/.source.fact _original_basename=bootc.fact follow=False checksum=eb4122ce7fc50a38407beb511c4ff8c178005b12 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:46:17 compute-1 sudo[32352]: pam_unix(sudo:session): session closed for user root
Jan 20 13:46:18 compute-1 sudo[32504]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xttpmvxqoyjanydnwhuaryvhegdwzqoa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768916778.122203-223-81554695457348/AnsiballZ_setup.py'
Jan 20 13:46:18 compute-1 sudo[32504]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:46:18 compute-1 python3.9[32506]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 13:46:18 compute-1 sudo[32504]: pam_unix(sudo:session): session closed for user root
Jan 20 13:46:19 compute-1 sshd-session[32050]: Connection closed by authenticating user root 116.99.171.211 port 42198 [preauth]
Jan 20 13:46:19 compute-1 sudo[32660]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tonwcxokqztslmlvcmovnmjxjvvdnqyx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768916779.214463-247-104631193672306/AnsiballZ_file.py'
Jan 20 13:46:19 compute-1 sudo[32660]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:46:19 compute-1 python3.9[32662]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 13:46:19 compute-1 sudo[32660]: pam_unix(sudo:session): session closed for user root
Jan 20 13:46:20 compute-1 sudo[32812]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rutnxkbdzdjznnqhevehwahztyucazkt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768916780.0743492-274-273628631785524/AnsiballZ_file.py'
Jan 20 13:46:20 compute-1 sudo[32812]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:46:20 compute-1 python3.9[32814]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 13:46:20 compute-1 sudo[32812]: pam_unix(sudo:session): session closed for user root
Jan 20 13:46:21 compute-1 python3.9[32964]: ansible-ansible.builtin.service_facts Invoked
Jan 20 13:46:26 compute-1 sshd-session[33090]: Invalid user nikita from 116.99.171.211 port 47082
Jan 20 13:46:27 compute-1 sshd-session[33090]: Connection closed by invalid user nikita 116.99.171.211 port 47082 [preauth]
Jan 20 13:46:27 compute-1 python3.9[33220]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:46:28 compute-1 python3.9[33370]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 13:46:30 compute-1 python3.9[33524]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 13:46:31 compute-1 sudo[33680]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gelodsdthcqryebfekkotfcthmixtoga ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768916790.8108516-418-77714171841874/AnsiballZ_setup.py'
Jan 20 13:46:31 compute-1 sudo[33680]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:46:31 compute-1 python3.9[33682]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 20 13:46:31 compute-1 sudo[33680]: pam_unix(sudo:session): session closed for user root
Jan 20 13:46:32 compute-1 sudo[33764]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eomeinbvbbmxgsaposfkowgelwqubkuo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768916790.8108516-418-77714171841874/AnsiballZ_dnf.py'
Jan 20 13:46:32 compute-1 sudo[33764]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:46:32 compute-1 python3.9[33766]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 20 13:46:47 compute-1 irqbalance[778]: Cannot change IRQ 26 affinity: Operation not permitted
Jan 20 13:46:47 compute-1 irqbalance[778]: IRQ 26 affinity is now unmanaged
Jan 20 13:47:02 compute-1 sshd-session[33840]: Connection closed by authenticating user sync 116.99.171.211 port 54194 [preauth]
Jan 20 13:47:22 compute-1 sshd-session[34031]: Invalid user admin from 116.99.171.211 port 40686
Jan 20 13:47:23 compute-1 sshd-session[34031]: Connection closed by invalid user admin 116.99.171.211 port 40686 [preauth]
Jan 20 13:47:32 compute-1 systemd[1]: Reloading.
Jan 20 13:47:32 compute-1 systemd-rc-local-generator[34086]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 13:47:32 compute-1 systemd[1]: Listening on Device-mapper event daemon FIFOs.
Jan 20 13:47:33 compute-1 systemd[1]: Reloading.
Jan 20 13:47:33 compute-1 systemd-rc-local-generator[34133]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 13:47:33 compute-1 systemd[1]: Starting dnf makecache...
Jan 20 13:47:33 compute-1 systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
Jan 20 13:47:33 compute-1 systemd[1]: Finished Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
Jan 20 13:47:33 compute-1 systemd[1]: Reloading.
Jan 20 13:47:33 compute-1 systemd-rc-local-generator[34169]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 13:47:33 compute-1 dnf[34141]: Failed determining last makecache time.
Jan 20 13:47:33 compute-1 dnf[34141]: delorean-openstack-barbican-42b4c41831408a8e323 118 kB/s | 3.0 kB     00:00
Jan 20 13:47:33 compute-1 systemd[1]: Listening on LVM2 poll daemon socket.
Jan 20 13:47:33 compute-1 dnf[34141]: delorean-python-glean-10df0bd91b9bc5c9fd9cc02d7 135 kB/s | 3.0 kB     00:00
Jan 20 13:47:33 compute-1 dnf[34141]: delorean-openstack-cinder-1c00d6490d88e436f26ef 143 kB/s | 3.0 kB     00:00
Jan 20 13:47:33 compute-1 dnf[34141]: delorean-python-stevedore-c4acc5639fd2329372142 155 kB/s | 3.0 kB     00:00
Jan 20 13:47:33 compute-1 dnf[34141]: delorean-python-cloudkitty-tests-tempest-2c80f8 159 kB/s | 3.0 kB     00:00
Jan 20 13:47:33 compute-1 dnf[34141]: delorean-os-refresh-config-9bfc52b5049be2d8de61 155 kB/s | 3.0 kB     00:00
Jan 20 13:47:33 compute-1 dnf[34141]: delorean-openstack-nova-6f8decf0b4f1aa2e96292b6 166 kB/s | 3.0 kB     00:00
Jan 20 13:47:33 compute-1 dnf[34141]: delorean-python-designate-tests-tempest-347fdbc 160 kB/s | 3.0 kB     00:00
Jan 20 13:47:34 compute-1 dbus-broker-launch[761]: Noticed file-system modification, trigger reload.
Jan 20 13:47:34 compute-1 dbus-broker-launch[761]: Noticed file-system modification, trigger reload.
Jan 20 13:47:34 compute-1 dnf[34141]: delorean-openstack-glance-1fd12c29b339f30fe823e 166 kB/s | 3.0 kB     00:00
Jan 20 13:47:34 compute-1 dbus-broker-launch[761]: Noticed file-system modification, trigger reload.
Jan 20 13:47:34 compute-1 dnf[34141]: delorean-openstack-keystone-e4b40af0ae3698fbbbb 162 kB/s | 3.0 kB     00:00
Jan 20 13:47:34 compute-1 dnf[34141]: delorean-openstack-manila-3c01b7181572c95dac462 152 kB/s | 3.0 kB     00:00
Jan 20 13:47:34 compute-1 dnf[34141]: delorean-python-whitebox-neutron-tests-tempest- 137 kB/s | 3.0 kB     00:00
Jan 20 13:47:34 compute-1 dnf[34141]: delorean-openstack-octavia-ba397f07a7331190208c 163 kB/s | 3.0 kB     00:00
Jan 20 13:47:34 compute-1 dnf[34141]: delorean-openstack-watcher-c014f81a8647287f6dcc 127 kB/s | 3.0 kB     00:00
Jan 20 13:47:34 compute-1 dnf[34141]: delorean-ansible-config_template-5ccaa22121a7ff 154 kB/s | 3.0 kB     00:00
Jan 20 13:47:34 compute-1 dnf[34141]: delorean-puppet-ceph-7352068d7b8c84ded636ab3158 162 kB/s | 3.0 kB     00:00
Jan 20 13:47:34 compute-1 dnf[34141]: delorean-openstack-swift-dc98a8463506ac520c469a 203 kB/s | 3.0 kB     00:00
Jan 20 13:47:34 compute-1 dnf[34141]: delorean-python-tempestconf-8515371b7cceebd4282 188 kB/s | 3.0 kB     00:00
Jan 20 13:47:34 compute-1 dnf[34141]: delorean-openstack-heat-ui-013accbfd179753bc3f0 201 kB/s | 3.0 kB     00:00
Jan 20 13:47:34 compute-1 dnf[34141]: CentOS Stream 9 - BaseOS                         45 kB/s | 6.4 kB     00:00
Jan 20 13:47:34 compute-1 dnf[34141]: CentOS Stream 9 - AppStream                      60 kB/s | 6.8 kB     00:00
Jan 20 13:47:34 compute-1 dnf[34141]: CentOS Stream 9 - CRB                            56 kB/s | 6.3 kB     00:00
Jan 20 13:47:35 compute-1 dnf[34141]: CentOS Stream 9 - Extras packages                32 kB/s | 7.3 kB     00:00
Jan 20 13:47:35 compute-1 dnf[34141]: dlrn-antelope-testing                            92 kB/s | 3.0 kB     00:00
Jan 20 13:47:35 compute-1 dnf[34141]: dlrn-antelope-build-deps                        130 kB/s | 3.0 kB     00:00
Jan 20 13:47:35 compute-1 dnf[34141]: centos9-rabbitmq                                108 kB/s | 3.0 kB     00:00
Jan 20 13:47:35 compute-1 dnf[34141]: centos9-storage                                 146 kB/s | 3.0 kB     00:00
Jan 20 13:47:35 compute-1 dnf[34141]: centos9-opstools                                110 kB/s | 3.0 kB     00:00
Jan 20 13:47:35 compute-1 dnf[34141]: NFV SIG OpenvSwitch                             122 kB/s | 3.0 kB     00:00
Jan 20 13:47:35 compute-1 dnf[34141]: repo-setup-centos-appstream                     176 kB/s | 4.4 kB     00:00
Jan 20 13:47:35 compute-1 dnf[34141]: repo-setup-centos-baseos                        167 kB/s | 3.9 kB     00:00
Jan 20 13:47:35 compute-1 dnf[34141]: repo-setup-centos-highavailability              163 kB/s | 3.9 kB     00:00
Jan 20 13:47:35 compute-1 dnf[34141]: repo-setup-centos-powertools                     32 kB/s | 4.3 kB     00:00
Jan 20 13:47:36 compute-1 dnf[34141]: Extra Packages for Enterprise Linux 9 - x86_64   27 kB/s |  32 kB     00:01
Jan 20 13:47:37 compute-1 dnf[34141]: Extra Packages for Enterprise Linux 9 - x86_64   19 MB/s |  20 MB     00:01
Jan 20 13:47:46 compute-1 dnf[34141]: Metadata cache created.
Jan 20 13:47:46 compute-1 systemd[1]: dnf-makecache.service: Deactivated successfully.
Jan 20 13:47:46 compute-1 systemd[1]: Finished dnf makecache.
Jan 20 13:47:46 compute-1 systemd[1]: dnf-makecache.service: Consumed 10.014s CPU time.
Jan 20 13:48:37 compute-1 kernel: SELinux:  Converting 2724 SID table entries...
Jan 20 13:48:37 compute-1 kernel: SELinux:  policy capability network_peer_controls=1
Jan 20 13:48:37 compute-1 kernel: SELinux:  policy capability open_perms=1
Jan 20 13:48:37 compute-1 kernel: SELinux:  policy capability extended_socket_class=1
Jan 20 13:48:37 compute-1 kernel: SELinux:  policy capability always_check_network=0
Jan 20 13:48:37 compute-1 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 20 13:48:37 compute-1 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 20 13:48:37 compute-1 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 20 13:48:37 compute-1 dbus-broker-launch[771]: avc:  op=load_policy lsm=selinux seqno=6 res=1
Jan 20 13:48:37 compute-1 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 20 13:48:37 compute-1 systemd[1]: Starting man-db-cache-update.service...
Jan 20 13:48:37 compute-1 systemd[1]: Reloading.
Jan 20 13:48:37 compute-1 systemd-rc-local-generator[34553]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 13:48:38 compute-1 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 20 13:48:38 compute-1 sudo[33764]: pam_unix(sudo:session): session closed for user root
Jan 20 13:48:39 compute-1 sudo[35470]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-isfqvssgjntpmrojpnpvetppmeogwnet ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768916918.7555501-455-175897321215536/AnsiballZ_command.py'
Jan 20 13:48:39 compute-1 sudo[35470]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:48:39 compute-1 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 20 13:48:39 compute-1 systemd[1]: Finished man-db-cache-update.service.
Jan 20 13:48:39 compute-1 systemd[1]: man-db-cache-update.service: Consumed 1.749s CPU time.
Jan 20 13:48:39 compute-1 systemd[1]: run-r6a5fbdbfa27648a8b86a5be943ed0c38.service: Deactivated successfully.
Jan 20 13:48:39 compute-1 python3.9[35472]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 13:48:40 compute-1 sudo[35470]: pam_unix(sudo:session): session closed for user root
Jan 20 13:48:41 compute-1 sudo[35752]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xkgtpvgyosjlrzjebeidipddtyjagsuk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768916920.5331695-478-33835208704783/AnsiballZ_selinux.py'
Jan 20 13:48:41 compute-1 sudo[35752]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:48:41 compute-1 python3.9[35754]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Jan 20 13:48:41 compute-1 sudo[35752]: pam_unix(sudo:session): session closed for user root
Jan 20 13:48:42 compute-1 sudo[35904]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mtjbqdubhkdwqydypugsezzbrpfzadqc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768916922.0710666-512-142266531729939/AnsiballZ_command.py'
Jan 20 13:48:42 compute-1 sudo[35904]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:48:42 compute-1 python3.9[35906]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Jan 20 13:48:43 compute-1 sudo[35904]: pam_unix(sudo:session): session closed for user root
Jan 20 13:48:44 compute-1 sudo[36059]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pyfwcxwrmnostauwevovgpxpdgicpuyy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768916924.0816846-535-126523383495108/AnsiballZ_file.py'
Jan 20 13:48:44 compute-1 sudo[36059]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:48:46 compute-1 sshd-session[35908]: Invalid user ftpuser from 116.99.171.211 port 37446
Jan 20 13:48:46 compute-1 python3.9[36061]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:48:46 compute-1 sudo[36059]: pam_unix(sudo:session): session closed for user root
Jan 20 13:48:46 compute-1 sshd-session[35908]: Connection closed by invalid user ftpuser 116.99.171.211 port 37446 [preauth]
Jan 20 13:48:47 compute-1 sudo[36211]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dkciamgcuidkfsyuxfhkxhktvpdlcykd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768916926.6503785-559-73235517887971/AnsiballZ_mount.py'
Jan 20 13:48:47 compute-1 sudo[36211]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:48:47 compute-1 python3.9[36213]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Jan 20 13:48:47 compute-1 sudo[36211]: pam_unix(sudo:session): session closed for user root
Jan 20 13:48:48 compute-1 sudo[36363]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-odcwnuzugsrvbnarpjmpmrsdvpgqjdzc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768916928.437016-643-26728934565132/AnsiballZ_file.py'
Jan 20 13:48:48 compute-1 sudo[36363]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:48:52 compute-1 python3.9[36365]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 13:48:52 compute-1 sudo[36363]: pam_unix(sudo:session): session closed for user root
Jan 20 13:48:52 compute-1 sshd-session[36366]: Connection closed by authenticating user root 116.99.171.211 port 42736 [preauth]
Jan 20 13:48:53 compute-1 sudo[36517]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dixmlacsamlzajzrtuqsambutffedpga ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768916932.69029-667-212216292389921/AnsiballZ_stat.py'
Jan 20 13:48:53 compute-1 sudo[36517]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:48:53 compute-1 python3.9[36519]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 13:48:53 compute-1 sudo[36517]: pam_unix(sudo:session): session closed for user root
Jan 20 13:48:53 compute-1 sudo[36640]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-luuutwzjjmvjkyorfhzlylvvkelstjlj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768916932.69029-667-212216292389921/AnsiballZ_copy.py'
Jan 20 13:48:53 compute-1 sudo[36640]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:48:55 compute-1 python3.9[36642]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768916932.69029-667-212216292389921/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=a9ac548cf1fa241f1d1335913ca73d2a10501b24 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:48:55 compute-1 sudo[36640]: pam_unix(sudo:session): session closed for user root
Jan 20 13:48:58 compute-1 sudo[36792]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lheebvhlfrmrvcnvihemjgozyqaidzjs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768916937.7153826-739-171278317825940/AnsiballZ_stat.py'
Jan 20 13:48:58 compute-1 sudo[36792]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:48:58 compute-1 python3.9[36794]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 20 13:48:58 compute-1 sudo[36792]: pam_unix(sudo:session): session closed for user root
Jan 20 13:48:58 compute-1 sudo[36944]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vpzobjtgvsgxhqpmqmimzxbccvqyzgvc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768916938.5110207-763-95271513879589/AnsiballZ_command.py'
Jan 20 13:48:58 compute-1 sudo[36944]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:48:59 compute-1 python3.9[36946]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/vgimportdevices --all _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 13:48:59 compute-1 sudo[36944]: pam_unix(sudo:session): session closed for user root
Jan 20 13:48:59 compute-1 sudo[37097]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cyycjeefodsmuacoqemqjvpqoquyzuno ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768916939.361513-787-747656878502/AnsiballZ_file.py'
Jan 20 13:48:59 compute-1 sudo[37097]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:48:59 compute-1 python3.9[37099]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/lvm/devices/system.devices state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:48:59 compute-1 sudo[37097]: pam_unix(sudo:session): session closed for user root
Jan 20 13:49:00 compute-1 sudo[37249]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-feaqudkyhlfrotsjqisjcbryirtaoqbu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768916940.4223588-820-146764542618363/AnsiballZ_getent.py'
Jan 20 13:49:00 compute-1 sudo[37249]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:49:01 compute-1 python3.9[37251]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Jan 20 13:49:01 compute-1 sudo[37249]: pam_unix(sudo:session): session closed for user root
Jan 20 13:49:01 compute-1 rsyslogd[1002]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 20 13:49:01 compute-1 sudo[37403]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qrjpegkelbidaclrtstsfhublshyurri ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768916941.3323884-844-273894421898865/AnsiballZ_group.py'
Jan 20 13:49:01 compute-1 sudo[37403]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:49:02 compute-1 python3.9[37405]: ansible-ansible.builtin.group Invoked with gid=107 name=qemu state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 20 13:49:02 compute-1 groupadd[37406]: group added to /etc/group: name=qemu, GID=107
Jan 20 13:49:02 compute-1 groupadd[37406]: group added to /etc/gshadow: name=qemu
Jan 20 13:49:02 compute-1 groupadd[37406]: new group: name=qemu, GID=107
Jan 20 13:49:02 compute-1 sudo[37403]: pam_unix(sudo:session): session closed for user root
Jan 20 13:49:03 compute-1 sudo[37561]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mwxnvbnfanrhwdubrmgkhapzkqdijljs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768916942.4534717-868-108065613689432/AnsiballZ_user.py'
Jan 20 13:49:03 compute-1 sudo[37561]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:49:03 compute-1 python3.9[37563]: ansible-ansible.builtin.user Invoked with comment=qemu user group=qemu groups=[''] name=qemu shell=/sbin/nologin state=present uid=107 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-1 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Jan 20 13:49:03 compute-1 useradd[37565]: new user: name=qemu, UID=107, GID=107, home=/home/qemu, shell=/sbin/nologin, from=/dev/pts/0
Jan 20 13:49:03 compute-1 sudo[37561]: pam_unix(sudo:session): session closed for user root
Jan 20 13:49:04 compute-1 sudo[37721]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-itiixvtongopxefnyfqptunxamgvshyf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768916943.7156167-892-69162519944417/AnsiballZ_getent.py'
Jan 20 13:49:04 compute-1 sudo[37721]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:49:04 compute-1 python3.9[37723]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Jan 20 13:49:04 compute-1 sudo[37721]: pam_unix(sudo:session): session closed for user root
Jan 20 13:49:04 compute-1 sudo[37874]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fwhuajtbgtgytsurvcquptygqaqmtijz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768916944.5631475-916-123379187213693/AnsiballZ_group.py'
Jan 20 13:49:04 compute-1 sudo[37874]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:49:05 compute-1 python3.9[37876]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 20 13:49:05 compute-1 groupadd[37877]: group added to /etc/group: name=hugetlbfs, GID=42477
Jan 20 13:49:05 compute-1 groupadd[37877]: group added to /etc/gshadow: name=hugetlbfs
Jan 20 13:49:05 compute-1 groupadd[37877]: new group: name=hugetlbfs, GID=42477
Jan 20 13:49:05 compute-1 sudo[37874]: pam_unix(sudo:session): session closed for user root
Jan 20 13:49:05 compute-1 sudo[38032]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-innoyncjzuvjmgbckfmvlojbcuoxwzam ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768916945.5129895-943-192028047994157/AnsiballZ_file.py'
Jan 20 13:49:05 compute-1 sudo[38032]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:49:06 compute-1 python3.9[38034]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Jan 20 13:49:06 compute-1 sudo[38032]: pam_unix(sudo:session): session closed for user root
Jan 20 13:49:06 compute-1 sudo[38184]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kjmlkvcuvfhtdujrlhkacafcyrfbnotd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768916946.5802538-976-259240880564941/AnsiballZ_dnf.py'
Jan 20 13:49:06 compute-1 sudo[38184]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:49:07 compute-1 python3.9[38186]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 20 13:49:08 compute-1 sudo[38184]: pam_unix(sudo:session): session closed for user root
Jan 20 13:49:09 compute-1 sudo[38337]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jhlagmvdqqnofululidrglgiiflxrdir ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768916949.2049155-1001-25535098537669/AnsiballZ_file.py'
Jan 20 13:49:09 compute-1 sudo[38337]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:49:09 compute-1 python3.9[38339]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 13:49:09 compute-1 sudo[38337]: pam_unix(sudo:session): session closed for user root
Jan 20 13:49:10 compute-1 sudo[38489]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dkhjnykglmkcgdvloxsirkbnhulpcveh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768916950.0478392-1025-140779314911048/AnsiballZ_stat.py'
Jan 20 13:49:10 compute-1 sudo[38489]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:49:10 compute-1 python3.9[38491]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 13:49:10 compute-1 sudo[38489]: pam_unix(sudo:session): session closed for user root
Jan 20 13:49:10 compute-1 sudo[38612]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-knbgkfxlciafkzrnhxptntwlzuwmbrlh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768916950.0478392-1025-140779314911048/AnsiballZ_copy.py'
Jan 20 13:49:10 compute-1 sudo[38612]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:49:11 compute-1 python3.9[38614]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1768916950.0478392-1025-140779314911048/.source.conf follow=False _original_basename=edpm-modprobe.conf.j2 checksum=8021efe01721d8fa8cab46b95c00ec1be6dbb9d0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 20 13:49:11 compute-1 sudo[38612]: pam_unix(sudo:session): session closed for user root
Jan 20 13:49:12 compute-1 sudo[38764]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-drkmjofsnguzkksrrmihmafsoforxkmh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768916951.3876853-1069-263632915473835/AnsiballZ_systemd.py'
Jan 20 13:49:12 compute-1 sudo[38764]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:49:12 compute-1 python3.9[38766]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 20 13:49:12 compute-1 systemd[1]: Starting Load Kernel Modules...
Jan 20 13:49:12 compute-1 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 20 13:49:12 compute-1 kernel: Bridge firewalling registered
Jan 20 13:49:12 compute-1 systemd-modules-load[38770]: Inserted module 'br_netfilter'
Jan 20 13:49:12 compute-1 systemd[1]: Finished Load Kernel Modules.
Jan 20 13:49:12 compute-1 sudo[38764]: pam_unix(sudo:session): session closed for user root
Jan 20 13:49:13 compute-1 sudo[38924]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-prhdbglwzlbepcmuvpgtnealkspqniec ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768916952.7371955-1093-149557128141500/AnsiballZ_stat.py'
Jan 20 13:49:13 compute-1 sudo[38924]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:49:13 compute-1 python3.9[38926]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 13:49:13 compute-1 sudo[38924]: pam_unix(sudo:session): session closed for user root
Jan 20 13:49:14 compute-1 sudo[39047]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-goumvhvcertmgshdxxsjggnygudrqyve ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768916952.7371955-1093-149557128141500/AnsiballZ_copy.py'
Jan 20 13:49:14 compute-1 sudo[39047]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:49:14 compute-1 python3.9[39049]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysctl.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1768916952.7371955-1093-149557128141500/.source.conf follow=False _original_basename=edpm-sysctl.conf.j2 checksum=2a366439721b855adcfe4d7f152babb68596a007 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 20 13:49:14 compute-1 sudo[39047]: pam_unix(sudo:session): session closed for user root
Jan 20 13:49:15 compute-1 sudo[39199]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cbbdhaawnixasvanddxpkudevmvfzpkl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768916955.4290144-1147-10542823439508/AnsiballZ_dnf.py'
Jan 20 13:49:15 compute-1 sudo[39199]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:49:16 compute-1 python3.9[39201]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 20 13:49:19 compute-1 dbus-broker-launch[761]: Noticed file-system modification, trigger reload.
Jan 20 13:49:19 compute-1 dbus-broker-launch[761]: Noticed file-system modification, trigger reload.
Jan 20 13:49:19 compute-1 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 20 13:49:19 compute-1 systemd[1]: Starting man-db-cache-update.service...
Jan 20 13:49:19 compute-1 systemd[1]: Reloading.
Jan 20 13:49:20 compute-1 systemd-rc-local-generator[39266]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 13:49:20 compute-1 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 20 13:49:20 compute-1 sudo[39199]: pam_unix(sudo:session): session closed for user root
Jan 20 13:49:21 compute-1 python3.9[40737]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 20 13:49:22 compute-1 sshd-session[39374]: Invalid user username from 116.99.171.211 port 59170
Jan 20 13:49:22 compute-1 sshd-session[39374]: Connection closed by invalid user username 116.99.171.211 port 59170 [preauth]
Jan 20 13:49:22 compute-1 python3.9[41593]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Jan 20 13:49:23 compute-1 python3.9[42291]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 20 13:49:24 compute-1 sudo[43142]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cdxmhrsssxxxwvksvqnplgecydeivlit ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768916963.859883-1264-279799992936744/AnsiballZ_command.py'
Jan 20 13:49:24 compute-1 sudo[43142]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:49:24 compute-1 python3.9[43158]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/tuned-adm profile throughput-performance _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 13:49:24 compute-1 systemd[1]: Starting Dynamic System Tuning Daemon...
Jan 20 13:49:24 compute-1 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 20 13:49:24 compute-1 systemd[1]: Finished man-db-cache-update.service.
Jan 20 13:49:24 compute-1 systemd[1]: man-db-cache-update.service: Consumed 5.987s CPU time.
Jan 20 13:49:24 compute-1 systemd[1]: run-r4c36d9d280bc45419937a582094d62d4.service: Deactivated successfully.
Jan 20 13:49:25 compute-1 systemd[1]: Starting Authorization Manager...
Jan 20 13:49:25 compute-1 systemd[1]: Started Dynamic System Tuning Daemon.
Jan 20 13:49:25 compute-1 polkitd[43590]: Started polkitd version 0.117
Jan 20 13:49:25 compute-1 polkitd[43590]: Loading rules from directory /etc/polkit-1/rules.d
Jan 20 13:49:25 compute-1 polkitd[43590]: Loading rules from directory /usr/share/polkit-1/rules.d
Jan 20 13:49:25 compute-1 polkitd[43590]: Finished loading, compiling and executing 2 rules
Jan 20 13:49:25 compute-1 systemd[1]: Started Authorization Manager.
Jan 20 13:49:25 compute-1 polkitd[43590]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Jan 20 13:49:25 compute-1 sudo[43142]: pam_unix(sudo:session): session closed for user root
Jan 20 13:49:25 compute-1 sudo[43758]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fjltxzmzxiskmlxrrmwgjtfsyhsjpwja ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768916965.593615-1291-114154708629698/AnsiballZ_systemd.py'
Jan 20 13:49:25 compute-1 sudo[43758]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:49:26 compute-1 python3.9[43760]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 20 13:49:26 compute-1 systemd[1]: Stopping Dynamic System Tuning Daemon...
Jan 20 13:49:26 compute-1 systemd[1]: tuned.service: Deactivated successfully.
Jan 20 13:49:26 compute-1 systemd[1]: Stopped Dynamic System Tuning Daemon.
Jan 20 13:49:26 compute-1 systemd[1]: Starting Dynamic System Tuning Daemon...
Jan 20 13:49:26 compute-1 systemd[1]: Started Dynamic System Tuning Daemon.
Jan 20 13:49:26 compute-1 sudo[43758]: pam_unix(sudo:session): session closed for user root
Jan 20 13:49:27 compute-1 python3.9[43921]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Jan 20 13:49:30 compute-1 sudo[44071]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hnmwdvlwlcrrkbcfurlxlkhklcrydcmc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768916970.5035412-1462-177317375851803/AnsiballZ_systemd.py'
Jan 20 13:49:30 compute-1 sudo[44071]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:49:31 compute-1 python3.9[44073]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 20 13:49:31 compute-1 systemd[1]: Reloading.
Jan 20 13:49:31 compute-1 systemd-rc-local-generator[44098]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 13:49:31 compute-1 sudo[44071]: pam_unix(sudo:session): session closed for user root
Jan 20 13:49:31 compute-1 sudo[44260]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zkcsvlqvowlobckgridldmmoqymjoveu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768916971.548891-1462-202437464397895/AnsiballZ_systemd.py'
Jan 20 13:49:31 compute-1 sudo[44260]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:49:32 compute-1 python3.9[44262]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 20 13:49:32 compute-1 systemd[1]: Reloading.
Jan 20 13:49:32 compute-1 systemd-rc-local-generator[44291]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 13:49:32 compute-1 sudo[44260]: pam_unix(sudo:session): session closed for user root
Jan 20 13:49:33 compute-1 sudo[44450]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lgzwikzqpshqzqfpnnqxncwtvegwpgui ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768916972.809073-1510-269238340210299/AnsiballZ_command.py'
Jan 20 13:49:33 compute-1 sudo[44450]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:49:33 compute-1 python3.9[44452]: ansible-ansible.legacy.command Invoked with _raw_params=mkswap "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 13:49:33 compute-1 sudo[44450]: pam_unix(sudo:session): session closed for user root
Jan 20 13:49:33 compute-1 sudo[44603]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-swmpgzfxcjmxyblxibxiivnotqqkufbr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768916973.6072943-1534-109879696697754/AnsiballZ_command.py'
Jan 20 13:49:33 compute-1 sudo[44603]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:49:34 compute-1 python3.9[44605]: ansible-ansible.legacy.command Invoked with _raw_params=swapon "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 13:49:34 compute-1 kernel: Adding 1048572k swap on /swap.  Priority:-2 extents:1 across:1048572k 
Jan 20 13:49:34 compute-1 sudo[44603]: pam_unix(sudo:session): session closed for user root
Jan 20 13:49:34 compute-1 sudo[44756]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rxmlwglegicnpjmmozruwvawcusdbdkq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768916974.4020622-1558-63619608401225/AnsiballZ_command.py'
Jan 20 13:49:34 compute-1 sudo[44756]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:49:35 compute-1 python3.9[44758]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/update-ca-trust _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 13:49:36 compute-1 sudo[44756]: pam_unix(sudo:session): session closed for user root
Jan 20 13:49:37 compute-1 sudo[44918]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jghciulrbylbmruhlipvuwcxluhqjmzj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768916976.8143885-1582-123422750261003/AnsiballZ_command.py'
Jan 20 13:49:37 compute-1 sudo[44918]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:49:37 compute-1 python3.9[44920]: ansible-ansible.legacy.command Invoked with _raw_params=echo 2 >/sys/kernel/mm/ksm/run _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 13:49:37 compute-1 sudo[44918]: pam_unix(sudo:session): session closed for user root
Jan 20 13:49:38 compute-1 sudo[45071]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bxhihwlkdymqczdryrtghfklwiibpnwq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768916977.6483095-1606-25301423974421/AnsiballZ_systemd.py'
Jan 20 13:49:38 compute-1 sudo[45071]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:49:38 compute-1 python3.9[45073]: ansible-ansible.builtin.systemd Invoked with name=systemd-sysctl.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 20 13:49:38 compute-1 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 20 13:49:38 compute-1 systemd[1]: Stopped Apply Kernel Variables.
Jan 20 13:49:38 compute-1 systemd[1]: Stopping Apply Kernel Variables...
Jan 20 13:49:38 compute-1 systemd[1]: Starting Apply Kernel Variables...
Jan 20 13:49:38 compute-1 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jan 20 13:49:38 compute-1 systemd[1]: Finished Apply Kernel Variables.
Jan 20 13:49:38 compute-1 sudo[45071]: pam_unix(sudo:session): session closed for user root
Jan 20 13:49:38 compute-1 sshd-session[31296]: Connection closed by 192.168.122.30 port 48042
Jan 20 13:49:38 compute-1 sshd-session[31293]: pam_unix(sshd:session): session closed for user zuul
Jan 20 13:49:38 compute-1 systemd[1]: session-10.scope: Deactivated successfully.
Jan 20 13:49:38 compute-1 systemd[1]: session-10.scope: Consumed 2min 23.427s CPU time.
Jan 20 13:49:38 compute-1 systemd-logind[783]: Session 10 logged out. Waiting for processes to exit.
Jan 20 13:49:38 compute-1 systemd-logind[783]: Removed session 10.
Jan 20 13:49:44 compute-1 sshd-session[45104]: Accepted publickey for zuul from 192.168.122.30 port 60406 ssh2: ECDSA SHA256:Yw0kyD5N4lqNgr1J3b5cYIIxKFrTRY8zW6kk+n6imz4
Jan 20 13:49:44 compute-1 systemd-logind[783]: New session 11 of user zuul.
Jan 20 13:49:44 compute-1 systemd[1]: Started Session 11 of User zuul.
Jan 20 13:49:44 compute-1 sshd-session[45104]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 20 13:49:45 compute-1 python3.9[45257]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 13:49:46 compute-1 sudo[45411]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xhgabhpzbrvgqvwdjrbwrggkqhwowcdk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768916986.2872572-69-45827432524371/AnsiballZ_getent.py'
Jan 20 13:49:46 compute-1 sudo[45411]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:49:46 compute-1 python3.9[45413]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Jan 20 13:49:46 compute-1 sudo[45411]: pam_unix(sudo:session): session closed for user root
Jan 20 13:49:47 compute-1 sudo[45564]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dyhjdojkpddkhfvidqjyosjjgmigvufg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768916987.1973636-93-165873512803400/AnsiballZ_group.py'
Jan 20 13:49:47 compute-1 sudo[45564]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:49:47 compute-1 python3.9[45566]: ansible-ansible.builtin.group Invoked with gid=42476 name=openvswitch state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 20 13:49:48 compute-1 groupadd[45567]: group added to /etc/group: name=openvswitch, GID=42476
Jan 20 13:49:48 compute-1 groupadd[45567]: group added to /etc/gshadow: name=openvswitch
Jan 20 13:49:48 compute-1 groupadd[45567]: new group: name=openvswitch, GID=42476
Jan 20 13:49:48 compute-1 sudo[45564]: pam_unix(sudo:session): session closed for user root
Jan 20 13:49:48 compute-1 sudo[45722]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xbqvdaddybpctqcwaxjqchrpwquclvnd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768916988.3840191-117-168177163949977/AnsiballZ_user.py'
Jan 20 13:49:48 compute-1 sudo[45722]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:49:49 compute-1 python3.9[45724]: ansible-ansible.builtin.user Invoked with comment=openvswitch user group=openvswitch groups=['hugetlbfs'] name=openvswitch shell=/sbin/nologin state=present uid=42476 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-1 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Jan 20 13:49:49 compute-1 useradd[45726]: new user: name=openvswitch, UID=42476, GID=42476, home=/home/openvswitch, shell=/sbin/nologin, from=/dev/pts/0
Jan 20 13:49:49 compute-1 useradd[45726]: add 'openvswitch' to group 'hugetlbfs'
Jan 20 13:49:49 compute-1 useradd[45726]: add 'openvswitch' to shadow group 'hugetlbfs'
Jan 20 13:49:49 compute-1 sudo[45722]: pam_unix(sudo:session): session closed for user root
Jan 20 13:49:49 compute-1 sudo[45882]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yyainrgnzuhcbbjmnochuoearuppegvj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768916989.5928957-147-255313457295709/AnsiballZ_setup.py'
Jan 20 13:49:49 compute-1 sudo[45882]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:49:50 compute-1 python3.9[45884]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 20 13:49:50 compute-1 sudo[45882]: pam_unix(sudo:session): session closed for user root
Jan 20 13:49:50 compute-1 sudo[45966]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-imnxexorwqpiyjrymudqajkhuvxtuxws ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768916989.5928957-147-255313457295709/AnsiballZ_dnf.py'
Jan 20 13:49:50 compute-1 sudo[45966]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:49:51 compute-1 python3.9[45968]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 20 13:49:53 compute-1 sudo[45966]: pam_unix(sudo:session): session closed for user root
Jan 20 13:49:54 compute-1 sudo[46130]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uauhinpbtilmgtslizliwowtpgfxlqgf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768916994.0435402-189-139658162783334/AnsiballZ_dnf.py'
Jan 20 13:49:54 compute-1 sudo[46130]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:49:54 compute-1 python3.9[46132]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 20 13:50:05 compute-1 kernel: SELinux:  Converting 2736 SID table entries...
Jan 20 13:50:05 compute-1 kernel: SELinux:  policy capability network_peer_controls=1
Jan 20 13:50:05 compute-1 kernel: SELinux:  policy capability open_perms=1
Jan 20 13:50:05 compute-1 kernel: SELinux:  policy capability extended_socket_class=1
Jan 20 13:50:05 compute-1 kernel: SELinux:  policy capability always_check_network=0
Jan 20 13:50:05 compute-1 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 20 13:50:05 compute-1 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 20 13:50:05 compute-1 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 20 13:50:05 compute-1 groupadd[46156]: group added to /etc/group: name=unbound, GID=994
Jan 20 13:50:05 compute-1 groupadd[46156]: group added to /etc/gshadow: name=unbound
Jan 20 13:50:05 compute-1 groupadd[46156]: new group: name=unbound, GID=994
Jan 20 13:50:05 compute-1 useradd[46163]: new user: name=unbound, UID=993, GID=994, home=/var/lib/unbound, shell=/sbin/nologin, from=none
Jan 20 13:50:05 compute-1 dbus-broker-launch[771]: avc:  op=load_policy lsm=selinux seqno=7 res=1
Jan 20 13:50:05 compute-1 systemd[1]: Started daily update of the root trust anchor for DNSSEC.
Jan 20 13:50:07 compute-1 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 20 13:50:07 compute-1 systemd[1]: Starting man-db-cache-update.service...
Jan 20 13:50:07 compute-1 systemd[1]: Reloading.
Jan 20 13:50:07 compute-1 systemd-rc-local-generator[46654]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 13:50:07 compute-1 systemd-sysv-generator[46664]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 13:50:07 compute-1 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 20 13:50:08 compute-1 sudo[46130]: pam_unix(sudo:session): session closed for user root
Jan 20 13:50:08 compute-1 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 20 13:50:08 compute-1 systemd[1]: Finished man-db-cache-update.service.
Jan 20 13:50:08 compute-1 systemd[1]: run-rd218b33b4a364d0db5cd29f7f5e0ddfd.service: Deactivated successfully.
Jan 20 13:50:11 compute-1 sudo[47233]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ghksdqgfxprblbuownwyqdurgdzckxle ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917011.1029806-213-150008429407193/AnsiballZ_systemd.py'
Jan 20 13:50:11 compute-1 sudo[47233]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:50:12 compute-1 python3.9[47235]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 20 13:50:12 compute-1 systemd[1]: Reloading.
Jan 20 13:50:12 compute-1 systemd-rc-local-generator[47263]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 13:50:12 compute-1 systemd-sysv-generator[47268]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 13:50:12 compute-1 systemd[1]: Starting Open vSwitch Database Unit...
Jan 20 13:50:12 compute-1 chown[47276]: /usr/bin/chown: cannot access '/run/openvswitch': No such file or directory
Jan 20 13:50:12 compute-1 ovs-ctl[47281]: /etc/openvswitch/conf.db does not exist ... (warning).
Jan 20 13:50:12 compute-1 ovs-ctl[47281]: Creating empty database /etc/openvswitch/conf.db [  OK  ]
Jan 20 13:50:12 compute-1 ovs-ctl[47281]: Starting ovsdb-server [  OK  ]
Jan 20 13:50:12 compute-1 ovs-vsctl[47330]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=8.5.1
Jan 20 13:50:12 compute-1 ovs-vsctl[47346]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=3.3.5-115.el9s "external-ids:system-id=\"5ffd4ac3-9266-4927-98ad-20a17782c725\"" "external-ids:rundir=\"/var/run/openvswitch\"" "system-type=\"centos\"" "system-version=\"9\""
Jan 20 13:50:12 compute-1 ovs-ctl[47281]: Configuring Open vSwitch system IDs [  OK  ]
Jan 20 13:50:12 compute-1 ovs-ctl[47281]: Enabling remote OVSDB managers [  OK  ]
Jan 20 13:50:12 compute-1 ovs-vsctl[47355]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-1
Jan 20 13:50:12 compute-1 systemd[1]: Started Open vSwitch Database Unit.
Jan 20 13:50:12 compute-1 systemd[1]: Starting Open vSwitch Delete Transient Ports...
Jan 20 13:50:12 compute-1 systemd[1]: Finished Open vSwitch Delete Transient Ports.
Jan 20 13:50:12 compute-1 systemd[1]: Starting Open vSwitch Forwarding Unit...
Jan 20 13:50:13 compute-1 kernel: openvswitch: Open vSwitch switching datapath
Jan 20 13:50:13 compute-1 ovs-ctl[47399]: Inserting openvswitch module [  OK  ]
Jan 20 13:50:13 compute-1 ovs-ctl[47368]: Starting ovs-vswitchd [  OK  ]
Jan 20 13:50:13 compute-1 ovs-vsctl[47416]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-1
Jan 20 13:50:13 compute-1 ovs-ctl[47368]: Enabling remote OVSDB managers [  OK  ]
Jan 20 13:50:13 compute-1 systemd[1]: Started Open vSwitch Forwarding Unit.
Jan 20 13:50:13 compute-1 systemd[1]: Starting Open vSwitch...
Jan 20 13:50:13 compute-1 systemd[1]: Finished Open vSwitch.
Jan 20 13:50:13 compute-1 sudo[47233]: pam_unix(sudo:session): session closed for user root
Jan 20 13:50:13 compute-1 sshd-session[47106]: Invalid user oracle from 116.99.171.211 port 34072
Jan 20 13:50:13 compute-1 sshd-session[47104]: Connection closed by authenticating user sshd 116.99.171.211 port 34058 [preauth]
Jan 20 13:50:14 compute-1 python3.9[47568]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 13:50:15 compute-1 sudo[47718]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jbkwkihihzdnzjgazeuejfcfwrxhynpt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917014.6652677-267-266738962355055/AnsiballZ_sefcontext.py'
Jan 20 13:50:15 compute-1 sudo[47718]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:50:15 compute-1 python3.9[47720]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Jan 20 13:50:15 compute-1 sshd-session[47106]: Connection closed by invalid user oracle 116.99.171.211 port 34072 [preauth]
Jan 20 13:50:16 compute-1 kernel: SELinux:  Converting 2750 SID table entries...
Jan 20 13:50:16 compute-1 kernel: SELinux:  policy capability network_peer_controls=1
Jan 20 13:50:16 compute-1 kernel: SELinux:  policy capability open_perms=1
Jan 20 13:50:16 compute-1 kernel: SELinux:  policy capability extended_socket_class=1
Jan 20 13:50:16 compute-1 kernel: SELinux:  policy capability always_check_network=0
Jan 20 13:50:16 compute-1 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 20 13:50:16 compute-1 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 20 13:50:16 compute-1 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 20 13:50:16 compute-1 sudo[47718]: pam_unix(sudo:session): session closed for user root
Jan 20 13:50:17 compute-1 python3.9[47875]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 13:50:18 compute-1 sudo[48031]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fttvijhnpafsdzsomncpdqgiynutvpfp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917018.4027648-321-29047351931792/AnsiballZ_dnf.py'
Jan 20 13:50:18 compute-1 dbus-broker-launch[771]: avc:  op=load_policy lsm=selinux seqno=8 res=1
Jan 20 13:50:18 compute-1 sudo[48031]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:50:19 compute-1 python3.9[48033]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 20 13:50:20 compute-1 sudo[48031]: pam_unix(sudo:session): session closed for user root
Jan 20 13:50:20 compute-1 sudo[48184]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-calmfntzyioltelyyeviwexchawuynrd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917020.5108902-345-177987429009238/AnsiballZ_command.py'
Jan 20 13:50:21 compute-1 sudo[48184]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:50:21 compute-1 python3.9[48186]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 13:50:21 compute-1 sudo[48184]: pam_unix(sudo:session): session closed for user root
Jan 20 13:50:22 compute-1 sudo[48471]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eobnyozjqcuqbicsuaoeoezprjcrftkz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917022.280563-369-68138111349895/AnsiballZ_file.py'
Jan 20 13:50:22 compute-1 sudo[48471]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:50:23 compute-1 python3.9[48473]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None attributes=None
Jan 20 13:50:23 compute-1 sudo[48471]: pam_unix(sudo:session): session closed for user root
Jan 20 13:50:24 compute-1 python3.9[48623]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 20 13:50:24 compute-1 sudo[48775]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vghgksjnahvmxvqnexsjycidolnkteta ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917024.5576847-417-207395852679982/AnsiballZ_dnf.py'
Jan 20 13:50:24 compute-1 sudo[48775]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:50:25 compute-1 python3.9[48777]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 20 13:50:27 compute-1 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 20 13:50:27 compute-1 systemd[1]: Starting man-db-cache-update.service...
Jan 20 13:50:27 compute-1 systemd[1]: Reloading.
Jan 20 13:50:27 compute-1 systemd-rc-local-generator[48814]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 13:50:27 compute-1 systemd-sysv-generator[48817]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 13:50:27 compute-1 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 20 13:50:27 compute-1 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 20 13:50:27 compute-1 systemd[1]: Finished man-db-cache-update.service.
Jan 20 13:50:27 compute-1 systemd[1]: run-r7dcf85c38a6948f1b4947a7edeb4364f.service: Deactivated successfully.
Jan 20 13:50:27 compute-1 sudo[48775]: pam_unix(sudo:session): session closed for user root
Jan 20 13:50:28 compute-1 sudo[49092]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lhqrngksrorxzeohjseltqamzuymbfpv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917027.803836-441-9924550950459/AnsiballZ_systemd.py'
Jan 20 13:50:28 compute-1 sudo[49092]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:50:28 compute-1 python3.9[49094]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 20 13:50:28 compute-1 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Jan 20 13:50:28 compute-1 systemd[1]: Stopped Network Manager Wait Online.
Jan 20 13:50:28 compute-1 systemd[1]: Stopping Network Manager Wait Online...
Jan 20 13:50:28 compute-1 systemd[1]: Stopping Network Manager...
Jan 20 13:50:28 compute-1 NetworkManager[7192]: <info>  [1768917028.5795] caught SIGTERM, shutting down normally.
Jan 20 13:50:28 compute-1 NetworkManager[7192]: <info>  [1768917028.5814] dhcp4 (eth0): canceled DHCP transaction
Jan 20 13:50:28 compute-1 NetworkManager[7192]: <info>  [1768917028.5815] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 20 13:50:28 compute-1 NetworkManager[7192]: <info>  [1768917028.5815] dhcp4 (eth0): state changed no lease
Jan 20 13:50:28 compute-1 NetworkManager[7192]: <info>  [1768917028.5819] manager: NetworkManager state is now CONNECTED_SITE
Jan 20 13:50:28 compute-1 NetworkManager[7192]: <info>  [1768917028.5896] exiting (success)
Jan 20 13:50:28 compute-1 systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 20 13:50:28 compute-1 systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 20 13:50:28 compute-1 systemd[1]: NetworkManager.service: Deactivated successfully.
Jan 20 13:50:28 compute-1 systemd[1]: Stopped Network Manager.
Jan 20 13:50:28 compute-1 systemd[1]: NetworkManager.service: Consumed 13.061s CPU time, 4.1M memory peak, read 0B from disk, written 14.0K to disk.
Jan 20 13:50:28 compute-1 systemd[1]: Starting Network Manager...
Jan 20 13:50:28 compute-1 NetworkManager[49104]: <info>  [1768917028.6957] NetworkManager (version 1.54.3-2.el9) is starting... (after a restart, boot:017a3b90-38ab-4863-8e66-991c4844fcc7)
Jan 20 13:50:28 compute-1 NetworkManager[49104]: <info>  [1768917028.6958] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Jan 20 13:50:28 compute-1 NetworkManager[49104]: <info>  [1768917028.7066] manager[0x55e28ae98000]: monitoring kernel firmware directory '/lib/firmware'.
Jan 20 13:50:28 compute-1 systemd[1]: Starting Hostname Service...
Jan 20 13:50:28 compute-1 systemd[1]: Started Hostname Service.
Jan 20 13:50:28 compute-1 NetworkManager[49104]: <info>  [1768917028.8252] hostname: hostname: using hostnamed
Jan 20 13:50:28 compute-1 NetworkManager[49104]: <info>  [1768917028.8253] hostname: static hostname changed from (none) to "compute-1"
Jan 20 13:50:28 compute-1 NetworkManager[49104]: <info>  [1768917028.8271] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Jan 20 13:50:28 compute-1 NetworkManager[49104]: <info>  [1768917028.8278] manager[0x55e28ae98000]: rfkill: Wi-Fi hardware radio set enabled
Jan 20 13:50:28 compute-1 NetworkManager[49104]: <info>  [1768917028.8279] manager[0x55e28ae98000]: rfkill: WWAN hardware radio set enabled
Jan 20 13:50:28 compute-1 NetworkManager[49104]: <info>  [1768917028.8319] Loaded device plugin: NMOvsFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-ovs.so)
Jan 20 13:50:28 compute-1 NetworkManager[49104]: <info>  [1768917028.8335] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-team.so)
Jan 20 13:50:28 compute-1 NetworkManager[49104]: <info>  [1768917028.8336] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Jan 20 13:50:28 compute-1 NetworkManager[49104]: <info>  [1768917028.8337] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Jan 20 13:50:28 compute-1 NetworkManager[49104]: <info>  [1768917028.8339] manager: Networking is enabled by state file
Jan 20 13:50:28 compute-1 NetworkManager[49104]: <info>  [1768917028.8343] settings: Loaded settings plugin: keyfile (internal)
Jan 20 13:50:28 compute-1 NetworkManager[49104]: <info>  [1768917028.8349] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-settings-plugin-ifcfg-rh.so")
Jan 20 13:50:28 compute-1 NetworkManager[49104]: <info>  [1768917028.8414] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Jan 20 13:50:28 compute-1 NetworkManager[49104]: <info>  [1768917028.8434] dhcp: init: Using DHCP client 'internal'
Jan 20 13:50:28 compute-1 NetworkManager[49104]: <info>  [1768917028.8441] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Jan 20 13:50:28 compute-1 NetworkManager[49104]: <info>  [1768917028.8455] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 20 13:50:28 compute-1 NetworkManager[49104]: <info>  [1768917028.8472] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Jan 20 13:50:28 compute-1 NetworkManager[49104]: <info>  [1768917028.8496] device (lo): Activation: starting connection 'lo' (f1b29bda-3a6a-4be0-8c9c-6df9359cf4c4)
Jan 20 13:50:28 compute-1 NetworkManager[49104]: <info>  [1768917028.8510] device (eth0): carrier: link connected
Jan 20 13:50:28 compute-1 NetworkManager[49104]: <info>  [1768917028.8520] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Jan 20 13:50:28 compute-1 NetworkManager[49104]: <info>  [1768917028.8533] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Jan 20 13:50:28 compute-1 NetworkManager[49104]: <info>  [1768917028.8535] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Jan 20 13:50:28 compute-1 NetworkManager[49104]: <info>  [1768917028.8556] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Jan 20 13:50:28 compute-1 NetworkManager[49104]: <info>  [1768917028.8578] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Jan 20 13:50:28 compute-1 NetworkManager[49104]: <info>  [1768917028.8591] device (eth1): carrier: link connected
Jan 20 13:50:28 compute-1 NetworkManager[49104]: <info>  [1768917028.8601] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Jan 20 13:50:28 compute-1 NetworkManager[49104]: <info>  [1768917028.8618] manager: (eth1): assume: will attempt to assume matching connection 'ci-private-network' (1877dc82-ca8e-52d6-b413-dd9d07823d2d) (indicated)
Jan 20 13:50:28 compute-1 NetworkManager[49104]: <info>  [1768917028.8620] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Jan 20 13:50:28 compute-1 NetworkManager[49104]: <info>  [1768917028.8640] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Jan 20 13:50:28 compute-1 NetworkManager[49104]: <info>  [1768917028.8663] device (eth1): Activation: starting connection 'ci-private-network' (1877dc82-ca8e-52d6-b413-dd9d07823d2d)
Jan 20 13:50:28 compute-1 systemd[1]: Started Network Manager.
Jan 20 13:50:28 compute-1 NetworkManager[49104]: <info>  [1768917028.8674] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Jan 20 13:50:28 compute-1 NetworkManager[49104]: <info>  [1768917028.8699] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Jan 20 13:50:28 compute-1 NetworkManager[49104]: <info>  [1768917028.8711] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Jan 20 13:50:28 compute-1 NetworkManager[49104]: <info>  [1768917028.8721] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Jan 20 13:50:28 compute-1 NetworkManager[49104]: <info>  [1768917028.8750] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Jan 20 13:50:28 compute-1 NetworkManager[49104]: <info>  [1768917028.8756] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Jan 20 13:50:28 compute-1 NetworkManager[49104]: <info>  [1768917028.8760] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Jan 20 13:50:28 compute-1 NetworkManager[49104]: <info>  [1768917028.8765] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Jan 20 13:50:28 compute-1 NetworkManager[49104]: <info>  [1768917028.8770] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Jan 20 13:50:28 compute-1 NetworkManager[49104]: <info>  [1768917028.8780] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Jan 20 13:50:28 compute-1 NetworkManager[49104]: <info>  [1768917028.8786] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 20 13:50:28 compute-1 NetworkManager[49104]: <info>  [1768917028.8801] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Jan 20 13:50:28 compute-1 NetworkManager[49104]: <info>  [1768917028.8825] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Jan 20 13:50:28 compute-1 NetworkManager[49104]: <info>  [1768917028.8842] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Jan 20 13:50:28 compute-1 NetworkManager[49104]: <info>  [1768917028.8846] dhcp4 (eth0): state changed new lease, address=38.102.83.169
Jan 20 13:50:28 compute-1 NetworkManager[49104]: <info>  [1768917028.8853] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Jan 20 13:50:28 compute-1 NetworkManager[49104]: <info>  [1768917028.8861] device (lo): Activation: successful, device activated.
Jan 20 13:50:28 compute-1 systemd[1]: Starting Network Manager Wait Online...
Jan 20 13:50:28 compute-1 NetworkManager[49104]: <info>  [1768917028.8888] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Jan 20 13:50:28 compute-1 NetworkManager[49104]: <info>  [1768917028.8964] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Jan 20 13:50:28 compute-1 NetworkManager[49104]: <info>  [1768917028.8972] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Jan 20 13:50:28 compute-1 NetworkManager[49104]: <info>  [1768917028.8981] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Jan 20 13:50:28 compute-1 NetworkManager[49104]: <info>  [1768917028.8986] manager: NetworkManager state is now CONNECTED_LOCAL
Jan 20 13:50:28 compute-1 NetworkManager[49104]: <info>  [1768917028.8990] device (eth1): Activation: successful, device activated.
Jan 20 13:50:28 compute-1 NetworkManager[49104]: <info>  [1768917028.9009] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Jan 20 13:50:28 compute-1 NetworkManager[49104]: <info>  [1768917028.9011] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Jan 20 13:50:28 compute-1 NetworkManager[49104]: <info>  [1768917028.9015] manager: NetworkManager state is now CONNECTED_SITE
Jan 20 13:50:28 compute-1 NetworkManager[49104]: <info>  [1768917028.9018] device (eth0): Activation: successful, device activated.
Jan 20 13:50:28 compute-1 NetworkManager[49104]: <info>  [1768917028.9025] manager: NetworkManager state is now CONNECTED_GLOBAL
Jan 20 13:50:28 compute-1 NetworkManager[49104]: <info>  [1768917028.9032] manager: startup complete
Jan 20 13:50:28 compute-1 systemd[1]: Finished Network Manager Wait Online.
Jan 20 13:50:28 compute-1 sudo[49092]: pam_unix(sudo:session): session closed for user root
Jan 20 13:50:29 compute-1 sudo[49319]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ibwnxqdvmxfkkpkzaofidfmliregkeuu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917029.1674957-465-10567753021418/AnsiballZ_dnf.py'
Jan 20 13:50:29 compute-1 sudo[49319]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:50:29 compute-1 python3.9[49321]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 20 13:50:34 compute-1 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 20 13:50:34 compute-1 systemd[1]: Starting man-db-cache-update.service...
Jan 20 13:50:34 compute-1 systemd[1]: Reloading.
Jan 20 13:50:34 compute-1 systemd-rc-local-generator[49375]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 13:50:34 compute-1 systemd-sysv-generator[49379]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 13:50:34 compute-1 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 20 13:50:35 compute-1 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 20 13:50:35 compute-1 systemd[1]: Finished man-db-cache-update.service.
Jan 20 13:50:35 compute-1 systemd[1]: run-r671d03ad3b1a4ba9b1e1ebb3c823158b.service: Deactivated successfully.
Jan 20 13:50:35 compute-1 sudo[49319]: pam_unix(sudo:session): session closed for user root
Jan 20 13:50:36 compute-1 sudo[49778]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xjehstxdcpigltkxvemwnmxpieyplkkb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917036.394022-501-42994465050929/AnsiballZ_stat.py'
Jan 20 13:50:36 compute-1 sudo[49778]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:50:36 compute-1 python3.9[49780]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 20 13:50:36 compute-1 sudo[49778]: pam_unix(sudo:session): session closed for user root
Jan 20 13:50:37 compute-1 sudo[49932]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-skskmmptbwqmtzswrwjhjmrwgklrjjtf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917037.2598355-528-43621163340257/AnsiballZ_ini_file.py'
Jan 20 13:50:37 compute-1 sudo[49932]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:50:38 compute-1 python3.9[49934]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=no-auto-default path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=* exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:50:38 compute-1 sudo[49932]: pam_unix(sudo:session): session closed for user root
Jan 20 13:50:38 compute-1 sudo[50086]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fbdivmtxbgpfagszbkfpzvciigetmnvh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917038.4664702-558-262923963814767/AnsiballZ_ini_file.py'
Jan 20 13:50:38 compute-1 sudo[50086]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:50:39 compute-1 python3.9[50088]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:50:39 compute-1 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 20 13:50:39 compute-1 sudo[50086]: pam_unix(sudo:session): session closed for user root
Jan 20 13:50:39 compute-1 sudo[50239]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tslocreymvxxypegddqzpihlhtqjlact ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917039.2621315-558-153149717205916/AnsiballZ_ini_file.py'
Jan 20 13:50:39 compute-1 sudo[50239]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:50:39 compute-1 python3.9[50241]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:50:39 compute-1 sudo[50239]: pam_unix(sudo:session): session closed for user root
Jan 20 13:50:40 compute-1 sudo[50391]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kfqsyarrxpxowztkuerhwncfwnkpnzfo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917040.0888717-603-88083202038796/AnsiballZ_ini_file.py'
Jan 20 13:50:40 compute-1 sudo[50391]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:50:40 compute-1 python3.9[50393]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:50:40 compute-1 sudo[50391]: pam_unix(sudo:session): session closed for user root
Jan 20 13:50:41 compute-1 sudo[50544]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rjyutjlcxydohwvokwpgprfiuhghlwow ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917040.819576-603-85816548140878/AnsiballZ_ini_file.py'
Jan 20 13:50:41 compute-1 sudo[50544]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:50:41 compute-1 python3.9[50546]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:50:41 compute-1 sudo[50544]: pam_unix(sudo:session): session closed for user root
Jan 20 13:50:41 compute-1 sshd-session[49805]: Connection closed by authenticating user root 116.99.171.211 port 45932 [preauth]
Jan 20 13:50:41 compute-1 sudo[50696]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uhovsgyacpagsyhbzioqylufhkiioitf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917041.6502037-648-273260332264767/AnsiballZ_stat.py'
Jan 20 13:50:41 compute-1 sudo[50696]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:50:42 compute-1 python3.9[50698]: ansible-ansible.legacy.stat Invoked with path=/etc/dhcp/dhclient-enter-hooks follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 13:50:42 compute-1 sudo[50696]: pam_unix(sudo:session): session closed for user root
Jan 20 13:50:42 compute-1 sudo[50819]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qjesoqkxucdxpjumteukublxnhvbatkm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917041.6502037-648-273260332264767/AnsiballZ_copy.py'
Jan 20 13:50:42 compute-1 sudo[50819]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:50:42 compute-1 python3.9[50821]: ansible-ansible.legacy.copy Invoked with dest=/etc/dhcp/dhclient-enter-hooks mode=0755 src=/home/zuul/.ansible/tmp/ansible-tmp-1768917041.6502037-648-273260332264767/.source _original_basename=.cwcydfts follow=False checksum=f6278a40de79a9841f6ed1fc584538225566990c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:50:42 compute-1 sudo[50819]: pam_unix(sudo:session): session closed for user root
Jan 20 13:50:43 compute-1 sudo[50971]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kijlymuzbevnjxdvbgppqnhxikbufwmx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917043.1454206-693-90502703822147/AnsiballZ_file.py'
Jan 20 13:50:43 compute-1 sudo[50971]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:50:43 compute-1 python3.9[50973]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/os-net-config state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:50:43 compute-1 sudo[50971]: pam_unix(sudo:session): session closed for user root
Jan 20 13:50:44 compute-1 sudo[51123]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tmfftmvpfjwavsfflsaitviynzddqvnu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917044.0053904-717-70100253968414/AnsiballZ_edpm_os_net_config_mappings.py'
Jan 20 13:50:44 compute-1 sudo[51123]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:50:44 compute-1 python3.9[51125]: ansible-edpm_os_net_config_mappings Invoked with net_config_data_lookup={}
Jan 20 13:50:44 compute-1 sudo[51123]: pam_unix(sudo:session): session closed for user root
Jan 20 13:50:45 compute-1 sudo[51275]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tynkpxghjcfjiwyivyxtylclajvevrjy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917044.8686-744-82532720367189/AnsiballZ_file.py'
Jan 20 13:50:45 compute-1 sudo[51275]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:50:45 compute-1 python3.9[51277]: ansible-ansible.builtin.file Invoked with path=/var/lib/edpm-config/scripts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:50:45 compute-1 sudo[51275]: pam_unix(sudo:session): session closed for user root
Jan 20 13:50:46 compute-1 sudo[51427]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-edlgnqavodideqlpdcumwdekqopkyrme ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917045.7865968-774-68241993607176/AnsiballZ_stat.py'
Jan 20 13:50:46 compute-1 sudo[51427]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:50:46 compute-1 sudo[51427]: pam_unix(sudo:session): session closed for user root
Jan 20 13:50:46 compute-1 sudo[51550]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oqrgmugadqboaeoceghwiteuvyfdplro ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917045.7865968-774-68241993607176/AnsiballZ_copy.py'
Jan 20 13:50:46 compute-1 sudo[51550]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:50:47 compute-1 sudo[51550]: pam_unix(sudo:session): session closed for user root
Jan 20 13:50:47 compute-1 sudo[51702]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-whdltxyqevwsffngeikhwenjgdwdrzrf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917047.2537618-819-262475397650327/AnsiballZ_slurp.py'
Jan 20 13:50:47 compute-1 sudo[51702]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:50:47 compute-1 python3.9[51704]: ansible-ansible.builtin.slurp Invoked with path=/etc/os-net-config/config.yaml src=/etc/os-net-config/config.yaml
Jan 20 13:50:47 compute-1 sudo[51702]: pam_unix(sudo:session): session closed for user root
Jan 20 13:50:48 compute-1 sudo[51877]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mehqjqanfpwtzvomaadqjcckmobkkqae ; ANSIBLE_ASYNC_DIR=\'~/.ansible_async\' /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917048.2391117-846-90092630877966/async_wrapper.py j766640452522 300 /home/zuul/.ansible/tmp/ansible-tmp-1768917048.2391117-846-90092630877966/AnsiballZ_edpm_os_net_config.py _'
Jan 20 13:50:48 compute-1 sudo[51877]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:50:49 compute-1 ansible-async_wrapper.py[51879]: Invoked with j766640452522 300 /home/zuul/.ansible/tmp/ansible-tmp-1768917048.2391117-846-90092630877966/AnsiballZ_edpm_os_net_config.py _
Jan 20 13:50:49 compute-1 ansible-async_wrapper.py[51882]: Starting module and watcher
Jan 20 13:50:49 compute-1 ansible-async_wrapper.py[51882]: Start watching 51883 (300)
Jan 20 13:50:49 compute-1 ansible-async_wrapper.py[51883]: Start module (51883)
Jan 20 13:50:49 compute-1 ansible-async_wrapper.py[51879]: Return async_wrapper task started.
Jan 20 13:50:49 compute-1 sudo[51877]: pam_unix(sudo:session): session closed for user root
Jan 20 13:50:49 compute-1 python3.9[51884]: ansible-edpm_os_net_config Invoked with cleanup=True config_file=/etc/os-net-config/config.yaml debug=True detailed_exit_codes=True safe_defaults=False use_nmstate=True
Jan 20 13:50:50 compute-1 kernel: cfg80211: Loading compiled-in X.509 certificates for regulatory database
Jan 20 13:50:50 compute-1 kernel: Loaded X.509 cert 'sforshee: 00b28ddf47aef9cea7'
Jan 20 13:50:50 compute-1 kernel: Loaded X.509 cert 'wens: 61c038651aabdcf94bd0ac7ff06c7248db18c600'
Jan 20 13:50:50 compute-1 kernel: platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
Jan 20 13:50:50 compute-1 kernel: cfg80211: failed to load regulatory.db
Jan 20 13:50:51 compute-1 NetworkManager[49104]: <info>  [1768917051.3528] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51885 uid=0 result="success"
Jan 20 13:50:51 compute-1 NetworkManager[49104]: <info>  [1768917051.3543] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51885 uid=0 result="success"
Jan 20 13:50:51 compute-1 NetworkManager[49104]: <info>  [1768917051.3969] manager: (br-ex): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/4)
Jan 20 13:50:51 compute-1 NetworkManager[49104]: <info>  [1768917051.3970] audit: op="connection-add" uuid="b1dcd61b-a54a-4d09-b592-1e42a44a5f87" name="br-ex-br" pid=51885 uid=0 result="success"
Jan 20 13:50:51 compute-1 NetworkManager[49104]: <info>  [1768917051.3988] manager: (br-ex): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/5)
Jan 20 13:50:51 compute-1 NetworkManager[49104]: <info>  [1768917051.3989] audit: op="connection-add" uuid="d9d8c26f-c8c6-4619-be80-65fe1e8ed035" name="br-ex-port" pid=51885 uid=0 result="success"
Jan 20 13:50:51 compute-1 NetworkManager[49104]: <info>  [1768917051.4002] manager: (eth1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/6)
Jan 20 13:50:51 compute-1 NetworkManager[49104]: <info>  [1768917051.4003] audit: op="connection-add" uuid="62009706-568c-4850-95a2-13f3e778a8c3" name="eth1-port" pid=51885 uid=0 result="success"
Jan 20 13:50:51 compute-1 NetworkManager[49104]: <info>  [1768917051.4016] manager: (vlan20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/7)
Jan 20 13:50:51 compute-1 NetworkManager[49104]: <info>  [1768917051.4017] audit: op="connection-add" uuid="312f7ee9-fe7f-44fe-9a04-524d4d678983" name="vlan20-port" pid=51885 uid=0 result="success"
Jan 20 13:50:51 compute-1 NetworkManager[49104]: <info>  [1768917051.4030] manager: (vlan21): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/8)
Jan 20 13:50:51 compute-1 NetworkManager[49104]: <info>  [1768917051.4031] audit: op="connection-add" uuid="402b6a37-e14a-4c91-93ac-70460eb4676d" name="vlan21-port" pid=51885 uid=0 result="success"
Jan 20 13:50:51 compute-1 NetworkManager[49104]: <info>  [1768917051.4044] manager: (vlan22): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/9)
Jan 20 13:50:51 compute-1 NetworkManager[49104]: <info>  [1768917051.4046] audit: op="connection-add" uuid="a22b3790-47fd-4571-9ebd-90f4fda173a4" name="vlan22-port" pid=51885 uid=0 result="success"
Jan 20 13:50:51 compute-1 NetworkManager[49104]: <info>  [1768917051.4058] manager: (vlan23): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/10)
Jan 20 13:50:51 compute-1 NetworkManager[49104]: <info>  [1768917051.4060] audit: op="connection-add" uuid="f12405d4-5941-409c-a742-833c3119839b" name="vlan23-port" pid=51885 uid=0 result="success"
Jan 20 13:50:51 compute-1 NetworkManager[49104]: <info>  [1768917051.4080] audit: op="connection-update" uuid="5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03" name="System eth0" args="connection.autoconnect-priority,connection.timestamp,802-3-ethernet.mtu,ipv4.dhcp-timeout,ipv4.dhcp-client-id,ipv6.addr-gen-mode,ipv6.method,ipv6.dhcp-timeout" pid=51885 uid=0 result="success"
Jan 20 13:50:51 compute-1 NetworkManager[49104]: <info>  [1768917051.4097] manager: (br-ex): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/11)
Jan 20 13:50:51 compute-1 NetworkManager[49104]: <info>  [1768917051.4098] audit: op="connection-add" uuid="bffb5486-bddd-47a6-b940-1f626fe731a0" name="br-ex-if" pid=51885 uid=0 result="success"
Jan 20 13:50:51 compute-1 NetworkManager[49104]: <info>  [1768917051.4131] audit: op="connection-update" uuid="1877dc82-ca8e-52d6-b413-dd9d07823d2d" name="ci-private-network" args="connection.port-type,connection.master,connection.timestamp,connection.controller,connection.slave-type,ovs-interface.type,ipv4.routing-rules,ipv4.dns,ipv4.method,ipv4.addresses,ipv4.routes,ipv4.never-default,ovs-external-ids.data,ipv6.routing-rules,ipv6.addr-gen-mode,ipv6.dns,ipv6.method,ipv6.addresses,ipv6.routes" pid=51885 uid=0 result="success"
Jan 20 13:50:51 compute-1 NetworkManager[49104]: <info>  [1768917051.4147] manager: (vlan20): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/12)
Jan 20 13:50:51 compute-1 NetworkManager[49104]: <info>  [1768917051.4149] audit: op="connection-add" uuid="93b33f82-f072-4e05-bf59-36be1960102b" name="vlan20-if" pid=51885 uid=0 result="success"
Jan 20 13:50:51 compute-1 NetworkManager[49104]: <info>  [1768917051.4165] manager: (vlan21): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/13)
Jan 20 13:50:51 compute-1 NetworkManager[49104]: <info>  [1768917051.4167] audit: op="connection-add" uuid="f30f8036-b523-4503-b0e1-ac5fe3a30f91" name="vlan21-if" pid=51885 uid=0 result="success"
Jan 20 13:50:51 compute-1 NetworkManager[49104]: <info>  [1768917051.4183] manager: (vlan22): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/14)
Jan 20 13:50:51 compute-1 NetworkManager[49104]: <info>  [1768917051.4185] audit: op="connection-add" uuid="a01ef58e-1207-40a7-95e9-e2335d145b40" name="vlan22-if" pid=51885 uid=0 result="success"
Jan 20 13:50:51 compute-1 NetworkManager[49104]: <info>  [1768917051.4201] manager: (vlan23): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/15)
Jan 20 13:50:51 compute-1 NetworkManager[49104]: <info>  [1768917051.4203] audit: op="connection-add" uuid="4e1f04f6-553e-40ff-a5c0-b9478175e86f" name="vlan23-if" pid=51885 uid=0 result="success"
Jan 20 13:50:51 compute-1 NetworkManager[49104]: <info>  [1768917051.4215] audit: op="connection-delete" uuid="3c0fe307-c5e5-33c6-a0c4-a240cdee9616" name="Wired connection 1" pid=51885 uid=0 result="success"
Jan 20 13:50:51 compute-1 NetworkManager[49104]: <info>  [1768917051.4227] device (br-ex)[Open vSwitch Bridge]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 20 13:50:51 compute-1 NetworkManager[49104]: <warn>  [1768917051.4229] device (br-ex)[Open vSwitch Bridge]: error setting IPv4 forwarding to '1': Success
Jan 20 13:50:51 compute-1 NetworkManager[49104]: <info>  [1768917051.4236] device (br-ex)[Open vSwitch Bridge]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 20 13:50:51 compute-1 NetworkManager[49104]: <info>  [1768917051.4240] device (br-ex)[Open vSwitch Bridge]: Activation: starting connection 'br-ex-br' (b1dcd61b-a54a-4d09-b592-1e42a44a5f87)
Jan 20 13:50:51 compute-1 NetworkManager[49104]: <info>  [1768917051.4241] audit: op="connection-activate" uuid="b1dcd61b-a54a-4d09-b592-1e42a44a5f87" name="br-ex-br" pid=51885 uid=0 result="success"
Jan 20 13:50:51 compute-1 NetworkManager[49104]: <info>  [1768917051.4243] device (br-ex)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 20 13:50:51 compute-1 NetworkManager[49104]: <warn>  [1768917051.4244] device (br-ex)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 20 13:50:51 compute-1 NetworkManager[49104]: <info>  [1768917051.4249] device (br-ex)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 20 13:50:51 compute-1 NetworkManager[49104]: <info>  [1768917051.4254] device (br-ex)[Open vSwitch Port]: Activation: starting connection 'br-ex-port' (d9d8c26f-c8c6-4619-be80-65fe1e8ed035)
Jan 20 13:50:51 compute-1 NetworkManager[49104]: <info>  [1768917051.4256] device (eth1)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 20 13:50:51 compute-1 NetworkManager[49104]: <warn>  [1768917051.4258] device (eth1)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 20 13:50:51 compute-1 NetworkManager[49104]: <info>  [1768917051.4263] device (eth1)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 20 13:50:51 compute-1 NetworkManager[49104]: <info>  [1768917051.4268] device (eth1)[Open vSwitch Port]: Activation: starting connection 'eth1-port' (62009706-568c-4850-95a2-13f3e778a8c3)
Jan 20 13:50:51 compute-1 NetworkManager[49104]: <info>  [1768917051.4270] device (vlan20)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 20 13:50:51 compute-1 NetworkManager[49104]: <warn>  [1768917051.4271] device (vlan20)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 20 13:50:51 compute-1 NetworkManager[49104]: <info>  [1768917051.4277] device (vlan20)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 20 13:50:51 compute-1 NetworkManager[49104]: <info>  [1768917051.4282] device (vlan20)[Open vSwitch Port]: Activation: starting connection 'vlan20-port' (312f7ee9-fe7f-44fe-9a04-524d4d678983)
Jan 20 13:50:51 compute-1 NetworkManager[49104]: <info>  [1768917051.4284] device (vlan21)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 20 13:50:51 compute-1 NetworkManager[49104]: <warn>  [1768917051.4285] device (vlan21)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 20 13:50:51 compute-1 NetworkManager[49104]: <info>  [1768917051.4291] device (vlan21)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 20 13:50:51 compute-1 NetworkManager[49104]: <info>  [1768917051.4295] device (vlan21)[Open vSwitch Port]: Activation: starting connection 'vlan21-port' (402b6a37-e14a-4c91-93ac-70460eb4676d)
Jan 20 13:50:51 compute-1 NetworkManager[49104]: <info>  [1768917051.4296] device (vlan22)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 20 13:50:51 compute-1 NetworkManager[49104]: <warn>  [1768917051.4297] device (vlan22)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 20 13:50:51 compute-1 NetworkManager[49104]: <info>  [1768917051.4303] device (vlan22)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 20 13:50:51 compute-1 NetworkManager[49104]: <info>  [1768917051.4307] device (vlan22)[Open vSwitch Port]: Activation: starting connection 'vlan22-port' (a22b3790-47fd-4571-9ebd-90f4fda173a4)
Jan 20 13:50:51 compute-1 NetworkManager[49104]: <info>  [1768917051.4310] device (vlan23)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 20 13:50:51 compute-1 NetworkManager[49104]: <warn>  [1768917051.4311] device (vlan23)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 20 13:50:51 compute-1 NetworkManager[49104]: <info>  [1768917051.4317] device (vlan23)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 20 13:50:51 compute-1 NetworkManager[49104]: <info>  [1768917051.4321] device (vlan23)[Open vSwitch Port]: Activation: starting connection 'vlan23-port' (f12405d4-5941-409c-a742-833c3119839b)
Jan 20 13:50:51 compute-1 NetworkManager[49104]: <info>  [1768917051.4322] device (br-ex)[Open vSwitch Bridge]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 20 13:50:51 compute-1 NetworkManager[49104]: <info>  [1768917051.4325] device (br-ex)[Open vSwitch Bridge]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 20 13:50:51 compute-1 NetworkManager[49104]: <info>  [1768917051.4327] device (br-ex)[Open vSwitch Bridge]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 20 13:50:51 compute-1 NetworkManager[49104]: <info>  [1768917051.4333] device (br-ex)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 20 13:50:51 compute-1 NetworkManager[49104]: <warn>  [1768917051.4334] device (br-ex)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 20 13:50:51 compute-1 NetworkManager[49104]: <info>  [1768917051.4337] device (br-ex)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 20 13:50:51 compute-1 NetworkManager[49104]: <info>  [1768917051.4342] device (br-ex)[Open vSwitch Interface]: Activation: starting connection 'br-ex-if' (bffb5486-bddd-47a6-b940-1f626fe731a0)
Jan 20 13:50:51 compute-1 NetworkManager[49104]: <info>  [1768917051.4343] device (br-ex)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 20 13:50:51 compute-1 NetworkManager[49104]: <info>  [1768917051.4346] device (br-ex)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 20 13:50:51 compute-1 NetworkManager[49104]: <info>  [1768917051.4348] device (br-ex)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 20 13:50:51 compute-1 NetworkManager[49104]: <info>  [1768917051.4350] device (br-ex)[Open vSwitch Port]: Activation: connection 'br-ex-port' attached as port, continuing activation
Jan 20 13:50:51 compute-1 NetworkManager[49104]: <info>  [1768917051.4351] device (eth1): state change: activated -> deactivating (reason 'new-activation', managed-type: 'full')
Jan 20 13:50:51 compute-1 NetworkManager[49104]: <info>  [1768917051.4362] device (eth1): disconnecting for new activation request.
Jan 20 13:50:51 compute-1 NetworkManager[49104]: <info>  [1768917051.4363] device (eth1)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 20 13:50:51 compute-1 NetworkManager[49104]: <info>  [1768917051.4366] device (eth1)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 20 13:50:51 compute-1 NetworkManager[49104]: <info>  [1768917051.4367] device (eth1)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 20 13:50:51 compute-1 NetworkManager[49104]: <info>  [1768917051.4368] device (eth1)[Open vSwitch Port]: Activation: connection 'eth1-port' attached as port, continuing activation
Jan 20 13:50:51 compute-1 NetworkManager[49104]: <info>  [1768917051.4370] device (vlan20)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 20 13:50:51 compute-1 NetworkManager[49104]: <warn>  [1768917051.4371] device (vlan20)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 20 13:50:51 compute-1 NetworkManager[49104]: <info>  [1768917051.4374] device (vlan20)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 20 13:50:51 compute-1 NetworkManager[49104]: <info>  [1768917051.4377] device (vlan20)[Open vSwitch Interface]: Activation: starting connection 'vlan20-if' (93b33f82-f072-4e05-bf59-36be1960102b)
Jan 20 13:50:51 compute-1 NetworkManager[49104]: <info>  [1768917051.4378] device (vlan20)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 20 13:50:51 compute-1 NetworkManager[49104]: <info>  [1768917051.4380] device (vlan20)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 20 13:50:51 compute-1 NetworkManager[49104]: <info>  [1768917051.4382] device (vlan20)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 20 13:50:51 compute-1 NetworkManager[49104]: <info>  [1768917051.4383] device (vlan20)[Open vSwitch Port]: Activation: connection 'vlan20-port' attached as port, continuing activation
Jan 20 13:50:51 compute-1 NetworkManager[49104]: <info>  [1768917051.4385] device (vlan21)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 20 13:50:51 compute-1 NetworkManager[49104]: <warn>  [1768917051.4386] device (vlan21)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 20 13:50:51 compute-1 NetworkManager[49104]: <info>  [1768917051.4388] device (vlan21)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 20 13:50:51 compute-1 NetworkManager[49104]: <info>  [1768917051.4392] device (vlan21)[Open vSwitch Interface]: Activation: starting connection 'vlan21-if' (f30f8036-b523-4503-b0e1-ac5fe3a30f91)
Jan 20 13:50:51 compute-1 NetworkManager[49104]: <info>  [1768917051.4392] device (vlan21)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 20 13:50:51 compute-1 NetworkManager[49104]: <info>  [1768917051.4395] device (vlan21)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 20 13:50:51 compute-1 NetworkManager[49104]: <info>  [1768917051.4396] device (vlan21)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 20 13:50:51 compute-1 NetworkManager[49104]: <info>  [1768917051.4397] device (vlan21)[Open vSwitch Port]: Activation: connection 'vlan21-port' attached as port, continuing activation
Jan 20 13:50:51 compute-1 NetworkManager[49104]: <info>  [1768917051.4399] device (vlan22)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 20 13:50:51 compute-1 NetworkManager[49104]: <warn>  [1768917051.4400] device (vlan22)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 20 13:50:51 compute-1 NetworkManager[49104]: <info>  [1768917051.4403] device (vlan22)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 20 13:50:51 compute-1 NetworkManager[49104]: <info>  [1768917051.4406] device (vlan22)[Open vSwitch Interface]: Activation: starting connection 'vlan22-if' (a01ef58e-1207-40a7-95e9-e2335d145b40)
Jan 20 13:50:51 compute-1 NetworkManager[49104]: <info>  [1768917051.4407] device (vlan22)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 20 13:50:51 compute-1 NetworkManager[49104]: <info>  [1768917051.4409] device (vlan22)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 20 13:50:51 compute-1 NetworkManager[49104]: <info>  [1768917051.4411] device (vlan22)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 20 13:50:51 compute-1 NetworkManager[49104]: <info>  [1768917051.4411] device (vlan22)[Open vSwitch Port]: Activation: connection 'vlan22-port' attached as port, continuing activation
Jan 20 13:50:51 compute-1 NetworkManager[49104]: <info>  [1768917051.4414] device (vlan23)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 20 13:50:51 compute-1 NetworkManager[49104]: <warn>  [1768917051.4414] device (vlan23)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 20 13:50:51 compute-1 NetworkManager[49104]: <info>  [1768917051.4417] device (vlan23)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 20 13:50:51 compute-1 NetworkManager[49104]: <info>  [1768917051.4420] device (vlan23)[Open vSwitch Interface]: Activation: starting connection 'vlan23-if' (4e1f04f6-553e-40ff-a5c0-b9478175e86f)
Jan 20 13:50:51 compute-1 NetworkManager[49104]: <info>  [1768917051.4421] device (vlan23)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 20 13:50:51 compute-1 NetworkManager[49104]: <info>  [1768917051.4423] device (vlan23)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 20 13:50:51 compute-1 NetworkManager[49104]: <info>  [1768917051.4425] device (vlan23)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 20 13:50:51 compute-1 NetworkManager[49104]: <info>  [1768917051.4426] device (vlan23)[Open vSwitch Port]: Activation: connection 'vlan23-port' attached as port, continuing activation
Jan 20 13:50:51 compute-1 NetworkManager[49104]: <info>  [1768917051.4427] device (br-ex)[Open vSwitch Bridge]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 20 13:50:51 compute-1 NetworkManager[49104]: <info>  [1768917051.4439] audit: op="device-reapply" interface="eth0" ifindex=2 args="connection.autoconnect-priority,802-3-ethernet.mtu,ipv4.dhcp-timeout,ipv4.dhcp-client-id,ipv6.addr-gen-mode,ipv6.method" pid=51885 uid=0 result="success"
Jan 20 13:50:51 compute-1 NetworkManager[49104]: <info>  [1768917051.4440] device (br-ex)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 20 13:50:51 compute-1 NetworkManager[49104]: <info>  [1768917051.4443] device (br-ex)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 20 13:50:51 compute-1 NetworkManager[49104]: <info>  [1768917051.4444] device (br-ex)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 20 13:50:51 compute-1 NetworkManager[49104]: <info>  [1768917051.4455] device (br-ex)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 20 13:50:51 compute-1 NetworkManager[49104]: <info>  [1768917051.4458] device (eth1)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 20 13:50:51 compute-1 NetworkManager[49104]: <info>  [1768917051.4461] device (vlan20)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 20 13:50:51 compute-1 NetworkManager[49104]: <info>  [1768917051.4463] device (vlan20)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 20 13:50:51 compute-1 NetworkManager[49104]: <info>  [1768917051.4464] device (vlan20)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 20 13:50:51 compute-1 NetworkManager[49104]: <info>  [1768917051.4467] device (vlan20)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 20 13:50:51 compute-1 kernel: ovs-system: entered promiscuous mode
Jan 20 13:50:51 compute-1 NetworkManager[49104]: <info>  [1768917051.4478] device (vlan21)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 20 13:50:51 compute-1 NetworkManager[49104]: <info>  [1768917051.4481] device (vlan21)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 20 13:50:51 compute-1 NetworkManager[49104]: <info>  [1768917051.4482] device (vlan21)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 20 13:50:51 compute-1 NetworkManager[49104]: <info>  [1768917051.4485] device (vlan21)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 20 13:50:51 compute-1 NetworkManager[49104]: <info>  [1768917051.4487] device (vlan22)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 20 13:50:51 compute-1 NetworkManager[49104]: <info>  [1768917051.4490] device (vlan22)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 20 13:50:51 compute-1 NetworkManager[49104]: <info>  [1768917051.4491] device (vlan22)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 20 13:50:51 compute-1 NetworkManager[49104]: <info>  [1768917051.4495] device (vlan22)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 20 13:50:51 compute-1 NetworkManager[49104]: <info>  [1768917051.4497] device (vlan23)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 20 13:50:51 compute-1 NetworkManager[49104]: <info>  [1768917051.4500] device (vlan23)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 20 13:50:51 compute-1 systemd-udevd[51890]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 13:50:51 compute-1 kernel: Timeout policy base is empty
Jan 20 13:50:51 compute-1 NetworkManager[49104]: <info>  [1768917051.4502] device (vlan23)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 20 13:50:51 compute-1 NetworkManager[49104]: <info>  [1768917051.4506] device (vlan23)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 20 13:50:51 compute-1 NetworkManager[49104]: <info>  [1768917051.4509] dhcp4 (eth0): canceled DHCP transaction
Jan 20 13:50:51 compute-1 NetworkManager[49104]: <info>  [1768917051.4510] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 20 13:50:51 compute-1 NetworkManager[49104]: <info>  [1768917051.4510] dhcp4 (eth0): state changed no lease
Jan 20 13:50:51 compute-1 NetworkManager[49104]: <info>  [1768917051.4511] dhcp4 (eth0): activation: beginning transaction (no timeout)
Jan 20 13:50:51 compute-1 NetworkManager[49104]: <info>  [1768917051.4522] device (br-ex)[Open vSwitch Interface]: Activation: connection 'br-ex-if' attached as port, continuing activation
Jan 20 13:50:51 compute-1 NetworkManager[49104]: <info>  [1768917051.4528] audit: op="device-reapply" interface="eth1" ifindex=3 pid=51885 uid=0 result="fail" reason="Device is not activated"
Jan 20 13:50:51 compute-1 NetworkManager[49104]: <info>  [1768917051.4532] device (vlan20)[Open vSwitch Interface]: Activation: connection 'vlan20-if' attached as port, continuing activation
Jan 20 13:50:51 compute-1 NetworkManager[49104]: <info>  [1768917051.4538] device (vlan21)[Open vSwitch Interface]: Activation: connection 'vlan21-if' attached as port, continuing activation
Jan 20 13:50:51 compute-1 NetworkManager[49104]: <info>  [1768917051.4545] device (vlan22)[Open vSwitch Interface]: Activation: connection 'vlan22-if' attached as port, continuing activation
Jan 20 13:50:51 compute-1 NetworkManager[49104]: <info>  [1768917051.4552] device (vlan23)[Open vSwitch Interface]: Activation: connection 'vlan23-if' attached as port, continuing activation
Jan 20 13:50:51 compute-1 NetworkManager[49104]: <info>  [1768917051.4554] dhcp4 (eth0): state changed new lease, address=38.102.83.169
Jan 20 13:50:51 compute-1 NetworkManager[49104]: <info>  [1768917051.4596] device (eth1): disconnecting for new activation request.
Jan 20 13:50:51 compute-1 NetworkManager[49104]: <info>  [1768917051.4597] audit: op="connection-activate" uuid="1877dc82-ca8e-52d6-b413-dd9d07823d2d" name="ci-private-network" pid=51885 uid=0 result="success"
Jan 20 13:50:51 compute-1 systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 20 13:50:51 compute-1 NetworkManager[49104]: <info>  [1768917051.4652] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51885 uid=0 result="success"
Jan 20 13:50:51 compute-1 systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 20 13:50:51 compute-1 NetworkManager[49104]: <info>  [1768917051.4731] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Jan 20 13:50:51 compute-1 NetworkManager[49104]: <info>  [1768917051.4831] device (eth1): Activation: starting connection 'ci-private-network' (1877dc82-ca8e-52d6-b413-dd9d07823d2d)
Jan 20 13:50:51 compute-1 NetworkManager[49104]: <info>  [1768917051.4837] device (br-ex)[Open vSwitch Bridge]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 20 13:50:51 compute-1 NetworkManager[49104]: <info>  [1768917051.4847] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 20 13:50:51 compute-1 NetworkManager[49104]: <info>  [1768917051.4852] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 20 13:50:51 compute-1 NetworkManager[49104]: <info>  [1768917051.4859] device (br-ex)[Open vSwitch Bridge]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 20 13:50:51 compute-1 NetworkManager[49104]: <info>  [1768917051.4862] device (br-ex)[Open vSwitch Bridge]: Activation: successful, device activated.
Jan 20 13:50:51 compute-1 NetworkManager[49104]: <info>  [1768917051.4867] device (br-ex)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 20 13:50:51 compute-1 NetworkManager[49104]: <info>  [1768917051.4869] device (eth1)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 20 13:50:51 compute-1 NetworkManager[49104]: <info>  [1768917051.4871] device (vlan20)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 20 13:50:51 compute-1 NetworkManager[49104]: <info>  [1768917051.4872] device (vlan21)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 20 13:50:51 compute-1 NetworkManager[49104]: <info>  [1768917051.4873] device (vlan22)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 20 13:50:51 compute-1 NetworkManager[49104]: <info>  [1768917051.4875] device (vlan23)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 20 13:50:51 compute-1 NetworkManager[49104]: <info>  [1768917051.4889] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 20 13:50:51 compute-1 NetworkManager[49104]: <info>  [1768917051.4901] device (br-ex)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 20 13:50:51 compute-1 NetworkManager[49104]: <info>  [1768917051.4905] device (br-ex)[Open vSwitch Port]: Activation: successful, device activated.
Jan 20 13:50:51 compute-1 NetworkManager[49104]: <info>  [1768917051.4909] device (eth1)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 20 13:50:51 compute-1 NetworkManager[49104]: <info>  [1768917051.4912] device (eth1)[Open vSwitch Port]: Activation: successful, device activated.
Jan 20 13:50:51 compute-1 NetworkManager[49104]: <info>  [1768917051.4916] device (vlan20)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 20 13:50:51 compute-1 NetworkManager[49104]: <info>  [1768917051.4920] device (vlan20)[Open vSwitch Port]: Activation: successful, device activated.
Jan 20 13:50:51 compute-1 NetworkManager[49104]: <info>  [1768917051.4925] device (vlan21)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 20 13:50:51 compute-1 NetworkManager[49104]: <info>  [1768917051.4929] device (vlan21)[Open vSwitch Port]: Activation: successful, device activated.
Jan 20 13:50:51 compute-1 NetworkManager[49104]: <info>  [1768917051.4934] device (vlan22)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 20 13:50:51 compute-1 NetworkManager[49104]: <info>  [1768917051.4937] device (vlan22)[Open vSwitch Port]: Activation: successful, device activated.
Jan 20 13:50:51 compute-1 NetworkManager[49104]: <info>  [1768917051.4942] device (vlan23)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 20 13:50:51 compute-1 NetworkManager[49104]: <info>  [1768917051.4945] device (vlan23)[Open vSwitch Port]: Activation: successful, device activated.
Jan 20 13:50:51 compute-1 kernel: br-ex: entered promiscuous mode
Jan 20 13:50:51 compute-1 NetworkManager[49104]: <info>  [1768917051.4950] device (eth1): Activation: connection 'ci-private-network' attached as port, continuing activation
Jan 20 13:50:51 compute-1 NetworkManager[49104]: <info>  [1768917051.4954] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 20 13:50:51 compute-1 NetworkManager[49104]: <info>  [1768917051.5027] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 20 13:50:51 compute-1 NetworkManager[49104]: <info>  [1768917051.5029] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 20 13:50:51 compute-1 NetworkManager[49104]: <info>  [1768917051.5035] device (eth1): Activation: successful, device activated.
Jan 20 13:50:51 compute-1 kernel: vlan22: entered promiscuous mode
Jan 20 13:50:51 compute-1 kernel: virtio_net virtio5 eth1: entered promiscuous mode
Jan 20 13:50:51 compute-1 systemd-udevd[51889]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 13:50:51 compute-1 NetworkManager[49104]: <info>  [1768917051.5121] device (br-ex)[Open vSwitch Interface]: carrier: link connected
Jan 20 13:50:51 compute-1 NetworkManager[49104]: <info>  [1768917051.5130] device (br-ex)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 20 13:50:51 compute-1 kernel: vlan20: entered promiscuous mode
Jan 20 13:50:51 compute-1 NetworkManager[49104]: <info>  [1768917051.5171] device (br-ex)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 20 13:50:51 compute-1 NetworkManager[49104]: <info>  [1768917051.5173] device (br-ex)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 20 13:50:51 compute-1 NetworkManager[49104]: <info>  [1768917051.5178] device (br-ex)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 20 13:50:51 compute-1 NetworkManager[49104]: <info>  [1768917051.5236] device (vlan22)[Open vSwitch Interface]: carrier: link connected
Jan 20 13:50:51 compute-1 NetworkManager[49104]: <info>  [1768917051.5250] device (vlan22)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 20 13:50:51 compute-1 kernel: vlan23: entered promiscuous mode
Jan 20 13:50:51 compute-1 NetworkManager[49104]: <info>  [1768917051.5278] device (vlan20)[Open vSwitch Interface]: carrier: link connected
Jan 20 13:50:51 compute-1 systemd-udevd[51891]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 13:50:51 compute-1 NetworkManager[49104]: <info>  [1768917051.5296] device (vlan22)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 20 13:50:51 compute-1 NetworkManager[49104]: <info>  [1768917051.5301] device (vlan20)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 20 13:50:51 compute-1 NetworkManager[49104]: <info>  [1768917051.5313] device (vlan22)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 20 13:50:51 compute-1 NetworkManager[49104]: <info>  [1768917051.5320] device (vlan22)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 20 13:50:51 compute-1 kernel: vlan21: entered promiscuous mode
Jan 20 13:50:51 compute-1 NetworkManager[49104]: <info>  [1768917051.5394] device (vlan20)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 20 13:50:51 compute-1 NetworkManager[49104]: <info>  [1768917051.5407] device (vlan20)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 20 13:50:51 compute-1 NetworkManager[49104]: <info>  [1768917051.5416] device (vlan20)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 20 13:50:51 compute-1 NetworkManager[49104]: <info>  [1768917051.5467] device (vlan23)[Open vSwitch Interface]: carrier: link connected
Jan 20 13:50:51 compute-1 NetworkManager[49104]: <info>  [1768917051.5474] device (vlan21)[Open vSwitch Interface]: carrier: link connected
Jan 20 13:50:51 compute-1 NetworkManager[49104]: <info>  [1768917051.5490] device (vlan23)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 20 13:50:51 compute-1 NetworkManager[49104]: <info>  [1768917051.5504] device (vlan21)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 20 13:50:51 compute-1 NetworkManager[49104]: <info>  [1768917051.5513] device (vlan23)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 20 13:50:51 compute-1 NetworkManager[49104]: <info>  [1768917051.5515] device (vlan23)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 20 13:50:51 compute-1 NetworkManager[49104]: <info>  [1768917051.5519] device (vlan23)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 20 13:50:51 compute-1 NetworkManager[49104]: <info>  [1768917051.5526] device (vlan21)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 20 13:50:51 compute-1 NetworkManager[49104]: <info>  [1768917051.5527] device (vlan21)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 20 13:50:51 compute-1 NetworkManager[49104]: <info>  [1768917051.5532] device (vlan21)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 20 13:50:52 compute-1 NetworkManager[49104]: <info>  [1768917052.6640] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51885 uid=0 result="success"
Jan 20 13:50:52 compute-1 sudo[52240]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lqmxzhrwuzuqjwkdxzwwmlqoawmkhlma ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917052.3704562-846-104043141883631/AnsiballZ_async_status.py'
Jan 20 13:50:52 compute-1 sudo[52240]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:50:52 compute-1 NetworkManager[49104]: <info>  [1768917052.8685] checkpoint[0x55e28ae6c950]: destroy /org/freedesktop/NetworkManager/Checkpoint/1
Jan 20 13:50:52 compute-1 NetworkManager[49104]: <info>  [1768917052.8687] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51885 uid=0 result="success"
Jan 20 13:50:52 compute-1 python3.9[52242]: ansible-ansible.legacy.async_status Invoked with jid=j766640452522.51879 mode=status _async_dir=/root/.ansible_async
Jan 20 13:50:52 compute-1 sudo[52240]: pam_unix(sudo:session): session closed for user root
Jan 20 13:50:53 compute-1 NetworkManager[49104]: <info>  [1768917053.2526] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51885 uid=0 result="success"
Jan 20 13:50:53 compute-1 NetworkManager[49104]: <info>  [1768917053.2546] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51885 uid=0 result="success"
Jan 20 13:50:53 compute-1 NetworkManager[49104]: <info>  [1768917053.5714] audit: op="networking-control" arg="global-dns-configuration" pid=51885 uid=0 result="success"
Jan 20 13:50:53 compute-1 NetworkManager[49104]: <info>  [1768917053.5757] config: signal: SET_VALUES,values,values-intern,global-dns-config (/etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf)
Jan 20 13:50:53 compute-1 NetworkManager[49104]: <info>  [1768917053.5799] audit: op="networking-control" arg="global-dns-configuration" pid=51885 uid=0 result="success"
Jan 20 13:50:53 compute-1 NetworkManager[49104]: <info>  [1768917053.5830] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51885 uid=0 result="success"
Jan 20 13:50:53 compute-1 NetworkManager[49104]: <info>  [1768917053.7125] checkpoint[0x55e28ae6ca20]: destroy /org/freedesktop/NetworkManager/Checkpoint/2
Jan 20 13:50:53 compute-1 NetworkManager[49104]: <info>  [1768917053.7128] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51885 uid=0 result="success"
Jan 20 13:50:53 compute-1 ansible-async_wrapper.py[51883]: Module complete (51883)
Jan 20 13:50:54 compute-1 ansible-async_wrapper.py[51882]: Done in kid B.
Jan 20 13:50:56 compute-1 sudo[52346]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hchvaoisfhtqvdlitctweoxdswmhjmzn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917052.3704562-846-104043141883631/AnsiballZ_async_status.py'
Jan 20 13:50:56 compute-1 sudo[52346]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:50:56 compute-1 python3.9[52348]: ansible-ansible.legacy.async_status Invoked with jid=j766640452522.51879 mode=status _async_dir=/root/.ansible_async
Jan 20 13:50:56 compute-1 sudo[52346]: pam_unix(sudo:session): session closed for user root
Jan 20 13:50:57 compute-1 sudo[52446]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jvpmspwtjnrocalqkuhnywatzuzykcvk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917052.3704562-846-104043141883631/AnsiballZ_async_status.py'
Jan 20 13:50:57 compute-1 sudo[52446]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:50:58 compute-1 python3.9[52448]: ansible-ansible.legacy.async_status Invoked with jid=j766640452522.51879 mode=cleanup _async_dir=/root/.ansible_async
Jan 20 13:50:58 compute-1 sudo[52446]: pam_unix(sudo:session): session closed for user root
Jan 20 13:50:58 compute-1 sudo[52598]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xhodjraokythtkmvwuccekuoblucjcee ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917058.345282-927-57926163870785/AnsiballZ_stat.py'
Jan 20 13:50:58 compute-1 sudo[52598]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:50:58 compute-1 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 20 13:50:58 compute-1 python3.9[52600]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 13:50:58 compute-1 sudo[52598]: pam_unix(sudo:session): session closed for user root
Jan 20 13:50:59 compute-1 sudo[52723]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kxpheylxdicjweylyusxzttlmutjvabx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917058.345282-927-57926163870785/AnsiballZ_copy.py'
Jan 20 13:50:59 compute-1 sudo[52723]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:50:59 compute-1 python3.9[52726]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/os-net-config.returncode mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1768917058.345282-927-57926163870785/.source.returncode _original_basename=.i64e69rx follow=False checksum=b6589fc6ab0dc82cf12099d1c2d40ab994e8410c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:50:59 compute-1 sudo[52723]: pam_unix(sudo:session): session closed for user root
Jan 20 13:51:00 compute-1 sudo[52876]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sstaqisbpjfzrvwjfxajcrhynbmavsjw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917060.2055712-975-227172950204853/AnsiballZ_stat.py'
Jan 20 13:51:00 compute-1 sudo[52876]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:51:00 compute-1 python3.9[52878]: ansible-ansible.legacy.stat Invoked with path=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 13:51:00 compute-1 sudo[52876]: pam_unix(sudo:session): session closed for user root
Jan 20 13:51:01 compute-1 sudo[52999]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wmqnufjslribilgkblssckqkrfvheviv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917060.2055712-975-227172950204853/AnsiballZ_copy.py'
Jan 20 13:51:01 compute-1 sudo[52999]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:51:01 compute-1 python3.9[53001]: ansible-ansible.legacy.copy Invoked with dest=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1768917060.2055712-975-227172950204853/.source.cfg _original_basename=.8p5lpu28 follow=False checksum=f3c5952a9cd4c6c31b314b25eb897168971cc86e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:51:01 compute-1 sudo[52999]: pam_unix(sudo:session): session closed for user root
Jan 20 13:51:02 compute-1 sudo[53151]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dxidxndakfhiarfxsopgjpizfftnnshm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917061.6601164-1020-172072319774739/AnsiballZ_systemd.py'
Jan 20 13:51:02 compute-1 sudo[53151]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:51:02 compute-1 python3.9[53153]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 20 13:51:02 compute-1 systemd[1]: Reloading Network Manager...
Jan 20 13:51:02 compute-1 NetworkManager[49104]: <info>  [1768917062.4247] audit: op="reload" arg="0" pid=53157 uid=0 result="success"
Jan 20 13:51:02 compute-1 NetworkManager[49104]: <info>  [1768917062.4258] config: signal: SIGHUP,config-files,values,values-user,no-auto-default (/etc/NetworkManager/NetworkManager.conf, /usr/lib/NetworkManager/conf.d/00-server.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /var/lib/NetworkManager/NetworkManager-intern.conf)
Jan 20 13:51:02 compute-1 systemd[1]: Reloaded Network Manager.
Jan 20 13:51:02 compute-1 sudo[53151]: pam_unix(sudo:session): session closed for user root
Jan 20 13:51:02 compute-1 sshd-session[45107]: Connection closed by 192.168.122.30 port 60406
Jan 20 13:51:02 compute-1 sshd-session[45104]: pam_unix(sshd:session): session closed for user zuul
Jan 20 13:51:02 compute-1 systemd[1]: session-11.scope: Deactivated successfully.
Jan 20 13:51:02 compute-1 systemd[1]: session-11.scope: Consumed 54.490s CPU time.
Jan 20 13:51:02 compute-1 systemd-logind[783]: Session 11 logged out. Waiting for processes to exit.
Jan 20 13:51:02 compute-1 systemd-logind[783]: Removed session 11.
Jan 20 13:51:08 compute-1 sshd-session[53188]: Accepted publickey for zuul from 192.168.122.30 port 53478 ssh2: ECDSA SHA256:Yw0kyD5N4lqNgr1J3b5cYIIxKFrTRY8zW6kk+n6imz4
Jan 20 13:51:08 compute-1 systemd-logind[783]: New session 12 of user zuul.
Jan 20 13:51:08 compute-1 systemd[1]: Started Session 12 of User zuul.
Jan 20 13:51:08 compute-1 sshd-session[53188]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 20 13:51:09 compute-1 python3.9[53341]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 13:51:10 compute-1 python3.9[53496]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 20 13:51:12 compute-1 python3.9[53689]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 13:51:12 compute-1 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 20 13:51:12 compute-1 sshd-session[53191]: Connection closed by 192.168.122.30 port 53478
Jan 20 13:51:12 compute-1 sshd-session[53188]: pam_unix(sshd:session): session closed for user zuul
Jan 20 13:51:12 compute-1 systemd[1]: session-12.scope: Deactivated successfully.
Jan 20 13:51:12 compute-1 systemd[1]: session-12.scope: Consumed 2.835s CPU time.
Jan 20 13:51:12 compute-1 systemd-logind[783]: Session 12 logged out. Waiting for processes to exit.
Jan 20 13:51:12 compute-1 systemd-logind[783]: Removed session 12.
Jan 20 13:51:16 compute-1 sshd-session[53717]: Invalid user rebecca from 116.99.171.211 port 41518
Jan 20 13:51:16 compute-1 sshd-session[53717]: Connection closed by invalid user rebecca 116.99.171.211 port 41518 [preauth]
Jan 20 13:51:18 compute-1 sshd-session[53720]: Accepted publickey for zuul from 192.168.122.30 port 58884 ssh2: ECDSA SHA256:Yw0kyD5N4lqNgr1J3b5cYIIxKFrTRY8zW6kk+n6imz4
Jan 20 13:51:18 compute-1 systemd-logind[783]: New session 13 of user zuul.
Jan 20 13:51:18 compute-1 systemd[1]: Started Session 13 of User zuul.
Jan 20 13:51:18 compute-1 sshd-session[53720]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 20 13:51:19 compute-1 python3.9[53874]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 13:51:20 compute-1 python3.9[54028]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 13:51:21 compute-1 sudo[54182]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-admhntkgeduoeknkcngysglqprhbzsxi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917081.4128323-81-89599850682598/AnsiballZ_setup.py'
Jan 20 13:51:21 compute-1 sudo[54182]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:51:22 compute-1 python3.9[54184]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 20 13:51:22 compute-1 sudo[54182]: pam_unix(sudo:session): session closed for user root
Jan 20 13:51:22 compute-1 sudo[54267]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uiyekqwkquifmsxkqusrccbhlquyrkmd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917081.4128323-81-89599850682598/AnsiballZ_dnf.py'
Jan 20 13:51:22 compute-1 sudo[54267]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:51:22 compute-1 python3.9[54269]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 20 13:51:24 compute-1 sudo[54267]: pam_unix(sudo:session): session closed for user root
Jan 20 13:51:24 compute-1 sudo[54420]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-baclgjrhupakpxehsxkkqyphxghslbzr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917084.2312448-117-190033480189112/AnsiballZ_setup.py'
Jan 20 13:51:24 compute-1 sudo[54420]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:51:24 compute-1 python3.9[54422]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 20 13:51:25 compute-1 sudo[54420]: pam_unix(sudo:session): session closed for user root
Jan 20 13:51:26 compute-1 sudo[54615]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ltgyqjhhptkvzgneisrsiwmfvdkskmlh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917085.5548437-150-216229015049802/AnsiballZ_file.py'
Jan 20 13:51:26 compute-1 sudo[54615]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:51:26 compute-1 python3.9[54617]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:51:26 compute-1 sudo[54615]: pam_unix(sudo:session): session closed for user root
Jan 20 13:51:26 compute-1 sudo[54767]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oiagbbyngkltfxlvnatbmwwlzekictle ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917086.5548828-174-80385547046092/AnsiballZ_command.py'
Jan 20 13:51:27 compute-1 sudo[54767]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:51:27 compute-1 python3.9[54769]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 13:51:27 compute-1 systemd[1]: var-lib-containers-storage-overlay-compat1421343422-merged.mount: Deactivated successfully.
Jan 20 13:51:28 compute-1 podman[54770]: 2026-01-20 13:51:28.175347664 +0000 UTC m=+0.906507583 system refresh
Jan 20 13:51:28 compute-1 sudo[54767]: pam_unix(sudo:session): session closed for user root
Jan 20 13:51:28 compute-1 sudo[54930]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xqfcqtqwxuogsksqhkcklwbyzibdpvng ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917088.4544141-198-41018304539283/AnsiballZ_stat.py'
Jan 20 13:51:28 compute-1 sudo[54930]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:51:28 compute-1 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 20 13:51:29 compute-1 python3.9[54932]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 13:51:29 compute-1 sudo[54930]: pam_unix(sudo:session): session closed for user root
Jan 20 13:51:29 compute-1 sudo[55053]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kqsijixmymyhdnnumsoxqlpuqcygjekw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917088.4544141-198-41018304539283/AnsiballZ_copy.py'
Jan 20 13:51:29 compute-1 sudo[55053]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:51:29 compute-1 python3.9[55055]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/networks/podman.json group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768917088.4544141-198-41018304539283/.source.json follow=False _original_basename=podman_network_config.j2 checksum=58244ca78685d57d88d55cdba14b67b67cf92d49 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:51:29 compute-1 sudo[55053]: pam_unix(sudo:session): session closed for user root
Jan 20 13:51:30 compute-1 sudo[55205]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ucqigrxowtlygzrpuweagcqgkenetxrj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917090.164239-243-121851674105802/AnsiballZ_stat.py'
Jan 20 13:51:30 compute-1 sudo[55205]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:51:30 compute-1 python3.9[55207]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 13:51:30 compute-1 sudo[55205]: pam_unix(sudo:session): session closed for user root
Jan 20 13:51:31 compute-1 sudo[55328]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tytlfmefrnuyxqlhgweljtujclspcaeh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917090.164239-243-121851674105802/AnsiballZ_copy.py'
Jan 20 13:51:31 compute-1 sudo[55328]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:51:31 compute-1 python3.9[55330]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1768917090.164239-243-121851674105802/.source.conf follow=False _original_basename=registries.conf.j2 checksum=88781afee5b5da15b4e5a77559a69fa53d49a457 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 20 13:51:31 compute-1 sudo[55328]: pam_unix(sudo:session): session closed for user root
Jan 20 13:51:32 compute-1 sudo[55480]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jxkbannszmiuglvdslcfsjudvzrfdcmg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917091.7508469-291-123767718589287/AnsiballZ_ini_file.py'
Jan 20 13:51:32 compute-1 sudo[55480]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:51:32 compute-1 python3.9[55482]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 20 13:51:32 compute-1 sudo[55480]: pam_unix(sudo:session): session closed for user root
Jan 20 13:51:33 compute-1 sudo[55632]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kifkrazzfodrhouzyktooqecfnqwkwse ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917092.685878-291-186650348340935/AnsiballZ_ini_file.py'
Jan 20 13:51:33 compute-1 sudo[55632]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:51:33 compute-1 python3.9[55634]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 20 13:51:33 compute-1 sudo[55632]: pam_unix(sudo:session): session closed for user root
Jan 20 13:51:33 compute-1 sudo[55784]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-blubxdiomaloygmvfxnsfmkszruqmhet ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917093.4280138-291-113523004782989/AnsiballZ_ini_file.py'
Jan 20 13:51:33 compute-1 sudo[55784]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:51:34 compute-1 python3.9[55786]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 20 13:51:34 compute-1 sudo[55784]: pam_unix(sudo:session): session closed for user root
Jan 20 13:51:34 compute-1 sudo[55936]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ubqgustumjxadbavzsratwhfwvufokpj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917094.1709895-291-141677183537471/AnsiballZ_ini_file.py'
Jan 20 13:51:34 compute-1 sudo[55936]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:51:34 compute-1 python3.9[55938]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 20 13:51:34 compute-1 sudo[55936]: pam_unix(sudo:session): session closed for user root
Jan 20 13:51:35 compute-1 sudo[56088]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wqutptzaixhdxlzrqymvgpjdudcjzjus ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917095.0669515-384-98370041020568/AnsiballZ_dnf.py'
Jan 20 13:51:35 compute-1 sudo[56088]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:51:35 compute-1 python3.9[56090]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 20 13:51:36 compute-1 sudo[56088]: pam_unix(sudo:session): session closed for user root
Jan 20 13:51:37 compute-1 sudo[56241]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cjnykovnowpquuqrzqfentmdfmqnxdwy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917097.425469-417-69520632620688/AnsiballZ_setup.py'
Jan 20 13:51:37 compute-1 sudo[56241]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:51:38 compute-1 python3.9[56243]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 13:51:38 compute-1 sudo[56241]: pam_unix(sudo:session): session closed for user root
Jan 20 13:51:38 compute-1 sudo[56395]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kizrkrgipbdrnapfwlozbaxxisghonxq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917098.3460572-441-216213498282980/AnsiballZ_stat.py'
Jan 20 13:51:38 compute-1 sudo[56395]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:51:38 compute-1 python3.9[56397]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 20 13:51:38 compute-1 sudo[56395]: pam_unix(sudo:session): session closed for user root
Jan 20 13:51:39 compute-1 sudo[56547]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mlplpoobhszqmigzcntykqtrrvwfngji ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917099.1845596-468-7584843719916/AnsiballZ_stat.py'
Jan 20 13:51:39 compute-1 sudo[56547]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:51:39 compute-1 python3.9[56549]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 20 13:51:39 compute-1 sudo[56547]: pam_unix(sudo:session): session closed for user root
Jan 20 13:51:40 compute-1 sudo[56699]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-evvglhoxaciefeuvedbigztmiebilgmk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917100.2618475-498-205211479872561/AnsiballZ_command.py'
Jan 20 13:51:40 compute-1 sudo[56699]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:51:40 compute-1 python3.9[56701]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 13:51:40 compute-1 sudo[56699]: pam_unix(sudo:session): session closed for user root
Jan 20 13:51:41 compute-1 sudo[56852]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jpkhsyggkuafwvzqmiwfqbpghgjievdt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917101.2555745-528-81450408911485/AnsiballZ_service_facts.py'
Jan 20 13:51:41 compute-1 sudo[56852]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:51:42 compute-1 python3.9[56854]: ansible-service_facts Invoked
Jan 20 13:51:42 compute-1 network[56871]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 20 13:51:42 compute-1 network[56872]: 'network-scripts' will be removed from distribution in near future.
Jan 20 13:51:42 compute-1 network[56873]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 20 13:51:45 compute-1 sudo[56852]: pam_unix(sudo:session): session closed for user root
Jan 20 13:51:47 compute-1 sudo[57156]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mwcwxtssclvmpkkhixfehkooivxwvamg ; /bin/bash /home/zuul/.ansible/tmp/ansible-tmp-1768917106.6591554-573-136933415343750/AnsiballZ_timesync_provider.sh /home/zuul/.ansible/tmp/ansible-tmp-1768917106.6591554-573-136933415343750/args'
Jan 20 13:51:47 compute-1 sudo[57156]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:51:47 compute-1 sudo[57156]: pam_unix(sudo:session): session closed for user root
Jan 20 13:51:47 compute-1 sudo[57325]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jldvdhngedphuavrdatetcjbqeunjdlr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917107.4486325-606-141903757712091/AnsiballZ_dnf.py'
Jan 20 13:51:47 compute-1 sudo[57325]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:51:48 compute-1 python3.9[57327]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 20 13:51:48 compute-1 sshd-session[57186]: Invalid user test from 116.99.171.211 port 48440
Jan 20 13:51:48 compute-1 sshd-session[57186]: Connection closed by invalid user test 116.99.171.211 port 48440 [preauth]
Jan 20 13:51:49 compute-1 sudo[57325]: pam_unix(sudo:session): session closed for user root
Jan 20 13:51:50 compute-1 sudo[57478]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fflrccqndvzvdswfxvdfedvxekmxxhhk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917109.8521538-645-264046909362758/AnsiballZ_package_facts.py'
Jan 20 13:51:50 compute-1 sudo[57478]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:51:50 compute-1 python3.9[57480]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Jan 20 13:51:50 compute-1 sudo[57478]: pam_unix(sudo:session): session closed for user root
Jan 20 13:51:52 compute-1 sudo[57630]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wynzsdxputntnsvvndjmiujoimvptosi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917111.767435-675-57503337461256/AnsiballZ_stat.py'
Jan 20 13:51:52 compute-1 sudo[57630]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:51:52 compute-1 python3.9[57632]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 13:51:52 compute-1 sudo[57630]: pam_unix(sudo:session): session closed for user root
Jan 20 13:51:52 compute-1 sudo[57755]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pmanfzhamxeuohigpdzmfhlrhvhbbvnx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917111.767435-675-57503337461256/AnsiballZ_copy.py'
Jan 20 13:51:52 compute-1 sudo[57755]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:51:52 compute-1 python3.9[57757]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/chrony.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1768917111.767435-675-57503337461256/.source.conf follow=False _original_basename=chrony.conf.j2 checksum=cfb003e56d02d0d2c65555452eb1a05073fecdad force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:51:53 compute-1 sudo[57755]: pam_unix(sudo:session): session closed for user root
Jan 20 13:51:53 compute-1 sudo[57909]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yvtifnihnylebuytrkzeeavyjivdxlez ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917113.284382-721-93064516323419/AnsiballZ_stat.py'
Jan 20 13:51:53 compute-1 sudo[57909]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:51:53 compute-1 python3.9[57911]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 13:51:53 compute-1 sudo[57909]: pam_unix(sudo:session): session closed for user root
Jan 20 13:51:54 compute-1 sudo[58034]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zrpsasdnrtuuiulknwdrffwunfqrnnfs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917113.284382-721-93064516323419/AnsiballZ_copy.py'
Jan 20 13:51:54 compute-1 sudo[58034]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:51:54 compute-1 python3.9[58036]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/sysconfig/chronyd mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1768917113.284382-721-93064516323419/.source follow=False _original_basename=chronyd.sysconfig.j2 checksum=dd196b1ff1f915b23eebc37ec77405b5dd3df76c force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:51:54 compute-1 sudo[58034]: pam_unix(sudo:session): session closed for user root
Jan 20 13:51:56 compute-1 sudo[58188]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dletvahrgfpuwydjfgwobqsohbsizfww ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917115.666465-784-73976698808598/AnsiballZ_lineinfile.py'
Jan 20 13:51:56 compute-1 sudo[58188]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:51:56 compute-1 python3.9[58190]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:51:56 compute-1 sudo[58188]: pam_unix(sudo:session): session closed for user root
Jan 20 13:51:57 compute-1 sudo[58342]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fvbmrqjhdsmjobtgaaowgdmfdhudltga ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917117.5844328-829-103002844179041/AnsiballZ_setup.py'
Jan 20 13:51:57 compute-1 sudo[58342]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:51:58 compute-1 python3.9[58344]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 20 13:51:58 compute-1 sudo[58342]: pam_unix(sudo:session): session closed for user root
Jan 20 13:51:59 compute-1 sudo[58426]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-libjddbtoexeijvtamthfeispqdpflsy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917117.5844328-829-103002844179041/AnsiballZ_systemd.py'
Jan 20 13:51:59 compute-1 sudo[58426]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:51:59 compute-1 python3.9[58428]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 20 13:51:59 compute-1 sudo[58426]: pam_unix(sudo:session): session closed for user root
Jan 20 13:52:00 compute-1 sudo[58580]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zjrwhxujqgnkibouwdvagxuoniwqahkm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917120.3952835-877-180125955596101/AnsiballZ_setup.py'
Jan 20 13:52:00 compute-1 sudo[58580]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:52:01 compute-1 python3.9[58582]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 20 13:52:01 compute-1 sudo[58580]: pam_unix(sudo:session): session closed for user root
Jan 20 13:52:01 compute-1 sudo[58664]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hekfxthgxhvsdlyjgtwpuanrhcibrbcn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917120.3952835-877-180125955596101/AnsiballZ_systemd.py'
Jan 20 13:52:01 compute-1 sudo[58664]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:52:01 compute-1 python3.9[58666]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 20 13:52:01 compute-1 chronyd[796]: chronyd exiting
Jan 20 13:52:01 compute-1 systemd[1]: Stopping NTP client/server...
Jan 20 13:52:01 compute-1 systemd[1]: chronyd.service: Deactivated successfully.
Jan 20 13:52:01 compute-1 systemd[1]: Stopped NTP client/server.
Jan 20 13:52:01 compute-1 systemd[1]: Starting NTP client/server...
Jan 20 13:52:02 compute-1 chronyd[58675]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Jan 20 13:52:02 compute-1 chronyd[58675]: Frequency -28.387 +/- 0.221 ppm read from /var/lib/chrony/drift
Jan 20 13:52:02 compute-1 chronyd[58675]: Loaded seccomp filter (level 2)
Jan 20 13:52:02 compute-1 systemd[1]: Started NTP client/server.
Jan 20 13:52:02 compute-1 sudo[58664]: pam_unix(sudo:session): session closed for user root
Jan 20 13:52:02 compute-1 sshd-session[53723]: Connection closed by 192.168.122.30 port 58884
Jan 20 13:52:02 compute-1 sshd-session[53720]: pam_unix(sshd:session): session closed for user zuul
Jan 20 13:52:02 compute-1 systemd[1]: session-13.scope: Deactivated successfully.
Jan 20 13:52:02 compute-1 systemd[1]: session-13.scope: Consumed 29.746s CPU time.
Jan 20 13:52:02 compute-1 systemd-logind[783]: Session 13 logged out. Waiting for processes to exit.
Jan 20 13:52:02 compute-1 systemd-logind[783]: Removed session 13.
Jan 20 13:52:08 compute-1 sshd-session[58701]: Accepted publickey for zuul from 192.168.122.30 port 47948 ssh2: ECDSA SHA256:Yw0kyD5N4lqNgr1J3b5cYIIxKFrTRY8zW6kk+n6imz4
Jan 20 13:52:08 compute-1 systemd-logind[783]: New session 14 of user zuul.
Jan 20 13:52:08 compute-1 systemd[1]: Started Session 14 of User zuul.
Jan 20 13:52:08 compute-1 sshd-session[58701]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 20 13:52:09 compute-1 sudo[58854]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lnbpacrjfmzolzsgvqegvhmnelugvyvk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917128.4822614-27-223950199594646/AnsiballZ_file.py'
Jan 20 13:52:09 compute-1 sudo[58854]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:52:09 compute-1 python3.9[58856]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:52:09 compute-1 sudo[58854]: pam_unix(sudo:session): session closed for user root
Jan 20 13:52:10 compute-1 sudo[59006]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-swszhwnbpxtowolhlaayetnhvoasmbgq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917129.5197217-63-280393526920088/AnsiballZ_stat.py'
Jan 20 13:52:10 compute-1 sudo[59006]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:52:10 compute-1 python3.9[59008]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 13:52:10 compute-1 sudo[59006]: pam_unix(sudo:session): session closed for user root
Jan 20 13:52:10 compute-1 sudo[59129]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gkcgvtpyiljjotbzfivlmemoeyzrsppm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917129.5197217-63-280393526920088/AnsiballZ_copy.py'
Jan 20 13:52:10 compute-1 sudo[59129]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:52:11 compute-1 python3.9[59131]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/ceph-networks.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1768917129.5197217-63-280393526920088/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=729ea8396013e3343245d6e934e0dcef55029ad2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:52:11 compute-1 sudo[59129]: pam_unix(sudo:session): session closed for user root
Jan 20 13:52:11 compute-1 sshd-session[58704]: Connection closed by 192.168.122.30 port 47948
Jan 20 13:52:11 compute-1 sshd-session[58701]: pam_unix(sshd:session): session closed for user zuul
Jan 20 13:52:11 compute-1 systemd[1]: session-14.scope: Deactivated successfully.
Jan 20 13:52:11 compute-1 systemd[1]: session-14.scope: Consumed 2.076s CPU time.
Jan 20 13:52:11 compute-1 systemd-logind[783]: Session 14 logged out. Waiting for processes to exit.
Jan 20 13:52:11 compute-1 systemd-logind[783]: Removed session 14.
Jan 20 13:52:17 compute-1 sshd-session[59158]: Accepted publickey for zuul from 192.168.122.30 port 58194 ssh2: ECDSA SHA256:Yw0kyD5N4lqNgr1J3b5cYIIxKFrTRY8zW6kk+n6imz4
Jan 20 13:52:17 compute-1 systemd-logind[783]: New session 15 of user zuul.
Jan 20 13:52:17 compute-1 systemd[1]: Started Session 15 of User zuul.
Jan 20 13:52:17 compute-1 sshd-session[59158]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 20 13:52:18 compute-1 python3.9[59311]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 13:52:19 compute-1 sshd-session[59156]: Invalid user btf from 116.99.171.211 port 45398
Jan 20 13:52:20 compute-1 sudo[59465]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ipfblsbabqhfyxlvudzuswtlpwaakejy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917139.6955783-60-182438166834708/AnsiballZ_file.py'
Jan 20 13:52:20 compute-1 sudo[59465]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:52:20 compute-1 sshd-session[59156]: Connection closed by invalid user btf 116.99.171.211 port 45398 [preauth]
Jan 20 13:52:20 compute-1 python3.9[59467]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:52:20 compute-1 sudo[59465]: pam_unix(sudo:session): session closed for user root
Jan 20 13:52:21 compute-1 sudo[59640]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-toifyudklgvlfijoynipmfltdsmemiiq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917141.1259809-84-263204010172042/AnsiballZ_stat.py'
Jan 20 13:52:21 compute-1 sudo[59640]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:52:21 compute-1 python3.9[59642]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 13:52:21 compute-1 sudo[59640]: pam_unix(sudo:session): session closed for user root
Jan 20 13:52:22 compute-1 sudo[59763]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jontxyisippkaacxiwpaydmhawnovdor ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917141.1259809-84-263204010172042/AnsiballZ_copy.py'
Jan 20 13:52:22 compute-1 sudo[59763]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:52:22 compute-1 python3.9[59765]: ansible-ansible.legacy.copy Invoked with dest=/root/.config/containers/auth.json group=zuul mode=0660 owner=zuul src=/home/zuul/.ansible/tmp/ansible-tmp-1768917141.1259809-84-263204010172042/.source.json _original_basename=.jifsnfn6 follow=False checksum=bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:52:22 compute-1 sudo[59763]: pam_unix(sudo:session): session closed for user root
Jan 20 13:52:23 compute-1 sudo[59915]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lvbhmemwaavmumjzwartjcgfyclkguev ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917143.0965374-153-15111787750233/AnsiballZ_stat.py'
Jan 20 13:52:23 compute-1 sudo[59915]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:52:23 compute-1 python3.9[59917]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 13:52:23 compute-1 sudo[59915]: pam_unix(sudo:session): session closed for user root
Jan 20 13:52:24 compute-1 sudo[60038]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xwjxsivtfwrcuileijtdzurpnioqvpii ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917143.0965374-153-15111787750233/AnsiballZ_copy.py'
Jan 20 13:52:24 compute-1 sudo[60038]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:52:24 compute-1 python3.9[60040]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysconfig/podman_drop_in mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1768917143.0965374-153-15111787750233/.source _original_basename=.efjns574 follow=False checksum=125299ce8dea7711a76292961206447f0043248b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:52:24 compute-1 sudo[60038]: pam_unix(sudo:session): session closed for user root
Jan 20 13:52:24 compute-1 sudo[60190]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hbwybvfuthhnkpwzfoxpaocswvdgnvik ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917144.600585-201-270872871971978/AnsiballZ_file.py'
Jan 20 13:52:24 compute-1 sudo[60190]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:52:25 compute-1 python3.9[60192]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 20 13:52:25 compute-1 sudo[60190]: pam_unix(sudo:session): session closed for user root
Jan 20 13:52:25 compute-1 sudo[60342]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qwvfottdlskcmzuruvpibvhqfutmtrsv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917145.447901-225-272548893833124/AnsiballZ_stat.py'
Jan 20 13:52:25 compute-1 sudo[60342]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:52:26 compute-1 python3.9[60344]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 13:52:26 compute-1 sudo[60342]: pam_unix(sudo:session): session closed for user root
Jan 20 13:52:26 compute-1 sudo[60465]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-whvcejlugukcczvpuitmwqpaylemwtbt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917145.447901-225-272548893833124/AnsiballZ_copy.py'
Jan 20 13:52:26 compute-1 sudo[60465]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:52:26 compute-1 python3.9[60467]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-container-shutdown group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1768917145.447901-225-272548893833124/.source _original_basename=edpm-container-shutdown follow=False checksum=632c3792eb3dce4288b33ae7b265b71950d69f13 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 20 13:52:26 compute-1 sudo[60465]: pam_unix(sudo:session): session closed for user root
Jan 20 13:52:27 compute-1 sudo[60617]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gyfcqjkwfylepojkiraufvfimnmtdljn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917146.9060292-225-52445573130770/AnsiballZ_stat.py'
Jan 20 13:52:27 compute-1 sudo[60617]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:52:27 compute-1 python3.9[60619]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 13:52:27 compute-1 sudo[60617]: pam_unix(sudo:session): session closed for user root
Jan 20 13:52:27 compute-1 sudo[60740]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qruwmduqalocyhwywsmngahtckozjqbl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917146.9060292-225-52445573130770/AnsiballZ_copy.py'
Jan 20 13:52:27 compute-1 sudo[60740]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:52:28 compute-1 python3.9[60742]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-start-podman-container group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1768917146.9060292-225-52445573130770/.source _original_basename=edpm-start-podman-container follow=False checksum=b963c569d75a655c0ccae95d9bb4a2a9a4df27d1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 20 13:52:28 compute-1 sudo[60740]: pam_unix(sudo:session): session closed for user root
Jan 20 13:52:28 compute-1 sudo[60892]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rtwoqhtbnnblwumdsimroctmjtviynuu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917148.3801243-312-135535710998603/AnsiballZ_file.py'
Jan 20 13:52:28 compute-1 sudo[60892]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:52:28 compute-1 python3.9[60894]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:52:28 compute-1 sudo[60892]: pam_unix(sudo:session): session closed for user root
Jan 20 13:52:29 compute-1 sudo[61044]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tdhxymwxcoefkuqwedezrpuwgesnviwg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917149.242158-336-217506568850933/AnsiballZ_stat.py'
Jan 20 13:52:29 compute-1 sudo[61044]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:52:29 compute-1 python3.9[61046]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 13:52:29 compute-1 sudo[61044]: pam_unix(sudo:session): session closed for user root
Jan 20 13:52:30 compute-1 sudo[61167]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yrnuinbufuxlqumuwocumbtrwwqbnnwc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917149.242158-336-217506568850933/AnsiballZ_copy.py'
Jan 20 13:52:30 compute-1 sudo[61167]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:52:30 compute-1 python3.9[61169]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm-container-shutdown.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768917149.242158-336-217506568850933/.source.service _original_basename=edpm-container-shutdown-service follow=False checksum=6336835cb0f888670cc99de31e19c8c071444d33 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:52:30 compute-1 sudo[61167]: pam_unix(sudo:session): session closed for user root
Jan 20 13:52:30 compute-1 sudo[61319]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bqexilvhwhlhlxvtmkkzbnlontvlcswz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917150.6350958-381-45719686309721/AnsiballZ_stat.py'
Jan 20 13:52:30 compute-1 sudo[61319]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:52:31 compute-1 python3.9[61321]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 13:52:31 compute-1 sudo[61319]: pam_unix(sudo:session): session closed for user root
Jan 20 13:52:31 compute-1 sudo[61442]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qrclcsbyqrttqitvbfejjuvwbehbdhgx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917150.6350958-381-45719686309721/AnsiballZ_copy.py'
Jan 20 13:52:31 compute-1 sudo[61442]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:52:31 compute-1 python3.9[61444]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768917150.6350958-381-45719686309721/.source.preset _original_basename=91-edpm-container-shutdown-preset follow=False checksum=b275e4375287528cb63464dd32f622c4f142a915 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:52:31 compute-1 sudo[61442]: pam_unix(sudo:session): session closed for user root
Jan 20 13:52:32 compute-1 sudo[61594]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vaabqjjayqiierpcndqqtsfgieftjgvc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917152.0179877-426-129575293995087/AnsiballZ_systemd.py'
Jan 20 13:52:32 compute-1 sudo[61594]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:52:32 compute-1 python3.9[61596]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 20 13:52:32 compute-1 systemd[1]: Reloading.
Jan 20 13:52:33 compute-1 systemd-rc-local-generator[61620]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 13:52:33 compute-1 systemd-sysv-generator[61624]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 13:52:33 compute-1 systemd[1]: Reloading.
Jan 20 13:52:33 compute-1 systemd-rc-local-generator[61662]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 13:52:33 compute-1 systemd-sysv-generator[61666]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 13:52:33 compute-1 systemd[1]: Starting EDPM Container Shutdown...
Jan 20 13:52:33 compute-1 systemd[1]: Finished EDPM Container Shutdown.
Jan 20 13:52:33 compute-1 sudo[61594]: pam_unix(sudo:session): session closed for user root
Jan 20 13:52:34 compute-1 sudo[61821]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dqqizyyywcqoelwhwrfaskskowadymcs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917153.7644513-450-127006310060755/AnsiballZ_stat.py'
Jan 20 13:52:34 compute-1 sudo[61821]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:52:34 compute-1 python3.9[61823]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 13:52:34 compute-1 sudo[61821]: pam_unix(sudo:session): session closed for user root
Jan 20 13:52:34 compute-1 sudo[61944]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-twtrjrvcebavmksvsgrkvzuzjxthwoqe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917153.7644513-450-127006310060755/AnsiballZ_copy.py'
Jan 20 13:52:34 compute-1 sudo[61944]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:52:35 compute-1 python3.9[61946]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/netns-placeholder.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768917153.7644513-450-127006310060755/.source.service _original_basename=netns-placeholder-service follow=False checksum=b61b1b5918c20c877b8b226fbf34ff89a082d972 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:52:35 compute-1 sudo[61944]: pam_unix(sudo:session): session closed for user root
Jan 20 13:52:35 compute-1 sudo[62096]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wyirlzgxhytboqmybfdhlmpkfmkljbeq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917155.298333-495-14797447530989/AnsiballZ_stat.py'
Jan 20 13:52:35 compute-1 sudo[62096]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:52:36 compute-1 python3.9[62098]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 13:52:36 compute-1 sudo[62096]: pam_unix(sudo:session): session closed for user root
Jan 20 13:52:36 compute-1 sudo[62219]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wfhnijukkionovtkdjwpkhnfwclwxvin ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917155.298333-495-14797447530989/AnsiballZ_copy.py'
Jan 20 13:52:36 compute-1 sudo[62219]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:52:36 compute-1 python3.9[62221]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-netns-placeholder.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768917155.298333-495-14797447530989/.source.preset _original_basename=91-netns-placeholder-preset follow=False checksum=28b7b9aa893525d134a1eeda8a0a48fb25b736b9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:52:37 compute-1 sudo[62219]: pam_unix(sudo:session): session closed for user root
Jan 20 13:52:37 compute-1 sudo[62371]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oigvbhrosqgdssztjpnznsmoilqrvann ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917157.2388499-540-64267027479187/AnsiballZ_systemd.py'
Jan 20 13:52:37 compute-1 sudo[62371]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:52:37 compute-1 python3.9[62373]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 20 13:52:37 compute-1 systemd[1]: Reloading.
Jan 20 13:52:37 compute-1 systemd-sysv-generator[62402]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 13:52:38 compute-1 systemd-rc-local-generator[62399]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 13:52:39 compute-1 systemd[1]: Reloading.
Jan 20 13:52:39 compute-1 systemd-rc-local-generator[62440]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 13:52:39 compute-1 systemd-sysv-generator[62443]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 13:52:39 compute-1 systemd[1]: Starting Create netns directory...
Jan 20 13:52:39 compute-1 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Jan 20 13:52:39 compute-1 systemd[1]: netns-placeholder.service: Deactivated successfully.
Jan 20 13:52:39 compute-1 systemd[1]: Finished Create netns directory.
Jan 20 13:52:39 compute-1 sudo[62371]: pam_unix(sudo:session): session closed for user root
Jan 20 13:52:40 compute-1 python3.9[62600]: ansible-ansible.builtin.service_facts Invoked
Jan 20 13:52:40 compute-1 network[62617]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 20 13:52:40 compute-1 network[62618]: 'network-scripts' will be removed from distribution in near future.
Jan 20 13:52:40 compute-1 network[62619]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 20 13:52:46 compute-1 sudo[62879]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dweckanfacywaiahcfvcmbqmugepvhac ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917165.6756134-588-216579552118178/AnsiballZ_systemd.py'
Jan 20 13:52:46 compute-1 sudo[62879]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:52:46 compute-1 python3.9[62881]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iptables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 20 13:52:46 compute-1 systemd[1]: Reloading.
Jan 20 13:52:46 compute-1 systemd-rc-local-generator[62913]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 13:52:46 compute-1 systemd-sysv-generator[62917]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 13:52:46 compute-1 systemd[1]: Stopping IPv4 firewall with iptables...
Jan 20 13:52:46 compute-1 iptables.init[62922]: iptables: Setting chains to policy ACCEPT: raw mangle filter nat [  OK  ]
Jan 20 13:52:47 compute-1 iptables.init[62922]: iptables: Flushing firewall rules: [  OK  ]
Jan 20 13:52:47 compute-1 systemd[1]: iptables.service: Deactivated successfully.
Jan 20 13:52:47 compute-1 systemd[1]: Stopped IPv4 firewall with iptables.
Jan 20 13:52:47 compute-1 sudo[62879]: pam_unix(sudo:session): session closed for user root
Jan 20 13:52:47 compute-1 sudo[63116]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nvyqngjouarnsfvqhqpnaojomoncbcny ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917167.309154-588-88442922160252/AnsiballZ_systemd.py'
Jan 20 13:52:47 compute-1 sudo[63116]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:52:47 compute-1 python3.9[63118]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ip6tables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 20 13:52:48 compute-1 sudo[63116]: pam_unix(sudo:session): session closed for user root
Jan 20 13:52:48 compute-1 sudo[63270]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vwnxxbrvwsmqlbkqecwmwxekjnhylfhc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917168.3938386-636-186562226291180/AnsiballZ_systemd.py'
Jan 20 13:52:48 compute-1 sudo[63270]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:52:49 compute-1 python3.9[63272]: ansible-ansible.builtin.systemd Invoked with enabled=True name=nftables state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 20 13:52:49 compute-1 systemd[1]: Reloading.
Jan 20 13:52:49 compute-1 systemd-rc-local-generator[63298]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 13:52:49 compute-1 systemd-sysv-generator[63302]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 13:52:49 compute-1 systemd[1]: Starting Netfilter Tables...
Jan 20 13:52:49 compute-1 systemd[1]: Finished Netfilter Tables.
Jan 20 13:52:49 compute-1 sudo[63270]: pam_unix(sudo:session): session closed for user root
Jan 20 13:52:51 compute-1 sudo[63462]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oeexcarxbkpoaxacoxthcgrndnutrqfv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917170.6117501-660-179772868434286/AnsiballZ_command.py'
Jan 20 13:52:51 compute-1 sudo[63462]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:52:51 compute-1 python3.9[63464]: ansible-ansible.legacy.command Invoked with _raw_params=nft flush ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 13:52:51 compute-1 sudo[63462]: pam_unix(sudo:session): session closed for user root
Jan 20 13:52:52 compute-1 sudo[63615]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sgjqerrxeoyqjbajixmhgzlwoymijtdy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917172.3422544-702-67871139232021/AnsiballZ_stat.py'
Jan 20 13:52:52 compute-1 sudo[63615]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:52:52 compute-1 python3.9[63617]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 13:52:52 compute-1 sudo[63615]: pam_unix(sudo:session): session closed for user root
Jan 20 13:52:53 compute-1 sudo[63740]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rqfshskhfqtoakyxmcuuzasxhgyuldte ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917172.3422544-702-67871139232021/AnsiballZ_copy.py'
Jan 20 13:52:53 compute-1 sudo[63740]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:52:53 compute-1 python3.9[63742]: ansible-ansible.legacy.copy Invoked with dest=/etc/ssh/sshd_config mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1768917172.3422544-702-67871139232021/.source validate=/usr/sbin/sshd -T -f %s follow=False _original_basename=sshd_config_block.j2 checksum=6c79f4cb960ad444688fde322eeacb8402e22d79 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:52:53 compute-1 sudo[63740]: pam_unix(sudo:session): session closed for user root
Jan 20 13:52:54 compute-1 sudo[63893]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wzbhsawaimaeavkdymwbgaqdkasqpfbz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917173.8153734-747-183199976884390/AnsiballZ_systemd.py'
Jan 20 13:52:54 compute-1 sudo[63893]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:52:54 compute-1 python3.9[63895]: ansible-ansible.builtin.systemd Invoked with name=sshd state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 20 13:52:54 compute-1 systemd[1]: Reloading OpenSSH server daemon...
Jan 20 13:52:54 compute-1 sshd[1003]: Received SIGHUP; restarting.
Jan 20 13:52:54 compute-1 sshd[1003]: Server listening on 0.0.0.0 port 22.
Jan 20 13:52:54 compute-1 sshd[1003]: Server listening on :: port 22.
Jan 20 13:52:54 compute-1 systemd[1]: Reloaded OpenSSH server daemon.
Jan 20 13:52:54 compute-1 sudo[63893]: pam_unix(sudo:session): session closed for user root
Jan 20 13:52:55 compute-1 sudo[64049]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ffliusbluvdnioyfcwwucqipdwvndgwj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917174.88927-771-39140174062458/AnsiballZ_file.py'
Jan 20 13:52:55 compute-1 sudo[64049]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:52:55 compute-1 python3.9[64051]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:52:55 compute-1 sudo[64049]: pam_unix(sudo:session): session closed for user root
Jan 20 13:52:56 compute-1 sudo[64201]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ogqgqdymrdtmyvdvlxoivxwudtiaptpx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917175.729682-795-88754883137305/AnsiballZ_stat.py'
Jan 20 13:52:56 compute-1 sudo[64201]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:52:56 compute-1 python3.9[64203]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 13:52:56 compute-1 sudo[64201]: pam_unix(sudo:session): session closed for user root
Jan 20 13:52:56 compute-1 sudo[64324]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jiytbhpseqmttsgbjunapibqacyfwmbv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917175.729682-795-88754883137305/AnsiballZ_copy.py'
Jan 20 13:52:56 compute-1 sudo[64324]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:52:56 compute-1 python3.9[64326]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/sshd-networks.yaml group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768917175.729682-795-88754883137305/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=0bfc8440fd8f39002ab90252479fb794f51b5ae8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:52:56 compute-1 sudo[64324]: pam_unix(sudo:session): session closed for user root
Jan 20 13:52:57 compute-1 sudo[64476]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zyvsvqnsnugicfmbxwlfdyhvxaloxhwq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917177.4854114-849-214811924930050/AnsiballZ_timezone.py'
Jan 20 13:52:57 compute-1 sudo[64476]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:52:58 compute-1 python3.9[64478]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Jan 20 13:52:58 compute-1 systemd[1]: Starting Time & Date Service...
Jan 20 13:52:58 compute-1 systemd[1]: Started Time & Date Service.
Jan 20 13:52:58 compute-1 sudo[64476]: pam_unix(sudo:session): session closed for user root
Jan 20 13:52:58 compute-1 sudo[64632]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lycaxxzdldnpomuqqhfefbttdzatlgpg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917178.60929-876-116881348050989/AnsiballZ_file.py'
Jan 20 13:52:58 compute-1 sudo[64632]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:52:59 compute-1 python3.9[64634]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:52:59 compute-1 sudo[64632]: pam_unix(sudo:session): session closed for user root
Jan 20 13:52:59 compute-1 sudo[64784]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pujzgxghiypktffpmzuqfabgzqjxfdqz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917179.4153807-900-255969689895348/AnsiballZ_stat.py'
Jan 20 13:52:59 compute-1 sudo[64784]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:52:59 compute-1 python3.9[64786]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 13:52:59 compute-1 sudo[64784]: pam_unix(sudo:session): session closed for user root
Jan 20 13:53:00 compute-1 sudo[64907]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-guvdluchvsuuudblwjesghbwsurvigez ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917179.4153807-900-255969689895348/AnsiballZ_copy.py'
Jan 20 13:53:00 compute-1 sudo[64907]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:53:00 compute-1 python3.9[64909]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1768917179.4153807-900-255969689895348/.source.yaml follow=False _original_basename=base-rules.yaml.j2 checksum=450456afcafded6d4bdecceec7a02e806eebd8b3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:53:00 compute-1 sudo[64907]: pam_unix(sudo:session): session closed for user root
Jan 20 13:53:01 compute-1 sudo[65059]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vxxyyjhinrkptpepklhybkjcwfpyxiyu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917180.8971012-945-70868812061712/AnsiballZ_stat.py'
Jan 20 13:53:01 compute-1 sudo[65059]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:53:01 compute-1 python3.9[65061]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 13:53:01 compute-1 sudo[65059]: pam_unix(sudo:session): session closed for user root
Jan 20 13:53:01 compute-1 sudo[65182]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xlltvyufixdtqbjpwyuzsczlbwyhbakh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917180.8971012-945-70868812061712/AnsiballZ_copy.py'
Jan 20 13:53:01 compute-1 sudo[65182]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:53:02 compute-1 python3.9[65184]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1768917180.8971012-945-70868812061712/.source.yaml _original_basename=.63o7k_i0 follow=False checksum=97d170e1550eee4afc0af065b78cda302a97674c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:53:02 compute-1 sudo[65182]: pam_unix(sudo:session): session closed for user root
Jan 20 13:53:02 compute-1 sudo[65334]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ueovclvupafsxigchsdnswhodauezmbe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917182.354719-990-174925750156871/AnsiballZ_stat.py'
Jan 20 13:53:02 compute-1 sudo[65334]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:53:02 compute-1 python3.9[65336]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 13:53:02 compute-1 sudo[65334]: pam_unix(sudo:session): session closed for user root
Jan 20 13:53:03 compute-1 sudo[65457]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kvnmavtzaortwbfymlgjcsdejfmccvtj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917182.354719-990-174925750156871/AnsiballZ_copy.py'
Jan 20 13:53:03 compute-1 sudo[65457]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:53:03 compute-1 python3.9[65459]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/iptables.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768917182.354719-990-174925750156871/.source.nft _original_basename=iptables.nft follow=False checksum=3e02df08f1f3ab4a513e94056dbd390e3d38fe30 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:53:03 compute-1 sudo[65457]: pam_unix(sudo:session): session closed for user root
Jan 20 13:53:04 compute-1 sudo[65611]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fpguvdtvhvcxyauzjaznsgkrjximlqwr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917183.7983258-1036-141215682905389/AnsiballZ_command.py'
Jan 20 13:53:04 compute-1 sudo[65611]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:53:04 compute-1 python3.9[65613]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/iptables.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 13:53:04 compute-1 sudo[65611]: pam_unix(sudo:session): session closed for user root
Jan 20 13:53:04 compute-1 sudo[65764]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-blullrcrhverjhkmmcyekgklvulbzamy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917184.6112847-1059-223455139275348/AnsiballZ_command.py'
Jan 20 13:53:04 compute-1 sudo[65764]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:53:05 compute-1 python3.9[65766]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 13:53:05 compute-1 sudo[65764]: pam_unix(sudo:session): session closed for user root
Jan 20 13:53:05 compute-1 sudo[65917]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-stcmewmdhfxnzmblbkcwofotdotqpfmj ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1768917185.4232507-1083-38726559652522/AnsiballZ_edpm_nftables_from_files.py'
Jan 20 13:53:05 compute-1 sudo[65917]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:53:06 compute-1 python3[65919]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Jan 20 13:53:06 compute-1 sudo[65917]: pam_unix(sudo:session): session closed for user root
Jan 20 13:53:06 compute-1 sudo[66069]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fzpollnuckrebdvqtznqpztzflfqpzxi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917186.4221895-1107-163586857766802/AnsiballZ_stat.py'
Jan 20 13:53:06 compute-1 sudo[66069]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:53:07 compute-1 python3.9[66071]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 13:53:07 compute-1 sudo[66069]: pam_unix(sudo:session): session closed for user root
Jan 20 13:53:07 compute-1 sudo[66192]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pkanjprykowipzihaioketoeouxdwgot ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917186.4221895-1107-163586857766802/AnsiballZ_copy.py'
Jan 20 13:53:07 compute-1 sudo[66192]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:53:07 compute-1 python3.9[66194]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768917186.4221895-1107-163586857766802/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:53:07 compute-1 sudo[66192]: pam_unix(sudo:session): session closed for user root
Jan 20 13:53:07 compute-1 sshd-session[65473]: Connection closed by authenticating user root 116.99.171.211 port 59624 [preauth]
Jan 20 13:53:08 compute-1 sudo[66344]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dekwkjnyhldksszzysvxuypvhiyynxqp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917187.9927852-1152-250036095852991/AnsiballZ_stat.py'
Jan 20 13:53:08 compute-1 sudo[66344]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:53:08 compute-1 python3.9[66346]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 13:53:08 compute-1 sudo[66344]: pam_unix(sudo:session): session closed for user root
Jan 20 13:53:09 compute-1 sudo[66467]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-epcidbjuvgcsbtqptkxoltnctcayabha ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917187.9927852-1152-250036095852991/AnsiballZ_copy.py'
Jan 20 13:53:09 compute-1 sudo[66467]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:53:09 compute-1 python3.9[66469]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768917187.9927852-1152-250036095852991/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:53:09 compute-1 sudo[66467]: pam_unix(sudo:session): session closed for user root
Jan 20 13:53:09 compute-1 sudo[66619]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aefcilbhxuqxcsbadmmcsilbseauqyqo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917189.4818597-1197-266182474889818/AnsiballZ_stat.py'
Jan 20 13:53:09 compute-1 sudo[66619]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:53:10 compute-1 python3.9[66621]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 13:53:10 compute-1 sudo[66619]: pam_unix(sudo:session): session closed for user root
Jan 20 13:53:10 compute-1 sudo[66742]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pdycqffmbmhydmkluukwfvdejnqsslfo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917189.4818597-1197-266182474889818/AnsiballZ_copy.py'
Jan 20 13:53:10 compute-1 sudo[66742]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:53:10 compute-1 python3.9[66744]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768917189.4818597-1197-266182474889818/.source.nft follow=False _original_basename=flush-chain.j2 checksum=d16337256a56373421842284fe09e4e6c7df417e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:53:10 compute-1 sudo[66742]: pam_unix(sudo:session): session closed for user root
Jan 20 13:53:11 compute-1 sudo[66894]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dmbtzfjnbiyuxnnapddlempdacacgosf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917190.8865685-1242-59139520834111/AnsiballZ_stat.py'
Jan 20 13:53:11 compute-1 sudo[66894]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:53:11 compute-1 python3.9[66896]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 13:53:11 compute-1 sudo[66894]: pam_unix(sudo:session): session closed for user root
Jan 20 13:53:11 compute-1 sudo[67017]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lwnxrantbnvpljerugmrqbgbgqbhhtbx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917190.8865685-1242-59139520834111/AnsiballZ_copy.py'
Jan 20 13:53:11 compute-1 sudo[67017]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:53:12 compute-1 python3.9[67019]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768917190.8865685-1242-59139520834111/.source.nft follow=False _original_basename=chains.j2 checksum=2079f3b60590a165d1d502e763170876fc8e2984 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:53:12 compute-1 sudo[67017]: pam_unix(sudo:session): session closed for user root
Jan 20 13:53:12 compute-1 sudo[67169]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iajnagbzdxdughuliuafmmjfxnbuqokz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917192.415442-1287-155976059032591/AnsiballZ_stat.py'
Jan 20 13:53:12 compute-1 sudo[67169]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:53:12 compute-1 python3.9[67171]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 13:53:13 compute-1 sudo[67169]: pam_unix(sudo:session): session closed for user root
Jan 20 13:53:13 compute-1 sudo[67292]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mbldaligvvgbjbramlwesrgstwlcwndx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917192.415442-1287-155976059032591/AnsiballZ_copy.py'
Jan 20 13:53:13 compute-1 sudo[67292]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:53:13 compute-1 python3.9[67294]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768917192.415442-1287-155976059032591/.source.nft follow=False _original_basename=ruleset.j2 checksum=693377dc03e5b6b24713cb537b18b88774724e35 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:53:13 compute-1 sudo[67292]: pam_unix(sudo:session): session closed for user root
Jan 20 13:53:14 compute-1 sudo[67446]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ddbmtdrohcctoatqjahcjkleippiflum ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917193.9295182-1332-233808171829905/AnsiballZ_file.py'
Jan 20 13:53:14 compute-1 sudo[67446]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:53:14 compute-1 python3.9[67448]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:53:14 compute-1 sudo[67446]: pam_unix(sudo:session): session closed for user root
Jan 20 13:53:15 compute-1 sudo[67598]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-exsupkzilyzhcangcorjfplsixbrikev ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917194.6978638-1356-28072527096792/AnsiballZ_command.py'
Jan 20 13:53:15 compute-1 sudo[67598]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:53:15 compute-1 python3.9[67600]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 13:53:15 compute-1 sudo[67598]: pam_unix(sudo:session): session closed for user root
Jan 20 13:53:16 compute-1 sudo[67757]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jabuaamaighjduwqkkpzgiapkdwnltcq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917195.6267898-1380-82226009094061/AnsiballZ_blockinfile.py'
Jan 20 13:53:16 compute-1 sudo[67757]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:53:16 compute-1 python3.9[67759]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                            include "/etc/nftables/edpm-chains.nft"
                                            include "/etc/nftables/edpm-rules.nft"
                                            include "/etc/nftables/edpm-jumps.nft"
                                             path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:53:16 compute-1 sudo[67757]: pam_unix(sudo:session): session closed for user root
Jan 20 13:53:17 compute-1 sudo[67910]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lbiupvdzlmpynczafdgnyhwhllffxecm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917196.6695445-1407-24120906175474/AnsiballZ_file.py'
Jan 20 13:53:17 compute-1 sudo[67910]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:53:17 compute-1 python3.9[67912]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:53:17 compute-1 sudo[67910]: pam_unix(sudo:session): session closed for user root
Jan 20 13:53:17 compute-1 sudo[68062]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kyurcxtssxrsvnshyngfrdsamjbywkdz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917197.4612918-1407-109613777798676/AnsiballZ_file.py'
Jan 20 13:53:17 compute-1 sudo[68062]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:53:18 compute-1 python3.9[68064]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:53:18 compute-1 sudo[68062]: pam_unix(sudo:session): session closed for user root
Jan 20 13:53:18 compute-1 sudo[68214]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hmitbczkmttulspymrygizgtnhqfbepu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917198.307204-1452-66421258906992/AnsiballZ_mount.py'
Jan 20 13:53:18 compute-1 sudo[68214]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:53:19 compute-1 python3.9[68216]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Jan 20 13:53:19 compute-1 sudo[68214]: pam_unix(sudo:session): session closed for user root
Jan 20 13:53:19 compute-1 sshd-session[67295]: Invalid user guest1 from 116.99.171.211 port 42184
Jan 20 13:53:19 compute-1 sudo[68367]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xduabqgbsrboikbpdpnxhedvdwvtaquy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917199.405593-1452-117025410524587/AnsiballZ_mount.py'
Jan 20 13:53:19 compute-1 sudo[68367]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:53:20 compute-1 sshd-session[67295]: Connection closed by invalid user guest1 116.99.171.211 port 42184 [preauth]
Jan 20 13:53:20 compute-1 python3.9[68369]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Jan 20 13:53:20 compute-1 sudo[68367]: pam_unix(sudo:session): session closed for user root
Jan 20 13:53:20 compute-1 sshd-session[59161]: Connection closed by 192.168.122.30 port 58194
Jan 20 13:53:20 compute-1 sshd-session[59158]: pam_unix(sshd:session): session closed for user zuul
Jan 20 13:53:20 compute-1 systemd[1]: session-15.scope: Deactivated successfully.
Jan 20 13:53:20 compute-1 systemd[1]: session-15.scope: Consumed 44.106s CPU time.
Jan 20 13:53:20 compute-1 systemd-logind[783]: Session 15 logged out. Waiting for processes to exit.
Jan 20 13:53:20 compute-1 systemd-logind[783]: Removed session 15.
Jan 20 13:53:28 compute-1 systemd[1]: systemd-timedated.service: Deactivated successfully.
Jan 20 13:53:33 compute-1 sshd-session[68399]: Accepted publickey for zuul from 192.168.122.30 port 59972 ssh2: ECDSA SHA256:Yw0kyD5N4lqNgr1J3b5cYIIxKFrTRY8zW6kk+n6imz4
Jan 20 13:53:33 compute-1 systemd-logind[783]: New session 16 of user zuul.
Jan 20 13:53:33 compute-1 systemd[1]: Started Session 16 of User zuul.
Jan 20 13:53:33 compute-1 sshd-session[68399]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 20 13:53:33 compute-1 sudo[68552]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uupptcgvtjjivdkurzlhwuknqqhompbc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917213.1576018-24-269813166710879/AnsiballZ_tempfile.py'
Jan 20 13:53:33 compute-1 sudo[68552]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:53:33 compute-1 python3.9[68554]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Jan 20 13:53:33 compute-1 sudo[68552]: pam_unix(sudo:session): session closed for user root
Jan 20 13:53:34 compute-1 sudo[68704]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fihrejgvrypvpkkrarztprfkfnysyxfn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917214.095719-60-136126205198323/AnsiballZ_stat.py'
Jan 20 13:53:34 compute-1 sudo[68704]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:53:34 compute-1 python3.9[68706]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 20 13:53:34 compute-1 sudo[68704]: pam_unix(sudo:session): session closed for user root
Jan 20 13:53:35 compute-1 sudo[68856]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vsjnsotuiyjrebfwhgopmipjbpixpvbk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917215.0652094-90-220147097313532/AnsiballZ_setup.py'
Jan 20 13:53:35 compute-1 sudo[68856]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:53:35 compute-1 python3.9[68858]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 13:53:36 compute-1 sudo[68856]: pam_unix(sudo:session): session closed for user root
Jan 20 13:53:37 compute-1 sudo[69008]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-edeskmnpqtzromqnpxmvfgukljbmdzmy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917216.6122677-115-51562749480633/AnsiballZ_blockinfile.py'
Jan 20 13:53:37 compute-1 sudo[69008]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:53:37 compute-1 python3.9[69010]: ansible-ansible.builtin.blockinfile Invoked with block=compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDrCUasX8PhlctvvIb2eE6+Z0hELmfczQ6UoBD+mPtCobptr/s786JmwJ3D8nIoKhlCLVSmhRfbqf1Pm45RUPTEtSuaa6HBDy40dZhTXU34X4KbGfKmur2bp9S/1w83ArKvI8inSqqk2qoMx1l7ECkEgeT+GbFwKfYLnbq5OV4Ms3tzl/uFUC/Xzxs2dbXlhozQiSamcO/a6EObErTvR8PrtaOoLFtTiD/I+oN+rkdBPkBc6r0qT4jS7nU1FOlT96meSZHE7Q1n8pxcy9PEc8w9hFdd1Zj8/WcGIdeEJsekuouK1Lut/sofQLZHyUMWJTcnBjx8BsjGx9NjUHPYUWIw+DZo7lT2QurAPNnaX4rp9ciGV2Bdm3ylNoOu3izNvM1JGTw3xRyYrmyxyWv3Euc35JXa0w07Xrqr+6Ckih0WTLU6q3Rlnrc/grpDC821sHrsljerHipJVOCbZB39LvV6wDDBlqfYZzfqID3dIqlVli4eL12J0K7jr7QAlPRhNf0=
                                            compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOG07miJwhzuA/nm0wvGIorydl2xbBiiDhE7PypnJ/jC
                                            compute-2.ctlplane.example.com,192.168.122.102,compute-2* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKiJpWtps/bRsuEHfak4zDuqPHKOWFLaEA2h86H7tPlrZHR8okAVZWCmY7keO3Ad1DFyffUtJPKv5OvTK91xGO8=
                                            compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC/3N9PJZXpat0uFh2x2RoV9B0Ih74HU9CPf+g/5HncM7gCVvCpW3CBde1qNDRU2iY9rzpOVPwzi4YzoAUcxB5KAiqZOI9ylmzfiD8JXQ+myLmIRLxHOdXFaEQ4mMp4W+X37hCZ6sdfm6Yqd6eqBuZrM/72ltYoewWBNCG/Hgqzu30L9WC4+BF+iADHT7Qnmvh/cc9U71WxB4h2ikBo1SdGoFCqoez7ajitqx+dw7VWaOtEPliS0LZuDtN3Zt/cBBgxhb/FaAEI3jRP2ej9X0NJW91YxzBygyxiVasslx92g/GmnDFOWVZb5ai/JJsNH6pLTjs25IzvnuWIf8/ZLgZ03zziR4mBLP12CIVF8g1CzaqK1IILDKkjS/dzDiTBefmiQ2+N0i5EEXOgmxchqOqTkFPQg/ar0+0uBPkwzAI0HDk99czhyYHFlO+PhnULVkL1z+XLwHBgOrbNNVQQcJCvady4Gadh66mu1UrLpryNYOgZiugZi67Biha4ZPzPHok=
                                            compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF41dx3BXAuEvQwQNtbUM7rIrbaOLr5CRvYNdDD+UMr9
                                            compute-1.ctlplane.example.com,192.168.122.101,compute-1* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBENFrTpm22/xEaEJMzd7C5WyJttJdK+HK5kxP8/NuvvAQSlLtEulBZnvD/OX5hk3/sDYhPQelj3YsNX1Plw5PJQ=
                                            compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC/dEYtIJ/delwiq9xMMctU8myoGU/TKMiFUM+i3BSaGKrC0rujad6qo1LAtjth5aYbBcgBhxy0UEX0oCruQQgc5qDpPmWHmJiAwdQJaDu6GxTRl3PlXF2u4rd0Rz72DAMuCxPSYedeHU91uL4vlrcD95xONWew2wa9lUuqQWdgj8DtqnB9T895BihDk9vFLXAaoGJcYZVGKJmXR8sOzNTFQxefqstVO0/dfbRUyFd0Ukp5v7rTmLxw0Np5WcGMOg9l/iRzWTopxnTRvXpBoGlFCmzNvTG2uH08dJ4FU5Wk9/iSxonuiVJu9DKs8Tp4EajaA4Y6cEuZiMhhqi7vw6zVCQuCmRBpny6Ub1Ag2CesMYgxwOVJO5cHsKh3BzuPFsh1gMgrrZK7v+qfm2r1rhHlPsCWrcnrtUIZa7gyzdFvHytTh/4uyGMgNpbwxkyCxgSN4PleQy2wvxy/DFW+JxCDzI4jK9LFH5aojzEhUtj+P3E7CXL/wRPxDJdfEU6PhTk=
                                            compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEa1zL0TUD00vr72wZq3y4rgtSnctWBvs+gME/0/EAsV
                                            compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBI2WwWe4rQW0CaFwcmci1J5n144T87fcxCH+Y2CVZd5XQ7Cvzlhh1cGNDX81Tng3KgxvKOuz3mdiSCLqx8noiD0=
                                             create=True mode=0644 path=/tmp/ansible.fmhhwyw3 state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:53:37 compute-1 sudo[69008]: pam_unix(sudo:session): session closed for user root
Jan 20 13:53:37 compute-1 sshd-session[68395]: Invalid user admin from 116.99.171.211 port 58520
Jan 20 13:53:38 compute-1 sudo[69160]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mpdgkmrbjndijdoxqfxlmwprkyvrkxcj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917217.5486598-139-108006942097525/AnsiballZ_command.py'
Jan 20 13:53:38 compute-1 sudo[69160]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:53:38 compute-1 python3.9[69162]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.fmhhwyw3' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 13:53:38 compute-1 sudo[69160]: pam_unix(sudo:session): session closed for user root
Jan 20 13:53:39 compute-1 sudo[69314]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qsadohdemmhqviuwpidwfjaasthlbvpb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917218.9300387-163-251968434758649/AnsiballZ_file.py'
Jan 20 13:53:39 compute-1 sudo[69314]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:53:39 compute-1 python3.9[69316]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.fmhhwyw3 state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:53:39 compute-1 sudo[69314]: pam_unix(sudo:session): session closed for user root
Jan 20 13:53:40 compute-1 sshd-session[68402]: Connection closed by 192.168.122.30 port 59972
Jan 20 13:53:40 compute-1 sshd-session[68399]: pam_unix(sshd:session): session closed for user zuul
Jan 20 13:53:40 compute-1 systemd[1]: session-16.scope: Deactivated successfully.
Jan 20 13:53:40 compute-1 systemd[1]: session-16.scope: Consumed 4.010s CPU time.
Jan 20 13:53:40 compute-1 systemd-logind[783]: Session 16 logged out. Waiting for processes to exit.
Jan 20 13:53:40 compute-1 systemd-logind[783]: Removed session 16.
Jan 20 13:53:41 compute-1 sshd-session[68395]: Connection closed by invalid user admin 116.99.171.211 port 58520 [preauth]
Jan 20 13:53:46 compute-1 sshd-session[69341]: Accepted publickey for zuul from 192.168.122.30 port 33662 ssh2: ECDSA SHA256:Yw0kyD5N4lqNgr1J3b5cYIIxKFrTRY8zW6kk+n6imz4
Jan 20 13:53:46 compute-1 systemd-logind[783]: New session 17 of user zuul.
Jan 20 13:53:46 compute-1 systemd[1]: Started Session 17 of User zuul.
Jan 20 13:53:46 compute-1 sshd-session[69341]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 20 13:53:47 compute-1 python3.9[69496]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 13:53:48 compute-1 sudo[69650]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-epxjdgmvhjcwitpmrtieevetpgptxotm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917228.1741786-57-187790499319990/AnsiballZ_systemd.py'
Jan 20 13:53:48 compute-1 sudo[69650]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:53:49 compute-1 python3.9[69652]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Jan 20 13:53:49 compute-1 sudo[69650]: pam_unix(sudo:session): session closed for user root
Jan 20 13:53:49 compute-1 sshd-session[69397]: Connection closed by authenticating user root 116.99.171.211 port 43922 [preauth]
Jan 20 13:53:49 compute-1 sudo[69804]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oibiqhammwclfyoinujznpgmertpamvz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917229.5466604-81-268342947553324/AnsiballZ_systemd.py'
Jan 20 13:53:49 compute-1 sudo[69804]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:53:50 compute-1 python3.9[69806]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 20 13:53:50 compute-1 sudo[69804]: pam_unix(sudo:session): session closed for user root
Jan 20 13:53:51 compute-1 sudo[69957]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dlqyqyyvdfshfvbimadizhgzuzezxegj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917230.859074-108-83377676568643/AnsiballZ_command.py'
Jan 20 13:53:51 compute-1 sudo[69957]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:53:51 compute-1 python3.9[69959]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 13:53:51 compute-1 sudo[69957]: pam_unix(sudo:session): session closed for user root
Jan 20 13:53:53 compute-1 sudo[70110]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nskumqrdzjyvytbrifxuxlqxccogfbjt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917232.2829604-132-68712427692104/AnsiballZ_stat.py'
Jan 20 13:53:53 compute-1 sudo[70110]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:53:53 compute-1 python3.9[70112]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 20 13:53:53 compute-1 sudo[70110]: pam_unix(sudo:session): session closed for user root
Jan 20 13:53:53 compute-1 sudo[70264]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mmfediboirobuspkgcukwelhocsvunqh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917233.5084047-156-173794648365906/AnsiballZ_command.py'
Jan 20 13:53:53 compute-1 sudo[70264]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:53:54 compute-1 python3.9[70266]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 13:53:54 compute-1 sudo[70264]: pam_unix(sudo:session): session closed for user root
Jan 20 13:53:54 compute-1 sudo[70419]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-recqwyfobhskfujwomscfjhdqdxxxslx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917234.42988-181-144805400069257/AnsiballZ_file.py'
Jan 20 13:53:54 compute-1 sudo[70419]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:53:55 compute-1 python3.9[70421]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:53:55 compute-1 sudo[70419]: pam_unix(sudo:session): session closed for user root
Jan 20 13:53:55 compute-1 sshd-session[69344]: Connection closed by 192.168.122.30 port 33662
Jan 20 13:53:55 compute-1 sshd-session[69341]: pam_unix(sshd:session): session closed for user zuul
Jan 20 13:53:55 compute-1 systemd[1]: session-17.scope: Deactivated successfully.
Jan 20 13:53:55 compute-1 systemd[1]: session-17.scope: Consumed 5.609s CPU time.
Jan 20 13:53:55 compute-1 systemd-logind[783]: Session 17 logged out. Waiting for processes to exit.
Jan 20 13:53:55 compute-1 systemd-logind[783]: Removed session 17.
Jan 20 13:54:01 compute-1 sshd-session[70446]: Accepted publickey for zuul from 192.168.122.30 port 36026 ssh2: ECDSA SHA256:Yw0kyD5N4lqNgr1J3b5cYIIxKFrTRY8zW6kk+n6imz4
Jan 20 13:54:01 compute-1 systemd-logind[783]: New session 18 of user zuul.
Jan 20 13:54:01 compute-1 systemd[1]: Started Session 18 of User zuul.
Jan 20 13:54:01 compute-1 sshd-session[70446]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 20 13:54:02 compute-1 python3.9[70599]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 13:54:03 compute-1 sudo[70753]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qsmnjbtyyssjaefkziulekcvynwuezuk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917243.1343586-62-79427766765514/AnsiballZ_setup.py'
Jan 20 13:54:03 compute-1 sudo[70753]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:54:03 compute-1 python3.9[70755]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 20 13:54:04 compute-1 sudo[70753]: pam_unix(sudo:session): session closed for user root
Jan 20 13:54:04 compute-1 sudo[70837]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-unfzwgabyyycrbucancgmggovhaczujf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917243.1343586-62-79427766765514/AnsiballZ_dnf.py'
Jan 20 13:54:04 compute-1 sudo[70837]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:54:04 compute-1 python3.9[70839]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 20 13:54:05 compute-1 sudo[70837]: pam_unix(sudo:session): session closed for user root
Jan 20 13:54:06 compute-1 python3.9[70990]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 13:54:08 compute-1 python3.9[71141]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 20 13:54:09 compute-1 python3.9[71291]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 20 13:54:09 compute-1 rsyslogd[1002]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 20 13:54:09 compute-1 rsyslogd[1002]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 20 13:54:09 compute-1 python3.9[71442]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 20 13:54:10 compute-1 sshd-session[70449]: Connection closed by 192.168.122.30 port 36026
Jan 20 13:54:10 compute-1 sshd-session[70446]: pam_unix(sshd:session): session closed for user zuul
Jan 20 13:54:10 compute-1 systemd[1]: session-18.scope: Deactivated successfully.
Jan 20 13:54:10 compute-1 systemd[1]: session-18.scope: Consumed 6.738s CPU time.
Jan 20 13:54:10 compute-1 systemd-logind[783]: Session 18 logged out. Waiting for processes to exit.
Jan 20 13:54:10 compute-1 systemd-logind[783]: Removed session 18.
Jan 20 13:54:10 compute-1 chronyd[58675]: Selected source 23.159.16.194 (pool.ntp.org)
Jan 20 13:54:19 compute-1 sshd-session[71467]: Accepted publickey for zuul from 38.102.83.230 port 53628 ssh2: RSA SHA256:r50QbT7bSKscUimrVpe816OyonJCbpigaVSUx3I8hI8
Jan 20 13:54:19 compute-1 systemd-logind[783]: New session 19 of user zuul.
Jan 20 13:54:19 compute-1 systemd[1]: Started Session 19 of User zuul.
Jan 20 13:54:19 compute-1 sshd-session[71467]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 20 13:54:19 compute-1 sudo[71543]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vvckxhjaqljtxwjypmevvdbrazsdngva ; /usr/bin/python3'
Jan 20 13:54:19 compute-1 sudo[71543]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:54:20 compute-1 useradd[71547]: new group: name=ceph-admin, GID=42478
Jan 20 13:54:20 compute-1 useradd[71547]: new user: name=ceph-admin, UID=42477, GID=42478, home=/home/ceph-admin, shell=/bin/bash, from=none
Jan 20 13:54:20 compute-1 sudo[71543]: pam_unix(sudo:session): session closed for user root
Jan 20 13:54:20 compute-1 sudo[71629]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-srfwdstysfratlmsaqllfycjecyxihnt ; /usr/bin/python3'
Jan 20 13:54:20 compute-1 sudo[71629]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:54:20 compute-1 sudo[71629]: pam_unix(sudo:session): session closed for user root
Jan 20 13:54:21 compute-1 sudo[71702]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rqcqniyzkgbkddodhgoftfibgysnqupe ; /usr/bin/python3'
Jan 20 13:54:21 compute-1 sudo[71702]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:54:21 compute-1 sudo[71702]: pam_unix(sudo:session): session closed for user root
Jan 20 13:54:21 compute-1 sudo[71752]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-anllzmfnpgshbsmzqxbcuqaoczekfvci ; /usr/bin/python3'
Jan 20 13:54:21 compute-1 sudo[71752]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:54:21 compute-1 sudo[71752]: pam_unix(sudo:session): session closed for user root
Jan 20 13:54:22 compute-1 sudo[71778]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-agibildzsanhqinfwibpbhccrvzkkhqb ; /usr/bin/python3'
Jan 20 13:54:22 compute-1 sudo[71778]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:54:22 compute-1 sudo[71778]: pam_unix(sudo:session): session closed for user root
Jan 20 13:54:22 compute-1 sudo[71804]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vnfetulcwikedswisgluhaykbmkxsawx ; /usr/bin/python3'
Jan 20 13:54:22 compute-1 sudo[71804]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:54:22 compute-1 sudo[71804]: pam_unix(sudo:session): session closed for user root
Jan 20 13:54:23 compute-1 sudo[71830]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uwjuvdhgxvwvrrdffclndnedzbgmfiob ; /usr/bin/python3'
Jan 20 13:54:23 compute-1 sudo[71830]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:54:23 compute-1 sudo[71830]: pam_unix(sudo:session): session closed for user root
Jan 20 13:54:23 compute-1 sudo[71908]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-psfytcfpfbqustjmnoznidwrwxexpvfp ; /usr/bin/python3'
Jan 20 13:54:23 compute-1 sudo[71908]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:54:23 compute-1 sudo[71908]: pam_unix(sudo:session): session closed for user root
Jan 20 13:54:24 compute-1 sudo[71981]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nyshpuyukjwasotsgruddsengwvkwbig ; /usr/bin/python3'
Jan 20 13:54:24 compute-1 sudo[71981]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:54:24 compute-1 sudo[71981]: pam_unix(sudo:session): session closed for user root
Jan 20 13:54:24 compute-1 sudo[72083]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nlmrjeyspavcnbmtrbvnwilhbnlemwss ; /usr/bin/python3'
Jan 20 13:54:24 compute-1 sudo[72083]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:54:24 compute-1 sudo[72083]: pam_unix(sudo:session): session closed for user root
Jan 20 13:54:25 compute-1 sudo[72156]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vaievktxxrtsdsfzpzafqysuxlnpkkma ; /usr/bin/python3'
Jan 20 13:54:25 compute-1 sudo[72156]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:54:25 compute-1 sudo[72156]: pam_unix(sudo:session): session closed for user root
Jan 20 13:54:25 compute-1 sudo[72206]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pntidlyxkpuqxmhtqciioezrmojdstyn ; /usr/bin/python3'
Jan 20 13:54:25 compute-1 sudo[72206]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:54:26 compute-1 python3[72208]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 13:54:27 compute-1 sudo[72206]: pam_unix(sudo:session): session closed for user root
Jan 20 13:54:27 compute-1 sudo[72301]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vmpwahjhuhcwsrqfqveahibtrlflwlrr ; /usr/bin/python3'
Jan 20 13:54:27 compute-1 sudo[72301]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:54:28 compute-1 python3[72303]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Jan 20 13:54:29 compute-1 sudo[72301]: pam_unix(sudo:session): session closed for user root
Jan 20 13:54:29 compute-1 sudo[72328]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mglfqctuhrpmfmgzozklkpyeytvtrztk ; /usr/bin/python3'
Jan 20 13:54:29 compute-1 sudo[72328]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:54:29 compute-1 python3[72330]: ansible-ansible.builtin.stat Invoked with path=/dev/loop3 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 20 13:54:29 compute-1 sudo[72328]: pam_unix(sudo:session): session closed for user root
Jan 20 13:54:29 compute-1 sudo[72354]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uagwxhdnbfnogmfgrunoodueqcnocuah ; /usr/bin/python3'
Jan 20 13:54:29 compute-1 sudo[72354]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:54:30 compute-1 python3[72356]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-0.img bs=1 count=0 seek=7G
                                          losetup /dev/loop3 /var/lib/ceph-osd-0.img
                                          lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 13:54:30 compute-1 kernel: loop: module loaded
Jan 20 13:54:30 compute-1 kernel: loop3: detected capacity change from 0 to 14680064
Jan 20 13:54:30 compute-1 sudo[72354]: pam_unix(sudo:session): session closed for user root
Jan 20 13:54:30 compute-1 sudo[72389]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-atqjdvbgaemxgecvpdnobcgmbqhvdewt ; /usr/bin/python3'
Jan 20 13:54:30 compute-1 sudo[72389]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:54:30 compute-1 python3[72391]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop3
                                          vgcreate ceph_vg0 /dev/loop3
                                          lvcreate -n ceph_lv0 -l +100%FREE ceph_vg0
                                          lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 13:54:30 compute-1 lvm[72394]: PV /dev/loop3 not used.
Jan 20 13:54:30 compute-1 lvm[72396]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 20 13:54:30 compute-1 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg0.
Jan 20 13:54:30 compute-1 lvm[72402]:   1 logical volume(s) in volume group "ceph_vg0" now active
Jan 20 13:54:30 compute-1 lvm[72406]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 20 13:54:30 compute-1 lvm[72406]: VG ceph_vg0 finished
Jan 20 13:54:30 compute-1 systemd[1]: lvm-activate-ceph_vg0.service: Deactivated successfully.
Jan 20 13:54:30 compute-1 sudo[72389]: pam_unix(sudo:session): session closed for user root
Jan 20 13:54:31 compute-1 sudo[72482]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ijthviiflgggvoanfkbahrxsnieynudv ; /usr/bin/python3'
Jan 20 13:54:31 compute-1 sudo[72482]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:54:31 compute-1 python3[72484]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-0.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 20 13:54:31 compute-1 sudo[72482]: pam_unix(sudo:session): session closed for user root
Jan 20 13:54:31 compute-1 sudo[72555]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ryxklkcwnefbihektmpedazusuhqetyc ; /usr/bin/python3'
Jan 20 13:54:31 compute-1 sudo[72555]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:54:31 compute-1 python3[72557]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1768917271.0207577-36958-49779570935231/source dest=/etc/systemd/system/ceph-osd-losetup-0.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=427b1db064a970126b729b07acf99fa7d0eecb9c backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:54:31 compute-1 sudo[72555]: pam_unix(sudo:session): session closed for user root
Jan 20 13:54:32 compute-1 sudo[72605]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lrkmyshuxixcrgupihxcsayvubpjdeec ; /usr/bin/python3'
Jan 20 13:54:32 compute-1 sudo[72605]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:54:32 compute-1 python3[72607]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-0.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 20 13:54:32 compute-1 systemd[1]: Reloading.
Jan 20 13:54:32 compute-1 systemd-rc-local-generator[72637]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 13:54:32 compute-1 systemd-sysv-generator[72640]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 13:54:33 compute-1 systemd[1]: Starting Ceph OSD losetup...
Jan 20 13:54:33 compute-1 bash[72647]: /dev/loop3: [64513]:4328448 (/var/lib/ceph-osd-0.img)
Jan 20 13:54:33 compute-1 systemd[1]: Finished Ceph OSD losetup.
Jan 20 13:54:33 compute-1 sudo[72605]: pam_unix(sudo:session): session closed for user root
Jan 20 13:54:33 compute-1 lvm[72649]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 20 13:54:33 compute-1 lvm[72649]: VG ceph_vg0 finished
Jan 20 13:54:35 compute-1 python3[72673]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 13:55:10 compute-1 sshd-session[72718]: Invalid user admin from 116.99.171.211 port 48788
Jan 20 13:55:11 compute-1 sshd-session[72718]: Connection closed by invalid user admin 116.99.171.211 port 48788 [preauth]
Jan 20 13:55:20 compute-1 sshd-session[72720]: Invalid user plex from 116.99.171.211 port 34230
Jan 20 13:55:22 compute-1 sshd-session[72720]: Connection closed by invalid user plex 116.99.171.211 port 34230 [preauth]
Jan 20 13:55:48 compute-1 sshd-session[72723]: Invalid user admin from 116.99.171.211 port 54222
Jan 20 13:55:49 compute-1 sshd-session[72723]: Connection closed by invalid user admin 116.99.171.211 port 54222 [preauth]
Jan 20 13:56:23 compute-1 sshd-session[72725]: Accepted publickey for ceph-admin from 192.168.122.100 port 56704 ssh2: RSA SHA256:eqrJ6T+GYkPtbx0jSDommFb6YAfLVXAEDWraZDSNLSE
Jan 20 13:56:23 compute-1 systemd-logind[783]: New session 20 of user ceph-admin.
Jan 20 13:56:23 compute-1 systemd[1]: Created slice User Slice of UID 42477.
Jan 20 13:56:23 compute-1 systemd[1]: Starting User Runtime Directory /run/user/42477...
Jan 20 13:56:23 compute-1 systemd[1]: Finished User Runtime Directory /run/user/42477.
Jan 20 13:56:23 compute-1 systemd[1]: Starting User Manager for UID 42477...
Jan 20 13:56:23 compute-1 systemd[72729]: pam_unix(systemd-user:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 20 13:56:23 compute-1 systemd[72729]: Queued start job for default target Main User Target.
Jan 20 13:56:23 compute-1 systemd[72729]: Created slice User Application Slice.
Jan 20 13:56:23 compute-1 systemd[72729]: Started Mark boot as successful after the user session has run 2 minutes.
Jan 20 13:56:23 compute-1 systemd[72729]: Started Daily Cleanup of User's Temporary Directories.
Jan 20 13:56:23 compute-1 systemd[72729]: Reached target Paths.
Jan 20 13:56:23 compute-1 systemd[72729]: Reached target Timers.
Jan 20 13:56:23 compute-1 systemd[72729]: Starting D-Bus User Message Bus Socket...
Jan 20 13:56:23 compute-1 systemd[72729]: Starting Create User's Volatile Files and Directories...
Jan 20 13:56:23 compute-1 sshd-session[72743]: Accepted publickey for ceph-admin from 192.168.122.100 port 56706 ssh2: RSA SHA256:eqrJ6T+GYkPtbx0jSDommFb6YAfLVXAEDWraZDSNLSE
Jan 20 13:56:23 compute-1 systemd-logind[783]: New session 22 of user ceph-admin.
Jan 20 13:56:23 compute-1 systemd[72729]: Listening on D-Bus User Message Bus Socket.
Jan 20 13:56:23 compute-1 systemd[72729]: Reached target Sockets.
Jan 20 13:56:23 compute-1 systemd[72729]: Finished Create User's Volatile Files and Directories.
Jan 20 13:56:23 compute-1 systemd[72729]: Reached target Basic System.
Jan 20 13:56:23 compute-1 systemd[72729]: Reached target Main User Target.
Jan 20 13:56:23 compute-1 systemd[72729]: Startup finished in 137ms.
Jan 20 13:56:23 compute-1 systemd[1]: Started User Manager for UID 42477.
Jan 20 13:56:23 compute-1 systemd[1]: Started Session 20 of User ceph-admin.
Jan 20 13:56:23 compute-1 systemd[1]: Started Session 22 of User ceph-admin.
Jan 20 13:56:23 compute-1 sshd-session[72725]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 20 13:56:23 compute-1 sshd-session[72743]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 20 13:56:23 compute-1 sudo[72750]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:56:23 compute-1 sudo[72750]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:23 compute-1 sudo[72750]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:23 compute-1 sudo[72775]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 13:56:23 compute-1 sudo[72775]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:23 compute-1 sudo[72775]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:23 compute-1 sshd-session[72800]: Accepted publickey for ceph-admin from 192.168.122.100 port 56722 ssh2: RSA SHA256:eqrJ6T+GYkPtbx0jSDommFb6YAfLVXAEDWraZDSNLSE
Jan 20 13:56:23 compute-1 systemd-logind[783]: New session 23 of user ceph-admin.
Jan 20 13:56:23 compute-1 systemd[1]: Started Session 23 of User ceph-admin.
Jan 20 13:56:23 compute-1 sshd-session[72800]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 20 13:56:23 compute-1 sudo[72804]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:56:23 compute-1 sudo[72804]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:23 compute-1 sudo[72804]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:24 compute-1 sudo[72829]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host --expect-hostname compute-1
Jan 20 13:56:24 compute-1 sudo[72829]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:24 compute-1 sudo[72829]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:24 compute-1 sshd-session[72854]: Accepted publickey for ceph-admin from 192.168.122.100 port 56730 ssh2: RSA SHA256:eqrJ6T+GYkPtbx0jSDommFb6YAfLVXAEDWraZDSNLSE
Jan 20 13:56:24 compute-1 systemd-logind[783]: New session 24 of user ceph-admin.
Jan 20 13:56:24 compute-1 systemd[1]: Started Session 24 of User ceph-admin.
Jan 20 13:56:24 compute-1 sshd-session[72854]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 20 13:56:24 compute-1 sudo[72858]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:56:24 compute-1 sudo[72858]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:24 compute-1 sudo[72858]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:24 compute-1 sudo[72883]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d
Jan 20 13:56:24 compute-1 sudo[72883]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:24 compute-1 sudo[72883]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:24 compute-1 sshd-session[72908]: Accepted publickey for ceph-admin from 192.168.122.100 port 56738 ssh2: RSA SHA256:eqrJ6T+GYkPtbx0jSDommFb6YAfLVXAEDWraZDSNLSE
Jan 20 13:56:24 compute-1 systemd-logind[783]: New session 25 of user ceph-admin.
Jan 20 13:56:24 compute-1 systemd[1]: Started Session 25 of User ceph-admin.
Jan 20 13:56:24 compute-1 sshd-session[72908]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 20 13:56:24 compute-1 sudo[72912]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:56:24 compute-1 sudo[72912]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:24 compute-1 sudo[72912]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:25 compute-1 sudo[72937]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188
Jan 20 13:56:25 compute-1 sudo[72937]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:25 compute-1 sudo[72937]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:25 compute-1 sshd-session[72962]: Accepted publickey for ceph-admin from 192.168.122.100 port 56748 ssh2: RSA SHA256:eqrJ6T+GYkPtbx0jSDommFb6YAfLVXAEDWraZDSNLSE
Jan 20 13:56:25 compute-1 systemd-logind[783]: New session 26 of user ceph-admin.
Jan 20 13:56:25 compute-1 systemd[1]: Started Session 26 of User ceph-admin.
Jan 20 13:56:25 compute-1 sshd-session[72962]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 20 13:56:25 compute-1 sudo[72966]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:56:25 compute-1 sudo[72966]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:25 compute-1 sudo[72966]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:25 compute-1 sudo[72991]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-e399cf45-e6b6-5393-99f1-75c601d3f188/var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188
Jan 20 13:56:25 compute-1 sudo[72991]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:25 compute-1 sudo[72991]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:25 compute-1 sshd-session[73016]: Accepted publickey for ceph-admin from 192.168.122.100 port 56760 ssh2: RSA SHA256:eqrJ6T+GYkPtbx0jSDommFb6YAfLVXAEDWraZDSNLSE
Jan 20 13:56:25 compute-1 systemd-logind[783]: New session 27 of user ceph-admin.
Jan 20 13:56:25 compute-1 systemd[1]: Started Session 27 of User ceph-admin.
Jan 20 13:56:25 compute-1 sshd-session[73016]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 20 13:56:25 compute-1 sudo[73020]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:56:25 compute-1 sudo[73020]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:25 compute-1 sudo[73020]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:26 compute-1 sudo[73045]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-e399cf45-e6b6-5393-99f1-75c601d3f188/var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d.new
Jan 20 13:56:26 compute-1 sudo[73045]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:26 compute-1 sudo[73045]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:26 compute-1 sshd-session[73070]: Accepted publickey for ceph-admin from 192.168.122.100 port 56774 ssh2: RSA SHA256:eqrJ6T+GYkPtbx0jSDommFb6YAfLVXAEDWraZDSNLSE
Jan 20 13:56:26 compute-1 systemd-logind[783]: New session 28 of user ceph-admin.
Jan 20 13:56:26 compute-1 systemd[1]: Started Session 28 of User ceph-admin.
Jan 20 13:56:26 compute-1 sshd-session[73070]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 20 13:56:26 compute-1 sudo[73074]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:56:26 compute-1 sudo[73074]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:26 compute-1 sudo[73074]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:26 compute-1 sudo[73099]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-e399cf45-e6b6-5393-99f1-75c601d3f188
Jan 20 13:56:26 compute-1 sudo[73099]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:26 compute-1 sudo[73099]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:26 compute-1 sshd-session[73124]: Accepted publickey for ceph-admin from 192.168.122.100 port 56782 ssh2: RSA SHA256:eqrJ6T+GYkPtbx0jSDommFb6YAfLVXAEDWraZDSNLSE
Jan 20 13:56:26 compute-1 systemd-logind[783]: New session 29 of user ceph-admin.
Jan 20 13:56:26 compute-1 systemd[1]: Started Session 29 of User ceph-admin.
Jan 20 13:56:26 compute-1 sshd-session[73124]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 20 13:56:26 compute-1 sudo[73128]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:56:26 compute-1 sudo[73128]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:26 compute-1 sudo[73128]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:26 compute-1 sudo[73153]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-e399cf45-e6b6-5393-99f1-75c601d3f188/var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d.new
Jan 20 13:56:26 compute-1 sudo[73153]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:26 compute-1 sudo[73153]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:27 compute-1 sshd-session[73178]: Accepted publickey for ceph-admin from 192.168.122.100 port 56794 ssh2: RSA SHA256:eqrJ6T+GYkPtbx0jSDommFb6YAfLVXAEDWraZDSNLSE
Jan 20 13:56:27 compute-1 systemd-logind[783]: New session 30 of user ceph-admin.
Jan 20 13:56:27 compute-1 systemd[1]: Started Session 30 of User ceph-admin.
Jan 20 13:56:27 compute-1 sshd-session[73178]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 20 13:56:27 compute-1 sshd-session[73205]: Accepted publickey for ceph-admin from 192.168.122.100 port 56810 ssh2: RSA SHA256:eqrJ6T+GYkPtbx0jSDommFb6YAfLVXAEDWraZDSNLSE
Jan 20 13:56:27 compute-1 systemd-logind[783]: New session 31 of user ceph-admin.
Jan 20 13:56:27 compute-1 systemd[1]: Started Session 31 of User ceph-admin.
Jan 20 13:56:27 compute-1 sshd-session[73205]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 20 13:56:27 compute-1 sudo[73209]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:56:27 compute-1 sudo[73209]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:27 compute-1 sudo[73209]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:28 compute-1 sudo[73234]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-e399cf45-e6b6-5393-99f1-75c601d3f188/var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d.new /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d
Jan 20 13:56:28 compute-1 sudo[73234]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:28 compute-1 sudo[73234]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:28 compute-1 sshd-session[73259]: Accepted publickey for ceph-admin from 192.168.122.100 port 56824 ssh2: RSA SHA256:eqrJ6T+GYkPtbx0jSDommFb6YAfLVXAEDWraZDSNLSE
Jan 20 13:56:28 compute-1 systemd-logind[783]: New session 32 of user ceph-admin.
Jan 20 13:56:28 compute-1 systemd[1]: Started Session 32 of User ceph-admin.
Jan 20 13:56:28 compute-1 sshd-session[73259]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 20 13:56:28 compute-1 sudo[73263]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:56:28 compute-1 sudo[73263]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:28 compute-1 sudo[73263]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:28 compute-1 sudo[73288]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host --expect-hostname compute-1
Jan 20 13:56:28 compute-1 sudo[73288]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:28 compute-1 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 20 13:56:28 compute-1 sudo[73288]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:28 compute-1 sudo[73332]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:56:28 compute-1 sudo[73332]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:28 compute-1 sudo[73332]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:28 compute-1 sudo[73357]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 13:56:28 compute-1 sudo[73357]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:28 compute-1 sudo[73357]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:29 compute-1 sudo[73382]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:56:29 compute-1 sudo[73382]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:29 compute-1 sudo[73382]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:29 compute-1 sudo[73407]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Jan 20 13:56:29 compute-1 sudo[73407]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:29 compute-1 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 20 13:56:29 compute-1 sudo[73407]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:29 compute-1 sudo[73451]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:56:29 compute-1 sudo[73451]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:29 compute-1 sudo[73451]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:29 compute-1 sudo[73476]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 13:56:29 compute-1 sudo[73476]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:29 compute-1 sudo[73476]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:29 compute-1 sudo[73501]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:56:29 compute-1 sudo[73501]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:29 compute-1 sudo[73501]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:29 compute-1 sudo[73526]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Jan 20 13:56:29 compute-1 sudo[73526]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:29 compute-1 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 20 13:56:30 compute-1 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 20 13:56:30 compute-1 sudo[73526]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:30 compute-1 sudo[73588]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:56:30 compute-1 sudo[73588]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:30 compute-1 sudo[73588]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:30 compute-1 sudo[73613]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 13:56:30 compute-1 sudo[73613]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:30 compute-1 sudo[73613]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:30 compute-1 sudo[73638]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:56:30 compute-1 sudo[73638]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:30 compute-1 sudo[73638]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:30 compute-1 sudo[73663]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 20 13:56:30 compute-1 sudo[73663]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:30 compute-1 systemd[1]: proc-sys-fs-binfmt_misc.automount: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 73700 (sysctl)
Jan 20 13:56:30 compute-1 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 20 13:56:30 compute-1 systemd[1]: Mounting Arbitrary Executable File Formats File System...
Jan 20 13:56:31 compute-1 systemd[1]: Mounted Arbitrary Executable File Formats File System.
Jan 20 13:56:31 compute-1 sudo[73663]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:31 compute-1 sudo[73722]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:56:31 compute-1 sudo[73722]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:31 compute-1 sudo[73722]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:31 compute-1 sudo[73747]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 13:56:31 compute-1 sudo[73747]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:31 compute-1 sudo[73747]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:31 compute-1 sudo[73772]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:56:31 compute-1 sudo[73772]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:31 compute-1 sudo[73772]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:31 compute-1 sudo[73797]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 list-networks
Jan 20 13:56:31 compute-1 sudo[73797]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:31 compute-1 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 20 13:56:31 compute-1 sudo[73797]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:32 compute-1 sudo[73841]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:56:32 compute-1 sudo[73841]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:32 compute-1 sudo[73841]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:32 compute-1 sudo[73866]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 13:56:32 compute-1 sudo[73866]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:32 compute-1 sudo[73866]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:32 compute-1 sudo[73891]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:56:32 compute-1 sudo[73891]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:32 compute-1 sudo[73891]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:32 compute-1 sudo[73916]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- inventory --format=json-pretty --filter-for-batch
Jan 20 13:56:32 compute-1 sudo[73916]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:32 compute-1 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 20 13:56:32 compute-1 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 20 13:56:35 compute-1 systemd[1]: var-lib-containers-storage-overlay-compat3630487806-merged.mount: Deactivated successfully.
Jan 20 13:56:36 compute-1 systemd[1]: var-lib-containers-storage-overlay-compat3630487806-lower\x2dmapped.mount: Deactivated successfully.
Jan 20 13:56:51 compute-1 podman[73978]: 2026-01-20 13:56:51.911475474 +0000 UTC m=+19.284459775 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 13:56:52 compute-1 podman[73978]: 2026-01-20 13:56:52.002921907 +0000 UTC m=+19.375906238 container create 30d6d4718878592c04ec1b37fda453fbb0549787591c936b9528b59f9558defd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_lalande, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 20 13:56:52 compute-1 systemd[1]: Created slice Virtual Machine and Container Slice.
Jan 20 13:56:52 compute-1 systemd[1]: Started libpod-conmon-30d6d4718878592c04ec1b37fda453fbb0549787591c936b9528b59f9558defd.scope.
Jan 20 13:56:52 compute-1 systemd[1]: Started libcrun container.
Jan 20 13:56:52 compute-1 podman[73978]: 2026-01-20 13:56:52.235854885 +0000 UTC m=+19.608839216 container init 30d6d4718878592c04ec1b37fda453fbb0549787591c936b9528b59f9558defd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_lalande, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Jan 20 13:56:52 compute-1 podman[73978]: 2026-01-20 13:56:52.244401668 +0000 UTC m=+19.617385959 container start 30d6d4718878592c04ec1b37fda453fbb0549787591c936b9528b59f9558defd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_lalande, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Jan 20 13:56:52 compute-1 podman[73978]: 2026-01-20 13:56:52.248496624 +0000 UTC m=+19.621480915 container attach 30d6d4718878592c04ec1b37fda453fbb0549787591c936b9528b59f9558defd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_lalande, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True)
Jan 20 13:56:52 compute-1 condescending_lalande[74041]: 167 167
Jan 20 13:56:52 compute-1 systemd[1]: libpod-30d6d4718878592c04ec1b37fda453fbb0549787591c936b9528b59f9558defd.scope: Deactivated successfully.
Jan 20 13:56:52 compute-1 podman[73978]: 2026-01-20 13:56:52.256317146 +0000 UTC m=+19.629301457 container died 30d6d4718878592c04ec1b37fda453fbb0549787591c936b9528b59f9558defd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_lalande, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 13:56:52 compute-1 systemd[1]: var-lib-containers-storage-overlay-60584fc5fc5354075cd5220742611e57b272b541611b6de8ac2fb6005084a93d-merged.mount: Deactivated successfully.
Jan 20 13:56:52 compute-1 podman[73978]: 2026-01-20 13:56:52.305137931 +0000 UTC m=+19.678122212 container remove 30d6d4718878592c04ec1b37fda453fbb0549787591c936b9528b59f9558defd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_lalande, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Jan 20 13:56:52 compute-1 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 20 13:56:52 compute-1 systemd[1]: libpod-conmon-30d6d4718878592c04ec1b37fda453fbb0549787591c936b9528b59f9558defd.scope: Deactivated successfully.
Jan 20 13:56:52 compute-1 podman[74063]: 2026-01-20 13:56:52.47932095 +0000 UTC m=+0.053062836 container create d4f96443d4cb6bf4b71fc8b161f3dd6c3a388694adb87ae0863c6fc1b0dd75b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_meninsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 13:56:52 compute-1 systemd[1]: Started libpod-conmon-d4f96443d4cb6bf4b71fc8b161f3dd6c3a388694adb87ae0863c6fc1b0dd75b6.scope.
Jan 20 13:56:52 compute-1 podman[74063]: 2026-01-20 13:56:52.450459022 +0000 UTC m=+0.024200928 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 13:56:52 compute-1 systemd[1]: Started libcrun container.
Jan 20 13:56:52 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7fa0c0c32eba4dd7a6d63e9ac9abe90b759712b1cc382819e6301684954ec7a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 13:56:52 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7fa0c0c32eba4dd7a6d63e9ac9abe90b759712b1cc382819e6301684954ec7a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 13:56:52 compute-1 podman[74063]: 2026-01-20 13:56:52.624640503 +0000 UTC m=+0.198382419 container init d4f96443d4cb6bf4b71fc8b161f3dd6c3a388694adb87ae0863c6fc1b0dd75b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_meninsky, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 13:56:52 compute-1 podman[74063]: 2026-01-20 13:56:52.637844387 +0000 UTC m=+0.211586293 container start d4f96443d4cb6bf4b71fc8b161f3dd6c3a388694adb87ae0863c6fc1b0dd75b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_meninsky, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Jan 20 13:56:52 compute-1 podman[74063]: 2026-01-20 13:56:52.643449826 +0000 UTC m=+0.217191802 container attach d4f96443d4cb6bf4b71fc8b161f3dd6c3a388694adb87ae0863c6fc1b0dd75b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_meninsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default)
Jan 20 13:56:53 compute-1 compassionate_meninsky[74079]: [
Jan 20 13:56:53 compute-1 compassionate_meninsky[74079]:     {
Jan 20 13:56:53 compute-1 compassionate_meninsky[74079]:         "available": false,
Jan 20 13:56:53 compute-1 compassionate_meninsky[74079]:         "ceph_device": false,
Jan 20 13:56:53 compute-1 compassionate_meninsky[74079]:         "device_id": "QEMU_DVD-ROM_QM00001",
Jan 20 13:56:53 compute-1 compassionate_meninsky[74079]:         "lsm_data": {},
Jan 20 13:56:53 compute-1 compassionate_meninsky[74079]:         "lvs": [],
Jan 20 13:56:53 compute-1 compassionate_meninsky[74079]:         "path": "/dev/sr0",
Jan 20 13:56:53 compute-1 compassionate_meninsky[74079]:         "rejected_reasons": [
Jan 20 13:56:53 compute-1 compassionate_meninsky[74079]:             "Has a FileSystem",
Jan 20 13:56:53 compute-1 compassionate_meninsky[74079]:             "Insufficient space (<5GB)"
Jan 20 13:56:53 compute-1 compassionate_meninsky[74079]:         ],
Jan 20 13:56:53 compute-1 compassionate_meninsky[74079]:         "sys_api": {
Jan 20 13:56:53 compute-1 compassionate_meninsky[74079]:             "actuators": null,
Jan 20 13:56:53 compute-1 compassionate_meninsky[74079]:             "device_nodes": "sr0",
Jan 20 13:56:53 compute-1 compassionate_meninsky[74079]:             "devname": "sr0",
Jan 20 13:56:53 compute-1 compassionate_meninsky[74079]:             "human_readable_size": "482.00 KB",
Jan 20 13:56:53 compute-1 compassionate_meninsky[74079]:             "id_bus": "ata",
Jan 20 13:56:53 compute-1 compassionate_meninsky[74079]:             "model": "QEMU DVD-ROM",
Jan 20 13:56:53 compute-1 compassionate_meninsky[74079]:             "nr_requests": "2",
Jan 20 13:56:53 compute-1 compassionate_meninsky[74079]:             "parent": "/dev/sr0",
Jan 20 13:56:53 compute-1 compassionate_meninsky[74079]:             "partitions": {},
Jan 20 13:56:53 compute-1 compassionate_meninsky[74079]:             "path": "/dev/sr0",
Jan 20 13:56:53 compute-1 compassionate_meninsky[74079]:             "removable": "1",
Jan 20 13:56:53 compute-1 compassionate_meninsky[74079]:             "rev": "2.5+",
Jan 20 13:56:53 compute-1 compassionate_meninsky[74079]:             "ro": "0",
Jan 20 13:56:53 compute-1 compassionate_meninsky[74079]:             "rotational": "1",
Jan 20 13:56:53 compute-1 compassionate_meninsky[74079]:             "sas_address": "",
Jan 20 13:56:53 compute-1 compassionate_meninsky[74079]:             "sas_device_handle": "",
Jan 20 13:56:53 compute-1 compassionate_meninsky[74079]:             "scheduler_mode": "mq-deadline",
Jan 20 13:56:53 compute-1 compassionate_meninsky[74079]:             "sectors": 0,
Jan 20 13:56:53 compute-1 compassionate_meninsky[74079]:             "sectorsize": "2048",
Jan 20 13:56:53 compute-1 compassionate_meninsky[74079]:             "size": 493568.0,
Jan 20 13:56:53 compute-1 compassionate_meninsky[74079]:             "support_discard": "2048",
Jan 20 13:56:53 compute-1 compassionate_meninsky[74079]:             "type": "disk",
Jan 20 13:56:53 compute-1 compassionate_meninsky[74079]:             "vendor": "QEMU"
Jan 20 13:56:53 compute-1 compassionate_meninsky[74079]:         }
Jan 20 13:56:53 compute-1 compassionate_meninsky[74079]:     }
Jan 20 13:56:53 compute-1 compassionate_meninsky[74079]: ]
Jan 20 13:56:53 compute-1 systemd[1]: libpod-d4f96443d4cb6bf4b71fc8b161f3dd6c3a388694adb87ae0863c6fc1b0dd75b6.scope: Deactivated successfully.
Jan 20 13:56:53 compute-1 podman[74063]: 2026-01-20 13:56:53.806325952 +0000 UTC m=+1.380067848 container died d4f96443d4cb6bf4b71fc8b161f3dd6c3a388694adb87ae0863c6fc1b0dd75b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_meninsky, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef)
Jan 20 13:56:53 compute-1 systemd[1]: libpod-d4f96443d4cb6bf4b71fc8b161f3dd6c3a388694adb87ae0863c6fc1b0dd75b6.scope: Consumed 1.176s CPU time.
Jan 20 13:56:54 compute-1 systemd[1]: var-lib-containers-storage-overlay-f7fa0c0c32eba4dd7a6d63e9ac9abe90b759712b1cc382819e6301684954ec7a-merged.mount: Deactivated successfully.
Jan 20 13:56:54 compute-1 podman[74063]: 2026-01-20 13:56:54.407935307 +0000 UTC m=+1.981677233 container remove d4f96443d4cb6bf4b71fc8b161f3dd6c3a388694adb87ae0863c6fc1b0dd75b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_meninsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef)
Jan 20 13:56:54 compute-1 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 20 13:56:54 compute-1 sudo[73916]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:54 compute-1 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 20 13:56:54 compute-1 systemd[1]: libpod-conmon-d4f96443d4cb6bf4b71fc8b161f3dd6c3a388694adb87ae0863c6fc1b0dd75b6.scope: Deactivated successfully.
Jan 20 13:56:54 compute-1 sudo[75038]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:56:54 compute-1 sudo[75038]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:54 compute-1 sudo[75038]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:54 compute-1 sudo[75063]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Jan 20 13:56:54 compute-1 sudo[75063]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:54 compute-1 sudo[75063]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:54 compute-1 sudo[75088]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:56:54 compute-1 sudo[75088]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:54 compute-1 sudo[75088]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:54 compute-1 sudo[75113]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-e399cf45-e6b6-5393-99f1-75c601d3f188/etc/ceph
Jan 20 13:56:54 compute-1 sudo[75113]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:54 compute-1 sudo[75113]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:55 compute-1 sudo[75138]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:56:55 compute-1 sudo[75138]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:55 compute-1 sudo[75138]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:55 compute-1 sudo[75163]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-e399cf45-e6b6-5393-99f1-75c601d3f188/etc/ceph/ceph.conf.new
Jan 20 13:56:55 compute-1 sudo[75163]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:55 compute-1 sudo[75163]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:55 compute-1 sudo[75188]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:56:55 compute-1 sudo[75188]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:55 compute-1 sudo[75188]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:55 compute-1 sudo[75213]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-e399cf45-e6b6-5393-99f1-75c601d3f188
Jan 20 13:56:55 compute-1 sudo[75213]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:55 compute-1 sudo[75213]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:55 compute-1 sshd-session[74039]: Connection closed by authenticating user root 116.99.171.211 port 41290 [preauth]
Jan 20 13:56:55 compute-1 sudo[75238]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:56:55 compute-1 sudo[75238]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:55 compute-1 sudo[75238]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:55 compute-1 sudo[75263]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-e399cf45-e6b6-5393-99f1-75c601d3f188/etc/ceph/ceph.conf.new
Jan 20 13:56:55 compute-1 sudo[75263]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:55 compute-1 sudo[75263]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:55 compute-1 sudo[75311]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:56:55 compute-1 sudo[75311]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:55 compute-1 sudo[75311]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:55 compute-1 sudo[75336]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-e399cf45-e6b6-5393-99f1-75c601d3f188/etc/ceph/ceph.conf.new
Jan 20 13:56:55 compute-1 sudo[75336]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:55 compute-1 sudo[75336]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:55 compute-1 sudo[75361]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:56:55 compute-1 sudo[75361]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:55 compute-1 sudo[75361]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:55 compute-1 sudo[75386]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-e399cf45-e6b6-5393-99f1-75c601d3f188/etc/ceph/ceph.conf.new
Jan 20 13:56:55 compute-1 sudo[75386]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:55 compute-1 sudo[75386]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:55 compute-1 sudo[75411]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:56:55 compute-1 sudo[75411]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:55 compute-1 sudo[75411]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:55 compute-1 sudo[75436]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-e399cf45-e6b6-5393-99f1-75c601d3f188/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Jan 20 13:56:55 compute-1 sudo[75436]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:55 compute-1 sudo[75436]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:56 compute-1 sudo[75461]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:56:56 compute-1 sudo[75461]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:56 compute-1 sudo[75461]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:56 compute-1 sudo[75486]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/config
Jan 20 13:56:56 compute-1 sudo[75486]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:56 compute-1 sudo[75486]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:56 compute-1 sudo[75511]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:56:56 compute-1 sudo[75511]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:56 compute-1 sudo[75511]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:56 compute-1 sudo[75536]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-e399cf45-e6b6-5393-99f1-75c601d3f188/var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/config
Jan 20 13:56:56 compute-1 sudo[75536]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:56 compute-1 sudo[75536]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:56 compute-1 sudo[75561]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:56:56 compute-1 sudo[75561]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:56 compute-1 sudo[75561]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:56 compute-1 sudo[75586]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-e399cf45-e6b6-5393-99f1-75c601d3f188/var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/config/ceph.conf.new
Jan 20 13:56:56 compute-1 sudo[75586]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:56 compute-1 sudo[75586]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:56 compute-1 sudo[75611]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:56:56 compute-1 sudo[75611]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:56 compute-1 sudo[75611]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:56 compute-1 sudo[75636]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-e399cf45-e6b6-5393-99f1-75c601d3f188
Jan 20 13:56:56 compute-1 sudo[75636]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:56 compute-1 sudo[75636]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:56 compute-1 sudo[75661]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:56:56 compute-1 sudo[75661]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:56 compute-1 sudo[75661]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:56 compute-1 sudo[75686]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-e399cf45-e6b6-5393-99f1-75c601d3f188/var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/config/ceph.conf.new
Jan 20 13:56:56 compute-1 sudo[75686]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:56 compute-1 sudo[75686]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:56 compute-1 sudo[75734]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:56:56 compute-1 sudo[75734]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:56 compute-1 sudo[75734]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:56 compute-1 sudo[75759]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-e399cf45-e6b6-5393-99f1-75c601d3f188/var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/config/ceph.conf.new
Jan 20 13:56:56 compute-1 sudo[75759]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:56 compute-1 sudo[75759]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:57 compute-1 sudo[75784]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:56:57 compute-1 sudo[75784]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:57 compute-1 sudo[75784]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:57 compute-1 sudo[75809]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-e399cf45-e6b6-5393-99f1-75c601d3f188/var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/config/ceph.conf.new
Jan 20 13:56:57 compute-1 sudo[75809]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:57 compute-1 sudo[75809]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:57 compute-1 sudo[75834]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:56:57 compute-1 sudo[75834]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:57 compute-1 sudo[75834]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:57 compute-1 sudo[75859]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-e399cf45-e6b6-5393-99f1-75c601d3f188/var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/config/ceph.conf.new /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/config/ceph.conf
Jan 20 13:56:57 compute-1 sudo[75859]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:57 compute-1 sudo[75859]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:57 compute-1 sudo[75884]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:56:57 compute-1 sudo[75884]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:57 compute-1 sudo[75884]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:57 compute-1 sudo[75909]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Jan 20 13:56:57 compute-1 sudo[75909]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:57 compute-1 sudo[75909]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:57 compute-1 sudo[75934]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:56:57 compute-1 sudo[75934]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:57 compute-1 sudo[75934]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:57 compute-1 sudo[75959]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-e399cf45-e6b6-5393-99f1-75c601d3f188/etc/ceph
Jan 20 13:56:57 compute-1 sudo[75959]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:57 compute-1 sudo[75959]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:57 compute-1 sudo[75984]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:56:57 compute-1 sudo[75984]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:57 compute-1 sudo[75984]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:57 compute-1 sudo[76009]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-e399cf45-e6b6-5393-99f1-75c601d3f188/etc/ceph/ceph.client.admin.keyring.new
Jan 20 13:56:57 compute-1 sudo[76009]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:57 compute-1 sudo[76009]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:57 compute-1 sudo[76034]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:56:57 compute-1 sudo[76034]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:57 compute-1 sudo[76034]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:57 compute-1 sudo[76059]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-e399cf45-e6b6-5393-99f1-75c601d3f188
Jan 20 13:56:57 compute-1 sudo[76059]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:57 compute-1 sudo[76059]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:57 compute-1 sudo[76084]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:56:57 compute-1 sudo[76084]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:57 compute-1 sudo[76084]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:57 compute-1 sudo[76109]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-e399cf45-e6b6-5393-99f1-75c601d3f188/etc/ceph/ceph.client.admin.keyring.new
Jan 20 13:56:57 compute-1 sudo[76109]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:57 compute-1 sudo[76109]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:58 compute-1 sudo[76157]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:56:58 compute-1 sudo[76157]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:58 compute-1 sudo[76157]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:58 compute-1 sudo[76182]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-e399cf45-e6b6-5393-99f1-75c601d3f188/etc/ceph/ceph.client.admin.keyring.new
Jan 20 13:56:58 compute-1 sudo[76182]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:58 compute-1 sudo[76182]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:58 compute-1 sudo[76207]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:56:58 compute-1 sudo[76207]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:58 compute-1 sudo[76207]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:58 compute-1 sudo[76232]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-e399cf45-e6b6-5393-99f1-75c601d3f188/etc/ceph/ceph.client.admin.keyring.new
Jan 20 13:56:58 compute-1 sudo[76232]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:58 compute-1 sudo[76232]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:58 compute-1 sudo[76257]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:56:58 compute-1 sudo[76257]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:58 compute-1 sudo[76257]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:58 compute-1 sudo[76282]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-e399cf45-e6b6-5393-99f1-75c601d3f188/etc/ceph/ceph.client.admin.keyring.new /etc/ceph/ceph.client.admin.keyring
Jan 20 13:56:58 compute-1 sudo[76282]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:58 compute-1 sudo[76282]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:58 compute-1 sudo[76307]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:56:58 compute-1 sudo[76307]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:58 compute-1 sudo[76307]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:58 compute-1 sudo[76332]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/config
Jan 20 13:56:58 compute-1 sudo[76332]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:58 compute-1 sudo[76332]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:58 compute-1 sudo[76357]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:56:58 compute-1 sudo[76357]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:58 compute-1 sudo[76357]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:58 compute-1 sudo[76382]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-e399cf45-e6b6-5393-99f1-75c601d3f188/var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/config
Jan 20 13:56:58 compute-1 sudo[76382]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:58 compute-1 sudo[76382]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:58 compute-1 sudo[76407]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:56:58 compute-1 sudo[76407]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:58 compute-1 sudo[76407]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:58 compute-1 sudo[76432]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-e399cf45-e6b6-5393-99f1-75c601d3f188/var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/config/ceph.client.admin.keyring.new
Jan 20 13:56:58 compute-1 sudo[76432]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:58 compute-1 sudo[76432]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:59 compute-1 sudo[76457]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:56:59 compute-1 sudo[76457]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:59 compute-1 sudo[76457]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:59 compute-1 sudo[76482]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-e399cf45-e6b6-5393-99f1-75c601d3f188
Jan 20 13:56:59 compute-1 sudo[76482]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:59 compute-1 sudo[76482]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:59 compute-1 sudo[76507]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:56:59 compute-1 sudo[76507]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:59 compute-1 sudo[76507]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:59 compute-1 sudo[76532]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-e399cf45-e6b6-5393-99f1-75c601d3f188/var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/config/ceph.client.admin.keyring.new
Jan 20 13:56:59 compute-1 sudo[76532]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:59 compute-1 sudo[76532]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:59 compute-1 sudo[76580]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:56:59 compute-1 sudo[76580]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:59 compute-1 sudo[76580]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:59 compute-1 sudo[76605]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-e399cf45-e6b6-5393-99f1-75c601d3f188/var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/config/ceph.client.admin.keyring.new
Jan 20 13:56:59 compute-1 sudo[76605]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:59 compute-1 sudo[76605]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:59 compute-1 sudo[76630]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:56:59 compute-1 sudo[76630]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:59 compute-1 sudo[76630]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:59 compute-1 sudo[76655]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-e399cf45-e6b6-5393-99f1-75c601d3f188/var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/config/ceph.client.admin.keyring.new
Jan 20 13:56:59 compute-1 sudo[76655]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:59 compute-1 sudo[76655]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:59 compute-1 sudo[76680]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:56:59 compute-1 sudo[76680]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:59 compute-1 sudo[76680]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:59 compute-1 sudo[76705]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-e399cf45-e6b6-5393-99f1-75c601d3f188/var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/config/ceph.client.admin.keyring.new /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/config/ceph.client.admin.keyring
Jan 20 13:56:59 compute-1 sudo[76705]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:59 compute-1 sudo[76705]: pam_unix(sudo:session): session closed for user root
Jan 20 13:57:00 compute-1 sudo[76730]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:57:00 compute-1 sudo[76730]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:57:00 compute-1 sudo[76730]: pam_unix(sudo:session): session closed for user root
Jan 20 13:57:00 compute-1 sudo[76755]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 13:57:00 compute-1 sudo[76755]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:57:00 compute-1 sudo[76755]: pam_unix(sudo:session): session closed for user root
Jan 20 13:57:00 compute-1 sudo[76780]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:57:00 compute-1 sudo[76780]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:57:00 compute-1 sudo[76780]: pam_unix(sudo:session): session closed for user root
Jan 20 13:57:00 compute-1 sudo[76805]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid e399cf45-e6b6-5393-99f1-75c601d3f188
Jan 20 13:57:00 compute-1 sudo[76805]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:57:00 compute-1 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 20 13:57:00 compute-1 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 20 13:57:00 compute-1 podman[76870]: 2026-01-20 13:57:00.814137772 +0000 UTC m=+0.039004577 container create 8c2dc29c862515b902ade34205abf3f9403227ce3081e098d1e4f6bd95034c9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_banzai, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 13:57:00 compute-1 systemd[1]: Started libpod-conmon-8c2dc29c862515b902ade34205abf3f9403227ce3081e098d1e4f6bd95034c9f.scope.
Jan 20 13:57:00 compute-1 podman[76870]: 2026-01-20 13:57:00.795433531 +0000 UTC m=+0.020300326 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 13:57:00 compute-1 systemd[1]: Started libcrun container.
Jan 20 13:57:00 compute-1 podman[76870]: 2026-01-20 13:57:00.926787847 +0000 UTC m=+0.151654712 container init 8c2dc29c862515b902ade34205abf3f9403227ce3081e098d1e4f6bd95034c9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_banzai, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 20 13:57:00 compute-1 podman[76870]: 2026-01-20 13:57:00.938587622 +0000 UTC m=+0.163454427 container start 8c2dc29c862515b902ade34205abf3f9403227ce3081e098d1e4f6bd95034c9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_banzai, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 13:57:00 compute-1 podman[76870]: 2026-01-20 13:57:00.945642732 +0000 UTC m=+0.170509607 container attach 8c2dc29c862515b902ade34205abf3f9403227ce3081e098d1e4f6bd95034c9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_banzai, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 13:57:00 compute-1 epic_banzai[76886]: 167 167
Jan 20 13:57:00 compute-1 systemd[1]: libpod-8c2dc29c862515b902ade34205abf3f9403227ce3081e098d1e4f6bd95034c9f.scope: Deactivated successfully.
Jan 20 13:57:00 compute-1 conmon[76886]: conmon 8c2dc29c862515b902ad <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8c2dc29c862515b902ade34205abf3f9403227ce3081e098d1e4f6bd95034c9f.scope/container/memory.events
Jan 20 13:57:00 compute-1 podman[76870]: 2026-01-20 13:57:00.95084888 +0000 UTC m=+0.175715745 container died 8c2dc29c862515b902ade34205abf3f9403227ce3081e098d1e4f6bd95034c9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_banzai, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 13:57:01 compute-1 podman[76870]: 2026-01-20 13:57:01.006372655 +0000 UTC m=+0.231239460 container remove 8c2dc29c862515b902ade34205abf3f9403227ce3081e098d1e4f6bd95034c9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_banzai, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 13:57:01 compute-1 systemd[1]: libpod-conmon-8c2dc29c862515b902ade34205abf3f9403227ce3081e098d1e4f6bd95034c9f.scope: Deactivated successfully.
Jan 20 13:57:01 compute-1 systemd[1]: Reloading.
Jan 20 13:57:01 compute-1 systemd-sysv-generator[76934]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 13:57:01 compute-1 systemd-rc-local-generator[76931]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 13:57:01 compute-1 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 20 13:57:01 compute-1 systemd[1]: Reloading.
Jan 20 13:57:01 compute-1 systemd-rc-local-generator[76971]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 13:57:01 compute-1 systemd-sysv-generator[76976]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 13:57:01 compute-1 systemd[1]: Reached target All Ceph clusters and services.
Jan 20 13:57:01 compute-1 systemd[1]: Reloading.
Jan 20 13:57:01 compute-1 systemd-rc-local-generator[77009]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 13:57:01 compute-1 systemd-sysv-generator[77012]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 13:57:01 compute-1 systemd[1]: Reached target Ceph cluster e399cf45-e6b6-5393-99f1-75c601d3f188.
Jan 20 13:57:01 compute-1 systemd[1]: Reloading.
Jan 20 13:57:01 compute-1 systemd-rc-local-generator[77048]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 13:57:02 compute-1 systemd-sysv-generator[77051]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 13:57:02 compute-1 systemd[1]: Reloading.
Jan 20 13:57:02 compute-1 systemd-rc-local-generator[77089]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 13:57:02 compute-1 systemd-sysv-generator[77092]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 13:57:02 compute-1 systemd[1]: Created slice Slice /system/ceph-e399cf45-e6b6-5393-99f1-75c601d3f188.
Jan 20 13:57:02 compute-1 systemd[1]: Reached target System Time Set.
Jan 20 13:57:02 compute-1 systemd[1]: Reached target System Time Synchronized.
Jan 20 13:57:02 compute-1 systemd[1]: Starting Ceph crash.compute-1 for e399cf45-e6b6-5393-99f1-75c601d3f188...
Jan 20 13:57:02 compute-1 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 20 13:57:02 compute-1 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 20 13:57:02 compute-1 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 20 13:57:02 compute-1 podman[77148]: 2026-01-20 13:57:02.858648075 +0000 UTC m=+0.083158559 container create 718ebba7a543e42aad7051248d2c7dc014068c35c89c5b87f27b82d4de39c009 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-crash-compute-1, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 13:57:02 compute-1 podman[77148]: 2026-01-20 13:57:02.809324217 +0000 UTC m=+0.033834721 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 13:57:03 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef75fe091d0e2aed0c89236ea76d69d37c2fb011f97d328fe735d93708e99020/merged/etc/ceph/ceph.client.crash.compute-1.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 13:57:03 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef75fe091d0e2aed0c89236ea76d69d37c2fb011f97d328fe735d93708e99020/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 13:57:03 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef75fe091d0e2aed0c89236ea76d69d37c2fb011f97d328fe735d93708e99020/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 13:57:03 compute-1 podman[77148]: 2026-01-20 13:57:03.054551823 +0000 UTC m=+0.279062377 container init 718ebba7a543e42aad7051248d2c7dc014068c35c89c5b87f27b82d4de39c009 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-crash-compute-1, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Jan 20 13:57:03 compute-1 podman[77148]: 2026-01-20 13:57:03.059324348 +0000 UTC m=+0.283834872 container start 718ebba7a543e42aad7051248d2c7dc014068c35c89c5b87f27b82d4de39c009 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-crash-compute-1, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 13:57:03 compute-1 bash[77148]: 718ebba7a543e42aad7051248d2c7dc014068c35c89c5b87f27b82d4de39c009
Jan 20 13:57:03 compute-1 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-crash-compute-1[77164]: INFO:ceph-crash:pinging cluster to exercise our key
Jan 20 13:57:03 compute-1 systemd[1]: Started Ceph crash.compute-1 for e399cf45-e6b6-5393-99f1-75c601d3f188.
Jan 20 13:57:03 compute-1 sudo[76805]: pam_unix(sudo:session): session closed for user root
Jan 20 13:57:03 compute-1 sudo[77171]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:57:03 compute-1 sudo[77171]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:57:03 compute-1 sudo[77171]: pam_unix(sudo:session): session closed for user root
Jan 20 13:57:03 compute-1 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-crash-compute-1[77164]: 2026-01-20T13:57:03.432+0000 7f203dcb6640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Jan 20 13:57:03 compute-1 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-crash-compute-1[77164]: 2026-01-20T13:57:03.432+0000 7f203dcb6640 -1 AuthRegistry(0x7f2038067440) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Jan 20 13:57:03 compute-1 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-crash-compute-1[77164]: 2026-01-20T13:57:03.434+0000 7f203dcb6640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Jan 20 13:57:03 compute-1 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-crash-compute-1[77164]: 2026-01-20T13:57:03.434+0000 7f203dcb6640 -1 AuthRegistry(0x7f203dcb5000) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Jan 20 13:57:03 compute-1 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-crash-compute-1[77164]: 2026-01-20T13:57:03.436+0000 7f20377fe640 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1]
Jan 20 13:57:03 compute-1 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-crash-compute-1[77164]: 2026-01-20T13:57:03.436+0000 7f203dcb6640 -1 monclient: authenticate NOTE: no keyring found; disabled cephx authentication
Jan 20 13:57:03 compute-1 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-crash-compute-1[77164]: [errno 13] RADOS permission denied (error connecting to the cluster)
Jan 20 13:57:03 compute-1 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-crash-compute-1[77164]: INFO:ceph-crash:monitoring path /var/lib/ceph/crash, delay 600s
Jan 20 13:57:03 compute-1 sudo[77197]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 13:57:03 compute-1 sudo[77197]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:57:03 compute-1 sudo[77197]: pam_unix(sudo:session): session closed for user root
Jan 20 13:57:03 compute-1 sudo[77231]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:57:03 compute-1 sudo[77231]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:57:03 compute-1 sudo[77231]: pam_unix(sudo:session): session closed for user root
Jan 20 13:57:03 compute-1 sudo[77256]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 20 13:57:03 compute-1 sudo[77256]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:57:04 compute-1 podman[77320]: 2026-01-20 13:57:04.083143899 +0000 UTC m=+0.055192387 container create 1cb7c5e0d18637127d6f7b19e77ce179c1221a1668b1780d9863b302197ed175 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_brown, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 13:57:04 compute-1 systemd[1]: Started libpod-conmon-1cb7c5e0d18637127d6f7b19e77ce179c1221a1668b1780d9863b302197ed175.scope.
Jan 20 13:57:04 compute-1 podman[77320]: 2026-01-20 13:57:04.061599528 +0000 UTC m=+0.033648056 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 13:57:04 compute-1 systemd[1]: Started libcrun container.
Jan 20 13:57:04 compute-1 podman[77320]: 2026-01-20 13:57:04.187321084 +0000 UTC m=+0.159369642 container init 1cb7c5e0d18637127d6f7b19e77ce179c1221a1668b1780d9863b302197ed175 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_brown, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 13:57:04 compute-1 podman[77320]: 2026-01-20 13:57:04.199095738 +0000 UTC m=+0.171144246 container start 1cb7c5e0d18637127d6f7b19e77ce179c1221a1668b1780d9863b302197ed175 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_brown, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 20 13:57:04 compute-1 podman[77320]: 2026-01-20 13:57:04.203233415 +0000 UTC m=+0.175281993 container attach 1cb7c5e0d18637127d6f7b19e77ce179c1221a1668b1780d9863b302197ed175 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_brown, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 13:57:04 compute-1 recursing_brown[77337]: 167 167
Jan 20 13:57:04 compute-1 systemd[1]: libpod-1cb7c5e0d18637127d6f7b19e77ce179c1221a1668b1780d9863b302197ed175.scope: Deactivated successfully.
Jan 20 13:57:04 compute-1 podman[77320]: 2026-01-20 13:57:04.210710467 +0000 UTC m=+0.182759015 container died 1cb7c5e0d18637127d6f7b19e77ce179c1221a1668b1780d9863b302197ed175 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_brown, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 13:57:04 compute-1 systemd[1]: var-lib-containers-storage-overlay-b0f98fee64d0891fc249b02064888198d558c3f4adf045567084032496aa57b2-merged.mount: Deactivated successfully.
Jan 20 13:57:04 compute-1 podman[77320]: 2026-01-20 13:57:04.270828413 +0000 UTC m=+0.242876931 container remove 1cb7c5e0d18637127d6f7b19e77ce179c1221a1668b1780d9863b302197ed175 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_brown, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507)
Jan 20 13:57:04 compute-1 systemd[1]: libpod-conmon-1cb7c5e0d18637127d6f7b19e77ce179c1221a1668b1780d9863b302197ed175.scope: Deactivated successfully.
Jan 20 13:57:04 compute-1 podman[77359]: 2026-01-20 13:57:04.520720921 +0000 UTC m=+0.072590810 container create 6d23387de0c9f8bc9d9fa187afe65e88922cadb99627d9b3dcc35703f024de75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_greider, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 13:57:04 compute-1 systemd[1]: Started libpod-conmon-6d23387de0c9f8bc9d9fa187afe65e88922cadb99627d9b3dcc35703f024de75.scope.
Jan 20 13:57:04 compute-1 podman[77359]: 2026-01-20 13:57:04.493732045 +0000 UTC m=+0.045601984 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 13:57:04 compute-1 systemd[1]: Started libcrun container.
Jan 20 13:57:04 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d993914e6abbd4000a150a5a3529483fe179890d31904332e4ef83df61c4772c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 13:57:04 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d993914e6abbd4000a150a5a3529483fe179890d31904332e4ef83df61c4772c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 13:57:04 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d993914e6abbd4000a150a5a3529483fe179890d31904332e4ef83df61c4772c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 13:57:04 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d993914e6abbd4000a150a5a3529483fe179890d31904332e4ef83df61c4772c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 13:57:04 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d993914e6abbd4000a150a5a3529483fe179890d31904332e4ef83df61c4772c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 13:57:04 compute-1 podman[77359]: 2026-01-20 13:57:04.621252373 +0000 UTC m=+0.173122302 container init 6d23387de0c9f8bc9d9fa187afe65e88922cadb99627d9b3dcc35703f024de75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_greider, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 13:57:04 compute-1 podman[77359]: 2026-01-20 13:57:04.632982985 +0000 UTC m=+0.184852904 container start 6d23387de0c9f8bc9d9fa187afe65e88922cadb99627d9b3dcc35703f024de75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_greider, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 13:57:04 compute-1 podman[77359]: 2026-01-20 13:57:04.638043919 +0000 UTC m=+0.189913898 container attach 6d23387de0c9f8bc9d9fa187afe65e88922cadb99627d9b3dcc35703f024de75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_greider, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default)
Jan 20 13:57:05 compute-1 elated_greider[77376]: --> passed data devices: 0 physical, 1 LVM
Jan 20 13:57:05 compute-1 elated_greider[77376]: --> relative data size: 1.0
Jan 20 13:57:05 compute-1 elated_greider[77376]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 20 13:57:05 compute-1 elated_greider[77376]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 562c52e7-0678-4614-81fd-9a9eecf7d0f9
Jan 20 13:57:06 compute-1 elated_greider[77376]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 20 13:57:06 compute-1 lvm[77424]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 20 13:57:06 compute-1 lvm[77424]: VG ceph_vg0 finished
Jan 20 13:57:06 compute-1 elated_greider[77376]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-1
Jan 20 13:57:06 compute-1 elated_greider[77376]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg0/ceph_lv0
Jan 20 13:57:06 compute-1 elated_greider[77376]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Jan 20 13:57:06 compute-1 elated_greider[77376]: Running command: /usr/bin/ln -s /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-1/block
Jan 20 13:57:06 compute-1 elated_greider[77376]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-1/activate.monmap
Jan 20 13:57:06 compute-1 elated_greider[77376]:  stderr: got monmap epoch 1
Jan 20 13:57:06 compute-1 elated_greider[77376]: --> Creating keyring file for osd.1
Jan 20 13:57:06 compute-1 elated_greider[77376]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/keyring
Jan 20 13:57:06 compute-1 elated_greider[77376]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/
Jan 20 13:57:06 compute-1 elated_greider[77376]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 1 --monmap /var/lib/ceph/osd/ceph-1/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-1/ --osd-uuid 562c52e7-0678-4614-81fd-9a9eecf7d0f9 --setuser ceph --setgroup ceph
Jan 20 13:57:09 compute-1 elated_greider[77376]:  stderr: 2026-01-20T13:57:06.791+0000 7febea47b740 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Jan 20 13:57:09 compute-1 elated_greider[77376]:  stderr: 2026-01-20T13:57:06.791+0000 7febea47b740 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Jan 20 13:57:09 compute-1 elated_greider[77376]:  stderr: 2026-01-20T13:57:06.792+0000 7febea47b740 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Jan 20 13:57:09 compute-1 elated_greider[77376]:  stderr: 2026-01-20T13:57:06.792+0000 7febea47b740 -1 bluestore(/var/lib/ceph/osd/ceph-1/) _read_fsid unparsable uuid
Jan 20 13:57:09 compute-1 elated_greider[77376]: --> ceph-volume lvm prepare successful for: ceph_vg0/ceph_lv0
Jan 20 13:57:09 compute-1 elated_greider[77376]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Jan 20 13:57:09 compute-1 elated_greider[77376]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Jan 20 13:57:09 compute-1 elated_greider[77376]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-1/block
Jan 20 13:57:09 compute-1 elated_greider[77376]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Jan 20 13:57:09 compute-1 elated_greider[77376]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Jan 20 13:57:09 compute-1 elated_greider[77376]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Jan 20 13:57:09 compute-1 elated_greider[77376]: --> ceph-volume lvm activate successful for osd ID: 1
Jan 20 13:57:09 compute-1 elated_greider[77376]: --> ceph-volume lvm create successful for: ceph_vg0/ceph_lv0
Jan 20 13:57:09 compute-1 systemd[1]: libpod-6d23387de0c9f8bc9d9fa187afe65e88922cadb99627d9b3dcc35703f024de75.scope: Deactivated successfully.
Jan 20 13:57:09 compute-1 systemd[1]: libpod-6d23387de0c9f8bc9d9fa187afe65e88922cadb99627d9b3dcc35703f024de75.scope: Consumed 2.767s CPU time.
Jan 20 13:57:09 compute-1 podman[78322]: 2026-01-20 13:57:09.794923017 +0000 UTC m=+0.044674478 container died 6d23387de0c9f8bc9d9fa187afe65e88922cadb99627d9b3dcc35703f024de75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_greider, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 13:57:10 compute-1 systemd[1]: var-lib-containers-storage-overlay-d993914e6abbd4000a150a5a3529483fe179890d31904332e4ef83df61c4772c-merged.mount: Deactivated successfully.
Jan 20 13:57:10 compute-1 podman[78322]: 2026-01-20 13:57:10.627044089 +0000 UTC m=+0.876795520 container remove 6d23387de0c9f8bc9d9fa187afe65e88922cadb99627d9b3dcc35703f024de75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_greider, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True)
Jan 20 13:57:10 compute-1 systemd[1]: libpod-conmon-6d23387de0c9f8bc9d9fa187afe65e88922cadb99627d9b3dcc35703f024de75.scope: Deactivated successfully.
Jan 20 13:57:10 compute-1 sudo[77256]: pam_unix(sudo:session): session closed for user root
Jan 20 13:57:10 compute-1 sudo[78338]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:57:10 compute-1 sudo[78338]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:57:10 compute-1 sudo[78338]: pam_unix(sudo:session): session closed for user root
Jan 20 13:57:10 compute-1 sudo[78363]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 13:57:10 compute-1 sudo[78363]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:57:10 compute-1 sudo[78363]: pam_unix(sudo:session): session closed for user root
Jan 20 13:57:10 compute-1 sudo[78388]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:57:10 compute-1 sudo[78388]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:57:10 compute-1 sudo[78388]: pam_unix(sudo:session): session closed for user root
Jan 20 13:57:10 compute-1 sudo[78413]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- lvm list --format json
Jan 20 13:57:10 compute-1 sudo[78413]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:57:11 compute-1 podman[78479]: 2026-01-20 13:57:11.348589837 +0000 UTC m=+0.031801343 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 13:57:11 compute-1 podman[78479]: 2026-01-20 13:57:11.460373618 +0000 UTC m=+0.143585084 container create c6dde6b6ef49613f50b605822100fb95d4b8a03ef036108089db487850535e17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_mcclintock, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Jan 20 13:57:11 compute-1 systemd[1]: Started libpod-conmon-c6dde6b6ef49613f50b605822100fb95d4b8a03ef036108089db487850535e17.scope.
Jan 20 13:57:11 compute-1 systemd[1]: Started libcrun container.
Jan 20 13:57:11 compute-1 podman[78479]: 2026-01-20 13:57:11.572768096 +0000 UTC m=+0.255979592 container init c6dde6b6ef49613f50b605822100fb95d4b8a03ef036108089db487850535e17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_mcclintock, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Jan 20 13:57:11 compute-1 podman[78479]: 2026-01-20 13:57:11.58206985 +0000 UTC m=+0.265281326 container start c6dde6b6ef49613f50b605822100fb95d4b8a03ef036108089db487850535e17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_mcclintock, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 13:57:11 compute-1 podman[78479]: 2026-01-20 13:57:11.58700845 +0000 UTC m=+0.270219946 container attach c6dde6b6ef49613f50b605822100fb95d4b8a03ef036108089db487850535e17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_mcclintock, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 13:57:11 compute-1 vigorous_mcclintock[78495]: 167 167
Jan 20 13:57:11 compute-1 systemd[1]: libpod-c6dde6b6ef49613f50b605822100fb95d4b8a03ef036108089db487850535e17.scope: Deactivated successfully.
Jan 20 13:57:11 compute-1 podman[78479]: 2026-01-20 13:57:11.592950309 +0000 UTC m=+0.276161765 container died c6dde6b6ef49613f50b605822100fb95d4b8a03ef036108089db487850535e17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_mcclintock, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 20 13:57:11 compute-1 systemd[1]: var-lib-containers-storage-overlay-604e37ac9edb318c0a0ff38ef10fcf5056c9fc4c6df3cb95ee0b4d09d56e04db-merged.mount: Deactivated successfully.
Jan 20 13:57:11 compute-1 podman[78479]: 2026-01-20 13:57:11.637011439 +0000 UTC m=+0.320222895 container remove c6dde6b6ef49613f50b605822100fb95d4b8a03ef036108089db487850535e17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_mcclintock, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 20 13:57:11 compute-1 systemd[1]: libpod-conmon-c6dde6b6ef49613f50b605822100fb95d4b8a03ef036108089db487850535e17.scope: Deactivated successfully.
Jan 20 13:57:11 compute-1 podman[78517]: 2026-01-20 13:57:11.811758826 +0000 UTC m=+0.029936771 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 13:57:12 compute-1 podman[78517]: 2026-01-20 13:57:12.030197171 +0000 UTC m=+0.248375056 container create 6c30a69903d5af058e342543cd24de227f8a31e771ad012f1bc40e27817ceb99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_hertz, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 20 13:57:12 compute-1 systemd[1]: Started libpod-conmon-6c30a69903d5af058e342543cd24de227f8a31e771ad012f1bc40e27817ceb99.scope.
Jan 20 13:57:12 compute-1 systemd[1]: Started libcrun container.
Jan 20 13:57:12 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c17f21cb9c066d9d6a2ec54488a5d28b3604242d5deb119b744d8620eed0af3d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 13:57:12 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c17f21cb9c066d9d6a2ec54488a5d28b3604242d5deb119b744d8620eed0af3d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 13:57:12 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c17f21cb9c066d9d6a2ec54488a5d28b3604242d5deb119b744d8620eed0af3d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 13:57:12 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c17f21cb9c066d9d6a2ec54488a5d28b3604242d5deb119b744d8620eed0af3d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 13:57:12 compute-1 podman[78517]: 2026-01-20 13:57:12.347945945 +0000 UTC m=+0.566123870 container init 6c30a69903d5af058e342543cd24de227f8a31e771ad012f1bc40e27817ceb99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_hertz, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 13:57:12 compute-1 podman[78517]: 2026-01-20 13:57:12.357465015 +0000 UTC m=+0.575642870 container start 6c30a69903d5af058e342543cd24de227f8a31e771ad012f1bc40e27817ceb99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_hertz, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 13:57:12 compute-1 podman[78517]: 2026-01-20 13:57:12.3615237 +0000 UTC m=+0.579701665 container attach 6c30a69903d5af058e342543cd24de227f8a31e771ad012f1bc40e27817ceb99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_hertz, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 13:57:13 compute-1 sweet_hertz[78535]: {
Jan 20 13:57:13 compute-1 sweet_hertz[78535]:     "1": [
Jan 20 13:57:13 compute-1 sweet_hertz[78535]:         {
Jan 20 13:57:13 compute-1 sweet_hertz[78535]:             "devices": [
Jan 20 13:57:13 compute-1 sweet_hertz[78535]:                 "/dev/loop3"
Jan 20 13:57:13 compute-1 sweet_hertz[78535]:             ],
Jan 20 13:57:13 compute-1 sweet_hertz[78535]:             "lv_name": "ceph_lv0",
Jan 20 13:57:13 compute-1 sweet_hertz[78535]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 13:57:13 compute-1 sweet_hertz[78535]:             "lv_size": "7511998464",
Jan 20 13:57:13 compute-1 sweet_hertz[78535]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=TDt0ds-asam-XQ1t-lT00-aV5E-HrYi-HrQkBt,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e399cf45-e6b6-5393-99f1-75c601d3f188,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=562c52e7-0678-4614-81fd-9a9eecf7d0f9,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 20 13:57:13 compute-1 sweet_hertz[78535]:             "lv_uuid": "TDt0ds-asam-XQ1t-lT00-aV5E-HrYi-HrQkBt",
Jan 20 13:57:13 compute-1 sweet_hertz[78535]:             "name": "ceph_lv0",
Jan 20 13:57:13 compute-1 sweet_hertz[78535]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 13:57:13 compute-1 sweet_hertz[78535]:             "tags": {
Jan 20 13:57:13 compute-1 sweet_hertz[78535]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 20 13:57:13 compute-1 sweet_hertz[78535]:                 "ceph.block_uuid": "TDt0ds-asam-XQ1t-lT00-aV5E-HrYi-HrQkBt",
Jan 20 13:57:13 compute-1 sweet_hertz[78535]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 13:57:13 compute-1 sweet_hertz[78535]:                 "ceph.cluster_fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 13:57:13 compute-1 sweet_hertz[78535]:                 "ceph.cluster_name": "ceph",
Jan 20 13:57:13 compute-1 sweet_hertz[78535]:                 "ceph.crush_device_class": "",
Jan 20 13:57:13 compute-1 sweet_hertz[78535]:                 "ceph.encrypted": "0",
Jan 20 13:57:13 compute-1 sweet_hertz[78535]:                 "ceph.osd_fsid": "562c52e7-0678-4614-81fd-9a9eecf7d0f9",
Jan 20 13:57:13 compute-1 sweet_hertz[78535]:                 "ceph.osd_id": "1",
Jan 20 13:57:13 compute-1 sweet_hertz[78535]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 13:57:13 compute-1 sweet_hertz[78535]:                 "ceph.type": "block",
Jan 20 13:57:13 compute-1 sweet_hertz[78535]:                 "ceph.vdo": "0"
Jan 20 13:57:13 compute-1 sweet_hertz[78535]:             },
Jan 20 13:57:13 compute-1 sweet_hertz[78535]:             "type": "block",
Jan 20 13:57:13 compute-1 sweet_hertz[78535]:             "vg_name": "ceph_vg0"
Jan 20 13:57:13 compute-1 sweet_hertz[78535]:         }
Jan 20 13:57:13 compute-1 sweet_hertz[78535]:     ]
Jan 20 13:57:13 compute-1 sweet_hertz[78535]: }
Jan 20 13:57:13 compute-1 systemd[1]: libpod-6c30a69903d5af058e342543cd24de227f8a31e771ad012f1bc40e27817ceb99.scope: Deactivated successfully.
Jan 20 13:57:13 compute-1 podman[78517]: 2026-01-20 13:57:13.116793454 +0000 UTC m=+1.334971339 container died 6c30a69903d5af058e342543cd24de227f8a31e771ad012f1bc40e27817ceb99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_hertz, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 20 13:57:13 compute-1 systemd[1]: var-lib-containers-storage-overlay-c17f21cb9c066d9d6a2ec54488a5d28b3604242d5deb119b744d8620eed0af3d-merged.mount: Deactivated successfully.
Jan 20 13:57:13 compute-1 podman[78517]: 2026-01-20 13:57:13.179611146 +0000 UTC m=+1.397788991 container remove 6c30a69903d5af058e342543cd24de227f8a31e771ad012f1bc40e27817ceb99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_hertz, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Jan 20 13:57:13 compute-1 systemd[1]: libpod-conmon-6c30a69903d5af058e342543cd24de227f8a31e771ad012f1bc40e27817ceb99.scope: Deactivated successfully.
Jan 20 13:57:13 compute-1 sudo[78413]: pam_unix(sudo:session): session closed for user root
Jan 20 13:57:13 compute-1 sudo[78555]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:57:13 compute-1 sudo[78555]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:57:13 compute-1 sudo[78555]: pam_unix(sudo:session): session closed for user root
Jan 20 13:57:13 compute-1 sudo[78580]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 13:57:13 compute-1 sudo[78580]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:57:13 compute-1 sudo[78580]: pam_unix(sudo:session): session closed for user root
Jan 20 13:57:13 compute-1 sudo[78605]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:57:13 compute-1 sudo[78605]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:57:13 compute-1 sudo[78605]: pam_unix(sudo:session): session closed for user root
Jan 20 13:57:13 compute-1 sudo[78630]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid e399cf45-e6b6-5393-99f1-75c601d3f188
Jan 20 13:57:13 compute-1 sudo[78630]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:57:13 compute-1 podman[78697]: 2026-01-20 13:57:13.884024706 +0000 UTC m=+0.048980349 container create 473c3eda48698ea0394243f288f7faa561f940f7a3621074e5ccc42da7a27f61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_dewdney, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 13:57:13 compute-1 systemd[1]: Started libpod-conmon-473c3eda48698ea0394243f288f7faa561f940f7a3621074e5ccc42da7a27f61.scope.
Jan 20 13:57:13 compute-1 systemd[1]: Started libcrun container.
Jan 20 13:57:13 compute-1 podman[78697]: 2026-01-20 13:57:13.86161154 +0000 UTC m=+0.026567153 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 13:57:13 compute-1 podman[78697]: 2026-01-20 13:57:13.983475357 +0000 UTC m=+0.148430990 container init 473c3eda48698ea0394243f288f7faa561f940f7a3621074e5ccc42da7a27f61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_dewdney, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 20 13:57:13 compute-1 podman[78697]: 2026-01-20 13:57:13.990902738 +0000 UTC m=+0.155858371 container start 473c3eda48698ea0394243f288f7faa561f940f7a3621074e5ccc42da7a27f61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_dewdney, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS)
Jan 20 13:57:13 compute-1 confident_dewdney[78715]: 167 167
Jan 20 13:57:13 compute-1 systemd[1]: libpod-473c3eda48698ea0394243f288f7faa561f940f7a3621074e5ccc42da7a27f61.scope: Deactivated successfully.
Jan 20 13:57:14 compute-1 podman[78697]: 2026-01-20 13:57:14.032339963 +0000 UTC m=+0.197295596 container attach 473c3eda48698ea0394243f288f7faa561f940f7a3621074e5ccc42da7a27f61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_dewdney, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Jan 20 13:57:14 compute-1 podman[78697]: 2026-01-20 13:57:14.033718002 +0000 UTC m=+0.198673655 container died 473c3eda48698ea0394243f288f7faa561f940f7a3621074e5ccc42da7a27f61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_dewdney, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Jan 20 13:57:14 compute-1 systemd[1]: var-lib-containers-storage-overlay-a61eed5759e507901987df3e92d86758bef62b44b1e7ad3f83617b085f258607-merged.mount: Deactivated successfully.
Jan 20 13:57:14 compute-1 podman[78697]: 2026-01-20 13:57:14.090540454 +0000 UTC m=+0.255496067 container remove 473c3eda48698ea0394243f288f7faa561f940f7a3621074e5ccc42da7a27f61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_dewdney, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Jan 20 13:57:14 compute-1 systemd[1]: libpod-conmon-473c3eda48698ea0394243f288f7faa561f940f7a3621074e5ccc42da7a27f61.scope: Deactivated successfully.
Jan 20 13:57:14 compute-1 podman[78746]: 2026-01-20 13:57:14.456358321 +0000 UTC m=+0.048751584 container create b4d4b812a40cf0094ac25b3e0c1232f1b462f06e197f832b9e9b5e22ce23073e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-osd-1-activate-test, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 13:57:14 compute-1 systemd[1]: Started libpod-conmon-b4d4b812a40cf0094ac25b3e0c1232f1b462f06e197f832b9e9b5e22ce23073e.scope.
Jan 20 13:57:14 compute-1 podman[78746]: 2026-01-20 13:57:14.436428195 +0000 UTC m=+0.028821438 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 13:57:14 compute-1 systemd[1]: Started libcrun container.
Jan 20 13:57:14 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/222464e56bc2f745f5311a0a24d9c9fbc5ae14208298e9eb0afb18cf88b2c704/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 13:57:14 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/222464e56bc2f745f5311a0a24d9c9fbc5ae14208298e9eb0afb18cf88b2c704/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 13:57:14 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/222464e56bc2f745f5311a0a24d9c9fbc5ae14208298e9eb0afb18cf88b2c704/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 13:57:14 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/222464e56bc2f745f5311a0a24d9c9fbc5ae14208298e9eb0afb18cf88b2c704/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 13:57:14 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/222464e56bc2f745f5311a0a24d9c9fbc5ae14208298e9eb0afb18cf88b2c704/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Jan 20 13:57:17 compute-1 podman[78746]: 2026-01-20 13:57:17.409677984 +0000 UTC m=+3.002071227 container init b4d4b812a40cf0094ac25b3e0c1232f1b462f06e197f832b9e9b5e22ce23073e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-osd-1-activate-test, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 20 13:57:17 compute-1 podman[78746]: 2026-01-20 13:57:17.422451665 +0000 UTC m=+3.014844918 container start b4d4b812a40cf0094ac25b3e0c1232f1b462f06e197f832b9e9b5e22ce23073e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-osd-1-activate-test, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Jan 20 13:57:17 compute-1 podman[78746]: 2026-01-20 13:57:17.43674164 +0000 UTC m=+3.029134873 container attach b4d4b812a40cf0094ac25b3e0c1232f1b462f06e197f832b9e9b5e22ce23073e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-osd-1-activate-test, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True)
Jan 20 13:57:18 compute-1 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-osd-1-activate-test[78762]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_UUID]
Jan 20 13:57:18 compute-1 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-osd-1-activate-test[78762]:                             [--no-systemd] [--no-tmpfs]
Jan 20 13:57:18 compute-1 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-osd-1-activate-test[78762]: ceph-volume activate: error: unrecognized arguments: --bad-option
Jan 20 13:57:18 compute-1 systemd[1]: libpod-b4d4b812a40cf0094ac25b3e0c1232f1b462f06e197f832b9e9b5e22ce23073e.scope: Deactivated successfully.
Jan 20 13:57:18 compute-1 podman[78746]: 2026-01-20 13:57:18.155681124 +0000 UTC m=+3.748074377 container died b4d4b812a40cf0094ac25b3e0c1232f1b462f06e197f832b9e9b5e22ce23073e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-osd-1-activate-test, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 13:57:18 compute-1 systemd[1]: var-lib-containers-storage-overlay-222464e56bc2f745f5311a0a24d9c9fbc5ae14208298e9eb0afb18cf88b2c704-merged.mount: Deactivated successfully.
Jan 20 13:57:18 compute-1 podman[78746]: 2026-01-20 13:57:18.230406474 +0000 UTC m=+3.822799737 container remove b4d4b812a40cf0094ac25b3e0c1232f1b462f06e197f832b9e9b5e22ce23073e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-osd-1-activate-test, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 13:57:18 compute-1 systemd[1]: libpod-conmon-b4d4b812a40cf0094ac25b3e0c1232f1b462f06e197f832b9e9b5e22ce23073e.scope: Deactivated successfully.
Jan 20 13:57:18 compute-1 systemd[1]: Reloading.
Jan 20 13:57:18 compute-1 systemd-sysv-generator[78835]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 13:57:18 compute-1 systemd-rc-local-generator[78831]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 13:57:18 compute-1 systemd[1]: Reloading.
Jan 20 13:57:18 compute-1 systemd-rc-local-generator[78868]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 13:57:18 compute-1 systemd-sysv-generator[78871]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 13:57:19 compute-1 systemd[1]: Starting Ceph osd.1 for e399cf45-e6b6-5393-99f1-75c601d3f188...
Jan 20 13:57:19 compute-1 podman[78928]: 2026-01-20 13:57:19.455055422 +0000 UTC m=+0.061352701 container create 20f36b600dacba52f75e185f79ca3e063cef685a0d176971a28152745ea1c248 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-osd-1-activate, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 13:57:19 compute-1 systemd[1]: Started libcrun container.
Jan 20 13:57:19 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44589a501efb33d81eb477415101d1de97259449653627fa8e0f90e521224e0e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 13:57:19 compute-1 podman[78928]: 2026-01-20 13:57:19.432928584 +0000 UTC m=+0.039225853 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 13:57:19 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44589a501efb33d81eb477415101d1de97259449653627fa8e0f90e521224e0e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 13:57:19 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44589a501efb33d81eb477415101d1de97259449653627fa8e0f90e521224e0e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 13:57:19 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44589a501efb33d81eb477415101d1de97259449653627fa8e0f90e521224e0e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 13:57:19 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44589a501efb33d81eb477415101d1de97259449653627fa8e0f90e521224e0e/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Jan 20 13:57:19 compute-1 podman[78928]: 2026-01-20 13:57:19.542510823 +0000 UTC m=+0.148808092 container init 20f36b600dacba52f75e185f79ca3e063cef685a0d176971a28152745ea1c248 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-osd-1-activate, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 13:57:19 compute-1 podman[78928]: 2026-01-20 13:57:19.554777731 +0000 UTC m=+0.161074970 container start 20f36b600dacba52f75e185f79ca3e063cef685a0d176971a28152745ea1c248 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-osd-1-activate, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 20 13:57:19 compute-1 podman[78928]: 2026-01-20 13:57:19.559061852 +0000 UTC m=+0.165359111 container attach 20f36b600dacba52f75e185f79ca3e063cef685a0d176971a28152745ea1c248 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-osd-1-activate, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef)
Jan 20 13:57:20 compute-1 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-osd-1-activate[78942]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Jan 20 13:57:20 compute-1 bash[78928]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Jan 20 13:57:20 compute-1 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-osd-1-activate[78942]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-1 --no-mon-config --dev /dev/mapper/ceph_vg0-ceph_lv0
Jan 20 13:57:20 compute-1 bash[78928]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-1 --no-mon-config --dev /dev/mapper/ceph_vg0-ceph_lv0
Jan 20 13:57:20 compute-1 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-osd-1-activate[78942]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg0-ceph_lv0
Jan 20 13:57:20 compute-1 bash[78928]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg0-ceph_lv0
Jan 20 13:57:20 compute-1 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-osd-1-activate[78942]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Jan 20 13:57:20 compute-1 bash[78928]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Jan 20 13:57:20 compute-1 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-osd-1-activate[78942]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg0-ceph_lv0 /var/lib/ceph/osd/ceph-1/block
Jan 20 13:57:20 compute-1 bash[78928]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg0-ceph_lv0 /var/lib/ceph/osd/ceph-1/block
Jan 20 13:57:20 compute-1 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-osd-1-activate[78942]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Jan 20 13:57:20 compute-1 bash[78928]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Jan 20 13:57:20 compute-1 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-osd-1-activate[78942]: --> ceph-volume raw activate successful for osd ID: 1
Jan 20 13:57:20 compute-1 bash[78928]: --> ceph-volume raw activate successful for osd ID: 1
Jan 20 13:57:20 compute-1 systemd[1]: libpod-20f36b600dacba52f75e185f79ca3e063cef685a0d176971a28152745ea1c248.scope: Deactivated successfully.
Jan 20 13:57:20 compute-1 podman[78928]: 2026-01-20 13:57:20.461252413 +0000 UTC m=+1.067549652 container died 20f36b600dacba52f75e185f79ca3e063cef685a0d176971a28152745ea1c248 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-osd-1-activate, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 13:57:20 compute-1 systemd[1]: var-lib-containers-storage-overlay-44589a501efb33d81eb477415101d1de97259449653627fa8e0f90e521224e0e-merged.mount: Deactivated successfully.
Jan 20 13:57:20 compute-1 podman[78928]: 2026-01-20 13:57:20.517300173 +0000 UTC m=+1.123597412 container remove 20f36b600dacba52f75e185f79ca3e063cef685a0d176971a28152745ea1c248 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-osd-1-activate, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Jan 20 13:57:20 compute-1 podman[79100]: 2026-01-20 13:57:20.715582798 +0000 UTC m=+0.040554462 container create 2681b7d660cda7dd317ff9dc8fefed0c116200491d86f19ae07733e29ca0fc6b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-osd-1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 13:57:20 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9c1ea6a997a5906003d4ec712202dea111c0476d00cc9babc7e9d63d48299cc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 13:57:20 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9c1ea6a997a5906003d4ec712202dea111c0476d00cc9babc7e9d63d48299cc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 13:57:20 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9c1ea6a997a5906003d4ec712202dea111c0476d00cc9babc7e9d63d48299cc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 13:57:20 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9c1ea6a997a5906003d4ec712202dea111c0476d00cc9babc7e9d63d48299cc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 13:57:20 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9c1ea6a997a5906003d4ec712202dea111c0476d00cc9babc7e9d63d48299cc/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Jan 20 13:57:20 compute-1 podman[79100]: 2026-01-20 13:57:20.699663996 +0000 UTC m=+0.024635680 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 13:57:20 compute-1 podman[79100]: 2026-01-20 13:57:20.796172324 +0000 UTC m=+0.121144038 container init 2681b7d660cda7dd317ff9dc8fefed0c116200491d86f19ae07733e29ca0fc6b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-osd-1, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 13:57:20 compute-1 podman[79100]: 2026-01-20 13:57:20.805265962 +0000 UTC m=+0.130237636 container start 2681b7d660cda7dd317ff9dc8fefed0c116200491d86f19ae07733e29ca0fc6b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-osd-1, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 20 13:57:20 compute-1 bash[79100]: 2681b7d660cda7dd317ff9dc8fefed0c116200491d86f19ae07733e29ca0fc6b
Jan 20 13:57:20 compute-1 systemd[1]: Started Ceph osd.1 for e399cf45-e6b6-5393-99f1-75c601d3f188.
Jan 20 13:57:20 compute-1 ceph-osd[79119]: set uid:gid to 167:167 (ceph:ceph)
Jan 20 13:57:20 compute-1 ceph-osd[79119]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-osd, pid 2
Jan 20 13:57:20 compute-1 ceph-osd[79119]: pidfile_write: ignore empty --pid-file
Jan 20 13:57:20 compute-1 ceph-osd[79119]: bdev(0x557dbec6b800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 20 13:57:20 compute-1 ceph-osd[79119]: bdev(0x557dbec6b800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 20 13:57:20 compute-1 ceph-osd[79119]: bdev(0x557dbec6b800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 20 13:57:20 compute-1 ceph-osd[79119]: bdev(0x557dbec6b800 /var/lib/ceph/osd/ceph-1/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 20 13:57:20 compute-1 ceph-osd[79119]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 20 13:57:20 compute-1 ceph-osd[79119]: bdev(0x557dbfaad800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 20 13:57:20 compute-1 ceph-osd[79119]: bdev(0x557dbfaad800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 20 13:57:20 compute-1 ceph-osd[79119]: bdev(0x557dbfaad800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 20 13:57:20 compute-1 ceph-osd[79119]: bdev(0x557dbfaad800 /var/lib/ceph/osd/ceph-1/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 20 13:57:20 compute-1 ceph-osd[79119]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 7.0 GiB
Jan 20 13:57:20 compute-1 ceph-osd[79119]: bdev(0x557dbfaad800 /var/lib/ceph/osd/ceph-1/block) close
Jan 20 13:57:20 compute-1 sudo[78630]: pam_unix(sudo:session): session closed for user root
Jan 20 13:57:21 compute-1 sudo[79132]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:57:21 compute-1 sudo[79132]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:57:21 compute-1 sudo[79132]: pam_unix(sudo:session): session closed for user root
Jan 20 13:57:21 compute-1 sudo[79157]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 13:57:21 compute-1 sudo[79157]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:57:21 compute-1 sudo[79157]: pam_unix(sudo:session): session closed for user root
Jan 20 13:57:21 compute-1 ceph-osd[79119]: bdev(0x557dbec6b800 /var/lib/ceph/osd/ceph-1/block) close
Jan 20 13:57:21 compute-1 sudo[79182]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:57:21 compute-1 sudo[79182]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:57:21 compute-1 sudo[79182]: pam_unix(sudo:session): session closed for user root
Jan 20 13:57:21 compute-1 sudo[79209]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- raw list --format json
Jan 20 13:57:21 compute-1 sudo[79209]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:57:21 compute-1 ceph-osd[79119]: starting osd.1 osd_data /var/lib/ceph/osd/ceph-1 /var/lib/ceph/osd/ceph-1/journal
Jan 20 13:57:21 compute-1 ceph-osd[79119]: load: jerasure load: lrc 
Jan 20 13:57:21 compute-1 ceph-osd[79119]: bdev(0x557dbfb2ec00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 20 13:57:21 compute-1 ceph-osd[79119]: bdev(0x557dbfb2ec00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 20 13:57:21 compute-1 ceph-osd[79119]: bdev(0x557dbfb2ec00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 20 13:57:21 compute-1 ceph-osd[79119]: bdev(0x557dbfb2ec00 /var/lib/ceph/osd/ceph-1/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 20 13:57:21 compute-1 ceph-osd[79119]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 20 13:57:21 compute-1 ceph-osd[79119]: bdev(0x557dbfb2ec00 /var/lib/ceph/osd/ceph-1/block) close
Jan 20 13:57:21 compute-1 ceph-osd[79119]: bdev(0x557dbfb2ec00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 20 13:57:21 compute-1 ceph-osd[79119]: bdev(0x557dbfb2ec00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 20 13:57:21 compute-1 ceph-osd[79119]: bdev(0x557dbfb2ec00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 20 13:57:21 compute-1 ceph-osd[79119]: bdev(0x557dbfb2ec00 /var/lib/ceph/osd/ceph-1/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 20 13:57:21 compute-1 ceph-osd[79119]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 20 13:57:21 compute-1 ceph-osd[79119]: bdev(0x557dbfb2ec00 /var/lib/ceph/osd/ceph-1/block) close
Jan 20 13:57:21 compute-1 podman[79279]: 2026-01-20 13:57:21.650018783 +0000 UTC m=+0.036097865 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Jan 20 13:57:21 compute-1 ceph-osd[79119]: osd.1:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Jan 20 13:57:21 compute-1 ceph-osd[79119]: bdev(0x557dbfb2ec00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 20 13:57:21 compute-1 ceph-osd[79119]: bdev(0x557dbfb2ec00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 20 13:57:21 compute-1 ceph-osd[79119]: bdev(0x557dbfb2ec00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 20 13:57:21 compute-1 ceph-osd[79119]: bdev(0x557dbfb2ec00 /var/lib/ceph/osd/ceph-1/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 20 13:57:21 compute-1 ceph-osd[79119]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 20 13:57:21 compute-1 ceph-osd[79119]: bdev(0x557dbfb2f400 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 20 13:57:21 compute-1 ceph-osd[79119]: bdev(0x557dbfb2f400 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 20 13:57:21 compute-1 ceph-osd[79119]: bdev(0x557dbfb2f400 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 20 13:57:21 compute-1 ceph-osd[79119]: bdev(0x557dbfb2f400 /var/lib/ceph/osd/ceph-1/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 20 13:57:21 compute-1 ceph-osd[79119]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 7.0 GiB
Jan 20 13:57:21 compute-1 ceph-osd[79119]: bluefs mount
Jan 20 13:57:21 compute-1 ceph-osd[79119]: bluefs _init_alloc shared, id 1, capacity 0x1bfc00000, block size 0x10000
Jan 20 13:57:21 compute-1 ceph-osd[79119]: bluefs mount shared_bdev_used = 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,7136398540 db.slow,7136398540
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: RocksDB version: 7.9.2
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: Git sha 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: Compile date 2025-05-06 23:30:25
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: DB SUMMARY
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: DB Session ID:  LTNCJ3XTV54YABYKDU5U
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: CURRENT file:  CURRENT
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: IDENTITY file:  IDENTITY
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                         Options.error_if_exists: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                       Options.create_if_missing: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                         Options.paranoid_checks: 1
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:             Options.flush_verify_memtable_count: 1
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                                     Options.env: 0x557dbfaffc70
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                                      Options.fs: LegacyFileSystem
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                                Options.info_log: 0x557dbece8ba0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                Options.max_file_opening_threads: 16
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                              Options.statistics: (nil)
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                               Options.use_fsync: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                       Options.max_log_file_size: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                   Options.log_file_time_to_roll: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                       Options.keep_log_file_num: 1000
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                    Options.recycle_log_file_num: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                         Options.allow_fallocate: 1
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                        Options.allow_mmap_reads: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                       Options.allow_mmap_writes: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                        Options.use_direct_reads: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:          Options.create_missing_column_families: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                              Options.db_log_dir: 
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                                 Options.wal_dir: db.wal
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                Options.table_cache_numshardbits: 6
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                         Options.WAL_ttl_seconds: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                       Options.WAL_size_limit_MB: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:             Options.manifest_preallocation_size: 4194304
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                     Options.is_fd_close_on_exec: 1
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                   Options.advise_random_on_open: 1
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                    Options.db_write_buffer_size: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                    Options.write_buffer_manager: 0x557dbfc08460
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:         Options.access_hint_on_compaction_start: 1
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                      Options.use_adaptive_mutex: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                            Options.rate_limiter: (nil)
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                       Options.wal_recovery_mode: 2
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                  Options.enable_thread_tracking: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                  Options.enable_pipelined_write: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                  Options.unordered_write: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:             Options.write_thread_max_yield_usec: 100
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                               Options.row_cache: None
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                              Options.wal_filter: None
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:             Options.avoid_flush_during_recovery: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:             Options.allow_ingest_behind: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:             Options.two_write_queues: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:             Options.manual_wal_flush: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:             Options.wal_compression: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:             Options.atomic_flush: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                 Options.persist_stats_to_disk: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                 Options.write_dbid_to_manifest: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                 Options.log_readahead_size: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                 Options.best_efforts_recovery: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:             Options.allow_data_in_errors: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:             Options.db_host_id: __hostname__
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:             Options.enforce_single_del_contracts: true
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:             Options.max_background_jobs: 4
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:             Options.max_background_compactions: -1
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:             Options.max_subcompactions: 1
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:           Options.writable_file_max_buffer_size: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:             Options.delayed_write_rate : 16777216
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:             Options.max_total_wal_size: 1073741824
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                   Options.stats_dump_period_sec: 600
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                 Options.stats_persist_period_sec: 600
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                          Options.max_open_files: -1
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                          Options.bytes_per_sync: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                      Options.wal_bytes_per_sync: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                   Options.strict_bytes_per_sync: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:       Options.compaction_readahead_size: 2097152
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                  Options.max_background_flushes: -1
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: Compression algorithms supported:
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:         kZSTD supported: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:         kXpressCompression supported: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:         kBZip2Compression supported: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:         kZSTDNotFinalCompression supported: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:         kLZ4Compression supported: 1
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:         kZlibCompression supported: 1
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:         kLZ4HCCompression supported: 1
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:         kSnappyCompression supported: 1
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: Fast CRC32 supported: Supported on x86
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: DMutex implementation: pthread_mutex_t
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:        Options.compaction_filter: None
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:        Options.compaction_filter_factory: None
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:  Options.sst_partitioner_factory: None
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557dbece8600)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x557dbecdedd0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:        Options.write_buffer_size: 16777216
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:  Options.max_write_buffer_number: 64
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:          Options.compression: LZ4
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:       Options.prefix_extractor: nullptr
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:             Options.num_levels: 7
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                  Options.compression_opts.level: 32767
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:               Options.compression_opts.strategy: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                  Options.compression_opts.enabled: false
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                        Options.arena_block_size: 1048576
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                Options.disable_auto_compactions: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                   Options.inplace_update_support: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                           Options.bloom_locality: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                    Options.max_successive_merges: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                Options.paranoid_file_checks: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                Options.force_consistency_checks: 1
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                Options.report_bg_io_stats: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                               Options.ttl: 2592000
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                       Options.enable_blob_files: false
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                           Options.min_blob_size: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                          Options.blob_file_size: 268435456
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                Options.blob_file_starting_level: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:           Options.merge_operator: None
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:        Options.compaction_filter: None
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:        Options.compaction_filter_factory: None
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:  Options.sst_partitioner_factory: None
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557dbece8600)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x557dbecdedd0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:        Options.write_buffer_size: 16777216
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:  Options.max_write_buffer_number: 64
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:          Options.compression: LZ4
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:       Options.prefix_extractor: nullptr
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:             Options.num_levels: 7
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                  Options.compression_opts.level: 32767
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:               Options.compression_opts.strategy: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                  Options.compression_opts.enabled: false
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 20 13:57:21 compute-1 podman[79279]: 2026-01-20 13:57:21.939487264 +0000 UTC m=+0.325566266 container create 798485aa26b97fa02fd02e0443b78d91c203b27c2c6839d74176afd14ea270f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_liskov, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True)
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                        Options.arena_block_size: 1048576
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                Options.disable_auto_compactions: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                   Options.inplace_update_support: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                           Options.bloom_locality: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                    Options.max_successive_merges: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                Options.paranoid_file_checks: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                Options.force_consistency_checks: 1
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                Options.report_bg_io_stats: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                               Options.ttl: 2592000
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                       Options.enable_blob_files: false
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                           Options.min_blob_size: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                          Options.blob_file_size: 268435456
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                Options.blob_file_starting_level: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:           Options.merge_operator: None
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:        Options.compaction_filter: None
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:        Options.compaction_filter_factory: None
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:  Options.sst_partitioner_factory: None
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557dbece8600)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x557dbecdedd0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:        Options.write_buffer_size: 16777216
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:  Options.max_write_buffer_number: 64
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:          Options.compression: LZ4
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:       Options.prefix_extractor: nullptr
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:             Options.num_levels: 7
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                  Options.compression_opts.level: 32767
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:               Options.compression_opts.strategy: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                  Options.compression_opts.enabled: false
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                        Options.arena_block_size: 1048576
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                Options.disable_auto_compactions: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                   Options.inplace_update_support: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                           Options.bloom_locality: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                    Options.max_successive_merges: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                Options.paranoid_file_checks: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                Options.force_consistency_checks: 1
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                Options.report_bg_io_stats: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                               Options.ttl: 2592000
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                       Options.enable_blob_files: false
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                           Options.min_blob_size: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                          Options.blob_file_size: 268435456
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                Options.blob_file_starting_level: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:           Options.merge_operator: None
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:        Options.compaction_filter: None
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:        Options.compaction_filter_factory: None
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:  Options.sst_partitioner_factory: None
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557dbece8600)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x557dbecdedd0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:        Options.write_buffer_size: 16777216
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:  Options.max_write_buffer_number: 64
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:          Options.compression: LZ4
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:       Options.prefix_extractor: nullptr
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:             Options.num_levels: 7
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                  Options.compression_opts.level: 32767
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:               Options.compression_opts.strategy: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                  Options.compression_opts.enabled: false
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                        Options.arena_block_size: 1048576
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                Options.disable_auto_compactions: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                   Options.inplace_update_support: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                           Options.bloom_locality: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                    Options.max_successive_merges: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                Options.paranoid_file_checks: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                Options.force_consistency_checks: 1
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                Options.report_bg_io_stats: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                               Options.ttl: 2592000
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                       Options.enable_blob_files: false
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                           Options.min_blob_size: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                          Options.blob_file_size: 268435456
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                Options.blob_file_starting_level: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:           Options.merge_operator: None
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:        Options.compaction_filter: None
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:        Options.compaction_filter_factory: None
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:  Options.sst_partitioner_factory: None
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557dbece8600)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x557dbecdedd0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:        Options.write_buffer_size: 16777216
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:  Options.max_write_buffer_number: 64
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:          Options.compression: LZ4
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:       Options.prefix_extractor: nullptr
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:             Options.num_levels: 7
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                  Options.compression_opts.level: 32767
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:               Options.compression_opts.strategy: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                  Options.compression_opts.enabled: false
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                        Options.arena_block_size: 1048576
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                Options.disable_auto_compactions: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                   Options.inplace_update_support: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                           Options.bloom_locality: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                    Options.max_successive_merges: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                Options.paranoid_file_checks: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                Options.force_consistency_checks: 1
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                Options.report_bg_io_stats: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                               Options.ttl: 2592000
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                       Options.enable_blob_files: false
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                           Options.min_blob_size: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                          Options.blob_file_size: 268435456
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                Options.blob_file_starting_level: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:           Options.merge_operator: None
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:        Options.compaction_filter: None
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:        Options.compaction_filter_factory: None
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:  Options.sst_partitioner_factory: None
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557dbece8600)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x557dbecdedd0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:        Options.write_buffer_size: 16777216
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:  Options.max_write_buffer_number: 64
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:          Options.compression: LZ4
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:       Options.prefix_extractor: nullptr
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:             Options.num_levels: 7
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                  Options.compression_opts.level: 32767
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:               Options.compression_opts.strategy: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                  Options.compression_opts.enabled: false
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                        Options.arena_block_size: 1048576
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                Options.disable_auto_compactions: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                   Options.inplace_update_support: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                           Options.bloom_locality: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                    Options.max_successive_merges: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                Options.paranoid_file_checks: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                Options.force_consistency_checks: 1
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                Options.report_bg_io_stats: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                               Options.ttl: 2592000
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                       Options.enable_blob_files: false
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                           Options.min_blob_size: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                          Options.blob_file_size: 268435456
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                Options.blob_file_starting_level: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:           Options.merge_operator: None
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:        Options.compaction_filter: None
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:        Options.compaction_filter_factory: None
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:  Options.sst_partitioner_factory: None
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557dbece8600)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x557dbecdedd0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:        Options.write_buffer_size: 16777216
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:  Options.max_write_buffer_number: 64
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:          Options.compression: LZ4
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:       Options.prefix_extractor: nullptr
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:             Options.num_levels: 7
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                  Options.compression_opts.level: 32767
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:               Options.compression_opts.strategy: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                  Options.compression_opts.enabled: false
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                        Options.arena_block_size: 1048576
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                Options.disable_auto_compactions: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                   Options.inplace_update_support: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                           Options.bloom_locality: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                    Options.max_successive_merges: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                Options.paranoid_file_checks: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                Options.force_consistency_checks: 1
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                Options.report_bg_io_stats: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                               Options.ttl: 2592000
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                       Options.enable_blob_files: false
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                           Options.min_blob_size: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                          Options.blob_file_size: 268435456
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                Options.blob_file_starting_level: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:           Options.merge_operator: None
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:        Options.compaction_filter: None
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:        Options.compaction_filter_factory: None
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:  Options.sst_partitioner_factory: None
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557dbece85c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x557dbecde430
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:        Options.write_buffer_size: 16777216
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:  Options.max_write_buffer_number: 64
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:          Options.compression: LZ4
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:       Options.prefix_extractor: nullptr
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:             Options.num_levels: 7
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                  Options.compression_opts.level: 32767
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:               Options.compression_opts.strategy: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                  Options.compression_opts.enabled: false
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                        Options.arena_block_size: 1048576
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                Options.disable_auto_compactions: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                   Options.inplace_update_support: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                           Options.bloom_locality: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                    Options.max_successive_merges: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                Options.paranoid_file_checks: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                Options.force_consistency_checks: 1
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                Options.report_bg_io_stats: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                               Options.ttl: 2592000
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                       Options.enable_blob_files: false
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                           Options.min_blob_size: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                          Options.blob_file_size: 268435456
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                Options.blob_file_starting_level: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:           Options.merge_operator: None
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:        Options.compaction_filter: None
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:        Options.compaction_filter_factory: None
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:  Options.sst_partitioner_factory: None
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557dbece85c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x557dbecde430
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:        Options.write_buffer_size: 16777216
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:  Options.max_write_buffer_number: 64
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:          Options.compression: LZ4
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:       Options.prefix_extractor: nullptr
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:             Options.num_levels: 7
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                  Options.compression_opts.level: 32767
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:               Options.compression_opts.strategy: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                  Options.compression_opts.enabled: false
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                        Options.arena_block_size: 1048576
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                Options.disable_auto_compactions: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                   Options.inplace_update_support: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                           Options.bloom_locality: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                    Options.max_successive_merges: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                Options.paranoid_file_checks: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                Options.force_consistency_checks: 1
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                Options.report_bg_io_stats: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                               Options.ttl: 2592000
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                       Options.enable_blob_files: false
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                           Options.min_blob_size: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                          Options.blob_file_size: 268435456
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                Options.blob_file_starting_level: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:           Options.merge_operator: None
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:        Options.compaction_filter: None
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:        Options.compaction_filter_factory: None
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:  Options.sst_partitioner_factory: None
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557dbece85c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x557dbecde430
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:        Options.write_buffer_size: 16777216
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:  Options.max_write_buffer_number: 64
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:          Options.compression: LZ4
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:       Options.prefix_extractor: nullptr
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:             Options.num_levels: 7
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                  Options.compression_opts.level: 32767
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:               Options.compression_opts.strategy: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                  Options.compression_opts.enabled: false
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                        Options.arena_block_size: 1048576
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                Options.disable_auto_compactions: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                   Options.inplace_update_support: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                           Options.bloom_locality: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                    Options.max_successive_merges: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                Options.paranoid_file_checks: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                Options.force_consistency_checks: 1
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                Options.report_bg_io_stats: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                               Options.ttl: 2592000
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                       Options.enable_blob_files: false
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                           Options.min_blob_size: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                          Options.blob_file_size: 268435456
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb:                Options.blob_file_starting_level: 0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: e16c086c-0403-48f5-8de8-0b24deda1c99
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768917441958069, "job": 1, "event": "recovery_started", "wal_files": [31]}
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768917441958253, "job": 1, "event": "recovery_finished"}
Jan 20 13:57:21 compute-1 ceph-osd[79119]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Jan 20 13:57:21 compute-1 ceph-osd[79119]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old nid_max 1025
Jan 20 13:57:21 compute-1 ceph-osd[79119]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old blobid_max 10240
Jan 20 13:57:21 compute-1 ceph-osd[79119]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Jan 20 13:57:21 compute-1 ceph-osd[79119]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta min_alloc_size 0x1000
Jan 20 13:57:21 compute-1 ceph-osd[79119]: freelist init
Jan 20 13:57:21 compute-1 ceph-osd[79119]: freelist _read_cfg
Jan 20 13:57:21 compute-1 ceph-osd[79119]: bluestore(/var/lib/ceph/osd/ceph-1) _init_alloc loaded 7.0 GiB in 2 extents, allocator type hybrid, capacity 0x1bfc00000, block size 0x1000, free 0x1bfbfd000, fragmentation 5.5e-07
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Jan 20 13:57:21 compute-1 ceph-osd[79119]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Jan 20 13:57:21 compute-1 ceph-osd[79119]: bluefs umount
Jan 20 13:57:21 compute-1 ceph-osd[79119]: bdev(0x557dbfb2f400 /var/lib/ceph/osd/ceph-1/block) close
Jan 20 13:57:21 compute-1 systemd[1]: Started libpod-conmon-798485aa26b97fa02fd02e0443b78d91c203b27c2c6839d74176afd14ea270f2.scope.
Jan 20 13:57:22 compute-1 systemd[1]: Started libcrun container.
Jan 20 13:57:22 compute-1 podman[79279]: 2026-01-20 13:57:22.100265124 +0000 UTC m=+0.486344156 container init 798485aa26b97fa02fd02e0443b78d91c203b27c2c6839d74176afd14ea270f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_liskov, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 20 13:57:22 compute-1 podman[79279]: 2026-01-20 13:57:22.114308523 +0000 UTC m=+0.500387525 container start 798485aa26b97fa02fd02e0443b78d91c203b27c2c6839d74176afd14ea270f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_liskov, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 20 13:57:22 compute-1 podman[79279]: 2026-01-20 13:57:22.118415629 +0000 UTC m=+0.504494631 container attach 798485aa26b97fa02fd02e0443b78d91c203b27c2c6839d74176afd14ea270f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_liskov, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 13:57:22 compute-1 systemd[1]: libpod-798485aa26b97fa02fd02e0443b78d91c203b27c2c6839d74176afd14ea270f2.scope: Deactivated successfully.
Jan 20 13:57:22 compute-1 dazzling_liskov[79493]: 167 167
Jan 20 13:57:22 compute-1 podman[79279]: 2026-01-20 13:57:22.12621046 +0000 UTC m=+0.512289492 container died 798485aa26b97fa02fd02e0443b78d91c203b27c2c6839d74176afd14ea270f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_liskov, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Jan 20 13:57:22 compute-1 conmon[79493]: conmon 798485aa26b97fa02fd0 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-798485aa26b97fa02fd02e0443b78d91c203b27c2c6839d74176afd14ea270f2.scope/container/memory.events
Jan 20 13:57:22 compute-1 systemd[1]: var-lib-containers-storage-overlay-aa27d61ce42fb7930eae9fb413cebcc5e1a20b929b138fb96e278d428a39f94b-merged.mount: Deactivated successfully.
Jan 20 13:57:22 compute-1 podman[79279]: 2026-01-20 13:57:22.18330659 +0000 UTC m=+0.569385622 container remove 798485aa26b97fa02fd02e0443b78d91c203b27c2c6839d74176afd14ea270f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_liskov, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True)
Jan 20 13:57:22 compute-1 systemd[1]: libpod-conmon-798485aa26b97fa02fd02e0443b78d91c203b27c2c6839d74176afd14ea270f2.scope: Deactivated successfully.
Jan 20 13:57:22 compute-1 ceph-osd[79119]: bdev(0x557dbfb2f400 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 20 13:57:22 compute-1 ceph-osd[79119]: bdev(0x557dbfb2f400 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 20 13:57:22 compute-1 ceph-osd[79119]: bdev(0x557dbfb2f400 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 20 13:57:22 compute-1 ceph-osd[79119]: bdev(0x557dbfb2f400 /var/lib/ceph/osd/ceph-1/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 20 13:57:22 compute-1 ceph-osd[79119]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 7.0 GiB
Jan 20 13:57:22 compute-1 ceph-osd[79119]: bluefs mount
Jan 20 13:57:22 compute-1 ceph-osd[79119]: bluefs _init_alloc shared, id 1, capacity 0x1bfc00000, block size 0x10000
Jan 20 13:57:22 compute-1 ceph-osd[79119]: bluefs mount shared_bdev_used = 4718592
Jan 20 13:57:22 compute-1 ceph-osd[79119]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,7136398540 db.slow,7136398540
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: RocksDB version: 7.9.2
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: Git sha 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: Compile date 2025-05-06 23:30:25
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: DB SUMMARY
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: DB Session ID:  LTNCJ3XTV54YABYKDU5V
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: CURRENT file:  CURRENT
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: IDENTITY file:  IDENTITY
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                         Options.error_if_exists: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                       Options.create_if_missing: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                         Options.paranoid_checks: 1
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:             Options.flush_verify_memtable_count: 1
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                                     Options.env: 0x557dbed2a3f0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                                      Options.fs: LegacyFileSystem
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                                Options.info_log: 0x557dbecc5580
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                Options.max_file_opening_threads: 16
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                              Options.statistics: (nil)
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                               Options.use_fsync: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                       Options.max_log_file_size: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                   Options.log_file_time_to_roll: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                       Options.keep_log_file_num: 1000
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                    Options.recycle_log_file_num: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                         Options.allow_fallocate: 1
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                        Options.allow_mmap_reads: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                       Options.allow_mmap_writes: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                        Options.use_direct_reads: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:          Options.create_missing_column_families: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                              Options.db_log_dir: 
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                                 Options.wal_dir: db.wal
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                Options.table_cache_numshardbits: 6
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                         Options.WAL_ttl_seconds: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                       Options.WAL_size_limit_MB: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:             Options.manifest_preallocation_size: 4194304
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                     Options.is_fd_close_on_exec: 1
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                   Options.advise_random_on_open: 1
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                    Options.db_write_buffer_size: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                    Options.write_buffer_manager: 0x557dbfc08960
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:         Options.access_hint_on_compaction_start: 1
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                      Options.use_adaptive_mutex: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                            Options.rate_limiter: (nil)
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                       Options.wal_recovery_mode: 2
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                  Options.enable_thread_tracking: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                  Options.enable_pipelined_write: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                  Options.unordered_write: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:             Options.write_thread_max_yield_usec: 100
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                               Options.row_cache: None
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                              Options.wal_filter: None
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:             Options.avoid_flush_during_recovery: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:             Options.allow_ingest_behind: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:             Options.two_write_queues: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:             Options.manual_wal_flush: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:             Options.wal_compression: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:             Options.atomic_flush: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                 Options.persist_stats_to_disk: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                 Options.write_dbid_to_manifest: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                 Options.log_readahead_size: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                 Options.best_efforts_recovery: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:             Options.allow_data_in_errors: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:             Options.db_host_id: __hostname__
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:             Options.enforce_single_del_contracts: true
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:             Options.max_background_jobs: 4
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:             Options.max_background_compactions: -1
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:             Options.max_subcompactions: 1
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:           Options.writable_file_max_buffer_size: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:             Options.delayed_write_rate : 16777216
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:             Options.max_total_wal_size: 1073741824
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                   Options.stats_dump_period_sec: 600
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                 Options.stats_persist_period_sec: 600
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                          Options.max_open_files: -1
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                          Options.bytes_per_sync: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                      Options.wal_bytes_per_sync: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                   Options.strict_bytes_per_sync: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:       Options.compaction_readahead_size: 2097152
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                  Options.max_background_flushes: -1
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: Compression algorithms supported:
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:         kZSTD supported: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:         kXpressCompression supported: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:         kBZip2Compression supported: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:         kZSTDNotFinalCompression supported: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:         kLZ4Compression supported: 1
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:         kZlibCompression supported: 1
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:         kLZ4HCCompression supported: 1
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:         kSnappyCompression supported: 1
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: Fast CRC32 supported: Supported on x86
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: DMutex implementation: pthread_mutex_t
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:        Options.compaction_filter: None
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:        Options.compaction_filter_factory: None
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:  Options.sst_partitioner_factory: None
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557dbece9220)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x557dbecdef30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:        Options.write_buffer_size: 16777216
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:  Options.max_write_buffer_number: 64
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:          Options.compression: LZ4
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:       Options.prefix_extractor: nullptr
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:             Options.num_levels: 7
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                  Options.compression_opts.level: 32767
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:               Options.compression_opts.strategy: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                  Options.compression_opts.enabled: false
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                        Options.arena_block_size: 1048576
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                Options.disable_auto_compactions: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                   Options.inplace_update_support: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                           Options.bloom_locality: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                    Options.max_successive_merges: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                Options.paranoid_file_checks: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                Options.force_consistency_checks: 1
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                Options.report_bg_io_stats: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                               Options.ttl: 2592000
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                       Options.enable_blob_files: false
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                           Options.min_blob_size: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                          Options.blob_file_size: 268435456
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                Options.blob_file_starting_level: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:           Options.merge_operator: None
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:        Options.compaction_filter: None
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:        Options.compaction_filter_factory: None
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:  Options.sst_partitioner_factory: None
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557dbece9220)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x557dbecdef30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:        Options.write_buffer_size: 16777216
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:  Options.max_write_buffer_number: 64
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:          Options.compression: LZ4
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:       Options.prefix_extractor: nullptr
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:             Options.num_levels: 7
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                  Options.compression_opts.level: 32767
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:               Options.compression_opts.strategy: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                  Options.compression_opts.enabled: false
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                        Options.arena_block_size: 1048576
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                Options.disable_auto_compactions: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                   Options.inplace_update_support: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                           Options.bloom_locality: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                    Options.max_successive_merges: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                Options.paranoid_file_checks: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                Options.force_consistency_checks: 1
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                Options.report_bg_io_stats: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                               Options.ttl: 2592000
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                       Options.enable_blob_files: false
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                           Options.min_blob_size: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                          Options.blob_file_size: 268435456
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                Options.blob_file_starting_level: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:           Options.merge_operator: None
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:        Options.compaction_filter: None
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:        Options.compaction_filter_factory: None
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:  Options.sst_partitioner_factory: None
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557dbece9220)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x557dbecdef30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:        Options.write_buffer_size: 16777216
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:  Options.max_write_buffer_number: 64
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:          Options.compression: LZ4
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:       Options.prefix_extractor: nullptr
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:             Options.num_levels: 7
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                  Options.compression_opts.level: 32767
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:               Options.compression_opts.strategy: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                  Options.compression_opts.enabled: false
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                        Options.arena_block_size: 1048576
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                Options.disable_auto_compactions: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                   Options.inplace_update_support: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                           Options.bloom_locality: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                    Options.max_successive_merges: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                Options.paranoid_file_checks: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                Options.force_consistency_checks: 1
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                Options.report_bg_io_stats: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                               Options.ttl: 2592000
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                       Options.enable_blob_files: false
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                           Options.min_blob_size: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                          Options.blob_file_size: 268435456
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                Options.blob_file_starting_level: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:           Options.merge_operator: None
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:        Options.compaction_filter: None
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:        Options.compaction_filter_factory: None
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:  Options.sst_partitioner_factory: None
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557dbece9220)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x557dbecdef30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:        Options.write_buffer_size: 16777216
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:  Options.max_write_buffer_number: 64
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:          Options.compression: LZ4
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:       Options.prefix_extractor: nullptr
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:             Options.num_levels: 7
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                  Options.compression_opts.level: 32767
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:               Options.compression_opts.strategy: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                  Options.compression_opts.enabled: false
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                        Options.arena_block_size: 1048576
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                Options.disable_auto_compactions: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                   Options.inplace_update_support: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                           Options.bloom_locality: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                    Options.max_successive_merges: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                Options.paranoid_file_checks: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                Options.force_consistency_checks: 1
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                Options.report_bg_io_stats: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                               Options.ttl: 2592000
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                       Options.enable_blob_files: false
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                           Options.min_blob_size: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                          Options.blob_file_size: 268435456
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                Options.blob_file_starting_level: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:           Options.merge_operator: None
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:        Options.compaction_filter: None
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:        Options.compaction_filter_factory: None
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:  Options.sst_partitioner_factory: None
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557dbece9220)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x557dbecdef30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:        Options.write_buffer_size: 16777216
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:  Options.max_write_buffer_number: 64
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:          Options.compression: LZ4
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:       Options.prefix_extractor: nullptr
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:             Options.num_levels: 7
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                  Options.compression_opts.level: 32767
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:               Options.compression_opts.strategy: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                  Options.compression_opts.enabled: false
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                        Options.arena_block_size: 1048576
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                Options.disable_auto_compactions: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                   Options.inplace_update_support: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                           Options.bloom_locality: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                    Options.max_successive_merges: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                Options.paranoid_file_checks: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                Options.force_consistency_checks: 1
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                Options.report_bg_io_stats: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                               Options.ttl: 2592000
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                       Options.enable_blob_files: false
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                           Options.min_blob_size: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                          Options.blob_file_size: 268435456
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                Options.blob_file_starting_level: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:           Options.merge_operator: None
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:        Options.compaction_filter: None
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:        Options.compaction_filter_factory: None
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:  Options.sst_partitioner_factory: None
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557dbece9220)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x557dbecdef30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:        Options.write_buffer_size: 16777216
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:  Options.max_write_buffer_number: 64
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:          Options.compression: LZ4
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:       Options.prefix_extractor: nullptr
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:             Options.num_levels: 7
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                  Options.compression_opts.level: 32767
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:               Options.compression_opts.strategy: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                  Options.compression_opts.enabled: false
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                        Options.arena_block_size: 1048576
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                Options.disable_auto_compactions: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                   Options.inplace_update_support: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                           Options.bloom_locality: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                    Options.max_successive_merges: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                Options.paranoid_file_checks: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                Options.force_consistency_checks: 1
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                Options.report_bg_io_stats: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                               Options.ttl: 2592000
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                       Options.enable_blob_files: false
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                           Options.min_blob_size: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                          Options.blob_file_size: 268435456
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                Options.blob_file_starting_level: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:           Options.merge_operator: None
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:        Options.compaction_filter: None
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:        Options.compaction_filter_factory: None
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:  Options.sst_partitioner_factory: None
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557dbece9220)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x557dbecdef30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:        Options.write_buffer_size: 16777216
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:  Options.max_write_buffer_number: 64
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:          Options.compression: LZ4
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:       Options.prefix_extractor: nullptr
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:             Options.num_levels: 7
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                  Options.compression_opts.level: 32767
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:               Options.compression_opts.strategy: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                  Options.compression_opts.enabled: false
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                        Options.arena_block_size: 1048576
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                Options.disable_auto_compactions: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                   Options.inplace_update_support: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                           Options.bloom_locality: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                    Options.max_successive_merges: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                Options.paranoid_file_checks: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                Options.force_consistency_checks: 1
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                Options.report_bg_io_stats: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                               Options.ttl: 2592000
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                       Options.enable_blob_files: false
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                           Options.min_blob_size: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                          Options.blob_file_size: 268435456
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                Options.blob_file_starting_level: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:           Options.merge_operator: None
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:        Options.compaction_filter: None
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:        Options.compaction_filter_factory: None
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:  Options.sst_partitioner_factory: None
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557dbece9100)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x557dbecdf610
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:        Options.write_buffer_size: 16777216
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:  Options.max_write_buffer_number: 64
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:          Options.compression: LZ4
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:       Options.prefix_extractor: nullptr
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:             Options.num_levels: 7
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                  Options.compression_opts.level: 32767
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:               Options.compression_opts.strategy: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                  Options.compression_opts.enabled: false
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                        Options.arena_block_size: 1048576
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                Options.disable_auto_compactions: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                   Options.inplace_update_support: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                           Options.bloom_locality: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                    Options.max_successive_merges: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                Options.paranoid_file_checks: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                Options.force_consistency_checks: 1
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                Options.report_bg_io_stats: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                               Options.ttl: 2592000
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                       Options.enable_blob_files: false
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                           Options.min_blob_size: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                          Options.blob_file_size: 268435456
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                Options.blob_file_starting_level: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:           Options.merge_operator: None
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:        Options.compaction_filter: None
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:        Options.compaction_filter_factory: None
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:  Options.sst_partitioner_factory: None
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557dbece9100)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x557dbecdf610
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:        Options.write_buffer_size: 16777216
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:  Options.max_write_buffer_number: 64
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:          Options.compression: LZ4
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:       Options.prefix_extractor: nullptr
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:             Options.num_levels: 7
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                  Options.compression_opts.level: 32767
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:               Options.compression_opts.strategy: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                  Options.compression_opts.enabled: false
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                        Options.arena_block_size: 1048576
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                Options.disable_auto_compactions: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                   Options.inplace_update_support: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                           Options.bloom_locality: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                    Options.max_successive_merges: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                Options.paranoid_file_checks: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                Options.force_consistency_checks: 1
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                Options.report_bg_io_stats: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                               Options.ttl: 2592000
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                       Options.enable_blob_files: false
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                           Options.min_blob_size: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                          Options.blob_file_size: 268435456
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                Options.blob_file_starting_level: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:           Options.merge_operator: None
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:        Options.compaction_filter: None
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:        Options.compaction_filter_factory: None
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:  Options.sst_partitioner_factory: None
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557dbece9100)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x557dbecdf610
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:        Options.write_buffer_size: 16777216
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:  Options.max_write_buffer_number: 64
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:          Options.compression: LZ4
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:       Options.prefix_extractor: nullptr
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:             Options.num_levels: 7
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                  Options.compression_opts.level: 32767
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:               Options.compression_opts.strategy: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                  Options.compression_opts.enabled: false
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                        Options.arena_block_size: 1048576
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                Options.disable_auto_compactions: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                   Options.inplace_update_support: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                           Options.bloom_locality: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                    Options.max_successive_merges: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                Options.paranoid_file_checks: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                Options.force_consistency_checks: 1
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                Options.report_bg_io_stats: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                               Options.ttl: 2592000
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                       Options.enable_blob_files: false
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                           Options.min_blob_size: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                          Options.blob_file_size: 268435456
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb:                Options.blob_file_starting_level: 0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: e16c086c-0403-48f5-8de8-0b24deda1c99
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768917442238257, "job": 1, "event": "recovery_started", "wal_files": [31]}
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768917442245904, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1272, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768917442, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e16c086c-0403-48f5-8de8-0b24deda1c99", "db_session_id": "LTNCJ3XTV54YABYKDU5V", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768917442249300, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1594, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 468, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 567, "raw_average_value_size": 283, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768917442, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e16c086c-0403-48f5-8de8-0b24deda1c99", "db_session_id": "LTNCJ3XTV54YABYKDU5V", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768917442252771, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768917442, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e16c086c-0403-48f5-8de8-0b24deda1c99", "db_session_id": "LTNCJ3XTV54YABYKDU5V", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768917442254656, "job": 1, "event": "recovery_finished"}
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x557dbfa74700
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: DB pointer 0x557dbfbf1a00
Jan 20 13:57:22 compute-1 ceph-osd[79119]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Jan 20 13:57:22 compute-1 ceph-osd[79119]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super from 4, latest 4
Jan 20 13:57:22 compute-1 ceph-osd[79119]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super done
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 20 13:57:22 compute-1 ceph-osd[79119]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.008       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.008       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.008       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.008       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557dbecdef30#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557dbecdef30#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557dbecdef30#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557dbecdef30#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557dbecdef30#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557dbecdef30#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557dbecdef30#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557dbecdf610#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557dbecdf610#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557dbecdf610#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557dbecdef30#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557dbecdef30#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Jan 20 13:57:22 compute-1 ceph-osd[79119]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Jan 20 13:57:22 compute-1 ceph-osd[79119]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/hello/cls_hello.cc:316: loading cls_hello
Jan 20 13:57:22 compute-1 ceph-osd[79119]: _get_class not permitted to load lua
Jan 20 13:57:22 compute-1 ceph-osd[79119]: _get_class not permitted to load sdk
Jan 20 13:57:22 compute-1 ceph-osd[79119]: _get_class not permitted to load test_remote_reads
Jan 20 13:57:22 compute-1 ceph-osd[79119]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Jan 20 13:57:22 compute-1 ceph-osd[79119]: osd.1 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Jan 20 13:57:22 compute-1 ceph-osd[79119]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Jan 20 13:57:22 compute-1 ceph-osd[79119]: osd.1 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Jan 20 13:57:22 compute-1 ceph-osd[79119]: osd.1 0 load_pgs
Jan 20 13:57:22 compute-1 ceph-osd[79119]: osd.1 0 load_pgs opened 0 pgs
Jan 20 13:57:22 compute-1 ceph-osd[79119]: osd.1 0 log_to_monitors true
Jan 20 13:57:22 compute-1 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-osd-1[79115]: 2026-01-20T13:57:22.293+0000 7f3ee29c6740 -1 osd.1 0 log_to_monitors true
Jan 20 13:57:22 compute-1 podman[79732]: 2026-01-20 13:57:22.388208732 +0000 UTC m=+0.047942471 container create 33aecd00f9cdb8c73d0b4d1d314f72dd210e22b92d3d74f7dde96166c717d143 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_shtern, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 20 13:57:22 compute-1 systemd[1]: Started libpod-conmon-33aecd00f9cdb8c73d0b4d1d314f72dd210e22b92d3d74f7dde96166c717d143.scope.
Jan 20 13:57:22 compute-1 systemd[1]: Started libcrun container.
Jan 20 13:57:22 compute-1 podman[79732]: 2026-01-20 13:57:22.368539254 +0000 UTC m=+0.028273013 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 13:57:22 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc44f2cc50956cf4157c958606b473f20838a87b3354e637707830180568cc3f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 13:57:22 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc44f2cc50956cf4157c958606b473f20838a87b3354e637707830180568cc3f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 13:57:22 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc44f2cc50956cf4157c958606b473f20838a87b3354e637707830180568cc3f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 13:57:22 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc44f2cc50956cf4157c958606b473f20838a87b3354e637707830180568cc3f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 13:57:22 compute-1 podman[79732]: 2026-01-20 13:57:22.479314896 +0000 UTC m=+0.139048625 container init 33aecd00f9cdb8c73d0b4d1d314f72dd210e22b92d3d74f7dde96166c717d143 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_shtern, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 13:57:22 compute-1 podman[79732]: 2026-01-20 13:57:22.485231064 +0000 UTC m=+0.144964783 container start 33aecd00f9cdb8c73d0b4d1d314f72dd210e22b92d3d74f7dde96166c717d143 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_shtern, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 13:57:22 compute-1 podman[79732]: 2026-01-20 13:57:22.488136767 +0000 UTC m=+0.147870516 container attach 33aecd00f9cdb8c73d0b4d1d314f72dd210e22b92d3d74f7dde96166c717d143 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_shtern, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 13:57:23 compute-1 ceph-osd[79119]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Jan 20 13:57:23 compute-1 ceph-osd[79119]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Jan 20 13:57:23 compute-1 silly_shtern[79749]: {
Jan 20 13:57:23 compute-1 silly_shtern[79749]:     "562c52e7-0678-4614-81fd-9a9eecf7d0f9": {
Jan 20 13:57:23 compute-1 silly_shtern[79749]:         "ceph_fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 13:57:23 compute-1 silly_shtern[79749]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 20 13:57:23 compute-1 silly_shtern[79749]:         "osd_id": 1,
Jan 20 13:57:23 compute-1 silly_shtern[79749]:         "osd_uuid": "562c52e7-0678-4614-81fd-9a9eecf7d0f9",
Jan 20 13:57:23 compute-1 silly_shtern[79749]:         "type": "bluestore"
Jan 20 13:57:23 compute-1 silly_shtern[79749]:     }
Jan 20 13:57:23 compute-1 silly_shtern[79749]: }
Jan 20 13:57:23 compute-1 systemd[1]: libpod-33aecd00f9cdb8c73d0b4d1d314f72dd210e22b92d3d74f7dde96166c717d143.scope: Deactivated successfully.
Jan 20 13:57:23 compute-1 podman[79732]: 2026-01-20 13:57:23.4566774 +0000 UTC m=+1.116411139 container died 33aecd00f9cdb8c73d0b4d1d314f72dd210e22b92d3d74f7dde96166c717d143 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_shtern, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Jan 20 13:57:23 compute-1 systemd[1]: var-lib-containers-storage-overlay-fc44f2cc50956cf4157c958606b473f20838a87b3354e637707830180568cc3f-merged.mount: Deactivated successfully.
Jan 20 13:57:23 compute-1 ceph-osd[79119]: osd.1 0 done with init, starting boot process
Jan 20 13:57:23 compute-1 ceph-osd[79119]: osd.1 0 start_boot
Jan 20 13:57:23 compute-1 ceph-osd[79119]: osd.1 0 maybe_override_options_for_qos osd_max_backfills set to 1
Jan 20 13:57:23 compute-1 ceph-osd[79119]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Jan 20 13:57:23 compute-1 ceph-osd[79119]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Jan 20 13:57:23 compute-1 ceph-osd[79119]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Jan 20 13:57:23 compute-1 ceph-osd[79119]: osd.1 0  bench count 12288000 bsize 4 KiB
Jan 20 13:57:23 compute-1 podman[79732]: 2026-01-20 13:57:23.714557565 +0000 UTC m=+1.374291284 container remove 33aecd00f9cdb8c73d0b4d1d314f72dd210e22b92d3d74f7dde96166c717d143 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_shtern, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 20 13:57:23 compute-1 systemd[1]: libpod-conmon-33aecd00f9cdb8c73d0b4d1d314f72dd210e22b92d3d74f7dde96166c717d143.scope: Deactivated successfully.
Jan 20 13:57:23 compute-1 sudo[79209]: pam_unix(sudo:session): session closed for user root
Jan 20 13:57:24 compute-1 sudo[79786]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:57:24 compute-1 sudo[79786]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:57:24 compute-1 sudo[79786]: pam_unix(sudo:session): session closed for user root
Jan 20 13:57:24 compute-1 sudo[79811]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 13:57:24 compute-1 sudo[79811]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:57:24 compute-1 sudo[79811]: pam_unix(sudo:session): session closed for user root
Jan 20 13:57:24 compute-1 sudo[79836]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:57:24 compute-1 sudo[79836]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:57:24 compute-1 sudo[79836]: pam_unix(sudo:session): session closed for user root
Jan 20 13:57:24 compute-1 sudo[79861]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 13:57:24 compute-1 sudo[79861]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:57:24 compute-1 sudo[79861]: pam_unix(sudo:session): session closed for user root
Jan 20 13:57:24 compute-1 sudo[79886]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:57:24 compute-1 sudo[79886]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:57:24 compute-1 sudo[79886]: pam_unix(sudo:session): session closed for user root
Jan 20 13:57:24 compute-1 sudo[79911]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Jan 20 13:57:24 compute-1 sudo[79911]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:57:25 compute-1 podman[80002]: 2026-01-20 13:57:25.214240175 +0000 UTC m=+0.106401932 container exec 718ebba7a543e42aad7051248d2c7dc014068c35c89c5b87f27b82d4de39c009 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-crash-compute-1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 20 13:57:25 compute-1 podman[80002]: 2026-01-20 13:57:25.446621212 +0000 UTC m=+0.338783049 container exec_died 718ebba7a543e42aad7051248d2c7dc014068c35c89c5b87f27b82d4de39c009 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-crash-compute-1, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Jan 20 13:57:25 compute-1 sudo[79911]: pam_unix(sudo:session): session closed for user root
Jan 20 13:57:25 compute-1 sudo[80049]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:57:25 compute-1 sudo[80049]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:57:25 compute-1 sudo[80049]: pam_unix(sudo:session): session closed for user root
Jan 20 13:57:25 compute-1 sudo[80074]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 13:57:25 compute-1 sudo[80074]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:57:25 compute-1 sudo[80074]: pam_unix(sudo:session): session closed for user root
Jan 20 13:57:25 compute-1 sudo[80099]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:57:25 compute-1 sudo[80099]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:57:25 compute-1 sudo[80099]: pam_unix(sudo:session): session closed for user root
Jan 20 13:57:25 compute-1 sudo[80124]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- inventory --format=json-pretty --filter-for-batch
Jan 20 13:57:25 compute-1 sudo[80124]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:57:26 compute-1 podman[80189]: 2026-01-20 13:57:26.314987175 +0000 UTC m=+0.068284139 container create e7f91c176f0fd720cd174799d4ab365e48a5b9f6b5e036907334a14954598b20 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_villani, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 13:57:26 compute-1 podman[80189]: 2026-01-20 13:57:26.272742976 +0000 UTC m=+0.026039950 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 13:57:26 compute-1 systemd[1]: Started libpod-conmon-e7f91c176f0fd720cd174799d4ab365e48a5b9f6b5e036907334a14954598b20.scope.
Jan 20 13:57:26 compute-1 systemd[1]: Started libcrun container.
Jan 20 13:57:26 compute-1 podman[80189]: 2026-01-20 13:57:26.466557794 +0000 UTC m=+0.219854748 container init e7f91c176f0fd720cd174799d4ab365e48a5b9f6b5e036907334a14954598b20 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_villani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 13:57:26 compute-1 podman[80189]: 2026-01-20 13:57:26.47468076 +0000 UTC m=+0.227977724 container start e7f91c176f0fd720cd174799d4ab365e48a5b9f6b5e036907334a14954598b20 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_villani, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 20 13:57:26 compute-1 quirky_villani[80205]: 167 167
Jan 20 13:57:26 compute-1 systemd[1]: libpod-e7f91c176f0fd720cd174799d4ab365e48a5b9f6b5e036907334a14954598b20.scope: Deactivated successfully.
Jan 20 13:57:26 compute-1 podman[80189]: 2026-01-20 13:57:26.496536322 +0000 UTC m=+0.249833276 container attach e7f91c176f0fd720cd174799d4ab365e48a5b9f6b5e036907334a14954598b20 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_villani, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 20 13:57:26 compute-1 podman[80189]: 2026-01-20 13:57:26.497483481 +0000 UTC m=+0.250780435 container died e7f91c176f0fd720cd174799d4ab365e48a5b9f6b5e036907334a14954598b20 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_villani, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507)
Jan 20 13:57:26 compute-1 systemd[1]: var-lib-containers-storage-overlay-e26c9cf8a570f8bf28367340bf367f599f908855fc65d27a1322f39c22c0f916-merged.mount: Deactivated successfully.
Jan 20 13:57:26 compute-1 podman[80189]: 2026-01-20 13:57:26.634121308 +0000 UTC m=+0.387418252 container remove e7f91c176f0fd720cd174799d4ab365e48a5b9f6b5e036907334a14954598b20 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_villani, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 13:57:26 compute-1 systemd[1]: libpod-conmon-e7f91c176f0fd720cd174799d4ab365e48a5b9f6b5e036907334a14954598b20.scope: Deactivated successfully.
Jan 20 13:57:26 compute-1 podman[80229]: 2026-01-20 13:57:26.881270831 +0000 UTC m=+0.087651385 container create 38e2468afbccf13d3bb90ba3cb65e5496f86df5a736b1897f6b36725d16bd384 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_colden, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 13:57:26 compute-1 podman[80229]: 2026-01-20 13:57:26.836153965 +0000 UTC m=+0.042534569 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 13:57:26 compute-1 systemd[1]: Started libpod-conmon-38e2468afbccf13d3bb90ba3cb65e5496f86df5a736b1897f6b36725d16bd384.scope.
Jan 20 13:57:26 compute-1 systemd[1]: Started libcrun container.
Jan 20 13:57:26 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9121cbe8dd5a48191ea7c7e128941aa9f7507f60ecfcce118ba8b98c01480a26/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 13:57:26 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9121cbe8dd5a48191ea7c7e128941aa9f7507f60ecfcce118ba8b98c01480a26/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 13:57:26 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9121cbe8dd5a48191ea7c7e128941aa9f7507f60ecfcce118ba8b98c01480a26/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 13:57:26 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9121cbe8dd5a48191ea7c7e128941aa9f7507f60ecfcce118ba8b98c01480a26/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 13:57:27 compute-1 podman[80229]: 2026-01-20 13:57:27.005251145 +0000 UTC m=+0.211631789 container init 38e2468afbccf13d3bb90ba3cb65e5496f86df5a736b1897f6b36725d16bd384 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_colden, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True)
Jan 20 13:57:27 compute-1 podman[80229]: 2026-01-20 13:57:27.017328041 +0000 UTC m=+0.223708625 container start 38e2468afbccf13d3bb90ba3cb65e5496f86df5a736b1897f6b36725d16bd384 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_colden, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Jan 20 13:57:27 compute-1 podman[80229]: 2026-01-20 13:57:27.039467281 +0000 UTC m=+0.245847895 container attach 38e2468afbccf13d3bb90ba3cb65e5496f86df5a736b1897f6b36725d16bd384 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_colden, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 13:57:28 compute-1 infallible_colden[80245]: [
Jan 20 13:57:28 compute-1 infallible_colden[80245]:     {
Jan 20 13:57:28 compute-1 infallible_colden[80245]:         "available": false,
Jan 20 13:57:28 compute-1 infallible_colden[80245]:         "ceph_device": false,
Jan 20 13:57:28 compute-1 infallible_colden[80245]:         "device_id": "QEMU_DVD-ROM_QM00001",
Jan 20 13:57:28 compute-1 infallible_colden[80245]:         "lsm_data": {},
Jan 20 13:57:28 compute-1 infallible_colden[80245]:         "lvs": [],
Jan 20 13:57:28 compute-1 infallible_colden[80245]:         "path": "/dev/sr0",
Jan 20 13:57:28 compute-1 infallible_colden[80245]:         "rejected_reasons": [
Jan 20 13:57:28 compute-1 infallible_colden[80245]:             "Insufficient space (<5GB)",
Jan 20 13:57:28 compute-1 infallible_colden[80245]:             "Has a FileSystem"
Jan 20 13:57:28 compute-1 infallible_colden[80245]:         ],
Jan 20 13:57:28 compute-1 infallible_colden[80245]:         "sys_api": {
Jan 20 13:57:28 compute-1 infallible_colden[80245]:             "actuators": null,
Jan 20 13:57:28 compute-1 infallible_colden[80245]:             "device_nodes": "sr0",
Jan 20 13:57:28 compute-1 infallible_colden[80245]:             "devname": "sr0",
Jan 20 13:57:28 compute-1 infallible_colden[80245]:             "human_readable_size": "482.00 KB",
Jan 20 13:57:28 compute-1 infallible_colden[80245]:             "id_bus": "ata",
Jan 20 13:57:28 compute-1 infallible_colden[80245]:             "model": "QEMU DVD-ROM",
Jan 20 13:57:28 compute-1 infallible_colden[80245]:             "nr_requests": "2",
Jan 20 13:57:28 compute-1 infallible_colden[80245]:             "parent": "/dev/sr0",
Jan 20 13:57:28 compute-1 infallible_colden[80245]:             "partitions": {},
Jan 20 13:57:28 compute-1 infallible_colden[80245]:             "path": "/dev/sr0",
Jan 20 13:57:28 compute-1 infallible_colden[80245]:             "removable": "1",
Jan 20 13:57:28 compute-1 infallible_colden[80245]:             "rev": "2.5+",
Jan 20 13:57:28 compute-1 infallible_colden[80245]:             "ro": "0",
Jan 20 13:57:28 compute-1 infallible_colden[80245]:             "rotational": "1",
Jan 20 13:57:28 compute-1 infallible_colden[80245]:             "sas_address": "",
Jan 20 13:57:28 compute-1 infallible_colden[80245]:             "sas_device_handle": "",
Jan 20 13:57:28 compute-1 infallible_colden[80245]:             "scheduler_mode": "mq-deadline",
Jan 20 13:57:28 compute-1 infallible_colden[80245]:             "sectors": 0,
Jan 20 13:57:28 compute-1 infallible_colden[80245]:             "sectorsize": "2048",
Jan 20 13:57:28 compute-1 infallible_colden[80245]:             "size": 493568.0,
Jan 20 13:57:28 compute-1 infallible_colden[80245]:             "support_discard": "2048",
Jan 20 13:57:28 compute-1 infallible_colden[80245]:             "type": "disk",
Jan 20 13:57:28 compute-1 infallible_colden[80245]:             "vendor": "QEMU"
Jan 20 13:57:28 compute-1 infallible_colden[80245]:         }
Jan 20 13:57:28 compute-1 infallible_colden[80245]:     }
Jan 20 13:57:28 compute-1 infallible_colden[80245]: ]
Jan 20 13:57:28 compute-1 systemd[1]: libpod-38e2468afbccf13d3bb90ba3cb65e5496f86df5a736b1897f6b36725d16bd384.scope: Deactivated successfully.
Jan 20 13:57:28 compute-1 systemd[1]: libpod-38e2468afbccf13d3bb90ba3cb65e5496f86df5a736b1897f6b36725d16bd384.scope: Consumed 1.252s CPU time.
Jan 20 13:57:28 compute-1 conmon[80245]: conmon 38e2468afbccf13d3bb9 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-38e2468afbccf13d3bb90ba3cb65e5496f86df5a736b1897f6b36725d16bd384.scope/container/memory.events
Jan 20 13:57:28 compute-1 podman[80229]: 2026-01-20 13:57:28.271925748 +0000 UTC m=+1.478306282 container died 38e2468afbccf13d3bb90ba3cb65e5496f86df5a736b1897f6b36725d16bd384 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_colden, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Jan 20 13:57:28 compute-1 systemd[1]: var-lib-containers-storage-overlay-9121cbe8dd5a48191ea7c7e128941aa9f7507f60ecfcce118ba8b98c01480a26-merged.mount: Deactivated successfully.
Jan 20 13:57:28 compute-1 podman[80229]: 2026-01-20 13:57:28.513949166 +0000 UTC m=+1.720329690 container remove 38e2468afbccf13d3bb90ba3cb65e5496f86df5a736b1897f6b36725d16bd384 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_colden, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 13:57:28 compute-1 systemd[1]: libpod-conmon-38e2468afbccf13d3bb90ba3cb65e5496f86df5a736b1897f6b36725d16bd384.scope: Deactivated successfully.
Jan 20 13:57:28 compute-1 sudo[80124]: pam_unix(sudo:session): session closed for user root
Jan 20 13:57:29 compute-1 ceph-osd[79119]: osd.1 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 10.625 iops: 2720.012 elapsed_sec: 1.103
Jan 20 13:57:29 compute-1 ceph-osd[79119]: log_channel(cluster) log [WRN] : OSD bench result of 2720.011715 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Jan 20 13:57:29 compute-1 ceph-osd[79119]: osd.1 0 waiting for initial osdmap
Jan 20 13:57:29 compute-1 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-osd-1[79115]: 2026-01-20T13:57:29.878+0000 7f3edf15d640 -1 osd.1 0 waiting for initial osdmap
Jan 20 13:57:29 compute-1 ceph-osd[79119]: osd.1 11 crush map has features 288514051259236352, adjusting msgr requires for clients
Jan 20 13:57:29 compute-1 ceph-osd[79119]: osd.1 11 crush map has features 288514051259236352 was 288232575208792577, adjusting msgr requires for mons
Jan 20 13:57:29 compute-1 ceph-osd[79119]: osd.1 11 crush map has features 3314933000852226048, adjusting msgr requires for osds
Jan 20 13:57:29 compute-1 ceph-osd[79119]: osd.1 11 check_osdmap_features require_osd_release unknown -> reef
Jan 20 13:57:29 compute-1 ceph-osd[79119]: osd.1 11 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Jan 20 13:57:29 compute-1 ceph-osd[79119]: osd.1 11 set_numa_affinity not setting numa affinity
Jan 20 13:57:29 compute-1 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-osd-1[79115]: 2026-01-20T13:57:29.905+0000 7f3ed9f6e640 -1 osd.1 11 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Jan 20 13:57:29 compute-1 ceph-osd[79119]: osd.1 11 _collect_metadata loop3:  no unique device id for loop3: fallback method has no model nor serial
Jan 20 13:57:30 compute-1 ceph-osd[79119]: osd.1 12 state: booting -> active
Jan 20 13:57:30 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 12 pg[1.0( empty local-lis/les=0/0 n=0 ec=10/10 lis/c=0/0 les/c/f=0/0/0 sis=12) [1] r=0 lpr=12 pi=[10,12)/0 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:57:31 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 13 pg[1.0( empty local-lis/les=12/13 n=0 ec=10/10 lis/c=0/0 les/c/f=0/0/0 sis=12) [1] r=0 lpr=12 pi=[10,12)/0 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:57:38 compute-1 sshd-session[81398]: Invalid user admin from 116.99.171.211 port 43672
Jan 20 13:57:38 compute-1 sshd-session[81398]: Connection closed by invalid user admin 116.99.171.211 port 43672 [preauth]
Jan 20 13:57:41 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 15 pg[2.0( empty local-lis/les=0/0 n=0 ec=15/15 lis/c=0/0 les/c/f=0/0/0 sis=15) [1] r=0 lpr=15 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:57:41 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 16 pg[2.0( empty local-lis/les=15/16 n=0 ec=15/15 lis/c=0/0 les/c/f=0/0/0 sis=15) [1] r=0 lpr=15 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:57:50 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 24 pg[7.0( empty local-lis/les=0/0 n=0 ec=24/24 lis/c=0/0 les/c/f=0/0/0 sis=24) [1] r=0 lpr=24 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:57:51 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 25 pg[7.0( empty local-lis/les=24/25 n=0 ec=24/24 lis/c=0/0 les/c/f=0/0/0 sis=24) [1] r=0 lpr=24 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:57:52 compute-1 sudo[81400]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:57:52 compute-1 sudo[81400]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:57:52 compute-1 sudo[81400]: pam_unix(sudo:session): session closed for user root
Jan 20 13:57:52 compute-1 sudo[81425]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 13:57:52 compute-1 sudo[81425]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:57:52 compute-1 sudo[81425]: pam_unix(sudo:session): session closed for user root
Jan 20 13:57:52 compute-1 sudo[81450]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:57:52 compute-1 sudo[81450]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:57:52 compute-1 sudo[81450]: pam_unix(sudo:session): session closed for user root
Jan 20 13:57:52 compute-1 sudo[81475]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid e399cf45-e6b6-5393-99f1-75c601d3f188
Jan 20 13:57:52 compute-1 sudo[81475]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:57:52 compute-1 podman[81542]: 2026-01-20 13:57:52.961516748 +0000 UTC m=+0.067940058 container create 9e740ffc97a6a2f8d1bea6342169cf035c9f22d01cf13ad54da1db984ea20f66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_germain, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 13:57:53 compute-1 systemd[1]: Started libpod-conmon-9e740ffc97a6a2f8d1bea6342169cf035c9f22d01cf13ad54da1db984ea20f66.scope.
Jan 20 13:57:53 compute-1 podman[81542]: 2026-01-20 13:57:52.931388006 +0000 UTC m=+0.037811336 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 13:57:53 compute-1 systemd[1]: Started libcrun container.
Jan 20 13:57:53 compute-1 podman[81542]: 2026-01-20 13:57:53.056897656 +0000 UTC m=+0.163320996 container init 9e740ffc97a6a2f8d1bea6342169cf035c9f22d01cf13ad54da1db984ea20f66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_germain, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 13:57:53 compute-1 podman[81542]: 2026-01-20 13:57:53.067789056 +0000 UTC m=+0.174212346 container start 9e740ffc97a6a2f8d1bea6342169cf035c9f22d01cf13ad54da1db984ea20f66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_germain, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Jan 20 13:57:53 compute-1 podman[81542]: 2026-01-20 13:57:53.072739225 +0000 UTC m=+0.179162555 container attach 9e740ffc97a6a2f8d1bea6342169cf035c9f22d01cf13ad54da1db984ea20f66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_germain, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3)
Jan 20 13:57:53 compute-1 strange_germain[81559]: 167 167
Jan 20 13:57:53 compute-1 systemd[1]: libpod-9e740ffc97a6a2f8d1bea6342169cf035c9f22d01cf13ad54da1db984ea20f66.scope: Deactivated successfully.
Jan 20 13:57:53 compute-1 conmon[81559]: conmon 9e740ffc97a6a2f8d1be <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9e740ffc97a6a2f8d1bea6342169cf035c9f22d01cf13ad54da1db984ea20f66.scope/container/memory.events
Jan 20 13:57:53 compute-1 podman[81542]: 2026-01-20 13:57:53.078709086 +0000 UTC m=+0.185132386 container died 9e740ffc97a6a2f8d1bea6342169cf035c9f22d01cf13ad54da1db984ea20f66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_germain, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 13:57:53 compute-1 systemd[1]: var-lib-containers-storage-overlay-e0651f5915327692c934c5689e2313c3d0ee64d529bdf0a53231a0316b2272c3-merged.mount: Deactivated successfully.
Jan 20 13:57:53 compute-1 podman[81542]: 2026-01-20 13:57:53.116358606 +0000 UTC m=+0.222781906 container remove 9e740ffc97a6a2f8d1bea6342169cf035c9f22d01cf13ad54da1db984ea20f66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_germain, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 13:57:53 compute-1 systemd[1]: libpod-conmon-9e740ffc97a6a2f8d1bea6342169cf035c9f22d01cf13ad54da1db984ea20f66.scope: Deactivated successfully.
Jan 20 13:57:53 compute-1 podman[81578]: 2026-01-20 13:57:53.211394103 +0000 UTC m=+0.056824661 container create b1154acdfe9322b30562b7514dad21020ee3d67e2ff9017f43d4c218c0cc2b42 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_gauss, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 13:57:53 compute-1 systemd[1]: Started libpod-conmon-b1154acdfe9322b30562b7514dad21020ee3d67e2ff9017f43d4c218c0cc2b42.scope.
Jan 20 13:57:53 compute-1 podman[81578]: 2026-01-20 13:57:53.18354353 +0000 UTC m=+0.028974118 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 13:57:53 compute-1 systemd[1]: Started libcrun container.
Jan 20 13:57:53 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32b20a74809fa1ea554ad9b2cee83694d9e117d940cc96de9346b5a97cb5fae8/merged/tmp/keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 13:57:53 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32b20a74809fa1ea554ad9b2cee83694d9e117d940cc96de9346b5a97cb5fae8/merged/tmp/config supports timestamps until 2038 (0x7fffffff)
Jan 20 13:57:53 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32b20a74809fa1ea554ad9b2cee83694d9e117d940cc96de9346b5a97cb5fae8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 13:57:53 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32b20a74809fa1ea554ad9b2cee83694d9e117d940cc96de9346b5a97cb5fae8/merged/var/lib/ceph/mon/ceph-compute-1 supports timestamps until 2038 (0x7fffffff)
Jan 20 13:57:53 compute-1 podman[81578]: 2026-01-20 13:57:53.31368061 +0000 UTC m=+0.159111198 container init b1154acdfe9322b30562b7514dad21020ee3d67e2ff9017f43d4c218c0cc2b42 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_gauss, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 20 13:57:53 compute-1 podman[81578]: 2026-01-20 13:57:53.32195537 +0000 UTC m=+0.167385928 container start b1154acdfe9322b30562b7514dad21020ee3d67e2ff9017f43d4c218c0cc2b42 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_gauss, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0)
Jan 20 13:57:53 compute-1 podman[81578]: 2026-01-20 13:57:53.326000053 +0000 UTC m=+0.171430641 container attach b1154acdfe9322b30562b7514dad21020ee3d67e2ff9017f43d4c218c0cc2b42 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_gauss, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 13:57:53 compute-1 systemd[1]: libpod-b1154acdfe9322b30562b7514dad21020ee3d67e2ff9017f43d4c218c0cc2b42.scope: Deactivated successfully.
Jan 20 13:57:53 compute-1 podman[81578]: 2026-01-20 13:57:53.445361827 +0000 UTC m=+0.290792395 container died b1154acdfe9322b30562b7514dad21020ee3d67e2ff9017f43d4c218c0cc2b42 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_gauss, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 13:57:53 compute-1 systemd[1]: var-lib-containers-storage-overlay-32b20a74809fa1ea554ad9b2cee83694d9e117d940cc96de9346b5a97cb5fae8-merged.mount: Deactivated successfully.
Jan 20 13:57:53 compute-1 podman[81578]: 2026-01-20 13:57:53.500840027 +0000 UTC m=+0.346270575 container remove b1154acdfe9322b30562b7514dad21020ee3d67e2ff9017f43d4c218c0cc2b42 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_gauss, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 13:57:53 compute-1 systemd[1]: libpod-conmon-b1154acdfe9322b30562b7514dad21020ee3d67e2ff9017f43d4c218c0cc2b42.scope: Deactivated successfully.
Jan 20 13:57:53 compute-1 systemd[1]: Reloading.
Jan 20 13:57:53 compute-1 systemd-rc-local-generator[81659]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 13:57:53 compute-1 systemd-sysv-generator[81663]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 13:57:53 compute-1 systemd[1]: Reloading.
Jan 20 13:57:53 compute-1 systemd-rc-local-generator[81699]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 13:57:53 compute-1 systemd-sysv-generator[81703]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 13:57:54 compute-1 systemd[1]: Starting Ceph mon.compute-1 for e399cf45-e6b6-5393-99f1-75c601d3f188...
Jan 20 13:57:54 compute-1 podman[81756]: 2026-01-20 13:57:54.361246199 +0000 UTC m=+0.065608437 container create 8b3e7cd2a573119376e59e1274d49f28e2da731dd446d02544dfe881b8843c0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mon-compute-1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Jan 20 13:57:54 compute-1 podman[81756]: 2026-01-20 13:57:54.326544698 +0000 UTC m=+0.030906976 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 13:57:54 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42312ecb659a40f1b9c6a0c8ca8b6b71245feb003a67440257ddd2c64cf3289a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 13:57:54 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42312ecb659a40f1b9c6a0c8ca8b6b71245feb003a67440257ddd2c64cf3289a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 13:57:54 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42312ecb659a40f1b9c6a0c8ca8b6b71245feb003a67440257ddd2c64cf3289a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 13:57:54 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42312ecb659a40f1b9c6a0c8ca8b6b71245feb003a67440257ddd2c64cf3289a/merged/var/lib/ceph/mon/ceph-compute-1 supports timestamps until 2038 (0x7fffffff)
Jan 20 13:57:54 compute-1 podman[81756]: 2026-01-20 13:57:54.457475103 +0000 UTC m=+0.161837371 container init 8b3e7cd2a573119376e59e1274d49f28e2da731dd446d02544dfe881b8843c0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mon-compute-1, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 13:57:54 compute-1 podman[81756]: 2026-01-20 13:57:54.468979361 +0000 UTC m=+0.173341589 container start 8b3e7cd2a573119376e59e1274d49f28e2da731dd446d02544dfe881b8843c0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mon-compute-1, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Jan 20 13:57:54 compute-1 bash[81756]: 8b3e7cd2a573119376e59e1274d49f28e2da731dd446d02544dfe881b8843c0e
Jan 20 13:57:54 compute-1 systemd[1]: Started Ceph mon.compute-1 for e399cf45-e6b6-5393-99f1-75c601d3f188.
Jan 20 13:57:54 compute-1 ceph-mon[81775]: set uid:gid to 167:167 (ceph:ceph)
Jan 20 13:57:54 compute-1 ceph-mon[81775]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mon, pid 2
Jan 20 13:57:54 compute-1 ceph-mon[81775]: pidfile_write: ignore empty --pid-file
Jan 20 13:57:54 compute-1 ceph-mon[81775]: load: jerasure load: lrc 
Jan 20 13:57:54 compute-1 ceph-mon[81775]: rocksdb: RocksDB version: 7.9.2
Jan 20 13:57:54 compute-1 ceph-mon[81775]: rocksdb: Git sha 0
Jan 20 13:57:54 compute-1 ceph-mon[81775]: rocksdb: Compile date 2025-05-06 23:30:25
Jan 20 13:57:54 compute-1 ceph-mon[81775]: rocksdb: DB SUMMARY
Jan 20 13:57:54 compute-1 ceph-mon[81775]: rocksdb: DB Session ID:  LFF7G2OZDOU7TKQ8MKAH
Jan 20 13:57:54 compute-1 ceph-mon[81775]: rocksdb: CURRENT file:  CURRENT
Jan 20 13:57:54 compute-1 ceph-mon[81775]: rocksdb: IDENTITY file:  IDENTITY
Jan 20 13:57:54 compute-1 ceph-mon[81775]: rocksdb: MANIFEST file:  MANIFEST-000005 size: 59 Bytes
Jan 20 13:57:54 compute-1 ceph-mon[81775]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-1/store.db dir, Total Num: 0, files: 
Jan 20 13:57:54 compute-1 ceph-mon[81775]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-1/store.db: 000004.log size: 511 ; 
Jan 20 13:57:54 compute-1 ceph-mon[81775]: rocksdb:                         Options.error_if_exists: 0
Jan 20 13:57:54 compute-1 ceph-mon[81775]: rocksdb:                       Options.create_if_missing: 0
Jan 20 13:57:54 compute-1 ceph-mon[81775]: rocksdb:                         Options.paranoid_checks: 1
Jan 20 13:57:54 compute-1 ceph-mon[81775]: rocksdb:             Options.flush_verify_memtable_count: 1
Jan 20 13:57:54 compute-1 ceph-mon[81775]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Jan 20 13:57:54 compute-1 ceph-mon[81775]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Jan 20 13:57:54 compute-1 ceph-mon[81775]: rocksdb:                                     Options.env: 0x564d4f6bec40
Jan 20 13:57:54 compute-1 ceph-mon[81775]: rocksdb:                                      Options.fs: PosixFileSystem
Jan 20 13:57:54 compute-1 ceph-mon[81775]: rocksdb:                                Options.info_log: 0x564d515aefc0
Jan 20 13:57:54 compute-1 ceph-mon[81775]: rocksdb:                Options.max_file_opening_threads: 16
Jan 20 13:57:54 compute-1 ceph-mon[81775]: rocksdb:                              Options.statistics: (nil)
Jan 20 13:57:54 compute-1 ceph-mon[81775]: rocksdb:                               Options.use_fsync: 0
Jan 20 13:57:54 compute-1 ceph-mon[81775]: rocksdb:                       Options.max_log_file_size: 0
Jan 20 13:57:54 compute-1 ceph-mon[81775]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Jan 20 13:57:54 compute-1 ceph-mon[81775]: rocksdb:                   Options.log_file_time_to_roll: 0
Jan 20 13:57:54 compute-1 ceph-mon[81775]: rocksdb:                       Options.keep_log_file_num: 1000
Jan 20 13:57:54 compute-1 ceph-mon[81775]: rocksdb:                    Options.recycle_log_file_num: 0
Jan 20 13:57:54 compute-1 ceph-mon[81775]: rocksdb:                         Options.allow_fallocate: 1
Jan 20 13:57:54 compute-1 ceph-mon[81775]: rocksdb:                        Options.allow_mmap_reads: 0
Jan 20 13:57:54 compute-1 ceph-mon[81775]: rocksdb:                       Options.allow_mmap_writes: 0
Jan 20 13:57:54 compute-1 ceph-mon[81775]: rocksdb:                        Options.use_direct_reads: 0
Jan 20 13:57:54 compute-1 ceph-mon[81775]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Jan 20 13:57:54 compute-1 ceph-mon[81775]: rocksdb:          Options.create_missing_column_families: 0
Jan 20 13:57:54 compute-1 ceph-mon[81775]: rocksdb:                              Options.db_log_dir: 
Jan 20 13:57:54 compute-1 ceph-mon[81775]: rocksdb:                                 Options.wal_dir: 
Jan 20 13:57:54 compute-1 ceph-mon[81775]: rocksdb:                Options.table_cache_numshardbits: 6
Jan 20 13:57:54 compute-1 ceph-mon[81775]: rocksdb:                         Options.WAL_ttl_seconds: 0
Jan 20 13:57:54 compute-1 ceph-mon[81775]: rocksdb:                       Options.WAL_size_limit_MB: 0
Jan 20 13:57:54 compute-1 ceph-mon[81775]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Jan 20 13:57:54 compute-1 ceph-mon[81775]: rocksdb:             Options.manifest_preallocation_size: 4194304
Jan 20 13:57:54 compute-1 ceph-mon[81775]: rocksdb:                     Options.is_fd_close_on_exec: 1
Jan 20 13:57:54 compute-1 ceph-mon[81775]: rocksdb:                   Options.advise_random_on_open: 1
Jan 20 13:57:54 compute-1 ceph-mon[81775]: rocksdb:                    Options.db_write_buffer_size: 0
Jan 20 13:57:54 compute-1 ceph-mon[81775]: rocksdb:                    Options.write_buffer_manager: 0x564d515beb40
Jan 20 13:57:54 compute-1 ceph-mon[81775]: rocksdb:         Options.access_hint_on_compaction_start: 1
Jan 20 13:57:54 compute-1 ceph-mon[81775]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Jan 20 13:57:54 compute-1 ceph-mon[81775]: rocksdb:                      Options.use_adaptive_mutex: 0
Jan 20 13:57:54 compute-1 ceph-mon[81775]: rocksdb:                            Options.rate_limiter: (nil)
Jan 20 13:57:54 compute-1 ceph-mon[81775]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Jan 20 13:57:54 compute-1 ceph-mon[81775]: rocksdb:                       Options.wal_recovery_mode: 2
Jan 20 13:57:54 compute-1 ceph-mon[81775]: rocksdb:                  Options.enable_thread_tracking: 0
Jan 20 13:57:54 compute-1 ceph-mon[81775]: rocksdb:                  Options.enable_pipelined_write: 0
Jan 20 13:57:54 compute-1 ceph-mon[81775]: rocksdb:                  Options.unordered_write: 0
Jan 20 13:57:54 compute-1 ceph-mon[81775]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Jan 20 13:57:54 compute-1 ceph-mon[81775]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Jan 20 13:57:54 compute-1 ceph-mon[81775]: rocksdb:             Options.write_thread_max_yield_usec: 100
Jan 20 13:57:54 compute-1 ceph-mon[81775]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Jan 20 13:57:54 compute-1 ceph-mon[81775]: rocksdb:                               Options.row_cache: None
Jan 20 13:57:54 compute-1 ceph-mon[81775]: rocksdb:                              Options.wal_filter: None
Jan 20 13:57:54 compute-1 ceph-mon[81775]: rocksdb:             Options.avoid_flush_during_recovery: 0
Jan 20 13:57:54 compute-1 ceph-mon[81775]: rocksdb:             Options.allow_ingest_behind: 0
Jan 20 13:57:54 compute-1 ceph-mon[81775]: rocksdb:             Options.two_write_queues: 0
Jan 20 13:57:54 compute-1 ceph-mon[81775]: rocksdb:             Options.manual_wal_flush: 0
Jan 20 13:57:54 compute-1 ceph-mon[81775]: rocksdb:             Options.wal_compression: 0
Jan 20 13:57:54 compute-1 ceph-mon[81775]: rocksdb:             Options.atomic_flush: 0
Jan 20 13:57:54 compute-1 ceph-mon[81775]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Jan 20 13:57:54 compute-1 ceph-mon[81775]: rocksdb:                 Options.persist_stats_to_disk: 0
Jan 20 13:57:54 compute-1 ceph-mon[81775]: rocksdb:                 Options.write_dbid_to_manifest: 0
Jan 20 13:57:54 compute-1 ceph-mon[81775]: rocksdb:                 Options.log_readahead_size: 0
Jan 20 13:57:54 compute-1 ceph-mon[81775]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Jan 20 13:57:54 compute-1 ceph-mon[81775]: rocksdb:                 Options.best_efforts_recovery: 0
Jan 20 13:57:54 compute-1 ceph-mon[81775]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Jan 20 13:57:54 compute-1 ceph-mon[81775]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Jan 20 13:57:54 compute-1 ceph-mon[81775]: rocksdb:             Options.allow_data_in_errors: 0
Jan 20 13:57:54 compute-1 ceph-mon[81775]: rocksdb:             Options.db_host_id: __hostname__
Jan 20 13:57:54 compute-1 ceph-mon[81775]: rocksdb:             Options.enforce_single_del_contracts: true
Jan 20 13:57:54 compute-1 ceph-mon[81775]: rocksdb:             Options.max_background_jobs: 2
Jan 20 13:57:54 compute-1 ceph-mon[81775]: rocksdb:             Options.max_background_compactions: -1
Jan 20 13:57:54 compute-1 ceph-mon[81775]: rocksdb:             Options.max_subcompactions: 1
Jan 20 13:57:54 compute-1 ceph-mon[81775]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Jan 20 13:57:54 compute-1 ceph-mon[81775]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Jan 20 13:57:54 compute-1 ceph-mon[81775]: rocksdb:             Options.delayed_write_rate : 16777216
Jan 20 13:57:54 compute-1 ceph-mon[81775]: rocksdb:             Options.max_total_wal_size: 0
Jan 20 13:57:54 compute-1 ceph-mon[81775]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Jan 20 13:57:54 compute-1 ceph-mon[81775]: rocksdb:                   Options.stats_dump_period_sec: 600
Jan 20 13:57:54 compute-1 ceph-mon[81775]: rocksdb:                 Options.stats_persist_period_sec: 600
Jan 20 13:57:54 compute-1 ceph-mon[81775]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Jan 20 13:57:54 compute-1 ceph-mon[81775]: rocksdb:                          Options.max_open_files: -1
Jan 20 13:57:54 compute-1 ceph-mon[81775]: rocksdb:                          Options.bytes_per_sync: 0
Jan 20 13:57:54 compute-1 ceph-mon[81775]: rocksdb:                      Options.wal_bytes_per_sync: 0
Jan 20 13:57:54 compute-1 ceph-mon[81775]: rocksdb:                   Options.strict_bytes_per_sync: 0
Jan 20 13:57:54 compute-1 ceph-mon[81775]: rocksdb:       Options.compaction_readahead_size: 0
Jan 20 13:57:54 compute-1 ceph-mon[81775]: rocksdb:                  Options.max_background_flushes: -1
Jan 20 13:57:54 compute-1 ceph-mon[81775]: rocksdb: Compression algorithms supported:
Jan 20 13:57:54 compute-1 ceph-mon[81775]: rocksdb:         kZSTD supported: 0
Jan 20 13:57:54 compute-1 ceph-mon[81775]: rocksdb:         kXpressCompression supported: 0
Jan 20 13:57:54 compute-1 ceph-mon[81775]: rocksdb:         kBZip2Compression supported: 0
Jan 20 13:57:54 compute-1 ceph-mon[81775]: rocksdb:         kZSTDNotFinalCompression supported: 0
Jan 20 13:57:54 compute-1 ceph-mon[81775]: rocksdb:         kLZ4Compression supported: 1
Jan 20 13:57:54 compute-1 ceph-mon[81775]: rocksdb:         kZlibCompression supported: 1
Jan 20 13:57:54 compute-1 ceph-mon[81775]: rocksdb:         kLZ4HCCompression supported: 1
Jan 20 13:57:54 compute-1 ceph-mon[81775]: rocksdb:         kSnappyCompression supported: 1
Jan 20 13:57:54 compute-1 ceph-mon[81775]: rocksdb: Fast CRC32 supported: Supported on x86
Jan 20 13:57:54 compute-1 ceph-mon[81775]: rocksdb: DMutex implementation: pthread_mutex_t
Jan 20 13:57:54 compute-1 ceph-mon[81775]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-1/store.db/MANIFEST-000005
Jan 20 13:57:54 compute-1 ceph-mon[81775]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Jan 20 13:57:54 compute-1 ceph-mon[81775]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 20 13:57:54 compute-1 ceph-mon[81775]: rocksdb:           Options.merge_operator: 
Jan 20 13:57:54 compute-1 ceph-mon[81775]: rocksdb:        Options.compaction_filter: None
Jan 20 13:57:54 compute-1 ceph-mon[81775]: rocksdb:        Options.compaction_filter_factory: None
Jan 20 13:57:54 compute-1 ceph-mon[81775]: rocksdb:  Options.sst_partitioner_factory: None
Jan 20 13:57:54 compute-1 ceph-mon[81775]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 20 13:57:54 compute-1 ceph-mon[81775]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 20 13:57:54 compute-1 ceph-mon[81775]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x564d515aec00)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x564d515a71f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 20 13:57:54 compute-1 ceph-mon[81775]: rocksdb:        Options.write_buffer_size: 33554432
Jan 20 13:57:54 compute-1 ceph-mon[81775]: rocksdb:  Options.max_write_buffer_number: 2
Jan 20 13:57:54 compute-1 ceph-mon[81775]: rocksdb:          Options.compression: NoCompression
Jan 20 13:57:54 compute-1 ceph-mon[81775]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 20 13:57:54 compute-1 ceph-mon[81775]: rocksdb:       Options.prefix_extractor: nullptr
Jan 20 13:57:54 compute-1 ceph-mon[81775]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 20 13:57:54 compute-1 ceph-mon[81775]: rocksdb:             Options.num_levels: 7
Jan 20 13:57:54 compute-1 ceph-mon[81775]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Jan 20 13:57:54 compute-1 ceph-mon[81775]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 20 13:57:54 compute-1 ceph-mon[81775]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 20 13:57:54 compute-1 ceph-mon[81775]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 20 13:57:54 compute-1 ceph-mon[81775]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 20 13:57:54 compute-1 ceph-mon[81775]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 20 13:57:54 compute-1 ceph-mon[81775]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 20 13:57:54 compute-1 ceph-mon[81775]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 20 13:57:54 compute-1 ceph-mon[81775]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 20 13:57:54 compute-1 ceph-mon[81775]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 20 13:57:54 compute-1 ceph-mon[81775]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 20 13:57:54 compute-1 ceph-mon[81775]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 20 13:57:54 compute-1 ceph-mon[81775]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 20 13:57:54 compute-1 ceph-mon[81775]: rocksdb:                  Options.compression_opts.level: 32767
Jan 20 13:57:54 compute-1 ceph-mon[81775]: rocksdb:               Options.compression_opts.strategy: 0
Jan 20 13:57:54 compute-1 ceph-mon[81775]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 20 13:57:54 compute-1 ceph-mon[81775]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 20 13:57:54 compute-1 ceph-mon[81775]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 20 13:57:54 compute-1 ceph-mon[81775]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 20 13:57:54 compute-1 ceph-mon[81775]: rocksdb:                  Options.compression_opts.enabled: false
Jan 20 13:57:54 compute-1 ceph-mon[81775]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 20 13:57:54 compute-1 ceph-mon[81775]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Jan 20 13:57:54 compute-1 ceph-mon[81775]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 20 13:57:54 compute-1 ceph-mon[81775]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 20 13:57:54 compute-1 ceph-mon[81775]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 20 13:57:54 compute-1 ceph-mon[81775]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 20 13:57:54 compute-1 ceph-mon[81775]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Jan 20 13:57:54 compute-1 ceph-mon[81775]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Jan 20 13:57:54 compute-1 ceph-mon[81775]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Jan 20 13:57:54 compute-1 ceph-mon[81775]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 20 13:57:54 compute-1 ceph-mon[81775]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 20 13:57:54 compute-1 ceph-mon[81775]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 20 13:57:54 compute-1 ceph-mon[81775]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 20 13:57:54 compute-1 ceph-mon[81775]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 20 13:57:54 compute-1 ceph-mon[81775]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 20 13:57:54 compute-1 ceph-mon[81775]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 20 13:57:54 compute-1 ceph-mon[81775]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 20 13:57:54 compute-1 ceph-mon[81775]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 20 13:57:54 compute-1 ceph-mon[81775]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 20 13:57:54 compute-1 ceph-mon[81775]: rocksdb:                        Options.arena_block_size: 1048576
Jan 20 13:57:54 compute-1 ceph-mon[81775]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 20 13:57:54 compute-1 ceph-mon[81775]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 20 13:57:54 compute-1 ceph-mon[81775]: rocksdb:                Options.disable_auto_compactions: 0
Jan 20 13:57:54 compute-1 ceph-mon[81775]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 20 13:57:54 compute-1 ceph-mon[81775]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 20 13:57:54 compute-1 ceph-mon[81775]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 20 13:57:54 compute-1 ceph-mon[81775]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 20 13:57:54 compute-1 ceph-mon[81775]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 20 13:57:54 compute-1 ceph-mon[81775]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 20 13:57:54 compute-1 ceph-mon[81775]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 20 13:57:54 compute-1 ceph-mon[81775]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 20 13:57:54 compute-1 ceph-mon[81775]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 20 13:57:54 compute-1 ceph-mon[81775]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 20 13:57:54 compute-1 ceph-mon[81775]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 20 13:57:54 compute-1 ceph-mon[81775]: rocksdb:                   Options.inplace_update_support: 0
Jan 20 13:57:54 compute-1 ceph-mon[81775]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 20 13:57:54 compute-1 ceph-mon[81775]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 20 13:57:54 compute-1 ceph-mon[81775]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 20 13:57:54 compute-1 ceph-mon[81775]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 20 13:57:54 compute-1 ceph-mon[81775]: rocksdb:                           Options.bloom_locality: 0
Jan 20 13:57:54 compute-1 ceph-mon[81775]: rocksdb:                    Options.max_successive_merges: 0
Jan 20 13:57:54 compute-1 ceph-mon[81775]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 20 13:57:54 compute-1 ceph-mon[81775]: rocksdb:                Options.paranoid_file_checks: 0
Jan 20 13:57:54 compute-1 ceph-mon[81775]: rocksdb:                Options.force_consistency_checks: 1
Jan 20 13:57:54 compute-1 ceph-mon[81775]: rocksdb:                Options.report_bg_io_stats: 0
Jan 20 13:57:54 compute-1 ceph-mon[81775]: rocksdb:                               Options.ttl: 2592000
Jan 20 13:57:54 compute-1 ceph-mon[81775]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 20 13:57:54 compute-1 ceph-mon[81775]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 20 13:57:54 compute-1 ceph-mon[81775]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 20 13:57:54 compute-1 ceph-mon[81775]: rocksdb:                       Options.enable_blob_files: false
Jan 20 13:57:54 compute-1 ceph-mon[81775]: rocksdb:                           Options.min_blob_size: 0
Jan 20 13:57:54 compute-1 ceph-mon[81775]: rocksdb:                          Options.blob_file_size: 268435456
Jan 20 13:57:54 compute-1 ceph-mon[81775]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 20 13:57:54 compute-1 ceph-mon[81775]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 20 13:57:54 compute-1 ceph-mon[81775]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 20 13:57:54 compute-1 ceph-mon[81775]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 20 13:57:54 compute-1 ceph-mon[81775]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 20 13:57:54 compute-1 ceph-mon[81775]: rocksdb:                Options.blob_file_starting_level: 0
Jan 20 13:57:54 compute-1 ceph-mon[81775]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 20 13:57:54 compute-1 ceph-mon[81775]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-1/store.db/MANIFEST-000005 succeeded,manifest_file_number is 5, next_file_number is 7, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0
Jan 20 13:57:54 compute-1 ceph-mon[81775]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0
Jan 20 13:57:54 compute-1 ceph-mon[81775]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 1539d774-8a6f-4e48-b253-137c44586344
Jan 20 13:57:54 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768917474527291, "job": 1, "event": "recovery_started", "wal_files": [4]}
Jan 20 13:57:54 compute-1 ceph-mon[81775]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #4 mode 2
Jan 20 13:57:54 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768917474529485, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 8, "file_size": 1648, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 1, "largest_seqno": 5, "table_properties": {"data_size": 523, "index_size": 31, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 115, "raw_average_key_size": 23, "raw_value_size": 401, "raw_average_value_size": 80, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768917474, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1539d774-8a6f-4e48-b253-137c44586344", "db_session_id": "LFF7G2OZDOU7TKQ8MKAH", "orig_file_number": 8, "seqno_to_time_mapping": "N/A"}}
Jan 20 13:57:54 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768917474529620, "job": 1, "event": "recovery_finished"}
Jan 20 13:57:54 compute-1 ceph-mon[81775]: rocksdb: [db/version_set.cc:5047] Creating manifest 10
Jan 20 13:57:54 compute-1 ceph-mon[81775]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000004.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 13:57:54 compute-1 ceph-mon[81775]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x564d515d0e00
Jan 20 13:57:54 compute-1 ceph-mon[81775]: rocksdb: DB pointer 0x564d5165a000
Jan 20 13:57:54 compute-1 ceph-mon[81775]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 20 13:57:54 compute-1 ceph-mon[81775]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.61 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.7      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      1/0    1.61 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.7      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.7      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.7      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.12 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.12 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564d515a71f0#2 capacity: 512.00 MB usage: 0.22 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 1.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.11 KB,2.08616e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Jan 20 13:57:54 compute-1 ceph-mon[81775]: mon.compute-1 does not exist in monmap, will attempt to join an existing cluster
Jan 20 13:57:54 compute-1 sudo[81475]: pam_unix(sudo:session): session closed for user root
Jan 20 13:57:54 compute-1 ceph-mon[81775]: using public_addr v2:192.168.122.101:0/0 -> [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0]
Jan 20 13:57:54 compute-1 ceph-mon[81775]: starting mon.compute-1 rank -1 at public addrs [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] at bind addrs [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-1 fsid e399cf45-e6b6-5393-99f1-75c601d3f188
Jan 20 13:57:54 compute-1 ceph-mon[81775]: mon.compute-1@-1(???) e0 preinit fsid e399cf45-e6b6-5393-99f1-75c601d3f188
Jan 20 13:57:54 compute-1 ceph-mon[81775]: mon.compute-1@-1(synchronizing).mds e1 new map
Jan 20 13:57:54 compute-1 ceph-mon[81775]: mon.compute-1@-1(synchronizing).mds e1 print_map
                                           e1
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: -1
                                            
                                           No filesystems configured
Jan 20 13:57:54 compute-1 ceph-mon[81775]: mon.compute-1@-1(synchronizing).osd e0 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Jan 20 13:57:54 compute-1 ceph-mon[81775]: mon.compute-1@-1(synchronizing).osd e0 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Jan 20 13:57:54 compute-1 ceph-mon[81775]: mon.compute-1@-1(synchronizing).osd e1 e1: 0 total, 0 up, 0 in
Jan 20 13:57:54 compute-1 ceph-mon[81775]: mon.compute-1@-1(synchronizing).osd e2 e2: 0 total, 0 up, 0 in
Jan 20 13:57:54 compute-1 ceph-mon[81775]: mon.compute-1@-1(synchronizing).osd e3 e3: 0 total, 0 up, 0 in
Jan 20 13:57:54 compute-1 ceph-mon[81775]: mon.compute-1@-1(synchronizing).osd e4 e4: 1 total, 0 up, 1 in
Jan 20 13:57:54 compute-1 ceph-mon[81775]: mon.compute-1@-1(synchronizing).osd e5 e5: 2 total, 0 up, 2 in
Jan 20 13:57:54 compute-1 ceph-mon[81775]: mon.compute-1@-1(synchronizing).osd e6 e6: 2 total, 0 up, 2 in
Jan 20 13:57:54 compute-1 ceph-mon[81775]: mon.compute-1@-1(synchronizing).osd e7 e7: 2 total, 0 up, 2 in
Jan 20 13:57:54 compute-1 ceph-mon[81775]: mon.compute-1@-1(synchronizing).osd e8 e8: 2 total, 1 up, 2 in
Jan 20 13:57:54 compute-1 ceph-mon[81775]: mon.compute-1@-1(synchronizing).osd e9 e9: 2 total, 1 up, 2 in
Jan 20 13:57:54 compute-1 ceph-mon[81775]: mon.compute-1@-1(synchronizing).osd e10 e10: 2 total, 1 up, 2 in
Jan 20 13:57:54 compute-1 ceph-mon[81775]: mon.compute-1@-1(synchronizing).osd e11 e11: 2 total, 1 up, 2 in
Jan 20 13:57:54 compute-1 ceph-mon[81775]: mon.compute-1@-1(synchronizing).osd e12 e12: 2 total, 2 up, 2 in
Jan 20 13:57:54 compute-1 ceph-mon[81775]: mon.compute-1@-1(synchronizing).osd e13 e13: 2 total, 2 up, 2 in
Jan 20 13:57:54 compute-1 ceph-mon[81775]: mon.compute-1@-1(synchronizing).osd e14 e14: 2 total, 2 up, 2 in
Jan 20 13:57:54 compute-1 ceph-mon[81775]: mon.compute-1@-1(synchronizing).osd e15 e15: 2 total, 2 up, 2 in
Jan 20 13:57:54 compute-1 ceph-mon[81775]: mon.compute-1@-1(synchronizing).osd e16 e16: 2 total, 2 up, 2 in
Jan 20 13:57:54 compute-1 ceph-mon[81775]: mon.compute-1@-1(synchronizing).osd e17 e17: 2 total, 2 up, 2 in
Jan 20 13:57:54 compute-1 ceph-mon[81775]: mon.compute-1@-1(synchronizing).osd e18 e18: 2 total, 2 up, 2 in
Jan 20 13:57:54 compute-1 ceph-mon[81775]: mon.compute-1@-1(synchronizing).osd e19 e19: 2 total, 2 up, 2 in
Jan 20 13:57:54 compute-1 ceph-mon[81775]: mon.compute-1@-1(synchronizing).osd e20 e20: 2 total, 2 up, 2 in
Jan 20 13:57:54 compute-1 ceph-mon[81775]: mon.compute-1@-1(synchronizing).osd e21 e21: 2 total, 2 up, 2 in
Jan 20 13:57:54 compute-1 ceph-mon[81775]: mon.compute-1@-1(synchronizing).osd e22 e22: 2 total, 2 up, 2 in
Jan 20 13:57:54 compute-1 ceph-mon[81775]: mon.compute-1@-1(synchronizing).osd e23 e23: 2 total, 2 up, 2 in
Jan 20 13:57:54 compute-1 ceph-mon[81775]: mon.compute-1@-1(synchronizing).osd e24 e24: 2 total, 2 up, 2 in
Jan 20 13:57:54 compute-1 ceph-mon[81775]: mon.compute-1@-1(synchronizing).osd e25 e25: 2 total, 2 up, 2 in
Jan 20 13:57:54 compute-1 ceph-mon[81775]: mon.compute-1@-1(synchronizing).osd e25 crush map has features 3314933000852226048, adjusting msgr requires
Jan 20 13:57:54 compute-1 ceph-mon[81775]: mon.compute-1@-1(synchronizing).osd e25 crush map has features 288514051259236352, adjusting msgr requires
Jan 20 13:57:54 compute-1 ceph-mon[81775]: mon.compute-1@-1(synchronizing).osd e25 crush map has features 288514051259236352, adjusting msgr requires
Jan 20 13:57:54 compute-1 ceph-mon[81775]: mon.compute-1@-1(synchronizing).osd e25 crush map has features 288514051259236352, adjusting msgr requires
Jan 20 13:57:54 compute-1 ceph-mon[81775]: Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Jan 20 13:57:54 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/3530884063' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 20 13:57:54 compute-1 ceph-mon[81775]: osdmap e22: 2 total, 2 up, 2 in
Jan 20 13:57:54 compute-1 ceph-mon[81775]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 20 13:57:54 compute-1 ceph-mon[81775]: pgmap v72: 6 pgs: 2 active+clean, 1 creating+peering, 3 unknown; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 20 13:57:54 compute-1 ceph-mon[81775]: Updating compute-2:/var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/config/ceph.client.admin.keyring
Jan 20 13:57:54 compute-1 ceph-mon[81775]: osdmap e23: 2 total, 2 up, 2 in
Jan 20 13:57:54 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/3880793223' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Jan 20 13:57:54 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:57:54 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:57:54 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:57:54 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Jan 20 13:57:54 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Jan 20 13:57:54 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 13:57:54 compute-1 ceph-mon[81775]: pgmap v74: 6 pgs: 1 creating+peering, 5 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 20 13:57:54 compute-1 ceph-mon[81775]: Deploying daemon mon.compute-2 on compute-2
Jan 20 13:57:54 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/3880793223' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 20 13:57:54 compute-1 ceph-mon[81775]: osdmap e24: 2 total, 2 up, 2 in
Jan 20 13:57:54 compute-1 ceph-mon[81775]: Health check cleared: CEPHADM_APPLY_SPEC_FAIL (was: Failed to apply 2 service(s): mon,mgr)
Jan 20 13:57:54 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/3950308669' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]: dispatch
Jan 20 13:57:54 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/3950308669' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Jan 20 13:57:54 compute-1 ceph-mon[81775]: osdmap e25: 2 total, 2 up, 2 in
Jan 20 13:57:54 compute-1 ceph-mon[81775]: mon.compute-1@-1(synchronizing).paxosservice(auth 1..7) refresh upgraded, format 0 -> 3
Jan 20 13:58:00 compute-1 ceph-mon[81775]: mon.compute-1@-1(probing) e3  my rank is now 2 (was -1)
Jan 20 13:58:00 compute-1 ceph-mon[81775]: log_channel(cluster) log [INF] : mon.compute-1 calling monitor election
Jan 20 13:58:00 compute-1 ceph-mon[81775]: paxos.2).electionLogic(0) init, first boot, initializing epoch at 1 
Jan 20 13:58:00 compute-1 ceph-mon[81775]: mon.compute-1@2(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 20 13:58:03 compute-1 ceph-mon[81775]: mon.compute-1@2(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 20 13:58:03 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={4=support erasure code pools,5=new-style osdmap encoding,6=support isa/lrc erasure code,7=support shec erasure code}
Jan 20 13:58:03 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={8=support monmap features,9=luminous ondisk layout,10=mimic ondisk layout,11=nautilus ondisk layout,12=octopus ondisk layout,13=pacific ondisk layout,14=quincy ondisk layout,15=reef ondisk layout}
Jan 20 13:58:03 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e26 e26: 2 total, 2 up, 2 in
Jan 20 13:58:03 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e27 e27: 2 total, 2 up, 2 in
Jan 20 13:58:03 compute-1 ceph-mon[81775]: Deploying daemon mon.compute-1 on compute-1
Jan 20 13:58:03 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Jan 20 13:58:03 compute-1 ceph-mon[81775]: mon.compute-0 calling monitor election
Jan 20 13:58:03 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 20 13:58:03 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/3099254653' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]: dispatch
Jan 20 13:58:03 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]: dispatch
Jan 20 13:58:03 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 20 13:58:03 compute-1 ceph-mon[81775]: pgmap v78: 7 pgs: 2 creating+peering, 5 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 20 13:58:03 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 20 13:58:03 compute-1 ceph-mon[81775]: mon.compute-2 calling monitor election
Jan 20 13:58:03 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 20 13:58:03 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 20 13:58:03 compute-1 ceph-mon[81775]: pgmap v79: 7 pgs: 1 creating+peering, 6 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 20 13:58:03 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 20 13:58:03 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 20 13:58:03 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 20 13:58:03 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 20 13:58:03 compute-1 ceph-mon[81775]: mon.compute-0 is new leader, mons compute-0,compute-2 in quorum (ranks 0,1)
Jan 20 13:58:03 compute-1 ceph-mon[81775]: monmap e2: 2 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],compute-2=[v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0]} removed_ranks: {} disallowed_leaders: {}
Jan 20 13:58:03 compute-1 ceph-mon[81775]: fsmap 
Jan 20 13:58:03 compute-1 ceph-mon[81775]: osdmap e25: 2 total, 2 up, 2 in
Jan 20 13:58:03 compute-1 ceph-mon[81775]: mgrmap e9: compute-0.wookjv(active, since 2m)
Jan 20 13:58:03 compute-1 ceph-mon[81775]: Health detail: HEALTH_WARN 6 pool(s) do not have an application enabled
Jan 20 13:58:03 compute-1 ceph-mon[81775]: [WRN] POOL_APP_NOT_ENABLED: 6 pool(s) do not have an application enabled
Jan 20 13:58:03 compute-1 ceph-mon[81775]:     application not enabled on pool 'vms'
Jan 20 13:58:03 compute-1 ceph-mon[81775]:     application not enabled on pool 'volumes'
Jan 20 13:58:03 compute-1 ceph-mon[81775]:     application not enabled on pool 'backups'
Jan 20 13:58:03 compute-1 ceph-mon[81775]:     application not enabled on pool 'images'
Jan 20 13:58:03 compute-1 ceph-mon[81775]:     application not enabled on pool 'cephfs.cephfs.meta'
Jan 20 13:58:03 compute-1 ceph-mon[81775]:     application not enabled on pool 'cephfs.cephfs.data'
Jan 20 13:58:03 compute-1 ceph-mon[81775]:     use 'ceph osd pool application enable <pool-name> <app-name>', where <app-name> is 'cephfs', 'rbd', 'rgw', or freeform for custom applications.
Jan 20 13:58:03 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:03 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:03 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:03 compute-1 ceph-mon[81775]: pgmap v80: 7 pgs: 7 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 20 13:58:03 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:03 compute-1 ceph-mon[81775]: Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 20 13:58:03 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.gunjko", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Jan 20 13:58:03 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/3099254653' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Jan 20 13:58:03 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Jan 20 13:58:03 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.gunjko", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Jan 20 13:58:03 compute-1 ceph-mon[81775]: osdmap e26: 2 total, 2 up, 2 in
Jan 20 13:58:03 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]: dispatch
Jan 20 13:58:03 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 20 13:58:03 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 13:58:03 compute-1 ceph-mon[81775]: Deploying daemon mgr.compute-2.gunjko on compute-2
Jan 20 13:58:03 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:03 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 20 13:58:03 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 20 13:58:03 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 20 13:58:03 compute-1 ceph-mon[81775]: mgrc update_daemon_metadata mon.compute-1 metadata {addrs=[v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0],arch=x86_64,ceph_release=reef,ceph_version=ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable),ceph_version_short=18.2.7,ceph_version_when_created=ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable),compression_algorithms=none, snappy, zlib, zstd, lz4,container_hostname=compute-1,container_image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0,cpu=AMD EPYC-Rome Processor,created_at=2026-01-20T13:57:53.383953Z,device_ids=,device_paths=vda=/dev/disk/by-path/pci-0000:00:04.0,devices=vda,distro=centos,distro_description=CentOS Stream 9,distro_version=9,hostname=compute-1,kernel_description=#1 SMP PREEMPT_DYNAMIC Fri Jan 16 09:19:22 UTC 2026,kernel_version=5.14.0-661.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864312,os=Linux}
Jan 20 13:58:03 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e28 e28: 2 total, 2 up, 2 in
Jan 20 13:58:03 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 28 pg[2.0( empty local-lis/les=15/16 n=0 ec=15/15 lis/c=15/15 les/c/f=16/16/0 sis=28 pruub=10.162316322s) [1] r=0 lpr=28 pi=[15,28)/1 crt=0'0 mlcod 0'0 active pruub 51.637405396s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:58:03 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 28 pg[2.0( empty local-lis/les=15/16 n=0 ec=15/15 lis/c=15/15 les/c/f=16/16/0 sis=28 pruub=10.162316322s) [1] r=0 lpr=28 pi=[15,28)/1 crt=0'0 mlcod 0'0 unknown pruub 51.637405396s@ mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:03 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]: dispatch
Jan 20 13:58:03 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/1076842494' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]: dispatch
Jan 20 13:58:03 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Jan 20 13:58:03 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 20 13:58:03 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 20 13:58:03 compute-1 ceph-mon[81775]: mon.compute-0 calling monitor election
Jan 20 13:58:03 compute-1 ceph-mon[81775]: mon.compute-2 calling monitor election
Jan 20 13:58:03 compute-1 ceph-mon[81775]: pgmap v83: 7 pgs: 7 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 20 13:58:03 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 20 13:58:03 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 20 13:58:03 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 20 13:58:03 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 20 13:58:03 compute-1 ceph-mon[81775]: mon.compute-1 calling monitor election
Jan 20 13:58:03 compute-1 ceph-mon[81775]: pgmap v84: 7 pgs: 7 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 20 13:58:03 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 20 13:58:03 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 20 13:58:03 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 20 13:58:03 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 20 13:58:03 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 20 13:58:03 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 20 13:58:03 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 20 13:58:03 compute-1 ceph-mon[81775]: mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Jan 20 13:58:03 compute-1 ceph-mon[81775]: monmap e3: 3 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],compute-1=[v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0],compute-2=[v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0]} removed_ranks: {} disallowed_leaders: {}
Jan 20 13:58:03 compute-1 ceph-mon[81775]: fsmap 
Jan 20 13:58:03 compute-1 ceph-mon[81775]: osdmap e27: 2 total, 2 up, 2 in
Jan 20 13:58:03 compute-1 ceph-mon[81775]: mgrmap e9: compute-0.wookjv(active, since 2m)
Jan 20 13:58:03 compute-1 ceph-mon[81775]: Health detail: HEALTH_WARN 5 pool(s) do not have an application enabled
Jan 20 13:58:03 compute-1 ceph-mon[81775]: [WRN] POOL_APP_NOT_ENABLED: 5 pool(s) do not have an application enabled
Jan 20 13:58:03 compute-1 ceph-mon[81775]:     application not enabled on pool 'volumes'
Jan 20 13:58:03 compute-1 ceph-mon[81775]:     application not enabled on pool 'backups'
Jan 20 13:58:03 compute-1 ceph-mon[81775]:     application not enabled on pool 'images'
Jan 20 13:58:03 compute-1 ceph-mon[81775]:     application not enabled on pool 'cephfs.cephfs.meta'
Jan 20 13:58:03 compute-1 ceph-mon[81775]:     application not enabled on pool 'cephfs.cephfs.data'
Jan 20 13:58:03 compute-1 ceph-mon[81775]:     use 'ceph osd pool application enable <pool-name> <app-name>', where <app-name> is 'cephfs', 'rbd', 'rgw', or freeform for custom applications.
Jan 20 13:58:03 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:03 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:03 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:03 compute-1 sudo[81814]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:58:03 compute-1 sudo[81814]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:58:03 compute-1 sudo[81814]: pam_unix(sudo:session): session closed for user root
Jan 20 13:58:03 compute-1 sudo[81839]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 13:58:03 compute-1 sudo[81839]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:58:03 compute-1 sudo[81839]: pam_unix(sudo:session): session closed for user root
Jan 20 13:58:04 compute-1 sudo[81864]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:58:04 compute-1 sudo[81864]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:58:04 compute-1 sudo[81864]: pam_unix(sudo:session): session closed for user root
Jan 20 13:58:04 compute-1 sudo[81889]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid e399cf45-e6b6-5393-99f1-75c601d3f188
Jan 20 13:58:04 compute-1 sudo[81889]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:58:04 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e28 _set_new_cache_sizes cache_size:1019933228 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 13:58:04 compute-1 podman[81954]: 2026-01-20 13:58:04.569312076 +0000 UTC m=+0.069375481 container create 8443aa08b83ff9cad5d88a8d4b8aabe85b3dd1fa166d0cf5cc56c6c26726d155 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_williams, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 20 13:58:04 compute-1 systemd[1]: Started libpod-conmon-8443aa08b83ff9cad5d88a8d4b8aabe85b3dd1fa166d0cf5cc56c6c26726d155.scope.
Jan 20 13:58:04 compute-1 podman[81954]: 2026-01-20 13:58:04.537758341 +0000 UTC m=+0.037821816 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 13:58:04 compute-1 systemd[1]: Started libcrun container.
Jan 20 13:58:04 compute-1 podman[81954]: 2026-01-20 13:58:04.671722657 +0000 UTC m=+0.171786142 container init 8443aa08b83ff9cad5d88a8d4b8aabe85b3dd1fa166d0cf5cc56c6c26726d155 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_williams, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2)
Jan 20 13:58:04 compute-1 podman[81954]: 2026-01-20 13:58:04.682098341 +0000 UTC m=+0.182161746 container start 8443aa08b83ff9cad5d88a8d4b8aabe85b3dd1fa166d0cf5cc56c6c26726d155 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_williams, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef)
Jan 20 13:58:04 compute-1 podman[81954]: 2026-01-20 13:58:04.685645109 +0000 UTC m=+0.185708534 container attach 8443aa08b83ff9cad5d88a8d4b8aabe85b3dd1fa166d0cf5cc56c6c26726d155 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_williams, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Jan 20 13:58:04 compute-1 hardcore_williams[81970]: 167 167
Jan 20 13:58:04 compute-1 systemd[1]: libpod-8443aa08b83ff9cad5d88a8d4b8aabe85b3dd1fa166d0cf5cc56c6c26726d155.scope: Deactivated successfully.
Jan 20 13:58:04 compute-1 podman[81954]: 2026-01-20 13:58:04.692833966 +0000 UTC m=+0.192897391 container died 8443aa08b83ff9cad5d88a8d4b8aabe85b3dd1fa166d0cf5cc56c6c26726d155 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_williams, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 13:58:04 compute-1 systemd[1]: var-lib-containers-storage-overlay-ee23155c83f2fddce7465f76b73454ad9f3989d7342da40e2b42fc3ae8ea0388-merged.mount: Deactivated successfully.
Jan 20 13:58:04 compute-1 podman[81954]: 2026-01-20 13:58:04.737730856 +0000 UTC m=+0.237794291 container remove 8443aa08b83ff9cad5d88a8d4b8aabe85b3dd1fa166d0cf5cc56c6c26726d155 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_williams, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 13:58:04 compute-1 systemd[1]: libpod-conmon-8443aa08b83ff9cad5d88a8d4b8aabe85b3dd1fa166d0cf5cc56c6c26726d155.scope: Deactivated successfully.
Jan 20 13:58:04 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e29 e29: 2 total, 2 up, 2 in
Jan 20 13:58:04 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 29 pg[2.1f( empty local-lis/les=15/16 n=0 ec=28/15 lis/c=15/15 les/c/f=16/16/0 sis=28) [1] r=0 lpr=28 pi=[15,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:04 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 29 pg[2.1b( empty local-lis/les=15/16 n=0 ec=28/15 lis/c=15/15 les/c/f=16/16/0 sis=28) [1] r=0 lpr=28 pi=[15,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:04 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 29 pg[2.1e( empty local-lis/les=15/16 n=0 ec=28/15 lis/c=15/15 les/c/f=16/16/0 sis=28) [1] r=0 lpr=28 pi=[15,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:04 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 29 pg[2.a( empty local-lis/les=15/16 n=0 ec=28/15 lis/c=15/15 les/c/f=16/16/0 sis=28) [1] r=0 lpr=28 pi=[15,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:04 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 29 pg[2.9( empty local-lis/les=15/16 n=0 ec=28/15 lis/c=15/15 les/c/f=16/16/0 sis=28) [1] r=0 lpr=28 pi=[15,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:04 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 29 pg[2.8( empty local-lis/les=15/16 n=0 ec=28/15 lis/c=15/15 les/c/f=16/16/0 sis=28) [1] r=0 lpr=28 pi=[15,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:04 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 29 pg[2.7( empty local-lis/les=15/16 n=0 ec=28/15 lis/c=15/15 les/c/f=16/16/0 sis=28) [1] r=0 lpr=28 pi=[15,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:04 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 29 pg[2.1d( empty local-lis/les=15/16 n=0 ec=28/15 lis/c=15/15 les/c/f=16/16/0 sis=28) [1] r=0 lpr=28 pi=[15,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:04 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 29 pg[2.6( empty local-lis/les=15/16 n=0 ec=28/15 lis/c=15/15 les/c/f=16/16/0 sis=28) [1] r=0 lpr=28 pi=[15,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:04 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 29 pg[2.4( empty local-lis/les=15/16 n=0 ec=28/15 lis/c=15/15 les/c/f=16/16/0 sis=28) [1] r=0 lpr=28 pi=[15,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:04 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 29 pg[2.2( empty local-lis/les=15/16 n=0 ec=28/15 lis/c=15/15 les/c/f=16/16/0 sis=28) [1] r=0 lpr=28 pi=[15,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:04 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 29 pg[2.1( empty local-lis/les=15/16 n=0 ec=28/15 lis/c=15/15 les/c/f=16/16/0 sis=28) [1] r=0 lpr=28 pi=[15,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:04 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 29 pg[2.5( empty local-lis/les=15/16 n=0 ec=28/15 lis/c=15/15 les/c/f=16/16/0 sis=28) [1] r=0 lpr=28 pi=[15,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:04 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 29 pg[2.3( empty local-lis/les=15/16 n=0 ec=28/15 lis/c=15/15 les/c/f=16/16/0 sis=28) [1] r=0 lpr=28 pi=[15,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:04 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 29 pg[2.b( empty local-lis/les=15/16 n=0 ec=28/15 lis/c=15/15 les/c/f=16/16/0 sis=28) [1] r=0 lpr=28 pi=[15,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:04 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 29 pg[2.c( empty local-lis/les=15/16 n=0 ec=28/15 lis/c=15/15 les/c/f=16/16/0 sis=28) [1] r=0 lpr=28 pi=[15,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:04 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 29 pg[2.d( empty local-lis/les=15/16 n=0 ec=28/15 lis/c=15/15 les/c/f=16/16/0 sis=28) [1] r=0 lpr=28 pi=[15,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:04 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 29 pg[2.e( empty local-lis/les=15/16 n=0 ec=28/15 lis/c=15/15 les/c/f=16/16/0 sis=28) [1] r=0 lpr=28 pi=[15,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:04 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 29 pg[2.f( empty local-lis/les=15/16 n=0 ec=28/15 lis/c=15/15 les/c/f=16/16/0 sis=28) [1] r=0 lpr=28 pi=[15,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:04 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 29 pg[2.10( empty local-lis/les=15/16 n=0 ec=28/15 lis/c=15/15 les/c/f=16/16/0 sis=28) [1] r=0 lpr=28 pi=[15,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:04 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 29 pg[2.12( empty local-lis/les=15/16 n=0 ec=28/15 lis/c=15/15 les/c/f=16/16/0 sis=28) [1] r=0 lpr=28 pi=[15,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:04 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 29 pg[2.11( empty local-lis/les=15/16 n=0 ec=28/15 lis/c=15/15 les/c/f=16/16/0 sis=28) [1] r=0 lpr=28 pi=[15,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:04 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 29 pg[2.13( empty local-lis/les=15/16 n=0 ec=28/15 lis/c=15/15 les/c/f=16/16/0 sis=28) [1] r=0 lpr=28 pi=[15,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:04 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 29 pg[2.1c( empty local-lis/les=15/16 n=0 ec=28/15 lis/c=15/15 les/c/f=16/16/0 sis=28) [1] r=0 lpr=28 pi=[15,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:04 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 29 pg[2.16( empty local-lis/les=15/16 n=0 ec=28/15 lis/c=15/15 les/c/f=16/16/0 sis=28) [1] r=0 lpr=28 pi=[15,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:04 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 29 pg[2.14( empty local-lis/les=15/16 n=0 ec=28/15 lis/c=15/15 les/c/f=16/16/0 sis=28) [1] r=0 lpr=28 pi=[15,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:04 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 29 pg[2.15( empty local-lis/les=15/16 n=0 ec=28/15 lis/c=15/15 les/c/f=16/16/0 sis=28) [1] r=0 lpr=28 pi=[15,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:04 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 29 pg[2.17( empty local-lis/les=15/16 n=0 ec=28/15 lis/c=15/15 les/c/f=16/16/0 sis=28) [1] r=0 lpr=28 pi=[15,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:04 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 29 pg[2.18( empty local-lis/les=15/16 n=0 ec=28/15 lis/c=15/15 les/c/f=16/16/0 sis=28) [1] r=0 lpr=28 pi=[15,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:04 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 29 pg[2.19( empty local-lis/les=15/16 n=0 ec=28/15 lis/c=15/15 les/c/f=16/16/0 sis=28) [1] r=0 lpr=28 pi=[15,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:04 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 29 pg[2.1a( empty local-lis/les=15/16 n=0 ec=28/15 lis/c=15/15 les/c/f=16/16/0 sis=28) [1] r=0 lpr=28 pi=[15,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:04 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 29 pg[2.1b( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=15/15 les/c/f=16/16/0 sis=28) [1] r=0 lpr=28 pi=[15,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:04 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 29 pg[2.1e( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=15/15 les/c/f=16/16/0 sis=28) [1] r=0 lpr=28 pi=[15,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:04 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 29 pg[2.1f( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=15/15 les/c/f=16/16/0 sis=28) [1] r=0 lpr=28 pi=[15,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:04 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 29 pg[2.a( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=15/15 les/c/f=16/16/0 sis=28) [1] r=0 lpr=28 pi=[15,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:04 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 29 pg[2.9( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=15/15 les/c/f=16/16/0 sis=28) [1] r=0 lpr=28 pi=[15,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:04 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 29 pg[2.8( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=15/15 les/c/f=16/16/0 sis=28) [1] r=0 lpr=28 pi=[15,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:04 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 29 pg[2.7( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=15/15 les/c/f=16/16/0 sis=28) [1] r=0 lpr=28 pi=[15,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:04 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 29 pg[2.4( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=15/15 les/c/f=16/16/0 sis=28) [1] r=0 lpr=28 pi=[15,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:04 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 29 pg[2.6( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=15/15 les/c/f=16/16/0 sis=28) [1] r=0 lpr=28 pi=[15,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:04 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 29 pg[2.1d( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=15/15 les/c/f=16/16/0 sis=28) [1] r=0 lpr=28 pi=[15,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:04 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 29 pg[2.1( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=15/15 les/c/f=16/16/0 sis=28) [1] r=0 lpr=28 pi=[15,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:04 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 29 pg[2.5( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=15/15 les/c/f=16/16/0 sis=28) [1] r=0 lpr=28 pi=[15,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:04 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 29 pg[2.3( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=15/15 les/c/f=16/16/0 sis=28) [1] r=0 lpr=28 pi=[15,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:04 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 29 pg[2.2( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=15/15 les/c/f=16/16/0 sis=28) [1] r=0 lpr=28 pi=[15,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:04 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 29 pg[2.b( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=15/15 les/c/f=16/16/0 sis=28) [1] r=0 lpr=28 pi=[15,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:04 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 29 pg[2.d( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=15/15 les/c/f=16/16/0 sis=28) [1] r=0 lpr=28 pi=[15,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:04 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 29 pg[2.c( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=15/15 les/c/f=16/16/0 sis=28) [1] r=0 lpr=28 pi=[15,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:04 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 29 pg[2.e( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=15/15 les/c/f=16/16/0 sis=28) [1] r=0 lpr=28 pi=[15,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:04 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 29 pg[2.0( empty local-lis/les=28/29 n=0 ec=15/15 lis/c=15/15 les/c/f=16/16/0 sis=28) [1] r=0 lpr=28 pi=[15,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:04 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 29 pg[2.f( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=15/15 les/c/f=16/16/0 sis=28) [1] r=0 lpr=28 pi=[15,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:04 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 29 pg[2.10( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=15/15 les/c/f=16/16/0 sis=28) [1] r=0 lpr=28 pi=[15,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:04 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 29 pg[2.13( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=15/15 les/c/f=16/16/0 sis=28) [1] r=0 lpr=28 pi=[15,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:04 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 29 pg[2.12( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=15/15 les/c/f=16/16/0 sis=28) [1] r=0 lpr=28 pi=[15,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:04 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 29 pg[2.16( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=15/15 les/c/f=16/16/0 sis=28) [1] r=0 lpr=28 pi=[15,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:04 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 29 pg[2.15( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=15/15 les/c/f=16/16/0 sis=28) [1] r=0 lpr=28 pi=[15,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:04 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 29 pg[2.14( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=15/15 les/c/f=16/16/0 sis=28) [1] r=0 lpr=28 pi=[15,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:04 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 29 pg[2.17( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=15/15 les/c/f=16/16/0 sis=28) [1] r=0 lpr=28 pi=[15,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:04 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 29 pg[2.1c( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=15/15 les/c/f=16/16/0 sis=28) [1] r=0 lpr=28 pi=[15,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:04 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 29 pg[2.19( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=15/15 les/c/f=16/16/0 sis=28) [1] r=0 lpr=28 pi=[15,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:04 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 29 pg[2.1a( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=15/15 les/c/f=16/16/0 sis=28) [1] r=0 lpr=28 pi=[15,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:04 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 29 pg[2.11( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=15/15 les/c/f=16/16/0 sis=28) [1] r=0 lpr=28 pi=[15,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:04 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 29 pg[2.18( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=15/15 les/c/f=16/16/0 sis=28) [1] r=0 lpr=28 pi=[15,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:04 compute-1 ceph-mon[81775]: pgmap v85: 7 pgs: 7 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 20 13:58:04 compute-1 ceph-mon[81775]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 20 13:58:04 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Jan 20 13:58:04 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/1076842494' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Jan 20 13:58:04 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Jan 20 13:58:04 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Jan 20 13:58:04 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Jan 20 13:58:04 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Jan 20 13:58:04 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Jan 20 13:58:04 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Jan 20 13:58:04 compute-1 ceph-mon[81775]: osdmap e28: 2 total, 2 up, 2 in
Jan 20 13:58:04 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]: dispatch
Jan 20 13:58:04 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:04 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.oweoeg", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Jan 20 13:58:04 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.oweoeg", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Jan 20 13:58:04 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 20 13:58:04 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 13:58:04 compute-1 ceph-mon[81775]: Deploying daemon mgr.compute-1.oweoeg on compute-1
Jan 20 13:58:04 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 20 13:58:04 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Jan 20 13:58:04 compute-1 ceph-mon[81775]: osdmap e29: 2 total, 2 up, 2 in
Jan 20 13:58:04 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"}]: dispatch
Jan 20 13:58:04 compute-1 systemd[1]: Reloading.
Jan 20 13:58:04 compute-1 systemd-sysv-generator[82021]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 13:58:04 compute-1 systemd-rc-local-generator[82017]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 13:58:05 compute-1 systemd[1]: Reloading.
Jan 20 13:58:05 compute-1 systemd-rc-local-generator[82055]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 13:58:05 compute-1 systemd-sysv-generator[82058]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 13:58:05 compute-1 systemd[1]: Starting Ceph mgr.compute-1.oweoeg for e399cf45-e6b6-5393-99f1-75c601d3f188...
Jan 20 13:58:05 compute-1 podman[82116]: 2026-01-20 13:58:05.685942607 +0000 UTC m=+0.063353730 container create 04b4edd0953680a3226067bf6924e194df45e64acbb7953c52825a4a8b3eb2ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mgr-compute-1-oweoeg, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Jan 20 13:58:05 compute-1 podman[82116]: 2026-01-20 13:58:05.654459613 +0000 UTC m=+0.031870826 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 13:58:05 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0da49ae4e8627d0dacb828dd5a41fd0ab0e7ac1613d0580fb5e813bc96d1ddc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 13:58:05 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0da49ae4e8627d0dacb828dd5a41fd0ab0e7ac1613d0580fb5e813bc96d1ddc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 13:58:05 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0da49ae4e8627d0dacb828dd5a41fd0ab0e7ac1613d0580fb5e813bc96d1ddc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 13:58:05 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0da49ae4e8627d0dacb828dd5a41fd0ab0e7ac1613d0580fb5e813bc96d1ddc/merged/var/lib/ceph/mgr/ceph-compute-1.oweoeg supports timestamps until 2038 (0x7fffffff)
Jan 20 13:58:05 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e30 e30: 2 total, 2 up, 2 in
Jan 20 13:58:05 compute-1 podman[82116]: 2026-01-20 13:58:05.775065195 +0000 UTC m=+0.152476408 container init 04b4edd0953680a3226067bf6924e194df45e64acbb7953c52825a4a8b3eb2ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mgr-compute-1-oweoeg, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 13:58:05 compute-1 podman[82116]: 2026-01-20 13:58:05.783166951 +0000 UTC m=+0.160578104 container start 04b4edd0953680a3226067bf6924e194df45e64acbb7953c52825a4a8b3eb2ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mgr-compute-1-oweoeg, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 13:58:05 compute-1 bash[82116]: 04b4edd0953680a3226067bf6924e194df45e64acbb7953c52825a4a8b3eb2ba
Jan 20 13:58:05 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/1913464166' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]: dispatch
Jan 20 13:58:05 compute-1 ceph-mon[81775]: 3.1 scrub starts
Jan 20 13:58:05 compute-1 ceph-mon[81775]: 3.1 scrub ok
Jan 20 13:58:05 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 20 13:58:05 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 20 13:58:05 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"}]': finished
Jan 20 13:58:05 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/1913464166' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Jan 20 13:58:05 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Jan 20 13:58:05 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Jan 20 13:58:05 compute-1 ceph-mon[81775]: osdmap e30: 2 total, 2 up, 2 in
Jan 20 13:58:05 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]: dispatch
Jan 20 13:58:05 compute-1 systemd[1]: Started Ceph mgr.compute-1.oweoeg for e399cf45-e6b6-5393-99f1-75c601d3f188.
Jan 20 13:58:05 compute-1 ceph-mgr[82135]: set uid:gid to 167:167 (ceph:ceph)
Jan 20 13:58:05 compute-1 ceph-mgr[82135]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mgr, pid 2
Jan 20 13:58:05 compute-1 ceph-mgr[82135]: pidfile_write: ignore empty --pid-file
Jan 20 13:58:05 compute-1 sudo[81889]: pam_unix(sudo:session): session closed for user root
Jan 20 13:58:05 compute-1 ceph-mgr[82135]: mgr[py] Loading python module 'alerts'
Jan 20 13:58:06 compute-1 ceph-mgr[82135]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Jan 20 13:58:06 compute-1 ceph-mgr[82135]: mgr[py] Loading python module 'balancer'
Jan 20 13:58:06 compute-1 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mgr-compute-1-oweoeg[82131]: 2026-01-20T13:58:06.326+0000 7fae9b308140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Jan 20 13:58:06 compute-1 ceph-mgr[82135]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Jan 20 13:58:06 compute-1 ceph-mgr[82135]: mgr[py] Loading python module 'cephadm'
Jan 20 13:58:06 compute-1 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mgr-compute-1-oweoeg[82131]: 2026-01-20T13:58:06.562+0000 7fae9b308140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Jan 20 13:58:06 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e31 e31: 2 total, 2 up, 2 in
Jan 20 13:58:07 compute-1 ceph-mon[81775]: pgmap v88: 69 pgs: 62 unknown, 7 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 20 13:58:07 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:07 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:07 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:07 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:07 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Jan 20 13:58:07 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Jan 20 13:58:07 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 13:58:07 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Jan 20 13:58:07 compute-1 ceph-mon[81775]: osdmap e31: 2 total, 2 up, 2 in
Jan 20 13:58:07 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e32 e32: 2 total, 2 up, 2 in
Jan 20 13:58:08 compute-1 ceph-mon[81775]: Deploying daemon crash.compute-2 on compute-2
Jan 20 13:58:08 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/4079761379' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]: dispatch
Jan 20 13:58:08 compute-1 ceph-mon[81775]: pgmap v91: 131 pgs: 93 unknown, 38 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 20 13:58:08 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 20 13:58:08 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 20 13:58:08 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/4079761379' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Jan 20 13:58:08 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Jan 20 13:58:08 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"}]': finished
Jan 20 13:58:08 compute-1 ceph-mon[81775]: osdmap e32: 2 total, 2 up, 2 in
Jan 20 13:58:08 compute-1 ceph-mon[81775]: 3.2 scrub starts
Jan 20 13:58:08 compute-1 ceph-mon[81775]: 3.2 scrub ok
Jan 20 13:58:08 compute-1 ceph-mgr[82135]: mgr[py] Loading python module 'crash'
Jan 20 13:58:08 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e33 e33: 2 total, 2 up, 2 in
Jan 20 13:58:09 compute-1 ceph-mgr[82135]: mgr[py] Module crash has missing NOTIFY_TYPES member
Jan 20 13:58:09 compute-1 ceph-mgr[82135]: mgr[py] Loading python module 'dashboard'
Jan 20 13:58:09 compute-1 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mgr-compute-1-oweoeg[82131]: 2026-01-20T13:58:09.033+0000 7fae9b308140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Jan 20 13:58:09 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:09 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:09 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:09 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:09 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 13:58:09 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 13:58:09 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 13:58:09 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 13:58:09 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 13:58:09 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:09 compute-1 ceph-mon[81775]: osdmap e33: 2 total, 2 up, 2 in
Jan 20 13:58:09 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e33 _set_new_cache_sizes cache_size:1020053189 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 13:58:09 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 32 pg[7.0( empty local-lis/les=24/25 n=0 ec=24/24 lis/c=24/24 les/c/f=25/25/0 sis=32 pruub=13.407303810s) [1] r=0 lpr=32 pi=[24,32)/1 crt=0'0 mlcod 0'0 active pruub 60.937648773s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:58:09 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 33 pg[7.0( empty local-lis/les=24/25 n=0 ec=24/24 lis/c=24/24 les/c/f=25/25/0 sis=32 pruub=13.407303810s) [1] r=0 lpr=32 pi=[24,32)/1 crt=0'0 mlcod 0'0 unknown pruub 60.937648773s@ mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:09 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 33 pg[7.c( empty local-lis/les=24/25 n=0 ec=32/24 lis/c=24/24 les/c/f=25/25/0 sis=32) [1] r=0 lpr=32 pi=[24,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:09 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 33 pg[7.d( empty local-lis/les=24/25 n=0 ec=32/24 lis/c=24/24 les/c/f=25/25/0 sis=32) [1] r=0 lpr=32 pi=[24,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:09 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 33 pg[7.e( empty local-lis/les=24/25 n=0 ec=32/24 lis/c=24/24 les/c/f=25/25/0 sis=32) [1] r=0 lpr=32 pi=[24,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:09 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 33 pg[7.f( empty local-lis/les=24/25 n=0 ec=32/24 lis/c=24/24 les/c/f=25/25/0 sis=32) [1] r=0 lpr=32 pi=[24,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:09 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 33 pg[7.12( empty local-lis/les=24/25 n=0 ec=32/24 lis/c=24/24 les/c/f=25/25/0 sis=32) [1] r=0 lpr=32 pi=[24,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:09 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 33 pg[7.13( empty local-lis/les=24/25 n=0 ec=32/24 lis/c=24/24 les/c/f=25/25/0 sis=32) [1] r=0 lpr=32 pi=[24,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:09 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 33 pg[7.10( empty local-lis/les=24/25 n=0 ec=32/24 lis/c=24/24 les/c/f=25/25/0 sis=32) [1] r=0 lpr=32 pi=[24,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:09 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 33 pg[7.11( empty local-lis/les=24/25 n=0 ec=32/24 lis/c=24/24 les/c/f=25/25/0 sis=32) [1] r=0 lpr=32 pi=[24,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:09 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 33 pg[7.16( empty local-lis/les=24/25 n=0 ec=32/24 lis/c=24/24 les/c/f=25/25/0 sis=32) [1] r=0 lpr=32 pi=[24,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:09 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 33 pg[7.17( empty local-lis/les=24/25 n=0 ec=32/24 lis/c=24/24 les/c/f=25/25/0 sis=32) [1] r=0 lpr=32 pi=[24,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:09 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 33 pg[7.14( empty local-lis/les=24/25 n=0 ec=32/24 lis/c=24/24 les/c/f=25/25/0 sis=32) [1] r=0 lpr=32 pi=[24,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:09 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 33 pg[7.15( empty local-lis/les=24/25 n=0 ec=32/24 lis/c=24/24 les/c/f=25/25/0 sis=32) [1] r=0 lpr=32 pi=[24,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:09 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 33 pg[7.1a( empty local-lis/les=24/25 n=0 ec=32/24 lis/c=24/24 les/c/f=25/25/0 sis=32) [1] r=0 lpr=32 pi=[24,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:09 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 33 pg[7.18( empty local-lis/les=24/25 n=0 ec=32/24 lis/c=24/24 les/c/f=25/25/0 sis=32) [1] r=0 lpr=32 pi=[24,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:09 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 33 pg[7.1b( empty local-lis/les=24/25 n=0 ec=32/24 lis/c=24/24 les/c/f=25/25/0 sis=32) [1] r=0 lpr=32 pi=[24,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:09 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 33 pg[7.19( empty local-lis/les=24/25 n=0 ec=32/24 lis/c=24/24 les/c/f=25/25/0 sis=32) [1] r=0 lpr=32 pi=[24,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:09 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 33 pg[7.1c( empty local-lis/les=24/25 n=0 ec=32/24 lis/c=24/24 les/c/f=25/25/0 sis=32) [1] r=0 lpr=32 pi=[24,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:09 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 33 pg[7.1d( empty local-lis/les=24/25 n=0 ec=32/24 lis/c=24/24 les/c/f=25/25/0 sis=32) [1] r=0 lpr=32 pi=[24,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:09 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 33 pg[7.1e( empty local-lis/les=24/25 n=0 ec=32/24 lis/c=24/24 les/c/f=25/25/0 sis=32) [1] r=0 lpr=32 pi=[24,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:09 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 33 pg[7.1f( empty local-lis/les=24/25 n=0 ec=32/24 lis/c=24/24 les/c/f=25/25/0 sis=32) [1] r=0 lpr=32 pi=[24,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:09 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 33 pg[7.2( empty local-lis/les=24/25 n=0 ec=32/24 lis/c=24/24 les/c/f=25/25/0 sis=32) [1] r=0 lpr=32 pi=[24,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:09 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 33 pg[7.3( empty local-lis/les=24/25 n=0 ec=32/24 lis/c=24/24 les/c/f=25/25/0 sis=32) [1] r=0 lpr=32 pi=[24,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:09 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 33 pg[7.6( empty local-lis/les=24/25 n=0 ec=32/24 lis/c=24/24 les/c/f=25/25/0 sis=32) [1] r=0 lpr=32 pi=[24,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:09 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 33 pg[7.7( empty local-lis/les=24/25 n=0 ec=32/24 lis/c=24/24 les/c/f=25/25/0 sis=32) [1] r=0 lpr=32 pi=[24,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:09 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 33 pg[7.4( empty local-lis/les=24/25 n=0 ec=32/24 lis/c=24/24 les/c/f=25/25/0 sis=32) [1] r=0 lpr=32 pi=[24,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:09 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 33 pg[7.5( empty local-lis/les=24/25 n=0 ec=32/24 lis/c=24/24 les/c/f=25/25/0 sis=32) [1] r=0 lpr=32 pi=[24,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:09 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 33 pg[7.8( empty local-lis/les=24/25 n=0 ec=32/24 lis/c=24/24 les/c/f=25/25/0 sis=32) [1] r=0 lpr=32 pi=[24,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:09 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 33 pg[7.a( empty local-lis/les=24/25 n=0 ec=32/24 lis/c=24/24 les/c/f=25/25/0 sis=32) [1] r=0 lpr=32 pi=[24,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:09 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 33 pg[7.9( empty local-lis/les=24/25 n=0 ec=32/24 lis/c=24/24 les/c/f=25/25/0 sis=32) [1] r=0 lpr=32 pi=[24,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:09 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 33 pg[7.b( empty local-lis/les=24/25 n=0 ec=32/24 lis/c=24/24 les/c/f=25/25/0 sis=32) [1] r=0 lpr=32 pi=[24,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:09 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 33 pg[7.1( empty local-lis/les=24/25 n=0 ec=32/24 lis/c=24/24 les/c/f=25/25/0 sis=32) [1] r=0 lpr=32 pi=[24,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:10 compute-1 ceph-mon[81775]: pgmap v94: 193 pgs: 1 peering, 62 unknown, 130 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 20 13:58:10 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/3413961177' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]: dispatch
Jan 20 13:58:10 compute-1 ceph-mon[81775]: Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 20 13:58:10 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e34 e34: 2 total, 2 up, 2 in
Jan 20 13:58:10 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 34 pg[7.1c( empty local-lis/les=32/34 n=0 ec=32/24 lis/c=24/24 les/c/f=25/25/0 sis=32) [1] r=0 lpr=32 pi=[24,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:10 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 34 pg[7.1f( empty local-lis/les=32/34 n=0 ec=32/24 lis/c=24/24 les/c/f=25/25/0 sis=32) [1] r=0 lpr=32 pi=[24,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:10 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 34 pg[7.1d( empty local-lis/les=32/34 n=0 ec=32/24 lis/c=24/24 les/c/f=25/25/0 sis=32) [1] r=0 lpr=32 pi=[24,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:10 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 34 pg[7.12( empty local-lis/les=32/34 n=0 ec=32/24 lis/c=24/24 les/c/f=25/25/0 sis=32) [1] r=0 lpr=32 pi=[24,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:10 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 34 pg[7.11( empty local-lis/les=32/34 n=0 ec=32/24 lis/c=24/24 les/c/f=25/25/0 sis=32) [1] r=0 lpr=32 pi=[24,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:10 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 34 pg[7.10( empty local-lis/les=32/34 n=0 ec=32/24 lis/c=24/24 les/c/f=25/25/0 sis=32) [1] r=0 lpr=32 pi=[24,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:10 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 34 pg[7.16( empty local-lis/les=32/34 n=0 ec=32/24 lis/c=24/24 les/c/f=25/25/0 sis=32) [1] r=0 lpr=32 pi=[24,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:10 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 34 pg[7.17( empty local-lis/les=32/34 n=0 ec=32/24 lis/c=24/24 les/c/f=25/25/0 sis=32) [1] r=0 lpr=32 pi=[24,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:10 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 34 pg[7.14( empty local-lis/les=32/34 n=0 ec=32/24 lis/c=24/24 les/c/f=25/25/0 sis=32) [1] r=0 lpr=32 pi=[24,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:10 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 34 pg[7.b( empty local-lis/les=32/34 n=0 ec=32/24 lis/c=24/24 les/c/f=25/25/0 sis=32) [1] r=0 lpr=32 pi=[24,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:10 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 34 pg[7.8( empty local-lis/les=32/34 n=0 ec=32/24 lis/c=24/24 les/c/f=25/25/0 sis=32) [1] r=0 lpr=32 pi=[24,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:10 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 34 pg[7.15( empty local-lis/les=32/34 n=0 ec=32/24 lis/c=24/24 les/c/f=25/25/0 sis=32) [1] r=0 lpr=32 pi=[24,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:10 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 34 pg[7.9( empty local-lis/les=32/34 n=0 ec=32/24 lis/c=24/24 les/c/f=25/25/0 sis=32) [1] r=0 lpr=32 pi=[24,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:10 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 34 pg[7.e( empty local-lis/les=32/34 n=0 ec=32/24 lis/c=24/24 les/c/f=25/25/0 sis=32) [1] r=0 lpr=32 pi=[24,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:10 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 34 pg[7.6( empty local-lis/les=32/34 n=0 ec=32/24 lis/c=24/24 les/c/f=25/25/0 sis=32) [1] r=0 lpr=32 pi=[24,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:10 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 34 pg[7.a( empty local-lis/les=32/34 n=0 ec=32/24 lis/c=24/24 les/c/f=25/25/0 sis=32) [1] r=0 lpr=32 pi=[24,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:10 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 34 pg[7.5( empty local-lis/les=32/34 n=0 ec=32/24 lis/c=24/24 les/c/f=25/25/0 sis=32) [1] r=0 lpr=32 pi=[24,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:10 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 34 pg[7.4( empty local-lis/les=32/34 n=0 ec=32/24 lis/c=24/24 les/c/f=25/25/0 sis=32) [1] r=0 lpr=32 pi=[24,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:10 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 34 pg[7.7( empty local-lis/les=32/34 n=0 ec=32/24 lis/c=24/24 les/c/f=25/25/0 sis=32) [1] r=0 lpr=32 pi=[24,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:10 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 34 pg[7.d( empty local-lis/les=32/34 n=0 ec=32/24 lis/c=24/24 les/c/f=25/25/0 sis=32) [1] r=0 lpr=32 pi=[24,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:10 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 34 pg[7.1( empty local-lis/les=32/34 n=0 ec=32/24 lis/c=24/24 les/c/f=25/25/0 sis=32) [1] r=0 lpr=32 pi=[24,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:10 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 34 pg[7.0( empty local-lis/les=32/34 n=0 ec=24/24 lis/c=24/24 les/c/f=25/25/0 sis=32) [1] r=0 lpr=32 pi=[24,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:10 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 34 pg[7.2( empty local-lis/les=32/34 n=0 ec=32/24 lis/c=24/24 les/c/f=25/25/0 sis=32) [1] r=0 lpr=32 pi=[24,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:10 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 34 pg[7.f( empty local-lis/les=32/34 n=0 ec=32/24 lis/c=24/24 les/c/f=25/25/0 sis=32) [1] r=0 lpr=32 pi=[24,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:10 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 34 pg[7.13( empty local-lis/les=32/34 n=0 ec=32/24 lis/c=24/24 les/c/f=25/25/0 sis=32) [1] r=0 lpr=32 pi=[24,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:10 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 34 pg[7.1e( empty local-lis/les=32/34 n=0 ec=32/24 lis/c=24/24 les/c/f=25/25/0 sis=32) [1] r=0 lpr=32 pi=[24,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:10 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 34 pg[7.c( empty local-lis/les=32/34 n=0 ec=32/24 lis/c=24/24 les/c/f=25/25/0 sis=32) [1] r=0 lpr=32 pi=[24,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:10 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 34 pg[7.19( empty local-lis/les=32/34 n=0 ec=32/24 lis/c=24/24 les/c/f=25/25/0 sis=32) [1] r=0 lpr=32 pi=[24,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:10 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 34 pg[7.1b( empty local-lis/les=32/34 n=0 ec=32/24 lis/c=24/24 les/c/f=25/25/0 sis=32) [1] r=0 lpr=32 pi=[24,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:10 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 34 pg[7.1a( empty local-lis/les=32/34 n=0 ec=32/24 lis/c=24/24 les/c/f=25/25/0 sis=32) [1] r=0 lpr=32 pi=[24,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:10 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 34 pg[7.3( empty local-lis/les=32/34 n=0 ec=32/24 lis/c=24/24 les/c/f=25/25/0 sis=32) [1] r=0 lpr=32 pi=[24,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:10 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 34 pg[7.18( empty local-lis/les=32/34 n=0 ec=32/24 lis/c=24/24 les/c/f=25/25/0 sis=32) [1] r=0 lpr=32 pi=[24,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:10 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e35 e35: 3 total, 2 up, 3 in
Jan 20 13:58:10 compute-1 ceph-mgr[82135]: mgr[py] Loading python module 'devicehealth'
Jan 20 13:58:10 compute-1 ceph-mgr[82135]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Jan 20 13:58:10 compute-1 ceph-mgr[82135]: mgr[py] Loading python module 'diskprediction_local'
Jan 20 13:58:10 compute-1 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mgr-compute-1-oweoeg[82131]: 2026-01-20T13:58:10.699+0000 7fae9b308140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Jan 20 13:58:11 compute-1 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mgr-compute-1-oweoeg[82131]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Jan 20 13:58:11 compute-1 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mgr-compute-1-oweoeg[82131]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Jan 20 13:58:11 compute-1 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mgr-compute-1-oweoeg[82131]:   from numpy import show_config as show_numpy_config
Jan 20 13:58:11 compute-1 ceph-mgr[82135]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Jan 20 13:58:11 compute-1 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mgr-compute-1-oweoeg[82131]: 2026-01-20T13:58:11.241+0000 7fae9b308140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Jan 20 13:58:11 compute-1 ceph-mgr[82135]: mgr[py] Loading python module 'influx'
Jan 20 13:58:11 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/3413961177' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Jan 20 13:58:11 compute-1 ceph-mon[81775]: osdmap e34: 2 total, 2 up, 2 in
Jan 20 13:58:11 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/3257799028' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "36aff1f5-bc44-4633-b417-95c5b1ee6391"}]: dispatch
Jan 20 13:58:11 compute-1 ceph-mon[81775]: from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "36aff1f5-bc44-4633-b417-95c5b1ee6391"}]: dispatch
Jan 20 13:58:11 compute-1 ceph-mon[81775]: from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "36aff1f5-bc44-4633-b417-95c5b1ee6391"}]': finished
Jan 20 13:58:11 compute-1 ceph-mon[81775]: osdmap e35: 3 total, 2 up, 3 in
Jan 20 13:58:11 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 20 13:58:11 compute-1 ceph-mon[81775]: 3.3 scrub starts
Jan 20 13:58:11 compute-1 ceph-mon[81775]: 3.3 scrub ok
Jan 20 13:58:11 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/672023475' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Jan 20 13:58:11 compute-1 ceph-mgr[82135]: mgr[py] Module influx has missing NOTIFY_TYPES member
Jan 20 13:58:11 compute-1 ceph-mgr[82135]: mgr[py] Loading python module 'insights'
Jan 20 13:58:11 compute-1 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mgr-compute-1-oweoeg[82131]: 2026-01-20T13:58:11.490+0000 7fae9b308140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Jan 20 13:58:11 compute-1 ceph-mgr[82135]: mgr[py] Loading python module 'iostat'
Jan 20 13:58:11 compute-1 ceph-mgr[82135]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Jan 20 13:58:11 compute-1 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mgr-compute-1-oweoeg[82131]: 2026-01-20T13:58:11.972+0000 7fae9b308140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Jan 20 13:58:11 compute-1 ceph-mgr[82135]: mgr[py] Loading python module 'k8sevents'
Jan 20 13:58:12 compute-1 ceph-mon[81775]: pgmap v97: 193 pgs: 1 peering, 31 unknown, 161 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 20 13:58:12 compute-1 ceph-mon[81775]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Jan 20 13:58:12 compute-1 ceph-mon[81775]: Cluster is now healthy
Jan 20 13:58:13 compute-1 ceph-osd[79119]: log_channel(cluster) log [DBG] : 2.1 deep-scrub starts
Jan 20 13:58:13 compute-1 ceph-osd[79119]: log_channel(cluster) log [DBG] : 2.1 deep-scrub ok
Jan 20 13:58:13 compute-1 ceph-mgr[82135]: mgr[py] Loading python module 'localpool'
Jan 20 13:58:13 compute-1 ceph-mgr[82135]: mgr[py] Loading python module 'mds_autoscaler'
Jan 20 13:58:14 compute-1 ceph-mon[81775]: pgmap v98: 193 pgs: 1 peering, 31 unknown, 161 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 20 13:58:14 compute-1 ceph-mon[81775]: 3.4 scrub starts
Jan 20 13:58:14 compute-1 ceph-mon[81775]: 3.4 scrub ok
Jan 20 13:58:14 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e35 _set_new_cache_sizes cache_size:1020054712 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 13:58:14 compute-1 ceph-mgr[82135]: mgr[py] Loading python module 'mirroring'
Jan 20 13:58:14 compute-1 ceph-mgr[82135]: mgr[py] Loading python module 'nfs'
Jan 20 13:58:15 compute-1 ceph-mon[81775]: 2.1 deep-scrub starts
Jan 20 13:58:15 compute-1 ceph-mon[81775]: 2.1 deep-scrub ok
Jan 20 13:58:15 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/1467956015' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Jan 20 13:58:15 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/1467956015' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Jan 20 13:58:15 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:15 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:15 compute-1 ceph-mon[81775]: 3.5 deep-scrub starts
Jan 20 13:58:15 compute-1 ceph-mon[81775]: 3.5 deep-scrub ok
Jan 20 13:58:15 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 20 13:58:15 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 20 13:58:15 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 20 13:58:15 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 20 13:58:15 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 20 13:58:15 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 20 13:58:15 compute-1 ceph-osd[79119]: log_channel(cluster) log [DBG] : 2.2 scrub starts
Jan 20 13:58:15 compute-1 ceph-osd[79119]: log_channel(cluster) log [DBG] : 2.2 scrub ok
Jan 20 13:58:15 compute-1 ceph-mgr[82135]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Jan 20 13:58:15 compute-1 ceph-mgr[82135]: mgr[py] Loading python module 'orchestrator'
Jan 20 13:58:15 compute-1 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mgr-compute-1-oweoeg[82131]: 2026-01-20T13:58:15.635+0000 7fae9b308140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Jan 20 13:58:15 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e36 e36: 3 total, 2 up, 3 in
Jan 20 13:58:15 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 36 pg[7.1d( empty local-lis/les=32/34 n=0 ec=32/24 lis/c=32/32 les/c/f=34/34/0 sis=36 pruub=10.625294685s) [0] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active pruub 64.022346497s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:58:15 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 36 pg[7.1d( empty local-lis/les=32/34 n=0 ec=32/24 lis/c=32/32 les/c/f=34/34/0 sis=36 pruub=10.625168800s) [0] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 64.022346497s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:58:15 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 36 pg[7.13( empty local-lis/les=32/34 n=0 ec=32/24 lis/c=32/32 les/c/f=34/34/0 sis=36 pruub=10.626079559s) [0] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active pruub 64.023300171s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:58:15 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 36 pg[7.10( empty local-lis/les=32/34 n=0 ec=32/24 lis/c=32/32 les/c/f=34/34/0 sis=36 pruub=10.625190735s) [0] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active pruub 64.022499084s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:58:15 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 36 pg[7.13( empty local-lis/les=32/34 n=0 ec=32/24 lis/c=32/32 les/c/f=34/34/0 sis=36 pruub=10.625969887s) [0] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 64.023300171s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:58:15 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 36 pg[7.10( empty local-lis/les=32/34 n=0 ec=32/24 lis/c=32/32 les/c/f=34/34/0 sis=36 pruub=10.625130653s) [0] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 64.022499084s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:58:15 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 36 pg[7.14( empty local-lis/les=32/34 n=0 ec=32/24 lis/c=32/32 les/c/f=34/34/0 sis=36 pruub=10.624791145s) [0] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active pruub 64.022750854s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:58:15 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 36 pg[7.14( empty local-lis/les=32/34 n=0 ec=32/24 lis/c=32/32 les/c/f=34/34/0 sis=36 pruub=10.624705315s) [0] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 64.022750854s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:58:15 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 36 pg[2.13( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=36 pruub=13.096986771s) [0] r=-1 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 active pruub 66.494857788s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:58:15 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 36 pg[2.10( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=36 pruub=13.096913338s) [0] r=-1 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 active pruub 66.495040894s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:58:15 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 36 pg[2.10( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=36 pruub=13.096845627s) [0] r=-1 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 66.495040894s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:58:15 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 36 pg[2.13( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=36 pruub=13.096667290s) [0] r=-1 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 66.494857788s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:58:15 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 36 pg[7.a( empty local-lis/les=32/34 n=0 ec=32/24 lis/c=32/32 les/c/f=34/34/0 sis=36 pruub=10.624776840s) [0] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active pruub 64.023109436s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:58:15 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 36 pg[7.a( empty local-lis/les=32/34 n=0 ec=32/24 lis/c=32/32 les/c/f=34/34/0 sis=36 pruub=10.624733925s) [0] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 64.023109436s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:58:15 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 36 pg[2.e( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=36 pruub=13.095741272s) [0] r=-1 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 active pruub 66.494155884s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:58:15 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 36 pg[2.e( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=36 pruub=13.095705986s) [0] r=-1 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 66.494155884s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:58:15 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 36 pg[7.b( empty local-lis/les=32/34 n=0 ec=32/24 lis/c=32/32 les/c/f=34/34/0 sis=36 pruub=10.624183655s) [0] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active pruub 64.022796631s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:58:15 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 36 pg[7.b( empty local-lis/les=32/34 n=0 ec=32/24 lis/c=32/32 les/c/f=34/34/0 sis=36 pruub=10.624153137s) [0] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 64.022796631s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:58:15 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 36 pg[2.c( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=36 pruub=13.095242500s) [0] r=-1 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 active pruub 66.494079590s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:58:15 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 36 pg[2.c( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=36 pruub=13.095215797s) [0] r=-1 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 66.494079590s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:58:15 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 36 pg[7.8( empty local-lis/les=32/34 n=0 ec=32/24 lis/c=32/32 les/c/f=34/34/0 sis=36 pruub=10.624043465s) [0] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active pruub 64.023010254s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:58:15 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 36 pg[2.d( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=36 pruub=13.095067978s) [0] r=-1 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 active pruub 66.494064331s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:58:15 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 36 pg[7.8( empty local-lis/les=32/34 n=0 ec=32/24 lis/c=32/32 les/c/f=34/34/0 sis=36 pruub=10.623994827s) [0] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 64.023010254s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:58:15 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 36 pg[7.9( empty local-lis/les=32/34 n=0 ec=32/24 lis/c=32/32 les/c/f=34/34/0 sis=36 pruub=10.623971939s) [0] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active pruub 64.023063660s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:58:15 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 36 pg[2.d( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=36 pruub=13.094990730s) [0] r=-1 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 66.494064331s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:58:15 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 36 pg[7.9( empty local-lis/les=32/34 n=0 ec=32/24 lis/c=32/32 les/c/f=34/34/0 sis=36 pruub=10.623941422s) [0] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 64.023063660s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:58:15 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 36 pg[7.e( empty local-lis/les=32/34 n=0 ec=32/24 lis/c=32/32 les/c/f=34/34/0 sis=36 pruub=10.623802185s) [0] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active pruub 64.023086548s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:58:15 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 36 pg[7.e( empty local-lis/les=32/34 n=0 ec=32/24 lis/c=32/32 les/c/f=34/34/0 sis=36 pruub=10.623771667s) [0] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 64.023086548s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:58:15 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 36 pg[7.6( empty local-lis/les=32/34 n=0 ec=32/24 lis/c=32/32 les/c/f=34/34/0 sis=36 pruub=10.623663902s) [0] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active pruub 64.023101807s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:58:15 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 36 pg[7.6( empty local-lis/les=32/34 n=0 ec=32/24 lis/c=32/32 les/c/f=34/34/0 sis=36 pruub=10.623616219s) [0] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 64.023101807s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:58:15 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 36 pg[2.1( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=36 pruub=13.093919754s) [0] r=-1 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 active pruub 66.493621826s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:58:15 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 36 pg[7.4( empty local-lis/les=32/34 n=0 ec=32/24 lis/c=32/32 les/c/f=34/34/0 sis=36 pruub=10.623414993s) [0] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active pruub 64.023139954s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:58:15 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 36 pg[2.1( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=36 pruub=13.093876839s) [0] r=-1 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 66.493621826s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:58:15 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 36 pg[7.4( empty local-lis/les=32/34 n=0 ec=32/24 lis/c=32/32 les/c/f=34/34/0 sis=36 pruub=10.623368263s) [0] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 64.023139954s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:58:15 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 36 pg[2.4( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=36 pruub=13.093567848s) [0] r=-1 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 active pruub 66.493431091s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:58:15 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 36 pg[2.4( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=36 pruub=13.093485832s) [0] r=-1 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 66.493431091s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:58:15 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 36 pg[2.6( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=36 pruub=13.093406677s) [0] r=-1 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 active pruub 66.493484497s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:58:15 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 36 pg[7.3( empty local-lis/les=32/34 n=0 ec=32/24 lis/c=32/32 les/c/f=34/34/0 sis=36 pruub=10.623608589s) [0] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active pruub 64.023704529s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:58:15 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 36 pg[7.3( empty local-lis/les=32/34 n=0 ec=32/24 lis/c=32/32 les/c/f=34/34/0 sis=36 pruub=10.623566628s) [0] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 64.023704529s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:58:15 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 36 pg[2.6( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=36 pruub=13.093358994s) [0] r=-1 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 66.493484497s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:58:15 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 36 pg[7.2( empty local-lis/les=32/34 n=0 ec=32/24 lis/c=32/32 les/c/f=34/34/0 sis=36 pruub=10.622950554s) [0] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active pruub 64.023246765s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:58:15 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 36 pg[7.2( empty local-lis/les=32/34 n=0 ec=32/24 lis/c=32/32 les/c/f=34/34/0 sis=36 pruub=10.622920990s) [0] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 64.023246765s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:58:15 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 36 pg[2.9( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=36 pruub=13.092387199s) [0] r=-1 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 active pruub 66.492820740s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:58:15 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 36 pg[2.9( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=36 pruub=13.092342377s) [0] r=-1 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 66.492820740s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:58:15 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 36 pg[2.a( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=36 pruub=13.086156845s) [0] r=-1 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 active pruub 66.486694336s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:58:15 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 36 pg[7.f( empty local-lis/les=32/34 n=0 ec=32/24 lis/c=32/32 les/c/f=34/34/0 sis=36 pruub=10.622728348s) [0] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active pruub 64.023292542s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:58:15 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 36 pg[2.a( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=36 pruub=13.086091995s) [0] r=-1 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 66.486694336s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:58:15 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 36 pg[7.f( empty local-lis/les=32/34 n=0 ec=32/24 lis/c=32/32 les/c/f=34/34/0 sis=36 pruub=10.622667313s) [0] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 64.023292542s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:58:15 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 36 pg[7.1e( empty local-lis/les=32/34 n=0 ec=32/24 lis/c=32/32 les/c/f=34/34/0 sis=36 pruub=10.622604370s) [0] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active pruub 64.023323059s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:58:15 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 36 pg[7.1e( empty local-lis/les=32/34 n=0 ec=32/24 lis/c=32/32 les/c/f=34/34/0 sis=36 pruub=10.622563362s) [0] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 64.023323059s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:58:15 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 36 pg[7.18( empty local-lis/les=32/34 n=0 ec=32/24 lis/c=32/32 les/c/f=34/34/0 sis=36 pruub=10.622730255s) [0] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active pruub 64.023712158s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:58:15 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 36 pg[7.18( empty local-lis/les=32/34 n=0 ec=32/24 lis/c=32/32 les/c/f=34/34/0 sis=36 pruub=10.622694969s) [0] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 64.023712158s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:58:15 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 36 pg[7.1b( empty local-lis/les=32/34 n=0 ec=32/24 lis/c=32/32 les/c/f=34/34/0 sis=36 pruub=10.622620583s) [0] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active pruub 64.023681641s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:58:15 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 36 pg[7.1b( empty local-lis/les=32/34 n=0 ec=32/24 lis/c=32/32 les/c/f=34/34/0 sis=36 pruub=10.622578621s) [0] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 64.023681641s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:58:15 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 36 pg[2.1e( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=36 pruub=13.085520744s) [0] r=-1 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 active pruub 66.486717224s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:58:15 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 36 pg[2.1e( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=36 pruub=13.085493088s) [0] r=-1 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 66.486717224s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:58:15 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 36 pg[2.1f( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=36 pruub=13.085291862s) [0] r=-1 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 active pruub 66.486648560s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:58:15 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 36 pg[2.1f( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=36 pruub=13.085250854s) [0] r=-1 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 66.486648560s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:58:15 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 36 pg[2.1b( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=36 pruub=13.084542274s) [0] r=-1 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 active pruub 66.486602783s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:58:15 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 36 pg[2.1b( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=36 pruub=13.084483147s) [0] r=-1 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 66.486602783s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:58:15 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 36 pg[2.15( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=36 pruub=13.091486931s) [0] r=-1 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 active pruub 66.495178223s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:58:15 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 36 pg[2.15( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=36 pruub=13.091196060s) [0] r=-1 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 66.495178223s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:58:15 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 36 pg[2.19( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=36 pruub=13.091099739s) [0] r=-1 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 active pruub 66.495681763s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:58:15 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 36 pg[2.19( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=36 pruub=13.090907097s) [0] r=-1 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 66.495681763s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:58:15 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 36 pg[6.1a( empty local-lis/les=0/0 n=0 ec=32/22 lis/c=32/32 les/c/f=33/33/0 sis=36) [1] r=0 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:15 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 36 pg[4.1a( empty local-lis/les=0/0 n=0 ec=30/18 lis/c=30/30 les/c/f=31/31/0 sis=36) [1] r=0 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:15 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 36 pg[5.18( empty local-lis/les=0/0 n=0 ec=30/20 lis/c=30/30 les/c/f=31/31/0 sis=36) [1] r=0 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:15 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 36 pg[3.1c( empty local-lis/les=0/0 n=0 ec=28/17 lis/c=28/28 les/c/f=29/29/0 sis=36) [1] r=0 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:15 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 36 pg[5.1b( empty local-lis/les=0/0 n=0 ec=30/20 lis/c=30/30 les/c/f=31/31/0 sis=36) [1] r=0 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:15 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 36 pg[5.1a( empty local-lis/les=0/0 n=0 ec=30/20 lis/c=30/30 les/c/f=31/31/0 sis=36) [1] r=0 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:15 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 36 pg[4.1b( empty local-lis/les=0/0 n=0 ec=30/18 lis/c=30/30 les/c/f=31/31/0 sis=36) [1] r=0 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:15 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 36 pg[6.19( empty local-lis/les=0/0 n=0 ec=32/22 lis/c=32/32 les/c/f=33/33/0 sis=36) [1] r=0 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:15 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 36 pg[4.18( empty local-lis/les=0/0 n=0 ec=30/18 lis/c=30/30 les/c/f=31/31/0 sis=36) [1] r=0 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:15 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 36 pg[3.1a( empty local-lis/les=0/0 n=0 ec=28/17 lis/c=28/28 les/c/f=29/29/0 sis=36) [1] r=0 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:15 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 36 pg[6.1e( empty local-lis/les=0/0 n=0 ec=32/22 lis/c=32/32 les/c/f=33/33/0 sis=36) [1] r=0 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:15 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 36 pg[5.1c( empty local-lis/les=0/0 n=0 ec=30/20 lis/c=30/30 les/c/f=31/31/0 sis=36) [1] r=0 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:15 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 36 pg[4.e( empty local-lis/les=0/0 n=0 ec=30/18 lis/c=30/30 les/c/f=31/31/0 sis=36) [1] r=0 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:15 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 36 pg[3.9( empty local-lis/les=0/0 n=0 ec=28/17 lis/c=28/28 les/c/f=29/29/0 sis=36) [1] r=0 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:15 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 36 pg[5.f( empty local-lis/les=0/0 n=0 ec=30/20 lis/c=30/30 les/c/f=31/31/0 sis=36) [1] r=0 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:15 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 36 pg[6.d( empty local-lis/les=0/0 n=0 ec=32/22 lis/c=32/32 les/c/f=33/33/0 sis=36) [1] r=0 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:15 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 36 pg[5.2( empty local-lis/les=0/0 n=0 ec=30/20 lis/c=30/30 les/c/f=31/31/0 sis=36) [1] r=0 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:15 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 36 pg[3.3( empty local-lis/les=0/0 n=0 ec=28/17 lis/c=28/28 les/c/f=29/29/0 sis=36) [1] r=0 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:15 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 36 pg[4.5( empty local-lis/les=0/0 n=0 ec=30/18 lis/c=30/30 les/c/f=31/31/0 sis=36) [1] r=0 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:15 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 36 pg[5.4( empty local-lis/les=0/0 n=0 ec=30/20 lis/c=30/30 les/c/f=31/31/0 sis=36) [1] r=0 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:15 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 36 pg[5.7( empty local-lis/les=0/0 n=0 ec=30/20 lis/c=30/30 les/c/f=31/31/0 sis=36) [1] r=0 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:15 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 36 pg[6.7( empty local-lis/les=0/0 n=0 ec=32/22 lis/c=32/32 les/c/f=33/33/0 sis=36) [1] r=0 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:15 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 36 pg[5.e( empty local-lis/les=0/0 n=0 ec=30/20 lis/c=30/30 les/c/f=31/31/0 sis=36) [1] r=0 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:15 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 36 pg[3.5( empty local-lis/les=0/0 n=0 ec=28/17 lis/c=28/28 les/c/f=29/29/0 sis=36) [1] r=0 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:15 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 36 pg[6.3( empty local-lis/les=0/0 n=0 ec=32/22 lis/c=32/32 les/c/f=33/33/0 sis=36) [1] r=0 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:15 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 36 pg[4.1( empty local-lis/les=0/0 n=0 ec=30/18 lis/c=30/30 les/c/f=31/31/0 sis=36) [1] r=0 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:15 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 36 pg[6.2( empty local-lis/les=0/0 n=0 ec=32/22 lis/c=32/32 les/c/f=33/33/0 sis=36) [1] r=0 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:15 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 36 pg[5.1( empty local-lis/les=0/0 n=0 ec=30/20 lis/c=30/30 les/c/f=31/31/0 sis=36) [1] r=0 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:15 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 36 pg[4.d( empty local-lis/les=0/0 n=0 ec=30/18 lis/c=30/30 les/c/f=31/31/0 sis=36) [1] r=0 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:15 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 36 pg[3.1d( empty local-lis/les=0/0 n=0 ec=28/17 lis/c=28/28 les/c/f=29/29/0 sis=36) [1] r=0 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:15 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 36 pg[6.5( empty local-lis/les=0/0 n=0 ec=32/22 lis/c=32/32 les/c/f=33/33/0 sis=36) [1] r=0 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:15 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 36 pg[4.c( empty local-lis/les=0/0 n=0 ec=30/18 lis/c=30/30 les/c/f=31/31/0 sis=36) [1] r=0 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:15 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 36 pg[6.e( empty local-lis/les=0/0 n=0 ec=32/22 lis/c=32/32 les/c/f=33/33/0 sis=36) [1] r=0 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:15 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 36 pg[3.a( empty local-lis/les=0/0 n=0 ec=28/17 lis/c=28/28 les/c/f=29/29/0 sis=36) [1] r=0 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:15 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 36 pg[3.d( empty local-lis/les=0/0 n=0 ec=28/17 lis/c=28/28 les/c/f=29/29/0 sis=36) [1] r=0 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:15 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 36 pg[6.8( empty local-lis/les=0/0 n=0 ec=32/22 lis/c=32/32 les/c/f=33/33/0 sis=36) [1] r=0 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:15 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 36 pg[4.a( empty local-lis/les=0/0 n=0 ec=30/18 lis/c=30/30 les/c/f=31/31/0 sis=36) [1] r=0 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:15 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 36 pg[4.9( empty local-lis/les=0/0 n=0 ec=30/18 lis/c=30/30 les/c/f=31/31/0 sis=36) [1] r=0 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:15 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 36 pg[3.f( empty local-lis/les=0/0 n=0 ec=28/17 lis/c=28/28 les/c/f=29/29/0 sis=36) [1] r=0 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:15 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 36 pg[4.8( empty local-lis/les=0/0 n=0 ec=30/18 lis/c=30/30 les/c/f=31/31/0 sis=36) [1] r=0 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:15 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 36 pg[3.c( empty local-lis/les=0/0 n=0 ec=28/17 lis/c=28/28 les/c/f=29/29/0 sis=36) [1] r=0 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:15 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 36 pg[3.e( empty local-lis/les=0/0 n=0 ec=28/17 lis/c=28/28 les/c/f=29/29/0 sis=36) [1] r=0 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:15 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 36 pg[6.a( empty local-lis/les=0/0 n=0 ec=32/22 lis/c=32/32 les/c/f=33/33/0 sis=36) [1] r=0 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:15 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 36 pg[3.10( empty local-lis/les=0/0 n=0 ec=28/17 lis/c=28/28 les/c/f=29/29/0 sis=36) [1] r=0 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:15 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 36 pg[5.9( empty local-lis/les=0/0 n=0 ec=30/20 lis/c=30/30 les/c/f=31/31/0 sis=36) [1] r=0 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:15 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 36 pg[6.15( empty local-lis/les=0/0 n=0 ec=32/22 lis/c=32/32 les/c/f=33/33/0 sis=36) [1] r=0 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:15 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 36 pg[3.11( empty local-lis/les=0/0 n=0 ec=28/17 lis/c=28/28 les/c/f=29/29/0 sis=36) [1] r=0 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:15 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 36 pg[4.15( empty local-lis/les=0/0 n=0 ec=30/18 lis/c=30/30 les/c/f=31/31/0 sis=36) [1] r=0 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:15 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 36 pg[3.13( empty local-lis/les=0/0 n=0 ec=28/17 lis/c=28/28 les/c/f=29/29/0 sis=36) [1] r=0 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:15 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 36 pg[3.14( empty local-lis/les=0/0 n=0 ec=28/17 lis/c=28/28 les/c/f=29/29/0 sis=36) [1] r=0 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:15 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 36 pg[5.15( empty local-lis/les=0/0 n=0 ec=30/20 lis/c=30/30 les/c/f=31/31/0 sis=36) [1] r=0 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:15 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 36 pg[4.13( empty local-lis/les=0/0 n=0 ec=30/18 lis/c=30/30 les/c/f=31/31/0 sis=36) [1] r=0 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:15 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 36 pg[6.17( empty local-lis/les=0/0 n=0 ec=32/22 lis/c=32/32 les/c/f=33/33/0 sis=36) [1] r=0 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:15 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 36 pg[5.16( empty local-lis/les=0/0 n=0 ec=30/20 lis/c=30/30 les/c/f=31/31/0 sis=36) [1] r=0 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:15 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 36 pg[3.16( empty local-lis/les=0/0 n=0 ec=28/17 lis/c=28/28 les/c/f=29/29/0 sis=36) [1] r=0 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:15 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 36 pg[5.10( empty local-lis/les=0/0 n=0 ec=30/20 lis/c=30/30 les/c/f=31/31/0 sis=36) [1] r=0 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:15 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 36 pg[3.15( empty local-lis/les=0/0 n=0 ec=28/17 lis/c=28/28 les/c/f=29/29/0 sis=36) [1] r=0 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:15 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 36 pg[4.1f( empty local-lis/les=0/0 n=0 ec=30/18 lis/c=30/30 les/c/f=31/31/0 sis=36) [1] r=0 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:15 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 36 pg[5.11( empty local-lis/les=0/0 n=0 ec=30/20 lis/c=30/30 les/c/f=31/31/0 sis=36) [1] r=0 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:15 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 36 pg[6.1c( empty local-lis/les=0/0 n=0 ec=32/22 lis/c=32/32 les/c/f=33/33/0 sis=36) [1] r=0 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:15 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 36 pg[6.12( empty local-lis/les=0/0 n=0 ec=32/22 lis/c=32/32 les/c/f=33/33/0 sis=36) [1] r=0 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:15 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 36 pg[5.1f( empty local-lis/les=0/0 n=0 ec=30/20 lis/c=30/30 les/c/f=31/31/0 sis=36) [1] r=0 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:16 compute-1 ceph-mgr[82135]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Jan 20 13:58:16 compute-1 ceph-mgr[82135]: mgr[py] Loading python module 'osd_perf_query'
Jan 20 13:58:16 compute-1 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mgr-compute-1-oweoeg[82131]: 2026-01-20T13:58:16.454+0000 7fae9b308140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Jan 20 13:58:16 compute-1 ceph-mon[81775]: pgmap v99: 193 pgs: 193 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Jan 20 13:58:16 compute-1 ceph-mon[81775]: 2.2 scrub starts
Jan 20 13:58:16 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/3060958510' entity='client.admin' 
Jan 20 13:58:16 compute-1 ceph-mon[81775]: 2.2 scrub ok
Jan 20 13:58:16 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 20 13:58:16 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 20 13:58:16 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 20 13:58:16 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 20 13:58:16 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 20 13:58:16 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 20 13:58:16 compute-1 ceph-mon[81775]: osdmap e36: 3 total, 2 up, 3 in
Jan 20 13:58:16 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 20 13:58:16 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e37 e37: 3 total, 2 up, 3 in
Jan 20 13:58:16 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 37 pg[6.1e( empty local-lis/les=36/37 n=0 ec=32/22 lis/c=32/32 les/c/f=33/33/0 sis=36) [1] r=0 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:16 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 37 pg[4.1f( empty local-lis/les=36/37 n=0 ec=30/18 lis/c=30/30 les/c/f=31/31/0 sis=36) [1] r=0 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:16 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 37 pg[6.1c( empty local-lis/les=36/37 n=0 ec=32/22 lis/c=32/32 les/c/f=33/33/0 sis=36) [1] r=0 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:16 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 37 pg[5.10( empty local-lis/les=36/37 n=0 ec=30/20 lis/c=30/30 les/c/f=31/31/0 sis=36) [1] r=0 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:16 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 37 pg[3.16( empty local-lis/les=36/37 n=0 ec=28/17 lis/c=28/28 les/c/f=29/29/0 sis=36) [1] r=0 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:16 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 37 pg[5.11( empty local-lis/les=36/37 n=0 ec=30/20 lis/c=30/30 les/c/f=31/31/0 sis=36) [1] r=0 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:16 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 37 pg[3.14( empty local-lis/les=36/37 n=0 ec=28/17 lis/c=28/28 les/c/f=29/29/0 sis=36) [1] r=0 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:16 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 37 pg[4.13( empty local-lis/les=36/37 n=0 ec=30/18 lis/c=30/30 les/c/f=31/31/0 sis=36) [1] r=0 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:16 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 37 pg[3.15( empty local-lis/les=36/37 n=0 ec=28/17 lis/c=28/28 les/c/f=29/29/0 sis=36) [1] r=0 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:16 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 37 pg[6.12( empty local-lis/les=36/37 n=0 ec=32/22 lis/c=32/32 les/c/f=33/33/0 sis=36) [1] r=0 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:16 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 37 pg[5.15( empty local-lis/les=36/37 n=0 ec=30/20 lis/c=30/30 les/c/f=31/31/0 sis=36) [1] r=0 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:16 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 37 pg[4.15( empty local-lis/les=36/37 n=0 ec=30/18 lis/c=30/30 les/c/f=31/31/0 sis=36) [1] r=0 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:16 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 37 pg[3.13( empty local-lis/les=36/37 n=0 ec=28/17 lis/c=28/28 les/c/f=29/29/0 sis=36) [1] r=0 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:16 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 37 pg[5.16( empty local-lis/les=36/37 n=0 ec=30/20 lis/c=30/30 les/c/f=31/31/0 sis=36) [1] r=0 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:16 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 37 pg[3.10( empty local-lis/les=36/37 n=0 ec=28/17 lis/c=28/28 les/c/f=29/29/0 sis=36) [1] r=0 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:16 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 37 pg[3.11( empty local-lis/les=36/37 n=0 ec=28/17 lis/c=28/28 les/c/f=29/29/0 sis=36) [1] r=0 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:16 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 37 pg[5.1f( empty local-lis/les=36/37 n=0 ec=30/20 lis/c=30/30 les/c/f=31/31/0 sis=36) [1] r=0 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:16 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 37 pg[3.e( empty local-lis/les=36/37 n=0 ec=28/17 lis/c=28/28 les/c/f=29/29/0 sis=36) [1] r=0 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:16 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 37 pg[4.9( empty local-lis/les=36/37 n=0 ec=30/18 lis/c=30/30 les/c/f=31/31/0 sis=36) [1] r=0 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:16 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 37 pg[5.9( empty local-lis/les=36/37 n=0 ec=30/20 lis/c=30/30 les/c/f=31/31/0 sis=36) [1] r=0 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:16 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 37 pg[3.f( empty local-lis/les=36/37 n=0 ec=28/17 lis/c=28/28 les/c/f=29/29/0 sis=36) [1] r=0 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:16 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 37 pg[6.a( empty local-lis/les=36/37 n=0 ec=32/22 lis/c=32/32 les/c/f=33/33/0 sis=36) [1] r=0 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:16 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 37 pg[4.8( empty local-lis/les=36/37 n=0 ec=30/18 lis/c=30/30 les/c/f=31/31/0 sis=36) [1] r=0 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:16 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 37 pg[3.c( empty local-lis/les=36/37 n=0 ec=28/17 lis/c=28/28 les/c/f=29/29/0 sis=36) [1] r=0 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:16 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 37 pg[4.a( empty local-lis/les=36/37 n=0 ec=30/18 lis/c=30/30 les/c/f=31/31/0 sis=36) [1] r=0 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:16 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 37 pg[6.8( empty local-lis/les=36/37 n=0 ec=32/22 lis/c=32/32 les/c/f=33/33/0 sis=36) [1] r=0 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:16 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 37 pg[3.d( empty local-lis/les=36/37 n=0 ec=28/17 lis/c=28/28 les/c/f=29/29/0 sis=36) [1] r=0 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:16 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 37 pg[3.a( empty local-lis/les=36/37 n=0 ec=28/17 lis/c=28/28 les/c/f=29/29/0 sis=36) [1] r=0 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:16 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 37 pg[6.7( empty local-lis/les=36/37 n=0 ec=32/22 lis/c=32/32 les/c/f=33/33/0 sis=36) [1] r=0 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:16 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 37 pg[4.5( empty local-lis/les=36/37 n=0 ec=30/18 lis/c=30/30 les/c/f=31/31/0 sis=36) [1] r=0 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:16 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 37 pg[6.15( empty local-lis/les=36/37 n=0 ec=32/22 lis/c=32/32 les/c/f=33/33/0 sis=36) [1] r=0 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:16 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 37 pg[5.4( empty local-lis/les=36/37 n=0 ec=30/20 lis/c=30/30 les/c/f=31/31/0 sis=36) [1] r=0 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:16 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 37 pg[4.d( empty local-lis/les=36/37 n=0 ec=30/18 lis/c=30/30 les/c/f=31/31/0 sis=36) [1] r=0 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:16 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 37 pg[5.7( empty local-lis/les=36/37 n=0 ec=30/20 lis/c=30/30 les/c/f=31/31/0 sis=36) [1] r=0 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:16 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 37 pg[5.2( empty local-lis/les=36/37 n=0 ec=30/20 lis/c=30/30 les/c/f=31/31/0 sis=36) [1] r=0 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:16 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 37 pg[6.5( empty local-lis/les=36/37 n=0 ec=32/22 lis/c=32/32 les/c/f=33/33/0 sis=36) [1] r=0 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:16 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 37 pg[3.3( empty local-lis/les=36/37 n=0 ec=28/17 lis/c=28/28 les/c/f=29/29/0 sis=36) [1] r=0 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:16 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 37 pg[6.2( empty local-lis/les=36/37 n=0 ec=32/22 lis/c=32/32 les/c/f=33/33/0 sis=36) [1] r=0 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:16 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 37 pg[5.1( empty local-lis/les=36/37 n=0 ec=30/20 lis/c=30/30 les/c/f=31/31/0 sis=36) [1] r=0 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:16 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 37 pg[3.5( empty local-lis/les=36/37 n=0 ec=28/17 lis/c=28/28 les/c/f=29/29/0 sis=36) [1] r=0 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:16 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 37 pg[4.1( empty local-lis/les=36/37 n=0 ec=30/18 lis/c=30/30 les/c/f=31/31/0 sis=36) [1] r=0 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:16 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 37 pg[6.3( empty local-lis/les=36/37 n=0 ec=32/22 lis/c=32/32 les/c/f=33/33/0 sis=36) [1] r=0 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:16 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 37 pg[5.f( empty local-lis/les=36/37 n=0 ec=30/20 lis/c=30/30 les/c/f=31/31/0 sis=36) [1] r=0 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:16 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 37 pg[4.e( empty local-lis/les=36/37 n=0 ec=30/18 lis/c=30/30 les/c/f=31/31/0 sis=36) [1] r=0 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:16 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 37 pg[3.9( empty local-lis/les=36/37 n=0 ec=28/17 lis/c=28/28 les/c/f=29/29/0 sis=36) [1] r=0 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:16 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 37 pg[5.e( empty local-lis/les=36/37 n=0 ec=30/20 lis/c=30/30 les/c/f=31/31/0 sis=36) [1] r=0 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:16 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 37 pg[6.d( empty local-lis/les=36/37 n=0 ec=32/22 lis/c=32/32 les/c/f=33/33/0 sis=36) [1] r=0 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:16 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 37 pg[6.e( empty local-lis/les=36/37 n=0 ec=32/22 lis/c=32/32 les/c/f=33/33/0 sis=36) [1] r=0 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:16 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 37 pg[4.c( empty local-lis/les=36/37 n=0 ec=30/18 lis/c=30/30 les/c/f=31/31/0 sis=36) [1] r=0 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:16 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 37 pg[5.1c( empty local-lis/les=36/37 n=0 ec=30/20 lis/c=30/30 les/c/f=31/31/0 sis=36) [1] r=0 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:16 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 37 pg[3.1d( empty local-lis/les=36/37 n=0 ec=28/17 lis/c=28/28 les/c/f=29/29/0 sis=36) [1] r=0 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:16 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 37 pg[5.1b( empty local-lis/les=36/37 n=0 ec=30/20 lis/c=30/30 les/c/f=31/31/0 sis=36) [1] r=0 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:16 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 37 pg[4.1a( empty local-lis/les=36/37 n=0 ec=30/18 lis/c=30/30 les/c/f=31/31/0 sis=36) [1] r=0 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:16 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 37 pg[4.1b( empty local-lis/les=36/37 n=0 ec=30/18 lis/c=30/30 les/c/f=31/31/0 sis=36) [1] r=0 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:16 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 37 pg[6.19( empty local-lis/les=36/37 n=0 ec=32/22 lis/c=32/32 les/c/f=33/33/0 sis=36) [1] r=0 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:16 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 37 pg[5.1a( empty local-lis/les=36/37 n=0 ec=30/20 lis/c=30/30 les/c/f=31/31/0 sis=36) [1] r=0 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:16 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 37 pg[3.1c( empty local-lis/les=36/37 n=0 ec=28/17 lis/c=28/28 les/c/f=29/29/0 sis=36) [1] r=0 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:16 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 37 pg[4.18( empty local-lis/les=36/37 n=0 ec=30/18 lis/c=30/30 les/c/f=31/31/0 sis=36) [1] r=0 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:16 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 37 pg[6.1a( empty local-lis/les=36/37 n=0 ec=32/22 lis/c=32/32 les/c/f=33/33/0 sis=36) [1] r=0 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:16 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 37 pg[3.1a( empty local-lis/les=36/37 n=0 ec=28/17 lis/c=28/28 les/c/f=29/29/0 sis=36) [1] r=0 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:16 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 37 pg[5.18( empty local-lis/les=36/37 n=0 ec=30/20 lis/c=30/30 les/c/f=31/31/0 sis=36) [1] r=0 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:16 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 37 pg[6.17( empty local-lis/les=36/37 n=0 ec=32/22 lis/c=32/32 les/c/f=33/33/0 sis=36) [1] r=0 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:16 compute-1 ceph-mgr[82135]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Jan 20 13:58:16 compute-1 ceph-mgr[82135]: mgr[py] Loading python module 'osd_support'
Jan 20 13:58:16 compute-1 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mgr-compute-1-oweoeg[82131]: 2026-01-20T13:58:16.747+0000 7fae9b308140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Jan 20 13:58:17 compute-1 ceph-mgr[82135]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Jan 20 13:58:17 compute-1 ceph-mgr[82135]: mgr[py] Loading python module 'pg_autoscaler'
Jan 20 13:58:17 compute-1 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mgr-compute-1-oweoeg[82131]: 2026-01-20T13:58:17.023+0000 7fae9b308140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Jan 20 13:58:17 compute-1 ceph-mgr[82135]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Jan 20 13:58:17 compute-1 ceph-mgr[82135]: mgr[py] Loading python module 'progress'
Jan 20 13:58:17 compute-1 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mgr-compute-1-oweoeg[82131]: 2026-01-20T13:58:17.319+0000 7fae9b308140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Jan 20 13:58:17 compute-1 ceph-mgr[82135]: mgr[py] Module progress has missing NOTIFY_TYPES member
Jan 20 13:58:17 compute-1 ceph-mgr[82135]: mgr[py] Loading python module 'prometheus'
Jan 20 13:58:17 compute-1 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mgr-compute-1-oweoeg[82131]: 2026-01-20T13:58:17.572+0000 7fae9b308140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Jan 20 13:58:17 compute-1 ceph-mon[81775]: from='client.14268 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 13:58:17 compute-1 ceph-mon[81775]: Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Jan 20 13:58:17 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:17 compute-1 ceph-mon[81775]: Saving service ingress.rgw.default spec with placement count:2
Jan 20 13:58:17 compute-1 ceph-mon[81775]: 3.6 scrub starts
Jan 20 13:58:17 compute-1 ceph-mon[81775]: osdmap e37: 3 total, 2 up, 3 in
Jan 20 13:58:17 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 20 13:58:17 compute-1 ceph-mon[81775]: 3.6 scrub ok
Jan 20 13:58:17 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:17 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
Jan 20 13:58:17 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 13:58:18 compute-1 ceph-mgr[82135]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Jan 20 13:58:18 compute-1 ceph-mgr[82135]: mgr[py] Loading python module 'rbd_support'
Jan 20 13:58:18 compute-1 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mgr-compute-1-oweoeg[82131]: 2026-01-20T13:58:18.661+0000 7fae9b308140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Jan 20 13:58:18 compute-1 ceph-mon[81775]: pgmap v102: 193 pgs: 193 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Jan 20 13:58:18 compute-1 ceph-mon[81775]: Deploying daemon osd.2 on compute-2
Jan 20 13:58:18 compute-1 ceph-mgr[82135]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Jan 20 13:58:18 compute-1 ceph-mgr[82135]: mgr[py] Loading python module 'restful'
Jan 20 13:58:18 compute-1 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mgr-compute-1-oweoeg[82131]: 2026-01-20T13:58:18.980+0000 7fae9b308140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Jan 20 13:58:19 compute-1 ceph-osd[79119]: log_channel(cluster) log [DBG] : 2.3 deep-scrub starts
Jan 20 13:58:19 compute-1 ceph-osd[79119]: log_channel(cluster) log [DBG] : 2.3 deep-scrub ok
Jan 20 13:58:19 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e37 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 13:58:19 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).mds e2 new map
Jan 20 13:58:19 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).mds e2 print_map
                                           e2
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        2
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2026-01-20T13:58:19.644785+0000
                                           modified        2026-01-20T13:58:19.644841+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           max_mds        1
                                           in        
                                           up        {}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                            
                                            
Jan 20 13:58:19 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e38 e38: 3 total, 2 up, 3 in
Jan 20 13:58:19 compute-1 ceph-mon[81775]: 3.7 scrub starts
Jan 20 13:58:19 compute-1 ceph-mon[81775]: 3.7 scrub ok
Jan 20 13:58:19 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"}]: dispatch
Jan 20 13:58:19 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"}]: dispatch
Jan 20 13:58:19 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]: dispatch
Jan 20 13:58:19 compute-1 ceph-mon[81775]: Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Jan 20 13:58:19 compute-1 ceph-mon[81775]: Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Jan 20 13:58:19 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Jan 20 13:58:19 compute-1 ceph-mon[81775]: osdmap e38: 3 total, 2 up, 3 in
Jan 20 13:58:19 compute-1 ceph-mon[81775]: fsmap cephfs:0
Jan 20 13:58:19 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 20 13:58:19 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:19 compute-1 ceph-mgr[82135]: mgr[py] Loading python module 'rgw'
Jan 20 13:58:19 compute-1 sshd-session[82171]: Connection closed by authenticating user root 116.99.171.211 port 54514 [preauth]
Jan 20 13:58:20 compute-1 ceph-osd[79119]: log_channel(cluster) log [DBG] : 2.5 scrub starts
Jan 20 13:58:20 compute-1 ceph-osd[79119]: log_channel(cluster) log [DBG] : 2.5 scrub ok
Jan 20 13:58:20 compute-1 ceph-mgr[82135]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Jan 20 13:58:20 compute-1 ceph-mgr[82135]: mgr[py] Loading python module 'rook'
Jan 20 13:58:20 compute-1 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mgr-compute-1-oweoeg[82131]: 2026-01-20T13:58:20.479+0000 7fae9b308140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Jan 20 13:58:20 compute-1 ceph-mon[81775]: pgmap v103: 193 pgs: 193 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Jan 20 13:58:20 compute-1 ceph-mon[81775]: 2.3 deep-scrub starts
Jan 20 13:58:20 compute-1 ceph-mon[81775]: 2.3 deep-scrub ok
Jan 20 13:58:20 compute-1 ceph-mon[81775]: from='client.14274 -' entity='client.admin' cmd=[{"prefix": "fs volume create", "name": "cephfs", "placement": "compute-0 compute-1 compute-2 ", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 13:58:20 compute-1 ceph-mon[81775]: Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Jan 20 13:58:20 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:21 compute-1 ceph-mon[81775]: 2.5 scrub starts
Jan 20 13:58:21 compute-1 ceph-mon[81775]: 2.5 scrub ok
Jan 20 13:58:21 compute-1 ceph-mon[81775]: from='client.14280 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 13:58:21 compute-1 ceph-mon[81775]: Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Jan 20 13:58:21 compute-1 ceph-mon[81775]: 3.8 deep-scrub starts
Jan 20 13:58:21 compute-1 ceph-mon[81775]: 3.8 deep-scrub ok
Jan 20 13:58:22 compute-1 ceph-osd[79119]: log_channel(cluster) log [DBG] : 2.7 scrub starts
Jan 20 13:58:22 compute-1 ceph-osd[79119]: log_channel(cluster) log [DBG] : 2.7 scrub ok
Jan 20 13:58:22 compute-1 ceph-mgr[82135]: mgr[py] Module rook has missing NOTIFY_TYPES member
Jan 20 13:58:22 compute-1 ceph-mgr[82135]: mgr[py] Loading python module 'selftest'
Jan 20 13:58:22 compute-1 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mgr-compute-1-oweoeg[82131]: 2026-01-20T13:58:22.633+0000 7fae9b308140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Jan 20 13:58:22 compute-1 ceph-mon[81775]: pgmap v105: 193 pgs: 193 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Jan 20 13:58:22 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:22 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:22 compute-1 ceph-mgr[82135]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Jan 20 13:58:22 compute-1 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mgr-compute-1-oweoeg[82131]: 2026-01-20T13:58:22.897+0000 7fae9b308140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Jan 20 13:58:22 compute-1 ceph-mgr[82135]: mgr[py] Loading python module 'snap_schedule'
Jan 20 13:58:23 compute-1 ceph-mgr[82135]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Jan 20 13:58:23 compute-1 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mgr-compute-1-oweoeg[82131]: 2026-01-20T13:58:23.189+0000 7fae9b308140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Jan 20 13:58:23 compute-1 ceph-mgr[82135]: mgr[py] Loading python module 'stats'
Jan 20 13:58:23 compute-1 ceph-mgr[82135]: mgr[py] Loading python module 'status'
Jan 20 13:58:23 compute-1 ceph-mgr[82135]: mgr[py] Module status has missing NOTIFY_TYPES member
Jan 20 13:58:23 compute-1 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mgr-compute-1-oweoeg[82131]: 2026-01-20T13:58:23.728+0000 7fae9b308140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Jan 20 13:58:23 compute-1 ceph-mgr[82135]: mgr[py] Loading python module 'telegraf'
Jan 20 13:58:23 compute-1 ceph-mon[81775]: 2.7 scrub starts
Jan 20 13:58:23 compute-1 ceph-mon[81775]: 2.7 scrub ok
Jan 20 13:58:23 compute-1 ceph-mon[81775]: 3.b scrub starts
Jan 20 13:58:23 compute-1 ceph-mon[81775]: 3.b scrub ok
Jan 20 13:58:23 compute-1 ceph-mon[81775]: from='osd.2 [v2:192.168.122.102:6800/3188109873,v1:192.168.122.102:6801/3188109873]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Jan 20 13:58:23 compute-1 ceph-mon[81775]: from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Jan 20 13:58:23 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/2618177133' entity='client.admin' cmd=[{"prefix": "auth import"}]: dispatch
Jan 20 13:58:23 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/2618177133' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Jan 20 13:58:23 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e39 e39: 3 total, 2 up, 3 in
Jan 20 13:58:23 compute-1 ceph-mgr[82135]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Jan 20 13:58:23 compute-1 ceph-mgr[82135]: mgr[py] Loading python module 'telemetry'
Jan 20 13:58:23 compute-1 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mgr-compute-1-oweoeg[82131]: 2026-01-20T13:58:23.976+0000 7fae9b308140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Jan 20 13:58:24 compute-1 sudo[82173]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:58:24 compute-1 sudo[82173]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:58:24 compute-1 sudo[82173]: pam_unix(sudo:session): session closed for user root
Jan 20 13:58:24 compute-1 sudo[82198]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 13:58:24 compute-1 sudo[82198]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:58:24 compute-1 sudo[82198]: pam_unix(sudo:session): session closed for user root
Jan 20 13:58:24 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 13:58:24 compute-1 ceph-mgr[82135]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Jan 20 13:58:24 compute-1 ceph-mgr[82135]: mgr[py] Loading python module 'test_orchestrator'
Jan 20 13:58:24 compute-1 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mgr-compute-1-oweoeg[82131]: 2026-01-20T13:58:24.599+0000 7fae9b308140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Jan 20 13:58:24 compute-1 sudo[82223]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:58:24 compute-1 sudo[82223]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:58:24 compute-1 sudo[82223]: pam_unix(sudo:session): session closed for user root
Jan 20 13:58:24 compute-1 sudo[82248]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 13:58:24 compute-1 sudo[82248]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:58:24 compute-1 sudo[82248]: pam_unix(sudo:session): session closed for user root
Jan 20 13:58:24 compute-1 sudo[82273]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:58:24 compute-1 sudo[82273]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:58:24 compute-1 sudo[82273]: pam_unix(sudo:session): session closed for user root
Jan 20 13:58:24 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e40 e40: 3 total, 2 up, 3 in
Jan 20 13:58:24 compute-1 sudo[82298]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Jan 20 13:58:24 compute-1 sudo[82298]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:58:24 compute-1 ceph-mon[81775]: pgmap v106: 193 pgs: 193 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Jan 20 13:58:24 compute-1 ceph-mon[81775]: from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Jan 20 13:58:24 compute-1 ceph-mon[81775]: osdmap e39: 3 total, 2 up, 3 in
Jan 20 13:58:24 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 20 13:58:24 compute-1 ceph-mon[81775]: from='osd.2 [v2:192.168.122.102:6800/3188109873,v1:192.168.122.102:6801/3188109873]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0068, "args": ["host=compute-2", "root=default"]}]: dispatch
Jan 20 13:58:24 compute-1 ceph-mon[81775]: from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0068, "args": ["host=compute-2", "root=default"]}]: dispatch
Jan 20 13:58:24 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:24 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:24 compute-1 ceph-mon[81775]: Standby manager daemon compute-2.gunjko started
Jan 20 13:58:25 compute-1 ceph-mgr[82135]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Jan 20 13:58:25 compute-1 ceph-mgr[82135]: mgr[py] Loading python module 'volumes'
Jan 20 13:58:25 compute-1 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mgr-compute-1-oweoeg[82131]: 2026-01-20T13:58:25.328+0000 7fae9b308140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Jan 20 13:58:25 compute-1 podman[82395]: 2026-01-20 13:58:25.4329378 +0000 UTC m=+0.110084494 container exec 718ebba7a543e42aad7051248d2c7dc014068c35c89c5b87f27b82d4de39c009 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-crash-compute-1, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 20 13:58:25 compute-1 podman[82395]: 2026-01-20 13:58:25.536360521 +0000 UTC m=+0.213507195 container exec_died 718ebba7a543e42aad7051248d2c7dc014068c35c89c5b87f27b82d4de39c009 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-crash-compute-1, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Jan 20 13:58:25 compute-1 sudo[82298]: pam_unix(sudo:session): session closed for user root
Jan 20 13:58:25 compute-1 ceph-mon[81775]: from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0068, "args": ["host=compute-2", "root=default"]}]': finished
Jan 20 13:58:25 compute-1 ceph-mon[81775]: osdmap e40: 3 total, 2 up, 3 in
Jan 20 13:58:25 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 20 13:58:25 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 20 13:58:25 compute-1 ceph-mon[81775]: mgrmap e10: compute-0.wookjv(active, since 2m), standbys: compute-2.gunjko
Jan 20 13:58:25 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "mgr metadata", "who": "compute-2.gunjko", "id": "compute-2.gunjko"}]: dispatch
Jan 20 13:58:25 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/1505765802' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Jan 20 13:58:25 compute-1 sudo[82484]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:58:25 compute-1 sudo[82484]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:58:25 compute-1 sudo[82484]: pam_unix(sudo:session): session closed for user root
Jan 20 13:58:26 compute-1 sudo[82509]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 13:58:26 compute-1 sudo[82509]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:58:26 compute-1 sudo[82509]: pam_unix(sudo:session): session closed for user root
Jan 20 13:58:26 compute-1 sudo[82534]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:58:26 compute-1 sudo[82534]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:58:26 compute-1 sudo[82534]: pam_unix(sudo:session): session closed for user root
Jan 20 13:58:26 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 40 pg[6.1e( empty local-lis/les=36/37 n=0 ec=32/22 lis/c=36/36 les/c/f=37/37/0 sis=40 pruub=14.579854012s) [] r=-1 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active pruub 78.409347534s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:58:26 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 40 pg[7.1f( empty local-lis/les=32/34 n=0 ec=32/24 lis/c=32/32 les/c/f=34/34/0 sis=40 pruub=8.188198090s) [] r=-1 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 active pruub 72.017723083s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:58:26 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 40 pg[6.1c( empty local-lis/les=36/37 n=0 ec=32/22 lis/c=36/36 les/c/f=37/37/0 sis=40 pruub=14.583823204s) [] r=-1 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active pruub 78.413360596s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:58:26 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 40 pg[6.1e( empty local-lis/les=36/37 n=0 ec=32/22 lis/c=36/36 les/c/f=37/37/0 sis=40 pruub=14.579854012s) [] r=-1 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.409347534s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:58:26 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 40 pg[6.1c( empty local-lis/les=36/37 n=0 ec=32/22 lis/c=36/36 les/c/f=37/37/0 sis=40 pruub=14.583823204s) [] r=-1 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.413360596s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:58:26 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 40 pg[7.1f( empty local-lis/les=32/34 n=0 ec=32/24 lis/c=32/32 les/c/f=34/34/0 sis=40 pruub=8.188198090s) [] r=-1 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 72.017723083s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:58:26 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 40 pg[6.12( empty local-lis/les=36/37 n=0 ec=32/22 lis/c=36/36 les/c/f=37/37/0 sis=40 pruub=14.583716393s) [] r=-1 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active pruub 78.413505554s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:58:26 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 40 pg[6.12( empty local-lis/les=36/37 n=0 ec=32/22 lis/c=36/36 les/c/f=37/37/0 sis=40 pruub=14.583716393s) [] r=-1 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.413505554s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:58:26 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 40 pg[3.15( empty local-lis/les=36/37 n=0 ec=28/17 lis/c=36/36 les/c/f=37/37/0 sis=40 pruub=14.583548546s) [] r=-1 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active pruub 78.413490295s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:58:26 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 40 pg[3.15( empty local-lis/les=36/37 n=0 ec=28/17 lis/c=36/36 les/c/f=37/37/0 sis=40 pruub=14.583548546s) [] r=-1 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.413490295s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:58:26 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 40 pg[7.11( empty local-lis/les=32/34 n=0 ec=32/24 lis/c=32/32 les/c/f=34/34/0 sis=40 pruub=8.192520142s) [] r=-1 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 active pruub 72.022560120s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:58:26 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 40 pg[6.17( empty local-lis/les=36/37 n=0 ec=32/22 lis/c=36/36 les/c/f=37/37/0 sis=40 pruub=14.586153030s) [] r=-1 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active pruub 78.416221619s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:58:26 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 40 pg[7.11( empty local-lis/les=32/34 n=0 ec=32/24 lis/c=32/32 les/c/f=34/34/0 sis=40 pruub=8.192520142s) [] r=-1 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 72.022560120s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:58:26 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 40 pg[6.17( empty local-lis/les=36/37 n=0 ec=32/22 lis/c=36/36 les/c/f=37/37/0 sis=40 pruub=14.586153030s) [] r=-1 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.416221619s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:58:26 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 40 pg[4.15( empty local-lis/les=36/37 n=0 ec=30/18 lis/c=36/36 les/c/f=37/37/0 sis=40 pruub=14.583463669s) [] r=-1 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active pruub 78.413619995s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:58:26 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 40 pg[7.16( empty local-lis/les=32/34 n=0 ec=32/24 lis/c=32/32 les/c/f=34/34/0 sis=40 pruub=8.192604065s) [] r=-1 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 active pruub 72.022766113s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:58:26 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 40 pg[7.16( empty local-lis/les=32/34 n=0 ec=32/24 lis/c=32/32 les/c/f=34/34/0 sis=40 pruub=8.192604065s) [] r=-1 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 72.022766113s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:58:26 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 40 pg[4.15( empty local-lis/les=36/37 n=0 ec=30/18 lis/c=36/36 les/c/f=37/37/0 sis=40 pruub=14.583463669s) [] r=-1 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.413619995s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:58:26 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 40 pg[2.12( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=40 pruub=10.664838791s) [] r=-1 lpr=40 pi=[28,40)/1 crt=0'0 mlcod 0'0 active pruub 74.495071411s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:58:26 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 40 pg[2.12( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=40 pruub=10.664838791s) [] r=-1 lpr=40 pi=[28,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 74.495071411s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:58:26 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 40 pg[3.e( empty local-lis/les=36/37 n=0 ec=28/17 lis/c=36/36 les/c/f=37/37/0 sis=40 pruub=14.583417892s) [] r=-1 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active pruub 78.413772583s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:58:26 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 40 pg[3.e( empty local-lis/les=36/37 n=0 ec=28/17 lis/c=36/36 les/c/f=37/37/0 sis=40 pruub=14.583417892s) [] r=-1 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.413772583s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:58:26 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 40 pg[4.9( empty local-lis/les=36/37 n=0 ec=30/18 lis/c=36/36 les/c/f=37/37/0 sis=40 pruub=14.583403587s) [] r=-1 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active pruub 78.413810730s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:58:26 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 40 pg[4.9( empty local-lis/les=36/37 n=0 ec=30/18 lis/c=36/36 les/c/f=37/37/0 sis=40 pruub=14.583403587s) [] r=-1 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.413810730s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:58:26 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 40 pg[2.18( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=40 pruub=10.665678978s) [] r=-1 lpr=40 pi=[28,40)/1 crt=0'0 mlcod 0'0 active pruub 74.496124268s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:58:26 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 40 pg[2.18( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=40 pruub=10.665678978s) [] r=-1 lpr=40 pi=[28,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 74.496124268s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:58:26 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 40 pg[2.f( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=40 pruub=10.664505005s) [] r=-1 lpr=40 pi=[28,40)/1 crt=0'0 mlcod 0'0 active pruub 74.494987488s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:58:26 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 40 pg[2.f( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=40 pruub=10.664505005s) [] r=-1 lpr=40 pi=[28,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 74.494987488s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:58:26 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 40 pg[4.8( empty local-lis/les=36/37 n=0 ec=30/18 lis/c=36/36 les/c/f=37/37/0 sis=40 pruub=14.583350182s) [] r=-1 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active pruub 78.413917542s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:58:26 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 40 pg[4.8( empty local-lis/les=36/37 n=0 ec=30/18 lis/c=36/36 les/c/f=37/37/0 sis=40 pruub=14.583350182s) [] r=-1 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.413917542s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:58:26 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 40 pg[3.11( empty local-lis/les=36/37 n=0 ec=28/17 lis/c=36/36 les/c/f=37/37/0 sis=40 pruub=14.583116531s) [] r=-1 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active pruub 78.413742065s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:58:26 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 40 pg[3.11( empty local-lis/les=36/37 n=0 ec=28/17 lis/c=36/36 les/c/f=37/37/0 sis=40 pruub=14.583116531s) [] r=-1 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.413742065s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:58:26 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 40 pg[2.b( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=40 pruub=10.663282394s) [] r=-1 lpr=40 pi=[28,40)/1 crt=0'0 mlcod 0'0 active pruub 74.494148254s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:58:26 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 40 pg[2.b( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=40 pruub=10.663282394s) [] r=-1 lpr=40 pi=[28,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 74.494148254s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:58:26 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 40 pg[7.5( empty local-lis/les=32/34 n=0 ec=32/24 lis/c=32/32 les/c/f=34/34/0 sis=40 pruub=8.192474365s) [] r=-1 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 active pruub 72.023422241s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:58:26 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 40 pg[5.4( empty local-lis/les=36/37 n=0 ec=30/20 lis/c=36/36 les/c/f=37/37/0 sis=40 pruub=14.583220482s) [] r=-1 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active pruub 78.414077759s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:58:26 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 40 pg[7.5( empty local-lis/les=32/34 n=0 ec=32/24 lis/c=32/32 les/c/f=34/34/0 sis=40 pruub=8.192474365s) [] r=-1 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 72.023422241s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:58:26 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 40 pg[5.4( empty local-lis/les=36/37 n=0 ec=30/20 lis/c=36/36 les/c/f=37/37/0 sis=40 pruub=14.583220482s) [] r=-1 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.414077759s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:58:26 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 40 pg[2.5( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=40 pruub=10.662651062s) [] r=-1 lpr=40 pi=[28,40)/1 crt=0'0 mlcod 0'0 active pruub 74.493736267s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:58:26 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 40 pg[2.5( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=40 pruub=10.662651062s) [] r=-1 lpr=40 pi=[28,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 74.493736267s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:58:26 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 40 pg[3.9( empty local-lis/les=36/37 n=0 ec=28/17 lis/c=36/36 les/c/f=37/37/0 sis=40 pruub=14.583081245s) [] r=-1 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active pruub 78.414337158s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:58:26 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 40 pg[4.1( empty local-lis/les=36/37 n=0 ec=30/18 lis/c=36/36 les/c/f=37/37/0 sis=40 pruub=14.583039284s) [] r=-1 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active pruub 78.414276123s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:58:26 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 40 pg[3.9( empty local-lis/les=36/37 n=0 ec=28/17 lis/c=36/36 les/c/f=37/37/0 sis=40 pruub=14.583081245s) [] r=-1 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.414337158s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:58:26 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 40 pg[4.1( empty local-lis/les=36/37 n=0 ec=30/18 lis/c=36/36 les/c/f=37/37/0 sis=40 pruub=14.583039284s) [] r=-1 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.414276123s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:58:26 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 40 pg[5.e( empty local-lis/les=36/37 n=0 ec=30/20 lis/c=36/36 les/c/f=37/37/0 sis=40 pruub=14.583056450s) [] r=-1 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active pruub 78.414367676s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:58:26 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 40 pg[5.e( empty local-lis/les=36/37 n=0 ec=30/20 lis/c=36/36 les/c/f=37/37/0 sis=40 pruub=14.583056450s) [] r=-1 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.414367676s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:58:26 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 40 pg[4.1f( empty local-lis/les=36/37 n=0 ec=30/18 lis/c=36/36 les/c/f=37/37/0 sis=40 pruub=14.581921577s) [] r=-1 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active pruub 78.413330078s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:58:26 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 40 pg[2.1c( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=40 pruub=10.663998604s) [] r=-1 lpr=40 pi=[28,40)/1 crt=0'0 mlcod 0'0 active pruub 74.495536804s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:58:26 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 40 pg[3.1a( empty local-lis/les=36/37 n=0 ec=28/17 lis/c=36/36 les/c/f=37/37/0 sis=40 pruub=14.583057404s) [] r=-1 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active pruub 78.414588928s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:58:26 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 40 pg[3.1a( empty local-lis/les=36/37 n=0 ec=28/17 lis/c=36/36 les/c/f=37/37/0 sis=40 pruub=14.583057404s) [] r=-1 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.414588928s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:58:26 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 40 pg[2.1c( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=40 pruub=10.663998604s) [] r=-1 lpr=40 pi=[28,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 74.495536804s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:58:26 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 40 pg[5.1a( empty local-lis/les=36/37 n=0 ec=30/20 lis/c=36/36 les/c/f=37/37/0 sis=40 pruub=14.582934380s) [] r=-1 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active pruub 78.414543152s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:58:26 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 40 pg[5.1a( empty local-lis/les=36/37 n=0 ec=30/20 lis/c=36/36 les/c/f=37/37/0 sis=40 pruub=14.582934380s) [] r=-1 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.414543152s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:58:26 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 40 pg[2.1d( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=40 pruub=10.662040710s) [] r=-1 lpr=40 pi=[28,40)/1 crt=0'0 mlcod 0'0 active pruub 74.493736267s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:58:26 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 40 pg[3.1d( empty local-lis/les=36/37 n=0 ec=28/17 lis/c=36/36 les/c/f=37/37/0 sis=40 pruub=14.582720757s) [] r=-1 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active pruub 78.414451599s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:58:26 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 40 pg[2.1d( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=40 pruub=10.662040710s) [] r=-1 lpr=40 pi=[28,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 74.493736267s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:58:26 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 40 pg[4.1f( empty local-lis/les=36/37 n=0 ec=30/18 lis/c=36/36 les/c/f=37/37/0 sis=40 pruub=14.581921577s) [] r=-1 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.413330078s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:58:26 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 40 pg[3.1d( empty local-lis/les=36/37 n=0 ec=28/17 lis/c=36/36 les/c/f=37/37/0 sis=40 pruub=14.582720757s) [] r=-1 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.414451599s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:58:26 compute-1 ceph-mgr[82135]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Jan 20 13:58:26 compute-1 ceph-mgr[82135]: mgr[py] Loading python module 'zabbix'
Jan 20 13:58:26 compute-1 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mgr-compute-1-oweoeg[82131]: 2026-01-20T13:58:26.129+0000 7fae9b308140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Jan 20 13:58:26 compute-1 sudo[82559]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 20 13:58:26 compute-1 sudo[82559]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:58:26 compute-1 ceph-mgr[82135]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Jan 20 13:58:26 compute-1 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mgr-compute-1-oweoeg[82131]: 2026-01-20T13:58:26.374+0000 7fae9b308140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Jan 20 13:58:26 compute-1 ceph-mgr[82135]: ms_deliver_dispatch: unhandled message 0x559e86979600 mon_map magic: 0 v1 from mon.2 v2:192.168.122.101:3300/0
Jan 20 13:58:26 compute-1 ceph-mgr[82135]: client.0 ms_handle_reset on v2:192.168.122.100:6800/2542147622
Jan 20 13:58:26 compute-1 sudo[82559]: pam_unix(sudo:session): session closed for user root
Jan 20 13:58:26 compute-1 ceph-mon[81775]: purged_snaps scrub starts
Jan 20 13:58:26 compute-1 ceph-mon[81775]: purged_snaps scrub ok
Jan 20 13:58:26 compute-1 ceph-mon[81775]: pgmap v109: 193 pgs: 193 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Jan 20 13:58:26 compute-1 ceph-mon[81775]: 3.12 scrub starts
Jan 20 13:58:26 compute-1 ceph-mon[81775]: 3.12 scrub ok
Jan 20 13:58:26 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 20 13:58:26 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:26 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:26 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:26 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:26 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/3562592918' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 13:58:26 compute-1 ceph-mon[81775]: Standby manager daemon compute-1.oweoeg started
Jan 20 13:58:26 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 20 13:58:28 compute-1 ceph-mon[81775]: mgrmap e11: compute-0.wookjv(active, since 2m), standbys: compute-2.gunjko, compute-1.oweoeg
Jan 20 13:58:28 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "mgr metadata", "who": "compute-1.oweoeg", "id": "compute-1.oweoeg"}]: dispatch
Jan 20 13:58:28 compute-1 ceph-mon[81775]: pgmap v110: 193 pgs: 193 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Jan 20 13:58:28 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/260003973' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "client.openstack"}]: dispatch
Jan 20 13:58:28 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 20 13:58:28 compute-1 sudo[82615]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:58:28 compute-1 sudo[82615]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:58:28 compute-1 sudo[82615]: pam_unix(sudo:session): session closed for user root
Jan 20 13:58:28 compute-1 sudo[82641]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Jan 20 13:58:28 compute-1 sudo[82641]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:58:28 compute-1 sudo[82641]: pam_unix(sudo:session): session closed for user root
Jan 20 13:58:28 compute-1 sudo[82667]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:58:28 compute-1 sudo[82667]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:58:28 compute-1 sudo[82667]: pam_unix(sudo:session): session closed for user root
Jan 20 13:58:28 compute-1 sudo[82692]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-e399cf45-e6b6-5393-99f1-75c601d3f188/etc/ceph
Jan 20 13:58:28 compute-1 sudo[82692]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:58:28 compute-1 sudo[82692]: pam_unix(sudo:session): session closed for user root
Jan 20 13:58:28 compute-1 sudo[82717]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:58:28 compute-1 sudo[82717]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:58:28 compute-1 sudo[82717]: pam_unix(sudo:session): session closed for user root
Jan 20 13:58:29 compute-1 sudo[82742]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-e399cf45-e6b6-5393-99f1-75c601d3f188/etc/ceph/ceph.conf.new
Jan 20 13:58:29 compute-1 sudo[82742]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:58:29 compute-1 sudo[82742]: pam_unix(sudo:session): session closed for user root
Jan 20 13:58:29 compute-1 sudo[82767]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:58:29 compute-1 sudo[82767]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:58:29 compute-1 sudo[82767]: pam_unix(sudo:session): session closed for user root
Jan 20 13:58:29 compute-1 sudo[82792]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-e399cf45-e6b6-5393-99f1-75c601d3f188
Jan 20 13:58:29 compute-1 sudo[82792]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:58:29 compute-1 sudo[82792]: pam_unix(sudo:session): session closed for user root
Jan 20 13:58:29 compute-1 sudo[82817]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:58:29 compute-1 sudo[82817]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:58:29 compute-1 sudo[82817]: pam_unix(sudo:session): session closed for user root
Jan 20 13:58:29 compute-1 sudo[82842]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-e399cf45-e6b6-5393-99f1-75c601d3f188/etc/ceph/ceph.conf.new
Jan 20 13:58:29 compute-1 sudo[82842]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:58:29 compute-1 sudo[82842]: pam_unix(sudo:session): session closed for user root
Jan 20 13:58:29 compute-1 ceph-osd[79119]: log_channel(cluster) log [DBG] : 2.8 scrub starts
Jan 20 13:58:29 compute-1 sudo[82890]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:58:29 compute-1 ceph-osd[79119]: log_channel(cluster) log [DBG] : 2.8 scrub ok
Jan 20 13:58:29 compute-1 sudo[82890]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:58:29 compute-1 sudo[82890]: pam_unix(sudo:session): session closed for user root
Jan 20 13:58:29 compute-1 sudo[82915]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-e399cf45-e6b6-5393-99f1-75c601d3f188/etc/ceph/ceph.conf.new
Jan 20 13:58:29 compute-1 sudo[82915]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:58:29 compute-1 sudo[82915]: pam_unix(sudo:session): session closed for user root
Jan 20 13:58:29 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e40 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 13:58:29 compute-1 sudo[82940]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:58:29 compute-1 sudo[82940]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:58:29 compute-1 sudo[82940]: pam_unix(sudo:session): session closed for user root
Jan 20 13:58:29 compute-1 sudo[82965]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-e399cf45-e6b6-5393-99f1-75c601d3f188/etc/ceph/ceph.conf.new
Jan 20 13:58:29 compute-1 sudo[82965]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:58:29 compute-1 sudo[82965]: pam_unix(sudo:session): session closed for user root
Jan 20 13:58:29 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:29 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:29 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Jan 20 13:58:29 compute-1 ceph-mon[81775]: Adjusting osd_memory_target on compute-2 to 127.9M
Jan 20 13:58:29 compute-1 ceph-mon[81775]: Unable to set osd_memory_target on compute-2 to 134209126: error parsing value: Value '134209126' is below minimum 939524096
Jan 20 13:58:29 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 13:58:29 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 13:58:29 compute-1 ceph-mon[81775]: Updating compute-0:/etc/ceph/ceph.conf
Jan 20 13:58:29 compute-1 ceph-mon[81775]: Updating compute-1:/etc/ceph/ceph.conf
Jan 20 13:58:29 compute-1 ceph-mon[81775]: Updating compute-2:/etc/ceph/ceph.conf
Jan 20 13:58:29 compute-1 ceph-mon[81775]: OSD bench result of 6249.450009 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Jan 20 13:58:29 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 20 13:58:29 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e41 e41: 3 total, 3 up, 3 in
Jan 20 13:58:29 compute-1 sudo[82990]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:58:29 compute-1 sudo[82990]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:58:29 compute-1 sudo[82990]: pam_unix(sudo:session): session closed for user root
Jan 20 13:58:29 compute-1 sudo[83015]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-e399cf45-e6b6-5393-99f1-75c601d3f188/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Jan 20 13:58:29 compute-1 sudo[83015]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:58:29 compute-1 sudo[83015]: pam_unix(sudo:session): session closed for user root
Jan 20 13:58:29 compute-1 sudo[83040]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:58:29 compute-1 sudo[83040]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:58:29 compute-1 sudo[83040]: pam_unix(sudo:session): session closed for user root
Jan 20 13:58:29 compute-1 sudo[83065]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/config
Jan 20 13:58:29 compute-1 sudo[83065]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:58:29 compute-1 sudo[83065]: pam_unix(sudo:session): session closed for user root
Jan 20 13:58:29 compute-1 sudo[83090]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:58:29 compute-1 sudo[83090]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:58:29 compute-1 sudo[83090]: pam_unix(sudo:session): session closed for user root
Jan 20 13:58:30 compute-1 sudo[83115]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-e399cf45-e6b6-5393-99f1-75c601d3f188/var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/config
Jan 20 13:58:30 compute-1 sudo[83115]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:58:30 compute-1 sudo[83115]: pam_unix(sudo:session): session closed for user root
Jan 20 13:58:30 compute-1 sudo[83140]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:58:30 compute-1 sudo[83140]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:58:30 compute-1 sudo[83140]: pam_unix(sudo:session): session closed for user root
Jan 20 13:58:30 compute-1 sudo[83165]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-e399cf45-e6b6-5393-99f1-75c601d3f188/var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/config/ceph.conf.new
Jan 20 13:58:30 compute-1 sudo[83165]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:58:30 compute-1 sudo[83165]: pam_unix(sudo:session): session closed for user root
Jan 20 13:58:30 compute-1 sudo[83190]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:58:30 compute-1 sudo[83190]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:58:30 compute-1 sudo[83190]: pam_unix(sudo:session): session closed for user root
Jan 20 13:58:30 compute-1 sudo[83215]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-e399cf45-e6b6-5393-99f1-75c601d3f188
Jan 20 13:58:30 compute-1 sudo[83215]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:58:30 compute-1 sudo[83215]: pam_unix(sudo:session): session closed for user root
Jan 20 13:58:30 compute-1 sudo[83240]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:58:30 compute-1 sudo[83240]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:58:30 compute-1 sudo[83240]: pam_unix(sudo:session): session closed for user root
Jan 20 13:58:30 compute-1 sudo[83265]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-e399cf45-e6b6-5393-99f1-75c601d3f188/var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/config/ceph.conf.new
Jan 20 13:58:30 compute-1 sudo[83265]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:58:30 compute-1 sudo[83265]: pam_unix(sudo:session): session closed for user root
Jan 20 13:58:30 compute-1 sudo[83313]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:58:30 compute-1 sudo[83313]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:58:30 compute-1 sudo[83313]: pam_unix(sudo:session): session closed for user root
Jan 20 13:58:30 compute-1 sudo[83338]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-e399cf45-e6b6-5393-99f1-75c601d3f188/var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/config/ceph.conf.new
Jan 20 13:58:30 compute-1 sudo[83338]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:58:30 compute-1 sudo[83338]: pam_unix(sudo:session): session closed for user root
Jan 20 13:58:30 compute-1 sudo[83363]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:58:30 compute-1 sudo[83363]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:58:30 compute-1 sudo[83363]: pam_unix(sudo:session): session closed for user root
Jan 20 13:58:30 compute-1 ceph-mon[81775]: pgmap v111: 193 pgs: 193 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Jan 20 13:58:30 compute-1 ceph-mon[81775]: 2.8 scrub starts
Jan 20 13:58:30 compute-1 ceph-mon[81775]: 2.8 scrub ok
Jan 20 13:58:30 compute-1 ceph-mon[81775]: osd.2 [v2:192.168.122.102:6800/3188109873,v1:192.168.122.102:6801/3188109873] boot
Jan 20 13:58:30 compute-1 ceph-mon[81775]: osdmap e41: 3 total, 3 up, 3 in
Jan 20 13:58:30 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 20 13:58:30 compute-1 ceph-mon[81775]: Updating compute-2:/var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/config/ceph.conf
Jan 20 13:58:30 compute-1 ceph-mon[81775]: Updating compute-0:/var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/config/ceph.conf
Jan 20 13:58:30 compute-1 ceph-mon[81775]: Updating compute-1:/var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/config/ceph.conf
Jan 20 13:58:30 compute-1 ceph-mon[81775]: 3.17 scrub starts
Jan 20 13:58:30 compute-1 ceph-mon[81775]: 3.17 scrub ok
Jan 20 13:58:30 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e42 e42: 3 total, 3 up, 3 in
Jan 20 13:58:30 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 41 pg[6.1e( empty local-lis/les=36/37 n=0 ec=32/22 lis/c=36/36 les/c/f=37/37/0 sis=41 pruub=9.980747223s) [2] r=-1 lpr=41 pi=[36,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.409347534s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:58:30 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 42 pg[6.1e( empty local-lis/les=36/37 n=0 ec=32/22 lis/c=36/36 les/c/f=37/37/0 sis=41 pruub=9.980671883s) [2] r=-1 lpr=41 pi=[36,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.409347534s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:58:30 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 41 pg[4.1f( empty local-lis/les=36/37 n=0 ec=30/18 lis/c=36/36 les/c/f=37/37/0 sis=41 pruub=9.984628677s) [2] r=-1 lpr=41 pi=[36,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.413330078s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:58:30 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 42 pg[4.1f( empty local-lis/les=36/37 n=0 ec=30/18 lis/c=36/36 les/c/f=37/37/0 sis=41 pruub=9.984438896s) [2] r=-1 lpr=41 pi=[36,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.413330078s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:58:30 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 41 pg[6.1c( empty local-lis/les=36/37 n=0 ec=32/22 lis/c=36/36 les/c/f=37/37/0 sis=41 pruub=9.984391212s) [2] r=-1 lpr=41 pi=[36,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.413360596s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:58:30 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 42 pg[6.1c( empty local-lis/les=36/37 n=0 ec=32/22 lis/c=36/36 les/c/f=37/37/0 sis=41 pruub=9.984327316s) [2] r=-1 lpr=41 pi=[36,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.413360596s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:58:30 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 41 pg[2.18( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=41 pruub=6.066934586s) [2] r=-1 lpr=41 pi=[28,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 74.496124268s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:58:30 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 42 pg[2.18( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=41 pruub=6.066905975s) [2] r=-1 lpr=41 pi=[28,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 74.496124268s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:58:30 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 41 pg[6.12( empty local-lis/les=36/37 n=0 ec=32/22 lis/c=36/36 les/c/f=37/37/0 sis=41 pruub=9.984229088s) [2] r=-1 lpr=41 pi=[36,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.413505554s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:58:30 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 42 pg[6.12( empty local-lis/les=36/37 n=0 ec=32/22 lis/c=36/36 les/c/f=37/37/0 sis=41 pruub=9.984197617s) [2] r=-1 lpr=41 pi=[36,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.413505554s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:58:30 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 41 pg[6.17( empty local-lis/les=36/37 n=0 ec=32/22 lis/c=36/36 les/c/f=37/37/0 sis=41 pruub=9.986830711s) [2] r=-1 lpr=41 pi=[36,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.416221619s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:58:30 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 41 pg[7.11( empty local-lis/les=32/34 n=0 ec=32/24 lis/c=32/32 les/c/f=34/34/0 sis=41 pruub=3.593162298s) [2] r=-1 lpr=41 pi=[32,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 72.022560120s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:58:30 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 41 pg[3.15( empty local-lis/les=36/37 n=0 ec=28/17 lis/c=36/36 les/c/f=37/37/0 sis=41 pruub=9.984086037s) [2] r=-1 lpr=41 pi=[36,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.413490295s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:58:30 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 42 pg[6.17( empty local-lis/les=36/37 n=0 ec=32/22 lis/c=36/36 les/c/f=37/37/0 sis=41 pruub=9.986802101s) [2] r=-1 lpr=41 pi=[36,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.416221619s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:58:30 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 42 pg[3.15( empty local-lis/les=36/37 n=0 ec=28/17 lis/c=36/36 les/c/f=37/37/0 sis=41 pruub=9.984040260s) [2] r=-1 lpr=41 pi=[36,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.413490295s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:58:30 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 42 pg[7.11( empty local-lis/les=32/34 n=0 ec=32/24 lis/c=32/32 les/c/f=34/34/0 sis=41 pruub=3.593111753s) [2] r=-1 lpr=41 pi=[32,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 72.022560120s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:58:30 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 41 pg[4.15( empty local-lis/les=36/37 n=0 ec=30/18 lis/c=36/36 les/c/f=37/37/0 sis=41 pruub=9.984107971s) [2] r=-1 lpr=41 pi=[36,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.413619995s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:58:30 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 42 pg[4.15( empty local-lis/les=36/37 n=0 ec=30/18 lis/c=36/36 les/c/f=37/37/0 sis=41 pruub=9.984050751s) [2] r=-1 lpr=41 pi=[36,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.413619995s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:58:30 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 41 pg[7.16( empty local-lis/les=32/34 n=0 ec=32/24 lis/c=32/32 les/c/f=34/34/0 sis=41 pruub=3.593186617s) [2] r=-1 lpr=41 pi=[32,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 72.022766113s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:58:30 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 42 pg[7.16( empty local-lis/les=32/34 n=0 ec=32/24 lis/c=32/32 les/c/f=34/34/0 sis=41 pruub=3.593151093s) [2] r=-1 lpr=41 pi=[32,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 72.022766113s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:58:30 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 41 pg[7.1f( empty local-lis/les=32/34 n=0 ec=32/24 lis/c=32/32 les/c/f=34/34/0 sis=41 pruub=3.588076591s) [2] r=-1 lpr=41 pi=[32,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 72.017723083s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:58:30 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 41 pg[2.12( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=41 pruub=6.065394878s) [2] r=-1 lpr=41 pi=[28,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 74.495071411s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:58:30 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 42 pg[7.1f( empty local-lis/les=32/34 n=0 ec=32/24 lis/c=32/32 les/c/f=34/34/0 sis=41 pruub=3.588032246s) [2] r=-1 lpr=41 pi=[32,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 72.017723083s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:58:30 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 42 pg[2.12( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=41 pruub=6.065350533s) [2] r=-1 lpr=41 pi=[28,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 74.495071411s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:58:30 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 41 pg[3.11( empty local-lis/les=36/37 n=0 ec=28/17 lis/c=36/36 les/c/f=37/37/0 sis=41 pruub=9.983944893s) [2] r=-1 lpr=41 pi=[36,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.413742065s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:58:30 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 41 pg[3.e( empty local-lis/les=36/37 n=0 ec=28/17 lis/c=36/36 les/c/f=37/37/0 sis=41 pruub=9.983512878s) [2] r=-1 lpr=41 pi=[36,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.413772583s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:58:30 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 41 pg[4.9( empty local-lis/les=36/37 n=0 ec=30/18 lis/c=36/36 les/c/f=37/37/0 sis=41 pruub=9.983522415s) [2] r=-1 lpr=41 pi=[36,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.413810730s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:58:30 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 42 pg[3.e( empty local-lis/les=36/37 n=0 ec=28/17 lis/c=36/36 les/c/f=37/37/0 sis=41 pruub=9.983480453s) [2] r=-1 lpr=41 pi=[36,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.413772583s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:58:30 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 42 pg[3.11( empty local-lis/les=36/37 n=0 ec=28/17 lis/c=36/36 les/c/f=37/37/0 sis=41 pruub=9.983432770s) [2] r=-1 lpr=41 pi=[36,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.413742065s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:58:30 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 42 pg[4.9( empty local-lis/les=36/37 n=0 ec=30/18 lis/c=36/36 les/c/f=37/37/0 sis=41 pruub=9.983484268s) [2] r=-1 lpr=41 pi=[36,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.413810730s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:58:30 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 41 pg[4.8( empty local-lis/les=36/37 n=0 ec=30/18 lis/c=36/36 les/c/f=37/37/0 sis=41 pruub=9.983489990s) [2] r=-1 lpr=41 pi=[36,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.413917542s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:58:30 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 42 pg[4.8( empty local-lis/les=36/37 n=0 ec=30/18 lis/c=36/36 les/c/f=37/37/0 sis=41 pruub=9.983444214s) [2] r=-1 lpr=41 pi=[36,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.413917542s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:58:30 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 41 pg[5.4( empty local-lis/les=36/37 n=0 ec=30/20 lis/c=36/36 les/c/f=37/37/0 sis=41 pruub=9.983516693s) [2] r=-1 lpr=41 pi=[36,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.414077759s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:58:30 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 41 pg[2.f( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=41 pruub=6.064367294s) [2] r=-1 lpr=41 pi=[28,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 74.494987488s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:58:30 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 41 pg[2.b( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=41 pruub=6.063500404s) [2] r=-1 lpr=41 pi=[28,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 74.494148254s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:58:30 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 42 pg[5.4( empty local-lis/les=36/37 n=0 ec=30/20 lis/c=36/36 les/c/f=37/37/0 sis=41 pruub=9.983321190s) [2] r=-1 lpr=41 pi=[36,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.414077759s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:58:30 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 41 pg[7.5( empty local-lis/les=32/34 n=0 ec=32/24 lis/c=32/32 les/c/f=34/34/0 sis=41 pruub=3.592596769s) [2] r=-1 lpr=41 pi=[32,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 72.023422241s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:58:30 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 42 pg[7.5( empty local-lis/les=32/34 n=0 ec=32/24 lis/c=32/32 les/c/f=34/34/0 sis=41 pruub=3.592567205s) [2] r=-1 lpr=41 pi=[32,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 72.023422241s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:58:30 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 42 pg[2.b( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=41 pruub=6.063432217s) [2] r=-1 lpr=41 pi=[28,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 74.494148254s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:58:30 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 41 pg[2.5( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=41 pruub=6.062843323s) [2] r=-1 lpr=41 pi=[28,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 74.493736267s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:58:30 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 42 pg[2.5( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=41 pruub=6.062771320s) [2] r=-1 lpr=41 pi=[28,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 74.493736267s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:58:30 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 42 pg[2.f( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=41 pruub=6.064172745s) [2] r=-1 lpr=41 pi=[28,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 74.494987488s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:58:30 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 41 pg[4.1( empty local-lis/les=36/37 n=0 ec=30/18 lis/c=36/36 les/c/f=37/37/0 sis=41 pruub=9.983118057s) [2] r=-1 lpr=41 pi=[36,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.414276123s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:58:30 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 42 pg[4.1( empty local-lis/les=36/37 n=0 ec=30/18 lis/c=36/36 les/c/f=37/37/0 sis=41 pruub=9.982706070s) [2] r=-1 lpr=41 pi=[36,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.414276123s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:58:30 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 41 pg[3.1a( empty local-lis/les=36/37 n=0 ec=28/17 lis/c=36/36 les/c/f=37/37/0 sis=41 pruub=9.982971191s) [2] r=-1 lpr=41 pi=[36,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.414588928s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:58:30 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 41 pg[3.9( empty local-lis/les=36/37 n=0 ec=28/17 lis/c=36/36 les/c/f=37/37/0 sis=41 pruub=9.982712746s) [2] r=-1 lpr=41 pi=[36,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.414337158s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:58:30 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 42 pg[3.1a( empty local-lis/les=36/37 n=0 ec=28/17 lis/c=36/36 les/c/f=37/37/0 sis=41 pruub=9.982943535s) [2] r=-1 lpr=41 pi=[36,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.414588928s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:58:30 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 41 pg[3.1d( empty local-lis/les=36/37 n=0 ec=28/17 lis/c=36/36 les/c/f=37/37/0 sis=41 pruub=9.982770920s) [2] r=-1 lpr=41 pi=[36,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.414451599s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:58:30 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 42 pg[3.9( empty local-lis/les=36/37 n=0 ec=28/17 lis/c=36/36 les/c/f=37/37/0 sis=41 pruub=9.982671738s) [2] r=-1 lpr=41 pi=[36,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.414337158s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:58:30 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 41 pg[5.e( empty local-lis/les=36/37 n=0 ec=30/20 lis/c=36/36 les/c/f=37/37/0 sis=41 pruub=9.982653618s) [2] r=-1 lpr=41 pi=[36,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.414367676s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:58:30 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 42 pg[3.1d( empty local-lis/les=36/37 n=0 ec=28/17 lis/c=36/36 les/c/f=37/37/0 sis=41 pruub=9.982739449s) [2] r=-1 lpr=41 pi=[36,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.414451599s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:58:30 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 42 pg[5.e( empty local-lis/les=36/37 n=0 ec=30/20 lis/c=36/36 les/c/f=37/37/0 sis=41 pruub=9.982616425s) [2] r=-1 lpr=41 pi=[36,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.414367676s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:58:30 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 41 pg[2.1c( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=41 pruub=6.063718319s) [2] r=-1 lpr=41 pi=[28,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 74.495536804s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:58:30 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 42 pg[2.1c( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=41 pruub=6.063689232s) [2] r=-1 lpr=41 pi=[28,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 74.495536804s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:58:30 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 41 pg[5.1a( empty local-lis/les=36/37 n=0 ec=30/20 lis/c=36/36 les/c/f=37/37/0 sis=41 pruub=9.982590675s) [2] r=-1 lpr=41 pi=[36,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.414543152s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:58:30 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 41 pg[2.1d( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=41 pruub=6.061774254s) [2] r=-1 lpr=41 pi=[28,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 74.493736267s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:58:30 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 42 pg[5.1a( empty local-lis/les=36/37 n=0 ec=30/20 lis/c=36/36 les/c/f=37/37/0 sis=41 pruub=9.982562065s) [2] r=-1 lpr=41 pi=[36,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.414543152s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:58:30 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 42 pg[2.1d( empty local-lis/les=28/29 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=41 pruub=6.061741352s) [2] r=-1 lpr=41 pi=[28,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 74.493736267s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:58:30 compute-1 sudo[83388]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-e399cf45-e6b6-5393-99f1-75c601d3f188/var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/config/ceph.conf.new
Jan 20 13:58:30 compute-1 sudo[83388]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:58:30 compute-1 sudo[83388]: pam_unix(sudo:session): session closed for user root
Jan 20 13:58:30 compute-1 sudo[83413]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:58:30 compute-1 sudo[83413]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:58:30 compute-1 sudo[83413]: pam_unix(sudo:session): session closed for user root
Jan 20 13:58:30 compute-1 sudo[83438]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-e399cf45-e6b6-5393-99f1-75c601d3f188/var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/config/ceph.conf.new /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/config/ceph.conf
Jan 20 13:58:30 compute-1 sudo[83438]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:58:30 compute-1 sudo[83438]: pam_unix(sudo:session): session closed for user root
Jan 20 13:58:31 compute-1 ceph-mon[81775]: from='client.14310 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 20 13:58:31 compute-1 ceph-mon[81775]: osdmap e42: 3 total, 3 up, 3 in
Jan 20 13:58:31 compute-1 ceph-mon[81775]: 3.18 deep-scrub starts
Jan 20 13:58:31 compute-1 ceph-mon[81775]: 3.18 deep-scrub ok
Jan 20 13:58:31 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:31 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:31 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:31 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:31 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:31 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:31 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:31 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 13:58:31 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 13:58:31 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 13:58:32 compute-1 ceph-osd[79119]: log_channel(cluster) log [DBG] : 2.11 scrub starts
Jan 20 13:58:32 compute-1 ceph-osd[79119]: log_channel(cluster) log [DBG] : 2.11 scrub ok
Jan 20 13:58:32 compute-1 ceph-mon[81775]: pgmap v114: 193 pgs: 28 peering, 165 active+clean; 449 KiB data, 480 MiB used, 21 GiB / 21 GiB avail
Jan 20 13:58:32 compute-1 ceph-mon[81775]: 3.1b scrub starts
Jan 20 13:58:32 compute-1 ceph-mon[81775]: 3.1b scrub ok
Jan 20 13:58:33 compute-1 ceph-osd[79119]: log_channel(cluster) log [DBG] : 2.14 scrub starts
Jan 20 13:58:33 compute-1 ceph-osd[79119]: log_channel(cluster) log [DBG] : 2.14 scrub ok
Jan 20 13:58:33 compute-1 ceph-mon[81775]: from='client.14316 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 20 13:58:33 compute-1 ceph-mon[81775]: 2.11 scrub starts
Jan 20 13:58:33 compute-1 ceph-mon[81775]: 2.11 scrub ok
Jan 20 13:58:33 compute-1 sshd-session[82638]: Connection closed by authenticating user root 116.99.171.211 port 53654 [preauth]
Jan 20 13:58:34 compute-1 ceph-osd[79119]: log_channel(cluster) log [DBG] : 2.16 scrub starts
Jan 20 13:58:34 compute-1 ceph-osd[79119]: log_channel(cluster) log [DBG] : 2.16 scrub ok
Jan 20 13:58:34 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e42 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 13:58:34 compute-1 ceph-mon[81775]: pgmap v115: 193 pgs: 28 peering, 165 active+clean; 449 KiB data, 480 MiB used, 21 GiB / 21 GiB avail
Jan 20 13:58:34 compute-1 ceph-mon[81775]: 2.14 scrub starts
Jan 20 13:58:34 compute-1 ceph-mon[81775]: 2.14 scrub ok
Jan 20 13:58:34 compute-1 ceph-mon[81775]: 3.19 scrub starts
Jan 20 13:58:34 compute-1 ceph-mon[81775]: 3.19 scrub ok
Jan 20 13:58:35 compute-1 ceph-mon[81775]: from='client.14322 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 20 13:58:35 compute-1 ceph-mon[81775]: 2.16 scrub starts
Jan 20 13:58:35 compute-1 ceph-mon[81775]: 2.16 scrub ok
Jan 20 13:58:35 compute-1 ceph-mon[81775]: 4.1d deep-scrub starts
Jan 20 13:58:35 compute-1 ceph-mon[81775]: 4.1d deep-scrub ok
Jan 20 13:58:36 compute-1 ceph-mon[81775]: pgmap v116: 193 pgs: 193 active+clean; 449 KiB data, 480 MiB used, 21 GiB / 21 GiB avail
Jan 20 13:58:36 compute-1 ceph-mon[81775]: 3.1e scrub starts
Jan 20 13:58:36 compute-1 ceph-mon[81775]: 3.1e scrub ok
Jan 20 13:58:36 compute-1 ceph-mon[81775]: from='client.14328 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 20 13:58:37 compute-1 ceph-osd[79119]: log_channel(cluster) log [DBG] : 2.17 scrub starts
Jan 20 13:58:37 compute-1 ceph-osd[79119]: log_channel(cluster) log [DBG] : 2.17 scrub ok
Jan 20 13:58:37 compute-1 ceph-mon[81775]: 3.1f scrub starts
Jan 20 13:58:37 compute-1 ceph-mon[81775]: 3.1f scrub ok
Jan 20 13:58:37 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:37 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:37 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.ktpnzt", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Jan 20 13:58:37 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.ktpnzt", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Jan 20 13:58:37 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:37 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 13:58:37 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/347102734' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Jan 20 13:58:38 compute-1 ceph-osd[79119]: log_channel(cluster) log [DBG] : 2.1a scrub starts
Jan 20 13:58:38 compute-1 ceph-osd[79119]: log_channel(cluster) log [DBG] : 2.1a scrub ok
Jan 20 13:58:38 compute-1 ceph-mon[81775]: pgmap v117: 193 pgs: 193 active+clean; 449 KiB data, 480 MiB used, 21 GiB / 21 GiB avail
Jan 20 13:58:38 compute-1 ceph-mon[81775]: Deploying daemon rgw.rgw.compute-2.ktpnzt on compute-2
Jan 20 13:58:38 compute-1 ceph-mon[81775]: 2.17 scrub starts
Jan 20 13:58:38 compute-1 ceph-mon[81775]: 2.17 scrub ok
Jan 20 13:58:38 compute-1 ceph-mon[81775]: 5.3 scrub starts
Jan 20 13:58:38 compute-1 ceph-mon[81775]: 5.3 scrub ok
Jan 20 13:58:39 compute-1 sudo[83463]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:58:39 compute-1 sudo[83463]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:58:39 compute-1 sudo[83463]: pam_unix(sudo:session): session closed for user root
Jan 20 13:58:39 compute-1 sudo[83488]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 13:58:39 compute-1 sudo[83488]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:58:39 compute-1 sudo[83488]: pam_unix(sudo:session): session closed for user root
Jan 20 13:58:39 compute-1 sudo[83513]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:58:39 compute-1 sudo[83513]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:58:39 compute-1 sudo[83513]: pam_unix(sudo:session): session closed for user root
Jan 20 13:58:39 compute-1 sudo[83538]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid e399cf45-e6b6-5393-99f1-75c601d3f188
Jan 20 13:58:39 compute-1 sudo[83538]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:58:39 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e42 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 13:58:39 compute-1 podman[83604]: 2026-01-20 13:58:39.97117181 +0000 UTC m=+0.067810801 container create 88a0bb343253b352590ef93cabf3d6a81fa582a2485db1d96ab8e47f9b7f0563 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_mclaren, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 13:58:39 compute-1 systemd[72729]: Starting Mark boot as successful...
Jan 20 13:58:39 compute-1 systemd[72729]: Finished Mark boot as successful.
Jan 20 13:58:40 compute-1 podman[83604]: 2026-01-20 13:58:39.936239968 +0000 UTC m=+0.032879019 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 13:58:40 compute-1 systemd[1]: Started libpod-conmon-88a0bb343253b352590ef93cabf3d6a81fa582a2485db1d96ab8e47f9b7f0563.scope.
Jan 20 13:58:40 compute-1 ceph-mon[81775]: 2.1a scrub starts
Jan 20 13:58:40 compute-1 ceph-mon[81775]: 2.1a scrub ok
Jan 20 13:58:40 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:40 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:40 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:40 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.orkqpg", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Jan 20 13:58:40 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.orkqpg", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Jan 20 13:58:40 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:40 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 13:58:40 compute-1 ceph-mon[81775]: Deploying daemon rgw.rgw.compute-1.orkqpg on compute-1
Jan 20 13:58:40 compute-1 ceph-mon[81775]: pgmap v118: 193 pgs: 193 active+clean; 449 KiB data, 480 MiB used, 21 GiB / 21 GiB avail
Jan 20 13:58:40 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/238419675' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Jan 20 13:58:40 compute-1 systemd[1]: Started libcrun container.
Jan 20 13:58:40 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e43 e43: 3 total, 3 up, 3 in
Jan 20 13:58:40 compute-1 podman[83604]: 2026-01-20 13:58:40.113777096 +0000 UTC m=+0.210416107 container init 88a0bb343253b352590ef93cabf3d6a81fa582a2485db1d96ab8e47f9b7f0563 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_mclaren, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Jan 20 13:58:40 compute-1 podman[83604]: 2026-01-20 13:58:40.126799943 +0000 UTC m=+0.223438914 container start 88a0bb343253b352590ef93cabf3d6a81fa582a2485db1d96ab8e47f9b7f0563 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_mclaren, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Jan 20 13:58:40 compute-1 podman[83604]: 2026-01-20 13:58:40.131518421 +0000 UTC m=+0.228157392 container attach 88a0bb343253b352590ef93cabf3d6a81fa582a2485db1d96ab8e47f9b7f0563 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_mclaren, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Jan 20 13:58:40 compute-1 gifted_mclaren[83621]: 167 167
Jan 20 13:58:40 compute-1 systemd[1]: libpod-88a0bb343253b352590ef93cabf3d6a81fa582a2485db1d96ab8e47f9b7f0563.scope: Deactivated successfully.
Jan 20 13:58:40 compute-1 podman[83604]: 2026-01-20 13:58:40.135085742 +0000 UTC m=+0.231724713 container died 88a0bb343253b352590ef93cabf3d6a81fa582a2485db1d96ab8e47f9b7f0563 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_mclaren, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 13:58:40 compute-1 systemd[1]: var-lib-containers-storage-overlay-78e46cce27f96a9c69d7e33c1c56d4dc8b71cd0755f4a8a844e5a3d399c27cff-merged.mount: Deactivated successfully.
Jan 20 13:58:40 compute-1 podman[83604]: 2026-01-20 13:58:40.176730594 +0000 UTC m=+0.273369595 container remove 88a0bb343253b352590ef93cabf3d6a81fa582a2485db1d96ab8e47f9b7f0563 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_mclaren, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Jan 20 13:58:40 compute-1 systemd[1]: libpod-conmon-88a0bb343253b352590ef93cabf3d6a81fa582a2485db1d96ab8e47f9b7f0563.scope: Deactivated successfully.
Jan 20 13:58:40 compute-1 systemd[1]: Reloading.
Jan 20 13:58:40 compute-1 systemd-rc-local-generator[83663]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 13:58:40 compute-1 systemd-sysv-generator[83671]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 13:58:40 compute-1 systemd[1]: Reloading.
Jan 20 13:58:40 compute-1 systemd-sysv-generator[83713]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 13:58:40 compute-1 systemd-rc-local-generator[83706]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 13:58:40 compute-1 systemd[1]: Starting Ceph rgw.rgw.compute-1.orkqpg for e399cf45-e6b6-5393-99f1-75c601d3f188...
Jan 20 13:58:41 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e44 e44: 3 total, 3 up, 3 in
Jan 20 13:58:41 compute-1 ceph-mon[81775]: 6.1 scrub starts
Jan 20 13:58:41 compute-1 ceph-mon[81775]: 6.1 scrub ok
Jan 20 13:58:41 compute-1 ceph-mon[81775]: osdmap e43: 3 total, 3 up, 3 in
Jan 20 13:58:41 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/1247667946' entity='client.rgw.rgw.compute-2.ktpnzt' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Jan 20 13:58:41 compute-1 ceph-mon[81775]: from='client.? ' entity='client.rgw.rgw.compute-2.ktpnzt' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Jan 20 13:58:41 compute-1 ceph-mon[81775]: 5.5 deep-scrub starts
Jan 20 13:58:41 compute-1 ceph-mon[81775]: 5.5 deep-scrub ok
Jan 20 13:58:41 compute-1 podman[83766]: 2026-01-20 13:58:41.208203968 +0000 UTC m=+0.055077522 container create b3823476695d3583b002dcb0048f7c2e7afdb2953c94118039f1275878a4001e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-rgw-rgw-compute-1-orkqpg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 13:58:41 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d79394fd7e44d8a4966f50cadc4a3278e0964309fc67e4f603ebf7a9e34025af/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 13:58:41 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d79394fd7e44d8a4966f50cadc4a3278e0964309fc67e4f603ebf7a9e34025af/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 13:58:41 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d79394fd7e44d8a4966f50cadc4a3278e0964309fc67e4f603ebf7a9e34025af/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 13:58:41 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d79394fd7e44d8a4966f50cadc4a3278e0964309fc67e4f603ebf7a9e34025af/merged/var/lib/ceph/radosgw/ceph-rgw.rgw.compute-1.orkqpg supports timestamps until 2038 (0x7fffffff)
Jan 20 13:58:41 compute-1 podman[83766]: 2026-01-20 13:58:41.182307219 +0000 UTC m=+0.029180843 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 13:58:41 compute-1 podman[83766]: 2026-01-20 13:58:41.29850598 +0000 UTC m=+0.145379604 container init b3823476695d3583b002dcb0048f7c2e7afdb2953c94118039f1275878a4001e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-rgw-rgw-compute-1-orkqpg, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 13:58:41 compute-1 podman[83766]: 2026-01-20 13:58:41.305415036 +0000 UTC m=+0.152288620 container start b3823476695d3583b002dcb0048f7c2e7afdb2953c94118039f1275878a4001e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-rgw-rgw-compute-1-orkqpg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 13:58:41 compute-1 bash[83766]: b3823476695d3583b002dcb0048f7c2e7afdb2953c94118039f1275878a4001e
Jan 20 13:58:41 compute-1 systemd[1]: Started Ceph rgw.rgw.compute-1.orkqpg for e399cf45-e6b6-5393-99f1-75c601d3f188.
Jan 20 13:58:41 compute-1 sudo[83538]: pam_unix(sudo:session): session closed for user root
Jan 20 13:58:41 compute-1 radosgw[83787]: deferred set uid:gid to 167:167 (ceph:ceph)
Jan 20 13:58:41 compute-1 radosgw[83787]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process radosgw, pid 2
Jan 20 13:58:41 compute-1 radosgw[83787]: framework: beast
Jan 20 13:58:41 compute-1 radosgw[83787]: framework conf key: endpoint, val: 192.168.122.101:8082
Jan 20 13:58:41 compute-1 radosgw[83787]: init_numa not setting numa affinity
Jan 20 13:58:42 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e45 e45: 3 total, 3 up, 3 in
Jan 20 13:58:42 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} v 0) v1
Jan 20 13:58:42 compute-1 ceph-mon[81775]: log_channel(audit) log [INF] : from='client.? 192.168.122.101:0/2347323994' entity='client.rgw.rgw.compute-1.orkqpg' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Jan 20 13:58:42 compute-1 ceph-mon[81775]: from='client.? ' entity='client.rgw.rgw.compute-2.ktpnzt' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Jan 20 13:58:42 compute-1 ceph-mon[81775]: osdmap e44: 3 total, 3 up, 3 in
Jan 20 13:58:42 compute-1 ceph-mon[81775]: pgmap v121: 194 pgs: 1 unknown, 193 active+clean; 449 KiB data, 80 MiB used, 21 GiB / 21 GiB avail
Jan 20 13:58:42 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/969844602' entity='client.admin' cmd=[{"prefix": "osd get-require-min-compat-client"}]: dispatch
Jan 20 13:58:42 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:42 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:42 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:42 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.kiggjh", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Jan 20 13:58:42 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.kiggjh", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Jan 20 13:58:42 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:42 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 13:58:42 compute-1 ceph-mon[81775]: Deploying daemon rgw.rgw.compute-0.kiggjh on compute-0
Jan 20 13:58:42 compute-1 ceph-mon[81775]: 4.4 scrub starts
Jan 20 13:58:42 compute-1 ceph-mon[81775]: 4.4 scrub ok
Jan 20 13:58:42 compute-1 ceph-mon[81775]: osdmap e45: 3 total, 3 up, 3 in
Jan 20 13:58:43 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e46 e46: 3 total, 3 up, 3 in
Jan 20 13:58:43 compute-1 ceph-mon[81775]: 4.3 scrub starts
Jan 20 13:58:43 compute-1 ceph-mon[81775]: 4.3 scrub ok
Jan 20 13:58:43 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/1247667946' entity='client.rgw.rgw.compute-2.ktpnzt' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Jan 20 13:58:43 compute-1 ceph-mon[81775]: from='client.? ' entity='client.rgw.rgw.compute-2.ktpnzt' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Jan 20 13:58:43 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/2347323994' entity='client.rgw.rgw.compute-1.orkqpg' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Jan 20 13:58:43 compute-1 ceph-mon[81775]: from='client.? ' entity='client.rgw.rgw.compute-1.orkqpg' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Jan 20 13:58:44 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e47 e47: 3 total, 3 up, 3 in
Jan 20 13:58:44 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0) v1
Jan 20 13:58:44 compute-1 ceph-mon[81775]: log_channel(audit) log [INF] : from='client.? 192.168.122.101:0/2347323994' entity='client.rgw.rgw.compute-1.orkqpg' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Jan 20 13:58:44 compute-1 ceph-mon[81775]: 6.1b scrub starts
Jan 20 13:58:44 compute-1 ceph-mon[81775]: 6.1b scrub ok
Jan 20 13:58:44 compute-1 ceph-mon[81775]: from='client.? ' entity='client.rgw.rgw.compute-2.ktpnzt' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Jan 20 13:58:44 compute-1 ceph-mon[81775]: from='client.? ' entity='client.rgw.rgw.compute-1.orkqpg' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Jan 20 13:58:44 compute-1 ceph-mon[81775]: osdmap e46: 3 total, 3 up, 3 in
Jan 20 13:58:44 compute-1 ceph-mon[81775]: pgmap v124: 195 pgs: 2 unknown, 193 active+clean; 449 KiB data, 80 MiB used, 21 GiB / 21 GiB avail
Jan 20 13:58:44 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:44 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:44 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:44 compute-1 ceph-mon[81775]: Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Jan 20 13:58:44 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/361094635' entity='client.admin' cmd=[{"prefix": "versions", "format": "json"}]: dispatch
Jan 20 13:58:44 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:44 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:44 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.jyxktq", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Jan 20 13:58:44 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.jyxktq", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Jan 20 13:58:44 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 13:58:44 compute-1 ceph-mon[81775]: Deploying daemon mds.cephfs.compute-2.jyxktq on compute-2
Jan 20 13:58:44 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:44 compute-1 ceph-mon[81775]: 5.6 scrub starts
Jan 20 13:58:44 compute-1 ceph-mon[81775]: 5.6 scrub ok
Jan 20 13:58:44 compute-1 ceph-mon[81775]: osdmap e47: 3 total, 3 up, 3 in
Jan 20 13:58:44 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 47 pg[10.0( empty local-lis/les=0/0 n=0 ec=47/47 lis/c=0/0 les/c/f=0/0/0 sis=47) [1] r=0 lpr=47 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:44 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 13:58:45 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e48 e48: 3 total, 3 up, 3 in
Jan 20 13:58:45 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 48 pg[10.0( empty local-lis/les=47/48 n=0 ec=47/47 lis/c=0/0 les/c/f=0/0/0 sis=47) [1] r=0 lpr=47 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:45 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/2347323994' entity='client.rgw.rgw.compute-1.orkqpg' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Jan 20 13:58:45 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/418792044' entity='client.rgw.rgw.compute-0.kiggjh' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Jan 20 13:58:45 compute-1 ceph-mon[81775]: from='client.? ' entity='client.rgw.rgw.compute-1.orkqpg' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Jan 20 13:58:45 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/1247667946' entity='client.rgw.rgw.compute-2.ktpnzt' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Jan 20 13:58:45 compute-1 ceph-mon[81775]: from='client.? ' entity='client.rgw.rgw.compute-2.ktpnzt' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Jan 20 13:58:45 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/418792044' entity='client.rgw.rgw.compute-0.kiggjh' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Jan 20 13:58:45 compute-1 ceph-mon[81775]: from='client.? ' entity='client.rgw.rgw.compute-1.orkqpg' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Jan 20 13:58:45 compute-1 ceph-mon[81775]: from='client.? ' entity='client.rgw.rgw.compute-2.ktpnzt' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Jan 20 13:58:45 compute-1 ceph-mon[81775]: osdmap e48: 3 total, 3 up, 3 in
Jan 20 13:58:46 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e49 e49: 3 total, 3 up, 3 in
Jan 20 13:58:46 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0) v1
Jan 20 13:58:46 compute-1 ceph-mon[81775]: log_channel(audit) log [INF] : from='client.? 192.168.122.101:0/1523026806' entity='client.rgw.rgw.compute-1.orkqpg' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Jan 20 13:58:46 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).mds e3 new map
Jan 20 13:58:46 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).mds e3 print_map
                                           e3
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        2
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2026-01-20T13:58:19.644785+0000
                                           modified        2026-01-20T13:58:19.644841+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           max_mds        1
                                           in        
                                           up        {}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-2.jyxktq{-1:24178} state up:standby seq 1 addr [v2:192.168.122.102:6804/2187119920,v1:192.168.122.102:6805/2187119920] compat {c=[1],r=[1],i=[7ff]}]
Jan 20 13:58:46 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).mds e4 new map
Jan 20 13:58:46 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).mds e4 print_map
                                           e4
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        4
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2026-01-20T13:58:19.644785+0000
                                           modified        2026-01-20T13:58:46.558090+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           max_mds        1
                                           in        0
                                           up        {0=24178}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           [mds.cephfs.compute-2.jyxktq{0:24178} state up:creating seq 1 addr [v2:192.168.122.102:6804/2187119920,v1:192.168.122.102:6805/2187119920] compat {c=[1],r=[1],i=[7ff]}]
                                            
                                            
Jan 20 13:58:46 compute-1 ceph-mon[81775]: pgmap v127: 196 pgs: 1 unknown, 195 active+clean; 450 KiB data, 80 MiB used, 21 GiB / 21 GiB avail; 3.0 KiB/s rd, 1023 B/s wr, 3 op/s
Jan 20 13:58:46 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:46 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:46 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:46 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.znrafi", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Jan 20 13:58:46 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.znrafi", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Jan 20 13:58:46 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 13:58:46 compute-1 ceph-mon[81775]: Deploying daemon mds.cephfs.compute-0.znrafi on compute-0
Jan 20 13:58:46 compute-1 ceph-mon[81775]: osdmap e49: 3 total, 3 up, 3 in
Jan 20 13:58:46 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/4243338850' entity='client.rgw.rgw.compute-0.kiggjh' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Jan 20 13:58:46 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/1523026806' entity='client.rgw.rgw.compute-1.orkqpg' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Jan 20 13:58:46 compute-1 ceph-mon[81775]: from='client.? ' entity='client.rgw.rgw.compute-1.orkqpg' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Jan 20 13:58:46 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/159360274' entity='client.rgw.rgw.compute-2.ktpnzt' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Jan 20 13:58:46 compute-1 ceph-mon[81775]: from='client.? ' entity='client.rgw.rgw.compute-2.ktpnzt' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Jan 20 13:58:47 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e50 e50: 3 total, 3 up, 3 in
Jan 20 13:58:47 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0) v1
Jan 20 13:58:47 compute-1 ceph-mon[81775]: log_channel(audit) log [INF] : from='client.? 192.168.122.101:0/1523026806' entity='client.rgw.rgw.compute-1.orkqpg' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Jan 20 13:58:47 compute-1 ceph-mon[81775]: mds.? [v2:192.168.122.102:6804/2187119920,v1:192.168.122.102:6805/2187119920] up:boot
Jan 20 13:58:47 compute-1 ceph-mon[81775]: daemon mds.cephfs.compute-2.jyxktq assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Jan 20 13:58:47 compute-1 ceph-mon[81775]: Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Jan 20 13:58:47 compute-1 ceph-mon[81775]: Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Jan 20 13:58:47 compute-1 ceph-mon[81775]: Cluster is now healthy
Jan 20 13:58:47 compute-1 ceph-mon[81775]: fsmap cephfs:0 1 up:standby
Jan 20 13:58:47 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-2.jyxktq"}]: dispatch
Jan 20 13:58:47 compute-1 ceph-mon[81775]: fsmap cephfs:1 {0=cephfs.compute-2.jyxktq=up:creating}
Jan 20 13:58:47 compute-1 ceph-mon[81775]: daemon mds.cephfs.compute-2.jyxktq is now active in filesystem cephfs as rank 0
Jan 20 13:58:47 compute-1 ceph-mon[81775]: 4.6 deep-scrub starts
Jan 20 13:58:47 compute-1 ceph-mon[81775]: 4.6 deep-scrub ok
Jan 20 13:58:47 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/4243338850' entity='client.rgw.rgw.compute-0.kiggjh' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Jan 20 13:58:47 compute-1 ceph-mon[81775]: from='client.? ' entity='client.rgw.rgw.compute-1.orkqpg' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Jan 20 13:58:47 compute-1 ceph-mon[81775]: from='client.? ' entity='client.rgw.rgw.compute-2.ktpnzt' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Jan 20 13:58:47 compute-1 ceph-mon[81775]: osdmap e50: 3 total, 3 up, 3 in
Jan 20 13:58:47 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/4243338850' entity='client.rgw.rgw.compute-0.kiggjh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Jan 20 13:58:47 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/1523026806' entity='client.rgw.rgw.compute-1.orkqpg' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Jan 20 13:58:47 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/159360274' entity='client.rgw.rgw.compute-2.ktpnzt' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Jan 20 13:58:47 compute-1 ceph-mon[81775]: from='client.? ' entity='client.rgw.rgw.compute-1.orkqpg' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Jan 20 13:58:47 compute-1 ceph-mon[81775]: from='client.? ' entity='client.rgw.rgw.compute-2.ktpnzt' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Jan 20 13:58:47 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:47 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:47 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:47 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.rtofcx", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Jan 20 13:58:47 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.rtofcx", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Jan 20 13:58:47 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 13:58:47 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).mds e5 new map
Jan 20 13:58:47 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).mds e5 print_map
                                           e5
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        5
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2026-01-20T13:58:19.644785+0000
                                           modified        2026-01-20T13:58:47.570199+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           max_mds        1
                                           in        0
                                           up        {0=24178}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           [mds.cephfs.compute-2.jyxktq{0:24178} state up:active seq 2 addr [v2:192.168.122.102:6804/2187119920,v1:192.168.122.102:6805/2187119920] compat {c=[1],r=[1],i=[7ff]}]
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-0.znrafi{-1:14376} state up:standby seq 1 addr [v2:192.168.122.100:6806/2144836821,v1:192.168.122.100:6807/2144836821] compat {c=[1],r=[1],i=[7ff]}]
Jan 20 13:58:47 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).mds e6 new map
Jan 20 13:58:47 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).mds e6 print_map
                                           e6
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        5
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2026-01-20T13:58:19.644785+0000
                                           modified        2026-01-20T13:58:47.570199+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           max_mds        1
                                           in        0
                                           up        {0=24178}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        1
                                           [mds.cephfs.compute-2.jyxktq{0:24178} state up:active seq 2 addr [v2:192.168.122.102:6804/2187119920,v1:192.168.122.102:6805/2187119920] compat {c=[1],r=[1],i=[7ff]}]
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-0.znrafi{-1:14376} state up:standby seq 1 addr [v2:192.168.122.100:6806/2144836821,v1:192.168.122.100:6807/2144836821] compat {c=[1],r=[1],i=[7ff]}]
Jan 20 13:58:47 compute-1 sudo[83858]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:58:47 compute-1 sudo[83858]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:58:47 compute-1 sudo[83858]: pam_unix(sudo:session): session closed for user root
Jan 20 13:58:47 compute-1 sudo[83883]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 13:58:47 compute-1 sudo[83883]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:58:47 compute-1 sudo[83883]: pam_unix(sudo:session): session closed for user root
Jan 20 13:58:47 compute-1 sudo[83908]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:58:47 compute-1 sudo[83908]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:58:47 compute-1 sudo[83908]: pam_unix(sudo:session): session closed for user root
Jan 20 13:58:47 compute-1 sudo[83933]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid e399cf45-e6b6-5393-99f1-75c601d3f188
Jan 20 13:58:47 compute-1 sudo[83933]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:58:48 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e51 e51: 3 total, 3 up, 3 in
Jan 20 13:58:48 compute-1 radosgw[83787]: LDAP not started since no server URIs were provided in the configuration.
Jan 20 13:58:48 compute-1 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-rgw-rgw-compute-1-orkqpg[83783]: 2026-01-20T13:58:48.418+0000 7f0af41b1940 -1 LDAP not started since no server URIs were provided in the configuration.
Jan 20 13:58:48 compute-1 radosgw[83787]: framework: beast
Jan 20 13:58:48 compute-1 radosgw[83787]: framework conf key: ssl_certificate, val: config://rgw/cert/$realm/$zone.crt
Jan 20 13:58:48 compute-1 radosgw[83787]: framework conf key: ssl_private_key, val: config://rgw/cert/$realm/$zone.key
Jan 20 13:58:48 compute-1 radosgw[83787]: INFO: RGWReshardLock::lock found lock on reshard.0000000000 to be held by another RGW process; skipping for now
Jan 20 13:58:48 compute-1 radosgw[83787]: INFO: RGWReshardLock::lock found lock on reshard.0000000001 to be held by another RGW process; skipping for now
Jan 20 13:58:48 compute-1 radosgw[83787]: INFO: RGWReshardLock::lock found lock on reshard.0000000003 to be held by another RGW process; skipping for now
Jan 20 13:58:48 compute-1 radosgw[83787]: INFO: RGWReshardLock::lock found lock on reshard.0000000004 to be held by another RGW process; skipping for now
Jan 20 13:58:48 compute-1 radosgw[83787]: starting handler: beast
Jan 20 13:58:48 compute-1 podman[83997]: 2026-01-20 13:58:48.455957745 +0000 UTC m=+0.080968901 container create 695f2e5816d99eb87c13bf413a88ea9a65701e362cdc9e838c1e779764f1af4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_keldysh, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Jan 20 13:58:48 compute-1 radosgw[83787]: set uid:gid to 167:167 (ceph:ceph)
Jan 20 13:58:48 compute-1 radosgw[83787]: INFO: RGWReshardLock::lock found lock on reshard.0000000006 to be held by another RGW process; skipping for now
Jan 20 13:58:48 compute-1 radosgw[83787]: INFO: RGWReshardLock::lock found lock on reshard.0000000007 to be held by another RGW process; skipping for now
Jan 20 13:58:48 compute-1 radosgw[83787]: mgrc service_daemon_register rgw.24128 metadata {arch=x86_64,ceph_release=reef,ceph_version=ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable),ceph_version_short=18.2.7,container_hostname=compute-1,container_image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0,cpu=AMD EPYC-Rome Processor,distro=centos,distro_description=CentOS Stream 9,distro_version=9,frontend_config#0=beast endpoint=192.168.122.101:8082,frontend_type#0=beast,hostname=compute-1,id=rgw.compute-1.orkqpg,kernel_description=#1 SMP PREEMPT_DYNAMIC Fri Jan 16 09:19:22 UTC 2026,kernel_version=5.14.0-661.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864312,num_handles=1,os=Linux,pid=2,realm_id=,realm_name=,zone_id=8115d0e5-f46a-4d23-887b-99af6a666d4f,zone_name=default,zonegroup_id=1c9817d6-3061-4a20-aeb7-2a830f7cf40e,zonegroup_name=default}
Jan 20 13:58:48 compute-1 radosgw[83787]: INFO: RGWReshardLock::lock found lock on reshard.0000000009 to be held by another RGW process; skipping for now
Jan 20 13:58:48 compute-1 systemd[1]: Started libpod-conmon-695f2e5816d99eb87c13bf413a88ea9a65701e362cdc9e838c1e779764f1af4d.scope.
Jan 20 13:58:48 compute-1 podman[83997]: 2026-01-20 13:58:48.419353271 +0000 UTC m=+0.044364477 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 13:58:48 compute-1 radosgw[83787]: INFO: RGWReshardLock::lock found lock on reshard.0000000010 to be held by another RGW process; skipping for now
Jan 20 13:58:48 compute-1 radosgw[83787]: INFO: RGWReshardLock::lock found lock on reshard.0000000012 to be held by another RGW process; skipping for now
Jan 20 13:58:48 compute-1 radosgw[83787]: INFO: RGWReshardLock::lock found lock on reshard.0000000014 to be held by another RGW process; skipping for now
Jan 20 13:58:48 compute-1 systemd[1]: Started libcrun container.
Jan 20 13:58:48 compute-1 podman[83997]: 2026-01-20 13:58:48.574391726 +0000 UTC m=+0.199402902 container init 695f2e5816d99eb87c13bf413a88ea9a65701e362cdc9e838c1e779764f1af4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_keldysh, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True)
Jan 20 13:58:48 compute-1 podman[83997]: 2026-01-20 13:58:48.5895366 +0000 UTC m=+0.214547766 container start 695f2e5816d99eb87c13bf413a88ea9a65701e362cdc9e838c1e779764f1af4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_keldysh, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 20 13:58:48 compute-1 podman[83997]: 2026-01-20 13:58:48.594051611 +0000 UTC m=+0.219062777 container attach 695f2e5816d99eb87c13bf413a88ea9a65701e362cdc9e838c1e779764f1af4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_keldysh, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 13:58:48 compute-1 zen_keldysh[84557]: 167 167
Jan 20 13:58:48 compute-1 systemd[1]: libpod-695f2e5816d99eb87c13bf413a88ea9a65701e362cdc9e838c1e779764f1af4d.scope: Deactivated successfully.
Jan 20 13:58:48 compute-1 podman[83997]: 2026-01-20 13:58:48.600544984 +0000 UTC m=+0.225556150 container died 695f2e5816d99eb87c13bf413a88ea9a65701e362cdc9e838c1e779764f1af4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_keldysh, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 13:58:48 compute-1 ceph-mon[81775]: pgmap v130: 197 pgs: 1 unknown, 196 active+clean; 450 KiB data, 80 MiB used, 21 GiB / 21 GiB avail; 3.0 KiB/s rd, 1023 B/s wr, 3 op/s
Jan 20 13:58:48 compute-1 ceph-mon[81775]: Deploying daemon mds.cephfs.compute-1.rtofcx on compute-1
Jan 20 13:58:48 compute-1 ceph-mon[81775]: mds.? [v2:192.168.122.102:6804/2187119920,v1:192.168.122.102:6805/2187119920] up:active
Jan 20 13:58:48 compute-1 ceph-mon[81775]: mds.? [v2:192.168.122.100:6806/2144836821,v1:192.168.122.100:6807/2144836821] up:boot
Jan 20 13:58:48 compute-1 ceph-mon[81775]: fsmap cephfs:1 {0=cephfs.compute-2.jyxktq=up:active} 1 up:standby
Jan 20 13:58:48 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-0.znrafi"}]: dispatch
Jan 20 13:58:48 compute-1 ceph-mon[81775]: fsmap cephfs:1 {0=cephfs.compute-2.jyxktq=up:active} 1 up:standby
Jan 20 13:58:48 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/4243338850' entity='client.rgw.rgw.compute-0.kiggjh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Jan 20 13:58:48 compute-1 ceph-mon[81775]: from='client.? ' entity='client.rgw.rgw.compute-1.orkqpg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Jan 20 13:58:48 compute-1 ceph-mon[81775]: from='client.? ' entity='client.rgw.rgw.compute-2.ktpnzt' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Jan 20 13:58:48 compute-1 ceph-mon[81775]: osdmap e51: 3 total, 3 up, 3 in
Jan 20 13:58:48 compute-1 systemd[1]: var-lib-containers-storage-overlay-1aa779a001d507856a640fdfd056952e1868d52dce81f39ac9b68d03a077e8c7-merged.mount: Deactivated successfully.
Jan 20 13:58:48 compute-1 podman[83997]: 2026-01-20 13:58:48.655025826 +0000 UTC m=+0.280036992 container remove 695f2e5816d99eb87c13bf413a88ea9a65701e362cdc9e838c1e779764f1af4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_keldysh, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Jan 20 13:58:48 compute-1 systemd[1]: libpod-conmon-695f2e5816d99eb87c13bf413a88ea9a65701e362cdc9e838c1e779764f1af4d.scope: Deactivated successfully.
Jan 20 13:58:48 compute-1 systemd[1]: Reloading.
Jan 20 13:58:48 compute-1 systemd-rc-local-generator[84603]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 13:58:48 compute-1 systemd-sysv-generator[84608]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 13:58:49 compute-1 systemd[1]: Reloading.
Jan 20 13:58:49 compute-1 systemd-rc-local-generator[84642]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 13:58:49 compute-1 systemd-sysv-generator[84648]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 13:58:49 compute-1 systemd[1]: Starting Ceph mds.cephfs.compute-1.rtofcx for e399cf45-e6b6-5393-99f1-75c601d3f188...
Jan 20 13:58:49 compute-1 ceph-osd[79119]: log_channel(cluster) log [DBG] : 7.1 scrub starts
Jan 20 13:58:49 compute-1 ceph-osd[79119]: log_channel(cluster) log [DBG] : 7.1 scrub ok
Jan 20 13:58:49 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e51 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 13:58:49 compute-1 ceph-mon[81775]: 4.2 scrub starts
Jan 20 13:58:49 compute-1 ceph-mon[81775]: 4.2 scrub ok
Jan 20 13:58:49 compute-1 ceph-mon[81775]: 5.a scrub starts
Jan 20 13:58:49 compute-1 ceph-mon[81775]: 5.a scrub ok
Jan 20 13:58:49 compute-1 podman[84702]: 2026-01-20 13:58:49.712299477 +0000 UTC m=+0.051064287 container create 8be169de4b9d5b0033f68969bfb5e00cda2b48e98cf27080327a9a7ac12e432d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mds-cephfs-compute-1-rtofcx, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 13:58:49 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44f8919bc234ca92c0d89cecc7bd66f4a0180e484386127483a8a6cce4b24675/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 13:58:49 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44f8919bc234ca92c0d89cecc7bd66f4a0180e484386127483a8a6cce4b24675/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 13:58:49 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44f8919bc234ca92c0d89cecc7bd66f4a0180e484386127483a8a6cce4b24675/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 13:58:49 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44f8919bc234ca92c0d89cecc7bd66f4a0180e484386127483a8a6cce4b24675/merged/var/lib/ceph/mds/ceph-cephfs.compute-1.rtofcx supports timestamps until 2038 (0x7fffffff)
Jan 20 13:58:49 compute-1 podman[84702]: 2026-01-20 13:58:49.686336276 +0000 UTC m=+0.025101086 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 13:58:49 compute-1 podman[84702]: 2026-01-20 13:58:49.784004598 +0000 UTC m=+0.122769418 container init 8be169de4b9d5b0033f68969bfb5e00cda2b48e98cf27080327a9a7ac12e432d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mds-cephfs-compute-1-rtofcx, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 13:58:49 compute-1 podman[84702]: 2026-01-20 13:58:49.793067451 +0000 UTC m=+0.131832221 container start 8be169de4b9d5b0033f68969bfb5e00cda2b48e98cf27080327a9a7ac12e432d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mds-cephfs-compute-1-rtofcx, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 13:58:49 compute-1 bash[84702]: 8be169de4b9d5b0033f68969bfb5e00cda2b48e98cf27080327a9a7ac12e432d
Jan 20 13:58:49 compute-1 systemd[1]: Started Ceph mds.cephfs.compute-1.rtofcx for e399cf45-e6b6-5393-99f1-75c601d3f188.
Jan 20 13:58:49 compute-1 sudo[83933]: pam_unix(sudo:session): session closed for user root
Jan 20 13:58:49 compute-1 ceph-mds[84722]: set uid:gid to 167:167 (ceph:ceph)
Jan 20 13:58:49 compute-1 ceph-mds[84722]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mds, pid 2
Jan 20 13:58:49 compute-1 ceph-mds[84722]: main not setting numa affinity
Jan 20 13:58:49 compute-1 ceph-mds[84722]: pidfile_write: ignore empty --pid-file
Jan 20 13:58:49 compute-1 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mds-cephfs-compute-1-rtofcx[84718]: starting mds.cephfs.compute-1.rtofcx at 
Jan 20 13:58:49 compute-1 ceph-mds[84722]: mds.cephfs.compute-1.rtofcx Updating MDS map to version 6 from mon.2
Jan 20 13:58:51 compute-1 ceph-mon[81775]: pgmap v132: 197 pgs: 197 active+clean; 456 KiB data, 81 MiB used, 21 GiB / 21 GiB avail; 89 KiB/s rd, 11 KiB/s wr, 190 op/s
Jan 20 13:58:51 compute-1 ceph-mon[81775]: 7.1 scrub starts
Jan 20 13:58:51 compute-1 ceph-mon[81775]: 7.1 scrub ok
Jan 20 13:58:51 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:51 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:51 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:51 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:51 compute-1 ceph-mon[81775]: 4.7 scrub starts
Jan 20 13:58:51 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:51 compute-1 ceph-mon[81775]: 4.7 scrub ok
Jan 20 13:58:51 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:51 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).mds e7 new map
Jan 20 13:58:51 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).mds e7 print_map
                                           e7
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        7
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2026-01-20T13:58:19.644785+0000
                                           modified        2026-01-20T13:58:50.863864+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           max_mds        1
                                           in        0
                                           up        {0=24178}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        1
                                           [mds.cephfs.compute-2.jyxktq{0:24178} state up:active seq 3 join_fscid=1 addr [v2:192.168.122.102:6804/2187119920,v1:192.168.122.102:6805/2187119920] compat {c=[1],r=[1],i=[7ff]}]
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-0.znrafi{-1:14376} state up:standby seq 1 addr [v2:192.168.122.100:6806/2144836821,v1:192.168.122.100:6807/2144836821] compat {c=[1],r=[1],i=[7ff]}]
                                           [mds.cephfs.compute-1.rtofcx{-1:24137} state up:standby seq 1 addr [v2:192.168.122.101:6804/2015191638,v1:192.168.122.101:6805/2015191638] compat {c=[1],r=[1],i=[7ff]}]
Jan 20 13:58:51 compute-1 ceph-mds[84722]: mds.cephfs.compute-1.rtofcx Updating MDS map to version 7 from mon.2
Jan 20 13:58:51 compute-1 ceph-mds[84722]: mds.cephfs.compute-1.rtofcx Monitors have assigned me to become a standby.
Jan 20 13:58:51 compute-1 ceph-mon[81775]: Deploying daemon haproxy.rgw.default.compute-0.nqkboe on compute-0
Jan 20 13:58:51 compute-1 ceph-mon[81775]: 2.1b scrub starts
Jan 20 13:58:51 compute-1 ceph-mon[81775]: 2.1b scrub ok
Jan 20 13:58:51 compute-1 ceph-mon[81775]: pgmap v133: 197 pgs: 197 active+clean; 456 KiB data, 81 MiB used, 21 GiB / 21 GiB avail; 60 KiB/s rd, 8.0 KiB/s wr, 131 op/s
Jan 20 13:58:51 compute-1 ceph-mon[81775]: mds.? [v2:192.168.122.101:6804/2015191638,v1:192.168.122.101:6805/2015191638] up:boot
Jan 20 13:58:51 compute-1 ceph-mon[81775]: mds.? [v2:192.168.122.102:6804/2187119920,v1:192.168.122.102:6805/2187119920] up:active
Jan 20 13:58:51 compute-1 ceph-mon[81775]: fsmap cephfs:1 {0=cephfs.compute-2.jyxktq=up:active} 2 up:standby
Jan 20 13:58:51 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-1.rtofcx"}]: dispatch
Jan 20 13:58:51 compute-1 ceph-mon[81775]: 5.c scrub starts
Jan 20 13:58:51 compute-1 ceph-mon[81775]: 5.c scrub ok
Jan 20 13:58:52 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).mds e8 new map
Jan 20 13:58:52 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).mds e8 print_map
                                           e8
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        7
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2026-01-20T13:58:19.644785+0000
                                           modified        2026-01-20T13:58:50.863864+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           max_mds        1
                                           in        0
                                           up        {0=24178}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        1
                                           [mds.cephfs.compute-2.jyxktq{0:24178} state up:active seq 3 join_fscid=1 addr [v2:192.168.122.102:6804/2187119920,v1:192.168.122.102:6805/2187119920] compat {c=[1],r=[1],i=[7ff]}]
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-0.znrafi{-1:14376} state up:standby seq 2 join_fscid=1 addr [v2:192.168.122.100:6806/2144836821,v1:192.168.122.100:6807/2144836821] compat {c=[1],r=[1],i=[7ff]}]
                                           [mds.cephfs.compute-1.rtofcx{-1:24137} state up:standby seq 1 addr [v2:192.168.122.101:6804/2015191638,v1:192.168.122.101:6805/2015191638] compat {c=[1],r=[1],i=[7ff]}]
Jan 20 13:58:53 compute-1 ceph-mon[81775]: 3.0 scrub starts
Jan 20 13:58:53 compute-1 ceph-mon[81775]: 3.0 scrub ok
Jan 20 13:58:53 compute-1 ceph-mon[81775]: mds.? [v2:192.168.122.100:6806/2144836821,v1:192.168.122.100:6807/2144836821] up:standby
Jan 20 13:58:53 compute-1 ceph-mon[81775]: fsmap cephfs:1 {0=cephfs.compute-2.jyxktq=up:active} 2 up:standby
Jan 20 13:58:53 compute-1 ceph-osd[79119]: log_channel(cluster) log [DBG] : 7.7 scrub starts
Jan 20 13:58:53 compute-1 ceph-osd[79119]: log_channel(cluster) log [DBG] : 7.7 scrub ok
Jan 20 13:58:54 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 13:58:54 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.003000093s ======
Jan 20 13:58:54 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:13:58:54.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000093s
Jan 20 13:58:54 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e51 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 13:58:54 compute-1 ceph-mon[81775]: pgmap v134: 197 pgs: 197 active+clean; 456 KiB data, 81 MiB used, 21 GiB / 21 GiB avail; 51 KiB/s rd, 6.8 KiB/s wr, 111 op/s
Jan 20 13:58:54 compute-1 ceph-mon[81775]: 7.7 scrub starts
Jan 20 13:58:54 compute-1 ceph-mon[81775]: 7.7 scrub ok
Jan 20 13:58:54 compute-1 ceph-mon[81775]: 5.d scrub starts
Jan 20 13:58:54 compute-1 ceph-mon[81775]: 5.d scrub ok
Jan 20 13:58:54 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:54 compute-1 ceph-mon[81775]: 5.14 scrub starts
Jan 20 13:58:54 compute-1 ceph-mon[81775]: 5.14 scrub ok
Jan 20 13:58:54 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:54 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:54 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:54 compute-1 ceph-mon[81775]: Deploying daemon haproxy.rgw.default.compute-2.cuokcs on compute-2
Jan 20 13:58:54 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).mds e9 new map
Jan 20 13:58:54 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).mds e9 print_map
                                           e9
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        7
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2026-01-20T13:58:19.644785+0000
                                           modified        2026-01-20T13:58:50.863864+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           max_mds        1
                                           in        0
                                           up        {0=24178}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        1
                                           [mds.cephfs.compute-2.jyxktq{0:24178} state up:active seq 3 join_fscid=1 addr [v2:192.168.122.102:6804/2187119920,v1:192.168.122.102:6805/2187119920] compat {c=[1],r=[1],i=[7ff]}]
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-0.znrafi{-1:14376} state up:standby seq 2 join_fscid=1 addr [v2:192.168.122.100:6806/2144836821,v1:192.168.122.100:6807/2144836821] compat {c=[1],r=[1],i=[7ff]}]
                                           [mds.cephfs.compute-1.rtofcx{-1:24137} state up:standby seq 2 join_fscid=1 addr [v2:192.168.122.101:6804/2015191638,v1:192.168.122.101:6805/2015191638] compat {c=[1],r=[1],i=[7ff]}]
Jan 20 13:58:54 compute-1 ceph-mds[84722]: mds.cephfs.compute-1.rtofcx Updating MDS map to version 9 from mon.2
Jan 20 13:58:55 compute-1 ceph-osd[79119]: log_channel(cluster) log [DBG] : 7.c scrub starts
Jan 20 13:58:55 compute-1 ceph-osd[79119]: log_channel(cluster) log [DBG] : 7.c scrub ok
Jan 20 13:58:55 compute-1 ceph-mon[81775]: 4.b scrub starts
Jan 20 13:58:55 compute-1 ceph-mon[81775]: 4.b scrub ok
Jan 20 13:58:55 compute-1 ceph-mon[81775]: mds.? [v2:192.168.122.101:6804/2015191638,v1:192.168.122.101:6805/2015191638] up:standby
Jan 20 13:58:55 compute-1 ceph-mon[81775]: fsmap cephfs:1 {0=cephfs.compute-2.jyxktq=up:active} 2 up:standby
Jan 20 13:58:55 compute-1 sshd-session[84742]: Invalid user admin from 116.99.171.211 port 39088
Jan 20 13:58:56 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 13:58:56 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 13:58:56 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:13:58:56.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 13:58:56 compute-1 ceph-mon[81775]: pgmap v135: 197 pgs: 197 active+clean; 456 KiB data, 85 MiB used, 21 GiB / 21 GiB avail; 154 KiB/s rd, 6.0 KiB/s wr, 290 op/s
Jan 20 13:58:56 compute-1 ceph-mon[81775]: 7.c scrub starts
Jan 20 13:58:56 compute-1 ceph-mon[81775]: 7.c scrub ok
Jan 20 13:58:56 compute-1 ceph-mon[81775]: 2.a scrub starts
Jan 20 13:58:56 compute-1 ceph-mon[81775]: 2.a scrub ok
Jan 20 13:58:56 compute-1 sshd-session[84742]: Connection closed by invalid user admin 116.99.171.211 port 39088 [preauth]
Jan 20 13:58:57 compute-1 ceph-mon[81775]: 2.c deep-scrub starts
Jan 20 13:58:57 compute-1 ceph-mon[81775]: 2.c deep-scrub ok
Jan 20 13:58:57 compute-1 ceph-mon[81775]: pgmap v136: 197 pgs: 197 active+clean; 456 KiB data, 85 MiB used, 21 GiB / 21 GiB avail; 123 KiB/s rd, 4.8 KiB/s wr, 233 op/s
Jan 20 13:58:58 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 13:58:58 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 13:58:58 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:13:58:58.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 13:58:58 compute-1 ceph-osd[79119]: log_channel(cluster) log [DBG] : 7.d scrub starts
Jan 20 13:58:58 compute-1 ceph-osd[79119]: log_channel(cluster) log [DBG] : 7.d scrub ok
Jan 20 13:58:58 compute-1 ceph-mon[81775]: 2.d scrub starts
Jan 20 13:58:58 compute-1 ceph-mon[81775]: 2.d scrub ok
Jan 20 13:58:58 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:58 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:58 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:58 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:58 compute-1 ceph-mon[81775]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Jan 20 13:58:58 compute-1 ceph-mon[81775]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Jan 20 13:58:58 compute-1 ceph-mon[81775]: Deploying daemon keepalived.rgw.default.compute-0.gcjsxe on compute-0
Jan 20 13:58:58 compute-1 ceph-mon[81775]: 5.17 scrub starts
Jan 20 13:58:58 compute-1 ceph-mon[81775]: 5.17 scrub ok
Jan 20 13:58:59 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 13:58:59 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 13:58:59 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:13:58:59.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 13:58:59 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e51 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 13:58:59 compute-1 ceph-osd[79119]: log_channel(cluster) log [DBG] : 7.12 scrub starts
Jan 20 13:58:59 compute-1 ceph-osd[79119]: log_channel(cluster) log [DBG] : 7.12 scrub ok
Jan 20 13:59:00 compute-1 ceph-mon[81775]: 7.d scrub starts
Jan 20 13:59:00 compute-1 ceph-mon[81775]: 7.d scrub ok
Jan 20 13:59:00 compute-1 ceph-mon[81775]: pgmap v137: 197 pgs: 197 active+clean; 456 KiB data, 85 MiB used, 21 GiB / 21 GiB avail; 112 KiB/s rd, 4.4 KiB/s wr, 211 op/s
Jan 20 13:59:00 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 13:59:00 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 13:59:00 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:13:59:00.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 13:59:01 compute-1 ceph-mon[81775]: 7.12 scrub starts
Jan 20 13:59:01 compute-1 ceph-mon[81775]: 7.12 scrub ok
Jan 20 13:59:01 compute-1 ceph-mon[81775]: 5.19 scrub starts
Jan 20 13:59:01 compute-1 ceph-mon[81775]: 5.19 scrub ok
Jan 20 13:59:01 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 13:59:01 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 20 13:59:01 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:13:59:01.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 20 13:59:02 compute-1 ceph-mon[81775]: pgmap v138: 197 pgs: 197 active+clean; 456 KiB data, 85 MiB used, 21 GiB / 21 GiB avail; 73 KiB/s rd, 170 B/s wr, 129 op/s
Jan 20 13:59:02 compute-1 ceph-mon[81775]: 4.f scrub starts
Jan 20 13:59:02 compute-1 ceph-mon[81775]: 4.f scrub ok
Jan 20 13:59:02 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 13:59:02 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 20 13:59:02 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:13:59:02.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 20 13:59:02 compute-1 ceph-osd[79119]: log_channel(cluster) log [DBG] : 7.15 scrub starts
Jan 20 13:59:02 compute-1 ceph-osd[79119]: log_channel(cluster) log [DBG] : 7.15 scrub ok
Jan 20 13:59:03 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 13:59:03 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 13:59:03 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:13:59:03.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 13:59:03 compute-1 ceph-osd[79119]: log_channel(cluster) log [DBG] : 7.17 scrub starts
Jan 20 13:59:03 compute-1 ceph-osd[79119]: log_channel(cluster) log [DBG] : 7.17 scrub ok
Jan 20 13:59:04 compute-1 ceph-mon[81775]: 7.15 scrub starts
Jan 20 13:59:04 compute-1 ceph-mon[81775]: 7.15 scrub ok
Jan 20 13:59:04 compute-1 ceph-mon[81775]: 7.a scrub starts
Jan 20 13:59:04 compute-1 ceph-mon[81775]: 7.a scrub ok
Jan 20 13:59:04 compute-1 ceph-mon[81775]: pgmap v139: 197 pgs: 197 active+clean; 456 KiB data, 85 MiB used, 21 GiB / 21 GiB avail; 73 KiB/s rd, 0 B/s wr, 128 op/s
Jan 20 13:59:04 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:59:04 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:59:04 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:59:04 compute-1 ceph-mon[81775]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Jan 20 13:59:04 compute-1 ceph-mon[81775]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Jan 20 13:59:04 compute-1 ceph-mon[81775]: Deploying daemon keepalived.rgw.default.compute-2.dleeql on compute-2
Jan 20 13:59:04 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 13:59:04 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 13:59:04 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:13:59:04.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 13:59:04 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e51 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 13:59:05 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 13:59:05 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 20 13:59:05 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:13:59:05.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 20 13:59:05 compute-1 ceph-mon[81775]: 7.17 scrub starts
Jan 20 13:59:05 compute-1 ceph-mon[81775]: 7.17 scrub ok
Jan 20 13:59:05 compute-1 ceph-mon[81775]: 5.b scrub starts
Jan 20 13:59:05 compute-1 ceph-mon[81775]: 5.b scrub ok
Jan 20 13:59:06 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 13:59:06 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 13:59:06 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:13:59:06.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 13:59:06 compute-1 ceph-mon[81775]: pgmap v140: 197 pgs: 197 active+clean; 456 KiB data, 85 MiB used, 21 GiB / 21 GiB avail; 73 KiB/s rd, 0 B/s wr, 128 op/s
Jan 20 13:59:07 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 13:59:07 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 13:59:07 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:13:59:07.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 13:59:07 compute-1 ceph-osd[79119]: log_channel(cluster) log [DBG] : 7.19 scrub starts
Jan 20 13:59:07 compute-1 ceph-osd[79119]: log_channel(cluster) log [DBG] : 7.19 scrub ok
Jan 20 13:59:07 compute-1 ceph-mon[81775]: 5.8 scrub starts
Jan 20 13:59:07 compute-1 ceph-mon[81775]: 5.8 scrub ok
Jan 20 13:59:07 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]: dispatch
Jan 20 13:59:07 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e52 e52: 3 total, 3 up, 3 in
Jan 20 13:59:08 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 13:59:08 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 13:59:08 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:13:59:08.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 13:59:08 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e53 e53: 3 total, 3 up, 3 in
Jan 20 13:59:08 compute-1 ceph-mon[81775]: pgmap v141: 197 pgs: 197 active+clean; 456 KiB data, 85 MiB used, 21 GiB / 21 GiB avail
Jan 20 13:59:08 compute-1 ceph-mon[81775]: 7.19 scrub starts
Jan 20 13:59:08 compute-1 ceph-mon[81775]: 7.19 scrub ok
Jan 20 13:59:08 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Jan 20 13:59:08 compute-1 ceph-mon[81775]: osdmap e52: 3 total, 3 up, 3 in
Jan 20 13:59:08 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]: dispatch
Jan 20 13:59:09 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 13:59:09 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 20 13:59:09 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:13:59:09.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 20 13:59:09 compute-1 ceph-osd[79119]: log_channel(cluster) log [DBG] : 7.1a scrub starts
Jan 20 13:59:09 compute-1 ceph-osd[79119]: log_channel(cluster) log [DBG] : 7.1a scrub ok
Jan 20 13:59:09 compute-1 sudo[84744]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:59:09 compute-1 sudo[84744]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:59:09 compute-1 sudo[84744]: pam_unix(sudo:session): session closed for user root
Jan 20 13:59:09 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e53 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 13:59:09 compute-1 sudo[84769]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 13:59:09 compute-1 sudo[84769]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:59:09 compute-1 sudo[84769]: pam_unix(sudo:session): session closed for user root
Jan 20 13:59:09 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e54 e54: 3 total, 3 up, 3 in
Jan 20 13:59:09 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Jan 20 13:59:09 compute-1 ceph-mon[81775]: osdmap e53: 3 total, 3 up, 3 in
Jan 20 13:59:09 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]: dispatch
Jan 20 13:59:09 compute-1 ceph-mon[81775]: 5.1d deep-scrub starts
Jan 20 13:59:09 compute-1 ceph-mon[81775]: 5.1d deep-scrub ok
Jan 20 13:59:09 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:59:09 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:59:09 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:59:09 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 20 13:59:09 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 20 13:59:09 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:59:09 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Jan 20 13:59:09 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Jan 20 13:59:09 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Jan 20 13:59:09 compute-1 ceph-mon[81775]: osdmap e54: 3 total, 3 up, 3 in
Jan 20 13:59:09 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]: dispatch
Jan 20 13:59:09 compute-1 sudo[84794]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:59:09 compute-1 sudo[84794]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:59:09 compute-1 sudo[84794]: pam_unix(sudo:session): session closed for user root
Jan 20 13:59:09 compute-1 sudo[84819]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 13:59:09 compute-1 sudo[84819]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:59:09 compute-1 sudo[84819]: pam_unix(sudo:session): session closed for user root
Jan 20 13:59:10 compute-1 sudo[84844]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:59:10 compute-1 sudo[84844]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:59:10 compute-1 sudo[84844]: pam_unix(sudo:session): session closed for user root
Jan 20 13:59:10 compute-1 sudo[84869]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Jan 20 13:59:10 compute-1 sudo[84869]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:59:10 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 13:59:10 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 20 13:59:10 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:13:59:10.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 20 13:59:10 compute-1 podman[84964]: 2026-01-20 13:59:10.585169546 +0000 UTC m=+0.063886878 container exec 718ebba7a543e42aad7051248d2c7dc014068c35c89c5b87f27b82d4de39c009 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-crash-compute-1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 13:59:10 compute-1 podman[84964]: 2026-01-20 13:59:10.692571942 +0000 UTC m=+0.171289284 container exec_died 718ebba7a543e42aad7051248d2c7dc014068c35c89c5b87f27b82d4de39c009 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-crash-compute-1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 13:59:10 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e55 e55: 3 total, 3 up, 3 in
Jan 20 13:59:10 compute-1 ceph-mon[81775]: pgmap v144: 197 pgs: 197 active+clean; 456 KiB data, 102 MiB used, 21 GiB / 21 GiB avail
Jan 20 13:59:10 compute-1 ceph-mon[81775]: 7.1a scrub starts
Jan 20 13:59:10 compute-1 ceph-mon[81775]: 7.1a scrub ok
Jan 20 13:59:10 compute-1 ceph-mon[81775]: 7.14 scrub starts
Jan 20 13:59:10 compute-1 ceph-mon[81775]: 7.14 scrub ok
Jan 20 13:59:10 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Jan 20 13:59:10 compute-1 ceph-mon[81775]: osdmap e55: 3 total, 3 up, 3 in
Jan 20 13:59:11 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 13:59:11 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 20 13:59:11 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:13:59:11.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 20 13:59:11 compute-1 sudo[84869]: pam_unix(sudo:session): session closed for user root
Jan 20 13:59:11 compute-1 ceph-osd[79119]: log_channel(cluster) log [DBG] : 7.1c scrub starts
Jan 20 13:59:11 compute-1 ceph-osd[79119]: log_channel(cluster) log [DBG] : 7.1c scrub ok
Jan 20 13:59:11 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e56 e56: 3 total, 3 up, 3 in
Jan 20 13:59:11 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 56 pg[10.0( v 48'48 (0'0,48'48] local-lis/les=47/48 n=8 ec=47/47 lis/c=47/47 les/c/f=48/48/0 sis=56 pruub=13.366131783s) [1] r=0 lpr=56 pi=[47,56)/1 crt=48'48 lcod 48'47 mlcod 48'47 active pruub 122.893173218s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:59:11 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 56 pg[10.0( v 48'48 lc 0'0 (0'0,48'48] local-lis/les=47/48 n=0 ec=47/47 lis/c=47/47 les/c/f=48/48/0 sis=56 pruub=13.366131783s) [1] r=0 lpr=56 pi=[47,56)/1 crt=48'48 lcod 48'47 mlcod 0'0 unknown pruub 122.893173218s@ mbc={}] state<Start>: transitioning to Primary
Jan 20 13:59:11 compute-1 ceph-mon[81775]: 5.1e deep-scrub starts
Jan 20 13:59:11 compute-1 ceph-mon[81775]: 5.1e deep-scrub ok
Jan 20 13:59:11 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 20 13:59:11 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 20 13:59:11 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:59:11 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:59:12 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 13:59:12 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 13:59:12 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:13:59:12.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 13:59:12 compute-1 ceph-mon[81775]: pgmap v147: 259 pgs: 62 unknown, 197 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Jan 20 13:59:12 compute-1 ceph-mon[81775]: 7.1c scrub starts
Jan 20 13:59:12 compute-1 ceph-mon[81775]: 7.1c scrub ok
Jan 20 13:59:12 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Jan 20 13:59:12 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Jan 20 13:59:12 compute-1 ceph-mon[81775]: osdmap e56: 3 total, 3 up, 3 in
Jan 20 13:59:12 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:59:12 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:59:12 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:59:12 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:59:12 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e57 e57: 3 total, 3 up, 3 in
Jan 20 13:59:12 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 57 pg[10.12( v 48'48 lc 0'0 (0'0,48'48] local-lis/les=47/48 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [1] r=0 lpr=56 pi=[47,56)/1 crt=48'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:59:12 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 57 pg[10.7( v 48'48 lc 0'0 (0'0,48'48] local-lis/les=47/48 n=1 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [1] r=0 lpr=56 pi=[47,56)/1 crt=48'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:59:12 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 57 pg[10.11( v 48'48 lc 0'0 (0'0,48'48] local-lis/les=47/48 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [1] r=0 lpr=56 pi=[47,56)/1 crt=48'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:59:12 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 57 pg[10.10( v 48'48 lc 0'0 (0'0,48'48] local-lis/les=47/48 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [1] r=0 lpr=56 pi=[47,56)/1 crt=48'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:59:12 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 57 pg[10.1f( v 48'48 lc 0'0 (0'0,48'48] local-lis/les=47/48 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [1] r=0 lpr=56 pi=[47,56)/1 crt=48'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:59:12 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 57 pg[10.1b( v 48'48 lc 0'0 (0'0,48'48] local-lis/les=47/48 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [1] r=0 lpr=56 pi=[47,56)/1 crt=48'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:59:12 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 57 pg[10.1e( v 48'48 lc 0'0 (0'0,48'48] local-lis/les=47/48 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [1] r=0 lpr=56 pi=[47,56)/1 crt=48'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:59:12 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 57 pg[10.1d( v 48'48 lc 0'0 (0'0,48'48] local-lis/les=47/48 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [1] r=0 lpr=56 pi=[47,56)/1 crt=48'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:59:12 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 57 pg[10.1c( v 48'48 lc 0'0 (0'0,48'48] local-lis/les=47/48 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [1] r=0 lpr=56 pi=[47,56)/1 crt=48'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:59:12 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 57 pg[10.1a( v 48'48 lc 0'0 (0'0,48'48] local-lis/les=47/48 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [1] r=0 lpr=56 pi=[47,56)/1 crt=48'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:59:12 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 57 pg[10.19( v 48'48 lc 0'0 (0'0,48'48] local-lis/les=47/48 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [1] r=0 lpr=56 pi=[47,56)/1 crt=48'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:59:12 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 57 pg[10.18( v 48'48 lc 0'0 (0'0,48'48] local-lis/les=47/48 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [1] r=0 lpr=56 pi=[47,56)/1 crt=48'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:59:12 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 57 pg[10.6( v 48'48 lc 0'0 (0'0,48'48] local-lis/les=47/48 n=1 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [1] r=0 lpr=56 pi=[47,56)/1 crt=48'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:59:12 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 57 pg[10.5( v 48'48 lc 0'0 (0'0,48'48] local-lis/les=47/48 n=1 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [1] r=0 lpr=56 pi=[47,56)/1 crt=48'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:59:12 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 57 pg[10.4( v 48'48 lc 0'0 (0'0,48'48] local-lis/les=47/48 n=1 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [1] r=0 lpr=56 pi=[47,56)/1 crt=48'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:59:12 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 57 pg[10.3( v 48'48 lc 0'0 (0'0,48'48] local-lis/les=47/48 n=1 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [1] r=0 lpr=56 pi=[47,56)/1 crt=48'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:59:12 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 57 pg[10.b( v 48'48 lc 0'0 (0'0,48'48] local-lis/les=47/48 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [1] r=0 lpr=56 pi=[47,56)/1 crt=48'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:59:12 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 57 pg[10.8( v 48'48 lc 0'0 (0'0,48'48] local-lis/les=47/48 n=1 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [1] r=0 lpr=56 pi=[47,56)/1 crt=48'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:59:12 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 57 pg[10.9( v 48'48 lc 0'0 (0'0,48'48] local-lis/les=47/48 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [1] r=0 lpr=56 pi=[47,56)/1 crt=48'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:59:12 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 57 pg[10.a( v 48'48 lc 0'0 (0'0,48'48] local-lis/les=47/48 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [1] r=0 lpr=56 pi=[47,56)/1 crt=48'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:59:12 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 57 pg[10.c( v 48'48 lc 0'0 (0'0,48'48] local-lis/les=47/48 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [1] r=0 lpr=56 pi=[47,56)/1 crt=48'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:59:12 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 57 pg[10.d( v 48'48 lc 0'0 (0'0,48'48] local-lis/les=47/48 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [1] r=0 lpr=56 pi=[47,56)/1 crt=48'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:59:12 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 57 pg[10.e( v 48'48 lc 0'0 (0'0,48'48] local-lis/les=47/48 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [1] r=0 lpr=56 pi=[47,56)/1 crt=48'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:59:12 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 57 pg[10.f( v 48'48 lc 0'0 (0'0,48'48] local-lis/les=47/48 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [1] r=0 lpr=56 pi=[47,56)/1 crt=48'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:59:12 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 57 pg[10.1( v 48'48 (0'0,48'48] local-lis/les=47/48 n=1 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [1] r=0 lpr=56 pi=[47,56)/1 crt=48'48 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:59:12 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 57 pg[10.2( v 48'48 lc 0'0 (0'0,48'48] local-lis/les=47/48 n=1 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [1] r=0 lpr=56 pi=[47,56)/1 crt=48'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:59:12 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 57 pg[10.13( v 48'48 lc 0'0 (0'0,48'48] local-lis/les=47/48 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [1] r=0 lpr=56 pi=[47,56)/1 crt=48'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:59:12 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 57 pg[10.14( v 48'48 lc 0'0 (0'0,48'48] local-lis/les=47/48 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [1] r=0 lpr=56 pi=[47,56)/1 crt=48'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:59:12 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 57 pg[10.15( v 48'48 lc 0'0 (0'0,48'48] local-lis/les=47/48 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [1] r=0 lpr=56 pi=[47,56)/1 crt=48'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:59:12 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 57 pg[10.17( v 48'48 lc 0'0 (0'0,48'48] local-lis/les=47/48 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [1] r=0 lpr=56 pi=[47,56)/1 crt=48'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:59:12 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 57 pg[10.16( v 48'48 lc 0'0 (0'0,48'48] local-lis/les=47/48 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [1] r=0 lpr=56 pi=[47,56)/1 crt=48'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:59:12 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 57 pg[10.10( v 48'48 (0'0,48'48] local-lis/les=56/57 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [1] r=0 lpr=56 pi=[47,56)/1 crt=48'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:59:12 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 57 pg[10.11( v 48'48 (0'0,48'48] local-lis/les=56/57 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [1] r=0 lpr=56 pi=[47,56)/1 crt=48'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:59:12 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 57 pg[10.12( v 48'48 (0'0,48'48] local-lis/les=56/57 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [1] r=0 lpr=56 pi=[47,56)/1 crt=48'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:59:12 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 57 pg[10.1f( v 48'48 (0'0,48'48] local-lis/les=56/57 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [1] r=0 lpr=56 pi=[47,56)/1 crt=48'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:59:12 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 57 pg[10.1b( v 48'48 (0'0,48'48] local-lis/les=56/57 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [1] r=0 lpr=56 pi=[47,56)/1 crt=48'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:59:12 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 57 pg[10.7( v 48'48 (0'0,48'48] local-lis/les=56/57 n=1 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [1] r=0 lpr=56 pi=[47,56)/1 crt=48'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:59:12 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 57 pg[10.1d( v 48'48 (0'0,48'48] local-lis/les=56/57 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [1] r=0 lpr=56 pi=[47,56)/1 crt=48'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:59:12 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 57 pg[10.1a( v 48'48 (0'0,48'48] local-lis/les=56/57 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [1] r=0 lpr=56 pi=[47,56)/1 crt=48'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:59:12 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 57 pg[10.19( v 48'48 (0'0,48'48] local-lis/les=56/57 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [1] r=0 lpr=56 pi=[47,56)/1 crt=48'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:59:12 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 57 pg[10.18( v 48'48 (0'0,48'48] local-lis/les=56/57 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [1] r=0 lpr=56 pi=[47,56)/1 crt=48'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:59:12 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 57 pg[10.5( v 48'48 (0'0,48'48] local-lis/les=56/57 n=1 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [1] r=0 lpr=56 pi=[47,56)/1 crt=48'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:59:12 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 57 pg[10.1e( v 48'48 (0'0,48'48] local-lis/les=56/57 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [1] r=0 lpr=56 pi=[47,56)/1 crt=48'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:59:12 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 57 pg[10.3( v 48'48 (0'0,48'48] local-lis/les=56/57 n=1 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [1] r=0 lpr=56 pi=[47,56)/1 crt=48'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:59:12 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 57 pg[10.4( v 48'48 (0'0,48'48] local-lis/les=56/57 n=1 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [1] r=0 lpr=56 pi=[47,56)/1 crt=48'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:59:12 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 57 pg[10.1c( v 48'48 (0'0,48'48] local-lis/les=56/57 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [1] r=0 lpr=56 pi=[47,56)/1 crt=48'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:59:12 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 57 pg[10.d( v 48'48 (0'0,48'48] local-lis/les=56/57 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [1] r=0 lpr=56 pi=[47,56)/1 crt=48'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:59:12 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 57 pg[10.b( v 48'48 (0'0,48'48] local-lis/les=56/57 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [1] r=0 lpr=56 pi=[47,56)/1 crt=48'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:59:12 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 57 pg[10.6( v 48'48 (0'0,48'48] local-lis/les=56/57 n=1 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [1] r=0 lpr=56 pi=[47,56)/1 crt=48'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:59:12 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 57 pg[10.c( v 48'48 (0'0,48'48] local-lis/les=56/57 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [1] r=0 lpr=56 pi=[47,56)/1 crt=48'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:59:12 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 57 pg[10.a( v 48'48 (0'0,48'48] local-lis/les=56/57 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [1] r=0 lpr=56 pi=[47,56)/1 crt=48'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:59:12 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 57 pg[10.e( v 48'48 (0'0,48'48] local-lis/les=56/57 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [1] r=0 lpr=56 pi=[47,56)/1 crt=48'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:59:12 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 57 pg[10.f( v 48'48 (0'0,48'48] local-lis/les=56/57 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [1] r=0 lpr=56 pi=[47,56)/1 crt=48'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:59:12 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 57 pg[10.0( v 48'48 (0'0,48'48] local-lis/les=56/57 n=0 ec=47/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [1] r=0 lpr=56 pi=[47,56)/1 crt=48'48 lcod 48'47 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:59:12 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 57 pg[10.1( v 48'48 (0'0,48'48] local-lis/les=56/57 n=1 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [1] r=0 lpr=56 pi=[47,56)/1 crt=48'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:59:12 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 57 pg[10.2( v 48'48 (0'0,48'48] local-lis/les=56/57 n=1 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [1] r=0 lpr=56 pi=[47,56)/1 crt=48'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:59:12 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 57 pg[10.9( v 48'48 (0'0,48'48] local-lis/les=56/57 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [1] r=0 lpr=56 pi=[47,56)/1 crt=48'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:59:12 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 57 pg[10.15( v 48'48 (0'0,48'48] local-lis/les=56/57 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [1] r=0 lpr=56 pi=[47,56)/1 crt=48'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:59:12 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 57 pg[10.8( v 48'48 (0'0,48'48] local-lis/les=56/57 n=1 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [1] r=0 lpr=56 pi=[47,56)/1 crt=48'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:59:12 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 57 pg[10.14( v 48'48 (0'0,48'48] local-lis/les=56/57 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [1] r=0 lpr=56 pi=[47,56)/1 crt=48'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:59:12 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 57 pg[10.16( v 48'48 (0'0,48'48] local-lis/les=56/57 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [1] r=0 lpr=56 pi=[47,56)/1 crt=48'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:59:12 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 57 pg[10.17( v 48'48 (0'0,48'48] local-lis/les=56/57 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [1] r=0 lpr=56 pi=[47,56)/1 crt=48'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:59:12 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 57 pg[10.13( v 48'48 (0'0,48'48] local-lis/les=56/57 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [1] r=0 lpr=56 pi=[47,56)/1 crt=48'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:59:13 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 13:59:13 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 20 13:59:13 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:13:59:13.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 20 13:59:13 compute-1 ceph-osd[79119]: log_channel(cluster) log [DBG] : 6.1a scrub starts
Jan 20 13:59:13 compute-1 ceph-osd[79119]: log_channel(cluster) log [DBG] : 6.1a scrub ok
Jan 20 13:59:13 compute-1 ceph-mon[81775]: osdmap e57: 3 total, 3 up, 3 in
Jan 20 13:59:13 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 13:59:13 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 13:59:13 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:59:13 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 13:59:13 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 13:59:13 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 13:59:13 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:59:14 compute-1 sshd-session[85069]: Invalid user kim from 116.99.171.211 port 33996
Jan 20 13:59:14 compute-1 sshd-session[85069]: Connection closed by invalid user kim 116.99.171.211 port 33996 [preauth]
Jan 20 13:59:14 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 13:59:14 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 20 13:59:14 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:13:59:14.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 20 13:59:14 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e57 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 13:59:14 compute-1 ceph-mon[81775]: pgmap v150: 321 pgs: 124 unknown, 197 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Jan 20 13:59:14 compute-1 ceph-mon[81775]: 6.1a scrub starts
Jan 20 13:59:14 compute-1 ceph-mon[81775]: 6.1a scrub ok
Jan 20 13:59:15 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 13:59:15 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 20 13:59:15 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:13:59:15.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 20 13:59:15 compute-1 ceph-mon[81775]: 4.10 scrub starts
Jan 20 13:59:15 compute-1 ceph-mon[81775]: 4.10 scrub ok
Jan 20 13:59:16 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 13:59:16 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 13:59:16 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:13:59:16.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 13:59:16 compute-1 ceph-mon[81775]: pgmap v151: 321 pgs: 31 unknown, 290 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Jan 20 13:59:16 compute-1 ceph-mon[81775]: 2.13 scrub starts
Jan 20 13:59:16 compute-1 ceph-mon[81775]: 2.13 scrub ok
Jan 20 13:59:17 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 13:59:17 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 13:59:17 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:13:59:17.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 13:59:17 compute-1 ceph-osd[79119]: log_channel(cluster) log [DBG] : 4.1a scrub starts
Jan 20 13:59:17 compute-1 ceph-osd[79119]: log_channel(cluster) log [DBG] : 4.1a scrub ok
Jan 20 13:59:17 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e58 e58: 3 total, 3 up, 3 in
Jan 20 13:59:17 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 58 pg[10.1b( v 48'48 (0'0,48'48] local-lis/les=56/57 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=10.944127083s) [0] r=-1 lpr=58 pi=[56,58)/1 crt=48'48 lcod 0'0 mlcod 0'0 active pruub 126.586280823s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:59:17 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 58 pg[10.12( v 48'48 (0'0,48'48] local-lis/les=56/57 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=10.944046021s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=48'48 lcod 0'0 mlcod 0'0 active pruub 126.586219788s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:59:17 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 58 pg[10.10( v 48'48 (0'0,48'48] local-lis/les=56/57 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=10.937613487s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=48'48 lcod 0'0 mlcod 0'0 active pruub 126.579887390s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:59:17 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 58 pg[10.12( v 48'48 (0'0,48'48] local-lis/les=56/57 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=10.943946838s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=48'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 126.586219788s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:59:17 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 58 pg[10.1b( v 48'48 (0'0,48'48] local-lis/les=56/57 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=10.943997383s) [0] r=-1 lpr=58 pi=[56,58)/1 crt=48'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 126.586280823s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:59:17 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 58 pg[10.10( v 48'48 (0'0,48'48] local-lis/les=56/57 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=10.937577248s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=48'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 126.579887390s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:59:17 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 58 pg[10.11( v 48'48 (0'0,48'48] local-lis/les=56/57 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=10.943763733s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=48'48 lcod 0'0 mlcod 0'0 active pruub 126.586166382s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:59:17 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 58 pg[10.11( v 48'48 (0'0,48'48] local-lis/les=56/57 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=10.943738937s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=48'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 126.586166382s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:59:17 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 58 pg[10.1e( v 48'48 (0'0,48'48] local-lis/les=56/57 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=10.943623543s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=48'48 lcod 0'0 mlcod 0'0 active pruub 126.586227417s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:59:17 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 58 pg[10.1e( v 48'48 (0'0,48'48] local-lis/les=56/57 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=10.943590164s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=48'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 126.586227417s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:59:17 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 58 pg[10.19( v 48'48 (0'0,48'48] local-lis/les=56/57 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=10.943556786s) [0] r=-1 lpr=58 pi=[56,58)/1 crt=48'48 lcod 0'0 mlcod 0'0 active pruub 126.586349487s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:59:17 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 58 pg[10.19( v 48'48 (0'0,48'48] local-lis/les=56/57 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=10.943525314s) [0] r=-1 lpr=58 pi=[56,58)/1 crt=48'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 126.586349487s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:59:17 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 58 pg[10.18( v 48'48 (0'0,48'48] local-lis/les=56/57 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=10.943629265s) [0] r=-1 lpr=58 pi=[56,58)/1 crt=48'48 lcod 0'0 mlcod 0'0 active pruub 126.586509705s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:59:17 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 58 pg[10.18( v 48'48 (0'0,48'48] local-lis/les=56/57 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=10.943574905s) [0] r=-1 lpr=58 pi=[56,58)/1 crt=48'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 126.586509705s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:59:17 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 58 pg[10.5( v 48'48 (0'0,48'48] local-lis/les=56/57 n=1 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=10.943511963s) [0] r=-1 lpr=58 pi=[56,58)/1 crt=48'48 lcod 0'0 mlcod 0'0 active pruub 126.586524963s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:59:17 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 58 pg[10.4( v 48'48 (0'0,48'48] local-lis/les=56/57 n=1 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=10.943505287s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=48'48 lcod 0'0 mlcod 0'0 active pruub 126.586570740s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:59:17 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 58 pg[10.5( v 48'48 (0'0,48'48] local-lis/les=56/57 n=1 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=10.943478584s) [0] r=-1 lpr=58 pi=[56,58)/1 crt=48'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 126.586524963s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:59:17 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 58 pg[10.4( v 48'48 (0'0,48'48] local-lis/les=56/57 n=1 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=10.943471909s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=48'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 126.586570740s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:59:17 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 58 pg[10.3( v 57'51 (0'0,57'51] local-lis/les=56/57 n=1 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=10.943395615s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=57'49 lcod 57'50 mlcod 57'50 active pruub 126.586532593s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:59:17 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 58 pg[10.3( v 57'51 (0'0,57'51] local-lis/les=56/57 n=1 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=10.943346977s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=57'49 lcod 57'50 mlcod 0'0 unknown NOTIFY pruub 126.586532593s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:59:17 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 58 pg[10.8( v 48'48 (0'0,48'48] local-lis/les=56/57 n=1 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=10.943283081s) [0] r=-1 lpr=58 pi=[56,58)/1 crt=48'48 lcod 0'0 mlcod 0'0 active pruub 126.586616516s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:59:17 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 58 pg[10.8( v 48'48 (0'0,48'48] local-lis/les=56/57 n=1 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=10.943231583s) [0] r=-1 lpr=58 pi=[56,58)/1 crt=48'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 126.586616516s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:59:17 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 58 pg[10.f( v 48'48 (0'0,48'48] local-lis/les=56/57 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=10.943265915s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=48'48 lcod 0'0 mlcod 0'0 active pruub 126.586723328s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:59:17 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 58 pg[10.f( v 48'48 (0'0,48'48] local-lis/les=56/57 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=10.943235397s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=48'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 126.586723328s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:59:17 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 58 pg[10.1( v 48'48 (0'0,48'48] local-lis/les=56/57 n=1 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=10.943240166s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=48'48 lcod 0'0 mlcod 0'0 active pruub 126.586746216s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:59:17 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 58 pg[10.2( v 48'48 (0'0,48'48] local-lis/les=56/57 n=1 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=10.943241119s) [0] r=-1 lpr=58 pi=[56,58)/1 crt=48'48 lcod 0'0 mlcod 0'0 active pruub 126.586784363s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:59:17 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 58 pg[10.1( v 48'48 (0'0,48'48] local-lis/les=56/57 n=1 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=10.943208694s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=48'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 126.586746216s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:59:17 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 58 pg[10.13( v 48'48 (0'0,48'48] local-lis/les=56/57 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=10.943148613s) [0] r=-1 lpr=58 pi=[56,58)/1 crt=48'48 lcod 0'0 mlcod 0'0 active pruub 126.586776733s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:59:17 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 58 pg[10.2( v 48'48 (0'0,48'48] local-lis/les=56/57 n=1 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=10.943162918s) [0] r=-1 lpr=58 pi=[56,58)/1 crt=48'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 126.586784363s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:59:17 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 58 pg[10.13( v 48'48 (0'0,48'48] local-lis/les=56/57 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=10.943116188s) [0] r=-1 lpr=58 pi=[56,58)/1 crt=48'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 126.586776733s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:59:17 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 58 pg[10.14( v 57'51 (0'0,57'51] local-lis/les=56/57 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=10.943144798s) [0] r=-1 lpr=58 pi=[56,58)/1 crt=57'49 lcod 57'50 mlcod 57'50 active pruub 126.586837769s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:59:17 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 58 pg[10.14( v 57'51 (0'0,57'51] local-lis/les=56/57 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=10.943095207s) [0] r=-1 lpr=58 pi=[56,58)/1 crt=57'49 lcod 57'50 mlcod 0'0 unknown NOTIFY pruub 126.586837769s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:59:17 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 58 pg[10.15( v 57'51 (0'0,57'51] local-lis/les=56/57 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=10.943018913s) [0] r=-1 lpr=58 pi=[56,58)/1 crt=57'49 lcod 57'50 mlcod 57'50 active pruub 126.586799622s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:59:17 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 58 pg[10.15( v 57'51 (0'0,57'51] local-lis/les=56/57 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=10.942932129s) [0] r=-1 lpr=58 pi=[56,58)/1 crt=57'49 lcod 57'50 mlcod 0'0 unknown NOTIFY pruub 126.586799622s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:59:17 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 58 pg[8.14( empty local-lis/les=0/0 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58) [1] r=0 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:59:17 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 58 pg[11.14( empty local-lis/les=0/0 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58) [1] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:59:17 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 58 pg[8.17( empty local-lis/les=0/0 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58) [1] r=0 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:59:17 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 58 pg[8.10( empty local-lis/les=0/0 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58) [1] r=0 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:59:17 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 58 pg[11.12( empty local-lis/les=0/0 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58) [1] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:59:17 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 58 pg[11.1( empty local-lis/les=0/0 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58) [1] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:59:17 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 58 pg[8.8( empty local-lis/les=0/0 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58) [1] r=0 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:59:17 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 58 pg[11.5( empty local-lis/les=0/0 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58) [1] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:59:17 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 58 pg[11.f( empty local-lis/les=0/0 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58) [1] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:59:17 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 58 pg[11.4( empty local-lis/les=0/0 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58) [1] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:59:17 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 58 pg[11.7( empty local-lis/les=0/0 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58) [1] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:59:17 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 58 pg[8.4( empty local-lis/les=0/0 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58) [1] r=0 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:59:17 compute-1 ceph-mon[81775]: 2.10 scrub starts
Jan 20 13:59:17 compute-1 ceph-mon[81775]: 2.10 scrub ok
Jan 20 13:59:17 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 20 13:59:17 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 58 pg[8.1b( empty local-lis/les=0/0 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58) [1] r=0 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:59:17 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 20 13:59:17 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Jan 20 13:59:17 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 20 13:59:17 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 58 pg[8.19( empty local-lis/les=0/0 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58) [1] r=0 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:59:17 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 58 pg[11.1a( empty local-lis/les=0/0 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58) [1] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:59:17 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 58 pg[11.1b( empty local-lis/les=0/0 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58) [1] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:59:17 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 58 pg[11.1c( empty local-lis/les=0/0 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58) [1] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:59:17 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 58 pg[8.18( empty local-lis/les=0/0 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58) [1] r=0 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:59:17 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 58 pg[11.1e( empty local-lis/les=0/0 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58) [1] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:59:17 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 58 pg[11.1d( empty local-lis/les=0/0 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58) [1] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:59:17 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 58 pg[8.12( empty local-lis/les=0/0 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58) [1] r=0 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:59:18 compute-1 ceph-osd[79119]: log_channel(cluster) log [DBG] : 5.18 scrub starts
Jan 20 13:59:18 compute-1 ceph-osd[79119]: log_channel(cluster) log [DBG] : 5.18 scrub ok
Jan 20 13:59:18 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 13:59:18 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 20 13:59:18 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:13:59:18.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 20 13:59:18 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e59 e59: 3 total, 3 up, 3 in
Jan 20 13:59:18 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 59 pg[11.14( empty local-lis/les=58/59 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58) [1] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:59:18 compute-1 ceph-mon[81775]: 4.11 scrub starts
Jan 20 13:59:18 compute-1 ceph-mon[81775]: 4.11 scrub ok
Jan 20 13:59:18 compute-1 ceph-mon[81775]: pgmap v152: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Jan 20 13:59:18 compute-1 ceph-mon[81775]: 4.1a scrub starts
Jan 20 13:59:18 compute-1 ceph-mon[81775]: 4.1a scrub ok
Jan 20 13:59:18 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 20 13:59:18 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 20 13:59:18 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Jan 20 13:59:18 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 20 13:59:18 compute-1 ceph-mon[81775]: osdmap e58: 3 total, 3 up, 3 in
Jan 20 13:59:18 compute-1 ceph-mon[81775]: osdmap e59: 3 total, 3 up, 3 in
Jan 20 13:59:18 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 59 pg[11.f( empty local-lis/les=58/59 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58) [1] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:59:18 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 59 pg[8.17( v 44'4 (0'0,44'4] local-lis/les=58/59 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58) [1] r=0 lpr=58 pi=[54,58)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:59:18 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 59 pg[11.12( empty local-lis/les=58/59 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58) [1] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:59:18 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 59 pg[8.8( v 44'4 (0'0,44'4] local-lis/les=58/59 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58) [1] r=0 lpr=58 pi=[54,58)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:59:18 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 59 pg[8.4( v 44'4 (0'0,44'4] local-lis/les=58/59 n=1 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58) [1] r=0 lpr=58 pi=[54,58)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:59:18 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 59 pg[11.1( empty local-lis/les=58/59 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58) [1] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:59:18 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 59 pg[11.4( empty local-lis/les=58/59 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58) [1] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:59:18 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 59 pg[8.1b( v 44'4 (0'0,44'4] local-lis/les=58/59 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58) [1] r=0 lpr=58 pi=[54,58)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:59:18 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 59 pg[11.5( empty local-lis/les=58/59 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58) [1] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:59:18 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 59 pg[11.1c( empty local-lis/les=58/59 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58) [1] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:59:18 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 59 pg[11.1d( empty local-lis/les=58/59 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58) [1] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:59:18 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 59 pg[11.7( empty local-lis/les=58/59 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58) [1] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:59:18 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 59 pg[8.18( v 44'4 (0'0,44'4] local-lis/les=58/59 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58) [1] r=0 lpr=58 pi=[54,58)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:59:18 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 59 pg[11.1e( empty local-lis/les=58/59 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58) [1] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:59:18 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 59 pg[8.12( v 44'4 (0'0,44'4] local-lis/les=58/59 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58) [1] r=0 lpr=58 pi=[54,58)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:59:18 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 59 pg[11.1b( empty local-lis/les=58/59 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58) [1] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:59:18 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 59 pg[8.14( v 44'4 (0'0,44'4] local-lis/les=58/59 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58) [1] r=0 lpr=58 pi=[54,58)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:59:18 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 59 pg[8.10( v 44'4 lc 0'0 (0'0,44'4] local-lis/les=58/59 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58) [1] r=0 lpr=58 pi=[54,58)/1 crt=44'4 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:59:18 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 59 pg[8.19( v 44'4 (0'0,44'4] local-lis/les=58/59 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58) [1] r=0 lpr=58 pi=[54,58)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:59:18 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 59 pg[11.1a( empty local-lis/les=58/59 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58) [1] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:59:19 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 13:59:19 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 13:59:19 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:13:59:19.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 13:59:19 compute-1 ceph-osd[79119]: log_channel(cluster) log [DBG] : 4.1b scrub starts
Jan 20 13:59:19 compute-1 ceph-osd[79119]: log_channel(cluster) log [DBG] : 4.1b scrub ok
Jan 20 13:59:19 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e59 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 13:59:19 compute-1 sudo[85089]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:59:19 compute-1 sudo[85089]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:59:19 compute-1 sudo[85089]: pam_unix(sudo:session): session closed for user root
Jan 20 13:59:19 compute-1 sudo[85114]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 13:59:19 compute-1 sudo[85114]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:59:19 compute-1 sudo[85114]: pam_unix(sudo:session): session closed for user root
Jan 20 13:59:19 compute-1 ceph-mon[81775]: 5.18 scrub starts
Jan 20 13:59:19 compute-1 ceph-mon[81775]: 5.18 scrub ok
Jan 20 13:59:19 compute-1 ceph-mon[81775]: 5.12 scrub starts
Jan 20 13:59:19 compute-1 ceph-mon[81775]: 5.12 scrub ok
Jan 20 13:59:19 compute-1 ceph-mon[81775]: pgmap v155: 321 pgs: 9 peering, 312 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Jan 20 13:59:19 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:59:19 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:59:20 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 13:59:20 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 13:59:20 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:13:59:20.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 13:59:20 compute-1 ceph-mon[81775]: 4.1b scrub starts
Jan 20 13:59:20 compute-1 ceph-mon[81775]: 4.1b scrub ok
Jan 20 13:59:20 compute-1 ceph-mon[81775]: 5.13 scrub starts
Jan 20 13:59:20 compute-1 ceph-mon[81775]: 5.13 scrub ok
Jan 20 13:59:20 compute-1 ceph-mon[81775]: Reconfiguring mon.compute-0 (monmap changed)...
Jan 20 13:59:20 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Jan 20 13:59:20 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Jan 20 13:59:20 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 13:59:20 compute-1 ceph-mon[81775]: Reconfiguring daemon mon.compute-0 on compute-0
Jan 20 13:59:20 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:59:20 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:59:20 compute-1 ceph-mon[81775]: Reconfiguring mgr.compute-0.wookjv (monmap changed)...
Jan 20 13:59:20 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.wookjv", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Jan 20 13:59:20 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 20 13:59:20 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 13:59:20 compute-1 ceph-mon[81775]: Reconfiguring daemon mgr.compute-0.wookjv on compute-0
Jan 20 13:59:21 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 13:59:21 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 20 13:59:21 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:13:59:21.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 20 13:59:22 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 13:59:22 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 20 13:59:22 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:13:59:22.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 20 13:59:22 compute-1 ceph-mon[81775]: pgmap v156: 321 pgs: 9 peering, 312 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 120 B/s, 0 objects/s recovering
Jan 20 13:59:22 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:59:22 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:59:22 compute-1 ceph-mon[81775]: Reconfiguring crash.compute-0 (monmap changed)...
Jan 20 13:59:22 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Jan 20 13:59:22 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 13:59:22 compute-1 ceph-mon[81775]: Reconfiguring daemon crash.compute-0 on compute-0
Jan 20 13:59:22 compute-1 ceph-mon[81775]: 4.1c scrub starts
Jan 20 13:59:22 compute-1 ceph-mon[81775]: 4.1c scrub ok
Jan 20 13:59:22 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:59:22 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:59:22 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Jan 20 13:59:22 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 13:59:23 compute-1 sudo[85139]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:59:23 compute-1 sudo[85139]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:59:23 compute-1 sudo[85139]: pam_unix(sudo:session): session closed for user root
Jan 20 13:59:23 compute-1 sudo[85164]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 13:59:23 compute-1 sudo[85164]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:59:23 compute-1 sudo[85164]: pam_unix(sudo:session): session closed for user root
Jan 20 13:59:23 compute-1 sudo[85189]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:59:23 compute-1 sudo[85189]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:59:23 compute-1 sudo[85189]: pam_unix(sudo:session): session closed for user root
Jan 20 13:59:23 compute-1 sudo[85214]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid e399cf45-e6b6-5393-99f1-75c601d3f188
Jan 20 13:59:23 compute-1 sudo[85214]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:59:23 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 13:59:23 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 20 13:59:23 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:13:59:23.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 20 13:59:23 compute-1 ceph-osd[79119]: log_channel(cluster) log [DBG] : 6.19 deep-scrub starts
Jan 20 13:59:23 compute-1 ceph-osd[79119]: log_channel(cluster) log [DBG] : 6.19 deep-scrub ok
Jan 20 13:59:23 compute-1 podman[85255]: 2026-01-20 13:59:23.534996337 +0000 UTC m=+0.048494447 container create d17d57c45b395520539c37b2b0d223622ba8097ccf5c4cd69dc7b62be66f3a9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_dubinsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Jan 20 13:59:23 compute-1 systemd[1]: Started libpod-conmon-d17d57c45b395520539c37b2b0d223622ba8097ccf5c4cd69dc7b62be66f3a9e.scope.
Jan 20 13:59:23 compute-1 systemd[1]: Started libcrun container.
Jan 20 13:59:23 compute-1 podman[85255]: 2026-01-20 13:59:23.515015972 +0000 UTC m=+0.028514132 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 13:59:23 compute-1 podman[85255]: 2026-01-20 13:59:23.63047309 +0000 UTC m=+0.143971220 container init d17d57c45b395520539c37b2b0d223622ba8097ccf5c4cd69dc7b62be66f3a9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_dubinsky, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2)
Jan 20 13:59:23 compute-1 podman[85255]: 2026-01-20 13:59:23.64103321 +0000 UTC m=+0.154531350 container start d17d57c45b395520539c37b2b0d223622ba8097ccf5c4cd69dc7b62be66f3a9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_dubinsky, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 20 13:59:23 compute-1 podman[85255]: 2026-01-20 13:59:23.64614384 +0000 UTC m=+0.159641990 container attach d17d57c45b395520539c37b2b0d223622ba8097ccf5c4cd69dc7b62be66f3a9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_dubinsky, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 13:59:23 compute-1 ecstatic_dubinsky[85271]: 167 167
Jan 20 13:59:23 compute-1 systemd[1]: libpod-d17d57c45b395520539c37b2b0d223622ba8097ccf5c4cd69dc7b62be66f3a9e.scope: Deactivated successfully.
Jan 20 13:59:23 compute-1 conmon[85271]: conmon d17d57c45b395520539c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d17d57c45b395520539c37b2b0d223622ba8097ccf5c4cd69dc7b62be66f3a9e.scope/container/memory.events
Jan 20 13:59:23 compute-1 podman[85255]: 2026-01-20 13:59:23.649695621 +0000 UTC m=+0.163193771 container died d17d57c45b395520539c37b2b0d223622ba8097ccf5c4cd69dc7b62be66f3a9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_dubinsky, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Jan 20 13:59:23 compute-1 systemd[1]: var-lib-containers-storage-overlay-255be54dc2106e2f65cb406df0c4d761dd87d9ec1f923a35adc787e992629c01-merged.mount: Deactivated successfully.
Jan 20 13:59:23 compute-1 podman[85255]: 2026-01-20 13:59:23.704222975 +0000 UTC m=+0.217721095 container remove d17d57c45b395520539c37b2b0d223622ba8097ccf5c4cd69dc7b62be66f3a9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_dubinsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 13:59:23 compute-1 systemd[1]: libpod-conmon-d17d57c45b395520539c37b2b0d223622ba8097ccf5c4cd69dc7b62be66f3a9e.scope: Deactivated successfully.
Jan 20 13:59:23 compute-1 sudo[85214]: pam_unix(sudo:session): session closed for user root
Jan 20 13:59:23 compute-1 sudo[85290]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:59:23 compute-1 sudo[85290]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:59:23 compute-1 sudo[85290]: pam_unix(sudo:session): session closed for user root
Jan 20 13:59:23 compute-1 ceph-mon[81775]: Reconfiguring osd.0 (monmap changed)...
Jan 20 13:59:23 compute-1 ceph-mon[81775]: Reconfiguring daemon osd.0 on compute-0
Jan 20 13:59:23 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:59:23 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:59:23 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Jan 20 13:59:23 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 13:59:23 compute-1 ceph-mon[81775]: 6.19 deep-scrub starts
Jan 20 13:59:23 compute-1 ceph-mon[81775]: 6.19 deep-scrub ok
Jan 20 13:59:23 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:59:23 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:59:23 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Jan 20 13:59:23 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 13:59:23 compute-1 sudo[85315]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 13:59:23 compute-1 sudo[85315]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:59:23 compute-1 sudo[85315]: pam_unix(sudo:session): session closed for user root
Jan 20 13:59:24 compute-1 sudo[85340]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:59:24 compute-1 sudo[85340]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:59:24 compute-1 sudo[85340]: pam_unix(sudo:session): session closed for user root
Jan 20 13:59:24 compute-1 sudo[85365]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid e399cf45-e6b6-5393-99f1-75c601d3f188
Jan 20 13:59:24 compute-1 sudo[85365]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:59:24 compute-1 podman[85406]: 2026-01-20 13:59:24.498685293 +0000 UTC m=+0.060443740 container create 4834c250907bb123c98d3556ee2aca657e499a4733da86cb555a14b0e1b73d74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_goldwasser, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 13:59:24 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 13:59:24 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 13:59:24 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:13:59:24.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 13:59:24 compute-1 systemd[1]: Started libpod-conmon-4834c250907bb123c98d3556ee2aca657e499a4733da86cb555a14b0e1b73d74.scope.
Jan 20 13:59:24 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e59 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 13:59:24 compute-1 podman[85406]: 2026-01-20 13:59:24.470339907 +0000 UTC m=+0.032098434 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 13:59:24 compute-1 systemd[1]: Started libcrun container.
Jan 20 13:59:24 compute-1 podman[85406]: 2026-01-20 13:59:24.614463681 +0000 UTC m=+0.176222158 container init 4834c250907bb123c98d3556ee2aca657e499a4733da86cb555a14b0e1b73d74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_goldwasser, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 20 13:59:24 compute-1 podman[85406]: 2026-01-20 13:59:24.625540387 +0000 UTC m=+0.187298844 container start 4834c250907bb123c98d3556ee2aca657e499a4733da86cb555a14b0e1b73d74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_goldwasser, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 13:59:24 compute-1 podman[85406]: 2026-01-20 13:59:24.629648276 +0000 UTC m=+0.191406743 container attach 4834c250907bb123c98d3556ee2aca657e499a4733da86cb555a14b0e1b73d74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_goldwasser, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507)
Jan 20 13:59:24 compute-1 zealous_goldwasser[85422]: 167 167
Jan 20 13:59:24 compute-1 systemd[1]: libpod-4834c250907bb123c98d3556ee2aca657e499a4733da86cb555a14b0e1b73d74.scope: Deactivated successfully.
Jan 20 13:59:24 compute-1 podman[85406]: 2026-01-20 13:59:24.635040524 +0000 UTC m=+0.196798991 container died 4834c250907bb123c98d3556ee2aca657e499a4733da86cb555a14b0e1b73d74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_goldwasser, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 13:59:24 compute-1 systemd[1]: var-lib-containers-storage-overlay-c93eaef64172c5e28ace8db5a9e042af6207a737dd8b34c09e640960c56d1eef-merged.mount: Deactivated successfully.
Jan 20 13:59:24 compute-1 podman[85406]: 2026-01-20 13:59:24.68707021 +0000 UTC m=+0.248828677 container remove 4834c250907bb123c98d3556ee2aca657e499a4733da86cb555a14b0e1b73d74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_goldwasser, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Jan 20 13:59:24 compute-1 systemd[1]: libpod-conmon-4834c250907bb123c98d3556ee2aca657e499a4733da86cb555a14b0e1b73d74.scope: Deactivated successfully.
Jan 20 13:59:24 compute-1 sudo[85365]: pam_unix(sudo:session): session closed for user root
Jan 20 13:59:24 compute-1 ceph-mon[81775]: Reconfiguring crash.compute-1 (monmap changed)...
Jan 20 13:59:24 compute-1 ceph-mon[81775]: Reconfiguring daemon crash.compute-1 on compute-1
Jan 20 13:59:24 compute-1 ceph-mon[81775]: 4.12 scrub starts
Jan 20 13:59:24 compute-1 ceph-mon[81775]: 4.12 scrub ok
Jan 20 13:59:24 compute-1 ceph-mon[81775]: pgmap v157: 321 pgs: 9 peering, 312 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 120 B/s, 0 objects/s recovering
Jan 20 13:59:24 compute-1 ceph-mon[81775]: Reconfiguring osd.1 (monmap changed)...
Jan 20 13:59:24 compute-1 ceph-mon[81775]: Reconfiguring daemon osd.1 on compute-1
Jan 20 13:59:24 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:59:24 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:59:24 compute-1 ceph-mon[81775]: Reconfiguring mon.compute-1 (monmap changed)...
Jan 20 13:59:24 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Jan 20 13:59:24 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Jan 20 13:59:24 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 13:59:24 compute-1 ceph-mon[81775]: Reconfiguring daemon mon.compute-1 on compute-1
Jan 20 13:59:25 compute-1 sudo[85447]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:59:25 compute-1 sudo[85447]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:59:25 compute-1 sudo[85447]: pam_unix(sudo:session): session closed for user root
Jan 20 13:59:25 compute-1 sudo[85472]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 13:59:25 compute-1 sudo[85472]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:59:25 compute-1 sudo[85472]: pam_unix(sudo:session): session closed for user root
Jan 20 13:59:25 compute-1 sudo[85497]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:59:25 compute-1 sudo[85497]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:59:25 compute-1 sudo[85497]: pam_unix(sudo:session): session closed for user root
Jan 20 13:59:25 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 13:59:25 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 20 13:59:25 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:13:59:25.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 20 13:59:25 compute-1 sudo[85522]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid e399cf45-e6b6-5393-99f1-75c601d3f188
Jan 20 13:59:25 compute-1 sudo[85522]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:59:25 compute-1 podman[85564]: 2026-01-20 13:59:25.613185912 +0000 UTC m=+0.069408200 container create c4f72d32f375654dfd77a7297ca2f1d66e63a41b28c4791e5a65e8f6db59be1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_hugle, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0)
Jan 20 13:59:25 compute-1 systemd[1]: Started libpod-conmon-c4f72d32f375654dfd77a7297ca2f1d66e63a41b28c4791e5a65e8f6db59be1a.scope.
Jan 20 13:59:25 compute-1 podman[85564]: 2026-01-20 13:59:25.58432654 +0000 UTC m=+0.040548888 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 13:59:25 compute-1 systemd[1]: Started libcrun container.
Jan 20 13:59:25 compute-1 podman[85564]: 2026-01-20 13:59:25.708240593 +0000 UTC m=+0.164462871 container init c4f72d32f375654dfd77a7297ca2f1d66e63a41b28c4791e5a65e8f6db59be1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_hugle, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 13:59:25 compute-1 podman[85564]: 2026-01-20 13:59:25.718773912 +0000 UTC m=+0.174996180 container start c4f72d32f375654dfd77a7297ca2f1d66e63a41b28c4791e5a65e8f6db59be1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_hugle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Jan 20 13:59:25 compute-1 podman[85564]: 2026-01-20 13:59:25.724092488 +0000 UTC m=+0.180314786 container attach c4f72d32f375654dfd77a7297ca2f1d66e63a41b28c4791e5a65e8f6db59be1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_hugle, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 20 13:59:25 compute-1 unruffled_hugle[85581]: 167 167
Jan 20 13:59:25 compute-1 systemd[1]: libpod-c4f72d32f375654dfd77a7297ca2f1d66e63a41b28c4791e5a65e8f6db59be1a.scope: Deactivated successfully.
Jan 20 13:59:25 compute-1 podman[85564]: 2026-01-20 13:59:25.727555796 +0000 UTC m=+0.183778084 container died c4f72d32f375654dfd77a7297ca2f1d66e63a41b28c4791e5a65e8f6db59be1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_hugle, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 13:59:25 compute-1 systemd[1]: var-lib-containers-storage-overlay-3eaefe0693fce7abb33f2eeeb47dba1be60f3be85d42b1d106611815fb906926-merged.mount: Deactivated successfully.
Jan 20 13:59:25 compute-1 podman[85564]: 2026-01-20 13:59:25.781979297 +0000 UTC m=+0.238201595 container remove c4f72d32f375654dfd77a7297ca2f1d66e63a41b28c4791e5a65e8f6db59be1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_hugle, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 13:59:25 compute-1 systemd[1]: libpod-conmon-c4f72d32f375654dfd77a7297ca2f1d66e63a41b28c4791e5a65e8f6db59be1a.scope: Deactivated successfully.
Jan 20 13:59:25 compute-1 sudo[85522]: pam_unix(sudo:session): session closed for user root
Jan 20 13:59:25 compute-1 ceph-mon[81775]: 7.1d scrub starts
Jan 20 13:59:25 compute-1 ceph-mon[81775]: 7.1d scrub ok
Jan 20 13:59:25 compute-1 ceph-mon[81775]: pgmap v158: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 120 B/s, 0 objects/s recovering
Jan 20 13:59:25 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]: dispatch
Jan 20 13:59:25 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:59:25 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:59:25 compute-1 ceph-mon[81775]: Reconfiguring mon.compute-2 (monmap changed)...
Jan 20 13:59:25 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Jan 20 13:59:25 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Jan 20 13:59:25 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 13:59:25 compute-1 ceph-mon[81775]: Reconfiguring daemon mon.compute-2 on compute-2
Jan 20 13:59:25 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e60 e60: 3 total, 3 up, 3 in
Jan 20 13:59:26 compute-1 ceph-osd[79119]: log_channel(cluster) log [DBG] : 5.1b scrub starts
Jan 20 13:59:26 compute-1 ceph-osd[79119]: log_channel(cluster) log [DBG] : 5.1b scrub ok
Jan 20 13:59:26 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 13:59:26 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 13:59:26 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:13:59:26.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 13:59:26 compute-1 ceph-mon[81775]: 2.15 scrub starts
Jan 20 13:59:26 compute-1 ceph-mon[81775]: 2.15 scrub ok
Jan 20 13:59:26 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Jan 20 13:59:26 compute-1 ceph-mon[81775]: osdmap e60: 3 total, 3 up, 3 in
Jan 20 13:59:26 compute-1 ceph-mon[81775]: 5.1b scrub starts
Jan 20 13:59:26 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:59:26 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:59:26 compute-1 ceph-mon[81775]: Reconfiguring mgr.compute-2.gunjko (monmap changed)...
Jan 20 13:59:26 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.gunjko", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Jan 20 13:59:26 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 20 13:59:26 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 13:59:26 compute-1 ceph-mon[81775]: Reconfiguring daemon mgr.compute-2.gunjko on compute-2
Jan 20 13:59:27 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 13:59:27 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 20 13:59:27 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:13:59:27.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 20 13:59:27 compute-1 sudo[85600]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:59:27 compute-1 sudo[85600]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:59:27 compute-1 sudo[85600]: pam_unix(sudo:session): session closed for user root
Jan 20 13:59:27 compute-1 sudo[85625]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 13:59:27 compute-1 sudo[85625]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:59:27 compute-1 sudo[85625]: pam_unix(sudo:session): session closed for user root
Jan 20 13:59:27 compute-1 sudo[85650]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:59:27 compute-1 sudo[85650]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:59:27 compute-1 sudo[85650]: pam_unix(sudo:session): session closed for user root
Jan 20 13:59:28 compute-1 ceph-mon[81775]: 5.1b scrub ok
Jan 20 13:59:28 compute-1 ceph-mon[81775]: 4.19 scrub starts
Jan 20 13:59:28 compute-1 ceph-mon[81775]: 4.19 scrub ok
Jan 20 13:59:28 compute-1 ceph-mon[81775]: 4.16 scrub starts
Jan 20 13:59:28 compute-1 ceph-mon[81775]: 4.16 scrub ok
Jan 20 13:59:28 compute-1 ceph-mon[81775]: pgmap v160: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 116 B/s, 0 objects/s recovering
Jan 20 13:59:28 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
Jan 20 13:59:28 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:59:28 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:59:28 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e61 e61: 3 total, 3 up, 3 in
Jan 20 13:59:28 compute-1 sudo[85675]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Jan 20 13:59:28 compute-1 sudo[85675]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:59:28 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 13:59:28 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 13:59:28 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:13:59:28.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 13:59:28 compute-1 podman[85773]: 2026-01-20 13:59:28.723533802 +0000 UTC m=+0.086751032 container exec 718ebba7a543e42aad7051248d2c7dc014068c35c89c5b87f27b82d4de39c009 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-crash-compute-1, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 13:59:28 compute-1 podman[85773]: 2026-01-20 13:59:28.843654056 +0000 UTC m=+0.206871256 container exec_died 718ebba7a543e42aad7051248d2c7dc014068c35c89c5b87f27b82d4de39c009 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-crash-compute-1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Jan 20 13:59:29 compute-1 ceph-mon[81775]: 4.17 deep-scrub starts
Jan 20 13:59:29 compute-1 ceph-mon[81775]: 4.17 deep-scrub ok
Jan 20 13:59:29 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Jan 20 13:59:29 compute-1 ceph-mon[81775]: osdmap e61: 3 total, 3 up, 3 in
Jan 20 13:59:29 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e62 e62: 3 total, 3 up, 3 in
Jan 20 13:59:29 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 13:59:29 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 13:59:29 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:13:59:29.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 13:59:29 compute-1 sudo[85675]: pam_unix(sudo:session): session closed for user root
Jan 20 13:59:29 compute-1 ceph-osd[79119]: log_channel(cluster) log [DBG] : 4.e deep-scrub starts
Jan 20 13:59:29 compute-1 ceph-osd[79119]: log_channel(cluster) log [DBG] : 4.e deep-scrub ok
Jan 20 13:59:29 compute-1 sudo[85893]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:59:29 compute-1 sudo[85893]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:59:29 compute-1 sudo[85893]: pam_unix(sudo:session): session closed for user root
Jan 20 13:59:29 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 13:59:29 compute-1 sudo[85918]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 13:59:29 compute-1 sudo[85918]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:59:29 compute-1 sudo[85918]: pam_unix(sudo:session): session closed for user root
Jan 20 13:59:29 compute-1 sudo[85943]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:59:29 compute-1 sudo[85943]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:59:29 compute-1 sudo[85943]: pam_unix(sudo:session): session closed for user root
Jan 20 13:59:29 compute-1 sudo[85968]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 20 13:59:29 compute-1 sudo[85968]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:59:30 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e63 e63: 3 total, 3 up, 3 in
Jan 20 13:59:30 compute-1 ceph-mon[81775]: osdmap e62: 3 total, 3 up, 3 in
Jan 20 13:59:30 compute-1 ceph-mon[81775]: pgmap v163: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 853 B/s rd, 0 op/s; 0 B/s, 0 objects/s recovering
Jan 20 13:59:30 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]: dispatch
Jan 20 13:59:30 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:59:30 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:59:30 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:59:30 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:59:30 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:59:30 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:59:30 compute-1 sudo[85968]: pam_unix(sudo:session): session closed for user root
Jan 20 13:59:30 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 13:59:30 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 13:59:30 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:13:59:30.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 13:59:31 compute-1 ceph-mon[81775]: 4.e deep-scrub starts
Jan 20 13:59:31 compute-1 ceph-mon[81775]: 4.e deep-scrub ok
Jan 20 13:59:31 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Jan 20 13:59:31 compute-1 ceph-mon[81775]: osdmap e63: 3 total, 3 up, 3 in
Jan 20 13:59:31 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 13:59:31 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 13:59:31 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:59:31 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 13:59:31 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 13:59:31 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 13:59:31 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e64 e64: 3 total, 3 up, 3 in
Jan 20 13:59:31 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 13:59:31 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 13:59:31 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:13:59:31.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 13:59:31 compute-1 ceph-osd[79119]: log_channel(cluster) log [DBG] : 5.1c scrub starts
Jan 20 13:59:31 compute-1 ceph-osd[79119]: log_channel(cluster) log [DBG] : 5.1c scrub ok
Jan 20 13:59:32 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e65 e65: 3 total, 3 up, 3 in
Jan 20 13:59:32 compute-1 ceph-mon[81775]: 5.0 scrub starts
Jan 20 13:59:32 compute-1 ceph-mon[81775]: 5.0 scrub ok
Jan 20 13:59:32 compute-1 ceph-mon[81775]: osdmap e64: 3 total, 3 up, 3 in
Jan 20 13:59:32 compute-1 ceph-mon[81775]: pgmap v166: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 2.7 KiB/s rd, 2 op/s
Jan 20 13:59:32 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]: dispatch
Jan 20 13:59:32 compute-1 ceph-osd[79119]: log_channel(cluster) log [DBG] : 4.18 scrub starts
Jan 20 13:59:32 compute-1 ceph-osd[79119]: log_channel(cluster) log [DBG] : 4.18 scrub ok
Jan 20 13:59:32 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 13:59:32 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 13:59:32 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:13:59:32.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 13:59:32 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e66 e66: 3 total, 3 up, 3 in
Jan 20 13:59:33 compute-1 ceph-mon[81775]: 5.1c scrub starts
Jan 20 13:59:33 compute-1 ceph-mon[81775]: 5.1c scrub ok
Jan 20 13:59:33 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Jan 20 13:59:33 compute-1 ceph-mon[81775]: osdmap e65: 3 total, 3 up, 3 in
Jan 20 13:59:33 compute-1 ceph-mon[81775]: osdmap e66: 3 total, 3 up, 3 in
Jan 20 13:59:33 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 13:59:33 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 13:59:33 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:13:59:33.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 13:59:33 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e67 e67: 3 total, 3 up, 3 in
Jan 20 13:59:33 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 67 pg[9.16( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=67) [1] r=0 lpr=67 pi=[54,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:59:33 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 67 pg[9.6( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=67) [1] r=0 lpr=67 pi=[54,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:59:33 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 67 pg[9.1e( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=67) [1] r=0 lpr=67 pi=[54,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:59:33 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 67 pg[9.e( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=67) [1] r=0 lpr=67 pi=[54,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:59:34 compute-1 ceph-mon[81775]: 4.18 scrub starts
Jan 20 13:59:34 compute-1 ceph-mon[81775]: 4.18 scrub ok
Jan 20 13:59:34 compute-1 ceph-mon[81775]: 4.1e scrub starts
Jan 20 13:59:34 compute-1 ceph-mon[81775]: 4.1e scrub ok
Jan 20 13:59:34 compute-1 ceph-mon[81775]: pgmap v169: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 13:59:34 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]: dispatch
Jan 20 13:59:34 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Jan 20 13:59:34 compute-1 ceph-mon[81775]: osdmap e67: 3 total, 3 up, 3 in
Jan 20 13:59:34 compute-1 ceph-osd[79119]: log_channel(cluster) log [DBG] : 3.1c scrub starts
Jan 20 13:59:34 compute-1 ceph-osd[79119]: log_channel(cluster) log [DBG] : 3.1c scrub ok
Jan 20 13:59:34 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 13:59:34 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 13:59:34 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:13:59:34.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 13:59:34 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e67 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 13:59:35 compute-1 sshd-session[71470]: Received disconnect from 38.102.83.230 port 53628:11: disconnected by user
Jan 20 13:59:35 compute-1 sshd-session[71470]: Disconnected from user zuul 38.102.83.230 port 53628
Jan 20 13:59:35 compute-1 sshd-session[71467]: pam_unix(sshd:session): session closed for user zuul
Jan 20 13:59:35 compute-1 systemd[1]: session-19.scope: Deactivated successfully.
Jan 20 13:59:35 compute-1 systemd[1]: session-19.scope: Consumed 10.031s CPU time.
Jan 20 13:59:35 compute-1 systemd-logind[783]: Session 19 logged out. Waiting for processes to exit.
Jan 20 13:59:35 compute-1 systemd-logind[783]: Removed session 19.
Jan 20 13:59:35 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 13:59:35 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 13:59:35 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:13:59:35.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 13:59:36 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e68 e68: 3 total, 3 up, 3 in
Jan 20 13:59:36 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 68 pg[9.1e( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=68) [1]/[0] r=-1 lpr=68 pi=[54,68)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:59:36 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 68 pg[9.1e( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=68) [1]/[0] r=-1 lpr=68 pi=[54,68)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 20 13:59:36 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 68 pg[9.6( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=68) [1]/[0] r=-1 lpr=68 pi=[54,68)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:59:36 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 68 pg[9.6( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=68) [1]/[0] r=-1 lpr=68 pi=[54,68)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 20 13:59:36 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 68 pg[9.e( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=68) [1]/[0] r=-1 lpr=68 pi=[54,68)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:59:36 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 68 pg[9.e( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=68) [1]/[0] r=-1 lpr=68 pi=[54,68)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 20 13:59:36 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 68 pg[9.16( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=68) [1]/[0] r=-1 lpr=68 pi=[54,68)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:59:36 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 68 pg[9.16( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=68) [1]/[0] r=-1 lpr=68 pi=[54,68)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 20 13:59:36 compute-1 ceph-mon[81775]: 3.1c scrub starts
Jan 20 13:59:36 compute-1 ceph-mon[81775]: 3.1c scrub ok
Jan 20 13:59:36 compute-1 ceph-osd[79119]: log_channel(cluster) log [DBG] : 6.d scrub starts
Jan 20 13:59:36 compute-1 ceph-osd[79119]: log_channel(cluster) log [DBG] : 6.d scrub ok
Jan 20 13:59:36 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 13:59:36 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 13:59:36 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:13:59:36.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 13:59:37 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e69 e69: 3 total, 3 up, 3 in
Jan 20 13:59:37 compute-1 ceph-osd[79119]: log_channel(cluster) log [DBG] : 5.2 scrub starts
Jan 20 13:59:37 compute-1 ceph-osd[79119]: log_channel(cluster) log [DBG] : 5.2 scrub ok
Jan 20 13:59:37 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 13:59:37 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 13:59:37 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:13:59:37.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 13:59:37 compute-1 ceph-mon[81775]: 4.14 deep-scrub starts
Jan 20 13:59:37 compute-1 ceph-mon[81775]: 4.14 deep-scrub ok
Jan 20 13:59:37 compute-1 ceph-mon[81775]: pgmap v171: 321 pgs: 4 remapped+peering, 317 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 750 B/s rd, 500 B/s wr, 1 op/s; 295 B/s, 9 objects/s recovering
Jan 20 13:59:37 compute-1 ceph-mon[81775]: osdmap e68: 3 total, 3 up, 3 in
Jan 20 13:59:37 compute-1 ceph-mon[81775]: osdmap e69: 3 total, 3 up, 3 in
Jan 20 13:59:38 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e70 e70: 3 total, 3 up, 3 in
Jan 20 13:59:38 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 70 pg[9.e( v 51'1000 (0'0,51'1000] local-lis/les=0/0 n=6 ec=54/45 lis/c=68/54 les/c/f=69/55/0 sis=70) [1] r=0 lpr=70 pi=[54,70)/1 luod=0'0 crt=51'1000 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:59:38 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 70 pg[9.e( v 51'1000 (0'0,51'1000] local-lis/les=0/0 n=6 ec=54/45 lis/c=68/54 les/c/f=69/55/0 sis=70) [1] r=0 lpr=70 pi=[54,70)/1 crt=51'1000 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:59:38 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 70 pg[9.16( v 51'1000 (0'0,51'1000] local-lis/les=0/0 n=5 ec=54/45 lis/c=68/54 les/c/f=69/55/0 sis=70) [1] r=0 lpr=70 pi=[54,70)/1 luod=0'0 crt=51'1000 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:59:38 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 70 pg[9.16( v 51'1000 (0'0,51'1000] local-lis/les=0/0 n=5 ec=54/45 lis/c=68/54 les/c/f=69/55/0 sis=70) [1] r=0 lpr=70 pi=[54,70)/1 crt=51'1000 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:59:38 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 70 pg[9.1e( v 51'1000 (0'0,51'1000] local-lis/les=0/0 n=5 ec=54/45 lis/c=68/54 les/c/f=69/55/0 sis=70) [1] r=0 lpr=70 pi=[54,70)/1 luod=0'0 crt=51'1000 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:59:38 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 70 pg[9.6( v 51'1000 (0'0,51'1000] local-lis/les=0/0 n=6 ec=54/45 lis/c=68/54 les/c/f=69/55/0 sis=70) [1] r=0 lpr=70 pi=[54,70)/1 luod=0'0 crt=51'1000 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:59:38 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 70 pg[9.6( v 51'1000 (0'0,51'1000] local-lis/les=0/0 n=6 ec=54/45 lis/c=68/54 les/c/f=69/55/0 sis=70) [1] r=0 lpr=70 pi=[54,70)/1 crt=51'1000 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:59:38 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 70 pg[9.1e( v 51'1000 (0'0,51'1000] local-lis/les=0/0 n=5 ec=54/45 lis/c=68/54 les/c/f=69/55/0 sis=70) [1] r=0 lpr=70 pi=[54,70)/1 crt=51'1000 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:59:38 compute-1 sudo[86023]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:59:38 compute-1 sudo[86023]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:59:38 compute-1 sudo[86023]: pam_unix(sudo:session): session closed for user root
Jan 20 13:59:38 compute-1 sudo[86048]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 13:59:38 compute-1 sudo[86048]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:59:38 compute-1 sudo[86048]: pam_unix(sudo:session): session closed for user root
Jan 20 13:59:38 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 13:59:38 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 13:59:38 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:13:59:38.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 13:59:38 compute-1 ceph-mon[81775]: 6.d scrub starts
Jan 20 13:59:38 compute-1 ceph-mon[81775]: 6.d scrub ok
Jan 20 13:59:38 compute-1 ceph-mon[81775]: pgmap v174: 321 pgs: 4 unknown, 4 remapped+peering, 313 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 279 B/s, 9 objects/s recovering
Jan 20 13:59:38 compute-1 ceph-mon[81775]: 5.2 scrub starts
Jan 20 13:59:38 compute-1 ceph-mon[81775]: 5.2 scrub ok
Jan 20 13:59:38 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:59:38 compute-1 ceph-mon[81775]: osdmap e70: 3 total, 3 up, 3 in
Jan 20 13:59:38 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:59:38 compute-1 ceph-mon[81775]: 6.4 scrub starts
Jan 20 13:59:38 compute-1 ceph-mon[81775]: 6.4 scrub ok
Jan 20 13:59:39 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e71 e71: 3 total, 3 up, 3 in
Jan 20 13:59:39 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 71 pg[9.1e( v 51'1000 (0'0,51'1000] local-lis/les=70/71 n=5 ec=54/45 lis/c=68/54 les/c/f=69/55/0 sis=70) [1] r=0 lpr=70 pi=[54,70)/1 crt=51'1000 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:59:39 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 71 pg[9.6( v 51'1000 (0'0,51'1000] local-lis/les=70/71 n=6 ec=54/45 lis/c=68/54 les/c/f=69/55/0 sis=70) [1] r=0 lpr=70 pi=[54,70)/1 crt=51'1000 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:59:39 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 71 pg[9.e( v 51'1000 (0'0,51'1000] local-lis/les=70/71 n=6 ec=54/45 lis/c=68/54 les/c/f=69/55/0 sis=70) [1] r=0 lpr=70 pi=[54,70)/1 crt=51'1000 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:59:39 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 71 pg[9.16( v 51'1000 (0'0,51'1000] local-lis/les=70/71 n=5 ec=54/45 lis/c=68/54 les/c/f=69/55/0 sis=70) [1] r=0 lpr=70 pi=[54,70)/1 crt=51'1000 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:59:39 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e71 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 13:59:39 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 13:59:39 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 13:59:39 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:13:59:39.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 13:59:40 compute-1 ceph-mon[81775]: osdmap e71: 3 total, 3 up, 3 in
Jan 20 13:59:40 compute-1 ceph-mon[81775]: pgmap v177: 321 pgs: 4 unknown, 4 remapped+peering, 313 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Jan 20 13:59:40 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 13:59:40 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 13:59:40 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:13:59:40.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 13:59:41 compute-1 ceph-mon[81775]: 6.6 scrub starts
Jan 20 13:59:41 compute-1 ceph-mon[81775]: 6.6 scrub ok
Jan 20 13:59:41 compute-1 ceph-mon[81775]: 4.1f deep-scrub starts
Jan 20 13:59:41 compute-1 ceph-mon[81775]: 4.1f deep-scrub ok
Jan 20 13:59:41 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 13:59:41 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 13:59:41 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:13:59:41.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 13:59:42 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e72 e72: 3 total, 3 up, 3 in
Jan 20 13:59:42 compute-1 ceph-mon[81775]: pgmap v178: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 31 KiB/s rd, 615 B/s wr, 57 op/s; 163 B/s, 8 objects/s recovering
Jan 20 13:59:42 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]: dispatch
Jan 20 13:59:42 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 13:59:42 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 13:59:42 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:13:59:42.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 13:59:43 compute-1 sshd-session[86073]: Connection closed by authenticating user root 116.99.171.211 port 45160 [preauth]
Jan 20 13:59:43 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Jan 20 13:59:43 compute-1 ceph-mon[81775]: osdmap e72: 3 total, 3 up, 3 in
Jan 20 13:59:43 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e73 e73: 3 total, 3 up, 3 in
Jan 20 13:59:43 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 13:59:43 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 13:59:43 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:13:59:43.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 13:59:44 compute-1 ceph-mon[81775]: pgmap v180: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 26 KiB/s rd, 511 B/s wr, 47 op/s; 135 B/s, 7 objects/s recovering
Jan 20 13:59:44 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]: dispatch
Jan 20 13:59:44 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Jan 20 13:59:44 compute-1 ceph-mon[81775]: osdmap e73: 3 total, 3 up, 3 in
Jan 20 13:59:44 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e74 e74: 3 total, 3 up, 3 in
Jan 20 13:59:44 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 13:59:44 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 13:59:44 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:13:59:44.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 13:59:44 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e74 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 13:59:45 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e75 e75: 3 total, 3 up, 3 in
Jan 20 13:59:45 compute-1 ceph-mon[81775]: osdmap e74: 3 total, 3 up, 3 in
Jan 20 13:59:45 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]: dispatch
Jan 20 13:59:45 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 13:59:45 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 13:59:45 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:13:59:45.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 13:59:46 compute-1 ceph-mon[81775]: pgmap v183: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 26 KiB/s rd, 511 B/s wr, 47 op/s; 135 B/s, 7 objects/s recovering
Jan 20 13:59:46 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Jan 20 13:59:46 compute-1 ceph-mon[81775]: osdmap e75: 3 total, 3 up, 3 in
Jan 20 13:59:46 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e76 e76: 3 total, 3 up, 3 in
Jan 20 13:59:46 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 13:59:46 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 13:59:46 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:13:59:46.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 13:59:47 compute-1 ceph-osd[79119]: log_channel(cluster) log [DBG] : 4.5 deep-scrub starts
Jan 20 13:59:47 compute-1 ceph-osd[79119]: log_channel(cluster) log [DBG] : 4.5 deep-scrub ok
Jan 20 13:59:47 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e77 e77: 3 total, 3 up, 3 in
Jan 20 13:59:47 compute-1 ceph-mon[81775]: osdmap e76: 3 total, 3 up, 3 in
Jan 20 13:59:47 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]: dispatch
Jan 20 13:59:47 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 77 pg[9.1a( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=77) [1] r=0 lpr=77 pi=[54,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:59:47 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 77 pg[9.a( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=77) [1] r=0 lpr=77 pi=[54,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:59:47 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 13:59:47 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 13:59:47 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:13:59:47.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 13:59:47 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e78 e78: 3 total, 3 up, 3 in
Jan 20 13:59:47 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 78 pg[9.a( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=78) [1]/[0] r=-1 lpr=78 pi=[54,78)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:59:47 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 78 pg[9.1a( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=78) [1]/[0] r=-1 lpr=78 pi=[54,78)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:59:47 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 78 pg[9.1a( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=78) [1]/[0] r=-1 lpr=78 pi=[54,78)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 20 13:59:47 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 78 pg[9.a( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=78) [1]/[0] r=-1 lpr=78 pi=[54,78)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 20 13:59:48 compute-1 ceph-mon[81775]: pgmap v186: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Jan 20 13:59:48 compute-1 ceph-mon[81775]: 4.5 deep-scrub starts
Jan 20 13:59:48 compute-1 ceph-mon[81775]: 4.5 deep-scrub ok
Jan 20 13:59:48 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Jan 20 13:59:48 compute-1 ceph-mon[81775]: osdmap e77: 3 total, 3 up, 3 in
Jan 20 13:59:48 compute-1 ceph-mon[81775]: osdmap e78: 3 total, 3 up, 3 in
Jan 20 13:59:48 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 13:59:48 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 13:59:48 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:13:59:48.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 13:59:48 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e79 e79: 3 total, 3 up, 3 in
Jan 20 13:59:49 compute-1 ceph-osd[79119]: log_channel(cluster) log [DBG] : 5.f scrub starts
Jan 20 13:59:49 compute-1 ceph-osd[79119]: log_channel(cluster) log [DBG] : 5.f scrub ok
Jan 20 13:59:49 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e79 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 13:59:49 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 13:59:49 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 13:59:49 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:13:59:49.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 13:59:49 compute-1 ceph-mon[81775]: osdmap e79: 3 total, 3 up, 3 in
Jan 20 13:59:49 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e80 e80: 3 total, 3 up, 3 in
Jan 20 13:59:49 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 80 pg[9.a( v 51'1000 (0'0,51'1000] local-lis/les=0/0 n=6 ec=54/45 lis/c=78/54 les/c/f=79/55/0 sis=80) [1] r=0 lpr=80 pi=[54,80)/1 luod=0'0 crt=51'1000 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:59:49 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 80 pg[9.a( v 51'1000 (0'0,51'1000] local-lis/les=0/0 n=6 ec=54/45 lis/c=78/54 les/c/f=79/55/0 sis=80) [1] r=0 lpr=80 pi=[54,80)/1 crt=51'1000 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:59:49 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 80 pg[9.1a( v 51'1000 (0'0,51'1000] local-lis/les=0/0 n=5 ec=54/45 lis/c=78/54 les/c/f=79/55/0 sis=80) [1] r=0 lpr=80 pi=[54,80)/1 luod=0'0 crt=51'1000 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:59:49 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 80 pg[9.1a( v 51'1000 (0'0,51'1000] local-lis/les=0/0 n=5 ec=54/45 lis/c=78/54 les/c/f=79/55/0 sis=80) [1] r=0 lpr=80 pi=[54,80)/1 crt=51'1000 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:59:50 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 13:59:50 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 13:59:50 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:13:59:50.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 13:59:50 compute-1 ceph-mon[81775]: pgmap v190: 321 pgs: 2 remapped+peering, 319 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Jan 20 13:59:50 compute-1 ceph-mon[81775]: 5.f scrub starts
Jan 20 13:59:50 compute-1 ceph-mon[81775]: 5.f scrub ok
Jan 20 13:59:50 compute-1 ceph-mon[81775]: osdmap e80: 3 total, 3 up, 3 in
Jan 20 13:59:50 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e81 e81: 3 total, 3 up, 3 in
Jan 20 13:59:50 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 81 pg[9.1a( v 51'1000 (0'0,51'1000] local-lis/les=80/81 n=5 ec=54/45 lis/c=78/54 les/c/f=79/55/0 sis=80) [1] r=0 lpr=80 pi=[54,80)/1 crt=51'1000 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:59:50 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 81 pg[9.a( v 51'1000 (0'0,51'1000] local-lis/les=80/81 n=6 ec=54/45 lis/c=78/54 les/c/f=79/55/0 sis=80) [1] r=0 lpr=80 pi=[54,80)/1 crt=51'1000 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:59:51 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 13:59:51 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 13:59:51 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:13:59:51.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 13:59:51 compute-1 ceph-mon[81775]: 6.9 deep-scrub starts
Jan 20 13:59:51 compute-1 ceph-mon[81775]: 6.9 deep-scrub ok
Jan 20 13:59:51 compute-1 ceph-mon[81775]: osdmap e81: 3 total, 3 up, 3 in
Jan 20 13:59:51 compute-1 ceph-mon[81775]: pgmap v193: 321 pgs: 2 peering, 319 active+clean; 456 KiB data, 104 MiB used, 21 GiB / 21 GiB avail; 205 B/s, 9 objects/s recovering
Jan 20 13:59:51 compute-1 ceph-mon[81775]: 6.1c scrub starts
Jan 20 13:59:51 compute-1 ceph-mon[81775]: 6.1c scrub ok
Jan 20 13:59:52 compute-1 sshd-session[86075]: Accepted publickey for zuul from 192.168.122.30 port 33566 ssh2: ECDSA SHA256:Yw0kyD5N4lqNgr1J3b5cYIIxKFrTRY8zW6kk+n6imz4
Jan 20 13:59:52 compute-1 systemd-logind[783]: New session 33 of user zuul.
Jan 20 13:59:52 compute-1 systemd[1]: Started Session 33 of User zuul.
Jan 20 13:59:52 compute-1 sshd-session[86075]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 20 13:59:52 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 13:59:52 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 13:59:52 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:13:59:52.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 13:59:52 compute-1 ceph-mon[81775]: 6.b scrub starts
Jan 20 13:59:52 compute-1 ceph-mon[81775]: 6.b scrub ok
Jan 20 13:59:52 compute-1 ceph-mon[81775]: 6.1e scrub starts
Jan 20 13:59:52 compute-1 ceph-mon[81775]: 6.1e scrub ok
Jan 20 13:59:53 compute-1 ceph-osd[79119]: log_channel(cluster) log [DBG] : 6.7 scrub starts
Jan 20 13:59:53 compute-1 python3.9[86228]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 13:59:53 compute-1 ceph-osd[79119]: log_channel(cluster) log [DBG] : 6.7 scrub ok
Jan 20 13:59:53 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 13:59:53 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 13:59:53 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:13:59:53.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 13:59:54 compute-1 ceph-mon[81775]: 6.c scrub starts
Jan 20 13:59:54 compute-1 ceph-mon[81775]: 6.c scrub ok
Jan 20 13:59:54 compute-1 ceph-mon[81775]: pgmap v194: 321 pgs: 2 peering, 319 active+clean; 456 KiB data, 104 MiB used, 21 GiB / 21 GiB avail; 144 B/s, 6 objects/s recovering
Jan 20 13:59:54 compute-1 ceph-mon[81775]: 6.17 scrub starts
Jan 20 13:59:54 compute-1 ceph-mon[81775]: 6.17 scrub ok
Jan 20 13:59:54 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e81 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 13:59:54 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 13:59:54 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 13:59:54 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:13:59:54.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 13:59:55 compute-1 ceph-mon[81775]: 6.7 scrub starts
Jan 20 13:59:55 compute-1 ceph-mon[81775]: 6.7 scrub ok
Jan 20 13:59:55 compute-1 ceph-mon[81775]: 6.12 scrub starts
Jan 20 13:59:55 compute-1 ceph-mon[81775]: 6.12 scrub ok
Jan 20 13:59:55 compute-1 ceph-osd[79119]: log_channel(cluster) log [DBG] : 5.7 deep-scrub starts
Jan 20 13:59:55 compute-1 ceph-osd[79119]: log_channel(cluster) log [DBG] : 5.7 deep-scrub ok
Jan 20 13:59:55 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 13:59:55 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 13:59:55 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:13:59:55.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 13:59:56 compute-1 ceph-mon[81775]: pgmap v195: 321 pgs: 2 peering, 319 active+clean; 456 KiB data, 104 MiB used, 21 GiB / 21 GiB avail; 121 B/s, 5 objects/s recovering
Jan 20 13:59:56 compute-1 ceph-mon[81775]: 4.15 scrub starts
Jan 20 13:59:56 compute-1 ceph-mon[81775]: 4.15 scrub ok
Jan 20 13:59:56 compute-1 sudo[86440]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-btekxafwjmuyljfxrjuqxqufcdbmnzxt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917595.6807694-57-173101023044082/AnsiballZ_command.py'
Jan 20 13:59:56 compute-1 sudo[86440]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:59:56 compute-1 python3.9[86442]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail
                                            pushd /var/tmp
                                            curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz
                                            pushd repo-setup-main
                                            python3 -m venv ./venv
                                            PBR_VERSION=0.0.0 ./venv/bin/pip install ./
                                            ./venv/bin/repo-setup current-podified -b antelope
                                            popd
                                            rm -rf repo-setup-main
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 13:59:56 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 13:59:56 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 13:59:56 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:13:59:56.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 13:59:57 compute-1 ceph-mon[81775]: 5.7 deep-scrub starts
Jan 20 13:59:57 compute-1 ceph-mon[81775]: 5.7 deep-scrub ok
Jan 20 13:59:57 compute-1 ceph-mon[81775]: 7.11 scrub starts
Jan 20 13:59:57 compute-1 ceph-mon[81775]: 7.11 scrub ok
Jan 20 13:59:57 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 13:59:57 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 13:59:57 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:13:59:57.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 13:59:58 compute-1 ceph-mon[81775]: 6.f scrub starts
Jan 20 13:59:58 compute-1 ceph-mon[81775]: 6.f scrub ok
Jan 20 13:59:58 compute-1 ceph-mon[81775]: pgmap v196: 321 pgs: 321 active+clean; 456 KiB data, 121 MiB used, 21 GiB / 21 GiB avail; 96 B/s, 4 objects/s recovering
Jan 20 13:59:58 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]: dispatch
Jan 20 13:59:58 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e82 e82: 3 total, 3 up, 3 in
Jan 20 13:59:58 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 13:59:58 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 13:59:58 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:13:59:58.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 13:59:59 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Jan 20 13:59:59 compute-1 ceph-mon[81775]: osdmap e82: 3 total, 3 up, 3 in
Jan 20 13:59:59 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e82 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 13:59:59 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 13:59:59 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 13:59:59 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:13:59:59.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:00:00 compute-1 ceph-mon[81775]: 6.10 deep-scrub starts
Jan 20 14:00:00 compute-1 ceph-mon[81775]: 6.10 deep-scrub ok
Jan 20 14:00:00 compute-1 ceph-mon[81775]: pgmap v198: 321 pgs: 321 active+clean; 456 KiB data, 121 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:00:00 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]: dispatch
Jan 20 14:00:00 compute-1 ceph-mon[81775]: overall HEALTH_OK
Jan 20 14:00:00 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e83 e83: 3 total, 3 up, 3 in
Jan 20 14:00:00 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:00:00 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:00:00 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:00:00.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:00:01 compute-1 ceph-mon[81775]: 6.11 scrub starts
Jan 20 14:00:01 compute-1 ceph-mon[81775]: 6.11 scrub ok
Jan 20 14:00:01 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Jan 20 14:00:01 compute-1 ceph-mon[81775]: osdmap e83: 3 total, 3 up, 3 in
Jan 20 14:00:01 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:00:01 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:00:01 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:00:01.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:00:02 compute-1 ceph-mon[81775]: pgmap v200: 321 pgs: 321 active+clean; 456 KiB data, 121 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:00:02 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]: dispatch
Jan 20 14:00:02 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e84 e84: 3 total, 3 up, 3 in
Jan 20 14:00:02 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 84 pg[9.d( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=68/68 les/c/f=69/69/0 sis=84) [1] r=0 lpr=84 pi=[68,84)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:00:02 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 84 pg[9.1d( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=68/68 les/c/f=69/69/0 sis=84) [1] r=0 lpr=84 pi=[68,84)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:00:02 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:00:02 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:00:02 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:00:02.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:00:02 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e85 e85: 3 total, 3 up, 3 in
Jan 20 14:00:02 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 85 pg[9.1d( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=68/68 les/c/f=69/69/0 sis=85) [1]/[2] r=-1 lpr=85 pi=[68,85)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 14:00:02 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 85 pg[9.1d( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=68/68 les/c/f=69/69/0 sis=85) [1]/[2] r=-1 lpr=85 pi=[68,85)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 20 14:00:02 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 85 pg[9.d( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=68/68 les/c/f=69/69/0 sis=85) [1]/[2] r=-1 lpr=85 pi=[68,85)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 14:00:02 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 85 pg[9.d( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=68/68 les/c/f=69/69/0 sis=85) [1]/[2] r=-1 lpr=85 pi=[68,85)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 20 14:00:03 compute-1 sudo[86440]: pam_unix(sudo:session): session closed for user root
Jan 20 14:00:03 compute-1 ceph-mon[81775]: 6.13 scrub starts
Jan 20 14:00:03 compute-1 ceph-mon[81775]: 6.13 scrub ok
Jan 20 14:00:03 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Jan 20 14:00:03 compute-1 ceph-mon[81775]: osdmap e84: 3 total, 3 up, 3 in
Jan 20 14:00:03 compute-1 ceph-mon[81775]: osdmap e85: 3 total, 3 up, 3 in
Jan 20 14:00:03 compute-1 sshd-session[86078]: Connection closed by 192.168.122.30 port 33566
Jan 20 14:00:03 compute-1 sshd-session[86075]: pam_unix(sshd:session): session closed for user zuul
Jan 20 14:00:03 compute-1 systemd[1]: session-33.scope: Deactivated successfully.
Jan 20 14:00:03 compute-1 systemd[1]: session-33.scope: Consumed 8.313s CPU time.
Jan 20 14:00:03 compute-1 systemd-logind[783]: Session 33 logged out. Waiting for processes to exit.
Jan 20 14:00:03 compute-1 systemd-logind[783]: Removed session 33.
Jan 20 14:00:03 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:00:03 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:00:03 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:00:03.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:00:03 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e86 e86: 3 total, 3 up, 3 in
Jan 20 14:00:04 compute-1 ceph-mon[81775]: pgmap v203: 321 pgs: 321 active+clean; 456 KiB data, 121 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:00:04 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]: dispatch
Jan 20 14:00:04 compute-1 ceph-mon[81775]: 7.16 scrub starts
Jan 20 14:00:04 compute-1 ceph-mon[81775]: 7.16 scrub ok
Jan 20 14:00:04 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Jan 20 14:00:04 compute-1 ceph-mon[81775]: osdmap e86: 3 total, 3 up, 3 in
Jan 20 14:00:04 compute-1 ceph-osd[79119]: log_channel(cluster) log [DBG] : 6.3 scrub starts
Jan 20 14:00:04 compute-1 ceph-osd[79119]: log_channel(cluster) log [DBG] : 6.3 scrub ok
Jan 20 14:00:04 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e86 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:00:04 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:00:04 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:00:04 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:00:04.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:00:04 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e87 e87: 3 total, 3 up, 3 in
Jan 20 14:00:04 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 87 pg[9.d( v 51'1000 (0'0,51'1000] local-lis/les=0/0 n=6 ec=54/45 lis/c=85/68 les/c/f=86/69/0 sis=87) [1] r=0 lpr=87 pi=[68,87)/1 luod=0'0 crt=51'1000 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 14:00:04 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 87 pg[9.d( v 51'1000 (0'0,51'1000] local-lis/les=0/0 n=6 ec=54/45 lis/c=85/68 les/c/f=86/69/0 sis=87) [1] r=0 lpr=87 pi=[68,87)/1 crt=51'1000 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:00:04 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 87 pg[9.1d( v 51'1000 (0'0,51'1000] local-lis/les=0/0 n=5 ec=54/45 lis/c=85/68 les/c/f=86/69/0 sis=87) [1] r=0 lpr=87 pi=[68,87)/1 luod=0'0 crt=51'1000 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 14:00:04 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 87 pg[9.1d( v 51'1000 (0'0,51'1000] local-lis/les=0/0 n=5 ec=54/45 lis/c=85/68 les/c/f=86/69/0 sis=87) [1] r=0 lpr=87 pi=[68,87)/1 crt=51'1000 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:00:05 compute-1 ceph-mon[81775]: 7.1f scrub starts
Jan 20 14:00:05 compute-1 ceph-mon[81775]: 7.1f scrub ok
Jan 20 14:00:05 compute-1 ceph-mon[81775]: osdmap e87: 3 total, 3 up, 3 in
Jan 20 14:00:05 compute-1 ceph-osd[79119]: log_channel(cluster) log [DBG] : 6.2 scrub starts
Jan 20 14:00:05 compute-1 ceph-osd[79119]: log_channel(cluster) log [DBG] : 6.2 scrub ok
Jan 20 14:00:05 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:00:05 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:00:05 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:00:05.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:00:05 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e88 e88: 3 total, 3 up, 3 in
Jan 20 14:00:05 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 88 pg[9.1d( v 51'1000 (0'0,51'1000] local-lis/les=87/88 n=5 ec=54/45 lis/c=85/68 les/c/f=86/69/0 sis=87) [1] r=0 lpr=87 pi=[68,87)/1 crt=51'1000 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:00:05 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 88 pg[9.1f( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=64/64 les/c/f=65/65/0 sis=88) [1] r=0 lpr=88 pi=[64,88)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:00:05 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 88 pg[9.f( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=64/64 les/c/f=65/65/0 sis=88) [1] r=0 lpr=88 pi=[64,88)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:00:05 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 88 pg[9.d( v 51'1000 (0'0,51'1000] local-lis/les=87/88 n=6 ec=54/45 lis/c=85/68 les/c/f=86/69/0 sis=87) [1] r=0 lpr=87 pi=[68,87)/1 crt=51'1000 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:00:06 compute-1 ceph-mon[81775]: 6.3 scrub starts
Jan 20 14:00:06 compute-1 ceph-mon[81775]: 6.3 scrub ok
Jan 20 14:00:06 compute-1 ceph-mon[81775]: 6.14 scrub starts
Jan 20 14:00:06 compute-1 ceph-mon[81775]: 6.14 scrub ok
Jan 20 14:00:06 compute-1 ceph-mon[81775]: pgmap v206: 321 pgs: 2 active+remapped, 319 active+clean; 456 KiB data, 121 MiB used, 21 GiB / 21 GiB avail; 82 B/s, 3 objects/s recovering
Jan 20 14:00:06 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Jan 20 14:00:06 compute-1 ceph-mon[81775]: 3.15 scrub starts
Jan 20 14:00:06 compute-1 ceph-mon[81775]: 3.15 scrub ok
Jan 20 14:00:06 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Jan 20 14:00:06 compute-1 ceph-mon[81775]: osdmap e88: 3 total, 3 up, 3 in
Jan 20 14:00:06 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:00:06 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:00:06 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:00:06.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:00:06 compute-1 sshd-session[86499]: Invalid user admin from 116.99.171.211 port 35540
Jan 20 14:00:06 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e89 e89: 3 total, 3 up, 3 in
Jan 20 14:00:06 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 89 pg[9.1f( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=64/64 les/c/f=65/65/0 sis=89) [1]/[2] r=-1 lpr=89 pi=[64,89)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 14:00:06 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 89 pg[9.f( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=64/64 les/c/f=65/65/0 sis=89) [1]/[2] r=-1 lpr=89 pi=[64,89)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 14:00:06 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 89 pg[9.1f( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=64/64 les/c/f=65/65/0 sis=89) [1]/[2] r=-1 lpr=89 pi=[64,89)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 20 14:00:06 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 89 pg[9.f( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=64/64 les/c/f=65/65/0 sis=89) [1]/[2] r=-1 lpr=89 pi=[64,89)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 20 14:00:07 compute-1 ceph-mon[81775]: 6.2 scrub starts
Jan 20 14:00:07 compute-1 ceph-mon[81775]: 6.2 scrub ok
Jan 20 14:00:07 compute-1 ceph-mon[81775]: 6.16 scrub starts
Jan 20 14:00:07 compute-1 ceph-mon[81775]: 6.16 scrub ok
Jan 20 14:00:07 compute-1 ceph-mon[81775]: 2.12 scrub starts
Jan 20 14:00:07 compute-1 ceph-mon[81775]: 2.12 scrub ok
Jan 20 14:00:07 compute-1 ceph-mon[81775]: osdmap e89: 3 total, 3 up, 3 in
Jan 20 14:00:07 compute-1 sshd-session[86499]: Connection closed by invalid user admin 116.99.171.211 port 35540 [preauth]
Jan 20 14:00:07 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:00:07 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:00:07 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:00:07.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:00:07 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e90 e90: 3 total, 3 up, 3 in
Jan 20 14:00:08 compute-1 ceph-mon[81775]: 6.18 scrub starts
Jan 20 14:00:08 compute-1 ceph-mon[81775]: 6.18 scrub ok
Jan 20 14:00:08 compute-1 ceph-mon[81775]: pgmap v209: 321 pgs: 2 peering, 319 active+clean; 456 KiB data, 121 MiB used, 21 GiB / 21 GiB avail; 82 B/s, 3 objects/s recovering
Jan 20 14:00:08 compute-1 ceph-mon[81775]: osdmap e90: 3 total, 3 up, 3 in
Jan 20 14:00:08 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:00:08 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:00:08 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:00:08.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:00:09 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e91 e91: 3 total, 3 up, 3 in
Jan 20 14:00:09 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 91 pg[9.f( v 51'1000 (0'0,51'1000] local-lis/les=0/0 n=6 ec=54/45 lis/c=89/64 les/c/f=90/65/0 sis=91) [1] r=0 lpr=91 pi=[64,91)/1 luod=0'0 crt=51'1000 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 14:00:09 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 91 pg[9.f( v 51'1000 (0'0,51'1000] local-lis/les=0/0 n=6 ec=54/45 lis/c=89/64 les/c/f=90/65/0 sis=91) [1] r=0 lpr=91 pi=[64,91)/1 crt=51'1000 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:00:09 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 91 pg[9.1f( v 51'1000 (0'0,51'1000] local-lis/les=0/0 n=5 ec=54/45 lis/c=89/64 les/c/f=90/65/0 sis=91) [1] r=0 lpr=91 pi=[64,91)/1 luod=0'0 crt=51'1000 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 14:00:09 compute-1 ceph-mon[81775]: 6.1d scrub starts
Jan 20 14:00:09 compute-1 ceph-mon[81775]: 6.1d scrub ok
Jan 20 14:00:09 compute-1 ceph-mon[81775]: 3.e scrub starts
Jan 20 14:00:09 compute-1 ceph-mon[81775]: 3.e scrub ok
Jan 20 14:00:09 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 91 pg[9.1f( v 51'1000 (0'0,51'1000] local-lis/les=0/0 n=5 ec=54/45 lis/c=89/64 les/c/f=90/65/0 sis=91) [1] r=0 lpr=91 pi=[64,91)/1 crt=51'1000 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:00:09 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:00:09 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:00:09 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:00:09 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:00:09.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:00:10 compute-1 ceph-mon[81775]: pgmap v211: 321 pgs: 2 peering, 319 active+clean; 456 KiB data, 121 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:00:10 compute-1 ceph-mon[81775]: osdmap e91: 3 total, 3 up, 3 in
Jan 20 14:00:10 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e92 e92: 3 total, 3 up, 3 in
Jan 20 14:00:10 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 92 pg[9.1f( v 51'1000 (0'0,51'1000] local-lis/les=91/92 n=5 ec=54/45 lis/c=89/64 les/c/f=90/65/0 sis=91) [1] r=0 lpr=91 pi=[64,91)/1 crt=51'1000 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:00:10 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 92 pg[9.f( v 51'1000 (0'0,51'1000] local-lis/les=91/92 n=6 ec=54/45 lis/c=89/64 les/c/f=90/65/0 sis=91) [1] r=0 lpr=91 pi=[64,91)/1 crt=51'1000 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:00:10 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:00:10 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:00:10 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:00:10.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:00:11 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:00:11 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:00:11 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:00:11.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:00:11 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e93 e93: 3 total, 3 up, 3 in
Jan 20 14:00:11 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 93 pg[9.10( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=93) [1] r=0 lpr=93 pi=[54,93)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:00:12 compute-1 ceph-mon[81775]: osdmap e92: 3 total, 3 up, 3 in
Jan 20 14:00:12 compute-1 ceph-mon[81775]: 3.11 scrub starts
Jan 20 14:00:12 compute-1 ceph-mon[81775]: 3.11 scrub ok
Jan 20 14:00:12 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]: dispatch
Jan 20 14:00:12 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e94 e94: 3 total, 3 up, 3 in
Jan 20 14:00:12 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 94 pg[9.10( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=94) [1]/[0] r=-1 lpr=94 pi=[54,94)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 14:00:12 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 94 pg[9.10( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=94) [1]/[0] r=-1 lpr=94 pi=[54,94)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 20 14:00:12 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:00:12 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:00:12 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:00:12.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:00:13 compute-1 ceph-mon[81775]: pgmap v214: 321 pgs: 321 active+clean; 456 KiB data, 122 MiB used, 21 GiB / 21 GiB avail; 51 B/s, 2 objects/s recovering
Jan 20 14:00:13 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Jan 20 14:00:13 compute-1 ceph-mon[81775]: osdmap e93: 3 total, 3 up, 3 in
Jan 20 14:00:13 compute-1 ceph-mon[81775]: 6.1f scrub starts
Jan 20 14:00:13 compute-1 ceph-mon[81775]: 6.1f scrub ok
Jan 20 14:00:13 compute-1 ceph-mon[81775]: osdmap e94: 3 total, 3 up, 3 in
Jan 20 14:00:13 compute-1 ceph-mon[81775]: 4.9 scrub starts
Jan 20 14:00:13 compute-1 ceph-mon[81775]: 4.9 scrub ok
Jan 20 14:00:13 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e95 e95: 3 total, 3 up, 3 in
Jan 20 14:00:13 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 95 pg[9.11( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=95) [1] r=0 lpr=95 pi=[54,95)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:00:13 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:00:13 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:00:13 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:00:13.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:00:14 compute-1 ceph-mon[81775]: pgmap v217: 321 pgs: 321 active+clean; 456 KiB data, 122 MiB used, 21 GiB / 21 GiB avail; 54 B/s, 2 objects/s recovering
Jan 20 14:00:14 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]: dispatch
Jan 20 14:00:14 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Jan 20 14:00:14 compute-1 ceph-mon[81775]: osdmap e95: 3 total, 3 up, 3 in
Jan 20 14:00:14 compute-1 ceph-mon[81775]: 2.18 scrub starts
Jan 20 14:00:14 compute-1 ceph-mon[81775]: 2.18 scrub ok
Jan 20 14:00:14 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e96 e96: 3 total, 3 up, 3 in
Jan 20 14:00:14 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 96 pg[9.10( v 51'1000 (0'0,51'1000] local-lis/les=0/0 n=6 ec=54/45 lis/c=94/54 les/c/f=95/55/0 sis=96) [1] r=0 lpr=96 pi=[54,96)/1 luod=0'0 crt=51'1000 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 14:00:14 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 96 pg[9.11( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=96) [1]/[0] r=-1 lpr=96 pi=[54,96)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 14:00:14 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 96 pg[9.10( v 51'1000 (0'0,51'1000] local-lis/les=0/0 n=6 ec=54/45 lis/c=94/54 les/c/f=95/55/0 sis=96) [1] r=0 lpr=96 pi=[54,96)/1 crt=51'1000 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:00:14 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 96 pg[9.11( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=96) [1]/[0] r=-1 lpr=96 pi=[54,96)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 20 14:00:14 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e96 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:00:14 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:00:14 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:00:14 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:00:14.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:00:15 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e97 e97: 3 total, 3 up, 3 in
Jan 20 14:00:15 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 97 pg[9.12( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=97) [1] r=0 lpr=97 pi=[54,97)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:00:15 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 97 pg[9.10( v 51'1000 (0'0,51'1000] local-lis/les=96/97 n=6 ec=54/45 lis/c=94/54 les/c/f=95/55/0 sis=96) [1] r=0 lpr=96 pi=[54,96)/1 crt=51'1000 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:00:15 compute-1 ceph-mon[81775]: osdmap e96: 3 total, 3 up, 3 in
Jan 20 14:00:15 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]: dispatch
Jan 20 14:00:15 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:00:15 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:00:15 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:00:15.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:00:16 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e98 e98: 3 total, 3 up, 3 in
Jan 20 14:00:16 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 98 pg[9.12( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=98) [1]/[0] r=-1 lpr=98 pi=[54,98)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 14:00:16 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 98 pg[9.12( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=98) [1]/[0] r=-1 lpr=98 pi=[54,98)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 20 14:00:16 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 98 pg[9.11( v 51'1000 (0'0,51'1000] local-lis/les=0/0 n=6 ec=54/45 lis/c=96/54 les/c/f=97/55/0 sis=98) [1] r=0 lpr=98 pi=[54,98)/1 luod=0'0 crt=51'1000 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 14:00:16 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 98 pg[9.11( v 51'1000 (0'0,51'1000] local-lis/les=0/0 n=6 ec=54/45 lis/c=96/54 les/c/f=97/55/0 sis=98) [1] r=0 lpr=98 pi=[54,98)/1 crt=51'1000 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:00:16 compute-1 ceph-mon[81775]: 2.19 deep-scrub starts
Jan 20 14:00:16 compute-1 ceph-mon[81775]: 2.19 deep-scrub ok
Jan 20 14:00:16 compute-1 ceph-mon[81775]: pgmap v220: 321 pgs: 1 active+remapped, 320 active+clean; 456 KiB data, 122 MiB used, 21 GiB / 21 GiB avail; 0 B/s, 0 objects/s recovering
Jan 20 14:00:16 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Jan 20 14:00:16 compute-1 ceph-mon[81775]: osdmap e97: 3 total, 3 up, 3 in
Jan 20 14:00:16 compute-1 ceph-mon[81775]: 4.8 scrub starts
Jan 20 14:00:16 compute-1 ceph-mon[81775]: 4.8 scrub ok
Jan 20 14:00:16 compute-1 ceph-mon[81775]: osdmap e98: 3 total, 3 up, 3 in
Jan 20 14:00:16 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:00:16 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:00:16 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:00:16.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:00:17 compute-1 ceph-osd[79119]: log_channel(cluster) log [DBG] : 5.1 scrub starts
Jan 20 14:00:17 compute-1 ceph-osd[79119]: log_channel(cluster) log [DBG] : 5.1 scrub ok
Jan 20 14:00:17 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e99 e99: 3 total, 3 up, 3 in
Jan 20 14:00:17 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 99 pg[9.11( v 51'1000 (0'0,51'1000] local-lis/les=98/99 n=6 ec=54/45 lis/c=96/54 les/c/f=97/55/0 sis=98) [1] r=0 lpr=98 pi=[54,98)/1 crt=51'1000 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:00:17 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:00:17 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:00:17 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:00:17.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:00:17 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e100 e100: 3 total, 3 up, 3 in
Jan 20 14:00:17 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 100 pg[9.12( v 51'1000 (0'0,51'1000] local-lis/les=0/0 n=5 ec=54/45 lis/c=98/54 les/c/f=99/55/0 sis=100) [1] r=0 lpr=100 pi=[54,100)/1 luod=0'0 crt=51'1000 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 14:00:17 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 100 pg[9.12( v 51'1000 (0'0,51'1000] local-lis/les=0/0 n=5 ec=54/45 lis/c=98/54 les/c/f=99/55/0 sis=100) [1] r=0 lpr=100 pi=[54,100)/1 crt=51'1000 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:00:18 compute-1 sshd-session[86501]: Accepted publickey for zuul from 192.168.122.30 port 55684 ssh2: ECDSA SHA256:Yw0kyD5N4lqNgr1J3b5cYIIxKFrTRY8zW6kk+n6imz4
Jan 20 14:00:18 compute-1 systemd-logind[783]: New session 34 of user zuul.
Jan 20 14:00:18 compute-1 systemd[1]: Started Session 34 of User zuul.
Jan 20 14:00:18 compute-1 sshd-session[86501]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 20 14:00:18 compute-1 ceph-mon[81775]: 7.13 scrub starts
Jan 20 14:00:18 compute-1 ceph-mon[81775]: 7.13 scrub ok
Jan 20 14:00:18 compute-1 ceph-mon[81775]: pgmap v223: 321 pgs: 1 unknown, 320 active+clean; 456 KiB data, 122 MiB used, 21 GiB / 21 GiB avail; 0 B/s, 0 objects/s recovering
Jan 20 14:00:18 compute-1 ceph-mon[81775]: 5.1 scrub starts
Jan 20 14:00:18 compute-1 ceph-mon[81775]: 5.1 scrub ok
Jan 20 14:00:18 compute-1 ceph-mon[81775]: osdmap e99: 3 total, 3 up, 3 in
Jan 20 14:00:18 compute-1 ceph-mon[81775]: osdmap e100: 3 total, 3 up, 3 in
Jan 20 14:00:18 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:00:18 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:00:18 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:00:18.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:00:18 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e101 e101: 3 total, 3 up, 3 in
Jan 20 14:00:18 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 101 pg[9.12( v 51'1000 (0'0,51'1000] local-lis/les=100/101 n=5 ec=54/45 lis/c=98/54 les/c/f=99/55/0 sis=100) [1] r=0 lpr=100 pi=[54,100)/1 crt=51'1000 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:00:19 compute-1 python3.9[86654]: ansible-ansible.legacy.ping Invoked with data=pong
Jan 20 14:00:19 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e101 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:00:19 compute-1 ceph-mon[81775]: 7.10 scrub starts
Jan 20 14:00:19 compute-1 ceph-mon[81775]: 7.10 scrub ok
Jan 20 14:00:19 compute-1 ceph-mon[81775]: 5.4 scrub starts
Jan 20 14:00:19 compute-1 ceph-mon[81775]: 5.4 scrub ok
Jan 20 14:00:19 compute-1 ceph-mon[81775]: osdmap e101: 3 total, 3 up, 3 in
Jan 20 14:00:19 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:00:19 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:00:19 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:00:19.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:00:20 compute-1 python3.9[86828]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 14:00:20 compute-1 ceph-mon[81775]: 2.e scrub starts
Jan 20 14:00:20 compute-1 ceph-mon[81775]: 2.e scrub ok
Jan 20 14:00:20 compute-1 ceph-mon[81775]: pgmap v227: 321 pgs: 1 unknown, 320 active+clean; 456 KiB data, 122 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:00:20 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:00:20 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:00:20 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:00:20.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:00:21 compute-1 ceph-osd[79119]: log_channel(cluster) log [DBG] : 4.d scrub starts
Jan 20 14:00:21 compute-1 ceph-osd[79119]: log_channel(cluster) log [DBG] : 4.d scrub ok
Jan 20 14:00:21 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e102 e102: 3 total, 3 up, 3 in
Jan 20 14:00:21 compute-1 ceph-mon[81775]: 7.5 scrub starts
Jan 20 14:00:21 compute-1 ceph-mon[81775]: 7.5 scrub ok
Jan 20 14:00:21 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]: dispatch
Jan 20 14:00:21 compute-1 sudo[86982]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dzonquylqvtlptftuzxhbeoslpsaudpd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917621.158858-94-229967409042202/AnsiballZ_command.py'
Jan 20 14:00:21 compute-1 sudo[86982]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:00:21 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:00:21 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:00:21 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:00:21.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:00:21 compute-1 python3.9[86984]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 14:00:21 compute-1 sudo[86982]: pam_unix(sudo:session): session closed for user root
Jan 20 14:00:22 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:00:22 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:00:22 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:00:22.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:00:22 compute-1 ceph-mon[81775]: pgmap v228: 321 pgs: 321 active+clean; 456 KiB data, 126 MiB used, 21 GiB / 21 GiB avail; 8.1 KiB/s rd, 213 B/s wr, 14 op/s; 45 B/s, 1 objects/s recovering
Jan 20 14:00:22 compute-1 ceph-mon[81775]: 4.d scrub starts
Jan 20 14:00:22 compute-1 ceph-mon[81775]: 4.d scrub ok
Jan 20 14:00:22 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Jan 20 14:00:22 compute-1 ceph-mon[81775]: osdmap e102: 3 total, 3 up, 3 in
Jan 20 14:00:22 compute-1 ceph-mon[81775]: 4.1 deep-scrub starts
Jan 20 14:00:22 compute-1 ceph-mon[81775]: 4.1 deep-scrub ok
Jan 20 14:00:22 compute-1 sudo[87135]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fwvizcazpjrrqcykvtqonzaceagblyvg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917622.3672562-130-233906556776231/AnsiballZ_stat.py'
Jan 20 14:00:22 compute-1 sudo[87135]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:00:22 compute-1 python3.9[87137]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 20 14:00:22 compute-1 sudo[87135]: pam_unix(sudo:session): session closed for user root
Jan 20 14:00:23 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e103 e103: 3 total, 3 up, 3 in
Jan 20 14:00:23 compute-1 ceph-mon[81775]: 7.b scrub starts
Jan 20 14:00:23 compute-1 ceph-mon[81775]: 7.b scrub ok
Jan 20 14:00:23 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]: dispatch
Jan 20 14:00:23 compute-1 sudo[87289]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-koduqhrmfyiyzvldsimfyicfzjwwdzbc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917623.3092847-163-66541688905208/AnsiballZ_file.py'
Jan 20 14:00:23 compute-1 sudo[87289]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:00:23 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:00:23 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:00:23 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:00:23.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:00:23 compute-1 python3.9[87291]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:00:24 compute-1 sudo[87289]: pam_unix(sudo:session): session closed for user root
Jan 20 14:00:24 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e103 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:00:24 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:00:24 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:00:24 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:00:24.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:00:24 compute-1 sudo[87441]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jmjtmziomhocsgdvruvdxdrmzjfwzhkk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917624.3097475-190-196388653716283/AnsiballZ_file.py'
Jan 20 14:00:24 compute-1 sudo[87441]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:00:24 compute-1 ceph-mon[81775]: pgmap v230: 321 pgs: 321 active+clean; 456 KiB data, 126 MiB used, 21 GiB / 21 GiB avail; 6.8 KiB/s rd, 179 B/s wr, 12 op/s; 38 B/s, 1 objects/s recovering
Jan 20 14:00:24 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Jan 20 14:00:24 compute-1 ceph-mon[81775]: osdmap e103: 3 total, 3 up, 3 in
Jan 20 14:00:24 compute-1 python3.9[87443]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:00:24 compute-1 sudo[87441]: pam_unix(sudo:session): session closed for user root
Jan 20 14:00:25 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e104 e104: 3 total, 3 up, 3 in
Jan 20 14:00:25 compute-1 ceph-mon[81775]: 7.9 scrub starts
Jan 20 14:00:25 compute-1 ceph-mon[81775]: 7.9 scrub ok
Jan 20 14:00:25 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Jan 20 14:00:25 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 104 pg[9.15( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=68/68 les/c/f=69/69/0 sis=104) [1] r=0 lpr=104 pi=[68,104)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:00:25 compute-1 python3.9[87593]: ansible-ansible.builtin.service_facts Invoked
Jan 20 14:00:25 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:00:25 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:00:25 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:00:25.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:00:25 compute-1 network[87610]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 20 14:00:25 compute-1 network[87611]: 'network-scripts' will be removed from distribution in near future.
Jan 20 14:00:25 compute-1 network[87612]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 20 14:00:26 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:00:26 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:00:26 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:00:26.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:00:26 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e105 e105: 3 total, 3 up, 3 in
Jan 20 14:00:26 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 105 pg[9.15( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=68/68 les/c/f=69/69/0 sis=105) [1]/[2] r=-1 lpr=105 pi=[68,105)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 14:00:26 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 105 pg[9.15( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=68/68 les/c/f=69/69/0 sis=105) [1]/[2] r=-1 lpr=105 pi=[68,105)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 20 14:00:26 compute-1 ceph-mon[81775]: pgmap v232: 321 pgs: 321 active+clean; 456 KiB data, 126 MiB used, 21 GiB / 21 GiB avail; 6.2 KiB/s rd, 162 B/s wr, 11 op/s; 34 B/s, 1 objects/s recovering
Jan 20 14:00:26 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Jan 20 14:00:26 compute-1 ceph-mon[81775]: 3.9 scrub starts
Jan 20 14:00:26 compute-1 ceph-mon[81775]: osdmap e104: 3 total, 3 up, 3 in
Jan 20 14:00:26 compute-1 ceph-mon[81775]: 3.9 scrub ok
Jan 20 14:00:27 compute-1 ceph-osd[79119]: log_channel(cluster) log [DBG] : 6.5 scrub starts
Jan 20 14:00:27 compute-1 ceph-osd[79119]: log_channel(cluster) log [DBG] : 6.5 scrub ok
Jan 20 14:00:27 compute-1 ceph-mon[81775]: osdmap e105: 3 total, 3 up, 3 in
Jan 20 14:00:27 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]: dispatch
Jan 20 14:00:27 compute-1 ceph-mon[81775]: 6.5 scrub starts
Jan 20 14:00:27 compute-1 ceph-mon[81775]: 6.5 scrub ok
Jan 20 14:00:27 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e106 e106: 3 total, 3 up, 3 in
Jan 20 14:00:27 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:00:27 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:00:27 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:00:27.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:00:28 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 106 pg[9.16( v 51'1000 (0'0,51'1000] local-lis/les=70/71 n=5 ec=54/45 lis/c=70/70 les/c/f=71/71/0 sis=106 pruub=14.820875168s) [2] r=-1 lpr=106 pi=[70,106)/1 crt=51'1000 mlcod 0'0 active pruub 200.749862671s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 14:00:28 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 106 pg[9.16( v 51'1000 (0'0,51'1000] local-lis/les=70/71 n=5 ec=54/45 lis/c=70/70 les/c/f=71/71/0 sis=106 pruub=14.820657730s) [2] r=-1 lpr=106 pi=[70,106)/1 crt=51'1000 mlcod 0'0 unknown NOTIFY pruub 200.749862671s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:00:28 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:00:28 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:00:28 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:00:28.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:00:28 compute-1 ceph-mon[81775]: pgmap v235: 321 pgs: 321 active+clean; 456 KiB data, 126 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:00:28 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Jan 20 14:00:28 compute-1 ceph-mon[81775]: osdmap e106: 3 total, 3 up, 3 in
Jan 20 14:00:28 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e107 e107: 3 total, 3 up, 3 in
Jan 20 14:00:28 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 107 pg[9.16( v 51'1000 (0'0,51'1000] local-lis/les=70/71 n=5 ec=54/45 lis/c=70/70 les/c/f=71/71/0 sis=107) [2]/[1] r=0 lpr=107 pi=[70,107)/1 crt=51'1000 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 14:00:28 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 107 pg[9.15( v 51'1000 (0'0,51'1000] local-lis/les=0/0 n=5 ec=54/45 lis/c=105/68 les/c/f=106/69/0 sis=107) [1] r=0 lpr=107 pi=[68,107)/1 luod=0'0 crt=51'1000 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 14:00:28 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 107 pg[9.15( v 51'1000 (0'0,51'1000] local-lis/les=0/0 n=5 ec=54/45 lis/c=105/68 les/c/f=106/69/0 sis=107) [1] r=0 lpr=107 pi=[68,107)/1 crt=51'1000 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:00:28 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 107 pg[9.16( v 51'1000 (0'0,51'1000] local-lis/les=70/71 n=5 ec=54/45 lis/c=70/70 les/c/f=71/71/0 sis=107) [2]/[1] r=0 lpr=107 pi=[70,107)/1 crt=51'1000 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 20 14:00:29 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e107 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:00:29 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:00:29 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:00:29 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:00:29.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:00:29 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e108 e108: 3 total, 3 up, 3 in
Jan 20 14:00:29 compute-1 ceph-mon[81775]: 7.8 scrub starts
Jan 20 14:00:29 compute-1 ceph-mon[81775]: 7.8 scrub ok
Jan 20 14:00:29 compute-1 ceph-mon[81775]: osdmap e107: 3 total, 3 up, 3 in
Jan 20 14:00:29 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]: dispatch
Jan 20 14:00:29 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 108 pg[9.15( v 51'1000 (0'0,51'1000] local-lis/les=107/108 n=5 ec=54/45 lis/c=105/68 les/c/f=106/69/0 sis=107) [1] r=0 lpr=107 pi=[68,107)/1 crt=51'1000 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:00:29 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 108 pg[9.16( v 51'1000 (0'0,51'1000] local-lis/les=107/108 n=5 ec=54/45 lis/c=70/70 les/c/f=71/71/0 sis=107) [2]/[1] async=[2] r=0 lpr=107 pi=[70,107)/1 crt=51'1000 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:00:30 compute-1 ceph-osd[79119]: log_channel(cluster) log [DBG] : 6.e scrub starts
Jan 20 14:00:30 compute-1 python3.9[87872]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:00:30 compute-1 ceph-osd[79119]: log_channel(cluster) log [DBG] : 6.e scrub ok
Jan 20 14:00:30 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:00:30 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:00:30 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:00:30.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:00:30 compute-1 ceph-mon[81775]: pgmap v238: 321 pgs: 321 active+clean; 456 KiB data, 126 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:00:30 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Jan 20 14:00:30 compute-1 ceph-mon[81775]: osdmap e108: 3 total, 3 up, 3 in
Jan 20 14:00:30 compute-1 ceph-mon[81775]: 6.e scrub starts
Jan 20 14:00:30 compute-1 ceph-mon[81775]: 6.e scrub ok
Jan 20 14:00:30 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e109 e109: 3 total, 3 up, 3 in
Jan 20 14:00:30 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 109 pg[9.16( v 51'1000 (0'0,51'1000] local-lis/les=107/108 n=5 ec=54/45 lis/c=107/70 les/c/f=108/71/0 sis=109 pruub=14.965022087s) [2] async=[2] r=-1 lpr=109 pi=[70,109)/1 crt=51'1000 mlcod 51'1000 active pruub 203.624725342s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 14:00:30 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 109 pg[9.16( v 51'1000 (0'0,51'1000] local-lis/les=107/108 n=5 ec=54/45 lis/c=107/70 les/c/f=108/71/0 sis=109 pruub=14.964905739s) [2] r=-1 lpr=109 pi=[70,109)/1 crt=51'1000 mlcod 0'0 unknown NOTIFY pruub 203.624725342s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:00:31 compute-1 python3.9[88022]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 14:00:31 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:00:31 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:00:31 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:00:31.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:00:31 compute-1 ceph-mon[81775]: osdmap e109: 3 total, 3 up, 3 in
Jan 20 14:00:31 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]: dispatch
Jan 20 14:00:31 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e110 e110: 3 total, 3 up, 3 in
Jan 20 14:00:32 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:00:32 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:00:32 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:00:32.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:00:32 compute-1 python3.9[88176]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 14:00:32 compute-1 ceph-mon[81775]: pgmap v241: 321 pgs: 1 active+remapped, 320 active+clean; 456 KiB data, 126 MiB used, 21 GiB / 21 GiB avail; 54 B/s, 1 objects/s recovering
Jan 20 14:00:32 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Jan 20 14:00:32 compute-1 ceph-mon[81775]: osdmap e110: 3 total, 3 up, 3 in
Jan 20 14:00:33 compute-1 sudo[88332]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-opjjgepyazbsxwvgadroticafzyvjjgq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917633.3718016-334-227439823079764/AnsiballZ_setup.py'
Jan 20 14:00:33 compute-1 sudo[88332]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:00:33 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:00:33 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:00:33 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:00:33.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:00:34 compute-1 python3.9[88334]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 20 14:00:34 compute-1 sudo[88332]: pam_unix(sudo:session): session closed for user root
Jan 20 14:00:35 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:00:35 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:00:35 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:00:35.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:00:35 compute-1 ceph-mon[81775]: pgmap v243: 321 pgs: 1 active+remapped, 320 active+clean; 456 KiB data, 126 MiB used, 21 GiB / 21 GiB avail; 50 B/s, 1 objects/s recovering
Jan 20 14:00:35 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]: dispatch
Jan 20 14:00:35 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e110 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:00:35 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e111 e111: 3 total, 3 up, 3 in
Jan 20 14:00:35 compute-1 ceph-osd[79119]: log_channel(cluster) log [DBG] : 3.a scrub starts
Jan 20 14:00:35 compute-1 ceph-osd[79119]: log_channel(cluster) log [DBG] : 3.a scrub ok
Jan 20 14:00:35 compute-1 ceph-mon[81775]: 7.e deep-scrub starts
Jan 20 14:00:35 compute-1 ceph-mon[81775]: 7.e deep-scrub ok
Jan 20 14:00:35 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Jan 20 14:00:35 compute-1 ceph-mon[81775]: osdmap e111: 3 total, 3 up, 3 in
Jan 20 14:00:35 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]: dispatch
Jan 20 14:00:35 compute-1 sudo[88417]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fkdqducljkpkhwedwydsngircuitnust ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917633.3718016-334-227439823079764/AnsiballZ_dnf.py'
Jan 20 14:00:35 compute-1 sudo[88417]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:00:35 compute-1 python3.9[88419]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 20 14:00:35 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:00:35 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:00:35 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:00:35.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:00:36 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e112 e112: 3 total, 3 up, 3 in
Jan 20 14:00:36 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 112 pg[9.1a( v 51'1000 (0'0,51'1000] local-lis/les=80/81 n=5 ec=54/45 lis/c=80/80 les/c/f=81/81/0 sis=112 pruub=10.714376450s) [0] r=-1 lpr=112 pi=[80,112)/1 crt=51'1000 mlcod 0'0 active pruub 204.693359375s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 14:00:36 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 112 pg[9.1a( v 51'1000 (0'0,51'1000] local-lis/les=80/81 n=5 ec=54/45 lis/c=80/80 les/c/f=81/81/0 sis=112 pruub=10.714286804s) [0] r=-1 lpr=112 pi=[80,112)/1 crt=51'1000 mlcod 0'0 unknown NOTIFY pruub 204.693359375s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:00:36 compute-1 ceph-mon[81775]: pgmap v245: 321 pgs: 321 active+clean; 456 KiB data, 126 MiB used, 21 GiB / 21 GiB avail; 41 B/s, 1 objects/s recovering
Jan 20 14:00:36 compute-1 ceph-mon[81775]: 3.a scrub starts
Jan 20 14:00:36 compute-1 ceph-mon[81775]: 3.a scrub ok
Jan 20 14:00:36 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Jan 20 14:00:36 compute-1 ceph-mon[81775]: osdmap e112: 3 total, 3 up, 3 in
Jan 20 14:00:37 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:00:37 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:00:37 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:00:37.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:00:37 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e113 e113: 3 total, 3 up, 3 in
Jan 20 14:00:37 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 113 pg[9.1a( v 51'1000 (0'0,51'1000] local-lis/les=80/81 n=5 ec=54/45 lis/c=80/80 les/c/f=81/81/0 sis=113) [0]/[1] r=0 lpr=113 pi=[80,113)/1 crt=51'1000 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 14:00:37 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 113 pg[9.1a( v 51'1000 (0'0,51'1000] local-lis/les=80/81 n=5 ec=54/45 lis/c=80/80 les/c/f=81/81/0 sis=113) [0]/[1] r=0 lpr=113 pi=[80,113)/1 crt=51'1000 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 20 14:00:37 compute-1 ceph-mon[81775]: 3.1a scrub starts
Jan 20 14:00:37 compute-1 ceph-mon[81775]: 3.1a scrub ok
Jan 20 14:00:37 compute-1 ceph-mon[81775]: 7.f scrub starts
Jan 20 14:00:37 compute-1 ceph-mon[81775]: 7.f scrub ok
Jan 20 14:00:37 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Jan 20 14:00:37 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Jan 20 14:00:37 compute-1 ceph-mon[81775]: osdmap e113: 3 total, 3 up, 3 in
Jan 20 14:00:37 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:00:37 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:00:37 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:00:37.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:00:37 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e114 e114: 3 total, 3 up, 3 in
Jan 20 14:00:37 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 114 pg[9.1a( v 51'1000 (0'0,51'1000] local-lis/les=113/114 n=5 ec=54/45 lis/c=80/80 les/c/f=81/81/0 sis=113) [0]/[1] async=[0] r=0 lpr=113 pi=[80,113)/1 crt=51'1000 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:00:38 compute-1 ceph-mon[81775]: pgmap v247: 321 pgs: 321 active+clean; 456 KiB data, 126 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:00:38 compute-1 ceph-mon[81775]: osdmap e114: 3 total, 3 up, 3 in
Jan 20 14:00:38 compute-1 sudo[88483]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:00:38 compute-1 sudo[88483]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:00:38 compute-1 sudo[88483]: pam_unix(sudo:session): session closed for user root
Jan 20 14:00:38 compute-1 sudo[88508]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:00:38 compute-1 sudo[88508]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:00:38 compute-1 sudo[88508]: pam_unix(sudo:session): session closed for user root
Jan 20 14:00:38 compute-1 sudo[88533]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:00:38 compute-1 sudo[88533]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:00:38 compute-1 sudo[88533]: pam_unix(sudo:session): session closed for user root
Jan 20 14:00:38 compute-1 sudo[88558]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Jan 20 14:00:38 compute-1 sudo[88558]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:00:38 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e115 e115: 3 total, 3 up, 3 in
Jan 20 14:00:38 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 115 pg[9.1a( v 51'1000 (0'0,51'1000] local-lis/les=113/114 n=5 ec=54/45 lis/c=113/80 les/c/f=114/81/0 sis=115 pruub=14.986879349s) [0] async=[0] r=-1 lpr=115 pi=[80,115)/1 crt=51'1000 mlcod 51'1000 active pruub 211.692581177s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 14:00:38 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 115 pg[9.1a( v 51'1000 (0'0,51'1000] local-lis/les=113/114 n=5 ec=54/45 lis/c=113/80 les/c/f=114/81/0 sis=115 pruub=14.986777306s) [0] r=-1 lpr=115 pi=[80,115)/1 crt=51'1000 mlcod 0'0 unknown NOTIFY pruub 211.692581177s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:00:39 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:00:39 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:00:39 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:00:39.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:00:39 compute-1 ceph-mon[81775]: 7.2 scrub starts
Jan 20 14:00:39 compute-1 ceph-mon[81775]: 7.2 scrub ok
Jan 20 14:00:39 compute-1 ceph-mon[81775]: osdmap e115: 3 total, 3 up, 3 in
Jan 20 14:00:39 compute-1 podman[88653]: 2026-01-20 14:00:39.487379941 +0000 UTC m=+0.099471148 container exec 718ebba7a543e42aad7051248d2c7dc014068c35c89c5b87f27b82d4de39c009 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-crash-compute-1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 20 14:00:39 compute-1 podman[88653]: 2026-01-20 14:00:39.587460664 +0000 UTC m=+0.199551861 container exec_died 718ebba7a543e42aad7051248d2c7dc014068c35c89c5b87f27b82d4de39c009 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-crash-compute-1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 20 14:00:39 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:00:39 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:00:39 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:00:39.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:00:39 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e116 e116: 3 total, 3 up, 3 in
Jan 20 14:00:40 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:00:40 compute-1 sudo[88558]: pam_unix(sudo:session): session closed for user root
Jan 20 14:00:40 compute-1 ceph-mon[81775]: pgmap v251: 321 pgs: 1 peering, 320 active+clean; 456 KiB data, 126 MiB used, 21 GiB / 21 GiB avail; 27 B/s, 1 objects/s recovering
Jan 20 14:00:40 compute-1 ceph-mon[81775]: osdmap e116: 3 total, 3 up, 3 in
Jan 20 14:00:40 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:00:40 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:00:40 compute-1 sudo[88777]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:00:40 compute-1 sudo[88777]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:00:40 compute-1 sudo[88777]: pam_unix(sudo:session): session closed for user root
Jan 20 14:00:40 compute-1 sudo[88803]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:00:40 compute-1 sudo[88803]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:00:40 compute-1 sudo[88803]: pam_unix(sudo:session): session closed for user root
Jan 20 14:00:40 compute-1 sudo[88828]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:00:40 compute-1 sudo[88828]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:00:40 compute-1 sudo[88828]: pam_unix(sudo:session): session closed for user root
Jan 20 14:00:40 compute-1 sudo[88853]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 20 14:00:40 compute-1 sudo[88853]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:00:40 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e117 e117: 3 total, 3 up, 3 in
Jan 20 14:00:41 compute-1 sudo[88853]: pam_unix(sudo:session): session closed for user root
Jan 20 14:00:41 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:00:41 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:00:41 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:00:41.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:00:41 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:00:41 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:00:41 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:00:41 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:00:41 compute-1 ceph-mon[81775]: osdmap e117: 3 total, 3 up, 3 in
Jan 20 14:00:41 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:00:41 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 14:00:41 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:00:41 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 14:00:41 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 14:00:41 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:00:41 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:00:41 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:00:41 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:00:41.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:00:42 compute-1 ceph-mon[81775]: pgmap v254: 321 pgs: 1 active+remapped, 1 peering, 319 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 55 B/s, 2 objects/s recovering
Jan 20 14:00:43 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:00:43 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:00:43 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:00:43.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:00:43 compute-1 ceph-osd[79119]: log_channel(cluster) log [DBG] : 6.8 scrub starts
Jan 20 14:00:43 compute-1 ceph-osd[79119]: log_channel(cluster) log [DBG] : 6.8 scrub ok
Jan 20 14:00:43 compute-1 ceph-mon[81775]: 3.1d deep-scrub starts
Jan 20 14:00:43 compute-1 ceph-mon[81775]: 3.1d deep-scrub ok
Jan 20 14:00:43 compute-1 ceph-mon[81775]: 2.6 scrub starts
Jan 20 14:00:43 compute-1 ceph-mon[81775]: 2.6 scrub ok
Jan 20 14:00:43 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:00:43 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:00:43 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:00:43.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:00:44 compute-1 ceph-mon[81775]: 7.4 scrub starts
Jan 20 14:00:44 compute-1 ceph-mon[81775]: 7.4 scrub ok
Jan 20 14:00:44 compute-1 ceph-mon[81775]: pgmap v255: 321 pgs: 1 active+remapped, 1 peering, 319 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 41 B/s, 1 objects/s recovering
Jan 20 14:00:44 compute-1 ceph-mon[81775]: 6.8 scrub starts
Jan 20 14:00:44 compute-1 ceph-mon[81775]: 6.8 scrub ok
Jan 20 14:00:45 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:00:45 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:00:45 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 20 14:00:45 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:00:45.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 20 14:00:45 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e118 e118: 3 total, 3 up, 3 in
Jan 20 14:00:45 compute-1 ceph-mon[81775]: 7.6 deep-scrub starts
Jan 20 14:00:45 compute-1 ceph-mon[81775]: 7.6 deep-scrub ok
Jan 20 14:00:45 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]: dispatch
Jan 20 14:00:45 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:00:45 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:00:45 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:00:45.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:00:46 compute-1 ceph-mon[81775]: 2.9 scrub starts
Jan 20 14:00:46 compute-1 ceph-mon[81775]: 2.9 scrub ok
Jan 20 14:00:46 compute-1 ceph-mon[81775]: pgmap v256: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 35 B/s, 0 objects/s recovering
Jan 20 14:00:46 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Jan 20 14:00:46 compute-1 ceph-mon[81775]: osdmap e118: 3 total, 3 up, 3 in
Jan 20 14:00:47 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:00:47 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:00:47 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:00:47.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:00:47 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e119 e119: 3 total, 3 up, 3 in
Jan 20 14:00:47 compute-1 ceph-mon[81775]: 7.18 scrub starts
Jan 20 14:00:47 compute-1 ceph-mon[81775]: 7.18 scrub ok
Jan 20 14:00:47 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Jan 20 14:00:47 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:00:47 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:00:47 compute-1 ceph-mon[81775]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #13. Immutable memtables: 0.
Jan 20 14:00:47 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:00:47.665381) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 20 14:00:47 compute-1 ceph-mon[81775]: rocksdb: [db/flush_job.cc:856] [default] [JOB 3] Flushing memtable with next log file: 13
Jan 20 14:00:47 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768917647665506, "job": 3, "event": "flush_started", "num_memtables": 1, "num_entries": 6990, "num_deletes": 255, "total_data_size": 12890158, "memory_usage": 13091344, "flush_reason": "Manual Compaction"}
Jan 20 14:00:47 compute-1 ceph-mon[81775]: rocksdb: [db/flush_job.cc:885] [default] [JOB 3] Level-0 flush table #14: started
Jan 20 14:00:47 compute-1 sudo[88913]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:00:47 compute-1 sudo[88913]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:00:47 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768917647773791, "cf_name": "default", "job": 3, "event": "table_file_creation", "file_number": 14, "file_size": 7725961, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 254, "largest_seqno": 6995, "table_properties": {"data_size": 7698442, "index_size": 17996, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8453, "raw_key_size": 79151, "raw_average_key_size": 23, "raw_value_size": 7632919, "raw_average_value_size": 2268, "num_data_blocks": 797, "num_entries": 3365, "num_filter_entries": 3365, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768917474, "oldest_key_time": 1768917474, "file_creation_time": 1768917647, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1539d774-8a6f-4e48-b253-137c44586344", "db_session_id": "LFF7G2OZDOU7TKQ8MKAH", "orig_file_number": 14, "seqno_to_time_mapping": "N/A"}}
Jan 20 14:00:47 compute-1 ceph-mon[81775]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 3] Flush lasted 108513 microseconds, and 27677 cpu microseconds.
Jan 20 14:00:47 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:00:47.773896) [db/flush_job.cc:967] [default] [JOB 3] Level-0 flush table #14: 7725961 bytes OK
Jan 20 14:00:47 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:00:47.773923) [db/memtable_list.cc:519] [default] Level-0 commit table #14 started
Jan 20 14:00:47 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:00:47.775584) [db/memtable_list.cc:722] [default] Level-0 commit table #14: memtable #1 done
Jan 20 14:00:47 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:00:47.775607) EVENT_LOG_v1 {"time_micros": 1768917647775600, "job": 3, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [2, 0, 0, 0, 0, 0, 0], "immutable_memtables": 0}
Jan 20 14:00:47 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:00:47.775627) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: files[2 0 0 0 0 0 0] max score 0.50
Jan 20 14:00:47 compute-1 ceph-mon[81775]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 3] Try to delete WAL files size 12853057, prev total WAL file size 12853057, number of live WAL files 2.
Jan 20 14:00:47 compute-1 sudo[88913]: pam_unix(sudo:session): session closed for user root
Jan 20 14:00:47 compute-1 ceph-mon[81775]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000009.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 14:00:47 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:00:47.779645) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730030' seq:72057594037927935, type:22 .. '7061786F7300323532' seq:0, type:0; will stop at (end)
Jan 20 14:00:47 compute-1 ceph-mon[81775]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 4] Compacting 2@0 files to L6, score -1.00
Jan 20 14:00:47 compute-1 ceph-mon[81775]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 3 Base level 0, inputs: [14(7544KB) 8(1648B)]
Jan 20 14:00:47 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768917647779762, "job": 4, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [14, 8], "score": -1, "input_data_size": 7727609, "oldest_snapshot_seqno": -1}
Jan 20 14:00:47 compute-1 sudo[88938]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 14:00:47 compute-1 sudo[88938]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:00:47 compute-1 sudo[88938]: pam_unix(sudo:session): session closed for user root
Jan 20 14:00:47 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:00:47 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:00:47 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:00:47.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:00:47 compute-1 ceph-mon[81775]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 4] Generated table #15: 3114 keys, 7722426 bytes, temperature: kUnknown
Jan 20 14:00:47 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768917647894712, "cf_name": "default", "job": 4, "event": "table_file_creation", "file_number": 15, "file_size": 7722426, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7695596, "index_size": 17952, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 7813, "raw_key_size": 74973, "raw_average_key_size": 24, "raw_value_size": 7633199, "raw_average_value_size": 2451, "num_data_blocks": 796, "num_entries": 3114, "num_filter_entries": 3114, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768917474, "oldest_key_time": 0, "file_creation_time": 1768917647, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1539d774-8a6f-4e48-b253-137c44586344", "db_session_id": "LFF7G2OZDOU7TKQ8MKAH", "orig_file_number": 15, "seqno_to_time_mapping": "N/A"}}
Jan 20 14:00:47 compute-1 ceph-mon[81775]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 14:00:47 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:00:47.894939) [db/compaction/compaction_job.cc:1663] [default] [JOB 4] Compacted 2@0 files to L6 => 7722426 bytes
Jan 20 14:00:47 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:00:47.897770) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 67.2 rd, 67.1 wr, level 6, files in(2, 0) out(1 +0 blob) MB in(7.4, 0.0 +0.0 blob) out(7.4 +0.0 blob), read-write-amplify(2.0) write-amplify(1.0) OK, records in: 3370, records dropped: 256 output_compression: NoCompression
Jan 20 14:00:47 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:00:47.897791) EVENT_LOG_v1 {"time_micros": 1768917647897782, "job": 4, "event": "compaction_finished", "compaction_time_micros": 115006, "compaction_time_cpu_micros": 32413, "output_level": 6, "num_output_files": 1, "total_output_size": 7722426, "num_input_records": 3370, "num_output_records": 3114, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 20 14:00:47 compute-1 ceph-mon[81775]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000014.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 14:00:47 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768917647899112, "job": 4, "event": "table_file_deletion", "file_number": 14}
Jan 20 14:00:47 compute-1 ceph-mon[81775]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000008.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 14:00:47 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768917647899155, "job": 4, "event": "table_file_deletion", "file_number": 8}
Jan 20 14:00:47 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:00:47.779510) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:00:47 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 119 pg[9.1d( v 51'1000 (0'0,51'1000] local-lis/les=87/88 n=5 ec=54/45 lis/c=87/87 les/c/f=88/88/0 sis=119 pruub=14.054931641s) [2] r=-1 lpr=119 pi=[87,119)/1 crt=51'1000 mlcod 0'0 active pruub 219.684005737s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 14:00:47 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 119 pg[9.1d( v 51'1000 (0'0,51'1000] local-lis/les=87/88 n=5 ec=54/45 lis/c=87/87 les/c/f=88/88/0 sis=119 pruub=14.054023743s) [2] r=-1 lpr=119 pi=[87,119)/1 crt=51'1000 mlcod 0'0 unknown NOTIFY pruub 219.684005737s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:00:47 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e120 e120: 3 total, 3 up, 3 in
Jan 20 14:00:47 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 120 pg[9.1d( v 51'1000 (0'0,51'1000] local-lis/les=87/88 n=5 ec=54/45 lis/c=87/87 les/c/f=88/88/0 sis=120) [2]/[1] r=0 lpr=120 pi=[87,120)/1 crt=51'1000 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 14:00:47 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 120 pg[9.1d( v 51'1000 (0'0,51'1000] local-lis/les=87/88 n=5 ec=54/45 lis/c=87/87 les/c/f=88/88/0 sis=120) [2]/[1] r=0 lpr=120 pi=[87,120)/1 crt=51'1000 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 20 14:00:48 compute-1 ceph-osd[79119]: log_channel(cluster) log [DBG] : 3.f deep-scrub starts
Jan 20 14:00:48 compute-1 ceph-osd[79119]: log_channel(cluster) log [DBG] : 3.f deep-scrub ok
Jan 20 14:00:48 compute-1 ceph-mon[81775]: pgmap v258: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 15 B/s, 0 objects/s recovering
Jan 20 14:00:48 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Jan 20 14:00:48 compute-1 ceph-mon[81775]: osdmap e119: 3 total, 3 up, 3 in
Jan 20 14:00:48 compute-1 ceph-mon[81775]: osdmap e120: 3 total, 3 up, 3 in
Jan 20 14:00:48 compute-1 ceph-mon[81775]: 3.f deep-scrub starts
Jan 20 14:00:49 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e121 e121: 3 total, 3 up, 3 in
Jan 20 14:00:49 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 121 pg[9.1d( v 51'1000 (0'0,51'1000] local-lis/les=120/121 n=5 ec=54/45 lis/c=87/87 les/c/f=88/88/0 sis=120) [2]/[1] async=[2] r=0 lpr=120 pi=[87,120)/1 crt=51'1000 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:00:49 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:00:49 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:00:49 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:00:49.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:00:49 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:00:49 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:00:49 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:00:49.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:00:50 compute-1 ceph-mon[81775]: 3.f deep-scrub ok
Jan 20 14:00:50 compute-1 ceph-mon[81775]: osdmap e121: 3 total, 3 up, 3 in
Jan 20 14:00:50 compute-1 ceph-mon[81775]: pgmap v262: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:00:50 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]: dispatch
Jan 20 14:00:50 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e122 e122: 3 total, 3 up, 3 in
Jan 20 14:00:50 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 122 pg[9.1d( v 51'1000 (0'0,51'1000] local-lis/les=120/121 n=5 ec=54/45 lis/c=120/87 les/c/f=121/88/0 sis=122 pruub=14.988829613s) [2] async=[2] r=-1 lpr=122 pi=[87,122)/1 crt=51'1000 mlcod 51'1000 active pruub 222.735855103s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 14:00:50 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 122 pg[9.1e( v 51'1000 (0'0,51'1000] local-lis/les=70/71 n=5 ec=54/45 lis/c=70/70 les/c/f=71/71/0 sis=122 pruub=8.999602318s) [0] r=-1 lpr=122 pi=[70,122)/1 crt=51'1000 mlcod 0'0 active pruub 216.746673584s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 14:00:50 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 122 pg[9.1e( v 51'1000 (0'0,51'1000] local-lis/les=70/71 n=5 ec=54/45 lis/c=70/70 les/c/f=71/71/0 sis=122 pruub=8.999547005s) [0] r=-1 lpr=122 pi=[70,122)/1 crt=51'1000 mlcod 0'0 unknown NOTIFY pruub 216.746673584s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:00:50 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 122 pg[9.1d( v 51'1000 (0'0,51'1000] local-lis/les=120/121 n=5 ec=54/45 lis/c=120/87 les/c/f=121/88/0 sis=122 pruub=14.988019943s) [2] r=-1 lpr=122 pi=[87,122)/1 crt=51'1000 mlcod 0'0 unknown NOTIFY pruub 222.735855103s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:00:50 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:00:50 compute-1 ceph-osd[79119]: log_channel(cluster) log [DBG] : 4.a scrub starts
Jan 20 14:00:50 compute-1 ceph-osd[79119]: log_channel(cluster) log [DBG] : 4.a scrub ok
Jan 20 14:00:51 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e123 e123: 3 total, 3 up, 3 in
Jan 20 14:00:51 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 123 pg[9.1e( v 51'1000 (0'0,51'1000] local-lis/les=70/71 n=5 ec=54/45 lis/c=70/70 les/c/f=71/71/0 sis=123) [0]/[1] r=0 lpr=123 pi=[70,123)/1 crt=51'1000 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 14:00:51 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 123 pg[9.1e( v 51'1000 (0'0,51'1000] local-lis/les=70/71 n=5 ec=54/45 lis/c=70/70 les/c/f=71/71/0 sis=123) [0]/[1] r=0 lpr=123 pi=[70,123)/1 crt=51'1000 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 20 14:00:51 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Jan 20 14:00:51 compute-1 ceph-mon[81775]: osdmap e122: 3 total, 3 up, 3 in
Jan 20 14:00:51 compute-1 ceph-mon[81775]: 7.1e scrub starts
Jan 20 14:00:51 compute-1 ceph-mon[81775]: 7.1e scrub ok
Jan 20 14:00:51 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:00:51 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:00:51 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:00:51.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:00:51 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:00:51 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:00:51 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:00:51.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:00:52 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e124 e124: 3 total, 3 up, 3 in
Jan 20 14:00:52 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 124 pg[9.1f( v 51'1000 (0'0,51'1000] local-lis/les=91/92 n=5 ec=54/45 lis/c=91/91 les/c/f=92/92/0 sis=124 pruub=14.281917572s) [0] r=-1 lpr=124 pi=[91,124)/1 crt=51'1000 mlcod 0'0 active pruub 224.046585083s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 14:00:52 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 124 pg[9.1f( v 51'1000 (0'0,51'1000] local-lis/les=91/92 n=5 ec=54/45 lis/c=91/91 les/c/f=92/92/0 sis=124 pruub=14.281862259s) [0] r=-1 lpr=124 pi=[91,124)/1 crt=51'1000 mlcod 0'0 unknown NOTIFY pruub 224.046585083s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:00:52 compute-1 ceph-mon[81775]: 4.a scrub starts
Jan 20 14:00:52 compute-1 ceph-mon[81775]: 4.a scrub ok
Jan 20 14:00:52 compute-1 ceph-mon[81775]: osdmap e123: 3 total, 3 up, 3 in
Jan 20 14:00:52 compute-1 ceph-mon[81775]: pgmap v265: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:00:52 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 20 14:00:52 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 20 14:00:52 compute-1 ceph-mon[81775]: osdmap e124: 3 total, 3 up, 3 in
Jan 20 14:00:52 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 124 pg[9.1e( v 51'1000 (0'0,51'1000] local-lis/les=123/124 n=5 ec=54/45 lis/c=70/70 les/c/f=71/71/0 sis=123) [0]/[1] async=[0] r=0 lpr=123 pi=[70,123)/1 crt=51'1000 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:00:52 compute-1 ceph-osd[79119]: log_channel(cluster) log [DBG] : 4.c scrub starts
Jan 20 14:00:52 compute-1 ceph-osd[79119]: log_channel(cluster) log [DBG] : 4.c scrub ok
Jan 20 14:00:52 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e125 e125: 3 total, 3 up, 3 in
Jan 20 14:00:52 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 125 pg[9.1f( v 51'1000 (0'0,51'1000] local-lis/les=91/92 n=5 ec=54/45 lis/c=91/91 les/c/f=92/92/0 sis=125) [0]/[1] r=0 lpr=125 pi=[91,125)/1 crt=51'1000 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 14:00:52 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 125 pg[9.1f( v 51'1000 (0'0,51'1000] local-lis/les=91/92 n=5 ec=54/45 lis/c=91/91 les/c/f=92/92/0 sis=125) [0]/[1] r=0 lpr=125 pi=[91,125)/1 crt=51'1000 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 20 14:00:52 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 125 pg[9.1e( v 51'1000 (0'0,51'1000] local-lis/les=123/124 n=5 ec=54/45 lis/c=123/70 les/c/f=124/71/0 sis=125 pruub=15.074364662s) [0] async=[0] r=-1 lpr=125 pi=[70,125)/1 crt=51'1000 mlcod 51'1000 active pruub 225.775054932s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 14:00:52 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 125 pg[9.1e( v 51'1000 (0'0,51'1000] local-lis/les=123/124 n=5 ec=54/45 lis/c=123/70 les/c/f=124/71/0 sis=125 pruub=15.074002266s) [0] r=-1 lpr=125 pi=[70,125)/1 crt=51'1000 mlcod 0'0 unknown NOTIFY pruub 225.775054932s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:00:53 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:00:53 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:00:53 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:00:53.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:00:53 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:00:53 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:00:53 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:00:53.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:00:54 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e126 e126: 3 total, 3 up, 3 in
Jan 20 14:00:54 compute-1 ceph-mon[81775]: 4.c scrub starts
Jan 20 14:00:54 compute-1 ceph-mon[81775]: 4.c scrub ok
Jan 20 14:00:54 compute-1 ceph-mon[81775]: osdmap e125: 3 total, 3 up, 3 in
Jan 20 14:00:54 compute-1 ceph-mon[81775]: 5.e scrub starts
Jan 20 14:00:54 compute-1 ceph-mon[81775]: 5.e scrub ok
Jan 20 14:00:54 compute-1 ceph-mon[81775]: pgmap v268: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:00:54 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 126 pg[9.1f( v 51'1000 (0'0,51'1000] local-lis/les=125/126 n=5 ec=54/45 lis/c=91/91 les/c/f=92/92/0 sis=125) [0]/[1] async=[0] r=0 lpr=125 pi=[91,125)/1 crt=51'1000 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:00:55 compute-1 ceph-mon[81775]: osdmap e126: 3 total, 3 up, 3 in
Jan 20 14:00:55 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e127 e127: 3 total, 3 up, 3 in
Jan 20 14:00:55 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 127 pg[9.1f( v 51'1000 (0'0,51'1000] local-lis/les=125/126 n=5 ec=54/45 lis/c=125/91 les/c/f=126/92/0 sis=127 pruub=14.990097046s) [0] async=[0] r=-1 lpr=127 pi=[91,127)/1 crt=51'1000 mlcod 51'1000 active pruub 227.743103027s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 14:00:55 compute-1 ceph-osd[79119]: osd.1 pg_epoch: 127 pg[9.1f( v 51'1000 (0'0,51'1000] local-lis/les=125/126 n=5 ec=54/45 lis/c=125/91 les/c/f=126/92/0 sis=127 pruub=14.989865303s) [0] r=-1 lpr=127 pi=[91,127)/1 crt=51'1000 mlcod 0'0 unknown NOTIFY pruub 227.743103027s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:00:55 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:00:55 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:00:55 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:00:55 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:00:55.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:00:55 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:00:55 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:00:55 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:00:55.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:00:56 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 e128: 3 total, 3 up, 3 in
Jan 20 14:00:56 compute-1 ceph-mon[81775]: osdmap e127: 3 total, 3 up, 3 in
Jan 20 14:00:56 compute-1 ceph-mon[81775]: pgmap v271: 321 pgs: 1 peering, 320 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 54 B/s, 2 objects/s recovering
Jan 20 14:00:56 compute-1 ceph-osd[79119]: log_channel(cluster) log [DBG] : 3.c scrub starts
Jan 20 14:00:56 compute-1 ceph-osd[79119]: log_channel(cluster) log [DBG] : 3.c scrub ok
Jan 20 14:00:57 compute-1 ceph-mon[81775]: osdmap e128: 3 total, 3 up, 3 in
Jan 20 14:00:57 compute-1 ceph-mon[81775]: 2.4 scrub starts
Jan 20 14:00:57 compute-1 ceph-mon[81775]: 2.4 scrub ok
Jan 20 14:00:57 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:00:57 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:00:57 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:00:57.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:00:57 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:00:57 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 20 14:00:57 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:00:57.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 20 14:00:58 compute-1 ceph-mon[81775]: 3.c scrub starts
Jan 20 14:00:58 compute-1 ceph-mon[81775]: 3.c scrub ok
Jan 20 14:00:58 compute-1 ceph-mon[81775]: pgmap v273: 321 pgs: 1 peering, 320 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 51 B/s, 2 objects/s recovering
Jan 20 14:00:59 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:00:59 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:00:59 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:00:59.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:00:59 compute-1 ceph-osd[79119]: log_channel(cluster) log [DBG] : 3.10 scrub starts
Jan 20 14:00:59 compute-1 ceph-osd[79119]: log_channel(cluster) log [DBG] : 3.10 scrub ok
Jan 20 14:00:59 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:00:59 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:00:59 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:00:59.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:01:00 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:01:00 compute-1 ceph-mon[81775]: 2.1c scrub starts
Jan 20 14:01:00 compute-1 ceph-mon[81775]: 2.1c scrub ok
Jan 20 14:01:00 compute-1 ceph-mon[81775]: pgmap v274: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 54 B/s, 2 objects/s recovering
Jan 20 14:01:00 compute-1 ceph-mon[81775]: 2.1e scrub starts
Jan 20 14:01:00 compute-1 ceph-mon[81775]: 2.1e scrub ok
Jan 20 14:01:00 compute-1 ceph-osd[79119]: log_channel(cluster) log [DBG] : 6.a scrub starts
Jan 20 14:01:00 compute-1 ceph-osd[79119]: log_channel(cluster) log [DBG] : 6.a scrub ok
Jan 20 14:01:01 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:01:01 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:01:01 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:01:01.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:01:01 compute-1 ceph-mon[81775]: 3.10 scrub starts
Jan 20 14:01:01 compute-1 ceph-mon[81775]: 3.10 scrub ok
Jan 20 14:01:01 compute-1 ceph-mon[81775]: 5.1a scrub starts
Jan 20 14:01:01 compute-1 ceph-mon[81775]: 5.1a scrub ok
Jan 20 14:01:01 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:01:01 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:01:01 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:01:01.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:01:01 compute-1 CROND[89035]: (root) CMD (run-parts /etc/cron.hourly)
Jan 20 14:01:01 compute-1 run-parts[89038]: (/etc/cron.hourly) starting 0anacron
Jan 20 14:01:01 compute-1 anacron[89046]: Anacron started on 2026-01-20
Jan 20 14:01:01 compute-1 anacron[89046]: Will run job `cron.daily' in 38 min.
Jan 20 14:01:01 compute-1 anacron[89046]: Will run job `cron.weekly' in 58 min.
Jan 20 14:01:01 compute-1 anacron[89046]: Will run job `cron.monthly' in 78 min.
Jan 20 14:01:01 compute-1 anacron[89046]: Jobs will be executed sequentially
Jan 20 14:01:01 compute-1 run-parts[89048]: (/etc/cron.hourly) finished 0anacron
Jan 20 14:01:02 compute-1 CROND[89034]: (root) CMDEND (run-parts /etc/cron.hourly)
Jan 20 14:01:02 compute-1 ceph-mon[81775]: 6.a scrub starts
Jan 20 14:01:02 compute-1 ceph-mon[81775]: 6.a scrub ok
Jan 20 14:01:02 compute-1 ceph-mon[81775]: 2.1d scrub starts
Jan 20 14:01:02 compute-1 ceph-mon[81775]: 2.1d scrub ok
Jan 20 14:01:02 compute-1 ceph-mon[81775]: pgmap v275: 321 pgs: 321 active+clean; 456 KiB data, 145 MiB used, 21 GiB / 21 GiB avail; 45 B/s, 1 objects/s recovering
Jan 20 14:01:03 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:01:03 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:01:03 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:01:03.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:01:03 compute-1 ceph-mon[81775]: 7.1b scrub starts
Jan 20 14:01:03 compute-1 ceph-mon[81775]: 7.1b scrub ok
Jan 20 14:01:03 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:01:03 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:01:03 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:01:03.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:01:04 compute-1 ceph-mon[81775]: 7.3 scrub starts
Jan 20 14:01:04 compute-1 ceph-mon[81775]: 7.3 scrub ok
Jan 20 14:01:04 compute-1 ceph-mon[81775]: pgmap v276: 321 pgs: 321 active+clean; 456 KiB data, 145 MiB used, 21 GiB / 21 GiB avail; 13 B/s, 0 objects/s recovering
Jan 20 14:01:05 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:01:05 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:01:05 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:01:05 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:01:05.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:01:05 compute-1 ceph-mon[81775]: 2.b scrub starts
Jan 20 14:01:05 compute-1 ceph-mon[81775]: 2.b scrub ok
Jan 20 14:01:05 compute-1 ceph-mon[81775]: 2.1f scrub starts
Jan 20 14:01:05 compute-1 ceph-mon[81775]: 2.1f scrub ok
Jan 20 14:01:05 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:01:05 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:01:05 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:01:05.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:01:06 compute-1 ceph-osd[79119]: log_channel(cluster) log [DBG] : 6.15 scrub starts
Jan 20 14:01:06 compute-1 ceph-osd[79119]: log_channel(cluster) log [DBG] : 6.15 scrub ok
Jan 20 14:01:06 compute-1 ceph-mon[81775]: pgmap v277: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 10 B/s, 0 objects/s recovering
Jan 20 14:01:06 compute-1 ceph-mon[81775]: 6.15 scrub starts
Jan 20 14:01:07 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:01:07 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:01:07 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:01:07.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:01:07 compute-1 ceph-osd[79119]: log_channel(cluster) log [DBG] : 5.9 scrub starts
Jan 20 14:01:07 compute-1 ceph-osd[79119]: log_channel(cluster) log [DBG] : 5.9 scrub ok
Jan 20 14:01:07 compute-1 ceph-mon[81775]: 6.15 scrub ok
Jan 20 14:01:07 compute-1 ceph-mon[81775]: 5.9 scrub starts
Jan 20 14:01:07 compute-1 ceph-mon[81775]: 5.9 scrub ok
Jan 20 14:01:07 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:01:07 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:01:07 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:01:07.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:01:08 compute-1 ceph-mon[81775]: pgmap v278: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 9 B/s, 0 objects/s recovering
Jan 20 14:01:09 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:01:09 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:01:09 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:01:09.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:01:09 compute-1 ceph-mon[81775]: 2.f scrub starts
Jan 20 14:01:09 compute-1 ceph-mon[81775]: 2.f scrub ok
Jan 20 14:01:09 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:01:09 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:01:09 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:01:09.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:01:10 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:01:10 compute-1 ceph-mon[81775]: 8.1 scrub starts
Jan 20 14:01:10 compute-1 ceph-mon[81775]: 8.1 scrub ok
Jan 20 14:01:10 compute-1 ceph-mon[81775]: pgmap v279: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 9 B/s, 0 objects/s recovering
Jan 20 14:01:11 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:01:11 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:01:11 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:01:11.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:01:11 compute-1 ceph-mon[81775]: 10.12 scrub starts
Jan 20 14:01:11 compute-1 ceph-mon[81775]: 10.12 scrub ok
Jan 20 14:01:11 compute-1 ceph-mon[81775]: 9.1 scrub starts
Jan 20 14:01:11 compute-1 ceph-mon[81775]: 9.1 scrub ok
Jan 20 14:01:11 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:01:11 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:01:11 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:01:11.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:01:12 compute-1 ceph-osd[79119]: log_channel(cluster) log [DBG] : 3.13 scrub starts
Jan 20 14:01:12 compute-1 ceph-osd[79119]: log_channel(cluster) log [DBG] : 3.13 scrub ok
Jan 20 14:01:12 compute-1 ceph-mon[81775]: 8.7 deep-scrub starts
Jan 20 14:01:12 compute-1 ceph-mon[81775]: 8.7 deep-scrub ok
Jan 20 14:01:12 compute-1 ceph-mon[81775]: pgmap v280: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:01:12 compute-1 ceph-mon[81775]: 3.13 scrub starts
Jan 20 14:01:12 compute-1 ceph-mon[81775]: 3.13 scrub ok
Jan 20 14:01:13 compute-1 ceph-osd[79119]: log_channel(cluster) log [DBG] : 3.14 scrub starts
Jan 20 14:01:13 compute-1 ceph-osd[79119]: log_channel(cluster) log [DBG] : 3.14 scrub ok
Jan 20 14:01:13 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:01:13 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:01:13 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:01:13.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:01:13 compute-1 ceph-mon[81775]: 9.2 scrub starts
Jan 20 14:01:13 compute-1 ceph-mon[81775]: 9.2 scrub ok
Jan 20 14:01:13 compute-1 ceph-mon[81775]: 3.14 scrub starts
Jan 20 14:01:13 compute-1 ceph-mon[81775]: 3.14 scrub ok
Jan 20 14:01:13 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:01:13 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:01:13 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:01:13.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:01:14 compute-1 ceph-mon[81775]: 10.1e scrub starts
Jan 20 14:01:14 compute-1 ceph-mon[81775]: 10.1e scrub ok
Jan 20 14:01:14 compute-1 ceph-mon[81775]: pgmap v281: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:01:15 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:01:15 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:01:15 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:01:15 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:01:15.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:01:15 compute-1 ceph-mon[81775]: 10.4 scrub starts
Jan 20 14:01:15 compute-1 ceph-mon[81775]: 10.4 scrub ok
Jan 20 14:01:15 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:01:15 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:01:15 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:01:15.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:01:16 compute-1 ceph-mon[81775]: pgmap v282: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:01:17 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:01:17 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:01:17 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:01:17.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:01:17 compute-1 ceph-mon[81775]: 10.11 scrub starts
Jan 20 14:01:17 compute-1 ceph-mon[81775]: 10.11 scrub ok
Jan 20 14:01:17 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:01:17 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 20 14:01:17 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:01:17.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 20 14:01:18 compute-1 ceph-osd[79119]: log_channel(cluster) log [DBG] : 5.15 scrub starts
Jan 20 14:01:18 compute-1 ceph-osd[79119]: log_channel(cluster) log [DBG] : 5.15 scrub ok
Jan 20 14:01:18 compute-1 sudo[88417]: pam_unix(sudo:session): session closed for user root
Jan 20 14:01:18 compute-1 ceph-mon[81775]: 8.e scrub starts
Jan 20 14:01:18 compute-1 ceph-mon[81775]: pgmap v283: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:01:18 compute-1 ceph-mon[81775]: 8.e scrub ok
Jan 20 14:01:18 compute-1 ceph-mon[81775]: 10.f scrub starts
Jan 20 14:01:18 compute-1 ceph-mon[81775]: 10.f scrub ok
Jan 20 14:01:18 compute-1 ceph-mon[81775]: 5.15 scrub starts
Jan 20 14:01:18 compute-1 ceph-mon[81775]: 5.15 scrub ok
Jan 20 14:01:18 compute-1 sudo[89203]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ljiyovdebnydhslvtiphfeaixphdtmjl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917678.566536-371-201006825761081/AnsiballZ_command.py'
Jan 20 14:01:18 compute-1 sudo[89203]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:01:19 compute-1 python3.9[89205]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 14:01:19 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:01:19 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:01:19 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:01:19.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:01:19 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:01:19 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:01:19 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:01:19.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:01:19 compute-1 sudo[89203]: pam_unix(sudo:session): session closed for user root
Jan 20 14:01:20 compute-1 ceph-mon[81775]: 10.10 scrub starts
Jan 20 14:01:20 compute-1 ceph-mon[81775]: 10.10 scrub ok
Jan 20 14:01:20 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:01:20 compute-1 sudo[89490]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vfzkpccrlqaamrrenkjgchjmqsqdlhrc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917680.1914952-394-161069995475039/AnsiballZ_selinux.py'
Jan 20 14:01:20 compute-1 sudo[89490]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:01:21 compute-1 python3.9[89492]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Jan 20 14:01:21 compute-1 ceph-mon[81775]: 8.13 scrub starts
Jan 20 14:01:21 compute-1 ceph-mon[81775]: 8.13 scrub ok
Jan 20 14:01:21 compute-1 ceph-mon[81775]: pgmap v284: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:01:21 compute-1 ceph-mon[81775]: 9.4 scrub starts
Jan 20 14:01:21 compute-1 ceph-mon[81775]: 9.4 scrub ok
Jan 20 14:01:21 compute-1 sudo[89490]: pam_unix(sudo:session): session closed for user root
Jan 20 14:01:21 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:01:21 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:01:21 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:01:21.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:01:21 compute-1 sudo[89642]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xvemqtiykbyutkssiafvjclfojrxmlgv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917681.6304104-427-277481977568387/AnsiballZ_command.py'
Jan 20 14:01:21 compute-1 sudo[89642]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:01:21 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:01:21 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:01:21 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:01:21.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:01:22 compute-1 ceph-mon[81775]: 10.3 scrub starts
Jan 20 14:01:22 compute-1 ceph-mon[81775]: 10.3 scrub ok
Jan 20 14:01:22 compute-1 ceph-mon[81775]: pgmap v285: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:01:22 compute-1 python3.9[89644]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Jan 20 14:01:22 compute-1 sudo[89642]: pam_unix(sudo:session): session closed for user root
Jan 20 14:01:22 compute-1 sudo[89794]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zeyicchhzucdahsiikddquztvtqxvjnr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917682.3954782-451-98694096167583/AnsiballZ_file.py'
Jan 20 14:01:22 compute-1 sudo[89794]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:01:22 compute-1 python3.9[89796]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:01:22 compute-1 sudo[89794]: pam_unix(sudo:session): session closed for user root
Jan 20 14:01:23 compute-1 ceph-mon[81775]: 8.1a deep-scrub starts
Jan 20 14:01:23 compute-1 ceph-mon[81775]: 8.1a deep-scrub ok
Jan 20 14:01:23 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:01:23 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:01:23 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:01:23.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:01:23 compute-1 sudo[89946]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-udypgrqshnvdxifmnvslucsuitfjdggm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917683.2070942-475-176283130635022/AnsiballZ_mount.py'
Jan 20 14:01:23 compute-1 sudo[89946]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:01:23 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:01:23 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:01:23 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:01:23.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:01:23 compute-1 python3.9[89948]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Jan 20 14:01:23 compute-1 sudo[89946]: pam_unix(sudo:session): session closed for user root
Jan 20 14:01:24 compute-1 ceph-mon[81775]: 8.1d deep-scrub starts
Jan 20 14:01:24 compute-1 ceph-mon[81775]: 8.1d deep-scrub ok
Jan 20 14:01:24 compute-1 ceph-mon[81775]: pgmap v286: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:01:25 compute-1 ceph-mon[81775]: 8.1e scrub starts
Jan 20 14:01:25 compute-1 ceph-mon[81775]: 8.1e scrub ok
Jan 20 14:01:25 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:01:25 compute-1 sudo[90098]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jnhvmtfvjyyapnljbqhifzamxxbqezcc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917684.93301-559-25678151857290/AnsiballZ_file.py'
Jan 20 14:01:25 compute-1 sudo[90098]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:01:25 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:01:25 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:01:25 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:01:25.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:01:25 compute-1 python3.9[90100]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:01:25 compute-1 sudo[90098]: pam_unix(sudo:session): session closed for user root
Jan 20 14:01:25 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:01:25 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:01:25 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:01:25.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:01:26 compute-1 sudo[90250]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fwaihmwymymzaxxhndljmpyeemmgqcjg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917685.7442505-583-100871071010785/AnsiballZ_stat.py'
Jan 20 14:01:26 compute-1 sudo[90250]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:01:26 compute-1 python3.9[90252]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:01:26 compute-1 ceph-mon[81775]: 10.1 scrub starts
Jan 20 14:01:26 compute-1 ceph-mon[81775]: 10.1 scrub ok
Jan 20 14:01:26 compute-1 ceph-mon[81775]: pgmap v287: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:01:26 compute-1 sudo[90250]: pam_unix(sudo:session): session closed for user root
Jan 20 14:01:26 compute-1 ceph-osd[79119]: log_channel(cluster) log [DBG] : 4.13 scrub starts
Jan 20 14:01:26 compute-1 ceph-osd[79119]: log_channel(cluster) log [DBG] : 4.13 scrub ok
Jan 20 14:01:26 compute-1 sudo[90328]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-znwomqbwigmebfotuuvaevlqacyzwfsb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917685.7442505-583-100871071010785/AnsiballZ_file.py'
Jan 20 14:01:26 compute-1 sudo[90328]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:01:26 compute-1 python3.9[90330]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem _original_basename=tls-ca-bundle.pem recurse=False state=file path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:01:26 compute-1 sudo[90328]: pam_unix(sudo:session): session closed for user root
Jan 20 14:01:27 compute-1 ceph-mon[81775]: 9.c scrub starts
Jan 20 14:01:27 compute-1 ceph-mon[81775]: 9.c scrub ok
Jan 20 14:01:27 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:01:27 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:01:27 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:01:27.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:01:27 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:01:27 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:01:27 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:01:27.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:01:28 compute-1 sudo[90480]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iyqozqqpiqdtuovueugctsxwaqmwdjvz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917687.7300963-646-131214685837953/AnsiballZ_stat.py'
Jan 20 14:01:28 compute-1 sudo[90480]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:01:28 compute-1 python3.9[90482]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 20 14:01:28 compute-1 sudo[90480]: pam_unix(sudo:session): session closed for user root
Jan 20 14:01:28 compute-1 ceph-mon[81775]: 4.13 scrub starts
Jan 20 14:01:28 compute-1 ceph-mon[81775]: 4.13 scrub ok
Jan 20 14:01:28 compute-1 ceph-mon[81775]: pgmap v288: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:01:29 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:01:29 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:01:29 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:01:29.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:01:29 compute-1 sudo[90634]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fzejvuptagqsxjfzpxqovjuljuajuzwa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917688.8572378-685-44373644369993/AnsiballZ_getent.py'
Jan 20 14:01:29 compute-1 sudo[90634]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:01:29 compute-1 ceph-mon[81775]: 8.15 scrub starts
Jan 20 14:01:29 compute-1 ceph-mon[81775]: 8.15 scrub ok
Jan 20 14:01:29 compute-1 ceph-mon[81775]: 9.14 scrub starts
Jan 20 14:01:29 compute-1 ceph-mon[81775]: 9.14 scrub ok
Jan 20 14:01:29 compute-1 python3.9[90636]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Jan 20 14:01:29 compute-1 sudo[90634]: pam_unix(sudo:session): session closed for user root
Jan 20 14:01:29 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:01:29 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:01:29 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:01:29.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:01:30 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:01:30 compute-1 sudo[90787]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-epturcovnnzhuvhbndjoimvaifqqqqgk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917689.9757755-715-193038399706462/AnsiballZ_getent.py'
Jan 20 14:01:30 compute-1 sudo[90787]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:01:30 compute-1 python3.9[90789]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Jan 20 14:01:30 compute-1 sudo[90787]: pam_unix(sudo:session): session closed for user root
Jan 20 14:01:30 compute-1 ceph-mon[81775]: 11.16 deep-scrub starts
Jan 20 14:01:30 compute-1 ceph-mon[81775]: 11.16 deep-scrub ok
Jan 20 14:01:30 compute-1 ceph-mon[81775]: 9.1c scrub starts
Jan 20 14:01:30 compute-1 ceph-mon[81775]: 9.1c scrub ok
Jan 20 14:01:30 compute-1 ceph-mon[81775]: pgmap v289: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:01:31 compute-1 sudo[90940]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zktotiohsuafjyrdlrndcqvpbiyhziqe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917690.751385-739-5031216522601/AnsiballZ_group.py'
Jan 20 14:01:31 compute-1 sudo[90940]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:01:31 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:01:31 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 20 14:01:31 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:01:31.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 20 14:01:31 compute-1 python3.9[90942]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 20 14:01:31 compute-1 sudo[90940]: pam_unix(sudo:session): session closed for user root
Jan 20 14:01:31 compute-1 ceph-mon[81775]: 11.17 scrub starts
Jan 20 14:01:31 compute-1 ceph-mon[81775]: 11.17 scrub ok
Jan 20 14:01:31 compute-1 ceph-mon[81775]: 11.2 scrub starts
Jan 20 14:01:31 compute-1 ceph-mon[81775]: 11.2 scrub ok
Jan 20 14:01:31 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:01:31 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:01:31 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:01:31.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:01:32 compute-1 sudo[91092]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rkctoxehyekscoueynbmtgrduhbhebth ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917691.8093572-766-37262083953600/AnsiballZ_file.py'
Jan 20 14:01:32 compute-1 sudo[91092]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:01:32 compute-1 python3.9[91095]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Jan 20 14:01:32 compute-1 sudo[91092]: pam_unix(sudo:session): session closed for user root
Jan 20 14:01:32 compute-1 ceph-mon[81775]: pgmap v290: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:01:33 compute-1 sshd-session[91093]: Invalid user matrix from 116.99.171.211 port 53860
Jan 20 14:01:33 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:01:33 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:01:33 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:01:33.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:01:33 compute-1 sudo[91246]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vagjccjxegwrkatkwnttpsnebjabzsfq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917693.0633507-799-63506631376502/AnsiballZ_dnf.py'
Jan 20 14:01:33 compute-1 sudo[91246]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:01:33 compute-1 python3.9[91248]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 20 14:01:33 compute-1 sshd-session[91093]: Connection closed by invalid user matrix 116.99.171.211 port 53860 [preauth]
Jan 20 14:01:33 compute-1 ceph-mon[81775]: 8.2 deep-scrub starts
Jan 20 14:01:33 compute-1 ceph-mon[81775]: 8.2 deep-scrub ok
Jan 20 14:01:33 compute-1 ceph-mon[81775]: 11.6 deep-scrub starts
Jan 20 14:01:33 compute-1 ceph-mon[81775]: 11.6 deep-scrub ok
Jan 20 14:01:33 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:01:33 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:01:33 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:01:33.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:01:34 compute-1 ceph-mon[81775]: pgmap v291: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:01:35 compute-1 sudo[91246]: pam_unix(sudo:session): session closed for user root
Jan 20 14:01:35 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:01:35 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:01:35 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:01:35 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:01:35.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:01:35 compute-1 ceph-osd[79119]: log_channel(cluster) log [DBG] : 5.16 scrub starts
Jan 20 14:01:35 compute-1 ceph-osd[79119]: log_channel(cluster) log [DBG] : 5.16 scrub ok
Jan 20 14:01:35 compute-1 ceph-mon[81775]: 8.16 deep-scrub starts
Jan 20 14:01:35 compute-1 ceph-mon[81775]: 8.16 deep-scrub ok
Jan 20 14:01:35 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:01:35 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:01:35 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:01:35.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:01:35 compute-1 sudo[91399]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xrmmxckxvtrsrbaqohswobmyrevjwach ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917695.6521502-823-120078402267159/AnsiballZ_file.py'
Jan 20 14:01:35 compute-1 sudo[91399]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:01:36 compute-1 python3.9[91401]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:01:36 compute-1 sudo[91399]: pam_unix(sudo:session): session closed for user root
Jan 20 14:01:36 compute-1 ceph-mon[81775]: 8.f scrub starts
Jan 20 14:01:36 compute-1 ceph-mon[81775]: 8.f scrub ok
Jan 20 14:01:36 compute-1 ceph-mon[81775]: 11.9 scrub starts
Jan 20 14:01:36 compute-1 ceph-mon[81775]: pgmap v292: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:01:36 compute-1 ceph-mon[81775]: 11.9 scrub ok
Jan 20 14:01:36 compute-1 ceph-mon[81775]: 5.16 scrub starts
Jan 20 14:01:36 compute-1 ceph-mon[81775]: 5.16 scrub ok
Jan 20 14:01:36 compute-1 sudo[91551]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-prxvwjjqicmddalenqktmqknywyvdpzz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917696.4336271-847-114040329365561/AnsiballZ_stat.py'
Jan 20 14:01:36 compute-1 sudo[91551]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:01:36 compute-1 python3.9[91553]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:01:37 compute-1 sudo[91551]: pam_unix(sudo:session): session closed for user root
Jan 20 14:01:37 compute-1 sudo[91629]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-niwqkqcardxmzvoeexjdcilasoidzqxl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917696.4336271-847-114040329365561/AnsiballZ_file.py'
Jan 20 14:01:37 compute-1 sudo[91629]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:01:37 compute-1 ceph-osd[79119]: log_channel(cluster) log [DBG] : 3.16 scrub starts
Jan 20 14:01:37 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:01:37 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:01:37 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:01:37.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:01:37 compute-1 ceph-osd[79119]: log_channel(cluster) log [DBG] : 3.16 scrub ok
Jan 20 14:01:37 compute-1 python3.9[91631]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/modules-load.d/99-edpm.conf _original_basename=edpm-modprobe.conf.j2 recurse=False state=file path=/etc/modules-load.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:01:37 compute-1 sudo[91629]: pam_unix(sudo:session): session closed for user root
Jan 20 14:01:37 compute-1 ceph-mon[81775]: 3.16 scrub starts
Jan 20 14:01:37 compute-1 ceph-mon[81775]: 3.16 scrub ok
Jan 20 14:01:37 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:01:37 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:01:37 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:01:37.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:01:38 compute-1 sudo[91781]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bwuevuqpcwjcugszbhawilzfnclwjfnp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917697.816992-886-88177554657560/AnsiballZ_stat.py'
Jan 20 14:01:38 compute-1 sudo[91781]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:01:38 compute-1 python3.9[91783]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:01:38 compute-1 sudo[91781]: pam_unix(sudo:session): session closed for user root
Jan 20 14:01:38 compute-1 sudo[91859]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zoanpcvgxkbepoucosjjgwnkfbiqkiry ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917697.816992-886-88177554657560/AnsiballZ_file.py'
Jan 20 14:01:38 compute-1 sudo[91859]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:01:38 compute-1 ceph-mon[81775]: 11.a scrub starts
Jan 20 14:01:38 compute-1 ceph-mon[81775]: 11.a scrub ok
Jan 20 14:01:38 compute-1 ceph-mon[81775]: pgmap v293: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:01:38 compute-1 python3.9[91861]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/sysctl.d/99-edpm.conf _original_basename=edpm-sysctl.conf.j2 recurse=False state=file path=/etc/sysctl.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:01:38 compute-1 sudo[91859]: pam_unix(sudo:session): session closed for user root
Jan 20 14:01:39 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:01:39 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:01:39 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:01:39.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:01:39 compute-1 sudo[92011]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tsifvnshovbbcpubzqismzvwqjygdtdu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917699.4487808-931-113168400806126/AnsiballZ_dnf.py'
Jan 20 14:01:39 compute-1 sudo[92011]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:01:39 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:01:39 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:01:39 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:01:39.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:01:40 compute-1 python3.9[92013]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 20 14:01:40 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:01:40 compute-1 ceph-osd[79119]: log_channel(cluster) log [DBG] : 5.10 scrub starts
Jan 20 14:01:40 compute-1 ceph-osd[79119]: log_channel(cluster) log [DBG] : 5.10 scrub ok
Jan 20 14:01:40 compute-1 ceph-mon[81775]: pgmap v294: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:01:41 compute-1 sudo[92011]: pam_unix(sudo:session): session closed for user root
Jan 20 14:01:41 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:01:41 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:01:41 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:01:41.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:01:41 compute-1 ceph-mon[81775]: 5.10 scrub starts
Jan 20 14:01:41 compute-1 ceph-mon[81775]: 5.10 scrub ok
Jan 20 14:01:41 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:01:41 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:01:41 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:01:41.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:01:42 compute-1 python3.9[92164]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 20 14:01:42 compute-1 ceph-osd[79119]: log_channel(cluster) log [DBG] : 5.11 scrub starts
Jan 20 14:01:42 compute-1 ceph-osd[79119]: log_channel(cluster) log [DBG] : 5.11 scrub ok
Jan 20 14:01:43 compute-1 ceph-mon[81775]: pgmap v295: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:01:43 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:01:43 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:01:43 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:01:43.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:01:43 compute-1 python3.9[92316]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Jan 20 14:01:43 compute-1 systemd[72729]: Created slice User Background Tasks Slice.
Jan 20 14:01:43 compute-1 systemd[72729]: Starting Cleanup of User's Temporary Files and Directories...
Jan 20 14:01:43 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:01:43 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:01:43 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:01:43.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:01:43 compute-1 systemd[72729]: Finished Cleanup of User's Temporary Files and Directories.
Jan 20 14:01:44 compute-1 ceph-mon[81775]: 11.b scrub starts
Jan 20 14:01:44 compute-1 ceph-mon[81775]: 11.b scrub ok
Jan 20 14:01:44 compute-1 ceph-mon[81775]: 5.11 scrub starts
Jan 20 14:01:44 compute-1 ceph-mon[81775]: 5.11 scrub ok
Jan 20 14:01:44 compute-1 ceph-mon[81775]: 8.3 scrub starts
Jan 20 14:01:44 compute-1 ceph-mon[81775]: 8.3 scrub ok
Jan 20 14:01:44 compute-1 ceph-mon[81775]: pgmap v296: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:01:44 compute-1 python3.9[92466]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 20 14:01:44 compute-1 ceph-osd[79119]: log_channel(cluster) log [DBG] : 5.1f scrub starts
Jan 20 14:01:44 compute-1 ceph-osd[79119]: log_channel(cluster) log [DBG] : 5.1f scrub ok
Jan 20 14:01:45 compute-1 ceph-mon[81775]: 11.c scrub starts
Jan 20 14:01:45 compute-1 ceph-mon[81775]: 11.c scrub ok
Jan 20 14:01:45 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:01:45 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:01:45 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:01:45 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:01:45.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:01:45 compute-1 sudo[92617]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yclgdtnjygmqnkspknztygynagnctkod ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917704.716301-1054-28653613441814/AnsiballZ_systemd.py'
Jan 20 14:01:45 compute-1 sudo[92617]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:01:45 compute-1 python3.9[92619]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 20 14:01:45 compute-1 systemd[1]: Stopping Dynamic System Tuning Daemon...
Jan 20 14:01:45 compute-1 systemd[1]: tuned.service: Deactivated successfully.
Jan 20 14:01:45 compute-1 systemd[1]: Stopped Dynamic System Tuning Daemon.
Jan 20 14:01:45 compute-1 systemd[1]: Starting Dynamic System Tuning Daemon...
Jan 20 14:01:45 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:01:45 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:01:45 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:01:45.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:01:46 compute-1 systemd[1]: Started Dynamic System Tuning Daemon.
Jan 20 14:01:46 compute-1 sudo[92617]: pam_unix(sudo:session): session closed for user root
Jan 20 14:01:46 compute-1 ceph-mon[81775]: 5.1f scrub starts
Jan 20 14:01:46 compute-1 ceph-mon[81775]: 5.1f scrub ok
Jan 20 14:01:46 compute-1 ceph-mon[81775]: pgmap v297: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:01:46 compute-1 python3.9[92781]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Jan 20 14:01:47 compute-1 ceph-mon[81775]: 8.a scrub starts
Jan 20 14:01:47 compute-1 ceph-mon[81775]: 8.a scrub ok
Jan 20 14:01:47 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:01:47 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:01:47 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:01:47.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:01:47 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:01:47 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:01:47 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:01:47.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:01:48 compute-1 sudo[92806]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:01:48 compute-1 sudo[92806]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:01:48 compute-1 sudo[92806]: pam_unix(sudo:session): session closed for user root
Jan 20 14:01:48 compute-1 sudo[92831]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:01:48 compute-1 sudo[92831]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:01:48 compute-1 sudo[92831]: pam_unix(sudo:session): session closed for user root
Jan 20 14:01:48 compute-1 sudo[92856]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:01:48 compute-1 sudo[92856]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:01:48 compute-1 sudo[92856]: pam_unix(sudo:session): session closed for user root
Jan 20 14:01:48 compute-1 sudo[92881]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Jan 20 14:01:48 compute-1 sudo[92881]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:01:48 compute-1 ceph-mon[81775]: pgmap v298: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:01:48 compute-1 ceph-mon[81775]: 11.d scrub starts
Jan 20 14:01:48 compute-1 ceph-mon[81775]: 11.d scrub ok
Jan 20 14:01:48 compute-1 podman[92978]: 2026-01-20 14:01:48.875367221 +0000 UTC m=+0.063430002 container exec 718ebba7a543e42aad7051248d2c7dc014068c35c89c5b87f27b82d4de39c009 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-crash-compute-1, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:01:49 compute-1 podman[92978]: 2026-01-20 14:01:49.008259778 +0000 UTC m=+0.196322499 container exec_died 718ebba7a543e42aad7051248d2c7dc014068c35c89c5b87f27b82d4de39c009 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-crash-compute-1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Jan 20 14:01:49 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:01:49 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:01:49 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:01:49.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:01:49 compute-1 ceph-mon[81775]: 11.10 scrub starts
Jan 20 14:01:49 compute-1 ceph-mon[81775]: 11.10 scrub ok
Jan 20 14:01:49 compute-1 sudo[92881]: pam_unix(sudo:session): session closed for user root
Jan 20 14:01:49 compute-1 sudo[93106]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:01:49 compute-1 sudo[93106]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:01:49 compute-1 sudo[93106]: pam_unix(sudo:session): session closed for user root
Jan 20 14:01:49 compute-1 sudo[93131]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:01:49 compute-1 sudo[93131]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:01:49 compute-1 sudo[93131]: pam_unix(sudo:session): session closed for user root
Jan 20 14:01:49 compute-1 sudo[93156]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:01:49 compute-1 sudo[93156]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:01:49 compute-1 sudo[93156]: pam_unix(sudo:session): session closed for user root
Jan 20 14:01:49 compute-1 sudo[93181]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 20 14:01:49 compute-1 sudo[93181]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:01:49 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:01:49 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:01:49 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:01:49.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:01:50 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:01:50 compute-1 ceph-osd[79119]: log_channel(cluster) log [DBG] : 3.d scrub starts
Jan 20 14:01:50 compute-1 ceph-osd[79119]: log_channel(cluster) log [DBG] : 3.d scrub ok
Jan 20 14:01:50 compute-1 sudo[93181]: pam_unix(sudo:session): session closed for user root
Jan 20 14:01:50 compute-1 ceph-mon[81775]: pgmap v299: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:01:50 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:01:50 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:01:50 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:01:50 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:01:50 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:01:50 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:01:50 compute-1 sudo[93362]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yzlntgmxaorqqfzsxpnrwibrribhmjkt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917710.396573-1225-267237839365342/AnsiballZ_systemd.py'
Jan 20 14:01:50 compute-1 sudo[93362]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:01:51 compute-1 python3.9[93364]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 20 14:01:51 compute-1 sudo[93362]: pam_unix(sudo:session): session closed for user root
Jan 20 14:01:51 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:01:51 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:01:51 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:01:51.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:01:51 compute-1 ceph-osd[79119]: log_channel(cluster) log [DBG] : 10.6 scrub starts
Jan 20 14:01:51 compute-1 ceph-osd[79119]: log_channel(cluster) log [DBG] : 10.6 scrub ok
Jan 20 14:01:51 compute-1 sudo[93516]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tekfznkenvyfgdarkvivlrzfjlvzrgis ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917711.2550447-1225-52579673690811/AnsiballZ_systemd.py'
Jan 20 14:01:51 compute-1 sudo[93516]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:01:51 compute-1 ceph-mon[81775]: 3.d scrub starts
Jan 20 14:01:51 compute-1 ceph-mon[81775]: 3.d scrub ok
Jan 20 14:01:51 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:01:51 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 14:01:51 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:01:51 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 14:01:51 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 14:01:51 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:01:51 compute-1 python3.9[93518]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 20 14:01:51 compute-1 sudo[93516]: pam_unix(sudo:session): session closed for user root
Jan 20 14:01:51 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:01:51 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:01:51 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:01:51.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:01:52 compute-1 ceph-osd[79119]: log_channel(cluster) log [DBG] : 10.7 scrub starts
Jan 20 14:01:52 compute-1 ceph-osd[79119]: log_channel(cluster) log [DBG] : 10.7 scrub ok
Jan 20 14:01:52 compute-1 ceph-mon[81775]: 8.11 scrub starts
Jan 20 14:01:52 compute-1 ceph-mon[81775]: 8.11 scrub ok
Jan 20 14:01:52 compute-1 ceph-mon[81775]: pgmap v300: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:01:52 compute-1 ceph-mon[81775]: 10.6 scrub starts
Jan 20 14:01:52 compute-1 ceph-mon[81775]: 10.6 scrub ok
Jan 20 14:01:52 compute-1 sshd-session[86504]: Connection closed by 192.168.122.30 port 55684
Jan 20 14:01:52 compute-1 sshd-session[86501]: pam_unix(sshd:session): session closed for user zuul
Jan 20 14:01:52 compute-1 systemd[1]: session-34.scope: Deactivated successfully.
Jan 20 14:01:52 compute-1 systemd[1]: session-34.scope: Consumed 1min 6.332s CPU time.
Jan 20 14:01:52 compute-1 systemd-logind[783]: Session 34 logged out. Waiting for processes to exit.
Jan 20 14:01:52 compute-1 systemd-logind[783]: Removed session 34.
Jan 20 14:01:53 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:01:53 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:01:53 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:01:53.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:01:53 compute-1 ceph-mon[81775]: 11.11 scrub starts
Jan 20 14:01:53 compute-1 ceph-mon[81775]: 11.11 scrub ok
Jan 20 14:01:53 compute-1 ceph-mon[81775]: 10.7 scrub starts
Jan 20 14:01:53 compute-1 ceph-mon[81775]: 10.7 scrub ok
Jan 20 14:01:53 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:01:53 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 20 14:01:53 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:01:53.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 20 14:01:54 compute-1 ceph-mon[81775]: pgmap v301: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:01:54 compute-1 ceph-mon[81775]: 11.15 scrub starts
Jan 20 14:01:54 compute-1 ceph-mon[81775]: 11.15 scrub ok
Jan 20 14:01:55 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:01:55 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:01:55 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:01:55 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:01:55.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:01:55 compute-1 ceph-mon[81775]: 11.18 deep-scrub starts
Jan 20 14:01:55 compute-1 ceph-mon[81775]: 11.18 deep-scrub ok
Jan 20 14:01:55 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:01:55 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:01:55 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:01:55.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:01:56 compute-1 ceph-mon[81775]: pgmap v302: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:01:57 compute-1 sudo[93545]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:01:57 compute-1 sudo[93545]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:01:57 compute-1 sudo[93545]: pam_unix(sudo:session): session closed for user root
Jan 20 14:01:57 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:01:57 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:01:57 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:01:57.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:01:57 compute-1 ceph-osd[79119]: log_channel(cluster) log [DBG] : 10.9 deep-scrub starts
Jan 20 14:01:57 compute-1 ceph-osd[79119]: log_channel(cluster) log [DBG] : 10.9 deep-scrub ok
Jan 20 14:01:57 compute-1 sudo[93570]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 14:01:57 compute-1 sudo[93570]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:01:57 compute-1 sudo[93570]: pam_unix(sudo:session): session closed for user root
Jan 20 14:01:57 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:01:57 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:01:57 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:01:57.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:01:58 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:01:58 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:01:58 compute-1 ceph-mon[81775]: pgmap v303: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:01:58 compute-1 ceph-osd[79119]: log_channel(cluster) log [DBG] : 10.a scrub starts
Jan 20 14:01:58 compute-1 ceph-osd[79119]: log_channel(cluster) log [DBG] : 10.a scrub ok
Jan 20 14:01:58 compute-1 sshd-session[93595]: Accepted publickey for zuul from 192.168.122.30 port 40846 ssh2: ECDSA SHA256:Yw0kyD5N4lqNgr1J3b5cYIIxKFrTRY8zW6kk+n6imz4
Jan 20 14:01:58 compute-1 systemd-logind[783]: New session 35 of user zuul.
Jan 20 14:01:58 compute-1 systemd[1]: Started Session 35 of User zuul.
Jan 20 14:01:58 compute-1 sshd-session[93595]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 20 14:01:59 compute-1 ceph-mon[81775]: 10.9 deep-scrub starts
Jan 20 14:01:59 compute-1 ceph-mon[81775]: 10.9 deep-scrub ok
Jan 20 14:01:59 compute-1 ceph-mon[81775]: 10.a scrub starts
Jan 20 14:01:59 compute-1 ceph-mon[81775]: 10.a scrub ok
Jan 20 14:01:59 compute-1 ceph-osd[79119]: log_channel(cluster) log [DBG] : 10.b scrub starts
Jan 20 14:01:59 compute-1 ceph-osd[79119]: log_channel(cluster) log [DBG] : 10.b scrub ok
Jan 20 14:01:59 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:01:59 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 20 14:01:59 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:01:59.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 20 14:01:59 compute-1 python3.9[93749]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 14:01:59 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:01:59 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:01:59 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:01:59.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:02:00 compute-1 ceph-mon[81775]: 11.e scrub starts
Jan 20 14:02:00 compute-1 ceph-mon[81775]: 11.e scrub ok
Jan 20 14:02:00 compute-1 ceph-mon[81775]: pgmap v304: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:02:00 compute-1 ceph-mon[81775]: 10.b scrub starts
Jan 20 14:02:00 compute-1 ceph-mon[81775]: 10.b scrub ok
Jan 20 14:02:00 compute-1 ceph-mon[81775]: 11.1f deep-scrub starts
Jan 20 14:02:00 compute-1 ceph-mon[81775]: 11.1f deep-scrub ok
Jan 20 14:02:00 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:02:00 compute-1 ceph-osd[79119]: log_channel(cluster) log [DBG] : 10.c scrub starts
Jan 20 14:02:00 compute-1 ceph-osd[79119]: log_channel(cluster) log [DBG] : 10.c scrub ok
Jan 20 14:02:01 compute-1 sudo[93903]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ewrrclxyioabizjqwsvdadpqwpqrrbvy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917720.4180632-69-212792429836164/AnsiballZ_getent.py'
Jan 20 14:02:01 compute-1 sudo[93903]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:02:01 compute-1 ceph-mon[81775]: 11.13 scrub starts
Jan 20 14:02:01 compute-1 ceph-mon[81775]: 11.13 scrub ok
Jan 20 14:02:01 compute-1 ceph-mon[81775]: 10.c scrub starts
Jan 20 14:02:01 compute-1 ceph-mon[81775]: 10.c scrub ok
Jan 20 14:02:01 compute-1 python3.9[93905]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Jan 20 14:02:01 compute-1 sudo[93903]: pam_unix(sudo:session): session closed for user root
Jan 20 14:02:01 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:02:01 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:02:01 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:02:01.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:02:01 compute-1 ceph-osd[79119]: log_channel(cluster) log [DBG] : 10.d scrub starts
Jan 20 14:02:01 compute-1 ceph-osd[79119]: log_channel(cluster) log [DBG] : 10.d scrub ok
Jan 20 14:02:01 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:02:01 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:02:01 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:02:01.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:02:01 compute-1 sudo[94056]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pkiklngfvrjmelspnoxqwvripfykqtjh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917721.7250733-105-84983544396788/AnsiballZ_setup.py'
Jan 20 14:02:01 compute-1 sudo[94056]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:02:02 compute-1 ceph-mon[81775]: pgmap v305: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:02:02 compute-1 python3.9[94058]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 20 14:02:02 compute-1 sudo[94056]: pam_unix(sudo:session): session closed for user root
Jan 20 14:02:02 compute-1 sudo[94140]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cokzcogbobztxqgmiryveidwkggjsnfn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917721.7250733-105-84983544396788/AnsiballZ_dnf.py'
Jan 20 14:02:02 compute-1 sudo[94140]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:02:03 compute-1 ceph-mon[81775]: 10.d scrub starts
Jan 20 14:02:03 compute-1 ceph-mon[81775]: 10.d scrub ok
Jan 20 14:02:03 compute-1 ceph-mon[81775]: 10.1b scrub starts
Jan 20 14:02:03 compute-1 ceph-mon[81775]: 10.1b scrub ok
Jan 20 14:02:03 compute-1 python3.9[94142]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 20 14:02:03 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:02:03 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 20 14:02:03 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:02:03.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 20 14:02:03 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:02:03 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:02:03 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:02:03.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:02:04 compute-1 ceph-mon[81775]: pgmap v306: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:02:04 compute-1 ceph-osd[79119]: log_channel(cluster) log [DBG] : 10.e deep-scrub starts
Jan 20 14:02:04 compute-1 ceph-osd[79119]: log_channel(cluster) log [DBG] : 10.e deep-scrub ok
Jan 20 14:02:04 compute-1 sudo[94140]: pam_unix(sudo:session): session closed for user root
Jan 20 14:02:05 compute-1 sudo[94293]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-unbvllfjlvgjsdhatlzaflzmapcbnfiw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917724.8178306-147-271464266192570/AnsiballZ_dnf.py'
Jan 20 14:02:05 compute-1 sudo[94293]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:02:05 compute-1 ceph-mon[81775]: 8.c scrub starts
Jan 20 14:02:05 compute-1 ceph-mon[81775]: 8.c scrub ok
Jan 20 14:02:05 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:02:05 compute-1 ceph-osd[79119]: log_channel(cluster) log [DBG] : 10.16 scrub starts
Jan 20 14:02:05 compute-1 ceph-osd[79119]: log_channel(cluster) log [DBG] : 10.16 scrub ok
Jan 20 14:02:05 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:02:05 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:02:05 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:02:05.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:02:05 compute-1 python3.9[94295]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 20 14:02:05 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:02:05 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:02:05 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:02:05.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:02:06 compute-1 ceph-mon[81775]: 10.e deep-scrub starts
Jan 20 14:02:06 compute-1 ceph-mon[81775]: 10.e deep-scrub ok
Jan 20 14:02:06 compute-1 ceph-mon[81775]: 8.d scrub starts
Jan 20 14:02:06 compute-1 ceph-mon[81775]: 8.d scrub ok
Jan 20 14:02:06 compute-1 ceph-mon[81775]: pgmap v307: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:02:06 compute-1 ceph-mon[81775]: 10.16 scrub starts
Jan 20 14:02:06 compute-1 ceph-mon[81775]: 10.16 scrub ok
Jan 20 14:02:06 compute-1 sudo[94293]: pam_unix(sudo:session): session closed for user root
Jan 20 14:02:07 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:02:07 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:02:07 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:02:07.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:02:07 compute-1 sudo[94446]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-stvkscrvmzfurynaeqbmrlhqrnwnoglf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917726.8075328-171-21451898298674/AnsiballZ_systemd.py'
Jan 20 14:02:07 compute-1 sudo[94446]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:02:07 compute-1 python3.9[94448]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 20 14:02:07 compute-1 sudo[94446]: pam_unix(sudo:session): session closed for user root
Jan 20 14:02:07 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:02:07 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 20 14:02:07 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:02:07.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 20 14:02:08 compute-1 ceph-mon[81775]: pgmap v308: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:02:09 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:02:09 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:02:09 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:02:09.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:02:09 compute-1 python3.9[94601]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 14:02:09 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:02:09 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:02:09 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:02:09.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:02:10 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:02:10 compute-1 ceph-osd[79119]: log_channel(cluster) log [DBG] : 10.17 deep-scrub starts
Jan 20 14:02:10 compute-1 ceph-mon[81775]: pgmap v309: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:02:10 compute-1 ceph-osd[79119]: log_channel(cluster) log [DBG] : 10.17 deep-scrub ok
Jan 20 14:02:10 compute-1 sudo[94751]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ywpsysglpciacshmnblpevpvabqguulo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917730.2534485-225-191572195572041/AnsiballZ_sefcontext.py'
Jan 20 14:02:10 compute-1 sudo[94751]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:02:11 compute-1 python3.9[94753]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Jan 20 14:02:11 compute-1 sudo[94751]: pam_unix(sudo:session): session closed for user root
Jan 20 14:02:11 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:02:11 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:02:11 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:02:11.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:02:11 compute-1 ceph-osd[79119]: log_channel(cluster) log [DBG] : 10.1a scrub starts
Jan 20 14:02:11 compute-1 ceph-mon[81775]: 10.17 deep-scrub starts
Jan 20 14:02:11 compute-1 ceph-mon[81775]: 10.17 deep-scrub ok
Jan 20 14:02:11 compute-1 ceph-osd[79119]: log_channel(cluster) log [DBG] : 10.1a scrub ok
Jan 20 14:02:11 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:02:11 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:02:11 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:02:11.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:02:12 compute-1 python3.9[94905]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 14:02:12 compute-1 ceph-osd[79119]: log_channel(cluster) log [DBG] : 10.1c scrub starts
Jan 20 14:02:12 compute-1 ceph-mon[81775]: 8.b scrub starts
Jan 20 14:02:12 compute-1 ceph-mon[81775]: 8.b scrub ok
Jan 20 14:02:12 compute-1 ceph-mon[81775]: pgmap v310: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:02:12 compute-1 ceph-mon[81775]: 10.1a scrub starts
Jan 20 14:02:12 compute-1 ceph-mon[81775]: 10.1a scrub ok
Jan 20 14:02:12 compute-1 ceph-osd[79119]: log_channel(cluster) log [DBG] : 10.1c scrub ok
Jan 20 14:02:13 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:02:13 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:02:13 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:02:13.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:02:13 compute-1 ceph-mon[81775]: 10.1c scrub starts
Jan 20 14:02:13 compute-1 ceph-mon[81775]: 10.1c scrub ok
Jan 20 14:02:13 compute-1 sudo[95061]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rvdxuqwftlfbvshprozshlavcnvfzwpd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917732.8958914-279-117086712314222/AnsiballZ_dnf.py'
Jan 20 14:02:13 compute-1 sudo[95061]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:02:13 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:02:14 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:02:14 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:02:13.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:02:14 compute-1 python3.9[95063]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 20 14:02:14 compute-1 ceph-mon[81775]: pgmap v311: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:02:15 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:02:15 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:02:15 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:02:15 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:02:15.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:02:15 compute-1 sudo[95061]: pam_unix(sudo:session): session closed for user root
Jan 20 14:02:16 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:02:16 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:02:16 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:02:16.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:02:16 compute-1 ceph-osd[79119]: log_channel(cluster) log [DBG] : 10.1d deep-scrub starts
Jan 20 14:02:16 compute-1 ceph-osd[79119]: log_channel(cluster) log [DBG] : 10.1d deep-scrub ok
Jan 20 14:02:16 compute-1 ceph-mon[81775]: 8.6 scrub starts
Jan 20 14:02:16 compute-1 ceph-mon[81775]: 8.6 scrub ok
Jan 20 14:02:16 compute-1 ceph-mon[81775]: pgmap v312: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:02:16 compute-1 ceph-mon[81775]: 10.1d deep-scrub starts
Jan 20 14:02:16 compute-1 ceph-mon[81775]: 10.1d deep-scrub ok
Jan 20 14:02:16 compute-1 sudo[95214]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-adlxyzgehxiemvnaamwunmonwizqweci ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917736.0975533-303-167386800324793/AnsiballZ_command.py'
Jan 20 14:02:16 compute-1 sudo[95214]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:02:16 compute-1 python3.9[95216]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 14:02:17 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:02:17 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:02:17 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:02:17.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:02:17 compute-1 sudo[95214]: pam_unix(sudo:session): session closed for user root
Jan 20 14:02:18 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:02:18 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:02:18 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:02:18.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:02:18 compute-1 sudo[95501]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-omcadoknjyswkaezcnkpmliuljsyiirs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917737.8687978-327-251669738739961/AnsiballZ_file.py'
Jan 20 14:02:18 compute-1 sudo[95501]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:02:18 compute-1 ceph-mon[81775]: pgmap v313: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:02:18 compute-1 python3.9[95503]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None attributes=None
Jan 20 14:02:18 compute-1 sudo[95501]: pam_unix(sudo:session): session closed for user root
Jan 20 14:02:19 compute-1 ceph-osd[79119]: log_channel(cluster) log [DBG] : 10.1f scrub starts
Jan 20 14:02:19 compute-1 ceph-osd[79119]: log_channel(cluster) log [DBG] : 10.1f scrub ok
Jan 20 14:02:19 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:02:19 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:02:19 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:02:19.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:02:19 compute-1 python3.9[95653]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 20 14:02:19 compute-1 ceph-mon[81775]: 10.18 scrub starts
Jan 20 14:02:19 compute-1 ceph-mon[81775]: 10.18 scrub ok
Jan 20 14:02:19 compute-1 ceph-mon[81775]: 10.1f scrub starts
Jan 20 14:02:19 compute-1 ceph-mon[81775]: 10.1f scrub ok
Jan 20 14:02:20 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:02:20 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:02:20 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:02:20.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:02:20 compute-1 sudo[95805]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-twcjkuhychefaeyouvugwwlnunnniatz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917739.7580554-375-108325593973314/AnsiballZ_dnf.py'
Jan 20 14:02:20 compute-1 sudo[95805]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:02:20 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:02:20 compute-1 python3.9[95807]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 20 14:02:20 compute-1 ceph-mon[81775]: 8.1c scrub starts
Jan 20 14:02:20 compute-1 ceph-mon[81775]: 8.1c scrub ok
Jan 20 14:02:20 compute-1 ceph-mon[81775]: pgmap v314: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:02:21 compute-1 ceph-osd[79119]: log_channel(cluster) log [DBG] : 8.14 scrub starts
Jan 20 14:02:21 compute-1 ceph-osd[79119]: log_channel(cluster) log [DBG] : 8.14 scrub ok
Jan 20 14:02:21 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:02:21 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:02:21 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:02:21.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:02:21 compute-1 sudo[95805]: pam_unix(sudo:session): session closed for user root
Jan 20 14:02:21 compute-1 ceph-mon[81775]: 8.14 scrub starts
Jan 20 14:02:21 compute-1 ceph-mon[81775]: 8.14 scrub ok
Jan 20 14:02:22 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:02:22 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:02:22 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:02:22.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:02:22 compute-1 sudo[95958]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ysjtroqanxtpvezantvwgxoffcdwdqks ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917741.8655586-402-274440357872359/AnsiballZ_dnf.py'
Jan 20 14:02:22 compute-1 sudo[95958]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:02:22 compute-1 python3.9[95960]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 20 14:02:22 compute-1 ceph-mon[81775]: pgmap v315: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:02:23 compute-1 sshd-session[94855]: Connection closed by authenticating user bin 116.99.171.211 port 45184 [preauth]
Jan 20 14:02:23 compute-1 ceph-osd[79119]: log_channel(cluster) log [DBG] : 11.14 scrub starts
Jan 20 14:02:23 compute-1 ceph-osd[79119]: log_channel(cluster) log [DBG] : 11.14 scrub ok
Jan 20 14:02:23 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:02:23 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:02:23 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:02:23.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:02:23 compute-1 sudo[95958]: pam_unix(sudo:session): session closed for user root
Jan 20 14:02:23 compute-1 ceph-mon[81775]: 11.14 scrub starts
Jan 20 14:02:23 compute-1 ceph-mon[81775]: 11.14 scrub ok
Jan 20 14:02:24 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:02:24 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:02:24 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:02:24.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:02:24 compute-1 sudo[96111]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wxpgzsfhwmbydnxspidspuzoqvgplduz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917744.2841244-438-66263761352803/AnsiballZ_stat.py'
Jan 20 14:02:24 compute-1 sudo[96111]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:02:24 compute-1 ceph-mon[81775]: pgmap v316: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:02:24 compute-1 python3.9[96113]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 20 14:02:24 compute-1 sudo[96111]: pam_unix(sudo:session): session closed for user root
Jan 20 14:02:25 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:02:25 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:02:25 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:02:25 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:02:25.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:02:25 compute-1 sudo[96265]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qqbvqmrispgocokpgfnoimrubcvmiknk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917745.0759006-462-245422154235231/AnsiballZ_slurp.py'
Jan 20 14:02:25 compute-1 sudo[96265]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:02:25 compute-1 python3.9[96267]: ansible-ansible.builtin.slurp Invoked with path=/var/lib/edpm-config/os-net-config.returncode src=/var/lib/edpm-config/os-net-config.returncode
Jan 20 14:02:25 compute-1 sudo[96265]: pam_unix(sudo:session): session closed for user root
Jan 20 14:02:25 compute-1 ceph-mon[81775]: 10.19 scrub starts
Jan 20 14:02:25 compute-1 ceph-mon[81775]: 10.19 scrub ok
Jan 20 14:02:26 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:02:26 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:02:26 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:02:26.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:02:26 compute-1 ceph-mon[81775]: 10.2 scrub starts
Jan 20 14:02:26 compute-1 ceph-mon[81775]: 10.2 scrub ok
Jan 20 14:02:26 compute-1 ceph-mon[81775]: pgmap v317: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:02:26 compute-1 ceph-mon[81775]: 8.5 scrub starts
Jan 20 14:02:26 compute-1 ceph-mon[81775]: 8.5 scrub ok
Jan 20 14:02:27 compute-1 sshd-session[93598]: Connection closed by 192.168.122.30 port 40846
Jan 20 14:02:27 compute-1 sshd-session[93595]: pam_unix(sshd:session): session closed for user zuul
Jan 20 14:02:27 compute-1 systemd[1]: session-35.scope: Deactivated successfully.
Jan 20 14:02:27 compute-1 systemd[1]: session-35.scope: Consumed 18.780s CPU time.
Jan 20 14:02:27 compute-1 systemd-logind[783]: Session 35 logged out. Waiting for processes to exit.
Jan 20 14:02:27 compute-1 systemd-logind[783]: Removed session 35.
Jan 20 14:02:27 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:02:27 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 20 14:02:27 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:02:27.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 20 14:02:28 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:02:28 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:02:28 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:02:28.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:02:29 compute-1 ceph-mon[81775]: 10.8 scrub starts
Jan 20 14:02:29 compute-1 ceph-mon[81775]: 10.8 scrub ok
Jan 20 14:02:29 compute-1 ceph-mon[81775]: pgmap v318: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:02:29 compute-1 ceph-osd[79119]: log_channel(cluster) log [DBG] : 8.17 scrub starts
Jan 20 14:02:29 compute-1 ceph-osd[79119]: log_channel(cluster) log [DBG] : 8.17 scrub ok
Jan 20 14:02:29 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:02:29 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:02:29 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:02:29.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:02:30 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:02:30 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:02:30 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:02:30.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:02:30 compute-1 ceph-mon[81775]: pgmap v319: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:02:30 compute-1 ceph-mon[81775]: 8.17 scrub starts
Jan 20 14:02:30 compute-1 ceph-mon[81775]: 8.17 scrub ok
Jan 20 14:02:30 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:02:31 compute-1 ceph-mon[81775]: 10.5 scrub starts
Jan 20 14:02:31 compute-1 ceph-mon[81775]: 10.5 scrub ok
Jan 20 14:02:31 compute-1 ceph-mon[81775]: 8.9 scrub starts
Jan 20 14:02:31 compute-1 ceph-osd[79119]: log_channel(cluster) log [DBG] : 8.10 deep-scrub starts
Jan 20 14:02:31 compute-1 ceph-osd[79119]: log_channel(cluster) log [DBG] : 8.10 deep-scrub ok
Jan 20 14:02:31 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:02:31 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:02:31 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:02:31.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:02:32 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:02:32 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:02:32 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:02:32.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:02:32 compute-1 ceph-mon[81775]: 8.9 scrub ok
Jan 20 14:02:32 compute-1 ceph-mon[81775]: pgmap v320: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:02:32 compute-1 ceph-mon[81775]: 8.10 deep-scrub starts
Jan 20 14:02:32 compute-1 ceph-mon[81775]: 8.10 deep-scrub ok
Jan 20 14:02:33 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:02:33 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:02:33 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:02:33.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:02:33 compute-1 sshd-session[96294]: Accepted publickey for zuul from 192.168.122.30 port 38584 ssh2: ECDSA SHA256:Yw0kyD5N4lqNgr1J3b5cYIIxKFrTRY8zW6kk+n6imz4
Jan 20 14:02:33 compute-1 systemd-logind[783]: New session 36 of user zuul.
Jan 20 14:02:33 compute-1 systemd[1]: Started Session 36 of User zuul.
Jan 20 14:02:34 compute-1 sshd-session[96294]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 20 14:02:34 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:02:34 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:02:34 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:02:34.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:02:34 compute-1 ceph-mon[81775]: 11.19 scrub starts
Jan 20 14:02:34 compute-1 ceph-mon[81775]: 11.19 scrub ok
Jan 20 14:02:34 compute-1 ceph-mon[81775]: 10.13 scrub starts
Jan 20 14:02:34 compute-1 ceph-mon[81775]: 10.13 scrub ok
Jan 20 14:02:34 compute-1 ceph-mon[81775]: pgmap v321: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:02:35 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:02:35 compute-1 python3.9[96447]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 14:02:35 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:02:35 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:02:35 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:02:35.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:02:36 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:02:36 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:02:36 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:02:36.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:02:36 compute-1 python3.9[96603]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 20 14:02:37 compute-1 ceph-mon[81775]: pgmap v322: 321 pgs: 321 active+clean; 455 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:02:37 compute-1 ceph-osd[79119]: log_channel(cluster) log [DBG] : 11.12 scrub starts
Jan 20 14:02:37 compute-1 ceph-osd[79119]: log_channel(cluster) log [DBG] : 11.12 scrub ok
Jan 20 14:02:37 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:02:37 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:02:37 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:02:37.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:02:38 compute-1 python3.9[96796]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 14:02:38 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:02:38 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:02:38 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:02:38.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:02:38 compute-1 ceph-mon[81775]: 11.8 scrub starts
Jan 20 14:02:38 compute-1 ceph-mon[81775]: 11.8 scrub ok
Jan 20 14:02:38 compute-1 ceph-mon[81775]: 11.12 scrub starts
Jan 20 14:02:38 compute-1 ceph-mon[81775]: 11.12 scrub ok
Jan 20 14:02:38 compute-1 ceph-mon[81775]: pgmap v323: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:02:38 compute-1 sshd-session[96297]: Connection closed by 192.168.122.30 port 38584
Jan 20 14:02:38 compute-1 sshd-session[96294]: pam_unix(sshd:session): session closed for user zuul
Jan 20 14:02:38 compute-1 systemd[1]: session-36.scope: Deactivated successfully.
Jan 20 14:02:38 compute-1 systemd[1]: session-36.scope: Consumed 2.741s CPU time.
Jan 20 14:02:38 compute-1 systemd-logind[783]: Session 36 logged out. Waiting for processes to exit.
Jan 20 14:02:38 compute-1 systemd-logind[783]: Removed session 36.
Jan 20 14:02:39 compute-1 sshd-session[96571]: Invalid user helpdesk from 116.99.171.211 port 46748
Jan 20 14:02:39 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:02:39 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:02:39 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:02:39.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:02:39 compute-1 sshd-session[96571]: Connection closed by invalid user helpdesk 116.99.171.211 port 46748 [preauth]
Jan 20 14:02:40 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:02:40 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 20 14:02:40 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:02:40.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 20 14:02:40 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:02:40 compute-1 ceph-mon[81775]: 10.15 deep-scrub starts
Jan 20 14:02:40 compute-1 ceph-mon[81775]: 10.15 deep-scrub ok
Jan 20 14:02:40 compute-1 ceph-mon[81775]: pgmap v324: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:02:41 compute-1 ceph-mon[81775]: 10.14 scrub starts
Jan 20 14:02:41 compute-1 ceph-mon[81775]: 10.14 scrub ok
Jan 20 14:02:41 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:02:41 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:02:41 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:02:41.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:02:42 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:02:42 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:02:42 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:02:42.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:02:42 compute-1 ceph-osd[79119]: log_channel(cluster) log [DBG] : 11.5 scrub starts
Jan 20 14:02:42 compute-1 ceph-osd[79119]: log_channel(cluster) log [DBG] : 11.5 scrub ok
Jan 20 14:02:42 compute-1 ceph-mon[81775]: 8.1f scrub starts
Jan 20 14:02:42 compute-1 ceph-mon[81775]: 8.1f scrub ok
Jan 20 14:02:42 compute-1 ceph-mon[81775]: pgmap v325: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:02:42 compute-1 ceph-mon[81775]: 11.5 scrub starts
Jan 20 14:02:42 compute-1 ceph-mon[81775]: 11.5 scrub ok
Jan 20 14:02:43 compute-1 ceph-osd[79119]: log_channel(cluster) log [DBG] : 11.f scrub starts
Jan 20 14:02:43 compute-1 ceph-osd[79119]: log_channel(cluster) log [DBG] : 11.f scrub ok
Jan 20 14:02:43 compute-1 ceph-mon[81775]: 11.f scrub starts
Jan 20 14:02:43 compute-1 ceph-mon[81775]: 11.f scrub ok
Jan 20 14:02:43 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:02:43 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:02:43 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:02:43.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:02:44 compute-1 sshd-session[96822]: Accepted publickey for zuul from 192.168.122.30 port 52504 ssh2: ECDSA SHA256:Yw0kyD5N4lqNgr1J3b5cYIIxKFrTRY8zW6kk+n6imz4
Jan 20 14:02:44 compute-1 systemd-logind[783]: New session 37 of user zuul.
Jan 20 14:02:44 compute-1 systemd[1]: Started Session 37 of User zuul.
Jan 20 14:02:44 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:02:44 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:02:44 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:02:44.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:02:44 compute-1 sshd-session[96822]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 20 14:02:44 compute-1 ceph-osd[79119]: log_channel(cluster) log [DBG] : 11.4 scrub starts
Jan 20 14:02:44 compute-1 ceph-osd[79119]: log_channel(cluster) log [DBG] : 11.4 scrub ok
Jan 20 14:02:44 compute-1 ceph-mon[81775]: pgmap v326: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:02:44 compute-1 ceph-mon[81775]: 11.4 scrub starts
Jan 20 14:02:44 compute-1 ceph-mon[81775]: 11.4 scrub ok
Jan 20 14:02:45 compute-1 python3.9[96975]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 14:02:45 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:02:45 compute-1 ceph-mon[81775]: 9.19 scrub starts
Jan 20 14:02:45 compute-1 ceph-mon[81775]: 9.19 scrub ok
Jan 20 14:02:45 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:02:45 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:02:45 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:02:45.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:02:46 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:02:46 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:02:46 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:02:46.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:02:46 compute-1 python3.9[97129]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 14:02:46 compute-1 ceph-mon[81775]: pgmap v327: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:02:47 compute-1 sudo[97283]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ghlbnjpcvidhhpbliaschartlnacfweq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917766.833951-81-175849184690649/AnsiballZ_setup.py'
Jan 20 14:02:47 compute-1 sudo[97283]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:02:47 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:02:47 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:02:47 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:02:47.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:02:47 compute-1 python3.9[97285]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 20 14:02:47 compute-1 ceph-mon[81775]: 9.1a scrub starts
Jan 20 14:02:47 compute-1 ceph-mon[81775]: 9.1a scrub ok
Jan 20 14:02:47 compute-1 sudo[97283]: pam_unix(sudo:session): session closed for user root
Jan 20 14:02:48 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:02:48 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:02:48 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:02:48.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:02:48 compute-1 sudo[97367]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xdyhfcgqbzcgeuahlgoeeqtgwqkpxjwu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917766.833951-81-175849184690649/AnsiballZ_dnf.py'
Jan 20 14:02:48 compute-1 sudo[97367]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:02:48 compute-1 python3.9[97369]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 20 14:02:48 compute-1 ceph-mon[81775]: 9.1b scrub starts
Jan 20 14:02:48 compute-1 ceph-mon[81775]: 9.1b scrub ok
Jan 20 14:02:48 compute-1 ceph-mon[81775]: pgmap v328: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:02:49 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:02:49 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:02:49 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:02:49.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:02:49 compute-1 sudo[97367]: pam_unix(sudo:session): session closed for user root
Jan 20 14:02:50 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:02:50 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:02:50 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:02:50.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:02:50 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:02:50 compute-1 sudo[97520]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-awdfusiebxgcqowhxjcotbdwihwyxghu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917770.0420542-117-126978500080060/AnsiballZ_setup.py'
Jan 20 14:02:50 compute-1 sudo[97520]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:02:50 compute-1 python3.9[97522]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 20 14:02:50 compute-1 ceph-mon[81775]: 11.3 scrub starts
Jan 20 14:02:50 compute-1 ceph-mon[81775]: 11.3 scrub ok
Jan 20 14:02:50 compute-1 ceph-mon[81775]: pgmap v329: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:02:51 compute-1 sudo[97520]: pam_unix(sudo:session): session closed for user root
Jan 20 14:02:51 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:02:51 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:02:51 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:02:51.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:02:51 compute-1 ceph-mon[81775]: 9.b scrub starts
Jan 20 14:02:51 compute-1 ceph-mon[81775]: 9.b scrub ok
Jan 20 14:02:52 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:02:52 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:02:52 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:02:52.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:02:52 compute-1 sudo[97715]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pglfeqnmuzsblncgpmofzjhfkqepszjd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917771.5768077-150-273916398451381/AnsiballZ_file.py'
Jan 20 14:02:52 compute-1 sudo[97715]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:02:52 compute-1 ceph-osd[79119]: log_channel(cluster) log [DBG] : 11.1 scrub starts
Jan 20 14:02:52 compute-1 ceph-osd[79119]: log_channel(cluster) log [DBG] : 11.1 scrub ok
Jan 20 14:02:52 compute-1 python3.9[97717]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:02:52 compute-1 sudo[97715]: pam_unix(sudo:session): session closed for user root
Jan 20 14:02:52 compute-1 ceph-mon[81775]: 9.1e scrub starts
Jan 20 14:02:52 compute-1 ceph-mon[81775]: 9.1e scrub ok
Jan 20 14:02:52 compute-1 ceph-mon[81775]: pgmap v330: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:02:52 compute-1 ceph-mon[81775]: 11.1 scrub starts
Jan 20 14:02:52 compute-1 ceph-mon[81775]: 11.1 scrub ok
Jan 20 14:02:53 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:02:53 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:02:53 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:02:53.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:02:53 compute-1 ceph-mon[81775]: 9.1f scrub starts
Jan 20 14:02:53 compute-1 ceph-mon[81775]: 9.1f scrub ok
Jan 20 14:02:53 compute-1 sudo[97867]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pibrsdimkvfwzfbaetemcwcqldgpjsdx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917773.4559755-174-55132942717254/AnsiballZ_command.py'
Jan 20 14:02:53 compute-1 sudo[97867]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:02:54 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:02:54 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:02:54 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:02:54.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:02:54 compute-1 python3.9[97869]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 14:02:54 compute-1 ceph-osd[79119]: log_channel(cluster) log [DBG] : 11.7 scrub starts
Jan 20 14:02:54 compute-1 ceph-osd[79119]: log_channel(cluster) log [DBG] : 11.7 scrub ok
Jan 20 14:02:54 compute-1 sudo[97867]: pam_unix(sudo:session): session closed for user root
Jan 20 14:02:54 compute-1 ceph-mon[81775]: 9.3 scrub starts
Jan 20 14:02:54 compute-1 ceph-mon[81775]: 9.3 scrub ok
Jan 20 14:02:54 compute-1 ceph-mon[81775]: pgmap v331: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:02:54 compute-1 ceph-mon[81775]: 11.7 scrub starts
Jan 20 14:02:54 compute-1 ceph-mon[81775]: 11.7 scrub ok
Jan 20 14:02:55 compute-1 sudo[98030]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dlqzurspyljpejdqxkglqocbfgeddgey ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917774.5502546-198-167280596829232/AnsiballZ_stat.py'
Jan 20 14:02:55 compute-1 sudo[98030]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:02:55 compute-1 python3.9[98032]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:02:55 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:02:55 compute-1 sudo[98030]: pam_unix(sudo:session): session closed for user root
Jan 20 14:02:55 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:02:55 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:02:55 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:02:55.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:02:55 compute-1 sudo[98108]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gqrfqzujolwqjtznymwcssxvliqulngd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917774.5502546-198-167280596829232/AnsiballZ_file.py'
Jan 20 14:02:55 compute-1 sudo[98108]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:02:55 compute-1 python3.9[98110]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/containers/networks/podman.json _original_basename=podman_network_config.j2 recurse=False state=file path=/etc/containers/networks/podman.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:02:55 compute-1 sudo[98108]: pam_unix(sudo:session): session closed for user root
Jan 20 14:02:55 compute-1 ceph-mon[81775]: 9.17 scrub starts
Jan 20 14:02:55 compute-1 ceph-mon[81775]: 9.17 scrub ok
Jan 20 14:02:56 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:02:56 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:02:56 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:02:56.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:02:56 compute-1 sudo[98260]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xkkdvvzhmcgppqcipzzqgdagnlsxvugj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917776.2284002-234-55095213858073/AnsiballZ_stat.py'
Jan 20 14:02:56 compute-1 sudo[98260]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:02:56 compute-1 python3.9[98262]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:02:56 compute-1 ceph-mon[81775]: pgmap v332: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:02:56 compute-1 sudo[98260]: pam_unix(sudo:session): session closed for user root
Jan 20 14:02:57 compute-1 sudo[98338]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bjzwlocbzvdyqolaqqjvogteonrmghlr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917776.2284002-234-55095213858073/AnsiballZ_file.py'
Jan 20 14:02:57 compute-1 sudo[98338]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:02:57 compute-1 python3.9[98340]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf _original_basename=registries.conf.j2 recurse=False state=file path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:02:57 compute-1 sudo[98338]: pam_unix(sudo:session): session closed for user root
Jan 20 14:02:57 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:02:57 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:02:57 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:02:57.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:02:57 compute-1 sudo[98365]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:02:57 compute-1 sudo[98365]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:02:57 compute-1 sudo[98365]: pam_unix(sudo:session): session closed for user root
Jan 20 14:02:57 compute-1 sudo[98390]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:02:57 compute-1 sudo[98390]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:02:57 compute-1 sudo[98390]: pam_unix(sudo:session): session closed for user root
Jan 20 14:02:57 compute-1 sudo[98433]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:02:57 compute-1 sudo[98433]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:02:57 compute-1 sudo[98433]: pam_unix(sudo:session): session closed for user root
Jan 20 14:02:57 compute-1 sudo[98489]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 20 14:02:57 compute-1 sudo[98489]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:02:57 compute-1 ceph-mon[81775]: 9.13 scrub starts
Jan 20 14:02:57 compute-1 ceph-mon[81775]: 9.13 scrub ok
Jan 20 14:02:58 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:02:58 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:02:58 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:02:58.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:02:58 compute-1 sudo[98607]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nbkztxcrdhrusucujepqobwjptunowks ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917777.7709467-273-237709325029067/AnsiballZ_ini_file.py'
Jan 20 14:02:58 compute-1 sudo[98607]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:02:58 compute-1 sudo[98489]: pam_unix(sudo:session): session closed for user root
Jan 20 14:02:58 compute-1 ceph-osd[79119]: log_channel(cluster) log [DBG] : 8.1b scrub starts
Jan 20 14:02:58 compute-1 ceph-osd[79119]: log_channel(cluster) log [DBG] : 8.1b scrub ok
Jan 20 14:02:58 compute-1 python3.9[98611]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:02:58 compute-1 sudo[98607]: pam_unix(sudo:session): session closed for user root
Jan 20 14:02:58 compute-1 sudo[98773]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sfywcvlxzsrkbpcvdgvbvmswykcgrtvf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917778.5883381-273-146859926814172/AnsiballZ_ini_file.py'
Jan 20 14:02:58 compute-1 sudo[98773]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:02:58 compute-1 ceph-mon[81775]: pgmap v333: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:02:58 compute-1 ceph-mon[81775]: 8.1b scrub starts
Jan 20 14:02:58 compute-1 ceph-mon[81775]: 8.1b scrub ok
Jan 20 14:02:59 compute-1 python3.9[98775]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:02:59 compute-1 sudo[98773]: pam_unix(sudo:session): session closed for user root
Jan 20 14:02:59 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:02:59 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:02:59 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:02:59.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:02:59 compute-1 sudo[98925]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wpxrunnfhfyitjhsjqljxcsbqwcmtukv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917779.3012252-273-241304137570987/AnsiballZ_ini_file.py'
Jan 20 14:02:59 compute-1 sudo[98925]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:02:59 compute-1 python3.9[98927]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:02:59 compute-1 sudo[98925]: pam_unix(sudo:session): session closed for user root
Jan 20 14:03:00 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:03:00 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:03:00 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:03:00.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:03:00 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:03:00 compute-1 ceph-mon[81775]: pgmap v334: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:03:00 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:03:00 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:03:00 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:03:00 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 14:03:00 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:03:00 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 14:03:00 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 14:03:00 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:03:00 compute-1 sudo[99079]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vkaamxajdllntltfobdlqpjxcngxzmys ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917780.1126347-273-258704864833826/AnsiballZ_ini_file.py'
Jan 20 14:03:00 compute-1 sudo[99079]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:03:00 compute-1 python3.9[99081]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:03:00 compute-1 sudo[99079]: pam_unix(sudo:session): session closed for user root
Jan 20 14:03:01 compute-1 sshd-session[98928]: Invalid user psybnc from 116.99.171.211 port 38768
Jan 20 14:03:01 compute-1 ceph-osd[79119]: log_channel(cluster) log [DBG] : 8.8 scrub starts
Jan 20 14:03:01 compute-1 ceph-osd[79119]: log_channel(cluster) log [DBG] : 8.8 scrub ok
Jan 20 14:03:01 compute-1 sudo[99231]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-asflrdasymddopnuhzqmhyzpmhfiaple ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917781.1422617-366-71914741082819/AnsiballZ_dnf.py'
Jan 20 14:03:01 compute-1 sudo[99231]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:03:01 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:03:01 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:03:01 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:03:01.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:03:01 compute-1 sshd-session[98928]: Connection closed by invalid user psybnc 116.99.171.211 port 38768 [preauth]
Jan 20 14:03:01 compute-1 python3.9[99233]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 20 14:03:02 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:03:02 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:03:02 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:03:02.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:03:02 compute-1 ceph-mon[81775]: pgmap v335: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:03:02 compute-1 ceph-mon[81775]: 8.8 scrub starts
Jan 20 14:03:02 compute-1 ceph-mon[81775]: 8.8 scrub ok
Jan 20 14:03:02 compute-1 sudo[99231]: pam_unix(sudo:session): session closed for user root
Jan 20 14:03:03 compute-1 sshd-session[99235]: Invalid user belkinstyle from 116.99.171.211 port 38772
Jan 20 14:03:03 compute-1 ceph-mon[81775]: 9.7 scrub starts
Jan 20 14:03:03 compute-1 ceph-mon[81775]: 9.7 scrub ok
Jan 20 14:03:03 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:03:03 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:03:03 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:03:03.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:03:03 compute-1 sshd-session[99235]: Connection closed by invalid user belkinstyle 116.99.171.211 port 38772 [preauth]
Jan 20 14:03:03 compute-1 sudo[99386]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pefdtvkcrxpekxxuhqagaccnsnycidlz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917783.4729366-399-276523322646802/AnsiballZ_setup.py'
Jan 20 14:03:03 compute-1 sudo[99386]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:03:04 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:03:04 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:03:04 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:03:04.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:03:04 compute-1 python3.9[99388]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 14:03:04 compute-1 sudo[99386]: pam_unix(sudo:session): session closed for user root
Jan 20 14:03:04 compute-1 ceph-mon[81775]: pgmap v336: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:03:04 compute-1 ceph-osd[79119]: log_channel(cluster) log [DBG] : 8.19 scrub starts
Jan 20 14:03:04 compute-1 ceph-osd[79119]: log_channel(cluster) log [DBG] : 8.19 scrub ok
Jan 20 14:03:04 compute-1 sudo[99540]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wblcnrdzrzevcmvicrexwddoxzeeydbl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917784.5366917-423-46793280314219/AnsiballZ_stat.py'
Jan 20 14:03:04 compute-1 sudo[99540]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:03:05 compute-1 python3.9[99542]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 20 14:03:05 compute-1 sudo[99540]: pam_unix(sudo:session): session closed for user root
Jan 20 14:03:05 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:03:05 compute-1 ceph-mon[81775]: 8.19 scrub starts
Jan 20 14:03:05 compute-1 ceph-mon[81775]: 8.19 scrub ok
Jan 20 14:03:05 compute-1 ceph-osd[79119]: log_channel(cluster) log [DBG] : 11.1c scrub starts
Jan 20 14:03:05 compute-1 ceph-osd[79119]: log_channel(cluster) log [DBG] : 11.1c scrub ok
Jan 20 14:03:05 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:03:05 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:03:05 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:03:05.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:03:05 compute-1 sudo[99692]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hnkpmuriphckmpmsigllvilvuoicgfil ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917785.4178376-450-48050156220386/AnsiballZ_stat.py'
Jan 20 14:03:05 compute-1 sudo[99692]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:03:05 compute-1 python3.9[99694]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 20 14:03:05 compute-1 sudo[99692]: pam_unix(sudo:session): session closed for user root
Jan 20 14:03:06 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:03:06 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:03:06 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:03:06.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:03:06 compute-1 sudo[99794]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:03:06 compute-1 sudo[99794]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:03:06 compute-1 ceph-mon[81775]: pgmap v337: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:03:06 compute-1 ceph-mon[81775]: 11.1c scrub starts
Jan 20 14:03:06 compute-1 ceph-mon[81775]: 11.1c scrub ok
Jan 20 14:03:06 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:03:06 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:03:06 compute-1 sudo[99794]: pam_unix(sudo:session): session closed for user root
Jan 20 14:03:06 compute-1 sudo[99844]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 14:03:06 compute-1 sudo[99844]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:03:06 compute-1 sudo[99844]: pam_unix(sudo:session): session closed for user root
Jan 20 14:03:06 compute-1 sudo[99892]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mxrcqluegrehqjipbloehpfteickywgi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917786.3705719-480-150996849156082/AnsiballZ_command.py'
Jan 20 14:03:06 compute-1 sudo[99892]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:03:06 compute-1 python3.9[99896]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 14:03:06 compute-1 sudo[99892]: pam_unix(sudo:session): session closed for user root
Jan 20 14:03:07 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:03:07 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:03:07 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:03:07.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:03:07 compute-1 sudo[100047]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dpgyndkwvkrauwbqbpfzchxggbfxdmxh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917787.2923336-510-273447552031171/AnsiballZ_service_facts.py'
Jan 20 14:03:07 compute-1 sudo[100047]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:03:07 compute-1 python3.9[100049]: ansible-service_facts Invoked
Jan 20 14:03:08 compute-1 network[100066]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 20 14:03:08 compute-1 network[100067]: 'network-scripts' will be removed from distribution in near future.
Jan 20 14:03:08 compute-1 network[100068]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 20 14:03:08 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:03:08 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:03:08 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:03:08.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:03:08 compute-1 ceph-osd[79119]: log_channel(cluster) log [DBG] : 11.1a scrub starts
Jan 20 14:03:08 compute-1 ceph-osd[79119]: log_channel(cluster) log [DBG] : 11.1a scrub ok
Jan 20 14:03:08 compute-1 ceph-mon[81775]: 9.5 deep-scrub starts
Jan 20 14:03:08 compute-1 ceph-mon[81775]: 9.5 deep-scrub ok
Jan 20 14:03:08 compute-1 ceph-mon[81775]: pgmap v338: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:03:09 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:03:09 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:03:09 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:03:09.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:03:09 compute-1 ceph-mon[81775]: 11.1a scrub starts
Jan 20 14:03:09 compute-1 ceph-mon[81775]: 11.1a scrub ok
Jan 20 14:03:10 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:03:10 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:03:10 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:03:10.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:03:10 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:03:10 compute-1 ceph-osd[79119]: log_channel(cluster) log [DBG] : 11.1e scrub starts
Jan 20 14:03:10 compute-1 ceph-osd[79119]: log_channel(cluster) log [DBG] : 11.1e scrub ok
Jan 20 14:03:10 compute-1 ceph-mon[81775]: 9.18 scrub starts
Jan 20 14:03:10 compute-1 ceph-mon[81775]: 9.18 scrub ok
Jan 20 14:03:10 compute-1 ceph-mon[81775]: pgmap v339: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:03:11 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:03:11 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:03:11 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:03:11.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:03:11 compute-1 ceph-osd[79119]: log_channel(cluster) log [DBG] : 11.1d scrub starts
Jan 20 14:03:11 compute-1 ceph-osd[79119]: log_channel(cluster) log [DBG] : 11.1d scrub ok
Jan 20 14:03:11 compute-1 ceph-mon[81775]: 11.1e scrub starts
Jan 20 14:03:11 compute-1 ceph-mon[81775]: 11.1e scrub ok
Jan 20 14:03:12 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:03:12 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:03:12 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:03:12.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:03:12 compute-1 sudo[100047]: pam_unix(sudo:session): session closed for user root
Jan 20 14:03:12 compute-1 ceph-osd[79119]: log_channel(cluster) log [DBG] : 8.12 deep-scrub starts
Jan 20 14:03:12 compute-1 ceph-osd[79119]: log_channel(cluster) log [DBG] : 8.12 deep-scrub ok
Jan 20 14:03:12 compute-1 ceph-mon[81775]: 9.8 scrub starts
Jan 20 14:03:12 compute-1 ceph-mon[81775]: 9.8 scrub ok
Jan 20 14:03:12 compute-1 ceph-mon[81775]: pgmap v340: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:03:12 compute-1 ceph-mon[81775]: 11.1d scrub starts
Jan 20 14:03:12 compute-1 ceph-mon[81775]: 11.1d scrub ok
Jan 20 14:03:13 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:03:13 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:03:13 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:03:13.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:03:13 compute-1 ceph-mon[81775]: 8.12 deep-scrub starts
Jan 20 14:03:13 compute-1 ceph-mon[81775]: 8.12 deep-scrub ok
Jan 20 14:03:14 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:03:14 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:03:14 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:03:14.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:03:14 compute-1 ceph-mon[81775]: pgmap v341: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:03:15 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:03:15 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:03:15 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:03:15 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:03:15.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:03:15 compute-1 sudo[100351]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uyzcybksoibkyfnsuxsliwyqtdmmfzua ; /bin/bash /home/zuul/.ansible/tmp/ansible-tmp-1768917795.4433925-555-245309364353439/AnsiballZ_timesync_provider.sh /home/zuul/.ansible/tmp/ansible-tmp-1768917795.4433925-555-245309364353439/args'
Jan 20 14:03:15 compute-1 sudo[100351]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:03:15 compute-1 sudo[100351]: pam_unix(sudo:session): session closed for user root
Jan 20 14:03:16 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:03:16 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:03:16 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:03:16.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:03:16 compute-1 sudo[100518]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ynlzjtkihapblbkteddlisglrhnrrnyn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917796.3243282-588-278504060351789/AnsiballZ_dnf.py'
Jan 20 14:03:16 compute-1 sudo[100518]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:03:16 compute-1 ceph-mon[81775]: pgmap v342: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:03:16 compute-1 ceph-mon[81775]: 9.9 scrub starts
Jan 20 14:03:16 compute-1 python3.9[100520]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 20 14:03:17 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:03:17 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:03:17 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:03:17.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:03:17 compute-1 ceph-osd[79119]: log_channel(cluster) log [DBG] : 8.4 scrub starts
Jan 20 14:03:17 compute-1 ceph-osd[79119]: log_channel(cluster) log [DBG] : 8.4 scrub ok
Jan 20 14:03:17 compute-1 ceph-mon[81775]: 9.9 scrub ok
Jan 20 14:03:17 compute-1 sudo[100518]: pam_unix(sudo:session): session closed for user root
Jan 20 14:03:18 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:03:18 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:03:18 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:03:18.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:03:18 compute-1 ceph-mon[81775]: pgmap v343: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:03:18 compute-1 ceph-mon[81775]: 8.4 scrub starts
Jan 20 14:03:18 compute-1 ceph-mon[81775]: 8.4 scrub ok
Jan 20 14:03:18 compute-1 ceph-mon[81775]: 9.16 scrub starts
Jan 20 14:03:18 compute-1 ceph-mon[81775]: 9.16 scrub ok
Jan 20 14:03:19 compute-1 sudo[100671]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wdpoovesqvobnxedkdynzuloseiclfqw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917798.664064-627-859296489944/AnsiballZ_package_facts.py'
Jan 20 14:03:19 compute-1 sudo[100671]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:03:19 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:03:19 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:03:19 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:03:19.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:03:19 compute-1 python3.9[100673]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Jan 20 14:03:19 compute-1 sudo[100671]: pam_unix(sudo:session): session closed for user root
Jan 20 14:03:19 compute-1 ceph-mon[81775]: 9.1d scrub starts
Jan 20 14:03:19 compute-1 ceph-mon[81775]: 9.1d scrub ok
Jan 20 14:03:20 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:03:20 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:03:20 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:03:20.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:03:20 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:03:20 compute-1 ceph-mon[81775]: pgmap v344: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:03:21 compute-1 sudo[100823]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zmnqinzfdcuxrqlxskqrndjnnhbmjyqy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917800.7851694-658-9713942345495/AnsiballZ_stat.py'
Jan 20 14:03:21 compute-1 sudo[100823]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:03:21 compute-1 python3.9[100825]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:03:21 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:03:21 compute-1 sudo[100823]: pam_unix(sudo:session): session closed for user root
Jan 20 14:03:21 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:03:21 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:03:21.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:03:21 compute-1 sudo[100901]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wgwyujnutdftkqdkkiqcypsrvyhzgxmk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917800.7851694-658-9713942345495/AnsiballZ_file.py'
Jan 20 14:03:21 compute-1 sudo[100901]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:03:22 compute-1 python3.9[100903]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/chrony.conf _original_basename=chrony.conf.j2 recurse=False state=file path=/etc/chrony.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:03:22 compute-1 sudo[100901]: pam_unix(sudo:session): session closed for user root
Jan 20 14:03:22 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:03:22 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:03:22 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:03:22.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:03:22 compute-1 sudo[101053]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-updeotcmudkfdwsosbdagqnmpgbssiyk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917802.510221-694-114055199259253/AnsiballZ_stat.py'
Jan 20 14:03:22 compute-1 sudo[101053]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:03:22 compute-1 ceph-mon[81775]: pgmap v345: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:03:23 compute-1 python3.9[101055]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:03:23 compute-1 sudo[101053]: pam_unix(sudo:session): session closed for user root
Jan 20 14:03:23 compute-1 sudo[101131]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jejprfoecdnpaerlanenwgdjefyeeqss ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917802.510221-694-114055199259253/AnsiballZ_file.py'
Jan 20 14:03:23 compute-1 sudo[101131]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:03:23 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:03:23 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:03:23 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:03:23.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:03:23 compute-1 python3.9[101133]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/chronyd _original_basename=chronyd.sysconfig.j2 recurse=False state=file path=/etc/sysconfig/chronyd force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:03:23 compute-1 sudo[101131]: pam_unix(sudo:session): session closed for user root
Jan 20 14:03:23 compute-1 ceph-osd[79119]: log_channel(cluster) log [DBG] : 11.1b deep-scrub starts
Jan 20 14:03:23 compute-1 ceph-osd[79119]: log_channel(cluster) log [DBG] : 11.1b deep-scrub ok
Jan 20 14:03:24 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:03:24 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:03:24 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:03:24.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:03:24 compute-1 ceph-mon[81775]: pgmap v346: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:03:24 compute-1 ceph-mon[81775]: 11.1b deep-scrub starts
Jan 20 14:03:24 compute-1 ceph-mon[81775]: 11.1b deep-scrub ok
Jan 20 14:03:25 compute-1 sudo[101283]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zzkuflefvujiehtduvptnrhplszyvqik ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917804.6574988-747-156936805474850/AnsiballZ_lineinfile.py'
Jan 20 14:03:25 compute-1 sudo[101283]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:03:25 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:03:25 compute-1 python3.9[101285]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:03:25 compute-1 sudo[101283]: pam_unix(sudo:session): session closed for user root
Jan 20 14:03:25 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:03:25 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:03:25 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:03:25.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:03:26 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:03:26 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:03:26 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:03:26.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:03:26 compute-1 sudo[101435]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ufrzwcwofczyzajxgkwzqjyrepbfgpml ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917806.4761803-792-249633003174330/AnsiballZ_setup.py'
Jan 20 14:03:26 compute-1 sudo[101435]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:03:26 compute-1 ceph-mon[81775]: pgmap v347: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:03:27 compute-1 python3.9[101437]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 20 14:03:27 compute-1 sudo[101435]: pam_unix(sudo:session): session closed for user root
Jan 20 14:03:27 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:03:27 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:03:27 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:03:27.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:03:27 compute-1 ceph-osd[79119]: log_channel(cluster) log [DBG] : 8.18 scrub starts
Jan 20 14:03:27 compute-1 ceph-osd[79119]: log_channel(cluster) log [DBG] : 8.18 scrub ok
Jan 20 14:03:28 compute-1 ceph-mon[81775]: pgmap v348: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:03:28 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:03:28 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:03:28 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:03:28.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:03:28 compute-1 sudo[101519]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-asldicrtpfypduphtrqcugyyficdxxpa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917806.4761803-792-249633003174330/AnsiballZ_systemd.py'
Jan 20 14:03:28 compute-1 sudo[101519]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:03:28 compute-1 python3.9[101521]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 20 14:03:28 compute-1 sudo[101519]: pam_unix(sudo:session): session closed for user root
Jan 20 14:03:29 compute-1 ceph-mon[81775]: 8.18 scrub starts
Jan 20 14:03:29 compute-1 ceph-mon[81775]: 8.18 scrub ok
Jan 20 14:03:29 compute-1 sshd-session[96825]: Connection closed by 192.168.122.30 port 52504
Jan 20 14:03:29 compute-1 sshd-session[96822]: pam_unix(sshd:session): session closed for user zuul
Jan 20 14:03:29 compute-1 systemd[1]: session-37.scope: Deactivated successfully.
Jan 20 14:03:29 compute-1 systemd[1]: session-37.scope: Consumed 26.984s CPU time.
Jan 20 14:03:29 compute-1 systemd-logind[783]: Session 37 logged out. Waiting for processes to exit.
Jan 20 14:03:29 compute-1 systemd-logind[783]: Removed session 37.
Jan 20 14:03:29 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:03:29 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:03:29 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:03:29.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:03:30 compute-1 ceph-mon[81775]: pgmap v349: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:03:30 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:03:30 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:03:30 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:03:30.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:03:30 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:03:31 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:03:31 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:03:31 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:03:31.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:03:32 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:03:32 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:03:32 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:03:32.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:03:32 compute-1 ceph-mon[81775]: pgmap v350: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:03:33 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:03:33 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:03:33 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:03:33.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:03:33 compute-1 ceph-osd[79119]: log_channel(cluster) log [DBG] : 9.e scrub starts
Jan 20 14:03:33 compute-1 ceph-osd[79119]: log_channel(cluster) log [DBG] : 9.e scrub ok
Jan 20 14:03:34 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:03:34 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:03:34 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:03:34.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:03:34 compute-1 sshd-session[101548]: Accepted publickey for zuul from 192.168.122.30 port 33962 ssh2: ECDSA SHA256:Yw0kyD5N4lqNgr1J3b5cYIIxKFrTRY8zW6kk+n6imz4
Jan 20 14:03:34 compute-1 systemd-logind[783]: New session 38 of user zuul.
Jan 20 14:03:34 compute-1 systemd[1]: Started Session 38 of User zuul.
Jan 20 14:03:34 compute-1 sshd-session[101548]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 20 14:03:35 compute-1 sudo[101701]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-slnulkjodfhnrgfuoiauhmvwpeqicgau ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917814.5485017-27-9425934383039/AnsiballZ_file.py'
Jan 20 14:03:35 compute-1 sudo[101701]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:03:35 compute-1 ceph-mon[81775]: pgmap v351: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:03:35 compute-1 ceph-mon[81775]: 9.e scrub starts
Jan 20 14:03:35 compute-1 ceph-mon[81775]: 9.e scrub ok
Jan 20 14:03:35 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:03:35 compute-1 python3.9[101703]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:03:35 compute-1 sudo[101701]: pam_unix(sudo:session): session closed for user root
Jan 20 14:03:35 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:03:35 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:03:35 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:03:35.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:03:36 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:03:36 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:03:36 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:03:36.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:03:36 compute-1 sudo[101853]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cburmhewehayxvjojvfwvxhjhutvuwen ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917815.6758769-63-235917847966846/AnsiballZ_stat.py'
Jan 20 14:03:36 compute-1 sudo[101853]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:03:36 compute-1 python3.9[101855]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:03:36 compute-1 sudo[101853]: pam_unix(sudo:session): session closed for user root
Jan 20 14:03:36 compute-1 ceph-mon[81775]: pgmap v352: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:03:36 compute-1 sudo[101931]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-byggofdbcdxipenolefvrstrppawyqto ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917815.6758769-63-235917847966846/AnsiballZ_file.py'
Jan 20 14:03:36 compute-1 sudo[101931]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:03:36 compute-1 python3.9[101933]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/ceph-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/ceph-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:03:36 compute-1 sudo[101931]: pam_unix(sudo:session): session closed for user root
Jan 20 14:03:37 compute-1 sshd-session[101551]: Connection closed by 192.168.122.30 port 33962
Jan 20 14:03:37 compute-1 sshd-session[101548]: pam_unix(sshd:session): session closed for user zuul
Jan 20 14:03:37 compute-1 systemd[1]: session-38.scope: Deactivated successfully.
Jan 20 14:03:37 compute-1 systemd[1]: session-38.scope: Consumed 1.801s CPU time.
Jan 20 14:03:37 compute-1 systemd-logind[783]: Session 38 logged out. Waiting for processes to exit.
Jan 20 14:03:37 compute-1 systemd-logind[783]: Removed session 38.
Jan 20 14:03:37 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:03:37 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:03:37 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:03:37.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:03:38 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:03:38 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:03:38 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:03:38.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:03:38 compute-1 ceph-mon[81775]: pgmap v353: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:03:39 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:03:39 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:03:39 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:03:39.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:03:40 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:03:40 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:03:40 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:03:40.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:03:40 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:03:40 compute-1 ceph-mon[81775]: pgmap v354: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:03:41 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:03:41 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:03:41 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:03:41.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:03:41 compute-1 ceph-osd[79119]: log_channel(cluster) log [DBG] : 9.6 scrub starts
Jan 20 14:03:41 compute-1 ceph-osd[79119]: log_channel(cluster) log [DBG] : 9.6 scrub ok
Jan 20 14:03:42 compute-1 sshd-session[101959]: Accepted publickey for zuul from 192.168.122.30 port 33678 ssh2: ECDSA SHA256:Yw0kyD5N4lqNgr1J3b5cYIIxKFrTRY8zW6kk+n6imz4
Jan 20 14:03:42 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:03:42 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:03:42 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:03:42.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:03:42 compute-1 systemd-logind[783]: New session 39 of user zuul.
Jan 20 14:03:42 compute-1 systemd[1]: Started Session 39 of User zuul.
Jan 20 14:03:42 compute-1 sshd-session[101959]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 20 14:03:42 compute-1 ceph-mon[81775]: pgmap v355: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:03:42 compute-1 ceph-mon[81775]: 9.6 scrub starts
Jan 20 14:03:42 compute-1 ceph-mon[81775]: 9.6 scrub ok
Jan 20 14:03:42 compute-1 ceph-mon[81775]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #16. Immutable memtables: 0.
Jan 20 14:03:42 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:03:42.813500) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 20 14:03:42 compute-1 ceph-mon[81775]: rocksdb: [db/flush_job.cc:856] [default] [JOB 5] Flushing memtable with next log file: 16
Jan 20 14:03:42 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768917822813653, "job": 5, "event": "flush_started", "num_memtables": 1, "num_entries": 2592, "num_deletes": 251, "total_data_size": 5168477, "memory_usage": 5242656, "flush_reason": "Manual Compaction"}
Jan 20 14:03:42 compute-1 ceph-mon[81775]: rocksdb: [db/flush_job.cc:885] [default] [JOB 5] Level-0 flush table #17: started
Jan 20 14:03:42 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768917822851262, "cf_name": "default", "job": 5, "event": "table_file_creation", "file_number": 17, "file_size": 3372780, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 7000, "largest_seqno": 9587, "table_properties": {"data_size": 3362979, "index_size": 5655, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3077, "raw_key_size": 25988, "raw_average_key_size": 21, "raw_value_size": 3340744, "raw_average_value_size": 2765, "num_data_blocks": 251, "num_entries": 1208, "num_filter_entries": 1208, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768917647, "oldest_key_time": 1768917647, "file_creation_time": 1768917822, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1539d774-8a6f-4e48-b253-137c44586344", "db_session_id": "LFF7G2OZDOU7TKQ8MKAH", "orig_file_number": 17, "seqno_to_time_mapping": "N/A"}}
Jan 20 14:03:42 compute-1 ceph-mon[81775]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 5] Flush lasted 37797 microseconds, and 11542 cpu microseconds.
Jan 20 14:03:42 compute-1 ceph-mon[81775]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 14:03:42 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:03:42.851297) [db/flush_job.cc:967] [default] [JOB 5] Level-0 flush table #17: 3372780 bytes OK
Jan 20 14:03:42 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:03:42.851313) [db/memtable_list.cc:519] [default] Level-0 commit table #17 started
Jan 20 14:03:42 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:03:42.853077) [db/memtable_list.cc:722] [default] Level-0 commit table #17: memtable #1 done
Jan 20 14:03:42 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:03:42.853089) EVENT_LOG_v1 {"time_micros": 1768917822853085, "job": 5, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 20 14:03:42 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:03:42.853105) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 20 14:03:42 compute-1 ceph-mon[81775]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 5] Try to delete WAL files size 5156418, prev total WAL file size 5156418, number of live WAL files 2.
Jan 20 14:03:42 compute-1 ceph-mon[81775]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000013.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 14:03:42 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:03:42.854183) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300323531' seq:72057594037927935, type:22 .. '7061786F7300353033' seq:0, type:0; will stop at (end)
Jan 20 14:03:42 compute-1 ceph-mon[81775]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 6] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 20 14:03:42 compute-1 ceph-mon[81775]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 5 Base level 0, inputs: [17(3293KB)], [15(7541KB)]
Jan 20 14:03:42 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768917822854212, "job": 6, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [17], "files_L6": [15], "score": -1, "input_data_size": 11095206, "oldest_snapshot_seqno": -1}
Jan 20 14:03:42 compute-1 ceph-mon[81775]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 6] Generated table #18: 3799 keys, 9488793 bytes, temperature: kUnknown
Jan 20 14:03:42 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768917822922969, "cf_name": "default", "job": 6, "event": "table_file_creation", "file_number": 18, "file_size": 9488793, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9457817, "index_size": 20370, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9541, "raw_key_size": 91677, "raw_average_key_size": 24, "raw_value_size": 9383694, "raw_average_value_size": 2470, "num_data_blocks": 890, "num_entries": 3799, "num_filter_entries": 3799, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768917474, "oldest_key_time": 0, "file_creation_time": 1768917822, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1539d774-8a6f-4e48-b253-137c44586344", "db_session_id": "LFF7G2OZDOU7TKQ8MKAH", "orig_file_number": 18, "seqno_to_time_mapping": "N/A"}}
Jan 20 14:03:42 compute-1 ceph-mon[81775]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 14:03:42 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:03:42.923176) [db/compaction/compaction_job.cc:1663] [default] [JOB 6] Compacted 1@0 + 1@6 files to L6 => 9488793 bytes
Jan 20 14:03:42 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:03:42.939092) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 161.2 rd, 137.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.2, 7.4 +0.0 blob) out(9.0 +0.0 blob), read-write-amplify(6.1) write-amplify(2.8) OK, records in: 4322, records dropped: 523 output_compression: NoCompression
Jan 20 14:03:42 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:03:42.939148) EVENT_LOG_v1 {"time_micros": 1768917822939127, "job": 6, "event": "compaction_finished", "compaction_time_micros": 68827, "compaction_time_cpu_micros": 20181, "output_level": 6, "num_output_files": 1, "total_output_size": 9488793, "num_input_records": 4322, "num_output_records": 3799, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 20 14:03:42 compute-1 ceph-mon[81775]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000017.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 14:03:42 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768917822940609, "job": 6, "event": "table_file_deletion", "file_number": 17}
Jan 20 14:03:42 compute-1 ceph-mon[81775]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000015.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 14:03:42 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768917822943646, "job": 6, "event": "table_file_deletion", "file_number": 15}
Jan 20 14:03:42 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:03:42.854135) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:03:42 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:03:42.943720) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:03:42 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:03:42.943726) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:03:42 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:03:42.943728) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:03:42 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:03:42.943730) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:03:42 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:03:42.943732) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:03:43 compute-1 python3.9[102112]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 14:03:43 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:03:43 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:03:43 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:03:43.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:03:43 compute-1 ceph-osd[79119]: log_channel(cluster) log [DBG] : 9.a scrub starts
Jan 20 14:03:43 compute-1 ceph-osd[79119]: log_channel(cluster) log [DBG] : 9.a scrub ok
Jan 20 14:03:44 compute-1 ceph-mon[81775]: pgmap v356: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:03:44 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:03:44 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:03:44 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:03:44.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:03:44 compute-1 sudo[102266]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fmizzxrxxlknucpqtkvzbfiwugpbsteg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917823.8447955-60-234424542075277/AnsiballZ_file.py'
Jan 20 14:03:44 compute-1 sudo[102266]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:03:44 compute-1 python3.9[102268]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:03:44 compute-1 sudo[102266]: pam_unix(sudo:session): session closed for user root
Jan 20 14:03:45 compute-1 ceph-mon[81775]: 9.a scrub starts
Jan 20 14:03:45 compute-1 ceph-mon[81775]: 9.a scrub ok
Jan 20 14:03:45 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:03:45 compute-1 sudo[102441]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uvwideilvhhgbcfbbcsmmqkddmxubzeu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917824.8057232-84-143375927850223/AnsiballZ_stat.py'
Jan 20 14:03:45 compute-1 sudo[102441]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:03:45 compute-1 python3.9[102443]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:03:45 compute-1 sudo[102441]: pam_unix(sudo:session): session closed for user root
Jan 20 14:03:45 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:03:45 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:03:45 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:03:45.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:03:45 compute-1 sudo[102519]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rdhuuxzrgebzdfwhyybqorwxdjumzooo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917824.8057232-84-143375927850223/AnsiballZ_file.py'
Jan 20 14:03:45 compute-1 sudo[102519]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:03:46 compute-1 python3.9[102521]: ansible-ansible.legacy.file Invoked with group=zuul mode=0660 owner=zuul dest=/root/.config/containers/auth.json _original_basename=.77r2o6ds recurse=False state=file path=/root/.config/containers/auth.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:03:46 compute-1 sudo[102519]: pam_unix(sudo:session): session closed for user root
Jan 20 14:03:46 compute-1 ceph-mon[81775]: pgmap v357: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:03:46 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:03:46 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:03:46 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:03:46.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:03:47 compute-1 sudo[102671]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vgwvgnhzqzasagthjotsxdgxmvpmwyod ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917826.6280916-144-87420512225542/AnsiballZ_stat.py'
Jan 20 14:03:47 compute-1 sudo[102671]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:03:47 compute-1 python3.9[102673]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:03:47 compute-1 sudo[102671]: pam_unix(sudo:session): session closed for user root
Jan 20 14:03:47 compute-1 sudo[102749]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-okmfdvrocaaaokxbuartoxkdvohojnzo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917826.6280916-144-87420512225542/AnsiballZ_file.py'
Jan 20 14:03:47 compute-1 sudo[102749]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:03:47 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:03:47 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:03:47 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:03:47.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:03:47 compute-1 python3.9[102751]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/podman_drop_in _original_basename=.87u40_y4 recurse=False state=file path=/etc/sysconfig/podman_drop_in force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:03:47 compute-1 sudo[102749]: pam_unix(sudo:session): session closed for user root
Jan 20 14:03:48 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:03:48 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:03:48 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:03:48.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:03:48 compute-1 ceph-mon[81775]: pgmap v358: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:03:48 compute-1 sudo[102901]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vltpyxqxsbtdedhsyzmaamajualaaxys ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917828.0844064-183-16789580807467/AnsiballZ_file.py'
Jan 20 14:03:48 compute-1 sudo[102901]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:03:48 compute-1 python3.9[102903]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:03:48 compute-1 sudo[102901]: pam_unix(sudo:session): session closed for user root
Jan 20 14:03:49 compute-1 sudo[103053]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wyhiejdyzkojstllxqycpkrpjybfhazs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917828.9314206-207-229785119024695/AnsiballZ_stat.py'
Jan 20 14:03:49 compute-1 sudo[103053]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:03:49 compute-1 python3.9[103055]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:03:49 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:03:49 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:03:49 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:03:49.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:03:49 compute-1 sudo[103053]: pam_unix(sudo:session): session closed for user root
Jan 20 14:03:49 compute-1 ceph-osd[79119]: log_channel(cluster) log [DBG] : 9.d scrub starts
Jan 20 14:03:49 compute-1 sudo[103131]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pqhlaggxvvsqmhqyaaurunlzgdwxilma ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917828.9314206-207-229785119024695/AnsiballZ_file.py'
Jan 20 14:03:49 compute-1 sudo[103131]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:03:49 compute-1 ceph-osd[79119]: log_channel(cluster) log [DBG] : 9.d scrub ok
Jan 20 14:03:50 compute-1 python3.9[103133]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:03:50 compute-1 sudo[103131]: pam_unix(sudo:session): session closed for user root
Jan 20 14:03:50 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:03:50 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:03:50 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:03:50.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:03:50 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:03:50 compute-1 ceph-mon[81775]: pgmap v359: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:03:50 compute-1 ceph-mon[81775]: 9.d scrub starts
Jan 20 14:03:50 compute-1 ceph-mon[81775]: 9.d scrub ok
Jan 20 14:03:50 compute-1 sudo[103283]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ullfvvoqqcnksnzvhvwtgcwxhmqerzta ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917830.298167-207-153763321382936/AnsiballZ_stat.py'
Jan 20 14:03:50 compute-1 sudo[103283]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:03:50 compute-1 ceph-osd[79119]: log_channel(cluster) log [DBG] : 9.f deep-scrub starts
Jan 20 14:03:50 compute-1 python3.9[103285]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:03:50 compute-1 ceph-osd[79119]: log_channel(cluster) log [DBG] : 9.f deep-scrub ok
Jan 20 14:03:50 compute-1 sudo[103283]: pam_unix(sudo:session): session closed for user root
Jan 20 14:03:51 compute-1 sudo[103361]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ipanxlntlncbbqjgekpanoetuldhhvme ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917830.298167-207-153763321382936/AnsiballZ_file.py'
Jan 20 14:03:51 compute-1 sudo[103361]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:03:51 compute-1 ceph-mon[81775]: 9.f deep-scrub starts
Jan 20 14:03:51 compute-1 ceph-mon[81775]: 9.f deep-scrub ok
Jan 20 14:03:51 compute-1 python3.9[103363]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:03:51 compute-1 sudo[103361]: pam_unix(sudo:session): session closed for user root
Jan 20 14:03:51 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:03:51 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:03:51 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:03:51.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:03:51 compute-1 ceph-osd[79119]: log_channel(cluster) log [DBG] : 9.10 scrub starts
Jan 20 14:03:51 compute-1 ceph-osd[79119]: log_channel(cluster) log [DBG] : 9.10 scrub ok
Jan 20 14:03:52 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:03:52 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:03:52 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:03:52.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:03:52 compute-1 sudo[103513]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tbdswgmdyjkssvphgdoqcfsfgcymlvio ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917831.8278341-276-28723889117150/AnsiballZ_file.py'
Jan 20 14:03:52 compute-1 sudo[103513]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:03:52 compute-1 python3.9[103515]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:03:52 compute-1 ceph-mon[81775]: pgmap v360: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:03:52 compute-1 ceph-mon[81775]: 9.10 scrub starts
Jan 20 14:03:52 compute-1 ceph-mon[81775]: 9.10 scrub ok
Jan 20 14:03:52 compute-1 sudo[103513]: pam_unix(sudo:session): session closed for user root
Jan 20 14:03:52 compute-1 ceph-osd[79119]: log_channel(cluster) log [DBG] : 9.11 scrub starts
Jan 20 14:03:52 compute-1 ceph-osd[79119]: log_channel(cluster) log [DBG] : 9.11 scrub ok
Jan 20 14:03:52 compute-1 sudo[103666]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eloizoenljcwrggdzqqgddkovqorfspn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917832.6890514-300-97849127784876/AnsiballZ_stat.py'
Jan 20 14:03:52 compute-1 sudo[103666]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:03:53 compute-1 python3.9[103668]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:03:53 compute-1 sudo[103666]: pam_unix(sudo:session): session closed for user root
Jan 20 14:03:53 compute-1 ceph-mon[81775]: 9.11 scrub starts
Jan 20 14:03:53 compute-1 ceph-mon[81775]: 9.11 scrub ok
Jan 20 14:03:53 compute-1 sudo[103744]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tgcjtkmqidsadvqaffsqkrhihsrcryfn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917832.6890514-300-97849127784876/AnsiballZ_file.py'
Jan 20 14:03:53 compute-1 sudo[103744]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:03:53 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:03:53 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:03:53 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:03:53.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:03:53 compute-1 python3.9[103746]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:03:53 compute-1 sudo[103744]: pam_unix(sudo:session): session closed for user root
Jan 20 14:03:54 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:03:54 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:03:54 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:03:54.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:03:54 compute-1 sudo[103896]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nvlhspyvssunmeiczjwpyuodqllooryp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917833.9777522-336-58112586039831/AnsiballZ_stat.py'
Jan 20 14:03:54 compute-1 sudo[103896]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:03:54 compute-1 ceph-mon[81775]: pgmap v361: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:03:54 compute-1 python3.9[103898]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:03:54 compute-1 sudo[103896]: pam_unix(sudo:session): session closed for user root
Jan 20 14:03:54 compute-1 sudo[103974]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bqyjayzvuqfxrrzhcfajftkzeyssuqql ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917833.9777522-336-58112586039831/AnsiballZ_file.py'
Jan 20 14:03:54 compute-1 sudo[103974]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:03:55 compute-1 python3.9[103976]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:03:55 compute-1 sudo[103974]: pam_unix(sudo:session): session closed for user root
Jan 20 14:03:55 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:03:55 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:03:55 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:03:55 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:03:55.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:03:55 compute-1 sudo[104126]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-olsygoeebawsqzoqwdfnrxtpzafbaysc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917835.4310596-372-176545414767541/AnsiballZ_systemd.py'
Jan 20 14:03:55 compute-1 sudo[104126]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:03:56 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:03:56 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:03:56 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:03:56.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:03:56 compute-1 python3.9[104128]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 20 14:03:56 compute-1 systemd[1]: Reloading.
Jan 20 14:03:56 compute-1 systemd-rc-local-generator[104157]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 14:03:56 compute-1 systemd-sysv-generator[104160]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 14:03:56 compute-1 ceph-mon[81775]: pgmap v362: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:03:56 compute-1 sudo[104126]: pam_unix(sudo:session): session closed for user root
Jan 20 14:03:57 compute-1 sudo[104315]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rhhrdhgqehirwjedcyswiaqxrmmbndcy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917837.0821576-396-122257244448132/AnsiballZ_stat.py'
Jan 20 14:03:57 compute-1 sudo[104315]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:03:57 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:03:57 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:03:57 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:03:57.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:03:57 compute-1 python3.9[104317]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:03:57 compute-1 sudo[104315]: pam_unix(sudo:session): session closed for user root
Jan 20 14:03:57 compute-1 sudo[104393]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jzalgtbmyuxzgbqsimtostaaimncyxjo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917837.0821576-396-122257244448132/AnsiballZ_file.py'
Jan 20 14:03:57 compute-1 sudo[104393]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:03:58 compute-1 python3.9[104395]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:03:58 compute-1 sudo[104393]: pam_unix(sudo:session): session closed for user root
Jan 20 14:03:58 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:03:58 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:03:58 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:03:58.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:03:58 compute-1 ceph-mon[81775]: pgmap v363: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:03:58 compute-1 sudo[104545]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hmuwgggdsiwmrhhtiablrttazptayqsn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917838.5358093-432-129424720726065/AnsiballZ_stat.py'
Jan 20 14:03:58 compute-1 sudo[104545]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:03:59 compute-1 python3.9[104547]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:03:59 compute-1 sudo[104545]: pam_unix(sudo:session): session closed for user root
Jan 20 14:03:59 compute-1 sudo[104623]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uvwkwjhdgmplwjtykyklswpfqlorimko ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917838.5358093-432-129424720726065/AnsiballZ_file.py'
Jan 20 14:03:59 compute-1 sudo[104623]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:03:59 compute-1 python3.9[104625]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:03:59 compute-1 sudo[104623]: pam_unix(sudo:session): session closed for user root
Jan 20 14:03:59 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:03:59 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:03:59 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:03:59.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:04:00 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:04:00 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:04:00 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:04:00.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:04:00 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:04:00 compute-1 sudo[104775]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jhmdtxihcvufwnscvhskmiiletjionin ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917840.0083442-468-218677960499626/AnsiballZ_systemd.py'
Jan 20 14:04:00 compute-1 sudo[104775]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:04:00 compute-1 ceph-mon[81775]: pgmap v364: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:04:00 compute-1 python3.9[104777]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 20 14:04:00 compute-1 systemd[1]: Reloading.
Jan 20 14:04:00 compute-1 systemd-rc-local-generator[104806]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 14:04:00 compute-1 systemd-sysv-generator[104811]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 14:04:01 compute-1 systemd[1]: Starting Create netns directory...
Jan 20 14:04:01 compute-1 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Jan 20 14:04:01 compute-1 systemd[1]: netns-placeholder.service: Deactivated successfully.
Jan 20 14:04:01 compute-1 systemd[1]: Finished Create netns directory.
Jan 20 14:04:01 compute-1 sudo[104775]: pam_unix(sudo:session): session closed for user root
Jan 20 14:04:01 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:04:01 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:04:01 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:04:01.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:04:01 compute-1 ceph-osd[79119]: log_channel(cluster) log [DBG] : 9.12 scrub starts
Jan 20 14:04:01 compute-1 ceph-osd[79119]: log_channel(cluster) log [DBG] : 9.12 scrub ok
Jan 20 14:04:01 compute-1 python3.9[104968]: ansible-ansible.builtin.service_facts Invoked
Jan 20 14:04:02 compute-1 network[104985]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 20 14:04:02 compute-1 network[104986]: 'network-scripts' will be removed from distribution in near future.
Jan 20 14:04:02 compute-1 network[104987]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 20 14:04:02 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:04:02 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:04:02 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:04:02.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:04:02 compute-1 ceph-mon[81775]: pgmap v365: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:04:02 compute-1 ceph-mon[81775]: 9.12 scrub starts
Jan 20 14:04:02 compute-1 ceph-mon[81775]: 9.12 scrub ok
Jan 20 14:04:03 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:04:03 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:04:03 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:04:03.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:04:03 compute-1 ceph-osd[79119]: log_channel(cluster) log [DBG] : 9.15 scrub starts
Jan 20 14:04:03 compute-1 ceph-osd[79119]: log_channel(cluster) log [DBG] : 9.15 scrub ok
Jan 20 14:04:04 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:04:04 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:04:04 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:04:04.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:04:04 compute-1 ceph-mon[81775]: pgmap v366: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:04:04 compute-1 ceph-mon[81775]: 9.15 scrub starts
Jan 20 14:04:04 compute-1 ceph-mon[81775]: 9.15 scrub ok
Jan 20 14:04:05 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:04:05 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:04:05 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:04:05 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:04:05.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:04:06 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:04:06 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:04:06 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:04:06.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:04:06 compute-1 ceph-mon[81775]: pgmap v367: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:04:06 compute-1 sudo[105124]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:04:06 compute-1 sudo[105124]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:04:06 compute-1 sudo[105124]: pam_unix(sudo:session): session closed for user root
Jan 20 14:04:07 compute-1 sudo[105149]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:04:07 compute-1 sudo[105149]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:04:07 compute-1 sudo[105149]: pam_unix(sudo:session): session closed for user root
Jan 20 14:04:07 compute-1 sudo[105174]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:04:07 compute-1 sudo[105174]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:04:07 compute-1 sudo[105174]: pam_unix(sudo:session): session closed for user root
Jan 20 14:04:07 compute-1 sudo[105199]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 20 14:04:07 compute-1 sudo[105199]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:04:07 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:04:07 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:04:07 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:04:07.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:04:07 compute-1 sudo[105199]: pam_unix(sudo:session): session closed for user root
Jan 20 14:04:08 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:04:08 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:04:08 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:04:08.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:04:08 compute-1 sshd-session[105000]: Invalid user thomas from 116.99.171.211 port 32790
Jan 20 14:04:08 compute-1 sshd-session[105000]: Connection closed by invalid user thomas 116.99.171.211 port 32790 [preauth]
Jan 20 14:04:08 compute-1 ceph-mon[81775]: pgmap v368: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:04:09 compute-1 sudo[105380]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nhzcxzscveamuckqrrtzbqvlvxddqoir ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917848.7692552-546-186717557652735/AnsiballZ_stat.py'
Jan 20 14:04:09 compute-1 sudo[105380]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:04:09 compute-1 python3.9[105382]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:04:09 compute-1 sudo[105380]: pam_unix(sudo:session): session closed for user root
Jan 20 14:04:09 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:04:09 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:04:09 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:04:09.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:04:09 compute-1 sudo[105458]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xdaylrmetbjgpgodvabcslzdwgytjpeq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917848.7692552-546-186717557652735/AnsiballZ_file.py'
Jan 20 14:04:09 compute-1 sudo[105458]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:04:09 compute-1 python3.9[105460]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/etc/ssh/sshd_config _original_basename=sshd_config_block.j2 recurse=False state=file path=/etc/ssh/sshd_config force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:04:09 compute-1 sudo[105458]: pam_unix(sudo:session): session closed for user root
Jan 20 14:04:10 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:04:10 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:04:10 compute-1 ceph-mon[81775]: pgmap v369: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:04:10 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:04:10 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 14:04:10 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:04:10 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 14:04:10 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 14:04:10 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:04:10 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:04:10 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:04:10 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:04:10.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:04:10 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:04:11 compute-1 sudo[105610]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tlgjrmdmxpgjmzzvqospdmwlqqqimhzf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917850.5659919-585-194029627678289/AnsiballZ_file.py'
Jan 20 14:04:11 compute-1 sudo[105610]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:04:11 compute-1 python3.9[105612]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:04:11 compute-1 sudo[105610]: pam_unix(sudo:session): session closed for user root
Jan 20 14:04:11 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:04:11 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:04:11 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:04:11.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:04:11 compute-1 sudo[105762]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-luuafqlfppdbtzgzedjpszvdbaefrgmz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917851.563894-609-112347214912096/AnsiballZ_stat.py'
Jan 20 14:04:11 compute-1 sudo[105762]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:04:12 compute-1 python3.9[105764]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:04:12 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:04:12 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:04:12 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:04:12.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:04:12 compute-1 sudo[105762]: pam_unix(sudo:session): session closed for user root
Jan 20 14:04:12 compute-1 ceph-mon[81775]: pgmap v370: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:04:12 compute-1 sudo[105840]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zthqnqxqiivvcjrbkzkiftxcsywibguu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917851.563894-609-112347214912096/AnsiballZ_file.py'
Jan 20 14:04:12 compute-1 sudo[105840]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:04:12 compute-1 python3.9[105842]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/var/lib/edpm-config/firewall/sshd-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/sshd-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:04:12 compute-1 sudo[105840]: pam_unix(sudo:session): session closed for user root
Jan 20 14:04:13 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:04:13 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:04:13 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:04:13.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:04:14 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:04:14 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:04:14 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:04:14.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:04:14 compute-1 sudo[105994]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uqcloouidxodfjhwwkvzdqwdyexvlcsg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917853.8239155-654-160906985725262/AnsiballZ_timezone.py'
Jan 20 14:04:14 compute-1 sudo[105994]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:04:14 compute-1 ceph-mon[81775]: pgmap v371: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:04:14 compute-1 python3.9[105996]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Jan 20 14:04:14 compute-1 systemd[1]: Starting Time & Date Service...
Jan 20 14:04:14 compute-1 systemd[1]: Started Time & Date Service.
Jan 20 14:04:14 compute-1 sudo[105994]: pam_unix(sudo:session): session closed for user root
Jan 20 14:04:15 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:04:15 compute-1 sudo[106150]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xuuzwdbkrkvhkuoscnidcyvqgbdsubjt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917855.157195-681-53585621116203/AnsiballZ_file.py'
Jan 20 14:04:15 compute-1 sudo[106150]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:04:15 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:04:15 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:04:15 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:04:15.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:04:15 compute-1 python3.9[106152]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:04:15 compute-1 sudo[106150]: pam_unix(sudo:session): session closed for user root
Jan 20 14:04:16 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:04:16 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:04:16 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:04:16.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:04:16 compute-1 sshd-session[105867]: Invalid user test from 116.99.171.211 port 42284
Jan 20 14:04:16 compute-1 sudo[106302]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dwtwtowbymvskhseazeixrvmxwoumrhy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917856.0535617-705-223279793872205/AnsiballZ_stat.py'
Jan 20 14:04:16 compute-1 sudo[106302]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:04:16 compute-1 ceph-mon[81775]: pgmap v372: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:04:16 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:04:16 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:04:16 compute-1 python3.9[106304]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:04:16 compute-1 sudo[106302]: pam_unix(sudo:session): session closed for user root
Jan 20 14:04:16 compute-1 sshd-session[105867]: Connection closed by invalid user test 116.99.171.211 port 42284 [preauth]
Jan 20 14:04:16 compute-1 sudo[106307]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:04:16 compute-1 sudo[106307]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:04:16 compute-1 sudo[106307]: pam_unix(sudo:session): session closed for user root
Jan 20 14:04:16 compute-1 sudo[106353]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 14:04:16 compute-1 sudo[106353]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:04:16 compute-1 sudo[106353]: pam_unix(sudo:session): session closed for user root
Jan 20 14:04:16 compute-1 sudo[106430]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dmqgxnaclicptwmikgchkynvcgqafurw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917856.0535617-705-223279793872205/AnsiballZ_file.py'
Jan 20 14:04:16 compute-1 sudo[106430]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:04:17 compute-1 python3.9[106432]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:04:17 compute-1 sudo[106430]: pam_unix(sudo:session): session closed for user root
Jan 20 14:04:17 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:04:17 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:04:17 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:04:17.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:04:17 compute-1 sudo[106582]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vjhpurrzrzuguvlepibepxzqjnuywsjd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917857.5348117-742-80117174258337/AnsiballZ_stat.py'
Jan 20 14:04:17 compute-1 sudo[106582]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:04:18 compute-1 python3.9[106584]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:04:18 compute-1 sudo[106582]: pam_unix(sudo:session): session closed for user root
Jan 20 14:04:18 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:04:18 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:04:18 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:04:18.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:04:18 compute-1 sudo[106661]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yvjpwygxsewgzculbqzjabvrodsaifug ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917857.5348117-742-80117174258337/AnsiballZ_file.py'
Jan 20 14:04:18 compute-1 sudo[106661]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:04:18 compute-1 python3.9[106663]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.o89gzocf recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:04:18 compute-1 sudo[106661]: pam_unix(sudo:session): session closed for user root
Jan 20 14:04:18 compute-1 ceph-mon[81775]: pgmap v373: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:04:19 compute-1 sudo[106813]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hjbdpnhrvgzuzjjebqqmnmjerokyjitc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917858.9008183-777-202254800173175/AnsiballZ_stat.py'
Jan 20 14:04:19 compute-1 sudo[106813]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:04:19 compute-1 python3.9[106815]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:04:19 compute-1 sudo[106813]: pam_unix(sudo:session): session closed for user root
Jan 20 14:04:19 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:04:19 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:04:19 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:04:19.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:04:19 compute-1 sudo[106891]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fdzovovvrdzspcjosdjiyieedgdgqnja ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917858.9008183-777-202254800173175/AnsiballZ_file.py'
Jan 20 14:04:19 compute-1 sudo[106891]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:04:19 compute-1 python3.9[106893]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:04:19 compute-1 sudo[106891]: pam_unix(sudo:session): session closed for user root
Jan 20 14:04:20 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:04:20 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:04:20 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:04:20.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:04:20 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:04:20 compute-1 ceph-mon[81775]: pgmap v374: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:04:20 compute-1 sudo[107043]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-frjxvmwskbxtzpdgzmdgzuhlgrayculg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917860.204129-817-242666110727599/AnsiballZ_command.py'
Jan 20 14:04:20 compute-1 sudo[107043]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:04:20 compute-1 python3.9[107045]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 14:04:20 compute-1 sudo[107043]: pam_unix(sudo:session): session closed for user root
Jan 20 14:04:21 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:04:21 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:04:21 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:04:21.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:04:21 compute-1 sudo[107196]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hxogsveedjkjgfxviytojafgdxlazcqj ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1768917861.196401-840-233351301054168/AnsiballZ_edpm_nftables_from_files.py'
Jan 20 14:04:21 compute-1 sudo[107196]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:04:21 compute-1 python3[107198]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Jan 20 14:04:21 compute-1 sudo[107196]: pam_unix(sudo:session): session closed for user root
Jan 20 14:04:22 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:04:22 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:04:22 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:04:22.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:04:22 compute-1 sudo[107348]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vkcrtpkaeizpmcxywfbugiupqwnhdfvf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917862.394329-864-266506447013810/AnsiballZ_stat.py'
Jan 20 14:04:22 compute-1 sudo[107348]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:04:22 compute-1 ceph-mon[81775]: pgmap v375: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:04:22 compute-1 python3.9[107350]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:04:22 compute-1 sudo[107348]: pam_unix(sudo:session): session closed for user root
Jan 20 14:04:23 compute-1 sudo[107426]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fuzzyhkhsrucajsjhdhdzfjmbktwfdak ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917862.394329-864-266506447013810/AnsiballZ_file.py'
Jan 20 14:04:23 compute-1 sudo[107426]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:04:23 compute-1 python3.9[107428]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:04:23 compute-1 sudo[107426]: pam_unix(sudo:session): session closed for user root
Jan 20 14:04:23 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:04:23 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:04:23 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:04:23.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:04:24 compute-1 sudo[107578]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qosnijfznjrshwujgmnqkgbtpjsgqjyu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917863.78342-900-128043648860379/AnsiballZ_stat.py'
Jan 20 14:04:24 compute-1 sudo[107578]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:04:24 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:04:24 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:04:24 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:04:24.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:04:24 compute-1 python3.9[107580]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:04:24 compute-1 sudo[107578]: pam_unix(sudo:session): session closed for user root
Jan 20 14:04:24 compute-1 sudo[107703]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gphblnuogcixovxqwctebdyfxblhwarp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917863.78342-900-128043648860379/AnsiballZ_copy.py'
Jan 20 14:04:24 compute-1 sudo[107703]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:04:25 compute-1 python3.9[107705]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768917863.78342-900-128043648860379/.source.nft follow=False _original_basename=jump-chain.j2 checksum=3ce353c89bce3b135a0ed688d4e338b2efb15185 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:04:25 compute-1 sudo[107703]: pam_unix(sudo:session): session closed for user root
Jan 20 14:04:25 compute-1 ceph-mon[81775]: pgmap v376: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:04:25 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:04:25 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:04:25 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:04:25 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:04:25.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:04:25 compute-1 sudo[107855]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-derazhdglrltdpllkorbuowbfnjuefzw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917865.4151561-946-167550464329626/AnsiballZ_stat.py'
Jan 20 14:04:25 compute-1 sudo[107855]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:04:26 compute-1 python3.9[107857]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:04:26 compute-1 sudo[107855]: pam_unix(sudo:session): session closed for user root
Jan 20 14:04:26 compute-1 ceph-mon[81775]: pgmap v377: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:04:26 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:04:26 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:04:26 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:04:26.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:04:26 compute-1 sudo[107933]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fvlrihholodmjbeldlnsvznslowwzqkg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917865.4151561-946-167550464329626/AnsiballZ_file.py'
Jan 20 14:04:26 compute-1 sudo[107933]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:04:26 compute-1 python3.9[107935]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:04:26 compute-1 sudo[107933]: pam_unix(sudo:session): session closed for user root
Jan 20 14:04:27 compute-1 sudo[108085]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gepjjqbszgaofhzloxifzlfqqmaleokh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917866.8398705-981-857637847773/AnsiballZ_stat.py'
Jan 20 14:04:27 compute-1 sudo[108085]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:04:27 compute-1 python3.9[108087]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:04:27 compute-1 sudo[108085]: pam_unix(sudo:session): session closed for user root
Jan 20 14:04:27 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:04:27 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:04:27 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:04:27.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:04:27 compute-1 sudo[108165]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kthdhctvubzegpgoalgyksdpdtonfsxu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917866.8398705-981-857637847773/AnsiballZ_file.py'
Jan 20 14:04:27 compute-1 sudo[108165]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:04:27 compute-1 python3.9[108167]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:04:27 compute-1 sudo[108165]: pam_unix(sudo:session): session closed for user root
Jan 20 14:04:28 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:04:28 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:04:28 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:04:28.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:04:28 compute-1 ceph-mon[81775]: pgmap v378: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:04:28 compute-1 sshd-session[108088]: Invalid user www from 116.99.171.211 port 38878
Jan 20 14:04:28 compute-1 sudo[108317]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oewajxynwybfylcibsvrnarbvauufwwz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917868.2025435-1017-79358250105524/AnsiballZ_stat.py'
Jan 20 14:04:28 compute-1 sudo[108317]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:04:28 compute-1 python3.9[108319]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:04:28 compute-1 sudo[108317]: pam_unix(sudo:session): session closed for user root
Jan 20 14:04:29 compute-1 sshd-session[108088]: Connection closed by invalid user www 116.99.171.211 port 38878 [preauth]
Jan 20 14:04:29 compute-1 sudo[108395]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-musxrwusgnafgllwglwmrotzcuceakif ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917868.2025435-1017-79358250105524/AnsiballZ_file.py'
Jan 20 14:04:29 compute-1 sudo[108395]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:04:29 compute-1 python3.9[108397]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-rules.nft _original_basename=ruleset.j2 recurse=False state=file path=/etc/nftables/edpm-rules.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:04:29 compute-1 sudo[108395]: pam_unix(sudo:session): session closed for user root
Jan 20 14:04:29 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:04:29 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:04:29 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:04:29.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:04:30 compute-1 sudo[108547]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rleijomjemoicqxxcojnudxsbyjcektf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917869.6945875-1056-97917510320600/AnsiballZ_command.py'
Jan 20 14:04:30 compute-1 sudo[108547]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:04:30 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:04:30 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:04:30 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:04:30.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:04:30 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:04:30 compute-1 python3.9[108549]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 14:04:30 compute-1 sudo[108547]: pam_unix(sudo:session): session closed for user root
Jan 20 14:04:30 compute-1 ceph-mon[81775]: pgmap v379: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:04:31 compute-1 sudo[108702]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-khknzwoqcyouepjddzivtbkxgvnrkhty ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917870.591432-1080-1974565210772/AnsiballZ_blockinfile.py'
Jan 20 14:04:31 compute-1 sudo[108702]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:04:31 compute-1 python3.9[108704]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:04:31 compute-1 sudo[108702]: pam_unix(sudo:session): session closed for user root
Jan 20 14:04:31 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:04:31 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:04:31 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:04:31.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:04:31 compute-1 sudo[108854]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gstzghipdxwaoyknfmbdvfpqrejlisag ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917871.647884-1107-236497882851243/AnsiballZ_file.py'
Jan 20 14:04:31 compute-1 sudo[108854]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:04:32 compute-1 python3.9[108856]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:04:32 compute-1 sudo[108854]: pam_unix(sudo:session): session closed for user root
Jan 20 14:04:32 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:04:32 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:04:32 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:04:32.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:04:32 compute-1 ceph-mon[81775]: pgmap v380: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:04:32 compute-1 sudo[109006]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yipmfviypobabluqgpvpxuqgeaiultiz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917872.3202114-1107-150520228080389/AnsiballZ_file.py'
Jan 20 14:04:32 compute-1 sudo[109006]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:04:32 compute-1 python3.9[109008]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:04:32 compute-1 sudo[109006]: pam_unix(sudo:session): session closed for user root
Jan 20 14:04:33 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:04:33 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:04:33 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:04:33.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:04:33 compute-1 sudo[109158]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ujfqdmrrriodmmahwxwhpmogwzyyeyyb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917872.9907496-1152-235845423340588/AnsiballZ_mount.py'
Jan 20 14:04:33 compute-1 sudo[109158]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:04:33 compute-1 python3.9[109160]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Jan 20 14:04:34 compute-1 sudo[109158]: pam_unix(sudo:session): session closed for user root
Jan 20 14:04:34 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:04:34 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:04:34 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:04:34.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:04:34 compute-1 sudo[109310]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-srawsrytbpunoekzlbugwkqiljefbduw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917874.1575756-1152-258732362617888/AnsiballZ_mount.py'
Jan 20 14:04:34 compute-1 sudo[109310]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:04:34 compute-1 ceph-mon[81775]: pgmap v381: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:04:34 compute-1 python3.9[109312]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Jan 20 14:04:34 compute-1 sudo[109310]: pam_unix(sudo:session): session closed for user root
Jan 20 14:04:35 compute-1 sshd-session[101962]: Connection closed by 192.168.122.30 port 33678
Jan 20 14:04:35 compute-1 sshd-session[101959]: pam_unix(sshd:session): session closed for user zuul
Jan 20 14:04:35 compute-1 systemd[1]: session-39.scope: Deactivated successfully.
Jan 20 14:04:35 compute-1 systemd[1]: session-39.scope: Consumed 34.593s CPU time.
Jan 20 14:04:35 compute-1 systemd-logind[783]: Session 39 logged out. Waiting for processes to exit.
Jan 20 14:04:35 compute-1 systemd-logind[783]: Removed session 39.
Jan 20 14:04:35 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:04:35 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:04:35 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:04:35 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:04:35.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:04:36 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:04:36 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:04:36 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:04:36.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:04:36 compute-1 ceph-mon[81775]: pgmap v382: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:04:37 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:04:37 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:04:37 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:04:37.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:04:38 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:04:38 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:04:38 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:04:38.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:04:38 compute-1 ceph-mon[81775]: pgmap v383: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:04:39 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:04:39 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:04:39 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:04:39.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:04:40 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:04:40 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:04:40 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:04:40.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:04:40 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:04:40 compute-1 ceph-mon[81775]: pgmap v384: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:04:40 compute-1 sshd-session[109337]: Accepted publickey for zuul from 192.168.122.30 port 55542 ssh2: ECDSA SHA256:Yw0kyD5N4lqNgr1J3b5cYIIxKFrTRY8zW6kk+n6imz4
Jan 20 14:04:40 compute-1 systemd-logind[783]: New session 40 of user zuul.
Jan 20 14:04:40 compute-1 systemd[1]: Started Session 40 of User zuul.
Jan 20 14:04:40 compute-1 sshd-session[109337]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 20 14:04:41 compute-1 sudo[109490]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qvhvhdhnkrslletqvlsdqphwozliamhn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917881.0006285-24-21387402027398/AnsiballZ_tempfile.py'
Jan 20 14:04:41 compute-1 sudo[109490]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:04:41 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:04:41 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:04:41 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:04:41.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:04:41 compute-1 python3.9[109492]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Jan 20 14:04:41 compute-1 sudo[109490]: pam_unix(sudo:session): session closed for user root
Jan 20 14:04:42 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:04:42 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:04:42 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:04:42.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:04:42 compute-1 ceph-mon[81775]: pgmap v385: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:04:43 compute-1 sudo[109642]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wogxdqciwvsiplnogcitlbijlqkqkukz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917882.0491815-60-264315156815120/AnsiballZ_stat.py'
Jan 20 14:04:43 compute-1 sudo[109642]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:04:43 compute-1 python3.9[109644]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 20 14:04:43 compute-1 sudo[109642]: pam_unix(sudo:session): session closed for user root
Jan 20 14:04:43 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:04:43 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:04:43 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:04:43.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:04:44 compute-1 sudo[109796]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pmwanpoyrucbgsixuvytfencyfjaxtae ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917883.637315-84-23148940284473/AnsiballZ_slurp.py'
Jan 20 14:04:44 compute-1 sudo[109796]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:04:44 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:04:44 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:04:44 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:04:44.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:04:44 compute-1 python3.9[109798]: ansible-ansible.builtin.slurp Invoked with src=/etc/ssh/ssh_known_hosts
Jan 20 14:04:44 compute-1 sudo[109796]: pam_unix(sudo:session): session closed for user root
Jan 20 14:04:44 compute-1 systemd[1]: systemd-timedated.service: Deactivated successfully.
Jan 20 14:04:45 compute-1 ceph-mon[81775]: pgmap v386: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:04:45 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:04:45 compute-1 sudo[109950]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xfvzuialwtqfnzwjmglonapinwgkfazm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917884.6058173-108-241460058828216/AnsiballZ_stat.py'
Jan 20 14:04:45 compute-1 sudo[109950]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:04:45 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:04:45 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:04:45 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:04:45.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:04:45 compute-1 python3.9[109952]: ansible-ansible.legacy.stat Invoked with path=/tmp/ansible.hvi5p92_ follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:04:45 compute-1 sudo[109950]: pam_unix(sudo:session): session closed for user root
Jan 20 14:04:46 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:04:46 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:04:46 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:04:46.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:04:46 compute-1 ceph-mon[81775]: pgmap v387: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:04:46 compute-1 sudo[110075]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pljnquvyjvidulaxamqcxbqdrghgzhkb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917884.6058173-108-241460058828216/AnsiballZ_copy.py'
Jan 20 14:04:46 compute-1 sudo[110075]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:04:46 compute-1 python3.9[110077]: ansible-ansible.legacy.copy Invoked with dest=/tmp/ansible.hvi5p92_ mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1768917884.6058173-108-241460058828216/.source.hvi5p92_ _original_basename=.v17qguwk follow=False checksum=309fed797bdebad351617b1a1ea9eb224966ee92 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:04:46 compute-1 sudo[110075]: pam_unix(sudo:session): session closed for user root
Jan 20 14:04:47 compute-1 sudo[110227]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mtppboepyracuulenezdvsyxfeyqvdbp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917886.848577-153-268428317148375/AnsiballZ_setup.py'
Jan 20 14:04:47 compute-1 sudo[110227]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:04:47 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:04:47 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:04:47 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:04:47.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:04:47 compute-1 python3.9[110229]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 14:04:47 compute-1 sudo[110227]: pam_unix(sudo:session): session closed for user root
Jan 20 14:04:48 compute-1 ceph-mon[81775]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #19. Immutable memtables: 0.
Jan 20 14:04:48 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:04:48.060283) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 20 14:04:48 compute-1 ceph-mon[81775]: rocksdb: [db/flush_job.cc:856] [default] [JOB 7] Flushing memtable with next log file: 19
Jan 20 14:04:48 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768917888060350, "job": 7, "event": "flush_started", "num_memtables": 1, "num_entries": 870, "num_deletes": 250, "total_data_size": 1771781, "memory_usage": 1798392, "flush_reason": "Manual Compaction"}
Jan 20 14:04:48 compute-1 ceph-mon[81775]: rocksdb: [db/flush_job.cc:885] [default] [JOB 7] Level-0 flush table #20: started
Jan 20 14:04:48 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768917888185120, "cf_name": "default", "job": 7, "event": "table_file_creation", "file_number": 20, "file_size": 757928, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 9593, "largest_seqno": 10457, "table_properties": {"data_size": 754473, "index_size": 1235, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1157, "raw_key_size": 8879, "raw_average_key_size": 20, "raw_value_size": 747181, "raw_average_value_size": 1690, "num_data_blocks": 54, "num_entries": 442, "num_filter_entries": 442, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768917823, "oldest_key_time": 1768917823, "file_creation_time": 1768917888, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1539d774-8a6f-4e48-b253-137c44586344", "db_session_id": "LFF7G2OZDOU7TKQ8MKAH", "orig_file_number": 20, "seqno_to_time_mapping": "N/A"}}
Jan 20 14:04:48 compute-1 ceph-mon[81775]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 7] Flush lasted 124887 microseconds, and 3095 cpu microseconds.
Jan 20 14:04:48 compute-1 ceph-mon[81775]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 14:04:48 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:04:48 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:04:48 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:04:48.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:04:48 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:04:48.185168) [db/flush_job.cc:967] [default] [JOB 7] Level-0 flush table #20: 757928 bytes OK
Jan 20 14:04:48 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:04:48.185188) [db/memtable_list.cc:519] [default] Level-0 commit table #20 started
Jan 20 14:04:48 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:04:48.310387) [db/memtable_list.cc:722] [default] Level-0 commit table #20: memtable #1 done
Jan 20 14:04:48 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:04:48.310453) EVENT_LOG_v1 {"time_micros": 1768917888310423, "job": 7, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 20 14:04:48 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:04:48.310473) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 20 14:04:48 compute-1 ceph-mon[81775]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 7] Try to delete WAL files size 1767301, prev total WAL file size 1767301, number of live WAL files 2.
Jan 20 14:04:48 compute-1 ceph-mon[81775]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000016.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 14:04:48 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:04:48.311252) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740030' seq:72057594037927935, type:22 .. '6D67727374617400323531' seq:0, type:0; will stop at (end)
Jan 20 14:04:48 compute-1 ceph-mon[81775]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 8] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 20 14:04:48 compute-1 ceph-mon[81775]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 7 Base level 0, inputs: [20(740KB)], [18(9266KB)]
Jan 20 14:04:48 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768917888311321, "job": 8, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [20], "files_L6": [18], "score": -1, "input_data_size": 10246721, "oldest_snapshot_seqno": -1}
Jan 20 14:04:48 compute-1 ceph-mon[81775]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 8] Generated table #21: 3753 keys, 7633247 bytes, temperature: kUnknown
Jan 20 14:04:48 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768917888500982, "cf_name": "default", "job": 8, "event": "table_file_creation", "file_number": 21, "file_size": 7633247, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7605484, "index_size": 17285, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9413, "raw_key_size": 91249, "raw_average_key_size": 24, "raw_value_size": 7534954, "raw_average_value_size": 2007, "num_data_blocks": 754, "num_entries": 3753, "num_filter_entries": 3753, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768917474, "oldest_key_time": 0, "file_creation_time": 1768917888, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1539d774-8a6f-4e48-b253-137c44586344", "db_session_id": "LFF7G2OZDOU7TKQ8MKAH", "orig_file_number": 21, "seqno_to_time_mapping": "N/A"}}
Jan 20 14:04:48 compute-1 ceph-mon[81775]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 14:04:48 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:04:48.501331) [db/compaction/compaction_job.cc:1663] [default] [JOB 8] Compacted 1@0 + 1@6 files to L6 => 7633247 bytes
Jan 20 14:04:48 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:04:48.503530) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 54.0 rd, 40.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.7, 9.0 +0.0 blob) out(7.3 +0.0 blob), read-write-amplify(23.6) write-amplify(10.1) OK, records in: 4241, records dropped: 488 output_compression: NoCompression
Jan 20 14:04:48 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:04:48.503565) EVENT_LOG_v1 {"time_micros": 1768917888503549, "job": 8, "event": "compaction_finished", "compaction_time_micros": 189763, "compaction_time_cpu_micros": 34924, "output_level": 6, "num_output_files": 1, "total_output_size": 7633247, "num_input_records": 4241, "num_output_records": 3753, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 20 14:04:48 compute-1 ceph-mon[81775]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000020.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 14:04:48 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768917888504032, "job": 8, "event": "table_file_deletion", "file_number": 20}
Jan 20 14:04:48 compute-1 ceph-mon[81775]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000018.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 14:04:48 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768917888507624, "job": 8, "event": "table_file_deletion", "file_number": 18}
Jan 20 14:04:48 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:04:48.311160) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:04:48 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:04:48.507716) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:04:48 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:04:48.507726) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:04:48 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:04:48.507730) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:04:48 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:04:48.507734) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:04:48 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:04:48.507739) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:04:48 compute-1 sudo[110379]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gibpxspbadfifbjcnzoczgqxugvlpugi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917888.1136324-178-248050823546273/AnsiballZ_blockinfile.py'
Jan 20 14:04:48 compute-1 sudo[110379]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:04:48 compute-1 python3.9[110381]: ansible-ansible.builtin.blockinfile Invoked with block=compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDrCUasX8PhlctvvIb2eE6+Z0hELmfczQ6UoBD+mPtCobptr/s786JmwJ3D8nIoKhlCLVSmhRfbqf1Pm45RUPTEtSuaa6HBDy40dZhTXU34X4KbGfKmur2bp9S/1w83ArKvI8inSqqk2qoMx1l7ECkEgeT+GbFwKfYLnbq5OV4Ms3tzl/uFUC/Xzxs2dbXlhozQiSamcO/a6EObErTvR8PrtaOoLFtTiD/I+oN+rkdBPkBc6r0qT4jS7nU1FOlT96meSZHE7Q1n8pxcy9PEc8w9hFdd1Zj8/WcGIdeEJsekuouK1Lut/sofQLZHyUMWJTcnBjx8BsjGx9NjUHPYUWIw+DZo7lT2QurAPNnaX4rp9ciGV2Bdm3ylNoOu3izNvM1JGTw3xRyYrmyxyWv3Euc35JXa0w07Xrqr+6Ckih0WTLU6q3Rlnrc/grpDC821sHrsljerHipJVOCbZB39LvV6wDDBlqfYZzfqID3dIqlVli4eL12J0K7jr7QAlPRhNf0=
                                             compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOG07miJwhzuA/nm0wvGIorydl2xbBiiDhE7PypnJ/jC
                                             compute-2.ctlplane.example.com,192.168.122.102,compute-2* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKiJpWtps/bRsuEHfak4zDuqPHKOWFLaEA2h86H7tPlrZHR8okAVZWCmY7keO3Ad1DFyffUtJPKv5OvTK91xGO8=
                                             compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC/dEYtIJ/delwiq9xMMctU8myoGU/TKMiFUM+i3BSaGKrC0rujad6qo1LAtjth5aYbBcgBhxy0UEX0oCruQQgc5qDpPmWHmJiAwdQJaDu6GxTRl3PlXF2u4rd0Rz72DAMuCxPSYedeHU91uL4vlrcD95xONWew2wa9lUuqQWdgj8DtqnB9T895BihDk9vFLXAaoGJcYZVGKJmXR8sOzNTFQxefqstVO0/dfbRUyFd0Ukp5v7rTmLxw0Np5WcGMOg9l/iRzWTopxnTRvXpBoGlFCmzNvTG2uH08dJ4FU5Wk9/iSxonuiVJu9DKs8Tp4EajaA4Y6cEuZiMhhqi7vw6zVCQuCmRBpny6Ub1Ag2CesMYgxwOVJO5cHsKh3BzuPFsh1gMgrrZK7v+qfm2r1rhHlPsCWrcnrtUIZa7gyzdFvHytTh/4uyGMgNpbwxkyCxgSN4PleQy2wvxy/DFW+JxCDzI4jK9LFH5aojzEhUtj+P3E7CXL/wRPxDJdfEU6PhTk=
                                             compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEa1zL0TUD00vr72wZq3y4rgtSnctWBvs+gME/0/EAsV
                                             compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBI2WwWe4rQW0CaFwcmci1J5n144T87fcxCH+Y2CVZd5XQ7Cvzlhh1cGNDX81Tng3KgxvKOuz3mdiSCLqx8noiD0=
                                             compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC/3N9PJZXpat0uFh2x2RoV9B0Ih74HU9CPf+g/5HncM7gCVvCpW3CBde1qNDRU2iY9rzpOVPwzi4YzoAUcxB5KAiqZOI9ylmzfiD8JXQ+myLmIRLxHOdXFaEQ4mMp4W+X37hCZ6sdfm6Yqd6eqBuZrM/72ltYoewWBNCG/Hgqzu30L9WC4+BF+iADHT7Qnmvh/cc9U71WxB4h2ikBo1SdGoFCqoez7ajitqx+dw7VWaOtEPliS0LZuDtN3Zt/cBBgxhb/FaAEI3jRP2ej9X0NJW91YxzBygyxiVasslx92g/GmnDFOWVZb5ai/JJsNH6pLTjs25IzvnuWIf8/ZLgZ03zziR4mBLP12CIVF8g1CzaqK1IILDKkjS/dzDiTBefmiQ2+N0i5EEXOgmxchqOqTkFPQg/ar0+0uBPkwzAI0HDk99czhyYHFlO+PhnULVkL1z+XLwHBgOrbNNVQQcJCvady4Gadh66mu1UrLpryNYOgZiugZi67Biha4ZPzPHok=
                                             compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF41dx3BXAuEvQwQNtbUM7rIrbaOLr5CRvYNdDD+UMr9
                                             compute-1.ctlplane.example.com,192.168.122.101,compute-1* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBENFrTpm22/xEaEJMzd7C5WyJttJdK+HK5kxP8/NuvvAQSlLtEulBZnvD/OX5hk3/sDYhPQelj3YsNX1Plw5PJQ=
                                              create=True mode=0644 path=/tmp/ansible.hvi5p92_ state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:04:48 compute-1 sudo[110379]: pam_unix(sudo:session): session closed for user root
Jan 20 14:04:48 compute-1 ceph-mon[81775]: pgmap v388: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:04:49 compute-1 sudo[110531]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hzabotaqjxbfjqgqzwwmjnuljpyimhzg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917889.0309842-202-164855959371147/AnsiballZ_command.py'
Jan 20 14:04:49 compute-1 sudo[110531]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:04:49 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:04:49 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:04:49 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:04:49.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:04:49 compute-1 python3.9[110533]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.hvi5p92_' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 14:04:49 compute-1 sudo[110531]: pam_unix(sudo:session): session closed for user root
Jan 20 14:04:50 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:04:50 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.002000057s ======
Jan 20 14:04:50 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:04:50.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000057s
Jan 20 14:04:50 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:04:50 compute-1 sudo[110685]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-irxpqobmwypkhoibostjswsnbqrsqwlz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917889.959466-226-78516051614786/AnsiballZ_file.py'
Jan 20 14:04:50 compute-1 sudo[110685]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:04:50 compute-1 python3.9[110687]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.hvi5p92_ state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:04:50 compute-1 sudo[110685]: pam_unix(sudo:session): session closed for user root
Jan 20 14:04:50 compute-1 ceph-mon[81775]: pgmap v389: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:04:51 compute-1 sshd-session[109340]: Connection closed by 192.168.122.30 port 55542
Jan 20 14:04:51 compute-1 sshd-session[109337]: pam_unix(sshd:session): session closed for user zuul
Jan 20 14:04:51 compute-1 systemd[1]: session-40.scope: Deactivated successfully.
Jan 20 14:04:51 compute-1 systemd[1]: session-40.scope: Consumed 6.281s CPU time.
Jan 20 14:04:51 compute-1 systemd-logind[783]: Session 40 logged out. Waiting for processes to exit.
Jan 20 14:04:51 compute-1 systemd-logind[783]: Removed session 40.
Jan 20 14:04:51 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:04:51 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:04:51 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:04:51.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:04:52 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:04:52 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:04:52 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:04:52.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:04:52 compute-1 ceph-mon[81775]: pgmap v390: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:04:53 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:04:53 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:04:53 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:04:53.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:04:54 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:04:54 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:04:54 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:04:54.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:04:54 compute-1 ceph-mon[81775]: pgmap v391: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:04:55 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:04:55 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:04:55 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:04:55 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:04:55.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:04:56 compute-1 sshd-session[110713]: Accepted publickey for zuul from 192.168.122.30 port 44906 ssh2: ECDSA SHA256:Yw0kyD5N4lqNgr1J3b5cYIIxKFrTRY8zW6kk+n6imz4
Jan 20 14:04:56 compute-1 systemd-logind[783]: New session 41 of user zuul.
Jan 20 14:04:56 compute-1 systemd[1]: Started Session 41 of User zuul.
Jan 20 14:04:56 compute-1 sshd-session[110713]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 20 14:04:56 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:04:56 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:04:56 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:04:56.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:04:56 compute-1 ceph-mon[81775]: pgmap v392: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:04:57 compute-1 python3.9[110868]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 14:04:57 compute-1 sshd-session[110716]: Invalid user admin from 116.99.171.211 port 41076
Jan 20 14:04:57 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:04:57 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:04:57 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:04:57.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:04:58 compute-1 sshd-session[110716]: Connection closed by invalid user admin 116.99.171.211 port 41076 [preauth]
Jan 20 14:04:58 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:04:58 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:04:58 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:04:58.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:04:58 compute-1 sudo[111022]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kvpvcvpkbelafgydstzcbpvppqvukiuf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917897.6784506-57-221107977639331/AnsiballZ_systemd.py'
Jan 20 14:04:58 compute-1 sudo[111022]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:04:58 compute-1 python3.9[111024]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Jan 20 14:04:58 compute-1 sudo[111022]: pam_unix(sudo:session): session closed for user root
Jan 20 14:04:59 compute-1 sudo[111176]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kmarrithkqcpvsyzwnmjfdxploloqugw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917899.0279102-81-71662904546807/AnsiballZ_systemd.py'
Jan 20 14:04:59 compute-1 sudo[111176]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:04:59 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:04:59 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:04:59 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:04:59.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:04:59 compute-1 python3.9[111178]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 20 14:04:59 compute-1 sudo[111176]: pam_unix(sudo:session): session closed for user root
Jan 20 14:05:00 compute-1 ceph-mon[81775]: pgmap v393: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:05:00 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:05:00 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:05:00 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:05:00.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:05:00 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:05:00 compute-1 sudo[111329]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-drnhkoeoqrhhezrkwvsnpbremtejroly ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917900.1271784-108-203282091051678/AnsiballZ_command.py'
Jan 20 14:05:00 compute-1 sudo[111329]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:05:00 compute-1 python3.9[111331]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 14:05:00 compute-1 sudo[111329]: pam_unix(sudo:session): session closed for user root
Jan 20 14:05:01 compute-1 ceph-mon[81775]: pgmap v394: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:05:01 compute-1 sudo[111483]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ozccagvpcpmhqalnzkwxhcpkjdrpejqd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917901.0868561-132-266632875065439/AnsiballZ_stat.py'
Jan 20 14:05:01 compute-1 sudo[111483]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:05:01 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:05:01 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:05:01 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:05:01.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:05:01 compute-1 python3.9[111485]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 20 14:05:01 compute-1 sudo[111483]: pam_unix(sudo:session): session closed for user root
Jan 20 14:05:02 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:05:02 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:05:02 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:05:02.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:05:02 compute-1 sudo[111635]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-calfncuzxakvuorbthfjzauzbvscofqq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917902.0628285-159-142213015314097/AnsiballZ_file.py'
Jan 20 14:05:02 compute-1 sudo[111635]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:05:02 compute-1 python3.9[111637]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:05:02 compute-1 sudo[111635]: pam_unix(sudo:session): session closed for user root
Jan 20 14:05:03 compute-1 sshd-session[110717]: Connection closed by 192.168.122.30 port 44906
Jan 20 14:05:03 compute-1 sshd-session[110713]: pam_unix(sshd:session): session closed for user zuul
Jan 20 14:05:03 compute-1 systemd[1]: session-41.scope: Deactivated successfully.
Jan 20 14:05:03 compute-1 systemd[1]: session-41.scope: Consumed 4.663s CPU time.
Jan 20 14:05:03 compute-1 systemd-logind[783]: Session 41 logged out. Waiting for processes to exit.
Jan 20 14:05:03 compute-1 systemd-logind[783]: Removed session 41.
Jan 20 14:05:03 compute-1 ceph-mon[81775]: pgmap v395: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:05:03 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:05:03 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:05:03 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:05:03.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:05:04 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:05:04 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:05:04 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:05:04.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:05:05 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:05:05 compute-1 ceph-mon[81775]: pgmap v396: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:05:05 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:05:05 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:05:05 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:05:05.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:05:06 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:05:06 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:05:06 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:05:06.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:05:06 compute-1 ceph-mon[81775]: pgmap v397: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:05:07 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:05:07 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:05:07 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:05:07.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:05:08 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:05:08 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:05:08 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:05:08.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:05:08 compute-1 sshd-session[111662]: Accepted publickey for zuul from 192.168.122.30 port 46648 ssh2: ECDSA SHA256:Yw0kyD5N4lqNgr1J3b5cYIIxKFrTRY8zW6kk+n6imz4
Jan 20 14:05:08 compute-1 systemd-logind[783]: New session 42 of user zuul.
Jan 20 14:05:08 compute-1 systemd[1]: Started Session 42 of User zuul.
Jan 20 14:05:08 compute-1 sshd-session[111662]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 20 14:05:08 compute-1 ceph-mon[81775]: pgmap v398: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:05:09 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:05:09 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:05:09 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:05:09.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:05:09 compute-1 python3.9[111815]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 14:05:10 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:05:10 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:05:10 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:05:10.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:05:10 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:05:10 compute-1 sudo[111969]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ycheswoakhwalburqtffkqktimkqzpsu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917910.3217854-63-252507294518227/AnsiballZ_setup.py'
Jan 20 14:05:10 compute-1 sudo[111969]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:05:10 compute-1 ceph-mon[81775]: pgmap v399: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:05:10 compute-1 python3.9[111971]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 20 14:05:11 compute-1 sudo[111969]: pam_unix(sudo:session): session closed for user root
Jan 20 14:05:11 compute-1 sudo[112053]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-khrgqkpusneupfffjlxypwgvcrjgwptx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917910.3217854-63-252507294518227/AnsiballZ_dnf.py'
Jan 20 14:05:11 compute-1 sudo[112053]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:05:11 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:05:11 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:05:11 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:05:11.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:05:11 compute-1 python3.9[112055]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 20 14:05:12 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:05:12 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:05:12 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:05:12.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:05:12 compute-1 ceph-mon[81775]: pgmap v400: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:05:13 compute-1 sudo[112053]: pam_unix(sudo:session): session closed for user root
Jan 20 14:05:13 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:05:13 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:05:13 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:05:13.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:05:13 compute-1 python3.9[112206]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 14:05:14 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:05:14 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:05:14 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:05:14.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:05:15 compute-1 ceph-mon[81775]: pgmap v401: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:05:15 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:05:15 compute-1 python3.9[112357]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 20 14:05:15 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:05:15 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:05:15 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:05:15.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:05:16 compute-1 python3.9[112507]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 20 14:05:16 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:05:16 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:05:16 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:05:16.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:05:16 compute-1 python3.9[112657]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 20 14:05:16 compute-1 sudo[112658]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:05:16 compute-1 sudo[112658]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:05:16 compute-1 sudo[112658]: pam_unix(sudo:session): session closed for user root
Jan 20 14:05:17 compute-1 sudo[112707]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:05:17 compute-1 sudo[112707]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:05:17 compute-1 sudo[112707]: pam_unix(sudo:session): session closed for user root
Jan 20 14:05:17 compute-1 sudo[112732]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:05:17 compute-1 sudo[112732]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:05:17 compute-1 sudo[112732]: pam_unix(sudo:session): session closed for user root
Jan 20 14:05:17 compute-1 sudo[112757]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 20 14:05:17 compute-1 sudo[112757]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:05:17 compute-1 ceph-mon[81775]: pgmap v402: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:05:17 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:05:17 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:05:17 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:05:17.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:05:17 compute-1 sudo[112757]: pam_unix(sudo:session): session closed for user root
Jan 20 14:05:17 compute-1 sshd-session[111665]: Connection closed by 192.168.122.30 port 46648
Jan 20 14:05:17 compute-1 sshd-session[111662]: pam_unix(sshd:session): session closed for user zuul
Jan 20 14:05:17 compute-1 systemd[1]: session-42.scope: Deactivated successfully.
Jan 20 14:05:17 compute-1 systemd[1]: session-42.scope: Consumed 6.210s CPU time.
Jan 20 14:05:17 compute-1 systemd-logind[783]: Session 42 logged out. Waiting for processes to exit.
Jan 20 14:05:17 compute-1 systemd-logind[783]: Removed session 42.
Jan 20 14:05:18 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:05:18 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:05:18 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:05:18.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:05:18 compute-1 ceph-mon[81775]: pgmap v403: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:05:19 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:05:19 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:05:19 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:05:19.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:05:20 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:05:20 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:05:20 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:05:20.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:05:20 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:05:20 compute-1 ceph-mon[81775]: pgmap v404: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:05:20 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:05:20 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:05:21 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:05:21 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:05:21 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:05:21.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:05:21 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:05:21 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 14:05:21 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:05:21 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 14:05:21 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 14:05:21 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:05:22 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:05:22 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:05:22 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:05:22.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:05:22 compute-1 ceph-mon[81775]: pgmap v405: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:05:23 compute-1 sshd-session[112813]: Accepted publickey for zuul from 192.168.122.30 port 37916 ssh2: ECDSA SHA256:Yw0kyD5N4lqNgr1J3b5cYIIxKFrTRY8zW6kk+n6imz4
Jan 20 14:05:23 compute-1 systemd-logind[783]: New session 43 of user zuul.
Jan 20 14:05:23 compute-1 systemd[1]: Started Session 43 of User zuul.
Jan 20 14:05:23 compute-1 sshd-session[112813]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 20 14:05:23 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:05:23 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:05:23 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:05:23.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:05:24 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:05:24 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:05:24 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:05:24.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:05:24 compute-1 python3.9[112966]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 14:05:25 compute-1 ceph-mon[81775]: pgmap v406: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:05:25 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:05:25 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:05:25 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:05:25 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:05:25.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:05:25 compute-1 sudo[113120]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lzznjxuraazebacyktgkiufosgmecijf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917925.544317-114-153071408790322/AnsiballZ_file.py'
Jan 20 14:05:25 compute-1 sudo[113120]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:05:26 compute-1 python3.9[113122]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:05:26 compute-1 sudo[113120]: pam_unix(sudo:session): session closed for user root
Jan 20 14:05:26 compute-1 ceph-mon[81775]: pgmap v407: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:05:26 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:05:26 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:05:26 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:05:26.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:05:26 compute-1 sudo[113272]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wshwcllgetepmpmfhwfwsqcsiwntzhnz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917926.2011676-114-243418715098994/AnsiballZ_file.py'
Jan 20 14:05:26 compute-1 sudo[113272]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:05:26 compute-1 python3.9[113274]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:05:26 compute-1 sudo[113272]: pam_unix(sudo:session): session closed for user root
Jan 20 14:05:27 compute-1 sudo[113351]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:05:27 compute-1 sudo[113351]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:05:27 compute-1 sudo[113351]: pam_unix(sudo:session): session closed for user root
Jan 20 14:05:27 compute-1 sudo[113376]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 14:05:27 compute-1 sudo[113376]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:05:27 compute-1 sudo[113376]: pam_unix(sudo:session): session closed for user root
Jan 20 14:05:27 compute-1 sudo[113474]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kybbcwizixvzepkztsdeexcqgurskkal ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917926.879435-154-126318282434116/AnsiballZ_stat.py'
Jan 20 14:05:27 compute-1 sudo[113474]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:05:27 compute-1 python3.9[113476]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:05:27 compute-1 sudo[113474]: pam_unix(sudo:session): session closed for user root
Jan 20 14:05:27 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:05:27 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:05:27 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:05:27.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:05:27 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:05:27 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:05:28 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:05:28 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:05:28 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:05:28.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:05:28 compute-1 sudo[113597]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xjbgawnutxtoojivqpgjqbwehgzkoqow ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917926.879435-154-126318282434116/AnsiballZ_copy.py'
Jan 20 14:05:28 compute-1 sudo[113597]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:05:28 compute-1 python3.9[113599]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768917926.879435-154-126318282434116/.source.crt _original_basename=compute-1.ctlplane.example.com-tls.crt follow=False checksum=cdf5e5ce161bbd2dd6884aca648bc9e5c8959a7f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:05:28 compute-1 sudo[113597]: pam_unix(sudo:session): session closed for user root
Jan 20 14:05:28 compute-1 ceph-mon[81775]: pgmap v408: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:05:29 compute-1 sudo[113749]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ozjwfkgdlzkkcrkfpglpscoalfyjfftw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917928.7250347-154-209960238308873/AnsiballZ_stat.py'
Jan 20 14:05:29 compute-1 sudo[113749]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:05:29 compute-1 python3.9[113751]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:05:29 compute-1 sudo[113749]: pam_unix(sudo:session): session closed for user root
Jan 20 14:05:29 compute-1 sudo[113872]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rgwtpbknvheltvbitodedarvrcsutkfr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917928.7250347-154-209960238308873/AnsiballZ_copy.py'
Jan 20 14:05:29 compute-1 sudo[113872]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:05:29 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:05:29 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:05:29 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:05:29.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:05:29 compute-1 python3.9[113874]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768917928.7250347-154-209960238308873/.source.crt _original_basename=compute-1.ctlplane.example.com-ca.crt follow=False checksum=3f5ad343c2ed5cd826e6179427db625573e3eee3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:05:29 compute-1 sudo[113872]: pam_unix(sudo:session): session closed for user root
Jan 20 14:05:30 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:05:30 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:05:30 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:05:30 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:05:30.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:05:30 compute-1 sudo[114024]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pheukevztdkqeyobhdvowibzrkjwqokx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917930.0562701-154-171706720860005/AnsiballZ_stat.py'
Jan 20 14:05:30 compute-1 sudo[114024]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:05:30 compute-1 python3.9[114026]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:05:30 compute-1 sudo[114024]: pam_unix(sudo:session): session closed for user root
Jan 20 14:05:30 compute-1 ceph-mon[81775]: pgmap v409: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:05:31 compute-1 sudo[114147]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zknpvwenkjsqaedbqqbmjeqnnbckbqfj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917930.0562701-154-171706720860005/AnsiballZ_copy.py'
Jan 20 14:05:31 compute-1 sudo[114147]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:05:31 compute-1 python3.9[114149]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768917930.0562701-154-171706720860005/.source.key _original_basename=compute-1.ctlplane.example.com-tls.key follow=False checksum=17aceafee0190f04852bee20eb6589c1f68490ff backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:05:31 compute-1 sudo[114147]: pam_unix(sudo:session): session closed for user root
Jan 20 14:05:31 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:05:31 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:05:31 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:05:31.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:05:31 compute-1 sudo[114299]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-alnihvqavtkfusbkctpscuymckmaykbf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917931.5854623-292-141869886381129/AnsiballZ_file.py'
Jan 20 14:05:31 compute-1 sudo[114299]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:05:32 compute-1 python3.9[114301]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:05:32 compute-1 sudo[114299]: pam_unix(sudo:session): session closed for user root
Jan 20 14:05:32 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:05:32 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:05:32 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:05:32.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:05:32 compute-1 sudo[114451]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ofqoxghopcxtlmtrxluowmtuxwfhebdx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917932.350415-292-177292989725506/AnsiballZ_file.py'
Jan 20 14:05:32 compute-1 sudo[114451]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:05:32 compute-1 python3.9[114453]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:05:32 compute-1 sudo[114451]: pam_unix(sudo:session): session closed for user root
Jan 20 14:05:32 compute-1 ceph-mon[81775]: pgmap v410: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:05:33 compute-1 sudo[114603]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mlzsbgyphujomucvxsuszjhagqodvgmt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917933.0500395-338-6886169923923/AnsiballZ_stat.py'
Jan 20 14:05:33 compute-1 sudo[114603]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:05:33 compute-1 python3.9[114605]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:05:33 compute-1 sudo[114603]: pam_unix(sudo:session): session closed for user root
Jan 20 14:05:33 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:05:33 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:05:33 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:05:33.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:05:34 compute-1 ceph-mon[81775]: pgmap v411: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:05:34 compute-1 sudo[114726]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xbsgitfffwjhejkkqnmhlrfbniuhxdtv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917933.0500395-338-6886169923923/AnsiballZ_copy.py'
Jan 20 14:05:34 compute-1 sudo[114726]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:05:34 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:05:34 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:05:34 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:05:34.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:05:34 compute-1 python3.9[114728]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768917933.0500395-338-6886169923923/.source.crt _original_basename=compute-1.ctlplane.example.com-tls.crt follow=False checksum=46a102fd528a2f124a62aab2927298bdecb62ab2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:05:34 compute-1 sudo[114726]: pam_unix(sudo:session): session closed for user root
Jan 20 14:05:34 compute-1 sudo[114878]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-muteclyobqjubjnbtclcqptgxsiacmyq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917934.563001-338-45907579591933/AnsiballZ_stat.py'
Jan 20 14:05:34 compute-1 sudo[114878]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:05:35 compute-1 python3.9[114880]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:05:35 compute-1 sudo[114878]: pam_unix(sudo:session): session closed for user root
Jan 20 14:05:35 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:05:35 compute-1 sudo[115001]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-erzwpngxngpthumocegsqormnmdsdnmy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917934.563001-338-45907579591933/AnsiballZ_copy.py'
Jan 20 14:05:35 compute-1 sudo[115001]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:05:35 compute-1 python3.9[115003]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768917934.563001-338-45907579591933/.source.crt _original_basename=compute-1.ctlplane.example.com-ca.crt follow=False checksum=dbd41a175def1218d1038733ac1d1fb38abc7be7 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:05:35 compute-1 sudo[115001]: pam_unix(sudo:session): session closed for user root
Jan 20 14:05:35 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:05:35 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:05:35 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:05:35.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:05:36 compute-1 sudo[115153]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jubibivxnjthuwijtxehfkoxexlwkeiw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917935.8038375-338-267169653375519/AnsiballZ_stat.py'
Jan 20 14:05:36 compute-1 sudo[115153]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:05:36 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:05:36 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:05:36 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:05:36.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:05:36 compute-1 python3.9[115155]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:05:36 compute-1 sudo[115153]: pam_unix(sudo:session): session closed for user root
Jan 20 14:05:36 compute-1 ceph-mon[81775]: pgmap v412: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:05:36 compute-1 sudo[115276]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qictqvuikvsubgiqbjrjeveizkjywjyj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917935.8038375-338-267169653375519/AnsiballZ_copy.py'
Jan 20 14:05:36 compute-1 sudo[115276]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:05:37 compute-1 python3.9[115278]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768917935.8038375-338-267169653375519/.source.key _original_basename=compute-1.ctlplane.example.com-tls.key follow=False checksum=fad6f80cf7b7bf75325b82443f1844e21070ed95 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:05:37 compute-1 sudo[115276]: pam_unix(sudo:session): session closed for user root
Jan 20 14:05:37 compute-1 sudo[115428]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tqrsesvuojfmyqfpefyezygvqgsjewhf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917937.2977948-477-30089351468859/AnsiballZ_file.py'
Jan 20 14:05:37 compute-1 sudo[115428]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:05:37 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:05:37 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:05:37 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:05:37.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:05:37 compute-1 python3.9[115430]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:05:37 compute-1 sudo[115428]: pam_unix(sudo:session): session closed for user root
Jan 20 14:05:38 compute-1 ceph-mon[81775]: pgmap v413: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:05:38 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:05:38 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:05:38 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:05:38.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:05:38 compute-1 sudo[115580]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vbmbofpdiqffzrnzaizytdynhvkwamhn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917938.0133314-477-280314209795535/AnsiballZ_file.py'
Jan 20 14:05:38 compute-1 sudo[115580]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:05:38 compute-1 python3.9[115582]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:05:38 compute-1 sudo[115580]: pam_unix(sudo:session): session closed for user root
Jan 20 14:05:39 compute-1 sudo[115732]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-knsqyvevbfpfpvajvqwacdjginescaib ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917938.8323934-524-199955351963314/AnsiballZ_stat.py'
Jan 20 14:05:39 compute-1 sudo[115732]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:05:39 compute-1 python3.9[115734]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:05:39 compute-1 sudo[115732]: pam_unix(sudo:session): session closed for user root
Jan 20 14:05:39 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:05:39 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:05:39 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:05:39.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:05:39 compute-1 sudo[115855]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mncweoituqcnayddkqxgswjdqtngjqjj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917938.8323934-524-199955351963314/AnsiballZ_copy.py'
Jan 20 14:05:39 compute-1 sudo[115855]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:05:40 compute-1 python3.9[115857]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768917938.8323934-524-199955351963314/.source.crt _original_basename=compute-1.ctlplane.example.com-tls.crt follow=False checksum=61b1b75568b1bbf2c537624a8c374194b06efdf8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:05:40 compute-1 sudo[115855]: pam_unix(sudo:session): session closed for user root
Jan 20 14:05:40 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:05:40 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:05:40 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:05:40 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:05:40.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:05:40 compute-1 ceph-mon[81775]: pgmap v414: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:05:40 compute-1 sudo[116007]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cqvqonnljgflsbwhodnobjwgotvufhpr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917940.2346165-524-133499690949212/AnsiballZ_stat.py'
Jan 20 14:05:40 compute-1 sudo[116007]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:05:40 compute-1 python3.9[116009]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:05:40 compute-1 sudo[116007]: pam_unix(sudo:session): session closed for user root
Jan 20 14:05:41 compute-1 sudo[116130]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-naugsuodvuqrpddywfxpqbljvznwquhc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917940.2346165-524-133499690949212/AnsiballZ_copy.py'
Jan 20 14:05:41 compute-1 sudo[116130]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:05:41 compute-1 python3.9[116132]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768917940.2346165-524-133499690949212/.source.crt _original_basename=compute-1.ctlplane.example.com-ca.crt follow=False checksum=dbd41a175def1218d1038733ac1d1fb38abc7be7 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:05:41 compute-1 sudo[116130]: pam_unix(sudo:session): session closed for user root
Jan 20 14:05:41 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:05:41 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:05:41 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:05:41.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:05:42 compute-1 sudo[116282]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xketqhsemgthyavejmqylhtodfczwjtn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917941.633193-524-228759885909605/AnsiballZ_stat.py'
Jan 20 14:05:42 compute-1 sudo[116282]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:05:42 compute-1 python3.9[116284]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:05:42 compute-1 sudo[116282]: pam_unix(sudo:session): session closed for user root
Jan 20 14:05:42 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:05:42 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:05:42 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:05:42.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:05:42 compute-1 ceph-mon[81775]: pgmap v415: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:05:42 compute-1 sudo[116405]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yllrpwubojbwvdywuscitjgyjwyazyzy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917941.633193-524-228759885909605/AnsiballZ_copy.py'
Jan 20 14:05:42 compute-1 sudo[116405]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:05:42 compute-1 python3.9[116407]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768917941.633193-524-228759885909605/.source.key _original_basename=compute-1.ctlplane.example.com-tls.key follow=False checksum=1f5c11b1c9c0e7d13b1d17638556baf1ac5ee1b0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:05:42 compute-1 sudo[116405]: pam_unix(sudo:session): session closed for user root
Jan 20 14:05:43 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:05:43 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:05:43 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:05:43.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:05:44 compute-1 sudo[116557]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zxfoexnapljztebprohsxlecedoeqcla ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917943.6694558-711-275939818668276/AnsiballZ_file.py'
Jan 20 14:05:44 compute-1 sudo[116557]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:05:44 compute-1 python3.9[116559]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:05:44 compute-1 sudo[116557]: pam_unix(sudo:session): session closed for user root
Jan 20 14:05:44 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:05:44 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:05:44 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:05:44.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:05:44 compute-1 ceph-mon[81775]: pgmap v416: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:05:44 compute-1 sudo[116709]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vcsmuuljlydaxyjcvymuvhrkaydvqims ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917944.40919-738-260631789454484/AnsiballZ_stat.py'
Jan 20 14:05:44 compute-1 sudo[116709]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:05:44 compute-1 python3.9[116711]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:05:44 compute-1 sudo[116709]: pam_unix(sudo:session): session closed for user root
Jan 20 14:05:45 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:05:45 compute-1 sudo[116832]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-buxvoyiqcgwrftwbekbkaewbfzxexind ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917944.40919-738-260631789454484/AnsiballZ_copy.py'
Jan 20 14:05:45 compute-1 sudo[116832]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:05:45 compute-1 python3.9[116834]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768917944.40919-738-260631789454484/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=a9ac548cf1fa241f1d1335913ca73d2a10501b24 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:05:45 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:05:45 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:05:45 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:05:45.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:05:45 compute-1 sudo[116832]: pam_unix(sudo:session): session closed for user root
Jan 20 14:05:46 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:05:46 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:05:46 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:05:46.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:05:46 compute-1 sudo[116984]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pfinfdcldjgwfoqqnjdrkjolhstluynw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917946.004738-790-127337476072806/AnsiballZ_file.py'
Jan 20 14:05:46 compute-1 sudo[116984]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:05:46 compute-1 ceph-mon[81775]: pgmap v417: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:05:46 compute-1 python3.9[116986]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:05:46 compute-1 sudo[116984]: pam_unix(sudo:session): session closed for user root
Jan 20 14:05:47 compute-1 sudo[117136]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uzbgvqihngsjfdqoejwwbpbfxjswpcdu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917946.7773268-813-261929788429138/AnsiballZ_stat.py'
Jan 20 14:05:47 compute-1 sudo[117136]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:05:47 compute-1 python3.9[117138]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:05:47 compute-1 sudo[117136]: pam_unix(sudo:session): session closed for user root
Jan 20 14:05:47 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:05:47 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:05:47 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:05:47.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:05:47 compute-1 sudo[117259]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gsggmzmncqucdlgvejmnvymniraiktyi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917946.7773268-813-261929788429138/AnsiballZ_copy.py'
Jan 20 14:05:47 compute-1 sudo[117259]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:05:47 compute-1 python3.9[117261]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768917946.7773268-813-261929788429138/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=a9ac548cf1fa241f1d1335913ca73d2a10501b24 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:05:47 compute-1 sudo[117259]: pam_unix(sudo:session): session closed for user root
Jan 20 14:05:48 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:05:48 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:05:48 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:05:48.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:05:48 compute-1 ceph-mon[81775]: pgmap v418: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:05:48 compute-1 sudo[117411]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gjpcwrodamagkomwgrfiacdcnqivjhkg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917948.2724652-862-162451938525711/AnsiballZ_file.py'
Jan 20 14:05:48 compute-1 sudo[117411]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:05:48 compute-1 python3.9[117413]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-metadata setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:05:48 compute-1 sudo[117411]: pam_unix(sudo:session): session closed for user root
Jan 20 14:05:49 compute-1 sudo[117563]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-blcvofcghnjjajqrfnmzyuynsakfyzhc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917949.1093905-888-26214285001874/AnsiballZ_stat.py'
Jan 20 14:05:49 compute-1 sudo[117563]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:05:49 compute-1 python3.9[117565]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:05:49 compute-1 sudo[117563]: pam_unix(sudo:session): session closed for user root
Jan 20 14:05:49 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:05:49 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:05:49 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:05:49.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:05:50 compute-1 sudo[117686]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vtywjjkwnljtqqkhiukyewoxohadycfe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917949.1093905-888-26214285001874/AnsiballZ_copy.py'
Jan 20 14:05:50 compute-1 sudo[117686]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:05:50 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:05:50 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:05:50 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:05:50 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:05:50.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:05:50 compute-1 python3.9[117688]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768917949.1093905-888-26214285001874/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=a9ac548cf1fa241f1d1335913ca73d2a10501b24 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:05:50 compute-1 sudo[117686]: pam_unix(sudo:session): session closed for user root
Jan 20 14:05:50 compute-1 ceph-mon[81775]: pgmap v419: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:05:51 compute-1 sudo[117838]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-raadsornpagfsqzbaqjndvcifnlbupce ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917950.6584687-935-149842928513118/AnsiballZ_file.py'
Jan 20 14:05:51 compute-1 sudo[117838]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:05:51 compute-1 python3.9[117840]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/bootstrap setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:05:51 compute-1 sudo[117838]: pam_unix(sudo:session): session closed for user root
Jan 20 14:05:51 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:05:51 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:05:51 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:05:51.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:05:51 compute-1 sudo[117990]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-niofeamddmehufwuwgduceajlptlgoaa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917951.448817-960-143006293386587/AnsiballZ_stat.py'
Jan 20 14:05:51 compute-1 sudo[117990]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:05:51 compute-1 python3.9[117992]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:05:51 compute-1 sudo[117990]: pam_unix(sudo:session): session closed for user root
Jan 20 14:05:52 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:05:52 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:05:52 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:05:52.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:05:52 compute-1 sudo[118113]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tnsbtipzdmpfxauxdzjregqjeunkoqxt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917951.448817-960-143006293386587/AnsiballZ_copy.py'
Jan 20 14:05:52 compute-1 sudo[118113]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:05:52 compute-1 ceph-mon[81775]: pgmap v420: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:05:52 compute-1 python3.9[118115]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768917951.448817-960-143006293386587/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=a9ac548cf1fa241f1d1335913ca73d2a10501b24 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:05:52 compute-1 sudo[118113]: pam_unix(sudo:session): session closed for user root
Jan 20 14:05:53 compute-1 sudo[118265]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ygvsdlzmrwntodwzsnbnvlldgghvusve ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917952.894227-1009-134203920348688/AnsiballZ_file.py'
Jan 20 14:05:53 compute-1 sudo[118265]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:05:53 compute-1 python3.9[118267]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/repo-setup setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:05:53 compute-1 sudo[118265]: pam_unix(sudo:session): session closed for user root
Jan 20 14:05:53 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:05:53 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:05:53 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:05:53.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:05:54 compute-1 sudo[118417]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yedvjpmghnuoxzbxchkfsmsauibstdcf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917953.6655827-1033-2638052903708/AnsiballZ_stat.py'
Jan 20 14:05:54 compute-1 sudo[118417]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:05:54 compute-1 python3.9[118419]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:05:54 compute-1 sudo[118417]: pam_unix(sudo:session): session closed for user root
Jan 20 14:05:54 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:05:54 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:05:54 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:05:54.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:05:54 compute-1 ceph-mon[81775]: pgmap v421: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:05:54 compute-1 sudo[118540]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-snraycojeynsgtselzbssxmhekfyphvt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917953.6655827-1033-2638052903708/AnsiballZ_copy.py'
Jan 20 14:05:54 compute-1 sudo[118540]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:05:54 compute-1 python3.9[118542]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768917953.6655827-1033-2638052903708/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=a9ac548cf1fa241f1d1335913ca73d2a10501b24 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:05:54 compute-1 sudo[118540]: pam_unix(sudo:session): session closed for user root
Jan 20 14:05:55 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:05:55 compute-1 sudo[118692]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dipoqvjjcyolrpkdepjtqrhqtpqcwkzv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917955.2270632-1075-233468836890406/AnsiballZ_file.py'
Jan 20 14:05:55 compute-1 sudo[118692]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:05:55 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:05:55 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:05:55 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:05:55.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:05:55 compute-1 python3.9[118694]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:05:55 compute-1 sudo[118692]: pam_unix(sudo:session): session closed for user root
Jan 20 14:05:56 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:05:56 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:05:56 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:05:56.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:05:56 compute-1 sudo[118844]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qlssrewktdpgfefryqenwjiptiksoayx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917956.1103356-1093-147307258798727/AnsiballZ_stat.py'
Jan 20 14:05:56 compute-1 sudo[118844]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:05:56 compute-1 ceph-mon[81775]: pgmap v422: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:05:56 compute-1 python3.9[118846]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:05:56 compute-1 sudo[118844]: pam_unix(sudo:session): session closed for user root
Jan 20 14:05:57 compute-1 sudo[118967]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rdccwkhkhicsbflnaseeuoknbtnovytl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917956.1103356-1093-147307258798727/AnsiballZ_copy.py'
Jan 20 14:05:57 compute-1 sudo[118967]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:05:57 compute-1 python3.9[118969]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768917956.1103356-1093-147307258798727/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=a9ac548cf1fa241f1d1335913ca73d2a10501b24 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:05:57 compute-1 sudo[118967]: pam_unix(sudo:session): session closed for user root
Jan 20 14:05:57 compute-1 sshd-session[112816]: Connection closed by 192.168.122.30 port 37916
Jan 20 14:05:57 compute-1 sshd-session[112813]: pam_unix(sshd:session): session closed for user zuul
Jan 20 14:05:57 compute-1 systemd[1]: session-43.scope: Deactivated successfully.
Jan 20 14:05:57 compute-1 systemd[1]: session-43.scope: Consumed 27.058s CPU time.
Jan 20 14:05:57 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:05:57 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:05:57 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:05:57.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:05:57 compute-1 systemd-logind[783]: Session 43 logged out. Waiting for processes to exit.
Jan 20 14:05:57 compute-1 systemd-logind[783]: Removed session 43.
Jan 20 14:05:58 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:05:58 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:05:58 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:05:58.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:05:58 compute-1 ceph-mon[81775]: pgmap v423: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:05:59 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:05:59 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:05:59 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:05:59.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:06:00 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:06:00 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:06:00 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:06:00 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:06:00.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:06:00 compute-1 ceph-mon[81775]: pgmap v424: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:06:01 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:06:01 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:06:01 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:06:01.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:06:02 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:06:02 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:06:02 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:06:02.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:06:02 compute-1 ceph-mon[81775]: pgmap v425: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:06:02 compute-1 sshd-session[118994]: Accepted publickey for zuul from 192.168.122.30 port 33718 ssh2: ECDSA SHA256:Yw0kyD5N4lqNgr1J3b5cYIIxKFrTRY8zW6kk+n6imz4
Jan 20 14:06:02 compute-1 systemd-logind[783]: New session 44 of user zuul.
Jan 20 14:06:02 compute-1 systemd[1]: Started Session 44 of User zuul.
Jan 20 14:06:02 compute-1 sshd-session[118994]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 20 14:06:03 compute-1 sudo[119147]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rumvkxakkohknqfucgvnbdmjumvshhri ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917962.8871264-27-120025300501118/AnsiballZ_file.py'
Jan 20 14:06:03 compute-1 sudo[119147]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:06:03 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:06:03 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:06:03 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:06:03.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:06:03 compute-1 python3.9[119149]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:06:03 compute-1 sudo[119147]: pam_unix(sudo:session): session closed for user root
Jan 20 14:06:04 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:06:04 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:06:04 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:06:04.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:06:04 compute-1 sudo[119299]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ngxahthgtlanxsdiyejzkqdkmqzetaou ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917964.0934212-63-218274905383054/AnsiballZ_stat.py'
Jan 20 14:06:04 compute-1 sudo[119299]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:06:04 compute-1 ceph-mon[81775]: pgmap v426: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:06:04 compute-1 python3.9[119301]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:06:04 compute-1 sudo[119299]: pam_unix(sudo:session): session closed for user root
Jan 20 14:06:05 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:06:05 compute-1 sudo[119422]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-orpfawvitbtmvzjjkfafjvrfunorletz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917964.0934212-63-218274905383054/AnsiballZ_copy.py'
Jan 20 14:06:05 compute-1 sudo[119422]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:06:05 compute-1 python3.9[119424]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1768917964.0934212-63-218274905383054/.source.conf _original_basename=ceph.conf follow=False checksum=906e2ddae7738a5e2d5bcdd5b659f6884e758b17 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:06:05 compute-1 sudo[119422]: pam_unix(sudo:session): session closed for user root
Jan 20 14:06:05 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:06:05 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:06:05 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:06:05.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:06:06 compute-1 sudo[119574]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rqlfvnfdpejoorgqeahkbohxwsdnnfev ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917965.7453165-63-108960511882782/AnsiballZ_stat.py'
Jan 20 14:06:06 compute-1 sudo[119574]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:06:06 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:06:06 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:06:06 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:06:06.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:06:06 compute-1 python3.9[119576]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:06:06 compute-1 sudo[119574]: pam_unix(sudo:session): session closed for user root
Jan 20 14:06:06 compute-1 ceph-mon[81775]: pgmap v427: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:06:06 compute-1 sudo[119697]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wautqckwkqqtjltqgengbhymafcjdutj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917965.7453165-63-108960511882782/AnsiballZ_copy.py'
Jan 20 14:06:06 compute-1 sudo[119697]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:06:07 compute-1 python3.9[119699]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1768917965.7453165-63-108960511882782/.source.keyring _original_basename=ceph.client.openstack.keyring follow=False checksum=ddae6cb53c02baaa87ed0e28941db377a2638775 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:06:07 compute-1 sudo[119697]: pam_unix(sudo:session): session closed for user root
Jan 20 14:06:07 compute-1 sshd-session[118997]: Connection closed by 192.168.122.30 port 33718
Jan 20 14:06:07 compute-1 sshd-session[118994]: pam_unix(sshd:session): session closed for user zuul
Jan 20 14:06:07 compute-1 systemd[1]: session-44.scope: Deactivated successfully.
Jan 20 14:06:07 compute-1 systemd[1]: session-44.scope: Consumed 3.525s CPU time.
Jan 20 14:06:07 compute-1 systemd-logind[783]: Session 44 logged out. Waiting for processes to exit.
Jan 20 14:06:07 compute-1 systemd-logind[783]: Removed session 44.
Jan 20 14:06:07 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:06:07 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:06:07 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:06:07.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:06:08 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:06:08 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:06:08 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:06:08.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:06:08 compute-1 ceph-mon[81775]: pgmap v428: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:06:09 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:06:09 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:06:09 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:06:09.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:06:10 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:06:10 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:06:10 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:06:10 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:06:10.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:06:10 compute-1 ceph-mon[81775]: pgmap v429: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:06:11 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:06:11 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:06:11 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:06:11.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:06:12 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:06:12 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:06:12 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:06:12.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:06:12 compute-1 ceph-mon[81775]: pgmap v430: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:06:13 compute-1 sshd-session[119724]: Accepted publickey for zuul from 192.168.122.30 port 35638 ssh2: ECDSA SHA256:Yw0kyD5N4lqNgr1J3b5cYIIxKFrTRY8zW6kk+n6imz4
Jan 20 14:06:13 compute-1 systemd-logind[783]: New session 45 of user zuul.
Jan 20 14:06:13 compute-1 systemd[1]: Started Session 45 of User zuul.
Jan 20 14:06:13 compute-1 sshd-session[119724]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 20 14:06:13 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:06:13 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:06:13 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:06:13.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:06:14 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:06:14 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:06:14 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:06:14.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:06:14 compute-1 ceph-mon[81775]: pgmap v431: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:06:14 compute-1 python3.9[119877]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 14:06:15 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:06:15 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:06:15 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:06:15 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:06:15.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:06:16 compute-1 sudo[120031]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pkgiaximravukxciiremymojjnlzxaze ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917975.630385-63-271882741010889/AnsiballZ_file.py'
Jan 20 14:06:16 compute-1 sudo[120031]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:06:16 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:06:16 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:06:16 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:06:16.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:06:16 compute-1 python3.9[120033]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:06:16 compute-1 sudo[120031]: pam_unix(sudo:session): session closed for user root
Jan 20 14:06:16 compute-1 ceph-mon[81775]: pgmap v432: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:06:16 compute-1 sudo[120183]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kffdyjpxfghqofnrxzwqmrhmeibjdmmr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917976.6504061-63-38033813168075/AnsiballZ_file.py'
Jan 20 14:06:16 compute-1 sudo[120183]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:06:17 compute-1 python3.9[120185]: ansible-ansible.builtin.file Invoked with group=openvswitch owner=openvswitch path=/var/lib/openvswitch/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:06:17 compute-1 sudo[120183]: pam_unix(sudo:session): session closed for user root
Jan 20 14:06:17 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:06:17 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:06:17 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:06:17.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:06:18 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:06:18 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:06:18 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:06:18.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:06:18 compute-1 python3.9[120335]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 14:06:18 compute-1 ceph-mon[81775]: pgmap v433: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:06:19 compute-1 sudo[120485]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dflzaditrpusmrlqjgouavcnhourozma ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917979.1732728-132-275365551803457/AnsiballZ_seboolean.py'
Jan 20 14:06:19 compute-1 sudo[120485]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:06:19 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:06:19 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:06:19 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:06:19.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:06:19 compute-1 python3.9[120487]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Jan 20 14:06:20 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:06:20 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:06:20 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:06:20 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:06:20.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:06:20 compute-1 ceph-mon[81775]: pgmap v434: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:06:21 compute-1 sudo[120485]: pam_unix(sudo:session): session closed for user root
Jan 20 14:06:21 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:06:21 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:06:21 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:06:21.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:06:22 compute-1 sudo[120641]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oauexyaypabrzfvnugpoidloxqweupbb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917981.6021833-162-179612308655090/AnsiballZ_setup.py'
Jan 20 14:06:22 compute-1 dbus-broker-launch[771]: avc:  op=load_policy lsm=selinux seqno=9 res=1
Jan 20 14:06:22 compute-1 sudo[120641]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:06:22 compute-1 python3.9[120643]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 20 14:06:22 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:06:22 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:06:22 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:06:22.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:06:22 compute-1 sudo[120641]: pam_unix(sudo:session): session closed for user root
Jan 20 14:06:22 compute-1 ceph-mon[81775]: pgmap v435: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:06:23 compute-1 sudo[120725]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rirxcwumujunluimanqmhwcgtgmjhovz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917981.6021833-162-179612308655090/AnsiballZ_dnf.py'
Jan 20 14:06:23 compute-1 sudo[120725]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:06:23 compute-1 python3.9[120727]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 20 14:06:23 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:06:23 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:06:23 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:06:23.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:06:24 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:06:24 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:06:24 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:06:24.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:06:24 compute-1 sudo[120725]: pam_unix(sudo:session): session closed for user root
Jan 20 14:06:24 compute-1 ceph-mon[81775]: pgmap v436: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:06:25 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:06:25 compute-1 sudo[120878]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eiixowjxoqnbulzmlsnjnmlnvbnskqhh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917984.7067993-198-189095859214994/AnsiballZ_systemd.py'
Jan 20 14:06:25 compute-1 sudo[120878]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:06:25 compute-1 python3.9[120880]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 20 14:06:25 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:06:25 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:06:25 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:06:25.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:06:25 compute-1 sudo[120878]: pam_unix(sudo:session): session closed for user root
Jan 20 14:06:26 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:06:26 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:06:26 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:06:26.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:06:26 compute-1 sudo[121033]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sqjxjapfgbpyoypgggvegtjehmqvnuts ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1768917986.149812-222-126636736061292/AnsiballZ_edpm_nftables_snippet.py'
Jan 20 14:06:26 compute-1 sudo[121033]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:06:26 compute-1 python3[121035]: ansible-osp.edpm.edpm_nftables_snippet Invoked with content=- rule_name: 118 neutron vxlan networks
                                             rule:
                                               proto: udp
                                               dport: 4789
                                           - rule_name: 119 neutron geneve networks
                                             rule:
                                               proto: udp
                                               dport: 6081
                                               state: ["UNTRACKED"]
                                           - rule_name: 120 neutron geneve networks no conntrack
                                             rule:
                                               proto: udp
                                               dport: 6081
                                               table: raw
                                               chain: OUTPUT
                                               jump: NOTRACK
                                               action: append
                                               state: []
                                           - rule_name: 121 neutron geneve networks no conntrack
                                             rule:
                                               proto: udp
                                               dport: 6081
                                               table: raw
                                               chain: PREROUTING
                                               jump: NOTRACK
                                               action: append
                                               state: []
                                            dest=/var/lib/edpm-config/firewall/ovn.yaml state=present
Jan 20 14:06:26 compute-1 sudo[121033]: pam_unix(sudo:session): session closed for user root
Jan 20 14:06:27 compute-1 ceph-mon[81775]: pgmap v437: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:06:27 compute-1 sudo[121078]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:06:27 compute-1 sudo[121078]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:06:27 compute-1 sudo[121078]: pam_unix(sudo:session): session closed for user root
Jan 20 14:06:27 compute-1 sudo[121133]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:06:27 compute-1 sudo[121133]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:06:27 compute-1 sudo[121133]: pam_unix(sudo:session): session closed for user root
Jan 20 14:06:27 compute-1 sudo[121182]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:06:27 compute-1 sudo[121182]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:06:27 compute-1 sudo[121182]: pam_unix(sudo:session): session closed for user root
Jan 20 14:06:27 compute-1 sudo[121230]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 20 14:06:27 compute-1 sudo[121230]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:06:27 compute-1 sudo[121285]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-urjsbtelcpadqterrjtogeixksxmnohp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917987.3378432-249-19301886710995/AnsiballZ_file.py'
Jan 20 14:06:27 compute-1 sudo[121285]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:06:27 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:06:27 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:06:27 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:06:27.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:06:27 compute-1 python3.9[121287]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:06:27 compute-1 sudo[121285]: pam_unix(sudo:session): session closed for user root
Jan 20 14:06:28 compute-1 sudo[121230]: pam_unix(sudo:session): session closed for user root
Jan 20 14:06:28 compute-1 ceph-mon[81775]: pgmap v438: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:06:28 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:06:28 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:06:28 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:06:28.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:06:28 compute-1 sudo[121468]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ntqzkdtjytzxzqbegygiwaernnzpcfvk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917988.077632-273-218503913908970/AnsiballZ_stat.py'
Jan 20 14:06:28 compute-1 sudo[121468]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:06:28 compute-1 python3.9[121470]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:06:28 compute-1 sudo[121468]: pam_unix(sudo:session): session closed for user root
Jan 20 14:06:29 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:06:29 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:06:29 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:06:29 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 14:06:29 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:06:29 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 14:06:29 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 14:06:29 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:06:29 compute-1 sudo[121546]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lrpwjanufwphbctpuzqdoipkljkanzsj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917988.077632-273-218503913908970/AnsiballZ_file.py'
Jan 20 14:06:29 compute-1 sudo[121546]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:06:29 compute-1 python3.9[121548]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:06:29 compute-1 sudo[121546]: pam_unix(sudo:session): session closed for user root
Jan 20 14:06:29 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:06:29 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:06:29 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:06:29.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:06:29 compute-1 sudo[121698]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qogxdrjxbemvafrfkszosxxzjazvtfcc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917989.60728-309-139406562821470/AnsiballZ_stat.py'
Jan 20 14:06:29 compute-1 sudo[121698]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:06:30 compute-1 ceph-mon[81775]: pgmap v439: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:06:30 compute-1 python3.9[121700]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:06:30 compute-1 sudo[121698]: pam_unix(sudo:session): session closed for user root
Jan 20 14:06:30 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:06:30 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:06:30 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:06:30 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:06:30.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:06:30 compute-1 sudo[121776]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-udzsctbqbnrcghlohrbwlswfsllzgzhv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917989.60728-309-139406562821470/AnsiballZ_file.py'
Jan 20 14:06:30 compute-1 sudo[121776]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:06:30 compute-1 python3.9[121778]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.xptdz28b recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:06:30 compute-1 sudo[121776]: pam_unix(sudo:session): session closed for user root
Jan 20 14:06:31 compute-1 sudo[121928]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xjzcncvlcgqyoxfenirllanpqzjhxbke ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917990.8801332-345-124489455824489/AnsiballZ_stat.py'
Jan 20 14:06:31 compute-1 sudo[121928]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:06:31 compute-1 python3.9[121930]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:06:31 compute-1 sudo[121928]: pam_unix(sudo:session): session closed for user root
Jan 20 14:06:31 compute-1 sudo[122006]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wqpkhatpvxjsggfsbzzkwlhenblsvewn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917990.8801332-345-124489455824489/AnsiballZ_file.py'
Jan 20 14:06:31 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:06:31 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:06:31 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:06:31.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:06:31 compute-1 sudo[122006]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:06:32 compute-1 python3.9[122008]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:06:32 compute-1 sudo[122006]: pam_unix(sudo:session): session closed for user root
Jan 20 14:06:32 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:06:32 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:06:32 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:06:32.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:06:32 compute-1 ceph-mon[81775]: pgmap v440: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:06:32 compute-1 sudo[122158]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ihytavadjaoifntrnhfcmgplewhvqsat ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917992.343567-384-130141073037226/AnsiballZ_command.py'
Jan 20 14:06:32 compute-1 sudo[122158]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:06:32 compute-1 python3.9[122160]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 14:06:33 compute-1 sudo[122158]: pam_unix(sudo:session): session closed for user root
Jan 20 14:06:33 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:06:33 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:06:33 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:06:33.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:06:34 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:06:34 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:06:34 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:06:34.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:06:34 compute-1 sudo[122311]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kspqzbrrirkmtrtlhcdognnctpbgfxtg ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1768917993.983721-408-4606295056542/AnsiballZ_edpm_nftables_from_files.py'
Jan 20 14:06:34 compute-1 sudo[122311]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:06:34 compute-1 python3[122313]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Jan 20 14:06:34 compute-1 sudo[122311]: pam_unix(sudo:session): session closed for user root
Jan 20 14:06:34 compute-1 ceph-mon[81775]: pgmap v441: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:06:35 compute-1 sudo[122463]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ldeemfscewdvrjccoibvnexslcrcytnc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917994.8578079-432-24955069834980/AnsiballZ_stat.py'
Jan 20 14:06:35 compute-1 sudo[122463]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:06:35 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:06:35 compute-1 python3.9[122465]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:06:35 compute-1 sudo[122463]: pam_unix(sudo:session): session closed for user root
Jan 20 14:06:35 compute-1 sudo[122472]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:06:35 compute-1 sudo[122472]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:06:35 compute-1 sudo[122472]: pam_unix(sudo:session): session closed for user root
Jan 20 14:06:35 compute-1 sudo[122522]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 14:06:35 compute-1 sudo[122522]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:06:35 compute-1 sudo[122522]: pam_unix(sudo:session): session closed for user root
Jan 20 14:06:35 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:06:35 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:06:35 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:06:35.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:06:35 compute-1 sshd-session[121393]: Invalid user xbmc from 116.99.171.211 port 57580
Jan 20 14:06:36 compute-1 sudo[122638]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aurnkdohsnfkkejprqyrjhuonzavnenh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917994.8578079-432-24955069834980/AnsiballZ_copy.py'
Jan 20 14:06:36 compute-1 sudo[122638]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:06:36 compute-1 sshd-session[121393]: Connection closed by invalid user xbmc 116.99.171.211 port 57580 [preauth]
Jan 20 14:06:36 compute-1 python3.9[122640]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768917994.8578079-432-24955069834980/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:06:36 compute-1 sudo[122638]: pam_unix(sudo:session): session closed for user root
Jan 20 14:06:36 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:06:36 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:06:36 compute-1 ceph-mon[81775]: pgmap v442: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:06:36 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:06:36 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:06:36 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:06:36.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:06:36 compute-1 sudo[122790]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nrhksnhpcmxppwdksnlwvpddigrhhlxz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917996.5200815-477-135677632120386/AnsiballZ_stat.py'
Jan 20 14:06:36 compute-1 sudo[122790]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:06:37 compute-1 python3.9[122792]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:06:37 compute-1 sudo[122790]: pam_unix(sudo:session): session closed for user root
Jan 20 14:06:37 compute-1 sudo[122915]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jwtyjwklllsmdawskxwuzwihancynxbq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917996.5200815-477-135677632120386/AnsiballZ_copy.py'
Jan 20 14:06:37 compute-1 sudo[122915]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:06:37 compute-1 python3.9[122917]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768917996.5200815-477-135677632120386/.source.nft follow=False _original_basename=jump-chain.j2 checksum=ac8dea350c18f51f54d48dacc09613cda4c5540c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:06:37 compute-1 sudo[122915]: pam_unix(sudo:session): session closed for user root
Jan 20 14:06:37 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:06:37 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:06:37 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:06:37.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:06:38 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:06:38 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:06:38 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:06:38.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:06:38 compute-1 ceph-mon[81775]: pgmap v443: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:06:38 compute-1 sudo[123067]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pabxskxuwzsmmnmwaslrehiuzuyxcfkg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917998.2223763-522-92709814147886/AnsiballZ_stat.py'
Jan 20 14:06:38 compute-1 sudo[123067]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:06:38 compute-1 python3.9[123069]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:06:38 compute-1 sudo[123067]: pam_unix(sudo:session): session closed for user root
Jan 20 14:06:39 compute-1 sudo[123192]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-umixzaoxexhdmwtjighjhanywrgwpjuh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917998.2223763-522-92709814147886/AnsiballZ_copy.py'
Jan 20 14:06:39 compute-1 sudo[123192]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:06:39 compute-1 python3.9[123194]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768917998.2223763-522-92709814147886/.source.nft follow=False _original_basename=flush-chain.j2 checksum=4d3ffec49c8eb1a9b80d2f1e8cd64070063a87b4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:06:39 compute-1 sudo[123192]: pam_unix(sudo:session): session closed for user root
Jan 20 14:06:39 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:06:39 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:06:39 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:06:39.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:06:40 compute-1 sudo[123344]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yftkhligzufqaqrwptnyzmfooimkafih ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917999.7761247-567-160315442882850/AnsiballZ_stat.py'
Jan 20 14:06:40 compute-1 sudo[123344]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:06:40 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:06:40 compute-1 python3.9[123346]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:06:40 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:06:40 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:06:40 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:06:40.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:06:40 compute-1 sudo[123344]: pam_unix(sudo:session): session closed for user root
Jan 20 14:06:40 compute-1 ceph-mon[81775]: pgmap v444: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:06:40 compute-1 sudo[123469]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lymzxmolhvujyllleonegosgsbzzpkgq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917999.7761247-567-160315442882850/AnsiballZ_copy.py'
Jan 20 14:06:40 compute-1 sudo[123469]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:06:41 compute-1 python3.9[123471]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768917999.7761247-567-160315442882850/.source.nft follow=False _original_basename=chains.j2 checksum=298ada419730ec15df17ded0cc50c97a4014a591 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:06:41 compute-1 sudo[123469]: pam_unix(sudo:session): session closed for user root
Jan 20 14:06:41 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:06:41 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:06:41 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:06:41.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:06:41 compute-1 sudo[123621]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xiazsrnonieazrzhgtztwshzcjuablcr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918001.4105432-612-49106105279695/AnsiballZ_stat.py'
Jan 20 14:06:41 compute-1 sudo[123621]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:06:42 compute-1 python3.9[123623]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:06:42 compute-1 sudo[123621]: pam_unix(sudo:session): session closed for user root
Jan 20 14:06:42 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:06:42 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:06:42 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:06:42.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:06:42 compute-1 sudo[123746]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zwhtcpzfafekyxmtmfnsrteridmpmlwe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918001.4105432-612-49106105279695/AnsiballZ_copy.py'
Jan 20 14:06:42 compute-1 sudo[123746]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:06:42 compute-1 ceph-mon[81775]: pgmap v445: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:06:42 compute-1 python3.9[123748]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768918001.4105432-612-49106105279695/.source.nft follow=False _original_basename=ruleset.j2 checksum=bdba38546f86123f1927359d89789bd211aba99d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:06:42 compute-1 sudo[123746]: pam_unix(sudo:session): session closed for user root
Jan 20 14:06:43 compute-1 sudo[123898]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jxldnthilcdxvmkodnlzsmogwkypgymu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918002.9128551-657-114674256326377/AnsiballZ_file.py'
Jan 20 14:06:43 compute-1 sudo[123898]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:06:43 compute-1 python3.9[123900]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:06:43 compute-1 sudo[123898]: pam_unix(sudo:session): session closed for user root
Jan 20 14:06:43 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:06:43 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:06:43 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:06:43.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:06:44 compute-1 sudo[124050]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-curiglrpwyswssixcxnnpnigahzhrvac ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918003.7635787-681-27778553750869/AnsiballZ_command.py'
Jan 20 14:06:44 compute-1 sudo[124050]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:06:44 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:06:44 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:06:44 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:06:44.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:06:44 compute-1 python3.9[124052]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 14:06:44 compute-1 sudo[124050]: pam_unix(sudo:session): session closed for user root
Jan 20 14:06:44 compute-1 ceph-mon[81775]: pgmap v446: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:06:45 compute-1 sudo[124205]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-znimtxadyvjyziospefiqsecoxeriocu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918004.7203078-705-140227415298534/AnsiballZ_blockinfile.py'
Jan 20 14:06:45 compute-1 sudo[124205]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:06:45 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:06:45 compute-1 python3.9[124207]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:06:45 compute-1 sudo[124205]: pam_unix(sudo:session): session closed for user root
Jan 20 14:06:45 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:06:45 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:06:45 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:06:45.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:06:46 compute-1 sudo[124357]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zympioqkfkddmzcexenbzzmtebobnsfw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918005.8403397-732-257799445291936/AnsiballZ_command.py'
Jan 20 14:06:46 compute-1 sudo[124357]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:06:46 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:06:46 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:06:46 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:06:46.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:06:46 compute-1 python3.9[124359]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 14:06:46 compute-1 sudo[124357]: pam_unix(sudo:session): session closed for user root
Jan 20 14:06:46 compute-1 ceph-mon[81775]: pgmap v447: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:06:47 compute-1 sudo[124510]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pmhemwwbhmupbacpkyjvqhktybalduqr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918006.720888-756-226125110788635/AnsiballZ_stat.py'
Jan 20 14:06:47 compute-1 sudo[124510]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:06:47 compute-1 python3.9[124512]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 20 14:06:47 compute-1 sudo[124510]: pam_unix(sudo:session): session closed for user root
Jan 20 14:06:47 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:06:47 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:06:47 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:06:47.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:06:47 compute-1 sudo[124664]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wnkexurrtvgokzwpvzdupyyfpcmiaooy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918007.5980396-780-48231761264268/AnsiballZ_command.py'
Jan 20 14:06:47 compute-1 sudo[124664]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:06:48 compute-1 python3.9[124666]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 14:06:48 compute-1 sudo[124664]: pam_unix(sudo:session): session closed for user root
Jan 20 14:06:48 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:06:48 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:06:48 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:06:48.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:06:48 compute-1 ceph-mon[81775]: pgmap v448: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:06:48 compute-1 sudo[124819]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jgutjfyyquofvqhomnjcljbwgswiqxcp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918008.4514685-804-21750078340457/AnsiballZ_file.py'
Jan 20 14:06:48 compute-1 sudo[124819]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:06:49 compute-1 python3.9[124821]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:06:49 compute-1 sudo[124819]: pam_unix(sudo:session): session closed for user root
Jan 20 14:06:49 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:06:49 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:06:49 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:06:49.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:06:50 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:06:50 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:06:50 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:06:50 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:06:50.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:06:50 compute-1 python3.9[124971]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'machine'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 14:06:50 compute-1 ceph-mon[81775]: pgmap v449: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:06:51 compute-1 sudo[125122]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gkglcvouvriwqprmzuzlicmmfhkgbmqq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918011.4810374-924-35575813496523/AnsiballZ_command.py'
Jan 20 14:06:51 compute-1 sudo[125122]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:06:51 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:06:51 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:06:51 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:06:51.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:06:51 compute-1 python3.9[125124]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl set open . external_ids:hostname=compute-1.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings="datacentre:0e:0a:8d:1d:08:09" external_ids:ovn-encap-ip=172.19.0.101 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch 
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 14:06:51 compute-1 ovs-vsctl[125125]: ovs|00001|vsctl|INFO|Called as ovs-vsctl set open . external_ids:hostname=compute-1.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings=datacentre:0e:0a:8d:1d:08:09 external_ids:ovn-encap-ip=172.19.0.101 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch
Jan 20 14:06:52 compute-1 sudo[125122]: pam_unix(sudo:session): session closed for user root
Jan 20 14:06:52 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:06:52 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:06:52 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:06:52.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:06:52 compute-1 sudo[125275]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eayvpixbutimjvuvrsiciojrmevzquzi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918012.453456-951-191985912393686/AnsiballZ_command.py'
Jan 20 14:06:52 compute-1 sudo[125275]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:06:52 compute-1 ceph-mon[81775]: pgmap v450: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:06:52 compute-1 python3.9[125277]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail
                                             ovs-vsctl show | grep -q "Manager"
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 14:06:53 compute-1 sudo[125275]: pam_unix(sudo:session): session closed for user root
Jan 20 14:06:53 compute-1 sudo[125430]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pafqprjiiyslsclchkynkfbpkgzudujr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918013.2315571-975-135879055010971/AnsiballZ_command.py'
Jan 20 14:06:53 compute-1 sudo[125430]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:06:53 compute-1 python3.9[125432]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl --timeout=5 --id=@manager -- create Manager target=\"ptcp:********@manager
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 14:06:53 compute-1 ovs-vsctl[125434]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=5 --id=@manager -- create Manager "target=\"ptcp:6640:127.0.0.1\"" -- add Open_vSwitch . manager_options @manager
Jan 20 14:06:53 compute-1 sudo[125430]: pam_unix(sudo:session): session closed for user root
Jan 20 14:06:53 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:06:53 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:06:53 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:06:53.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:06:54 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:06:54 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:06:54 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:06:54.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:06:54 compute-1 python3.9[125585]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 20 14:06:54 compute-1 ceph-mon[81775]: pgmap v451: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:06:55 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:06:55 compute-1 sudo[125737]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xerarjpzyiewdrlmybctqsurblxrtytm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918015.2153804-1026-102263015200777/AnsiballZ_file.py'
Jan 20 14:06:55 compute-1 sudo[125737]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:06:55 compute-1 python3.9[125739]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:06:55 compute-1 sudo[125737]: pam_unix(sudo:session): session closed for user root
Jan 20 14:06:55 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:06:55 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:06:55 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:06:55.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:06:56 compute-1 sshd-session[125433]: Connection closed by authenticating user root 116.99.171.211 port 48086 [preauth]
Jan 20 14:06:56 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:06:56 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:06:56 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:06:56.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:06:56 compute-1 sudo[125889]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pvncwjubnaqgbemldxhgsevuzatnapwo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918016.0350668-1050-60587842539685/AnsiballZ_stat.py'
Jan 20 14:06:56 compute-1 sudo[125889]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:06:56 compute-1 python3.9[125891]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:06:56 compute-1 sudo[125889]: pam_unix(sudo:session): session closed for user root
Jan 20 14:06:56 compute-1 ceph-mon[81775]: pgmap v452: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:06:56 compute-1 sudo[125967]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pmlqjtrehqaylllerztqubmoeksdnmva ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918016.0350668-1050-60587842539685/AnsiballZ_file.py'
Jan 20 14:06:56 compute-1 sudo[125967]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:06:57 compute-1 python3.9[125969]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:06:57 compute-1 sudo[125967]: pam_unix(sudo:session): session closed for user root
Jan 20 14:06:57 compute-1 sudo[126119]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-taicfmqdiukrzdlcagqqctwyofkqocii ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918017.4319246-1050-243227772387656/AnsiballZ_stat.py'
Jan 20 14:06:57 compute-1 sudo[126119]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:06:57 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:06:57 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:06:57 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:06:57.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:06:58 compute-1 python3.9[126121]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:06:58 compute-1 sudo[126119]: pam_unix(sudo:session): session closed for user root
Jan 20 14:06:58 compute-1 sudo[126197]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xmhlwrazytaemrxdvgcswamzbwzwhrtw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918017.4319246-1050-243227772387656/AnsiballZ_file.py'
Jan 20 14:06:58 compute-1 sudo[126197]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:06:58 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:06:58 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:06:58 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:06:58.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:06:58 compute-1 python3.9[126199]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:06:58 compute-1 sudo[126197]: pam_unix(sudo:session): session closed for user root
Jan 20 14:06:58 compute-1 ceph-mon[81775]: pgmap v453: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:06:59 compute-1 sudo[126349]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ekahtrnwrbdwiyhdyftwapxieshbwygf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918018.8293846-1119-111977398074171/AnsiballZ_file.py'
Jan 20 14:06:59 compute-1 sudo[126349]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:06:59 compute-1 python3.9[126351]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:06:59 compute-1 sudo[126349]: pam_unix(sudo:session): session closed for user root
Jan 20 14:06:59 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:06:59 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:06:59 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:06:59.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:07:00 compute-1 sudo[126501]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mfblcryenrqwpbboecxhkaxnwnugdzyb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918019.7166877-1143-213538072988959/AnsiballZ_stat.py'
Jan 20 14:07:00 compute-1 sudo[126501]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:07:00 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:07:00 compute-1 python3.9[126503]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:07:00 compute-1 sudo[126501]: pam_unix(sudo:session): session closed for user root
Jan 20 14:07:00 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:07:00 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:07:00 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:07:00.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:07:00 compute-1 sudo[126579]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-auxvvaokyryigkcckuttnhmtgpurdvqo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918019.7166877-1143-213538072988959/AnsiballZ_file.py'
Jan 20 14:07:00 compute-1 sudo[126579]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:07:00 compute-1 python3.9[126581]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:07:00 compute-1 sudo[126579]: pam_unix(sudo:session): session closed for user root
Jan 20 14:07:00 compute-1 ceph-mon[81775]: pgmap v454: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:07:01 compute-1 sudo[126731]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gxzznvyvvmwltfiiexpbngzhbrpqogqq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918021.0271292-1179-203336920519269/AnsiballZ_stat.py'
Jan 20 14:07:01 compute-1 sudo[126731]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:07:01 compute-1 python3.9[126733]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:07:01 compute-1 sudo[126731]: pam_unix(sudo:session): session closed for user root
Jan 20 14:07:01 compute-1 sudo[126809]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-thceejtiivflkfbfqnklinngccagsugc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918021.0271292-1179-203336920519269/AnsiballZ_file.py'
Jan 20 14:07:01 compute-1 sudo[126809]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:07:01 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:07:01 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:07:01 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:07:01.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:07:02 compute-1 python3.9[126811]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:07:02 compute-1 sudo[126809]: pam_unix(sudo:session): session closed for user root
Jan 20 14:07:02 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:07:02 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:07:02 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:07:02.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:07:02 compute-1 sudo[126961]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ndcaehufgqgxfjesqpqjebtdxgdulmfd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918022.2874568-1215-253054834362592/AnsiballZ_systemd.py'
Jan 20 14:07:02 compute-1 sudo[126961]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:07:02 compute-1 python3.9[126963]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 20 14:07:02 compute-1 systemd[1]: Reloading.
Jan 20 14:07:02 compute-1 ceph-mon[81775]: pgmap v455: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:07:02 compute-1 systemd-rc-local-generator[126992]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 14:07:03 compute-1 systemd-sysv-generator[126995]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 14:07:03 compute-1 sudo[126961]: pam_unix(sudo:session): session closed for user root
Jan 20 14:07:03 compute-1 sudo[127151]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mdcekfbnyruejdcrdhwsimrrmcxaktmj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918023.5072627-1239-47359034637297/AnsiballZ_stat.py'
Jan 20 14:07:03 compute-1 sudo[127151]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:07:03 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:07:03 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:07:03 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:07:03.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:07:04 compute-1 python3.9[127153]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:07:04 compute-1 sudo[127151]: pam_unix(sudo:session): session closed for user root
Jan 20 14:07:04 compute-1 sudo[127229]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ceueupsoearrmfjrpnpjgaovtbeeffvq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918023.5072627-1239-47359034637297/AnsiballZ_file.py'
Jan 20 14:07:04 compute-1 sudo[127229]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:07:04 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:07:04 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:07:04 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:07:04.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:07:04 compute-1 python3.9[127231]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:07:04 compute-1 sudo[127229]: pam_unix(sudo:session): session closed for user root
Jan 20 14:07:04 compute-1 ceph-mon[81775]: pgmap v456: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:07:05 compute-1 sudo[127381]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fpruouolgldsdhfqthplrzohlcxppghr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918024.845037-1275-71780470523760/AnsiballZ_stat.py'
Jan 20 14:07:05 compute-1 sudo[127381]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:07:05 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:07:05 compute-1 python3.9[127383]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:07:05 compute-1 sudo[127381]: pam_unix(sudo:session): session closed for user root
Jan 20 14:07:05 compute-1 sudo[127461]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bxbgfjoishsusxqifvmbkwgppfdgudxu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918024.845037-1275-71780470523760/AnsiballZ_file.py'
Jan 20 14:07:05 compute-1 sudo[127461]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:07:05 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:07:05 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:07:05 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:07:05.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:07:05 compute-1 python3.9[127463]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:07:05 compute-1 sudo[127461]: pam_unix(sudo:session): session closed for user root
Jan 20 14:07:06 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:07:06 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:07:06 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:07:06.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:07:06 compute-1 sudo[127613]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-emoafbpaiwctnzqnylomcjdrhbwsufyj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918026.2796566-1311-161605280655404/AnsiballZ_systemd.py'
Jan 20 14:07:06 compute-1 sudo[127613]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:07:06 compute-1 python3.9[127615]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 20 14:07:06 compute-1 systemd[1]: Reloading.
Jan 20 14:07:07 compute-1 ceph-mon[81775]: pgmap v457: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:07:07 compute-1 systemd-rc-local-generator[127643]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 14:07:07 compute-1 systemd-sysv-generator[127647]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 14:07:07 compute-1 systemd[1]: Starting Create netns directory...
Jan 20 14:07:07 compute-1 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Jan 20 14:07:07 compute-1 systemd[1]: netns-placeholder.service: Deactivated successfully.
Jan 20 14:07:07 compute-1 systemd[1]: Finished Create netns directory.
Jan 20 14:07:07 compute-1 sudo[127613]: pam_unix(sudo:session): session closed for user root
Jan 20 14:07:07 compute-1 sshd-session[127384]: Invalid user anton from 116.99.171.211 port 38302
Jan 20 14:07:07 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:07:07 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:07:07 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:07:07.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:07:08 compute-1 sudo[127806]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ioxyvkfzpqnessbfqjponeokasoqsufl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918027.8566337-1341-249198183211929/AnsiballZ_file.py'
Jan 20 14:07:08 compute-1 sudo[127806]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:07:08 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:07:08 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:07:08 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:07:08.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:07:08 compute-1 python3.9[127808]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:07:08 compute-1 sudo[127806]: pam_unix(sudo:session): session closed for user root
Jan 20 14:07:08 compute-1 sshd-session[127384]: Connection closed by invalid user anton 116.99.171.211 port 38302 [preauth]
Jan 20 14:07:09 compute-1 ceph-mon[81775]: pgmap v458: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:07:09 compute-1 sudo[127958]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ycgatrdpbcawynfvoofifszjvwgrdzry ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918028.713924-1365-273573758594560/AnsiballZ_stat.py'
Jan 20 14:07:09 compute-1 sudo[127958]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:07:09 compute-1 python3.9[127960]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_controller/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:07:09 compute-1 sudo[127958]: pam_unix(sudo:session): session closed for user root
Jan 20 14:07:09 compute-1 sudo[128081]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rxnuuzqsxdqmvuvjmzjitiubpnsgtdhz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918028.713924-1365-273573758594560/AnsiballZ_copy.py'
Jan 20 14:07:09 compute-1 sudo[128081]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:07:09 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:07:09 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:07:09 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:07:09.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:07:09 compute-1 python3.9[128083]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_controller/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1768918028.713924-1365-273573758594560/.source _original_basename=healthcheck follow=False checksum=4098dd010265fabdf5c26b97d169fc4e575ff457 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:07:09 compute-1 sudo[128081]: pam_unix(sudo:session): session closed for user root
Jan 20 14:07:10 compute-1 ceph-mon[81775]: pgmap v459: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:07:10 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:07:10 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:07:10 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:07:10 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:07:10.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:07:11 compute-1 sudo[128233]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jkwmvpveezsjzyybvsfyufwwlenroulv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918030.7061744-1416-202643503929617/AnsiballZ_file.py'
Jan 20 14:07:11 compute-1 sudo[128233]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:07:11 compute-1 python3.9[128235]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:07:11 compute-1 sudo[128233]: pam_unix(sudo:session): session closed for user root
Jan 20 14:07:11 compute-1 sudo[128385]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zchhuypagzbzuxtbenqdgfuixeaxyozc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918031.5108104-1440-166588730274480/AnsiballZ_file.py'
Jan 20 14:07:11 compute-1 sudo[128385]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:07:11 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:07:11 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:07:11 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:07:11.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:07:12 compute-1 python3.9[128387]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:07:12 compute-1 sudo[128385]: pam_unix(sudo:session): session closed for user root
Jan 20 14:07:12 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:07:12 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:07:12 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:07:12.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:07:12 compute-1 ceph-mon[81775]: pgmap v460: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:07:12 compute-1 sudo[128537]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tjjwziqwwmlkhrjxpemediqcflfgybvs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918032.2806437-1464-99481396404226/AnsiballZ_stat.py'
Jan 20 14:07:12 compute-1 sudo[128537]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:07:12 compute-1 python3.9[128539]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_controller.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:07:12 compute-1 sudo[128537]: pam_unix(sudo:session): session closed for user root
Jan 20 14:07:13 compute-1 sudo[128660]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kfgbnureqfmhdbwlelrchfrpvqvnucsk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918032.2806437-1464-99481396404226/AnsiballZ_copy.py'
Jan 20 14:07:13 compute-1 sudo[128660]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:07:13 compute-1 python3.9[128662]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_controller.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1768918032.2806437-1464-99481396404226/.source.json _original_basename=.9g9nlfds follow=False checksum=2328fc98619beeb08ee32b01f15bb43094c10b61 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:07:13 compute-1 sudo[128660]: pam_unix(sudo:session): session closed for user root
Jan 20 14:07:13 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:07:13 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:07:13 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:07:13.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:07:14 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:07:14 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:07:14 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:07:14.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:07:14 compute-1 python3.9[128812]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_controller state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:07:14 compute-1 ceph-mon[81775]: pgmap v461: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:07:15 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:07:15 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:07:15 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:07:15 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:07:15.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:07:16 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:07:16 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:07:16 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:07:16.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:07:16 compute-1 ceph-mon[81775]: pgmap v462: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:07:16 compute-1 sudo[129233]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fadxuijdynnjzdpcumttitxvzdcaikgf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918036.3763163-1584-1628447783672/AnsiballZ_container_config_data.py'
Jan 20 14:07:16 compute-1 sudo[129233]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:07:17 compute-1 python3.9[129235]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_controller config_pattern=*.json debug=False
Jan 20 14:07:17 compute-1 sudo[129233]: pam_unix(sudo:session): session closed for user root
Jan 20 14:07:17 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:07:17 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:07:17 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:07:17.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:07:18 compute-1 sudo[129385]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mdmhrwfbwypqbnxfehiztdlvsfartilz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918037.632037-1617-85839168018985/AnsiballZ_container_config_hash.py'
Jan 20 14:07:18 compute-1 sudo[129385]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:07:18 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:07:18 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:07:18 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:07:18.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:07:18 compute-1 python3.9[129387]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 20 14:07:18 compute-1 sudo[129385]: pam_unix(sudo:session): session closed for user root
Jan 20 14:07:18 compute-1 ceph-mon[81775]: pgmap v463: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:07:19 compute-1 sudo[129537]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nrigqlgtypeapdxskvrkgonaspuioozn ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1768918038.8966782-1647-15206233939084/AnsiballZ_edpm_container_manage.py'
Jan 20 14:07:19 compute-1 sudo[129537]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:07:19 compute-1 python3[129539]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_controller config_id=ovn_controller config_overrides={} config_patterns=*.json containers=['ovn_controller'] log_base_path=/var/log/containers/stdouts debug=False
Jan 20 14:07:19 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:07:19 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:07:19 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:07:19.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:07:20 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:07:20 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:07:20 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:07:20 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:07:20.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:07:20 compute-1 ceph-mon[81775]: pgmap v464: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:07:21 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:07:21 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:07:21 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:07:21.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:07:22 compute-1 ceph-osd[79119]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 20 14:07:22 compute-1 ceph-osd[79119]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Cumulative writes: 5984 writes, 25K keys, 5984 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
                                           Cumulative WAL: 5984 writes, 915 syncs, 6.54 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 5984 writes, 25K keys, 5984 commit groups, 1.0 writes per commit group, ingest: 19.05 MB, 0.03 MB/s
                                           Interval WAL: 5984 writes, 915 syncs, 6.54 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.008       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.008       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.008       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557dbecdef30#2 capacity: 1.56 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.67844e-05%) FilterBlock(3,0.33 KB,2.00272e-05%) IndexBlock(3,0.34 KB,2.09808e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557dbecdef30#2 capacity: 1.56 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.67844e-05%) FilterBlock(3,0.33 KB,2.00272e-05%) IndexBlock(3,0.34 KB,2.09808e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557dbecdef30#2 capacity: 1.56 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.67844e-05%) FilterBlock(3,0.33 KB,2.00272e-05%) IndexBlock(3,0.34 KB,2.09808e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557dbecdef30#2 capacity: 1.56 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.67844e-05%) FilterBlock(3,0.33 KB,2.00272e-05%) IndexBlock(3,0.34 KB,2.09808e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557dbecdef30#2 capacity: 1.56 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.67844e-05%) FilterBlock(3,0.33 KB,2.00272e-05%) IndexBlock(3,0.34 KB,2.09808e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557dbecdef30#2 capacity: 1.56 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.67844e-05%) FilterBlock(3,0.33 KB,2.00272e-05%) IndexBlock(3,0.34 KB,2.09808e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557dbecdef30#2 capacity: 1.56 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.67844e-05%) FilterBlock(3,0.33 KB,2.00272e-05%) IndexBlock(3,0.34 KB,2.09808e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557dbecdf610#2 capacity: 272.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,7.2928e-05%) FilterBlock(1,0.11 KB,3.92689e-05%) IndexBlock(1,0.14 KB,5.04886e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557dbecdf610#2 capacity: 272.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,7.2928e-05%) FilterBlock(1,0.11 KB,3.92689e-05%) IndexBlock(1,0.14 KB,5.04886e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557dbecdf610#2 capacity: 272.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,7.2928e-05%) FilterBlock(1,0.11 KB,3.92689e-05%) IndexBlock(1,0.14 KB,5.04886e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557dbecdef30#2 capacity: 1.56 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.67844e-05%) FilterBlock(3,0.33 KB,2.00272e-05%) IndexBlock(3,0.34 KB,2.09808e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557dbecdef30#2 capacity: 1.56 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.67844e-05%) FilterBlock(3,0.33 KB,2.00272e-05%) IndexBlock(3,0.34 KB,2.09808e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Jan 20 14:07:22 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:07:22 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:07:22 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:07:22.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:07:22 compute-1 ceph-mon[81775]: pgmap v465: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:07:23 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:07:23 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:07:23 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:07:23.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:07:24 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:07:24 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:07:24 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:07:24.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:07:24 compute-1 ceph-mon[81775]: pgmap v466: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:07:25 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:07:25 compute-1 podman[129553]: 2026-01-20 14:07:25.367090338 +0000 UTC m=+5.514869966 image pull a17927617ef5a603f0594ee0d6df65aabdc9e0303ccc5a52c36f193de33ee0fe quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Jan 20 14:07:25 compute-1 podman[129674]: 2026-01-20 14:07:25.539406305 +0000 UTC m=+0.063909797 container create 72be95c12e041eb2cd1a16ad70dfa1391d9f03a5f3180a614ead9459dcfcd46f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller)
Jan 20 14:07:25 compute-1 podman[129674]: 2026-01-20 14:07:25.50864365 +0000 UTC m=+0.033147142 image pull a17927617ef5a603f0594ee0d6df65aabdc9e0303ccc5a52c36f193de33ee0fe quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Jan 20 14:07:25 compute-1 python3[129539]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_controller --conmon-pidfile /run/ovn_controller.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b --healthcheck-command /openstack/healthcheck --label config_id=ovn_controller --label container_name=ovn_controller --label managed_by=edpm_ansible --label config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --user root --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /var/lib/openvswitch/ovn:/run/ovn:shared,z --volume /var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Jan 20 14:07:25 compute-1 sudo[129537]: pam_unix(sudo:session): session closed for user root
Jan 20 14:07:25 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:07:25 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:07:25 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:07:25.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:07:26 compute-1 sudo[129862]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lfwclqaeukmavvpqqsaegxspzialwghh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918045.911341-1671-63428464006263/AnsiballZ_stat.py'
Jan 20 14:07:26 compute-1 sudo[129862]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:07:26 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:07:26 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:07:26 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:07:26.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:07:26 compute-1 python3.9[129864]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 20 14:07:26 compute-1 sudo[129862]: pam_unix(sudo:session): session closed for user root
Jan 20 14:07:26 compute-1 ceph-mon[81775]: pgmap v467: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:07:27 compute-1 sudo[130016]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fojstwxfxvizawpkfmlozewhlthkhiem ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918047.1269782-1698-8391769898941/AnsiballZ_file.py'
Jan 20 14:07:27 compute-1 sudo[130016]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:07:27 compute-1 python3.9[130018]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_controller.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:07:27 compute-1 sudo[130016]: pam_unix(sudo:session): session closed for user root
Jan 20 14:07:27 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:07:27 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:07:27 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:07:27.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:07:27 compute-1 sudo[130092]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lafglghekcuakxnfflbmfrgdawyapuso ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918047.1269782-1698-8391769898941/AnsiballZ_stat.py'
Jan 20 14:07:27 compute-1 sudo[130092]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:07:28 compute-1 python3.9[130094]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_controller_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 20 14:07:28 compute-1 sudo[130092]: pam_unix(sudo:session): session closed for user root
Jan 20 14:07:28 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:07:28 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:07:28 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:07:28.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:07:28 compute-1 sudo[130243]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eszimegazzsibzygqzpdmhuqkthyiqle ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918048.2091832-1698-10924979346349/AnsiballZ_copy.py'
Jan 20 14:07:28 compute-1 sudo[130243]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:07:28 compute-1 python3.9[130245]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1768918048.2091832-1698-10924979346349/source dest=/etc/systemd/system/edpm_ovn_controller.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:07:28 compute-1 sudo[130243]: pam_unix(sudo:session): session closed for user root
Jan 20 14:07:29 compute-1 ceph-mon[81775]: pgmap v468: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:07:29 compute-1 sudo[130319]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sgumdfpdpptnxunlkfxmgkozatevblgg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918048.2091832-1698-10924979346349/AnsiballZ_systemd.py'
Jan 20 14:07:29 compute-1 sudo[130319]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:07:29 compute-1 python3.9[130321]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 20 14:07:29 compute-1 systemd[1]: Reloading.
Jan 20 14:07:29 compute-1 systemd-rc-local-generator[130349]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 14:07:29 compute-1 systemd-sysv-generator[130354]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 14:07:29 compute-1 sudo[130319]: pam_unix(sudo:session): session closed for user root
Jan 20 14:07:29 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:07:29 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:07:29 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:07:29.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:07:30 compute-1 sudo[130431]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fvocwzggwavqbimhhogfkhuigcardefq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918048.2091832-1698-10924979346349/AnsiballZ_systemd.py'
Jan 20 14:07:30 compute-1 sudo[130431]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:07:30 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:07:30 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:07:30 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:07:30 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:07:30.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:07:30 compute-1 python3.9[130433]: ansible-systemd Invoked with state=restarted name=edpm_ovn_controller.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 20 14:07:30 compute-1 systemd[1]: Reloading.
Jan 20 14:07:30 compute-1 systemd-rc-local-generator[130463]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 14:07:30 compute-1 systemd-sysv-generator[130467]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 14:07:30 compute-1 systemd[1]: Starting ovn_controller container...
Jan 20 14:07:31 compute-1 ceph-mon[81775]: pgmap v469: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:07:31 compute-1 systemd[1]: Started libcrun container.
Jan 20 14:07:31 compute-1 ceph-mon[81775]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #22. Immutable memtables: 0.
Jan 20 14:07:31 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:07:31.053064) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 20 14:07:31 compute-1 ceph-mon[81775]: rocksdb: [db/flush_job.cc:856] [default] [JOB 9] Flushing memtable with next log file: 22
Jan 20 14:07:31 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b3fd958b4440e28ef79052e97d3fa58723aa03456d5a15e11186b1551eb9205/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Jan 20 14:07:31 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768918051053186, "job": 9, "event": "flush_started", "num_memtables": 1, "num_entries": 1702, "num_deletes": 251, "total_data_size": 4215090, "memory_usage": 4271720, "flush_reason": "Manual Compaction"}
Jan 20 14:07:31 compute-1 ceph-mon[81775]: rocksdb: [db/flush_job.cc:885] [default] [JOB 9] Level-0 flush table #23: started
Jan 20 14:07:31 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768918051075137, "cf_name": "default", "job": 9, "event": "table_file_creation", "file_number": 23, "file_size": 2763657, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 10462, "largest_seqno": 12159, "table_properties": {"data_size": 2756564, "index_size": 4164, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1861, "raw_key_size": 13986, "raw_average_key_size": 19, "raw_value_size": 2742491, "raw_average_value_size": 3793, "num_data_blocks": 188, "num_entries": 723, "num_filter_entries": 723, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768917888, "oldest_key_time": 1768917888, "file_creation_time": 1768918051, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1539d774-8a6f-4e48-b253-137c44586344", "db_session_id": "LFF7G2OZDOU7TKQ8MKAH", "orig_file_number": 23, "seqno_to_time_mapping": "N/A"}}
Jan 20 14:07:31 compute-1 ceph-mon[81775]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 9] Flush lasted 22130 microseconds, and 7980 cpu microseconds.
Jan 20 14:07:31 compute-1 ceph-mon[81775]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 14:07:31 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:07:31.075203) [db/flush_job.cc:967] [default] [JOB 9] Level-0 flush table #23: 2763657 bytes OK
Jan 20 14:07:31 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:07:31.075228) [db/memtable_list.cc:519] [default] Level-0 commit table #23 started
Jan 20 14:07:31 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:07:31.078048) [db/memtable_list.cc:722] [default] Level-0 commit table #23: memtable #1 done
Jan 20 14:07:31 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:07:31.078073) EVENT_LOG_v1 {"time_micros": 1768918051078066, "job": 9, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 20 14:07:31 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:07:31.078099) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 20 14:07:31 compute-1 ceph-mon[81775]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 9] Try to delete WAL files size 4207365, prev total WAL file size 4207365, number of live WAL files 2.
Jan 20 14:07:31 compute-1 ceph-mon[81775]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000019.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 14:07:31 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:07:31.080122) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300353032' seq:72057594037927935, type:22 .. '7061786F7300373534' seq:0, type:0; will stop at (end)
Jan 20 14:07:31 compute-1 ceph-mon[81775]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 10] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 20 14:07:31 compute-1 ceph-mon[81775]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 9 Base level 0, inputs: [23(2698KB)], [21(7454KB)]
Jan 20 14:07:31 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768918051080194, "job": 10, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [23], "files_L6": [21], "score": -1, "input_data_size": 10396904, "oldest_snapshot_seqno": -1}
Jan 20 14:07:31 compute-1 systemd[1]: Started /usr/bin/podman healthcheck run 72be95c12e041eb2cd1a16ad70dfa1391d9f03a5f3180a614ead9459dcfcd46f.
Jan 20 14:07:31 compute-1 podman[130475]: 2026-01-20 14:07:31.114614773 +0000 UTC m=+0.193940202 container init 72be95c12e041eb2cd1a16ad70dfa1391d9f03a5f3180a614ead9459dcfcd46f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_id=ovn_controller, io.buildah.version=1.41.3)
Jan 20 14:07:31 compute-1 ovn_controller[130490]: + sudo -E kolla_set_configs
Jan 20 14:07:31 compute-1 podman[130475]: 2026-01-20 14:07:31.147621901 +0000 UTC m=+0.226947300 container start 72be95c12e041eb2cd1a16ad70dfa1391d9f03a5f3180a614ead9459dcfcd46f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_id=ovn_controller)
Jan 20 14:07:31 compute-1 ceph-mon[81775]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 10] Generated table #24: 3959 keys, 8245471 bytes, temperature: kUnknown
Jan 20 14:07:31 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768918051176799, "cf_name": "default", "job": 10, "event": "table_file_creation", "file_number": 24, "file_size": 8245471, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8216579, "index_size": 17902, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9925, "raw_key_size": 96167, "raw_average_key_size": 24, "raw_value_size": 8142578, "raw_average_value_size": 2056, "num_data_blocks": 773, "num_entries": 3959, "num_filter_entries": 3959, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768917474, "oldest_key_time": 0, "file_creation_time": 1768918051, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1539d774-8a6f-4e48-b253-137c44586344", "db_session_id": "LFF7G2OZDOU7TKQ8MKAH", "orig_file_number": 24, "seqno_to_time_mapping": "N/A"}}
Jan 20 14:07:31 compute-1 ceph-mon[81775]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 14:07:31 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:07:31.178066) [db/compaction/compaction_job.cc:1663] [default] [JOB 10] Compacted 1@0 + 1@6 files to L6 => 8245471 bytes
Jan 20 14:07:31 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:07:31.179919) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 107.2 rd, 85.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.6, 7.3 +0.0 blob) out(7.9 +0.0 blob), read-write-amplify(6.7) write-amplify(3.0) OK, records in: 4476, records dropped: 517 output_compression: NoCompression
Jan 20 14:07:31 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:07:31.179952) EVENT_LOG_v1 {"time_micros": 1768918051179941, "job": 10, "event": "compaction_finished", "compaction_time_micros": 97003, "compaction_time_cpu_micros": 37849, "output_level": 6, "num_output_files": 1, "total_output_size": 8245471, "num_input_records": 4476, "num_output_records": 3959, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 20 14:07:31 compute-1 edpm-start-podman-container[130475]: ovn_controller
Jan 20 14:07:31 compute-1 ceph-mon[81775]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000023.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 14:07:31 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768918051180596, "job": 10, "event": "table_file_deletion", "file_number": 23}
Jan 20 14:07:31 compute-1 ceph-mon[81775]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000021.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 14:07:31 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768918051182024, "job": 10, "event": "table_file_deletion", "file_number": 21}
Jan 20 14:07:31 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:07:31.079994) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:07:31 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:07:31.182262) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:07:31 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:07:31.182270) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:07:31 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:07:31.182273) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:07:31 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:07:31.182275) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:07:31 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:07:31.182281) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:07:31 compute-1 systemd[1]: Created slice User Slice of UID 0.
Jan 20 14:07:31 compute-1 systemd[1]: Starting User Runtime Directory /run/user/0...
Jan 20 14:07:31 compute-1 systemd[1]: Finished User Runtime Directory /run/user/0.
Jan 20 14:07:31 compute-1 systemd[1]: Starting User Manager for UID 0...
Jan 20 14:07:31 compute-1 systemd[130528]: pam_unix(systemd-user:session): session opened for user root(uid=0) by root(uid=0)
Jan 20 14:07:31 compute-1 edpm-start-podman-container[130474]: Creating additional drop-in dependency for "ovn_controller" (72be95c12e041eb2cd1a16ad70dfa1391d9f03a5f3180a614ead9459dcfcd46f)
Jan 20 14:07:31 compute-1 podman[130497]: 2026-01-20 14:07:31.263450833 +0000 UTC m=+0.095065123 container health_status 72be95c12e041eb2cd1a16ad70dfa1391d9f03a5f3180a614ead9459dcfcd46f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=starting, health_failing_streak=1, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3)
Jan 20 14:07:31 compute-1 systemd[1]: 72be95c12e041eb2cd1a16ad70dfa1391d9f03a5f3180a614ead9459dcfcd46f-18ad1359c7a88a0c.service: Main process exited, code=exited, status=1/FAILURE
Jan 20 14:07:31 compute-1 systemd[1]: 72be95c12e041eb2cd1a16ad70dfa1391d9f03a5f3180a614ead9459dcfcd46f-18ad1359c7a88a0c.service: Failed with result 'exit-code'.
Jan 20 14:07:31 compute-1 systemd[1]: Reloading.
Jan 20 14:07:31 compute-1 systemd-rc-local-generator[130577]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 14:07:31 compute-1 systemd-sysv-generator[130580]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 14:07:31 compute-1 systemd[130528]: Queued start job for default target Main User Target.
Jan 20 14:07:31 compute-1 systemd[130528]: Created slice User Application Slice.
Jan 20 14:07:31 compute-1 systemd[130528]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Jan 20 14:07:31 compute-1 systemd[130528]: Started Daily Cleanup of User's Temporary Directories.
Jan 20 14:07:31 compute-1 systemd[130528]: Reached target Paths.
Jan 20 14:07:31 compute-1 systemd[130528]: Reached target Timers.
Jan 20 14:07:31 compute-1 systemd[130528]: Starting D-Bus User Message Bus Socket...
Jan 20 14:07:31 compute-1 systemd[130528]: Starting Create User's Volatile Files and Directories...
Jan 20 14:07:31 compute-1 systemd[130528]: Listening on D-Bus User Message Bus Socket.
Jan 20 14:07:31 compute-1 systemd[130528]: Reached target Sockets.
Jan 20 14:07:31 compute-1 systemd[130528]: Finished Create User's Volatile Files and Directories.
Jan 20 14:07:31 compute-1 systemd[130528]: Reached target Basic System.
Jan 20 14:07:31 compute-1 systemd[130528]: Reached target Main User Target.
Jan 20 14:07:31 compute-1 systemd[130528]: Startup finished in 144ms.
Jan 20 14:07:31 compute-1 systemd[1]: Started User Manager for UID 0.
Jan 20 14:07:31 compute-1 systemd[1]: Started ovn_controller container.
Jan 20 14:07:31 compute-1 systemd[1]: Started Session c1 of User root.
Jan 20 14:07:31 compute-1 sudo[130431]: pam_unix(sudo:session): session closed for user root
Jan 20 14:07:31 compute-1 ovn_controller[130490]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Jan 20 14:07:31 compute-1 ovn_controller[130490]: INFO:__main__:Validating config file
Jan 20 14:07:31 compute-1 ovn_controller[130490]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Jan 20 14:07:31 compute-1 ovn_controller[130490]: INFO:__main__:Writing out command to execute
Jan 20 14:07:31 compute-1 systemd[1]: session-c1.scope: Deactivated successfully.
Jan 20 14:07:31 compute-1 ovn_controller[130490]: ++ cat /run_command
Jan 20 14:07:31 compute-1 ovn_controller[130490]: + CMD='/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Jan 20 14:07:31 compute-1 ovn_controller[130490]: + ARGS=
Jan 20 14:07:31 compute-1 ovn_controller[130490]: + sudo kolla_copy_cacerts
Jan 20 14:07:31 compute-1 systemd[1]: Started Session c2 of User root.
Jan 20 14:07:31 compute-1 ovn_controller[130490]: + [[ ! -n '' ]]
Jan 20 14:07:31 compute-1 ovn_controller[130490]: + . kolla_extend_start
Jan 20 14:07:31 compute-1 ovn_controller[130490]: + echo 'Running command: '\''/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '\'''
Jan 20 14:07:31 compute-1 ovn_controller[130490]: Running command: '/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Jan 20 14:07:31 compute-1 ovn_controller[130490]: + umask 0022
Jan 20 14:07:31 compute-1 ovn_controller[130490]: + exec /usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt
Jan 20 14:07:31 compute-1 systemd[1]: session-c2.scope: Deactivated successfully.
Jan 20 14:07:31 compute-1 ovn_controller[130490]: 2026-01-20T14:07:31Z|00001|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Jan 20 14:07:31 compute-1 ovn_controller[130490]: 2026-01-20T14:07:31Z|00002|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Jan 20 14:07:31 compute-1 ovn_controller[130490]: 2026-01-20T14:07:31Z|00003|main|INFO|OVN internal version is : [24.03.8-20.33.0-76.8]
Jan 20 14:07:31 compute-1 ovn_controller[130490]: 2026-01-20T14:07:31Z|00004|main|INFO|OVS IDL reconnected, force recompute.
Jan 20 14:07:31 compute-1 ovn_controller[130490]: 2026-01-20T14:07:31Z|00005|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Jan 20 14:07:31 compute-1 ovn_controller[130490]: 2026-01-20T14:07:31Z|00006|main|INFO|OVNSB IDL reconnected, force recompute.
Jan 20 14:07:31 compute-1 NetworkManager[49104]: <info>  [1768918051.8241] manager: (br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/16)
Jan 20 14:07:31 compute-1 NetworkManager[49104]: <info>  [1768918051.8250] device (br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 20 14:07:31 compute-1 kernel: br-int: entered promiscuous mode
Jan 20 14:07:31 compute-1 NetworkManager[49104]: <warn>  [1768918051.8253] device (br-int)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 20 14:07:31 compute-1 rsyslogd[1002]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 20 14:07:31 compute-1 NetworkManager[49104]: <info>  [1768918051.8262] manager: (br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/17)
Jan 20 14:07:31 compute-1 NetworkManager[49104]: <info>  [1768918051.8268] manager: (br-int): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/18)
Jan 20 14:07:31 compute-1 NetworkManager[49104]: <info>  [1768918051.8272] device (br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Jan 20 14:07:31 compute-1 ovn_controller[130490]: 2026-01-20T14:07:31Z|00007|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connected
Jan 20 14:07:31 compute-1 ovn_controller[130490]: 2026-01-20T14:07:31Z|00008|features|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Jan 20 14:07:31 compute-1 ovn_controller[130490]: 2026-01-20T14:07:31Z|00009|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Jan 20 14:07:31 compute-1 ovn_controller[130490]: 2026-01-20T14:07:31Z|00010|features|INFO|OVS Feature: ct_zero_snat, state: supported
Jan 20 14:07:31 compute-1 ovn_controller[130490]: 2026-01-20T14:07:31Z|00011|features|INFO|OVS Feature: ct_flush, state: supported
Jan 20 14:07:31 compute-1 ovn_controller[130490]: 2026-01-20T14:07:31Z|00012|features|INFO|OVS Feature: dp_hash_l4_sym_support, state: supported
Jan 20 14:07:31 compute-1 ovn_controller[130490]: 2026-01-20T14:07:31Z|00013|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Jan 20 14:07:31 compute-1 ovn_controller[130490]: 2026-01-20T14:07:31Z|00014|main|INFO|OVS feature set changed, force recompute.
Jan 20 14:07:31 compute-1 ovn_controller[130490]: 2026-01-20T14:07:31Z|00015|ofctrl|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Jan 20 14:07:31 compute-1 ovn_controller[130490]: 2026-01-20T14:07:31Z|00016|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Jan 20 14:07:31 compute-1 ovn_controller[130490]: 2026-01-20T14:07:31Z|00017|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Jan 20 14:07:31 compute-1 ovn_controller[130490]: 2026-01-20T14:07:31Z|00018|ofctrl|INFO|ofctrl-wait-before-clear is now 8000 ms (was 0 ms)
Jan 20 14:07:31 compute-1 ovn_controller[130490]: 2026-01-20T14:07:31Z|00019|main|INFO|OVS OpenFlow connection reconnected,force recompute.
Jan 20 14:07:31 compute-1 ovn_controller[130490]: 2026-01-20T14:07:31Z|00020|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Jan 20 14:07:31 compute-1 ovn_controller[130490]: 2026-01-20T14:07:31Z|00021|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Jan 20 14:07:31 compute-1 ovn_controller[130490]: 2026-01-20T14:07:31Z|00022|main|INFO|OVS feature set changed, force recompute.
Jan 20 14:07:31 compute-1 ovn_controller[130490]: 2026-01-20T14:07:31Z|00023|features|INFO|OVS DB schema supports 4 flow table prefixes, our IDL supports: 4
Jan 20 14:07:31 compute-1 ovn_controller[130490]: 2026-01-20T14:07:31Z|00024|main|INFO|Setting flow table prefixes: ip_src, ip_dst, ipv6_src, ipv6_dst.
Jan 20 14:07:31 compute-1 ovn_controller[130490]: 2026-01-20T14:07:31Z|00001|statctrl(ovn_statctrl2)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Jan 20 14:07:31 compute-1 ovn_controller[130490]: 2026-01-20T14:07:31Z|00001|pinctrl(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Jan 20 14:07:31 compute-1 ovn_controller[130490]: 2026-01-20T14:07:31Z|00002|rconn(ovn_statctrl2)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Jan 20 14:07:31 compute-1 ovn_controller[130490]: 2026-01-20T14:07:31Z|00002|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Jan 20 14:07:31 compute-1 ovn_controller[130490]: 2026-01-20T14:07:31Z|00003|rconn(ovn_statctrl2)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Jan 20 14:07:31 compute-1 ovn_controller[130490]: 2026-01-20T14:07:31Z|00003|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Jan 20 14:07:31 compute-1 NetworkManager[49104]: <info>  [1768918051.8471] manager: (ovn-920572-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/19)
Jan 20 14:07:31 compute-1 NetworkManager[49104]: <info>  [1768918051.8485] manager: (ovn-367c1a-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/20)
Jan 20 14:07:31 compute-1 systemd-udevd[130630]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 14:07:31 compute-1 kernel: genev_sys_6081: entered promiscuous mode
Jan 20 14:07:31 compute-1 NetworkManager[49104]: <info>  [1768918051.8820] device (genev_sys_6081): carrier: link connected
Jan 20 14:07:31 compute-1 NetworkManager[49104]: <info>  [1768918051.8824] manager: (genev_sys_6081): new Generic device (/org/freedesktop/NetworkManager/Devices/21)
Jan 20 14:07:31 compute-1 systemd-udevd[130633]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 14:07:31 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:07:31 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:07:31 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:07:31.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:07:31 compute-1 NetworkManager[49104]: <info>  [1768918051.9277] manager: (ovn-7c9bfe-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/22)
Jan 20 14:07:32 compute-1 ceph-mon[81775]: pgmap v470: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:07:32 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:07:32 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:07:32 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:07:32.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:07:32 compute-1 python3.9[130760]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml
Jan 20 14:07:33 compute-1 sudo[130910]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-egkypfoarmidwikcjqhlakcgdesyvgat ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918053.5880013-1833-214174910689818/AnsiballZ_stat.py'
Jan 20 14:07:33 compute-1 sudo[130910]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:07:33 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:07:33 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:07:33 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:07:33.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:07:34 compute-1 python3.9[130912]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:07:34 compute-1 sudo[130910]: pam_unix(sudo:session): session closed for user root
Jan 20 14:07:34 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:07:34 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:07:34 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:07:34.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:07:34 compute-1 sudo[131033]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cgwxcksgricjowfdyvnixrddifcqyktr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918053.5880013-1833-214174910689818/AnsiballZ_copy.py'
Jan 20 14:07:34 compute-1 sudo[131033]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:07:34 compute-1 ceph-mon[81775]: pgmap v471: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:07:34 compute-1 python3.9[131035]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1768918053.5880013-1833-214174910689818/.source.yaml _original_basename=.agtvf657 follow=False checksum=aedaf657c77fb1feab67c7335f83a0d24eed0971 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:07:34 compute-1 sudo[131033]: pam_unix(sudo:session): session closed for user root
Jan 20 14:07:35 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:07:35 compute-1 sudo[131185]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cuuvmkxpldpcvjbzolosupsicchjfhgw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918055.19947-1878-201969413034774/AnsiballZ_command.py'
Jan 20 14:07:35 compute-1 sudo[131185]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:07:35 compute-1 python3.9[131187]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove open . other_config hw-offload _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 14:07:35 compute-1 ovs-vsctl[131188]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove open . other_config hw-offload
Jan 20 14:07:35 compute-1 sudo[131185]: pam_unix(sudo:session): session closed for user root
Jan 20 14:07:35 compute-1 sudo[131195]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:07:35 compute-1 sudo[131195]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:07:35 compute-1 sudo[131195]: pam_unix(sudo:session): session closed for user root
Jan 20 14:07:35 compute-1 sudo[131238]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:07:35 compute-1 sudo[131238]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:07:35 compute-1 sudo[131238]: pam_unix(sudo:session): session closed for user root
Jan 20 14:07:35 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:07:35 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:07:35 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:07:35.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:07:35 compute-1 sudo[131269]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:07:35 compute-1 sudo[131269]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:07:35 compute-1 sudo[131269]: pam_unix(sudo:session): session closed for user root
Jan 20 14:07:36 compute-1 sudo[131318]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Jan 20 14:07:36 compute-1 sudo[131318]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:07:36 compute-1 sudo[131457]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-covdffpnnjiwiryorgdxcnngydmllbaw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918055.955571-1902-129716643090834/AnsiballZ_command.py'
Jan 20 14:07:36 compute-1 sudo[131457]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:07:36 compute-1 sudo[131318]: pam_unix(sudo:session): session closed for user root
Jan 20 14:07:36 compute-1 sudo[131462]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:07:36 compute-1 sudo[131462]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:07:36 compute-1 sudo[131462]: pam_unix(sudo:session): session closed for user root
Jan 20 14:07:36 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:07:36 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:07:36 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:07:36.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:07:36 compute-1 sudo[131487]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:07:36 compute-1 sudo[131487]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:07:36 compute-1 python3.9[131461]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl get Open_vSwitch . external_ids:ovn-cms-options | sed 's/\"//g' _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 14:07:36 compute-1 sudo[131487]: pam_unix(sudo:session): session closed for user root
Jan 20 14:07:36 compute-1 ovs-vsctl[131513]: ovs|00001|db_ctl_base|ERR|no key "ovn-cms-options" in Open_vSwitch record "." column external_ids
Jan 20 14:07:36 compute-1 sudo[131457]: pam_unix(sudo:session): session closed for user root
Jan 20 14:07:36 compute-1 sudo[131515]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:07:36 compute-1 sudo[131515]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:07:36 compute-1 sudo[131515]: pam_unix(sudo:session): session closed for user root
Jan 20 14:07:36 compute-1 ceph-mon[81775]: pgmap v472: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:07:36 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:07:36 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:07:36 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:07:36 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:07:36 compute-1 sudo[131547]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 20 14:07:36 compute-1 sudo[131547]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:07:37 compute-1 sudo[131547]: pam_unix(sudo:session): session closed for user root
Jan 20 14:07:37 compute-1 sudo[131748]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fbfuslwutwtwlroeqtddiirjmzrwbxak ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918057.1033363-1944-60849681643945/AnsiballZ_command.py'
Jan 20 14:07:37 compute-1 sudo[131748]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:07:37 compute-1 python3.9[131750]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 14:07:37 compute-1 ovs-vsctl[131751]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options
Jan 20 14:07:37 compute-1 sudo[131748]: pam_unix(sudo:session): session closed for user root
Jan 20 14:07:37 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 20 14:07:37 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 20 14:07:37 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:07:37 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:07:37 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:07:37.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:07:38 compute-1 sshd-session[119727]: Connection closed by 192.168.122.30 port 35638
Jan 20 14:07:38 compute-1 sshd-session[119724]: pam_unix(sshd:session): session closed for user zuul
Jan 20 14:07:38 compute-1 systemd[1]: session-45.scope: Deactivated successfully.
Jan 20 14:07:38 compute-1 systemd[1]: session-45.scope: Consumed 1min 5.539s CPU time.
Jan 20 14:07:38 compute-1 systemd-logind[783]: Session 45 logged out. Waiting for processes to exit.
Jan 20 14:07:38 compute-1 systemd-logind[783]: Removed session 45.
Jan 20 14:07:38 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:07:38 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:07:38 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:07:38.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:07:38 compute-1 ceph-mon[81775]: pgmap v473: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:07:38 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]': finished
Jan 20 14:07:38 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:07:38 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 14:07:38 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:07:38 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 14:07:38 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 14:07:38 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:07:39 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:07:39 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:07:39 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:07:39.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:07:40 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:07:40 compute-1 ceph-mon[81775]: pgmap v474: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:07:40 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:07:40 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:07:40 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:07:40.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:07:41 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:07:41 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:07:41 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:07:41.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:07:41 compute-1 systemd[1]: Stopping User Manager for UID 0...
Jan 20 14:07:41 compute-1 systemd[130528]: Activating special unit Exit the Session...
Jan 20 14:07:41 compute-1 systemd[130528]: Stopped target Main User Target.
Jan 20 14:07:41 compute-1 systemd[130528]: Stopped target Basic System.
Jan 20 14:07:41 compute-1 systemd[130528]: Stopped target Paths.
Jan 20 14:07:41 compute-1 systemd[130528]: Stopped target Sockets.
Jan 20 14:07:41 compute-1 systemd[130528]: Stopped target Timers.
Jan 20 14:07:41 compute-1 systemd[130528]: Stopped Daily Cleanup of User's Temporary Directories.
Jan 20 14:07:41 compute-1 systemd[130528]: Closed D-Bus User Message Bus Socket.
Jan 20 14:07:41 compute-1 systemd[130528]: Stopped Create User's Volatile Files and Directories.
Jan 20 14:07:41 compute-1 systemd[130528]: Removed slice User Application Slice.
Jan 20 14:07:41 compute-1 systemd[130528]: Reached target Shutdown.
Jan 20 14:07:41 compute-1 systemd[130528]: Finished Exit the Session.
Jan 20 14:07:41 compute-1 systemd[130528]: Reached target Exit the Session.
Jan 20 14:07:41 compute-1 systemd[1]: user@0.service: Deactivated successfully.
Jan 20 14:07:41 compute-1 systemd[1]: Stopped User Manager for UID 0.
Jan 20 14:07:41 compute-1 systemd[1]: Stopping User Runtime Directory /run/user/0...
Jan 20 14:07:41 compute-1 systemd[1]: run-user-0.mount: Deactivated successfully.
Jan 20 14:07:41 compute-1 systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Jan 20 14:07:41 compute-1 systemd[1]: Stopped User Runtime Directory /run/user/0.
Jan 20 14:07:41 compute-1 systemd[1]: Removed slice User Slice of UID 0.
Jan 20 14:07:42 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:07:42 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:07:42 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:07:42.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:07:42 compute-1 ceph-mon[81775]: pgmap v475: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:07:43 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:07:43 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.002000057s ======
Jan 20 14:07:43 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:07:43.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000057s
Jan 20 14:07:44 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:07:44 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:07:44 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:07:44.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:07:44 compute-1 ceph-mon[81775]: pgmap v476: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:07:44 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:07:44 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:07:44 compute-1 sudo[131778]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:07:44 compute-1 sudo[131778]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:07:44 compute-1 sudo[131778]: pam_unix(sudo:session): session closed for user root
Jan 20 14:07:44 compute-1 sudo[131803]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 14:07:44 compute-1 sudo[131803]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:07:44 compute-1 sudo[131803]: pam_unix(sudo:session): session closed for user root
Jan 20 14:07:44 compute-1 sshd-session[131828]: Accepted publickey for zuul from 192.168.122.30 port 44360 ssh2: ECDSA SHA256:Yw0kyD5N4lqNgr1J3b5cYIIxKFrTRY8zW6kk+n6imz4
Jan 20 14:07:44 compute-1 systemd-logind[783]: New session 47 of user zuul.
Jan 20 14:07:44 compute-1 systemd[1]: Started Session 47 of User zuul.
Jan 20 14:07:45 compute-1 sshd-session[131828]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 20 14:07:45 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:07:45 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:07:45 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:07:45 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:07:45.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:07:46 compute-1 python3.9[131981]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 14:07:46 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:07:46 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:07:46 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:07:46.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:07:46 compute-1 ceph-mon[81775]: pgmap v477: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:07:47 compute-1 sudo[132135]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fgsccvecbuspcvzndkozdokvpornzclk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918066.7439604-63-259906312409229/AnsiballZ_file.py'
Jan 20 14:07:47 compute-1 sudo[132135]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:07:47 compute-1 python3.9[132137]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/openstack/neutron-ovn-metadata-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:07:47 compute-1 sudo[132135]: pam_unix(sudo:session): session closed for user root
Jan 20 14:07:47 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:07:47 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:07:47 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:07:47.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:07:48 compute-1 sudo[132287]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dzdmvgjphovcrbwjqjedxjelbailvaqj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918067.686929-63-59980930266864/AnsiballZ_file.py'
Jan 20 14:07:48 compute-1 sudo[132287]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:07:48 compute-1 python3.9[132289]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:07:48 compute-1 sudo[132287]: pam_unix(sudo:session): session closed for user root
Jan 20 14:07:48 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:07:48 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:07:48 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:07:48.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:07:48 compute-1 ceph-mon[81775]: pgmap v478: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:07:48 compute-1 sudo[132439]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cqmzywrocwofeeisanqhsisinjuuksnv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918068.414881-63-89719524648366/AnsiballZ_file.py'
Jan 20 14:07:48 compute-1 sudo[132439]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:07:49 compute-1 python3.9[132441]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/kill_scripts setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:07:49 compute-1 sudo[132439]: pam_unix(sudo:session): session closed for user root
Jan 20 14:07:49 compute-1 sudo[132591]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ikmbucgdzeofvyiorlltoxoverjcmhnv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918069.2354825-63-158612159617719/AnsiballZ_file.py'
Jan 20 14:07:49 compute-1 sudo[132591]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:07:49 compute-1 python3.9[132593]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/ovn-metadata-proxy setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:07:49 compute-1 sudo[132591]: pam_unix(sudo:session): session closed for user root
Jan 20 14:07:49 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:07:49 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:07:49 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:07:49.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:07:50 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:07:50 compute-1 sudo[132744]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-locjvibmcwihyxyqizultencutvfjrzu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918069.9901886-63-49821953185621/AnsiballZ_file.py'
Jan 20 14:07:50 compute-1 sudo[132744]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:07:50 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:07:50 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:07:50 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:07:50.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:07:50 compute-1 python3.9[132746]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/external/pids setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:07:50 compute-1 sudo[132744]: pam_unix(sudo:session): session closed for user root
Jan 20 14:07:50 compute-1 ceph-mon[81775]: pgmap v479: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:07:51 compute-1 python3.9[132896]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 14:07:51 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:07:51 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:07:51 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:07:51.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:07:52 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:07:52 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:07:52 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:07:52.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:07:52 compute-1 sudo[133046]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gydnahnksflgzgofmeoozwbycubgyrft ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918072.0046108-195-260745768180195/AnsiballZ_seboolean.py'
Jan 20 14:07:52 compute-1 sudo[133046]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:07:52 compute-1 ceph-mon[81775]: pgmap v480: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:07:52 compute-1 python3.9[133048]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Jan 20 14:07:53 compute-1 sudo[133046]: pam_unix(sudo:session): session closed for user root
Jan 20 14:07:53 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:07:53 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:07:53 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:07:53.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:07:54 compute-1 python3.9[133198]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/ovn_metadata_haproxy_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:07:54 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:07:54 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:07:54 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:07:54.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:07:54 compute-1 ceph-mon[81775]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 20 14:07:54 compute-1 ceph-mon[81775]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Cumulative writes: 2125 writes, 12K keys, 2125 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.04 MB/s
                                           Cumulative WAL: 2125 writes, 2125 syncs, 1.00 writes per sync, written: 0.02 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 2125 writes, 12K keys, 2125 commit groups, 1.0 writes per commit group, ingest: 23.62 MB, 0.04 MB/s
                                           Interval WAL: 2125 writes, 2125 syncs, 1.00 writes per sync, written: 0.02 GB, 0.04 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     47.2      0.30              0.05         5    0.059       0      0       0.0       0.0
                                             L6      1/0    7.86 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.3     80.0     67.1      0.47              0.13         4    0.118     16K   1784       0.0       0.0
                                            Sum      1/0    7.86 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   3.3     49.1     59.4      0.77              0.18         9    0.085     16K   1784       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   3.3     49.3     59.6      0.76              0.18         8    0.095     16K   1784       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0     80.0     67.1      0.47              0.13         4    0.118     16K   1784       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     47.5      0.29              0.05         4    0.073       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.7      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.014, interval 0.014
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.04 GB write, 0.08 MB/s write, 0.04 GB read, 0.06 MB/s read, 0.8 seconds
                                           Interval compaction: 0.04 GB write, 0.08 MB/s write, 0.04 GB read, 0.06 MB/s read, 0.8 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564d515a71f0#2 capacity: 308.00 MB usage: 1.29 MB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 0 last_secs: 5.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(62,1.13 MB,0.365681%) FilterBlock(9,53.86 KB,0.017077%) IndexBlock(9,112.48 KB,0.0356649%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Jan 20 14:07:54 compute-1 ceph-mon[81775]: pgmap v481: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:07:54 compute-1 python3.9[133319]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/ovn_metadata_haproxy_wrapper mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1768918073.6171181-219-24433097268695/.source follow=False _original_basename=haproxy.j2 checksum=a5072e7b19ca96a1f495d94f97f31903737cfd27 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:07:55 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:07:55 compute-1 python3.9[133469]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:07:55 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:07:55 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:07:55 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:07:55.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:07:56 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:07:56 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:07:56 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:07:56.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:07:56 compute-1 python3.9[133590]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/haproxy-kill mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1768918075.2710576-264-91847368609394/.source follow=False _original_basename=kill-script.j2 checksum=2dfb5489f491f61b95691c3bf95fa1fe48ff3700 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:07:56 compute-1 ceph-mon[81775]: pgmap v482: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:07:57 compute-1 sudo[133740]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ohorjpxdpvrxskdgzzgdbqpujfizyhdb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918076.903677-315-226044217711356/AnsiballZ_setup.py'
Jan 20 14:07:57 compute-1 sudo[133740]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:07:57 compute-1 python3.9[133742]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 20 14:07:57 compute-1 sudo[133740]: pam_unix(sudo:session): session closed for user root
Jan 20 14:07:57 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:07:57 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:07:57 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:07:57.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:07:58 compute-1 sudo[133824]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jvacvnjzgxovrvjfbjlwzublyvcrtxyz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918076.903677-315-226044217711356/AnsiballZ_dnf.py'
Jan 20 14:07:58 compute-1 sudo[133824]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:07:58 compute-1 python3.9[133826]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 20 14:07:58 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:07:58 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:07:58 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:07:58.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:07:59 compute-1 ceph-mon[81775]: pgmap v483: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:07:59 compute-1 sudo[133824]: pam_unix(sudo:session): session closed for user root
Jan 20 14:07:59 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:07:59 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:07:59 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:07:59.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:08:00 compute-1 ceph-mon[81775]: pgmap v484: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:08:00 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:08:00 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:08:00 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:08:00 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:08:00.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:08:00 compute-1 sudo[133977]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bgposdyaeyyzyvdxnnytmypuagmqeufd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918080.313573-351-205102112679748/AnsiballZ_systemd.py'
Jan 20 14:08:00 compute-1 sudo[133977]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:08:01 compute-1 python3.9[133979]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 20 14:08:01 compute-1 sudo[133977]: pam_unix(sudo:session): session closed for user root
Jan 20 14:08:01 compute-1 ovn_controller[130490]: 2026-01-20T14:08:01Z|00025|memory|INFO|16256 kB peak resident set size after 29.7 seconds
Jan 20 14:08:01 compute-1 ovn_controller[130490]: 2026-01-20T14:08:01Z|00026|memory|INFO|idl-cells-OVN_Southbound:273 idl-cells-Open_vSwitch:642 ofctrl_desired_flow_usage-KB:7 ofctrl_installed_flow_usage-KB:5 ofctrl_sb_flow_ref_usage-KB:2
Jan 20 14:08:01 compute-1 podman[133981]: 2026-01-20 14:08:01.48534145 +0000 UTC m=+0.133152505 container health_status 72be95c12e041eb2cd1a16ad70dfa1391d9f03a5f3180a614ead9459dcfcd46f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 20 14:08:01 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:08:01 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:08:01 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:08:01.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:08:02 compute-1 python3.9[134158]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/01-rootwrap.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:08:02 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:08:02 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:08:02 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:08:02.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:08:02 compute-1 ceph-mon[81775]: pgmap v485: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:08:02 compute-1 python3.9[134279]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/01-rootwrap.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1768918081.6440496-375-153670055084101/.source.conf follow=False _original_basename=rootwrap.conf.j2 checksum=11f2cfb4b7d97b2cef3c2c2d88089e6999cffe22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:08:03 compute-1 python3.9[134431]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:08:03 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:08:03 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:08:03 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:08:03.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:08:04 compute-1 python3.9[134552]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1768918083.1653614-375-173155184778417/.source.conf follow=False _original_basename=neutron-ovn-metadata-agent.conf.j2 checksum=8bc979abbe81c2cf3993a225517a7e2483e20443 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:08:04 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:08:04 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:08:04 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:08:04.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:08:04 compute-1 ceph-mon[81775]: pgmap v486: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:08:05 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:08:05 compute-1 sshd-session[134303]: Invalid user software from 116.99.171.211 port 42720
Jan 20 14:08:05 compute-1 python3.9[134702]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/10-neutron-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:08:05 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:08:05 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:08:05 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:08:05.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:08:06 compute-1 sshd-session[134303]: Connection closed by invalid user software 116.99.171.211 port 42720 [preauth]
Jan 20 14:08:06 compute-1 python3.9[134823]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/10-neutron-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1768918085.2148197-507-106543190817247/.source.conf _original_basename=10-neutron-metadata.conf follow=False checksum=ca7d4d155f5b812fab1a3b70e34adb495d291b8d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:08:06 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:08:06 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:08:06 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:08:06.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:08:06 compute-1 ceph-mon[81775]: pgmap v487: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:08:07 compute-1 python3.9[134973]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/05-nova-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:08:07 compute-1 python3.9[135094]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/05-nova-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1768918086.617277-507-186687113227177/.source.conf _original_basename=05-nova-metadata.conf follow=False checksum=a14d6b38898a379cd37fc0bf365d17f10859446f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:08:07 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:08:07 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:08:07 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:08:07.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:08:08 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:08:08 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:08:08 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:08:08.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:08:08 compute-1 python3.9[135244]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 20 14:08:09 compute-1 ceph-mon[81775]: pgmap v488: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:08:09 compute-1 sudo[135396]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xvcvsxioekkifopdjqnzdgvoxspvunbb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918088.8704941-621-219149800464381/AnsiballZ_file.py'
Jan 20 14:08:09 compute-1 sudo[135396]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:08:09 compute-1 python3.9[135398]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:08:09 compute-1 sudo[135396]: pam_unix(sudo:session): session closed for user root
Jan 20 14:08:09 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:08:09 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:08:09 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:08:09.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:08:10 compute-1 sudo[135548]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dsctgrnpudzfiufkinqthigtedencojz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918089.7464852-645-127407447838556/AnsiballZ_stat.py'
Jan 20 14:08:10 compute-1 sudo[135548]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:08:10 compute-1 python3.9[135550]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:08:10 compute-1 sudo[135548]: pam_unix(sudo:session): session closed for user root
Jan 20 14:08:10 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:08:10 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:08:10 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:08:10 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:08:10.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:08:10 compute-1 sudo[135626]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gpieduunqcpdhxwfemwkttnqiuplzbcz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918089.7464852-645-127407447838556/AnsiballZ_file.py'
Jan 20 14:08:10 compute-1 sudo[135626]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:08:10 compute-1 python3.9[135628]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:08:10 compute-1 sudo[135626]: pam_unix(sudo:session): session closed for user root
Jan 20 14:08:11 compute-1 sudo[135778]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jwaukztktywynfgamtdgobudbttsaidn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918090.8992627-645-172695713618776/AnsiballZ_stat.py'
Jan 20 14:08:11 compute-1 sudo[135778]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:08:11 compute-1 ceph-mon[81775]: pgmap v489: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:08:11 compute-1 python3.9[135780]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:08:11 compute-1 sudo[135778]: pam_unix(sudo:session): session closed for user root
Jan 20 14:08:11 compute-1 sudo[135856]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-npyxbngdcdtpjbdostgbeuqrlzdrnkwd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918090.8992627-645-172695713618776/AnsiballZ_file.py'
Jan 20 14:08:11 compute-1 sudo[135856]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:08:11 compute-1 python3.9[135858]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:08:11 compute-1 sudo[135856]: pam_unix(sudo:session): session closed for user root
Jan 20 14:08:11 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:08:11 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:08:11 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:08:11.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:08:12 compute-1 ceph-mon[81775]: pgmap v490: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:08:12 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:08:12 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:08:12 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:08:12.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:08:12 compute-1 sudo[136008]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uejggvdnovpticyosibiyktyyvjbrasz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918092.233284-714-181196422128384/AnsiballZ_file.py'
Jan 20 14:08:12 compute-1 sudo[136008]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:08:12 compute-1 python3.9[136010]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:08:12 compute-1 sudo[136008]: pam_unix(sudo:session): session closed for user root
Jan 20 14:08:13 compute-1 sudo[136160]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jbaftspyhqmturdqetizuimimjgnrzik ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918093.061072-738-141753831046849/AnsiballZ_stat.py'
Jan 20 14:08:13 compute-1 sudo[136160]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:08:13 compute-1 python3.9[136162]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:08:13 compute-1 sudo[136160]: pam_unix(sudo:session): session closed for user root
Jan 20 14:08:13 compute-1 sudo[136238]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vfvnczdquacmzsyflrwfaaovueaojjhm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918093.061072-738-141753831046849/AnsiballZ_file.py'
Jan 20 14:08:13 compute-1 sudo[136238]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:08:13 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:08:13 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:08:13 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:08:13.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:08:14 compute-1 python3.9[136240]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:08:14 compute-1 sudo[136238]: pam_unix(sudo:session): session closed for user root
Jan 20 14:08:14 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:08:14 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:08:14 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:08:14.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:08:14 compute-1 sudo[136390]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gendecellgxcihbwtyvnbipfpyzjdeoz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918094.399883-774-208841421141178/AnsiballZ_stat.py'
Jan 20 14:08:14 compute-1 sudo[136390]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:08:14 compute-1 ceph-mon[81775]: pgmap v491: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:08:14 compute-1 python3.9[136392]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:08:14 compute-1 sudo[136390]: pam_unix(sudo:session): session closed for user root
Jan 20 14:08:15 compute-1 sudo[136468]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xabstozxbotnodhnaucydnqvgsjoiepi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918094.399883-774-208841421141178/AnsiballZ_file.py'
Jan 20 14:08:15 compute-1 sudo[136468]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:08:15 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:08:15 compute-1 python3.9[136470]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:08:15 compute-1 sudo[136468]: pam_unix(sudo:session): session closed for user root
Jan 20 14:08:15 compute-1 sudo[136620]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ovigwzvumwypjghczudttoqhcqwabthb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918095.5432594-810-113351077816958/AnsiballZ_systemd.py'
Jan 20 14:08:15 compute-1 sudo[136620]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:08:15 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:08:15 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:08:15 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:08:15.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:08:16 compute-1 python3.9[136622]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 20 14:08:16 compute-1 systemd[1]: Reloading.
Jan 20 14:08:16 compute-1 systemd-rc-local-generator[136645]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 14:08:16 compute-1 systemd-sysv-generator[136649]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 14:08:16 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:08:16 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:08:16 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:08:16.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:08:16 compute-1 sudo[136620]: pam_unix(sudo:session): session closed for user root
Jan 20 14:08:16 compute-1 ceph-mon[81775]: pgmap v492: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:08:17 compute-1 sudo[136810]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rhawglgnxqjvallxqvlzwocyewymaxdz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918096.8368754-834-154610438427410/AnsiballZ_stat.py'
Jan 20 14:08:17 compute-1 sudo[136810]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:08:17 compute-1 python3.9[136812]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:08:17 compute-1 sudo[136810]: pam_unix(sudo:session): session closed for user root
Jan 20 14:08:17 compute-1 sudo[136888]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yqxwvaieaaajbocbhpodbuaqfgaolwng ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918096.8368754-834-154610438427410/AnsiballZ_file.py'
Jan 20 14:08:17 compute-1 sudo[136888]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:08:17 compute-1 python3.9[136890]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:08:17 compute-1 sudo[136888]: pam_unix(sudo:session): session closed for user root
Jan 20 14:08:17 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:08:17 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:08:17 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:08:17.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:08:18 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:08:18 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:08:18 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:08:18.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:08:18 compute-1 sudo[137040]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ietvgogiflqrhypaakihdtbhgpqqjdtg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918098.216758-870-209611502241822/AnsiballZ_stat.py'
Jan 20 14:08:18 compute-1 sudo[137040]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:08:18 compute-1 python3.9[137042]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:08:18 compute-1 sudo[137040]: pam_unix(sudo:session): session closed for user root
Jan 20 14:08:18 compute-1 ceph-mon[81775]: pgmap v493: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:08:19 compute-1 sudo[137118]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ugzciuntddaqgpvvvujgsmnwfkvjeyju ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918098.216758-870-209611502241822/AnsiballZ_file.py'
Jan 20 14:08:19 compute-1 sudo[137118]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:08:19 compute-1 python3.9[137120]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:08:19 compute-1 sudo[137118]: pam_unix(sudo:session): session closed for user root
Jan 20 14:08:19 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:08:19 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:08:19 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:08:19.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:08:20 compute-1 ceph-mon[81775]: pgmap v494: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:08:20 compute-1 sudo[137270]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vsdrfblpatutchgkganaympastqbtbff ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918099.9144888-907-31341462483350/AnsiballZ_systemd.py'
Jan 20 14:08:20 compute-1 sudo[137270]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:08:20 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:08:20 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:08:20 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:08:20 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:08:20.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:08:20 compute-1 python3.9[137272]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 20 14:08:20 compute-1 systemd[1]: Reloading.
Jan 20 14:08:20 compute-1 systemd-rc-local-generator[137300]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 14:08:20 compute-1 systemd-sysv-generator[137303]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 14:08:20 compute-1 systemd[1]: Starting Create netns directory...
Jan 20 14:08:20 compute-1 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Jan 20 14:08:20 compute-1 systemd[1]: netns-placeholder.service: Deactivated successfully.
Jan 20 14:08:20 compute-1 systemd[1]: Finished Create netns directory.
Jan 20 14:08:20 compute-1 sudo[137270]: pam_unix(sudo:session): session closed for user root
Jan 20 14:08:21 compute-1 sudo[137465]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-veqfznhtusrshkpxezsnvqzubtbxupye ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918101.4205744-936-5566026767258/AnsiballZ_file.py'
Jan 20 14:08:21 compute-1 sudo[137465]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:08:21 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:08:21 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:08:21 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:08:21.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:08:22 compute-1 python3.9[137467]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:08:22 compute-1 sudo[137465]: pam_unix(sudo:session): session closed for user root
Jan 20 14:08:22 compute-1 ceph-mon[81775]: pgmap v495: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:08:22 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:08:22 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:08:22 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:08:22.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:08:22 compute-1 sudo[137617]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-furhemgszunolqrxykgngfgagusjynsq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918102.2309034-960-242942297915627/AnsiballZ_stat.py'
Jan 20 14:08:22 compute-1 sudo[137617]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:08:22 compute-1 python3.9[137619]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_metadata_agent/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:08:22 compute-1 sudo[137617]: pam_unix(sudo:session): session closed for user root
Jan 20 14:08:23 compute-1 sudo[137742]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zjlwwemwzqxuabrttybhyjzoneritvng ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918102.2309034-960-242942297915627/AnsiballZ_copy.py'
Jan 20 14:08:23 compute-1 sudo[137742]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:08:23 compute-1 python3.9[137744]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_metadata_agent/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1768918102.2309034-960-242942297915627/.source _original_basename=healthcheck follow=False checksum=898a5a1fcd473cf731177fc866e3bd7ebf20a131 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:08:23 compute-1 sudo[137742]: pam_unix(sudo:session): session closed for user root
Jan 20 14:08:23 compute-1 sshd-session[137620]: Invalid user joro from 116.99.171.211 port 51012
Jan 20 14:08:23 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:08:23 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:08:23 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:08:23.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:08:24 compute-1 sshd-session[137620]: Connection closed by invalid user joro 116.99.171.211 port 51012 [preauth]
Jan 20 14:08:24 compute-1 sudo[137894]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qusmbshsssruzylquapuwusbcfzbrygp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918104.0912392-1011-68679626034040/AnsiballZ_file.py'
Jan 20 14:08:24 compute-1 sudo[137894]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:08:24 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:08:24 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:08:24 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:08:24.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:08:24 compute-1 python3.9[137896]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:08:24 compute-1 sudo[137894]: pam_unix(sudo:session): session closed for user root
Jan 20 14:08:24 compute-1 ceph-mon[81775]: pgmap v496: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:08:25 compute-1 sudo[138046]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bqeyjurzjhmfzjbfatzzocfixafrqcol ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918104.9092436-1035-199909061654402/AnsiballZ_file.py'
Jan 20 14:08:25 compute-1 sudo[138046]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:08:25 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:08:25 compute-1 python3.9[138048]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:08:25 compute-1 sudo[138046]: pam_unix(sudo:session): session closed for user root
Jan 20 14:08:25 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:08:25 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:08:25 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:08:25.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:08:26 compute-1 sudo[138198]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-urvqekylqohzgspftpyliwrzkzdjefsv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918105.743533-1059-40802652787678/AnsiballZ_stat.py'
Jan 20 14:08:26 compute-1 sudo[138198]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:08:26 compute-1 python3.9[138200]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_metadata_agent.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:08:26 compute-1 sudo[138198]: pam_unix(sudo:session): session closed for user root
Jan 20 14:08:26 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:08:26 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:08:26 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:08:26.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:08:26 compute-1 ceph-mon[81775]: pgmap v497: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:08:26 compute-1 sudo[138321]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rnhznmlmrwdsuqaetfkyarfkqlmrgoxb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918105.743533-1059-40802652787678/AnsiballZ_copy.py'
Jan 20 14:08:26 compute-1 sudo[138321]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:08:27 compute-1 python3.9[138323]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_metadata_agent.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1768918105.743533-1059-40802652787678/.source.json _original_basename=.8tixbcl9 follow=False checksum=a908ef151ded3a33ae6c9ac8be72a35e5e33b9dc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:08:27 compute-1 sudo[138321]: pam_unix(sudo:session): session closed for user root
Jan 20 14:08:27 compute-1 python3.9[138473]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:08:27 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:08:27 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:08:27 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:08:27.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:08:28 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:08:28 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:08:28 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:08:28.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:08:29 compute-1 ceph-mon[81775]: pgmap v498: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:08:29 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:08:29 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:08:29 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:08:29.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:08:30 compute-1 sudo[138894]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vhcmyfbtmtldbfnrbwjkgmhbyjnsdsnt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918109.7311406-1179-10929797296009/AnsiballZ_container_config_data.py'
Jan 20 14:08:30 compute-1 sudo[138894]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:08:30 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:08:30 compute-1 python3.9[138896]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_pattern=*.json debug=False
Jan 20 14:08:30 compute-1 sudo[138894]: pam_unix(sudo:session): session closed for user root
Jan 20 14:08:30 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:08:30 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:08:30 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:08:30.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:08:30 compute-1 ceph-mon[81775]: pgmap v499: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:08:31 compute-1 sudo[139046]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mdaygmqnekducrvpvrashfyrredqspmz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918110.8041492-1212-51308594810865/AnsiballZ_container_config_hash.py'
Jan 20 14:08:31 compute-1 sudo[139046]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:08:31 compute-1 python3.9[139048]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 20 14:08:31 compute-1 sudo[139046]: pam_unix(sudo:session): session closed for user root
Jan 20 14:08:31 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:08:31 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:08:31 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:08:31.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:08:32 compute-1 podman[139096]: 2026-01-20 14:08:32.117718775 +0000 UTC m=+0.153262074 container health_status 72be95c12e041eb2cd1a16ad70dfa1391d9f03a5f3180a614ead9459dcfcd46f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0)
Jan 20 14:08:32 compute-1 sudo[139224]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ubzhwsjcrtndbeuoyagwwjfqctgqhdpn ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1768918111.9232252-1242-279832624845734/AnsiballZ_edpm_container_manage.py'
Jan 20 14:08:32 compute-1 sudo[139224]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:08:32 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:08:32 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:08:32 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:08:32.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:08:32 compute-1 python3[139226]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_id=ovn_metadata_agent config_overrides={} config_patterns=*.json containers=['ovn_metadata_agent'] log_base_path=/var/log/containers/stdouts debug=False
Jan 20 14:08:33 compute-1 ceph-mon[81775]: pgmap v500: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:08:34 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:08:34 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:08:34 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:08:34.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:08:34 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:08:34 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:08:34 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:08:34.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:08:34 compute-1 ceph-mon[81775]: pgmap v501: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:08:35 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:08:36 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:08:36 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:08:36 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:08:36.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:08:36 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:08:36 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:08:36 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:08:36.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:08:38 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:08:38 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:08:38 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:08:38.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:08:38 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:08:38 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:08:38 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:08:38.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:08:39 compute-1 ceph-mon[81775]: pgmap v502: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:08:39 compute-1 ceph-mon[81775]: pgmap v503: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:08:40 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:08:40 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:08:40 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:08:40.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:08:40 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:08:40 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:08:40 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:08:40 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:08:40.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:08:40 compute-1 podman[139239]: 2026-01-20 14:08:40.529577605 +0000 UTC m=+7.734853241 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 20 14:08:40 compute-1 podman[139370]: 2026-01-20 14:08:40.623429461 +0000 UTC m=+0.017585661 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 20 14:08:40 compute-1 podman[139370]: 2026-01-20 14:08:40.733570649 +0000 UTC m=+0.127726859 container create 533cd8e0c3b8b7f910f15cef49e7f879ab46c12ce646a35628c15f13725eb500 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 20 14:08:40 compute-1 python3[139226]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_metadata_agent --cgroupns=host --conmon-pidfile /run/ovn_metadata_agent.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d --healthcheck-command /openstack/healthcheck --label config_id=ovn_metadata_agent --label container_name=ovn_metadata_agent --label managed_by=edpm_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']} --log-driver journald --log-level info --network host --pid host --privileged=True --user root --volume /run/openvswitch:/run/openvswitch:z --volume /var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z --volume /run/netns:/run/netns:shared --volume /var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro --volume /var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 20 14:08:40 compute-1 sudo[139224]: pam_unix(sudo:session): session closed for user root
Jan 20 14:08:40 compute-1 sshd-session[139321]: Invalid user office from 116.99.171.211 port 59714
Jan 20 14:08:41 compute-1 ceph-mon[81775]: pgmap v504: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:08:41 compute-1 sshd-session[139321]: Connection closed by invalid user office 116.99.171.211 port 59714 [preauth]
Jan 20 14:08:42 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:08:42 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:08:42 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:08:42.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:08:42 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:08:42 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:08:42 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:08:42.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:08:44 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:08:44 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:08:44 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:08:44.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:08:44 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:08:44 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:08:44 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:08:44.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:08:45 compute-1 sudo[139433]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:08:45 compute-1 sudo[139433]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:08:45 compute-1 sudo[139433]: pam_unix(sudo:session): session closed for user root
Jan 20 14:08:45 compute-1 sudo[139458]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:08:45 compute-1 sudo[139458]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:08:45 compute-1 sudo[139458]: pam_unix(sudo:session): session closed for user root
Jan 20 14:08:45 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:08:45 compute-1 sudo[139483]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:08:45 compute-1 sudo[139483]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:08:45 compute-1 sudo[139483]: pam_unix(sudo:session): session closed for user root
Jan 20 14:08:45 compute-1 sudo[139508]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 20 14:08:45 compute-1 sudo[139508]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:08:45 compute-1 sudo[139508]: pam_unix(sudo:session): session closed for user root
Jan 20 14:08:46 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:08:46 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:08:46 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:08:46.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:08:46 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:08:46 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:08:46 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:08:46.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:08:48 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:08:48 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:08:48 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:08:48.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:08:48 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:08:48 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:08:48 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:08:48.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:08:48 compute-1 radosgw[83787]: INFO: RGWReshardLock::lock found lock on reshard.0000000000 to be held by another RGW process; skipping for now
Jan 20 14:08:50 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:08:50 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:08:50 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:08:50.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:08:50 compute-1 radosgw[83787]: INFO: RGWReshardLock::lock found lock on reshard.0000000001 to be held by another RGW process; skipping for now
Jan 20 14:08:50 compute-1 ceph-mon[81775]: pgmap v505: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:08:50 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:08:50 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:08:50 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:08:50 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:08:50.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:08:50 compute-1 radosgw[83787]: INFO: RGWReshardLock::lock found lock on reshard.0000000002 to be held by another RGW process; skipping for now
Jan 20 14:08:51 compute-1 sshd-session[139565]: Invalid user george from 116.99.171.211 port 39128
Jan 20 14:08:51 compute-1 sshd-session[139565]: Connection closed by invalid user george 116.99.171.211 port 39128 [preauth]
Jan 20 14:08:52 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:08:52 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:08:52 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:08:52.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:08:52 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:08:52 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:08:52 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:08:52.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:08:54 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:08:54 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:08:54 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:08:54.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:08:54 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:08:54 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:08:54 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:08:54.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:08:54 compute-1 ceph-mon[81775]: pgmap v506: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:08:54 compute-1 ceph-mon[81775]: pgmap v507: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:08:54 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 20 14:08:54 compute-1 ceph-mon[81775]: pgmap v508: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:08:54 compute-1 ceph-mon[81775]: pgmap v509: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:08:55 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:08:56 compute-1 radosgw[83787]: INFO: RGWReshardLock::lock found lock on reshard.0000000004 to be held by another RGW process; skipping for now
Jan 20 14:08:56 compute-1 radosgw[83787]: INFO: RGWReshardLock::lock found lock on reshard.0000000006 to be held by another RGW process; skipping for now
Jan 20 14:08:56 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:08:56 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:08:56 compute-1 radosgw[83787]: INFO: RGWReshardLock::lock found lock on reshard.0000000008 to be held by another RGW process; skipping for now
Jan 20 14:08:56 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:08:56.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:08:56 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:08:56 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:08:56 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:08:56.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:08:58 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:08:58 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:08:58 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:08:58.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:08:58 compute-1 radosgw[83787]: INFO: RGWReshardLock::lock found lock on reshard.0000000010 to be held by another RGW process; skipping for now
Jan 20 14:08:58 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:08:58 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:08:58 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:08:58.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:08:58 compute-1 radosgw[83787]: INFO: RGWReshardLock::lock found lock on reshard.0000000011 to be held by another RGW process; skipping for now
Jan 20 14:08:59 compute-1 ceph-mon[81775]: pgmap v510: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 1.5 KiB/s rd, 0 B/s wr, 2 op/s
Jan 20 14:08:59 compute-1 ceph-mon[81775]: pgmap v511: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 1.5 KiB/s rd, 0 B/s wr, 2 op/s
Jan 20 14:08:59 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:08:59 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 14:08:59 compute-1 sshd-session[139567]: Invalid user admian from 116.99.171.211 port 51788
Jan 20 14:09:00 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:09:00 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:09:00 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:09:00.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:09:00 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:09:00 compute-1 sshd-session[139567]: Connection closed by invalid user admian 116.99.171.211 port 51788 [preauth]
Jan 20 14:09:00 compute-1 ceph-mon[81775]: pgmap v512: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 4.0 KiB/s rd, 0 B/s wr, 6 op/s
Jan 20 14:09:00 compute-1 ceph-mon[81775]: pgmap v513: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 4.5 KiB/s rd, 0 B/s wr, 7 op/s
Jan 20 14:09:00 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:09:00 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 14:09:00 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 14:09:00 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:09:00 compute-1 ceph-mon[81775]: pgmap v514: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 5.4 KiB/s rd, 0 B/s wr, 8 op/s
Jan 20 14:09:00 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:09:00 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:09:00 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:09:00.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:09:02 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:09:02 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:09:02 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:09:02.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:09:02 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:09:02 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:09:02 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:09:02.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:09:03 compute-1 podman[139571]: 2026-01-20 14:09:03.076683939 +0000 UTC m=+0.113756302 container health_status 72be95c12e041eb2cd1a16ad70dfa1391d9f03a5f3180a614ead9459dcfcd46f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 20 14:09:03 compute-1 ceph-mon[81775]: pgmap v515: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 12 KiB/s rd, 0 B/s wr, 19 op/s
Jan 20 14:09:04 compute-1 sshd-session[139569]: Invalid user admin from 116.99.171.211 port 35724
Jan 20 14:09:04 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:09:04 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:09:04 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:09:04.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:09:04 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:09:04 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:09:04 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:09:04.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:09:04 compute-1 sshd-session[139569]: Connection closed by invalid user admin 116.99.171.211 port 35724 [preauth]
Jan 20 14:09:05 compute-1 ceph-mon[81775]: pgmap v516: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 10 KiB/s rd, 0 B/s wr, 16 op/s
Jan 20 14:09:05 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:09:06 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:09:06 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:09:06 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:09:06.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:09:06 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:09:06 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:09:06 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:09:06.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:09:06 compute-1 ceph-mon[81775]: pgmap v517: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 19 KiB/s rd, 0 B/s wr, 31 op/s
Jan 20 14:09:07 compute-1 sudo[139721]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fttbjdvkehgjrwhqakaltjeyktgylcrp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918146.8858554-1266-189058716801144/AnsiballZ_stat.py'
Jan 20 14:09:07 compute-1 sudo[139721]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:09:07 compute-1 python3.9[139723]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 20 14:09:07 compute-1 sudo[139721]: pam_unix(sudo:session): session closed for user root
Jan 20 14:09:08 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:09:08 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:09:08 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:09:08.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:09:08 compute-1 ceph-mon[81775]: pgmap v518: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 18 KiB/s rd, 0 B/s wr, 29 op/s
Jan 20 14:09:08 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:09:08 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:09:08 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:09:08.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:09:09 compute-1 sudo[139875]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qxtqsltfpnkdtifjndzrrvzsngslruyi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918149.0452592-1293-167888322593057/AnsiballZ_file.py'
Jan 20 14:09:09 compute-1 sudo[139875]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:09:09 compute-1 python3.9[139877]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:09:09 compute-1 sudo[139875]: pam_unix(sudo:session): session closed for user root
Jan 20 14:09:10 compute-1 sudo[139951]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-niruidfbksrwaihgocrirbzzyopbimoa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918149.0452592-1293-167888322593057/AnsiballZ_stat.py'
Jan 20 14:09:10 compute-1 sudo[139951]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:09:10 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:09:10 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:09:10 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:09:10.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:09:10 compute-1 ceph-mon[81775]: pgmap v519: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 26 KiB/s rd, 0 B/s wr, 43 op/s
Jan 20 14:09:10 compute-1 python3.9[139953]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 20 14:09:10 compute-1 sudo[139951]: pam_unix(sudo:session): session closed for user root
Jan 20 14:09:10 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:09:10 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:09:10 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:09:10 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:09:10.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:09:11 compute-1 sudo[140103]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ugcvnewlrritbmajbiwbubgchsqborbt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918150.3079674-1293-134556555735601/AnsiballZ_copy.py'
Jan 20 14:09:11 compute-1 sudo[140103]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:09:11 compute-1 python3.9[140105]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1768918150.3079674-1293-134556555735601/source dest=/etc/systemd/system/edpm_ovn_metadata_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:09:11 compute-1 sudo[140103]: pam_unix(sudo:session): session closed for user root
Jan 20 14:09:11 compute-1 sudo[140181]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ludmkwrhcfgobzxzfoqjacwvwdhzlnpn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918150.3079674-1293-134556555735601/AnsiballZ_systemd.py'
Jan 20 14:09:11 compute-1 sudo[140181]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:09:12 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:09:12 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:09:12 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:09:12.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:09:12 compute-1 python3.9[140183]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 20 14:09:12 compute-1 systemd[1]: Reloading.
Jan 20 14:09:12 compute-1 systemd-rc-local-generator[140211]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 14:09:12 compute-1 systemd-sysv-generator[140215]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 14:09:12 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:09:12 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:09:12 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:09:12.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:09:12 compute-1 sshd-session[140102]: Invalid user joggler from 116.99.171.211 port 35752
Jan 20 14:09:12 compute-1 sudo[140181]: pam_unix(sudo:session): session closed for user root
Jan 20 14:09:13 compute-1 sshd-session[140102]: Connection closed by invalid user joggler 116.99.171.211 port 35752 [preauth]
Jan 20 14:09:13 compute-1 sudo[140291]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sgxiysihyyelpqhyrrnfzkuzoerstddy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918150.3079674-1293-134556555735601/AnsiballZ_systemd.py'
Jan 20 14:09:13 compute-1 sudo[140291]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:09:13 compute-1 python3.9[140293]: ansible-systemd Invoked with state=restarted name=edpm_ovn_metadata_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 20 14:09:13 compute-1 systemd[1]: Reloading.
Jan 20 14:09:14 compute-1 systemd-rc-local-generator[140322]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 14:09:14 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:09:14 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:09:14 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:09:14.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:09:14 compute-1 systemd-sysv-generator[140326]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 14:09:14 compute-1 systemd[1]: Starting ovn_metadata_agent container...
Jan 20 14:09:14 compute-1 systemd[1]: Started libcrun container.
Jan 20 14:09:14 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1fd2634cd3aeaaed9ce404ebca3597e4aad63fcf2325112a68effa8e01d4ac52/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff)
Jan 20 14:09:14 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1fd2634cd3aeaaed9ce404ebca3597e4aad63fcf2325112a68effa8e01d4ac52/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 20 14:09:14 compute-1 systemd[1]: Started /usr/bin/podman healthcheck run 533cd8e0c3b8b7f910f15cef49e7f879ab46c12ce646a35628c15f13725eb500.
Jan 20 14:09:14 compute-1 podman[140333]: 2026-01-20 14:09:14.477703334 +0000 UTC m=+0.184932984 container init 533cd8e0c3b8b7f910f15cef49e7f879ab46c12ce646a35628c15f13725eb500 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 20 14:09:14 compute-1 ovn_metadata_agent[140349]: + sudo -E kolla_set_configs
Jan 20 14:09:14 compute-1 podman[140333]: 2026-01-20 14:09:14.513395978 +0000 UTC m=+0.220625548 container start 533cd8e0c3b8b7f910f15cef49e7f879ab46c12ce646a35628c15f13725eb500 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Jan 20 14:09:14 compute-1 edpm-start-podman-container[140333]: ovn_metadata_agent
Jan 20 14:09:14 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:09:14 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:09:14 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:09:14.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:09:14 compute-1 edpm-start-podman-container[140332]: Creating additional drop-in dependency for "ovn_metadata_agent" (533cd8e0c3b8b7f910f15cef49e7f879ab46c12ce646a35628c15f13725eb500)
Jan 20 14:09:14 compute-1 ovn_metadata_agent[140349]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Jan 20 14:09:14 compute-1 ovn_metadata_agent[140349]: INFO:__main__:Validating config file
Jan 20 14:09:14 compute-1 ovn_metadata_agent[140349]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Jan 20 14:09:14 compute-1 ovn_metadata_agent[140349]: INFO:__main__:Copying service configuration files
Jan 20 14:09:14 compute-1 ovn_metadata_agent[140349]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf
Jan 20 14:09:14 compute-1 ovn_metadata_agent[140349]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf
Jan 20 14:09:14 compute-1 ovn_metadata_agent[140349]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf
Jan 20 14:09:14 compute-1 ovn_metadata_agent[140349]: INFO:__main__:Writing out command to execute
Jan 20 14:09:14 compute-1 ovn_metadata_agent[140349]: INFO:__main__:Setting permission for /var/lib/neutron
Jan 20 14:09:14 compute-1 ovn_metadata_agent[140349]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts
Jan 20 14:09:14 compute-1 ovn_metadata_agent[140349]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy
Jan 20 14:09:14 compute-1 ovn_metadata_agent[140349]: INFO:__main__:Setting permission for /var/lib/neutron/external
Jan 20 14:09:14 compute-1 ovn_metadata_agent[140349]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper
Jan 20 14:09:14 compute-1 ovn_metadata_agent[140349]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill
Jan 20 14:09:14 compute-1 ovn_metadata_agent[140349]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids
Jan 20 14:09:14 compute-1 systemd[1]: Reloading.
Jan 20 14:09:14 compute-1 ovn_metadata_agent[140349]: ++ cat /run_command
Jan 20 14:09:14 compute-1 ovn_metadata_agent[140349]: + CMD=neutron-ovn-metadata-agent
Jan 20 14:09:14 compute-1 ovn_metadata_agent[140349]: + ARGS=
Jan 20 14:09:14 compute-1 ovn_metadata_agent[140349]: + sudo kolla_copy_cacerts
Jan 20 14:09:14 compute-1 podman[140356]: 2026-01-20 14:09:14.639637913 +0000 UTC m=+0.108046319 container health_status 533cd8e0c3b8b7f910f15cef49e7f879ab46c12ce646a35628c15f13725eb500 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:09:14 compute-1 ovn_metadata_agent[140349]: + [[ ! -n '' ]]
Jan 20 14:09:14 compute-1 ovn_metadata_agent[140349]: + . kolla_extend_start
Jan 20 14:09:14 compute-1 ovn_metadata_agent[140349]: Running command: 'neutron-ovn-metadata-agent'
Jan 20 14:09:14 compute-1 ovn_metadata_agent[140349]: + echo 'Running command: '\''neutron-ovn-metadata-agent'\'''
Jan 20 14:09:14 compute-1 ovn_metadata_agent[140349]: + umask 0022
Jan 20 14:09:14 compute-1 ovn_metadata_agent[140349]: + exec neutron-ovn-metadata-agent
Jan 20 14:09:14 compute-1 systemd-sysv-generator[140428]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 14:09:14 compute-1 systemd-rc-local-generator[140425]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 14:09:14 compute-1 systemd[1]: Started ovn_metadata_agent container.
Jan 20 14:09:14 compute-1 sudo[140291]: pam_unix(sudo:session): session closed for user root
Jan 20 14:09:15 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:09:16 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:09:16 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:09:16 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:09:16.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.332 140354 INFO neutron.common.config [-] Logging enabled!
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.333 140354 INFO neutron.common.config [-] /usr/bin/neutron-ovn-metadata-agent version 22.2.2.dev43
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.333 140354 DEBUG neutron.common.config [-] command line: /usr/bin/neutron-ovn-metadata-agent setup_logging /usr/lib/python3.9/site-packages/neutron/common/config.py:123
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.333 140354 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.333 140354 DEBUG neutron.agent.ovn.metadata_agent [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.334 140354 DEBUG neutron.agent.ovn.metadata_agent [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.334 140354 DEBUG neutron.agent.ovn.metadata_agent [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.334 140354 DEBUG neutron.agent.ovn.metadata_agent [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.334 140354 DEBUG neutron.agent.ovn.metadata_agent [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.334 140354 DEBUG neutron.agent.ovn.metadata_agent [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.334 140354 DEBUG neutron.agent.ovn.metadata_agent [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.334 140354 DEBUG neutron.agent.ovn.metadata_agent [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.334 140354 DEBUG neutron.agent.ovn.metadata_agent [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.335 140354 DEBUG neutron.agent.ovn.metadata_agent [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.335 140354 DEBUG neutron.agent.ovn.metadata_agent [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.335 140354 DEBUG neutron.agent.ovn.metadata_agent [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.335 140354 DEBUG neutron.agent.ovn.metadata_agent [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.335 140354 DEBUG neutron.agent.ovn.metadata_agent [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.335 140354 DEBUG neutron.agent.ovn.metadata_agent [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.335 140354 DEBUG neutron.agent.ovn.metadata_agent [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.335 140354 DEBUG neutron.agent.ovn.metadata_agent [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.335 140354 DEBUG neutron.agent.ovn.metadata_agent [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.336 140354 DEBUG neutron.agent.ovn.metadata_agent [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.336 140354 DEBUG neutron.agent.ovn.metadata_agent [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.336 140354 DEBUG neutron.agent.ovn.metadata_agent [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.336 140354 DEBUG neutron.agent.ovn.metadata_agent [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.336 140354 DEBUG neutron.agent.ovn.metadata_agent [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.336 140354 DEBUG neutron.agent.ovn.metadata_agent [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.336 140354 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.336 140354 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.336 140354 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.337 140354 DEBUG neutron.agent.ovn.metadata_agent [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.337 140354 DEBUG neutron.agent.ovn.metadata_agent [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.337 140354 DEBUG neutron.agent.ovn.metadata_agent [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.337 140354 DEBUG neutron.agent.ovn.metadata_agent [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.337 140354 DEBUG neutron.agent.ovn.metadata_agent [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.337 140354 DEBUG neutron.agent.ovn.metadata_agent [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.337 140354 DEBUG neutron.agent.ovn.metadata_agent [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.337 140354 DEBUG neutron.agent.ovn.metadata_agent [-] host                           = compute-1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.338 140354 DEBUG neutron.agent.ovn.metadata_agent [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.338 140354 DEBUG neutron.agent.ovn.metadata_agent [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.338 140354 DEBUG neutron.agent.ovn.metadata_agent [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.338 140354 DEBUG neutron.agent.ovn.metadata_agent [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.338 140354 DEBUG neutron.agent.ovn.metadata_agent [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.338 140354 DEBUG neutron.agent.ovn.metadata_agent [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.338 140354 DEBUG neutron.agent.ovn.metadata_agent [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.338 140354 DEBUG neutron.agent.ovn.metadata_agent [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.338 140354 DEBUG neutron.agent.ovn.metadata_agent [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.338 140354 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.339 140354 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.339 140354 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.339 140354 DEBUG neutron.agent.ovn.metadata_agent [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.339 140354 DEBUG neutron.agent.ovn.metadata_agent [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.339 140354 DEBUG neutron.agent.ovn.metadata_agent [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.339 140354 DEBUG neutron.agent.ovn.metadata_agent [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.339 140354 DEBUG neutron.agent.ovn.metadata_agent [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.339 140354 DEBUG neutron.agent.ovn.metadata_agent [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.339 140354 DEBUG neutron.agent.ovn.metadata_agent [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.339 140354 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.340 140354 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.340 140354 DEBUG neutron.agent.ovn.metadata_agent [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.340 140354 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.340 140354 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.340 140354 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.340 140354 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.340 140354 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.340 140354 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.341 140354 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.341 140354 DEBUG neutron.agent.ovn.metadata_agent [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.341 140354 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.341 140354 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.341 140354 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.341 140354 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.341 140354 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.341 140354 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.341 140354 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.342 140354 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.342 140354 DEBUG neutron.agent.ovn.metadata_agent [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.342 140354 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.342 140354 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.342 140354 DEBUG neutron.agent.ovn.metadata_agent [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.342 140354 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.342 140354 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.342 140354 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.342 140354 DEBUG neutron.agent.ovn.metadata_agent [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.343 140354 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.343 140354 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.343 140354 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.343 140354 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.343 140354 DEBUG neutron.agent.ovn.metadata_agent [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.343 140354 DEBUG neutron.agent.ovn.metadata_agent [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.343 140354 DEBUG neutron.agent.ovn.metadata_agent [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.343 140354 DEBUG neutron.agent.ovn.metadata_agent [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.343 140354 DEBUG neutron.agent.ovn.metadata_agent [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.344 140354 DEBUG neutron.agent.ovn.metadata_agent [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.344 140354 DEBUG neutron.agent.ovn.metadata_agent [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.344 140354 DEBUG neutron.agent.ovn.metadata_agent [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.344 140354 DEBUG neutron.agent.ovn.metadata_agent [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.344 140354 DEBUG neutron.agent.ovn.metadata_agent [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.344 140354 DEBUG neutron.agent.ovn.metadata_agent [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.344 140354 DEBUG neutron.agent.ovn.metadata_agent [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.344 140354 DEBUG neutron.agent.ovn.metadata_agent [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.344 140354 DEBUG neutron.agent.ovn.metadata_agent [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.344 140354 DEBUG neutron.agent.ovn.metadata_agent [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.345 140354 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.345 140354 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.345 140354 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.345 140354 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.345 140354 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.345 140354 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.345 140354 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.345 140354 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.345 140354 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.346 140354 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.346 140354 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.346 140354 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.346 140354 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.346 140354 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.346 140354 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.346 140354 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.346 140354 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.346 140354 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.347 140354 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.347 140354 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.347 140354 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.347 140354 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.347 140354 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.347 140354 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.347 140354 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.347 140354 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.347 140354 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.348 140354 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.348 140354 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.348 140354 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.348 140354 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.348 140354 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.348 140354 DEBUG neutron.agent.ovn.metadata_agent [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.348 140354 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.348 140354 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.349 140354 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.349 140354 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.349 140354 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.349 140354 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.349 140354 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.349 140354 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.349 140354 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.349 140354 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.349 140354 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.350 140354 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.350 140354 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.350 140354 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.350 140354 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.350 140354 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.350 140354 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.350 140354 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.350 140354 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.350 140354 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.351 140354 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.351 140354 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.351 140354 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.351 140354 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.351 140354 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.351 140354 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.351 140354 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.351 140354 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.351 140354 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.352 140354 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.352 140354 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.352 140354 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.352 140354 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.352 140354 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.352 140354 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.352 140354 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.352 140354 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.352 140354 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.353 140354 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.353 140354 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.353 140354 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.353 140354 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.353 140354 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.353 140354 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.353 140354 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.353 140354 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.354 140354 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.354 140354 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.354 140354 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.354 140354 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.354 140354 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.354 140354 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.354 140354 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.354 140354 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.354 140354 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.355 140354 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.355 140354 DEBUG neutron.agent.ovn.metadata_agent [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.355 140354 DEBUG neutron.agent.ovn.metadata_agent [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.355 140354 DEBUG neutron.agent.ovn.metadata_agent [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.355 140354 DEBUG neutron.agent.ovn.metadata_agent [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.355 140354 DEBUG neutron.agent.ovn.metadata_agent [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.355 140354 DEBUG neutron.agent.ovn.metadata_agent [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.355 140354 DEBUG neutron.agent.ovn.metadata_agent [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.355 140354 DEBUG neutron.agent.ovn.metadata_agent [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.355 140354 DEBUG neutron.agent.ovn.metadata_agent [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.356 140354 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.356 140354 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.356 140354 DEBUG neutron.agent.ovn.metadata_agent [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.356 140354 DEBUG neutron.agent.ovn.metadata_agent [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.356 140354 DEBUG neutron.agent.ovn.metadata_agent [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.356 140354 DEBUG neutron.agent.ovn.metadata_agent [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.356 140354 DEBUG neutron.agent.ovn.metadata_agent [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.356 140354 DEBUG neutron.agent.ovn.metadata_agent [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.357 140354 DEBUG neutron.agent.ovn.metadata_agent [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.357 140354 DEBUG neutron.agent.ovn.metadata_agent [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.357 140354 DEBUG neutron.agent.ovn.metadata_agent [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.357 140354 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.357 140354 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.357 140354 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.357 140354 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.357 140354 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.357 140354 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.358 140354 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.358 140354 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.358 140354 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.358 140354 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.358 140354 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.358 140354 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.358 140354 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.358 140354 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.358 140354 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.359 140354 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.359 140354 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.359 140354 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.359 140354 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.359 140354 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.359 140354 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.359 140354 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.359 140354 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.359 140354 DEBUG neutron.agent.ovn.metadata_agent [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.360 140354 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.360 140354 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.360 140354 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.360 140354 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.360 140354 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.360 140354 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.360 140354 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.360 140354 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.361 140354 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.361 140354 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.361 140354 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.361 140354 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.361 140354 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.361 140354 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.361 140354 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.361 140354 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.361 140354 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.362 140354 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.362 140354 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.362 140354 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.362 140354 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.362 140354 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.362 140354 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.362 140354 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.362 140354 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.363 140354 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.363 140354 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.363 140354 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.363 140354 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.363 140354 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.363 140354 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.363 140354 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.363 140354 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.363 140354 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.364 140354 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.364 140354 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.364 140354 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.364 140354 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.364 140354 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.364 140354 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.364 140354 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.364 140354 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.364 140354 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.365 140354 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.365 140354 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.365 140354 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.365 140354 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.365 140354 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.365 140354 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.365 140354 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.365 140354 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.365 140354 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.366 140354 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.366 140354 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.366 140354 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.366 140354 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.366 140354 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.366 140354 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.366 140354 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.366 140354 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.366 140354 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.367 140354 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.367 140354 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.367 140354 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.367 140354 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.367 140354 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.367 140354 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.375 140354 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.376 140354 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.376 140354 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.376 140354 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connecting...
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.376 140354 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connected
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.389 140354 DEBUG neutron.agent.ovn.metadata.agent [-] Loaded chassis name 5ffd4ac3-9266-4927-98ad-20a17782c725 (UUID: 5ffd4ac3-9266-4927-98ad-20a17782c725) and ovn bridge br-int. _load_config /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:309
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.417 140354 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.417 140354 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.417 140354 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.417 140354 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Chassis_Private.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.420 140354 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.425 140354 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.431 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched CREATE: ChassisPrivateCreateEvent(events=('create',), table='Chassis_Private', conditions=(('name', '=', '5ffd4ac3-9266-4927-98ad-20a17782c725'),), old_conditions=None), priority=20 to row=Chassis_Private(chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], external_ids={}, name=5ffd4ac3-9266-4927-98ad-20a17782c725, nb_cfg_timestamp=1768918059851, nb_cfg=1) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.432 140354 DEBUG neutron_lib.callbacks.manager [-] Subscribe: <bound method MetadataProxyHandler.post_fork_initialize of <neutron.agent.ovn.metadata.server.MetadataProxyHandler object at 0x7fb671571f70>> process after_init 55550000, False subscribe /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:52
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.432 140354 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.433 140354 DEBUG oslo_concurrency.lockutils [-] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.433 140354 DEBUG oslo_concurrency.lockutils [-] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.433 140354 INFO oslo_service.service [-] Starting 1 workers
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.437 140354 DEBUG oslo_service.service [-] Started child 140461 _start_child /usr/lib/python3.9/site-packages/oslo_service/service.py:575
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.440 140354 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.namespace_cmd', '--privsep_sock_path', '/tmp/tmpx9hh2_y5/privsep.sock']
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.443 140461 DEBUG neutron_lib.callbacks.manager [-] Publish callbacks ['neutron.agent.ovn.metadata.server.MetadataProxyHandler.post_fork_initialize-164319'] for process (None), after_init _notify_loop /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:184
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.476 140461 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.477 140461 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.477 140461 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.481 140461 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.489 140461 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected
Jan 20 14:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:16.498 140461 INFO eventlet.wsgi.server [-] (140461) wsgi starting up on http:/var/lib/neutron/metadata_proxy
Jan 20 14:09:16 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:09:16 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:09:16 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:09:16.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:09:17 compute-1 kernel: capability: warning: `privsep-helper' uses deprecated v2 capabilities in a way that may be insecure
Jan 20 14:09:17 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:17.202 140354 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Jan 20 14:09:17 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:17.203 140354 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpx9hh2_y5/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Jan 20 14:09:17 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:17.064 140466 INFO oslo.privsep.daemon [-] privsep daemon starting
Jan 20 14:09:17 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:17.071 140466 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Jan 20 14:09:17 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:17.074 140466 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none
Jan 20 14:09:17 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:17.075 140466 INFO oslo.privsep.daemon [-] privsep daemon running as pid 140466
Jan 20 14:09:17 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:17.208 140466 DEBUG oslo.privsep.daemon [-] privsep: reply[0577f207-341a-4fd9-91db-e5923117984a]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:09:17 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:17.733 140466 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:09:17 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:17.734 140466 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:09:17 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:17.734 140466 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:09:17 compute-1 ceph-mds[84722]: mds.beacon.cephfs.compute-1.rtofcx missed beacon ack from the monitors
Jan 20 14:09:18 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:09:18 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:09:18 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:09:18.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:09:18 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:18.256 140466 DEBUG oslo.privsep.daemon [-] privsep: reply[74b2ea12-9a7b-49ee-9deb-c56cf30bea6b]: (4, []) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:09:18 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:18.259 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbAddCommand(_result=None, table=Chassis_Private, record=5ffd4ac3-9266-4927-98ad-20a17782c725, column=external_ids, values=({'neutron:ovn-metadata-id': '6a850319-0563-5f8e-b562-8d29f1112d59'},)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:09:18 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:09:18 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:09:18 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:09:18.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:09:19 compute-1 sudo[140471]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:09:19 compute-1 sudo[140471]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:09:19 compute-1 sudo[140471]: pam_unix(sudo:session): session closed for user root
Jan 20 14:09:20 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:09:20 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:09:20 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:09:20.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:09:20 compute-1 sudo[140496]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:09:20 compute-1 sudo[140496]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:09:20 compute-1 sudo[140496]: pam_unix(sudo:session): session closed for user root
Jan 20 14:09:20 compute-1 sudo[140521]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:09:20 compute-1 sudo[140521]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:09:20 compute-1 sudo[140521]: pam_unix(sudo:session): session closed for user root
Jan 20 14:09:20 compute-1 sudo[140546]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/haproxy:2.3 --timeout 895 _orch deploy --fsid e399cf45-e6b6-5393-99f1-75c601d3f188
Jan 20 14:09:20 compute-1 sudo[140546]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:09:20 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:09:20 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:09:20 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:09:20 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:09:20.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.269 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5ffd4ac3-9266-4927-98ad-20a17782c725, col_values=(('external_ids', {'neutron:ovn-bridge': 'br-int'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.357 140354 DEBUG oslo_service.service [-] Full set of CONF: wait /usr/lib/python3.9/site-packages/oslo_service/service.py:649
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.358 140354 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.358 140354 DEBUG oslo_service.service [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.358 140354 DEBUG oslo_service.service [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.358 140354 DEBUG oslo_service.service [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.358 140354 DEBUG oslo_service.service [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.358 140354 DEBUG oslo_service.service [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.359 140354 DEBUG oslo_service.service [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.359 140354 DEBUG oslo_service.service [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.359 140354 DEBUG oslo_service.service [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.359 140354 DEBUG oslo_service.service [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.359 140354 DEBUG oslo_service.service [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.359 140354 DEBUG oslo_service.service [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.360 140354 DEBUG oslo_service.service [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.360 140354 DEBUG oslo_service.service [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.360 140354 DEBUG oslo_service.service [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.360 140354 DEBUG oslo_service.service [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.361 140354 DEBUG oslo_service.service [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.361 140354 DEBUG oslo_service.service [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.361 140354 DEBUG oslo_service.service [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.361 140354 DEBUG oslo_service.service [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.361 140354 DEBUG oslo_service.service [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.362 140354 DEBUG oslo_service.service [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.362 140354 DEBUG oslo_service.service [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.362 140354 DEBUG oslo_service.service [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.362 140354 DEBUG oslo_service.service [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.362 140354 DEBUG oslo_service.service [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.363 140354 DEBUG oslo_service.service [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.363 140354 DEBUG oslo_service.service [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.363 140354 DEBUG oslo_service.service [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.363 140354 DEBUG oslo_service.service [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.363 140354 DEBUG oslo_service.service [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.363 140354 DEBUG oslo_service.service [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.364 140354 DEBUG oslo_service.service [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.364 140354 DEBUG oslo_service.service [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.364 140354 DEBUG oslo_service.service [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.364 140354 DEBUG oslo_service.service [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.365 140354 DEBUG oslo_service.service [-] host                           = compute-1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.365 140354 DEBUG oslo_service.service [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.365 140354 DEBUG oslo_service.service [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.365 140354 DEBUG oslo_service.service [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.365 140354 DEBUG oslo_service.service [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.366 140354 DEBUG oslo_service.service [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.366 140354 DEBUG oslo_service.service [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.366 140354 DEBUG oslo_service.service [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.366 140354 DEBUG oslo_service.service [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.366 140354 DEBUG oslo_service.service [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.367 140354 DEBUG oslo_service.service [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.367 140354 DEBUG oslo_service.service [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.367 140354 DEBUG oslo_service.service [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.367 140354 DEBUG oslo_service.service [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.367 140354 DEBUG oslo_service.service [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.367 140354 DEBUG oslo_service.service [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.368 140354 DEBUG oslo_service.service [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.368 140354 DEBUG oslo_service.service [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.368 140354 DEBUG oslo_service.service [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.368 140354 DEBUG oslo_service.service [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.368 140354 DEBUG oslo_service.service [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.368 140354 DEBUG oslo_service.service [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.369 140354 DEBUG oslo_service.service [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.369 140354 DEBUG oslo_service.service [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.369 140354 DEBUG oslo_service.service [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.369 140354 DEBUG oslo_service.service [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.369 140354 DEBUG oslo_service.service [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.370 140354 DEBUG oslo_service.service [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.370 140354 DEBUG oslo_service.service [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.370 140354 DEBUG oslo_service.service [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.370 140354 DEBUG oslo_service.service [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.370 140354 DEBUG oslo_service.service [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.370 140354 DEBUG oslo_service.service [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.371 140354 DEBUG oslo_service.service [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.371 140354 DEBUG oslo_service.service [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.371 140354 DEBUG oslo_service.service [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.371 140354 DEBUG oslo_service.service [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.371 140354 DEBUG oslo_service.service [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.371 140354 DEBUG oslo_service.service [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.372 140354 DEBUG oslo_service.service [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.372 140354 DEBUG oslo_service.service [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.372 140354 DEBUG oslo_service.service [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.372 140354 DEBUG oslo_service.service [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.372 140354 DEBUG oslo_service.service [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.372 140354 DEBUG oslo_service.service [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.372 140354 DEBUG oslo_service.service [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.373 140354 DEBUG oslo_service.service [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.373 140354 DEBUG oslo_service.service [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.373 140354 DEBUG oslo_service.service [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.373 140354 DEBUG oslo_service.service [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.373 140354 DEBUG oslo_service.service [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.373 140354 DEBUG oslo_service.service [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.374 140354 DEBUG oslo_service.service [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.374 140354 DEBUG oslo_service.service [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.374 140354 DEBUG oslo_service.service [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.374 140354 DEBUG oslo_service.service [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.374 140354 DEBUG oslo_service.service [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.374 140354 DEBUG oslo_service.service [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.374 140354 DEBUG oslo_service.service [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.375 140354 DEBUG oslo_service.service [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.375 140354 DEBUG oslo_service.service [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.375 140354 DEBUG oslo_service.service [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.375 140354 DEBUG oslo_service.service [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.375 140354 DEBUG oslo_service.service [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.375 140354 DEBUG oslo_service.service [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.375 140354 DEBUG oslo_service.service [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.376 140354 DEBUG oslo_service.service [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.376 140354 DEBUG oslo_service.service [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.376 140354 DEBUG oslo_service.service [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.376 140354 DEBUG oslo_service.service [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.376 140354 DEBUG oslo_service.service [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.376 140354 DEBUG oslo_service.service [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.377 140354 DEBUG oslo_service.service [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.377 140354 DEBUG oslo_service.service [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.377 140354 DEBUG oslo_service.service [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.377 140354 DEBUG oslo_service.service [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.377 140354 DEBUG oslo_service.service [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.377 140354 DEBUG oslo_service.service [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.378 140354 DEBUG oslo_service.service [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.378 140354 DEBUG oslo_service.service [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.378 140354 DEBUG oslo_service.service [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.378 140354 DEBUG oslo_service.service [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.378 140354 DEBUG oslo_service.service [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.378 140354 DEBUG oslo_service.service [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.379 140354 DEBUG oslo_service.service [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.379 140354 DEBUG oslo_service.service [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.379 140354 DEBUG oslo_service.service [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.379 140354 DEBUG oslo_service.service [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.379 140354 DEBUG oslo_service.service [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.380 140354 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.380 140354 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.380 140354 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.380 140354 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.381 140354 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.381 140354 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.381 140354 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.381 140354 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.381 140354 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.381 140354 DEBUG oslo_service.service [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.382 140354 DEBUG oslo_service.service [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.382 140354 DEBUG oslo_service.service [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.382 140354 DEBUG oslo_service.service [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.382 140354 DEBUG oslo_service.service [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.382 140354 DEBUG oslo_service.service [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.383 140354 DEBUG oslo_service.service [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.383 140354 DEBUG oslo_service.service [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.383 140354 DEBUG oslo_service.service [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.383 140354 DEBUG oslo_service.service [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.383 140354 DEBUG oslo_service.service [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.383 140354 DEBUG oslo_service.service [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.383 140354 DEBUG oslo_service.service [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.384 140354 DEBUG oslo_service.service [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.384 140354 DEBUG oslo_service.service [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.384 140354 DEBUG oslo_service.service [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.384 140354 DEBUG oslo_service.service [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.384 140354 DEBUG oslo_service.service [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.384 140354 DEBUG oslo_service.service [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.384 140354 DEBUG oslo_service.service [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.385 140354 DEBUG oslo_service.service [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.385 140354 DEBUG oslo_service.service [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.385 140354 DEBUG oslo_service.service [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.385 140354 DEBUG oslo_service.service [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.385 140354 DEBUG oslo_service.service [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.385 140354 DEBUG oslo_service.service [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.385 140354 DEBUG oslo_service.service [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.386 140354 DEBUG oslo_service.service [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.386 140354 DEBUG oslo_service.service [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.386 140354 DEBUG oslo_service.service [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.386 140354 DEBUG oslo_service.service [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.386 140354 DEBUG oslo_service.service [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.386 140354 DEBUG oslo_service.service [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.386 140354 DEBUG oslo_service.service [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.387 140354 DEBUG oslo_service.service [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.387 140354 DEBUG oslo_service.service [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.387 140354 DEBUG oslo_service.service [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.387 140354 DEBUG oslo_service.service [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.387 140354 DEBUG oslo_service.service [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.387 140354 DEBUG oslo_service.service [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.388 140354 DEBUG oslo_service.service [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.388 140354 DEBUG oslo_service.service [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.388 140354 DEBUG oslo_service.service [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.388 140354 DEBUG oslo_service.service [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.388 140354 DEBUG oslo_service.service [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.388 140354 DEBUG oslo_service.service [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.389 140354 DEBUG oslo_service.service [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.389 140354 DEBUG oslo_service.service [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.389 140354 DEBUG oslo_service.service [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.389 140354 DEBUG oslo_service.service [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.389 140354 DEBUG oslo_service.service [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.390 140354 DEBUG oslo_service.service [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.390 140354 DEBUG oslo_service.service [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.390 140354 DEBUG oslo_service.service [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.390 140354 DEBUG oslo_service.service [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.390 140354 DEBUG oslo_service.service [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.390 140354 DEBUG oslo_service.service [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.391 140354 DEBUG oslo_service.service [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.391 140354 DEBUG oslo_service.service [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.391 140354 DEBUG oslo_service.service [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.391 140354 DEBUG oslo_service.service [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.391 140354 DEBUG oslo_service.service [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.391 140354 DEBUG oslo_service.service [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.392 140354 DEBUG oslo_service.service [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.392 140354 DEBUG oslo_service.service [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.392 140354 DEBUG oslo_service.service [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.392 140354 DEBUG oslo_service.service [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.392 140354 DEBUG oslo_service.service [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.393 140354 DEBUG oslo_service.service [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.393 140354 DEBUG oslo_service.service [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.393 140354 DEBUG oslo_service.service [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.393 140354 DEBUG oslo_service.service [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.393 140354 DEBUG oslo_service.service [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.393 140354 DEBUG oslo_service.service [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.394 140354 DEBUG oslo_service.service [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.394 140354 DEBUG oslo_service.service [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.394 140354 DEBUG oslo_service.service [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.394 140354 DEBUG oslo_service.service [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.394 140354 DEBUG oslo_service.service [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.394 140354 DEBUG oslo_service.service [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.394 140354 DEBUG oslo_service.service [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.395 140354 DEBUG oslo_service.service [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.395 140354 DEBUG oslo_service.service [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.395 140354 DEBUG oslo_service.service [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.395 140354 DEBUG oslo_service.service [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.395 140354 DEBUG oslo_service.service [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.395 140354 DEBUG oslo_service.service [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.396 140354 DEBUG oslo_service.service [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.396 140354 DEBUG oslo_service.service [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.396 140354 DEBUG oslo_service.service [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.396 140354 DEBUG oslo_service.service [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.396 140354 DEBUG oslo_service.service [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.396 140354 DEBUG oslo_service.service [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.396 140354 DEBUG oslo_service.service [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.397 140354 DEBUG oslo_service.service [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.397 140354 DEBUG oslo_service.service [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.397 140354 DEBUG oslo_service.service [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.397 140354 DEBUG oslo_service.service [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.397 140354 DEBUG oslo_service.service [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.397 140354 DEBUG oslo_service.service [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.398 140354 DEBUG oslo_service.service [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.398 140354 DEBUG oslo_service.service [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.398 140354 DEBUG oslo_service.service [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.398 140354 DEBUG oslo_service.service [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.398 140354 DEBUG oslo_service.service [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.399 140354 DEBUG oslo_service.service [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.399 140354 DEBUG oslo_service.service [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.399 140354 DEBUG oslo_service.service [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.399 140354 DEBUG oslo_service.service [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.399 140354 DEBUG oslo_service.service [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.399 140354 DEBUG oslo_service.service [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.399 140354 DEBUG oslo_service.service [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.400 140354 DEBUG oslo_service.service [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.400 140354 DEBUG oslo_service.service [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.400 140354 DEBUG oslo_service.service [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.400 140354 DEBUG oslo_service.service [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.400 140354 DEBUG oslo_service.service [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.400 140354 DEBUG oslo_service.service [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.400 140354 DEBUG oslo_service.service [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.401 140354 DEBUG oslo_service.service [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.401 140354 DEBUG oslo_service.service [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.401 140354 DEBUG oslo_service.service [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.401 140354 DEBUG oslo_service.service [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.401 140354 DEBUG oslo_service.service [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.401 140354 DEBUG oslo_service.service [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.401 140354 DEBUG oslo_service.service [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.402 140354 DEBUG oslo_service.service [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.402 140354 DEBUG oslo_service.service [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.402 140354 DEBUG oslo_service.service [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.402 140354 DEBUG oslo_service.service [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.402 140354 DEBUG oslo_service.service [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.402 140354 DEBUG oslo_service.service [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.403 140354 DEBUG oslo_service.service [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.403 140354 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.403 140354 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.403 140354 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.403 140354 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.404 140354 DEBUG oslo_service.service [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.404 140354 DEBUG oslo_service.service [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.404 140354 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.404 140354 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.404 140354 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.405 140354 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.405 140354 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.405 140354 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.405 140354 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.405 140354 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.405 140354 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.406 140354 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.406 140354 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.406 140354 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.406 140354 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.406 140354 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.406 140354 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.406 140354 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.407 140354 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.407 140354 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.407 140354 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.407 140354 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.407 140354 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.408 140354 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.408 140354 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.408 140354 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.408 140354 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.408 140354 DEBUG oslo_service.service [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.408 140354 DEBUG oslo_service.service [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.409 140354 DEBUG oslo_service.service [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.409 140354 DEBUG oslo_service.service [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:09:21.409 140354 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Jan 20 14:09:21 compute-1 ceph-mds[84722]: mds.beacon.cephfs.compute-1.rtofcx missed beacon ack from the monitors
Jan 20 14:09:22 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:09:22 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:09:22 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:09:22.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:09:22 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:09:22 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:09:22 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:09:22.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:09:23 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).paxos(paxos active c 754..1350) lease_timeout -- calling new election
Jan 20 14:09:23 compute-1 ceph-mon[81775]: log_channel(cluster) log [INF] : mon.compute-1 calling monitor election
Jan 20 14:09:23 compute-1 ceph-mon[81775]: paxos.2).electionLogic(14) init, last seen epoch 14
Jan 20 14:09:23 compute-1 ceph-mon[81775]: mon.compute-1@2(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 20 14:09:23 compute-1 ceph-mon[81775]: mon.compute-1@2(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 20 14:09:24 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:09:24 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:09:24 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:09:24.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:09:24 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:09:24 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:09:24 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:09:24.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:09:25 compute-1 podman[140614]: 2026-01-20 14:09:25.17671085 +0000 UTC m=+4.487387413 container create 490278afcdd206df9fe453a8f387de564518087f326bd74add4bc55b313e910f (image=quay.io/ceph/haproxy:2.3, name=dazzling_cori)
Jan 20 14:09:25 compute-1 systemd[1]: Started libpod-conmon-490278afcdd206df9fe453a8f387de564518087f326bd74add4bc55b313e910f.scope.
Jan 20 14:09:25 compute-1 systemd[1]: Started libcrun container.
Jan 20 14:09:25 compute-1 podman[140614]: 2026-01-20 14:09:25.156394693 +0000 UTC m=+4.467071306 image pull e85424b0d443f37ddd2dd8a3bb2ef6f18dd352b987723a921b64289023af2914 quay.io/ceph/haproxy:2.3
Jan 20 14:09:25 compute-1 podman[140614]: 2026-01-20 14:09:25.244811332 +0000 UTC m=+4.555487905 container init 490278afcdd206df9fe453a8f387de564518087f326bd74add4bc55b313e910f (image=quay.io/ceph/haproxy:2.3, name=dazzling_cori)
Jan 20 14:09:25 compute-1 podman[140614]: 2026-01-20 14:09:25.251904424 +0000 UTC m=+4.562580997 container start 490278afcdd206df9fe453a8f387de564518087f326bd74add4bc55b313e910f (image=quay.io/ceph/haproxy:2.3, name=dazzling_cori)
Jan 20 14:09:25 compute-1 dazzling_cori[140727]: 0 0
Jan 20 14:09:25 compute-1 systemd[1]: libpod-490278afcdd206df9fe453a8f387de564518087f326bd74add4bc55b313e910f.scope: Deactivated successfully.
Jan 20 14:09:25 compute-1 conmon[140727]: conmon 490278afcdd206df9fe4 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-490278afcdd206df9fe453a8f387de564518087f326bd74add4bc55b313e910f.scope/container/memory.events
Jan 20 14:09:25 compute-1 podman[140614]: 2026-01-20 14:09:25.257953235 +0000 UTC m=+4.568629828 container attach 490278afcdd206df9fe453a8f387de564518087f326bd74add4bc55b313e910f (image=quay.io/ceph/haproxy:2.3, name=dazzling_cori)
Jan 20 14:09:25 compute-1 podman[140614]: 2026-01-20 14:09:25.258328156 +0000 UTC m=+4.569004719 container died 490278afcdd206df9fe453a8f387de564518087f326bd74add4bc55b313e910f (image=quay.io/ceph/haproxy:2.3, name=dazzling_cori)
Jan 20 14:09:25 compute-1 systemd[1]: var-lib-containers-storage-overlay-ddb69c95c9aea067452f694859e1bc66a18e102056e54319235feae920b3fa3b-merged.mount: Deactivated successfully.
Jan 20 14:09:25 compute-1 podman[140614]: 2026-01-20 14:09:25.295310696 +0000 UTC m=+4.605987259 container remove 490278afcdd206df9fe453a8f387de564518087f326bd74add4bc55b313e910f (image=quay.io/ceph/haproxy:2.3, name=dazzling_cori)
Jan 20 14:09:25 compute-1 systemd[1]: libpod-conmon-490278afcdd206df9fe453a8f387de564518087f326bd74add4bc55b313e910f.scope: Deactivated successfully.
Jan 20 14:09:25 compute-1 systemd[1]: Reloading.
Jan 20 14:09:25 compute-1 systemd-rc-local-generator[140768]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 14:09:25 compute-1 systemd-sysv-generator[140771]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 14:09:25 compute-1 systemd[1]: Reloading.
Jan 20 14:09:25 compute-1 systemd-rc-local-generator[140817]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 14:09:25 compute-1 systemd-sysv-generator[140821]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 14:09:25 compute-1 ceph-mds[84722]: mds.beacon.cephfs.compute-1.rtofcx missed beacon ack from the monitors
Jan 20 14:09:25 compute-1 systemd[1]: Starting Ceph haproxy.rgw.default.compute-1.uyeocq for e399cf45-e6b6-5393-99f1-75c601d3f188...
Jan 20 14:09:26 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:09:26 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:09:26 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:09:26.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:09:26 compute-1 podman[140873]: 2026-01-20 14:09:26.23626825 +0000 UTC m=+0.045839572 container create 25e2c3387bc15944c21038272559da5fbf75910d8dd4add0faa995fb4e0f7788 (image=quay.io/ceph/haproxy:2.3, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-haproxy-rgw-default-compute-1-uyeocq)
Jan 20 14:09:26 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be5bb3ed10a159988414e2bbbf41e50e39bab60a2eb0bd9879d92b48cb45352f/merged/var/lib/haproxy supports timestamps until 2038 (0x7fffffff)
Jan 20 14:09:26 compute-1 podman[140873]: 2026-01-20 14:09:26.302449928 +0000 UTC m=+0.112021320 container init 25e2c3387bc15944c21038272559da5fbf75910d8dd4add0faa995fb4e0f7788 (image=quay.io/ceph/haproxy:2.3, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-haproxy-rgw-default-compute-1-uyeocq)
Jan 20 14:09:26 compute-1 podman[140873]: 2026-01-20 14:09:26.211659872 +0000 UTC m=+0.021231164 image pull e85424b0d443f37ddd2dd8a3bb2ef6f18dd352b987723a921b64289023af2914 quay.io/ceph/haproxy:2.3
Jan 20 14:09:26 compute-1 podman[140873]: 2026-01-20 14:09:26.307019648 +0000 UTC m=+0.116590950 container start 25e2c3387bc15944c21038272559da5fbf75910d8dd4add0faa995fb4e0f7788 (image=quay.io/ceph/haproxy:2.3, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-haproxy-rgw-default-compute-1-uyeocq)
Jan 20 14:09:26 compute-1 bash[140873]: 25e2c3387bc15944c21038272559da5fbf75910d8dd4add0faa995fb4e0f7788
Jan 20 14:09:26 compute-1 systemd[1]: Started Ceph haproxy.rgw.default.compute-1.uyeocq for e399cf45-e6b6-5393-99f1-75c601d3f188.
Jan 20 14:09:26 compute-1 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-haproxy-rgw-default-compute-1-uyeocq[140888]: [NOTICE] 019/140926 (2) : New worker #1 (4) forked
Jan 20 14:09:26 compute-1 ceph-mon[81775]: mon.compute-1@2(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 20 14:09:26 compute-1 sudo[140546]: pam_unix(sudo:session): session closed for user root
Jan 20 14:09:26 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:09:26 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:09:26 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:09:26.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:09:26 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:09:26 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:09:26 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:09:26.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:09:28 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:09:28 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:09:28 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:09:28.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:09:28 compute-1 ceph-mon[81775]: pgmap v520: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 48 KiB/s rd, 0 B/s wr, 79 op/s
Jan 20 14:09:28 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 20 14:09:28 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:09:28 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:09:28 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:09:28.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:09:28 compute-1 sudo[140902]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:09:28 compute-1 sudo[140902]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:09:28 compute-1 sudo[140902]: pam_unix(sudo:session): session closed for user root
Jan 20 14:09:28 compute-1 sudo[140927]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:09:28 compute-1 sudo[140927]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:09:28 compute-1 sudo[140927]: pam_unix(sudo:session): session closed for user root
Jan 20 14:09:28 compute-1 sudo[140952]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:09:28 compute-1 sudo[140952]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:09:28 compute-1 sudo[140952]: pam_unix(sudo:session): session closed for user root
Jan 20 14:09:28 compute-1 sudo[140977]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/keepalived:2.2.4 --timeout 895 _orch deploy --fsid e399cf45-e6b6-5393-99f1-75c601d3f188
Jan 20 14:09:28 compute-1 sudo[140977]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:09:28 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:09:28 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:09:28 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:09:28.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:09:29 compute-1 ceph-mon[81775]: pgmap v521: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 41 KiB/s rd, 0 B/s wr, 69 op/s
Jan 20 14:09:29 compute-1 ceph-mon[81775]: pgmap v522: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 51 KiB/s rd, 0 B/s wr, 84 op/s
Jan 20 14:09:29 compute-1 ceph-mon[81775]: pgmap v523: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 42 KiB/s rd, 0 B/s wr, 69 op/s
Jan 20 14:09:29 compute-1 ceph-mon[81775]: pgmap v524: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 41 KiB/s rd, 0 B/s wr, 67 op/s
Jan 20 14:09:29 compute-1 ceph-mon[81775]: host compute-0 `cephadm ceph-volume` failed: Cannot decode JSON: 
                                           Traceback (most recent call last):
                                             File "/usr/share/ceph/mgr/cephadm/serve.py", line 1514, in _run_cephadm_json
                                               return json.loads(''.join(out))
                                             File "/lib64/python3.9/json/__init__.py", line 346, in loads
                                               return _default_decoder.decode(s)
                                             File "/lib64/python3.9/json/decoder.py", line 337, in decode
                                               obj, end = self.raw_decode(s, idx=_w(s, 0).end())
                                             File "/lib64/python3.9/json/decoder.py", line 355, in raw_decode
                                               raise JSONDecodeError("Expecting value", s, err.value) from None
                                           json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
Jan 20 14:09:29 compute-1 ceph-mon[81775]: Failed to apply osd.default_drive_group spec DriveGroupSpec.from_json(yaml.safe_load('''service_type: osd
                                           service_id: default_drive_group
                                           service_name: osd.default_drive_group
                                           placement:
                                             hosts:
                                             - compute-0
                                             - compute-1
                                             - compute-2
                                           spec:
                                             data_devices:
                                               paths:
                                               - /dev/ceph_vg0/ceph_lv0
                                             filter_logic: AND
                                             objectstore: bluestore
                                           ''')): host compute-0 `cephadm ceph-volume` failed: Cannot decode JSON
                                           Traceback (most recent call last):
                                             File "/usr/share/ceph/mgr/cephadm/serve.py", line 1514, in _run_cephadm_json
                                               return json.loads(''.join(out))
                                             File "/lib64/python3.9/json/__init__.py", line 346, in loads
                                               return _default_decoder.decode(s)
                                             File "/lib64/python3.9/json/decoder.py", line 337, in decode
                                               obj, end = self.raw_decode(s, idx=_w(s, 0).end())
                                             File "/lib64/python3.9/json/decoder.py", line 355, in raw_decode
                                               raise JSONDecodeError("Expecting value", s, err.value) from None
                                           json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
                                           
                                           During handling of the above exception, another exception occurred:
                                           
                                           Traceback (most recent call last):
                                             File "/usr/share/ceph/mgr/cephadm/serve.py", line 577, in _apply_all_services
                                               if self._apply_service(spec):
                                             File "/usr/share/ceph/mgr/cephadm/serve.py", line 696, in _apply_service
                                               self.mgr.osd_service.create_from_spec(cast(DriveGroupSpec, spec))
                                             File "/usr/share/ceph/mgr/cephadm/services/osd.py", line 79, in create_from_spec
                                               ret = self.mgr.wait_async(all_hosts())
                                             File "/usr/share/ceph/mgr/cephadm/module.py", line 735, in wait_async
                                               return self.event_loop.get_result(coro, timeout)
                                             File "/usr/share/ceph/mgr/cephadm/ssh.py", line 64, in get_result
                                               return future.result(timeout)
                                             File "/lib64/python3.9/concurrent/futures/_base.py", line 446, in result
                                               return self.__get_result()
                                             File "/lib64/python3.9/concurrent/futures/_base.py", line 391, in __get_result
                                               raise self._exception
                                             File "/usr/share/ceph/mgr/cephadm/services/osd.py", line 76, in all_hosts
                                               return await gather(*futures)
                                             File "/usr/share/ceph/mgr/cephadm/services/osd.py", line 63, in create_from_spec_one
                                               ret_msg = await self.create_single_host(
                                             File "/usr/share/ceph/mgr/cephadm/services/osd.py", line 98, in create_single_host
                                               return await self.deploy_osd_daemons_for_existing_osds(host, drive_group,
                                             File "/usr/share/ceph/mgr/cephadm/services/osd.py", line 158, in deploy_osd_daemons_for_existing_osds
                                               raw_elems: dict = await CephadmServe(self.mgr)._run_cephadm_json(
                                             File "/usr/share/ceph/mgr/cephadm/serve.py", line 1518, in _run_cephadm_json
                                               raise OrchestratorError(msg)
                                           orchestrator._interface.OrchestratorError: host compute-0 `cephadm ceph-volume` failed: Cannot decode JSON
Jan 20 14:09:29 compute-1 ceph-mon[81775]: pgmap v525: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 37 KiB/s rd, 0 B/s wr, 61 op/s
Jan 20 14:09:29 compute-1 ceph-mon[81775]: Deploying daemon haproxy.rgw.default.compute-1.uyeocq on compute-1
Jan 20 14:09:29 compute-1 ceph-mon[81775]: pgmap v526: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 11 KiB/s rd, 0 B/s wr, 17 op/s
Jan 20 14:09:29 compute-1 ceph-mon[81775]: mon.compute-1 calling monitor election
Jan 20 14:09:29 compute-1 ceph-mon[81775]: mon.compute-2 calling monitor election
Jan 20 14:09:29 compute-1 ceph-mon[81775]: pgmap v527: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 11 KiB/s rd, 0 B/s wr, 17 op/s
Jan 20 14:09:29 compute-1 ceph-mon[81775]: mon.compute-0 calling monitor election
Jan 20 14:09:29 compute-1 ceph-mon[81775]: pgmap v528: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:09:29 compute-1 ceph-mon[81775]: mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Jan 20 14:09:29 compute-1 ceph-mon[81775]: pgmap v529: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:09:29 compute-1 ceph-mon[81775]: monmap e3: 3 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],compute-1=[v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0],compute-2=[v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0]} removed_ranks: {} disallowed_leaders: {}
Jan 20 14:09:29 compute-1 ceph-mon[81775]: fsmap cephfs:1 {0=cephfs.compute-2.jyxktq=up:active} 2 up:standby
Jan 20 14:09:29 compute-1 ceph-mon[81775]: osdmap e128: 3 total, 3 up, 3 in
Jan 20 14:09:29 compute-1 ceph-mon[81775]: mgrmap e11: compute-0.wookjv(active, since 13m), standbys: compute-2.gunjko, compute-1.oweoeg
Jan 20 14:09:29 compute-1 ceph-mon[81775]: overall HEALTH_OK
Jan 20 14:09:29 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:09:29 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:09:29 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:09:29 compute-1 ceph-mon[81775]: 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Jan 20 14:09:29 compute-1 ceph-mon[81775]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Jan 20 14:09:29 compute-1 ceph-mon[81775]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Jan 20 14:09:29 compute-1 ceph-mon[81775]: Deploying daemon keepalived.rgw.default.compute-1.cevitz on compute-1
Jan 20 14:09:30 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:09:30 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:09:30 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:09:30.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:09:30 compute-1 python3.9[141191]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml
Jan 20 14:09:30 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:09:30 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:09:30 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:09:30 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:09:30.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:09:30 compute-1 ceph-mon[81775]: Health check failed: Failed to apply 1 service(s): osd.default_drive_group (CEPHADM_APPLY_SPEC_FAIL)
Jan 20 14:09:30 compute-1 ceph-mon[81775]: pgmap v530: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 4.0 KiB/s rd, 0 B/s wr, 6 op/s
Jan 20 14:09:30 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:09:30 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:09:30 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:09:30.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:09:31 compute-1 sudo[141367]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fxiegkyuravvvmxogkjyavmnimaowasd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918170.7733998-1428-32947228289228/AnsiballZ_stat.py'
Jan 20 14:09:31 compute-1 sudo[141367]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:09:31 compute-1 python3.9[141369]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:09:31 compute-1 sudo[141367]: pam_unix(sudo:session): session closed for user root
Jan 20 14:09:31 compute-1 sudo[141509]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-loddyqvmrpcveaczwqqpzxcivwxalfyd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918170.7733998-1428-32947228289228/AnsiballZ_copy.py'
Jan 20 14:09:31 compute-1 sudo[141509]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:09:31 compute-1 python3.9[141511]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1768918170.7733998-1428-32947228289228/.source.yaml _original_basename=.xw15sl9t follow=False checksum=cdeb45300f793bd9e5b2caee7d44d83f067a1a60 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:09:31 compute-1 sudo[141509]: pam_unix(sudo:session): session closed for user root
Jan 20 14:09:32 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:09:32 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:09:32 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:09:32.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:09:32 compute-1 podman[141043]: 2026-01-20 14:09:32.19332944 +0000 UTC m=+3.006207587 container create 1851a3237434bd1b3bf4c26b538cef939eb774c8b11847a2ed0ec26bfe7317aa (image=quay.io/ceph/keepalived:2.2.4, name=affectionate_robinson, release=1793, distribution-scope=public, build-date=2023-02-22T09:23:20, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.component=keepalived-container, name=keepalived, summary=Provides keepalived on RHEL 9 for Ceph., description=keepalived for Ceph, io.buildah.version=1.28.2, architecture=x86_64, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=2.2.4, io.openshift.tags=Ceph keepalived, io.k8s.display-name=Keepalived on RHEL 9, vcs-type=git)
Jan 20 14:09:32 compute-1 podman[141043]: 2026-01-20 14:09:32.177630674 +0000 UTC m=+2.990508851 image pull 4a3a1ff181d97c6dcfa9138ad76eb99fa2c1e840298461d5a7a56133bc05b9a2 quay.io/ceph/keepalived:2.2.4
Jan 20 14:09:32 compute-1 systemd[1]: Started libpod-conmon-1851a3237434bd1b3bf4c26b538cef939eb774c8b11847a2ed0ec26bfe7317aa.scope.
Jan 20 14:09:32 compute-1 systemd[1]: Started libcrun container.
Jan 20 14:09:32 compute-1 podman[141043]: 2026-01-20 14:09:32.288026227 +0000 UTC m=+3.100904394 container init 1851a3237434bd1b3bf4c26b538cef939eb774c8b11847a2ed0ec26bfe7317aa (image=quay.io/ceph/keepalived:2.2.4, name=affectionate_robinson, io.k8s.display-name=Keepalived on RHEL 9, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.component=keepalived-container, io.buildah.version=1.28.2, distribution-scope=public, vcs-type=git, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, description=keepalived for Ceph, io.openshift.tags=Ceph keepalived, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, release=1793, name=keepalived, version=2.2.4, build-date=2023-02-22T09:23:20, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides keepalived on RHEL 9 for Ceph.)
Jan 20 14:09:32 compute-1 podman[141043]: 2026-01-20 14:09:32.297957449 +0000 UTC m=+3.110835626 container start 1851a3237434bd1b3bf4c26b538cef939eb774c8b11847a2ed0ec26bfe7317aa (image=quay.io/ceph/keepalived:2.2.4, name=affectionate_robinson, summary=Provides keepalived on RHEL 9 for Ceph., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.buildah.version=1.28.2, build-date=2023-02-22T09:23:20, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, release=1793, description=keepalived for Ceph, io.k8s.display-name=Keepalived on RHEL 9, vcs-type=git, architecture=x86_64, version=2.2.4, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, distribution-scope=public, io.openshift.tags=Ceph keepalived, vendor=Red Hat, Inc., name=keepalived, com.redhat.component=keepalived-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>)
Jan 20 14:09:32 compute-1 podman[141043]: 2026-01-20 14:09:32.302104907 +0000 UTC m=+3.114983094 container attach 1851a3237434bd1b3bf4c26b538cef939eb774c8b11847a2ed0ec26bfe7317aa (image=quay.io/ceph/keepalived:2.2.4, name=affectionate_robinson, io.k8s.display-name=Keepalived on RHEL 9, name=keepalived, description=keepalived for Ceph, distribution-scope=public, vendor=Red Hat, Inc., vcs-type=git, com.redhat.component=keepalived-container, release=1793, version=2.2.4, io.buildah.version=1.28.2, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, architecture=x86_64, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides keepalived on RHEL 9 for Ceph., io.openshift.tags=Ceph keepalived, build-date=2023-02-22T09:23:20)
Jan 20 14:09:32 compute-1 affectionate_robinson[141566]: 0 0
Jan 20 14:09:32 compute-1 systemd[1]: libpod-1851a3237434bd1b3bf4c26b538cef939eb774c8b11847a2ed0ec26bfe7317aa.scope: Deactivated successfully.
Jan 20 14:09:32 compute-1 podman[141043]: 2026-01-20 14:09:32.30608421 +0000 UTC m=+3.118962357 container died 1851a3237434bd1b3bf4c26b538cef939eb774c8b11847a2ed0ec26bfe7317aa (image=quay.io/ceph/keepalived:2.2.4, name=affectionate_robinson, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, release=1793, version=2.2.4, distribution-scope=public, summary=Provides keepalived on RHEL 9 for Ceph., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.component=keepalived-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.buildah.version=1.28.2, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=keepalived for Ceph, io.openshift.tags=Ceph keepalived, build-date=2023-02-22T09:23:20, io.k8s.display-name=Keepalived on RHEL 9, architecture=x86_64, name=keepalived, vendor=Red Hat, Inc.)
Jan 20 14:09:32 compute-1 systemd[1]: var-lib-containers-storage-overlay-ae9ff7518ea4e04fbc4a988b3d9514cd490fbac9c477eeec28a2a6bc0aeb8d40-merged.mount: Deactivated successfully.
Jan 20 14:09:32 compute-1 podman[141043]: 2026-01-20 14:09:32.340251339 +0000 UTC m=+3.153129486 container remove 1851a3237434bd1b3bf4c26b538cef939eb774c8b11847a2ed0ec26bfe7317aa (image=quay.io/ceph/keepalived:2.2.4, name=affectionate_robinson, summary=Provides keepalived on RHEL 9 for Ceph., name=keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, build-date=2023-02-22T09:23:20, distribution-scope=public, vcs-type=git, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=keepalived for Ceph, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, version=2.2.4, io.openshift.tags=Ceph keepalived, release=1793, io.buildah.version=1.28.2, io.k8s.display-name=Keepalived on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc., com.redhat.component=keepalived-container, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Jan 20 14:09:32 compute-1 systemd[1]: libpod-conmon-1851a3237434bd1b3bf4c26b538cef939eb774c8b11847a2ed0ec26bfe7317aa.scope: Deactivated successfully.
Jan 20 14:09:32 compute-1 systemd[1]: Reloading.
Jan 20 14:09:32 compute-1 systemd-rc-local-generator[141613]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 14:09:32 compute-1 systemd-sysv-generator[141617]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 14:09:32 compute-1 sshd-session[131831]: Connection closed by 192.168.122.30 port 44360
Jan 20 14:09:32 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:09:32 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:09:32 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:09:32.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:09:32 compute-1 sshd-session[131828]: pam_unix(sshd:session): session closed for user zuul
Jan 20 14:09:32 compute-1 systemd[1]: session-47.scope: Deactivated successfully.
Jan 20 14:09:32 compute-1 systemd[1]: session-47.scope: Consumed 1min 1.537s CPU time.
Jan 20 14:09:32 compute-1 systemd-logind[783]: Session 47 logged out. Waiting for processes to exit.
Jan 20 14:09:32 compute-1 systemd-logind[783]: Removed session 47.
Jan 20 14:09:32 compute-1 systemd[1]: Reloading.
Jan 20 14:09:32 compute-1 systemd-sysv-generator[141659]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 14:09:32 compute-1 systemd-rc-local-generator[141655]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 14:09:32 compute-1 ceph-mon[81775]: pgmap v531: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 23 KiB/s rd, 0 B/s wr, 39 op/s
Jan 20 14:09:32 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:09:32 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:09:32 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:09:32.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:09:33 compute-1 systemd[1]: Starting Ceph keepalived.rgw.default.compute-1.cevitz for e399cf45-e6b6-5393-99f1-75c601d3f188...
Jan 20 14:09:33 compute-1 podman[141664]: 2026-01-20 14:09:33.283662873 +0000 UTC m=+0.111583757 container health_status 72be95c12e041eb2cd1a16ad70dfa1391d9f03a5f3180a614ead9459dcfcd46f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 20 14:09:33 compute-1 podman[141735]: 2026-01-20 14:09:33.34025596 +0000 UTC m=+0.036217059 container create 5aefe91b7ec82d14c11ddef250a4a56008ca1c7923fa55da8e9a43986378df32 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-keepalived-rgw-default-compute-1-cevitz, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=keepalived, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.tags=Ceph keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, release=1793, com.redhat.component=keepalived-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, build-date=2023-02-22T09:23:20, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.buildah.version=1.28.2, distribution-scope=public, summary=Provides keepalived on RHEL 9 for Ceph., description=keepalived for Ceph, architecture=x86_64, version=2.2.4, vendor=Red Hat, Inc., vcs-type=git)
Jan 20 14:09:33 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1c25161a5231e80d5ac8ddda72b615c5387a44fbf34d7d20f0c56077ae85edb/merged/etc/keepalived/keepalived.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:09:33 compute-1 podman[141735]: 2026-01-20 14:09:33.395113886 +0000 UTC m=+0.091074985 container init 5aefe91b7ec82d14c11ddef250a4a56008ca1c7923fa55da8e9a43986378df32 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-keepalived-rgw-default-compute-1-cevitz, io.openshift.tags=Ceph keepalived, release=1793, vcs-type=git, architecture=x86_64, description=keepalived for Ceph, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.component=keepalived-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=2.2.4, io.buildah.version=1.28.2, distribution-scope=public, summary=Provides keepalived on RHEL 9 for Ceph., io.openshift.expose-services=, io.k8s.display-name=Keepalived on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, build-date=2023-02-22T09:23:20, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=keepalived)
Jan 20 14:09:33 compute-1 podman[141735]: 2026-01-20 14:09:33.404554624 +0000 UTC m=+0.100515723 container start 5aefe91b7ec82d14c11ddef250a4a56008ca1c7923fa55da8e9a43986378df32 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-keepalived-rgw-default-compute-1-cevitz, architecture=x86_64, distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, build-date=2023-02-22T09:23:20, description=keepalived for Ceph, version=2.2.4, summary=Provides keepalived on RHEL 9 for Ceph., io.k8s.display-name=Keepalived on RHEL 9, io.openshift.tags=Ceph keepalived, com.redhat.component=keepalived-container, io.openshift.expose-services=, name=keepalived, vendor=Red Hat, Inc., release=1793, vcs-type=git, io.buildah.version=1.28.2, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9)
Jan 20 14:09:33 compute-1 bash[141735]: 5aefe91b7ec82d14c11ddef250a4a56008ca1c7923fa55da8e9a43986378df32
Jan 20 14:09:33 compute-1 podman[141735]: 2026-01-20 14:09:33.324730999 +0000 UTC m=+0.020692118 image pull 4a3a1ff181d97c6dcfa9138ad76eb99fa2c1e840298461d5a7a56133bc05b9a2 quay.io/ceph/keepalived:2.2.4
Jan 20 14:09:33 compute-1 systemd[1]: Started Ceph keepalived.rgw.default.compute-1.cevitz for e399cf45-e6b6-5393-99f1-75c601d3f188.
Jan 20 14:09:33 compute-1 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-keepalived-rgw-default-compute-1-cevitz[141751]: Tue Jan 20 14:09:33 2026: Starting Keepalived v2.2.4 (08/21,2021)
Jan 20 14:09:33 compute-1 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-keepalived-rgw-default-compute-1-cevitz[141751]: Tue Jan 20 14:09:33 2026: Running on Linux 5.14.0-661.el9.x86_64 #1 SMP PREEMPT_DYNAMIC Fri Jan 16 09:19:22 UTC 2026 (built for Linux 5.14.0)
Jan 20 14:09:33 compute-1 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-keepalived-rgw-default-compute-1-cevitz[141751]: Tue Jan 20 14:09:33 2026: Command line: '/usr/sbin/keepalived' '-n' '-l' '-f' '/etc/keepalived/keepalived.conf'
Jan 20 14:09:33 compute-1 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-keepalived-rgw-default-compute-1-cevitz[141751]: Tue Jan 20 14:09:33 2026: Configuration file /etc/keepalived/keepalived.conf
Jan 20 14:09:33 compute-1 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-keepalived-rgw-default-compute-1-cevitz[141751]: Tue Jan 20 14:09:33 2026: NOTICE: setting config option max_auto_priority should result in better keepalived performance
Jan 20 14:09:33 compute-1 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-keepalived-rgw-default-compute-1-cevitz[141751]: Tue Jan 20 14:09:33 2026: Starting VRRP child process, pid=4
Jan 20 14:09:33 compute-1 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-keepalived-rgw-default-compute-1-cevitz[141751]: Tue Jan 20 14:09:33 2026: Startup complete
Jan 20 14:09:33 compute-1 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-keepalived-rgw-default-compute-1-cevitz[141751]: Tue Jan 20 14:09:33 2026: (VI_0) Entering BACKUP STATE (init)
Jan 20 14:09:33 compute-1 sudo[140977]: pam_unix(sudo:session): session closed for user root
Jan 20 14:09:33 compute-1 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-keepalived-rgw-default-compute-1-cevitz[141751]: Tue Jan 20 14:09:33 2026: VRRP_Script(check_backend) succeeded
Jan 20 14:09:33 compute-1 sudo[141760]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:09:33 compute-1 sudo[141759]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:09:33 compute-1 sudo[141759]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:09:33 compute-1 sudo[141760]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:09:33 compute-1 sudo[141759]: pam_unix(sudo:session): session closed for user root
Jan 20 14:09:33 compute-1 sudo[141760]: pam_unix(sudo:session): session closed for user root
Jan 20 14:09:33 compute-1 sudo[141810]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:09:33 compute-1 sudo[141810]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:09:33 compute-1 sudo[141810]: pam_unix(sudo:session): session closed for user root
Jan 20 14:09:33 compute-1 sudo[141809]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 14:09:33 compute-1 sudo[141809]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:09:33 compute-1 sudo[141809]: pam_unix(sudo:session): session closed for user root
Jan 20 14:09:33 compute-1 sudo[141859]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:09:33 compute-1 sudo[141859]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:09:33 compute-1 sudo[141859]: pam_unix(sudo:session): session closed for user root
Jan 20 14:09:33 compute-1 sudo[141884]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:09:33 compute-1 sudo[141884]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:09:33 compute-1 sudo[141884]: pam_unix(sudo:session): session closed for user root
Jan 20 14:09:33 compute-1 sudo[141909]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:09:33 compute-1 sudo[141909]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:09:33 compute-1 sudo[141909]: pam_unix(sudo:session): session closed for user root
Jan 20 14:09:34 compute-1 sudo[141934]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Jan 20 14:09:34 compute-1 sudo[141934]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:09:34 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:09:34 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:09:34 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:09:34.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:09:34 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:09:34 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:09:34 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:09:34 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:09:34 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:09:34 compute-1 ceph-mon[81775]: pgmap v532: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 32 KiB/s rd, 0 B/s wr, 53 op/s
Jan 20 14:09:34 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:09:34 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:09:34 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:09:34.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:09:34 compute-1 podman[142031]: 2026-01-20 14:09:34.646179892 +0000 UTC m=+0.080048442 container exec 718ebba7a543e42aad7051248d2c7dc014068c35c89c5b87f27b82d4de39c009 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-crash-compute-1, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True)
Jan 20 14:09:34 compute-1 podman[142031]: 2026-01-20 14:09:34.779560857 +0000 UTC m=+0.213429407 container exec_died 718ebba7a543e42aad7051248d2c7dc014068c35c89c5b87f27b82d4de39c009 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-crash-compute-1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:09:34 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:09:34 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:09:34 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:09:34.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:09:35 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:09:35 compute-1 podman[142188]: 2026-01-20 14:09:35.751452449 +0000 UTC m=+0.082498072 container exec 25e2c3387bc15944c21038272559da5fbf75910d8dd4add0faa995fb4e0f7788 (image=quay.io/ceph/haproxy:2.3, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-haproxy-rgw-default-compute-1-uyeocq)
Jan 20 14:09:35 compute-1 podman[142188]: 2026-01-20 14:09:35.791486026 +0000 UTC m=+0.122531609 container exec_died 25e2c3387bc15944c21038272559da5fbf75910d8dd4add0faa995fb4e0f7788 (image=quay.io/ceph/haproxy:2.3, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-haproxy-rgw-default-compute-1-uyeocq)
Jan 20 14:09:36 compute-1 podman[142254]: 2026-01-20 14:09:36.085648314 +0000 UTC m=+0.069232866 container exec 5aefe91b7ec82d14c11ddef250a4a56008ca1c7923fa55da8e9a43986378df32 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-keepalived-rgw-default-compute-1-cevitz, architecture=x86_64, description=keepalived for Ceph, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides keepalived on RHEL 9 for Ceph., io.openshift.tags=Ceph keepalived, release=1793, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, version=2.2.4, build-date=2023-02-22T09:23:20, io.openshift.expose-services=, io.buildah.version=1.28.2, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, distribution-scope=public, io.k8s.display-name=Keepalived on RHEL 9, com.redhat.component=keepalived-container, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, name=keepalived, vendor=Red Hat, Inc.)
Jan 20 14:09:36 compute-1 podman[142254]: 2026-01-20 14:09:36.10029619 +0000 UTC m=+0.083880682 container exec_died 5aefe91b7ec82d14c11ddef250a4a56008ca1c7923fa55da8e9a43986378df32 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-keepalived-rgw-default-compute-1-cevitz, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides keepalived on RHEL 9 for Ceph., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, version=2.2.4, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git, release=1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.openshift.tags=Ceph keepalived, io.k8s.display-name=Keepalived on RHEL 9, name=keepalived, vendor=Red Hat, Inc., build-date=2023-02-22T09:23:20, com.redhat.component=keepalived-container, io.openshift.expose-services=, architecture=x86_64, description=keepalived for Ceph, io.buildah.version=1.28.2)
Jan 20 14:09:36 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:09:36 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:09:36 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:09:36.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:09:36 compute-1 sudo[141934]: pam_unix(sudo:session): session closed for user root
Jan 20 14:09:36 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:09:36 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:09:36 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:09:36.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:09:36 compute-1 ceph-mon[81775]: pgmap v533: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 32 KiB/s rd, 0 B/s wr, 53 op/s
Jan 20 14:09:36 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:09:36 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:09:36 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:09:36.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:09:37 compute-1 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-keepalived-rgw-default-compute-1-cevitz[141751]: Tue Jan 20 14:09:37 2026: (VI_0) Entering MASTER STATE
Jan 20 14:09:37 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:09:37 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:09:37 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:09:37 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 14:09:37 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:09:37 compute-1 ceph-mon[81775]: pgmap v534: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 20 14:09:37 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 14:09:37 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 14:09:37 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:09:38 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:09:38 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:09:38 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:09:38.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:09:38 compute-1 sshd-session[142288]: Accepted publickey for zuul from 192.168.122.30 port 49426 ssh2: ECDSA SHA256:Yw0kyD5N4lqNgr1J3b5cYIIxKFrTRY8zW6kk+n6imz4
Jan 20 14:09:38 compute-1 systemd-logind[783]: New session 48 of user zuul.
Jan 20 14:09:38 compute-1 systemd[1]: Started Session 48 of User zuul.
Jan 20 14:09:38 compute-1 sshd-session[142288]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 20 14:09:38 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:09:38 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:09:38 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:09:38.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:09:38 compute-1 ceph-mon[81775]: Health check cleared: CEPHADM_APPLY_SPEC_FAIL (was: Failed to apply 1 service(s): osd.default_drive_group)
Jan 20 14:09:38 compute-1 ceph-mon[81775]: Cluster is now healthy
Jan 20 14:09:38 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:09:38 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:09:38 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:09:38.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:09:39 compute-1 python3.9[142441]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 14:09:39 compute-1 ceph-mon[81775]: pgmap v535: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 20 14:09:40 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:09:40 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:09:40 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:09:40.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:09:40 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:09:40 compute-1 sudo[142596]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-daqghlmzfohmxnnwnaxrnsmajipvaruu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918179.9301069-63-198154131391531/AnsiballZ_command.py'
Jan 20 14:09:40 compute-1 sudo[142596]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:09:40 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:09:40 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:09:40 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:09:40.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:09:40 compute-1 python3.9[142598]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --filter name=^nova_virtlogd$ --format \{\{.Names\}\} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 14:09:40 compute-1 sudo[142596]: pam_unix(sudo:session): session closed for user root
Jan 20 14:09:41 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:09:41 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:09:41 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:09:40.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:09:41 compute-1 ceph-mon[81775]: pgmap v536: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 32 KiB/s rd, 0 B/s wr, 53 op/s
Jan 20 14:09:41 compute-1 sudo[142761]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ihjlqqklyneyybdjipeqqvidhjopganm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918181.2300992-96-44796719551403/AnsiballZ_systemd_service.py'
Jan 20 14:09:41 compute-1 sudo[142761]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:09:42 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:09:42 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:09:42 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:09:42.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:09:42 compute-1 python3.9[142763]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 20 14:09:42 compute-1 systemd[1]: Reloading.
Jan 20 14:09:42 compute-1 systemd-rc-local-generator[142782]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 14:09:42 compute-1 systemd-sysv-generator[142790]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 14:09:42 compute-1 sudo[142761]: pam_unix(sudo:session): session closed for user root
Jan 20 14:09:42 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:09:42 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:09:42 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:09:42.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:09:43 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:09:43 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:09:43 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:09:43.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:09:43 compute-1 python3.9[142948]: ansible-ansible.builtin.service_facts Invoked
Jan 20 14:09:43 compute-1 network[142966]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 20 14:09:43 compute-1 network[142967]: 'network-scripts' will be removed from distribution in near future.
Jan 20 14:09:43 compute-1 network[142968]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 20 14:09:43 compute-1 ceph-mon[81775]: pgmap v537: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 9.8 KiB/s rd, 0 B/s wr, 16 op/s
Jan 20 14:09:43 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:09:43 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:09:44 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:09:44 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:09:44 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:09:44.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:09:44 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:09:44 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:09:44 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:09:44.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:09:44 compute-1 ceph-mon[81775]: Removing daemon haproxy.rgw.default.compute-2.cuokcs from compute-2 -- ports [8080, 8999]
Jan 20 14:09:45 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:09:45 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:09:45 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:09:45.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:09:45 compute-1 podman[142997]: 2026-01-20 14:09:45.051557487 +0000 UTC m=+0.125339088 container health_status 533cd8e0c3b8b7f910f15cef49e7f879ab46c12ce646a35628c15f13725eb500 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 20 14:09:45 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:09:45 compute-1 ceph-mon[81775]: pgmap v538: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:09:45 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth rm", "entity": "client.ingress.rgw.default.compute-2.cuokcs"}]: dispatch
Jan 20 14:09:45 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:09:46 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:09:46 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:09:46 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:09:46.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:09:46 compute-1 ceph-mon[81775]: Removing key for client.ingress.rgw.default.compute-2.cuokcs
Jan 20 14:09:46 compute-1 ceph-mon[81775]: Removing daemon keepalived.rgw.default.compute-2.dleeql from compute-2 -- ports []
Jan 20 14:09:47 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:09:47 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:09:47 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:09:47.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:09:47 compute-1 ceph-mon[81775]: pgmap v539: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:09:48 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:09:48 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:09:48 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:09:48.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:09:48 compute-1 sudo[143250]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gmtepkqotaklxjuhbpqvnnjdmxnmczxg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918187.845229-153-175593300329690/AnsiballZ_systemd_service.py'
Jan 20 14:09:48 compute-1 sudo[143250]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:09:48 compute-1 sudo[143253]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:09:48 compute-1 sudo[143253]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:09:48 compute-1 sudo[143253]: pam_unix(sudo:session): session closed for user root
Jan 20 14:09:48 compute-1 sudo[143278]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:09:48 compute-1 sudo[143278]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:09:48 compute-1 sudo[143278]: pam_unix(sudo:session): session closed for user root
Jan 20 14:09:48 compute-1 python3.9[143252]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_libvirt.target state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 20 14:09:48 compute-1 sudo[143301]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:09:48 compute-1 sudo[143301]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:09:48 compute-1 sudo[143301]: pam_unix(sudo:session): session closed for user root
Jan 20 14:09:48 compute-1 sudo[143250]: pam_unix(sudo:session): session closed for user root
Jan 20 14:09:48 compute-1 sudo[143329]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 14:09:48 compute-1 sudo[143329]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:09:48 compute-1 sudo[143329]: pam_unix(sudo:session): session closed for user root
Jan 20 14:09:48 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth rm", "entity": "client.ingress.rgw.default.compute-2.dleeql"}]: dispatch
Jan 20 14:09:48 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:09:48 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:09:48 compute-1 sudo[143405]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:09:48 compute-1 sudo[143405]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:09:48 compute-1 sudo[143405]: pam_unix(sudo:session): session closed for user root
Jan 20 14:09:49 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:09:49 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:09:49 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:09:49.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:09:49 compute-1 sudo[143458]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:09:49 compute-1 sudo[143458]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:09:49 compute-1 sudo[143458]: pam_unix(sudo:session): session closed for user root
Jan 20 14:09:49 compute-1 sudo[143503]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:09:49 compute-1 sudo[143503]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:09:49 compute-1 sudo[143503]: pam_unix(sudo:session): session closed for user root
Jan 20 14:09:49 compute-1 sudo[143552]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Jan 20 14:09:49 compute-1 sudo[143552]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:09:49 compute-1 sudo[143603]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vqqdvjqaefkpfiafjlooumdnryqinrvw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918188.863276-153-154849397177437/AnsiballZ_systemd_service.py'
Jan 20 14:09:49 compute-1 sudo[143603]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:09:49 compute-1 python3.9[143605]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtlogd_wrapper.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 20 14:09:49 compute-1 sudo[143603]: pam_unix(sudo:session): session closed for user root
Jan 20 14:09:49 compute-1 ceph-mon[81775]: Removing key for client.ingress.rgw.default.compute-2.dleeql
Jan 20 14:09:49 compute-1 ceph-mon[81775]: pgmap v540: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:09:49 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:09:49 compute-1 podman[143726]: 2026-01-20 14:09:49.912030126 +0000 UTC m=+0.089856181 container exec 718ebba7a543e42aad7051248d2c7dc014068c35c89c5b87f27b82d4de39c009 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-crash-compute-1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 20 14:09:50 compute-1 podman[143726]: 2026-01-20 14:09:50.012387875 +0000 UTC m=+0.190213910 container exec_died 718ebba7a543e42aad7051248d2c7dc014068c35c89c5b87f27b82d4de39c009 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-crash-compute-1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:09:50 compute-1 sudo[143878]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uncptlgtjfssbjddvcmalunzkmskxgie ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918189.8049836-153-158737447240740/AnsiballZ_systemd_service.py'
Jan 20 14:09:50 compute-1 sudo[143878]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:09:50 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:09:50 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:09:50 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:09:50.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:09:50 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:09:50 compute-1 python3.9[143884]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtnodedevd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 20 14:09:50 compute-1 sudo[143878]: pam_unix(sudo:session): session closed for user root
Jan 20 14:09:50 compute-1 podman[144055]: 2026-01-20 14:09:50.770398967 +0000 UTC m=+0.080088324 container exec 25e2c3387bc15944c21038272559da5fbf75910d8dd4add0faa995fb4e0f7788 (image=quay.io/ceph/haproxy:2.3, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-haproxy-rgw-default-compute-1-uyeocq)
Jan 20 14:09:50 compute-1 podman[144055]: 2026-01-20 14:09:50.782208082 +0000 UTC m=+0.091897349 container exec_died 25e2c3387bc15944c21038272559da5fbf75910d8dd4add0faa995fb4e0f7788 (image=quay.io/ceph/haproxy:2.3, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-haproxy-rgw-default-compute-1-uyeocq)
Jan 20 14:09:50 compute-1 sudo[144191]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xmwkkdtxulzdczhkkoprevunpdxibiag ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918190.6183524-153-167608646710297/AnsiballZ_systemd_service.py'
Jan 20 14:09:50 compute-1 sudo[144191]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:09:51 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:09:51 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:09:51 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:09:51.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:09:51 compute-1 podman[144206]: 2026-01-20 14:09:51.072737668 +0000 UTC m=+0.072101708 container exec 5aefe91b7ec82d14c11ddef250a4a56008ca1c7923fa55da8e9a43986378df32 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-keepalived-rgw-default-compute-1-cevitz, vcs-type=git, io.openshift.tags=Ceph keepalived, version=2.2.4, com.redhat.component=keepalived-container, architecture=x86_64, distribution-scope=public, io.openshift.expose-services=, name=keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, description=keepalived for Ceph, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, build-date=2023-02-22T09:23:20, io.k8s.display-name=Keepalived on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides keepalived on RHEL 9 for Ceph., vendor=Red Hat, Inc., io.buildah.version=1.28.2)
Jan 20 14:09:51 compute-1 podman[144206]: 2026-01-20 14:09:51.094229328 +0000 UTC m=+0.093593358 container exec_died 5aefe91b7ec82d14c11ddef250a4a56008ca1c7923fa55da8e9a43986378df32 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-keepalived-rgw-default-compute-1-cevitz, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, description=keepalived for Ceph, summary=Provides keepalived on RHEL 9 for Ceph., architecture=x86_64, name=keepalived, vcs-type=git, version=2.2.4, build-date=2023-02-22T09:23:20, io.k8s.display-name=Keepalived on RHEL 9, com.redhat.component=keepalived-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1793, io.openshift.tags=Ceph keepalived, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.28.2, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, distribution-scope=public, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vendor=Red Hat, Inc.)
Jan 20 14:09:51 compute-1 sudo[143552]: pam_unix(sudo:session): session closed for user root
Jan 20 14:09:51 compute-1 sudo[144240]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:09:51 compute-1 sudo[144240]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:09:51 compute-1 sudo[144240]: pam_unix(sudo:session): session closed for user root
Jan 20 14:09:51 compute-1 python3.9[144203]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtproxyd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 20 14:09:51 compute-1 sudo[144191]: pam_unix(sudo:session): session closed for user root
Jan 20 14:09:51 compute-1 sudo[144265]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:09:51 compute-1 sudo[144265]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:09:51 compute-1 sudo[144265]: pam_unix(sudo:session): session closed for user root
Jan 20 14:09:51 compute-1 ceph-mon[81775]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #25. Immutable memtables: 0.
Jan 20 14:09:51 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:09:51.361834) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 20 14:09:51 compute-1 ceph-mon[81775]: rocksdb: [db/flush_job.cc:856] [default] [JOB 11] Flushing memtable with next log file: 25
Jan 20 14:09:51 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768918191361985, "job": 11, "event": "flush_started", "num_memtables": 1, "num_entries": 1530, "num_deletes": 257, "total_data_size": 3512167, "memory_usage": 3559760, "flush_reason": "Manual Compaction"}
Jan 20 14:09:51 compute-1 ceph-mon[81775]: rocksdb: [db/flush_job.cc:885] [default] [JOB 11] Level-0 flush table #26: started
Jan 20 14:09:51 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768918191387745, "cf_name": "default", "job": 11, "event": "table_file_creation", "file_number": 26, "file_size": 2252287, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 12164, "largest_seqno": 13689, "table_properties": {"data_size": 2245739, "index_size": 3683, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1797, "raw_key_size": 13582, "raw_average_key_size": 18, "raw_value_size": 2232168, "raw_average_value_size": 3121, "num_data_blocks": 166, "num_entries": 715, "num_filter_entries": 715, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768918052, "oldest_key_time": 1768918052, "file_creation_time": 1768918191, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1539d774-8a6f-4e48-b253-137c44586344", "db_session_id": "LFF7G2OZDOU7TKQ8MKAH", "orig_file_number": 26, "seqno_to_time_mapping": "N/A"}}
Jan 20 14:09:51 compute-1 ceph-mon[81775]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 11] Flush lasted 25968 microseconds, and 10105 cpu microseconds.
Jan 20 14:09:51 compute-1 ceph-mon[81775]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 14:09:51 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:09:51.387809) [db/flush_job.cc:967] [default] [JOB 11] Level-0 flush table #26: 2252287 bytes OK
Jan 20 14:09:51 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:09:51.387833) [db/memtable_list.cc:519] [default] Level-0 commit table #26 started
Jan 20 14:09:51 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:09:51.389697) [db/memtable_list.cc:722] [default] Level-0 commit table #26: memtable #1 done
Jan 20 14:09:51 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:09:51.389722) EVENT_LOG_v1 {"time_micros": 1768918191389715, "job": 11, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 20 14:09:51 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:09:51.389744) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 20 14:09:51 compute-1 ceph-mon[81775]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 11] Try to delete WAL files size 3504908, prev total WAL file size 3504908, number of live WAL files 2.
Jan 20 14:09:51 compute-1 ceph-mon[81775]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000022.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 14:09:51 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:09:51.391162) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B760030' seq:72057594037927935, type:22 .. '6B7600323537' seq:0, type:0; will stop at (end)
Jan 20 14:09:51 compute-1 ceph-mon[81775]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 12] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 20 14:09:51 compute-1 ceph-mon[81775]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 11 Base level 0, inputs: [26(2199KB)], [24(8052KB)]
Jan 20 14:09:51 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768918191391207, "job": 12, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [26], "files_L6": [24], "score": -1, "input_data_size": 10497758, "oldest_snapshot_seqno": -1}
Jan 20 14:09:51 compute-1 sudo[144291]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:09:51 compute-1 sudo[144291]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:09:51 compute-1 sudo[144291]: pam_unix(sudo:session): session closed for user root
Jan 20 14:09:51 compute-1 ceph-mon[81775]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 12] Generated table #27: 4137 keys, 9910000 bytes, temperature: kUnknown
Jan 20 14:09:51 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768918191487259, "cf_name": "default", "job": 12, "event": "table_file_creation", "file_number": 27, "file_size": 9910000, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9878300, "index_size": 20262, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10373, "raw_key_size": 101803, "raw_average_key_size": 24, "raw_value_size": 9799484, "raw_average_value_size": 2368, "num_data_blocks": 859, "num_entries": 4137, "num_filter_entries": 4137, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768917474, "oldest_key_time": 0, "file_creation_time": 1768918191, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1539d774-8a6f-4e48-b253-137c44586344", "db_session_id": "LFF7G2OZDOU7TKQ8MKAH", "orig_file_number": 27, "seqno_to_time_mapping": "N/A"}}
Jan 20 14:09:51 compute-1 ceph-mon[81775]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 14:09:51 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:09:51.487502) [db/compaction/compaction_job.cc:1663] [default] [JOB 12] Compacted 1@0 + 1@6 files to L6 => 9910000 bytes
Jan 20 14:09:51 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:09:51.489277) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 109.2 rd, 103.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.1, 7.9 +0.0 blob) out(9.5 +0.0 blob), read-write-amplify(9.1) write-amplify(4.4) OK, records in: 4674, records dropped: 537 output_compression: NoCompression
Jan 20 14:09:51 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:09:51.489294) EVENT_LOG_v1 {"time_micros": 1768918191489284, "job": 12, "event": "compaction_finished", "compaction_time_micros": 96142, "compaction_time_cpu_micros": 33932, "output_level": 6, "num_output_files": 1, "total_output_size": 9910000, "num_input_records": 4674, "num_output_records": 4137, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 20 14:09:51 compute-1 ceph-mon[81775]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000026.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 14:09:51 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768918191489936, "job": 12, "event": "table_file_deletion", "file_number": 26}
Jan 20 14:09:51 compute-1 ceph-mon[81775]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000024.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 14:09:51 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768918191491159, "job": 12, "event": "table_file_deletion", "file_number": 24}
Jan 20 14:09:51 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:09:51.391062) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:09:51 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:09:51.491236) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:09:51 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:09:51.491241) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:09:51 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:09:51.491243) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:09:51 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:09:51.491246) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:09:51 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:09:51.491248) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:09:51 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:09:51 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:09:51 compute-1 ceph-mon[81775]: pgmap v541: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:09:51 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:09:51 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:09:51 compute-1 sudo[144367]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 20 14:09:51 compute-1 sudo[144367]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:09:51 compute-1 sudo[144505]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gywcjvrbodeifxnpexbtrlzcgsbeaagc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918191.4699192-153-199091866403541/AnsiballZ_systemd_service.py'
Jan 20 14:09:51 compute-1 sudo[144505]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:09:52 compute-1 sudo[144367]: pam_unix(sudo:session): session closed for user root
Jan 20 14:09:52 compute-1 python3.9[144507]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 20 14:09:52 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:09:52 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:09:52 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:09:52.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:09:52 compute-1 sudo[144505]: pam_unix(sudo:session): session closed for user root
Jan 20 14:09:52 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:09:52 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 14:09:52 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:09:52 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 14:09:52 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 14:09:52 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:09:52 compute-1 sudo[144675]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-khjvpwwfefafmomakegxsuhjelbrakbk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918192.3417416-153-207686600431606/AnsiballZ_systemd_service.py'
Jan 20 14:09:52 compute-1 sudo[144675]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:09:52 compute-1 python3.9[144677]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtsecretd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 20 14:09:53 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:09:53 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:09:53 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:09:53.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:09:53 compute-1 sudo[144675]: pam_unix(sudo:session): session closed for user root
Jan 20 14:09:53 compute-1 ceph-mon[81775]: pgmap v542: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:09:53 compute-1 sudo[144829]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ykhjjyyaixdntutdeajfaorgfhuqokrs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918193.201246-153-277538461949727/AnsiballZ_systemd_service.py'
Jan 20 14:09:53 compute-1 sudo[144829]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:09:53 compute-1 python3.9[144831]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtstoraged.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 20 14:09:53 compute-1 sudo[144829]: pam_unix(sudo:session): session closed for user root
Jan 20 14:09:54 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:09:54 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:09:54 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:09:54.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:09:55 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:09:55 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:09:55 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:09:55.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:09:55 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:09:55 compute-1 ceph-mon[81775]: pgmap v543: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:09:56 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:09:56 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:09:56 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:09:56.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:09:56 compute-1 sudo[144983]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mslqpugywabalqmkkgjcofqryvprkrav ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918195.6721911-309-221548577041675/AnsiballZ_file.py'
Jan 20 14:09:56 compute-1 sudo[144983]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:09:56 compute-1 python3.9[144985]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:09:56 compute-1 sudo[144983]: pam_unix(sudo:session): session closed for user root
Jan 20 14:09:56 compute-1 sudo[145135]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-omjugpnpvkkxlbwbtvgpqhccblczvhmb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918196.568592-309-129657278618310/AnsiballZ_file.py'
Jan 20 14:09:56 compute-1 sudo[145135]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:09:57 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:09:57 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:09:57 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:09:57.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:09:57 compute-1 python3.9[145137]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:09:57 compute-1 sudo[145135]: pam_unix(sudo:session): session closed for user root
Jan 20 14:09:57 compute-1 sudo[145288]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dzkyjklvnptxztoektgckkcbqbffnfcl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918197.3107545-309-138058626405375/AnsiballZ_file.py'
Jan 20 14:09:57 compute-1 sudo[145288]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:09:57 compute-1 python3.9[145290]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:09:57 compute-1 sudo[145288]: pam_unix(sudo:session): session closed for user root
Jan 20 14:09:57 compute-1 ceph-mon[81775]: pgmap v544: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:09:58 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:09:58 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:09:58 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:09:58.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:09:58 compute-1 sudo[145440]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gnotzzrmcvgssqojljljaqeyqhfhpclo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918198.0273838-309-30061495267863/AnsiballZ_file.py'
Jan 20 14:09:58 compute-1 sudo[145440]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:09:58 compute-1 python3.9[145442]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:09:58 compute-1 sudo[145440]: pam_unix(sudo:session): session closed for user root
Jan 20 14:09:59 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:09:59 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:09:59 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:09:59.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:09:59 compute-1 sudo[145592]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vscumtfuvtvjtjdvczepaukfvknsvmpn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918198.718633-309-197729916261229/AnsiballZ_file.py'
Jan 20 14:09:59 compute-1 sudo[145592]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:09:59 compute-1 sudo[145595]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:09:59 compute-1 sudo[145595]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:09:59 compute-1 sudo[145595]: pam_unix(sudo:session): session closed for user root
Jan 20 14:09:59 compute-1 python3.9[145594]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:09:59 compute-1 sudo[145592]: pam_unix(sudo:session): session closed for user root
Jan 20 14:09:59 compute-1 sudo[145620]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 14:09:59 compute-1 sudo[145620]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:09:59 compute-1 sudo[145620]: pam_unix(sudo:session): session closed for user root
Jan 20 14:09:59 compute-1 sudo[145795]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-shbsueibcrrpdqwyeosplspyajhzahcg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918199.4903069-309-44912345860591/AnsiballZ_file.py'
Jan 20 14:09:59 compute-1 sudo[145795]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:09:59 compute-1 python3.9[145797]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:09:59 compute-1 sudo[145795]: pam_unix(sudo:session): session closed for user root
Jan 20 14:10:00 compute-1 ceph-mon[81775]: pgmap v545: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:10:00 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:10:00 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:10:00 compute-1 ceph-mon[81775]: overall HEALTH_OK
Jan 20 14:10:00 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:10:00 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:10:00 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:10:00.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:10:00 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:10:00 compute-1 sudo[145947]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ersnhapavzbtuxnmzqfqxenstxikccmn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918200.1331584-309-222618370647526/AnsiballZ_file.py'
Jan 20 14:10:00 compute-1 sudo[145947]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:10:00 compute-1 python3.9[145949]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:10:00 compute-1 sudo[145947]: pam_unix(sudo:session): session closed for user root
Jan 20 14:10:01 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:10:01 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:10:01 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:10:01.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:10:01 compute-1 ceph-mon[81775]: Reconfiguring keepalived.rgw.default.compute-0.gcjsxe (dependencies changed)...
Jan 20 14:10:01 compute-1 ceph-mon[81775]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Jan 20 14:10:01 compute-1 ceph-mon[81775]: 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Jan 20 14:10:01 compute-1 ceph-mon[81775]: Reconfiguring daemon keepalived.rgw.default.compute-0.gcjsxe on compute-0
Jan 20 14:10:01 compute-1 sudo[146100]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rklkorxwzcbvfnwsvovamzrlyajljljl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918201.1159577-459-256164922838431/AnsiballZ_file.py'
Jan 20 14:10:01 compute-1 sudo[146100]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:10:01 compute-1 python3.9[146102]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:10:01 compute-1 sudo[146100]: pam_unix(sudo:session): session closed for user root
Jan 20 14:10:02 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:10:02 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:10:02 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:10:02.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:10:02 compute-1 sudo[146252]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qykzgsodiaddrmdwaoebdpajvfhlkmcs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918201.8588588-459-137924145609554/AnsiballZ_file.py'
Jan 20 14:10:02 compute-1 sudo[146252]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:10:02 compute-1 ceph-mon[81775]: pgmap v546: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:10:02 compute-1 python3.9[146254]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:10:02 compute-1 sudo[146252]: pam_unix(sudo:session): session closed for user root
Jan 20 14:10:02 compute-1 sudo[146343]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:10:02 compute-1 sudo[146343]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:10:02 compute-1 sudo[146343]: pam_unix(sudo:session): session closed for user root
Jan 20 14:10:02 compute-1 sudo[146379]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:10:02 compute-1 sudo[146379]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:10:02 compute-1 sudo[146379]: pam_unix(sudo:session): session closed for user root
Jan 20 14:10:02 compute-1 sudo[146428]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:10:02 compute-1 sudo[146428]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:10:02 compute-1 sudo[146428]: pam_unix(sudo:session): session closed for user root
Jan 20 14:10:02 compute-1 sudo[146478]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gumgcruhfkiiwbyfjpahvxgyjkjlidta ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918202.591268-459-254614231564970/AnsiballZ_file.py'
Jan 20 14:10:02 compute-1 sudo[146478]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:10:02 compute-1 sudo[146480]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/keepalived:2.2.4 --timeout 895 _orch deploy --fsid e399cf45-e6b6-5393-99f1-75c601d3f188
Jan 20 14:10:02 compute-1 sudo[146480]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:10:03 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:10:03 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:10:03 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:10:03.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:10:03 compute-1 python3.9[146489]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:10:03 compute-1 sudo[146478]: pam_unix(sudo:session): session closed for user root
Jan 20 14:10:03 compute-1 podman[146531]: 2026-01-20 14:10:03.281493292 +0000 UTC m=+0.071174131 container create 05e74022568c9941c63722eb3c5cf98e4e561a45e5da76bee1e1e08f5188d90c (image=quay.io/ceph/keepalived:2.2.4, name=quirky_cannon, release=1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, name=keepalived, build-date=2023-02-22T09:23:20, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Keepalived on RHEL 9, version=2.2.4, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.buildah.version=1.28.2, io.openshift.tags=Ceph keepalived, description=keepalived for Ceph, vendor=Red Hat, Inc., distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64, io.openshift.expose-services=, com.redhat.component=keepalived-container, summary=Provides keepalived on RHEL 9 for Ceph.)
Jan 20 14:10:03 compute-1 systemd[1]: Started libpod-conmon-05e74022568c9941c63722eb3c5cf98e4e561a45e5da76bee1e1e08f5188d90c.scope.
Jan 20 14:10:03 compute-1 podman[146531]: 2026-01-20 14:10:03.251361787 +0000 UTC m=+0.041042716 image pull 4a3a1ff181d97c6dcfa9138ad76eb99fa2c1e840298461d5a7a56133bc05b9a2 quay.io/ceph/keepalived:2.2.4
Jan 20 14:10:03 compute-1 systemd[1]: Started libcrun container.
Jan 20 14:10:03 compute-1 podman[146531]: 2026-01-20 14:10:03.395546818 +0000 UTC m=+0.185227757 container init 05e74022568c9941c63722eb3c5cf98e4e561a45e5da76bee1e1e08f5188d90c (image=quay.io/ceph/keepalived:2.2.4, name=quirky_cannon, io.openshift.expose-services=, io.k8s.display-name=Keepalived on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.tags=Ceph keepalived, version=2.2.4, name=keepalived, build-date=2023-02-22T09:23:20, com.redhat.component=keepalived-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vendor=Red Hat, Inc., distribution-scope=public, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vcs-type=git, description=keepalived for Ceph, io.buildah.version=1.28.2, summary=Provides keepalived on RHEL 9 for Ceph., release=1793, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Jan 20 14:10:03 compute-1 podman[146531]: 2026-01-20 14:10:03.41219112 +0000 UTC m=+0.201871999 container start 05e74022568c9941c63722eb3c5cf98e4e561a45e5da76bee1e1e08f5188d90c (image=quay.io/ceph/keepalived:2.2.4, name=quirky_cannon, com.redhat.component=keepalived-container, summary=Provides keepalived on RHEL 9 for Ceph., io.k8s.display-name=Keepalived on RHEL 9, io.openshift.tags=Ceph keepalived, vendor=Red Hat, Inc., architecture=x86_64, description=keepalived for Ceph, version=2.2.4, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, name=keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.buildah.version=1.28.2, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, vcs-type=git, io.openshift.expose-services=, build-date=2023-02-22T09:23:20)
Jan 20 14:10:03 compute-1 podman[146531]: 2026-01-20 14:10:03.418035876 +0000 UTC m=+0.207716745 container attach 05e74022568c9941c63722eb3c5cf98e4e561a45e5da76bee1e1e08f5188d90c (image=quay.io/ceph/keepalived:2.2.4, name=quirky_cannon, vcs-type=git, name=keepalived, com.redhat.component=keepalived-container, version=2.2.4, distribution-scope=public, description=keepalived for Ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, build-date=2023-02-22T09:23:20, summary=Provides keepalived on RHEL 9 for Ceph., release=1793, io.openshift.tags=Ceph keepalived, vendor=Red Hat, Inc., architecture=x86_64, io.openshift.expose-services=, io.k8s.display-name=Keepalived on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.28.2, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9)
Jan 20 14:10:03 compute-1 quirky_cannon[146587]: 0 0
Jan 20 14:10:03 compute-1 systemd[1]: libpod-05e74022568c9941c63722eb3c5cf98e4e561a45e5da76bee1e1e08f5188d90c.scope: Deactivated successfully.
Jan 20 14:10:03 compute-1 podman[146531]: 2026-01-20 14:10:03.419632041 +0000 UTC m=+0.209312890 container died 05e74022568c9941c63722eb3c5cf98e4e561a45e5da76bee1e1e08f5188d90c (image=quay.io/ceph/keepalived:2.2.4, name=quirky_cannon, distribution-scope=public, name=keepalived, io.buildah.version=1.28.2, com.redhat.component=keepalived-container, release=1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, description=keepalived for Ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.openshift.expose-services=, vendor=Red Hat, Inc., version=2.2.4, vcs-type=git, build-date=2023-02-22T09:23:20, io.k8s.display-name=Keepalived on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=Ceph keepalived, architecture=x86_64, summary=Provides keepalived on RHEL 9 for Ceph., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Guillaume Abrioux <gabrioux@redhat.com>)
Jan 20 14:10:03 compute-1 systemd[1]: var-lib-containers-storage-overlay-bd96cd56c0156c2fc010f920ec08ada05d304117ea15340123bd4fe5fddf7f64-merged.mount: Deactivated successfully.
Jan 20 14:10:03 compute-1 podman[146531]: 2026-01-20 14:10:03.497856021 +0000 UTC m=+0.287536890 container remove 05e74022568c9941c63722eb3c5cf98e4e561a45e5da76bee1e1e08f5188d90c (image=quay.io/ceph/keepalived:2.2.4, name=quirky_cannon, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=2.2.4, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1793, build-date=2023-02-22T09:23:20, io.openshift.tags=Ceph keepalived, summary=Provides keepalived on RHEL 9 for Ceph., architecture=x86_64, description=keepalived for Ceph, io.openshift.expose-services=, vcs-type=git, io.k8s.display-name=Keepalived on RHEL 9, name=keepalived, vendor=Red Hat, Inc., distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.component=keepalived-container, io.buildah.version=1.28.2)
Jan 20 14:10:03 compute-1 systemd[1]: libpod-conmon-05e74022568c9941c63722eb3c5cf98e4e561a45e5da76bee1e1e08f5188d90c.scope: Deactivated successfully.
Jan 20 14:10:03 compute-1 podman[146588]: 2026-01-20 14:10:03.532784093 +0000 UTC m=+0.167888945 container health_status 72be95c12e041eb2cd1a16ad70dfa1391d9f03a5f3180a614ead9459dcfcd46f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller)
Jan 20 14:10:03 compute-1 systemd[1]: Stopping Ceph keepalived.rgw.default.compute-1.cevitz for e399cf45-e6b6-5393-99f1-75c601d3f188...
Jan 20 14:10:03 compute-1 sudo[146753]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-naelwrstdaxklaxucllyaiutmlxsksbw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918203.3209417-459-19102334756068/AnsiballZ_file.py'
Jan 20 14:10:03 compute-1 sudo[146753]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:10:03 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:10:03 compute-1 ceph-mon[81775]: pgmap v547: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:10:03 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:10:03 compute-1 ceph-mon[81775]: Reconfiguring keepalived.rgw.default.compute-1.cevitz (dependencies changed)...
Jan 20 14:10:03 compute-1 ceph-mon[81775]: 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Jan 20 14:10:03 compute-1 ceph-mon[81775]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Jan 20 14:10:03 compute-1 ceph-mon[81775]: Reconfiguring daemon keepalived.rgw.default.compute-1.cevitz on compute-1
Jan 20 14:10:03 compute-1 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-keepalived-rgw-default-compute-1-cevitz[141751]: Tue Jan 20 14:10:03 2026: Stopping
Jan 20 14:10:03 compute-1 python3.9[146765]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:10:03 compute-1 sudo[146753]: pam_unix(sudo:session): session closed for user root
Jan 20 14:10:04 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:10:04 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:10:04 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:10:04.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:10:04 compute-1 sudo[146930]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vqjwrvultjukxozewybtuvofoqdhuhtb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918204.0713398-459-228807578995440/AnsiballZ_file.py'
Jan 20 14:10:04 compute-1 sudo[146930]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:10:04 compute-1 python3.9[146932]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:10:04 compute-1 sudo[146930]: pam_unix(sudo:session): session closed for user root
Jan 20 14:10:04 compute-1 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-keepalived-rgw-default-compute-1-cevitz[141751]: Tue Jan 20 14:10:04 2026: Stopped
Jan 20 14:10:04 compute-1 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-keepalived-rgw-default-compute-1-cevitz[141751]: Tue Jan 20 14:10:04 2026: Stopped Keepalived v2.2.4 (08/21,2021)
Jan 20 14:10:04 compute-1 podman[146767]: 2026-01-20 14:10:04.827589389 +0000 UTC m=+1.076964375 container died 5aefe91b7ec82d14c11ddef250a4a56008ca1c7923fa55da8e9a43986378df32 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-keepalived-rgw-default-compute-1-cevitz, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Keepalived on RHEL 9, version=2.2.4, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, build-date=2023-02-22T09:23:20, vendor=Red Hat, Inc., release=1793, io.openshift.tags=Ceph keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, architecture=x86_64, io.buildah.version=1.28.2, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.component=keepalived-container, description=keepalived for Ceph, io.openshift.expose-services=, name=keepalived, distribution-scope=public, summary=Provides keepalived on RHEL 9 for Ceph., vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Jan 20 14:10:04 compute-1 systemd[1]: var-lib-containers-storage-overlay-e1c25161a5231e80d5ac8ddda72b615c5387a44fbf34d7d20f0c56077ae85edb-merged.mount: Deactivated successfully.
Jan 20 14:10:04 compute-1 podman[146767]: 2026-01-20 14:10:04.895832816 +0000 UTC m=+1.145207832 container remove 5aefe91b7ec82d14c11ddef250a4a56008ca1c7923fa55da8e9a43986378df32 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-keepalived-rgw-default-compute-1-cevitz, com.redhat.component=keepalived-container, summary=Provides keepalived on RHEL 9 for Ceph., build-date=2023-02-22T09:23:20, vendor=Red Hat, Inc., release=1793, io.buildah.version=1.28.2, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, name=keepalived, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.openshift.tags=Ceph keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git, io.k8s.display-name=Keepalived on RHEL 9, version=2.2.4, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, description=keepalived for Ceph)
Jan 20 14:10:04 compute-1 bash[146767]: ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-keepalived-rgw-default-compute-1-cevitz
Jan 20 14:10:05 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:10:05 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:10:05 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:10:05.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:10:05 compute-1 systemd[1]: ceph-e399cf45-e6b6-5393-99f1-75c601d3f188@keepalived.rgw.default.compute-1.cevitz.service: Deactivated successfully.
Jan 20 14:10:05 compute-1 systemd[1]: Stopped Ceph keepalived.rgw.default.compute-1.cevitz for e399cf45-e6b6-5393-99f1-75c601d3f188.
Jan 20 14:10:05 compute-1 systemd[1]: ceph-e399cf45-e6b6-5393-99f1-75c601d3f188@keepalived.rgw.default.compute-1.cevitz.service: Consumed 1.249s CPU time.
Jan 20 14:10:05 compute-1 systemd[1]: Starting Ceph keepalived.rgw.default.compute-1.cevitz for e399cf45-e6b6-5393-99f1-75c601d3f188...
Jan 20 14:10:05 compute-1 sudo[147145]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uzqicxowdpgyeqrudfujqpluqzkbelgp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918204.8772695-459-253276635640088/AnsiballZ_file.py'
Jan 20 14:10:05 compute-1 sudo[147145]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:10:05 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:10:05 compute-1 podman[147170]: 2026-01-20 14:10:05.363937331 +0000 UTC m=+0.056498224 container create e27b69e4cc956b06482c80498336e112a56122514cd7345d3d4b39a4d206f962 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-keepalived-rgw-default-compute-1-cevitz, description=keepalived for Ceph, distribution-scope=public, release=1793, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides keepalived on RHEL 9 for Ceph., build-date=2023-02-22T09:23:20, name=keepalived, io.buildah.version=1.28.2, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, version=2.2.4, vendor=Red Hat, Inc., io.openshift.tags=Ceph keepalived, io.k8s.display-name=Keepalived on RHEL 9, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.component=keepalived-container, architecture=x86_64)
Jan 20 14:10:05 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0b704a066dc7e5a0e85f636af2537598365003570ac0fe0f0eed25f7c708f9f/merged/etc/keepalived/keepalived.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:10:05 compute-1 podman[147170]: 2026-01-20 14:10:05.420842886 +0000 UTC m=+0.113403819 container init e27b69e4cc956b06482c80498336e112a56122514cd7345d3d4b39a4d206f962 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-keepalived-rgw-default-compute-1-cevitz, architecture=x86_64, distribution-scope=public, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, summary=Provides keepalived on RHEL 9 for Ceph., description=keepalived for Ceph, io.openshift.tags=Ceph keepalived, name=keepalived, io.buildah.version=1.28.2, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=2.2.4, io.k8s.display-name=Keepalived on RHEL 9, release=1793, io.openshift.expose-services=, com.redhat.component=keepalived-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2023-02-22T09:23:20, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vendor=Red Hat, Inc.)
Jan 20 14:10:05 compute-1 podman[147170]: 2026-01-20 14:10:05.42660094 +0000 UTC m=+0.119161843 container start e27b69e4cc956b06482c80498336e112a56122514cd7345d3d4b39a4d206f962 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-keepalived-rgw-default-compute-1-cevitz, name=keepalived, com.redhat.component=keepalived-container, version=2.2.4, release=1793, io.openshift.expose-services=, summary=Provides keepalived on RHEL 9 for Ceph., io.buildah.version=1.28.2, vcs-type=git, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, build-date=2023-02-22T09:23:20, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Keepalived on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=keepalived for Ceph, distribution-scope=public, io.openshift.tags=Ceph keepalived)
Jan 20 14:10:05 compute-1 bash[147170]: e27b69e4cc956b06482c80498336e112a56122514cd7345d3d4b39a4d206f962
Jan 20 14:10:05 compute-1 podman[147170]: 2026-01-20 14:10:05.339546969 +0000 UTC m=+0.032107902 image pull 4a3a1ff181d97c6dcfa9138ad76eb99fa2c1e840298461d5a7a56133bc05b9a2 quay.io/ceph/keepalived:2.2.4
Jan 20 14:10:05 compute-1 systemd[1]: Started Ceph keepalived.rgw.default.compute-1.cevitz for e399cf45-e6b6-5393-99f1-75c601d3f188.
Jan 20 14:10:05 compute-1 python3.9[147153]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:10:05 compute-1 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-keepalived-rgw-default-compute-1-cevitz[147186]: Tue Jan 20 14:10:05 2026: Starting Keepalived v2.2.4 (08/21,2021)
Jan 20 14:10:05 compute-1 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-keepalived-rgw-default-compute-1-cevitz[147186]: Tue Jan 20 14:10:05 2026: Running on Linux 5.14.0-661.el9.x86_64 #1 SMP PREEMPT_DYNAMIC Fri Jan 16 09:19:22 UTC 2026 (built for Linux 5.14.0)
Jan 20 14:10:05 compute-1 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-keepalived-rgw-default-compute-1-cevitz[147186]: Tue Jan 20 14:10:05 2026: Command line: '/usr/sbin/keepalived' '-n' '-l' '-f' '/etc/keepalived/keepalived.conf'
Jan 20 14:10:05 compute-1 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-keepalived-rgw-default-compute-1-cevitz[147186]: Tue Jan 20 14:10:05 2026: Configuration file /etc/keepalived/keepalived.conf
Jan 20 14:10:05 compute-1 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-keepalived-rgw-default-compute-1-cevitz[147186]: Tue Jan 20 14:10:05 2026: NOTICE: setting config option max_auto_priority should result in better keepalived performance
Jan 20 14:10:05 compute-1 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-keepalived-rgw-default-compute-1-cevitz[147186]: Tue Jan 20 14:10:05 2026: Starting VRRP child process, pid=4
Jan 20 14:10:05 compute-1 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-keepalived-rgw-default-compute-1-cevitz[147186]: Tue Jan 20 14:10:05 2026: Startup complete
Jan 20 14:10:05 compute-1 sudo[147145]: pam_unix(sudo:session): session closed for user root
Jan 20 14:10:05 compute-1 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-keepalived-rgw-default-compute-1-cevitz[147186]: Tue Jan 20 14:10:05 2026: (VI_0) Entering BACKUP STATE (init)
Jan 20 14:10:05 compute-1 sudo[146480]: pam_unix(sudo:session): session closed for user root
Jan 20 14:10:05 compute-1 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-keepalived-rgw-default-compute-1-cevitz[147186]: Tue Jan 20 14:10:05 2026: VRRP_Script(check_backend) succeeded
Jan 20 14:10:05 compute-1 sudo[147207]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:10:05 compute-1 sudo[147207]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:10:05 compute-1 sudo[147207]: pam_unix(sudo:session): session closed for user root
Jan 20 14:10:05 compute-1 sudo[147246]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:10:05 compute-1 sudo[147246]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:10:05 compute-1 sudo[147246]: pam_unix(sudo:session): session closed for user root
Jan 20 14:10:05 compute-1 sudo[147296]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:10:05 compute-1 sudo[147296]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:10:05 compute-1 sudo[147296]: pam_unix(sudo:session): session closed for user root
Jan 20 14:10:05 compute-1 ceph-mon[81775]: pgmap v548: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:10:05 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:10:05 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:10:05 compute-1 sudo[147345]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Jan 20 14:10:05 compute-1 sudo[147345]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:10:05 compute-1 sudo[147459]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tsmzhwweozoawitafdffgaafplpmzrax ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918205.6300116-459-34201522921558/AnsiballZ_file.py'
Jan 20 14:10:05 compute-1 sudo[147459]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:10:06 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:10:06 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:10:06 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:10:06.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:10:06 compute-1 python3.9[147471]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:10:06 compute-1 sudo[147459]: pam_unix(sudo:session): session closed for user root
Jan 20 14:10:06 compute-1 podman[147543]: 2026-01-20 14:10:06.818722918 +0000 UTC m=+0.470687259 container exec 718ebba7a543e42aad7051248d2c7dc014068c35c89c5b87f27b82d4de39c009 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-crash-compute-1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:10:06 compute-1 podman[147543]: 2026-01-20 14:10:06.923489592 +0000 UTC m=+0.575453963 container exec_died 718ebba7a543e42aad7051248d2c7dc014068c35c89c5b87f27b82d4de39c009 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-crash-compute-1, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:10:07 compute-1 sudo[147702]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xgyewqpikwezgwvknwcieucqadlgcwap ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918206.6077867-612-228243109745949/AnsiballZ_command.py'
Jan 20 14:10:07 compute-1 sudo[147702]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:10:07 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:10:07 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:10:07 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:10:07.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:10:07 compute-1 python3.9[147710]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 14:10:07 compute-1 sudo[147702]: pam_unix(sudo:session): session closed for user root
Jan 20 14:10:07 compute-1 podman[147910]: 2026-01-20 14:10:07.860531974 +0000 UTC m=+0.100056440 container exec 25e2c3387bc15944c21038272559da5fbf75910d8dd4add0faa995fb4e0f7788 (image=quay.io/ceph/haproxy:2.3, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-haproxy-rgw-default-compute-1-uyeocq)
Jan 20 14:10:07 compute-1 podman[147910]: 2026-01-20 14:10:07.87343526 +0000 UTC m=+0.112959756 container exec_died 25e2c3387bc15944c21038272559da5fbf75910d8dd4add0faa995fb4e0f7788 (image=quay.io/ceph/haproxy:2.3, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-haproxy-rgw-default-compute-1-uyeocq)
Jan 20 14:10:07 compute-1 ceph-mon[81775]: pgmap v549: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:10:07 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:10:07 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:10:07 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:10:07 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:10:08 compute-1 python3.9[148020]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 20 14:10:08 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:10:08 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:10:08 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:10:08.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:10:08 compute-1 podman[148041]: 2026-01-20 14:10:08.190720395 +0000 UTC m=+0.092691142 container exec e27b69e4cc956b06482c80498336e112a56122514cd7345d3d4b39a4d206f962 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-keepalived-rgw-default-compute-1-cevitz, summary=Provides keepalived on RHEL 9 for Ceph., vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, name=keepalived, vendor=Red Hat, Inc., com.redhat.component=keepalived-container, release=1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Keepalived on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, description=keepalived for Ceph, io.buildah.version=1.28.2, build-date=2023-02-22T09:23:20, io.openshift.tags=Ceph keepalived, io.openshift.expose-services=, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, version=2.2.4, distribution-scope=public)
Jan 20 14:10:08 compute-1 podman[148041]: 2026-01-20 14:10:08.236507535 +0000 UTC m=+0.138478382 container exec_died e27b69e4cc956b06482c80498336e112a56122514cd7345d3d4b39a4d206f962 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-keepalived-rgw-default-compute-1-cevitz, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, distribution-scope=public, release=1793, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.k8s.display-name=Keepalived on RHEL 9, version=2.2.4, build-date=2023-02-22T09:23:20, io.openshift.tags=Ceph keepalived, name=keepalived, architecture=x86_64, com.redhat.component=keepalived-container, summary=Provides keepalived on RHEL 9 for Ceph., io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git, description=keepalived for Ceph, io.buildah.version=1.28.2, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Jan 20 14:10:08 compute-1 sudo[147345]: pam_unix(sudo:session): session closed for user root
Jan 20 14:10:08 compute-1 sudo[148111]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:10:08 compute-1 sudo[148111]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:10:08 compute-1 sudo[148111]: pam_unix(sudo:session): session closed for user root
Jan 20 14:10:08 compute-1 sudo[148164]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:10:08 compute-1 sudo[148164]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:10:08 compute-1 sudo[148164]: pam_unix(sudo:session): session closed for user root
Jan 20 14:10:09 compute-1 sudo[148274]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jhlqejdvfcfebzsgundslczvefthjpux ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918208.622953-666-86713256883100/AnsiballZ_systemd_service.py'
Jan 20 14:10:09 compute-1 sudo[148274]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:10:09 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:10:09 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:10:09 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:10:09.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:10:09 compute-1 python3.9[148276]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 20 14:10:09 compute-1 systemd[1]: Reloading.
Jan 20 14:10:09 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:10:09 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:10:09 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:10:09 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 14:10:09 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:10:09 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 14:10:09 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 14:10:09 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:10:09 compute-1 ceph-mon[81775]: pgmap v550: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:10:09 compute-1 systemd-rc-local-generator[148304]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 14:10:09 compute-1 systemd-sysv-generator[148308]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 14:10:09 compute-1 sudo[148274]: pam_unix(sudo:session): session closed for user root
Jan 20 14:10:10 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:10:10 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:10:10 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:10:10.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:10:10 compute-1 sudo[148463]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-limdyrglsncbnnqkbrbvdkykpcwmpyia ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918209.9656007-690-63018564272805/AnsiballZ_command.py'
Jan 20 14:10:10 compute-1 sudo[148463]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:10:10 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:10:10 compute-1 python3.9[148465]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_libvirt.target _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 14:10:10 compute-1 sudo[148463]: pam_unix(sudo:session): session closed for user root
Jan 20 14:10:10 compute-1 sudo[148616]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-poarjywrmuwgrfwaicqznajlirqedwfs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918210.6584861-690-261038877851498/AnsiballZ_command.py'
Jan 20 14:10:11 compute-1 sudo[148616]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:10:11 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:10:11 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:10:11 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:10:11.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:10:11 compute-1 python3.9[148618]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtlogd_wrapper.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 14:10:11 compute-1 sudo[148616]: pam_unix(sudo:session): session closed for user root
Jan 20 14:10:11 compute-1 sudo[148770]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jnrkhkqojgwagaqiiwceuepucczijnij ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918211.4376252-690-221315086577196/AnsiballZ_command.py'
Jan 20 14:10:11 compute-1 sudo[148770]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:10:11 compute-1 python3.9[148772]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtnodedevd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 14:10:11 compute-1 sudo[148770]: pam_unix(sudo:session): session closed for user root
Jan 20 14:10:12 compute-1 ceph-mon[81775]: pgmap v551: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:10:12 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:10:12 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:10:12 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:10:12.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:10:12 compute-1 sudo[148923]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ssvqretevkjtlyfmnqmhhbmlzvacpbbc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918212.1655097-690-279192230896638/AnsiballZ_command.py'
Jan 20 14:10:12 compute-1 sudo[148923]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:10:12 compute-1 python3.9[148925]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtproxyd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 14:10:12 compute-1 sudo[148923]: pam_unix(sudo:session): session closed for user root
Jan 20 14:10:13 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:10:13 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:10:13 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:10:13.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:10:13 compute-1 sudo[149076]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ujmrkumrfzjlcjzixappkegblaiiedla ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918212.9493053-690-231295961513310/AnsiballZ_command.py'
Jan 20 14:10:13 compute-1 sudo[149076]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:10:13 compute-1 ceph-mon[81775]: pgmap v552: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:10:13 compute-1 python3.9[149078]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 14:10:13 compute-1 sudo[149076]: pam_unix(sudo:session): session closed for user root
Jan 20 14:10:14 compute-1 sudo[149230]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kdyockvotmedyahgkxecykdmswjwtgxl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918213.710545-690-182388055996026/AnsiballZ_command.py'
Jan 20 14:10:14 compute-1 sudo[149230]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:10:14 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:10:14 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:10:14 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:10:14.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:10:14 compute-1 python3.9[149232]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtsecretd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 14:10:14 compute-1 sudo[149230]: pam_unix(sudo:session): session closed for user root
Jan 20 14:10:14 compute-1 sudo[149383]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wvlqwizntoqeqphvjkridddtdveilzug ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918214.5581403-690-275100952058707/AnsiballZ_command.py'
Jan 20 14:10:14 compute-1 sudo[149383]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:10:15 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:10:15 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:10:15 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:10:15.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:10:15 compute-1 python3.9[149385]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtstoraged.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 14:10:15 compute-1 sudo[149383]: pam_unix(sudo:session): session closed for user root
Jan 20 14:10:15 compute-1 podman[149387]: 2026-01-20 14:10:15.279346531 +0000 UTC m=+0.092421914 container health_status 533cd8e0c3b8b7f910f15cef49e7f879ab46c12ce646a35628c15f13725eb500 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true)
Jan 20 14:10:15 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:10:16 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:10:16 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:10:16 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:10:16.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:10:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:10:16.370 140354 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:10:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:10:16.371 140354 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:10:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:10:16.371 140354 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:10:16 compute-1 ceph-mon[81775]: pgmap v553: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:10:16 compute-1 sudo[149431]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:10:16 compute-1 sudo[149431]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:10:16 compute-1 sudo[149431]: pam_unix(sudo:session): session closed for user root
Jan 20 14:10:16 compute-1 sudo[149456]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 14:10:16 compute-1 sudo[149456]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:10:16 compute-1 sudo[149456]: pam_unix(sudo:session): session closed for user root
Jan 20 14:10:17 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:10:17 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:10:17 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:10:17.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:10:17 compute-1 sudo[149607]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xczclpbtxruokotruzroyvmvfyatynnc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918217.0967593-852-135174937414939/AnsiballZ_getent.py'
Jan 20 14:10:17 compute-1 sudo[149607]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:10:17 compute-1 python3.9[149609]: ansible-ansible.builtin.getent Invoked with database=passwd key=libvirt fail_key=True service=None split=None
Jan 20 14:10:17 compute-1 sudo[149607]: pam_unix(sudo:session): session closed for user root
Jan 20 14:10:17 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:10:17 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:10:17 compute-1 ceph-mon[81775]: pgmap v554: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:10:18 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:10:18 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:10:18 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:10:18.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:10:18 compute-1 sudo[149760]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ebjyjxvmfjjqgpfojpafezwxogfumrqm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918217.9919472-876-209868888736030/AnsiballZ_group.py'
Jan 20 14:10:18 compute-1 sudo[149760]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:10:18 compute-1 python3.9[149762]: ansible-ansible.builtin.group Invoked with gid=42473 name=libvirt state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 20 14:10:18 compute-1 groupadd[149763]: group added to /etc/group: name=libvirt, GID=42473
Jan 20 14:10:18 compute-1 groupadd[149763]: group added to /etc/gshadow: name=libvirt
Jan 20 14:10:18 compute-1 groupadd[149763]: new group: name=libvirt, GID=42473
Jan 20 14:10:18 compute-1 sudo[149760]: pam_unix(sudo:session): session closed for user root
Jan 20 14:10:19 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:10:19 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:10:19 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:10:19.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:10:19 compute-1 ceph-mon[81775]: pgmap v555: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:10:20 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:10:20 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:10:20 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:10:20.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:10:20 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:10:20 compute-1 sudo[149919]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lokkiknanqdycxewegyxvsrjodssgcny ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918220.2913792-900-182040783021888/AnsiballZ_user.py'
Jan 20 14:10:20 compute-1 sudo[149919]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:10:21 compute-1 python3.9[149921]: ansible-ansible.builtin.user Invoked with comment=libvirt user group=libvirt groups=[''] name=libvirt shell=/sbin/nologin state=present uid=42473 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-1 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Jan 20 14:10:21 compute-1 useradd[149923]: new user: name=libvirt, UID=42473, GID=42473, home=/home/libvirt, shell=/sbin/nologin, from=/dev/pts/0
Jan 20 14:10:21 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:10:21 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:10:21 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:10:21.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:10:21 compute-1 sudo[149919]: pam_unix(sudo:session): session closed for user root
Jan 20 14:10:21 compute-1 ceph-mon[81775]: pgmap v556: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:10:22 compute-1 sudo[150080]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ejftwbstcxjjehvuyehoambeeoftkmmz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918221.6604178-933-174866115645873/AnsiballZ_setup.py'
Jan 20 14:10:22 compute-1 sudo[150080]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:10:22 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:10:22 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:10:22 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:10:22.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:10:22 compute-1 python3.9[150082]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 20 14:10:22 compute-1 sudo[150080]: pam_unix(sudo:session): session closed for user root
Jan 20 14:10:22 compute-1 sudo[150164]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zybkownpzqtlxmdduuvchtjauhntuzjp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918221.6604178-933-174866115645873/AnsiballZ_dnf.py'
Jan 20 14:10:22 compute-1 sudo[150164]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:10:23 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:10:23 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:10:23 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:10:23.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:10:23 compute-1 python3.9[150166]: ansible-ansible.legacy.dnf Invoked with name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 20 14:10:23 compute-1 ceph-mon[81775]: pgmap v557: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:10:24 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:10:24 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:10:24 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:10:24.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:10:25 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:10:25 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:10:25 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:10:25.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:10:25 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:10:26 compute-1 ceph-mon[81775]: pgmap v558: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:10:26 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:10:26 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:10:26 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:10:26.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:10:27 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:10:27 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:10:27 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:10:27.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:10:28 compute-1 ceph-mon[81775]: pgmap v559: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:10:28 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:10:28 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:10:28 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:10:28.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:10:28 compute-1 sudo[150181]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:10:28 compute-1 sudo[150181]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:10:28 compute-1 sudo[150181]: pam_unix(sudo:session): session closed for user root
Jan 20 14:10:28 compute-1 sudo[150206]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:10:28 compute-1 sudo[150206]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:10:28 compute-1 sudo[150206]: pam_unix(sudo:session): session closed for user root
Jan 20 14:10:29 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:10:29 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:10:29 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:10:29.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:10:30 compute-1 ceph-mon[81775]: pgmap v560: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:10:30 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:10:30 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:10:30 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:10:30.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:10:30 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:10:31 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:10:31 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:10:31 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:10:31.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:10:31 compute-1 ceph-mon[81775]: pgmap v561: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:10:32 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:10:32 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:10:32 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:10:32.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:10:33 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:10:33 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:10:33 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:10:33.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:10:33 compute-1 ceph-mon[81775]: pgmap v562: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:10:34 compute-1 podman[150326]: 2026-01-20 14:10:34.147598482 +0000 UTC m=+0.163647282 container health_status 72be95c12e041eb2cd1a16ad70dfa1391d9f03a5f3180a614ead9459dcfcd46f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 20 14:10:34 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:10:34 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:10:34 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:10:34.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:10:35 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:10:35 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:10:35 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:10:35.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:10:35 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:10:35 compute-1 ceph-mon[81775]: pgmap v563: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:10:36 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:10:36 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:10:36 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:10:36.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:10:37 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:10:37 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:10:37 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:10:37.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:10:38 compute-1 ceph-mon[81775]: pgmap v564: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:10:38 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:10:38 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:10:38 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:10:38.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:10:39 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:10:39 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:10:39 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:10:39.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:10:40 compute-1 ceph-mon[81775]: pgmap v565: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:10:40 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:10:40 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:10:40 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:10:40.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:10:40 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:10:41 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:10:41 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:10:41 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:10:41.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:10:41 compute-1 ceph-mon[81775]: pgmap v566: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:10:42 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:10:42 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:10:42 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:10:42.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:10:43 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:10:43 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:10:43 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:10:43.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:10:43 compute-1 ceph-mon[81775]: pgmap v567: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:10:44 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:10:44 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:10:44 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:10:44.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:10:45 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:10:45 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:10:45 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:10:45.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:10:45 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:10:46 compute-1 ceph-mon[81775]: pgmap v568: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:10:46 compute-1 podman[150444]: 2026-01-20 14:10:46.063994147 +0000 UTC m=+0.094263889 container health_status 533cd8e0c3b8b7f910f15cef49e7f879ab46c12ce646a35628c15f13725eb500 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 20 14:10:46 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:10:46 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:10:46 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:10:46.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:10:47 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:10:47 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:10:47 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:10:47.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:10:48 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:10:48 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:10:48 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:10:48.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:10:48 compute-1 ceph-mon[81775]: pgmap v569: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:10:49 compute-1 sudo[150464]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:10:49 compute-1 sudo[150464]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:10:49 compute-1 sudo[150464]: pam_unix(sudo:session): session closed for user root
Jan 20 14:10:49 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:10:49 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:10:49 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:10:49.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:10:49 compute-1 sudo[150489]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:10:49 compute-1 sudo[150489]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:10:49 compute-1 sudo[150489]: pam_unix(sudo:session): session closed for user root
Jan 20 14:10:50 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:10:50 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:10:50 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:10:50.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:10:50 compute-1 ceph-mon[81775]: pgmap v570: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:10:50 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:10:51 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:10:51 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:10:51 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:10:51.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:10:51 compute-1 ceph-mon[81775]: pgmap v571: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:10:52 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:10:52 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:10:52 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:10:52.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:10:52 compute-1 kernel: SELinux:  Converting 2776 SID table entries...
Jan 20 14:10:52 compute-1 kernel: SELinux:  policy capability network_peer_controls=1
Jan 20 14:10:52 compute-1 kernel: SELinux:  policy capability open_perms=1
Jan 20 14:10:52 compute-1 kernel: SELinux:  policy capability extended_socket_class=1
Jan 20 14:10:52 compute-1 kernel: SELinux:  policy capability always_check_network=0
Jan 20 14:10:52 compute-1 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 20 14:10:52 compute-1 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 20 14:10:52 compute-1 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 20 14:10:53 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:10:53 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:10:53 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:10:53.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:10:54 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:10:54 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:10:54 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:10:54.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:10:54 compute-1 ceph-mon[81775]: pgmap v572: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:10:55 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:10:55 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:10:55 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:10:55.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:10:55 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:10:56 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:10:56 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:10:56 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:10:56.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:10:56 compute-1 ceph-mon[81775]: pgmap v573: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:10:57 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:10:57 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:10:57 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:10:57.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:10:58 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:10:58 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:10:58 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:10:58.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:10:59 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:10:59 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:10:59 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:10:59.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:10:59 compute-1 ceph-mon[81775]: pgmap v574: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:11:00 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:11:00 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:11:00 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:11:00.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:11:00 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:11:01 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:11:01 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:11:01 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:11:01.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:11:01 compute-1 ceph-mon[81775]: pgmap v575: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:11:02 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:11:02 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:11:02 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:11:02.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:11:03 compute-1 ceph-mon[81775]: pgmap v576: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:11:03 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:11:03 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:11:03 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:11:03.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:11:03 compute-1 kernel: SELinux:  Converting 2776 SID table entries...
Jan 20 14:11:03 compute-1 kernel: SELinux:  policy capability network_peer_controls=1
Jan 20 14:11:03 compute-1 kernel: SELinux:  policy capability open_perms=1
Jan 20 14:11:03 compute-1 kernel: SELinux:  policy capability extended_socket_class=1
Jan 20 14:11:03 compute-1 kernel: SELinux:  policy capability always_check_network=0
Jan 20 14:11:03 compute-1 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 20 14:11:03 compute-1 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 20 14:11:03 compute-1 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 20 14:11:04 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:11:04 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:11:04 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:11:04.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:11:04 compute-1 ceph-mon[81775]: pgmap v577: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:11:04 compute-1 dbus-broker-launch[771]: avc:  op=load_policy lsm=selinux seqno=11 res=1
Jan 20 14:11:05 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:11:05 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:11:05 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:11:05.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:11:05 compute-1 podman[150537]: 2026-01-20 14:11:05.170153745 +0000 UTC m=+0.170370693 container health_status 72be95c12e041eb2cd1a16ad70dfa1391d9f03a5f3180a614ead9459dcfcd46f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 20 14:11:05 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:11:05 compute-1 ceph-mon[81775]: pgmap v578: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:11:06 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:11:06 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:11:06 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:11:06.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:11:07 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:11:07 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:11:07 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:11:07.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:11:07 compute-1 ceph-mon[81775]: pgmap v579: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:11:08 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:11:08 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:11:08 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:11:08.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:11:09 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:11:09 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:11:09 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:11:09.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:11:09 compute-1 sudo[150565]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:11:09 compute-1 sudo[150565]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:11:09 compute-1 sudo[150565]: pam_unix(sudo:session): session closed for user root
Jan 20 14:11:09 compute-1 sudo[150590]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:11:09 compute-1 sudo[150590]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:11:09 compute-1 sudo[150590]: pam_unix(sudo:session): session closed for user root
Jan 20 14:11:10 compute-1 ceph-mon[81775]: pgmap v580: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:11:10 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:11:10 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:11:10 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:11:10.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:11:10 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:11:11 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:11:11 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:11:11 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:11:11.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:11:11 compute-1 ceph-mon[81775]: pgmap v581: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:11:12 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:11:12 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:11:12 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:11:12.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:11:13 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:11:13 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:11:13 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:11:13.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:11:13 compute-1 ceph-mon[81775]: pgmap v582: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:11:14 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:11:14 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:11:14 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:11:14.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:11:15 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:11:15 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:11:15 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:11:15.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:11:15 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:11:16 compute-1 ceph-mon[81775]: pgmap v583: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:11:16 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:11:16 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:11:16 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:11:16.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:11:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:11:16.372 140354 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:11:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:11:16.372 140354 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:11:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:11:16.373 140354 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:11:17 compute-1 podman[151361]: 2026-01-20 14:11:17.03606589 +0000 UTC m=+0.075590890 container health_status 533cd8e0c3b8b7f910f15cef49e7f879ab46c12ce646a35628c15f13725eb500 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible)
Jan 20 14:11:17 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:11:17 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:11:17 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:11:17.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:11:17 compute-1 sudo[151478]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:11:17 compute-1 sudo[151478]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:11:17 compute-1 sudo[151478]: pam_unix(sudo:session): session closed for user root
Jan 20 14:11:17 compute-1 sudo[151537]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:11:17 compute-1 sudo[151537]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:11:17 compute-1 sudo[151537]: pam_unix(sudo:session): session closed for user root
Jan 20 14:11:17 compute-1 sudo[151604]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:11:17 compute-1 sudo[151604]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:11:17 compute-1 sudo[151604]: pam_unix(sudo:session): session closed for user root
Jan 20 14:11:17 compute-1 sudo[151674]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 20 14:11:17 compute-1 sudo[151674]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:11:17 compute-1 ceph-mon[81775]: pgmap v584: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:11:17 compute-1 sudo[151674]: pam_unix(sudo:session): session closed for user root
Jan 20 14:11:18 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:11:18 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:11:18 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:11:18.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:11:19 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:11:19 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:11:19 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:11:19.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:11:19 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:11:19 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 14:11:19 compute-1 ceph-mon[81775]: pgmap v585: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:11:20 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:11:20 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:11:20 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:11:20.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:11:20 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:11:20 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:11:20 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 14:11:20 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 14:11:20 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:11:21 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:11:21 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:11:21 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:11:21.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:11:21 compute-1 ceph-mon[81775]: pgmap v586: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:11:22 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:11:22 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:11:22 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:11:22.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:11:23 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:11:23 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:11:23 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:11:23.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:11:23 compute-1 ceph-mon[81775]: pgmap v587: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:11:24 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:11:24 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:11:24 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:11:24.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:11:25 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:11:25 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:11:25 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:11:25.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:11:25 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:11:26 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:11:26 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:11:26 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:11:26.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:11:26 compute-1 ceph-mon[81775]: pgmap v588: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:11:27 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:11:27 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:11:27 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:11:27.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:11:28 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:11:28 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:11:28 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:11:28.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:11:28 compute-1 ceph-mon[81775]: pgmap v589: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:11:29 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:11:29 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:11:29 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:11:29.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:11:29 compute-1 sudo[157293]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:11:29 compute-1 sudo[157293]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:11:29 compute-1 sudo[157293]: pam_unix(sudo:session): session closed for user root
Jan 20 14:11:29 compute-1 sudo[157361]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:11:29 compute-1 sudo[157361]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:11:29 compute-1 sudo[157361]: pam_unix(sudo:session): session closed for user root
Jan 20 14:11:29 compute-1 ceph-mon[81775]: pgmap v590: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:11:30 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:11:30 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:11:30 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:11:30.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:11:30 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:11:31 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:11:31 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:11:31 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:11:31.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:11:31 compute-1 ceph-mon[81775]: pgmap v591: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:11:31 compute-1 sudo[158581]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:11:31 compute-1 sudo[158581]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:11:31 compute-1 sudo[158581]: pam_unix(sudo:session): session closed for user root
Jan 20 14:11:32 compute-1 sudo[158645]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 14:11:32 compute-1 sudo[158645]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:11:32 compute-1 sudo[158645]: pam_unix(sudo:session): session closed for user root
Jan 20 14:11:32 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:11:32 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:11:32 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:11:32.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:11:32 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:11:32 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:11:33 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:11:33 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:11:33 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:11:33.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:11:33 compute-1 ceph-mon[81775]: pgmap v592: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:11:34 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:11:34 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:11:34 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:11:34.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:11:35 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:11:35 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:11:35 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:11:35.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:11:35 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:11:36 compute-1 ceph-mon[81775]: pgmap v593: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:11:36 compute-1 podman[160625]: 2026-01-20 14:11:36.157911151 +0000 UTC m=+0.169320004 container health_status 72be95c12e041eb2cd1a16ad70dfa1391d9f03a5f3180a614ead9459dcfcd46f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:11:36 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:11:36 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:11:36 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:11:36.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:11:37 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:11:37 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:11:37 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:11:37.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:11:38 compute-1 ceph-mon[81775]: pgmap v594: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:11:38 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:11:38 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:11:38 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:11:38.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:11:39 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:11:39 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:11:39 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:11:39.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:11:39 compute-1 ceph-mon[81775]: pgmap v595: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:11:40 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:11:40 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:11:40 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:11:40 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:11:40.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:11:41 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:11:41 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:11:41 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:11:41.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:11:41 compute-1 ceph-mon[81775]: pgmap v596: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:11:42 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:11:42 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:11:42 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:11:42.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:11:43 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:11:43 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:11:43 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:11:43.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:11:43 compute-1 ceph-mon[81775]: pgmap v597: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:11:44 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:11:44 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:11:44 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:11:44.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:11:45 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:11:45 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:11:45 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:11:45.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:11:45 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:11:46 compute-1 ceph-mon[81775]: pgmap v598: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:11:46 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:11:46 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:11:46 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:11:46.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:11:47 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:11:47 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:11:47 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:11:47.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:11:48 compute-1 podman[166335]: 2026-01-20 14:11:48.053962957 +0000 UTC m=+0.089217647 container health_status 533cd8e0c3b8b7f910f15cef49e7f879ab46c12ce646a35628c15f13725eb500 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent)
Jan 20 14:11:48 compute-1 ceph-mon[81775]: pgmap v599: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:11:48 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:11:48 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:11:48 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:11:48.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:11:49 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:11:49 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:11:49 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:11:49.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:11:49 compute-1 sudo[167027]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:11:49 compute-1 sudo[167027]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:11:49 compute-1 sudo[167027]: pam_unix(sudo:session): session closed for user root
Jan 20 14:11:49 compute-1 sudo[167092]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:11:49 compute-1 sudo[167092]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:11:49 compute-1 sudo[167092]: pam_unix(sudo:session): session closed for user root
Jan 20 14:11:50 compute-1 ceph-mon[81775]: pgmap v600: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:11:50 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:11:50 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:11:50 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:11:50 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:11:50.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:11:51 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:11:51 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:11:51 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:11:51.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:11:51 compute-1 ceph-mon[81775]: pgmap v601: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:11:52 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:11:52 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:11:52 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:11:52.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:11:53 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:11:53 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:11:53 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:11:53.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:11:54 compute-1 ceph-mon[81775]: pgmap v602: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:11:54 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:11:54 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:11:54 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:11:54.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:11:55 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:11:55 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:11:55 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:11:55.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:11:55 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:11:56 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:11:56 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:11:56 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:11:56.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:11:56 compute-1 ceph-mon[81775]: pgmap v603: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:11:57 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:11:57 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:11:57 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:11:57.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:11:57 compute-1 ceph-mon[81775]: pgmap v604: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:11:58 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:11:58 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:11:58 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:11:58.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:11:59 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:11:59 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:11:59 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:11:59.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:11:59 compute-1 ceph-mon[81775]: pgmap v605: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:12:00 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:12:00 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:12:00 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:12:00 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:12:00.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:12:01 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:12:01 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:12:01 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:12:01.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:12:02 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:12:02 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:12:02 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:12:02.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:12:02 compute-1 ceph-mon[81775]: pgmap v606: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:12:03 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:12:03 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:12:03 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:12:03.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:12:04 compute-1 kernel: SELinux:  Converting 2777 SID table entries...
Jan 20 14:12:04 compute-1 kernel: SELinux:  policy capability network_peer_controls=1
Jan 20 14:12:04 compute-1 kernel: SELinux:  policy capability open_perms=1
Jan 20 14:12:04 compute-1 kernel: SELinux:  policy capability extended_socket_class=1
Jan 20 14:12:04 compute-1 kernel: SELinux:  policy capability always_check_network=0
Jan 20 14:12:04 compute-1 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 20 14:12:04 compute-1 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 20 14:12:04 compute-1 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 20 14:12:04 compute-1 ceph-mon[81775]: pgmap v607: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:12:04 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:12:04 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:12:04 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:12:04.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:12:05 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:12:05 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:12:05 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:12:05.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:12:05 compute-1 groupadd[167871]: group added to /etc/group: name=dnsmasq, GID=993
Jan 20 14:12:05 compute-1 groupadd[167871]: group added to /etc/gshadow: name=dnsmasq
Jan 20 14:12:05 compute-1 groupadd[167871]: new group: name=dnsmasq, GID=993
Jan 20 14:12:05 compute-1 useradd[167878]: new user: name=dnsmasq, UID=992, GID=993, home=/var/lib/dnsmasq, shell=/usr/sbin/nologin, from=none
Jan 20 14:12:05 compute-1 dbus-broker-launch[761]: Noticed file-system modification, trigger reload.
Jan 20 14:12:05 compute-1 dbus-broker-launch[771]: avc:  op=load_policy lsm=selinux seqno=12 res=1
Jan 20 14:12:05 compute-1 ceph-mon[81775]: pgmap v608: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:12:05 compute-1 dbus-broker-launch[761]: Noticed file-system modification, trigger reload.
Jan 20 14:12:05 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:12:06 compute-1 groupadd[167893]: group added to /etc/group: name=clevis, GID=992
Jan 20 14:12:06 compute-1 groupadd[167893]: group added to /etc/gshadow: name=clevis
Jan 20 14:12:06 compute-1 groupadd[167893]: new group: name=clevis, GID=992
Jan 20 14:12:06 compute-1 useradd[167907]: new user: name=clevis, UID=991, GID=992, home=/var/cache/clevis, shell=/usr/sbin/nologin, from=none
Jan 20 14:12:06 compute-1 usermod[167932]: add 'clevis' to group 'tss'
Jan 20 14:12:06 compute-1 usermod[167932]: add 'clevis' to shadow group 'tss'
Jan 20 14:12:06 compute-1 podman[167892]: 2026-01-20 14:12:06.380218276 +0000 UTC m=+0.136227253 container health_status 72be95c12e041eb2cd1a16ad70dfa1391d9f03a5f3180a614ead9459dcfcd46f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller)
Jan 20 14:12:06 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:12:06 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:12:06 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:12:06.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:12:07 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:12:07 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:12:07 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:12:07.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:12:08 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:12:08 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:12:08 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:12:08.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:12:08 compute-1 ceph-mon[81775]: pgmap v609: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:12:08 compute-1 polkitd[43590]: Reloading rules
Jan 20 14:12:08 compute-1 polkitd[43590]: Collecting garbage unconditionally...
Jan 20 14:12:08 compute-1 polkitd[43590]: Loading rules from directory /etc/polkit-1/rules.d
Jan 20 14:12:08 compute-1 polkitd[43590]: Loading rules from directory /usr/share/polkit-1/rules.d
Jan 20 14:12:08 compute-1 polkitd[43590]: Finished loading, compiling and executing 3 rules
Jan 20 14:12:08 compute-1 polkitd[43590]: Reloading rules
Jan 20 14:12:08 compute-1 polkitd[43590]: Collecting garbage unconditionally...
Jan 20 14:12:08 compute-1 polkitd[43590]: Loading rules from directory /etc/polkit-1/rules.d
Jan 20 14:12:08 compute-1 polkitd[43590]: Loading rules from directory /usr/share/polkit-1/rules.d
Jan 20 14:12:08 compute-1 polkitd[43590]: Finished loading, compiling and executing 3 rules
Jan 20 14:12:09 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:12:09 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:12:09 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:12:09.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:12:09 compute-1 ceph-mon[81775]: pgmap v610: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:12:09 compute-1 sudo[168036]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:12:09 compute-1 sudo[168036]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:12:09 compute-1 sudo[168036]: pam_unix(sudo:session): session closed for user root
Jan 20 14:12:09 compute-1 sudo[168071]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:12:09 compute-1 sudo[168071]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:12:09 compute-1 sudo[168071]: pam_unix(sudo:session): session closed for user root
Jan 20 14:12:10 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:12:10 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:12:10 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:12:10 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:12:10.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:12:10 compute-1 groupadd[168177]: group added to /etc/group: name=ceph, GID=167
Jan 20 14:12:10 compute-1 groupadd[168177]: group added to /etc/gshadow: name=ceph
Jan 20 14:12:10 compute-1 groupadd[168177]: new group: name=ceph, GID=167
Jan 20 14:12:10 compute-1 useradd[168183]: new user: name=ceph, UID=167, GID=167, home=/var/lib/ceph, shell=/sbin/nologin, from=none
Jan 20 14:12:11 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:12:11 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:12:11 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:12:11.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:12:12 compute-1 ceph-mon[81775]: pgmap v611: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:12:12 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:12:12 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:12:12 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:12:12.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:12:13 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:12:13 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:12:13 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:12:13.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:12:13 compute-1 ceph-mon[81775]: pgmap v612: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:12:14 compute-1 systemd[1]: Stopping OpenSSH server daemon...
Jan 20 14:12:14 compute-1 sshd[1003]: Received signal 15; terminating.
Jan 20 14:12:14 compute-1 systemd[1]: sshd.service: Deactivated successfully.
Jan 20 14:12:14 compute-1 systemd[1]: Stopped OpenSSH server daemon.
Jan 20 14:12:14 compute-1 systemd[1]: sshd.service: Consumed 5.778s CPU time, read 32.0K from disk, written 176.0K to disk.
Jan 20 14:12:14 compute-1 systemd[1]: Stopped target sshd-keygen.target.
Jan 20 14:12:14 compute-1 systemd[1]: Stopping sshd-keygen.target...
Jan 20 14:12:14 compute-1 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 20 14:12:14 compute-1 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 20 14:12:14 compute-1 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 20 14:12:14 compute-1 systemd[1]: Reached target sshd-keygen.target.
Jan 20 14:12:14 compute-1 systemd[1]: Starting OpenSSH server daemon...
Jan 20 14:12:14 compute-1 sshd[168810]: Server listening on 0.0.0.0 port 22.
Jan 20 14:12:14 compute-1 sshd[168810]: Server listening on :: port 22.
Jan 20 14:12:14 compute-1 systemd[1]: Started OpenSSH server daemon.
Jan 20 14:12:14 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:12:14 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:12:14 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:12:14.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:12:15 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:12:15 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:12:15 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:12:15.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:12:15 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:12:16 compute-1 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 20 14:12:16 compute-1 systemd[1]: Starting man-db-cache-update.service...
Jan 20 14:12:16 compute-1 systemd[1]: Reloading.
Jan 20 14:12:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:12:16.373 140354 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:12:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:12:16.374 140354 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:12:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:12:16.374 140354 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:12:16 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:12:16 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:12:16 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:12:16.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:12:16 compute-1 systemd-sysv-generator[169072]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 14:12:16 compute-1 systemd-rc-local-generator[169069]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 14:12:16 compute-1 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 20 14:12:16 compute-1 ceph-mon[81775]: pgmap v613: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:12:17 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:12:17 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:12:17 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:12:17.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:12:18 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:12:18 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:12:18 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:12:18.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:12:18 compute-1 ceph-mon[81775]: pgmap v614: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:12:19 compute-1 podman[171625]: 2026-01-20 14:12:19.064755009 +0000 UTC m=+0.102024122 container health_status 533cd8e0c3b8b7f910f15cef49e7f879ab46c12ce646a35628c15f13725eb500 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 20 14:12:19 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:12:19 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:12:19 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:12:19.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:12:19 compute-1 sudo[150164]: pam_unix(sudo:session): session closed for user root
Jan 20 14:12:20 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:12:20 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:12:20 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:12:20 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:12:20.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:12:21 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:12:21 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:12:21 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:12:21.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:12:22 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:12:22 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:12:22 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:12:22.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:12:23 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:12:23 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:12:23 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:12:23.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:12:23 compute-1 ceph-mon[81775]: pgmap v615: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:12:24 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:12:24 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:12:24 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:12:24.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:12:25 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:12:25 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:12:25 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:12:25.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:12:25 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:12:26 compute-1 ceph-mon[81775]: pgmap v616: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:12:26 compute-1 ceph-mon[81775]: pgmap v617: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:12:26 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:12:26 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:12:26 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:12:26.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:12:26 compute-1 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 20 14:12:26 compute-1 systemd[1]: Finished man-db-cache-update.service.
Jan 20 14:12:26 compute-1 systemd[1]: man-db-cache-update.service: Consumed 13.456s CPU time.
Jan 20 14:12:26 compute-1 systemd[1]: run-r63e1f19053ee4943b3154f925b07c783.service: Deactivated successfully.
Jan 20 14:12:27 compute-1 ceph-mon[81775]: pgmap v618: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:12:27 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:12:27 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:12:27 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:12:27.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:12:27 compute-1 sudo[177619]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lggracvcrpymlqciczrpewkslqfkqjlg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918346.619171-969-123307686962314/AnsiballZ_systemd.py'
Jan 20 14:12:27 compute-1 sudo[177619]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:12:27 compute-1 python3.9[177621]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 20 14:12:27 compute-1 systemd[1]: Reloading.
Jan 20 14:12:27 compute-1 systemd-rc-local-generator[177652]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 14:12:27 compute-1 systemd-sysv-generator[177655]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 14:12:28 compute-1 sudo[177619]: pam_unix(sudo:session): session closed for user root
Jan 20 14:12:28 compute-1 ceph-mon[81775]: pgmap v619: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:12:28 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:12:28 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:12:28 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:12:28.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:12:28 compute-1 sudo[177810]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uigpbeunasykkxvxefspnmwrqftdltfl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918348.2298515-969-62901504704100/AnsiballZ_systemd.py'
Jan 20 14:12:28 compute-1 sudo[177810]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:12:28 compute-1 python3.9[177812]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 20 14:12:28 compute-1 systemd[1]: Reloading.
Jan 20 14:12:29 compute-1 systemd-sysv-generator[177841]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 14:12:29 compute-1 systemd-rc-local-generator[177837]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 14:12:29 compute-1 ceph-mon[81775]: pgmap v620: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:12:29 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:12:29 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:12:29 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:12:29.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:12:29 compute-1 sudo[177810]: pam_unix(sudo:session): session closed for user root
Jan 20 14:12:29 compute-1 sudo[178001]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uhoqqahajzpuecnizydgpisejlzwmhid ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918349.4514558-969-165525818273025/AnsiballZ_systemd.py'
Jan 20 14:12:29 compute-1 sudo[178001]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:12:29 compute-1 sudo[178004]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:12:29 compute-1 sudo[178004]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:12:29 compute-1 sudo[178004]: pam_unix(sudo:session): session closed for user root
Jan 20 14:12:30 compute-1 sudo[178029]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:12:30 compute-1 sudo[178029]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:12:30 compute-1 sudo[178029]: pam_unix(sudo:session): session closed for user root
Jan 20 14:12:30 compute-1 python3.9[178003]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tls.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 20 14:12:30 compute-1 systemd[1]: Reloading.
Jan 20 14:12:30 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:12:30 compute-1 systemd-rc-local-generator[178079]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 14:12:30 compute-1 systemd-sysv-generator[178083]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 14:12:30 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:12:30 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:12:30 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:12:30.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:12:30 compute-1 sudo[178001]: pam_unix(sudo:session): session closed for user root
Jan 20 14:12:31 compute-1 sudo[178241]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ueyeiixkgxmalzxhfjamyfhwspnxhcpr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918350.7845297-969-91373233900948/AnsiballZ_systemd.py'
Jan 20 14:12:31 compute-1 sudo[178241]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:12:31 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:12:31 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:12:31 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:12:31.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:12:31 compute-1 python3.9[178243]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=virtproxyd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 20 14:12:31 compute-1 systemd[1]: Reloading.
Jan 20 14:12:31 compute-1 systemd-rc-local-generator[178272]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 14:12:31 compute-1 systemd-sysv-generator[178278]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 14:12:31 compute-1 sudo[178241]: pam_unix(sudo:session): session closed for user root
Jan 20 14:12:32 compute-1 ceph-mon[81775]: pgmap v621: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:12:32 compute-1 sudo[178359]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:12:32 compute-1 sudo[178359]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:12:32 compute-1 sudo[178359]: pam_unix(sudo:session): session closed for user root
Jan 20 14:12:32 compute-1 sudo[178407]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:12:32 compute-1 sudo[178407]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:12:32 compute-1 sudo[178407]: pam_unix(sudo:session): session closed for user root
Jan 20 14:12:32 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:12:32 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:12:32 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:12:32.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:12:32 compute-1 sudo[178454]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:12:32 compute-1 sudo[178454]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:12:32 compute-1 sudo[178454]: pam_unix(sudo:session): session closed for user root
Jan 20 14:12:32 compute-1 sudo[178511]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-auegbuooljrxxfctjndkdawlkaabupus ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918352.082204-1056-115037566049898/AnsiballZ_systemd.py'
Jan 20 14:12:32 compute-1 sudo[178511]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:12:32 compute-1 sudo[178505]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 20 14:12:32 compute-1 sudo[178505]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:12:32 compute-1 python3.9[178529]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 20 14:12:32 compute-1 systemd[1]: Reloading.
Jan 20 14:12:33 compute-1 systemd-sysv-generator[178586]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 14:12:33 compute-1 systemd-rc-local-generator[178579]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 14:12:33 compute-1 sudo[178505]: pam_unix(sudo:session): session closed for user root
Jan 20 14:12:33 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:12:33 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:12:33 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:12:33.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:12:33 compute-1 sudo[178511]: pam_unix(sudo:session): session closed for user root
Jan 20 14:12:33 compute-1 sudo[178755]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ixeadymabcgfzurhfvikqyryjiluozrd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918353.4516985-1056-124427654460621/AnsiballZ_systemd.py'
Jan 20 14:12:33 compute-1 sudo[178755]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:12:34 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:12:34 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:12:34 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:12:34.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:12:34 compute-1 ceph-mon[81775]: pgmap v622: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:12:34 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:12:34 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 14:12:34 compute-1 python3.9[178757]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 20 14:12:34 compute-1 systemd[1]: Reloading.
Jan 20 14:12:34 compute-1 systemd-rc-local-generator[178781]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 14:12:34 compute-1 systemd-sysv-generator[178785]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 14:12:35 compute-1 sudo[178755]: pam_unix(sudo:session): session closed for user root
Jan 20 14:12:35 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:12:35 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:12:35 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:12:35.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:12:35 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:12:35 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:12:35 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 14:12:35 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 14:12:35 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:12:35 compute-1 ceph-mon[81775]: pgmap v623: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:12:35 compute-1 sudo[178946]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kgofswfsdsewbwttdhcwfoiqccsraxkk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918355.3510067-1056-270513384165872/AnsiballZ_systemd.py'
Jan 20 14:12:35 compute-1 sudo[178946]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:12:36 compute-1 python3.9[178948]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 20 14:12:36 compute-1 systemd[1]: Reloading.
Jan 20 14:12:36 compute-1 systemd-sysv-generator[178978]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 14:12:36 compute-1 systemd-rc-local-generator[178974]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 14:12:36 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:12:36 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:12:36 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:12:36.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:12:36 compute-1 sudo[178946]: pam_unix(sudo:session): session closed for user root
Jan 20 14:12:36 compute-1 podman[178987]: 2026-01-20 14:12:36.656548188 +0000 UTC m=+0.126069615 container health_status 72be95c12e041eb2cd1a16ad70dfa1391d9f03a5f3180a614ead9459dcfcd46f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 20 14:12:37 compute-1 sudo[179161]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ijnpdoybjxtymfxfhzgvsscobxntwurb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918356.7452836-1056-179705546040898/AnsiballZ_systemd.py'
Jan 20 14:12:37 compute-1 sudo[179161]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:12:37 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:12:37 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:12:37 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:12:37.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:12:37 compute-1 python3.9[179163]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 20 14:12:37 compute-1 sudo[179161]: pam_unix(sudo:session): session closed for user root
Jan 20 14:12:37 compute-1 ceph-mon[81775]: pgmap v624: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:12:38 compute-1 sudo[179317]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hwtkwoiyarcyjldcfupfyjorvpwlbdhl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918357.841816-1056-120287104684735/AnsiballZ_systemd.py'
Jan 20 14:12:38 compute-1 sudo[179317]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:12:38 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:12:38 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:12:38 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:12:38.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:12:38 compute-1 python3.9[179319]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 20 14:12:38 compute-1 systemd[1]: Reloading.
Jan 20 14:12:38 compute-1 systemd-sysv-generator[179349]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 14:12:38 compute-1 systemd-rc-local-generator[179345]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 14:12:38 compute-1 sudo[179317]: pam_unix(sudo:session): session closed for user root
Jan 20 14:12:39 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:12:39 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:12:39 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:12:39.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:12:39 compute-1 sudo[179507]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yynckyryhlnvgdarosqketfimyhbremb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918359.1791277-1164-274636133062139/AnsiballZ_systemd.py'
Jan 20 14:12:39 compute-1 sudo[179507]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:12:39 compute-1 python3.9[179510]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-tls.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 20 14:12:39 compute-1 systemd[1]: Reloading.
Jan 20 14:12:39 compute-1 ceph-mon[81775]: pgmap v625: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:12:39 compute-1 systemd-rc-local-generator[179540]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 14:12:39 compute-1 systemd-sysv-generator[179545]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 14:12:40 compute-1 systemd[1]: Listening on libvirt proxy daemon socket.
Jan 20 14:12:40 compute-1 systemd[1]: Listening on libvirt proxy daemon TLS IP socket.
Jan 20 14:12:40 compute-1 sudo[179507]: pam_unix(sudo:session): session closed for user root
Jan 20 14:12:40 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:12:40 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:12:40 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:12:40 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:12:40.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:12:40 compute-1 sudo[179701]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-opiicxsbdjsnzkwizjrsvzzjwjvydooi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918360.6074362-1188-246584167375909/AnsiballZ_systemd.py'
Jan 20 14:12:40 compute-1 sudo[179701]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:12:41 compute-1 python3.9[179703]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 20 14:12:41 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:12:41 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:12:41 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:12:41.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:12:41 compute-1 sudo[179701]: pam_unix(sudo:session): session closed for user root
Jan 20 14:12:41 compute-1 sudo[179857]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uansuynmlmxygraboyizoxssybmxwroj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918361.4426873-1188-108647852324490/AnsiballZ_systemd.py'
Jan 20 14:12:41 compute-1 sudo[179857]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:12:42 compute-1 python3.9[179859]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 20 14:12:42 compute-1 sudo[179857]: pam_unix(sudo:session): session closed for user root
Jan 20 14:12:42 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:12:42 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:12:42 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:12:42.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:12:42 compute-1 sudo[180012]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hjvxdalchjifbxxgfsmhoyexksjlukmb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918362.3540452-1188-198652555219295/AnsiballZ_systemd.py'
Jan 20 14:12:42 compute-1 sudo[180012]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:12:43 compute-1 python3.9[180014]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 20 14:12:43 compute-1 sudo[180012]: pam_unix(sudo:session): session closed for user root
Jan 20 14:12:43 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:12:43 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:12:43 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:12:43.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:12:43 compute-1 sudo[180168]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ukntfoqvwinqcnzqcvyrsaixgxnqluih ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918363.2961047-1188-259373010789778/AnsiballZ_systemd.py'
Jan 20 14:12:43 compute-1 sudo[180168]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:12:43 compute-1 python3.9[180170]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 20 14:12:43 compute-1 sudo[180168]: pam_unix(sudo:session): session closed for user root
Jan 20 14:12:44 compute-1 sudo[180323]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-adzmxrimthesgjndpshidrqydzkdxdyc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918364.0528846-1188-279065392509536/AnsiballZ_systemd.py'
Jan 20 14:12:44 compute-1 sudo[180323]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:12:44 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:12:44 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:12:44 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:12:44.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:12:44 compute-1 ceph-mon[81775]: pgmap v626: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:12:44 compute-1 python3.9[180325]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 20 14:12:44 compute-1 sudo[180323]: pam_unix(sudo:session): session closed for user root
Jan 20 14:12:45 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:12:45 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:12:45 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:12:45.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:12:45 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:12:45 compute-1 sudo[180478]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sydmjtpbwwcczxudbmtiwxltzxauhwtd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918364.99898-1188-129007052745871/AnsiballZ_systemd.py'
Jan 20 14:12:45 compute-1 sudo[180478]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:12:45 compute-1 ceph-mon[81775]: pgmap v627: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:12:45 compute-1 ceph-mon[81775]: pgmap v628: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:12:45 compute-1 python3.9[180480]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 20 14:12:45 compute-1 sudo[180478]: pam_unix(sudo:session): session closed for user root
Jan 20 14:12:46 compute-1 sudo[180561]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:12:46 compute-1 sudo[180561]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:12:46 compute-1 sudo[180561]: pam_unix(sudo:session): session closed for user root
Jan 20 14:12:46 compute-1 sudo[180609]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 14:12:46 compute-1 sudo[180609]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:12:46 compute-1 sudo[180609]: pam_unix(sudo:session): session closed for user root
Jan 20 14:12:46 compute-1 sudo[180684]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xprwqhlbbzuesswddaufhsylbjgqwcti ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918365.9493382-1188-84084327849897/AnsiballZ_systemd.py'
Jan 20 14:12:46 compute-1 sudo[180684]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:12:46 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:12:46 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:12:46 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:12:46.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:12:46 compute-1 python3.9[180686]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 20 14:12:46 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:12:46 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:12:47 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:12:47 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:12:47 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:12:47.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:12:47 compute-1 sudo[180684]: pam_unix(sudo:session): session closed for user root
Jan 20 14:12:47 compute-1 ceph-mon[81775]: pgmap v629: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:12:48 compute-1 sudo[180840]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wekkqisvixdvoggvjefxryllejnsewse ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918367.9778826-1188-248253493885461/AnsiballZ_systemd.py'
Jan 20 14:12:48 compute-1 sudo[180840]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:12:48 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:12:48 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:12:48 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:12:48.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:12:48 compute-1 python3.9[180842]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 20 14:12:48 compute-1 sudo[180840]: pam_unix(sudo:session): session closed for user root
Jan 20 14:12:49 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:12:49 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:12:49 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:12:49.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:12:49 compute-1 sudo[181006]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nfchxvryvoqkumovjclndxqcukkyuemz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918368.9147437-1188-1754777571249/AnsiballZ_systemd.py'
Jan 20 14:12:49 compute-1 sudo[181006]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:12:49 compute-1 podman[180969]: 2026-01-20 14:12:49.3333935 +0000 UTC m=+0.118311234 container health_status 533cd8e0c3b8b7f910f15cef49e7f879ab46c12ce646a35628c15f13725eb500 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_metadata_agent)
Jan 20 14:12:49 compute-1 python3.9[181012]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 20 14:12:49 compute-1 ceph-mon[81775]: pgmap v630: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:12:50 compute-1 sudo[181019]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:12:50 compute-1 sudo[181019]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:12:50 compute-1 sudo[181019]: pam_unix(sudo:session): session closed for user root
Jan 20 14:12:50 compute-1 sudo[181044]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:12:50 compute-1 sudo[181044]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:12:50 compute-1 sudo[181044]: pam_unix(sudo:session): session closed for user root
Jan 20 14:12:50 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:12:50 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:12:50 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:12:50 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:12:50.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:12:50 compute-1 sudo[181006]: pam_unix(sudo:session): session closed for user root
Jan 20 14:12:51 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:12:51 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:12:51 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:12:51.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:12:51 compute-1 sudo[181220]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yxizhydobstjcbyatggwwambrynaoajp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918370.9275584-1188-13474282839768/AnsiballZ_systemd.py'
Jan 20 14:12:51 compute-1 sudo[181220]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:12:51 compute-1 python3.9[181222]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 20 14:12:51 compute-1 sudo[181220]: pam_unix(sudo:session): session closed for user root
Jan 20 14:12:51 compute-1 ceph-mon[81775]: pgmap v631: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:12:52 compute-1 sudo[181376]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dgcylhfavlxybpxacuwakhjshcznawlo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918371.9503453-1188-86032760842794/AnsiballZ_systemd.py'
Jan 20 14:12:52 compute-1 sudo[181376]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:12:52 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:12:52 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:12:52 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:12:52.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:12:52 compute-1 python3.9[181378]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 20 14:12:52 compute-1 sudo[181376]: pam_unix(sudo:session): session closed for user root
Jan 20 14:12:53 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:12:53 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:12:53 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:12:53.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:12:53 compute-1 sudo[181531]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yztctefdiluuuzfwyniqbnunqrtojjqf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918372.924028-1188-123175208288665/AnsiballZ_systemd.py'
Jan 20 14:12:53 compute-1 sudo[181531]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:12:53 compute-1 python3.9[181533]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 20 14:12:53 compute-1 sudo[181531]: pam_unix(sudo:session): session closed for user root
Jan 20 14:12:53 compute-1 ceph-mon[81775]: pgmap v632: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:12:54 compute-1 sudo[181687]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qnmhwobtzimstxnogewkxphhyzslfymc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918373.968588-1188-26783734414829/AnsiballZ_systemd.py'
Jan 20 14:12:54 compute-1 sudo[181687]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:12:54 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:12:54 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:12:54 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:12:54.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:12:54 compute-1 python3.9[181689]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 20 14:12:54 compute-1 sudo[181687]: pam_unix(sudo:session): session closed for user root
Jan 20 14:12:55 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:12:55 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:12:55 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:12:55.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:12:55 compute-1 ceph-mon[81775]: pgmap v633: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:12:55 compute-1 sudo[181842]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-unstquthygbechyguyhtaewwodocvfye ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918374.9302306-1188-134450516210668/AnsiballZ_systemd.py'
Jan 20 14:12:55 compute-1 sudo[181842]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:12:55 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:12:55 compute-1 python3.9[181844]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 20 14:12:55 compute-1 sudo[181842]: pam_unix(sudo:session): session closed for user root
Jan 20 14:12:56 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:12:56 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:12:56 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:12:56.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:12:57 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:12:57 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:12:57 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:12:57.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:12:57 compute-1 ceph-mon[81775]: pgmap v634: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:12:58 compute-1 sudo[181999]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dpkshdbkhfuujpckmetxdplivexqnitm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918378.0596073-1494-39597991785635/AnsiballZ_file.py'
Jan 20 14:12:58 compute-1 sudo[181999]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:12:58 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:12:58 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:12:58 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:12:58.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:12:58 compute-1 python3.9[182001]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/etc/tmpfiles.d/ setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:12:58 compute-1 sudo[181999]: pam_unix(sudo:session): session closed for user root
Jan 20 14:12:59 compute-1 sudo[182151]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ymsmewbhiijdhzskblxcaocnrbkcmijv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918378.7823665-1494-67141469153948/AnsiballZ_file.py'
Jan 20 14:12:59 compute-1 sudo[182151]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:12:59 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:12:59 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:12:59 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:12:59.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:12:59 compute-1 python3.9[182153]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:12:59 compute-1 sudo[182151]: pam_unix(sudo:session): session closed for user root
Jan 20 14:12:59 compute-1 sudo[182304]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hioojidlqcgqbjkwxumxaadnyhbbtfar ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918379.520219-1494-89984759372293/AnsiballZ_file.py'
Jan 20 14:12:59 compute-1 sudo[182304]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:13:00 compute-1 python3.9[182306]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:13:00 compute-1 sudo[182304]: pam_unix(sudo:session): session closed for user root
Jan 20 14:13:00 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:13:00 compute-1 ceph-mon[81775]: pgmap v635: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:13:00 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:13:00 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:13:00 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:13:00.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:13:00 compute-1 sudo[182456]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xllgmkdjulfxhztlgxwnhvteoqhgcjce ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918380.2506614-1494-80569021535306/AnsiballZ_file.py'
Jan 20 14:13:00 compute-1 sudo[182456]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:13:00 compute-1 python3.9[182458]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt/private setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:13:00 compute-1 sudo[182456]: pam_unix(sudo:session): session closed for user root
Jan 20 14:13:01 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:13:01 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:13:01 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:13:01.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:13:01 compute-1 sudo[182608]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wueafnnrmmtqolxdnkbttbnxmbhnlsqf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918381.0065477-1494-254231238859259/AnsiballZ_file.py'
Jan 20 14:13:01 compute-1 sudo[182608]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:13:01 compute-1 ceph-mon[81775]: pgmap v636: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:13:01 compute-1 python3.9[182610]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/CA setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:13:01 compute-1 sudo[182608]: pam_unix(sudo:session): session closed for user root
Jan 20 14:13:01 compute-1 sudo[182761]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kwjkkzojlnpnycemrjeaukudjbpollyn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918381.677599-1494-120190574257735/AnsiballZ_file.py'
Jan 20 14:13:01 compute-1 sudo[182761]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:13:02 compute-1 python3.9[182763]: ansible-ansible.builtin.file Invoked with group=qemu owner=root path=/etc/pki/qemu setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:13:02 compute-1 sudo[182761]: pam_unix(sudo:session): session closed for user root
Jan 20 14:13:02 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:13:02 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:13:02 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:13:02.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:13:03 compute-1 python3.9[182913]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 14:13:03 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:13:03 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:13:03 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:13:03.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:13:03 compute-1 sudo[183064]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fvrghlxigsghjxoghqxktyaoczkjouno ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918383.4060566-1647-21936290292157/AnsiballZ_stat.py'
Jan 20 14:13:03 compute-1 sudo[183064]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:13:03 compute-1 ceph-mon[81775]: pgmap v637: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:13:04 compute-1 python3.9[183066]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtlogd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:13:04 compute-1 sudo[183064]: pam_unix(sudo:session): session closed for user root
Jan 20 14:13:04 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:13:04 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:13:04 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:13:04.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:13:04 compute-1 sudo[183189]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-keouxgcediucvcwrtkdloluwionmmtif ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918383.4060566-1647-21936290292157/AnsiballZ_copy.py'
Jan 20 14:13:04 compute-1 sudo[183189]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:13:04 compute-1 python3.9[183191]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtlogd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1768918383.4060566-1647-21936290292157/.source.conf follow=False _original_basename=virtlogd.conf checksum=d7a72ae92c2c205983b029473e05a6aa4c58ec24 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:13:04 compute-1 sudo[183189]: pam_unix(sudo:session): session closed for user root
Jan 20 14:13:05 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:13:05 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:13:05 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:13:05.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:13:05 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:13:05 compute-1 sudo[183341]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fhkcatwealgelfilbxhqdqxrksyzyxxu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918385.106534-1647-227078017862475/AnsiballZ_stat.py'
Jan 20 14:13:05 compute-1 sudo[183341]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:13:05 compute-1 python3.9[183343]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtnodedevd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:13:05 compute-1 sudo[183341]: pam_unix(sudo:session): session closed for user root
Jan 20 14:13:06 compute-1 ceph-mon[81775]: pgmap v638: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:13:06 compute-1 sudo[183467]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hazlefptrtzaqvzpzhyijqiwcvvlztmm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918385.106534-1647-227078017862475/AnsiballZ_copy.py'
Jan 20 14:13:06 compute-1 sudo[183467]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:13:06 compute-1 python3.9[183469]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtnodedevd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1768918385.106534-1647-227078017862475/.source.conf follow=False _original_basename=virtnodedevd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:13:06 compute-1 sudo[183467]: pam_unix(sudo:session): session closed for user root
Jan 20 14:13:06 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:13:06 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:13:06 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:13:06.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:13:07 compute-1 sudo[183630]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oqlsffnxvhtwkmvpnlrgddvidpvxikam ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918386.652252-1647-229829906464272/AnsiballZ_stat.py'
Jan 20 14:13:07 compute-1 sudo[183630]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:13:07 compute-1 podman[183593]: 2026-01-20 14:13:07.176219466 +0000 UTC m=+0.186266966 container health_status 72be95c12e041eb2cd1a16ad70dfa1391d9f03a5f3180a614ead9459dcfcd46f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS)
Jan 20 14:13:07 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:13:07 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:13:07 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:13:07.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:13:07 compute-1 python3.9[183637]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtproxyd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:13:07 compute-1 sudo[183630]: pam_unix(sudo:session): session closed for user root
Jan 20 14:13:07 compute-1 ceph-mon[81775]: pgmap v639: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:13:07 compute-1 sudo[183770]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fnuaqygkuaofjpwdijzwocfuiwfdzowq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918386.652252-1647-229829906464272/AnsiballZ_copy.py'
Jan 20 14:13:07 compute-1 sudo[183770]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:13:08 compute-1 python3.9[183772]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtproxyd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1768918386.652252-1647-229829906464272/.source.conf follow=False _original_basename=virtproxyd.conf checksum=28bc484b7c9988e03de49d4fcc0a088ea975f716 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:13:08 compute-1 sudo[183770]: pam_unix(sudo:session): session closed for user root
Jan 20 14:13:08 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:13:08 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:13:08 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:13:08.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:13:08 compute-1 sudo[183922]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qrqkiawmyqpwuttfzhzmemfjindmcjjj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918388.2250416-1647-73206054279324/AnsiballZ_stat.py'
Jan 20 14:13:08 compute-1 sudo[183922]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:13:08 compute-1 python3.9[183924]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtqemud.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:13:08 compute-1 sudo[183922]: pam_unix(sudo:session): session closed for user root
Jan 20 14:13:09 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:13:09 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:13:09 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:13:09.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:13:09 compute-1 sudo[184047]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tpomonaifakxkorykftinvsntwyxbmjk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918388.2250416-1647-73206054279324/AnsiballZ_copy.py'
Jan 20 14:13:09 compute-1 sudo[184047]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:13:09 compute-1 python3.9[184049]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtqemud.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1768918388.2250416-1647-73206054279324/.source.conf follow=False _original_basename=virtqemud.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:13:09 compute-1 sudo[184047]: pam_unix(sudo:session): session closed for user root
Jan 20 14:13:09 compute-1 ceph-mon[81775]: pgmap v640: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:13:10 compute-1 sudo[184200]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hhxlttquladbdpsawzwiqtlkpvnzphkn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918389.6745286-1647-215140573588030/AnsiballZ_stat.py'
Jan 20 14:13:10 compute-1 sudo[184200]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:13:10 compute-1 python3.9[184202]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/qemu.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:13:10 compute-1 sudo[184200]: pam_unix(sudo:session): session closed for user root
Jan 20 14:13:10 compute-1 sudo[184203]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:13:10 compute-1 sudo[184203]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:13:10 compute-1 sudo[184203]: pam_unix(sudo:session): session closed for user root
Jan 20 14:13:10 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:13:10 compute-1 sudo[184230]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:13:10 compute-1 sudo[184230]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:13:10 compute-1 sudo[184230]: pam_unix(sudo:session): session closed for user root
Jan 20 14:13:10 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:13:10 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:13:10 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:13:10.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:13:10 compute-1 sudo[184375]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vqdkaunrkxqnqmoxnmwxmrvmriujaxqw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918389.6745286-1647-215140573588030/AnsiballZ_copy.py'
Jan 20 14:13:10 compute-1 sudo[184375]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:13:11 compute-1 python3.9[184377]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/qemu.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1768918389.6745286-1647-215140573588030/.source.conf follow=False _original_basename=qemu.conf.j2 checksum=c44de21af13c90603565570f09ff60c6a41ed8df backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:13:11 compute-1 sudo[184375]: pam_unix(sudo:session): session closed for user root
Jan 20 14:13:11 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:13:11 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:13:11 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:13:11.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:13:11 compute-1 sudo[184528]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kvfzxwkiljtobjshpjebyzpmomnqeodc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918391.219992-1647-105273642242027/AnsiballZ_stat.py'
Jan 20 14:13:11 compute-1 sudo[184528]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:13:11 compute-1 python3.9[184530]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtsecretd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:13:11 compute-1 sudo[184528]: pam_unix(sudo:session): session closed for user root
Jan 20 14:13:12 compute-1 ceph-mon[81775]: pgmap v641: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:13:12 compute-1 sudo[184653]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wvhcctkigtyytkvueunogqcaorkvsbgw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918391.219992-1647-105273642242027/AnsiballZ_copy.py'
Jan 20 14:13:12 compute-1 sudo[184653]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:13:12 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:13:12 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:13:12 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:13:12.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:13:12 compute-1 python3.9[184655]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtsecretd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1768918391.219992-1647-105273642242027/.source.conf follow=False _original_basename=virtsecretd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:13:12 compute-1 sudo[184653]: pam_unix(sudo:session): session closed for user root
Jan 20 14:13:13 compute-1 sudo[184805]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pkdeljnilhqfuepvxfnxmvcbhfomzysz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918392.7503943-1647-253771963565580/AnsiballZ_stat.py'
Jan 20 14:13:13 compute-1 sudo[184805]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:13:13 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:13:13 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:13:13 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:13:13.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:13:13 compute-1 python3.9[184807]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/auth.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:13:13 compute-1 sudo[184805]: pam_unix(sudo:session): session closed for user root
Jan 20 14:13:13 compute-1 ceph-mon[81775]: pgmap v642: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:13:13 compute-1 sudo[184929]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xhknbjznpqwueriogumgkxidabstenfs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918392.7503943-1647-253771963565580/AnsiballZ_copy.py'
Jan 20 14:13:13 compute-1 sudo[184929]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:13:13 compute-1 python3.9[184931]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/auth.conf group=libvirt mode=0600 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1768918392.7503943-1647-253771963565580/.source.conf follow=False _original_basename=auth.conf checksum=a94cd818c374cec2c8425b70d2e0e2f41b743ae4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:13:13 compute-1 sudo[184929]: pam_unix(sudo:session): session closed for user root
Jan 20 14:13:14 compute-1 sudo[185081]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-orlsmptsxiycqifuwtumiocpewabhpai ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918394.084506-1647-77345410761574/AnsiballZ_stat.py'
Jan 20 14:13:14 compute-1 sudo[185081]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:13:14 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:13:14 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:13:14 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:13:14.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:13:14 compute-1 python3.9[185083]: ansible-ansible.legacy.stat Invoked with path=/etc/sasl2/libvirt.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:13:14 compute-1 sudo[185081]: pam_unix(sudo:session): session closed for user root
Jan 20 14:13:15 compute-1 sudo[185206]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bgmhhgtlfgxpahdtmncwxllwofzdbvug ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918394.084506-1647-77345410761574/AnsiballZ_copy.py'
Jan 20 14:13:15 compute-1 sudo[185206]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:13:15 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:13:15 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:13:15 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:13:15.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:13:15 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:13:15 compute-1 python3.9[185208]: ansible-ansible.legacy.copy Invoked with dest=/etc/sasl2/libvirt.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1768918394.084506-1647-77345410761574/.source.conf follow=False _original_basename=sasl_libvirt.conf checksum=652e4d404bf79253d06956b8e9847c9364979d4a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:13:15 compute-1 sudo[185206]: pam_unix(sudo:session): session closed for user root
Jan 20 14:13:16 compute-1 sudo[185359]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rsxibzssiuztrbwhiklffewbtowncwrt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918395.705858-1986-5720459862384/AnsiballZ_command.py'
Jan 20 14:13:16 compute-1 sudo[185359]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:13:16 compute-1 ceph-mon[81775]: pgmap v643: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:13:16 compute-1 python3.9[185361]: ansible-ansible.legacy.command Invoked with cmd=saslpasswd2 -f /etc/libvirt/passwd.db -p -a libvirt -u openstack migration stdin=12345678 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None
Jan 20 14:13:16 compute-1 sudo[185359]: pam_unix(sudo:session): session closed for user root
Jan 20 14:13:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:13:16.373 140354 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:13:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:13:16.374 140354 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:13:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:13:16.374 140354 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:13:16 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:13:16 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:13:16 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:13:16.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:13:16 compute-1 sudo[185512]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lguwkfugdzlrrjtlbjbifgcmsoesjwku ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918396.6278443-2013-229036805028718/AnsiballZ_file.py'
Jan 20 14:13:16 compute-1 sudo[185512]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:13:17 compute-1 python3.9[185514]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:13:17 compute-1 sudo[185512]: pam_unix(sudo:session): session closed for user root
Jan 20 14:13:17 compute-1 ceph-mon[81775]: pgmap v644: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:13:17 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:13:17 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:13:17 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:13:17.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:13:17 compute-1 sudo[185665]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dtdygwcnmjdduhopirbwlwbuuryahgaw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918397.370985-2013-102197945560824/AnsiballZ_file.py'
Jan 20 14:13:17 compute-1 sudo[185665]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:13:17 compute-1 python3.9[185667]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:13:17 compute-1 sudo[185665]: pam_unix(sudo:session): session closed for user root
Jan 20 14:13:18 compute-1 sudo[185817]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sgoqzbhqdaleofzyfmlbqhcpdwjmgrui ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918398.0727255-2013-68744746593619/AnsiballZ_file.py'
Jan 20 14:13:18 compute-1 sudo[185817]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:13:18 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:13:18 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:13:18 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:13:18.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:13:18 compute-1 python3.9[185819]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:13:18 compute-1 sudo[185817]: pam_unix(sudo:session): session closed for user root
Jan 20 14:13:19 compute-1 sudo[185969]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yxscpxsovchoohibzwulwddkgsrjgxjm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918398.7815275-2013-168845457900780/AnsiballZ_file.py'
Jan 20 14:13:19 compute-1 sudo[185969]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:13:19 compute-1 ceph-mon[81775]: pgmap v645: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:13:19 compute-1 python3.9[185971]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:13:19 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:13:19 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:13:19 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:13:19.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:13:19 compute-1 sudo[185969]: pam_unix(sudo:session): session closed for user root
Jan 20 14:13:19 compute-1 sudo[186134]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vrgfvjbjdtkvvnuachwgpnuiaakzxsgk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918399.4277594-2013-160502478737522/AnsiballZ_file.py'
Jan 20 14:13:19 compute-1 sudo[186134]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:13:19 compute-1 podman[186096]: 2026-01-20 14:13:19.858065944 +0000 UTC m=+0.128284119 container health_status 533cd8e0c3b8b7f910f15cef49e7f879ab46c12ce646a35628c15f13725eb500 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Jan 20 14:13:20 compute-1 python3.9[186142]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:13:20 compute-1 sudo[186134]: pam_unix(sudo:session): session closed for user root
Jan 20 14:13:20 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:13:20 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:13:20 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:13:20 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:13:20.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:13:20 compute-1 sudo[186292]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bkbuocurslpakcmasgbwqwckdhurlvut ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918400.2520845-2013-139244352358771/AnsiballZ_file.py'
Jan 20 14:13:20 compute-1 sudo[186292]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:13:20 compute-1 python3.9[186294]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:13:20 compute-1 sudo[186292]: pam_unix(sudo:session): session closed for user root
Jan 20 14:13:21 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:13:21 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:13:21 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:13:21.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:13:21 compute-1 sudo[186444]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-njufybvhmudqywmqrksrrhgtltomjggo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918400.948575-2013-109004175616211/AnsiballZ_file.py'
Jan 20 14:13:21 compute-1 sudo[186444]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:13:21 compute-1 ceph-mon[81775]: pgmap v646: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:13:21 compute-1 python3.9[186446]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:13:21 compute-1 sudo[186444]: pam_unix(sudo:session): session closed for user root
Jan 20 14:13:22 compute-1 sudo[186597]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zcoqcjqqhspnebhnkwitlxiktsjcnhhh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918401.720404-2013-140337759250518/AnsiballZ_file.py'
Jan 20 14:13:22 compute-1 sudo[186597]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:13:22 compute-1 python3.9[186599]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:13:22 compute-1 sudo[186597]: pam_unix(sudo:session): session closed for user root
Jan 20 14:13:22 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:13:22 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:13:22 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:13:22.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:13:22 compute-1 sudo[186749]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jiuwbqsmscafcgalcyuasvljxxkthmxj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918402.5952382-2013-1250424607251/AnsiballZ_file.py'
Jan 20 14:13:22 compute-1 sudo[186749]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:13:23 compute-1 python3.9[186751]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:13:23 compute-1 sudo[186749]: pam_unix(sudo:session): session closed for user root
Jan 20 14:13:23 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:13:23 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:13:23 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:13:23.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:13:23 compute-1 sudo[186902]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-idgfevwaouynjimybeufhthapinsbofb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918403.2352238-2013-134079471083760/AnsiballZ_file.py'
Jan 20 14:13:23 compute-1 sudo[186902]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:13:23 compute-1 python3.9[186904]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:13:23 compute-1 sudo[186902]: pam_unix(sudo:session): session closed for user root
Jan 20 14:13:24 compute-1 sudo[187054]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gghgarzffybjvyhqmnsikxhlrgvuvwfl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918404.0378854-2013-134774580790233/AnsiballZ_file.py'
Jan 20 14:13:24 compute-1 sudo[187054]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:13:24 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:13:24 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:13:24 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:13:24.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:13:24 compute-1 python3.9[187056]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:13:24 compute-1 ceph-mon[81775]: pgmap v647: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:13:24 compute-1 sudo[187054]: pam_unix(sudo:session): session closed for user root
Jan 20 14:13:25 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:13:25 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:13:25 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:13:25.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:13:25 compute-1 sudo[187206]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kidwbcqkywhsegtydfzoezcvqwiaaaar ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918404.8781772-2013-38291891856256/AnsiballZ_file.py'
Jan 20 14:13:25 compute-1 sudo[187206]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:13:25 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:13:25 compute-1 python3.9[187208]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:13:25 compute-1 sudo[187206]: pam_unix(sudo:session): session closed for user root
Jan 20 14:13:26 compute-1 sudo[187359]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vfxsgpwcheveftlohwfqwfpvtkomdhdw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918405.7138286-2013-156101984309707/AnsiballZ_file.py'
Jan 20 14:13:26 compute-1 sudo[187359]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:13:26 compute-1 python3.9[187361]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:13:26 compute-1 sudo[187359]: pam_unix(sudo:session): session closed for user root
Jan 20 14:13:26 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:13:26 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:13:26 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:13:26.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:13:26 compute-1 ceph-mon[81775]: pgmap v648: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:13:26 compute-1 sudo[187511]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jbazbxnonwlijmqpxkjpcrecmwzawgvf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918406.5053668-2013-142018910715480/AnsiballZ_file.py'
Jan 20 14:13:26 compute-1 sudo[187511]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:13:27 compute-1 python3.9[187513]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:13:27 compute-1 sudo[187511]: pam_unix(sudo:session): session closed for user root
Jan 20 14:13:27 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:13:27 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:13:27 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:13:27.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:13:27 compute-1 ceph-mgr[82135]: client.0 ms_handle_reset on v2:192.168.122.100:6800/2542147622
Jan 20 14:13:27 compute-1 ceph-mon[81775]: pgmap v649: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:13:27 compute-1 sudo[187664]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vmbkfgtkawpwoakealuxtiwgzjopkpjm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918407.536898-2310-275333947936575/AnsiballZ_stat.py'
Jan 20 14:13:27 compute-1 sudo[187664]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:13:28 compute-1 python3.9[187666]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:13:28 compute-1 sudo[187664]: pam_unix(sudo:session): session closed for user root
Jan 20 14:13:28 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:13:28 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:13:28 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:13:28.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:13:28 compute-1 sudo[187787]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qmyvasxgvdkbenvwoxjudbsskshloeqr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918407.536898-2310-275333947936575/AnsiballZ_copy.py'
Jan 20 14:13:28 compute-1 sudo[187787]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:13:28 compute-1 python3.9[187789]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768918407.536898-2310-275333947936575/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:13:28 compute-1 sudo[187787]: pam_unix(sudo:session): session closed for user root
Jan 20 14:13:29 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:13:29 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:13:29 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:13:29.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:13:29 compute-1 sudo[187939]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tutfnhxtmxvhtiawqnjqrmxokllanadk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918408.9991007-2310-198369940559772/AnsiballZ_stat.py'
Jan 20 14:13:29 compute-1 sudo[187939]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:13:29 compute-1 python3.9[187941]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:13:29 compute-1 sudo[187939]: pam_unix(sudo:session): session closed for user root
Jan 20 14:13:30 compute-1 sudo[188063]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jttrlvsmssztyhcugzwcqdpbbripzjyq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918408.9991007-2310-198369940559772/AnsiballZ_copy.py'
Jan 20 14:13:30 compute-1 sudo[188063]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:13:30 compute-1 python3.9[188065]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768918408.9991007-2310-198369940559772/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:13:30 compute-1 sudo[188063]: pam_unix(sudo:session): session closed for user root
Jan 20 14:13:30 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:13:30 compute-1 ceph-mon[81775]: pgmap v650: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:13:30 compute-1 sudo[188111]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:13:30 compute-1 sudo[188111]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:13:30 compute-1 sudo[188111]: pam_unix(sudo:session): session closed for user root
Jan 20 14:13:30 compute-1 sudo[188167]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:13:30 compute-1 sudo[188167]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:13:30 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:13:30 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:13:30 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:13:30.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:13:30 compute-1 sudo[188167]: pam_unix(sudo:session): session closed for user root
Jan 20 14:13:30 compute-1 sudo[188265]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xfmacayhrarhwuykxfwzzykfbfsqklxs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918410.4076924-2310-231478974680917/AnsiballZ_stat.py'
Jan 20 14:13:30 compute-1 sudo[188265]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:13:30 compute-1 python3.9[188267]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:13:30 compute-1 sudo[188265]: pam_unix(sudo:session): session closed for user root
Jan 20 14:13:31 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:13:31 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:13:31 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:13:31.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:13:31 compute-1 sudo[188388]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-djfbhkoztjcuyutusbwbaicrajvobtro ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918410.4076924-2310-231478974680917/AnsiballZ_copy.py'
Jan 20 14:13:31 compute-1 sudo[188388]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:13:31 compute-1 python3.9[188390]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768918410.4076924-2310-231478974680917/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:13:31 compute-1 sudo[188388]: pam_unix(sudo:session): session closed for user root
Jan 20 14:13:31 compute-1 ceph-mon[81775]: pgmap v651: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:13:32 compute-1 sudo[188541]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pikszecmhineduowibybwhjhbbdznebd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918411.7933903-2310-278971580310133/AnsiballZ_stat.py'
Jan 20 14:13:32 compute-1 sudo[188541]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:13:32 compute-1 python3.9[188543]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:13:32 compute-1 sudo[188541]: pam_unix(sudo:session): session closed for user root
Jan 20 14:13:32 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:13:32 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:13:32 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:13:32.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:13:32 compute-1 sudo[188664]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zugxjdexyaqmetxtwejkthjpcabjdsab ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918411.7933903-2310-278971580310133/AnsiballZ_copy.py'
Jan 20 14:13:32 compute-1 sudo[188664]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:13:33 compute-1 python3.9[188666]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768918411.7933903-2310-278971580310133/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:13:33 compute-1 sudo[188664]: pam_unix(sudo:session): session closed for user root
Jan 20 14:13:33 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:13:33 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:13:33 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:13:33.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:13:33 compute-1 sudo[188817]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ihsospiplycefxlfpcltkwzhkrgzkqlb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918413.1734455-2310-26613866388463/AnsiballZ_stat.py'
Jan 20 14:13:33 compute-1 sudo[188817]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:13:33 compute-1 python3.9[188819]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:13:33 compute-1 sudo[188817]: pam_unix(sudo:session): session closed for user root
Jan 20 14:13:34 compute-1 sudo[188940]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zebehewypqkdwoevtpkhhlyhtrvbruni ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918413.1734455-2310-26613866388463/AnsiballZ_copy.py'
Jan 20 14:13:34 compute-1 sudo[188940]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:13:34 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:13:34 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:13:34 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:13:34.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:13:35 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:13:35 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:13:35 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:13:35.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:13:37 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:13:37 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:13:37 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:13:37.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:13:37 compute-1 ceph-mon[81775]: pgmap v652: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:13:37 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:13:37 compute-1 ceph-mon[81775]: pgmap v653: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:13:37 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:13:37 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:13:37 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:13:37.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:13:37 compute-1 python3.9[188942]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768918413.1734455-2310-26613866388463/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:13:37 compute-1 sudo[188940]: pam_unix(sudo:session): session closed for user root
Jan 20 14:13:38 compute-1 podman[188945]: 2026-01-20 14:13:38.182130602 +0000 UTC m=+0.199846597 container health_status 72be95c12e041eb2cd1a16ad70dfa1391d9f03a5f3180a614ead9459dcfcd46f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.vendor=CentOS)
Jan 20 14:13:38 compute-1 ceph-mon[81775]: pgmap v654: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:13:38 compute-1 sudo[189121]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-drnlxnvogogdhvnskmcgjledsrnybrsx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918418.092242-2310-252288190296608/AnsiballZ_stat.py'
Jan 20 14:13:38 compute-1 sudo[189121]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:13:38 compute-1 python3.9[189123]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:13:38 compute-1 sudo[189121]: pam_unix(sudo:session): session closed for user root
Jan 20 14:13:39 compute-1 sudo[189244]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dracwgqdpazctgxerziikigkxaiwkbss ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918418.092242-2310-252288190296608/AnsiballZ_copy.py'
Jan 20 14:13:39 compute-1 sudo[189244]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:13:39 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:13:39 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:13:39 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:13:39.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:13:39 compute-1 ceph-mon[81775]: pgmap v655: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:13:39 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:13:39 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:13:39 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:13:39.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:13:39 compute-1 python3.9[189246]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768918418.092242-2310-252288190296608/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:13:39 compute-1 sudo[189244]: pam_unix(sudo:session): session closed for user root
Jan 20 14:13:39 compute-1 sudo[189397]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rioudgrdibilwpdkwhfldizifocshkeq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918419.523305-2310-257019743708740/AnsiballZ_stat.py'
Jan 20 14:13:39 compute-1 sudo[189397]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:13:40 compute-1 python3.9[189399]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:13:40 compute-1 sudo[189397]: pam_unix(sudo:session): session closed for user root
Jan 20 14:13:40 compute-1 sudo[189520]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tytibxqnatwwvpanttamkiimhrxgfhda ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918419.523305-2310-257019743708740/AnsiballZ_copy.py'
Jan 20 14:13:40 compute-1 sudo[189520]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:13:40 compute-1 python3.9[189522]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768918419.523305-2310-257019743708740/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:13:40 compute-1 sudo[189520]: pam_unix(sudo:session): session closed for user root
Jan 20 14:13:41 compute-1 sudo[189672]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iovzargcmpmxspvvjtskopzspdxifupl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918420.8661747-2310-130548258020543/AnsiballZ_stat.py'
Jan 20 14:13:41 compute-1 sudo[189672]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:13:41 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:13:41 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:13:41 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:13:41.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:13:41 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:13:41 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:13:41 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:13:41.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:13:41 compute-1 python3.9[189674]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:13:41 compute-1 sudo[189672]: pam_unix(sudo:session): session closed for user root
Jan 20 14:13:41 compute-1 sudo[189796]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ungywktmdgofqavzhlnarswgtdyelgyx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918420.8661747-2310-130548258020543/AnsiballZ_copy.py'
Jan 20 14:13:41 compute-1 sudo[189796]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:13:41 compute-1 python3.9[189798]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768918420.8661747-2310-130548258020543/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:13:41 compute-1 ceph-mon[81775]: pgmap v656: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:13:41 compute-1 sudo[189796]: pam_unix(sudo:session): session closed for user root
Jan 20 14:13:42 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:13:42 compute-1 sudo[189948]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ttbuqgrszulsflhddjcefjwdepbvtsnz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918422.125068-2310-226069141755676/AnsiballZ_stat.py'
Jan 20 14:13:42 compute-1 sudo[189948]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:13:42 compute-1 python3.9[189950]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:13:42 compute-1 sudo[189948]: pam_unix(sudo:session): session closed for user root
Jan 20 14:13:43 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:13:43 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:13:43 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:13:43.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:13:43 compute-1 sudo[190071]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pezcdxomexcgtrexfbfdkgglvbpkzoqs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918422.125068-2310-226069141755676/AnsiballZ_copy.py'
Jan 20 14:13:43 compute-1 sudo[190071]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:13:43 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:13:43 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:13:43 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:13:43.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:13:43 compute-1 python3.9[190073]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768918422.125068-2310-226069141755676/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:13:43 compute-1 sudo[190071]: pam_unix(sudo:session): session closed for user root
Jan 20 14:13:43 compute-1 sudo[190224]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sljlrekyajnnxxsjbqmmxcpkoaskdimq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918423.5945578-2310-149168487224057/AnsiballZ_stat.py'
Jan 20 14:13:43 compute-1 sudo[190224]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:13:44 compute-1 ceph-mon[81775]: pgmap v657: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:13:44 compute-1 python3.9[190226]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:13:44 compute-1 sudo[190224]: pam_unix(sudo:session): session closed for user root
Jan 20 14:13:44 compute-1 sudo[190347]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-flpounirtoemerdyocmndtmmixgjkjwd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918423.5945578-2310-149168487224057/AnsiballZ_copy.py'
Jan 20 14:13:44 compute-1 sudo[190347]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:13:44 compute-1 ceph-mon[81775]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #28. Immutable memtables: 0.
Jan 20 14:13:44 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:13:44.606991) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 20 14:13:44 compute-1 ceph-mon[81775]: rocksdb: [db/flush_job.cc:856] [default] [JOB 13] Flushing memtable with next log file: 28
Jan 20 14:13:44 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768918424607115, "job": 13, "event": "flush_started", "num_memtables": 1, "num_entries": 2647, "num_deletes": 501, "total_data_size": 6153070, "memory_usage": 6229720, "flush_reason": "Manual Compaction"}
Jan 20 14:13:44 compute-1 ceph-mon[81775]: rocksdb: [db/flush_job.cc:885] [default] [JOB 13] Level-0 flush table #29: started
Jan 20 14:13:44 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768918424631284, "cf_name": "default", "job": 13, "event": "table_file_creation", "file_number": 29, "file_size": 2361262, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13694, "largest_seqno": 16336, "table_properties": {"data_size": 2353696, "index_size": 3740, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2757, "raw_key_size": 21565, "raw_average_key_size": 19, "raw_value_size": 2335088, "raw_average_value_size": 2130, "num_data_blocks": 169, "num_entries": 1096, "num_filter_entries": 1096, "num_deletions": 501, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768918191, "oldest_key_time": 1768918191, "file_creation_time": 1768918424, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1539d774-8a6f-4e48-b253-137c44586344", "db_session_id": "LFF7G2OZDOU7TKQ8MKAH", "orig_file_number": 29, "seqno_to_time_mapping": "N/A"}}
Jan 20 14:13:44 compute-1 ceph-mon[81775]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 13] Flush lasted 24371 microseconds, and 11176 cpu microseconds.
Jan 20 14:13:44 compute-1 ceph-mon[81775]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 14:13:44 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:13:44.631363) [db/flush_job.cc:967] [default] [JOB 13] Level-0 flush table #29: 2361262 bytes OK
Jan 20 14:13:44 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:13:44.631386) [db/memtable_list.cc:519] [default] Level-0 commit table #29 started
Jan 20 14:13:44 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:13:44.633998) [db/memtable_list.cc:722] [default] Level-0 commit table #29: memtable #1 done
Jan 20 14:13:44 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:13:44.634028) EVENT_LOG_v1 {"time_micros": 1768918424634019, "job": 13, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 20 14:13:44 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:13:44.634054) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 20 14:13:44 compute-1 ceph-mon[81775]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 13] Try to delete WAL files size 6140736, prev total WAL file size 6140736, number of live WAL files 2.
Jan 20 14:13:44 compute-1 ceph-mon[81775]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000025.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 14:13:44 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:13:44.636626) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400323530' seq:72057594037927935, type:22 .. '6D67727374617400353031' seq:0, type:0; will stop at (end)
Jan 20 14:13:44 compute-1 ceph-mon[81775]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 14] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 20 14:13:44 compute-1 ceph-mon[81775]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 13 Base level 0, inputs: [29(2305KB)], [27(9677KB)]
Jan 20 14:13:44 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768918424636682, "job": 14, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [29], "files_L6": [27], "score": -1, "input_data_size": 12271262, "oldest_snapshot_seqno": -1}
Jan 20 14:13:44 compute-1 ceph-mon[81775]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 14] Generated table #30: 4320 keys, 8321044 bytes, temperature: kUnknown
Jan 20 14:13:44 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768918424696419, "cf_name": "default", "job": 14, "event": "table_file_creation", "file_number": 30, "file_size": 8321044, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8290807, "index_size": 18351, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10821, "raw_key_size": 106684, "raw_average_key_size": 24, "raw_value_size": 8211287, "raw_average_value_size": 1900, "num_data_blocks": 773, "num_entries": 4320, "num_filter_entries": 4320, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768917474, "oldest_key_time": 0, "file_creation_time": 1768918424, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1539d774-8a6f-4e48-b253-137c44586344", "db_session_id": "LFF7G2OZDOU7TKQ8MKAH", "orig_file_number": 30, "seqno_to_time_mapping": "N/A"}}
Jan 20 14:13:44 compute-1 ceph-mon[81775]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 14:13:44 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:13:44.696683) [db/compaction/compaction_job.cc:1663] [default] [JOB 14] Compacted 1@0 + 1@6 files to L6 => 8321044 bytes
Jan 20 14:13:44 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:13:44.710830) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 205.2 rd, 139.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.3, 9.5 +0.0 blob) out(7.9 +0.0 blob), read-write-amplify(8.7) write-amplify(3.5) OK, records in: 5233, records dropped: 913 output_compression: NoCompression
Jan 20 14:13:44 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:13:44.710899) EVENT_LOG_v1 {"time_micros": 1768918424710856, "job": 14, "event": "compaction_finished", "compaction_time_micros": 59811, "compaction_time_cpu_micros": 35803, "output_level": 6, "num_output_files": 1, "total_output_size": 8321044, "num_input_records": 5233, "num_output_records": 4320, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 20 14:13:44 compute-1 ceph-mon[81775]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000029.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 14:13:44 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768918424711699, "job": 14, "event": "table_file_deletion", "file_number": 29}
Jan 20 14:13:44 compute-1 ceph-mon[81775]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000027.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 14:13:44 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768918424714994, "job": 14, "event": "table_file_deletion", "file_number": 27}
Jan 20 14:13:44 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:13:44.636516) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:13:44 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:13:44.715071) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:13:44 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:13:44.715079) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:13:44 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:13:44.715083) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:13:44 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:13:44.715087) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:13:44 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:13:44.715091) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:13:44 compute-1 python3.9[190349]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768918423.5945578-2310-149168487224057/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:13:44 compute-1 sudo[190347]: pam_unix(sudo:session): session closed for user root
Jan 20 14:13:45 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:13:45 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:13:45 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:13:45.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:13:45 compute-1 sudo[190499]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mjbofvjbeyfdxqsfptnpzihxeumdkqfr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918424.877487-2310-233701375186516/AnsiballZ_stat.py'
Jan 20 14:13:45 compute-1 sudo[190499]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:13:45 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:13:45 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:13:45 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:13:45.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:13:45 compute-1 python3.9[190501]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:13:45 compute-1 sudo[190499]: pam_unix(sudo:session): session closed for user root
Jan 20 14:13:45 compute-1 ceph-mon[81775]: pgmap v658: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:13:45 compute-1 sudo[190623]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-itsxsbcrzmgipphaiwoofopcdybtxvmk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918424.877487-2310-233701375186516/AnsiballZ_copy.py'
Jan 20 14:13:45 compute-1 sudo[190623]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:13:46 compute-1 python3.9[190625]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768918424.877487-2310-233701375186516/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:13:46 compute-1 sudo[190623]: pam_unix(sudo:session): session closed for user root
Jan 20 14:13:46 compute-1 sudo[190725]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:13:46 compute-1 sudo[190725]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:13:46 compute-1 sudo[190725]: pam_unix(sudo:session): session closed for user root
Jan 20 14:13:46 compute-1 sudo[190771]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:13:46 compute-1 sudo[190771]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:13:46 compute-1 sudo[190771]: pam_unix(sudo:session): session closed for user root
Jan 20 14:13:46 compute-1 sudo[190832]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bwyrwvuugwogwzwgevjcnmxtebpgggva ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918426.2347817-2310-76284868860818/AnsiballZ_stat.py'
Jan 20 14:13:46 compute-1 sudo[190832]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:13:46 compute-1 sudo[190819]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:13:46 compute-1 sudo[190819]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:13:46 compute-1 sudo[190819]: pam_unix(sudo:session): session closed for user root
Jan 20 14:13:46 compute-1 sudo[190853]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 20 14:13:46 compute-1 sudo[190853]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:13:46 compute-1 python3.9[190850]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:13:46 compute-1 sudo[190832]: pam_unix(sudo:session): session closed for user root
Jan 20 14:13:47 compute-1 sudo[190853]: pam_unix(sudo:session): session closed for user root
Jan 20 14:13:47 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:13:47 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:13:47 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:13:47.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:13:47 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:13:47 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:13:47 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:13:47 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:13:47.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:13:47 compute-1 sudo[191030]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gkdkelrequikpeesotuxnqqttymxrwqs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918426.2347817-2310-76284868860818/AnsiballZ_copy.py'
Jan 20 14:13:47 compute-1 sudo[191030]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:13:47 compute-1 python3.9[191032]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768918426.2347817-2310-76284868860818/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:13:47 compute-1 sudo[191030]: pam_unix(sudo:session): session closed for user root
Jan 20 14:13:48 compute-1 sudo[191183]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wthnakjawmaaevqpajiwboxmvwjtplsk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918427.7473671-2310-52907660477716/AnsiballZ_stat.py'
Jan 20 14:13:48 compute-1 sudo[191183]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:13:48 compute-1 ceph-mon[81775]: pgmap v659: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:13:48 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:13:48 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 14:13:48 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:13:48 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 14:13:48 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 14:13:48 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:13:48 compute-1 python3.9[191185]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:13:48 compute-1 sudo[191183]: pam_unix(sudo:session): session closed for user root
Jan 20 14:13:48 compute-1 auditd[701]: Audit daemon rotating log files
Jan 20 14:13:48 compute-1 sudo[191306]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wyfljzjcpjjkvccdjegvtsbuhwnowhnk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918427.7473671-2310-52907660477716/AnsiballZ_copy.py'
Jan 20 14:13:48 compute-1 sudo[191306]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:13:48 compute-1 python3.9[191308]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768918427.7473671-2310-52907660477716/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:13:48 compute-1 sudo[191306]: pam_unix(sudo:session): session closed for user root
Jan 20 14:13:49 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:13:49 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:13:49 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:13:49.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:13:49 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:13:49 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:13:49 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:13:49.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:13:49 compute-1 ceph-mon[81775]: pgmap v660: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:13:49 compute-1 sudo[191458]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zzdirberzienatcvnwpcdytmswdcohuu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918429.0384455-2310-126275348430243/AnsiballZ_stat.py'
Jan 20 14:13:49 compute-1 sudo[191458]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:13:49 compute-1 python3.9[191460]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:13:49 compute-1 sudo[191458]: pam_unix(sudo:session): session closed for user root
Jan 20 14:13:50 compute-1 podman[191514]: 2026-01-20 14:13:50.048432762 +0000 UTC m=+0.086723614 container health_status 533cd8e0c3b8b7f910f15cef49e7f879ab46c12ce646a35628c15f13725eb500 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 20 14:13:50 compute-1 sudo[191601]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ggdqdjmjvgutugxrgeczjuravcafeysl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918429.0384455-2310-126275348430243/AnsiballZ_copy.py'
Jan 20 14:13:50 compute-1 sudo[191601]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:13:50 compute-1 python3.9[191603]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768918429.0384455-2310-126275348430243/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:13:50 compute-1 sudo[191601]: pam_unix(sudo:session): session closed for user root
Jan 20 14:13:50 compute-1 sudo[191628]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:13:50 compute-1 sudo[191628]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:13:50 compute-1 sudo[191628]: pam_unix(sudo:session): session closed for user root
Jan 20 14:13:50 compute-1 sudo[191676]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:13:50 compute-1 sudo[191676]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:13:50 compute-1 sudo[191676]: pam_unix(sudo:session): session closed for user root
Jan 20 14:13:51 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:13:51 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:13:51 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:13:51.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:13:51 compute-1 python3.9[191803]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail
                                             ls -lRZ /run/libvirt | grep -E ':container_\S+_t'
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 14:13:51 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:13:51 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:13:51 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:13:51.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:13:52 compute-1 sudo[191957]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yowffyiwnmzbndbgzyqryteqovhhvaoz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918431.5332773-2928-273337639231154/AnsiballZ_seboolean.py'
Jan 20 14:13:52 compute-1 sudo[191957]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:13:52 compute-1 ceph-mon[81775]: pgmap v661: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:13:52 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:13:52 compute-1 python3.9[191959]: ansible-ansible.posix.seboolean Invoked with name=os_enable_vtpm persistent=True state=True ignore_selinux_state=False
Jan 20 14:13:53 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:13:53 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:13:53 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:13:53.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:13:53 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:13:53 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:13:53 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:13:53.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:13:53 compute-1 sudo[191957]: pam_unix(sudo:session): session closed for user root
Jan 20 14:13:53 compute-1 ceph-mon[81775]: pgmap v662: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:13:54 compute-1 sudo[192035]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:13:54 compute-1 dbus-broker-launch[771]: avc:  op=load_policy lsm=selinux seqno=13 res=1
Jan 20 14:13:54 compute-1 sudo[192035]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:13:54 compute-1 sudo[192035]: pam_unix(sudo:session): session closed for user root
Jan 20 14:13:54 compute-1 sudo[192087]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 14:13:54 compute-1 sudo[192087]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:13:54 compute-1 sudo[192087]: pam_unix(sudo:session): session closed for user root
Jan 20 14:13:54 compute-1 sudo[192164]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xqtxnipmeuvyvjpmtejxilfrrwfajsrx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918434.341561-2952-39215696182123/AnsiballZ_copy.py'
Jan 20 14:13:54 compute-1 sudo[192164]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:13:54 compute-1 python3.9[192166]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/servercert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:13:54 compute-1 sudo[192164]: pam_unix(sudo:session): session closed for user root
Jan 20 14:13:55 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:13:55 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.003000085s ======
Jan 20 14:13:55 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:13:55.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000085s
Jan 20 14:13:55 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:13:55 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:13:55 compute-1 ceph-mon[81775]: pgmap v663: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:13:55 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:13:55 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:13:55 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:13:55.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:13:55 compute-1 sudo[192316]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xdugzkoujecdifiyinplakxtzkjtjsiy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918435.0189607-2952-109992120694373/AnsiballZ_copy.py'
Jan 20 14:13:55 compute-1 sudo[192316]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:13:55 compute-1 python3.9[192318]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/serverkey.pem group=root mode=0600 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:13:55 compute-1 sudo[192316]: pam_unix(sudo:session): session closed for user root
Jan 20 14:13:55 compute-1 sudo[192469]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jjfokubncqkhsoxuznfkqrdnlbxbhfwu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918435.6854932-2952-172174635339625/AnsiballZ_copy.py'
Jan 20 14:13:55 compute-1 sudo[192469]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:13:56 compute-1 python3.9[192471]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/clientcert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:13:56 compute-1 sudo[192469]: pam_unix(sudo:session): session closed for user root
Jan 20 14:13:56 compute-1 sudo[192621]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uvuwowfbmknoxjdzocrvcwntgonfrpvg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918436.3509657-2952-247328747814546/AnsiballZ_copy.py'
Jan 20 14:13:56 compute-1 sudo[192621]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:13:56 compute-1 python3.9[192623]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/clientkey.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:13:56 compute-1 sudo[192621]: pam_unix(sudo:session): session closed for user root
Jan 20 14:13:57 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:13:57 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:13:57 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:13:57 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:13:57.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:13:57 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:13:57 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:13:57 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:13:57.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:13:57 compute-1 sudo[192774]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xjftwbezqplmyhkowtyksngdyjzhzkgf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918437.1700227-2952-93901441845904/AnsiballZ_copy.py'
Jan 20 14:13:57 compute-1 sudo[192774]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:13:57 compute-1 python3.9[192776]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/CA/cacert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:13:57 compute-1 sudo[192774]: pam_unix(sudo:session): session closed for user root
Jan 20 14:13:58 compute-1 ceph-mon[81775]: pgmap v664: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:13:58 compute-1 sudo[192926]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hktblfnibjebaxbokoeceychfpgiuxtu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918438.354128-3060-19990088309879/AnsiballZ_copy.py'
Jan 20 14:13:58 compute-1 sudo[192926]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:13:58 compute-1 python3.9[192928]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:13:58 compute-1 sudo[192926]: pam_unix(sudo:session): session closed for user root
Jan 20 14:13:59 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:13:59 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:13:59 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:13:59.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:13:59 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:13:59 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:13:59 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:13:59.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:13:59 compute-1 sudo[193078]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zzjodzbjdyxfjzpcjwnkdfdvnftectpj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918439.0888305-3060-56863273032381/AnsiballZ_copy.py'
Jan 20 14:13:59 compute-1 sudo[193078]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:13:59 compute-1 python3.9[193080]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:13:59 compute-1 sudo[193078]: pam_unix(sudo:session): session closed for user root
Jan 20 14:13:59 compute-1 ceph-mon[81775]: pgmap v665: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:14:00 compute-1 sudo[193231]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pjoipctolnqdhscvawcwxvcnkikwrlga ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918439.8536224-3060-162201531483111/AnsiballZ_copy.py'
Jan 20 14:14:00 compute-1 sudo[193231]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:14:00 compute-1 python3.9[193233]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:14:00 compute-1 sudo[193231]: pam_unix(sudo:session): session closed for user root
Jan 20 14:14:00 compute-1 sudo[193383]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fknaoldlmgrrycjmkfjbdnjjkdwpwijc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918440.5899916-3060-18705437156270/AnsiballZ_copy.py'
Jan 20 14:14:00 compute-1 sudo[193383]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:14:01 compute-1 python3.9[193385]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:14:01 compute-1 sudo[193383]: pam_unix(sudo:session): session closed for user root
Jan 20 14:14:01 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:14:01 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:14:01 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:14:01.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:14:01 compute-1 ceph-mon[81775]: pgmap v666: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:14:01 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:14:01 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:14:01 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:14:01.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:14:01 compute-1 sudo[193536]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oywabubqljrqbnqitfqdhbppggqxqwhp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918441.351387-3060-198687041928344/AnsiballZ_copy.py'
Jan 20 14:14:01 compute-1 sudo[193536]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:14:01 compute-1 python3.9[193538]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/ca-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:14:01 compute-1 sudo[193536]: pam_unix(sudo:session): session closed for user root
Jan 20 14:14:02 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:14:02 compute-1 sudo[193688]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-obckgthmfxtgnsnkfsyuwgujyucvmyvo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918442.2765374-3168-166303453579517/AnsiballZ_systemd.py'
Jan 20 14:14:02 compute-1 sudo[193688]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:14:03 compute-1 python3.9[193690]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtlogd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 20 14:14:03 compute-1 systemd[1]: Reloading.
Jan 20 14:14:03 compute-1 systemd-rc-local-generator[193710]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 14:14:03 compute-1 systemd-sysv-generator[193716]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 14:14:03 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:14:03 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:14:03 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:14:03.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:14:03 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:14:03 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:14:03 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:14:03.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:14:03 compute-1 systemd[1]: Starting libvirt logging daemon socket...
Jan 20 14:14:03 compute-1 systemd[1]: Listening on libvirt logging daemon socket.
Jan 20 14:14:03 compute-1 systemd[1]: Starting libvirt logging daemon admin socket...
Jan 20 14:14:03 compute-1 systemd[1]: Listening on libvirt logging daemon admin socket.
Jan 20 14:14:03 compute-1 systemd[1]: Starting libvirt logging daemon...
Jan 20 14:14:03 compute-1 systemd[1]: Started libvirt logging daemon.
Jan 20 14:14:03 compute-1 sudo[193688]: pam_unix(sudo:session): session closed for user root
Jan 20 14:14:03 compute-1 ceph-mon[81775]: pgmap v667: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:14:04 compute-1 sudo[193881]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zcicrzntcqbzpcstvurleofwitxnyvjq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918443.7872696-3168-153742842711902/AnsiballZ_systemd.py'
Jan 20 14:14:04 compute-1 sudo[193881]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:14:04 compute-1 python3.9[193883]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtnodedevd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 20 14:14:04 compute-1 systemd[1]: Reloading.
Jan 20 14:14:04 compute-1 systemd-rc-local-generator[193910]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 14:14:04 compute-1 systemd-sysv-generator[193913]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 14:14:04 compute-1 systemd[1]: Starting libvirt nodedev daemon socket...
Jan 20 14:14:04 compute-1 systemd[1]: Listening on libvirt nodedev daemon socket.
Jan 20 14:14:04 compute-1 systemd[1]: Starting libvirt nodedev daemon admin socket...
Jan 20 14:14:04 compute-1 systemd[1]: Starting libvirt nodedev daemon read-only socket...
Jan 20 14:14:04 compute-1 systemd[1]: Listening on libvirt nodedev daemon read-only socket.
Jan 20 14:14:04 compute-1 systemd[1]: Listening on libvirt nodedev daemon admin socket.
Jan 20 14:14:04 compute-1 systemd[1]: Starting libvirt nodedev daemon...
Jan 20 14:14:04 compute-1 systemd[1]: Started libvirt nodedev daemon.
Jan 20 14:14:04 compute-1 sudo[193881]: pam_unix(sudo:session): session closed for user root
Jan 20 14:14:05 compute-1 systemd[1]: Starting SETroubleshoot daemon for processing new SELinux denial logs...
Jan 20 14:14:05 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:14:05 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:14:05 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:14:05.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:14:05 compute-1 sudo[194098]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jafiofojhitdrgrfylcoxjdoimfborjt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918444.9663033-3168-7013273204371/AnsiballZ_systemd.py'
Jan 20 14:14:05 compute-1 sudo[194098]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:14:05 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:14:05 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:14:05 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:14:05.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:14:05 compute-1 systemd[1]: Started SETroubleshoot daemon for processing new SELinux denial logs.
Jan 20 14:14:05 compute-1 systemd[1]: Created slice Slice /system/dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged.
Jan 20 14:14:05 compute-1 systemd[1]: Started dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service.
Jan 20 14:14:05 compute-1 python3.9[194100]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtproxyd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 20 14:14:05 compute-1 systemd[1]: Reloading.
Jan 20 14:14:05 compute-1 systemd-rc-local-generator[194133]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 14:14:05 compute-1 systemd-sysv-generator[194137]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 14:14:05 compute-1 systemd[1]: Starting libvirt proxy daemon admin socket...
Jan 20 14:14:05 compute-1 systemd[1]: Starting libvirt proxy daemon read-only socket...
Jan 20 14:14:05 compute-1 systemd[1]: Listening on libvirt proxy daemon admin socket.
Jan 20 14:14:05 compute-1 systemd[1]: Listening on libvirt proxy daemon read-only socket.
Jan 20 14:14:05 compute-1 systemd[1]: Starting libvirt proxy daemon...
Jan 20 14:14:05 compute-1 systemd[1]: Started libvirt proxy daemon.
Jan 20 14:14:06 compute-1 sudo[194098]: pam_unix(sudo:session): session closed for user root
Jan 20 14:14:06 compute-1 setroubleshoot[194024]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l a9a0f37e-5c50-4812-95fd-ebd8f6c1134a
Jan 20 14:14:06 compute-1 setroubleshoot[194024]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.
                                                  
                                                  *****  Plugin dac_override (91.4 confidence) suggests   **********************
                                                  
                                                  If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system
                                                  Then turn on full auditing to get path information about the offending file and generate the error again.
                                                  Do
                                                  
                                                  Turn on full auditing
                                                  # auditctl -w /etc/shadow -p w
                                                  Try to recreate AVC. Then execute
                                                  # ausearch -m avc -ts recent
                                                  If you see PATH record check ownership/permissions on file, and fix it,
                                                  otherwise report as a bugzilla.
                                                  
                                                  *****  Plugin catchall (9.59 confidence) suggests   **************************
                                                  
                                                  If you believe that virtlogd should have the dac_read_search capability by default.
                                                  Then you should report this as a bug.
                                                  You can generate a local policy module to allow this access.
                                                  Do
                                                  allow this access for now by executing:
                                                  # ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd
                                                  # semodule -X 300 -i my-virtlogd.pp
                                                  
Jan 20 14:14:06 compute-1 ceph-mon[81775]: pgmap v668: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:14:06 compute-1 sudo[194319]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qesncnabfdgyaflgeuqoxoebnggtzryd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918446.2256358-3168-247373524293095/AnsiballZ_systemd.py'
Jan 20 14:14:06 compute-1 sudo[194319]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:14:06 compute-1 python3.9[194321]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtqemud.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 20 14:14:06 compute-1 systemd[1]: Reloading.
Jan 20 14:14:06 compute-1 systemd-rc-local-generator[194348]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 14:14:06 compute-1 systemd-sysv-generator[194354]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 14:14:07 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:14:07 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:14:07 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:14:07 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:14:07.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:14:07 compute-1 systemd[1]: Listening on libvirt locking daemon socket.
Jan 20 14:14:07 compute-1 systemd[1]: Starting libvirt QEMU daemon socket...
Jan 20 14:14:07 compute-1 systemd[1]: Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 20 14:14:07 compute-1 systemd[1]: Starting Virtual Machine and Container Registration Service...
Jan 20 14:14:07 compute-1 systemd[1]: Listening on libvirt QEMU daemon socket.
Jan 20 14:14:07 compute-1 systemd[1]: Starting libvirt QEMU daemon admin socket...
Jan 20 14:14:07 compute-1 systemd[1]: Starting libvirt QEMU daemon read-only socket...
Jan 20 14:14:07 compute-1 systemd[1]: Listening on libvirt QEMU daemon admin socket.
Jan 20 14:14:07 compute-1 systemd[1]: Listening on libvirt QEMU daemon read-only socket.
Jan 20 14:14:07 compute-1 systemd[1]: Started Virtual Machine and Container Registration Service.
Jan 20 14:14:07 compute-1 systemd[1]: Starting libvirt QEMU daemon...
Jan 20 14:14:07 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:14:07 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:14:07 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:14:07.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:14:07 compute-1 systemd[1]: Started libvirt QEMU daemon.
Jan 20 14:14:07 compute-1 sudo[194319]: pam_unix(sudo:session): session closed for user root
Jan 20 14:14:07 compute-1 ceph-mon[81775]: pgmap v669: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:14:07 compute-1 sudo[194535]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qfomaauolhbdipfpcmkviqmiabcxofnm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918447.5212028-3168-150178029932264/AnsiballZ_systemd.py'
Jan 20 14:14:07 compute-1 sudo[194535]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:14:08 compute-1 python3.9[194537]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtsecretd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 20 14:14:08 compute-1 systemd[1]: Reloading.
Jan 20 14:14:08 compute-1 systemd-rc-local-generator[194582]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 14:14:08 compute-1 systemd-sysv-generator[194586]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 14:14:08 compute-1 podman[194539]: 2026-01-20 14:14:08.377145995 +0000 UTC m=+0.156854266 container health_status 72be95c12e041eb2cd1a16ad70dfa1391d9f03a5f3180a614ead9459dcfcd46f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 20 14:14:08 compute-1 systemd[1]: Starting libvirt secret daemon socket...
Jan 20 14:14:08 compute-1 systemd[1]: Listening on libvirt secret daemon socket.
Jan 20 14:14:08 compute-1 systemd[1]: Starting libvirt secret daemon admin socket...
Jan 20 14:14:08 compute-1 systemd[1]: Starting libvirt secret daemon read-only socket...
Jan 20 14:14:08 compute-1 systemd[1]: Listening on libvirt secret daemon admin socket.
Jan 20 14:14:08 compute-1 systemd[1]: Listening on libvirt secret daemon read-only socket.
Jan 20 14:14:08 compute-1 systemd[1]: Starting libvirt secret daemon...
Jan 20 14:14:08 compute-1 systemd[1]: Started libvirt secret daemon.
Jan 20 14:14:08 compute-1 sudo[194535]: pam_unix(sudo:session): session closed for user root
Jan 20 14:14:09 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:14:09 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:14:09 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:14:09.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:14:09 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:14:09 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:14:09 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:14:09.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:14:09 compute-1 sudo[194774]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tnrbypjasmgmdnudioqxxcbyqkgozcff ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918449.2742019-3279-143295828627276/AnsiballZ_file.py'
Jan 20 14:14:09 compute-1 sudo[194774]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:14:09 compute-1 ceph-mon[81775]: pgmap v670: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:14:09 compute-1 python3.9[194776]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:14:09 compute-1 sudo[194774]: pam_unix(sudo:session): session closed for user root
Jan 20 14:14:10 compute-1 sudo[194926]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ytggyqjdjgeoldshxvyttjoqhslvysvv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918450.0875876-3303-53296320533178/AnsiballZ_find.py'
Jan 20 14:14:10 compute-1 sudo[194926]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:14:10 compute-1 python3.9[194928]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.conf'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 20 14:14:10 compute-1 sudo[194926]: pam_unix(sudo:session): session closed for user root
Jan 20 14:14:10 compute-1 sudo[194953]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:14:10 compute-1 sudo[194953]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:14:10 compute-1 sudo[194953]: pam_unix(sudo:session): session closed for user root
Jan 20 14:14:10 compute-1 sudo[194978]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:14:10 compute-1 sudo[194978]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:14:10 compute-1 sudo[194978]: pam_unix(sudo:session): session closed for user root
Jan 20 14:14:11 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:14:11 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:14:11 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:14:11.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:14:11 compute-1 sudo[195128]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ltqbijukxehxntewivqolywrqvncvruq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918450.924719-3327-26399246190107/AnsiballZ_command.py'
Jan 20 14:14:11 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:14:11 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:14:11 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:14:11.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:14:11 compute-1 sudo[195128]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:14:11 compute-1 python3.9[195130]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail;
                                             echo ceph
                                             awk -F '=' '/fsid/ {print $2}' /var/lib/openstack/config/ceph/ceph.conf | xargs
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 14:14:11 compute-1 sudo[195128]: pam_unix(sudo:session): session closed for user root
Jan 20 14:14:12 compute-1 ceph-mon[81775]: pgmap v671: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:14:12 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:14:12 compute-1 python3.9[195285]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.keyring'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 20 14:14:13 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:14:13 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:14:13 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:14:13.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:14:13 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:14:13 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:14:13 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:14:13.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:14:13 compute-1 python3.9[195435]: ansible-ansible.legacy.stat Invoked with path=/tmp/secret.xml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:14:13 compute-1 ceph-mon[81775]: pgmap v672: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:14:13 compute-1 python3.9[195557]: ansible-ansible.legacy.copy Invoked with dest=/tmp/secret.xml mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1768918452.8055089-3384-70850878791469/.source.xml follow=False _original_basename=secret.xml.j2 checksum=35bbbade4f0995b3fba698d107c82491080dc0dd backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:14:14 compute-1 sudo[195707]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wozfwteqarvpzxgrzydjhsrgjvyxrxbc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918454.2528076-3429-10747497489566/AnsiballZ_command.py'
Jan 20 14:14:14 compute-1 sudo[195707]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:14:14 compute-1 python3.9[195709]: ansible-ansible.legacy.command Invoked with _raw_params=virsh secret-undefine e399cf45-e6b6-5393-99f1-75c601d3f188
                                             virsh secret-define --file /tmp/secret.xml
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 14:14:14 compute-1 polkitd[43590]: Registered Authentication Agent for unix-process:195711:349380 (system bus name :1.1900 [pkttyagent --process 195711 --notify-fd 5 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Jan 20 14:14:14 compute-1 polkitd[43590]: Unregistered Authentication Agent for unix-process:195711:349380 (system bus name :1.1900, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Jan 20 14:14:14 compute-1 polkitd[43590]: Registered Authentication Agent for unix-process:195710:349379 (system bus name :1.1901 [pkttyagent --process 195710 --notify-fd 4 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Jan 20 14:14:14 compute-1 polkitd[43590]: Unregistered Authentication Agent for unix-process:195710:349379 (system bus name :1.1901, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Jan 20 14:14:14 compute-1 sudo[195707]: pam_unix(sudo:session): session closed for user root
Jan 20 14:14:15 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:14:15 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:14:15 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:14:15.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:14:15 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:14:15 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:14:15 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:14:15.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:14:15 compute-1 python3.9[195872]: ansible-ansible.builtin.file Invoked with path=/tmp/secret.xml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:14:16 compute-1 ceph-mon[81775]: pgmap v673: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:14:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:14:16.374 140354 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:14:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:14:16.375 140354 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:14:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:14:16.375 140354 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:14:16 compute-1 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Deactivated successfully.
Jan 20 14:14:16 compute-1 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Consumed 1.006s CPU time.
Jan 20 14:14:16 compute-1 systemd[1]: setroubleshootd.service: Deactivated successfully.
Jan 20 14:14:16 compute-1 sudo[196022]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-izjqxoliwsaviimbyromrmviwqamcatl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918456.2977936-3477-94737170010908/AnsiballZ_command.py'
Jan 20 14:14:16 compute-1 sudo[196022]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:14:16 compute-1 sudo[196022]: pam_unix(sudo:session): session closed for user root
Jan 20 14:14:17 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:14:17 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:14:17 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:14:17 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:14:17.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:14:17 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:14:17 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:14:17 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:14:17.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:14:17 compute-1 ceph-mon[81775]: pgmap v674: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:14:17 compute-1 sudo[196176]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cgkxtrhpgiacdofamhuptldnfmmwvtar ; FSID=e399cf45-e6b6-5393-99f1-75c601d3f188 KEY=AQAciW9pAAAAABAAwJYC9p1PAwdI6pFMhbpXIA== /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918457.1498542-3501-220110981977468/AnsiballZ_command.py'
Jan 20 14:14:17 compute-1 sudo[196176]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:14:17 compute-1 polkitd[43590]: Registered Authentication Agent for unix-process:196179:349672 (system bus name :1.1904 [pkttyagent --process 196179 --notify-fd 4 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Jan 20 14:14:17 compute-1 polkitd[43590]: Unregistered Authentication Agent for unix-process:196179:349672 (system bus name :1.1904, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Jan 20 14:14:17 compute-1 sudo[196176]: pam_unix(sudo:session): session closed for user root
Jan 20 14:14:18 compute-1 sudo[196334]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-grbssmvhjyuozlrrayxfihoiohgrazla ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918458.1132872-3525-25590290521842/AnsiballZ_copy.py'
Jan 20 14:14:18 compute-1 sudo[196334]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:14:18 compute-1 python3.9[196336]: ansible-ansible.legacy.copy Invoked with dest=/etc/ceph/ceph.conf group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/config/ceph/ceph.conf backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:14:18 compute-1 sudo[196334]: pam_unix(sudo:session): session closed for user root
Jan 20 14:14:19 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:14:19 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:14:19 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:14:19.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:14:19 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:14:19 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:14:19 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:14:19.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:14:19 compute-1 sudo[196487]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-odeljgvdctgejqbmaqrbvllmkbsfgddb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918459.1811197-3549-19146597404779/AnsiballZ_stat.py'
Jan 20 14:14:19 compute-1 sudo[196487]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:14:19 compute-1 python3.9[196489]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/libvirt.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:14:19 compute-1 sudo[196487]: pam_unix(sudo:session): session closed for user root
Jan 20 14:14:20 compute-1 ceph-mon[81775]: pgmap v675: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:14:20 compute-1 sudo[196627]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sscxdvzttoieuxtroxqgfdnhmtfnjmye ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918459.1811197-3549-19146597404779/AnsiballZ_copy.py'
Jan 20 14:14:20 compute-1 sudo[196627]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:14:20 compute-1 podman[196584]: 2026-01-20 14:14:20.244242057 +0000 UTC m=+0.089431921 container health_status 533cd8e0c3b8b7f910f15cef49e7f879ab46c12ce646a35628c15f13725eb500 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 20 14:14:20 compute-1 python3.9[196631]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/libvirt.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1768918459.1811197-3549-19146597404779/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=5ca83b1310a74c5e48c4c3d4640e1cb8fdac1061 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:14:20 compute-1 sudo[196627]: pam_unix(sudo:session): session closed for user root
Jan 20 14:14:21 compute-1 sudo[196781]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ywqcjtuglsigyaimnxvahqpvkgwhdegl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918460.7625303-3597-231151825997258/AnsiballZ_file.py'
Jan 20 14:14:21 compute-1 sudo[196781]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:14:21 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:14:21 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:14:21 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:14:21.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:14:21 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:14:21 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:14:21 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:14:21.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:14:21 compute-1 ceph-mon[81775]: pgmap v676: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:14:21 compute-1 python3.9[196783]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:14:21 compute-1 sudo[196781]: pam_unix(sudo:session): session closed for user root
Jan 20 14:14:22 compute-1 sudo[196934]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qbxrwbimdcrmwwqcukwbosoogmdooldv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918461.7228353-3621-157931487813500/AnsiballZ_stat.py'
Jan 20 14:14:22 compute-1 sudo[196934]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:14:22 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:14:22 compute-1 python3.9[196936]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:14:22 compute-1 sudo[196934]: pam_unix(sudo:session): session closed for user root
Jan 20 14:14:22 compute-1 sudo[197012]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lmatsnvnfvxorywbkrjvrqdglvmrzhtj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918461.7228353-3621-157931487813500/AnsiballZ_file.py'
Jan 20 14:14:22 compute-1 sudo[197012]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:14:22 compute-1 python3.9[197014]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:14:22 compute-1 sudo[197012]: pam_unix(sudo:session): session closed for user root
Jan 20 14:14:23 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:14:23 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:14:23 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:14:23.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:14:23 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:14:23 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:14:23 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:14:23.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:14:23 compute-1 sudo[197164]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-frondrzuldmotvdshpranucvjugkzerf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918463.1023724-3657-186201119313784/AnsiballZ_stat.py'
Jan 20 14:14:23 compute-1 sudo[197164]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:14:23 compute-1 python3.9[197166]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:14:23 compute-1 sudo[197164]: pam_unix(sudo:session): session closed for user root
Jan 20 14:14:23 compute-1 ceph-mon[81775]: pgmap v677: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:14:23 compute-1 sudo[197243]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-drgdhncrrflffbwzmgirmjqmxttttpoo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918463.1023724-3657-186201119313784/AnsiballZ_file.py'
Jan 20 14:14:23 compute-1 sudo[197243]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:14:24 compute-1 python3.9[197245]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.a4sskzt7 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:14:24 compute-1 sudo[197243]: pam_unix(sudo:session): session closed for user root
Jan 20 14:14:24 compute-1 sudo[197395]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kruqouwagifywzysvgpcuuxbexqgsjzm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918464.500493-3694-154541056621179/AnsiballZ_stat.py'
Jan 20 14:14:24 compute-1 sudo[197395]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:14:25 compute-1 python3.9[197397]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:14:25 compute-1 sudo[197395]: pam_unix(sudo:session): session closed for user root
Jan 20 14:14:25 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:14:25 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:14:25 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:14:25.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:14:25 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:14:25 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:14:25 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:14:25.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:14:25 compute-1 sudo[197473]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yxqooeuzmdgymeeipxifhtgarmoaoxch ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918464.500493-3694-154541056621179/AnsiballZ_file.py'
Jan 20 14:14:25 compute-1 sudo[197473]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:14:25 compute-1 python3.9[197475]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:14:25 compute-1 sudo[197473]: pam_unix(sudo:session): session closed for user root
Jan 20 14:14:25 compute-1 ceph-mon[81775]: pgmap v678: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:14:26 compute-1 sudo[197626]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dgnjjrcddfzanwwgrrzdlqufklqqmywr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918465.967999-3733-123533169930363/AnsiballZ_command.py'
Jan 20 14:14:26 compute-1 sudo[197626]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:14:26 compute-1 python3.9[197628]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 14:14:26 compute-1 sudo[197626]: pam_unix(sudo:session): session closed for user root
Jan 20 14:14:27 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:14:27 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:14:27 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:14:27 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:14:27.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:14:27 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:14:27 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:14:27 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:14:27.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:14:27 compute-1 ceph-mon[81775]: pgmap v679: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:14:27 compute-1 sudo[197779]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vpmloifffbtklnkruewmoflprlcrfgyh ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1768918466.8629694-3756-163638222048510/AnsiballZ_edpm_nftables_from_files.py'
Jan 20 14:14:27 compute-1 sudo[197779]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:14:27 compute-1 python3[197781]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Jan 20 14:14:27 compute-1 sudo[197779]: pam_unix(sudo:session): session closed for user root
Jan 20 14:14:28 compute-1 sudo[197932]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-awjaxnrvktlgnqkgwtgjlngcjbllxatm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918467.9842322-3780-42260256243279/AnsiballZ_stat.py'
Jan 20 14:14:28 compute-1 sudo[197932]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:14:28 compute-1 python3.9[197934]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:14:28 compute-1 sudo[197932]: pam_unix(sudo:session): session closed for user root
Jan 20 14:14:28 compute-1 sudo[198010]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lokqenzbkhpyctfrmovognkeugsvhgqf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918467.9842322-3780-42260256243279/AnsiballZ_file.py'
Jan 20 14:14:28 compute-1 sudo[198010]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:14:29 compute-1 python3.9[198012]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:14:29 compute-1 sudo[198010]: pam_unix(sudo:session): session closed for user root
Jan 20 14:14:29 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:14:29 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:14:29 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:14:29.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:14:29 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:14:29 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:14:29 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:14:29.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:14:29 compute-1 sudo[198163]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mforpadfemtvkfnnikyyxahhwquqohzh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918469.4103603-3816-66047356827061/AnsiballZ_stat.py'
Jan 20 14:14:29 compute-1 sudo[198163]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:14:30 compute-1 ceph-mon[81775]: pgmap v680: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:14:30 compute-1 python3.9[198165]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:14:30 compute-1 sudo[198163]: pam_unix(sudo:session): session closed for user root
Jan 20 14:14:30 compute-1 sudo[198288]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-djncfygcbumanqsjehvydzvscmyvkfyz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918469.4103603-3816-66047356827061/AnsiballZ_copy.py'
Jan 20 14:14:30 compute-1 sudo[198288]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:14:30 compute-1 python3.9[198290]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768918469.4103603-3816-66047356827061/.source.nft follow=False _original_basename=jump-chain.j2 checksum=3ce353c89bce3b135a0ed688d4e338b2efb15185 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:14:30 compute-1 sudo[198288]: pam_unix(sudo:session): session closed for user root
Jan 20 14:14:30 compute-1 sudo[198315]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:14:30 compute-1 sudo[198315]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:14:30 compute-1 sudo[198315]: pam_unix(sudo:session): session closed for user root
Jan 20 14:14:31 compute-1 sudo[198363]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:14:31 compute-1 sudo[198363]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:14:31 compute-1 sudo[198363]: pam_unix(sudo:session): session closed for user root
Jan 20 14:14:31 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:14:31 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:14:31 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:14:31.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:14:31 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:14:31 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:14:31 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:14:31.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:14:31 compute-1 sudo[198490]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zsrtidsbdizujejdoqkfuflmfkoebeyh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918470.9930747-3861-151722395722531/AnsiballZ_stat.py'
Jan 20 14:14:31 compute-1 sudo[198490]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:14:31 compute-1 python3.9[198492]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:14:31 compute-1 sudo[198490]: pam_unix(sudo:session): session closed for user root
Jan 20 14:14:31 compute-1 sudo[198569]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tvsztmygffgccpfqccoxebqueygvtzve ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918470.9930747-3861-151722395722531/AnsiballZ_file.py'
Jan 20 14:14:31 compute-1 sudo[198569]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:14:32 compute-1 ceph-mon[81775]: pgmap v681: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:14:32 compute-1 python3.9[198571]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:14:32 compute-1 sudo[198569]: pam_unix(sudo:session): session closed for user root
Jan 20 14:14:32 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:14:32 compute-1 sudo[198721]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hpcbfiipimdabzqqikqxqcxmtnzeoabq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918472.44415-3897-6949419220771/AnsiballZ_stat.py'
Jan 20 14:14:32 compute-1 sudo[198721]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:14:32 compute-1 python3.9[198723]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:14:33 compute-1 sudo[198721]: pam_unix(sudo:session): session closed for user root
Jan 20 14:14:33 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:14:33 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:14:33 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:14:33.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:14:33 compute-1 sudo[198799]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lkvnbwgftdxoaddtkfzwiyfrtgzxgaxd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918472.44415-3897-6949419220771/AnsiballZ_file.py'
Jan 20 14:14:33 compute-1 sudo[198799]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:14:33 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:14:33 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:14:33 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:14:33.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:14:33 compute-1 python3.9[198801]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:14:33 compute-1 sudo[198799]: pam_unix(sudo:session): session closed for user root
Jan 20 14:14:34 compute-1 sudo[198952]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mfujflhzolodhsdtamlwraqplnejgeyj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918473.6725929-3933-125345158119954/AnsiballZ_stat.py'
Jan 20 14:14:34 compute-1 sudo[198952]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:14:34 compute-1 ceph-mon[81775]: pgmap v682: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:14:34 compute-1 python3.9[198954]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:14:34 compute-1 sudo[198952]: pam_unix(sudo:session): session closed for user root
Jan 20 14:14:34 compute-1 sudo[199077]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sfhgphzmnfuteoudjwmicezfwvrxqobf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918473.6725929-3933-125345158119954/AnsiballZ_copy.py'
Jan 20 14:14:34 compute-1 sudo[199077]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:14:34 compute-1 python3.9[199079]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768918473.6725929-3933-125345158119954/.source.nft follow=False _original_basename=ruleset.j2 checksum=ac3ce8ce2d33fa5fe0a79b0c811c97734ce43fa5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:14:34 compute-1 sudo[199077]: pam_unix(sudo:session): session closed for user root
Jan 20 14:14:35 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:14:35 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:14:35 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:14:35.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:14:35 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:14:35 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:14:35 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:14:35.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:14:35 compute-1 sudo[199230]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qsjaabbvefjibjneasbivehuqfyrjtqb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918475.3554513-3978-155075152200936/AnsiballZ_file.py'
Jan 20 14:14:35 compute-1 sudo[199230]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:14:35 compute-1 ceph-mon[81775]: pgmap v683: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:14:35 compute-1 python3.9[199232]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:14:35 compute-1 sudo[199230]: pam_unix(sudo:session): session closed for user root
Jan 20 14:14:36 compute-1 sudo[199382]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gykcbivxpjdwxszfhjooemmttmjikbci ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918476.1653047-4002-233132921899998/AnsiballZ_command.py'
Jan 20 14:14:36 compute-1 sudo[199382]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:14:36 compute-1 python3.9[199384]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 14:14:36 compute-1 sudo[199382]: pam_unix(sudo:session): session closed for user root
Jan 20 14:14:37 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:14:37 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:14:37 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:14:37 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:14:37.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:14:37 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:14:37 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:14:37 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:14:37.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:14:37 compute-1 sudo[199538]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gctzjoonfdvmwfxidcomsukuokmqnuiw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918477.0618067-4026-7943000549218/AnsiballZ_blockinfile.py'
Jan 20 14:14:37 compute-1 sudo[199538]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:14:37 compute-1 python3.9[199540]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:14:37 compute-1 sudo[199538]: pam_unix(sudo:session): session closed for user root
Jan 20 14:14:37 compute-1 ceph-mon[81775]: pgmap v684: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:14:38 compute-1 sudo[199700]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jduquvbtsocsiarbuicqrfaamaacjqkl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918478.2209036-4053-149849292275051/AnsiballZ_command.py'
Jan 20 14:14:38 compute-1 sudo[199700]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:14:38 compute-1 podman[199664]: 2026-01-20 14:14:38.772193579 +0000 UTC m=+0.193731143 container health_status 72be95c12e041eb2cd1a16ad70dfa1391d9f03a5f3180a614ead9459dcfcd46f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Jan 20 14:14:38 compute-1 python3.9[199705]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 14:14:38 compute-1 sudo[199700]: pam_unix(sudo:session): session closed for user root
Jan 20 14:14:39 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:14:39 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:14:39 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:14:39.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:14:39 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:14:39 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:14:39 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:14:39.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:14:39 compute-1 sudo[199870]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mwsdxqbrithutqkmeotjjoezhwudgbjm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918479.1623821-4077-104768614118582/AnsiballZ_stat.py'
Jan 20 14:14:39 compute-1 sudo[199870]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:14:39 compute-1 python3.9[199872]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 20 14:14:39 compute-1 sudo[199870]: pam_unix(sudo:session): session closed for user root
Jan 20 14:14:39 compute-1 ceph-mon[81775]: pgmap v685: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:14:40 compute-1 sudo[200024]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yrvunkwzzngywmpzrqufuggxxndvkwgv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918480.1471767-4101-1499689893282/AnsiballZ_command.py'
Jan 20 14:14:40 compute-1 sudo[200024]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:14:40 compute-1 python3.9[200026]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 14:14:40 compute-1 sudo[200024]: pam_unix(sudo:session): session closed for user root
Jan 20 14:14:41 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:14:41 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:14:41 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:14:41.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:14:41 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:14:41 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:14:41 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:14:41.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:14:41 compute-1 sudo[200179]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sbfrlyrhqjgrwfrynsrtdatzufdgagaz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918481.0507927-4126-273238320690343/AnsiballZ_file.py'
Jan 20 14:14:41 compute-1 sudo[200179]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:14:41 compute-1 python3.9[200181]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:14:41 compute-1 sudo[200179]: pam_unix(sudo:session): session closed for user root
Jan 20 14:14:42 compute-1 ceph-mon[81775]: pgmap v686: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:14:42 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:14:42 compute-1 sudo[200332]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bbmftinrmjrkajzewcurdthwxqzxwsyg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918481.9260406-4149-141649947844015/AnsiballZ_stat.py'
Jan 20 14:14:42 compute-1 sudo[200332]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:14:42 compute-1 python3.9[200334]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:14:42 compute-1 sudo[200332]: pam_unix(sudo:session): session closed for user root
Jan 20 14:14:42 compute-1 sudo[200455]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dfkunqjhwofdsdfhxphnonspzirdjufl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918481.9260406-4149-141649947844015/AnsiballZ_copy.py'
Jan 20 14:14:42 compute-1 sudo[200455]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:14:43 compute-1 python3.9[200457]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1768918481.9260406-4149-141649947844015/.source.target follow=False _original_basename=edpm_libvirt.target checksum=13035a1aa0f414c677b14be9a5a363b6623d393c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:14:43 compute-1 sudo[200455]: pam_unix(sudo:session): session closed for user root
Jan 20 14:14:43 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:14:43 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:14:43 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:14:43.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:14:43 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:14:43 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:14:43 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:14:43.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:14:43 compute-1 sudo[200608]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xdbmjalkkxkxtfkzsngfseyphncqgluu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918483.4803226-4194-253603264495037/AnsiballZ_stat.py'
Jan 20 14:14:43 compute-1 sudo[200608]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:14:43 compute-1 python3.9[200610]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt_guests.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:14:43 compute-1 sudo[200608]: pam_unix(sudo:session): session closed for user root
Jan 20 14:14:44 compute-1 ceph-mon[81775]: pgmap v687: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:14:44 compute-1 sudo[200731]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oyvnumippblxnsyyjvmfncmswgtzjrhk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918483.4803226-4194-253603264495037/AnsiballZ_copy.py'
Jan 20 14:14:44 compute-1 sudo[200731]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:14:44 compute-1 python3.9[200733]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt_guests.service mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1768918483.4803226-4194-253603264495037/.source.service follow=False _original_basename=edpm_libvirt_guests.service checksum=db83430a42fc2ccfd6ed8b56ebf04f3dff9cd0cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:14:44 compute-1 sudo[200731]: pam_unix(sudo:session): session closed for user root
Jan 20 14:14:45 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:14:45 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:14:45 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:14:45.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:14:45 compute-1 sudo[200883]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ehosytnswrceobusnjythaoeabmmmevv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918484.9260616-4239-101700953845709/AnsiballZ_stat.py'
Jan 20 14:14:45 compute-1 sudo[200883]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:14:45 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:14:45 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:14:45 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:14:45.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:14:45 compute-1 ceph-mon[81775]: pgmap v688: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:14:45 compute-1 python3.9[200885]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virt-guest-shutdown.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:14:45 compute-1 sudo[200883]: pam_unix(sudo:session): session closed for user root
Jan 20 14:14:46 compute-1 sudo[201007]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dxihojzorjkankvjojxohvknhpadmuoq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918484.9260616-4239-101700953845709/AnsiballZ_copy.py'
Jan 20 14:14:46 compute-1 sudo[201007]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:14:46 compute-1 python3.9[201009]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virt-guest-shutdown.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1768918484.9260616-4239-101700953845709/.source.target follow=False _original_basename=virt-guest-shutdown.target checksum=49ca149619c596cbba877418629d2cf8f7b0f5cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:14:46 compute-1 sudo[201007]: pam_unix(sudo:session): session closed for user root
Jan 20 14:14:47 compute-1 sudo[201159]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ieasnahnooxgvzssiwwmgcmceqrqlneu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918486.6618528-4284-219906945238489/AnsiballZ_systemd.py'
Jan 20 14:14:47 compute-1 sudo[201159]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:14:47 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:14:47 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:14:47 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:14:47 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:14:47.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:14:47 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:14:47 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:14:47 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:14:47.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:14:47 compute-1 python3.9[201161]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt.target state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 20 14:14:47 compute-1 systemd[1]: Reloading.
Jan 20 14:14:47 compute-1 systemd-rc-local-generator[201189]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 14:14:47 compute-1 systemd-sysv-generator[201192]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 14:14:47 compute-1 systemd[1]: Reached target edpm_libvirt.target.
Jan 20 14:14:47 compute-1 sudo[201159]: pam_unix(sudo:session): session closed for user root
Jan 20 14:14:47 compute-1 ceph-mon[81775]: pgmap v689: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:14:48 compute-1 sudo[201351]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sigsetrhngtxojohakscpcgujxigshlh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918488.092098-4308-164171852097238/AnsiballZ_systemd.py'
Jan 20 14:14:48 compute-1 sudo[201351]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:14:48 compute-1 python3.9[201353]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt_guests daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Jan 20 14:14:48 compute-1 systemd[1]: Reloading.
Jan 20 14:14:48 compute-1 systemd-sysv-generator[201386]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 14:14:48 compute-1 systemd-rc-local-generator[201382]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 14:14:49 compute-1 systemd[1]: Reloading.
Jan 20 14:14:49 compute-1 systemd-rc-local-generator[201417]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 14:14:49 compute-1 systemd-sysv-generator[201420]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 14:14:49 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:14:49 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:14:49 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:14:49.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:14:49 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:14:49 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:14:49 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:14:49.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:14:49 compute-1 sudo[201351]: pam_unix(sudo:session): session closed for user root
Jan 20 14:14:50 compute-1 sshd-session[142291]: Connection closed by 192.168.122.30 port 49426
Jan 20 14:14:50 compute-1 sshd-session[142288]: pam_unix(sshd:session): session closed for user zuul
Jan 20 14:14:50 compute-1 systemd[1]: session-48.scope: Deactivated successfully.
Jan 20 14:14:50 compute-1 systemd[1]: session-48.scope: Consumed 4min 768ms CPU time.
Jan 20 14:14:50 compute-1 systemd-logind[783]: Session 48 logged out. Waiting for processes to exit.
Jan 20 14:14:50 compute-1 systemd-logind[783]: Removed session 48.
Jan 20 14:14:50 compute-1 ceph-mon[81775]: pgmap v690: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:14:51 compute-1 podman[201450]: 2026-01-20 14:14:51.05431591 +0000 UTC m=+0.089471283 container health_status 533cd8e0c3b8b7f910f15cef49e7f879ab46c12ce646a35628c15f13725eb500 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 20 14:14:51 compute-1 sudo[201469]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:14:51 compute-1 sudo[201469]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:14:51 compute-1 sudo[201469]: pam_unix(sudo:session): session closed for user root
Jan 20 14:14:51 compute-1 sudo[201494]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:14:51 compute-1 sudo[201494]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:14:51 compute-1 sudo[201494]: pam_unix(sudo:session): session closed for user root
Jan 20 14:14:51 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:14:51 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:14:51 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:14:51.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:14:51 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:14:51 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:14:51 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:14:51.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:14:51 compute-1 ceph-mon[81775]: pgmap v691: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:14:52 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:14:53 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:14:53 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:14:53 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:14:53.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:14:53 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:14:53 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:14:53 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:14:53.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:14:54 compute-1 ceph-mon[81775]: pgmap v692: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:14:54 compute-1 sudo[201521]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:14:54 compute-1 sudo[201521]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:14:54 compute-1 sudo[201521]: pam_unix(sudo:session): session closed for user root
Jan 20 14:14:54 compute-1 sudo[201546]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:14:54 compute-1 sudo[201546]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:14:54 compute-1 sudo[201546]: pam_unix(sudo:session): session closed for user root
Jan 20 14:14:54 compute-1 sudo[201571]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:14:54 compute-1 sudo[201571]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:14:54 compute-1 sudo[201571]: pam_unix(sudo:session): session closed for user root
Jan 20 14:14:55 compute-1 sudo[201596]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 20 14:14:55 compute-1 sudo[201596]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:14:55 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:14:55 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:14:55 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:14:55.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:14:55 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:14:55 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:14:55 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:14:55.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:14:55 compute-1 sshd-session[201638]: Accepted publickey for zuul from 192.168.122.30 port 47464 ssh2: ECDSA SHA256:Yw0kyD5N4lqNgr1J3b5cYIIxKFrTRY8zW6kk+n6imz4
Jan 20 14:14:55 compute-1 systemd-logind[783]: New session 49 of user zuul.
Jan 20 14:14:55 compute-1 systemd[1]: Started Session 49 of User zuul.
Jan 20 14:14:55 compute-1 sshd-session[201638]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 20 14:14:55 compute-1 sudo[201596]: pam_unix(sudo:session): session closed for user root
Jan 20 14:14:55 compute-1 ceph-mon[81775]: pgmap v693: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:14:56 compute-1 python3.9[201808]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 14:14:56 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:14:56 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 14:14:56 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:14:56 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 14:14:56 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 14:14:56 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:14:57 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:14:57 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:14:57 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:14:57 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:14:57.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:14:57 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:14:57 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:14:57 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:14:57.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:14:57 compute-1 ceph-mon[81775]: pgmap v694: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:14:58 compute-1 python3.9[201963]: ansible-ansible.builtin.service_facts Invoked
Jan 20 14:14:58 compute-1 network[201980]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 20 14:14:58 compute-1 network[201981]: 'network-scripts' will be removed from distribution in near future.
Jan 20 14:14:58 compute-1 network[201982]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 20 14:14:59 compute-1 ceph-mon[81775]: pgmap v695: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:14:59 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:14:59 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:14:59 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:14:59.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:14:59 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:14:59 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:14:59 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:14:59.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:15:01 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:15:01 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:15:01 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:15:01.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:15:01 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:15:01 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:15:01 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:15:01.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:15:02 compute-1 ceph-mon[81775]: pgmap v696: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:15:02 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:15:03 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:15:03 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:15:03 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:15:03.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:15:03 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:15:03 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:15:03 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:15:03.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:15:04 compute-1 ceph-mon[81775]: pgmap v697: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:15:04 compute-1 sudo[202255]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vdzynrpxwserboqsvepuwyezbevgurjs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918504.2157102-102-110030693837001/AnsiballZ_setup.py'
Jan 20 14:15:04 compute-1 sudo[202255]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:15:04 compute-1 python3.9[202257]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 20 14:15:05 compute-1 sudo[202255]: pam_unix(sudo:session): session closed for user root
Jan 20 14:15:05 compute-1 sudo[202266]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:15:05 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:15:05 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:15:05 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:15:05.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:15:05 compute-1 sudo[202266]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:15:05 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:15:05 compute-1 sudo[202266]: pam_unix(sudo:session): session closed for user root
Jan 20 14:15:05 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:15:05 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:15:05.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:15:05 compute-1 sudo[202291]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 14:15:05 compute-1 sudo[202291]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:15:05 compute-1 sudo[202291]: pam_unix(sudo:session): session closed for user root
Jan 20 14:15:05 compute-1 sudo[202390]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-efxwphzxlbbvukslhflqxkzyzkzmqecx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918504.2157102-102-110030693837001/AnsiballZ_dnf.py'
Jan 20 14:15:05 compute-1 sudo[202390]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:15:05 compute-1 python3.9[202392]: ansible-ansible.legacy.dnf Invoked with name=['iscsi-initiator-utils'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 20 14:15:06 compute-1 ceph-mon[81775]: pgmap v698: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:15:06 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:15:06 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:15:07 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:15:07 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:15:07 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:15:07 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:15:07.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:15:07 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:15:07 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:15:07 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:15:07.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:15:08 compute-1 ceph-mon[81775]: pgmap v699: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:15:09 compute-1 podman[202395]: 2026-01-20 14:15:09.112800887 +0000 UTC m=+0.147316295 container health_status 72be95c12e041eb2cd1a16ad70dfa1391d9f03a5f3180a614ead9459dcfcd46f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller)
Jan 20 14:15:09 compute-1 ceph-mon[81775]: pgmap v700: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:15:09 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:15:09 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:15:09 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:15:09.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:15:09 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:15:09 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:15:09 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:15:09.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:15:10 compute-1 sudo[202390]: pam_unix(sudo:session): session closed for user root
Jan 20 14:15:11 compute-1 sudo[202467]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:15:11 compute-1 sudo[202467]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:15:11 compute-1 sudo[202467]: pam_unix(sudo:session): session closed for user root
Jan 20 14:15:11 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:15:11 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:15:11 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:15:11.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:15:11 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:15:11 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:15:11 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:15:11.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:15:11 compute-1 sudo[202518]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:15:11 compute-1 sudo[202518]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:15:11 compute-1 sudo[202518]: pam_unix(sudo:session): session closed for user root
Jan 20 14:15:11 compute-1 sudo[202623]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zskaqvzzqvwvmpdmmerngsouuokspwfb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918511.2892048-138-122349258308803/AnsiballZ_stat.py'
Jan 20 14:15:11 compute-1 sudo[202623]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:15:12 compute-1 python3.9[202625]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated/iscsid/etc/iscsi follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 20 14:15:12 compute-1 sudo[202623]: pam_unix(sudo:session): session closed for user root
Jan 20 14:15:12 compute-1 ceph-mon[81775]: pgmap v701: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:15:12 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:15:13 compute-1 sudo[202775]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tqwofseawukccdrqgzidxeerbtwpwvqf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918512.5484056-168-132055540303141/AnsiballZ_command.py'
Jan 20 14:15:13 compute-1 sudo[202775]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:15:13 compute-1 python3.9[202777]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/iscsi /var/lib/iscsi _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 14:15:13 compute-1 sudo[202775]: pam_unix(sudo:session): session closed for user root
Jan 20 14:15:13 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:15:13 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:15:13 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:15:13.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:15:13 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:15:13 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:15:13 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:15:13.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:15:14 compute-1 ceph-mon[81775]: pgmap v702: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:15:14 compute-1 sudo[202929]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-egahsljzaabjrnuixwarjvlkluvsxqur ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918513.790531-198-63775533812384/AnsiballZ_stat.py'
Jan 20 14:15:14 compute-1 sudo[202929]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:15:14 compute-1 python3.9[202931]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.initiator_reset follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 20 14:15:14 compute-1 sudo[202929]: pam_unix(sudo:session): session closed for user root
Jan 20 14:15:15 compute-1 ceph-mon[81775]: pgmap v703: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:15:15 compute-1 sudo[203081]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pdcagojspfzeeshzagwcavkhxevabqeb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918514.9443219-222-18907576223535/AnsiballZ_command.py'
Jan 20 14:15:15 compute-1 sudo[203081]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:15:15 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:15:15 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:15:15 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:15:15.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:15:15 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:15:15 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:15:15 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:15:15.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:15:15 compute-1 python3.9[203083]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/iscsi-iname _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 14:15:15 compute-1 sudo[203081]: pam_unix(sudo:session): session closed for user root
Jan 20 14:15:16 compute-1 sudo[203235]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-unbfwuvvhbbuejreafpwynlglthownqh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918515.8805401-246-74879687429696/AnsiballZ_stat.py'
Jan 20 14:15:16 compute-1 sudo[203235]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:15:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:15:16.375 140354 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:15:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:15:16.376 140354 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:15:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:15:16.376 140354 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:15:16 compute-1 python3.9[203237]: ansible-ansible.legacy.stat Invoked with path=/etc/iscsi/initiatorname.iscsi follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:15:16 compute-1 sudo[203235]: pam_unix(sudo:session): session closed for user root
Jan 20 14:15:17 compute-1 sudo[203358]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xpwughsbqeedokpxqfanvcpqhaumylxr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918515.8805401-246-74879687429696/AnsiballZ_copy.py'
Jan 20 14:15:17 compute-1 sudo[203358]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:15:17 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:15:17 compute-1 python3.9[203360]: ansible-ansible.legacy.copy Invoked with dest=/etc/iscsi/initiatorname.iscsi mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1768918515.8805401-246-74879687429696/.source.iscsi _original_basename=.8m6cec_c follow=False checksum=bbab9a6763471a42af22f3fd2e64e0a859c979e8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:15:17 compute-1 sudo[203358]: pam_unix(sudo:session): session closed for user root
Jan 20 14:15:17 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:15:17 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:15:17 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:15:17.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:15:17 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:15:17 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:15:17 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:15:17.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:15:17 compute-1 ceph-mon[81775]: pgmap v704: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:15:18 compute-1 sudo[203511]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xvyylqqxxvkqnqlnfbvjxbkvbqdnsatq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918517.5644577-291-271198270928573/AnsiballZ_file.py'
Jan 20 14:15:18 compute-1 sudo[203511]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:15:18 compute-1 python3.9[203513]: ansible-ansible.builtin.file Invoked with mode=0600 path=/etc/iscsi/.initiator_reset state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:15:18 compute-1 sudo[203511]: pam_unix(sudo:session): session closed for user root
Jan 20 14:15:19 compute-1 ceph-mon[81775]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #31. Immutable memtables: 0.
Jan 20 14:15:19 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:15:19.005283) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 20 14:15:19 compute-1 ceph-mon[81775]: rocksdb: [db/flush_job.cc:856] [default] [JOB 15] Flushing memtable with next log file: 31
Jan 20 14:15:19 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768918519005414, "job": 15, "event": "flush_started", "num_memtables": 1, "num_entries": 1103, "num_deletes": 251, "total_data_size": 2529106, "memory_usage": 2561896, "flush_reason": "Manual Compaction"}
Jan 20 14:15:19 compute-1 ceph-mon[81775]: rocksdb: [db/flush_job.cc:885] [default] [JOB 15] Level-0 flush table #32: started
Jan 20 14:15:19 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768918519022133, "cf_name": "default", "job": 15, "event": "table_file_creation", "file_number": 32, "file_size": 1658900, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16341, "largest_seqno": 17439, "table_properties": {"data_size": 1653996, "index_size": 2492, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1349, "raw_key_size": 10360, "raw_average_key_size": 19, "raw_value_size": 1644225, "raw_average_value_size": 3090, "num_data_blocks": 113, "num_entries": 532, "num_filter_entries": 532, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768918425, "oldest_key_time": 1768918425, "file_creation_time": 1768918519, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1539d774-8a6f-4e48-b253-137c44586344", "db_session_id": "LFF7G2OZDOU7TKQ8MKAH", "orig_file_number": 32, "seqno_to_time_mapping": "N/A"}}
Jan 20 14:15:19 compute-1 ceph-mon[81775]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 15] Flush lasted 17014 microseconds, and 7913 cpu microseconds.
Jan 20 14:15:19 compute-1 ceph-mon[81775]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 14:15:19 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:15:19.022309) [db/flush_job.cc:967] [default] [JOB 15] Level-0 flush table #32: 1658900 bytes OK
Jan 20 14:15:19 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:15:19.022388) [db/memtable_list.cc:519] [default] Level-0 commit table #32 started
Jan 20 14:15:19 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:15:19.024082) [db/memtable_list.cc:722] [default] Level-0 commit table #32: memtable #1 done
Jan 20 14:15:19 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:15:19.024105) EVENT_LOG_v1 {"time_micros": 1768918519024098, "job": 15, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 20 14:15:19 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:15:19.024128) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 20 14:15:19 compute-1 ceph-mon[81775]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 15] Try to delete WAL files size 2523798, prev total WAL file size 2523798, number of live WAL files 2.
Jan 20 14:15:19 compute-1 ceph-mon[81775]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000028.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 14:15:19 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:15:19.025746) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031303034' seq:72057594037927935, type:22 .. '7061786F730031323536' seq:0, type:0; will stop at (end)
Jan 20 14:15:19 compute-1 ceph-mon[81775]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 16] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 20 14:15:19 compute-1 ceph-mon[81775]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 15 Base level 0, inputs: [32(1620KB)], [30(8126KB)]
Jan 20 14:15:19 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768918519025823, "job": 16, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [32], "files_L6": [30], "score": -1, "input_data_size": 9979944, "oldest_snapshot_seqno": -1}
Jan 20 14:15:19 compute-1 ceph-mon[81775]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 16] Generated table #33: 4337 keys, 7965577 bytes, temperature: kUnknown
Jan 20 14:15:19 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768918519100578, "cf_name": "default", "job": 16, "event": "table_file_creation", "file_number": 33, "file_size": 7965577, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7935520, "index_size": 18107, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10885, "raw_key_size": 107582, "raw_average_key_size": 24, "raw_value_size": 7855950, "raw_average_value_size": 1811, "num_data_blocks": 759, "num_entries": 4337, "num_filter_entries": 4337, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768917474, "oldest_key_time": 0, "file_creation_time": 1768918519, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1539d774-8a6f-4e48-b253-137c44586344", "db_session_id": "LFF7G2OZDOU7TKQ8MKAH", "orig_file_number": 33, "seqno_to_time_mapping": "N/A"}}
Jan 20 14:15:19 compute-1 ceph-mon[81775]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 14:15:19 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:15:19.100901) [db/compaction/compaction_job.cc:1663] [default] [JOB 16] Compacted 1@0 + 1@6 files to L6 => 7965577 bytes
Jan 20 14:15:19 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:15:19.103246) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 133.3 rd, 106.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.6, 7.9 +0.0 blob) out(7.6 +0.0 blob), read-write-amplify(10.8) write-amplify(4.8) OK, records in: 4852, records dropped: 515 output_compression: NoCompression
Jan 20 14:15:19 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:15:19.103276) EVENT_LOG_v1 {"time_micros": 1768918519103263, "job": 16, "event": "compaction_finished", "compaction_time_micros": 74849, "compaction_time_cpu_micros": 34902, "output_level": 6, "num_output_files": 1, "total_output_size": 7965577, "num_input_records": 4852, "num_output_records": 4337, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 20 14:15:19 compute-1 ceph-mon[81775]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000032.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 14:15:19 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768918519105379, "job": 16, "event": "table_file_deletion", "file_number": 32}
Jan 20 14:15:19 compute-1 ceph-mon[81775]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000030.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 14:15:19 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768918519108899, "job": 16, "event": "table_file_deletion", "file_number": 30}
Jan 20 14:15:19 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:15:19.025637) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:15:19 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:15:19.109042) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:15:19 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:15:19.109051) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:15:19 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:15:19.109054) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:15:19 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:15:19.109057) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:15:19 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:15:19.109060) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:15:19 compute-1 sudo[203663]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ewrvwlmwgffbepclrnzdeacmzbfcrrys ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918518.5883179-315-186599998365298/AnsiballZ_lineinfile.py'
Jan 20 14:15:19 compute-1 sudo[203663]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:15:19 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:15:19 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:15:19 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:15:19.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:15:19 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:15:19 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:15:19 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:15:19.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:15:19 compute-1 python3.9[203665]: ansible-ansible.builtin.lineinfile Invoked with insertafter=^#node.session.auth.chap.algs line=node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5 path=/etc/iscsi/iscsid.conf regexp=^node.session.auth.chap_algs state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:15:19 compute-1 sudo[203663]: pam_unix(sudo:session): session closed for user root
Jan 20 14:15:19 compute-1 rsyslogd[1002]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 20 14:15:19 compute-1 rsyslogd[1002]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 20 14:15:20 compute-1 ceph-mon[81775]: pgmap v705: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:15:20 compute-1 sudo[203817]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bpxoifhnbigxpdswvfewzkqvhiriwrbv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918519.9350023-342-167731739477113/AnsiballZ_systemd_service.py'
Jan 20 14:15:20 compute-1 sudo[203817]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:15:20 compute-1 python3.9[203819]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 20 14:15:21 compute-1 systemd[1]: Listening on Open-iSCSI iscsid Socket.
Jan 20 14:15:21 compute-1 sudo[203817]: pam_unix(sudo:session): session closed for user root
Jan 20 14:15:21 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:15:21 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:15:21 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:15:21.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:15:21 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:15:21 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:15:21 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:15:21.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:15:21 compute-1 ceph-mon[81775]: pgmap v706: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:15:21 compute-1 sudo[203989]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vwqekujbtzutxiaczsgyupqbmcfftcwl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918521.3922713-366-119957667878379/AnsiballZ_systemd_service.py'
Jan 20 14:15:21 compute-1 sudo[203989]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:15:21 compute-1 podman[203948]: 2026-01-20 14:15:21.834821723 +0000 UTC m=+0.078595303 container health_status 533cd8e0c3b8b7f910f15cef49e7f879ab46c12ce646a35628c15f13725eb500 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent)
Jan 20 14:15:22 compute-1 python3.9[203996]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 20 14:15:22 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:15:22 compute-1 systemd[1]: Reloading.
Jan 20 14:15:22 compute-1 systemd-rc-local-generator[204029]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 14:15:22 compute-1 systemd-sysv-generator[204032]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 14:15:22 compute-1 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Jan 20 14:15:22 compute-1 systemd[1]: Starting Open-iSCSI...
Jan 20 14:15:22 compute-1 kernel: Loading iSCSI transport class v2.0-870.
Jan 20 14:15:22 compute-1 systemd[1]: Started Open-iSCSI.
Jan 20 14:15:22 compute-1 systemd[1]: Starting Logout off all iSCSI sessions on shutdown...
Jan 20 14:15:22 compute-1 systemd[1]: Finished Logout off all iSCSI sessions on shutdown.
Jan 20 14:15:22 compute-1 sudo[203989]: pam_unix(sudo:session): session closed for user root
Jan 20 14:15:23 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:15:23 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:15:23 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:15:23.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:15:23 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:15:23 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:15:23 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:15:23.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:15:24 compute-1 ceph-mon[81775]: pgmap v707: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:15:24 compute-1 ceph-mon[81775]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #34. Immutable memtables: 0.
Jan 20 14:15:24 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:15:24.665666) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 20 14:15:24 compute-1 ceph-mon[81775]: rocksdb: [db/flush_job.cc:856] [default] [JOB 17] Flushing memtable with next log file: 34
Jan 20 14:15:24 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768918524665725, "job": 17, "event": "flush_started", "num_memtables": 1, "num_entries": 304, "num_deletes": 256, "total_data_size": 118358, "memory_usage": 124792, "flush_reason": "Manual Compaction"}
Jan 20 14:15:24 compute-1 ceph-mon[81775]: rocksdb: [db/flush_job.cc:885] [default] [JOB 17] Level-0 flush table #35: started
Jan 20 14:15:24 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768918524668993, "cf_name": "default", "job": 17, "event": "table_file_creation", "file_number": 35, "file_size": 77787, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 17445, "largest_seqno": 17743, "table_properties": {"data_size": 75866, "index_size": 149, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 709, "raw_key_size": 4505, "raw_average_key_size": 16, "raw_value_size": 72071, "raw_average_value_size": 259, "num_data_blocks": 7, "num_entries": 278, "num_filter_entries": 278, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768918519, "oldest_key_time": 1768918519, "file_creation_time": 1768918524, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1539d774-8a6f-4e48-b253-137c44586344", "db_session_id": "LFF7G2OZDOU7TKQ8MKAH", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Jan 20 14:15:24 compute-1 ceph-mon[81775]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 17] Flush lasted 3356 microseconds, and 1161 cpu microseconds.
Jan 20 14:15:24 compute-1 ceph-mon[81775]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 14:15:24 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:15:24.669032) [db/flush_job.cc:967] [default] [JOB 17] Level-0 flush table #35: 77787 bytes OK
Jan 20 14:15:24 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:15:24.669046) [db/memtable_list.cc:519] [default] Level-0 commit table #35 started
Jan 20 14:15:24 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:15:24.670463) [db/memtable_list.cc:722] [default] Level-0 commit table #35: memtable #1 done
Jan 20 14:15:24 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:15:24.670476) EVENT_LOG_v1 {"time_micros": 1768918524670472, "job": 17, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 20 14:15:24 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:15:24.670493) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 20 14:15:24 compute-1 ceph-mon[81775]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 17] Try to delete WAL files size 116142, prev total WAL file size 116142, number of live WAL files 2.
Jan 20 14:15:24 compute-1 ceph-mon[81775]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000031.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 14:15:24 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:15:24.670856) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0030' seq:72057594037927935, type:22 .. '6C6F676D00323533' seq:0, type:0; will stop at (end)
Jan 20 14:15:24 compute-1 ceph-mon[81775]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 18] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 20 14:15:24 compute-1 ceph-mon[81775]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 17 Base level 0, inputs: [35(75KB)], [33(7778KB)]
Jan 20 14:15:24 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768918524671024, "job": 18, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [35], "files_L6": [33], "score": -1, "input_data_size": 8043364, "oldest_snapshot_seqno": -1}
Jan 20 14:15:24 compute-1 ceph-mon[81775]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 18] Generated table #36: 4095 keys, 7701612 bytes, temperature: kUnknown
Jan 20 14:15:24 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768918524733038, "cf_name": "default", "job": 18, "event": "table_file_creation", "file_number": 36, "file_size": 7701612, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7673374, "index_size": 16928, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10245, "raw_key_size": 103856, "raw_average_key_size": 25, "raw_value_size": 7598151, "raw_average_value_size": 1855, "num_data_blocks": 696, "num_entries": 4095, "num_filter_entries": 4095, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768917474, "oldest_key_time": 0, "file_creation_time": 1768918524, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1539d774-8a6f-4e48-b253-137c44586344", "db_session_id": "LFF7G2OZDOU7TKQ8MKAH", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Jan 20 14:15:24 compute-1 ceph-mon[81775]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 14:15:24 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:15:24.733429) [db/compaction/compaction_job.cc:1663] [default] [JOB 18] Compacted 1@0 + 1@6 files to L6 => 7701612 bytes
Jan 20 14:15:24 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:15:24.735660) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 129.4 rd, 123.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.1, 7.6 +0.0 blob) out(7.3 +0.0 blob), read-write-amplify(202.4) write-amplify(99.0) OK, records in: 4615, records dropped: 520 output_compression: NoCompression
Jan 20 14:15:24 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:15:24.735697) EVENT_LOG_v1 {"time_micros": 1768918524735681, "job": 18, "event": "compaction_finished", "compaction_time_micros": 62155, "compaction_time_cpu_micros": 20664, "output_level": 6, "num_output_files": 1, "total_output_size": 7701612, "num_input_records": 4615, "num_output_records": 4095, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 20 14:15:24 compute-1 ceph-mon[81775]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000035.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 14:15:24 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768918524736248, "job": 18, "event": "table_file_deletion", "file_number": 35}
Jan 20 14:15:24 compute-1 ceph-mon[81775]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000033.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 14:15:24 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768918524739318, "job": 18, "event": "table_file_deletion", "file_number": 33}
Jan 20 14:15:24 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:15:24.670778) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:15:24 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:15:24.739425) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:15:24 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:15:24.739430) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:15:24 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:15:24.739433) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:15:24 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:15:24.739436) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:15:24 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:15:24.739439) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:15:24 compute-1 python3.9[204197]: ansible-ansible.builtin.service_facts Invoked
Jan 20 14:15:24 compute-1 network[204214]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 20 14:15:24 compute-1 network[204215]: 'network-scripts' will be removed from distribution in near future.
Jan 20 14:15:24 compute-1 network[204216]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 20 14:15:25 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:15:25 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:15:25 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:15:25.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:15:25 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:15:25 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:15:25 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:15:25.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:15:25 compute-1 ceph-mon[81775]: pgmap v708: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:15:27 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:15:27 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:15:27 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:15:27 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:15:27.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:15:27 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:15:27 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:15:27 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:15:27.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:15:28 compute-1 ceph-mon[81775]: pgmap v709: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:15:29 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:15:29 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:15:29 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:15:29.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:15:29 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:15:29 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:15:29 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:15:29.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:15:30 compute-1 ceph-mon[81775]: pgmap v710: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:15:31 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:15:31 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:15:31 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:15:31.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:15:31 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:15:31 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:15:31 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:15:31.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:15:31 compute-1 sudo[204439]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:15:31 compute-1 sudo[204439]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:15:31 compute-1 sudo[204439]: pam_unix(sudo:session): session closed for user root
Jan 20 14:15:31 compute-1 sudo[204486]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:15:31 compute-1 sudo[204486]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:15:31 compute-1 sudo[204486]: pam_unix(sudo:session): session closed for user root
Jan 20 14:15:31 compute-1 sudo[204540]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ievwpaoteyxlytsgtkkvzzmtyuyfrgmi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918531.245947-435-273728122791024/AnsiballZ_dnf.py'
Jan 20 14:15:31 compute-1 sudo[204540]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:15:31 compute-1 python3.9[204542]: ansible-ansible.legacy.dnf Invoked with name=['device-mapper-multipath'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 20 14:15:32 compute-1 ceph-mon[81775]: pgmap v711: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:15:32 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:15:33 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:15:33 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:15:33 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:15:33.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:15:33 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:15:33 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:15:33 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:15:33.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:15:34 compute-1 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 20 14:15:34 compute-1 systemd[1]: Starting man-db-cache-update.service...
Jan 20 14:15:34 compute-1 systemd[1]: Reloading.
Jan 20 14:15:34 compute-1 systemd-rc-local-generator[204589]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 14:15:34 compute-1 systemd-sysv-generator[204593]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 14:15:34 compute-1 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 20 14:15:34 compute-1 ceph-mon[81775]: pgmap v712: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:15:35 compute-1 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 20 14:15:35 compute-1 systemd[1]: Finished man-db-cache-update.service.
Jan 20 14:15:35 compute-1 systemd[1]: run-r1b0671403e194c929df19f8d19fb484c.service: Deactivated successfully.
Jan 20 14:15:35 compute-1 sudo[204540]: pam_unix(sudo:session): session closed for user root
Jan 20 14:15:35 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:15:35 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:15:35 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:15:35.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:15:35 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:15:35 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:15:35 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:15:35.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:15:35 compute-1 ceph-mon[81775]: pgmap v713: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:15:36 compute-1 sudo[204859]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eraflwqokazhwpbgvpqroiglxgmsptnr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918536.0561678-462-192374767236453/AnsiballZ_file.py'
Jan 20 14:15:36 compute-1 sudo[204859]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:15:36 compute-1 python3.9[204861]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Jan 20 14:15:36 compute-1 sudo[204859]: pam_unix(sudo:session): session closed for user root
Jan 20 14:15:37 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:15:37 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:15:37 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:15:37 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:15:37.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:15:37 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:15:37 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:15:37 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:15:37.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:15:37 compute-1 sudo[205011]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-erfbbgsaoyvqbcwqmacahmbuwvguxblc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918536.898356-486-225714621297779/AnsiballZ_modprobe.py'
Jan 20 14:15:37 compute-1 sudo[205011]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:15:37 compute-1 python3.9[205013]: ansible-community.general.modprobe Invoked with name=dm-multipath state=present params= persistent=disabled
Jan 20 14:15:37 compute-1 sudo[205011]: pam_unix(sudo:session): session closed for user root
Jan 20 14:15:37 compute-1 ceph-mon[81775]: pgmap v714: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:15:38 compute-1 sudo[205168]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-updsbqgwerzfzduvynjlurivxklyysgr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918537.9494734-510-106715687623771/AnsiballZ_stat.py'
Jan 20 14:15:38 compute-1 sudo[205168]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:15:38 compute-1 python3.9[205170]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/dm-multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:15:38 compute-1 sudo[205168]: pam_unix(sudo:session): session closed for user root
Jan 20 14:15:39 compute-1 sudo[205291]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-euknngozweqewuldsxkbsmlspsxoboqr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918537.9494734-510-106715687623771/AnsiballZ_copy.py'
Jan 20 14:15:39 compute-1 sudo[205291]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:15:39 compute-1 python3.9[205293]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/dm-multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1768918537.9494734-510-106715687623771/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=065061c60917e4f67cecc70d12ce55e42f9d0b3f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:15:39 compute-1 sudo[205291]: pam_unix(sudo:session): session closed for user root
Jan 20 14:15:39 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:15:39 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:15:39 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:15:39.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:15:39 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:15:39 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:15:39 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:15:39.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:15:40 compute-1 podman[205394]: 2026-01-20 14:15:40.139713618 +0000 UTC m=+0.162359533 container health_status 72be95c12e041eb2cd1a16ad70dfa1391d9f03a5f3180a614ead9459dcfcd46f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 20 14:15:40 compute-1 sudo[205471]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sqjeiacntxgahusvnkakncfvbkwrawpn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918539.74355-558-118769443284585/AnsiballZ_lineinfile.py'
Jan 20 14:15:40 compute-1 sudo[205471]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:15:40 compute-1 python3.9[205473]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=dm-multipath  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:15:40 compute-1 sudo[205471]: pam_unix(sudo:session): session closed for user root
Jan 20 14:15:40 compute-1 ceph-mon[81775]: pgmap v715: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:15:41 compute-1 sudo[205624]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fopvsmafhzelyvfqhbxrkvutnqacmxvf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918540.669125-582-217414128167861/AnsiballZ_systemd.py'
Jan 20 14:15:41 compute-1 sudo[205624]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:15:41 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:15:41 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:15:41 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:15:41.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:15:41 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:15:41 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:15:41 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:15:41.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:15:41 compute-1 python3.9[205626]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 20 14:15:41 compute-1 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 20 14:15:41 compute-1 systemd[1]: Stopped Load Kernel Modules.
Jan 20 14:15:41 compute-1 systemd[1]: Stopping Load Kernel Modules...
Jan 20 14:15:41 compute-1 systemd[1]: Starting Load Kernel Modules...
Jan 20 14:15:41 compute-1 systemd[1]: Finished Load Kernel Modules.
Jan 20 14:15:41 compute-1 sudo[205624]: pam_unix(sudo:session): session closed for user root
Jan 20 14:15:41 compute-1 ceph-mon[81775]: pgmap v716: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:15:42 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:15:42 compute-1 sudo[205781]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vramsxyqymzinjmafnupmpoocnzlwogz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918542.021045-606-30188970968978/AnsiballZ_command.py'
Jan 20 14:15:42 compute-1 sudo[205781]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:15:42 compute-1 python3.9[205783]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/multipath _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 14:15:42 compute-1 sudo[205781]: pam_unix(sudo:session): session closed for user root
Jan 20 14:15:43 compute-1 sudo[205934]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-agwvrxzizuqosyhfdudplokeeuyqpicj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918543.041334-636-62385282457122/AnsiballZ_stat.py'
Jan 20 14:15:43 compute-1 sudo[205934]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:15:43 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:15:43 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:15:43 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:15:43.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:15:43 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:15:43 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:15:43 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:15:43.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:15:43 compute-1 python3.9[205936]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 20 14:15:43 compute-1 sudo[205934]: pam_unix(sudo:session): session closed for user root
Jan 20 14:15:43 compute-1 ceph-mon[81775]: pgmap v717: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:15:44 compute-1 sudo[206087]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vbanmsipjgbccgmoxgthftlarqfrdqyu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918544.0613-663-93457358283600/AnsiballZ_stat.py'
Jan 20 14:15:44 compute-1 sudo[206087]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:15:44 compute-1 python3.9[206089]: ansible-ansible.legacy.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:15:44 compute-1 sudo[206087]: pam_unix(sudo:session): session closed for user root
Jan 20 14:15:44 compute-1 sudo[206210]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ydzmwzxfzmmznpxbtumckpfeuuskpxgx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918544.0613-663-93457358283600/AnsiballZ_copy.py'
Jan 20 14:15:44 compute-1 sudo[206210]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:15:45 compute-1 python3.9[206212]: ansible-ansible.legacy.copy Invoked with dest=/etc/multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1768918544.0613-663-93457358283600/.source.conf _original_basename=multipath.conf follow=False checksum=bf02ab264d3d648048a81f3bacec8bc58db93162 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:15:45 compute-1 sudo[206210]: pam_unix(sudo:session): session closed for user root
Jan 20 14:15:45 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:15:45 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:15:45 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:15:45.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:15:45 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:15:45 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:15:45 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:15:45.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:15:45 compute-1 sudo[206363]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jnacmipewdumgjdrbbhqgwpslhrtfxmg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918545.4969926-708-269964046265347/AnsiballZ_command.py'
Jan 20 14:15:45 compute-1 sudo[206363]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:15:45 compute-1 python3.9[206365]: ansible-ansible.legacy.command Invoked with _raw_params=grep -q '^blacklist\s*{' /etc/multipath.conf _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 14:15:45 compute-1 sudo[206363]: pam_unix(sudo:session): session closed for user root
Jan 20 14:15:46 compute-1 ceph-mon[81775]: pgmap v718: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:15:46 compute-1 sudo[206516]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fmvtzsayujfxpcwuxtvnlxtjxegztuto ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918546.1992207-732-109886793600623/AnsiballZ_lineinfile.py'
Jan 20 14:15:46 compute-1 sudo[206516]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:15:46 compute-1 python3.9[206518]: ansible-ansible.builtin.lineinfile Invoked with line=blacklist { path=/etc/multipath.conf state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:15:46 compute-1 sudo[206516]: pam_unix(sudo:session): session closed for user root
Jan 20 14:15:47 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:15:47 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:15:47 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:15:47 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:15:47.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:15:47 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:15:47 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:15:47 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:15:47.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:15:47 compute-1 sudo[206668]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sgskczmmywawbkzoremltrffnrmvpiix ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918546.9569838-756-75653741076894/AnsiballZ_replace.py'
Jan 20 14:15:47 compute-1 sudo[206668]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:15:47 compute-1 python3.9[206670]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^(blacklist {) replace=\1\n} backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:15:47 compute-1 sudo[206668]: pam_unix(sudo:session): session closed for user root
Jan 20 14:15:48 compute-1 sudo[206821]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ybqvzfpdtfxodumpkchjugxwugopajpr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918548.0001175-780-3134140720998/AnsiballZ_replace.py'
Jan 20 14:15:48 compute-1 sudo[206821]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:15:48 compute-1 ceph-mon[81775]: pgmap v719: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:15:48 compute-1 python3.9[206823]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^blacklist\s*{\n[\s]+devnode \"\.\*\" replace=blacklist { backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:15:48 compute-1 sudo[206821]: pam_unix(sudo:session): session closed for user root
Jan 20 14:15:49 compute-1 sudo[206973]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-utiefyvfmydtevagpybrioclybwrneiw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918548.8931684-807-247559321079676/AnsiballZ_lineinfile.py'
Jan 20 14:15:49 compute-1 sudo[206973]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:15:49 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:15:49 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:15:49 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:15:49.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:15:49 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:15:49 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:15:49 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:15:49.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:15:49 compute-1 python3.9[206975]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        find_multipaths yes path=/etc/multipath.conf regexp=^\s+find_multipaths state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:15:49 compute-1 ceph-mon[81775]: pgmap v720: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:15:49 compute-1 sudo[206973]: pam_unix(sudo:session): session closed for user root
Jan 20 14:15:49 compute-1 sudo[207126]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-leiizjbqhoqeztecoytrerovxhbnmkyf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918549.6211038-807-151252883723896/AnsiballZ_lineinfile.py'
Jan 20 14:15:49 compute-1 sudo[207126]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:15:50 compute-1 python3.9[207128]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        recheck_wwid yes path=/etc/multipath.conf regexp=^\s+recheck_wwid state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:15:50 compute-1 sudo[207126]: pam_unix(sudo:session): session closed for user root
Jan 20 14:15:50 compute-1 sudo[207278]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ifeydoqbvpohcxxsceahkybsvbzxqdth ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918550.3134713-807-159686144941664/AnsiballZ_lineinfile.py'
Jan 20 14:15:50 compute-1 sudo[207278]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:15:50 compute-1 python3.9[207280]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        skip_kpartx yes path=/etc/multipath.conf regexp=^\s+skip_kpartx state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:15:50 compute-1 sudo[207278]: pam_unix(sudo:session): session closed for user root
Jan 20 14:15:51 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:15:51 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:15:51 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:15:51.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:15:51 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:15:51 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:15:51 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:15:51.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:15:51 compute-1 sudo[207430]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-edbdncdajulyqeflogehgjwwalpcsapf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918551.0164316-807-136206947850113/AnsiballZ_lineinfile.py'
Jan 20 14:15:51 compute-1 sudo[207430]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:15:51 compute-1 ceph-mon[81775]: pgmap v721: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:15:51 compute-1 python3.9[207432]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        user_friendly_names no path=/etc/multipath.conf regexp=^\s+user_friendly_names state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:15:51 compute-1 sudo[207434]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:15:51 compute-1 sudo[207434]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:15:51 compute-1 sudo[207434]: pam_unix(sudo:session): session closed for user root
Jan 20 14:15:51 compute-1 sudo[207430]: pam_unix(sudo:session): session closed for user root
Jan 20 14:15:51 compute-1 sudo[207459]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:15:51 compute-1 sudo[207459]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:15:51 compute-1 sudo[207459]: pam_unix(sudo:session): session closed for user root
Jan 20 14:15:52 compute-1 podman[207531]: 2026-01-20 14:15:52.027966239 +0000 UTC m=+0.059912892 container health_status 533cd8e0c3b8b7f910f15cef49e7f879ab46c12ce646a35628c15f13725eb500 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Jan 20 14:15:52 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:15:52 compute-1 sudo[207652]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pbwurjvzftnofjudzvbudfvlymwvssmk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918551.9407241-894-267453299183755/AnsiballZ_stat.py'
Jan 20 14:15:52 compute-1 sudo[207652]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:15:52 compute-1 python3.9[207654]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 20 14:15:52 compute-1 sudo[207652]: pam_unix(sudo:session): session closed for user root
Jan 20 14:15:53 compute-1 sudo[207806]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-negpjetgowmmshfmbwvblewiyskngxfb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918552.8463302-918-253126114048340/AnsiballZ_command.py'
Jan 20 14:15:53 compute-1 sudo[207806]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:15:53 compute-1 python3.9[207808]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/true _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 14:15:53 compute-1 sudo[207806]: pam_unix(sudo:session): session closed for user root
Jan 20 14:15:53 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:15:53 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:15:53 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:15:53.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:15:53 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:15:53 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:15:53 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:15:53.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:15:54 compute-1 ceph-mon[81775]: pgmap v722: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:15:54 compute-1 sudo[207960]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-puqqahqdwwdeyyieslcbyzkhipfqcryq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918553.7884524-945-236992409623580/AnsiballZ_systemd_service.py'
Jan 20 14:15:54 compute-1 sudo[207960]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:15:54 compute-1 python3.9[207962]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=multipathd.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 20 14:15:54 compute-1 systemd[1]: Listening on multipathd control socket.
Jan 20 14:15:54 compute-1 sudo[207960]: pam_unix(sudo:session): session closed for user root
Jan 20 14:15:55 compute-1 sudo[208116]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-azvzpvkiqrdngspzzikqldsgtquweqrn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918554.847527-969-161135231842571/AnsiballZ_systemd_service.py'
Jan 20 14:15:55 compute-1 sudo[208116]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:15:55 compute-1 ceph-mon[81775]: pgmap v723: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:15:55 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:15:55 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:15:55 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:15:55.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:15:55 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:15:55 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:15:55 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:15:55.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:15:55 compute-1 python3.9[208118]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=multipathd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 20 14:15:56 compute-1 systemd[1]: Starting Wait for udev To Complete Device Initialization...
Jan 20 14:15:56 compute-1 udevadm[208124]: systemd-udev-settle.service is deprecated. Please fix multipathd.service not to pull it in.
Jan 20 14:15:56 compute-1 systemd[1]: Finished Wait for udev To Complete Device Initialization.
Jan 20 14:15:56 compute-1 systemd[1]: Starting Device-Mapper Multipath Device Controller...
Jan 20 14:15:56 compute-1 multipathd[208128]: --------start up--------
Jan 20 14:15:56 compute-1 multipathd[208128]: read /etc/multipath.conf
Jan 20 14:15:56 compute-1 multipathd[208128]: path checkers start up
Jan 20 14:15:56 compute-1 systemd[1]: Started Device-Mapper Multipath Device Controller.
Jan 20 14:15:56 compute-1 sudo[208116]: pam_unix(sudo:session): session closed for user root
Jan 20 14:15:57 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:15:57 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:15:57 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:15:57 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:15:57.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:15:57 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:15:57 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:15:57 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:15:57.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:15:57 compute-1 sudo[208286]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jaswirwitmxwrlwypxmgakyipauixtua ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918557.4513352-1005-246652557627457/AnsiballZ_file.py'
Jan 20 14:15:57 compute-1 sudo[208286]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:15:58 compute-1 ceph-mon[81775]: pgmap v724: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:15:58 compute-1 python3.9[208288]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Jan 20 14:15:58 compute-1 sudo[208286]: pam_unix(sudo:session): session closed for user root
Jan 20 14:15:58 compute-1 sudo[208438]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-encpmorjdmxffdczgcjofaznvnzpylhx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918558.3956537-1029-243691808932429/AnsiballZ_modprobe.py'
Jan 20 14:15:58 compute-1 sudo[208438]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:15:58 compute-1 python3.9[208440]: ansible-community.general.modprobe Invoked with name=nvme-fabrics state=present params= persistent=disabled
Jan 20 14:15:58 compute-1 kernel: Key type psk registered
Jan 20 14:15:58 compute-1 sudo[208438]: pam_unix(sudo:session): session closed for user root
Jan 20 14:15:59 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:15:59 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:15:59 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:15:59.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:15:59 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:15:59 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:15:59 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:15:59.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:15:59 compute-1 sudo[208602]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pujqnpyhxngezanmpjxrmjqydcxsfkvf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918559.2134857-1053-76592514658852/AnsiballZ_stat.py'
Jan 20 14:15:59 compute-1 sudo[208602]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:15:59 compute-1 python3.9[208604]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/nvme-fabrics.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:15:59 compute-1 sudo[208602]: pam_unix(sudo:session): session closed for user root
Jan 20 14:16:00 compute-1 ceph-mon[81775]: pgmap v725: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:16:00 compute-1 sudo[208725]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-smhqbbhfwetfdbwschnffduxagtkleym ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918559.2134857-1053-76592514658852/AnsiballZ_copy.py'
Jan 20 14:16:00 compute-1 sudo[208725]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:16:00 compute-1 python3.9[208727]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/nvme-fabrics.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1768918559.2134857-1053-76592514658852/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=783c778f0c68cc414f35486f234cbb1cf3f9bbff backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:16:00 compute-1 sudo[208725]: pam_unix(sudo:session): session closed for user root
Jan 20 14:16:01 compute-1 sudo[208877]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sdslawhjwjgghxebqdktxuwmjzsyjpmv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918560.8214388-1101-87971099510993/AnsiballZ_lineinfile.py'
Jan 20 14:16:01 compute-1 sudo[208877]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:16:01 compute-1 python3.9[208879]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=nvme-fabrics  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:16:01 compute-1 sudo[208877]: pam_unix(sudo:session): session closed for user root
Jan 20 14:16:01 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:16:01 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:16:01 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:16:01.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:16:01 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:16:01 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:16:01 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:16:01.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:16:02 compute-1 ceph-mon[81775]: pgmap v726: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:16:02 compute-1 sudo[209030]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xfcpllafyeyuyohawoanxgnkaolxptiz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918561.7805808-1125-230032329068216/AnsiballZ_systemd.py'
Jan 20 14:16:02 compute-1 sudo[209030]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:16:02 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:16:02 compute-1 python3.9[209032]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 20 14:16:02 compute-1 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 20 14:16:02 compute-1 systemd[1]: Stopped Load Kernel Modules.
Jan 20 14:16:02 compute-1 systemd[1]: Stopping Load Kernel Modules...
Jan 20 14:16:02 compute-1 systemd[1]: Starting Load Kernel Modules...
Jan 20 14:16:02 compute-1 systemd[1]: Finished Load Kernel Modules.
Jan 20 14:16:02 compute-1 sudo[209030]: pam_unix(sudo:session): session closed for user root
Jan 20 14:16:03 compute-1 sudo[209186]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dxvvrgnupojordjvnxsmrmsjbeplengo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918562.8999536-1149-56116953202840/AnsiballZ_dnf.py'
Jan 20 14:16:03 compute-1 sudo[209186]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:16:03 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:16:03 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:16:03 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:16:03.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:16:03 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:16:03 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:16:03 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:16:03.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:16:03 compute-1 python3.9[209188]: ansible-ansible.legacy.dnf Invoked with name=['nvme-cli'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 20 14:16:04 compute-1 ceph-mon[81775]: pgmap v727: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:16:04 compute-1 systemd[1]: virtnodedevd.service: Deactivated successfully.
Jan 20 14:16:05 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:16:05 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:16:05 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:16:05.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:16:05 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:16:05 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:16:05 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:16:05.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:16:05 compute-1 sudo[209196]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:16:05 compute-1 sudo[209196]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:16:05 compute-1 sudo[209196]: pam_unix(sudo:session): session closed for user root
Jan 20 14:16:05 compute-1 sudo[209221]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:16:05 compute-1 sudo[209221]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:16:05 compute-1 sudo[209221]: pam_unix(sudo:session): session closed for user root
Jan 20 14:16:05 compute-1 sudo[209246]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:16:05 compute-1 sudo[209246]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:16:05 compute-1 sudo[209246]: pam_unix(sudo:session): session closed for user root
Jan 20 14:16:05 compute-1 systemd[1]: Reloading.
Jan 20 14:16:05 compute-1 systemd-sysv-generator[209325]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 14:16:05 compute-1 systemd-rc-local-generator[209321]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 14:16:06 compute-1 ceph-mon[81775]: pgmap v728: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:16:06 compute-1 systemd[1]: virtproxyd.service: Deactivated successfully.
Jan 20 14:16:06 compute-1 systemd[1]: Reloading.
Jan 20 14:16:06 compute-1 systemd-sysv-generator[209360]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 14:16:06 compute-1 systemd-rc-local-generator[209355]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 14:16:06 compute-1 sudo[209273]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 20 14:16:06 compute-1 sudo[209273]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:16:06 compute-1 systemd-logind[783]: Watching system buttons on /dev/input/event0 (Power Button)
Jan 20 14:16:06 compute-1 systemd-logind[783]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Jan 20 14:16:06 compute-1 lvm[209422]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 20 14:16:06 compute-1 lvm[209422]: VG ceph_vg0 finished
Jan 20 14:16:06 compute-1 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 20 14:16:06 compute-1 systemd[1]: Starting man-db-cache-update.service...
Jan 20 14:16:06 compute-1 systemd[1]: Reloading.
Jan 20 14:16:06 compute-1 sudo[209273]: pam_unix(sudo:session): session closed for user root
Jan 20 14:16:06 compute-1 systemd-sysv-generator[209490]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 14:16:06 compute-1 systemd-rc-local-generator[209484]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 14:16:07 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:16:07 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 14:16:07 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:16:07 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 14:16:07 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 14:16:07 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:16:07 compute-1 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 20 14:16:07 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:16:07 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:16:07 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:16:07 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:16:07.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:16:07 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:16:07 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:16:07 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:16:07.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:16:07 compute-1 sudo[209186]: pam_unix(sudo:session): session closed for user root
Jan 20 14:16:08 compute-1 ceph-mon[81775]: pgmap v729: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:16:08 compute-1 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 20 14:16:08 compute-1 systemd[1]: Finished man-db-cache-update.service.
Jan 20 14:16:08 compute-1 systemd[1]: man-db-cache-update.service: Consumed 1.433s CPU time.
Jan 20 14:16:08 compute-1 systemd[1]: run-rc335477f764b47d2be14d45147f30965.service: Deactivated successfully.
Jan 20 14:16:08 compute-1 sudo[210789]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bwtvjyaiextteilzleehhduuccswgxot ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918568.012458-1173-272044110565607/AnsiballZ_systemd_service.py'
Jan 20 14:16:08 compute-1 sudo[210789]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:16:08 compute-1 python3.9[210791]: ansible-ansible.builtin.systemd_service Invoked with name=iscsid state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 20 14:16:08 compute-1 systemd[1]: Stopping Open-iSCSI...
Jan 20 14:16:08 compute-1 iscsid[204038]: iscsid shutting down.
Jan 20 14:16:08 compute-1 systemd[1]: iscsid.service: Deactivated successfully.
Jan 20 14:16:08 compute-1 systemd[1]: Stopped Open-iSCSI.
Jan 20 14:16:08 compute-1 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Jan 20 14:16:08 compute-1 systemd[1]: Starting Open-iSCSI...
Jan 20 14:16:08 compute-1 systemd[1]: Started Open-iSCSI.
Jan 20 14:16:08 compute-1 sudo[210789]: pam_unix(sudo:session): session closed for user root
Jan 20 14:16:09 compute-1 sudo[210945]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ogiqmidjndjctmlrunbfnfdnpqmoelkl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918569.0099263-1197-118715463789075/AnsiballZ_systemd_service.py'
Jan 20 14:16:09 compute-1 sudo[210945]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:16:09 compute-1 ceph-mon[81775]: pgmap v730: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:16:09 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:16:09 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:16:09 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:16:09.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:16:09 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:16:09 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:16:09 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:16:09.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:16:09 compute-1 python3.9[210947]: ansible-ansible.builtin.systemd_service Invoked with name=multipathd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 20 14:16:09 compute-1 systemd[1]: Stopping Device-Mapper Multipath Device Controller...
Jan 20 14:16:09 compute-1 multipathd[208128]: exit (signal)
Jan 20 14:16:09 compute-1 multipathd[208128]: --------shut down-------
Jan 20 14:16:09 compute-1 systemd[1]: multipathd.service: Deactivated successfully.
Jan 20 14:16:09 compute-1 systemd[1]: Stopped Device-Mapper Multipath Device Controller.
Jan 20 14:16:09 compute-1 systemd[1]: Starting Device-Mapper Multipath Device Controller...
Jan 20 14:16:09 compute-1 multipathd[210954]: --------start up--------
Jan 20 14:16:09 compute-1 multipathd[210954]: read /etc/multipath.conf
Jan 20 14:16:09 compute-1 multipathd[210954]: path checkers start up
Jan 20 14:16:09 compute-1 systemd[1]: Started Device-Mapper Multipath Device Controller.
Jan 20 14:16:09 compute-1 sudo[210945]: pam_unix(sudo:session): session closed for user root
Jan 20 14:16:10 compute-1 podman[211085]: 2026-01-20 14:16:10.34794573 +0000 UTC m=+0.080251918 container health_status 72be95c12e041eb2cd1a16ad70dfa1391d9f03a5f3180a614ead9459dcfcd46f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Jan 20 14:16:10 compute-1 python3.9[211126]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 14:16:11 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:16:11 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:16:11 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:16:11.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:16:11 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:16:11 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:16:11 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:16:11.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:16:11 compute-1 sudo[211292]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rwzabsifvuuvmxrpiidgkaeyeprsuyjw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918571.202563-1249-5183609421877/AnsiballZ_file.py'
Jan 20 14:16:11 compute-1 sudo[211292]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:16:11 compute-1 python3.9[211294]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/ssh/ssh_known_hosts state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:16:11 compute-1 sudo[211292]: pam_unix(sudo:session): session closed for user root
Jan 20 14:16:11 compute-1 sudo[211295]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:16:11 compute-1 sudo[211295]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:16:11 compute-1 sudo[211295]: pam_unix(sudo:session): session closed for user root
Jan 20 14:16:11 compute-1 sudo[211320]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:16:11 compute-1 sudo[211320]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:16:11 compute-1 sudo[211320]: pam_unix(sudo:session): session closed for user root
Jan 20 14:16:12 compute-1 ceph-mon[81775]: pgmap v731: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:16:12 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:16:12 compute-1 sudo[211494]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gkccudhyqonahhsrbsebovycuwwdllyt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918572.3222792-1282-98086557914883/AnsiballZ_systemd_service.py'
Jan 20 14:16:12 compute-1 sudo[211494]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:16:13 compute-1 python3.9[211496]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 20 14:16:13 compute-1 systemd[1]: Reloading.
Jan 20 14:16:13 compute-1 systemd-rc-local-generator[211524]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 14:16:13 compute-1 systemd-sysv-generator[211528]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 14:16:13 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:16:13 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:16:13 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:16:13.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:16:13 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:16:13 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:16:13 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:16:13.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:16:13 compute-1 sudo[211494]: pam_unix(sudo:session): session closed for user root
Jan 20 14:16:14 compute-1 python3.9[211682]: ansible-ansible.builtin.service_facts Invoked
Jan 20 14:16:14 compute-1 network[211699]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 20 14:16:14 compute-1 network[211700]: 'network-scripts' will be removed from distribution in near future.
Jan 20 14:16:14 compute-1 network[211701]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 20 14:16:14 compute-1 ceph-mon[81775]: pgmap v732: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:16:15 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:16:15 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:16:15 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:16:15.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:16:15 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:16:15 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:16:15 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:16:15.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:16:15 compute-1 sudo[211725]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:16:15 compute-1 sudo[211725]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:16:15 compute-1 sudo[211725]: pam_unix(sudo:session): session closed for user root
Jan 20 14:16:15 compute-1 sudo[211753]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 14:16:15 compute-1 sudo[211753]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:16:15 compute-1 sudo[211753]: pam_unix(sudo:session): session closed for user root
Jan 20 14:16:15 compute-1 ceph-mon[81775]: pgmap v733: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:16:15 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:16:15 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:16:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:16:16.376 140354 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:16:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:16:16.377 140354 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:16:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:16:16.377 140354 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:16:17 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:16:17 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:16:17 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:16:17 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:16:17.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:16:17 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:16:17 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:16:17 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:16:17.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:16:17 compute-1 systemd[1]: virtsecretd.service: Deactivated successfully.
Jan 20 14:16:17 compute-1 systemd[1]: virtqemud.service: Deactivated successfully.
Jan 20 14:16:18 compute-1 ceph-mon[81775]: pgmap v734: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:16:19 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:16:19 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:16:19 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:16:19.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:16:19 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:16:19 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:16:19 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:16:19.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:16:19 compute-1 ceph-mon[81775]: pgmap v735: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:16:20 compute-1 sudo[212027]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aamcusnjhvrsjwefpzenqdkrmsiwtomh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918580.2945375-1339-31592786795061/AnsiballZ_systemd_service.py'
Jan 20 14:16:20 compute-1 sudo[212027]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:16:21 compute-1 python3.9[212029]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 20 14:16:21 compute-1 sudo[212027]: pam_unix(sudo:session): session closed for user root
Jan 20 14:16:21 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:16:21 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:16:21 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:16:21.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:16:21 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:16:21 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:16:21 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:16:21.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:16:21 compute-1 sudo[212181]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jelvrnkelasgscaysvcjwzvlmmmbvuum ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918581.2562459-1339-116269338190734/AnsiballZ_systemd_service.py'
Jan 20 14:16:21 compute-1 sudo[212181]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:16:21 compute-1 python3.9[212183]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 20 14:16:22 compute-1 sudo[212181]: pam_unix(sudo:session): session closed for user root
Jan 20 14:16:22 compute-1 ceph-mon[81775]: pgmap v736: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:16:22 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:16:22 compute-1 sudo[212341]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-siuhbtupujldmtkomuzbuwfilqyattll ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918582.1702764-1339-263213496210791/AnsiballZ_systemd_service.py'
Jan 20 14:16:22 compute-1 sudo[212341]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:16:22 compute-1 podman[212308]: 2026-01-20 14:16:22.580141987 +0000 UTC m=+0.076937406 container health_status 533cd8e0c3b8b7f910f15cef49e7f879ab46c12ce646a35628c15f13725eb500 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3)
Jan 20 14:16:22 compute-1 python3.9[212350]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api_cron.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 20 14:16:22 compute-1 sudo[212341]: pam_unix(sudo:session): session closed for user root
Jan 20 14:16:23 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:16:23 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:16:23 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:16:23.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:16:23 compute-1 sudo[212508]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dvghblilgqauxvturxrycqhomguupdtf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918583.0224845-1339-197602149166626/AnsiballZ_systemd_service.py'
Jan 20 14:16:23 compute-1 sudo[212508]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:16:23 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:16:23 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:16:23 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:16:23.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:16:23 compute-1 python3.9[212510]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 20 14:16:23 compute-1 sudo[212508]: pam_unix(sudo:session): session closed for user root
Jan 20 14:16:24 compute-1 ceph-mon[81775]: pgmap v737: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:16:24 compute-1 sudo[212662]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-smaooclmgyrwjbthaikxjzsdsnfkjkjw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918583.9537115-1339-218608626598895/AnsiballZ_systemd_service.py'
Jan 20 14:16:24 compute-1 sudo[212662]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:16:24 compute-1 python3.9[212664]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_conductor.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 20 14:16:24 compute-1 sudo[212662]: pam_unix(sudo:session): session closed for user root
Jan 20 14:16:25 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:16:25 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.002000056s ======
Jan 20 14:16:25 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:16:25.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000056s
Jan 20 14:16:25 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:16:25 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:16:25 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:16:25.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:16:25 compute-1 sudo[212816]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mtcpebilacltomorofewfabcopfuhczx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918584.9034257-1339-83274718897087/AnsiballZ_systemd_service.py'
Jan 20 14:16:25 compute-1 sudo[212816]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:16:26 compute-1 python3.9[212818]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_metadata.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 20 14:16:26 compute-1 sudo[212816]: pam_unix(sudo:session): session closed for user root
Jan 20 14:16:26 compute-1 ceph-mon[81775]: pgmap v738: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:16:26 compute-1 sudo[212969]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wixqmgqnchgqhkchbmotiwlhkboxxmlg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918586.408788-1339-34841248645615/AnsiballZ_systemd_service.py'
Jan 20 14:16:26 compute-1 sudo[212969]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:16:27 compute-1 python3.9[212971]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_scheduler.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 20 14:16:27 compute-1 sudo[212969]: pam_unix(sudo:session): session closed for user root
Jan 20 14:16:27 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:16:27 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:16:27 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:16:27 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:16:27.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:16:27 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:16:27 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:16:27 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:16:27.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:16:28 compute-1 ceph-mon[81775]: pgmap v739: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:16:28 compute-1 sudo[213123]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gtgfloshqqoxsyblxpemcmkfenxsbbew ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918587.2721455-1339-192554285156532/AnsiballZ_systemd_service.py'
Jan 20 14:16:28 compute-1 sudo[213123]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:16:28 compute-1 python3.9[213125]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_vnc_proxy.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 20 14:16:28 compute-1 sudo[213123]: pam_unix(sudo:session): session closed for user root
Jan 20 14:16:29 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:16:29 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:16:29 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:16:29.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:16:29 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:16:29 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:16:29 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:16:29.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:16:29 compute-1 sudo[213277]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hqwkqzdtngkrmunqzjtaqcezujzljipn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918589.1301894-1516-164634522310751/AnsiballZ_file.py'
Jan 20 14:16:29 compute-1 sudo[213277]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:16:29 compute-1 python3.9[213279]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:16:29 compute-1 sudo[213277]: pam_unix(sudo:session): session closed for user root
Jan 20 14:16:30 compute-1 ceph-mon[81775]: pgmap v740: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:16:30 compute-1 sudo[213429]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oqomkolrmpaadzmwowobobmmxacfsilv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918589.952384-1516-222640522310978/AnsiballZ_file.py'
Jan 20 14:16:30 compute-1 sudo[213429]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:16:30 compute-1 python3.9[213431]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:16:30 compute-1 sudo[213429]: pam_unix(sudo:session): session closed for user root
Jan 20 14:16:31 compute-1 sudo[213581]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rsokxnhcheliujfrvoflonwcreyqvwgz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918590.6931627-1516-41104885685572/AnsiballZ_file.py'
Jan 20 14:16:31 compute-1 sudo[213581]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:16:31 compute-1 python3.9[213583]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:16:31 compute-1 sudo[213581]: pam_unix(sudo:session): session closed for user root
Jan 20 14:16:31 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:16:31 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:16:31 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:16:31.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:16:31 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:16:31 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:16:31 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:16:31.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:16:31 compute-1 sudo[213686]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:16:31 compute-1 sudo[213686]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:16:31 compute-1 sudo[213686]: pam_unix(sudo:session): session closed for user root
Jan 20 14:16:32 compute-1 sudo[213777]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rdnhyqmsruqpiysfqsvkhynasbslgvod ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918591.6308985-1516-265343223034422/AnsiballZ_file.py'
Jan 20 14:16:32 compute-1 sudo[213777]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:16:32 compute-1 sudo[213742]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:16:32 compute-1 sudo[213742]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:16:32 compute-1 sudo[213742]: pam_unix(sudo:session): session closed for user root
Jan 20 14:16:32 compute-1 ceph-mon[81775]: pgmap v741: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:16:32 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:16:32 compute-1 python3.9[213784]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:16:32 compute-1 sudo[213777]: pam_unix(sudo:session): session closed for user root
Jan 20 14:16:32 compute-1 sudo[213936]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pusvyottcorthrasxxhovsspesmyfqpc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918592.459434-1516-246400673625991/AnsiballZ_file.py'
Jan 20 14:16:32 compute-1 sudo[213936]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:16:32 compute-1 python3.9[213938]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:16:32 compute-1 sudo[213936]: pam_unix(sudo:session): session closed for user root
Jan 20 14:16:33 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:16:33 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:16:33 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:16:33.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:16:33 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:16:33 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:16:33 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:16:33.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:16:33 compute-1 sudo[214088]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sznwrnxhxqbdypmdzmqeqiqnykywuhvw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918593.1417956-1516-201705944039768/AnsiballZ_file.py'
Jan 20 14:16:33 compute-1 sudo[214088]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:16:33 compute-1 python3.9[214091]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:16:33 compute-1 sudo[214088]: pam_unix(sudo:session): session closed for user root
Jan 20 14:16:34 compute-1 sudo[214241]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sskxmoawidgiuojtvflzljxnkqdbdhww ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918593.8853621-1516-238924935830125/AnsiballZ_file.py'
Jan 20 14:16:34 compute-1 sudo[214241]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:16:34 compute-1 ceph-mon[81775]: pgmap v742: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:16:34 compute-1 python3.9[214243]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:16:34 compute-1 sudo[214241]: pam_unix(sudo:session): session closed for user root
Jan 20 14:16:35 compute-1 sudo[214393]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sduavzsaxghtprzkslgmrybqnsgdavyi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918594.6181183-1516-168683695857631/AnsiballZ_file.py'
Jan 20 14:16:35 compute-1 sudo[214393]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:16:35 compute-1 python3.9[214395]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:16:35 compute-1 sudo[214393]: pam_unix(sudo:session): session closed for user root
Jan 20 14:16:35 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:16:35 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:16:35 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:16:35.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:16:35 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:16:35 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:16:35 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:16:35.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:16:35 compute-1 sudo[214546]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ifhzmnmkbqnrcjiuyekzptiumqwzympi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918595.5434155-1687-126208095500646/AnsiballZ_file.py'
Jan 20 14:16:35 compute-1 sudo[214546]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:16:36 compute-1 python3.9[214548]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:16:36 compute-1 sudo[214546]: pam_unix(sudo:session): session closed for user root
Jan 20 14:16:36 compute-1 ceph-mon[81775]: pgmap v743: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:16:36 compute-1 sudo[214698]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-akddtsyaujanplkpfxnvuwdbhqptuwxz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918596.255936-1687-224808536431430/AnsiballZ_file.py'
Jan 20 14:16:36 compute-1 sudo[214698]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:16:36 compute-1 python3.9[214700]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:16:36 compute-1 sudo[214698]: pam_unix(sudo:session): session closed for user root
Jan 20 14:16:37 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:16:37 compute-1 sudo[214850]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ewrgwpdrkqyjddmktosewiknsxiexuxn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918597.0061796-1687-867109323444/AnsiballZ_file.py'
Jan 20 14:16:37 compute-1 sudo[214850]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:16:37 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:16:37 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:16:37 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:16:37.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:16:37 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:16:37 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:16:37 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:16:37.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:16:37 compute-1 python3.9[214852]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:16:37 compute-1 sudo[214850]: pam_unix(sudo:session): session closed for user root
Jan 20 14:16:38 compute-1 sudo[215003]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qlcdaxlohxcfvwxqztkzyplhthkymqkf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918597.7133408-1687-281311115207367/AnsiballZ_file.py'
Jan 20 14:16:38 compute-1 sudo[215003]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:16:38 compute-1 python3.9[215005]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:16:38 compute-1 sudo[215003]: pam_unix(sudo:session): session closed for user root
Jan 20 14:16:38 compute-1 ceph-mon[81775]: pgmap v744: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:16:38 compute-1 sudo[215155]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sxgzmmvetxzeivrgzwglgmtundfpudea ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918598.3937528-1687-267509943702335/AnsiballZ_file.py'
Jan 20 14:16:38 compute-1 sudo[215155]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:16:39 compute-1 python3.9[215157]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:16:39 compute-1 sudo[215155]: pam_unix(sudo:session): session closed for user root
Jan 20 14:16:39 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:16:39 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:16:39 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:16:39.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:16:39 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:16:39 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:16:39 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:16:39.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:16:39 compute-1 sudo[215308]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-opjaxnhyqgascdcxmsvhpyrmgrdjpdou ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918599.1804008-1687-108161112422840/AnsiballZ_file.py'
Jan 20 14:16:39 compute-1 sudo[215308]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:16:39 compute-1 python3.9[215310]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:16:39 compute-1 sudo[215308]: pam_unix(sudo:session): session closed for user root
Jan 20 14:16:40 compute-1 sudo[215460]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-akbgprrkrenxuudhtfndvfwrehpsqhpa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918599.9267747-1687-272894644302849/AnsiballZ_file.py'
Jan 20 14:16:40 compute-1 sudo[215460]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:16:40 compute-1 ceph-mon[81775]: pgmap v745: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:16:40 compute-1 python3.9[215462]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:16:40 compute-1 sudo[215460]: pam_unix(sudo:session): session closed for user root
Jan 20 14:16:41 compute-1 sudo[215623]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zguxeopknbrtotbujdnowzxmsypkibgq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918600.6750216-1687-42163711268608/AnsiballZ_file.py'
Jan 20 14:16:41 compute-1 sudo[215623]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:16:41 compute-1 podman[215586]: 2026-01-20 14:16:41.088512766 +0000 UTC m=+0.122778274 container health_status 72be95c12e041eb2cd1a16ad70dfa1391d9f03a5f3180a614ead9459dcfcd46f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_id=ovn_controller)
Jan 20 14:16:41 compute-1 python3.9[215633]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:16:41 compute-1 sudo[215623]: pam_unix(sudo:session): session closed for user root
Jan 20 14:16:41 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:16:41 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:16:41 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:16:41.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:16:41 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:16:41 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:16:41 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:16:41.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:16:41 compute-1 ceph-mon[81775]: pgmap v746: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:16:42 compute-1 sudo[215791]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vlhruvdgumwqnpvbyqfuqsbmnkmtkprh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918601.7740555-1861-108676661676358/AnsiballZ_command.py'
Jan 20 14:16:42 compute-1 sudo[215791]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:16:42 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:16:42 compute-1 python3.9[215793]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 14:16:42 compute-1 sudo[215791]: pam_unix(sudo:session): session closed for user root
Jan 20 14:16:43 compute-1 python3.9[215945]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 20 14:16:43 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:16:43 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:16:43 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:16:43.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:16:43 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:16:43 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:16:43 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:16:43.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:16:44 compute-1 ceph-mon[81775]: pgmap v747: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:16:44 compute-1 sudo[216096]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-edypknoqycwvmousoyrofqiicymxioiw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918603.8407264-1915-27480989859887/AnsiballZ_systemd_service.py'
Jan 20 14:16:44 compute-1 sudo[216096]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:16:44 compute-1 python3.9[216098]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 20 14:16:44 compute-1 systemd[1]: Reloading.
Jan 20 14:16:44 compute-1 systemd-sysv-generator[216124]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 14:16:44 compute-1 systemd-rc-local-generator[216121]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 14:16:44 compute-1 sudo[216096]: pam_unix(sudo:session): session closed for user root
Jan 20 14:16:45 compute-1 sudo[216283]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fsulcsphmjjepmmvyivkwkvxwsywrskd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918605.0227263-1939-252585094025181/AnsiballZ_command.py'
Jan 20 14:16:45 compute-1 sudo[216283]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:16:45 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:16:45 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:16:45 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:16:45.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:16:45 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:16:45 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:16:45 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:16:45.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:16:45 compute-1 python3.9[216285]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 14:16:45 compute-1 sudo[216283]: pam_unix(sudo:session): session closed for user root
Jan 20 14:16:45 compute-1 sudo[216437]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qbkhsdbpoxlhnvbasawernvvfmngymeu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918605.6961825-1939-28171143682084/AnsiballZ_command.py'
Jan 20 14:16:45 compute-1 sudo[216437]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:16:46 compute-1 ceph-mon[81775]: pgmap v748: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:16:46 compute-1 python3.9[216439]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 14:16:46 compute-1 sudo[216437]: pam_unix(sudo:session): session closed for user root
Jan 20 14:16:46 compute-1 sudo[216590]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mtdkzunepijoucxbpvnfzvhisiwpyzey ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918606.319568-1939-205182662716578/AnsiballZ_command.py'
Jan 20 14:16:46 compute-1 sudo[216590]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:16:46 compute-1 python3.9[216592]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api_cron.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 14:16:46 compute-1 sudo[216590]: pam_unix(sudo:session): session closed for user root
Jan 20 14:16:47 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:16:47 compute-1 sudo[216743]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cjevlftabdfcukxdtjcbziemcsfqband ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918606.967496-1939-93146359958735/AnsiballZ_command.py'
Jan 20 14:16:47 compute-1 sudo[216743]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:16:47 compute-1 python3.9[216745]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 14:16:47 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:16:47 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:16:47 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:16:47.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:16:47 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:16:47 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:16:47 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:16:47.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:16:48 compute-1 ceph-mon[81775]: pgmap v749: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:16:48 compute-1 sudo[216743]: pam_unix(sudo:session): session closed for user root
Jan 20 14:16:48 compute-1 sudo[216897]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jysjmfpalkcoqtzjcvyvszjfxtupfdxn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918608.6162694-1939-167045796674843/AnsiballZ_command.py'
Jan 20 14:16:48 compute-1 sudo[216897]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:16:49 compute-1 python3.9[216899]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_conductor.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 14:16:49 compute-1 sudo[216897]: pam_unix(sudo:session): session closed for user root
Jan 20 14:16:49 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:16:49 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:16:49 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:16:49.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:16:49 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:16:49 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:16:49 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:16:49.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:16:49 compute-1 sudo[217051]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rdhgtesuxtelkomimjxquaermczuhzdz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918609.2701662-1939-124474085429335/AnsiballZ_command.py'
Jan 20 14:16:49 compute-1 sudo[217051]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:16:49 compute-1 python3.9[217053]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_metadata.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 14:16:49 compute-1 sudo[217051]: pam_unix(sudo:session): session closed for user root
Jan 20 14:16:50 compute-1 ceph-mon[81775]: pgmap v750: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:16:50 compute-1 sudo[217204]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ycxqcsjhwxvthhhipcnnuzugriaahkmd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918609.9919424-1939-5658918116298/AnsiballZ_command.py'
Jan 20 14:16:50 compute-1 sudo[217204]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:16:50 compute-1 python3.9[217206]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_scheduler.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 14:16:50 compute-1 sudo[217204]: pam_unix(sudo:session): session closed for user root
Jan 20 14:16:51 compute-1 sudo[217357]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nuyafjodrmjrudyioqdczskbfmvunxvm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918610.7161062-1939-205752097254135/AnsiballZ_command.py'
Jan 20 14:16:51 compute-1 sudo[217357]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:16:51 compute-1 python3.9[217359]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_vnc_proxy.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 14:16:51 compute-1 sudo[217357]: pam_unix(sudo:session): session closed for user root
Jan 20 14:16:51 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:16:51 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:16:51 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:16:51.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:16:51 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:16:51 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:16:51 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:16:51.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:16:52 compute-1 sudo[217386]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:16:52 compute-1 sudo[217386]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:16:52 compute-1 sudo[217386]: pam_unix(sudo:session): session closed for user root
Jan 20 14:16:52 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:16:52 compute-1 sudo[217411]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:16:52 compute-1 sudo[217411]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:16:52 compute-1 sudo[217411]: pam_unix(sudo:session): session closed for user root
Jan 20 14:16:52 compute-1 ceph-mon[81775]: pgmap v751: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:16:53 compute-1 podman[217535]: 2026-01-20 14:16:53.055816967 +0000 UTC m=+0.062213425 container health_status 533cd8e0c3b8b7f910f15cef49e7f879ab46c12ce646a35628c15f13725eb500 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible)
Jan 20 14:16:53 compute-1 sudo[217580]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-akwopbxeipsdbnfisckasodaturwtvhq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918612.5217047-2146-10807075763577/AnsiballZ_file.py'
Jan 20 14:16:53 compute-1 sudo[217580]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:16:53 compute-1 python3.9[217582]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:16:53 compute-1 sudo[217580]: pam_unix(sudo:session): session closed for user root
Jan 20 14:16:53 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:16:53 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:16:53 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:16:53.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:16:53 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:16:53 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:16:53 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:16:53.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:16:53 compute-1 sudo[217733]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pznwadmwvcbqajvxiocikqkkjwsoizna ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918613.4335523-2146-74262675012221/AnsiballZ_file.py'
Jan 20 14:16:53 compute-1 sudo[217733]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:16:54 compute-1 python3.9[217735]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:16:54 compute-1 sudo[217733]: pam_unix(sudo:session): session closed for user root
Jan 20 14:16:54 compute-1 ceph-mon[81775]: pgmap v752: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:16:54 compute-1 sudo[217885]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-thomzbicsfvrvllyqipeysrljdguoccx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918614.2379699-2146-174116339246984/AnsiballZ_file.py'
Jan 20 14:16:54 compute-1 sudo[217885]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:16:54 compute-1 python3.9[217887]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova_nvme_cleaner setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:16:54 compute-1 sudo[217885]: pam_unix(sudo:session): session closed for user root
Jan 20 14:16:55 compute-1 sudo[218037]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nbtbtjpjrdtdhrfmxcfkiexnztjhrdqz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918615.105981-2212-196553323980918/AnsiballZ_file.py'
Jan 20 14:16:55 compute-1 sudo[218037]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:16:55 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:16:55 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:16:55 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:16:55.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:16:55 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:16:55 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:16:55 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:16:55.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:16:55 compute-1 python3.9[218039]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:16:55 compute-1 sudo[218037]: pam_unix(sudo:session): session closed for user root
Jan 20 14:16:56 compute-1 sudo[218190]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rrhgktjpeczfvqwdfqeuhbqtvifyrksy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918615.7448127-2212-234370106125424/AnsiballZ_file.py'
Jan 20 14:16:56 compute-1 sudo[218190]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:16:56 compute-1 python3.9[218192]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:16:56 compute-1 sudo[218190]: pam_unix(sudo:session): session closed for user root
Jan 20 14:16:56 compute-1 ceph-mon[81775]: pgmap v753: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:16:56 compute-1 sudo[218342]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yqphtkbgykzcpgmcxfmxobnmknijjkoz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918616.3992767-2212-163187008964247/AnsiballZ_file.py'
Jan 20 14:16:56 compute-1 sudo[218342]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:16:56 compute-1 python3.9[218344]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:16:56 compute-1 sudo[218342]: pam_unix(sudo:session): session closed for user root
Jan 20 14:16:57 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:16:57 compute-1 sudo[218494]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ommrryeosbxoqgnmmhenwysmhgyxnujp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918617.1105244-2212-76621928949333/AnsiballZ_file.py'
Jan 20 14:16:57 compute-1 sudo[218494]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:16:57 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:16:57 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:16:57 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:16:57.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:16:57 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:16:57 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:16:57 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:16:57.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:16:57 compute-1 python3.9[218496]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/etc/ceph setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:16:57 compute-1 sudo[218494]: pam_unix(sudo:session): session closed for user root
Jan 20 14:16:58 compute-1 sudo[218647]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qnwmpuoawijdqkrlfipdyaszosenufdl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918617.8469582-2212-69784097736028/AnsiballZ_file.py'
Jan 20 14:16:58 compute-1 sudo[218647]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:16:58 compute-1 python3.9[218649]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:16:58 compute-1 sudo[218647]: pam_unix(sudo:session): session closed for user root
Jan 20 14:16:58 compute-1 sudo[218799]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-glruhtfhruumghnajaeeylchseeooxob ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918618.5828245-2212-48575603931201/AnsiballZ_file.py'
Jan 20 14:16:58 compute-1 sudo[218799]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:16:59 compute-1 python3.9[218801]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/nvme setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:16:59 compute-1 sudo[218799]: pam_unix(sudo:session): session closed for user root
Jan 20 14:16:59 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:16:59 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:16:59 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:16:59.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:16:59 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:16:59 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:16:59 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:16:59.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:16:59 compute-1 sudo[218952]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-skagokhbrunwcryydztokfdpjdfqnpcv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918619.282134-2212-259802918448396/AnsiballZ_file.py'
Jan 20 14:16:59 compute-1 sudo[218952]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:16:59 compute-1 python3.9[218954]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/run/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:16:59 compute-1 sudo[218952]: pam_unix(sudo:session): session closed for user root
Jan 20 14:17:00 compute-1 ceph-mon[81775]: pgmap v754: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:17:01 compute-1 ceph-mon[81775]: pgmap v755: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:17:01 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:17:01 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:17:01 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:17:01.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:17:01 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:17:01 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:17:01 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:17:01.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:17:02 compute-1 ceph-mon[81775]: pgmap v756: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:17:02 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:17:03 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:17:03 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:17:03 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:17:03.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:17:03 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:17:03 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:17:03 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:17:03.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:17:04 compute-1 ceph-mon[81775]: pgmap v757: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:17:05 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:17:05 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:17:05 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:17:05.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:17:05 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:17:05 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:17:05 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:17:05.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:17:05 compute-1 sudo[219107]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nbhjaddsdhimypjnoxikixvytvcytapg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918625.0973809-2537-105374198101065/AnsiballZ_getent.py'
Jan 20 14:17:05 compute-1 sudo[219107]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:17:05 compute-1 python3.9[219109]: ansible-ansible.builtin.getent Invoked with database=passwd key=nova fail_key=True service=None split=None
Jan 20 14:17:05 compute-1 sudo[219107]: pam_unix(sudo:session): session closed for user root
Jan 20 14:17:06 compute-1 ceph-mon[81775]: pgmap v758: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:17:06 compute-1 sudo[219260]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uvfassufpxcrvrktfgirradpnaonwcgh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918626.1579666-2561-243874150377549/AnsiballZ_group.py'
Jan 20 14:17:06 compute-1 sudo[219260]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:17:06 compute-1 python3.9[219262]: ansible-ansible.builtin.group Invoked with gid=42436 name=nova state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 20 14:17:06 compute-1 groupadd[219263]: group added to /etc/group: name=nova, GID=42436
Jan 20 14:17:06 compute-1 groupadd[219263]: group added to /etc/gshadow: name=nova
Jan 20 14:17:06 compute-1 groupadd[219263]: new group: name=nova, GID=42436
Jan 20 14:17:07 compute-1 sudo[219260]: pam_unix(sudo:session): session closed for user root
Jan 20 14:17:07 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:17:07 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:17:07 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:17:07 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:17:07.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:17:07 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:17:07 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:17:07 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:17:07.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:17:07 compute-1 sudo[219419]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-woslewbmtpruwqngbagdbvohejuenodu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918627.272226-2585-5190065456555/AnsiballZ_user.py'
Jan 20 14:17:07 compute-1 sudo[219419]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:17:08 compute-1 python3.9[219421]: ansible-ansible.builtin.user Invoked with comment=nova user group=nova groups=['libvirt'] name=nova shell=/bin/sh state=present uid=42436 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-1 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Jan 20 14:17:08 compute-1 useradd[219423]: new user: name=nova, UID=42436, GID=42436, home=/home/nova, shell=/bin/sh, from=/dev/pts/0
Jan 20 14:17:08 compute-1 useradd[219423]: add 'nova' to group 'libvirt'
Jan 20 14:17:08 compute-1 useradd[219423]: add 'nova' to shadow group 'libvirt'
Jan 20 14:17:08 compute-1 sudo[219419]: pam_unix(sudo:session): session closed for user root
Jan 20 14:17:08 compute-1 ceph-mon[81775]: pgmap v759: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:17:09 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:17:09 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:17:09 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:17:09.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:17:09 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:17:09 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:17:09 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:17:09.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:17:09 compute-1 sshd-session[219454]: Accepted publickey for zuul from 192.168.122.30 port 49770 ssh2: ECDSA SHA256:Yw0kyD5N4lqNgr1J3b5cYIIxKFrTRY8zW6kk+n6imz4
Jan 20 14:17:09 compute-1 systemd-logind[783]: New session 50 of user zuul.
Jan 20 14:17:09 compute-1 systemd[1]: Started Session 50 of User zuul.
Jan 20 14:17:09 compute-1 sshd-session[219454]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 20 14:17:09 compute-1 sshd-session[219458]: Received disconnect from 192.168.122.30 port 49770:11: disconnected by user
Jan 20 14:17:09 compute-1 sshd-session[219458]: Disconnected from user zuul 192.168.122.30 port 49770
Jan 20 14:17:09 compute-1 sshd-session[219454]: pam_unix(sshd:session): session closed for user zuul
Jan 20 14:17:09 compute-1 systemd[1]: session-50.scope: Deactivated successfully.
Jan 20 14:17:09 compute-1 systemd-logind[783]: Session 50 logged out. Waiting for processes to exit.
Jan 20 14:17:09 compute-1 systemd-logind[783]: Removed session 50.
Jan 20 14:17:10 compute-1 python3.9[219608]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/config.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:17:11 compute-1 ceph-mon[81775]: pgmap v760: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:17:11 compute-1 python3.9[219729]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/config.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1768918630.1009061-2661-124490268263576/.source.json follow=False _original_basename=config.json.j2 checksum=b51012bfb0ca26296dcf3793a2f284446fb1395e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:17:11 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:17:11 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:17:11 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:17:11.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:17:11 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:17:11 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:17:11 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:17:11.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:17:11 compute-1 podman[219853]: 2026-01-20 14:17:11.614014449 +0000 UTC m=+0.107378375 container health_status 72be95c12e041eb2cd1a16ad70dfa1391d9f03a5f3180a614ead9459dcfcd46f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 20 14:17:11 compute-1 python3.9[219890]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova-blank.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:17:12 compute-1 python3.9[219979]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/nova/nova-blank.conf _original_basename=nova-blank.conf recurse=False state=file path=/var/lib/openstack/config/nova/nova-blank.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:17:12 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:17:12 compute-1 sudo[220017]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:17:12 compute-1 sudo[220017]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:17:12 compute-1 sudo[220017]: pam_unix(sudo:session): session closed for user root
Jan 20 14:17:12 compute-1 sudo[220074]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:17:12 compute-1 sudo[220074]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:17:12 compute-1 ceph-mon[81775]: pgmap v761: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:17:12 compute-1 sudo[220074]: pam_unix(sudo:session): session closed for user root
Jan 20 14:17:12 compute-1 python3.9[220179]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/ssh-config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:17:13 compute-1 python3.9[220300]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/ssh-config mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1768918632.3173077-2661-156486173364267/.source follow=False _original_basename=ssh-config checksum=4297f735c41bdc1ff52d72e6f623a02242f37958 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:17:13 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:17:13 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:17:13 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:17:13.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:17:13 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:17:13 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:17:13 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:17:13.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:17:14 compute-1 python3.9[220451]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/02-nova-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:17:14 compute-1 ceph-mon[81775]: pgmap v762: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:17:14 compute-1 python3.9[220572]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/02-nova-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1768918633.5302818-2661-72270075553730/.source.conf follow=False _original_basename=02-nova-host-specific.conf.j2 checksum=bc7f3bb7d4094c596a18178a888511b54e157ba4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:17:15 compute-1 python3.9[220722]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova_statedir_ownership.py follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:17:15 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:17:15 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:17:15 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:17:15.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:17:15 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:17:15 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:17:15 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:17:15.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:17:15 compute-1 sudo[220818]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:17:15 compute-1 sudo[220818]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:17:15 compute-1 sudo[220818]: pam_unix(sudo:session): session closed for user root
Jan 20 14:17:16 compute-1 sudo[220870]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:17:16 compute-1 sudo[220870]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:17:16 compute-1 sudo[220870]: pam_unix(sudo:session): session closed for user root
Jan 20 14:17:16 compute-1 sudo[220895]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:17:16 compute-1 sudo[220895]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:17:16 compute-1 sudo[220895]: pam_unix(sudo:session): session closed for user root
Jan 20 14:17:16 compute-1 python3.9[220869]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/nova_statedir_ownership.py mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1768918634.947498-2661-44937292821040/.source.py follow=False _original_basename=nova_statedir_ownership.py checksum=c6c8a3cfefa5efd60ceb1408c4e977becedb71e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:17:16 compute-1 sudo[220920]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 20 14:17:16 compute-1 sudo[220920]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:17:16 compute-1 ceph-mon[81775]: pgmap v763: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:17:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:17:16.377 140354 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:17:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:17:16.378 140354 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:17:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:17:16.379 140354 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:17:16 compute-1 sudo[220920]: pam_unix(sudo:session): session closed for user root
Jan 20 14:17:16 compute-1 python3.9[221125]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/run-on-host follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:17:17 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:17:17 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:17:17 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:17:17 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:17:17 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 14:17:17 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:17:17 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 14:17:17 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 14:17:17 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:17:17 compute-1 python3.9[221246]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/run-on-host mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1768918636.3454983-2661-175762974799744/.source follow=False _original_basename=run-on-host checksum=93aba8edc83d5878604a66d37fea2f12b60bdea2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:17:17 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:17:17 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:17:17 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:17:17.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:17:17 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:17:17 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:17:17 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:17:17.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:17:18 compute-1 sudo[221397]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wvdoqzoupjlodsqvnbjxwctrilthoagm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918637.9714093-2909-152550931674241/AnsiballZ_file.py'
Jan 20 14:17:18 compute-1 sudo[221397]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:17:18 compute-1 ceph-mon[81775]: pgmap v764: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:17:18 compute-1 python3.9[221399]: ansible-ansible.builtin.file Invoked with group=nova mode=0700 owner=nova path=/home/nova/.ssh state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:17:18 compute-1 sudo[221397]: pam_unix(sudo:session): session closed for user root
Jan 20 14:17:19 compute-1 sudo[221549]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mfyhkajtxqldmxshjtsxipvjazrzpubg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918638.8207948-2933-75719645577021/AnsiballZ_copy.py'
Jan 20 14:17:19 compute-1 sudo[221549]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:17:19 compute-1 python3.9[221551]: ansible-ansible.legacy.copy Invoked with dest=/home/nova/.ssh/authorized_keys group=nova mode=0600 owner=nova remote_src=True src=/var/lib/openstack/config/nova/ssh-publickey backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:17:19 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:17:19 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:17:19 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:17:19.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:17:19 compute-1 sudo[221549]: pam_unix(sudo:session): session closed for user root
Jan 20 14:17:19 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:17:19 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:17:19 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:17:19.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:17:20 compute-1 sudo[221702]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kgnqctqjyxginngfkgaqfyscaztsnxjr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918639.7068994-2957-8022422374356/AnsiballZ_stat.py'
Jan 20 14:17:20 compute-1 sudo[221702]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:17:20 compute-1 python3.9[221704]: ansible-ansible.builtin.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 20 14:17:20 compute-1 sudo[221702]: pam_unix(sudo:session): session closed for user root
Jan 20 14:17:20 compute-1 sudo[221854]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lllqplgukbmnqnixfejyxoplelwkjseg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918640.594747-2981-138417503381827/AnsiballZ_stat.py'
Jan 20 14:17:20 compute-1 sudo[221854]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:17:21 compute-1 ceph-mon[81775]: pgmap v765: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:17:21 compute-1 python3.9[221856]: ansible-ansible.legacy.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:17:21 compute-1 sudo[221854]: pam_unix(sudo:session): session closed for user root
Jan 20 14:17:21 compute-1 sudo[221977]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mshppgtxplfpaycxxfiyppygvepmivly ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918640.594747-2981-138417503381827/AnsiballZ_copy.py'
Jan 20 14:17:21 compute-1 sudo[221977]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:17:21 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:17:21 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:17:21 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:17:21.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:17:21 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:17:21 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:17:21 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:17:21.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:17:21 compute-1 python3.9[221979]: ansible-ansible.legacy.copy Invoked with attributes=+i dest=/var/lib/nova/compute_id group=nova mode=0400 owner=nova src=/home/zuul/.ansible/tmp/ansible-tmp-1768918640.594747-2981-138417503381827/.source _original_basename=.xz2y6jow follow=False checksum=159442c4cde0bdcbf09a3d9dce1a41964352533d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None
Jan 20 14:17:21 compute-1 sudo[221977]: pam_unix(sudo:session): session closed for user root
Jan 20 14:17:22 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:17:22 compute-1 ceph-osd[79119]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 20 14:17:22 compute-1 ceph-osd[79119]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Cumulative writes: 6467 writes, 26K keys, 6467 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 6467 writes, 1151 syncs, 5.62 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 483 writes, 730 keys, 483 commit groups, 1.0 writes per commit group, ingest: 0.24 MB, 0.00 MB/s
                                           Interval WAL: 483 writes, 236 syncs, 2.05 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.008       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.008       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.008       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557dbecdef30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557dbecdef30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557dbecdef30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557dbecdef30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557dbecdef30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557dbecdef30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557dbecdef30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557dbecdf610#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557dbecdf610#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557dbecdf610#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557dbecdef30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557dbecdef30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Jan 20 14:17:22 compute-1 python3.9[222132]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 20 14:17:22 compute-1 ceph-mon[81775]: pgmap v766: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:17:23 compute-1 podman[222258]: 2026-01-20 14:17:23.272926872 +0000 UTC m=+0.088115317 container health_status 533cd8e0c3b8b7f910f15cef49e7f879ab46c12ce646a35628c15f13725eb500 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3)
Jan 20 14:17:23 compute-1 python3.9[222297]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:17:23 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:17:23 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:17:23 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:17:23.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:17:23 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:17:23 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:17:23 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:17:23.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:17:23 compute-1 python3.9[222426]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1768918642.9142601-3060-207180423606463/.source.json follow=False _original_basename=nova_compute.json.j2 checksum=aff5546b44cf4461a7541a94e4cce1332c9b58b0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:17:24 compute-1 ceph-mon[81775]: pgmap v767: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:17:24 compute-1 sudo[222549]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:17:24 compute-1 sudo[222549]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:17:24 compute-1 sudo[222549]: pam_unix(sudo:session): session closed for user root
Jan 20 14:17:24 compute-1 sudo[222602]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 14:17:24 compute-1 sudo[222602]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:17:24 compute-1 sudo[222602]: pam_unix(sudo:session): session closed for user root
Jan 20 14:17:24 compute-1 python3.9[222600]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute_init.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:17:25 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:17:25 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:17:25 compute-1 python3.9[222747]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute_init.json mode=0700 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1768918644.2289884-3104-262110577035734/.source.json follow=False _original_basename=nova_compute_init.json.j2 checksum=60b024e6db49dc6e700fc0d50263944d98d4c034 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:17:25 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:17:25 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:17:25 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:17:25.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:17:25 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:17:25 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:17:25 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:17:25.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:17:26 compute-1 ceph-mon[81775]: pgmap v768: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:17:26 compute-1 sudo[222898]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ziygdfvnorwadhqsxohsmbopecnpyimj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918646.0228038-3155-228636951471512/AnsiballZ_container_config_data.py'
Jan 20 14:17:26 compute-1 sudo[222898]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:17:26 compute-1 python3.9[222900]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute_init.json debug=False
Jan 20 14:17:26 compute-1 sudo[222898]: pam_unix(sudo:session): session closed for user root
Jan 20 14:17:27 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:17:27 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:17:27 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:17:27 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:17:27.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:17:27 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:17:27 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:17:27 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:17:27.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:17:27 compute-1 sudo[223051]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cxykoeryypxrersfxfaowenhwabmoiwf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918647.1744325-3188-63504603505790/AnsiballZ_container_config_hash.py'
Jan 20 14:17:27 compute-1 sudo[223051]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:17:27 compute-1 python3.9[223053]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 20 14:17:27 compute-1 sudo[223051]: pam_unix(sudo:session): session closed for user root
Jan 20 14:17:28 compute-1 ceph-mon[81775]: pgmap v769: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:17:28 compute-1 sudo[223203]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wcpbxtbhvompvnwxnsyitakzysvsywik ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1768918648.3537705-3218-110790321387117/AnsiballZ_edpm_container_manage.py'
Jan 20 14:17:28 compute-1 sudo[223203]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:17:29 compute-1 python3[223205]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute_init.json containers=[] log_base_path=/var/log/containers/stdouts debug=False
Jan 20 14:17:29 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:17:29 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:17:29 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:17:29.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:17:29 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:17:29 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:17:29 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:17:29.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:17:29 compute-1 ceph-mon[81775]: pgmap v770: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:17:31 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:17:31 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:17:31 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:17:31.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:17:31 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:17:31 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:17:31 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:17:31.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:17:32 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:17:32 compute-1 sudo[223262]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:17:32 compute-1 sudo[223262]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:17:32 compute-1 sudo[223262]: pam_unix(sudo:session): session closed for user root
Jan 20 14:17:32 compute-1 ceph-mon[81775]: pgmap v771: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:17:32 compute-1 sudo[223287]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:17:32 compute-1 sudo[223287]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:17:32 compute-1 sudo[223287]: pam_unix(sudo:session): session closed for user root
Jan 20 14:17:33 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:17:33 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:17:33 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:17:33.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:17:33 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:17:33 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:17:33 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:17:33.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:17:35 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:17:35 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:17:35 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:17:35.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:17:35 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:17:35 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:17:35 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:17:35.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:17:37 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:17:37 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:17:37 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:17:37.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:17:37 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:17:37 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:17:37 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:17:37.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:17:37 compute-1 ceph-mds[84722]: mds.beacon.cephfs.compute-1.rtofcx missed beacon ack from the monitors
Jan 20 14:17:39 compute-1 ceph-mon[81775]: pgmap v772: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:17:39 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:17:39 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:17:39 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:17:39.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:17:39 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:17:39 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:17:39 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:17:39.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:17:39 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:17:39 compute-1 podman[223220]: 2026-01-20 14:17:39.793797037 +0000 UTC m=+10.429437170 image pull e3166cc074f328e3b121ff82d56ed43a2542af699baffe6874520fe3837c2b18 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Jan 20 14:17:40 compute-1 podman[223357]: 2026-01-20 14:17:40.0152152 +0000 UTC m=+0.082595884 container create 2f02162efb4537956962187671675fe84df08f5b348c51f1cd15ed9e22a6e5c3 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, container_name=nova_compute_init, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, config_id=edpm, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']})
Jan 20 14:17:40 compute-1 podman[223357]: 2026-01-20 14:17:39.975751209 +0000 UTC m=+0.043131903 image pull e3166cc074f328e3b121ff82d56ed43a2542af699baffe6874520fe3837c2b18 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Jan 20 14:17:40 compute-1 python3[223205]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute_init --conmon-pidfile /run/nova_compute_init.pid --env NOVA_STATEDIR_OWNERSHIP_SKIP=/var/lib/nova/compute_id --env __OS_DEBUG=False --label config_id=edpm --label container_name=nova_compute_init --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']} --log-driver journald --log-level info --network none --privileged=False --security-opt label=disable --user root --volume /dev/log:/dev/log --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z --volume /var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init
Jan 20 14:17:40 compute-1 sudo[223203]: pam_unix(sudo:session): session closed for user root
Jan 20 14:17:41 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:17:41 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:17:41 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:17:41.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:17:41 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:17:41 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:17:41 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:17:41.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:17:41 compute-1 ceph-mds[84722]: mds.beacon.cephfs.compute-1.rtofcx missed beacon ack from the monitors
Jan 20 14:17:42 compute-1 podman[223421]: 2026-01-20 14:17:42.079152626 +0000 UTC m=+0.114682668 container health_status 72be95c12e041eb2cd1a16ad70dfa1391d9f03a5f3180a614ead9459dcfcd46f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Jan 20 14:17:43 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:17:43 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:17:43 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:17:43.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:17:43 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:17:43 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:17:43 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:17:43.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:17:44 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:17:44 compute-1 ceph-mon[81775]: pgmap v773: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:17:44 compute-1 ceph-mon[81775]: pgmap v774: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:17:45 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:17:45 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:17:45 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:17:45.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:17:45 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:17:45 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:17:45 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:17:45.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:17:45 compute-1 ceph-mon[81775]: pgmap v775: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:17:45 compute-1 ceph-mon[81775]: pgmap v776: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:17:45 compute-1 ceph-mon[81775]: pgmap v777: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:17:45 compute-1 ceph-mon[81775]: pgmap v778: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:17:46 compute-1 sudo[223573]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zjkujbyruzhhkfntonpjpqciekduqbzn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918665.9203694-3242-221442931983636/AnsiballZ_stat.py'
Jan 20 14:17:46 compute-1 sudo[223573]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:17:46 compute-1 python3.9[223575]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 20 14:17:46 compute-1 sudo[223573]: pam_unix(sudo:session): session closed for user root
Jan 20 14:17:47 compute-1 sudo[223727]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nesisfqopuawuonwyfydpgcnlpihfjkb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918667.134871-3278-141240964449802/AnsiballZ_container_config_data.py'
Jan 20 14:17:47 compute-1 sudo[223727]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:17:47 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:17:47 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:17:47 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:17:47.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:17:47 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:17:47 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:17:47 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:17:47.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:17:47 compute-1 python3.9[223730]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute.json debug=False
Jan 20 14:17:47 compute-1 sudo[223727]: pam_unix(sudo:session): session closed for user root
Jan 20 14:17:48 compute-1 ceph-mon[81775]: pgmap v779: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:17:48 compute-1 sudo[223880]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jvlomsmimimgvpowkehqyyypgsexnnzb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918668.2387779-3311-67744149445923/AnsiballZ_container_config_hash.py'
Jan 20 14:17:48 compute-1 sudo[223880]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:17:48 compute-1 python3.9[223882]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 20 14:17:48 compute-1 sudo[223880]: pam_unix(sudo:session): session closed for user root
Jan 20 14:17:49 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:17:49 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:17:49 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:17:49.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:17:49 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:17:49 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:17:49 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:17:49.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:17:49 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:17:49 compute-1 sudo[224033]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qbumswodglcfkaqvjycopuxzzkkrwawf ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1768918669.2699537-3341-14264430460991/AnsiballZ_edpm_container_manage.py'
Jan 20 14:17:49 compute-1 sudo[224033]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:17:50 compute-1 python3[224035]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute.json containers=[] log_base_path=/var/log/containers/stdouts debug=False
Jan 20 14:17:50 compute-1 ceph-mon[81775]: pgmap v780: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:17:50 compute-1 podman[224072]: 2026-01-20 14:17:50.344266384 +0000 UTC m=+0.075946368 container create 9b6a80ba477be5ac2929fd65dfa7fd443cd6e560d8aa8cb2bdfccdc381ce7301 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=nova_compute, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3)
Jan 20 14:17:50 compute-1 podman[224072]: 2026-01-20 14:17:50.312353408 +0000 UTC m=+0.044033412 image pull e3166cc074f328e3b121ff82d56ed43a2542af699baffe6874520fe3837c2b18 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Jan 20 14:17:50 compute-1 python3[224035]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute --conmon-pidfile /run/nova_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --label config_id=edpm --label container_name=nova_compute --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']} --log-driver journald --log-level info --network host --pid host --privileged=True --user nova --volume /var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro --volume /var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /etc/localtime:/etc/localtime:ro --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /var/lib/libvirt:/var/lib/libvirt --volume /run/libvirt:/run/libvirt:shared --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath --volume /etc/multipath.conf:/etc/multipath.conf:ro,Z --volume /etc/iscsi:/etc/iscsi:ro --volume /etc/nvme:/etc/nvme --volume /var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified kolla_start
Jan 20 14:17:50 compute-1 sudo[224033]: pam_unix(sudo:session): session closed for user root
Jan 20 14:17:51 compute-1 sudo[224259]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xvfcvcipvjohynprikyaaaziivmmzsvi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918671.0821533-3365-148171302250103/AnsiballZ_stat.py'
Jan 20 14:17:51 compute-1 sudo[224259]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:17:51 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:17:51 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:17:51 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:17:51.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:17:51 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:17:51 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:17:51 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:17:51.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:17:51 compute-1 python3.9[224261]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 20 14:17:51 compute-1 sudo[224259]: pam_unix(sudo:session): session closed for user root
Jan 20 14:17:52 compute-1 ceph-mon[81775]: pgmap v781: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:17:52 compute-1 sudo[224344]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:17:52 compute-1 sudo[224344]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:17:52 compute-1 sudo[224344]: pam_unix(sudo:session): session closed for user root
Jan 20 14:17:52 compute-1 sudo[224389]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:17:52 compute-1 sudo[224389]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:17:52 compute-1 sudo[224389]: pam_unix(sudo:session): session closed for user root
Jan 20 14:17:53 compute-1 sudo[224464]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wviwxgqugxhakebvmbyphzqlmxeywrgc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918672.0730214-3392-146999238845787/AnsiballZ_file.py'
Jan 20 14:17:53 compute-1 sudo[224464]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:17:53 compute-1 python3.9[224466]: ansible-file Invoked with path=/etc/systemd/system/edpm_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:17:53 compute-1 sudo[224464]: pam_unix(sudo:session): session closed for user root
Jan 20 14:17:53 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:17:53 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:17:53 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:17:53.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:17:53 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:17:53 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:17:53 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:17:53.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:17:53 compute-1 sudo[224629]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-plrgubyilfepngtgkuvgzjklxxxocsap ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918673.3099973-3392-113404318191523/AnsiballZ_copy.py'
Jan 20 14:17:53 compute-1 sudo[224629]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:17:53 compute-1 podman[224590]: 2026-01-20 14:17:53.824811345 +0000 UTC m=+0.087714753 container health_status 533cd8e0c3b8b7f910f15cef49e7f879ab46c12ce646a35628c15f13725eb500 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 20 14:17:54 compute-1 python3.9[224636]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1768918673.3099973-3392-113404318191523/source dest=/etc/systemd/system/edpm_nova_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:17:54 compute-1 sudo[224629]: pam_unix(sudo:session): session closed for user root
Jan 20 14:17:54 compute-1 sudo[224712]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ahuswjchipzjcqhllhxoacthsjjitnhw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918673.3099973-3392-113404318191523/AnsiballZ_systemd.py'
Jan 20 14:17:54 compute-1 sudo[224712]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:17:54 compute-1 ceph-mon[81775]: pgmap v782: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:17:54 compute-1 ceph-mon[81775]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 20 14:17:54 compute-1 ceph-mon[81775]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Cumulative writes: 3495 writes, 19K keys, 3495 commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.03 MB/s
                                           Cumulative WAL: 3495 writes, 3495 syncs, 1.00 writes per sync, written: 0.04 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1370 writes, 6580 keys, 1370 commit groups, 1.0 writes per commit group, ingest: 14.40 MB, 0.02 MB/s
                                           Interval WAL: 1370 writes, 1370 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     54.6      0.37              0.08         9    0.041       0      0       0.0       0.0
                                             L6      1/0    7.34 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   3.2    100.2     83.7      0.76              0.25         8    0.095     35K   4269       0.0       0.0
                                            Sum      1/0    7.34 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   4.2     67.7     74.2      1.13              0.33        17    0.066     35K   4269       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   6.3    107.0    105.5      0.36              0.16         8    0.045     19K   2485       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0    100.2     83.7      0.76              0.25         8    0.095     35K   4269       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     54.9      0.36              0.08         8    0.046       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.7      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.020, interval 0.006
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.08 GB write, 0.07 MB/s write, 0.07 GB read, 0.06 MB/s read, 1.1 seconds
                                           Interval compaction: 0.04 GB write, 0.06 MB/s write, 0.04 GB read, 0.06 MB/s read, 0.4 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564d515a71f0#2 capacity: 308.00 MB usage: 4.67 MB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 0 last_secs: 7.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(253,4.37 MB,1.4184%) FilterBlock(17,108.98 KB,0.0345552%) IndexBlock(17,203.77 KB,0.0646071%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Jan 20 14:17:54 compute-1 python3.9[224714]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 20 14:17:54 compute-1 systemd[1]: Reloading.
Jan 20 14:17:54 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:17:54 compute-1 systemd-rc-local-generator[224737]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 14:17:54 compute-1 systemd-sysv-generator[224741]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 14:17:54 compute-1 sudo[224712]: pam_unix(sudo:session): session closed for user root
Jan 20 14:17:55 compute-1 sudo[224823]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hsembfypkvxqavloludifdicrdcfnseh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918673.3099973-3392-113404318191523/AnsiballZ_systemd.py'
Jan 20 14:17:55 compute-1 sudo[224823]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:17:55 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:17:55 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:17:55 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:17:55.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:17:55 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:17:55 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:17:55 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:17:55.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:17:55 compute-1 python3.9[224825]: ansible-systemd Invoked with state=restarted name=edpm_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 20 14:17:55 compute-1 systemd[1]: Reloading.
Jan 20 14:17:55 compute-1 systemd-sysv-generator[224859]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 14:17:55 compute-1 systemd-rc-local-generator[224853]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 14:17:55 compute-1 ceph-mon[81775]: pgmap v783: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:17:55 compute-1 systemd[1]: Starting nova_compute container...
Jan 20 14:17:56 compute-1 systemd[1]: Started libcrun container.
Jan 20 14:17:56 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09bfc735065820b3b853aee6741a29dbfb1f97c4dd98fe504baf6c65a72090cd/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Jan 20 14:17:56 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09bfc735065820b3b853aee6741a29dbfb1f97c4dd98fe504baf6c65a72090cd/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Jan 20 14:17:56 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09bfc735065820b3b853aee6741a29dbfb1f97c4dd98fe504baf6c65a72090cd/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Jan 20 14:17:56 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09bfc735065820b3b853aee6741a29dbfb1f97c4dd98fe504baf6c65a72090cd/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Jan 20 14:17:56 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09bfc735065820b3b853aee6741a29dbfb1f97c4dd98fe504baf6c65a72090cd/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Jan 20 14:17:56 compute-1 podman[224866]: 2026-01-20 14:17:56.100391075 +0000 UTC m=+0.115127541 container init 9b6a80ba477be5ac2929fd65dfa7fd443cd6e560d8aa8cb2bdfccdc381ce7301 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Jan 20 14:17:56 compute-1 podman[224866]: 2026-01-20 14:17:56.108329171 +0000 UTC m=+0.123065617 container start 9b6a80ba477be5ac2929fd65dfa7fd443cd6e560d8aa8cb2bdfccdc381ce7301 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, org.label-schema.license=GPLv2)
Jan 20 14:17:56 compute-1 podman[224866]: nova_compute
Jan 20 14:17:56 compute-1 nova_compute[224882]: + sudo -E kolla_set_configs
Jan 20 14:17:56 compute-1 systemd[1]: Started nova_compute container.
Jan 20 14:17:56 compute-1 sudo[224823]: pam_unix(sudo:session): session closed for user root
Jan 20 14:17:56 compute-1 nova_compute[224882]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Jan 20 14:17:56 compute-1 nova_compute[224882]: INFO:__main__:Validating config file
Jan 20 14:17:56 compute-1 nova_compute[224882]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Jan 20 14:17:56 compute-1 nova_compute[224882]: INFO:__main__:Copying service configuration files
Jan 20 14:17:56 compute-1 nova_compute[224882]: INFO:__main__:Deleting /etc/nova/nova.conf
Jan 20 14:17:56 compute-1 nova_compute[224882]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Jan 20 14:17:56 compute-1 nova_compute[224882]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Jan 20 14:17:56 compute-1 nova_compute[224882]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Jan 20 14:17:56 compute-1 nova_compute[224882]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Jan 20 14:17:56 compute-1 nova_compute[224882]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Jan 20 14:17:56 compute-1 nova_compute[224882]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Jan 20 14:17:56 compute-1 nova_compute[224882]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 20 14:17:56 compute-1 nova_compute[224882]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 20 14:17:56 compute-1 nova_compute[224882]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Jan 20 14:17:56 compute-1 nova_compute[224882]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Jan 20 14:17:56 compute-1 nova_compute[224882]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 20 14:17:56 compute-1 nova_compute[224882]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 20 14:17:56 compute-1 nova_compute[224882]: INFO:__main__:Deleting /etc/ceph
Jan 20 14:17:56 compute-1 nova_compute[224882]: INFO:__main__:Creating directory /etc/ceph
Jan 20 14:17:56 compute-1 nova_compute[224882]: INFO:__main__:Setting permission for /etc/ceph
Jan 20 14:17:56 compute-1 nova_compute[224882]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Jan 20 14:17:56 compute-1 nova_compute[224882]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Jan 20 14:17:56 compute-1 nova_compute[224882]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Jan 20 14:17:56 compute-1 nova_compute[224882]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Jan 20 14:17:56 compute-1 nova_compute[224882]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Jan 20 14:17:56 compute-1 nova_compute[224882]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Jan 20 14:17:56 compute-1 nova_compute[224882]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Jan 20 14:17:56 compute-1 nova_compute[224882]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Jan 20 14:17:56 compute-1 nova_compute[224882]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Jan 20 14:17:56 compute-1 nova_compute[224882]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Jan 20 14:17:56 compute-1 nova_compute[224882]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Jan 20 14:17:56 compute-1 nova_compute[224882]: INFO:__main__:Writing out command to execute
Jan 20 14:17:56 compute-1 nova_compute[224882]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Jan 20 14:17:56 compute-1 nova_compute[224882]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Jan 20 14:17:56 compute-1 nova_compute[224882]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Jan 20 14:17:56 compute-1 nova_compute[224882]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Jan 20 14:17:56 compute-1 nova_compute[224882]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Jan 20 14:17:56 compute-1 nova_compute[224882]: ++ cat /run_command
Jan 20 14:17:56 compute-1 nova_compute[224882]: + CMD=nova-compute
Jan 20 14:17:56 compute-1 nova_compute[224882]: + ARGS=
Jan 20 14:17:56 compute-1 nova_compute[224882]: + sudo kolla_copy_cacerts
Jan 20 14:17:56 compute-1 nova_compute[224882]: + [[ ! -n '' ]]
Jan 20 14:17:56 compute-1 nova_compute[224882]: + . kolla_extend_start
Jan 20 14:17:56 compute-1 nova_compute[224882]: + echo 'Running command: '\''nova-compute'\'''
Jan 20 14:17:56 compute-1 nova_compute[224882]: Running command: 'nova-compute'
Jan 20 14:17:56 compute-1 nova_compute[224882]: + umask 0022
Jan 20 14:17:56 compute-1 nova_compute[224882]: + exec nova-compute
Jan 20 14:17:57 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:17:57 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:17:57 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:17:57.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:17:57 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:17:57 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:17:57 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:17:57.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:17:57 compute-1 python3.9[225044]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner_healthcheck.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 20 14:17:58 compute-1 nova_compute[224882]: 2026-01-20 14:17:58.238 224886 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Jan 20 14:17:58 compute-1 nova_compute[224882]: 2026-01-20 14:17:58.238 224886 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Jan 20 14:17:58 compute-1 nova_compute[224882]: 2026-01-20 14:17:58.238 224886 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Jan 20 14:17:58 compute-1 nova_compute[224882]: 2026-01-20 14:17:58.239 224886 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs
Jan 20 14:17:58 compute-1 ceph-mon[81775]: pgmap v784: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:17:58 compute-1 nova_compute[224882]: 2026-01-20 14:17:58.386 224886 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:17:58 compute-1 nova_compute[224882]: 2026-01-20 14:17:58.414 224886 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.028s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:17:58 compute-1 nova_compute[224882]: 2026-01-20 14:17:58.415 224886 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
Jan 20 14:17:58 compute-1 python3.9[225197]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.418 224886 INFO nova.virt.driver [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'
Jan 20 14:17:59 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:17:59 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:17:59 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:17:59.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:17:59 compute-1 python3.9[225349]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service.requires follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 20 14:17:59 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:17:59 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:17:59 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:17:59.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.581 224886 INFO nova.compute.provider_config [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.608 224886 DEBUG oslo_concurrency.lockutils [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.609 224886 DEBUG oslo_concurrency.lockutils [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.609 224886 DEBUG oslo_concurrency.lockutils [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.609 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.609 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.610 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.610 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.610 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.610 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.610 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.611 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.611 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.611 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.611 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.611 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.612 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.612 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.612 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.612 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.613 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.613 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.613 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.613 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] console_host                   = compute-1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.613 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.614 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.614 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.614 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.614 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.615 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.615 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.615 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.615 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.616 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.616 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.616 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.616 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.617 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.617 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.617 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.617 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.617 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.618 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] host                           = compute-1.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.618 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.618 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.618 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.619 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.619 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.619 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.619 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.620 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.620 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.620 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.620 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.620 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.620 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.621 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.621 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.621 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.621 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.621 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.621 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.621 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.622 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.622 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.622 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.622 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.622 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.622 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.622 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.623 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.623 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.623 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.623 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.623 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.623 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.623 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.623 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.624 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.624 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.624 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.624 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.624 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.624 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.624 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] my_block_storage_ip            = 192.168.122.101 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.625 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] my_ip                          = 192.168.122.101 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.625 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.625 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.625 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.625 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.625 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.625 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.626 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.626 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.626 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.626 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.626 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.626 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.626 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.627 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.627 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.627 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.627 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.627 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.627 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.627 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.628 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.628 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.628 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.628 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.628 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.628 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.628 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.629 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.629 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.629 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.629 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.629 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.629 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.629 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.630 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.630 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.630 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.630 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.630 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.630 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.630 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.631 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.631 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.631 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.631 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.631 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.631 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.631 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.631 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.632 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.632 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.632 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.632 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.632 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.632 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.633 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.633 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.633 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.633 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.633 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.633 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.633 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.634 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.634 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.634 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.634 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.634 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.634 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.634 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.635 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.635 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.635 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.635 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.635 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.635 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.635 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.636 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.636 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.636 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.636 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.636 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.636 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.636 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.637 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.637 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.637 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.637 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.638 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.638 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.638 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.638 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.638 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.638 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.638 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.639 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.639 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.639 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.639 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.639 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.639 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.640 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.640 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.640 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.640 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.640 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.640 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.640 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.641 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.641 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.641 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.641 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.641 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.641 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.641 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.642 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.642 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.642 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.643 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.643 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.643 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.643 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.644 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.644 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.644 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.644 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.645 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.645 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.645 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.645 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.646 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.646 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.646 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.646 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.647 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.647 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.647 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.647 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.648 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.648 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.648 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.648 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.648 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.649 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.649 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.649 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.649 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.650 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.650 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.650 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.650 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.651 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.651 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.651 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.651 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.651 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.652 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.652 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.652 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.652 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.653 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.653 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.653 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.653 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.654 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.654 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.654 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.654 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.655 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.655 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.655 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.655 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.655 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.656 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.656 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.656 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.656 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.657 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.657 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.657 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.657 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.658 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.658 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.658 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.658 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.659 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.659 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.659 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.659 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.659 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.660 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.660 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.660 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.660 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.661 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.661 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.661 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.661 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.662 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.662 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.662 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.662 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.663 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.663 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.663 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.663 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.664 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.664 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.664 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.664 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.664 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.665 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.665 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.665 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.665 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.666 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.666 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.666 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.666 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.667 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.667 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.667 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.667 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.668 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.668 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.668 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.668 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.668 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.669 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.669 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.669 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.669 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.670 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.670 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.670 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.670 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.671 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.671 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.671 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.671 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.672 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.672 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.672 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.672 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.672 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.673 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.673 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.673 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.673 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.674 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.674 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.674 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.674 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.675 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.675 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.676 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.676 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.676 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.677 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.677 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.677 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.678 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.678 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.678 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.679 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.679 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.679 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.679 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.679 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.679 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.679 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.680 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.680 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.680 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.680 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.680 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.681 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.681 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.681 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.681 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.681 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.681 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.681 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.682 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.682 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.682 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.682 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.682 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.682 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.682 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.683 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.683 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.683 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.683 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.683 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.683 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.683 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.684 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.684 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.684 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.684 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.684 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.684 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.684 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.684 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.685 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.685 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.685 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.685 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.685 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.685 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.685 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.686 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.686 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.686 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.686 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.686 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.686 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.686 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.687 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.687 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.687 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.687 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.687 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.687 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.687 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.688 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.688 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.688 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.688 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.688 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.688 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.688 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.688 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.689 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.689 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.689 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.689 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.689 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.689 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.689 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.690 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.690 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.690 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.690 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.690 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.690 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.690 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.691 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.691 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.691 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.691 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.691 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.691 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.691 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.692 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.692 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.692 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.692 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.692 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.692 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.692 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.692 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.693 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.693 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.693 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.693 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.693 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.693 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.693 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.694 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.694 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] libvirt.cpu_mode               = custom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.694 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.694 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] libvirt.cpu_models             = ['Nehalem'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.694 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.694 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.694 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.695 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.695 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.695 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.695 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.695 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.695 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.696 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.696 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.696 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.696 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.696 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.696 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.696 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.697 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.697 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.697 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.697 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.697 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.697 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.697 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.697 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.698 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.698 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.698 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.698 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.698 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.698 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.698 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.699 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.699 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.699 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.699 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.699 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.699 224886 WARNING oslo_config.cfg [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Jan 20 14:17:59 compute-1 nova_compute[224882]: live_migration_uri is deprecated for removal in favor of two other options that
Jan 20 14:17:59 compute-1 nova_compute[224882]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Jan 20 14:17:59 compute-1 nova_compute[224882]: and ``live_migration_inbound_addr`` respectively.
Jan 20 14:17:59 compute-1 nova_compute[224882]: ).  Its value may be silently ignored in the future.
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.700 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.700 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.700 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.700 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.700 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.700 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.701 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.701 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.701 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.701 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.701 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.701 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.701 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.702 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.702 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.702 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.702 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.702 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.702 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] libvirt.rbd_secret_uuid        = e399cf45-e6b6-5393-99f1-75c601d3f188 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.702 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.703 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.703 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.703 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.703 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.703 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.703 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.703 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.704 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.704 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.704 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.704 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.704 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.704 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.705 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.705 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.705 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.705 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.705 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.705 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.705 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.706 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.706 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.706 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.706 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.706 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.706 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.706 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.707 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.707 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.707 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.707 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.707 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.707 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.707 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.708 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.708 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.708 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.708 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.708 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.708 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.708 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.709 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.709 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.709 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.709 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.709 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.709 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.709 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.709 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.710 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.710 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.710 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.710 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.710 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.710 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.710 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.711 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.711 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.711 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.711 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.711 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.711 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.711 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.712 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.712 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.712 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.712 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.712 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.712 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.712 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.713 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.713 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.713 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.713 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.713 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.713 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.713 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.713 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.714 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.714 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.714 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.714 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.714 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.714 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.714 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.715 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.715 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.715 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.715 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.715 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.715 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.715 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.716 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.716 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.716 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.716 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.716 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.716 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.716 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.716 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.717 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.717 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.717 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.717 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.717 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.717 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.717 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.718 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.718 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.718 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.718 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.718 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.718 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.718 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.719 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.719 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.719 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.719 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.719 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.719 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.720 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.720 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.720 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.720 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.720 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.720 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.720 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.721 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.721 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.721 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.721 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.721 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.721 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.721 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.722 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.722 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.722 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.722 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.722 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.722 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.723 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.723 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.723 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.723 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.723 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.723 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.723 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.724 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.724 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.724 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.724 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.724 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.724 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.724 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.725 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.725 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.725 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.725 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.725 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.725 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.725 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.726 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.726 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.726 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.726 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.726 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.726 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.726 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.727 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.727 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.727 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.727 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.727 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.727 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.727 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.728 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.728 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.728 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.728 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.728 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.728 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.728 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.729 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.729 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.729 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.729 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.729 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.729 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.729 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.730 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.730 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.730 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.730 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.730 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.730 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.730 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.731 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.731 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.731 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.731 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.731 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.731 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.731 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.731 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.732 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.732 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.732 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.732 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.732 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.732 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.732 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.733 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.733 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.733 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.733 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.733 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.733 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.733 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.733 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.734 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.734 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.734 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.734 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.734 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.734 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.734 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.735 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.735 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.735 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.735 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.735 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.735 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.736 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] vnc.server_proxyclient_address = 192.168.122.101 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.736 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.736 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.736 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.736 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.736 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.736 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.737 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.737 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.737 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.737 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.737 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.737 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.737 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.738 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.738 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.738 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.738 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.738 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.738 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.738 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.739 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.739 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.739 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.739 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.739 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.739 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.739 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.740 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.740 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.740 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.740 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.740 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.740 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.740 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.741 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.741 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.741 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.741 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.741 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.741 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.741 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.742 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.742 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.742 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.742 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.742 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.742 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.742 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.743 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.743 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.743 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.743 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.743 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.743 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.743 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.744 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.744 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.744 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.744 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.744 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.744 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.744 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.745 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.745 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.745 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.745 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.745 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.745 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.745 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.745 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.746 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.746 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.746 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.746 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.746 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.746 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.746 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.747 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.747 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.747 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.747 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.747 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.747 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.747 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.748 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.748 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.748 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.748 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.748 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.748 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.748 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.749 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.749 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.749 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.749 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.749 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.749 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.749 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.749 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.750 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.750 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.750 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.750 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.750 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.750 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.750 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.751 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.751 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.751 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.751 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.751 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.751 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.751 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.751 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.752 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.752 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.752 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.752 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.752 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.752 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.752 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.753 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.753 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.753 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.753 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.753 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.753 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.753 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.754 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.754 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.754 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.754 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.754 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.754 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.754 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.754 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.755 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.755 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.755 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.755 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.755 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.755 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.755 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.756 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.756 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.756 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.756 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.756 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.756 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.757 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.757 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.757 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.757 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.757 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.757 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.757 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.758 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.758 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.758 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.758 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.758 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.758 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.758 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.759 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.759 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.759 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.759 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.759 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.759 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.759 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.759 224886 DEBUG oslo_service.service [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.760 224886 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.814 224886 DEBUG nova.virt.libvirt.host [None req-5b5754ad-a2eb-46b1-a3d6-44d73c82b4c2 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.815 224886 DEBUG nova.virt.libvirt.host [None req-5b5754ad-a2eb-46b1-a3d6-44d73c82b4c2 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.815 224886 DEBUG nova.virt.libvirt.host [None req-5b5754ad-a2eb-46b1-a3d6-44d73c82b4c2 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.815 224886 DEBUG nova.virt.libvirt.host [None req-5b5754ad-a2eb-46b1-a3d6-44d73c82b4c2 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503
Jan 20 14:17:59 compute-1 systemd[1]: Starting libvirt QEMU daemon...
Jan 20 14:17:59 compute-1 systemd[1]: Started libvirt QEMU daemon.
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.911 224886 DEBUG nova.virt.libvirt.host [None req-5b5754ad-a2eb-46b1-a3d6-44d73c82b4c2 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7fbf4ad328b0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.914 224886 DEBUG nova.virt.libvirt.host [None req-5b5754ad-a2eb-46b1-a3d6-44d73c82b4c2 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7fbf4ad328b0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.915 224886 INFO nova.virt.libvirt.driver [None req-5b5754ad-a2eb-46b1-a3d6-44d73c82b4c2 - - - - - -] Connection event '1' reason 'None'
Jan 20 14:17:59 compute-1 ceph-mon[81775]: pgmap v785: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.945 224886 WARNING nova.virt.libvirt.driver [None req-5b5754ad-a2eb-46b1-a3d6-44d73c82b4c2 - - - - - -] Cannot update service status on host "compute-1.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-1.ctlplane.example.com could not be found.
Jan 20 14:17:59 compute-1 nova_compute[224882]: 2026-01-20 14:17:59.946 224886 DEBUG nova.virt.libvirt.volume.mount [None req-5b5754ad-a2eb-46b1-a3d6-44d73c82b4c2 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Jan 20 14:18:00 compute-1 sudo[225560]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oaawbydlkxditwasuvkuarmkfiysmqyq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918680.1721191-3572-43673326770529/AnsiballZ_podman_container.py'
Jan 20 14:18:00 compute-1 sudo[225560]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:18:00 compute-1 nova_compute[224882]: 2026-01-20 14:18:00.869 224886 INFO nova.virt.libvirt.host [None req-5b5754ad-a2eb-46b1-a3d6-44d73c82b4c2 - - - - - -] Libvirt host capabilities <capabilities>
Jan 20 14:18:00 compute-1 nova_compute[224882]: 
Jan 20 14:18:00 compute-1 nova_compute[224882]:   <host>
Jan 20 14:18:00 compute-1 nova_compute[224882]:     <uuid>870b1f1c-f19c-477b-b282-ee6eeba50974</uuid>
Jan 20 14:18:00 compute-1 nova_compute[224882]:     <cpu>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <arch>x86_64</arch>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <model>EPYC-Rome-v4</model>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <vendor>AMD</vendor>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <microcode version='16777317'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <signature family='23' model='49' stepping='0'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <maxphysaddr mode='emulate' bits='40'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <feature name='x2apic'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <feature name='tsc-deadline'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <feature name='osxsave'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <feature name='hypervisor'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <feature name='tsc_adjust'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <feature name='spec-ctrl'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <feature name='stibp'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <feature name='arch-capabilities'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <feature name='ssbd'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <feature name='cmp_legacy'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <feature name='topoext'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <feature name='virt-ssbd'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <feature name='lbrv'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <feature name='tsc-scale'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <feature name='vmcb-clean'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <feature name='pause-filter'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <feature name='pfthreshold'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <feature name='svme-addr-chk'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <feature name='rdctl-no'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <feature name='skip-l1dfl-vmentry'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <feature name='mds-no'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <feature name='pschange-mc-no'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <pages unit='KiB' size='4'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <pages unit='KiB' size='2048'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <pages unit='KiB' size='1048576'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:     </cpu>
Jan 20 14:18:00 compute-1 nova_compute[224882]:     <power_management>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <suspend_mem/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:     </power_management>
Jan 20 14:18:00 compute-1 nova_compute[224882]:     <iommu support='no'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:     <migration_features>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <live/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <uri_transports>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <uri_transport>tcp</uri_transport>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <uri_transport>rdma</uri_transport>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       </uri_transports>
Jan 20 14:18:00 compute-1 nova_compute[224882]:     </migration_features>
Jan 20 14:18:00 compute-1 nova_compute[224882]:     <topology>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <cells num='1'>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <cell id='0'>
Jan 20 14:18:00 compute-1 nova_compute[224882]:           <memory unit='KiB'>7864312</memory>
Jan 20 14:18:00 compute-1 nova_compute[224882]:           <pages unit='KiB' size='4'>1966078</pages>
Jan 20 14:18:00 compute-1 nova_compute[224882]:           <pages unit='KiB' size='2048'>0</pages>
Jan 20 14:18:00 compute-1 nova_compute[224882]:           <pages unit='KiB' size='1048576'>0</pages>
Jan 20 14:18:00 compute-1 nova_compute[224882]:           <distances>
Jan 20 14:18:00 compute-1 nova_compute[224882]:             <sibling id='0' value='10'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:           </distances>
Jan 20 14:18:00 compute-1 nova_compute[224882]:           <cpus num='8'>
Jan 20 14:18:00 compute-1 nova_compute[224882]:             <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:             <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:             <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:             <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:             <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:             <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:             <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:             <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:           </cpus>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         </cell>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       </cells>
Jan 20 14:18:00 compute-1 nova_compute[224882]:     </topology>
Jan 20 14:18:00 compute-1 nova_compute[224882]:     <cache>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:     </cache>
Jan 20 14:18:00 compute-1 nova_compute[224882]:     <secmodel>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <model>selinux</model>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <doi>0</doi>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Jan 20 14:18:00 compute-1 nova_compute[224882]:     </secmodel>
Jan 20 14:18:00 compute-1 nova_compute[224882]:     <secmodel>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <model>dac</model>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <doi>0</doi>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <baselabel type='kvm'>+107:+107</baselabel>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <baselabel type='qemu'>+107:+107</baselabel>
Jan 20 14:18:00 compute-1 nova_compute[224882]:     </secmodel>
Jan 20 14:18:00 compute-1 nova_compute[224882]:   </host>
Jan 20 14:18:00 compute-1 nova_compute[224882]: 
Jan 20 14:18:00 compute-1 nova_compute[224882]:   <guest>
Jan 20 14:18:00 compute-1 nova_compute[224882]:     <os_type>hvm</os_type>
Jan 20 14:18:00 compute-1 nova_compute[224882]:     <arch name='i686'>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <wordsize>32</wordsize>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <domain type='qemu'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <domain type='kvm'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:     </arch>
Jan 20 14:18:00 compute-1 nova_compute[224882]:     <features>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <pae/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <nonpae/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <acpi default='on' toggle='yes'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <apic default='on' toggle='no'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <cpuselection/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <deviceboot/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <disksnapshot default='on' toggle='no'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <externalSnapshot/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:     </features>
Jan 20 14:18:00 compute-1 nova_compute[224882]:   </guest>
Jan 20 14:18:00 compute-1 nova_compute[224882]: 
Jan 20 14:18:00 compute-1 nova_compute[224882]:   <guest>
Jan 20 14:18:00 compute-1 nova_compute[224882]:     <os_type>hvm</os_type>
Jan 20 14:18:00 compute-1 nova_compute[224882]:     <arch name='x86_64'>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <wordsize>64</wordsize>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <domain type='qemu'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <domain type='kvm'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:     </arch>
Jan 20 14:18:00 compute-1 nova_compute[224882]:     <features>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <acpi default='on' toggle='yes'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <apic default='on' toggle='no'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <cpuselection/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <deviceboot/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <disksnapshot default='on' toggle='no'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <externalSnapshot/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:     </features>
Jan 20 14:18:00 compute-1 nova_compute[224882]:   </guest>
Jan 20 14:18:00 compute-1 nova_compute[224882]: 
Jan 20 14:18:00 compute-1 nova_compute[224882]: </capabilities>
Jan 20 14:18:00 compute-1 nova_compute[224882]: 
Jan 20 14:18:00 compute-1 nova_compute[224882]: 2026-01-20 14:18:00.878 224886 DEBUG nova.virt.libvirt.host [None req-5b5754ad-a2eb-46b1-a3d6-44d73c82b4c2 - - - - - -] Getting domain capabilities for i686 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Jan 20 14:18:00 compute-1 nova_compute[224882]: 2026-01-20 14:18:00.904 224886 DEBUG nova.virt.libvirt.host [None req-5b5754ad-a2eb-46b1-a3d6-44d73c82b4c2 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Jan 20 14:18:00 compute-1 nova_compute[224882]: <domainCapabilities>
Jan 20 14:18:00 compute-1 nova_compute[224882]:   <path>/usr/libexec/qemu-kvm</path>
Jan 20 14:18:00 compute-1 nova_compute[224882]:   <domain>kvm</domain>
Jan 20 14:18:00 compute-1 nova_compute[224882]:   <machine>pc-i440fx-rhel7.6.0</machine>
Jan 20 14:18:00 compute-1 nova_compute[224882]:   <arch>i686</arch>
Jan 20 14:18:00 compute-1 nova_compute[224882]:   <vcpu max='240'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:   <iothreads supported='yes'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:   <os supported='yes'>
Jan 20 14:18:00 compute-1 nova_compute[224882]:     <enum name='firmware'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:     <loader supported='yes'>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <enum name='type'>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <value>rom</value>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <value>pflash</value>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       </enum>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <enum name='readonly'>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <value>yes</value>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <value>no</value>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       </enum>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <enum name='secure'>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <value>no</value>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       </enum>
Jan 20 14:18:00 compute-1 nova_compute[224882]:     </loader>
Jan 20 14:18:00 compute-1 nova_compute[224882]:   </os>
Jan 20 14:18:00 compute-1 nova_compute[224882]:   <cpu>
Jan 20 14:18:00 compute-1 nova_compute[224882]:     <mode name='host-passthrough' supported='yes'>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <enum name='hostPassthroughMigratable'>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <value>on</value>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <value>off</value>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       </enum>
Jan 20 14:18:00 compute-1 nova_compute[224882]:     </mode>
Jan 20 14:18:00 compute-1 nova_compute[224882]:     <mode name='maximum' supported='yes'>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <enum name='maximumMigratable'>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <value>on</value>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <value>off</value>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       </enum>
Jan 20 14:18:00 compute-1 nova_compute[224882]:     </mode>
Jan 20 14:18:00 compute-1 nova_compute[224882]:     <mode name='host-model' supported='yes'>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <model fallback='forbid'>EPYC-Rome</model>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <vendor>AMD</vendor>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <maxphysaddr mode='passthrough' limit='40'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <feature policy='require' name='x2apic'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <feature policy='require' name='tsc-deadline'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <feature policy='require' name='hypervisor'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <feature policy='require' name='tsc_adjust'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <feature policy='require' name='spec-ctrl'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <feature policy='require' name='stibp'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <feature policy='require' name='ssbd'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <feature policy='require' name='cmp_legacy'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <feature policy='require' name='overflow-recov'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <feature policy='require' name='succor'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <feature policy='require' name='ibrs'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <feature policy='require' name='amd-ssbd'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <feature policy='require' name='virt-ssbd'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <feature policy='require' name='lbrv'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <feature policy='require' name='tsc-scale'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <feature policy='require' name='vmcb-clean'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <feature policy='require' name='flushbyasid'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <feature policy='require' name='pause-filter'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <feature policy='require' name='pfthreshold'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <feature policy='require' name='svme-addr-chk'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <feature policy='require' name='lfence-always-serializing'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <feature policy='disable' name='xsaves'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:     </mode>
Jan 20 14:18:00 compute-1 nova_compute[224882]:     <mode name='custom' supported='yes'>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <blockers model='Broadwell'>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='hle'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='rtm'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <blockers model='Broadwell-IBRS'>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='hle'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='rtm'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <blockers model='Broadwell-noTSX'>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <blockers model='Broadwell-noTSX-IBRS'>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <blockers model='Broadwell-v1'>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='hle'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='rtm'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <blockers model='Broadwell-v2'>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <blockers model='Broadwell-v3'>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='hle'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='rtm'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <blockers model='Broadwell-v4'>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <blockers model='Cascadelake-Server'>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512bw'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512cd'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512dq'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512f'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512vl'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512vnni'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='hle'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='pku'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='rtm'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <blockers model='Cascadelake-Server-noTSX'>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512bw'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512cd'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512dq'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512f'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512vl'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512vnni'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='ibrs-all'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='pku'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <blockers model='Cascadelake-Server-v1'>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512bw'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512cd'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512dq'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512f'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512vl'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512vnni'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='hle'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='pku'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='rtm'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <blockers model='Cascadelake-Server-v2'>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512bw'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512cd'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512dq'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512f'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512vl'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512vnni'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='hle'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='ibrs-all'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='pku'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='rtm'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <blockers model='Cascadelake-Server-v3'>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512bw'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512cd'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512dq'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512f'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512vl'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512vnni'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='ibrs-all'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='pku'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <blockers model='Cascadelake-Server-v4'>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512bw'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512cd'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512dq'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512f'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512vl'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512vnni'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='ibrs-all'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='pku'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <blockers model='Cascadelake-Server-v5'>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512bw'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512cd'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512dq'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512f'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512vl'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512vnni'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='ibrs-all'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='pku'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='xsaves'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <blockers model='ClearwaterForest'>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx-ifma'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx-ne-convert'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx-vnni'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx-vnni-int16'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx-vnni-int8'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='bhi-ctrl'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='bhi-no'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='cldemote'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='cmpccxadd'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='ddpd-u'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='fbsdp-no'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='fsrm'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='fsrs'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='gfni'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='ibrs-all'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='intel-psfd'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='ipred-ctrl'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='lam'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='mcdt-no'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='movdir64b'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='movdiri'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='pbrsb-no'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='pku'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='prefetchiti'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='psdp-no'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='rrsba-ctrl'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='sbdr-ssdp-no'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='serialize'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='sha512'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='sm3'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='sm4'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='ss'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='vaes'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='xsaves'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <blockers model='ClearwaterForest-v1'>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx-ifma'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx-ne-convert'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx-vnni'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx-vnni-int16'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx-vnni-int8'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='bhi-ctrl'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='bhi-no'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='cldemote'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='cmpccxadd'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='ddpd-u'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='fbsdp-no'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='fsrm'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='fsrs'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='gfni'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='ibrs-all'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='intel-psfd'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='ipred-ctrl'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='lam'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='mcdt-no'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='movdir64b'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='movdiri'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='pbrsb-no'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='pku'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='prefetchiti'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='psdp-no'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='rrsba-ctrl'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='sbdr-ssdp-no'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='serialize'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='sha512'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='sm3'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='sm4'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='ss'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='vaes'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='xsaves'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <blockers model='Cooperlake'>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512-bf16'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512bw'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512cd'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512dq'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512f'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512vl'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512vnni'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='hle'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='ibrs-all'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='pku'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='rtm'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='taa-no'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <blockers model='Cooperlake-v1'>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512-bf16'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512bw'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512cd'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512dq'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512f'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512vl'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512vnni'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='hle'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='ibrs-all'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='pku'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='rtm'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='taa-no'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <blockers model='Cooperlake-v2'>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512-bf16'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512bw'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512cd'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512dq'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512f'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512vl'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512vnni'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='hle'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='ibrs-all'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='pku'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='rtm'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='taa-no'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='xsaves'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <blockers model='Denverton'>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='mpx'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <blockers model='Denverton-v1'>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='mpx'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <blockers model='Denverton-v2'>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <blockers model='Denverton-v3'>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='xsaves'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <blockers model='Dhyana-v2'>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='xsaves'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <blockers model='EPYC-Genoa'>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='amd-psfd'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='auto-ibrs'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512-bf16'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512bitalg'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512bw'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512cd'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512dq'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512f'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512ifma'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512vbmi'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512vl'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512vnni'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='fsrm'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='gfni'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='la57'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='no-nested-data-bp'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='null-sel-clr-base'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='pku'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='stibp-always-on'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='vaes'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='xsaves'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <blockers model='EPYC-Genoa-v1'>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='amd-psfd'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='auto-ibrs'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512-bf16'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512bitalg'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512bw'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512cd'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512dq'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512f'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512ifma'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512vbmi'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512vl'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512vnni'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='fsrm'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='gfni'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='la57'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='no-nested-data-bp'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='null-sel-clr-base'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='pku'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='stibp-always-on'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='vaes'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='xsaves'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <blockers model='EPYC-Genoa-v2'>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='amd-psfd'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='auto-ibrs'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512-bf16'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512bitalg'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512bw'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512cd'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512dq'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512f'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512ifma'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512vbmi'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512vl'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512vnni'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='fs-gs-base-ns'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='fsrm'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='gfni'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='la57'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='no-nested-data-bp'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='null-sel-clr-base'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='perfmon-v2'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='pku'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='stibp-always-on'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='vaes'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='xsaves'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <blockers model='EPYC-Milan'>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='fsrm'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='pku'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='xsaves'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <blockers model='EPYC-Milan-v1'>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='fsrm'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='pku'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='xsaves'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <blockers model='EPYC-Milan-v2'>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='amd-psfd'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='fsrm'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='no-nested-data-bp'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='null-sel-clr-base'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='pku'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='stibp-always-on'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='vaes'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='xsaves'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <blockers model='EPYC-Milan-v3'>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='amd-psfd'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='fsrm'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='no-nested-data-bp'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='null-sel-clr-base'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='pku'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='stibp-always-on'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='vaes'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='xsaves'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <blockers model='EPYC-Rome'>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='xsaves'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <blockers model='EPYC-Rome-v1'>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='xsaves'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <blockers model='EPYC-Rome-v2'>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='xsaves'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <blockers model='EPYC-Rome-v3'>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='xsaves'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <blockers model='EPYC-Turin'>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='amd-psfd'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='auto-ibrs'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx-vnni'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512-bf16'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512-vp2intersect'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512bitalg'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512bw'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512cd'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512dq'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512f'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512ifma'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512vbmi'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512vl'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512vnni'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='fs-gs-base-ns'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='fsrm'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='gfni'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='ibpb-brtype'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='la57'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='movdir64b'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='movdiri'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='no-nested-data-bp'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='null-sel-clr-base'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='perfmon-v2'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='pku'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='prefetchi'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='sbpb'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='srso-user-kernel-no'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='stibp-always-on'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='vaes'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='xsaves'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <blockers model='EPYC-Turin-v1'>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='amd-psfd'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='auto-ibrs'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx-vnni'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512-bf16'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512-vp2intersect'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512bitalg'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512bw'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512cd'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512dq'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512f'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512ifma'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512vbmi'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512vl'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512vnni'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='fs-gs-base-ns'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='fsrm'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='gfni'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='ibpb-brtype'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='la57'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='movdir64b'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='movdiri'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='no-nested-data-bp'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='null-sel-clr-base'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='perfmon-v2'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='pku'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='prefetchi'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='sbpb'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='srso-user-kernel-no'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='stibp-always-on'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='vaes'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='xsaves'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <blockers model='EPYC-v3'>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='xsaves'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <blockers model='EPYC-v4'>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='xsaves'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <blockers model='EPYC-v5'>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='xsaves'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <blockers model='GraniteRapids'>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='amx-bf16'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='amx-fp16'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='amx-int8'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='amx-tile'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx-vnni'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512-bf16'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512-fp16'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512bitalg'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512bw'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512cd'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512dq'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512f'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512ifma'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512vbmi'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512vl'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512vnni'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='fbsdp-no'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='fsrc'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='fsrm'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='fsrs'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='fzrm'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='gfni'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='hle'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='ibrs-all'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='la57'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='mcdt-no'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='pbrsb-no'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='pku'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='prefetchiti'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='psdp-no'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='rtm'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='sbdr-ssdp-no'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='serialize'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='taa-no'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='tsx-ldtrk'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='vaes'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='xfd'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='xsaves'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <blockers model='GraniteRapids-v1'>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='amx-bf16'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='amx-fp16'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='amx-int8'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='amx-tile'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx-vnni'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512-bf16'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512-fp16'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512bitalg'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512bw'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512cd'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512dq'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512f'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512ifma'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512vbmi'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512vl'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512vnni'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='fbsdp-no'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='fsrc'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='fsrm'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='fsrs'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='fzrm'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='gfni'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='hle'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='ibrs-all'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='la57'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='mcdt-no'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='pbrsb-no'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='pku'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='prefetchiti'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='psdp-no'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='rtm'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='sbdr-ssdp-no'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='serialize'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='taa-no'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='tsx-ldtrk'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='vaes'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='xfd'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='xsaves'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <blockers model='GraniteRapids-v2'>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='amx-bf16'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='amx-fp16'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='amx-int8'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='amx-tile'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx-vnni'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx10'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx10-128'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx10-256'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx10-512'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512-bf16'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512-fp16'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512bitalg'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512bw'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512cd'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512dq'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512f'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512ifma'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512vbmi'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512vl'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512vnni'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='cldemote'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='fbsdp-no'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='fsrc'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='fsrm'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='fsrs'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='fzrm'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='gfni'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='hle'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='ibrs-all'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='la57'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='mcdt-no'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='movdir64b'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='movdiri'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='pbrsb-no'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='pku'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='prefetchiti'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='psdp-no'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='rtm'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='sbdr-ssdp-no'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='serialize'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='ss'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='taa-no'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='tsx-ldtrk'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='vaes'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='xfd'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='xsaves'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <blockers model='GraniteRapids-v3'>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='amx-bf16'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='amx-fp16'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='amx-int8'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='amx-tile'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx-vnni'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx10'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx10-128'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx10-256'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx10-512'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512-bf16'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512-fp16'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512bitalg'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512bw'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512cd'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512dq'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512f'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512ifma'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512vbmi'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512vl'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512vnni'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='cldemote'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='fbsdp-no'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='fsrc'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='fsrm'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='fsrs'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='fzrm'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='gfni'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='hle'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='ibrs-all'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='la57'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='mcdt-no'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='movdir64b'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='movdiri'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='pbrsb-no'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='pku'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='prefetchiti'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='psdp-no'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='rtm'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='sbdr-ssdp-no'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='serialize'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='ss'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='taa-no'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='tsx-ldtrk'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='vaes'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='xfd'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='xsaves'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <blockers model='Haswell'>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='hle'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='rtm'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <blockers model='Haswell-IBRS'>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='hle'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='rtm'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <blockers model='Haswell-noTSX'>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <blockers model='Haswell-noTSX-IBRS'>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <blockers model='Haswell-v1'>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='hle'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='rtm'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <blockers model='Haswell-v2'>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <blockers model='Haswell-v3'>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='hle'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='rtm'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <blockers model='Haswell-v4'>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <blockers model='Icelake-Server'>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512bitalg'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512bw'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512cd'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512dq'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512f'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512vbmi'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512vl'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512vnni'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='gfni'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='hle'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='la57'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='pku'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='rtm'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='vaes'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <blockers model='Icelake-Server-noTSX'>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512bitalg'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512bw'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512cd'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512dq'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512f'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512vbmi'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512vl'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512vnni'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='gfni'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='la57'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='pku'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='vaes'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <blockers model='Icelake-Server-v1'>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512bitalg'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512bw'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512cd'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512dq'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512f'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512vbmi'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512vl'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512vnni'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='gfni'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='hle'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='la57'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='pku'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='rtm'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='vaes'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <blockers model='Icelake-Server-v2'>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512bitalg'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512bw'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512cd'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512dq'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512f'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512vbmi'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512vl'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512vnni'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='gfni'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='la57'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='pku'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='vaes'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <blockers model='Icelake-Server-v3'>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512bitalg'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512bw'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512cd'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512dq'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512f'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512vbmi'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512vl'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512vnni'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='gfni'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='ibrs-all'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='la57'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='pku'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='taa-no'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='vaes'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <blockers model='Icelake-Server-v4'>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512bitalg'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512bw'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512cd'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512dq'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512f'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512ifma'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512vbmi'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512vl'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512vnni'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='fsrm'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='gfni'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='ibrs-all'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='la57'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='pku'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='taa-no'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='vaes'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <blockers model='Icelake-Server-v5'>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512bitalg'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512bw'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512cd'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512dq'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512f'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512ifma'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512vbmi'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512vl'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512vnni'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='fsrm'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='gfni'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='ibrs-all'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='la57'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='pku'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='taa-no'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='vaes'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='xsaves'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <blockers model='Icelake-Server-v6'>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512bitalg'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512bw'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512cd'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512dq'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512f'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512ifma'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512vbmi'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512vl'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512vnni'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='fsrm'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='gfni'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='ibrs-all'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='la57'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='pku'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='taa-no'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='vaes'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='xsaves'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <blockers model='Icelake-Server-v7'>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512bitalg'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512bw'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512cd'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512dq'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512f'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512ifma'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512vbmi'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512vl'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512vnni'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='fsrm'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='gfni'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='hle'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='ibrs-all'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='la57'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='pku'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='rtm'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='taa-no'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='vaes'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='xsaves'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <blockers model='IvyBridge'>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <blockers model='IvyBridge-IBRS'>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <blockers model='IvyBridge-v1'>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <blockers model='IvyBridge-v2'>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <blockers model='KnightsMill'>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512-4fmaps'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512-4vnniw'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512cd'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512er'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512f'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512pf'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='ss'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <blockers model='KnightsMill-v1'>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512-4fmaps'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512-4vnniw'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512cd'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512er'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512f'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512pf'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='ss'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <blockers model='Opteron_G4'>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='fma4'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='xop'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <blockers model='Opteron_G4-v1'>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='fma4'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='xop'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <blockers model='Opteron_G5'>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='fma4'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='tbm'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='xop'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <blockers model='Opteron_G5-v1'>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='fma4'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='tbm'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='xop'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <blockers model='SapphireRapids'>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='amx-bf16'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='amx-int8'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='amx-tile'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx-vnni'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512-bf16'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512-fp16'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512bitalg'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512bw'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512cd'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512dq'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512f'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512ifma'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512vbmi'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512vl'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512vnni'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='fsrc'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='fsrm'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='fsrs'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='fzrm'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='gfni'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='hle'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='ibrs-all'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='la57'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='pku'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='rtm'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='serialize'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='taa-no'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='tsx-ldtrk'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='vaes'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='xfd'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='xsaves'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <blockers model='SapphireRapids-v1'>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='amx-bf16'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='amx-int8'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='amx-tile'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx-vnni'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512-bf16'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512-fp16'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512bitalg'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512bw'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512cd'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512dq'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512f'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512ifma'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512vbmi'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512vl'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512vnni'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='fsrc'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='fsrm'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='fsrs'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='fzrm'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='gfni'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='hle'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='ibrs-all'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='la57'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='pku'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='rtm'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='serialize'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='taa-no'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='tsx-ldtrk'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='vaes'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='xfd'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='xsaves'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <blockers model='SapphireRapids-v2'>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='amx-bf16'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='amx-int8'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='amx-tile'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx-vnni'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512-bf16'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512-fp16'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512bitalg'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512bw'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512cd'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512dq'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512f'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512ifma'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512vbmi'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512vl'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512vnni'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='fbsdp-no'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='fsrc'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='fsrm'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='fsrs'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='fzrm'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='gfni'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='hle'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='ibrs-all'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='la57'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='pku'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='psdp-no'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='rtm'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='sbdr-ssdp-no'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='serialize'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='taa-no'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='tsx-ldtrk'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='vaes'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='xfd'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='xsaves'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <blockers model='SapphireRapids-v3'>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='amx-bf16'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='amx-int8'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='amx-tile'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx-vnni'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512-bf16'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512-fp16'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512bitalg'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512bw'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512cd'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512dq'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512f'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512ifma'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512vbmi'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512vl'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512vnni'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='cldemote'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='fbsdp-no'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='fsrc'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='fsrm'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='fsrs'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='fzrm'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='gfni'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='hle'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='ibrs-all'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='la57'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='movdir64b'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='movdiri'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='pku'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='psdp-no'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='rtm'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='sbdr-ssdp-no'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='serialize'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='ss'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='taa-no'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='tsx-ldtrk'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='vaes'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='xfd'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='xsaves'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <blockers model='SapphireRapids-v4'>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='amx-bf16'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='amx-int8'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='amx-tile'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx-vnni'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512-bf16'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512-fp16'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512bitalg'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512bw'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512cd'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512dq'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512f'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512ifma'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512vbmi'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512vl'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512vnni'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='cldemote'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='fbsdp-no'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='fsrc'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='fsrm'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='fsrs'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='fzrm'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='gfni'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='hle'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='ibrs-all'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='la57'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='movdir64b'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='movdiri'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='pku'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='psdp-no'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='rtm'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='sbdr-ssdp-no'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='serialize'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='ss'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='taa-no'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='tsx-ldtrk'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='vaes'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='xfd'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='xsaves'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <blockers model='SierraForest'>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx-ifma'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx-ne-convert'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx-vnni'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx-vnni-int8'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='cmpccxadd'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='fbsdp-no'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='fsrm'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='fsrs'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='gfni'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='ibrs-all'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='mcdt-no'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='pbrsb-no'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='pku'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='psdp-no'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='sbdr-ssdp-no'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='serialize'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='vaes'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='xsaves'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <blockers model='SierraForest-v1'>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx-ifma'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx-ne-convert'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx-vnni'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx-vnni-int8'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='cmpccxadd'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='fbsdp-no'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='fsrm'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='fsrs'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='gfni'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='ibrs-all'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='mcdt-no'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='pbrsb-no'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='pku'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='psdp-no'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='sbdr-ssdp-no'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='serialize'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='vaes'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='xsaves'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <blockers model='SierraForest-v2'>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx-ifma'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx-ne-convert'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx-vnni'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx-vnni-int8'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='bhi-ctrl'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='cldemote'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='cmpccxadd'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='fbsdp-no'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='fsrm'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='fsrs'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='gfni'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='ibrs-all'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='intel-psfd'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='ipred-ctrl'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='lam'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='mcdt-no'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='movdir64b'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='movdiri'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='pbrsb-no'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='pku'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='psdp-no'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='rrsba-ctrl'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='sbdr-ssdp-no'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='serialize'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='ss'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='vaes'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='xsaves'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <blockers model='SierraForest-v3'>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx-ifma'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx-ne-convert'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx-vnni'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx-vnni-int8'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='bhi-ctrl'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='cldemote'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='cmpccxadd'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='fbsdp-no'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='fsrm'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='fsrs'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='gfni'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='ibrs-all'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='intel-psfd'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='ipred-ctrl'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='lam'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='mcdt-no'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='movdir64b'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='movdiri'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='pbrsb-no'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='pku'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='psdp-no'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='rrsba-ctrl'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='sbdr-ssdp-no'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='serialize'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='ss'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='vaes'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='xsaves'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <blockers model='Skylake-Client'>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='hle'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='rtm'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <blockers model='Skylake-Client-IBRS'>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='hle'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='rtm'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <blockers model='Skylake-Client-v1'>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='hle'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='rtm'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <blockers model='Skylake-Client-v2'>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='hle'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='rtm'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <blockers model='Skylake-Client-v3'>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <blockers model='Skylake-Client-v4'>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='xsaves'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <blockers model='Skylake-Server'>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512bw'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512cd'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512dq'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512f'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512vl'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='hle'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='pku'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='rtm'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <blockers model='Skylake-Server-IBRS'>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512bw'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512cd'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512dq'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512f'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512vl'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='hle'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='pku'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='rtm'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512bw'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512cd'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512dq'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512f'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512vl'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='pku'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <blockers model='Skylake-Server-v1'>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512bw'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512cd'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512dq'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512f'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512vl'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='hle'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='pku'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='rtm'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <blockers model='Skylake-Server-v2'>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512bw'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512cd'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512dq'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512f'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512vl'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='hle'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='pku'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='rtm'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <blockers model='Skylake-Server-v3'>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512bw'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512cd'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512dq'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512f'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512vl'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='pku'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <blockers model='Skylake-Server-v4'>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512bw'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512cd'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512dq'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512f'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512vl'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='pku'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <blockers model='Skylake-Server-v5'>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512bw'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512cd'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512dq'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512f'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='avx512vl'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='pku'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='xsaves'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <blockers model='Snowridge'>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='cldemote'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='core-capability'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='gfni'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='movdir64b'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='movdiri'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='mpx'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='split-lock-detect'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <blockers model='Snowridge-v1'>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='cldemote'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='core-capability'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='gfni'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='movdir64b'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='movdiri'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='mpx'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='split-lock-detect'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <blockers model='Snowridge-v2'>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='cldemote'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='core-capability'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='gfni'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='movdir64b'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='movdiri'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='split-lock-detect'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <blockers model='Snowridge-v3'>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='cldemote'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='core-capability'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='gfni'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='movdir64b'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='movdiri'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='split-lock-detect'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='xsaves'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <blockers model='Snowridge-v4'>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='cldemote'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='gfni'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='movdir64b'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='movdiri'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='xsaves'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <blockers model='athlon'>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='3dnow'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='3dnowext'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <blockers model='athlon-v1'>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='3dnow'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='3dnowext'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <blockers model='core2duo'>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='ss'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <blockers model='core2duo-v1'>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='ss'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <blockers model='coreduo'>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='ss'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <blockers model='coreduo-v1'>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='ss'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <blockers model='n270'>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='ss'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <blockers model='n270-v1'>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='ss'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <blockers model='phenom'>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='3dnow'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='3dnowext'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <blockers model='phenom-v1'>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='3dnow'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <feature name='3dnowext'/>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 20 14:18:00 compute-1 nova_compute[224882]:     </mode>
Jan 20 14:18:00 compute-1 nova_compute[224882]:   </cpu>
Jan 20 14:18:00 compute-1 nova_compute[224882]:   <memoryBacking supported='yes'>
Jan 20 14:18:00 compute-1 nova_compute[224882]:     <enum name='sourceType'>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <value>file</value>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <value>anonymous</value>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <value>memfd</value>
Jan 20 14:18:00 compute-1 nova_compute[224882]:     </enum>
Jan 20 14:18:00 compute-1 nova_compute[224882]:   </memoryBacking>
Jan 20 14:18:00 compute-1 nova_compute[224882]:   <devices>
Jan 20 14:18:00 compute-1 nova_compute[224882]:     <disk supported='yes'>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <enum name='diskDevice'>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <value>disk</value>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <value>cdrom</value>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <value>floppy</value>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <value>lun</value>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       </enum>
Jan 20 14:18:00 compute-1 nova_compute[224882]:       <enum name='bus'>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <value>ide</value>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <value>fdc</value>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <value>scsi</value>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <value>virtio</value>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <value>usb</value>
Jan 20 14:18:00 compute-1 nova_compute[224882]:         <value>sata</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </enum>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <enum name='model'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>virtio</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>virtio-transitional</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>virtio-non-transitional</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </enum>
Jan 20 14:18:01 compute-1 nova_compute[224882]:     </disk>
Jan 20 14:18:01 compute-1 nova_compute[224882]:     <graphics supported='yes'>
Jan 20 14:18:01 compute-1 python3.9[225562]: ansible-containers.podman.podman_container Invoked with name=nova_nvme_cleaner state=absent executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <enum name='type'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>vnc</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>egl-headless</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>dbus</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </enum>
Jan 20 14:18:01 compute-1 nova_compute[224882]:     </graphics>
Jan 20 14:18:01 compute-1 nova_compute[224882]:     <video supported='yes'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <enum name='modelType'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>vga</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>cirrus</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>virtio</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>none</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>bochs</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>ramfb</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </enum>
Jan 20 14:18:01 compute-1 nova_compute[224882]:     </video>
Jan 20 14:18:01 compute-1 nova_compute[224882]:     <hostdev supported='yes'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <enum name='mode'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>subsystem</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </enum>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <enum name='startupPolicy'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>default</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>mandatory</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>requisite</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>optional</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </enum>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <enum name='subsysType'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>usb</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>pci</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>scsi</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </enum>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <enum name='capsType'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <enum name='pciBackend'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:     </hostdev>
Jan 20 14:18:01 compute-1 nova_compute[224882]:     <rng supported='yes'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <enum name='model'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>virtio</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>virtio-transitional</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>virtio-non-transitional</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </enum>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <enum name='backendModel'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>random</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>egd</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>builtin</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </enum>
Jan 20 14:18:01 compute-1 nova_compute[224882]:     </rng>
Jan 20 14:18:01 compute-1 nova_compute[224882]:     <filesystem supported='yes'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <enum name='driverType'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>path</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>handle</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>virtiofs</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </enum>
Jan 20 14:18:01 compute-1 nova_compute[224882]:     </filesystem>
Jan 20 14:18:01 compute-1 nova_compute[224882]:     <tpm supported='yes'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <enum name='model'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>tpm-tis</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>tpm-crb</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </enum>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <enum name='backendModel'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>emulator</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>external</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </enum>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <enum name='backendVersion'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>2.0</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </enum>
Jan 20 14:18:01 compute-1 nova_compute[224882]:     </tpm>
Jan 20 14:18:01 compute-1 nova_compute[224882]:     <redirdev supported='yes'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <enum name='bus'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>usb</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </enum>
Jan 20 14:18:01 compute-1 nova_compute[224882]:     </redirdev>
Jan 20 14:18:01 compute-1 nova_compute[224882]:     <channel supported='yes'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <enum name='type'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>pty</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>unix</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </enum>
Jan 20 14:18:01 compute-1 nova_compute[224882]:     </channel>
Jan 20 14:18:01 compute-1 nova_compute[224882]:     <crypto supported='yes'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <enum name='model'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <enum name='type'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>qemu</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </enum>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <enum name='backendModel'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>builtin</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </enum>
Jan 20 14:18:01 compute-1 nova_compute[224882]:     </crypto>
Jan 20 14:18:01 compute-1 nova_compute[224882]:     <interface supported='yes'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <enum name='backendType'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>default</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>passt</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </enum>
Jan 20 14:18:01 compute-1 nova_compute[224882]:     </interface>
Jan 20 14:18:01 compute-1 nova_compute[224882]:     <panic supported='yes'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <enum name='model'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>isa</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>hyperv</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </enum>
Jan 20 14:18:01 compute-1 nova_compute[224882]:     </panic>
Jan 20 14:18:01 compute-1 nova_compute[224882]:     <console supported='yes'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <enum name='type'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>null</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>vc</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>pty</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>dev</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>file</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>pipe</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>stdio</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>udp</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>tcp</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>unix</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>qemu-vdagent</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>dbus</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </enum>
Jan 20 14:18:01 compute-1 nova_compute[224882]:     </console>
Jan 20 14:18:01 compute-1 nova_compute[224882]:   </devices>
Jan 20 14:18:01 compute-1 nova_compute[224882]:   <features>
Jan 20 14:18:01 compute-1 nova_compute[224882]:     <gic supported='no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:     <vmcoreinfo supported='yes'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:     <genid supported='yes'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:     <backingStoreInput supported='yes'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:     <backup supported='yes'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:     <async-teardown supported='yes'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:     <s390-pv supported='no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:     <ps2 supported='yes'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:     <tdx supported='no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:     <sev supported='no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:     <sgx supported='no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:     <hyperv supported='yes'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <enum name='features'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>relaxed</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>vapic</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>spinlocks</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>vpindex</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>runtime</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>synic</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>stimer</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>reset</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>vendor_id</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>frequencies</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>reenlightenment</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>tlbflush</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>ipi</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>avic</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>emsr_bitmap</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>xmm_input</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </enum>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <defaults>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <spinlocks>4095</spinlocks>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <stimer_direct>on</stimer_direct>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <tlbflush_direct>on</tlbflush_direct>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <tlbflush_extended>on</tlbflush_extended>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <vendor_id>Linux KVM Hv</vendor_id>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </defaults>
Jan 20 14:18:01 compute-1 nova_compute[224882]:     </hyperv>
Jan 20 14:18:01 compute-1 nova_compute[224882]:     <launchSecurity supported='no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:   </features>
Jan 20 14:18:01 compute-1 nova_compute[224882]: </domainCapabilities>
Jan 20 14:18:01 compute-1 nova_compute[224882]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 20 14:18:01 compute-1 nova_compute[224882]: 2026-01-20 14:18:00.910 224886 DEBUG nova.virt.libvirt.host [None req-5b5754ad-a2eb-46b1-a3d6-44d73c82b4c2 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Jan 20 14:18:01 compute-1 nova_compute[224882]: <domainCapabilities>
Jan 20 14:18:01 compute-1 nova_compute[224882]:   <path>/usr/libexec/qemu-kvm</path>
Jan 20 14:18:01 compute-1 nova_compute[224882]:   <domain>kvm</domain>
Jan 20 14:18:01 compute-1 nova_compute[224882]:   <machine>pc-q35-rhel9.8.0</machine>
Jan 20 14:18:01 compute-1 nova_compute[224882]:   <arch>i686</arch>
Jan 20 14:18:01 compute-1 nova_compute[224882]:   <vcpu max='4096'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:   <iothreads supported='yes'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:   <os supported='yes'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:     <enum name='firmware'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:     <loader supported='yes'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <enum name='type'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>rom</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>pflash</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </enum>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <enum name='readonly'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>yes</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>no</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </enum>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <enum name='secure'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>no</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </enum>
Jan 20 14:18:01 compute-1 nova_compute[224882]:     </loader>
Jan 20 14:18:01 compute-1 nova_compute[224882]:   </os>
Jan 20 14:18:01 compute-1 nova_compute[224882]:   <cpu>
Jan 20 14:18:01 compute-1 nova_compute[224882]:     <mode name='host-passthrough' supported='yes'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <enum name='hostPassthroughMigratable'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>on</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>off</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </enum>
Jan 20 14:18:01 compute-1 nova_compute[224882]:     </mode>
Jan 20 14:18:01 compute-1 nova_compute[224882]:     <mode name='maximum' supported='yes'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <enum name='maximumMigratable'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>on</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>off</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </enum>
Jan 20 14:18:01 compute-1 nova_compute[224882]:     </mode>
Jan 20 14:18:01 compute-1 nova_compute[224882]:     <mode name='host-model' supported='yes'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model fallback='forbid'>EPYC-Rome</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <vendor>AMD</vendor>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <maxphysaddr mode='passthrough' limit='40'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <feature policy='require' name='x2apic'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <feature policy='require' name='tsc-deadline'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <feature policy='require' name='hypervisor'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <feature policy='require' name='tsc_adjust'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <feature policy='require' name='spec-ctrl'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <feature policy='require' name='stibp'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <feature policy='require' name='ssbd'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <feature policy='require' name='cmp_legacy'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <feature policy='require' name='overflow-recov'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <feature policy='require' name='succor'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <feature policy='require' name='ibrs'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <feature policy='require' name='amd-ssbd'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <feature policy='require' name='virt-ssbd'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <feature policy='require' name='lbrv'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <feature policy='require' name='tsc-scale'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <feature policy='require' name='vmcb-clean'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <feature policy='require' name='flushbyasid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <feature policy='require' name='pause-filter'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <feature policy='require' name='pfthreshold'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <feature policy='require' name='svme-addr-chk'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <feature policy='require' name='lfence-always-serializing'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <feature policy='disable' name='xsaves'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:     </mode>
Jan 20 14:18:01 compute-1 nova_compute[224882]:     <mode name='custom' supported='yes'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='Broadwell'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='hle'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='Broadwell-IBRS'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='hle'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='Broadwell-noTSX'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='Broadwell-noTSX-IBRS'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='Broadwell-v1'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='hle'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='Broadwell-v2'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='Broadwell-v3'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='hle'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='Broadwell-v4'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='Cascadelake-Server'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vnni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='hle'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pku'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='Cascadelake-Server-noTSX'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vnni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='ibrs-all'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pku'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='Cascadelake-Server-v1'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vnni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='hle'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pku'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='Cascadelake-Server-v2'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vnni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='hle'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='ibrs-all'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pku'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='Cascadelake-Server-v3'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vnni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='ibrs-all'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pku'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='Cascadelake-Server-v4'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vnni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='ibrs-all'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pku'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='Cascadelake-Server-v5'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vnni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='ibrs-all'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pku'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='ClearwaterForest'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx-ifma'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx-ne-convert'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx-vnni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx-vnni-int16'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx-vnni-int8'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='bhi-ctrl'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='bhi-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='cldemote'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='cmpccxadd'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='ddpd-u'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fbsdp-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fsrm'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fsrs'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='ibrs-all'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='intel-psfd'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='ipred-ctrl'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='lam'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='mcdt-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='movdir64b'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='movdiri'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pbrsb-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pku'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='prefetchiti'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='psdp-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='rrsba-ctrl'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='sbdr-ssdp-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='serialize'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='sha512'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='sm3'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='sm4'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='ss'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='vaes'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='ClearwaterForest-v1'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx-ifma'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx-ne-convert'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx-vnni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx-vnni-int16'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx-vnni-int8'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='bhi-ctrl'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='bhi-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='cldemote'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='cmpccxadd'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='ddpd-u'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fbsdp-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fsrm'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fsrs'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='ibrs-all'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='intel-psfd'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='ipred-ctrl'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='lam'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='mcdt-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='movdir64b'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='movdiri'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pbrsb-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pku'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='prefetchiti'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='psdp-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='rrsba-ctrl'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='sbdr-ssdp-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='serialize'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='sha512'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='sm3'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='sm4'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='ss'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='vaes'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='Cooperlake'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512-bf16'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vnni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='hle'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='ibrs-all'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pku'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='taa-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='Cooperlake-v1'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512-bf16'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vnni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='hle'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='ibrs-all'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pku'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='taa-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='Cooperlake-v2'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512-bf16'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vnni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='hle'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='ibrs-all'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pku'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='taa-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='Denverton'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='mpx'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='Denverton-v1'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='mpx'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='Denverton-v2'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='Denverton-v3'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='Dhyana-v2'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='EPYC-Genoa'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='amd-psfd'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='auto-ibrs'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512-bf16'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512bitalg'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512ifma'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vbmi'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vnni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fsrm'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='la57'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='no-nested-data-bp'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='null-sel-clr-base'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pku'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='stibp-always-on'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='vaes'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='EPYC-Genoa-v1'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='amd-psfd'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='auto-ibrs'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512-bf16'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512bitalg'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512ifma'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vbmi'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vnni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fsrm'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='la57'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='no-nested-data-bp'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='null-sel-clr-base'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pku'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='stibp-always-on'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='vaes'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='EPYC-Genoa-v2'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='amd-psfd'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='auto-ibrs'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512-bf16'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512bitalg'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512ifma'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vbmi'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vnni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fs-gs-base-ns'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fsrm'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='la57'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='no-nested-data-bp'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='null-sel-clr-base'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='perfmon-v2'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pku'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='stibp-always-on'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='vaes'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='EPYC-Milan'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fsrm'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pku'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='EPYC-Milan-v1'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fsrm'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pku'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='EPYC-Milan-v2'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='amd-psfd'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fsrm'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='no-nested-data-bp'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='null-sel-clr-base'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pku'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='stibp-always-on'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='vaes'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='EPYC-Milan-v3'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='amd-psfd'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fsrm'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='no-nested-data-bp'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='null-sel-clr-base'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pku'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='stibp-always-on'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='vaes'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='EPYC-Rome'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='EPYC-Rome-v1'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='EPYC-Rome-v2'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='EPYC-Rome-v3'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='EPYC-Turin'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='amd-psfd'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='auto-ibrs'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx-vnni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512-bf16'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512-vp2intersect'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512bitalg'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512ifma'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vbmi'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vnni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fs-gs-base-ns'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fsrm'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='ibpb-brtype'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='la57'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='movdir64b'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='movdiri'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='no-nested-data-bp'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='null-sel-clr-base'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='perfmon-v2'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pku'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='prefetchi'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='sbpb'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='srso-user-kernel-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='stibp-always-on'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='vaes'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='EPYC-Turin-v1'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='amd-psfd'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='auto-ibrs'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx-vnni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512-bf16'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512-vp2intersect'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512bitalg'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512ifma'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vbmi'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vnni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fs-gs-base-ns'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fsrm'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='ibpb-brtype'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='la57'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='movdir64b'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='movdiri'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='no-nested-data-bp'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='null-sel-clr-base'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='perfmon-v2'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pku'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='prefetchi'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='sbpb'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='srso-user-kernel-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='stibp-always-on'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='vaes'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='EPYC-v3'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='EPYC-v4'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='EPYC-v5'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='GraniteRapids'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='amx-bf16'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='amx-fp16'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='amx-int8'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='amx-tile'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx-vnni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512-bf16'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512-fp16'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512bitalg'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512ifma'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vbmi'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vnni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fbsdp-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fsrc'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fsrm'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fsrs'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fzrm'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='hle'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='ibrs-all'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='la57'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='mcdt-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pbrsb-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pku'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='prefetchiti'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='psdp-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='sbdr-ssdp-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='serialize'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='taa-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='tsx-ldtrk'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='vaes'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='xfd'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='GraniteRapids-v1'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='amx-bf16'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='amx-fp16'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='amx-int8'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='amx-tile'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx-vnni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512-bf16'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512-fp16'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512bitalg'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512ifma'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vbmi'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vnni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fbsdp-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fsrc'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fsrm'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fsrs'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fzrm'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='hle'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='ibrs-all'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='la57'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='mcdt-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pbrsb-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pku'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='prefetchiti'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='psdp-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='sbdr-ssdp-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='serialize'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='taa-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='tsx-ldtrk'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='vaes'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='xfd'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='GraniteRapids-v2'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='amx-bf16'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='amx-fp16'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='amx-int8'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='amx-tile'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx-vnni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx10'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx10-128'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx10-256'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx10-512'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512-bf16'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512-fp16'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512bitalg'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512ifma'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vbmi'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vnni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='cldemote'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fbsdp-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fsrc'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fsrm'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fsrs'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fzrm'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='hle'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='ibrs-all'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='la57'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='mcdt-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='movdir64b'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='movdiri'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pbrsb-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pku'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='prefetchiti'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='psdp-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='sbdr-ssdp-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='serialize'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='ss'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='taa-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='tsx-ldtrk'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='vaes'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='xfd'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='GraniteRapids-v3'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='amx-bf16'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='amx-fp16'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='amx-int8'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='amx-tile'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx-vnni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx10'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx10-128'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx10-256'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx10-512'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512-bf16'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512-fp16'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512bitalg'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512ifma'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vbmi'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vnni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='cldemote'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fbsdp-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fsrc'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fsrm'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fsrs'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fzrm'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='hle'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='ibrs-all'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='la57'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='mcdt-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='movdir64b'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='movdiri'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pbrsb-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pku'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='prefetchiti'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='psdp-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='sbdr-ssdp-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='serialize'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='ss'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='taa-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='tsx-ldtrk'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='vaes'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='xfd'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='Haswell'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='hle'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='Haswell-IBRS'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='hle'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='Haswell-noTSX'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='Haswell-noTSX-IBRS'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='Haswell-v1'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='hle'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='Haswell-v2'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='Haswell-v3'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='hle'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='Haswell-v4'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='Icelake-Server'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512bitalg'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vbmi'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vnni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='hle'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='la57'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pku'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='vaes'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='Icelake-Server-noTSX'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512bitalg'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vbmi'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vnni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='la57'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pku'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='vaes'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='Icelake-Server-v1'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512bitalg'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vbmi'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vnni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='hle'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='la57'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pku'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='vaes'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='Icelake-Server-v2'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512bitalg'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vbmi'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vnni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='la57'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pku'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='vaes'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='Icelake-Server-v3'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512bitalg'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vbmi'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vnni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='ibrs-all'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='la57'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pku'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='taa-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='vaes'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='Icelake-Server-v4'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512bitalg'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512ifma'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vbmi'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vnni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fsrm'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='ibrs-all'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='la57'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pku'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='taa-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='vaes'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='Icelake-Server-v5'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512bitalg'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512ifma'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vbmi'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vnni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fsrm'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='ibrs-all'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='la57'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pku'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='taa-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='vaes'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='Icelake-Server-v6'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512bitalg'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512ifma'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vbmi'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vnni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fsrm'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='ibrs-all'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='la57'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pku'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='taa-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='vaes'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='Icelake-Server-v7'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512bitalg'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512ifma'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vbmi'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vnni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fsrm'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='hle'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='ibrs-all'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='la57'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pku'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='taa-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='vaes'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='IvyBridge'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='IvyBridge-IBRS'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='IvyBridge-v1'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='IvyBridge-v2'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='KnightsMill'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512-4fmaps'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512-4vnniw'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512er'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512pf'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='ss'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='KnightsMill-v1'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512-4fmaps'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512-4vnniw'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512er'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512pf'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='ss'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='Opteron_G4'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fma4'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='xop'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='Opteron_G4-v1'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fma4'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='xop'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='Opteron_G5'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fma4'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='tbm'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='xop'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='Opteron_G5-v1'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fma4'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='tbm'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='xop'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='SapphireRapids'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='amx-bf16'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='amx-int8'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='amx-tile'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx-vnni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512-bf16'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512-fp16'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512bitalg'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512ifma'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vbmi'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vnni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fsrc'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fsrm'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fsrs'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fzrm'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='hle'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='ibrs-all'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='la57'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pku'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='serialize'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='taa-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='tsx-ldtrk'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='vaes'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='xfd'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='SapphireRapids-v1'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='amx-bf16'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='amx-int8'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='amx-tile'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx-vnni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512-bf16'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512-fp16'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512bitalg'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512ifma'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vbmi'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vnni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fsrc'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fsrm'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fsrs'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fzrm'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='hle'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='ibrs-all'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='la57'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pku'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='serialize'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='taa-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='tsx-ldtrk'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='vaes'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='xfd'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='SapphireRapids-v2'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='amx-bf16'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='amx-int8'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='amx-tile'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx-vnni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512-bf16'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512-fp16'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512bitalg'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512ifma'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vbmi'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vnni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fbsdp-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fsrc'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fsrm'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fsrs'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fzrm'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='hle'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='ibrs-all'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='la57'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pku'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='psdp-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='sbdr-ssdp-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='serialize'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='taa-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='tsx-ldtrk'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='vaes'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='xfd'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='SapphireRapids-v3'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='amx-bf16'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='amx-int8'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='amx-tile'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx-vnni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512-bf16'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512-fp16'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512bitalg'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512ifma'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vbmi'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vnni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='cldemote'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fbsdp-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fsrc'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fsrm'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fsrs'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fzrm'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='hle'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='ibrs-all'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='la57'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='movdir64b'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='movdiri'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pku'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='psdp-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='sbdr-ssdp-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='serialize'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='ss'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='taa-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='tsx-ldtrk'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='vaes'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='xfd'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='SapphireRapids-v4'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='amx-bf16'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='amx-int8'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='amx-tile'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx-vnni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512-bf16'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512-fp16'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512bitalg'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512ifma'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vbmi'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vnni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='cldemote'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fbsdp-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fsrc'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fsrm'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fsrs'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fzrm'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='hle'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='ibrs-all'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='la57'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='movdir64b'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='movdiri'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pku'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='psdp-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='sbdr-ssdp-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='serialize'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='ss'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='taa-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='tsx-ldtrk'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='vaes'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='xfd'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='SierraForest'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx-ifma'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx-ne-convert'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx-vnni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx-vnni-int8'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='cmpccxadd'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fbsdp-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fsrm'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fsrs'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='ibrs-all'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='mcdt-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pbrsb-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pku'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='psdp-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='sbdr-ssdp-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='serialize'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='vaes'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='SierraForest-v1'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx-ifma'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx-ne-convert'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx-vnni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx-vnni-int8'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='cmpccxadd'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fbsdp-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fsrm'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fsrs'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='ibrs-all'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='mcdt-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pbrsb-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pku'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='psdp-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='sbdr-ssdp-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='serialize'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='vaes'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='SierraForest-v2'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx-ifma'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx-ne-convert'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx-vnni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx-vnni-int8'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='bhi-ctrl'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='cldemote'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='cmpccxadd'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fbsdp-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fsrm'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fsrs'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='ibrs-all'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='intel-psfd'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='ipred-ctrl'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='lam'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='mcdt-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='movdir64b'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='movdiri'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pbrsb-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pku'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='psdp-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='rrsba-ctrl'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='sbdr-ssdp-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='serialize'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='ss'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='vaes'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='SierraForest-v3'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx-ifma'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx-ne-convert'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx-vnni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx-vnni-int8'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='bhi-ctrl'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='cldemote'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='cmpccxadd'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fbsdp-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fsrm'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fsrs'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='ibrs-all'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='intel-psfd'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='ipred-ctrl'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='lam'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='mcdt-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='movdir64b'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='movdiri'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pbrsb-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pku'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='psdp-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='rrsba-ctrl'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='sbdr-ssdp-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='serialize'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='ss'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='vaes'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='Skylake-Client'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='hle'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='Skylake-Client-IBRS'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='hle'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='Skylake-Client-v1'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='hle'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='Skylake-Client-v2'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='hle'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='Skylake-Client-v3'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='Skylake-Client-v4'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='Skylake-Server'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='hle'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pku'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='Skylake-Server-IBRS'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='hle'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pku'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pku'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='Skylake-Server-v1'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='hle'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pku'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='Skylake-Server-v2'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='hle'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pku'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='Skylake-Server-v3'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pku'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='Skylake-Server-v4'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pku'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='Skylake-Server-v5'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pku'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='Snowridge'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='cldemote'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='core-capability'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='movdir64b'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='movdiri'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='mpx'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='split-lock-detect'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='Snowridge-v1'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='cldemote'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='core-capability'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='movdir64b'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='movdiri'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='mpx'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='split-lock-detect'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='Snowridge-v2'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='cldemote'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='core-capability'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='movdir64b'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='movdiri'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='split-lock-detect'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='Snowridge-v3'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='cldemote'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='core-capability'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='movdir64b'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='movdiri'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='split-lock-detect'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='Snowridge-v4'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='cldemote'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='movdir64b'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='movdiri'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='athlon'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='3dnow'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='3dnowext'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='athlon-v1'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='3dnow'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='3dnowext'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='core2duo'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='ss'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='core2duo-v1'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='ss'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='coreduo'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='ss'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='coreduo-v1'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='ss'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='n270'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='ss'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='n270-v1'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='ss'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='phenom'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='3dnow'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='3dnowext'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='phenom-v1'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='3dnow'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='3dnowext'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:     </mode>
Jan 20 14:18:01 compute-1 nova_compute[224882]:   </cpu>
Jan 20 14:18:01 compute-1 nova_compute[224882]:   <memoryBacking supported='yes'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:     <enum name='sourceType'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <value>file</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <value>anonymous</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <value>memfd</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:     </enum>
Jan 20 14:18:01 compute-1 nova_compute[224882]:   </memoryBacking>
Jan 20 14:18:01 compute-1 nova_compute[224882]:   <devices>
Jan 20 14:18:01 compute-1 nova_compute[224882]:     <disk supported='yes'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <enum name='diskDevice'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>disk</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>cdrom</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>floppy</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>lun</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </enum>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <enum name='bus'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>fdc</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>scsi</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>virtio</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>usb</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>sata</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </enum>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <enum name='model'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>virtio</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>virtio-transitional</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>virtio-non-transitional</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </enum>
Jan 20 14:18:01 compute-1 nova_compute[224882]:     </disk>
Jan 20 14:18:01 compute-1 nova_compute[224882]:     <graphics supported='yes'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <enum name='type'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>vnc</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>egl-headless</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>dbus</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </enum>
Jan 20 14:18:01 compute-1 nova_compute[224882]:     </graphics>
Jan 20 14:18:01 compute-1 nova_compute[224882]:     <video supported='yes'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <enum name='modelType'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>vga</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>cirrus</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>virtio</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>none</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>bochs</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>ramfb</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </enum>
Jan 20 14:18:01 compute-1 nova_compute[224882]:     </video>
Jan 20 14:18:01 compute-1 nova_compute[224882]:     <hostdev supported='yes'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <enum name='mode'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>subsystem</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </enum>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <enum name='startupPolicy'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>default</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>mandatory</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>requisite</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>optional</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </enum>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <enum name='subsysType'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>usb</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>pci</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>scsi</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </enum>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <enum name='capsType'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <enum name='pciBackend'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:     </hostdev>
Jan 20 14:18:01 compute-1 nova_compute[224882]:     <rng supported='yes'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <enum name='model'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>virtio</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>virtio-transitional</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>virtio-non-transitional</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </enum>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <enum name='backendModel'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>random</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>egd</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>builtin</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </enum>
Jan 20 14:18:01 compute-1 nova_compute[224882]:     </rng>
Jan 20 14:18:01 compute-1 nova_compute[224882]:     <filesystem supported='yes'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <enum name='driverType'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>path</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>handle</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>virtiofs</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </enum>
Jan 20 14:18:01 compute-1 nova_compute[224882]:     </filesystem>
Jan 20 14:18:01 compute-1 nova_compute[224882]:     <tpm supported='yes'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <enum name='model'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>tpm-tis</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>tpm-crb</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </enum>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <enum name='backendModel'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>emulator</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>external</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </enum>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <enum name='backendVersion'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>2.0</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </enum>
Jan 20 14:18:01 compute-1 nova_compute[224882]:     </tpm>
Jan 20 14:18:01 compute-1 nova_compute[224882]:     <redirdev supported='yes'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <enum name='bus'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>usb</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </enum>
Jan 20 14:18:01 compute-1 nova_compute[224882]:     </redirdev>
Jan 20 14:18:01 compute-1 nova_compute[224882]:     <channel supported='yes'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <enum name='type'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>pty</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>unix</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </enum>
Jan 20 14:18:01 compute-1 nova_compute[224882]:     </channel>
Jan 20 14:18:01 compute-1 nova_compute[224882]:     <crypto supported='yes'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <enum name='model'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <enum name='type'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>qemu</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </enum>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <enum name='backendModel'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>builtin</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </enum>
Jan 20 14:18:01 compute-1 nova_compute[224882]:     </crypto>
Jan 20 14:18:01 compute-1 nova_compute[224882]:     <interface supported='yes'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <enum name='backendType'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>default</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>passt</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </enum>
Jan 20 14:18:01 compute-1 nova_compute[224882]:     </interface>
Jan 20 14:18:01 compute-1 nova_compute[224882]:     <panic supported='yes'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <enum name='model'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>isa</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>hyperv</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </enum>
Jan 20 14:18:01 compute-1 nova_compute[224882]:     </panic>
Jan 20 14:18:01 compute-1 nova_compute[224882]:     <console supported='yes'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <enum name='type'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>null</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>vc</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>pty</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>dev</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>file</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>pipe</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>stdio</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>udp</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>tcp</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>unix</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>qemu-vdagent</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>dbus</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </enum>
Jan 20 14:18:01 compute-1 nova_compute[224882]:     </console>
Jan 20 14:18:01 compute-1 nova_compute[224882]:   </devices>
Jan 20 14:18:01 compute-1 nova_compute[224882]:   <features>
Jan 20 14:18:01 compute-1 nova_compute[224882]:     <gic supported='no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:     <vmcoreinfo supported='yes'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:     <genid supported='yes'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:     <backingStoreInput supported='yes'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:     <backup supported='yes'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:     <async-teardown supported='yes'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:     <s390-pv supported='no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:     <ps2 supported='yes'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:     <tdx supported='no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:     <sev supported='no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:     <sgx supported='no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:     <hyperv supported='yes'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <enum name='features'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>relaxed</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>vapic</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>spinlocks</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>vpindex</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>runtime</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>synic</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>stimer</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>reset</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>vendor_id</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>frequencies</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>reenlightenment</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>tlbflush</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>ipi</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>avic</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>emsr_bitmap</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>xmm_input</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </enum>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <defaults>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <spinlocks>4095</spinlocks>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <stimer_direct>on</stimer_direct>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <tlbflush_direct>on</tlbflush_direct>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <tlbflush_extended>on</tlbflush_extended>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <vendor_id>Linux KVM Hv</vendor_id>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </defaults>
Jan 20 14:18:01 compute-1 nova_compute[224882]:     </hyperv>
Jan 20 14:18:01 compute-1 nova_compute[224882]:     <launchSecurity supported='no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:   </features>
Jan 20 14:18:01 compute-1 nova_compute[224882]: </domainCapabilities>
Jan 20 14:18:01 compute-1 nova_compute[224882]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 20 14:18:01 compute-1 nova_compute[224882]: 2026-01-20 14:18:01.011 224886 DEBUG nova.virt.libvirt.host [None req-5b5754ad-a2eb-46b1-a3d6-44d73c82b4c2 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Jan 20 14:18:01 compute-1 nova_compute[224882]: 2026-01-20 14:18:01.019 224886 DEBUG nova.virt.libvirt.host [None req-5b5754ad-a2eb-46b1-a3d6-44d73c82b4c2 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Jan 20 14:18:01 compute-1 nova_compute[224882]: <domainCapabilities>
Jan 20 14:18:01 compute-1 nova_compute[224882]:   <path>/usr/libexec/qemu-kvm</path>
Jan 20 14:18:01 compute-1 nova_compute[224882]:   <domain>kvm</domain>
Jan 20 14:18:01 compute-1 nova_compute[224882]:   <machine>pc-i440fx-rhel7.6.0</machine>
Jan 20 14:18:01 compute-1 nova_compute[224882]:   <arch>x86_64</arch>
Jan 20 14:18:01 compute-1 nova_compute[224882]:   <vcpu max='240'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:   <iothreads supported='yes'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:   <os supported='yes'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:     <enum name='firmware'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:     <loader supported='yes'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <enum name='type'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>rom</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>pflash</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </enum>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <enum name='readonly'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>yes</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>no</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </enum>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <enum name='secure'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>no</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </enum>
Jan 20 14:18:01 compute-1 nova_compute[224882]:     </loader>
Jan 20 14:18:01 compute-1 nova_compute[224882]:   </os>
Jan 20 14:18:01 compute-1 nova_compute[224882]:   <cpu>
Jan 20 14:18:01 compute-1 nova_compute[224882]:     <mode name='host-passthrough' supported='yes'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <enum name='hostPassthroughMigratable'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>on</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>off</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </enum>
Jan 20 14:18:01 compute-1 nova_compute[224882]:     </mode>
Jan 20 14:18:01 compute-1 nova_compute[224882]:     <mode name='maximum' supported='yes'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <enum name='maximumMigratable'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>on</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>off</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </enum>
Jan 20 14:18:01 compute-1 nova_compute[224882]:     </mode>
Jan 20 14:18:01 compute-1 nova_compute[224882]:     <mode name='host-model' supported='yes'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model fallback='forbid'>EPYC-Rome</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <vendor>AMD</vendor>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <maxphysaddr mode='passthrough' limit='40'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <feature policy='require' name='x2apic'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <feature policy='require' name='tsc-deadline'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <feature policy='require' name='hypervisor'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <feature policy='require' name='tsc_adjust'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <feature policy='require' name='spec-ctrl'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <feature policy='require' name='stibp'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <feature policy='require' name='ssbd'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <feature policy='require' name='cmp_legacy'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <feature policy='require' name='overflow-recov'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <feature policy='require' name='succor'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <feature policy='require' name='ibrs'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <feature policy='require' name='amd-ssbd'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <feature policy='require' name='virt-ssbd'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <feature policy='require' name='lbrv'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <feature policy='require' name='tsc-scale'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <feature policy='require' name='vmcb-clean'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <feature policy='require' name='flushbyasid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <feature policy='require' name='pause-filter'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <feature policy='require' name='pfthreshold'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <feature policy='require' name='svme-addr-chk'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <feature policy='require' name='lfence-always-serializing'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <feature policy='disable' name='xsaves'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:     </mode>
Jan 20 14:18:01 compute-1 nova_compute[224882]:     <mode name='custom' supported='yes'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='Broadwell'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='hle'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='Broadwell-IBRS'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='hle'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='Broadwell-noTSX'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='Broadwell-noTSX-IBRS'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='Broadwell-v1'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='hle'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='Broadwell-v2'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='Broadwell-v3'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='hle'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='Broadwell-v4'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='Cascadelake-Server'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vnni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='hle'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pku'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='Cascadelake-Server-noTSX'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vnni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='ibrs-all'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pku'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='Cascadelake-Server-v1'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vnni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='hle'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pku'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='Cascadelake-Server-v2'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vnni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='hle'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='ibrs-all'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pku'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='Cascadelake-Server-v3'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vnni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='ibrs-all'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pku'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='Cascadelake-Server-v4'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vnni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='ibrs-all'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pku'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='Cascadelake-Server-v5'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vnni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='ibrs-all'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pku'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='ClearwaterForest'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx-ifma'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx-ne-convert'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx-vnni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx-vnni-int16'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx-vnni-int8'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='bhi-ctrl'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='bhi-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='cldemote'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='cmpccxadd'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='ddpd-u'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fbsdp-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fsrm'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fsrs'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='ibrs-all'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='intel-psfd'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='ipred-ctrl'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='lam'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='mcdt-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='movdir64b'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='movdiri'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pbrsb-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pku'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='prefetchiti'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='psdp-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='rrsba-ctrl'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='sbdr-ssdp-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='serialize'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='sha512'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='sm3'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='sm4'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='ss'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='vaes'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='ClearwaterForest-v1'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx-ifma'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx-ne-convert'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx-vnni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx-vnni-int16'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx-vnni-int8'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='bhi-ctrl'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='bhi-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='cldemote'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='cmpccxadd'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='ddpd-u'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fbsdp-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fsrm'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fsrs'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='ibrs-all'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='intel-psfd'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='ipred-ctrl'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='lam'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='mcdt-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='movdir64b'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='movdiri'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pbrsb-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pku'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='prefetchiti'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='psdp-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='rrsba-ctrl'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='sbdr-ssdp-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='serialize'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='sha512'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='sm3'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='sm4'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='ss'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='vaes'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='Cooperlake'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512-bf16'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vnni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='hle'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='ibrs-all'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pku'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='taa-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='Cooperlake-v1'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512-bf16'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vnni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='hle'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='ibrs-all'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pku'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='taa-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='Cooperlake-v2'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512-bf16'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vnni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='hle'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='ibrs-all'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pku'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='taa-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='Denverton'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='mpx'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='Denverton-v1'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='mpx'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='Denverton-v2'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='Denverton-v3'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='Dhyana-v2'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='EPYC-Genoa'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='amd-psfd'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='auto-ibrs'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512-bf16'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512bitalg'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512ifma'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vbmi'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vnni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fsrm'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='la57'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='no-nested-data-bp'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='null-sel-clr-base'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pku'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='stibp-always-on'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='vaes'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='EPYC-Genoa-v1'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='amd-psfd'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='auto-ibrs'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512-bf16'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512bitalg'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512ifma'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vbmi'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vnni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fsrm'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='la57'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='no-nested-data-bp'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='null-sel-clr-base'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pku'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='stibp-always-on'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='vaes'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='EPYC-Genoa-v2'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='amd-psfd'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='auto-ibrs'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512-bf16'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512bitalg'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512ifma'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vbmi'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vnni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fs-gs-base-ns'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fsrm'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='la57'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='no-nested-data-bp'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='null-sel-clr-base'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='perfmon-v2'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pku'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='stibp-always-on'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='vaes'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='EPYC-Milan'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fsrm'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pku'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='EPYC-Milan-v1'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fsrm'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pku'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='EPYC-Milan-v2'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='amd-psfd'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fsrm'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='no-nested-data-bp'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='null-sel-clr-base'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pku'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='stibp-always-on'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='vaes'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='EPYC-Milan-v3'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='amd-psfd'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fsrm'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='no-nested-data-bp'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='null-sel-clr-base'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pku'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='stibp-always-on'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='vaes'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='EPYC-Rome'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='EPYC-Rome-v1'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='EPYC-Rome-v2'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='EPYC-Rome-v3'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='EPYC-Turin'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='amd-psfd'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='auto-ibrs'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx-vnni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512-bf16'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512-vp2intersect'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512bitalg'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512ifma'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vbmi'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vnni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fs-gs-base-ns'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fsrm'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='ibpb-brtype'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='la57'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='movdir64b'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='movdiri'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='no-nested-data-bp'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='null-sel-clr-base'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='perfmon-v2'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pku'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='prefetchi'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='sbpb'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='srso-user-kernel-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='stibp-always-on'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='vaes'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='EPYC-Turin-v1'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='amd-psfd'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='auto-ibrs'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx-vnni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512-bf16'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512-vp2intersect'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512bitalg'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512ifma'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vbmi'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vnni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fs-gs-base-ns'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fsrm'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='ibpb-brtype'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='la57'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='movdir64b'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='movdiri'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='no-nested-data-bp'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='null-sel-clr-base'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='perfmon-v2'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pku'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='prefetchi'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='sbpb'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='srso-user-kernel-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='stibp-always-on'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='vaes'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='EPYC-v3'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='EPYC-v4'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='EPYC-v5'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='GraniteRapids'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='amx-bf16'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='amx-fp16'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='amx-int8'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='amx-tile'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx-vnni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512-bf16'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512-fp16'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512bitalg'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512ifma'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vbmi'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vnni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fbsdp-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fsrc'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fsrm'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fsrs'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fzrm'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='hle'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='ibrs-all'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='la57'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='mcdt-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pbrsb-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pku'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='prefetchiti'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='psdp-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='sbdr-ssdp-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='serialize'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='taa-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='tsx-ldtrk'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='vaes'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='xfd'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='GraniteRapids-v1'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='amx-bf16'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='amx-fp16'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='amx-int8'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='amx-tile'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx-vnni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512-bf16'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512-fp16'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512bitalg'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512ifma'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vbmi'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vnni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fbsdp-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fsrc'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fsrm'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fsrs'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fzrm'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='hle'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='ibrs-all'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='la57'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='mcdt-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pbrsb-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pku'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='prefetchiti'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='psdp-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='sbdr-ssdp-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='serialize'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='taa-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='tsx-ldtrk'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='vaes'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='xfd'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='GraniteRapids-v2'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='amx-bf16'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='amx-fp16'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='amx-int8'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='amx-tile'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx-vnni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx10'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx10-128'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx10-256'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx10-512'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512-bf16'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512-fp16'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512bitalg'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512ifma'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vbmi'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vnni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='cldemote'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fbsdp-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fsrc'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fsrm'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fsrs'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fzrm'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='hle'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='ibrs-all'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='la57'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='mcdt-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='movdir64b'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='movdiri'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pbrsb-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pku'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='prefetchiti'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='psdp-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='sbdr-ssdp-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='serialize'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='ss'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='taa-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='tsx-ldtrk'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='vaes'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='xfd'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='GraniteRapids-v3'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='amx-bf16'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='amx-fp16'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='amx-int8'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='amx-tile'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx-vnni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx10'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx10-128'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx10-256'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx10-512'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512-bf16'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512-fp16'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512bitalg'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512ifma'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vbmi'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vnni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='cldemote'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fbsdp-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fsrc'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fsrm'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fsrs'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fzrm'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='hle'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='ibrs-all'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='la57'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='mcdt-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='movdir64b'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='movdiri'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pbrsb-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pku'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='prefetchiti'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='psdp-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='sbdr-ssdp-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='serialize'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='ss'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='taa-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='tsx-ldtrk'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='vaes'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='xfd'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='Haswell'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='hle'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='Haswell-IBRS'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='hle'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='Haswell-noTSX'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='Haswell-noTSX-IBRS'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='Haswell-v1'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='hle'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='Haswell-v2'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='Haswell-v3'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='hle'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='Haswell-v4'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='Icelake-Server'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512bitalg'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vbmi'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vnni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='hle'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='la57'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pku'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='vaes'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='Icelake-Server-noTSX'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512bitalg'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vbmi'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vnni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='la57'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pku'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='vaes'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='Icelake-Server-v1'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512bitalg'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vbmi'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vnni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='hle'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='la57'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pku'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='vaes'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='Icelake-Server-v2'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512bitalg'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vbmi'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vnni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='la57'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pku'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='vaes'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='Icelake-Server-v3'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512bitalg'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vbmi'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vnni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='ibrs-all'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='la57'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pku'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='taa-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='vaes'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='Icelake-Server-v4'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512bitalg'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512ifma'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vbmi'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vnni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fsrm'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='ibrs-all'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='la57'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pku'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='taa-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='vaes'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='Icelake-Server-v5'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512bitalg'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512ifma'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vbmi'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vnni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fsrm'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='ibrs-all'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='la57'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pku'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='taa-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='vaes'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='Icelake-Server-v6'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512bitalg'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512ifma'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vbmi'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vnni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fsrm'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='ibrs-all'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='la57'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pku'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='taa-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='vaes'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='Icelake-Server-v7'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512bitalg'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512ifma'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vbmi'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vnni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fsrm'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='hle'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='ibrs-all'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='la57'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pku'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='taa-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='vaes'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='IvyBridge'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='IvyBridge-IBRS'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='IvyBridge-v1'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='IvyBridge-v2'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='KnightsMill'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512-4fmaps'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512-4vnniw'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512er'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512pf'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='ss'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='KnightsMill-v1'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512-4fmaps'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512-4vnniw'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512er'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512pf'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='ss'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='Opteron_G4'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fma4'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='xop'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='Opteron_G4-v1'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fma4'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='xop'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='Opteron_G5'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fma4'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='tbm'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='xop'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='Opteron_G5-v1'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fma4'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='tbm'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='xop'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='SapphireRapids'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='amx-bf16'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='amx-int8'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='amx-tile'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx-vnni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512-bf16'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512-fp16'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512bitalg'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512ifma'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vbmi'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vnni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fsrc'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fsrm'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fsrs'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fzrm'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='hle'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='ibrs-all'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='la57'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pku'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='serialize'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='taa-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='tsx-ldtrk'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='vaes'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='xfd'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='SapphireRapids-v1'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='amx-bf16'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='amx-int8'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='amx-tile'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx-vnni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512-bf16'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512-fp16'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512bitalg'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512ifma'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vbmi'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vnni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fsrc'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fsrm'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fsrs'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fzrm'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='hle'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='ibrs-all'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='la57'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pku'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='serialize'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='taa-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='tsx-ldtrk'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='vaes'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='xfd'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='SapphireRapids-v2'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='amx-bf16'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='amx-int8'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='amx-tile'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx-vnni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512-bf16'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512-fp16'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512bitalg'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512ifma'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vbmi'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vnni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fbsdp-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fsrc'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fsrm'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fsrs'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fzrm'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='hle'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='ibrs-all'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='la57'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pku'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='psdp-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='sbdr-ssdp-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='serialize'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='taa-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='tsx-ldtrk'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='vaes'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='xfd'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='SapphireRapids-v3'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='amx-bf16'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='amx-int8'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='amx-tile'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx-vnni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512-bf16'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512-fp16'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512bitalg'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512ifma'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vbmi'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vnni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='cldemote'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fbsdp-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fsrc'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fsrm'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fsrs'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fzrm'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='hle'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='ibrs-all'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='la57'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='movdir64b'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='movdiri'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pku'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='psdp-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='sbdr-ssdp-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='serialize'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='ss'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='taa-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='tsx-ldtrk'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='vaes'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='xfd'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='SapphireRapids-v4'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='amx-bf16'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='amx-int8'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='amx-tile'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx-vnni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512-bf16'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512-fp16'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512bitalg'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512ifma'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vbmi'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vnni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='cldemote'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fbsdp-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fsrc'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fsrm'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fsrs'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fzrm'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='hle'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='ibrs-all'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='la57'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='movdir64b'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='movdiri'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pku'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='psdp-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='sbdr-ssdp-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='serialize'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='ss'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='taa-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='tsx-ldtrk'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='vaes'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='xfd'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='SierraForest'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx-ifma'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx-ne-convert'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx-vnni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx-vnni-int8'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='cmpccxadd'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fbsdp-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fsrm'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fsrs'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='ibrs-all'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='mcdt-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pbrsb-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pku'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='psdp-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='sbdr-ssdp-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='serialize'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='vaes'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='SierraForest-v1'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx-ifma'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx-ne-convert'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx-vnni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx-vnni-int8'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='cmpccxadd'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fbsdp-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fsrm'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fsrs'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='ibrs-all'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='mcdt-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pbrsb-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pku'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='psdp-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='sbdr-ssdp-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='serialize'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='vaes'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='SierraForest-v2'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx-ifma'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx-ne-convert'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx-vnni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx-vnni-int8'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='bhi-ctrl'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='cldemote'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='cmpccxadd'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fbsdp-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fsrm'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fsrs'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='ibrs-all'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='intel-psfd'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='ipred-ctrl'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='lam'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='mcdt-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='movdir64b'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='movdiri'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pbrsb-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pku'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='psdp-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='rrsba-ctrl'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='sbdr-ssdp-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='serialize'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='ss'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='vaes'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='SierraForest-v3'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx-ifma'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx-ne-convert'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx-vnni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx-vnni-int8'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='bhi-ctrl'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='cldemote'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='cmpccxadd'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fbsdp-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fsrm'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fsrs'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='ibrs-all'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='intel-psfd'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='ipred-ctrl'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='lam'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='mcdt-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='movdir64b'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='movdiri'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pbrsb-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pku'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='psdp-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='rrsba-ctrl'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='sbdr-ssdp-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='serialize'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='ss'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='vaes'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='Skylake-Client'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='hle'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='Skylake-Client-IBRS'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='hle'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='Skylake-Client-v1'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='hle'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='Skylake-Client-v2'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='hle'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='Skylake-Client-v3'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='Skylake-Client-v4'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='Skylake-Server'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='hle'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pku'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='Skylake-Server-IBRS'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='hle'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pku'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pku'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='Skylake-Server-v1'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='hle'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pku'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='Skylake-Server-v2'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='hle'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pku'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='Skylake-Server-v3'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pku'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='Skylake-Server-v4'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pku'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='Skylake-Server-v5'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pku'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='Snowridge'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='cldemote'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='core-capability'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='movdir64b'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='movdiri'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='mpx'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='split-lock-detect'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='Snowridge-v1'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='cldemote'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='core-capability'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='movdir64b'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='movdiri'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='mpx'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='split-lock-detect'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='Snowridge-v2'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='cldemote'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='core-capability'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='movdir64b'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='movdiri'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='split-lock-detect'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='Snowridge-v3'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='cldemote'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='core-capability'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='movdir64b'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='movdiri'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='split-lock-detect'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='Snowridge-v4'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='cldemote'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='movdir64b'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='movdiri'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='athlon'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='3dnow'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='3dnowext'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 sudo[225560]: pam_unix(sudo:session): session closed for user root
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='athlon-v1'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='3dnow'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='3dnowext'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='core2duo'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='ss'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='core2duo-v1'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='ss'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='coreduo'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='ss'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='coreduo-v1'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='ss'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='n270'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='ss'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='n270-v1'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='ss'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='phenom'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='3dnow'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='3dnowext'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='phenom-v1'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='3dnow'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='3dnowext'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:     </mode>
Jan 20 14:18:01 compute-1 nova_compute[224882]:   </cpu>
Jan 20 14:18:01 compute-1 nova_compute[224882]:   <memoryBacking supported='yes'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:     <enum name='sourceType'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <value>file</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <value>anonymous</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <value>memfd</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:     </enum>
Jan 20 14:18:01 compute-1 nova_compute[224882]:   </memoryBacking>
Jan 20 14:18:01 compute-1 nova_compute[224882]:   <devices>
Jan 20 14:18:01 compute-1 nova_compute[224882]:     <disk supported='yes'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <enum name='diskDevice'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>disk</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>cdrom</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>floppy</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>lun</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </enum>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <enum name='bus'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>ide</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>fdc</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>scsi</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>virtio</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>usb</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>sata</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </enum>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <enum name='model'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>virtio</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>virtio-transitional</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>virtio-non-transitional</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </enum>
Jan 20 14:18:01 compute-1 nova_compute[224882]:     </disk>
Jan 20 14:18:01 compute-1 nova_compute[224882]:     <graphics supported='yes'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <enum name='type'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>vnc</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>egl-headless</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>dbus</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </enum>
Jan 20 14:18:01 compute-1 nova_compute[224882]:     </graphics>
Jan 20 14:18:01 compute-1 nova_compute[224882]:     <video supported='yes'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <enum name='modelType'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>vga</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>cirrus</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>virtio</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>none</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>bochs</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>ramfb</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </enum>
Jan 20 14:18:01 compute-1 nova_compute[224882]:     </video>
Jan 20 14:18:01 compute-1 nova_compute[224882]:     <hostdev supported='yes'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <enum name='mode'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>subsystem</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </enum>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <enum name='startupPolicy'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>default</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>mandatory</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>requisite</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>optional</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </enum>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <enum name='subsysType'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>usb</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>pci</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>scsi</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </enum>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <enum name='capsType'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <enum name='pciBackend'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:     </hostdev>
Jan 20 14:18:01 compute-1 nova_compute[224882]:     <rng supported='yes'>
Jan 20 14:18:01 compute-1 rsyslogd[1002]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <enum name='model'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>virtio</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>virtio-transitional</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>virtio-non-transitional</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </enum>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <enum name='backendModel'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>random</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>egd</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>builtin</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </enum>
Jan 20 14:18:01 compute-1 nova_compute[224882]:     </rng>
Jan 20 14:18:01 compute-1 nova_compute[224882]:     <filesystem supported='yes'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <enum name='driverType'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>path</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>handle</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>virtiofs</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </enum>
Jan 20 14:18:01 compute-1 nova_compute[224882]:     </filesystem>
Jan 20 14:18:01 compute-1 nova_compute[224882]:     <tpm supported='yes'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <enum name='model'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>tpm-tis</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>tpm-crb</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </enum>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <enum name='backendModel'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>emulator</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>external</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </enum>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <enum name='backendVersion'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>2.0</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </enum>
Jan 20 14:18:01 compute-1 nova_compute[224882]:     </tpm>
Jan 20 14:18:01 compute-1 nova_compute[224882]:     <redirdev supported='yes'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <enum name='bus'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>usb</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </enum>
Jan 20 14:18:01 compute-1 nova_compute[224882]:     </redirdev>
Jan 20 14:18:01 compute-1 nova_compute[224882]:     <channel supported='yes'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <enum name='type'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>pty</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>unix</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </enum>
Jan 20 14:18:01 compute-1 nova_compute[224882]:     </channel>
Jan 20 14:18:01 compute-1 nova_compute[224882]:     <crypto supported='yes'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <enum name='model'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <enum name='type'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>qemu</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </enum>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <enum name='backendModel'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>builtin</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </enum>
Jan 20 14:18:01 compute-1 nova_compute[224882]:     </crypto>
Jan 20 14:18:01 compute-1 nova_compute[224882]:     <interface supported='yes'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <enum name='backendType'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>default</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>passt</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </enum>
Jan 20 14:18:01 compute-1 nova_compute[224882]:     </interface>
Jan 20 14:18:01 compute-1 nova_compute[224882]:     <panic supported='yes'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <enum name='model'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>isa</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>hyperv</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </enum>
Jan 20 14:18:01 compute-1 nova_compute[224882]:     </panic>
Jan 20 14:18:01 compute-1 nova_compute[224882]:     <console supported='yes'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <enum name='type'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>null</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>vc</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>pty</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>dev</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>file</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>pipe</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>stdio</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>udp</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>tcp</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>unix</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>qemu-vdagent</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>dbus</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </enum>
Jan 20 14:18:01 compute-1 nova_compute[224882]:     </console>
Jan 20 14:18:01 compute-1 nova_compute[224882]:   </devices>
Jan 20 14:18:01 compute-1 nova_compute[224882]:   <features>
Jan 20 14:18:01 compute-1 nova_compute[224882]:     <gic supported='no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:     <vmcoreinfo supported='yes'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:     <genid supported='yes'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:     <backingStoreInput supported='yes'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:     <backup supported='yes'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:     <async-teardown supported='yes'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:     <s390-pv supported='no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:     <ps2 supported='yes'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:     <tdx supported='no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:     <sev supported='no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:     <sgx supported='no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:     <hyperv supported='yes'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <enum name='features'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>relaxed</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>vapic</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>spinlocks</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>vpindex</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>runtime</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>synic</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>stimer</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>reset</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>vendor_id</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>frequencies</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>reenlightenment</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>tlbflush</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>ipi</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>avic</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>emsr_bitmap</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>xmm_input</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </enum>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <defaults>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <spinlocks>4095</spinlocks>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <stimer_direct>on</stimer_direct>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <tlbflush_direct>on</tlbflush_direct>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <tlbflush_extended>on</tlbflush_extended>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <vendor_id>Linux KVM Hv</vendor_id>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </defaults>
Jan 20 14:18:01 compute-1 nova_compute[224882]:     </hyperv>
Jan 20 14:18:01 compute-1 nova_compute[224882]:     <launchSecurity supported='no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:   </features>
Jan 20 14:18:01 compute-1 nova_compute[224882]: </domainCapabilities>
Jan 20 14:18:01 compute-1 nova_compute[224882]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 20 14:18:01 compute-1 nova_compute[224882]: 2026-01-20 14:18:01.099 224886 DEBUG nova.virt.libvirt.host [None req-5b5754ad-a2eb-46b1-a3d6-44d73c82b4c2 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Jan 20 14:18:01 compute-1 nova_compute[224882]: <domainCapabilities>
Jan 20 14:18:01 compute-1 nova_compute[224882]:   <path>/usr/libexec/qemu-kvm</path>
Jan 20 14:18:01 compute-1 nova_compute[224882]:   <domain>kvm</domain>
Jan 20 14:18:01 compute-1 nova_compute[224882]:   <machine>pc-q35-rhel9.8.0</machine>
Jan 20 14:18:01 compute-1 nova_compute[224882]:   <arch>x86_64</arch>
Jan 20 14:18:01 compute-1 nova_compute[224882]:   <vcpu max='4096'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:   <iothreads supported='yes'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:   <os supported='yes'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:     <enum name='firmware'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <value>efi</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:     </enum>
Jan 20 14:18:01 compute-1 nova_compute[224882]:     <loader supported='yes'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <enum name='type'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>rom</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>pflash</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </enum>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <enum name='readonly'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>yes</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>no</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </enum>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <enum name='secure'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>yes</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>no</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </enum>
Jan 20 14:18:01 compute-1 nova_compute[224882]:     </loader>
Jan 20 14:18:01 compute-1 nova_compute[224882]:   </os>
Jan 20 14:18:01 compute-1 nova_compute[224882]:   <cpu>
Jan 20 14:18:01 compute-1 nova_compute[224882]:     <mode name='host-passthrough' supported='yes'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <enum name='hostPassthroughMigratable'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>on</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>off</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </enum>
Jan 20 14:18:01 compute-1 nova_compute[224882]:     </mode>
Jan 20 14:18:01 compute-1 nova_compute[224882]:     <mode name='maximum' supported='yes'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <enum name='maximumMigratable'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>on</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>off</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </enum>
Jan 20 14:18:01 compute-1 nova_compute[224882]:     </mode>
Jan 20 14:18:01 compute-1 nova_compute[224882]:     <mode name='host-model' supported='yes'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model fallback='forbid'>EPYC-Rome</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <vendor>AMD</vendor>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <maxphysaddr mode='passthrough' limit='40'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <feature policy='require' name='x2apic'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <feature policy='require' name='tsc-deadline'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <feature policy='require' name='hypervisor'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <feature policy='require' name='tsc_adjust'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <feature policy='require' name='spec-ctrl'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <feature policy='require' name='stibp'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <feature policy='require' name='ssbd'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <feature policy='require' name='cmp_legacy'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <feature policy='require' name='overflow-recov'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <feature policy='require' name='succor'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <feature policy='require' name='ibrs'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <feature policy='require' name='amd-ssbd'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <feature policy='require' name='virt-ssbd'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <feature policy='require' name='lbrv'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <feature policy='require' name='tsc-scale'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <feature policy='require' name='vmcb-clean'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <feature policy='require' name='flushbyasid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <feature policy='require' name='pause-filter'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <feature policy='require' name='pfthreshold'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <feature policy='require' name='svme-addr-chk'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <feature policy='require' name='lfence-always-serializing'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <feature policy='disable' name='xsaves'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:     </mode>
Jan 20 14:18:01 compute-1 nova_compute[224882]:     <mode name='custom' supported='yes'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='Broadwell'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='hle'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='Broadwell-IBRS'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='hle'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='Broadwell-noTSX'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='Broadwell-noTSX-IBRS'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='Broadwell-v1'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='hle'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='Broadwell-v2'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='Broadwell-v3'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='hle'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='Broadwell-v4'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='Cascadelake-Server'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vnni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='hle'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pku'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='Cascadelake-Server-noTSX'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vnni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='ibrs-all'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pku'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='Cascadelake-Server-v1'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vnni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='hle'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pku'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='Cascadelake-Server-v2'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vnni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='hle'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='ibrs-all'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pku'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='Cascadelake-Server-v3'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vnni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='ibrs-all'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pku'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='Cascadelake-Server-v4'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vnni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='ibrs-all'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pku'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='Cascadelake-Server-v5'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vnni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='ibrs-all'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pku'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='ClearwaterForest'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx-ifma'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx-ne-convert'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx-vnni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx-vnni-int16'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx-vnni-int8'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='bhi-ctrl'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='bhi-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='cldemote'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='cmpccxadd'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='ddpd-u'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fbsdp-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fsrm'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fsrs'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='ibrs-all'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='intel-psfd'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='ipred-ctrl'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='lam'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='mcdt-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='movdir64b'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='movdiri'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pbrsb-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pku'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='prefetchiti'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='psdp-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='rrsba-ctrl'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='sbdr-ssdp-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='serialize'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='sha512'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='sm3'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='sm4'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='ss'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='vaes'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='ClearwaterForest-v1'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx-ifma'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx-ne-convert'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx-vnni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx-vnni-int16'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx-vnni-int8'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='bhi-ctrl'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='bhi-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='cldemote'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='cmpccxadd'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='ddpd-u'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fbsdp-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fsrm'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fsrs'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='ibrs-all'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='intel-psfd'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='ipred-ctrl'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='lam'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='mcdt-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='movdir64b'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='movdiri'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pbrsb-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pku'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='prefetchiti'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='psdp-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='rrsba-ctrl'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='sbdr-ssdp-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='serialize'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='sha512'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='sm3'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='sm4'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='ss'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='vaes'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='Cooperlake'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512-bf16'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vnni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='hle'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='ibrs-all'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pku'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='taa-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='Cooperlake-v1'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512-bf16'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vnni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='hle'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='ibrs-all'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pku'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='taa-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='Cooperlake-v2'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512-bf16'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vnni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='hle'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='ibrs-all'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pku'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='taa-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='Denverton'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='mpx'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='Denverton-v1'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='mpx'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='Denverton-v2'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='Denverton-v3'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='Dhyana-v2'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='EPYC-Genoa'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='amd-psfd'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='auto-ibrs'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512-bf16'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512bitalg'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512ifma'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vbmi'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vnni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fsrm'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='la57'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='no-nested-data-bp'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='null-sel-clr-base'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pku'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='stibp-always-on'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='vaes'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='EPYC-Genoa-v1'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='amd-psfd'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='auto-ibrs'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512-bf16'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512bitalg'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512ifma'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vbmi'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vnni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fsrm'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='la57'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='no-nested-data-bp'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='null-sel-clr-base'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pku'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='stibp-always-on'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='vaes'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='EPYC-Genoa-v2'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='amd-psfd'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='auto-ibrs'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512-bf16'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512bitalg'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512ifma'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vbmi'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vnni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fs-gs-base-ns'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fsrm'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='la57'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='no-nested-data-bp'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='null-sel-clr-base'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='perfmon-v2'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pku'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='stibp-always-on'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='vaes'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='EPYC-Milan'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fsrm'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pku'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='EPYC-Milan-v1'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fsrm'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pku'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='EPYC-Milan-v2'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='amd-psfd'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fsrm'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='no-nested-data-bp'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='null-sel-clr-base'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pku'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='stibp-always-on'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='vaes'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='EPYC-Milan-v3'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='amd-psfd'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fsrm'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='no-nested-data-bp'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='null-sel-clr-base'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pku'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='stibp-always-on'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='vaes'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='EPYC-Rome'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='EPYC-Rome-v1'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='EPYC-Rome-v2'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='EPYC-Rome-v3'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='EPYC-Turin'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='amd-psfd'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='auto-ibrs'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx-vnni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512-bf16'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512-vp2intersect'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512bitalg'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512ifma'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vbmi'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vnni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fs-gs-base-ns'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fsrm'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='ibpb-brtype'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='la57'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='movdir64b'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='movdiri'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='no-nested-data-bp'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='null-sel-clr-base'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='perfmon-v2'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pku'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='prefetchi'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='sbpb'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='srso-user-kernel-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='stibp-always-on'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='vaes'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='EPYC-Turin-v1'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='amd-psfd'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='auto-ibrs'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx-vnni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512-bf16'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512-vp2intersect'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512bitalg'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512ifma'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vbmi'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vnni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fs-gs-base-ns'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fsrm'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='ibpb-brtype'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='la57'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='movdir64b'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='movdiri'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='no-nested-data-bp'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='null-sel-clr-base'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='perfmon-v2'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pku'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='prefetchi'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='sbpb'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='srso-user-kernel-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='stibp-always-on'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='vaes'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='EPYC-v3'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='EPYC-v4'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='EPYC-v5'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='GraniteRapids'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='amx-bf16'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='amx-fp16'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='amx-int8'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='amx-tile'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx-vnni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512-bf16'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512-fp16'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512bitalg'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512ifma'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vbmi'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vnni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fbsdp-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fsrc'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fsrm'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fsrs'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fzrm'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='hle'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='ibrs-all'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='la57'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='mcdt-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pbrsb-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pku'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='prefetchiti'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='psdp-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='sbdr-ssdp-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='serialize'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='taa-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='tsx-ldtrk'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='vaes'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='xfd'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='GraniteRapids-v1'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='amx-bf16'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='amx-fp16'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='amx-int8'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='amx-tile'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx-vnni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512-bf16'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512-fp16'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512bitalg'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512ifma'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vbmi'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vnni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fbsdp-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fsrc'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fsrm'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fsrs'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fzrm'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='hle'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='ibrs-all'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='la57'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='mcdt-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pbrsb-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pku'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='prefetchiti'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='psdp-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='sbdr-ssdp-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='serialize'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='taa-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='tsx-ldtrk'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='vaes'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='xfd'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='GraniteRapids-v2'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='amx-bf16'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='amx-fp16'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='amx-int8'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='amx-tile'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx-vnni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx10'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx10-128'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx10-256'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx10-512'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512-bf16'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512-fp16'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512bitalg'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512ifma'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vbmi'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vnni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='cldemote'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fbsdp-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fsrc'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fsrm'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fsrs'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fzrm'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='hle'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='ibrs-all'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='la57'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='mcdt-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='movdir64b'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='movdiri'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pbrsb-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pku'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='prefetchiti'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='psdp-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='sbdr-ssdp-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='serialize'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='ss'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='taa-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='tsx-ldtrk'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='vaes'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='xfd'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='GraniteRapids-v3'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='amx-bf16'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='amx-fp16'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='amx-int8'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='amx-tile'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx-vnni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx10'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx10-128'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx10-256'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx10-512'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512-bf16'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512-fp16'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512bitalg'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512ifma'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vbmi'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vnni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='cldemote'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fbsdp-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fsrc'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fsrm'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fsrs'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fzrm'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='hle'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='ibrs-all'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='la57'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='mcdt-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='movdir64b'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='movdiri'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pbrsb-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pku'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='prefetchiti'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='psdp-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='sbdr-ssdp-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='serialize'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='ss'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='taa-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='tsx-ldtrk'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='vaes'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='xfd'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='Haswell'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='hle'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='Haswell-IBRS'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='hle'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='Haswell-noTSX'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='Haswell-noTSX-IBRS'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='Haswell-v1'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='hle'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='Haswell-v2'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='Haswell-v3'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='hle'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='Haswell-v4'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='Icelake-Server'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512bitalg'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vbmi'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vnni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='hle'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='la57'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pku'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='vaes'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='Icelake-Server-noTSX'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512bitalg'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vbmi'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vnni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='la57'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pku'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='vaes'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='Icelake-Server-v1'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512bitalg'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vbmi'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vnni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='hle'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='la57'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pku'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='vaes'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='Icelake-Server-v2'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512bitalg'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vbmi'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vnni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='la57'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pku'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='vaes'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='Icelake-Server-v3'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512bitalg'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vbmi'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vnni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='ibrs-all'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='la57'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pku'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='taa-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='vaes'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='Icelake-Server-v4'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512bitalg'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512ifma'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vbmi'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vnni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fsrm'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='ibrs-all'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='la57'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pku'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='taa-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='vaes'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='Icelake-Server-v5'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512bitalg'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512ifma'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vbmi'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vnni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fsrm'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='ibrs-all'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='la57'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pku'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='taa-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='vaes'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='Icelake-Server-v6'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512bitalg'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512ifma'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vbmi'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vnni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fsrm'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='ibrs-all'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='la57'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pku'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='taa-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='vaes'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='Icelake-Server-v7'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512bitalg'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512ifma'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vbmi'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vnni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fsrm'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='hle'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='ibrs-all'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='la57'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pku'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='taa-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='vaes'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='IvyBridge'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='IvyBridge-IBRS'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='IvyBridge-v1'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='IvyBridge-v2'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='KnightsMill'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512-4fmaps'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512-4vnniw'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512er'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512pf'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='ss'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='KnightsMill-v1'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512-4fmaps'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512-4vnniw'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512er'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512pf'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='ss'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='Opteron_G4'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fma4'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='xop'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='Opteron_G4-v1'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fma4'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='xop'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='Opteron_G5'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fma4'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='tbm'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='xop'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='Opteron_G5-v1'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fma4'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='tbm'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='xop'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='SapphireRapids'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='amx-bf16'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='amx-int8'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='amx-tile'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx-vnni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512-bf16'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512-fp16'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512bitalg'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512ifma'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vbmi'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vnni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fsrc'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fsrm'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fsrs'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fzrm'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='hle'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='ibrs-all'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='la57'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pku'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='serialize'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='taa-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='tsx-ldtrk'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='vaes'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='xfd'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='SapphireRapids-v1'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='amx-bf16'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='amx-int8'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='amx-tile'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx-vnni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512-bf16'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512-fp16'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512bitalg'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512ifma'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vbmi'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vnni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fsrc'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fsrm'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fsrs'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fzrm'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='hle'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='ibrs-all'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='la57'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pku'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='serialize'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='taa-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='tsx-ldtrk'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='vaes'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='xfd'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='SapphireRapids-v2'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='amx-bf16'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='amx-int8'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='amx-tile'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx-vnni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512-bf16'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512-fp16'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512bitalg'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512ifma'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vbmi'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vnni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fbsdp-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fsrc'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fsrm'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fsrs'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fzrm'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='hle'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='ibrs-all'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='la57'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pku'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='psdp-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='sbdr-ssdp-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='serialize'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='taa-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='tsx-ldtrk'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='vaes'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='xfd'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='SapphireRapids-v3'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='amx-bf16'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='amx-int8'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='amx-tile'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx-vnni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512-bf16'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512-fp16'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512bitalg'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512ifma'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vbmi'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vnni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='cldemote'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fbsdp-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fsrc'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fsrm'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fsrs'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fzrm'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='hle'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='ibrs-all'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='la57'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='movdir64b'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='movdiri'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pku'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='psdp-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='sbdr-ssdp-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='serialize'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='ss'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='taa-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='tsx-ldtrk'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='vaes'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='xfd'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='SapphireRapids-v4'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='amx-bf16'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='amx-int8'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='amx-tile'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx-vnni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512-bf16'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512-fp16'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512bitalg'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512ifma'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vbmi'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vnni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='cldemote'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fbsdp-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fsrc'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fsrm'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fsrs'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fzrm'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='hle'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='ibrs-all'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='la57'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='movdir64b'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='movdiri'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pku'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='psdp-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='sbdr-ssdp-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='serialize'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='ss'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='taa-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='tsx-ldtrk'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='vaes'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='xfd'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='SierraForest'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx-ifma'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx-ne-convert'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx-vnni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx-vnni-int8'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='cmpccxadd'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fbsdp-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fsrm'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fsrs'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='ibrs-all'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='mcdt-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pbrsb-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pku'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='psdp-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='sbdr-ssdp-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='serialize'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='vaes'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='SierraForest-v1'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx-ifma'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx-ne-convert'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx-vnni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx-vnni-int8'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='cmpccxadd'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fbsdp-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fsrm'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fsrs'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='ibrs-all'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='mcdt-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pbrsb-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pku'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='psdp-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='sbdr-ssdp-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='serialize'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='vaes'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='SierraForest-v2'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx-ifma'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx-ne-convert'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx-vnni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx-vnni-int8'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='bhi-ctrl'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='cldemote'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='cmpccxadd'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fbsdp-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fsrm'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fsrs'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='ibrs-all'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='intel-psfd'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='ipred-ctrl'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='lam'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='mcdt-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='movdir64b'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='movdiri'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pbrsb-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pku'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='psdp-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='rrsba-ctrl'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='sbdr-ssdp-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='serialize'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='ss'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='vaes'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='SierraForest-v3'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx-ifma'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx-ne-convert'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx-vnni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx-vnni-int8'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='bhi-ctrl'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='cldemote'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='cmpccxadd'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fbsdp-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fsrm'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='fsrs'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='ibrs-all'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='intel-psfd'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='ipred-ctrl'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='lam'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='mcdt-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='movdir64b'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='movdiri'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pbrsb-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pku'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='psdp-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='rrsba-ctrl'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='sbdr-ssdp-no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='serialize'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='ss'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='vaes'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='Skylake-Client'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='hle'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='Skylake-Client-IBRS'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='hle'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='Skylake-Client-v1'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='hle'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='Skylake-Client-v2'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='hle'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='Skylake-Client-v3'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='Skylake-Client-v4'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='Skylake-Server'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='hle'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pku'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='Skylake-Server-IBRS'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='hle'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pku'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pku'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='Skylake-Server-v1'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='hle'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pku'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='Skylake-Server-v2'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='hle'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pku'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='Skylake-Server-v3'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pku'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='Skylake-Server-v4'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pku'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='Skylake-Server-v5'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='pku'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='Snowridge'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='cldemote'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='core-capability'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='movdir64b'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='movdiri'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='mpx'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='split-lock-detect'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='Snowridge-v1'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='cldemote'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='core-capability'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='movdir64b'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='movdiri'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='mpx'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='split-lock-detect'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='Snowridge-v2'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='cldemote'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='core-capability'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='movdir64b'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='movdiri'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='split-lock-detect'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='Snowridge-v3'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='cldemote'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='core-capability'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='movdir64b'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='movdiri'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='split-lock-detect'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='Snowridge-v4'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='cldemote'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='erms'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='movdir64b'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='movdiri'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='athlon'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='3dnow'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='3dnowext'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='athlon-v1'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='3dnow'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='3dnowext'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='core2duo'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='ss'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='core2duo-v1'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='ss'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='coreduo'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='ss'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='coreduo-v1'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='ss'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='n270'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='ss'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='n270-v1'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='ss'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='phenom'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='3dnow'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='3dnowext'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <blockers model='phenom-v1'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='3dnow'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <feature name='3dnowext'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </blockers>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]:     </mode>
Jan 20 14:18:01 compute-1 nova_compute[224882]:   </cpu>
Jan 20 14:18:01 compute-1 nova_compute[224882]:   <memoryBacking supported='yes'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:     <enum name='sourceType'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <value>file</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <value>anonymous</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <value>memfd</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:     </enum>
Jan 20 14:18:01 compute-1 nova_compute[224882]:   </memoryBacking>
Jan 20 14:18:01 compute-1 nova_compute[224882]:   <devices>
Jan 20 14:18:01 compute-1 nova_compute[224882]:     <disk supported='yes'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <enum name='diskDevice'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>disk</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>cdrom</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>floppy</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>lun</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </enum>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <enum name='bus'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>fdc</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>scsi</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>virtio</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>usb</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>sata</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </enum>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <enum name='model'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>virtio</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>virtio-transitional</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>virtio-non-transitional</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </enum>
Jan 20 14:18:01 compute-1 nova_compute[224882]:     </disk>
Jan 20 14:18:01 compute-1 nova_compute[224882]:     <graphics supported='yes'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <enum name='type'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>vnc</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>egl-headless</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>dbus</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </enum>
Jan 20 14:18:01 compute-1 nova_compute[224882]:     </graphics>
Jan 20 14:18:01 compute-1 nova_compute[224882]:     <video supported='yes'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <enum name='modelType'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>vga</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>cirrus</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>virtio</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>none</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>bochs</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>ramfb</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </enum>
Jan 20 14:18:01 compute-1 nova_compute[224882]:     </video>
Jan 20 14:18:01 compute-1 nova_compute[224882]:     <hostdev supported='yes'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <enum name='mode'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>subsystem</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </enum>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <enum name='startupPolicy'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>default</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>mandatory</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>requisite</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>optional</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </enum>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <enum name='subsysType'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>usb</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>pci</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>scsi</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </enum>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <enum name='capsType'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <enum name='pciBackend'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:     </hostdev>
Jan 20 14:18:01 compute-1 nova_compute[224882]:     <rng supported='yes'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <enum name='model'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>virtio</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>virtio-transitional</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>virtio-non-transitional</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </enum>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <enum name='backendModel'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>random</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>egd</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>builtin</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </enum>
Jan 20 14:18:01 compute-1 nova_compute[224882]:     </rng>
Jan 20 14:18:01 compute-1 nova_compute[224882]:     <filesystem supported='yes'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <enum name='driverType'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>path</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>handle</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>virtiofs</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </enum>
Jan 20 14:18:01 compute-1 nova_compute[224882]:     </filesystem>
Jan 20 14:18:01 compute-1 nova_compute[224882]:     <tpm supported='yes'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <enum name='model'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>tpm-tis</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>tpm-crb</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </enum>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <enum name='backendModel'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>emulator</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>external</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </enum>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <enum name='backendVersion'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>2.0</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </enum>
Jan 20 14:18:01 compute-1 nova_compute[224882]:     </tpm>
Jan 20 14:18:01 compute-1 nova_compute[224882]:     <redirdev supported='yes'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <enum name='bus'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>usb</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </enum>
Jan 20 14:18:01 compute-1 nova_compute[224882]:     </redirdev>
Jan 20 14:18:01 compute-1 nova_compute[224882]:     <channel supported='yes'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <enum name='type'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>pty</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>unix</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </enum>
Jan 20 14:18:01 compute-1 nova_compute[224882]:     </channel>
Jan 20 14:18:01 compute-1 nova_compute[224882]:     <crypto supported='yes'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <enum name='model'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <enum name='type'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>qemu</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </enum>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <enum name='backendModel'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>builtin</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </enum>
Jan 20 14:18:01 compute-1 nova_compute[224882]:     </crypto>
Jan 20 14:18:01 compute-1 nova_compute[224882]:     <interface supported='yes'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <enum name='backendType'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>default</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>passt</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </enum>
Jan 20 14:18:01 compute-1 nova_compute[224882]:     </interface>
Jan 20 14:18:01 compute-1 nova_compute[224882]:     <panic supported='yes'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <enum name='model'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>isa</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>hyperv</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </enum>
Jan 20 14:18:01 compute-1 nova_compute[224882]:     </panic>
Jan 20 14:18:01 compute-1 nova_compute[224882]:     <console supported='yes'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <enum name='type'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>null</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>vc</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>pty</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>dev</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>file</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>pipe</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>stdio</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>udp</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>tcp</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>unix</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>qemu-vdagent</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>dbus</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </enum>
Jan 20 14:18:01 compute-1 nova_compute[224882]:     </console>
Jan 20 14:18:01 compute-1 nova_compute[224882]:   </devices>
Jan 20 14:18:01 compute-1 nova_compute[224882]:   <features>
Jan 20 14:18:01 compute-1 nova_compute[224882]:     <gic supported='no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:     <vmcoreinfo supported='yes'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:     <genid supported='yes'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:     <backingStoreInput supported='yes'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:     <backup supported='yes'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:     <async-teardown supported='yes'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:     <s390-pv supported='no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:     <ps2 supported='yes'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:     <tdx supported='no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:     <sev supported='no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:     <sgx supported='no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:     <hyperv supported='yes'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <enum name='features'>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>relaxed</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>vapic</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>spinlocks</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>vpindex</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>runtime</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>synic</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>stimer</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>reset</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>vendor_id</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>frequencies</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>reenlightenment</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>tlbflush</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>ipi</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>avic</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>emsr_bitmap</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <value>xmm_input</value>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </enum>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       <defaults>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <spinlocks>4095</spinlocks>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <stimer_direct>on</stimer_direct>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <tlbflush_direct>on</tlbflush_direct>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <tlbflush_extended>on</tlbflush_extended>
Jan 20 14:18:01 compute-1 nova_compute[224882]:         <vendor_id>Linux KVM Hv</vendor_id>
Jan 20 14:18:01 compute-1 nova_compute[224882]:       </defaults>
Jan 20 14:18:01 compute-1 nova_compute[224882]:     </hyperv>
Jan 20 14:18:01 compute-1 nova_compute[224882]:     <launchSecurity supported='no'/>
Jan 20 14:18:01 compute-1 nova_compute[224882]:   </features>
Jan 20 14:18:01 compute-1 nova_compute[224882]: </domainCapabilities>
Jan 20 14:18:01 compute-1 nova_compute[224882]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 20 14:18:01 compute-1 nova_compute[224882]: 2026-01-20 14:18:01.160 224886 DEBUG nova.virt.libvirt.host [None req-5b5754ad-a2eb-46b1-a3d6-44d73c82b4c2 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Jan 20 14:18:01 compute-1 nova_compute[224882]: 2026-01-20 14:18:01.160 224886 DEBUG nova.virt.libvirt.host [None req-5b5754ad-a2eb-46b1-a3d6-44d73c82b4c2 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Jan 20 14:18:01 compute-1 nova_compute[224882]: 2026-01-20 14:18:01.161 224886 DEBUG nova.virt.libvirt.host [None req-5b5754ad-a2eb-46b1-a3d6-44d73c82b4c2 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Jan 20 14:18:01 compute-1 nova_compute[224882]: 2026-01-20 14:18:01.166 224886 INFO nova.virt.libvirt.host [None req-5b5754ad-a2eb-46b1-a3d6-44d73c82b4c2 - - - - - -] Secure Boot support detected
Jan 20 14:18:01 compute-1 nova_compute[224882]: 2026-01-20 14:18:01.167 224886 INFO nova.virt.libvirt.driver [None req-5b5754ad-a2eb-46b1-a3d6-44d73c82b4c2 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Jan 20 14:18:01 compute-1 nova_compute[224882]: 2026-01-20 14:18:01.168 224886 INFO nova.virt.libvirt.driver [None req-5b5754ad-a2eb-46b1-a3d6-44d73c82b4c2 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Jan 20 14:18:01 compute-1 nova_compute[224882]: 2026-01-20 14:18:01.179 224886 DEBUG nova.virt.libvirt.driver [None req-5b5754ad-a2eb-46b1-a3d6-44d73c82b4c2 - - - - - -] cpu compare xml: <cpu match="exact">
Jan 20 14:18:01 compute-1 nova_compute[224882]:   <model>Nehalem</model>
Jan 20 14:18:01 compute-1 nova_compute[224882]: </cpu>
Jan 20 14:18:01 compute-1 nova_compute[224882]:  _compare_cpu /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10019
Jan 20 14:18:01 compute-1 nova_compute[224882]: 2026-01-20 14:18:01.182 224886 DEBUG nova.virt.libvirt.driver [None req-5b5754ad-a2eb-46b1-a3d6-44d73c82b4c2 - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097
Jan 20 14:18:01 compute-1 nova_compute[224882]: 2026-01-20 14:18:01.276 224886 INFO nova.virt.node [None req-5b5754ad-a2eb-46b1-a3d6-44d73c82b4c2 - - - - - -] Determined node identity bbb02880-a710-4ac1-8b2c-5c09765848d1 from /var/lib/nova/compute_id
Jan 20 14:18:01 compute-1 nova_compute[224882]: 2026-01-20 14:18:01.296 224886 WARNING nova.compute.manager [None req-5b5754ad-a2eb-46b1-a3d6-44d73c82b4c2 - - - - - -] Compute nodes ['bbb02880-a710-4ac1-8b2c-5c09765848d1'] for host compute-1.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.
Jan 20 14:18:01 compute-1 nova_compute[224882]: 2026-01-20 14:18:01.363 224886 INFO nova.compute.manager [None req-5b5754ad-a2eb-46b1-a3d6-44d73c82b4c2 - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host
Jan 20 14:18:01 compute-1 nova_compute[224882]: 2026-01-20 14:18:01.453 224886 WARNING nova.compute.manager [None req-5b5754ad-a2eb-46b1-a3d6-44d73c82b4c2 - - - - - -] No compute node record found for host compute-1.ctlplane.example.com. If this is the first time this service is starting on this host, then you can ignore this warning.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-1.ctlplane.example.com could not be found.
Jan 20 14:18:01 compute-1 nova_compute[224882]: 2026-01-20 14:18:01.453 224886 DEBUG oslo_concurrency.lockutils [None req-5b5754ad-a2eb-46b1-a3d6-44d73c82b4c2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:18:01 compute-1 nova_compute[224882]: 2026-01-20 14:18:01.453 224886 DEBUG oslo_concurrency.lockutils [None req-5b5754ad-a2eb-46b1-a3d6-44d73c82b4c2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:18:01 compute-1 nova_compute[224882]: 2026-01-20 14:18:01.454 224886 DEBUG oslo_concurrency.lockutils [None req-5b5754ad-a2eb-46b1-a3d6-44d73c82b4c2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:18:01 compute-1 nova_compute[224882]: 2026-01-20 14:18:01.454 224886 DEBUG nova.compute.resource_tracker [None req-5b5754ad-a2eb-46b1-a3d6-44d73c82b4c2 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 14:18:01 compute-1 nova_compute[224882]: 2026-01-20 14:18:01.454 224886 DEBUG oslo_concurrency.processutils [None req-5b5754ad-a2eb-46b1-a3d6-44d73c82b4c2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:18:01 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:18:01 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:18:01 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:18:01.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:18:01 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:18:01 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:18:01 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:18:01.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:18:01 compute-1 sudo[225762]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sntrxoaseerooydtycgiqwfnpggzrjpl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918681.455165-3596-82038465473983/AnsiballZ_systemd.py'
Jan 20 14:18:01 compute-1 sudo[225762]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:18:01 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:18:01 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/996481584' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:18:01 compute-1 nova_compute[224882]: 2026-01-20 14:18:01.991 224886 DEBUG oslo_concurrency.processutils [None req-5b5754ad-a2eb-46b1-a3d6-44d73c82b4c2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.536s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:18:02 compute-1 systemd[1]: Starting libvirt nodedev daemon...
Jan 20 14:18:02 compute-1 systemd[1]: Started libvirt nodedev daemon.
Jan 20 14:18:02 compute-1 python3.9[225764]: ansible-ansible.builtin.systemd Invoked with name=edpm_nova_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 20 14:18:02 compute-1 systemd[1]: Stopping nova_compute container...
Jan 20 14:18:02 compute-1 ceph-mon[81775]: pgmap v786: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:18:02 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/1750596979' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:18:02 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/3870360195' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:18:02 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/996481584' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:18:02 compute-1 nova_compute[224882]: 2026-01-20 14:18:02.243 224886 DEBUG oslo_concurrency.lockutils [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:18:02 compute-1 nova_compute[224882]: 2026-01-20 14:18:02.244 224886 DEBUG oslo_concurrency.lockutils [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:18:02 compute-1 nova_compute[224882]: 2026-01-20 14:18:02.244 224886 DEBUG oslo_concurrency.lockutils [None req-52a0eeda-649c-48fd-9b77-6bd39b39a074 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:18:02 compute-1 virtqemud[225396]: libvirt version: 11.10.0, package: 2.el9 (builder@centos.org, 2025-12-18-15:09:54, )
Jan 20 14:18:02 compute-1 virtqemud[225396]: hostname: compute-1
Jan 20 14:18:02 compute-1 virtqemud[225396]: End of file while reading data: Input/output error
Jan 20 14:18:02 compute-1 systemd[1]: libpod-9b6a80ba477be5ac2929fd65dfa7fd443cd6e560d8aa8cb2bdfccdc381ce7301.scope: Deactivated successfully.
Jan 20 14:18:02 compute-1 systemd[1]: libpod-9b6a80ba477be5ac2929fd65dfa7fd443cd6e560d8aa8cb2bdfccdc381ce7301.scope: Consumed 3.678s CPU time.
Jan 20 14:18:02 compute-1 conmon[224882]: conmon 9b6a80ba477be5ac2929 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9b6a80ba477be5ac2929fd65dfa7fd443cd6e560d8aa8cb2bdfccdc381ce7301.scope/container/memory.events
Jan 20 14:18:02 compute-1 podman[225791]: 2026-01-20 14:18:02.671493888 +0000 UTC m=+0.474344146 container died 9b6a80ba477be5ac2929fd65dfa7fd443cd6e560d8aa8cb2bdfccdc381ce7301 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, container_name=nova_compute, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 20 14:18:02 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9b6a80ba477be5ac2929fd65dfa7fd443cd6e560d8aa8cb2bdfccdc381ce7301-userdata-shm.mount: Deactivated successfully.
Jan 20 14:18:02 compute-1 systemd[1]: var-lib-containers-storage-overlay-09bfc735065820b3b853aee6741a29dbfb1f97c4dd98fe504baf6c65a72090cd-merged.mount: Deactivated successfully.
Jan 20 14:18:03 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:18:03 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:18:03 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:18:03.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:18:03 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:18:03 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:18:03 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:18:03.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:18:04 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:18:04 compute-1 podman[225791]: 2026-01-20 14:18:04.985721827 +0000 UTC m=+2.788572035 container cleanup 9b6a80ba477be5ac2929fd65dfa7fd443cd6e560d8aa8cb2bdfccdc381ce7301 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=nova_compute, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Jan 20 14:18:04 compute-1 podman[225791]: nova_compute
Jan 20 14:18:05 compute-1 podman[225825]: nova_compute
Jan 20 14:18:05 compute-1 systemd[1]: edpm_nova_compute.service: Deactivated successfully.
Jan 20 14:18:05 compute-1 systemd[1]: Stopped nova_compute container.
Jan 20 14:18:05 compute-1 systemd[1]: Starting nova_compute container...
Jan 20 14:18:05 compute-1 systemd[1]: Started libcrun container.
Jan 20 14:18:05 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09bfc735065820b3b853aee6741a29dbfb1f97c4dd98fe504baf6c65a72090cd/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Jan 20 14:18:05 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09bfc735065820b3b853aee6741a29dbfb1f97c4dd98fe504baf6c65a72090cd/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Jan 20 14:18:05 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09bfc735065820b3b853aee6741a29dbfb1f97c4dd98fe504baf6c65a72090cd/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Jan 20 14:18:05 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09bfc735065820b3b853aee6741a29dbfb1f97c4dd98fe504baf6c65a72090cd/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Jan 20 14:18:05 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09bfc735065820b3b853aee6741a29dbfb1f97c4dd98fe504baf6c65a72090cd/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Jan 20 14:18:05 compute-1 podman[225839]: 2026-01-20 14:18:05.226396483 +0000 UTC m=+0.124769765 container init 9b6a80ba477be5ac2929fd65dfa7fd443cd6e560d8aa8cb2bdfccdc381ce7301 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=edpm, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Jan 20 14:18:05 compute-1 podman[225839]: 2026-01-20 14:18:05.231784946 +0000 UTC m=+0.130158228 container start 9b6a80ba477be5ac2929fd65dfa7fd443cd6e560d8aa8cb2bdfccdc381ce7301 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true, container_name=nova_compute, managed_by=edpm_ansible)
Jan 20 14:18:05 compute-1 nova_compute[225855]: + sudo -E kolla_set_configs
Jan 20 14:18:05 compute-1 podman[225839]: nova_compute
Jan 20 14:18:05 compute-1 systemd[1]: Started nova_compute container.
Jan 20 14:18:05 compute-1 sudo[225762]: pam_unix(sudo:session): session closed for user root
Jan 20 14:18:05 compute-1 nova_compute[225855]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Jan 20 14:18:05 compute-1 nova_compute[225855]: INFO:__main__:Validating config file
Jan 20 14:18:05 compute-1 nova_compute[225855]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Jan 20 14:18:05 compute-1 nova_compute[225855]: INFO:__main__:Copying service configuration files
Jan 20 14:18:05 compute-1 nova_compute[225855]: INFO:__main__:Deleting /etc/nova/nova.conf
Jan 20 14:18:05 compute-1 nova_compute[225855]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Jan 20 14:18:05 compute-1 nova_compute[225855]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Jan 20 14:18:05 compute-1 nova_compute[225855]: INFO:__main__:Deleting /etc/nova/nova.conf.d/01-nova.conf
Jan 20 14:18:05 compute-1 nova_compute[225855]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Jan 20 14:18:05 compute-1 nova_compute[225855]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Jan 20 14:18:05 compute-1 nova_compute[225855]: INFO:__main__:Deleting /etc/nova/nova.conf.d/03-ceph-nova.conf
Jan 20 14:18:05 compute-1 nova_compute[225855]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Jan 20 14:18:05 compute-1 nova_compute[225855]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Jan 20 14:18:05 compute-1 nova_compute[225855]: INFO:__main__:Deleting /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 20 14:18:05 compute-1 nova_compute[225855]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 20 14:18:05 compute-1 nova_compute[225855]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 20 14:18:05 compute-1 nova_compute[225855]: INFO:__main__:Deleting /etc/nova/nova.conf.d/nova-blank.conf
Jan 20 14:18:05 compute-1 nova_compute[225855]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Jan 20 14:18:05 compute-1 nova_compute[225855]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Jan 20 14:18:05 compute-1 nova_compute[225855]: INFO:__main__:Deleting /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 20 14:18:05 compute-1 nova_compute[225855]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 20 14:18:05 compute-1 nova_compute[225855]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 20 14:18:05 compute-1 nova_compute[225855]: INFO:__main__:Deleting /etc/ceph
Jan 20 14:18:05 compute-1 nova_compute[225855]: INFO:__main__:Creating directory /etc/ceph
Jan 20 14:18:05 compute-1 nova_compute[225855]: INFO:__main__:Setting permission for /etc/ceph
Jan 20 14:18:05 compute-1 nova_compute[225855]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Jan 20 14:18:05 compute-1 nova_compute[225855]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Jan 20 14:18:05 compute-1 nova_compute[225855]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Jan 20 14:18:05 compute-1 nova_compute[225855]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Jan 20 14:18:05 compute-1 nova_compute[225855]: INFO:__main__:Deleting /var/lib/nova/.ssh/ssh-privatekey
Jan 20 14:18:05 compute-1 nova_compute[225855]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Jan 20 14:18:05 compute-1 nova_compute[225855]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Jan 20 14:18:05 compute-1 nova_compute[225855]: INFO:__main__:Deleting /var/lib/nova/.ssh/config
Jan 20 14:18:05 compute-1 nova_compute[225855]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Jan 20 14:18:05 compute-1 nova_compute[225855]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Jan 20 14:18:05 compute-1 nova_compute[225855]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Jan 20 14:18:05 compute-1 nova_compute[225855]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Jan 20 14:18:05 compute-1 nova_compute[225855]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Jan 20 14:18:05 compute-1 nova_compute[225855]: INFO:__main__:Writing out command to execute
Jan 20 14:18:05 compute-1 nova_compute[225855]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Jan 20 14:18:05 compute-1 nova_compute[225855]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Jan 20 14:18:05 compute-1 nova_compute[225855]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Jan 20 14:18:05 compute-1 nova_compute[225855]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Jan 20 14:18:05 compute-1 nova_compute[225855]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Jan 20 14:18:05 compute-1 nova_compute[225855]: ++ cat /run_command
Jan 20 14:18:05 compute-1 nova_compute[225855]: + CMD=nova-compute
Jan 20 14:18:05 compute-1 nova_compute[225855]: + ARGS=
Jan 20 14:18:05 compute-1 nova_compute[225855]: + sudo kolla_copy_cacerts
Jan 20 14:18:05 compute-1 nova_compute[225855]: + [[ ! -n '' ]]
Jan 20 14:18:05 compute-1 nova_compute[225855]: + . kolla_extend_start
Jan 20 14:18:05 compute-1 nova_compute[225855]: + echo 'Running command: '\''nova-compute'\'''
Jan 20 14:18:05 compute-1 nova_compute[225855]: Running command: 'nova-compute'
Jan 20 14:18:05 compute-1 nova_compute[225855]: + umask 0022
Jan 20 14:18:05 compute-1 nova_compute[225855]: + exec nova-compute
Jan 20 14:18:05 compute-1 ceph-mon[81775]: pgmap v787: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:18:05 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:18:05 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:18:05 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:18:05.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:18:05 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:18:05 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:18:05 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:18:05.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:18:06 compute-1 sudo[226018]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tvflqfkjhmnubgjmezglzfszmhlptxvq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918686.2035432-3623-163819754941144/AnsiballZ_podman_container.py'
Jan 20 14:18:06 compute-1 sudo[226018]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:18:06 compute-1 ceph-mon[81775]: pgmap v788: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:18:06 compute-1 python3.9[226020]: ansible-containers.podman.podman_container Invoked with name=nova_compute_init state=started executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Jan 20 14:18:07 compute-1 systemd[1]: Started libpod-conmon-2f02162efb4537956962187671675fe84df08f5b348c51f1cd15ed9e22a6e5c3.scope.
Jan 20 14:18:07 compute-1 systemd[1]: Started libcrun container.
Jan 20 14:18:07 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0d02bc159c28585f78064f19fbd91a200478408363f8183b598eed3d83849b0/merged/usr/sbin/nova_statedir_ownership.py supports timestamps until 2038 (0x7fffffff)
Jan 20 14:18:07 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0d02bc159c28585f78064f19fbd91a200478408363f8183b598eed3d83849b0/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Jan 20 14:18:07 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0d02bc159c28585f78064f19fbd91a200478408363f8183b598eed3d83849b0/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff)
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.172 225859 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.173 225859 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.173 225859 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.173 225859 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs
Jan 20 14:18:07 compute-1 podman[226045]: 2026-01-20 14:18:07.174670656 +0000 UTC m=+0.212338352 container init 2f02162efb4537956962187671675fe84df08f5b348c51f1cd15ed9e22a6e5c3 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, container_name=nova_compute_init, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 20 14:18:07 compute-1 podman[226045]: 2026-01-20 14:18:07.183200029 +0000 UTC m=+0.220867715 container start 2f02162efb4537956962187671675fe84df08f5b348c51f1cd15ed9e22a6e5c3 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, container_name=nova_compute_init, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 20 14:18:07 compute-1 python3.9[226020]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman start nova_compute_init
Jan 20 14:18:07 compute-1 nova_compute_init[226068]: INFO:nova_statedir:Applying nova statedir ownership
Jan 20 14:18:07 compute-1 nova_compute_init[226068]: INFO:nova_statedir:Target ownership for /var/lib/nova: 42436:42436
Jan 20 14:18:07 compute-1 nova_compute_init[226068]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/
Jan 20 14:18:07 compute-1 nova_compute_init[226068]: INFO:nova_statedir:Changing ownership of /var/lib/nova from 1000:1000 to 42436:42436
Jan 20 14:18:07 compute-1 nova_compute_init[226068]: INFO:nova_statedir:Setting selinux context of /var/lib/nova to system_u:object_r:container_file_t:s0
Jan 20 14:18:07 compute-1 nova_compute_init[226068]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/instances/
Jan 20 14:18:07 compute-1 nova_compute_init[226068]: INFO:nova_statedir:Changing ownership of /var/lib/nova/instances from 1000:1000 to 42436:42436
Jan 20 14:18:07 compute-1 nova_compute_init[226068]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances to system_u:object_r:container_file_t:s0
Jan 20 14:18:07 compute-1 nova_compute_init[226068]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/
Jan 20 14:18:07 compute-1 nova_compute_init[226068]: INFO:nova_statedir:Ownership of /var/lib/nova/.ssh already 42436:42436
Jan 20 14:18:07 compute-1 nova_compute_init[226068]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.ssh to system_u:object_r:container_file_t:s0
Jan 20 14:18:07 compute-1 nova_compute_init[226068]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ssh-privatekey
Jan 20 14:18:07 compute-1 nova_compute_init[226068]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/config
Jan 20 14:18:07 compute-1 nova_compute_init[226068]: INFO:nova_statedir:Nova statedir ownership complete
Jan 20 14:18:07 compute-1 systemd[1]: libpod-2f02162efb4537956962187671675fe84df08f5b348c51f1cd15ed9e22a6e5c3.scope: Deactivated successfully.
Jan 20 14:18:07 compute-1 podman[226069]: 2026-01-20 14:18:07.25997253 +0000 UTC m=+0.045478913 container died 2f02162efb4537956962187671675fe84df08f5b348c51f1cd15ed9e22a6e5c3 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, container_name=nova_compute_init, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=edpm)
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.304 225859 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.327 225859 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.023s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.328 225859 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
Jan 20 14:18:07 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2f02162efb4537956962187671675fe84df08f5b348c51f1cd15ed9e22a6e5c3-userdata-shm.mount: Deactivated successfully.
Jan 20 14:18:07 compute-1 systemd[1]: var-lib-containers-storage-overlay-c0d02bc159c28585f78064f19fbd91a200478408363f8183b598eed3d83849b0-merged.mount: Deactivated successfully.
Jan 20 14:18:07 compute-1 podman[226082]: 2026-01-20 14:18:07.362125721 +0000 UTC m=+0.091141450 container cleanup 2f02162efb4537956962187671675fe84df08f5b348c51f1cd15ed9e22a6e5c3 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, container_name=nova_compute_init, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, config_id=edpm)
Jan 20 14:18:07 compute-1 systemd[1]: libpod-conmon-2f02162efb4537956962187671675fe84df08f5b348c51f1cd15ed9e22a6e5c3.scope: Deactivated successfully.
Jan 20 14:18:07 compute-1 sudo[226018]: pam_unix(sudo:session): session closed for user root
Jan 20 14:18:07 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:18:07 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:18:07 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:18:07.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:18:07 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:18:07 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:18:07 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:18:07.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:18:07 compute-1 ceph-mon[81775]: pgmap v789: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.790 225859 INFO nova.virt.driver [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.913 225859 INFO nova.compute.provider_config [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.924 225859 DEBUG oslo_concurrency.lockutils [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.924 225859 DEBUG oslo_concurrency.lockutils [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.924 225859 DEBUG oslo_concurrency.lockutils [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.924 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.924 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.925 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.925 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.925 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.925 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.925 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.925 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.925 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.926 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.926 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.926 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.926 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.926 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.926 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.926 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.926 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.927 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.927 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.927 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] console_host                   = compute-1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.927 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.927 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.927 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.927 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.928 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.928 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.928 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.928 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.928 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.928 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.928 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.929 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.929 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.929 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.929 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.929 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.929 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.929 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.930 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] host                           = compute-1.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.930 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.930 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.930 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.930 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.931 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.931 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.931 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.931 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.931 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.931 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.931 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.932 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.932 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.932 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.932 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.932 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.932 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.932 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.933 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.933 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.933 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.933 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.933 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.933 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.933 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.934 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.934 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.934 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.934 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.934 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.934 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.934 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.935 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.935 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.935 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.935 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.935 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.935 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.935 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.936 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.936 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.936 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] my_block_storage_ip            = 192.168.122.101 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.936 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] my_ip                          = 192.168.122.101 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.936 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.936 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.936 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.937 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.937 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.937 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.937 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.937 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.937 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.937 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.938 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.938 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.938 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.938 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.938 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.938 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.938 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.939 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.939 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.939 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.939 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.939 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.939 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.939 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.940 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.940 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.940 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.940 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.940 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.940 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.940 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.941 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.941 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.941 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.941 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.941 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.941 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.941 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.942 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.942 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.942 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.942 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.942 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.942 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.942 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.943 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.943 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.943 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.943 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.943 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.943 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.944 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.944 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.944 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.944 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.944 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.944 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.944 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.945 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.945 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.945 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.945 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.945 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.945 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.945 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.946 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.946 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.946 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.946 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.946 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.947 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.947 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.947 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.947 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.947 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.948 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.948 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.948 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.948 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.948 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.948 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.949 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.949 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.949 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.949 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.949 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.949 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.949 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.950 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.950 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.950 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.950 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.950 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.950 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.951 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.951 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.951 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.951 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.951 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.951 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.952 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.952 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.952 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.952 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.952 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.952 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.952 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.953 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.953 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.953 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.953 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.953 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.953 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.954 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.954 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.954 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.954 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.954 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.954 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.954 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.955 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.955 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.955 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.955 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.955 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.955 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.955 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.956 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.956 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.956 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.956 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.956 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.956 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.957 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.957 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.957 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.957 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.957 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.957 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.957 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.957 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.958 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.958 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.958 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.958 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.958 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.958 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.958 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.959 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.959 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.959 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.959 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.959 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.959 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.959 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.960 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.960 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.960 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.960 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.960 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.960 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.960 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.961 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.961 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.961 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.961 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.961 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.961 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.961 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.961 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.962 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.962 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.962 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.962 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.962 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.962 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.962 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.963 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.963 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.963 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.963 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.963 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.963 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.963 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.964 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.964 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.964 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.964 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.964 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.964 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.964 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.964 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.965 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.965 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.965 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.965 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.965 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.965 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.965 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.966 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.966 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.966 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.966 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.966 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.966 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.966 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.967 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.967 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.967 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.967 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.967 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.967 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.967 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.968 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.968 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.968 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.968 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.968 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.968 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.968 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.969 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.969 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.969 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.969 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.969 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.969 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.969 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.970 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.970 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.970 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.970 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.970 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.970 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.970 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.970 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.971 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.971 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.971 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.971 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.971 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.971 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.971 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.972 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.972 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.972 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.972 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.972 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.972 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.972 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.972 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.973 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.973 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.973 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.973 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.973 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.973 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.973 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.974 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.974 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.974 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.974 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.974 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.974 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.974 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.975 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.975 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.975 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.975 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.975 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.975 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.975 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.975 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.976 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.976 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.976 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.976 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.976 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.976 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.977 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.977 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.977 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.977 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.977 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.978 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.978 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.978 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.978 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.978 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.978 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.978 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.978 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.979 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.979 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.979 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.979 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.979 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.979 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.979 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.980 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.980 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.980 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.980 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.980 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.980 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.980 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.981 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.981 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.981 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.981 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.981 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.981 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.982 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.982 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.982 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 sshd-session[201654]: Connection closed by 192.168.122.30 port 47464
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.982 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.982 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.983 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.983 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.983 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.983 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.983 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 sshd-session[201638]: pam_unix(sshd:session): session closed for user zuul
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.983 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.985 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.986 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.986 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.986 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.986 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.986 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.986 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.987 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.987 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.987 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.987 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.987 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.987 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.988 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 systemd[1]: session-49.scope: Deactivated successfully.
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.988 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.988 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 systemd[1]: session-49.scope: Consumed 2min 10.933s CPU time.
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.988 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.988 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.988 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.988 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.989 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.989 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.989 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.989 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.989 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 systemd-logind[783]: Session 49 logged out. Waiting for processes to exit.
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.989 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.990 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.990 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.990 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.990 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.990 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.990 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.990 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.991 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.991 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.991 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 systemd-logind[783]: Removed session 49.
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.991 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.991 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.991 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.991 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.992 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.992 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.992 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.992 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.992 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.992 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.992 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.993 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.993 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] libvirt.cpu_mode               = custom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.993 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.993 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] libvirt.cpu_models             = ['Nehalem'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.993 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.993 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.994 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.994 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.994 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.994 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.994 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.994 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.994 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.995 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.995 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.995 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.995 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.995 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.995 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.995 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.996 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.996 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.996 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.996 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.996 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.996 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.996 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.997 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.997 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.997 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.997 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.997 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.997 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.998 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.998 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.998 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.999 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.999 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:07 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.999 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:07.999 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.000 225859 WARNING oslo_config.cfg [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Jan 20 14:18:08 compute-1 nova_compute[225855]: live_migration_uri is deprecated for removal in favor of two other options that
Jan 20 14:18:08 compute-1 nova_compute[225855]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Jan 20 14:18:08 compute-1 nova_compute[225855]: and ``live_migration_inbound_addr`` respectively.
Jan 20 14:18:08 compute-1 nova_compute[225855]: ).  Its value may be silently ignored in the future.
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.000 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.000 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.001 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.001 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.001 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.001 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.002 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.002 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.002 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.002 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.003 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.003 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.003 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.003 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.004 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.004 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.004 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.004 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.004 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] libvirt.rbd_secret_uuid        = e399cf45-e6b6-5393-99f1-75c601d3f188 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.005 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.005 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.005 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.005 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.006 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.006 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.006 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.006 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.007 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.007 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.007 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.007 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.008 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.008 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.008 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.008 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.009 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.009 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.009 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.010 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.010 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.010 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.011 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.011 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.011 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.011 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.012 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.012 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.012 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.012 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.013 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.013 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.013 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.013 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.014 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.014 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.014 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.014 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.015 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.015 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.015 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.015 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.015 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.016 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.016 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.016 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.016 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.016 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.017 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.017 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.017 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.017 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.018 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.018 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.018 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.018 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.019 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.019 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.019 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.019 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.019 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.020 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.020 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.020 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.020 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.021 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.021 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.021 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.021 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.022 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.022 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.022 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.022 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.023 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.023 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.023 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.023 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.024 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.024 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.024 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.024 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.025 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.025 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.025 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.025 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.025 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.026 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.026 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.026 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.026 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.026 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.027 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.027 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.027 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.027 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.027 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.028 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.028 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.028 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.028 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.028 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.029 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.029 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.029 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.029 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.030 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.030 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.030 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.030 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.031 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.031 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.031 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.031 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.031 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.032 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.032 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.032 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.032 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.032 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.033 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.033 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.034 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.034 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.034 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.034 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.035 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.035 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.035 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.035 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.036 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.036 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.036 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.036 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.036 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.037 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.037 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.037 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.037 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.038 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.038 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.038 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.038 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.038 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.039 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.039 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.039 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.039 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.039 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.040 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.040 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.040 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.040 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.041 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.041 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.041 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.041 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.041 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.042 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.042 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.042 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.042 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.043 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.043 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.043 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.043 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.044 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.044 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.044 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.044 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.045 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.045 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.045 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.045 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.045 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.046 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.046 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.046 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.046 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.047 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.047 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.047 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.047 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.047 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.048 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.048 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.048 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.048 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.049 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.049 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.049 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.049 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.049 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.050 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.050 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.050 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.050 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.050 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.050 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.051 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.051 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.051 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.051 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.051 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.052 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.052 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.052 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.052 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.053 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.053 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.053 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.053 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.053 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.054 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.054 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.054 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.054 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.054 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.055 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.055 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.055 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.055 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.055 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.056 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.056 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.056 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.056 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.057 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.057 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.057 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.058 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.058 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.058 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.058 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.059 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] vnc.server_proxyclient_address = 192.168.122.101 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.059 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.059 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.059 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.060 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.060 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.060 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.060 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.060 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.061 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.061 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.061 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.061 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.061 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.062 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.062 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.062 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.062 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.062 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.063 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.063 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.063 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.063 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.064 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.064 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.064 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.064 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.065 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.065 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.065 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.065 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.065 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.066 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.066 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.066 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.066 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.066 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.066 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.067 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.067 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.067 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.067 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.068 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.068 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.068 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.068 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.068 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.068 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.068 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.069 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.069 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.069 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.069 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.069 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.069 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.070 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.070 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.070 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.070 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.070 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.070 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.071 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.071 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.071 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.071 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.071 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.071 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.072 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.072 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.072 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.072 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.072 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.072 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.072 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.073 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.073 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.073 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.073 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.073 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.073 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.074 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.074 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.074 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.074 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.074 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.074 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.074 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.075 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.075 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.075 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.075 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.075 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.076 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.076 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.076 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.076 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.076 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.076 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.076 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.077 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.077 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.077 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.077 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.077 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.077 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.078 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.078 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.078 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.078 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.078 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.078 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.078 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.079 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.079 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.079 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.079 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.079 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.079 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.079 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.080 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.080 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.080 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.080 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.080 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.080 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.081 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.081 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.081 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.081 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.081 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.082 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.082 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.082 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.082 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.082 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.082 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.082 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.083 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.083 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.083 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.083 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.083 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.083 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.083 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.084 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.084 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.084 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.084 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.084 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.084 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.085 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.085 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.085 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.085 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.085 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.085 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.085 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.086 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.086 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.086 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.086 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.086 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.086 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.087 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.087 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.087 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.087 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.087 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.087 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.087 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.088 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.088 225859 DEBUG oslo_service.service [None req-e24e79cd-c034-4520-8502-25699a19da22 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.088 225859 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.106 225859 INFO nova.virt.node [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Determined node identity bbb02880-a710-4ac1-8b2c-5c09765848d1 from /var/lib/nova/compute_id
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.107 225859 DEBUG nova.virt.libvirt.host [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.107 225859 DEBUG nova.virt.libvirt.host [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.108 225859 DEBUG nova.virt.libvirt.host [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.108 225859 DEBUG nova.virt.libvirt.host [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.123 225859 DEBUG nova.virt.libvirt.host [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7fdd113dbd00> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.126 225859 DEBUG nova.virt.libvirt.host [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7fdd113dbd00> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.127 225859 INFO nova.virt.libvirt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Connection event '1' reason 'None'
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.142 225859 INFO nova.virt.libvirt.host [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Libvirt host capabilities <capabilities>
Jan 20 14:18:08 compute-1 nova_compute[225855]: 
Jan 20 14:18:08 compute-1 nova_compute[225855]:   <host>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     <uuid>870b1f1c-f19c-477b-b282-ee6eeba50974</uuid>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     <cpu>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <arch>x86_64</arch>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model>EPYC-Rome-v4</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <vendor>AMD</vendor>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <microcode version='16777317'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <signature family='23' model='49' stepping='0'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <maxphysaddr mode='emulate' bits='40'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <feature name='x2apic'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <feature name='tsc-deadline'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <feature name='osxsave'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <feature name='hypervisor'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <feature name='tsc_adjust'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <feature name='spec-ctrl'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <feature name='stibp'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <feature name='arch-capabilities'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <feature name='ssbd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <feature name='cmp_legacy'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <feature name='topoext'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <feature name='virt-ssbd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <feature name='lbrv'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <feature name='tsc-scale'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <feature name='vmcb-clean'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <feature name='pause-filter'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <feature name='pfthreshold'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <feature name='svme-addr-chk'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <feature name='rdctl-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <feature name='skip-l1dfl-vmentry'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <feature name='mds-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <feature name='pschange-mc-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <pages unit='KiB' size='4'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <pages unit='KiB' size='2048'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <pages unit='KiB' size='1048576'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     </cpu>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     <power_management>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <suspend_mem/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     </power_management>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     <iommu support='no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     <migration_features>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <live/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <uri_transports>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <uri_transport>tcp</uri_transport>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <uri_transport>rdma</uri_transport>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </uri_transports>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     </migration_features>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     <topology>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <cells num='1'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <cell id='0'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:           <memory unit='KiB'>7864312</memory>
Jan 20 14:18:08 compute-1 nova_compute[225855]:           <pages unit='KiB' size='4'>1966078</pages>
Jan 20 14:18:08 compute-1 nova_compute[225855]:           <pages unit='KiB' size='2048'>0</pages>
Jan 20 14:18:08 compute-1 nova_compute[225855]:           <pages unit='KiB' size='1048576'>0</pages>
Jan 20 14:18:08 compute-1 nova_compute[225855]:           <distances>
Jan 20 14:18:08 compute-1 nova_compute[225855]:             <sibling id='0' value='10'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:           </distances>
Jan 20 14:18:08 compute-1 nova_compute[225855]:           <cpus num='8'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:             <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:             <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:             <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:             <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:             <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:             <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:             <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:             <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:           </cpus>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         </cell>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </cells>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     </topology>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     <cache>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     </cache>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     <secmodel>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model>selinux</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <doi>0</doi>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     </secmodel>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     <secmodel>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model>dac</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <doi>0</doi>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <baselabel type='kvm'>+107:+107</baselabel>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <baselabel type='qemu'>+107:+107</baselabel>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     </secmodel>
Jan 20 14:18:08 compute-1 nova_compute[225855]:   </host>
Jan 20 14:18:08 compute-1 nova_compute[225855]: 
Jan 20 14:18:08 compute-1 nova_compute[225855]:   <guest>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     <os_type>hvm</os_type>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     <arch name='i686'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <wordsize>32</wordsize>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <domain type='qemu'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <domain type='kvm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     </arch>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     <features>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <pae/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <nonpae/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <acpi default='on' toggle='yes'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <apic default='on' toggle='no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <cpuselection/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <deviceboot/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <disksnapshot default='on' toggle='no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <externalSnapshot/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     </features>
Jan 20 14:18:08 compute-1 nova_compute[225855]:   </guest>
Jan 20 14:18:08 compute-1 nova_compute[225855]: 
Jan 20 14:18:08 compute-1 nova_compute[225855]:   <guest>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     <os_type>hvm</os_type>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     <arch name='x86_64'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <wordsize>64</wordsize>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <domain type='qemu'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <domain type='kvm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     </arch>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     <features>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <acpi default='on' toggle='yes'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <apic default='on' toggle='no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <cpuselection/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <deviceboot/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <disksnapshot default='on' toggle='no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <externalSnapshot/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     </features>
Jan 20 14:18:08 compute-1 nova_compute[225855]:   </guest>
Jan 20 14:18:08 compute-1 nova_compute[225855]: 
Jan 20 14:18:08 compute-1 nova_compute[225855]: </capabilities>
Jan 20 14:18:08 compute-1 nova_compute[225855]: 
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.152 225859 DEBUG nova.virt.libvirt.host [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Getting domain capabilities for i686 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.156 225859 DEBUG nova.virt.libvirt.volume.mount [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.159 225859 DEBUG nova.virt.libvirt.host [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Jan 20 14:18:08 compute-1 nova_compute[225855]: <domainCapabilities>
Jan 20 14:18:08 compute-1 nova_compute[225855]:   <path>/usr/libexec/qemu-kvm</path>
Jan 20 14:18:08 compute-1 nova_compute[225855]:   <domain>kvm</domain>
Jan 20 14:18:08 compute-1 nova_compute[225855]:   <machine>pc-q35-rhel9.8.0</machine>
Jan 20 14:18:08 compute-1 nova_compute[225855]:   <arch>i686</arch>
Jan 20 14:18:08 compute-1 nova_compute[225855]:   <vcpu max='4096'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:   <iothreads supported='yes'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:   <os supported='yes'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     <enum name='firmware'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     <loader supported='yes'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <enum name='type'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>rom</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>pflash</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </enum>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <enum name='readonly'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>yes</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>no</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </enum>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <enum name='secure'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>no</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </enum>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     </loader>
Jan 20 14:18:08 compute-1 nova_compute[225855]:   </os>
Jan 20 14:18:08 compute-1 nova_compute[225855]:   <cpu>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     <mode name='host-passthrough' supported='yes'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <enum name='hostPassthroughMigratable'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>on</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>off</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </enum>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     </mode>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     <mode name='maximum' supported='yes'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <enum name='maximumMigratable'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>on</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>off</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </enum>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     </mode>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     <mode name='host-model' supported='yes'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model fallback='forbid'>EPYC-Rome</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <vendor>AMD</vendor>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <maxphysaddr mode='passthrough' limit='40'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <feature policy='require' name='x2apic'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <feature policy='require' name='tsc-deadline'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <feature policy='require' name='hypervisor'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <feature policy='require' name='tsc_adjust'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <feature policy='require' name='spec-ctrl'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <feature policy='require' name='stibp'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <feature policy='require' name='ssbd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <feature policy='require' name='cmp_legacy'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <feature policy='require' name='overflow-recov'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <feature policy='require' name='succor'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <feature policy='require' name='ibrs'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <feature policy='require' name='amd-ssbd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <feature policy='require' name='virt-ssbd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <feature policy='require' name='lbrv'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <feature policy='require' name='tsc-scale'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <feature policy='require' name='vmcb-clean'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <feature policy='require' name='flushbyasid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <feature policy='require' name='pause-filter'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <feature policy='require' name='pfthreshold'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <feature policy='require' name='svme-addr-chk'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <feature policy='require' name='lfence-always-serializing'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <feature policy='disable' name='xsaves'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     </mode>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     <mode name='custom' supported='yes'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Broadwell'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='hle'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='rtm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Broadwell-IBRS'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='hle'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='rtm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Broadwell-noTSX'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Broadwell-noTSX-IBRS'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Broadwell-v1'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='hle'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='rtm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Broadwell-v2'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Broadwell-v3'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='hle'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='rtm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Broadwell-v4'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Cascadelake-Server'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vnni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='hle'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pku'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='rtm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Cascadelake-Server-noTSX'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vnni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='ibrs-all'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pku'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Cascadelake-Server-v1'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vnni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='hle'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pku'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='rtm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Cascadelake-Server-v2'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vnni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='hle'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='ibrs-all'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pku'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='rtm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Cascadelake-Server-v3'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vnni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='ibrs-all'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pku'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Cascadelake-Server-v4'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vnni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='ibrs-all'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pku'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Cascadelake-Server-v5'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vnni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='ibrs-all'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pku'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='ClearwaterForest'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx-ifma'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx-ne-convert'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx-vnni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx-vnni-int16'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx-vnni-int8'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='bhi-ctrl'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='bhi-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='cldemote'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='cmpccxadd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='ddpd-u'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fbsdp-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fsrm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fsrs'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='gfni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='ibrs-all'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='intel-psfd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='ipred-ctrl'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='lam'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='mcdt-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='movdir64b'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='movdiri'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pbrsb-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pku'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='prefetchiti'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='psdp-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='rrsba-ctrl'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='sbdr-ssdp-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='serialize'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='sha512'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='sm3'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='sm4'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='ss'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vaes'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='ClearwaterForest-v1'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx-ifma'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx-ne-convert'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx-vnni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx-vnni-int16'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx-vnni-int8'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='bhi-ctrl'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='bhi-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='cldemote'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='cmpccxadd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='ddpd-u'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fbsdp-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fsrm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fsrs'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='gfni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='ibrs-all'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='intel-psfd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='ipred-ctrl'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='lam'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='mcdt-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='movdir64b'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='movdiri'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pbrsb-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pku'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='prefetchiti'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='psdp-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='rrsba-ctrl'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='sbdr-ssdp-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='serialize'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='sha512'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='sm3'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='sm4'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='ss'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vaes'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Cooperlake'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-bf16'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vnni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='hle'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='ibrs-all'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pku'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='rtm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='taa-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Cooperlake-v1'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-bf16'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vnni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='hle'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='ibrs-all'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pku'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='rtm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='taa-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Cooperlake-v2'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-bf16'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vnni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='hle'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='ibrs-all'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pku'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='rtm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='taa-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Denverton'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='mpx'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Denverton-v1'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='mpx'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Denverton-v2'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Denverton-v3'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Dhyana-v2'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='EPYC-Genoa'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='amd-psfd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='auto-ibrs'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-bf16'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bitalg'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512ifma'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vbmi'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vnni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fsrm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='gfni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='la57'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='no-nested-data-bp'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='null-sel-clr-base'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pku'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='stibp-always-on'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vaes'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='EPYC-Genoa-v1'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='amd-psfd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='auto-ibrs'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-bf16'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bitalg'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512ifma'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vbmi'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vnni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fsrm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='gfni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='la57'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='no-nested-data-bp'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='null-sel-clr-base'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pku'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='stibp-always-on'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vaes'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='EPYC-Genoa-v2'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='amd-psfd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='auto-ibrs'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-bf16'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bitalg'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512ifma'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vbmi'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vnni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fs-gs-base-ns'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fsrm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='gfni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='la57'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='no-nested-data-bp'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='null-sel-clr-base'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='perfmon-v2'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pku'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='stibp-always-on'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vaes'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='EPYC-Milan'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fsrm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pku'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='EPYC-Milan-v1'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fsrm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pku'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='EPYC-Milan-v2'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='amd-psfd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fsrm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='no-nested-data-bp'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='null-sel-clr-base'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pku'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='stibp-always-on'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vaes'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='EPYC-Milan-v3'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='amd-psfd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fsrm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='no-nested-data-bp'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='null-sel-clr-base'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pku'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='stibp-always-on'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vaes'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='EPYC-Rome'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='EPYC-Rome-v1'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='EPYC-Rome-v2'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='EPYC-Rome-v3'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='EPYC-Turin'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='amd-psfd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='auto-ibrs'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx-vnni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-bf16'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-vp2intersect'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bitalg'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512ifma'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vbmi'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vnni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fs-gs-base-ns'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fsrm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='gfni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='ibpb-brtype'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='la57'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='movdir64b'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='movdiri'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='no-nested-data-bp'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='null-sel-clr-base'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='perfmon-v2'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pku'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='prefetchi'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='sbpb'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='srso-user-kernel-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='stibp-always-on'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vaes'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='EPYC-Turin-v1'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='amd-psfd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='auto-ibrs'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx-vnni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-bf16'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-vp2intersect'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bitalg'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512ifma'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vbmi'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vnni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fs-gs-base-ns'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fsrm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='gfni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='ibpb-brtype'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='la57'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='movdir64b'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='movdiri'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='no-nested-data-bp'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='null-sel-clr-base'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='perfmon-v2'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pku'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='prefetchi'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='sbpb'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='srso-user-kernel-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='stibp-always-on'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vaes'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='EPYC-v3'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='EPYC-v4'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='EPYC-v5'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='GraniteRapids'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='amx-bf16'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='amx-fp16'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='amx-int8'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='amx-tile'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx-vnni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-bf16'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-fp16'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bitalg'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512ifma'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vbmi'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vnni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fbsdp-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fsrc'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fsrm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fsrs'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fzrm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='gfni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='hle'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='ibrs-all'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='la57'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='mcdt-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pbrsb-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pku'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='prefetchiti'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='psdp-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='rtm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='sbdr-ssdp-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='serialize'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='taa-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='tsx-ldtrk'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vaes'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xfd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='GraniteRapids-v1'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='amx-bf16'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='amx-fp16'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='amx-int8'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='amx-tile'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx-vnni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-bf16'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-fp16'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bitalg'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512ifma'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vbmi'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vnni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fbsdp-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fsrc'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fsrm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fsrs'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fzrm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='gfni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='hle'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='ibrs-all'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='la57'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='mcdt-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pbrsb-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pku'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='prefetchiti'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='psdp-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='rtm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='sbdr-ssdp-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='serialize'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='taa-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='tsx-ldtrk'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vaes'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xfd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='GraniteRapids-v2'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='amx-bf16'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='amx-fp16'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='amx-int8'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='amx-tile'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx-vnni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx10'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx10-128'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx10-256'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx10-512'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-bf16'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-fp16'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bitalg'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512ifma'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vbmi'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vnni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='cldemote'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fbsdp-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fsrc'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fsrm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fsrs'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fzrm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='gfni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='hle'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='ibrs-all'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='la57'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='mcdt-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='movdir64b'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='movdiri'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pbrsb-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pku'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='prefetchiti'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='psdp-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='rtm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='sbdr-ssdp-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='serialize'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='ss'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='taa-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='tsx-ldtrk'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vaes'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xfd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='GraniteRapids-v3'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='amx-bf16'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='amx-fp16'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='amx-int8'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='amx-tile'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx-vnni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx10'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx10-128'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx10-256'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx10-512'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-bf16'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-fp16'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bitalg'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512ifma'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vbmi'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vnni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='cldemote'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fbsdp-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fsrc'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fsrm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fsrs'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fzrm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='gfni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='hle'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='ibrs-all'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='la57'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='mcdt-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='movdir64b'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='movdiri'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pbrsb-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pku'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='prefetchiti'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='psdp-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='rtm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='sbdr-ssdp-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='serialize'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='ss'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='taa-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='tsx-ldtrk'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vaes'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xfd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Haswell'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='hle'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='rtm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Haswell-IBRS'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='hle'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='rtm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Haswell-noTSX'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Haswell-noTSX-IBRS'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Haswell-v1'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='hle'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='rtm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Haswell-v2'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Haswell-v3'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='hle'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='rtm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Haswell-v4'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Icelake-Server'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bitalg'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vbmi'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vnni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='gfni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='hle'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='la57'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pku'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='rtm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vaes'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Icelake-Server-noTSX'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bitalg'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vbmi'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vnni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='gfni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='la57'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pku'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vaes'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Icelake-Server-v1'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bitalg'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vbmi'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vnni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='gfni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='hle'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='la57'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pku'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='rtm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vaes'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Icelake-Server-v2'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bitalg'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vbmi'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vnni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='gfni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='la57'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pku'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vaes'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Icelake-Server-v3'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bitalg'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vbmi'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vnni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='gfni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='ibrs-all'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='la57'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pku'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='taa-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vaes'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Icelake-Server-v4'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bitalg'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512ifma'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vbmi'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vnni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fsrm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='gfni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='ibrs-all'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='la57'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pku'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='taa-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vaes'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Icelake-Server-v5'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bitalg'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512ifma'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vbmi'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vnni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fsrm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='gfni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='ibrs-all'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='la57'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pku'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='taa-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vaes'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Icelake-Server-v6'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bitalg'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512ifma'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vbmi'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vnni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fsrm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='gfni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='ibrs-all'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='la57'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pku'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='taa-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vaes'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Icelake-Server-v7'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bitalg'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512ifma'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vbmi'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vnni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fsrm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='gfni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='hle'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='ibrs-all'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='la57'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pku'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='rtm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='taa-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vaes'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='IvyBridge'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='IvyBridge-IBRS'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='IvyBridge-v1'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='IvyBridge-v2'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='KnightsMill'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-4fmaps'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-4vnniw'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512er'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512pf'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='ss'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='KnightsMill-v1'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-4fmaps'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-4vnniw'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512er'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512pf'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='ss'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Opteron_G4'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fma4'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xop'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Opteron_G4-v1'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fma4'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xop'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Opteron_G5'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fma4'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='tbm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xop'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Opteron_G5-v1'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fma4'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='tbm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xop'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='SapphireRapids'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='amx-bf16'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='amx-int8'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='amx-tile'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx-vnni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-bf16'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-fp16'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bitalg'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512ifma'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vbmi'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vnni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fsrc'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fsrm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fsrs'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fzrm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='gfni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='hle'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='ibrs-all'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='la57'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pku'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='rtm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='serialize'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='taa-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='tsx-ldtrk'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vaes'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xfd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='SapphireRapids-v1'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='amx-bf16'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='amx-int8'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='amx-tile'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx-vnni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-bf16'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-fp16'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bitalg'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512ifma'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vbmi'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vnni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fsrc'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fsrm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fsrs'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fzrm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='gfni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='hle'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='ibrs-all'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='la57'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pku'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='rtm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='serialize'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='taa-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='tsx-ldtrk'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vaes'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xfd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='SapphireRapids-v2'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='amx-bf16'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='amx-int8'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='amx-tile'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx-vnni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-bf16'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-fp16'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bitalg'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512ifma'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vbmi'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vnni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fbsdp-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fsrc'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fsrm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fsrs'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fzrm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='gfni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='hle'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='ibrs-all'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='la57'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pku'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='psdp-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='rtm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='sbdr-ssdp-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='serialize'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='taa-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='tsx-ldtrk'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vaes'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xfd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='SapphireRapids-v3'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='amx-bf16'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='amx-int8'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='amx-tile'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx-vnni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-bf16'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-fp16'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bitalg'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512ifma'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vbmi'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vnni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='cldemote'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fbsdp-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fsrc'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fsrm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fsrs'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fzrm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='gfni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='hle'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='ibrs-all'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='la57'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='movdir64b'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='movdiri'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pku'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='psdp-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='rtm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='sbdr-ssdp-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='serialize'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='ss'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='taa-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='tsx-ldtrk'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vaes'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xfd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='SapphireRapids-v4'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='amx-bf16'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='amx-int8'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='amx-tile'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx-vnni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-bf16'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-fp16'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bitalg'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512ifma'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vbmi'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vnni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='cldemote'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fbsdp-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fsrc'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fsrm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fsrs'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fzrm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='gfni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='hle'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='ibrs-all'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='la57'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='movdir64b'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='movdiri'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pku'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='psdp-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='rtm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='sbdr-ssdp-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='serialize'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='ss'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='taa-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='tsx-ldtrk'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vaes'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xfd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='SierraForest'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx-ifma'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx-ne-convert'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx-vnni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx-vnni-int8'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='cmpccxadd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fbsdp-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fsrm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fsrs'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='gfni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='ibrs-all'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='mcdt-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pbrsb-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pku'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='psdp-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='sbdr-ssdp-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='serialize'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vaes'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='SierraForest-v1'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx-ifma'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx-ne-convert'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx-vnni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx-vnni-int8'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='cmpccxadd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fbsdp-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fsrm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fsrs'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='gfni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='ibrs-all'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='mcdt-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pbrsb-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pku'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='psdp-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='sbdr-ssdp-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='serialize'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vaes'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='SierraForest-v2'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx-ifma'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx-ne-convert'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx-vnni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx-vnni-int8'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='bhi-ctrl'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='cldemote'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='cmpccxadd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fbsdp-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fsrm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fsrs'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='gfni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='ibrs-all'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='intel-psfd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='ipred-ctrl'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='lam'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='mcdt-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='movdir64b'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='movdiri'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pbrsb-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pku'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='psdp-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='rrsba-ctrl'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='sbdr-ssdp-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='serialize'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='ss'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vaes'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='SierraForest-v3'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx-ifma'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx-ne-convert'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx-vnni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx-vnni-int8'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='bhi-ctrl'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='cldemote'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='cmpccxadd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fbsdp-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fsrm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fsrs'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='gfni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='ibrs-all'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='intel-psfd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='ipred-ctrl'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='lam'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='mcdt-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='movdir64b'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='movdiri'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pbrsb-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pku'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='psdp-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='rrsba-ctrl'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='sbdr-ssdp-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='serialize'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='ss'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vaes'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Skylake-Client'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='hle'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='rtm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Skylake-Client-IBRS'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='hle'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='rtm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Skylake-Client-v1'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='hle'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='rtm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Skylake-Client-v2'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='hle'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='rtm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Skylake-Client-v3'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Skylake-Client-v4'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Skylake-Server'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='hle'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pku'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='rtm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Skylake-Server-IBRS'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='hle'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pku'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='rtm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pku'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Skylake-Server-v1'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='hle'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pku'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='rtm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Skylake-Server-v2'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='hle'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pku'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='rtm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Skylake-Server-v3'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pku'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Skylake-Server-v4'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pku'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Skylake-Server-v5'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pku'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Snowridge'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='cldemote'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='core-capability'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='gfni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='movdir64b'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='movdiri'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='mpx'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='split-lock-detect'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Snowridge-v1'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='cldemote'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='core-capability'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='gfni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='movdir64b'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='movdiri'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='mpx'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='split-lock-detect'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Snowridge-v2'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='cldemote'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='core-capability'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='gfni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='movdir64b'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='movdiri'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='split-lock-detect'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Snowridge-v3'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='cldemote'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='core-capability'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='gfni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='movdir64b'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='movdiri'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='split-lock-detect'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Snowridge-v4'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='cldemote'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='gfni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='movdir64b'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='movdiri'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='athlon'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='3dnow'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='3dnowext'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='athlon-v1'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='3dnow'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='3dnowext'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='core2duo'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='ss'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='core2duo-v1'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='ss'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='coreduo'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='ss'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='coreduo-v1'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='ss'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='n270'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='ss'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='n270-v1'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='ss'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='phenom'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='3dnow'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='3dnowext'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='phenom-v1'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='3dnow'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='3dnowext'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     </mode>
Jan 20 14:18:08 compute-1 nova_compute[225855]:   </cpu>
Jan 20 14:18:08 compute-1 nova_compute[225855]:   <memoryBacking supported='yes'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     <enum name='sourceType'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <value>file</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <value>anonymous</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <value>memfd</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     </enum>
Jan 20 14:18:08 compute-1 nova_compute[225855]:   </memoryBacking>
Jan 20 14:18:08 compute-1 nova_compute[225855]:   <devices>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     <disk supported='yes'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <enum name='diskDevice'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>disk</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>cdrom</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>floppy</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>lun</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </enum>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <enum name='bus'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>fdc</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>scsi</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>virtio</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>usb</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>sata</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </enum>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <enum name='model'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>virtio</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>virtio-transitional</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>virtio-non-transitional</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </enum>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     </disk>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     <graphics supported='yes'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <enum name='type'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>vnc</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>egl-headless</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>dbus</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </enum>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     </graphics>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     <video supported='yes'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <enum name='modelType'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>vga</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>cirrus</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>virtio</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>none</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>bochs</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>ramfb</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </enum>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     </video>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     <hostdev supported='yes'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <enum name='mode'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>subsystem</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </enum>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <enum name='startupPolicy'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>default</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>mandatory</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>requisite</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>optional</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </enum>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <enum name='subsysType'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>usb</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>pci</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>scsi</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </enum>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <enum name='capsType'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <enum name='pciBackend'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     </hostdev>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     <rng supported='yes'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <enum name='model'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>virtio</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>virtio-transitional</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>virtio-non-transitional</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </enum>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <enum name='backendModel'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>random</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>egd</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>builtin</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </enum>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     </rng>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     <filesystem supported='yes'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <enum name='driverType'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>path</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>handle</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>virtiofs</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </enum>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     </filesystem>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     <tpm supported='yes'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <enum name='model'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>tpm-tis</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>tpm-crb</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </enum>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <enum name='backendModel'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>emulator</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>external</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </enum>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <enum name='backendVersion'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>2.0</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </enum>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     </tpm>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     <redirdev supported='yes'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <enum name='bus'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>usb</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </enum>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     </redirdev>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     <channel supported='yes'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <enum name='type'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>pty</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>unix</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </enum>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     </channel>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     <crypto supported='yes'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <enum name='model'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <enum name='type'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>qemu</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </enum>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <enum name='backendModel'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>builtin</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </enum>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     </crypto>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     <interface supported='yes'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <enum name='backendType'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>default</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>passt</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </enum>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     </interface>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     <panic supported='yes'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <enum name='model'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>isa</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>hyperv</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </enum>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     </panic>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     <console supported='yes'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <enum name='type'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>null</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>vc</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>pty</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>dev</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>file</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>pipe</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>stdio</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>udp</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>tcp</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>unix</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>qemu-vdagent</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>dbus</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </enum>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     </console>
Jan 20 14:18:08 compute-1 nova_compute[225855]:   </devices>
Jan 20 14:18:08 compute-1 nova_compute[225855]:   <features>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     <gic supported='no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     <vmcoreinfo supported='yes'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     <genid supported='yes'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     <backingStoreInput supported='yes'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     <backup supported='yes'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     <async-teardown supported='yes'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     <s390-pv supported='no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     <ps2 supported='yes'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     <tdx supported='no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     <sev supported='no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     <sgx supported='no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     <hyperv supported='yes'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <enum name='features'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>relaxed</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>vapic</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>spinlocks</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>vpindex</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>runtime</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>synic</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>stimer</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>reset</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>vendor_id</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>frequencies</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>reenlightenment</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>tlbflush</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>ipi</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>avic</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>emsr_bitmap</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>xmm_input</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </enum>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <defaults>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <spinlocks>4095</spinlocks>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <stimer_direct>on</stimer_direct>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <tlbflush_direct>on</tlbflush_direct>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <tlbflush_extended>on</tlbflush_extended>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <vendor_id>Linux KVM Hv</vendor_id>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </defaults>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     </hyperv>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     <launchSecurity supported='no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:   </features>
Jan 20 14:18:08 compute-1 nova_compute[225855]: </domainCapabilities>
Jan 20 14:18:08 compute-1 nova_compute[225855]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.170 225859 DEBUG nova.virt.libvirt.host [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Jan 20 14:18:08 compute-1 nova_compute[225855]: <domainCapabilities>
Jan 20 14:18:08 compute-1 nova_compute[225855]:   <path>/usr/libexec/qemu-kvm</path>
Jan 20 14:18:08 compute-1 nova_compute[225855]:   <domain>kvm</domain>
Jan 20 14:18:08 compute-1 nova_compute[225855]:   <machine>pc-i440fx-rhel7.6.0</machine>
Jan 20 14:18:08 compute-1 nova_compute[225855]:   <arch>i686</arch>
Jan 20 14:18:08 compute-1 nova_compute[225855]:   <vcpu max='240'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:   <iothreads supported='yes'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:   <os supported='yes'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     <enum name='firmware'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     <loader supported='yes'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <enum name='type'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>rom</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>pflash</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </enum>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <enum name='readonly'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>yes</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>no</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </enum>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <enum name='secure'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>no</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </enum>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     </loader>
Jan 20 14:18:08 compute-1 nova_compute[225855]:   </os>
Jan 20 14:18:08 compute-1 nova_compute[225855]:   <cpu>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     <mode name='host-passthrough' supported='yes'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <enum name='hostPassthroughMigratable'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>on</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>off</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </enum>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     </mode>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     <mode name='maximum' supported='yes'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <enum name='maximumMigratable'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>on</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>off</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </enum>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     </mode>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     <mode name='host-model' supported='yes'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model fallback='forbid'>EPYC-Rome</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <vendor>AMD</vendor>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <maxphysaddr mode='passthrough' limit='40'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <feature policy='require' name='x2apic'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <feature policy='require' name='tsc-deadline'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <feature policy='require' name='hypervisor'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <feature policy='require' name='tsc_adjust'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <feature policy='require' name='spec-ctrl'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <feature policy='require' name='stibp'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <feature policy='require' name='ssbd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <feature policy='require' name='cmp_legacy'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <feature policy='require' name='overflow-recov'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <feature policy='require' name='succor'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <feature policy='require' name='ibrs'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <feature policy='require' name='amd-ssbd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <feature policy='require' name='virt-ssbd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <feature policy='require' name='lbrv'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <feature policy='require' name='tsc-scale'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <feature policy='require' name='vmcb-clean'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <feature policy='require' name='flushbyasid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <feature policy='require' name='pause-filter'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <feature policy='require' name='pfthreshold'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <feature policy='require' name='svme-addr-chk'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <feature policy='require' name='lfence-always-serializing'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <feature policy='disable' name='xsaves'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     </mode>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     <mode name='custom' supported='yes'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Broadwell'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='hle'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='rtm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Broadwell-IBRS'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='hle'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='rtm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Broadwell-noTSX'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Broadwell-noTSX-IBRS'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Broadwell-v1'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='hle'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='rtm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Broadwell-v2'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Broadwell-v3'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='hle'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='rtm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Broadwell-v4'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Cascadelake-Server'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vnni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='hle'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pku'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='rtm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Cascadelake-Server-noTSX'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vnni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='ibrs-all'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pku'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Cascadelake-Server-v1'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vnni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='hle'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pku'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='rtm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Cascadelake-Server-v2'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vnni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='hle'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='ibrs-all'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pku'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='rtm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Cascadelake-Server-v3'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vnni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='ibrs-all'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pku'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Cascadelake-Server-v4'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vnni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='ibrs-all'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pku'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Cascadelake-Server-v5'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vnni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='ibrs-all'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pku'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='ClearwaterForest'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx-ifma'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx-ne-convert'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx-vnni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx-vnni-int16'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx-vnni-int8'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='bhi-ctrl'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='bhi-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='cldemote'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='cmpccxadd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='ddpd-u'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fbsdp-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fsrm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fsrs'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='gfni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='ibrs-all'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='intel-psfd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='ipred-ctrl'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='lam'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='mcdt-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='movdir64b'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='movdiri'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pbrsb-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pku'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='prefetchiti'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='psdp-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='rrsba-ctrl'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='sbdr-ssdp-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='serialize'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='sha512'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='sm3'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='sm4'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='ss'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vaes'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='ClearwaterForest-v1'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx-ifma'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx-ne-convert'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx-vnni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx-vnni-int16'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx-vnni-int8'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='bhi-ctrl'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='bhi-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='cldemote'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='cmpccxadd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='ddpd-u'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fbsdp-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fsrm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fsrs'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='gfni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='ibrs-all'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='intel-psfd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='ipred-ctrl'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='lam'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='mcdt-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='movdir64b'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='movdiri'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pbrsb-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pku'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='prefetchiti'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='psdp-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='rrsba-ctrl'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='sbdr-ssdp-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='serialize'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='sha512'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='sm3'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='sm4'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='ss'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vaes'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Cooperlake'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-bf16'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vnni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='hle'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='ibrs-all'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pku'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='rtm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='taa-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Cooperlake-v1'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-bf16'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vnni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='hle'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='ibrs-all'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pku'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='rtm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='taa-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Cooperlake-v2'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-bf16'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vnni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='hle'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='ibrs-all'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pku'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='rtm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='taa-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Denverton'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='mpx'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Denverton-v1'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='mpx'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Denverton-v2'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Denverton-v3'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Dhyana-v2'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='EPYC-Genoa'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='amd-psfd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='auto-ibrs'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-bf16'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bitalg'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512ifma'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vbmi'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vnni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fsrm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='gfni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='la57'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='no-nested-data-bp'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='null-sel-clr-base'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pku'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='stibp-always-on'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vaes'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='EPYC-Genoa-v1'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='amd-psfd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='auto-ibrs'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-bf16'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bitalg'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512ifma'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vbmi'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vnni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fsrm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='gfni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='la57'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='no-nested-data-bp'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='null-sel-clr-base'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pku'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='stibp-always-on'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vaes'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='EPYC-Genoa-v2'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='amd-psfd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='auto-ibrs'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-bf16'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bitalg'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512ifma'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vbmi'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vnni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fs-gs-base-ns'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fsrm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='gfni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='la57'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='no-nested-data-bp'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='null-sel-clr-base'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='perfmon-v2'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pku'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='stibp-always-on'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vaes'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='EPYC-Milan'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fsrm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pku'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='EPYC-Milan-v1'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fsrm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pku'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='EPYC-Milan-v2'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='amd-psfd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fsrm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='no-nested-data-bp'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='null-sel-clr-base'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pku'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='stibp-always-on'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vaes'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='EPYC-Milan-v3'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='amd-psfd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fsrm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='no-nested-data-bp'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='null-sel-clr-base'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pku'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='stibp-always-on'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vaes'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='EPYC-Rome'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='EPYC-Rome-v1'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='EPYC-Rome-v2'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='EPYC-Rome-v3'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='EPYC-Turin'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='amd-psfd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='auto-ibrs'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx-vnni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-bf16'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-vp2intersect'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bitalg'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512ifma'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vbmi'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vnni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fs-gs-base-ns'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fsrm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='gfni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='ibpb-brtype'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='la57'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='movdir64b'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='movdiri'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='no-nested-data-bp'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='null-sel-clr-base'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='perfmon-v2'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pku'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='prefetchi'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='sbpb'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='srso-user-kernel-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='stibp-always-on'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vaes'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='EPYC-Turin-v1'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='amd-psfd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='auto-ibrs'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx-vnni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-bf16'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-vp2intersect'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bitalg'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512ifma'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vbmi'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vnni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fs-gs-base-ns'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fsrm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='gfni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='ibpb-brtype'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='la57'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='movdir64b'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='movdiri'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='no-nested-data-bp'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='null-sel-clr-base'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='perfmon-v2'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pku'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='prefetchi'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='sbpb'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='srso-user-kernel-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='stibp-always-on'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vaes'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='EPYC-v3'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='EPYC-v4'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='EPYC-v5'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='GraniteRapids'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='amx-bf16'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='amx-fp16'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='amx-int8'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='amx-tile'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx-vnni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-bf16'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-fp16'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bitalg'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512ifma'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vbmi'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vnni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fbsdp-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fsrc'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fsrm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fsrs'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fzrm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='gfni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='hle'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='ibrs-all'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='la57'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='mcdt-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pbrsb-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pku'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='prefetchiti'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='psdp-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='rtm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='sbdr-ssdp-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='serialize'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='taa-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='tsx-ldtrk'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vaes'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xfd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='GraniteRapids-v1'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='amx-bf16'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='amx-fp16'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='amx-int8'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='amx-tile'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx-vnni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-bf16'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-fp16'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bitalg'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512ifma'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vbmi'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vnni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fbsdp-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fsrc'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fsrm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fsrs'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fzrm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='gfni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='hle'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='ibrs-all'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='la57'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='mcdt-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pbrsb-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pku'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='prefetchiti'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='psdp-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='rtm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='sbdr-ssdp-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='serialize'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='taa-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='tsx-ldtrk'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vaes'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xfd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='GraniteRapids-v2'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='amx-bf16'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='amx-fp16'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='amx-int8'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='amx-tile'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx-vnni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx10'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx10-128'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx10-256'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx10-512'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-bf16'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-fp16'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bitalg'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512ifma'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vbmi'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vnni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='cldemote'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fbsdp-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fsrc'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fsrm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fsrs'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fzrm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='gfni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='hle'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='ibrs-all'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='la57'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='mcdt-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='movdir64b'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='movdiri'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pbrsb-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pku'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='prefetchiti'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='psdp-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='rtm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='sbdr-ssdp-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='serialize'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='ss'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='taa-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='tsx-ldtrk'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vaes'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xfd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='GraniteRapids-v3'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='amx-bf16'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='amx-fp16'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='amx-int8'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='amx-tile'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx-vnni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx10'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx10-128'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx10-256'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx10-512'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-bf16'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-fp16'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bitalg'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512ifma'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vbmi'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vnni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='cldemote'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fbsdp-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fsrc'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fsrm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fsrs'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fzrm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='gfni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='hle'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='ibrs-all'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='la57'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='mcdt-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='movdir64b'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='movdiri'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pbrsb-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pku'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='prefetchiti'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='psdp-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='rtm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='sbdr-ssdp-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='serialize'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='ss'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='taa-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='tsx-ldtrk'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vaes'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xfd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Haswell'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='hle'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='rtm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Haswell-IBRS'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='hle'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='rtm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Haswell-noTSX'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Haswell-noTSX-IBRS'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Haswell-v1'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='hle'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='rtm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Haswell-v2'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Haswell-v3'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='hle'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='rtm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Haswell-v4'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Icelake-Server'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bitalg'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vbmi'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vnni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='gfni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='hle'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='la57'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pku'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='rtm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vaes'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Icelake-Server-noTSX'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bitalg'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vbmi'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vnni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='gfni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='la57'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pku'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vaes'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Icelake-Server-v1'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bitalg'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vbmi'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vnni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='gfni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='hle'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='la57'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pku'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='rtm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vaes'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Icelake-Server-v2'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bitalg'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vbmi'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vnni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='gfni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='la57'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pku'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vaes'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Icelake-Server-v3'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bitalg'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vbmi'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vnni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='gfni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='ibrs-all'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='la57'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pku'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='taa-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vaes'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Icelake-Server-v4'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bitalg'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512ifma'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vbmi'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vnni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fsrm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='gfni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='ibrs-all'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='la57'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pku'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='taa-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vaes'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Icelake-Server-v5'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bitalg'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512ifma'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vbmi'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vnni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fsrm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='gfni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='ibrs-all'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='la57'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pku'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='taa-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vaes'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Icelake-Server-v6'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bitalg'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512ifma'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vbmi'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vnni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fsrm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='gfni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='ibrs-all'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='la57'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pku'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='taa-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vaes'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Icelake-Server-v7'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bitalg'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512ifma'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vbmi'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vnni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fsrm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='gfni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='hle'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='ibrs-all'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='la57'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pku'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='rtm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='taa-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vaes'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='IvyBridge'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='IvyBridge-IBRS'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='IvyBridge-v1'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='IvyBridge-v2'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='KnightsMill'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-4fmaps'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-4vnniw'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512er'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512pf'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='ss'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='KnightsMill-v1'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-4fmaps'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-4vnniw'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512er'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512pf'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='ss'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Opteron_G4'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fma4'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xop'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Opteron_G4-v1'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fma4'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xop'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Opteron_G5'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fma4'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='tbm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xop'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Opteron_G5-v1'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fma4'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='tbm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xop'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='SapphireRapids'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='amx-bf16'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='amx-int8'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='amx-tile'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx-vnni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-bf16'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-fp16'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bitalg'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512ifma'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vbmi'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vnni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fsrc'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fsrm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fsrs'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fzrm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='gfni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='hle'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='ibrs-all'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='la57'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pku'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='rtm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='serialize'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='taa-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='tsx-ldtrk'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vaes'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xfd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='SapphireRapids-v1'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='amx-bf16'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='amx-int8'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='amx-tile'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx-vnni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-bf16'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-fp16'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bitalg'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512ifma'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vbmi'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vnni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fsrc'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fsrm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fsrs'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fzrm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='gfni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='hle'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='ibrs-all'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='la57'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pku'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='rtm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='serialize'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='taa-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='tsx-ldtrk'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vaes'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xfd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='SapphireRapids-v2'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='amx-bf16'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='amx-int8'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='amx-tile'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx-vnni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-bf16'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-fp16'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bitalg'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512ifma'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vbmi'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vnni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fbsdp-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fsrc'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fsrm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fsrs'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fzrm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='gfni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='hle'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='ibrs-all'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='la57'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pku'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='psdp-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='rtm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='sbdr-ssdp-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='serialize'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='taa-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='tsx-ldtrk'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vaes'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xfd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='SapphireRapids-v3'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='amx-bf16'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='amx-int8'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='amx-tile'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx-vnni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-bf16'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-fp16'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bitalg'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512ifma'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vbmi'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vnni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='cldemote'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fbsdp-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fsrc'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fsrm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fsrs'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fzrm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='gfni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='hle'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='ibrs-all'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='la57'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='movdir64b'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='movdiri'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pku'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='psdp-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='rtm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='sbdr-ssdp-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='serialize'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='ss'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='taa-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='tsx-ldtrk'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vaes'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xfd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='SapphireRapids-v4'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='amx-bf16'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='amx-int8'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='amx-tile'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx-vnni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-bf16'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-fp16'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bitalg'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512ifma'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vbmi'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vnni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='cldemote'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fbsdp-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fsrc'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fsrm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fsrs'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fzrm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='gfni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='hle'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='ibrs-all'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='la57'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='movdir64b'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='movdiri'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pku'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='psdp-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='rtm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='sbdr-ssdp-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='serialize'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='ss'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='taa-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='tsx-ldtrk'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vaes'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xfd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='SierraForest'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx-ifma'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx-ne-convert'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx-vnni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx-vnni-int8'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='cmpccxadd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fbsdp-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fsrm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fsrs'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='gfni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='ibrs-all'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='mcdt-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pbrsb-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pku'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='psdp-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='sbdr-ssdp-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='serialize'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vaes'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='SierraForest-v1'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx-ifma'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx-ne-convert'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx-vnni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx-vnni-int8'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='cmpccxadd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fbsdp-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fsrm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fsrs'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='gfni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='ibrs-all'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='mcdt-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pbrsb-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pku'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='psdp-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='sbdr-ssdp-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='serialize'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vaes'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='SierraForest-v2'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx-ifma'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx-ne-convert'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx-vnni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx-vnni-int8'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='bhi-ctrl'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='cldemote'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='cmpccxadd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fbsdp-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fsrm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fsrs'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='gfni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='ibrs-all'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='intel-psfd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='ipred-ctrl'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='lam'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='mcdt-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='movdir64b'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='movdiri'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pbrsb-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pku'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='psdp-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='rrsba-ctrl'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='sbdr-ssdp-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='serialize'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='ss'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vaes'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='SierraForest-v3'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx-ifma'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx-ne-convert'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx-vnni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx-vnni-int8'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='bhi-ctrl'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='cldemote'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='cmpccxadd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fbsdp-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fsrm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fsrs'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='gfni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='ibrs-all'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='intel-psfd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='ipred-ctrl'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='lam'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='mcdt-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='movdir64b'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='movdiri'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pbrsb-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pku'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='psdp-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='rrsba-ctrl'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='sbdr-ssdp-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='serialize'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='ss'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vaes'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Skylake-Client'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='hle'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='rtm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Skylake-Client-IBRS'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='hle'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='rtm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Skylake-Client-v1'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='hle'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='rtm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Skylake-Client-v2'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='hle'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='rtm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Skylake-Client-v3'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Skylake-Client-v4'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Skylake-Server'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='hle'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pku'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='rtm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Skylake-Server-IBRS'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='hle'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pku'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='rtm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pku'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Skylake-Server-v1'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='hle'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pku'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='rtm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Skylake-Server-v2'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='hle'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pku'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='rtm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Skylake-Server-v3'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pku'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Skylake-Server-v4'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pku'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Skylake-Server-v5'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pku'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Snowridge'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='cldemote'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='core-capability'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='gfni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='movdir64b'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='movdiri'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='mpx'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='split-lock-detect'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Snowridge-v1'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='cldemote'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='core-capability'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='gfni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='movdir64b'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='movdiri'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='mpx'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='split-lock-detect'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Snowridge-v2'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='cldemote'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='core-capability'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='gfni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='movdir64b'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='movdiri'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='split-lock-detect'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Snowridge-v3'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='cldemote'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='core-capability'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='gfni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='movdir64b'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='movdiri'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='split-lock-detect'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Snowridge-v4'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='cldemote'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='gfni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='movdir64b'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='movdiri'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='athlon'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='3dnow'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='3dnowext'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='athlon-v1'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='3dnow'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='3dnowext'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='core2duo'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='ss'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='core2duo-v1'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='ss'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='coreduo'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='ss'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='coreduo-v1'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='ss'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='n270'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='ss'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='n270-v1'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='ss'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='phenom'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='3dnow'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='3dnowext'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='phenom-v1'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='3dnow'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='3dnowext'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     </mode>
Jan 20 14:18:08 compute-1 nova_compute[225855]:   </cpu>
Jan 20 14:18:08 compute-1 nova_compute[225855]:   <memoryBacking supported='yes'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     <enum name='sourceType'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <value>file</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <value>anonymous</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <value>memfd</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     </enum>
Jan 20 14:18:08 compute-1 nova_compute[225855]:   </memoryBacking>
Jan 20 14:18:08 compute-1 nova_compute[225855]:   <devices>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     <disk supported='yes'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <enum name='diskDevice'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>disk</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>cdrom</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>floppy</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>lun</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </enum>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <enum name='bus'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>ide</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>fdc</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>scsi</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>virtio</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>usb</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>sata</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </enum>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <enum name='model'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>virtio</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>virtio-transitional</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>virtio-non-transitional</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </enum>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     </disk>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     <graphics supported='yes'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <enum name='type'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>vnc</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>egl-headless</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>dbus</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </enum>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     </graphics>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     <video supported='yes'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <enum name='modelType'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>vga</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>cirrus</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>virtio</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>none</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>bochs</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>ramfb</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </enum>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     </video>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     <hostdev supported='yes'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <enum name='mode'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>subsystem</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </enum>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <enum name='startupPolicy'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>default</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>mandatory</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>requisite</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>optional</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </enum>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <enum name='subsysType'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>usb</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>pci</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>scsi</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </enum>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <enum name='capsType'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <enum name='pciBackend'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     </hostdev>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     <rng supported='yes'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <enum name='model'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>virtio</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>virtio-transitional</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>virtio-non-transitional</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </enum>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <enum name='backendModel'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>random</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>egd</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>builtin</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </enum>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     </rng>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     <filesystem supported='yes'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <enum name='driverType'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>path</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>handle</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>virtiofs</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </enum>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     </filesystem>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     <tpm supported='yes'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <enum name='model'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>tpm-tis</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>tpm-crb</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </enum>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <enum name='backendModel'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>emulator</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>external</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </enum>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <enum name='backendVersion'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>2.0</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </enum>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     </tpm>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     <redirdev supported='yes'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <enum name='bus'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>usb</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </enum>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     </redirdev>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     <channel supported='yes'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <enum name='type'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>pty</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>unix</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </enum>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     </channel>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     <crypto supported='yes'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <enum name='model'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <enum name='type'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>qemu</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </enum>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <enum name='backendModel'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>builtin</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </enum>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     </crypto>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     <interface supported='yes'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <enum name='backendType'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>default</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>passt</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </enum>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     </interface>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     <panic supported='yes'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <enum name='model'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>isa</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>hyperv</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </enum>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     </panic>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     <console supported='yes'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <enum name='type'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>null</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>vc</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>pty</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>dev</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>file</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>pipe</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>stdio</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>udp</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>tcp</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>unix</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>qemu-vdagent</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>dbus</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </enum>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     </console>
Jan 20 14:18:08 compute-1 nova_compute[225855]:   </devices>
Jan 20 14:18:08 compute-1 nova_compute[225855]:   <features>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     <gic supported='no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     <vmcoreinfo supported='yes'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     <genid supported='yes'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     <backingStoreInput supported='yes'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     <backup supported='yes'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     <async-teardown supported='yes'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     <s390-pv supported='no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     <ps2 supported='yes'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     <tdx supported='no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     <sev supported='no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     <sgx supported='no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     <hyperv supported='yes'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <enum name='features'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>relaxed</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>vapic</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>spinlocks</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>vpindex</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>runtime</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>synic</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>stimer</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>reset</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>vendor_id</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>frequencies</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>reenlightenment</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>tlbflush</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>ipi</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>avic</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>emsr_bitmap</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>xmm_input</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </enum>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <defaults>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <spinlocks>4095</spinlocks>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <stimer_direct>on</stimer_direct>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <tlbflush_direct>on</tlbflush_direct>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <tlbflush_extended>on</tlbflush_extended>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <vendor_id>Linux KVM Hv</vendor_id>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </defaults>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     </hyperv>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     <launchSecurity supported='no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:   </features>
Jan 20 14:18:08 compute-1 nova_compute[225855]: </domainCapabilities>
Jan 20 14:18:08 compute-1 nova_compute[225855]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.230 225859 DEBUG nova.virt.libvirt.host [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Getting domain capabilities for x86_64 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.235 225859 DEBUG nova.virt.libvirt.host [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Jan 20 14:18:08 compute-1 nova_compute[225855]: <domainCapabilities>
Jan 20 14:18:08 compute-1 nova_compute[225855]:   <path>/usr/libexec/qemu-kvm</path>
Jan 20 14:18:08 compute-1 nova_compute[225855]:   <domain>kvm</domain>
Jan 20 14:18:08 compute-1 nova_compute[225855]:   <machine>pc-q35-rhel9.8.0</machine>
Jan 20 14:18:08 compute-1 nova_compute[225855]:   <arch>x86_64</arch>
Jan 20 14:18:08 compute-1 nova_compute[225855]:   <vcpu max='4096'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:   <iothreads supported='yes'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:   <os supported='yes'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     <enum name='firmware'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <value>efi</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     </enum>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     <loader supported='yes'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <enum name='type'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>rom</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>pflash</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </enum>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <enum name='readonly'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>yes</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>no</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </enum>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <enum name='secure'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>yes</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>no</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </enum>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     </loader>
Jan 20 14:18:08 compute-1 nova_compute[225855]:   </os>
Jan 20 14:18:08 compute-1 nova_compute[225855]:   <cpu>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     <mode name='host-passthrough' supported='yes'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <enum name='hostPassthroughMigratable'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>on</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>off</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </enum>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     </mode>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     <mode name='maximum' supported='yes'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <enum name='maximumMigratable'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>on</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>off</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </enum>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     </mode>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     <mode name='host-model' supported='yes'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model fallback='forbid'>EPYC-Rome</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <vendor>AMD</vendor>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <maxphysaddr mode='passthrough' limit='40'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <feature policy='require' name='x2apic'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <feature policy='require' name='tsc-deadline'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <feature policy='require' name='hypervisor'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <feature policy='require' name='tsc_adjust'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <feature policy='require' name='spec-ctrl'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <feature policy='require' name='stibp'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <feature policy='require' name='ssbd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <feature policy='require' name='cmp_legacy'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <feature policy='require' name='overflow-recov'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <feature policy='require' name='succor'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <feature policy='require' name='ibrs'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <feature policy='require' name='amd-ssbd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <feature policy='require' name='virt-ssbd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <feature policy='require' name='lbrv'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <feature policy='require' name='tsc-scale'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <feature policy='require' name='vmcb-clean'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <feature policy='require' name='flushbyasid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <feature policy='require' name='pause-filter'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <feature policy='require' name='pfthreshold'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <feature policy='require' name='svme-addr-chk'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <feature policy='require' name='lfence-always-serializing'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <feature policy='disable' name='xsaves'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     </mode>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     <mode name='custom' supported='yes'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Broadwell'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='hle'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='rtm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Broadwell-IBRS'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='hle'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='rtm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Broadwell-noTSX'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Broadwell-noTSX-IBRS'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Broadwell-v1'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='hle'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='rtm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Broadwell-v2'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Broadwell-v3'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='hle'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='rtm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Broadwell-v4'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Cascadelake-Server'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vnni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='hle'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pku'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='rtm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Cascadelake-Server-noTSX'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vnni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='ibrs-all'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pku'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Cascadelake-Server-v1'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vnni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='hle'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pku'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='rtm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Cascadelake-Server-v2'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vnni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='hle'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='ibrs-all'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pku'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='rtm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Cascadelake-Server-v3'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vnni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='ibrs-all'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pku'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Cascadelake-Server-v4'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vnni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='ibrs-all'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pku'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Cascadelake-Server-v5'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vnni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='ibrs-all'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pku'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='ClearwaterForest'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx-ifma'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx-ne-convert'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx-vnni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx-vnni-int16'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx-vnni-int8'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='bhi-ctrl'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='bhi-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='cldemote'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='cmpccxadd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='ddpd-u'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fbsdp-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fsrm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fsrs'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='gfni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='ibrs-all'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='intel-psfd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='ipred-ctrl'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='lam'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='mcdt-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='movdir64b'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='movdiri'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pbrsb-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pku'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='prefetchiti'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='psdp-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='rrsba-ctrl'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='sbdr-ssdp-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='serialize'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='sha512'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='sm3'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='sm4'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='ss'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vaes'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='ClearwaterForest-v1'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx-ifma'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx-ne-convert'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx-vnni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx-vnni-int16'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx-vnni-int8'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='bhi-ctrl'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='bhi-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='cldemote'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='cmpccxadd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='ddpd-u'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fbsdp-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fsrm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fsrs'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='gfni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='ibrs-all'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='intel-psfd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='ipred-ctrl'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='lam'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='mcdt-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='movdir64b'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='movdiri'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pbrsb-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pku'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='prefetchiti'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='psdp-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='rrsba-ctrl'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='sbdr-ssdp-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='serialize'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='sha512'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='sm3'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='sm4'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='ss'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vaes'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Cooperlake'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-bf16'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vnni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='hle'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='ibrs-all'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pku'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='rtm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='taa-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Cooperlake-v1'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-bf16'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vnni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='hle'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='ibrs-all'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pku'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='rtm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='taa-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Cooperlake-v2'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-bf16'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vnni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='hle'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='ibrs-all'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pku'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='rtm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='taa-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Denverton'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='mpx'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Denverton-v1'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='mpx'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Denverton-v2'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Denverton-v3'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Dhyana-v2'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='EPYC-Genoa'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='amd-psfd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='auto-ibrs'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-bf16'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bitalg'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512ifma'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vbmi'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vnni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fsrm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='gfni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='la57'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='no-nested-data-bp'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='null-sel-clr-base'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pku'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='stibp-always-on'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vaes'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='EPYC-Genoa-v1'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='amd-psfd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='auto-ibrs'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-bf16'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bitalg'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512ifma'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vbmi'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vnni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fsrm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='gfni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='la57'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='no-nested-data-bp'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='null-sel-clr-base'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pku'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='stibp-always-on'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vaes'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='EPYC-Genoa-v2'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='amd-psfd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='auto-ibrs'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-bf16'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bitalg'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512ifma'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vbmi'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vnni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fs-gs-base-ns'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fsrm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='gfni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='la57'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='no-nested-data-bp'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='null-sel-clr-base'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='perfmon-v2'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pku'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='stibp-always-on'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vaes'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='EPYC-Milan'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fsrm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pku'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='EPYC-Milan-v1'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fsrm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pku'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='EPYC-Milan-v2'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='amd-psfd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fsrm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='no-nested-data-bp'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='null-sel-clr-base'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pku'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='stibp-always-on'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vaes'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='EPYC-Milan-v3'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='amd-psfd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fsrm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='no-nested-data-bp'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='null-sel-clr-base'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pku'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='stibp-always-on'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vaes'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='EPYC-Rome'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='EPYC-Rome-v1'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='EPYC-Rome-v2'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='EPYC-Rome-v3'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='EPYC-Turin'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='amd-psfd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='auto-ibrs'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx-vnni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-bf16'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-vp2intersect'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bitalg'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512ifma'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vbmi'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vnni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fs-gs-base-ns'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fsrm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='gfni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='ibpb-brtype'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='la57'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='movdir64b'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='movdiri'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='no-nested-data-bp'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='null-sel-clr-base'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='perfmon-v2'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pku'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='prefetchi'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='sbpb'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='srso-user-kernel-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='stibp-always-on'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vaes'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='EPYC-Turin-v1'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='amd-psfd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='auto-ibrs'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx-vnni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-bf16'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-vp2intersect'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bitalg'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512ifma'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vbmi'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vnni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fs-gs-base-ns'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fsrm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='gfni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='ibpb-brtype'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='la57'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='movdir64b'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='movdiri'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='no-nested-data-bp'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='null-sel-clr-base'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='perfmon-v2'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pku'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='prefetchi'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='sbpb'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='srso-user-kernel-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='stibp-always-on'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vaes'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='EPYC-v3'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='EPYC-v4'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='EPYC-v5'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='GraniteRapids'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='amx-bf16'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='amx-fp16'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='amx-int8'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='amx-tile'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx-vnni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-bf16'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-fp16'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bitalg'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512ifma'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vbmi'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vnni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fbsdp-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fsrc'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fsrm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fsrs'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fzrm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='gfni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='hle'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='ibrs-all'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='la57'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='mcdt-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pbrsb-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pku'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='prefetchiti'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='psdp-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='rtm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='sbdr-ssdp-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='serialize'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='taa-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='tsx-ldtrk'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vaes'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xfd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='GraniteRapids-v1'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='amx-bf16'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='amx-fp16'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='amx-int8'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='amx-tile'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx-vnni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-bf16'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-fp16'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bitalg'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512ifma'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vbmi'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vnni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fbsdp-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fsrc'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fsrm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fsrs'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fzrm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='gfni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='hle'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='ibrs-all'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='la57'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='mcdt-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pbrsb-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pku'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='prefetchiti'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='psdp-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='rtm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='sbdr-ssdp-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='serialize'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='taa-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='tsx-ldtrk'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vaes'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xfd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='GraniteRapids-v2'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='amx-bf16'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='amx-fp16'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='amx-int8'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='amx-tile'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx-vnni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx10'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx10-128'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx10-256'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx10-512'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-bf16'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-fp16'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bitalg'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512ifma'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vbmi'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vnni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='cldemote'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fbsdp-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fsrc'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fsrm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fsrs'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fzrm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='gfni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='hle'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='ibrs-all'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='la57'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='mcdt-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='movdir64b'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='movdiri'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pbrsb-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pku'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='prefetchiti'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='psdp-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='rtm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='sbdr-ssdp-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='serialize'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='ss'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='taa-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='tsx-ldtrk'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vaes'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xfd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='GraniteRapids-v3'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='amx-bf16'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='amx-fp16'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='amx-int8'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='amx-tile'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx-vnni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx10'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx10-128'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx10-256'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx10-512'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-bf16'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-fp16'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bitalg'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512ifma'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vbmi'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vnni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='cldemote'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fbsdp-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fsrc'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fsrm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fsrs'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fzrm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='gfni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='hle'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='ibrs-all'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='la57'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='mcdt-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='movdir64b'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='movdiri'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pbrsb-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pku'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='prefetchiti'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='psdp-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='rtm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='sbdr-ssdp-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='serialize'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='ss'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='taa-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='tsx-ldtrk'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vaes'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xfd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Haswell'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='hle'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='rtm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Haswell-IBRS'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='hle'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='rtm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Haswell-noTSX'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Haswell-noTSX-IBRS'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Haswell-v1'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='hle'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='rtm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Haswell-v2'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Haswell-v3'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='hle'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='rtm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Haswell-v4'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Icelake-Server'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bitalg'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vbmi'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vnni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='gfni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='hle'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='la57'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pku'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='rtm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vaes'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Icelake-Server-noTSX'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bitalg'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vbmi'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vnni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='gfni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='la57'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pku'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vaes'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Icelake-Server-v1'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bitalg'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vbmi'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vnni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='gfni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='hle'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='la57'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pku'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='rtm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vaes'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Icelake-Server-v2'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bitalg'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vbmi'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vnni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='gfni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='la57'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pku'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vaes'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Icelake-Server-v3'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bitalg'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vbmi'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vnni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='gfni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='ibrs-all'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='la57'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pku'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='taa-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vaes'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Icelake-Server-v4'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bitalg'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512ifma'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vbmi'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vnni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fsrm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='gfni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='ibrs-all'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='la57'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pku'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='taa-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vaes'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Icelake-Server-v5'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bitalg'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512ifma'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vbmi'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vnni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fsrm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='gfni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='ibrs-all'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='la57'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pku'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='taa-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vaes'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Icelake-Server-v6'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bitalg'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512ifma'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vbmi'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vnni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fsrm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='gfni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='ibrs-all'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='la57'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pku'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='taa-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vaes'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Icelake-Server-v7'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bitalg'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512ifma'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vbmi'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vnni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fsrm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='gfni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='hle'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='ibrs-all'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='la57'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pku'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='rtm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='taa-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vaes'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='IvyBridge'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='IvyBridge-IBRS'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='IvyBridge-v1'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='IvyBridge-v2'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='KnightsMill'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-4fmaps'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-4vnniw'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512er'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512pf'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='ss'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='KnightsMill-v1'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-4fmaps'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-4vnniw'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512er'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512pf'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='ss'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Opteron_G4'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fma4'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xop'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Opteron_G4-v1'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fma4'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xop'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Opteron_G5'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fma4'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='tbm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xop'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Opteron_G5-v1'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fma4'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='tbm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xop'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='SapphireRapids'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='amx-bf16'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='amx-int8'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='amx-tile'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx-vnni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-bf16'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-fp16'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bitalg'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512ifma'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vbmi'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vnni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fsrc'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fsrm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fsrs'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fzrm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='gfni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='hle'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='ibrs-all'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='la57'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pku'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='rtm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='serialize'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='taa-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='tsx-ldtrk'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vaes'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xfd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='SapphireRapids-v1'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='amx-bf16'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='amx-int8'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='amx-tile'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx-vnni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-bf16'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-fp16'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bitalg'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512ifma'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vbmi'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vnni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fsrc'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fsrm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fsrs'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fzrm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='gfni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='hle'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='ibrs-all'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='la57'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pku'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='rtm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='serialize'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='taa-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='tsx-ldtrk'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vaes'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xfd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='SapphireRapids-v2'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='amx-bf16'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='amx-int8'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='amx-tile'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx-vnni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-bf16'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-fp16'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bitalg'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512ifma'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vbmi'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vnni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fbsdp-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fsrc'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fsrm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fsrs'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fzrm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='gfni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='hle'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='ibrs-all'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='la57'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pku'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='psdp-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='rtm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='sbdr-ssdp-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='serialize'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='taa-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='tsx-ldtrk'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vaes'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xfd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='SapphireRapids-v3'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='amx-bf16'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='amx-int8'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='amx-tile'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx-vnni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-bf16'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-fp16'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bitalg'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512ifma'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vbmi'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vnni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='cldemote'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fbsdp-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fsrc'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fsrm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fsrs'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fzrm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='gfni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='hle'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='ibrs-all'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='la57'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='movdir64b'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='movdiri'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pku'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='psdp-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='rtm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='sbdr-ssdp-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='serialize'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='ss'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='taa-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='tsx-ldtrk'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vaes'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xfd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='SapphireRapids-v4'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='amx-bf16'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='amx-int8'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='amx-tile'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx-vnni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-bf16'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-fp16'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bitalg'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512ifma'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vbmi'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vnni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='cldemote'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fbsdp-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fsrc'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fsrm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fsrs'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fzrm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='gfni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='hle'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='ibrs-all'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='la57'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='movdir64b'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='movdiri'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pku'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='psdp-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='rtm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='sbdr-ssdp-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='serialize'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='ss'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='taa-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='tsx-ldtrk'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vaes'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xfd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='SierraForest'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx-ifma'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx-ne-convert'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx-vnni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx-vnni-int8'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='cmpccxadd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fbsdp-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fsrm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fsrs'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='gfni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='ibrs-all'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='mcdt-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pbrsb-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pku'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='psdp-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='sbdr-ssdp-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='serialize'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vaes'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='SierraForest-v1'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx-ifma'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx-ne-convert'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx-vnni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx-vnni-int8'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='cmpccxadd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fbsdp-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fsrm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fsrs'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='gfni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='ibrs-all'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='mcdt-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pbrsb-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pku'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='psdp-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='sbdr-ssdp-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='serialize'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vaes'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='SierraForest-v2'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx-ifma'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx-ne-convert'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx-vnni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx-vnni-int8'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='bhi-ctrl'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='cldemote'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='cmpccxadd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fbsdp-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fsrm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fsrs'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='gfni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='ibrs-all'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='intel-psfd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='ipred-ctrl'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='lam'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='mcdt-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='movdir64b'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='movdiri'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pbrsb-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pku'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='psdp-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='rrsba-ctrl'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='sbdr-ssdp-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='serialize'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='ss'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vaes'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='SierraForest-v3'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx-ifma'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx-ne-convert'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx-vnni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx-vnni-int8'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='bhi-ctrl'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='cldemote'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='cmpccxadd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fbsdp-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fsrm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fsrs'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='gfni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='ibrs-all'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='intel-psfd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='ipred-ctrl'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='lam'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='mcdt-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='movdir64b'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='movdiri'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pbrsb-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pku'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='psdp-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='rrsba-ctrl'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='sbdr-ssdp-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='serialize'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='ss'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vaes'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Skylake-Client'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='hle'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='rtm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Skylake-Client-IBRS'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='hle'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='rtm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Skylake-Client-v1'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='hle'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='rtm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Skylake-Client-v2'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='hle'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='rtm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Skylake-Client-v3'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Skylake-Client-v4'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Skylake-Server'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='hle'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pku'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='rtm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Skylake-Server-IBRS'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='hle'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pku'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='rtm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pku'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Skylake-Server-v1'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='hle'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pku'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='rtm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Skylake-Server-v2'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='hle'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pku'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='rtm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Skylake-Server-v3'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pku'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Skylake-Server-v4'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pku'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Skylake-Server-v5'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pku'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Snowridge'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='cldemote'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='core-capability'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='gfni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='movdir64b'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='movdiri'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='mpx'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='split-lock-detect'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Snowridge-v1'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='cldemote'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='core-capability'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='gfni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='movdir64b'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='movdiri'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='mpx'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='split-lock-detect'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Snowridge-v2'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='cldemote'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='core-capability'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='gfni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='movdir64b'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='movdiri'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='split-lock-detect'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Snowridge-v3'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='cldemote'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='core-capability'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='gfni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='movdir64b'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='movdiri'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='split-lock-detect'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Snowridge-v4'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='cldemote'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='gfni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='movdir64b'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='movdiri'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='athlon'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='3dnow'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='3dnowext'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='athlon-v1'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='3dnow'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='3dnowext'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='core2duo'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='ss'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='core2duo-v1'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='ss'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='coreduo'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='ss'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='coreduo-v1'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='ss'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='n270'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='ss'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='n270-v1'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='ss'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='phenom'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='3dnow'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='3dnowext'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='phenom-v1'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='3dnow'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='3dnowext'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     </mode>
Jan 20 14:18:08 compute-1 nova_compute[225855]:   </cpu>
Jan 20 14:18:08 compute-1 nova_compute[225855]:   <memoryBacking supported='yes'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     <enum name='sourceType'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <value>file</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <value>anonymous</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <value>memfd</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     </enum>
Jan 20 14:18:08 compute-1 nova_compute[225855]:   </memoryBacking>
Jan 20 14:18:08 compute-1 nova_compute[225855]:   <devices>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     <disk supported='yes'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <enum name='diskDevice'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>disk</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>cdrom</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>floppy</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>lun</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </enum>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <enum name='bus'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>fdc</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>scsi</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>virtio</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>usb</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>sata</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </enum>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <enum name='model'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>virtio</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>virtio-transitional</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>virtio-non-transitional</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </enum>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     </disk>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     <graphics supported='yes'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <enum name='type'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>vnc</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>egl-headless</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>dbus</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </enum>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     </graphics>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     <video supported='yes'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <enum name='modelType'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>vga</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>cirrus</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>virtio</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>none</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>bochs</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>ramfb</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </enum>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     </video>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     <hostdev supported='yes'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <enum name='mode'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>subsystem</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </enum>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <enum name='startupPolicy'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>default</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>mandatory</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>requisite</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>optional</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </enum>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <enum name='subsysType'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>usb</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>pci</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>scsi</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </enum>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <enum name='capsType'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <enum name='pciBackend'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     </hostdev>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     <rng supported='yes'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <enum name='model'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>virtio</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>virtio-transitional</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>virtio-non-transitional</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </enum>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <enum name='backendModel'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>random</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>egd</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>builtin</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </enum>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     </rng>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     <filesystem supported='yes'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <enum name='driverType'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>path</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>handle</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>virtiofs</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </enum>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     </filesystem>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     <tpm supported='yes'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <enum name='model'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>tpm-tis</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>tpm-crb</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </enum>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <enum name='backendModel'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>emulator</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>external</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </enum>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <enum name='backendVersion'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>2.0</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </enum>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     </tpm>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     <redirdev supported='yes'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <enum name='bus'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>usb</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </enum>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     </redirdev>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     <channel supported='yes'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <enum name='type'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>pty</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>unix</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </enum>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     </channel>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     <crypto supported='yes'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <enum name='model'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <enum name='type'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>qemu</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </enum>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <enum name='backendModel'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>builtin</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </enum>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     </crypto>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     <interface supported='yes'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <enum name='backendType'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>default</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>passt</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </enum>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     </interface>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     <panic supported='yes'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <enum name='model'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>isa</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>hyperv</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </enum>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     </panic>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     <console supported='yes'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <enum name='type'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>null</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>vc</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>pty</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>dev</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>file</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>pipe</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>stdio</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>udp</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>tcp</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>unix</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>qemu-vdagent</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>dbus</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </enum>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     </console>
Jan 20 14:18:08 compute-1 nova_compute[225855]:   </devices>
Jan 20 14:18:08 compute-1 nova_compute[225855]:   <features>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     <gic supported='no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     <vmcoreinfo supported='yes'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     <genid supported='yes'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     <backingStoreInput supported='yes'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     <backup supported='yes'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     <async-teardown supported='yes'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     <s390-pv supported='no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     <ps2 supported='yes'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     <tdx supported='no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     <sev supported='no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     <sgx supported='no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     <hyperv supported='yes'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <enum name='features'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>relaxed</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>vapic</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>spinlocks</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>vpindex</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>runtime</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>synic</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>stimer</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>reset</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>vendor_id</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>frequencies</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>reenlightenment</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>tlbflush</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>ipi</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>avic</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>emsr_bitmap</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>xmm_input</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </enum>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <defaults>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <spinlocks>4095</spinlocks>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <stimer_direct>on</stimer_direct>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <tlbflush_direct>on</tlbflush_direct>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <tlbflush_extended>on</tlbflush_extended>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <vendor_id>Linux KVM Hv</vendor_id>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </defaults>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     </hyperv>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     <launchSecurity supported='no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:   </features>
Jan 20 14:18:08 compute-1 nova_compute[225855]: </domainCapabilities>
Jan 20 14:18:08 compute-1 nova_compute[225855]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.314 225859 DEBUG nova.virt.libvirt.host [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Jan 20 14:18:08 compute-1 nova_compute[225855]: <domainCapabilities>
Jan 20 14:18:08 compute-1 nova_compute[225855]:   <path>/usr/libexec/qemu-kvm</path>
Jan 20 14:18:08 compute-1 nova_compute[225855]:   <domain>kvm</domain>
Jan 20 14:18:08 compute-1 nova_compute[225855]:   <machine>pc-i440fx-rhel7.6.0</machine>
Jan 20 14:18:08 compute-1 nova_compute[225855]:   <arch>x86_64</arch>
Jan 20 14:18:08 compute-1 nova_compute[225855]:   <vcpu max='240'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:   <iothreads supported='yes'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:   <os supported='yes'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     <enum name='firmware'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     <loader supported='yes'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <enum name='type'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>rom</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>pflash</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </enum>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <enum name='readonly'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>yes</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>no</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </enum>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <enum name='secure'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>no</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </enum>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     </loader>
Jan 20 14:18:08 compute-1 nova_compute[225855]:   </os>
Jan 20 14:18:08 compute-1 nova_compute[225855]:   <cpu>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     <mode name='host-passthrough' supported='yes'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <enum name='hostPassthroughMigratable'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>on</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>off</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </enum>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     </mode>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     <mode name='maximum' supported='yes'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <enum name='maximumMigratable'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>on</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>off</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </enum>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     </mode>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     <mode name='host-model' supported='yes'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model fallback='forbid'>EPYC-Rome</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <vendor>AMD</vendor>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <maxphysaddr mode='passthrough' limit='40'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <feature policy='require' name='x2apic'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <feature policy='require' name='tsc-deadline'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <feature policy='require' name='hypervisor'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <feature policy='require' name='tsc_adjust'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <feature policy='require' name='spec-ctrl'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <feature policy='require' name='stibp'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <feature policy='require' name='ssbd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <feature policy='require' name='cmp_legacy'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <feature policy='require' name='overflow-recov'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <feature policy='require' name='succor'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <feature policy='require' name='ibrs'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <feature policy='require' name='amd-ssbd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <feature policy='require' name='virt-ssbd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <feature policy='require' name='lbrv'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <feature policy='require' name='tsc-scale'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <feature policy='require' name='vmcb-clean'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <feature policy='require' name='flushbyasid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <feature policy='require' name='pause-filter'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <feature policy='require' name='pfthreshold'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <feature policy='require' name='svme-addr-chk'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <feature policy='require' name='lfence-always-serializing'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <feature policy='disable' name='xsaves'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     </mode>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     <mode name='custom' supported='yes'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Broadwell'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='hle'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='rtm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Broadwell-IBRS'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='hle'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='rtm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Broadwell-noTSX'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Broadwell-noTSX-IBRS'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Broadwell-v1'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='hle'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='rtm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Broadwell-v2'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Broadwell-v3'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='hle'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='rtm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Broadwell-v4'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Cascadelake-Server'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vnni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='hle'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pku'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='rtm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Cascadelake-Server-noTSX'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vnni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='ibrs-all'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pku'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Cascadelake-Server-v1'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vnni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='hle'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pku'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='rtm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Cascadelake-Server-v2'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vnni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='hle'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='ibrs-all'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pku'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='rtm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Cascadelake-Server-v3'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vnni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='ibrs-all'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pku'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Cascadelake-Server-v4'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vnni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='ibrs-all'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pku'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Cascadelake-Server-v5'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vnni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='ibrs-all'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pku'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='ClearwaterForest'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx-ifma'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx-ne-convert'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx-vnni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx-vnni-int16'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx-vnni-int8'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='bhi-ctrl'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='bhi-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='cldemote'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='cmpccxadd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='ddpd-u'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fbsdp-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fsrm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fsrs'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='gfni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='ibrs-all'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='intel-psfd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='ipred-ctrl'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='lam'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='mcdt-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='movdir64b'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='movdiri'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pbrsb-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pku'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='prefetchiti'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='psdp-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='rrsba-ctrl'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='sbdr-ssdp-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='serialize'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='sha512'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='sm3'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='sm4'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='ss'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vaes'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='ClearwaterForest-v1'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx-ifma'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx-ne-convert'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx-vnni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx-vnni-int16'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx-vnni-int8'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='bhi-ctrl'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='bhi-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='cldemote'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='cmpccxadd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='ddpd-u'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fbsdp-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fsrm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fsrs'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='gfni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='ibrs-all'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='intel-psfd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='ipred-ctrl'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='lam'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='mcdt-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='movdir64b'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='movdiri'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pbrsb-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pku'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='prefetchiti'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='psdp-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='rrsba-ctrl'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='sbdr-ssdp-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='serialize'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='sha512'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='sm3'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='sm4'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='ss'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vaes'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Cooperlake'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-bf16'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vnni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='hle'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='ibrs-all'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pku'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='rtm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='taa-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Cooperlake-v1'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-bf16'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vnni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='hle'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='ibrs-all'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pku'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='rtm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='taa-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Cooperlake-v2'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-bf16'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vnni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='hle'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='ibrs-all'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pku'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='rtm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='taa-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Denverton'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='mpx'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Denverton-v1'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='mpx'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Denverton-v2'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Denverton-v3'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Dhyana-v2'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='EPYC-Genoa'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='amd-psfd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='auto-ibrs'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-bf16'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bitalg'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512ifma'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vbmi'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vnni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fsrm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='gfni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='la57'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='no-nested-data-bp'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='null-sel-clr-base'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pku'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='stibp-always-on'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vaes'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='EPYC-Genoa-v1'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='amd-psfd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='auto-ibrs'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-bf16'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bitalg'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512ifma'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vbmi'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vnni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fsrm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='gfni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='la57'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='no-nested-data-bp'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='null-sel-clr-base'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pku'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='stibp-always-on'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vaes'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='EPYC-Genoa-v2'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='amd-psfd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='auto-ibrs'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-bf16'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bitalg'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512ifma'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vbmi'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vnni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fs-gs-base-ns'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fsrm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='gfni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='la57'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='no-nested-data-bp'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='null-sel-clr-base'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='perfmon-v2'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pku'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='stibp-always-on'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vaes'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='EPYC-Milan'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fsrm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pku'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='EPYC-Milan-v1'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fsrm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pku'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='EPYC-Milan-v2'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='amd-psfd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fsrm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='no-nested-data-bp'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='null-sel-clr-base'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pku'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='stibp-always-on'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vaes'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='EPYC-Milan-v3'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='amd-psfd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fsrm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='no-nested-data-bp'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='null-sel-clr-base'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pku'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='stibp-always-on'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vaes'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='EPYC-Rome'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='EPYC-Rome-v1'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='EPYC-Rome-v2'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='EPYC-Rome-v3'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='EPYC-Turin'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='amd-psfd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='auto-ibrs'/>
Jan 20 14:18:08 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx-vnni'/>
Jan 20 14:18:08 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/366490253' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-bf16'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-vp2intersect'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bitalg'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512ifma'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vbmi'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vnni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fs-gs-base-ns'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fsrm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='gfni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='ibpb-brtype'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='la57'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='movdir64b'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='movdiri'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='no-nested-data-bp'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='null-sel-clr-base'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='perfmon-v2'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pku'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='prefetchi'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='sbpb'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='srso-user-kernel-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='stibp-always-on'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vaes'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='EPYC-Turin-v1'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='amd-psfd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='auto-ibrs'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx-vnni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-bf16'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-vp2intersect'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bitalg'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512ifma'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vbmi'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vnni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fs-gs-base-ns'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fsrm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='gfni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='ibpb-brtype'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='la57'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='movdir64b'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='movdiri'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='no-nested-data-bp'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='null-sel-clr-base'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='perfmon-v2'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pku'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='prefetchi'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='sbpb'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='srso-user-kernel-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='stibp-always-on'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vaes'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='EPYC-v3'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='EPYC-v4'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='EPYC-v5'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='GraniteRapids'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='amx-bf16'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='amx-fp16'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='amx-int8'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='amx-tile'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx-vnni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-bf16'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-fp16'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bitalg'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512ifma'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vbmi'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vnni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fbsdp-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fsrc'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fsrm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fsrs'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fzrm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='gfni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='hle'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='ibrs-all'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='la57'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='mcdt-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pbrsb-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pku'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='prefetchiti'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='psdp-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='rtm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='sbdr-ssdp-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='serialize'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='taa-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='tsx-ldtrk'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vaes'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xfd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='GraniteRapids-v1'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='amx-bf16'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='amx-fp16'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='amx-int8'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='amx-tile'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx-vnni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-bf16'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-fp16'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bitalg'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512ifma'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vbmi'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vnni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fbsdp-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fsrc'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fsrm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fsrs'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fzrm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='gfni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='hle'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='ibrs-all'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='la57'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='mcdt-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pbrsb-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pku'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='prefetchiti'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='psdp-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='rtm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='sbdr-ssdp-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='serialize'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='taa-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='tsx-ldtrk'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vaes'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xfd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='GraniteRapids-v2'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='amx-bf16'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='amx-fp16'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='amx-int8'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='amx-tile'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx-vnni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx10'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx10-128'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx10-256'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx10-512'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-bf16'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-fp16'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bitalg'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512ifma'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vbmi'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vnni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='cldemote'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fbsdp-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fsrc'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fsrm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fsrs'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fzrm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='gfni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='hle'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='ibrs-all'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='la57'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='mcdt-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='movdir64b'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='movdiri'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pbrsb-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pku'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='prefetchiti'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='psdp-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='rtm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='sbdr-ssdp-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='serialize'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='ss'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='taa-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='tsx-ldtrk'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vaes'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xfd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='GraniteRapids-v3'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='amx-bf16'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='amx-fp16'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='amx-int8'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='amx-tile'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx-vnni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx10'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx10-128'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx10-256'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx10-512'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-bf16'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-fp16'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bitalg'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512ifma'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vbmi'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vnni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='cldemote'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fbsdp-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fsrc'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fsrm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fsrs'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fzrm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='gfni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='hle'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='ibrs-all'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='la57'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='mcdt-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='movdir64b'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='movdiri'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pbrsb-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pku'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='prefetchiti'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='psdp-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='rtm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='sbdr-ssdp-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='serialize'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='ss'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='taa-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='tsx-ldtrk'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vaes'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xfd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Haswell'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='hle'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='rtm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Haswell-IBRS'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='hle'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='rtm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Haswell-noTSX'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Haswell-noTSX-IBRS'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Haswell-v1'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='hle'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='rtm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Haswell-v2'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Haswell-v3'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='hle'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='rtm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Haswell-v4'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Icelake-Server'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bitalg'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vbmi'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vnni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='gfni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='hle'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='la57'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pku'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='rtm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vaes'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Icelake-Server-noTSX'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bitalg'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vbmi'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vnni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='gfni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='la57'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pku'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vaes'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Icelake-Server-v1'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bitalg'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vbmi'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vnni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='gfni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='hle'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='la57'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pku'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='rtm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vaes'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Icelake-Server-v2'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bitalg'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vbmi'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vnni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='gfni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='la57'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pku'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vaes'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Icelake-Server-v3'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bitalg'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vbmi'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vnni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='gfni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='ibrs-all'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='la57'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pku'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='taa-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vaes'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Icelake-Server-v4'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bitalg'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512ifma'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vbmi'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vnni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fsrm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='gfni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='ibrs-all'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='la57'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pku'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='taa-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vaes'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Icelake-Server-v5'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bitalg'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512ifma'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vbmi'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vnni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fsrm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='gfni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='ibrs-all'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='la57'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pku'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='taa-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vaes'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Icelake-Server-v6'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bitalg'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512ifma'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vbmi'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vnni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fsrm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='gfni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='ibrs-all'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='la57'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pku'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='taa-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vaes'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Icelake-Server-v7'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bitalg'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512ifma'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vbmi'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vnni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fsrm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='gfni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='hle'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='ibrs-all'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='la57'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pku'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='rtm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='taa-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vaes'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='IvyBridge'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='IvyBridge-IBRS'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='IvyBridge-v1'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='IvyBridge-v2'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='KnightsMill'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-4fmaps'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-4vnniw'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512er'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512pf'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='ss'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='KnightsMill-v1'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-4fmaps'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-4vnniw'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512er'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512pf'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='ss'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Opteron_G4'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fma4'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xop'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Opteron_G4-v1'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fma4'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xop'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Opteron_G5'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fma4'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='tbm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xop'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Opteron_G5-v1'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fma4'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='tbm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xop'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='SapphireRapids'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='amx-bf16'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='amx-int8'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='amx-tile'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx-vnni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-bf16'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-fp16'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bitalg'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512ifma'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vbmi'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vnni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fsrc'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fsrm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fsrs'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fzrm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='gfni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='hle'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='ibrs-all'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='la57'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pku'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='rtm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='serialize'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='taa-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='tsx-ldtrk'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vaes'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xfd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='SapphireRapids-v1'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='amx-bf16'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='amx-int8'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='amx-tile'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx-vnni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-bf16'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-fp16'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bitalg'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512ifma'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vbmi'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vnni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fsrc'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fsrm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fsrs'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fzrm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='gfni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='hle'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='ibrs-all'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='la57'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pku'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='rtm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='serialize'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='taa-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='tsx-ldtrk'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vaes'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xfd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='SapphireRapids-v2'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='amx-bf16'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='amx-int8'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='amx-tile'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx-vnni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-bf16'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-fp16'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bitalg'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512ifma'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vbmi'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vnni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fbsdp-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fsrc'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fsrm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fsrs'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fzrm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='gfni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='hle'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='ibrs-all'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='la57'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pku'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='psdp-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='rtm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='sbdr-ssdp-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='serialize'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='taa-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='tsx-ldtrk'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vaes'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xfd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='SapphireRapids-v3'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='amx-bf16'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='amx-int8'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='amx-tile'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx-vnni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-bf16'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-fp16'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bitalg'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512ifma'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vbmi'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vnni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='cldemote'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fbsdp-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fsrc'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fsrm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fsrs'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fzrm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='gfni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='hle'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='ibrs-all'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='la57'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='movdir64b'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='movdiri'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pku'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='psdp-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='rtm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='sbdr-ssdp-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='serialize'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='ss'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='taa-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='tsx-ldtrk'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vaes'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xfd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='SapphireRapids-v4'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='amx-bf16'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='amx-int8'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='amx-tile'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx-vnni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-bf16'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-fp16'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bitalg'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512ifma'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vbmi'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vnni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='cldemote'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fbsdp-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fsrc'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fsrm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fsrs'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fzrm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='gfni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='hle'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='ibrs-all'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='la57'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='movdir64b'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='movdiri'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pku'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='psdp-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='rtm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='sbdr-ssdp-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='serialize'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='ss'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='taa-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='tsx-ldtrk'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vaes'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xfd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='SierraForest'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx-ifma'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx-ne-convert'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx-vnni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx-vnni-int8'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='cmpccxadd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fbsdp-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fsrm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fsrs'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='gfni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='ibrs-all'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='mcdt-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pbrsb-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pku'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='psdp-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='sbdr-ssdp-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='serialize'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vaes'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='SierraForest-v1'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx-ifma'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx-ne-convert'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx-vnni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx-vnni-int8'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='cmpccxadd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fbsdp-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fsrm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fsrs'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='gfni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='ibrs-all'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='mcdt-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pbrsb-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pku'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='psdp-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='sbdr-ssdp-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='serialize'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vaes'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='SierraForest-v2'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx-ifma'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx-ne-convert'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx-vnni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx-vnni-int8'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='bhi-ctrl'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='cldemote'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='cmpccxadd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fbsdp-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fsrm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fsrs'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='gfni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='ibrs-all'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='intel-psfd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='ipred-ctrl'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='lam'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='mcdt-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='movdir64b'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='movdiri'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pbrsb-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pku'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='psdp-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='rrsba-ctrl'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='sbdr-ssdp-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='serialize'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='ss'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vaes'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='SierraForest-v3'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx-ifma'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx-ne-convert'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx-vnni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx-vnni-int8'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='bhi-ctrl'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='cldemote'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='cmpccxadd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fbsdp-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fsrm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='fsrs'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='gfni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='ibrs-all'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='intel-psfd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='ipred-ctrl'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='lam'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='mcdt-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='movdir64b'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='movdiri'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pbrsb-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pku'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='psdp-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='rrsba-ctrl'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='sbdr-ssdp-no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='serialize'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='ss'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vaes'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Skylake-Client'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='hle'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='rtm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Skylake-Client-IBRS'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='hle'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='rtm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Skylake-Client-v1'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='hle'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='rtm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Skylake-Client-v2'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='hle'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='rtm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Skylake-Client-v3'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Skylake-Client-v4'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Skylake-Server'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='hle'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pku'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='rtm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Skylake-Server-IBRS'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='hle'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pku'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='rtm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pku'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Skylake-Server-v1'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='hle'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pku'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='rtm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Skylake-Server-v2'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='hle'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pku'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='rtm'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Skylake-Server-v3'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pku'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Skylake-Server-v4'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pku'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Skylake-Server-v5'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='pku'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Snowridge'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='cldemote'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='core-capability'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='gfni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='movdir64b'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='movdiri'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='mpx'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='split-lock-detect'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Snowridge-v1'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='cldemote'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='core-capability'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='gfni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='movdir64b'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='movdiri'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='mpx'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='split-lock-detect'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Snowridge-v2'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='cldemote'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='core-capability'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='gfni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='movdir64b'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='movdiri'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='split-lock-detect'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Snowridge-v3'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='cldemote'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='core-capability'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='gfni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='movdir64b'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='movdiri'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='split-lock-detect'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='Snowridge-v4'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='cldemote'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='erms'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='gfni'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='movdir64b'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='movdiri'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='athlon'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='3dnow'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='3dnowext'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='athlon-v1'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='3dnow'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='3dnowext'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='core2duo'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='ss'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='core2duo-v1'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='ss'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='coreduo'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='ss'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='coreduo-v1'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='ss'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='n270'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='ss'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='n270-v1'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='ss'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='phenom'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='3dnow'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='3dnowext'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <blockers model='phenom-v1'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='3dnow'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <feature name='3dnowext'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </blockers>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     </mode>
Jan 20 14:18:08 compute-1 nova_compute[225855]:   </cpu>
Jan 20 14:18:08 compute-1 nova_compute[225855]:   <memoryBacking supported='yes'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     <enum name='sourceType'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <value>file</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <value>anonymous</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <value>memfd</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     </enum>
Jan 20 14:18:08 compute-1 nova_compute[225855]:   </memoryBacking>
Jan 20 14:18:08 compute-1 nova_compute[225855]:   <devices>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     <disk supported='yes'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <enum name='diskDevice'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>disk</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>cdrom</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>floppy</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>lun</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </enum>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <enum name='bus'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>ide</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>fdc</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>scsi</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>virtio</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>usb</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>sata</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </enum>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <enum name='model'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>virtio</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>virtio-transitional</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>virtio-non-transitional</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </enum>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     </disk>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     <graphics supported='yes'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <enum name='type'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>vnc</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>egl-headless</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>dbus</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </enum>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     </graphics>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     <video supported='yes'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <enum name='modelType'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>vga</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>cirrus</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>virtio</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>none</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>bochs</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>ramfb</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </enum>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     </video>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     <hostdev supported='yes'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <enum name='mode'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>subsystem</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </enum>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <enum name='startupPolicy'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>default</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>mandatory</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>requisite</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>optional</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </enum>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <enum name='subsysType'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>usb</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>pci</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>scsi</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </enum>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <enum name='capsType'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <enum name='pciBackend'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     </hostdev>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     <rng supported='yes'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <enum name='model'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>virtio</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>virtio-transitional</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>virtio-non-transitional</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </enum>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <enum name='backendModel'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>random</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>egd</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>builtin</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </enum>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     </rng>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     <filesystem supported='yes'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <enum name='driverType'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>path</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>handle</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>virtiofs</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </enum>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     </filesystem>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     <tpm supported='yes'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <enum name='model'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>tpm-tis</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>tpm-crb</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </enum>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <enum name='backendModel'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>emulator</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>external</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </enum>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <enum name='backendVersion'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>2.0</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </enum>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     </tpm>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     <redirdev supported='yes'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <enum name='bus'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>usb</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </enum>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     </redirdev>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     <channel supported='yes'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <enum name='type'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>pty</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>unix</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </enum>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     </channel>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     <crypto supported='yes'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <enum name='model'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <enum name='type'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>qemu</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </enum>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <enum name='backendModel'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>builtin</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </enum>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     </crypto>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     <interface supported='yes'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <enum name='backendType'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>default</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>passt</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </enum>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     </interface>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     <panic supported='yes'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <enum name='model'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>isa</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>hyperv</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </enum>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     </panic>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     <console supported='yes'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <enum name='type'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>null</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>vc</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>pty</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>dev</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>file</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>pipe</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>stdio</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>udp</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>tcp</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>unix</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>qemu-vdagent</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>dbus</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </enum>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     </console>
Jan 20 14:18:08 compute-1 nova_compute[225855]:   </devices>
Jan 20 14:18:08 compute-1 nova_compute[225855]:   <features>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     <gic supported='no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     <vmcoreinfo supported='yes'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     <genid supported='yes'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     <backingStoreInput supported='yes'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     <backup supported='yes'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     <async-teardown supported='yes'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     <s390-pv supported='no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     <ps2 supported='yes'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     <tdx supported='no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     <sev supported='no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     <sgx supported='no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     <hyperv supported='yes'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <enum name='features'>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>relaxed</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>vapic</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>spinlocks</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>vpindex</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>runtime</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>synic</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>stimer</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>reset</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>vendor_id</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>frequencies</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>reenlightenment</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>tlbflush</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>ipi</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>avic</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>emsr_bitmap</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <value>xmm_input</value>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </enum>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       <defaults>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <spinlocks>4095</spinlocks>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <stimer_direct>on</stimer_direct>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <tlbflush_direct>on</tlbflush_direct>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <tlbflush_extended>on</tlbflush_extended>
Jan 20 14:18:08 compute-1 nova_compute[225855]:         <vendor_id>Linux KVM Hv</vendor_id>
Jan 20 14:18:08 compute-1 nova_compute[225855]:       </defaults>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     </hyperv>
Jan 20 14:18:08 compute-1 nova_compute[225855]:     <launchSecurity supported='no'/>
Jan 20 14:18:08 compute-1 nova_compute[225855]:   </features>
Jan 20 14:18:08 compute-1 nova_compute[225855]: </domainCapabilities>
Jan 20 14:18:08 compute-1 nova_compute[225855]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.391 225859 DEBUG nova.virt.libvirt.host [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.392 225859 INFO nova.virt.libvirt.host [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Secure Boot support detected
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.393 225859 INFO nova.virt.libvirt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.394 225859 INFO nova.virt.libvirt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.403 225859 DEBUG nova.virt.libvirt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] cpu compare xml: <cpu match="exact">
Jan 20 14:18:08 compute-1 nova_compute[225855]:   <model>Nehalem</model>
Jan 20 14:18:08 compute-1 nova_compute[225855]: </cpu>
Jan 20 14:18:08 compute-1 nova_compute[225855]:  _compare_cpu /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10019
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.405 225859 DEBUG nova.virt.libvirt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.442 225859 INFO nova.virt.node [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Determined node identity bbb02880-a710-4ac1-8b2c-5c09765848d1 from /var/lib/nova/compute_id
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.458 225859 WARNING nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Compute nodes ['bbb02880-a710-4ac1-8b2c-5c09765848d1'] for host compute-1.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.487 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.547 225859 WARNING nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] No compute node record found for host compute-1.ctlplane.example.com. If this is the first time this service is starting on this host, then you can ignore this warning.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-1.ctlplane.example.com could not be found.
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.547 225859 DEBUG oslo_concurrency.lockutils [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.548 225859 DEBUG oslo_concurrency.lockutils [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.548 225859 DEBUG oslo_concurrency.lockutils [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.548 225859 DEBUG nova.compute.resource_tracker [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 14:18:08 compute-1 nova_compute[225855]: 2026-01-20 14:18:08.549 225859 DEBUG oslo_concurrency.processutils [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:18:08 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/366490253' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:18:08 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:18:08 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1912408237' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:18:09 compute-1 nova_compute[225855]: 2026-01-20 14:18:09.009 225859 DEBUG oslo_concurrency.processutils [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:18:09 compute-1 nova_compute[225855]: 2026-01-20 14:18:09.147 225859 WARNING nova.virt.libvirt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 14:18:09 compute-1 nova_compute[225855]: 2026-01-20 14:18:09.148 225859 DEBUG nova.compute.resource_tracker [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=5250MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 14:18:09 compute-1 nova_compute[225855]: 2026-01-20 14:18:09.148 225859 DEBUG oslo_concurrency.lockutils [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:18:09 compute-1 nova_compute[225855]: 2026-01-20 14:18:09.149 225859 DEBUG oslo_concurrency.lockutils [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:18:09 compute-1 nova_compute[225855]: 2026-01-20 14:18:09.162 225859 WARNING nova.compute.resource_tracker [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] No compute node record for compute-1.ctlplane.example.com:bbb02880-a710-4ac1-8b2c-5c09765848d1: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host bbb02880-a710-4ac1-8b2c-5c09765848d1 could not be found.
Jan 20 14:18:09 compute-1 nova_compute[225855]: 2026-01-20 14:18:09.185 225859 INFO nova.compute.resource_tracker [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Compute node record created for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com with uuid: bbb02880-a710-4ac1-8b2c-5c09765848d1
Jan 20 14:18:09 compute-1 nova_compute[225855]: 2026-01-20 14:18:09.270 225859 DEBUG nova.compute.resource_tracker [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 14:18:09 compute-1 nova_compute[225855]: 2026-01-20 14:18:09.270 225859 DEBUG nova.compute.resource_tracker [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 14:18:09 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:18:09 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:18:09 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:18:09.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:18:09 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:18:09 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:18:09 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:18:09.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:18:09 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:18:09 compute-1 ceph-mon[81775]: pgmap v790: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:18:09 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/1912408237' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:18:09 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/1613810926' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:18:10 compute-1 nova_compute[225855]: 2026-01-20 14:18:10.027 225859 INFO nova.scheduler.client.report [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [req-cb7924e1-7b16-434f-9bf5-c88347a74e1d] Created resource provider record via placement API for resource provider with UUID bbb02880-a710-4ac1-8b2c-5c09765848d1 and name compute-1.ctlplane.example.com.
Jan 20 14:18:10 compute-1 nova_compute[225855]: 2026-01-20 14:18:10.058 225859 DEBUG oslo_concurrency.processutils [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:18:10 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:18:10 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2481846038' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:18:10 compute-1 nova_compute[225855]: 2026-01-20 14:18:10.548 225859 DEBUG oslo_concurrency.processutils [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.491s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:18:10 compute-1 nova_compute[225855]: 2026-01-20 14:18:10.556 225859 DEBUG nova.virt.libvirt.host [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] /sys/module/kvm_amd/parameters/sev contains [N
Jan 20 14:18:10 compute-1 nova_compute[225855]: ] _kernel_supports_amd_sev /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1803
Jan 20 14:18:10 compute-1 nova_compute[225855]: 2026-01-20 14:18:10.557 225859 INFO nova.virt.libvirt.host [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] kernel doesn't support AMD SEV
Jan 20 14:18:10 compute-1 nova_compute[225855]: 2026-01-20 14:18:10.558 225859 DEBUG nova.compute.provider_tree [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Updating inventory in ProviderTree for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 20, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 20 14:18:10 compute-1 nova_compute[225855]: 2026-01-20 14:18:10.559 225859 DEBUG nova.virt.libvirt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 20 14:18:10 compute-1 nova_compute[225855]: 2026-01-20 14:18:10.563 225859 DEBUG nova.virt.libvirt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Libvirt baseline CPU <cpu>
Jan 20 14:18:10 compute-1 nova_compute[225855]:   <arch>x86_64</arch>
Jan 20 14:18:10 compute-1 nova_compute[225855]:   <model>Nehalem</model>
Jan 20 14:18:10 compute-1 nova_compute[225855]:   <vendor>AMD</vendor>
Jan 20 14:18:10 compute-1 nova_compute[225855]:   <topology sockets="8" cores="1" threads="1"/>
Jan 20 14:18:10 compute-1 nova_compute[225855]: </cpu>
Jan 20 14:18:10 compute-1 nova_compute[225855]:  _get_guest_baseline_cpu_features /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12537
Jan 20 14:18:10 compute-1 nova_compute[225855]: 2026-01-20 14:18:10.625 225859 DEBUG nova.scheduler.client.report [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Updated inventory for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 with generation 0 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 20, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957
Jan 20 14:18:10 compute-1 nova_compute[225855]: 2026-01-20 14:18:10.625 225859 DEBUG nova.compute.provider_tree [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Updating resource provider bbb02880-a710-4ac1-8b2c-5c09765848d1 generation from 0 to 1 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Jan 20 14:18:10 compute-1 nova_compute[225855]: 2026-01-20 14:18:10.626 225859 DEBUG nova.compute.provider_tree [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Updating inventory in ProviderTree for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 with inventory: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 20 14:18:10 compute-1 nova_compute[225855]: 2026-01-20 14:18:10.774 225859 DEBUG nova.compute.provider_tree [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Updating resource provider bbb02880-a710-4ac1-8b2c-5c09765848d1 generation from 1 to 2 during operation: update_traits _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Jan 20 14:18:10 compute-1 nova_compute[225855]: 2026-01-20 14:18:10.803 225859 DEBUG nova.compute.resource_tracker [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 14:18:10 compute-1 nova_compute[225855]: 2026-01-20 14:18:10.803 225859 DEBUG oslo_concurrency.lockutils [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.655s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:18:10 compute-1 nova_compute[225855]: 2026-01-20 14:18:10.803 225859 DEBUG nova.service [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Creating RPC server for service compute start /usr/lib/python3.9/site-packages/nova/service.py:182
Jan 20 14:18:10 compute-1 nova_compute[225855]: 2026-01-20 14:18:10.883 225859 DEBUG nova.service [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Join ServiceGroup membership for this service compute start /usr/lib/python3.9/site-packages/nova/service.py:199
Jan 20 14:18:10 compute-1 nova_compute[225855]: 2026-01-20 14:18:10.883 225859 DEBUG nova.servicegroup.drivers.db [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] DB_Driver: join new ServiceGroup member compute-1.ctlplane.example.com to the compute group, service = <Service: host=compute-1.ctlplane.example.com, binary=nova-compute, manager_class_name=nova.compute.manager.ComputeManager> join /usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py:44
Jan 20 14:18:11 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/2481846038' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:18:11 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/3236723296' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:18:11 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/1303255634' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:18:11 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:18:11 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:18:11 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:18:11.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:18:11 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:18:11 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:18:11 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:18:11.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:18:12 compute-1 ceph-mon[81775]: pgmap v791: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:18:12 compute-1 sudo[226204]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:18:12 compute-1 sudo[226204]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:18:12 compute-1 sudo[226204]: pam_unix(sudo:session): session closed for user root
Jan 20 14:18:13 compute-1 sudo[226235]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:18:13 compute-1 sudo[226235]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:18:13 compute-1 sudo[226235]: pam_unix(sudo:session): session closed for user root
Jan 20 14:18:13 compute-1 podman[226228]: 2026-01-20 14:18:13.079344638 +0000 UTC m=+0.140153793 container health_status 72be95c12e041eb2cd1a16ad70dfa1391d9f03a5f3180a614ead9459dcfcd46f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, 
config_id=ovn_controller, io.buildah.version=1.41.3)
Jan 20 14:18:13 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:18:13 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:18:13 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:18:13.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:18:13 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:18:13 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:18:13 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:18:13.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:18:14 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:18:15 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:18:15 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:18:15 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:18:15.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:18:15 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:18:15 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:18:15 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:18:15.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:18:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:18:16.378 140354 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:18:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:18:16.379 140354 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:18:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:18:16.379 140354 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:18:16 compute-1 ceph-mon[81775]: pgmap v792: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:18:16 compute-1 ceph-mon[81775]: pgmap v793: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:18:17 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:18:17 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:18:17 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:18:17.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:18:17 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:18:17 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:18:17 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:18:17.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:18:18 compute-1 ceph-mon[81775]: pgmap v794: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:18:19 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:18:19 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:18:19 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:18:19.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:18:19 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:18:19 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:18:19 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:18:19.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:18:19 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:18:19 compute-1 ceph-mon[81775]: pgmap v795: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:18:21 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:18:21 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:18:21 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:18:21.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:18:21 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:18:21 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:18:21 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:18:21.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:18:22 compute-1 ceph-mon[81775]: pgmap v796: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:18:23 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:18:23 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:18:23 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:18:23.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:18:23 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:18:23 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:18:23 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:18:23.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:18:24 compute-1 podman[226287]: 2026-01-20 14:18:24.013997933 +0000 UTC m=+0.062884537 container health_status 533cd8e0c3b8b7f910f15cef49e7f879ab46c12ce646a35628c15f13725eb500 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent)
Jan 20 14:18:24 compute-1 ceph-mon[81775]: pgmap v797: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:18:24 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:18:24 compute-1 sudo[226308]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:18:24 compute-1 sudo[226308]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:18:24 compute-1 sudo[226308]: pam_unix(sudo:session): session closed for user root
Jan 20 14:18:24 compute-1 sudo[226333]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:18:24 compute-1 sudo[226333]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:18:24 compute-1 sudo[226333]: pam_unix(sudo:session): session closed for user root
Jan 20 14:18:25 compute-1 sudo[226358]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:18:25 compute-1 sudo[226358]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:18:25 compute-1 sudo[226358]: pam_unix(sudo:session): session closed for user root
Jan 20 14:18:25 compute-1 sudo[226383]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Jan 20 14:18:25 compute-1 sudo[226383]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:18:25 compute-1 sudo[226383]: pam_unix(sudo:session): session closed for user root
Jan 20 14:18:25 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:18:25 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:18:25 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:18:25.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:18:25 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:18:25 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:18:25 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:18:25.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:18:25 compute-1 sudo[226430]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:18:25 compute-1 sudo[226430]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:18:25 compute-1 sudo[226430]: pam_unix(sudo:session): session closed for user root
Jan 20 14:18:25 compute-1 sudo[226455]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:18:25 compute-1 sudo[226455]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:18:25 compute-1 sudo[226455]: pam_unix(sudo:session): session closed for user root
Jan 20 14:18:25 compute-1 sudo[226480]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:18:25 compute-1 sudo[226480]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:18:25 compute-1 sudo[226480]: pam_unix(sudo:session): session closed for user root
Jan 20 14:18:25 compute-1 sudo[226505]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 20 14:18:25 compute-1 sudo[226505]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:18:26 compute-1 ceph-mon[81775]: pgmap v798: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:18:26 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:18:26 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:18:26 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:18:26 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 20 14:18:26 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:18:26 compute-1 sudo[226505]: pam_unix(sudo:session): session closed for user root
Jan 20 14:18:27 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 20 14:18:27 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:18:27 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 14:18:27 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:18:27 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 14:18:27 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 14:18:27 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:18:27 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:18:27 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:18:27 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:18:27.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:18:27 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:18:27 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:18:27 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:18:27.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:18:28 compute-1 ceph-mon[81775]: pgmap v799: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:18:29 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:18:29 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:18:29 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:18:29.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:18:29 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:18:29 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:18:29 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:18:29.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:18:29 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:18:30 compute-1 ceph-mon[81775]: pgmap v800: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:18:31 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:18:31 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:18:31 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:18:31.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:18:31 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:18:31 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:18:31 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:18:31.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:18:32 compute-1 ceph-mon[81775]: pgmap v801: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:18:32 compute-1 sudo[226564]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:18:32 compute-1 sudo[226564]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:18:32 compute-1 sudo[226564]: pam_unix(sudo:session): session closed for user root
Jan 20 14:18:32 compute-1 sudo[226589]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 14:18:32 compute-1 sudo[226589]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:18:32 compute-1 sudo[226589]: pam_unix(sudo:session): session closed for user root
Jan 20 14:18:33 compute-1 sudo[226614]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:18:33 compute-1 sudo[226614]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:18:33 compute-1 sudo[226614]: pam_unix(sudo:session): session closed for user root
Jan 20 14:18:33 compute-1 sudo[226639]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:18:33 compute-1 sudo[226639]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:18:33 compute-1 sudo[226639]: pam_unix(sudo:session): session closed for user root
Jan 20 14:18:33 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:18:33 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:18:33 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:18:33 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:18:33 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:18:33.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:18:33 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:18:33 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:18:33 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:18:33.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:18:34 compute-1 ceph-mon[81775]: pgmap v802: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:18:34 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:18:35 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:18:35 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:18:35 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:18:35.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:18:35 compute-1 ceph-mon[81775]: pgmap v803: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:18:35 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:18:35 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:18:35 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:18:35.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:18:37 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:18:37 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:18:37 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:18:37.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:18:37 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:18:37 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:18:37 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:18:37.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:18:38 compute-1 ceph-mon[81775]: pgmap v804: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:18:39 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:18:39 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:18:39 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:18:39.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:18:39 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:18:39 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:18:39 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:18:39.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:18:39 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:18:40 compute-1 ceph-mon[81775]: pgmap v805: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:18:41 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:18:41 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:18:41 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:18:41.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:18:41 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:18:41 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:18:41 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:18:41.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:18:42 compute-1 ceph-mon[81775]: pgmap v806: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:18:42 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 20 14:18:42 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/513778578' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 14:18:42 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 20 14:18:42 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/513778578' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 14:18:43 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/513778578' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 14:18:43 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/513778578' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 14:18:43 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/1220553920' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 14:18:43 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/1220553920' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 14:18:43 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:18:43 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:18:43 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:18:43.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:18:43 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:18:43 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:18:43 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:18:43.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:18:44 compute-1 podman[226670]: 2026-01-20 14:18:44.086042056 +0000 UTC m=+0.128500341 container health_status 72be95c12e041eb2cd1a16ad70dfa1391d9f03a5f3180a614ead9459dcfcd46f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller)
Jan 20 14:18:44 compute-1 ceph-mon[81775]: pgmap v807: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:18:44 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/3907418823' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 14:18:44 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/3907418823' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 14:18:44 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:18:45 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:18:45 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:18:45 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:18:45.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:18:45 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:18:45 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:18:45 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:18:45.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:18:46 compute-1 ceph-mon[81775]: pgmap v808: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:18:47 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:18:47 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:18:47 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:18:47.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:18:47 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:18:47 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:18:47 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:18:47.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:18:48 compute-1 ceph-mon[81775]: pgmap v809: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:18:48 compute-1 nova_compute[225855]: 2026-01-20 14:18:48.886 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:18:48 compute-1 nova_compute[225855]: 2026-01-20 14:18:48.945 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:18:49 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:18:49 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:18:49 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:18:49.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:18:49 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:18:49 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:18:49 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:18:49.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:18:49 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:18:50 compute-1 ceph-mon[81775]: pgmap v810: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 767 B/s rd, 0 B/s wr, 1 op/s
Jan 20 14:18:51 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:18:51 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:18:51 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:18:51.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:18:51 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:18:51 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:18:51 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:18:51.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:18:52 compute-1 ceph-mon[81775]: pgmap v811: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 50 KiB/s rd, 0 B/s wr, 82 op/s
Jan 20 14:18:53 compute-1 sudo[226700]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:18:53 compute-1 sudo[226700]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:18:53 compute-1 sudo[226700]: pam_unix(sudo:session): session closed for user root
Jan 20 14:18:53 compute-1 sudo[226725]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:18:53 compute-1 sudo[226725]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:18:53 compute-1 sudo[226725]: pam_unix(sudo:session): session closed for user root
Jan 20 14:18:53 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:18:53 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:18:53 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:18:53.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:18:53 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:18:53 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:18:53 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:18:53.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:18:54 compute-1 ceph-mon[81775]: pgmap v812: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 50 KiB/s rd, 0 B/s wr, 82 op/s
Jan 20 14:18:54 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:18:55 compute-1 podman[226751]: 2026-01-20 14:18:55.069106575 +0000 UTC m=+0.087882368 container health_status 533cd8e0c3b8b7f910f15cef49e7f879ab46c12ce646a35628c15f13725eb500 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent)
Jan 20 14:18:55 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:18:55 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:18:55 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:18:55.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:18:55 compute-1 ceph-mon[81775]: pgmap v813: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 82 KiB/s rd, 0 B/s wr, 136 op/s
Jan 20 14:18:55 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:18:55 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:18:55 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:18:55.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:18:57 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:18:57 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:18:57 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:18:57.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:18:57 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:18:57 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:18:57 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:18:57.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:18:58 compute-1 ceph-mon[81775]: pgmap v814: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 82 KiB/s rd, 0 B/s wr, 136 op/s
Jan 20 14:18:59 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:18:59 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:18:59 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:18:59.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:18:59 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:18:59 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:18:59 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:18:59.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:18:59 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:19:00 compute-1 ceph-mon[81775]: pgmap v815: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 82 KiB/s rd, 0 B/s wr, 136 op/s
Jan 20 14:19:01 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:19:01 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:19:01 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:19:01.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:19:01 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:19:01 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:19:01 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:19:01.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:19:02 compute-1 ceph-mon[81775]: pgmap v816: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 81 KiB/s rd, 0 B/s wr, 135 op/s
Jan 20 14:19:03 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:19:03 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:19:03 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:19:03.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:19:03 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:19:03 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:19:03 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:19:03.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:19:04 compute-1 ceph-mon[81775]: pgmap v817: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 32 KiB/s rd, 0 B/s wr, 53 op/s
Jan 20 14:19:04 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:19:05 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:19:05 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:19:05 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:19:05.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:19:05 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:19:05 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:19:05 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:19:05.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:19:06 compute-1 ceph-mon[81775]: pgmap v818: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 32 KiB/s rd, 0 B/s wr, 53 op/s
Jan 20 14:19:07 compute-1 nova_compute[225855]: 2026-01-20 14:19:07.341 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:19:07 compute-1 nova_compute[225855]: 2026-01-20 14:19:07.341 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:19:07 compute-1 nova_compute[225855]: 2026-01-20 14:19:07.342 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 14:19:07 compute-1 nova_compute[225855]: 2026-01-20 14:19:07.342 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 20 14:19:07 compute-1 ceph-mon[81775]: pgmap v819: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:19:07 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:19:07 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:19:07 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:19:07.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:19:07 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:19:07 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:19:07 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:19:07.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:19:07 compute-1 nova_compute[225855]: 2026-01-20 14:19:07.814 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 20 14:19:07 compute-1 nova_compute[225855]: 2026-01-20 14:19:07.815 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:19:07 compute-1 nova_compute[225855]: 2026-01-20 14:19:07.816 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:19:07 compute-1 nova_compute[225855]: 2026-01-20 14:19:07.816 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:19:07 compute-1 nova_compute[225855]: 2026-01-20 14:19:07.817 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:19:07 compute-1 nova_compute[225855]: 2026-01-20 14:19:07.817 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:19:07 compute-1 nova_compute[225855]: 2026-01-20 14:19:07.818 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:19:07 compute-1 nova_compute[225855]: 2026-01-20 14:19:07.818 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 14:19:07 compute-1 nova_compute[225855]: 2026-01-20 14:19:07.819 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:19:08 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/564146264' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:19:09 compute-1 nova_compute[225855]: 2026-01-20 14:19:09.049 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:19:09 compute-1 nova_compute[225855]: 2026-01-20 14:19:09.050 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:19:09 compute-1 nova_compute[225855]: 2026-01-20 14:19:09.051 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:19:09 compute-1 nova_compute[225855]: 2026-01-20 14:19:09.051 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 14:19:09 compute-1 nova_compute[225855]: 2026-01-20 14:19:09.052 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:19:09 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:19:09 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/777619478' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:19:09 compute-1 nova_compute[225855]: 2026-01-20 14:19:09.507 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:19:09 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:19:09 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:19:09 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:19:09.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:19:09 compute-1 ceph-mon[81775]: pgmap v820: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:19:09 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/777619478' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:19:09 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/1175703373' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:19:09 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:19:09 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:19:09 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:19:09.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:19:09 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:19:09 compute-1 nova_compute[225855]: 2026-01-20 14:19:09.761 225859 WARNING nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 14:19:09 compute-1 nova_compute[225855]: 2026-01-20 14:19:09.764 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=5272MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 14:19:09 compute-1 nova_compute[225855]: 2026-01-20 14:19:09.764 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:19:09 compute-1 nova_compute[225855]: 2026-01-20 14:19:09.765 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:19:10 compute-1 nova_compute[225855]: 2026-01-20 14:19:10.284 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 14:19:10 compute-1 nova_compute[225855]: 2026-01-20 14:19:10.285 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 14:19:10 compute-1 nova_compute[225855]: 2026-01-20 14:19:10.322 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:19:10 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/2626967478' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:19:10 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:19:10 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4174787175' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:19:10 compute-1 nova_compute[225855]: 2026-01-20 14:19:10.801 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:19:10 compute-1 nova_compute[225855]: 2026-01-20 14:19:10.808 225859 DEBUG nova.compute.provider_tree [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:19:10 compute-1 nova_compute[225855]: 2026-01-20 14:19:10.841 225859 DEBUG nova.scheduler.client.report [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:19:10 compute-1 nova_compute[225855]: 2026-01-20 14:19:10.842 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 14:19:10 compute-1 nova_compute[225855]: 2026-01-20 14:19:10.842 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.077s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:19:11 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:19:11 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:19:11 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:19:11.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:19:11 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/3397359392' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:19:11 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/4174787175' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:19:11 compute-1 ceph-mon[81775]: pgmap v821: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:19:11 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:19:11 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:19:11 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:19:11.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:19:13 compute-1 sudo[226824]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:19:13 compute-1 sudo[226824]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:19:13 compute-1 sudo[226824]: pam_unix(sudo:session): session closed for user root
Jan 20 14:19:13 compute-1 sudo[226850]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:19:13 compute-1 sudo[226850]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:19:13 compute-1 sudo[226850]: pam_unix(sudo:session): session closed for user root
Jan 20 14:19:13 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:19:13 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:19:13 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:19:13.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:19:13 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:19:13 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:19:13 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:19:13.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:19:14 compute-1 ceph-mon[81775]: pgmap v822: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:19:14 compute-1 ceph-mon[81775]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #37. Immutable memtables: 0.
Jan 20 14:19:14 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:19:14.105157) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 20 14:19:14 compute-1 ceph-mon[81775]: rocksdb: [db/flush_job.cc:856] [default] [JOB 19] Flushing memtable with next log file: 37
Jan 20 14:19:14 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768918754105253, "job": 19, "event": "flush_started", "num_memtables": 1, "num_entries": 2304, "num_deletes": 251, "total_data_size": 5757540, "memory_usage": 5836592, "flush_reason": "Manual Compaction"}
Jan 20 14:19:14 compute-1 ceph-mon[81775]: rocksdb: [db/flush_job.cc:885] [default] [JOB 19] Level-0 flush table #38: started
Jan 20 14:19:14 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768918754145794, "cf_name": "default", "job": 19, "event": "table_file_creation", "file_number": 38, "file_size": 3767487, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 17748, "largest_seqno": 20047, "table_properties": {"data_size": 3758175, "index_size": 5870, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 18600, "raw_average_key_size": 20, "raw_value_size": 3739661, "raw_average_value_size": 4029, "num_data_blocks": 262, "num_entries": 928, "num_filter_entries": 928, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768918525, "oldest_key_time": 1768918525, "file_creation_time": 1768918754, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1539d774-8a6f-4e48-b253-137c44586344", "db_session_id": "LFF7G2OZDOU7TKQ8MKAH", "orig_file_number": 38, "seqno_to_time_mapping": "N/A"}}
Jan 20 14:19:14 compute-1 ceph-mon[81775]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 19] Flush lasted 40786 microseconds, and 17600 cpu microseconds.
Jan 20 14:19:14 compute-1 ceph-mon[81775]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 14:19:14 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:19:14.145939) [db/flush_job.cc:967] [default] [JOB 19] Level-0 flush table #38: 3767487 bytes OK
Jan 20 14:19:14 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:19:14.145969) [db/memtable_list.cc:519] [default] Level-0 commit table #38 started
Jan 20 14:19:14 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:19:14.148164) [db/memtable_list.cc:722] [default] Level-0 commit table #38: memtable #1 done
Jan 20 14:19:14 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:19:14.148191) EVENT_LOG_v1 {"time_micros": 1768918754148182, "job": 19, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 20 14:19:14 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:19:14.148218) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 20 14:19:14 compute-1 ceph-mon[81775]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 19] Try to delete WAL files size 5747533, prev total WAL file size 5747533, number of live WAL files 2.
Jan 20 14:19:14 compute-1 ceph-mon[81775]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000034.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 14:19:14 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:19:14.150476) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031323535' seq:72057594037927935, type:22 .. '7061786F730031353037' seq:0, type:0; will stop at (end)
Jan 20 14:19:14 compute-1 ceph-mon[81775]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 20] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 20 14:19:14 compute-1 ceph-mon[81775]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 19 Base level 0, inputs: [38(3679KB)], [36(7521KB)]
Jan 20 14:19:14 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768918754150530, "job": 20, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [38], "files_L6": [36], "score": -1, "input_data_size": 11469099, "oldest_snapshot_seqno": -1}
Jan 20 14:19:14 compute-1 ceph-mon[81775]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 20] Generated table #39: 4504 keys, 9395843 bytes, temperature: kUnknown
Jan 20 14:19:14 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768918754274582, "cf_name": "default", "job": 20, "event": "table_file_creation", "file_number": 39, "file_size": 9395843, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9363675, "index_size": 19834, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11269, "raw_key_size": 112778, "raw_average_key_size": 25, "raw_value_size": 9279954, "raw_average_value_size": 2060, "num_data_blocks": 822, "num_entries": 4504, "num_filter_entries": 4504, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768917474, "oldest_key_time": 0, "file_creation_time": 1768918754, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1539d774-8a6f-4e48-b253-137c44586344", "db_session_id": "LFF7G2OZDOU7TKQ8MKAH", "orig_file_number": 39, "seqno_to_time_mapping": "N/A"}}
Jan 20 14:19:14 compute-1 ceph-mon[81775]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 14:19:14 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:19:14.275157) [db/compaction/compaction_job.cc:1663] [default] [JOB 20] Compacted 1@0 + 1@6 files to L6 => 9395843 bytes
Jan 20 14:19:14 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:19:14.277227) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 92.2 rd, 75.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.6, 7.3 +0.0 blob) out(9.0 +0.0 blob), read-write-amplify(5.5) write-amplify(2.5) OK, records in: 5023, records dropped: 519 output_compression: NoCompression
Jan 20 14:19:14 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:19:14.277258) EVENT_LOG_v1 {"time_micros": 1768918754277244, "job": 20, "event": "compaction_finished", "compaction_time_micros": 124412, "compaction_time_cpu_micros": 39957, "output_level": 6, "num_output_files": 1, "total_output_size": 9395843, "num_input_records": 5023, "num_output_records": 4504, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 20 14:19:14 compute-1 ceph-mon[81775]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000038.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 14:19:14 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768918754279351, "job": 20, "event": "table_file_deletion", "file_number": 38}
Jan 20 14:19:14 compute-1 ceph-mon[81775]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000036.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 14:19:14 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768918754282375, "job": 20, "event": "table_file_deletion", "file_number": 36}
Jan 20 14:19:14 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:19:14.150407) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:19:14 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:19:14.282660) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:19:14 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:19:14.282668) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:19:14 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:19:14.282671) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:19:14 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:19:14.282674) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:19:14 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:19:14.282677) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:19:14 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:19:15 compute-1 podman[226875]: 2026-01-20 14:19:15.050705032 +0000 UTC m=+0.102532824 container health_status 72be95c12e041eb2cd1a16ad70dfa1391d9f03a5f3180a614ead9459dcfcd46f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Jan 20 14:19:15 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:19:15 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:19:15 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:19:15.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:19:15 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:19:15 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:19:15 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:19:15.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:19:16 compute-1 ceph-mon[81775]: pgmap v823: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:19:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:19:16.379 140354 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:19:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:19:16.379 140354 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:19:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:19:16.379 140354 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:19:17 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:19:17 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:19:17 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:19:17.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:19:17 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:19:17 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:19:17 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:19:17.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:19:18 compute-1 ceph-mon[81775]: pgmap v824: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:19:19 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:19:19 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:19:19 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:19:19.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:19:19 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:19:19 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:19:19 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:19:19.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:19:19 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:19:20 compute-1 ceph-mon[81775]: pgmap v825: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:19:21 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:19:21 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:19:21 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:19:21.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:19:21 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:19:21 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:19:21 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:19:21.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:19:22 compute-1 ceph-mon[81775]: pgmap v826: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:19:23 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:19:23 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:19:23 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:19:23.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:19:23 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:19:23 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:19:23 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:19:23.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:19:24 compute-1 ceph-mon[81775]: pgmap v827: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:19:24 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:19:25 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:19:25 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:19:25 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:19:25.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:19:25 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:19:25 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:19:25 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:19:25.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:19:26 compute-1 podman[226908]: 2026-01-20 14:19:26.046493573 +0000 UTC m=+0.079000955 container health_status 533cd8e0c3b8b7f910f15cef49e7f879ab46c12ce646a35628c15f13725eb500 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Jan 20 14:19:26 compute-1 ceph-mon[81775]: pgmap v828: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:19:27 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:19:27 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:19:27 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:19:27.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:19:27 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:19:27 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:19:27 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:19:27.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:19:27 compute-1 ceph-mon[81775]: pgmap v829: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:19:29 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:19:29 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:19:29 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:19:29.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:19:29 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:19:29 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:19:29 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:19:29 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:19:29.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:19:30 compute-1 ceph-mon[81775]: pgmap v830: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:19:31 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:19:31 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:19:31 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:19:31.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:19:31 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:19:31 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:19:31 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:19:31.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:19:32 compute-1 ceph-mon[81775]: pgmap v831: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:19:33 compute-1 sudo[226929]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:19:33 compute-1 sudo[226929]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:19:33 compute-1 sudo[226929]: pam_unix(sudo:session): session closed for user root
Jan 20 14:19:33 compute-1 sudo[226954]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:19:33 compute-1 sudo[226954]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:19:33 compute-1 sudo[226954]: pam_unix(sudo:session): session closed for user root
Jan 20 14:19:33 compute-1 sudo[226979]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:19:33 compute-1 sudo[226979]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:19:33 compute-1 sudo[226979]: pam_unix(sudo:session): session closed for user root
Jan 20 14:19:33 compute-1 sudo[227004]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 20 14:19:33 compute-1 sudo[227004]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:19:33 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:19:33 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:19:33 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:19:33.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:19:33 compute-1 sudo[227045]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:19:33 compute-1 sudo[227045]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:19:33 compute-1 sudo[227045]: pam_unix(sudo:session): session closed for user root
Jan 20 14:19:33 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:19:33 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:19:33 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:19:33.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:19:33 compute-1 sudo[227073]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:19:33 compute-1 sudo[227073]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:19:33 compute-1 sudo[227073]: pam_unix(sudo:session): session closed for user root
Jan 20 14:19:33 compute-1 sudo[227004]: pam_unix(sudo:session): session closed for user root
Jan 20 14:19:34 compute-1 ceph-mon[81775]: pgmap v832: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:19:34 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 20 14:19:34 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:19:34 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 14:19:34 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:19:34 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 14:19:34 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 14:19:34 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:19:34 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:19:35 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:19:35 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:19:35 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:19:35.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:19:35 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:19:35 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:19:35 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:19:35.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:19:36 compute-1 ceph-mon[81775]: pgmap v833: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:19:37 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:19:37 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:19:37 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:19:37.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:19:37 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:19:37 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:19:37 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:19:37.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:19:37 compute-1 ceph-mon[81775]: pgmap v834: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:19:39 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:19:39 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:19:39 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:19:39.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:19:39 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:19:39 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:19:39 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:19:39 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:19:39.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:19:40 compute-1 ceph-mon[81775]: pgmap v835: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:19:40 compute-1 sudo[227115]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:19:40 compute-1 sudo[227115]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:19:40 compute-1 sudo[227115]: pam_unix(sudo:session): session closed for user root
Jan 20 14:19:40 compute-1 sudo[227140]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 14:19:40 compute-1 sudo[227140]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:19:40 compute-1 sudo[227140]: pam_unix(sudo:session): session closed for user root
Jan 20 14:19:41 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:19:41 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:19:41 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:19:41 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:19:41 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:19:41.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:19:41 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:19:41 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:19:41 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:19:41.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:19:42 compute-1 ceph-mon[81775]: pgmap v836: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:19:43 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:19:43 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:19:43 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:19:43.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:19:43 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:19:43 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:19:43 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:19:43.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:19:44 compute-1 ceph-mon[81775]: pgmap v837: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:19:44 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:19:45 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:19:45 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:19:45 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:19:45.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:19:45 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:19:45 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:19:45 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:19:45.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:19:46 compute-1 podman[227168]: 2026-01-20 14:19:46.074242091 +0000 UTC m=+0.122744237 container health_status 72be95c12e041eb2cd1a16ad70dfa1391d9f03a5f3180a614ead9459dcfcd46f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 20 14:19:46 compute-1 ceph-mon[81775]: pgmap v838: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:19:47 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:19:47 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:19:47 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:19:47.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:19:47 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:19:47 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:19:47 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:19:47.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:19:48 compute-1 ceph-mon[81775]: pgmap v839: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:19:49 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:19:49 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:19:49 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:19:49.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:19:49 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:19:49 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:19:49 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:19:49 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:19:49.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:19:50 compute-1 ceph-mon[81775]: pgmap v840: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:19:51 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:19:51 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:19:51 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:19:51.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:19:51 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:19:51 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:19:51 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:19:51.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:19:52 compute-1 ceph-mon[81775]: pgmap v841: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:19:53 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:19:53 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:19:53 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:19:53.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:19:53 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:19:53 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:19:53 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:19:53.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:19:53 compute-1 sudo[227198]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:19:53 compute-1 sudo[227198]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:19:53 compute-1 sudo[227198]: pam_unix(sudo:session): session closed for user root
Jan 20 14:19:53 compute-1 sudo[227223]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:19:53 compute-1 sudo[227223]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:19:53 compute-1 sudo[227223]: pam_unix(sudo:session): session closed for user root
Jan 20 14:19:54 compute-1 ceph-mon[81775]: pgmap v842: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:19:54 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:19:55 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:19:55 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:19:55 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:19:55.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:19:55 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:19:55 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:19:55 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:19:55.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:19:56 compute-1 ceph-mon[81775]: pgmap v843: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:19:57 compute-1 podman[227249]: 2026-01-20 14:19:57.045148926 +0000 UTC m=+0.081819895 container health_status 533cd8e0c3b8b7f910f15cef49e7f879ab46c12ce646a35628c15f13725eb500 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 20 14:19:57 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:19:57 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:19:57 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:19:57.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:19:57 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:19:57 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:19:57 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:19:57.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:19:58 compute-1 ceph-mon[81775]: pgmap v844: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:19:59 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:19:59 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:19:59 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:19:59.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:19:59 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:19:59 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:19:59 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:19:59 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:19:59.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:20:00 compute-1 ceph-mon[81775]: pgmap v845: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:20:00 compute-1 ceph-mon[81775]: overall HEALTH_OK
Jan 20 14:20:01 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:20:01 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:20:01 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:20:01.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:20:01 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:20:01 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:20:01 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:20:01.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:20:02 compute-1 ceph-mon[81775]: pgmap v846: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:20:03 compute-1 ceph-mon[81775]: pgmap v847: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:20:03 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:20:03 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:20:03 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:20:03.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:20:03 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:20:03 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:20:03 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:20:03.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:20:04 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:20:05 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:20:05 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:20:05 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:20:05.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:20:05 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:20:05 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:20:05 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:20:05.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:20:06 compute-1 ceph-mon[81775]: pgmap v848: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:20:07 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:20:07 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:20:07 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:20:07.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:20:07 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:20:07 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:20:07 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:20:07.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:20:08 compute-1 ceph-mon[81775]: pgmap v849: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:20:09 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:20:09 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:20:09 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:20:09.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:20:09 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:20:09 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:20:09 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:20:09 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:20:09.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:20:10 compute-1 ceph-mon[81775]: pgmap v850: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:20:10 compute-1 nova_compute[225855]: 2026-01-20 14:20:10.836 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:20:10 compute-1 nova_compute[225855]: 2026-01-20 14:20:10.836 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:20:10 compute-1 nova_compute[225855]: 2026-01-20 14:20:10.874 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:20:10 compute-1 nova_compute[225855]: 2026-01-20 14:20:10.875 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 14:20:10 compute-1 nova_compute[225855]: 2026-01-20 14:20:10.875 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 20 14:20:10 compute-1 nova_compute[225855]: 2026-01-20 14:20:10.893 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 20 14:20:10 compute-1 nova_compute[225855]: 2026-01-20 14:20:10.893 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:20:10 compute-1 nova_compute[225855]: 2026-01-20 14:20:10.893 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:20:10 compute-1 nova_compute[225855]: 2026-01-20 14:20:10.894 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:20:10 compute-1 nova_compute[225855]: 2026-01-20 14:20:10.895 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:20:10 compute-1 nova_compute[225855]: 2026-01-20 14:20:10.895 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:20:10 compute-1 nova_compute[225855]: 2026-01-20 14:20:10.895 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:20:10 compute-1 nova_compute[225855]: 2026-01-20 14:20:10.896 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 14:20:10 compute-1 nova_compute[225855]: 2026-01-20 14:20:10.896 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:20:10 compute-1 nova_compute[225855]: 2026-01-20 14:20:10.929 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:20:10 compute-1 nova_compute[225855]: 2026-01-20 14:20:10.930 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:20:10 compute-1 nova_compute[225855]: 2026-01-20 14:20:10.930 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:20:10 compute-1 nova_compute[225855]: 2026-01-20 14:20:10.930 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 14:20:10 compute-1 nova_compute[225855]: 2026-01-20 14:20:10.931 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:20:11 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/4055056685' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:20:11 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:20:11 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2351083615' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:20:11 compute-1 nova_compute[225855]: 2026-01-20 14:20:11.406 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:20:11 compute-1 nova_compute[225855]: 2026-01-20 14:20:11.558 225859 WARNING nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 14:20:11 compute-1 nova_compute[225855]: 2026-01-20 14:20:11.559 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=5258MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 14:20:11 compute-1 nova_compute[225855]: 2026-01-20 14:20:11.560 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:20:11 compute-1 nova_compute[225855]: 2026-01-20 14:20:11.560 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:20:11 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:20:11 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:20:11 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:20:11.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:20:11 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:20:11 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:20:11 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:20:11.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:20:11 compute-1 nova_compute[225855]: 2026-01-20 14:20:11.895 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 14:20:11 compute-1 nova_compute[225855]: 2026-01-20 14:20:11.895 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 14:20:11 compute-1 rsyslogd[1002]: imjournal from <np0005588919:nova_compute>: begin to drop messages due to rate-limiting
Jan 20 14:20:11 compute-1 nova_compute[225855]: 2026-01-20 14:20:11.920 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:20:12 compute-1 ceph-mon[81775]: pgmap v851: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:20:12 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/1052810782' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:20:12 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/2351083615' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:20:12 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/90274225' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:20:12 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:20:12 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/574053826' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:20:12 compute-1 nova_compute[225855]: 2026-01-20 14:20:12.402 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:20:12 compute-1 nova_compute[225855]: 2026-01-20 14:20:12.410 225859 DEBUG nova.compute.provider_tree [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:20:12 compute-1 nova_compute[225855]: 2026-01-20 14:20:12.759 225859 DEBUG nova.scheduler.client.report [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:20:12 compute-1 nova_compute[225855]: 2026-01-20 14:20:12.762 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 14:20:12 compute-1 nova_compute[225855]: 2026-01-20 14:20:12.763 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.203s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:20:13 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/526916081' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:20:13 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/574053826' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:20:13 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 20 14:20:13 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4041639714' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 14:20:13 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 20 14:20:13 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4041639714' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 14:20:13 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:20:13 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:20:13 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:20:13.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:20:13 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:20:13 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:20:13 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:20:13.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:20:14 compute-1 sudo[227322]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:20:14 compute-1 sudo[227322]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:20:14 compute-1 sudo[227322]: pam_unix(sudo:session): session closed for user root
Jan 20 14:20:14 compute-1 sudo[227347]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:20:14 compute-1 sudo[227347]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:20:14 compute-1 sudo[227347]: pam_unix(sudo:session): session closed for user root
Jan 20 14:20:14 compute-1 ceph-mon[81775]: pgmap v852: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:20:14 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/4041639714' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 14:20:14 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/4041639714' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 14:20:14 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:20:15 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:20:15 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:20:15 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:20:15.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:20:15 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:20:15 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:20:15 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:20:15.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:20:16 compute-1 ceph-mon[81775]: pgmap v853: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:20:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:20:16.380 140354 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:20:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:20:16.380 140354 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:20:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:20:16.380 140354 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:20:17 compute-1 podman[227373]: 2026-01-20 14:20:17.095565189 +0000 UTC m=+0.137646720 container health_status 72be95c12e041eb2cd1a16ad70dfa1391d9f03a5f3180a614ead9459dcfcd46f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, container_name=ovn_controller, 
tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller)
Jan 20 14:20:17 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:20:17 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:20:17 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:20:17.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:20:17 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:20:17 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:20:17 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:20:17.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:20:18 compute-1 ceph-mon[81775]: pgmap v854: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:20:19 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:20:19.156 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=2, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '12:bb:42', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '06:92:24:f7:15:56'}, ipsec=False) old=SB_Global(nb_cfg=1) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:20:19 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:20:19.158 140354 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 20 14:20:19 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:20:19.159 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5ffd4ac3-9266-4927-98ad-20a17782c725, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '2'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:20:19 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:20:19 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:20:19 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:20:19.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:20:19 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:20:19 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:20:19 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:20:19 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:20:19.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:20:20 compute-1 ceph-mon[81775]: pgmap v855: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:20:21 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:20:21 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:20:21 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:20:21.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:20:21 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:20:21 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:20:21 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:20:21.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:20:22 compute-1 ceph-mon[81775]: pgmap v856: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:20:23 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:20:23 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:20:23 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:20:23.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:20:23 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:20:23 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:20:23 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:20:23.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:20:24 compute-1 ceph-mon[81775]: pgmap v857: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:20:24 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:20:25 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:20:25 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:20:25 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:20:25.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:20:25 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:20:25 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:20:25 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:20:25.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:20:26 compute-1 ceph-mon[81775]: pgmap v858: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:20:27 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:20:27 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:20:27 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:20:27.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:20:27 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:20:27 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:20:27 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:20:27.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:20:28 compute-1 podman[227405]: 2026-01-20 14:20:28.067564858 +0000 UTC m=+0.109922045 container health_status 533cd8e0c3b8b7f910f15cef49e7f879ab46c12ce646a35628c15f13725eb500 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 20 14:20:28 compute-1 ceph-mon[81775]: pgmap v859: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:20:29 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:20:29 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:20:29 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:20:29 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:20:29.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:20:29 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:20:29 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:20:29 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:20:29.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:20:30 compute-1 ceph-mon[81775]: pgmap v860: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:20:31 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:20:31 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:20:31 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:20:31.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:20:31 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:20:31 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:20:31 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:20:31.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:20:32 compute-1 ceph-mon[81775]: pgmap v861: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:20:33 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:20:33 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:20:33 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:20:33.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:20:33 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:20:33 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:20:33 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:20:33.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:20:34 compute-1 sudo[227428]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:20:34 compute-1 sudo[227428]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:20:34 compute-1 sudo[227428]: pam_unix(sudo:session): session closed for user root
Jan 20 14:20:34 compute-1 sudo[227453]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:20:34 compute-1 sudo[227453]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:20:34 compute-1 sudo[227453]: pam_unix(sudo:session): session closed for user root
Jan 20 14:20:34 compute-1 ceph-mon[81775]: pgmap v862: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:20:34 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:20:35 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:20:35 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:20:35 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:20:35.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:20:35 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:20:35 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:20:35 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:20:35.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:20:36 compute-1 ceph-mon[81775]: pgmap v863: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:20:37 compute-1 ceph-mon[81775]: pgmap v864: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:20:37 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:20:37 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:20:37 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:20:37.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:20:37 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:20:37 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:20:37 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:20:37.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:20:39 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:20:39 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:20:39 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:20:39 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:20:39.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:20:39 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:20:39 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:20:39 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:20:39.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:20:40 compute-1 ceph-mon[81775]: pgmap v865: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:20:40 compute-1 sudo[227481]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:20:40 compute-1 sudo[227481]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:20:40 compute-1 sudo[227481]: pam_unix(sudo:session): session closed for user root
Jan 20 14:20:40 compute-1 sudo[227506]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:20:40 compute-1 sudo[227506]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:20:40 compute-1 sudo[227506]: pam_unix(sudo:session): session closed for user root
Jan 20 14:20:41 compute-1 sudo[227531]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:20:41 compute-1 sudo[227531]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:20:41 compute-1 sudo[227531]: pam_unix(sudo:session): session closed for user root
Jan 20 14:20:41 compute-1 sudo[227556]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Jan 20 14:20:41 compute-1 sudo[227556]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:20:41 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:20:41 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:20:41 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:20:41.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:20:41 compute-1 podman[227654]: 2026-01-20 14:20:41.795178269 +0000 UTC m=+0.096429333 container exec 718ebba7a543e42aad7051248d2c7dc014068c35c89c5b87f27b82d4de39c009 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-crash-compute-1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 20 14:20:41 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:20:41 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:20:41 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:20:41.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:20:41 compute-1 podman[227654]: 2026-01-20 14:20:41.918822331 +0000 UTC m=+0.220073365 container exec_died 718ebba7a543e42aad7051248d2c7dc014068c35c89c5b87f27b82d4de39c009 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-crash-compute-1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Jan 20 14:20:42 compute-1 ceph-mon[81775]: pgmap v866: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:20:42 compute-1 podman[227808]: 2026-01-20 14:20:42.852468566 +0000 UTC m=+0.088956190 container exec 25e2c3387bc15944c21038272559da5fbf75910d8dd4add0faa995fb4e0f7788 (image=quay.io/ceph/haproxy:2.3, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-haproxy-rgw-default-compute-1-uyeocq)
Jan 20 14:20:42 compute-1 podman[227808]: 2026-01-20 14:20:42.870380724 +0000 UTC m=+0.106868288 container exec_died 25e2c3387bc15944c21038272559da5fbf75910d8dd4add0faa995fb4e0f7788 (image=quay.io/ceph/haproxy:2.3, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-haproxy-rgw-default-compute-1-uyeocq)
Jan 20 14:20:43 compute-1 podman[227874]: 2026-01-20 14:20:43.20035109 +0000 UTC m=+0.087555561 container exec e27b69e4cc956b06482c80498336e112a56122514cd7345d3d4b39a4d206f962 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-keepalived-rgw-default-compute-1-cevitz, io.openshift.tags=Ceph keepalived, com.redhat.component=keepalived-container, io.buildah.version=1.28.2, vcs-type=git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1793, version=2.2.4, io.openshift.expose-services=, distribution-scope=public, name=keepalived, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=keepalived for Ceph, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, build-date=2023-02-22T09:23:20, summary=Provides keepalived on RHEL 9 for Ceph., architecture=x86_64, io.k8s.display-name=Keepalived on RHEL 9)
Jan 20 14:20:43 compute-1 podman[227874]: 2026-01-20 14:20:43.241428654 +0000 UTC m=+0.128633095 container exec_died e27b69e4cc956b06482c80498336e112a56122514cd7345d3d4b39a4d206f962 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-keepalived-rgw-default-compute-1-cevitz, io.k8s.display-name=Keepalived on RHEL 9, io.buildah.version=1.28.2, version=2.2.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.openshift.tags=Ceph keepalived, summary=Provides keepalived on RHEL 9 for Ceph., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, build-date=2023-02-22T09:23:20, com.redhat.component=keepalived-container, vcs-type=git, release=1793, vendor=Red Hat, Inc., description=keepalived for Ceph, io.openshift.expose-services=, name=keepalived)
Jan 20 14:20:43 compute-1 sudo[227556]: pam_unix(sudo:session): session closed for user root
Jan 20 14:20:43 compute-1 sudo[227908]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:20:43 compute-1 sudo[227908]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:20:43 compute-1 sudo[227908]: pam_unix(sudo:session): session closed for user root
Jan 20 14:20:43 compute-1 sudo[227933]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:20:43 compute-1 sudo[227933]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:20:43 compute-1 sudo[227933]: pam_unix(sudo:session): session closed for user root
Jan 20 14:20:43 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:20:43 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:20:43 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:20:43 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:20:43 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:20:43 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:20:43 compute-1 sudo[227959]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:20:43 compute-1 sudo[227959]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:20:43 compute-1 sudo[227959]: pam_unix(sudo:session): session closed for user root
Jan 20 14:20:43 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:20:43 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:20:43 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:20:43.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:20:43 compute-1 sudo[227984]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 20 14:20:43 compute-1 sudo[227984]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:20:43 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:20:43 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:20:43 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:20:43.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:20:44 compute-1 sudo[227984]: pam_unix(sudo:session): session closed for user root
Jan 20 14:20:44 compute-1 ceph-mon[81775]: pgmap v867: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:20:44 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:20:44 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 14:20:44 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:20:44 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 14:20:44 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 14:20:44 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:20:44 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:20:45 compute-1 ceph-mon[81775]: pgmap v868: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:20:45 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:20:45 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:20:45 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:20:45.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:20:45 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:20:45 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:20:45 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:20:45.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:20:47 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:20:47 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:20:47 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:20:47.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:20:47 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:20:47 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:20:47 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:20:47.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:20:48 compute-1 podman[228043]: 2026-01-20 14:20:48.106399372 +0000 UTC m=+0.138452413 container health_status 72be95c12e041eb2cd1a16ad70dfa1391d9f03a5f3180a614ead9459dcfcd46f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3)
Jan 20 14:20:48 compute-1 ceph-mon[81775]: pgmap v869: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:20:49 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:20:49 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:20:49 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:20:49 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:20:49.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:20:49 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:20:49 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:20:49 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:20:49.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:20:50 compute-1 sudo[228070]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:20:50 compute-1 sudo[228070]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:20:50 compute-1 sudo[228070]: pam_unix(sudo:session): session closed for user root
Jan 20 14:20:50 compute-1 sudo[228095]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 14:20:50 compute-1 sudo[228095]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:20:50 compute-1 sudo[228095]: pam_unix(sudo:session): session closed for user root
Jan 20 14:20:50 compute-1 ceph-mon[81775]: pgmap v870: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:20:50 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:20:50 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:20:51 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:20:51 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:20:51 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:20:51.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:20:51 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:20:51 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:20:51 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:20:51.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:20:52 compute-1 ceph-mon[81775]: pgmap v871: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:20:53 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:20:53 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:20:53 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:20:53.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:20:53 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:20:53 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:20:53 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:20:53.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:20:53 compute-1 ceph-mon[81775]: pgmap v872: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:20:54 compute-1 sudo[228122]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:20:54 compute-1 sudo[228122]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:20:54 compute-1 sudo[228122]: pam_unix(sudo:session): session closed for user root
Jan 20 14:20:54 compute-1 sudo[228147]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:20:54 compute-1 sudo[228147]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:20:54 compute-1 sudo[228147]: pam_unix(sudo:session): session closed for user root
Jan 20 14:20:54 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:20:55 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:20:55 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:20:55 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:20:55.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:20:55 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:20:55 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:20:55 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:20:55.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:20:56 compute-1 ceph-mon[81775]: pgmap v873: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:20:57 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:20:57 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:20:57 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:20:57.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:20:57 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:20:57 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:20:57 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:20:57.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:20:58 compute-1 ceph-mon[81775]: pgmap v874: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:20:59 compute-1 podman[228174]: 2026-01-20 14:20:59.025036279 +0000 UTC m=+0.065060814 container health_status 533cd8e0c3b8b7f910f15cef49e7f879ab46c12ce646a35628c15f13725eb500 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 20 14:20:59 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:20:59 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:20:59 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:20:59 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:20:59.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:20:59 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:20:59 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:20:59 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:20:59.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:21:00 compute-1 ceph-mon[81775]: pgmap v875: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:21:01 compute-1 ceph-mon[81775]: pgmap v876: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:21:01 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:21:01 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:21:01 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:21:01.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:21:01 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:21:01 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:21:01 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:21:01.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:21:03 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:21:03 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:21:03 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:21:03.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:21:03 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:21:03 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:21:03 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:21:03.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:21:04 compute-1 ceph-mon[81775]: pgmap v877: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:21:04 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:21:05 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:21:05 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:21:05 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:21:05.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:21:05 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:21:05 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:21:05 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:21:05.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:21:06 compute-1 ceph-mon[81775]: pgmap v878: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:21:07 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:21:07 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:21:07 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:21:07.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:21:07 compute-1 ceph-mon[81775]: pgmap v879: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:21:07 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:21:07 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:21:07 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:21:07.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:21:09 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:21:09 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:21:09 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:21:09 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:21:09.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:21:09 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:21:09 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:21:09 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:21:09.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:21:10 compute-1 ceph-mon[81775]: pgmap v880: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:21:11 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:21:11 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:21:11 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:21:11.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:21:11 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:21:11 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:21:11 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:21:11.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:21:11 compute-1 ceph-mon[81775]: pgmap v881: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:21:12 compute-1 nova_compute[225855]: 2026-01-20 14:21:12.765 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:21:12 compute-1 nova_compute[225855]: 2026-01-20 14:21:12.766 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:21:12 compute-1 nova_compute[225855]: 2026-01-20 14:21:12.766 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 14:21:12 compute-1 nova_compute[225855]: 2026-01-20 14:21:12.767 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 20 14:21:13 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/4122199275' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:21:13 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:21:13 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:21:13 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:21:13.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:21:13 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:21:13 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:21:13 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:21:13.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:21:13 compute-1 nova_compute[225855]: 2026-01-20 14:21:13.989 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 20 14:21:13 compute-1 nova_compute[225855]: 2026-01-20 14:21:13.989 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:21:13 compute-1 nova_compute[225855]: 2026-01-20 14:21:13.990 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:21:13 compute-1 nova_compute[225855]: 2026-01-20 14:21:13.990 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:21:13 compute-1 nova_compute[225855]: 2026-01-20 14:21:13.990 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:21:13 compute-1 nova_compute[225855]: 2026-01-20 14:21:13.991 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:21:13 compute-1 nova_compute[225855]: 2026-01-20 14:21:13.991 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:21:13 compute-1 nova_compute[225855]: 2026-01-20 14:21:13.991 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 14:21:13 compute-1 nova_compute[225855]: 2026-01-20 14:21:13.992 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:21:14 compute-1 nova_compute[225855]: 2026-01-20 14:21:14.040 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:21:14 compute-1 nova_compute[225855]: 2026-01-20 14:21:14.040 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:21:14 compute-1 nova_compute[225855]: 2026-01-20 14:21:14.041 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:21:14 compute-1 nova_compute[225855]: 2026-01-20 14:21:14.041 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 14:21:14 compute-1 nova_compute[225855]: 2026-01-20 14:21:14.042 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:21:14 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:21:14 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/757655099' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:21:14 compute-1 nova_compute[225855]: 2026-01-20 14:21:14.524 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:21:14 compute-1 sudo[228224]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:21:14 compute-1 sudo[228224]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:21:14 compute-1 sudo[228224]: pam_unix(sudo:session): session closed for user root
Jan 20 14:21:14 compute-1 sudo[228251]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:21:14 compute-1 sudo[228251]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:21:14 compute-1 sudo[228251]: pam_unix(sudo:session): session closed for user root
Jan 20 14:21:14 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:21:14 compute-1 nova_compute[225855]: 2026-01-20 14:21:14.761 225859 WARNING nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 14:21:14 compute-1 nova_compute[225855]: 2026-01-20 14:21:14.762 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=5252MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 14:21:14 compute-1 nova_compute[225855]: 2026-01-20 14:21:14.763 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:21:14 compute-1 nova_compute[225855]: 2026-01-20 14:21:14.763 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:21:14 compute-1 nova_compute[225855]: 2026-01-20 14:21:14.866 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 14:21:14 compute-1 nova_compute[225855]: 2026-01-20 14:21:14.867 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 14:21:14 compute-1 nova_compute[225855]: 2026-01-20 14:21:14.908 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:21:15 compute-1 ceph-mon[81775]: pgmap v882: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:21:15 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/3277933881' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 14:21:15 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/3277933881' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 14:21:15 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:21:15 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1836825557' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:21:15 compute-1 nova_compute[225855]: 2026-01-20 14:21:15.373 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:21:15 compute-1 nova_compute[225855]: 2026-01-20 14:21:15.380 225859 DEBUG nova.compute.provider_tree [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:21:15 compute-1 nova_compute[225855]: 2026-01-20 14:21:15.400 225859 DEBUG nova.scheduler.client.report [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:21:15 compute-1 nova_compute[225855]: 2026-01-20 14:21:15.402 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 14:21:15 compute-1 nova_compute[225855]: 2026-01-20 14:21:15.402 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.639s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:21:15 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:21:15 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:21:15 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:21:15.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:21:15 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:21:15 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:21:15 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:21:15.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:21:16 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/757655099' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:21:16 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/3118763901' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:21:16 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/812704314' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:21:16 compute-1 ceph-mon[81775]: pgmap v883: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:21:16 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/1836825557' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:21:16 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/3432590098' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:21:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:21:16.381 140354 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:21:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:21:16.383 140354 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:21:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:21:16.383 140354 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:21:17 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:21:17 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:21:17 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:21:17.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:21:17 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:21:17 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:21:17 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:21:17.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:21:19 compute-1 podman[228300]: 2026-01-20 14:21:19.092971086 +0000 UTC m=+0.136006443 container health_status 72be95c12e041eb2cd1a16ad70dfa1391d9f03a5f3180a614ead9459dcfcd46f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Jan 20 14:21:19 compute-1 ceph-mon[81775]: pgmap v884: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:21:19 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:21:19 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:21:19 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:21:19 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:21:19.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:21:19 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:21:19 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:21:19 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:21:19.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:21:20 compute-1 ceph-mon[81775]: pgmap v885: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:21:21 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:21:21 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:21:21 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:21:21.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:21:21 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:21:21 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:21:21 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:21:21.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:21:21 compute-1 ceph-mon[81775]: pgmap v886: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:21:23 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:21:23 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:21:23 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:21:23.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:21:23 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:21:23 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:21:23 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:21:23.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:21:24 compute-1 ceph-mon[81775]: pgmap v887: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:21:24 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:21:25 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:21:25 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:21:25 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:21:25.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:21:25 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:21:25 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:21:25 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:21:25.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:21:26 compute-1 ceph-mon[81775]: pgmap v888: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:21:27 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:21:27 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:21:27 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:21:27.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:21:27 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:21:27 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:21:27 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:21:27.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:21:28 compute-1 ceph-mon[81775]: pgmap v889: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:21:29 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:21:29 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:21:29 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:21:29 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:21:29.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:21:29 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:21:29 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:21:29 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:21:29.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:21:30 compute-1 podman[228333]: 2026-01-20 14:21:30.041404998 +0000 UTC m=+0.080008907 container health_status 533cd8e0c3b8b7f910f15cef49e7f879ab46c12ce646a35628c15f13725eb500 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible)
Jan 20 14:21:30 compute-1 ceph-mon[81775]: pgmap v890: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:21:31 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:21:31 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:21:31 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:21:31.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:21:31 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:21:31 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:21:31 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:21:31.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:21:32 compute-1 ceph-mon[81775]: pgmap v891: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:21:33 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:21:33 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:21:33 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:21:33.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:21:33 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:21:33 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:21:33 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:21:33.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:21:34 compute-1 ceph-mon[81775]: pgmap v892: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:21:34 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:21:34 compute-1 sudo[228355]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:21:34 compute-1 sudo[228355]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:21:34 compute-1 sudo[228355]: pam_unix(sudo:session): session closed for user root
Jan 20 14:21:35 compute-1 sudo[228380]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:21:35 compute-1 sudo[228380]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:21:35 compute-1 sudo[228380]: pam_unix(sudo:session): session closed for user root
Jan 20 14:21:35 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:21:35 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:21:35 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:21:35.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:21:35 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:21:35 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:21:35 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:21:35.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:21:36 compute-1 ceph-mon[81775]: pgmap v893: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:21:37 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:21:37 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:21:37 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:21:37.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:21:37 compute-1 ceph-mon[81775]: pgmap v894: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:21:37 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:21:37 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:21:37 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:21:37.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:21:39 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:21:39 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:21:39 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:21:39 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:21:39.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:21:39 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:21:39 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:21:39 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:21:39.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:21:40 compute-1 ceph-mon[81775]: pgmap v895: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:21:41 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:21:41 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:21:41 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:21:41.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:21:41 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:21:41 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:21:41 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:21:41.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:21:42 compute-1 ceph-mon[81775]: pgmap v896: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:21:43 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:21:43 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:21:43 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:21:43.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:21:43 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:21:43 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:21:43 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:21:43.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:21:44 compute-1 ceph-mon[81775]: pgmap v897: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:21:44 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:21:45 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:21:45 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:21:45 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:21:45.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:21:45 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:21:45 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:21:45 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:21:45.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:21:46 compute-1 ceph-mon[81775]: pgmap v898: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:21:47 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:21:47 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:21:47 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:21:47.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:21:47 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:21:47 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:21:47 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:21:47.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:21:48 compute-1 ceph-mon[81775]: pgmap v899: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:21:49 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:21:49 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:21:49 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:21:49 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:21:49.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:21:49 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:21:49 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:21:49 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:21:49.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:21:50 compute-1 podman[228413]: 2026-01-20 14:21:50.105795887 +0000 UTC m=+0.133550284 container health_status 72be95c12e041eb2cd1a16ad70dfa1391d9f03a5f3180a614ead9459dcfcd46f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 20 14:21:50 compute-1 sudo[228439]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:21:50 compute-1 sudo[228439]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:21:50 compute-1 ceph-mon[81775]: pgmap v900: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:21:50 compute-1 sudo[228439]: pam_unix(sudo:session): session closed for user root
Jan 20 14:21:50 compute-1 sudo[228464]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:21:50 compute-1 sudo[228464]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:21:50 compute-1 sudo[228464]: pam_unix(sudo:session): session closed for user root
Jan 20 14:21:50 compute-1 sudo[228489]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:21:50 compute-1 sudo[228489]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:21:50 compute-1 sudo[228489]: pam_unix(sudo:session): session closed for user root
Jan 20 14:21:50 compute-1 sudo[228514]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 20 14:21:50 compute-1 sudo[228514]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:21:51 compute-1 sudo[228514]: pam_unix(sudo:session): session closed for user root
Jan 20 14:21:51 compute-1 ceph-mon[81775]: pgmap v901: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:21:51 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:21:51 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 14:21:51 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:21:51 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:21:51 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:21:51.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:21:51 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:21:51 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:21:51 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:21:51.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:21:52 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:21:52 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 14:21:52 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 14:21:52 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:21:53 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:21:53 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:21:53 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - - [20/Jan/2026:14:21:53.492 +0000] "GET /swift/info HTTP/1.1" 200 509 - "python-urllib3/1.26.5" - latency=0.001000028s
Jan 20 14:21:53 compute-1 ceph-mon[81775]: pgmap v902: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:21:53 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:21:53 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:21:53 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:21:53.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:21:53 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:21:53 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:21:53 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:21:53.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:21:54 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:21:55 compute-1 sudo[228572]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:21:55 compute-1 sudo[228572]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:21:55 compute-1 sudo[228572]: pam_unix(sudo:session): session closed for user root
Jan 20 14:21:55 compute-1 sudo[228597]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:21:55 compute-1 sudo[228597]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:21:55 compute-1 sudo[228597]: pam_unix(sudo:session): session closed for user root
Jan 20 14:21:55 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:21:55 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:21:55 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:21:55.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:21:55 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:21:55 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:21:55 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:21:55.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:21:56 compute-1 ceph-mon[81775]: pgmap v903: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:21:57 compute-1 sudo[228624]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:21:57 compute-1 sudo[228624]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:21:57 compute-1 sudo[228624]: pam_unix(sudo:session): session closed for user root
Jan 20 14:21:57 compute-1 sudo[228649]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 14:21:57 compute-1 sudo[228649]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:21:57 compute-1 sudo[228649]: pam_unix(sudo:session): session closed for user root
Jan 20 14:21:57 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:21:57 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:21:57 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:21:57.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:21:57 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:21:57 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:21:57 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:21:57.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:21:58 compute-1 ceph-mon[81775]: pgmap v904: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:21:58 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:21:58 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:21:59 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e129 e129: 3 total, 3 up, 3 in
Jan 20 14:21:59 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:21:59 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:21:59 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:21:59 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:21:59.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:21:59 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:21:59 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:21:59 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:21:59.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:22:00 compute-1 ceph-mon[81775]: pgmap v905: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:22:00 compute-1 ceph-mon[81775]: osdmap e129: 3 total, 3 up, 3 in
Jan 20 14:22:00 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e130 e130: 3 total, 3 up, 3 in
Jan 20 14:22:01 compute-1 podman[228676]: 2026-01-20 14:22:01.03955401 +0000 UTC m=+0.084894346 container health_status 533cd8e0c3b8b7f910f15cef49e7f879ab46c12ce646a35628c15f13725eb500 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_metadata_agent)
Jan 20 14:22:01 compute-1 ceph-mon[81775]: osdmap e130: 3 total, 3 up, 3 in
Jan 20 14:22:01 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e131 e131: 3 total, 3 up, 3 in
Jan 20 14:22:01 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:22:01 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:22:01 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:22:01.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:22:01 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:22:01 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:22:01 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:22:01.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:22:02 compute-1 ceph-mon[81775]: pgmap v908: 321 pgs: 321 active+clean; 16 MiB data, 153 MiB used, 21 GiB / 21 GiB avail; 2.9 KiB/s rd, 2.0 MiB/s wr, 6 op/s
Jan 20 14:22:02 compute-1 ceph-mon[81775]: osdmap e131: 3 total, 3 up, 3 in
Jan 20 14:22:03 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:22:03 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:22:03 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:22:03.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:22:03 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:22:03 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:22:03 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:22:03.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:22:04 compute-1 ceph-mon[81775]: pgmap v910: 321 pgs: 321 active+clean; 16 MiB data, 153 MiB used, 21 GiB / 21 GiB avail; 4.7 KiB/s rd, 2.7 MiB/s wr, 9 op/s
Jan 20 14:22:04 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e132 e132: 3 total, 3 up, 3 in
Jan 20 14:22:04 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:22:05 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:22:05 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:22:05 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:22:05.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:22:05 compute-1 ceph-mon[81775]: osdmap e132: 3 total, 3 up, 3 in
Jan 20 14:22:05 compute-1 ceph-mon[81775]: pgmap v912: 321 pgs: 321 active+clean; 16 MiB data, 153 MiB used, 21 GiB / 21 GiB avail; 34 KiB/s rd, 2.8 MiB/s wr, 48 op/s
Jan 20 14:22:05 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:22:05 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:22:05 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:22:05.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:22:07 compute-1 ceph-mon[81775]: pgmap v913: 321 pgs: 321 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 31 KiB/s rd, 6.1 MiB/s wr, 45 op/s
Jan 20 14:22:07 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:22:07 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:22:07 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:22:07.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:22:07 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:22:07 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:22:07 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:22:07.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:22:09 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:22:09 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:22:09 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:22:09 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:22:09.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:22:09 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:22:09 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:22:09 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:22:09.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:22:09 compute-1 nova_compute[225855]: 2026-01-20 14:22:09.972 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:22:09 compute-1 nova_compute[225855]: 2026-01-20 14:22:09.994 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:22:09 compute-1 nova_compute[225855]: 2026-01-20 14:22:09.994 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 14:22:09 compute-1 nova_compute[225855]: 2026-01-20 14:22:09.994 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 20 14:22:10 compute-1 nova_compute[225855]: 2026-01-20 14:22:10.006 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 20 14:22:10 compute-1 nova_compute[225855]: 2026-01-20 14:22:10.006 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:22:10 compute-1 nova_compute[225855]: 2026-01-20 14:22:10.006 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:22:10 compute-1 nova_compute[225855]: 2026-01-20 14:22:10.007 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:22:10 compute-1 nova_compute[225855]: 2026-01-20 14:22:10.007 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:22:10 compute-1 nova_compute[225855]: 2026-01-20 14:22:10.007 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:22:10 compute-1 nova_compute[225855]: 2026-01-20 14:22:10.030 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:22:10 compute-1 nova_compute[225855]: 2026-01-20 14:22:10.030 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:22:10 compute-1 nova_compute[225855]: 2026-01-20 14:22:10.030 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:22:10 compute-1 nova_compute[225855]: 2026-01-20 14:22:10.031 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 14:22:10 compute-1 nova_compute[225855]: 2026-01-20 14:22:10.031 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:22:10 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:22:10 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4026262527' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:22:10 compute-1 nova_compute[225855]: 2026-01-20 14:22:10.508 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:22:10 compute-1 nova_compute[225855]: 2026-01-20 14:22:10.702 225859 WARNING nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 14:22:10 compute-1 nova_compute[225855]: 2026-01-20 14:22:10.703 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=5288MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 14:22:10 compute-1 nova_compute[225855]: 2026-01-20 14:22:10.703 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:22:10 compute-1 nova_compute[225855]: 2026-01-20 14:22:10.704 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:22:10 compute-1 ceph-mon[81775]: pgmap v914: 321 pgs: 321 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 30 KiB/s rd, 3.1 MiB/s wr, 41 op/s
Jan 20 14:22:10 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/4026262527' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:22:11 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:22:11 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:22:11 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:22:11.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:22:11 compute-1 nova_compute[225855]: 2026-01-20 14:22:11.920 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 14:22:11 compute-1 nova_compute[225855]: 2026-01-20 14:22:11.921 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 14:22:11 compute-1 nova_compute[225855]: 2026-01-20 14:22:11.949 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:22:11 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:22:11 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:22:11 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:22:11.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:22:12 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e133 e133: 3 total, 3 up, 3 in
Jan 20 14:22:12 compute-1 ceph-mon[81775]: pgmap v915: 321 pgs: 321 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 25 KiB/s rd, 2.6 MiB/s wr, 34 op/s
Jan 20 14:22:12 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:22:12 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2009229127' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:22:12 compute-1 nova_compute[225855]: 2026-01-20 14:22:12.362 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.413s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:22:12 compute-1 nova_compute[225855]: 2026-01-20 14:22:12.367 225859 DEBUG nova.compute.provider_tree [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:22:12 compute-1 nova_compute[225855]: 2026-01-20 14:22:12.765 225859 DEBUG nova.scheduler.client.report [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:22:12 compute-1 nova_compute[225855]: 2026-01-20 14:22:12.768 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 14:22:12 compute-1 nova_compute[225855]: 2026-01-20 14:22:12.769 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.065s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:22:13 compute-1 ceph-mon[81775]: osdmap e133: 3 total, 3 up, 3 in
Jan 20 14:22:13 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/2009229127' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:22:13 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/2964033409' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:22:13 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:22:13 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:22:13 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:22:13.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:22:13 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:22:13 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:22:13 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:22:13.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:22:14 compute-1 nova_compute[225855]: 2026-01-20 14:22:14.103 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:22:14 compute-1 nova_compute[225855]: 2026-01-20 14:22:14.103 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:22:14 compute-1 nova_compute[225855]: 2026-01-20 14:22:14.104 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:22:14 compute-1 nova_compute[225855]: 2026-01-20 14:22:14.104 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 14:22:14 compute-1 ceph-mon[81775]: pgmap v917: 321 pgs: 321 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 8.3 KiB/s rd, 2.9 MiB/s wr, 12 op/s
Jan 20 14:22:14 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/4042614107' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 14:22:14 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/4042614107' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 14:22:14 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/3420770576' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:22:14 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/4064589682' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:22:14 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:22:15 compute-1 sudo[228745]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:22:15 compute-1 sudo[228745]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:22:15 compute-1 sudo[228745]: pam_unix(sudo:session): session closed for user root
Jan 20 14:22:15 compute-1 sudo[228770]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:22:15 compute-1 sudo[228770]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:22:15 compute-1 sudo[228770]: pam_unix(sudo:session): session closed for user root
Jan 20 14:22:15 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/3670245392' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:22:15 compute-1 ceph-mon[81775]: pgmap v918: 321 pgs: 321 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 7.0 KiB/s rd, 2.5 MiB/s wr, 10 op/s
Jan 20 14:22:15 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:22:15 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:22:15 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:22:15.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:22:15 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:22:15 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:22:15 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:22:15.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:22:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:22:16.383 140354 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:22:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:22:16.384 140354 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:22:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:22:16.384 140354 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:22:17 compute-1 ceph-mon[81775]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #40. Immutable memtables: 0.
Jan 20 14:22:17 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:22:17.055766) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 20 14:22:17 compute-1 ceph-mon[81775]: rocksdb: [db/flush_job.cc:856] [default] [JOB 21] Flushing memtable with next log file: 40
Jan 20 14:22:17 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768918937055814, "job": 21, "event": "flush_started", "num_memtables": 1, "num_entries": 2023, "num_deletes": 250, "total_data_size": 5002369, "memory_usage": 5054712, "flush_reason": "Manual Compaction"}
Jan 20 14:22:17 compute-1 ceph-mon[81775]: rocksdb: [db/flush_job.cc:885] [default] [JOB 21] Level-0 flush table #41: started
Jan 20 14:22:17 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768918937070147, "cf_name": "default", "job": 21, "event": "table_file_creation", "file_number": 41, "file_size": 1972422, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 20052, "largest_seqno": 22070, "table_properties": {"data_size": 1966168, "index_size": 3265, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1925, "raw_key_size": 15784, "raw_average_key_size": 20, "raw_value_size": 1952328, "raw_average_value_size": 2552, "num_data_blocks": 147, "num_entries": 765, "num_filter_entries": 765, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768918755, "oldest_key_time": 1768918755, "file_creation_time": 1768918937, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1539d774-8a6f-4e48-b253-137c44586344", "db_session_id": "LFF7G2OZDOU7TKQ8MKAH", "orig_file_number": 41, "seqno_to_time_mapping": "N/A"}}
Jan 20 14:22:17 compute-1 ceph-mon[81775]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 21] Flush lasted 14481 microseconds, and 5201 cpu microseconds.
Jan 20 14:22:17 compute-1 ceph-mon[81775]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 14:22:17 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:22:17.070243) [db/flush_job.cc:967] [default] [JOB 21] Level-0 flush table #41: 1972422 bytes OK
Jan 20 14:22:17 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:22:17.070290) [db/memtable_list.cc:519] [default] Level-0 commit table #41 started
Jan 20 14:22:17 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:22:17.075098) [db/memtable_list.cc:722] [default] Level-0 commit table #41: memtable #1 done
Jan 20 14:22:17 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:22:17.075123) EVENT_LOG_v1 {"time_micros": 1768918937075115, "job": 21, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 20 14:22:17 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:22:17.075158) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 20 14:22:17 compute-1 ceph-mon[81775]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 21] Try to delete WAL files size 4993334, prev total WAL file size 4993334, number of live WAL files 2.
Jan 20 14:22:17 compute-1 ceph-mon[81775]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000037.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 14:22:17 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:22:17.077369) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400353030' seq:72057594037927935, type:22 .. '6D67727374617400373531' seq:0, type:0; will stop at (end)
Jan 20 14:22:17 compute-1 ceph-mon[81775]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 22] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 20 14:22:17 compute-1 ceph-mon[81775]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 21 Base level 0, inputs: [41(1926KB)], [39(9175KB)]
Jan 20 14:22:17 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768918937077451, "job": 22, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [41], "files_L6": [39], "score": -1, "input_data_size": 11368265, "oldest_snapshot_seqno": -1}
Jan 20 14:22:17 compute-1 ceph-mon[81775]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 22] Generated table #42: 4833 keys, 8797696 bytes, temperature: kUnknown
Jan 20 14:22:17 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768918937164595, "cf_name": "default", "job": 22, "event": "table_file_creation", "file_number": 42, "file_size": 8797696, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8765289, "index_size": 19241, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12101, "raw_key_size": 119925, "raw_average_key_size": 24, "raw_value_size": 8677715, "raw_average_value_size": 1795, "num_data_blocks": 799, "num_entries": 4833, "num_filter_entries": 4833, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768917474, "oldest_key_time": 0, "file_creation_time": 1768918937, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1539d774-8a6f-4e48-b253-137c44586344", "db_session_id": "LFF7G2OZDOU7TKQ8MKAH", "orig_file_number": 42, "seqno_to_time_mapping": "N/A"}}
Jan 20 14:22:17 compute-1 ceph-mon[81775]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 14:22:17 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:22:17.165009) [db/compaction/compaction_job.cc:1663] [default] [JOB 22] Compacted 1@0 + 1@6 files to L6 => 8797696 bytes
Jan 20 14:22:17 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:22:17.166747) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 130.3 rd, 100.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.9, 9.0 +0.0 blob) out(8.4 +0.0 blob), read-write-amplify(10.2) write-amplify(4.5) OK, records in: 5269, records dropped: 436 output_compression: NoCompression
Jan 20 14:22:17 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:22:17.166764) EVENT_LOG_v1 {"time_micros": 1768918937166756, "job": 22, "event": "compaction_finished", "compaction_time_micros": 87255, "compaction_time_cpu_micros": 42361, "output_level": 6, "num_output_files": 1, "total_output_size": 8797696, "num_input_records": 5269, "num_output_records": 4833, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 20 14:22:17 compute-1 ceph-mon[81775]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000041.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 14:22:17 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768918937167263, "job": 22, "event": "table_file_deletion", "file_number": 41}
Jan 20 14:22:17 compute-1 ceph-mon[81775]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000039.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 14:22:17 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768918937168760, "job": 22, "event": "table_file_deletion", "file_number": 39}
Jan 20 14:22:17 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:22:17.077195) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:22:17 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:22:17.168918) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:22:17 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:22:17.168928) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:22:17 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:22:17.168932) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:22:17 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:22:17.168935) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:22:17 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:22:17.168939) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:22:17 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:22:17 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:22:17 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:22:17.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:22:17 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:22:17 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:22:17 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:22:17.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:22:18 compute-1 ceph-mon[81775]: pgmap v919: 321 pgs: 321 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 5.5 KiB/s rd, 818 B/s wr, 7 op/s
Jan 20 14:22:19 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:22:19 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:22:19 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:22:19 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:22:19.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:22:19 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:22:19 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:22:19 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:22:19.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:22:20 compute-1 ceph-mon[81775]: pgmap v920: 321 pgs: 321 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:22:21 compute-1 podman[228798]: 2026-01-20 14:22:21.104764597 +0000 UTC m=+0.140757533 container health_status 72be95c12e041eb2cd1a16ad70dfa1391d9f03a5f3180a614ead9459dcfcd46f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, org.label-schema.build-date=20251202, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Jan 20 14:22:21 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:22:21 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:22:21 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:22:21.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:22:21 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:22:21 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:22:21 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:22:21.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:22:22 compute-1 ceph-mon[81775]: pgmap v921: 321 pgs: 321 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:22:23 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:22:23.207 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=3, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '12:bb:42', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '06:92:24:f7:15:56'}, ipsec=False) old=SB_Global(nb_cfg=2) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:22:23 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:22:23.209 140354 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 20 14:22:23 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:22:23 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:22:23 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:22:23.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:22:23 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:22:23 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:22:23 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:22:23.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:22:24 compute-1 ceph-mon[81775]: pgmap v922: 321 pgs: 321 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:22:24 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:22:25 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:22:25 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:22:25 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:22:25.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:22:25 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:22:25 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:22:25 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:22:25.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:22:26 compute-1 ceph-mon[81775]: pgmap v923: 321 pgs: 321 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:22:27 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:22:27 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:22:27 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:22:27.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:22:27 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:22:27 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:22:27 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:22:27.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:22:28 compute-1 ceph-mon[81775]: pgmap v924: 321 pgs: 321 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:22:29 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:22:29 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:22:29 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:22:29 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:22:29.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:22:29 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:22:29 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:22:29 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:22:29.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:22:30 compute-1 ceph-mon[81775]: pgmap v925: 321 pgs: 321 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:22:31 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:22:31.212 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5ffd4ac3-9266-4927-98ad-20a17782c725, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '3'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:22:31 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:22:31 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:22:31 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:22:31.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:22:32 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:22:32 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:22:32 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:22:32.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:22:32 compute-1 podman[228831]: 2026-01-20 14:22:32.016670887 +0000 UTC m=+0.063404942 container health_status 533cd8e0c3b8b7f910f15cef49e7f879ab46c12ce646a35628c15f13725eb500 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 20 14:22:32 compute-1 ceph-mon[81775]: pgmap v926: 321 pgs: 321 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:22:33 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/1936891983' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:22:33 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:22:33 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:22:33 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:22:33.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:22:34 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:22:34 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:22:34 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:22:34.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:22:34 compute-1 ceph-mon[81775]: pgmap v927: 321 pgs: 321 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:22:34 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:22:35 compute-1 sudo[228852]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:22:35 compute-1 sudo[228852]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:22:35 compute-1 sudo[228852]: pam_unix(sudo:session): session closed for user root
Jan 20 14:22:35 compute-1 sudo[228878]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:22:35 compute-1 sudo[228878]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:22:35 compute-1 sudo[228878]: pam_unix(sudo:session): session closed for user root
Jan 20 14:22:35 compute-1 ceph-mon[81775]: pgmap v928: 321 pgs: 321 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:22:35 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:22:35 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:22:35 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:22:35.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:22:36 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:22:36 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:22:36 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:22:36.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:22:37 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:22:37 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:22:37 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:22:37.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:22:38 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:22:38 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:22:38 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:22:38.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:22:38 compute-1 ceph-mon[81775]: pgmap v929: 321 pgs: 321 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 170 B/s rd, 0 op/s
Jan 20 14:22:39 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e134 e134: 3 total, 3 up, 3 in
Jan 20 14:22:39 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:22:39 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:22:39 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:22:39 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:22:39.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:22:40 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:22:40 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:22:40 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:22:40.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:22:40 compute-1 ceph-mon[81775]: pgmap v930: 321 pgs: 321 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 5.0 KiB/s rd, 5 op/s
Jan 20 14:22:40 compute-1 ceph-mon[81775]: osdmap e134: 3 total, 3 up, 3 in
Jan 20 14:22:40 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e135 e135: 3 total, 3 up, 3 in
Jan 20 14:22:41 compute-1 ceph-mon[81775]: osdmap e135: 3 total, 3 up, 3 in
Jan 20 14:22:41 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:22:41 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:22:41 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:22:41.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:22:42 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:22:42 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:22:42 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:22:42.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:22:42 compute-1 ceph-mon[81775]: pgmap v933: 321 pgs: 321 active+clean; 57 MiB data, 201 MiB used, 21 GiB / 21 GiB avail; 2.6 MiB/s rd, 901 KiB/s wr, 47 op/s
Jan 20 14:22:42 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/2092988802' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:22:42 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/3988409306' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:22:43 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:22:43 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:22:43 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:22:43.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:22:44 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:22:44 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:22:44 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:22:44.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:22:44 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:22:44 compute-1 ceph-mon[81775]: pgmap v934: 321 pgs: 321 active+clean; 77 MiB data, 208 MiB used, 21 GiB / 21 GiB avail; 2.6 MiB/s rd, 1.8 MiB/s wr, 49 op/s
Jan 20 14:22:45 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:22:45 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:22:45 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:22:45.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:22:45 compute-1 ceph-mon[81775]: pgmap v935: 321 pgs: 321 active+clean; 88 MiB data, 215 MiB used, 21 GiB / 21 GiB avail; 2.6 MiB/s rd, 2.7 MiB/s wr, 51 op/s
Jan 20 14:22:46 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:22:46 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:22:46 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:22:46.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:22:47 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e136 e136: 3 total, 3 up, 3 in
Jan 20 14:22:47 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:22:47 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:22:47 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:22:47.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:22:48 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:22:48 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:22:48 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:22:48.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:22:48 compute-1 ceph-mon[81775]: pgmap v936: 321 pgs: 321 active+clean; 88 MiB data, 215 MiB used, 21 GiB / 21 GiB avail; 4.5 MiB/s rd, 2.7 MiB/s wr, 119 op/s
Jan 20 14:22:48 compute-1 ceph-mon[81775]: osdmap e136: 3 total, 3 up, 3 in
Jan 20 14:22:49 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:22:49 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:22:49 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:22:49 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:22:49.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:22:50 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:22:50 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:22:50 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:22:50.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:22:50 compute-1 nova_compute[225855]: 2026-01-20 14:22:50.133 225859 DEBUG oslo_concurrency.lockutils [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Acquiring lock "21e70820-70b1-4bb9-bb8d-62fb69c2298b" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:22:50 compute-1 nova_compute[225855]: 2026-01-20 14:22:50.134 225859 DEBUG oslo_concurrency.lockutils [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Lock "21e70820-70b1-4bb9-bb8d-62fb69c2298b" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:22:50 compute-1 nova_compute[225855]: 2026-01-20 14:22:50.152 225859 DEBUG nova.compute.manager [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] [instance: 21e70820-70b1-4bb9-bb8d-62fb69c2298b] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 20 14:22:50 compute-1 nova_compute[225855]: 2026-01-20 14:22:50.251 225859 DEBUG oslo_concurrency.lockutils [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:22:50 compute-1 nova_compute[225855]: 2026-01-20 14:22:50.252 225859 DEBUG oslo_concurrency.lockutils [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:22:50 compute-1 nova_compute[225855]: 2026-01-20 14:22:50.258 225859 DEBUG nova.virt.hardware [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 20 14:22:50 compute-1 nova_compute[225855]: 2026-01-20 14:22:50.258 225859 INFO nova.compute.claims [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] [instance: 21e70820-70b1-4bb9-bb8d-62fb69c2298b] Claim successful on node compute-1.ctlplane.example.com
Jan 20 14:22:50 compute-1 nova_compute[225855]: 2026-01-20 14:22:50.407 225859 DEBUG oslo_concurrency.processutils [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:22:50 compute-1 ceph-mon[81775]: pgmap v938: 321 pgs: 321 active+clean; 88 MiB data, 215 MiB used, 21 GiB / 21 GiB avail; 4.9 MiB/s rd, 2.4 MiB/s wr, 136 op/s
Jan 20 14:22:50 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:22:50 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2527053545' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:22:50 compute-1 nova_compute[225855]: 2026-01-20 14:22:50.799 225859 DEBUG oslo_concurrency.processutils [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.392s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:22:50 compute-1 nova_compute[225855]: 2026-01-20 14:22:50.810 225859 DEBUG nova.compute.provider_tree [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:22:50 compute-1 nova_compute[225855]: 2026-01-20 14:22:50.833 225859 DEBUG nova.scheduler.client.report [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:22:50 compute-1 nova_compute[225855]: 2026-01-20 14:22:50.880 225859 DEBUG oslo_concurrency.lockutils [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.628s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:22:50 compute-1 nova_compute[225855]: 2026-01-20 14:22:50.881 225859 DEBUG nova.compute.manager [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] [instance: 21e70820-70b1-4bb9-bb8d-62fb69c2298b] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 20 14:22:50 compute-1 nova_compute[225855]: 2026-01-20 14:22:50.947 225859 DEBUG nova.compute.manager [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] [instance: 21e70820-70b1-4bb9-bb8d-62fb69c2298b] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 20 14:22:50 compute-1 nova_compute[225855]: 2026-01-20 14:22:50.948 225859 DEBUG nova.network.neutron [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] [instance: 21e70820-70b1-4bb9-bb8d-62fb69c2298b] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 20 14:22:50 compute-1 nova_compute[225855]: 2026-01-20 14:22:50.977 225859 INFO nova.virt.libvirt.driver [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] [instance: 21e70820-70b1-4bb9-bb8d-62fb69c2298b] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 20 14:22:50 compute-1 nova_compute[225855]: 2026-01-20 14:22:50.995 225859 DEBUG nova.compute.manager [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] [instance: 21e70820-70b1-4bb9-bb8d-62fb69c2298b] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 20 14:22:51 compute-1 nova_compute[225855]: 2026-01-20 14:22:51.121 225859 DEBUG nova.compute.manager [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] [instance: 21e70820-70b1-4bb9-bb8d-62fb69c2298b] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 20 14:22:51 compute-1 nova_compute[225855]: 2026-01-20 14:22:51.122 225859 DEBUG nova.virt.libvirt.driver [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] [instance: 21e70820-70b1-4bb9-bb8d-62fb69c2298b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 20 14:22:51 compute-1 nova_compute[225855]: 2026-01-20 14:22:51.123 225859 INFO nova.virt.libvirt.driver [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] [instance: 21e70820-70b1-4bb9-bb8d-62fb69c2298b] Creating image(s)
Jan 20 14:22:51 compute-1 nova_compute[225855]: 2026-01-20 14:22:51.162 225859 DEBUG nova.storage.rbd_utils [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] rbd image 21e70820-70b1-4bb9-bb8d-62fb69c2298b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:22:51 compute-1 nova_compute[225855]: 2026-01-20 14:22:51.205 225859 DEBUG nova.storage.rbd_utils [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] rbd image 21e70820-70b1-4bb9-bb8d-62fb69c2298b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:22:51 compute-1 nova_compute[225855]: 2026-01-20 14:22:51.242 225859 DEBUG nova.storage.rbd_utils [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] rbd image 21e70820-70b1-4bb9-bb8d-62fb69c2298b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:22:51 compute-1 nova_compute[225855]: 2026-01-20 14:22:51.245 225859 DEBUG oslo_concurrency.lockutils [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Acquiring lock "82d5c1918fd7c974214c7a48c1793a7a82560462" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:22:51 compute-1 nova_compute[225855]: 2026-01-20 14:22:51.246 225859 DEBUG oslo_concurrency.lockutils [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:22:51 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/2527053545' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:22:51 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/273476056' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:22:51 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/3645120924' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:22:51 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:22:51 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:22:51 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:22:51.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:22:51 compute-1 nova_compute[225855]: 2026-01-20 14:22:51.968 225859 DEBUG nova.virt.libvirt.imagebackend [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Image locations are: [{'url': 'rbd://e399cf45-e6b6-5393-99f1-75c601d3f188/images/a32b3e07-16d8-46fd-9a7b-c242c432fcf9/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://e399cf45-e6b6-5393-99f1-75c601d3f188/images/a32b3e07-16d8-46fd-9a7b-c242c432fcf9/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Jan 20 14:22:52 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:22:52 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:22:52 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:22:52.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:22:52 compute-1 podman[228987]: 2026-01-20 14:22:52.066071499 +0000 UTC m=+0.108984685 container health_status 72be95c12e041eb2cd1a16ad70dfa1391d9f03a5f3180a614ead9459dcfcd46f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller)
Jan 20 14:22:52 compute-1 nova_compute[225855]: 2026-01-20 14:22:52.509 225859 DEBUG nova.network.neutron [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] [instance: 21e70820-70b1-4bb9-bb8d-62fb69c2298b] Automatically allocating a network for project e024eef627014f829fa6e45ffe36c281. _auto_allocate_network /usr/lib/python3.9/site-packages/nova/network/neutron.py:2460
Jan 20 14:22:52 compute-1 ceph-mon[81775]: pgmap v939: 321 pgs: 321 active+clean; 88 MiB data, 215 MiB used, 21 GiB / 21 GiB avail; 2.3 MiB/s rd, 1.4 MiB/s wr, 91 op/s
Jan 20 14:22:52 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/1499628460' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:22:52 compute-1 ceph-mon[81775]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #43. Immutable memtables: 0.
Jan 20 14:22:52 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:22:52.775945) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 20 14:22:52 compute-1 ceph-mon[81775]: rocksdb: [db/flush_job.cc:856] [default] [JOB 23] Flushing memtable with next log file: 43
Jan 20 14:22:52 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768918972776007, "job": 23, "event": "flush_started", "num_memtables": 1, "num_entries": 617, "num_deletes": 251, "total_data_size": 972845, "memory_usage": 984256, "flush_reason": "Manual Compaction"}
Jan 20 14:22:52 compute-1 ceph-mon[81775]: rocksdb: [db/flush_job.cc:885] [default] [JOB 23] Level-0 flush table #44: started
Jan 20 14:22:52 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768918972791679, "cf_name": "default", "job": 23, "event": "table_file_creation", "file_number": 44, "file_size": 641457, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 22075, "largest_seqno": 22687, "table_properties": {"data_size": 638367, "index_size": 1062, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 965, "raw_key_size": 7302, "raw_average_key_size": 19, "raw_value_size": 632061, "raw_average_value_size": 1658, "num_data_blocks": 48, "num_entries": 381, "num_filter_entries": 381, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768918938, "oldest_key_time": 1768918938, "file_creation_time": 1768918972, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1539d774-8a6f-4e48-b253-137c44586344", "db_session_id": "LFF7G2OZDOU7TKQ8MKAH", "orig_file_number": 44, "seqno_to_time_mapping": "N/A"}}
Jan 20 14:22:52 compute-1 ceph-mon[81775]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 23] Flush lasted 15785 microseconds, and 4780 cpu microseconds.
Jan 20 14:22:52 compute-1 ceph-mon[81775]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 14:22:52 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:22:52.791732) [db/flush_job.cc:967] [default] [JOB 23] Level-0 flush table #44: 641457 bytes OK
Jan 20 14:22:52 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:22:52.791752) [db/memtable_list.cc:519] [default] Level-0 commit table #44 started
Jan 20 14:22:52 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:22:52.885066) [db/memtable_list.cc:722] [default] Level-0 commit table #44: memtable #1 done
Jan 20 14:22:52 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:22:52.885114) EVENT_LOG_v1 {"time_micros": 1768918972885103, "job": 23, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 20 14:22:52 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:22:52.885180) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 20 14:22:52 compute-1 ceph-mon[81775]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 23] Try to delete WAL files size 969403, prev total WAL file size 969403, number of live WAL files 2.
Jan 20 14:22:52 compute-1 ceph-mon[81775]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000040.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 14:22:52 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:22:52.886904) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031353036' seq:72057594037927935, type:22 .. '7061786F730031373538' seq:0, type:0; will stop at (end)
Jan 20 14:22:52 compute-1 ceph-mon[81775]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 24] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 20 14:22:52 compute-1 ceph-mon[81775]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 23 Base level 0, inputs: [44(626KB)], [42(8591KB)]
Jan 20 14:22:52 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768918972886961, "job": 24, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [44], "files_L6": [42], "score": -1, "input_data_size": 9439153, "oldest_snapshot_seqno": -1}
Jan 20 14:22:52 compute-1 ceph-mon[81775]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 24] Generated table #45: 4700 keys, 7403706 bytes, temperature: kUnknown
Jan 20 14:22:52 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768918972928112, "cf_name": "default", "job": 24, "event": "table_file_creation", "file_number": 45, "file_size": 7403706, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7373361, "index_size": 17499, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11781, "raw_key_size": 117865, "raw_average_key_size": 25, "raw_value_size": 7289183, "raw_average_value_size": 1550, "num_data_blocks": 719, "num_entries": 4700, "num_filter_entries": 4700, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768917474, "oldest_key_time": 0, "file_creation_time": 1768918972, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1539d774-8a6f-4e48-b253-137c44586344", "db_session_id": "LFF7G2OZDOU7TKQ8MKAH", "orig_file_number": 45, "seqno_to_time_mapping": "N/A"}}
Jan 20 14:22:52 compute-1 ceph-mon[81775]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 14:22:52 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:22:52.928390) [db/compaction/compaction_job.cc:1663] [default] [JOB 24] Compacted 1@0 + 1@6 files to L6 => 7403706 bytes
Jan 20 14:22:52 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:22:52.929759) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 228.9 rd, 179.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.6, 8.4 +0.0 blob) out(7.1 +0.0 blob), read-write-amplify(26.3) write-amplify(11.5) OK, records in: 5214, records dropped: 514 output_compression: NoCompression
Jan 20 14:22:52 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:22:52.929788) EVENT_LOG_v1 {"time_micros": 1768918972929775, "job": 24, "event": "compaction_finished", "compaction_time_micros": 41235, "compaction_time_cpu_micros": 15040, "output_level": 6, "num_output_files": 1, "total_output_size": 7403706, "num_input_records": 5214, "num_output_records": 4700, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 20 14:22:52 compute-1 ceph-mon[81775]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000044.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 14:22:52 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768918972930153, "job": 24, "event": "table_file_deletion", "file_number": 44}
Jan 20 14:22:52 compute-1 ceph-mon[81775]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000042.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 14:22:52 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768918972932839, "job": 24, "event": "table_file_deletion", "file_number": 42}
Jan 20 14:22:52 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:22:52.886794) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:22:52 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:22:52.932950) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:22:52 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:22:52.932957) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:22:52 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:22:52.932960) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:22:52 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:22:52.932963) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:22:52 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:22:52.932966) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:22:53 compute-1 nova_compute[225855]: 2026-01-20 14:22:53.367 225859 DEBUG oslo_concurrency.processutils [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:22:53 compute-1 nova_compute[225855]: 2026-01-20 14:22:53.418 225859 DEBUG oslo_concurrency.processutils [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462.part --force-share --output=json" returned: 0 in 0.051s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:22:53 compute-1 nova_compute[225855]: 2026-01-20 14:22:53.420 225859 DEBUG nova.virt.images [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] a32b3e07-16d8-46fd-9a7b-c242c432fcf9 was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242
Jan 20 14:22:53 compute-1 nova_compute[225855]: 2026-01-20 14:22:53.422 225859 DEBUG nova.privsep.utils [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Jan 20 14:22:53 compute-1 nova_compute[225855]: 2026-01-20 14:22:53.422 225859 DEBUG oslo_concurrency.processutils [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462.part /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:22:53 compute-1 nova_compute[225855]: 2026-01-20 14:22:53.582 225859 DEBUG oslo_concurrency.processutils [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462.part /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462.converted" returned: 0 in 0.160s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:22:53 compute-1 nova_compute[225855]: 2026-01-20 14:22:53.587 225859 DEBUG oslo_concurrency.processutils [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:22:53 compute-1 nova_compute[225855]: 2026-01-20 14:22:53.667 225859 DEBUG oslo_concurrency.processutils [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462.converted --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:22:53 compute-1 nova_compute[225855]: 2026-01-20 14:22:53.669 225859 DEBUG oslo_concurrency.lockutils [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 2.424s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:22:53 compute-1 nova_compute[225855]: 2026-01-20 14:22:53.710 225859 DEBUG nova.storage.rbd_utils [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] rbd image 21e70820-70b1-4bb9-bb8d-62fb69c2298b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:22:53 compute-1 nova_compute[225855]: 2026-01-20 14:22:53.716 225859 DEBUG oslo_concurrency.processutils [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 21e70820-70b1-4bb9-bb8d-62fb69c2298b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:22:53 compute-1 ceph-mon[81775]: pgmap v940: 321 pgs: 321 active+clean; 88 MiB data, 215 MiB used, 21 GiB / 21 GiB avail; 2.3 MiB/s rd, 727 KiB/s wr, 89 op/s
Jan 20 14:22:53 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:22:53 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:22:53 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:22:53.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:22:54 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:22:54 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:22:54 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:22:54.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:22:54 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:22:54 compute-1 nova_compute[225855]: 2026-01-20 14:22:54.830 225859 DEBUG oslo_concurrency.processutils [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 21e70820-70b1-4bb9-bb8d-62fb69c2298b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.114s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:22:54 compute-1 nova_compute[225855]: 2026-01-20 14:22:54.913 225859 DEBUG nova.storage.rbd_utils [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] resizing rbd image 21e70820-70b1-4bb9-bb8d-62fb69c2298b_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 20 14:22:55 compute-1 nova_compute[225855]: 2026-01-20 14:22:55.229 225859 DEBUG nova.objects.instance [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Lazy-loading 'migration_context' on Instance uuid 21e70820-70b1-4bb9-bb8d-62fb69c2298b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:22:55 compute-1 nova_compute[225855]: 2026-01-20 14:22:55.247 225859 DEBUG nova.virt.libvirt.driver [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] [instance: 21e70820-70b1-4bb9-bb8d-62fb69c2298b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 20 14:22:55 compute-1 nova_compute[225855]: 2026-01-20 14:22:55.247 225859 DEBUG nova.virt.libvirt.driver [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] [instance: 21e70820-70b1-4bb9-bb8d-62fb69c2298b] Ensure instance console log exists: /var/lib/nova/instances/21e70820-70b1-4bb9-bb8d-62fb69c2298b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 20 14:22:55 compute-1 nova_compute[225855]: 2026-01-20 14:22:55.248 225859 DEBUG oslo_concurrency.lockutils [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:22:55 compute-1 nova_compute[225855]: 2026-01-20 14:22:55.249 225859 DEBUG oslo_concurrency.lockutils [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:22:55 compute-1 nova_compute[225855]: 2026-01-20 14:22:55.249 225859 DEBUG oslo_concurrency.lockutils [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:22:55 compute-1 sudo[229141]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:22:55 compute-1 sudo[229141]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:22:55 compute-1 sudo[229141]: pam_unix(sudo:session): session closed for user root
Jan 20 14:22:55 compute-1 sudo[229166]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:22:55 compute-1 sudo[229166]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:22:55 compute-1 sudo[229166]: pam_unix(sudo:session): session closed for user root
Jan 20 14:22:55 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:22:55 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:22:55 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:22:55.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:22:56 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:22:56 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:22:56 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:22:56.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:22:56 compute-1 ceph-mon[81775]: pgmap v941: 321 pgs: 321 active+clean; 88 MiB data, 215 MiB used, 21 GiB / 21 GiB avail; 2.3 MiB/s rd, 15 KiB/s wr, 101 op/s
Jan 20 14:22:57 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:22:57 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:22:57 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:22:57.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:22:57 compute-1 sudo[229192]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:22:57 compute-1 sudo[229192]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:22:57 compute-1 sudo[229192]: pam_unix(sudo:session): session closed for user root
Jan 20 14:22:58 compute-1 sudo[229217]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:22:58 compute-1 sudo[229217]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:22:58 compute-1 sudo[229217]: pam_unix(sudo:session): session closed for user root
Jan 20 14:22:58 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:22:58 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:22:58 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:22:58.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:22:58 compute-1 sudo[229242]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:22:58 compute-1 sudo[229242]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:22:58 compute-1 sudo[229242]: pam_unix(sudo:session): session closed for user root
Jan 20 14:22:58 compute-1 sudo[229267]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 20 14:22:58 compute-1 sudo[229267]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:22:58 compute-1 ceph-mon[81775]: pgmap v942: 321 pgs: 321 active+clean; 217 MiB data, 266 MiB used, 21 GiB / 21 GiB avail; 5.1 MiB/s rd, 6.6 MiB/s wr, 150 op/s
Jan 20 14:22:58 compute-1 sudo[229267]: pam_unix(sudo:session): session closed for user root
Jan 20 14:22:59 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:22:59 compute-1 ceph-mon[81775]: pgmap v943: 321 pgs: 321 active+clean; 241 MiB data, 281 MiB used, 21 GiB / 21 GiB avail; 4.3 MiB/s rd, 6.7 MiB/s wr, 153 op/s
Jan 20 14:22:59 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:22:59 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 14:22:59 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:22:59 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 14:22:59 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 14:22:59 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:22:59 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:22:59 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:22:59 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:22:59.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:23:00 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:23:00 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:23:00 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:23:00.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:23:01 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:23:01 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:23:01 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:23:01.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:23:02 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:23:02 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:23:02 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:23:02.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:23:02 compute-1 ceph-mon[81775]: pgmap v944: 321 pgs: 321 active+clean; 249 MiB data, 289 MiB used, 21 GiB / 21 GiB avail; 3.7 MiB/s rd, 7.4 MiB/s wr, 145 op/s
Jan 20 14:23:03 compute-1 podman[229323]: 2026-01-20 14:23:03.04679026 +0000 UTC m=+0.084636516 container health_status 533cd8e0c3b8b7f910f15cef49e7f879ab46c12ce646a35628c15f13725eb500 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 20 14:23:03 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:23:03 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:23:03 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:23:03.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:23:03 compute-1 ceph-mon[81775]: pgmap v945: 321 pgs: 321 active+clean; 257 MiB data, 322 MiB used, 21 GiB / 21 GiB avail; 3.8 MiB/s rd, 7.4 MiB/s wr, 155 op/s
Jan 20 14:23:04 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:23:04 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:23:04 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:23:04.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:23:04 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:23:05 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:23:05 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:23:05 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:23:05.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:23:06 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:23:06 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:23:06 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:23:06.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:23:06 compute-1 ceph-mon[81775]: pgmap v946: 321 pgs: 321 active+clean; 259 MiB data, 322 MiB used, 21 GiB / 21 GiB avail; 3.8 MiB/s rd, 7.4 MiB/s wr, 157 op/s
Jan 20 14:23:07 compute-1 nova_compute[225855]: 2026-01-20 14:23:07.340 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:23:07 compute-1 nova_compute[225855]: 2026-01-20 14:23:07.340 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 20 14:23:07 compute-1 nova_compute[225855]: 2026-01-20 14:23:07.358 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 20 14:23:07 compute-1 nova_compute[225855]: 2026-01-20 14:23:07.360 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:23:07 compute-1 nova_compute[225855]: 2026-01-20 14:23:07.360 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 20 14:23:07 compute-1 nova_compute[225855]: 2026-01-20 14:23:07.386 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:23:07 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:23:07 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:23:07 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:23:07.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:23:07 compute-1 ceph-mon[81775]: pgmap v947: 321 pgs: 321 active+clean; 259 MiB data, 322 MiB used, 21 GiB / 21 GiB avail; 3.8 MiB/s rd, 7.5 MiB/s wr, 146 op/s
Jan 20 14:23:08 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:23:08 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:23:08 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:23:08.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:23:09 compute-1 nova_compute[225855]: 2026-01-20 14:23:09.499 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:23:09 compute-1 nova_compute[225855]: 2026-01-20 14:23:09.499 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:23:09 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:23:09 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:23:09 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:23:09 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:23:09.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:23:10 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:23:10 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:23:10 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:23:10.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:23:10 compute-1 nova_compute[225855]: 2026-01-20 14:23:10.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:23:10 compute-1 nova_compute[225855]: 2026-01-20 14:23:10.340 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:23:10 compute-1 nova_compute[225855]: 2026-01-20 14:23:10.340 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:23:10 compute-1 nova_compute[225855]: 2026-01-20 14:23:10.388 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:23:10 compute-1 nova_compute[225855]: 2026-01-20 14:23:10.388 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:23:10 compute-1 nova_compute[225855]: 2026-01-20 14:23:10.388 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:23:10 compute-1 nova_compute[225855]: 2026-01-20 14:23:10.388 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 14:23:10 compute-1 nova_compute[225855]: 2026-01-20 14:23:10.389 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:23:10 compute-1 ceph-mon[81775]: pgmap v948: 321 pgs: 321 active+clean; 259 MiB data, 322 MiB used, 21 GiB / 21 GiB avail; 182 KiB/s rd, 2.0 MiB/s wr, 54 op/s
Jan 20 14:23:10 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:23:10 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/781625582' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:23:10 compute-1 nova_compute[225855]: 2026-01-20 14:23:10.851 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:23:11 compute-1 nova_compute[225855]: 2026-01-20 14:23:11.012 225859 WARNING nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 14:23:11 compute-1 nova_compute[225855]: 2026-01-20 14:23:11.013 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=5223MB free_disk=20.880550384521484GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 14:23:11 compute-1 nova_compute[225855]: 2026-01-20 14:23:11.013 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:23:11 compute-1 nova_compute[225855]: 2026-01-20 14:23:11.013 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:23:11 compute-1 nova_compute[225855]: 2026-01-20 14:23:11.288 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Instance 21e70820-70b1-4bb9-bb8d-62fb69c2298b actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 20 14:23:11 compute-1 nova_compute[225855]: 2026-01-20 14:23:11.289 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 14:23:11 compute-1 nova_compute[225855]: 2026-01-20 14:23:11.289 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 14:23:11 compute-1 nova_compute[225855]: 2026-01-20 14:23:11.486 225859 DEBUG nova.scheduler.client.report [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Refreshing inventories for resource provider bbb02880-a710-4ac1-8b2c-5c09765848d1 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 20 14:23:11 compute-1 nova_compute[225855]: 2026-01-20 14:23:11.702 225859 DEBUG nova.scheduler.client.report [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Updating ProviderTree inventory for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 20 14:23:11 compute-1 nova_compute[225855]: 2026-01-20 14:23:11.703 225859 DEBUG nova.compute.provider_tree [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Updating inventory in ProviderTree for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 20 14:23:11 compute-1 nova_compute[225855]: 2026-01-20 14:23:11.721 225859 DEBUG nova.scheduler.client.report [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Refreshing aggregate associations for resource provider bbb02880-a710-4ac1-8b2c-5c09765848d1, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 20 14:23:11 compute-1 nova_compute[225855]: 2026-01-20 14:23:11.766 225859 DEBUG nova.scheduler.client.report [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Refreshing trait associations for resource provider bbb02880-a710-4ac1-8b2c-5c09765848d1, traits: COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_STORAGE_BUS_SATA,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_TRUSTED_CERTS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSSE3,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_VOLUME_EXTEND,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_RESCUE_BFV,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_USB,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE42,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SOCKET_PCI_NUMA_AFFINITY _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 20 14:23:11 compute-1 nova_compute[225855]: 2026-01-20 14:23:11.802 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:23:11 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:23:11 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:23:11 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:23:11.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:23:12 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:23:12 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:23:12 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:23:12.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:23:12 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:23:12 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3176499354' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:23:12 compute-1 nova_compute[225855]: 2026-01-20 14:23:12.204 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.402s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:23:12 compute-1 nova_compute[225855]: 2026-01-20 14:23:12.211 225859 DEBUG nova.compute.provider_tree [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Updating inventory in ProviderTree for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 20, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 20 14:23:12 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/781625582' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:23:12 compute-1 ceph-mon[81775]: pgmap v949: 321 pgs: 321 active+clean; 259 MiB data, 322 MiB used, 21 GiB / 21 GiB avail; 77 KiB/s rd, 773 KiB/s wr, 26 op/s
Jan 20 14:23:12 compute-1 nova_compute[225855]: 2026-01-20 14:23:12.542 225859 DEBUG nova.scheduler.client.report [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Updated inventory for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 with generation 7 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 20, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957
Jan 20 14:23:12 compute-1 nova_compute[225855]: 2026-01-20 14:23:12.542 225859 DEBUG nova.compute.provider_tree [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Updating resource provider bbb02880-a710-4ac1-8b2c-5c09765848d1 generation from 7 to 8 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Jan 20 14:23:12 compute-1 nova_compute[225855]: 2026-01-20 14:23:12.543 225859 DEBUG nova.compute.provider_tree [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Updating inventory in ProviderTree for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 with inventory: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 20 14:23:12 compute-1 nova_compute[225855]: 2026-01-20 14:23:12.741 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 14:23:12 compute-1 nova_compute[225855]: 2026-01-20 14:23:12.741 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.728s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:23:13 compute-1 sudo[229393]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:23:13 compute-1 sudo[229393]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:23:13 compute-1 sudo[229393]: pam_unix(sudo:session): session closed for user root
Jan 20 14:23:13 compute-1 sudo[229418]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 14:23:13 compute-1 sudo[229418]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:23:13 compute-1 sudo[229418]: pam_unix(sudo:session): session closed for user root
Jan 20 14:23:13 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/3176499354' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:23:13 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:23:13 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:23:13 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 20 14:23:13 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1058192503' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 14:23:13 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 20 14:23:13 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1058192503' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 14:23:13 compute-1 nova_compute[225855]: 2026-01-20 14:23:13.743 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:23:13 compute-1 nova_compute[225855]: 2026-01-20 14:23:13.744 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 14:23:13 compute-1 nova_compute[225855]: 2026-01-20 14:23:13.744 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 20 14:23:13 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:23:13 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:23:13 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:23:13.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:23:13 compute-1 nova_compute[225855]: 2026-01-20 14:23:13.922 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] [instance: 21e70820-70b1-4bb9-bb8d-62fb69c2298b] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Jan 20 14:23:13 compute-1 nova_compute[225855]: 2026-01-20 14:23:13.923 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 20 14:23:13 compute-1 nova_compute[225855]: 2026-01-20 14:23:13.923 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:23:13 compute-1 nova_compute[225855]: 2026-01-20 14:23:13.924 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:23:13 compute-1 nova_compute[225855]: 2026-01-20 14:23:13.924 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 14:23:14 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:23:14 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:23:14 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:23:14.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:23:14 compute-1 nova_compute[225855]: 2026-01-20 14:23:14.515 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:23:14 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:23:14 compute-1 ceph-mon[81775]: pgmap v950: 321 pgs: 321 active+clean; 259 MiB data, 322 MiB used, 21 GiB / 21 GiB avail; 66 KiB/s rd, 71 KiB/s wr, 12 op/s
Jan 20 14:23:14 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/1058192503' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 14:23:14 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/1058192503' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 14:23:14 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/2885088964' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:23:15 compute-1 nova_compute[225855]: 2026-01-20 14:23:15.842 225859 DEBUG nova.network.neutron [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] [instance: 21e70820-70b1-4bb9-bb8d-62fb69c2298b] Automatically allocated network: {'id': 'abfbbc51-530d-4964-87bc-9fe4ef7eea76', 'name': 'auto_allocated_network', 'tenant_id': 'e024eef627014f829fa6e45ffe36c281', 'admin_state_up': True, 'mtu': 1442, 'status': 'ACTIVE', 'subnets': ['a221e4ee-a32e-43de-aa40-0d06401d1ef3', 'e70603a2-876a-4335-85f1-9e391d1bc039'], 'shared': False, 'availability_zone_hints': [], 'availability_zones': [], 'ipv4_address_scope': None, 'ipv6_address_scope': None, 'router:external': False, 'description': '', 'qos_policy_id': None, 'port_security_enabled': True, 'dns_domain': '', 'l2_adjacency': True, 'tags': [], 'created_at': '2026-01-20T14:22:53Z', 'updated_at': '2026-01-20T14:23:15Z', 'revision_number': 4, 'project_id': 'e024eef627014f829fa6e45ffe36c281'} _auto_allocate_network /usr/lib/python3.9/site-packages/nova/network/neutron.py:2478
Jan 20 14:23:15 compute-1 nova_compute[225855]: 2026-01-20 14:23:15.853 225859 WARNING oslo_policy.policy [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.
Jan 20 14:23:15 compute-1 nova_compute[225855]: 2026-01-20 14:23:15.853 225859 WARNING oslo_policy.policy [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.
Jan 20 14:23:15 compute-1 nova_compute[225855]: 2026-01-20 14:23:15.856 225859 DEBUG nova.policy [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '918f290d4c414b71807eacf0b27ad165', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'e024eef627014f829fa6e45ffe36c281', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 20 14:23:15 compute-1 sudo[229445]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:23:15 compute-1 sudo[229445]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:23:15 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:23:15 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:23:15 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:23:15.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:23:15 compute-1 sudo[229445]: pam_unix(sudo:session): session closed for user root
Jan 20 14:23:15 compute-1 sudo[229470]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:23:15 compute-1 sudo[229470]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:23:15 compute-1 sudo[229470]: pam_unix(sudo:session): session closed for user root
Jan 20 14:23:16 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/3114115571' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:23:16 compute-1 ceph-mon[81775]: pgmap v951: 321 pgs: 321 active+clean; 259 MiB data, 322 MiB used, 21 GiB / 21 GiB avail; 6.3 KiB/s rd, 21 KiB/s wr, 2 op/s
Jan 20 14:23:16 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/3120421228' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:23:16 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:23:16 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:23:16 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:23:16.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:23:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:23:16.383 140354 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:23:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:23:16.383 140354 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:23:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:23:16.384 140354 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:23:16 compute-1 nova_compute[225855]: 2026-01-20 14:23:16.922 225859 DEBUG nova.network.neutron [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] [instance: 21e70820-70b1-4bb9-bb8d-62fb69c2298b] Successfully created port: 68f29a43-1cc4-44f2-953b-48d1d2795097 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 20 14:23:17 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/3453826868' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:23:17 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:23:17 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:23:17 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:23:17.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:23:18 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:23:18 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:23:18 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:23:18.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:23:18 compute-1 ceph-mon[81775]: pgmap v952: 321 pgs: 321 active+clean; 259 MiB data, 322 MiB used, 21 GiB / 21 GiB avail; 11 KiB/s wr, 0 op/s
Jan 20 14:23:19 compute-1 nova_compute[225855]: 2026-01-20 14:23:19.040 225859 DEBUG nova.network.neutron [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] [instance: 21e70820-70b1-4bb9-bb8d-62fb69c2298b] Successfully updated port: 68f29a43-1cc4-44f2-953b-48d1d2795097 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 20 14:23:19 compute-1 nova_compute[225855]: 2026-01-20 14:23:19.057 225859 DEBUG oslo_concurrency.lockutils [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Acquiring lock "refresh_cache-21e70820-70b1-4bb9-bb8d-62fb69c2298b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:23:19 compute-1 nova_compute[225855]: 2026-01-20 14:23:19.058 225859 DEBUG oslo_concurrency.lockutils [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Acquired lock "refresh_cache-21e70820-70b1-4bb9-bb8d-62fb69c2298b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:23:19 compute-1 nova_compute[225855]: 2026-01-20 14:23:19.058 225859 DEBUG nova.network.neutron [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] [instance: 21e70820-70b1-4bb9-bb8d-62fb69c2298b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 14:23:19 compute-1 nova_compute[225855]: 2026-01-20 14:23:19.323 225859 DEBUG nova.network.neutron [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] [instance: 21e70820-70b1-4bb9-bb8d-62fb69c2298b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 20 14:23:19 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:23:19 compute-1 nova_compute[225855]: 2026-01-20 14:23:19.862 225859 DEBUG nova.compute.manager [req-17fdf6fa-cd1b-416f-8f3e-77e08611db88 req-da2202a5-1fc1-4510-91c1-d190c3b45ce0 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 21e70820-70b1-4bb9-bb8d-62fb69c2298b] Received event network-changed-68f29a43-1cc4-44f2-953b-48d1d2795097 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:23:19 compute-1 nova_compute[225855]: 2026-01-20 14:23:19.863 225859 DEBUG nova.compute.manager [req-17fdf6fa-cd1b-416f-8f3e-77e08611db88 req-da2202a5-1fc1-4510-91c1-d190c3b45ce0 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 21e70820-70b1-4bb9-bb8d-62fb69c2298b] Refreshing instance network info cache due to event network-changed-68f29a43-1cc4-44f2-953b-48d1d2795097. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 14:23:19 compute-1 nova_compute[225855]: 2026-01-20 14:23:19.863 225859 DEBUG oslo_concurrency.lockutils [req-17fdf6fa-cd1b-416f-8f3e-77e08611db88 req-da2202a5-1fc1-4510-91c1-d190c3b45ce0 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-21e70820-70b1-4bb9-bb8d-62fb69c2298b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:23:19 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:23:19 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:23:19 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:23:19.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:23:19 compute-1 ceph-mon[81775]: pgmap v953: 321 pgs: 321 active+clean; 259 MiB data, 322 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:23:20 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:23:20 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:23:20 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:23:20.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:23:20 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e137 e137: 3 total, 3 up, 3 in
Jan 20 14:23:21 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:23:21 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:23:21 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:23:21.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:23:22 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e138 e138: 3 total, 3 up, 3 in
Jan 20 14:23:22 compute-1 ceph-mon[81775]: osdmap e137: 3 total, 3 up, 3 in
Jan 20 14:23:22 compute-1 ceph-mon[81775]: pgmap v955: 321 pgs: 321 active+clean; 259 MiB data, 322 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:23:22 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:23:22 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:23:22 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:23:22.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:23:23 compute-1 podman[229498]: 2026-01-20 14:23:23.030051291 +0000 UTC m=+0.078573506 container health_status 72be95c12e041eb2cd1a16ad70dfa1391d9f03a5f3180a614ead9459dcfcd46f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Jan 20 14:23:23 compute-1 nova_compute[225855]: 2026-01-20 14:23:23.049 225859 DEBUG nova.network.neutron [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] [instance: 21e70820-70b1-4bb9-bb8d-62fb69c2298b] Updating instance_info_cache with network_info: [{"id": "68f29a43-1cc4-44f2-953b-48d1d2795097", "address": "fa:16:3e:5f:bb:6a", "network": {"id": "abfbbc51-530d-4964-87bc-9fe4ef7eea76", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "fdfe:381f:8400::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400::3b3", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}, {"cidr": "10.1.0.0/26", "dns": [], "gateway": {"address": "10.1.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.38", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e024eef627014f829fa6e45ffe36c281", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68f29a43-1c", "ovs_interfaceid": "68f29a43-1cc4-44f2-953b-48d1d2795097", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:23:23 compute-1 ceph-mon[81775]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #46. Immutable memtables: 0.
Jan 20 14:23:23 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:23:23.059558) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 20 14:23:23 compute-1 ceph-mon[81775]: rocksdb: [db/flush_job.cc:856] [default] [JOB 25] Flushing memtable with next log file: 46
Jan 20 14:23:23 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768919003059581, "job": 25, "event": "flush_started", "num_memtables": 1, "num_entries": 581, "num_deletes": 262, "total_data_size": 884949, "memory_usage": 895992, "flush_reason": "Manual Compaction"}
Jan 20 14:23:23 compute-1 ceph-mon[81775]: rocksdb: [db/flush_job.cc:885] [default] [JOB 25] Level-0 flush table #47: started
Jan 20 14:23:23 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768919003065282, "cf_name": "default", "job": 25, "event": "table_file_creation", "file_number": 47, "file_size": 584362, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 22693, "largest_seqno": 23268, "table_properties": {"data_size": 581268, "index_size": 1002, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1029, "raw_key_size": 7009, "raw_average_key_size": 18, "raw_value_size": 574947, "raw_average_value_size": 1489, "num_data_blocks": 44, "num_entries": 386, "num_filter_entries": 386, "num_deletions": 262, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768918973, "oldest_key_time": 1768918973, "file_creation_time": 1768919003, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1539d774-8a6f-4e48-b253-137c44586344", "db_session_id": "LFF7G2OZDOU7TKQ8MKAH", "orig_file_number": 47, "seqno_to_time_mapping": "N/A"}}
Jan 20 14:23:23 compute-1 ceph-mon[81775]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 25] Flush lasted 5751 microseconds, and 1928 cpu microseconds.
Jan 20 14:23:23 compute-1 ceph-mon[81775]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 14:23:23 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:23:23.065309) [db/flush_job.cc:967] [default] [JOB 25] Level-0 flush table #47: 584362 bytes OK
Jan 20 14:23:23 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:23:23.065322) [db/memtable_list.cc:519] [default] Level-0 commit table #47 started
Jan 20 14:23:23 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:23:23.067139) [db/memtable_list.cc:722] [default] Level-0 commit table #47: memtable #1 done
Jan 20 14:23:23 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:23:23.067150) EVENT_LOG_v1 {"time_micros": 1768919003067146, "job": 25, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 20 14:23:23 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:23:23.067163) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 20 14:23:23 compute-1 ceph-mon[81775]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 25] Try to delete WAL files size 881568, prev total WAL file size 881568, number of live WAL files 2.
Jan 20 14:23:23 compute-1 ceph-mon[81775]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000043.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 14:23:23 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:23:23.067580) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00323532' seq:72057594037927935, type:22 .. '6C6F676D00353130' seq:0, type:0; will stop at (end)
Jan 20 14:23:23 compute-1 ceph-mon[81775]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 26] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 20 14:23:23 compute-1 ceph-mon[81775]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 25 Base level 0, inputs: [47(570KB)], [45(7230KB)]
Jan 20 14:23:23 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768919003067615, "job": 26, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [47], "files_L6": [45], "score": -1, "input_data_size": 7988068, "oldest_snapshot_seqno": -1}
Jan 20 14:23:23 compute-1 nova_compute[225855]: 2026-01-20 14:23:23.086 225859 DEBUG oslo_concurrency.lockutils [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Releasing lock "refresh_cache-21e70820-70b1-4bb9-bb8d-62fb69c2298b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:23:23 compute-1 nova_compute[225855]: 2026-01-20 14:23:23.086 225859 DEBUG nova.compute.manager [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] [instance: 21e70820-70b1-4bb9-bb8d-62fb69c2298b] Instance network_info: |[{"id": "68f29a43-1cc4-44f2-953b-48d1d2795097", "address": "fa:16:3e:5f:bb:6a", "network": {"id": "abfbbc51-530d-4964-87bc-9fe4ef7eea76", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "fdfe:381f:8400::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400::3b3", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}, {"cidr": "10.1.0.0/26", "dns": [], "gateway": {"address": "10.1.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.38", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e024eef627014f829fa6e45ffe36c281", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68f29a43-1c", "ovs_interfaceid": "68f29a43-1cc4-44f2-953b-48d1d2795097", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 20 14:23:23 compute-1 nova_compute[225855]: 2026-01-20 14:23:23.087 225859 DEBUG oslo_concurrency.lockutils [req-17fdf6fa-cd1b-416f-8f3e-77e08611db88 req-da2202a5-1fc1-4510-91c1-d190c3b45ce0 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-21e70820-70b1-4bb9-bb8d-62fb69c2298b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:23:23 compute-1 nova_compute[225855]: 2026-01-20 14:23:23.087 225859 DEBUG nova.network.neutron [req-17fdf6fa-cd1b-416f-8f3e-77e08611db88 req-da2202a5-1fc1-4510-91c1-d190c3b45ce0 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 21e70820-70b1-4bb9-bb8d-62fb69c2298b] Refreshing network info cache for port 68f29a43-1cc4-44f2-953b-48d1d2795097 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 14:23:23 compute-1 nova_compute[225855]: 2026-01-20 14:23:23.090 225859 DEBUG nova.virt.libvirt.driver [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] [instance: 21e70820-70b1-4bb9-bb8d-62fb69c2298b] Start _get_guest_xml network_info=[{"id": "68f29a43-1cc4-44f2-953b-48d1d2795097", "address": "fa:16:3e:5f:bb:6a", "network": {"id": "abfbbc51-530d-4964-87bc-9fe4ef7eea76", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "fdfe:381f:8400::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400::3b3", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}, {"cidr": "10.1.0.0/26", "dns": [], "gateway": {"address": "10.1.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.38", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e024eef627014f829fa6e45ffe36c281", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68f29a43-1c", "ovs_interfaceid": "68f29a43-1cc4-44f2-953b-48d1d2795097", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'device_type': 'disk', 'encryption_options': None, 'size': 0, 'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'guest_format': None, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'image_id': 'a32b3e07-16d8-46fd-9a7b-c242c432fcf9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 20 14:23:23 compute-1 nova_compute[225855]: 2026-01-20 14:23:23.094 225859 WARNING nova.virt.libvirt.driver [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 14:23:23 compute-1 nova_compute[225855]: 2026-01-20 14:23:23.102 225859 DEBUG nova.virt.libvirt.host [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 20 14:23:23 compute-1 nova_compute[225855]: 2026-01-20 14:23:23.102 225859 DEBUG nova.virt.libvirt.host [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 20 14:23:23 compute-1 nova_compute[225855]: 2026-01-20 14:23:23.105 225859 DEBUG nova.virt.libvirt.host [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 20 14:23:23 compute-1 nova_compute[225855]: 2026-01-20 14:23:23.106 225859 DEBUG nova.virt.libvirt.host [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 20 14:23:23 compute-1 nova_compute[225855]: 2026-01-20 14:23:23.107 225859 DEBUG nova.virt.libvirt.driver [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 20 14:23:23 compute-1 nova_compute[225855]: 2026-01-20 14:23:23.107 225859 DEBUG nova.virt.hardware [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-20T14:21:55Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='522deaab-a741-4dbb-932d-d8b13a211c33',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 20 14:23:23 compute-1 nova_compute[225855]: 2026-01-20 14:23:23.107 225859 DEBUG nova.virt.hardware [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 20 14:23:23 compute-1 nova_compute[225855]: 2026-01-20 14:23:23.107 225859 DEBUG nova.virt.hardware [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 20 14:23:23 compute-1 nova_compute[225855]: 2026-01-20 14:23:23.107 225859 DEBUG nova.virt.hardware [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 20 14:23:23 compute-1 nova_compute[225855]: 2026-01-20 14:23:23.108 225859 DEBUG nova.virt.hardware [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 20 14:23:23 compute-1 nova_compute[225855]: 2026-01-20 14:23:23.108 225859 DEBUG nova.virt.hardware [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 20 14:23:23 compute-1 nova_compute[225855]: 2026-01-20 14:23:23.108 225859 DEBUG nova.virt.hardware [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 20 14:23:23 compute-1 nova_compute[225855]: 2026-01-20 14:23:23.108 225859 DEBUG nova.virt.hardware [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 20 14:23:23 compute-1 nova_compute[225855]: 2026-01-20 14:23:23.108 225859 DEBUG nova.virt.hardware [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 20 14:23:23 compute-1 nova_compute[225855]: 2026-01-20 14:23:23.108 225859 DEBUG nova.virt.hardware [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 20 14:23:23 compute-1 nova_compute[225855]: 2026-01-20 14:23:23.108 225859 DEBUG nova.virt.hardware [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 20 14:23:23 compute-1 nova_compute[225855]: 2026-01-20 14:23:23.125 225859 DEBUG nova.privsep.utils [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Jan 20 14:23:23 compute-1 nova_compute[225855]: 2026-01-20 14:23:23.125 225859 DEBUG oslo_concurrency.processutils [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:23:23 compute-1 ceph-mon[81775]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 26] Generated table #48: 4546 keys, 7852013 bytes, temperature: kUnknown
Jan 20 14:23:23 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768919003127445, "cf_name": "default", "job": 26, "event": "table_file_creation", "file_number": 48, "file_size": 7852013, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7821679, "index_size": 17872, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11397, "raw_key_size": 115931, "raw_average_key_size": 25, "raw_value_size": 7739161, "raw_average_value_size": 1702, "num_data_blocks": 731, "num_entries": 4546, "num_filter_entries": 4546, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768917474, "oldest_key_time": 0, "file_creation_time": 1768919003, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1539d774-8a6f-4e48-b253-137c44586344", "db_session_id": "LFF7G2OZDOU7TKQ8MKAH", "orig_file_number": 48, "seqno_to_time_mapping": "N/A"}}
Jan 20 14:23:23 compute-1 ceph-mon[81775]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 14:23:23 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:23:23.127655) [db/compaction/compaction_job.cc:1663] [default] [JOB 26] Compacted 1@0 + 1@6 files to L6 => 7852013 bytes
Jan 20 14:23:23 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:23:23.128930) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 133.4 rd, 131.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.6, 7.1 +0.0 blob) out(7.5 +0.0 blob), read-write-amplify(27.1) write-amplify(13.4) OK, records in: 5086, records dropped: 540 output_compression: NoCompression
Jan 20 14:23:23 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:23:23.128946) EVENT_LOG_v1 {"time_micros": 1768919003128939, "job": 26, "event": "compaction_finished", "compaction_time_micros": 59891, "compaction_time_cpu_micros": 22406, "output_level": 6, "num_output_files": 1, "total_output_size": 7852013, "num_input_records": 5086, "num_output_records": 4546, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 20 14:23:23 compute-1 ceph-mon[81775]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000047.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 14:23:23 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768919003129125, "job": 26, "event": "table_file_deletion", "file_number": 47}
Jan 20 14:23:23 compute-1 ceph-mon[81775]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000045.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 14:23:23 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768919003130402, "job": 26, "event": "table_file_deletion", "file_number": 45}
Jan 20 14:23:23 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:23:23.067499) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:23:23 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:23:23.130489) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:23:23 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:23:23.130494) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:23:23 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:23:23.130496) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:23:23 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:23:23.130498) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:23:23 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:23:23.130500) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:23:23 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:23:23 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2873657741' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:23:23 compute-1 nova_compute[225855]: 2026-01-20 14:23:23.571 225859 DEBUG oslo_concurrency.processutils [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:23:23 compute-1 nova_compute[225855]: 2026-01-20 14:23:23.608 225859 DEBUG nova.storage.rbd_utils [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] rbd image 21e70820-70b1-4bb9-bb8d-62fb69c2298b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:23:23 compute-1 nova_compute[225855]: 2026-01-20 14:23:23.615 225859 DEBUG oslo_concurrency.processutils [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:23:23 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:23:23 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:23:23 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:23:23.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:23:24 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:23:24 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/143690843' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:23:24 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:23:24 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:23:24 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:23:24.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:23:24 compute-1 ceph-mon[81775]: osdmap e138: 3 total, 3 up, 3 in
Jan 20 14:23:24 compute-1 nova_compute[225855]: 2026-01-20 14:23:24.411 225859 DEBUG oslo_concurrency.processutils [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.796s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:23:24 compute-1 nova_compute[225855]: 2026-01-20 14:23:24.415 225859 DEBUG nova.virt.libvirt.vif [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T14:22:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-tempest.common.compute-instance-63374895-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-63374895-1',id=2,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e024eef627014f829fa6e45ffe36c281',ramdisk_id='',reservation_id='r-46o7356r',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AutoAllocateNetworkTest-314960358',owner_user_name='tempest-AutoAllocateNetworkTest-314960358-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T14:22:51Z,user_data=None,user_id='918f290d4c414b71807eacf0b27ad165',uuid=21e70820-70b1-4bb9-bb8d-62fb69c2298b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "68f29a43-1cc4-44f2-953b-48d1d2795097", "address": "fa:16:3e:5f:bb:6a", "network": {"id": "abfbbc51-530d-4964-87bc-9fe4ef7eea76", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "fdfe:381f:8400::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400::3b3", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}, {"cidr": "10.1.0.0/26", "dns": [], "gateway": {"address": "10.1.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.38", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e024eef627014f829fa6e45ffe36c281", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68f29a43-1c", "ovs_interfaceid": "68f29a43-1cc4-44f2-953b-48d1d2795097", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 20 14:23:24 compute-1 nova_compute[225855]: 2026-01-20 14:23:24.415 225859 DEBUG nova.network.os_vif_util [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Converting VIF {"id": "68f29a43-1cc4-44f2-953b-48d1d2795097", "address": "fa:16:3e:5f:bb:6a", "network": {"id": "abfbbc51-530d-4964-87bc-9fe4ef7eea76", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "fdfe:381f:8400::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400::3b3", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}, {"cidr": "10.1.0.0/26", "dns": [], "gateway": {"address": "10.1.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.38", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e024eef627014f829fa6e45ffe36c281", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68f29a43-1c", "ovs_interfaceid": "68f29a43-1cc4-44f2-953b-48d1d2795097", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:23:24 compute-1 nova_compute[225855]: 2026-01-20 14:23:24.417 225859 DEBUG nova.network.os_vif_util [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5f:bb:6a,bridge_name='br-int',has_traffic_filtering=True,id=68f29a43-1cc4-44f2-953b-48d1d2795097,network=Network(abfbbc51-530d-4964-87bc-9fe4ef7eea76),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap68f29a43-1c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:23:24 compute-1 nova_compute[225855]: 2026-01-20 14:23:24.421 225859 DEBUG nova.objects.instance [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Lazy-loading 'pci_devices' on Instance uuid 21e70820-70b1-4bb9-bb8d-62fb69c2298b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:23:24 compute-1 nova_compute[225855]: 2026-01-20 14:23:24.466 225859 DEBUG nova.virt.libvirt.driver [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] [instance: 21e70820-70b1-4bb9-bb8d-62fb69c2298b] End _get_guest_xml xml=<domain type="kvm">
Jan 20 14:23:24 compute-1 nova_compute[225855]:   <uuid>21e70820-70b1-4bb9-bb8d-62fb69c2298b</uuid>
Jan 20 14:23:24 compute-1 nova_compute[225855]:   <name>instance-00000002</name>
Jan 20 14:23:24 compute-1 nova_compute[225855]:   <memory>131072</memory>
Jan 20 14:23:24 compute-1 nova_compute[225855]:   <vcpu>1</vcpu>
Jan 20 14:23:24 compute-1 nova_compute[225855]:   <metadata>
Jan 20 14:23:24 compute-1 nova_compute[225855]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 14:23:24 compute-1 nova_compute[225855]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 14:23:24 compute-1 nova_compute[225855]:       <nova:name>tempest-tempest.common.compute-instance-63374895-1</nova:name>
Jan 20 14:23:24 compute-1 nova_compute[225855]:       <nova:creationTime>2026-01-20 14:23:23</nova:creationTime>
Jan 20 14:23:24 compute-1 nova_compute[225855]:       <nova:flavor name="m1.nano">
Jan 20 14:23:24 compute-1 nova_compute[225855]:         <nova:memory>128</nova:memory>
Jan 20 14:23:24 compute-1 nova_compute[225855]:         <nova:disk>1</nova:disk>
Jan 20 14:23:24 compute-1 nova_compute[225855]:         <nova:swap>0</nova:swap>
Jan 20 14:23:24 compute-1 nova_compute[225855]:         <nova:ephemeral>0</nova:ephemeral>
Jan 20 14:23:24 compute-1 nova_compute[225855]:         <nova:vcpus>1</nova:vcpus>
Jan 20 14:23:24 compute-1 nova_compute[225855]:       </nova:flavor>
Jan 20 14:23:24 compute-1 nova_compute[225855]:       <nova:owner>
Jan 20 14:23:24 compute-1 nova_compute[225855]:         <nova:user uuid="918f290d4c414b71807eacf0b27ad165">tempest-AutoAllocateNetworkTest-314960358-project-member</nova:user>
Jan 20 14:23:24 compute-1 nova_compute[225855]:         <nova:project uuid="e024eef627014f829fa6e45ffe36c281">tempest-AutoAllocateNetworkTest-314960358</nova:project>
Jan 20 14:23:24 compute-1 nova_compute[225855]:       </nova:owner>
Jan 20 14:23:24 compute-1 nova_compute[225855]:       <nova:root type="image" uuid="a32b3e07-16d8-46fd-9a7b-c242c432fcf9"/>
Jan 20 14:23:24 compute-1 nova_compute[225855]:       <nova:ports>
Jan 20 14:23:24 compute-1 nova_compute[225855]:         <nova:port uuid="68f29a43-1cc4-44f2-953b-48d1d2795097">
Jan 20 14:23:24 compute-1 nova_compute[225855]:           <nova:ip type="fixed" address="fdfe:381f:8400::3b3" ipVersion="6"/>
Jan 20 14:23:24 compute-1 nova_compute[225855]:           <nova:ip type="fixed" address="10.1.0.38" ipVersion="4"/>
Jan 20 14:23:24 compute-1 nova_compute[225855]:         </nova:port>
Jan 20 14:23:24 compute-1 nova_compute[225855]:       </nova:ports>
Jan 20 14:23:24 compute-1 nova_compute[225855]:     </nova:instance>
Jan 20 14:23:24 compute-1 nova_compute[225855]:   </metadata>
Jan 20 14:23:24 compute-1 nova_compute[225855]:   <sysinfo type="smbios">
Jan 20 14:23:24 compute-1 nova_compute[225855]:     <system>
Jan 20 14:23:24 compute-1 nova_compute[225855]:       <entry name="manufacturer">RDO</entry>
Jan 20 14:23:24 compute-1 nova_compute[225855]:       <entry name="product">OpenStack Compute</entry>
Jan 20 14:23:24 compute-1 nova_compute[225855]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 14:23:24 compute-1 nova_compute[225855]:       <entry name="serial">21e70820-70b1-4bb9-bb8d-62fb69c2298b</entry>
Jan 20 14:23:24 compute-1 nova_compute[225855]:       <entry name="uuid">21e70820-70b1-4bb9-bb8d-62fb69c2298b</entry>
Jan 20 14:23:24 compute-1 nova_compute[225855]:       <entry name="family">Virtual Machine</entry>
Jan 20 14:23:24 compute-1 nova_compute[225855]:     </system>
Jan 20 14:23:24 compute-1 nova_compute[225855]:   </sysinfo>
Jan 20 14:23:24 compute-1 nova_compute[225855]:   <os>
Jan 20 14:23:24 compute-1 nova_compute[225855]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 20 14:23:24 compute-1 nova_compute[225855]:     <boot dev="hd"/>
Jan 20 14:23:24 compute-1 nova_compute[225855]:     <smbios mode="sysinfo"/>
Jan 20 14:23:24 compute-1 nova_compute[225855]:   </os>
Jan 20 14:23:24 compute-1 nova_compute[225855]:   <features>
Jan 20 14:23:24 compute-1 nova_compute[225855]:     <acpi/>
Jan 20 14:23:24 compute-1 nova_compute[225855]:     <apic/>
Jan 20 14:23:24 compute-1 nova_compute[225855]:     <vmcoreinfo/>
Jan 20 14:23:24 compute-1 nova_compute[225855]:   </features>
Jan 20 14:23:24 compute-1 nova_compute[225855]:   <clock offset="utc">
Jan 20 14:23:24 compute-1 nova_compute[225855]:     <timer name="pit" tickpolicy="delay"/>
Jan 20 14:23:24 compute-1 nova_compute[225855]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 20 14:23:24 compute-1 nova_compute[225855]:     <timer name="hpet" present="no"/>
Jan 20 14:23:24 compute-1 nova_compute[225855]:   </clock>
Jan 20 14:23:24 compute-1 nova_compute[225855]:   <cpu mode="custom" match="exact">
Jan 20 14:23:24 compute-1 nova_compute[225855]:     <model>Nehalem</model>
Jan 20 14:23:24 compute-1 nova_compute[225855]:     <topology sockets="1" cores="1" threads="1"/>
Jan 20 14:23:24 compute-1 nova_compute[225855]:   </cpu>
Jan 20 14:23:24 compute-1 nova_compute[225855]:   <devices>
Jan 20 14:23:24 compute-1 nova_compute[225855]:     <disk type="network" device="disk">
Jan 20 14:23:24 compute-1 nova_compute[225855]:       <driver type="raw" cache="none"/>
Jan 20 14:23:24 compute-1 nova_compute[225855]:       <source protocol="rbd" name="vms/21e70820-70b1-4bb9-bb8d-62fb69c2298b_disk">
Jan 20 14:23:24 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:23:24 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:23:24 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:23:24 compute-1 nova_compute[225855]:       </source>
Jan 20 14:23:24 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 14:23:24 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:23:24 compute-1 nova_compute[225855]:       </auth>
Jan 20 14:23:24 compute-1 nova_compute[225855]:       <target dev="vda" bus="virtio"/>
Jan 20 14:23:24 compute-1 nova_compute[225855]:     </disk>
Jan 20 14:23:24 compute-1 nova_compute[225855]:     <disk type="network" device="cdrom">
Jan 20 14:23:24 compute-1 nova_compute[225855]:       <driver type="raw" cache="none"/>
Jan 20 14:23:24 compute-1 nova_compute[225855]:       <source protocol="rbd" name="vms/21e70820-70b1-4bb9-bb8d-62fb69c2298b_disk.config">
Jan 20 14:23:24 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:23:24 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:23:24 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:23:24 compute-1 nova_compute[225855]:       </source>
Jan 20 14:23:24 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 14:23:24 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:23:24 compute-1 nova_compute[225855]:       </auth>
Jan 20 14:23:24 compute-1 nova_compute[225855]:       <target dev="sda" bus="sata"/>
Jan 20 14:23:24 compute-1 nova_compute[225855]:     </disk>
Jan 20 14:23:24 compute-1 nova_compute[225855]:     <interface type="ethernet">
Jan 20 14:23:24 compute-1 nova_compute[225855]:       <mac address="fa:16:3e:5f:bb:6a"/>
Jan 20 14:23:24 compute-1 nova_compute[225855]:       <model type="virtio"/>
Jan 20 14:23:24 compute-1 nova_compute[225855]:       <driver name="vhost" rx_queue_size="512"/>
Jan 20 14:23:24 compute-1 nova_compute[225855]:       <mtu size="1442"/>
Jan 20 14:23:24 compute-1 nova_compute[225855]:       <target dev="tap68f29a43-1c"/>
Jan 20 14:23:24 compute-1 nova_compute[225855]:     </interface>
Jan 20 14:23:24 compute-1 nova_compute[225855]:     <serial type="pty">
Jan 20 14:23:24 compute-1 nova_compute[225855]:       <log file="/var/lib/nova/instances/21e70820-70b1-4bb9-bb8d-62fb69c2298b/console.log" append="off"/>
Jan 20 14:23:24 compute-1 nova_compute[225855]:     </serial>
Jan 20 14:23:24 compute-1 nova_compute[225855]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 20 14:23:24 compute-1 nova_compute[225855]:     <video>
Jan 20 14:23:24 compute-1 nova_compute[225855]:       <model type="virtio"/>
Jan 20 14:23:24 compute-1 nova_compute[225855]:     </video>
Jan 20 14:23:24 compute-1 nova_compute[225855]:     <input type="tablet" bus="usb"/>
Jan 20 14:23:24 compute-1 nova_compute[225855]:     <rng model="virtio">
Jan 20 14:23:24 compute-1 nova_compute[225855]:       <backend model="random">/dev/urandom</backend>
Jan 20 14:23:24 compute-1 nova_compute[225855]:     </rng>
Jan 20 14:23:24 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root"/>
Jan 20 14:23:24 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:23:24 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:23:24 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:23:24 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:23:24 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:23:24 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:23:24 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:23:24 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:23:24 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:23:24 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:23:24 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:23:24 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:23:24 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:23:24 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:23:24 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:23:24 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:23:24 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:23:24 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:23:24 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:23:24 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:23:24 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:23:24 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:23:24 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:23:24 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:23:24 compute-1 nova_compute[225855]:     <controller type="usb" index="0"/>
Jan 20 14:23:24 compute-1 nova_compute[225855]:     <memballoon model="virtio">
Jan 20 14:23:24 compute-1 nova_compute[225855]:       <stats period="10"/>
Jan 20 14:23:24 compute-1 nova_compute[225855]:     </memballoon>
Jan 20 14:23:24 compute-1 nova_compute[225855]:   </devices>
Jan 20 14:23:24 compute-1 nova_compute[225855]: </domain>
Jan 20 14:23:24 compute-1 nova_compute[225855]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 20 14:23:24 compute-1 nova_compute[225855]: 2026-01-20 14:23:24.467 225859 DEBUG nova.compute.manager [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] [instance: 21e70820-70b1-4bb9-bb8d-62fb69c2298b] Preparing to wait for external event network-vif-plugged-68f29a43-1cc4-44f2-953b-48d1d2795097 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 20 14:23:24 compute-1 nova_compute[225855]: 2026-01-20 14:23:24.467 225859 DEBUG oslo_concurrency.lockutils [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Acquiring lock "21e70820-70b1-4bb9-bb8d-62fb69c2298b-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:23:24 compute-1 nova_compute[225855]: 2026-01-20 14:23:24.467 225859 DEBUG oslo_concurrency.lockutils [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Lock "21e70820-70b1-4bb9-bb8d-62fb69c2298b-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:23:24 compute-1 nova_compute[225855]: 2026-01-20 14:23:24.468 225859 DEBUG oslo_concurrency.lockutils [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Lock "21e70820-70b1-4bb9-bb8d-62fb69c2298b-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:23:24 compute-1 nova_compute[225855]: 2026-01-20 14:23:24.468 225859 DEBUG nova.virt.libvirt.vif [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T14:22:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-tempest.common.compute-instance-63374895-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-63374895-1',id=2,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e024eef627014f829fa6e45ffe36c281',ramdisk_id='',reservation_id='r-46o7356r',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AutoAllocateNetworkTest-314960358',owner_user_name='tempest-AutoAllocateNetworkTest-314960358-project-member'},
tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T14:22:51Z,user_data=None,user_id='918f290d4c414b71807eacf0b27ad165',uuid=21e70820-70b1-4bb9-bb8d-62fb69c2298b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "68f29a43-1cc4-44f2-953b-48d1d2795097", "address": "fa:16:3e:5f:bb:6a", "network": {"id": "abfbbc51-530d-4964-87bc-9fe4ef7eea76", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "fdfe:381f:8400::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400::3b3", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}, {"cidr": "10.1.0.0/26", "dns": [], "gateway": {"address": "10.1.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.38", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e024eef627014f829fa6e45ffe36c281", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68f29a43-1c", "ovs_interfaceid": "68f29a43-1cc4-44f2-953b-48d1d2795097", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 20 14:23:24 compute-1 nova_compute[225855]: 2026-01-20 14:23:24.468 225859 DEBUG nova.network.os_vif_util [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Converting VIF {"id": "68f29a43-1cc4-44f2-953b-48d1d2795097", "address": "fa:16:3e:5f:bb:6a", "network": {"id": "abfbbc51-530d-4964-87bc-9fe4ef7eea76", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "fdfe:381f:8400::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400::3b3", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}, {"cidr": "10.1.0.0/26", "dns": [], "gateway": {"address": "10.1.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.38", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e024eef627014f829fa6e45ffe36c281", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68f29a43-1c", "ovs_interfaceid": "68f29a43-1cc4-44f2-953b-48d1d2795097", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:23:24 compute-1 nova_compute[225855]: 2026-01-20 14:23:24.469 225859 DEBUG nova.network.os_vif_util [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5f:bb:6a,bridge_name='br-int',has_traffic_filtering=True,id=68f29a43-1cc4-44f2-953b-48d1d2795097,network=Network(abfbbc51-530d-4964-87bc-9fe4ef7eea76),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap68f29a43-1c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:23:24 compute-1 nova_compute[225855]: 2026-01-20 14:23:24.469 225859 DEBUG os_vif [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5f:bb:6a,bridge_name='br-int',has_traffic_filtering=True,id=68f29a43-1cc4-44f2-953b-48d1d2795097,network=Network(abfbbc51-530d-4964-87bc-9fe4ef7eea76),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap68f29a43-1c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 20 14:23:24 compute-1 nova_compute[225855]: 2026-01-20 14:23:24.506 225859 DEBUG ovsdbapp.backend.ovs_idl [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 20 14:23:24 compute-1 nova_compute[225855]: 2026-01-20 14:23:24.506 225859 DEBUG ovsdbapp.backend.ovs_idl [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 20 14:23:24 compute-1 nova_compute[225855]: 2026-01-20 14:23:24.506 225859 DEBUG ovsdbapp.backend.ovs_idl [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 20 14:23:24 compute-1 nova_compute[225855]: 2026-01-20 14:23:24.507 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] tcp:127.0.0.1:6640: entering CONNECTING _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Jan 20 14:23:24 compute-1 nova_compute[225855]: 2026-01-20 14:23:24.507 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] [POLLOUT] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:23:24 compute-1 nova_compute[225855]: 2026-01-20 14:23:24.507 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Jan 20 14:23:24 compute-1 nova_compute[225855]: 2026-01-20 14:23:24.508 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:23:24 compute-1 nova_compute[225855]: 2026-01-20 14:23:24.509 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:23:24 compute-1 nova_compute[225855]: 2026-01-20 14:23:24.511 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:23:24 compute-1 nova_compute[225855]: 2026-01-20 14:23:24.520 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:23:24 compute-1 nova_compute[225855]: 2026-01-20 14:23:24.520 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:23:24 compute-1 nova_compute[225855]: 2026-01-20 14:23:24.520 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:23:24 compute-1 nova_compute[225855]: 2026-01-20 14:23:24.521 225859 INFO oslo.privsep.daemon [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'vif_plug_ovs.privsep.vif_plug', '--privsep_sock_path', '/tmp/tmp963l7m58/privsep.sock']
Jan 20 14:23:24 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:23:24 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:23:24.935 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=4, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '12:bb:42', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '06:92:24:f7:15:56'}, ipsec=False) old=SB_Global(nb_cfg=3) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:23:24 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:23:24.937 140354 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 20 14:23:24 compute-1 nova_compute[225855]: 2026-01-20 14:23:24.939 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:23:25 compute-1 nova_compute[225855]: 2026-01-20 14:23:25.271 225859 INFO oslo.privsep.daemon [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Spawned new privsep daemon via rootwrap
Jan 20 14:23:25 compute-1 nova_compute[225855]: 2026-01-20 14:23:25.101 229591 INFO oslo.privsep.daemon [-] privsep daemon starting
Jan 20 14:23:25 compute-1 nova_compute[225855]: 2026-01-20 14:23:25.104 229591 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Jan 20 14:23:25 compute-1 nova_compute[225855]: 2026-01-20 14:23:25.106 229591 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_NET_ADMIN/CAP_DAC_OVERRIDE|CAP_NET_ADMIN/none
Jan 20 14:23:25 compute-1 nova_compute[225855]: 2026-01-20 14:23:25.106 229591 INFO oslo.privsep.daemon [-] privsep daemon running as pid 229591
Jan 20 14:23:25 compute-1 ceph-mon[81775]: pgmap v957: 321 pgs: 321 active+clean; 259 MiB data, 322 MiB used, 21 GiB / 21 GiB avail; 127 B/s wr, 0 op/s
Jan 20 14:23:25 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/2873657741' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:23:25 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/143690843' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:23:25 compute-1 nova_compute[225855]: 2026-01-20 14:23:25.600 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:23:25 compute-1 nova_compute[225855]: 2026-01-20 14:23:25.601 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap68f29a43-1c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:23:25 compute-1 nova_compute[225855]: 2026-01-20 14:23:25.601 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap68f29a43-1c, col_values=(('external_ids', {'iface-id': '68f29a43-1cc4-44f2-953b-48d1d2795097', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:5f:bb:6a', 'vm-uuid': '21e70820-70b1-4bb9-bb8d-62fb69c2298b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:23:25 compute-1 nova_compute[225855]: 2026-01-20 14:23:25.603 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:23:25 compute-1 NetworkManager[49104]: <info>  [1768919005.6043] manager: (tap68f29a43-1c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/23)
Jan 20 14:23:25 compute-1 nova_compute[225855]: 2026-01-20 14:23:25.605 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 14:23:25 compute-1 nova_compute[225855]: 2026-01-20 14:23:25.613 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:23:25 compute-1 nova_compute[225855]: 2026-01-20 14:23:25.615 225859 INFO os_vif [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5f:bb:6a,bridge_name='br-int',has_traffic_filtering=True,id=68f29a43-1cc4-44f2-953b-48d1d2795097,network=Network(abfbbc51-530d-4964-87bc-9fe4ef7eea76),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap68f29a43-1c')
Jan 20 14:23:25 compute-1 nova_compute[225855]: 2026-01-20 14:23:25.687 225859 DEBUG nova.virt.libvirt.driver [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:23:25 compute-1 nova_compute[225855]: 2026-01-20 14:23:25.688 225859 DEBUG nova.virt.libvirt.driver [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:23:25 compute-1 nova_compute[225855]: 2026-01-20 14:23:25.688 225859 DEBUG nova.virt.libvirt.driver [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] No VIF found with MAC fa:16:3e:5f:bb:6a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 20 14:23:25 compute-1 nova_compute[225855]: 2026-01-20 14:23:25.688 225859 INFO nova.virt.libvirt.driver [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] [instance: 21e70820-70b1-4bb9-bb8d-62fb69c2298b] Using config drive
Jan 20 14:23:25 compute-1 nova_compute[225855]: 2026-01-20 14:23:25.715 225859 DEBUG nova.storage.rbd_utils [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] rbd image 21e70820-70b1-4bb9-bb8d-62fb69c2298b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:23:25 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:23:25 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:23:25 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:23:25.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:23:26 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:23:26 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:23:26 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:23:26.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:23:26 compute-1 nova_compute[225855]: 2026-01-20 14:23:26.134 225859 DEBUG nova.network.neutron [req-17fdf6fa-cd1b-416f-8f3e-77e08611db88 req-da2202a5-1fc1-4510-91c1-d190c3b45ce0 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 21e70820-70b1-4bb9-bb8d-62fb69c2298b] Updated VIF entry in instance network info cache for port 68f29a43-1cc4-44f2-953b-48d1d2795097. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 14:23:26 compute-1 nova_compute[225855]: 2026-01-20 14:23:26.134 225859 DEBUG nova.network.neutron [req-17fdf6fa-cd1b-416f-8f3e-77e08611db88 req-da2202a5-1fc1-4510-91c1-d190c3b45ce0 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 21e70820-70b1-4bb9-bb8d-62fb69c2298b] Updating instance_info_cache with network_info: [{"id": "68f29a43-1cc4-44f2-953b-48d1d2795097", "address": "fa:16:3e:5f:bb:6a", "network": {"id": "abfbbc51-530d-4964-87bc-9fe4ef7eea76", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "fdfe:381f:8400::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400::3b3", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}, {"cidr": "10.1.0.0/26", "dns": [], "gateway": {"address": "10.1.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.38", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e024eef627014f829fa6e45ffe36c281", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68f29a43-1c", "ovs_interfaceid": "68f29a43-1cc4-44f2-953b-48d1d2795097", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:23:26 compute-1 nova_compute[225855]: 2026-01-20 14:23:26.148 225859 DEBUG oslo_concurrency.lockutils [req-17fdf6fa-cd1b-416f-8f3e-77e08611db88 req-da2202a5-1fc1-4510-91c1-d190c3b45ce0 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-21e70820-70b1-4bb9-bb8d-62fb69c2298b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:23:26 compute-1 nova_compute[225855]: 2026-01-20 14:23:26.848 225859 INFO nova.virt.libvirt.driver [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] [instance: 21e70820-70b1-4bb9-bb8d-62fb69c2298b] Creating config drive at /var/lib/nova/instances/21e70820-70b1-4bb9-bb8d-62fb69c2298b/disk.config
Jan 20 14:23:26 compute-1 nova_compute[225855]: 2026-01-20 14:23:26.853 225859 DEBUG oslo_concurrency.processutils [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/21e70820-70b1-4bb9-bb8d-62fb69c2298b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpuca9icwx execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:23:26 compute-1 nova_compute[225855]: 2026-01-20 14:23:26.982 225859 DEBUG oslo_concurrency.processutils [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/21e70820-70b1-4bb9-bb8d-62fb69c2298b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpuca9icwx" returned: 0 in 0.128s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:23:27 compute-1 nova_compute[225855]: 2026-01-20 14:23:27.006 225859 DEBUG nova.storage.rbd_utils [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] rbd image 21e70820-70b1-4bb9-bb8d-62fb69c2298b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:23:27 compute-1 nova_compute[225855]: 2026-01-20 14:23:27.009 225859 DEBUG oslo_concurrency.processutils [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/21e70820-70b1-4bb9-bb8d-62fb69c2298b/disk.config 21e70820-70b1-4bb9-bb8d-62fb69c2298b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:23:27 compute-1 ceph-mon[81775]: pgmap v958: 321 pgs: 321 active+clean; 259 MiB data, 322 MiB used, 21 GiB / 21 GiB avail; 0 B/s rd, 255 B/s wr, 0 op/s
Jan 20 14:23:27 compute-1 nova_compute[225855]: 2026-01-20 14:23:27.616 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:23:27 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:23:27 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:23:27 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:23:27.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:23:28 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:23:28 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:23:28 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:23:28.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:23:28 compute-1 ceph-mon[81775]: pgmap v959: 321 pgs: 321 active+clean; 259 MiB data, 322 MiB used, 21 GiB / 21 GiB avail; 383 B/s rd, 767 B/s wr, 1 op/s
Jan 20 14:23:28 compute-1 nova_compute[225855]: 2026-01-20 14:23:28.641 225859 DEBUG oslo_concurrency.processutils [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/21e70820-70b1-4bb9-bb8d-62fb69c2298b/disk.config 21e70820-70b1-4bb9-bb8d-62fb69c2298b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.633s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:23:28 compute-1 nova_compute[225855]: 2026-01-20 14:23:28.642 225859 INFO nova.virt.libvirt.driver [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] [instance: 21e70820-70b1-4bb9-bb8d-62fb69c2298b] Deleting local config drive /var/lib/nova/instances/21e70820-70b1-4bb9-bb8d-62fb69c2298b/disk.config because it was imported into RBD.
Jan 20 14:23:28 compute-1 systemd[1]: Starting libvirt secret daemon...
Jan 20 14:23:28 compute-1 systemd[1]: Started libvirt secret daemon.
Jan 20 14:23:28 compute-1 kernel: tun: Universal TUN/TAP device driver, 1.6
Jan 20 14:23:28 compute-1 kernel: tap68f29a43-1c: entered promiscuous mode
Jan 20 14:23:28 compute-1 NetworkManager[49104]: <info>  [1768919008.7539] manager: (tap68f29a43-1c): new Tun device (/org/freedesktop/NetworkManager/Devices/24)
Jan 20 14:23:28 compute-1 systemd-udevd[229690]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 14:23:28 compute-1 NetworkManager[49104]: <info>  [1768919008.7985] device (tap68f29a43-1c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 20 14:23:28 compute-1 NetworkManager[49104]: <info>  [1768919008.7994] device (tap68f29a43-1c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 20 14:23:28 compute-1 ovn_controller[130490]: 2026-01-20T14:23:28Z|00027|binding|INFO|Claiming lport 68f29a43-1cc4-44f2-953b-48d1d2795097 for this chassis.
Jan 20 14:23:28 compute-1 ovn_controller[130490]: 2026-01-20T14:23:28Z|00028|binding|INFO|68f29a43-1cc4-44f2-953b-48d1d2795097: Claiming fa:16:3e:5f:bb:6a 10.1.0.38 fdfe:381f:8400::3b3
Jan 20 14:23:28 compute-1 nova_compute[225855]: 2026-01-20 14:23:28.801 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:23:28 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:23:28.831 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5f:bb:6a 10.1.0.38 fdfe:381f:8400::3b3'], port_security=['fa:16:3e:5f:bb:6a 10.1.0.38 fdfe:381f:8400::3b3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.1.0.38/26 fdfe:381f:8400::3b3/64', 'neutron:device_id': '21e70820-70b1-4bb9-bb8d-62fb69c2298b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-abfbbc51-530d-4964-87bc-9fe4ef7eea76', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e024eef627014f829fa6e45ffe36c281', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd16ec2ec-302d-4208-8497-5b8aae342313', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=72f351f7-24c7-4d5d-b1a1-e23b4cd26746, chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], logical_port=68f29a43-1cc4-44f2-953b-48d1d2795097) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:23:28 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:23:28.832 140354 INFO neutron.agent.ovn.metadata.agent [-] Port 68f29a43-1cc4-44f2-953b-48d1d2795097 in datapath abfbbc51-530d-4964-87bc-9fe4ef7eea76 bound to our chassis
Jan 20 14:23:28 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:23:28.836 140354 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network abfbbc51-530d-4964-87bc-9fe4ef7eea76
Jan 20 14:23:28 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:23:28.838 140354 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.default', '--privsep_sock_path', '/tmp/tmprzp_2ryc/privsep.sock']
Jan 20 14:23:28 compute-1 systemd-machined[194361]: New machine qemu-1-instance-00000002.
Jan 20 14:23:28 compute-1 nova_compute[225855]: 2026-01-20 14:23:28.898 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:23:28 compute-1 systemd[1]: Started Virtual Machine qemu-1-instance-00000002.
Jan 20 14:23:28 compute-1 ovn_controller[130490]: 2026-01-20T14:23:28Z|00029|binding|INFO|Setting lport 68f29a43-1cc4-44f2-953b-48d1d2795097 ovn-installed in OVS
Jan 20 14:23:28 compute-1 ovn_controller[130490]: 2026-01-20T14:23:28Z|00030|binding|INFO|Setting lport 68f29a43-1cc4-44f2-953b-48d1d2795097 up in Southbound
Jan 20 14:23:28 compute-1 nova_compute[225855]: 2026-01-20 14:23:28.908 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:23:29 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:23:29.524 140354 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Jan 20 14:23:29 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:23:29.524 140354 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmprzp_2ryc/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Jan 20 14:23:29 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:23:29.408 229707 INFO oslo.privsep.daemon [-] privsep daemon starting
Jan 20 14:23:29 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:23:29.424 229707 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Jan 20 14:23:29 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:23:29.429 229707 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/none
Jan 20 14:23:29 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:23:29.429 229707 INFO oslo.privsep.daemon [-] privsep daemon running as pid 229707
Jan 20 14:23:29 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:23:29.526 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[feaf95f6-f136-45b2-b141-fef66fde3851]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:23:29 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:23:29 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:23:29 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:23:29 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:23:29.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:23:30 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:23:30.088 229707 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:23:30 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:23:30.089 229707 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:23:30 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:23:30.089 229707 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:23:30 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:23:30 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:23:30 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:23:30.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:23:30 compute-1 nova_compute[225855]: 2026-01-20 14:23:30.302 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768919010.3014455, 21e70820-70b1-4bb9-bb8d-62fb69c2298b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:23:30 compute-1 nova_compute[225855]: 2026-01-20 14:23:30.303 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 21e70820-70b1-4bb9-bb8d-62fb69c2298b] VM Started (Lifecycle Event)
Jan 20 14:23:30 compute-1 nova_compute[225855]: 2026-01-20 14:23:30.603 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:23:30 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:23:30.662 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[714190d5-1ab6-481c-b733-cfce8065b74a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:23:30 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:23:30.663 140354 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapabfbbc51-51 in ovnmeta-abfbbc51-530d-4964-87bc-9fe4ef7eea76 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 20 14:23:30 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:23:30.664 229707 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapabfbbc51-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 20 14:23:30 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:23:30.665 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[d5cfd51e-a280-4d6f-b217-a25addcd46ca]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:23:30 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:23:30.667 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[ca541b62-b1b5-4114-9ce7-d1bbfc1cb8ce]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:23:30 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:23:30.712 140466 DEBUG oslo.privsep.daemon [-] privsep: reply[ac349c12-81ee-4028-b4d0-99d4cccfc955]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:23:30 compute-1 nova_compute[225855]: 2026-01-20 14:23:30.737 225859 DEBUG nova.compute.manager [req-18dbb390-3d82-4ae7-ac85-e25061c82ba3 req-5f987ecf-449e-47b3-a62d-3adf3aeeee32 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 21e70820-70b1-4bb9-bb8d-62fb69c2298b] Received event network-vif-plugged-68f29a43-1cc4-44f2-953b-48d1d2795097 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:23:30 compute-1 nova_compute[225855]: 2026-01-20 14:23:30.738 225859 DEBUG oslo_concurrency.lockutils [req-18dbb390-3d82-4ae7-ac85-e25061c82ba3 req-5f987ecf-449e-47b3-a62d-3adf3aeeee32 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "21e70820-70b1-4bb9-bb8d-62fb69c2298b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:23:30 compute-1 nova_compute[225855]: 2026-01-20 14:23:30.739 225859 DEBUG oslo_concurrency.lockutils [req-18dbb390-3d82-4ae7-ac85-e25061c82ba3 req-5f987ecf-449e-47b3-a62d-3adf3aeeee32 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "21e70820-70b1-4bb9-bb8d-62fb69c2298b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:23:30 compute-1 nova_compute[225855]: 2026-01-20 14:23:30.739 225859 DEBUG oslo_concurrency.lockutils [req-18dbb390-3d82-4ae7-ac85-e25061c82ba3 req-5f987ecf-449e-47b3-a62d-3adf3aeeee32 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "21e70820-70b1-4bb9-bb8d-62fb69c2298b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:23:30 compute-1 nova_compute[225855]: 2026-01-20 14:23:30.740 225859 DEBUG nova.compute.manager [req-18dbb390-3d82-4ae7-ac85-e25061c82ba3 req-5f987ecf-449e-47b3-a62d-3adf3aeeee32 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 21e70820-70b1-4bb9-bb8d-62fb69c2298b] Processing event network-vif-plugged-68f29a43-1cc4-44f2-953b-48d1d2795097 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 20 14:23:30 compute-1 nova_compute[225855]: 2026-01-20 14:23:30.741 225859 DEBUG nova.compute.manager [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] [instance: 21e70820-70b1-4bb9-bb8d-62fb69c2298b] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 20 14:23:30 compute-1 nova_compute[225855]: 2026-01-20 14:23:30.745 225859 DEBUG nova.virt.libvirt.driver [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] [instance: 21e70820-70b1-4bb9-bb8d-62fb69c2298b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 20 14:23:30 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:23:30.746 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[9d9ecad7-18a4-484a-a263-bb5af19bfdfb]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:23:30 compute-1 nova_compute[225855]: 2026-01-20 14:23:30.760 225859 INFO nova.virt.libvirt.driver [-] [instance: 21e70820-70b1-4bb9-bb8d-62fb69c2298b] Instance spawned successfully.
Jan 20 14:23:30 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:23:30.760 140354 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.link_cmd', '--privsep_sock_path', '/tmp/tmplex3qvo8/privsep.sock']
Jan 20 14:23:30 compute-1 nova_compute[225855]: 2026-01-20 14:23:30.761 225859 DEBUG nova.virt.libvirt.driver [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] [instance: 21e70820-70b1-4bb9-bb8d-62fb69c2298b] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 20 14:23:30 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:23:30.940 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5ffd4ac3-9266-4927-98ad-20a17782c725, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '4'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:23:31 compute-1 ceph-mon[81775]: pgmap v960: 321 pgs: 321 active+clean; 259 MiB data, 322 MiB used, 21 GiB / 21 GiB avail; 3.1 KiB/s rd, 847 B/s wr, 5 op/s
Jan 20 14:23:31 compute-1 nova_compute[225855]: 2026-01-20 14:23:31.359 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 21e70820-70b1-4bb9-bb8d-62fb69c2298b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:23:31 compute-1 nova_compute[225855]: 2026-01-20 14:23:31.362 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 21e70820-70b1-4bb9-bb8d-62fb69c2298b] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:23:31 compute-1 nova_compute[225855]: 2026-01-20 14:23:31.374 225859 DEBUG nova.virt.libvirt.driver [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] [instance: 21e70820-70b1-4bb9-bb8d-62fb69c2298b] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:23:31 compute-1 nova_compute[225855]: 2026-01-20 14:23:31.374 225859 DEBUG nova.virt.libvirt.driver [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] [instance: 21e70820-70b1-4bb9-bb8d-62fb69c2298b] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:23:31 compute-1 nova_compute[225855]: 2026-01-20 14:23:31.375 225859 DEBUG nova.virt.libvirt.driver [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] [instance: 21e70820-70b1-4bb9-bb8d-62fb69c2298b] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:23:31 compute-1 nova_compute[225855]: 2026-01-20 14:23:31.375 225859 DEBUG nova.virt.libvirt.driver [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] [instance: 21e70820-70b1-4bb9-bb8d-62fb69c2298b] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:23:31 compute-1 nova_compute[225855]: 2026-01-20 14:23:31.376 225859 DEBUG nova.virt.libvirt.driver [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] [instance: 21e70820-70b1-4bb9-bb8d-62fb69c2298b] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:23:31 compute-1 nova_compute[225855]: 2026-01-20 14:23:31.376 225859 DEBUG nova.virt.libvirt.driver [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] [instance: 21e70820-70b1-4bb9-bb8d-62fb69c2298b] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:23:31 compute-1 nova_compute[225855]: 2026-01-20 14:23:31.384 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 21e70820-70b1-4bb9-bb8d-62fb69c2298b] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 14:23:31 compute-1 nova_compute[225855]: 2026-01-20 14:23:31.384 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768919010.3015974, 21e70820-70b1-4bb9-bb8d-62fb69c2298b => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:23:31 compute-1 nova_compute[225855]: 2026-01-20 14:23:31.385 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 21e70820-70b1-4bb9-bb8d-62fb69c2298b] VM Paused (Lifecycle Event)
Jan 20 14:23:31 compute-1 nova_compute[225855]: 2026-01-20 14:23:31.417 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 21e70820-70b1-4bb9-bb8d-62fb69c2298b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:23:31 compute-1 nova_compute[225855]: 2026-01-20 14:23:31.421 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768919010.7445748, 21e70820-70b1-4bb9-bb8d-62fb69c2298b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:23:31 compute-1 nova_compute[225855]: 2026-01-20 14:23:31.421 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 21e70820-70b1-4bb9-bb8d-62fb69c2298b] VM Resumed (Lifecycle Event)
Jan 20 14:23:31 compute-1 nova_compute[225855]: 2026-01-20 14:23:31.443 225859 INFO nova.compute.manager [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] [instance: 21e70820-70b1-4bb9-bb8d-62fb69c2298b] Took 40.32 seconds to spawn the instance on the hypervisor.
Jan 20 14:23:31 compute-1 nova_compute[225855]: 2026-01-20 14:23:31.444 225859 DEBUG nova.compute.manager [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] [instance: 21e70820-70b1-4bb9-bb8d-62fb69c2298b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:23:31 compute-1 nova_compute[225855]: 2026-01-20 14:23:31.453 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 21e70820-70b1-4bb9-bb8d-62fb69c2298b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:23:31 compute-1 nova_compute[225855]: 2026-01-20 14:23:31.456 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 21e70820-70b1-4bb9-bb8d-62fb69c2298b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:23:31 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:23:31.505 140354 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Jan 20 14:23:31 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:23:31.505 140354 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmplex3qvo8/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Jan 20 14:23:31 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:23:31.374 229764 INFO oslo.privsep.daemon [-] privsep daemon starting
Jan 20 14:23:31 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:23:31.379 229764 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Jan 20 14:23:31 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:23:31.382 229764 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Jan 20 14:23:31 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:23:31.382 229764 INFO oslo.privsep.daemon [-] privsep daemon running as pid 229764
Jan 20 14:23:31 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:23:31.508 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[f31c73ad-8f95-4232-8286-f95036d97ee3]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:23:31 compute-1 nova_compute[225855]: 2026-01-20 14:23:31.629 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 21e70820-70b1-4bb9-bb8d-62fb69c2298b] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 14:23:31 compute-1 nova_compute[225855]: 2026-01-20 14:23:31.651 225859 INFO nova.compute.manager [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] [instance: 21e70820-70b1-4bb9-bb8d-62fb69c2298b] Took 41.44 seconds to build instance.
Jan 20 14:23:31 compute-1 nova_compute[225855]: 2026-01-20 14:23:31.674 225859 DEBUG oslo_concurrency.lockutils [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Lock "21e70820-70b1-4bb9-bb8d-62fb69c2298b" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 41.540s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:23:31 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:23:31 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:23:31 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:23:31.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:23:32 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:23:32.066 229764 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:23:32 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:23:32.066 229764 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:23:32 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:23:32.067 229764 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:23:32 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:23:32 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:23:32 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:23:32.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:23:32 compute-1 ceph-mon[81775]: pgmap v961: 321 pgs: 321 active+clean; 260 MiB data, 322 MiB used, 21 GiB / 21 GiB avail; 3.9 KiB/s rd, 16 KiB/s wr, 6 op/s
Jan 20 14:23:32 compute-1 nova_compute[225855]: 2026-01-20 14:23:32.618 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:23:32 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:23:32.665 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[63f1694f-52c2-40bb-8a8b-eb7ccfa8af47]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:23:32 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:23:32.678 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[89233a01-0700-4b7b-8d79-138aec76148c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:23:32 compute-1 NetworkManager[49104]: <info>  [1768919012.6868] manager: (tapabfbbc51-50): new Veth device (/org/freedesktop/NetworkManager/Devices/25)
Jan 20 14:23:32 compute-1 systemd-udevd[229777]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 14:23:32 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:23:32.709 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[d61303f2-e7d9-4304-82d9-4bec6cbab162]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:23:32 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:23:32.713 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[68717da6-e0fa-475e-bb75-5e882b5c4c04]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:23:32 compute-1 NetworkManager[49104]: <info>  [1768919012.7369] device (tapabfbbc51-50): carrier: link connected
Jan 20 14:23:32 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:23:32.741 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[e7f2d71e-bbd0-40a0-bd2f-8051357fda8d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:23:32 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:23:32.760 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[a09d19e2-5ae0-4d41-ba4a-504ab1d6efcb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapabfbbc51-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e5:8a:9e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 13], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 405166, 'reachable_time': 24486, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 229795, 'error': None, 'target': 'ovnmeta-abfbbc51-530d-4964-87bc-9fe4ef7eea76', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:23:32 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:23:32.775 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[2159b145-7f40-429f-9e9b-9a4b03c4365f]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fee5:8a9e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 405166, 'tstamp': 405166}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 229796, 'error': None, 'target': 'ovnmeta-abfbbc51-530d-4964-87bc-9fe4ef7eea76', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:23:32 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:23:32.790 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[4a694806-71a1-4613-a4d1-20098490b882]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapabfbbc51-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e5:8a:9e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 13], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 405166, 'reachable_time': 24486, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 229797, 'error': None, 'target': 'ovnmeta-abfbbc51-530d-4964-87bc-9fe4ef7eea76', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:23:32 compute-1 nova_compute[225855]: 2026-01-20 14:23:32.814 225859 DEBUG oslo_concurrency.lockutils [None req-ba1c8464-f718-4e14-8f03-f8ec1a8424f7 6d292784a7494358a137fea52feffec0 6d9b3587ce494cb8ac153a66886f6883 - - default default] Acquiring lock "7108f815-a0ef-4f18-a2c2-c796476ace75" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:23:32 compute-1 nova_compute[225855]: 2026-01-20 14:23:32.815 225859 DEBUG oslo_concurrency.lockutils [None req-ba1c8464-f718-4e14-8f03-f8ec1a8424f7 6d292784a7494358a137fea52feffec0 6d9b3587ce494cb8ac153a66886f6883 - - default default] Lock "7108f815-a0ef-4f18-a2c2-c796476ace75" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:23:32 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:23:32.824 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[4e81b25b-8bb8-4d17-b6fe-ef89c075ce54]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:23:32 compute-1 nova_compute[225855]: 2026-01-20 14:23:32.839 225859 DEBUG nova.compute.manager [None req-ba1c8464-f718-4e14-8f03-f8ec1a8424f7 6d292784a7494358a137fea52feffec0 6d9b3587ce494cb8ac153a66886f6883 - - default default] [instance: 7108f815-a0ef-4f18-a2c2-c796476ace75] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 20 14:23:32 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:23:32.878 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[b45ee2ea-85ed-4e84-b319-4435169ec437]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:23:32 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:23:32.880 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapabfbbc51-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:23:32 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:23:32.880 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:23:32 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:23:32.881 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapabfbbc51-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:23:32 compute-1 kernel: tapabfbbc51-50: entered promiscuous mode
Jan 20 14:23:32 compute-1 NetworkManager[49104]: <info>  [1768919012.8834] manager: (tapabfbbc51-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/26)
Jan 20 14:23:32 compute-1 nova_compute[225855]: 2026-01-20 14:23:32.882 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:23:32 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:23:32.888 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapabfbbc51-50, col_values=(('external_ids', {'iface-id': 'ded59d17-485f-4d1e-8a09-5098adeebfa4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:23:32 compute-1 ovn_controller[130490]: 2026-01-20T14:23:32Z|00031|binding|INFO|Releasing lport ded59d17-485f-4d1e-8a09-5098adeebfa4 from this chassis (sb_readonly=0)
Jan 20 14:23:32 compute-1 nova_compute[225855]: 2026-01-20 14:23:32.888 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:23:32 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:23:32.890 140354 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/abfbbc51-530d-4964-87bc-9fe4ef7eea76.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/abfbbc51-530d-4964-87bc-9fe4ef7eea76.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 20 14:23:32 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:23:32.891 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[ee22b94b-7a82-4da5-856e-fe3e11702fe9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:23:32 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:23:32.892 140354 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 20 14:23:32 compute-1 ovn_metadata_agent[140349]: global
Jan 20 14:23:32 compute-1 ovn_metadata_agent[140349]:     log         /dev/log local0 debug
Jan 20 14:23:32 compute-1 ovn_metadata_agent[140349]:     log-tag     haproxy-metadata-proxy-abfbbc51-530d-4964-87bc-9fe4ef7eea76
Jan 20 14:23:32 compute-1 ovn_metadata_agent[140349]:     user        root
Jan 20 14:23:32 compute-1 ovn_metadata_agent[140349]:     group       root
Jan 20 14:23:32 compute-1 ovn_metadata_agent[140349]:     maxconn     1024
Jan 20 14:23:32 compute-1 ovn_metadata_agent[140349]:     pidfile     /var/lib/neutron/external/pids/abfbbc51-530d-4964-87bc-9fe4ef7eea76.pid.haproxy
Jan 20 14:23:32 compute-1 ovn_metadata_agent[140349]:     daemon
Jan 20 14:23:32 compute-1 ovn_metadata_agent[140349]: 
Jan 20 14:23:32 compute-1 ovn_metadata_agent[140349]: defaults
Jan 20 14:23:32 compute-1 ovn_metadata_agent[140349]:     log global
Jan 20 14:23:32 compute-1 ovn_metadata_agent[140349]:     mode http
Jan 20 14:23:32 compute-1 ovn_metadata_agent[140349]:     option httplog
Jan 20 14:23:32 compute-1 ovn_metadata_agent[140349]:     option dontlognull
Jan 20 14:23:32 compute-1 ovn_metadata_agent[140349]:     option http-server-close
Jan 20 14:23:32 compute-1 ovn_metadata_agent[140349]:     option forwardfor
Jan 20 14:23:32 compute-1 ovn_metadata_agent[140349]:     retries                 3
Jan 20 14:23:32 compute-1 ovn_metadata_agent[140349]:     timeout http-request    30s
Jan 20 14:23:32 compute-1 ovn_metadata_agent[140349]:     timeout connect         30s
Jan 20 14:23:32 compute-1 ovn_metadata_agent[140349]:     timeout client          32s
Jan 20 14:23:32 compute-1 ovn_metadata_agent[140349]:     timeout server          32s
Jan 20 14:23:32 compute-1 ovn_metadata_agent[140349]:     timeout http-keep-alive 30s
Jan 20 14:23:32 compute-1 ovn_metadata_agent[140349]: 
Jan 20 14:23:32 compute-1 ovn_metadata_agent[140349]: 
Jan 20 14:23:32 compute-1 ovn_metadata_agent[140349]: listen listener
Jan 20 14:23:32 compute-1 ovn_metadata_agent[140349]:     bind 169.254.169.254:80
Jan 20 14:23:32 compute-1 ovn_metadata_agent[140349]:     server metadata /var/lib/neutron/metadata_proxy
Jan 20 14:23:32 compute-1 ovn_metadata_agent[140349]:     http-request add-header X-OVN-Network-ID abfbbc51-530d-4964-87bc-9fe4ef7eea76
Jan 20 14:23:32 compute-1 ovn_metadata_agent[140349]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 20 14:23:32 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:23:32.892 140354 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-abfbbc51-530d-4964-87bc-9fe4ef7eea76', 'env', 'PROCESS_TAG=haproxy-abfbbc51-530d-4964-87bc-9fe4ef7eea76', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/abfbbc51-530d-4964-87bc-9fe4ef7eea76.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 20 14:23:32 compute-1 nova_compute[225855]: 2026-01-20 14:23:32.907 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:23:32 compute-1 nova_compute[225855]: 2026-01-20 14:23:32.941 225859 DEBUG nova.compute.manager [req-a55097e8-e503-47b5-b95e-0fd7d9be0e6f req-f0f8dae1-3b79-451d-b2c4-f5b820759014 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 21e70820-70b1-4bb9-bb8d-62fb69c2298b] Received event network-vif-plugged-68f29a43-1cc4-44f2-953b-48d1d2795097 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:23:32 compute-1 nova_compute[225855]: 2026-01-20 14:23:32.942 225859 DEBUG oslo_concurrency.lockutils [req-a55097e8-e503-47b5-b95e-0fd7d9be0e6f req-f0f8dae1-3b79-451d-b2c4-f5b820759014 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "21e70820-70b1-4bb9-bb8d-62fb69c2298b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:23:32 compute-1 nova_compute[225855]: 2026-01-20 14:23:32.942 225859 DEBUG oslo_concurrency.lockutils [req-a55097e8-e503-47b5-b95e-0fd7d9be0e6f req-f0f8dae1-3b79-451d-b2c4-f5b820759014 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "21e70820-70b1-4bb9-bb8d-62fb69c2298b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:23:32 compute-1 nova_compute[225855]: 2026-01-20 14:23:32.943 225859 DEBUG oslo_concurrency.lockutils [req-a55097e8-e503-47b5-b95e-0fd7d9be0e6f req-f0f8dae1-3b79-451d-b2c4-f5b820759014 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "21e70820-70b1-4bb9-bb8d-62fb69c2298b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:23:32 compute-1 nova_compute[225855]: 2026-01-20 14:23:32.943 225859 DEBUG nova.compute.manager [req-a55097e8-e503-47b5-b95e-0fd7d9be0e6f req-f0f8dae1-3b79-451d-b2c4-f5b820759014 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 21e70820-70b1-4bb9-bb8d-62fb69c2298b] No waiting events found dispatching network-vif-plugged-68f29a43-1cc4-44f2-953b-48d1d2795097 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:23:32 compute-1 nova_compute[225855]: 2026-01-20 14:23:32.944 225859 WARNING nova.compute.manager [req-a55097e8-e503-47b5-b95e-0fd7d9be0e6f req-f0f8dae1-3b79-451d-b2c4-f5b820759014 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 21e70820-70b1-4bb9-bb8d-62fb69c2298b] Received unexpected event network-vif-plugged-68f29a43-1cc4-44f2-953b-48d1d2795097 for instance with vm_state active and task_state None.
Jan 20 14:23:32 compute-1 nova_compute[225855]: 2026-01-20 14:23:32.947 225859 DEBUG oslo_concurrency.lockutils [None req-ba1c8464-f718-4e14-8f03-f8ec1a8424f7 6d292784a7494358a137fea52feffec0 6d9b3587ce494cb8ac153a66886f6883 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:23:32 compute-1 nova_compute[225855]: 2026-01-20 14:23:32.947 225859 DEBUG oslo_concurrency.lockutils [None req-ba1c8464-f718-4e14-8f03-f8ec1a8424f7 6d292784a7494358a137fea52feffec0 6d9b3587ce494cb8ac153a66886f6883 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:23:32 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e139 e139: 3 total, 3 up, 3 in
Jan 20 14:23:32 compute-1 nova_compute[225855]: 2026-01-20 14:23:32.956 225859 DEBUG nova.virt.hardware [None req-ba1c8464-f718-4e14-8f03-f8ec1a8424f7 6d292784a7494358a137fea52feffec0 6d9b3587ce494cb8ac153a66886f6883 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 20 14:23:32 compute-1 nova_compute[225855]: 2026-01-20 14:23:32.956 225859 INFO nova.compute.claims [None req-ba1c8464-f718-4e14-8f03-f8ec1a8424f7 6d292784a7494358a137fea52feffec0 6d9b3587ce494cb8ac153a66886f6883 - - default default] [instance: 7108f815-a0ef-4f18-a2c2-c796476ace75] Claim successful on node compute-1.ctlplane.example.com
Jan 20 14:23:33 compute-1 nova_compute[225855]: 2026-01-20 14:23:33.084 225859 DEBUG oslo_concurrency.processutils [None req-ba1c8464-f718-4e14-8f03-f8ec1a8424f7 6d292784a7494358a137fea52feffec0 6d9b3587ce494cb8ac153a66886f6883 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:23:33 compute-1 podman[229831]: 2026-01-20 14:23:33.230775423 +0000 UTC m=+0.041595093 container create d0fb244dd0bdc95ddb75bb18a552554e22daebc4ba5177ada30f919dd83cac7c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-abfbbc51-530d-4964-87bc-9fe4ef7eea76, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Jan 20 14:23:33 compute-1 systemd[1]: Started libpod-conmon-d0fb244dd0bdc95ddb75bb18a552554e22daebc4ba5177ada30f919dd83cac7c.scope.
Jan 20 14:23:33 compute-1 systemd[1]: Started libcrun container.
Jan 20 14:23:33 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce11b97c0626c78060e5093ae253c072bdc788907d20e002f102fb4643485e00/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 20 14:23:33 compute-1 podman[229831]: 2026-01-20 14:23:33.299964846 +0000 UTC m=+0.110784516 container init d0fb244dd0bdc95ddb75bb18a552554e22daebc4ba5177ada30f919dd83cac7c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-abfbbc51-530d-4964-87bc-9fe4ef7eea76, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Jan 20 14:23:33 compute-1 podman[229831]: 2026-01-20 14:23:33.208772968 +0000 UTC m=+0.019592648 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 20 14:23:33 compute-1 podman[229831]: 2026-01-20 14:23:33.305320365 +0000 UTC m=+0.116140035 container start d0fb244dd0bdc95ddb75bb18a552554e22daebc4ba5177ada30f919dd83cac7c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-abfbbc51-530d-4964-87bc-9fe4ef7eea76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 20 14:23:33 compute-1 podman[229863]: 2026-01-20 14:23:33.322854385 +0000 UTC m=+0.060018287 container health_status 533cd8e0c3b8b7f910f15cef49e7f879ab46c12ce646a35628c15f13725eb500 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 20 14:23:33 compute-1 neutron-haproxy-ovnmeta-abfbbc51-530d-4964-87bc-9fe4ef7eea76[229866]: [NOTICE]   (229885) : New worker (229889) forked
Jan 20 14:23:33 compute-1 neutron-haproxy-ovnmeta-abfbbc51-530d-4964-87bc-9fe4ef7eea76[229866]: [NOTICE]   (229885) : Loading success.
Jan 20 14:23:33 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:23:33 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2620160175' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:23:33 compute-1 nova_compute[225855]: 2026-01-20 14:23:33.572 225859 DEBUG oslo_concurrency.processutils [None req-ba1c8464-f718-4e14-8f03-f8ec1a8424f7 6d292784a7494358a137fea52feffec0 6d9b3587ce494cb8ac153a66886f6883 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:23:33 compute-1 nova_compute[225855]: 2026-01-20 14:23:33.580 225859 DEBUG nova.compute.provider_tree [None req-ba1c8464-f718-4e14-8f03-f8ec1a8424f7 6d292784a7494358a137fea52feffec0 6d9b3587ce494cb8ac153a66886f6883 - - default default] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:23:33 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:23:33 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:23:33 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:23:33.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:23:33 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/450213987' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:23:33 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/1900357995' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:23:33 compute-1 ceph-mon[81775]: osdmap e139: 3 total, 3 up, 3 in
Jan 20 14:23:33 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/2620160175' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:23:34 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:23:34 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:23:34 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:23:34.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:23:34 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:23:35 compute-1 ceph-mon[81775]: pgmap v963: 321 pgs: 321 active+clean; 260 MiB data, 322 MiB used, 21 GiB / 21 GiB avail; 294 KiB/s rd, 18 KiB/s wr, 20 op/s
Jan 20 14:23:35 compute-1 nova_compute[225855]: 2026-01-20 14:23:35.605 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:23:35 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:23:35 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:23:35 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:23:35.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:23:36 compute-1 nova_compute[225855]: 2026-01-20 14:23:36.036 225859 DEBUG nova.scheduler.client.report [None req-ba1c8464-f718-4e14-8f03-f8ec1a8424f7 6d292784a7494358a137fea52feffec0 6d9b3587ce494cb8ac153a66886f6883 - - default default] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:23:36 compute-1 sudo[229903]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:23:36 compute-1 sudo[229903]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:23:36 compute-1 sudo[229903]: pam_unix(sudo:session): session closed for user root
Jan 20 14:23:36 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:23:36 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:23:36 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:23:36.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:23:36 compute-1 sudo[229928]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:23:36 compute-1 sudo[229928]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:23:36 compute-1 sudo[229928]: pam_unix(sudo:session): session closed for user root
Jan 20 14:23:37 compute-1 ceph-mon[81775]: pgmap v964: 321 pgs: 321 active+clean; 260 MiB data, 322 MiB used, 21 GiB / 21 GiB avail; 1.3 MiB/s rd, 18 KiB/s wr, 57 op/s
Jan 20 14:23:37 compute-1 nova_compute[225855]: 2026-01-20 14:23:37.381 225859 DEBUG oslo_concurrency.lockutils [None req-ba1c8464-f718-4e14-8f03-f8ec1a8424f7 6d292784a7494358a137fea52feffec0 6d9b3587ce494cb8ac153a66886f6883 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 4.433s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:23:37 compute-1 nova_compute[225855]: 2026-01-20 14:23:37.382 225859 DEBUG nova.compute.manager [None req-ba1c8464-f718-4e14-8f03-f8ec1a8424f7 6d292784a7494358a137fea52feffec0 6d9b3587ce494cb8ac153a66886f6883 - - default default] [instance: 7108f815-a0ef-4f18-a2c2-c796476ace75] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 20 14:23:37 compute-1 nova_compute[225855]: 2026-01-20 14:23:37.620 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:23:37 compute-1 nova_compute[225855]: 2026-01-20 14:23:37.826 225859 DEBUG nova.compute.manager [None req-ba1c8464-f718-4e14-8f03-f8ec1a8424f7 6d292784a7494358a137fea52feffec0 6d9b3587ce494cb8ac153a66886f6883 - - default default] [instance: 7108f815-a0ef-4f18-a2c2-c796476ace75] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 20 14:23:37 compute-1 nova_compute[225855]: 2026-01-20 14:23:37.827 225859 DEBUG nova.network.neutron [None req-ba1c8464-f718-4e14-8f03-f8ec1a8424f7 6d292784a7494358a137fea52feffec0 6d9b3587ce494cb8ac153a66886f6883 - - default default] [instance: 7108f815-a0ef-4f18-a2c2-c796476ace75] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 20 14:23:37 compute-1 nova_compute[225855]: 2026-01-20 14:23:37.860 225859 INFO nova.virt.libvirt.driver [None req-ba1c8464-f718-4e14-8f03-f8ec1a8424f7 6d292784a7494358a137fea52feffec0 6d9b3587ce494cb8ac153a66886f6883 - - default default] [instance: 7108f815-a0ef-4f18-a2c2-c796476ace75] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 20 14:23:37 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:23:37 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:23:37 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:23:37.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:23:37 compute-1 nova_compute[225855]: 2026-01-20 14:23:37.982 225859 DEBUG nova.compute.manager [None req-ba1c8464-f718-4e14-8f03-f8ec1a8424f7 6d292784a7494358a137fea52feffec0 6d9b3587ce494cb8ac153a66886f6883 - - default default] [instance: 7108f815-a0ef-4f18-a2c2-c796476ace75] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 20 14:23:38 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:23:38 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:23:38 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:23:38.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:23:38 compute-1 nova_compute[225855]: 2026-01-20 14:23:38.171 225859 DEBUG nova.compute.manager [None req-ba1c8464-f718-4e14-8f03-f8ec1a8424f7 6d292784a7494358a137fea52feffec0 6d9b3587ce494cb8ac153a66886f6883 - - default default] [instance: 7108f815-a0ef-4f18-a2c2-c796476ace75] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 20 14:23:38 compute-1 nova_compute[225855]: 2026-01-20 14:23:38.173 225859 DEBUG nova.virt.libvirt.driver [None req-ba1c8464-f718-4e14-8f03-f8ec1a8424f7 6d292784a7494358a137fea52feffec0 6d9b3587ce494cb8ac153a66886f6883 - - default default] [instance: 7108f815-a0ef-4f18-a2c2-c796476ace75] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 20 14:23:38 compute-1 nova_compute[225855]: 2026-01-20 14:23:38.173 225859 INFO nova.virt.libvirt.driver [None req-ba1c8464-f718-4e14-8f03-f8ec1a8424f7 6d292784a7494358a137fea52feffec0 6d9b3587ce494cb8ac153a66886f6883 - - default default] [instance: 7108f815-a0ef-4f18-a2c2-c796476ace75] Creating image(s)
Jan 20 14:23:38 compute-1 nova_compute[225855]: 2026-01-20 14:23:38.209 225859 DEBUG nova.storage.rbd_utils [None req-ba1c8464-f718-4e14-8f03-f8ec1a8424f7 6d292784a7494358a137fea52feffec0 6d9b3587ce494cb8ac153a66886f6883 - - default default] rbd image 7108f815-a0ef-4f18-a2c2-c796476ace75_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:23:38 compute-1 nova_compute[225855]: 2026-01-20 14:23:38.250 225859 DEBUG nova.storage.rbd_utils [None req-ba1c8464-f718-4e14-8f03-f8ec1a8424f7 6d292784a7494358a137fea52feffec0 6d9b3587ce494cb8ac153a66886f6883 - - default default] rbd image 7108f815-a0ef-4f18-a2c2-c796476ace75_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:23:38 compute-1 nova_compute[225855]: 2026-01-20 14:23:38.284 225859 DEBUG nova.storage.rbd_utils [None req-ba1c8464-f718-4e14-8f03-f8ec1a8424f7 6d292784a7494358a137fea52feffec0 6d9b3587ce494cb8ac153a66886f6883 - - default default] rbd image 7108f815-a0ef-4f18-a2c2-c796476ace75_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:23:38 compute-1 nova_compute[225855]: 2026-01-20 14:23:38.288 225859 DEBUG oslo_concurrency.processutils [None req-ba1c8464-f718-4e14-8f03-f8ec1a8424f7 6d292784a7494358a137fea52feffec0 6d9b3587ce494cb8ac153a66886f6883 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:23:38 compute-1 nova_compute[225855]: 2026-01-20 14:23:38.378 225859 DEBUG oslo_concurrency.processutils [None req-ba1c8464-f718-4e14-8f03-f8ec1a8424f7 6d292784a7494358a137fea52feffec0 6d9b3587ce494cb8ac153a66886f6883 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json" returned: 0 in 0.090s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:23:38 compute-1 nova_compute[225855]: 2026-01-20 14:23:38.379 225859 DEBUG oslo_concurrency.lockutils [None req-ba1c8464-f718-4e14-8f03-f8ec1a8424f7 6d292784a7494358a137fea52feffec0 6d9b3587ce494cb8ac153a66886f6883 - - default default] Acquiring lock "82d5c1918fd7c974214c7a48c1793a7a82560462" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:23:38 compute-1 nova_compute[225855]: 2026-01-20 14:23:38.380 225859 DEBUG oslo_concurrency.lockutils [None req-ba1c8464-f718-4e14-8f03-f8ec1a8424f7 6d292784a7494358a137fea52feffec0 6d9b3587ce494cb8ac153a66886f6883 - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:23:38 compute-1 nova_compute[225855]: 2026-01-20 14:23:38.380 225859 DEBUG oslo_concurrency.lockutils [None req-ba1c8464-f718-4e14-8f03-f8ec1a8424f7 6d292784a7494358a137fea52feffec0 6d9b3587ce494cb8ac153a66886f6883 - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:23:38 compute-1 nova_compute[225855]: 2026-01-20 14:23:38.413 225859 DEBUG nova.storage.rbd_utils [None req-ba1c8464-f718-4e14-8f03-f8ec1a8424f7 6d292784a7494358a137fea52feffec0 6d9b3587ce494cb8ac153a66886f6883 - - default default] rbd image 7108f815-a0ef-4f18-a2c2-c796476ace75_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:23:38 compute-1 nova_compute[225855]: 2026-01-20 14:23:38.417 225859 DEBUG oslo_concurrency.processutils [None req-ba1c8464-f718-4e14-8f03-f8ec1a8424f7 6d292784a7494358a137fea52feffec0 6d9b3587ce494cb8ac153a66886f6883 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 7108f815-a0ef-4f18-a2c2-c796476ace75_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:23:38 compute-1 ceph-mon[81775]: pgmap v965: 321 pgs: 321 active+clean; 260 MiB data, 322 MiB used, 21 GiB / 21 GiB avail; 2.3 MiB/s rd, 17 KiB/s wr, 88 op/s
Jan 20 14:23:38 compute-1 nova_compute[225855]: 2026-01-20 14:23:38.820 225859 DEBUG nova.policy [None req-ba1c8464-f718-4e14-8f03-f8ec1a8424f7 6d292784a7494358a137fea52feffec0 6d9b3587ce494cb8ac153a66886f6883 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '6d292784a7494358a137fea52feffec0', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '6d9b3587ce494cb8ac153a66886f6883', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 20 14:23:39 compute-1 nova_compute[225855]: 2026-01-20 14:23:39.676 225859 DEBUG nova.network.neutron [None req-ba1c8464-f718-4e14-8f03-f8ec1a8424f7 6d292784a7494358a137fea52feffec0 6d9b3587ce494cb8ac153a66886f6883 - - default default] [instance: 7108f815-a0ef-4f18-a2c2-c796476ace75] Successfully created port: 4573fd6d-82c4-4e6d-bacb-86f1b5f0bf54 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 20 14:23:39 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:23:39 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:23:39 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:23:39 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:23:39.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:23:40 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:23:40 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:23:40 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:23:40.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:23:40 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/3972026209' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:23:40 compute-1 nova_compute[225855]: 2026-01-20 14:23:40.495 225859 DEBUG nova.network.neutron [None req-ba1c8464-f718-4e14-8f03-f8ec1a8424f7 6d292784a7494358a137fea52feffec0 6d9b3587ce494cb8ac153a66886f6883 - - default default] [instance: 7108f815-a0ef-4f18-a2c2-c796476ace75] Successfully updated port: 4573fd6d-82c4-4e6d-bacb-86f1b5f0bf54 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 20 14:23:40 compute-1 nova_compute[225855]: 2026-01-20 14:23:40.509 225859 DEBUG oslo_concurrency.lockutils [None req-ba1c8464-f718-4e14-8f03-f8ec1a8424f7 6d292784a7494358a137fea52feffec0 6d9b3587ce494cb8ac153a66886f6883 - - default default] Acquiring lock "refresh_cache-7108f815-a0ef-4f18-a2c2-c796476ace75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:23:40 compute-1 nova_compute[225855]: 2026-01-20 14:23:40.510 225859 DEBUG oslo_concurrency.lockutils [None req-ba1c8464-f718-4e14-8f03-f8ec1a8424f7 6d292784a7494358a137fea52feffec0 6d9b3587ce494cb8ac153a66886f6883 - - default default] Acquired lock "refresh_cache-7108f815-a0ef-4f18-a2c2-c796476ace75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:23:40 compute-1 nova_compute[225855]: 2026-01-20 14:23:40.510 225859 DEBUG nova.network.neutron [None req-ba1c8464-f718-4e14-8f03-f8ec1a8424f7 6d292784a7494358a137fea52feffec0 6d9b3587ce494cb8ac153a66886f6883 - - default default] [instance: 7108f815-a0ef-4f18-a2c2-c796476ace75] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 14:23:40 compute-1 nova_compute[225855]: 2026-01-20 14:23:40.598 225859 DEBUG nova.compute.manager [req-4a02e0f3-5e43-40c5-a02a-2921907eaaf6 req-e781f02b-d3d4-4e05-b5ea-8c6a8af974a6 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 7108f815-a0ef-4f18-a2c2-c796476ace75] Received event network-changed-4573fd6d-82c4-4e6d-bacb-86f1b5f0bf54 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:23:40 compute-1 nova_compute[225855]: 2026-01-20 14:23:40.598 225859 DEBUG nova.compute.manager [req-4a02e0f3-5e43-40c5-a02a-2921907eaaf6 req-e781f02b-d3d4-4e05-b5ea-8c6a8af974a6 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 7108f815-a0ef-4f18-a2c2-c796476ace75] Refreshing instance network info cache due to event network-changed-4573fd6d-82c4-4e6d-bacb-86f1b5f0bf54. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 14:23:40 compute-1 nova_compute[225855]: 2026-01-20 14:23:40.599 225859 DEBUG oslo_concurrency.lockutils [req-4a02e0f3-5e43-40c5-a02a-2921907eaaf6 req-e781f02b-d3d4-4e05-b5ea-8c6a8af974a6 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-7108f815-a0ef-4f18-a2c2-c796476ace75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:23:40 compute-1 nova_compute[225855]: 2026-01-20 14:23:40.608 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:23:40 compute-1 nova_compute[225855]: 2026-01-20 14:23:40.662 225859 DEBUG nova.network.neutron [None req-ba1c8464-f718-4e14-8f03-f8ec1a8424f7 6d292784a7494358a137fea52feffec0 6d9b3587ce494cb8ac153a66886f6883 - - default default] [instance: 7108f815-a0ef-4f18-a2c2-c796476ace75] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 20 14:23:41 compute-1 nova_compute[225855]: 2026-01-20 14:23:41.523 225859 DEBUG nova.network.neutron [None req-ba1c8464-f718-4e14-8f03-f8ec1a8424f7 6d292784a7494358a137fea52feffec0 6d9b3587ce494cb8ac153a66886f6883 - - default default] [instance: 7108f815-a0ef-4f18-a2c2-c796476ace75] Updating instance_info_cache with network_info: [{"id": "4573fd6d-82c4-4e6d-bacb-86f1b5f0bf54", "address": "fa:16:3e:fb:ee:02", "network": {"id": "ec57bdbd-ccb0-4a9c-bb01-a29300b17f8b", "bridge": "br-int", "label": "tempest-VolumesAssistedSnapshotsTest-731576160-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d9b3587ce494cb8ac153a66886f6883", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4573fd6d-82", "ovs_interfaceid": "4573fd6d-82c4-4e6d-bacb-86f1b5f0bf54", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:23:41 compute-1 nova_compute[225855]: 2026-01-20 14:23:41.540 225859 DEBUG oslo_concurrency.lockutils [None req-ba1c8464-f718-4e14-8f03-f8ec1a8424f7 6d292784a7494358a137fea52feffec0 6d9b3587ce494cb8ac153a66886f6883 - - default default] Releasing lock "refresh_cache-7108f815-a0ef-4f18-a2c2-c796476ace75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:23:41 compute-1 nova_compute[225855]: 2026-01-20 14:23:41.540 225859 DEBUG nova.compute.manager [None req-ba1c8464-f718-4e14-8f03-f8ec1a8424f7 6d292784a7494358a137fea52feffec0 6d9b3587ce494cb8ac153a66886f6883 - - default default] [instance: 7108f815-a0ef-4f18-a2c2-c796476ace75] Instance network_info: |[{"id": "4573fd6d-82c4-4e6d-bacb-86f1b5f0bf54", "address": "fa:16:3e:fb:ee:02", "network": {"id": "ec57bdbd-ccb0-4a9c-bb01-a29300b17f8b", "bridge": "br-int", "label": "tempest-VolumesAssistedSnapshotsTest-731576160-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d9b3587ce494cb8ac153a66886f6883", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4573fd6d-82", "ovs_interfaceid": "4573fd6d-82c4-4e6d-bacb-86f1b5f0bf54", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 20 14:23:41 compute-1 nova_compute[225855]: 2026-01-20 14:23:41.541 225859 DEBUG oslo_concurrency.lockutils [req-4a02e0f3-5e43-40c5-a02a-2921907eaaf6 req-e781f02b-d3d4-4e05-b5ea-8c6a8af974a6 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-7108f815-a0ef-4f18-a2c2-c796476ace75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:23:41 compute-1 nova_compute[225855]: 2026-01-20 14:23:41.541 225859 DEBUG nova.network.neutron [req-4a02e0f3-5e43-40c5-a02a-2921907eaaf6 req-e781f02b-d3d4-4e05-b5ea-8c6a8af974a6 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 7108f815-a0ef-4f18-a2c2-c796476ace75] Refreshing network info cache for port 4573fd6d-82c4-4e6d-bacb-86f1b5f0bf54 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 14:23:41 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:23:41 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:23:41 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:23:41.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:23:42 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:23:42 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:23:42 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:23:42.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:23:42 compute-1 nova_compute[225855]: 2026-01-20 14:23:42.623 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:23:43 compute-1 ceph-mon[81775]: pgmap v966: 321 pgs: 321 active+clean; 260 MiB data, 322 MiB used, 21 GiB / 21 GiB avail; 2.3 MiB/s rd, 17 KiB/s wr, 85 op/s
Jan 20 14:23:43 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/3235933527' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:23:43 compute-1 nova_compute[225855]: 2026-01-20 14:23:43.275 225859 DEBUG nova.network.neutron [req-4a02e0f3-5e43-40c5-a02a-2921907eaaf6 req-e781f02b-d3d4-4e05-b5ea-8c6a8af974a6 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 7108f815-a0ef-4f18-a2c2-c796476ace75] Updated VIF entry in instance network info cache for port 4573fd6d-82c4-4e6d-bacb-86f1b5f0bf54. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 14:23:43 compute-1 nova_compute[225855]: 2026-01-20 14:23:43.276 225859 DEBUG nova.network.neutron [req-4a02e0f3-5e43-40c5-a02a-2921907eaaf6 req-e781f02b-d3d4-4e05-b5ea-8c6a8af974a6 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 7108f815-a0ef-4f18-a2c2-c796476ace75] Updating instance_info_cache with network_info: [{"id": "4573fd6d-82c4-4e6d-bacb-86f1b5f0bf54", "address": "fa:16:3e:fb:ee:02", "network": {"id": "ec57bdbd-ccb0-4a9c-bb01-a29300b17f8b", "bridge": "br-int", "label": "tempest-VolumesAssistedSnapshotsTest-731576160-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d9b3587ce494cb8ac153a66886f6883", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4573fd6d-82", "ovs_interfaceid": "4573fd6d-82c4-4e6d-bacb-86f1b5f0bf54", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:23:43 compute-1 nova_compute[225855]: 2026-01-20 14:23:43.294 225859 DEBUG oslo_concurrency.lockutils [req-4a02e0f3-5e43-40c5-a02a-2921907eaaf6 req-e781f02b-d3d4-4e05-b5ea-8c6a8af974a6 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-7108f815-a0ef-4f18-a2c2-c796476ace75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:23:43 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:23:43 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:23:43 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:23:43.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:23:44 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:23:44 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:23:44 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:23:44.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:23:44 compute-1 ceph-mon[81775]: pgmap v967: 321 pgs: 321 active+clean; 260 MiB data, 322 MiB used, 21 GiB / 21 GiB avail; 2.5 MiB/s rd, 17 KiB/s wr, 99 op/s
Jan 20 14:23:44 compute-1 ceph-mon[81775]: pgmap v968: 321 pgs: 321 active+clean; 260 MiB data, 322 MiB used, 21 GiB / 21 GiB avail; 3.6 MiB/s rd, 15 KiB/s wr, 133 op/s
Jan 20 14:23:44 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:23:44 compute-1 ceph-osd[79119]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Jan 20 14:23:45 compute-1 nova_compute[225855]: 2026-01-20 14:23:45.298 225859 DEBUG oslo_concurrency.processutils [None req-ba1c8464-f718-4e14-8f03-f8ec1a8424f7 6d292784a7494358a137fea52feffec0 6d9b3587ce494cb8ac153a66886f6883 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 7108f815-a0ef-4f18-a2c2-c796476ace75_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 6.881s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:23:45 compute-1 nova_compute[225855]: 2026-01-20 14:23:45.375 225859 DEBUG nova.storage.rbd_utils [None req-ba1c8464-f718-4e14-8f03-f8ec1a8424f7 6d292784a7494358a137fea52feffec0 6d9b3587ce494cb8ac153a66886f6883 - - default default] resizing rbd image 7108f815-a0ef-4f18-a2c2-c796476ace75_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 20 14:23:45 compute-1 nova_compute[225855]: 2026-01-20 14:23:45.611 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:23:45 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:23:45 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:23:45 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:23:45.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:23:45 compute-1 ceph-mon[81775]: pgmap v969: 321 pgs: 321 active+clean; 260 MiB data, 341 MiB used, 21 GiB / 21 GiB avail; 3.3 MiB/s rd, 15 KiB/s wr, 134 op/s
Jan 20 14:23:45 compute-1 nova_compute[225855]: 2026-01-20 14:23:45.978 225859 DEBUG nova.objects.instance [None req-ba1c8464-f718-4e14-8f03-f8ec1a8424f7 6d292784a7494358a137fea52feffec0 6d9b3587ce494cb8ac153a66886f6883 - - default default] Lazy-loading 'migration_context' on Instance uuid 7108f815-a0ef-4f18-a2c2-c796476ace75 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:23:45 compute-1 nova_compute[225855]: 2026-01-20 14:23:45.993 225859 DEBUG nova.virt.libvirt.driver [None req-ba1c8464-f718-4e14-8f03-f8ec1a8424f7 6d292784a7494358a137fea52feffec0 6d9b3587ce494cb8ac153a66886f6883 - - default default] [instance: 7108f815-a0ef-4f18-a2c2-c796476ace75] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 20 14:23:45 compute-1 nova_compute[225855]: 2026-01-20 14:23:45.993 225859 DEBUG nova.virt.libvirt.driver [None req-ba1c8464-f718-4e14-8f03-f8ec1a8424f7 6d292784a7494358a137fea52feffec0 6d9b3587ce494cb8ac153a66886f6883 - - default default] [instance: 7108f815-a0ef-4f18-a2c2-c796476ace75] Ensure instance console log exists: /var/lib/nova/instances/7108f815-a0ef-4f18-a2c2-c796476ace75/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 20 14:23:45 compute-1 nova_compute[225855]: 2026-01-20 14:23:45.993 225859 DEBUG oslo_concurrency.lockutils [None req-ba1c8464-f718-4e14-8f03-f8ec1a8424f7 6d292784a7494358a137fea52feffec0 6d9b3587ce494cb8ac153a66886f6883 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:23:45 compute-1 nova_compute[225855]: 2026-01-20 14:23:45.994 225859 DEBUG oslo_concurrency.lockutils [None req-ba1c8464-f718-4e14-8f03-f8ec1a8424f7 6d292784a7494358a137fea52feffec0 6d9b3587ce494cb8ac153a66886f6883 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:23:45 compute-1 nova_compute[225855]: 2026-01-20 14:23:45.994 225859 DEBUG oslo_concurrency.lockutils [None req-ba1c8464-f718-4e14-8f03-f8ec1a8424f7 6d292784a7494358a137fea52feffec0 6d9b3587ce494cb8ac153a66886f6883 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:23:45 compute-1 nova_compute[225855]: 2026-01-20 14:23:45.995 225859 DEBUG nova.virt.libvirt.driver [None req-ba1c8464-f718-4e14-8f03-f8ec1a8424f7 6d292784a7494358a137fea52feffec0 6d9b3587ce494cb8ac153a66886f6883 - - default default] [instance: 7108f815-a0ef-4f18-a2c2-c796476ace75] Start _get_guest_xml network_info=[{"id": "4573fd6d-82c4-4e6d-bacb-86f1b5f0bf54", "address": "fa:16:3e:fb:ee:02", "network": {"id": "ec57bdbd-ccb0-4a9c-bb01-a29300b17f8b", "bridge": "br-int", "label": "tempest-VolumesAssistedSnapshotsTest-731576160-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d9b3587ce494cb8ac153a66886f6883", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4573fd6d-82", "ovs_interfaceid": "4573fd6d-82c4-4e6d-bacb-86f1b5f0bf54", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'device_type': 'disk', 'encryption_options': None, 'size': 0, 'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'guest_format': None, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'image_id': 'a32b3e07-16d8-46fd-9a7b-c242c432fcf9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 20 14:23:46 compute-1 nova_compute[225855]: 2026-01-20 14:23:46.000 225859 WARNING nova.virt.libvirt.driver [None req-ba1c8464-f718-4e14-8f03-f8ec1a8424f7 6d292784a7494358a137fea52feffec0 6d9b3587ce494cb8ac153a66886f6883 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 14:23:46 compute-1 nova_compute[225855]: 2026-01-20 14:23:46.004 225859 DEBUG nova.virt.libvirt.host [None req-ba1c8464-f718-4e14-8f03-f8ec1a8424f7 6d292784a7494358a137fea52feffec0 6d9b3587ce494cb8ac153a66886f6883 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 20 14:23:46 compute-1 nova_compute[225855]: 2026-01-20 14:23:46.005 225859 DEBUG nova.virt.libvirt.host [None req-ba1c8464-f718-4e14-8f03-f8ec1a8424f7 6d292784a7494358a137fea52feffec0 6d9b3587ce494cb8ac153a66886f6883 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 20 14:23:46 compute-1 nova_compute[225855]: 2026-01-20 14:23:46.007 225859 DEBUG nova.virt.libvirt.host [None req-ba1c8464-f718-4e14-8f03-f8ec1a8424f7 6d292784a7494358a137fea52feffec0 6d9b3587ce494cb8ac153a66886f6883 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 20 14:23:46 compute-1 nova_compute[225855]: 2026-01-20 14:23:46.007 225859 DEBUG nova.virt.libvirt.host [None req-ba1c8464-f718-4e14-8f03-f8ec1a8424f7 6d292784a7494358a137fea52feffec0 6d9b3587ce494cb8ac153a66886f6883 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 20 14:23:46 compute-1 nova_compute[225855]: 2026-01-20 14:23:46.008 225859 DEBUG nova.virt.libvirt.driver [None req-ba1c8464-f718-4e14-8f03-f8ec1a8424f7 6d292784a7494358a137fea52feffec0 6d9b3587ce494cb8ac153a66886f6883 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 20 14:23:46 compute-1 nova_compute[225855]: 2026-01-20 14:23:46.008 225859 DEBUG nova.virt.hardware [None req-ba1c8464-f718-4e14-8f03-f8ec1a8424f7 6d292784a7494358a137fea52feffec0 6d9b3587ce494cb8ac153a66886f6883 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-20T14:21:55Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='522deaab-a741-4dbb-932d-d8b13a211c33',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 20 14:23:46 compute-1 nova_compute[225855]: 2026-01-20 14:23:46.008 225859 DEBUG nova.virt.hardware [None req-ba1c8464-f718-4e14-8f03-f8ec1a8424f7 6d292784a7494358a137fea52feffec0 6d9b3587ce494cb8ac153a66886f6883 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 20 14:23:46 compute-1 nova_compute[225855]: 2026-01-20 14:23:46.009 225859 DEBUG nova.virt.hardware [None req-ba1c8464-f718-4e14-8f03-f8ec1a8424f7 6d292784a7494358a137fea52feffec0 6d9b3587ce494cb8ac153a66886f6883 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 20 14:23:46 compute-1 nova_compute[225855]: 2026-01-20 14:23:46.009 225859 DEBUG nova.virt.hardware [None req-ba1c8464-f718-4e14-8f03-f8ec1a8424f7 6d292784a7494358a137fea52feffec0 6d9b3587ce494cb8ac153a66886f6883 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 20 14:23:46 compute-1 nova_compute[225855]: 2026-01-20 14:23:46.009 225859 DEBUG nova.virt.hardware [None req-ba1c8464-f718-4e14-8f03-f8ec1a8424f7 6d292784a7494358a137fea52feffec0 6d9b3587ce494cb8ac153a66886f6883 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 20 14:23:46 compute-1 nova_compute[225855]: 2026-01-20 14:23:46.009 225859 DEBUG nova.virt.hardware [None req-ba1c8464-f718-4e14-8f03-f8ec1a8424f7 6d292784a7494358a137fea52feffec0 6d9b3587ce494cb8ac153a66886f6883 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 20 14:23:46 compute-1 nova_compute[225855]: 2026-01-20 14:23:46.009 225859 DEBUG nova.virt.hardware [None req-ba1c8464-f718-4e14-8f03-f8ec1a8424f7 6d292784a7494358a137fea52feffec0 6d9b3587ce494cb8ac153a66886f6883 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 20 14:23:46 compute-1 nova_compute[225855]: 2026-01-20 14:23:46.010 225859 DEBUG nova.virt.hardware [None req-ba1c8464-f718-4e14-8f03-f8ec1a8424f7 6d292784a7494358a137fea52feffec0 6d9b3587ce494cb8ac153a66886f6883 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 20 14:23:46 compute-1 nova_compute[225855]: 2026-01-20 14:23:46.010 225859 DEBUG nova.virt.hardware [None req-ba1c8464-f718-4e14-8f03-f8ec1a8424f7 6d292784a7494358a137fea52feffec0 6d9b3587ce494cb8ac153a66886f6883 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 20 14:23:46 compute-1 nova_compute[225855]: 2026-01-20 14:23:46.010 225859 DEBUG nova.virt.hardware [None req-ba1c8464-f718-4e14-8f03-f8ec1a8424f7 6d292784a7494358a137fea52feffec0 6d9b3587ce494cb8ac153a66886f6883 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 20 14:23:46 compute-1 nova_compute[225855]: 2026-01-20 14:23:46.010 225859 DEBUG nova.virt.hardware [None req-ba1c8464-f718-4e14-8f03-f8ec1a8424f7 6d292784a7494358a137fea52feffec0 6d9b3587ce494cb8ac153a66886f6883 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 20 14:23:46 compute-1 nova_compute[225855]: 2026-01-20 14:23:46.012 225859 DEBUG oslo_concurrency.processutils [None req-ba1c8464-f718-4e14-8f03-f8ec1a8424f7 6d292784a7494358a137fea52feffec0 6d9b3587ce494cb8ac153a66886f6883 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:23:46 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:23:46 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:23:46 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:23:46.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:23:46 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:23:46 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4216244016' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:23:46 compute-1 nova_compute[225855]: 2026-01-20 14:23:46.449 225859 DEBUG oslo_concurrency.processutils [None req-ba1c8464-f718-4e14-8f03-f8ec1a8424f7 6d292784a7494358a137fea52feffec0 6d9b3587ce494cb8ac153a66886f6883 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:23:46 compute-1 nova_compute[225855]: 2026-01-20 14:23:46.475 225859 DEBUG nova.storage.rbd_utils [None req-ba1c8464-f718-4e14-8f03-f8ec1a8424f7 6d292784a7494358a137fea52feffec0 6d9b3587ce494cb8ac153a66886f6883 - - default default] rbd image 7108f815-a0ef-4f18-a2c2-c796476ace75_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:23:46 compute-1 nova_compute[225855]: 2026-01-20 14:23:46.478 225859 DEBUG oslo_concurrency.processutils [None req-ba1c8464-f718-4e14-8f03-f8ec1a8424f7 6d292784a7494358a137fea52feffec0 6d9b3587ce494cb8ac153a66886f6883 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:23:46 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:23:46 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/620535083' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:23:46 compute-1 nova_compute[225855]: 2026-01-20 14:23:46.895 225859 DEBUG oslo_concurrency.processutils [None req-ba1c8464-f718-4e14-8f03-f8ec1a8424f7 6d292784a7494358a137fea52feffec0 6d9b3587ce494cb8ac153a66886f6883 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.417s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:23:46 compute-1 nova_compute[225855]: 2026-01-20 14:23:46.897 225859 DEBUG nova.virt.libvirt.vif [None req-ba1c8464-f718-4e14-8f03-f8ec1a8424f7 6d292784a7494358a137fea52feffec0 6d9b3587ce494cb8ac153a66886f6883 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T14:23:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesAssistedSnapshotsTest-server-3747750',display_name='tempest-VolumesAssistedSnapshotsTest-server-3747750',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-volumesassistedsnapshotstest-server-3747750',id=5,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDFF6f8GejZgbluLYmff+2O5aXyXAHCbauzfTWOArySxtg6k2Kp0zTHap6CKfyD2fWfCywq/R2Wl9LWwxTNjXBxp07Mo6pu1ISB3Tz/DzrJv4Fpmcod9g0TNVrenOml3zQ==',key_name='tempest-keypair-908508212',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6d9b3587ce494cb8ac153a66886f6883',ramdisk_id='',reservation_id='r-3no1g33a',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesAssistedSnapshotsTest-68223103',owner_user_name='tempest-VolumesAssistedSnapshotsTest-68223103-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T14:23:38Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='6d292784a7494358a137fea52feffec0',uuid=7108f815-a0ef-4f18-a2c2-c796476ace75,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4573fd6d-82c4-4e6d-bacb-86f1b5f0bf54", "address": "fa:16:3e:fb:ee:02", "network": {"id": "ec57bdbd-ccb0-4a9c-bb01-a29300b17f8b", "bridge": "br-int", "label": "tempest-VolumesAssistedSnapshotsTest-731576160-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d9b3587ce494cb8ac153a66886f6883", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4573fd6d-82", "ovs_interfaceid": "4573fd6d-82c4-4e6d-bacb-86f1b5f0bf54", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 20 14:23:46 compute-1 nova_compute[225855]: 2026-01-20 14:23:46.898 225859 DEBUG nova.network.os_vif_util [None req-ba1c8464-f718-4e14-8f03-f8ec1a8424f7 6d292784a7494358a137fea52feffec0 6d9b3587ce494cb8ac153a66886f6883 - - default default] Converting VIF {"id": "4573fd6d-82c4-4e6d-bacb-86f1b5f0bf54", "address": "fa:16:3e:fb:ee:02", "network": {"id": "ec57bdbd-ccb0-4a9c-bb01-a29300b17f8b", "bridge": "br-int", "label": "tempest-VolumesAssistedSnapshotsTest-731576160-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d9b3587ce494cb8ac153a66886f6883", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4573fd6d-82", "ovs_interfaceid": "4573fd6d-82c4-4e6d-bacb-86f1b5f0bf54", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:23:46 compute-1 nova_compute[225855]: 2026-01-20 14:23:46.899 225859 DEBUG nova.network.os_vif_util [None req-ba1c8464-f718-4e14-8f03-f8ec1a8424f7 6d292784a7494358a137fea52feffec0 6d9b3587ce494cb8ac153a66886f6883 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fb:ee:02,bridge_name='br-int',has_traffic_filtering=True,id=4573fd6d-82c4-4e6d-bacb-86f1b5f0bf54,network=Network(ec57bdbd-ccb0-4a9c-bb01-a29300b17f8b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4573fd6d-82') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:23:46 compute-1 nova_compute[225855]: 2026-01-20 14:23:46.900 225859 DEBUG nova.objects.instance [None req-ba1c8464-f718-4e14-8f03-f8ec1a8424f7 6d292784a7494358a137fea52feffec0 6d9b3587ce494cb8ac153a66886f6883 - - default default] Lazy-loading 'pci_devices' on Instance uuid 7108f815-a0ef-4f18-a2c2-c796476ace75 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:23:46 compute-1 nova_compute[225855]: 2026-01-20 14:23:46.925 225859 DEBUG nova.virt.libvirt.driver [None req-ba1c8464-f718-4e14-8f03-f8ec1a8424f7 6d292784a7494358a137fea52feffec0 6d9b3587ce494cb8ac153a66886f6883 - - default default] [instance: 7108f815-a0ef-4f18-a2c2-c796476ace75] End _get_guest_xml xml=<domain type="kvm">
Jan 20 14:23:46 compute-1 nova_compute[225855]:   <uuid>7108f815-a0ef-4f18-a2c2-c796476ace75</uuid>
Jan 20 14:23:46 compute-1 nova_compute[225855]:   <name>instance-00000005</name>
Jan 20 14:23:46 compute-1 nova_compute[225855]:   <memory>131072</memory>
Jan 20 14:23:46 compute-1 nova_compute[225855]:   <vcpu>1</vcpu>
Jan 20 14:23:46 compute-1 nova_compute[225855]:   <metadata>
Jan 20 14:23:46 compute-1 nova_compute[225855]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 14:23:46 compute-1 nova_compute[225855]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 14:23:46 compute-1 nova_compute[225855]:       <nova:name>tempest-VolumesAssistedSnapshotsTest-server-3747750</nova:name>
Jan 20 14:23:46 compute-1 nova_compute[225855]:       <nova:creationTime>2026-01-20 14:23:46</nova:creationTime>
Jan 20 14:23:46 compute-1 nova_compute[225855]:       <nova:flavor name="m1.nano">
Jan 20 14:23:46 compute-1 nova_compute[225855]:         <nova:memory>128</nova:memory>
Jan 20 14:23:46 compute-1 nova_compute[225855]:         <nova:disk>1</nova:disk>
Jan 20 14:23:46 compute-1 nova_compute[225855]:         <nova:swap>0</nova:swap>
Jan 20 14:23:46 compute-1 nova_compute[225855]:         <nova:ephemeral>0</nova:ephemeral>
Jan 20 14:23:46 compute-1 nova_compute[225855]:         <nova:vcpus>1</nova:vcpus>
Jan 20 14:23:46 compute-1 nova_compute[225855]:       </nova:flavor>
Jan 20 14:23:46 compute-1 nova_compute[225855]:       <nova:owner>
Jan 20 14:23:46 compute-1 nova_compute[225855]:         <nova:user uuid="6d292784a7494358a137fea52feffec0">tempest-VolumesAssistedSnapshotsTest-68223103-project-member</nova:user>
Jan 20 14:23:46 compute-1 nova_compute[225855]:         <nova:project uuid="6d9b3587ce494cb8ac153a66886f6883">tempest-VolumesAssistedSnapshotsTest-68223103</nova:project>
Jan 20 14:23:46 compute-1 nova_compute[225855]:       </nova:owner>
Jan 20 14:23:46 compute-1 nova_compute[225855]:       <nova:root type="image" uuid="a32b3e07-16d8-46fd-9a7b-c242c432fcf9"/>
Jan 20 14:23:46 compute-1 nova_compute[225855]:       <nova:ports>
Jan 20 14:23:46 compute-1 nova_compute[225855]:         <nova:port uuid="4573fd6d-82c4-4e6d-bacb-86f1b5f0bf54">
Jan 20 14:23:46 compute-1 nova_compute[225855]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Jan 20 14:23:46 compute-1 nova_compute[225855]:         </nova:port>
Jan 20 14:23:46 compute-1 nova_compute[225855]:       </nova:ports>
Jan 20 14:23:46 compute-1 nova_compute[225855]:     </nova:instance>
Jan 20 14:23:46 compute-1 nova_compute[225855]:   </metadata>
Jan 20 14:23:46 compute-1 nova_compute[225855]:   <sysinfo type="smbios">
Jan 20 14:23:46 compute-1 nova_compute[225855]:     <system>
Jan 20 14:23:46 compute-1 nova_compute[225855]:       <entry name="manufacturer">RDO</entry>
Jan 20 14:23:46 compute-1 nova_compute[225855]:       <entry name="product">OpenStack Compute</entry>
Jan 20 14:23:46 compute-1 nova_compute[225855]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 14:23:46 compute-1 nova_compute[225855]:       <entry name="serial">7108f815-a0ef-4f18-a2c2-c796476ace75</entry>
Jan 20 14:23:46 compute-1 nova_compute[225855]:       <entry name="uuid">7108f815-a0ef-4f18-a2c2-c796476ace75</entry>
Jan 20 14:23:46 compute-1 nova_compute[225855]:       <entry name="family">Virtual Machine</entry>
Jan 20 14:23:46 compute-1 nova_compute[225855]:     </system>
Jan 20 14:23:46 compute-1 nova_compute[225855]:   </sysinfo>
Jan 20 14:23:46 compute-1 nova_compute[225855]:   <os>
Jan 20 14:23:46 compute-1 nova_compute[225855]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 20 14:23:46 compute-1 nova_compute[225855]:     <boot dev="hd"/>
Jan 20 14:23:46 compute-1 nova_compute[225855]:     <smbios mode="sysinfo"/>
Jan 20 14:23:46 compute-1 nova_compute[225855]:   </os>
Jan 20 14:23:46 compute-1 nova_compute[225855]:   <features>
Jan 20 14:23:46 compute-1 nova_compute[225855]:     <acpi/>
Jan 20 14:23:46 compute-1 nova_compute[225855]:     <apic/>
Jan 20 14:23:46 compute-1 nova_compute[225855]:     <vmcoreinfo/>
Jan 20 14:23:46 compute-1 nova_compute[225855]:   </features>
Jan 20 14:23:46 compute-1 nova_compute[225855]:   <clock offset="utc">
Jan 20 14:23:46 compute-1 nova_compute[225855]:     <timer name="pit" tickpolicy="delay"/>
Jan 20 14:23:46 compute-1 nova_compute[225855]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 20 14:23:46 compute-1 nova_compute[225855]:     <timer name="hpet" present="no"/>
Jan 20 14:23:46 compute-1 nova_compute[225855]:   </clock>
Jan 20 14:23:46 compute-1 nova_compute[225855]:   <cpu mode="custom" match="exact">
Jan 20 14:23:46 compute-1 nova_compute[225855]:     <model>Nehalem</model>
Jan 20 14:23:46 compute-1 nova_compute[225855]:     <topology sockets="1" cores="1" threads="1"/>
Jan 20 14:23:46 compute-1 nova_compute[225855]:   </cpu>
Jan 20 14:23:46 compute-1 nova_compute[225855]:   <devices>
Jan 20 14:23:46 compute-1 nova_compute[225855]:     <disk type="network" device="disk">
Jan 20 14:23:46 compute-1 nova_compute[225855]:       <driver type="raw" cache="none"/>
Jan 20 14:23:46 compute-1 nova_compute[225855]:       <source protocol="rbd" name="vms/7108f815-a0ef-4f18-a2c2-c796476ace75_disk">
Jan 20 14:23:46 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:23:46 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:23:46 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:23:46 compute-1 nova_compute[225855]:       </source>
Jan 20 14:23:46 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 14:23:46 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:23:46 compute-1 nova_compute[225855]:       </auth>
Jan 20 14:23:46 compute-1 nova_compute[225855]:       <target dev="vda" bus="virtio"/>
Jan 20 14:23:46 compute-1 nova_compute[225855]:     </disk>
Jan 20 14:23:46 compute-1 nova_compute[225855]:     <disk type="network" device="cdrom">
Jan 20 14:23:46 compute-1 nova_compute[225855]:       <driver type="raw" cache="none"/>
Jan 20 14:23:46 compute-1 nova_compute[225855]:       <source protocol="rbd" name="vms/7108f815-a0ef-4f18-a2c2-c796476ace75_disk.config">
Jan 20 14:23:46 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:23:46 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:23:46 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:23:46 compute-1 nova_compute[225855]:       </source>
Jan 20 14:23:46 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 14:23:46 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:23:46 compute-1 nova_compute[225855]:       </auth>
Jan 20 14:23:46 compute-1 nova_compute[225855]:       <target dev="sda" bus="sata"/>
Jan 20 14:23:46 compute-1 nova_compute[225855]:     </disk>
Jan 20 14:23:46 compute-1 nova_compute[225855]:     <interface type="ethernet">
Jan 20 14:23:46 compute-1 nova_compute[225855]:       <mac address="fa:16:3e:fb:ee:02"/>
Jan 20 14:23:46 compute-1 nova_compute[225855]:       <model type="virtio"/>
Jan 20 14:23:46 compute-1 nova_compute[225855]:       <driver name="vhost" rx_queue_size="512"/>
Jan 20 14:23:46 compute-1 nova_compute[225855]:       <mtu size="1442"/>
Jan 20 14:23:46 compute-1 nova_compute[225855]:       <target dev="tap4573fd6d-82"/>
Jan 20 14:23:46 compute-1 nova_compute[225855]:     </interface>
Jan 20 14:23:46 compute-1 nova_compute[225855]:     <serial type="pty">
Jan 20 14:23:46 compute-1 nova_compute[225855]:       <log file="/var/lib/nova/instances/7108f815-a0ef-4f18-a2c2-c796476ace75/console.log" append="off"/>
Jan 20 14:23:46 compute-1 nova_compute[225855]:     </serial>
Jan 20 14:23:46 compute-1 nova_compute[225855]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 20 14:23:46 compute-1 nova_compute[225855]:     <video>
Jan 20 14:23:46 compute-1 nova_compute[225855]:       <model type="virtio"/>
Jan 20 14:23:46 compute-1 nova_compute[225855]:     </video>
Jan 20 14:23:46 compute-1 nova_compute[225855]:     <input type="tablet" bus="usb"/>
Jan 20 14:23:46 compute-1 nova_compute[225855]:     <rng model="virtio">
Jan 20 14:23:46 compute-1 nova_compute[225855]:       <backend model="random">/dev/urandom</backend>
Jan 20 14:23:46 compute-1 nova_compute[225855]:     </rng>
Jan 20 14:23:46 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root"/>
Jan 20 14:23:46 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:23:46 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:23:46 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:23:46 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:23:46 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:23:46 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:23:46 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:23:46 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:23:46 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:23:46 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:23:46 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:23:46 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:23:46 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:23:46 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:23:46 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:23:46 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:23:46 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:23:46 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:23:46 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:23:46 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:23:46 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:23:46 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:23:46 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:23:46 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:23:46 compute-1 nova_compute[225855]:     <controller type="usb" index="0"/>
Jan 20 14:23:46 compute-1 nova_compute[225855]:     <memballoon model="virtio">
Jan 20 14:23:46 compute-1 nova_compute[225855]:       <stats period="10"/>
Jan 20 14:23:46 compute-1 nova_compute[225855]:     </memballoon>
Jan 20 14:23:46 compute-1 nova_compute[225855]:   </devices>
Jan 20 14:23:46 compute-1 nova_compute[225855]: </domain>
Jan 20 14:23:46 compute-1 nova_compute[225855]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 20 14:23:46 compute-1 nova_compute[225855]: 2026-01-20 14:23:46.926 225859 DEBUG nova.compute.manager [None req-ba1c8464-f718-4e14-8f03-f8ec1a8424f7 6d292784a7494358a137fea52feffec0 6d9b3587ce494cb8ac153a66886f6883 - - default default] [instance: 7108f815-a0ef-4f18-a2c2-c796476ace75] Preparing to wait for external event network-vif-plugged-4573fd6d-82c4-4e6d-bacb-86f1b5f0bf54 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 20 14:23:46 compute-1 nova_compute[225855]: 2026-01-20 14:23:46.927 225859 DEBUG oslo_concurrency.lockutils [None req-ba1c8464-f718-4e14-8f03-f8ec1a8424f7 6d292784a7494358a137fea52feffec0 6d9b3587ce494cb8ac153a66886f6883 - - default default] Acquiring lock "7108f815-a0ef-4f18-a2c2-c796476ace75-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:23:46 compute-1 nova_compute[225855]: 2026-01-20 14:23:46.927 225859 DEBUG oslo_concurrency.lockutils [None req-ba1c8464-f718-4e14-8f03-f8ec1a8424f7 6d292784a7494358a137fea52feffec0 6d9b3587ce494cb8ac153a66886f6883 - - default default] Lock "7108f815-a0ef-4f18-a2c2-c796476ace75-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:23:46 compute-1 nova_compute[225855]: 2026-01-20 14:23:46.927 225859 DEBUG oslo_concurrency.lockutils [None req-ba1c8464-f718-4e14-8f03-f8ec1a8424f7 6d292784a7494358a137fea52feffec0 6d9b3587ce494cb8ac153a66886f6883 - - default default] Lock "7108f815-a0ef-4f18-a2c2-c796476ace75-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:23:46 compute-1 nova_compute[225855]: 2026-01-20 14:23:46.928 225859 DEBUG nova.virt.libvirt.vif [None req-ba1c8464-f718-4e14-8f03-f8ec1a8424f7 6d292784a7494358a137fea52feffec0 6d9b3587ce494cb8ac153a66886f6883 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T14:23:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesAssistedSnapshotsTest-server-3747750',display_name='tempest-VolumesAssistedSnapshotsTest-server-3747750',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-volumesassistedsnapshotstest-server-3747750',id=5,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDFF6f8GejZgbluLYmff+2O5aXyXAHCbauzfTWOArySxtg6k2Kp0zTHap6CKfyD2fWfCywq/R2Wl9LWwxTNjXBxp07Mo6pu1ISB3Tz/DzrJv4Fpmcod9g0TNVrenOml3zQ==',key_name='tempest-keypair-908508212',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6d9b3587ce494cb8ac153a66886f6883',ramdisk_id='',reservation_id='r-3no1g33a',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesAssistedSnapshotsTest-68223103',owner_user_name='tempest-VolumesAssistedSnapshotsTest-68223103-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T14:23:38Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='6d292784a7494358a137fea52feffec0',uuid=7108f815-a0ef-4f18-a2c2-c796476ace75,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4573fd6d-82c4-4e6d-bacb-86f1b5f0bf54", "address": "fa:16:3e:fb:ee:02", "network": {"id": "ec57bdbd-ccb0-4a9c-bb01-a29300b17f8b", "bridge": "br-int", "label": "tempest-VolumesAssistedSnapshotsTest-731576160-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d9b3587ce494cb8ac153a66886f6883", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4573fd6d-82", "ovs_interfaceid": "4573fd6d-82c4-4e6d-bacb-86f1b5f0bf54", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 20 14:23:46 compute-1 nova_compute[225855]: 2026-01-20 14:23:46.928 225859 DEBUG nova.network.os_vif_util [None req-ba1c8464-f718-4e14-8f03-f8ec1a8424f7 6d292784a7494358a137fea52feffec0 6d9b3587ce494cb8ac153a66886f6883 - - default default] Converting VIF {"id": "4573fd6d-82c4-4e6d-bacb-86f1b5f0bf54", "address": "fa:16:3e:fb:ee:02", "network": {"id": "ec57bdbd-ccb0-4a9c-bb01-a29300b17f8b", "bridge": "br-int", "label": "tempest-VolumesAssistedSnapshotsTest-731576160-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d9b3587ce494cb8ac153a66886f6883", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4573fd6d-82", "ovs_interfaceid": "4573fd6d-82c4-4e6d-bacb-86f1b5f0bf54", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:23:46 compute-1 nova_compute[225855]: 2026-01-20 14:23:46.929 225859 DEBUG nova.network.os_vif_util [None req-ba1c8464-f718-4e14-8f03-f8ec1a8424f7 6d292784a7494358a137fea52feffec0 6d9b3587ce494cb8ac153a66886f6883 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fb:ee:02,bridge_name='br-int',has_traffic_filtering=True,id=4573fd6d-82c4-4e6d-bacb-86f1b5f0bf54,network=Network(ec57bdbd-ccb0-4a9c-bb01-a29300b17f8b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4573fd6d-82') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:23:46 compute-1 nova_compute[225855]: 2026-01-20 14:23:46.929 225859 DEBUG os_vif [None req-ba1c8464-f718-4e14-8f03-f8ec1a8424f7 6d292784a7494358a137fea52feffec0 6d9b3587ce494cb8ac153a66886f6883 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:fb:ee:02,bridge_name='br-int',has_traffic_filtering=True,id=4573fd6d-82c4-4e6d-bacb-86f1b5f0bf54,network=Network(ec57bdbd-ccb0-4a9c-bb01-a29300b17f8b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4573fd6d-82') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 20 14:23:46 compute-1 nova_compute[225855]: 2026-01-20 14:23:46.930 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:23:46 compute-1 nova_compute[225855]: 2026-01-20 14:23:46.930 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:23:46 compute-1 nova_compute[225855]: 2026-01-20 14:23:46.931 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:23:46 compute-1 nova_compute[225855]: 2026-01-20 14:23:46.934 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:23:46 compute-1 nova_compute[225855]: 2026-01-20 14:23:46.934 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4573fd6d-82, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:23:46 compute-1 nova_compute[225855]: 2026-01-20 14:23:46.934 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap4573fd6d-82, col_values=(('external_ids', {'iface-id': '4573fd6d-82c4-4e6d-bacb-86f1b5f0bf54', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:fb:ee:02', 'vm-uuid': '7108f815-a0ef-4f18-a2c2-c796476ace75'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:23:46 compute-1 nova_compute[225855]: 2026-01-20 14:23:46.936 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:23:46 compute-1 NetworkManager[49104]: <info>  [1768919026.9378] manager: (tap4573fd6d-82): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/27)
Jan 20 14:23:46 compute-1 nova_compute[225855]: 2026-01-20 14:23:46.938 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 14:23:46 compute-1 nova_compute[225855]: 2026-01-20 14:23:46.945 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:23:46 compute-1 nova_compute[225855]: 2026-01-20 14:23:46.946 225859 INFO os_vif [None req-ba1c8464-f718-4e14-8f03-f8ec1a8424f7 6d292784a7494358a137fea52feffec0 6d9b3587ce494cb8ac153a66886f6883 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:fb:ee:02,bridge_name='br-int',has_traffic_filtering=True,id=4573fd6d-82c4-4e6d-bacb-86f1b5f0bf54,network=Network(ec57bdbd-ccb0-4a9c-bb01-a29300b17f8b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4573fd6d-82')
Jan 20 14:23:46 compute-1 nova_compute[225855]: 2026-01-20 14:23:46.996 225859 DEBUG nova.virt.libvirt.driver [None req-ba1c8464-f718-4e14-8f03-f8ec1a8424f7 6d292784a7494358a137fea52feffec0 6d9b3587ce494cb8ac153a66886f6883 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:23:46 compute-1 nova_compute[225855]: 2026-01-20 14:23:46.997 225859 DEBUG nova.virt.libvirt.driver [None req-ba1c8464-f718-4e14-8f03-f8ec1a8424f7 6d292784a7494358a137fea52feffec0 6d9b3587ce494cb8ac153a66886f6883 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:23:46 compute-1 nova_compute[225855]: 2026-01-20 14:23:46.997 225859 DEBUG nova.virt.libvirt.driver [None req-ba1c8464-f718-4e14-8f03-f8ec1a8424f7 6d292784a7494358a137fea52feffec0 6d9b3587ce494cb8ac153a66886f6883 - - default default] No VIF found with MAC fa:16:3e:fb:ee:02, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 20 14:23:46 compute-1 nova_compute[225855]: 2026-01-20 14:23:46.998 225859 INFO nova.virt.libvirt.driver [None req-ba1c8464-f718-4e14-8f03-f8ec1a8424f7 6d292784a7494358a137fea52feffec0 6d9b3587ce494cb8ac153a66886f6883 - - default default] [instance: 7108f815-a0ef-4f18-a2c2-c796476ace75] Using config drive
Jan 20 14:23:47 compute-1 nova_compute[225855]: 2026-01-20 14:23:47.031 225859 DEBUG nova.storage.rbd_utils [None req-ba1c8464-f718-4e14-8f03-f8ec1a8424f7 6d292784a7494358a137fea52feffec0 6d9b3587ce494cb8ac153a66886f6883 - - default default] rbd image 7108f815-a0ef-4f18-a2c2-c796476ace75_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:23:47 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/4216244016' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:23:47 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/620535083' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:23:47 compute-1 nova_compute[225855]: 2026-01-20 14:23:47.449 225859 INFO nova.virt.libvirt.driver [None req-ba1c8464-f718-4e14-8f03-f8ec1a8424f7 6d292784a7494358a137fea52feffec0 6d9b3587ce494cb8ac153a66886f6883 - - default default] [instance: 7108f815-a0ef-4f18-a2c2-c796476ace75] Creating config drive at /var/lib/nova/instances/7108f815-a0ef-4f18-a2c2-c796476ace75/disk.config
Jan 20 14:23:47 compute-1 nova_compute[225855]: 2026-01-20 14:23:47.460 225859 DEBUG oslo_concurrency.processutils [None req-ba1c8464-f718-4e14-8f03-f8ec1a8424f7 6d292784a7494358a137fea52feffec0 6d9b3587ce494cb8ac153a66886f6883 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7108f815-a0ef-4f18-a2c2-c796476ace75/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7r2b_ejf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:23:47 compute-1 nova_compute[225855]: 2026-01-20 14:23:47.594 225859 DEBUG oslo_concurrency.processutils [None req-ba1c8464-f718-4e14-8f03-f8ec1a8424f7 6d292784a7494358a137fea52feffec0 6d9b3587ce494cb8ac153a66886f6883 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7108f815-a0ef-4f18-a2c2-c796476ace75/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7r2b_ejf" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:23:47 compute-1 nova_compute[225855]: 2026-01-20 14:23:47.635 225859 DEBUG nova.storage.rbd_utils [None req-ba1c8464-f718-4e14-8f03-f8ec1a8424f7 6d292784a7494358a137fea52feffec0 6d9b3587ce494cb8ac153a66886f6883 - - default default] rbd image 7108f815-a0ef-4f18-a2c2-c796476ace75_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:23:47 compute-1 nova_compute[225855]: 2026-01-20 14:23:47.641 225859 DEBUG oslo_concurrency.processutils [None req-ba1c8464-f718-4e14-8f03-f8ec1a8424f7 6d292784a7494358a137fea52feffec0 6d9b3587ce494cb8ac153a66886f6883 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/7108f815-a0ef-4f18-a2c2-c796476ace75/disk.config 7108f815-a0ef-4f18-a2c2-c796476ace75_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:23:47 compute-1 nova_compute[225855]: 2026-01-20 14:23:47.666 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:23:47 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:23:47 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:23:47 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:23:47.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:23:47 compute-1 ovn_controller[130490]: 2026-01-20T14:23:47Z|00004|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:5f:bb:6a 10.1.0.38
Jan 20 14:23:47 compute-1 ovn_controller[130490]: 2026-01-20T14:23:47Z|00005|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:5f:bb:6a 10.1.0.38
Jan 20 14:23:48 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:23:48 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:23:48 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:23:48.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:23:48 compute-1 ceph-mon[81775]: pgmap v970: 321 pgs: 321 active+clean; 320 MiB data, 397 MiB used, 21 GiB / 21 GiB avail; 3.2 MiB/s rd, 3.2 MiB/s wr, 170 op/s
Jan 20 14:23:48 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/2442230621' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:23:49 compute-1 nova_compute[225855]: 2026-01-20 14:23:49.016 225859 DEBUG oslo_concurrency.processutils [None req-ba1c8464-f718-4e14-8f03-f8ec1a8424f7 6d292784a7494358a137fea52feffec0 6d9b3587ce494cb8ac153a66886f6883 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/7108f815-a0ef-4f18-a2c2-c796476ace75/disk.config 7108f815-a0ef-4f18-a2c2-c796476ace75_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.375s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:23:49 compute-1 nova_compute[225855]: 2026-01-20 14:23:49.017 225859 INFO nova.virt.libvirt.driver [None req-ba1c8464-f718-4e14-8f03-f8ec1a8424f7 6d292784a7494358a137fea52feffec0 6d9b3587ce494cb8ac153a66886f6883 - - default default] [instance: 7108f815-a0ef-4f18-a2c2-c796476ace75] Deleting local config drive /var/lib/nova/instances/7108f815-a0ef-4f18-a2c2-c796476ace75/disk.config because it was imported into RBD.
Jan 20 14:23:49 compute-1 kernel: tap4573fd6d-82: entered promiscuous mode
Jan 20 14:23:49 compute-1 NetworkManager[49104]: <info>  [1768919029.0649] manager: (tap4573fd6d-82): new Tun device (/org/freedesktop/NetworkManager/Devices/28)
Jan 20 14:23:49 compute-1 ovn_controller[130490]: 2026-01-20T14:23:49Z|00032|binding|INFO|Claiming lport 4573fd6d-82c4-4e6d-bacb-86f1b5f0bf54 for this chassis.
Jan 20 14:23:49 compute-1 ovn_controller[130490]: 2026-01-20T14:23:49Z|00033|binding|INFO|4573fd6d-82c4-4e6d-bacb-86f1b5f0bf54: Claiming fa:16:3e:fb:ee:02 10.100.0.5
Jan 20 14:23:49 compute-1 nova_compute[225855]: 2026-01-20 14:23:49.065 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:23:49 compute-1 nova_compute[225855]: 2026-01-20 14:23:49.069 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:23:49 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:23:49.076 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fb:ee:02 10.100.0.5'], port_security=['fa:16:3e:fb:ee:02 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '7108f815-a0ef-4f18-a2c2-c796476ace75', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ec57bdbd-ccb0-4a9c-bb01-a29300b17f8b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6d9b3587ce494cb8ac153a66886f6883', 'neutron:revision_number': '2', 'neutron:security_group_ids': '980a7c36-df9c-4ecc-beb3-c614c9aa52f6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a61ba6e8-74c1-4e95-8edc-1941c805bf71, chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], logical_port=4573fd6d-82c4-4e6d-bacb-86f1b5f0bf54) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:23:49 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:23:49.077 140354 INFO neutron.agent.ovn.metadata.agent [-] Port 4573fd6d-82c4-4e6d-bacb-86f1b5f0bf54 in datapath ec57bdbd-ccb0-4a9c-bb01-a29300b17f8b bound to our chassis
Jan 20 14:23:49 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:23:49.079 140354 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ec57bdbd-ccb0-4a9c-bb01-a29300b17f8b
Jan 20 14:23:49 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:23:49.089 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[f5f918d3-24b0-41c7-9bdf-4f3b44fd3631]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:23:49 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:23:49.089 140354 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapec57bdbd-c1 in ovnmeta-ec57bdbd-ccb0-4a9c-bb01-a29300b17f8b namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 20 14:23:49 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:23:49.091 229707 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapec57bdbd-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 20 14:23:49 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:23:49.091 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[bcbb1724-cac6-47d1-9659-e43c3b629ad0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:23:49 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:23:49.091 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[927a10a0-de95-41c8-89b0-6d775b5fe337]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:23:49 compute-1 systemd-udevd[230264]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 14:23:49 compute-1 systemd-machined[194361]: New machine qemu-2-instance-00000005.
Jan 20 14:23:49 compute-1 systemd[1]: Started Virtual Machine qemu-2-instance-00000005.
Jan 20 14:23:49 compute-1 NetworkManager[49104]: <info>  [1768919029.1148] device (tap4573fd6d-82): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 20 14:23:49 compute-1 NetworkManager[49104]: <info>  [1768919029.1156] device (tap4573fd6d-82): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 20 14:23:49 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:23:49.116 140466 DEBUG oslo.privsep.daemon [-] privsep: reply[e3283773-2ea4-4dce-a1ea-27346a576097]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:23:49 compute-1 nova_compute[225855]: 2026-01-20 14:23:49.129 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:23:49 compute-1 ovn_controller[130490]: 2026-01-20T14:23:49Z|00034|binding|INFO|Setting lport 4573fd6d-82c4-4e6d-bacb-86f1b5f0bf54 ovn-installed in OVS
Jan 20 14:23:49 compute-1 ovn_controller[130490]: 2026-01-20T14:23:49Z|00035|binding|INFO|Setting lport 4573fd6d-82c4-4e6d-bacb-86f1b5f0bf54 up in Southbound
Jan 20 14:23:49 compute-1 nova_compute[225855]: 2026-01-20 14:23:49.134 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:23:49 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:23:49.141 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[253584da-dd65-4271-a18a-5c3be5bee93d]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:23:49 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:23:49.166 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[19cd1ec3-1cb0-45a1-abd5-7e16d0fb60f0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:23:49 compute-1 NetworkManager[49104]: <info>  [1768919029.1716] manager: (tapec57bdbd-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/29)
Jan 20 14:23:49 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:23:49.170 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[5ba95144-fe93-4e71-b363-bb7dfe240541]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:23:49 compute-1 systemd-udevd[230268]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 14:23:49 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:23:49.202 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[3ae76030-a4ac-4cf7-bc66-d5878148a81b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:23:49 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:23:49.206 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[b1c56e07-2433-4757-abd6-be2362d97157]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:23:49 compute-1 NetworkManager[49104]: <info>  [1768919029.2363] device (tapec57bdbd-c0): carrier: link connected
Jan 20 14:23:49 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:23:49.241 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[e3beb96b-5856-475d-b049-c929fac38b85]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:23:49 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:23:49.258 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[2d6916c4-e346-42cb-914b-64d1157bdf53]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapec57bdbd-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ff:ce:21'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 15], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 406816, 'reachable_time': 43791, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 230297, 'error': None, 'target': 'ovnmeta-ec57bdbd-ccb0-4a9c-bb01-a29300b17f8b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:23:49 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:23:49.272 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[a0db9ce2-1bac-427a-a814-23e8a7601f60]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feff:ce21'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 406816, 'tstamp': 406816}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 230298, 'error': None, 'target': 'ovnmeta-ec57bdbd-ccb0-4a9c-bb01-a29300b17f8b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:23:49 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:23:49.286 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[f8e7ffda-b1e0-46d5-a57b-707e15a4260a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapec57bdbd-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ff:ce:21'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 15], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 406816, 'reachable_time': 43791, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 230299, 'error': None, 'target': 'ovnmeta-ec57bdbd-ccb0-4a9c-bb01-a29300b17f8b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:23:49 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:23:49.310 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[941152a1-9d0b-4428-8f30-9685edfe1a7e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:23:49 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:23:49.363 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[462b43c0-866b-4ba2-a5dd-d2bfda1c4545]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:23:49 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:23:49.364 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapec57bdbd-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:23:49 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:23:49.364 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:23:49 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:23:49.365 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapec57bdbd-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:23:49 compute-1 nova_compute[225855]: 2026-01-20 14:23:49.366 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:23:49 compute-1 kernel: tapec57bdbd-c0: entered promiscuous mode
Jan 20 14:23:49 compute-1 NetworkManager[49104]: <info>  [1768919029.3669] manager: (tapec57bdbd-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/30)
Jan 20 14:23:49 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:23:49.369 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapec57bdbd-c0, col_values=(('external_ids', {'iface-id': '77db07a9-3fb9-423e-8da6-f47ee620aae1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:23:49 compute-1 nova_compute[225855]: 2026-01-20 14:23:49.370 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:23:49 compute-1 ovn_controller[130490]: 2026-01-20T14:23:49Z|00036|binding|INFO|Releasing lport 77db07a9-3fb9-423e-8da6-f47ee620aae1 from this chassis (sb_readonly=0)
Jan 20 14:23:49 compute-1 nova_compute[225855]: 2026-01-20 14:23:49.383 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:23:49 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:23:49.383 140354 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/ec57bdbd-ccb0-4a9c-bb01-a29300b17f8b.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/ec57bdbd-ccb0-4a9c-bb01-a29300b17f8b.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 20 14:23:49 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:23:49.384 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[38e6f293-bb2c-4ca9-a149-5e6b8c55c975]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:23:49 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:23:49.384 140354 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 20 14:23:49 compute-1 ovn_metadata_agent[140349]: global
Jan 20 14:23:49 compute-1 ovn_metadata_agent[140349]:     log         /dev/log local0 debug
Jan 20 14:23:49 compute-1 ovn_metadata_agent[140349]:     log-tag     haproxy-metadata-proxy-ec57bdbd-ccb0-4a9c-bb01-a29300b17f8b
Jan 20 14:23:49 compute-1 ovn_metadata_agent[140349]:     user        root
Jan 20 14:23:49 compute-1 ovn_metadata_agent[140349]:     group       root
Jan 20 14:23:49 compute-1 ovn_metadata_agent[140349]:     maxconn     1024
Jan 20 14:23:49 compute-1 ovn_metadata_agent[140349]:     pidfile     /var/lib/neutron/external/pids/ec57bdbd-ccb0-4a9c-bb01-a29300b17f8b.pid.haproxy
Jan 20 14:23:49 compute-1 ovn_metadata_agent[140349]:     daemon
Jan 20 14:23:49 compute-1 ovn_metadata_agent[140349]: 
Jan 20 14:23:49 compute-1 ovn_metadata_agent[140349]: defaults
Jan 20 14:23:49 compute-1 ovn_metadata_agent[140349]:     log global
Jan 20 14:23:49 compute-1 ovn_metadata_agent[140349]:     mode http
Jan 20 14:23:49 compute-1 ovn_metadata_agent[140349]:     option httplog
Jan 20 14:23:49 compute-1 ovn_metadata_agent[140349]:     option dontlognull
Jan 20 14:23:49 compute-1 ovn_metadata_agent[140349]:     option http-server-close
Jan 20 14:23:49 compute-1 ovn_metadata_agent[140349]:     option forwardfor
Jan 20 14:23:49 compute-1 ovn_metadata_agent[140349]:     retries                 3
Jan 20 14:23:49 compute-1 ovn_metadata_agent[140349]:     timeout http-request    30s
Jan 20 14:23:49 compute-1 ovn_metadata_agent[140349]:     timeout connect         30s
Jan 20 14:23:49 compute-1 ovn_metadata_agent[140349]:     timeout client          32s
Jan 20 14:23:49 compute-1 ovn_metadata_agent[140349]:     timeout server          32s
Jan 20 14:23:49 compute-1 ovn_metadata_agent[140349]:     timeout http-keep-alive 30s
Jan 20 14:23:49 compute-1 ovn_metadata_agent[140349]: 
Jan 20 14:23:49 compute-1 ovn_metadata_agent[140349]: 
Jan 20 14:23:49 compute-1 ovn_metadata_agent[140349]: listen listener
Jan 20 14:23:49 compute-1 ovn_metadata_agent[140349]:     bind 169.254.169.254:80
Jan 20 14:23:49 compute-1 ovn_metadata_agent[140349]:     server metadata /var/lib/neutron/metadata_proxy
Jan 20 14:23:49 compute-1 ovn_metadata_agent[140349]:     http-request add-header X-OVN-Network-ID ec57bdbd-ccb0-4a9c-bb01-a29300b17f8b
Jan 20 14:23:49 compute-1 ovn_metadata_agent[140349]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 20 14:23:49 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:23:49.385 140354 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-ec57bdbd-ccb0-4a9c-bb01-a29300b17f8b', 'env', 'PROCESS_TAG=haproxy-ec57bdbd-ccb0-4a9c-bb01-a29300b17f8b', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/ec57bdbd-ccb0-4a9c-bb01-a29300b17f8b.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 20 14:23:49 compute-1 nova_compute[225855]: 2026-01-20 14:23:49.662 225859 DEBUG nova.compute.manager [req-cfdc8f95-f275-4d7c-bf2a-fcf23f97f6de req-80952d88-2d26-42d6-9abd-091960fa7902 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 7108f815-a0ef-4f18-a2c2-c796476ace75] Received event network-vif-plugged-4573fd6d-82c4-4e6d-bacb-86f1b5f0bf54 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:23:49 compute-1 nova_compute[225855]: 2026-01-20 14:23:49.662 225859 DEBUG oslo_concurrency.lockutils [req-cfdc8f95-f275-4d7c-bf2a-fcf23f97f6de req-80952d88-2d26-42d6-9abd-091960fa7902 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "7108f815-a0ef-4f18-a2c2-c796476ace75-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:23:49 compute-1 nova_compute[225855]: 2026-01-20 14:23:49.663 225859 DEBUG oslo_concurrency.lockutils [req-cfdc8f95-f275-4d7c-bf2a-fcf23f97f6de req-80952d88-2d26-42d6-9abd-091960fa7902 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "7108f815-a0ef-4f18-a2c2-c796476ace75-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:23:49 compute-1 nova_compute[225855]: 2026-01-20 14:23:49.663 225859 DEBUG oslo_concurrency.lockutils [req-cfdc8f95-f275-4d7c-bf2a-fcf23f97f6de req-80952d88-2d26-42d6-9abd-091960fa7902 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "7108f815-a0ef-4f18-a2c2-c796476ace75-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:23:49 compute-1 nova_compute[225855]: 2026-01-20 14:23:49.663 225859 DEBUG nova.compute.manager [req-cfdc8f95-f275-4d7c-bf2a-fcf23f97f6de req-80952d88-2d26-42d6-9abd-091960fa7902 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 7108f815-a0ef-4f18-a2c2-c796476ace75] Processing event network-vif-plugged-4573fd6d-82c4-4e6d-bacb-86f1b5f0bf54 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 20 14:23:49 compute-1 nova_compute[225855]: 2026-01-20 14:23:49.664 225859 DEBUG nova.compute.manager [req-cfdc8f95-f275-4d7c-bf2a-fcf23f97f6de req-80952d88-2d26-42d6-9abd-091960fa7902 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 7108f815-a0ef-4f18-a2c2-c796476ace75] Received event network-vif-plugged-4573fd6d-82c4-4e6d-bacb-86f1b5f0bf54 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:23:49 compute-1 nova_compute[225855]: 2026-01-20 14:23:49.664 225859 DEBUG oslo_concurrency.lockutils [req-cfdc8f95-f275-4d7c-bf2a-fcf23f97f6de req-80952d88-2d26-42d6-9abd-091960fa7902 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "7108f815-a0ef-4f18-a2c2-c796476ace75-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:23:49 compute-1 nova_compute[225855]: 2026-01-20 14:23:49.664 225859 DEBUG oslo_concurrency.lockutils [req-cfdc8f95-f275-4d7c-bf2a-fcf23f97f6de req-80952d88-2d26-42d6-9abd-091960fa7902 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "7108f815-a0ef-4f18-a2c2-c796476ace75-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:23:49 compute-1 nova_compute[225855]: 2026-01-20 14:23:49.665 225859 DEBUG oslo_concurrency.lockutils [req-cfdc8f95-f275-4d7c-bf2a-fcf23f97f6de req-80952d88-2d26-42d6-9abd-091960fa7902 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "7108f815-a0ef-4f18-a2c2-c796476ace75-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:23:49 compute-1 nova_compute[225855]: 2026-01-20 14:23:49.665 225859 DEBUG nova.compute.manager [req-cfdc8f95-f275-4d7c-bf2a-fcf23f97f6de req-80952d88-2d26-42d6-9abd-091960fa7902 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 7108f815-a0ef-4f18-a2c2-c796476ace75] No waiting events found dispatching network-vif-plugged-4573fd6d-82c4-4e6d-bacb-86f1b5f0bf54 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:23:49 compute-1 nova_compute[225855]: 2026-01-20 14:23:49.665 225859 WARNING nova.compute.manager [req-cfdc8f95-f275-4d7c-bf2a-fcf23f97f6de req-80952d88-2d26-42d6-9abd-091960fa7902 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 7108f815-a0ef-4f18-a2c2-c796476ace75] Received unexpected event network-vif-plugged-4573fd6d-82c4-4e6d-bacb-86f1b5f0bf54 for instance with vm_state building and task_state spawning.
Jan 20 14:23:49 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:23:49 compute-1 podman[230350]: 2026-01-20 14:23:49.785430853 +0000 UTC m=+0.051616083 container create d3eef0ae425ec3563d3ad4918f96b80d6c8ea801fd7ae121d12a17a0777b9776 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ec57bdbd-ccb0-4a9c-bb01-a29300b17f8b, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Jan 20 14:23:49 compute-1 systemd[1]: Started libpod-conmon-d3eef0ae425ec3563d3ad4918f96b80d6c8ea801fd7ae121d12a17a0777b9776.scope.
Jan 20 14:23:49 compute-1 podman[230350]: 2026-01-20 14:23:49.757954916 +0000 UTC m=+0.024140146 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 20 14:23:49 compute-1 systemd[1]: Started libcrun container.
Jan 20 14:23:49 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/276037b4c0f1a16a9dd7a2f42c512b2508dc62cc130213b8b7fe0ebd0f5eeb1c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 20 14:23:49 compute-1 podman[230350]: 2026-01-20 14:23:49.893000638 +0000 UTC m=+0.159185888 container init d3eef0ae425ec3563d3ad4918f96b80d6c8ea801fd7ae121d12a17a0777b9776 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ec57bdbd-ccb0-4a9c-bb01-a29300b17f8b, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 20 14:23:49 compute-1 podman[230350]: 2026-01-20 14:23:49.899450818 +0000 UTC m=+0.165636048 container start d3eef0ae425ec3563d3ad4918f96b80d6c8ea801fd7ae121d12a17a0777b9776 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ec57bdbd-ccb0-4a9c-bb01-a29300b17f8b, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 20 14:23:49 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:23:49 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:23:49 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:23:49.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:23:49 compute-1 neutron-haproxy-ovnmeta-ec57bdbd-ccb0-4a9c-bb01-a29300b17f8b[230365]: [NOTICE]   (230369) : New worker (230378) forked
Jan 20 14:23:49 compute-1 neutron-haproxy-ovnmeta-ec57bdbd-ccb0-4a9c-bb01-a29300b17f8b[230365]: [NOTICE]   (230369) : Loading success.
Jan 20 14:23:50 compute-1 ceph-mon[81775]: pgmap v971: 321 pgs: 321 active+clean; 342 MiB data, 410 MiB used, 21 GiB / 21 GiB avail; 2.4 MiB/s rd, 4.2 MiB/s wr, 171 op/s
Jan 20 14:23:50 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:23:50 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:23:50 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:23:50.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:23:50 compute-1 nova_compute[225855]: 2026-01-20 14:23:50.229 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768919030.229209, 7108f815-a0ef-4f18-a2c2-c796476ace75 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:23:50 compute-1 nova_compute[225855]: 2026-01-20 14:23:50.230 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 7108f815-a0ef-4f18-a2c2-c796476ace75] VM Started (Lifecycle Event)
Jan 20 14:23:50 compute-1 nova_compute[225855]: 2026-01-20 14:23:50.232 225859 DEBUG nova.compute.manager [None req-ba1c8464-f718-4e14-8f03-f8ec1a8424f7 6d292784a7494358a137fea52feffec0 6d9b3587ce494cb8ac153a66886f6883 - - default default] [instance: 7108f815-a0ef-4f18-a2c2-c796476ace75] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 20 14:23:50 compute-1 nova_compute[225855]: 2026-01-20 14:23:50.235 225859 DEBUG nova.virt.libvirt.driver [None req-ba1c8464-f718-4e14-8f03-f8ec1a8424f7 6d292784a7494358a137fea52feffec0 6d9b3587ce494cb8ac153a66886f6883 - - default default] [instance: 7108f815-a0ef-4f18-a2c2-c796476ace75] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 20 14:23:50 compute-1 nova_compute[225855]: 2026-01-20 14:23:50.238 225859 INFO nova.virt.libvirt.driver [-] [instance: 7108f815-a0ef-4f18-a2c2-c796476ace75] Instance spawned successfully.
Jan 20 14:23:50 compute-1 nova_compute[225855]: 2026-01-20 14:23:50.238 225859 DEBUG nova.virt.libvirt.driver [None req-ba1c8464-f718-4e14-8f03-f8ec1a8424f7 6d292784a7494358a137fea52feffec0 6d9b3587ce494cb8ac153a66886f6883 - - default default] [instance: 7108f815-a0ef-4f18-a2c2-c796476ace75] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 20 14:23:50 compute-1 nova_compute[225855]: 2026-01-20 14:23:50.258 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 7108f815-a0ef-4f18-a2c2-c796476ace75] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:23:50 compute-1 nova_compute[225855]: 2026-01-20 14:23:50.264 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 7108f815-a0ef-4f18-a2c2-c796476ace75] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:23:50 compute-1 nova_compute[225855]: 2026-01-20 14:23:50.266 225859 DEBUG nova.virt.libvirt.driver [None req-ba1c8464-f718-4e14-8f03-f8ec1a8424f7 6d292784a7494358a137fea52feffec0 6d9b3587ce494cb8ac153a66886f6883 - - default default] [instance: 7108f815-a0ef-4f18-a2c2-c796476ace75] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:23:50 compute-1 nova_compute[225855]: 2026-01-20 14:23:50.267 225859 DEBUG nova.virt.libvirt.driver [None req-ba1c8464-f718-4e14-8f03-f8ec1a8424f7 6d292784a7494358a137fea52feffec0 6d9b3587ce494cb8ac153a66886f6883 - - default default] [instance: 7108f815-a0ef-4f18-a2c2-c796476ace75] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:23:50 compute-1 nova_compute[225855]: 2026-01-20 14:23:50.267 225859 DEBUG nova.virt.libvirt.driver [None req-ba1c8464-f718-4e14-8f03-f8ec1a8424f7 6d292784a7494358a137fea52feffec0 6d9b3587ce494cb8ac153a66886f6883 - - default default] [instance: 7108f815-a0ef-4f18-a2c2-c796476ace75] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:23:50 compute-1 nova_compute[225855]: 2026-01-20 14:23:50.267 225859 DEBUG nova.virt.libvirt.driver [None req-ba1c8464-f718-4e14-8f03-f8ec1a8424f7 6d292784a7494358a137fea52feffec0 6d9b3587ce494cb8ac153a66886f6883 - - default default] [instance: 7108f815-a0ef-4f18-a2c2-c796476ace75] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:23:50 compute-1 nova_compute[225855]: 2026-01-20 14:23:50.268 225859 DEBUG nova.virt.libvirt.driver [None req-ba1c8464-f718-4e14-8f03-f8ec1a8424f7 6d292784a7494358a137fea52feffec0 6d9b3587ce494cb8ac153a66886f6883 - - default default] [instance: 7108f815-a0ef-4f18-a2c2-c796476ace75] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:23:50 compute-1 nova_compute[225855]: 2026-01-20 14:23:50.268 225859 DEBUG nova.virt.libvirt.driver [None req-ba1c8464-f718-4e14-8f03-f8ec1a8424f7 6d292784a7494358a137fea52feffec0 6d9b3587ce494cb8ac153a66886f6883 - - default default] [instance: 7108f815-a0ef-4f18-a2c2-c796476ace75] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:23:50 compute-1 nova_compute[225855]: 2026-01-20 14:23:50.336 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 7108f815-a0ef-4f18-a2c2-c796476ace75] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 14:23:50 compute-1 nova_compute[225855]: 2026-01-20 14:23:50.337 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768919030.2294133, 7108f815-a0ef-4f18-a2c2-c796476ace75 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:23:50 compute-1 nova_compute[225855]: 2026-01-20 14:23:50.337 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 7108f815-a0ef-4f18-a2c2-c796476ace75] VM Paused (Lifecycle Event)
Jan 20 14:23:50 compute-1 nova_compute[225855]: 2026-01-20 14:23:50.362 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 7108f815-a0ef-4f18-a2c2-c796476ace75] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:23:50 compute-1 nova_compute[225855]: 2026-01-20 14:23:50.365 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768919030.234734, 7108f815-a0ef-4f18-a2c2-c796476ace75 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:23:50 compute-1 nova_compute[225855]: 2026-01-20 14:23:50.365 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 7108f815-a0ef-4f18-a2c2-c796476ace75] VM Resumed (Lifecycle Event)
Jan 20 14:23:50 compute-1 nova_compute[225855]: 2026-01-20 14:23:50.383 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 7108f815-a0ef-4f18-a2c2-c796476ace75] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:23:50 compute-1 nova_compute[225855]: 2026-01-20 14:23:50.386 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 7108f815-a0ef-4f18-a2c2-c796476ace75] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:23:50 compute-1 nova_compute[225855]: 2026-01-20 14:23:50.390 225859 INFO nova.compute.manager [None req-ba1c8464-f718-4e14-8f03-f8ec1a8424f7 6d292784a7494358a137fea52feffec0 6d9b3587ce494cb8ac153a66886f6883 - - default default] [instance: 7108f815-a0ef-4f18-a2c2-c796476ace75] Took 12.22 seconds to spawn the instance on the hypervisor.
Jan 20 14:23:50 compute-1 nova_compute[225855]: 2026-01-20 14:23:50.391 225859 DEBUG nova.compute.manager [None req-ba1c8464-f718-4e14-8f03-f8ec1a8424f7 6d292784a7494358a137fea52feffec0 6d9b3587ce494cb8ac153a66886f6883 - - default default] [instance: 7108f815-a0ef-4f18-a2c2-c796476ace75] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:23:50 compute-1 nova_compute[225855]: 2026-01-20 14:23:50.424 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 7108f815-a0ef-4f18-a2c2-c796476ace75] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 14:23:50 compute-1 nova_compute[225855]: 2026-01-20 14:23:50.462 225859 INFO nova.compute.manager [None req-ba1c8464-f718-4e14-8f03-f8ec1a8424f7 6d292784a7494358a137fea52feffec0 6d9b3587ce494cb8ac153a66886f6883 - - default default] [instance: 7108f815-a0ef-4f18-a2c2-c796476ace75] Took 17.56 seconds to build instance.
Jan 20 14:23:50 compute-1 nova_compute[225855]: 2026-01-20 14:23:50.479 225859 DEBUG oslo_concurrency.lockutils [None req-ba1c8464-f718-4e14-8f03-f8ec1a8424f7 6d292784a7494358a137fea52feffec0 6d9b3587ce494cb8ac153a66886f6883 - - default default] Lock "7108f815-a0ef-4f18-a2c2-c796476ace75" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 17.664s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:23:51 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:23:51 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:23:51 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:23:51.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:23:51 compute-1 nova_compute[225855]: 2026-01-20 14:23:51.939 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:23:52 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:23:52 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:23:52 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:23:52.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:23:52 compute-1 nova_compute[225855]: 2026-01-20 14:23:52.339 225859 DEBUG oslo_concurrency.lockutils [None req-9974fa1b-d548-41bd-ab8e-423c279bb2a9 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Acquiring lock "21e70820-70b1-4bb9-bb8d-62fb69c2298b" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:23:52 compute-1 nova_compute[225855]: 2026-01-20 14:23:52.340 225859 DEBUG oslo_concurrency.lockutils [None req-9974fa1b-d548-41bd-ab8e-423c279bb2a9 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Lock "21e70820-70b1-4bb9-bb8d-62fb69c2298b" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:23:52 compute-1 nova_compute[225855]: 2026-01-20 14:23:52.341 225859 DEBUG oslo_concurrency.lockutils [None req-9974fa1b-d548-41bd-ab8e-423c279bb2a9 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Acquiring lock "21e70820-70b1-4bb9-bb8d-62fb69c2298b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:23:52 compute-1 nova_compute[225855]: 2026-01-20 14:23:52.341 225859 DEBUG oslo_concurrency.lockutils [None req-9974fa1b-d548-41bd-ab8e-423c279bb2a9 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Lock "21e70820-70b1-4bb9-bb8d-62fb69c2298b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:23:52 compute-1 nova_compute[225855]: 2026-01-20 14:23:52.341 225859 DEBUG oslo_concurrency.lockutils [None req-9974fa1b-d548-41bd-ab8e-423c279bb2a9 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Lock "21e70820-70b1-4bb9-bb8d-62fb69c2298b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:23:52 compute-1 nova_compute[225855]: 2026-01-20 14:23:52.343 225859 INFO nova.compute.manager [None req-9974fa1b-d548-41bd-ab8e-423c279bb2a9 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] [instance: 21e70820-70b1-4bb9-bb8d-62fb69c2298b] Terminating instance
Jan 20 14:23:52 compute-1 nova_compute[225855]: 2026-01-20 14:23:52.345 225859 DEBUG nova.compute.manager [None req-9974fa1b-d548-41bd-ab8e-423c279bb2a9 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] [instance: 21e70820-70b1-4bb9-bb8d-62fb69c2298b] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 20 14:23:52 compute-1 ceph-mon[81775]: pgmap v972: 321 pgs: 321 active+clean; 355 MiB data, 415 MiB used, 21 GiB / 21 GiB avail; 3.0 MiB/s rd, 4.7 MiB/s wr, 203 op/s
Jan 20 14:23:52 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/142931646' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:23:52 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/341761694' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:23:52 compute-1 nova_compute[225855]: 2026-01-20 14:23:52.628 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:23:52 compute-1 kernel: tap68f29a43-1c (unregistering): left promiscuous mode
Jan 20 14:23:52 compute-1 NetworkManager[49104]: <info>  [1768919032.6607] device (tap68f29a43-1c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 20 14:23:52 compute-1 ovn_controller[130490]: 2026-01-20T14:23:52Z|00037|binding|INFO|Releasing lport 68f29a43-1cc4-44f2-953b-48d1d2795097 from this chassis (sb_readonly=0)
Jan 20 14:23:52 compute-1 ovn_controller[130490]: 2026-01-20T14:23:52Z|00038|binding|INFO|Setting lport 68f29a43-1cc4-44f2-953b-48d1d2795097 down in Southbound
Jan 20 14:23:52 compute-1 ovn_controller[130490]: 2026-01-20T14:23:52Z|00039|binding|INFO|Removing iface tap68f29a43-1c ovn-installed in OVS
Jan 20 14:23:52 compute-1 nova_compute[225855]: 2026-01-20 14:23:52.672 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:23:52 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:23:52.677 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5f:bb:6a 10.1.0.38 fdfe:381f:8400::3b3'], port_security=['fa:16:3e:5f:bb:6a 10.1.0.38 fdfe:381f:8400::3b3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.1.0.38/26 fdfe:381f:8400::3b3/64', 'neutron:device_id': '21e70820-70b1-4bb9-bb8d-62fb69c2298b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-abfbbc51-530d-4964-87bc-9fe4ef7eea76', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e024eef627014f829fa6e45ffe36c281', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd16ec2ec-302d-4208-8497-5b8aae342313', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=72f351f7-24c7-4d5d-b1a1-e23b4cd26746, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], logical_port=68f29a43-1cc4-44f2-953b-48d1d2795097) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:23:52 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:23:52.678 140354 INFO neutron.agent.ovn.metadata.agent [-] Port 68f29a43-1cc4-44f2-953b-48d1d2795097 in datapath abfbbc51-530d-4964-87bc-9fe4ef7eea76 unbound from our chassis
Jan 20 14:23:52 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:23:52.680 140354 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network abfbbc51-530d-4964-87bc-9fe4ef7eea76, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 20 14:23:52 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:23:52.681 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[1d032a2c-71bb-4144-bbda-a6275adef405]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:23:52 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:23:52.682 140354 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-abfbbc51-530d-4964-87bc-9fe4ef7eea76 namespace which is not needed anymore
Jan 20 14:23:52 compute-1 nova_compute[225855]: 2026-01-20 14:23:52.697 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:23:52 compute-1 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000002.scope: Deactivated successfully.
Jan 20 14:23:52 compute-1 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000002.scope: Consumed 14.275s CPU time.
Jan 20 14:23:52 compute-1 systemd-machined[194361]: Machine qemu-1-instance-00000002 terminated.
Jan 20 14:23:52 compute-1 nova_compute[225855]: 2026-01-20 14:23:52.772 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:23:52 compute-1 nova_compute[225855]: 2026-01-20 14:23:52.779 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:23:52 compute-1 nova_compute[225855]: 2026-01-20 14:23:52.790 225859 INFO nova.virt.libvirt.driver [-] [instance: 21e70820-70b1-4bb9-bb8d-62fb69c2298b] Instance destroyed successfully.
Jan 20 14:23:52 compute-1 nova_compute[225855]: 2026-01-20 14:23:52.791 225859 DEBUG nova.objects.instance [None req-9974fa1b-d548-41bd-ab8e-423c279bb2a9 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Lazy-loading 'resources' on Instance uuid 21e70820-70b1-4bb9-bb8d-62fb69c2298b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:23:52 compute-1 nova_compute[225855]: 2026-01-20 14:23:52.815 225859 DEBUG nova.virt.libvirt.vif [None req-9974fa1b-d548-41bd-ab8e-423c279bb2a9 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T14:22:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-tempest.common.compute-instance-63374895-1',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-63374895-1',id=2,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-20T14:23:31Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='e024eef627014f829fa6e45ffe36c281',ramdisk_id='',reservation_id='r-46o7356r',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_projec
t_name='tempest-AutoAllocateNetworkTest-314960358',owner_user_name='tempest-AutoAllocateNetworkTest-314960358-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T14:23:31Z,user_data=None,user_id='918f290d4c414b71807eacf0b27ad165',uuid=21e70820-70b1-4bb9-bb8d-62fb69c2298b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "68f29a43-1cc4-44f2-953b-48d1d2795097", "address": "fa:16:3e:5f:bb:6a", "network": {"id": "abfbbc51-530d-4964-87bc-9fe4ef7eea76", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "fdfe:381f:8400::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400::3b3", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}, {"cidr": "10.1.0.0/26", "dns": [], "gateway": {"address": "10.1.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.38", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e024eef627014f829fa6e45ffe36c281", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68f29a43-1c", "ovs_interfaceid": "68f29a43-1cc4-44f2-953b-48d1d2795097", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 20 14:23:52 compute-1 nova_compute[225855]: 2026-01-20 14:23:52.817 225859 DEBUG nova.network.os_vif_util [None req-9974fa1b-d548-41bd-ab8e-423c279bb2a9 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Converting VIF {"id": "68f29a43-1cc4-44f2-953b-48d1d2795097", "address": "fa:16:3e:5f:bb:6a", "network": {"id": "abfbbc51-530d-4964-87bc-9fe4ef7eea76", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "fdfe:381f:8400::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400::3b3", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}, {"cidr": "10.1.0.0/26", "dns": [], "gateway": {"address": "10.1.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.38", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e024eef627014f829fa6e45ffe36c281", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68f29a43-1c", "ovs_interfaceid": "68f29a43-1cc4-44f2-953b-48d1d2795097", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:23:52 compute-1 nova_compute[225855]: 2026-01-20 14:23:52.818 225859 DEBUG nova.network.os_vif_util [None req-9974fa1b-d548-41bd-ab8e-423c279bb2a9 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5f:bb:6a,bridge_name='br-int',has_traffic_filtering=True,id=68f29a43-1cc4-44f2-953b-48d1d2795097,network=Network(abfbbc51-530d-4964-87bc-9fe4ef7eea76),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap68f29a43-1c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:23:52 compute-1 nova_compute[225855]: 2026-01-20 14:23:52.818 225859 DEBUG os_vif [None req-9974fa1b-d548-41bd-ab8e-423c279bb2a9 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5f:bb:6a,bridge_name='br-int',has_traffic_filtering=True,id=68f29a43-1cc4-44f2-953b-48d1d2795097,network=Network(abfbbc51-530d-4964-87bc-9fe4ef7eea76),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap68f29a43-1c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 20 14:23:52 compute-1 nova_compute[225855]: 2026-01-20 14:23:52.823 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:23:52 compute-1 nova_compute[225855]: 2026-01-20 14:23:52.824 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap68f29a43-1c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:23:52 compute-1 nova_compute[225855]: 2026-01-20 14:23:52.826 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:23:52 compute-1 nova_compute[225855]: 2026-01-20 14:23:52.829 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 14:23:52 compute-1 nova_compute[225855]: 2026-01-20 14:23:52.833 225859 INFO os_vif [None req-9974fa1b-d548-41bd-ab8e-423c279bb2a9 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5f:bb:6a,bridge_name='br-int',has_traffic_filtering=True,id=68f29a43-1cc4-44f2-953b-48d1d2795097,network=Network(abfbbc51-530d-4964-87bc-9fe4ef7eea76),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap68f29a43-1c')
Jan 20 14:23:52 compute-1 neutron-haproxy-ovnmeta-abfbbc51-530d-4964-87bc-9fe4ef7eea76[229866]: [NOTICE]   (229885) : haproxy version is 2.8.14-c23fe91
Jan 20 14:23:52 compute-1 neutron-haproxy-ovnmeta-abfbbc51-530d-4964-87bc-9fe4ef7eea76[229866]: [NOTICE]   (229885) : path to executable is /usr/sbin/haproxy
Jan 20 14:23:52 compute-1 neutron-haproxy-ovnmeta-abfbbc51-530d-4964-87bc-9fe4ef7eea76[229866]: [WARNING]  (229885) : Exiting Master process...
Jan 20 14:23:52 compute-1 neutron-haproxy-ovnmeta-abfbbc51-530d-4964-87bc-9fe4ef7eea76[229866]: [ALERT]    (229885) : Current worker (229889) exited with code 143 (Terminated)
Jan 20 14:23:52 compute-1 neutron-haproxy-ovnmeta-abfbbc51-530d-4964-87bc-9fe4ef7eea76[229866]: [WARNING]  (229885) : All workers exited. Exiting... (0)
Jan 20 14:23:52 compute-1 systemd[1]: libpod-d0fb244dd0bdc95ddb75bb18a552554e22daebc4ba5177ada30f919dd83cac7c.scope: Deactivated successfully.
Jan 20 14:23:52 compute-1 podman[230435]: 2026-01-20 14:23:52.875182029 +0000 UTC m=+0.059655587 container died d0fb244dd0bdc95ddb75bb18a552554e22daebc4ba5177ada30f919dd83cac7c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-abfbbc51-530d-4964-87bc-9fe4ef7eea76, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202)
Jan 20 14:23:52 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d0fb244dd0bdc95ddb75bb18a552554e22daebc4ba5177ada30f919dd83cac7c-userdata-shm.mount: Deactivated successfully.
Jan 20 14:23:52 compute-1 systemd[1]: var-lib-containers-storage-overlay-ce11b97c0626c78060e5093ae253c072bdc788907d20e002f102fb4643485e00-merged.mount: Deactivated successfully.
Jan 20 14:23:52 compute-1 podman[230435]: 2026-01-20 14:23:52.919049345 +0000 UTC m=+0.103522933 container cleanup d0fb244dd0bdc95ddb75bb18a552554e22daebc4ba5177ada30f919dd83cac7c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-abfbbc51-530d-4964-87bc-9fe4ef7eea76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 20 14:23:52 compute-1 systemd[1]: libpod-conmon-d0fb244dd0bdc95ddb75bb18a552554e22daebc4ba5177ada30f919dd83cac7c.scope: Deactivated successfully.
Jan 20 14:23:52 compute-1 podman[230480]: 2026-01-20 14:23:52.990500491 +0000 UTC m=+0.044468443 container remove d0fb244dd0bdc95ddb75bb18a552554e22daebc4ba5177ada30f919dd83cac7c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-abfbbc51-530d-4964-87bc-9fe4ef7eea76, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Jan 20 14:23:52 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:23:52.997 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[23b2894b-0dd9-440a-a485-7e3199efda77]: (4, ('Tue Jan 20 02:23:52 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-abfbbc51-530d-4964-87bc-9fe4ef7eea76 (d0fb244dd0bdc95ddb75bb18a552554e22daebc4ba5177ada30f919dd83cac7c)\nd0fb244dd0bdc95ddb75bb18a552554e22daebc4ba5177ada30f919dd83cac7c\nTue Jan 20 02:23:52 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-abfbbc51-530d-4964-87bc-9fe4ef7eea76 (d0fb244dd0bdc95ddb75bb18a552554e22daebc4ba5177ada30f919dd83cac7c)\nd0fb244dd0bdc95ddb75bb18a552554e22daebc4ba5177ada30f919dd83cac7c\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:23:52 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:23:52.998 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[c031307e-6485-4156-a9cc-66c665198b1c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:23:52 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:23:52.999 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapabfbbc51-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:23:53 compute-1 kernel: tapabfbbc51-50: left promiscuous mode
Jan 20 14:23:53 compute-1 nova_compute[225855]: 2026-01-20 14:23:53.002 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:23:53 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:23:53.020 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[9da2b541-c60e-4449-a7fe-45f0472787e1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:23:53 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:23:53.033 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[bd981004-9e1b-41c4-9d55-87b4cdc7cf41]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:23:53 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:23:53.035 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[2f5744ab-0a92-43d1-9c3e-27d4dc287783]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:23:53 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:23:53.054 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[eeef833b-60cc-43dc-a80b-1da0e782ab0e]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 405158, 'reachable_time': 17746, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 230495, 'error': None, 'target': 'ovnmeta-abfbbc51-530d-4964-87bc-9fe4ef7eea76', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:23:53 compute-1 systemd[1]: run-netns-ovnmeta\x2dabfbbc51\x2d530d\x2d4964\x2d87bc\x2d9fe4ef7eea76.mount: Deactivated successfully.
Jan 20 14:23:53 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:23:53.071 140466 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-abfbbc51-530d-4964-87bc-9fe4ef7eea76 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 20 14:23:53 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:23:53.072 140466 DEBUG oslo.privsep.daemon [-] privsep: reply[c64562db-3805-4cda-9716-d6f4c60f43e6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:23:53 compute-1 podman[230497]: 2026-01-20 14:23:53.211759852 +0000 UTC m=+0.113789890 container health_status 72be95c12e041eb2cd1a16ad70dfa1391d9f03a5f3180a614ead9459dcfcd46f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, 
config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Jan 20 14:23:53 compute-1 nova_compute[225855]: 2026-01-20 14:23:53.618 225859 DEBUG nova.compute.manager [req-a6c8a314-39f3-47d6-ab55-a13881a0bf9c req-e6b70f25-3661-4b0f-9ef9-e15bacc59f99 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 21e70820-70b1-4bb9-bb8d-62fb69c2298b] Received event network-vif-unplugged-68f29a43-1cc4-44f2-953b-48d1d2795097 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:23:53 compute-1 nova_compute[225855]: 2026-01-20 14:23:53.619 225859 DEBUG oslo_concurrency.lockutils [req-a6c8a314-39f3-47d6-ab55-a13881a0bf9c req-e6b70f25-3661-4b0f-9ef9-e15bacc59f99 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "21e70820-70b1-4bb9-bb8d-62fb69c2298b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:23:53 compute-1 nova_compute[225855]: 2026-01-20 14:23:53.620 225859 DEBUG oslo_concurrency.lockutils [req-a6c8a314-39f3-47d6-ab55-a13881a0bf9c req-e6b70f25-3661-4b0f-9ef9-e15bacc59f99 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "21e70820-70b1-4bb9-bb8d-62fb69c2298b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:23:53 compute-1 nova_compute[225855]: 2026-01-20 14:23:53.620 225859 DEBUG oslo_concurrency.lockutils [req-a6c8a314-39f3-47d6-ab55-a13881a0bf9c req-e6b70f25-3661-4b0f-9ef9-e15bacc59f99 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "21e70820-70b1-4bb9-bb8d-62fb69c2298b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:23:53 compute-1 nova_compute[225855]: 2026-01-20 14:23:53.620 225859 DEBUG nova.compute.manager [req-a6c8a314-39f3-47d6-ab55-a13881a0bf9c req-e6b70f25-3661-4b0f-9ef9-e15bacc59f99 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 21e70820-70b1-4bb9-bb8d-62fb69c2298b] No waiting events found dispatching network-vif-unplugged-68f29a43-1cc4-44f2-953b-48d1d2795097 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:23:53 compute-1 nova_compute[225855]: 2026-01-20 14:23:53.621 225859 DEBUG nova.compute.manager [req-a6c8a314-39f3-47d6-ab55-a13881a0bf9c req-e6b70f25-3661-4b0f-9ef9-e15bacc59f99 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 21e70820-70b1-4bb9-bb8d-62fb69c2298b] Received event network-vif-unplugged-68f29a43-1cc4-44f2-953b-48d1d2795097 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 20 14:23:53 compute-1 NetworkManager[49104]: <info>  [1768919033.6680] manager: (patch-br-int-to-provnet-b62c391b-f7a3-4a38-a0df-72ac0383ca74): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/31)
Jan 20 14:23:53 compute-1 NetworkManager[49104]: <info>  [1768919033.6689] device (patch-br-int-to-provnet-b62c391b-f7a3-4a38-a0df-72ac0383ca74)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 20 14:23:53 compute-1 NetworkManager[49104]: <warn>  [1768919033.6693] device (patch-br-int-to-provnet-b62c391b-f7a3-4a38-a0df-72ac0383ca74)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 20 14:23:53 compute-1 NetworkManager[49104]: <info>  [1768919033.6710] manager: (patch-provnet-b62c391b-f7a3-4a38-a0df-72ac0383ca74-to-br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/32)
Jan 20 14:23:53 compute-1 NetworkManager[49104]: <info>  [1768919033.6715] device (patch-provnet-b62c391b-f7a3-4a38-a0df-72ac0383ca74-to-br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 20 14:23:53 compute-1 NetworkManager[49104]: <warn>  [1768919033.6716] device (patch-provnet-b62c391b-f7a3-4a38-a0df-72ac0383ca74-to-br-int)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 20 14:23:53 compute-1 NetworkManager[49104]: <info>  [1768919033.6727] manager: (patch-provnet-b62c391b-f7a3-4a38-a0df-72ac0383ca74-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/33)
Jan 20 14:23:53 compute-1 NetworkManager[49104]: <info>  [1768919033.6736] manager: (patch-br-int-to-provnet-b62c391b-f7a3-4a38-a0df-72ac0383ca74): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/34)
Jan 20 14:23:53 compute-1 NetworkManager[49104]: <info>  [1768919033.6743] device (patch-br-int-to-provnet-b62c391b-f7a3-4a38-a0df-72ac0383ca74)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Jan 20 14:23:53 compute-1 NetworkManager[49104]: <info>  [1768919033.6748] device (patch-provnet-b62c391b-f7a3-4a38-a0df-72ac0383ca74-to-br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Jan 20 14:23:53 compute-1 nova_compute[225855]: 2026-01-20 14:23:53.682 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:23:53 compute-1 nova_compute[225855]: 2026-01-20 14:23:53.872 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:23:53 compute-1 ovn_controller[130490]: 2026-01-20T14:23:53Z|00040|binding|INFO|Releasing lport 77db07a9-3fb9-423e-8da6-f47ee620aae1 from this chassis (sb_readonly=0)
Jan 20 14:23:53 compute-1 nova_compute[225855]: 2026-01-20 14:23:53.903 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:23:53 compute-1 ceph-mon[81775]: pgmap v973: 321 pgs: 321 active+clean; 379 MiB data, 425 MiB used, 21 GiB / 21 GiB avail; 5.4 MiB/s rd, 5.5 MiB/s wr, 302 op/s
Jan 20 14:23:53 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:23:53 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:23:53 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:23:53.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:23:53 compute-1 nova_compute[225855]: 2026-01-20 14:23:53.950 225859 INFO nova.virt.libvirt.driver [None req-9974fa1b-d548-41bd-ab8e-423c279bb2a9 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] [instance: 21e70820-70b1-4bb9-bb8d-62fb69c2298b] Deleting instance files /var/lib/nova/instances/21e70820-70b1-4bb9-bb8d-62fb69c2298b_del
Jan 20 14:23:53 compute-1 nova_compute[225855]: 2026-01-20 14:23:53.951 225859 INFO nova.virt.libvirt.driver [None req-9974fa1b-d548-41bd-ab8e-423c279bb2a9 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] [instance: 21e70820-70b1-4bb9-bb8d-62fb69c2298b] Deletion of /var/lib/nova/instances/21e70820-70b1-4bb9-bb8d-62fb69c2298b_del complete
Jan 20 14:23:54 compute-1 nova_compute[225855]: 2026-01-20 14:23:54.016 225859 DEBUG nova.virt.libvirt.host [None req-9974fa1b-d548-41bd-ab8e-423c279bb2a9 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Checking UEFI support for host arch (x86_64) supports_uefi /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1754
Jan 20 14:23:54 compute-1 nova_compute[225855]: 2026-01-20 14:23:54.017 225859 INFO nova.virt.libvirt.host [None req-9974fa1b-d548-41bd-ab8e-423c279bb2a9 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] UEFI support detected
Jan 20 14:23:54 compute-1 nova_compute[225855]: 2026-01-20 14:23:54.019 225859 INFO nova.compute.manager [None req-9974fa1b-d548-41bd-ab8e-423c279bb2a9 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] [instance: 21e70820-70b1-4bb9-bb8d-62fb69c2298b] Took 1.67 seconds to destroy the instance on the hypervisor.
Jan 20 14:23:54 compute-1 nova_compute[225855]: 2026-01-20 14:23:54.019 225859 DEBUG oslo.service.loopingcall [None req-9974fa1b-d548-41bd-ab8e-423c279bb2a9 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 20 14:23:54 compute-1 nova_compute[225855]: 2026-01-20 14:23:54.020 225859 DEBUG nova.compute.manager [-] [instance: 21e70820-70b1-4bb9-bb8d-62fb69c2298b] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 20 14:23:54 compute-1 nova_compute[225855]: 2026-01-20 14:23:54.020 225859 DEBUG nova.network.neutron [-] [instance: 21e70820-70b1-4bb9-bb8d-62fb69c2298b] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 20 14:23:54 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:23:54 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:23:54 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:23:54.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:23:54 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:23:55 compute-1 nova_compute[225855]: 2026-01-20 14:23:55.282 225859 DEBUG nova.network.neutron [-] [instance: 21e70820-70b1-4bb9-bb8d-62fb69c2298b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:23:55 compute-1 nova_compute[225855]: 2026-01-20 14:23:55.327 225859 INFO nova.compute.manager [-] [instance: 21e70820-70b1-4bb9-bb8d-62fb69c2298b] Took 1.31 seconds to deallocate network for instance.
Jan 20 14:23:55 compute-1 nova_compute[225855]: 2026-01-20 14:23:55.371 225859 DEBUG nova.compute.manager [req-3da07f72-8dbe-41df-8198-0d3ce5de7d57 req-86ae9924-37ce-4876-ac05-70a773919a74 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 21e70820-70b1-4bb9-bb8d-62fb69c2298b] Received event network-vif-deleted-68f29a43-1cc4-44f2-953b-48d1d2795097 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:23:55 compute-1 nova_compute[225855]: 2026-01-20 14:23:55.406 225859 DEBUG oslo_concurrency.lockutils [None req-9974fa1b-d548-41bd-ab8e-423c279bb2a9 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:23:55 compute-1 nova_compute[225855]: 2026-01-20 14:23:55.407 225859 DEBUG oslo_concurrency.lockutils [None req-9974fa1b-d548-41bd-ab8e-423c279bb2a9 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:23:55 compute-1 nova_compute[225855]: 2026-01-20 14:23:55.476 225859 DEBUG oslo_concurrency.processutils [None req-9974fa1b-d548-41bd-ab8e-423c279bb2a9 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:23:55 compute-1 nova_compute[225855]: 2026-01-20 14:23:55.793 225859 DEBUG nova.compute.manager [req-c21b39c5-4c05-4423-aed1-64bbfc0a7165 req-a21623b6-8c65-4ed6-ad66-049a61b2b898 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 21e70820-70b1-4bb9-bb8d-62fb69c2298b] Received event network-vif-plugged-68f29a43-1cc4-44f2-953b-48d1d2795097 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:23:55 compute-1 nova_compute[225855]: 2026-01-20 14:23:55.794 225859 DEBUG oslo_concurrency.lockutils [req-c21b39c5-4c05-4423-aed1-64bbfc0a7165 req-a21623b6-8c65-4ed6-ad66-049a61b2b898 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "21e70820-70b1-4bb9-bb8d-62fb69c2298b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:23:55 compute-1 nova_compute[225855]: 2026-01-20 14:23:55.794 225859 DEBUG oslo_concurrency.lockutils [req-c21b39c5-4c05-4423-aed1-64bbfc0a7165 req-a21623b6-8c65-4ed6-ad66-049a61b2b898 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "21e70820-70b1-4bb9-bb8d-62fb69c2298b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:23:55 compute-1 nova_compute[225855]: 2026-01-20 14:23:55.794 225859 DEBUG oslo_concurrency.lockutils [req-c21b39c5-4c05-4423-aed1-64bbfc0a7165 req-a21623b6-8c65-4ed6-ad66-049a61b2b898 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "21e70820-70b1-4bb9-bb8d-62fb69c2298b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:23:55 compute-1 nova_compute[225855]: 2026-01-20 14:23:55.795 225859 DEBUG nova.compute.manager [req-c21b39c5-4c05-4423-aed1-64bbfc0a7165 req-a21623b6-8c65-4ed6-ad66-049a61b2b898 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 21e70820-70b1-4bb9-bb8d-62fb69c2298b] No waiting events found dispatching network-vif-plugged-68f29a43-1cc4-44f2-953b-48d1d2795097 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:23:55 compute-1 nova_compute[225855]: 2026-01-20 14:23:55.795 225859 WARNING nova.compute.manager [req-c21b39c5-4c05-4423-aed1-64bbfc0a7165 req-a21623b6-8c65-4ed6-ad66-049a61b2b898 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 21e70820-70b1-4bb9-bb8d-62fb69c2298b] Received unexpected event network-vif-plugged-68f29a43-1cc4-44f2-953b-48d1d2795097 for instance with vm_state deleted and task_state None.
Jan 20 14:23:55 compute-1 nova_compute[225855]: 2026-01-20 14:23:55.795 225859 DEBUG nova.compute.manager [req-c21b39c5-4c05-4423-aed1-64bbfc0a7165 req-a21623b6-8c65-4ed6-ad66-049a61b2b898 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 7108f815-a0ef-4f18-a2c2-c796476ace75] Received event network-changed-4573fd6d-82c4-4e6d-bacb-86f1b5f0bf54 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:23:55 compute-1 nova_compute[225855]: 2026-01-20 14:23:55.795 225859 DEBUG nova.compute.manager [req-c21b39c5-4c05-4423-aed1-64bbfc0a7165 req-a21623b6-8c65-4ed6-ad66-049a61b2b898 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 7108f815-a0ef-4f18-a2c2-c796476ace75] Refreshing instance network info cache due to event network-changed-4573fd6d-82c4-4e6d-bacb-86f1b5f0bf54. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 14:23:55 compute-1 nova_compute[225855]: 2026-01-20 14:23:55.796 225859 DEBUG oslo_concurrency.lockutils [req-c21b39c5-4c05-4423-aed1-64bbfc0a7165 req-a21623b6-8c65-4ed6-ad66-049a61b2b898 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-7108f815-a0ef-4f18-a2c2-c796476ace75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:23:55 compute-1 nova_compute[225855]: 2026-01-20 14:23:55.796 225859 DEBUG oslo_concurrency.lockutils [req-c21b39c5-4c05-4423-aed1-64bbfc0a7165 req-a21623b6-8c65-4ed6-ad66-049a61b2b898 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-7108f815-a0ef-4f18-a2c2-c796476ace75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:23:55 compute-1 nova_compute[225855]: 2026-01-20 14:23:55.796 225859 DEBUG nova.network.neutron [req-c21b39c5-4c05-4423-aed1-64bbfc0a7165 req-a21623b6-8c65-4ed6-ad66-049a61b2b898 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 7108f815-a0ef-4f18-a2c2-c796476ace75] Refreshing network info cache for port 4573fd6d-82c4-4e6d-bacb-86f1b5f0bf54 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 14:23:55 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:23:55 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:23:55 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:23:55.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:23:55 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:23:55 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3389830566' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:23:55 compute-1 nova_compute[225855]: 2026-01-20 14:23:55.963 225859 DEBUG oslo_concurrency.processutils [None req-9974fa1b-d548-41bd-ab8e-423c279bb2a9 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:23:55 compute-1 nova_compute[225855]: 2026-01-20 14:23:55.969 225859 DEBUG nova.compute.provider_tree [None req-9974fa1b-d548-41bd-ab8e-423c279bb2a9 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:23:55 compute-1 nova_compute[225855]: 2026-01-20 14:23:55.988 225859 DEBUG nova.scheduler.client.report [None req-9974fa1b-d548-41bd-ab8e-423c279bb2a9 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:23:56 compute-1 nova_compute[225855]: 2026-01-20 14:23:56.010 225859 DEBUG oslo_concurrency.lockutils [None req-9974fa1b-d548-41bd-ab8e-423c279bb2a9 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.604s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:23:56 compute-1 nova_compute[225855]: 2026-01-20 14:23:56.038 225859 INFO nova.scheduler.client.report [None req-9974fa1b-d548-41bd-ab8e-423c279bb2a9 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Deleted allocations for instance 21e70820-70b1-4bb9-bb8d-62fb69c2298b
Jan 20 14:23:56 compute-1 nova_compute[225855]: 2026-01-20 14:23:56.128 225859 DEBUG oslo_concurrency.lockutils [None req-9974fa1b-d548-41bd-ab8e-423c279bb2a9 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Lock "21e70820-70b1-4bb9-bb8d-62fb69c2298b" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.787s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:23:56 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:23:56 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:23:56 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:23:56.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:23:56 compute-1 ceph-mon[81775]: pgmap v974: 321 pgs: 321 active+clean; 365 MiB data, 417 MiB used, 21 GiB / 21 GiB avail; 4.6 MiB/s rd, 5.9 MiB/s wr, 288 op/s
Jan 20 14:23:56 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/3389830566' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:23:56 compute-1 sudo[230548]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:23:56 compute-1 sudo[230548]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:23:56 compute-1 sudo[230548]: pam_unix(sudo:session): session closed for user root
Jan 20 14:23:56 compute-1 sudo[230573]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:23:56 compute-1 sudo[230573]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:23:56 compute-1 sudo[230573]: pam_unix(sudo:session): session closed for user root
Jan 20 14:23:56 compute-1 nova_compute[225855]: 2026-01-20 14:23:56.962 225859 DEBUG nova.network.neutron [req-c21b39c5-4c05-4423-aed1-64bbfc0a7165 req-a21623b6-8c65-4ed6-ad66-049a61b2b898 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 7108f815-a0ef-4f18-a2c2-c796476ace75] Updated VIF entry in instance network info cache for port 4573fd6d-82c4-4e6d-bacb-86f1b5f0bf54. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 14:23:56 compute-1 nova_compute[225855]: 2026-01-20 14:23:56.963 225859 DEBUG nova.network.neutron [req-c21b39c5-4c05-4423-aed1-64bbfc0a7165 req-a21623b6-8c65-4ed6-ad66-049a61b2b898 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 7108f815-a0ef-4f18-a2c2-c796476ace75] Updating instance_info_cache with network_info: [{"id": "4573fd6d-82c4-4e6d-bacb-86f1b5f0bf54", "address": "fa:16:3e:fb:ee:02", "network": {"id": "ec57bdbd-ccb0-4a9c-bb01-a29300b17f8b", "bridge": "br-int", "label": "tempest-VolumesAssistedSnapshotsTest-731576160-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.225", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d9b3587ce494cb8ac153a66886f6883", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4573fd6d-82", "ovs_interfaceid": "4573fd6d-82c4-4e6d-bacb-86f1b5f0bf54", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:23:56 compute-1 nova_compute[225855]: 2026-01-20 14:23:56.985 225859 DEBUG oslo_concurrency.lockutils [req-c21b39c5-4c05-4423-aed1-64bbfc0a7165 req-a21623b6-8c65-4ed6-ad66-049a61b2b898 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-7108f815-a0ef-4f18-a2c2-c796476ace75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:23:57 compute-1 nova_compute[225855]: 2026-01-20 14:23:57.686 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:23:57 compute-1 nova_compute[225855]: 2026-01-20 14:23:57.827 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:23:57 compute-1 nova_compute[225855]: 2026-01-20 14:23:57.858 225859 DEBUG oslo_concurrency.lockutils [None req-4405ef98-0a62-45b6-834e-08b08e21c37c 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Acquiring lock "4bfa17e5-6bfc-42e2-9c3f-959596201da0" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:23:57 compute-1 nova_compute[225855]: 2026-01-20 14:23:57.859 225859 DEBUG oslo_concurrency.lockutils [None req-4405ef98-0a62-45b6-834e-08b08e21c37c 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Lock "4bfa17e5-6bfc-42e2-9c3f-959596201da0" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:23:57 compute-1 nova_compute[225855]: 2026-01-20 14:23:57.882 225859 DEBUG nova.compute.manager [None req-4405ef98-0a62-45b6-834e-08b08e21c37c 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] [instance: 4bfa17e5-6bfc-42e2-9c3f-959596201da0] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 20 14:23:57 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:23:57 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:23:57 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:23:57.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:23:57 compute-1 nova_compute[225855]: 2026-01-20 14:23:57.978 225859 DEBUG oslo_concurrency.lockutils [None req-4405ef98-0a62-45b6-834e-08b08e21c37c 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:23:57 compute-1 nova_compute[225855]: 2026-01-20 14:23:57.980 225859 DEBUG oslo_concurrency.lockutils [None req-4405ef98-0a62-45b6-834e-08b08e21c37c 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:23:57 compute-1 nova_compute[225855]: 2026-01-20 14:23:57.990 225859 DEBUG nova.virt.hardware [None req-4405ef98-0a62-45b6-834e-08b08e21c37c 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 20 14:23:57 compute-1 nova_compute[225855]: 2026-01-20 14:23:57.992 225859 INFO nova.compute.claims [None req-4405ef98-0a62-45b6-834e-08b08e21c37c 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] [instance: 4bfa17e5-6bfc-42e2-9c3f-959596201da0] Claim successful on node compute-1.ctlplane.example.com
Jan 20 14:23:58 compute-1 nova_compute[225855]: 2026-01-20 14:23:58.114 225859 DEBUG oslo_concurrency.processutils [None req-4405ef98-0a62-45b6-834e-08b08e21c37c 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:23:58 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:23:58 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:23:58 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:23:58.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:23:58 compute-1 ceph-mon[81775]: pgmap v975: 321 pgs: 321 active+clean; 334 MiB data, 409 MiB used, 21 GiB / 21 GiB avail; 6.1 MiB/s rd, 7.8 MiB/s wr, 395 op/s
Jan 20 14:23:58 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:23:58 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2656191454' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:23:58 compute-1 nova_compute[225855]: 2026-01-20 14:23:58.587 225859 DEBUG oslo_concurrency.processutils [None req-4405ef98-0a62-45b6-834e-08b08e21c37c 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:23:58 compute-1 nova_compute[225855]: 2026-01-20 14:23:58.594 225859 DEBUG nova.compute.provider_tree [None req-4405ef98-0a62-45b6-834e-08b08e21c37c 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:23:58 compute-1 nova_compute[225855]: 2026-01-20 14:23:58.620 225859 DEBUG nova.scheduler.client.report [None req-4405ef98-0a62-45b6-834e-08b08e21c37c 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:23:58 compute-1 nova_compute[225855]: 2026-01-20 14:23:58.653 225859 DEBUG oslo_concurrency.lockutils [None req-4405ef98-0a62-45b6-834e-08b08e21c37c 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.673s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:23:58 compute-1 nova_compute[225855]: 2026-01-20 14:23:58.655 225859 DEBUG nova.compute.manager [None req-4405ef98-0a62-45b6-834e-08b08e21c37c 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] [instance: 4bfa17e5-6bfc-42e2-9c3f-959596201da0] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 20 14:23:58 compute-1 nova_compute[225855]: 2026-01-20 14:23:58.697 225859 DEBUG nova.compute.manager [None req-4405ef98-0a62-45b6-834e-08b08e21c37c 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] [instance: 4bfa17e5-6bfc-42e2-9c3f-959596201da0] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 20 14:23:58 compute-1 nova_compute[225855]: 2026-01-20 14:23:58.698 225859 DEBUG nova.network.neutron [None req-4405ef98-0a62-45b6-834e-08b08e21c37c 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] [instance: 4bfa17e5-6bfc-42e2-9c3f-959596201da0] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 20 14:23:58 compute-1 nova_compute[225855]: 2026-01-20 14:23:58.718 225859 INFO nova.virt.libvirt.driver [None req-4405ef98-0a62-45b6-834e-08b08e21c37c 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] [instance: 4bfa17e5-6bfc-42e2-9c3f-959596201da0] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 20 14:23:58 compute-1 nova_compute[225855]: 2026-01-20 14:23:58.739 225859 DEBUG nova.compute.manager [None req-4405ef98-0a62-45b6-834e-08b08e21c37c 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] [instance: 4bfa17e5-6bfc-42e2-9c3f-959596201da0] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 20 14:23:58 compute-1 nova_compute[225855]: 2026-01-20 14:23:58.848 225859 DEBUG nova.compute.manager [None req-4405ef98-0a62-45b6-834e-08b08e21c37c 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] [instance: 4bfa17e5-6bfc-42e2-9c3f-959596201da0] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 20 14:23:58 compute-1 nova_compute[225855]: 2026-01-20 14:23:58.851 225859 DEBUG nova.virt.libvirt.driver [None req-4405ef98-0a62-45b6-834e-08b08e21c37c 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] [instance: 4bfa17e5-6bfc-42e2-9c3f-959596201da0] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 20 14:23:58 compute-1 nova_compute[225855]: 2026-01-20 14:23:58.852 225859 INFO nova.virt.libvirt.driver [None req-4405ef98-0a62-45b6-834e-08b08e21c37c 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] [instance: 4bfa17e5-6bfc-42e2-9c3f-959596201da0] Creating image(s)
Jan 20 14:23:58 compute-1 nova_compute[225855]: 2026-01-20 14:23:58.887 225859 DEBUG nova.storage.rbd_utils [None req-4405ef98-0a62-45b6-834e-08b08e21c37c 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] rbd image 4bfa17e5-6bfc-42e2-9c3f-959596201da0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:23:58 compute-1 nova_compute[225855]: 2026-01-20 14:23:58.923 225859 DEBUG nova.storage.rbd_utils [None req-4405ef98-0a62-45b6-834e-08b08e21c37c 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] rbd image 4bfa17e5-6bfc-42e2-9c3f-959596201da0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:23:58 compute-1 nova_compute[225855]: 2026-01-20 14:23:58.957 225859 DEBUG nova.storage.rbd_utils [None req-4405ef98-0a62-45b6-834e-08b08e21c37c 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] rbd image 4bfa17e5-6bfc-42e2-9c3f-959596201da0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:23:58 compute-1 nova_compute[225855]: 2026-01-20 14:23:58.962 225859 DEBUG oslo_concurrency.processutils [None req-4405ef98-0a62-45b6-834e-08b08e21c37c 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:23:59 compute-1 nova_compute[225855]: 2026-01-20 14:23:59.009 225859 DEBUG nova.network.neutron [None req-4405ef98-0a62-45b6-834e-08b08e21c37c 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] [instance: 4bfa17e5-6bfc-42e2-9c3f-959596201da0] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Jan 20 14:23:59 compute-1 nova_compute[225855]: 2026-01-20 14:23:59.010 225859 DEBUG nova.compute.manager [None req-4405ef98-0a62-45b6-834e-08b08e21c37c 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] [instance: 4bfa17e5-6bfc-42e2-9c3f-959596201da0] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 20 14:23:59 compute-1 nova_compute[225855]: 2026-01-20 14:23:59.032 225859 DEBUG oslo_concurrency.processutils [None req-4405ef98-0a62-45b6-834e-08b08e21c37c 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:23:59 compute-1 nova_compute[225855]: 2026-01-20 14:23:59.033 225859 DEBUG oslo_concurrency.lockutils [None req-4405ef98-0a62-45b6-834e-08b08e21c37c 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Acquiring lock "82d5c1918fd7c974214c7a48c1793a7a82560462" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:23:59 compute-1 nova_compute[225855]: 2026-01-20 14:23:59.033 225859 DEBUG oslo_concurrency.lockutils [None req-4405ef98-0a62-45b6-834e-08b08e21c37c 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:23:59 compute-1 nova_compute[225855]: 2026-01-20 14:23:59.034 225859 DEBUG oslo_concurrency.lockutils [None req-4405ef98-0a62-45b6-834e-08b08e21c37c 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:23:59 compute-1 nova_compute[225855]: 2026-01-20 14:23:59.063 225859 DEBUG nova.storage.rbd_utils [None req-4405ef98-0a62-45b6-834e-08b08e21c37c 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] rbd image 4bfa17e5-6bfc-42e2-9c3f-959596201da0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:23:59 compute-1 nova_compute[225855]: 2026-01-20 14:23:59.067 225859 DEBUG oslo_concurrency.processutils [None req-4405ef98-0a62-45b6-834e-08b08e21c37c 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 4bfa17e5-6bfc-42e2-9c3f-959596201da0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:23:59 compute-1 nova_compute[225855]: 2026-01-20 14:23:59.298 225859 DEBUG nova.virt.libvirt.driver [None req-07ee1594-2ec5-4256-9a6d-4ec5059ca613 b9adeab2c2f5486e92ca8534ef11c720 a7bb8a09ecaa40e8980b2ed19afa279f - - default default] [instance: d1be7a29-3496-40ab-b61f-694622b7453b] Creating tmpfile /var/lib/nova/instances/tmpq6nhof9a to notify to other compute nodes that they should mount the same storage. _create_shared_storage_test_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10041
Jan 20 14:23:59 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/2656191454' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:23:59 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/1743970707' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:23:59 compute-1 nova_compute[225855]: 2026-01-20 14:23:59.350 225859 DEBUG oslo_concurrency.processutils [None req-4405ef98-0a62-45b6-834e-08b08e21c37c 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 4bfa17e5-6bfc-42e2-9c3f-959596201da0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.283s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:23:59 compute-1 nova_compute[225855]: 2026-01-20 14:23:59.408 225859 DEBUG nova.storage.rbd_utils [None req-4405ef98-0a62-45b6-834e-08b08e21c37c 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] resizing rbd image 4bfa17e5-6bfc-42e2-9c3f-959596201da0_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 20 14:23:59 compute-1 nova_compute[225855]: 2026-01-20 14:23:59.439 225859 DEBUG nova.compute.manager [None req-07ee1594-2ec5-4256-9a6d-4ec5059ca613 b9adeab2c2f5486e92ca8534ef11c720 a7bb8a09ecaa40e8980b2ed19afa279f - - default default] destination check data is LibvirtLiveMigrateData(bdms=<?>,block_migration=<?>,disk_available_mb=19456,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpq6nhof9a',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path=<?>,is_shared_block_storage=<?>,is_shared_instance_path=<?>,is_volume_backed=<?>,migration=<?>,old_vol_attachment_ids=<?>,serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) check_can_live_migrate_destination /usr/lib/python3.9/site-packages/nova/compute/manager.py:8476
Jan 20 14:23:59 compute-1 nova_compute[225855]: 2026-01-20 14:23:59.455 225859 DEBUG oslo_concurrency.lockutils [None req-07ee1594-2ec5-4256-9a6d-4ec5059ca613 b9adeab2c2f5486e92ca8534ef11c720 a7bb8a09ecaa40e8980b2ed19afa279f - - default default] Acquiring lock "compute-rpcapi-router" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:23:59 compute-1 nova_compute[225855]: 2026-01-20 14:23:59.455 225859 DEBUG oslo_concurrency.lockutils [None req-07ee1594-2ec5-4256-9a6d-4ec5059ca613 b9adeab2c2f5486e92ca8534ef11c720 a7bb8a09ecaa40e8980b2ed19afa279f - - default default] Acquired lock "compute-rpcapi-router" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:23:59 compute-1 nova_compute[225855]: 2026-01-20 14:23:59.466 225859 INFO nova.compute.rpcapi [None req-07ee1594-2ec5-4256-9a6d-4ec5059ca613 b9adeab2c2f5486e92ca8534ef11c720 a7bb8a09ecaa40e8980b2ed19afa279f - - default default] Automatically selected compute RPC version 6.2 from minimum service version 66
Jan 20 14:23:59 compute-1 nova_compute[225855]: 2026-01-20 14:23:59.466 225859 DEBUG oslo_concurrency.lockutils [None req-07ee1594-2ec5-4256-9a6d-4ec5059ca613 b9adeab2c2f5486e92ca8534ef11c720 a7bb8a09ecaa40e8980b2ed19afa279f - - default default] Releasing lock "compute-rpcapi-router" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:23:59 compute-1 nova_compute[225855]: 2026-01-20 14:23:59.522 225859 DEBUG nova.objects.instance [None req-4405ef98-0a62-45b6-834e-08b08e21c37c 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Lazy-loading 'migration_context' on Instance uuid 4bfa17e5-6bfc-42e2-9c3f-959596201da0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:23:59 compute-1 nova_compute[225855]: 2026-01-20 14:23:59.533 225859 DEBUG nova.virt.libvirt.driver [None req-4405ef98-0a62-45b6-834e-08b08e21c37c 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] [instance: 4bfa17e5-6bfc-42e2-9c3f-959596201da0] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 20 14:23:59 compute-1 nova_compute[225855]: 2026-01-20 14:23:59.533 225859 DEBUG nova.virt.libvirt.driver [None req-4405ef98-0a62-45b6-834e-08b08e21c37c 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] [instance: 4bfa17e5-6bfc-42e2-9c3f-959596201da0] Ensure instance console log exists: /var/lib/nova/instances/4bfa17e5-6bfc-42e2-9c3f-959596201da0/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 20 14:23:59 compute-1 nova_compute[225855]: 2026-01-20 14:23:59.534 225859 DEBUG oslo_concurrency.lockutils [None req-4405ef98-0a62-45b6-834e-08b08e21c37c 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:23:59 compute-1 nova_compute[225855]: 2026-01-20 14:23:59.534 225859 DEBUG oslo_concurrency.lockutils [None req-4405ef98-0a62-45b6-834e-08b08e21c37c 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:23:59 compute-1 nova_compute[225855]: 2026-01-20 14:23:59.535 225859 DEBUG oslo_concurrency.lockutils [None req-4405ef98-0a62-45b6-834e-08b08e21c37c 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:23:59 compute-1 nova_compute[225855]: 2026-01-20 14:23:59.537 225859 DEBUG nova.virt.libvirt.driver [None req-4405ef98-0a62-45b6-834e-08b08e21c37c 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] [instance: 4bfa17e5-6bfc-42e2-9c3f-959596201da0] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'device_type': 'disk', 'encryption_options': None, 'size': 0, 'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'guest_format': None, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'image_id': 'a32b3e07-16d8-46fd-9a7b-c242c432fcf9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 20 14:23:59 compute-1 nova_compute[225855]: 2026-01-20 14:23:59.543 225859 WARNING nova.virt.libvirt.driver [None req-4405ef98-0a62-45b6-834e-08b08e21c37c 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 14:23:59 compute-1 nova_compute[225855]: 2026-01-20 14:23:59.546 225859 DEBUG nova.virt.libvirt.host [None req-4405ef98-0a62-45b6-834e-08b08e21c37c 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 20 14:23:59 compute-1 nova_compute[225855]: 2026-01-20 14:23:59.547 225859 DEBUG nova.virt.libvirt.host [None req-4405ef98-0a62-45b6-834e-08b08e21c37c 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 20 14:23:59 compute-1 nova_compute[225855]: 2026-01-20 14:23:59.549 225859 DEBUG nova.virt.libvirt.host [None req-4405ef98-0a62-45b6-834e-08b08e21c37c 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 20 14:23:59 compute-1 nova_compute[225855]: 2026-01-20 14:23:59.550 225859 DEBUG nova.virt.libvirt.host [None req-4405ef98-0a62-45b6-834e-08b08e21c37c 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 20 14:23:59 compute-1 nova_compute[225855]: 2026-01-20 14:23:59.550 225859 DEBUG nova.virt.libvirt.driver [None req-4405ef98-0a62-45b6-834e-08b08e21c37c 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 20 14:23:59 compute-1 nova_compute[225855]: 2026-01-20 14:23:59.551 225859 DEBUG nova.virt.hardware [None req-4405ef98-0a62-45b6-834e-08b08e21c37c 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-20T14:21:55Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='522deaab-a741-4dbb-932d-d8b13a211c33',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 20 14:23:59 compute-1 nova_compute[225855]: 2026-01-20 14:23:59.551 225859 DEBUG nova.virt.hardware [None req-4405ef98-0a62-45b6-834e-08b08e21c37c 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 20 14:23:59 compute-1 nova_compute[225855]: 2026-01-20 14:23:59.551 225859 DEBUG nova.virt.hardware [None req-4405ef98-0a62-45b6-834e-08b08e21c37c 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 20 14:23:59 compute-1 nova_compute[225855]: 2026-01-20 14:23:59.551 225859 DEBUG nova.virt.hardware [None req-4405ef98-0a62-45b6-834e-08b08e21c37c 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 20 14:23:59 compute-1 nova_compute[225855]: 2026-01-20 14:23:59.552 225859 DEBUG nova.virt.hardware [None req-4405ef98-0a62-45b6-834e-08b08e21c37c 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 20 14:23:59 compute-1 nova_compute[225855]: 2026-01-20 14:23:59.552 225859 DEBUG nova.virt.hardware [None req-4405ef98-0a62-45b6-834e-08b08e21c37c 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 20 14:23:59 compute-1 nova_compute[225855]: 2026-01-20 14:23:59.552 225859 DEBUG nova.virt.hardware [None req-4405ef98-0a62-45b6-834e-08b08e21c37c 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 20 14:23:59 compute-1 nova_compute[225855]: 2026-01-20 14:23:59.552 225859 DEBUG nova.virt.hardware [None req-4405ef98-0a62-45b6-834e-08b08e21c37c 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 20 14:23:59 compute-1 nova_compute[225855]: 2026-01-20 14:23:59.553 225859 DEBUG nova.virt.hardware [None req-4405ef98-0a62-45b6-834e-08b08e21c37c 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 20 14:23:59 compute-1 nova_compute[225855]: 2026-01-20 14:23:59.553 225859 DEBUG nova.virt.hardware [None req-4405ef98-0a62-45b6-834e-08b08e21c37c 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 20 14:23:59 compute-1 nova_compute[225855]: 2026-01-20 14:23:59.553 225859 DEBUG nova.virt.hardware [None req-4405ef98-0a62-45b6-834e-08b08e21c37c 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 20 14:23:59 compute-1 nova_compute[225855]: 2026-01-20 14:23:59.555 225859 DEBUG oslo_concurrency.processutils [None req-4405ef98-0a62-45b6-834e-08b08e21c37c 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:23:59 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:23:59 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:23:59 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:23:59 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:23:59.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:23:59 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:23:59 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1468624899' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:23:59 compute-1 nova_compute[225855]: 2026-01-20 14:23:59.990 225859 DEBUG oslo_concurrency.processutils [None req-4405ef98-0a62-45b6-834e-08b08e21c37c 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:24:00 compute-1 nova_compute[225855]: 2026-01-20 14:24:00.014 225859 DEBUG nova.storage.rbd_utils [None req-4405ef98-0a62-45b6-834e-08b08e21c37c 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] rbd image 4bfa17e5-6bfc-42e2-9c3f-959596201da0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:24:00 compute-1 nova_compute[225855]: 2026-01-20 14:24:00.018 225859 DEBUG oslo_concurrency.processutils [None req-4405ef98-0a62-45b6-834e-08b08e21c37c 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:24:00 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:24:00 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:24:00 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:24:00.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:24:00 compute-1 ceph-mon[81775]: pgmap v976: 321 pgs: 321 active+clean; 326 MiB data, 401 MiB used, 21 GiB / 21 GiB avail; 5.8 MiB/s rd, 4.6 MiB/s wr, 355 op/s
Jan 20 14:24:00 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/1468624899' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:24:00 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:24:00 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/364352947' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:24:00 compute-1 nova_compute[225855]: 2026-01-20 14:24:00.446 225859 DEBUG oslo_concurrency.processutils [None req-4405ef98-0a62-45b6-834e-08b08e21c37c 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:24:00 compute-1 nova_compute[225855]: 2026-01-20 14:24:00.450 225859 DEBUG nova.objects.instance [None req-4405ef98-0a62-45b6-834e-08b08e21c37c 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Lazy-loading 'pci_devices' on Instance uuid 4bfa17e5-6bfc-42e2-9c3f-959596201da0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:24:00 compute-1 nova_compute[225855]: 2026-01-20 14:24:00.489 225859 DEBUG nova.virt.libvirt.driver [None req-4405ef98-0a62-45b6-834e-08b08e21c37c 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] [instance: 4bfa17e5-6bfc-42e2-9c3f-959596201da0] End _get_guest_xml xml=<domain type="kvm">
Jan 20 14:24:00 compute-1 nova_compute[225855]:   <uuid>4bfa17e5-6bfc-42e2-9c3f-959596201da0</uuid>
Jan 20 14:24:00 compute-1 nova_compute[225855]:   <name>instance-00000007</name>
Jan 20 14:24:00 compute-1 nova_compute[225855]:   <memory>131072</memory>
Jan 20 14:24:00 compute-1 nova_compute[225855]:   <vcpu>1</vcpu>
Jan 20 14:24:00 compute-1 nova_compute[225855]:   <metadata>
Jan 20 14:24:00 compute-1 nova_compute[225855]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 14:24:00 compute-1 nova_compute[225855]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 14:24:00 compute-1 nova_compute[225855]:       <nova:name>tempest-ServersOnMultiNodesTest-server-860882609</nova:name>
Jan 20 14:24:00 compute-1 nova_compute[225855]:       <nova:creationTime>2026-01-20 14:23:59</nova:creationTime>
Jan 20 14:24:00 compute-1 nova_compute[225855]:       <nova:flavor name="m1.nano">
Jan 20 14:24:00 compute-1 nova_compute[225855]:         <nova:memory>128</nova:memory>
Jan 20 14:24:00 compute-1 nova_compute[225855]:         <nova:disk>1</nova:disk>
Jan 20 14:24:00 compute-1 nova_compute[225855]:         <nova:swap>0</nova:swap>
Jan 20 14:24:00 compute-1 nova_compute[225855]:         <nova:ephemeral>0</nova:ephemeral>
Jan 20 14:24:00 compute-1 nova_compute[225855]:         <nova:vcpus>1</nova:vcpus>
Jan 20 14:24:00 compute-1 nova_compute[225855]:       </nova:flavor>
Jan 20 14:24:00 compute-1 nova_compute[225855]:       <nova:owner>
Jan 20 14:24:00 compute-1 nova_compute[225855]:         <nova:user uuid="32a16ea2839748629233294de19222b3">tempest-ServersOnMultiNodesTest-1140514054-project-member</nova:user>
Jan 20 14:24:00 compute-1 nova_compute[225855]:         <nova:project uuid="986d9f2d9bd24a228e53a76694db0568">tempest-ServersOnMultiNodesTest-1140514054</nova:project>
Jan 20 14:24:00 compute-1 nova_compute[225855]:       </nova:owner>
Jan 20 14:24:00 compute-1 nova_compute[225855]:       <nova:root type="image" uuid="a32b3e07-16d8-46fd-9a7b-c242c432fcf9"/>
Jan 20 14:24:00 compute-1 nova_compute[225855]:       <nova:ports/>
Jan 20 14:24:00 compute-1 nova_compute[225855]:     </nova:instance>
Jan 20 14:24:00 compute-1 nova_compute[225855]:   </metadata>
Jan 20 14:24:00 compute-1 nova_compute[225855]:   <sysinfo type="smbios">
Jan 20 14:24:00 compute-1 nova_compute[225855]:     <system>
Jan 20 14:24:00 compute-1 nova_compute[225855]:       <entry name="manufacturer">RDO</entry>
Jan 20 14:24:00 compute-1 nova_compute[225855]:       <entry name="product">OpenStack Compute</entry>
Jan 20 14:24:00 compute-1 nova_compute[225855]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 14:24:00 compute-1 nova_compute[225855]:       <entry name="serial">4bfa17e5-6bfc-42e2-9c3f-959596201da0</entry>
Jan 20 14:24:00 compute-1 nova_compute[225855]:       <entry name="uuid">4bfa17e5-6bfc-42e2-9c3f-959596201da0</entry>
Jan 20 14:24:00 compute-1 nova_compute[225855]:       <entry name="family">Virtual Machine</entry>
Jan 20 14:24:00 compute-1 nova_compute[225855]:     </system>
Jan 20 14:24:00 compute-1 nova_compute[225855]:   </sysinfo>
Jan 20 14:24:00 compute-1 nova_compute[225855]:   <os>
Jan 20 14:24:00 compute-1 nova_compute[225855]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 20 14:24:00 compute-1 nova_compute[225855]:     <boot dev="hd"/>
Jan 20 14:24:00 compute-1 nova_compute[225855]:     <smbios mode="sysinfo"/>
Jan 20 14:24:00 compute-1 nova_compute[225855]:   </os>
Jan 20 14:24:00 compute-1 nova_compute[225855]:   <features>
Jan 20 14:24:00 compute-1 nova_compute[225855]:     <acpi/>
Jan 20 14:24:00 compute-1 nova_compute[225855]:     <apic/>
Jan 20 14:24:00 compute-1 nova_compute[225855]:     <vmcoreinfo/>
Jan 20 14:24:00 compute-1 nova_compute[225855]:   </features>
Jan 20 14:24:00 compute-1 nova_compute[225855]:   <clock offset="utc">
Jan 20 14:24:00 compute-1 nova_compute[225855]:     <timer name="pit" tickpolicy="delay"/>
Jan 20 14:24:00 compute-1 nova_compute[225855]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 20 14:24:00 compute-1 nova_compute[225855]:     <timer name="hpet" present="no"/>
Jan 20 14:24:00 compute-1 nova_compute[225855]:   </clock>
Jan 20 14:24:00 compute-1 nova_compute[225855]:   <cpu mode="custom" match="exact">
Jan 20 14:24:00 compute-1 nova_compute[225855]:     <model>Nehalem</model>
Jan 20 14:24:00 compute-1 nova_compute[225855]:     <topology sockets="1" cores="1" threads="1"/>
Jan 20 14:24:00 compute-1 nova_compute[225855]:   </cpu>
Jan 20 14:24:00 compute-1 nova_compute[225855]:   <devices>
Jan 20 14:24:00 compute-1 nova_compute[225855]:     <disk type="network" device="disk">
Jan 20 14:24:00 compute-1 nova_compute[225855]:       <driver type="raw" cache="none"/>
Jan 20 14:24:00 compute-1 nova_compute[225855]:       <source protocol="rbd" name="vms/4bfa17e5-6bfc-42e2-9c3f-959596201da0_disk">
Jan 20 14:24:00 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:24:00 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:24:00 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:24:00 compute-1 nova_compute[225855]:       </source>
Jan 20 14:24:00 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 14:24:00 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:24:00 compute-1 nova_compute[225855]:       </auth>
Jan 20 14:24:00 compute-1 nova_compute[225855]:       <target dev="vda" bus="virtio"/>
Jan 20 14:24:00 compute-1 nova_compute[225855]:     </disk>
Jan 20 14:24:00 compute-1 nova_compute[225855]:     <disk type="network" device="cdrom">
Jan 20 14:24:00 compute-1 nova_compute[225855]:       <driver type="raw" cache="none"/>
Jan 20 14:24:00 compute-1 nova_compute[225855]:       <source protocol="rbd" name="vms/4bfa17e5-6bfc-42e2-9c3f-959596201da0_disk.config">
Jan 20 14:24:00 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:24:00 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:24:00 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:24:00 compute-1 nova_compute[225855]:       </source>
Jan 20 14:24:00 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 14:24:00 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:24:00 compute-1 nova_compute[225855]:       </auth>
Jan 20 14:24:00 compute-1 nova_compute[225855]:       <target dev="sda" bus="sata"/>
Jan 20 14:24:00 compute-1 nova_compute[225855]:     </disk>
Jan 20 14:24:00 compute-1 nova_compute[225855]:     <serial type="pty">
Jan 20 14:24:00 compute-1 nova_compute[225855]:       <log file="/var/lib/nova/instances/4bfa17e5-6bfc-42e2-9c3f-959596201da0/console.log" append="off"/>
Jan 20 14:24:00 compute-1 nova_compute[225855]:     </serial>
Jan 20 14:24:00 compute-1 nova_compute[225855]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 20 14:24:00 compute-1 nova_compute[225855]:     <video>
Jan 20 14:24:00 compute-1 nova_compute[225855]:       <model type="virtio"/>
Jan 20 14:24:00 compute-1 nova_compute[225855]:     </video>
Jan 20 14:24:00 compute-1 nova_compute[225855]:     <input type="tablet" bus="usb"/>
Jan 20 14:24:00 compute-1 nova_compute[225855]:     <rng model="virtio">
Jan 20 14:24:00 compute-1 nova_compute[225855]:       <backend model="random">/dev/urandom</backend>
Jan 20 14:24:00 compute-1 nova_compute[225855]:     </rng>
Jan 20 14:24:00 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root"/>
Jan 20 14:24:00 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:24:00 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:24:00 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:24:00 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:24:00 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:24:00 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:24:00 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:24:00 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:24:00 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:24:00 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:24:00 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:24:00 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:24:00 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:24:00 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:24:00 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:24:00 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:24:00 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:24:00 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:24:00 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:24:00 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:24:00 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:24:00 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:24:00 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:24:00 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:24:00 compute-1 nova_compute[225855]:     <controller type="usb" index="0"/>
Jan 20 14:24:00 compute-1 nova_compute[225855]:     <memballoon model="virtio">
Jan 20 14:24:00 compute-1 nova_compute[225855]:       <stats period="10"/>
Jan 20 14:24:00 compute-1 nova_compute[225855]:     </memballoon>
Jan 20 14:24:00 compute-1 nova_compute[225855]:   </devices>
Jan 20 14:24:00 compute-1 nova_compute[225855]: </domain>
Jan 20 14:24:00 compute-1 nova_compute[225855]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 20 14:24:00 compute-1 nova_compute[225855]: 2026-01-20 14:24:00.558 225859 DEBUG nova.virt.libvirt.driver [None req-4405ef98-0a62-45b6-834e-08b08e21c37c 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:24:00 compute-1 nova_compute[225855]: 2026-01-20 14:24:00.559 225859 DEBUG nova.virt.libvirt.driver [None req-4405ef98-0a62-45b6-834e-08b08e21c37c 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:24:00 compute-1 nova_compute[225855]: 2026-01-20 14:24:00.560 225859 INFO nova.virt.libvirt.driver [None req-4405ef98-0a62-45b6-834e-08b08e21c37c 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] [instance: 4bfa17e5-6bfc-42e2-9c3f-959596201da0] Using config drive
Jan 20 14:24:00 compute-1 nova_compute[225855]: 2026-01-20 14:24:00.593 225859 DEBUG nova.storage.rbd_utils [None req-4405ef98-0a62-45b6-834e-08b08e21c37c 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] rbd image 4bfa17e5-6bfc-42e2-9c3f-959596201da0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:24:01 compute-1 nova_compute[225855]: 2026-01-20 14:24:01.018 225859 INFO nova.virt.libvirt.driver [None req-4405ef98-0a62-45b6-834e-08b08e21c37c 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] [instance: 4bfa17e5-6bfc-42e2-9c3f-959596201da0] Creating config drive at /var/lib/nova/instances/4bfa17e5-6bfc-42e2-9c3f-959596201da0/disk.config
Jan 20 14:24:01 compute-1 nova_compute[225855]: 2026-01-20 14:24:01.027 225859 DEBUG oslo_concurrency.processutils [None req-4405ef98-0a62-45b6-834e-08b08e21c37c 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4bfa17e5-6bfc-42e2-9c3f-959596201da0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpzby7_4mx execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:24:01 compute-1 nova_compute[225855]: 2026-01-20 14:24:01.170 225859 DEBUG oslo_concurrency.processutils [None req-4405ef98-0a62-45b6-834e-08b08e21c37c 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4bfa17e5-6bfc-42e2-9c3f-959596201da0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpzby7_4mx" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:24:01 compute-1 nova_compute[225855]: 2026-01-20 14:24:01.213 225859 DEBUG nova.storage.rbd_utils [None req-4405ef98-0a62-45b6-834e-08b08e21c37c 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] rbd image 4bfa17e5-6bfc-42e2-9c3f-959596201da0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:24:01 compute-1 nova_compute[225855]: 2026-01-20 14:24:01.217 225859 DEBUG oslo_concurrency.processutils [None req-4405ef98-0a62-45b6-834e-08b08e21c37c 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/4bfa17e5-6bfc-42e2-9c3f-959596201da0/disk.config 4bfa17e5-6bfc-42e2-9c3f-959596201da0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:24:01 compute-1 nova_compute[225855]: 2026-01-20 14:24:01.396 225859 DEBUG oslo_concurrency.processutils [None req-4405ef98-0a62-45b6-834e-08b08e21c37c 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/4bfa17e5-6bfc-42e2-9c3f-959596201da0/disk.config 4bfa17e5-6bfc-42e2-9c3f-959596201da0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.178s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:24:01 compute-1 nova_compute[225855]: 2026-01-20 14:24:01.396 225859 INFO nova.virt.libvirt.driver [None req-4405ef98-0a62-45b6-834e-08b08e21c37c 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] [instance: 4bfa17e5-6bfc-42e2-9c3f-959596201da0] Deleting local config drive /var/lib/nova/instances/4bfa17e5-6bfc-42e2-9c3f-959596201da0/disk.config because it was imported into RBD.
Jan 20 14:24:01 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/364352947' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:24:01 compute-1 systemd-machined[194361]: New machine qemu-3-instance-00000007.
Jan 20 14:24:01 compute-1 systemd[1]: Started Virtual Machine qemu-3-instance-00000007.
Jan 20 14:24:01 compute-1 nova_compute[225855]: 2026-01-20 14:24:01.918 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768919041.9181879, 4bfa17e5-6bfc-42e2-9c3f-959596201da0 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:24:01 compute-1 nova_compute[225855]: 2026-01-20 14:24:01.919 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 4bfa17e5-6bfc-42e2-9c3f-959596201da0] VM Resumed (Lifecycle Event)
Jan 20 14:24:01 compute-1 nova_compute[225855]: 2026-01-20 14:24:01.923 225859 DEBUG nova.compute.manager [None req-4405ef98-0a62-45b6-834e-08b08e21c37c 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] [instance: 4bfa17e5-6bfc-42e2-9c3f-959596201da0] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 20 14:24:01 compute-1 nova_compute[225855]: 2026-01-20 14:24:01.923 225859 DEBUG nova.virt.libvirt.driver [None req-4405ef98-0a62-45b6-834e-08b08e21c37c 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] [instance: 4bfa17e5-6bfc-42e2-9c3f-959596201da0] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 20 14:24:01 compute-1 nova_compute[225855]: 2026-01-20 14:24:01.927 225859 INFO nova.virt.libvirt.driver [-] [instance: 4bfa17e5-6bfc-42e2-9c3f-959596201da0] Instance spawned successfully.
Jan 20 14:24:01 compute-1 nova_compute[225855]: 2026-01-20 14:24:01.928 225859 DEBUG nova.virt.libvirt.driver [None req-4405ef98-0a62-45b6-834e-08b08e21c37c 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] [instance: 4bfa17e5-6bfc-42e2-9c3f-959596201da0] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 20 14:24:01 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:24:01 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:24:01 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:24:01.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:24:01 compute-1 nova_compute[225855]: 2026-01-20 14:24:01.952 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 4bfa17e5-6bfc-42e2-9c3f-959596201da0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:24:01 compute-1 nova_compute[225855]: 2026-01-20 14:24:01.955 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 4bfa17e5-6bfc-42e2-9c3f-959596201da0] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:24:01 compute-1 nova_compute[225855]: 2026-01-20 14:24:01.962 225859 DEBUG nova.virt.libvirt.driver [None req-4405ef98-0a62-45b6-834e-08b08e21c37c 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] [instance: 4bfa17e5-6bfc-42e2-9c3f-959596201da0] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:24:01 compute-1 nova_compute[225855]: 2026-01-20 14:24:01.963 225859 DEBUG nova.virt.libvirt.driver [None req-4405ef98-0a62-45b6-834e-08b08e21c37c 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] [instance: 4bfa17e5-6bfc-42e2-9c3f-959596201da0] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:24:01 compute-1 nova_compute[225855]: 2026-01-20 14:24:01.964 225859 DEBUG nova.virt.libvirt.driver [None req-4405ef98-0a62-45b6-834e-08b08e21c37c 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] [instance: 4bfa17e5-6bfc-42e2-9c3f-959596201da0] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:24:01 compute-1 nova_compute[225855]: 2026-01-20 14:24:01.964 225859 DEBUG nova.virt.libvirt.driver [None req-4405ef98-0a62-45b6-834e-08b08e21c37c 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] [instance: 4bfa17e5-6bfc-42e2-9c3f-959596201da0] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:24:01 compute-1 nova_compute[225855]: 2026-01-20 14:24:01.964 225859 DEBUG nova.virt.libvirt.driver [None req-4405ef98-0a62-45b6-834e-08b08e21c37c 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] [instance: 4bfa17e5-6bfc-42e2-9c3f-959596201da0] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:24:01 compute-1 nova_compute[225855]: 2026-01-20 14:24:01.965 225859 DEBUG nova.virt.libvirt.driver [None req-4405ef98-0a62-45b6-834e-08b08e21c37c 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] [instance: 4bfa17e5-6bfc-42e2-9c3f-959596201da0] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:24:01 compute-1 nova_compute[225855]: 2026-01-20 14:24:01.983 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 4bfa17e5-6bfc-42e2-9c3f-959596201da0] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 14:24:01 compute-1 nova_compute[225855]: 2026-01-20 14:24:01.984 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768919041.9193923, 4bfa17e5-6bfc-42e2-9c3f-959596201da0 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:24:01 compute-1 nova_compute[225855]: 2026-01-20 14:24:01.984 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 4bfa17e5-6bfc-42e2-9c3f-959596201da0] VM Started (Lifecycle Event)
Jan 20 14:24:02 compute-1 nova_compute[225855]: 2026-01-20 14:24:02.014 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 4bfa17e5-6bfc-42e2-9c3f-959596201da0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:24:02 compute-1 nova_compute[225855]: 2026-01-20 14:24:02.017 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 4bfa17e5-6bfc-42e2-9c3f-959596201da0] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:24:02 compute-1 nova_compute[225855]: 2026-01-20 14:24:02.022 225859 INFO nova.compute.manager [None req-4405ef98-0a62-45b6-834e-08b08e21c37c 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] [instance: 4bfa17e5-6bfc-42e2-9c3f-959596201da0] Took 3.17 seconds to spawn the instance on the hypervisor.
Jan 20 14:24:02 compute-1 nova_compute[225855]: 2026-01-20 14:24:02.022 225859 DEBUG nova.compute.manager [None req-4405ef98-0a62-45b6-834e-08b08e21c37c 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] [instance: 4bfa17e5-6bfc-42e2-9c3f-959596201da0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:24:02 compute-1 nova_compute[225855]: 2026-01-20 14:24:02.035 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 4bfa17e5-6bfc-42e2-9c3f-959596201da0] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 14:24:02 compute-1 nova_compute[225855]: 2026-01-20 14:24:02.079 225859 INFO nova.compute.manager [None req-4405ef98-0a62-45b6-834e-08b08e21c37c 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] [instance: 4bfa17e5-6bfc-42e2-9c3f-959596201da0] Took 4.13 seconds to build instance.
Jan 20 14:24:02 compute-1 nova_compute[225855]: 2026-01-20 14:24:02.099 225859 DEBUG oslo_concurrency.lockutils [None req-4405ef98-0a62-45b6-834e-08b08e21c37c 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Lock "4bfa17e5-6bfc-42e2-9c3f-959596201da0" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 4.240s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:24:02 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:24:02 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:24:02 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:24:02.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:24:02 compute-1 ceph-mon[81775]: pgmap v977: 321 pgs: 321 active+clean; 315 MiB data, 393 MiB used, 21 GiB / 21 GiB avail; 6.1 MiB/s rd, 4.8 MiB/s wr, 399 op/s
Jan 20 14:24:02 compute-1 nova_compute[225855]: 2026-01-20 14:24:02.487 225859 DEBUG nova.compute.manager [None req-07ee1594-2ec5-4256-9a6d-4ec5059ca613 b9adeab2c2f5486e92ca8534ef11c720 a7bb8a09ecaa40e8980b2ed19afa279f - - default default] pre_live_migration data is LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=19456,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpq6nhof9a',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='d1be7a29-3496-40ab-b61f-694622b7453b',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids=<?>,serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8604
Jan 20 14:24:02 compute-1 nova_compute[225855]: 2026-01-20 14:24:02.533 225859 DEBUG oslo_concurrency.lockutils [None req-07ee1594-2ec5-4256-9a6d-4ec5059ca613 b9adeab2c2f5486e92ca8534ef11c720 a7bb8a09ecaa40e8980b2ed19afa279f - - default default] Acquiring lock "refresh_cache-d1be7a29-3496-40ab-b61f-694622b7453b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:24:02 compute-1 nova_compute[225855]: 2026-01-20 14:24:02.533 225859 DEBUG oslo_concurrency.lockutils [None req-07ee1594-2ec5-4256-9a6d-4ec5059ca613 b9adeab2c2f5486e92ca8534ef11c720 a7bb8a09ecaa40e8980b2ed19afa279f - - default default] Acquired lock "refresh_cache-d1be7a29-3496-40ab-b61f-694622b7453b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:24:02 compute-1 nova_compute[225855]: 2026-01-20 14:24:02.533 225859 DEBUG nova.network.neutron [None req-07ee1594-2ec5-4256-9a6d-4ec5059ca613 b9adeab2c2f5486e92ca8534ef11c720 a7bb8a09ecaa40e8980b2ed19afa279f - - default default] [instance: d1be7a29-3496-40ab-b61f-694622b7453b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 14:24:02 compute-1 nova_compute[225855]: 2026-01-20 14:24:02.686 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:24:02 compute-1 nova_compute[225855]: 2026-01-20 14:24:02.828 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:24:03 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/1691126561' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:24:03 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:24:03 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:24:03 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:24:03.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:24:04 compute-1 podman[230968]: 2026-01-20 14:24:04.059972132 +0000 UTC m=+0.086976021 container health_status 533cd8e0c3b8b7f910f15cef49e7f879ab46c12ce646a35628c15f13725eb500 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 20 14:24:04 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:24:04 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:24:04 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:24:04.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:24:04 compute-1 ovn_controller[130490]: 2026-01-20T14:24:04Z|00006|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:fb:ee:02 10.100.0.5
Jan 20 14:24:04 compute-1 ovn_controller[130490]: 2026-01-20T14:24:04Z|00007|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:fb:ee:02 10.100.0.5
Jan 20 14:24:04 compute-1 ceph-mon[81775]: pgmap v978: 321 pgs: 321 active+clean; 257 MiB data, 361 MiB used, 21 GiB / 21 GiB avail; 5.5 MiB/s rd, 6.6 MiB/s wr, 414 op/s
Jan 20 14:24:04 compute-1 nova_compute[225855]: 2026-01-20 14:24:04.665 225859 DEBUG nova.network.neutron [None req-07ee1594-2ec5-4256-9a6d-4ec5059ca613 b9adeab2c2f5486e92ca8534ef11c720 a7bb8a09ecaa40e8980b2ed19afa279f - - default default] [instance: d1be7a29-3496-40ab-b61f-694622b7453b] Updating instance_info_cache with network_info: [{"id": "495d90bb-db88-4c6d-a712-bfd41c4e37fc", "address": "fa:16:3e:81:4d:90", "network": {"id": "6f01f500-b631-4cdb-ae71-b33b0ccfb1aa", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-1745233184-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "861a4f2b70b249afadeabfe85bda53a3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap495d90bb-db", "ovs_interfaceid": "495d90bb-db88-4c6d-a712-bfd41c4e37fc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:24:04 compute-1 nova_compute[225855]: 2026-01-20 14:24:04.697 225859 DEBUG oslo_concurrency.lockutils [None req-07ee1594-2ec5-4256-9a6d-4ec5059ca613 b9adeab2c2f5486e92ca8534ef11c720 a7bb8a09ecaa40e8980b2ed19afa279f - - default default] Releasing lock "refresh_cache-d1be7a29-3496-40ab-b61f-694622b7453b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:24:04 compute-1 nova_compute[225855]: 2026-01-20 14:24:04.699 225859 DEBUG nova.virt.libvirt.driver [None req-07ee1594-2ec5-4256-9a6d-4ec5059ca613 b9adeab2c2f5486e92ca8534ef11c720 a7bb8a09ecaa40e8980b2ed19afa279f - - default default] [instance: d1be7a29-3496-40ab-b61f-694622b7453b] migrate_data in pre_live_migration: LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=19456,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpq6nhof9a',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='d1be7a29-3496-40ab-b61f-694622b7453b',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids={},serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10827
Jan 20 14:24:04 compute-1 nova_compute[225855]: 2026-01-20 14:24:04.700 225859 DEBUG nova.virt.libvirt.driver [None req-07ee1594-2ec5-4256-9a6d-4ec5059ca613 b9adeab2c2f5486e92ca8534ef11c720 a7bb8a09ecaa40e8980b2ed19afa279f - - default default] [instance: d1be7a29-3496-40ab-b61f-694622b7453b] Creating instance directory: /var/lib/nova/instances/d1be7a29-3496-40ab-b61f-694622b7453b pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10840
Jan 20 14:24:04 compute-1 nova_compute[225855]: 2026-01-20 14:24:04.701 225859 DEBUG nova.virt.libvirt.driver [None req-07ee1594-2ec5-4256-9a6d-4ec5059ca613 b9adeab2c2f5486e92ca8534ef11c720 a7bb8a09ecaa40e8980b2ed19afa279f - - default default] [instance: d1be7a29-3496-40ab-b61f-694622b7453b] Ensure instance console log exists: /var/lib/nova/instances/d1be7a29-3496-40ab-b61f-694622b7453b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 20 14:24:04 compute-1 nova_compute[225855]: 2026-01-20 14:24:04.701 225859 DEBUG nova.virt.libvirt.driver [None req-07ee1594-2ec5-4256-9a6d-4ec5059ca613 b9adeab2c2f5486e92ca8534ef11c720 a7bb8a09ecaa40e8980b2ed19afa279f - - default default] [instance: d1be7a29-3496-40ab-b61f-694622b7453b] Plugging VIFs using destination host port bindings before live migration. _pre_live_migration_plug_vifs /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10794
Jan 20 14:24:04 compute-1 nova_compute[225855]: 2026-01-20 14:24:04.703 225859 DEBUG nova.virt.libvirt.vif [None req-07ee1594-2ec5-4256-9a6d-4ec5059ca613 b9adeab2c2f5486e92ca8534ef11c720 a7bb8a09ecaa40e8980b2ed19afa279f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T14:23:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-LiveAutoBlockMigrationV225Test-server-1966991232',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-liveautoblockmigrationv225test-server-1966991232',id=6,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-20T14:23:55Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='861a4f2b70b249afadeabfe85bda53a3',ramdisk_id='',reservation_id='r-68sk3sp0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',im
age_min_ram='0',owner_project_name='tempest-LiveAutoBlockMigrationV225Test-1568967339',owner_user_name='tempest-LiveAutoBlockMigrationV225Test-1568967339-project-member'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T14:23:55Z,user_data=None,user_id='aefb5652049e473a948c089d7c62ef1a',uuid=d1be7a29-3496-40ab-b61f-694622b7453b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "495d90bb-db88-4c6d-a712-bfd41c4e37fc", "address": "fa:16:3e:81:4d:90", "network": {"id": "6f01f500-b631-4cdb-ae71-b33b0ccfb1aa", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-1745233184-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "861a4f2b70b249afadeabfe85bda53a3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap495d90bb-db", "ovs_interfaceid": "495d90bb-db88-4c6d-a712-bfd41c4e37fc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 20 14:24:04 compute-1 nova_compute[225855]: 2026-01-20 14:24:04.703 225859 DEBUG nova.network.os_vif_util [None req-07ee1594-2ec5-4256-9a6d-4ec5059ca613 b9adeab2c2f5486e92ca8534ef11c720 a7bb8a09ecaa40e8980b2ed19afa279f - - default default] Converting VIF {"id": "495d90bb-db88-4c6d-a712-bfd41c4e37fc", "address": "fa:16:3e:81:4d:90", "network": {"id": "6f01f500-b631-4cdb-ae71-b33b0ccfb1aa", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-1745233184-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "861a4f2b70b249afadeabfe85bda53a3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap495d90bb-db", "ovs_interfaceid": "495d90bb-db88-4c6d-a712-bfd41c4e37fc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:24:04 compute-1 nova_compute[225855]: 2026-01-20 14:24:04.704 225859 DEBUG nova.network.os_vif_util [None req-07ee1594-2ec5-4256-9a6d-4ec5059ca613 b9adeab2c2f5486e92ca8534ef11c720 a7bb8a09ecaa40e8980b2ed19afa279f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:81:4d:90,bridge_name='br-int',has_traffic_filtering=True,id=495d90bb-db88-4c6d-a712-bfd41c4e37fc,network=Network(6f01f500-b631-4cdb-ae71-b33b0ccfb1aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap495d90bb-db') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:24:04 compute-1 nova_compute[225855]: 2026-01-20 14:24:04.705 225859 DEBUG os_vif [None req-07ee1594-2ec5-4256-9a6d-4ec5059ca613 b9adeab2c2f5486e92ca8534ef11c720 a7bb8a09ecaa40e8980b2ed19afa279f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:81:4d:90,bridge_name='br-int',has_traffic_filtering=True,id=495d90bb-db88-4c6d-a712-bfd41c4e37fc,network=Network(6f01f500-b631-4cdb-ae71-b33b0ccfb1aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap495d90bb-db') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 20 14:24:04 compute-1 nova_compute[225855]: 2026-01-20 14:24:04.706 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:24:04 compute-1 nova_compute[225855]: 2026-01-20 14:24:04.707 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:24:04 compute-1 nova_compute[225855]: 2026-01-20 14:24:04.707 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:24:04 compute-1 nova_compute[225855]: 2026-01-20 14:24:04.710 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:24:04 compute-1 nova_compute[225855]: 2026-01-20 14:24:04.711 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap495d90bb-db, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:24:04 compute-1 nova_compute[225855]: 2026-01-20 14:24:04.711 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap495d90bb-db, col_values=(('external_ids', {'iface-id': '495d90bb-db88-4c6d-a712-bfd41c4e37fc', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:81:4d:90', 'vm-uuid': 'd1be7a29-3496-40ab-b61f-694622b7453b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:24:04 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:24:04 compute-1 nova_compute[225855]: 2026-01-20 14:24:04.730 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 14:24:04 compute-1 NetworkManager[49104]: <info>  [1768919044.7320] manager: (tap495d90bb-db): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/35)
Jan 20 14:24:04 compute-1 nova_compute[225855]: 2026-01-20 14:24:04.738 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:24:04 compute-1 nova_compute[225855]: 2026-01-20 14:24:04.740 225859 INFO os_vif [None req-07ee1594-2ec5-4256-9a6d-4ec5059ca613 b9adeab2c2f5486e92ca8534ef11c720 a7bb8a09ecaa40e8980b2ed19afa279f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:81:4d:90,bridge_name='br-int',has_traffic_filtering=True,id=495d90bb-db88-4c6d-a712-bfd41c4e37fc,network=Network(6f01f500-b631-4cdb-ae71-b33b0ccfb1aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap495d90bb-db')
Jan 20 14:24:04 compute-1 nova_compute[225855]: 2026-01-20 14:24:04.741 225859 DEBUG nova.virt.libvirt.driver [None req-07ee1594-2ec5-4256-9a6d-4ec5059ca613 b9adeab2c2f5486e92ca8534ef11c720 a7bb8a09ecaa40e8980b2ed19afa279f - - default default] No dst_numa_info in migrate_data, no cores to power up in pre_live_migration. pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10954
Jan 20 14:24:04 compute-1 nova_compute[225855]: 2026-01-20 14:24:04.741 225859 DEBUG nova.compute.manager [None req-07ee1594-2ec5-4256-9a6d-4ec5059ca613 b9adeab2c2f5486e92ca8534ef11c720 a7bb8a09ecaa40e8980b2ed19afa279f - - default default] driver pre_live_migration data is LibvirtLiveMigrateData(bdms=[],block_migration=False,disk_available_mb=19456,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpq6nhof9a',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='d1be7a29-3496-40ab-b61f-694622b7453b',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids={},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8668
Jan 20 14:24:05 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:24:05 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:24:05 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:24:05.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:24:06 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:24:06 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:24:06 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:24:06.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:24:06 compute-1 ceph-mon[81775]: pgmap v979: 321 pgs: 321 active+clean; 275 MiB data, 374 MiB used, 21 GiB / 21 GiB avail; 3.2 MiB/s rd, 7.1 MiB/s wr, 327 op/s
Jan 20 14:24:07 compute-1 nova_compute[225855]: 2026-01-20 14:24:07.688 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:24:07 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/1757411374' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:24:07 compute-1 nova_compute[225855]: 2026-01-20 14:24:07.786 225859 DEBUG nova.network.neutron [None req-07ee1594-2ec5-4256-9a6d-4ec5059ca613 b9adeab2c2f5486e92ca8534ef11c720 a7bb8a09ecaa40e8980b2ed19afa279f - - default default] [instance: d1be7a29-3496-40ab-b61f-694622b7453b] Port 495d90bb-db88-4c6d-a712-bfd41c4e37fc updated with migration profile {'migrating_to': 'compute-1.ctlplane.example.com'} successfully _setup_migration_port_profile /usr/lib/python3.9/site-packages/nova/network/neutron.py:354
Jan 20 14:24:07 compute-1 nova_compute[225855]: 2026-01-20 14:24:07.788 225859 DEBUG nova.compute.manager [None req-07ee1594-2ec5-4256-9a6d-4ec5059ca613 b9adeab2c2f5486e92ca8534ef11c720 a7bb8a09ecaa40e8980b2ed19afa279f - - default default] pre_live_migration result data is LibvirtLiveMigrateData(bdms=[],block_migration=False,disk_available_mb=19456,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpq6nhof9a',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='d1be7a29-3496-40ab-b61f-694622b7453b',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids={},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=True) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8723
Jan 20 14:24:07 compute-1 nova_compute[225855]: 2026-01-20 14:24:07.789 225859 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1768919032.787102, 21e70820-70b1-4bb9-bb8d-62fb69c2298b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:24:07 compute-1 nova_compute[225855]: 2026-01-20 14:24:07.790 225859 INFO nova.compute.manager [-] [instance: 21e70820-70b1-4bb9-bb8d-62fb69c2298b] VM Stopped (Lifecycle Event)
Jan 20 14:24:07 compute-1 nova_compute[225855]: 2026-01-20 14:24:07.820 225859 DEBUG nova.compute.manager [None req-e00c4d0a-cc7a-4f23-9e59-0400d5c65e25 - - - - - -] [instance: 21e70820-70b1-4bb9-bb8d-62fb69c2298b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:24:07 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:24:07 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:24:07 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:24:07.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:24:07 compute-1 systemd[1]: Starting libvirt proxy daemon...
Jan 20 14:24:07 compute-1 systemd[1]: Started libvirt proxy daemon.
Jan 20 14:24:08 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:24:08 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:24:08 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:24:08.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:24:08 compute-1 kernel: tap495d90bb-db: entered promiscuous mode
Jan 20 14:24:08 compute-1 NetworkManager[49104]: <info>  [1768919048.1825] manager: (tap495d90bb-db): new Tun device (/org/freedesktop/NetworkManager/Devices/36)
Jan 20 14:24:08 compute-1 ovn_controller[130490]: 2026-01-20T14:24:08Z|00041|binding|INFO|Claiming lport 495d90bb-db88-4c6d-a712-bfd41c4e37fc for this additional chassis.
Jan 20 14:24:08 compute-1 ovn_controller[130490]: 2026-01-20T14:24:08Z|00042|binding|INFO|495d90bb-db88-4c6d-a712-bfd41c4e37fc: Claiming fa:16:3e:81:4d:90 10.100.0.3
Jan 20 14:24:08 compute-1 ovn_controller[130490]: 2026-01-20T14:24:08Z|00043|binding|INFO|Claiming lport 2837a2a0-6c5c-4cc8-9ed2-29c35ce5252e for this additional chassis.
Jan 20 14:24:08 compute-1 ovn_controller[130490]: 2026-01-20T14:24:08Z|00044|binding|INFO|2837a2a0-6c5c-4cc8-9ed2-29c35ce5252e: Claiming fa:16:3e:29:ac:f6 19.80.0.18
Jan 20 14:24:08 compute-1 nova_compute[225855]: 2026-01-20 14:24:08.184 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:24:08 compute-1 systemd-udevd[231022]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 14:24:08 compute-1 systemd-machined[194361]: New machine qemu-4-instance-00000006.
Jan 20 14:24:08 compute-1 NetworkManager[49104]: <info>  [1768919048.2437] device (tap495d90bb-db): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 20 14:24:08 compute-1 NetworkManager[49104]: <info>  [1768919048.2451] device (tap495d90bb-db): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 20 14:24:08 compute-1 systemd[1]: Started Virtual Machine qemu-4-instance-00000006.
Jan 20 14:24:08 compute-1 nova_compute[225855]: 2026-01-20 14:24:08.282 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:24:08 compute-1 ovn_controller[130490]: 2026-01-20T14:24:08Z|00045|binding|INFO|Setting lport 495d90bb-db88-4c6d-a712-bfd41c4e37fc ovn-installed in OVS
Jan 20 14:24:08 compute-1 nova_compute[225855]: 2026-01-20 14:24:08.285 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:24:08 compute-1 nova_compute[225855]: 2026-01-20 14:24:08.287 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:24:08 compute-1 ceph-mon[81775]: pgmap v980: 321 pgs: 321 active+clean; 294 MiB data, 390 MiB used, 21 GiB / 21 GiB avail; 4.9 MiB/s rd, 8.6 MiB/s wr, 416 op/s
Jan 20 14:24:08 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/2027510307' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:24:09 compute-1 nova_compute[225855]: 2026-01-20 14:24:09.107 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768919049.1066225, d1be7a29-3496-40ab-b61f-694622b7453b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:24:09 compute-1 nova_compute[225855]: 2026-01-20 14:24:09.107 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: d1be7a29-3496-40ab-b61f-694622b7453b] VM Started (Lifecycle Event)
Jan 20 14:24:09 compute-1 nova_compute[225855]: 2026-01-20 14:24:09.177 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: d1be7a29-3496-40ab-b61f-694622b7453b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:24:09 compute-1 nova_compute[225855]: 2026-01-20 14:24:09.690 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768919049.6895604, d1be7a29-3496-40ab-b61f-694622b7453b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:24:09 compute-1 nova_compute[225855]: 2026-01-20 14:24:09.690 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: d1be7a29-3496-40ab-b61f-694622b7453b] VM Resumed (Lifecycle Event)
Jan 20 14:24:09 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:24:09 compute-1 nova_compute[225855]: 2026-01-20 14:24:09.729 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:24:09 compute-1 nova_compute[225855]: 2026-01-20 14:24:09.744 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: d1be7a29-3496-40ab-b61f-694622b7453b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:24:09 compute-1 nova_compute[225855]: 2026-01-20 14:24:09.749 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: d1be7a29-3496-40ab-b61f-694622b7453b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: migrating, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:24:09 compute-1 nova_compute[225855]: 2026-01-20 14:24:09.833 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: d1be7a29-3496-40ab-b61f-694622b7453b] During the sync_power process the instance has moved from host compute-2.ctlplane.example.com to host compute-1.ctlplane.example.com
Jan 20 14:24:09 compute-1 ceph-mon[81775]: pgmap v981: 321 pgs: 321 active+clean; 334 MiB data, 412 MiB used, 21 GiB / 21 GiB avail; 3.2 MiB/s rd, 8.5 MiB/s wr, 334 op/s
Jan 20 14:24:09 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/1131583590' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:24:09 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/207560375' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:24:09 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:24:09 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:24:09 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:24:09.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:24:10 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:24:10 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:24:10 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:24:10.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:24:10 compute-1 nova_compute[225855]: 2026-01-20 14:24:10.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:24:10 compute-1 nova_compute[225855]: 2026-01-20 14:24:10.341 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:24:10 compute-1 nova_compute[225855]: 2026-01-20 14:24:10.475 225859 DEBUG oslo_concurrency.lockutils [None req-cfba6166-e808-47f1-b9fc-b4a5c8ab6fe8 0167905b37e04d22b41125ba80c626ca a46350811ba449b082347f02d76d1c34 - - default default] Acquiring lock "7108f815-a0ef-4f18-a2c2-c796476ace75" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:24:10 compute-1 nova_compute[225855]: 2026-01-20 14:24:10.476 225859 DEBUG oslo_concurrency.lockutils [None req-cfba6166-e808-47f1-b9fc-b4a5c8ab6fe8 0167905b37e04d22b41125ba80c626ca a46350811ba449b082347f02d76d1c34 - - default default] Lock "7108f815-a0ef-4f18-a2c2-c796476ace75" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:24:10 compute-1 ovn_controller[130490]: 2026-01-20T14:24:10Z|00046|binding|INFO|Releasing lport 77db07a9-3fb9-423e-8da6-f47ee620aae1 from this chassis (sb_readonly=0)
Jan 20 14:24:10 compute-1 nova_compute[225855]: 2026-01-20 14:24:10.495 225859 DEBUG nova.objects.instance [None req-cfba6166-e808-47f1-b9fc-b4a5c8ab6fe8 0167905b37e04d22b41125ba80c626ca a46350811ba449b082347f02d76d1c34 - - default default] Lazy-loading 'flavor' on Instance uuid 7108f815-a0ef-4f18-a2c2-c796476ace75 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:24:10 compute-1 nova_compute[225855]: 2026-01-20 14:24:10.567 225859 DEBUG oslo_concurrency.lockutils [None req-cfba6166-e808-47f1-b9fc-b4a5c8ab6fe8 0167905b37e04d22b41125ba80c626ca a46350811ba449b082347f02d76d1c34 - - default default] Lock "7108f815-a0ef-4f18-a2c2-c796476ace75" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.091s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:24:10 compute-1 nova_compute[225855]: 2026-01-20 14:24:10.583 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:24:10 compute-1 nova_compute[225855]: 2026-01-20 14:24:10.887 225859 DEBUG oslo_concurrency.lockutils [None req-cfba6166-e808-47f1-b9fc-b4a5c8ab6fe8 0167905b37e04d22b41125ba80c626ca a46350811ba449b082347f02d76d1c34 - - default default] Acquiring lock "7108f815-a0ef-4f18-a2c2-c796476ace75" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:24:10 compute-1 nova_compute[225855]: 2026-01-20 14:24:10.888 225859 DEBUG oslo_concurrency.lockutils [None req-cfba6166-e808-47f1-b9fc-b4a5c8ab6fe8 0167905b37e04d22b41125ba80c626ca a46350811ba449b082347f02d76d1c34 - - default default] Lock "7108f815-a0ef-4f18-a2c2-c796476ace75" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:24:10 compute-1 nova_compute[225855]: 2026-01-20 14:24:10.889 225859 INFO nova.compute.manager [None req-cfba6166-e808-47f1-b9fc-b4a5c8ab6fe8 0167905b37e04d22b41125ba80c626ca a46350811ba449b082347f02d76d1c34 - - default default] [instance: 7108f815-a0ef-4f18-a2c2-c796476ace75] Attaching volume 4b77ca01-614c-4b84-ae37-bb7bf58d4923 to /dev/vdb
Jan 20 14:24:10 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/2259299625' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:24:10 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/1792625192' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:24:11 compute-1 nova_compute[225855]: 2026-01-20 14:24:11.091 225859 DEBUG os_brick.utils [None req-cfba6166-e808-47f1-b9fc-b4a5c8ab6fe8 0167905b37e04d22b41125ba80c626ca a46350811ba449b082347f02d76d1c34 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.101', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-1.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Jan 20 14:24:11 compute-1 nova_compute[225855]: 2026-01-20 14:24:11.093 225859 INFO oslo.privsep.daemon [None req-cfba6166-e808-47f1-b9fc-b4a5c8ab6fe8 0167905b37e04d22b41125ba80c626ca a46350811ba449b082347f02d76d1c34 - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'os_brick.privileged.default', '--privsep_sock_path', '/tmp/tmpksavhghy/privsep.sock']
Jan 20 14:24:11 compute-1 nova_compute[225855]: 2026-01-20 14:24:11.337 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:24:11 compute-1 nova_compute[225855]: 2026-01-20 14:24:11.368 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:24:11 compute-1 nova_compute[225855]: 2026-01-20 14:24:11.370 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 14:24:11 compute-1 nova_compute[225855]: 2026-01-20 14:24:11.370 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 20 14:24:11 compute-1 nova_compute[225855]: 2026-01-20 14:24:11.636 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "refresh_cache-7108f815-a0ef-4f18-a2c2-c796476ace75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:24:11 compute-1 nova_compute[225855]: 2026-01-20 14:24:11.637 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquired lock "refresh_cache-7108f815-a0ef-4f18-a2c2-c796476ace75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:24:11 compute-1 nova_compute[225855]: 2026-01-20 14:24:11.638 225859 DEBUG nova.network.neutron [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] [instance: 7108f815-a0ef-4f18-a2c2-c796476ace75] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 20 14:24:11 compute-1 nova_compute[225855]: 2026-01-20 14:24:11.639 225859 DEBUG nova.objects.instance [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 7108f815-a0ef-4f18-a2c2-c796476ace75 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:24:11 compute-1 nova_compute[225855]: 2026-01-20 14:24:11.747 225859 INFO oslo.privsep.daemon [None req-cfba6166-e808-47f1-b9fc-b4a5c8ab6fe8 0167905b37e04d22b41125ba80c626ca a46350811ba449b082347f02d76d1c34 - - default default] Spawned new privsep daemon via rootwrap
Jan 20 14:24:11 compute-1 nova_compute[225855]: 2026-01-20 14:24:11.623 231081 INFO oslo.privsep.daemon [-] privsep daemon starting
Jan 20 14:24:11 compute-1 nova_compute[225855]: 2026-01-20 14:24:11.628 231081 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Jan 20 14:24:11 compute-1 nova_compute[225855]: 2026-01-20 14:24:11.630 231081 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none
Jan 20 14:24:11 compute-1 nova_compute[225855]: 2026-01-20 14:24:11.630 231081 INFO oslo.privsep.daemon [-] privsep daemon running as pid 231081
Jan 20 14:24:11 compute-1 nova_compute[225855]: 2026-01-20 14:24:11.757 231081 DEBUG oslo.privsep.daemon [-] privsep: reply[cb62a7f0-0a1b-4d33-823a-7d08db9c1581]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:24:11 compute-1 ovn_controller[130490]: 2026-01-20T14:24:11Z|00047|binding|INFO|Claiming lport 495d90bb-db88-4c6d-a712-bfd41c4e37fc for this chassis.
Jan 20 14:24:11 compute-1 ovn_controller[130490]: 2026-01-20T14:24:11Z|00048|binding|INFO|495d90bb-db88-4c6d-a712-bfd41c4e37fc: Claiming fa:16:3e:81:4d:90 10.100.0.3
Jan 20 14:24:11 compute-1 ovn_controller[130490]: 2026-01-20T14:24:11Z|00049|binding|INFO|Claiming lport 2837a2a0-6c5c-4cc8-9ed2-29c35ce5252e for this chassis.
Jan 20 14:24:11 compute-1 ovn_controller[130490]: 2026-01-20T14:24:11Z|00050|binding|INFO|2837a2a0-6c5c-4cc8-9ed2-29c35ce5252e: Claiming fa:16:3e:29:ac:f6 19.80.0.18
Jan 20 14:24:11 compute-1 ovn_controller[130490]: 2026-01-20T14:24:11Z|00051|binding|INFO|Setting lport 495d90bb-db88-4c6d-a712-bfd41c4e37fc up in Southbound
Jan 20 14:24:11 compute-1 ovn_controller[130490]: 2026-01-20T14:24:11Z|00052|binding|INFO|Setting lport 2837a2a0-6c5c-4cc8-9ed2-29c35ce5252e up in Southbound
Jan 20 14:24:11 compute-1 nova_compute[225855]: 2026-01-20 14:24:11.860 231081 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:24:11 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:24:11.870 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:81:4d:90 10.100.0.3'], port_security=['fa:16:3e:81:4d:90 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-parent-2135821709', 'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'd1be7a29-3496-40ab-b61f-694622b7453b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6f01f500-b631-4cdb-ae71-b33b0ccfb1aa', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-parent-2135821709', 'neutron:project_id': '861a4f2b70b249afadeabfe85bda53a3', 'neutron:revision_number': '10', 'neutron:security_group_ids': '31a0931a-1aaa-4760-9d26-94149371fd1b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9e54b11a-69dd-4260-a0b8-c84b36782857, chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], logical_port=495d90bb-db88-4c6d-a712-bfd41c4e37fc) old=Port_Binding(up=[False], additional_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:24:11 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:24:11.873 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:29:ac:f6 19.80.0.18'], port_security=['fa:16:3e:29:ac:f6 19.80.0.18'], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': ''}, parent_port=['495d90bb-db88-4c6d-a712-bfd41c4e37fc'], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-subport-1487867751', 'neutron:cidrs': '19.80.0.18/24', 'neutron:device_id': '', 'neutron:device_owner': 'trunk:subport', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7e058a3a-7e99-4576-99a5-9221e6721967', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-subport-1487867751', 'neutron:project_id': '861a4f2b70b249afadeabfe85bda53a3', 'neutron:revision_number': '4', 'neutron:security_group_ids': '31a0931a-1aaa-4760-9d26-94149371fd1b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[42], additional_encap=[], encap=[], mirror_rules=[], datapath=212b28d9-3d5f-4303-a5f1-6dbbb277d038, chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=2837a2a0-6c5c-4cc8-9ed2-29c35ce5252e) old=Port_Binding(up=[False], additional_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:24:11 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:24:11.874 140354 INFO neutron.agent.ovn.metadata.agent [-] Port 495d90bb-db88-4c6d-a712-bfd41c4e37fc in datapath 6f01f500-b631-4cdb-ae71-b33b0ccfb1aa bound to our chassis
Jan 20 14:24:11 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:24:11.876 140354 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6f01f500-b631-4cdb-ae71-b33b0ccfb1aa
Jan 20 14:24:11 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:24:11.894 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[5e9fe0ef-ad88-4a8f-9cdf-a65dff33cf2f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:24:11 compute-1 nova_compute[225855]: 2026-01-20 14:24:11.889 231081 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.029s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:24:11 compute-1 nova_compute[225855]: 2026-01-20 14:24:11.889 231081 DEBUG oslo.privsep.daemon [-] privsep: reply[dc7ce880-4c86-436b-8912-797c3e2d3b40]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:24:11 compute-1 nova_compute[225855]: 2026-01-20 14:24:11.896 231081 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:24:11 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:24:11.897 140354 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap6f01f500-b1 in ovnmeta-6f01f500-b631-4cdb-ae71-b33b0ccfb1aa namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 20 14:24:11 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:24:11.899 229707 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap6f01f500-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 20 14:24:11 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:24:11.899 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[349a8ac3-b1c4-42c0-9946-3585598773ca]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:24:11 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:24:11.900 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[a8be8140-fafd-4a34-90d0-503235ce69bc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:24:11 compute-1 nova_compute[225855]: 2026-01-20 14:24:11.907 231081 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:24:11 compute-1 nova_compute[225855]: 2026-01-20 14:24:11.907 231081 DEBUG oslo.privsep.daemon [-] privsep: reply[7ec4d9b4-db28-4a99-9c17-8d3b49ae4ee5]: (4, ('InitiatorName=iqn.1994-05.com.redhat:1821ea3dc03d', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:24:11 compute-1 nova_compute[225855]: 2026-01-20 14:24:11.910 231081 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:24:11 compute-1 nova_compute[225855]: 2026-01-20 14:24:11.918 231081 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:24:11 compute-1 nova_compute[225855]: 2026-01-20 14:24:11.918 231081 DEBUG oslo.privsep.daemon [-] privsep: reply[ccdb672b-604e-4c47-a0e7-13aa9124ef59]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:24:11 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:24:11.920 140466 DEBUG oslo.privsep.daemon [-] privsep: reply[55657999-4826-43a0-92fb-fa79009e4019]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:24:11 compute-1 nova_compute[225855]: 2026-01-20 14:24:11.921 231081 DEBUG oslo.privsep.daemon [-] privsep: reply[86c724f0-618f-402b-b20a-00df7fdb7321]: (4, '870b1f1c-f19c-477b-b282-ee6eeba50974') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:24:11 compute-1 nova_compute[225855]: 2026-01-20 14:24:11.922 225859 DEBUG oslo_concurrency.processutils [None req-cfba6166-e808-47f1-b9fc-b4a5c8ab6fe8 0167905b37e04d22b41125ba80c626ca a46350811ba449b082347f02d76d1c34 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:24:11 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:24:11.941 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[388e74b4-ad90-4dde-8cb7-a57a931d999f]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:24:11 compute-1 nova_compute[225855]: 2026-01-20 14:24:11.946 225859 DEBUG oslo_concurrency.processutils [None req-cfba6166-e808-47f1-b9fc-b4a5c8ab6fe8 0167905b37e04d22b41125ba80c626ca a46350811ba449b082347f02d76d1c34 - - default default] CMD "nvme version" returned: 0 in 0.024s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:24:11 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:24:11 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:24:11 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:24:11.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:24:11 compute-1 nova_compute[225855]: 2026-01-20 14:24:11.952 225859 DEBUG os_brick.initiator.connectors.lightos [None req-cfba6166-e808-47f1-b9fc-b4a5c8ab6fe8 0167905b37e04d22b41125ba80c626ca a46350811ba449b082347f02d76d1c34 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Jan 20 14:24:11 compute-1 nova_compute[225855]: 2026-01-20 14:24:11.953 225859 DEBUG os_brick.initiator.connectors.lightos [None req-cfba6166-e808-47f1-b9fc-b4a5c8ab6fe8 0167905b37e04d22b41125ba80c626ca a46350811ba449b082347f02d76d1c34 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Jan 20 14:24:11 compute-1 nova_compute[225855]: 2026-01-20 14:24:11.954 225859 DEBUG os_brick.initiator.connectors.lightos [None req-cfba6166-e808-47f1-b9fc-b4a5c8ab6fe8 0167905b37e04d22b41125ba80c626ca a46350811ba449b082347f02d76d1c34 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:5350774e-8b5e-4dba-80a9-92d405981c1d dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Jan 20 14:24:11 compute-1 nova_compute[225855]: 2026-01-20 14:24:11.954 225859 DEBUG os_brick.utils [None req-cfba6166-e808-47f1-b9fc-b4a5c8ab6fe8 0167905b37e04d22b41125ba80c626ca a46350811ba449b082347f02d76d1c34 - - default default] <== get_connector_properties: return (863ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.101', 'host': 'compute-1.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:1821ea3dc03d', 'do_local_attach': False, 'nvme_hostid': '5350774e-8b5e-4dba-80a9-92d405981c1d', 'system uuid': '870b1f1c-f19c-477b-b282-ee6eeba50974', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:5350774e-8b5e-4dba-80a9-92d405981c1d', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Jan 20 14:24:11 compute-1 nova_compute[225855]: 2026-01-20 14:24:11.955 225859 DEBUG nova.virt.block_device [None req-cfba6166-e808-47f1-b9fc-b4a5c8ab6fe8 0167905b37e04d22b41125ba80c626ca a46350811ba449b082347f02d76d1c34 - - default default] [instance: 7108f815-a0ef-4f18-a2c2-c796476ace75] Updating existing volume attachment record: 05ee5376-30c1-4b06-af38-a41fdab6fc8b _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Jan 20 14:24:11 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:24:11.970 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[851c2fcf-ce5e-4bb6-b531-f4ca39e3807c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:24:11 compute-1 NetworkManager[49104]: <info>  [1768919051.9807] manager: (tap6f01f500-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/37)
Jan 20 14:24:11 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:24:11.979 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[a1b991f9-cc04-43d9-b69c-772c1cedbe27]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:24:12 compute-1 systemd-udevd[231097]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 14:24:12 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:24:12.016 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[f556e68f-e9bf-472d-a2fb-9abac137bcc9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:24:12 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:24:12.020 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[04a37590-28c2-46e4-a440-ae6ccf599c46]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:24:12 compute-1 NetworkManager[49104]: <info>  [1768919052.0558] device (tap6f01f500-b0): carrier: link connected
Jan 20 14:24:12 compute-1 nova_compute[225855]: 2026-01-20 14:24:12.058 225859 INFO nova.compute.manager [None req-07ee1594-2ec5-4256-9a6d-4ec5059ca613 b9adeab2c2f5486e92ca8534ef11c720 a7bb8a09ecaa40e8980b2ed19afa279f - - default default] [instance: d1be7a29-3496-40ab-b61f-694622b7453b] Post operation of migration started
Jan 20 14:24:12 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:24:12.064 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[e46d84a6-4cfa-4f31-871a-82c9cbbc57f4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:24:12 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:24:12.095 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[44bbcb26-ce11-4440-af87-42833d25a3ea]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6f01f500-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9e:16:f0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 18], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 409098, 'reachable_time': 40567, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 231116, 'error': None, 'target': 'ovnmeta-6f01f500-b631-4cdb-ae71-b33b0ccfb1aa', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:24:12 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:24:12.120 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[faa3e275-a9fe-4113-a5c7-b96c4a059df9]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe9e:16f0'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 409098, 'tstamp': 409098}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 231117, 'error': None, 'target': 'ovnmeta-6f01f500-b631-4cdb-ae71-b33b0ccfb1aa', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:24:12 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:24:12.148 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[cd7ec95a-2b8e-46f1-bbb6-96eb2b806268]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6f01f500-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9e:16:f0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 18], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 409098, 'reachable_time': 40567, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 231118, 'error': None, 'target': 'ovnmeta-6f01f500-b631-4cdb-ae71-b33b0ccfb1aa', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:24:12 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:24:12 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:24:12 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:24:12.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:24:12 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:24:12.185 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[e0146a89-34de-4c51-9f74-1bc46d78e8d7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:24:12 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:24:12.256 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[6346e2f1-3a3f-4ab6-afd1-761e58929d79]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:24:12 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:24:12.258 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6f01f500-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:24:12 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:24:12.258 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:24:12 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:24:12.259 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6f01f500-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:24:12 compute-1 NetworkManager[49104]: <info>  [1768919052.3045] manager: (tap6f01f500-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/38)
Jan 20 14:24:12 compute-1 nova_compute[225855]: 2026-01-20 14:24:12.303 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:24:12 compute-1 kernel: tap6f01f500-b0: entered promiscuous mode
Jan 20 14:24:12 compute-1 nova_compute[225855]: 2026-01-20 14:24:12.308 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:24:12 compute-1 ovn_controller[130490]: 2026-01-20T14:24:12Z|00053|binding|INFO|Releasing lport 428c2ef0-5c20-4d34-88ac-9a0d29a78f0e from this chassis (sb_readonly=0)
Jan 20 14:24:12 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:24:12.316 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6f01f500-b0, col_values=(('external_ids', {'iface-id': '428c2ef0-5c20-4d34-88ac-9a0d29a78f0e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:24:12 compute-1 nova_compute[225855]: 2026-01-20 14:24:12.317 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:24:12 compute-1 nova_compute[225855]: 2026-01-20 14:24:12.353 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:24:12 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:24:12.354 140354 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/6f01f500-b631-4cdb-ae71-b33b0ccfb1aa.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/6f01f500-b631-4cdb-ae71-b33b0ccfb1aa.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 20 14:24:12 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:24:12.355 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[ca974f1c-9a82-4dcf-86fc-5d6a1ac84282]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:24:12 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:24:12.356 140354 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 20 14:24:12 compute-1 ovn_metadata_agent[140349]: global
Jan 20 14:24:12 compute-1 ovn_metadata_agent[140349]:     log         /dev/log local0 debug
Jan 20 14:24:12 compute-1 ovn_metadata_agent[140349]:     log-tag     haproxy-metadata-proxy-6f01f500-b631-4cdb-ae71-b33b0ccfb1aa
Jan 20 14:24:12 compute-1 ovn_metadata_agent[140349]:     user        root
Jan 20 14:24:12 compute-1 ovn_metadata_agent[140349]:     group       root
Jan 20 14:24:12 compute-1 ovn_metadata_agent[140349]:     maxconn     1024
Jan 20 14:24:12 compute-1 ovn_metadata_agent[140349]:     pidfile     /var/lib/neutron/external/pids/6f01f500-b631-4cdb-ae71-b33b0ccfb1aa.pid.haproxy
Jan 20 14:24:12 compute-1 ovn_metadata_agent[140349]:     daemon
Jan 20 14:24:12 compute-1 ovn_metadata_agent[140349]: 
Jan 20 14:24:12 compute-1 ovn_metadata_agent[140349]: defaults
Jan 20 14:24:12 compute-1 ovn_metadata_agent[140349]:     log global
Jan 20 14:24:12 compute-1 ovn_metadata_agent[140349]:     mode http
Jan 20 14:24:12 compute-1 ovn_metadata_agent[140349]:     option httplog
Jan 20 14:24:12 compute-1 ovn_metadata_agent[140349]:     option dontlognull
Jan 20 14:24:12 compute-1 ovn_metadata_agent[140349]:     option http-server-close
Jan 20 14:24:12 compute-1 ovn_metadata_agent[140349]:     option forwardfor
Jan 20 14:24:12 compute-1 ovn_metadata_agent[140349]:     retries                 3
Jan 20 14:24:12 compute-1 ovn_metadata_agent[140349]:     timeout http-request    30s
Jan 20 14:24:12 compute-1 ovn_metadata_agent[140349]:     timeout connect         30s
Jan 20 14:24:12 compute-1 ovn_metadata_agent[140349]:     timeout client          32s
Jan 20 14:24:12 compute-1 ovn_metadata_agent[140349]:     timeout server          32s
Jan 20 14:24:12 compute-1 ovn_metadata_agent[140349]:     timeout http-keep-alive 30s
Jan 20 14:24:12 compute-1 ovn_metadata_agent[140349]: 
Jan 20 14:24:12 compute-1 ovn_metadata_agent[140349]: 
Jan 20 14:24:12 compute-1 ovn_metadata_agent[140349]: listen listener
Jan 20 14:24:12 compute-1 ovn_metadata_agent[140349]:     bind 169.254.169.254:80
Jan 20 14:24:12 compute-1 ovn_metadata_agent[140349]:     server metadata /var/lib/neutron/metadata_proxy
Jan 20 14:24:12 compute-1 ovn_metadata_agent[140349]:     http-request add-header X-OVN-Network-ID 6f01f500-b631-4cdb-ae71-b33b0ccfb1aa
Jan 20 14:24:12 compute-1 ovn_metadata_agent[140349]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 20 14:24:12 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:24:12.356 140354 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-6f01f500-b631-4cdb-ae71-b33b0ccfb1aa', 'env', 'PROCESS_TAG=haproxy-6f01f500-b631-4cdb-ae71-b33b0ccfb1aa', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/6f01f500-b631-4cdb-ae71-b33b0ccfb1aa.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 20 14:24:12 compute-1 nova_compute[225855]: 2026-01-20 14:24:12.690 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:24:12 compute-1 ceph-mon[81775]: pgmap v982: 321 pgs: 321 active+clean; 366 MiB data, 428 MiB used, 21 GiB / 21 GiB avail; 2.7 MiB/s rd, 9.9 MiB/s wr, 338 op/s
Jan 20 14:24:12 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:24:12 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3076624727' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:24:12 compute-1 podman[231151]: 2026-01-20 14:24:12.851041292 +0000 UTC m=+0.084478141 container create 4b580939f68977becb4612edae62f413c82a1c4eebe3a7a68594b3b2bff36f92 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6f01f500-b631-4cdb-ae71-b33b0ccfb1aa, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Jan 20 14:24:12 compute-1 podman[231151]: 2026-01-20 14:24:12.805704045 +0000 UTC m=+0.039140904 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 20 14:24:12 compute-1 systemd[1]: Started libpod-conmon-4b580939f68977becb4612edae62f413c82a1c4eebe3a7a68594b3b2bff36f92.scope.
Jan 20 14:24:12 compute-1 systemd[1]: Started libcrun container.
Jan 20 14:24:12 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/718240c016ea3692036380758388f57c4dd1c9d71b39140f51ca20ac8e217a05/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 20 14:24:12 compute-1 podman[231151]: 2026-01-20 14:24:12.970207021 +0000 UTC m=+0.203643890 container init 4b580939f68977becb4612edae62f413c82a1c4eebe3a7a68594b3b2bff36f92 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6f01f500-b631-4cdb-ae71-b33b0ccfb1aa, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:24:12 compute-1 podman[231151]: 2026-01-20 14:24:12.981499356 +0000 UTC m=+0.214936205 container start 4b580939f68977becb4612edae62f413c82a1c4eebe3a7a68594b3b2bff36f92 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6f01f500-b631-4cdb-ae71-b33b0ccfb1aa, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 20 14:24:13 compute-1 neutron-haproxy-ovnmeta-6f01f500-b631-4cdb-ae71-b33b0ccfb1aa[231166]: [NOTICE]   (231170) : New worker (231172) forked
Jan 20 14:24:13 compute-1 neutron-haproxy-ovnmeta-6f01f500-b631-4cdb-ae71-b33b0ccfb1aa[231166]: [NOTICE]   (231170) : Loading success.
Jan 20 14:24:13 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:24:13.053 140354 INFO neutron.agent.ovn.metadata.agent [-] Port 2837a2a0-6c5c-4cc8-9ed2-29c35ce5252e in datapath 7e058a3a-7e99-4576-99a5-9221e6721967 unbound from our chassis
Jan 20 14:24:13 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:24:13.055 140354 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7e058a3a-7e99-4576-99a5-9221e6721967
Jan 20 14:24:13 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:24:13.066 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[cbed9446-ce92-4a3e-af8c-6b482861bd70]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:24:13 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:24:13.066 140354 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap7e058a3a-71 in ovnmeta-7e058a3a-7e99-4576-99a5-9221e6721967 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 20 14:24:13 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:24:13.068 229707 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap7e058a3a-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 20 14:24:13 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:24:13.068 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[d41a4296-20aa-4e8f-83c4-a8b1cf51b862]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:24:13 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:24:13.068 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[e2b064b6-9991-4775-9895-d458de5d66a4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:24:13 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:24:13.083 140466 DEBUG oslo.privsep.daemon [-] privsep: reply[4ce076c1-61f0-4e0f-b426-702f0e949fc5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:24:13 compute-1 nova_compute[225855]: 2026-01-20 14:24:13.097 225859 DEBUG oslo_concurrency.lockutils [None req-cfba6166-e808-47f1-b9fc-b4a5c8ab6fe8 0167905b37e04d22b41125ba80c626ca a46350811ba449b082347f02d76d1c34 - - default default] Acquiring lock "cache_volume_driver" by "nova.virt.libvirt.driver.LibvirtDriver._get_volume_driver.<locals>._cache_volume_driver" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:24:13 compute-1 nova_compute[225855]: 2026-01-20 14:24:13.097 225859 DEBUG oslo_concurrency.lockutils [None req-cfba6166-e808-47f1-b9fc-b4a5c8ab6fe8 0167905b37e04d22b41125ba80c626ca a46350811ba449b082347f02d76d1c34 - - default default] Lock "cache_volume_driver" acquired by "nova.virt.libvirt.driver.LibvirtDriver._get_volume_driver.<locals>._cache_volume_driver" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:24:13 compute-1 nova_compute[225855]: 2026-01-20 14:24:13.099 225859 DEBUG oslo_concurrency.lockutils [None req-cfba6166-e808-47f1-b9fc-b4a5c8ab6fe8 0167905b37e04d22b41125ba80c626ca a46350811ba449b082347f02d76d1c34 - - default default] Lock "cache_volume_driver" "released" by "nova.virt.libvirt.driver.LibvirtDriver._get_volume_driver.<locals>._cache_volume_driver" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:24:13 compute-1 nova_compute[225855]: 2026-01-20 14:24:13.107 225859 DEBUG nova.objects.instance [None req-cfba6166-e808-47f1-b9fc-b4a5c8ab6fe8 0167905b37e04d22b41125ba80c626ca a46350811ba449b082347f02d76d1c34 - - default default] Lazy-loading 'flavor' on Instance uuid 7108f815-a0ef-4f18-a2c2-c796476ace75 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:24:13 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:24:13.123 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[30b0018d-ed3c-4ec2-9434-e34ff2d45db4]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:24:13 compute-1 nova_compute[225855]: 2026-01-20 14:24:13.130 225859 DEBUG nova.virt.libvirt.driver [None req-cfba6166-e808-47f1-b9fc-b4a5c8ab6fe8 0167905b37e04d22b41125ba80c626ca a46350811ba449b082347f02d76d1c34 - - default default] [instance: 7108f815-a0ef-4f18-a2c2-c796476ace75] Attempting to attach volume 4b77ca01-614c-4b84-ae37-bb7bf58d4923 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
Jan 20 14:24:13 compute-1 nova_compute[225855]: 2026-01-20 14:24:13.133 225859 DEBUG nova.virt.libvirt.guest [None req-cfba6166-e808-47f1-b9fc-b4a5c8ab6fe8 0167905b37e04d22b41125ba80c626ca a46350811ba449b082347f02d76d1c34 - - default default] attach device xml: <disk type="network" device="disk">
Jan 20 14:24:13 compute-1 nova_compute[225855]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 20 14:24:13 compute-1 nova_compute[225855]:   <source protocol="rbd" name="volumes/volume-4b77ca01-614c-4b84-ae37-bb7bf58d4923">
Jan 20 14:24:13 compute-1 nova_compute[225855]:     <host name="192.168.122.100" port="6789"/>
Jan 20 14:24:13 compute-1 nova_compute[225855]:     <host name="192.168.122.102" port="6789"/>
Jan 20 14:24:13 compute-1 nova_compute[225855]:     <host name="192.168.122.101" port="6789"/>
Jan 20 14:24:13 compute-1 nova_compute[225855]:   </source>
Jan 20 14:24:13 compute-1 nova_compute[225855]:   <auth username="openstack">
Jan 20 14:24:13 compute-1 nova_compute[225855]:     <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:24:13 compute-1 nova_compute[225855]:   </auth>
Jan 20 14:24:13 compute-1 nova_compute[225855]:   <target dev="vdb" bus="virtio"/>
Jan 20 14:24:13 compute-1 nova_compute[225855]:   <serial>4b77ca01-614c-4b84-ae37-bb7bf58d4923</serial>
Jan 20 14:24:13 compute-1 nova_compute[225855]: </disk>
Jan 20 14:24:13 compute-1 nova_compute[225855]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Jan 20 14:24:13 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:24:13.154 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[7599f9e7-8d0e-40e5-900d-6e487fb25f73]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:24:13 compute-1 systemd-udevd[231114]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 14:24:13 compute-1 NetworkManager[49104]: <info>  [1768919053.1669] manager: (tap7e058a3a-70): new Veth device (/org/freedesktop/NetworkManager/Devices/39)
Jan 20 14:24:13 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:24:13.166 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[67b5e25b-a71a-493f-8ac4-65774e87943c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:24:13 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:24:13.205 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[26bd66dc-9c62-45e3-841c-f8db96c083a6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:24:13 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:24:13.208 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[b4e74e38-cb9f-41c6-bd10-5d6304d7c91c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:24:13 compute-1 NetworkManager[49104]: <info>  [1768919053.2382] device (tap7e058a3a-70): carrier: link connected
Jan 20 14:24:13 compute-1 nova_compute[225855]: 2026-01-20 14:24:13.238 225859 DEBUG oslo_concurrency.lockutils [None req-07ee1594-2ec5-4256-9a6d-4ec5059ca613 b9adeab2c2f5486e92ca8534ef11c720 a7bb8a09ecaa40e8980b2ed19afa279f - - default default] Acquiring lock "refresh_cache-d1be7a29-3496-40ab-b61f-694622b7453b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:24:13 compute-1 nova_compute[225855]: 2026-01-20 14:24:13.239 225859 DEBUG oslo_concurrency.lockutils [None req-07ee1594-2ec5-4256-9a6d-4ec5059ca613 b9adeab2c2f5486e92ca8534ef11c720 a7bb8a09ecaa40e8980b2ed19afa279f - - default default] Acquired lock "refresh_cache-d1be7a29-3496-40ab-b61f-694622b7453b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:24:13 compute-1 nova_compute[225855]: 2026-01-20 14:24:13.240 225859 DEBUG nova.network.neutron [None req-07ee1594-2ec5-4256-9a6d-4ec5059ca613 b9adeab2c2f5486e92ca8534ef11c720 a7bb8a09ecaa40e8980b2ed19afa279f - - default default] [instance: d1be7a29-3496-40ab-b61f-694622b7453b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 14:24:13 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:24:13.246 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[5d5d0de4-8782-4bc4-87d1-4a5a8dd83b10]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:24:13 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:24:13.263 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[ef8a7863-efb9-458e-8416-d0113a5dfcda]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7e058a3a-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:3e:96:b3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 19], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 409216, 'reachable_time': 22359, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 231211, 'error': None, 'target': 'ovnmeta-7e058a3a-7e99-4576-99a5-9221e6721967', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:24:13 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:24:13.279 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[538380ee-aad4-4114-a75f-387c028c4d4f]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe3e:96b3'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 409216, 'tstamp': 409216}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 231212, 'error': None, 'target': 'ovnmeta-7e058a3a-7e99-4576-99a5-9221e6721967', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:24:13 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:24:13.296 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[56730f89-b78e-42f5-8196-178a22724ba5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7e058a3a-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:3e:96:b3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 19], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 409216, 'reachable_time': 22359, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 231213, 'error': None, 'target': 'ovnmeta-7e058a3a-7e99-4576-99a5-9221e6721967', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:24:13 compute-1 nova_compute[225855]: 2026-01-20 14:24:13.313 225859 DEBUG nova.virt.libvirt.driver [None req-cfba6166-e808-47f1-b9fc-b4a5c8ab6fe8 0167905b37e04d22b41125ba80c626ca a46350811ba449b082347f02d76d1c34 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:24:13 compute-1 nova_compute[225855]: 2026-01-20 14:24:13.313 225859 DEBUG nova.virt.libvirt.driver [None req-cfba6166-e808-47f1-b9fc-b4a5c8ab6fe8 0167905b37e04d22b41125ba80c626ca a46350811ba449b082347f02d76d1c34 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:24:13 compute-1 nova_compute[225855]: 2026-01-20 14:24:13.314 225859 DEBUG nova.virt.libvirt.driver [None req-cfba6166-e808-47f1-b9fc-b4a5c8ab6fe8 0167905b37e04d22b41125ba80c626ca a46350811ba449b082347f02d76d1c34 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:24:13 compute-1 nova_compute[225855]: 2026-01-20 14:24:13.314 225859 DEBUG nova.virt.libvirt.driver [None req-cfba6166-e808-47f1-b9fc-b4a5c8ab6fe8 0167905b37e04d22b41125ba80c626ca a46350811ba449b082347f02d76d1c34 - - default default] No VIF found with MAC fa:16:3e:fb:ee:02, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 20 14:24:13 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:24:13.340 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[208a2ccd-b68d-4791-b66d-7ac79d406e7c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:24:13 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:24:13.434 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[54b4db8c-c22c-4657-9259-136aa2ab5d17]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:24:13 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:24:13.437 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7e058a3a-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:24:13 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:24:13.438 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:24:13 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:24:13.439 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7e058a3a-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:24:13 compute-1 kernel: tap7e058a3a-70: entered promiscuous mode
Jan 20 14:24:13 compute-1 nova_compute[225855]: 2026-01-20 14:24:13.473 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:24:13 compute-1 NetworkManager[49104]: <info>  [1768919053.4741] manager: (tap7e058a3a-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/40)
Jan 20 14:24:13 compute-1 nova_compute[225855]: 2026-01-20 14:24:13.476 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:24:13 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:24:13.484 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7e058a3a-70, col_values=(('external_ids', {'iface-id': '69d1df5f-8105-47f6-864e-688d91c13d13'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:24:13 compute-1 nova_compute[225855]: 2026-01-20 14:24:13.486 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:24:13 compute-1 ovn_controller[130490]: 2026-01-20T14:24:13Z|00054|binding|INFO|Releasing lport 69d1df5f-8105-47f6-864e-688d91c13d13 from this chassis (sb_readonly=0)
Jan 20 14:24:13 compute-1 nova_compute[225855]: 2026-01-20 14:24:13.520 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:24:13 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:24:13.521 140354 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/7e058a3a-7e99-4576-99a5-9221e6721967.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/7e058a3a-7e99-4576-99a5-9221e6721967.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 20 14:24:13 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:24:13.522 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[550d712e-e12f-41dd-b0e7-7d3aac8dd3fa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:24:13 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:24:13.523 140354 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 20 14:24:13 compute-1 ovn_metadata_agent[140349]: global
Jan 20 14:24:13 compute-1 ovn_metadata_agent[140349]:     log         /dev/log local0 debug
Jan 20 14:24:13 compute-1 ovn_metadata_agent[140349]:     log-tag     haproxy-metadata-proxy-7e058a3a-7e99-4576-99a5-9221e6721967
Jan 20 14:24:13 compute-1 ovn_metadata_agent[140349]:     user        root
Jan 20 14:24:13 compute-1 ovn_metadata_agent[140349]:     group       root
Jan 20 14:24:13 compute-1 ovn_metadata_agent[140349]:     maxconn     1024
Jan 20 14:24:13 compute-1 ovn_metadata_agent[140349]:     pidfile     /var/lib/neutron/external/pids/7e058a3a-7e99-4576-99a5-9221e6721967.pid.haproxy
Jan 20 14:24:13 compute-1 ovn_metadata_agent[140349]:     daemon
Jan 20 14:24:13 compute-1 ovn_metadata_agent[140349]: 
Jan 20 14:24:13 compute-1 ovn_metadata_agent[140349]: defaults
Jan 20 14:24:13 compute-1 ovn_metadata_agent[140349]:     log global
Jan 20 14:24:13 compute-1 ovn_metadata_agent[140349]:     mode http
Jan 20 14:24:13 compute-1 ovn_metadata_agent[140349]:     option httplog
Jan 20 14:24:13 compute-1 ovn_metadata_agent[140349]:     option dontlognull
Jan 20 14:24:13 compute-1 ovn_metadata_agent[140349]:     option http-server-close
Jan 20 14:24:13 compute-1 ovn_metadata_agent[140349]:     option forwardfor
Jan 20 14:24:13 compute-1 ovn_metadata_agent[140349]:     retries                 3
Jan 20 14:24:13 compute-1 ovn_metadata_agent[140349]:     timeout http-request    30s
Jan 20 14:24:13 compute-1 ovn_metadata_agent[140349]:     timeout connect         30s
Jan 20 14:24:13 compute-1 ovn_metadata_agent[140349]:     timeout client          32s
Jan 20 14:24:13 compute-1 ovn_metadata_agent[140349]:     timeout server          32s
Jan 20 14:24:13 compute-1 ovn_metadata_agent[140349]:     timeout http-keep-alive 30s
Jan 20 14:24:13 compute-1 ovn_metadata_agent[140349]: 
Jan 20 14:24:13 compute-1 ovn_metadata_agent[140349]: 
Jan 20 14:24:13 compute-1 ovn_metadata_agent[140349]: listen listener
Jan 20 14:24:13 compute-1 ovn_metadata_agent[140349]:     bind 169.254.169.254:80
Jan 20 14:24:13 compute-1 ovn_metadata_agent[140349]:     server metadata /var/lib/neutron/metadata_proxy
Jan 20 14:24:13 compute-1 ovn_metadata_agent[140349]:     http-request add-header X-OVN-Network-ID 7e058a3a-7e99-4576-99a5-9221e6721967
Jan 20 14:24:13 compute-1 ovn_metadata_agent[140349]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 20 14:24:13 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:24:13.524 140354 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-7e058a3a-7e99-4576-99a5-9221e6721967', 'env', 'PROCESS_TAG=haproxy-7e058a3a-7e99-4576-99a5-9221e6721967', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/7e058a3a-7e99-4576-99a5-9221e6721967.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 20 14:24:13 compute-1 sudo[231219]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:24:13 compute-1 sudo[231219]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:24:13 compute-1 sudo[231219]: pam_unix(sudo:session): session closed for user root
Jan 20 14:24:13 compute-1 sudo[231248]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:24:13 compute-1 sudo[231248]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:24:13 compute-1 sudo[231248]: pam_unix(sudo:session): session closed for user root
Jan 20 14:24:13 compute-1 nova_compute[225855]: 2026-01-20 14:24:13.622 225859 DEBUG oslo_concurrency.lockutils [None req-cfba6166-e808-47f1-b9fc-b4a5c8ab6fe8 0167905b37e04d22b41125ba80c626ca a46350811ba449b082347f02d76d1c34 - - default default] Lock "7108f815-a0ef-4f18-a2c2-c796476ace75" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 2.734s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:24:13 compute-1 sudo[231275]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:24:13 compute-1 sudo[231275]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:24:13 compute-1 sudo[231275]: pam_unix(sudo:session): session closed for user root
Jan 20 14:24:13 compute-1 sudo[231300]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 20 14:24:13 compute-1 sudo[231300]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:24:13 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/3076624727' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:24:13 compute-1 ceph-mon[81775]: pgmap v983: 321 pgs: 321 active+clean; 410 MiB data, 451 MiB used, 21 GiB / 21 GiB avail; 2.6 MiB/s rd, 9.9 MiB/s wr, 302 op/s
Jan 20 14:24:13 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/2630309587' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 14:24:13 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/2630309587' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 14:24:13 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/2739698211' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:24:13 compute-1 podman[231349]: 2026-01-20 14:24:13.91265758 +0000 UTC m=+0.051426178 container create 903600b0c940a7555c4b06a0109946d8aa554233e1569849e83e037562c4e4f0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7e058a3a-7e99-4576-99a5-9221e6721967, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202)
Jan 20 14:24:13 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:24:13 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:24:13 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:24:13.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:24:13 compute-1 systemd[1]: Started libpod-conmon-903600b0c940a7555c4b06a0109946d8aa554233e1569849e83e037562c4e4f0.scope.
Jan 20 14:24:13 compute-1 podman[231349]: 2026-01-20 14:24:13.885777139 +0000 UTC m=+0.024545747 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 20 14:24:13 compute-1 systemd[1]: Started libcrun container.
Jan 20 14:24:13 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cbc03c7536b385d00acac66f5f16e4bc400b57b72120eb89b6a3473d1cec9b14/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 20 14:24:14 compute-1 podman[231349]: 2026-01-20 14:24:14.00715617 +0000 UTC m=+0.145924768 container init 903600b0c940a7555c4b06a0109946d8aa554233e1569849e83e037562c4e4f0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7e058a3a-7e99-4576-99a5-9221e6721967, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 20 14:24:14 compute-1 podman[231349]: 2026-01-20 14:24:14.01290451 +0000 UTC m=+0.151673098 container start 903600b0c940a7555c4b06a0109946d8aa554233e1569849e83e037562c4e4f0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7e058a3a-7e99-4576-99a5-9221e6721967, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true)
Jan 20 14:24:14 compute-1 neutron-haproxy-ovnmeta-7e058a3a-7e99-4576-99a5-9221e6721967[231376]: [NOTICE]   (231381) : New worker (231385) forked
Jan 20 14:24:14 compute-1 neutron-haproxy-ovnmeta-7e058a3a-7e99-4576-99a5-9221e6721967[231376]: [NOTICE]   (231381) : Loading success.
Jan 20 14:24:14 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:24:14 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:24:14 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:24:14.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:24:14 compute-1 sudo[231300]: pam_unix(sudo:session): session closed for user root
Jan 20 14:24:14 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:24:14 compute-1 nova_compute[225855]: 2026-01-20 14:24:14.740 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:24:15 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:24:15 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 14:24:15 compute-1 nova_compute[225855]: 2026-01-20 14:24:15.509 225859 DEBUG nova.virt.libvirt.driver [None req-0f8fddf1-9a13-4576-bd67-46eb1bbe7790 fcfea6d5aa8c43228e6b2ba5e902b803 836213a644224fb8877b299d76d5df3b - - default default] [instance: 7108f815-a0ef-4f18-a2c2-c796476ace75] volume_snapshot_create: create_info: {'snapshot_id': 'd6a30af1-1dc8-491a-a668-44d4c7e7eb08', 'type': 'qcow2', 'new_file': 'new_file'} volume_snapshot_create /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:3572
Jan 20 14:24:15 compute-1 nova_compute[225855]: 2026-01-20 14:24:15.514 225859 ERROR nova.virt.libvirt.driver [None req-0f8fddf1-9a13-4576-bd67-46eb1bbe7790 fcfea6d5aa8c43228e6b2ba5e902b803 836213a644224fb8877b299d76d5df3b - - default default] [instance: 7108f815-a0ef-4f18-a2c2-c796476ace75] Error occurred during volume_snapshot_create, sending error status to Cinder.: nova.exception.InternalError: Found no disk to snapshot.
Jan 20 14:24:15 compute-1 nova_compute[225855]: 2026-01-20 14:24:15.514 225859 ERROR nova.virt.libvirt.driver [instance: 7108f815-a0ef-4f18-a2c2-c796476ace75] Traceback (most recent call last):
Jan 20 14:24:15 compute-1 nova_compute[225855]: 2026-01-20 14:24:15.514 225859 ERROR nova.virt.libvirt.driver [instance: 7108f815-a0ef-4f18-a2c2-c796476ace75]   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 3590, in volume_snapshot_create
Jan 20 14:24:15 compute-1 nova_compute[225855]: 2026-01-20 14:24:15.514 225859 ERROR nova.virt.libvirt.driver [instance: 7108f815-a0ef-4f18-a2c2-c796476ace75]     self._volume_snapshot_create(context, instance, guest,
Jan 20 14:24:15 compute-1 nova_compute[225855]: 2026-01-20 14:24:15.514 225859 ERROR nova.virt.libvirt.driver [instance: 7108f815-a0ef-4f18-a2c2-c796476ace75]   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 3477, in _volume_snapshot_create
Jan 20 14:24:15 compute-1 nova_compute[225855]: 2026-01-20 14:24:15.514 225859 ERROR nova.virt.libvirt.driver [instance: 7108f815-a0ef-4f18-a2c2-c796476ace75]     raise exception.InternalError(msg)
Jan 20 14:24:15 compute-1 nova_compute[225855]: 2026-01-20 14:24:15.514 225859 ERROR nova.virt.libvirt.driver [instance: 7108f815-a0ef-4f18-a2c2-c796476ace75] nova.exception.InternalError: Found no disk to snapshot.
Jan 20 14:24:15 compute-1 nova_compute[225855]: 2026-01-20 14:24:15.514 225859 ERROR nova.virt.libvirt.driver [instance: 7108f815-a0ef-4f18-a2c2-c796476ace75] 
Jan 20 14:24:15 compute-1 nova_compute[225855]: 2026-01-20 14:24:15.658 225859 DEBUG nova.virt.libvirt.driver [None req-4b3643ae-4fef-48c3-8123-aab9eacba0b4 fcfea6d5aa8c43228e6b2ba5e902b803 836213a644224fb8877b299d76d5df3b - - default default] [instance: 7108f815-a0ef-4f18-a2c2-c796476ace75] volume_snapshot_delete: delete_info: {'volume_id': '4b77ca01-614c-4b84-ae37-bb7bf58d4923'} _volume_snapshot_delete /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:3673
Jan 20 14:24:15 compute-1 nova_compute[225855]: 2026-01-20 14:24:15.659 225859 ERROR nova.virt.libvirt.driver [None req-4b3643ae-4fef-48c3-8123-aab9eacba0b4 fcfea6d5aa8c43228e6b2ba5e902b803 836213a644224fb8877b299d76d5df3b - - default default] [instance: 7108f815-a0ef-4f18-a2c2-c796476ace75] Error occurred during volume_snapshot_delete, sending error status to Cinder.: KeyError: 'type'
Jan 20 14:24:15 compute-1 nova_compute[225855]: 2026-01-20 14:24:15.659 225859 ERROR nova.virt.libvirt.driver [instance: 7108f815-a0ef-4f18-a2c2-c796476ace75] Traceback (most recent call last):
Jan 20 14:24:15 compute-1 nova_compute[225855]: 2026-01-20 14:24:15.659 225859 ERROR nova.virt.libvirt.driver [instance: 7108f815-a0ef-4f18-a2c2-c796476ace75]   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 3846, in volume_snapshot_delete
Jan 20 14:24:15 compute-1 nova_compute[225855]: 2026-01-20 14:24:15.659 225859 ERROR nova.virt.libvirt.driver [instance: 7108f815-a0ef-4f18-a2c2-c796476ace75]     self._volume_snapshot_delete(context, instance, volume_id,
Jan 20 14:24:15 compute-1 nova_compute[225855]: 2026-01-20 14:24:15.659 225859 ERROR nova.virt.libvirt.driver [instance: 7108f815-a0ef-4f18-a2c2-c796476ace75]   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 3676, in _volume_snapshot_delete
Jan 20 14:24:15 compute-1 nova_compute[225855]: 2026-01-20 14:24:15.659 225859 ERROR nova.virt.libvirt.driver [instance: 7108f815-a0ef-4f18-a2c2-c796476ace75]     if delete_info['type'] != 'qcow2':
Jan 20 14:24:15 compute-1 nova_compute[225855]: 2026-01-20 14:24:15.659 225859 ERROR nova.virt.libvirt.driver [instance: 7108f815-a0ef-4f18-a2c2-c796476ace75] KeyError: 'type'
Jan 20 14:24:15 compute-1 nova_compute[225855]: 2026-01-20 14:24:15.659 225859 ERROR nova.virt.libvirt.driver [instance: 7108f815-a0ef-4f18-a2c2-c796476ace75] 
Jan 20 14:24:15 compute-1 nova_compute[225855]: 2026-01-20 14:24:15.674 225859 ERROR nova.virt.libvirt.driver [None req-0f8fddf1-9a13-4576-bd67-46eb1bbe7790 fcfea6d5aa8c43228e6b2ba5e902b803 836213a644224fb8877b299d76d5df3b - - default default] Failed to send updated snapshot status to volume service.: nova.exception.SnapshotNotFound: Snapshot d6a30af1-1dc8-491a-a668-44d4c7e7eb08 could not be found.
Jan 20 14:24:15 compute-1 nova_compute[225855]: 2026-01-20 14:24:15.674 225859 ERROR nova.virt.libvirt.driver Traceback (most recent call last):
Jan 20 14:24:15 compute-1 nova_compute[225855]: 2026-01-20 14:24:15.674 225859 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 3590, in volume_snapshot_create
Jan 20 14:24:15 compute-1 nova_compute[225855]: 2026-01-20 14:24:15.674 225859 ERROR nova.virt.libvirt.driver     self._volume_snapshot_create(context, instance, guest,
Jan 20 14:24:15 compute-1 nova_compute[225855]: 2026-01-20 14:24:15.674 225859 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 3477, in _volume_snapshot_create
Jan 20 14:24:15 compute-1 nova_compute[225855]: 2026-01-20 14:24:15.674 225859 ERROR nova.virt.libvirt.driver     raise exception.InternalError(msg)
Jan 20 14:24:15 compute-1 nova_compute[225855]: 2026-01-20 14:24:15.674 225859 ERROR nova.virt.libvirt.driver nova.exception.InternalError: Found no disk to snapshot.
Jan 20 14:24:15 compute-1 nova_compute[225855]: 2026-01-20 14:24:15.674 225859 ERROR nova.virt.libvirt.driver 
Jan 20 14:24:15 compute-1 nova_compute[225855]: 2026-01-20 14:24:15.674 225859 ERROR nova.virt.libvirt.driver During handling of the above exception, another exception occurred:
Jan 20 14:24:15 compute-1 nova_compute[225855]: 2026-01-20 14:24:15.674 225859 ERROR nova.virt.libvirt.driver 
Jan 20 14:24:15 compute-1 nova_compute[225855]: 2026-01-20 14:24:15.674 225859 ERROR nova.virt.libvirt.driver Traceback (most recent call last):
Jan 20 14:24:15 compute-1 nova_compute[225855]: 2026-01-20 14:24:15.674 225859 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/volume/cinder.py", line 466, in wrapper
Jan 20 14:24:15 compute-1 nova_compute[225855]: 2026-01-20 14:24:15.674 225859 ERROR nova.virt.libvirt.driver     res = method(self, ctx, snapshot_id, *args, **kwargs)
Jan 20 14:24:15 compute-1 nova_compute[225855]: 2026-01-20 14:24:15.674 225859 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/volume/cinder.py", line 761, in update_snapshot_status
Jan 20 14:24:15 compute-1 nova_compute[225855]: 2026-01-20 14:24:15.674 225859 ERROR nova.virt.libvirt.driver     vs.update_snapshot_status(
Jan 20 14:24:15 compute-1 nova_compute[225855]: 2026-01-20 14:24:15.674 225859 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/cinderclient/v3/volume_snapshots.py", line 225, in update_snapshot_status
Jan 20 14:24:15 compute-1 nova_compute[225855]: 2026-01-20 14:24:15.674 225859 ERROR nova.virt.libvirt.driver     return self._action('os-update_snapshot_status',
Jan 20 14:24:15 compute-1 nova_compute[225855]: 2026-01-20 14:24:15.674 225859 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/cinderclient/v3/volume_snapshots.py", line 221, in _action
Jan 20 14:24:15 compute-1 nova_compute[225855]: 2026-01-20 14:24:15.674 225859 ERROR nova.virt.libvirt.driver     resp, body = self.api.client.post(url, body=body)
Jan 20 14:24:15 compute-1 nova_compute[225855]: 2026-01-20 14:24:15.674 225859 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/cinderclient/client.py", line 223, in post
Jan 20 14:24:15 compute-1 nova_compute[225855]: 2026-01-20 14:24:15.674 225859 ERROR nova.virt.libvirt.driver     return self._cs_request(url, 'POST', **kwargs)
Jan 20 14:24:15 compute-1 nova_compute[225855]: 2026-01-20 14:24:15.674 225859 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/cinderclient/client.py", line 211, in _cs_request
Jan 20 14:24:15 compute-1 nova_compute[225855]: 2026-01-20 14:24:15.674 225859 ERROR nova.virt.libvirt.driver     return self.request(url, method, **kwargs)
Jan 20 14:24:15 compute-1 nova_compute[225855]: 2026-01-20 14:24:15.674 225859 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/cinderclient/client.py", line 197, in request
Jan 20 14:24:15 compute-1 nova_compute[225855]: 2026-01-20 14:24:15.674 225859 ERROR nova.virt.libvirt.driver     raise exceptions.from_response(resp, body)
Jan 20 14:24:15 compute-1 nova_compute[225855]: 2026-01-20 14:24:15.674 225859 ERROR nova.virt.libvirt.driver cinderclient.exceptions.NotFound: Snapshot d6a30af1-1dc8-491a-a668-44d4c7e7eb08 could not be found. (HTTP 404) (Request-ID: req-901d01ef-78e9-4c36-92ea-ddb3587945dd)
Jan 20 14:24:15 compute-1 nova_compute[225855]: 2026-01-20 14:24:15.674 225859 ERROR nova.virt.libvirt.driver 
Jan 20 14:24:15 compute-1 nova_compute[225855]: 2026-01-20 14:24:15.674 225859 ERROR nova.virt.libvirt.driver During handling of the above exception, another exception occurred:
Jan 20 14:24:15 compute-1 nova_compute[225855]: 2026-01-20 14:24:15.674 225859 ERROR nova.virt.libvirt.driver 
Jan 20 14:24:15 compute-1 nova_compute[225855]: 2026-01-20 14:24:15.674 225859 ERROR nova.virt.libvirt.driver Traceback (most recent call last):
Jan 20 14:24:15 compute-1 nova_compute[225855]: 2026-01-20 14:24:15.674 225859 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 3412, in _volume_snapshot_update_status
Jan 20 14:24:15 compute-1 nova_compute[225855]: 2026-01-20 14:24:15.674 225859 ERROR nova.virt.libvirt.driver     self._volume_api.update_snapshot_status(context,
Jan 20 14:24:15 compute-1 nova_compute[225855]: 2026-01-20 14:24:15.674 225859 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/volume/cinder.py", line 397, in wrapper
Jan 20 14:24:15 compute-1 nova_compute[225855]: 2026-01-20 14:24:15.674 225859 ERROR nova.virt.libvirt.driver     res = method(self, ctx, *args, **kwargs)
Jan 20 14:24:15 compute-1 nova_compute[225855]: 2026-01-20 14:24:15.674 225859 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/volume/cinder.py", line 468, in wrapper
Jan 20 14:24:15 compute-1 nova_compute[225855]: 2026-01-20 14:24:15.674 225859 ERROR nova.virt.libvirt.driver     _reraise(exception.SnapshotNotFound(snapshot_id=snapshot_id))
Jan 20 14:24:15 compute-1 nova_compute[225855]: 2026-01-20 14:24:15.674 225859 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/volume/cinder.py", line 488, in _reraise
Jan 20 14:24:15 compute-1 nova_compute[225855]: 2026-01-20 14:24:15.674 225859 ERROR nova.virt.libvirt.driver     raise desired_exc.with_traceback(sys.exc_info()[2])
Jan 20 14:24:15 compute-1 nova_compute[225855]: 2026-01-20 14:24:15.674 225859 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/volume/cinder.py", line 466, in wrapper
Jan 20 14:24:15 compute-1 nova_compute[225855]: 2026-01-20 14:24:15.674 225859 ERROR nova.virt.libvirt.driver     res = method(self, ctx, snapshot_id, *args, **kwargs)
Jan 20 14:24:15 compute-1 nova_compute[225855]: 2026-01-20 14:24:15.674 225859 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/volume/cinder.py", line 761, in update_snapshot_status
Jan 20 14:24:15 compute-1 nova_compute[225855]: 2026-01-20 14:24:15.674 225859 ERROR nova.virt.libvirt.driver     vs.update_snapshot_status(
Jan 20 14:24:15 compute-1 nova_compute[225855]: 2026-01-20 14:24:15.674 225859 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/cinderclient/v3/volume_snapshots.py", line 225, in update_snapshot_status
Jan 20 14:24:15 compute-1 nova_compute[225855]: 2026-01-20 14:24:15.674 225859 ERROR nova.virt.libvirt.driver     return self._action('os-update_snapshot_status',
Jan 20 14:24:15 compute-1 nova_compute[225855]: 2026-01-20 14:24:15.674 225859 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/cinderclient/v3/volume_snapshots.py", line 221, in _action
Jan 20 14:24:15 compute-1 nova_compute[225855]: 2026-01-20 14:24:15.674 225859 ERROR nova.virt.libvirt.driver     resp, body = self.api.client.post(url, body=body)
Jan 20 14:24:15 compute-1 nova_compute[225855]: 2026-01-20 14:24:15.674 225859 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/cinderclient/client.py", line 223, in post
Jan 20 14:24:15 compute-1 nova_compute[225855]: 2026-01-20 14:24:15.674 225859 ERROR nova.virt.libvirt.driver     return self._cs_request(url, 'POST', **kwargs)
Jan 20 14:24:15 compute-1 nova_compute[225855]: 2026-01-20 14:24:15.674 225859 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/cinderclient/client.py", line 211, in _cs_request
Jan 20 14:24:15 compute-1 nova_compute[225855]: 2026-01-20 14:24:15.674 225859 ERROR nova.virt.libvirt.driver     return self.request(url, method, **kwargs)
Jan 20 14:24:15 compute-1 nova_compute[225855]: 2026-01-20 14:24:15.674 225859 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/cinderclient/client.py", line 197, in request
Jan 20 14:24:15 compute-1 nova_compute[225855]: 2026-01-20 14:24:15.674 225859 ERROR nova.virt.libvirt.driver     raise exceptions.from_response(resp, body)
Jan 20 14:24:15 compute-1 nova_compute[225855]: 2026-01-20 14:24:15.674 225859 ERROR nova.virt.libvirt.driver nova.exception.SnapshotNotFound: Snapshot d6a30af1-1dc8-491a-a668-44d4c7e7eb08 could not be found.
Jan 20 14:24:15 compute-1 nova_compute[225855]: 2026-01-20 14:24:15.674 225859 ERROR nova.virt.libvirt.driver 
Jan 20 14:24:15 compute-1 nova_compute[225855]: 2026-01-20 14:24:15.678 225859 ERROR oslo_messaging.rpc.server [None req-0f8fddf1-9a13-4576-bd67-46eb1bbe7790 fcfea6d5aa8c43228e6b2ba5e902b803 836213a644224fb8877b299d76d5df3b - - default default] Exception during message handling: nova.exception.InternalError: Found no disk to snapshot.
Jan 20 14:24:15 compute-1 nova_compute[225855]: 2026-01-20 14:24:15.678 225859 ERROR oslo_messaging.rpc.server Traceback (most recent call last):
Jan 20 14:24:15 compute-1 nova_compute[225855]: 2026-01-20 14:24:15.678 225859 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/server.py", line 165, in _process_incoming
Jan 20 14:24:15 compute-1 nova_compute[225855]: 2026-01-20 14:24:15.678 225859 ERROR oslo_messaging.rpc.server     res = self.dispatcher.dispatch(message)
Jan 20 14:24:15 compute-1 nova_compute[225855]: 2026-01-20 14:24:15.678 225859 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/dispatcher.py", line 309, in dispatch
Jan 20 14:24:15 compute-1 nova_compute[225855]: 2026-01-20 14:24:15.678 225859 ERROR oslo_messaging.rpc.server     return self._do_dispatch(endpoint, method, ctxt, args)
Jan 20 14:24:15 compute-1 nova_compute[225855]: 2026-01-20 14:24:15.678 225859 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/dispatcher.py", line 229, in _do_dispatch
Jan 20 14:24:15 compute-1 nova_compute[225855]: 2026-01-20 14:24:15.678 225859 ERROR oslo_messaging.rpc.server     result = func(ctxt, **new_args)
Jan 20 14:24:15 compute-1 nova_compute[225855]: 2026-01-20 14:24:15.678 225859 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/server.py", line 244, in inner
Jan 20 14:24:15 compute-1 nova_compute[225855]: 2026-01-20 14:24:15.678 225859 ERROR oslo_messaging.rpc.server     return func(*args, **kwargs)
Jan 20 14:24:15 compute-1 nova_compute[225855]: 2026-01-20 14:24:15.678 225859 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/exception_wrapper.py", line 71, in wrapped
Jan 20 14:24:15 compute-1 nova_compute[225855]: 2026-01-20 14:24:15.678 225859 ERROR oslo_messaging.rpc.server     _emit_versioned_exception_notification(
Jan 20 14:24:15 compute-1 nova_compute[225855]: 2026-01-20 14:24:15.678 225859 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__
Jan 20 14:24:15 compute-1 nova_compute[225855]: 2026-01-20 14:24:15.678 225859 ERROR oslo_messaging.rpc.server     self.force_reraise()
Jan 20 14:24:15 compute-1 nova_compute[225855]: 2026-01-20 14:24:15.678 225859 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise
Jan 20 14:24:15 compute-1 nova_compute[225855]: 2026-01-20 14:24:15.678 225859 ERROR oslo_messaging.rpc.server     raise self.value
Jan 20 14:24:15 compute-1 nova_compute[225855]: 2026-01-20 14:24:15.678 225859 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/exception_wrapper.py", line 63, in wrapped
Jan 20 14:24:15 compute-1 nova_compute[225855]: 2026-01-20 14:24:15.678 225859 ERROR oslo_messaging.rpc.server     return f(self, context, *args, **kw)
Jan 20 14:24:15 compute-1 nova_compute[225855]: 2026-01-20 14:24:15.678 225859 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 4410, in volume_snapshot_create
Jan 20 14:24:15 compute-1 nova_compute[225855]: 2026-01-20 14:24:15.678 225859 ERROR oslo_messaging.rpc.server     self.driver.volume_snapshot_create(context, instance, volume_id,
Jan 20 14:24:15 compute-1 nova_compute[225855]: 2026-01-20 14:24:15.678 225859 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 3597, in volume_snapshot_create
Jan 20 14:24:15 compute-1 nova_compute[225855]: 2026-01-20 14:24:15.678 225859 ERROR oslo_messaging.rpc.server     self._volume_snapshot_update_status(
Jan 20 14:24:15 compute-1 nova_compute[225855]: 2026-01-20 14:24:15.678 225859 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__
Jan 20 14:24:15 compute-1 nova_compute[225855]: 2026-01-20 14:24:15.678 225859 ERROR oslo_messaging.rpc.server     self.force_reraise()
Jan 20 14:24:15 compute-1 nova_compute[225855]: 2026-01-20 14:24:15.678 225859 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise
Jan 20 14:24:15 compute-1 nova_compute[225855]: 2026-01-20 14:24:15.678 225859 ERROR oslo_messaging.rpc.server     raise self.value
Jan 20 14:24:15 compute-1 nova_compute[225855]: 2026-01-20 14:24:15.678 225859 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 3590, in volume_snapshot_create
Jan 20 14:24:15 compute-1 nova_compute[225855]: 2026-01-20 14:24:15.678 225859 ERROR oslo_messaging.rpc.server     self._volume_snapshot_create(context, instance, guest,
Jan 20 14:24:15 compute-1 nova_compute[225855]: 2026-01-20 14:24:15.678 225859 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 3477, in _volume_snapshot_create
Jan 20 14:24:15 compute-1 nova_compute[225855]: 2026-01-20 14:24:15.678 225859 ERROR oslo_messaging.rpc.server     raise exception.InternalError(msg)
Jan 20 14:24:15 compute-1 nova_compute[225855]: 2026-01-20 14:24:15.678 225859 ERROR oslo_messaging.rpc.server nova.exception.InternalError: Found no disk to snapshot.
Jan 20 14:24:15 compute-1 nova_compute[225855]: 2026-01-20 14:24:15.678 225859 ERROR oslo_messaging.rpc.server 
Jan 20 14:24:15 compute-1 nova_compute[225855]: 2026-01-20 14:24:15.842 225859 ERROR nova.virt.libvirt.driver [None req-4b3643ae-4fef-48c3-8123-aab9eacba0b4 fcfea6d5aa8c43228e6b2ba5e902b803 836213a644224fb8877b299d76d5df3b - - default default] Failed to send updated snapshot status to volume service.: nova.exception.SnapshotNotFound: Snapshot None could not be found.
Jan 20 14:24:15 compute-1 nova_compute[225855]: 2026-01-20 14:24:15.842 225859 ERROR nova.virt.libvirt.driver Traceback (most recent call last):
Jan 20 14:24:15 compute-1 nova_compute[225855]: 2026-01-20 14:24:15.842 225859 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 3846, in volume_snapshot_delete
Jan 20 14:24:15 compute-1 nova_compute[225855]: 2026-01-20 14:24:15.842 225859 ERROR nova.virt.libvirt.driver     self._volume_snapshot_delete(context, instance, volume_id,
Jan 20 14:24:15 compute-1 nova_compute[225855]: 2026-01-20 14:24:15.842 225859 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 3676, in _volume_snapshot_delete
Jan 20 14:24:15 compute-1 nova_compute[225855]: 2026-01-20 14:24:15.842 225859 ERROR nova.virt.libvirt.driver     if delete_info['type'] != 'qcow2':
Jan 20 14:24:15 compute-1 nova_compute[225855]: 2026-01-20 14:24:15.842 225859 ERROR nova.virt.libvirt.driver KeyError: 'type'
Jan 20 14:24:15 compute-1 nova_compute[225855]: 2026-01-20 14:24:15.842 225859 ERROR nova.virt.libvirt.driver 
Jan 20 14:24:15 compute-1 nova_compute[225855]: 2026-01-20 14:24:15.842 225859 ERROR nova.virt.libvirt.driver During handling of the above exception, another exception occurred:
Jan 20 14:24:15 compute-1 nova_compute[225855]: 2026-01-20 14:24:15.842 225859 ERROR nova.virt.libvirt.driver 
Jan 20 14:24:15 compute-1 nova_compute[225855]: 2026-01-20 14:24:15.842 225859 ERROR nova.virt.libvirt.driver Traceback (most recent call last):
Jan 20 14:24:15 compute-1 nova_compute[225855]: 2026-01-20 14:24:15.842 225859 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/volume/cinder.py", line 466, in wrapper
Jan 20 14:24:15 compute-1 nova_compute[225855]: 2026-01-20 14:24:15.842 225859 ERROR nova.virt.libvirt.driver     res = method(self, ctx, snapshot_id, *args, **kwargs)
Jan 20 14:24:15 compute-1 nova_compute[225855]: 2026-01-20 14:24:15.842 225859 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/volume/cinder.py", line 761, in update_snapshot_status
Jan 20 14:24:15 compute-1 nova_compute[225855]: 2026-01-20 14:24:15.842 225859 ERROR nova.virt.libvirt.driver     vs.update_snapshot_status(
Jan 20 14:24:15 compute-1 nova_compute[225855]: 2026-01-20 14:24:15.842 225859 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/cinderclient/v3/volume_snapshots.py", line 225, in update_snapshot_status
Jan 20 14:24:15 compute-1 nova_compute[225855]: 2026-01-20 14:24:15.842 225859 ERROR nova.virt.libvirt.driver     return self._action('os-update_snapshot_status',
Jan 20 14:24:15 compute-1 nova_compute[225855]: 2026-01-20 14:24:15.842 225859 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/cinderclient/v3/volume_snapshots.py", line 221, in _action
Jan 20 14:24:15 compute-1 nova_compute[225855]: 2026-01-20 14:24:15.842 225859 ERROR nova.virt.libvirt.driver     resp, body = self.api.client.post(url, body=body)
Jan 20 14:24:15 compute-1 nova_compute[225855]: 2026-01-20 14:24:15.842 225859 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/cinderclient/client.py", line 223, in post
Jan 20 14:24:15 compute-1 nova_compute[225855]: 2026-01-20 14:24:15.842 225859 ERROR nova.virt.libvirt.driver     return self._cs_request(url, 'POST', **kwargs)
Jan 20 14:24:15 compute-1 nova_compute[225855]: 2026-01-20 14:24:15.842 225859 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/cinderclient/client.py", line 211, in _cs_request
Jan 20 14:24:15 compute-1 nova_compute[225855]: 2026-01-20 14:24:15.842 225859 ERROR nova.virt.libvirt.driver     return self.request(url, method, **kwargs)
Jan 20 14:24:15 compute-1 nova_compute[225855]: 2026-01-20 14:24:15.842 225859 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/cinderclient/client.py", line 197, in request
Jan 20 14:24:15 compute-1 nova_compute[225855]: 2026-01-20 14:24:15.842 225859 ERROR nova.virt.libvirt.driver     raise exceptions.from_response(resp, body)
Jan 20 14:24:15 compute-1 nova_compute[225855]: 2026-01-20 14:24:15.842 225859 ERROR nova.virt.libvirt.driver cinderclient.exceptions.NotFound: Snapshot None could not be found. (HTTP 404) (Request-ID: req-d5ed2fb7-077a-4ca6-9fda-6e37362e321a)
Jan 20 14:24:15 compute-1 nova_compute[225855]: 2026-01-20 14:24:15.842 225859 ERROR nova.virt.libvirt.driver 
Jan 20 14:24:15 compute-1 nova_compute[225855]: 2026-01-20 14:24:15.842 225859 ERROR nova.virt.libvirt.driver During handling of the above exception, another exception occurred:
Jan 20 14:24:15 compute-1 nova_compute[225855]: 2026-01-20 14:24:15.842 225859 ERROR nova.virt.libvirt.driver 
Jan 20 14:24:15 compute-1 nova_compute[225855]: 2026-01-20 14:24:15.842 225859 ERROR nova.virt.libvirt.driver Traceback (most recent call last):
Jan 20 14:24:15 compute-1 nova_compute[225855]: 2026-01-20 14:24:15.842 225859 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 3412, in _volume_snapshot_update_status
Jan 20 14:24:15 compute-1 nova_compute[225855]: 2026-01-20 14:24:15.842 225859 ERROR nova.virt.libvirt.driver     self._volume_api.update_snapshot_status(context,
Jan 20 14:24:15 compute-1 nova_compute[225855]: 2026-01-20 14:24:15.842 225859 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/volume/cinder.py", line 397, in wrapper
Jan 20 14:24:15 compute-1 nova_compute[225855]: 2026-01-20 14:24:15.842 225859 ERROR nova.virt.libvirt.driver     res = method(self, ctx, *args, **kwargs)
Jan 20 14:24:15 compute-1 nova_compute[225855]: 2026-01-20 14:24:15.842 225859 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/volume/cinder.py", line 468, in wrapper
Jan 20 14:24:15 compute-1 nova_compute[225855]: 2026-01-20 14:24:15.842 225859 ERROR nova.virt.libvirt.driver     _reraise(exception.SnapshotNotFound(snapshot_id=snapshot_id))
Jan 20 14:24:15 compute-1 nova_compute[225855]: 2026-01-20 14:24:15.842 225859 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/volume/cinder.py", line 488, in _reraise
Jan 20 14:24:15 compute-1 nova_compute[225855]: 2026-01-20 14:24:15.842 225859 ERROR nova.virt.libvirt.driver     raise desired_exc.with_traceback(sys.exc_info()[2])
Jan 20 14:24:15 compute-1 nova_compute[225855]: 2026-01-20 14:24:15.842 225859 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/volume/cinder.py", line 466, in wrapper
Jan 20 14:24:15 compute-1 nova_compute[225855]: 2026-01-20 14:24:15.842 225859 ERROR nova.virt.libvirt.driver     res = method(self, ctx, snapshot_id, *args, **kwargs)
Jan 20 14:24:15 compute-1 nova_compute[225855]: 2026-01-20 14:24:15.842 225859 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/volume/cinder.py", line 761, in update_snapshot_status
Jan 20 14:24:15 compute-1 nova_compute[225855]: 2026-01-20 14:24:15.842 225859 ERROR nova.virt.libvirt.driver     vs.update_snapshot_status(
Jan 20 14:24:15 compute-1 nova_compute[225855]: 2026-01-20 14:24:15.842 225859 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/cinderclient/v3/volume_snapshots.py", line 225, in update_snapshot_status
Jan 20 14:24:15 compute-1 nova_compute[225855]: 2026-01-20 14:24:15.842 225859 ERROR nova.virt.libvirt.driver     return self._action('os-update_snapshot_status',
Jan 20 14:24:15 compute-1 nova_compute[225855]: 2026-01-20 14:24:15.842 225859 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/cinderclient/v3/volume_snapshots.py", line 221, in _action
Jan 20 14:24:15 compute-1 nova_compute[225855]: 2026-01-20 14:24:15.842 225859 ERROR nova.virt.libvirt.driver     resp, body = self.api.client.post(url, body=body)
Jan 20 14:24:15 compute-1 nova_compute[225855]: 2026-01-20 14:24:15.842 225859 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/cinderclient/client.py", line 223, in post
Jan 20 14:24:15 compute-1 nova_compute[225855]: 2026-01-20 14:24:15.842 225859 ERROR nova.virt.libvirt.driver     return self._cs_request(url, 'POST', **kwargs)
Jan 20 14:24:15 compute-1 nova_compute[225855]: 2026-01-20 14:24:15.842 225859 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/cinderclient/client.py", line 211, in _cs_request
Jan 20 14:24:15 compute-1 nova_compute[225855]: 2026-01-20 14:24:15.842 225859 ERROR nova.virt.libvirt.driver     return self.request(url, method, **kwargs)
Jan 20 14:24:15 compute-1 nova_compute[225855]: 2026-01-20 14:24:15.842 225859 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/cinderclient/client.py", line 197, in request
Jan 20 14:24:15 compute-1 nova_compute[225855]: 2026-01-20 14:24:15.842 225859 ERROR nova.virt.libvirt.driver     raise exceptions.from_response(resp, body)
Jan 20 14:24:15 compute-1 nova_compute[225855]: 2026-01-20 14:24:15.842 225859 ERROR nova.virt.libvirt.driver nova.exception.SnapshotNotFound: Snapshot None could not be found.
Jan 20 14:24:15 compute-1 nova_compute[225855]: 2026-01-20 14:24:15.842 225859 ERROR nova.virt.libvirt.driver 
Jan 20 14:24:15 compute-1 nova_compute[225855]: 2026-01-20 14:24:15.845 225859 ERROR oslo_messaging.rpc.server [None req-4b3643ae-4fef-48c3-8123-aab9eacba0b4 fcfea6d5aa8c43228e6b2ba5e902b803 836213a644224fb8877b299d76d5df3b - - default default] Exception during message handling: KeyError: 'type'
Jan 20 14:24:15 compute-1 nova_compute[225855]: 2026-01-20 14:24:15.845 225859 ERROR oslo_messaging.rpc.server Traceback (most recent call last):
Jan 20 14:24:15 compute-1 nova_compute[225855]: 2026-01-20 14:24:15.845 225859 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/server.py", line 165, in _process_incoming
Jan 20 14:24:15 compute-1 nova_compute[225855]: 2026-01-20 14:24:15.845 225859 ERROR oslo_messaging.rpc.server     res = self.dispatcher.dispatch(message)
Jan 20 14:24:15 compute-1 nova_compute[225855]: 2026-01-20 14:24:15.845 225859 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/dispatcher.py", line 309, in dispatch
Jan 20 14:24:15 compute-1 nova_compute[225855]: 2026-01-20 14:24:15.845 225859 ERROR oslo_messaging.rpc.server     return self._do_dispatch(endpoint, method, ctxt, args)
Jan 20 14:24:15 compute-1 nova_compute[225855]: 2026-01-20 14:24:15.845 225859 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/dispatcher.py", line 229, in _do_dispatch
Jan 20 14:24:15 compute-1 nova_compute[225855]: 2026-01-20 14:24:15.845 225859 ERROR oslo_messaging.rpc.server     result = func(ctxt, **new_args)
Jan 20 14:24:15 compute-1 nova_compute[225855]: 2026-01-20 14:24:15.845 225859 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/server.py", line 244, in inner
Jan 20 14:24:15 compute-1 nova_compute[225855]: 2026-01-20 14:24:15.845 225859 ERROR oslo_messaging.rpc.server     return func(*args, **kwargs)
Jan 20 14:24:15 compute-1 nova_compute[225855]: 2026-01-20 14:24:15.845 225859 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/exception_wrapper.py", line 71, in wrapped
Jan 20 14:24:15 compute-1 nova_compute[225855]: 2026-01-20 14:24:15.845 225859 ERROR oslo_messaging.rpc.server     _emit_versioned_exception_notification(
Jan 20 14:24:15 compute-1 nova_compute[225855]: 2026-01-20 14:24:15.845 225859 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__
Jan 20 14:24:15 compute-1 nova_compute[225855]: 2026-01-20 14:24:15.845 225859 ERROR oslo_messaging.rpc.server     self.force_reraise()
Jan 20 14:24:15 compute-1 nova_compute[225855]: 2026-01-20 14:24:15.845 225859 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise
Jan 20 14:24:15 compute-1 nova_compute[225855]: 2026-01-20 14:24:15.845 225859 ERROR oslo_messaging.rpc.server     raise self.value
Jan 20 14:24:15 compute-1 nova_compute[225855]: 2026-01-20 14:24:15.845 225859 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/exception_wrapper.py", line 63, in wrapped
Jan 20 14:24:15 compute-1 nova_compute[225855]: 2026-01-20 14:24:15.845 225859 ERROR oslo_messaging.rpc.server     return f(self, context, *args, **kw)
Jan 20 14:24:15 compute-1 nova_compute[225855]: 2026-01-20 14:24:15.845 225859 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 4422, in volume_snapshot_delete
Jan 20 14:24:15 compute-1 nova_compute[225855]: 2026-01-20 14:24:15.845 225859 ERROR oslo_messaging.rpc.server     self.driver.volume_snapshot_delete(context, instance, volume_id,
Jan 20 14:24:15 compute-1 nova_compute[225855]: 2026-01-20 14:24:15.845 225859 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 3853, in volume_snapshot_delete
Jan 20 14:24:15 compute-1 nova_compute[225855]: 2026-01-20 14:24:15.845 225859 ERROR oslo_messaging.rpc.server     self._volume_snapshot_update_status(
Jan 20 14:24:15 compute-1 nova_compute[225855]: 2026-01-20 14:24:15.845 225859 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__
Jan 20 14:24:15 compute-1 nova_compute[225855]: 2026-01-20 14:24:15.845 225859 ERROR oslo_messaging.rpc.server     self.force_reraise()
Jan 20 14:24:15 compute-1 nova_compute[225855]: 2026-01-20 14:24:15.845 225859 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise
Jan 20 14:24:15 compute-1 nova_compute[225855]: 2026-01-20 14:24:15.845 225859 ERROR oslo_messaging.rpc.server     raise self.value
Jan 20 14:24:15 compute-1 nova_compute[225855]: 2026-01-20 14:24:15.845 225859 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 3846, in volume_snapshot_delete
Jan 20 14:24:15 compute-1 nova_compute[225855]: 2026-01-20 14:24:15.845 225859 ERROR oslo_messaging.rpc.server     self._volume_snapshot_delete(context, instance, volume_id,
Jan 20 14:24:15 compute-1 nova_compute[225855]: 2026-01-20 14:24:15.845 225859 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 3676, in _volume_snapshot_delete
Jan 20 14:24:15 compute-1 nova_compute[225855]: 2026-01-20 14:24:15.845 225859 ERROR oslo_messaging.rpc.server     if delete_info['type'] != 'qcow2':
Jan 20 14:24:15 compute-1 nova_compute[225855]: 2026-01-20 14:24:15.845 225859 ERROR oslo_messaging.rpc.server KeyError: 'type'
Jan 20 14:24:15 compute-1 nova_compute[225855]: 2026-01-20 14:24:15.845 225859 ERROR oslo_messaging.rpc.server 
Jan 20 14:24:15 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:24:15 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:24:15 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:24:15.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:24:16 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:24:16 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 14:24:16 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 14:24:16 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:24:16 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/2006413714' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:24:16 compute-1 ceph-mon[81775]: pgmap v984: 321 pgs: 321 active+clean; 418 MiB data, 458 MiB used, 21 GiB / 21 GiB avail; 2.6 MiB/s rd, 8.2 MiB/s wr, 269 op/s
Jan 20 14:24:16 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/2066684918' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:24:16 compute-1 rsyslogd[1002]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 20 14:24:16 compute-1 nova_compute[225855]: 2026-01-20 14:24:16.143 225859 DEBUG nova.network.neutron [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] [instance: 7108f815-a0ef-4f18-a2c2-c796476ace75] Updating instance_info_cache with network_info: [{"id": "4573fd6d-82c4-4e6d-bacb-86f1b5f0bf54", "address": "fa:16:3e:fb:ee:02", "network": {"id": "ec57bdbd-ccb0-4a9c-bb01-a29300b17f8b", "bridge": "br-int", "label": "tempest-VolumesAssistedSnapshotsTest-731576160-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.225", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d9b3587ce494cb8ac153a66886f6883", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4573fd6d-82", "ovs_interfaceid": "4573fd6d-82c4-4e6d-bacb-86f1b5f0bf54", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:24:16 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:24:16 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:24:16 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:24:16.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:24:16 compute-1 nova_compute[225855]: 2026-01-20 14:24:16.291 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Releasing lock "refresh_cache-7108f815-a0ef-4f18-a2c2-c796476ace75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:24:16 compute-1 nova_compute[225855]: 2026-01-20 14:24:16.292 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] [instance: 7108f815-a0ef-4f18-a2c2-c796476ace75] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 20 14:24:16 compute-1 nova_compute[225855]: 2026-01-20 14:24:16.292 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:24:16 compute-1 nova_compute[225855]: 2026-01-20 14:24:16.293 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:24:16 compute-1 nova_compute[225855]: 2026-01-20 14:24:16.293 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:24:16 compute-1 nova_compute[225855]: 2026-01-20 14:24:16.293 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:24:16 compute-1 nova_compute[225855]: 2026-01-20 14:24:16.293 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 14:24:16 compute-1 nova_compute[225855]: 2026-01-20 14:24:16.294 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:24:16 compute-1 nova_compute[225855]: 2026-01-20 14:24:16.322 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:24:16 compute-1 nova_compute[225855]: 2026-01-20 14:24:16.323 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:24:16 compute-1 nova_compute[225855]: 2026-01-20 14:24:16.323 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:24:16 compute-1 nova_compute[225855]: 2026-01-20 14:24:16.323 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 14:24:16 compute-1 nova_compute[225855]: 2026-01-20 14:24:16.323 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:24:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:24:16.384 140354 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:24:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:24:16.384 140354 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:24:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:24:16.385 140354 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:24:16 compute-1 nova_compute[225855]: 2026-01-20 14:24:16.486 225859 DEBUG oslo_concurrency.lockutils [None req-43b347c5-0dbd-4812-bbe6-b9a0ddcfedd4 0167905b37e04d22b41125ba80c626ca a46350811ba449b082347f02d76d1c34 - - default default] Acquiring lock "7108f815-a0ef-4f18-a2c2-c796476ace75" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:24:16 compute-1 nova_compute[225855]: 2026-01-20 14:24:16.487 225859 DEBUG oslo_concurrency.lockutils [None req-43b347c5-0dbd-4812-bbe6-b9a0ddcfedd4 0167905b37e04d22b41125ba80c626ca a46350811ba449b082347f02d76d1c34 - - default default] Lock "7108f815-a0ef-4f18-a2c2-c796476ace75" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:24:16 compute-1 sudo[231422]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:24:16 compute-1 sudo[231422]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:24:16 compute-1 nova_compute[225855]: 2026-01-20 14:24:16.508 225859 INFO nova.compute.manager [None req-43b347c5-0dbd-4812-bbe6-b9a0ddcfedd4 0167905b37e04d22b41125ba80c626ca a46350811ba449b082347f02d76d1c34 - - default default] [instance: 7108f815-a0ef-4f18-a2c2-c796476ace75] Detaching volume 4b77ca01-614c-4b84-ae37-bb7bf58d4923
Jan 20 14:24:16 compute-1 sudo[231422]: pam_unix(sudo:session): session closed for user root
Jan 20 14:24:16 compute-1 sudo[231456]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:24:16 compute-1 sudo[231456]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:24:16 compute-1 sudo[231456]: pam_unix(sudo:session): session closed for user root
Jan 20 14:24:16 compute-1 nova_compute[225855]: 2026-01-20 14:24:16.699 225859 INFO nova.virt.block_device [None req-43b347c5-0dbd-4812-bbe6-b9a0ddcfedd4 0167905b37e04d22b41125ba80c626ca a46350811ba449b082347f02d76d1c34 - - default default] [instance: 7108f815-a0ef-4f18-a2c2-c796476ace75] Attempting to driver detach volume 4b77ca01-614c-4b84-ae37-bb7bf58d4923 from mountpoint /dev/vdb
Jan 20 14:24:16 compute-1 nova_compute[225855]: 2026-01-20 14:24:16.712 225859 DEBUG nova.virt.libvirt.driver [None req-43b347c5-0dbd-4812-bbe6-b9a0ddcfedd4 0167905b37e04d22b41125ba80c626ca a46350811ba449b082347f02d76d1c34 - - default default] Attempting to detach device vdb from instance 7108f815-a0ef-4f18-a2c2-c796476ace75 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Jan 20 14:24:16 compute-1 nova_compute[225855]: 2026-01-20 14:24:16.713 225859 DEBUG nova.virt.libvirt.guest [None req-43b347c5-0dbd-4812-bbe6-b9a0ddcfedd4 0167905b37e04d22b41125ba80c626ca a46350811ba449b082347f02d76d1c34 - - default default] detach device xml: <disk type="network" device="disk">
Jan 20 14:24:16 compute-1 nova_compute[225855]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 20 14:24:16 compute-1 nova_compute[225855]:   <source protocol="rbd" name="volumes/volume-4b77ca01-614c-4b84-ae37-bb7bf58d4923">
Jan 20 14:24:16 compute-1 nova_compute[225855]:     <host name="192.168.122.100" port="6789"/>
Jan 20 14:24:16 compute-1 nova_compute[225855]:     <host name="192.168.122.102" port="6789"/>
Jan 20 14:24:16 compute-1 nova_compute[225855]:     <host name="192.168.122.101" port="6789"/>
Jan 20 14:24:16 compute-1 nova_compute[225855]:   </source>
Jan 20 14:24:16 compute-1 nova_compute[225855]:   <target dev="vdb" bus="virtio"/>
Jan 20 14:24:16 compute-1 nova_compute[225855]:   <serial>4b77ca01-614c-4b84-ae37-bb7bf58d4923</serial>
Jan 20 14:24:16 compute-1 nova_compute[225855]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Jan 20 14:24:16 compute-1 nova_compute[225855]: </disk>
Jan 20 14:24:16 compute-1 nova_compute[225855]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Jan 20 14:24:16 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:24:16 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4079105907' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:24:16 compute-1 nova_compute[225855]: 2026-01-20 14:24:16.721 225859 INFO nova.virt.libvirt.driver [None req-43b347c5-0dbd-4812-bbe6-b9a0ddcfedd4 0167905b37e04d22b41125ba80c626ca a46350811ba449b082347f02d76d1c34 - - default default] Successfully detached device vdb from instance 7108f815-a0ef-4f18-a2c2-c796476ace75 from the persistent domain config.
Jan 20 14:24:16 compute-1 nova_compute[225855]: 2026-01-20 14:24:16.722 225859 DEBUG nova.virt.libvirt.driver [None req-43b347c5-0dbd-4812-bbe6-b9a0ddcfedd4 0167905b37e04d22b41125ba80c626ca a46350811ba449b082347f02d76d1c34 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance 7108f815-a0ef-4f18-a2c2-c796476ace75 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Jan 20 14:24:16 compute-1 nova_compute[225855]: 2026-01-20 14:24:16.723 225859 DEBUG nova.virt.libvirt.guest [None req-43b347c5-0dbd-4812-bbe6-b9a0ddcfedd4 0167905b37e04d22b41125ba80c626ca a46350811ba449b082347f02d76d1c34 - - default default] detach device xml: <disk type="network" device="disk">
Jan 20 14:24:16 compute-1 nova_compute[225855]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 20 14:24:16 compute-1 nova_compute[225855]:   <source protocol="rbd" name="volumes/volume-4b77ca01-614c-4b84-ae37-bb7bf58d4923">
Jan 20 14:24:16 compute-1 nova_compute[225855]:     <host name="192.168.122.100" port="6789"/>
Jan 20 14:24:16 compute-1 nova_compute[225855]:     <host name="192.168.122.102" port="6789"/>
Jan 20 14:24:16 compute-1 nova_compute[225855]:     <host name="192.168.122.101" port="6789"/>
Jan 20 14:24:16 compute-1 nova_compute[225855]:   </source>
Jan 20 14:24:16 compute-1 nova_compute[225855]:   <target dev="vdb" bus="virtio"/>
Jan 20 14:24:16 compute-1 nova_compute[225855]:   <serial>4b77ca01-614c-4b84-ae37-bb7bf58d4923</serial>
Jan 20 14:24:16 compute-1 nova_compute[225855]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Jan 20 14:24:16 compute-1 nova_compute[225855]: </disk>
Jan 20 14:24:16 compute-1 nova_compute[225855]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Jan 20 14:24:16 compute-1 nova_compute[225855]: 2026-01-20 14:24:16.736 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.413s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:24:16 compute-1 nova_compute[225855]: 2026-01-20 14:24:16.857 225859 DEBUG nova.virt.libvirt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Received event <DeviceRemovedEvent: 1768919056.856628, 7108f815-a0ef-4f18-a2c2-c796476ace75 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Jan 20 14:24:16 compute-1 nova_compute[225855]: 2026-01-20 14:24:16.860 225859 DEBUG nova.virt.libvirt.driver [None req-43b347c5-0dbd-4812-bbe6-b9a0ddcfedd4 0167905b37e04d22b41125ba80c626ca a46350811ba449b082347f02d76d1c34 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance 7108f815-a0ef-4f18-a2c2-c796476ace75 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Jan 20 14:24:16 compute-1 nova_compute[225855]: 2026-01-20 14:24:16.864 225859 INFO nova.virt.libvirt.driver [None req-43b347c5-0dbd-4812-bbe6-b9a0ddcfedd4 0167905b37e04d22b41125ba80c626ca a46350811ba449b082347f02d76d1c34 - - default default] Successfully detached device vdb from instance 7108f815-a0ef-4f18-a2c2-c796476ace75 from the live domain config.
Jan 20 14:24:16 compute-1 nova_compute[225855]: 2026-01-20 14:24:16.867 225859 DEBUG nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] skipping disk for instance-00000005 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 14:24:16 compute-1 nova_compute[225855]: 2026-01-20 14:24:16.868 225859 DEBUG nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] skipping disk for instance-00000005 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 14:24:16 compute-1 nova_compute[225855]: 2026-01-20 14:24:16.875 225859 DEBUG nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] skipping disk for instance-00000006 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 14:24:16 compute-1 nova_compute[225855]: 2026-01-20 14:24:16.875 225859 DEBUG nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] skipping disk for instance-00000006 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 14:24:16 compute-1 nova_compute[225855]: 2026-01-20 14:24:16.879 225859 DEBUG nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] skipping disk for instance-00000007 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 14:24:16 compute-1 nova_compute[225855]: 2026-01-20 14:24:16.879 225859 DEBUG nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] skipping disk for instance-00000007 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 14:24:17 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/4079105907' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:24:17 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/4176525062' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:24:17 compute-1 nova_compute[225855]: 2026-01-20 14:24:17.086 225859 DEBUG nova.network.neutron [None req-07ee1594-2ec5-4256-9a6d-4ec5059ca613 b9adeab2c2f5486e92ca8534ef11c720 a7bb8a09ecaa40e8980b2ed19afa279f - - default default] [instance: d1be7a29-3496-40ab-b61f-694622b7453b] Updating instance_info_cache with network_info: [{"id": "495d90bb-db88-4c6d-a712-bfd41c4e37fc", "address": "fa:16:3e:81:4d:90", "network": {"id": "6f01f500-b631-4cdb-ae71-b33b0ccfb1aa", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-1745233184-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "861a4f2b70b249afadeabfe85bda53a3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap495d90bb-db", "ovs_interfaceid": "495d90bb-db88-4c6d-a712-bfd41c4e37fc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:24:17 compute-1 nova_compute[225855]: 2026-01-20 14:24:17.106 225859 DEBUG oslo_concurrency.lockutils [None req-07ee1594-2ec5-4256-9a6d-4ec5059ca613 b9adeab2c2f5486e92ca8534ef11c720 a7bb8a09ecaa40e8980b2ed19afa279f - - default default] Releasing lock "refresh_cache-d1be7a29-3496-40ab-b61f-694622b7453b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:24:17 compute-1 nova_compute[225855]: 2026-01-20 14:24:17.118 225859 DEBUG oslo_concurrency.lockutils [None req-07ee1594-2ec5-4256-9a6d-4ec5059ca613 b9adeab2c2f5486e92ca8534ef11c720 a7bb8a09ecaa40e8980b2ed19afa279f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:24:17 compute-1 nova_compute[225855]: 2026-01-20 14:24:17.118 225859 DEBUG oslo_concurrency.lockutils [None req-07ee1594-2ec5-4256-9a6d-4ec5059ca613 b9adeab2c2f5486e92ca8534ef11c720 a7bb8a09ecaa40e8980b2ed19afa279f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:24:17 compute-1 nova_compute[225855]: 2026-01-20 14:24:17.118 225859 DEBUG oslo_concurrency.lockutils [None req-07ee1594-2ec5-4256-9a6d-4ec5059ca613 b9adeab2c2f5486e92ca8534ef11c720 a7bb8a09ecaa40e8980b2ed19afa279f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:24:17 compute-1 nova_compute[225855]: 2026-01-20 14:24:17.122 225859 INFO nova.virt.libvirt.driver [None req-07ee1594-2ec5-4256-9a6d-4ec5059ca613 b9adeab2c2f5486e92ca8534ef11c720 a7bb8a09ecaa40e8980b2ed19afa279f - - default default] [instance: d1be7a29-3496-40ab-b61f-694622b7453b] Sending announce-self command to QEMU monitor. Attempt 1 of 3
Jan 20 14:24:17 compute-1 virtqemud[225396]: Domain id=4 name='instance-00000006' uuid=d1be7a29-3496-40ab-b61f-694622b7453b is tainted: custom-monitor
Jan 20 14:24:17 compute-1 nova_compute[225855]: 2026-01-20 14:24:17.139 225859 DEBUG nova.objects.instance [None req-43b347c5-0dbd-4812-bbe6-b9a0ddcfedd4 0167905b37e04d22b41125ba80c626ca a46350811ba449b082347f02d76d1c34 - - default default] Lazy-loading 'flavor' on Instance uuid 7108f815-a0ef-4f18-a2c2-c796476ace75 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:24:17 compute-1 nova_compute[225855]: 2026-01-20 14:24:17.178 225859 DEBUG oslo_concurrency.lockutils [None req-43b347c5-0dbd-4812-bbe6-b9a0ddcfedd4 0167905b37e04d22b41125ba80c626ca a46350811ba449b082347f02d76d1c34 - - default default] Lock "7108f815-a0ef-4f18-a2c2-c796476ace75" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.691s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:24:17 compute-1 nova_compute[225855]: 2026-01-20 14:24:17.186 225859 WARNING nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 14:24:17 compute-1 nova_compute[225855]: 2026-01-20 14:24:17.187 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4331MB free_disk=20.789615631103516GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 14:24:17 compute-1 nova_compute[225855]: 2026-01-20 14:24:17.187 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:24:17 compute-1 nova_compute[225855]: 2026-01-20 14:24:17.187 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:24:17 compute-1 nova_compute[225855]: 2026-01-20 14:24:17.232 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Migration for instance d1be7a29-3496-40ab-b61f-694622b7453b refers to another host's instance! _pair_instances_to_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:903
Jan 20 14:24:17 compute-1 nova_compute[225855]: 2026-01-20 14:24:17.289 225859 INFO nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] [instance: d1be7a29-3496-40ab-b61f-694622b7453b] Updating resource usage from migration 2d814df6-3c5c-4a16-b0e9-7390c8b68c64
Jan 20 14:24:17 compute-1 nova_compute[225855]: 2026-01-20 14:24:17.289 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] [instance: d1be7a29-3496-40ab-b61f-694622b7453b] Starting to track incoming migration 2d814df6-3c5c-4a16-b0e9-7390c8b68c64 with flavor 522deaab-a741-4dbb-932d-d8b13a211c33 _update_usage_from_migration /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1431
Jan 20 14:24:17 compute-1 nova_compute[225855]: 2026-01-20 14:24:17.330 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Instance 7108f815-a0ef-4f18-a2c2-c796476ace75 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 20 14:24:17 compute-1 nova_compute[225855]: 2026-01-20 14:24:17.330 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Instance 4bfa17e5-6bfc-42e2-9c3f-959596201da0 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 20 14:24:17 compute-1 nova_compute[225855]: 2026-01-20 14:24:17.351 225859 WARNING nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Instance d1be7a29-3496-40ab-b61f-694622b7453b has been moved to another host compute-2.ctlplane.example.com(compute-2.ctlplane.example.com). There are allocations remaining against the source host that might need to be removed: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}.
Jan 20 14:24:17 compute-1 nova_compute[225855]: 2026-01-20 14:24:17.351 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 14:24:17 compute-1 nova_compute[225855]: 2026-01-20 14:24:17.351 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=20GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 14:24:17 compute-1 nova_compute[225855]: 2026-01-20 14:24:17.426 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:24:17 compute-1 nova_compute[225855]: 2026-01-20 14:24:17.695 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:24:17 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:24:17 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2929435309' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:24:17 compute-1 nova_compute[225855]: 2026-01-20 14:24:17.875 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:24:17 compute-1 nova_compute[225855]: 2026-01-20 14:24:17.884 225859 DEBUG nova.compute.provider_tree [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:24:17 compute-1 nova_compute[225855]: 2026-01-20 14:24:17.900 225859 DEBUG nova.scheduler.client.report [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:24:17 compute-1 nova_compute[225855]: 2026-01-20 14:24:17.926 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 14:24:17 compute-1 nova_compute[225855]: 2026-01-20 14:24:17.926 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.739s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:24:17 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:24:17 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:24:17 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:24:17.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:24:18 compute-1 nova_compute[225855]: 2026-01-20 14:24:18.131 225859 INFO nova.virt.libvirt.driver [None req-07ee1594-2ec5-4256-9a6d-4ec5059ca613 b9adeab2c2f5486e92ca8534ef11c720 a7bb8a09ecaa40e8980b2ed19afa279f - - default default] [instance: d1be7a29-3496-40ab-b61f-694622b7453b] Sending announce-self command to QEMU monitor. Attempt 2 of 3
Jan 20 14:24:18 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:24:18 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:24:18 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:24:18.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:24:18 compute-1 ceph-mon[81775]: pgmap v985: 321 pgs: 321 active+clean; 443 MiB data, 479 MiB used, 21 GiB / 21 GiB avail; 5.1 MiB/s rd, 8.6 MiB/s wr, 394 op/s
Jan 20 14:24:18 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/2929435309' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:24:19 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 20 14:24:19 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2525738690' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 14:24:19 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 20 14:24:19 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2525738690' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 14:24:19 compute-1 nova_compute[225855]: 2026-01-20 14:24:19.139 225859 INFO nova.virt.libvirt.driver [None req-07ee1594-2ec5-4256-9a6d-4ec5059ca613 b9adeab2c2f5486e92ca8534ef11c720 a7bb8a09ecaa40e8980b2ed19afa279f - - default default] [instance: d1be7a29-3496-40ab-b61f-694622b7453b] Sending announce-self command to QEMU monitor. Attempt 3 of 3
Jan 20 14:24:19 compute-1 nova_compute[225855]: 2026-01-20 14:24:19.144 225859 DEBUG nova.compute.manager [None req-07ee1594-2ec5-4256-9a6d-4ec5059ca613 b9adeab2c2f5486e92ca8534ef11c720 a7bb8a09ecaa40e8980b2ed19afa279f - - default default] [instance: d1be7a29-3496-40ab-b61f-694622b7453b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:24:19 compute-1 nova_compute[225855]: 2026-01-20 14:24:19.493 225859 DEBUG nova.objects.instance [None req-07ee1594-2ec5-4256-9a6d-4ec5059ca613 b9adeab2c2f5486e92ca8534ef11c720 a7bb8a09ecaa40e8980b2ed19afa279f - - default default] [instance: d1be7a29-3496-40ab-b61f-694622b7453b] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Jan 20 14:24:19 compute-1 nova_compute[225855]: 2026-01-20 14:24:19.621 225859 DEBUG oslo_concurrency.lockutils [None req-dcb6e6fb-0239-4396-bfdf-251e5719f2f2 6d292784a7494358a137fea52feffec0 6d9b3587ce494cb8ac153a66886f6883 - - default default] Acquiring lock "7108f815-a0ef-4f18-a2c2-c796476ace75" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:24:19 compute-1 nova_compute[225855]: 2026-01-20 14:24:19.622 225859 DEBUG oslo_concurrency.lockutils [None req-dcb6e6fb-0239-4396-bfdf-251e5719f2f2 6d292784a7494358a137fea52feffec0 6d9b3587ce494cb8ac153a66886f6883 - - default default] Lock "7108f815-a0ef-4f18-a2c2-c796476ace75" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:24:19 compute-1 nova_compute[225855]: 2026-01-20 14:24:19.623 225859 DEBUG oslo_concurrency.lockutils [None req-dcb6e6fb-0239-4396-bfdf-251e5719f2f2 6d292784a7494358a137fea52feffec0 6d9b3587ce494cb8ac153a66886f6883 - - default default] Acquiring lock "7108f815-a0ef-4f18-a2c2-c796476ace75-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:24:19 compute-1 nova_compute[225855]: 2026-01-20 14:24:19.623 225859 DEBUG oslo_concurrency.lockutils [None req-dcb6e6fb-0239-4396-bfdf-251e5719f2f2 6d292784a7494358a137fea52feffec0 6d9b3587ce494cb8ac153a66886f6883 - - default default] Lock "7108f815-a0ef-4f18-a2c2-c796476ace75-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:24:19 compute-1 nova_compute[225855]: 2026-01-20 14:24:19.624 225859 DEBUG oslo_concurrency.lockutils [None req-dcb6e6fb-0239-4396-bfdf-251e5719f2f2 6d292784a7494358a137fea52feffec0 6d9b3587ce494cb8ac153a66886f6883 - - default default] Lock "7108f815-a0ef-4f18-a2c2-c796476ace75-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:24:19 compute-1 nova_compute[225855]: 2026-01-20 14:24:19.625 225859 INFO nova.compute.manager [None req-dcb6e6fb-0239-4396-bfdf-251e5719f2f2 6d292784a7494358a137fea52feffec0 6d9b3587ce494cb8ac153a66886f6883 - - default default] [instance: 7108f815-a0ef-4f18-a2c2-c796476ace75] Terminating instance
Jan 20 14:24:19 compute-1 nova_compute[225855]: 2026-01-20 14:24:19.627 225859 DEBUG nova.compute.manager [None req-dcb6e6fb-0239-4396-bfdf-251e5719f2f2 6d292784a7494358a137fea52feffec0 6d9b3587ce494cb8ac153a66886f6883 - - default default] [instance: 7108f815-a0ef-4f18-a2c2-c796476ace75] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 20 14:24:19 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:24:19 compute-1 nova_compute[225855]: 2026-01-20 14:24:19.744 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:24:19 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:24:19 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:24:19 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:24:19.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:24:20 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:24:20 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:24:20 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:24:20.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:24:20 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/2525738690' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 14:24:20 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/2525738690' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 14:24:20 compute-1 kernel: tap4573fd6d-82 (unregistering): left promiscuous mode
Jan 20 14:24:20 compute-1 NetworkManager[49104]: <info>  [1768919060.2970] device (tap4573fd6d-82): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 20 14:24:20 compute-1 nova_compute[225855]: 2026-01-20 14:24:20.313 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:24:20 compute-1 ovn_controller[130490]: 2026-01-20T14:24:20Z|00055|binding|INFO|Releasing lport 4573fd6d-82c4-4e6d-bacb-86f1b5f0bf54 from this chassis (sb_readonly=0)
Jan 20 14:24:20 compute-1 ovn_controller[130490]: 2026-01-20T14:24:20Z|00056|binding|INFO|Setting lport 4573fd6d-82c4-4e6d-bacb-86f1b5f0bf54 down in Southbound
Jan 20 14:24:20 compute-1 ovn_controller[130490]: 2026-01-20T14:24:20Z|00057|binding|INFO|Removing iface tap4573fd6d-82 ovn-installed in OVS
Jan 20 14:24:20 compute-1 nova_compute[225855]: 2026-01-20 14:24:20.317 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:24:20 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:24:20.325 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fb:ee:02 10.100.0.5'], port_security=['fa:16:3e:fb:ee:02 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '7108f815-a0ef-4f18-a2c2-c796476ace75', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ec57bdbd-ccb0-4a9c-bb01-a29300b17f8b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6d9b3587ce494cb8ac153a66886f6883', 'neutron:revision_number': '4', 'neutron:security_group_ids': '980a7c36-df9c-4ecc-beb3-c614c9aa52f6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com', 'neutron:port_fip': '192.168.122.225'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a61ba6e8-74c1-4e95-8edc-1941c805bf71, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], logical_port=4573fd6d-82c4-4e6d-bacb-86f1b5f0bf54) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:24:20 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:24:20.327 140354 INFO neutron.agent.ovn.metadata.agent [-] Port 4573fd6d-82c4-4e6d-bacb-86f1b5f0bf54 in datapath ec57bdbd-ccb0-4a9c-bb01-a29300b17f8b unbound from our chassis
Jan 20 14:24:20 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:24:20.330 140354 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ec57bdbd-ccb0-4a9c-bb01-a29300b17f8b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 20 14:24:20 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:24:20.332 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[5659f514-7954-4f36-9ea8-c830c751432d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:24:20 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:24:20.333 140354 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-ec57bdbd-ccb0-4a9c-bb01-a29300b17f8b namespace which is not needed anymore
Jan 20 14:24:20 compute-1 nova_compute[225855]: 2026-01-20 14:24:20.347 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:24:20 compute-1 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000005.scope: Deactivated successfully.
Jan 20 14:24:20 compute-1 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000005.scope: Consumed 15.644s CPU time.
Jan 20 14:24:20 compute-1 systemd-machined[194361]: Machine qemu-2-instance-00000005 terminated.
Jan 20 14:24:20 compute-1 nova_compute[225855]: 2026-01-20 14:24:20.450 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:24:20 compute-1 nova_compute[225855]: 2026-01-20 14:24:20.458 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:24:20 compute-1 nova_compute[225855]: 2026-01-20 14:24:20.465 225859 INFO nova.virt.libvirt.driver [-] [instance: 7108f815-a0ef-4f18-a2c2-c796476ace75] Instance destroyed successfully.
Jan 20 14:24:20 compute-1 nova_compute[225855]: 2026-01-20 14:24:20.466 225859 DEBUG nova.objects.instance [None req-dcb6e6fb-0239-4396-bfdf-251e5719f2f2 6d292784a7494358a137fea52feffec0 6d9b3587ce494cb8ac153a66886f6883 - - default default] Lazy-loading 'resources' on Instance uuid 7108f815-a0ef-4f18-a2c2-c796476ace75 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:24:20 compute-1 nova_compute[225855]: 2026-01-20 14:24:20.481 225859 DEBUG nova.virt.libvirt.vif [None req-dcb6e6fb-0239-4396-bfdf-251e5719f2f2 6d292784a7494358a137fea52feffec0 6d9b3587ce494cb8ac153a66886f6883 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T14:23:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-VolumesAssistedSnapshotsTest-server-3747750',display_name='tempest-VolumesAssistedSnapshotsTest-server-3747750',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-volumesassistedsnapshotstest-server-3747750',id=5,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDFF6f8GejZgbluLYmff+2O5aXyXAHCbauzfTWOArySxtg6k2Kp0zTHap6CKfyD2fWfCywq/R2Wl9LWwxTNjXBxp07Mo6pu1ISB3Tz/DzrJv4Fpmcod9g0TNVrenOml3zQ==',key_name='tempest-keypair-908508212',keypairs=<?>,launch_index=0,launched_at=2026-01-20T14:23:50Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6d9b3587ce494cb8ac153a66886f6883',ramdisk_id='',reservation_id='r-3no1g33a',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-VolumesAssistedSnapshotsTest-68223103',owner_user_name='tempest-VolumesAssistedSnapshotsTest-68223103-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T14:23:50Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='6d292784a7494358a137fea52feffec0',uuid=7108f815-a0ef-4f18-a2c2-c796476ace75,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4573fd6d-82c4-4e6d-bacb-86f1b5f0bf54", "address": "fa:16:3e:fb:ee:02", "network": {"id": "ec57bdbd-ccb0-4a9c-bb01-a29300b17f8b", "bridge": "br-int", "label": "tempest-VolumesAssistedSnapshotsTest-731576160-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.225", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d9b3587ce494cb8ac153a66886f6883", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4573fd6d-82", "ovs_interfaceid": "4573fd6d-82c4-4e6d-bacb-86f1b5f0bf54", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 20 14:24:20 compute-1 nova_compute[225855]: 2026-01-20 14:24:20.482 225859 DEBUG nova.network.os_vif_util [None req-dcb6e6fb-0239-4396-bfdf-251e5719f2f2 6d292784a7494358a137fea52feffec0 6d9b3587ce494cb8ac153a66886f6883 - - default default] Converting VIF {"id": "4573fd6d-82c4-4e6d-bacb-86f1b5f0bf54", "address": "fa:16:3e:fb:ee:02", "network": {"id": "ec57bdbd-ccb0-4a9c-bb01-a29300b17f8b", "bridge": "br-int", "label": "tempest-VolumesAssistedSnapshotsTest-731576160-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.225", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d9b3587ce494cb8ac153a66886f6883", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4573fd6d-82", "ovs_interfaceid": "4573fd6d-82c4-4e6d-bacb-86f1b5f0bf54", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:24:20 compute-1 nova_compute[225855]: 2026-01-20 14:24:20.483 225859 DEBUG nova.network.os_vif_util [None req-dcb6e6fb-0239-4396-bfdf-251e5719f2f2 6d292784a7494358a137fea52feffec0 6d9b3587ce494cb8ac153a66886f6883 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:fb:ee:02,bridge_name='br-int',has_traffic_filtering=True,id=4573fd6d-82c4-4e6d-bacb-86f1b5f0bf54,network=Network(ec57bdbd-ccb0-4a9c-bb01-a29300b17f8b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4573fd6d-82') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:24:20 compute-1 nova_compute[225855]: 2026-01-20 14:24:20.484 225859 DEBUG os_vif [None req-dcb6e6fb-0239-4396-bfdf-251e5719f2f2 6d292784a7494358a137fea52feffec0 6d9b3587ce494cb8ac153a66886f6883 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:fb:ee:02,bridge_name='br-int',has_traffic_filtering=True,id=4573fd6d-82c4-4e6d-bacb-86f1b5f0bf54,network=Network(ec57bdbd-ccb0-4a9c-bb01-a29300b17f8b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4573fd6d-82') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 20 14:24:20 compute-1 nova_compute[225855]: 2026-01-20 14:24:20.487 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:24:20 compute-1 nova_compute[225855]: 2026-01-20 14:24:20.487 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4573fd6d-82, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:24:20 compute-1 nova_compute[225855]: 2026-01-20 14:24:20.489 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:24:20 compute-1 nova_compute[225855]: 2026-01-20 14:24:20.491 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:24:20 compute-1 nova_compute[225855]: 2026-01-20 14:24:20.495 225859 INFO os_vif [None req-dcb6e6fb-0239-4396-bfdf-251e5719f2f2 6d292784a7494358a137fea52feffec0 6d9b3587ce494cb8ac153a66886f6883 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:fb:ee:02,bridge_name='br-int',has_traffic_filtering=True,id=4573fd6d-82c4-4e6d-bacb-86f1b5f0bf54,network=Network(ec57bdbd-ccb0-4a9c-bb01-a29300b17f8b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4573fd6d-82')
Jan 20 14:24:20 compute-1 neutron-haproxy-ovnmeta-ec57bdbd-ccb0-4a9c-bb01-a29300b17f8b[230365]: [NOTICE]   (230369) : haproxy version is 2.8.14-c23fe91
Jan 20 14:24:20 compute-1 neutron-haproxy-ovnmeta-ec57bdbd-ccb0-4a9c-bb01-a29300b17f8b[230365]: [NOTICE]   (230369) : path to executable is /usr/sbin/haproxy
Jan 20 14:24:20 compute-1 neutron-haproxy-ovnmeta-ec57bdbd-ccb0-4a9c-bb01-a29300b17f8b[230365]: [WARNING]  (230369) : Exiting Master process...
Jan 20 14:24:20 compute-1 neutron-haproxy-ovnmeta-ec57bdbd-ccb0-4a9c-bb01-a29300b17f8b[230365]: [ALERT]    (230369) : Current worker (230378) exited with code 143 (Terminated)
Jan 20 14:24:20 compute-1 neutron-haproxy-ovnmeta-ec57bdbd-ccb0-4a9c-bb01-a29300b17f8b[230365]: [WARNING]  (230369) : All workers exited. Exiting... (0)
Jan 20 14:24:20 compute-1 systemd[1]: libpod-d3eef0ae425ec3563d3ad4918f96b80d6c8ea801fd7ae121d12a17a0777b9776.scope: Deactivated successfully.
Jan 20 14:24:20 compute-1 podman[231541]: 2026-01-20 14:24:20.548334386 +0000 UTC m=+0.061038826 container died d3eef0ae425ec3563d3ad4918f96b80d6c8ea801fd7ae121d12a17a0777b9776 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ec57bdbd-ccb0-4a9c-bb01-a29300b17f8b, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 20 14:24:20 compute-1 nova_compute[225855]: 2026-01-20 14:24:20.550 225859 DEBUG nova.compute.manager [req-13bc4098-a2d0-4209-a5a0-4d688fb398a8 req-ccd903b0-51a7-45b9-ad68-74398707416f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 7108f815-a0ef-4f18-a2c2-c796476ace75] Received event network-vif-unplugged-4573fd6d-82c4-4e6d-bacb-86f1b5f0bf54 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:24:20 compute-1 nova_compute[225855]: 2026-01-20 14:24:20.551 225859 DEBUG oslo_concurrency.lockutils [req-13bc4098-a2d0-4209-a5a0-4d688fb398a8 req-ccd903b0-51a7-45b9-ad68-74398707416f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "7108f815-a0ef-4f18-a2c2-c796476ace75-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:24:20 compute-1 nova_compute[225855]: 2026-01-20 14:24:20.551 225859 DEBUG oslo_concurrency.lockutils [req-13bc4098-a2d0-4209-a5a0-4d688fb398a8 req-ccd903b0-51a7-45b9-ad68-74398707416f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "7108f815-a0ef-4f18-a2c2-c796476ace75-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:24:20 compute-1 nova_compute[225855]: 2026-01-20 14:24:20.551 225859 DEBUG oslo_concurrency.lockutils [req-13bc4098-a2d0-4209-a5a0-4d688fb398a8 req-ccd903b0-51a7-45b9-ad68-74398707416f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "7108f815-a0ef-4f18-a2c2-c796476ace75-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:24:20 compute-1 nova_compute[225855]: 2026-01-20 14:24:20.552 225859 DEBUG nova.compute.manager [req-13bc4098-a2d0-4209-a5a0-4d688fb398a8 req-ccd903b0-51a7-45b9-ad68-74398707416f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 7108f815-a0ef-4f18-a2c2-c796476ace75] No waiting events found dispatching network-vif-unplugged-4573fd6d-82c4-4e6d-bacb-86f1b5f0bf54 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:24:20 compute-1 nova_compute[225855]: 2026-01-20 14:24:20.552 225859 DEBUG nova.compute.manager [req-13bc4098-a2d0-4209-a5a0-4d688fb398a8 req-ccd903b0-51a7-45b9-ad68-74398707416f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 7108f815-a0ef-4f18-a2c2-c796476ace75] Received event network-vif-unplugged-4573fd6d-82c4-4e6d-bacb-86f1b5f0bf54 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 20 14:24:20 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d3eef0ae425ec3563d3ad4918f96b80d6c8ea801fd7ae121d12a17a0777b9776-userdata-shm.mount: Deactivated successfully.
Jan 20 14:24:20 compute-1 systemd[1]: var-lib-containers-storage-overlay-276037b4c0f1a16a9dd7a2f42c512b2508dc62cc130213b8b7fe0ebd0f5eeb1c-merged.mount: Deactivated successfully.
Jan 20 14:24:20 compute-1 podman[231541]: 2026-01-20 14:24:20.594858996 +0000 UTC m=+0.107563436 container cleanup d3eef0ae425ec3563d3ad4918f96b80d6c8ea801fd7ae121d12a17a0777b9776 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ec57bdbd-ccb0-4a9c-bb01-a29300b17f8b, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 20 14:24:20 compute-1 systemd[1]: libpod-conmon-d3eef0ae425ec3563d3ad4918f96b80d6c8ea801fd7ae121d12a17a0777b9776.scope: Deactivated successfully.
Jan 20 14:24:20 compute-1 podman[231587]: 2026-01-20 14:24:20.668477182 +0000 UTC m=+0.046472569 container remove d3eef0ae425ec3563d3ad4918f96b80d6c8ea801fd7ae121d12a17a0777b9776 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ec57bdbd-ccb0-4a9c-bb01-a29300b17f8b, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Jan 20 14:24:20 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:24:20.678 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[e9eb2694-053c-4831-a63e-3cdb34f5d516]: (4, ('Tue Jan 20 02:24:20 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-ec57bdbd-ccb0-4a9c-bb01-a29300b17f8b (d3eef0ae425ec3563d3ad4918f96b80d6c8ea801fd7ae121d12a17a0777b9776)\nd3eef0ae425ec3563d3ad4918f96b80d6c8ea801fd7ae121d12a17a0777b9776\nTue Jan 20 02:24:20 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-ec57bdbd-ccb0-4a9c-bb01-a29300b17f8b (d3eef0ae425ec3563d3ad4918f96b80d6c8ea801fd7ae121d12a17a0777b9776)\nd3eef0ae425ec3563d3ad4918f96b80d6c8ea801fd7ae121d12a17a0777b9776\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:24:20 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:24:20.679 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[631c8429-09de-46a9-9a9d-0b68ac26037e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:24:20 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:24:20.680 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapec57bdbd-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:24:20 compute-1 kernel: tapec57bdbd-c0: left promiscuous mode
Jan 20 14:24:20 compute-1 nova_compute[225855]: 2026-01-20 14:24:20.685 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:24:20 compute-1 nova_compute[225855]: 2026-01-20 14:24:20.713 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:24:20 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:24:20.716 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[aeef832c-ebd8-4e8d-a336-b9d6c6eb8471]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:24:20 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:24:20.735 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[6ec6bcd6-53ef-4fbf-a20f-2004cea93222]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:24:20 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:24:20.736 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[37822e27-eeb6-4833-8c44-e950f27c72c6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:24:20 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:24:20.754 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[20c39205-ad45-4bba-838c-1cdb45339970]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 406808, 'reachable_time': 27781, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 231602, 'error': None, 'target': 'ovnmeta-ec57bdbd-ccb0-4a9c-bb01-a29300b17f8b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:24:20 compute-1 systemd[1]: run-netns-ovnmeta\x2dec57bdbd\x2dccb0\x2d4a9c\x2dbb01\x2da29300b17f8b.mount: Deactivated successfully.
Jan 20 14:24:20 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:24:20.757 140466 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-ec57bdbd-ccb0-4a9c-bb01-a29300b17f8b deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 20 14:24:20 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:24:20.758 140466 DEBUG oslo.privsep.daemon [-] privsep: reply[2779c869-3409-47c9-be96-ec38fc568520]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:24:21 compute-1 ceph-mon[81775]: pgmap v986: 321 pgs: 321 active+clean; 437 MiB data, 475 MiB used, 21 GiB / 21 GiB avail; 4.4 MiB/s rd, 7.1 MiB/s wr, 327 op/s
Jan 20 14:24:21 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/3335376661' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:24:21 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/2815343532' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:24:21 compute-1 nova_compute[225855]: 2026-01-20 14:24:21.508 225859 INFO nova.virt.libvirt.driver [None req-dcb6e6fb-0239-4396-bfdf-251e5719f2f2 6d292784a7494358a137fea52feffec0 6d9b3587ce494cb8ac153a66886f6883 - - default default] [instance: 7108f815-a0ef-4f18-a2c2-c796476ace75] Deleting instance files /var/lib/nova/instances/7108f815-a0ef-4f18-a2c2-c796476ace75_del
Jan 20 14:24:21 compute-1 nova_compute[225855]: 2026-01-20 14:24:21.510 225859 INFO nova.virt.libvirt.driver [None req-dcb6e6fb-0239-4396-bfdf-251e5719f2f2 6d292784a7494358a137fea52feffec0 6d9b3587ce494cb8ac153a66886f6883 - - default default] [instance: 7108f815-a0ef-4f18-a2c2-c796476ace75] Deletion of /var/lib/nova/instances/7108f815-a0ef-4f18-a2c2-c796476ace75_del complete
Jan 20 14:24:21 compute-1 nova_compute[225855]: 2026-01-20 14:24:21.588 225859 INFO nova.compute.manager [None req-dcb6e6fb-0239-4396-bfdf-251e5719f2f2 6d292784a7494358a137fea52feffec0 6d9b3587ce494cb8ac153a66886f6883 - - default default] [instance: 7108f815-a0ef-4f18-a2c2-c796476ace75] Took 1.96 seconds to destroy the instance on the hypervisor.
Jan 20 14:24:21 compute-1 nova_compute[225855]: 2026-01-20 14:24:21.589 225859 DEBUG oslo.service.loopingcall [None req-dcb6e6fb-0239-4396-bfdf-251e5719f2f2 6d292784a7494358a137fea52feffec0 6d9b3587ce494cb8ac153a66886f6883 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 20 14:24:21 compute-1 nova_compute[225855]: 2026-01-20 14:24:21.590 225859 DEBUG nova.compute.manager [-] [instance: 7108f815-a0ef-4f18-a2c2-c796476ace75] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 20 14:24:21 compute-1 nova_compute[225855]: 2026-01-20 14:24:21.590 225859 DEBUG nova.network.neutron [-] [instance: 7108f815-a0ef-4f18-a2c2-c796476ace75] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 20 14:24:21 compute-1 nova_compute[225855]: 2026-01-20 14:24:21.606 225859 DEBUG oslo_concurrency.lockutils [None req-0ace672a-be70-43d6-b620-a4ee2639faf7 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Acquiring lock "9dba01d4-4ecd-40eb-8d63-95dbdcd205cf" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:24:21 compute-1 nova_compute[225855]: 2026-01-20 14:24:21.607 225859 DEBUG oslo_concurrency.lockutils [None req-0ace672a-be70-43d6-b620-a4ee2639faf7 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Lock "9dba01d4-4ecd-40eb-8d63-95dbdcd205cf" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:24:21 compute-1 nova_compute[225855]: 2026-01-20 14:24:21.682 225859 DEBUG nova.compute.manager [None req-0ace672a-be70-43d6-b620-a4ee2639faf7 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] [instance: 9dba01d4-4ecd-40eb-8d63-95dbdcd205cf] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 20 14:24:21 compute-1 nova_compute[225855]: 2026-01-20 14:24:21.779 225859 DEBUG oslo_concurrency.lockutils [None req-0ace672a-be70-43d6-b620-a4ee2639faf7 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:24:21 compute-1 nova_compute[225855]: 2026-01-20 14:24:21.780 225859 DEBUG oslo_concurrency.lockutils [None req-0ace672a-be70-43d6-b620-a4ee2639faf7 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:24:21 compute-1 nova_compute[225855]: 2026-01-20 14:24:21.789 225859 DEBUG nova.virt.hardware [None req-0ace672a-be70-43d6-b620-a4ee2639faf7 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 20 14:24:21 compute-1 nova_compute[225855]: 2026-01-20 14:24:21.789 225859 INFO nova.compute.claims [None req-0ace672a-be70-43d6-b620-a4ee2639faf7 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] [instance: 9dba01d4-4ecd-40eb-8d63-95dbdcd205cf] Claim successful on node compute-1.ctlplane.example.com
Jan 20 14:24:21 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:24:21 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:24:21 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:24:21.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:24:22 compute-1 nova_compute[225855]: 2026-01-20 14:24:22.030 225859 DEBUG oslo_concurrency.processutils [None req-0ace672a-be70-43d6-b620-a4ee2639faf7 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:24:22 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:24:22 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:24:22 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:24:22.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:24:22 compute-1 ceph-mon[81775]: pgmap v987: 321 pgs: 321 active+clean; 397 MiB data, 473 MiB used, 21 GiB / 21 GiB avail; 4.3 MiB/s rd, 5.3 MiB/s wr, 317 op/s
Jan 20 14:24:22 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/131837199' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:24:22 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:24:22 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2895502152' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:24:22 compute-1 nova_compute[225855]: 2026-01-20 14:24:22.566 225859 DEBUG oslo_concurrency.processutils [None req-0ace672a-be70-43d6-b620-a4ee2639faf7 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.535s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:24:22 compute-1 nova_compute[225855]: 2026-01-20 14:24:22.574 225859 DEBUG nova.compute.provider_tree [None req-0ace672a-be70-43d6-b620-a4ee2639faf7 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:24:22 compute-1 nova_compute[225855]: 2026-01-20 14:24:22.607 225859 DEBUG nova.scheduler.client.report [None req-0ace672a-be70-43d6-b620-a4ee2639faf7 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:24:22 compute-1 nova_compute[225855]: 2026-01-20 14:24:22.651 225859 DEBUG oslo_concurrency.lockutils [None req-0ace672a-be70-43d6-b620-a4ee2639faf7 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.872s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:24:22 compute-1 nova_compute[225855]: 2026-01-20 14:24:22.697 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:24:22 compute-1 nova_compute[225855]: 2026-01-20 14:24:22.922 225859 DEBUG nova.compute.manager [req-a74a26d5-3545-4dc9-9838-40160a59ed0a req-e42927f3-6b0b-44f0-8d81-94001a537237 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 7108f815-a0ef-4f18-a2c2-c796476ace75] Received event network-vif-plugged-4573fd6d-82c4-4e6d-bacb-86f1b5f0bf54 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:24:22 compute-1 nova_compute[225855]: 2026-01-20 14:24:22.923 225859 DEBUG oslo_concurrency.lockutils [req-a74a26d5-3545-4dc9-9838-40160a59ed0a req-e42927f3-6b0b-44f0-8d81-94001a537237 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "7108f815-a0ef-4f18-a2c2-c796476ace75-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:24:22 compute-1 nova_compute[225855]: 2026-01-20 14:24:22.923 225859 DEBUG oslo_concurrency.lockutils [req-a74a26d5-3545-4dc9-9838-40160a59ed0a req-e42927f3-6b0b-44f0-8d81-94001a537237 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "7108f815-a0ef-4f18-a2c2-c796476ace75-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:24:22 compute-1 nova_compute[225855]: 2026-01-20 14:24:22.923 225859 DEBUG oslo_concurrency.lockutils [req-a74a26d5-3545-4dc9-9838-40160a59ed0a req-e42927f3-6b0b-44f0-8d81-94001a537237 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "7108f815-a0ef-4f18-a2c2-c796476ace75-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:24:22 compute-1 nova_compute[225855]: 2026-01-20 14:24:22.924 225859 DEBUG nova.compute.manager [req-a74a26d5-3545-4dc9-9838-40160a59ed0a req-e42927f3-6b0b-44f0-8d81-94001a537237 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 7108f815-a0ef-4f18-a2c2-c796476ace75] No waiting events found dispatching network-vif-plugged-4573fd6d-82c4-4e6d-bacb-86f1b5f0bf54 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:24:22 compute-1 nova_compute[225855]: 2026-01-20 14:24:22.924 225859 WARNING nova.compute.manager [req-a74a26d5-3545-4dc9-9838-40160a59ed0a req-e42927f3-6b0b-44f0-8d81-94001a537237 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 7108f815-a0ef-4f18-a2c2-c796476ace75] Received unexpected event network-vif-plugged-4573fd6d-82c4-4e6d-bacb-86f1b5f0bf54 for instance with vm_state active and task_state deleting.
Jan 20 14:24:22 compute-1 nova_compute[225855]: 2026-01-20 14:24:22.925 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:24:22 compute-1 nova_compute[225855]: 2026-01-20 14:24:22.928 225859 DEBUG oslo_concurrency.lockutils [None req-0ace672a-be70-43d6-b620-a4ee2639faf7 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Acquiring lock "eaa154fd-b812-43d3-81dc-22b78d92089d" by "nova.compute.manager.ComputeManager._validate_instance_group_policy.<locals>._do_validation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:24:22 compute-1 nova_compute[225855]: 2026-01-20 14:24:22.928 225859 DEBUG oslo_concurrency.lockutils [None req-0ace672a-be70-43d6-b620-a4ee2639faf7 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Lock "eaa154fd-b812-43d3-81dc-22b78d92089d" acquired by "nova.compute.manager.ComputeManager._validate_instance_group_policy.<locals>._do_validation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:24:22 compute-1 sudo[231627]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:24:22 compute-1 sudo[231627]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:24:22 compute-1 sudo[231627]: pam_unix(sudo:session): session closed for user root
Jan 20 14:24:23 compute-1 nova_compute[225855]: 2026-01-20 14:24:23.026 225859 DEBUG nova.compute.manager [None req-0ace672a-be70-43d6-b620-a4ee2639faf7 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] [instance: 9dba01d4-4ecd-40eb-8d63-95dbdcd205cf] No node specified, defaulting to compute-1.ctlplane.example.com _get_nodename /usr/lib/python3.9/site-packages/nova/compute/manager.py:10505
Jan 20 14:24:23 compute-1 sudo[231652]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 14:24:23 compute-1 sudo[231652]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:24:23 compute-1 sudo[231652]: pam_unix(sudo:session): session closed for user root
Jan 20 14:24:23 compute-1 nova_compute[225855]: 2026-01-20 14:24:23.154 225859 DEBUG oslo_concurrency.lockutils [None req-0ace672a-be70-43d6-b620-a4ee2639faf7 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Lock "eaa154fd-b812-43d3-81dc-22b78d92089d" "released" by "nova.compute.manager.ComputeManager._validate_instance_group_policy.<locals>._do_validation" :: held 0.226s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:24:23 compute-1 nova_compute[225855]: 2026-01-20 14:24:23.155 225859 DEBUG nova.compute.manager [None req-0ace672a-be70-43d6-b620-a4ee2639faf7 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] [instance: 9dba01d4-4ecd-40eb-8d63-95dbdcd205cf] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 20 14:24:23 compute-1 nova_compute[225855]: 2026-01-20 14:24:23.217 225859 DEBUG nova.network.neutron [-] [instance: 7108f815-a0ef-4f18-a2c2-c796476ace75] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:24:23 compute-1 nova_compute[225855]: 2026-01-20 14:24:23.276 225859 DEBUG nova.compute.manager [None req-0ace672a-be70-43d6-b620-a4ee2639faf7 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] [instance: 9dba01d4-4ecd-40eb-8d63-95dbdcd205cf] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 20 14:24:23 compute-1 nova_compute[225855]: 2026-01-20 14:24:23.276 225859 DEBUG nova.network.neutron [None req-0ace672a-be70-43d6-b620-a4ee2639faf7 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] [instance: 9dba01d4-4ecd-40eb-8d63-95dbdcd205cf] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 20 14:24:23 compute-1 nova_compute[225855]: 2026-01-20 14:24:23.282 225859 INFO nova.compute.manager [-] [instance: 7108f815-a0ef-4f18-a2c2-c796476ace75] Took 1.69 seconds to deallocate network for instance.
Jan 20 14:24:23 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/3277269713' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:24:23 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/2895502152' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:24:23 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:24:23 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:24:23 compute-1 nova_compute[225855]: 2026-01-20 14:24:23.762 225859 INFO nova.virt.libvirt.driver [None req-0ace672a-be70-43d6-b620-a4ee2639faf7 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] [instance: 9dba01d4-4ecd-40eb-8d63-95dbdcd205cf] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 20 14:24:23 compute-1 nova_compute[225855]: 2026-01-20 14:24:23.808 225859 DEBUG nova.compute.manager [None req-0ace672a-be70-43d6-b620-a4ee2639faf7 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] [instance: 9dba01d4-4ecd-40eb-8d63-95dbdcd205cf] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 20 14:24:23 compute-1 nova_compute[225855]: 2026-01-20 14:24:23.878 225859 DEBUG oslo_concurrency.lockutils [None req-dcb6e6fb-0239-4396-bfdf-251e5719f2f2 6d292784a7494358a137fea52feffec0 6d9b3587ce494cb8ac153a66886f6883 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:24:23 compute-1 nova_compute[225855]: 2026-01-20 14:24:23.878 225859 DEBUG oslo_concurrency.lockutils [None req-dcb6e6fb-0239-4396-bfdf-251e5719f2f2 6d292784a7494358a137fea52feffec0 6d9b3587ce494cb8ac153a66886f6883 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:24:23 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:24:23 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:24:23 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:24:23.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:24:24 compute-1 nova_compute[225855]: 2026-01-20 14:24:24.027 225859 DEBUG oslo_concurrency.processutils [None req-dcb6e6fb-0239-4396-bfdf-251e5719f2f2 6d292784a7494358a137fea52feffec0 6d9b3587ce494cb8ac153a66886f6883 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:24:24 compute-1 podman[231678]: 2026-01-20 14:24:24.102330603 +0000 UTC m=+0.137972125 container health_status 72be95c12e041eb2cd1a16ad70dfa1391d9f03a5f3180a614ead9459dcfcd46f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller)
Jan 20 14:24:24 compute-1 nova_compute[225855]: 2026-01-20 14:24:24.144 225859 DEBUG nova.compute.manager [None req-0ace672a-be70-43d6-b620-a4ee2639faf7 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] [instance: 9dba01d4-4ecd-40eb-8d63-95dbdcd205cf] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 20 14:24:24 compute-1 nova_compute[225855]: 2026-01-20 14:24:24.147 225859 DEBUG nova.virt.libvirt.driver [None req-0ace672a-be70-43d6-b620-a4ee2639faf7 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] [instance: 9dba01d4-4ecd-40eb-8d63-95dbdcd205cf] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 20 14:24:24 compute-1 nova_compute[225855]: 2026-01-20 14:24:24.148 225859 INFO nova.virt.libvirt.driver [None req-0ace672a-be70-43d6-b620-a4ee2639faf7 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] [instance: 9dba01d4-4ecd-40eb-8d63-95dbdcd205cf] Creating image(s)
Jan 20 14:24:24 compute-1 nova_compute[225855]: 2026-01-20 14:24:24.182 225859 DEBUG nova.storage.rbd_utils [None req-0ace672a-be70-43d6-b620-a4ee2639faf7 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] rbd image 9dba01d4-4ecd-40eb-8d63-95dbdcd205cf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:24:24 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:24:24 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:24:24 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:24:24.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:24:24 compute-1 nova_compute[225855]: 2026-01-20 14:24:24.220 225859 DEBUG nova.storage.rbd_utils [None req-0ace672a-be70-43d6-b620-a4ee2639faf7 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] rbd image 9dba01d4-4ecd-40eb-8d63-95dbdcd205cf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:24:24 compute-1 nova_compute[225855]: 2026-01-20 14:24:24.244 225859 DEBUG nova.storage.rbd_utils [None req-0ace672a-be70-43d6-b620-a4ee2639faf7 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] rbd image 9dba01d4-4ecd-40eb-8d63-95dbdcd205cf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:24:24 compute-1 nova_compute[225855]: 2026-01-20 14:24:24.247 225859 DEBUG oslo_concurrency.processutils [None req-0ace672a-be70-43d6-b620-a4ee2639faf7 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:24:24 compute-1 nova_compute[225855]: 2026-01-20 14:24:24.268 225859 DEBUG nova.network.neutron [None req-0ace672a-be70-43d6-b620-a4ee2639faf7 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] [instance: 9dba01d4-4ecd-40eb-8d63-95dbdcd205cf] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Jan 20 14:24:24 compute-1 nova_compute[225855]: 2026-01-20 14:24:24.269 225859 DEBUG nova.compute.manager [None req-0ace672a-be70-43d6-b620-a4ee2639faf7 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] [instance: 9dba01d4-4ecd-40eb-8d63-95dbdcd205cf] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 20 14:24:24 compute-1 ceph-mon[81775]: pgmap v988: 321 pgs: 321 active+clean; 323 MiB data, 429 MiB used, 21 GiB / 21 GiB avail; 4.3 MiB/s rd, 3.9 MiB/s wr, 295 op/s
Jan 20 14:24:24 compute-1 nova_compute[225855]: 2026-01-20 14:24:24.323 225859 DEBUG oslo_concurrency.processutils [None req-0ace672a-be70-43d6-b620-a4ee2639faf7 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:24:24 compute-1 nova_compute[225855]: 2026-01-20 14:24:24.324 225859 DEBUG oslo_concurrency.lockutils [None req-0ace672a-be70-43d6-b620-a4ee2639faf7 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Acquiring lock "82d5c1918fd7c974214c7a48c1793a7a82560462" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:24:24 compute-1 nova_compute[225855]: 2026-01-20 14:24:24.326 225859 DEBUG oslo_concurrency.lockutils [None req-0ace672a-be70-43d6-b620-a4ee2639faf7 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:24:24 compute-1 nova_compute[225855]: 2026-01-20 14:24:24.327 225859 DEBUG oslo_concurrency.lockutils [None req-0ace672a-be70-43d6-b620-a4ee2639faf7 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:24:24 compute-1 nova_compute[225855]: 2026-01-20 14:24:24.361 225859 DEBUG nova.storage.rbd_utils [None req-0ace672a-be70-43d6-b620-a4ee2639faf7 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] rbd image 9dba01d4-4ecd-40eb-8d63-95dbdcd205cf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:24:24 compute-1 nova_compute[225855]: 2026-01-20 14:24:24.364 225859 DEBUG oslo_concurrency.processutils [None req-0ace672a-be70-43d6-b620-a4ee2639faf7 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 9dba01d4-4ecd-40eb-8d63-95dbdcd205cf_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:24:24 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:24:24 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3731601468' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:24:24 compute-1 nova_compute[225855]: 2026-01-20 14:24:24.520 225859 DEBUG oslo_concurrency.processutils [None req-dcb6e6fb-0239-4396-bfdf-251e5719f2f2 6d292784a7494358a137fea52feffec0 6d9b3587ce494cb8ac153a66886f6883 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.492s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:24:24 compute-1 nova_compute[225855]: 2026-01-20 14:24:24.529 225859 DEBUG nova.compute.provider_tree [None req-dcb6e6fb-0239-4396-bfdf-251e5719f2f2 6d292784a7494358a137fea52feffec0 6d9b3587ce494cb8ac153a66886f6883 - - default default] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:24:24 compute-1 nova_compute[225855]: 2026-01-20 14:24:24.551 225859 DEBUG nova.scheduler.client.report [None req-dcb6e6fb-0239-4396-bfdf-251e5719f2f2 6d292784a7494358a137fea52feffec0 6d9b3587ce494cb8ac153a66886f6883 - - default default] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:24:24 compute-1 nova_compute[225855]: 2026-01-20 14:24:24.575 225859 DEBUG oslo_concurrency.lockutils [None req-dcb6e6fb-0239-4396-bfdf-251e5719f2f2 6d292784a7494358a137fea52feffec0 6d9b3587ce494cb8ac153a66886f6883 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.697s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:24:24 compute-1 nova_compute[225855]: 2026-01-20 14:24:24.611 225859 INFO nova.scheduler.client.report [None req-dcb6e6fb-0239-4396-bfdf-251e5719f2f2 6d292784a7494358a137fea52feffec0 6d9b3587ce494cb8ac153a66886f6883 - - default default] Deleted allocations for instance 7108f815-a0ef-4f18-a2c2-c796476ace75
Jan 20 14:24:24 compute-1 nova_compute[225855]: 2026-01-20 14:24:24.686 225859 DEBUG oslo_concurrency.lockutils [None req-dcb6e6fb-0239-4396-bfdf-251e5719f2f2 6d292784a7494358a137fea52feffec0 6d9b3587ce494cb8ac153a66886f6883 - - default default] Lock "7108f815-a0ef-4f18-a2c2-c796476ace75" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.063s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:24:24 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:24:24 compute-1 nova_compute[225855]: 2026-01-20 14:24:24.748 225859 DEBUG oslo_concurrency.processutils [None req-0ace672a-be70-43d6-b620-a4ee2639faf7 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 9dba01d4-4ecd-40eb-8d63-95dbdcd205cf_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.384s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:24:24 compute-1 nova_compute[225855]: 2026-01-20 14:24:24.835 225859 DEBUG nova.storage.rbd_utils [None req-0ace672a-be70-43d6-b620-a4ee2639faf7 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] resizing rbd image 9dba01d4-4ecd-40eb-8d63-95dbdcd205cf_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 20 14:24:24 compute-1 nova_compute[225855]: 2026-01-20 14:24:24.952 225859 DEBUG nova.objects.instance [None req-0ace672a-be70-43d6-b620-a4ee2639faf7 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Lazy-loading 'migration_context' on Instance uuid 9dba01d4-4ecd-40eb-8d63-95dbdcd205cf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:24:24 compute-1 nova_compute[225855]: 2026-01-20 14:24:24.992 225859 DEBUG nova.virt.libvirt.driver [None req-0ace672a-be70-43d6-b620-a4ee2639faf7 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] [instance: 9dba01d4-4ecd-40eb-8d63-95dbdcd205cf] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 20 14:24:24 compute-1 nova_compute[225855]: 2026-01-20 14:24:24.992 225859 DEBUG nova.virt.libvirt.driver [None req-0ace672a-be70-43d6-b620-a4ee2639faf7 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] [instance: 9dba01d4-4ecd-40eb-8d63-95dbdcd205cf] Ensure instance console log exists: /var/lib/nova/instances/9dba01d4-4ecd-40eb-8d63-95dbdcd205cf/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 20 14:24:24 compute-1 nova_compute[225855]: 2026-01-20 14:24:24.993 225859 DEBUG oslo_concurrency.lockutils [None req-0ace672a-be70-43d6-b620-a4ee2639faf7 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:24:24 compute-1 nova_compute[225855]: 2026-01-20 14:24:24.994 225859 DEBUG oslo_concurrency.lockutils [None req-0ace672a-be70-43d6-b620-a4ee2639faf7 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:24:24 compute-1 nova_compute[225855]: 2026-01-20 14:24:24.995 225859 DEBUG oslo_concurrency.lockutils [None req-0ace672a-be70-43d6-b620-a4ee2639faf7 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:24:24 compute-1 nova_compute[225855]: 2026-01-20 14:24:24.998 225859 DEBUG nova.virt.libvirt.driver [None req-0ace672a-be70-43d6-b620-a4ee2639faf7 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] [instance: 9dba01d4-4ecd-40eb-8d63-95dbdcd205cf] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'device_type': 'disk', 'encryption_options': None, 'size': 0, 'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'guest_format': None, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'image_id': 'a32b3e07-16d8-46fd-9a7b-c242c432fcf9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 20 14:24:25 compute-1 nova_compute[225855]: 2026-01-20 14:24:25.004 225859 WARNING nova.virt.libvirt.driver [None req-0ace672a-be70-43d6-b620-a4ee2639faf7 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 14:24:25 compute-1 nova_compute[225855]: 2026-01-20 14:24:25.010 225859 DEBUG nova.virt.libvirt.host [None req-0ace672a-be70-43d6-b620-a4ee2639faf7 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 20 14:24:25 compute-1 nova_compute[225855]: 2026-01-20 14:24:25.011 225859 DEBUG nova.virt.libvirt.host [None req-0ace672a-be70-43d6-b620-a4ee2639faf7 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 20 14:24:25 compute-1 nova_compute[225855]: 2026-01-20 14:24:25.014 225859 DEBUG nova.virt.libvirt.host [None req-0ace672a-be70-43d6-b620-a4ee2639faf7 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 20 14:24:25 compute-1 nova_compute[225855]: 2026-01-20 14:24:25.015 225859 DEBUG nova.virt.libvirt.host [None req-0ace672a-be70-43d6-b620-a4ee2639faf7 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 20 14:24:25 compute-1 nova_compute[225855]: 2026-01-20 14:24:25.017 225859 DEBUG nova.virt.libvirt.driver [None req-0ace672a-be70-43d6-b620-a4ee2639faf7 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 20 14:24:25 compute-1 nova_compute[225855]: 2026-01-20 14:24:25.018 225859 DEBUG nova.virt.hardware [None req-0ace672a-be70-43d6-b620-a4ee2639faf7 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-20T14:21:55Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='522deaab-a741-4dbb-932d-d8b13a211c33',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 20 14:24:25 compute-1 nova_compute[225855]: 2026-01-20 14:24:25.019 225859 DEBUG nova.virt.hardware [None req-0ace672a-be70-43d6-b620-a4ee2639faf7 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 20 14:24:25 compute-1 nova_compute[225855]: 2026-01-20 14:24:25.020 225859 DEBUG nova.virt.hardware [None req-0ace672a-be70-43d6-b620-a4ee2639faf7 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 20 14:24:25 compute-1 nova_compute[225855]: 2026-01-20 14:24:25.020 225859 DEBUG nova.virt.hardware [None req-0ace672a-be70-43d6-b620-a4ee2639faf7 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 20 14:24:25 compute-1 nova_compute[225855]: 2026-01-20 14:24:25.021 225859 DEBUG nova.virt.hardware [None req-0ace672a-be70-43d6-b620-a4ee2639faf7 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 20 14:24:25 compute-1 nova_compute[225855]: 2026-01-20 14:24:25.021 225859 DEBUG nova.virt.hardware [None req-0ace672a-be70-43d6-b620-a4ee2639faf7 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 20 14:24:25 compute-1 nova_compute[225855]: 2026-01-20 14:24:25.022 225859 DEBUG nova.virt.hardware [None req-0ace672a-be70-43d6-b620-a4ee2639faf7 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 20 14:24:25 compute-1 nova_compute[225855]: 2026-01-20 14:24:25.023 225859 DEBUG nova.virt.hardware [None req-0ace672a-be70-43d6-b620-a4ee2639faf7 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 20 14:24:25 compute-1 nova_compute[225855]: 2026-01-20 14:24:25.023 225859 DEBUG nova.virt.hardware [None req-0ace672a-be70-43d6-b620-a4ee2639faf7 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 20 14:24:25 compute-1 nova_compute[225855]: 2026-01-20 14:24:25.024 225859 DEBUG nova.virt.hardware [None req-0ace672a-be70-43d6-b620-a4ee2639faf7 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 20 14:24:25 compute-1 nova_compute[225855]: 2026-01-20 14:24:25.024 225859 DEBUG nova.virt.hardware [None req-0ace672a-be70-43d6-b620-a4ee2639faf7 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 20 14:24:25 compute-1 nova_compute[225855]: 2026-01-20 14:24:25.029 225859 DEBUG oslo_concurrency.processutils [None req-0ace672a-be70-43d6-b620-a4ee2639faf7 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:24:25 compute-1 nova_compute[225855]: 2026-01-20 14:24:25.202 225859 DEBUG nova.compute.manager [req-0c6db9e3-52f1-4bf4-93c6-ddcffa88eaaa req-eb51b14c-07c6-4c86-934e-144dfcb565c5 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 7108f815-a0ef-4f18-a2c2-c796476ace75] Received event network-vif-deleted-4573fd6d-82c4-4e6d-bacb-86f1b5f0bf54 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:24:25 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/3731601468' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:24:25 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:24:25 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2632341963' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:24:25 compute-1 nova_compute[225855]: 2026-01-20 14:24:25.489 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:24:25 compute-1 nova_compute[225855]: 2026-01-20 14:24:25.506 225859 DEBUG oslo_concurrency.processutils [None req-0ace672a-be70-43d6-b620-a4ee2639faf7 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:24:25 compute-1 nova_compute[225855]: 2026-01-20 14:24:25.544 225859 DEBUG nova.storage.rbd_utils [None req-0ace672a-be70-43d6-b620-a4ee2639faf7 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] rbd image 9dba01d4-4ecd-40eb-8d63-95dbdcd205cf_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:24:25 compute-1 nova_compute[225855]: 2026-01-20 14:24:25.550 225859 DEBUG oslo_concurrency.processutils [None req-0ace672a-be70-43d6-b620-a4ee2639faf7 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:24:25 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:24:25.706 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=5, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '12:bb:42', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '06:92:24:f7:15:56'}, ipsec=False) old=SB_Global(nb_cfg=4) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:24:25 compute-1 nova_compute[225855]: 2026-01-20 14:24:25.706 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:24:25 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:24:25.708 140354 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 20 14:24:25 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:24:25 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:24:25 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:24:25.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:24:25 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:24:25 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2591375473' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:24:25 compute-1 nova_compute[225855]: 2026-01-20 14:24:25.996 225859 DEBUG oslo_concurrency.processutils [None req-0ace672a-be70-43d6-b620-a4ee2639faf7 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:24:25 compute-1 nova_compute[225855]: 2026-01-20 14:24:25.998 225859 DEBUG nova.objects.instance [None req-0ace672a-be70-43d6-b620-a4ee2639faf7 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Lazy-loading 'pci_devices' on Instance uuid 9dba01d4-4ecd-40eb-8d63-95dbdcd205cf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:24:26 compute-1 nova_compute[225855]: 2026-01-20 14:24:26.015 225859 DEBUG nova.virt.libvirt.driver [None req-0ace672a-be70-43d6-b620-a4ee2639faf7 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] [instance: 9dba01d4-4ecd-40eb-8d63-95dbdcd205cf] End _get_guest_xml xml=<domain type="kvm">
Jan 20 14:24:26 compute-1 nova_compute[225855]:   <uuid>9dba01d4-4ecd-40eb-8d63-95dbdcd205cf</uuid>
Jan 20 14:24:26 compute-1 nova_compute[225855]:   <name>instance-0000000b</name>
Jan 20 14:24:26 compute-1 nova_compute[225855]:   <memory>131072</memory>
Jan 20 14:24:26 compute-1 nova_compute[225855]:   <vcpu>1</vcpu>
Jan 20 14:24:26 compute-1 nova_compute[225855]:   <metadata>
Jan 20 14:24:26 compute-1 nova_compute[225855]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 14:24:26 compute-1 nova_compute[225855]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 14:24:26 compute-1 nova_compute[225855]:       <nova:name>tempest-ServersOnMultiNodesTest-server-758785217-2</nova:name>
Jan 20 14:24:26 compute-1 nova_compute[225855]:       <nova:creationTime>2026-01-20 14:24:25</nova:creationTime>
Jan 20 14:24:26 compute-1 nova_compute[225855]:       <nova:flavor name="m1.nano">
Jan 20 14:24:26 compute-1 nova_compute[225855]:         <nova:memory>128</nova:memory>
Jan 20 14:24:26 compute-1 nova_compute[225855]:         <nova:disk>1</nova:disk>
Jan 20 14:24:26 compute-1 nova_compute[225855]:         <nova:swap>0</nova:swap>
Jan 20 14:24:26 compute-1 nova_compute[225855]:         <nova:ephemeral>0</nova:ephemeral>
Jan 20 14:24:26 compute-1 nova_compute[225855]:         <nova:vcpus>1</nova:vcpus>
Jan 20 14:24:26 compute-1 nova_compute[225855]:       </nova:flavor>
Jan 20 14:24:26 compute-1 nova_compute[225855]:       <nova:owner>
Jan 20 14:24:26 compute-1 nova_compute[225855]:         <nova:user uuid="32a16ea2839748629233294de19222b3">tempest-ServersOnMultiNodesTest-1140514054-project-member</nova:user>
Jan 20 14:24:26 compute-1 nova_compute[225855]:         <nova:project uuid="986d9f2d9bd24a228e53a76694db0568">tempest-ServersOnMultiNodesTest-1140514054</nova:project>
Jan 20 14:24:26 compute-1 nova_compute[225855]:       </nova:owner>
Jan 20 14:24:26 compute-1 nova_compute[225855]:       <nova:root type="image" uuid="a32b3e07-16d8-46fd-9a7b-c242c432fcf9"/>
Jan 20 14:24:26 compute-1 nova_compute[225855]:       <nova:ports/>
Jan 20 14:24:26 compute-1 nova_compute[225855]:     </nova:instance>
Jan 20 14:24:26 compute-1 nova_compute[225855]:   </metadata>
Jan 20 14:24:26 compute-1 nova_compute[225855]:   <sysinfo type="smbios">
Jan 20 14:24:26 compute-1 nova_compute[225855]:     <system>
Jan 20 14:24:26 compute-1 nova_compute[225855]:       <entry name="manufacturer">RDO</entry>
Jan 20 14:24:26 compute-1 nova_compute[225855]:       <entry name="product">OpenStack Compute</entry>
Jan 20 14:24:26 compute-1 nova_compute[225855]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 14:24:26 compute-1 nova_compute[225855]:       <entry name="serial">9dba01d4-4ecd-40eb-8d63-95dbdcd205cf</entry>
Jan 20 14:24:26 compute-1 nova_compute[225855]:       <entry name="uuid">9dba01d4-4ecd-40eb-8d63-95dbdcd205cf</entry>
Jan 20 14:24:26 compute-1 nova_compute[225855]:       <entry name="family">Virtual Machine</entry>
Jan 20 14:24:26 compute-1 nova_compute[225855]:     </system>
Jan 20 14:24:26 compute-1 nova_compute[225855]:   </sysinfo>
Jan 20 14:24:26 compute-1 nova_compute[225855]:   <os>
Jan 20 14:24:26 compute-1 nova_compute[225855]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 20 14:24:26 compute-1 nova_compute[225855]:     <boot dev="hd"/>
Jan 20 14:24:26 compute-1 nova_compute[225855]:     <smbios mode="sysinfo"/>
Jan 20 14:24:26 compute-1 nova_compute[225855]:   </os>
Jan 20 14:24:26 compute-1 nova_compute[225855]:   <features>
Jan 20 14:24:26 compute-1 nova_compute[225855]:     <acpi/>
Jan 20 14:24:26 compute-1 nova_compute[225855]:     <apic/>
Jan 20 14:24:26 compute-1 nova_compute[225855]:     <vmcoreinfo/>
Jan 20 14:24:26 compute-1 nova_compute[225855]:   </features>
Jan 20 14:24:26 compute-1 nova_compute[225855]:   <clock offset="utc">
Jan 20 14:24:26 compute-1 nova_compute[225855]:     <timer name="pit" tickpolicy="delay"/>
Jan 20 14:24:26 compute-1 nova_compute[225855]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 20 14:24:26 compute-1 nova_compute[225855]:     <timer name="hpet" present="no"/>
Jan 20 14:24:26 compute-1 nova_compute[225855]:   </clock>
Jan 20 14:24:26 compute-1 nova_compute[225855]:   <cpu mode="custom" match="exact">
Jan 20 14:24:26 compute-1 nova_compute[225855]:     <model>Nehalem</model>
Jan 20 14:24:26 compute-1 nova_compute[225855]:     <topology sockets="1" cores="1" threads="1"/>
Jan 20 14:24:26 compute-1 nova_compute[225855]:   </cpu>
Jan 20 14:24:26 compute-1 nova_compute[225855]:   <devices>
Jan 20 14:24:26 compute-1 nova_compute[225855]:     <disk type="network" device="disk">
Jan 20 14:24:26 compute-1 nova_compute[225855]:       <driver type="raw" cache="none"/>
Jan 20 14:24:26 compute-1 nova_compute[225855]:       <source protocol="rbd" name="vms/9dba01d4-4ecd-40eb-8d63-95dbdcd205cf_disk">
Jan 20 14:24:26 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:24:26 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:24:26 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:24:26 compute-1 nova_compute[225855]:       </source>
Jan 20 14:24:26 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 14:24:26 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:24:26 compute-1 nova_compute[225855]:       </auth>
Jan 20 14:24:26 compute-1 nova_compute[225855]:       <target dev="vda" bus="virtio"/>
Jan 20 14:24:26 compute-1 nova_compute[225855]:     </disk>
Jan 20 14:24:26 compute-1 nova_compute[225855]:     <disk type="network" device="cdrom">
Jan 20 14:24:26 compute-1 nova_compute[225855]:       <driver type="raw" cache="none"/>
Jan 20 14:24:26 compute-1 nova_compute[225855]:       <source protocol="rbd" name="vms/9dba01d4-4ecd-40eb-8d63-95dbdcd205cf_disk.config">
Jan 20 14:24:26 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:24:26 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:24:26 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:24:26 compute-1 nova_compute[225855]:       </source>
Jan 20 14:24:26 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 14:24:26 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:24:26 compute-1 nova_compute[225855]:       </auth>
Jan 20 14:24:26 compute-1 nova_compute[225855]:       <target dev="sda" bus="sata"/>
Jan 20 14:24:26 compute-1 nova_compute[225855]:     </disk>
Jan 20 14:24:26 compute-1 nova_compute[225855]:     <serial type="pty">
Jan 20 14:24:26 compute-1 nova_compute[225855]:       <log file="/var/lib/nova/instances/9dba01d4-4ecd-40eb-8d63-95dbdcd205cf/console.log" append="off"/>
Jan 20 14:24:26 compute-1 nova_compute[225855]:     </serial>
Jan 20 14:24:26 compute-1 nova_compute[225855]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 20 14:24:26 compute-1 nova_compute[225855]:     <video>
Jan 20 14:24:26 compute-1 nova_compute[225855]:       <model type="virtio"/>
Jan 20 14:24:26 compute-1 nova_compute[225855]:     </video>
Jan 20 14:24:26 compute-1 nova_compute[225855]:     <input type="tablet" bus="usb"/>
Jan 20 14:24:26 compute-1 nova_compute[225855]:     <rng model="virtio">
Jan 20 14:24:26 compute-1 nova_compute[225855]:       <backend model="random">/dev/urandom</backend>
Jan 20 14:24:26 compute-1 nova_compute[225855]:     </rng>
Jan 20 14:24:26 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root"/>
Jan 20 14:24:26 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:24:26 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:24:26 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:24:26 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:24:26 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:24:26 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:24:26 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:24:26 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:24:26 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:24:26 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:24:26 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:24:26 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:24:26 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:24:26 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:24:26 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:24:26 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:24:26 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:24:26 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:24:26 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:24:26 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:24:26 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:24:26 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:24:26 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:24:26 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:24:26 compute-1 nova_compute[225855]:     <controller type="usb" index="0"/>
Jan 20 14:24:26 compute-1 nova_compute[225855]:     <memballoon model="virtio">
Jan 20 14:24:26 compute-1 nova_compute[225855]:       <stats period="10"/>
Jan 20 14:24:26 compute-1 nova_compute[225855]:     </memballoon>
Jan 20 14:24:26 compute-1 nova_compute[225855]:   </devices>
Jan 20 14:24:26 compute-1 nova_compute[225855]: </domain>
Jan 20 14:24:26 compute-1 nova_compute[225855]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 20 14:24:26 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:24:26 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:24:26 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:24:26.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:24:26 compute-1 ceph-mon[81775]: pgmap v989: 321 pgs: 321 active+clean; 293 MiB data, 412 MiB used, 21 GiB / 21 GiB avail; 4.2 MiB/s rd, 2.7 MiB/s wr, 279 op/s
Jan 20 14:24:26 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/2632341963' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:24:26 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/3761145659' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:24:26 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/2591375473' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:24:26 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/3426931299' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:24:26 compute-1 nova_compute[225855]: 2026-01-20 14:24:26.397 225859 DEBUG nova.virt.libvirt.driver [None req-0ace672a-be70-43d6-b620-a4ee2639faf7 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:24:26 compute-1 nova_compute[225855]: 2026-01-20 14:24:26.398 225859 DEBUG nova.virt.libvirt.driver [None req-0ace672a-be70-43d6-b620-a4ee2639faf7 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:24:26 compute-1 nova_compute[225855]: 2026-01-20 14:24:26.399 225859 INFO nova.virt.libvirt.driver [None req-0ace672a-be70-43d6-b620-a4ee2639faf7 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] [instance: 9dba01d4-4ecd-40eb-8d63-95dbdcd205cf] Using config drive
Jan 20 14:24:26 compute-1 nova_compute[225855]: 2026-01-20 14:24:26.441 225859 DEBUG nova.storage.rbd_utils [None req-0ace672a-be70-43d6-b620-a4ee2639faf7 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] rbd image 9dba01d4-4ecd-40eb-8d63-95dbdcd205cf_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:24:27 compute-1 nova_compute[225855]: 2026-01-20 14:24:27.054 225859 INFO nova.virt.libvirt.driver [None req-0ace672a-be70-43d6-b620-a4ee2639faf7 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] [instance: 9dba01d4-4ecd-40eb-8d63-95dbdcd205cf] Creating config drive at /var/lib/nova/instances/9dba01d4-4ecd-40eb-8d63-95dbdcd205cf/disk.config
Jan 20 14:24:27 compute-1 nova_compute[225855]: 2026-01-20 14:24:27.064 225859 DEBUG oslo_concurrency.processutils [None req-0ace672a-be70-43d6-b620-a4ee2639faf7 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9dba01d4-4ecd-40eb-8d63-95dbdcd205cf/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpb0b_kpgp execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:24:27 compute-1 nova_compute[225855]: 2026-01-20 14:24:27.210 225859 DEBUG oslo_concurrency.processutils [None req-0ace672a-be70-43d6-b620-a4ee2639faf7 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9dba01d4-4ecd-40eb-8d63-95dbdcd205cf/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpb0b_kpgp" returned: 0 in 0.145s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:24:27 compute-1 nova_compute[225855]: 2026-01-20 14:24:27.266 225859 DEBUG nova.storage.rbd_utils [None req-0ace672a-be70-43d6-b620-a4ee2639faf7 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] rbd image 9dba01d4-4ecd-40eb-8d63-95dbdcd205cf_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:24:27 compute-1 nova_compute[225855]: 2026-01-20 14:24:27.272 225859 DEBUG oslo_concurrency.processutils [None req-0ace672a-be70-43d6-b620-a4ee2639faf7 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9dba01d4-4ecd-40eb-8d63-95dbdcd205cf/disk.config 9dba01d4-4ecd-40eb-8d63-95dbdcd205cf_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:24:27 compute-1 nova_compute[225855]: 2026-01-20 14:24:27.699 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:24:27 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:24:27 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:24:27 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:24:27.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:24:28 compute-1 ceph-mon[81775]: pgmap v990: 321 pgs: 321 active+clean; 408 MiB data, 451 MiB used, 21 GiB / 21 GiB avail; 4.3 MiB/s rd, 7.5 MiB/s wr, 351 op/s
Jan 20 14:24:28 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:24:28 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:24:28 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:24:28.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:24:28 compute-1 nova_compute[225855]: 2026-01-20 14:24:28.216 225859 DEBUG oslo_concurrency.processutils [None req-0ace672a-be70-43d6-b620-a4ee2639faf7 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9dba01d4-4ecd-40eb-8d63-95dbdcd205cf/disk.config 9dba01d4-4ecd-40eb-8d63-95dbdcd205cf_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.944s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:24:28 compute-1 nova_compute[225855]: 2026-01-20 14:24:28.217 225859 INFO nova.virt.libvirt.driver [None req-0ace672a-be70-43d6-b620-a4ee2639faf7 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] [instance: 9dba01d4-4ecd-40eb-8d63-95dbdcd205cf] Deleting local config drive /var/lib/nova/instances/9dba01d4-4ecd-40eb-8d63-95dbdcd205cf/disk.config because it was imported into RBD.
Jan 20 14:24:28 compute-1 systemd-machined[194361]: New machine qemu-5-instance-0000000b.
Jan 20 14:24:28 compute-1 systemd[1]: Started Virtual Machine qemu-5-instance-0000000b.
Jan 20 14:24:29 compute-1 nova_compute[225855]: 2026-01-20 14:24:29.248 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768919069.2476351, 9dba01d4-4ecd-40eb-8d63-95dbdcd205cf => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:24:29 compute-1 nova_compute[225855]: 2026-01-20 14:24:29.250 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 9dba01d4-4ecd-40eb-8d63-95dbdcd205cf] VM Resumed (Lifecycle Event)
Jan 20 14:24:29 compute-1 nova_compute[225855]: 2026-01-20 14:24:29.255 225859 DEBUG nova.compute.manager [None req-0ace672a-be70-43d6-b620-a4ee2639faf7 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] [instance: 9dba01d4-4ecd-40eb-8d63-95dbdcd205cf] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 20 14:24:29 compute-1 nova_compute[225855]: 2026-01-20 14:24:29.255 225859 DEBUG nova.virt.libvirt.driver [None req-0ace672a-be70-43d6-b620-a4ee2639faf7 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] [instance: 9dba01d4-4ecd-40eb-8d63-95dbdcd205cf] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 20 14:24:29 compute-1 nova_compute[225855]: 2026-01-20 14:24:29.260 225859 INFO nova.virt.libvirt.driver [-] [instance: 9dba01d4-4ecd-40eb-8d63-95dbdcd205cf] Instance spawned successfully.
Jan 20 14:24:29 compute-1 nova_compute[225855]: 2026-01-20 14:24:29.260 225859 DEBUG nova.virt.libvirt.driver [None req-0ace672a-be70-43d6-b620-a4ee2639faf7 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] [instance: 9dba01d4-4ecd-40eb-8d63-95dbdcd205cf] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 20 14:24:29 compute-1 nova_compute[225855]: 2026-01-20 14:24:29.479 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 9dba01d4-4ecd-40eb-8d63-95dbdcd205cf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:24:29 compute-1 nova_compute[225855]: 2026-01-20 14:24:29.486 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 9dba01d4-4ecd-40eb-8d63-95dbdcd205cf] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:24:29 compute-1 nova_compute[225855]: 2026-01-20 14:24:29.490 225859 DEBUG nova.virt.libvirt.driver [None req-0ace672a-be70-43d6-b620-a4ee2639faf7 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] [instance: 9dba01d4-4ecd-40eb-8d63-95dbdcd205cf] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:24:29 compute-1 nova_compute[225855]: 2026-01-20 14:24:29.491 225859 DEBUG nova.virt.libvirt.driver [None req-0ace672a-be70-43d6-b620-a4ee2639faf7 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] [instance: 9dba01d4-4ecd-40eb-8d63-95dbdcd205cf] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:24:29 compute-1 nova_compute[225855]: 2026-01-20 14:24:29.492 225859 DEBUG nova.virt.libvirt.driver [None req-0ace672a-be70-43d6-b620-a4ee2639faf7 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] [instance: 9dba01d4-4ecd-40eb-8d63-95dbdcd205cf] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:24:29 compute-1 nova_compute[225855]: 2026-01-20 14:24:29.493 225859 DEBUG nova.virt.libvirt.driver [None req-0ace672a-be70-43d6-b620-a4ee2639faf7 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] [instance: 9dba01d4-4ecd-40eb-8d63-95dbdcd205cf] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:24:29 compute-1 nova_compute[225855]: 2026-01-20 14:24:29.493 225859 DEBUG nova.virt.libvirt.driver [None req-0ace672a-be70-43d6-b620-a4ee2639faf7 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] [instance: 9dba01d4-4ecd-40eb-8d63-95dbdcd205cf] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:24:29 compute-1 nova_compute[225855]: 2026-01-20 14:24:29.494 225859 DEBUG nova.virt.libvirt.driver [None req-0ace672a-be70-43d6-b620-a4ee2639faf7 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] [instance: 9dba01d4-4ecd-40eb-8d63-95dbdcd205cf] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:24:29 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:24:29.710 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5ffd4ac3-9266-4927-98ad-20a17782c725, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '5'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:24:29 compute-1 nova_compute[225855]: 2026-01-20 14:24:29.710 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 9dba01d4-4ecd-40eb-8d63-95dbdcd205cf] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 14:24:29 compute-1 nova_compute[225855]: 2026-01-20 14:24:29.711 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768919069.2490425, 9dba01d4-4ecd-40eb-8d63-95dbdcd205cf => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:24:29 compute-1 nova_compute[225855]: 2026-01-20 14:24:29.711 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 9dba01d4-4ecd-40eb-8d63-95dbdcd205cf] VM Started (Lifecycle Event)
Jan 20 14:24:29 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:24:29 compute-1 nova_compute[225855]: 2026-01-20 14:24:29.810 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 9dba01d4-4ecd-40eb-8d63-95dbdcd205cf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:24:29 compute-1 nova_compute[225855]: 2026-01-20 14:24:29.817 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 9dba01d4-4ecd-40eb-8d63-95dbdcd205cf] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:24:29 compute-1 nova_compute[225855]: 2026-01-20 14:24:29.834 225859 INFO nova.compute.manager [None req-0ace672a-be70-43d6-b620-a4ee2639faf7 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] [instance: 9dba01d4-4ecd-40eb-8d63-95dbdcd205cf] Took 5.69 seconds to spawn the instance on the hypervisor.
Jan 20 14:24:29 compute-1 nova_compute[225855]: 2026-01-20 14:24:29.835 225859 DEBUG nova.compute.manager [None req-0ace672a-be70-43d6-b620-a4ee2639faf7 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] [instance: 9dba01d4-4ecd-40eb-8d63-95dbdcd205cf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:24:29 compute-1 nova_compute[225855]: 2026-01-20 14:24:29.886 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 9dba01d4-4ecd-40eb-8d63-95dbdcd205cf] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 14:24:29 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:24:29 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:24:29 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:24:29.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:24:30 compute-1 nova_compute[225855]: 2026-01-20 14:24:30.107 225859 INFO nova.compute.manager [None req-0ace672a-be70-43d6-b620-a4ee2639faf7 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] [instance: 9dba01d4-4ecd-40eb-8d63-95dbdcd205cf] Took 8.35 seconds to build instance.
Jan 20 14:24:30 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:24:30 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:24:30 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:24:30.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:24:30 compute-1 ceph-mon[81775]: pgmap v991: 321 pgs: 321 active+clean; 422 MiB data, 457 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 6.7 MiB/s wr, 238 op/s
Jan 20 14:24:30 compute-1 nova_compute[225855]: 2026-01-20 14:24:30.366 225859 DEBUG oslo_concurrency.lockutils [None req-0ace672a-be70-43d6-b620-a4ee2639faf7 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Lock "9dba01d4-4ecd-40eb-8d63-95dbdcd205cf" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.759s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:24:30 compute-1 nova_compute[225855]: 2026-01-20 14:24:30.492 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:24:31 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:24:31 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:24:31 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:24:31.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:24:32 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:24:32 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:24:32 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:24:32.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:24:32 compute-1 nova_compute[225855]: 2026-01-20 14:24:32.701 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:24:33 compute-1 ceph-mon[81775]: pgmap v992: 321 pgs: 321 active+clean; 435 MiB data, 465 MiB used, 21 GiB / 21 GiB avail; 3.7 MiB/s rd, 6.7 MiB/s wr, 275 op/s
Jan 20 14:24:33 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:24:33 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:24:33 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:24:33.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:24:34 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:24:34 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:24:34 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:24:34.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:24:34 compute-1 ceph-mon[81775]: pgmap v993: 321 pgs: 321 active+clean; 451 MiB data, 479 MiB used, 21 GiB / 21 GiB avail; 6.0 MiB/s rd, 7.8 MiB/s wr, 345 op/s
Jan 20 14:24:34 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:24:35 compute-1 podman[232075]: 2026-01-20 14:24:35.070388081 +0000 UTC m=+0.089253563 container health_status 533cd8e0c3b8b7f910f15cef49e7f879ab46c12ce646a35628c15f13725eb500 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Jan 20 14:24:35 compute-1 nova_compute[225855]: 2026-01-20 14:24:35.466 225859 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1768919060.461705, 7108f815-a0ef-4f18-a2c2-c796476ace75 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:24:35 compute-1 nova_compute[225855]: 2026-01-20 14:24:35.467 225859 INFO nova.compute.manager [-] [instance: 7108f815-a0ef-4f18-a2c2-c796476ace75] VM Stopped (Lifecycle Event)
Jan 20 14:24:35 compute-1 nova_compute[225855]: 2026-01-20 14:24:35.494 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:24:35 compute-1 ceph-mon[81775]: pgmap v994: 321 pgs: 321 active+clean; 451 MiB data, 484 MiB used, 21 GiB / 21 GiB avail; 6.2 MiB/s rd, 7.8 MiB/s wr, 357 op/s
Jan 20 14:24:35 compute-1 nova_compute[225855]: 2026-01-20 14:24:35.749 225859 DEBUG nova.compute.manager [None req-c3334367-0b75-42bf-87b7-485b1f7257f0 - - - - - -] [instance: 7108f815-a0ef-4f18-a2c2-c796476ace75] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:24:35 compute-1 nova_compute[225855]: 2026-01-20 14:24:35.775 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:24:35 compute-1 nova_compute[225855]: 2026-01-20 14:24:35.798 225859 DEBUG oslo_concurrency.lockutils [None req-917b20e5-4cc6-4508-84c6-5fd1f589efc2 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Acquiring lock "9dba01d4-4ecd-40eb-8d63-95dbdcd205cf" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:24:35 compute-1 nova_compute[225855]: 2026-01-20 14:24:35.799 225859 DEBUG oslo_concurrency.lockutils [None req-917b20e5-4cc6-4508-84c6-5fd1f589efc2 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Lock "9dba01d4-4ecd-40eb-8d63-95dbdcd205cf" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:24:35 compute-1 nova_compute[225855]: 2026-01-20 14:24:35.799 225859 DEBUG oslo_concurrency.lockutils [None req-917b20e5-4cc6-4508-84c6-5fd1f589efc2 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Acquiring lock "9dba01d4-4ecd-40eb-8d63-95dbdcd205cf-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:24:35 compute-1 nova_compute[225855]: 2026-01-20 14:24:35.800 225859 DEBUG oslo_concurrency.lockutils [None req-917b20e5-4cc6-4508-84c6-5fd1f589efc2 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Lock "9dba01d4-4ecd-40eb-8d63-95dbdcd205cf-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:24:35 compute-1 nova_compute[225855]: 2026-01-20 14:24:35.800 225859 DEBUG oslo_concurrency.lockutils [None req-917b20e5-4cc6-4508-84c6-5fd1f589efc2 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Lock "9dba01d4-4ecd-40eb-8d63-95dbdcd205cf-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:24:35 compute-1 nova_compute[225855]: 2026-01-20 14:24:35.802 225859 INFO nova.compute.manager [None req-917b20e5-4cc6-4508-84c6-5fd1f589efc2 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] [instance: 9dba01d4-4ecd-40eb-8d63-95dbdcd205cf] Terminating instance
Jan 20 14:24:35 compute-1 nova_compute[225855]: 2026-01-20 14:24:35.805 225859 DEBUG oslo_concurrency.lockutils [None req-917b20e5-4cc6-4508-84c6-5fd1f589efc2 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Acquiring lock "refresh_cache-9dba01d4-4ecd-40eb-8d63-95dbdcd205cf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:24:35 compute-1 nova_compute[225855]: 2026-01-20 14:24:35.805 225859 DEBUG oslo_concurrency.lockutils [None req-917b20e5-4cc6-4508-84c6-5fd1f589efc2 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Acquired lock "refresh_cache-9dba01d4-4ecd-40eb-8d63-95dbdcd205cf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:24:35 compute-1 nova_compute[225855]: 2026-01-20 14:24:35.806 225859 DEBUG nova.network.neutron [None req-917b20e5-4cc6-4508-84c6-5fd1f589efc2 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] [instance: 9dba01d4-4ecd-40eb-8d63-95dbdcd205cf] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 14:24:35 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:24:35 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:24:35 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:24:35.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:24:36 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:24:36 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:24:36 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:24:36.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:24:36 compute-1 nova_compute[225855]: 2026-01-20 14:24:36.311 225859 DEBUG nova.network.neutron [None req-917b20e5-4cc6-4508-84c6-5fd1f589efc2 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] [instance: 9dba01d4-4ecd-40eb-8d63-95dbdcd205cf] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 20 14:24:36 compute-1 sudo[232095]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:24:36 compute-1 sudo[232095]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:24:36 compute-1 sudo[232095]: pam_unix(sudo:session): session closed for user root
Jan 20 14:24:36 compute-1 sudo[232120]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:24:36 compute-1 sudo[232120]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:24:36 compute-1 sudo[232120]: pam_unix(sudo:session): session closed for user root
Jan 20 14:24:37 compute-1 nova_compute[225855]: 2026-01-20 14:24:37.373 225859 DEBUG nova.network.neutron [None req-917b20e5-4cc6-4508-84c6-5fd1f589efc2 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] [instance: 9dba01d4-4ecd-40eb-8d63-95dbdcd205cf] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:24:37 compute-1 nova_compute[225855]: 2026-01-20 14:24:37.704 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:24:37 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:24:37 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:24:37 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:24:37.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:24:38 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:24:38 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:24:38 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:24:38.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:24:38 compute-1 nova_compute[225855]: 2026-01-20 14:24:38.498 225859 DEBUG oslo_concurrency.lockutils [None req-917b20e5-4cc6-4508-84c6-5fd1f589efc2 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Releasing lock "refresh_cache-9dba01d4-4ecd-40eb-8d63-95dbdcd205cf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:24:38 compute-1 nova_compute[225855]: 2026-01-20 14:24:38.501 225859 DEBUG nova.compute.manager [None req-917b20e5-4cc6-4508-84c6-5fd1f589efc2 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] [instance: 9dba01d4-4ecd-40eb-8d63-95dbdcd205cf] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 20 14:24:38 compute-1 ovn_controller[130490]: 2026-01-20T14:24:38Z|00058|binding|INFO|Releasing lport 428c2ef0-5c20-4d34-88ac-9a0d29a78f0e from this chassis (sb_readonly=0)
Jan 20 14:24:38 compute-1 ovn_controller[130490]: 2026-01-20T14:24:38Z|00059|binding|INFO|Releasing lport 69d1df5f-8105-47f6-864e-688d91c13d13 from this chassis (sb_readonly=0)
Jan 20 14:24:38 compute-1 nova_compute[225855]: 2026-01-20 14:24:38.535 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:24:38 compute-1 ceph-mon[81775]: pgmap v995: 321 pgs: 321 active+clean; 451 MiB data, 484 MiB used, 21 GiB / 21 GiB avail; 6.2 MiB/s rd, 7.9 MiB/s wr, 339 op/s
Jan 20 14:24:38 compute-1 ovn_controller[130490]: 2026-01-20T14:24:38Z|00060|binding|INFO|Releasing lport 428c2ef0-5c20-4d34-88ac-9a0d29a78f0e from this chassis (sb_readonly=0)
Jan 20 14:24:38 compute-1 ovn_controller[130490]: 2026-01-20T14:24:38Z|00061|binding|INFO|Releasing lport 69d1df5f-8105-47f6-864e-688d91c13d13 from this chassis (sb_readonly=0)
Jan 20 14:24:38 compute-1 nova_compute[225855]: 2026-01-20 14:24:38.802 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:24:38 compute-1 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d0000000b.scope: Deactivated successfully.
Jan 20 14:24:38 compute-1 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d0000000b.scope: Consumed 10.691s CPU time.
Jan 20 14:24:38 compute-1 systemd-machined[194361]: Machine qemu-5-instance-0000000b terminated.
Jan 20 14:24:38 compute-1 nova_compute[225855]: 2026-01-20 14:24:38.984 225859 INFO nova.virt.libvirt.driver [-] [instance: 9dba01d4-4ecd-40eb-8d63-95dbdcd205cf] Instance destroyed successfully.
Jan 20 14:24:38 compute-1 nova_compute[225855]: 2026-01-20 14:24:38.984 225859 DEBUG nova.objects.instance [None req-917b20e5-4cc6-4508-84c6-5fd1f589efc2 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Lazy-loading 'resources' on Instance uuid 9dba01d4-4ecd-40eb-8d63-95dbdcd205cf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:24:39 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:24:39 compute-1 ceph-mon[81775]: pgmap v996: 321 pgs: 321 active+clean; 451 MiB data, 484 MiB used, 21 GiB / 21 GiB avail; 6.0 MiB/s rd, 2.5 MiB/s wr, 253 op/s
Jan 20 14:24:39 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:24:39 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:24:39 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:24:39.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:24:40 compute-1 nova_compute[225855]: 2026-01-20 14:24:40.010 225859 INFO nova.virt.libvirt.driver [None req-917b20e5-4cc6-4508-84c6-5fd1f589efc2 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] [instance: 9dba01d4-4ecd-40eb-8d63-95dbdcd205cf] Deleting instance files /var/lib/nova/instances/9dba01d4-4ecd-40eb-8d63-95dbdcd205cf_del
Jan 20 14:24:40 compute-1 nova_compute[225855]: 2026-01-20 14:24:40.011 225859 INFO nova.virt.libvirt.driver [None req-917b20e5-4cc6-4508-84c6-5fd1f589efc2 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] [instance: 9dba01d4-4ecd-40eb-8d63-95dbdcd205cf] Deletion of /var/lib/nova/instances/9dba01d4-4ecd-40eb-8d63-95dbdcd205cf_del complete
Jan 20 14:24:40 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:24:40 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:24:40 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:24:40.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:24:40 compute-1 nova_compute[225855]: 2026-01-20 14:24:40.496 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:24:41 compute-1 nova_compute[225855]: 2026-01-20 14:24:41.147 225859 INFO nova.compute.manager [None req-917b20e5-4cc6-4508-84c6-5fd1f589efc2 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] [instance: 9dba01d4-4ecd-40eb-8d63-95dbdcd205cf] Took 2.65 seconds to destroy the instance on the hypervisor.
Jan 20 14:24:41 compute-1 nova_compute[225855]: 2026-01-20 14:24:41.148 225859 DEBUG oslo.service.loopingcall [None req-917b20e5-4cc6-4508-84c6-5fd1f589efc2 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 20 14:24:41 compute-1 nova_compute[225855]: 2026-01-20 14:24:41.148 225859 DEBUG nova.compute.manager [-] [instance: 9dba01d4-4ecd-40eb-8d63-95dbdcd205cf] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 20 14:24:41 compute-1 nova_compute[225855]: 2026-01-20 14:24:41.149 225859 DEBUG nova.network.neutron [-] [instance: 9dba01d4-4ecd-40eb-8d63-95dbdcd205cf] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 20 14:24:41 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:24:41 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:24:41 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:24:41.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:24:42 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:24:42 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:24:42 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:24:42.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:24:42 compute-1 nova_compute[225855]: 2026-01-20 14:24:42.706 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:24:42 compute-1 ceph-mon[81775]: pgmap v997: 321 pgs: 321 active+clean; 437 MiB data, 475 MiB used, 21 GiB / 21 GiB avail; 5.9 MiB/s rd, 2.3 MiB/s wr, 252 op/s
Jan 20 14:24:43 compute-1 ceph-mon[81775]: pgmap v998: 321 pgs: 321 active+clean; 392 MiB data, 454 MiB used, 21 GiB / 21 GiB avail; 2.6 MiB/s rd, 2.1 MiB/s wr, 202 op/s
Jan 20 14:24:43 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:24:43 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:24:43 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:24:43.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:24:44 compute-1 nova_compute[225855]: 2026-01-20 14:24:44.185 225859 DEBUG nova.network.neutron [-] [instance: 9dba01d4-4ecd-40eb-8d63-95dbdcd205cf] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 20 14:24:44 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:24:44 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:24:44 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:24:44.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:24:44 compute-1 nova_compute[225855]: 2026-01-20 14:24:44.348 225859 DEBUG nova.network.neutron [-] [instance: 9dba01d4-4ecd-40eb-8d63-95dbdcd205cf] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:24:44 compute-1 nova_compute[225855]: 2026-01-20 14:24:44.402 225859 INFO nova.compute.manager [-] [instance: 9dba01d4-4ecd-40eb-8d63-95dbdcd205cf] Took 3.25 seconds to deallocate network for instance.
Jan 20 14:24:44 compute-1 nova_compute[225855]: 2026-01-20 14:24:44.597 225859 DEBUG oslo_concurrency.lockutils [None req-917b20e5-4cc6-4508-84c6-5fd1f589efc2 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:24:44 compute-1 nova_compute[225855]: 2026-01-20 14:24:44.597 225859 DEBUG oslo_concurrency.lockutils [None req-917b20e5-4cc6-4508-84c6-5fd1f589efc2 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:24:44 compute-1 nova_compute[225855]: 2026-01-20 14:24:44.683 225859 DEBUG oslo_concurrency.processutils [None req-917b20e5-4cc6-4508-84c6-5fd1f589efc2 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:24:44 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:24:44 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/2226425318' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:24:45 compute-1 nova_compute[225855]: 2026-01-20 14:24:45.098 225859 DEBUG oslo_concurrency.processutils [None req-917b20e5-4cc6-4508-84c6-5fd1f589efc2 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.415s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:24:45 compute-1 nova_compute[225855]: 2026-01-20 14:24:45.103 225859 DEBUG nova.compute.provider_tree [None req-917b20e5-4cc6-4508-84c6-5fd1f589efc2 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:24:45 compute-1 nova_compute[225855]: 2026-01-20 14:24:45.332 225859 DEBUG nova.scheduler.client.report [None req-917b20e5-4cc6-4508-84c6-5fd1f589efc2 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:24:45 compute-1 nova_compute[225855]: 2026-01-20 14:24:45.447 225859 DEBUG oslo_concurrency.lockutils [None req-917b20e5-4cc6-4508-84c6-5fd1f589efc2 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.849s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:24:45 compute-1 nova_compute[225855]: 2026-01-20 14:24:45.498 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:24:45 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:24:45 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:24:45 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:24:45.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:24:45 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/1638211528' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:24:45 compute-1 ceph-mon[81775]: pgmap v999: 321 pgs: 321 active+clean; 405 MiB data, 464 MiB used, 21 GiB / 21 GiB avail; 295 KiB/s rd, 1.8 MiB/s wr, 111 op/s
Jan 20 14:24:46 compute-1 nova_compute[225855]: 2026-01-20 14:24:46.077 225859 INFO nova.scheduler.client.report [None req-917b20e5-4cc6-4508-84c6-5fd1f589efc2 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Deleted allocations for instance 9dba01d4-4ecd-40eb-8d63-95dbdcd205cf
Jan 20 14:24:46 compute-1 nova_compute[225855]: 2026-01-20 14:24:46.237 225859 DEBUG oslo_concurrency.lockutils [None req-917b20e5-4cc6-4508-84c6-5fd1f589efc2 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Lock "9dba01d4-4ecd-40eb-8d63-95dbdcd205cf" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 10.438s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:24:46 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:24:46 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:24:46 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:24:46.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:24:47 compute-1 nova_compute[225855]: 2026-01-20 14:24:47.708 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:24:47 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:24:47 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:24:47 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:24:47.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:24:48 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:24:48 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:24:48 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:24:48.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:24:48 compute-1 ceph-mon[81775]: pgmap v1000: 321 pgs: 321 active+clean; 405 MiB data, 464 MiB used, 21 GiB / 21 GiB avail; 59 KiB/s rd, 1.8 MiB/s wr, 88 op/s
Jan 20 14:24:49 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:24:49 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:24:49 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:24:49 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:24:49.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:24:50 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:24:50 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:24:50 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:24:50.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:24:50 compute-1 nova_compute[225855]: 2026-01-20 14:24:50.500 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:24:50 compute-1 ceph-mon[81775]: pgmap v1001: 321 pgs: 321 active+clean; 397 MiB data, 457 MiB used, 21 GiB / 21 GiB avail; 66 KiB/s rd, 1.8 MiB/s wr, 96 op/s
Jan 20 14:24:50 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/4226925414' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:24:50 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/150030370' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:24:51 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/852221292' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:24:51 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:24:51 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:24:51 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:24:51.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:24:52 compute-1 nova_compute[225855]: 2026-01-20 14:24:52.199 225859 DEBUG oslo_concurrency.lockutils [None req-dd11f51b-6073-426d-bbd2-06f1e3064eb6 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Acquiring lock "4bfa17e5-6bfc-42e2-9c3f-959596201da0" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:24:52 compute-1 nova_compute[225855]: 2026-01-20 14:24:52.200 225859 DEBUG oslo_concurrency.lockutils [None req-dd11f51b-6073-426d-bbd2-06f1e3064eb6 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Lock "4bfa17e5-6bfc-42e2-9c3f-959596201da0" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:24:52 compute-1 nova_compute[225855]: 2026-01-20 14:24:52.200 225859 DEBUG oslo_concurrency.lockutils [None req-dd11f51b-6073-426d-bbd2-06f1e3064eb6 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Acquiring lock "4bfa17e5-6bfc-42e2-9c3f-959596201da0-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:24:52 compute-1 nova_compute[225855]: 2026-01-20 14:24:52.201 225859 DEBUG oslo_concurrency.lockutils [None req-dd11f51b-6073-426d-bbd2-06f1e3064eb6 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Lock "4bfa17e5-6bfc-42e2-9c3f-959596201da0-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:24:52 compute-1 nova_compute[225855]: 2026-01-20 14:24:52.202 225859 DEBUG oslo_concurrency.lockutils [None req-dd11f51b-6073-426d-bbd2-06f1e3064eb6 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Lock "4bfa17e5-6bfc-42e2-9c3f-959596201da0-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:24:52 compute-1 nova_compute[225855]: 2026-01-20 14:24:52.204 225859 INFO nova.compute.manager [None req-dd11f51b-6073-426d-bbd2-06f1e3064eb6 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] [instance: 4bfa17e5-6bfc-42e2-9c3f-959596201da0] Terminating instance
Jan 20 14:24:52 compute-1 nova_compute[225855]: 2026-01-20 14:24:52.206 225859 DEBUG oslo_concurrency.lockutils [None req-dd11f51b-6073-426d-bbd2-06f1e3064eb6 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Acquiring lock "refresh_cache-4bfa17e5-6bfc-42e2-9c3f-959596201da0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:24:52 compute-1 nova_compute[225855]: 2026-01-20 14:24:52.206 225859 DEBUG oslo_concurrency.lockutils [None req-dd11f51b-6073-426d-bbd2-06f1e3064eb6 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Acquired lock "refresh_cache-4bfa17e5-6bfc-42e2-9c3f-959596201da0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:24:52 compute-1 nova_compute[225855]: 2026-01-20 14:24:52.206 225859 DEBUG nova.network.neutron [None req-dd11f51b-6073-426d-bbd2-06f1e3064eb6 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] [instance: 4bfa17e5-6bfc-42e2-9c3f-959596201da0] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 14:24:52 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:24:52 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:24:52 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:24:52.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:24:52 compute-1 nova_compute[225855]: 2026-01-20 14:24:52.520 225859 DEBUG nova.network.neutron [None req-dd11f51b-6073-426d-bbd2-06f1e3064eb6 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] [instance: 4bfa17e5-6bfc-42e2-9c3f-959596201da0] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 20 14:24:52 compute-1 nova_compute[225855]: 2026-01-20 14:24:52.711 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:24:52 compute-1 ceph-mon[81775]: pgmap v1002: 321 pgs: 321 active+clean; 348 MiB data, 426 MiB used, 21 GiB / 21 GiB avail; 79 KiB/s rd, 1.8 MiB/s wr, 116 op/s
Jan 20 14:24:52 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/3133120487' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:24:53 compute-1 nova_compute[225855]: 2026-01-20 14:24:53.159 225859 DEBUG nova.network.neutron [None req-dd11f51b-6073-426d-bbd2-06f1e3064eb6 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] [instance: 4bfa17e5-6bfc-42e2-9c3f-959596201da0] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:24:53 compute-1 nova_compute[225855]: 2026-01-20 14:24:53.173 225859 DEBUG oslo_concurrency.lockutils [None req-dd11f51b-6073-426d-bbd2-06f1e3064eb6 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Releasing lock "refresh_cache-4bfa17e5-6bfc-42e2-9c3f-959596201da0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:24:53 compute-1 nova_compute[225855]: 2026-01-20 14:24:53.174 225859 DEBUG nova.compute.manager [None req-dd11f51b-6073-426d-bbd2-06f1e3064eb6 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] [instance: 4bfa17e5-6bfc-42e2-9c3f-959596201da0] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 20 14:24:53 compute-1 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000007.scope: Deactivated successfully.
Jan 20 14:24:53 compute-1 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000007.scope: Consumed 15.635s CPU time.
Jan 20 14:24:53 compute-1 systemd-machined[194361]: Machine qemu-3-instance-00000007 terminated.
Jan 20 14:24:53 compute-1 nova_compute[225855]: 2026-01-20 14:24:53.394 225859 INFO nova.virt.libvirt.driver [-] [instance: 4bfa17e5-6bfc-42e2-9c3f-959596201da0] Instance destroyed successfully.
Jan 20 14:24:53 compute-1 nova_compute[225855]: 2026-01-20 14:24:53.394 225859 DEBUG nova.objects.instance [None req-dd11f51b-6073-426d-bbd2-06f1e3064eb6 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Lazy-loading 'resources' on Instance uuid 4bfa17e5-6bfc-42e2-9c3f-959596201da0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:24:53 compute-1 nova_compute[225855]: 2026-01-20 14:24:53.726 225859 INFO nova.virt.libvirt.driver [None req-dd11f51b-6073-426d-bbd2-06f1e3064eb6 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] [instance: 4bfa17e5-6bfc-42e2-9c3f-959596201da0] Deleting instance files /var/lib/nova/instances/4bfa17e5-6bfc-42e2-9c3f-959596201da0_del
Jan 20 14:24:53 compute-1 nova_compute[225855]: 2026-01-20 14:24:53.727 225859 INFO nova.virt.libvirt.driver [None req-dd11f51b-6073-426d-bbd2-06f1e3064eb6 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] [instance: 4bfa17e5-6bfc-42e2-9c3f-959596201da0] Deletion of /var/lib/nova/instances/4bfa17e5-6bfc-42e2-9c3f-959596201da0_del complete
Jan 20 14:24:53 compute-1 nova_compute[225855]: 2026-01-20 14:24:53.767 225859 INFO nova.compute.manager [None req-dd11f51b-6073-426d-bbd2-06f1e3064eb6 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] [instance: 4bfa17e5-6bfc-42e2-9c3f-959596201da0] Took 0.59 seconds to destroy the instance on the hypervisor.
Jan 20 14:24:53 compute-1 nova_compute[225855]: 2026-01-20 14:24:53.768 225859 DEBUG oslo.service.loopingcall [None req-dd11f51b-6073-426d-bbd2-06f1e3064eb6 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 20 14:24:53 compute-1 nova_compute[225855]: 2026-01-20 14:24:53.768 225859 DEBUG nova.compute.manager [-] [instance: 4bfa17e5-6bfc-42e2-9c3f-959596201da0] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 20 14:24:53 compute-1 nova_compute[225855]: 2026-01-20 14:24:53.768 225859 DEBUG nova.network.neutron [-] [instance: 4bfa17e5-6bfc-42e2-9c3f-959596201da0] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 20 14:24:53 compute-1 ceph-mon[81775]: pgmap v1003: 321 pgs: 321 active+clean; 273 MiB data, 399 MiB used, 21 GiB / 21 GiB avail; 66 KiB/s rd, 1.1 MiB/s wr, 97 op/s
Jan 20 14:24:53 compute-1 nova_compute[225855]: 2026-01-20 14:24:53.982 225859 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1768919078.9812706, 9dba01d4-4ecd-40eb-8d63-95dbdcd205cf => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:24:53 compute-1 nova_compute[225855]: 2026-01-20 14:24:53.982 225859 INFO nova.compute.manager [-] [instance: 9dba01d4-4ecd-40eb-8d63-95dbdcd205cf] VM Stopped (Lifecycle Event)
Jan 20 14:24:53 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:24:53 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:24:53 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:24:53.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:24:54 compute-1 nova_compute[225855]: 2026-01-20 14:24:54.003 225859 DEBUG nova.compute.manager [None req-35803451-f1fd-4697-b707-767fd02def70 - - - - - -] [instance: 9dba01d4-4ecd-40eb-8d63-95dbdcd205cf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:24:54 compute-1 nova_compute[225855]: 2026-01-20 14:24:54.075 225859 DEBUG nova.network.neutron [-] [instance: 4bfa17e5-6bfc-42e2-9c3f-959596201da0] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 20 14:24:54 compute-1 nova_compute[225855]: 2026-01-20 14:24:54.089 225859 DEBUG nova.network.neutron [-] [instance: 4bfa17e5-6bfc-42e2-9c3f-959596201da0] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:24:54 compute-1 nova_compute[225855]: 2026-01-20 14:24:54.115 225859 INFO nova.compute.manager [-] [instance: 4bfa17e5-6bfc-42e2-9c3f-959596201da0] Took 0.35 seconds to deallocate network for instance.
Jan 20 14:24:54 compute-1 nova_compute[225855]: 2026-01-20 14:24:54.176 225859 DEBUG oslo_concurrency.lockutils [None req-dd11f51b-6073-426d-bbd2-06f1e3064eb6 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:24:54 compute-1 nova_compute[225855]: 2026-01-20 14:24:54.177 225859 DEBUG oslo_concurrency.lockutils [None req-dd11f51b-6073-426d-bbd2-06f1e3064eb6 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:24:54 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:24:54 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:24:54 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:24:54.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:24:54 compute-1 nova_compute[225855]: 2026-01-20 14:24:54.272 225859 DEBUG oslo_concurrency.processutils [None req-dd11f51b-6073-426d-bbd2-06f1e3064eb6 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:24:54 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:24:54 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2429111377' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:24:54 compute-1 nova_compute[225855]: 2026-01-20 14:24:54.678 225859 DEBUG oslo_concurrency.processutils [None req-dd11f51b-6073-426d-bbd2-06f1e3064eb6 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.406s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:24:54 compute-1 nova_compute[225855]: 2026-01-20 14:24:54.685 225859 DEBUG nova.compute.provider_tree [None req-dd11f51b-6073-426d-bbd2-06f1e3064eb6 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:24:54 compute-1 nova_compute[225855]: 2026-01-20 14:24:54.706 225859 DEBUG nova.scheduler.client.report [None req-dd11f51b-6073-426d-bbd2-06f1e3064eb6 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:24:54 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:24:54 compute-1 nova_compute[225855]: 2026-01-20 14:24:54.752 225859 DEBUG oslo_concurrency.lockutils [None req-dd11f51b-6073-426d-bbd2-06f1e3064eb6 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.575s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:24:54 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/3059066805' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:24:54 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/2429111377' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:24:55 compute-1 podman[232243]: 2026-01-20 14:24:55.0636793 +0000 UTC m=+0.111776564 container health_status 72be95c12e041eb2cd1a16ad70dfa1391d9f03a5f3180a614ead9459dcfcd46f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 20 14:24:55 compute-1 nova_compute[225855]: 2026-01-20 14:24:55.112 225859 INFO nova.scheduler.client.report [None req-dd11f51b-6073-426d-bbd2-06f1e3064eb6 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Deleted allocations for instance 4bfa17e5-6bfc-42e2-9c3f-959596201da0
Jan 20 14:24:55 compute-1 nova_compute[225855]: 2026-01-20 14:24:55.450 225859 DEBUG oslo_concurrency.lockutils [None req-dd11f51b-6073-426d-bbd2-06f1e3064eb6 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Lock "4bfa17e5-6bfc-42e2-9c3f-959596201da0" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.250s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:24:55 compute-1 nova_compute[225855]: 2026-01-20 14:24:55.502 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:24:55 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:24:55 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:24:55 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:24:55.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:24:56 compute-1 ceph-mon[81775]: pgmap v1004: 321 pgs: 321 active+clean; 236 MiB data, 380 MiB used, 21 GiB / 21 GiB avail; 48 KiB/s rd, 856 KiB/s wr, 73 op/s
Jan 20 14:24:56 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:24:56 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:24:56 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:24:56.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:24:56 compute-1 sudo[232271]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:24:56 compute-1 sudo[232271]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:24:56 compute-1 sudo[232271]: pam_unix(sudo:session): session closed for user root
Jan 20 14:24:56 compute-1 sudo[232296]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:24:56 compute-1 sudo[232296]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:24:56 compute-1 sudo[232296]: pam_unix(sudo:session): session closed for user root
Jan 20 14:24:57 compute-1 nova_compute[225855]: 2026-01-20 14:24:57.713 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:24:57 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:24:57 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:24:57 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:24:57.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:24:58 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:24:58 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:24:58 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:24:58.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:24:58 compute-1 ceph-mon[81775]: pgmap v1005: 321 pgs: 321 active+clean; 167 MiB data, 330 MiB used, 21 GiB / 21 GiB avail; 57 KiB/s rd, 26 KiB/s wr, 85 op/s
Jan 20 14:24:59 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:25:00 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:25:00 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:25:00 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:24:59.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:25:00 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:25:00 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:25:00 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:25:00.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:25:00 compute-1 ceph-mon[81775]: pgmap v1006: 321 pgs: 321 active+clean; 167 MiB data, 323 MiB used, 21 GiB / 21 GiB avail; 942 KiB/s rd, 25 KiB/s wr, 117 op/s
Jan 20 14:25:00 compute-1 nova_compute[225855]: 2026-01-20 14:25:00.505 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:25:02 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:25:02 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:25:02 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:25:02.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:25:02 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:25:02 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:25:02 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:25:02.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:25:02 compute-1 ceph-mon[81775]: pgmap v1007: 321 pgs: 321 active+clean; 167 MiB data, 323 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 26 KiB/s wr, 137 op/s
Jan 20 14:25:02 compute-1 sshd-session[232325]: error: kex_exchange_identification: read: Connection reset by peer
Jan 20 14:25:02 compute-1 sshd-session[232325]: Connection reset by 176.120.22.52 port 30177
Jan 20 14:25:02 compute-1 nova_compute[225855]: 2026-01-20 14:25:02.715 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:25:04 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:25:04 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:25:04 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:25:04.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:25:04 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:25:04 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:25:04 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:25:04.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:25:04 compute-1 ceph-mon[81775]: pgmap v1008: 321 pgs: 321 active+clean; 167 MiB data, 323 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 16 KiB/s wr, 128 op/s
Jan 20 14:25:04 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:25:05 compute-1 nova_compute[225855]: 2026-01-20 14:25:05.507 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:25:05 compute-1 nova_compute[225855]: 2026-01-20 14:25:05.627 225859 DEBUG nova.virt.libvirt.driver [None req-c415941b-64ef-45ea-9ab6-315d5dd03f73 b9adeab2c2f5486e92ca8534ef11c720 a7bb8a09ecaa40e8980b2ed19afa279f - - default default] [instance: 85ec4052-1453-4c76-936e-bf76f2108416] Creating tmpfile /var/lib/nova/instances/tmpqbb_x95d to notify to other compute nodes that they should mount the same storage. _create_shared_storage_test_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10041
Jan 20 14:25:05 compute-1 nova_compute[225855]: 2026-01-20 14:25:05.629 225859 DEBUG nova.compute.manager [None req-c415941b-64ef-45ea-9ab6-315d5dd03f73 b9adeab2c2f5486e92ca8534ef11c720 a7bb8a09ecaa40e8980b2ed19afa279f - - default default] destination check data is LibvirtLiveMigrateData(bdms=<?>,block_migration=<?>,disk_available_mb=19456,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpqbb_x95d',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path=<?>,is_shared_block_storage=<?>,is_shared_instance_path=<?>,is_volume_backed=<?>,migration=<?>,old_vol_attachment_ids=<?>,serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) check_can_live_migrate_destination /usr/lib/python3.9/site-packages/nova/compute/manager.py:8476
Jan 20 14:25:06 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:25:06 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:25:06 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:25:06.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:25:06 compute-1 podman[232328]: 2026-01-20 14:25:06.042809849 +0000 UTC m=+0.081873139 container health_status 533cd8e0c3b8b7f910f15cef49e7f879ab46c12ce646a35628c15f13725eb500 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 20 14:25:06 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:25:06 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:25:06 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:25:06.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:25:06 compute-1 ceph-mon[81775]: pgmap v1009: 321 pgs: 321 active+clean; 167 MiB data, 323 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 112 op/s
Jan 20 14:25:07 compute-1 nova_compute[225855]: 2026-01-20 14:25:07.717 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:25:08 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:25:08 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:25:08 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:25:08.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:25:08 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:25:08 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:25:08 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:25:08.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:25:08 compute-1 nova_compute[225855]: 2026-01-20 14:25:08.393 225859 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1768919093.3918226, 4bfa17e5-6bfc-42e2-9c3f-959596201da0 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:25:08 compute-1 nova_compute[225855]: 2026-01-20 14:25:08.394 225859 INFO nova.compute.manager [-] [instance: 4bfa17e5-6bfc-42e2-9c3f-959596201da0] VM Stopped (Lifecycle Event)
Jan 20 14:25:08 compute-1 nova_compute[225855]: 2026-01-20 14:25:08.442 225859 DEBUG nova.compute.manager [None req-29572704-a725-414b-b1e8-f79e24237f0f - - - - - -] [instance: 4bfa17e5-6bfc-42e2-9c3f-959596201da0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:25:08 compute-1 ceph-mon[81775]: pgmap v1010: 321 pgs: 321 active+clean; 167 MiB data, 323 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 101 op/s
Jan 20 14:25:09 compute-1 nova_compute[225855]: 2026-01-20 14:25:09.507 225859 DEBUG nova.compute.manager [None req-c415941b-64ef-45ea-9ab6-315d5dd03f73 b9adeab2c2f5486e92ca8534ef11c720 a7bb8a09ecaa40e8980b2ed19afa279f - - default default] pre_live_migration data is LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=19456,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpqbb_x95d',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='85ec4052-1453-4c76-936e-bf76f2108416',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=True,migration=<?>,old_vol_attachment_ids=<?>,serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8604
Jan 20 14:25:09 compute-1 nova_compute[225855]: 2026-01-20 14:25:09.546 225859 DEBUG oslo_concurrency.lockutils [None req-c415941b-64ef-45ea-9ab6-315d5dd03f73 b9adeab2c2f5486e92ca8534ef11c720 a7bb8a09ecaa40e8980b2ed19afa279f - - default default] Acquiring lock "refresh_cache-85ec4052-1453-4c76-936e-bf76f2108416" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:25:09 compute-1 nova_compute[225855]: 2026-01-20 14:25:09.547 225859 DEBUG oslo_concurrency.lockutils [None req-c415941b-64ef-45ea-9ab6-315d5dd03f73 b9adeab2c2f5486e92ca8534ef11c720 a7bb8a09ecaa40e8980b2ed19afa279f - - default default] Acquired lock "refresh_cache-85ec4052-1453-4c76-936e-bf76f2108416" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:25:09 compute-1 nova_compute[225855]: 2026-01-20 14:25:09.547 225859 DEBUG nova.network.neutron [None req-c415941b-64ef-45ea-9ab6-315d5dd03f73 b9adeab2c2f5486e92ca8534ef11c720 a7bb8a09ecaa40e8980b2ed19afa279f - - default default] [instance: 85ec4052-1453-4c76-936e-bf76f2108416] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 14:25:09 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:25:10 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:25:10 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:25:10 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:25:10.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:25:10 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:25:10 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:25:10 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:25:10.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:25:10 compute-1 nova_compute[225855]: 2026-01-20 14:25:10.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:25:10 compute-1 nova_compute[225855]: 2026-01-20 14:25:10.509 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:25:11 compute-1 ceph-mon[81775]: pgmap v1011: 321 pgs: 321 active+clean; 167 MiB data, 323 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 11 KiB/s wr, 74 op/s
Jan 20 14:25:11 compute-1 nova_compute[225855]: 2026-01-20 14:25:11.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:25:11 compute-1 nova_compute[225855]: 2026-01-20 14:25:11.340 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 14:25:11 compute-1 nova_compute[225855]: 2026-01-20 14:25:11.391 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 20 14:25:11 compute-1 nova_compute[225855]: 2026-01-20 14:25:11.391 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:25:11 compute-1 nova_compute[225855]: 2026-01-20 14:25:11.926 225859 DEBUG nova.network.neutron [None req-c415941b-64ef-45ea-9ab6-315d5dd03f73 b9adeab2c2f5486e92ca8534ef11c720 a7bb8a09ecaa40e8980b2ed19afa279f - - default default] [instance: 85ec4052-1453-4c76-936e-bf76f2108416] Updating instance_info_cache with network_info: [{"id": "5146227f-80a8-47ae-a541-144b8dd24d4c", "address": "fa:16:3e:39:f1:69", "network": {"id": "6f01f500-b631-4cdb-ae71-b33b0ccfb1aa", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-1745233184-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "861a4f2b70b249afadeabfe85bda53a3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5146227f-80", "ovs_interfaceid": "5146227f-80a8-47ae-a541-144b8dd24d4c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:25:11 compute-1 nova_compute[225855]: 2026-01-20 14:25:11.946 225859 DEBUG oslo_concurrency.lockutils [None req-c415941b-64ef-45ea-9ab6-315d5dd03f73 b9adeab2c2f5486e92ca8534ef11c720 a7bb8a09ecaa40e8980b2ed19afa279f - - default default] Releasing lock "refresh_cache-85ec4052-1453-4c76-936e-bf76f2108416" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:25:11 compute-1 nova_compute[225855]: 2026-01-20 14:25:11.949 225859 DEBUG os_brick.utils [None req-c415941b-64ef-45ea-9ab6-315d5dd03f73 b9adeab2c2f5486e92ca8534ef11c720 a7bb8a09ecaa40e8980b2ed19afa279f - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.101', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-1.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Jan 20 14:25:11 compute-1 nova_compute[225855]: 2026-01-20 14:25:11.950 231081 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:25:11 compute-1 nova_compute[225855]: 2026-01-20 14:25:11.961 231081 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:25:11 compute-1 nova_compute[225855]: 2026-01-20 14:25:11.962 231081 DEBUG oslo.privsep.daemon [-] privsep: reply[8e0f4ee8-ac68-41a5-a3ea-98b95a71a39e]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:25:11 compute-1 nova_compute[225855]: 2026-01-20 14:25:11.963 231081 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:25:11 compute-1 nova_compute[225855]: 2026-01-20 14:25:11.970 231081 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:25:11 compute-1 nova_compute[225855]: 2026-01-20 14:25:11.971 231081 DEBUG oslo.privsep.daemon [-] privsep: reply[727b15b6-157f-4b71-bdbf-651febc2517b]: (4, ('InitiatorName=iqn.1994-05.com.redhat:1821ea3dc03d', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:25:11 compute-1 nova_compute[225855]: 2026-01-20 14:25:11.972 231081 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:25:11 compute-1 nova_compute[225855]: 2026-01-20 14:25:11.985 231081 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.013s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:25:11 compute-1 nova_compute[225855]: 2026-01-20 14:25:11.986 231081 DEBUG oslo.privsep.daemon [-] privsep: reply[202c920d-47c0-4d2a-a0fd-0f646582d71c]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:25:11 compute-1 nova_compute[225855]: 2026-01-20 14:25:11.987 231081 DEBUG oslo.privsep.daemon [-] privsep: reply[22c9fe1e-15b7-4193-9bb6-e7ca04846d29]: (4, '870b1f1c-f19c-477b-b282-ee6eeba50974') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:25:11 compute-1 nova_compute[225855]: 2026-01-20 14:25:11.988 225859 DEBUG oslo_concurrency.processutils [None req-c415941b-64ef-45ea-9ab6-315d5dd03f73 b9adeab2c2f5486e92ca8534ef11c720 a7bb8a09ecaa40e8980b2ed19afa279f - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:25:12 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:25:12 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:25:12 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:25:12.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:25:12 compute-1 nova_compute[225855]: 2026-01-20 14:25:12.019 225859 DEBUG oslo_concurrency.processutils [None req-c415941b-64ef-45ea-9ab6-315d5dd03f73 b9adeab2c2f5486e92ca8534ef11c720 a7bb8a09ecaa40e8980b2ed19afa279f - - default default] CMD "nvme version" returned: 0 in 0.031s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:25:12 compute-1 nova_compute[225855]: 2026-01-20 14:25:12.021 225859 DEBUG os_brick.initiator.connectors.lightos [None req-c415941b-64ef-45ea-9ab6-315d5dd03f73 b9adeab2c2f5486e92ca8534ef11c720 a7bb8a09ecaa40e8980b2ed19afa279f - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Jan 20 14:25:12 compute-1 nova_compute[225855]: 2026-01-20 14:25:12.021 225859 DEBUG os_brick.initiator.connectors.lightos [None req-c415941b-64ef-45ea-9ab6-315d5dd03f73 b9adeab2c2f5486e92ca8534ef11c720 a7bb8a09ecaa40e8980b2ed19afa279f - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Jan 20 14:25:12 compute-1 nova_compute[225855]: 2026-01-20 14:25:12.021 225859 DEBUG os_brick.initiator.connectors.lightos [None req-c415941b-64ef-45ea-9ab6-315d5dd03f73 b9adeab2c2f5486e92ca8534ef11c720 a7bb8a09ecaa40e8980b2ed19afa279f - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:5350774e-8b5e-4dba-80a9-92d405981c1d dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Jan 20 14:25:12 compute-1 nova_compute[225855]: 2026-01-20 14:25:12.022 225859 DEBUG os_brick.utils [None req-c415941b-64ef-45ea-9ab6-315d5dd03f73 b9adeab2c2f5486e92ca8534ef11c720 a7bb8a09ecaa40e8980b2ed19afa279f - - default default] <== get_connector_properties: return (72ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.101', 'host': 'compute-1.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:1821ea3dc03d', 'do_local_attach': False, 'nvme_hostid': '5350774e-8b5e-4dba-80a9-92d405981c1d', 'system uuid': '870b1f1c-f19c-477b-b282-ee6eeba50974', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:5350774e-8b5e-4dba-80a9-92d405981c1d', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Jan 20 14:25:12 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:25:12 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.002000056s ======
Jan 20 14:25:12 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:25:12.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000056s
Jan 20 14:25:12 compute-1 nova_compute[225855]: 2026-01-20 14:25:12.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:25:12 compute-1 nova_compute[225855]: 2026-01-20 14:25:12.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:25:12 compute-1 nova_compute[225855]: 2026-01-20 14:25:12.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:25:12 compute-1 nova_compute[225855]: 2026-01-20 14:25:12.499 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:25:12 compute-1 nova_compute[225855]: 2026-01-20 14:25:12.500 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:25:12 compute-1 nova_compute[225855]: 2026-01-20 14:25:12.500 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:25:12 compute-1 nova_compute[225855]: 2026-01-20 14:25:12.501 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 14:25:12 compute-1 nova_compute[225855]: 2026-01-20 14:25:12.501 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:25:12 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/176940240' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:25:12 compute-1 ceph-mon[81775]: pgmap v1012: 321 pgs: 321 active+clean; 176 MiB data, 331 MiB used, 21 GiB / 21 GiB avail; 1.2 MiB/s rd, 678 KiB/s wr, 56 op/s
Jan 20 14:25:12 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/4198693599' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:25:12 compute-1 nova_compute[225855]: 2026-01-20 14:25:12.761 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:25:12 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:25:12 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/713538746' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:25:12 compute-1 nova_compute[225855]: 2026-01-20 14:25:12.948 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:25:13 compute-1 nova_compute[225855]: 2026-01-20 14:25:13.258 225859 DEBUG nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] skipping disk for instance-00000006 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 14:25:13 compute-1 nova_compute[225855]: 2026-01-20 14:25:13.258 225859 DEBUG nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] skipping disk for instance-00000006 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 14:25:13 compute-1 nova_compute[225855]: 2026-01-20 14:25:13.509 225859 WARNING nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 14:25:13 compute-1 nova_compute[225855]: 2026-01-20 14:25:13.511 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4696MB free_disk=20.94263458251953GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 14:25:13 compute-1 nova_compute[225855]: 2026-01-20 14:25:13.511 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:25:13 compute-1 nova_compute[225855]: 2026-01-20 14:25:13.511 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:25:13 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 20 14:25:13 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1892182925' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 14:25:13 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 20 14:25:13 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1892182925' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 14:25:13 compute-1 nova_compute[225855]: 2026-01-20 14:25:13.548 225859 DEBUG nova.virt.libvirt.driver [None req-c415941b-64ef-45ea-9ab6-315d5dd03f73 b9adeab2c2f5486e92ca8534ef11c720 a7bb8a09ecaa40e8980b2ed19afa279f - - default default] [instance: 85ec4052-1453-4c76-936e-bf76f2108416] migrate_data in pre_live_migration: LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=19456,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpqbb_x95d',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='85ec4052-1453-4c76-936e-bf76f2108416',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=True,migration=<?>,old_vol_attachment_ids={67ec3b5b-23d2-4f8a-84b0-4ee1bda588af='7d9d6585-3801-4399-b7db-b7178b344eaa'},serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10827
Jan 20 14:25:13 compute-1 nova_compute[225855]: 2026-01-20 14:25:13.549 225859 DEBUG nova.virt.libvirt.driver [None req-c415941b-64ef-45ea-9ab6-315d5dd03f73 b9adeab2c2f5486e92ca8534ef11c720 a7bb8a09ecaa40e8980b2ed19afa279f - - default default] [instance: 85ec4052-1453-4c76-936e-bf76f2108416] Creating instance directory: /var/lib/nova/instances/85ec4052-1453-4c76-936e-bf76f2108416 pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10840
Jan 20 14:25:13 compute-1 nova_compute[225855]: 2026-01-20 14:25:13.550 225859 DEBUG nova.virt.libvirt.driver [None req-c415941b-64ef-45ea-9ab6-315d5dd03f73 b9adeab2c2f5486e92ca8534ef11c720 a7bb8a09ecaa40e8980b2ed19afa279f - - default default] [instance: 85ec4052-1453-4c76-936e-bf76f2108416] Ensure instance console log exists: /var/lib/nova/instances/85ec4052-1453-4c76-936e-bf76f2108416/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 20 14:25:13 compute-1 nova_compute[225855]: 2026-01-20 14:25:13.551 225859 DEBUG nova.virt.libvirt.driver [None req-c415941b-64ef-45ea-9ab6-315d5dd03f73 b9adeab2c2f5486e92ca8534ef11c720 a7bb8a09ecaa40e8980b2ed19afa279f - - default default] [instance: 85ec4052-1453-4c76-936e-bf76f2108416] Connecting volumes before live migration. pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10901
Jan 20 14:25:13 compute-1 nova_compute[225855]: 2026-01-20 14:25:13.554 225859 DEBUG nova.virt.libvirt.driver [None req-c415941b-64ef-45ea-9ab6-315d5dd03f73 b9adeab2c2f5486e92ca8534ef11c720 a7bb8a09ecaa40e8980b2ed19afa279f - - default default] [instance: 85ec4052-1453-4c76-936e-bf76f2108416] Plugging VIFs using destination host port bindings before live migration. _pre_live_migration_plug_vifs /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10794
Jan 20 14:25:13 compute-1 nova_compute[225855]: 2026-01-20 14:25:13.556 225859 DEBUG nova.virt.libvirt.vif [None req-c415941b-64ef-45ea-9ab6-315d5dd03f73 b9adeab2c2f5486e92ca8534ef11c720 a7bb8a09ecaa40e8980b2ed19afa279f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T14:24:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-LiveAutoBlockMigrationV225Test-server-2144632196',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-liveautoblockmigrationv225test-server-2144632196',id=12,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-20T14:24:57Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='861a4f2b70b249afadeabfe85bda53a3',ramdisk_id='',reservation_id='r-uk9a0taa',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-LiveAutoBlockMigrationV225Test-1568967339',owner_user_name='tempest-LiveAutoBlockMigrationV225Test-1568967339-project-member'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T14:24:57Z,user_data=None,user_id='aefb5652049e473a948c089d7c62ef1a',uuid=85ec4052-1453-4c76-936e-bf76f2108416,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5146227f-80a8-47ae-a541-144b8dd24d4c", "address": "fa:16:3e:39:f1:69", "network": {"id": "6f01f500-b631-4cdb-ae71-b33b0ccfb1aa", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-1745233184-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "861a4f2b70b249afadeabfe85bda53a3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap5146227f-80", "ovs_interfaceid": "5146227f-80a8-47ae-a541-144b8dd24d4c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 20 14:25:13 compute-1 nova_compute[225855]: 2026-01-20 14:25:13.556 225859 DEBUG nova.network.os_vif_util [None req-c415941b-64ef-45ea-9ab6-315d5dd03f73 b9adeab2c2f5486e92ca8534ef11c720 a7bb8a09ecaa40e8980b2ed19afa279f - - default default] Converting VIF {"id": "5146227f-80a8-47ae-a541-144b8dd24d4c", "address": "fa:16:3e:39:f1:69", "network": {"id": "6f01f500-b631-4cdb-ae71-b33b0ccfb1aa", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-1745233184-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "861a4f2b70b249afadeabfe85bda53a3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap5146227f-80", "ovs_interfaceid": "5146227f-80a8-47ae-a541-144b8dd24d4c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:25:13 compute-1 nova_compute[225855]: 2026-01-20 14:25:13.558 225859 DEBUG nova.network.os_vif_util [None req-c415941b-64ef-45ea-9ab6-315d5dd03f73 b9adeab2c2f5486e92ca8534ef11c720 a7bb8a09ecaa40e8980b2ed19afa279f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:39:f1:69,bridge_name='br-int',has_traffic_filtering=True,id=5146227f-80a8-47ae-a541-144b8dd24d4c,network=Network(6f01f500-b631-4cdb-ae71-b33b0ccfb1aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5146227f-80') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:25:13 compute-1 nova_compute[225855]: 2026-01-20 14:25:13.558 225859 DEBUG os_vif [None req-c415941b-64ef-45ea-9ab6-315d5dd03f73 b9adeab2c2f5486e92ca8534ef11c720 a7bb8a09ecaa40e8980b2ed19afa279f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:39:f1:69,bridge_name='br-int',has_traffic_filtering=True,id=5146227f-80a8-47ae-a541-144b8dd24d4c,network=Network(6f01f500-b631-4cdb-ae71-b33b0ccfb1aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5146227f-80') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 20 14:25:13 compute-1 nova_compute[225855]: 2026-01-20 14:25:13.559 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:25:13 compute-1 nova_compute[225855]: 2026-01-20 14:25:13.560 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:25:13 compute-1 nova_compute[225855]: 2026-01-20 14:25:13.561 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:25:13 compute-1 nova_compute[225855]: 2026-01-20 14:25:13.565 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:25:13 compute-1 nova_compute[225855]: 2026-01-20 14:25:13.566 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5146227f-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:25:13 compute-1 nova_compute[225855]: 2026-01-20 14:25:13.566 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap5146227f-80, col_values=(('external_ids', {'iface-id': '5146227f-80a8-47ae-a541-144b8dd24d4c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:39:f1:69', 'vm-uuid': '85ec4052-1453-4c76-936e-bf76f2108416'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:25:13 compute-1 NetworkManager[49104]: <info>  [1768919113.5702] manager: (tap5146227f-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/41)
Jan 20 14:25:13 compute-1 nova_compute[225855]: 2026-01-20 14:25:13.570 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Migration for instance 85ec4052-1453-4c76-936e-bf76f2108416 refers to another host's instance! _pair_instances_to_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:903
Jan 20 14:25:13 compute-1 nova_compute[225855]: 2026-01-20 14:25:13.572 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:25:13 compute-1 nova_compute[225855]: 2026-01-20 14:25:13.575 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 14:25:13 compute-1 nova_compute[225855]: 2026-01-20 14:25:13.578 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:25:13 compute-1 nova_compute[225855]: 2026-01-20 14:25:13.580 225859 INFO os_vif [None req-c415941b-64ef-45ea-9ab6-315d5dd03f73 b9adeab2c2f5486e92ca8534ef11c720 a7bb8a09ecaa40e8980b2ed19afa279f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:39:f1:69,bridge_name='br-int',has_traffic_filtering=True,id=5146227f-80a8-47ae-a541-144b8dd24d4c,network=Network(6f01f500-b631-4cdb-ae71-b33b0ccfb1aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5146227f-80')
Jan 20 14:25:13 compute-1 nova_compute[225855]: 2026-01-20 14:25:13.584 225859 DEBUG nova.virt.libvirt.driver [None req-c415941b-64ef-45ea-9ab6-315d5dd03f73 b9adeab2c2f5486e92ca8534ef11c720 a7bb8a09ecaa40e8980b2ed19afa279f - - default default] No dst_numa_info in migrate_data, no cores to power up in pre_live_migration. pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10954
Jan 20 14:25:13 compute-1 nova_compute[225855]: 2026-01-20 14:25:13.584 225859 DEBUG nova.compute.manager [None req-c415941b-64ef-45ea-9ab6-315d5dd03f73 b9adeab2c2f5486e92ca8534ef11c720 a7bb8a09ecaa40e8980b2ed19afa279f - - default default] driver pre_live_migration data is LibvirtLiveMigrateData(bdms=[LibvirtLiveMigrateBDMInfo],block_migration=False,disk_available_mb=19456,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpqbb_x95d',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='85ec4052-1453-4c76-936e-bf76f2108416',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=True,migration=<?>,old_vol_attachment_ids={67ec3b5b-23d2-4f8a-84b0-4ee1bda588af='7d9d6585-3801-4399-b7db-b7178b344eaa'},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8668
Jan 20 14:25:13 compute-1 nova_compute[225855]: 2026-01-20 14:25:13.622 225859 INFO nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] [instance: 85ec4052-1453-4c76-936e-bf76f2108416] Updating resource usage from migration 33512403-6150-4049-add3-d64336f6c893
Jan 20 14:25:13 compute-1 nova_compute[225855]: 2026-01-20 14:25:13.622 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] [instance: 85ec4052-1453-4c76-936e-bf76f2108416] Starting to track incoming migration 33512403-6150-4049-add3-d64336f6c893 with flavor 522deaab-a741-4dbb-932d-d8b13a211c33 _update_usage_from_migration /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1431
Jan 20 14:25:13 compute-1 nova_compute[225855]: 2026-01-20 14:25:13.766 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Instance d1be7a29-3496-40ab-b61f-694622b7453b actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 20 14:25:13 compute-1 nova_compute[225855]: 2026-01-20 14:25:13.823 225859 WARNING nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Instance 85ec4052-1453-4c76-936e-bf76f2108416 has been moved to another host compute-2.ctlplane.example.com(compute-2.ctlplane.example.com). There are allocations remaining against the source host that might need to be removed: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}.
Jan 20 14:25:13 compute-1 nova_compute[225855]: 2026-01-20 14:25:13.823 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 14:25:13 compute-1 nova_compute[225855]: 2026-01-20 14:25:13.824 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 14:25:14 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:25:14 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:25:14 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:25:14.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:25:14 compute-1 nova_compute[225855]: 2026-01-20 14:25:14.064 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:25:14 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:25:14 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:25:14 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:25:14.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:25:14 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:25:14 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3824416908' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:25:14 compute-1 nova_compute[225855]: 2026-01-20 14:25:14.542 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:25:14 compute-1 nova_compute[225855]: 2026-01-20 14:25:14.549 225859 DEBUG nova.compute.provider_tree [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:25:14 compute-1 nova_compute[225855]: 2026-01-20 14:25:14.581 225859 DEBUG nova.scheduler.client.report [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:25:14 compute-1 nova_compute[225855]: 2026-01-20 14:25:14.609 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 14:25:14 compute-1 nova_compute[225855]: 2026-01-20 14:25:14.609 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.097s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:25:14 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:25:15 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/713538746' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:25:15 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/3183489613' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:25:15 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/3926975021' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:25:15 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/3277421869' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:25:15 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/1892182925' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 14:25:15 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/1892182925' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 14:25:16 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:25:16 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:25:16 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:25:16.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:25:16 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:25:16 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:25:16 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:25:16.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:25:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:25:16.385 140354 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:25:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:25:16.385 140354 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:25:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:25:16.386 140354 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:25:16 compute-1 nova_compute[225855]: 2026-01-20 14:25:16.609 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:25:16 compute-1 nova_compute[225855]: 2026-01-20 14:25:16.609 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:25:16 compute-1 nova_compute[225855]: 2026-01-20 14:25:16.610 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:25:16 compute-1 nova_compute[225855]: 2026-01-20 14:25:16.610 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 14:25:17 compute-1 sudo[232406]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:25:17 compute-1 sudo[232406]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:25:17 compute-1 sudo[232406]: pam_unix(sudo:session): session closed for user root
Jan 20 14:25:17 compute-1 sudo[232431]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:25:17 compute-1 sudo[232431]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:25:17 compute-1 sudo[232431]: pam_unix(sudo:session): session closed for user root
Jan 20 14:25:17 compute-1 nova_compute[225855]: 2026-01-20 14:25:17.791 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:25:18 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:25:18 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:25:18 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:25:18.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:25:18 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:25:18 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:25:18 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:25:18.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:25:18 compute-1 nova_compute[225855]: 2026-01-20 14:25:18.569 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:25:19 compute-1 ceph-mon[81775]: pgmap v1013: 321 pgs: 321 active+clean; 206 MiB data, 341 MiB used, 21 GiB / 21 GiB avail; 616 KiB/s rd, 2.3 MiB/s wr, 75 op/s
Jan 20 14:25:19 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/3924992509' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:25:19 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/3824416908' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:25:19 compute-1 ceph-mon[81775]: pgmap v1014: 321 pgs: 321 active+clean; 234 MiB data, 355 MiB used, 21 GiB / 21 GiB avail; 330 KiB/s rd, 3.6 MiB/s wr, 82 op/s
Jan 20 14:25:19 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/132688509' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:25:19 compute-1 ceph-mon[81775]: pgmap v1015: 321 pgs: 321 active+clean; 293 MiB data, 391 MiB used, 21 GiB / 21 GiB avail; 361 KiB/s rd, 5.7 MiB/s wr, 123 op/s
Jan 20 14:25:19 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:25:20 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:25:20 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:25:20 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:25:20.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:25:20 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/2708667634' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:25:20 compute-1 ceph-mon[81775]: pgmap v1016: 321 pgs: 321 active+clean; 293 MiB data, 391 MiB used, 21 GiB / 21 GiB avail; 363 KiB/s rd, 5.7 MiB/s wr, 126 op/s
Jan 20 14:25:20 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/384893503' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:25:20 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:25:20 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:25:20 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:25:20.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:25:22 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:25:22 compute-1 ceph-mon[81775]: pgmap v1017: 321 pgs: 321 active+clean; 293 MiB data, 391 MiB used, 21 GiB / 21 GiB avail; 1.0 MiB/s rd, 5.7 MiB/s wr, 148 op/s
Jan 20 14:25:22 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:25:22 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:25:22.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:25:22 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:25:22 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:25:22 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:25:22.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:25:22 compute-1 nova_compute[225855]: 2026-01-20 14:25:22.833 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:25:23 compute-1 sudo[232459]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:25:23 compute-1 sudo[232459]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:25:23 compute-1 sudo[232459]: pam_unix(sudo:session): session closed for user root
Jan 20 14:25:23 compute-1 sudo[232484]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:25:23 compute-1 sudo[232484]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:25:23 compute-1 sudo[232484]: pam_unix(sudo:session): session closed for user root
Jan 20 14:25:23 compute-1 sudo[232509]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:25:23 compute-1 sudo[232509]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:25:23 compute-1 sudo[232509]: pam_unix(sudo:session): session closed for user root
Jan 20 14:25:23 compute-1 sudo[232534]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 20 14:25:23 compute-1 sudo[232534]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:25:23 compute-1 nova_compute[225855]: 2026-01-20 14:25:23.573 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:25:23 compute-1 nova_compute[225855]: 2026-01-20 14:25:23.913 225859 DEBUG nova.network.neutron [None req-c415941b-64ef-45ea-9ab6-315d5dd03f73 b9adeab2c2f5486e92ca8534ef11c720 a7bb8a09ecaa40e8980b2ed19afa279f - - default default] [instance: 85ec4052-1453-4c76-936e-bf76f2108416] Port 5146227f-80a8-47ae-a541-144b8dd24d4c updated with migration profile {'migrating_to': 'compute-1.ctlplane.example.com'} successfully _setup_migration_port_profile /usr/lib/python3.9/site-packages/nova/network/neutron.py:354
Jan 20 14:25:24 compute-1 sudo[232534]: pam_unix(sudo:session): session closed for user root
Jan 20 14:25:24 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:25:24 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:25:24 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:25:24.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:25:24 compute-1 ceph-mon[81775]: pgmap v1018: 321 pgs: 321 active+clean; 293 MiB data, 391 MiB used, 21 GiB / 21 GiB avail; 1.8 MiB/s rd, 5.0 MiB/s wr, 166 op/s
Jan 20 14:25:24 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:25:24 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 14:25:24 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:25:24 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 14:25:24 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 14:25:24 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:25:24 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:25:24 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:25:24 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:25:24.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:25:24 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:25:25 compute-1 nova_compute[225855]: 2026-01-20 14:25:25.109 225859 DEBUG nova.compute.manager [None req-c415941b-64ef-45ea-9ab6-315d5dd03f73 b9adeab2c2f5486e92ca8534ef11c720 a7bb8a09ecaa40e8980b2ed19afa279f - - default default] pre_live_migration result data is LibvirtLiveMigrateData(bdms=[LibvirtLiveMigrateBDMInfo],block_migration=False,disk_available_mb=19456,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpqbb_x95d',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='85ec4052-1453-4c76-936e-bf76f2108416',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=True,migration=<?>,old_vol_attachment_ids={67ec3b5b-23d2-4f8a-84b0-4ee1bda588af='7d9d6585-3801-4399-b7db-b7178b344eaa'},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=True) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8723
Jan 20 14:25:25 compute-1 kernel: tap5146227f-80: entered promiscuous mode
Jan 20 14:25:25 compute-1 NetworkManager[49104]: <info>  [1768919125.4581] manager: (tap5146227f-80): new Tun device (/org/freedesktop/NetworkManager/Devices/42)
Jan 20 14:25:25 compute-1 ovn_controller[130490]: 2026-01-20T14:25:25Z|00062|binding|INFO|Claiming lport 5146227f-80a8-47ae-a541-144b8dd24d4c for this additional chassis.
Jan 20 14:25:25 compute-1 ovn_controller[130490]: 2026-01-20T14:25:25Z|00063|binding|INFO|5146227f-80a8-47ae-a541-144b8dd24d4c: Claiming fa:16:3e:39:f1:69 10.100.0.4
Jan 20 14:25:25 compute-1 nova_compute[225855]: 2026-01-20 14:25:25.459 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:25:25 compute-1 nova_compute[225855]: 2026-01-20 14:25:25.475 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:25:25 compute-1 nova_compute[225855]: 2026-01-20 14:25:25.478 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:25:25 compute-1 ovn_controller[130490]: 2026-01-20T14:25:25Z|00064|binding|INFO|Setting lport 5146227f-80a8-47ae-a541-144b8dd24d4c ovn-installed in OVS
Jan 20 14:25:25 compute-1 nova_compute[225855]: 2026-01-20 14:25:25.480 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:25:25 compute-1 systemd-machined[194361]: New machine qemu-6-instance-0000000c.
Jan 20 14:25:25 compute-1 systemd[1]: Started Virtual Machine qemu-6-instance-0000000c.
Jan 20 14:25:25 compute-1 systemd-udevd[232610]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 14:25:25 compute-1 NetworkManager[49104]: <info>  [1768919125.5201] device (tap5146227f-80): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 20 14:25:25 compute-1 NetworkManager[49104]: <info>  [1768919125.5206] device (tap5146227f-80): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 20 14:25:25 compute-1 podman[232601]: 2026-01-20 14:25:25.5943187 +0000 UTC m=+0.103861552 container health_status 72be95c12e041eb2cd1a16ad70dfa1391d9f03a5f3180a614ead9459dcfcd46f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251202, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 20 14:25:26 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:25:26 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:25:26 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:25:26.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:25:26 compute-1 nova_compute[225855]: 2026-01-20 14:25:26.247 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768919126.2465813, 85ec4052-1453-4c76-936e-bf76f2108416 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:25:26 compute-1 nova_compute[225855]: 2026-01-20 14:25:26.247 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 85ec4052-1453-4c76-936e-bf76f2108416] VM Started (Lifecycle Event)
Jan 20 14:25:26 compute-1 nova_compute[225855]: 2026-01-20 14:25:26.269 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 85ec4052-1453-4c76-936e-bf76f2108416] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:25:26 compute-1 ceph-mon[81775]: pgmap v1019: 321 pgs: 321 active+clean; 293 MiB data, 391 MiB used, 21 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.4 MiB/s wr, 129 op/s
Jan 20 14:25:26 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:25:26 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:25:26 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:25:26.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:25:26 compute-1 nova_compute[225855]: 2026-01-20 14:25:26.741 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768919126.741106, 85ec4052-1453-4c76-936e-bf76f2108416 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:25:26 compute-1 nova_compute[225855]: 2026-01-20 14:25:26.742 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 85ec4052-1453-4c76-936e-bf76f2108416] VM Resumed (Lifecycle Event)
Jan 20 14:25:26 compute-1 nova_compute[225855]: 2026-01-20 14:25:26.803 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 85ec4052-1453-4c76-936e-bf76f2108416] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:25:26 compute-1 nova_compute[225855]: 2026-01-20 14:25:26.806 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 85ec4052-1453-4c76-936e-bf76f2108416] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: migrating, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:25:26 compute-1 nova_compute[225855]: 2026-01-20 14:25:26.841 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 85ec4052-1453-4c76-936e-bf76f2108416] During the sync_power process the instance has moved from host compute-2.ctlplane.example.com to host compute-1.ctlplane.example.com
Jan 20 14:25:27 compute-1 nova_compute[225855]: 2026-01-20 14:25:27.874 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:25:28 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:25:28 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:25:28 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:25:28.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:25:28 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:25:28 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:25:28 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:25:28.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:25:28 compute-1 ceph-mon[81775]: pgmap v1020: 321 pgs: 321 active+clean; 293 MiB data, 391 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 2.2 MiB/s wr, 111 op/s
Jan 20 14:25:28 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/949782716' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:25:28 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/3180814250' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:25:28 compute-1 ovn_controller[130490]: 2026-01-20T14:25:28Z|00065|binding|INFO|Claiming lport 5146227f-80a8-47ae-a541-144b8dd24d4c for this chassis.
Jan 20 14:25:28 compute-1 ovn_controller[130490]: 2026-01-20T14:25:28Z|00066|binding|INFO|5146227f-80a8-47ae-a541-144b8dd24d4c: Claiming fa:16:3e:39:f1:69 10.100.0.4
Jan 20 14:25:28 compute-1 ovn_controller[130490]: 2026-01-20T14:25:28Z|00067|binding|INFO|Setting lport 5146227f-80a8-47ae-a541-144b8dd24d4c up in Southbound
Jan 20 14:25:28 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:25:28.493 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:39:f1:69 10.100.0.4'], port_security=['fa:16:3e:39:f1:69 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '85ec4052-1453-4c76-936e-bf76f2108416', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6f01f500-b631-4cdb-ae71-b33b0ccfb1aa', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '861a4f2b70b249afadeabfe85bda53a3', 'neutron:revision_number': '11', 'neutron:security_group_ids': '31a0931a-1aaa-4760-9d26-94149371fd1b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9e54b11a-69dd-4260-a0b8-c84b36782857, chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], logical_port=5146227f-80a8-47ae-a541-144b8dd24d4c) old=Port_Binding(up=[False], additional_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:25:28 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:25:28.495 140354 INFO neutron.agent.ovn.metadata.agent [-] Port 5146227f-80a8-47ae-a541-144b8dd24d4c in datapath 6f01f500-b631-4cdb-ae71-b33b0ccfb1aa bound to our chassis
Jan 20 14:25:28 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:25:28.499 140354 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6f01f500-b631-4cdb-ae71-b33b0ccfb1aa
Jan 20 14:25:28 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:25:28.525 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[f3ce8bce-cdee-4c8d-8a2b-e265dbf7c556]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:25:28 compute-1 nova_compute[225855]: 2026-01-20 14:25:28.575 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:25:28 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:25:28.579 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[5364c84e-d87d-425b-a02a-4f2957e00fbf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:25:28 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:25:28.583 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[2c13eeac-acab-4f30-8afd-66a7f1792bed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:25:28 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:25:28.619 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[87bbc1bf-9b78-4f51-8d7a-480f944bf7be]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:25:28 compute-1 nova_compute[225855]: 2026-01-20 14:25:28.640 225859 INFO nova.compute.manager [None req-c415941b-64ef-45ea-9ab6-315d5dd03f73 b9adeab2c2f5486e92ca8534ef11c720 a7bb8a09ecaa40e8980b2ed19afa279f - - default default] [instance: 85ec4052-1453-4c76-936e-bf76f2108416] Post operation of migration started
Jan 20 14:25:28 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:25:28.643 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[ddbe3424-8792-4fb6-94b3-dc9424c9e9ce]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6f01f500-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9e:16:f0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 28, 'tx_packets': 6, 'rx_bytes': 1456, 'tx_bytes': 444, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 28, 'tx_packets': 6, 'rx_bytes': 1456, 'tx_bytes': 444, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 18], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 409098, 'reachable_time': 15863, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 232688, 'error': None, 'target': 'ovnmeta-6f01f500-b631-4cdb-ae71-b33b0ccfb1aa', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:25:28 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:25:28.665 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[24e718d5-f47e-4d28-b2f2-061dec3a03e1]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap6f01f500-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 409114, 'tstamp': 409114}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 232689, 'error': None, 'target': 'ovnmeta-6f01f500-b631-4cdb-ae71-b33b0ccfb1aa', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap6f01f500-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 409117, 'tstamp': 409117}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 232689, 'error': None, 'target': 'ovnmeta-6f01f500-b631-4cdb-ae71-b33b0ccfb1aa', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:25:28 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:25:28.666 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6f01f500-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:25:28 compute-1 nova_compute[225855]: 2026-01-20 14:25:28.668 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:25:28 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:25:28.669 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6f01f500-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:25:28 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:25:28.669 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:25:28 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:25:28.670 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6f01f500-b0, col_values=(('external_ids', {'iface-id': '428c2ef0-5c20-4d34-88ac-9a0d29a78f0e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:25:28 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:25:28.670 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:25:29 compute-1 nova_compute[225855]: 2026-01-20 14:25:29.017 225859 DEBUG oslo_concurrency.lockutils [None req-c415941b-64ef-45ea-9ab6-315d5dd03f73 b9adeab2c2f5486e92ca8534ef11c720 a7bb8a09ecaa40e8980b2ed19afa279f - - default default] Acquiring lock "refresh_cache-85ec4052-1453-4c76-936e-bf76f2108416" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:25:29 compute-1 nova_compute[225855]: 2026-01-20 14:25:29.017 225859 DEBUG oslo_concurrency.lockutils [None req-c415941b-64ef-45ea-9ab6-315d5dd03f73 b9adeab2c2f5486e92ca8534ef11c720 a7bb8a09ecaa40e8980b2ed19afa279f - - default default] Acquired lock "refresh_cache-85ec4052-1453-4c76-936e-bf76f2108416" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:25:29 compute-1 nova_compute[225855]: 2026-01-20 14:25:29.018 225859 DEBUG nova.network.neutron [None req-c415941b-64ef-45ea-9ab6-315d5dd03f73 b9adeab2c2f5486e92ca8534ef11c720 a7bb8a09ecaa40e8980b2ed19afa279f - - default default] [instance: 85ec4052-1453-4c76-936e-bf76f2108416] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 14:25:29 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:25:30 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:25:30 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:25:30 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:25:30.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:25:30 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:25:30 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:25:30 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:25:30.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:25:30 compute-1 ceph-mon[81775]: pgmap v1021: 321 pgs: 321 active+clean; 293 MiB data, 391 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 72 op/s
Jan 20 14:25:31 compute-1 sudo[232691]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:25:31 compute-1 sudo[232691]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:25:31 compute-1 sudo[232691]: pam_unix(sudo:session): session closed for user root
Jan 20 14:25:31 compute-1 sudo[232716]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 14:25:31 compute-1 sudo[232716]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:25:31 compute-1 sudo[232716]: pam_unix(sudo:session): session closed for user root
Jan 20 14:25:31 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:25:31 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:25:31 compute-1 ceph-mon[81775]: pgmap v1022: 321 pgs: 321 active+clean; 293 MiB data, 391 MiB used, 21 GiB / 21 GiB avail; 2.4 MiB/s rd, 11 KiB/s wr, 93 op/s
Jan 20 14:25:31 compute-1 nova_compute[225855]: 2026-01-20 14:25:31.960 225859 DEBUG nova.network.neutron [None req-c415941b-64ef-45ea-9ab6-315d5dd03f73 b9adeab2c2f5486e92ca8534ef11c720 a7bb8a09ecaa40e8980b2ed19afa279f - - default default] [instance: 85ec4052-1453-4c76-936e-bf76f2108416] Updating instance_info_cache with network_info: [{"id": "5146227f-80a8-47ae-a541-144b8dd24d4c", "address": "fa:16:3e:39:f1:69", "network": {"id": "6f01f500-b631-4cdb-ae71-b33b0ccfb1aa", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-1745233184-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "861a4f2b70b249afadeabfe85bda53a3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5146227f-80", "ovs_interfaceid": "5146227f-80a8-47ae-a541-144b8dd24d4c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:25:31 compute-1 nova_compute[225855]: 2026-01-20 14:25:31.983 225859 DEBUG oslo_concurrency.lockutils [None req-c415941b-64ef-45ea-9ab6-315d5dd03f73 b9adeab2c2f5486e92ca8534ef11c720 a7bb8a09ecaa40e8980b2ed19afa279f - - default default] Releasing lock "refresh_cache-85ec4052-1453-4c76-936e-bf76f2108416" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:25:32 compute-1 nova_compute[225855]: 2026-01-20 14:25:32.000 225859 DEBUG oslo_concurrency.lockutils [None req-c415941b-64ef-45ea-9ab6-315d5dd03f73 b9adeab2c2f5486e92ca8534ef11c720 a7bb8a09ecaa40e8980b2ed19afa279f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:25:32 compute-1 nova_compute[225855]: 2026-01-20 14:25:32.000 225859 DEBUG oslo_concurrency.lockutils [None req-c415941b-64ef-45ea-9ab6-315d5dd03f73 b9adeab2c2f5486e92ca8534ef11c720 a7bb8a09ecaa40e8980b2ed19afa279f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:25:32 compute-1 nova_compute[225855]: 2026-01-20 14:25:32.000 225859 DEBUG oslo_concurrency.lockutils [None req-c415941b-64ef-45ea-9ab6-315d5dd03f73 b9adeab2c2f5486e92ca8534ef11c720 a7bb8a09ecaa40e8980b2ed19afa279f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:25:32 compute-1 nova_compute[225855]: 2026-01-20 14:25:32.005 225859 INFO nova.virt.libvirt.driver [None req-c415941b-64ef-45ea-9ab6-315d5dd03f73 b9adeab2c2f5486e92ca8534ef11c720 a7bb8a09ecaa40e8980b2ed19afa279f - - default default] [instance: 85ec4052-1453-4c76-936e-bf76f2108416] Sending announce-self command to QEMU monitor. Attempt 1 of 3
Jan 20 14:25:32 compute-1 virtqemud[225396]: Domain id=6 name='instance-0000000c' uuid=85ec4052-1453-4c76-936e-bf76f2108416 is tainted: custom-monitor
Jan 20 14:25:32 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:25:32 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:25:32 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:25:32.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:25:32 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:25:32 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:25:32 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:25:32.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:25:32 compute-1 nova_compute[225855]: 2026-01-20 14:25:32.922 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:25:33 compute-1 nova_compute[225855]: 2026-01-20 14:25:33.014 225859 INFO nova.virt.libvirt.driver [None req-c415941b-64ef-45ea-9ab6-315d5dd03f73 b9adeab2c2f5486e92ca8534ef11c720 a7bb8a09ecaa40e8980b2ed19afa279f - - default default] [instance: 85ec4052-1453-4c76-936e-bf76f2108416] Sending announce-self command to QEMU monitor. Attempt 2 of 3
Jan 20 14:25:33 compute-1 nova_compute[225855]: 2026-01-20 14:25:33.577 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:25:34 compute-1 nova_compute[225855]: 2026-01-20 14:25:34.023 225859 INFO nova.virt.libvirt.driver [None req-c415941b-64ef-45ea-9ab6-315d5dd03f73 b9adeab2c2f5486e92ca8534ef11c720 a7bb8a09ecaa40e8980b2ed19afa279f - - default default] [instance: 85ec4052-1453-4c76-936e-bf76f2108416] Sending announce-self command to QEMU monitor. Attempt 3 of 3
Jan 20 14:25:34 compute-1 nova_compute[225855]: 2026-01-20 14:25:34.030 225859 DEBUG nova.compute.manager [None req-c415941b-64ef-45ea-9ab6-315d5dd03f73 b9adeab2c2f5486e92ca8534ef11c720 a7bb8a09ecaa40e8980b2ed19afa279f - - default default] [instance: 85ec4052-1453-4c76-936e-bf76f2108416] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:25:34 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:25:34 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:25:34 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:25:34.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:25:34 compute-1 nova_compute[225855]: 2026-01-20 14:25:34.058 225859 DEBUG nova.objects.instance [None req-c415941b-64ef-45ea-9ab6-315d5dd03f73 b9adeab2c2f5486e92ca8534ef11c720 a7bb8a09ecaa40e8980b2ed19afa279f - - default default] [instance: 85ec4052-1453-4c76-936e-bf76f2108416] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Jan 20 14:25:34 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:25:34 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:25:34 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:25:34.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:25:34 compute-1 ceph-mon[81775]: pgmap v1023: 321 pgs: 321 active+clean; 301 MiB data, 391 MiB used, 21 GiB / 21 GiB avail; 2.5 MiB/s rd, 626 KiB/s wr, 108 op/s
Jan 20 14:25:34 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:25:35 compute-1 nova_compute[225855]: 2026-01-20 14:25:35.511 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:25:35 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:25:35.510 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=6, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '12:bb:42', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '06:92:24:f7:15:56'}, ipsec=False) old=SB_Global(nb_cfg=5) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:25:35 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:25:35.512 140354 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 20 14:25:36 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:25:36 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:25:36 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:25:36.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:25:36 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:25:36 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:25:36 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:25:36.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:25:36 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:25:36.514 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5ffd4ac3-9266-4927-98ad-20a17782c725, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '6'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:25:36 compute-1 ceph-mon[81775]: pgmap v1024: 321 pgs: 321 active+clean; 304 MiB data, 397 MiB used, 21 GiB / 21 GiB avail; 2.1 MiB/s rd, 1.2 MiB/s wr, 107 op/s
Jan 20 14:25:36 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/1380560990' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:25:36 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/912185248' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:25:37 compute-1 podman[232744]: 2026-01-20 14:25:37.057265834 +0000 UTC m=+0.095847699 container health_status 533cd8e0c3b8b7f910f15cef49e7f879ab46c12ce646a35628c15f13725eb500 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible)
Jan 20 14:25:37 compute-1 sudo[232764]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:25:37 compute-1 sudo[232764]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:25:37 compute-1 sudo[232764]: pam_unix(sudo:session): session closed for user root
Jan 20 14:25:37 compute-1 sudo[232789]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:25:37 compute-1 sudo[232789]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:25:37 compute-1 sudo[232789]: pam_unix(sudo:session): session closed for user root
Jan 20 14:25:37 compute-1 nova_compute[225855]: 2026-01-20 14:25:37.958 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:25:38 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:25:38 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:25:38 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:25:38.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:25:38 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:25:38 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:25:38 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:25:38.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:25:38 compute-1 ceph-mon[81775]: pgmap v1025: 321 pgs: 321 active+clean; 326 MiB data, 434 MiB used, 21 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 142 op/s
Jan 20 14:25:38 compute-1 nova_compute[225855]: 2026-01-20 14:25:38.579 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:25:39 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:25:40 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:25:40 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:25:40 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:25:40.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:25:40 compute-1 nova_compute[225855]: 2026-01-20 14:25:40.179 225859 DEBUG nova.virt.libvirt.driver [None req-0d8e3b5b-c177-4fbf-8037-7f1dc9763eee b9adeab2c2f5486e92ca8534ef11c720 a7bb8a09ecaa40e8980b2ed19afa279f - - default default] [instance: 85ec4052-1453-4c76-936e-bf76f2108416] Check if temp file /var/lib/nova/instances/tmpux25rhay exists to indicate shared storage is being used for migration. Exists? False _check_shared_storage_test_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10065
Jan 20 14:25:40 compute-1 nova_compute[225855]: 2026-01-20 14:25:40.179 225859 DEBUG nova.compute.manager [None req-0d8e3b5b-c177-4fbf-8037-7f1dc9763eee b9adeab2c2f5486e92ca8534ef11c720 a7bb8a09ecaa40e8980b2ed19afa279f - - default default] source check data is LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=19456,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpux25rhay',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='85ec4052-1453-4c76-936e-bf76f2108416',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=True,migration=<?>,old_vol_attachment_ids=<?>,serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) check_can_live_migrate_source /usr/lib/python3.9/site-packages/nova/compute/manager.py:8587
Jan 20 14:25:40 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:25:40 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:25:40 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:25:40.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:25:40 compute-1 ceph-mon[81775]: pgmap v1026: 321 pgs: 321 active+clean; 326 MiB data, 434 MiB used, 21 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 142 op/s
Jan 20 14:25:42 compute-1 ceph-mon[81775]: pgmap v1027: 321 pgs: 321 active+clean; 326 MiB data, 434 MiB used, 21 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 141 op/s
Jan 20 14:25:42 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:25:42 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:25:42 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:25:42.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:25:42 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:25:42 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:25:42 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:25:42.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:25:42 compute-1 nova_compute[225855]: 2026-01-20 14:25:42.995 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:25:43 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e140 e140: 3 total, 3 up, 3 in
Jan 20 14:25:43 compute-1 nova_compute[225855]: 2026-01-20 14:25:43.582 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:25:44 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:25:44 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:25:44 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:25:44.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:25:44 compute-1 ceph-mon[81775]: osdmap e140: 3 total, 3 up, 3 in
Jan 20 14:25:44 compute-1 ceph-mon[81775]: pgmap v1029: 321 pgs: 321 active+clean; 326 MiB data, 434 MiB used, 21 GiB / 21 GiB avail; 1.1 MiB/s rd, 1.9 MiB/s wr, 96 op/s
Jan 20 14:25:44 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:25:44 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:25:44 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:25:44.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:25:44 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e141 e141: 3 total, 3 up, 3 in
Jan 20 14:25:44 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:25:45 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e142 e142: 3 total, 3 up, 3 in
Jan 20 14:25:45 compute-1 ceph-mon[81775]: osdmap e141: 3 total, 3 up, 3 in
Jan 20 14:25:46 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:25:46 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:25:46 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:25:46.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:25:46 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:25:46 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:25:46 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:25:46.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:25:46 compute-1 ceph-mon[81775]: pgmap v1031: 321 pgs: 321 active+clean; 358 MiB data, 455 MiB used, 21 GiB / 21 GiB avail; 1.2 MiB/s rd, 2.6 MiB/s wr, 69 op/s
Jan 20 14:25:46 compute-1 ceph-mon[81775]: osdmap e142: 3 total, 3 up, 3 in
Jan 20 14:25:46 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/3192772719' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:25:47 compute-1 nova_compute[225855]: 2026-01-20 14:25:47.315 225859 DEBUG nova.compute.manager [req-4ec22c54-7cd7-4d21-9615-1e4215960bb5 req-8e6e7d34-a61c-4e45-b677-0d4f3516dfca 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 85ec4052-1453-4c76-936e-bf76f2108416] Received event network-vif-unplugged-5146227f-80a8-47ae-a541-144b8dd24d4c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:25:47 compute-1 nova_compute[225855]: 2026-01-20 14:25:47.316 225859 DEBUG oslo_concurrency.lockutils [req-4ec22c54-7cd7-4d21-9615-1e4215960bb5 req-8e6e7d34-a61c-4e45-b677-0d4f3516dfca 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "85ec4052-1453-4c76-936e-bf76f2108416-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:25:47 compute-1 nova_compute[225855]: 2026-01-20 14:25:47.316 225859 DEBUG oslo_concurrency.lockutils [req-4ec22c54-7cd7-4d21-9615-1e4215960bb5 req-8e6e7d34-a61c-4e45-b677-0d4f3516dfca 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "85ec4052-1453-4c76-936e-bf76f2108416-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:25:47 compute-1 nova_compute[225855]: 2026-01-20 14:25:47.317 225859 DEBUG oslo_concurrency.lockutils [req-4ec22c54-7cd7-4d21-9615-1e4215960bb5 req-8e6e7d34-a61c-4e45-b677-0d4f3516dfca 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "85ec4052-1453-4c76-936e-bf76f2108416-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:25:47 compute-1 nova_compute[225855]: 2026-01-20 14:25:47.317 225859 DEBUG nova.compute.manager [req-4ec22c54-7cd7-4d21-9615-1e4215960bb5 req-8e6e7d34-a61c-4e45-b677-0d4f3516dfca 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 85ec4052-1453-4c76-936e-bf76f2108416] No waiting events found dispatching network-vif-unplugged-5146227f-80a8-47ae-a541-144b8dd24d4c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:25:47 compute-1 nova_compute[225855]: 2026-01-20 14:25:47.317 225859 DEBUG nova.compute.manager [req-4ec22c54-7cd7-4d21-9615-1e4215960bb5 req-8e6e7d34-a61c-4e45-b677-0d4f3516dfca 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 85ec4052-1453-4c76-936e-bf76f2108416] Received event network-vif-unplugged-5146227f-80a8-47ae-a541-144b8dd24d4c for instance with task_state migrating. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 20 14:25:48 compute-1 nova_compute[225855]: 2026-01-20 14:25:48.018 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:25:48 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:25:48 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:25:48 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:25:48.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:25:48 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:25:48 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:25:48 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:25:48.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:25:48 compute-1 ceph-mon[81775]: pgmap v1033: 321 pgs: 321 active+clean; 436 MiB data, 523 MiB used, 20 GiB / 21 GiB avail; 8.4 MiB/s rd, 12 MiB/s wr, 293 op/s
Jan 20 14:25:48 compute-1 nova_compute[225855]: 2026-01-20 14:25:48.584 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:25:48 compute-1 nova_compute[225855]: 2026-01-20 14:25:48.841 225859 INFO nova.compute.manager [None req-0d8e3b5b-c177-4fbf-8037-7f1dc9763eee b9adeab2c2f5486e92ca8534ef11c720 a7bb8a09ecaa40e8980b2ed19afa279f - - default default] [instance: 85ec4052-1453-4c76-936e-bf76f2108416] Took 6.63 seconds for pre_live_migration on destination host compute-2.ctlplane.example.com.
Jan 20 14:25:48 compute-1 nova_compute[225855]: 2026-01-20 14:25:48.842 225859 DEBUG nova.compute.manager [None req-0d8e3b5b-c177-4fbf-8037-7f1dc9763eee b9adeab2c2f5486e92ca8534ef11c720 a7bb8a09ecaa40e8980b2ed19afa279f - - default default] [instance: 85ec4052-1453-4c76-936e-bf76f2108416] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 20 14:25:48 compute-1 nova_compute[225855]: 2026-01-20 14:25:48.860 225859 DEBUG nova.compute.manager [None req-0d8e3b5b-c177-4fbf-8037-7f1dc9763eee b9adeab2c2f5486e92ca8534ef11c720 a7bb8a09ecaa40e8980b2ed19afa279f - - default default] live_migration data is LibvirtLiveMigrateData(bdms=[LibvirtLiveMigrateBDMInfo],block_migration=False,disk_available_mb=19456,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpux25rhay',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='85ec4052-1453-4c76-936e-bf76f2108416',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=True,migration=Migration(04577902-c221-41db-927f-09d574c55fcd),old_vol_attachment_ids={67ec3b5b-23d2-4f8a-84b0-4ee1bda588af='ae236ab2-4596-49ae-ba58-ece541fcbf59'},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=True) _do_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8939
Jan 20 14:25:48 compute-1 nova_compute[225855]: 2026-01-20 14:25:48.865 225859 DEBUG nova.objects.instance [None req-0d8e3b5b-c177-4fbf-8037-7f1dc9763eee b9adeab2c2f5486e92ca8534ef11c720 a7bb8a09ecaa40e8980b2ed19afa279f - - default default] Lazy-loading 'migration_context' on Instance uuid 85ec4052-1453-4c76-936e-bf76f2108416 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:25:48 compute-1 nova_compute[225855]: 2026-01-20 14:25:48.866 225859 DEBUG nova.virt.libvirt.driver [None req-0d8e3b5b-c177-4fbf-8037-7f1dc9763eee b9adeab2c2f5486e92ca8534ef11c720 a7bb8a09ecaa40e8980b2ed19afa279f - - default default] [instance: 85ec4052-1453-4c76-936e-bf76f2108416] Starting monitoring of live migration _live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10639
Jan 20 14:25:48 compute-1 nova_compute[225855]: 2026-01-20 14:25:48.868 225859 DEBUG nova.virt.libvirt.driver [None req-0d8e3b5b-c177-4fbf-8037-7f1dc9763eee b9adeab2c2f5486e92ca8534ef11c720 a7bb8a09ecaa40e8980b2ed19afa279f - - default default] [instance: 85ec4052-1453-4c76-936e-bf76f2108416] Operation thread is still running _live_migration_monitor /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10440
Jan 20 14:25:48 compute-1 nova_compute[225855]: 2026-01-20 14:25:48.869 225859 DEBUG nova.virt.libvirt.driver [None req-0d8e3b5b-c177-4fbf-8037-7f1dc9763eee b9adeab2c2f5486e92ca8534ef11c720 a7bb8a09ecaa40e8980b2ed19afa279f - - default default] [instance: 85ec4052-1453-4c76-936e-bf76f2108416] Migration not running yet _live_migration_monitor /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10449
Jan 20 14:25:48 compute-1 nova_compute[225855]: 2026-01-20 14:25:48.888 225859 DEBUG nova.virt.libvirt.migration [None req-0d8e3b5b-c177-4fbf-8037-7f1dc9763eee b9adeab2c2f5486e92ca8534ef11c720 a7bb8a09ecaa40e8980b2ed19afa279f - - default default] Find same serial number: pos=1, serial=67ec3b5b-23d2-4f8a-84b0-4ee1bda588af _update_volume_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:242
Jan 20 14:25:48 compute-1 nova_compute[225855]: 2026-01-20 14:25:48.890 225859 DEBUG nova.virt.libvirt.vif [None req-0d8e3b5b-c177-4fbf-8037-7f1dc9763eee b9adeab2c2f5486e92ca8534ef11c720 a7bb8a09ecaa40e8980b2ed19afa279f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-20T14:24:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-LiveAutoBlockMigrationV225Test-server-2144632196',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-liveautoblockmigrationv225test-server-2144632196',id=12,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-20T14:24:57Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='861a4f2b70b249afadeabfe85bda53a3',ramdisk_id='',reservation_id='r-uk9a0taa',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-LiveAutoBlockMigrationV225Test-1568967339',owner_user_name='tempest-LiveAutoBlockMigrationV225Test-1568967339-project-member'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T14:25:34Z,user_data=None,user_id='aefb5652049e473a948c089d7c62ef1a',uuid=85ec4052-1453-4c76-936e-bf76f2108416,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5146227f-80a8-47ae-a541-144b8dd24d4c", "address": "fa:16:3e:39:f1:69", "network": {"id": "6f01f500-b631-4cdb-ae71-b33b0ccfb1aa", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-1745233184-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "861a4f2b70b249afadeabfe85bda53a3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap5146227f-80", "ovs_interfaceid": "5146227f-80a8-47ae-a541-144b8dd24d4c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 20 14:25:48 compute-1 nova_compute[225855]: 2026-01-20 14:25:48.890 225859 DEBUG nova.network.os_vif_util [None req-0d8e3b5b-c177-4fbf-8037-7f1dc9763eee b9adeab2c2f5486e92ca8534ef11c720 a7bb8a09ecaa40e8980b2ed19afa279f - - default default] Converting VIF {"id": "5146227f-80a8-47ae-a541-144b8dd24d4c", "address": "fa:16:3e:39:f1:69", "network": {"id": "6f01f500-b631-4cdb-ae71-b33b0ccfb1aa", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-1745233184-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "861a4f2b70b249afadeabfe85bda53a3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap5146227f-80", "ovs_interfaceid": "5146227f-80a8-47ae-a541-144b8dd24d4c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:25:48 compute-1 nova_compute[225855]: 2026-01-20 14:25:48.892 225859 DEBUG nova.network.os_vif_util [None req-0d8e3b5b-c177-4fbf-8037-7f1dc9763eee b9adeab2c2f5486e92ca8534ef11c720 a7bb8a09ecaa40e8980b2ed19afa279f - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:39:f1:69,bridge_name='br-int',has_traffic_filtering=True,id=5146227f-80a8-47ae-a541-144b8dd24d4c,network=Network(6f01f500-b631-4cdb-ae71-b33b0ccfb1aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5146227f-80') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:25:48 compute-1 nova_compute[225855]: 2026-01-20 14:25:48.893 225859 DEBUG nova.virt.libvirt.migration [None req-0d8e3b5b-c177-4fbf-8037-7f1dc9763eee b9adeab2c2f5486e92ca8534ef11c720 a7bb8a09ecaa40e8980b2ed19afa279f - - default default] [instance: 85ec4052-1453-4c76-936e-bf76f2108416] Updating guest XML with vif config: <interface type="ethernet">
Jan 20 14:25:48 compute-1 nova_compute[225855]:   <mac address="fa:16:3e:39:f1:69"/>
Jan 20 14:25:48 compute-1 nova_compute[225855]:   <model type="virtio"/>
Jan 20 14:25:48 compute-1 nova_compute[225855]:   <driver name="vhost" rx_queue_size="512"/>
Jan 20 14:25:48 compute-1 nova_compute[225855]:   <mtu size="1442"/>
Jan 20 14:25:48 compute-1 nova_compute[225855]:   <target dev="tap5146227f-80"/>
Jan 20 14:25:48 compute-1 nova_compute[225855]: </interface>
Jan 20 14:25:48 compute-1 nova_compute[225855]:  _update_vif_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:388
Jan 20 14:25:48 compute-1 nova_compute[225855]: 2026-01-20 14:25:48.894 225859 DEBUG nova.virt.libvirt.driver [None req-0d8e3b5b-c177-4fbf-8037-7f1dc9763eee b9adeab2c2f5486e92ca8534ef11c720 a7bb8a09ecaa40e8980b2ed19afa279f - - default default] [instance: 85ec4052-1453-4c76-936e-bf76f2108416] About to invoke the migrate API _live_migration_operation /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10272
Jan 20 14:25:49 compute-1 nova_compute[225855]: 2026-01-20 14:25:49.371 225859 DEBUG nova.virt.libvirt.migration [None req-0d8e3b5b-c177-4fbf-8037-7f1dc9763eee b9adeab2c2f5486e92ca8534ef11c720 a7bb8a09ecaa40e8980b2ed19afa279f - - default default] [instance: 85ec4052-1453-4c76-936e-bf76f2108416] Current None elapsed 0 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512
Jan 20 14:25:49 compute-1 nova_compute[225855]: 2026-01-20 14:25:49.372 225859 INFO nova.virt.libvirt.migration [None req-0d8e3b5b-c177-4fbf-8037-7f1dc9763eee b9adeab2c2f5486e92ca8534ef11c720 a7bb8a09ecaa40e8980b2ed19afa279f - - default default] [instance: 85ec4052-1453-4c76-936e-bf76f2108416] Increasing downtime to 50 ms after 0 sec elapsed time
Jan 20 14:25:49 compute-1 nova_compute[225855]: 2026-01-20 14:25:49.435 225859 INFO nova.virt.libvirt.driver [None req-0d8e3b5b-c177-4fbf-8037-7f1dc9763eee b9adeab2c2f5486e92ca8534ef11c720 a7bb8a09ecaa40e8980b2ed19afa279f - - default default] [instance: 85ec4052-1453-4c76-936e-bf76f2108416] Migration running for 0 secs, memory 100% remaining (bytes processed=0, remaining=0, total=0); disk 100% remaining (bytes processed=0, remaining=0, total=0).
Jan 20 14:25:49 compute-1 nova_compute[225855]: 2026-01-20 14:25:49.479 225859 DEBUG nova.compute.manager [req-471a0a38-c54e-40d1-ae3f-213103a7bbb2 req-76b15a7f-12f2-44a9-8455-138e81acea5c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 85ec4052-1453-4c76-936e-bf76f2108416] Received event network-vif-plugged-5146227f-80a8-47ae-a541-144b8dd24d4c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:25:49 compute-1 nova_compute[225855]: 2026-01-20 14:25:49.480 225859 DEBUG oslo_concurrency.lockutils [req-471a0a38-c54e-40d1-ae3f-213103a7bbb2 req-76b15a7f-12f2-44a9-8455-138e81acea5c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "85ec4052-1453-4c76-936e-bf76f2108416-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:25:49 compute-1 nova_compute[225855]: 2026-01-20 14:25:49.480 225859 DEBUG oslo_concurrency.lockutils [req-471a0a38-c54e-40d1-ae3f-213103a7bbb2 req-76b15a7f-12f2-44a9-8455-138e81acea5c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "85ec4052-1453-4c76-936e-bf76f2108416-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:25:49 compute-1 nova_compute[225855]: 2026-01-20 14:25:49.481 225859 DEBUG oslo_concurrency.lockutils [req-471a0a38-c54e-40d1-ae3f-213103a7bbb2 req-76b15a7f-12f2-44a9-8455-138e81acea5c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "85ec4052-1453-4c76-936e-bf76f2108416-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:25:49 compute-1 nova_compute[225855]: 2026-01-20 14:25:49.481 225859 DEBUG nova.compute.manager [req-471a0a38-c54e-40d1-ae3f-213103a7bbb2 req-76b15a7f-12f2-44a9-8455-138e81acea5c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 85ec4052-1453-4c76-936e-bf76f2108416] No waiting events found dispatching network-vif-plugged-5146227f-80a8-47ae-a541-144b8dd24d4c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:25:49 compute-1 nova_compute[225855]: 2026-01-20 14:25:49.481 225859 WARNING nova.compute.manager [req-471a0a38-c54e-40d1-ae3f-213103a7bbb2 req-76b15a7f-12f2-44a9-8455-138e81acea5c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 85ec4052-1453-4c76-936e-bf76f2108416] Received unexpected event network-vif-plugged-5146227f-80a8-47ae-a541-144b8dd24d4c for instance with vm_state active and task_state migrating.
Jan 20 14:25:49 compute-1 nova_compute[225855]: 2026-01-20 14:25:49.482 225859 DEBUG nova.compute.manager [req-471a0a38-c54e-40d1-ae3f-213103a7bbb2 req-76b15a7f-12f2-44a9-8455-138e81acea5c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 85ec4052-1453-4c76-936e-bf76f2108416] Received event network-changed-5146227f-80a8-47ae-a541-144b8dd24d4c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:25:49 compute-1 nova_compute[225855]: 2026-01-20 14:25:49.482 225859 DEBUG nova.compute.manager [req-471a0a38-c54e-40d1-ae3f-213103a7bbb2 req-76b15a7f-12f2-44a9-8455-138e81acea5c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 85ec4052-1453-4c76-936e-bf76f2108416] Refreshing instance network info cache due to event network-changed-5146227f-80a8-47ae-a541-144b8dd24d4c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 14:25:49 compute-1 nova_compute[225855]: 2026-01-20 14:25:49.482 225859 DEBUG oslo_concurrency.lockutils [req-471a0a38-c54e-40d1-ae3f-213103a7bbb2 req-76b15a7f-12f2-44a9-8455-138e81acea5c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-85ec4052-1453-4c76-936e-bf76f2108416" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:25:49 compute-1 nova_compute[225855]: 2026-01-20 14:25:49.483 225859 DEBUG oslo_concurrency.lockutils [req-471a0a38-c54e-40d1-ae3f-213103a7bbb2 req-76b15a7f-12f2-44a9-8455-138e81acea5c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-85ec4052-1453-4c76-936e-bf76f2108416" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:25:49 compute-1 nova_compute[225855]: 2026-01-20 14:25:49.483 225859 DEBUG nova.network.neutron [req-471a0a38-c54e-40d1-ae3f-213103a7bbb2 req-76b15a7f-12f2-44a9-8455-138e81acea5c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 85ec4052-1453-4c76-936e-bf76f2108416] Refreshing network info cache for port 5146227f-80a8-47ae-a541-144b8dd24d4c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 14:25:49 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/1280635441' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:25:49 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:25:49 compute-1 nova_compute[225855]: 2026-01-20 14:25:49.938 225859 DEBUG nova.virt.libvirt.migration [None req-0d8e3b5b-c177-4fbf-8037-7f1dc9763eee b9adeab2c2f5486e92ca8534ef11c720 a7bb8a09ecaa40e8980b2ed19afa279f - - default default] [instance: 85ec4052-1453-4c76-936e-bf76f2108416] Current 50 elapsed 1 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512
Jan 20 14:25:49 compute-1 nova_compute[225855]: 2026-01-20 14:25:49.939 225859 DEBUG nova.virt.libvirt.migration [None req-0d8e3b5b-c177-4fbf-8037-7f1dc9763eee b9adeab2c2f5486e92ca8534ef11c720 a7bb8a09ecaa40e8980b2ed19afa279f - - default default] [instance: 85ec4052-1453-4c76-936e-bf76f2108416] Downtime does not need to change update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:525
Jan 20 14:25:50 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:25:50 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:25:50 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:25:50.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:25:50 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e143 e143: 3 total, 3 up, 3 in
Jan 20 14:25:50 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:25:50 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:25:50 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:25:50.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:25:50 compute-1 nova_compute[225855]: 2026-01-20 14:25:50.445 225859 DEBUG nova.virt.libvirt.migration [None req-0d8e3b5b-c177-4fbf-8037-7f1dc9763eee b9adeab2c2f5486e92ca8534ef11c720 a7bb8a09ecaa40e8980b2ed19afa279f - - default default] [instance: 85ec4052-1453-4c76-936e-bf76f2108416] Current 50 elapsed 1 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512
Jan 20 14:25:50 compute-1 nova_compute[225855]: 2026-01-20 14:25:50.445 225859 DEBUG nova.virt.libvirt.migration [None req-0d8e3b5b-c177-4fbf-8037-7f1dc9763eee b9adeab2c2f5486e92ca8534ef11c720 a7bb8a09ecaa40e8980b2ed19afa279f - - default default] [instance: 85ec4052-1453-4c76-936e-bf76f2108416] Downtime does not need to change update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:525
Jan 20 14:25:50 compute-1 ceph-mon[81775]: pgmap v1034: 321 pgs: 321 active+clean; 424 MiB data, 514 MiB used, 20 GiB / 21 GiB avail; 8.2 MiB/s rd, 12 MiB/s wr, 309 op/s
Jan 20 14:25:50 compute-1 ceph-mon[81775]: osdmap e143: 3 total, 3 up, 3 in
Jan 20 14:25:50 compute-1 nova_compute[225855]: 2026-01-20 14:25:50.949 225859 DEBUG nova.virt.libvirt.migration [None req-0d8e3b5b-c177-4fbf-8037-7f1dc9763eee b9adeab2c2f5486e92ca8534ef11c720 a7bb8a09ecaa40e8980b2ed19afa279f - - default default] [instance: 85ec4052-1453-4c76-936e-bf76f2108416] Current 50 elapsed 2 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512
Jan 20 14:25:50 compute-1 nova_compute[225855]: 2026-01-20 14:25:50.950 225859 DEBUG nova.virt.libvirt.migration [None req-0d8e3b5b-c177-4fbf-8037-7f1dc9763eee b9adeab2c2f5486e92ca8534ef11c720 a7bb8a09ecaa40e8980b2ed19afa279f - - default default] [instance: 85ec4052-1453-4c76-936e-bf76f2108416] Downtime does not need to change update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:525
Jan 20 14:25:51 compute-1 nova_compute[225855]: 2026-01-20 14:25:51.205 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768919151.2046537, 85ec4052-1453-4c76-936e-bf76f2108416 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:25:51 compute-1 nova_compute[225855]: 2026-01-20 14:25:51.206 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 85ec4052-1453-4c76-936e-bf76f2108416] VM Paused (Lifecycle Event)
Jan 20 14:25:51 compute-1 nova_compute[225855]: 2026-01-20 14:25:51.228 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 85ec4052-1453-4c76-936e-bf76f2108416] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:25:51 compute-1 nova_compute[225855]: 2026-01-20 14:25:51.232 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 85ec4052-1453-4c76-936e-bf76f2108416] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: migrating, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:25:51 compute-1 nova_compute[225855]: 2026-01-20 14:25:51.256 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 85ec4052-1453-4c76-936e-bf76f2108416] During sync_power_state the instance has a pending task (migrating). Skip.
Jan 20 14:25:51 compute-1 kernel: tap5146227f-80 (unregistering): left promiscuous mode
Jan 20 14:25:51 compute-1 NetworkManager[49104]: <info>  [1768919151.3947] device (tap5146227f-80): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 20 14:25:51 compute-1 nova_compute[225855]: 2026-01-20 14:25:51.412 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:25:51 compute-1 ovn_controller[130490]: 2026-01-20T14:25:51Z|00068|binding|INFO|Releasing lport 5146227f-80a8-47ae-a541-144b8dd24d4c from this chassis (sb_readonly=0)
Jan 20 14:25:51 compute-1 ovn_controller[130490]: 2026-01-20T14:25:51Z|00069|binding|INFO|Setting lport 5146227f-80a8-47ae-a541-144b8dd24d4c down in Southbound
Jan 20 14:25:51 compute-1 ovn_controller[130490]: 2026-01-20T14:25:51Z|00070|binding|INFO|Removing iface tap5146227f-80 ovn-installed in OVS
Jan 20 14:25:51 compute-1 nova_compute[225855]: 2026-01-20 14:25:51.415 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:25:51 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:25:51.420 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:39:f1:69 10.100.0.4'], port_security=['fa:16:3e:39:f1:69 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com,compute-2.ctlplane.example.com', 'activation-strategy': 'rarp', 'additional-chassis-activated': '7c9bfe4c-7684-437c-a64a-33562743d048'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '85ec4052-1453-4c76-936e-bf76f2108416', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6f01f500-b631-4cdb-ae71-b33b0ccfb1aa', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '861a4f2b70b249afadeabfe85bda53a3', 'neutron:revision_number': '18', 'neutron:security_group_ids': '31a0931a-1aaa-4760-9d26-94149371fd1b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9e54b11a-69dd-4260-a0b8-c84b36782857, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], logical_port=5146227f-80a8-47ae-a541-144b8dd24d4c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:25:51 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:25:51.423 140354 INFO neutron.agent.ovn.metadata.agent [-] Port 5146227f-80a8-47ae-a541-144b8dd24d4c in datapath 6f01f500-b631-4cdb-ae71-b33b0ccfb1aa unbound from our chassis
Jan 20 14:25:51 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:25:51.425 140354 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6f01f500-b631-4cdb-ae71-b33b0ccfb1aa
Jan 20 14:25:51 compute-1 nova_compute[225855]: 2026-01-20 14:25:51.446 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:25:51 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:25:51.451 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[fcbb10b3-558a-4662-a574-c41c539838be]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:25:51 compute-1 systemd[1]: machine-qemu\x2d6\x2dinstance\x2d0000000c.scope: Deactivated successfully.
Jan 20 14:25:51 compute-1 systemd[1]: machine-qemu\x2d6\x2dinstance\x2d0000000c.scope: Consumed 2.736s CPU time.
Jan 20 14:25:51 compute-1 systemd-machined[194361]: Machine qemu-6-instance-0000000c terminated.
Jan 20 14:25:51 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:25:51.494 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[471f5a82-381a-43c3-b764-8ec5fc677492]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:25:51 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:25:51.498 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[db610bf3-a0b0-41b1-8f1d-5085b668cacb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:25:51 compute-1 virtqemud[225396]: Unable to get XATTR trusted.libvirt.security.ref_selinux on volumes/volume-67ec3b5b-23d2-4f8a-84b0-4ee1bda588af: No such file or directory
Jan 20 14:25:51 compute-1 virtqemud[225396]: Unable to get XATTR trusted.libvirt.security.ref_dac on volumes/volume-67ec3b5b-23d2-4f8a-84b0-4ee1bda588af: No such file or directory
Jan 20 14:25:51 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:25:51.542 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[3003b885-7826-4f91-8185-3da321e9f31e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:25:51 compute-1 kernel: tap5146227f-80: entered promiscuous mode
Jan 20 14:25:51 compute-1 kernel: tap5146227f-80 (unregistering): left promiscuous mode
Jan 20 14:25:51 compute-1 nova_compute[225855]: 2026-01-20 14:25:51.558 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:25:51 compute-1 nova_compute[225855]: 2026-01-20 14:25:51.566 225859 DEBUG nova.virt.libvirt.guest [None req-0d8e3b5b-c177-4fbf-8037-7f1dc9763eee b9adeab2c2f5486e92ca8534ef11c720 a7bb8a09ecaa40e8980b2ed19afa279f - - default default] Domain has shutdown/gone away: Requested operation is not valid: domain is not running get_job_info /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:688
Jan 20 14:25:51 compute-1 nova_compute[225855]: 2026-01-20 14:25:51.567 225859 INFO nova.virt.libvirt.driver [None req-0d8e3b5b-c177-4fbf-8037-7f1dc9763eee b9adeab2c2f5486e92ca8534ef11c720 a7bb8a09ecaa40e8980b2ed19afa279f - - default default] [instance: 85ec4052-1453-4c76-936e-bf76f2108416] Migration operation has completed
Jan 20 14:25:51 compute-1 nova_compute[225855]: 2026-01-20 14:25:51.567 225859 INFO nova.compute.manager [None req-0d8e3b5b-c177-4fbf-8037-7f1dc9763eee b9adeab2c2f5486e92ca8534ef11c720 a7bb8a09ecaa40e8980b2ed19afa279f - - default default] [instance: 85ec4052-1453-4c76-936e-bf76f2108416] _post_live_migration() is started..
Jan 20 14:25:51 compute-1 nova_compute[225855]: 2026-01-20 14:25:51.569 225859 DEBUG nova.virt.libvirt.driver [None req-0d8e3b5b-c177-4fbf-8037-7f1dc9763eee b9adeab2c2f5486e92ca8534ef11c720 a7bb8a09ecaa40e8980b2ed19afa279f - - default default] [instance: 85ec4052-1453-4c76-936e-bf76f2108416] Migrate API has completed _live_migration_operation /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10279
Jan 20 14:25:51 compute-1 nova_compute[225855]: 2026-01-20 14:25:51.570 225859 DEBUG nova.virt.libvirt.driver [None req-0d8e3b5b-c177-4fbf-8037-7f1dc9763eee b9adeab2c2f5486e92ca8534ef11c720 a7bb8a09ecaa40e8980b2ed19afa279f - - default default] [instance: 85ec4052-1453-4c76-936e-bf76f2108416] Migration operation thread has finished _live_migration_operation /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10327
Jan 20 14:25:51 compute-1 nova_compute[225855]: 2026-01-20 14:25:51.570 225859 DEBUG nova.virt.libvirt.driver [None req-0d8e3b5b-c177-4fbf-8037-7f1dc9763eee b9adeab2c2f5486e92ca8534ef11c720 a7bb8a09ecaa40e8980b2ed19afa279f - - default default] [instance: 85ec4052-1453-4c76-936e-bf76f2108416] Migration operation thread notification thread_finished /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10630
Jan 20 14:25:51 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:25:51.571 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[a58dd3f4-918c-41e0-b558-5aefa141cea8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6f01f500-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9e:16:f0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 43, 'tx_packets': 8, 'rx_bytes': 2086, 'tx_bytes': 528, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 43, 'tx_packets': 8, 'rx_bytes': 2086, 'tx_bytes': 528, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 18], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 409098, 'reachable_time': 15863, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 232841, 'error': None, 'target': 'ovnmeta-6f01f500-b631-4cdb-ae71-b33b0ccfb1aa', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:25:51 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:25:51.596 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[e5e76d85-8cd5-4faf-8fcd-4971f3e41e37]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap6f01f500-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 409114, 'tstamp': 409114}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 232848, 'error': None, 'target': 'ovnmeta-6f01f500-b631-4cdb-ae71-b33b0ccfb1aa', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap6f01f500-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 409117, 'tstamp': 409117}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 232848, 'error': None, 'target': 'ovnmeta-6f01f500-b631-4cdb-ae71-b33b0ccfb1aa', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:25:51 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:25:51.599 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6f01f500-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:25:51 compute-1 nova_compute[225855]: 2026-01-20 14:25:51.601 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:25:51 compute-1 nova_compute[225855]: 2026-01-20 14:25:51.608 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:25:51 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:25:51.609 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6f01f500-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:25:51 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:25:51.609 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:25:51 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:25:51.610 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6f01f500-b0, col_values=(('external_ids', {'iface-id': '428c2ef0-5c20-4d34-88ac-9a0d29a78f0e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:25:51 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:25:51.611 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:25:51 compute-1 nova_compute[225855]: 2026-01-20 14:25:51.784 225859 DEBUG nova.network.neutron [req-471a0a38-c54e-40d1-ae3f-213103a7bbb2 req-76b15a7f-12f2-44a9-8455-138e81acea5c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 85ec4052-1453-4c76-936e-bf76f2108416] Updated VIF entry in instance network info cache for port 5146227f-80a8-47ae-a541-144b8dd24d4c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 14:25:51 compute-1 nova_compute[225855]: 2026-01-20 14:25:51.784 225859 DEBUG nova.network.neutron [req-471a0a38-c54e-40d1-ae3f-213103a7bbb2 req-76b15a7f-12f2-44a9-8455-138e81acea5c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 85ec4052-1453-4c76-936e-bf76f2108416] Updating instance_info_cache with network_info: [{"id": "5146227f-80a8-47ae-a541-144b8dd24d4c", "address": "fa:16:3e:39:f1:69", "network": {"id": "6f01f500-b631-4cdb-ae71-b33b0ccfb1aa", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-1745233184-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "861a4f2b70b249afadeabfe85bda53a3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5146227f-80", "ovs_interfaceid": "5146227f-80a8-47ae-a541-144b8dd24d4c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true, "migrating_to": "compute-2.ctlplane.example.com"}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:25:51 compute-1 nova_compute[225855]: 2026-01-20 14:25:51.806 225859 DEBUG oslo_concurrency.lockutils [req-471a0a38-c54e-40d1-ae3f-213103a7bbb2 req-76b15a7f-12f2-44a9-8455-138e81acea5c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-85ec4052-1453-4c76-936e-bf76f2108416" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:25:52 compute-1 nova_compute[225855]: 2026-01-20 14:25:52.056 225859 DEBUG nova.compute.manager [req-5df5d278-cce5-44ce-8ee6-c232a484c9b9 req-02c807c6-4d91-4f19-8f15-c5bce28409e8 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 85ec4052-1453-4c76-936e-bf76f2108416] Received event network-vif-unplugged-5146227f-80a8-47ae-a541-144b8dd24d4c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:25:52 compute-1 nova_compute[225855]: 2026-01-20 14:25:52.056 225859 DEBUG oslo_concurrency.lockutils [req-5df5d278-cce5-44ce-8ee6-c232a484c9b9 req-02c807c6-4d91-4f19-8f15-c5bce28409e8 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "85ec4052-1453-4c76-936e-bf76f2108416-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:25:52 compute-1 nova_compute[225855]: 2026-01-20 14:25:52.057 225859 DEBUG oslo_concurrency.lockutils [req-5df5d278-cce5-44ce-8ee6-c232a484c9b9 req-02c807c6-4d91-4f19-8f15-c5bce28409e8 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "85ec4052-1453-4c76-936e-bf76f2108416-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:25:52 compute-1 nova_compute[225855]: 2026-01-20 14:25:52.057 225859 DEBUG oslo_concurrency.lockutils [req-5df5d278-cce5-44ce-8ee6-c232a484c9b9 req-02c807c6-4d91-4f19-8f15-c5bce28409e8 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "85ec4052-1453-4c76-936e-bf76f2108416-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:25:52 compute-1 nova_compute[225855]: 2026-01-20 14:25:52.058 225859 DEBUG nova.compute.manager [req-5df5d278-cce5-44ce-8ee6-c232a484c9b9 req-02c807c6-4d91-4f19-8f15-c5bce28409e8 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 85ec4052-1453-4c76-936e-bf76f2108416] No waiting events found dispatching network-vif-unplugged-5146227f-80a8-47ae-a541-144b8dd24d4c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:25:52 compute-1 nova_compute[225855]: 2026-01-20 14:25:52.058 225859 DEBUG nova.compute.manager [req-5df5d278-cce5-44ce-8ee6-c232a484c9b9 req-02c807c6-4d91-4f19-8f15-c5bce28409e8 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 85ec4052-1453-4c76-936e-bf76f2108416] Received event network-vif-unplugged-5146227f-80a8-47ae-a541-144b8dd24d4c for instance with task_state migrating. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 20 14:25:52 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:25:52 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:25:52 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:25:52.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:25:52 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:25:52 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:25:52 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:25:52.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:25:52 compute-1 ceph-mon[81775]: pgmap v1036: 321 pgs: 321 active+clean; 397 MiB data, 500 MiB used, 20 GiB / 21 GiB avail; 6.1 MiB/s rd, 7.5 MiB/s wr, 216 op/s
Jan 20 14:25:52 compute-1 nova_compute[225855]: 2026-01-20 14:25:52.677 225859 DEBUG nova.compute.manager [req-496df221-3e70-46d2-8e61-e71a377330c0 req-8db778df-5436-48aa-abee-c277f15b849a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 85ec4052-1453-4c76-936e-bf76f2108416] Received event network-vif-unplugged-5146227f-80a8-47ae-a541-144b8dd24d4c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:25:52 compute-1 nova_compute[225855]: 2026-01-20 14:25:52.678 225859 DEBUG oslo_concurrency.lockutils [req-496df221-3e70-46d2-8e61-e71a377330c0 req-8db778df-5436-48aa-abee-c277f15b849a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "85ec4052-1453-4c76-936e-bf76f2108416-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:25:52 compute-1 nova_compute[225855]: 2026-01-20 14:25:52.678 225859 DEBUG oslo_concurrency.lockutils [req-496df221-3e70-46d2-8e61-e71a377330c0 req-8db778df-5436-48aa-abee-c277f15b849a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "85ec4052-1453-4c76-936e-bf76f2108416-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:25:52 compute-1 nova_compute[225855]: 2026-01-20 14:25:52.678 225859 DEBUG oslo_concurrency.lockutils [req-496df221-3e70-46d2-8e61-e71a377330c0 req-8db778df-5436-48aa-abee-c277f15b849a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "85ec4052-1453-4c76-936e-bf76f2108416-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:25:52 compute-1 nova_compute[225855]: 2026-01-20 14:25:52.679 225859 DEBUG nova.compute.manager [req-496df221-3e70-46d2-8e61-e71a377330c0 req-8db778df-5436-48aa-abee-c277f15b849a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 85ec4052-1453-4c76-936e-bf76f2108416] No waiting events found dispatching network-vif-unplugged-5146227f-80a8-47ae-a541-144b8dd24d4c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:25:52 compute-1 nova_compute[225855]: 2026-01-20 14:25:52.679 225859 DEBUG nova.compute.manager [req-496df221-3e70-46d2-8e61-e71a377330c0 req-8db778df-5436-48aa-abee-c277f15b849a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 85ec4052-1453-4c76-936e-bf76f2108416] Received event network-vif-unplugged-5146227f-80a8-47ae-a541-144b8dd24d4c for instance with task_state migrating. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 20 14:25:52 compute-1 nova_compute[225855]: 2026-01-20 14:25:52.725 225859 DEBUG nova.network.neutron [None req-0d8e3b5b-c177-4fbf-8037-7f1dc9763eee b9adeab2c2f5486e92ca8534ef11c720 a7bb8a09ecaa40e8980b2ed19afa279f - - default default] Activated binding for port 5146227f-80a8-47ae-a541-144b8dd24d4c and host compute-2.ctlplane.example.com migrate_instance_start /usr/lib/python3.9/site-packages/nova/network/neutron.py:3181
Jan 20 14:25:52 compute-1 nova_compute[225855]: 2026-01-20 14:25:52.726 225859 DEBUG nova.compute.manager [None req-0d8e3b5b-c177-4fbf-8037-7f1dc9763eee b9adeab2c2f5486e92ca8534ef11c720 a7bb8a09ecaa40e8980b2ed19afa279f - - default default] [instance: 85ec4052-1453-4c76-936e-bf76f2108416] Calling driver.post_live_migration_at_source with original source VIFs from migrate_data: [{"id": "5146227f-80a8-47ae-a541-144b8dd24d4c", "address": "fa:16:3e:39:f1:69", "network": {"id": "6f01f500-b631-4cdb-ae71-b33b0ccfb1aa", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-1745233184-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "861a4f2b70b249afadeabfe85bda53a3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5146227f-80", "ovs_interfaceid": "5146227f-80a8-47ae-a541-144b8dd24d4c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] _post_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:9326
Jan 20 14:25:52 compute-1 nova_compute[225855]: 2026-01-20 14:25:52.727 225859 DEBUG nova.virt.libvirt.vif [None req-0d8e3b5b-c177-4fbf-8037-7f1dc9763eee b9adeab2c2f5486e92ca8534ef11c720 a7bb8a09ecaa40e8980b2ed19afa279f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-20T14:24:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-LiveAutoBlockMigrationV225Test-server-2144632196',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-liveautoblockmigrationv225test-server-2144632196',id=12,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-20T14:24:57Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='861a4f2b70b249afadeabfe85bda53a3',ramdisk_id='',reservation_id='r-uk9a0taa',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-LiveAutoBlockMigrationV225Test-1568967339',owner_user_name='tempest-LiveAutoBlockMigrationV225Test-1568967339-project-member'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T14:25:38Z,user_data=None,user_id='aefb5652049e473a948c089d7c62ef1a',uuid=85ec4052-1453-4c76-936e-bf76f2108416,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5146227f-80a8-47ae-a541-144b8dd24d4c", "address": "fa:16:3e:39:f1:69", "network": {"id": "6f01f500-b631-4cdb-ae71-b33b0ccfb1aa", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-1745233184-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "861a4f2b70b249afadeabfe85bda53a3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5146227f-80", "ovs_interfaceid": "5146227f-80a8-47ae-a541-144b8dd24d4c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 20 14:25:52 compute-1 nova_compute[225855]: 2026-01-20 14:25:52.728 225859 DEBUG nova.network.os_vif_util [None req-0d8e3b5b-c177-4fbf-8037-7f1dc9763eee b9adeab2c2f5486e92ca8534ef11c720 a7bb8a09ecaa40e8980b2ed19afa279f - - default default] Converting VIF {"id": "5146227f-80a8-47ae-a541-144b8dd24d4c", "address": "fa:16:3e:39:f1:69", "network": {"id": "6f01f500-b631-4cdb-ae71-b33b0ccfb1aa", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-1745233184-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "861a4f2b70b249afadeabfe85bda53a3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5146227f-80", "ovs_interfaceid": "5146227f-80a8-47ae-a541-144b8dd24d4c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:25:52 compute-1 nova_compute[225855]: 2026-01-20 14:25:52.729 225859 DEBUG nova.network.os_vif_util [None req-0d8e3b5b-c177-4fbf-8037-7f1dc9763eee b9adeab2c2f5486e92ca8534ef11c720 a7bb8a09ecaa40e8980b2ed19afa279f - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:39:f1:69,bridge_name='br-int',has_traffic_filtering=True,id=5146227f-80a8-47ae-a541-144b8dd24d4c,network=Network(6f01f500-b631-4cdb-ae71-b33b0ccfb1aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5146227f-80') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:25:52 compute-1 nova_compute[225855]: 2026-01-20 14:25:52.730 225859 DEBUG os_vif [None req-0d8e3b5b-c177-4fbf-8037-7f1dc9763eee b9adeab2c2f5486e92ca8534ef11c720 a7bb8a09ecaa40e8980b2ed19afa279f - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:39:f1:69,bridge_name='br-int',has_traffic_filtering=True,id=5146227f-80a8-47ae-a541-144b8dd24d4c,network=Network(6f01f500-b631-4cdb-ae71-b33b0ccfb1aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5146227f-80') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 20 14:25:52 compute-1 nova_compute[225855]: 2026-01-20 14:25:52.733 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:25:52 compute-1 nova_compute[225855]: 2026-01-20 14:25:52.733 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5146227f-80, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:25:52 compute-1 nova_compute[225855]: 2026-01-20 14:25:52.735 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:25:52 compute-1 nova_compute[225855]: 2026-01-20 14:25:52.739 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 14:25:52 compute-1 nova_compute[225855]: 2026-01-20 14:25:52.743 225859 INFO os_vif [None req-0d8e3b5b-c177-4fbf-8037-7f1dc9763eee b9adeab2c2f5486e92ca8534ef11c720 a7bb8a09ecaa40e8980b2ed19afa279f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:39:f1:69,bridge_name='br-int',has_traffic_filtering=True,id=5146227f-80a8-47ae-a541-144b8dd24d4c,network=Network(6f01f500-b631-4cdb-ae71-b33b0ccfb1aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5146227f-80')
Jan 20 14:25:52 compute-1 nova_compute[225855]: 2026-01-20 14:25:52.743 225859 DEBUG oslo_concurrency.lockutils [None req-0d8e3b5b-c177-4fbf-8037-7f1dc9763eee b9adeab2c2f5486e92ca8534ef11c720 a7bb8a09ecaa40e8980b2ed19afa279f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:25:52 compute-1 nova_compute[225855]: 2026-01-20 14:25:52.743 225859 DEBUG oslo_concurrency.lockutils [None req-0d8e3b5b-c177-4fbf-8037-7f1dc9763eee b9adeab2c2f5486e92ca8534ef11c720 a7bb8a09ecaa40e8980b2ed19afa279f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:25:52 compute-1 nova_compute[225855]: 2026-01-20 14:25:52.744 225859 DEBUG oslo_concurrency.lockutils [None req-0d8e3b5b-c177-4fbf-8037-7f1dc9763eee b9adeab2c2f5486e92ca8534ef11c720 a7bb8a09ecaa40e8980b2ed19afa279f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:25:52 compute-1 nova_compute[225855]: 2026-01-20 14:25:52.744 225859 DEBUG nova.compute.manager [None req-0d8e3b5b-c177-4fbf-8037-7f1dc9763eee b9adeab2c2f5486e92ca8534ef11c720 a7bb8a09ecaa40e8980b2ed19afa279f - - default default] [instance: 85ec4052-1453-4c76-936e-bf76f2108416] Calling driver.cleanup from _post_live_migration _post_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:9349
Jan 20 14:25:52 compute-1 nova_compute[225855]: 2026-01-20 14:25:52.745 225859 INFO nova.virt.libvirt.driver [None req-0d8e3b5b-c177-4fbf-8037-7f1dc9763eee b9adeab2c2f5486e92ca8534ef11c720 a7bb8a09ecaa40e8980b2ed19afa279f - - default default] [instance: 85ec4052-1453-4c76-936e-bf76f2108416] Deleting instance files /var/lib/nova/instances/85ec4052-1453-4c76-936e-bf76f2108416_del
Jan 20 14:25:52 compute-1 nova_compute[225855]: 2026-01-20 14:25:52.746 225859 INFO nova.virt.libvirt.driver [None req-0d8e3b5b-c177-4fbf-8037-7f1dc9763eee b9adeab2c2f5486e92ca8534ef11c720 a7bb8a09ecaa40e8980b2ed19afa279f - - default default] [instance: 85ec4052-1453-4c76-936e-bf76f2108416] Deletion of /var/lib/nova/instances/85ec4052-1453-4c76-936e-bf76f2108416_del complete
Jan 20 14:25:53 compute-1 nova_compute[225855]: 2026-01-20 14:25:53.054 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:25:53 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/775805918' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:25:54 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:25:54 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:25:54 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:25:54.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:25:54 compute-1 nova_compute[225855]: 2026-01-20 14:25:54.186 225859 DEBUG nova.compute.manager [req-db4df5ce-519c-4ad8-8d39-53815f0e1f8a req-54e95ee6-f9e5-4572-a59c-e2bd0117d0fd 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 85ec4052-1453-4c76-936e-bf76f2108416] Received event network-vif-plugged-5146227f-80a8-47ae-a541-144b8dd24d4c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:25:54 compute-1 nova_compute[225855]: 2026-01-20 14:25:54.186 225859 DEBUG oslo_concurrency.lockutils [req-db4df5ce-519c-4ad8-8d39-53815f0e1f8a req-54e95ee6-f9e5-4572-a59c-e2bd0117d0fd 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "85ec4052-1453-4c76-936e-bf76f2108416-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:25:54 compute-1 nova_compute[225855]: 2026-01-20 14:25:54.187 225859 DEBUG oslo_concurrency.lockutils [req-db4df5ce-519c-4ad8-8d39-53815f0e1f8a req-54e95ee6-f9e5-4572-a59c-e2bd0117d0fd 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "85ec4052-1453-4c76-936e-bf76f2108416-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:25:54 compute-1 nova_compute[225855]: 2026-01-20 14:25:54.187 225859 DEBUG oslo_concurrency.lockutils [req-db4df5ce-519c-4ad8-8d39-53815f0e1f8a req-54e95ee6-f9e5-4572-a59c-e2bd0117d0fd 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "85ec4052-1453-4c76-936e-bf76f2108416-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:25:54 compute-1 nova_compute[225855]: 2026-01-20 14:25:54.187 225859 DEBUG nova.compute.manager [req-db4df5ce-519c-4ad8-8d39-53815f0e1f8a req-54e95ee6-f9e5-4572-a59c-e2bd0117d0fd 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 85ec4052-1453-4c76-936e-bf76f2108416] No waiting events found dispatching network-vif-plugged-5146227f-80a8-47ae-a541-144b8dd24d4c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:25:54 compute-1 nova_compute[225855]: 2026-01-20 14:25:54.188 225859 WARNING nova.compute.manager [req-db4df5ce-519c-4ad8-8d39-53815f0e1f8a req-54e95ee6-f9e5-4572-a59c-e2bd0117d0fd 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 85ec4052-1453-4c76-936e-bf76f2108416] Received unexpected event network-vif-plugged-5146227f-80a8-47ae-a541-144b8dd24d4c for instance with vm_state active and task_state migrating.
Jan 20 14:25:54 compute-1 nova_compute[225855]: 2026-01-20 14:25:54.188 225859 DEBUG nova.compute.manager [req-db4df5ce-519c-4ad8-8d39-53815f0e1f8a req-54e95ee6-f9e5-4572-a59c-e2bd0117d0fd 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 85ec4052-1453-4c76-936e-bf76f2108416] Received event network-vif-plugged-5146227f-80a8-47ae-a541-144b8dd24d4c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:25:54 compute-1 nova_compute[225855]: 2026-01-20 14:25:54.188 225859 DEBUG oslo_concurrency.lockutils [req-db4df5ce-519c-4ad8-8d39-53815f0e1f8a req-54e95ee6-f9e5-4572-a59c-e2bd0117d0fd 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "85ec4052-1453-4c76-936e-bf76f2108416-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:25:54 compute-1 nova_compute[225855]: 2026-01-20 14:25:54.189 225859 DEBUG oslo_concurrency.lockutils [req-db4df5ce-519c-4ad8-8d39-53815f0e1f8a req-54e95ee6-f9e5-4572-a59c-e2bd0117d0fd 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "85ec4052-1453-4c76-936e-bf76f2108416-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:25:54 compute-1 nova_compute[225855]: 2026-01-20 14:25:54.189 225859 DEBUG oslo_concurrency.lockutils [req-db4df5ce-519c-4ad8-8d39-53815f0e1f8a req-54e95ee6-f9e5-4572-a59c-e2bd0117d0fd 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "85ec4052-1453-4c76-936e-bf76f2108416-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:25:54 compute-1 nova_compute[225855]: 2026-01-20 14:25:54.189 225859 DEBUG nova.compute.manager [req-db4df5ce-519c-4ad8-8d39-53815f0e1f8a req-54e95ee6-f9e5-4572-a59c-e2bd0117d0fd 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 85ec4052-1453-4c76-936e-bf76f2108416] No waiting events found dispatching network-vif-plugged-5146227f-80a8-47ae-a541-144b8dd24d4c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:25:54 compute-1 nova_compute[225855]: 2026-01-20 14:25:54.189 225859 WARNING nova.compute.manager [req-db4df5ce-519c-4ad8-8d39-53815f0e1f8a req-54e95ee6-f9e5-4572-a59c-e2bd0117d0fd 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 85ec4052-1453-4c76-936e-bf76f2108416] Received unexpected event network-vif-plugged-5146227f-80a8-47ae-a541-144b8dd24d4c for instance with vm_state active and task_state migrating.
Jan 20 14:25:54 compute-1 nova_compute[225855]: 2026-01-20 14:25:54.190 225859 DEBUG nova.compute.manager [req-db4df5ce-519c-4ad8-8d39-53815f0e1f8a req-54e95ee6-f9e5-4572-a59c-e2bd0117d0fd 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 85ec4052-1453-4c76-936e-bf76f2108416] Received event network-vif-plugged-5146227f-80a8-47ae-a541-144b8dd24d4c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:25:54 compute-1 nova_compute[225855]: 2026-01-20 14:25:54.190 225859 DEBUG oslo_concurrency.lockutils [req-db4df5ce-519c-4ad8-8d39-53815f0e1f8a req-54e95ee6-f9e5-4572-a59c-e2bd0117d0fd 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "85ec4052-1453-4c76-936e-bf76f2108416-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:25:54 compute-1 nova_compute[225855]: 2026-01-20 14:25:54.190 225859 DEBUG oslo_concurrency.lockutils [req-db4df5ce-519c-4ad8-8d39-53815f0e1f8a req-54e95ee6-f9e5-4572-a59c-e2bd0117d0fd 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "85ec4052-1453-4c76-936e-bf76f2108416-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:25:54 compute-1 nova_compute[225855]: 2026-01-20 14:25:54.191 225859 DEBUG oslo_concurrency.lockutils [req-db4df5ce-519c-4ad8-8d39-53815f0e1f8a req-54e95ee6-f9e5-4572-a59c-e2bd0117d0fd 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "85ec4052-1453-4c76-936e-bf76f2108416-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:25:54 compute-1 nova_compute[225855]: 2026-01-20 14:25:54.191 225859 DEBUG nova.compute.manager [req-db4df5ce-519c-4ad8-8d39-53815f0e1f8a req-54e95ee6-f9e5-4572-a59c-e2bd0117d0fd 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 85ec4052-1453-4c76-936e-bf76f2108416] No waiting events found dispatching network-vif-plugged-5146227f-80a8-47ae-a541-144b8dd24d4c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:25:54 compute-1 nova_compute[225855]: 2026-01-20 14:25:54.191 225859 WARNING nova.compute.manager [req-db4df5ce-519c-4ad8-8d39-53815f0e1f8a req-54e95ee6-f9e5-4572-a59c-e2bd0117d0fd 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 85ec4052-1453-4c76-936e-bf76f2108416] Received unexpected event network-vif-plugged-5146227f-80a8-47ae-a541-144b8dd24d4c for instance with vm_state active and task_state migrating.
Jan 20 14:25:54 compute-1 nova_compute[225855]: 2026-01-20 14:25:54.192 225859 DEBUG nova.compute.manager [req-db4df5ce-519c-4ad8-8d39-53815f0e1f8a req-54e95ee6-f9e5-4572-a59c-e2bd0117d0fd 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 85ec4052-1453-4c76-936e-bf76f2108416] Received event network-vif-plugged-5146227f-80a8-47ae-a541-144b8dd24d4c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:25:54 compute-1 nova_compute[225855]: 2026-01-20 14:25:54.192 225859 DEBUG oslo_concurrency.lockutils [req-db4df5ce-519c-4ad8-8d39-53815f0e1f8a req-54e95ee6-f9e5-4572-a59c-e2bd0117d0fd 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "85ec4052-1453-4c76-936e-bf76f2108416-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:25:54 compute-1 nova_compute[225855]: 2026-01-20 14:25:54.192 225859 DEBUG oslo_concurrency.lockutils [req-db4df5ce-519c-4ad8-8d39-53815f0e1f8a req-54e95ee6-f9e5-4572-a59c-e2bd0117d0fd 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "85ec4052-1453-4c76-936e-bf76f2108416-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:25:54 compute-1 nova_compute[225855]: 2026-01-20 14:25:54.193 225859 DEBUG oslo_concurrency.lockutils [req-db4df5ce-519c-4ad8-8d39-53815f0e1f8a req-54e95ee6-f9e5-4572-a59c-e2bd0117d0fd 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "85ec4052-1453-4c76-936e-bf76f2108416-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:25:54 compute-1 nova_compute[225855]: 2026-01-20 14:25:54.193 225859 DEBUG nova.compute.manager [req-db4df5ce-519c-4ad8-8d39-53815f0e1f8a req-54e95ee6-f9e5-4572-a59c-e2bd0117d0fd 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 85ec4052-1453-4c76-936e-bf76f2108416] No waiting events found dispatching network-vif-plugged-5146227f-80a8-47ae-a541-144b8dd24d4c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:25:54 compute-1 nova_compute[225855]: 2026-01-20 14:25:54.193 225859 WARNING nova.compute.manager [req-db4df5ce-519c-4ad8-8d39-53815f0e1f8a req-54e95ee6-f9e5-4572-a59c-e2bd0117d0fd 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 85ec4052-1453-4c76-936e-bf76f2108416] Received unexpected event network-vif-plugged-5146227f-80a8-47ae-a541-144b8dd24d4c for instance with vm_state active and task_state migrating.
Jan 20 14:25:54 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:25:54 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:25:54 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:25:54.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:25:54 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:25:54 compute-1 ceph-mon[81775]: pgmap v1037: 321 pgs: 321 active+clean; 358 MiB data, 477 MiB used, 21 GiB / 21 GiB avail; 5.2 MiB/s rd, 6.4 MiB/s wr, 209 op/s
Jan 20 14:25:55 compute-1 ceph-mon[81775]: pgmap v1038: 321 pgs: 321 active+clean; 346 MiB data, 471 MiB used, 21 GiB / 21 GiB avail; 4.2 MiB/s rd, 5.2 MiB/s wr, 177 op/s
Jan 20 14:25:55 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/4189409054' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:25:56 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:25:56 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:25:56 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:25:56.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:25:56 compute-1 podman[232852]: 2026-01-20 14:25:56.093735633 +0000 UTC m=+0.119478019 container health_status 72be95c12e041eb2cd1a16ad70dfa1391d9f03a5f3180a614ead9459dcfcd46f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 20 14:25:56 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:25:56 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:25:56 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:25:56.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:25:57 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/1687057013' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:25:57 compute-1 sudo[232879]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:25:57 compute-1 sudo[232879]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:25:57 compute-1 sudo[232879]: pam_unix(sudo:session): session closed for user root
Jan 20 14:25:57 compute-1 sudo[232904]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:25:57 compute-1 sudo[232904]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:25:57 compute-1 sudo[232904]: pam_unix(sudo:session): session closed for user root
Jan 20 14:25:57 compute-1 nova_compute[225855]: 2026-01-20 14:25:57.737 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:25:58 compute-1 nova_compute[225855]: 2026-01-20 14:25:58.056 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:25:58 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:25:58 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:25:58 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:25:58.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:25:58 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:25:58 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:25:58 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:25:58.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:25:58 compute-1 nova_compute[225855]: 2026-01-20 14:25:58.570 225859 DEBUG oslo_concurrency.lockutils [None req-0d8e3b5b-c177-4fbf-8037-7f1dc9763eee b9adeab2c2f5486e92ca8534ef11c720 a7bb8a09ecaa40e8980b2ed19afa279f - - default default] Acquiring lock "85ec4052-1453-4c76-936e-bf76f2108416-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:25:58 compute-1 nova_compute[225855]: 2026-01-20 14:25:58.571 225859 DEBUG oslo_concurrency.lockutils [None req-0d8e3b5b-c177-4fbf-8037-7f1dc9763eee b9adeab2c2f5486e92ca8534ef11c720 a7bb8a09ecaa40e8980b2ed19afa279f - - default default] Lock "85ec4052-1453-4c76-936e-bf76f2108416-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:25:58 compute-1 nova_compute[225855]: 2026-01-20 14:25:58.571 225859 DEBUG oslo_concurrency.lockutils [None req-0d8e3b5b-c177-4fbf-8037-7f1dc9763eee b9adeab2c2f5486e92ca8534ef11c720 a7bb8a09ecaa40e8980b2ed19afa279f - - default default] Lock "85ec4052-1453-4c76-936e-bf76f2108416-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:25:58 compute-1 nova_compute[225855]: 2026-01-20 14:25:58.598 225859 DEBUG oslo_concurrency.lockutils [None req-0d8e3b5b-c177-4fbf-8037-7f1dc9763eee b9adeab2c2f5486e92ca8534ef11c720 a7bb8a09ecaa40e8980b2ed19afa279f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:25:58 compute-1 nova_compute[225855]: 2026-01-20 14:25:58.599 225859 DEBUG oslo_concurrency.lockutils [None req-0d8e3b5b-c177-4fbf-8037-7f1dc9763eee b9adeab2c2f5486e92ca8534ef11c720 a7bb8a09ecaa40e8980b2ed19afa279f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:25:58 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/3940468679' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:25:58 compute-1 ceph-mon[81775]: pgmap v1039: 321 pgs: 321 active+clean; 336 MiB data, 436 MiB used, 21 GiB / 21 GiB avail; 2.7 MiB/s rd, 3.2 MiB/s wr, 143 op/s
Jan 20 14:25:58 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/1159291882' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:25:58 compute-1 nova_compute[225855]: 2026-01-20 14:25:58.600 225859 DEBUG oslo_concurrency.lockutils [None req-0d8e3b5b-c177-4fbf-8037-7f1dc9763eee b9adeab2c2f5486e92ca8534ef11c720 a7bb8a09ecaa40e8980b2ed19afa279f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:25:58 compute-1 nova_compute[225855]: 2026-01-20 14:25:58.601 225859 DEBUG nova.compute.resource_tracker [None req-0d8e3b5b-c177-4fbf-8037-7f1dc9763eee b9adeab2c2f5486e92ca8534ef11c720 a7bb8a09ecaa40e8980b2ed19afa279f - - default default] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 14:25:58 compute-1 nova_compute[225855]: 2026-01-20 14:25:58.602 225859 DEBUG oslo_concurrency.processutils [None req-0d8e3b5b-c177-4fbf-8037-7f1dc9763eee b9adeab2c2f5486e92ca8534ef11c720 a7bb8a09ecaa40e8980b2ed19afa279f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:25:59 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:25:59 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4228894258' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:25:59 compute-1 nova_compute[225855]: 2026-01-20 14:25:59.081 225859 DEBUG oslo_concurrency.processutils [None req-0d8e3b5b-c177-4fbf-8037-7f1dc9763eee b9adeab2c2f5486e92ca8534ef11c720 a7bb8a09ecaa40e8980b2ed19afa279f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:25:59 compute-1 nova_compute[225855]: 2026-01-20 14:25:59.179 225859 DEBUG nova.virt.libvirt.driver [None req-0d8e3b5b-c177-4fbf-8037-7f1dc9763eee b9adeab2c2f5486e92ca8534ef11c720 a7bb8a09ecaa40e8980b2ed19afa279f - - default default] skipping disk for instance-00000006 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 14:25:59 compute-1 nova_compute[225855]: 2026-01-20 14:25:59.180 225859 DEBUG nova.virt.libvirt.driver [None req-0d8e3b5b-c177-4fbf-8037-7f1dc9763eee b9adeab2c2f5486e92ca8534ef11c720 a7bb8a09ecaa40e8980b2ed19afa279f - - default default] skipping disk for instance-00000006 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 14:25:59 compute-1 nova_compute[225855]: 2026-01-20 14:25:59.429 225859 WARNING nova.virt.libvirt.driver [None req-0d8e3b5b-c177-4fbf-8037-7f1dc9763eee b9adeab2c2f5486e92ca8534ef11c720 a7bb8a09ecaa40e8980b2ed19afa279f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 14:25:59 compute-1 nova_compute[225855]: 2026-01-20 14:25:59.431 225859 DEBUG nova.compute.resource_tracker [None req-0d8e3b5b-c177-4fbf-8037-7f1dc9763eee b9adeab2c2f5486e92ca8534ef11c720 a7bb8a09ecaa40e8980b2ed19afa279f - - default default] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4660MB free_disk=20.911510467529297GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 14:25:59 compute-1 nova_compute[225855]: 2026-01-20 14:25:59.431 225859 DEBUG oslo_concurrency.lockutils [None req-0d8e3b5b-c177-4fbf-8037-7f1dc9763eee b9adeab2c2f5486e92ca8534ef11c720 a7bb8a09ecaa40e8980b2ed19afa279f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:25:59 compute-1 nova_compute[225855]: 2026-01-20 14:25:59.431 225859 DEBUG oslo_concurrency.lockutils [None req-0d8e3b5b-c177-4fbf-8037-7f1dc9763eee b9adeab2c2f5486e92ca8534ef11c720 a7bb8a09ecaa40e8980b2ed19afa279f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:25:59 compute-1 nova_compute[225855]: 2026-01-20 14:25:59.480 225859 DEBUG nova.compute.resource_tracker [None req-0d8e3b5b-c177-4fbf-8037-7f1dc9763eee b9adeab2c2f5486e92ca8534ef11c720 a7bb8a09ecaa40e8980b2ed19afa279f - - default default] Migration for instance 85ec4052-1453-4c76-936e-bf76f2108416 refers to another host's instance! _pair_instances_to_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:903
Jan 20 14:25:59 compute-1 nova_compute[225855]: 2026-01-20 14:25:59.500 225859 DEBUG nova.compute.resource_tracker [None req-0d8e3b5b-c177-4fbf-8037-7f1dc9763eee b9adeab2c2f5486e92ca8534ef11c720 a7bb8a09ecaa40e8980b2ed19afa279f - - default default] [instance: 85ec4052-1453-4c76-936e-bf76f2108416] Skipping migration as instance is neither resizing nor live-migrating. _update_usage_from_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1491
Jan 20 14:25:59 compute-1 nova_compute[225855]: 2026-01-20 14:25:59.531 225859 DEBUG nova.compute.resource_tracker [None req-0d8e3b5b-c177-4fbf-8037-7f1dc9763eee b9adeab2c2f5486e92ca8534ef11c720 a7bb8a09ecaa40e8980b2ed19afa279f - - default default] Instance d1be7a29-3496-40ab-b61f-694622b7453b actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 20 14:25:59 compute-1 nova_compute[225855]: 2026-01-20 14:25:59.532 225859 DEBUG nova.compute.resource_tracker [None req-0d8e3b5b-c177-4fbf-8037-7f1dc9763eee b9adeab2c2f5486e92ca8534ef11c720 a7bb8a09ecaa40e8980b2ed19afa279f - - default default] Migration 04577902-c221-41db-927f-09d574c55fcd is active on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1640
Jan 20 14:25:59 compute-1 nova_compute[225855]: 2026-01-20 14:25:59.532 225859 DEBUG nova.compute.resource_tracker [None req-0d8e3b5b-c177-4fbf-8037-7f1dc9763eee b9adeab2c2f5486e92ca8534ef11c720 a7bb8a09ecaa40e8980b2ed19afa279f - - default default] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 14:25:59 compute-1 nova_compute[225855]: 2026-01-20 14:25:59.533 225859 DEBUG nova.compute.resource_tracker [None req-0d8e3b5b-c177-4fbf-8037-7f1dc9763eee b9adeab2c2f5486e92ca8534ef11c720 a7bb8a09ecaa40e8980b2ed19afa279f - - default default] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 14:25:59 compute-1 nova_compute[225855]: 2026-01-20 14:25:59.594 225859 DEBUG oslo_concurrency.processutils [None req-0d8e3b5b-c177-4fbf-8037-7f1dc9763eee b9adeab2c2f5486e92ca8534ef11c720 a7bb8a09ecaa40e8980b2ed19afa279f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:25:59 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/4228894258' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:25:59 compute-1 ceph-mon[81775]: pgmap v1040: 321 pgs: 321 active+clean; 376 MiB data, 455 MiB used, 21 GiB / 21 GiB avail; 4.7 MiB/s rd, 5.3 MiB/s wr, 160 op/s
Jan 20 14:25:59 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:26:00 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:26:00 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:26:00 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:26:00.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:26:00 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:26:00 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3380735433' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:26:00 compute-1 nova_compute[225855]: 2026-01-20 14:26:00.099 225859 DEBUG oslo_concurrency.processutils [None req-0d8e3b5b-c177-4fbf-8037-7f1dc9763eee b9adeab2c2f5486e92ca8534ef11c720 a7bb8a09ecaa40e8980b2ed19afa279f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.505s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:26:00 compute-1 nova_compute[225855]: 2026-01-20 14:26:00.107 225859 DEBUG nova.compute.provider_tree [None req-0d8e3b5b-c177-4fbf-8037-7f1dc9763eee b9adeab2c2f5486e92ca8534ef11c720 a7bb8a09ecaa40e8980b2ed19afa279f - - default default] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:26:00 compute-1 nova_compute[225855]: 2026-01-20 14:26:00.136 225859 DEBUG nova.scheduler.client.report [None req-0d8e3b5b-c177-4fbf-8037-7f1dc9763eee b9adeab2c2f5486e92ca8534ef11c720 a7bb8a09ecaa40e8980b2ed19afa279f - - default default] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:26:00 compute-1 nova_compute[225855]: 2026-01-20 14:26:00.191 225859 DEBUG nova.compute.resource_tracker [None req-0d8e3b5b-c177-4fbf-8037-7f1dc9763eee b9adeab2c2f5486e92ca8534ef11c720 a7bb8a09ecaa40e8980b2ed19afa279f - - default default] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 14:26:00 compute-1 nova_compute[225855]: 2026-01-20 14:26:00.192 225859 DEBUG oslo_concurrency.lockutils [None req-0d8e3b5b-c177-4fbf-8037-7f1dc9763eee b9adeab2c2f5486e92ca8534ef11c720 a7bb8a09ecaa40e8980b2ed19afa279f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.761s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:26:00 compute-1 nova_compute[225855]: 2026-01-20 14:26:00.203 225859 INFO nova.compute.manager [None req-0d8e3b5b-c177-4fbf-8037-7f1dc9763eee b9adeab2c2f5486e92ca8534ef11c720 a7bb8a09ecaa40e8980b2ed19afa279f - - default default] [instance: 85ec4052-1453-4c76-936e-bf76f2108416] Migrating instance to compute-2.ctlplane.example.com finished successfully.
Jan 20 14:26:00 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:26:00 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:26:00 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:26:00.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:26:00 compute-1 nova_compute[225855]: 2026-01-20 14:26:00.440 225859 INFO nova.scheduler.client.report [None req-0d8e3b5b-c177-4fbf-8037-7f1dc9763eee b9adeab2c2f5486e92ca8534ef11c720 a7bb8a09ecaa40e8980b2ed19afa279f - - default default] Deleted allocation for migration 04577902-c221-41db-927f-09d574c55fcd
Jan 20 14:26:00 compute-1 nova_compute[225855]: 2026-01-20 14:26:00.441 225859 DEBUG nova.virt.libvirt.driver [None req-0d8e3b5b-c177-4fbf-8037-7f1dc9763eee b9adeab2c2f5486e92ca8534ef11c720 a7bb8a09ecaa40e8980b2ed19afa279f - - default default] [instance: 85ec4052-1453-4c76-936e-bf76f2108416] Live migration monitoring is all done _live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10662
Jan 20 14:26:00 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e144 e144: 3 total, 3 up, 3 in
Jan 20 14:26:00 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/3380735433' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:26:01 compute-1 ceph-mon[81775]: osdmap e144: 3 total, 3 up, 3 in
Jan 20 14:26:01 compute-1 ceph-mon[81775]: pgmap v1042: 321 pgs: 4 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 315 active+clean; 347 MiB data, 487 MiB used, 21 GiB / 21 GiB avail; 6.1 MiB/s rd, 5.9 MiB/s wr, 243 op/s
Jan 20 14:26:02 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:26:02 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:26:02 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:26:02.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:26:02 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:26:02 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:26:02 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:26:02.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:26:02 compute-1 nova_compute[225855]: 2026-01-20 14:26:02.779 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:26:03 compute-1 nova_compute[225855]: 2026-01-20 14:26:03.059 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:26:04 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:26:04 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:26:04 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:26:04.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:26:04 compute-1 ceph-mon[81775]: pgmap v1043: 321 pgs: 4 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 315 active+clean; 359 MiB data, 492 MiB used, 21 GiB / 21 GiB avail; 6.5 MiB/s rd, 6.8 MiB/s wr, 278 op/s
Jan 20 14:26:04 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/1375991866' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 14:26:04 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/1375991866' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 14:26:04 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/3068101538' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:26:04 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:26:04 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:26:04 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:26:04.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:26:04 compute-1 nova_compute[225855]: 2026-01-20 14:26:04.729 225859 DEBUG oslo_concurrency.lockutils [None req-0bc81c77-aa15-4caa-a2a7-e72302364e33 aefb5652049e473a948c089d7c62ef1a 861a4f2b70b249afadeabfe85bda53a3 - - default default] Acquiring lock "d1be7a29-3496-40ab-b61f-694622b7453b" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:26:04 compute-1 nova_compute[225855]: 2026-01-20 14:26:04.730 225859 DEBUG oslo_concurrency.lockutils [None req-0bc81c77-aa15-4caa-a2a7-e72302364e33 aefb5652049e473a948c089d7c62ef1a 861a4f2b70b249afadeabfe85bda53a3 - - default default] Lock "d1be7a29-3496-40ab-b61f-694622b7453b" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:26:04 compute-1 nova_compute[225855]: 2026-01-20 14:26:04.731 225859 DEBUG oslo_concurrency.lockutils [None req-0bc81c77-aa15-4caa-a2a7-e72302364e33 aefb5652049e473a948c089d7c62ef1a 861a4f2b70b249afadeabfe85bda53a3 - - default default] Acquiring lock "d1be7a29-3496-40ab-b61f-694622b7453b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:26:04 compute-1 nova_compute[225855]: 2026-01-20 14:26:04.732 225859 DEBUG oslo_concurrency.lockutils [None req-0bc81c77-aa15-4caa-a2a7-e72302364e33 aefb5652049e473a948c089d7c62ef1a 861a4f2b70b249afadeabfe85bda53a3 - - default default] Lock "d1be7a29-3496-40ab-b61f-694622b7453b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:26:04 compute-1 nova_compute[225855]: 2026-01-20 14:26:04.732 225859 DEBUG oslo_concurrency.lockutils [None req-0bc81c77-aa15-4caa-a2a7-e72302364e33 aefb5652049e473a948c089d7c62ef1a 861a4f2b70b249afadeabfe85bda53a3 - - default default] Lock "d1be7a29-3496-40ab-b61f-694622b7453b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:26:04 compute-1 nova_compute[225855]: 2026-01-20 14:26:04.734 225859 INFO nova.compute.manager [None req-0bc81c77-aa15-4caa-a2a7-e72302364e33 aefb5652049e473a948c089d7c62ef1a 861a4f2b70b249afadeabfe85bda53a3 - - default default] [instance: d1be7a29-3496-40ab-b61f-694622b7453b] Terminating instance
Jan 20 14:26:04 compute-1 nova_compute[225855]: 2026-01-20 14:26:04.736 225859 DEBUG nova.compute.manager [None req-0bc81c77-aa15-4caa-a2a7-e72302364e33 aefb5652049e473a948c089d7c62ef1a 861a4f2b70b249afadeabfe85bda53a3 - - default default] [instance: d1be7a29-3496-40ab-b61f-694622b7453b] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 20 14:26:04 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:26:04 compute-1 kernel: tap495d90bb-db (unregistering): left promiscuous mode
Jan 20 14:26:04 compute-1 NetworkManager[49104]: <info>  [1768919164.7973] device (tap495d90bb-db): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 20 14:26:04 compute-1 nova_compute[225855]: 2026-01-20 14:26:04.809 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:26:04 compute-1 ovn_controller[130490]: 2026-01-20T14:26:04Z|00071|binding|INFO|Releasing lport 495d90bb-db88-4c6d-a712-bfd41c4e37fc from this chassis (sb_readonly=0)
Jan 20 14:26:04 compute-1 ovn_controller[130490]: 2026-01-20T14:26:04Z|00072|binding|INFO|Setting lport 495d90bb-db88-4c6d-a712-bfd41c4e37fc down in Southbound
Jan 20 14:26:04 compute-1 ovn_controller[130490]: 2026-01-20T14:26:04Z|00073|binding|INFO|Releasing lport 2837a2a0-6c5c-4cc8-9ed2-29c35ce5252e from this chassis (sb_readonly=0)
Jan 20 14:26:04 compute-1 ovn_controller[130490]: 2026-01-20T14:26:04Z|00074|binding|INFO|Setting lport 2837a2a0-6c5c-4cc8-9ed2-29c35ce5252e down in Southbound
Jan 20 14:26:04 compute-1 ovn_controller[130490]: 2026-01-20T14:26:04Z|00075|binding|INFO|Removing iface tap495d90bb-db ovn-installed in OVS
Jan 20 14:26:04 compute-1 nova_compute[225855]: 2026-01-20 14:26:04.814 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:26:04 compute-1 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d00000006.scope: Deactivated successfully.
Jan 20 14:26:04 compute-1 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d00000006.scope: Consumed 7.328s CPU time.
Jan 20 14:26:04 compute-1 nova_compute[225855]: 2026-01-20 14:26:04.865 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:26:04 compute-1 systemd-machined[194361]: Machine qemu-4-instance-00000006 terminated.
Jan 20 14:26:04 compute-1 nova_compute[225855]: 2026-01-20 14:26:04.962 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:26:04 compute-1 nova_compute[225855]: 2026-01-20 14:26:04.968 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:26:04 compute-1 nova_compute[225855]: 2026-01-20 14:26:04.987 225859 INFO nova.virt.libvirt.driver [-] [instance: d1be7a29-3496-40ab-b61f-694622b7453b] Instance destroyed successfully.
Jan 20 14:26:04 compute-1 nova_compute[225855]: 2026-01-20 14:26:04.988 225859 DEBUG nova.objects.instance [None req-0bc81c77-aa15-4caa-a2a7-e72302364e33 aefb5652049e473a948c089d7c62ef1a 861a4f2b70b249afadeabfe85bda53a3 - - default default] Lazy-loading 'resources' on Instance uuid d1be7a29-3496-40ab-b61f-694622b7453b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:26:05 compute-1 ovn_controller[130490]: 2026-01-20T14:26:05Z|00076|binding|INFO|Releasing lport 428c2ef0-5c20-4d34-88ac-9a0d29a78f0e from this chassis (sb_readonly=0)
Jan 20 14:26:05 compute-1 ovn_controller[130490]: 2026-01-20T14:26:05Z|00077|binding|INFO|Releasing lport 69d1df5f-8105-47f6-864e-688d91c13d13 from this chassis (sb_readonly=0)
Jan 20 14:26:05 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:26:05.223 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:81:4d:90 10.100.0.3'], port_security=['fa:16:3e:81:4d:90 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-parent-2135821709', 'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'd1be7a29-3496-40ab-b61f-694622b7453b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6f01f500-b631-4cdb-ae71-b33b0ccfb1aa', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-parent-2135821709', 'neutron:project_id': '861a4f2b70b249afadeabfe85bda53a3', 'neutron:revision_number': '12', 'neutron:security_group_ids': '31a0931a-1aaa-4760-9d26-94149371fd1b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9e54b11a-69dd-4260-a0b8-c84b36782857, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], logical_port=495d90bb-db88-4c6d-a712-bfd41c4e37fc) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:26:05 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:26:05.226 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:29:ac:f6 19.80.0.18'], port_security=['fa:16:3e:29:ac:f6 19.80.0.18'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=['495d90bb-db88-4c6d-a712-bfd41c4e37fc'], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-subport-1487867751', 'neutron:cidrs': '19.80.0.18/24', 'neutron:device_id': '', 'neutron:device_owner': 'trunk:subport', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7e058a3a-7e99-4576-99a5-9221e6721967', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-subport-1487867751', 'neutron:project_id': '861a4f2b70b249afadeabfe85bda53a3', 'neutron:revision_number': '5', 'neutron:security_group_ids': '31a0931a-1aaa-4760-9d26-94149371fd1b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[42], additional_encap=[], encap=[], mirror_rules=[], datapath=212b28d9-3d5f-4303-a5f1-6dbbb277d038, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=2837a2a0-6c5c-4cc8-9ed2-29c35ce5252e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:26:05 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:26:05.227 140354 INFO neutron.agent.ovn.metadata.agent [-] Port 495d90bb-db88-4c6d-a712-bfd41c4e37fc in datapath 6f01f500-b631-4cdb-ae71-b33b0ccfb1aa unbound from our chassis
Jan 20 14:26:05 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:26:05.229 140354 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6f01f500-b631-4cdb-ae71-b33b0ccfb1aa, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 20 14:26:05 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:26:05.230 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[f49b30fb-6599-4f8f-a6d5-35fafcfa7eb3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:26:05 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:26:05.231 140354 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-6f01f500-b631-4cdb-ae71-b33b0ccfb1aa namespace which is not needed anymore
Jan 20 14:26:05 compute-1 nova_compute[225855]: 2026-01-20 14:26:05.250 225859 DEBUG nova.virt.libvirt.vif [None req-0bc81c77-aa15-4caa-a2a7-e72302364e33 aefb5652049e473a948c089d7c62ef1a 861a4f2b70b249afadeabfe85bda53a3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-20T14:23:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-LiveAutoBlockMigrationV225Test-server-1966991232',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-liveautoblockmigrationv225test-server-1966991232',id=6,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-20T14:23:55Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='861a4f2b70b249afadeabfe85bda53a3',ramdisk_id='',reservation_id='r-68sk3sp0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',
image_min_ram='0',owner_project_name='tempest-LiveAutoBlockMigrationV225Test-1568967339',owner_user_name='tempest-LiveAutoBlockMigrationV225Test-1568967339-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T14:24:19Z,user_data=None,user_id='aefb5652049e473a948c089d7c62ef1a',uuid=d1be7a29-3496-40ab-b61f-694622b7453b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "495d90bb-db88-4c6d-a712-bfd41c4e37fc", "address": "fa:16:3e:81:4d:90", "network": {"id": "6f01f500-b631-4cdb-ae71-b33b0ccfb1aa", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-1745233184-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "861a4f2b70b249afadeabfe85bda53a3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap495d90bb-db", "ovs_interfaceid": "495d90bb-db88-4c6d-a712-bfd41c4e37fc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 20 14:26:05 compute-1 nova_compute[225855]: 2026-01-20 14:26:05.250 225859 DEBUG nova.network.os_vif_util [None req-0bc81c77-aa15-4caa-a2a7-e72302364e33 aefb5652049e473a948c089d7c62ef1a 861a4f2b70b249afadeabfe85bda53a3 - - default default] Converting VIF {"id": "495d90bb-db88-4c6d-a712-bfd41c4e37fc", "address": "fa:16:3e:81:4d:90", "network": {"id": "6f01f500-b631-4cdb-ae71-b33b0ccfb1aa", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-1745233184-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "861a4f2b70b249afadeabfe85bda53a3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap495d90bb-db", "ovs_interfaceid": "495d90bb-db88-4c6d-a712-bfd41c4e37fc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:26:05 compute-1 nova_compute[225855]: 2026-01-20 14:26:05.251 225859 DEBUG nova.network.os_vif_util [None req-0bc81c77-aa15-4caa-a2a7-e72302364e33 aefb5652049e473a948c089d7c62ef1a 861a4f2b70b249afadeabfe85bda53a3 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:81:4d:90,bridge_name='br-int',has_traffic_filtering=True,id=495d90bb-db88-4c6d-a712-bfd41c4e37fc,network=Network(6f01f500-b631-4cdb-ae71-b33b0ccfb1aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap495d90bb-db') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:26:05 compute-1 nova_compute[225855]: 2026-01-20 14:26:05.251 225859 DEBUG os_vif [None req-0bc81c77-aa15-4caa-a2a7-e72302364e33 aefb5652049e473a948c089d7c62ef1a 861a4f2b70b249afadeabfe85bda53a3 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:81:4d:90,bridge_name='br-int',has_traffic_filtering=True,id=495d90bb-db88-4c6d-a712-bfd41c4e37fc,network=Network(6f01f500-b631-4cdb-ae71-b33b0ccfb1aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap495d90bb-db') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 20 14:26:05 compute-1 nova_compute[225855]: 2026-01-20 14:26:05.253 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:26:05 compute-1 nova_compute[225855]: 2026-01-20 14:26:05.253 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap495d90bb-db, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:26:05 compute-1 nova_compute[225855]: 2026-01-20 14:26:05.255 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:26:05 compute-1 nova_compute[225855]: 2026-01-20 14:26:05.257 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 14:26:05 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/3573996431' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:26:05 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/1157289147' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:26:05 compute-1 nova_compute[225855]: 2026-01-20 14:26:05.319 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:26:05 compute-1 nova_compute[225855]: 2026-01-20 14:26:05.322 225859 INFO os_vif [None req-0bc81c77-aa15-4caa-a2a7-e72302364e33 aefb5652049e473a948c089d7c62ef1a 861a4f2b70b249afadeabfe85bda53a3 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:81:4d:90,bridge_name='br-int',has_traffic_filtering=True,id=495d90bb-db88-4c6d-a712-bfd41c4e37fc,network=Network(6f01f500-b631-4cdb-ae71-b33b0ccfb1aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap495d90bb-db')
Jan 20 14:26:05 compute-1 nova_compute[225855]: 2026-01-20 14:26:05.340 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:26:05 compute-1 neutron-haproxy-ovnmeta-6f01f500-b631-4cdb-ae71-b33b0ccfb1aa[231166]: [NOTICE]   (231170) : haproxy version is 2.8.14-c23fe91
Jan 20 14:26:05 compute-1 neutron-haproxy-ovnmeta-6f01f500-b631-4cdb-ae71-b33b0ccfb1aa[231166]: [NOTICE]   (231170) : path to executable is /usr/sbin/haproxy
Jan 20 14:26:05 compute-1 neutron-haproxy-ovnmeta-6f01f500-b631-4cdb-ae71-b33b0ccfb1aa[231166]: [WARNING]  (231170) : Exiting Master process...
Jan 20 14:26:05 compute-1 neutron-haproxy-ovnmeta-6f01f500-b631-4cdb-ae71-b33b0ccfb1aa[231166]: [WARNING]  (231170) : Exiting Master process...
Jan 20 14:26:05 compute-1 neutron-haproxy-ovnmeta-6f01f500-b631-4cdb-ae71-b33b0ccfb1aa[231166]: [ALERT]    (231170) : Current worker (231172) exited with code 143 (Terminated)
Jan 20 14:26:05 compute-1 neutron-haproxy-ovnmeta-6f01f500-b631-4cdb-ae71-b33b0ccfb1aa[231166]: [WARNING]  (231170) : All workers exited. Exiting... (0)
Jan 20 14:26:05 compute-1 systemd[1]: libpod-4b580939f68977becb4612edae62f413c82a1c4eebe3a7a68594b3b2bff36f92.scope: Deactivated successfully.
Jan 20 14:26:05 compute-1 podman[233013]: 2026-01-20 14:26:05.365660657 +0000 UTC m=+0.052698263 container died 4b580939f68977becb4612edae62f413c82a1c4eebe3a7a68594b3b2bff36f92 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6f01f500-b631-4cdb-ae71-b33b0ccfb1aa, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:26:05 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4b580939f68977becb4612edae62f413c82a1c4eebe3a7a68594b3b2bff36f92-userdata-shm.mount: Deactivated successfully.
Jan 20 14:26:05 compute-1 systemd[1]: var-lib-containers-storage-overlay-718240c016ea3692036380758388f57c4dd1c9d71b39140f51ca20ac8e217a05-merged.mount: Deactivated successfully.
Jan 20 14:26:05 compute-1 podman[233013]: 2026-01-20 14:26:05.410559841 +0000 UTC m=+0.097597467 container cleanup 4b580939f68977becb4612edae62f413c82a1c4eebe3a7a68594b3b2bff36f92 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6f01f500-b631-4cdb-ae71-b33b0ccfb1aa, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 20 14:26:05 compute-1 systemd[1]: libpod-conmon-4b580939f68977becb4612edae62f413c82a1c4eebe3a7a68594b3b2bff36f92.scope: Deactivated successfully.
Jan 20 14:26:05 compute-1 podman[233062]: 2026-01-20 14:26:05.476699429 +0000 UTC m=+0.045943875 container remove 4b580939f68977becb4612edae62f413c82a1c4eebe3a7a68594b3b2bff36f92 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6f01f500-b631-4cdb-ae71-b33b0ccfb1aa, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 20 14:26:05 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:26:05.483 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[26332431-1b8a-4949-9af8-bec3cccc1caf]: (4, ('Tue Jan 20 02:26:05 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-6f01f500-b631-4cdb-ae71-b33b0ccfb1aa (4b580939f68977becb4612edae62f413c82a1c4eebe3a7a68594b3b2bff36f92)\n4b580939f68977becb4612edae62f413c82a1c4eebe3a7a68594b3b2bff36f92\nTue Jan 20 02:26:05 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-6f01f500-b631-4cdb-ae71-b33b0ccfb1aa (4b580939f68977becb4612edae62f413c82a1c4eebe3a7a68594b3b2bff36f92)\n4b580939f68977becb4612edae62f413c82a1c4eebe3a7a68594b3b2bff36f92\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:26:05 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:26:05.485 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[958998e9-c35e-41d6-8914-ef53e08fbd2f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:26:05 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:26:05.486 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6f01f500-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:26:05 compute-1 nova_compute[225855]: 2026-01-20 14:26:05.487 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:26:05 compute-1 kernel: tap6f01f500-b0: left promiscuous mode
Jan 20 14:26:05 compute-1 nova_compute[225855]: 2026-01-20 14:26:05.500 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:26:05 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:26:05.502 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[6f45799a-36c6-4e65-99af-fce5577329bf]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:26:05 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:26:05.519 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[051ae10b-644e-41ea-8613-eed44be72a7d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:26:05 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:26:05.520 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[42c723f2-9c31-4113-9df8-cea9c6877746]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:26:05 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:26:05.537 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[42d9331b-5353-496b-bb18-23630036cd15]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 409089, 'reachable_time': 28470, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 233079, 'error': None, 'target': 'ovnmeta-6f01f500-b631-4cdb-ae71-b33b0ccfb1aa', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:26:05 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:26:05.540 140466 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-6f01f500-b631-4cdb-ae71-b33b0ccfb1aa deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 20 14:26:05 compute-1 systemd[1]: run-netns-ovnmeta\x2d6f01f500\x2db631\x2d4cdb\x2dae71\x2db33b0ccfb1aa.mount: Deactivated successfully.
Jan 20 14:26:05 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:26:05.540 140466 DEBUG oslo.privsep.daemon [-] privsep: reply[7a0dde27-334f-4d57-a691-41270f91bd46]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:26:05 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:26:05.541 140354 INFO neutron.agent.ovn.metadata.agent [-] Port 2837a2a0-6c5c-4cc8-9ed2-29c35ce5252e in datapath 7e058a3a-7e99-4576-99a5-9221e6721967 unbound from our chassis
Jan 20 14:26:05 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:26:05.543 140354 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7e058a3a-7e99-4576-99a5-9221e6721967, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 20 14:26:05 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:26:05.544 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[066b72a2-8986-42cb-9374-ece87519d1c8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:26:05 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:26:05.545 140354 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-7e058a3a-7e99-4576-99a5-9221e6721967 namespace which is not needed anymore
Jan 20 14:26:05 compute-1 neutron-haproxy-ovnmeta-7e058a3a-7e99-4576-99a5-9221e6721967[231376]: [NOTICE]   (231381) : haproxy version is 2.8.14-c23fe91
Jan 20 14:26:05 compute-1 neutron-haproxy-ovnmeta-7e058a3a-7e99-4576-99a5-9221e6721967[231376]: [NOTICE]   (231381) : path to executable is /usr/sbin/haproxy
Jan 20 14:26:05 compute-1 neutron-haproxy-ovnmeta-7e058a3a-7e99-4576-99a5-9221e6721967[231376]: [WARNING]  (231381) : Exiting Master process...
Jan 20 14:26:05 compute-1 neutron-haproxy-ovnmeta-7e058a3a-7e99-4576-99a5-9221e6721967[231376]: [ALERT]    (231381) : Current worker (231385) exited with code 143 (Terminated)
Jan 20 14:26:05 compute-1 neutron-haproxy-ovnmeta-7e058a3a-7e99-4576-99a5-9221e6721967[231376]: [WARNING]  (231381) : All workers exited. Exiting... (0)
Jan 20 14:26:05 compute-1 systemd[1]: libpod-903600b0c940a7555c4b06a0109946d8aa554233e1569849e83e037562c4e4f0.scope: Deactivated successfully.
Jan 20 14:26:05 compute-1 podman[233095]: 2026-01-20 14:26:05.685562585 +0000 UTC m=+0.061114019 container died 903600b0c940a7555c4b06a0109946d8aa554233e1569849e83e037562c4e4f0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7e058a3a-7e99-4576-99a5-9221e6721967, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 20 14:26:05 compute-1 systemd[1]: var-lib-containers-storage-overlay-cbc03c7536b385d00acac66f5f16e4bc400b57b72120eb89b6a3473d1cec9b14-merged.mount: Deactivated successfully.
Jan 20 14:26:05 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-903600b0c940a7555c4b06a0109946d8aa554233e1569849e83e037562c4e4f0-userdata-shm.mount: Deactivated successfully.
Jan 20 14:26:05 compute-1 podman[233095]: 2026-01-20 14:26:05.720511931 +0000 UTC m=+0.096063355 container cleanup 903600b0c940a7555c4b06a0109946d8aa554233e1569849e83e037562c4e4f0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7e058a3a-7e99-4576-99a5-9221e6721967, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202)
Jan 20 14:26:05 compute-1 systemd[1]: libpod-conmon-903600b0c940a7555c4b06a0109946d8aa554233e1569849e83e037562c4e4f0.scope: Deactivated successfully.
Jan 20 14:26:05 compute-1 nova_compute[225855]: 2026-01-20 14:26:05.747 225859 INFO nova.virt.libvirt.driver [None req-0bc81c77-aa15-4caa-a2a7-e72302364e33 aefb5652049e473a948c089d7c62ef1a 861a4f2b70b249afadeabfe85bda53a3 - - default default] [instance: d1be7a29-3496-40ab-b61f-694622b7453b] Deleting instance files /var/lib/nova/instances/d1be7a29-3496-40ab-b61f-694622b7453b_del
Jan 20 14:26:05 compute-1 nova_compute[225855]: 2026-01-20 14:26:05.748 225859 INFO nova.virt.libvirt.driver [None req-0bc81c77-aa15-4caa-a2a7-e72302364e33 aefb5652049e473a948c089d7c62ef1a 861a4f2b70b249afadeabfe85bda53a3 - - default default] [instance: d1be7a29-3496-40ab-b61f-694622b7453b] Deletion of /var/lib/nova/instances/d1be7a29-3496-40ab-b61f-694622b7453b_del complete
Jan 20 14:26:05 compute-1 podman[233121]: 2026-01-20 14:26:05.776931227 +0000 UTC m=+0.038369583 container remove 903600b0c940a7555c4b06a0109946d8aa554233e1569849e83e037562c4e4f0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7e058a3a-7e99-4576-99a5-9221e6721967, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 20 14:26:05 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:26:05.779 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[fab0fc2f-4e56-482e-9e92-c5bc4a332f9d]: (4, ('Tue Jan 20 02:26:05 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-7e058a3a-7e99-4576-99a5-9221e6721967 (903600b0c940a7555c4b06a0109946d8aa554233e1569849e83e037562c4e4f0)\n903600b0c940a7555c4b06a0109946d8aa554233e1569849e83e037562c4e4f0\nTue Jan 20 02:26:05 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-7e058a3a-7e99-4576-99a5-9221e6721967 (903600b0c940a7555c4b06a0109946d8aa554233e1569849e83e037562c4e4f0)\n903600b0c940a7555c4b06a0109946d8aa554233e1569849e83e037562c4e4f0\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:26:05 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:26:05.781 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[a9f56609-21c9-4d5d-b476-980878bed50e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:26:05 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:26:05.782 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7e058a3a-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:26:05 compute-1 nova_compute[225855]: 2026-01-20 14:26:05.783 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:26:05 compute-1 kernel: tap7e058a3a-70: left promiscuous mode
Jan 20 14:26:05 compute-1 nova_compute[225855]: 2026-01-20 14:26:05.785 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:26:05 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:26:05.789 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[fc4b4731-4901-404d-b784-4cd3f672a3e1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:26:05 compute-1 nova_compute[225855]: 2026-01-20 14:26:05.799 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:26:05 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:26:05.804 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[a9682754-b8e8-4f05-9787-f6615e3b8274]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:26:05 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:26:05.806 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[fb12b33d-cba7-4275-863e-b82ebe6dfe16]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:26:05 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:26:05.830 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[4ebb45e8-d15e-4ae7-bbba-be164cb59ab1]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 409207, 'reachable_time': 16918, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 233137, 'error': None, 'target': 'ovnmeta-7e058a3a-7e99-4576-99a5-9221e6721967', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:26:05 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:26:05.833 140466 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-7e058a3a-7e99-4576-99a5-9221e6721967 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 20 14:26:05 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:26:05.833 140466 DEBUG oslo.privsep.daemon [-] privsep: reply[ed2c9068-27dc-4f1b-9cd4-18f5d1a9197a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:26:06 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:26:06 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:26:06 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:26:06.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:26:06 compute-1 ceph-mon[81775]: pgmap v1044: 321 pgs: 4 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 315 active+clean; 301 MiB data, 478 MiB used, 21 GiB / 21 GiB avail; 7.1 MiB/s rd, 6.8 MiB/s wr, 303 op/s
Jan 20 14:26:06 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:26:06 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:26:06 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:26:06.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:26:06 compute-1 systemd[1]: run-netns-ovnmeta\x2d7e058a3a\x2d7e99\x2d4576\x2d99a5\x2d9221e6721967.mount: Deactivated successfully.
Jan 20 14:26:06 compute-1 nova_compute[225855]: 2026-01-20 14:26:06.479 225859 INFO nova.compute.manager [None req-0bc81c77-aa15-4caa-a2a7-e72302364e33 aefb5652049e473a948c089d7c62ef1a 861a4f2b70b249afadeabfe85bda53a3 - - default default] [instance: d1be7a29-3496-40ab-b61f-694622b7453b] Took 1.74 seconds to destroy the instance on the hypervisor.
Jan 20 14:26:06 compute-1 nova_compute[225855]: 2026-01-20 14:26:06.480 225859 DEBUG oslo.service.loopingcall [None req-0bc81c77-aa15-4caa-a2a7-e72302364e33 aefb5652049e473a948c089d7c62ef1a 861a4f2b70b249afadeabfe85bda53a3 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 20 14:26:06 compute-1 nova_compute[225855]: 2026-01-20 14:26:06.481 225859 DEBUG nova.compute.manager [-] [instance: d1be7a29-3496-40ab-b61f-694622b7453b] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 20 14:26:06 compute-1 nova_compute[225855]: 2026-01-20 14:26:06.482 225859 DEBUG nova.network.neutron [-] [instance: d1be7a29-3496-40ab-b61f-694622b7453b] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 20 14:26:06 compute-1 nova_compute[225855]: 2026-01-20 14:26:06.565 225859 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1768919151.564746, 85ec4052-1453-4c76-936e-bf76f2108416 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:26:06 compute-1 nova_compute[225855]: 2026-01-20 14:26:06.566 225859 INFO nova.compute.manager [-] [instance: 85ec4052-1453-4c76-936e-bf76f2108416] VM Stopped (Lifecycle Event)
Jan 20 14:26:06 compute-1 nova_compute[225855]: 2026-01-20 14:26:06.586 225859 DEBUG nova.compute.manager [None req-0181ed31-72bb-4f2d-a2ad-123e33285c93 - - - - - -] [instance: 85ec4052-1453-4c76-936e-bf76f2108416] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:26:07 compute-1 nova_compute[225855]: 2026-01-20 14:26:07.240 225859 DEBUG nova.compute.manager [req-a448b2df-5d3f-4ca4-ab80-cc1c6073274d req-3438179e-7358-425c-b950-a8699fbca18d 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d1be7a29-3496-40ab-b61f-694622b7453b] Received event network-vif-unplugged-495d90bb-db88-4c6d-a712-bfd41c4e37fc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:26:07 compute-1 nova_compute[225855]: 2026-01-20 14:26:07.241 225859 DEBUG oslo_concurrency.lockutils [req-a448b2df-5d3f-4ca4-ab80-cc1c6073274d req-3438179e-7358-425c-b950-a8699fbca18d 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "d1be7a29-3496-40ab-b61f-694622b7453b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:26:07 compute-1 nova_compute[225855]: 2026-01-20 14:26:07.242 225859 DEBUG oslo_concurrency.lockutils [req-a448b2df-5d3f-4ca4-ab80-cc1c6073274d req-3438179e-7358-425c-b950-a8699fbca18d 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "d1be7a29-3496-40ab-b61f-694622b7453b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:26:07 compute-1 nova_compute[225855]: 2026-01-20 14:26:07.243 225859 DEBUG oslo_concurrency.lockutils [req-a448b2df-5d3f-4ca4-ab80-cc1c6073274d req-3438179e-7358-425c-b950-a8699fbca18d 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "d1be7a29-3496-40ab-b61f-694622b7453b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:26:07 compute-1 nova_compute[225855]: 2026-01-20 14:26:07.243 225859 DEBUG nova.compute.manager [req-a448b2df-5d3f-4ca4-ab80-cc1c6073274d req-3438179e-7358-425c-b950-a8699fbca18d 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d1be7a29-3496-40ab-b61f-694622b7453b] No waiting events found dispatching network-vif-unplugged-495d90bb-db88-4c6d-a712-bfd41c4e37fc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:26:07 compute-1 nova_compute[225855]: 2026-01-20 14:26:07.244 225859 DEBUG nova.compute.manager [req-a448b2df-5d3f-4ca4-ab80-cc1c6073274d req-3438179e-7358-425c-b950-a8699fbca18d 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d1be7a29-3496-40ab-b61f-694622b7453b] Received event network-vif-unplugged-495d90bb-db88-4c6d-a712-bfd41c4e37fc for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 20 14:26:08 compute-1 podman[233139]: 2026-01-20 14:26:08.012963583 +0000 UTC m=+0.064138843 container health_status 533cd8e0c3b8b7f910f15cef49e7f879ab46c12ce646a35628c15f13725eb500 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Jan 20 14:26:08 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:26:08 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:26:08 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:26:08.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:26:08 compute-1 nova_compute[225855]: 2026-01-20 14:26:08.098 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:26:08 compute-1 ceph-mon[81775]: pgmap v1045: 321 pgs: 321 active+clean; 199 MiB data, 406 MiB used, 21 GiB / 21 GiB avail; 4.5 MiB/s rd, 3.6 MiB/s wr, 254 op/s
Jan 20 14:26:08 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:26:08 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:26:08 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:26:08.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:26:09 compute-1 nova_compute[225855]: 2026-01-20 14:26:09.403 225859 DEBUG nova.compute.manager [req-bcb3c3a5-7d12-4ff3-89c3-fc6782191433 req-a361f914-d7a8-40af-941e-d79e6bda0934 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d1be7a29-3496-40ab-b61f-694622b7453b] Received event network-vif-plugged-495d90bb-db88-4c6d-a712-bfd41c4e37fc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:26:09 compute-1 nova_compute[225855]: 2026-01-20 14:26:09.404 225859 DEBUG oslo_concurrency.lockutils [req-bcb3c3a5-7d12-4ff3-89c3-fc6782191433 req-a361f914-d7a8-40af-941e-d79e6bda0934 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "d1be7a29-3496-40ab-b61f-694622b7453b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:26:09 compute-1 nova_compute[225855]: 2026-01-20 14:26:09.404 225859 DEBUG oslo_concurrency.lockutils [req-bcb3c3a5-7d12-4ff3-89c3-fc6782191433 req-a361f914-d7a8-40af-941e-d79e6bda0934 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "d1be7a29-3496-40ab-b61f-694622b7453b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:26:09 compute-1 nova_compute[225855]: 2026-01-20 14:26:09.405 225859 DEBUG oslo_concurrency.lockutils [req-bcb3c3a5-7d12-4ff3-89c3-fc6782191433 req-a361f914-d7a8-40af-941e-d79e6bda0934 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "d1be7a29-3496-40ab-b61f-694622b7453b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:26:09 compute-1 nova_compute[225855]: 2026-01-20 14:26:09.405 225859 DEBUG nova.compute.manager [req-bcb3c3a5-7d12-4ff3-89c3-fc6782191433 req-a361f914-d7a8-40af-941e-d79e6bda0934 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d1be7a29-3496-40ab-b61f-694622b7453b] No waiting events found dispatching network-vif-plugged-495d90bb-db88-4c6d-a712-bfd41c4e37fc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:26:09 compute-1 nova_compute[225855]: 2026-01-20 14:26:09.405 225859 WARNING nova.compute.manager [req-bcb3c3a5-7d12-4ff3-89c3-fc6782191433 req-a361f914-d7a8-40af-941e-d79e6bda0934 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d1be7a29-3496-40ab-b61f-694622b7453b] Received unexpected event network-vif-plugged-495d90bb-db88-4c6d-a712-bfd41c4e37fc for instance with vm_state active and task_state deleting.
Jan 20 14:26:09 compute-1 nova_compute[225855]: 2026-01-20 14:26:09.493 225859 DEBUG nova.network.neutron [-] [instance: d1be7a29-3496-40ab-b61f-694622b7453b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:26:09 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:26:09 compute-1 nova_compute[225855]: 2026-01-20 14:26:09.791 225859 INFO nova.compute.manager [-] [instance: d1be7a29-3496-40ab-b61f-694622b7453b] Took 3.31 seconds to deallocate network for instance.
Jan 20 14:26:09 compute-1 nova_compute[225855]: 2026-01-20 14:26:09.921 225859 DEBUG oslo_concurrency.lockutils [None req-0bc81c77-aa15-4caa-a2a7-e72302364e33 aefb5652049e473a948c089d7c62ef1a 861a4f2b70b249afadeabfe85bda53a3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:26:09 compute-1 nova_compute[225855]: 2026-01-20 14:26:09.923 225859 DEBUG oslo_concurrency.lockutils [None req-0bc81c77-aa15-4caa-a2a7-e72302364e33 aefb5652049e473a948c089d7c62ef1a 861a4f2b70b249afadeabfe85bda53a3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:26:09 compute-1 nova_compute[225855]: 2026-01-20 14:26:09.980 225859 DEBUG oslo_concurrency.processutils [None req-0bc81c77-aa15-4caa-a2a7-e72302364e33 aefb5652049e473a948c089d7c62ef1a 861a4f2b70b249afadeabfe85bda53a3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:26:10 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:26:10 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:26:10 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:26:10.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:26:10 compute-1 nova_compute[225855]: 2026-01-20 14:26:10.278 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:26:10 compute-1 nova_compute[225855]: 2026-01-20 14:26:10.340 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:26:10 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:26:10 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:26:10 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:26:10.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:26:10 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:26:10 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1338130560' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:26:10 compute-1 nova_compute[225855]: 2026-01-20 14:26:10.485 225859 DEBUG oslo_concurrency.processutils [None req-0bc81c77-aa15-4caa-a2a7-e72302364e33 aefb5652049e473a948c089d7c62ef1a 861a4f2b70b249afadeabfe85bda53a3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.505s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:26:10 compute-1 nova_compute[225855]: 2026-01-20 14:26:10.490 225859 DEBUG nova.compute.provider_tree [None req-0bc81c77-aa15-4caa-a2a7-e72302364e33 aefb5652049e473a948c089d7c62ef1a 861a4f2b70b249afadeabfe85bda53a3 - - default default] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:26:10 compute-1 nova_compute[225855]: 2026-01-20 14:26:10.529 225859 DEBUG nova.scheduler.client.report [None req-0bc81c77-aa15-4caa-a2a7-e72302364e33 aefb5652049e473a948c089d7c62ef1a 861a4f2b70b249afadeabfe85bda53a3 - - default default] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:26:10 compute-1 nova_compute[225855]: 2026-01-20 14:26:10.557 225859 DEBUG oslo_concurrency.lockutils [None req-0bc81c77-aa15-4caa-a2a7-e72302364e33 aefb5652049e473a948c089d7c62ef1a 861a4f2b70b249afadeabfe85bda53a3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.634s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:26:10 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e145 e145: 3 total, 3 up, 3 in
Jan 20 14:26:10 compute-1 ceph-mon[81775]: pgmap v1046: 321 pgs: 321 active+clean; 169 MiB data, 370 MiB used, 21 GiB / 21 GiB avail; 2.4 MiB/s rd, 1.6 MiB/s wr, 232 op/s
Jan 20 14:26:10 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/1338130560' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:26:10 compute-1 nova_compute[225855]: 2026-01-20 14:26:10.606 225859 INFO nova.scheduler.client.report [None req-0bc81c77-aa15-4caa-a2a7-e72302364e33 aefb5652049e473a948c089d7c62ef1a 861a4f2b70b249afadeabfe85bda53a3 - - default default] Deleted allocations for instance d1be7a29-3496-40ab-b61f-694622b7453b
Jan 20 14:26:10 compute-1 nova_compute[225855]: 2026-01-20 14:26:10.701 225859 DEBUG oslo_concurrency.lockutils [None req-0bc81c77-aa15-4caa-a2a7-e72302364e33 aefb5652049e473a948c089d7c62ef1a 861a4f2b70b249afadeabfe85bda53a3 - - default default] Lock "d1be7a29-3496-40ab-b61f-694622b7453b" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.971s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:26:11 compute-1 ceph-mon[81775]: osdmap e145: 3 total, 3 up, 3 in
Jan 20 14:26:12 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:26:12 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:26:12 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:26:12.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:26:12 compute-1 nova_compute[225855]: 2026-01-20 14:26:12.335 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:26:12 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:26:12 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:26:12 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:26:12.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:26:12 compute-1 nova_compute[225855]: 2026-01-20 14:26:12.375 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:26:12 compute-1 nova_compute[225855]: 2026-01-20 14:26:12.375 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 14:26:12 compute-1 nova_compute[225855]: 2026-01-20 14:26:12.375 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 20 14:26:12 compute-1 nova_compute[225855]: 2026-01-20 14:26:12.392 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 20 14:26:12 compute-1 nova_compute[225855]: 2026-01-20 14:26:12.392 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:26:13 compute-1 nova_compute[225855]: 2026-01-20 14:26:13.100 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:26:13 compute-1 ceph-mon[81775]: pgmap v1048: 321 pgs: 321 active+clean; 169 MiB data, 370 MiB used, 21 GiB / 21 GiB avail; 1.6 MiB/s rd, 956 KiB/s wr, 167 op/s
Jan 20 14:26:13 compute-1 nova_compute[225855]: 2026-01-20 14:26:13.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:26:14 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:26:14 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:26:14 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:26:14.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:26:14 compute-1 ceph-mon[81775]: pgmap v1049: 321 pgs: 321 active+clean; 169 MiB data, 367 MiB used, 21 GiB / 21 GiB avail; 2.2 MiB/s rd, 34 KiB/s wr, 156 op/s
Jan 20 14:26:14 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/817449645' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:26:14 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/1018950850' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 14:26:14 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/1018950850' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 14:26:14 compute-1 nova_compute[225855]: 2026-01-20 14:26:14.340 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:26:14 compute-1 nova_compute[225855]: 2026-01-20 14:26:14.341 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:26:14 compute-1 nova_compute[225855]: 2026-01-20 14:26:14.369 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:26:14 compute-1 nova_compute[225855]: 2026-01-20 14:26:14.369 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:26:14 compute-1 nova_compute[225855]: 2026-01-20 14:26:14.369 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:26:14 compute-1 nova_compute[225855]: 2026-01-20 14:26:14.369 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 14:26:14 compute-1 nova_compute[225855]: 2026-01-20 14:26:14.370 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:26:14 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:26:14 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:26:14 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:26:14.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:26:14 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:26:14 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:26:14 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/448007984' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:26:14 compute-1 nova_compute[225855]: 2026-01-20 14:26:14.793 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.423s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:26:14 compute-1 nova_compute[225855]: 2026-01-20 14:26:14.992 225859 WARNING nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 14:26:14 compute-1 nova_compute[225855]: 2026-01-20 14:26:14.994 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4810MB free_disk=20.921836853027344GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 14:26:14 compute-1 nova_compute[225855]: 2026-01-20 14:26:14.994 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:26:14 compute-1 nova_compute[225855]: 2026-01-20 14:26:14.995 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:26:15 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/3490535771' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:26:15 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/448007984' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:26:15 compute-1 nova_compute[225855]: 2026-01-20 14:26:15.283 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:26:15 compute-1 nova_compute[225855]: 2026-01-20 14:26:15.701 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 14:26:15 compute-1 nova_compute[225855]: 2026-01-20 14:26:15.701 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 14:26:15 compute-1 nova_compute[225855]: 2026-01-20 14:26:15.722 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:26:16 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:26:16 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:26:16 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:26:16.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:26:16 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:26:16 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1806203902' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:26:16 compute-1 nova_compute[225855]: 2026-01-20 14:26:16.169 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:26:16 compute-1 nova_compute[225855]: 2026-01-20 14:26:16.173 225859 DEBUG nova.compute.provider_tree [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:26:16 compute-1 nova_compute[225855]: 2026-01-20 14:26:16.370 225859 DEBUG nova.scheduler.client.report [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:26:16 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:26:16 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:26:16 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:26:16.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:26:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:26:16.386 140354 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:26:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:26:16.387 140354 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:26:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:26:16.387 140354 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:26:16 compute-1 nova_compute[225855]: 2026-01-20 14:26:16.672 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 14:26:16 compute-1 nova_compute[225855]: 2026-01-20 14:26:16.673 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.678s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:26:17 compute-1 ceph-mon[81775]: pgmap v1050: 321 pgs: 321 active+clean; 169 MiB data, 367 MiB used, 21 GiB / 21 GiB avail; 2.4 MiB/s rd, 19 KiB/s wr, 163 op/s
Jan 20 14:26:17 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/1806203902' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:26:17 compute-1 sudo[233231]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:26:17 compute-1 sudo[233231]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:26:17 compute-1 sudo[233231]: pam_unix(sudo:session): session closed for user root
Jan 20 14:26:17 compute-1 nova_compute[225855]: 2026-01-20 14:26:17.671 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:26:17 compute-1 nova_compute[225855]: 2026-01-20 14:26:17.672 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:26:17 compute-1 nova_compute[225855]: 2026-01-20 14:26:17.672 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:26:17 compute-1 nova_compute[225855]: 2026-01-20 14:26:17.672 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 14:26:17 compute-1 sudo[233256]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:26:17 compute-1 sudo[233256]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:26:17 compute-1 sudo[233256]: pam_unix(sudo:session): session closed for user root
Jan 20 14:26:18 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:26:18 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:26:18 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:26:18.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:26:18 compute-1 nova_compute[225855]: 2026-01-20 14:26:18.137 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:26:18 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/333192495' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:26:18 compute-1 ceph-mon[81775]: pgmap v1051: 321 pgs: 321 active+clean; 171 MiB data, 367 MiB used, 21 GiB / 21 GiB avail; 2.9 MiB/s rd, 51 KiB/s wr, 155 op/s
Jan 20 14:26:18 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:26:18 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:26:18 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:26:18.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:26:19 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/3111387232' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:26:19 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e146 e146: 3 total, 3 up, 3 in
Jan 20 14:26:19 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:26:19 compute-1 nova_compute[225855]: 2026-01-20 14:26:19.985 225859 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1768919164.9837813, d1be7a29-3496-40ab-b61f-694622b7453b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:26:19 compute-1 nova_compute[225855]: 2026-01-20 14:26:19.985 225859 INFO nova.compute.manager [-] [instance: d1be7a29-3496-40ab-b61f-694622b7453b] VM Stopped (Lifecycle Event)
Jan 20 14:26:20 compute-1 nova_compute[225855]: 2026-01-20 14:26:20.016 225859 DEBUG nova.compute.manager [None req-c8fa4772-e876-47d9-907d-cfbd7fa67f72 - - - - - -] [instance: d1be7a29-3496-40ab-b61f-694622b7453b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:26:20 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:26:20 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:26:20 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:26:20.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:26:20 compute-1 nova_compute[225855]: 2026-01-20 14:26:20.288 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:26:20 compute-1 ceph-mon[81775]: pgmap v1052: 321 pgs: 321 active+clean; 171 MiB data, 367 MiB used, 21 GiB / 21 GiB avail; 2.9 MiB/s rd, 52 KiB/s wr, 145 op/s
Jan 20 14:26:20 compute-1 ceph-mon[81775]: osdmap e146: 3 total, 3 up, 3 in
Jan 20 14:26:20 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e147 e147: 3 total, 3 up, 3 in
Jan 20 14:26:20 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:26:20 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:26:20 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:26:20.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:26:21 compute-1 ceph-mon[81775]: osdmap e147: 3 total, 3 up, 3 in
Jan 20 14:26:21 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e148 e148: 3 total, 3 up, 3 in
Jan 20 14:26:22 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:26:22 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:26:22 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:26:22.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:26:22 compute-1 ceph-mon[81775]: pgmap v1055: 321 pgs: 321 active+clean; 204 MiB data, 382 MiB used, 21 GiB / 21 GiB avail; 4.8 MiB/s rd, 2.0 MiB/s wr, 140 op/s
Jan 20 14:26:22 compute-1 ceph-mon[81775]: osdmap e148: 3 total, 3 up, 3 in
Jan 20 14:26:22 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:26:22 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:26:22 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:26:22.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:26:22 compute-1 ceph-mon[81775]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #49. Immutable memtables: 0.
Jan 20 14:26:22 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:26:22.400378) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 20 14:26:22 compute-1 ceph-mon[81775]: rocksdb: [db/flush_job.cc:856] [default] [JOB 27] Flushing memtable with next log file: 49
Jan 20 14:26:22 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768919182400431, "job": 27, "event": "flush_started", "num_memtables": 1, "num_entries": 2152, "num_deletes": 252, "total_data_size": 4857933, "memory_usage": 4932152, "flush_reason": "Manual Compaction"}
Jan 20 14:26:22 compute-1 ceph-mon[81775]: rocksdb: [db/flush_job.cc:885] [default] [JOB 27] Level-0 flush table #50: started
Jan 20 14:26:22 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768919182418025, "cf_name": "default", "job": 27, "event": "table_file_creation", "file_number": 50, "file_size": 3192400, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 23273, "largest_seqno": 25420, "table_properties": {"data_size": 3183657, "index_size": 5301, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 19330, "raw_average_key_size": 20, "raw_value_size": 3165714, "raw_average_value_size": 3393, "num_data_blocks": 234, "num_entries": 933, "num_filter_entries": 933, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768919003, "oldest_key_time": 1768919003, "file_creation_time": 1768919182, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1539d774-8a6f-4e48-b253-137c44586344", "db_session_id": "LFF7G2OZDOU7TKQ8MKAH", "orig_file_number": 50, "seqno_to_time_mapping": "N/A"}}
Jan 20 14:26:22 compute-1 ceph-mon[81775]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 27] Flush lasted 17677 microseconds, and 6911 cpu microseconds.
Jan 20 14:26:22 compute-1 ceph-mon[81775]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 14:26:22 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:26:22.418056) [db/flush_job.cc:967] [default] [JOB 27] Level-0 flush table #50: 3192400 bytes OK
Jan 20 14:26:22 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:26:22.418071) [db/memtable_list.cc:519] [default] Level-0 commit table #50 started
Jan 20 14:26:22 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:26:22.424291) [db/memtable_list.cc:722] [default] Level-0 commit table #50: memtable #1 done
Jan 20 14:26:22 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:26:22.424305) EVENT_LOG_v1 {"time_micros": 1768919182424300, "job": 27, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 20 14:26:22 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:26:22.424322) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 20 14:26:22 compute-1 ceph-mon[81775]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 27] Try to delete WAL files size 4848221, prev total WAL file size 4848221, number of live WAL files 2.
Jan 20 14:26:22 compute-1 ceph-mon[81775]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000046.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 14:26:22 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:26:22.425533) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031373537' seq:72057594037927935, type:22 .. '7061786F730032303039' seq:0, type:0; will stop at (end)
Jan 20 14:26:22 compute-1 ceph-mon[81775]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 28] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 20 14:26:22 compute-1 ceph-mon[81775]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 27 Base level 0, inputs: [50(3117KB)], [48(7667KB)]
Jan 20 14:26:22 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768919182425568, "job": 28, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [50], "files_L6": [48], "score": -1, "input_data_size": 11044413, "oldest_snapshot_seqno": -1}
Jan 20 14:26:22 compute-1 ceph-mon[81775]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 28] Generated table #51: 4957 keys, 9068597 bytes, temperature: kUnknown
Jan 20 14:26:22 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768919182514546, "cf_name": "default", "job": 28, "event": "table_file_creation", "file_number": 51, "file_size": 9068597, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9034598, "index_size": 20513, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12421, "raw_key_size": 125550, "raw_average_key_size": 25, "raw_value_size": 8944040, "raw_average_value_size": 1804, "num_data_blocks": 840, "num_entries": 4957, "num_filter_entries": 4957, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768917474, "oldest_key_time": 0, "file_creation_time": 1768919182, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1539d774-8a6f-4e48-b253-137c44586344", "db_session_id": "LFF7G2OZDOU7TKQ8MKAH", "orig_file_number": 51, "seqno_to_time_mapping": "N/A"}}
Jan 20 14:26:22 compute-1 ceph-mon[81775]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 14:26:22 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:26:22.514933) [db/compaction/compaction_job.cc:1663] [default] [JOB 28] Compacted 1@0 + 1@6 files to L6 => 9068597 bytes
Jan 20 14:26:22 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:26:22.517927) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 124.0 rd, 101.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.0, 7.5 +0.0 blob) out(8.6 +0.0 blob), read-write-amplify(6.3) write-amplify(2.8) OK, records in: 5479, records dropped: 522 output_compression: NoCompression
Jan 20 14:26:22 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:26:22.517947) EVENT_LOG_v1 {"time_micros": 1768919182517938, "job": 28, "event": "compaction_finished", "compaction_time_micros": 89088, "compaction_time_cpu_micros": 20699, "output_level": 6, "num_output_files": 1, "total_output_size": 9068597, "num_input_records": 5479, "num_output_records": 4957, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 20 14:26:22 compute-1 ceph-mon[81775]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000050.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 14:26:22 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768919182518683, "job": 28, "event": "table_file_deletion", "file_number": 50}
Jan 20 14:26:22 compute-1 ceph-mon[81775]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000048.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 14:26:22 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768919182520533, "job": 28, "event": "table_file_deletion", "file_number": 48}
Jan 20 14:26:22 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:26:22.425437) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:26:22 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:26:22.520632) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:26:22 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:26:22.520638) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:26:22 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:26:22.520640) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:26:22 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:26:22.520642) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:26:22 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:26:22.520644) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:26:23 compute-1 nova_compute[225855]: 2026-01-20 14:26:23.176 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:26:24 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:26:24 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:26:24 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:26:24.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:26:24 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:26:24 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:26:24 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:26:24.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:26:24 compute-1 ceph-mon[81775]: pgmap v1057: 321 pgs: 321 active+clean; 226 MiB data, 394 MiB used, 21 GiB / 21 GiB avail; 5.4 MiB/s rd, 4.6 MiB/s wr, 78 op/s
Jan 20 14:26:24 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:26:25 compute-1 nova_compute[225855]: 2026-01-20 14:26:25.292 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:26:25 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e149 e149: 3 total, 3 up, 3 in
Jan 20 14:26:26 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:26:26 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:26:26 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:26:26.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:26:26 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:26:26 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:26:26 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:26:26.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:26:26 compute-1 ceph-mon[81775]: pgmap v1058: 321 pgs: 321 active+clean; 256 MiB data, 418 MiB used, 21 GiB / 21 GiB avail; 8.0 MiB/s rd, 8.6 MiB/s wr, 199 op/s
Jan 20 14:26:26 compute-1 ceph-mon[81775]: osdmap e149: 3 total, 3 up, 3 in
Jan 20 14:26:27 compute-1 podman[233285]: 2026-01-20 14:26:27.104016018 +0000 UTC m=+0.149054665 container health_status 72be95c12e041eb2cd1a16ad70dfa1391d9f03a5f3180a614ead9459dcfcd46f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 20 14:26:27 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/1499673647' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:26:28 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:26:28 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:26:28 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:26:28.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:26:28 compute-1 nova_compute[225855]: 2026-01-20 14:26:28.207 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:26:28 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:26:28 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:26:28 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:26:28.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:26:28 compute-1 ceph-mon[81775]: pgmap v1060: 321 pgs: 321 active+clean; 261 MiB data, 442 MiB used, 21 GiB / 21 GiB avail; 7.4 MiB/s rd, 10 MiB/s wr, 278 op/s
Jan 20 14:26:29 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:26:30 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:26:30 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:26:30 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:26:30.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:26:30 compute-1 nova_compute[225855]: 2026-01-20 14:26:30.227 225859 DEBUG oslo_concurrency.lockutils [None req-1f59619e-257a-4cab-b645-d76ff4f626f0 c85759c031f744d2b9774757c7eb3cc2 95f7d246c566473eb07dba860a310578 - - default default] Acquiring lock "6091ab6e-2530-4b48-b482-00867d3c66c5" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:26:30 compute-1 nova_compute[225855]: 2026-01-20 14:26:30.227 225859 DEBUG oslo_concurrency.lockutils [None req-1f59619e-257a-4cab-b645-d76ff4f626f0 c85759c031f744d2b9774757c7eb3cc2 95f7d246c566473eb07dba860a310578 - - default default] Lock "6091ab6e-2530-4b48-b482-00867d3c66c5" acquired by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:26:30 compute-1 nova_compute[225855]: 2026-01-20 14:26:30.228 225859 INFO nova.compute.manager [None req-1f59619e-257a-4cab-b645-d76ff4f626f0 c85759c031f744d2b9774757c7eb3cc2 95f7d246c566473eb07dba860a310578 - - default default] [instance: 6091ab6e-2530-4b48-b482-00867d3c66c5] Unshelving
Jan 20 14:26:30 compute-1 nova_compute[225855]: 2026-01-20 14:26:30.296 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:26:30 compute-1 nova_compute[225855]: 2026-01-20 14:26:30.349 225859 DEBUG oslo_concurrency.lockutils [None req-1f59619e-257a-4cab-b645-d76ff4f626f0 c85759c031f744d2b9774757c7eb3cc2 95f7d246c566473eb07dba860a310578 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:26:30 compute-1 nova_compute[225855]: 2026-01-20 14:26:30.350 225859 DEBUG oslo_concurrency.lockutils [None req-1f59619e-257a-4cab-b645-d76ff4f626f0 c85759c031f744d2b9774757c7eb3cc2 95f7d246c566473eb07dba860a310578 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:26:30 compute-1 nova_compute[225855]: 2026-01-20 14:26:30.355 225859 DEBUG nova.objects.instance [None req-1f59619e-257a-4cab-b645-d76ff4f626f0 c85759c031f744d2b9774757c7eb3cc2 95f7d246c566473eb07dba860a310578 - - default default] Lazy-loading 'pci_requests' on Instance uuid 6091ab6e-2530-4b48-b482-00867d3c66c5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:26:30 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:26:30 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:26:30 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:26:30.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:26:30 compute-1 nova_compute[225855]: 2026-01-20 14:26:30.504 225859 DEBUG nova.objects.instance [None req-1f59619e-257a-4cab-b645-d76ff4f626f0 c85759c031f744d2b9774757c7eb3cc2 95f7d246c566473eb07dba860a310578 - - default default] Lazy-loading 'numa_topology' on Instance uuid 6091ab6e-2530-4b48-b482-00867d3c66c5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:26:30 compute-1 nova_compute[225855]: 2026-01-20 14:26:30.520 225859 DEBUG nova.virt.hardware [None req-1f59619e-257a-4cab-b645-d76ff4f626f0 c85759c031f744d2b9774757c7eb3cc2 95f7d246c566473eb07dba860a310578 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 20 14:26:30 compute-1 nova_compute[225855]: 2026-01-20 14:26:30.520 225859 INFO nova.compute.claims [None req-1f59619e-257a-4cab-b645-d76ff4f626f0 c85759c031f744d2b9774757c7eb3cc2 95f7d246c566473eb07dba860a310578 - - default default] [instance: 6091ab6e-2530-4b48-b482-00867d3c66c5] Claim successful on node compute-1.ctlplane.example.com
Jan 20 14:26:30 compute-1 nova_compute[225855]: 2026-01-20 14:26:30.624 225859 DEBUG oslo_concurrency.processutils [None req-1f59619e-257a-4cab-b645-d76ff4f626f0 c85759c031f744d2b9774757c7eb3cc2 95f7d246c566473eb07dba860a310578 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:26:30 compute-1 ceph-mon[81775]: pgmap v1061: 321 pgs: 321 active+clean; 241 MiB data, 428 MiB used, 21 GiB / 21 GiB avail; 3.2 MiB/s rd, 7.1 MiB/s wr, 203 op/s
Jan 20 14:26:31 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:26:31 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1116669953' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:26:31 compute-1 nova_compute[225855]: 2026-01-20 14:26:31.041 225859 DEBUG oslo_concurrency.processutils [None req-1f59619e-257a-4cab-b645-d76ff4f626f0 c85759c031f744d2b9774757c7eb3cc2 95f7d246c566473eb07dba860a310578 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.417s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:26:31 compute-1 nova_compute[225855]: 2026-01-20 14:26:31.046 225859 DEBUG nova.compute.provider_tree [None req-1f59619e-257a-4cab-b645-d76ff4f626f0 c85759c031f744d2b9774757c7eb3cc2 95f7d246c566473eb07dba860a310578 - - default default] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:26:31 compute-1 nova_compute[225855]: 2026-01-20 14:26:31.060 225859 DEBUG nova.scheduler.client.report [None req-1f59619e-257a-4cab-b645-d76ff4f626f0 c85759c031f744d2b9774757c7eb3cc2 95f7d246c566473eb07dba860a310578 - - default default] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:26:31 compute-1 nova_compute[225855]: 2026-01-20 14:26:31.079 225859 DEBUG oslo_concurrency.lockutils [None req-1f59619e-257a-4cab-b645-d76ff4f626f0 c85759c031f744d2b9774757c7eb3cc2 95f7d246c566473eb07dba860a310578 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.729s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:26:31 compute-1 sudo[233334]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:26:31 compute-1 rsyslogd[1002]: imjournal: 4943 messages lost due to rate-limiting (20000 allowed within 600 seconds)
Jan 20 14:26:31 compute-1 sudo[233334]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:26:31 compute-1 sudo[233334]: pam_unix(sudo:session): session closed for user root
Jan 20 14:26:31 compute-1 nova_compute[225855]: 2026-01-20 14:26:31.429 225859 DEBUG oslo_concurrency.lockutils [None req-1f59619e-257a-4cab-b645-d76ff4f626f0 c85759c031f744d2b9774757c7eb3cc2 95f7d246c566473eb07dba860a310578 - - default default] Acquiring lock "refresh_cache-6091ab6e-2530-4b48-b482-00867d3c66c5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:26:31 compute-1 nova_compute[225855]: 2026-01-20 14:26:31.429 225859 DEBUG oslo_concurrency.lockutils [None req-1f59619e-257a-4cab-b645-d76ff4f626f0 c85759c031f744d2b9774757c7eb3cc2 95f7d246c566473eb07dba860a310578 - - default default] Acquired lock "refresh_cache-6091ab6e-2530-4b48-b482-00867d3c66c5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:26:31 compute-1 nova_compute[225855]: 2026-01-20 14:26:31.429 225859 DEBUG nova.network.neutron [None req-1f59619e-257a-4cab-b645-d76ff4f626f0 c85759c031f744d2b9774757c7eb3cc2 95f7d246c566473eb07dba860a310578 - - default default] [instance: 6091ab6e-2530-4b48-b482-00867d3c66c5] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 14:26:31 compute-1 sudo[233359]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:26:31 compute-1 sudo[233359]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:26:31 compute-1 sudo[233359]: pam_unix(sudo:session): session closed for user root
Jan 20 14:26:31 compute-1 sudo[233384]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:26:31 compute-1 sudo[233384]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:26:31 compute-1 sudo[233384]: pam_unix(sudo:session): session closed for user root
Jan 20 14:26:31 compute-1 sudo[233410]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 20 14:26:31 compute-1 sudo[233410]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:26:31 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/1116669953' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:26:32 compute-1 nova_compute[225855]: 2026-01-20 14:26:32.071 225859 DEBUG nova.network.neutron [None req-1f59619e-257a-4cab-b645-d76ff4f626f0 c85759c031f744d2b9774757c7eb3cc2 95f7d246c566473eb07dba860a310578 - - default default] [instance: 6091ab6e-2530-4b48-b482-00867d3c66c5] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 20 14:26:32 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:26:32 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:26:32 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:26:32.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:26:32 compute-1 sudo[233410]: pam_unix(sudo:session): session closed for user root
Jan 20 14:26:32 compute-1 sudo[233466]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:26:32 compute-1 sudo[233466]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:26:32 compute-1 sudo[233466]: pam_unix(sudo:session): session closed for user root
Jan 20 14:26:32 compute-1 sudo[233491]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:26:32 compute-1 sudo[233491]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:26:32 compute-1 sudo[233491]: pam_unix(sudo:session): session closed for user root
Jan 20 14:26:32 compute-1 sudo[233516]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:26:32 compute-1 sudo[233516]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:26:32 compute-1 sudo[233516]: pam_unix(sudo:session): session closed for user root
Jan 20 14:26:32 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:26:32 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:26:32 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:26:32.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:26:32 compute-1 sudo[233541]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 list-networks
Jan 20 14:26:32 compute-1 nova_compute[225855]: 2026-01-20 14:26:32.448 225859 DEBUG nova.network.neutron [None req-1f59619e-257a-4cab-b645-d76ff4f626f0 c85759c031f744d2b9774757c7eb3cc2 95f7d246c566473eb07dba860a310578 - - default default] [instance: 6091ab6e-2530-4b48-b482-00867d3c66c5] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:26:32 compute-1 sudo[233541]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:26:32 compute-1 nova_compute[225855]: 2026-01-20 14:26:32.468 225859 DEBUG oslo_concurrency.lockutils [None req-1f59619e-257a-4cab-b645-d76ff4f626f0 c85759c031f744d2b9774757c7eb3cc2 95f7d246c566473eb07dba860a310578 - - default default] Releasing lock "refresh_cache-6091ab6e-2530-4b48-b482-00867d3c66c5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:26:32 compute-1 nova_compute[225855]: 2026-01-20 14:26:32.471 225859 DEBUG nova.virt.libvirt.driver [None req-1f59619e-257a-4cab-b645-d76ff4f626f0 c85759c031f744d2b9774757c7eb3cc2 95f7d246c566473eb07dba860a310578 - - default default] [instance: 6091ab6e-2530-4b48-b482-00867d3c66c5] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 20 14:26:32 compute-1 nova_compute[225855]: 2026-01-20 14:26:32.472 225859 INFO nova.virt.libvirt.driver [None req-1f59619e-257a-4cab-b645-d76ff4f626f0 c85759c031f744d2b9774757c7eb3cc2 95f7d246c566473eb07dba860a310578 - - default default] [instance: 6091ab6e-2530-4b48-b482-00867d3c66c5] Creating image(s)
Jan 20 14:26:32 compute-1 nova_compute[225855]: 2026-01-20 14:26:32.503 225859 DEBUG nova.storage.rbd_utils [None req-1f59619e-257a-4cab-b645-d76ff4f626f0 c85759c031f744d2b9774757c7eb3cc2 95f7d246c566473eb07dba860a310578 - - default default] rbd image 6091ab6e-2530-4b48-b482-00867d3c66c5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:26:32 compute-1 nova_compute[225855]: 2026-01-20 14:26:32.507 225859 DEBUG nova.objects.instance [None req-1f59619e-257a-4cab-b645-d76ff4f626f0 c85759c031f744d2b9774757c7eb3cc2 95f7d246c566473eb07dba860a310578 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 6091ab6e-2530-4b48-b482-00867d3c66c5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:26:32 compute-1 nova_compute[225855]: 2026-01-20 14:26:32.606 225859 DEBUG nova.storage.rbd_utils [None req-1f59619e-257a-4cab-b645-d76ff4f626f0 c85759c031f744d2b9774757c7eb3cc2 95f7d246c566473eb07dba860a310578 - - default default] rbd image 6091ab6e-2530-4b48-b482-00867d3c66c5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:26:32 compute-1 sudo[233541]: pam_unix(sudo:session): session closed for user root
Jan 20 14:26:32 compute-1 nova_compute[225855]: 2026-01-20 14:26:32.636 225859 DEBUG nova.storage.rbd_utils [None req-1f59619e-257a-4cab-b645-d76ff4f626f0 c85759c031f744d2b9774757c7eb3cc2 95f7d246c566473eb07dba860a310578 - - default default] rbd image 6091ab6e-2530-4b48-b482-00867d3c66c5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:26:32 compute-1 nova_compute[225855]: 2026-01-20 14:26:32.638 225859 DEBUG oslo_concurrency.lockutils [None req-1f59619e-257a-4cab-b645-d76ff4f626f0 c85759c031f744d2b9774757c7eb3cc2 95f7d246c566473eb07dba860a310578 - - default default] Acquiring lock "c76a5946aff378ee70c25c8996110c54c3c4f8a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:26:32 compute-1 nova_compute[225855]: 2026-01-20 14:26:32.639 225859 DEBUG oslo_concurrency.lockutils [None req-1f59619e-257a-4cab-b645-d76ff4f626f0 c85759c031f744d2b9774757c7eb3cc2 95f7d246c566473eb07dba860a310578 - - default default] Lock "c76a5946aff378ee70c25c8996110c54c3c4f8a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:26:32 compute-1 ceph-mon[81775]: pgmap v1062: 321 pgs: 321 active+clean; 204 MiB data, 402 MiB used, 21 GiB / 21 GiB avail; 2.6 MiB/s rd, 5.7 MiB/s wr, 187 op/s
Jan 20 14:26:33 compute-1 nova_compute[225855]: 2026-01-20 14:26:33.016 225859 DEBUG nova.virt.libvirt.imagebackend [None req-1f59619e-257a-4cab-b645-d76ff4f626f0 c85759c031f744d2b9774757c7eb3cc2 95f7d246c566473eb07dba860a310578 - - default default] Image locations are: [{'url': 'rbd://e399cf45-e6b6-5393-99f1-75c601d3f188/images/0db5d54c-c1b5-4100-80fe-c616a5483520/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://e399cf45-e6b6-5393-99f1-75c601d3f188/images/0db5d54c-c1b5-4100-80fe-c616a5483520/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Jan 20 14:26:33 compute-1 nova_compute[225855]: 2026-01-20 14:26:33.079 225859 DEBUG nova.virt.libvirt.imagebackend [None req-1f59619e-257a-4cab-b645-d76ff4f626f0 c85759c031f744d2b9774757c7eb3cc2 95f7d246c566473eb07dba860a310578 - - default default] Selected location: {'url': 'rbd://e399cf45-e6b6-5393-99f1-75c601d3f188/images/0db5d54c-c1b5-4100-80fe-c616a5483520/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094
Jan 20 14:26:33 compute-1 nova_compute[225855]: 2026-01-20 14:26:33.080 225859 DEBUG nova.storage.rbd_utils [None req-1f59619e-257a-4cab-b645-d76ff4f626f0 c85759c031f744d2b9774757c7eb3cc2 95f7d246c566473eb07dba860a310578 - - default default] cloning images/0db5d54c-c1b5-4100-80fe-c616a5483520@snap to None/6091ab6e-2530-4b48-b482-00867d3c66c5_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Jan 20 14:26:33 compute-1 nova_compute[225855]: 2026-01-20 14:26:33.213 225859 DEBUG oslo_concurrency.lockutils [None req-1f59619e-257a-4cab-b645-d76ff4f626f0 c85759c031f744d2b9774757c7eb3cc2 95f7d246c566473eb07dba860a310578 - - default default] Lock "c76a5946aff378ee70c25c8996110c54c3c4f8a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.574s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:26:33 compute-1 nova_compute[225855]: 2026-01-20 14:26:33.259 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:26:33 compute-1 nova_compute[225855]: 2026-01-20 14:26:33.375 225859 DEBUG nova.objects.instance [None req-1f59619e-257a-4cab-b645-d76ff4f626f0 c85759c031f744d2b9774757c7eb3cc2 95f7d246c566473eb07dba860a310578 - - default default] Lazy-loading 'migration_context' on Instance uuid 6091ab6e-2530-4b48-b482-00867d3c66c5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:26:33 compute-1 nova_compute[225855]: 2026-01-20 14:26:33.453 225859 DEBUG nova.storage.rbd_utils [None req-1f59619e-257a-4cab-b645-d76ff4f626f0 c85759c031f744d2b9774757c7eb3cc2 95f7d246c566473eb07dba860a310578 - - default default] flattening vms/6091ab6e-2530-4b48-b482-00867d3c66c5_disk flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Jan 20 14:26:33 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:26:33 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:26:33 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:26:33 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:26:33 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:26:33 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 14:26:33 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:26:33 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 14:26:33 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 14:26:33 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:26:34 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:26:34 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:26:34 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:26:34.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:26:34 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:26:34 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:26:34 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:26:34.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:26:34 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:26:35 compute-1 ceph-mon[81775]: pgmap v1063: 321 pgs: 321 active+clean; 204 MiB data, 396 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 4.5 MiB/s wr, 179 op/s
Jan 20 14:26:35 compute-1 nova_compute[225855]: 2026-01-20 14:26:35.092 225859 DEBUG nova.virt.libvirt.driver [None req-1f59619e-257a-4cab-b645-d76ff4f626f0 c85759c031f744d2b9774757c7eb3cc2 95f7d246c566473eb07dba860a310578 - - default default] [instance: 6091ab6e-2530-4b48-b482-00867d3c66c5] Image rbd:vms/6091ab6e-2530-4b48-b482-00867d3c66c5_disk:id=openstack:conf=/etc/ceph/ceph.conf flattened successfully while unshelving instance. _try_fetch_image_cache /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11007
Jan 20 14:26:35 compute-1 nova_compute[225855]: 2026-01-20 14:26:35.093 225859 DEBUG nova.virt.libvirt.driver [None req-1f59619e-257a-4cab-b645-d76ff4f626f0 c85759c031f744d2b9774757c7eb3cc2 95f7d246c566473eb07dba860a310578 - - default default] [instance: 6091ab6e-2530-4b48-b482-00867d3c66c5] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 20 14:26:35 compute-1 nova_compute[225855]: 2026-01-20 14:26:35.093 225859 DEBUG nova.virt.libvirt.driver [None req-1f59619e-257a-4cab-b645-d76ff4f626f0 c85759c031f744d2b9774757c7eb3cc2 95f7d246c566473eb07dba860a310578 - - default default] [instance: 6091ab6e-2530-4b48-b482-00867d3c66c5] Ensure instance console log exists: /var/lib/nova/instances/6091ab6e-2530-4b48-b482-00867d3c66c5/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 20 14:26:35 compute-1 nova_compute[225855]: 2026-01-20 14:26:35.093 225859 DEBUG oslo_concurrency.lockutils [None req-1f59619e-257a-4cab-b645-d76ff4f626f0 c85759c031f744d2b9774757c7eb3cc2 95f7d246c566473eb07dba860a310578 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:26:35 compute-1 nova_compute[225855]: 2026-01-20 14:26:35.093 225859 DEBUG oslo_concurrency.lockutils [None req-1f59619e-257a-4cab-b645-d76ff4f626f0 c85759c031f744d2b9774757c7eb3cc2 95f7d246c566473eb07dba860a310578 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:26:35 compute-1 nova_compute[225855]: 2026-01-20 14:26:35.094 225859 DEBUG oslo_concurrency.lockutils [None req-1f59619e-257a-4cab-b645-d76ff4f626f0 c85759c031f744d2b9774757c7eb3cc2 95f7d246c566473eb07dba860a310578 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:26:35 compute-1 nova_compute[225855]: 2026-01-20 14:26:35.095 225859 DEBUG nova.virt.libvirt.driver [None req-1f59619e-257a-4cab-b645-d76ff4f626f0 c85759c031f744d2b9774757c7eb3cc2 95f7d246c566473eb07dba860a310578 - - default default] [instance: 6091ab6e-2530-4b48-b482-00867d3c66c5] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='',container_format='bare',created_at=2026-01-20T14:26:03Z,direct_url=<?>,disk_format='raw',id=0db5d54c-c1b5-4100-80fe-c616a5483520,min_disk=1,min_ram=0,name='tempest-UnshelveToHostMultiNodesTest-server-767584007-shelved',owner='14ebcff06a484899a9725832f1eddfdf',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2026-01-20T14:26:23Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'device_type': 'disk', 'encryption_options': None, 'size': 0, 'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'guest_format': None, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'image_id': 'a32b3e07-16d8-46fd-9a7b-c242c432fcf9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 20 14:26:35 compute-1 nova_compute[225855]: 2026-01-20 14:26:35.099 225859 WARNING nova.virt.libvirt.driver [None req-1f59619e-257a-4cab-b645-d76ff4f626f0 c85759c031f744d2b9774757c7eb3cc2 95f7d246c566473eb07dba860a310578 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 14:26:35 compute-1 nova_compute[225855]: 2026-01-20 14:26:35.102 225859 DEBUG nova.virt.libvirt.host [None req-1f59619e-257a-4cab-b645-d76ff4f626f0 c85759c031f744d2b9774757c7eb3cc2 95f7d246c566473eb07dba860a310578 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 20 14:26:35 compute-1 nova_compute[225855]: 2026-01-20 14:26:35.103 225859 DEBUG nova.virt.libvirt.host [None req-1f59619e-257a-4cab-b645-d76ff4f626f0 c85759c031f744d2b9774757c7eb3cc2 95f7d246c566473eb07dba860a310578 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 20 14:26:35 compute-1 nova_compute[225855]: 2026-01-20 14:26:35.105 225859 DEBUG nova.virt.libvirt.host [None req-1f59619e-257a-4cab-b645-d76ff4f626f0 c85759c031f744d2b9774757c7eb3cc2 95f7d246c566473eb07dba860a310578 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 20 14:26:35 compute-1 nova_compute[225855]: 2026-01-20 14:26:35.105 225859 DEBUG nova.virt.libvirt.host [None req-1f59619e-257a-4cab-b645-d76ff4f626f0 c85759c031f744d2b9774757c7eb3cc2 95f7d246c566473eb07dba860a310578 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 20 14:26:35 compute-1 nova_compute[225855]: 2026-01-20 14:26:35.106 225859 DEBUG nova.virt.libvirt.driver [None req-1f59619e-257a-4cab-b645-d76ff4f626f0 c85759c031f744d2b9774757c7eb3cc2 95f7d246c566473eb07dba860a310578 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 20 14:26:35 compute-1 nova_compute[225855]: 2026-01-20 14:26:35.106 225859 DEBUG nova.virt.hardware [None req-1f59619e-257a-4cab-b645-d76ff4f626f0 c85759c031f744d2b9774757c7eb3cc2 95f7d246c566473eb07dba860a310578 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-20T14:21:55Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='522deaab-a741-4dbb-932d-d8b13a211c33',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='',container_format='bare',created_at=2026-01-20T14:26:03Z,direct_url=<?>,disk_format='raw',id=0db5d54c-c1b5-4100-80fe-c616a5483520,min_disk=1,min_ram=0,name='tempest-UnshelveToHostMultiNodesTest-server-767584007-shelved',owner='14ebcff06a484899a9725832f1eddfdf',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2026-01-20T14:26:23Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 20 14:26:35 compute-1 nova_compute[225855]: 2026-01-20 14:26:35.106 225859 DEBUG nova.virt.hardware [None req-1f59619e-257a-4cab-b645-d76ff4f626f0 c85759c031f744d2b9774757c7eb3cc2 95f7d246c566473eb07dba860a310578 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 20 14:26:35 compute-1 nova_compute[225855]: 2026-01-20 14:26:35.107 225859 DEBUG nova.virt.hardware [None req-1f59619e-257a-4cab-b645-d76ff4f626f0 c85759c031f744d2b9774757c7eb3cc2 95f7d246c566473eb07dba860a310578 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 20 14:26:35 compute-1 nova_compute[225855]: 2026-01-20 14:26:35.107 225859 DEBUG nova.virt.hardware [None req-1f59619e-257a-4cab-b645-d76ff4f626f0 c85759c031f744d2b9774757c7eb3cc2 95f7d246c566473eb07dba860a310578 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 20 14:26:35 compute-1 nova_compute[225855]: 2026-01-20 14:26:35.107 225859 DEBUG nova.virt.hardware [None req-1f59619e-257a-4cab-b645-d76ff4f626f0 c85759c031f744d2b9774757c7eb3cc2 95f7d246c566473eb07dba860a310578 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 20 14:26:35 compute-1 nova_compute[225855]: 2026-01-20 14:26:35.107 225859 DEBUG nova.virt.hardware [None req-1f59619e-257a-4cab-b645-d76ff4f626f0 c85759c031f744d2b9774757c7eb3cc2 95f7d246c566473eb07dba860a310578 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 20 14:26:35 compute-1 nova_compute[225855]: 2026-01-20 14:26:35.107 225859 DEBUG nova.virt.hardware [None req-1f59619e-257a-4cab-b645-d76ff4f626f0 c85759c031f744d2b9774757c7eb3cc2 95f7d246c566473eb07dba860a310578 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 20 14:26:35 compute-1 nova_compute[225855]: 2026-01-20 14:26:35.107 225859 DEBUG nova.virt.hardware [None req-1f59619e-257a-4cab-b645-d76ff4f626f0 c85759c031f744d2b9774757c7eb3cc2 95f7d246c566473eb07dba860a310578 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 20 14:26:35 compute-1 nova_compute[225855]: 2026-01-20 14:26:35.108 225859 DEBUG nova.virt.hardware [None req-1f59619e-257a-4cab-b645-d76ff4f626f0 c85759c031f744d2b9774757c7eb3cc2 95f7d246c566473eb07dba860a310578 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 20 14:26:35 compute-1 nova_compute[225855]: 2026-01-20 14:26:35.108 225859 DEBUG nova.virt.hardware [None req-1f59619e-257a-4cab-b645-d76ff4f626f0 c85759c031f744d2b9774757c7eb3cc2 95f7d246c566473eb07dba860a310578 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 20 14:26:35 compute-1 nova_compute[225855]: 2026-01-20 14:26:35.108 225859 DEBUG nova.virt.hardware [None req-1f59619e-257a-4cab-b645-d76ff4f626f0 c85759c031f744d2b9774757c7eb3cc2 95f7d246c566473eb07dba860a310578 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 20 14:26:35 compute-1 nova_compute[225855]: 2026-01-20 14:26:35.108 225859 DEBUG nova.objects.instance [None req-1f59619e-257a-4cab-b645-d76ff4f626f0 c85759c031f744d2b9774757c7eb3cc2 95f7d246c566473eb07dba860a310578 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 6091ab6e-2530-4b48-b482-00867d3c66c5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:26:35 compute-1 nova_compute[225855]: 2026-01-20 14:26:35.334 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:26:35 compute-1 nova_compute[225855]: 2026-01-20 14:26:35.390 225859 DEBUG oslo_concurrency.processutils [None req-1f59619e-257a-4cab-b645-d76ff4f626f0 c85759c031f744d2b9774757c7eb3cc2 95f7d246c566473eb07dba860a310578 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:26:35 compute-1 nova_compute[225855]: 2026-01-20 14:26:35.725 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:26:35 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:26:35.726 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=7, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '12:bb:42', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '06:92:24:f7:15:56'}, ipsec=False) old=SB_Global(nb_cfg=6) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:26:35 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:26:35.727 140354 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 20 14:26:35 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:26:35 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3726797702' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:26:35 compute-1 nova_compute[225855]: 2026-01-20 14:26:35.854 225859 DEBUG oslo_concurrency.processutils [None req-1f59619e-257a-4cab-b645-d76ff4f626f0 c85759c031f744d2b9774757c7eb3cc2 95f7d246c566473eb07dba860a310578 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:26:35 compute-1 nova_compute[225855]: 2026-01-20 14:26:35.890 225859 DEBUG nova.storage.rbd_utils [None req-1f59619e-257a-4cab-b645-d76ff4f626f0 c85759c031f744d2b9774757c7eb3cc2 95f7d246c566473eb07dba860a310578 - - default default] rbd image 6091ab6e-2530-4b48-b482-00867d3c66c5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:26:35 compute-1 nova_compute[225855]: 2026-01-20 14:26:35.896 225859 DEBUG oslo_concurrency.processutils [None req-1f59619e-257a-4cab-b645-d76ff4f626f0 c85759c031f744d2b9774757c7eb3cc2 95f7d246c566473eb07dba860a310578 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:26:36 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:26:36 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:26:36 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:26:36.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:26:36 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:26:36 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:26:36 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:26:36.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:26:36 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:26:36 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2320921482' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:26:36 compute-1 ceph-mon[81775]: pgmap v1064: 321 pgs: 321 active+clean; 204 MiB data, 396 MiB used, 21 GiB / 21 GiB avail; 1.3 MiB/s rd, 2.1 MiB/s wr, 131 op/s
Jan 20 14:26:36 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/3726797702' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:26:36 compute-1 nova_compute[225855]: 2026-01-20 14:26:36.905 225859 DEBUG oslo_concurrency.processutils [None req-1f59619e-257a-4cab-b645-d76ff4f626f0 c85759c031f744d2b9774757c7eb3cc2 95f7d246c566473eb07dba860a310578 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:26:36 compute-1 nova_compute[225855]: 2026-01-20 14:26:36.907 225859 DEBUG nova.objects.instance [None req-1f59619e-257a-4cab-b645-d76ff4f626f0 c85759c031f744d2b9774757c7eb3cc2 95f7d246c566473eb07dba860a310578 - - default default] Lazy-loading 'pci_devices' on Instance uuid 6091ab6e-2530-4b48-b482-00867d3c66c5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:26:36 compute-1 nova_compute[225855]: 2026-01-20 14:26:36.950 225859 DEBUG nova.virt.libvirt.driver [None req-1f59619e-257a-4cab-b645-d76ff4f626f0 c85759c031f744d2b9774757c7eb3cc2 95f7d246c566473eb07dba860a310578 - - default default] [instance: 6091ab6e-2530-4b48-b482-00867d3c66c5] End _get_guest_xml xml=<domain type="kvm">
Jan 20 14:26:36 compute-1 nova_compute[225855]:   <uuid>6091ab6e-2530-4b48-b482-00867d3c66c5</uuid>
Jan 20 14:26:36 compute-1 nova_compute[225855]:   <name>instance-0000000e</name>
Jan 20 14:26:36 compute-1 nova_compute[225855]:   <memory>131072</memory>
Jan 20 14:26:36 compute-1 nova_compute[225855]:   <vcpu>1</vcpu>
Jan 20 14:26:36 compute-1 nova_compute[225855]:   <metadata>
Jan 20 14:26:36 compute-1 nova_compute[225855]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 14:26:36 compute-1 nova_compute[225855]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 14:26:36 compute-1 nova_compute[225855]:       <nova:name>tempest-UnshelveToHostMultiNodesTest-server-767584007</nova:name>
Jan 20 14:26:36 compute-1 nova_compute[225855]:       <nova:creationTime>2026-01-20 14:26:35</nova:creationTime>
Jan 20 14:26:36 compute-1 nova_compute[225855]:       <nova:flavor name="m1.nano">
Jan 20 14:26:36 compute-1 nova_compute[225855]:         <nova:memory>128</nova:memory>
Jan 20 14:26:36 compute-1 nova_compute[225855]:         <nova:disk>1</nova:disk>
Jan 20 14:26:36 compute-1 nova_compute[225855]:         <nova:swap>0</nova:swap>
Jan 20 14:26:36 compute-1 nova_compute[225855]:         <nova:ephemeral>0</nova:ephemeral>
Jan 20 14:26:36 compute-1 nova_compute[225855]:         <nova:vcpus>1</nova:vcpus>
Jan 20 14:26:36 compute-1 nova_compute[225855]:       </nova:flavor>
Jan 20 14:26:36 compute-1 nova_compute[225855]:       <nova:owner>
Jan 20 14:26:36 compute-1 nova_compute[225855]:         <nova:user uuid="8ea9f3cd2cbb462a8ecbb488e6a1a25d">tempest-UnshelveToHostMultiNodesTest-997401309-project-member</nova:user>
Jan 20 14:26:36 compute-1 nova_compute[225855]:         <nova:project uuid="14ebcff06a484899a9725832f1eddfdf">tempest-UnshelveToHostMultiNodesTest-997401309</nova:project>
Jan 20 14:26:36 compute-1 nova_compute[225855]:       </nova:owner>
Jan 20 14:26:36 compute-1 nova_compute[225855]:       <nova:root type="image" uuid="0db5d54c-c1b5-4100-80fe-c616a5483520"/>
Jan 20 14:26:36 compute-1 nova_compute[225855]:       <nova:ports/>
Jan 20 14:26:36 compute-1 nova_compute[225855]:     </nova:instance>
Jan 20 14:26:36 compute-1 nova_compute[225855]:   </metadata>
Jan 20 14:26:36 compute-1 nova_compute[225855]:   <sysinfo type="smbios">
Jan 20 14:26:36 compute-1 nova_compute[225855]:     <system>
Jan 20 14:26:36 compute-1 nova_compute[225855]:       <entry name="manufacturer">RDO</entry>
Jan 20 14:26:36 compute-1 nova_compute[225855]:       <entry name="product">OpenStack Compute</entry>
Jan 20 14:26:36 compute-1 nova_compute[225855]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 14:26:36 compute-1 nova_compute[225855]:       <entry name="serial">6091ab6e-2530-4b48-b482-00867d3c66c5</entry>
Jan 20 14:26:36 compute-1 nova_compute[225855]:       <entry name="uuid">6091ab6e-2530-4b48-b482-00867d3c66c5</entry>
Jan 20 14:26:36 compute-1 nova_compute[225855]:       <entry name="family">Virtual Machine</entry>
Jan 20 14:26:36 compute-1 nova_compute[225855]:     </system>
Jan 20 14:26:36 compute-1 nova_compute[225855]:   </sysinfo>
Jan 20 14:26:36 compute-1 nova_compute[225855]:   <os>
Jan 20 14:26:36 compute-1 nova_compute[225855]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 20 14:26:36 compute-1 nova_compute[225855]:     <boot dev="hd"/>
Jan 20 14:26:36 compute-1 nova_compute[225855]:     <smbios mode="sysinfo"/>
Jan 20 14:26:36 compute-1 nova_compute[225855]:   </os>
Jan 20 14:26:36 compute-1 nova_compute[225855]:   <features>
Jan 20 14:26:36 compute-1 nova_compute[225855]:     <acpi/>
Jan 20 14:26:36 compute-1 nova_compute[225855]:     <apic/>
Jan 20 14:26:36 compute-1 nova_compute[225855]:     <vmcoreinfo/>
Jan 20 14:26:36 compute-1 nova_compute[225855]:   </features>
Jan 20 14:26:36 compute-1 nova_compute[225855]:   <clock offset="utc">
Jan 20 14:26:36 compute-1 nova_compute[225855]:     <timer name="pit" tickpolicy="delay"/>
Jan 20 14:26:36 compute-1 nova_compute[225855]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 20 14:26:36 compute-1 nova_compute[225855]:     <timer name="hpet" present="no"/>
Jan 20 14:26:36 compute-1 nova_compute[225855]:   </clock>
Jan 20 14:26:36 compute-1 nova_compute[225855]:   <cpu mode="custom" match="exact">
Jan 20 14:26:36 compute-1 nova_compute[225855]:     <model>Nehalem</model>
Jan 20 14:26:36 compute-1 nova_compute[225855]:     <topology sockets="1" cores="1" threads="1"/>
Jan 20 14:26:36 compute-1 nova_compute[225855]:   </cpu>
Jan 20 14:26:36 compute-1 nova_compute[225855]:   <devices>
Jan 20 14:26:36 compute-1 nova_compute[225855]:     <disk type="network" device="disk">
Jan 20 14:26:36 compute-1 nova_compute[225855]:       <driver type="raw" cache="none"/>
Jan 20 14:26:36 compute-1 nova_compute[225855]:       <source protocol="rbd" name="vms/6091ab6e-2530-4b48-b482-00867d3c66c5_disk">
Jan 20 14:26:36 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:26:36 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:26:36 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:26:36 compute-1 nova_compute[225855]:       </source>
Jan 20 14:26:36 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 14:26:36 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:26:36 compute-1 nova_compute[225855]:       </auth>
Jan 20 14:26:36 compute-1 nova_compute[225855]:       <target dev="vda" bus="virtio"/>
Jan 20 14:26:36 compute-1 nova_compute[225855]:     </disk>
Jan 20 14:26:36 compute-1 nova_compute[225855]:     <disk type="network" device="cdrom">
Jan 20 14:26:36 compute-1 nova_compute[225855]:       <driver type="raw" cache="none"/>
Jan 20 14:26:36 compute-1 nova_compute[225855]:       <source protocol="rbd" name="vms/6091ab6e-2530-4b48-b482-00867d3c66c5_disk.config">
Jan 20 14:26:36 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:26:36 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:26:36 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:26:36 compute-1 nova_compute[225855]:       </source>
Jan 20 14:26:36 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 14:26:36 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:26:36 compute-1 nova_compute[225855]:       </auth>
Jan 20 14:26:36 compute-1 nova_compute[225855]:       <target dev="sda" bus="sata"/>
Jan 20 14:26:36 compute-1 nova_compute[225855]:     </disk>
Jan 20 14:26:36 compute-1 nova_compute[225855]:     <serial type="pty">
Jan 20 14:26:36 compute-1 nova_compute[225855]:       <log file="/var/lib/nova/instances/6091ab6e-2530-4b48-b482-00867d3c66c5/console.log" append="off"/>
Jan 20 14:26:36 compute-1 nova_compute[225855]:     </serial>
Jan 20 14:26:36 compute-1 nova_compute[225855]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 20 14:26:36 compute-1 nova_compute[225855]:     <video>
Jan 20 14:26:36 compute-1 nova_compute[225855]:       <model type="virtio"/>
Jan 20 14:26:36 compute-1 nova_compute[225855]:     </video>
Jan 20 14:26:36 compute-1 nova_compute[225855]:     <input type="tablet" bus="usb"/>
Jan 20 14:26:36 compute-1 nova_compute[225855]:     <input type="keyboard" bus="usb"/>
Jan 20 14:26:36 compute-1 nova_compute[225855]:     <rng model="virtio">
Jan 20 14:26:36 compute-1 nova_compute[225855]:       <backend model="random">/dev/urandom</backend>
Jan 20 14:26:36 compute-1 nova_compute[225855]:     </rng>
Jan 20 14:26:36 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root"/>
Jan 20 14:26:36 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:26:36 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:26:36 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:26:36 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:26:36 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:26:36 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:26:36 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:26:36 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:26:36 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:26:36 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:26:36 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:26:36 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:26:36 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:26:36 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:26:36 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:26:36 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:26:36 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:26:36 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:26:36 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:26:36 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:26:36 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:26:36 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:26:36 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:26:36 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:26:36 compute-1 nova_compute[225855]:     <controller type="usb" index="0"/>
Jan 20 14:26:36 compute-1 nova_compute[225855]:     <memballoon model="virtio">
Jan 20 14:26:36 compute-1 nova_compute[225855]:       <stats period="10"/>
Jan 20 14:26:36 compute-1 nova_compute[225855]:     </memballoon>
Jan 20 14:26:36 compute-1 nova_compute[225855]:   </devices>
Jan 20 14:26:36 compute-1 nova_compute[225855]: </domain>
Jan 20 14:26:36 compute-1 nova_compute[225855]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 20 14:26:37 compute-1 nova_compute[225855]: 2026-01-20 14:26:37.121 225859 DEBUG nova.virt.libvirt.driver [None req-1f59619e-257a-4cab-b645-d76ff4f626f0 c85759c031f744d2b9774757c7eb3cc2 95f7d246c566473eb07dba860a310578 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:26:37 compute-1 nova_compute[225855]: 2026-01-20 14:26:37.122 225859 DEBUG nova.virt.libvirt.driver [None req-1f59619e-257a-4cab-b645-d76ff4f626f0 c85759c031f744d2b9774757c7eb3cc2 95f7d246c566473eb07dba860a310578 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:26:37 compute-1 nova_compute[225855]: 2026-01-20 14:26:37.123 225859 INFO nova.virt.libvirt.driver [None req-1f59619e-257a-4cab-b645-d76ff4f626f0 c85759c031f744d2b9774757c7eb3cc2 95f7d246c566473eb07dba860a310578 - - default default] [instance: 6091ab6e-2530-4b48-b482-00867d3c66c5] Using config drive
Jan 20 14:26:37 compute-1 nova_compute[225855]: 2026-01-20 14:26:37.152 225859 DEBUG nova.storage.rbd_utils [None req-1f59619e-257a-4cab-b645-d76ff4f626f0 c85759c031f744d2b9774757c7eb3cc2 95f7d246c566473eb07dba860a310578 - - default default] rbd image 6091ab6e-2530-4b48-b482-00867d3c66c5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:26:37 compute-1 nova_compute[225855]: 2026-01-20 14:26:37.175 225859 DEBUG nova.objects.instance [None req-1f59619e-257a-4cab-b645-d76ff4f626f0 c85759c031f744d2b9774757c7eb3cc2 95f7d246c566473eb07dba860a310578 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 6091ab6e-2530-4b48-b482-00867d3c66c5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:26:37 compute-1 nova_compute[225855]: 2026-01-20 14:26:37.252 225859 DEBUG nova.objects.instance [None req-1f59619e-257a-4cab-b645-d76ff4f626f0 c85759c031f744d2b9774757c7eb3cc2 95f7d246c566473eb07dba860a310578 - - default default] Lazy-loading 'keypairs' on Instance uuid 6091ab6e-2530-4b48-b482-00867d3c66c5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:26:37 compute-1 nova_compute[225855]: 2026-01-20 14:26:37.687 225859 INFO nova.virt.libvirt.driver [None req-1f59619e-257a-4cab-b645-d76ff4f626f0 c85759c031f744d2b9774757c7eb3cc2 95f7d246c566473eb07dba860a310578 - - default default] [instance: 6091ab6e-2530-4b48-b482-00867d3c66c5] Creating config drive at /var/lib/nova/instances/6091ab6e-2530-4b48-b482-00867d3c66c5/disk.config
Jan 20 14:26:37 compute-1 nova_compute[225855]: 2026-01-20 14:26:37.691 225859 DEBUG oslo_concurrency.processutils [None req-1f59619e-257a-4cab-b645-d76ff4f626f0 c85759c031f744d2b9774757c7eb3cc2 95f7d246c566473eb07dba860a310578 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/6091ab6e-2530-4b48-b482-00867d3c66c5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp68y71qz1 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:26:37 compute-1 nova_compute[225855]: 2026-01-20 14:26:37.819 225859 DEBUG oslo_concurrency.processutils [None req-1f59619e-257a-4cab-b645-d76ff4f626f0 c85759c031f744d2b9774757c7eb3cc2 95f7d246c566473eb07dba860a310578 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/6091ab6e-2530-4b48-b482-00867d3c66c5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp68y71qz1" returned: 0 in 0.128s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:26:37 compute-1 sudo[233883]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:26:37 compute-1 sudo[233883]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:26:37 compute-1 sudo[233883]: pam_unix(sudo:session): session closed for user root
Jan 20 14:26:37 compute-1 nova_compute[225855]: 2026-01-20 14:26:37.853 225859 DEBUG nova.storage.rbd_utils [None req-1f59619e-257a-4cab-b645-d76ff4f626f0 c85759c031f744d2b9774757c7eb3cc2 95f7d246c566473eb07dba860a310578 - - default default] rbd image 6091ab6e-2530-4b48-b482-00867d3c66c5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:26:37 compute-1 nova_compute[225855]: 2026-01-20 14:26:37.858 225859 DEBUG oslo_concurrency.processutils [None req-1f59619e-257a-4cab-b645-d76ff4f626f0 c85759c031f744d2b9774757c7eb3cc2 95f7d246c566473eb07dba860a310578 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/6091ab6e-2530-4b48-b482-00867d3c66c5/disk.config 6091ab6e-2530-4b48-b482-00867d3c66c5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:26:37 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/2320921482' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:26:37 compute-1 ceph-mon[81775]: pgmap v1065: 321 pgs: 321 active+clean; 264 MiB data, 428 MiB used, 21 GiB / 21 GiB avail; 3.5 MiB/s rd, 4.6 MiB/s wr, 145 op/s
Jan 20 14:26:37 compute-1 sudo[233916]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:26:37 compute-1 sudo[233916]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:26:37 compute-1 sudo[233916]: pam_unix(sudo:session): session closed for user root
Jan 20 14:26:38 compute-1 nova_compute[225855]: 2026-01-20 14:26:38.047 225859 DEBUG oslo_concurrency.processutils [None req-1f59619e-257a-4cab-b645-d76ff4f626f0 c85759c031f744d2b9774757c7eb3cc2 95f7d246c566473eb07dba860a310578 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/6091ab6e-2530-4b48-b482-00867d3c66c5/disk.config 6091ab6e-2530-4b48-b482-00867d3c66c5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.189s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:26:38 compute-1 nova_compute[225855]: 2026-01-20 14:26:38.049 225859 INFO nova.virt.libvirt.driver [None req-1f59619e-257a-4cab-b645-d76ff4f626f0 c85759c031f744d2b9774757c7eb3cc2 95f7d246c566473eb07dba860a310578 - - default default] [instance: 6091ab6e-2530-4b48-b482-00867d3c66c5] Deleting local config drive /var/lib/nova/instances/6091ab6e-2530-4b48-b482-00867d3c66c5/disk.config because it was imported into RBD.
Jan 20 14:26:38 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:26:38 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:26:38 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:26:38.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:26:38 compute-1 systemd-machined[194361]: New machine qemu-7-instance-0000000e.
Jan 20 14:26:38 compute-1 systemd[1]: Started Virtual Machine qemu-7-instance-0000000e.
Jan 20 14:26:38 compute-1 nova_compute[225855]: 2026-01-20 14:26:38.211 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:26:38 compute-1 podman[233976]: 2026-01-20 14:26:38.224310947 +0000 UTC m=+0.083540960 container health_status 533cd8e0c3b8b7f910f15cef49e7f879ab46c12ce646a35628c15f13725eb500 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:26:38 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:26:38 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:26:38 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:26:38.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:26:38 compute-1 nova_compute[225855]: 2026-01-20 14:26:38.877 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768919198.8766315, 6091ab6e-2530-4b48-b482-00867d3c66c5 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:26:38 compute-1 nova_compute[225855]: 2026-01-20 14:26:38.877 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 6091ab6e-2530-4b48-b482-00867d3c66c5] VM Resumed (Lifecycle Event)
Jan 20 14:26:38 compute-1 nova_compute[225855]: 2026-01-20 14:26:38.880 225859 DEBUG nova.compute.manager [None req-1f59619e-257a-4cab-b645-d76ff4f626f0 c85759c031f744d2b9774757c7eb3cc2 95f7d246c566473eb07dba860a310578 - - default default] [instance: 6091ab6e-2530-4b48-b482-00867d3c66c5] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 20 14:26:38 compute-1 nova_compute[225855]: 2026-01-20 14:26:38.881 225859 DEBUG nova.virt.libvirt.driver [None req-1f59619e-257a-4cab-b645-d76ff4f626f0 c85759c031f744d2b9774757c7eb3cc2 95f7d246c566473eb07dba860a310578 - - default default] [instance: 6091ab6e-2530-4b48-b482-00867d3c66c5] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 20 14:26:38 compute-1 nova_compute[225855]: 2026-01-20 14:26:38.885 225859 INFO nova.virt.libvirt.driver [-] [instance: 6091ab6e-2530-4b48-b482-00867d3c66c5] Instance spawned successfully.
Jan 20 14:26:38 compute-1 nova_compute[225855]: 2026-01-20 14:26:38.912 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 6091ab6e-2530-4b48-b482-00867d3c66c5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:26:38 compute-1 nova_compute[225855]: 2026-01-20 14:26:38.916 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 6091ab6e-2530-4b48-b482-00867d3c66c5] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:26:38 compute-1 nova_compute[225855]: 2026-01-20 14:26:38.950 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 6091ab6e-2530-4b48-b482-00867d3c66c5] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 14:26:38 compute-1 nova_compute[225855]: 2026-01-20 14:26:38.950 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768919198.8804593, 6091ab6e-2530-4b48-b482-00867d3c66c5 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:26:38 compute-1 nova_compute[225855]: 2026-01-20 14:26:38.950 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 6091ab6e-2530-4b48-b482-00867d3c66c5] VM Started (Lifecycle Event)
Jan 20 14:26:38 compute-1 nova_compute[225855]: 2026-01-20 14:26:38.973 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 6091ab6e-2530-4b48-b482-00867d3c66c5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:26:38 compute-1 nova_compute[225855]: 2026-01-20 14:26:38.978 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 6091ab6e-2530-4b48-b482-00867d3c66c5] Synchronizing instance power state after lifecycle event "Started"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:26:39 compute-1 nova_compute[225855]: 2026-01-20 14:26:39.004 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 6091ab6e-2530-4b48-b482-00867d3c66c5] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 14:26:39 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:26:40 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:26:40 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:26:40 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:26:40.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:26:40 compute-1 nova_compute[225855]: 2026-01-20 14:26:40.337 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:26:40 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:26:40 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:26:40 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:26:40.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:26:40 compute-1 ceph-mon[81775]: pgmap v1066: 321 pgs: 321 active+clean; 285 MiB data, 442 MiB used, 21 GiB / 21 GiB avail; 4.0 MiB/s rd, 3.9 MiB/s wr, 105 op/s
Jan 20 14:26:42 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:26:42 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:26:42 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:26:42.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:26:42 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:26:42 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:26:42 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:26:42.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:26:42 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:26:42.730 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5ffd4ac3-9266-4927-98ad-20a17782c725, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '7'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:26:42 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e150 e150: 3 total, 3 up, 3 in
Jan 20 14:26:43 compute-1 nova_compute[225855]: 2026-01-20 14:26:43.252 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:26:43 compute-1 ceph-mon[81775]: pgmap v1067: 321 pgs: 321 active+clean; 285 MiB data, 442 MiB used, 21 GiB / 21 GiB avail; 5.0 MiB/s rd, 3.9 MiB/s wr, 136 op/s
Jan 20 14:26:44 compute-1 sudo[234050]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:26:44 compute-1 sudo[234050]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:26:44 compute-1 sudo[234050]: pam_unix(sudo:session): session closed for user root
Jan 20 14:26:44 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:26:44 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:26:44 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:26:44.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:26:44 compute-1 sudo[234075]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 14:26:44 compute-1 sudo[234075]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:26:44 compute-1 sudo[234075]: pam_unix(sudo:session): session closed for user root
Jan 20 14:26:44 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:26:44 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:26:44 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:26:44.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:26:44 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:26:44 compute-1 ceph-mon[81775]: osdmap e150: 3 total, 3 up, 3 in
Jan 20 14:26:44 compute-1 ceph-mon[81775]: pgmap v1069: 321 pgs: 321 active+clean; 285 MiB data, 442 MiB used, 21 GiB / 21 GiB avail; 6.5 MiB/s rd, 4.7 MiB/s wr, 166 op/s
Jan 20 14:26:44 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:26:44 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:26:45 compute-1 nova_compute[225855]: 2026-01-20 14:26:45.341 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:26:46 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:26:46 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:26:46 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:26:46.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:26:46 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:26:46 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:26:46 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:26:46.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:26:47 compute-1 ceph-mon[81775]: pgmap v1070: 321 pgs: 321 active+clean; 285 MiB data, 442 MiB used, 21 GiB / 21 GiB avail; 6.1 MiB/s rd, 4.7 MiB/s wr, 164 op/s
Jan 20 14:26:47 compute-1 nova_compute[225855]: 2026-01-20 14:26:47.711 225859 DEBUG nova.compute.manager [None req-1f59619e-257a-4cab-b645-d76ff4f626f0 c85759c031f744d2b9774757c7eb3cc2 95f7d246c566473eb07dba860a310578 - - default default] [instance: 6091ab6e-2530-4b48-b482-00867d3c66c5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:26:47 compute-1 nova_compute[225855]: 2026-01-20 14:26:47.909 225859 DEBUG oslo_concurrency.lockutils [None req-1f59619e-257a-4cab-b645-d76ff4f626f0 c85759c031f744d2b9774757c7eb3cc2 95f7d246c566473eb07dba860a310578 - - default default] Lock "6091ab6e-2530-4b48-b482-00867d3c66c5" "released" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: held 17.682s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:26:48 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:26:48 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:26:48 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:26:48.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:26:48 compute-1 nova_compute[225855]: 2026-01-20 14:26:48.256 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:26:48 compute-1 ceph-mon[81775]: pgmap v1071: 321 pgs: 321 active+clean; 230 MiB data, 415 MiB used, 21 GiB / 21 GiB avail; 3.3 MiB/s rd, 1.4 MiB/s wr, 130 op/s
Jan 20 14:26:48 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:26:48 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:26:48 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:26:48.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:26:49 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:26:50 compute-1 nova_compute[225855]: 2026-01-20 14:26:50.000 225859 DEBUG oslo_concurrency.lockutils [None req-820fdf5b-ca9d-4022-90bb-c341c8e998c7 8ea9f3cd2cbb462a8ecbb488e6a1a25d 14ebcff06a484899a9725832f1eddfdf - - default default] Acquiring lock "6091ab6e-2530-4b48-b482-00867d3c66c5" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:26:50 compute-1 nova_compute[225855]: 2026-01-20 14:26:50.001 225859 DEBUG oslo_concurrency.lockutils [None req-820fdf5b-ca9d-4022-90bb-c341c8e998c7 8ea9f3cd2cbb462a8ecbb488e6a1a25d 14ebcff06a484899a9725832f1eddfdf - - default default] Lock "6091ab6e-2530-4b48-b482-00867d3c66c5" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:26:50 compute-1 nova_compute[225855]: 2026-01-20 14:26:50.001 225859 DEBUG oslo_concurrency.lockutils [None req-820fdf5b-ca9d-4022-90bb-c341c8e998c7 8ea9f3cd2cbb462a8ecbb488e6a1a25d 14ebcff06a484899a9725832f1eddfdf - - default default] Acquiring lock "6091ab6e-2530-4b48-b482-00867d3c66c5-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:26:50 compute-1 nova_compute[225855]: 2026-01-20 14:26:50.002 225859 DEBUG oslo_concurrency.lockutils [None req-820fdf5b-ca9d-4022-90bb-c341c8e998c7 8ea9f3cd2cbb462a8ecbb488e6a1a25d 14ebcff06a484899a9725832f1eddfdf - - default default] Lock "6091ab6e-2530-4b48-b482-00867d3c66c5-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:26:50 compute-1 nova_compute[225855]: 2026-01-20 14:26:50.002 225859 DEBUG oslo_concurrency.lockutils [None req-820fdf5b-ca9d-4022-90bb-c341c8e998c7 8ea9f3cd2cbb462a8ecbb488e6a1a25d 14ebcff06a484899a9725832f1eddfdf - - default default] Lock "6091ab6e-2530-4b48-b482-00867d3c66c5-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:26:50 compute-1 nova_compute[225855]: 2026-01-20 14:26:50.004 225859 INFO nova.compute.manager [None req-820fdf5b-ca9d-4022-90bb-c341c8e998c7 8ea9f3cd2cbb462a8ecbb488e6a1a25d 14ebcff06a484899a9725832f1eddfdf - - default default] [instance: 6091ab6e-2530-4b48-b482-00867d3c66c5] Terminating instance
Jan 20 14:26:50 compute-1 nova_compute[225855]: 2026-01-20 14:26:50.006 225859 DEBUG oslo_concurrency.lockutils [None req-820fdf5b-ca9d-4022-90bb-c341c8e998c7 8ea9f3cd2cbb462a8ecbb488e6a1a25d 14ebcff06a484899a9725832f1eddfdf - - default default] Acquiring lock "refresh_cache-6091ab6e-2530-4b48-b482-00867d3c66c5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:26:50 compute-1 nova_compute[225855]: 2026-01-20 14:26:50.006 225859 DEBUG oslo_concurrency.lockutils [None req-820fdf5b-ca9d-4022-90bb-c341c8e998c7 8ea9f3cd2cbb462a8ecbb488e6a1a25d 14ebcff06a484899a9725832f1eddfdf - - default default] Acquired lock "refresh_cache-6091ab6e-2530-4b48-b482-00867d3c66c5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:26:50 compute-1 nova_compute[225855]: 2026-01-20 14:26:50.007 225859 DEBUG nova.network.neutron [None req-820fdf5b-ca9d-4022-90bb-c341c8e998c7 8ea9f3cd2cbb462a8ecbb488e6a1a25d 14ebcff06a484899a9725832f1eddfdf - - default default] [instance: 6091ab6e-2530-4b48-b482-00867d3c66c5] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 14:26:50 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:26:50 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:26:50 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:26:50.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:26:50 compute-1 nova_compute[225855]: 2026-01-20 14:26:50.345 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:26:50 compute-1 nova_compute[225855]: 2026-01-20 14:26:50.375 225859 DEBUG nova.network.neutron [None req-820fdf5b-ca9d-4022-90bb-c341c8e998c7 8ea9f3cd2cbb462a8ecbb488e6a1a25d 14ebcff06a484899a9725832f1eddfdf - - default default] [instance: 6091ab6e-2530-4b48-b482-00867d3c66c5] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 20 14:26:50 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:26:50 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:26:50 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:26:50.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:26:50 compute-1 ceph-mon[81775]: pgmap v1072: 321 pgs: 321 active+clean; 180 MiB data, 385 MiB used, 21 GiB / 21 GiB avail; 2.3 MiB/s rd, 17 KiB/s wr, 115 op/s
Jan 20 14:26:51 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e151 e151: 3 total, 3 up, 3 in
Jan 20 14:26:51 compute-1 nova_compute[225855]: 2026-01-20 14:26:51.789 225859 DEBUG nova.network.neutron [None req-820fdf5b-ca9d-4022-90bb-c341c8e998c7 8ea9f3cd2cbb462a8ecbb488e6a1a25d 14ebcff06a484899a9725832f1eddfdf - - default default] [instance: 6091ab6e-2530-4b48-b482-00867d3c66c5] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:26:51 compute-1 nova_compute[225855]: 2026-01-20 14:26:51.819 225859 DEBUG oslo_concurrency.lockutils [None req-820fdf5b-ca9d-4022-90bb-c341c8e998c7 8ea9f3cd2cbb462a8ecbb488e6a1a25d 14ebcff06a484899a9725832f1eddfdf - - default default] Releasing lock "refresh_cache-6091ab6e-2530-4b48-b482-00867d3c66c5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:26:51 compute-1 nova_compute[225855]: 2026-01-20 14:26:51.819 225859 DEBUG nova.compute.manager [None req-820fdf5b-ca9d-4022-90bb-c341c8e998c7 8ea9f3cd2cbb462a8ecbb488e6a1a25d 14ebcff06a484899a9725832f1eddfdf - - default default] [instance: 6091ab6e-2530-4b48-b482-00867d3c66c5] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 20 14:26:51 compute-1 systemd[1]: machine-qemu\x2d7\x2dinstance\x2d0000000e.scope: Deactivated successfully.
Jan 20 14:26:51 compute-1 systemd[1]: machine-qemu\x2d7\x2dinstance\x2d0000000e.scope: Consumed 12.313s CPU time.
Jan 20 14:26:51 compute-1 systemd-machined[194361]: Machine qemu-7-instance-0000000e terminated.
Jan 20 14:26:52 compute-1 nova_compute[225855]: 2026-01-20 14:26:52.045 225859 INFO nova.virt.libvirt.driver [-] [instance: 6091ab6e-2530-4b48-b482-00867d3c66c5] Instance destroyed successfully.
Jan 20 14:26:52 compute-1 nova_compute[225855]: 2026-01-20 14:26:52.045 225859 DEBUG nova.objects.instance [None req-820fdf5b-ca9d-4022-90bb-c341c8e998c7 8ea9f3cd2cbb462a8ecbb488e6a1a25d 14ebcff06a484899a9725832f1eddfdf - - default default] Lazy-loading 'resources' on Instance uuid 6091ab6e-2530-4b48-b482-00867d3c66c5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:26:52 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:26:52 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:26:52 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:26:52.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:26:52 compute-1 ceph-mon[81775]: osdmap e151: 3 total, 3 up, 3 in
Jan 20 14:26:52 compute-1 ceph-mon[81775]: pgmap v1074: 321 pgs: 321 active+clean; 160 MiB data, 374 MiB used, 21 GiB / 21 GiB avail; 707 KiB/s rd, 14 KiB/s wr, 93 op/s
Jan 20 14:26:52 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:26:52 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:26:52 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:26:52.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:26:53 compute-1 nova_compute[225855]: 2026-01-20 14:26:53.286 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:26:53 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/1900977882' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:26:53 compute-1 nova_compute[225855]: 2026-01-20 14:26:53.822 225859 INFO nova.virt.libvirt.driver [None req-820fdf5b-ca9d-4022-90bb-c341c8e998c7 8ea9f3cd2cbb462a8ecbb488e6a1a25d 14ebcff06a484899a9725832f1eddfdf - - default default] [instance: 6091ab6e-2530-4b48-b482-00867d3c66c5] Deleting instance files /var/lib/nova/instances/6091ab6e-2530-4b48-b482-00867d3c66c5_del
Jan 20 14:26:53 compute-1 nova_compute[225855]: 2026-01-20 14:26:53.823 225859 INFO nova.virt.libvirt.driver [None req-820fdf5b-ca9d-4022-90bb-c341c8e998c7 8ea9f3cd2cbb462a8ecbb488e6a1a25d 14ebcff06a484899a9725832f1eddfdf - - default default] [instance: 6091ab6e-2530-4b48-b482-00867d3c66c5] Deletion of /var/lib/nova/instances/6091ab6e-2530-4b48-b482-00867d3c66c5_del complete
Jan 20 14:26:54 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:26:54 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:26:54 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:26:54.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:26:54 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:26:54 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:26:54 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:26:54.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:26:54 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:26:54 compute-1 ceph-mon[81775]: pgmap v1075: 321 pgs: 321 active+clean; 122 MiB data, 349 MiB used, 21 GiB / 21 GiB avail; 722 KiB/s rd, 13 KiB/s wr, 111 op/s
Jan 20 14:26:55 compute-1 nova_compute[225855]: 2026-01-20 14:26:55.350 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:26:55 compute-1 nova_compute[225855]: 2026-01-20 14:26:55.783 225859 INFO nova.compute.manager [None req-820fdf5b-ca9d-4022-90bb-c341c8e998c7 8ea9f3cd2cbb462a8ecbb488e6a1a25d 14ebcff06a484899a9725832f1eddfdf - - default default] [instance: 6091ab6e-2530-4b48-b482-00867d3c66c5] Took 3.96 seconds to destroy the instance on the hypervisor.
Jan 20 14:26:55 compute-1 nova_compute[225855]: 2026-01-20 14:26:55.784 225859 DEBUG oslo.service.loopingcall [None req-820fdf5b-ca9d-4022-90bb-c341c8e998c7 8ea9f3cd2cbb462a8ecbb488e6a1a25d 14ebcff06a484899a9725832f1eddfdf - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 20 14:26:55 compute-1 nova_compute[225855]: 2026-01-20 14:26:55.784 225859 DEBUG nova.compute.manager [-] [instance: 6091ab6e-2530-4b48-b482-00867d3c66c5] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 20 14:26:55 compute-1 nova_compute[225855]: 2026-01-20 14:26:55.784 225859 DEBUG nova.network.neutron [-] [instance: 6091ab6e-2530-4b48-b482-00867d3c66c5] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 20 14:26:56 compute-1 nova_compute[225855]: 2026-01-20 14:26:56.036 225859 DEBUG nova.network.neutron [-] [instance: 6091ab6e-2530-4b48-b482-00867d3c66c5] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 20 14:26:56 compute-1 nova_compute[225855]: 2026-01-20 14:26:56.050 225859 DEBUG nova.network.neutron [-] [instance: 6091ab6e-2530-4b48-b482-00867d3c66c5] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:26:56 compute-1 nova_compute[225855]: 2026-01-20 14:26:56.065 225859 INFO nova.compute.manager [-] [instance: 6091ab6e-2530-4b48-b482-00867d3c66c5] Took 0.28 seconds to deallocate network for instance.
Jan 20 14:26:56 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:26:56 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:26:56 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:26:56.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:26:56 compute-1 nova_compute[225855]: 2026-01-20 14:26:56.190 225859 DEBUG oslo_concurrency.lockutils [None req-820fdf5b-ca9d-4022-90bb-c341c8e998c7 8ea9f3cd2cbb462a8ecbb488e6a1a25d 14ebcff06a484899a9725832f1eddfdf - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:26:56 compute-1 nova_compute[225855]: 2026-01-20 14:26:56.191 225859 DEBUG oslo_concurrency.lockutils [None req-820fdf5b-ca9d-4022-90bb-c341c8e998c7 8ea9f3cd2cbb462a8ecbb488e6a1a25d 14ebcff06a484899a9725832f1eddfdf - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:26:56 compute-1 nova_compute[225855]: 2026-01-20 14:26:56.273 225859 DEBUG oslo_concurrency.processutils [None req-820fdf5b-ca9d-4022-90bb-c341c8e998c7 8ea9f3cd2cbb462a8ecbb488e6a1a25d 14ebcff06a484899a9725832f1eddfdf - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:26:56 compute-1 ceph-mon[81775]: pgmap v1076: 321 pgs: 321 active+clean; 101 MiB data, 335 MiB used, 21 GiB / 21 GiB avail; 289 KiB/s rd, 14 KiB/s wr, 99 op/s
Jan 20 14:26:56 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:26:56 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:26:56 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:26:56.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:26:56 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:26:56 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3603524033' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:26:56 compute-1 nova_compute[225855]: 2026-01-20 14:26:56.763 225859 DEBUG oslo_concurrency.processutils [None req-820fdf5b-ca9d-4022-90bb-c341c8e998c7 8ea9f3cd2cbb462a8ecbb488e6a1a25d 14ebcff06a484899a9725832f1eddfdf - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.490s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:26:56 compute-1 nova_compute[225855]: 2026-01-20 14:26:56.772 225859 DEBUG nova.compute.provider_tree [None req-820fdf5b-ca9d-4022-90bb-c341c8e998c7 8ea9f3cd2cbb462a8ecbb488e6a1a25d 14ebcff06a484899a9725832f1eddfdf - - default default] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:26:56 compute-1 nova_compute[225855]: 2026-01-20 14:26:56.800 225859 DEBUG nova.scheduler.client.report [None req-820fdf5b-ca9d-4022-90bb-c341c8e998c7 8ea9f3cd2cbb462a8ecbb488e6a1a25d 14ebcff06a484899a9725832f1eddfdf - - default default] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:26:56 compute-1 nova_compute[225855]: 2026-01-20 14:26:56.831 225859 DEBUG oslo_concurrency.lockutils [None req-820fdf5b-ca9d-4022-90bb-c341c8e998c7 8ea9f3cd2cbb462a8ecbb488e6a1a25d 14ebcff06a484899a9725832f1eddfdf - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.640s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:26:56 compute-1 nova_compute[225855]: 2026-01-20 14:26:56.873 225859 INFO nova.scheduler.client.report [None req-820fdf5b-ca9d-4022-90bb-c341c8e998c7 8ea9f3cd2cbb462a8ecbb488e6a1a25d 14ebcff06a484899a9725832f1eddfdf - - default default] Deleted allocations for instance 6091ab6e-2530-4b48-b482-00867d3c66c5
Jan 20 14:26:56 compute-1 nova_compute[225855]: 2026-01-20 14:26:56.943 225859 DEBUG oslo_concurrency.lockutils [None req-820fdf5b-ca9d-4022-90bb-c341c8e998c7 8ea9f3cd2cbb462a8ecbb488e6a1a25d 14ebcff06a484899a9725832f1eddfdf - - default default] Lock "6091ab6e-2530-4b48-b482-00867d3c66c5" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.942s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:26:57 compute-1 sudo[234151]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:26:58 compute-1 sudo[234151]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:26:58 compute-1 sudo[234151]: pam_unix(sudo:session): session closed for user root
Jan 20 14:26:58 compute-1 sudo[234188]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:26:58 compute-1 sudo[234188]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:26:58 compute-1 sudo[234188]: pam_unix(sudo:session): session closed for user root
Jan 20 14:26:58 compute-1 podman[234155]: 2026-01-20 14:26:58.121439915 +0000 UTC m=+0.144404211 container health_status 72be95c12e041eb2cd1a16ad70dfa1391d9f03a5f3180a614ead9459dcfcd46f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 20 14:26:58 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:26:58 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:26:58 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:26:58.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:26:58 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/3603524033' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:26:58 compute-1 nova_compute[225855]: 2026-01-20 14:26:58.287 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:26:58 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:26:58 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:26:58 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:26:58.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:26:59 compute-1 ceph-mon[81775]: pgmap v1077: 321 pgs: 321 active+clean; 41 MiB data, 303 MiB used, 21 GiB / 21 GiB avail; 458 KiB/s rd, 14 KiB/s wr, 132 op/s
Jan 20 14:26:59 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:27:00 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:27:00 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:27:00 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:27:00.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:27:00 compute-1 nova_compute[225855]: 2026-01-20 14:27:00.354 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:27:00 compute-1 ceph-mon[81775]: pgmap v1078: 321 pgs: 321 active+clean; 41 MiB data, 303 MiB used, 21 GiB / 21 GiB avail; 448 KiB/s rd, 13 KiB/s wr, 116 op/s
Jan 20 14:27:00 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:27:00 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:27:00 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:27:00.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:27:02 compute-1 ceph-mon[81775]: pgmap v1079: 321 pgs: 321 active+clean; 41 MiB data, 303 MiB used, 21 GiB / 21 GiB avail; 371 KiB/s rd, 4.2 KiB/s wr, 82 op/s
Jan 20 14:27:02 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:27:02 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:27:02 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:27:02.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:27:02 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:27:02 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:27:02 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:27:02.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:27:03 compute-1 nova_compute[225855]: 2026-01-20 14:27:03.289 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:27:04 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:27:04 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:27:04 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:27:04.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:27:04 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:27:04 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:27:04 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:27:04.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:27:04 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:27:05 compute-1 ceph-mon[81775]: pgmap v1080: 321 pgs: 321 active+clean; 41 MiB data, 303 MiB used, 21 GiB / 21 GiB avail; 310 KiB/s rd, 3.5 KiB/s wr, 69 op/s
Jan 20 14:27:05 compute-1 nova_compute[225855]: 2026-01-20 14:27:05.356 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:27:06 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:27:06 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:27:06 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:27:06.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:27:06 compute-1 ceph-mon[81775]: pgmap v1081: 321 pgs: 321 active+clean; 41 MiB data, 303 MiB used, 21 GiB / 21 GiB avail; 195 KiB/s rd, 2.5 KiB/s wr, 40 op/s
Jan 20 14:27:06 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:27:06 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:27:06 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:27:06.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:27:07 compute-1 nova_compute[225855]: 2026-01-20 14:27:07.042 225859 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1768919212.0404408, 6091ab6e-2530-4b48-b482-00867d3c66c5 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:27:07 compute-1 nova_compute[225855]: 2026-01-20 14:27:07.042 225859 INFO nova.compute.manager [-] [instance: 6091ab6e-2530-4b48-b482-00867d3c66c5] VM Stopped (Lifecycle Event)
Jan 20 14:27:07 compute-1 nova_compute[225855]: 2026-01-20 14:27:07.068 225859 DEBUG nova.compute.manager [None req-5d44fdcd-e341-4f55-8c93-deb2404ba457 - - - - - -] [instance: 6091ab6e-2530-4b48-b482-00867d3c66c5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:27:08 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:27:08 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:27:08 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:27:08.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:27:08 compute-1 nova_compute[225855]: 2026-01-20 14:27:08.291 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:27:08 compute-1 ceph-mon[81775]: pgmap v1082: 321 pgs: 321 active+clean; 41 MiB data, 303 MiB used, 21 GiB / 21 GiB avail; 142 KiB/s rd, 1.2 KiB/s wr, 30 op/s
Jan 20 14:27:08 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:27:08 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:27:08 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:27:08.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:27:09 compute-1 podman[234232]: 2026-01-20 14:27:09.017687352 +0000 UTC m=+0.064089943 container health_status 533cd8e0c3b8b7f910f15cef49e7f879ab46c12ce646a35628c15f13725eb500 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Jan 20 14:27:09 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:27:10 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:27:10 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:27:10 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:27:10.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:27:10 compute-1 nova_compute[225855]: 2026-01-20 14:27:10.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:27:10 compute-1 nova_compute[225855]: 2026-01-20 14:27:10.360 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:27:10 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:27:10 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:27:10 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:27:10.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:27:10 compute-1 ceph-mon[81775]: pgmap v1083: 321 pgs: 321 active+clean; 41 MiB data, 303 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:27:12 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:27:12 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:27:12 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:27:12.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:27:12 compute-1 ceph-mon[81775]: pgmap v1084: 321 pgs: 321 active+clean; 41 MiB data, 303 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:27:12 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:27:12 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:27:12 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:27:12.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:27:13 compute-1 nova_compute[225855]: 2026-01-20 14:27:13.293 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:27:13 compute-1 nova_compute[225855]: 2026-01-20 14:27:13.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:27:13 compute-1 nova_compute[225855]: 2026-01-20 14:27:13.340 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 14:27:13 compute-1 nova_compute[225855]: 2026-01-20 14:27:13.340 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 20 14:27:13 compute-1 nova_compute[225855]: 2026-01-20 14:27:13.362 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 20 14:27:13 compute-1 nova_compute[225855]: 2026-01-20 14:27:13.362 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:27:13 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 20 14:27:13 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3830522030' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 14:27:13 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 20 14:27:13 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3830522030' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 14:27:14 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:27:14 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:27:14 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:27:14.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:27:14 compute-1 ceph-mon[81775]: pgmap v1085: 321 pgs: 321 active+clean; 41 MiB data, 303 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:27:14 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:27:14 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:27:14 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:27:14 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:27:14.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:27:15 compute-1 nova_compute[225855]: 2026-01-20 14:27:15.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:27:15 compute-1 nova_compute[225855]: 2026-01-20 14:27:15.340 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:27:15 compute-1 nova_compute[225855]: 2026-01-20 14:27:15.340 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:27:15 compute-1 nova_compute[225855]: 2026-01-20 14:27:15.362 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:27:15 compute-1 nova_compute[225855]: 2026-01-20 14:27:15.380 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:27:15 compute-1 nova_compute[225855]: 2026-01-20 14:27:15.380 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:27:15 compute-1 nova_compute[225855]: 2026-01-20 14:27:15.380 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:27:15 compute-1 nova_compute[225855]: 2026-01-20 14:27:15.380 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 14:27:15 compute-1 nova_compute[225855]: 2026-01-20 14:27:15.381 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:27:15 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:27:15 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1596450767' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:27:15 compute-1 nova_compute[225855]: 2026-01-20 14:27:15.803 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.422s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:27:15 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/3830522030' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 14:27:15 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/3830522030' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 14:27:15 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/2896920713' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:27:15 compute-1 sshd-session[234253]: Invalid user admin from 45.179.5.170 port 55476
Jan 20 14:27:16 compute-1 nova_compute[225855]: 2026-01-20 14:27:16.017 225859 WARNING nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 14:27:16 compute-1 nova_compute[225855]: 2026-01-20 14:27:16.018 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4804MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 14:27:16 compute-1 nova_compute[225855]: 2026-01-20 14:27:16.019 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:27:16 compute-1 nova_compute[225855]: 2026-01-20 14:27:16.019 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:27:16 compute-1 nova_compute[225855]: 2026-01-20 14:27:16.081 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 14:27:16 compute-1 nova_compute[225855]: 2026-01-20 14:27:16.082 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 14:27:16 compute-1 nova_compute[225855]: 2026-01-20 14:27:16.097 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:27:16 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:27:16 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:27:16 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:27:16.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:27:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:27:16.387 140354 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:27:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:27:16.387 140354 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:27:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:27:16.387 140354 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:27:16 compute-1 sshd-session[234253]: Connection closed by invalid user admin 45.179.5.170 port 55476 [preauth]
Jan 20 14:27:16 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:27:16 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3796316152' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:27:16 compute-1 nova_compute[225855]: 2026-01-20 14:27:16.561 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:27:16 compute-1 nova_compute[225855]: 2026-01-20 14:27:16.567 225859 DEBUG nova.compute.provider_tree [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:27:16 compute-1 nova_compute[225855]: 2026-01-20 14:27:16.580 225859 DEBUG nova.scheduler.client.report [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:27:16 compute-1 nova_compute[225855]: 2026-01-20 14:27:16.601 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 14:27:16 compute-1 nova_compute[225855]: 2026-01-20 14:27:16.601 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.582s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:27:16 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:27:16 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:27:16 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:27:16.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:27:17 compute-1 nova_compute[225855]: 2026-01-20 14:27:17.601 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:27:17 compute-1 nova_compute[225855]: 2026-01-20 14:27:17.602 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 14:27:17 compute-1 ceph-mon[81775]: pgmap v1086: 321 pgs: 321 active+clean; 41 MiB data, 303 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:27:17 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/1596450767' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:27:17 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/1469458289' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:27:17 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/3796316152' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:27:18 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:27:18 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:27:18 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:27:18.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:27:18 compute-1 sudo[234303]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:27:18 compute-1 sudo[234303]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:27:18 compute-1 sudo[234303]: pam_unix(sudo:session): session closed for user root
Jan 20 14:27:18 compute-1 sudo[234328]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:27:18 compute-1 sudo[234328]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:27:18 compute-1 sudo[234328]: pam_unix(sudo:session): session closed for user root
Jan 20 14:27:18 compute-1 nova_compute[225855]: 2026-01-20 14:27:18.295 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:27:18 compute-1 nova_compute[225855]: 2026-01-20 14:27:18.335 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:27:18 compute-1 nova_compute[225855]: 2026-01-20 14:27:18.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:27:18 compute-1 ceph-mon[81775]: pgmap v1087: 321 pgs: 321 active+clean; 41 MiB data, 303 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:27:18 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/338726957' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:27:18 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/1678624093' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:27:18 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/499083521' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:27:18 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:27:18 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:27:18 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:27:18.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:27:19 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:27:20 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:27:20 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:27:20 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:27:20.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:27:20 compute-1 ceph-mon[81775]: pgmap v1088: 321 pgs: 321 active+clean; 41 MiB data, 303 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:27:20 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/3505650960' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:27:20 compute-1 nova_compute[225855]: 2026-01-20 14:27:20.366 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:27:20 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:27:20 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:27:20 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:27:20.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:27:21 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/1706451896' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:27:21 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/697472392' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:27:22 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:27:22 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:27:22 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:27:22.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:27:22 compute-1 ceph-osd[79119]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 20 14:27:22 compute-1 ceph-osd[79119]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.1 total, 600.0 interval
                                           Cumulative writes: 11K writes, 43K keys, 11K commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.02 MB/s
                                           Cumulative WAL: 11K writes, 3116 syncs, 3.59 writes per sync, written: 0.04 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 4724 writes, 17K keys, 4724 commit groups, 1.0 writes per commit group, ingest: 18.16 MB, 0.03 MB/s
                                           Interval WAL: 4724 writes, 1965 syncs, 2.40 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 20 14:27:22 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:27:22 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:27:22 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:27:22.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:27:22 compute-1 ceph-mon[81775]: pgmap v1089: 321 pgs: 321 active+clean; 86 MiB data, 319 MiB used, 21 GiB / 21 GiB avail; 18 KiB/s rd, 1.4 MiB/s wr, 28 op/s
Jan 20 14:27:22 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/1274305540' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:27:23 compute-1 nova_compute[225855]: 2026-01-20 14:27:23.297 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:27:23 compute-1 ceph-mon[81775]: pgmap v1090: 321 pgs: 321 active+clean; 126 MiB data, 344 MiB used, 21 GiB / 21 GiB avail; 35 KiB/s rd, 3.5 MiB/s wr, 54 op/s
Jan 20 14:27:24 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:27:24 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:27:24 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:27:24.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:27:24 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:27:24 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:27:24 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:27:24 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:27:24.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:27:25 compute-1 nova_compute[225855]: 2026-01-20 14:27:25.369 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:27:26 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:27:26 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:27:26 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:27:26.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:27:26 compute-1 ceph-mon[81775]: pgmap v1091: 321 pgs: 321 active+clean; 145 MiB data, 349 MiB used, 21 GiB / 21 GiB avail; 42 KiB/s rd, 3.9 MiB/s wr, 64 op/s
Jan 20 14:27:26 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/1326481010' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:27:26 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/1257689580' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:27:26 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/205760377' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:27:26 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:27:26 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:27:26 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:27:26.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:27:27 compute-1 systemd[1]: virtproxyd.service: Deactivated successfully.
Jan 20 14:27:28 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:27:28 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:27:28 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:27:28.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:27:28 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/3520391376' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:27:28 compute-1 nova_compute[225855]: 2026-01-20 14:27:28.300 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:27:28 compute-1 nova_compute[225855]: 2026-01-20 14:27:28.414 225859 DEBUG oslo_concurrency.lockutils [None req-76b51570-913e-4765-a697-65fe21bb2b2d 399cc9abe2cd4ab196a4e5789992ae51 1759b9d61ad946b6afa3e8448ce02190 - - default default] Acquiring lock "11d7ff3d-7e9c-4ecb-86b5-ffb0e3e62bef" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:27:28 compute-1 nova_compute[225855]: 2026-01-20 14:27:28.415 225859 DEBUG oslo_concurrency.lockutils [None req-76b51570-913e-4765-a697-65fe21bb2b2d 399cc9abe2cd4ab196a4e5789992ae51 1759b9d61ad946b6afa3e8448ce02190 - - default default] Lock "11d7ff3d-7e9c-4ecb-86b5-ffb0e3e62bef" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:27:28 compute-1 nova_compute[225855]: 2026-01-20 14:27:28.436 225859 DEBUG nova.compute.manager [None req-76b51570-913e-4765-a697-65fe21bb2b2d 399cc9abe2cd4ab196a4e5789992ae51 1759b9d61ad946b6afa3e8448ce02190 - - default default] [instance: 11d7ff3d-7e9c-4ecb-86b5-ffb0e3e62bef] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 20 14:27:28 compute-1 nova_compute[225855]: 2026-01-20 14:27:28.522 225859 DEBUG oslo_concurrency.lockutils [None req-76b51570-913e-4765-a697-65fe21bb2b2d 399cc9abe2cd4ab196a4e5789992ae51 1759b9d61ad946b6afa3e8448ce02190 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:27:28 compute-1 nova_compute[225855]: 2026-01-20 14:27:28.523 225859 DEBUG oslo_concurrency.lockutils [None req-76b51570-913e-4765-a697-65fe21bb2b2d 399cc9abe2cd4ab196a4e5789992ae51 1759b9d61ad946b6afa3e8448ce02190 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:27:28 compute-1 nova_compute[225855]: 2026-01-20 14:27:28.536 225859 DEBUG nova.virt.hardware [None req-76b51570-913e-4765-a697-65fe21bb2b2d 399cc9abe2cd4ab196a4e5789992ae51 1759b9d61ad946b6afa3e8448ce02190 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 20 14:27:28 compute-1 nova_compute[225855]: 2026-01-20 14:27:28.537 225859 INFO nova.compute.claims [None req-76b51570-913e-4765-a697-65fe21bb2b2d 399cc9abe2cd4ab196a4e5789992ae51 1759b9d61ad946b6afa3e8448ce02190 - - default default] [instance: 11d7ff3d-7e9c-4ecb-86b5-ffb0e3e62bef] Claim successful on node compute-1.ctlplane.example.com
Jan 20 14:27:28 compute-1 nova_compute[225855]: 2026-01-20 14:27:28.651 225859 DEBUG oslo_concurrency.processutils [None req-76b51570-913e-4765-a697-65fe21bb2b2d 399cc9abe2cd4ab196a4e5789992ae51 1759b9d61ad946b6afa3e8448ce02190 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:27:28 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:27:28 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:27:28 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:27:28.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:27:29 compute-1 podman[234379]: 2026-01-20 14:27:29.052013194 +0000 UTC m=+0.094199279 container health_status 72be95c12e041eb2cd1a16ad70dfa1391d9f03a5f3180a614ead9459dcfcd46f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, config_id=ovn_controller, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 20 14:27:29 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:27:29 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/819079954' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:27:29 compute-1 nova_compute[225855]: 2026-01-20 14:27:29.091 225859 DEBUG oslo_concurrency.processutils [None req-76b51570-913e-4765-a697-65fe21bb2b2d 399cc9abe2cd4ab196a4e5789992ae51 1759b9d61ad946b6afa3e8448ce02190 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:27:29 compute-1 nova_compute[225855]: 2026-01-20 14:27:29.099 225859 DEBUG nova.compute.provider_tree [None req-76b51570-913e-4765-a697-65fe21bb2b2d 399cc9abe2cd4ab196a4e5789992ae51 1759b9d61ad946b6afa3e8448ce02190 - - default default] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:27:29 compute-1 nova_compute[225855]: 2026-01-20 14:27:29.198 225859 DEBUG nova.scheduler.client.report [None req-76b51570-913e-4765-a697-65fe21bb2b2d 399cc9abe2cd4ab196a4e5789992ae51 1759b9d61ad946b6afa3e8448ce02190 - - default default] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:27:29 compute-1 nova_compute[225855]: 2026-01-20 14:27:29.273 225859 DEBUG oslo_concurrency.lockutils [None req-76b51570-913e-4765-a697-65fe21bb2b2d 399cc9abe2cd4ab196a4e5789992ae51 1759b9d61ad946b6afa3e8448ce02190 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.750s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:27:29 compute-1 nova_compute[225855]: 2026-01-20 14:27:29.274 225859 DEBUG nova.compute.manager [None req-76b51570-913e-4765-a697-65fe21bb2b2d 399cc9abe2cd4ab196a4e5789992ae51 1759b9d61ad946b6afa3e8448ce02190 - - default default] [instance: 11d7ff3d-7e9c-4ecb-86b5-ffb0e3e62bef] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 20 14:27:29 compute-1 nova_compute[225855]: 2026-01-20 14:27:29.398 225859 DEBUG nova.compute.manager [None req-76b51570-913e-4765-a697-65fe21bb2b2d 399cc9abe2cd4ab196a4e5789992ae51 1759b9d61ad946b6afa3e8448ce02190 - - default default] [instance: 11d7ff3d-7e9c-4ecb-86b5-ffb0e3e62bef] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 20 14:27:29 compute-1 nova_compute[225855]: 2026-01-20 14:27:29.398 225859 DEBUG nova.network.neutron [None req-76b51570-913e-4765-a697-65fe21bb2b2d 399cc9abe2cd4ab196a4e5789992ae51 1759b9d61ad946b6afa3e8448ce02190 - - default default] [instance: 11d7ff3d-7e9c-4ecb-86b5-ffb0e3e62bef] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 20 14:27:29 compute-1 nova_compute[225855]: 2026-01-20 14:27:29.419 225859 INFO nova.virt.libvirt.driver [None req-76b51570-913e-4765-a697-65fe21bb2b2d 399cc9abe2cd4ab196a4e5789992ae51 1759b9d61ad946b6afa3e8448ce02190 - - default default] [instance: 11d7ff3d-7e9c-4ecb-86b5-ffb0e3e62bef] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 20 14:27:29 compute-1 nova_compute[225855]: 2026-01-20 14:27:29.443 225859 DEBUG nova.compute.manager [None req-76b51570-913e-4765-a697-65fe21bb2b2d 399cc9abe2cd4ab196a4e5789992ae51 1759b9d61ad946b6afa3e8448ce02190 - - default default] [instance: 11d7ff3d-7e9c-4ecb-86b5-ffb0e3e62bef] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 20 14:27:29 compute-1 nova_compute[225855]: 2026-01-20 14:27:29.531 225859 DEBUG nova.compute.manager [None req-76b51570-913e-4765-a697-65fe21bb2b2d 399cc9abe2cd4ab196a4e5789992ae51 1759b9d61ad946b6afa3e8448ce02190 - - default default] [instance: 11d7ff3d-7e9c-4ecb-86b5-ffb0e3e62bef] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 20 14:27:29 compute-1 nova_compute[225855]: 2026-01-20 14:27:29.533 225859 DEBUG nova.virt.libvirt.driver [None req-76b51570-913e-4765-a697-65fe21bb2b2d 399cc9abe2cd4ab196a4e5789992ae51 1759b9d61ad946b6afa3e8448ce02190 - - default default] [instance: 11d7ff3d-7e9c-4ecb-86b5-ffb0e3e62bef] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 20 14:27:29 compute-1 nova_compute[225855]: 2026-01-20 14:27:29.534 225859 INFO nova.virt.libvirt.driver [None req-76b51570-913e-4765-a697-65fe21bb2b2d 399cc9abe2cd4ab196a4e5789992ae51 1759b9d61ad946b6afa3e8448ce02190 - - default default] [instance: 11d7ff3d-7e9c-4ecb-86b5-ffb0e3e62bef] Creating image(s)
Jan 20 14:27:29 compute-1 nova_compute[225855]: 2026-01-20 14:27:29.569 225859 DEBUG nova.storage.rbd_utils [None req-76b51570-913e-4765-a697-65fe21bb2b2d 399cc9abe2cd4ab196a4e5789992ae51 1759b9d61ad946b6afa3e8448ce02190 - - default default] rbd image 11d7ff3d-7e9c-4ecb-86b5-ffb0e3e62bef_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:27:29 compute-1 nova_compute[225855]: 2026-01-20 14:27:29.596 225859 DEBUG nova.storage.rbd_utils [None req-76b51570-913e-4765-a697-65fe21bb2b2d 399cc9abe2cd4ab196a4e5789992ae51 1759b9d61ad946b6afa3e8448ce02190 - - default default] rbd image 11d7ff3d-7e9c-4ecb-86b5-ffb0e3e62bef_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:27:29 compute-1 nova_compute[225855]: 2026-01-20 14:27:29.627 225859 DEBUG nova.storage.rbd_utils [None req-76b51570-913e-4765-a697-65fe21bb2b2d 399cc9abe2cd4ab196a4e5789992ae51 1759b9d61ad946b6afa3e8448ce02190 - - default default] rbd image 11d7ff3d-7e9c-4ecb-86b5-ffb0e3e62bef_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:27:29 compute-1 nova_compute[225855]: 2026-01-20 14:27:29.630 225859 DEBUG oslo_concurrency.processutils [None req-76b51570-913e-4765-a697-65fe21bb2b2d 399cc9abe2cd4ab196a4e5789992ae51 1759b9d61ad946b6afa3e8448ce02190 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:27:29 compute-1 nova_compute[225855]: 2026-01-20 14:27:29.686 225859 DEBUG oslo_concurrency.processutils [None req-76b51570-913e-4765-a697-65fe21bb2b2d 399cc9abe2cd4ab196a4e5789992ae51 1759b9d61ad946b6afa3e8448ce02190 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:27:29 compute-1 nova_compute[225855]: 2026-01-20 14:27:29.687 225859 DEBUG oslo_concurrency.lockutils [None req-76b51570-913e-4765-a697-65fe21bb2b2d 399cc9abe2cd4ab196a4e5789992ae51 1759b9d61ad946b6afa3e8448ce02190 - - default default] Acquiring lock "82d5c1918fd7c974214c7a48c1793a7a82560462" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:27:29 compute-1 nova_compute[225855]: 2026-01-20 14:27:29.688 225859 DEBUG oslo_concurrency.lockutils [None req-76b51570-913e-4765-a697-65fe21bb2b2d 399cc9abe2cd4ab196a4e5789992ae51 1759b9d61ad946b6afa3e8448ce02190 - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:27:29 compute-1 nova_compute[225855]: 2026-01-20 14:27:29.688 225859 DEBUG oslo_concurrency.lockutils [None req-76b51570-913e-4765-a697-65fe21bb2b2d 399cc9abe2cd4ab196a4e5789992ae51 1759b9d61ad946b6afa3e8448ce02190 - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:27:29 compute-1 nova_compute[225855]: 2026-01-20 14:27:29.717 225859 DEBUG nova.storage.rbd_utils [None req-76b51570-913e-4765-a697-65fe21bb2b2d 399cc9abe2cd4ab196a4e5789992ae51 1759b9d61ad946b6afa3e8448ce02190 - - default default] rbd image 11d7ff3d-7e9c-4ecb-86b5-ffb0e3e62bef_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:27:29 compute-1 nova_compute[225855]: 2026-01-20 14:27:29.720 225859 DEBUG oslo_concurrency.processutils [None req-76b51570-913e-4765-a697-65fe21bb2b2d 399cc9abe2cd4ab196a4e5789992ae51 1759b9d61ad946b6afa3e8448ce02190 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 11d7ff3d-7e9c-4ecb-86b5-ffb0e3e62bef_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:27:29 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:27:29 compute-1 nova_compute[225855]: 2026-01-20 14:27:29.850 225859 DEBUG nova.network.neutron [None req-76b51570-913e-4765-a697-65fe21bb2b2d 399cc9abe2cd4ab196a4e5789992ae51 1759b9d61ad946b6afa3e8448ce02190 - - default default] [instance: 11d7ff3d-7e9c-4ecb-86b5-ffb0e3e62bef] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Jan 20 14:27:29 compute-1 nova_compute[225855]: 2026-01-20 14:27:29.851 225859 DEBUG nova.compute.manager [None req-76b51570-913e-4765-a697-65fe21bb2b2d 399cc9abe2cd4ab196a4e5789992ae51 1759b9d61ad946b6afa3e8448ce02190 - - default default] [instance: 11d7ff3d-7e9c-4ecb-86b5-ffb0e3e62bef] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 20 14:27:30 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:27:30 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:27:30 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:27:30.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:27:30 compute-1 nova_compute[225855]: 2026-01-20 14:27:30.372 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:27:30 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:27:30 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:27:30 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:27:30.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:27:31 compute-1 ceph-mon[81775]: pgmap v1092: 321 pgs: 321 active+clean; 180 MiB data, 367 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 5.3 MiB/s wr, 148 op/s
Jan 20 14:27:31 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/819079954' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:27:32 compute-1 ovn_controller[130490]: 2026-01-20T14:27:32Z|00078|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Jan 20 14:27:32 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:27:32 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:27:32 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:27:32.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:27:32 compute-1 nova_compute[225855]: 2026-01-20 14:27:32.686 225859 DEBUG oslo_concurrency.processutils [None req-76b51570-913e-4765-a697-65fe21bb2b2d 399cc9abe2cd4ab196a4e5789992ae51 1759b9d61ad946b6afa3e8448ce02190 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 11d7ff3d-7e9c-4ecb-86b5-ffb0e3e62bef_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.966s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:27:32 compute-1 nova_compute[225855]: 2026-01-20 14:27:32.757 225859 DEBUG nova.storage.rbd_utils [None req-76b51570-913e-4765-a697-65fe21bb2b2d 399cc9abe2cd4ab196a4e5789992ae51 1759b9d61ad946b6afa3e8448ce02190 - - default default] resizing rbd image 11d7ff3d-7e9c-4ecb-86b5-ffb0e3e62bef_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 20 14:27:32 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:27:32 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:27:32 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:27:32.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:27:32 compute-1 nova_compute[225855]: 2026-01-20 14:27:32.870 225859 DEBUG nova.objects.instance [None req-76b51570-913e-4765-a697-65fe21bb2b2d 399cc9abe2cd4ab196a4e5789992ae51 1759b9d61ad946b6afa3e8448ce02190 - - default default] Lazy-loading 'migration_context' on Instance uuid 11d7ff3d-7e9c-4ecb-86b5-ffb0e3e62bef obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:27:33 compute-1 nova_compute[225855]: 2026-01-20 14:27:33.115 225859 DEBUG nova.virt.libvirt.driver [None req-76b51570-913e-4765-a697-65fe21bb2b2d 399cc9abe2cd4ab196a4e5789992ae51 1759b9d61ad946b6afa3e8448ce02190 - - default default] [instance: 11d7ff3d-7e9c-4ecb-86b5-ffb0e3e62bef] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 20 14:27:33 compute-1 nova_compute[225855]: 2026-01-20 14:27:33.116 225859 DEBUG nova.virt.libvirt.driver [None req-76b51570-913e-4765-a697-65fe21bb2b2d 399cc9abe2cd4ab196a4e5789992ae51 1759b9d61ad946b6afa3e8448ce02190 - - default default] [instance: 11d7ff3d-7e9c-4ecb-86b5-ffb0e3e62bef] Ensure instance console log exists: /var/lib/nova/instances/11d7ff3d-7e9c-4ecb-86b5-ffb0e3e62bef/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 20 14:27:33 compute-1 nova_compute[225855]: 2026-01-20 14:27:33.116 225859 DEBUG oslo_concurrency.lockutils [None req-76b51570-913e-4765-a697-65fe21bb2b2d 399cc9abe2cd4ab196a4e5789992ae51 1759b9d61ad946b6afa3e8448ce02190 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:27:33 compute-1 nova_compute[225855]: 2026-01-20 14:27:33.117 225859 DEBUG oslo_concurrency.lockutils [None req-76b51570-913e-4765-a697-65fe21bb2b2d 399cc9abe2cd4ab196a4e5789992ae51 1759b9d61ad946b6afa3e8448ce02190 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:27:33 compute-1 nova_compute[225855]: 2026-01-20 14:27:33.117 225859 DEBUG oslo_concurrency.lockutils [None req-76b51570-913e-4765-a697-65fe21bb2b2d 399cc9abe2cd4ab196a4e5789992ae51 1759b9d61ad946b6afa3e8448ce02190 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:27:33 compute-1 nova_compute[225855]: 2026-01-20 14:27:33.119 225859 DEBUG nova.virt.libvirt.driver [None req-76b51570-913e-4765-a697-65fe21bb2b2d 399cc9abe2cd4ab196a4e5789992ae51 1759b9d61ad946b6afa3e8448ce02190 - - default default] [instance: 11d7ff3d-7e9c-4ecb-86b5-ffb0e3e62bef] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'device_type': 'disk', 'encryption_options': None, 'size': 0, 'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'guest_format': None, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'image_id': 'a32b3e07-16d8-46fd-9a7b-c242c432fcf9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 20 14:27:33 compute-1 nova_compute[225855]: 2026-01-20 14:27:33.123 225859 WARNING nova.virt.libvirt.driver [None req-76b51570-913e-4765-a697-65fe21bb2b2d 399cc9abe2cd4ab196a4e5789992ae51 1759b9d61ad946b6afa3e8448ce02190 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 14:27:33 compute-1 nova_compute[225855]: 2026-01-20 14:27:33.128 225859 DEBUG nova.virt.libvirt.host [None req-76b51570-913e-4765-a697-65fe21bb2b2d 399cc9abe2cd4ab196a4e5789992ae51 1759b9d61ad946b6afa3e8448ce02190 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 20 14:27:33 compute-1 nova_compute[225855]: 2026-01-20 14:27:33.128 225859 DEBUG nova.virt.libvirt.host [None req-76b51570-913e-4765-a697-65fe21bb2b2d 399cc9abe2cd4ab196a4e5789992ae51 1759b9d61ad946b6afa3e8448ce02190 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 20 14:27:33 compute-1 nova_compute[225855]: 2026-01-20 14:27:33.132 225859 DEBUG nova.virt.libvirt.host [None req-76b51570-913e-4765-a697-65fe21bb2b2d 399cc9abe2cd4ab196a4e5789992ae51 1759b9d61ad946b6afa3e8448ce02190 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 20 14:27:33 compute-1 nova_compute[225855]: 2026-01-20 14:27:33.132 225859 DEBUG nova.virt.libvirt.host [None req-76b51570-913e-4765-a697-65fe21bb2b2d 399cc9abe2cd4ab196a4e5789992ae51 1759b9d61ad946b6afa3e8448ce02190 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 20 14:27:33 compute-1 nova_compute[225855]: 2026-01-20 14:27:33.134 225859 DEBUG nova.virt.libvirt.driver [None req-76b51570-913e-4765-a697-65fe21bb2b2d 399cc9abe2cd4ab196a4e5789992ae51 1759b9d61ad946b6afa3e8448ce02190 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 20 14:27:33 compute-1 nova_compute[225855]: 2026-01-20 14:27:33.134 225859 DEBUG nova.virt.hardware [None req-76b51570-913e-4765-a697-65fe21bb2b2d 399cc9abe2cd4ab196a4e5789992ae51 1759b9d61ad946b6afa3e8448ce02190 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-20T14:21:55Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='522deaab-a741-4dbb-932d-d8b13a211c33',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 20 14:27:33 compute-1 nova_compute[225855]: 2026-01-20 14:27:33.135 225859 DEBUG nova.virt.hardware [None req-76b51570-913e-4765-a697-65fe21bb2b2d 399cc9abe2cd4ab196a4e5789992ae51 1759b9d61ad946b6afa3e8448ce02190 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 20 14:27:33 compute-1 nova_compute[225855]: 2026-01-20 14:27:33.135 225859 DEBUG nova.virt.hardware [None req-76b51570-913e-4765-a697-65fe21bb2b2d 399cc9abe2cd4ab196a4e5789992ae51 1759b9d61ad946b6afa3e8448ce02190 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 20 14:27:33 compute-1 nova_compute[225855]: 2026-01-20 14:27:33.136 225859 DEBUG nova.virt.hardware [None req-76b51570-913e-4765-a697-65fe21bb2b2d 399cc9abe2cd4ab196a4e5789992ae51 1759b9d61ad946b6afa3e8448ce02190 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 20 14:27:33 compute-1 nova_compute[225855]: 2026-01-20 14:27:33.136 225859 DEBUG nova.virt.hardware [None req-76b51570-913e-4765-a697-65fe21bb2b2d 399cc9abe2cd4ab196a4e5789992ae51 1759b9d61ad946b6afa3e8448ce02190 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 20 14:27:33 compute-1 nova_compute[225855]: 2026-01-20 14:27:33.137 225859 DEBUG nova.virt.hardware [None req-76b51570-913e-4765-a697-65fe21bb2b2d 399cc9abe2cd4ab196a4e5789992ae51 1759b9d61ad946b6afa3e8448ce02190 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 20 14:27:33 compute-1 nova_compute[225855]: 2026-01-20 14:27:33.137 225859 DEBUG nova.virt.hardware [None req-76b51570-913e-4765-a697-65fe21bb2b2d 399cc9abe2cd4ab196a4e5789992ae51 1759b9d61ad946b6afa3e8448ce02190 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 20 14:27:33 compute-1 nova_compute[225855]: 2026-01-20 14:27:33.137 225859 DEBUG nova.virt.hardware [None req-76b51570-913e-4765-a697-65fe21bb2b2d 399cc9abe2cd4ab196a4e5789992ae51 1759b9d61ad946b6afa3e8448ce02190 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 20 14:27:33 compute-1 nova_compute[225855]: 2026-01-20 14:27:33.138 225859 DEBUG nova.virt.hardware [None req-76b51570-913e-4765-a697-65fe21bb2b2d 399cc9abe2cd4ab196a4e5789992ae51 1759b9d61ad946b6afa3e8448ce02190 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 20 14:27:33 compute-1 nova_compute[225855]: 2026-01-20 14:27:33.138 225859 DEBUG nova.virt.hardware [None req-76b51570-913e-4765-a697-65fe21bb2b2d 399cc9abe2cd4ab196a4e5789992ae51 1759b9d61ad946b6afa3e8448ce02190 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 20 14:27:33 compute-1 nova_compute[225855]: 2026-01-20 14:27:33.139 225859 DEBUG nova.virt.hardware [None req-76b51570-913e-4765-a697-65fe21bb2b2d 399cc9abe2cd4ab196a4e5789992ae51 1759b9d61ad946b6afa3e8448ce02190 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 20 14:27:33 compute-1 nova_compute[225855]: 2026-01-20 14:27:33.143 225859 DEBUG oslo_concurrency.processutils [None req-76b51570-913e-4765-a697-65fe21bb2b2d 399cc9abe2cd4ab196a4e5789992ae51 1759b9d61ad946b6afa3e8448ce02190 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:27:33 compute-1 ceph-mon[81775]: pgmap v1093: 321 pgs: 321 active+clean; 181 MiB data, 367 MiB used, 21 GiB / 21 GiB avail; 2.0 MiB/s rd, 5.3 MiB/s wr, 155 op/s
Jan 20 14:27:33 compute-1 ceph-mon[81775]: pgmap v1094: 321 pgs: 321 active+clean; 192 MiB data, 367 MiB used, 21 GiB / 21 GiB avail; 2.4 MiB/s rd, 5.4 MiB/s wr, 184 op/s
Jan 20 14:27:33 compute-1 nova_compute[225855]: 2026-01-20 14:27:33.302 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:27:33 compute-1 nova_compute[225855]: 2026-01-20 14:27:33.659 225859 DEBUG oslo_concurrency.processutils [None req-76b51570-913e-4765-a697-65fe21bb2b2d 399cc9abe2cd4ab196a4e5789992ae51 1759b9d61ad946b6afa3e8448ce02190 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.516s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:27:33 compute-1 nova_compute[225855]: 2026-01-20 14:27:33.688 225859 DEBUG nova.storage.rbd_utils [None req-76b51570-913e-4765-a697-65fe21bb2b2d 399cc9abe2cd4ab196a4e5789992ae51 1759b9d61ad946b6afa3e8448ce02190 - - default default] rbd image 11d7ff3d-7e9c-4ecb-86b5-ffb0e3e62bef_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:27:33 compute-1 nova_compute[225855]: 2026-01-20 14:27:33.692 225859 DEBUG oslo_concurrency.processutils [None req-76b51570-913e-4765-a697-65fe21bb2b2d 399cc9abe2cd4ab196a4e5789992ae51 1759b9d61ad946b6afa3e8448ce02190 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:27:34 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:27:34 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1353817912' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:27:34 compute-1 nova_compute[225855]: 2026-01-20 14:27:34.121 225859 DEBUG oslo_concurrency.processutils [None req-76b51570-913e-4765-a697-65fe21bb2b2d 399cc9abe2cd4ab196a4e5789992ae51 1759b9d61ad946b6afa3e8448ce02190 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:27:34 compute-1 nova_compute[225855]: 2026-01-20 14:27:34.125 225859 DEBUG nova.objects.instance [None req-76b51570-913e-4765-a697-65fe21bb2b2d 399cc9abe2cd4ab196a4e5789992ae51 1759b9d61ad946b6afa3e8448ce02190 - - default default] Lazy-loading 'pci_devices' on Instance uuid 11d7ff3d-7e9c-4ecb-86b5-ffb0e3e62bef obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:27:34 compute-1 nova_compute[225855]: 2026-01-20 14:27:34.140 225859 DEBUG nova.virt.libvirt.driver [None req-76b51570-913e-4765-a697-65fe21bb2b2d 399cc9abe2cd4ab196a4e5789992ae51 1759b9d61ad946b6afa3e8448ce02190 - - default default] [instance: 11d7ff3d-7e9c-4ecb-86b5-ffb0e3e62bef] End _get_guest_xml xml=<domain type="kvm">
Jan 20 14:27:34 compute-1 nova_compute[225855]:   <uuid>11d7ff3d-7e9c-4ecb-86b5-ffb0e3e62bef</uuid>
Jan 20 14:27:34 compute-1 nova_compute[225855]:   <name>instance-00000013</name>
Jan 20 14:27:34 compute-1 nova_compute[225855]:   <memory>131072</memory>
Jan 20 14:27:34 compute-1 nova_compute[225855]:   <vcpu>1</vcpu>
Jan 20 14:27:34 compute-1 nova_compute[225855]:   <metadata>
Jan 20 14:27:34 compute-1 nova_compute[225855]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 14:27:34 compute-1 nova_compute[225855]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 14:27:34 compute-1 nova_compute[225855]:       <nova:name>tempest-LiveMigrationNegativeTest-server-156396385</nova:name>
Jan 20 14:27:34 compute-1 nova_compute[225855]:       <nova:creationTime>2026-01-20 14:27:33</nova:creationTime>
Jan 20 14:27:34 compute-1 nova_compute[225855]:       <nova:flavor name="m1.nano">
Jan 20 14:27:34 compute-1 nova_compute[225855]:         <nova:memory>128</nova:memory>
Jan 20 14:27:34 compute-1 nova_compute[225855]:         <nova:disk>1</nova:disk>
Jan 20 14:27:34 compute-1 nova_compute[225855]:         <nova:swap>0</nova:swap>
Jan 20 14:27:34 compute-1 nova_compute[225855]:         <nova:ephemeral>0</nova:ephemeral>
Jan 20 14:27:34 compute-1 nova_compute[225855]:         <nova:vcpus>1</nova:vcpus>
Jan 20 14:27:34 compute-1 nova_compute[225855]:       </nova:flavor>
Jan 20 14:27:34 compute-1 nova_compute[225855]:       <nova:owner>
Jan 20 14:27:34 compute-1 nova_compute[225855]:         <nova:user uuid="399cc9abe2cd4ab196a4e5789992ae51">tempest-LiveMigrationNegativeTest-1807701797-project-member</nova:user>
Jan 20 14:27:34 compute-1 nova_compute[225855]:         <nova:project uuid="1759b9d61ad946b6afa3e8448ce02190">tempest-LiveMigrationNegativeTest-1807701797</nova:project>
Jan 20 14:27:34 compute-1 nova_compute[225855]:       </nova:owner>
Jan 20 14:27:34 compute-1 nova_compute[225855]:       <nova:root type="image" uuid="a32b3e07-16d8-46fd-9a7b-c242c432fcf9"/>
Jan 20 14:27:34 compute-1 nova_compute[225855]:       <nova:ports/>
Jan 20 14:27:34 compute-1 nova_compute[225855]:     </nova:instance>
Jan 20 14:27:34 compute-1 nova_compute[225855]:   </metadata>
Jan 20 14:27:34 compute-1 nova_compute[225855]:   <sysinfo type="smbios">
Jan 20 14:27:34 compute-1 nova_compute[225855]:     <system>
Jan 20 14:27:34 compute-1 nova_compute[225855]:       <entry name="manufacturer">RDO</entry>
Jan 20 14:27:34 compute-1 nova_compute[225855]:       <entry name="product">OpenStack Compute</entry>
Jan 20 14:27:34 compute-1 nova_compute[225855]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 14:27:34 compute-1 nova_compute[225855]:       <entry name="serial">11d7ff3d-7e9c-4ecb-86b5-ffb0e3e62bef</entry>
Jan 20 14:27:34 compute-1 nova_compute[225855]:       <entry name="uuid">11d7ff3d-7e9c-4ecb-86b5-ffb0e3e62bef</entry>
Jan 20 14:27:34 compute-1 nova_compute[225855]:       <entry name="family">Virtual Machine</entry>
Jan 20 14:27:34 compute-1 nova_compute[225855]:     </system>
Jan 20 14:27:34 compute-1 nova_compute[225855]:   </sysinfo>
Jan 20 14:27:34 compute-1 nova_compute[225855]:   <os>
Jan 20 14:27:34 compute-1 nova_compute[225855]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 20 14:27:34 compute-1 nova_compute[225855]:     <boot dev="hd"/>
Jan 20 14:27:34 compute-1 nova_compute[225855]:     <smbios mode="sysinfo"/>
Jan 20 14:27:34 compute-1 nova_compute[225855]:   </os>
Jan 20 14:27:34 compute-1 nova_compute[225855]:   <features>
Jan 20 14:27:34 compute-1 nova_compute[225855]:     <acpi/>
Jan 20 14:27:34 compute-1 nova_compute[225855]:     <apic/>
Jan 20 14:27:34 compute-1 nova_compute[225855]:     <vmcoreinfo/>
Jan 20 14:27:34 compute-1 nova_compute[225855]:   </features>
Jan 20 14:27:34 compute-1 nova_compute[225855]:   <clock offset="utc">
Jan 20 14:27:34 compute-1 nova_compute[225855]:     <timer name="pit" tickpolicy="delay"/>
Jan 20 14:27:34 compute-1 nova_compute[225855]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 20 14:27:34 compute-1 nova_compute[225855]:     <timer name="hpet" present="no"/>
Jan 20 14:27:34 compute-1 nova_compute[225855]:   </clock>
Jan 20 14:27:34 compute-1 nova_compute[225855]:   <cpu mode="custom" match="exact">
Jan 20 14:27:34 compute-1 nova_compute[225855]:     <model>Nehalem</model>
Jan 20 14:27:34 compute-1 nova_compute[225855]:     <topology sockets="1" cores="1" threads="1"/>
Jan 20 14:27:34 compute-1 nova_compute[225855]:   </cpu>
Jan 20 14:27:34 compute-1 nova_compute[225855]:   <devices>
Jan 20 14:27:34 compute-1 nova_compute[225855]:     <disk type="network" device="disk">
Jan 20 14:27:34 compute-1 nova_compute[225855]:       <driver type="raw" cache="none"/>
Jan 20 14:27:34 compute-1 nova_compute[225855]:       <source protocol="rbd" name="vms/11d7ff3d-7e9c-4ecb-86b5-ffb0e3e62bef_disk">
Jan 20 14:27:34 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:27:34 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:27:34 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:27:34 compute-1 nova_compute[225855]:       </source>
Jan 20 14:27:34 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 14:27:34 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:27:34 compute-1 nova_compute[225855]:       </auth>
Jan 20 14:27:34 compute-1 nova_compute[225855]:       <target dev="vda" bus="virtio"/>
Jan 20 14:27:34 compute-1 nova_compute[225855]:     </disk>
Jan 20 14:27:34 compute-1 nova_compute[225855]:     <disk type="network" device="cdrom">
Jan 20 14:27:34 compute-1 nova_compute[225855]:       <driver type="raw" cache="none"/>
Jan 20 14:27:34 compute-1 nova_compute[225855]:       <source protocol="rbd" name="vms/11d7ff3d-7e9c-4ecb-86b5-ffb0e3e62bef_disk.config">
Jan 20 14:27:34 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:27:34 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:27:34 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:27:34 compute-1 nova_compute[225855]:       </source>
Jan 20 14:27:34 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 14:27:34 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:27:34 compute-1 nova_compute[225855]:       </auth>
Jan 20 14:27:34 compute-1 nova_compute[225855]:       <target dev="sda" bus="sata"/>
Jan 20 14:27:34 compute-1 nova_compute[225855]:     </disk>
Jan 20 14:27:34 compute-1 nova_compute[225855]:     <serial type="pty">
Jan 20 14:27:34 compute-1 nova_compute[225855]:       <log file="/var/lib/nova/instances/11d7ff3d-7e9c-4ecb-86b5-ffb0e3e62bef/console.log" append="off"/>
Jan 20 14:27:34 compute-1 nova_compute[225855]:     </serial>
Jan 20 14:27:34 compute-1 nova_compute[225855]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 20 14:27:34 compute-1 nova_compute[225855]:     <video>
Jan 20 14:27:34 compute-1 nova_compute[225855]:       <model type="virtio"/>
Jan 20 14:27:34 compute-1 nova_compute[225855]:     </video>
Jan 20 14:27:34 compute-1 nova_compute[225855]:     <input type="tablet" bus="usb"/>
Jan 20 14:27:34 compute-1 nova_compute[225855]:     <rng model="virtio">
Jan 20 14:27:34 compute-1 nova_compute[225855]:       <backend model="random">/dev/urandom</backend>
Jan 20 14:27:34 compute-1 nova_compute[225855]:     </rng>
Jan 20 14:27:34 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root"/>
Jan 20 14:27:34 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:27:34 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:27:34 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:27:34 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:27:34 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:27:34 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:27:34 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:27:34 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:27:34 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:27:34 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:27:34 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:27:34 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:27:34 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:27:34 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:27:34 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:27:34 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:27:34 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:27:34 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:27:34 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:27:34 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:27:34 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:27:34 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:27:34 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:27:34 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:27:34 compute-1 nova_compute[225855]:     <controller type="usb" index="0"/>
Jan 20 14:27:34 compute-1 nova_compute[225855]:     <memballoon model="virtio">
Jan 20 14:27:34 compute-1 nova_compute[225855]:       <stats period="10"/>
Jan 20 14:27:34 compute-1 nova_compute[225855]:     </memballoon>
Jan 20 14:27:34 compute-1 nova_compute[225855]:   </devices>
Jan 20 14:27:34 compute-1 nova_compute[225855]: </domain>
Jan 20 14:27:34 compute-1 nova_compute[225855]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 20 14:27:34 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:27:34 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:27:34 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:27:34.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:27:34 compute-1 nova_compute[225855]: 2026-01-20 14:27:34.204 225859 DEBUG nova.virt.libvirt.driver [None req-76b51570-913e-4765-a697-65fe21bb2b2d 399cc9abe2cd4ab196a4e5789992ae51 1759b9d61ad946b6afa3e8448ce02190 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:27:34 compute-1 nova_compute[225855]: 2026-01-20 14:27:34.204 225859 DEBUG nova.virt.libvirt.driver [None req-76b51570-913e-4765-a697-65fe21bb2b2d 399cc9abe2cd4ab196a4e5789992ae51 1759b9d61ad946b6afa3e8448ce02190 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:27:34 compute-1 nova_compute[225855]: 2026-01-20 14:27:34.204 225859 INFO nova.virt.libvirt.driver [None req-76b51570-913e-4765-a697-65fe21bb2b2d 399cc9abe2cd4ab196a4e5789992ae51 1759b9d61ad946b6afa3e8448ce02190 - - default default] [instance: 11d7ff3d-7e9c-4ecb-86b5-ffb0e3e62bef] Using config drive
Jan 20 14:27:34 compute-1 nova_compute[225855]: 2026-01-20 14:27:34.225 225859 DEBUG nova.storage.rbd_utils [None req-76b51570-913e-4765-a697-65fe21bb2b2d 399cc9abe2cd4ab196a4e5789992ae51 1759b9d61ad946b6afa3e8448ce02190 - - default default] rbd image 11d7ff3d-7e9c-4ecb-86b5-ffb0e3e62bef_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:27:34 compute-1 nova_compute[225855]: 2026-01-20 14:27:34.622 225859 INFO nova.virt.libvirt.driver [None req-76b51570-913e-4765-a697-65fe21bb2b2d 399cc9abe2cd4ab196a4e5789992ae51 1759b9d61ad946b6afa3e8448ce02190 - - default default] [instance: 11d7ff3d-7e9c-4ecb-86b5-ffb0e3e62bef] Creating config drive at /var/lib/nova/instances/11d7ff3d-7e9c-4ecb-86b5-ffb0e3e62bef/disk.config
Jan 20 14:27:34 compute-1 nova_compute[225855]: 2026-01-20 14:27:34.632 225859 DEBUG oslo_concurrency.processutils [None req-76b51570-913e-4765-a697-65fe21bb2b2d 399cc9abe2cd4ab196a4e5789992ae51 1759b9d61ad946b6afa3e8448ce02190 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/11d7ff3d-7e9c-4ecb-86b5-ffb0e3e62bef/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmplblpws97 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:27:34 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:27:34 compute-1 nova_compute[225855]: 2026-01-20 14:27:34.777 225859 DEBUG oslo_concurrency.processutils [None req-76b51570-913e-4765-a697-65fe21bb2b2d 399cc9abe2cd4ab196a4e5789992ae51 1759b9d61ad946b6afa3e8448ce02190 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/11d7ff3d-7e9c-4ecb-86b5-ffb0e3e62bef/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmplblpws97" returned: 0 in 0.145s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:27:34 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:27:34 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:27:34 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:27:34.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:27:34 compute-1 nova_compute[225855]: 2026-01-20 14:27:34.824 225859 DEBUG nova.storage.rbd_utils [None req-76b51570-913e-4765-a697-65fe21bb2b2d 399cc9abe2cd4ab196a4e5789992ae51 1759b9d61ad946b6afa3e8448ce02190 - - default default] rbd image 11d7ff3d-7e9c-4ecb-86b5-ffb0e3e62bef_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:27:34 compute-1 nova_compute[225855]: 2026-01-20 14:27:34.830 225859 DEBUG oslo_concurrency.processutils [None req-76b51570-913e-4765-a697-65fe21bb2b2d 399cc9abe2cd4ab196a4e5789992ae51 1759b9d61ad946b6afa3e8448ce02190 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/11d7ff3d-7e9c-4ecb-86b5-ffb0e3e62bef/disk.config 11d7ff3d-7e9c-4ecb-86b5-ffb0e3e62bef_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:27:35 compute-1 nova_compute[225855]: 2026-01-20 14:27:35.375 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:27:35 compute-1 ceph-mon[81775]: pgmap v1095: 321 pgs: 321 active+clean; 219 MiB data, 384 MiB used, 21 GiB / 21 GiB avail; 3.3 MiB/s rd, 5.5 MiB/s wr, 206 op/s
Jan 20 14:27:35 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/3043623211' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:27:36 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:27:36 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:27:36 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:27:36.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:27:36 compute-1 nova_compute[225855]: 2026-01-20 14:27:36.304 225859 DEBUG oslo_concurrency.processutils [None req-76b51570-913e-4765-a697-65fe21bb2b2d 399cc9abe2cd4ab196a4e5789992ae51 1759b9d61ad946b6afa3e8448ce02190 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/11d7ff3d-7e9c-4ecb-86b5-ffb0e3e62bef/disk.config 11d7ff3d-7e9c-4ecb-86b5-ffb0e3e62bef_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:27:36 compute-1 nova_compute[225855]: 2026-01-20 14:27:36.305 225859 INFO nova.virt.libvirt.driver [None req-76b51570-913e-4765-a697-65fe21bb2b2d 399cc9abe2cd4ab196a4e5789992ae51 1759b9d61ad946b6afa3e8448ce02190 - - default default] [instance: 11d7ff3d-7e9c-4ecb-86b5-ffb0e3e62bef] Deleting local config drive /var/lib/nova/instances/11d7ff3d-7e9c-4ecb-86b5-ffb0e3e62bef/disk.config because it was imported into RBD.
Jan 20 14:27:36 compute-1 systemd-machined[194361]: New machine qemu-8-instance-00000013.
Jan 20 14:27:36 compute-1 systemd[1]: Started Virtual Machine qemu-8-instance-00000013.
Jan 20 14:27:36 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:27:36 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:27:36 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:27:36.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:27:36 compute-1 nova_compute[225855]: 2026-01-20 14:27:36.850 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768919256.8501842, 11d7ff3d-7e9c-4ecb-86b5-ffb0e3e62bef => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:27:36 compute-1 nova_compute[225855]: 2026-01-20 14:27:36.852 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 11d7ff3d-7e9c-4ecb-86b5-ffb0e3e62bef] VM Resumed (Lifecycle Event)
Jan 20 14:27:36 compute-1 nova_compute[225855]: 2026-01-20 14:27:36.855 225859 DEBUG nova.compute.manager [None req-76b51570-913e-4765-a697-65fe21bb2b2d 399cc9abe2cd4ab196a4e5789992ae51 1759b9d61ad946b6afa3e8448ce02190 - - default default] [instance: 11d7ff3d-7e9c-4ecb-86b5-ffb0e3e62bef] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 20 14:27:36 compute-1 nova_compute[225855]: 2026-01-20 14:27:36.855 225859 DEBUG nova.virt.libvirt.driver [None req-76b51570-913e-4765-a697-65fe21bb2b2d 399cc9abe2cd4ab196a4e5789992ae51 1759b9d61ad946b6afa3e8448ce02190 - - default default] [instance: 11d7ff3d-7e9c-4ecb-86b5-ffb0e3e62bef] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 20 14:27:36 compute-1 nova_compute[225855]: 2026-01-20 14:27:36.859 225859 INFO nova.virt.libvirt.driver [-] [instance: 11d7ff3d-7e9c-4ecb-86b5-ffb0e3e62bef] Instance spawned successfully.
Jan 20 14:27:36 compute-1 nova_compute[225855]: 2026-01-20 14:27:36.860 225859 DEBUG nova.virt.libvirt.driver [None req-76b51570-913e-4765-a697-65fe21bb2b2d 399cc9abe2cd4ab196a4e5789992ae51 1759b9d61ad946b6afa3e8448ce02190 - - default default] [instance: 11d7ff3d-7e9c-4ecb-86b5-ffb0e3e62bef] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 20 14:27:36 compute-1 nova_compute[225855]: 2026-01-20 14:27:36.885 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 11d7ff3d-7e9c-4ecb-86b5-ffb0e3e62bef] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:27:36 compute-1 nova_compute[225855]: 2026-01-20 14:27:36.894 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 11d7ff3d-7e9c-4ecb-86b5-ffb0e3e62bef] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:27:36 compute-1 nova_compute[225855]: 2026-01-20 14:27:36.899 225859 DEBUG nova.virt.libvirt.driver [None req-76b51570-913e-4765-a697-65fe21bb2b2d 399cc9abe2cd4ab196a4e5789992ae51 1759b9d61ad946b6afa3e8448ce02190 - - default default] [instance: 11d7ff3d-7e9c-4ecb-86b5-ffb0e3e62bef] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:27:36 compute-1 nova_compute[225855]: 2026-01-20 14:27:36.900 225859 DEBUG nova.virt.libvirt.driver [None req-76b51570-913e-4765-a697-65fe21bb2b2d 399cc9abe2cd4ab196a4e5789992ae51 1759b9d61ad946b6afa3e8448ce02190 - - default default] [instance: 11d7ff3d-7e9c-4ecb-86b5-ffb0e3e62bef] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:27:36 compute-1 nova_compute[225855]: 2026-01-20 14:27:36.900 225859 DEBUG nova.virt.libvirt.driver [None req-76b51570-913e-4765-a697-65fe21bb2b2d 399cc9abe2cd4ab196a4e5789992ae51 1759b9d61ad946b6afa3e8448ce02190 - - default default] [instance: 11d7ff3d-7e9c-4ecb-86b5-ffb0e3e62bef] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:27:36 compute-1 nova_compute[225855]: 2026-01-20 14:27:36.901 225859 DEBUG nova.virt.libvirt.driver [None req-76b51570-913e-4765-a697-65fe21bb2b2d 399cc9abe2cd4ab196a4e5789992ae51 1759b9d61ad946b6afa3e8448ce02190 - - default default] [instance: 11d7ff3d-7e9c-4ecb-86b5-ffb0e3e62bef] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:27:36 compute-1 nova_compute[225855]: 2026-01-20 14:27:36.902 225859 DEBUG nova.virt.libvirt.driver [None req-76b51570-913e-4765-a697-65fe21bb2b2d 399cc9abe2cd4ab196a4e5789992ae51 1759b9d61ad946b6afa3e8448ce02190 - - default default] [instance: 11d7ff3d-7e9c-4ecb-86b5-ffb0e3e62bef] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:27:36 compute-1 nova_compute[225855]: 2026-01-20 14:27:36.903 225859 DEBUG nova.virt.libvirt.driver [None req-76b51570-913e-4765-a697-65fe21bb2b2d 399cc9abe2cd4ab196a4e5789992ae51 1759b9d61ad946b6afa3e8448ce02190 - - default default] [instance: 11d7ff3d-7e9c-4ecb-86b5-ffb0e3e62bef] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:27:36 compute-1 nova_compute[225855]: 2026-01-20 14:27:36.950 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 11d7ff3d-7e9c-4ecb-86b5-ffb0e3e62bef] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 14:27:36 compute-1 nova_compute[225855]: 2026-01-20 14:27:36.951 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768919256.8517892, 11d7ff3d-7e9c-4ecb-86b5-ffb0e3e62bef => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:27:36 compute-1 nova_compute[225855]: 2026-01-20 14:27:36.951 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 11d7ff3d-7e9c-4ecb-86b5-ffb0e3e62bef] VM Started (Lifecycle Event)
Jan 20 14:27:36 compute-1 nova_compute[225855]: 2026-01-20 14:27:36.996 225859 INFO nova.compute.manager [None req-76b51570-913e-4765-a697-65fe21bb2b2d 399cc9abe2cd4ab196a4e5789992ae51 1759b9d61ad946b6afa3e8448ce02190 - - default default] [instance: 11d7ff3d-7e9c-4ecb-86b5-ffb0e3e62bef] Took 7.46 seconds to spawn the instance on the hypervisor.
Jan 20 14:27:36 compute-1 nova_compute[225855]: 2026-01-20 14:27:36.997 225859 DEBUG nova.compute.manager [None req-76b51570-913e-4765-a697-65fe21bb2b2d 399cc9abe2cd4ab196a4e5789992ae51 1759b9d61ad946b6afa3e8448ce02190 - - default default] [instance: 11d7ff3d-7e9c-4ecb-86b5-ffb0e3e62bef] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:27:37 compute-1 nova_compute[225855]: 2026-01-20 14:27:37.007 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 11d7ff3d-7e9c-4ecb-86b5-ffb0e3e62bef] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:27:37 compute-1 nova_compute[225855]: 2026-01-20 14:27:37.010 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 11d7ff3d-7e9c-4ecb-86b5-ffb0e3e62bef] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:27:37 compute-1 nova_compute[225855]: 2026-01-20 14:27:37.052 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 11d7ff3d-7e9c-4ecb-86b5-ffb0e3e62bef] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 14:27:37 compute-1 nova_compute[225855]: 2026-01-20 14:27:37.074 225859 INFO nova.compute.manager [None req-76b51570-913e-4765-a697-65fe21bb2b2d 399cc9abe2cd4ab196a4e5789992ae51 1759b9d61ad946b6afa3e8448ce02190 - - default default] [instance: 11d7ff3d-7e9c-4ecb-86b5-ffb0e3e62bef] Took 8.58 seconds to build instance.
Jan 20 14:27:37 compute-1 nova_compute[225855]: 2026-01-20 14:27:37.096 225859 DEBUG oslo_concurrency.lockutils [None req-76b51570-913e-4765-a697-65fe21bb2b2d 399cc9abe2cd4ab196a4e5789992ae51 1759b9d61ad946b6afa3e8448ce02190 - - default default] Lock "11d7ff3d-7e9c-4ecb-86b5-ffb0e3e62bef" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.681s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:27:37 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e152 e152: 3 total, 3 up, 3 in
Jan 20 14:27:37 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/1353817912' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:27:37 compute-1 ceph-mon[81775]: pgmap v1096: 321 pgs: 321 active+clean; 227 MiB data, 387 MiB used, 21 GiB / 21 GiB avail; 3.8 MiB/s rd, 3.7 MiB/s wr, 209 op/s
Jan 20 14:27:38 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:27:38 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:27:38 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:27:38.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:27:38 compute-1 nova_compute[225855]: 2026-01-20 14:27:38.247 225859 DEBUG nova.virt.libvirt.driver [None req-466ac999-dbbf-45a2-9a60-0b686a463dfc f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] [instance: d726266f-b9a6-406b-ad13-f9db3e0dc6aa] Creating tmpfile /var/lib/nova/instances/tmpuyxcxf2_ to notify to other compute nodes that they should mount the same storage. _create_shared_storage_test_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10041
Jan 20 14:27:38 compute-1 nova_compute[225855]: 2026-01-20 14:27:38.248 225859 DEBUG nova.compute.manager [None req-466ac999-dbbf-45a2-9a60-0b686a463dfc f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] destination check data is LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=19456,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpuyxcxf2_',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path=<?>,is_shared_block_storage=<?>,is_shared_instance_path=<?>,is_volume_backed=<?>,migration=<?>,old_vol_attachment_ids=<?>,serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) check_can_live_migrate_destination /usr/lib/python3.9/site-packages/nova/compute/manager.py:8476
Jan 20 14:27:38 compute-1 nova_compute[225855]: 2026-01-20 14:27:38.304 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:27:38 compute-1 sudo[234755]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:27:38 compute-1 sudo[234755]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:27:38 compute-1 sudo[234755]: pam_unix(sudo:session): session closed for user root
Jan 20 14:27:38 compute-1 nova_compute[225855]: 2026-01-20 14:27:38.365 225859 DEBUG nova.objects.instance [None req-8a9be0b3-2f06-48f2-873b-2d853aae1721 8bb376f888a54e9d8ed785e1b46d4fe5 7e22d2ad7d84451d89e5288d170589b5 - - default default] Lazy-loading 'pci_devices' on Instance uuid 11d7ff3d-7e9c-4ecb-86b5-ffb0e3e62bef obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:27:38 compute-1 sudo[234780]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:27:38 compute-1 sudo[234780]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:27:38 compute-1 sudo[234780]: pam_unix(sudo:session): session closed for user root
Jan 20 14:27:38 compute-1 nova_compute[225855]: 2026-01-20 14:27:38.403 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768919258.4035904, 11d7ff3d-7e9c-4ecb-86b5-ffb0e3e62bef => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:27:38 compute-1 nova_compute[225855]: 2026-01-20 14:27:38.404 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 11d7ff3d-7e9c-4ecb-86b5-ffb0e3e62bef] VM Paused (Lifecycle Event)
Jan 20 14:27:38 compute-1 ceph-mon[81775]: pgmap v1097: 321 pgs: 321 active+clean; 254 MiB data, 398 MiB used, 21 GiB / 21 GiB avail; 7.5 MiB/s rd, 5.4 MiB/s wr, 303 op/s
Jan 20 14:27:38 compute-1 ceph-mon[81775]: osdmap e152: 3 total, 3 up, 3 in
Jan 20 14:27:38 compute-1 nova_compute[225855]: 2026-01-20 14:27:38.443 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 11d7ff3d-7e9c-4ecb-86b5-ffb0e3e62bef] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:27:38 compute-1 nova_compute[225855]: 2026-01-20 14:27:38.450 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 11d7ff3d-7e9c-4ecb-86b5-ffb0e3e62bef] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:27:38 compute-1 nova_compute[225855]: 2026-01-20 14:27:38.478 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 11d7ff3d-7e9c-4ecb-86b5-ffb0e3e62bef] During sync_power_state the instance has a pending task (suspending). Skip.
Jan 20 14:27:38 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:27:38.572 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=8, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '12:bb:42', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '06:92:24:f7:15:56'}, ipsec=False) old=SB_Global(nb_cfg=7) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:27:38 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:27:38.572 140354 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 20 14:27:38 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:27:38.573 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5ffd4ac3-9266-4927-98ad-20a17782c725, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '8'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:27:38 compute-1 nova_compute[225855]: 2026-01-20 14:27:38.619 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:27:38 compute-1 systemd[1]: machine-qemu\x2d8\x2dinstance\x2d00000013.scope: Deactivated successfully.
Jan 20 14:27:38 compute-1 systemd[1]: machine-qemu\x2d8\x2dinstance\x2d00000013.scope: Consumed 1.933s CPU time.
Jan 20 14:27:38 compute-1 systemd-machined[194361]: Machine qemu-8-instance-00000013 terminated.
Jan 20 14:27:38 compute-1 nova_compute[225855]: 2026-01-20 14:27:38.752 225859 DEBUG nova.compute.manager [None req-8a9be0b3-2f06-48f2-873b-2d853aae1721 8bb376f888a54e9d8ed785e1b46d4fe5 7e22d2ad7d84451d89e5288d170589b5 - - default default] [instance: 11d7ff3d-7e9c-4ecb-86b5-ffb0e3e62bef] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:27:38 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:27:38 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:27:38 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:27:38.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:27:39 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/817646247' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:27:39 compute-1 nova_compute[225855]: 2026-01-20 14:27:39.604 225859 DEBUG nova.compute.manager [None req-466ac999-dbbf-45a2-9a60-0b686a463dfc f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] pre_live_migration data is LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=19456,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpuyxcxf2_',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='d726266f-b9a6-406b-ad13-f9db3e0dc6aa',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids=<?>,serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8604
Jan 20 14:27:39 compute-1 nova_compute[225855]: 2026-01-20 14:27:39.638 225859 DEBUG oslo_concurrency.lockutils [None req-466ac999-dbbf-45a2-9a60-0b686a463dfc f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] Acquiring lock "refresh_cache-d726266f-b9a6-406b-ad13-f9db3e0dc6aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:27:39 compute-1 nova_compute[225855]: 2026-01-20 14:27:39.638 225859 DEBUG oslo_concurrency.lockutils [None req-466ac999-dbbf-45a2-9a60-0b686a463dfc f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] Acquired lock "refresh_cache-d726266f-b9a6-406b-ad13-f9db3e0dc6aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:27:39 compute-1 nova_compute[225855]: 2026-01-20 14:27:39.639 225859 DEBUG nova.network.neutron [None req-466ac999-dbbf-45a2-9a60-0b686a463dfc f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] [instance: d726266f-b9a6-406b-ad13-f9db3e0dc6aa] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 14:27:39 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:27:40 compute-1 podman[234811]: 2026-01-20 14:27:40.010346977 +0000 UTC m=+0.061015107 container health_status 533cd8e0c3b8b7f910f15cef49e7f879ab46c12ce646a35628c15f13725eb500 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team)
Jan 20 14:27:40 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:27:40 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:27:40 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:27:40.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:27:40 compute-1 nova_compute[225855]: 2026-01-20 14:27:40.380 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:27:40 compute-1 ceph-mon[81775]: pgmap v1099: 321 pgs: 321 active+clean; 254 MiB data, 403 MiB used, 21 GiB / 21 GiB avail; 7.1 MiB/s rd, 5.3 MiB/s wr, 289 op/s
Jan 20 14:27:40 compute-1 nova_compute[225855]: 2026-01-20 14:27:40.717 225859 DEBUG oslo_concurrency.lockutils [None req-f50a58a7-ab3e-4402-8521-8f3921fdc37c 399cc9abe2cd4ab196a4e5789992ae51 1759b9d61ad946b6afa3e8448ce02190 - - default default] Acquiring lock "11d7ff3d-7e9c-4ecb-86b5-ffb0e3e62bef" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:27:40 compute-1 nova_compute[225855]: 2026-01-20 14:27:40.718 225859 DEBUG oslo_concurrency.lockutils [None req-f50a58a7-ab3e-4402-8521-8f3921fdc37c 399cc9abe2cd4ab196a4e5789992ae51 1759b9d61ad946b6afa3e8448ce02190 - - default default] Lock "11d7ff3d-7e9c-4ecb-86b5-ffb0e3e62bef" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:27:40 compute-1 nova_compute[225855]: 2026-01-20 14:27:40.718 225859 DEBUG oslo_concurrency.lockutils [None req-f50a58a7-ab3e-4402-8521-8f3921fdc37c 399cc9abe2cd4ab196a4e5789992ae51 1759b9d61ad946b6afa3e8448ce02190 - - default default] Acquiring lock "11d7ff3d-7e9c-4ecb-86b5-ffb0e3e62bef-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:27:40 compute-1 nova_compute[225855]: 2026-01-20 14:27:40.718 225859 DEBUG oslo_concurrency.lockutils [None req-f50a58a7-ab3e-4402-8521-8f3921fdc37c 399cc9abe2cd4ab196a4e5789992ae51 1759b9d61ad946b6afa3e8448ce02190 - - default default] Lock "11d7ff3d-7e9c-4ecb-86b5-ffb0e3e62bef-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:27:40 compute-1 nova_compute[225855]: 2026-01-20 14:27:40.718 225859 DEBUG oslo_concurrency.lockutils [None req-f50a58a7-ab3e-4402-8521-8f3921fdc37c 399cc9abe2cd4ab196a4e5789992ae51 1759b9d61ad946b6afa3e8448ce02190 - - default default] Lock "11d7ff3d-7e9c-4ecb-86b5-ffb0e3e62bef-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:27:40 compute-1 nova_compute[225855]: 2026-01-20 14:27:40.719 225859 INFO nova.compute.manager [None req-f50a58a7-ab3e-4402-8521-8f3921fdc37c 399cc9abe2cd4ab196a4e5789992ae51 1759b9d61ad946b6afa3e8448ce02190 - - default default] [instance: 11d7ff3d-7e9c-4ecb-86b5-ffb0e3e62bef] Terminating instance
Jan 20 14:27:40 compute-1 nova_compute[225855]: 2026-01-20 14:27:40.720 225859 DEBUG oslo_concurrency.lockutils [None req-f50a58a7-ab3e-4402-8521-8f3921fdc37c 399cc9abe2cd4ab196a4e5789992ae51 1759b9d61ad946b6afa3e8448ce02190 - - default default] Acquiring lock "refresh_cache-11d7ff3d-7e9c-4ecb-86b5-ffb0e3e62bef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:27:40 compute-1 nova_compute[225855]: 2026-01-20 14:27:40.721 225859 DEBUG oslo_concurrency.lockutils [None req-f50a58a7-ab3e-4402-8521-8f3921fdc37c 399cc9abe2cd4ab196a4e5789992ae51 1759b9d61ad946b6afa3e8448ce02190 - - default default] Acquired lock "refresh_cache-11d7ff3d-7e9c-4ecb-86b5-ffb0e3e62bef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:27:40 compute-1 nova_compute[225855]: 2026-01-20 14:27:40.721 225859 DEBUG nova.network.neutron [None req-f50a58a7-ab3e-4402-8521-8f3921fdc37c 399cc9abe2cd4ab196a4e5789992ae51 1759b9d61ad946b6afa3e8448ce02190 - - default default] [instance: 11d7ff3d-7e9c-4ecb-86b5-ffb0e3e62bef] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 14:27:40 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:27:40 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:27:40 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:27:40.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:27:40 compute-1 nova_compute[225855]: 2026-01-20 14:27:40.833 225859 DEBUG nova.network.neutron [None req-466ac999-dbbf-45a2-9a60-0b686a463dfc f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] [instance: d726266f-b9a6-406b-ad13-f9db3e0dc6aa] Updating instance_info_cache with network_info: [{"id": "e6067076-0f97-4e9c-9355-353277570e11", "address": "fa:16:3e:db:cf:b7", "network": {"id": "14f18b27-1594-48d8-a08b-a930f7adbc08", "bridge": "br-int", "label": "tempest-LiveMigrationTest-2126108622-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d15f60b9e48e4175b5520d1e57ed2d3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape6067076-0f", "ovs_interfaceid": "e6067076-0f97-4e9c-9355-353277570e11", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:27:40 compute-1 nova_compute[225855]: 2026-01-20 14:27:40.855 225859 DEBUG oslo_concurrency.lockutils [None req-466ac999-dbbf-45a2-9a60-0b686a463dfc f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] Releasing lock "refresh_cache-d726266f-b9a6-406b-ad13-f9db3e0dc6aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:27:40 compute-1 nova_compute[225855]: 2026-01-20 14:27:40.856 225859 DEBUG nova.virt.libvirt.driver [None req-466ac999-dbbf-45a2-9a60-0b686a463dfc f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] [instance: d726266f-b9a6-406b-ad13-f9db3e0dc6aa] migrate_data in pre_live_migration: LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=19456,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpuyxcxf2_',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='d726266f-b9a6-406b-ad13-f9db3e0dc6aa',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids={},serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10827
Jan 20 14:27:40 compute-1 nova_compute[225855]: 2026-01-20 14:27:40.857 225859 DEBUG nova.virt.libvirt.driver [None req-466ac999-dbbf-45a2-9a60-0b686a463dfc f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] [instance: d726266f-b9a6-406b-ad13-f9db3e0dc6aa] Creating instance directory: /var/lib/nova/instances/d726266f-b9a6-406b-ad13-f9db3e0dc6aa pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10840
Jan 20 14:27:40 compute-1 nova_compute[225855]: 2026-01-20 14:27:40.857 225859 DEBUG nova.virt.libvirt.driver [None req-466ac999-dbbf-45a2-9a60-0b686a463dfc f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] [instance: d726266f-b9a6-406b-ad13-f9db3e0dc6aa] Ensure instance console log exists: /var/lib/nova/instances/d726266f-b9a6-406b-ad13-f9db3e0dc6aa/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 20 14:27:40 compute-1 nova_compute[225855]: 2026-01-20 14:27:40.857 225859 DEBUG nova.virt.libvirt.driver [None req-466ac999-dbbf-45a2-9a60-0b686a463dfc f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] [instance: d726266f-b9a6-406b-ad13-f9db3e0dc6aa] Plugging VIFs using destination host port bindings before live migration. _pre_live_migration_plug_vifs /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10794
Jan 20 14:27:40 compute-1 nova_compute[225855]: 2026-01-20 14:27:40.858 225859 DEBUG nova.virt.libvirt.vif [None req-466ac999-dbbf-45a2-9a60-0b686a463dfc f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T14:27:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-LiveMigrationTest-server-1394818615',display_name='tempest-LiveMigrationTest-server-1394818615',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-livemigrationtest-server-1394818615',id=16,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-20T14:27:33Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='d15f60b9e48e4175b5520d1e57ed2d3a',ramdisk_id='',reservation_id='r-pti072hl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-LiveMigrationTest-864280704',owner_user_name='tempest-LiveMigrationTest-864280704-project-member'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T14:27:33Z,user_data=None,user_id='bce7fcbd19554e29bb80c5b93b7dd3c9',uuid=d726266f-b9a6-406b-ad13-f9db3e0dc6aa,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e6067076-0f97-4e9c-9355-353277570e11", "address": "fa:16:3e:db:cf:b7", "network": {"id": "14f18b27-1594-48d8-a08b-a930f7adbc08", "bridge": "br-int", "label": "tempest-LiveMigrationTest-2126108622-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d15f60b9e48e4175b5520d1e57ed2d3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tape6067076-0f", "ovs_interfaceid": "e6067076-0f97-4e9c-9355-353277570e11", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 20 14:27:40 compute-1 nova_compute[225855]: 2026-01-20 14:27:40.858 225859 DEBUG nova.network.os_vif_util [None req-466ac999-dbbf-45a2-9a60-0b686a463dfc f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] Converting VIF {"id": "e6067076-0f97-4e9c-9355-353277570e11", "address": "fa:16:3e:db:cf:b7", "network": {"id": "14f18b27-1594-48d8-a08b-a930f7adbc08", "bridge": "br-int", "label": "tempest-LiveMigrationTest-2126108622-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d15f60b9e48e4175b5520d1e57ed2d3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tape6067076-0f", "ovs_interfaceid": "e6067076-0f97-4e9c-9355-353277570e11", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:27:40 compute-1 nova_compute[225855]: 2026-01-20 14:27:40.859 225859 DEBUG nova.network.os_vif_util [None req-466ac999-dbbf-45a2-9a60-0b686a463dfc f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:db:cf:b7,bridge_name='br-int',has_traffic_filtering=True,id=e6067076-0f97-4e9c-9355-353277570e11,network=Network(14f18b27-1594-48d8-a08b-a930f7adbc08),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tape6067076-0f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:27:40 compute-1 nova_compute[225855]: 2026-01-20 14:27:40.859 225859 DEBUG os_vif [None req-466ac999-dbbf-45a2-9a60-0b686a463dfc f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:db:cf:b7,bridge_name='br-int',has_traffic_filtering=True,id=e6067076-0f97-4e9c-9355-353277570e11,network=Network(14f18b27-1594-48d8-a08b-a930f7adbc08),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tape6067076-0f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 20 14:27:40 compute-1 nova_compute[225855]: 2026-01-20 14:27:40.860 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:27:40 compute-1 nova_compute[225855]: 2026-01-20 14:27:40.860 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:27:40 compute-1 nova_compute[225855]: 2026-01-20 14:27:40.861 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:27:40 compute-1 nova_compute[225855]: 2026-01-20 14:27:40.864 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:27:40 compute-1 nova_compute[225855]: 2026-01-20 14:27:40.864 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape6067076-0f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:27:40 compute-1 nova_compute[225855]: 2026-01-20 14:27:40.864 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tape6067076-0f, col_values=(('external_ids', {'iface-id': 'e6067076-0f97-4e9c-9355-353277570e11', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:db:cf:b7', 'vm-uuid': 'd726266f-b9a6-406b-ad13-f9db3e0dc6aa'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:27:40 compute-1 nova_compute[225855]: 2026-01-20 14:27:40.866 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:27:40 compute-1 NetworkManager[49104]: <info>  [1768919260.8676] manager: (tape6067076-0f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/43)
Jan 20 14:27:40 compute-1 nova_compute[225855]: 2026-01-20 14:27:40.869 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 14:27:40 compute-1 nova_compute[225855]: 2026-01-20 14:27:40.873 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:27:40 compute-1 nova_compute[225855]: 2026-01-20 14:27:40.874 225859 INFO os_vif [None req-466ac999-dbbf-45a2-9a60-0b686a463dfc f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:db:cf:b7,bridge_name='br-int',has_traffic_filtering=True,id=e6067076-0f97-4e9c-9355-353277570e11,network=Network(14f18b27-1594-48d8-a08b-a930f7adbc08),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tape6067076-0f')
Jan 20 14:27:40 compute-1 nova_compute[225855]: 2026-01-20 14:27:40.874 225859 DEBUG nova.virt.libvirt.driver [None req-466ac999-dbbf-45a2-9a60-0b686a463dfc f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] No dst_numa_info in migrate_data, no cores to power up in pre_live_migration. pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10954
Jan 20 14:27:40 compute-1 nova_compute[225855]: 2026-01-20 14:27:40.875 225859 DEBUG nova.compute.manager [None req-466ac999-dbbf-45a2-9a60-0b686a463dfc f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] driver pre_live_migration data is LibvirtLiveMigrateData(bdms=[],block_migration=False,disk_available_mb=19456,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpuyxcxf2_',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='d726266f-b9a6-406b-ad13-f9db3e0dc6aa',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids={},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8668
Jan 20 14:27:40 compute-1 nova_compute[225855]: 2026-01-20 14:27:40.928 225859 DEBUG nova.network.neutron [None req-f50a58a7-ab3e-4402-8521-8f3921fdc37c 399cc9abe2cd4ab196a4e5789992ae51 1759b9d61ad946b6afa3e8448ce02190 - - default default] [instance: 11d7ff3d-7e9c-4ecb-86b5-ffb0e3e62bef] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 20 14:27:41 compute-1 nova_compute[225855]: 2026-01-20 14:27:41.477 225859 DEBUG nova.network.neutron [None req-f50a58a7-ab3e-4402-8521-8f3921fdc37c 399cc9abe2cd4ab196a4e5789992ae51 1759b9d61ad946b6afa3e8448ce02190 - - default default] [instance: 11d7ff3d-7e9c-4ecb-86b5-ffb0e3e62bef] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:27:41 compute-1 nova_compute[225855]: 2026-01-20 14:27:41.497 225859 DEBUG oslo_concurrency.lockutils [None req-f50a58a7-ab3e-4402-8521-8f3921fdc37c 399cc9abe2cd4ab196a4e5789992ae51 1759b9d61ad946b6afa3e8448ce02190 - - default default] Releasing lock "refresh_cache-11d7ff3d-7e9c-4ecb-86b5-ffb0e3e62bef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:27:41 compute-1 nova_compute[225855]: 2026-01-20 14:27:41.497 225859 DEBUG nova.compute.manager [None req-f50a58a7-ab3e-4402-8521-8f3921fdc37c 399cc9abe2cd4ab196a4e5789992ae51 1759b9d61ad946b6afa3e8448ce02190 - - default default] [instance: 11d7ff3d-7e9c-4ecb-86b5-ffb0e3e62bef] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 20 14:27:41 compute-1 nova_compute[225855]: 2026-01-20 14:27:41.505 225859 INFO nova.virt.libvirt.driver [-] [instance: 11d7ff3d-7e9c-4ecb-86b5-ffb0e3e62bef] Instance destroyed successfully.
Jan 20 14:27:41 compute-1 nova_compute[225855]: 2026-01-20 14:27:41.505 225859 DEBUG nova.objects.instance [None req-f50a58a7-ab3e-4402-8521-8f3921fdc37c 399cc9abe2cd4ab196a4e5789992ae51 1759b9d61ad946b6afa3e8448ce02190 - - default default] Lazy-loading 'resources' on Instance uuid 11d7ff3d-7e9c-4ecb-86b5-ffb0e3e62bef obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:27:42 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:27:42 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:27:42 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:27:42.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:27:42 compute-1 ceph-mon[81775]: pgmap v1100: 321 pgs: 321 active+clean; 243 MiB data, 445 MiB used, 21 GiB / 21 GiB avail; 7.1 MiB/s rd, 6.6 MiB/s wr, 309 op/s
Jan 20 14:27:42 compute-1 nova_compute[225855]: 2026-01-20 14:27:42.489 225859 DEBUG nova.network.neutron [None req-466ac999-dbbf-45a2-9a60-0b686a463dfc f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] [instance: d726266f-b9a6-406b-ad13-f9db3e0dc6aa] Port e6067076-0f97-4e9c-9355-353277570e11 updated with migration profile {'migrating_to': 'compute-1.ctlplane.example.com'} successfully _setup_migration_port_profile /usr/lib/python3.9/site-packages/nova/network/neutron.py:354
Jan 20 14:27:42 compute-1 nova_compute[225855]: 2026-01-20 14:27:42.491 225859 DEBUG nova.compute.manager [None req-466ac999-dbbf-45a2-9a60-0b686a463dfc f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] pre_live_migration result data is LibvirtLiveMigrateData(bdms=[],block_migration=False,disk_available_mb=19456,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpuyxcxf2_',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='d726266f-b9a6-406b-ad13-f9db3e0dc6aa',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids={},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=True) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8723
Jan 20 14:27:42 compute-1 nova_compute[225855]: 2026-01-20 14:27:42.519 225859 INFO nova.virt.libvirt.driver [None req-f50a58a7-ab3e-4402-8521-8f3921fdc37c 399cc9abe2cd4ab196a4e5789992ae51 1759b9d61ad946b6afa3e8448ce02190 - - default default] [instance: 11d7ff3d-7e9c-4ecb-86b5-ffb0e3e62bef] Deleting instance files /var/lib/nova/instances/11d7ff3d-7e9c-4ecb-86b5-ffb0e3e62bef_del
Jan 20 14:27:42 compute-1 nova_compute[225855]: 2026-01-20 14:27:42.520 225859 INFO nova.virt.libvirt.driver [None req-f50a58a7-ab3e-4402-8521-8f3921fdc37c 399cc9abe2cd4ab196a4e5789992ae51 1759b9d61ad946b6afa3e8448ce02190 - - default default] [instance: 11d7ff3d-7e9c-4ecb-86b5-ffb0e3e62bef] Deletion of /var/lib/nova/instances/11d7ff3d-7e9c-4ecb-86b5-ffb0e3e62bef_del complete
Jan 20 14:27:42 compute-1 nova_compute[225855]: 2026-01-20 14:27:42.616 225859 INFO nova.compute.manager [None req-f50a58a7-ab3e-4402-8521-8f3921fdc37c 399cc9abe2cd4ab196a4e5789992ae51 1759b9d61ad946b6afa3e8448ce02190 - - default default] [instance: 11d7ff3d-7e9c-4ecb-86b5-ffb0e3e62bef] Took 1.12 seconds to destroy the instance on the hypervisor.
Jan 20 14:27:42 compute-1 nova_compute[225855]: 2026-01-20 14:27:42.616 225859 DEBUG oslo.service.loopingcall [None req-f50a58a7-ab3e-4402-8521-8f3921fdc37c 399cc9abe2cd4ab196a4e5789992ae51 1759b9d61ad946b6afa3e8448ce02190 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 20 14:27:42 compute-1 nova_compute[225855]: 2026-01-20 14:27:42.617 225859 DEBUG nova.compute.manager [-] [instance: 11d7ff3d-7e9c-4ecb-86b5-ffb0e3e62bef] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 20 14:27:42 compute-1 nova_compute[225855]: 2026-01-20 14:27:42.617 225859 DEBUG nova.network.neutron [-] [instance: 11d7ff3d-7e9c-4ecb-86b5-ffb0e3e62bef] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 20 14:27:42 compute-1 systemd[1]: Starting libvirt proxy daemon...
Jan 20 14:27:42 compute-1 systemd[1]: Started libvirt proxy daemon.
Jan 20 14:27:42 compute-1 nova_compute[225855]: 2026-01-20 14:27:42.758 225859 DEBUG nova.network.neutron [-] [instance: 11d7ff3d-7e9c-4ecb-86b5-ffb0e3e62bef] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 20 14:27:42 compute-1 nova_compute[225855]: 2026-01-20 14:27:42.770 225859 DEBUG nova.network.neutron [-] [instance: 11d7ff3d-7e9c-4ecb-86b5-ffb0e3e62bef] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:27:42 compute-1 nova_compute[225855]: 2026-01-20 14:27:42.800 225859 INFO nova.compute.manager [-] [instance: 11d7ff3d-7e9c-4ecb-86b5-ffb0e3e62bef] Took 0.18 seconds to deallocate network for instance.
Jan 20 14:27:42 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:27:42 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:27:42 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:27:42.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:27:42 compute-1 NetworkManager[49104]: <info>  [1768919262.8301] manager: (tape6067076-0f): new Tun device (/org/freedesktop/NetworkManager/Devices/44)
Jan 20 14:27:42 compute-1 kernel: tape6067076-0f: entered promiscuous mode
Jan 20 14:27:42 compute-1 nova_compute[225855]: 2026-01-20 14:27:42.832 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:27:42 compute-1 ovn_controller[130490]: 2026-01-20T14:27:42Z|00079|binding|INFO|Claiming lport e6067076-0f97-4e9c-9355-353277570e11 for this additional chassis.
Jan 20 14:27:42 compute-1 ovn_controller[130490]: 2026-01-20T14:27:42Z|00080|binding|INFO|e6067076-0f97-4e9c-9355-353277570e11: Claiming fa:16:3e:db:cf:b7 10.100.0.12
Jan 20 14:27:42 compute-1 ovn_controller[130490]: 2026-01-20T14:27:42Z|00081|binding|INFO|Claiming lport 9013ed66-b0f2-4a83-b7d4-572f1324f582 for this additional chassis.
Jan 20 14:27:42 compute-1 ovn_controller[130490]: 2026-01-20T14:27:42Z|00082|binding|INFO|9013ed66-b0f2-4a83-b7d4-572f1324f582: Claiming fa:16:3e:51:74:79 19.80.0.125
Jan 20 14:27:42 compute-1 systemd-machined[194361]: New machine qemu-9-instance-00000010.
Jan 20 14:27:42 compute-1 systemd[1]: Started Virtual Machine qemu-9-instance-00000010.
Jan 20 14:27:42 compute-1 systemd-udevd[234886]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 14:27:42 compute-1 NetworkManager[49104]: <info>  [1768919262.9043] device (tape6067076-0f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 20 14:27:42 compute-1 NetworkManager[49104]: <info>  [1768919262.9052] device (tape6067076-0f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 20 14:27:42 compute-1 nova_compute[225855]: 2026-01-20 14:27:42.910 225859 DEBUG oslo_concurrency.lockutils [None req-f50a58a7-ab3e-4402-8521-8f3921fdc37c 399cc9abe2cd4ab196a4e5789992ae51 1759b9d61ad946b6afa3e8448ce02190 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:27:42 compute-1 nova_compute[225855]: 2026-01-20 14:27:42.911 225859 DEBUG oslo_concurrency.lockutils [None req-f50a58a7-ab3e-4402-8521-8f3921fdc37c 399cc9abe2cd4ab196a4e5789992ae51 1759b9d61ad946b6afa3e8448ce02190 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:27:42 compute-1 nova_compute[225855]: 2026-01-20 14:27:42.970 225859 DEBUG oslo_concurrency.processutils [None req-f50a58a7-ab3e-4402-8521-8f3921fdc37c 399cc9abe2cd4ab196a4e5789992ae51 1759b9d61ad946b6afa3e8448ce02190 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:27:42 compute-1 ovn_controller[130490]: 2026-01-20T14:27:42Z|00083|binding|INFO|Setting lport e6067076-0f97-4e9c-9355-353277570e11 ovn-installed in OVS
Jan 20 14:27:42 compute-1 nova_compute[225855]: 2026-01-20 14:27:42.990 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:27:43 compute-1 nova_compute[225855]: 2026-01-20 14:27:43.306 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:27:43 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:27:43 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1307297049' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:27:43 compute-1 nova_compute[225855]: 2026-01-20 14:27:43.386 225859 DEBUG oslo_concurrency.processutils [None req-f50a58a7-ab3e-4402-8521-8f3921fdc37c 399cc9abe2cd4ab196a4e5789992ae51 1759b9d61ad946b6afa3e8448ce02190 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.416s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:27:43 compute-1 nova_compute[225855]: 2026-01-20 14:27:43.391 225859 DEBUG nova.compute.provider_tree [None req-f50a58a7-ab3e-4402-8521-8f3921fdc37c 399cc9abe2cd4ab196a4e5789992ae51 1759b9d61ad946b6afa3e8448ce02190 - - default default] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:27:43 compute-1 nova_compute[225855]: 2026-01-20 14:27:43.485 225859 DEBUG nova.scheduler.client.report [None req-f50a58a7-ab3e-4402-8521-8f3921fdc37c 399cc9abe2cd4ab196a4e5789992ae51 1759b9d61ad946b6afa3e8448ce02190 - - default default] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:27:43 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/2865709230' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:27:43 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/1307297049' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:27:43 compute-1 nova_compute[225855]: 2026-01-20 14:27:43.504 225859 DEBUG oslo_concurrency.lockutils [None req-f50a58a7-ab3e-4402-8521-8f3921fdc37c 399cc9abe2cd4ab196a4e5789992ae51 1759b9d61ad946b6afa3e8448ce02190 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.593s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:27:43 compute-1 nova_compute[225855]: 2026-01-20 14:27:43.600 225859 INFO nova.scheduler.client.report [None req-f50a58a7-ab3e-4402-8521-8f3921fdc37c 399cc9abe2cd4ab196a4e5789992ae51 1759b9d61ad946b6afa3e8448ce02190 - - default default] Deleted allocations for instance 11d7ff3d-7e9c-4ecb-86b5-ffb0e3e62bef
Jan 20 14:27:43 compute-1 nova_compute[225855]: 2026-01-20 14:27:43.681 225859 DEBUG oslo_concurrency.lockutils [None req-f50a58a7-ab3e-4402-8521-8f3921fdc37c 399cc9abe2cd4ab196a4e5789992ae51 1759b9d61ad946b6afa3e8448ce02190 - - default default] Lock "11d7ff3d-7e9c-4ecb-86b5-ffb0e3e62bef" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.964s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:27:43 compute-1 nova_compute[225855]: 2026-01-20 14:27:43.953 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768919263.9528074, d726266f-b9a6-406b-ad13-f9db3e0dc6aa => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:27:43 compute-1 nova_compute[225855]: 2026-01-20 14:27:43.954 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: d726266f-b9a6-406b-ad13-f9db3e0dc6aa] VM Started (Lifecycle Event)
Jan 20 14:27:43 compute-1 nova_compute[225855]: 2026-01-20 14:27:43.993 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: d726266f-b9a6-406b-ad13-f9db3e0dc6aa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:27:44 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:27:44 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:27:44 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:27:44.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:27:44 compute-1 sudo[234960]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:27:44 compute-1 sudo[234960]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:27:44 compute-1 sudo[234960]: pam_unix(sudo:session): session closed for user root
Jan 20 14:27:44 compute-1 nova_compute[225855]: 2026-01-20 14:27:44.473 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768919264.4732022, d726266f-b9a6-406b-ad13-f9db3e0dc6aa => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:27:44 compute-1 nova_compute[225855]: 2026-01-20 14:27:44.473 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: d726266f-b9a6-406b-ad13-f9db3e0dc6aa] VM Resumed (Lifecycle Event)
Jan 20 14:27:44 compute-1 sudo[234985]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:27:44 compute-1 sudo[234985]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:27:44 compute-1 sudo[234985]: pam_unix(sudo:session): session closed for user root
Jan 20 14:27:44 compute-1 nova_compute[225855]: 2026-01-20 14:27:44.505 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: d726266f-b9a6-406b-ad13-f9db3e0dc6aa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:27:44 compute-1 nova_compute[225855]: 2026-01-20 14:27:44.509 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: d726266f-b9a6-406b-ad13-f9db3e0dc6aa] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: migrating, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:27:44 compute-1 ceph-mon[81775]: pgmap v1101: 321 pgs: 321 active+clean; 205 MiB data, 434 MiB used, 21 GiB / 21 GiB avail; 7.6 MiB/s rd, 4.8 MiB/s wr, 333 op/s
Jan 20 14:27:44 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/4262737880' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:27:44 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/4111249090' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:27:44 compute-1 nova_compute[225855]: 2026-01-20 14:27:44.535 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: d726266f-b9a6-406b-ad13-f9db3e0dc6aa] During the sync_power process the instance has moved from host compute-0.ctlplane.example.com to host compute-1.ctlplane.example.com
Jan 20 14:27:44 compute-1 sudo[235010]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:27:44 compute-1 sudo[235010]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:27:44 compute-1 sudo[235010]: pam_unix(sudo:session): session closed for user root
Jan 20 14:27:44 compute-1 sudo[235035]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 20 14:27:44 compute-1 sudo[235035]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:27:44 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:27:44 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:27:44 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:27:44 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:27:44.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:27:45 compute-1 sudo[235035]: pam_unix(sudo:session): session closed for user root
Jan 20 14:27:45 compute-1 sudo[235091]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:27:45 compute-1 sudo[235091]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:27:45 compute-1 sudo[235091]: pam_unix(sudo:session): session closed for user root
Jan 20 14:27:45 compute-1 sudo[235116]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:27:45 compute-1 sudo[235116]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:27:45 compute-1 sudo[235116]: pam_unix(sudo:session): session closed for user root
Jan 20 14:27:45 compute-1 sudo[235141]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:27:45 compute-1 sudo[235141]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:27:45 compute-1 sudo[235141]: pam_unix(sudo:session): session closed for user root
Jan 20 14:27:45 compute-1 sudo[235166]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- inventory --format=json-pretty --filter-for-batch
Jan 20 14:27:45 compute-1 sudo[235166]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:27:45 compute-1 podman[235232]: 2026-01-20 14:27:45.729777804 +0000 UTC m=+0.042157946 container create d249f2af06070d59eb73cb3ade189c82d70acd6cfd3cafd5b501e45ff0069162 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_zhukovsky, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:27:45 compute-1 systemd[1]: Started libpod-conmon-d249f2af06070d59eb73cb3ade189c82d70acd6cfd3cafd5b501e45ff0069162.scope.
Jan 20 14:27:45 compute-1 podman[235232]: 2026-01-20 14:27:45.707532209 +0000 UTC m=+0.019912361 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:27:45 compute-1 systemd[1]: Started libcrun container.
Jan 20 14:27:45 compute-1 podman[235232]: 2026-01-20 14:27:45.850984872 +0000 UTC m=+0.163365034 container init d249f2af06070d59eb73cb3ade189c82d70acd6cfd3cafd5b501e45ff0069162 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_zhukovsky, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:27:45 compute-1 podman[235232]: 2026-01-20 14:27:45.860815628 +0000 UTC m=+0.173195790 container start d249f2af06070d59eb73cb3ade189c82d70acd6cfd3cafd5b501e45ff0069162 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_zhukovsky, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 20 14:27:45 compute-1 podman[235232]: 2026-01-20 14:27:45.865275253 +0000 UTC m=+0.177655385 container attach d249f2af06070d59eb73cb3ade189c82d70acd6cfd3cafd5b501e45ff0069162 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_zhukovsky, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:27:45 compute-1 stupefied_zhukovsky[235248]: 167 167
Jan 20 14:27:45 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:27:45 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:27:45 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/603074013' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:27:45 compute-1 ceph-mon[81775]: pgmap v1102: 321 pgs: 321 active+clean; 220 MiB data, 429 MiB used, 21 GiB / 21 GiB avail; 7.0 MiB/s rd, 5.4 MiB/s wr, 309 op/s
Jan 20 14:27:45 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:27:45 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:27:45 compute-1 systemd[1]: libpod-d249f2af06070d59eb73cb3ade189c82d70acd6cfd3cafd5b501e45ff0069162.scope: Deactivated successfully.
Jan 20 14:27:45 compute-1 conmon[235248]: conmon d249f2af06070d59eb73 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d249f2af06070d59eb73cb3ade189c82d70acd6cfd3cafd5b501e45ff0069162.scope/container/memory.events
Jan 20 14:27:45 compute-1 podman[235232]: 2026-01-20 14:27:45.89645316 +0000 UTC m=+0.208833302 container died d249f2af06070d59eb73cb3ade189c82d70acd6cfd3cafd5b501e45ff0069162 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_zhukovsky, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:27:45 compute-1 nova_compute[225855]: 2026-01-20 14:27:45.894 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:27:45 compute-1 systemd[1]: var-lib-containers-storage-overlay-a0c0d2345ad9421be60a9a53d44f9eb6df75f1b1da909f519c3d49fe170898ac-merged.mount: Deactivated successfully.
Jan 20 14:27:45 compute-1 podman[235232]: 2026-01-20 14:27:45.935033305 +0000 UTC m=+0.247413417 container remove d249f2af06070d59eb73cb3ade189c82d70acd6cfd3cafd5b501e45ff0069162 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_zhukovsky, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 20 14:27:45 compute-1 systemd[1]: libpod-conmon-d249f2af06070d59eb73cb3ade189c82d70acd6cfd3cafd5b501e45ff0069162.scope: Deactivated successfully.
Jan 20 14:27:45 compute-1 ovn_controller[130490]: 2026-01-20T14:27:45Z|00084|binding|INFO|Claiming lport e6067076-0f97-4e9c-9355-353277570e11 for this chassis.
Jan 20 14:27:45 compute-1 ovn_controller[130490]: 2026-01-20T14:27:45Z|00085|binding|INFO|e6067076-0f97-4e9c-9355-353277570e11: Claiming fa:16:3e:db:cf:b7 10.100.0.12
Jan 20 14:27:45 compute-1 ovn_controller[130490]: 2026-01-20T14:27:45Z|00086|binding|INFO|Claiming lport 9013ed66-b0f2-4a83-b7d4-572f1324f582 for this chassis.
Jan 20 14:27:45 compute-1 ovn_controller[130490]: 2026-01-20T14:27:45Z|00087|binding|INFO|9013ed66-b0f2-4a83-b7d4-572f1324f582: Claiming fa:16:3e:51:74:79 19.80.0.125
Jan 20 14:27:45 compute-1 ovn_controller[130490]: 2026-01-20T14:27:45Z|00088|binding|INFO|Setting lport e6067076-0f97-4e9c-9355-353277570e11 up in Southbound
Jan 20 14:27:45 compute-1 ovn_controller[130490]: 2026-01-20T14:27:45Z|00089|binding|INFO|Setting lport 9013ed66-b0f2-4a83-b7d4-572f1324f582 up in Southbound
Jan 20 14:27:46 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:27:46.004 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:51:74:79 19.80.0.125'], port_security=['fa:16:3e:51:74:79 19.80.0.125'], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': ''}, parent_port=['e6067076-0f97-4e9c-9355-353277570e11'], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-subport-1871336558', 'neutron:cidrs': '19.80.0.125/24', 'neutron:device_id': '', 'neutron:device_owner': 'trunk:subport', 'neutron:mtu': '', 'neutron:network_name': 'neutron-08e625c5-899c-442a-8ef4-9a3c96892de4', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-subport-1871336558', 'neutron:project_id': 'd15f60b9e48e4175b5520d1e57ed2d3a', 'neutron:revision_number': '4', 'neutron:security_group_ids': '6d729cfd-2f98-4ca5-a524-e543b12b3766', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[42], additional_encap=[], encap=[], mirror_rules=[], datapath=62d5dc3b-a6a9-4e55-8632-5a7fe1112862, chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=9013ed66-b0f2-4a83-b7d4-572f1324f582) old=Port_Binding(up=[False], additional_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:27:46 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:27:46.006 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:db:cf:b7 10.100.0.12'], port_security=['fa:16:3e:db:cf:b7 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-parent-395006048', 'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'd726266f-b9a6-406b-ad13-f9db3e0dc6aa', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-14f18b27-1594-48d8-a08b-a930f7adbc08', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-parent-395006048', 'neutron:project_id': 'd15f60b9e48e4175b5520d1e57ed2d3a', 'neutron:revision_number': '11', 'neutron:security_group_ids': '6d729cfd-2f98-4ca5-a524-e543b12b3766', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=02983c41-bbec-48cf-910a-84fed1be783f, chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], logical_port=e6067076-0f97-4e9c-9355-353277570e11) old=Port_Binding(up=[False], additional_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:27:46 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:27:46.007 140354 INFO neutron.agent.ovn.metadata.agent [-] Port 9013ed66-b0f2-4a83-b7d4-572f1324f582 in datapath 08e625c5-899c-442a-8ef4-9a3c96892de4 bound to our chassis
Jan 20 14:27:46 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:27:46.008 140354 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 08e625c5-899c-442a-8ef4-9a3c96892de4
Jan 20 14:27:46 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:27:46.021 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[17752f3f-9d97-444b-841c-c5b42ad46cdd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:27:46 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:27:46.022 140354 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap08e625c5-81 in ovnmeta-08e625c5-899c-442a-8ef4-9a3c96892de4 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 20 14:27:46 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:27:46.023 229707 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap08e625c5-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 20 14:27:46 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:27:46.023 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[b74a80c1-ff6f-4aeb-94f4-b1779fa03743]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:27:46 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:27:46.024 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[c9934431-9d10-4c3a-87d7-aec4ca04bc86]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:27:46 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:27:46.039 140466 DEBUG oslo.privsep.daemon [-] privsep: reply[bd9c8bfe-895e-4d2b-9a1f-9283e9b9b066]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:27:46 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:27:46.063 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[7bc626cf-f716-4935-92f9-98191a0cd26b]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:27:46 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:27:46.114 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[8fc97b7e-8704-4bef-91c6-e05202d02d43]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:27:46 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:27:46.127 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[11ac09c5-bf54-4b86-9c05-f3fbb7eb03db]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:27:46 compute-1 podman[235273]: 2026-01-20 14:27:46.128682869 +0000 UTC m=+0.066379187 container create 723b5cbabfbdb2954ad39dd5ca33954a38d3882430cd9f04424d216072c7ef7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_jemison, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3)
Jan 20 14:27:46 compute-1 NetworkManager[49104]: <info>  [1768919266.1290] manager: (tap08e625c5-80): new Veth device (/org/freedesktop/NetworkManager/Devices/45)
Jan 20 14:27:46 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:27:46.156 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[d6df16a5-4681-48e9-87fd-eedaf84d584f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:27:46 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:27:46.160 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[c0bdf150-acd5-41c8-8cd0-b09947d1f8da]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:27:46 compute-1 systemd-udevd[235291]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 14:27:46 compute-1 systemd[1]: Started libpod-conmon-723b5cbabfbdb2954ad39dd5ca33954a38d3882430cd9f04424d216072c7ef7c.scope.
Jan 20 14:27:46 compute-1 NetworkManager[49104]: <info>  [1768919266.1904] device (tap08e625c5-80): carrier: link connected
Jan 20 14:27:46 compute-1 podman[235273]: 2026-01-20 14:27:46.100446535 +0000 UTC m=+0.038142943 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:27:46 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:27:46.196 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[b97c2a46-3a82-4955-b039-34f727dc9287]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:27:46 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:27:46 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:27:46 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:27:46.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:27:46 compute-1 systemd[1]: Started libcrun container.
Jan 20 14:27:46 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4b00fa219391a75310795fc48774642fdbf2769a450a9b244613588abfb8d5e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:27:46 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4b00fa219391a75310795fc48774642fdbf2769a450a9b244613588abfb8d5e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:27:46 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4b00fa219391a75310795fc48774642fdbf2769a450a9b244613588abfb8d5e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:27:46 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:27:46.215 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[bb8c9c11-cc1b-4a6b-b5a1-d481091358fe]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap08e625c5-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:47:55:80'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 25], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 430511, 'reachable_time': 44959, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 235313, 'error': None, 'target': 'ovnmeta-08e625c5-899c-442a-8ef4-9a3c96892de4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:27:46 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4b00fa219391a75310795fc48774642fdbf2769a450a9b244613588abfb8d5e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:27:46 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:27:46.233 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[4c0caada-ebbc-4a21-8a29-e892f0715b47]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe47:5580'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 430511, 'tstamp': 430511}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 235314, 'error': None, 'target': 'ovnmeta-08e625c5-899c-442a-8ef4-9a3c96892de4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:27:46 compute-1 podman[235273]: 2026-01-20 14:27:46.235697888 +0000 UTC m=+0.173394206 container init 723b5cbabfbdb2954ad39dd5ca33954a38d3882430cd9f04424d216072c7ef7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_jemison, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:27:46 compute-1 podman[235273]: 2026-01-20 14:27:46.243488447 +0000 UTC m=+0.181184775 container start 723b5cbabfbdb2954ad39dd5ca33954a38d3882430cd9f04424d216072c7ef7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_jemison, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:27:46 compute-1 podman[235273]: 2026-01-20 14:27:46.246383348 +0000 UTC m=+0.184079696 container attach 723b5cbabfbdb2954ad39dd5ca33954a38d3882430cd9f04424d216072c7ef7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_jemison, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Jan 20 14:27:46 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:27:46.251 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[b5df8caa-c13d-4825-94b7-27194b07a89f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap08e625c5-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:47:55:80'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 25], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 430511, 'reachable_time': 44959, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 235318, 'error': None, 'target': 'ovnmeta-08e625c5-899c-442a-8ef4-9a3c96892de4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:27:46 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:27:46.282 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[cffab6d3-8f39-474b-afbc-2c6bab84b34c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:27:46 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:27:46.337 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[21dced7d-19e7-4cc8-b699-b21db43a882c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:27:46 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:27:46.338 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap08e625c5-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:27:46 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:27:46.338 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:27:46 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:27:46.339 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap08e625c5-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:27:46 compute-1 NetworkManager[49104]: <info>  [1768919266.3411] manager: (tap08e625c5-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/46)
Jan 20 14:27:46 compute-1 kernel: tap08e625c5-80: entered promiscuous mode
Jan 20 14:27:46 compute-1 nova_compute[225855]: 2026-01-20 14:27:46.340 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:27:46 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:27:46.343 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap08e625c5-80, col_values=(('external_ids', {'iface-id': 'e10f34be-dfc1-4bfe-806f-f00a84c17390'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:27:46 compute-1 ovn_controller[130490]: 2026-01-20T14:27:46Z|00090|binding|INFO|Releasing lport e10f34be-dfc1-4bfe-806f-f00a84c17390 from this chassis (sb_readonly=0)
Jan 20 14:27:46 compute-1 nova_compute[225855]: 2026-01-20 14:27:46.344 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:27:46 compute-1 nova_compute[225855]: 2026-01-20 14:27:46.358 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:27:46 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:27:46.359 140354 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/08e625c5-899c-442a-8ef4-9a3c96892de4.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/08e625c5-899c-442a-8ef4-9a3c96892de4.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 20 14:27:46 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:27:46.360 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[f1c1c1ac-b8e6-48ca-98a6-4ccf412371d7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:27:46 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:27:46.361 140354 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 20 14:27:46 compute-1 ovn_metadata_agent[140349]: global
Jan 20 14:27:46 compute-1 ovn_metadata_agent[140349]:     log         /dev/log local0 debug
Jan 20 14:27:46 compute-1 ovn_metadata_agent[140349]:     log-tag     haproxy-metadata-proxy-08e625c5-899c-442a-8ef4-9a3c96892de4
Jan 20 14:27:46 compute-1 ovn_metadata_agent[140349]:     user        root
Jan 20 14:27:46 compute-1 ovn_metadata_agent[140349]:     group       root
Jan 20 14:27:46 compute-1 ovn_metadata_agent[140349]:     maxconn     1024
Jan 20 14:27:46 compute-1 ovn_metadata_agent[140349]:     pidfile     /var/lib/neutron/external/pids/08e625c5-899c-442a-8ef4-9a3c96892de4.pid.haproxy
Jan 20 14:27:46 compute-1 ovn_metadata_agent[140349]:     daemon
Jan 20 14:27:46 compute-1 ovn_metadata_agent[140349]: 
Jan 20 14:27:46 compute-1 ovn_metadata_agent[140349]: defaults
Jan 20 14:27:46 compute-1 ovn_metadata_agent[140349]:     log global
Jan 20 14:27:46 compute-1 ovn_metadata_agent[140349]:     mode http
Jan 20 14:27:46 compute-1 ovn_metadata_agent[140349]:     option httplog
Jan 20 14:27:46 compute-1 ovn_metadata_agent[140349]:     option dontlognull
Jan 20 14:27:46 compute-1 ovn_metadata_agent[140349]:     option http-server-close
Jan 20 14:27:46 compute-1 ovn_metadata_agent[140349]:     option forwardfor
Jan 20 14:27:46 compute-1 ovn_metadata_agent[140349]:     retries                 3
Jan 20 14:27:46 compute-1 ovn_metadata_agent[140349]:     timeout http-request    30s
Jan 20 14:27:46 compute-1 ovn_metadata_agent[140349]:     timeout connect         30s
Jan 20 14:27:46 compute-1 ovn_metadata_agent[140349]:     timeout client          32s
Jan 20 14:27:46 compute-1 ovn_metadata_agent[140349]:     timeout server          32s
Jan 20 14:27:46 compute-1 ovn_metadata_agent[140349]:     timeout http-keep-alive 30s
Jan 20 14:27:46 compute-1 ovn_metadata_agent[140349]: 
Jan 20 14:27:46 compute-1 ovn_metadata_agent[140349]: 
Jan 20 14:27:46 compute-1 ovn_metadata_agent[140349]: listen listener
Jan 20 14:27:46 compute-1 ovn_metadata_agent[140349]:     bind 169.254.169.254:80
Jan 20 14:27:46 compute-1 ovn_metadata_agent[140349]:     server metadata /var/lib/neutron/metadata_proxy
Jan 20 14:27:46 compute-1 ovn_metadata_agent[140349]:     http-request add-header X-OVN-Network-ID 08e625c5-899c-442a-8ef4-9a3c96892de4
Jan 20 14:27:46 compute-1 ovn_metadata_agent[140349]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 20 14:27:46 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:27:46.361 140354 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-08e625c5-899c-442a-8ef4-9a3c96892de4', 'env', 'PROCESS_TAG=haproxy-08e625c5-899c-442a-8ef4-9a3c96892de4', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/08e625c5-899c-442a-8ef4-9a3c96892de4.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 20 14:27:46 compute-1 nova_compute[225855]: 2026-01-20 14:27:46.459 225859 INFO nova.compute.manager [None req-466ac999-dbbf-45a2-9a60-0b686a463dfc f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] [instance: d726266f-b9a6-406b-ad13-f9db3e0dc6aa] Post operation of migration started
Jan 20 14:27:46 compute-1 podman[235352]: 2026-01-20 14:27:46.776040388 +0000 UTC m=+0.052108316 container create ed48cc542c051179becb0bef90fd036c02c3a10fbac0702dc06ee569e76c9395 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-08e625c5-899c-442a-8ef4-9a3c96892de4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 20 14:27:46 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:27:46 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:27:46 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:27:46.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:27:46 compute-1 systemd[1]: Started libpod-conmon-ed48cc542c051179becb0bef90fd036c02c3a10fbac0702dc06ee569e76c9395.scope.
Jan 20 14:27:46 compute-1 podman[235352]: 2026-01-20 14:27:46.746627371 +0000 UTC m=+0.022695329 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 20 14:27:46 compute-1 systemd[1]: Started libcrun container.
Jan 20 14:27:46 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/693c57f2e466814459bb967b5b8379f5bf0326b3ec16540677e4a65e7bbf1a2c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 20 14:27:46 compute-1 podman[235352]: 2026-01-20 14:27:46.863663231 +0000 UTC m=+0.139731179 container init ed48cc542c051179becb0bef90fd036c02c3a10fbac0702dc06ee569e76c9395 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-08e625c5-899c-442a-8ef4-9a3c96892de4, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 20 14:27:46 compute-1 podman[235352]: 2026-01-20 14:27:46.869166226 +0000 UTC m=+0.145234144 container start ed48cc542c051179becb0bef90fd036c02c3a10fbac0702dc06ee569e76c9395 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-08e625c5-899c-442a-8ef4-9a3c96892de4, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3)
Jan 20 14:27:46 compute-1 neutron-haproxy-ovnmeta-08e625c5-899c-442a-8ef4-9a3c96892de4[235367]: [NOTICE]   (235372) : New worker (235374) forked
Jan 20 14:27:46 compute-1 neutron-haproxy-ovnmeta-08e625c5-899c-442a-8ef4-9a3c96892de4[235367]: [NOTICE]   (235372) : Loading success.
Jan 20 14:27:46 compute-1 nova_compute[225855]: 2026-01-20 14:27:46.888 225859 DEBUG oslo_concurrency.lockutils [None req-466ac999-dbbf-45a2-9a60-0b686a463dfc f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] Acquiring lock "refresh_cache-d726266f-b9a6-406b-ad13-f9db3e0dc6aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:27:46 compute-1 nova_compute[225855]: 2026-01-20 14:27:46.888 225859 DEBUG oslo_concurrency.lockutils [None req-466ac999-dbbf-45a2-9a60-0b686a463dfc f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] Acquired lock "refresh_cache-d726266f-b9a6-406b-ad13-f9db3e0dc6aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:27:46 compute-1 nova_compute[225855]: 2026-01-20 14:27:46.888 225859 DEBUG nova.network.neutron [None req-466ac999-dbbf-45a2-9a60-0b686a463dfc f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] [instance: d726266f-b9a6-406b-ad13-f9db3e0dc6aa] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 14:27:46 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:27:46.921 140354 INFO neutron.agent.ovn.metadata.agent [-] Port e6067076-0f97-4e9c-9355-353277570e11 in datapath 14f18b27-1594-48d8-a08b-a930f7adbc08 unbound from our chassis
Jan 20 14:27:46 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:27:46.923 140354 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 14f18b27-1594-48d8-a08b-a930f7adbc08
Jan 20 14:27:46 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:27:46.933 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[a602b218-6c2b-4dee-974b-4cbe0100eea9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:27:46 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:27:46.934 140354 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap14f18b27-11 in ovnmeta-14f18b27-1594-48d8-a08b-a930f7adbc08 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 20 14:27:46 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:27:46.936 229707 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap14f18b27-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 20 14:27:46 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:27:46.936 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[12f46e0d-43a8-4a32-9684-5e7141236665]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:27:46 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:27:46.937 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[73ce1273-9487-446b-82f5-06960f9b542e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:27:46 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:27:46.950 140466 DEBUG oslo.privsep.daemon [-] privsep: reply[e5c0a386-76f5-4766-8fe2-91e3dd41a930]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:27:46 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:27:46.971 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[7cb1d067-2a98-4312-8838-9260d5aaa963]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:27:47 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:27:46.999 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[69f37fd0-5fd9-4b5d-890a-1f2c7f41053c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:27:47 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:27:47.006 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[6e609cf7-4fde-4e41-bc38-4086adca8c7d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:27:47 compute-1 NetworkManager[49104]: <info>  [1768919267.0070] manager: (tap14f18b27-10): new Veth device (/org/freedesktop/NetworkManager/Devices/47)
Jan 20 14:27:47 compute-1 systemd-udevd[235311]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 14:27:47 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:27:47.037 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[19be65da-fd9d-4490-a761-98a607397c7e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:27:47 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:27:47.039 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[8dc0f149-82c4-4a6d-b064-4445b4822f3e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:27:47 compute-1 NetworkManager[49104]: <info>  [1768919267.0650] device (tap14f18b27-10): carrier: link connected
Jan 20 14:27:47 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:27:47.071 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[753fdee6-83e3-42d4-b377-3d8ddd3d4bd3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:27:47 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:27:47.087 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[e0b1cb4a-e006-4b41-8f26-5994ff41fc5c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap14f18b27-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7d:1f:17'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 26], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 430599, 'reachable_time': 25641, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 235401, 'error': None, 'target': 'ovnmeta-14f18b27-1594-48d8-a08b-a930f7adbc08', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:27:47 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:27:47.104 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[e2ef7a9f-3c82-44b3-8a12-b69b45d9f4f9]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe7d:1f17'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 430599, 'tstamp': 430599}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 235404, 'error': None, 'target': 'ovnmeta-14f18b27-1594-48d8-a08b-a930f7adbc08', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:27:47 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:27:47.120 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[d28047cc-2d18-45fc-bf27-08a6d3f77323]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap14f18b27-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7d:1f:17'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 26], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 430599, 'reachable_time': 25641, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 235407, 'error': None, 'target': 'ovnmeta-14f18b27-1594-48d8-a08b-a930f7adbc08', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:27:47 compute-1 ovn_controller[130490]: 2026-01-20T14:27:47Z|00008|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:db:cf:b7 10.100.0.12
Jan 20 14:27:47 compute-1 ovn_controller[130490]: 2026-01-20T14:27:47Z|00009|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:db:cf:b7 10.100.0.12
Jan 20 14:27:47 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:27:47.149 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[920fe278-cf87-4353-ba1c-9b304682c841]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:27:47 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:27:47.204 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[dfe733e1-fc7d-4a76-812a-1e519ee7966e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:27:47 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:27:47.205 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap14f18b27-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:27:47 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:27:47.205 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:27:47 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:27:47.206 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap14f18b27-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:27:47 compute-1 nova_compute[225855]: 2026-01-20 14:27:47.254 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:27:47 compute-1 NetworkManager[49104]: <info>  [1768919267.2545] manager: (tap14f18b27-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/48)
Jan 20 14:27:47 compute-1 kernel: tap14f18b27-10: entered promiscuous mode
Jan 20 14:27:47 compute-1 nova_compute[225855]: 2026-01-20 14:27:47.257 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:27:47 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:27:47.259 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap14f18b27-10, col_values=(('external_ids', {'iface-id': 'aa1c73c5-9761-4457-acdc-9f93220f739f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:27:47 compute-1 nova_compute[225855]: 2026-01-20 14:27:47.260 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:27:47 compute-1 ovn_controller[130490]: 2026-01-20T14:27:47Z|00091|binding|INFO|Releasing lport aa1c73c5-9761-4457-acdc-9f93220f739f from this chassis (sb_readonly=0)
Jan 20 14:27:47 compute-1 nova_compute[225855]: 2026-01-20 14:27:47.260 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:27:47 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:27:47.262 140354 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/14f18b27-1594-48d8-a08b-a930f7adbc08.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/14f18b27-1594-48d8-a08b-a930f7adbc08.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 20 14:27:47 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:27:47.263 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[ba307b89-be5c-4bb7-828c-ba293ead4064]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:27:47 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:27:47.264 140354 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 20 14:27:47 compute-1 ovn_metadata_agent[140349]: global
Jan 20 14:27:47 compute-1 ovn_metadata_agent[140349]:     log         /dev/log local0 debug
Jan 20 14:27:47 compute-1 ovn_metadata_agent[140349]:     log-tag     haproxy-metadata-proxy-14f18b27-1594-48d8-a08b-a930f7adbc08
Jan 20 14:27:47 compute-1 ovn_metadata_agent[140349]:     user        root
Jan 20 14:27:47 compute-1 ovn_metadata_agent[140349]:     group       root
Jan 20 14:27:47 compute-1 ovn_metadata_agent[140349]:     maxconn     1024
Jan 20 14:27:47 compute-1 ovn_metadata_agent[140349]:     pidfile     /var/lib/neutron/external/pids/14f18b27-1594-48d8-a08b-a930f7adbc08.pid.haproxy
Jan 20 14:27:47 compute-1 ovn_metadata_agent[140349]:     daemon
Jan 20 14:27:47 compute-1 ovn_metadata_agent[140349]: 
Jan 20 14:27:47 compute-1 ovn_metadata_agent[140349]: defaults
Jan 20 14:27:47 compute-1 ovn_metadata_agent[140349]:     log global
Jan 20 14:27:47 compute-1 ovn_metadata_agent[140349]:     mode http
Jan 20 14:27:47 compute-1 ovn_metadata_agent[140349]:     option httplog
Jan 20 14:27:47 compute-1 ovn_metadata_agent[140349]:     option dontlognull
Jan 20 14:27:47 compute-1 ovn_metadata_agent[140349]:     option http-server-close
Jan 20 14:27:47 compute-1 ovn_metadata_agent[140349]:     option forwardfor
Jan 20 14:27:47 compute-1 ovn_metadata_agent[140349]:     retries                 3
Jan 20 14:27:47 compute-1 ovn_metadata_agent[140349]:     timeout http-request    30s
Jan 20 14:27:47 compute-1 ovn_metadata_agent[140349]:     timeout connect         30s
Jan 20 14:27:47 compute-1 ovn_metadata_agent[140349]:     timeout client          32s
Jan 20 14:27:47 compute-1 ovn_metadata_agent[140349]:     timeout server          32s
Jan 20 14:27:47 compute-1 ovn_metadata_agent[140349]:     timeout http-keep-alive 30s
Jan 20 14:27:47 compute-1 ovn_metadata_agent[140349]: 
Jan 20 14:27:47 compute-1 ovn_metadata_agent[140349]: 
Jan 20 14:27:47 compute-1 ovn_metadata_agent[140349]: listen listener
Jan 20 14:27:47 compute-1 ovn_metadata_agent[140349]:     bind 169.254.169.254:80
Jan 20 14:27:47 compute-1 ovn_metadata_agent[140349]:     server metadata /var/lib/neutron/metadata_proxy
Jan 20 14:27:47 compute-1 ovn_metadata_agent[140349]:     http-request add-header X-OVN-Network-ID 14f18b27-1594-48d8-a08b-a930f7adbc08
Jan 20 14:27:47 compute-1 ovn_metadata_agent[140349]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 20 14:27:47 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:27:47.265 140354 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-14f18b27-1594-48d8-a08b-a930f7adbc08', 'env', 'PROCESS_TAG=haproxy-14f18b27-1594-48d8-a08b-a930f7adbc08', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/14f18b27-1594-48d8-a08b-a930f7adbc08.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 20 14:27:47 compute-1 nova_compute[225855]: 2026-01-20 14:27:47.274 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:27:47 compute-1 crazy_jemison[235295]: [
Jan 20 14:27:47 compute-1 crazy_jemison[235295]:     {
Jan 20 14:27:47 compute-1 crazy_jemison[235295]:         "available": false,
Jan 20 14:27:47 compute-1 crazy_jemison[235295]:         "ceph_device": false,
Jan 20 14:27:47 compute-1 crazy_jemison[235295]:         "device_id": "QEMU_DVD-ROM_QM00001",
Jan 20 14:27:47 compute-1 crazy_jemison[235295]:         "lsm_data": {},
Jan 20 14:27:47 compute-1 crazy_jemison[235295]:         "lvs": [],
Jan 20 14:27:47 compute-1 crazy_jemison[235295]:         "path": "/dev/sr0",
Jan 20 14:27:47 compute-1 crazy_jemison[235295]:         "rejected_reasons": [
Jan 20 14:27:47 compute-1 crazy_jemison[235295]:             "Insufficient space (<5GB)",
Jan 20 14:27:47 compute-1 crazy_jemison[235295]:             "Has a FileSystem"
Jan 20 14:27:47 compute-1 crazy_jemison[235295]:         ],
Jan 20 14:27:47 compute-1 crazy_jemison[235295]:         "sys_api": {
Jan 20 14:27:47 compute-1 crazy_jemison[235295]:             "actuators": null,
Jan 20 14:27:47 compute-1 crazy_jemison[235295]:             "device_nodes": "sr0",
Jan 20 14:27:47 compute-1 crazy_jemison[235295]:             "devname": "sr0",
Jan 20 14:27:47 compute-1 crazy_jemison[235295]:             "human_readable_size": "482.00 KB",
Jan 20 14:27:47 compute-1 crazy_jemison[235295]:             "id_bus": "ata",
Jan 20 14:27:47 compute-1 crazy_jemison[235295]:             "model": "QEMU DVD-ROM",
Jan 20 14:27:47 compute-1 crazy_jemison[235295]:             "nr_requests": "2",
Jan 20 14:27:47 compute-1 crazy_jemison[235295]:             "parent": "/dev/sr0",
Jan 20 14:27:47 compute-1 crazy_jemison[235295]:             "partitions": {},
Jan 20 14:27:47 compute-1 crazy_jemison[235295]:             "path": "/dev/sr0",
Jan 20 14:27:47 compute-1 crazy_jemison[235295]:             "removable": "1",
Jan 20 14:27:47 compute-1 crazy_jemison[235295]:             "rev": "2.5+",
Jan 20 14:27:47 compute-1 crazy_jemison[235295]:             "ro": "0",
Jan 20 14:27:47 compute-1 crazy_jemison[235295]:             "rotational": "1",
Jan 20 14:27:47 compute-1 crazy_jemison[235295]:             "sas_address": "",
Jan 20 14:27:47 compute-1 crazy_jemison[235295]:             "sas_device_handle": "",
Jan 20 14:27:47 compute-1 crazy_jemison[235295]:             "scheduler_mode": "mq-deadline",
Jan 20 14:27:47 compute-1 crazy_jemison[235295]:             "sectors": 0,
Jan 20 14:27:47 compute-1 crazy_jemison[235295]:             "sectorsize": "2048",
Jan 20 14:27:47 compute-1 crazy_jemison[235295]:             "size": 493568.0,
Jan 20 14:27:47 compute-1 crazy_jemison[235295]:             "support_discard": "2048",
Jan 20 14:27:47 compute-1 crazy_jemison[235295]:             "type": "disk",
Jan 20 14:27:47 compute-1 crazy_jemison[235295]:             "vendor": "QEMU"
Jan 20 14:27:47 compute-1 crazy_jemison[235295]:         }
Jan 20 14:27:47 compute-1 crazy_jemison[235295]:     }
Jan 20 14:27:47 compute-1 crazy_jemison[235295]: ]
Jan 20 14:27:47 compute-1 systemd[1]: libpod-723b5cbabfbdb2954ad39dd5ca33954a38d3882430cd9f04424d216072c7ef7c.scope: Deactivated successfully.
Jan 20 14:27:47 compute-1 systemd[1]: libpod-723b5cbabfbdb2954ad39dd5ca33954a38d3882430cd9f04424d216072c7ef7c.scope: Consumed 1.147s CPU time.
Jan 20 14:27:47 compute-1 podman[235273]: 2026-01-20 14:27:47.430253051 +0000 UTC m=+1.367949369 container died 723b5cbabfbdb2954ad39dd5ca33954a38d3882430cd9f04424d216072c7ef7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_jemison, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:27:47 compute-1 systemd[1]: var-lib-containers-storage-overlay-e4b00fa219391a75310795fc48774642fdbf2769a450a9b244613588abfb8d5e-merged.mount: Deactivated successfully.
Jan 20 14:27:47 compute-1 podman[235273]: 2026-01-20 14:27:47.498989813 +0000 UTC m=+1.436686141 container remove 723b5cbabfbdb2954ad39dd5ca33954a38d3882430cd9f04424d216072c7ef7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_jemison, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef)
Jan 20 14:27:47 compute-1 systemd[1]: libpod-conmon-723b5cbabfbdb2954ad39dd5ca33954a38d3882430cd9f04424d216072c7ef7c.scope: Deactivated successfully.
Jan 20 14:27:47 compute-1 sudo[235166]: pam_unix(sudo:session): session closed for user root
Jan 20 14:27:47 compute-1 podman[236624]: 2026-01-20 14:27:47.612567786 +0000 UTC m=+0.043717840 container create cb549f3dfd3a2168af12352c43b7f75cb0e219d08716514e5f07f4b105589042 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-14f18b27-1594-48d8-a08b-a930f7adbc08, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true)
Jan 20 14:27:47 compute-1 systemd[1]: Started libpod-conmon-cb549f3dfd3a2168af12352c43b7f75cb0e219d08716514e5f07f4b105589042.scope.
Jan 20 14:27:47 compute-1 systemd[1]: Started libcrun container.
Jan 20 14:27:47 compute-1 podman[236624]: 2026-01-20 14:27:47.591102803 +0000 UTC m=+0.022252867 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 20 14:27:47 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2676fe810d64d27697c2b84fe44f9ab65fef13401e606b1abe81b75e60236b87/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 20 14:27:47 compute-1 podman[236624]: 2026-01-20 14:27:47.698389689 +0000 UTC m=+0.129539753 container init cb549f3dfd3a2168af12352c43b7f75cb0e219d08716514e5f07f4b105589042 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-14f18b27-1594-48d8-a08b-a930f7adbc08, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 20 14:27:47 compute-1 podman[236624]: 2026-01-20 14:27:47.705004405 +0000 UTC m=+0.136154449 container start cb549f3dfd3a2168af12352c43b7f75cb0e219d08716514e5f07f4b105589042 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-14f18b27-1594-48d8-a08b-a930f7adbc08, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 20 14:27:47 compute-1 neutron-haproxy-ovnmeta-14f18b27-1594-48d8-a08b-a930f7adbc08[236640]: [NOTICE]   (236644) : New worker (236646) forked
Jan 20 14:27:47 compute-1 neutron-haproxy-ovnmeta-14f18b27-1594-48d8-a08b-a930f7adbc08[236640]: [NOTICE]   (236644) : Loading success.
Jan 20 14:27:48 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:27:48 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:27:48 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:27:48.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:27:48 compute-1 nova_compute[225855]: 2026-01-20 14:27:48.308 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:27:48 compute-1 ceph-mon[81775]: pgmap v1103: 321 pgs: 321 active+clean; 230 MiB data, 426 MiB used, 21 GiB / 21 GiB avail; 3.9 MiB/s rd, 5.9 MiB/s wr, 342 op/s
Jan 20 14:27:48 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/1118793105' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:27:48 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:27:48 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:27:48 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:27:48 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:27:48 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:27:48 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 14:27:48 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:27:48 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 14:27:48 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 14:27:48 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:27:48 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:27:48 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:27:48 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:27:48.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:27:49 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:27:49 compute-1 nova_compute[225855]: 2026-01-20 14:27:49.914 225859 DEBUG nova.network.neutron [None req-466ac999-dbbf-45a2-9a60-0b686a463dfc f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] [instance: d726266f-b9a6-406b-ad13-f9db3e0dc6aa] Updating instance_info_cache with network_info: [{"id": "e6067076-0f97-4e9c-9355-353277570e11", "address": "fa:16:3e:db:cf:b7", "network": {"id": "14f18b27-1594-48d8-a08b-a930f7adbc08", "bridge": "br-int", "label": "tempest-LiveMigrationTest-2126108622-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d15f60b9e48e4175b5520d1e57ed2d3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape6067076-0f", "ovs_interfaceid": "e6067076-0f97-4e9c-9355-353277570e11", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:27:49 compute-1 nova_compute[225855]: 2026-01-20 14:27:49.936 225859 DEBUG oslo_concurrency.lockutils [None req-466ac999-dbbf-45a2-9a60-0b686a463dfc f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] Releasing lock "refresh_cache-d726266f-b9a6-406b-ad13-f9db3e0dc6aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:27:49 compute-1 nova_compute[225855]: 2026-01-20 14:27:49.960 225859 DEBUG oslo_concurrency.lockutils [None req-466ac999-dbbf-45a2-9a60-0b686a463dfc f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:27:49 compute-1 nova_compute[225855]: 2026-01-20 14:27:49.960 225859 DEBUG oslo_concurrency.lockutils [None req-466ac999-dbbf-45a2-9a60-0b686a463dfc f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:27:49 compute-1 nova_compute[225855]: 2026-01-20 14:27:49.961 225859 DEBUG oslo_concurrency.lockutils [None req-466ac999-dbbf-45a2-9a60-0b686a463dfc f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:27:49 compute-1 nova_compute[225855]: 2026-01-20 14:27:49.965 225859 INFO nova.virt.libvirt.driver [None req-466ac999-dbbf-45a2-9a60-0b686a463dfc f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] [instance: d726266f-b9a6-406b-ad13-f9db3e0dc6aa] Sending announce-self command to QEMU monitor. Attempt 1 of 3
Jan 20 14:27:49 compute-1 virtqemud[225396]: Domain id=9 name='instance-00000010' uuid=d726266f-b9a6-406b-ad13-f9db3e0dc6aa is tainted: custom-monitor
Jan 20 14:27:50 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:27:50 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:27:50 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:27:50.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:27:50 compute-1 ceph-mon[81775]: pgmap v1104: 321 pgs: 321 active+clean; 229 MiB data, 443 MiB used, 21 GiB / 21 GiB avail; 4.1 MiB/s rd, 6.0 MiB/s wr, 331 op/s
Jan 20 14:27:50 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:27:50 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:27:50 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:27:50.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:27:50 compute-1 sshd-session[235371]: Invalid user orangepi from 45.179.5.170 port 42898
Jan 20 14:27:50 compute-1 nova_compute[225855]: 2026-01-20 14:27:50.899 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:27:50 compute-1 nova_compute[225855]: 2026-01-20 14:27:50.972 225859 INFO nova.virt.libvirt.driver [None req-466ac999-dbbf-45a2-9a60-0b686a463dfc f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] [instance: d726266f-b9a6-406b-ad13-f9db3e0dc6aa] Sending announce-self command to QEMU monitor. Attempt 2 of 3
Jan 20 14:27:51 compute-1 sshd-session[235371]: Connection closed by invalid user orangepi 45.179.5.170 port 42898 [preauth]
Jan 20 14:27:51 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/785786679' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:27:51 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/112098410' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:27:51 compute-1 nova_compute[225855]: 2026-01-20 14:27:51.978 225859 INFO nova.virt.libvirt.driver [None req-466ac999-dbbf-45a2-9a60-0b686a463dfc f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] [instance: d726266f-b9a6-406b-ad13-f9db3e0dc6aa] Sending announce-self command to QEMU monitor. Attempt 3 of 3
Jan 20 14:27:51 compute-1 nova_compute[225855]: 2026-01-20 14:27:51.982 225859 DEBUG nova.compute.manager [None req-466ac999-dbbf-45a2-9a60-0b686a463dfc f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] [instance: d726266f-b9a6-406b-ad13-f9db3e0dc6aa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:27:52 compute-1 nova_compute[225855]: 2026-01-20 14:27:52.004 225859 DEBUG nova.objects.instance [None req-466ac999-dbbf-45a2-9a60-0b686a463dfc f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] [instance: d726266f-b9a6-406b-ad13-f9db3e0dc6aa] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Jan 20 14:27:52 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:27:52 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:27:52 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:27:52.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:27:52 compute-1 ceph-mon[81775]: pgmap v1105: 321 pgs: 321 active+clean; 206 MiB data, 434 MiB used, 21 GiB / 21 GiB avail; 5.1 MiB/s rd, 5.9 MiB/s wr, 337 op/s
Jan 20 14:27:52 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/1556740575' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:27:52 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:27:52 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:27:52 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:27:52.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:27:53 compute-1 nova_compute[225855]: 2026-01-20 14:27:53.353 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:27:53 compute-1 nova_compute[225855]: 2026-01-20 14:27:53.754 225859 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1768919258.7535377, 11d7ff3d-7e9c-4ecb-86b5-ffb0e3e62bef => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:27:53 compute-1 nova_compute[225855]: 2026-01-20 14:27:53.755 225859 INFO nova.compute.manager [-] [instance: 11d7ff3d-7e9c-4ecb-86b5-ffb0e3e62bef] VM Stopped (Lifecycle Event)
Jan 20 14:27:53 compute-1 nova_compute[225855]: 2026-01-20 14:27:53.773 225859 DEBUG nova.compute.manager [None req-725642af-64a3-46f6-b583-d2ee9bc768f3 - - - - - -] [instance: 11d7ff3d-7e9c-4ecb-86b5-ffb0e3e62bef] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:27:53 compute-1 ceph-mon[81775]: pgmap v1106: 321 pgs: 321 active+clean; 208 MiB data, 432 MiB used, 21 GiB / 21 GiB avail; 5.4 MiB/s rd, 5.7 MiB/s wr, 349 op/s
Jan 20 14:27:53 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/4134874017' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:27:54 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:27:54 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:27:54 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:27:54.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:27:54 compute-1 ceph-mon[81775]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 20 14:27:54 compute-1 ceph-mon[81775]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.0 total, 600.0 interval
                                           Cumulative writes: 5036 writes, 26K keys, 5036 commit groups, 1.0 writes per commit group, ingest: 0.05 GB, 0.03 MB/s
                                           Cumulative WAL: 5036 writes, 5036 syncs, 1.00 writes per sync, written: 0.05 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1541 writes, 7455 keys, 1541 commit groups, 1.0 writes per commit group, ingest: 15.91 MB, 0.03 MB/s
                                           Interval WAL: 1541 writes, 1541 syncs, 1.00 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     64.4      0.46              0.12        14    0.033       0      0       0.0       0.0
                                             L6      1/0    8.65 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   3.5    107.7     89.6      1.17              0.39        13    0.090     61K   6800       0.0       0.0
                                            Sum      1/0    8.65 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   4.5     77.2     82.5      1.63              0.51        27    0.060     61K   6800       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   5.2     98.6    101.2      0.50              0.18        10    0.050     26K   2531       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0    107.7     89.6      1.17              0.39        13    0.090     61K   6800       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     64.7      0.46              0.12        13    0.035       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.7      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1800.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.029, interval 0.009
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.13 GB write, 0.07 MB/s write, 0.12 GB read, 0.07 MB/s read, 1.6 seconds
                                           Interval compaction: 0.05 GB write, 0.08 MB/s write, 0.05 GB read, 0.08 MB/s read, 0.5 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564d515a71f0#2 capacity: 304.00 MB usage: 12.47 MB table_size: 0 occupancy: 18446744073709551615 collections: 4 last_copies: 0 last_secs: 0.000107 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(713,11.98 MB,3.94008%) FilterBlock(27,179.42 KB,0.0576371%) IndexBlock(27,325.92 KB,0.104698%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Jan 20 14:27:54 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:27:54 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:27:54 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:27:54 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:27:54.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:27:55 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/2932853683' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:27:55 compute-1 nova_compute[225855]: 2026-01-20 14:27:55.903 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:27:56 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:27:56 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:27:56 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:27:56.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:27:56 compute-1 ceph-mon[81775]: pgmap v1107: 321 pgs: 321 active+clean; 187 MiB data, 422 MiB used, 21 GiB / 21 GiB avail; 4.0 MiB/s rd, 5.7 MiB/s wr, 279 op/s
Jan 20 14:27:56 compute-1 sudo[236662]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:27:56 compute-1 sudo[236662]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:27:56 compute-1 sudo[236662]: pam_unix(sudo:session): session closed for user root
Jan 20 14:27:56 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:27:56 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:27:56 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:27:56.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:27:56 compute-1 sudo[236687]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 14:27:56 compute-1 sudo[236687]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:27:56 compute-1 sudo[236687]: pam_unix(sudo:session): session closed for user root
Jan 20 14:27:57 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:27:57 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:27:58 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:27:58 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:27:58 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:27:58.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:27:58 compute-1 nova_compute[225855]: 2026-01-20 14:27:58.355 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:27:58 compute-1 sudo[236713]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:27:58 compute-1 sudo[236713]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:27:58 compute-1 sudo[236713]: pam_unix(sudo:session): session closed for user root
Jan 20 14:27:58 compute-1 sudo[236738]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:27:58 compute-1 sudo[236738]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:27:58 compute-1 sudo[236738]: pam_unix(sudo:session): session closed for user root
Jan 20 14:27:58 compute-1 ceph-mon[81775]: pgmap v1108: 321 pgs: 321 active+clean; 188 MiB data, 409 MiB used, 21 GiB / 21 GiB avail; 5.6 MiB/s rd, 5.0 MiB/s wr, 335 op/s
Jan 20 14:27:58 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:27:58 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:27:58 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:27:58.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:27:59 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:27:59 compute-1 ceph-mon[81775]: pgmap v1109: 321 pgs: 321 active+clean; 188 MiB data, 409 MiB used, 21 GiB / 21 GiB avail; 4.8 MiB/s rd, 2.6 MiB/s wr, 213 op/s
Jan 20 14:28:00 compute-1 podman[236764]: 2026-01-20 14:28:00.063131651 +0000 UTC m=+0.108305365 container health_status 72be95c12e041eb2cd1a16ad70dfa1391d9f03a5f3180a614ead9459dcfcd46f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Jan 20 14:28:00 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:28:00 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:28:00 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:28:00.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:28:00 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:28:00 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:28:00 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:28:00.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:28:00 compute-1 nova_compute[225855]: 2026-01-20 14:28:00.904 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:28:01 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/2407875768' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:28:02 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:28:02 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:28:02 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:28:02.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:28:02 compute-1 ceph-mon[81775]: pgmap v1110: 321 pgs: 321 active+clean; 188 MiB data, 409 MiB used, 21 GiB / 21 GiB avail; 4.0 MiB/s rd, 1.5 MiB/s wr, 172 op/s
Jan 20 14:28:02 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:28:02 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:28:02 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:28:02.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:28:03 compute-1 nova_compute[225855]: 2026-01-20 14:28:03.358 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:28:03 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/2155246561' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:28:03 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/3793975136' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:28:04 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:28:04 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:28:04 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:28:04.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:28:04 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:28:04 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:28:04 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:28:04 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:28:04.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:28:05 compute-1 ceph-mon[81775]: pgmap v1111: 321 pgs: 321 active+clean; 188 MiB data, 409 MiB used, 21 GiB / 21 GiB avail; 2.5 MiB/s rd, 1.1 MiB/s wr, 134 op/s
Jan 20 14:28:05 compute-1 nova_compute[225855]: 2026-01-20 14:28:05.908 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:28:05 compute-1 ceph-osd[79119]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1.
Jan 20 14:28:06 compute-1 ceph-mon[81775]: pgmap v1112: 321 pgs: 321 active+clean; 197 MiB data, 413 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 421 KiB/s wr, 83 op/s
Jan 20 14:28:06 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:28:06 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:28:06 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:28:06.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:28:06 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:28:06 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:28:06 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:28:06.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:28:08 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:28:08 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:28:08 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:28:08.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:28:08 compute-1 nova_compute[225855]: 2026-01-20 14:28:08.340 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:28:08 compute-1 nova_compute[225855]: 2026-01-20 14:28:08.361 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:28:08 compute-1 nova_compute[225855]: 2026-01-20 14:28:08.387 225859 DEBUG nova.compute.manager [None req-2a504ad3-9dad-4a29-b7b7-86dde0c37d43 d4b36d8e19cb4f529d2185f573f5072a 2074a786307f4427bbbbc1103d4a9305 - - default default] [instance: 29f0b4d4-abf0-46e7-bf67-38e71eb42e28] Stashing vm_state: active _prep_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:5560
Jan 20 14:28:08 compute-1 nova_compute[225855]: 2026-01-20 14:28:08.487 225859 DEBUG oslo_concurrency.lockutils [None req-2a504ad3-9dad-4a29-b7b7-86dde0c37d43 d4b36d8e19cb4f529d2185f573f5072a 2074a786307f4427bbbbc1103d4a9305 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:28:08 compute-1 nova_compute[225855]: 2026-01-20 14:28:08.488 225859 DEBUG oslo_concurrency.lockutils [None req-2a504ad3-9dad-4a29-b7b7-86dde0c37d43 d4b36d8e19cb4f529d2185f573f5072a 2074a786307f4427bbbbc1103d4a9305 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:28:08 compute-1 nova_compute[225855]: 2026-01-20 14:28:08.512 225859 DEBUG nova.objects.instance [None req-2a504ad3-9dad-4a29-b7b7-86dde0c37d43 d4b36d8e19cb4f529d2185f573f5072a 2074a786307f4427bbbbc1103d4a9305 - - default default] Lazy-loading 'pci_requests' on Instance uuid 29f0b4d4-abf0-46e7-bf67-38e71eb42e28 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:28:08 compute-1 nova_compute[225855]: 2026-01-20 14:28:08.529 225859 DEBUG nova.virt.hardware [None req-2a504ad3-9dad-4a29-b7b7-86dde0c37d43 d4b36d8e19cb4f529d2185f573f5072a 2074a786307f4427bbbbc1103d4a9305 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 20 14:28:08 compute-1 nova_compute[225855]: 2026-01-20 14:28:08.529 225859 INFO nova.compute.claims [None req-2a504ad3-9dad-4a29-b7b7-86dde0c37d43 d4b36d8e19cb4f529d2185f573f5072a 2074a786307f4427bbbbc1103d4a9305 - - default default] [instance: 29f0b4d4-abf0-46e7-bf67-38e71eb42e28] Claim successful on node compute-1.ctlplane.example.com
Jan 20 14:28:08 compute-1 nova_compute[225855]: 2026-01-20 14:28:08.530 225859 DEBUG nova.objects.instance [None req-2a504ad3-9dad-4a29-b7b7-86dde0c37d43 d4b36d8e19cb4f529d2185f573f5072a 2074a786307f4427bbbbc1103d4a9305 - - default default] Lazy-loading 'resources' on Instance uuid 29f0b4d4-abf0-46e7-bf67-38e71eb42e28 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:28:08 compute-1 nova_compute[225855]: 2026-01-20 14:28:08.545 225859 DEBUG nova.objects.instance [None req-2a504ad3-9dad-4a29-b7b7-86dde0c37d43 d4b36d8e19cb4f529d2185f573f5072a 2074a786307f4427bbbbc1103d4a9305 - - default default] Lazy-loading 'numa_topology' on Instance uuid 29f0b4d4-abf0-46e7-bf67-38e71eb42e28 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:28:08 compute-1 nova_compute[225855]: 2026-01-20 14:28:08.562 225859 DEBUG nova.objects.instance [None req-2a504ad3-9dad-4a29-b7b7-86dde0c37d43 d4b36d8e19cb4f529d2185f573f5072a 2074a786307f4427bbbbc1103d4a9305 - - default default] Lazy-loading 'pci_devices' on Instance uuid 29f0b4d4-abf0-46e7-bf67-38e71eb42e28 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:28:08 compute-1 nova_compute[225855]: 2026-01-20 14:28:08.622 225859 INFO nova.compute.resource_tracker [None req-2a504ad3-9dad-4a29-b7b7-86dde0c37d43 d4b36d8e19cb4f529d2185f573f5072a 2074a786307f4427bbbbc1103d4a9305 - - default default] [instance: 29f0b4d4-abf0-46e7-bf67-38e71eb42e28] Updating resource usage from migration 285e048b-eb47-4e34-b0d7-f2b9b65cbadb
Jan 20 14:28:08 compute-1 nova_compute[225855]: 2026-01-20 14:28:08.622 225859 DEBUG nova.compute.resource_tracker [None req-2a504ad3-9dad-4a29-b7b7-86dde0c37d43 d4b36d8e19cb4f529d2185f573f5072a 2074a786307f4427bbbbc1103d4a9305 - - default default] [instance: 29f0b4d4-abf0-46e7-bf67-38e71eb42e28] Starting to track incoming migration 285e048b-eb47-4e34-b0d7-f2b9b65cbadb with flavor 522deaab-a741-4dbb-932d-d8b13a211c33 _update_usage_from_migration /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1431
Jan 20 14:28:08 compute-1 nova_compute[225855]: 2026-01-20 14:28:08.682 225859 DEBUG oslo_concurrency.processutils [None req-2a504ad3-9dad-4a29-b7b7-86dde0c37d43 d4b36d8e19cb4f529d2185f573f5072a 2074a786307f4427bbbbc1103d4a9305 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:28:08 compute-1 ceph-mon[81775]: pgmap v1113: 321 pgs: 321 active+clean; 246 MiB data, 444 MiB used, 21 GiB / 21 GiB avail; 4.9 MiB/s rd, 3.2 MiB/s wr, 184 op/s
Jan 20 14:28:08 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:28:08 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:28:08 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:28:08.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:28:09 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:28:09 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4136608474' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:28:09 compute-1 nova_compute[225855]: 2026-01-20 14:28:09.108 225859 DEBUG oslo_concurrency.processutils [None req-2a504ad3-9dad-4a29-b7b7-86dde0c37d43 d4b36d8e19cb4f529d2185f573f5072a 2074a786307f4427bbbbc1103d4a9305 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:28:09 compute-1 nova_compute[225855]: 2026-01-20 14:28:09.114 225859 DEBUG nova.compute.provider_tree [None req-2a504ad3-9dad-4a29-b7b7-86dde0c37d43 d4b36d8e19cb4f529d2185f573f5072a 2074a786307f4427bbbbc1103d4a9305 - - default default] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:28:09 compute-1 nova_compute[225855]: 2026-01-20 14:28:09.138 225859 DEBUG nova.scheduler.client.report [None req-2a504ad3-9dad-4a29-b7b7-86dde0c37d43 d4b36d8e19cb4f529d2185f573f5072a 2074a786307f4427bbbbc1103d4a9305 - - default default] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:28:09 compute-1 nova_compute[225855]: 2026-01-20 14:28:09.163 225859 DEBUG oslo_concurrency.lockutils [None req-2a504ad3-9dad-4a29-b7b7-86dde0c37d43 d4b36d8e19cb4f529d2185f573f5072a 2074a786307f4427bbbbc1103d4a9305 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: held 0.675s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:28:09 compute-1 nova_compute[225855]: 2026-01-20 14:28:09.164 225859 INFO nova.compute.manager [None req-2a504ad3-9dad-4a29-b7b7-86dde0c37d43 d4b36d8e19cb4f529d2185f573f5072a 2074a786307f4427bbbbc1103d4a9305 - - default default] [instance: 29f0b4d4-abf0-46e7-bf67-38e71eb42e28] Migrating
Jan 20 14:28:09 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:28:09 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/4136608474' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:28:09 compute-1 ceph-mon[81775]: pgmap v1114: 321 pgs: 321 active+clean; 269 MiB data, 448 MiB used, 21 GiB / 21 GiB avail; 3.7 MiB/s rd, 3.9 MiB/s wr, 155 op/s
Jan 20 14:28:10 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:28:10 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:28:10 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:28:10.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:28:10 compute-1 sshd-session[236817]: Accepted publickey for nova from 192.168.122.102 port 59358 ssh2: ECDSA SHA256:XnPnjIKlkePRv+YAV8ktjwWUWX9aekF80jIRGfdhjRU
Jan 20 14:28:10 compute-1 systemd[1]: Created slice User Slice of UID 42436.
Jan 20 14:28:10 compute-1 systemd[1]: Starting User Runtime Directory /run/user/42436...
Jan 20 14:28:10 compute-1 systemd-logind[783]: New session 51 of user nova.
Jan 20 14:28:10 compute-1 systemd[1]: Finished User Runtime Directory /run/user/42436.
Jan 20 14:28:10 compute-1 systemd[1]: Starting User Manager for UID 42436...
Jan 20 14:28:10 compute-1 systemd[236831]: pam_unix(systemd-user:session): session opened for user nova(uid=42436) by nova(uid=0)
Jan 20 14:28:10 compute-1 podman[236819]: 2026-01-20 14:28:10.475601425 +0000 UTC m=+0.072295613 container health_status 533cd8e0c3b8b7f910f15cef49e7f879ab46c12ce646a35628c15f13725eb500 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 20 14:28:10 compute-1 systemd[236831]: Queued start job for default target Main User Target.
Jan 20 14:28:10 compute-1 systemd[236831]: Created slice User Application Slice.
Jan 20 14:28:10 compute-1 systemd[236831]: Started Mark boot as successful after the user session has run 2 minutes.
Jan 20 14:28:10 compute-1 systemd[236831]: Started Daily Cleanup of User's Temporary Directories.
Jan 20 14:28:10 compute-1 systemd[236831]: Reached target Paths.
Jan 20 14:28:10 compute-1 systemd[236831]: Reached target Timers.
Jan 20 14:28:10 compute-1 systemd[236831]: Starting D-Bus User Message Bus Socket...
Jan 20 14:28:10 compute-1 systemd[236831]: Starting Create User's Volatile Files and Directories...
Jan 20 14:28:10 compute-1 systemd[236831]: Finished Create User's Volatile Files and Directories.
Jan 20 14:28:10 compute-1 systemd[236831]: Listening on D-Bus User Message Bus Socket.
Jan 20 14:28:10 compute-1 systemd[236831]: Reached target Sockets.
Jan 20 14:28:10 compute-1 systemd[236831]: Reached target Basic System.
Jan 20 14:28:10 compute-1 systemd[236831]: Reached target Main User Target.
Jan 20 14:28:10 compute-1 systemd[236831]: Startup finished in 156ms.
Jan 20 14:28:10 compute-1 systemd[1]: Started User Manager for UID 42436.
Jan 20 14:28:10 compute-1 systemd[1]: Started Session 51 of User nova.
Jan 20 14:28:10 compute-1 sshd-session[236817]: pam_unix(sshd:session): session opened for user nova(uid=42436) by nova(uid=0)
Jan 20 14:28:10 compute-1 sshd-session[236855]: Received disconnect from 192.168.122.102 port 59358:11: disconnected by user
Jan 20 14:28:10 compute-1 sshd-session[236855]: Disconnected from user nova 192.168.122.102 port 59358
Jan 20 14:28:10 compute-1 sshd-session[236817]: pam_unix(sshd:session): session closed for user nova
Jan 20 14:28:10 compute-1 systemd[1]: session-51.scope: Deactivated successfully.
Jan 20 14:28:10 compute-1 systemd-logind[783]: Session 51 logged out. Waiting for processes to exit.
Jan 20 14:28:10 compute-1 systemd-logind[783]: Removed session 51.
Jan 20 14:28:10 compute-1 sshd-session[236857]: Accepted publickey for nova from 192.168.122.102 port 59366 ssh2: ECDSA SHA256:XnPnjIKlkePRv+YAV8ktjwWUWX9aekF80jIRGfdhjRU
Jan 20 14:28:10 compute-1 systemd-logind[783]: New session 53 of user nova.
Jan 20 14:28:10 compute-1 systemd[1]: Started Session 53 of User nova.
Jan 20 14:28:10 compute-1 sshd-session[236857]: pam_unix(sshd:session): session opened for user nova(uid=42436) by nova(uid=0)
Jan 20 14:28:10 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:28:10 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:28:10 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:28:10.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:28:10 compute-1 nova_compute[225855]: 2026-01-20 14:28:10.950 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:28:10 compute-1 sshd-session[236860]: Received disconnect from 192.168.122.102 port 59366:11: disconnected by user
Jan 20 14:28:10 compute-1 sshd-session[236860]: Disconnected from user nova 192.168.122.102 port 59366
Jan 20 14:28:10 compute-1 sshd-session[236857]: pam_unix(sshd:session): session closed for user nova
Jan 20 14:28:10 compute-1 systemd[1]: session-53.scope: Deactivated successfully.
Jan 20 14:28:10 compute-1 systemd-logind[783]: Session 53 logged out. Waiting for processes to exit.
Jan 20 14:28:10 compute-1 systemd-logind[783]: Removed session 53.
Jan 20 14:28:11 compute-1 nova_compute[225855]: 2026-01-20 14:28:11.360 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:28:11 compute-1 nova_compute[225855]: 2026-01-20 14:28:11.361 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 20 14:28:12 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:28:12 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:28:12 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:28:12.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:28:12 compute-1 nova_compute[225855]: 2026-01-20 14:28:12.354 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:28:12 compute-1 nova_compute[225855]: 2026-01-20 14:28:12.354 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:28:12 compute-1 nova_compute[225855]: 2026-01-20 14:28:12.354 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 20 14:28:12 compute-1 nova_compute[225855]: 2026-01-20 14:28:12.367 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 20 14:28:12 compute-1 ceph-mon[81775]: pgmap v1115: 321 pgs: 321 active+clean; 302 MiB data, 484 MiB used, 21 GiB / 21 GiB avail; 4.0 MiB/s rd, 5.2 MiB/s wr, 197 op/s
Jan 20 14:28:12 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:28:12 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:28:12 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:28:12.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:28:13 compute-1 nova_compute[225855]: 2026-01-20 14:28:13.388 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:28:13 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/2012390759' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 14:28:13 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/2012390759' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 14:28:14 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:28:14 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:28:14 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:28:14.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:28:14 compute-1 nova_compute[225855]: 2026-01-20 14:28:14.353 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:28:14 compute-1 nova_compute[225855]: 2026-01-20 14:28:14.354 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 14:28:14 compute-1 nova_compute[225855]: 2026-01-20 14:28:14.355 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 20 14:28:14 compute-1 nova_compute[225855]: 2026-01-20 14:28:14.527 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "refresh_cache-d726266f-b9a6-406b-ad13-f9db3e0dc6aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:28:14 compute-1 nova_compute[225855]: 2026-01-20 14:28:14.527 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquired lock "refresh_cache-d726266f-b9a6-406b-ad13-f9db3e0dc6aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:28:14 compute-1 nova_compute[225855]: 2026-01-20 14:28:14.528 225859 DEBUG nova.network.neutron [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] [instance: d726266f-b9a6-406b-ad13-f9db3e0dc6aa] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 20 14:28:14 compute-1 nova_compute[225855]: 2026-01-20 14:28:14.528 225859 DEBUG nova.objects.instance [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lazy-loading 'info_cache' on Instance uuid d726266f-b9a6-406b-ad13-f9db3e0dc6aa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:28:14 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:28:14 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:28:14 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:28:14 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:28:14.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:28:15 compute-1 ceph-mon[81775]: pgmap v1116: 321 pgs: 321 active+clean; 313 MiB data, 489 MiB used, 21 GiB / 21 GiB avail; 4.0 MiB/s rd, 5.7 MiB/s wr, 206 op/s
Jan 20 14:28:15 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/373042378' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:28:15 compute-1 nova_compute[225855]: 2026-01-20 14:28:15.978 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:28:16 compute-1 ceph-mon[81775]: pgmap v1117: 321 pgs: 321 active+clean; 313 MiB data, 494 MiB used, 21 GiB / 21 GiB avail; 4.0 MiB/s rd, 5.7 MiB/s wr, 207 op/s
Jan 20 14:28:16 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/226743044' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:28:16 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:28:16 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:28:16 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:28:16.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:28:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:28:16.387 140354 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:28:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:28:16.388 140354 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:28:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:28:16.389 140354 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:28:16 compute-1 nova_compute[225855]: 2026-01-20 14:28:16.560 225859 DEBUG nova.network.neutron [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] [instance: d726266f-b9a6-406b-ad13-f9db3e0dc6aa] Updating instance_info_cache with network_info: [{"id": "e6067076-0f97-4e9c-9355-353277570e11", "address": "fa:16:3e:db:cf:b7", "network": {"id": "14f18b27-1594-48d8-a08b-a930f7adbc08", "bridge": "br-int", "label": "tempest-LiveMigrationTest-2126108622-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d15f60b9e48e4175b5520d1e57ed2d3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape6067076-0f", "ovs_interfaceid": "e6067076-0f97-4e9c-9355-353277570e11", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:28:16 compute-1 nova_compute[225855]: 2026-01-20 14:28:16.579 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Releasing lock "refresh_cache-d726266f-b9a6-406b-ad13-f9db3e0dc6aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:28:16 compute-1 nova_compute[225855]: 2026-01-20 14:28:16.579 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] [instance: d726266f-b9a6-406b-ad13-f9db3e0dc6aa] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 20 14:28:16 compute-1 nova_compute[225855]: 2026-01-20 14:28:16.580 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:28:16 compute-1 nova_compute[225855]: 2026-01-20 14:28:16.580 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:28:16 compute-1 nova_compute[225855]: 2026-01-20 14:28:16.580 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:28:16 compute-1 nova_compute[225855]: 2026-01-20 14:28:16.580 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:28:16 compute-1 nova_compute[225855]: 2026-01-20 14:28:16.609 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:28:16 compute-1 nova_compute[225855]: 2026-01-20 14:28:16.609 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:28:16 compute-1 nova_compute[225855]: 2026-01-20 14:28:16.609 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:28:16 compute-1 nova_compute[225855]: 2026-01-20 14:28:16.609 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 14:28:16 compute-1 nova_compute[225855]: 2026-01-20 14:28:16.610 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:28:16 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:28:16 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:28:16 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:28:16.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:28:17 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/2276074320' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:28:17 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/3069433274' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:28:17 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:28:17 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2141615976' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:28:17 compute-1 nova_compute[225855]: 2026-01-20 14:28:17.076 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:28:17 compute-1 nova_compute[225855]: 2026-01-20 14:28:17.150 225859 DEBUG nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] skipping disk for instance-00000010 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 14:28:17 compute-1 nova_compute[225855]: 2026-01-20 14:28:17.151 225859 DEBUG nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] skipping disk for instance-00000010 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 14:28:17 compute-1 nova_compute[225855]: 2026-01-20 14:28:17.374 225859 WARNING nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 14:28:17 compute-1 nova_compute[225855]: 2026-01-20 14:28:17.375 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4547MB free_disk=20.87643814086914GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 14:28:17 compute-1 nova_compute[225855]: 2026-01-20 14:28:17.375 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:28:17 compute-1 nova_compute[225855]: 2026-01-20 14:28:17.376 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:28:17 compute-1 nova_compute[225855]: 2026-01-20 14:28:17.492 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Migration for instance 29f0b4d4-abf0-46e7-bf67-38e71eb42e28 refers to another host's instance! _pair_instances_to_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:903
Jan 20 14:28:17 compute-1 nova_compute[225855]: 2026-01-20 14:28:17.577 225859 INFO nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] [instance: 29f0b4d4-abf0-46e7-bf67-38e71eb42e28] Updating resource usage from migration 285e048b-eb47-4e34-b0d7-f2b9b65cbadb
Jan 20 14:28:17 compute-1 nova_compute[225855]: 2026-01-20 14:28:17.578 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] [instance: 29f0b4d4-abf0-46e7-bf67-38e71eb42e28] Starting to track incoming migration 285e048b-eb47-4e34-b0d7-f2b9b65cbadb with flavor 522deaab-a741-4dbb-932d-d8b13a211c33 _update_usage_from_migration /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1431
Jan 20 14:28:17 compute-1 nova_compute[225855]: 2026-01-20 14:28:17.736 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Instance d726266f-b9a6-406b-ad13-f9db3e0dc6aa actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 20 14:28:17 compute-1 nova_compute[225855]: 2026-01-20 14:28:17.755 225859 WARNING nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Instance 29f0b4d4-abf0-46e7-bf67-38e71eb42e28 has been moved to another host compute-2.ctlplane.example.com(compute-2.ctlplane.example.com). There are allocations remaining against the source host that might need to be removed: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}.
Jan 20 14:28:17 compute-1 nova_compute[225855]: 2026-01-20 14:28:17.755 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 14:28:17 compute-1 nova_compute[225855]: 2026-01-20 14:28:17.755 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 14:28:17 compute-1 nova_compute[225855]: 2026-01-20 14:28:17.837 225859 DEBUG nova.scheduler.client.report [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Refreshing inventories for resource provider bbb02880-a710-4ac1-8b2c-5c09765848d1 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 20 14:28:17 compute-1 nova_compute[225855]: 2026-01-20 14:28:17.867 225859 DEBUG nova.scheduler.client.report [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Updating ProviderTree inventory for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 20 14:28:17 compute-1 nova_compute[225855]: 2026-01-20 14:28:17.868 225859 DEBUG nova.compute.provider_tree [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Updating inventory in ProviderTree for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 20 14:28:17 compute-1 nova_compute[225855]: 2026-01-20 14:28:17.903 225859 DEBUG nova.scheduler.client.report [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Refreshing aggregate associations for resource provider bbb02880-a710-4ac1-8b2c-5c09765848d1, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 20 14:28:17 compute-1 nova_compute[225855]: 2026-01-20 14:28:17.933 225859 DEBUG nova.scheduler.client.report [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Refreshing trait associations for resource provider bbb02880-a710-4ac1-8b2c-5c09765848d1, traits: COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_STORAGE_BUS_SATA,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_TRUSTED_CERTS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSSE3,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_VOLUME_EXTEND,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_RESCUE_BFV,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_USB,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE42,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SOCKET_PCI_NUMA_AFFINITY _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 20 14:28:18 compute-1 nova_compute[225855]: 2026-01-20 14:28:18.035 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:28:18 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/2141615976' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:28:18 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/3394207389' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:28:18 compute-1 ceph-mon[81775]: pgmap v1118: 321 pgs: 321 active+clean; 326 MiB data, 494 MiB used, 21 GiB / 21 GiB avail; 4.1 MiB/s rd, 6.4 MiB/s wr, 222 op/s
Jan 20 14:28:18 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:28:18 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:28:18 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:28:18.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:28:18 compute-1 nova_compute[225855]: 2026-01-20 14:28:18.391 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:28:18 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:28:18 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1146682809' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:28:18 compute-1 nova_compute[225855]: 2026-01-20 14:28:18.485 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:28:18 compute-1 nova_compute[225855]: 2026-01-20 14:28:18.491 225859 DEBUG nova.compute.provider_tree [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:28:18 compute-1 nova_compute[225855]: 2026-01-20 14:28:18.519 225859 DEBUG nova.scheduler.client.report [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:28:18 compute-1 nova_compute[225855]: 2026-01-20 14:28:18.557 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 14:28:18 compute-1 nova_compute[225855]: 2026-01-20 14:28:18.557 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.181s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:28:18 compute-1 sudo[236911]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:28:18 compute-1 sudo[236911]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:28:18 compute-1 sudo[236911]: pam_unix(sudo:session): session closed for user root
Jan 20 14:28:18 compute-1 sudo[236936]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:28:18 compute-1 sudo[236936]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:28:18 compute-1 sudo[236936]: pam_unix(sudo:session): session closed for user root
Jan 20 14:28:18 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:28:18 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:28:18 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:28:18.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:28:19 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/1146682809' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:28:19 compute-1 nova_compute[225855]: 2026-01-20 14:28:19.317 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:28:19 compute-1 nova_compute[225855]: 2026-01-20 14:28:19.353 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:28:19 compute-1 nova_compute[225855]: 2026-01-20 14:28:19.354 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:28:19 compute-1 nova_compute[225855]: 2026-01-20 14:28:19.354 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 14:28:19 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:28:20 compute-1 ceph-mon[81775]: pgmap v1119: 321 pgs: 321 active+clean; 328 MiB data, 497 MiB used, 21 GiB / 21 GiB avail; 1.2 MiB/s rd, 3.8 MiB/s wr, 131 op/s
Jan 20 14:28:20 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:28:20 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:28:20 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:28:20.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:28:20 compute-1 nova_compute[225855]: 2026-01-20 14:28:20.373 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:28:20 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:28:20 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:28:20 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:28:20.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:28:20 compute-1 nova_compute[225855]: 2026-01-20 14:28:20.983 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:28:21 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/4021864044' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:28:21 compute-1 systemd[1]: Stopping User Manager for UID 42436...
Jan 20 14:28:21 compute-1 systemd[236831]: Activating special unit Exit the Session...
Jan 20 14:28:21 compute-1 systemd[236831]: Stopped target Main User Target.
Jan 20 14:28:21 compute-1 systemd[236831]: Stopped target Basic System.
Jan 20 14:28:21 compute-1 systemd[236831]: Stopped target Paths.
Jan 20 14:28:21 compute-1 systemd[236831]: Stopped target Sockets.
Jan 20 14:28:21 compute-1 systemd[236831]: Stopped target Timers.
Jan 20 14:28:21 compute-1 systemd[236831]: Stopped Mark boot as successful after the user session has run 2 minutes.
Jan 20 14:28:21 compute-1 systemd[236831]: Stopped Daily Cleanup of User's Temporary Directories.
Jan 20 14:28:21 compute-1 systemd[236831]: Closed D-Bus User Message Bus Socket.
Jan 20 14:28:21 compute-1 systemd[236831]: Stopped Create User's Volatile Files and Directories.
Jan 20 14:28:21 compute-1 systemd[236831]: Removed slice User Application Slice.
Jan 20 14:28:21 compute-1 systemd[236831]: Reached target Shutdown.
Jan 20 14:28:21 compute-1 systemd[236831]: Finished Exit the Session.
Jan 20 14:28:21 compute-1 systemd[236831]: Reached target Exit the Session.
Jan 20 14:28:21 compute-1 systemd[1]: user@42436.service: Deactivated successfully.
Jan 20 14:28:21 compute-1 systemd[1]: Stopped User Manager for UID 42436.
Jan 20 14:28:21 compute-1 systemd[1]: Stopping User Runtime Directory /run/user/42436...
Jan 20 14:28:21 compute-1 systemd[1]: run-user-42436.mount: Deactivated successfully.
Jan 20 14:28:21 compute-1 systemd[1]: user-runtime-dir@42436.service: Deactivated successfully.
Jan 20 14:28:21 compute-1 systemd[1]: Stopped User Runtime Directory /run/user/42436.
Jan 20 14:28:21 compute-1 systemd[1]: Removed slice User Slice of UID 42436.
Jan 20 14:28:22 compute-1 ceph-mon[81775]: pgmap v1120: 321 pgs: 321 active+clean; 339 MiB data, 506 MiB used, 20 GiB / 21 GiB avail; 960 KiB/s rd, 3.8 MiB/s wr, 114 op/s
Jan 20 14:28:22 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/428076612' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:28:22 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/117801400' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:28:22 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:28:22 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:28:22 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:28:22.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:28:22 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:28:22 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:28:22 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:28:22.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:28:23 compute-1 nova_compute[225855]: 2026-01-20 14:28:23.427 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:28:24 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:28:24 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:28:24 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:28:24.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:28:24 compute-1 nova_compute[225855]: 2026-01-20 14:28:24.652 225859 DEBUG oslo_concurrency.lockutils [None req-2a504ad3-9dad-4a29-b7b7-86dde0c37d43 d4b36d8e19cb4f529d2185f573f5072a 2074a786307f4427bbbbc1103d4a9305 - - default default] Acquiring lock "refresh_cache-29f0b4d4-abf0-46e7-bf67-38e71eb42e28" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:28:24 compute-1 nova_compute[225855]: 2026-01-20 14:28:24.653 225859 DEBUG oslo_concurrency.lockutils [None req-2a504ad3-9dad-4a29-b7b7-86dde0c37d43 d4b36d8e19cb4f529d2185f573f5072a 2074a786307f4427bbbbc1103d4a9305 - - default default] Acquired lock "refresh_cache-29f0b4d4-abf0-46e7-bf67-38e71eb42e28" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:28:24 compute-1 nova_compute[225855]: 2026-01-20 14:28:24.653 225859 DEBUG nova.network.neutron [None req-2a504ad3-9dad-4a29-b7b7-86dde0c37d43 d4b36d8e19cb4f529d2185f573f5072a 2074a786307f4427bbbbc1103d4a9305 - - default default] [instance: 29f0b4d4-abf0-46e7-bf67-38e71eb42e28] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 14:28:24 compute-1 ceph-mon[81775]: pgmap v1121: 321 pgs: 321 active+clean; 346 MiB data, 520 MiB used, 20 GiB / 21 GiB avail; 388 KiB/s rd, 2.6 MiB/s wr, 80 op/s
Jan 20 14:28:24 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:28:24 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:28:24 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:28:24 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:28:24.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:28:24 compute-1 nova_compute[225855]: 2026-01-20 14:28:24.967 225859 DEBUG nova.network.neutron [None req-2a504ad3-9dad-4a29-b7b7-86dde0c37d43 d4b36d8e19cb4f529d2185f573f5072a 2074a786307f4427bbbbc1103d4a9305 - - default default] [instance: 29f0b4d4-abf0-46e7-bf67-38e71eb42e28] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 20 14:28:25 compute-1 nova_compute[225855]: 2026-01-20 14:28:25.443 225859 DEBUG nova.network.neutron [None req-2a504ad3-9dad-4a29-b7b7-86dde0c37d43 d4b36d8e19cb4f529d2185f573f5072a 2074a786307f4427bbbbc1103d4a9305 - - default default] [instance: 29f0b4d4-abf0-46e7-bf67-38e71eb42e28] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:28:25 compute-1 nova_compute[225855]: 2026-01-20 14:28:25.462 225859 DEBUG oslo_concurrency.lockutils [None req-2a504ad3-9dad-4a29-b7b7-86dde0c37d43 d4b36d8e19cb4f529d2185f573f5072a 2074a786307f4427bbbbc1103d4a9305 - - default default] Releasing lock "refresh_cache-29f0b4d4-abf0-46e7-bf67-38e71eb42e28" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:28:25 compute-1 nova_compute[225855]: 2026-01-20 14:28:25.580 225859 DEBUG nova.virt.libvirt.driver [None req-2a504ad3-9dad-4a29-b7b7-86dde0c37d43 d4b36d8e19cb4f529d2185f573f5072a 2074a786307f4427bbbbc1103d4a9305 - - default default] [instance: 29f0b4d4-abf0-46e7-bf67-38e71eb42e28] Starting finish_migration finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11698
Jan 20 14:28:25 compute-1 nova_compute[225855]: 2026-01-20 14:28:25.581 225859 DEBUG nova.virt.libvirt.driver [None req-2a504ad3-9dad-4a29-b7b7-86dde0c37d43 d4b36d8e19cb4f529d2185f573f5072a 2074a786307f4427bbbbc1103d4a9305 - - default default] [instance: 29f0b4d4-abf0-46e7-bf67-38e71eb42e28] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719
Jan 20 14:28:25 compute-1 nova_compute[225855]: 2026-01-20 14:28:25.581 225859 INFO nova.virt.libvirt.driver [None req-2a504ad3-9dad-4a29-b7b7-86dde0c37d43 d4b36d8e19cb4f529d2185f573f5072a 2074a786307f4427bbbbc1103d4a9305 - - default default] [instance: 29f0b4d4-abf0-46e7-bf67-38e71eb42e28] Creating image(s)
Jan 20 14:28:25 compute-1 nova_compute[225855]: 2026-01-20 14:28:25.630 225859 DEBUG nova.storage.rbd_utils [None req-2a504ad3-9dad-4a29-b7b7-86dde0c37d43 d4b36d8e19cb4f529d2185f573f5072a 2074a786307f4427bbbbc1103d4a9305 - - default default] creating snapshot(nova-resize) on rbd image(29f0b4d4-abf0-46e7-bf67-38e71eb42e28_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 20 14:28:25 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/3997936891' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:28:25 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e153 e153: 3 total, 3 up, 3 in
Jan 20 14:28:25 compute-1 nova_compute[225855]: 2026-01-20 14:28:25.864 225859 DEBUG nova.objects.instance [None req-2a504ad3-9dad-4a29-b7b7-86dde0c37d43 d4b36d8e19cb4f529d2185f573f5072a 2074a786307f4427bbbbc1103d4a9305 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 29f0b4d4-abf0-46e7-bf67-38e71eb42e28 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:28:26 compute-1 nova_compute[225855]: 2026-01-20 14:28:26.059 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:28:26 compute-1 nova_compute[225855]: 2026-01-20 14:28:26.144 225859 DEBUG nova.virt.libvirt.driver [None req-2a504ad3-9dad-4a29-b7b7-86dde0c37d43 d4b36d8e19cb4f529d2185f573f5072a 2074a786307f4427bbbbc1103d4a9305 - - default default] [instance: 29f0b4d4-abf0-46e7-bf67-38e71eb42e28] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Jan 20 14:28:26 compute-1 nova_compute[225855]: 2026-01-20 14:28:26.145 225859 DEBUG nova.virt.libvirt.driver [None req-2a504ad3-9dad-4a29-b7b7-86dde0c37d43 d4b36d8e19cb4f529d2185f573f5072a 2074a786307f4427bbbbc1103d4a9305 - - default default] [instance: 29f0b4d4-abf0-46e7-bf67-38e71eb42e28] Ensure instance console log exists: /var/lib/nova/instances/29f0b4d4-abf0-46e7-bf67-38e71eb42e28/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 20 14:28:26 compute-1 nova_compute[225855]: 2026-01-20 14:28:26.145 225859 DEBUG oslo_concurrency.lockutils [None req-2a504ad3-9dad-4a29-b7b7-86dde0c37d43 d4b36d8e19cb4f529d2185f573f5072a 2074a786307f4427bbbbc1103d4a9305 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:28:26 compute-1 nova_compute[225855]: 2026-01-20 14:28:26.146 225859 DEBUG oslo_concurrency.lockutils [None req-2a504ad3-9dad-4a29-b7b7-86dde0c37d43 d4b36d8e19cb4f529d2185f573f5072a 2074a786307f4427bbbbc1103d4a9305 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:28:26 compute-1 nova_compute[225855]: 2026-01-20 14:28:26.146 225859 DEBUG oslo_concurrency.lockutils [None req-2a504ad3-9dad-4a29-b7b7-86dde0c37d43 d4b36d8e19cb4f529d2185f573f5072a 2074a786307f4427bbbbc1103d4a9305 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:28:26 compute-1 nova_compute[225855]: 2026-01-20 14:28:26.148 225859 DEBUG nova.virt.libvirt.driver [None req-2a504ad3-9dad-4a29-b7b7-86dde0c37d43 d4b36d8e19cb4f529d2185f573f5072a 2074a786307f4427bbbbc1103d4a9305 - - default default] [instance: 29f0b4d4-abf0-46e7-bf67-38e71eb42e28] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'device_type': 'disk', 'encryption_options': None, 'size': 0, 'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'guest_format': None, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'image_id': 'a32b3e07-16d8-46fd-9a7b-c242c432fcf9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 20 14:28:26 compute-1 nova_compute[225855]: 2026-01-20 14:28:26.152 225859 WARNING nova.virt.libvirt.driver [None req-2a504ad3-9dad-4a29-b7b7-86dde0c37d43 d4b36d8e19cb4f529d2185f573f5072a 2074a786307f4427bbbbc1103d4a9305 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 14:28:26 compute-1 nova_compute[225855]: 2026-01-20 14:28:26.156 225859 DEBUG nova.virt.libvirt.host [None req-2a504ad3-9dad-4a29-b7b7-86dde0c37d43 d4b36d8e19cb4f529d2185f573f5072a 2074a786307f4427bbbbc1103d4a9305 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 20 14:28:26 compute-1 nova_compute[225855]: 2026-01-20 14:28:26.157 225859 DEBUG nova.virt.libvirt.host [None req-2a504ad3-9dad-4a29-b7b7-86dde0c37d43 d4b36d8e19cb4f529d2185f573f5072a 2074a786307f4427bbbbc1103d4a9305 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 20 14:28:26 compute-1 nova_compute[225855]: 2026-01-20 14:28:26.161 225859 DEBUG nova.virt.libvirt.host [None req-2a504ad3-9dad-4a29-b7b7-86dde0c37d43 d4b36d8e19cb4f529d2185f573f5072a 2074a786307f4427bbbbc1103d4a9305 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 20 14:28:26 compute-1 nova_compute[225855]: 2026-01-20 14:28:26.161 225859 DEBUG nova.virt.libvirt.host [None req-2a504ad3-9dad-4a29-b7b7-86dde0c37d43 d4b36d8e19cb4f529d2185f573f5072a 2074a786307f4427bbbbc1103d4a9305 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 20 14:28:26 compute-1 nova_compute[225855]: 2026-01-20 14:28:26.162 225859 DEBUG nova.virt.libvirt.driver [None req-2a504ad3-9dad-4a29-b7b7-86dde0c37d43 d4b36d8e19cb4f529d2185f573f5072a 2074a786307f4427bbbbc1103d4a9305 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 20 14:28:26 compute-1 nova_compute[225855]: 2026-01-20 14:28:26.163 225859 DEBUG nova.virt.hardware [None req-2a504ad3-9dad-4a29-b7b7-86dde0c37d43 d4b36d8e19cb4f529d2185f573f5072a 2074a786307f4427bbbbc1103d4a9305 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-20T14:21:55Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='522deaab-a741-4dbb-932d-d8b13a211c33',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 20 14:28:26 compute-1 nova_compute[225855]: 2026-01-20 14:28:26.163 225859 DEBUG nova.virt.hardware [None req-2a504ad3-9dad-4a29-b7b7-86dde0c37d43 d4b36d8e19cb4f529d2185f573f5072a 2074a786307f4427bbbbc1103d4a9305 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 20 14:28:26 compute-1 nova_compute[225855]: 2026-01-20 14:28:26.164 225859 DEBUG nova.virt.hardware [None req-2a504ad3-9dad-4a29-b7b7-86dde0c37d43 d4b36d8e19cb4f529d2185f573f5072a 2074a786307f4427bbbbc1103d4a9305 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 20 14:28:26 compute-1 nova_compute[225855]: 2026-01-20 14:28:26.164 225859 DEBUG nova.virt.hardware [None req-2a504ad3-9dad-4a29-b7b7-86dde0c37d43 d4b36d8e19cb4f529d2185f573f5072a 2074a786307f4427bbbbc1103d4a9305 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 20 14:28:26 compute-1 nova_compute[225855]: 2026-01-20 14:28:26.164 225859 DEBUG nova.virt.hardware [None req-2a504ad3-9dad-4a29-b7b7-86dde0c37d43 d4b36d8e19cb4f529d2185f573f5072a 2074a786307f4427bbbbc1103d4a9305 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 20 14:28:26 compute-1 nova_compute[225855]: 2026-01-20 14:28:26.165 225859 DEBUG nova.virt.hardware [None req-2a504ad3-9dad-4a29-b7b7-86dde0c37d43 d4b36d8e19cb4f529d2185f573f5072a 2074a786307f4427bbbbc1103d4a9305 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 20 14:28:26 compute-1 nova_compute[225855]: 2026-01-20 14:28:26.165 225859 DEBUG nova.virt.hardware [None req-2a504ad3-9dad-4a29-b7b7-86dde0c37d43 d4b36d8e19cb4f529d2185f573f5072a 2074a786307f4427bbbbc1103d4a9305 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 20 14:28:26 compute-1 nova_compute[225855]: 2026-01-20 14:28:26.165 225859 DEBUG nova.virt.hardware [None req-2a504ad3-9dad-4a29-b7b7-86dde0c37d43 d4b36d8e19cb4f529d2185f573f5072a 2074a786307f4427bbbbc1103d4a9305 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 20 14:28:26 compute-1 nova_compute[225855]: 2026-01-20 14:28:26.166 225859 DEBUG nova.virt.hardware [None req-2a504ad3-9dad-4a29-b7b7-86dde0c37d43 d4b36d8e19cb4f529d2185f573f5072a 2074a786307f4427bbbbc1103d4a9305 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 20 14:28:26 compute-1 nova_compute[225855]: 2026-01-20 14:28:26.166 225859 DEBUG nova.virt.hardware [None req-2a504ad3-9dad-4a29-b7b7-86dde0c37d43 d4b36d8e19cb4f529d2185f573f5072a 2074a786307f4427bbbbc1103d4a9305 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 20 14:28:26 compute-1 nova_compute[225855]: 2026-01-20 14:28:26.166 225859 DEBUG nova.virt.hardware [None req-2a504ad3-9dad-4a29-b7b7-86dde0c37d43 d4b36d8e19cb4f529d2185f573f5072a 2074a786307f4427bbbbc1103d4a9305 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 20 14:28:26 compute-1 nova_compute[225855]: 2026-01-20 14:28:26.166 225859 DEBUG nova.objects.instance [None req-2a504ad3-9dad-4a29-b7b7-86dde0c37d43 d4b36d8e19cb4f529d2185f573f5072a 2074a786307f4427bbbbc1103d4a9305 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 29f0b4d4-abf0-46e7-bf67-38e71eb42e28 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:28:26 compute-1 nova_compute[225855]: 2026-01-20 14:28:26.203 225859 DEBUG oslo_concurrency.processutils [None req-2a504ad3-9dad-4a29-b7b7-86dde0c37d43 d4b36d8e19cb4f529d2185f573f5072a 2074a786307f4427bbbbc1103d4a9305 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:28:26 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:28:26 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:28:26 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:28:26.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:28:26 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:28:26 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/868074923' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:28:26 compute-1 nova_compute[225855]: 2026-01-20 14:28:26.658 225859 DEBUG oslo_concurrency.processutils [None req-2a504ad3-9dad-4a29-b7b7-86dde0c37d43 d4b36d8e19cb4f529d2185f573f5072a 2074a786307f4427bbbbc1103d4a9305 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:28:26 compute-1 nova_compute[225855]: 2026-01-20 14:28:26.694 225859 DEBUG oslo_concurrency.processutils [None req-2a504ad3-9dad-4a29-b7b7-86dde0c37d43 d4b36d8e19cb4f529d2185f573f5072a 2074a786307f4427bbbbc1103d4a9305 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:28:26 compute-1 ceph-mon[81775]: pgmap v1122: 321 pgs: 321 active+clean; 335 MiB data, 512 MiB used, 20 GiB / 21 GiB avail; 325 KiB/s rd, 2.2 MiB/s wr, 73 op/s
Jan 20 14:28:26 compute-1 ceph-mon[81775]: osdmap e153: 3 total, 3 up, 3 in
Jan 20 14:28:26 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/868074923' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:28:26 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:28:26 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:28:26 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:28:26.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:28:27 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:28:27 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2394368548' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:28:27 compute-1 nova_compute[225855]: 2026-01-20 14:28:27.116 225859 DEBUG oslo_concurrency.processutils [None req-2a504ad3-9dad-4a29-b7b7-86dde0c37d43 d4b36d8e19cb4f529d2185f573f5072a 2074a786307f4427bbbbc1103d4a9305 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.422s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:28:27 compute-1 nova_compute[225855]: 2026-01-20 14:28:27.119 225859 DEBUG nova.virt.libvirt.driver [None req-2a504ad3-9dad-4a29-b7b7-86dde0c37d43 d4b36d8e19cb4f529d2185f573f5072a 2074a786307f4427bbbbc1103d4a9305 - - default default] [instance: 29f0b4d4-abf0-46e7-bf67-38e71eb42e28] End _get_guest_xml xml=<domain type="kvm">
Jan 20 14:28:27 compute-1 nova_compute[225855]:   <uuid>29f0b4d4-abf0-46e7-bf67-38e71eb42e28</uuid>
Jan 20 14:28:27 compute-1 nova_compute[225855]:   <name>instance-00000016</name>
Jan 20 14:28:27 compute-1 nova_compute[225855]:   <memory>131072</memory>
Jan 20 14:28:27 compute-1 nova_compute[225855]:   <vcpu>1</vcpu>
Jan 20 14:28:27 compute-1 nova_compute[225855]:   <metadata>
Jan 20 14:28:27 compute-1 nova_compute[225855]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 14:28:27 compute-1 nova_compute[225855]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 14:28:27 compute-1 nova_compute[225855]:       <nova:name>tempest-MigrationsAdminTest-server-920976466</nova:name>
Jan 20 14:28:27 compute-1 nova_compute[225855]:       <nova:creationTime>2026-01-20 14:28:26</nova:creationTime>
Jan 20 14:28:27 compute-1 nova_compute[225855]:       <nova:flavor name="m1.nano">
Jan 20 14:28:27 compute-1 nova_compute[225855]:         <nova:memory>128</nova:memory>
Jan 20 14:28:27 compute-1 nova_compute[225855]:         <nova:disk>1</nova:disk>
Jan 20 14:28:27 compute-1 nova_compute[225855]:         <nova:swap>0</nova:swap>
Jan 20 14:28:27 compute-1 nova_compute[225855]:         <nova:ephemeral>0</nova:ephemeral>
Jan 20 14:28:27 compute-1 nova_compute[225855]:         <nova:vcpus>1</nova:vcpus>
Jan 20 14:28:27 compute-1 nova_compute[225855]:       </nova:flavor>
Jan 20 14:28:27 compute-1 nova_compute[225855]:       <nova:owner>
Jan 20 14:28:27 compute-1 nova_compute[225855]:         <nova:user uuid="01a3d712f05049b19d4ecc7051720ad5">tempest-MigrationsAdminTest-1518611738-project-member</nova:user>
Jan 20 14:28:27 compute-1 nova_compute[225855]:         <nova:project uuid="f3c2e72a7148496394c8bcd618a19c80">tempest-MigrationsAdminTest-1518611738</nova:project>
Jan 20 14:28:27 compute-1 nova_compute[225855]:       </nova:owner>
Jan 20 14:28:27 compute-1 nova_compute[225855]:       <nova:root type="image" uuid="a32b3e07-16d8-46fd-9a7b-c242c432fcf9"/>
Jan 20 14:28:27 compute-1 nova_compute[225855]:       <nova:ports/>
Jan 20 14:28:27 compute-1 nova_compute[225855]:     </nova:instance>
Jan 20 14:28:27 compute-1 nova_compute[225855]:   </metadata>
Jan 20 14:28:27 compute-1 nova_compute[225855]:   <sysinfo type="smbios">
Jan 20 14:28:27 compute-1 nova_compute[225855]:     <system>
Jan 20 14:28:27 compute-1 nova_compute[225855]:       <entry name="manufacturer">RDO</entry>
Jan 20 14:28:27 compute-1 nova_compute[225855]:       <entry name="product">OpenStack Compute</entry>
Jan 20 14:28:27 compute-1 nova_compute[225855]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 14:28:27 compute-1 nova_compute[225855]:       <entry name="serial">29f0b4d4-abf0-46e7-bf67-38e71eb42e28</entry>
Jan 20 14:28:27 compute-1 nova_compute[225855]:       <entry name="uuid">29f0b4d4-abf0-46e7-bf67-38e71eb42e28</entry>
Jan 20 14:28:27 compute-1 nova_compute[225855]:       <entry name="family">Virtual Machine</entry>
Jan 20 14:28:27 compute-1 nova_compute[225855]:     </system>
Jan 20 14:28:27 compute-1 nova_compute[225855]:   </sysinfo>
Jan 20 14:28:27 compute-1 nova_compute[225855]:   <os>
Jan 20 14:28:27 compute-1 nova_compute[225855]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 20 14:28:27 compute-1 nova_compute[225855]:     <boot dev="hd"/>
Jan 20 14:28:27 compute-1 nova_compute[225855]:     <smbios mode="sysinfo"/>
Jan 20 14:28:27 compute-1 nova_compute[225855]:   </os>
Jan 20 14:28:27 compute-1 nova_compute[225855]:   <features>
Jan 20 14:28:27 compute-1 nova_compute[225855]:     <acpi/>
Jan 20 14:28:27 compute-1 nova_compute[225855]:     <apic/>
Jan 20 14:28:27 compute-1 nova_compute[225855]:     <vmcoreinfo/>
Jan 20 14:28:27 compute-1 nova_compute[225855]:   </features>
Jan 20 14:28:27 compute-1 nova_compute[225855]:   <clock offset="utc">
Jan 20 14:28:27 compute-1 nova_compute[225855]:     <timer name="pit" tickpolicy="delay"/>
Jan 20 14:28:27 compute-1 nova_compute[225855]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 20 14:28:27 compute-1 nova_compute[225855]:     <timer name="hpet" present="no"/>
Jan 20 14:28:27 compute-1 nova_compute[225855]:   </clock>
Jan 20 14:28:27 compute-1 nova_compute[225855]:   <cpu mode="custom" match="exact">
Jan 20 14:28:27 compute-1 nova_compute[225855]:     <model>Nehalem</model>
Jan 20 14:28:27 compute-1 nova_compute[225855]:     <topology sockets="1" cores="1" threads="1"/>
Jan 20 14:28:27 compute-1 nova_compute[225855]:   </cpu>
Jan 20 14:28:27 compute-1 nova_compute[225855]:   <devices>
Jan 20 14:28:27 compute-1 nova_compute[225855]:     <disk type="network" device="disk">
Jan 20 14:28:27 compute-1 nova_compute[225855]:       <driver type="raw" cache="none"/>
Jan 20 14:28:27 compute-1 nova_compute[225855]:       <source protocol="rbd" name="vms/29f0b4d4-abf0-46e7-bf67-38e71eb42e28_disk">
Jan 20 14:28:27 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:28:27 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:28:27 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:28:27 compute-1 nova_compute[225855]:       </source>
Jan 20 14:28:27 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 14:28:27 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:28:27 compute-1 nova_compute[225855]:       </auth>
Jan 20 14:28:27 compute-1 nova_compute[225855]:       <target dev="vda" bus="virtio"/>
Jan 20 14:28:27 compute-1 nova_compute[225855]:     </disk>
Jan 20 14:28:27 compute-1 nova_compute[225855]:     <disk type="network" device="cdrom">
Jan 20 14:28:27 compute-1 nova_compute[225855]:       <driver type="raw" cache="none"/>
Jan 20 14:28:27 compute-1 nova_compute[225855]:       <source protocol="rbd" name="vms/29f0b4d4-abf0-46e7-bf67-38e71eb42e28_disk.config">
Jan 20 14:28:27 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:28:27 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:28:27 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:28:27 compute-1 nova_compute[225855]:       </source>
Jan 20 14:28:27 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 14:28:27 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:28:27 compute-1 nova_compute[225855]:       </auth>
Jan 20 14:28:27 compute-1 nova_compute[225855]:       <target dev="sda" bus="sata"/>
Jan 20 14:28:27 compute-1 nova_compute[225855]:     </disk>
Jan 20 14:28:27 compute-1 nova_compute[225855]:     <serial type="pty">
Jan 20 14:28:27 compute-1 nova_compute[225855]:       <log file="/var/lib/nova/instances/29f0b4d4-abf0-46e7-bf67-38e71eb42e28/console.log" append="off"/>
Jan 20 14:28:27 compute-1 nova_compute[225855]:     </serial>
Jan 20 14:28:27 compute-1 nova_compute[225855]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 20 14:28:27 compute-1 nova_compute[225855]:     <video>
Jan 20 14:28:27 compute-1 nova_compute[225855]:       <model type="virtio"/>
Jan 20 14:28:27 compute-1 nova_compute[225855]:     </video>
Jan 20 14:28:27 compute-1 nova_compute[225855]:     <input type="tablet" bus="usb"/>
Jan 20 14:28:27 compute-1 nova_compute[225855]:     <rng model="virtio">
Jan 20 14:28:27 compute-1 nova_compute[225855]:       <backend model="random">/dev/urandom</backend>
Jan 20 14:28:27 compute-1 nova_compute[225855]:     </rng>
Jan 20 14:28:27 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root"/>
Jan 20 14:28:27 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:28:27 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:28:27 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:28:27 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:28:27 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:28:27 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:28:27 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:28:27 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:28:27 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:28:27 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:28:27 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:28:27 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:28:27 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:28:27 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:28:27 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:28:27 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:28:27 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:28:27 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:28:27 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:28:27 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:28:27 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:28:27 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:28:27 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:28:27 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:28:27 compute-1 nova_compute[225855]:     <controller type="usb" index="0"/>
Jan 20 14:28:27 compute-1 nova_compute[225855]:     <memballoon model="virtio">
Jan 20 14:28:27 compute-1 nova_compute[225855]:       <stats period="10"/>
Jan 20 14:28:27 compute-1 nova_compute[225855]:     </memballoon>
Jan 20 14:28:27 compute-1 nova_compute[225855]:   </devices>
Jan 20 14:28:27 compute-1 nova_compute[225855]: </domain>
Jan 20 14:28:27 compute-1 nova_compute[225855]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 20 14:28:27 compute-1 nova_compute[225855]: 2026-01-20 14:28:27.250 225859 DEBUG nova.virt.libvirt.driver [None req-2a504ad3-9dad-4a29-b7b7-86dde0c37d43 d4b36d8e19cb4f529d2185f573f5072a 2074a786307f4427bbbbc1103d4a9305 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:28:27 compute-1 nova_compute[225855]: 2026-01-20 14:28:27.250 225859 DEBUG nova.virt.libvirt.driver [None req-2a504ad3-9dad-4a29-b7b7-86dde0c37d43 d4b36d8e19cb4f529d2185f573f5072a 2074a786307f4427bbbbc1103d4a9305 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:28:27 compute-1 nova_compute[225855]: 2026-01-20 14:28:27.251 225859 INFO nova.virt.libvirt.driver [None req-2a504ad3-9dad-4a29-b7b7-86dde0c37d43 d4b36d8e19cb4f529d2185f573f5072a 2074a786307f4427bbbbc1103d4a9305 - - default default] [instance: 29f0b4d4-abf0-46e7-bf67-38e71eb42e28] Using config drive
Jan 20 14:28:27 compute-1 systemd-machined[194361]: New machine qemu-10-instance-00000016.
Jan 20 14:28:27 compute-1 ceph-mgr[82135]: client.0 ms_handle_reset on v2:192.168.122.100:6800/2542147622
Jan 20 14:28:27 compute-1 systemd[1]: Started Virtual Machine qemu-10-instance-00000016.
Jan 20 14:28:27 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/2394368548' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:28:27 compute-1 ceph-mon[81775]: pgmap v1124: 321 pgs: 321 active+clean; 267 MiB data, 473 MiB used, 21 GiB / 21 GiB avail; 2.1 MiB/s rd, 1.3 MiB/s wr, 178 op/s
Jan 20 14:28:28 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:28:28 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:28:28 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:28:28.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:28:28 compute-1 nova_compute[225855]: 2026-01-20 14:28:28.412 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768919308.4120557, 29f0b4d4-abf0-46e7-bf67-38e71eb42e28 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:28:28 compute-1 nova_compute[225855]: 2026-01-20 14:28:28.413 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 29f0b4d4-abf0-46e7-bf67-38e71eb42e28] VM Resumed (Lifecycle Event)
Jan 20 14:28:28 compute-1 nova_compute[225855]: 2026-01-20 14:28:28.416 225859 DEBUG nova.compute.manager [None req-2a504ad3-9dad-4a29-b7b7-86dde0c37d43 d4b36d8e19cb4f529d2185f573f5072a 2074a786307f4427bbbbc1103d4a9305 - - default default] [instance: 29f0b4d4-abf0-46e7-bf67-38e71eb42e28] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 20 14:28:28 compute-1 nova_compute[225855]: 2026-01-20 14:28:28.420 225859 INFO nova.virt.libvirt.driver [-] [instance: 29f0b4d4-abf0-46e7-bf67-38e71eb42e28] Instance running successfully.
Jan 20 14:28:28 compute-1 virtqemud[225396]: argument unsupported: QEMU guest agent is not configured
Jan 20 14:28:28 compute-1 nova_compute[225855]: 2026-01-20 14:28:28.423 225859 DEBUG nova.virt.libvirt.guest [None req-2a504ad3-9dad-4a29-b7b7-86dde0c37d43 d4b36d8e19cb4f529d2185f573f5072a 2074a786307f4427bbbbc1103d4a9305 - - default default] [instance: 29f0b4d4-abf0-46e7-bf67-38e71eb42e28] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200
Jan 20 14:28:28 compute-1 nova_compute[225855]: 2026-01-20 14:28:28.423 225859 DEBUG nova.virt.libvirt.driver [None req-2a504ad3-9dad-4a29-b7b7-86dde0c37d43 d4b36d8e19cb4f529d2185f573f5072a 2074a786307f4427bbbbc1103d4a9305 - - default default] [instance: 29f0b4d4-abf0-46e7-bf67-38e71eb42e28] finish_migration finished successfully. finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11793
Jan 20 14:28:28 compute-1 nova_compute[225855]: 2026-01-20 14:28:28.481 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 29f0b4d4-abf0-46e7-bf67-38e71eb42e28] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:28:28 compute-1 nova_compute[225855]: 2026-01-20 14:28:28.486 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:28:28 compute-1 nova_compute[225855]: 2026-01-20 14:28:28.490 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 29f0b4d4-abf0-46e7-bf67-38e71eb42e28] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:28:28 compute-1 nova_compute[225855]: 2026-01-20 14:28:28.535 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 29f0b4d4-abf0-46e7-bf67-38e71eb42e28] During sync_power_state the instance has a pending task (resize_finish). Skip.
Jan 20 14:28:28 compute-1 nova_compute[225855]: 2026-01-20 14:28:28.535 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768919308.413144, 29f0b4d4-abf0-46e7-bf67-38e71eb42e28 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:28:28 compute-1 nova_compute[225855]: 2026-01-20 14:28:28.536 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 29f0b4d4-abf0-46e7-bf67-38e71eb42e28] VM Started (Lifecycle Event)
Jan 20 14:28:28 compute-1 nova_compute[225855]: 2026-01-20 14:28:28.559 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 29f0b4d4-abf0-46e7-bf67-38e71eb42e28] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:28:28 compute-1 nova_compute[225855]: 2026-01-20 14:28:28.564 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 29f0b4d4-abf0-46e7-bf67-38e71eb42e28] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:28:28 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:28:28 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:28:28 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:28:28.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:28:29 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:28:29 compute-1 ceph-mon[81775]: pgmap v1125: 321 pgs: 321 active+clean; 267 MiB data, 473 MiB used, 21 GiB / 21 GiB avail; 2.5 MiB/s rd, 1.1 MiB/s wr, 184 op/s
Jan 20 14:28:30 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:28:30 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:28:30 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:28:30.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:28:30 compute-1 nova_compute[225855]: 2026-01-20 14:28:30.498 225859 DEBUG nova.virt.libvirt.driver [None req-316b633b-1626-487f-b686-649c0e42886e f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] [instance: 79b5596e-43c9-4085-9829-454fecf59490] Creating tmpfile /var/lib/nova/instances/tmpt3smbf1a to notify to other compute nodes that they should mount the same storage. _create_shared_storage_test_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10041
Jan 20 14:28:30 compute-1 nova_compute[225855]: 2026-01-20 14:28:30.499 225859 DEBUG nova.compute.manager [None req-316b633b-1626-487f-b686-649c0e42886e f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] destination check data is LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=19456,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpt3smbf1a',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path=<?>,is_shared_block_storage=<?>,is_shared_instance_path=<?>,is_volume_backed=<?>,migration=<?>,old_vol_attachment_ids=<?>,serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) check_can_live_migrate_destination /usr/lib/python3.9/site-packages/nova/compute/manager.py:8476
Jan 20 14:28:30 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e154 e154: 3 total, 3 up, 3 in
Jan 20 14:28:30 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:28:30 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:28:30 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:28:30.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:28:31 compute-1 nova_compute[225855]: 2026-01-20 14:28:31.088 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:28:31 compute-1 podman[237179]: 2026-01-20 14:28:31.106423539 +0000 UTC m=+0.134284096 container health_status 72be95c12e041eb2cd1a16ad70dfa1391d9f03a5f3180a614ead9459dcfcd46f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 20 14:28:31 compute-1 ceph-mon[81775]: osdmap e154: 3 total, 3 up, 3 in
Jan 20 14:28:31 compute-1 ceph-mon[81775]: pgmap v1127: 321 pgs: 321 active+clean; 267 MiB data, 473 MiB used, 21 GiB / 21 GiB avail; 5.3 MiB/s rd, 23 KiB/s wr, 276 op/s
Jan 20 14:28:32 compute-1 nova_compute[225855]: 2026-01-20 14:28:32.004 225859 DEBUG nova.compute.manager [None req-316b633b-1626-487f-b686-649c0e42886e f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] pre_live_migration data is LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=19456,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpt3smbf1a',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='79b5596e-43c9-4085-9829-454fecf59490',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=True,migration=<?>,old_vol_attachment_ids=<?>,serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8604
Jan 20 14:28:32 compute-1 nova_compute[225855]: 2026-01-20 14:28:32.031 225859 DEBUG oslo_concurrency.lockutils [None req-316b633b-1626-487f-b686-649c0e42886e f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] Acquiring lock "refresh_cache-79b5596e-43c9-4085-9829-454fecf59490" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:28:32 compute-1 nova_compute[225855]: 2026-01-20 14:28:32.032 225859 DEBUG oslo_concurrency.lockutils [None req-316b633b-1626-487f-b686-649c0e42886e f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] Acquired lock "refresh_cache-79b5596e-43c9-4085-9829-454fecf59490" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:28:32 compute-1 nova_compute[225855]: 2026-01-20 14:28:32.032 225859 DEBUG nova.network.neutron [None req-316b633b-1626-487f-b686-649c0e42886e f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] [instance: 79b5596e-43c9-4085-9829-454fecf59490] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 14:28:32 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:28:32 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:28:32 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:28:32.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:28:32 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:28:32 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:28:32 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:28:32.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:28:33 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e155 e155: 3 total, 3 up, 3 in
Jan 20 14:28:33 compute-1 nova_compute[225855]: 2026-01-20 14:28:33.485 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:28:33 compute-1 nova_compute[225855]: 2026-01-20 14:28:33.962 225859 DEBUG nova.network.neutron [None req-316b633b-1626-487f-b686-649c0e42886e f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] [instance: 79b5596e-43c9-4085-9829-454fecf59490] Updating instance_info_cache with network_info: [{"id": "bd002580-dd95-49e1-bc34-e85f86272a05", "address": "fa:16:3e:24:ce:0d", "network": {"id": "14f18b27-1594-48d8-a08b-a930f7adbc08", "bridge": "br-int", "label": "tempest-LiveMigrationTest-2126108622-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d15f60b9e48e4175b5520d1e57ed2d3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbd002580-dd", "ovs_interfaceid": "bd002580-dd95-49e1-bc34-e85f86272a05", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:28:34 compute-1 nova_compute[225855]: 2026-01-20 14:28:34.010 225859 DEBUG oslo_concurrency.lockutils [None req-316b633b-1626-487f-b686-649c0e42886e f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] Releasing lock "refresh_cache-79b5596e-43c9-4085-9829-454fecf59490" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:28:34 compute-1 nova_compute[225855]: 2026-01-20 14:28:34.013 225859 DEBUG os_brick.utils [None req-316b633b-1626-487f-b686-649c0e42886e f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.101', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-1.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Jan 20 14:28:34 compute-1 nova_compute[225855]: 2026-01-20 14:28:34.014 231081 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:28:34 compute-1 nova_compute[225855]: 2026-01-20 14:28:34.032 231081 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.018s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:28:34 compute-1 nova_compute[225855]: 2026-01-20 14:28:34.033 231081 DEBUG oslo.privsep.daemon [-] privsep: reply[ebe20fb8-3ba3-4214-b0dd-0aa3d6eb0cde]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:28:34 compute-1 nova_compute[225855]: 2026-01-20 14:28:34.035 231081 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:28:34 compute-1 nova_compute[225855]: 2026-01-20 14:28:34.047 231081 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:28:34 compute-1 nova_compute[225855]: 2026-01-20 14:28:34.047 231081 DEBUG oslo.privsep.daemon [-] privsep: reply[2d90f998-6a7d-40bb-aec6-1d8d05dd9b97]: (4, ('InitiatorName=iqn.1994-05.com.redhat:1821ea3dc03d', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:28:34 compute-1 nova_compute[225855]: 2026-01-20 14:28:34.050 231081 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:28:34 compute-1 nova_compute[225855]: 2026-01-20 14:28:34.065 231081 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.015s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:28:34 compute-1 nova_compute[225855]: 2026-01-20 14:28:34.065 231081 DEBUG oslo.privsep.daemon [-] privsep: reply[2fd1411a-a593-4dce-a931-ff32f81453e0]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:28:34 compute-1 nova_compute[225855]: 2026-01-20 14:28:34.067 231081 DEBUG oslo.privsep.daemon [-] privsep: reply[1941289a-db24-48c4-8a46-04ef1da0976b]: (4, '870b1f1c-f19c-477b-b282-ee6eeba50974') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:28:34 compute-1 nova_compute[225855]: 2026-01-20 14:28:34.068 225859 DEBUG oslo_concurrency.processutils [None req-316b633b-1626-487f-b686-649c0e42886e f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:28:34 compute-1 ceph-mon[81775]: osdmap e155: 3 total, 3 up, 3 in
Jan 20 14:28:34 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/2290410301' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 14:28:34 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/2290410301' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 14:28:34 compute-1 ceph-mon[81775]: pgmap v1129: 321 pgs: 321 active+clean; 267 MiB data, 473 MiB used, 21 GiB / 21 GiB avail; 4.2 MiB/s rd, 22 KiB/s wr, 230 op/s
Jan 20 14:28:34 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/1026837311' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:28:34 compute-1 nova_compute[225855]: 2026-01-20 14:28:34.108 225859 DEBUG oslo_concurrency.processutils [None req-316b633b-1626-487f-b686-649c0e42886e f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] CMD "nvme version" returned: 0 in 0.040s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:28:34 compute-1 nova_compute[225855]: 2026-01-20 14:28:34.111 225859 DEBUG os_brick.initiator.connectors.lightos [None req-316b633b-1626-487f-b686-649c0e42886e f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Jan 20 14:28:34 compute-1 nova_compute[225855]: 2026-01-20 14:28:34.112 225859 DEBUG os_brick.initiator.connectors.lightos [None req-316b633b-1626-487f-b686-649c0e42886e f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Jan 20 14:28:34 compute-1 nova_compute[225855]: 2026-01-20 14:28:34.112 225859 DEBUG os_brick.initiator.connectors.lightos [None req-316b633b-1626-487f-b686-649c0e42886e f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:5350774e-8b5e-4dba-80a9-92d405981c1d dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Jan 20 14:28:34 compute-1 nova_compute[225855]: 2026-01-20 14:28:34.113 225859 DEBUG os_brick.utils [None req-316b633b-1626-487f-b686-649c0e42886e f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] <== get_connector_properties: return (99ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.101', 'host': 'compute-1.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:1821ea3dc03d', 'do_local_attach': False, 'nvme_hostid': '5350774e-8b5e-4dba-80a9-92d405981c1d', 'system uuid': '870b1f1c-f19c-477b-b282-ee6eeba50974', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:5350774e-8b5e-4dba-80a9-92d405981c1d', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Jan 20 14:28:34 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:28:34 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:28:34 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:28:34.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:28:34 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:28:34 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:28:34 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:28:34 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:28:34.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:28:35 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/1979035454' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:28:35 compute-1 nova_compute[225855]: 2026-01-20 14:28:35.392 225859 DEBUG nova.virt.libvirt.driver [None req-316b633b-1626-487f-b686-649c0e42886e f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] [instance: 79b5596e-43c9-4085-9829-454fecf59490] migrate_data in pre_live_migration: LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=19456,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpt3smbf1a',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='79b5596e-43c9-4085-9829-454fecf59490',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=True,migration=<?>,old_vol_attachment_ids={47e883f3-6efe-40b3-be28-6c01525dfc0c='9eb63166-9838-4b2e-9a3b-635bb42864f1'},serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10827
Jan 20 14:28:35 compute-1 nova_compute[225855]: 2026-01-20 14:28:35.394 225859 DEBUG nova.virt.libvirt.driver [None req-316b633b-1626-487f-b686-649c0e42886e f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] [instance: 79b5596e-43c9-4085-9829-454fecf59490] Creating instance directory: /var/lib/nova/instances/79b5596e-43c9-4085-9829-454fecf59490 pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10840
Jan 20 14:28:35 compute-1 nova_compute[225855]: 2026-01-20 14:28:35.395 225859 DEBUG nova.virt.libvirt.driver [None req-316b633b-1626-487f-b686-649c0e42886e f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] [instance: 79b5596e-43c9-4085-9829-454fecf59490] Ensure instance console log exists: /var/lib/nova/instances/79b5596e-43c9-4085-9829-454fecf59490/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 20 14:28:35 compute-1 nova_compute[225855]: 2026-01-20 14:28:35.395 225859 DEBUG nova.virt.libvirt.driver [None req-316b633b-1626-487f-b686-649c0e42886e f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] [instance: 79b5596e-43c9-4085-9829-454fecf59490] Connecting volumes before live migration. pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10901
Jan 20 14:28:35 compute-1 nova_compute[225855]: 2026-01-20 14:28:35.399 225859 DEBUG nova.virt.libvirt.driver [None req-316b633b-1626-487f-b686-649c0e42886e f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] [instance: 79b5596e-43c9-4085-9829-454fecf59490] Plugging VIFs using destination host port bindings before live migration. _pre_live_migration_plug_vifs /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10794
Jan 20 14:28:35 compute-1 nova_compute[225855]: 2026-01-20 14:28:35.401 225859 DEBUG nova.virt.libvirt.vif [None req-316b633b-1626-487f-b686-649c0e42886e f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T14:28:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-LiveMigrationTest-server-1483268234',display_name='tempest-LiveMigrationTest-server-1483268234',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-livemigrationtest-server-1483268234',id=23,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-20T14:28:24Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='d15f60b9e48e4175b5520d1e57ed2d3a',ramdisk_id='',reservation_id='r-jglb1q09',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-LiveMigrationTest-864280704',owner_user_name='tempest-LiveMigrationTest-864280704-project-member'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T14:28:24Z,user_data=None,user_id='bce7fcbd19554e29bb80c5b93b7dd3c9',uuid=79b5596e-43c9-4085-9829-454fecf59490,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "bd002580-dd95-49e1-bc34-e85f86272a05", "address": "fa:16:3e:24:ce:0d", "network": {"id": "14f18b27-1594-48d8-a08b-a930f7adbc08", "bridge": "br-int", "label": "tempest-LiveMigrationTest-2126108622-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d15f60b9e48e4175b5520d1e57ed2d3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tapbd002580-dd", "ovs_interfaceid": "bd002580-dd95-49e1-bc34-e85f86272a05", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 20 14:28:35 compute-1 nova_compute[225855]: 2026-01-20 14:28:35.401 225859 DEBUG nova.network.os_vif_util [None req-316b633b-1626-487f-b686-649c0e42886e f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] Converting VIF {"id": "bd002580-dd95-49e1-bc34-e85f86272a05", "address": "fa:16:3e:24:ce:0d", "network": {"id": "14f18b27-1594-48d8-a08b-a930f7adbc08", "bridge": "br-int", "label": "tempest-LiveMigrationTest-2126108622-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d15f60b9e48e4175b5520d1e57ed2d3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tapbd002580-dd", "ovs_interfaceid": "bd002580-dd95-49e1-bc34-e85f86272a05", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:28:35 compute-1 nova_compute[225855]: 2026-01-20 14:28:35.403 225859 DEBUG nova.network.os_vif_util [None req-316b633b-1626-487f-b686-649c0e42886e f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:24:ce:0d,bridge_name='br-int',has_traffic_filtering=True,id=bd002580-dd95-49e1-bc34-e85f86272a05,network=Network(14f18b27-1594-48d8-a08b-a930f7adbc08),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbd002580-dd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:28:35 compute-1 nova_compute[225855]: 2026-01-20 14:28:35.404 225859 DEBUG os_vif [None req-316b633b-1626-487f-b686-649c0e42886e f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:24:ce:0d,bridge_name='br-int',has_traffic_filtering=True,id=bd002580-dd95-49e1-bc34-e85f86272a05,network=Network(14f18b27-1594-48d8-a08b-a930f7adbc08),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbd002580-dd') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 20 14:28:35 compute-1 nova_compute[225855]: 2026-01-20 14:28:35.405 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:28:35 compute-1 nova_compute[225855]: 2026-01-20 14:28:35.405 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:28:35 compute-1 nova_compute[225855]: 2026-01-20 14:28:35.407 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:28:35 compute-1 nova_compute[225855]: 2026-01-20 14:28:35.410 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:28:35 compute-1 nova_compute[225855]: 2026-01-20 14:28:35.411 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbd002580-dd, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:28:35 compute-1 nova_compute[225855]: 2026-01-20 14:28:35.412 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapbd002580-dd, col_values=(('external_ids', {'iface-id': 'bd002580-dd95-49e1-bc34-e85f86272a05', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:24:ce:0d', 'vm-uuid': '79b5596e-43c9-4085-9829-454fecf59490'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:28:35 compute-1 nova_compute[225855]: 2026-01-20 14:28:35.414 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:28:35 compute-1 NetworkManager[49104]: <info>  [1768919315.4152] manager: (tapbd002580-dd): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/49)
Jan 20 14:28:35 compute-1 nova_compute[225855]: 2026-01-20 14:28:35.417 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 14:28:35 compute-1 nova_compute[225855]: 2026-01-20 14:28:35.420 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:28:35 compute-1 nova_compute[225855]: 2026-01-20 14:28:35.421 225859 INFO os_vif [None req-316b633b-1626-487f-b686-649c0e42886e f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:24:ce:0d,bridge_name='br-int',has_traffic_filtering=True,id=bd002580-dd95-49e1-bc34-e85f86272a05,network=Network(14f18b27-1594-48d8-a08b-a930f7adbc08),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbd002580-dd')
Jan 20 14:28:35 compute-1 nova_compute[225855]: 2026-01-20 14:28:35.426 225859 DEBUG nova.virt.libvirt.driver [None req-316b633b-1626-487f-b686-649c0e42886e f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] No dst_numa_info in migrate_data, no cores to power up in pre_live_migration. pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10954
Jan 20 14:28:35 compute-1 nova_compute[225855]: 2026-01-20 14:28:35.427 225859 DEBUG nova.compute.manager [None req-316b633b-1626-487f-b686-649c0e42886e f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] driver pre_live_migration data is LibvirtLiveMigrateData(bdms=[LibvirtLiveMigrateBDMInfo],block_migration=False,disk_available_mb=19456,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpt3smbf1a',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='79b5596e-43c9-4085-9829-454fecf59490',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=True,migration=<?>,old_vol_attachment_ids={47e883f3-6efe-40b3-be28-6c01525dfc0c='9eb63166-9838-4b2e-9a3b-635bb42864f1'},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8668
Jan 20 14:28:36 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:28:36 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:28:36 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:28:36.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:28:36 compute-1 ceph-mon[81775]: pgmap v1130: 321 pgs: 321 active+clean; 262 MiB data, 469 MiB used, 21 GiB / 21 GiB avail; 3.6 MiB/s rd, 2.2 KiB/s wr, 178 op/s
Jan 20 14:28:36 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:28:36 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:28:36 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:28:36.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:28:37 compute-1 nova_compute[225855]: 2026-01-20 14:28:37.452 225859 DEBUG nova.network.neutron [None req-316b633b-1626-487f-b686-649c0e42886e f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] [instance: 79b5596e-43c9-4085-9829-454fecf59490] Port bd002580-dd95-49e1-bc34-e85f86272a05 updated with migration profile {'migrating_to': 'compute-1.ctlplane.example.com'} successfully _setup_migration_port_profile /usr/lib/python3.9/site-packages/nova/network/neutron.py:354
Jan 20 14:28:38 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:28:38 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:28:38 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:28:38.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:28:38 compute-1 nova_compute[225855]: 2026-01-20 14:28:38.520 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:28:38 compute-1 nova_compute[225855]: 2026-01-20 14:28:38.604 225859 DEBUG nova.compute.manager [None req-316b633b-1626-487f-b686-649c0e42886e f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] pre_live_migration result data is LibvirtLiveMigrateData(bdms=[LibvirtLiveMigrateBDMInfo],block_migration=False,disk_available_mb=19456,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpt3smbf1a',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='79b5596e-43c9-4085-9829-454fecf59490',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=True,migration=<?>,old_vol_attachment_ids={47e883f3-6efe-40b3-be28-6c01525dfc0c='9eb63166-9838-4b2e-9a3b-635bb42864f1'},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=True) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8723
Jan 20 14:28:38 compute-1 ceph-mon[81775]: pgmap v1131: 321 pgs: 321 active+clean; 246 MiB data, 453 MiB used, 21 GiB / 21 GiB avail; 2.9 MiB/s rd, 2.9 KiB/s wr, 175 op/s
Jan 20 14:28:38 compute-1 sudo[237219]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:28:38 compute-1 sudo[237219]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:28:38 compute-1 sudo[237219]: pam_unix(sudo:session): session closed for user root
Jan 20 14:28:38 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:28:38 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:28:38 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:28:38.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:28:38 compute-1 sudo[237244]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:28:38 compute-1 sudo[237244]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:28:38 compute-1 sudo[237244]: pam_unix(sudo:session): session closed for user root
Jan 20 14:28:39 compute-1 kernel: tapbd002580-dd: entered promiscuous mode
Jan 20 14:28:39 compute-1 NetworkManager[49104]: <info>  [1768919319.1852] manager: (tapbd002580-dd): new Tun device (/org/freedesktop/NetworkManager/Devices/50)
Jan 20 14:28:39 compute-1 nova_compute[225855]: 2026-01-20 14:28:39.187 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:28:39 compute-1 ovn_controller[130490]: 2026-01-20T14:28:39Z|00092|binding|INFO|Claiming lport bd002580-dd95-49e1-bc34-e85f86272a05 for this additional chassis.
Jan 20 14:28:39 compute-1 ovn_controller[130490]: 2026-01-20T14:28:39Z|00093|binding|INFO|bd002580-dd95-49e1-bc34-e85f86272a05: Claiming fa:16:3e:24:ce:0d 10.100.0.10
Jan 20 14:28:39 compute-1 ovn_controller[130490]: 2026-01-20T14:28:39Z|00094|binding|INFO|Setting lport bd002580-dd95-49e1-bc34-e85f86272a05 ovn-installed in OVS
Jan 20 14:28:39 compute-1 nova_compute[225855]: 2026-01-20 14:28:39.209 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:28:39 compute-1 nova_compute[225855]: 2026-01-20 14:28:39.213 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:28:39 compute-1 systemd-udevd[237280]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 14:28:39 compute-1 NetworkManager[49104]: <info>  [1768919319.2322] device (tapbd002580-dd): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 20 14:28:39 compute-1 NetworkManager[49104]: <info>  [1768919319.2330] device (tapbd002580-dd): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 20 14:28:39 compute-1 systemd-machined[194361]: New machine qemu-11-instance-00000017.
Jan 20 14:28:39 compute-1 systemd[1]: Started Virtual Machine qemu-11-instance-00000017.
Jan 20 14:28:39 compute-1 nova_compute[225855]: 2026-01-20 14:28:39.749 225859 DEBUG oslo_concurrency.lockutils [None req-f779cab0-8c82-4d01-896e-0ee4904b5f0b 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Acquiring lock "87fe16d6-774e-4002-8df4-9eb202621ab9" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:28:39 compute-1 nova_compute[225855]: 2026-01-20 14:28:39.751 225859 DEBUG oslo_concurrency.lockutils [None req-f779cab0-8c82-4d01-896e-0ee4904b5f0b 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Lock "87fe16d6-774e-4002-8df4-9eb202621ab9" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:28:39 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:28:39 compute-1 nova_compute[225855]: 2026-01-20 14:28:39.774 225859 DEBUG nova.compute.manager [None req-f779cab0-8c82-4d01-896e-0ee4904b5f0b 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] [instance: 87fe16d6-774e-4002-8df4-9eb202621ab9] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 20 14:28:39 compute-1 ceph-mon[81775]: pgmap v1132: 321 pgs: 321 active+clean; 250 MiB data, 456 MiB used, 21 GiB / 21 GiB avail; 561 KiB/s rd, 372 KiB/s wr, 90 op/s
Jan 20 14:28:39 compute-1 nova_compute[225855]: 2026-01-20 14:28:39.876 225859 DEBUG oslo_concurrency.lockutils [None req-f779cab0-8c82-4d01-896e-0ee4904b5f0b 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:28:39 compute-1 nova_compute[225855]: 2026-01-20 14:28:39.877 225859 DEBUG oslo_concurrency.lockutils [None req-f779cab0-8c82-4d01-896e-0ee4904b5f0b 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:28:39 compute-1 nova_compute[225855]: 2026-01-20 14:28:39.885 225859 DEBUG nova.virt.hardware [None req-f779cab0-8c82-4d01-896e-0ee4904b5f0b 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 20 14:28:39 compute-1 nova_compute[225855]: 2026-01-20 14:28:39.885 225859 INFO nova.compute.claims [None req-f779cab0-8c82-4d01-896e-0ee4904b5f0b 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] [instance: 87fe16d6-774e-4002-8df4-9eb202621ab9] Claim successful on node compute-1.ctlplane.example.com
Jan 20 14:28:39 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:28:39.892 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=9, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '12:bb:42', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '06:92:24:f7:15:56'}, ipsec=False) old=SB_Global(nb_cfg=8) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:28:39 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:28:39.892 140354 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 20 14:28:39 compute-1 nova_compute[225855]: 2026-01-20 14:28:39.922 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:28:40 compute-1 nova_compute[225855]: 2026-01-20 14:28:40.076 225859 DEBUG oslo_concurrency.processutils [None req-f779cab0-8c82-4d01-896e-0ee4904b5f0b 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:28:40 compute-1 nova_compute[225855]: 2026-01-20 14:28:40.173 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768919320.1727831, 79b5596e-43c9-4085-9829-454fecf59490 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:28:40 compute-1 nova_compute[225855]: 2026-01-20 14:28:40.174 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 79b5596e-43c9-4085-9829-454fecf59490] VM Started (Lifecycle Event)
Jan 20 14:28:40 compute-1 nova_compute[225855]: 2026-01-20 14:28:40.195 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 79b5596e-43c9-4085-9829-454fecf59490] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:28:40 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:28:40 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:28:40 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:28:40.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:28:40 compute-1 nova_compute[225855]: 2026-01-20 14:28:40.414 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:28:40 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:28:40 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2708764190' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:28:40 compute-1 nova_compute[225855]: 2026-01-20 14:28:40.523 225859 DEBUG oslo_concurrency.processutils [None req-f779cab0-8c82-4d01-896e-0ee4904b5f0b 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:28:40 compute-1 nova_compute[225855]: 2026-01-20 14:28:40.527 225859 DEBUG nova.compute.provider_tree [None req-f779cab0-8c82-4d01-896e-0ee4904b5f0b 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:28:40 compute-1 nova_compute[225855]: 2026-01-20 14:28:40.550 225859 DEBUG nova.scheduler.client.report [None req-f779cab0-8c82-4d01-896e-0ee4904b5f0b 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:28:40 compute-1 sshd-session[236964]: Connection closed by authenticating user root 45.179.5.170 port 45520 [preauth]
Jan 20 14:28:40 compute-1 nova_compute[225855]: 2026-01-20 14:28:40.584 225859 DEBUG oslo_concurrency.lockutils [None req-f779cab0-8c82-4d01-896e-0ee4904b5f0b 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.708s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:28:40 compute-1 nova_compute[225855]: 2026-01-20 14:28:40.585 225859 DEBUG nova.compute.manager [None req-f779cab0-8c82-4d01-896e-0ee4904b5f0b 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] [instance: 87fe16d6-774e-4002-8df4-9eb202621ab9] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 20 14:28:40 compute-1 nova_compute[225855]: 2026-01-20 14:28:40.646 225859 DEBUG nova.compute.manager [None req-f779cab0-8c82-4d01-896e-0ee4904b5f0b 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] [instance: 87fe16d6-774e-4002-8df4-9eb202621ab9] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 20 14:28:40 compute-1 nova_compute[225855]: 2026-01-20 14:28:40.648 225859 DEBUG nova.network.neutron [None req-f779cab0-8c82-4d01-896e-0ee4904b5f0b 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] [instance: 87fe16d6-774e-4002-8df4-9eb202621ab9] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 20 14:28:40 compute-1 nova_compute[225855]: 2026-01-20 14:28:40.670 225859 INFO nova.virt.libvirt.driver [None req-f779cab0-8c82-4d01-896e-0ee4904b5f0b 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] [instance: 87fe16d6-774e-4002-8df4-9eb202621ab9] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 20 14:28:40 compute-1 nova_compute[225855]: 2026-01-20 14:28:40.687 225859 DEBUG nova.compute.manager [None req-f779cab0-8c82-4d01-896e-0ee4904b5f0b 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] [instance: 87fe16d6-774e-4002-8df4-9eb202621ab9] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 20 14:28:40 compute-1 nova_compute[225855]: 2026-01-20 14:28:40.711 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768919320.711011, 79b5596e-43c9-4085-9829-454fecf59490 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:28:40 compute-1 nova_compute[225855]: 2026-01-20 14:28:40.711 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 79b5596e-43c9-4085-9829-454fecf59490] VM Resumed (Lifecycle Event)
Jan 20 14:28:40 compute-1 nova_compute[225855]: 2026-01-20 14:28:40.749 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 79b5596e-43c9-4085-9829-454fecf59490] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:28:40 compute-1 nova_compute[225855]: 2026-01-20 14:28:40.752 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 79b5596e-43c9-4085-9829-454fecf59490] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: migrating, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:28:40 compute-1 nova_compute[225855]: 2026-01-20 14:28:40.786 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 79b5596e-43c9-4085-9829-454fecf59490] During the sync_power process the instance has moved from host compute-0.ctlplane.example.com to host compute-1.ctlplane.example.com
Jan 20 14:28:40 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/2708764190' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:28:40 compute-1 nova_compute[225855]: 2026-01-20 14:28:40.805 225859 DEBUG nova.compute.manager [None req-f779cab0-8c82-4d01-896e-0ee4904b5f0b 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] [instance: 87fe16d6-774e-4002-8df4-9eb202621ab9] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 20 14:28:40 compute-1 nova_compute[225855]: 2026-01-20 14:28:40.806 225859 DEBUG nova.virt.libvirt.driver [None req-f779cab0-8c82-4d01-896e-0ee4904b5f0b 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] [instance: 87fe16d6-774e-4002-8df4-9eb202621ab9] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 20 14:28:40 compute-1 nova_compute[225855]: 2026-01-20 14:28:40.807 225859 INFO nova.virt.libvirt.driver [None req-f779cab0-8c82-4d01-896e-0ee4904b5f0b 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] [instance: 87fe16d6-774e-4002-8df4-9eb202621ab9] Creating image(s)
Jan 20 14:28:40 compute-1 nova_compute[225855]: 2026-01-20 14:28:40.830 225859 DEBUG nova.storage.rbd_utils [None req-f779cab0-8c82-4d01-896e-0ee4904b5f0b 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] rbd image 87fe16d6-774e-4002-8df4-9eb202621ab9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:28:40 compute-1 nova_compute[225855]: 2026-01-20 14:28:40.860 225859 DEBUG nova.storage.rbd_utils [None req-f779cab0-8c82-4d01-896e-0ee4904b5f0b 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] rbd image 87fe16d6-774e-4002-8df4-9eb202621ab9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:28:40 compute-1 nova_compute[225855]: 2026-01-20 14:28:40.889 225859 DEBUG nova.storage.rbd_utils [None req-f779cab0-8c82-4d01-896e-0ee4904b5f0b 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] rbd image 87fe16d6-774e-4002-8df4-9eb202621ab9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:28:40 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e156 e156: 3 total, 3 up, 3 in
Jan 20 14:28:40 compute-1 nova_compute[225855]: 2026-01-20 14:28:40.893 225859 DEBUG oslo_concurrency.processutils [None req-f779cab0-8c82-4d01-896e-0ee4904b5f0b 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:28:40 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:28:40 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:28:40 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:28:40.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:28:40 compute-1 nova_compute[225855]: 2026-01-20 14:28:40.970 225859 DEBUG oslo_concurrency.processutils [None req-f779cab0-8c82-4d01-896e-0ee4904b5f0b 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:28:40 compute-1 nova_compute[225855]: 2026-01-20 14:28:40.972 225859 DEBUG oslo_concurrency.lockutils [None req-f779cab0-8c82-4d01-896e-0ee4904b5f0b 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Acquiring lock "82d5c1918fd7c974214c7a48c1793a7a82560462" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:28:40 compute-1 nova_compute[225855]: 2026-01-20 14:28:40.973 225859 DEBUG oslo_concurrency.lockutils [None req-f779cab0-8c82-4d01-896e-0ee4904b5f0b 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:28:40 compute-1 nova_compute[225855]: 2026-01-20 14:28:40.973 225859 DEBUG oslo_concurrency.lockutils [None req-f779cab0-8c82-4d01-896e-0ee4904b5f0b 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:28:41 compute-1 nova_compute[225855]: 2026-01-20 14:28:41.006 225859 DEBUG nova.storage.rbd_utils [None req-f779cab0-8c82-4d01-896e-0ee4904b5f0b 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] rbd image 87fe16d6-774e-4002-8df4-9eb202621ab9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:28:41 compute-1 nova_compute[225855]: 2026-01-20 14:28:41.010 225859 DEBUG oslo_concurrency.processutils [None req-f779cab0-8c82-4d01-896e-0ee4904b5f0b 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 87fe16d6-774e-4002-8df4-9eb202621ab9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:28:41 compute-1 nova_compute[225855]: 2026-01-20 14:28:41.041 225859 DEBUG nova.network.neutron [None req-f779cab0-8c82-4d01-896e-0ee4904b5f0b 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] [instance: 87fe16d6-774e-4002-8df4-9eb202621ab9] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Jan 20 14:28:41 compute-1 nova_compute[225855]: 2026-01-20 14:28:41.042 225859 DEBUG nova.compute.manager [None req-f779cab0-8c82-4d01-896e-0ee4904b5f0b 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] [instance: 87fe16d6-774e-4002-8df4-9eb202621ab9] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 20 14:28:41 compute-1 podman[237412]: 2026-01-20 14:28:41.094566716 +0000 UTC m=+0.129374718 container health_status 533cd8e0c3b8b7f910f15cef49e7f879ab46c12ce646a35628c15f13725eb500 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Jan 20 14:28:41 compute-1 nova_compute[225855]: 2026-01-20 14:28:41.293 225859 DEBUG oslo_concurrency.processutils [None req-f779cab0-8c82-4d01-896e-0ee4904b5f0b 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 87fe16d6-774e-4002-8df4-9eb202621ab9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.283s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:28:41 compute-1 nova_compute[225855]: 2026-01-20 14:28:41.394 225859 DEBUG nova.storage.rbd_utils [None req-f779cab0-8c82-4d01-896e-0ee4904b5f0b 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] resizing rbd image 87fe16d6-774e-4002-8df4-9eb202621ab9_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 20 14:28:41 compute-1 nova_compute[225855]: 2026-01-20 14:28:41.547 225859 DEBUG nova.objects.instance [None req-f779cab0-8c82-4d01-896e-0ee4904b5f0b 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Lazy-loading 'migration_context' on Instance uuid 87fe16d6-774e-4002-8df4-9eb202621ab9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:28:41 compute-1 nova_compute[225855]: 2026-01-20 14:28:41.565 225859 DEBUG nova.virt.libvirt.driver [None req-f779cab0-8c82-4d01-896e-0ee4904b5f0b 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] [instance: 87fe16d6-774e-4002-8df4-9eb202621ab9] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 20 14:28:41 compute-1 nova_compute[225855]: 2026-01-20 14:28:41.565 225859 DEBUG nova.virt.libvirt.driver [None req-f779cab0-8c82-4d01-896e-0ee4904b5f0b 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] [instance: 87fe16d6-774e-4002-8df4-9eb202621ab9] Ensure instance console log exists: /var/lib/nova/instances/87fe16d6-774e-4002-8df4-9eb202621ab9/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 20 14:28:41 compute-1 nova_compute[225855]: 2026-01-20 14:28:41.566 225859 DEBUG oslo_concurrency.lockutils [None req-f779cab0-8c82-4d01-896e-0ee4904b5f0b 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:28:41 compute-1 nova_compute[225855]: 2026-01-20 14:28:41.567 225859 DEBUG oslo_concurrency.lockutils [None req-f779cab0-8c82-4d01-896e-0ee4904b5f0b 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:28:41 compute-1 nova_compute[225855]: 2026-01-20 14:28:41.567 225859 DEBUG oslo_concurrency.lockutils [None req-f779cab0-8c82-4d01-896e-0ee4904b5f0b 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:28:41 compute-1 nova_compute[225855]: 2026-01-20 14:28:41.571 225859 DEBUG nova.virt.libvirt.driver [None req-f779cab0-8c82-4d01-896e-0ee4904b5f0b 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] [instance: 87fe16d6-774e-4002-8df4-9eb202621ab9] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'device_type': 'disk', 'encryption_options': None, 'size': 0, 'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'guest_format': None, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'image_id': 'a32b3e07-16d8-46fd-9a7b-c242c432fcf9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 20 14:28:41 compute-1 nova_compute[225855]: 2026-01-20 14:28:41.579 225859 WARNING nova.virt.libvirt.driver [None req-f779cab0-8c82-4d01-896e-0ee4904b5f0b 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 14:28:41 compute-1 nova_compute[225855]: 2026-01-20 14:28:41.585 225859 DEBUG nova.virt.libvirt.host [None req-f779cab0-8c82-4d01-896e-0ee4904b5f0b 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 20 14:28:41 compute-1 nova_compute[225855]: 2026-01-20 14:28:41.587 225859 DEBUG nova.virt.libvirt.host [None req-f779cab0-8c82-4d01-896e-0ee4904b5f0b 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 20 14:28:41 compute-1 nova_compute[225855]: 2026-01-20 14:28:41.590 225859 DEBUG nova.virt.libvirt.host [None req-f779cab0-8c82-4d01-896e-0ee4904b5f0b 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 20 14:28:41 compute-1 nova_compute[225855]: 2026-01-20 14:28:41.590 225859 DEBUG nova.virt.libvirt.host [None req-f779cab0-8c82-4d01-896e-0ee4904b5f0b 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 20 14:28:41 compute-1 nova_compute[225855]: 2026-01-20 14:28:41.591 225859 DEBUG nova.virt.libvirt.driver [None req-f779cab0-8c82-4d01-896e-0ee4904b5f0b 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 20 14:28:41 compute-1 nova_compute[225855]: 2026-01-20 14:28:41.592 225859 DEBUG nova.virt.hardware [None req-f779cab0-8c82-4d01-896e-0ee4904b5f0b 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-20T14:21:55Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='522deaab-a741-4dbb-932d-d8b13a211c33',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 20 14:28:41 compute-1 nova_compute[225855]: 2026-01-20 14:28:41.592 225859 DEBUG nova.virt.hardware [None req-f779cab0-8c82-4d01-896e-0ee4904b5f0b 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 20 14:28:41 compute-1 nova_compute[225855]: 2026-01-20 14:28:41.592 225859 DEBUG nova.virt.hardware [None req-f779cab0-8c82-4d01-896e-0ee4904b5f0b 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 20 14:28:41 compute-1 nova_compute[225855]: 2026-01-20 14:28:41.593 225859 DEBUG nova.virt.hardware [None req-f779cab0-8c82-4d01-896e-0ee4904b5f0b 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 20 14:28:41 compute-1 nova_compute[225855]: 2026-01-20 14:28:41.593 225859 DEBUG nova.virt.hardware [None req-f779cab0-8c82-4d01-896e-0ee4904b5f0b 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 20 14:28:41 compute-1 nova_compute[225855]: 2026-01-20 14:28:41.593 225859 DEBUG nova.virt.hardware [None req-f779cab0-8c82-4d01-896e-0ee4904b5f0b 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 20 14:28:41 compute-1 nova_compute[225855]: 2026-01-20 14:28:41.593 225859 DEBUG nova.virt.hardware [None req-f779cab0-8c82-4d01-896e-0ee4904b5f0b 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 20 14:28:41 compute-1 nova_compute[225855]: 2026-01-20 14:28:41.593 225859 DEBUG nova.virt.hardware [None req-f779cab0-8c82-4d01-896e-0ee4904b5f0b 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 20 14:28:41 compute-1 nova_compute[225855]: 2026-01-20 14:28:41.594 225859 DEBUG nova.virt.hardware [None req-f779cab0-8c82-4d01-896e-0ee4904b5f0b 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 20 14:28:41 compute-1 nova_compute[225855]: 2026-01-20 14:28:41.594 225859 DEBUG nova.virt.hardware [None req-f779cab0-8c82-4d01-896e-0ee4904b5f0b 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 20 14:28:41 compute-1 nova_compute[225855]: 2026-01-20 14:28:41.594 225859 DEBUG nova.virt.hardware [None req-f779cab0-8c82-4d01-896e-0ee4904b5f0b 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 20 14:28:41 compute-1 nova_compute[225855]: 2026-01-20 14:28:41.597 225859 DEBUG oslo_concurrency.processutils [None req-f779cab0-8c82-4d01-896e-0ee4904b5f0b 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:28:41 compute-1 ceph-mon[81775]: osdmap e156: 3 total, 3 up, 3 in
Jan 20 14:28:41 compute-1 ceph-mon[81775]: pgmap v1134: 321 pgs: 321 active+clean; 279 MiB data, 478 MiB used, 21 GiB / 21 GiB avail; 1.4 MiB/s rd, 3.1 MiB/s wr, 212 op/s
Jan 20 14:28:42 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:28:42 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2946817491' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:28:42 compute-1 nova_compute[225855]: 2026-01-20 14:28:42.093 225859 DEBUG oslo_concurrency.processutils [None req-f779cab0-8c82-4d01-896e-0ee4904b5f0b 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:28:42 compute-1 nova_compute[225855]: 2026-01-20 14:28:42.129 225859 DEBUG nova.storage.rbd_utils [None req-f779cab0-8c82-4d01-896e-0ee4904b5f0b 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] rbd image 87fe16d6-774e-4002-8df4-9eb202621ab9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:28:42 compute-1 nova_compute[225855]: 2026-01-20 14:28:42.134 225859 DEBUG oslo_concurrency.processutils [None req-f779cab0-8c82-4d01-896e-0ee4904b5f0b 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:28:42 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:28:42 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:28:42 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:28:42.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:28:42 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:28:42 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3456057515' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:28:42 compute-1 nova_compute[225855]: 2026-01-20 14:28:42.595 225859 DEBUG oslo_concurrency.processutils [None req-f779cab0-8c82-4d01-896e-0ee4904b5f0b 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:28:42 compute-1 nova_compute[225855]: 2026-01-20 14:28:42.597 225859 DEBUG nova.objects.instance [None req-f779cab0-8c82-4d01-896e-0ee4904b5f0b 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Lazy-loading 'pci_devices' on Instance uuid 87fe16d6-774e-4002-8df4-9eb202621ab9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:28:42 compute-1 nova_compute[225855]: 2026-01-20 14:28:42.611 225859 DEBUG nova.virt.libvirt.driver [None req-f779cab0-8c82-4d01-896e-0ee4904b5f0b 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] [instance: 87fe16d6-774e-4002-8df4-9eb202621ab9] End _get_guest_xml xml=<domain type="kvm">
Jan 20 14:28:42 compute-1 nova_compute[225855]:   <uuid>87fe16d6-774e-4002-8df4-9eb202621ab9</uuid>
Jan 20 14:28:42 compute-1 nova_compute[225855]:   <name>instance-00000018</name>
Jan 20 14:28:42 compute-1 nova_compute[225855]:   <memory>131072</memory>
Jan 20 14:28:42 compute-1 nova_compute[225855]:   <vcpu>1</vcpu>
Jan 20 14:28:42 compute-1 nova_compute[225855]:   <metadata>
Jan 20 14:28:42 compute-1 nova_compute[225855]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 14:28:42 compute-1 nova_compute[225855]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 14:28:42 compute-1 nova_compute[225855]:       <nova:name>tempest-MigrationsAdminTest-server-724945079</nova:name>
Jan 20 14:28:42 compute-1 nova_compute[225855]:       <nova:creationTime>2026-01-20 14:28:41</nova:creationTime>
Jan 20 14:28:42 compute-1 nova_compute[225855]:       <nova:flavor name="m1.nano">
Jan 20 14:28:42 compute-1 nova_compute[225855]:         <nova:memory>128</nova:memory>
Jan 20 14:28:42 compute-1 nova_compute[225855]:         <nova:disk>1</nova:disk>
Jan 20 14:28:42 compute-1 nova_compute[225855]:         <nova:swap>0</nova:swap>
Jan 20 14:28:42 compute-1 nova_compute[225855]:         <nova:ephemeral>0</nova:ephemeral>
Jan 20 14:28:42 compute-1 nova_compute[225855]:         <nova:vcpus>1</nova:vcpus>
Jan 20 14:28:42 compute-1 nova_compute[225855]:       </nova:flavor>
Jan 20 14:28:42 compute-1 nova_compute[225855]:       <nova:owner>
Jan 20 14:28:42 compute-1 nova_compute[225855]:         <nova:user uuid="01a3d712f05049b19d4ecc7051720ad5">tempest-MigrationsAdminTest-1518611738-project-member</nova:user>
Jan 20 14:28:42 compute-1 nova_compute[225855]:         <nova:project uuid="f3c2e72a7148496394c8bcd618a19c80">tempest-MigrationsAdminTest-1518611738</nova:project>
Jan 20 14:28:42 compute-1 nova_compute[225855]:       </nova:owner>
Jan 20 14:28:42 compute-1 nova_compute[225855]:       <nova:root type="image" uuid="a32b3e07-16d8-46fd-9a7b-c242c432fcf9"/>
Jan 20 14:28:42 compute-1 nova_compute[225855]:       <nova:ports/>
Jan 20 14:28:42 compute-1 nova_compute[225855]:     </nova:instance>
Jan 20 14:28:42 compute-1 nova_compute[225855]:   </metadata>
Jan 20 14:28:42 compute-1 nova_compute[225855]:   <sysinfo type="smbios">
Jan 20 14:28:42 compute-1 nova_compute[225855]:     <system>
Jan 20 14:28:42 compute-1 nova_compute[225855]:       <entry name="manufacturer">RDO</entry>
Jan 20 14:28:42 compute-1 nova_compute[225855]:       <entry name="product">OpenStack Compute</entry>
Jan 20 14:28:42 compute-1 nova_compute[225855]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 14:28:42 compute-1 nova_compute[225855]:       <entry name="serial">87fe16d6-774e-4002-8df4-9eb202621ab9</entry>
Jan 20 14:28:42 compute-1 nova_compute[225855]:       <entry name="uuid">87fe16d6-774e-4002-8df4-9eb202621ab9</entry>
Jan 20 14:28:42 compute-1 nova_compute[225855]:       <entry name="family">Virtual Machine</entry>
Jan 20 14:28:42 compute-1 nova_compute[225855]:     </system>
Jan 20 14:28:42 compute-1 nova_compute[225855]:   </sysinfo>
Jan 20 14:28:42 compute-1 nova_compute[225855]:   <os>
Jan 20 14:28:42 compute-1 nova_compute[225855]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 20 14:28:42 compute-1 nova_compute[225855]:     <boot dev="hd"/>
Jan 20 14:28:42 compute-1 nova_compute[225855]:     <smbios mode="sysinfo"/>
Jan 20 14:28:42 compute-1 nova_compute[225855]:   </os>
Jan 20 14:28:42 compute-1 nova_compute[225855]:   <features>
Jan 20 14:28:42 compute-1 nova_compute[225855]:     <acpi/>
Jan 20 14:28:42 compute-1 nova_compute[225855]:     <apic/>
Jan 20 14:28:42 compute-1 nova_compute[225855]:     <vmcoreinfo/>
Jan 20 14:28:42 compute-1 nova_compute[225855]:   </features>
Jan 20 14:28:42 compute-1 nova_compute[225855]:   <clock offset="utc">
Jan 20 14:28:42 compute-1 nova_compute[225855]:     <timer name="pit" tickpolicy="delay"/>
Jan 20 14:28:42 compute-1 nova_compute[225855]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 20 14:28:42 compute-1 nova_compute[225855]:     <timer name="hpet" present="no"/>
Jan 20 14:28:42 compute-1 nova_compute[225855]:   </clock>
Jan 20 14:28:42 compute-1 nova_compute[225855]:   <cpu mode="custom" match="exact">
Jan 20 14:28:42 compute-1 nova_compute[225855]:     <model>Nehalem</model>
Jan 20 14:28:42 compute-1 nova_compute[225855]:     <topology sockets="1" cores="1" threads="1"/>
Jan 20 14:28:42 compute-1 nova_compute[225855]:   </cpu>
Jan 20 14:28:42 compute-1 nova_compute[225855]:   <devices>
Jan 20 14:28:42 compute-1 nova_compute[225855]:     <disk type="network" device="disk">
Jan 20 14:28:42 compute-1 nova_compute[225855]:       <driver type="raw" cache="none"/>
Jan 20 14:28:42 compute-1 nova_compute[225855]:       <source protocol="rbd" name="vms/87fe16d6-774e-4002-8df4-9eb202621ab9_disk">
Jan 20 14:28:42 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:28:42 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:28:42 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:28:42 compute-1 nova_compute[225855]:       </source>
Jan 20 14:28:42 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 14:28:42 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:28:42 compute-1 nova_compute[225855]:       </auth>
Jan 20 14:28:42 compute-1 nova_compute[225855]:       <target dev="vda" bus="virtio"/>
Jan 20 14:28:42 compute-1 nova_compute[225855]:     </disk>
Jan 20 14:28:42 compute-1 nova_compute[225855]:     <disk type="network" device="cdrom">
Jan 20 14:28:42 compute-1 nova_compute[225855]:       <driver type="raw" cache="none"/>
Jan 20 14:28:42 compute-1 nova_compute[225855]:       <source protocol="rbd" name="vms/87fe16d6-774e-4002-8df4-9eb202621ab9_disk.config">
Jan 20 14:28:42 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:28:42 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:28:42 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:28:42 compute-1 nova_compute[225855]:       </source>
Jan 20 14:28:42 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 14:28:42 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:28:42 compute-1 nova_compute[225855]:       </auth>
Jan 20 14:28:42 compute-1 nova_compute[225855]:       <target dev="sda" bus="sata"/>
Jan 20 14:28:42 compute-1 nova_compute[225855]:     </disk>
Jan 20 14:28:42 compute-1 nova_compute[225855]:     <serial type="pty">
Jan 20 14:28:42 compute-1 nova_compute[225855]:       <log file="/var/lib/nova/instances/87fe16d6-774e-4002-8df4-9eb202621ab9/console.log" append="off"/>
Jan 20 14:28:42 compute-1 nova_compute[225855]:     </serial>
Jan 20 14:28:42 compute-1 nova_compute[225855]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 20 14:28:42 compute-1 nova_compute[225855]:     <video>
Jan 20 14:28:42 compute-1 nova_compute[225855]:       <model type="virtio"/>
Jan 20 14:28:42 compute-1 nova_compute[225855]:     </video>
Jan 20 14:28:42 compute-1 nova_compute[225855]:     <input type="tablet" bus="usb"/>
Jan 20 14:28:42 compute-1 nova_compute[225855]:     <rng model="virtio">
Jan 20 14:28:42 compute-1 nova_compute[225855]:       <backend model="random">/dev/urandom</backend>
Jan 20 14:28:42 compute-1 nova_compute[225855]:     </rng>
Jan 20 14:28:42 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root"/>
Jan 20 14:28:42 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:28:42 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:28:42 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:28:42 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:28:42 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:28:42 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:28:42 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:28:42 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:28:42 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:28:42 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:28:42 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:28:42 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:28:42 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:28:42 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:28:42 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:28:42 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:28:42 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:28:42 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:28:42 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:28:42 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:28:42 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:28:42 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:28:42 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:28:42 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:28:42 compute-1 nova_compute[225855]:     <controller type="usb" index="0"/>
Jan 20 14:28:42 compute-1 nova_compute[225855]:     <memballoon model="virtio">
Jan 20 14:28:42 compute-1 nova_compute[225855]:       <stats period="10"/>
Jan 20 14:28:42 compute-1 nova_compute[225855]:     </memballoon>
Jan 20 14:28:42 compute-1 nova_compute[225855]:   </devices>
Jan 20 14:28:42 compute-1 nova_compute[225855]: </domain>
Jan 20 14:28:42 compute-1 nova_compute[225855]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 20 14:28:42 compute-1 nova_compute[225855]: 2026-01-20 14:28:42.660 225859 DEBUG nova.virt.libvirt.driver [None req-f779cab0-8c82-4d01-896e-0ee4904b5f0b 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:28:42 compute-1 nova_compute[225855]: 2026-01-20 14:28:42.660 225859 DEBUG nova.virt.libvirt.driver [None req-f779cab0-8c82-4d01-896e-0ee4904b5f0b 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:28:42 compute-1 nova_compute[225855]: 2026-01-20 14:28:42.661 225859 INFO nova.virt.libvirt.driver [None req-f779cab0-8c82-4d01-896e-0ee4904b5f0b 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] [instance: 87fe16d6-774e-4002-8df4-9eb202621ab9] Using config drive
Jan 20 14:28:42 compute-1 nova_compute[225855]: 2026-01-20 14:28:42.687 225859 DEBUG nova.storage.rbd_utils [None req-f779cab0-8c82-4d01-896e-0ee4904b5f0b 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] rbd image 87fe16d6-774e-4002-8df4-9eb202621ab9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:28:42 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:28:42.894 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5ffd4ac3-9266-4927-98ad-20a17782c725, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '9'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:28:42 compute-1 nova_compute[225855]: 2026-01-20 14:28:42.899 225859 INFO nova.virt.libvirt.driver [None req-f779cab0-8c82-4d01-896e-0ee4904b5f0b 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] [instance: 87fe16d6-774e-4002-8df4-9eb202621ab9] Creating config drive at /var/lib/nova/instances/87fe16d6-774e-4002-8df4-9eb202621ab9/disk.config
Jan 20 14:28:42 compute-1 nova_compute[225855]: 2026-01-20 14:28:42.904 225859 DEBUG oslo_concurrency.processutils [None req-f779cab0-8c82-4d01-896e-0ee4904b5f0b 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/87fe16d6-774e-4002-8df4-9eb202621ab9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7imlze8h execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:28:42 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:28:42 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:28:42 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:28:42.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:28:42 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/2946817491' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:28:42 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/3456057515' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:28:43 compute-1 nova_compute[225855]: 2026-01-20 14:28:43.049 225859 DEBUG oslo_concurrency.processutils [None req-f779cab0-8c82-4d01-896e-0ee4904b5f0b 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/87fe16d6-774e-4002-8df4-9eb202621ab9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7imlze8h" returned: 0 in 0.145s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:28:43 compute-1 ovn_controller[130490]: 2026-01-20T14:28:43Z|00095|binding|INFO|Claiming lport bd002580-dd95-49e1-bc34-e85f86272a05 for this chassis.
Jan 20 14:28:43 compute-1 ovn_controller[130490]: 2026-01-20T14:28:43Z|00096|binding|INFO|bd002580-dd95-49e1-bc34-e85f86272a05: Claiming fa:16:3e:24:ce:0d 10.100.0.10
Jan 20 14:28:43 compute-1 ovn_controller[130490]: 2026-01-20T14:28:43Z|00097|binding|INFO|Setting lport bd002580-dd95-49e1-bc34-e85f86272a05 up in Southbound
Jan 20 14:28:43 compute-1 nova_compute[225855]: 2026-01-20 14:28:43.092 225859 DEBUG nova.storage.rbd_utils [None req-f779cab0-8c82-4d01-896e-0ee4904b5f0b 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] rbd image 87fe16d6-774e-4002-8df4-9eb202621ab9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:28:43 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:28:43.095 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:24:ce:0d 10.100.0.10'], port_security=['fa:16:3e:24:ce:0d 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '79b5596e-43c9-4085-9829-454fecf59490', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-14f18b27-1594-48d8-a08b-a930f7adbc08', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd15f60b9e48e4175b5520d1e57ed2d3a', 'neutron:revision_number': '11', 'neutron:security_group_ids': '6d729cfd-2f98-4ca5-a524-e543b12b3766', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=02983c41-bbec-48cf-910a-84fed1be783f, chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], logical_port=bd002580-dd95-49e1-bc34-e85f86272a05) old=Port_Binding(up=[False], additional_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:28:43 compute-1 nova_compute[225855]: 2026-01-20 14:28:43.097 225859 DEBUG oslo_concurrency.processutils [None req-f779cab0-8c82-4d01-896e-0ee4904b5f0b 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/87fe16d6-774e-4002-8df4-9eb202621ab9/disk.config 87fe16d6-774e-4002-8df4-9eb202621ab9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:28:43 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:28:43.097 140354 INFO neutron.agent.ovn.metadata.agent [-] Port bd002580-dd95-49e1-bc34-e85f86272a05 in datapath 14f18b27-1594-48d8-a08b-a930f7adbc08 bound to our chassis
Jan 20 14:28:43 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:28:43.100 140354 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 14f18b27-1594-48d8-a08b-a930f7adbc08
Jan 20 14:28:43 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:28:43.118 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[2f87bc06-571b-4637-9ceb-cf247adee100]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:28:43 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:28:43.152 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[cbb0a8f7-2e90-484d-93a9-fafc78090b19]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:28:43 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:28:43.158 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[35ee62dd-b38f-4eda-a656-416ad73998e0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:28:43 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:28:43.198 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[028c4a95-7124-44ee-8f53-608a20fa34cd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:28:43 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:28:43.220 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[59b7fdee-13ae-43fa-88ed-17d771ba4721]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap14f18b27-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7d:1f:17'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 25, 'tx_packets': 5, 'rx_bytes': 1752, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 3, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 25, 'tx_packets': 5, 'rx_bytes': 1752, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 3, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 26], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 430599, 'reachable_time': 25641, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 14, 'inoctets': 1040, 'indelivers': 5, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 14, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 1040, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 14, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 5, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 237666, 'error': None, 'target': 'ovnmeta-14f18b27-1594-48d8-a08b-a930f7adbc08', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:28:43 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:28:43.243 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[a14e43a0-b8a6-4309-9791-7d0d7babffc0]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap14f18b27-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 430610, 'tstamp': 430610}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 237670, 'error': None, 'target': 'ovnmeta-14f18b27-1594-48d8-a08b-a930f7adbc08', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap14f18b27-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 430612, 'tstamp': 430612}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 237670, 'error': None, 'target': 'ovnmeta-14f18b27-1594-48d8-a08b-a930f7adbc08', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:28:43 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:28:43.244 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap14f18b27-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:28:43 compute-1 nova_compute[225855]: 2026-01-20 14:28:43.246 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:28:43 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:28:43.247 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap14f18b27-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:28:43 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:28:43.248 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:28:43 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:28:43.248 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap14f18b27-10, col_values=(('external_ids', {'iface-id': 'aa1c73c5-9761-4457-acdc-9f93220f739f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:28:43 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:28:43.249 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:28:43 compute-1 nova_compute[225855]: 2026-01-20 14:28:43.273 225859 DEBUG oslo_concurrency.processutils [None req-f779cab0-8c82-4d01-896e-0ee4904b5f0b 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/87fe16d6-774e-4002-8df4-9eb202621ab9/disk.config 87fe16d6-774e-4002-8df4-9eb202621ab9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.176s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:28:43 compute-1 nova_compute[225855]: 2026-01-20 14:28:43.274 225859 INFO nova.virt.libvirt.driver [None req-f779cab0-8c82-4d01-896e-0ee4904b5f0b 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] [instance: 87fe16d6-774e-4002-8df4-9eb202621ab9] Deleting local config drive /var/lib/nova/instances/87fe16d6-774e-4002-8df4-9eb202621ab9/disk.config because it was imported into RBD.
Jan 20 14:28:43 compute-1 nova_compute[225855]: 2026-01-20 14:28:43.328 225859 INFO nova.compute.manager [None req-316b633b-1626-487f-b686-649c0e42886e f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] [instance: 79b5596e-43c9-4085-9829-454fecf59490] Post operation of migration started
Jan 20 14:28:43 compute-1 systemd-machined[194361]: New machine qemu-12-instance-00000018.
Jan 20 14:28:43 compute-1 systemd[1]: Started Virtual Machine qemu-12-instance-00000018.
Jan 20 14:28:43 compute-1 nova_compute[225855]: 2026-01-20 14:28:43.561 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:28:43 compute-1 nova_compute[225855]: 2026-01-20 14:28:43.802 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768919323.8021781, 87fe16d6-774e-4002-8df4-9eb202621ab9 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:28:43 compute-1 nova_compute[225855]: 2026-01-20 14:28:43.803 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 87fe16d6-774e-4002-8df4-9eb202621ab9] VM Resumed (Lifecycle Event)
Jan 20 14:28:43 compute-1 nova_compute[225855]: 2026-01-20 14:28:43.806 225859 DEBUG nova.compute.manager [None req-f779cab0-8c82-4d01-896e-0ee4904b5f0b 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] [instance: 87fe16d6-774e-4002-8df4-9eb202621ab9] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 20 14:28:43 compute-1 nova_compute[225855]: 2026-01-20 14:28:43.807 225859 DEBUG nova.virt.libvirt.driver [None req-f779cab0-8c82-4d01-896e-0ee4904b5f0b 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] [instance: 87fe16d6-774e-4002-8df4-9eb202621ab9] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 20 14:28:43 compute-1 nova_compute[225855]: 2026-01-20 14:28:43.811 225859 INFO nova.virt.libvirt.driver [-] [instance: 87fe16d6-774e-4002-8df4-9eb202621ab9] Instance spawned successfully.
Jan 20 14:28:43 compute-1 nova_compute[225855]: 2026-01-20 14:28:43.811 225859 DEBUG nova.virt.libvirt.driver [None req-f779cab0-8c82-4d01-896e-0ee4904b5f0b 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] [instance: 87fe16d6-774e-4002-8df4-9eb202621ab9] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 20 14:28:43 compute-1 nova_compute[225855]: 2026-01-20 14:28:43.837 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 87fe16d6-774e-4002-8df4-9eb202621ab9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:28:43 compute-1 nova_compute[225855]: 2026-01-20 14:28:43.846 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 87fe16d6-774e-4002-8df4-9eb202621ab9] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:28:43 compute-1 nova_compute[225855]: 2026-01-20 14:28:43.851 225859 DEBUG nova.virt.libvirt.driver [None req-f779cab0-8c82-4d01-896e-0ee4904b5f0b 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] [instance: 87fe16d6-774e-4002-8df4-9eb202621ab9] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:28:43 compute-1 nova_compute[225855]: 2026-01-20 14:28:43.852 225859 DEBUG nova.virt.libvirt.driver [None req-f779cab0-8c82-4d01-896e-0ee4904b5f0b 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] [instance: 87fe16d6-774e-4002-8df4-9eb202621ab9] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:28:43 compute-1 nova_compute[225855]: 2026-01-20 14:28:43.853 225859 DEBUG nova.virt.libvirt.driver [None req-f779cab0-8c82-4d01-896e-0ee4904b5f0b 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] [instance: 87fe16d6-774e-4002-8df4-9eb202621ab9] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:28:43 compute-1 nova_compute[225855]: 2026-01-20 14:28:43.853 225859 DEBUG nova.virt.libvirt.driver [None req-f779cab0-8c82-4d01-896e-0ee4904b5f0b 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] [instance: 87fe16d6-774e-4002-8df4-9eb202621ab9] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:28:43 compute-1 nova_compute[225855]: 2026-01-20 14:28:43.854 225859 DEBUG nova.virt.libvirt.driver [None req-f779cab0-8c82-4d01-896e-0ee4904b5f0b 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] [instance: 87fe16d6-774e-4002-8df4-9eb202621ab9] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:28:43 compute-1 nova_compute[225855]: 2026-01-20 14:28:43.855 225859 DEBUG nova.virt.libvirt.driver [None req-f779cab0-8c82-4d01-896e-0ee4904b5f0b 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] [instance: 87fe16d6-774e-4002-8df4-9eb202621ab9] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:28:43 compute-1 nova_compute[225855]: 2026-01-20 14:28:43.897 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 87fe16d6-774e-4002-8df4-9eb202621ab9] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 14:28:43 compute-1 nova_compute[225855]: 2026-01-20 14:28:43.897 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768919323.8032815, 87fe16d6-774e-4002-8df4-9eb202621ab9 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:28:43 compute-1 nova_compute[225855]: 2026-01-20 14:28:43.898 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 87fe16d6-774e-4002-8df4-9eb202621ab9] VM Started (Lifecycle Event)
Jan 20 14:28:43 compute-1 nova_compute[225855]: 2026-01-20 14:28:43.924 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 87fe16d6-774e-4002-8df4-9eb202621ab9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:28:43 compute-1 nova_compute[225855]: 2026-01-20 14:28:43.928 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 87fe16d6-774e-4002-8df4-9eb202621ab9] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:28:43 compute-1 nova_compute[225855]: 2026-01-20 14:28:43.932 225859 INFO nova.compute.manager [None req-f779cab0-8c82-4d01-896e-0ee4904b5f0b 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] [instance: 87fe16d6-774e-4002-8df4-9eb202621ab9] Took 3.13 seconds to spawn the instance on the hypervisor.
Jan 20 14:28:43 compute-1 nova_compute[225855]: 2026-01-20 14:28:43.932 225859 DEBUG nova.compute.manager [None req-f779cab0-8c82-4d01-896e-0ee4904b5f0b 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] [instance: 87fe16d6-774e-4002-8df4-9eb202621ab9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:28:43 compute-1 ceph-mon[81775]: pgmap v1135: 321 pgs: 321 active+clean; 279 MiB data, 478 MiB used, 21 GiB / 21 GiB avail; 1.2 MiB/s rd, 2.6 MiB/s wr, 174 op/s
Jan 20 14:28:43 compute-1 nova_compute[225855]: 2026-01-20 14:28:43.934 225859 DEBUG oslo_concurrency.lockutils [None req-316b633b-1626-487f-b686-649c0e42886e f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] Acquiring lock "refresh_cache-79b5596e-43c9-4085-9829-454fecf59490" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:28:43 compute-1 nova_compute[225855]: 2026-01-20 14:28:43.934 225859 DEBUG oslo_concurrency.lockutils [None req-316b633b-1626-487f-b686-649c0e42886e f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] Acquired lock "refresh_cache-79b5596e-43c9-4085-9829-454fecf59490" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:28:43 compute-1 nova_compute[225855]: 2026-01-20 14:28:43.934 225859 DEBUG nova.network.neutron [None req-316b633b-1626-487f-b686-649c0e42886e f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] [instance: 79b5596e-43c9-4085-9829-454fecf59490] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 14:28:43 compute-1 nova_compute[225855]: 2026-01-20 14:28:43.949 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 87fe16d6-774e-4002-8df4-9eb202621ab9] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 14:28:44 compute-1 nova_compute[225855]: 2026-01-20 14:28:44.007 225859 INFO nova.compute.manager [None req-f779cab0-8c82-4d01-896e-0ee4904b5f0b 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] [instance: 87fe16d6-774e-4002-8df4-9eb202621ab9] Took 4.17 seconds to build instance.
Jan 20 14:28:44 compute-1 nova_compute[225855]: 2026-01-20 14:28:44.027 225859 DEBUG oslo_concurrency.lockutils [None req-f779cab0-8c82-4d01-896e-0ee4904b5f0b 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Lock "87fe16d6-774e-4002-8df4-9eb202621ab9" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 4.276s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:28:44 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:28:44 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:28:44 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:28:44.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:28:44 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:28:44 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:28:44 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:28:44 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:28:44.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:28:45 compute-1 nova_compute[225855]: 2026-01-20 14:28:45.418 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:28:45 compute-1 nova_compute[225855]: 2026-01-20 14:28:45.818 225859 DEBUG nova.network.neutron [None req-316b633b-1626-487f-b686-649c0e42886e f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] [instance: 79b5596e-43c9-4085-9829-454fecf59490] Updating instance_info_cache with network_info: [{"id": "bd002580-dd95-49e1-bc34-e85f86272a05", "address": "fa:16:3e:24:ce:0d", "network": {"id": "14f18b27-1594-48d8-a08b-a930f7adbc08", "bridge": "br-int", "label": "tempest-LiveMigrationTest-2126108622-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d15f60b9e48e4175b5520d1e57ed2d3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbd002580-dd", "ovs_interfaceid": "bd002580-dd95-49e1-bc34-e85f86272a05", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:28:45 compute-1 nova_compute[225855]: 2026-01-20 14:28:45.840 225859 DEBUG oslo_concurrency.lockutils [None req-316b633b-1626-487f-b686-649c0e42886e f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] Releasing lock "refresh_cache-79b5596e-43c9-4085-9829-454fecf59490" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:28:45 compute-1 nova_compute[225855]: 2026-01-20 14:28:45.860 225859 DEBUG oslo_concurrency.lockutils [None req-316b633b-1626-487f-b686-649c0e42886e f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:28:45 compute-1 nova_compute[225855]: 2026-01-20 14:28:45.860 225859 DEBUG oslo_concurrency.lockutils [None req-316b633b-1626-487f-b686-649c0e42886e f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:28:45 compute-1 nova_compute[225855]: 2026-01-20 14:28:45.861 225859 DEBUG oslo_concurrency.lockutils [None req-316b633b-1626-487f-b686-649c0e42886e f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:28:45 compute-1 nova_compute[225855]: 2026-01-20 14:28:45.870 225859 INFO nova.virt.libvirt.driver [None req-316b633b-1626-487f-b686-649c0e42886e f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] [instance: 79b5596e-43c9-4085-9829-454fecf59490] Sending announce-self command to QEMU monitor. Attempt 1 of 3
Jan 20 14:28:45 compute-1 virtqemud[225396]: Domain id=11 name='instance-00000017' uuid=79b5596e-43c9-4085-9829-454fecf59490 is tainted: custom-monitor
Jan 20 14:28:46 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:28:46 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:28:46 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:28:46.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:28:46 compute-1 ceph-mon[81775]: pgmap v1136: 321 pgs: 321 active+clean; 312 MiB data, 490 MiB used, 21 GiB / 21 GiB avail; 1.3 MiB/s rd, 3.8 MiB/s wr, 187 op/s
Jan 20 14:28:46 compute-1 nova_compute[225855]: 2026-01-20 14:28:46.882 225859 INFO nova.virt.libvirt.driver [None req-316b633b-1626-487f-b686-649c0e42886e f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] [instance: 79b5596e-43c9-4085-9829-454fecf59490] Sending announce-self command to QEMU monitor. Attempt 2 of 3
Jan 20 14:28:46 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:28:46 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:28:46 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:28:46.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:28:47 compute-1 ceph-mon[81775]: pgmap v1137: 321 pgs: 321 active+clean; 326 MiB data, 499 MiB used, 21 GiB / 21 GiB avail; 2.1 MiB/s rd, 4.7 MiB/s wr, 216 op/s
Jan 20 14:28:47 compute-1 nova_compute[225855]: 2026-01-20 14:28:47.888 225859 INFO nova.virt.libvirt.driver [None req-316b633b-1626-487f-b686-649c0e42886e f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] [instance: 79b5596e-43c9-4085-9829-454fecf59490] Sending announce-self command to QEMU monitor. Attempt 3 of 3
Jan 20 14:28:47 compute-1 nova_compute[225855]: 2026-01-20 14:28:47.894 225859 DEBUG nova.compute.manager [None req-316b633b-1626-487f-b686-649c0e42886e f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] [instance: 79b5596e-43c9-4085-9829-454fecf59490] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:28:47 compute-1 nova_compute[225855]: 2026-01-20 14:28:47.923 225859 DEBUG nova.objects.instance [None req-316b633b-1626-487f-b686-649c0e42886e f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] [instance: 79b5596e-43c9-4085-9829-454fecf59490] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Jan 20 14:28:48 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:28:48 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:28:48 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:28:48.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:28:48 compute-1 nova_compute[225855]: 2026-01-20 14:28:48.392 225859 DEBUG oslo_concurrency.lockutils [None req-21751958-ead6-449d-8a04-ee82f802da6d 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Acquiring lock "refresh_cache-87fe16d6-774e-4002-8df4-9eb202621ab9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:28:48 compute-1 nova_compute[225855]: 2026-01-20 14:28:48.393 225859 DEBUG oslo_concurrency.lockutils [None req-21751958-ead6-449d-8a04-ee82f802da6d 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Acquired lock "refresh_cache-87fe16d6-774e-4002-8df4-9eb202621ab9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:28:48 compute-1 nova_compute[225855]: 2026-01-20 14:28:48.393 225859 DEBUG nova.network.neutron [None req-21751958-ead6-449d-8a04-ee82f802da6d 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] [instance: 87fe16d6-774e-4002-8df4-9eb202621ab9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 14:28:48 compute-1 nova_compute[225855]: 2026-01-20 14:28:48.601 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:28:48 compute-1 nova_compute[225855]: 2026-01-20 14:28:48.618 225859 DEBUG nova.network.neutron [None req-21751958-ead6-449d-8a04-ee82f802da6d 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] [instance: 87fe16d6-774e-4002-8df4-9eb202621ab9] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 20 14:28:48 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/663865372' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:28:48 compute-1 nova_compute[225855]: 2026-01-20 14:28:48.888 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:28:48 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:28:48 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:28:48 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:28:48.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:28:48 compute-1 nova_compute[225855]: 2026-01-20 14:28:48.932 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Triggering sync for uuid d726266f-b9a6-406b-ad13-f9db3e0dc6aa _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Jan 20 14:28:48 compute-1 nova_compute[225855]: 2026-01-20 14:28:48.933 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Triggering sync for uuid 29f0b4d4-abf0-46e7-bf67-38e71eb42e28 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Jan 20 14:28:48 compute-1 nova_compute[225855]: 2026-01-20 14:28:48.934 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Triggering sync for uuid 79b5596e-43c9-4085-9829-454fecf59490 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Jan 20 14:28:48 compute-1 nova_compute[225855]: 2026-01-20 14:28:48.934 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Triggering sync for uuid 87fe16d6-774e-4002-8df4-9eb202621ab9 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Jan 20 14:28:48 compute-1 nova_compute[225855]: 2026-01-20 14:28:48.935 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "d726266f-b9a6-406b-ad13-f9db3e0dc6aa" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:28:48 compute-1 nova_compute[225855]: 2026-01-20 14:28:48.935 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "d726266f-b9a6-406b-ad13-f9db3e0dc6aa" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:28:48 compute-1 nova_compute[225855]: 2026-01-20 14:28:48.936 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "29f0b4d4-abf0-46e7-bf67-38e71eb42e28" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:28:48 compute-1 nova_compute[225855]: 2026-01-20 14:28:48.937 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "29f0b4d4-abf0-46e7-bf67-38e71eb42e28" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:28:48 compute-1 nova_compute[225855]: 2026-01-20 14:28:48.938 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "79b5596e-43c9-4085-9829-454fecf59490" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:28:48 compute-1 nova_compute[225855]: 2026-01-20 14:28:48.938 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "79b5596e-43c9-4085-9829-454fecf59490" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:28:48 compute-1 nova_compute[225855]: 2026-01-20 14:28:48.939 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "87fe16d6-774e-4002-8df4-9eb202621ab9" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:28:48 compute-1 nova_compute[225855]: 2026-01-20 14:28:48.940 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "87fe16d6-774e-4002-8df4-9eb202621ab9" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:28:48 compute-1 nova_compute[225855]: 2026-01-20 14:28:48.941 225859 INFO nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] [instance: 87fe16d6-774e-4002-8df4-9eb202621ab9] During sync_power_state the instance has a pending task (resize_prep). Skip.
Jan 20 14:28:48 compute-1 nova_compute[225855]: 2026-01-20 14:28:48.942 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "87fe16d6-774e-4002-8df4-9eb202621ab9" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:28:48 compute-1 nova_compute[225855]: 2026-01-20 14:28:48.994 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "29f0b4d4-abf0-46e7-bf67-38e71eb42e28" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.057s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:28:48 compute-1 nova_compute[225855]: 2026-01-20 14:28:48.995 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "d726266f-b9a6-406b-ad13-f9db3e0dc6aa" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.060s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:28:49 compute-1 nova_compute[225855]: 2026-01-20 14:28:49.039 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "79b5596e-43c9-4085-9829-454fecf59490" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.101s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:28:49 compute-1 nova_compute[225855]: 2026-01-20 14:28:49.305 225859 DEBUG nova.network.neutron [None req-21751958-ead6-449d-8a04-ee82f802da6d 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] [instance: 87fe16d6-774e-4002-8df4-9eb202621ab9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:28:49 compute-1 nova_compute[225855]: 2026-01-20 14:28:49.340 225859 DEBUG oslo_concurrency.lockutils [None req-21751958-ead6-449d-8a04-ee82f802da6d 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Releasing lock "refresh_cache-87fe16d6-774e-4002-8df4-9eb202621ab9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:28:49 compute-1 nova_compute[225855]: 2026-01-20 14:28:49.463 225859 DEBUG nova.virt.libvirt.driver [None req-21751958-ead6-449d-8a04-ee82f802da6d 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] [instance: 87fe16d6-774e-4002-8df4-9eb202621ab9] Starting migrate_disk_and_power_off migrate_disk_and_power_off /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11511
Jan 20 14:28:49 compute-1 nova_compute[225855]: 2026-01-20 14:28:49.464 225859 DEBUG nova.virt.libvirt.volume.remotefs [None req-21751958-ead6-449d-8a04-ee82f802da6d 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Creating file /var/lib/nova/instances/87fe16d6-774e-4002-8df4-9eb202621ab9/4d3b1e03b92e457094cc45cb59666465.tmp on remote host 192.168.122.102 create_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:79
Jan 20 14:28:49 compute-1 nova_compute[225855]: 2026-01-20 14:28:49.465 225859 DEBUG oslo_concurrency.processutils [None req-21751958-ead6-449d-8a04-ee82f802da6d 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Running cmd (subprocess): ssh -o BatchMode=yes 192.168.122.102 touch /var/lib/nova/instances/87fe16d6-774e-4002-8df4-9eb202621ab9/4d3b1e03b92e457094cc45cb59666465.tmp execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:28:49 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:28:49 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/2858502062' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:28:49 compute-1 ceph-mon[81775]: pgmap v1138: 321 pgs: 321 active+clean; 326 MiB data, 499 MiB used, 21 GiB / 21 GiB avail; 2.7 MiB/s rd, 4.4 MiB/s wr, 219 op/s
Jan 20 14:28:50 compute-1 nova_compute[225855]: 2026-01-20 14:28:50.008 225859 DEBUG oslo_concurrency.processutils [None req-21751958-ead6-449d-8a04-ee82f802da6d 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] CMD "ssh -o BatchMode=yes 192.168.122.102 touch /var/lib/nova/instances/87fe16d6-774e-4002-8df4-9eb202621ab9/4d3b1e03b92e457094cc45cb59666465.tmp" returned: 1 in 0.544s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:28:50 compute-1 nova_compute[225855]: 2026-01-20 14:28:50.010 225859 DEBUG oslo_concurrency.processutils [None req-21751958-ead6-449d-8a04-ee82f802da6d 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] 'ssh -o BatchMode=yes 192.168.122.102 touch /var/lib/nova/instances/87fe16d6-774e-4002-8df4-9eb202621ab9/4d3b1e03b92e457094cc45cb59666465.tmp' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
Jan 20 14:28:50 compute-1 nova_compute[225855]: 2026-01-20 14:28:50.010 225859 DEBUG nova.virt.libvirt.volume.remotefs [None req-21751958-ead6-449d-8a04-ee82f802da6d 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Creating directory /var/lib/nova/instances/87fe16d6-774e-4002-8df4-9eb202621ab9 on remote host 192.168.122.102 create_dir /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:91
Jan 20 14:28:50 compute-1 nova_compute[225855]: 2026-01-20 14:28:50.011 225859 DEBUG oslo_concurrency.processutils [None req-21751958-ead6-449d-8a04-ee82f802da6d 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Running cmd (subprocess): ssh -o BatchMode=yes 192.168.122.102 mkdir -p /var/lib/nova/instances/87fe16d6-774e-4002-8df4-9eb202621ab9 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:28:50 compute-1 nova_compute[225855]: 2026-01-20 14:28:50.260 225859 DEBUG oslo_concurrency.processutils [None req-21751958-ead6-449d-8a04-ee82f802da6d 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] CMD "ssh -o BatchMode=yes 192.168.122.102 mkdir -p /var/lib/nova/instances/87fe16d6-774e-4002-8df4-9eb202621ab9" returned: 0 in 0.248s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:28:50 compute-1 nova_compute[225855]: 2026-01-20 14:28:50.265 225859 DEBUG nova.virt.libvirt.driver [None req-21751958-ead6-449d-8a04-ee82f802da6d 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] [instance: 87fe16d6-774e-4002-8df4-9eb202621ab9] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Jan 20 14:28:50 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:28:50 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:28:50 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:28:50.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:28:50 compute-1 nova_compute[225855]: 2026-01-20 14:28:50.421 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:28:50 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:28:50 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:28:50 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:28:50.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:28:51 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/66379269' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:28:52 compute-1 ceph-mon[81775]: pgmap v1139: 321 pgs: 321 active+clean; 327 MiB data, 499 MiB used, 21 GiB / 21 GiB avail; 2.5 MiB/s rd, 2.1 MiB/s wr, 170 op/s
Jan 20 14:28:52 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:28:52 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:28:52 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:28:52.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:28:52 compute-1 nova_compute[225855]: 2026-01-20 14:28:52.816 225859 DEBUG nova.virt.libvirt.driver [None req-3e6ecc77-61de-4099-8bb3-ee2d276c7579 f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] [instance: 79b5596e-43c9-4085-9829-454fecf59490] Check if temp file /var/lib/nova/instances/tmp27yeses5 exists to indicate shared storage is being used for migration. Exists? False _check_shared_storage_test_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10065
Jan 20 14:28:52 compute-1 nova_compute[225855]: 2026-01-20 14:28:52.817 225859 DEBUG nova.compute.manager [None req-3e6ecc77-61de-4099-8bb3-ee2d276c7579 f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] source check data is LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=19456,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmp27yeses5',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='79b5596e-43c9-4085-9829-454fecf59490',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=True,migration=<?>,old_vol_attachment_ids=<?>,serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) check_can_live_migrate_source /usr/lib/python3.9/site-packages/nova/compute/manager.py:8587
Jan 20 14:28:52 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:28:52 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:28:52 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:28:52.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:28:53 compute-1 nova_compute[225855]: 2026-01-20 14:28:53.646 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:28:54 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:28:54 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:28:54 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:28:54.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:28:54 compute-1 ceph-mon[81775]: pgmap v1140: 321 pgs: 321 active+clean; 327 MiB data, 499 MiB used, 21 GiB / 21 GiB avail; 2.2 MiB/s rd, 1.8 MiB/s wr, 147 op/s
Jan 20 14:28:54 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:28:54 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:28:54 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:28:54 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:28:54.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:28:55 compute-1 nova_compute[225855]: 2026-01-20 14:28:55.425 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:28:56 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:28:56 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.003000085s ======
Jan 20 14:28:56 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:28:56.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000085s
Jan 20 14:28:56 compute-1 ceph-mon[81775]: pgmap v1141: 321 pgs: 321 active+clean; 327 MiB data, 503 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 1.8 MiB/s wr, 238 op/s
Jan 20 14:28:56 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:28:56 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:28:56 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:28:56.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:28:56 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:28:56 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/731545174' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:28:57 compute-1 sudo[237738]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:28:57 compute-1 sudo[237738]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:28:57 compute-1 sudo[237738]: pam_unix(sudo:session): session closed for user root
Jan 20 14:28:57 compute-1 sudo[237763]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:28:57 compute-1 sudo[237763]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:28:57 compute-1 sudo[237763]: pam_unix(sudo:session): session closed for user root
Jan 20 14:28:57 compute-1 sudo[237788]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:28:57 compute-1 sudo[237788]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:28:57 compute-1 sudo[237788]: pam_unix(sudo:session): session closed for user root
Jan 20 14:28:57 compute-1 sudo[237813]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Jan 20 14:28:57 compute-1 sudo[237813]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:28:57 compute-1 sudo[237813]: pam_unix(sudo:session): session closed for user root
Jan 20 14:28:57 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/731545174' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:28:57 compute-1 sudo[237860]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:28:57 compute-1 sudo[237860]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:28:57 compute-1 sudo[237860]: pam_unix(sudo:session): session closed for user root
Jan 20 14:28:57 compute-1 sudo[237885]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:28:57 compute-1 sudo[237885]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:28:57 compute-1 sudo[237885]: pam_unix(sudo:session): session closed for user root
Jan 20 14:28:58 compute-1 sudo[237910]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:28:58 compute-1 sudo[237910]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:28:58 compute-1 sudo[237910]: pam_unix(sudo:session): session closed for user root
Jan 20 14:28:58 compute-1 sudo[237935]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 20 14:28:58 compute-1 sudo[237935]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:28:58 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:28:58 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:28:58 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:28:58.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:28:58 compute-1 nova_compute[225855]: 2026-01-20 14:28:58.684 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:28:58 compute-1 sudo[237935]: pam_unix(sudo:session): session closed for user root
Jan 20 14:28:58 compute-1 ceph-mon[81775]: pgmap v1142: 321 pgs: 321 active+clean; 327 MiB data, 508 MiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 804 KiB/s wr, 226 op/s
Jan 20 14:28:58 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:28:58 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:28:58 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:28:58 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 20 14:28:58 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:28:58 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:28:58 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:28:58 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:28:58.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:28:58 compute-1 nova_compute[225855]: 2026-01-20 14:28:58.985 225859 DEBUG nova.compute.manager [req-abef9692-ee67-4217-ac01-bdceccb1f562 req-d4ce792c-0a42-49c3-92be-8893d9e5fbbc 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 79b5596e-43c9-4085-9829-454fecf59490] Received event network-vif-unplugged-bd002580-dd95-49e1-bc34-e85f86272a05 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:28:58 compute-1 nova_compute[225855]: 2026-01-20 14:28:58.986 225859 DEBUG oslo_concurrency.lockutils [req-abef9692-ee67-4217-ac01-bdceccb1f562 req-d4ce792c-0a42-49c3-92be-8893d9e5fbbc 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "79b5596e-43c9-4085-9829-454fecf59490-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:28:58 compute-1 nova_compute[225855]: 2026-01-20 14:28:58.986 225859 DEBUG oslo_concurrency.lockutils [req-abef9692-ee67-4217-ac01-bdceccb1f562 req-d4ce792c-0a42-49c3-92be-8893d9e5fbbc 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "79b5596e-43c9-4085-9829-454fecf59490-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:28:58 compute-1 nova_compute[225855]: 2026-01-20 14:28:58.986 225859 DEBUG oslo_concurrency.lockutils [req-abef9692-ee67-4217-ac01-bdceccb1f562 req-d4ce792c-0a42-49c3-92be-8893d9e5fbbc 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "79b5596e-43c9-4085-9829-454fecf59490-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:28:58 compute-1 nova_compute[225855]: 2026-01-20 14:28:58.987 225859 DEBUG nova.compute.manager [req-abef9692-ee67-4217-ac01-bdceccb1f562 req-d4ce792c-0a42-49c3-92be-8893d9e5fbbc 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 79b5596e-43c9-4085-9829-454fecf59490] No waiting events found dispatching network-vif-unplugged-bd002580-dd95-49e1-bc34-e85f86272a05 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:28:58 compute-1 nova_compute[225855]: 2026-01-20 14:28:58.987 225859 DEBUG nova.compute.manager [req-abef9692-ee67-4217-ac01-bdceccb1f562 req-d4ce792c-0a42-49c3-92be-8893d9e5fbbc 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 79b5596e-43c9-4085-9829-454fecf59490] Received event network-vif-unplugged-bd002580-dd95-49e1-bc34-e85f86272a05 for instance with task_state migrating. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 20 14:28:59 compute-1 sudo[237990]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:28:59 compute-1 sudo[237990]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:28:59 compute-1 sudo[237990]: pam_unix(sudo:session): session closed for user root
Jan 20 14:28:59 compute-1 sudo[238015]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:28:59 compute-1 sudo[238015]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:28:59 compute-1 sudo[238015]: pam_unix(sudo:session): session closed for user root
Jan 20 14:28:59 compute-1 nova_compute[225855]: 2026-01-20 14:28:59.724 225859 INFO nova.compute.manager [None req-3e6ecc77-61de-4099-8bb3-ee2d276c7579 f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] [instance: 79b5596e-43c9-4085-9829-454fecf59490] Took 5.68 seconds for pre_live_migration on destination host compute-0.ctlplane.example.com.
Jan 20 14:28:59 compute-1 nova_compute[225855]: 2026-01-20 14:28:59.726 225859 DEBUG nova.compute.manager [None req-3e6ecc77-61de-4099-8bb3-ee2d276c7579 f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] [instance: 79b5596e-43c9-4085-9829-454fecf59490] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 20 14:28:59 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 20 14:28:59 compute-1 ceph-mon[81775]: pgmap v1143: 321 pgs: 321 active+clean; 329 MiB data, 509 MiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 144 KiB/s wr, 195 op/s
Jan 20 14:28:59 compute-1 nova_compute[225855]: 2026-01-20 14:28:59.755 225859 DEBUG nova.compute.manager [None req-3e6ecc77-61de-4099-8bb3-ee2d276c7579 f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] live_migration data is LibvirtLiveMigrateData(bdms=[LibvirtLiveMigrateBDMInfo],block_migration=False,disk_available_mb=19456,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmp27yeses5',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='79b5596e-43c9-4085-9829-454fecf59490',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=True,migration=Migration(522850ff-68d3-4cab-8c83-50b3a540cdfa),old_vol_attachment_ids={47e883f3-6efe-40b3-be28-6c01525dfc0c='636146ab-4bc6-4c21-9609-7755eb208c7c'},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=True) _do_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8939
Jan 20 14:28:59 compute-1 nova_compute[225855]: 2026-01-20 14:28:59.759 225859 DEBUG nova.objects.instance [None req-3e6ecc77-61de-4099-8bb3-ee2d276c7579 f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] Lazy-loading 'migration_context' on Instance uuid 79b5596e-43c9-4085-9829-454fecf59490 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:28:59 compute-1 nova_compute[225855]: 2026-01-20 14:28:59.761 225859 DEBUG nova.virt.libvirt.driver [None req-3e6ecc77-61de-4099-8bb3-ee2d276c7579 f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] [instance: 79b5596e-43c9-4085-9829-454fecf59490] Starting monitoring of live migration _live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10639
Jan 20 14:28:59 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:28:59 compute-1 nova_compute[225855]: 2026-01-20 14:28:59.764 225859 DEBUG nova.virt.libvirt.driver [None req-3e6ecc77-61de-4099-8bb3-ee2d276c7579 f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] [instance: 79b5596e-43c9-4085-9829-454fecf59490] Operation thread is still running _live_migration_monitor /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10440
Jan 20 14:28:59 compute-1 nova_compute[225855]: 2026-01-20 14:28:59.764 225859 DEBUG nova.virt.libvirt.driver [None req-3e6ecc77-61de-4099-8bb3-ee2d276c7579 f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] [instance: 79b5596e-43c9-4085-9829-454fecf59490] Migration not running yet _live_migration_monitor /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10449
Jan 20 14:28:59 compute-1 nova_compute[225855]: 2026-01-20 14:28:59.792 225859 DEBUG nova.virt.libvirt.migration [None req-3e6ecc77-61de-4099-8bb3-ee2d276c7579 f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] Find same serial number: pos=1, serial=47e883f3-6efe-40b3-be28-6c01525dfc0c _update_volume_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:242
Jan 20 14:28:59 compute-1 nova_compute[225855]: 2026-01-20 14:28:59.795 225859 DEBUG nova.virt.libvirt.vif [None req-3e6ecc77-61de-4099-8bb3-ee2d276c7579 f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-20T14:28:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-LiveMigrationTest-server-1483268234',display_name='tempest-LiveMigrationTest-server-1483268234',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-livemigrationtest-server-1483268234',id=23,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-20T14:28:24Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='d15f60b9e48e4175b5520d1e57ed2d3a',ramdisk_id='',reservation_id='r-jglb1q09',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-LiveMigrationTest-864280704',owner_user_name='tempest-LiveMigrationTest-864280704-project-member'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T14:28:47Z,user_data=None,user_id='bce7fcbd19554e29bb80c5b93b7dd3c9',uuid=79b5596e-43c9-4085-9829-454fecf59490,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "bd002580-dd95-49e1-bc34-e85f86272a05", "address": "fa:16:3e:24:ce:0d", "network": {"id": "14f18b27-1594-48d8-a08b-a930f7adbc08", "bridge": "br-int", "label": "tempest-LiveMigrationTest-2126108622-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d15f60b9e48e4175b5520d1e57ed2d3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tapbd002580-dd", "ovs_interfaceid": "bd002580-dd95-49e1-bc34-e85f86272a05", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 20 14:28:59 compute-1 nova_compute[225855]: 2026-01-20 14:28:59.796 225859 DEBUG nova.network.os_vif_util [None req-3e6ecc77-61de-4099-8bb3-ee2d276c7579 f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] Converting VIF {"id": "bd002580-dd95-49e1-bc34-e85f86272a05", "address": "fa:16:3e:24:ce:0d", "network": {"id": "14f18b27-1594-48d8-a08b-a930f7adbc08", "bridge": "br-int", "label": "tempest-LiveMigrationTest-2126108622-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d15f60b9e48e4175b5520d1e57ed2d3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tapbd002580-dd", "ovs_interfaceid": "bd002580-dd95-49e1-bc34-e85f86272a05", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:28:59 compute-1 nova_compute[225855]: 2026-01-20 14:28:59.797 225859 DEBUG nova.network.os_vif_util [None req-3e6ecc77-61de-4099-8bb3-ee2d276c7579 f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:24:ce:0d,bridge_name='br-int',has_traffic_filtering=True,id=bd002580-dd95-49e1-bc34-e85f86272a05,network=Network(14f18b27-1594-48d8-a08b-a930f7adbc08),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbd002580-dd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:28:59 compute-1 nova_compute[225855]: 2026-01-20 14:28:59.798 225859 DEBUG nova.virt.libvirt.migration [None req-3e6ecc77-61de-4099-8bb3-ee2d276c7579 f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] [instance: 79b5596e-43c9-4085-9829-454fecf59490] Updating guest XML with vif config: <interface type="ethernet">
Jan 20 14:28:59 compute-1 nova_compute[225855]:   <mac address="fa:16:3e:24:ce:0d"/>
Jan 20 14:28:59 compute-1 nova_compute[225855]:   <model type="virtio"/>
Jan 20 14:28:59 compute-1 nova_compute[225855]:   <driver name="vhost" rx_queue_size="512"/>
Jan 20 14:28:59 compute-1 nova_compute[225855]:   <mtu size="1442"/>
Jan 20 14:28:59 compute-1 nova_compute[225855]:   <target dev="tapbd002580-dd"/>
Jan 20 14:28:59 compute-1 nova_compute[225855]: </interface>
Jan 20 14:28:59 compute-1 nova_compute[225855]:  _update_vif_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:388
Jan 20 14:28:59 compute-1 nova_compute[225855]: 2026-01-20 14:28:59.799 225859 DEBUG nova.virt.libvirt.driver [None req-3e6ecc77-61de-4099-8bb3-ee2d276c7579 f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] [instance: 79b5596e-43c9-4085-9829-454fecf59490] About to invoke the migrate API _live_migration_operation /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10272
Jan 20 14:29:00 compute-1 nova_compute[225855]: 2026-01-20 14:29:00.267 225859 DEBUG nova.virt.libvirt.migration [None req-3e6ecc77-61de-4099-8bb3-ee2d276c7579 f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] [instance: 79b5596e-43c9-4085-9829-454fecf59490] Current None elapsed 0 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512
Jan 20 14:29:00 compute-1 nova_compute[225855]: 2026-01-20 14:29:00.268 225859 INFO nova.virt.libvirt.migration [None req-3e6ecc77-61de-4099-8bb3-ee2d276c7579 f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] [instance: 79b5596e-43c9-4085-9829-454fecf59490] Increasing downtime to 50 ms after 0 sec elapsed time
Jan 20 14:29:00 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:29:00 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:29:00 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:29:00.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:29:00 compute-1 nova_compute[225855]: 2026-01-20 14:29:00.313 225859 DEBUG nova.virt.libvirt.driver [None req-21751958-ead6-449d-8a04-ee82f802da6d 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] [instance: 87fe16d6-774e-4002-8df4-9eb202621ab9] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Jan 20 14:29:00 compute-1 nova_compute[225855]: 2026-01-20 14:29:00.426 225859 INFO nova.virt.libvirt.driver [None req-3e6ecc77-61de-4099-8bb3-ee2d276c7579 f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] [instance: 79b5596e-43c9-4085-9829-454fecf59490] Migration running for 0 secs, memory 100% remaining (bytes processed=0, remaining=0, total=0); disk 100% remaining (bytes processed=0, remaining=0, total=0).
Jan 20 14:29:00 compute-1 nova_compute[225855]: 2026-01-20 14:29:00.428 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:29:00 compute-1 nova_compute[225855]: 2026-01-20 14:29:00.929 225859 DEBUG nova.virt.libvirt.migration [None req-3e6ecc77-61de-4099-8bb3-ee2d276c7579 f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] [instance: 79b5596e-43c9-4085-9829-454fecf59490] Current 50 elapsed 1 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512
Jan 20 14:29:00 compute-1 nova_compute[225855]: 2026-01-20 14:29:00.930 225859 DEBUG nova.virt.libvirt.migration [None req-3e6ecc77-61de-4099-8bb3-ee2d276c7579 f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] [instance: 79b5596e-43c9-4085-9829-454fecf59490] Downtime does not need to change update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:525
Jan 20 14:29:00 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:29:00 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:29:00 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:29:00.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:29:01 compute-1 nova_compute[225855]: 2026-01-20 14:29:01.082 225859 DEBUG nova.compute.manager [req-54663048-335f-4fa6-ad1a-3950f7e92c71 req-8eb0251f-2127-42e9-98a3-61941c4c86ae 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 79b5596e-43c9-4085-9829-454fecf59490] Received event network-vif-plugged-bd002580-dd95-49e1-bc34-e85f86272a05 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:29:01 compute-1 nova_compute[225855]: 2026-01-20 14:29:01.082 225859 DEBUG oslo_concurrency.lockutils [req-54663048-335f-4fa6-ad1a-3950f7e92c71 req-8eb0251f-2127-42e9-98a3-61941c4c86ae 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "79b5596e-43c9-4085-9829-454fecf59490-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:29:01 compute-1 nova_compute[225855]: 2026-01-20 14:29:01.082 225859 DEBUG oslo_concurrency.lockutils [req-54663048-335f-4fa6-ad1a-3950f7e92c71 req-8eb0251f-2127-42e9-98a3-61941c4c86ae 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "79b5596e-43c9-4085-9829-454fecf59490-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:29:01 compute-1 nova_compute[225855]: 2026-01-20 14:29:01.083 225859 DEBUG oslo_concurrency.lockutils [req-54663048-335f-4fa6-ad1a-3950f7e92c71 req-8eb0251f-2127-42e9-98a3-61941c4c86ae 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "79b5596e-43c9-4085-9829-454fecf59490-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:29:01 compute-1 nova_compute[225855]: 2026-01-20 14:29:01.083 225859 DEBUG nova.compute.manager [req-54663048-335f-4fa6-ad1a-3950f7e92c71 req-8eb0251f-2127-42e9-98a3-61941c4c86ae 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 79b5596e-43c9-4085-9829-454fecf59490] No waiting events found dispatching network-vif-plugged-bd002580-dd95-49e1-bc34-e85f86272a05 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:29:01 compute-1 nova_compute[225855]: 2026-01-20 14:29:01.083 225859 WARNING nova.compute.manager [req-54663048-335f-4fa6-ad1a-3950f7e92c71 req-8eb0251f-2127-42e9-98a3-61941c4c86ae 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 79b5596e-43c9-4085-9829-454fecf59490] Received unexpected event network-vif-plugged-bd002580-dd95-49e1-bc34-e85f86272a05 for instance with vm_state active and task_state migrating.
Jan 20 14:29:01 compute-1 nova_compute[225855]: 2026-01-20 14:29:01.083 225859 DEBUG nova.compute.manager [req-54663048-335f-4fa6-ad1a-3950f7e92c71 req-8eb0251f-2127-42e9-98a3-61941c4c86ae 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 79b5596e-43c9-4085-9829-454fecf59490] Received event network-changed-bd002580-dd95-49e1-bc34-e85f86272a05 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:29:01 compute-1 nova_compute[225855]: 2026-01-20 14:29:01.084 225859 DEBUG nova.compute.manager [req-54663048-335f-4fa6-ad1a-3950f7e92c71 req-8eb0251f-2127-42e9-98a3-61941c4c86ae 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 79b5596e-43c9-4085-9829-454fecf59490] Refreshing instance network info cache due to event network-changed-bd002580-dd95-49e1-bc34-e85f86272a05. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 14:29:01 compute-1 nova_compute[225855]: 2026-01-20 14:29:01.084 225859 DEBUG oslo_concurrency.lockutils [req-54663048-335f-4fa6-ad1a-3950f7e92c71 req-8eb0251f-2127-42e9-98a3-61941c4c86ae 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-79b5596e-43c9-4085-9829-454fecf59490" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:29:01 compute-1 nova_compute[225855]: 2026-01-20 14:29:01.084 225859 DEBUG oslo_concurrency.lockutils [req-54663048-335f-4fa6-ad1a-3950f7e92c71 req-8eb0251f-2127-42e9-98a3-61941c4c86ae 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-79b5596e-43c9-4085-9829-454fecf59490" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:29:01 compute-1 nova_compute[225855]: 2026-01-20 14:29:01.085 225859 DEBUG nova.network.neutron [req-54663048-335f-4fa6-ad1a-3950f7e92c71 req-8eb0251f-2127-42e9-98a3-61941c4c86ae 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 79b5596e-43c9-4085-9829-454fecf59490] Refreshing network info cache for port bd002580-dd95-49e1-bc34-e85f86272a05 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 14:29:01 compute-1 nova_compute[225855]: 2026-01-20 14:29:01.433 225859 DEBUG nova.virt.libvirt.migration [None req-3e6ecc77-61de-4099-8bb3-ee2d276c7579 f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] [instance: 79b5596e-43c9-4085-9829-454fecf59490] Current 50 elapsed 1 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512
Jan 20 14:29:01 compute-1 nova_compute[225855]: 2026-01-20 14:29:01.436 225859 DEBUG nova.virt.libvirt.migration [None req-3e6ecc77-61de-4099-8bb3-ee2d276c7579 f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] [instance: 79b5596e-43c9-4085-9829-454fecf59490] Downtime does not need to change update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:525
Jan 20 14:29:01 compute-1 nova_compute[225855]: 2026-01-20 14:29:01.812 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768919341.8116124, 79b5596e-43c9-4085-9829-454fecf59490 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:29:01 compute-1 nova_compute[225855]: 2026-01-20 14:29:01.813 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 79b5596e-43c9-4085-9829-454fecf59490] VM Paused (Lifecycle Event)
Jan 20 14:29:01 compute-1 nova_compute[225855]: 2026-01-20 14:29:01.833 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 79b5596e-43c9-4085-9829-454fecf59490] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:29:01 compute-1 nova_compute[225855]: 2026-01-20 14:29:01.840 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 79b5596e-43c9-4085-9829-454fecf59490] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: migrating, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:29:01 compute-1 nova_compute[225855]: 2026-01-20 14:29:01.863 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 79b5596e-43c9-4085-9829-454fecf59490] During sync_power_state the instance has a pending task (migrating). Skip.
Jan 20 14:29:01 compute-1 nova_compute[225855]: 2026-01-20 14:29:01.941 225859 DEBUG nova.virt.libvirt.migration [None req-3e6ecc77-61de-4099-8bb3-ee2d276c7579 f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] [instance: 79b5596e-43c9-4085-9829-454fecf59490] Current 50 elapsed 2 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512
Jan 20 14:29:01 compute-1 nova_compute[225855]: 2026-01-20 14:29:01.942 225859 DEBUG nova.virt.libvirt.migration [None req-3e6ecc77-61de-4099-8bb3-ee2d276c7579 f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] [instance: 79b5596e-43c9-4085-9829-454fecf59490] Downtime does not need to change update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:525
Jan 20 14:29:01 compute-1 kernel: tapbd002580-dd (unregistering): left promiscuous mode
Jan 20 14:29:02 compute-1 NetworkManager[49104]: <info>  [1768919342.0100] device (tapbd002580-dd): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 20 14:29:02 compute-1 nova_compute[225855]: 2026-01-20 14:29:02.022 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:29:02 compute-1 ovn_controller[130490]: 2026-01-20T14:29:02Z|00098|binding|INFO|Releasing lport bd002580-dd95-49e1-bc34-e85f86272a05 from this chassis (sb_readonly=0)
Jan 20 14:29:02 compute-1 ovn_controller[130490]: 2026-01-20T14:29:02Z|00099|binding|INFO|Setting lport bd002580-dd95-49e1-bc34-e85f86272a05 down in Southbound
Jan 20 14:29:02 compute-1 ovn_controller[130490]: 2026-01-20T14:29:02Z|00100|binding|INFO|Removing iface tapbd002580-dd ovn-installed in OVS
Jan 20 14:29:02 compute-1 nova_compute[225855]: 2026-01-20 14:29:02.024 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:29:02 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:29:02.035 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:24:ce:0d 10.100.0.10'], port_security=['fa:16:3e:24:ce:0d 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com,compute-0.ctlplane.example.com', 'activation-strategy': 'rarp', 'additional-chassis-activated': '367c1a2c-b16a-4828-ab5a-626bb50023b4'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '79b5596e-43c9-4085-9829-454fecf59490', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-14f18b27-1594-48d8-a08b-a930f7adbc08', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd15f60b9e48e4175b5520d1e57ed2d3a', 'neutron:revision_number': '18', 'neutron:security_group_ids': '6d729cfd-2f98-4ca5-a524-e543b12b3766', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=02983c41-bbec-48cf-910a-84fed1be783f, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], logical_port=bd002580-dd95-49e1-bc34-e85f86272a05) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:29:02 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:29:02.038 140354 INFO neutron.agent.ovn.metadata.agent [-] Port bd002580-dd95-49e1-bc34-e85f86272a05 in datapath 14f18b27-1594-48d8-a08b-a930f7adbc08 unbound from our chassis
Jan 20 14:29:02 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:29:02.040 140354 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 14f18b27-1594-48d8-a08b-a930f7adbc08
Jan 20 14:29:02 compute-1 nova_compute[225855]: 2026-01-20 14:29:02.052 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:29:02 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:29:02.064 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[e159633d-5608-4aef-9fdd-2a5b45cb1f1b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:29:02 compute-1 systemd[1]: machine-qemu\x2d11\x2dinstance\x2d00000017.scope: Deactivated successfully.
Jan 20 14:29:02 compute-1 systemd[1]: machine-qemu\x2d11\x2dinstance\x2d00000017.scope: Consumed 2.761s CPU time.
Jan 20 14:29:02 compute-1 systemd-machined[194361]: Machine qemu-11-instance-00000017 terminated.
Jan 20 14:29:02 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:29:02.108 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[a2758760-3225-49fe-9a31-9f4bc01c8497]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:29:02 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:29:02.111 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[cf938cea-5f6c-462f-983a-b84f1d793cf7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:29:02 compute-1 podman[238045]: 2026-01-20 14:29:02.123605967 +0000 UTC m=+0.151404868 container health_status 72be95c12e041eb2cd1a16ad70dfa1391d9f03a5f3180a614ead9459dcfcd46f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, org.label-schema.build-date=20251202)
Jan 20 14:29:02 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:29:02.141 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[abad8b4b-d929-4ea6-8d60-1bd313c93011]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:29:02 compute-1 virtqemud[225396]: Unable to get XATTR trusted.libvirt.security.ref_selinux on volumes/volume-47e883f3-6efe-40b3-be28-6c01525dfc0c: No such file or directory
Jan 20 14:29:02 compute-1 virtqemud[225396]: Unable to get XATTR trusted.libvirt.security.ref_dac on volumes/volume-47e883f3-6efe-40b3-be28-6c01525dfc0c: No such file or directory
Jan 20 14:29:02 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:29:02.161 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[686ced00-f413-4407-a1f4-b0c956035cea]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap14f18b27-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7d:1f:17'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 42, 'tx_packets': 7, 'rx_bytes': 2466, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 3, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 42, 'tx_packets': 7, 'rx_bytes': 2466, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 3, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 26], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 430599, 'reachable_time': 25641, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 14, 'inoctets': 1040, 'indelivers': 5, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 14, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 1040, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 14, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 5, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 238081, 'error': None, 'target': 'ovnmeta-14f18b27-1594-48d8-a08b-a930f7adbc08', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:29:02 compute-1 nova_compute[225855]: 2026-01-20 14:29:02.175 225859 DEBUG nova.virt.libvirt.driver [None req-3e6ecc77-61de-4099-8bb3-ee2d276c7579 f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] [instance: 79b5596e-43c9-4085-9829-454fecf59490] Migrate API has completed _live_migration_operation /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10279
Jan 20 14:29:02 compute-1 nova_compute[225855]: 2026-01-20 14:29:02.175 225859 DEBUG nova.virt.libvirt.driver [None req-3e6ecc77-61de-4099-8bb3-ee2d276c7579 f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] [instance: 79b5596e-43c9-4085-9829-454fecf59490] Migration operation thread has finished _live_migration_operation /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10327
Jan 20 14:29:02 compute-1 nova_compute[225855]: 2026-01-20 14:29:02.176 225859 DEBUG nova.virt.libvirt.driver [None req-3e6ecc77-61de-4099-8bb3-ee2d276c7579 f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] [instance: 79b5596e-43c9-4085-9829-454fecf59490] Migration operation thread notification thread_finished /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10630
Jan 20 14:29:02 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:29:02.179 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[ee257a25-eb13-4a52-af49-73760ecea4bf]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap14f18b27-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 430610, 'tstamp': 430610}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 238087, 'error': None, 'target': 'ovnmeta-14f18b27-1594-48d8-a08b-a930f7adbc08', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap14f18b27-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 430612, 'tstamp': 430612}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 238087, 'error': None, 'target': 'ovnmeta-14f18b27-1594-48d8-a08b-a930f7adbc08', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:29:02 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:29:02.181 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap14f18b27-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:29:02 compute-1 nova_compute[225855]: 2026-01-20 14:29:02.213 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:29:02 compute-1 nova_compute[225855]: 2026-01-20 14:29:02.217 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:29:02 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:29:02.218 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap14f18b27-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:29:02 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:29:02.218 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:29:02 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:29:02.219 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap14f18b27-10, col_values=(('external_ids', {'iface-id': 'aa1c73c5-9761-4457-acdc-9f93220f739f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:29:02 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:29:02.219 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:29:02 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:29:02 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:29:02 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:29:02.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:29:02 compute-1 nova_compute[225855]: 2026-01-20 14:29:02.321 225859 DEBUG nova.compute.manager [req-a8c41029-e8f0-4ebd-a84f-b4655a531645 req-9c5a2d83-f75a-4349-8304-114fe71972bd 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 79b5596e-43c9-4085-9829-454fecf59490] Received event network-vif-unplugged-bd002580-dd95-49e1-bc34-e85f86272a05 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:29:02 compute-1 nova_compute[225855]: 2026-01-20 14:29:02.322 225859 DEBUG oslo_concurrency.lockutils [req-a8c41029-e8f0-4ebd-a84f-b4655a531645 req-9c5a2d83-f75a-4349-8304-114fe71972bd 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "79b5596e-43c9-4085-9829-454fecf59490-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:29:02 compute-1 nova_compute[225855]: 2026-01-20 14:29:02.322 225859 DEBUG oslo_concurrency.lockutils [req-a8c41029-e8f0-4ebd-a84f-b4655a531645 req-9c5a2d83-f75a-4349-8304-114fe71972bd 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "79b5596e-43c9-4085-9829-454fecf59490-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:29:02 compute-1 nova_compute[225855]: 2026-01-20 14:29:02.322 225859 DEBUG oslo_concurrency.lockutils [req-a8c41029-e8f0-4ebd-a84f-b4655a531645 req-9c5a2d83-f75a-4349-8304-114fe71972bd 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "79b5596e-43c9-4085-9829-454fecf59490-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:29:02 compute-1 nova_compute[225855]: 2026-01-20 14:29:02.323 225859 DEBUG nova.compute.manager [req-a8c41029-e8f0-4ebd-a84f-b4655a531645 req-9c5a2d83-f75a-4349-8304-114fe71972bd 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 79b5596e-43c9-4085-9829-454fecf59490] No waiting events found dispatching network-vif-unplugged-bd002580-dd95-49e1-bc34-e85f86272a05 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:29:02 compute-1 nova_compute[225855]: 2026-01-20 14:29:02.323 225859 DEBUG nova.compute.manager [req-a8c41029-e8f0-4ebd-a84f-b4655a531645 req-9c5a2d83-f75a-4349-8304-114fe71972bd 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 79b5596e-43c9-4085-9829-454fecf59490] Received event network-vif-unplugged-bd002580-dd95-49e1-bc34-e85f86272a05 for instance with task_state migrating. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 20 14:29:02 compute-1 nova_compute[225855]: 2026-01-20 14:29:02.445 225859 DEBUG nova.virt.libvirt.guest [None req-3e6ecc77-61de-4099-8bb3-ee2d276c7579 f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] Domain has shutdown/gone away: Domain not found: no domain with matching uuid '79b5596e-43c9-4085-9829-454fecf59490' (instance-00000017) get_job_info /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:688
Jan 20 14:29:02 compute-1 nova_compute[225855]: 2026-01-20 14:29:02.445 225859 INFO nova.virt.libvirt.driver [None req-3e6ecc77-61de-4099-8bb3-ee2d276c7579 f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] [instance: 79b5596e-43c9-4085-9829-454fecf59490] Migration operation has completed
Jan 20 14:29:02 compute-1 nova_compute[225855]: 2026-01-20 14:29:02.446 225859 INFO nova.compute.manager [None req-3e6ecc77-61de-4099-8bb3-ee2d276c7579 f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] [instance: 79b5596e-43c9-4085-9829-454fecf59490] _post_live_migration() is started..
Jan 20 14:29:02 compute-1 ceph-mon[81775]: pgmap v1144: 321 pgs: 321 active+clean; 360 MiB data, 516 MiB used, 20 GiB / 21 GiB avail; 841 KiB/s rd, 2.1 MiB/s wr, 226 op/s
Jan 20 14:29:02 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:29:02 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:29:02 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:29:02 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 14:29:02 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:29:02 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 14:29:02 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 14:29:02 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:29:02 compute-1 systemd[1]: machine-qemu\x2d12\x2dinstance\x2d00000018.scope: Deactivated successfully.
Jan 20 14:29:02 compute-1 systemd[1]: machine-qemu\x2d12\x2dinstance\x2d00000018.scope: Consumed 15.206s CPU time.
Jan 20 14:29:02 compute-1 systemd-machined[194361]: Machine qemu-12-instance-00000018 terminated.
Jan 20 14:29:02 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:29:02 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:29:02 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:29:02.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:29:03 compute-1 nova_compute[225855]: 2026-01-20 14:29:03.330 225859 INFO nova.virt.libvirt.driver [None req-21751958-ead6-449d-8a04-ee82f802da6d 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] [instance: 87fe16d6-774e-4002-8df4-9eb202621ab9] Instance shutdown successfully after 13 seconds.
Jan 20 14:29:03 compute-1 nova_compute[225855]: 2026-01-20 14:29:03.338 225859 INFO nova.virt.libvirt.driver [-] [instance: 87fe16d6-774e-4002-8df4-9eb202621ab9] Instance destroyed successfully.
Jan 20 14:29:03 compute-1 nova_compute[225855]: 2026-01-20 14:29:03.343 225859 DEBUG nova.virt.libvirt.driver [None req-21751958-ead6-449d-8a04-ee82f802da6d 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] skipping disk for instance-00000018 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 14:29:03 compute-1 nova_compute[225855]: 2026-01-20 14:29:03.344 225859 DEBUG nova.virt.libvirt.driver [None req-21751958-ead6-449d-8a04-ee82f802da6d 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] skipping disk for instance-00000018 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 14:29:03 compute-1 nova_compute[225855]: 2026-01-20 14:29:03.375 225859 DEBUG nova.network.neutron [req-54663048-335f-4fa6-ad1a-3950f7e92c71 req-8eb0251f-2127-42e9-98a3-61941c4c86ae 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 79b5596e-43c9-4085-9829-454fecf59490] Updated VIF entry in instance network info cache for port bd002580-dd95-49e1-bc34-e85f86272a05. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 14:29:03 compute-1 nova_compute[225855]: 2026-01-20 14:29:03.376 225859 DEBUG nova.network.neutron [req-54663048-335f-4fa6-ad1a-3950f7e92c71 req-8eb0251f-2127-42e9-98a3-61941c4c86ae 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 79b5596e-43c9-4085-9829-454fecf59490] Updating instance_info_cache with network_info: [{"id": "bd002580-dd95-49e1-bc34-e85f86272a05", "address": "fa:16:3e:24:ce:0d", "network": {"id": "14f18b27-1594-48d8-a08b-a930f7adbc08", "bridge": "br-int", "label": "tempest-LiveMigrationTest-2126108622-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d15f60b9e48e4175b5520d1e57ed2d3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbd002580-dd", "ovs_interfaceid": "bd002580-dd95-49e1-bc34-e85f86272a05", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true, "migrating_to": "compute-0.ctlplane.example.com"}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:29:03 compute-1 nova_compute[225855]: 2026-01-20 14:29:03.429 225859 DEBUG oslo_concurrency.lockutils [req-54663048-335f-4fa6-ad1a-3950f7e92c71 req-8eb0251f-2127-42e9-98a3-61941c4c86ae 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-79b5596e-43c9-4085-9829-454fecf59490" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:29:03 compute-1 nova_compute[225855]: 2026-01-20 14:29:03.500 225859 DEBUG oslo_concurrency.lockutils [None req-21751958-ead6-449d-8a04-ee82f802da6d 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Acquiring lock "87fe16d6-774e-4002-8df4-9eb202621ab9-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:29:03 compute-1 nova_compute[225855]: 2026-01-20 14:29:03.501 225859 DEBUG oslo_concurrency.lockutils [None req-21751958-ead6-449d-8a04-ee82f802da6d 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Lock "87fe16d6-774e-4002-8df4-9eb202621ab9-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:29:03 compute-1 nova_compute[225855]: 2026-01-20 14:29:03.501 225859 DEBUG oslo_concurrency.lockutils [None req-21751958-ead6-449d-8a04-ee82f802da6d 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Lock "87fe16d6-774e-4002-8df4-9eb202621ab9-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:29:03 compute-1 nova_compute[225855]: 2026-01-20 14:29:03.686 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:29:04 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:29:04 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:29:04 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:29:04.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:29:04 compute-1 nova_compute[225855]: 2026-01-20 14:29:04.500 225859 DEBUG nova.compute.manager [req-f379ab50-2398-4aeb-a00d-d95717a1a867 req-289af249-87b5-4fc0-9118-2d2a68bd7201 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 79b5596e-43c9-4085-9829-454fecf59490] Received event network-vif-plugged-bd002580-dd95-49e1-bc34-e85f86272a05 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:29:04 compute-1 nova_compute[225855]: 2026-01-20 14:29:04.500 225859 DEBUG oslo_concurrency.lockutils [req-f379ab50-2398-4aeb-a00d-d95717a1a867 req-289af249-87b5-4fc0-9118-2d2a68bd7201 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "79b5596e-43c9-4085-9829-454fecf59490-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:29:04 compute-1 nova_compute[225855]: 2026-01-20 14:29:04.501 225859 DEBUG oslo_concurrency.lockutils [req-f379ab50-2398-4aeb-a00d-d95717a1a867 req-289af249-87b5-4fc0-9118-2d2a68bd7201 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "79b5596e-43c9-4085-9829-454fecf59490-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:29:04 compute-1 nova_compute[225855]: 2026-01-20 14:29:04.502 225859 DEBUG oslo_concurrency.lockutils [req-f379ab50-2398-4aeb-a00d-d95717a1a867 req-289af249-87b5-4fc0-9118-2d2a68bd7201 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "79b5596e-43c9-4085-9829-454fecf59490-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:29:04 compute-1 nova_compute[225855]: 2026-01-20 14:29:04.502 225859 DEBUG nova.compute.manager [req-f379ab50-2398-4aeb-a00d-d95717a1a867 req-289af249-87b5-4fc0-9118-2d2a68bd7201 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 79b5596e-43c9-4085-9829-454fecf59490] No waiting events found dispatching network-vif-plugged-bd002580-dd95-49e1-bc34-e85f86272a05 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:29:04 compute-1 nova_compute[225855]: 2026-01-20 14:29:04.503 225859 WARNING nova.compute.manager [req-f379ab50-2398-4aeb-a00d-d95717a1a867 req-289af249-87b5-4fc0-9118-2d2a68bd7201 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 79b5596e-43c9-4085-9829-454fecf59490] Received unexpected event network-vif-plugged-bd002580-dd95-49e1-bc34-e85f86272a05 for instance with vm_state active and task_state migrating.
Jan 20 14:29:04 compute-1 nova_compute[225855]: 2026-01-20 14:29:04.503 225859 DEBUG nova.compute.manager [req-f379ab50-2398-4aeb-a00d-d95717a1a867 req-289af249-87b5-4fc0-9118-2d2a68bd7201 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 79b5596e-43c9-4085-9829-454fecf59490] Received event network-vif-plugged-bd002580-dd95-49e1-bc34-e85f86272a05 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:29:04 compute-1 nova_compute[225855]: 2026-01-20 14:29:04.504 225859 DEBUG oslo_concurrency.lockutils [req-f379ab50-2398-4aeb-a00d-d95717a1a867 req-289af249-87b5-4fc0-9118-2d2a68bd7201 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "79b5596e-43c9-4085-9829-454fecf59490-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:29:04 compute-1 nova_compute[225855]: 2026-01-20 14:29:04.504 225859 DEBUG oslo_concurrency.lockutils [req-f379ab50-2398-4aeb-a00d-d95717a1a867 req-289af249-87b5-4fc0-9118-2d2a68bd7201 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "79b5596e-43c9-4085-9829-454fecf59490-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:29:04 compute-1 nova_compute[225855]: 2026-01-20 14:29:04.504 225859 DEBUG oslo_concurrency.lockutils [req-f379ab50-2398-4aeb-a00d-d95717a1a867 req-289af249-87b5-4fc0-9118-2d2a68bd7201 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "79b5596e-43c9-4085-9829-454fecf59490-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:29:04 compute-1 nova_compute[225855]: 2026-01-20 14:29:04.505 225859 DEBUG nova.compute.manager [req-f379ab50-2398-4aeb-a00d-d95717a1a867 req-289af249-87b5-4fc0-9118-2d2a68bd7201 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 79b5596e-43c9-4085-9829-454fecf59490] No waiting events found dispatching network-vif-plugged-bd002580-dd95-49e1-bc34-e85f86272a05 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:29:04 compute-1 nova_compute[225855]: 2026-01-20 14:29:04.505 225859 WARNING nova.compute.manager [req-f379ab50-2398-4aeb-a00d-d95717a1a867 req-289af249-87b5-4fc0-9118-2d2a68bd7201 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 79b5596e-43c9-4085-9829-454fecf59490] Received unexpected event network-vif-plugged-bd002580-dd95-49e1-bc34-e85f86272a05 for instance with vm_state active and task_state migrating.
Jan 20 14:29:04 compute-1 ceph-mon[81775]: pgmap v1145: 321 pgs: 321 active+clean; 360 MiB data, 516 MiB used, 20 GiB / 21 GiB avail; 320 KiB/s rd, 2.1 MiB/s wr, 179 op/s
Jan 20 14:29:04 compute-1 nova_compute[225855]: 2026-01-20 14:29:04.692 225859 DEBUG nova.compute.manager [req-2d43485c-63ac-4e27-8096-eb6987ead2c2 req-8ee442b6-d76e-4b99-842e-ba8d53557c71 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 79b5596e-43c9-4085-9829-454fecf59490] Received event network-vif-unplugged-bd002580-dd95-49e1-bc34-e85f86272a05 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:29:04 compute-1 nova_compute[225855]: 2026-01-20 14:29:04.693 225859 DEBUG oslo_concurrency.lockutils [req-2d43485c-63ac-4e27-8096-eb6987ead2c2 req-8ee442b6-d76e-4b99-842e-ba8d53557c71 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "79b5596e-43c9-4085-9829-454fecf59490-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:29:04 compute-1 nova_compute[225855]: 2026-01-20 14:29:04.693 225859 DEBUG oslo_concurrency.lockutils [req-2d43485c-63ac-4e27-8096-eb6987ead2c2 req-8ee442b6-d76e-4b99-842e-ba8d53557c71 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "79b5596e-43c9-4085-9829-454fecf59490-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:29:04 compute-1 nova_compute[225855]: 2026-01-20 14:29:04.693 225859 DEBUG oslo_concurrency.lockutils [req-2d43485c-63ac-4e27-8096-eb6987ead2c2 req-8ee442b6-d76e-4b99-842e-ba8d53557c71 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "79b5596e-43c9-4085-9829-454fecf59490-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:29:04 compute-1 nova_compute[225855]: 2026-01-20 14:29:04.694 225859 DEBUG nova.compute.manager [req-2d43485c-63ac-4e27-8096-eb6987ead2c2 req-8ee442b6-d76e-4b99-842e-ba8d53557c71 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 79b5596e-43c9-4085-9829-454fecf59490] No waiting events found dispatching network-vif-unplugged-bd002580-dd95-49e1-bc34-e85f86272a05 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:29:04 compute-1 nova_compute[225855]: 2026-01-20 14:29:04.694 225859 DEBUG nova.compute.manager [req-2d43485c-63ac-4e27-8096-eb6987ead2c2 req-8ee442b6-d76e-4b99-842e-ba8d53557c71 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 79b5596e-43c9-4085-9829-454fecf59490] Received event network-vif-unplugged-bd002580-dd95-49e1-bc34-e85f86272a05 for instance with task_state migrating. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 20 14:29:04 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:29:04 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:29:04 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:29:04 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:29:04.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:29:04 compute-1 nova_compute[225855]: 2026-01-20 14:29:04.994 225859 DEBUG nova.network.neutron [None req-3e6ecc77-61de-4099-8bb3-ee2d276c7579 f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] Activated binding for port bd002580-dd95-49e1-bc34-e85f86272a05 and host compute-0.ctlplane.example.com migrate_instance_start /usr/lib/python3.9/site-packages/nova/network/neutron.py:3181
Jan 20 14:29:04 compute-1 nova_compute[225855]: 2026-01-20 14:29:04.995 225859 DEBUG nova.compute.manager [None req-3e6ecc77-61de-4099-8bb3-ee2d276c7579 f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] [instance: 79b5596e-43c9-4085-9829-454fecf59490] Calling driver.post_live_migration_at_source with original source VIFs from migrate_data: [{"id": "bd002580-dd95-49e1-bc34-e85f86272a05", "address": "fa:16:3e:24:ce:0d", "network": {"id": "14f18b27-1594-48d8-a08b-a930f7adbc08", "bridge": "br-int", "label": "tempest-LiveMigrationTest-2126108622-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d15f60b9e48e4175b5520d1e57ed2d3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbd002580-dd", "ovs_interfaceid": "bd002580-dd95-49e1-bc34-e85f86272a05", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] _post_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:9326
Jan 20 14:29:04 compute-1 nova_compute[225855]: 2026-01-20 14:29:04.997 225859 DEBUG nova.virt.libvirt.vif [None req-3e6ecc77-61de-4099-8bb3-ee2d276c7579 f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-20T14:28:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-LiveMigrationTest-server-1483268234',display_name='tempest-LiveMigrationTest-server-1483268234',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-livemigrationtest-server-1483268234',id=23,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-20T14:28:24Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='d15f60b9e48e4175b5520d1e57ed2d3a',ramdisk_id='',reservation_id='r-jglb1q09',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-LiveMigrationTest-864280704',owner_user_name='tempest-LiveMigrationTest-864280704-project-member'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T14:28:51Z,user_data=None,user_id='bce7fcbd19554e29bb80c5b93b7dd3c9',uuid=79b5596e-43c9-4085-9829-454fecf59490,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "bd002580-dd95-49e1-bc34-e85f86272a05", "address": "fa:16:3e:24:ce:0d", "network": {"id": "14f18b27-1594-48d8-a08b-a930f7adbc08", "bridge": "br-int", "label": "tempest-LiveMigrationTest-2126108622-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d15f60b9e48e4175b5520d1e57ed2d3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbd002580-dd", "ovs_interfaceid": "bd002580-dd95-49e1-bc34-e85f86272a05", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 20 14:29:04 compute-1 nova_compute[225855]: 2026-01-20 14:29:04.997 225859 DEBUG nova.network.os_vif_util [None req-3e6ecc77-61de-4099-8bb3-ee2d276c7579 f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] Converting VIF {"id": "bd002580-dd95-49e1-bc34-e85f86272a05", "address": "fa:16:3e:24:ce:0d", "network": {"id": "14f18b27-1594-48d8-a08b-a930f7adbc08", "bridge": "br-int", "label": "tempest-LiveMigrationTest-2126108622-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d15f60b9e48e4175b5520d1e57ed2d3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbd002580-dd", "ovs_interfaceid": "bd002580-dd95-49e1-bc34-e85f86272a05", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:29:04 compute-1 nova_compute[225855]: 2026-01-20 14:29:04.999 225859 DEBUG nova.network.os_vif_util [None req-3e6ecc77-61de-4099-8bb3-ee2d276c7579 f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:24:ce:0d,bridge_name='br-int',has_traffic_filtering=True,id=bd002580-dd95-49e1-bc34-e85f86272a05,network=Network(14f18b27-1594-48d8-a08b-a930f7adbc08),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbd002580-dd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:29:05 compute-1 nova_compute[225855]: 2026-01-20 14:29:04.999 225859 DEBUG os_vif [None req-3e6ecc77-61de-4099-8bb3-ee2d276c7579 f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:24:ce:0d,bridge_name='br-int',has_traffic_filtering=True,id=bd002580-dd95-49e1-bc34-e85f86272a05,network=Network(14f18b27-1594-48d8-a08b-a930f7adbc08),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbd002580-dd') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 20 14:29:05 compute-1 nova_compute[225855]: 2026-01-20 14:29:05.002 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:29:05 compute-1 nova_compute[225855]: 2026-01-20 14:29:05.002 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbd002580-dd, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:29:05 compute-1 nova_compute[225855]: 2026-01-20 14:29:05.005 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:29:05 compute-1 nova_compute[225855]: 2026-01-20 14:29:05.006 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:29:05 compute-1 nova_compute[225855]: 2026-01-20 14:29:05.010 225859 INFO os_vif [None req-3e6ecc77-61de-4099-8bb3-ee2d276c7579 f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:24:ce:0d,bridge_name='br-int',has_traffic_filtering=True,id=bd002580-dd95-49e1-bc34-e85f86272a05,network=Network(14f18b27-1594-48d8-a08b-a930f7adbc08),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbd002580-dd')
Jan 20 14:29:05 compute-1 nova_compute[225855]: 2026-01-20 14:29:05.010 225859 DEBUG oslo_concurrency.lockutils [None req-3e6ecc77-61de-4099-8bb3-ee2d276c7579 f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:29:05 compute-1 nova_compute[225855]: 2026-01-20 14:29:05.011 225859 DEBUG oslo_concurrency.lockutils [None req-3e6ecc77-61de-4099-8bb3-ee2d276c7579 f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:29:05 compute-1 nova_compute[225855]: 2026-01-20 14:29:05.011 225859 DEBUG oslo_concurrency.lockutils [None req-3e6ecc77-61de-4099-8bb3-ee2d276c7579 f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:29:05 compute-1 nova_compute[225855]: 2026-01-20 14:29:05.012 225859 DEBUG nova.compute.manager [None req-3e6ecc77-61de-4099-8bb3-ee2d276c7579 f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] [instance: 79b5596e-43c9-4085-9829-454fecf59490] Calling driver.cleanup from _post_live_migration _post_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:9349
Jan 20 14:29:05 compute-1 nova_compute[225855]: 2026-01-20 14:29:05.013 225859 INFO nova.virt.libvirt.driver [None req-3e6ecc77-61de-4099-8bb3-ee2d276c7579 f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] [instance: 79b5596e-43c9-4085-9829-454fecf59490] Deleting instance files /var/lib/nova/instances/79b5596e-43c9-4085-9829-454fecf59490_del
Jan 20 14:29:05 compute-1 nova_compute[225855]: 2026-01-20 14:29:05.013 225859 INFO nova.virt.libvirt.driver [None req-3e6ecc77-61de-4099-8bb3-ee2d276c7579 f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] [instance: 79b5596e-43c9-4085-9829-454fecf59490] Deletion of /var/lib/nova/instances/79b5596e-43c9-4085-9829-454fecf59490_del complete
Jan 20 14:29:05 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e157 e157: 3 total, 3 up, 3 in
Jan 20 14:29:06 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:29:06 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:29:06 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:29:06.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:29:06 compute-1 ceph-mon[81775]: pgmap v1146: 321 pgs: 321 active+clean; 360 MiB data, 517 MiB used, 20 GiB / 21 GiB avail; 353 KiB/s rd, 2.1 MiB/s wr, 190 op/s
Jan 20 14:29:06 compute-1 ceph-mon[81775]: osdmap e157: 3 total, 3 up, 3 in
Jan 20 14:29:06 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/2168453940' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:29:06 compute-1 nova_compute[225855]: 2026-01-20 14:29:06.591 225859 DEBUG nova.compute.manager [req-82ca5b27-87b8-4179-b3d9-a7fab695433e req-8378bd0e-ee95-41dd-915b-88d906712153 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 79b5596e-43c9-4085-9829-454fecf59490] Received event network-vif-plugged-bd002580-dd95-49e1-bc34-e85f86272a05 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:29:06 compute-1 nova_compute[225855]: 2026-01-20 14:29:06.591 225859 DEBUG oslo_concurrency.lockutils [req-82ca5b27-87b8-4179-b3d9-a7fab695433e req-8378bd0e-ee95-41dd-915b-88d906712153 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "79b5596e-43c9-4085-9829-454fecf59490-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:29:06 compute-1 nova_compute[225855]: 2026-01-20 14:29:06.592 225859 DEBUG oslo_concurrency.lockutils [req-82ca5b27-87b8-4179-b3d9-a7fab695433e req-8378bd0e-ee95-41dd-915b-88d906712153 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "79b5596e-43c9-4085-9829-454fecf59490-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:29:06 compute-1 nova_compute[225855]: 2026-01-20 14:29:06.592 225859 DEBUG oslo_concurrency.lockutils [req-82ca5b27-87b8-4179-b3d9-a7fab695433e req-8378bd0e-ee95-41dd-915b-88d906712153 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "79b5596e-43c9-4085-9829-454fecf59490-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:29:06 compute-1 nova_compute[225855]: 2026-01-20 14:29:06.592 225859 DEBUG nova.compute.manager [req-82ca5b27-87b8-4179-b3d9-a7fab695433e req-8378bd0e-ee95-41dd-915b-88d906712153 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 79b5596e-43c9-4085-9829-454fecf59490] No waiting events found dispatching network-vif-plugged-bd002580-dd95-49e1-bc34-e85f86272a05 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:29:06 compute-1 nova_compute[225855]: 2026-01-20 14:29:06.593 225859 WARNING nova.compute.manager [req-82ca5b27-87b8-4179-b3d9-a7fab695433e req-8378bd0e-ee95-41dd-915b-88d906712153 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 79b5596e-43c9-4085-9829-454fecf59490] Received unexpected event network-vif-plugged-bd002580-dd95-49e1-bc34-e85f86272a05 for instance with vm_state active and task_state migrating.
Jan 20 14:29:06 compute-1 nova_compute[225855]: 2026-01-20 14:29:06.593 225859 DEBUG nova.compute.manager [req-82ca5b27-87b8-4179-b3d9-a7fab695433e req-8378bd0e-ee95-41dd-915b-88d906712153 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 79b5596e-43c9-4085-9829-454fecf59490] Received event network-vif-plugged-bd002580-dd95-49e1-bc34-e85f86272a05 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:29:06 compute-1 nova_compute[225855]: 2026-01-20 14:29:06.593 225859 DEBUG oslo_concurrency.lockutils [req-82ca5b27-87b8-4179-b3d9-a7fab695433e req-8378bd0e-ee95-41dd-915b-88d906712153 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "79b5596e-43c9-4085-9829-454fecf59490-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:29:06 compute-1 nova_compute[225855]: 2026-01-20 14:29:06.593 225859 DEBUG oslo_concurrency.lockutils [req-82ca5b27-87b8-4179-b3d9-a7fab695433e req-8378bd0e-ee95-41dd-915b-88d906712153 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "79b5596e-43c9-4085-9829-454fecf59490-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:29:06 compute-1 nova_compute[225855]: 2026-01-20 14:29:06.594 225859 DEBUG oslo_concurrency.lockutils [req-82ca5b27-87b8-4179-b3d9-a7fab695433e req-8378bd0e-ee95-41dd-915b-88d906712153 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "79b5596e-43c9-4085-9829-454fecf59490-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:29:06 compute-1 nova_compute[225855]: 2026-01-20 14:29:06.594 225859 DEBUG nova.compute.manager [req-82ca5b27-87b8-4179-b3d9-a7fab695433e req-8378bd0e-ee95-41dd-915b-88d906712153 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 79b5596e-43c9-4085-9829-454fecf59490] No waiting events found dispatching network-vif-plugged-bd002580-dd95-49e1-bc34-e85f86272a05 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:29:06 compute-1 nova_compute[225855]: 2026-01-20 14:29:06.594 225859 WARNING nova.compute.manager [req-82ca5b27-87b8-4179-b3d9-a7fab695433e req-8378bd0e-ee95-41dd-915b-88d906712153 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 79b5596e-43c9-4085-9829-454fecf59490] Received unexpected event network-vif-plugged-bd002580-dd95-49e1-bc34-e85f86272a05 for instance with vm_state active and task_state migrating.
Jan 20 14:29:06 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:29:06 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:29:06 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:29:06.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:29:07 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/2585897243' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:29:08 compute-1 sudo[238100]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:29:08 compute-1 sudo[238100]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:29:08 compute-1 sudo[238100]: pam_unix(sudo:session): session closed for user root
Jan 20 14:29:08 compute-1 sudo[238125]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 14:29:08 compute-1 sudo[238125]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:29:08 compute-1 sudo[238125]: pam_unix(sudo:session): session closed for user root
Jan 20 14:29:08 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:29:08 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:29:08 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:29:08.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:29:08 compute-1 ceph-mon[81775]: pgmap v1148: 321 pgs: 321 active+clean; 360 MiB data, 517 MiB used, 20 GiB / 21 GiB avail; 337 KiB/s rd, 2.6 MiB/s wr, 83 op/s
Jan 20 14:29:08 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:29:08 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:29:08 compute-1 nova_compute[225855]: 2026-01-20 14:29:08.732 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:29:08 compute-1 nova_compute[225855]: 2026-01-20 14:29:08.919 225859 DEBUG oslo_concurrency.lockutils [None req-5094afda-e7db-467d-b2b0-a6e5c26bf603 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Acquiring lock "87fe16d6-774e-4002-8df4-9eb202621ab9" by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:29:08 compute-1 nova_compute[225855]: 2026-01-20 14:29:08.920 225859 DEBUG oslo_concurrency.lockutils [None req-5094afda-e7db-467d-b2b0-a6e5c26bf603 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Lock "87fe16d6-774e-4002-8df4-9eb202621ab9" acquired by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:29:08 compute-1 nova_compute[225855]: 2026-01-20 14:29:08.920 225859 DEBUG nova.compute.manager [None req-5094afda-e7db-467d-b2b0-a6e5c26bf603 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] [instance: 87fe16d6-774e-4002-8df4-9eb202621ab9] Going to confirm migration 7 do_confirm_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:4679
Jan 20 14:29:08 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:29:08 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:29:08 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:29:08.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:29:09 compute-1 nova_compute[225855]: 2026-01-20 14:29:09.117 225859 DEBUG oslo_concurrency.lockutils [None req-5094afda-e7db-467d-b2b0-a6e5c26bf603 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Acquiring lock "refresh_cache-87fe16d6-774e-4002-8df4-9eb202621ab9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:29:09 compute-1 nova_compute[225855]: 2026-01-20 14:29:09.117 225859 DEBUG oslo_concurrency.lockutils [None req-5094afda-e7db-467d-b2b0-a6e5c26bf603 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Acquired lock "refresh_cache-87fe16d6-774e-4002-8df4-9eb202621ab9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:29:09 compute-1 nova_compute[225855]: 2026-01-20 14:29:09.118 225859 DEBUG nova.network.neutron [None req-5094afda-e7db-467d-b2b0-a6e5c26bf603 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] [instance: 87fe16d6-774e-4002-8df4-9eb202621ab9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 14:29:09 compute-1 nova_compute[225855]: 2026-01-20 14:29:09.118 225859 DEBUG nova.objects.instance [None req-5094afda-e7db-467d-b2b0-a6e5c26bf603 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Lazy-loading 'info_cache' on Instance uuid 87fe16d6-774e-4002-8df4-9eb202621ab9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:29:09 compute-1 nova_compute[225855]: 2026-01-20 14:29:09.277 225859 DEBUG nova.network.neutron [None req-5094afda-e7db-467d-b2b0-a6e5c26bf603 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] [instance: 87fe16d6-774e-4002-8df4-9eb202621ab9] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 20 14:29:09 compute-1 nova_compute[225855]: 2026-01-20 14:29:09.540 225859 DEBUG nova.network.neutron [None req-5094afda-e7db-467d-b2b0-a6e5c26bf603 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] [instance: 87fe16d6-774e-4002-8df4-9eb202621ab9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:29:09 compute-1 nova_compute[225855]: 2026-01-20 14:29:09.557 225859 DEBUG oslo_concurrency.lockutils [None req-5094afda-e7db-467d-b2b0-a6e5c26bf603 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Releasing lock "refresh_cache-87fe16d6-774e-4002-8df4-9eb202621ab9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:29:09 compute-1 nova_compute[225855]: 2026-01-20 14:29:09.558 225859 DEBUG nova.objects.instance [None req-5094afda-e7db-467d-b2b0-a6e5c26bf603 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Lazy-loading 'migration_context' on Instance uuid 87fe16d6-774e-4002-8df4-9eb202621ab9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:29:09 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/1324916243' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:29:09 compute-1 nova_compute[225855]: 2026-01-20 14:29:09.681 225859 DEBUG nova.storage.rbd_utils [None req-5094afda-e7db-467d-b2b0-a6e5c26bf603 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] removing snapshot(nova-resize) on rbd image(87fe16d6-774e-4002-8df4-9eb202621ab9_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Jan 20 14:29:09 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e157 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:29:10 compute-1 nova_compute[225855]: 2026-01-20 14:29:10.041 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:29:10 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:29:10 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:29:10 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:29:10.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:29:10 compute-1 ceph-mon[81775]: pgmap v1149: 321 pgs: 321 active+clean; 360 MiB data, 517 MiB used, 20 GiB / 21 GiB avail; 917 KiB/s rd, 2.4 MiB/s wr, 97 op/s
Jan 20 14:29:10 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e158 e158: 3 total, 3 up, 3 in
Jan 20 14:29:10 compute-1 nova_compute[225855]: 2026-01-20 14:29:10.686 225859 DEBUG oslo_concurrency.lockutils [None req-5094afda-e7db-467d-b2b0-a6e5c26bf603 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:29:10 compute-1 nova_compute[225855]: 2026-01-20 14:29:10.686 225859 DEBUG oslo_concurrency.lockutils [None req-5094afda-e7db-467d-b2b0-a6e5c26bf603 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:29:10 compute-1 nova_compute[225855]: 2026-01-20 14:29:10.805 225859 DEBUG oslo_concurrency.processutils [None req-5094afda-e7db-467d-b2b0-a6e5c26bf603 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:29:10 compute-1 nova_compute[225855]: 2026-01-20 14:29:10.836 225859 DEBUG oslo_concurrency.lockutils [None req-3e6ecc77-61de-4099-8bb3-ee2d276c7579 f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] Acquiring lock "79b5596e-43c9-4085-9829-454fecf59490-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:29:10 compute-1 nova_compute[225855]: 2026-01-20 14:29:10.837 225859 DEBUG oslo_concurrency.lockutils [None req-3e6ecc77-61de-4099-8bb3-ee2d276c7579 f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] Lock "79b5596e-43c9-4085-9829-454fecf59490-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:29:10 compute-1 nova_compute[225855]: 2026-01-20 14:29:10.837 225859 DEBUG oslo_concurrency.lockutils [None req-3e6ecc77-61de-4099-8bb3-ee2d276c7579 f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] Lock "79b5596e-43c9-4085-9829-454fecf59490-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:29:10 compute-1 nova_compute[225855]: 2026-01-20 14:29:10.861 225859 DEBUG oslo_concurrency.lockutils [None req-3e6ecc77-61de-4099-8bb3-ee2d276c7579 f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:29:10 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:29:10 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:29:10 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:29:10.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:29:11 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:29:11 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/107033582' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:29:11 compute-1 nova_compute[225855]: 2026-01-20 14:29:11.266 225859 DEBUG oslo_concurrency.processutils [None req-5094afda-e7db-467d-b2b0-a6e5c26bf603 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:29:11 compute-1 nova_compute[225855]: 2026-01-20 14:29:11.275 225859 DEBUG nova.compute.provider_tree [None req-5094afda-e7db-467d-b2b0-a6e5c26bf603 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:29:11 compute-1 nova_compute[225855]: 2026-01-20 14:29:11.293 225859 DEBUG nova.scheduler.client.report [None req-5094afda-e7db-467d-b2b0-a6e5c26bf603 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:29:11 compute-1 nova_compute[225855]: 2026-01-20 14:29:11.341 225859 DEBUG oslo_concurrency.lockutils [None req-5094afda-e7db-467d-b2b0-a6e5c26bf603 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" :: held 0.655s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:29:11 compute-1 nova_compute[225855]: 2026-01-20 14:29:11.346 225859 DEBUG oslo_concurrency.lockutils [None req-3e6ecc77-61de-4099-8bb3-ee2d276c7579 f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.485s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:29:11 compute-1 nova_compute[225855]: 2026-01-20 14:29:11.347 225859 DEBUG oslo_concurrency.lockutils [None req-3e6ecc77-61de-4099-8bb3-ee2d276c7579 f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:29:11 compute-1 nova_compute[225855]: 2026-01-20 14:29:11.347 225859 DEBUG nova.compute.resource_tracker [None req-3e6ecc77-61de-4099-8bb3-ee2d276c7579 f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 14:29:11 compute-1 nova_compute[225855]: 2026-01-20 14:29:11.348 225859 DEBUG oslo_concurrency.processutils [None req-3e6ecc77-61de-4099-8bb3-ee2d276c7579 f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:29:11 compute-1 nova_compute[225855]: 2026-01-20 14:29:11.489 225859 INFO nova.scheduler.client.report [None req-5094afda-e7db-467d-b2b0-a6e5c26bf603 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Deleted allocation for migration 8dc09b06-46b2-4315-8857-eb43dfbe98ff
Jan 20 14:29:11 compute-1 nova_compute[225855]: 2026-01-20 14:29:11.573 225859 DEBUG oslo_concurrency.lockutils [None req-5094afda-e7db-467d-b2b0-a6e5c26bf603 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Lock "87fe16d6-774e-4002-8df4-9eb202621ab9" "released" by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" :: held 2.653s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:29:11 compute-1 ceph-mon[81775]: osdmap e158: 3 total, 3 up, 3 in
Jan 20 14:29:11 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/909451222' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:29:11 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/3359295586' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:29:11 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/107033582' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:29:11 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:29:11 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/549940775' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:29:11 compute-1 nova_compute[225855]: 2026-01-20 14:29:11.772 225859 DEBUG oslo_concurrency.processutils [None req-3e6ecc77-61de-4099-8bb3-ee2d276c7579 f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.424s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:29:11 compute-1 nova_compute[225855]: 2026-01-20 14:29:11.872 225859 DEBUG nova.virt.libvirt.driver [None req-3e6ecc77-61de-4099-8bb3-ee2d276c7579 f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] skipping disk for instance-00000016 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 14:29:11 compute-1 nova_compute[225855]: 2026-01-20 14:29:11.873 225859 DEBUG nova.virt.libvirt.driver [None req-3e6ecc77-61de-4099-8bb3-ee2d276c7579 f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] skipping disk for instance-00000016 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 14:29:11 compute-1 nova_compute[225855]: 2026-01-20 14:29:11.877 225859 DEBUG nova.virt.libvirt.driver [None req-3e6ecc77-61de-4099-8bb3-ee2d276c7579 f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] skipping disk for instance-00000010 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 14:29:11 compute-1 nova_compute[225855]: 2026-01-20 14:29:11.878 225859 DEBUG nova.virt.libvirt.driver [None req-3e6ecc77-61de-4099-8bb3-ee2d276c7579 f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] skipping disk for instance-00000010 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 14:29:11 compute-1 podman[238234]: 2026-01-20 14:29:11.920892107 +0000 UTC m=+0.089088155 container health_status 533cd8e0c3b8b7f910f15cef49e7f879ab46c12ce646a35628c15f13725eb500 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 20 14:29:12 compute-1 nova_compute[225855]: 2026-01-20 14:29:12.072 225859 WARNING nova.virt.libvirt.driver [None req-3e6ecc77-61de-4099-8bb3-ee2d276c7579 f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 14:29:12 compute-1 nova_compute[225855]: 2026-01-20 14:29:12.073 225859 DEBUG nova.compute.resource_tracker [None req-3e6ecc77-61de-4099-8bb3-ee2d276c7579 f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4382MB free_disk=20.845653533935547GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": 
"0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 14:29:12 compute-1 nova_compute[225855]: 2026-01-20 14:29:12.073 225859 DEBUG oslo_concurrency.lockutils [None req-3e6ecc77-61de-4099-8bb3-ee2d276c7579 f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:29:12 compute-1 nova_compute[225855]: 2026-01-20 14:29:12.074 225859 DEBUG oslo_concurrency.lockutils [None req-3e6ecc77-61de-4099-8bb3-ee2d276c7579 f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:29:12 compute-1 nova_compute[225855]: 2026-01-20 14:29:12.125 225859 DEBUG nova.compute.resource_tracker [None req-3e6ecc77-61de-4099-8bb3-ee2d276c7579 f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] Migration for instance 79b5596e-43c9-4085-9829-454fecf59490 refers to another host's instance! _pair_instances_to_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:903
Jan 20 14:29:12 compute-1 nova_compute[225855]: 2026-01-20 14:29:12.146 225859 DEBUG nova.compute.resource_tracker [None req-3e6ecc77-61de-4099-8bb3-ee2d276c7579 f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] [instance: 79b5596e-43c9-4085-9829-454fecf59490] Skipping migration as instance is neither resizing nor live-migrating. _update_usage_from_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1491
Jan 20 14:29:12 compute-1 nova_compute[225855]: 2026-01-20 14:29:12.168 225859 DEBUG nova.compute.resource_tracker [None req-3e6ecc77-61de-4099-8bb3-ee2d276c7579 f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] Instance d726266f-b9a6-406b-ad13-f9db3e0dc6aa actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 20 14:29:12 compute-1 nova_compute[225855]: 2026-01-20 14:29:12.169 225859 DEBUG nova.compute.resource_tracker [None req-3e6ecc77-61de-4099-8bb3-ee2d276c7579 f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] Instance 29f0b4d4-abf0-46e7-bf67-38e71eb42e28 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 20 14:29:12 compute-1 nova_compute[225855]: 2026-01-20 14:29:12.169 225859 DEBUG nova.compute.resource_tracker [None req-3e6ecc77-61de-4099-8bb3-ee2d276c7579 f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] Migration 522850ff-68d3-4cab-8c83-50b3a540cdfa is active on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1640
Jan 20 14:29:12 compute-1 nova_compute[225855]: 2026-01-20 14:29:12.169 225859 DEBUG nova.compute.resource_tracker [None req-3e6ecc77-61de-4099-8bb3-ee2d276c7579 f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 14:29:12 compute-1 nova_compute[225855]: 2026-01-20 14:29:12.169 225859 DEBUG nova.compute.resource_tracker [None req-3e6ecc77-61de-4099-8bb3-ee2d276c7579 f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 14:29:12 compute-1 nova_compute[225855]: 2026-01-20 14:29:12.285 225859 DEBUG oslo_concurrency.processutils [None req-3e6ecc77-61de-4099-8bb3-ee2d276c7579 f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:29:12 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:29:12 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:29:12 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:29:12.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:29:12 compute-1 nova_compute[225855]: 2026-01-20 14:29:12.394 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:29:12 compute-1 ceph-mon[81775]: pgmap v1151: 321 pgs: 2 active+clean+snaptrim, 7 active+clean+snaptrim_wait, 312 active+clean; 382 MiB data, 505 MiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 915 KiB/s wr, 188 op/s
Jan 20 14:29:12 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/549940775' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:29:12 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:29:12 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1240338787' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:29:12 compute-1 nova_compute[225855]: 2026-01-20 14:29:12.784 225859 DEBUG oslo_concurrency.processutils [None req-3e6ecc77-61de-4099-8bb3-ee2d276c7579 f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:29:12 compute-1 nova_compute[225855]: 2026-01-20 14:29:12.792 225859 DEBUG nova.compute.provider_tree [None req-3e6ecc77-61de-4099-8bb3-ee2d276c7579 f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:29:12 compute-1 nova_compute[225855]: 2026-01-20 14:29:12.825 225859 DEBUG nova.scheduler.client.report [None req-3e6ecc77-61de-4099-8bb3-ee2d276c7579 f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:29:12 compute-1 nova_compute[225855]: 2026-01-20 14:29:12.856 225859 DEBUG nova.compute.resource_tracker [None req-3e6ecc77-61de-4099-8bb3-ee2d276c7579 f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 14:29:12 compute-1 nova_compute[225855]: 2026-01-20 14:29:12.856 225859 DEBUG oslo_concurrency.lockutils [None req-3e6ecc77-61de-4099-8bb3-ee2d276c7579 f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.782s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:29:12 compute-1 nova_compute[225855]: 2026-01-20 14:29:12.862 225859 INFO nova.compute.manager [None req-3e6ecc77-61de-4099-8bb3-ee2d276c7579 f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] [instance: 79b5596e-43c9-4085-9829-454fecf59490] Migrating instance to compute-0.ctlplane.example.com finished successfully.
Jan 20 14:29:12 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:29:12 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:29:12 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:29:12.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:29:13 compute-1 nova_compute[225855]: 2026-01-20 14:29:13.000 225859 INFO nova.scheduler.client.report [None req-3e6ecc77-61de-4099-8bb3-ee2d276c7579 f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] Deleted allocation for migration 522850ff-68d3-4cab-8c83-50b3a540cdfa
Jan 20 14:29:13 compute-1 nova_compute[225855]: 2026-01-20 14:29:13.001 225859 DEBUG nova.virt.libvirt.driver [None req-3e6ecc77-61de-4099-8bb3-ee2d276c7579 f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] [instance: 79b5596e-43c9-4085-9829-454fecf59490] Live migration monitoring is all done _live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10662
Jan 20 14:29:13 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/1240338787' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:29:13 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/636651936' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 14:29:13 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/636651936' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 14:29:13 compute-1 nova_compute[225855]: 2026-01-20 14:29:13.735 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:29:14 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:29:14 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:29:14 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:29:14.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:29:14 compute-1 ceph-mon[81775]: pgmap v1152: 321 pgs: 2 active+clean+snaptrim, 7 active+clean+snaptrim_wait, 312 active+clean; 382 MiB data, 505 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 786 KiB/s wr, 172 op/s
Jan 20 14:29:14 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/1914442263' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 14:29:14 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/1914442263' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 14:29:14 compute-1 nova_compute[225855]: 2026-01-20 14:29:14.699 225859 DEBUG oslo_concurrency.lockutils [None req-aaff5fb5-a6be-4311-ae7f-a5adbf7cc1e2 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] Acquiring lock "d726266f-b9a6-406b-ad13-f9db3e0dc6aa" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:29:14 compute-1 nova_compute[225855]: 2026-01-20 14:29:14.699 225859 DEBUG oslo_concurrency.lockutils [None req-aaff5fb5-a6be-4311-ae7f-a5adbf7cc1e2 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] Lock "d726266f-b9a6-406b-ad13-f9db3e0dc6aa" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:29:14 compute-1 nova_compute[225855]: 2026-01-20 14:29:14.700 225859 DEBUG oslo_concurrency.lockutils [None req-aaff5fb5-a6be-4311-ae7f-a5adbf7cc1e2 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] Acquiring lock "d726266f-b9a6-406b-ad13-f9db3e0dc6aa-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:29:14 compute-1 nova_compute[225855]: 2026-01-20 14:29:14.700 225859 DEBUG oslo_concurrency.lockutils [None req-aaff5fb5-a6be-4311-ae7f-a5adbf7cc1e2 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] Lock "d726266f-b9a6-406b-ad13-f9db3e0dc6aa-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:29:14 compute-1 nova_compute[225855]: 2026-01-20 14:29:14.700 225859 DEBUG oslo_concurrency.lockutils [None req-aaff5fb5-a6be-4311-ae7f-a5adbf7cc1e2 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] Lock "d726266f-b9a6-406b-ad13-f9db3e0dc6aa-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:29:14 compute-1 nova_compute[225855]: 2026-01-20 14:29:14.702 225859 INFO nova.compute.manager [None req-aaff5fb5-a6be-4311-ae7f-a5adbf7cc1e2 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] [instance: d726266f-b9a6-406b-ad13-f9db3e0dc6aa] Terminating instance
Jan 20 14:29:14 compute-1 nova_compute[225855]: 2026-01-20 14:29:14.703 225859 DEBUG nova.compute.manager [None req-aaff5fb5-a6be-4311-ae7f-a5adbf7cc1e2 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] [instance: d726266f-b9a6-406b-ad13-f9db3e0dc6aa] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 20 14:29:14 compute-1 kernel: tape6067076-0f (unregistering): left promiscuous mode
Jan 20 14:29:14 compute-1 NetworkManager[49104]: <info>  [1768919354.7537] device (tape6067076-0f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 20 14:29:14 compute-1 nova_compute[225855]: 2026-01-20 14:29:14.763 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:29:14 compute-1 ovn_controller[130490]: 2026-01-20T14:29:14Z|00101|binding|INFO|Releasing lport e6067076-0f97-4e9c-9355-353277570e11 from this chassis (sb_readonly=0)
Jan 20 14:29:14 compute-1 ovn_controller[130490]: 2026-01-20T14:29:14Z|00102|binding|INFO|Setting lport e6067076-0f97-4e9c-9355-353277570e11 down in Southbound
Jan 20 14:29:14 compute-1 ovn_controller[130490]: 2026-01-20T14:29:14Z|00103|binding|INFO|Releasing lport 9013ed66-b0f2-4a83-b7d4-572f1324f582 from this chassis (sb_readonly=0)
Jan 20 14:29:14 compute-1 ovn_controller[130490]: 2026-01-20T14:29:14Z|00104|binding|INFO|Setting lport 9013ed66-b0f2-4a83-b7d4-572f1324f582 down in Southbound
Jan 20 14:29:14 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:29:14 compute-1 ovn_controller[130490]: 2026-01-20T14:29:14Z|00105|binding|INFO|Removing iface tape6067076-0f ovn-installed in OVS
Jan 20 14:29:14 compute-1 nova_compute[225855]: 2026-01-20 14:29:14.768 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:29:14 compute-1 ovn_controller[130490]: 2026-01-20T14:29:14Z|00106|binding|INFO|Releasing lport aa1c73c5-9761-4457-acdc-9f93220f739f from this chassis (sb_readonly=0)
Jan 20 14:29:14 compute-1 ovn_controller[130490]: 2026-01-20T14:29:14Z|00107|binding|INFO|Releasing lport e10f34be-dfc1-4bfe-806f-f00a84c17390 from this chassis (sb_readonly=0)
Jan 20 14:29:14 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:29:14.772 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:51:74:79 19.80.0.125'], port_security=['fa:16:3e:51:74:79 19.80.0.125'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=['e6067076-0f97-4e9c-9355-353277570e11'], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-subport-1871336558', 'neutron:cidrs': '19.80.0.125/24', 'neutron:device_id': '', 'neutron:device_owner': 'trunk:subport', 'neutron:mtu': '', 'neutron:network_name': 'neutron-08e625c5-899c-442a-8ef4-9a3c96892de4', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-subport-1871336558', 'neutron:project_id': 'd15f60b9e48e4175b5520d1e57ed2d3a', 'neutron:revision_number': '5', 'neutron:security_group_ids': '6d729cfd-2f98-4ca5-a524-e543b12b3766', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[42], additional_encap=[], encap=[], mirror_rules=[], datapath=62d5dc3b-a6a9-4e55-8632-5a7fe1112862, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=9013ed66-b0f2-4a83-b7d4-572f1324f582) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:29:14 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:29:14.773 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:db:cf:b7 10.100.0.12'], port_security=['fa:16:3e:db:cf:b7 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-parent-395006048', 'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'd726266f-b9a6-406b-ad13-f9db3e0dc6aa', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-14f18b27-1594-48d8-a08b-a930f7adbc08', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-parent-395006048', 'neutron:project_id': 'd15f60b9e48e4175b5520d1e57ed2d3a', 'neutron:revision_number': '14', 'neutron:security_group_ids': '6d729cfd-2f98-4ca5-a524-e543b12b3766', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=02983c41-bbec-48cf-910a-84fed1be783f, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], logical_port=e6067076-0f97-4e9c-9355-353277570e11) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:29:14 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:29:14.774 140354 INFO neutron.agent.ovn.metadata.agent [-] Port 9013ed66-b0f2-4a83-b7d4-572f1324f582 in datapath 08e625c5-899c-442a-8ef4-9a3c96892de4 unbound from our chassis
Jan 20 14:29:14 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:29:14.775 140354 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 08e625c5-899c-442a-8ef4-9a3c96892de4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 20 14:29:14 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:29:14.776 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[e01054d3-ec00-4e06-975b-33c22d54b2b6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:29:14 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:29:14.776 140354 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-08e625c5-899c-442a-8ef4-9a3c96892de4 namespace which is not needed anymore
Jan 20 14:29:14 compute-1 nova_compute[225855]: 2026-01-20 14:29:14.795 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:29:14 compute-1 systemd[1]: machine-qemu\x2d9\x2dinstance\x2d00000010.scope: Deactivated successfully.
Jan 20 14:29:14 compute-1 systemd[1]: machine-qemu\x2d9\x2dinstance\x2d00000010.scope: Consumed 6.962s CPU time.
Jan 20 14:29:14 compute-1 nova_compute[225855]: 2026-01-20 14:29:14.868 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:29:14 compute-1 systemd-machined[194361]: Machine qemu-9-instance-00000010 terminated.
Jan 20 14:29:14 compute-1 neutron-haproxy-ovnmeta-08e625c5-899c-442a-8ef4-9a3c96892de4[235367]: [NOTICE]   (235372) : haproxy version is 2.8.14-c23fe91
Jan 20 14:29:14 compute-1 nova_compute[225855]: 2026-01-20 14:29:14.947 225859 INFO nova.virt.libvirt.driver [-] [instance: d726266f-b9a6-406b-ad13-f9db3e0dc6aa] Instance destroyed successfully.
Jan 20 14:29:14 compute-1 neutron-haproxy-ovnmeta-08e625c5-899c-442a-8ef4-9a3c96892de4[235367]: [NOTICE]   (235372) : path to executable is /usr/sbin/haproxy
Jan 20 14:29:14 compute-1 neutron-haproxy-ovnmeta-08e625c5-899c-442a-8ef4-9a3c96892de4[235367]: [WARNING]  (235372) : Exiting Master process...
Jan 20 14:29:14 compute-1 nova_compute[225855]: 2026-01-20 14:29:14.947 225859 DEBUG nova.objects.instance [None req-aaff5fb5-a6be-4311-ae7f-a5adbf7cc1e2 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] Lazy-loading 'resources' on Instance uuid d726266f-b9a6-406b-ad13-f9db3e0dc6aa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:29:14 compute-1 neutron-haproxy-ovnmeta-08e625c5-899c-442a-8ef4-9a3c96892de4[235367]: [ALERT]    (235372) : Current worker (235374) exited with code 143 (Terminated)
Jan 20 14:29:14 compute-1 neutron-haproxy-ovnmeta-08e625c5-899c-442a-8ef4-9a3c96892de4[235367]: [WARNING]  (235372) : All workers exited. Exiting... (0)
Jan 20 14:29:14 compute-1 systemd[1]: libpod-ed48cc542c051179becb0bef90fd036c02c3a10fbac0702dc06ee569e76c9395.scope: Deactivated successfully.
Jan 20 14:29:14 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:29:14 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:29:14 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:29:14.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:29:14 compute-1 podman[238303]: 2026-01-20 14:29:14.971037219 +0000 UTC m=+0.081078471 container died ed48cc542c051179becb0bef90fd036c02c3a10fbac0702dc06ee569e76c9395 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-08e625c5-899c-442a-8ef4-9a3c96892de4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 20 14:29:14 compute-1 nova_compute[225855]: 2026-01-20 14:29:14.972 225859 DEBUG nova.virt.libvirt.vif [None req-aaff5fb5-a6be-4311-ae7f-a5adbf7cc1e2 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-20T14:27:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-LiveMigrationTest-server-1394818615',display_name='tempest-LiveMigrationTest-server-1394818615',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-livemigrationtest-server-1394818615',id=16,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-20T14:27:33Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d15f60b9e48e4175b5520d1e57ed2d3a',ramdisk_id='',reservation_id='r-pti072hl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',ima
ge_min_disk='1',image_min_ram='0',owner_project_name='tempest-LiveMigrationTest-864280704',owner_user_name='tempest-LiveMigrationTest-864280704-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T14:27:52Z,user_data=None,user_id='bce7fcbd19554e29bb80c5b93b7dd3c9',uuid=d726266f-b9a6-406b-ad13-f9db3e0dc6aa,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e6067076-0f97-4e9c-9355-353277570e11", "address": "fa:16:3e:db:cf:b7", "network": {"id": "14f18b27-1594-48d8-a08b-a930f7adbc08", "bridge": "br-int", "label": "tempest-LiveMigrationTest-2126108622-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d15f60b9e48e4175b5520d1e57ed2d3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape6067076-0f", "ovs_interfaceid": "e6067076-0f97-4e9c-9355-353277570e11", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 20 14:29:14 compute-1 nova_compute[225855]: 2026-01-20 14:29:14.973 225859 DEBUG nova.network.os_vif_util [None req-aaff5fb5-a6be-4311-ae7f-a5adbf7cc1e2 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] Converting VIF {"id": "e6067076-0f97-4e9c-9355-353277570e11", "address": "fa:16:3e:db:cf:b7", "network": {"id": "14f18b27-1594-48d8-a08b-a930f7adbc08", "bridge": "br-int", "label": "tempest-LiveMigrationTest-2126108622-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d15f60b9e48e4175b5520d1e57ed2d3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape6067076-0f", "ovs_interfaceid": "e6067076-0f97-4e9c-9355-353277570e11", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:29:14 compute-1 nova_compute[225855]: 2026-01-20 14:29:14.973 225859 DEBUG nova.network.os_vif_util [None req-aaff5fb5-a6be-4311-ae7f-a5adbf7cc1e2 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:db:cf:b7,bridge_name='br-int',has_traffic_filtering=True,id=e6067076-0f97-4e9c-9355-353277570e11,network=Network(14f18b27-1594-48d8-a08b-a930f7adbc08),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tape6067076-0f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:29:14 compute-1 nova_compute[225855]: 2026-01-20 14:29:14.974 225859 DEBUG os_vif [None req-aaff5fb5-a6be-4311-ae7f-a5adbf7cc1e2 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:db:cf:b7,bridge_name='br-int',has_traffic_filtering=True,id=e6067076-0f97-4e9c-9355-353277570e11,network=Network(14f18b27-1594-48d8-a08b-a930f7adbc08),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tape6067076-0f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 20 14:29:14 compute-1 nova_compute[225855]: 2026-01-20 14:29:14.976 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:29:14 compute-1 nova_compute[225855]: 2026-01-20 14:29:14.976 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape6067076-0f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:29:14 compute-1 nova_compute[225855]: 2026-01-20 14:29:14.978 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:29:14 compute-1 nova_compute[225855]: 2026-01-20 14:29:14.981 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 14:29:14 compute-1 nova_compute[225855]: 2026-01-20 14:29:14.983 225859 INFO os_vif [None req-aaff5fb5-a6be-4311-ae7f-a5adbf7cc1e2 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:db:cf:b7,bridge_name='br-int',has_traffic_filtering=True,id=e6067076-0f97-4e9c-9355-353277570e11,network=Network(14f18b27-1594-48d8-a08b-a930f7adbc08),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tape6067076-0f')
Jan 20 14:29:15 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ed48cc542c051179becb0bef90fd036c02c3a10fbac0702dc06ee569e76c9395-userdata-shm.mount: Deactivated successfully.
Jan 20 14:29:15 compute-1 systemd[1]: var-lib-containers-storage-overlay-693c57f2e466814459bb967b5b8379f5bf0326b3ec16540677e4a65e7bbf1a2c-merged.mount: Deactivated successfully.
Jan 20 14:29:15 compute-1 podman[238303]: 2026-01-20 14:29:15.138176318 +0000 UTC m=+0.248217530 container cleanup ed48cc542c051179becb0bef90fd036c02c3a10fbac0702dc06ee569e76c9395 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-08e625c5-899c-442a-8ef4-9a3c96892de4, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Jan 20 14:29:15 compute-1 systemd[1]: libpod-conmon-ed48cc542c051179becb0bef90fd036c02c3a10fbac0702dc06ee569e76c9395.scope: Deactivated successfully.
Jan 20 14:29:15 compute-1 podman[238362]: 2026-01-20 14:29:15.232402697 +0000 UTC m=+0.052829377 container remove ed48cc542c051179becb0bef90fd036c02c3a10fbac0702dc06ee569e76c9395 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-08e625c5-899c-442a-8ef4-9a3c96892de4, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Jan 20 14:29:15 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:29:15.240 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[3d89f1a2-90a0-4378-8ee8-65350243bfa5]: (4, ('Tue Jan 20 02:29:14 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-08e625c5-899c-442a-8ef4-9a3c96892de4 (ed48cc542c051179becb0bef90fd036c02c3a10fbac0702dc06ee569e76c9395)\ned48cc542c051179becb0bef90fd036c02c3a10fbac0702dc06ee569e76c9395\nTue Jan 20 02:29:15 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-08e625c5-899c-442a-8ef4-9a3c96892de4 (ed48cc542c051179becb0bef90fd036c02c3a10fbac0702dc06ee569e76c9395)\ned48cc542c051179becb0bef90fd036c02c3a10fbac0702dc06ee569e76c9395\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:29:15 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:29:15.242 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[de2dc939-daa1-485f-ad86-593d5bb5b9f5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:29:15 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:29:15.242 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap08e625c5-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:29:15 compute-1 nova_compute[225855]: 2026-01-20 14:29:15.244 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:29:15 compute-1 kernel: tap08e625c5-80: left promiscuous mode
Jan 20 14:29:15 compute-1 nova_compute[225855]: 2026-01-20 14:29:15.264 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:29:15 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:29:15.268 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[f6261943-daee-4790-9879-6e39edc7604e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:29:15 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:29:15.282 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[6a176e0d-a0d5-4d48-9481-81131cac3d61]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:29:15 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:29:15.283 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[62981dca-71da-4769-9f6c-5b998bbeb9c7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:29:15 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:29:15.306 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[71498d26-243e-42eb-a792-5f2d817faa3b]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 430503, 'reachable_time': 36585, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 238377, 'error': None, 'target': 'ovnmeta-08e625c5-899c-442a-8ef4-9a3c96892de4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:29:15 compute-1 systemd[1]: run-netns-ovnmeta\x2d08e625c5\x2d899c\x2d442a\x2d8ef4\x2d9a3c96892de4.mount: Deactivated successfully.
Jan 20 14:29:15 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:29:15.309 140466 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-08e625c5-899c-442a-8ef4-9a3c96892de4 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 20 14:29:15 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:29:15.309 140466 DEBUG oslo.privsep.daemon [-] privsep: reply[7bf93d36-9e73-47b6-83ae-840a46422c30]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:29:15 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:29:15.312 140354 INFO neutron.agent.ovn.metadata.agent [-] Port e6067076-0f97-4e9c-9355-353277570e11 in datapath 14f18b27-1594-48d8-a08b-a930f7adbc08 unbound from our chassis
Jan 20 14:29:15 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:29:15.314 140354 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 14f18b27-1594-48d8-a08b-a930f7adbc08, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 20 14:29:15 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:29:15.315 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[aed04eb8-1b06-401b-b529-77f7e472b112]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:29:15 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:29:15.316 140354 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-14f18b27-1594-48d8-a08b-a930f7adbc08 namespace which is not needed anymore
Jan 20 14:29:15 compute-1 nova_compute[225855]: 2026-01-20 14:29:15.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:29:15 compute-1 nova_compute[225855]: 2026-01-20 14:29:15.340 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:29:15 compute-1 nova_compute[225855]: 2026-01-20 14:29:15.400 225859 DEBUG nova.compute.manager [req-7e1cc6f2-a0e4-423e-894a-7c9081463568 req-de383624-7210-42c5-95aa-7f302944c22c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d726266f-b9a6-406b-ad13-f9db3e0dc6aa] Received event network-vif-unplugged-e6067076-0f97-4e9c-9355-353277570e11 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:29:15 compute-1 nova_compute[225855]: 2026-01-20 14:29:15.401 225859 DEBUG oslo_concurrency.lockutils [req-7e1cc6f2-a0e4-423e-894a-7c9081463568 req-de383624-7210-42c5-95aa-7f302944c22c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "d726266f-b9a6-406b-ad13-f9db3e0dc6aa-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:29:15 compute-1 nova_compute[225855]: 2026-01-20 14:29:15.402 225859 DEBUG oslo_concurrency.lockutils [req-7e1cc6f2-a0e4-423e-894a-7c9081463568 req-de383624-7210-42c5-95aa-7f302944c22c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "d726266f-b9a6-406b-ad13-f9db3e0dc6aa-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:29:15 compute-1 nova_compute[225855]: 2026-01-20 14:29:15.402 225859 DEBUG oslo_concurrency.lockutils [req-7e1cc6f2-a0e4-423e-894a-7c9081463568 req-de383624-7210-42c5-95aa-7f302944c22c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "d726266f-b9a6-406b-ad13-f9db3e0dc6aa-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:29:15 compute-1 nova_compute[225855]: 2026-01-20 14:29:15.402 225859 DEBUG nova.compute.manager [req-7e1cc6f2-a0e4-423e-894a-7c9081463568 req-de383624-7210-42c5-95aa-7f302944c22c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d726266f-b9a6-406b-ad13-f9db3e0dc6aa] No waiting events found dispatching network-vif-unplugged-e6067076-0f97-4e9c-9355-353277570e11 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:29:15 compute-1 nova_compute[225855]: 2026-01-20 14:29:15.402 225859 DEBUG nova.compute.manager [req-7e1cc6f2-a0e4-423e-894a-7c9081463568 req-de383624-7210-42c5-95aa-7f302944c22c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d726266f-b9a6-406b-ad13-f9db3e0dc6aa] Received event network-vif-unplugged-e6067076-0f97-4e9c-9355-353277570e11 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 20 14:29:15 compute-1 nova_compute[225855]: 2026-01-20 14:29:15.454 225859 INFO nova.virt.libvirt.driver [None req-aaff5fb5-a6be-4311-ae7f-a5adbf7cc1e2 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] [instance: d726266f-b9a6-406b-ad13-f9db3e0dc6aa] Deleting instance files /var/lib/nova/instances/d726266f-b9a6-406b-ad13-f9db3e0dc6aa_del
Jan 20 14:29:15 compute-1 nova_compute[225855]: 2026-01-20 14:29:15.455 225859 INFO nova.virt.libvirt.driver [None req-aaff5fb5-a6be-4311-ae7f-a5adbf7cc1e2 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] [instance: d726266f-b9a6-406b-ad13-f9db3e0dc6aa] Deletion of /var/lib/nova/instances/d726266f-b9a6-406b-ad13-f9db3e0dc6aa_del complete
Jan 20 14:29:15 compute-1 neutron-haproxy-ovnmeta-14f18b27-1594-48d8-a08b-a930f7adbc08[236640]: [NOTICE]   (236644) : haproxy version is 2.8.14-c23fe91
Jan 20 14:29:15 compute-1 neutron-haproxy-ovnmeta-14f18b27-1594-48d8-a08b-a930f7adbc08[236640]: [NOTICE]   (236644) : path to executable is /usr/sbin/haproxy
Jan 20 14:29:15 compute-1 neutron-haproxy-ovnmeta-14f18b27-1594-48d8-a08b-a930f7adbc08[236640]: [WARNING]  (236644) : Exiting Master process...
Jan 20 14:29:15 compute-1 neutron-haproxy-ovnmeta-14f18b27-1594-48d8-a08b-a930f7adbc08[236640]: [ALERT]    (236644) : Current worker (236646) exited with code 143 (Terminated)
Jan 20 14:29:15 compute-1 neutron-haproxy-ovnmeta-14f18b27-1594-48d8-a08b-a930f7adbc08[236640]: [WARNING]  (236644) : All workers exited. Exiting... (0)
Jan 20 14:29:15 compute-1 systemd[1]: libpod-cb549f3dfd3a2168af12352c43b7f75cb0e219d08716514e5f07f4b105589042.scope: Deactivated successfully.
Jan 20 14:29:15 compute-1 podman[238395]: 2026-01-20 14:29:15.510593388 +0000 UTC m=+0.062349314 container died cb549f3dfd3a2168af12352c43b7f75cb0e219d08716514e5f07f4b105589042 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-14f18b27-1594-48d8-a08b-a930f7adbc08, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 20 14:29:15 compute-1 nova_compute[225855]: 2026-01-20 14:29:15.540 225859 INFO nova.compute.manager [None req-aaff5fb5-a6be-4311-ae7f-a5adbf7cc1e2 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] [instance: d726266f-b9a6-406b-ad13-f9db3e0dc6aa] Took 0.84 seconds to destroy the instance on the hypervisor.
Jan 20 14:29:15 compute-1 nova_compute[225855]: 2026-01-20 14:29:15.541 225859 DEBUG oslo.service.loopingcall [None req-aaff5fb5-a6be-4311-ae7f-a5adbf7cc1e2 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 20 14:29:15 compute-1 nova_compute[225855]: 2026-01-20 14:29:15.542 225859 DEBUG nova.compute.manager [-] [instance: d726266f-b9a6-406b-ad13-f9db3e0dc6aa] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 20 14:29:15 compute-1 nova_compute[225855]: 2026-01-20 14:29:15.542 225859 DEBUG nova.network.neutron [-] [instance: d726266f-b9a6-406b-ad13-f9db3e0dc6aa] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 20 14:29:15 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-cb549f3dfd3a2168af12352c43b7f75cb0e219d08716514e5f07f4b105589042-userdata-shm.mount: Deactivated successfully.
Jan 20 14:29:15 compute-1 systemd[1]: var-lib-containers-storage-overlay-2676fe810d64d27697c2b84fe44f9ab65fef13401e606b1abe81b75e60236b87-merged.mount: Deactivated successfully.
Jan 20 14:29:15 compute-1 podman[238395]: 2026-01-20 14:29:15.554108511 +0000 UTC m=+0.105864447 container cleanup cb549f3dfd3a2168af12352c43b7f75cb0e219d08716514e5f07f4b105589042 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-14f18b27-1594-48d8-a08b-a930f7adbc08, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, tcib_managed=true)
Jan 20 14:29:15 compute-1 systemd[1]: libpod-conmon-cb549f3dfd3a2168af12352c43b7f75cb0e219d08716514e5f07f4b105589042.scope: Deactivated successfully.
Jan 20 14:29:15 compute-1 podman[238424]: 2026-01-20 14:29:15.63978024 +0000 UTC m=+0.054424851 container remove cb549f3dfd3a2168af12352c43b7f75cb0e219d08716514e5f07f4b105589042 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-14f18b27-1594-48d8-a08b-a930f7adbc08, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 20 14:29:15 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:29:15.646 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[a9ca283b-9baa-42e7-bc22-7d22f58d15f5]: (4, ('Tue Jan 20 02:29:15 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-14f18b27-1594-48d8-a08b-a930f7adbc08 (cb549f3dfd3a2168af12352c43b7f75cb0e219d08716514e5f07f4b105589042)\ncb549f3dfd3a2168af12352c43b7f75cb0e219d08716514e5f07f4b105589042\nTue Jan 20 02:29:15 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-14f18b27-1594-48d8-a08b-a930f7adbc08 (cb549f3dfd3a2168af12352c43b7f75cb0e219d08716514e5f07f4b105589042)\ncb549f3dfd3a2168af12352c43b7f75cb0e219d08716514e5f07f4b105589042\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:29:15 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:29:15.648 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[d19e8391-8e2f-43cf-8e32-c2987233e00f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:29:15 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:29:15.649 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap14f18b27-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:29:15 compute-1 nova_compute[225855]: 2026-01-20 14:29:15.651 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:29:15 compute-1 kernel: tap14f18b27-10: left promiscuous mode
Jan 20 14:29:15 compute-1 nova_compute[225855]: 2026-01-20 14:29:15.668 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:29:15 compute-1 nova_compute[225855]: 2026-01-20 14:29:15.669 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:29:15 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:29:15.671 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[c8f80fb3-e9db-469c-a2ca-b9071566a9e8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:29:15 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:29:15.692 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[87f2cabc-f325-4c6d-84f5-3bd8fa6bd233]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:29:15 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:29:15.694 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[acc53794-64d4-4d74-a486-23895cc45456]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:29:15 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/2974734948' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:29:15 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:29:15.725 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[8bf02f34-ad9d-42ff-bc30-608646578f98]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 430592, 'reachable_time': 24162, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 238440, 'error': None, 'target': 'ovnmeta-14f18b27-1594-48d8-a08b-a930f7adbc08', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:29:15 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:29:15.727 140466 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-14f18b27-1594-48d8-a08b-a930f7adbc08 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 20 14:29:15 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:29:15.727 140466 DEBUG oslo.privsep.daemon [-] privsep: reply[d90d7eae-e5e0-4f8f-a116-0a6eb0fc5827]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:29:16 compute-1 systemd[1]: run-netns-ovnmeta\x2d14f18b27\x2d1594\x2d48d8\x2da08b\x2da930f7adbc08.mount: Deactivated successfully.
Jan 20 14:29:16 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:29:16 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:29:16 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:29:16.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:29:16 compute-1 nova_compute[225855]: 2026-01-20 14:29:16.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:29:16 compute-1 nova_compute[225855]: 2026-01-20 14:29:16.340 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 14:29:16 compute-1 nova_compute[225855]: 2026-01-20 14:29:16.340 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 20 14:29:16 compute-1 nova_compute[225855]: 2026-01-20 14:29:16.367 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] [instance: d726266f-b9a6-406b-ad13-f9db3e0dc6aa] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875
Jan 20 14:29:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:29:16.388 140354 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:29:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:29:16.389 140354 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:29:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:29:16.389 140354 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:29:16 compute-1 nova_compute[225855]: 2026-01-20 14:29:16.582 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "refresh_cache-29f0b4d4-abf0-46e7-bf67-38e71eb42e28" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:29:16 compute-1 nova_compute[225855]: 2026-01-20 14:29:16.582 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquired lock "refresh_cache-29f0b4d4-abf0-46e7-bf67-38e71eb42e28" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:29:16 compute-1 nova_compute[225855]: 2026-01-20 14:29:16.582 225859 DEBUG nova.network.neutron [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] [instance: 29f0b4d4-abf0-46e7-bf67-38e71eb42e28] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 20 14:29:16 compute-1 nova_compute[225855]: 2026-01-20 14:29:16.583 225859 DEBUG nova.objects.instance [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 29f0b4d4-abf0-46e7-bf67-38e71eb42e28 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:29:16 compute-1 ceph-mon[81775]: pgmap v1153: 321 pgs: 2 active+clean+snaptrim, 7 active+clean+snaptrim_wait, 312 active+clean; 367 MiB data, 525 MiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 2.2 MiB/s wr, 237 op/s
Jan 20 14:29:16 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/3273831407' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:29:16 compute-1 nova_compute[225855]: 2026-01-20 14:29:16.759 225859 DEBUG nova.network.neutron [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] [instance: 29f0b4d4-abf0-46e7-bf67-38e71eb42e28] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 20 14:29:16 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:29:16 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:29:16 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:29:16.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:29:17 compute-1 nova_compute[225855]: 2026-01-20 14:29:17.174 225859 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1768919342.172607, 79b5596e-43c9-4085-9829-454fecf59490 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:29:17 compute-1 nova_compute[225855]: 2026-01-20 14:29:17.176 225859 INFO nova.compute.manager [-] [instance: 79b5596e-43c9-4085-9829-454fecf59490] VM Stopped (Lifecycle Event)
Jan 20 14:29:17 compute-1 nova_compute[225855]: 2026-01-20 14:29:17.195 225859 DEBUG nova.compute.manager [None req-e4966053-ba2b-41d3-b2e0-e6f392d201f8 - - - - - -] [instance: 79b5596e-43c9-4085-9829-454fecf59490] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:29:17 compute-1 nova_compute[225855]: 2026-01-20 14:29:17.465 225859 DEBUG nova.network.neutron [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] [instance: 29f0b4d4-abf0-46e7-bf67-38e71eb42e28] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:29:17 compute-1 nova_compute[225855]: 2026-01-20 14:29:17.482 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Releasing lock "refresh_cache-29f0b4d4-abf0-46e7-bf67-38e71eb42e28" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:29:17 compute-1 nova_compute[225855]: 2026-01-20 14:29:17.483 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] [instance: 29f0b4d4-abf0-46e7-bf67-38e71eb42e28] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 20 14:29:17 compute-1 nova_compute[225855]: 2026-01-20 14:29:17.484 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:29:17 compute-1 nova_compute[225855]: 2026-01-20 14:29:17.484 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:29:17 compute-1 sshd-session[238207]: Connection closed by authenticating user root 45.179.5.170 port 41676 [preauth]
Jan 20 14:29:17 compute-1 nova_compute[225855]: 2026-01-20 14:29:17.543 225859 DEBUG nova.compute.manager [req-a80e6f7f-b9fe-4e9d-9720-1081061a09b4 req-92e22e9c-6090-45e1-a6a9-2129992d6e01 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d726266f-b9a6-406b-ad13-f9db3e0dc6aa] Received event network-vif-plugged-e6067076-0f97-4e9c-9355-353277570e11 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:29:17 compute-1 nova_compute[225855]: 2026-01-20 14:29:17.543 225859 DEBUG oslo_concurrency.lockutils [req-a80e6f7f-b9fe-4e9d-9720-1081061a09b4 req-92e22e9c-6090-45e1-a6a9-2129992d6e01 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "d726266f-b9a6-406b-ad13-f9db3e0dc6aa-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:29:17 compute-1 nova_compute[225855]: 2026-01-20 14:29:17.544 225859 DEBUG oslo_concurrency.lockutils [req-a80e6f7f-b9fe-4e9d-9720-1081061a09b4 req-92e22e9c-6090-45e1-a6a9-2129992d6e01 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "d726266f-b9a6-406b-ad13-f9db3e0dc6aa-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:29:17 compute-1 nova_compute[225855]: 2026-01-20 14:29:17.544 225859 DEBUG oslo_concurrency.lockutils [req-a80e6f7f-b9fe-4e9d-9720-1081061a09b4 req-92e22e9c-6090-45e1-a6a9-2129992d6e01 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "d726266f-b9a6-406b-ad13-f9db3e0dc6aa-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:29:17 compute-1 nova_compute[225855]: 2026-01-20 14:29:17.544 225859 DEBUG nova.compute.manager [req-a80e6f7f-b9fe-4e9d-9720-1081061a09b4 req-92e22e9c-6090-45e1-a6a9-2129992d6e01 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d726266f-b9a6-406b-ad13-f9db3e0dc6aa] No waiting events found dispatching network-vif-plugged-e6067076-0f97-4e9c-9355-353277570e11 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:29:17 compute-1 nova_compute[225855]: 2026-01-20 14:29:17.545 225859 WARNING nova.compute.manager [req-a80e6f7f-b9fe-4e9d-9720-1081061a09b4 req-92e22e9c-6090-45e1-a6a9-2129992d6e01 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d726266f-b9a6-406b-ad13-f9db3e0dc6aa] Received unexpected event network-vif-plugged-e6067076-0f97-4e9c-9355-353277570e11 for instance with vm_state active and task_state deleting.
Jan 20 14:29:17 compute-1 nova_compute[225855]: 2026-01-20 14:29:17.546 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:29:17 compute-1 nova_compute[225855]: 2026-01-20 14:29:17.547 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:29:17 compute-1 nova_compute[225855]: 2026-01-20 14:29:17.547 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:29:17 compute-1 nova_compute[225855]: 2026-01-20 14:29:17.547 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 14:29:17 compute-1 nova_compute[225855]: 2026-01-20 14:29:17.548 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:29:17 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/2254805158' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:29:17 compute-1 nova_compute[225855]: 2026-01-20 14:29:17.773 225859 DEBUG nova.network.neutron [-] [instance: d726266f-b9a6-406b-ad13-f9db3e0dc6aa] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:29:17 compute-1 nova_compute[225855]: 2026-01-20 14:29:17.788 225859 INFO nova.compute.manager [-] [instance: d726266f-b9a6-406b-ad13-f9db3e0dc6aa] Took 2.25 seconds to deallocate network for instance.
Jan 20 14:29:17 compute-1 nova_compute[225855]: 2026-01-20 14:29:17.833 225859 DEBUG oslo_concurrency.lockutils [None req-aaff5fb5-a6be-4311-ae7f-a5adbf7cc1e2 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:29:17 compute-1 nova_compute[225855]: 2026-01-20 14:29:17.834 225859 DEBUG oslo_concurrency.lockutils [None req-aaff5fb5-a6be-4311-ae7f-a5adbf7cc1e2 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:29:17 compute-1 nova_compute[225855]: 2026-01-20 14:29:17.835 225859 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1768919342.833644, 87fe16d6-774e-4002-8df4-9eb202621ab9 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:29:17 compute-1 nova_compute[225855]: 2026-01-20 14:29:17.835 225859 INFO nova.compute.manager [-] [instance: 87fe16d6-774e-4002-8df4-9eb202621ab9] VM Stopped (Lifecycle Event)
Jan 20 14:29:17 compute-1 nova_compute[225855]: 2026-01-20 14:29:17.867 225859 DEBUG nova.compute.manager [None req-81758240-e53e-4bb6-b90e-3e322c35b46f - - - - - -] [instance: 87fe16d6-774e-4002-8df4-9eb202621ab9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:29:17 compute-1 nova_compute[225855]: 2026-01-20 14:29:17.910 225859 DEBUG oslo_concurrency.processutils [None req-aaff5fb5-a6be-4311-ae7f-a5adbf7cc1e2 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:29:18 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:29:18 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/771574584' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:29:18 compute-1 nova_compute[225855]: 2026-01-20 14:29:18.022 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:29:18 compute-1 nova_compute[225855]: 2026-01-20 14:29:18.114 225859 DEBUG nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] skipping disk for instance-00000016 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 14:29:18 compute-1 nova_compute[225855]: 2026-01-20 14:29:18.115 225859 DEBUG nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] skipping disk for instance-00000016 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 14:29:18 compute-1 nova_compute[225855]: 2026-01-20 14:29:18.313 225859 WARNING nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 14:29:18 compute-1 nova_compute[225855]: 2026-01-20 14:29:18.315 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4553MB free_disk=20.838512420654297GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 14:29:18 compute-1 nova_compute[225855]: 2026-01-20 14:29:18.316 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:29:18 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:29:18 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:29:18 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:29:18.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:29:18 compute-1 nova_compute[225855]: 2026-01-20 14:29:18.364 225859 DEBUG oslo_concurrency.processutils [None req-aaff5fb5-a6be-4311-ae7f-a5adbf7cc1e2 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:29:18 compute-1 nova_compute[225855]: 2026-01-20 14:29:18.370 225859 DEBUG nova.compute.provider_tree [None req-aaff5fb5-a6be-4311-ae7f-a5adbf7cc1e2 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:29:18 compute-1 nova_compute[225855]: 2026-01-20 14:29:18.389 225859 DEBUG nova.scheduler.client.report [None req-aaff5fb5-a6be-4311-ae7f-a5adbf7cc1e2 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:29:18 compute-1 nova_compute[225855]: 2026-01-20 14:29:18.439 225859 DEBUG oslo_concurrency.lockutils [None req-aaff5fb5-a6be-4311-ae7f-a5adbf7cc1e2 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.605s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:29:18 compute-1 nova_compute[225855]: 2026-01-20 14:29:18.444 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.128s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:29:18 compute-1 nova_compute[225855]: 2026-01-20 14:29:18.518 225859 INFO nova.scheduler.client.report [None req-aaff5fb5-a6be-4311-ae7f-a5adbf7cc1e2 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] Deleted allocations for instance d726266f-b9a6-406b-ad13-f9db3e0dc6aa
Jan 20 14:29:18 compute-1 nova_compute[225855]: 2026-01-20 14:29:18.544 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Instance 29f0b4d4-abf0-46e7-bf67-38e71eb42e28 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 20 14:29:18 compute-1 nova_compute[225855]: 2026-01-20 14:29:18.545 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 14:29:18 compute-1 nova_compute[225855]: 2026-01-20 14:29:18.545 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 14:29:18 compute-1 nova_compute[225855]: 2026-01-20 14:29:18.610 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:29:18 compute-1 nova_compute[225855]: 2026-01-20 14:29:18.654 225859 DEBUG oslo_concurrency.lockutils [None req-aaff5fb5-a6be-4311-ae7f-a5adbf7cc1e2 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] Lock "d726266f-b9a6-406b-ad13-f9db3e0dc6aa" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.955s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:29:18 compute-1 ceph-mon[81775]: pgmap v1154: 321 pgs: 321 active+clean; 311 MiB data, 493 MiB used, 21 GiB / 21 GiB avail; 4.2 MiB/s rd, 2.2 MiB/s wr, 284 op/s
Jan 20 14:29:18 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/771574584' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:29:18 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/3599865667' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:29:18 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/1152397161' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:29:18 compute-1 nova_compute[225855]: 2026-01-20 14:29:18.772 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:29:18 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:29:18 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:29:18 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:29:18.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:29:19 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:29:19 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1452425071' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:29:19 compute-1 nova_compute[225855]: 2026-01-20 14:29:19.026 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.417s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:29:19 compute-1 nova_compute[225855]: 2026-01-20 14:29:19.031 225859 DEBUG nova.compute.provider_tree [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:29:19 compute-1 nova_compute[225855]: 2026-01-20 14:29:19.060 225859 DEBUG nova.scheduler.client.report [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:29:19 compute-1 nova_compute[225855]: 2026-01-20 14:29:19.132 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 14:29:19 compute-1 nova_compute[225855]: 2026-01-20 14:29:19.133 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.689s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:29:19 compute-1 sudo[238510]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:29:19 compute-1 sudo[238510]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:29:19 compute-1 sudo[238510]: pam_unix(sudo:session): session closed for user root
Jan 20 14:29:19 compute-1 sudo[238535]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:29:19 compute-1 sudo[238535]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:29:19 compute-1 sudo[238535]: pam_unix(sudo:session): session closed for user root
Jan 20 14:29:19 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/1452425071' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:29:19 compute-1 ceph-mon[81775]: pgmap v1155: 321 pgs: 321 active+clean; 287 MiB data, 475 MiB used, 21 GiB / 21 GiB avail; 4.1 MiB/s rd, 2.2 MiB/s wr, 303 op/s
Jan 20 14:29:19 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/1597519327' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:29:19 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:29:19 compute-1 nova_compute[225855]: 2026-01-20 14:29:19.979 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:29:19 compute-1 nova_compute[225855]: 2026-01-20 14:29:19.988 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:29:19 compute-1 nova_compute[225855]: 2026-01-20 14:29:19.988 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 14:29:20 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:29:20 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:29:20 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:29:20.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:29:20 compute-1 nova_compute[225855]: 2026-01-20 14:29:20.335 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:29:20 compute-1 nova_compute[225855]: 2026-01-20 14:29:20.338 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:29:20 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:29:20 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:29:20 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:29:20.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:29:21 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e159 e159: 3 total, 3 up, 3 in
Jan 20 14:29:22 compute-1 ceph-mon[81775]: osdmap e159: 3 total, 3 up, 3 in
Jan 20 14:29:22 compute-1 ceph-mon[81775]: pgmap v1157: 321 pgs: 321 active+clean; 241 MiB data, 442 MiB used, 21 GiB / 21 GiB avail; 2.8 MiB/s rd, 3.4 MiB/s wr, 257 op/s
Jan 20 14:29:22 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/4157179260' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:29:22 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:29:22 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:29:22 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:29:22.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:29:22 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:29:22 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:29:22 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:29:22.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:29:23 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/972668459' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:29:23 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/3564187580' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:29:23 compute-1 nova_compute[225855]: 2026-01-20 14:29:23.814 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:29:24 compute-1 ceph-mon[81775]: pgmap v1158: 321 pgs: 321 active+clean; 241 MiB data, 442 MiB used, 21 GiB / 21 GiB avail; 2.8 MiB/s rd, 3.4 MiB/s wr, 257 op/s
Jan 20 14:29:24 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/423882642' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:29:24 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:29:24 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:29:24 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:29:24.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:29:24 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:29:24 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:29:24 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:29:24 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:29:24.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:29:25 compute-1 nova_compute[225855]: 2026-01-20 14:29:25.024 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:29:26 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:29:26 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:29:26 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:29:26.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:29:26 compute-1 ceph-mon[81775]: pgmap v1159: 321 pgs: 321 active+clean; 294 MiB data, 465 MiB used, 21 GiB / 21 GiB avail; 1.8 MiB/s rd, 4.3 MiB/s wr, 240 op/s
Jan 20 14:29:26 compute-1 ceph-mon[81775]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #52. Immutable memtables: 0.
Jan 20 14:29:26 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:29:26.395165) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 20 14:29:26 compute-1 ceph-mon[81775]: rocksdb: [db/flush_job.cc:856] [default] [JOB 29] Flushing memtable with next log file: 52
Jan 20 14:29:26 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768919366395213, "job": 29, "event": "flush_started", "num_memtables": 1, "num_entries": 2463, "num_deletes": 256, "total_data_size": 5766159, "memory_usage": 5862912, "flush_reason": "Manual Compaction"}
Jan 20 14:29:26 compute-1 ceph-mon[81775]: rocksdb: [db/flush_job.cc:885] [default] [JOB 29] Level-0 flush table #53: started
Jan 20 14:29:26 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768919366416445, "cf_name": "default", "job": 29, "event": "table_file_creation", "file_number": 53, "file_size": 3707485, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 25425, "largest_seqno": 27883, "table_properties": {"data_size": 3697574, "index_size": 6213, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2629, "raw_key_size": 21636, "raw_average_key_size": 20, "raw_value_size": 3677289, "raw_average_value_size": 3539, "num_data_blocks": 273, "num_entries": 1039, "num_filter_entries": 1039, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768919183, "oldest_key_time": 1768919183, "file_creation_time": 1768919366, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1539d774-8a6f-4e48-b253-137c44586344", "db_session_id": "LFF7G2OZDOU7TKQ8MKAH", "orig_file_number": 53, "seqno_to_time_mapping": "N/A"}}
Jan 20 14:29:26 compute-1 ceph-mon[81775]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 29] Flush lasted 21351 microseconds, and 8039 cpu microseconds.
Jan 20 14:29:26 compute-1 ceph-mon[81775]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 14:29:26 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:29:26.416509) [db/flush_job.cc:967] [default] [JOB 29] Level-0 flush table #53: 3707485 bytes OK
Jan 20 14:29:26 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:29:26.416534) [db/memtable_list.cc:519] [default] Level-0 commit table #53 started
Jan 20 14:29:26 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:29:26.423820) [db/memtable_list.cc:722] [default] Level-0 commit table #53: memtable #1 done
Jan 20 14:29:26 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:29:26.423847) EVENT_LOG_v1 {"time_micros": 1768919366423838, "job": 29, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 20 14:29:26 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:29:26.423910) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 20 14:29:26 compute-1 ceph-mon[81775]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 29] Try to delete WAL files size 5755142, prev total WAL file size 5755142, number of live WAL files 2.
Jan 20 14:29:26 compute-1 ceph-mon[81775]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000049.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 14:29:26 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:29:26.426179) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032303038' seq:72057594037927935, type:22 .. '7061786F730032323630' seq:0, type:0; will stop at (end)
Jan 20 14:29:26 compute-1 ceph-mon[81775]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 30] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 20 14:29:26 compute-1 ceph-mon[81775]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 29 Base level 0, inputs: [53(3620KB)], [51(8856KB)]
Jan 20 14:29:26 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768919366426246, "job": 30, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [53], "files_L6": [51], "score": -1, "input_data_size": 12776082, "oldest_snapshot_seqno": -1}
Jan 20 14:29:26 compute-1 ceph-mon[81775]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 30] Generated table #54: 5465 keys, 10745733 bytes, temperature: kUnknown
Jan 20 14:29:26 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768919366495289, "cf_name": "default", "job": 30, "event": "table_file_creation", "file_number": 54, "file_size": 10745733, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10707139, "index_size": 23828, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13701, "raw_key_size": 137024, "raw_average_key_size": 25, "raw_value_size": 10606575, "raw_average_value_size": 1940, "num_data_blocks": 980, "num_entries": 5465, "num_filter_entries": 5465, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768917474, "oldest_key_time": 0, "file_creation_time": 1768919366, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1539d774-8a6f-4e48-b253-137c44586344", "db_session_id": "LFF7G2OZDOU7TKQ8MKAH", "orig_file_number": 54, "seqno_to_time_mapping": "N/A"}}
Jan 20 14:29:26 compute-1 ceph-mon[81775]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 14:29:26 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:29:26.495533) [db/compaction/compaction_job.cc:1663] [default] [JOB 30] Compacted 1@0 + 1@6 files to L6 => 10745733 bytes
Jan 20 14:29:26 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:29:26.497015) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 184.9 rd, 155.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.5, 8.6 +0.0 blob) out(10.2 +0.0 blob), read-write-amplify(6.3) write-amplify(2.9) OK, records in: 5996, records dropped: 531 output_compression: NoCompression
Jan 20 14:29:26 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:29:26.497040) EVENT_LOG_v1 {"time_micros": 1768919366497027, "job": 30, "event": "compaction_finished", "compaction_time_micros": 69115, "compaction_time_cpu_micros": 23127, "output_level": 6, "num_output_files": 1, "total_output_size": 10745733, "num_input_records": 5996, "num_output_records": 5465, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 20 14:29:26 compute-1 ceph-mon[81775]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000053.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 14:29:26 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768919366498025, "job": 30, "event": "table_file_deletion", "file_number": 53}
Jan 20 14:29:26 compute-1 ceph-mon[81775]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000051.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 14:29:26 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768919366499853, "job": 30, "event": "table_file_deletion", "file_number": 51}
Jan 20 14:29:26 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:29:26.426063) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:29:26 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:29:26.499956) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:29:26 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:29:26.499961) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:29:26 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:29:26.499963) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:29:26 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:29:26.499965) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:29:26 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:29:26.499967) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:29:26 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:29:26 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:29:26 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:29:26.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:29:28 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:29:28 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:29:28 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:29:28.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:29:28 compute-1 ceph-mon[81775]: pgmap v1160: 321 pgs: 321 active+clean; 294 MiB data, 472 MiB used, 21 GiB / 21 GiB avail; 2.0 MiB/s rd, 4.3 MiB/s wr, 213 op/s
Jan 20 14:29:28 compute-1 nova_compute[225855]: 2026-01-20 14:29:28.848 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:29:28 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:29:28 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:29:28 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:29:28.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:29:29 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/2219983015' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:29:29 compute-1 nova_compute[225855]: 2026-01-20 14:29:29.764 225859 DEBUG nova.compute.manager [None req-69535ae1-0353-42a7-8c9d-2289163476c0 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] [instance: 9f5c9253-e2bd-42d3-8253-fac568daeda7] Stashing vm_state: active _prep_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:5560
Jan 20 14:29:29 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:29:29 compute-1 nova_compute[225855]: 2026-01-20 14:29:29.856 225859 DEBUG oslo_concurrency.lockutils [None req-69535ae1-0353-42a7-8c9d-2289163476c0 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:29:29 compute-1 nova_compute[225855]: 2026-01-20 14:29:29.856 225859 DEBUG oslo_concurrency.lockutils [None req-69535ae1-0353-42a7-8c9d-2289163476c0 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:29:29 compute-1 nova_compute[225855]: 2026-01-20 14:29:29.891 225859 DEBUG nova.objects.instance [None req-69535ae1-0353-42a7-8c9d-2289163476c0 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Lazy-loading 'pci_requests' on Instance uuid 9f5c9253-e2bd-42d3-8253-fac568daeda7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:29:29 compute-1 nova_compute[225855]: 2026-01-20 14:29:29.906 225859 DEBUG nova.virt.hardware [None req-69535ae1-0353-42a7-8c9d-2289163476c0 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 20 14:29:29 compute-1 nova_compute[225855]: 2026-01-20 14:29:29.907 225859 INFO nova.compute.claims [None req-69535ae1-0353-42a7-8c9d-2289163476c0 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] [instance: 9f5c9253-e2bd-42d3-8253-fac568daeda7] Claim successful on node compute-1.ctlplane.example.com
Jan 20 14:29:29 compute-1 nova_compute[225855]: 2026-01-20 14:29:29.907 225859 DEBUG nova.objects.instance [None req-69535ae1-0353-42a7-8c9d-2289163476c0 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Lazy-loading 'resources' on Instance uuid 9f5c9253-e2bd-42d3-8253-fac568daeda7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:29:29 compute-1 nova_compute[225855]: 2026-01-20 14:29:29.922 225859 DEBUG nova.objects.instance [None req-69535ae1-0353-42a7-8c9d-2289163476c0 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Lazy-loading 'pci_devices' on Instance uuid 9f5c9253-e2bd-42d3-8253-fac568daeda7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:29:29 compute-1 nova_compute[225855]: 2026-01-20 14:29:29.944 225859 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1768919354.9443636, d726266f-b9a6-406b-ad13-f9db3e0dc6aa => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:29:29 compute-1 nova_compute[225855]: 2026-01-20 14:29:29.945 225859 INFO nova.compute.manager [-] [instance: d726266f-b9a6-406b-ad13-f9db3e0dc6aa] VM Stopped (Lifecycle Event)
Jan 20 14:29:29 compute-1 nova_compute[225855]: 2026-01-20 14:29:29.976 225859 DEBUG nova.compute.manager [None req-653c3fd6-92c6-4b47-9117-13c5a68ff91f - - - - - -] [instance: d726266f-b9a6-406b-ad13-f9db3e0dc6aa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:29:29 compute-1 nova_compute[225855]: 2026-01-20 14:29:29.981 225859 INFO nova.compute.resource_tracker [None req-69535ae1-0353-42a7-8c9d-2289163476c0 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] [instance: 9f5c9253-e2bd-42d3-8253-fac568daeda7] Updating resource usage from migration 9ed1b4f4-9705-4902-bd56-a18b9866cbf3
Jan 20 14:29:29 compute-1 nova_compute[225855]: 2026-01-20 14:29:29.982 225859 DEBUG nova.compute.resource_tracker [None req-69535ae1-0353-42a7-8c9d-2289163476c0 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] [instance: 9f5c9253-e2bd-42d3-8253-fac568daeda7] Starting to track incoming migration 9ed1b4f4-9705-4902-bd56-a18b9866cbf3 with flavor 30c26a27-d918-46d8-a512-4ef3b4ce5955 _update_usage_from_migration /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1431
Jan 20 14:29:30 compute-1 nova_compute[225855]: 2026-01-20 14:29:30.027 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:29:30 compute-1 nova_compute[225855]: 2026-01-20 14:29:30.067 225859 DEBUG oslo_concurrency.processutils [None req-69535ae1-0353-42a7-8c9d-2289163476c0 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:29:30 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:29:30 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:29:30 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:29:30.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:29:30 compute-1 ceph-mon[81775]: pgmap v1161: 321 pgs: 321 active+clean; 296 MiB data, 472 MiB used, 21 GiB / 21 GiB avail; 1.8 MiB/s rd, 4.3 MiB/s wr, 190 op/s
Jan 20 14:29:30 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/1801464026' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:29:30 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:29:30 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/650463931' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:29:30 compute-1 nova_compute[225855]: 2026-01-20 14:29:30.545 225859 DEBUG oslo_concurrency.processutils [None req-69535ae1-0353-42a7-8c9d-2289163476c0 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:29:30 compute-1 nova_compute[225855]: 2026-01-20 14:29:30.554 225859 DEBUG nova.compute.provider_tree [None req-69535ae1-0353-42a7-8c9d-2289163476c0 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:29:30 compute-1 nova_compute[225855]: 2026-01-20 14:29:30.571 225859 DEBUG nova.scheduler.client.report [None req-69535ae1-0353-42a7-8c9d-2289163476c0 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:29:30 compute-1 nova_compute[225855]: 2026-01-20 14:29:30.604 225859 DEBUG oslo_concurrency.lockutils [None req-69535ae1-0353-42a7-8c9d-2289163476c0 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: held 0.748s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:29:30 compute-1 nova_compute[225855]: 2026-01-20 14:29:30.605 225859 INFO nova.compute.manager [None req-69535ae1-0353-42a7-8c9d-2289163476c0 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] [instance: 9f5c9253-e2bd-42d3-8253-fac568daeda7] Migrating
Jan 20 14:29:30 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:29:30 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:29:30 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:29:30.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:29:31 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/650463931' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:29:32 compute-1 sshd-session[238588]: Accepted publickey for nova from 192.168.122.100 port 40426 ssh2: ECDSA SHA256:XnPnjIKlkePRv+YAV8ktjwWUWX9aekF80jIRGfdhjRU
Jan 20 14:29:32 compute-1 systemd-logind[783]: New session 54 of user nova.
Jan 20 14:29:32 compute-1 systemd[1]: Created slice User Slice of UID 42436.
Jan 20 14:29:32 compute-1 systemd[1]: Starting User Runtime Directory /run/user/42436...
Jan 20 14:29:32 compute-1 systemd[1]: Finished User Runtime Directory /run/user/42436.
Jan 20 14:29:32 compute-1 systemd[1]: Starting User Manager for UID 42436...
Jan 20 14:29:32 compute-1 systemd[238594]: pam_unix(systemd-user:session): session opened for user nova(uid=42436) by nova(uid=0)
Jan 20 14:29:32 compute-1 systemd[238594]: Queued start job for default target Main User Target.
Jan 20 14:29:32 compute-1 systemd[238594]: Created slice User Application Slice.
Jan 20 14:29:32 compute-1 systemd[238594]: Started Mark boot as successful after the user session has run 2 minutes.
Jan 20 14:29:32 compute-1 systemd[238594]: Started Daily Cleanup of User's Temporary Directories.
Jan 20 14:29:32 compute-1 systemd[238594]: Reached target Paths.
Jan 20 14:29:32 compute-1 systemd[238594]: Reached target Timers.
Jan 20 14:29:32 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:29:32 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:29:32 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:29:32.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:29:32 compute-1 systemd[238594]: Starting D-Bus User Message Bus Socket...
Jan 20 14:29:32 compute-1 systemd[238594]: Starting Create User's Volatile Files and Directories...
Jan 20 14:29:32 compute-1 systemd[238594]: Listening on D-Bus User Message Bus Socket.
Jan 20 14:29:32 compute-1 systemd[238594]: Reached target Sockets.
Jan 20 14:29:32 compute-1 systemd[238594]: Finished Create User's Volatile Files and Directories.
Jan 20 14:29:32 compute-1 systemd[238594]: Reached target Basic System.
Jan 20 14:29:32 compute-1 systemd[238594]: Reached target Main User Target.
Jan 20 14:29:32 compute-1 systemd[238594]: Startup finished in 164ms.
Jan 20 14:29:32 compute-1 systemd[1]: Started User Manager for UID 42436.
Jan 20 14:29:32 compute-1 podman[238592]: 2026-01-20 14:29:32.361669078 +0000 UTC m=+0.184185660 container health_status 72be95c12e041eb2cd1a16ad70dfa1391d9f03a5f3180a614ead9459dcfcd46f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 20 14:29:32 compute-1 systemd[1]: Started Session 54 of User nova.
Jan 20 14:29:32 compute-1 sshd-session[238588]: pam_unix(sshd:session): session opened for user nova(uid=42436) by nova(uid=0)
Jan 20 14:29:32 compute-1 sshd-session[238635]: Received disconnect from 192.168.122.100 port 40426:11: disconnected by user
Jan 20 14:29:32 compute-1 sshd-session[238635]: Disconnected from user nova 192.168.122.100 port 40426
Jan 20 14:29:32 compute-1 sshd-session[238588]: pam_unix(sshd:session): session closed for user nova
Jan 20 14:29:32 compute-1 systemd[1]: session-54.scope: Deactivated successfully.
Jan 20 14:29:32 compute-1 systemd-logind[783]: Session 54 logged out. Waiting for processes to exit.
Jan 20 14:29:32 compute-1 systemd-logind[783]: Removed session 54.
Jan 20 14:29:32 compute-1 ceph-mon[81775]: pgmap v1162: 321 pgs: 321 active+clean; 296 MiB data, 472 MiB used, 21 GiB / 21 GiB avail; 2.5 MiB/s rd, 2.4 MiB/s wr, 160 op/s
Jan 20 14:29:32 compute-1 sshd-session[238637]: Accepted publickey for nova from 192.168.122.100 port 40500 ssh2: ECDSA SHA256:XnPnjIKlkePRv+YAV8ktjwWUWX9aekF80jIRGfdhjRU
Jan 20 14:29:32 compute-1 systemd-logind[783]: New session 56 of user nova.
Jan 20 14:29:32 compute-1 systemd[1]: Started Session 56 of User nova.
Jan 20 14:29:32 compute-1 sshd-session[238637]: pam_unix(sshd:session): session opened for user nova(uid=42436) by nova(uid=0)
Jan 20 14:29:32 compute-1 sshd-session[238640]: Received disconnect from 192.168.122.100 port 40500:11: disconnected by user
Jan 20 14:29:32 compute-1 sshd-session[238640]: Disconnected from user nova 192.168.122.100 port 40500
Jan 20 14:29:32 compute-1 sshd-session[238637]: pam_unix(sshd:session): session closed for user nova
Jan 20 14:29:32 compute-1 systemd[1]: session-56.scope: Deactivated successfully.
Jan 20 14:29:32 compute-1 systemd-logind[783]: Session 56 logged out. Waiting for processes to exit.
Jan 20 14:29:32 compute-1 systemd-logind[783]: Removed session 56.
Jan 20 14:29:32 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:29:32 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:29:32 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:29:32.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:29:33 compute-1 nova_compute[225855]: 2026-01-20 14:29:33.850 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:29:34 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:29:34 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:29:34 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:29:34.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:29:34 compute-1 ceph-mon[81775]: pgmap v1163: 321 pgs: 321 active+clean; 296 MiB data, 472 MiB used, 21 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.0 MiB/s wr, 135 op/s
Jan 20 14:29:34 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:29:34 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:29:34 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:29:34 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:29:34.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:29:35 compute-1 nova_compute[225855]: 2026-01-20 14:29:35.029 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:29:36 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:29:36 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:29:36 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:29:36.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:29:36 compute-1 ceph-mon[81775]: pgmap v1164: 321 pgs: 321 active+clean; 296 MiB data, 472 MiB used, 21 GiB / 21 GiB avail; 3.4 MiB/s rd, 2.0 MiB/s wr, 185 op/s
Jan 20 14:29:36 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:29:36 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:29:36 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:29:36.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:29:38 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:29:38 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:29:38 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:29:38.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:29:38 compute-1 ceph-mon[81775]: pgmap v1165: 321 pgs: 321 active+clean; 296 MiB data, 472 MiB used, 21 GiB / 21 GiB avail; 3.8 MiB/s rd, 36 KiB/s wr, 145 op/s
Jan 20 14:29:38 compute-1 nova_compute[225855]: 2026-01-20 14:29:38.852 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:29:39 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:29:39 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:29:39 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:29:38.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:29:39 compute-1 sudo[238646]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:29:39 compute-1 sudo[238646]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:29:39 compute-1 sudo[238646]: pam_unix(sudo:session): session closed for user root
Jan 20 14:29:39 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:29:39 compute-1 sudo[238671]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:29:39 compute-1 sudo[238671]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:29:39 compute-1 sudo[238671]: pam_unix(sudo:session): session closed for user root
Jan 20 14:29:40 compute-1 nova_compute[225855]: 2026-01-20 14:29:40.066 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:29:40 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:29:40 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:29:40 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:29:40.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:29:40 compute-1 ceph-mon[81775]: pgmap v1166: 321 pgs: 321 active+clean; 305 MiB data, 479 MiB used, 21 GiB / 21 GiB avail; 3.3 MiB/s rd, 630 KiB/s wr, 140 op/s
Jan 20 14:29:41 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:29:41 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:29:41 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:29:41.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:29:41 compute-1 nova_compute[225855]: 2026-01-20 14:29:41.327 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:29:41 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:29:41.328 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=10, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '12:bb:42', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '06:92:24:f7:15:56'}, ipsec=False) old=SB_Global(nb_cfg=9) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:29:41 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:29:41.329 140354 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 20 14:29:42 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:29:42 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:29:42 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:29:42.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:29:42 compute-1 ceph-mon[81775]: pgmap v1167: 321 pgs: 321 active+clean; 329 MiB data, 497 MiB used, 21 GiB / 21 GiB avail; 3.2 MiB/s rd, 2.1 MiB/s wr, 167 op/s
Jan 20 14:29:42 compute-1 systemd[1]: Stopping User Manager for UID 42436...
Jan 20 14:29:42 compute-1 systemd[238594]: Activating special unit Exit the Session...
Jan 20 14:29:42 compute-1 systemd[238594]: Stopped target Main User Target.
Jan 20 14:29:42 compute-1 systemd[238594]: Stopped target Basic System.
Jan 20 14:29:42 compute-1 systemd[238594]: Stopped target Paths.
Jan 20 14:29:42 compute-1 systemd[238594]: Stopped target Sockets.
Jan 20 14:29:42 compute-1 systemd[238594]: Stopped target Timers.
Jan 20 14:29:42 compute-1 systemd[238594]: Stopped Mark boot as successful after the user session has run 2 minutes.
Jan 20 14:29:42 compute-1 systemd[238594]: Stopped Daily Cleanup of User's Temporary Directories.
Jan 20 14:29:42 compute-1 systemd[238594]: Closed D-Bus User Message Bus Socket.
Jan 20 14:29:42 compute-1 systemd[238594]: Stopped Create User's Volatile Files and Directories.
Jan 20 14:29:42 compute-1 systemd[238594]: Removed slice User Application Slice.
Jan 20 14:29:42 compute-1 systemd[238594]: Reached target Shutdown.
Jan 20 14:29:42 compute-1 systemd[238594]: Finished Exit the Session.
Jan 20 14:29:42 compute-1 systemd[238594]: Reached target Exit the Session.
Jan 20 14:29:42 compute-1 systemd[1]: user@42436.service: Deactivated successfully.
Jan 20 14:29:42 compute-1 systemd[1]: Stopped User Manager for UID 42436.
Jan 20 14:29:42 compute-1 systemd[1]: Stopping User Runtime Directory /run/user/42436...
Jan 20 14:29:42 compute-1 systemd[1]: run-user-42436.mount: Deactivated successfully.
Jan 20 14:29:42 compute-1 systemd[1]: user-runtime-dir@42436.service: Deactivated successfully.
Jan 20 14:29:42 compute-1 systemd[1]: Stopped User Runtime Directory /run/user/42436.
Jan 20 14:29:42 compute-1 systemd[1]: Removed slice User Slice of UID 42436.
Jan 20 14:29:42 compute-1 podman[238697]: 2026-01-20 14:29:42.760771467 +0000 UTC m=+0.058421214 container health_status 533cd8e0c3b8b7f910f15cef49e7f879ab46c12ce646a35628c15f13725eb500 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2)
Jan 20 14:29:43 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:29:43 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:29:43 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:29:43.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:29:43 compute-1 nova_compute[225855]: 2026-01-20 14:29:43.888 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:29:44 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:29:44 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:29:44 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:29:44.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:29:44 compute-1 ceph-mon[81775]: pgmap v1168: 321 pgs: 321 active+clean; 329 MiB data, 497 MiB used, 21 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 136 op/s
Jan 20 14:29:44 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:29:45 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:29:45 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:29:45 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:29:45.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:29:45 compute-1 nova_compute[225855]: 2026-01-20 14:29:45.115 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:29:46 compute-1 ceph-mon[81775]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #55. Immutable memtables: 0.
Jan 20 14:29:46 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:29:46.167318) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 20 14:29:46 compute-1 ceph-mon[81775]: rocksdb: [db/flush_job.cc:856] [default] [JOB 31] Flushing memtable with next log file: 55
Jan 20 14:29:46 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768919386167509, "job": 31, "event": "flush_started", "num_memtables": 1, "num_entries": 433, "num_deletes": 255, "total_data_size": 518289, "memory_usage": 527672, "flush_reason": "Manual Compaction"}
Jan 20 14:29:46 compute-1 ceph-mon[81775]: rocksdb: [db/flush_job.cc:885] [default] [JOB 31] Level-0 flush table #56: started
Jan 20 14:29:46 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768919386173931, "cf_name": "default", "job": 31, "event": "table_file_creation", "file_number": 56, "file_size": 342211, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 27889, "largest_seqno": 28316, "table_properties": {"data_size": 339776, "index_size": 535, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 837, "raw_key_size": 5437, "raw_average_key_size": 17, "raw_value_size": 335011, "raw_average_value_size": 1053, "num_data_blocks": 24, "num_entries": 318, "num_filter_entries": 318, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768919366, "oldest_key_time": 1768919366, "file_creation_time": 1768919386, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1539d774-8a6f-4e48-b253-137c44586344", "db_session_id": "LFF7G2OZDOU7TKQ8MKAH", "orig_file_number": 56, "seqno_to_time_mapping": "N/A"}}
Jan 20 14:29:46 compute-1 ceph-mon[81775]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 31] Flush lasted 6666 microseconds, and 2360 cpu microseconds.
Jan 20 14:29:46 compute-1 ceph-mon[81775]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 14:29:46 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:29:46.173992) [db/flush_job.cc:967] [default] [JOB 31] Level-0 flush table #56: 342211 bytes OK
Jan 20 14:29:46 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:29:46.174011) [db/memtable_list.cc:519] [default] Level-0 commit table #56 started
Jan 20 14:29:46 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:29:46.175543) [db/memtable_list.cc:722] [default] Level-0 commit table #56: memtable #1 done
Jan 20 14:29:46 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:29:46.175559) EVENT_LOG_v1 {"time_micros": 1768919386175554, "job": 31, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 20 14:29:46 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:29:46.175578) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 20 14:29:46 compute-1 ceph-mon[81775]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 31] Try to delete WAL files size 515567, prev total WAL file size 515567, number of live WAL files 2.
Jan 20 14:29:46 compute-1 ceph-mon[81775]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000052.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 14:29:46 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:29:46.176115) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00353039' seq:72057594037927935, type:22 .. '6C6F676D00373630' seq:0, type:0; will stop at (end)
Jan 20 14:29:46 compute-1 ceph-mon[81775]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 32] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 20 14:29:46 compute-1 ceph-mon[81775]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 31 Base level 0, inputs: [56(334KB)], [54(10MB)]
Jan 20 14:29:46 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768919386176177, "job": 32, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [56], "files_L6": [54], "score": -1, "input_data_size": 11087944, "oldest_snapshot_seqno": -1}
Jan 20 14:29:46 compute-1 ceph-mon[81775]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 32] Generated table #57: 5265 keys, 10980131 bytes, temperature: kUnknown
Jan 20 14:29:46 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768919386281146, "cf_name": "default", "job": 32, "event": "table_file_creation", "file_number": 57, "file_size": 10980131, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10941988, "index_size": 23889, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13189, "raw_key_size": 134034, "raw_average_key_size": 25, "raw_value_size": 10844074, "raw_average_value_size": 2059, "num_data_blocks": 979, "num_entries": 5265, "num_filter_entries": 5265, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768917474, "oldest_key_time": 0, "file_creation_time": 1768919386, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1539d774-8a6f-4e48-b253-137c44586344", "db_session_id": "LFF7G2OZDOU7TKQ8MKAH", "orig_file_number": 57, "seqno_to_time_mapping": "N/A"}}
Jan 20 14:29:46 compute-1 ceph-mon[81775]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 14:29:46 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:29:46.281473) [db/compaction/compaction_job.cc:1663] [default] [JOB 32] Compacted 1@0 + 1@6 files to L6 => 10980131 bytes
Jan 20 14:29:46 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:29:46.283146) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 105.5 rd, 104.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.3, 10.2 +0.0 blob) out(10.5 +0.0 blob), read-write-amplify(64.5) write-amplify(32.1) OK, records in: 5783, records dropped: 518 output_compression: NoCompression
Jan 20 14:29:46 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:29:46.283177) EVENT_LOG_v1 {"time_micros": 1768919386283164, "job": 32, "event": "compaction_finished", "compaction_time_micros": 105071, "compaction_time_cpu_micros": 25804, "output_level": 6, "num_output_files": 1, "total_output_size": 10980131, "num_input_records": 5783, "num_output_records": 5265, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 20 14:29:46 compute-1 ceph-mon[81775]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000056.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 14:29:46 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768919386283449, "job": 32, "event": "table_file_deletion", "file_number": 56}
Jan 20 14:29:46 compute-1 ceph-mon[81775]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000054.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 14:29:46 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768919386286989, "job": 32, "event": "table_file_deletion", "file_number": 54}
Jan 20 14:29:46 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:29:46.175995) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:29:46 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:29:46.287104) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:29:46 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:29:46.287109) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:29:46 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:29:46.287111) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:29:46 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:29:46.287112) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:29:46 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:29:46.287114) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:29:46 compute-1 nova_compute[225855]: 2026-01-20 14:29:46.291 225859 DEBUG oslo_concurrency.lockutils [None req-69535ae1-0353-42a7-8c9d-2289163476c0 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Acquiring lock "refresh_cache-9f5c9253-e2bd-42d3-8253-fac568daeda7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:29:46 compute-1 nova_compute[225855]: 2026-01-20 14:29:46.292 225859 DEBUG oslo_concurrency.lockutils [None req-69535ae1-0353-42a7-8c9d-2289163476c0 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Acquired lock "refresh_cache-9f5c9253-e2bd-42d3-8253-fac568daeda7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:29:46 compute-1 nova_compute[225855]: 2026-01-20 14:29:46.292 225859 DEBUG nova.network.neutron [None req-69535ae1-0353-42a7-8c9d-2289163476c0 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] [instance: 9f5c9253-e2bd-42d3-8253-fac568daeda7] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 14:29:46 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:29:46 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:29:46 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:29:46.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:29:46 compute-1 nova_compute[225855]: 2026-01-20 14:29:46.525 225859 DEBUG nova.network.neutron [None req-69535ae1-0353-42a7-8c9d-2289163476c0 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] [instance: 9f5c9253-e2bd-42d3-8253-fac568daeda7] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 20 14:29:46 compute-1 ceph-mon[81775]: pgmap v1169: 321 pgs: 321 active+clean; 338 MiB data, 516 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 3.0 MiB/s wr, 165 op/s
Jan 20 14:29:47 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:29:47 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:29:47 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:29:47.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:29:47 compute-1 nova_compute[225855]: 2026-01-20 14:29:47.101 225859 DEBUG nova.network.neutron [None req-69535ae1-0353-42a7-8c9d-2289163476c0 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] [instance: 9f5c9253-e2bd-42d3-8253-fac568daeda7] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:29:47 compute-1 nova_compute[225855]: 2026-01-20 14:29:47.117 225859 DEBUG oslo_concurrency.lockutils [None req-69535ae1-0353-42a7-8c9d-2289163476c0 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Releasing lock "refresh_cache-9f5c9253-e2bd-42d3-8253-fac568daeda7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:29:47 compute-1 nova_compute[225855]: 2026-01-20 14:29:47.254 225859 DEBUG nova.virt.libvirt.driver [None req-69535ae1-0353-42a7-8c9d-2289163476c0 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] [instance: 9f5c9253-e2bd-42d3-8253-fac568daeda7] Starting finish_migration finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11698
Jan 20 14:29:47 compute-1 nova_compute[225855]: 2026-01-20 14:29:47.257 225859 DEBUG nova.virt.libvirt.driver [None req-69535ae1-0353-42a7-8c9d-2289163476c0 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] [instance: 9f5c9253-e2bd-42d3-8253-fac568daeda7] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719
Jan 20 14:29:47 compute-1 nova_compute[225855]: 2026-01-20 14:29:47.257 225859 INFO nova.virt.libvirt.driver [None req-69535ae1-0353-42a7-8c9d-2289163476c0 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] [instance: 9f5c9253-e2bd-42d3-8253-fac568daeda7] Creating image(s)
Jan 20 14:29:47 compute-1 nova_compute[225855]: 2026-01-20 14:29:47.304 225859 DEBUG nova.storage.rbd_utils [None req-69535ae1-0353-42a7-8c9d-2289163476c0 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] creating snapshot(nova-resize) on rbd image(9f5c9253-e2bd-42d3-8253-fac568daeda7_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 20 14:29:47 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e160 e160: 3 total, 3 up, 3 in
Jan 20 14:29:47 compute-1 nova_compute[225855]: 2026-01-20 14:29:47.664 225859 DEBUG nova.objects.instance [None req-69535ae1-0353-42a7-8c9d-2289163476c0 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 9f5c9253-e2bd-42d3-8253-fac568daeda7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:29:47 compute-1 nova_compute[225855]: 2026-01-20 14:29:47.768 225859 DEBUG nova.virt.libvirt.driver [None req-69535ae1-0353-42a7-8c9d-2289163476c0 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] [instance: 9f5c9253-e2bd-42d3-8253-fac568daeda7] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Jan 20 14:29:47 compute-1 nova_compute[225855]: 2026-01-20 14:29:47.768 225859 DEBUG nova.virt.libvirt.driver [None req-69535ae1-0353-42a7-8c9d-2289163476c0 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] [instance: 9f5c9253-e2bd-42d3-8253-fac568daeda7] Ensure instance console log exists: /var/lib/nova/instances/9f5c9253-e2bd-42d3-8253-fac568daeda7/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 20 14:29:47 compute-1 nova_compute[225855]: 2026-01-20 14:29:47.769 225859 DEBUG oslo_concurrency.lockutils [None req-69535ae1-0353-42a7-8c9d-2289163476c0 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:29:47 compute-1 nova_compute[225855]: 2026-01-20 14:29:47.770 225859 DEBUG oslo_concurrency.lockutils [None req-69535ae1-0353-42a7-8c9d-2289163476c0 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:29:47 compute-1 nova_compute[225855]: 2026-01-20 14:29:47.770 225859 DEBUG oslo_concurrency.lockutils [None req-69535ae1-0353-42a7-8c9d-2289163476c0 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:29:47 compute-1 nova_compute[225855]: 2026-01-20 14:29:47.772 225859 DEBUG nova.virt.libvirt.driver [None req-69535ae1-0353-42a7-8c9d-2289163476c0 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] [instance: 9f5c9253-e2bd-42d3-8253-fac568daeda7] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'device_type': 'disk', 'encryption_options': None, 'size': 0, 'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'guest_format': None, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'image_id': 'a32b3e07-16d8-46fd-9a7b-c242c432fcf9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 20 14:29:47 compute-1 nova_compute[225855]: 2026-01-20 14:29:47.778 225859 WARNING nova.virt.libvirt.driver [None req-69535ae1-0353-42a7-8c9d-2289163476c0 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 14:29:47 compute-1 nova_compute[225855]: 2026-01-20 14:29:47.787 225859 DEBUG nova.virt.libvirt.host [None req-69535ae1-0353-42a7-8c9d-2289163476c0 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 20 14:29:47 compute-1 nova_compute[225855]: 2026-01-20 14:29:47.787 225859 DEBUG nova.virt.libvirt.host [None req-69535ae1-0353-42a7-8c9d-2289163476c0 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 20 14:29:47 compute-1 nova_compute[225855]: 2026-01-20 14:29:47.791 225859 DEBUG nova.virt.libvirt.host [None req-69535ae1-0353-42a7-8c9d-2289163476c0 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 20 14:29:47 compute-1 nova_compute[225855]: 2026-01-20 14:29:47.791 225859 DEBUG nova.virt.libvirt.host [None req-69535ae1-0353-42a7-8c9d-2289163476c0 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 20 14:29:47 compute-1 nova_compute[225855]: 2026-01-20 14:29:47.793 225859 DEBUG nova.virt.libvirt.driver [None req-69535ae1-0353-42a7-8c9d-2289163476c0 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 20 14:29:47 compute-1 nova_compute[225855]: 2026-01-20 14:29:47.793 225859 DEBUG nova.virt.hardware [None req-69535ae1-0353-42a7-8c9d-2289163476c0 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-20T14:21:55Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='30c26a27-d918-46d8-a512-4ef3b4ce5955',id=2,is_public=True,memory_mb=192,name='m1.micro',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 20 14:29:47 compute-1 nova_compute[225855]: 2026-01-20 14:29:47.794 225859 DEBUG nova.virt.hardware [None req-69535ae1-0353-42a7-8c9d-2289163476c0 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 20 14:29:47 compute-1 nova_compute[225855]: 2026-01-20 14:29:47.794 225859 DEBUG nova.virt.hardware [None req-69535ae1-0353-42a7-8c9d-2289163476c0 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 20 14:29:47 compute-1 nova_compute[225855]: 2026-01-20 14:29:47.794 225859 DEBUG nova.virt.hardware [None req-69535ae1-0353-42a7-8c9d-2289163476c0 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 20 14:29:47 compute-1 nova_compute[225855]: 2026-01-20 14:29:47.795 225859 DEBUG nova.virt.hardware [None req-69535ae1-0353-42a7-8c9d-2289163476c0 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 20 14:29:47 compute-1 nova_compute[225855]: 2026-01-20 14:29:47.795 225859 DEBUG nova.virt.hardware [None req-69535ae1-0353-42a7-8c9d-2289163476c0 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 20 14:29:47 compute-1 nova_compute[225855]: 2026-01-20 14:29:47.795 225859 DEBUG nova.virt.hardware [None req-69535ae1-0353-42a7-8c9d-2289163476c0 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 20 14:29:47 compute-1 nova_compute[225855]: 2026-01-20 14:29:47.796 225859 DEBUG nova.virt.hardware [None req-69535ae1-0353-42a7-8c9d-2289163476c0 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 20 14:29:47 compute-1 nova_compute[225855]: 2026-01-20 14:29:47.796 225859 DEBUG nova.virt.hardware [None req-69535ae1-0353-42a7-8c9d-2289163476c0 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 20 14:29:47 compute-1 nova_compute[225855]: 2026-01-20 14:29:47.796 225859 DEBUG nova.virt.hardware [None req-69535ae1-0353-42a7-8c9d-2289163476c0 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 20 14:29:47 compute-1 nova_compute[225855]: 2026-01-20 14:29:47.797 225859 DEBUG nova.virt.hardware [None req-69535ae1-0353-42a7-8c9d-2289163476c0 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 20 14:29:47 compute-1 nova_compute[225855]: 2026-01-20 14:29:47.797 225859 DEBUG nova.objects.instance [None req-69535ae1-0353-42a7-8c9d-2289163476c0 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 9f5c9253-e2bd-42d3-8253-fac568daeda7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:29:47 compute-1 nova_compute[225855]: 2026-01-20 14:29:47.812 225859 DEBUG oslo_concurrency.processutils [None req-69535ae1-0353-42a7-8c9d-2289163476c0 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:29:48 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:29:48 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3115653324' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:29:48 compute-1 nova_compute[225855]: 2026-01-20 14:29:48.240 225859 DEBUG oslo_concurrency.processutils [None req-69535ae1-0353-42a7-8c9d-2289163476c0 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:29:48 compute-1 nova_compute[225855]: 2026-01-20 14:29:48.278 225859 DEBUG oslo_concurrency.processutils [None req-69535ae1-0353-42a7-8c9d-2289163476c0 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:29:48 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:29:48.332 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5ffd4ac3-9266-4927-98ad-20a17782c725, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '10'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:29:48 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:29:48 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:29:48 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:29:48.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:29:48 compute-1 ceph-mon[81775]: pgmap v1170: 321 pgs: 321 active+clean; 349 MiB data, 527 MiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 3.9 MiB/s wr, 138 op/s
Jan 20 14:29:48 compute-1 ceph-mon[81775]: osdmap e160: 3 total, 3 up, 3 in
Jan 20 14:29:48 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/3115653324' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:29:48 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:29:48 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2245526178' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:29:48 compute-1 nova_compute[225855]: 2026-01-20 14:29:48.761 225859 DEBUG oslo_concurrency.processutils [None req-69535ae1-0353-42a7-8c9d-2289163476c0 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:29:48 compute-1 nova_compute[225855]: 2026-01-20 14:29:48.765 225859 DEBUG nova.virt.libvirt.driver [None req-69535ae1-0353-42a7-8c9d-2289163476c0 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] [instance: 9f5c9253-e2bd-42d3-8253-fac568daeda7] End _get_guest_xml xml=<domain type="kvm">
Jan 20 14:29:48 compute-1 nova_compute[225855]:   <uuid>9f5c9253-e2bd-42d3-8253-fac568daeda7</uuid>
Jan 20 14:29:48 compute-1 nova_compute[225855]:   <name>instance-0000001b</name>
Jan 20 14:29:48 compute-1 nova_compute[225855]:   <memory>196608</memory>
Jan 20 14:29:48 compute-1 nova_compute[225855]:   <vcpu>1</vcpu>
Jan 20 14:29:48 compute-1 nova_compute[225855]:   <metadata>
Jan 20 14:29:48 compute-1 nova_compute[225855]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 14:29:48 compute-1 nova_compute[225855]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 14:29:48 compute-1 nova_compute[225855]:       <nova:name>tempest-MigrationsAdminTest-server-326963183</nova:name>
Jan 20 14:29:48 compute-1 nova_compute[225855]:       <nova:creationTime>2026-01-20 14:29:47</nova:creationTime>
Jan 20 14:29:48 compute-1 nova_compute[225855]:       <nova:flavor name="m1.micro">
Jan 20 14:29:48 compute-1 nova_compute[225855]:         <nova:memory>192</nova:memory>
Jan 20 14:29:48 compute-1 nova_compute[225855]:         <nova:disk>1</nova:disk>
Jan 20 14:29:48 compute-1 nova_compute[225855]:         <nova:swap>0</nova:swap>
Jan 20 14:29:48 compute-1 nova_compute[225855]:         <nova:ephemeral>0</nova:ephemeral>
Jan 20 14:29:48 compute-1 nova_compute[225855]:         <nova:vcpus>1</nova:vcpus>
Jan 20 14:29:48 compute-1 nova_compute[225855]:       </nova:flavor>
Jan 20 14:29:48 compute-1 nova_compute[225855]:       <nova:owner>
Jan 20 14:29:48 compute-1 nova_compute[225855]:         <nova:user uuid="01a3d712f05049b19d4ecc7051720ad5">tempest-MigrationsAdminTest-1518611738-project-member</nova:user>
Jan 20 14:29:48 compute-1 nova_compute[225855]:         <nova:project uuid="f3c2e72a7148496394c8bcd618a19c80">tempest-MigrationsAdminTest-1518611738</nova:project>
Jan 20 14:29:48 compute-1 nova_compute[225855]:       </nova:owner>
Jan 20 14:29:48 compute-1 nova_compute[225855]:       <nova:root type="image" uuid="a32b3e07-16d8-46fd-9a7b-c242c432fcf9"/>
Jan 20 14:29:48 compute-1 nova_compute[225855]:       <nova:ports/>
Jan 20 14:29:48 compute-1 nova_compute[225855]:     </nova:instance>
Jan 20 14:29:48 compute-1 nova_compute[225855]:   </metadata>
Jan 20 14:29:48 compute-1 nova_compute[225855]:   <sysinfo type="smbios">
Jan 20 14:29:48 compute-1 nova_compute[225855]:     <system>
Jan 20 14:29:48 compute-1 nova_compute[225855]:       <entry name="manufacturer">RDO</entry>
Jan 20 14:29:48 compute-1 nova_compute[225855]:       <entry name="product">OpenStack Compute</entry>
Jan 20 14:29:48 compute-1 nova_compute[225855]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 14:29:48 compute-1 nova_compute[225855]:       <entry name="serial">9f5c9253-e2bd-42d3-8253-fac568daeda7</entry>
Jan 20 14:29:48 compute-1 nova_compute[225855]:       <entry name="uuid">9f5c9253-e2bd-42d3-8253-fac568daeda7</entry>
Jan 20 14:29:48 compute-1 nova_compute[225855]:       <entry name="family">Virtual Machine</entry>
Jan 20 14:29:48 compute-1 nova_compute[225855]:     </system>
Jan 20 14:29:48 compute-1 nova_compute[225855]:   </sysinfo>
Jan 20 14:29:48 compute-1 nova_compute[225855]:   <os>
Jan 20 14:29:48 compute-1 nova_compute[225855]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 20 14:29:48 compute-1 nova_compute[225855]:     <boot dev="hd"/>
Jan 20 14:29:48 compute-1 nova_compute[225855]:     <smbios mode="sysinfo"/>
Jan 20 14:29:48 compute-1 nova_compute[225855]:   </os>
Jan 20 14:29:48 compute-1 nova_compute[225855]:   <features>
Jan 20 14:29:48 compute-1 nova_compute[225855]:     <acpi/>
Jan 20 14:29:48 compute-1 nova_compute[225855]:     <apic/>
Jan 20 14:29:48 compute-1 nova_compute[225855]:     <vmcoreinfo/>
Jan 20 14:29:48 compute-1 nova_compute[225855]:   </features>
Jan 20 14:29:48 compute-1 nova_compute[225855]:   <clock offset="utc">
Jan 20 14:29:48 compute-1 nova_compute[225855]:     <timer name="pit" tickpolicy="delay"/>
Jan 20 14:29:48 compute-1 nova_compute[225855]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 20 14:29:48 compute-1 nova_compute[225855]:     <timer name="hpet" present="no"/>
Jan 20 14:29:48 compute-1 nova_compute[225855]:   </clock>
Jan 20 14:29:48 compute-1 nova_compute[225855]:   <cpu mode="custom" match="exact">
Jan 20 14:29:48 compute-1 nova_compute[225855]:     <model>Nehalem</model>
Jan 20 14:29:48 compute-1 nova_compute[225855]:     <topology sockets="1" cores="1" threads="1"/>
Jan 20 14:29:48 compute-1 nova_compute[225855]:   </cpu>
Jan 20 14:29:48 compute-1 nova_compute[225855]:   <devices>
Jan 20 14:29:48 compute-1 nova_compute[225855]:     <disk type="network" device="disk">
Jan 20 14:29:48 compute-1 nova_compute[225855]:       <driver type="raw" cache="none"/>
Jan 20 14:29:48 compute-1 nova_compute[225855]:       <source protocol="rbd" name="vms/9f5c9253-e2bd-42d3-8253-fac568daeda7_disk">
Jan 20 14:29:48 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:29:48 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:29:48 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:29:48 compute-1 nova_compute[225855]:       </source>
Jan 20 14:29:48 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 14:29:48 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:29:48 compute-1 nova_compute[225855]:       </auth>
Jan 20 14:29:48 compute-1 nova_compute[225855]:       <target dev="vda" bus="virtio"/>
Jan 20 14:29:48 compute-1 nova_compute[225855]:     </disk>
Jan 20 14:29:48 compute-1 nova_compute[225855]:     <disk type="network" device="cdrom">
Jan 20 14:29:48 compute-1 nova_compute[225855]:       <driver type="raw" cache="none"/>
Jan 20 14:29:48 compute-1 nova_compute[225855]:       <source protocol="rbd" name="vms/9f5c9253-e2bd-42d3-8253-fac568daeda7_disk.config">
Jan 20 14:29:48 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:29:48 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:29:48 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:29:48 compute-1 nova_compute[225855]:       </source>
Jan 20 14:29:48 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 14:29:48 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:29:48 compute-1 nova_compute[225855]:       </auth>
Jan 20 14:29:48 compute-1 nova_compute[225855]:       <target dev="sda" bus="sata"/>
Jan 20 14:29:48 compute-1 nova_compute[225855]:     </disk>
Jan 20 14:29:48 compute-1 nova_compute[225855]:     <serial type="pty">
Jan 20 14:29:48 compute-1 nova_compute[225855]:       <log file="/var/lib/nova/instances/9f5c9253-e2bd-42d3-8253-fac568daeda7/console.log" append="off"/>
Jan 20 14:29:48 compute-1 nova_compute[225855]:     </serial>
Jan 20 14:29:48 compute-1 nova_compute[225855]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 20 14:29:48 compute-1 nova_compute[225855]:     <video>
Jan 20 14:29:48 compute-1 nova_compute[225855]:       <model type="virtio"/>
Jan 20 14:29:48 compute-1 nova_compute[225855]:     </video>
Jan 20 14:29:48 compute-1 nova_compute[225855]:     <input type="tablet" bus="usb"/>
Jan 20 14:29:48 compute-1 nova_compute[225855]:     <rng model="virtio">
Jan 20 14:29:48 compute-1 nova_compute[225855]:       <backend model="random">/dev/urandom</backend>
Jan 20 14:29:48 compute-1 nova_compute[225855]:     </rng>
Jan 20 14:29:48 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root"/>
Jan 20 14:29:48 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:29:48 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:29:48 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:29:48 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:29:48 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:29:48 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:29:48 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:29:48 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:29:48 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:29:48 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:29:48 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:29:48 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:29:48 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:29:48 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:29:48 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:29:48 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:29:48 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:29:48 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:29:48 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:29:48 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:29:48 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:29:48 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:29:48 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:29:48 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:29:48 compute-1 nova_compute[225855]:     <controller type="usb" index="0"/>
Jan 20 14:29:48 compute-1 nova_compute[225855]:     <memballoon model="virtio">
Jan 20 14:29:48 compute-1 nova_compute[225855]:       <stats period="10"/>
Jan 20 14:29:48 compute-1 nova_compute[225855]:     </memballoon>
Jan 20 14:29:48 compute-1 nova_compute[225855]:   </devices>
Jan 20 14:29:48 compute-1 nova_compute[225855]: </domain>
Jan 20 14:29:48 compute-1 nova_compute[225855]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 20 14:29:48 compute-1 nova_compute[225855]: 2026-01-20 14:29:48.850 225859 DEBUG nova.virt.libvirt.driver [None req-69535ae1-0353-42a7-8c9d-2289163476c0 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:29:48 compute-1 nova_compute[225855]: 2026-01-20 14:29:48.850 225859 DEBUG nova.virt.libvirt.driver [None req-69535ae1-0353-42a7-8c9d-2289163476c0 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:29:48 compute-1 nova_compute[225855]: 2026-01-20 14:29:48.851 225859 INFO nova.virt.libvirt.driver [None req-69535ae1-0353-42a7-8c9d-2289163476c0 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] [instance: 9f5c9253-e2bd-42d3-8253-fac568daeda7] Using config drive
Jan 20 14:29:48 compute-1 nova_compute[225855]: 2026-01-20 14:29:48.889 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:29:48 compute-1 systemd-machined[194361]: New machine qemu-13-instance-0000001b.
Jan 20 14:29:48 compute-1 systemd[1]: Started Virtual Machine qemu-13-instance-0000001b.
Jan 20 14:29:49 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:29:49 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:29:49 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:29:49.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:29:49 compute-1 nova_compute[225855]: 2026-01-20 14:29:49.587 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768919389.586829, 9f5c9253-e2bd-42d3-8253-fac568daeda7 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:29:49 compute-1 nova_compute[225855]: 2026-01-20 14:29:49.588 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 9f5c9253-e2bd-42d3-8253-fac568daeda7] VM Resumed (Lifecycle Event)
Jan 20 14:29:49 compute-1 nova_compute[225855]: 2026-01-20 14:29:49.590 225859 DEBUG nova.compute.manager [None req-69535ae1-0353-42a7-8c9d-2289163476c0 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] [instance: 9f5c9253-e2bd-42d3-8253-fac568daeda7] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 20 14:29:49 compute-1 nova_compute[225855]: 2026-01-20 14:29:49.594 225859 INFO nova.virt.libvirt.driver [-] [instance: 9f5c9253-e2bd-42d3-8253-fac568daeda7] Instance running successfully.
Jan 20 14:29:49 compute-1 virtqemud[225396]: argument unsupported: QEMU guest agent is not configured
Jan 20 14:29:49 compute-1 nova_compute[225855]: 2026-01-20 14:29:49.596 225859 DEBUG nova.virt.libvirt.guest [None req-69535ae1-0353-42a7-8c9d-2289163476c0 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] [instance: 9f5c9253-e2bd-42d3-8253-fac568daeda7] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200
Jan 20 14:29:49 compute-1 nova_compute[225855]: 2026-01-20 14:29:49.597 225859 DEBUG nova.virt.libvirt.driver [None req-69535ae1-0353-42a7-8c9d-2289163476c0 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] [instance: 9f5c9253-e2bd-42d3-8253-fac568daeda7] finish_migration finished successfully. finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11793
Jan 20 14:29:49 compute-1 nova_compute[225855]: 2026-01-20 14:29:49.619 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 9f5c9253-e2bd-42d3-8253-fac568daeda7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:29:49 compute-1 nova_compute[225855]: 2026-01-20 14:29:49.623 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 9f5c9253-e2bd-42d3-8253-fac568daeda7] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:29:49 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/2245526178' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:29:49 compute-1 nova_compute[225855]: 2026-01-20 14:29:49.652 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 9f5c9253-e2bd-42d3-8253-fac568daeda7] During sync_power_state the instance has a pending task (resize_finish). Skip.
Jan 20 14:29:49 compute-1 nova_compute[225855]: 2026-01-20 14:29:49.653 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768919389.588149, 9f5c9253-e2bd-42d3-8253-fac568daeda7 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:29:49 compute-1 nova_compute[225855]: 2026-01-20 14:29:49.654 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 9f5c9253-e2bd-42d3-8253-fac568daeda7] VM Started (Lifecycle Event)
Jan 20 14:29:49 compute-1 nova_compute[225855]: 2026-01-20 14:29:49.702 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 9f5c9253-e2bd-42d3-8253-fac568daeda7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:29:49 compute-1 nova_compute[225855]: 2026-01-20 14:29:49.705 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 9f5c9253-e2bd-42d3-8253-fac568daeda7] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:29:49 compute-1 nova_compute[225855]: 2026-01-20 14:29:49.749 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 9f5c9253-e2bd-42d3-8253-fac568daeda7] During sync_power_state the instance has a pending task (resize_finish). Skip.
Jan 20 14:29:49 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:29:50 compute-1 nova_compute[225855]: 2026-01-20 14:29:50.117 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:29:50 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:29:50 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:29:50 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:29:50.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:29:50 compute-1 ceph-mon[81775]: pgmap v1172: 321 pgs: 321 active+clean; 355 MiB data, 531 MiB used, 20 GiB / 21 GiB avail; 553 KiB/s rd, 4.3 MiB/s wr, 116 op/s
Jan 20 14:29:51 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:29:51 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:29:51 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:29:51.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:29:51 compute-1 sshd-session[238811]: Connection closed by authenticating user root 45.179.5.170 port 56854 [preauth]
Jan 20 14:29:51 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/956641195' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:29:52 compute-1 nova_compute[225855]: 2026-01-20 14:29:52.104 225859 DEBUG oslo_concurrency.lockutils [None req-5125edda-6367-41ce-aa7c-33caccb49490 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Acquiring lock "refresh_cache-9f5c9253-e2bd-42d3-8253-fac568daeda7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:29:52 compute-1 nova_compute[225855]: 2026-01-20 14:29:52.104 225859 DEBUG oslo_concurrency.lockutils [None req-5125edda-6367-41ce-aa7c-33caccb49490 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Acquired lock "refresh_cache-9f5c9253-e2bd-42d3-8253-fac568daeda7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:29:52 compute-1 nova_compute[225855]: 2026-01-20 14:29:52.104 225859 DEBUG nova.network.neutron [None req-5125edda-6367-41ce-aa7c-33caccb49490 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] [instance: 9f5c9253-e2bd-42d3-8253-fac568daeda7] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 14:29:52 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:29:52 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:29:52 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:29:52.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:29:52 compute-1 nova_compute[225855]: 2026-01-20 14:29:52.501 225859 DEBUG nova.network.neutron [None req-5125edda-6367-41ce-aa7c-33caccb49490 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] [instance: 9f5c9253-e2bd-42d3-8253-fac568daeda7] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 20 14:29:52 compute-1 ceph-mon[81775]: pgmap v1173: 321 pgs: 321 active+clean; 362 MiB data, 540 MiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 2.6 MiB/s wr, 131 op/s
Jan 20 14:29:53 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:29:53 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:29:53 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:29:53.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:29:53 compute-1 nova_compute[225855]: 2026-01-20 14:29:53.448 225859 DEBUG nova.network.neutron [None req-5125edda-6367-41ce-aa7c-33caccb49490 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] [instance: 9f5c9253-e2bd-42d3-8253-fac568daeda7] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:29:53 compute-1 nova_compute[225855]: 2026-01-20 14:29:53.892 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:29:53 compute-1 ceph-mon[81775]: pgmap v1174: 321 pgs: 321 active+clean; 362 MiB data, 540 MiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 2.6 MiB/s wr, 131 op/s
Jan 20 14:29:54 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:29:54 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:29:54 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:29:54.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:29:54 compute-1 nova_compute[225855]: 2026-01-20 14:29:54.384 225859 DEBUG oslo_concurrency.lockutils [None req-5125edda-6367-41ce-aa7c-33caccb49490 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Releasing lock "refresh_cache-9f5c9253-e2bd-42d3-8253-fac568daeda7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:29:54 compute-1 systemd[1]: machine-qemu\x2d13\x2dinstance\x2d0000001b.scope: Deactivated successfully.
Jan 20 14:29:54 compute-1 systemd[1]: machine-qemu\x2d13\x2dinstance\x2d0000001b.scope: Consumed 5.640s CPU time.
Jan 20 14:29:54 compute-1 systemd-machined[194361]: Machine qemu-13-instance-0000001b terminated.
Jan 20 14:29:54 compute-1 nova_compute[225855]: 2026-01-20 14:29:54.630 225859 INFO nova.virt.libvirt.driver [-] [instance: 9f5c9253-e2bd-42d3-8253-fac568daeda7] Instance destroyed successfully.
Jan 20 14:29:54 compute-1 nova_compute[225855]: 2026-01-20 14:29:54.631 225859 DEBUG nova.objects.instance [None req-5125edda-6367-41ce-aa7c-33caccb49490 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Lazy-loading 'resources' on Instance uuid 9f5c9253-e2bd-42d3-8253-fac568daeda7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:29:54 compute-1 nova_compute[225855]: 2026-01-20 14:29:54.649 225859 DEBUG oslo_concurrency.lockutils [None req-5125edda-6367-41ce-aa7c-33caccb49490 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_dest" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:29:54 compute-1 nova_compute[225855]: 2026-01-20 14:29:54.650 225859 DEBUG oslo_concurrency.lockutils [None req-5125edda-6367-41ce-aa7c-33caccb49490 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_dest" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:29:54 compute-1 nova_compute[225855]: 2026-01-20 14:29:54.668 225859 DEBUG nova.objects.instance [None req-5125edda-6367-41ce-aa7c-33caccb49490 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Lazy-loading 'migration_context' on Instance uuid 9f5c9253-e2bd-42d3-8253-fac568daeda7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:29:54 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:29:54 compute-1 nova_compute[225855]: 2026-01-20 14:29:54.797 225859 DEBUG oslo_concurrency.processutils [None req-5125edda-6367-41ce-aa7c-33caccb49490 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:29:55 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:29:55 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:29:55 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:29:55.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:29:55 compute-1 nova_compute[225855]: 2026-01-20 14:29:55.120 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:29:55 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:29:55 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3861732727' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:29:55 compute-1 nova_compute[225855]: 2026-01-20 14:29:55.325 225859 DEBUG oslo_concurrency.processutils [None req-5125edda-6367-41ce-aa7c-33caccb49490 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.528s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:29:55 compute-1 nova_compute[225855]: 2026-01-20 14:29:55.333 225859 DEBUG nova.compute.provider_tree [None req-5125edda-6367-41ce-aa7c-33caccb49490 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:29:55 compute-1 nova_compute[225855]: 2026-01-20 14:29:55.361 225859 DEBUG nova.scheduler.client.report [None req-5125edda-6367-41ce-aa7c-33caccb49490 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:29:55 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/3861732727' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:29:55 compute-1 nova_compute[225855]: 2026-01-20 14:29:55.436 225859 DEBUG oslo_concurrency.lockutils [None req-5125edda-6367-41ce-aa7c-33caccb49490 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_dest" :: held 0.786s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:29:56 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:29:56 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:29:56 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:29:56.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:29:56 compute-1 ceph-mon[81775]: pgmap v1175: 321 pgs: 321 active+clean; 386 MiB data, 553 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.9 MiB/s wr, 168 op/s
Jan 20 14:29:57 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:29:57 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:29:57 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:29:57.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:29:57 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/2270811472' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:29:57 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/3031609186' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:29:57 compute-1 ceph-mon[81775]: pgmap v1176: 321 pgs: 321 active+clean; 409 MiB data, 561 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 2.6 MiB/s wr, 156 op/s
Jan 20 14:29:58 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:29:58 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:29:58 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:29:58.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:29:58 compute-1 nova_compute[225855]: 2026-01-20 14:29:58.981 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:29:59 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:29:59 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:29:59 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:29:59.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:29:59 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e161 e161: 3 total, 3 up, 3 in
Jan 20 14:29:59 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:29:59 compute-1 sudo[238962]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:29:59 compute-1 sudo[238962]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:29:59 compute-1 sudo[238962]: pam_unix(sudo:session): session closed for user root
Jan 20 14:29:59 compute-1 sudo[238987]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:29:59 compute-1 sudo[238987]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:29:59 compute-1 sudo[238987]: pam_unix(sudo:session): session closed for user root
Jan 20 14:30:00 compute-1 ceph-mon[81775]: osdmap e161: 3 total, 3 up, 3 in
Jan 20 14:30:00 compute-1 ceph-mon[81775]: pgmap v1178: 321 pgs: 321 active+clean; 409 MiB data, 561 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 2.2 MiB/s wr, 153 op/s
Jan 20 14:30:00 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/1957976833' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:30:00 compute-1 ceph-mon[81775]: overall HEALTH_OK
Jan 20 14:30:00 compute-1 nova_compute[225855]: 2026-01-20 14:30:00.124 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:30:00 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:30:00 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:30:00 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:30:00.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:30:01 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:30:01 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:30:01 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:30:01.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:30:01 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/3310716266' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:30:02 compute-1 ovn_controller[130490]: 2026-01-20T14:30:02Z|00108|memory_trim|INFO|Detected inactivity (last active 30002 ms ago): trimming memory
Jan 20 14:30:02 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:30:02 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:30:02 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:30:02.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:30:02 compute-1 ceph-mon[81775]: pgmap v1179: 321 pgs: 321 active+clean; 409 MiB data, 561 MiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 2.2 MiB/s wr, 107 op/s
Jan 20 14:30:03 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:30:03 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:30:03 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:30:03.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:30:03 compute-1 podman[239013]: 2026-01-20 14:30:03.084806497 +0000 UTC m=+0.115055886 container health_status 72be95c12e041eb2cd1a16ad70dfa1391d9f03a5f3180a614ead9459dcfcd46f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 20 14:30:03 compute-1 nova_compute[225855]: 2026-01-20 14:30:03.983 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:30:04 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:30:04 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:30:04 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:30:04.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:30:04 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:30:04 compute-1 ceph-mon[81775]: pgmap v1180: 321 pgs: 321 active+clean; 409 MiB data, 561 MiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 2.2 MiB/s wr, 107 op/s
Jan 20 14:30:05 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:30:05 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:30:05 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:30:05.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:30:05 compute-1 nova_compute[225855]: 2026-01-20 14:30:05.126 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:30:06 compute-1 ceph-mon[81775]: pgmap v1181: 321 pgs: 321 active+clean; 409 MiB data, 562 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 917 KiB/s wr, 133 op/s
Jan 20 14:30:06 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:30:06 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:30:06 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:30:06.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:30:06 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e162 e162: 3 total, 3 up, 3 in
Jan 20 14:30:07 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:30:07 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:30:07 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:30:07.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:30:07 compute-1 ceph-mon[81775]: osdmap e162: 3 total, 3 up, 3 in
Jan 20 14:30:07 compute-1 nova_compute[225855]: 2026-01-20 14:30:07.674 225859 DEBUG oslo_concurrency.lockutils [None req-9274dcb1-dd36-4dea-af60-f65dbc3dda50 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Acquiring lock "d95ca690-20e1-4b0c-919b-d64c9af25eba" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:30:07 compute-1 nova_compute[225855]: 2026-01-20 14:30:07.675 225859 DEBUG oslo_concurrency.lockutils [None req-9274dcb1-dd36-4dea-af60-f65dbc3dda50 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Lock "d95ca690-20e1-4b0c-919b-d64c9af25eba" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:30:07 compute-1 nova_compute[225855]: 2026-01-20 14:30:07.842 225859 DEBUG nova.compute.manager [None req-9274dcb1-dd36-4dea-af60-f65dbc3dda50 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] [instance: d95ca690-20e1-4b0c-919b-d64c9af25eba] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 20 14:30:08 compute-1 nova_compute[225855]: 2026-01-20 14:30:08.196 225859 DEBUG oslo_concurrency.lockutils [None req-9274dcb1-dd36-4dea-af60-f65dbc3dda50 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:30:08 compute-1 nova_compute[225855]: 2026-01-20 14:30:08.197 225859 DEBUG oslo_concurrency.lockutils [None req-9274dcb1-dd36-4dea-af60-f65dbc3dda50 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:30:08 compute-1 nova_compute[225855]: 2026-01-20 14:30:08.206 225859 DEBUG nova.virt.hardware [None req-9274dcb1-dd36-4dea-af60-f65dbc3dda50 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 20 14:30:08 compute-1 nova_compute[225855]: 2026-01-20 14:30:08.207 225859 INFO nova.compute.claims [None req-9274dcb1-dd36-4dea-af60-f65dbc3dda50 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] [instance: d95ca690-20e1-4b0c-919b-d64c9af25eba] Claim successful on node compute-1.ctlplane.example.com
Jan 20 14:30:08 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:30:08 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:30:08 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:30:08.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:30:08 compute-1 sudo[239042]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:30:08 compute-1 sudo[239042]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:30:08 compute-1 sudo[239042]: pam_unix(sudo:session): session closed for user root
Jan 20 14:30:08 compute-1 sudo[239067]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:30:08 compute-1 sudo[239067]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:30:08 compute-1 sudo[239067]: pam_unix(sudo:session): session closed for user root
Jan 20 14:30:08 compute-1 sudo[239092]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:30:08 compute-1 sudo[239092]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:30:08 compute-1 sudo[239092]: pam_unix(sudo:session): session closed for user root
Jan 20 14:30:08 compute-1 nova_compute[225855]: 2026-01-20 14:30:08.561 225859 DEBUG oslo_concurrency.processutils [None req-9274dcb1-dd36-4dea-af60-f65dbc3dda50 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:30:08 compute-1 ceph-mon[81775]: pgmap v1183: 321 pgs: 321 active+clean; 409 MiB data, 562 MiB used, 20 GiB / 21 GiB avail; 5.6 MiB/s rd, 26 KiB/s wr, 230 op/s
Jan 20 14:30:08 compute-1 sudo[239117]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 20 14:30:08 compute-1 sudo[239117]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:30:08 compute-1 nova_compute[225855]: 2026-01-20 14:30:08.985 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:30:09 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:30:09 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4260148581' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:30:09 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:30:09 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:30:09 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:30:09.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:30:09 compute-1 nova_compute[225855]: 2026-01-20 14:30:09.058 225859 DEBUG oslo_concurrency.processutils [None req-9274dcb1-dd36-4dea-af60-f65dbc3dda50 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.497s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:30:09 compute-1 nova_compute[225855]: 2026-01-20 14:30:09.066 225859 DEBUG nova.compute.provider_tree [None req-9274dcb1-dd36-4dea-af60-f65dbc3dda50 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:30:09 compute-1 sudo[239117]: pam_unix(sudo:session): session closed for user root
Jan 20 14:30:09 compute-1 nova_compute[225855]: 2026-01-20 14:30:09.167 225859 DEBUG nova.scheduler.client.report [None req-9274dcb1-dd36-4dea-af60-f65dbc3dda50 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:30:09 compute-1 nova_compute[225855]: 2026-01-20 14:30:09.289 225859 DEBUG oslo_concurrency.lockutils [None req-9274dcb1-dd36-4dea-af60-f65dbc3dda50 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.092s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:30:09 compute-1 nova_compute[225855]: 2026-01-20 14:30:09.290 225859 DEBUG nova.compute.manager [None req-9274dcb1-dd36-4dea-af60-f65dbc3dda50 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] [instance: d95ca690-20e1-4b0c-919b-d64c9af25eba] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 20 14:30:09 compute-1 nova_compute[225855]: 2026-01-20 14:30:09.455 225859 DEBUG nova.compute.manager [None req-9274dcb1-dd36-4dea-af60-f65dbc3dda50 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] [instance: d95ca690-20e1-4b0c-919b-d64c9af25eba] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 20 14:30:09 compute-1 nova_compute[225855]: 2026-01-20 14:30:09.456 225859 DEBUG nova.network.neutron [None req-9274dcb1-dd36-4dea-af60-f65dbc3dda50 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] [instance: d95ca690-20e1-4b0c-919b-d64c9af25eba] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 20 14:30:09 compute-1 nova_compute[225855]: 2026-01-20 14:30:09.572 225859 INFO nova.virt.libvirt.driver [None req-9274dcb1-dd36-4dea-af60-f65dbc3dda50 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] [instance: d95ca690-20e1-4b0c-919b-d64c9af25eba] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 20 14:30:09 compute-1 nova_compute[225855]: 2026-01-20 14:30:09.618 225859 DEBUG nova.compute.manager [None req-9274dcb1-dd36-4dea-af60-f65dbc3dda50 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] [instance: d95ca690-20e1-4b0c-919b-d64c9af25eba] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 20 14:30:09 compute-1 nova_compute[225855]: 2026-01-20 14:30:09.628 225859 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1768919394.6276348, 9f5c9253-e2bd-42d3-8253-fac568daeda7 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:30:09 compute-1 nova_compute[225855]: 2026-01-20 14:30:09.629 225859 INFO nova.compute.manager [-] [instance: 9f5c9253-e2bd-42d3-8253-fac568daeda7] VM Stopped (Lifecycle Event)
Jan 20 14:30:09 compute-1 nova_compute[225855]: 2026-01-20 14:30:09.683 225859 DEBUG nova.compute.manager [None req-940dff08-c42c-4fc8-a1c6-374c085d1c4c - - - - - -] [instance: 9f5c9253-e2bd-42d3-8253-fac568daeda7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:30:09 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:30:09 compute-1 nova_compute[225855]: 2026-01-20 14:30:09.849 225859 DEBUG nova.compute.manager [None req-9274dcb1-dd36-4dea-af60-f65dbc3dda50 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] [instance: d95ca690-20e1-4b0c-919b-d64c9af25eba] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 20 14:30:09 compute-1 nova_compute[225855]: 2026-01-20 14:30:09.854 225859 DEBUG nova.virt.libvirt.driver [None req-9274dcb1-dd36-4dea-af60-f65dbc3dda50 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] [instance: d95ca690-20e1-4b0c-919b-d64c9af25eba] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 20 14:30:09 compute-1 nova_compute[225855]: 2026-01-20 14:30:09.855 225859 INFO nova.virt.libvirt.driver [None req-9274dcb1-dd36-4dea-af60-f65dbc3dda50 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] [instance: d95ca690-20e1-4b0c-919b-d64c9af25eba] Creating image(s)
Jan 20 14:30:09 compute-1 nova_compute[225855]: 2026-01-20 14:30:09.898 225859 DEBUG nova.storage.rbd_utils [None req-9274dcb1-dd36-4dea-af60-f65dbc3dda50 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] rbd image d95ca690-20e1-4b0c-919b-d64c9af25eba_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:30:09 compute-1 nova_compute[225855]: 2026-01-20 14:30:09.943 225859 DEBUG nova.storage.rbd_utils [None req-9274dcb1-dd36-4dea-af60-f65dbc3dda50 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] rbd image d95ca690-20e1-4b0c-919b-d64c9af25eba_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:30:09 compute-1 nova_compute[225855]: 2026-01-20 14:30:09.970 225859 DEBUG nova.storage.rbd_utils [None req-9274dcb1-dd36-4dea-af60-f65dbc3dda50 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] rbd image d95ca690-20e1-4b0c-919b-d64c9af25eba_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:30:09 compute-1 nova_compute[225855]: 2026-01-20 14:30:09.974 225859 DEBUG oslo_concurrency.processutils [None req-9274dcb1-dd36-4dea-af60-f65dbc3dda50 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:30:10 compute-1 nova_compute[225855]: 2026-01-20 14:30:09.999 225859 DEBUG nova.network.neutron [None req-9274dcb1-dd36-4dea-af60-f65dbc3dda50 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] [instance: d95ca690-20e1-4b0c-919b-d64c9af25eba] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Jan 20 14:30:10 compute-1 nova_compute[225855]: 2026-01-20 14:30:10.000 225859 DEBUG nova.compute.manager [None req-9274dcb1-dd36-4dea-af60-f65dbc3dda50 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] [instance: d95ca690-20e1-4b0c-919b-d64c9af25eba] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 20 14:30:10 compute-1 nova_compute[225855]: 2026-01-20 14:30:10.054 225859 DEBUG oslo_concurrency.processutils [None req-9274dcb1-dd36-4dea-af60-f65dbc3dda50 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:30:10 compute-1 nova_compute[225855]: 2026-01-20 14:30:10.055 225859 DEBUG oslo_concurrency.lockutils [None req-9274dcb1-dd36-4dea-af60-f65dbc3dda50 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Acquiring lock "82d5c1918fd7c974214c7a48c1793a7a82560462" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:30:10 compute-1 nova_compute[225855]: 2026-01-20 14:30:10.056 225859 DEBUG oslo_concurrency.lockutils [None req-9274dcb1-dd36-4dea-af60-f65dbc3dda50 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:30:10 compute-1 nova_compute[225855]: 2026-01-20 14:30:10.056 225859 DEBUG oslo_concurrency.lockutils [None req-9274dcb1-dd36-4dea-af60-f65dbc3dda50 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:30:10 compute-1 nova_compute[225855]: 2026-01-20 14:30:10.083 225859 DEBUG nova.storage.rbd_utils [None req-9274dcb1-dd36-4dea-af60-f65dbc3dda50 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] rbd image d95ca690-20e1-4b0c-919b-d64c9af25eba_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:30:10 compute-1 nova_compute[225855]: 2026-01-20 14:30:10.087 225859 DEBUG oslo_concurrency.processutils [None req-9274dcb1-dd36-4dea-af60-f65dbc3dda50 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 d95ca690-20e1-4b0c-919b-d64c9af25eba_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:30:10 compute-1 nova_compute[225855]: 2026-01-20 14:30:10.160 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:30:10 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:30:10 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:30:10 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:30:10.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:30:10 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/4260148581' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:30:10 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 20 14:30:10 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:30:10 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 14:30:10 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:30:10 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 14:30:10 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 14:30:10 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:30:11 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:30:11 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:30:11 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:30:11.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:30:11 compute-1 ceph-mon[81775]: pgmap v1184: 321 pgs: 321 active+clean; 409 MiB data, 562 MiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 21 KiB/s wr, 189 op/s
Jan 20 14:30:11 compute-1 ceph-mon[81775]: pgmap v1185: 321 pgs: 321 active+clean; 409 MiB data, 562 MiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 1.6 KiB/s wr, 169 op/s
Jan 20 14:30:12 compute-1 nova_compute[225855]: 2026-01-20 14:30:12.340 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:30:12 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:30:12 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:30:12 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:30:12.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:30:12 compute-1 nova_compute[225855]: 2026-01-20 14:30:12.946 225859 DEBUG oslo_concurrency.processutils [None req-9274dcb1-dd36-4dea-af60-f65dbc3dda50 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 d95ca690-20e1-4b0c-919b-d64c9af25eba_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.859s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:30:13 compute-1 nova_compute[225855]: 2026-01-20 14:30:13.027 225859 DEBUG oslo_concurrency.lockutils [None req-ffb5ecbd-9a67-415f-a514-18a4e75e1236 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] Acquiring lock "f3faf996-e066-4b11-b7f3-30aeffff726e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:30:13 compute-1 nova_compute[225855]: 2026-01-20 14:30:13.028 225859 DEBUG oslo_concurrency.lockutils [None req-ffb5ecbd-9a67-415f-a514-18a4e75e1236 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] Lock "f3faf996-e066-4b11-b7f3-30aeffff726e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:30:13 compute-1 nova_compute[225855]: 2026-01-20 14:30:13.035 225859 DEBUG nova.storage.rbd_utils [None req-9274dcb1-dd36-4dea-af60-f65dbc3dda50 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] resizing rbd image d95ca690-20e1-4b0c-919b-d64c9af25eba_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 20 14:30:13 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:30:13 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:30:13 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:30:13.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:30:13 compute-1 nova_compute[225855]: 2026-01-20 14:30:13.089 225859 DEBUG oslo_concurrency.lockutils [None req-42bf5750-f887-41a7-8e0f-ec5c640634de 0cec872a00f742d78563d6d16fc545cb 78f151250c04467bb4f6a229dda16fc5 - - default default] Acquiring lock "d08682f8-72ef-462c-b4b7-044cf16fc193" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:30:13 compute-1 nova_compute[225855]: 2026-01-20 14:30:13.089 225859 DEBUG oslo_concurrency.lockutils [None req-42bf5750-f887-41a7-8e0f-ec5c640634de 0cec872a00f742d78563d6d16fc545cb 78f151250c04467bb4f6a229dda16fc5 - - default default] Lock "d08682f8-72ef-462c-b4b7-044cf16fc193" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:30:13 compute-1 podman[239291]: 2026-01-20 14:30:13.097372811 +0000 UTC m=+0.108980885 container health_status 533cd8e0c3b8b7f910f15cef49e7f879ab46c12ce646a35628c15f13725eb500 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Jan 20 14:30:13 compute-1 nova_compute[225855]: 2026-01-20 14:30:13.154 225859 DEBUG nova.compute.manager [None req-42bf5750-f887-41a7-8e0f-ec5c640634de 0cec872a00f742d78563d6d16fc545cb 78f151250c04467bb4f6a229dda16fc5 - - default default] [instance: d08682f8-72ef-462c-b4b7-044cf16fc193] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 20 14:30:13 compute-1 nova_compute[225855]: 2026-01-20 14:30:13.157 225859 DEBUG nova.compute.manager [None req-ffb5ecbd-9a67-415f-a514-18a4e75e1236 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] [instance: f3faf996-e066-4b11-b7f3-30aeffff726e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 20 14:30:13 compute-1 nova_compute[225855]: 2026-01-20 14:30:13.544 225859 DEBUG nova.objects.instance [None req-9274dcb1-dd36-4dea-af60-f65dbc3dda50 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Lazy-loading 'migration_context' on Instance uuid d95ca690-20e1-4b0c-919b-d64c9af25eba obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:30:13 compute-1 nova_compute[225855]: 2026-01-20 14:30:13.577 225859 DEBUG nova.virt.libvirt.driver [None req-9274dcb1-dd36-4dea-af60-f65dbc3dda50 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] [instance: d95ca690-20e1-4b0c-919b-d64c9af25eba] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 20 14:30:13 compute-1 nova_compute[225855]: 2026-01-20 14:30:13.577 225859 DEBUG nova.virt.libvirt.driver [None req-9274dcb1-dd36-4dea-af60-f65dbc3dda50 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] [instance: d95ca690-20e1-4b0c-919b-d64c9af25eba] Ensure instance console log exists: /var/lib/nova/instances/d95ca690-20e1-4b0c-919b-d64c9af25eba/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 20 14:30:13 compute-1 nova_compute[225855]: 2026-01-20 14:30:13.579 225859 DEBUG oslo_concurrency.lockutils [None req-9274dcb1-dd36-4dea-af60-f65dbc3dda50 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:30:13 compute-1 nova_compute[225855]: 2026-01-20 14:30:13.579 225859 DEBUG oslo_concurrency.lockutils [None req-9274dcb1-dd36-4dea-af60-f65dbc3dda50 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:30:13 compute-1 nova_compute[225855]: 2026-01-20 14:30:13.580 225859 DEBUG oslo_concurrency.lockutils [None req-9274dcb1-dd36-4dea-af60-f65dbc3dda50 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:30:13 compute-1 nova_compute[225855]: 2026-01-20 14:30:13.582 225859 DEBUG nova.virt.libvirt.driver [None req-9274dcb1-dd36-4dea-af60-f65dbc3dda50 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] [instance: d95ca690-20e1-4b0c-919b-d64c9af25eba] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'device_type': 'disk', 'encryption_options': None, 'size': 0, 'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'guest_format': None, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'image_id': 'a32b3e07-16d8-46fd-9a7b-c242c432fcf9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 20 14:30:13 compute-1 nova_compute[225855]: 2026-01-20 14:30:13.592 225859 WARNING nova.virt.libvirt.driver [None req-9274dcb1-dd36-4dea-af60-f65dbc3dda50 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 14:30:13 compute-1 nova_compute[225855]: 2026-01-20 14:30:13.600 225859 DEBUG nova.virt.libvirt.host [None req-9274dcb1-dd36-4dea-af60-f65dbc3dda50 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 20 14:30:13 compute-1 nova_compute[225855]: 2026-01-20 14:30:13.600 225859 DEBUG nova.virt.libvirt.host [None req-9274dcb1-dd36-4dea-af60-f65dbc3dda50 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 20 14:30:13 compute-1 nova_compute[225855]: 2026-01-20 14:30:13.604 225859 DEBUG nova.virt.libvirt.host [None req-9274dcb1-dd36-4dea-af60-f65dbc3dda50 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 20 14:30:13 compute-1 nova_compute[225855]: 2026-01-20 14:30:13.604 225859 DEBUG nova.virt.libvirt.host [None req-9274dcb1-dd36-4dea-af60-f65dbc3dda50 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 20 14:30:13 compute-1 nova_compute[225855]: 2026-01-20 14:30:13.606 225859 DEBUG nova.virt.libvirt.driver [None req-9274dcb1-dd36-4dea-af60-f65dbc3dda50 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 20 14:30:13 compute-1 nova_compute[225855]: 2026-01-20 14:30:13.606 225859 DEBUG nova.virt.hardware [None req-9274dcb1-dd36-4dea-af60-f65dbc3dda50 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-20T14:21:55Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='522deaab-a741-4dbb-932d-d8b13a211c33',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 20 14:30:13 compute-1 nova_compute[225855]: 2026-01-20 14:30:13.606 225859 DEBUG nova.virt.hardware [None req-9274dcb1-dd36-4dea-af60-f65dbc3dda50 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 20 14:30:13 compute-1 nova_compute[225855]: 2026-01-20 14:30:13.607 225859 DEBUG nova.virt.hardware [None req-9274dcb1-dd36-4dea-af60-f65dbc3dda50 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 20 14:30:13 compute-1 nova_compute[225855]: 2026-01-20 14:30:13.607 225859 DEBUG nova.virt.hardware [None req-9274dcb1-dd36-4dea-af60-f65dbc3dda50 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 20 14:30:13 compute-1 nova_compute[225855]: 2026-01-20 14:30:13.607 225859 DEBUG nova.virt.hardware [None req-9274dcb1-dd36-4dea-af60-f65dbc3dda50 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 20 14:30:13 compute-1 nova_compute[225855]: 2026-01-20 14:30:13.607 225859 DEBUG nova.virt.hardware [None req-9274dcb1-dd36-4dea-af60-f65dbc3dda50 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 20 14:30:13 compute-1 nova_compute[225855]: 2026-01-20 14:30:13.608 225859 DEBUG nova.virt.hardware [None req-9274dcb1-dd36-4dea-af60-f65dbc3dda50 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 20 14:30:13 compute-1 nova_compute[225855]: 2026-01-20 14:30:13.608 225859 DEBUG nova.virt.hardware [None req-9274dcb1-dd36-4dea-af60-f65dbc3dda50 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 20 14:30:13 compute-1 nova_compute[225855]: 2026-01-20 14:30:13.608 225859 DEBUG nova.virt.hardware [None req-9274dcb1-dd36-4dea-af60-f65dbc3dda50 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 20 14:30:13 compute-1 nova_compute[225855]: 2026-01-20 14:30:13.608 225859 DEBUG nova.virt.hardware [None req-9274dcb1-dd36-4dea-af60-f65dbc3dda50 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 20 14:30:13 compute-1 nova_compute[225855]: 2026-01-20 14:30:13.609 225859 DEBUG nova.virt.hardware [None req-9274dcb1-dd36-4dea-af60-f65dbc3dda50 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 20 14:30:13 compute-1 nova_compute[225855]: 2026-01-20 14:30:13.612 225859 DEBUG oslo_concurrency.processutils [None req-9274dcb1-dd36-4dea-af60-f65dbc3dda50 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:30:13 compute-1 nova_compute[225855]: 2026-01-20 14:30:13.638 225859 DEBUG oslo_concurrency.lockutils [None req-42bf5750-f887-41a7-8e0f-ec5c640634de 0cec872a00f742d78563d6d16fc545cb 78f151250c04467bb4f6a229dda16fc5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:30:13 compute-1 nova_compute[225855]: 2026-01-20 14:30:13.639 225859 DEBUG oslo_concurrency.lockutils [None req-42bf5750-f887-41a7-8e0f-ec5c640634de 0cec872a00f742d78563d6d16fc545cb 78f151250c04467bb4f6a229dda16fc5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:30:13 compute-1 nova_compute[225855]: 2026-01-20 14:30:13.641 225859 DEBUG oslo_concurrency.lockutils [None req-ffb5ecbd-9a67-415f-a514-18a4e75e1236 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:30:13 compute-1 nova_compute[225855]: 2026-01-20 14:30:13.649 225859 DEBUG nova.virt.hardware [None req-42bf5750-f887-41a7-8e0f-ec5c640634de 0cec872a00f742d78563d6d16fc545cb 78f151250c04467bb4f6a229dda16fc5 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 20 14:30:13 compute-1 nova_compute[225855]: 2026-01-20 14:30:13.650 225859 INFO nova.compute.claims [None req-42bf5750-f887-41a7-8e0f-ec5c640634de 0cec872a00f742d78563d6d16fc545cb 78f151250c04467bb4f6a229dda16fc5 - - default default] [instance: d08682f8-72ef-462c-b4b7-044cf16fc193] Claim successful on node compute-1.ctlplane.example.com
Jan 20 14:30:13 compute-1 nova_compute[225855]: 2026-01-20 14:30:13.823 225859 DEBUG oslo_concurrency.processutils [None req-42bf5750-f887-41a7-8e0f-ec5c640634de 0cec872a00f742d78563d6d16fc545cb 78f151250c04467bb4f6a229dda16fc5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:30:13 compute-1 nova_compute[225855]: 2026-01-20 14:30:13.987 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:30:14 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:30:14 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1567150312' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:30:14 compute-1 ceph-mon[81775]: pgmap v1186: 321 pgs: 321 active+clean; 409 MiB data, 562 MiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 1.6 KiB/s wr, 169 op/s
Jan 20 14:30:14 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/478407422' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 14:30:14 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/478407422' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 14:30:14 compute-1 nova_compute[225855]: 2026-01-20 14:30:14.111 225859 DEBUG oslo_concurrency.processutils [None req-9274dcb1-dd36-4dea-af60-f65dbc3dda50 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.499s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:30:14 compute-1 nova_compute[225855]: 2026-01-20 14:30:14.138 225859 DEBUG nova.storage.rbd_utils [None req-9274dcb1-dd36-4dea-af60-f65dbc3dda50 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] rbd image d95ca690-20e1-4b0c-919b-d64c9af25eba_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:30:14 compute-1 nova_compute[225855]: 2026-01-20 14:30:14.141 225859 DEBUG oslo_concurrency.processutils [None req-9274dcb1-dd36-4dea-af60-f65dbc3dda50 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:30:14 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:30:14 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1108007621' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:30:14 compute-1 nova_compute[225855]: 2026-01-20 14:30:14.297 225859 DEBUG oslo_concurrency.processutils [None req-42bf5750-f887-41a7-8e0f-ec5c640634de 0cec872a00f742d78563d6d16fc545cb 78f151250c04467bb4f6a229dda16fc5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:30:14 compute-1 nova_compute[225855]: 2026-01-20 14:30:14.304 225859 DEBUG nova.compute.provider_tree [None req-42bf5750-f887-41a7-8e0f-ec5c640634de 0cec872a00f742d78563d6d16fc545cb 78f151250c04467bb4f6a229dda16fc5 - - default default] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:30:14 compute-1 nova_compute[225855]: 2026-01-20 14:30:14.327 225859 DEBUG nova.scheduler.client.report [None req-42bf5750-f887-41a7-8e0f-ec5c640634de 0cec872a00f742d78563d6d16fc545cb 78f151250c04467bb4f6a229dda16fc5 - - default default] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:30:14 compute-1 nova_compute[225855]: 2026-01-20 14:30:14.352 225859 DEBUG oslo_concurrency.lockutils [None req-42bf5750-f887-41a7-8e0f-ec5c640634de 0cec872a00f742d78563d6d16fc545cb 78f151250c04467bb4f6a229dda16fc5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.713s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:30:14 compute-1 nova_compute[225855]: 2026-01-20 14:30:14.353 225859 DEBUG nova.compute.manager [None req-42bf5750-f887-41a7-8e0f-ec5c640634de 0cec872a00f742d78563d6d16fc545cb 78f151250c04467bb4f6a229dda16fc5 - - default default] [instance: d08682f8-72ef-462c-b4b7-044cf16fc193] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 20 14:30:14 compute-1 nova_compute[225855]: 2026-01-20 14:30:14.355 225859 DEBUG oslo_concurrency.lockutils [None req-ffb5ecbd-9a67-415f-a514-18a4e75e1236 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.714s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:30:14 compute-1 nova_compute[225855]: 2026-01-20 14:30:14.361 225859 DEBUG nova.virt.hardware [None req-ffb5ecbd-9a67-415f-a514-18a4e75e1236 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 20 14:30:14 compute-1 nova_compute[225855]: 2026-01-20 14:30:14.362 225859 INFO nova.compute.claims [None req-ffb5ecbd-9a67-415f-a514-18a4e75e1236 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] [instance: f3faf996-e066-4b11-b7f3-30aeffff726e] Claim successful on node compute-1.ctlplane.example.com
Jan 20 14:30:14 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:30:14 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:30:14 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:30:14.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:30:14 compute-1 nova_compute[225855]: 2026-01-20 14:30:14.595 225859 DEBUG nova.compute.manager [None req-42bf5750-f887-41a7-8e0f-ec5c640634de 0cec872a00f742d78563d6d16fc545cb 78f151250c04467bb4f6a229dda16fc5 - - default default] [instance: d08682f8-72ef-462c-b4b7-044cf16fc193] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 20 14:30:14 compute-1 nova_compute[225855]: 2026-01-20 14:30:14.596 225859 DEBUG nova.network.neutron [None req-42bf5750-f887-41a7-8e0f-ec5c640634de 0cec872a00f742d78563d6d16fc545cb 78f151250c04467bb4f6a229dda16fc5 - - default default] [instance: d08682f8-72ef-462c-b4b7-044cf16fc193] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 20 14:30:14 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:30:14 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3859938377' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:30:14 compute-1 nova_compute[225855]: 2026-01-20 14:30:14.637 225859 DEBUG oslo_concurrency.processutils [None req-9274dcb1-dd36-4dea-af60-f65dbc3dda50 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.495s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:30:14 compute-1 nova_compute[225855]: 2026-01-20 14:30:14.638 225859 DEBUG nova.objects.instance [None req-9274dcb1-dd36-4dea-af60-f65dbc3dda50 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Lazy-loading 'pci_devices' on Instance uuid d95ca690-20e1-4b0c-919b-d64c9af25eba obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:30:14 compute-1 nova_compute[225855]: 2026-01-20 14:30:14.687 225859 DEBUG nova.virt.libvirt.driver [None req-9274dcb1-dd36-4dea-af60-f65dbc3dda50 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] [instance: d95ca690-20e1-4b0c-919b-d64c9af25eba] End _get_guest_xml xml=<domain type="kvm">
Jan 20 14:30:14 compute-1 nova_compute[225855]:   <uuid>d95ca690-20e1-4b0c-919b-d64c9af25eba</uuid>
Jan 20 14:30:14 compute-1 nova_compute[225855]:   <name>instance-0000001d</name>
Jan 20 14:30:14 compute-1 nova_compute[225855]:   <memory>131072</memory>
Jan 20 14:30:14 compute-1 nova_compute[225855]:   <vcpu>1</vcpu>
Jan 20 14:30:14 compute-1 nova_compute[225855]:   <metadata>
Jan 20 14:30:14 compute-1 nova_compute[225855]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 14:30:14 compute-1 nova_compute[225855]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 14:30:14 compute-1 nova_compute[225855]:       <nova:name>tempest-MigrationsAdminTest-server-1542965426</nova:name>
Jan 20 14:30:14 compute-1 nova_compute[225855]:       <nova:creationTime>2026-01-20 14:30:13</nova:creationTime>
Jan 20 14:30:14 compute-1 nova_compute[225855]:       <nova:flavor name="m1.nano">
Jan 20 14:30:14 compute-1 nova_compute[225855]:         <nova:memory>128</nova:memory>
Jan 20 14:30:14 compute-1 nova_compute[225855]:         <nova:disk>1</nova:disk>
Jan 20 14:30:14 compute-1 nova_compute[225855]:         <nova:swap>0</nova:swap>
Jan 20 14:30:14 compute-1 nova_compute[225855]:         <nova:ephemeral>0</nova:ephemeral>
Jan 20 14:30:14 compute-1 nova_compute[225855]:         <nova:vcpus>1</nova:vcpus>
Jan 20 14:30:14 compute-1 nova_compute[225855]:       </nova:flavor>
Jan 20 14:30:14 compute-1 nova_compute[225855]:       <nova:owner>
Jan 20 14:30:14 compute-1 nova_compute[225855]:         <nova:user uuid="01a3d712f05049b19d4ecc7051720ad5">tempest-MigrationsAdminTest-1518611738-project-member</nova:user>
Jan 20 14:30:14 compute-1 nova_compute[225855]:         <nova:project uuid="f3c2e72a7148496394c8bcd618a19c80">tempest-MigrationsAdminTest-1518611738</nova:project>
Jan 20 14:30:14 compute-1 nova_compute[225855]:       </nova:owner>
Jan 20 14:30:14 compute-1 nova_compute[225855]:       <nova:root type="image" uuid="a32b3e07-16d8-46fd-9a7b-c242c432fcf9"/>
Jan 20 14:30:14 compute-1 nova_compute[225855]:       <nova:ports/>
Jan 20 14:30:14 compute-1 nova_compute[225855]:     </nova:instance>
Jan 20 14:30:14 compute-1 nova_compute[225855]:   </metadata>
Jan 20 14:30:14 compute-1 nova_compute[225855]:   <sysinfo type="smbios">
Jan 20 14:30:14 compute-1 nova_compute[225855]:     <system>
Jan 20 14:30:14 compute-1 nova_compute[225855]:       <entry name="manufacturer">RDO</entry>
Jan 20 14:30:14 compute-1 nova_compute[225855]:       <entry name="product">OpenStack Compute</entry>
Jan 20 14:30:14 compute-1 nova_compute[225855]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 14:30:14 compute-1 nova_compute[225855]:       <entry name="serial">d95ca690-20e1-4b0c-919b-d64c9af25eba</entry>
Jan 20 14:30:14 compute-1 nova_compute[225855]:       <entry name="uuid">d95ca690-20e1-4b0c-919b-d64c9af25eba</entry>
Jan 20 14:30:14 compute-1 nova_compute[225855]:       <entry name="family">Virtual Machine</entry>
Jan 20 14:30:14 compute-1 nova_compute[225855]:     </system>
Jan 20 14:30:14 compute-1 nova_compute[225855]:   </sysinfo>
Jan 20 14:30:14 compute-1 nova_compute[225855]:   <os>
Jan 20 14:30:14 compute-1 nova_compute[225855]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 20 14:30:14 compute-1 nova_compute[225855]:     <boot dev="hd"/>
Jan 20 14:30:14 compute-1 nova_compute[225855]:     <smbios mode="sysinfo"/>
Jan 20 14:30:14 compute-1 nova_compute[225855]:   </os>
Jan 20 14:30:14 compute-1 nova_compute[225855]:   <features>
Jan 20 14:30:14 compute-1 nova_compute[225855]:     <acpi/>
Jan 20 14:30:14 compute-1 nova_compute[225855]:     <apic/>
Jan 20 14:30:14 compute-1 nova_compute[225855]:     <vmcoreinfo/>
Jan 20 14:30:14 compute-1 nova_compute[225855]:   </features>
Jan 20 14:30:14 compute-1 nova_compute[225855]:   <clock offset="utc">
Jan 20 14:30:14 compute-1 nova_compute[225855]:     <timer name="pit" tickpolicy="delay"/>
Jan 20 14:30:14 compute-1 nova_compute[225855]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 20 14:30:14 compute-1 nova_compute[225855]:     <timer name="hpet" present="no"/>
Jan 20 14:30:14 compute-1 nova_compute[225855]:   </clock>
Jan 20 14:30:14 compute-1 nova_compute[225855]:   <cpu mode="custom" match="exact">
Jan 20 14:30:14 compute-1 nova_compute[225855]:     <model>Nehalem</model>
Jan 20 14:30:14 compute-1 nova_compute[225855]:     <topology sockets="1" cores="1" threads="1"/>
Jan 20 14:30:14 compute-1 nova_compute[225855]:   </cpu>
Jan 20 14:30:14 compute-1 nova_compute[225855]:   <devices>
Jan 20 14:30:14 compute-1 nova_compute[225855]:     <disk type="network" device="disk">
Jan 20 14:30:14 compute-1 nova_compute[225855]:       <driver type="raw" cache="none"/>
Jan 20 14:30:14 compute-1 nova_compute[225855]:       <source protocol="rbd" name="vms/d95ca690-20e1-4b0c-919b-d64c9af25eba_disk">
Jan 20 14:30:14 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:30:14 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:30:14 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:30:14 compute-1 nova_compute[225855]:       </source>
Jan 20 14:30:14 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 14:30:14 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:30:14 compute-1 nova_compute[225855]:       </auth>
Jan 20 14:30:14 compute-1 nova_compute[225855]:       <target dev="vda" bus="virtio"/>
Jan 20 14:30:14 compute-1 nova_compute[225855]:     </disk>
Jan 20 14:30:14 compute-1 nova_compute[225855]:     <disk type="network" device="cdrom">
Jan 20 14:30:14 compute-1 nova_compute[225855]:       <driver type="raw" cache="none"/>
Jan 20 14:30:14 compute-1 nova_compute[225855]:       <source protocol="rbd" name="vms/d95ca690-20e1-4b0c-919b-d64c9af25eba_disk.config">
Jan 20 14:30:14 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:30:14 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:30:14 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:30:14 compute-1 nova_compute[225855]:       </source>
Jan 20 14:30:14 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 14:30:14 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:30:14 compute-1 nova_compute[225855]:       </auth>
Jan 20 14:30:14 compute-1 nova_compute[225855]:       <target dev="sda" bus="sata"/>
Jan 20 14:30:14 compute-1 nova_compute[225855]:     </disk>
Jan 20 14:30:14 compute-1 nova_compute[225855]:     <serial type="pty">
Jan 20 14:30:14 compute-1 nova_compute[225855]:       <log file="/var/lib/nova/instances/d95ca690-20e1-4b0c-919b-d64c9af25eba/console.log" append="off"/>
Jan 20 14:30:14 compute-1 nova_compute[225855]:     </serial>
Jan 20 14:30:14 compute-1 nova_compute[225855]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 20 14:30:14 compute-1 nova_compute[225855]:     <video>
Jan 20 14:30:14 compute-1 nova_compute[225855]:       <model type="virtio"/>
Jan 20 14:30:14 compute-1 nova_compute[225855]:     </video>
Jan 20 14:30:14 compute-1 nova_compute[225855]:     <input type="tablet" bus="usb"/>
Jan 20 14:30:14 compute-1 nova_compute[225855]:     <rng model="virtio">
Jan 20 14:30:14 compute-1 nova_compute[225855]:       <backend model="random">/dev/urandom</backend>
Jan 20 14:30:14 compute-1 nova_compute[225855]:     </rng>
Jan 20 14:30:14 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root"/>
Jan 20 14:30:14 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:30:14 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:30:14 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:30:14 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:30:14 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:30:14 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:30:14 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:30:14 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:30:14 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:30:14 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:30:14 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:30:14 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:30:14 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:30:14 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:30:14 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:30:14 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:30:14 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:30:14 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:30:14 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:30:14 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:30:14 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:30:14 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:30:14 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:30:14 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:30:14 compute-1 nova_compute[225855]:     <controller type="usb" index="0"/>
Jan 20 14:30:14 compute-1 nova_compute[225855]:     <memballoon model="virtio">
Jan 20 14:30:14 compute-1 nova_compute[225855]:       <stats period="10"/>
Jan 20 14:30:14 compute-1 nova_compute[225855]:     </memballoon>
Jan 20 14:30:14 compute-1 nova_compute[225855]:   </devices>
Jan 20 14:30:14 compute-1 nova_compute[225855]: </domain>
Jan 20 14:30:14 compute-1 nova_compute[225855]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 20 14:30:14 compute-1 nova_compute[225855]: 2026-01-20 14:30:14.708 225859 INFO nova.virt.libvirt.driver [None req-42bf5750-f887-41a7-8e0f-ec5c640634de 0cec872a00f742d78563d6d16fc545cb 78f151250c04467bb4f6a229dda16fc5 - - default default] [instance: d08682f8-72ef-462c-b4b7-044cf16fc193] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 20 14:30:14 compute-1 nova_compute[225855]: 2026-01-20 14:30:14.746 225859 DEBUG nova.compute.manager [None req-42bf5750-f887-41a7-8e0f-ec5c640634de 0cec872a00f742d78563d6d16fc545cb 78f151250c04467bb4f6a229dda16fc5 - - default default] [instance: d08682f8-72ef-462c-b4b7-044cf16fc193] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 20 14:30:14 compute-1 nova_compute[225855]: 2026-01-20 14:30:14.753 225859 DEBUG nova.virt.libvirt.driver [None req-9274dcb1-dd36-4dea-af60-f65dbc3dda50 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:30:14 compute-1 nova_compute[225855]: 2026-01-20 14:30:14.753 225859 DEBUG nova.virt.libvirt.driver [None req-9274dcb1-dd36-4dea-af60-f65dbc3dda50 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:30:14 compute-1 nova_compute[225855]: 2026-01-20 14:30:14.753 225859 INFO nova.virt.libvirt.driver [None req-9274dcb1-dd36-4dea-af60-f65dbc3dda50 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] [instance: d95ca690-20e1-4b0c-919b-d64c9af25eba] Using config drive
Jan 20 14:30:14 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:30:14 compute-1 nova_compute[225855]: 2026-01-20 14:30:14.779 225859 DEBUG nova.storage.rbd_utils [None req-9274dcb1-dd36-4dea-af60-f65dbc3dda50 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] rbd image d95ca690-20e1-4b0c-919b-d64c9af25eba_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:30:14 compute-1 nova_compute[225855]: 2026-01-20 14:30:14.803 225859 DEBUG nova.policy [None req-42bf5750-f887-41a7-8e0f-ec5c640634de 0cec872a00f742d78563d6d16fc545cb 78f151250c04467bb4f6a229dda16fc5 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '0cec872a00f742d78563d6d16fc545cb', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '78f151250c04467bb4f6a229dda16fc5', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 20 14:30:14 compute-1 nova_compute[225855]: 2026-01-20 14:30:14.986 225859 DEBUG nova.compute.manager [None req-42bf5750-f887-41a7-8e0f-ec5c640634de 0cec872a00f742d78563d6d16fc545cb 78f151250c04467bb4f6a229dda16fc5 - - default default] [instance: d08682f8-72ef-462c-b4b7-044cf16fc193] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 20 14:30:14 compute-1 nova_compute[225855]: 2026-01-20 14:30:14.987 225859 DEBUG nova.virt.libvirt.driver [None req-42bf5750-f887-41a7-8e0f-ec5c640634de 0cec872a00f742d78563d6d16fc545cb 78f151250c04467bb4f6a229dda16fc5 - - default default] [instance: d08682f8-72ef-462c-b4b7-044cf16fc193] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 20 14:30:14 compute-1 nova_compute[225855]: 2026-01-20 14:30:14.988 225859 INFO nova.virt.libvirt.driver [None req-42bf5750-f887-41a7-8e0f-ec5c640634de 0cec872a00f742d78563d6d16fc545cb 78f151250c04467bb4f6a229dda16fc5 - - default default] [instance: d08682f8-72ef-462c-b4b7-044cf16fc193] Creating image(s)
Jan 20 14:30:15 compute-1 nova_compute[225855]: 2026-01-20 14:30:15.011 225859 DEBUG nova.storage.rbd_utils [None req-42bf5750-f887-41a7-8e0f-ec5c640634de 0cec872a00f742d78563d6d16fc545cb 78f151250c04467bb4f6a229dda16fc5 - - default default] rbd image d08682f8-72ef-462c-b4b7-044cf16fc193_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:30:15 compute-1 nova_compute[225855]: 2026-01-20 14:30:15.039 225859 DEBUG nova.storage.rbd_utils [None req-42bf5750-f887-41a7-8e0f-ec5c640634de 0cec872a00f742d78563d6d16fc545cb 78f151250c04467bb4f6a229dda16fc5 - - default default] rbd image d08682f8-72ef-462c-b4b7-044cf16fc193_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:30:15 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:30:15 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:30:15 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:30:15.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:30:15 compute-1 nova_compute[225855]: 2026-01-20 14:30:15.067 225859 DEBUG nova.storage.rbd_utils [None req-42bf5750-f887-41a7-8e0f-ec5c640634de 0cec872a00f742d78563d6d16fc545cb 78f151250c04467bb4f6a229dda16fc5 - - default default] rbd image d08682f8-72ef-462c-b4b7-044cf16fc193_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:30:15 compute-1 nova_compute[225855]: 2026-01-20 14:30:15.071 225859 DEBUG oslo_concurrency.processutils [None req-42bf5750-f887-41a7-8e0f-ec5c640634de 0cec872a00f742d78563d6d16fc545cb 78f151250c04467bb4f6a229dda16fc5 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:30:15 compute-1 nova_compute[225855]: 2026-01-20 14:30:15.099 225859 INFO nova.virt.libvirt.driver [None req-9274dcb1-dd36-4dea-af60-f65dbc3dda50 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] [instance: d95ca690-20e1-4b0c-919b-d64c9af25eba] Creating config drive at /var/lib/nova/instances/d95ca690-20e1-4b0c-919b-d64c9af25eba/disk.config
Jan 20 14:30:15 compute-1 nova_compute[225855]: 2026-01-20 14:30:15.107 225859 DEBUG oslo_concurrency.processutils [None req-9274dcb1-dd36-4dea-af60-f65dbc3dda50 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d95ca690-20e1-4b0c-919b-d64c9af25eba/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpb_hdohpe execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:30:15 compute-1 nova_compute[225855]: 2026-01-20 14:30:15.130 225859 DEBUG oslo_concurrency.processutils [None req-ffb5ecbd-9a67-415f-a514-18a4e75e1236 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:30:15 compute-1 nova_compute[225855]: 2026-01-20 14:30:15.151 225859 DEBUG oslo_concurrency.processutils [None req-42bf5750-f887-41a7-8e0f-ec5c640634de 0cec872a00f742d78563d6d16fc545cb 78f151250c04467bb4f6a229dda16fc5 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:30:15 compute-1 nova_compute[225855]: 2026-01-20 14:30:15.152 225859 DEBUG oslo_concurrency.lockutils [None req-42bf5750-f887-41a7-8e0f-ec5c640634de 0cec872a00f742d78563d6d16fc545cb 78f151250c04467bb4f6a229dda16fc5 - - default default] Acquiring lock "82d5c1918fd7c974214c7a48c1793a7a82560462" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:30:15 compute-1 nova_compute[225855]: 2026-01-20 14:30:15.153 225859 DEBUG oslo_concurrency.lockutils [None req-42bf5750-f887-41a7-8e0f-ec5c640634de 0cec872a00f742d78563d6d16fc545cb 78f151250c04467bb4f6a229dda16fc5 - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:30:15 compute-1 nova_compute[225855]: 2026-01-20 14:30:15.153 225859 DEBUG oslo_concurrency.lockutils [None req-42bf5750-f887-41a7-8e0f-ec5c640634de 0cec872a00f742d78563d6d16fc545cb 78f151250c04467bb4f6a229dda16fc5 - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:30:15 compute-1 nova_compute[225855]: 2026-01-20 14:30:15.173 225859 DEBUG nova.storage.rbd_utils [None req-42bf5750-f887-41a7-8e0f-ec5c640634de 0cec872a00f742d78563d6d16fc545cb 78f151250c04467bb4f6a229dda16fc5 - - default default] rbd image d08682f8-72ef-462c-b4b7-044cf16fc193_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:30:15 compute-1 nova_compute[225855]: 2026-01-20 14:30:15.176 225859 DEBUG oslo_concurrency.processutils [None req-42bf5750-f887-41a7-8e0f-ec5c640634de 0cec872a00f742d78563d6d16fc545cb 78f151250c04467bb4f6a229dda16fc5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 d08682f8-72ef-462c-b4b7-044cf16fc193_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:30:15 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/1567150312' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:30:15 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/1108007621' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:30:15 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/3859938377' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:30:15 compute-1 nova_compute[225855]: 2026-01-20 14:30:15.196 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:30:15 compute-1 nova_compute[225855]: 2026-01-20 14:30:15.236 225859 DEBUG oslo_concurrency.processutils [None req-9274dcb1-dd36-4dea-af60-f65dbc3dda50 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d95ca690-20e1-4b0c-919b-d64c9af25eba/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpb_hdohpe" returned: 0 in 0.129s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:30:15 compute-1 nova_compute[225855]: 2026-01-20 14:30:15.270 225859 DEBUG nova.storage.rbd_utils [None req-9274dcb1-dd36-4dea-af60-f65dbc3dda50 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] rbd image d95ca690-20e1-4b0c-919b-d64c9af25eba_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:30:15 compute-1 nova_compute[225855]: 2026-01-20 14:30:15.274 225859 DEBUG oslo_concurrency.processutils [None req-9274dcb1-dd36-4dea-af60-f65dbc3dda50 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d95ca690-20e1-4b0c-919b-d64c9af25eba/disk.config d95ca690-20e1-4b0c-919b-d64c9af25eba_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:30:15 compute-1 nova_compute[225855]: 2026-01-20 14:30:15.425 225859 DEBUG oslo_concurrency.processutils [None req-42bf5750-f887-41a7-8e0f-ec5c640634de 0cec872a00f742d78563d6d16fc545cb 78f151250c04467bb4f6a229dda16fc5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 d08682f8-72ef-462c-b4b7-044cf16fc193_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.248s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:30:15 compute-1 nova_compute[225855]: 2026-01-20 14:30:15.456 225859 DEBUG oslo_concurrency.processutils [None req-9274dcb1-dd36-4dea-af60-f65dbc3dda50 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d95ca690-20e1-4b0c-919b-d64c9af25eba/disk.config d95ca690-20e1-4b0c-919b-d64c9af25eba_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.182s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:30:15 compute-1 nova_compute[225855]: 2026-01-20 14:30:15.457 225859 INFO nova.virt.libvirt.driver [None req-9274dcb1-dd36-4dea-af60-f65dbc3dda50 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] [instance: d95ca690-20e1-4b0c-919b-d64c9af25eba] Deleting local config drive /var/lib/nova/instances/d95ca690-20e1-4b0c-919b-d64c9af25eba/disk.config because it was imported into RBD.
Jan 20 14:30:15 compute-1 nova_compute[225855]: 2026-01-20 14:30:15.499 225859 DEBUG nova.storage.rbd_utils [None req-42bf5750-f887-41a7-8e0f-ec5c640634de 0cec872a00f742d78563d6d16fc545cb 78f151250c04467bb4f6a229dda16fc5 - - default default] resizing rbd image d08682f8-72ef-462c-b4b7-044cf16fc193_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 20 14:30:15 compute-1 systemd-machined[194361]: New machine qemu-14-instance-0000001d.
Jan 20 14:30:15 compute-1 systemd[1]: Started Virtual Machine qemu-14-instance-0000001d.
Jan 20 14:30:15 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:30:15 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3056334551' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:30:15 compute-1 nova_compute[225855]: 2026-01-20 14:30:15.564 225859 DEBUG oslo_concurrency.processutils [None req-ffb5ecbd-9a67-415f-a514-18a4e75e1236 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:30:15 compute-1 nova_compute[225855]: 2026-01-20 14:30:15.573 225859 DEBUG nova.compute.provider_tree [None req-ffb5ecbd-9a67-415f-a514-18a4e75e1236 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:30:15 compute-1 nova_compute[225855]: 2026-01-20 14:30:15.602 225859 DEBUG nova.scheduler.client.report [None req-ffb5ecbd-9a67-415f-a514-18a4e75e1236 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:30:15 compute-1 nova_compute[225855]: 2026-01-20 14:30:15.636 225859 DEBUG oslo_concurrency.lockutils [None req-ffb5ecbd-9a67-415f-a514-18a4e75e1236 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.281s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:30:15 compute-1 nova_compute[225855]: 2026-01-20 14:30:15.637 225859 DEBUG nova.compute.manager [None req-ffb5ecbd-9a67-415f-a514-18a4e75e1236 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] [instance: f3faf996-e066-4b11-b7f3-30aeffff726e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 20 14:30:15 compute-1 nova_compute[225855]: 2026-01-20 14:30:15.644 225859 DEBUG nova.objects.instance [None req-42bf5750-f887-41a7-8e0f-ec5c640634de 0cec872a00f742d78563d6d16fc545cb 78f151250c04467bb4f6a229dda16fc5 - - default default] Lazy-loading 'migration_context' on Instance uuid d08682f8-72ef-462c-b4b7-044cf16fc193 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:30:15 compute-1 nova_compute[225855]: 2026-01-20 14:30:15.681 225859 DEBUG nova.virt.libvirt.driver [None req-42bf5750-f887-41a7-8e0f-ec5c640634de 0cec872a00f742d78563d6d16fc545cb 78f151250c04467bb4f6a229dda16fc5 - - default default] [instance: d08682f8-72ef-462c-b4b7-044cf16fc193] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 20 14:30:15 compute-1 nova_compute[225855]: 2026-01-20 14:30:15.681 225859 DEBUG nova.virt.libvirt.driver [None req-42bf5750-f887-41a7-8e0f-ec5c640634de 0cec872a00f742d78563d6d16fc545cb 78f151250c04467bb4f6a229dda16fc5 - - default default] [instance: d08682f8-72ef-462c-b4b7-044cf16fc193] Ensure instance console log exists: /var/lib/nova/instances/d08682f8-72ef-462c-b4b7-044cf16fc193/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 20 14:30:15 compute-1 nova_compute[225855]: 2026-01-20 14:30:15.682 225859 DEBUG oslo_concurrency.lockutils [None req-42bf5750-f887-41a7-8e0f-ec5c640634de 0cec872a00f742d78563d6d16fc545cb 78f151250c04467bb4f6a229dda16fc5 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:30:15 compute-1 nova_compute[225855]: 2026-01-20 14:30:15.682 225859 DEBUG oslo_concurrency.lockutils [None req-42bf5750-f887-41a7-8e0f-ec5c640634de 0cec872a00f742d78563d6d16fc545cb 78f151250c04467bb4f6a229dda16fc5 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:30:15 compute-1 nova_compute[225855]: 2026-01-20 14:30:15.683 225859 DEBUG oslo_concurrency.lockutils [None req-42bf5750-f887-41a7-8e0f-ec5c640634de 0cec872a00f742d78563d6d16fc545cb 78f151250c04467bb4f6a229dda16fc5 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:30:15 compute-1 nova_compute[225855]: 2026-01-20 14:30:15.744 225859 DEBUG nova.compute.manager [None req-ffb5ecbd-9a67-415f-a514-18a4e75e1236 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] [instance: f3faf996-e066-4b11-b7f3-30aeffff726e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 20 14:30:15 compute-1 nova_compute[225855]: 2026-01-20 14:30:15.745 225859 DEBUG nova.network.neutron [None req-ffb5ecbd-9a67-415f-a514-18a4e75e1236 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] [instance: f3faf996-e066-4b11-b7f3-30aeffff726e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 20 14:30:15 compute-1 nova_compute[225855]: 2026-01-20 14:30:15.811 225859 INFO nova.virt.libvirt.driver [None req-ffb5ecbd-9a67-415f-a514-18a4e75e1236 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] [instance: f3faf996-e066-4b11-b7f3-30aeffff726e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 20 14:30:15 compute-1 nova_compute[225855]: 2026-01-20 14:30:15.913 225859 DEBUG nova.compute.manager [None req-ffb5ecbd-9a67-415f-a514-18a4e75e1236 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] [instance: f3faf996-e066-4b11-b7f3-30aeffff726e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 20 14:30:15 compute-1 nova_compute[225855]: 2026-01-20 14:30:15.917 225859 DEBUG nova.network.neutron [None req-42bf5750-f887-41a7-8e0f-ec5c640634de 0cec872a00f742d78563d6d16fc545cb 78f151250c04467bb4f6a229dda16fc5 - - default default] [instance: d08682f8-72ef-462c-b4b7-044cf16fc193] Successfully created port: e848e00f-d594-4d70-9026-cd650417bf47 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 20 14:30:16 compute-1 nova_compute[225855]: 2026-01-20 14:30:16.051 225859 DEBUG nova.policy [None req-ffb5ecbd-9a67-415f-a514-18a4e75e1236 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '6a3fbc3f92a849e88cbf34d28ca17e43', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '0cee74dd60da4a839bb5eb0ba3137edf', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 20 14:30:16 compute-1 nova_compute[225855]: 2026-01-20 14:30:16.273 225859 DEBUG nova.compute.manager [None req-ffb5ecbd-9a67-415f-a514-18a4e75e1236 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] [instance: f3faf996-e066-4b11-b7f3-30aeffff726e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 20 14:30:16 compute-1 nova_compute[225855]: 2026-01-20 14:30:16.275 225859 DEBUG nova.virt.libvirt.driver [None req-ffb5ecbd-9a67-415f-a514-18a4e75e1236 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] [instance: f3faf996-e066-4b11-b7f3-30aeffff726e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 20 14:30:16 compute-1 nova_compute[225855]: 2026-01-20 14:30:16.275 225859 INFO nova.virt.libvirt.driver [None req-ffb5ecbd-9a67-415f-a514-18a4e75e1236 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] [instance: f3faf996-e066-4b11-b7f3-30aeffff726e] Creating image(s)
Jan 20 14:30:16 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:30:16 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:30:16 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:30:16.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:30:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:30:16.389 140354 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:30:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:30:16.390 140354 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:30:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:30:16.390 140354 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:30:16 compute-1 nova_compute[225855]: 2026-01-20 14:30:16.477 225859 DEBUG nova.storage.rbd_utils [None req-ffb5ecbd-9a67-415f-a514-18a4e75e1236 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] rbd image f3faf996-e066-4b11-b7f3-30aeffff726e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:30:16 compute-1 nova_compute[225855]: 2026-01-20 14:30:16.505 225859 DEBUG nova.storage.rbd_utils [None req-ffb5ecbd-9a67-415f-a514-18a4e75e1236 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] rbd image f3faf996-e066-4b11-b7f3-30aeffff726e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:30:16 compute-1 nova_compute[225855]: 2026-01-20 14:30:16.530 225859 DEBUG nova.storage.rbd_utils [None req-ffb5ecbd-9a67-415f-a514-18a4e75e1236 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] rbd image f3faf996-e066-4b11-b7f3-30aeffff726e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:30:16 compute-1 nova_compute[225855]: 2026-01-20 14:30:16.533 225859 DEBUG oslo_concurrency.processutils [None req-ffb5ecbd-9a67-415f-a514-18a4e75e1236 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:30:16 compute-1 nova_compute[225855]: 2026-01-20 14:30:16.559 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:30:16 compute-1 nova_compute[225855]: 2026-01-20 14:30:16.559 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 14:30:16 compute-1 nova_compute[225855]: 2026-01-20 14:30:16.559 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 20 14:30:16 compute-1 nova_compute[225855]: 2026-01-20 14:30:16.578 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] [instance: d95ca690-20e1-4b0c-919b-d64c9af25eba] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Jan 20 14:30:16 compute-1 nova_compute[225855]: 2026-01-20 14:30:16.578 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] [instance: f3faf996-e066-4b11-b7f3-30aeffff726e] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Jan 20 14:30:16 compute-1 nova_compute[225855]: 2026-01-20 14:30:16.578 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] [instance: d08682f8-72ef-462c-b4b7-044cf16fc193] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Jan 20 14:30:16 compute-1 nova_compute[225855]: 2026-01-20 14:30:16.608 225859 DEBUG oslo_concurrency.processutils [None req-ffb5ecbd-9a67-415f-a514-18a4e75e1236 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:30:16 compute-1 nova_compute[225855]: 2026-01-20 14:30:16.608 225859 DEBUG oslo_concurrency.lockutils [None req-ffb5ecbd-9a67-415f-a514-18a4e75e1236 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] Acquiring lock "82d5c1918fd7c974214c7a48c1793a7a82560462" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:30:16 compute-1 nova_compute[225855]: 2026-01-20 14:30:16.609 225859 DEBUG oslo_concurrency.lockutils [None req-ffb5ecbd-9a67-415f-a514-18a4e75e1236 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:30:16 compute-1 nova_compute[225855]: 2026-01-20 14:30:16.609 225859 DEBUG oslo_concurrency.lockutils [None req-ffb5ecbd-9a67-415f-a514-18a4e75e1236 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:30:16 compute-1 nova_compute[225855]: 2026-01-20 14:30:16.635 225859 DEBUG nova.storage.rbd_utils [None req-ffb5ecbd-9a67-415f-a514-18a4e75e1236 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] rbd image f3faf996-e066-4b11-b7f3-30aeffff726e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:30:16 compute-1 nova_compute[225855]: 2026-01-20 14:30:16.638 225859 DEBUG oslo_concurrency.processutils [None req-ffb5ecbd-9a67-415f-a514-18a4e75e1236 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 f3faf996-e066-4b11-b7f3-30aeffff726e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:30:16 compute-1 ceph-mon[81775]: pgmap v1187: 321 pgs: 321 active+clean; 463 MiB data, 606 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 2.9 MiB/s wr, 169 op/s
Jan 20 14:30:16 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/3056334551' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:30:16 compute-1 nova_compute[225855]: 2026-01-20 14:30:16.770 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "refresh_cache-29f0b4d4-abf0-46e7-bf67-38e71eb42e28" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:30:16 compute-1 nova_compute[225855]: 2026-01-20 14:30:16.770 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquired lock "refresh_cache-29f0b4d4-abf0-46e7-bf67-38e71eb42e28" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:30:16 compute-1 nova_compute[225855]: 2026-01-20 14:30:16.770 225859 DEBUG nova.network.neutron [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] [instance: 29f0b4d4-abf0-46e7-bf67-38e71eb42e28] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 20 14:30:16 compute-1 nova_compute[225855]: 2026-01-20 14:30:16.770 225859 DEBUG nova.objects.instance [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 29f0b4d4-abf0-46e7-bf67-38e71eb42e28 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:30:16 compute-1 nova_compute[225855]: 2026-01-20 14:30:16.841 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768919416.8409173, d95ca690-20e1-4b0c-919b-d64c9af25eba => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:30:16 compute-1 nova_compute[225855]: 2026-01-20 14:30:16.841 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: d95ca690-20e1-4b0c-919b-d64c9af25eba] VM Resumed (Lifecycle Event)
Jan 20 14:30:16 compute-1 nova_compute[225855]: 2026-01-20 14:30:16.843 225859 DEBUG nova.compute.manager [None req-9274dcb1-dd36-4dea-af60-f65dbc3dda50 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] [instance: d95ca690-20e1-4b0c-919b-d64c9af25eba] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 20 14:30:16 compute-1 nova_compute[225855]: 2026-01-20 14:30:16.843 225859 DEBUG nova.virt.libvirt.driver [None req-9274dcb1-dd36-4dea-af60-f65dbc3dda50 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] [instance: d95ca690-20e1-4b0c-919b-d64c9af25eba] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 20 14:30:16 compute-1 nova_compute[225855]: 2026-01-20 14:30:16.850 225859 INFO nova.virt.libvirt.driver [-] [instance: d95ca690-20e1-4b0c-919b-d64c9af25eba] Instance spawned successfully.
Jan 20 14:30:16 compute-1 nova_compute[225855]: 2026-01-20 14:30:16.851 225859 DEBUG nova.virt.libvirt.driver [None req-9274dcb1-dd36-4dea-af60-f65dbc3dda50 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] [instance: d95ca690-20e1-4b0c-919b-d64c9af25eba] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 20 14:30:16 compute-1 nova_compute[225855]: 2026-01-20 14:30:16.873 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: d95ca690-20e1-4b0c-919b-d64c9af25eba] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:30:16 compute-1 nova_compute[225855]: 2026-01-20 14:30:16.878 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: d95ca690-20e1-4b0c-919b-d64c9af25eba] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:30:16 compute-1 nova_compute[225855]: 2026-01-20 14:30:16.883 225859 DEBUG nova.virt.libvirt.driver [None req-9274dcb1-dd36-4dea-af60-f65dbc3dda50 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] [instance: d95ca690-20e1-4b0c-919b-d64c9af25eba] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:30:16 compute-1 nova_compute[225855]: 2026-01-20 14:30:16.883 225859 DEBUG nova.virt.libvirt.driver [None req-9274dcb1-dd36-4dea-af60-f65dbc3dda50 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] [instance: d95ca690-20e1-4b0c-919b-d64c9af25eba] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:30:16 compute-1 nova_compute[225855]: 2026-01-20 14:30:16.884 225859 DEBUG nova.virt.libvirt.driver [None req-9274dcb1-dd36-4dea-af60-f65dbc3dda50 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] [instance: d95ca690-20e1-4b0c-919b-d64c9af25eba] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:30:16 compute-1 nova_compute[225855]: 2026-01-20 14:30:16.884 225859 DEBUG nova.virt.libvirt.driver [None req-9274dcb1-dd36-4dea-af60-f65dbc3dda50 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] [instance: d95ca690-20e1-4b0c-919b-d64c9af25eba] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:30:16 compute-1 nova_compute[225855]: 2026-01-20 14:30:16.885 225859 DEBUG nova.virt.libvirt.driver [None req-9274dcb1-dd36-4dea-af60-f65dbc3dda50 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] [instance: d95ca690-20e1-4b0c-919b-d64c9af25eba] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:30:16 compute-1 nova_compute[225855]: 2026-01-20 14:30:16.885 225859 DEBUG nova.virt.libvirt.driver [None req-9274dcb1-dd36-4dea-af60-f65dbc3dda50 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] [instance: d95ca690-20e1-4b0c-919b-d64c9af25eba] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:30:16 compute-1 nova_compute[225855]: 2026-01-20 14:30:16.903 225859 DEBUG nova.network.neutron [None req-42bf5750-f887-41a7-8e0f-ec5c640634de 0cec872a00f742d78563d6d16fc545cb 78f151250c04467bb4f6a229dda16fc5 - - default default] [instance: d08682f8-72ef-462c-b4b7-044cf16fc193] Successfully updated port: e848e00f-d594-4d70-9026-cd650417bf47 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 20 14:30:16 compute-1 nova_compute[225855]: 2026-01-20 14:30:16.926 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: d95ca690-20e1-4b0c-919b-d64c9af25eba] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 14:30:16 compute-1 nova_compute[225855]: 2026-01-20 14:30:16.926 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768919416.841072, d95ca690-20e1-4b0c-919b-d64c9af25eba => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:30:16 compute-1 nova_compute[225855]: 2026-01-20 14:30:16.927 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: d95ca690-20e1-4b0c-919b-d64c9af25eba] VM Started (Lifecycle Event)
Jan 20 14:30:16 compute-1 nova_compute[225855]: 2026-01-20 14:30:16.940 225859 DEBUG oslo_concurrency.lockutils [None req-42bf5750-f887-41a7-8e0f-ec5c640634de 0cec872a00f742d78563d6d16fc545cb 78f151250c04467bb4f6a229dda16fc5 - - default default] Acquiring lock "refresh_cache-d08682f8-72ef-462c-b4b7-044cf16fc193" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:30:16 compute-1 nova_compute[225855]: 2026-01-20 14:30:16.940 225859 DEBUG oslo_concurrency.lockutils [None req-42bf5750-f887-41a7-8e0f-ec5c640634de 0cec872a00f742d78563d6d16fc545cb 78f151250c04467bb4f6a229dda16fc5 - - default default] Acquired lock "refresh_cache-d08682f8-72ef-462c-b4b7-044cf16fc193" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:30:16 compute-1 nova_compute[225855]: 2026-01-20 14:30:16.940 225859 DEBUG nova.network.neutron [None req-42bf5750-f887-41a7-8e0f-ec5c640634de 0cec872a00f742d78563d6d16fc545cb 78f151250c04467bb4f6a229dda16fc5 - - default default] [instance: d08682f8-72ef-462c-b4b7-044cf16fc193] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 14:30:16 compute-1 nova_compute[225855]: 2026-01-20 14:30:16.972 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: d95ca690-20e1-4b0c-919b-d64c9af25eba] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:30:16 compute-1 nova_compute[225855]: 2026-01-20 14:30:16.974 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: d95ca690-20e1-4b0c-919b-d64c9af25eba] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:30:16 compute-1 nova_compute[225855]: 2026-01-20 14:30:16.982 225859 INFO nova.compute.manager [None req-9274dcb1-dd36-4dea-af60-f65dbc3dda50 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] [instance: d95ca690-20e1-4b0c-919b-d64c9af25eba] Took 7.13 seconds to spawn the instance on the hypervisor.
Jan 20 14:30:16 compute-1 nova_compute[225855]: 2026-01-20 14:30:16.982 225859 DEBUG nova.compute.manager [None req-9274dcb1-dd36-4dea-af60-f65dbc3dda50 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] [instance: d95ca690-20e1-4b0c-919b-d64c9af25eba] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:30:16 compute-1 nova_compute[225855]: 2026-01-20 14:30:16.984 225859 DEBUG oslo_concurrency.processutils [None req-ffb5ecbd-9a67-415f-a514-18a4e75e1236 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 f3faf996-e066-4b11-b7f3-30aeffff726e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.346s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:30:17 compute-1 nova_compute[225855]: 2026-01-20 14:30:17.012 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: d95ca690-20e1-4b0c-919b-d64c9af25eba] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 14:30:17 compute-1 nova_compute[225855]: 2026-01-20 14:30:17.048 225859 DEBUG nova.storage.rbd_utils [None req-ffb5ecbd-9a67-415f-a514-18a4e75e1236 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] resizing rbd image f3faf996-e066-4b11-b7f3-30aeffff726e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 20 14:30:17 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:30:17 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:30:17 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:30:17.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:30:17 compute-1 nova_compute[225855]: 2026-01-20 14:30:17.140 225859 DEBUG nova.compute.manager [req-a9aeec7f-be7c-489d-bb89-e11c2eb98e93 req-20df941b-e8d7-4122-a27c-481b964e4bca 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d08682f8-72ef-462c-b4b7-044cf16fc193] Received event network-changed-e848e00f-d594-4d70-9026-cd650417bf47 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:30:17 compute-1 nova_compute[225855]: 2026-01-20 14:30:17.141 225859 DEBUG nova.compute.manager [req-a9aeec7f-be7c-489d-bb89-e11c2eb98e93 req-20df941b-e8d7-4122-a27c-481b964e4bca 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d08682f8-72ef-462c-b4b7-044cf16fc193] Refreshing instance network info cache due to event network-changed-e848e00f-d594-4d70-9026-cd650417bf47. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 14:30:17 compute-1 nova_compute[225855]: 2026-01-20 14:30:17.141 225859 DEBUG oslo_concurrency.lockutils [req-a9aeec7f-be7c-489d-bb89-e11c2eb98e93 req-20df941b-e8d7-4122-a27c-481b964e4bca 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-d08682f8-72ef-462c-b4b7-044cf16fc193" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:30:17 compute-1 nova_compute[225855]: 2026-01-20 14:30:17.147 225859 DEBUG nova.objects.instance [None req-ffb5ecbd-9a67-415f-a514-18a4e75e1236 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] Lazy-loading 'migration_context' on Instance uuid f3faf996-e066-4b11-b7f3-30aeffff726e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:30:17 compute-1 nova_compute[225855]: 2026-01-20 14:30:17.164 225859 DEBUG nova.network.neutron [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] [instance: 29f0b4d4-abf0-46e7-bf67-38e71eb42e28] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 20 14:30:17 compute-1 nova_compute[225855]: 2026-01-20 14:30:17.170 225859 DEBUG nova.virt.libvirt.driver [None req-ffb5ecbd-9a67-415f-a514-18a4e75e1236 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] [instance: f3faf996-e066-4b11-b7f3-30aeffff726e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 20 14:30:17 compute-1 nova_compute[225855]: 2026-01-20 14:30:17.170 225859 DEBUG nova.virt.libvirt.driver [None req-ffb5ecbd-9a67-415f-a514-18a4e75e1236 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] [instance: f3faf996-e066-4b11-b7f3-30aeffff726e] Ensure instance console log exists: /var/lib/nova/instances/f3faf996-e066-4b11-b7f3-30aeffff726e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 20 14:30:17 compute-1 nova_compute[225855]: 2026-01-20 14:30:17.170 225859 DEBUG oslo_concurrency.lockutils [None req-ffb5ecbd-9a67-415f-a514-18a4e75e1236 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:30:17 compute-1 nova_compute[225855]: 2026-01-20 14:30:17.170 225859 DEBUG oslo_concurrency.lockutils [None req-ffb5ecbd-9a67-415f-a514-18a4e75e1236 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:30:17 compute-1 nova_compute[225855]: 2026-01-20 14:30:17.171 225859 DEBUG oslo_concurrency.lockutils [None req-ffb5ecbd-9a67-415f-a514-18a4e75e1236 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:30:17 compute-1 nova_compute[225855]: 2026-01-20 14:30:17.247 225859 INFO nova.compute.manager [None req-9274dcb1-dd36-4dea-af60-f65dbc3dda50 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] [instance: d95ca690-20e1-4b0c-919b-d64c9af25eba] Took 9.10 seconds to build instance.
Jan 20 14:30:17 compute-1 nova_compute[225855]: 2026-01-20 14:30:17.313 225859 DEBUG oslo_concurrency.lockutils [None req-9274dcb1-dd36-4dea-af60-f65dbc3dda50 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Lock "d95ca690-20e1-4b0c-919b-d64c9af25eba" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.638s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:30:17 compute-1 nova_compute[225855]: 2026-01-20 14:30:17.520 225859 DEBUG nova.network.neutron [None req-42bf5750-f887-41a7-8e0f-ec5c640634de 0cec872a00f742d78563d6d16fc545cb 78f151250c04467bb4f6a229dda16fc5 - - default default] [instance: d08682f8-72ef-462c-b4b7-044cf16fc193] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 20 14:30:17 compute-1 nova_compute[225855]: 2026-01-20 14:30:17.765 225859 DEBUG nova.network.neutron [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] [instance: 29f0b4d4-abf0-46e7-bf67-38e71eb42e28] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:30:17 compute-1 nova_compute[225855]: 2026-01-20 14:30:17.772 225859 DEBUG nova.network.neutron [None req-ffb5ecbd-9a67-415f-a514-18a4e75e1236 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] [instance: f3faf996-e066-4b11-b7f3-30aeffff726e] Successfully created port: f65050ac-6a44-490a-b4b9-8c82c1f61630 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 20 14:30:17 compute-1 nova_compute[225855]: 2026-01-20 14:30:17.779 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Releasing lock "refresh_cache-29f0b4d4-abf0-46e7-bf67-38e71eb42e28" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:30:17 compute-1 nova_compute[225855]: 2026-01-20 14:30:17.779 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] [instance: 29f0b4d4-abf0-46e7-bf67-38e71eb42e28] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 20 14:30:17 compute-1 nova_compute[225855]: 2026-01-20 14:30:17.779 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:30:17 compute-1 nova_compute[225855]: 2026-01-20 14:30:17.780 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:30:17 compute-1 nova_compute[225855]: 2026-01-20 14:30:17.780 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:30:17 compute-1 nova_compute[225855]: 2026-01-20 14:30:17.799 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:30:17 compute-1 nova_compute[225855]: 2026-01-20 14:30:17.800 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:30:17 compute-1 nova_compute[225855]: 2026-01-20 14:30:17.800 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:30:17 compute-1 nova_compute[225855]: 2026-01-20 14:30:17.800 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 14:30:17 compute-1 nova_compute[225855]: 2026-01-20 14:30:17.800 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:30:17 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/2106470088' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:30:18 compute-1 nova_compute[225855]: 2026-01-20 14:30:18.256 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:30:18 compute-1 nova_compute[225855]: 2026-01-20 14:30:18.331 225859 DEBUG nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] skipping disk for instance-00000016 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 14:30:18 compute-1 nova_compute[225855]: 2026-01-20 14:30:18.332 225859 DEBUG nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] skipping disk for instance-00000016 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 14:30:18 compute-1 nova_compute[225855]: 2026-01-20 14:30:18.336 225859 DEBUG nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] skipping disk for instance-0000001d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 14:30:18 compute-1 nova_compute[225855]: 2026-01-20 14:30:18.336 225859 DEBUG nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] skipping disk for instance-0000001d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 14:30:18 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:30:18 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:30:18 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:30:18.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:30:18 compute-1 nova_compute[225855]: 2026-01-20 14:30:18.515 225859 WARNING nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 14:30:18 compute-1 nova_compute[225855]: 2026-01-20 14:30:18.516 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4401MB free_disk=20.73217010498047GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 14:30:18 compute-1 nova_compute[225855]: 2026-01-20 14:30:18.517 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:30:18 compute-1 nova_compute[225855]: 2026-01-20 14:30:18.517 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:30:18 compute-1 nova_compute[225855]: 2026-01-20 14:30:18.592 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Instance 29f0b4d4-abf0-46e7-bf67-38e71eb42e28 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 20 14:30:18 compute-1 nova_compute[225855]: 2026-01-20 14:30:18.593 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Instance d95ca690-20e1-4b0c-919b-d64c9af25eba actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 20 14:30:18 compute-1 nova_compute[225855]: 2026-01-20 14:30:18.593 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Instance f3faf996-e066-4b11-b7f3-30aeffff726e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 20 14:30:18 compute-1 nova_compute[225855]: 2026-01-20 14:30:18.593 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Instance d08682f8-72ef-462c-b4b7-044cf16fc193 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 20 14:30:18 compute-1 nova_compute[225855]: 2026-01-20 14:30:18.594 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 14:30:18 compute-1 nova_compute[225855]: 2026-01-20 14:30:18.594 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=20GB used_disk=4GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 14:30:18 compute-1 nova_compute[225855]: 2026-01-20 14:30:18.644 225859 DEBUG nova.network.neutron [None req-42bf5750-f887-41a7-8e0f-ec5c640634de 0cec872a00f742d78563d6d16fc545cb 78f151250c04467bb4f6a229dda16fc5 - - default default] [instance: d08682f8-72ef-462c-b4b7-044cf16fc193] Updating instance_info_cache with network_info: [{"id": "e848e00f-d594-4d70-9026-cd650417bf47", "address": "fa:16:3e:89:9f:c4", "network": {"id": "01e6deef-9aca-4d36-8215-4517982a86a3", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-70171378-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "78f151250c04467bb4f6a229dda16fc5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape848e00f-d5", "ovs_interfaceid": "e848e00f-d594-4d70-9026-cd650417bf47", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:30:18 compute-1 nova_compute[225855]: 2026-01-20 14:30:18.664 225859 DEBUG oslo_concurrency.lockutils [None req-42bf5750-f887-41a7-8e0f-ec5c640634de 0cec872a00f742d78563d6d16fc545cb 78f151250c04467bb4f6a229dda16fc5 - - default default] Releasing lock "refresh_cache-d08682f8-72ef-462c-b4b7-044cf16fc193" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:30:18 compute-1 nova_compute[225855]: 2026-01-20 14:30:18.665 225859 DEBUG nova.compute.manager [None req-42bf5750-f887-41a7-8e0f-ec5c640634de 0cec872a00f742d78563d6d16fc545cb 78f151250c04467bb4f6a229dda16fc5 - - default default] [instance: d08682f8-72ef-462c-b4b7-044cf16fc193] Instance network_info: |[{"id": "e848e00f-d594-4d70-9026-cd650417bf47", "address": "fa:16:3e:89:9f:c4", "network": {"id": "01e6deef-9aca-4d36-8215-4517982a86a3", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-70171378-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "78f151250c04467bb4f6a229dda16fc5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape848e00f-d5", "ovs_interfaceid": "e848e00f-d594-4d70-9026-cd650417bf47", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 20 14:30:18 compute-1 nova_compute[225855]: 2026-01-20 14:30:18.666 225859 DEBUG oslo_concurrency.lockutils [req-a9aeec7f-be7c-489d-bb89-e11c2eb98e93 req-20df941b-e8d7-4122-a27c-481b964e4bca 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-d08682f8-72ef-462c-b4b7-044cf16fc193" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:30:18 compute-1 nova_compute[225855]: 2026-01-20 14:30:18.667 225859 DEBUG nova.network.neutron [req-a9aeec7f-be7c-489d-bb89-e11c2eb98e93 req-20df941b-e8d7-4122-a27c-481b964e4bca 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d08682f8-72ef-462c-b4b7-044cf16fc193] Refreshing network info cache for port e848e00f-d594-4d70-9026-cd650417bf47 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 14:30:18 compute-1 nova_compute[225855]: 2026-01-20 14:30:18.672 225859 DEBUG nova.virt.libvirt.driver [None req-42bf5750-f887-41a7-8e0f-ec5c640634de 0cec872a00f742d78563d6d16fc545cb 78f151250c04467bb4f6a229dda16fc5 - - default default] [instance: d08682f8-72ef-462c-b4b7-044cf16fc193] Start _get_guest_xml network_info=[{"id": "e848e00f-d594-4d70-9026-cd650417bf47", "address": "fa:16:3e:89:9f:c4", "network": {"id": "01e6deef-9aca-4d36-8215-4517982a86a3", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-70171378-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "78f151250c04467bb4f6a229dda16fc5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape848e00f-d5", "ovs_interfaceid": "e848e00f-d594-4d70-9026-cd650417bf47", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'device_type': 'disk', 'encryption_options': None, 'size': 0, 'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'guest_format': None, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'image_id': 'a32b3e07-16d8-46fd-9a7b-c242c432fcf9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 20 14:30:18 compute-1 nova_compute[225855]: 2026-01-20 14:30:18.682 225859 WARNING nova.virt.libvirt.driver [None req-42bf5750-f887-41a7-8e0f-ec5c640634de 0cec872a00f742d78563d6d16fc545cb 78f151250c04467bb4f6a229dda16fc5 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 14:30:18 compute-1 nova_compute[225855]: 2026-01-20 14:30:18.687 225859 DEBUG nova.virt.libvirt.host [None req-42bf5750-f887-41a7-8e0f-ec5c640634de 0cec872a00f742d78563d6d16fc545cb 78f151250c04467bb4f6a229dda16fc5 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 20 14:30:18 compute-1 nova_compute[225855]: 2026-01-20 14:30:18.688 225859 DEBUG nova.virt.libvirt.host [None req-42bf5750-f887-41a7-8e0f-ec5c640634de 0cec872a00f742d78563d6d16fc545cb 78f151250c04467bb4f6a229dda16fc5 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 20 14:30:18 compute-1 nova_compute[225855]: 2026-01-20 14:30:18.693 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:30:18 compute-1 nova_compute[225855]: 2026-01-20 14:30:18.737 225859 DEBUG nova.virt.libvirt.host [None req-42bf5750-f887-41a7-8e0f-ec5c640634de 0cec872a00f742d78563d6d16fc545cb 78f151250c04467bb4f6a229dda16fc5 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 20 14:30:18 compute-1 nova_compute[225855]: 2026-01-20 14:30:18.739 225859 DEBUG nova.virt.libvirt.host [None req-42bf5750-f887-41a7-8e0f-ec5c640634de 0cec872a00f742d78563d6d16fc545cb 78f151250c04467bb4f6a229dda16fc5 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 20 14:30:18 compute-1 nova_compute[225855]: 2026-01-20 14:30:18.741 225859 DEBUG nova.virt.libvirt.driver [None req-42bf5750-f887-41a7-8e0f-ec5c640634de 0cec872a00f742d78563d6d16fc545cb 78f151250c04467bb4f6a229dda16fc5 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 20 14:30:18 compute-1 nova_compute[225855]: 2026-01-20 14:30:18.741 225859 DEBUG nova.virt.hardware [None req-42bf5750-f887-41a7-8e0f-ec5c640634de 0cec872a00f742d78563d6d16fc545cb 78f151250c04467bb4f6a229dda16fc5 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-20T14:21:55Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='522deaab-a741-4dbb-932d-d8b13a211c33',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 20 14:30:18 compute-1 nova_compute[225855]: 2026-01-20 14:30:18.742 225859 DEBUG nova.virt.hardware [None req-42bf5750-f887-41a7-8e0f-ec5c640634de 0cec872a00f742d78563d6d16fc545cb 78f151250c04467bb4f6a229dda16fc5 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 20 14:30:18 compute-1 nova_compute[225855]: 2026-01-20 14:30:18.742 225859 DEBUG nova.virt.hardware [None req-42bf5750-f887-41a7-8e0f-ec5c640634de 0cec872a00f742d78563d6d16fc545cb 78f151250c04467bb4f6a229dda16fc5 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 20 14:30:18 compute-1 nova_compute[225855]: 2026-01-20 14:30:18.743 225859 DEBUG nova.virt.hardware [None req-42bf5750-f887-41a7-8e0f-ec5c640634de 0cec872a00f742d78563d6d16fc545cb 78f151250c04467bb4f6a229dda16fc5 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 20 14:30:18 compute-1 nova_compute[225855]: 2026-01-20 14:30:18.743 225859 DEBUG nova.virt.hardware [None req-42bf5750-f887-41a7-8e0f-ec5c640634de 0cec872a00f742d78563d6d16fc545cb 78f151250c04467bb4f6a229dda16fc5 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 20 14:30:18 compute-1 nova_compute[225855]: 2026-01-20 14:30:18.743 225859 DEBUG nova.virt.hardware [None req-42bf5750-f887-41a7-8e0f-ec5c640634de 0cec872a00f742d78563d6d16fc545cb 78f151250c04467bb4f6a229dda16fc5 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 20 14:30:18 compute-1 nova_compute[225855]: 2026-01-20 14:30:18.744 225859 DEBUG nova.virt.hardware [None req-42bf5750-f887-41a7-8e0f-ec5c640634de 0cec872a00f742d78563d6d16fc545cb 78f151250c04467bb4f6a229dda16fc5 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 20 14:30:18 compute-1 nova_compute[225855]: 2026-01-20 14:30:18.744 225859 DEBUG nova.virt.hardware [None req-42bf5750-f887-41a7-8e0f-ec5c640634de 0cec872a00f742d78563d6d16fc545cb 78f151250c04467bb4f6a229dda16fc5 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 20 14:30:18 compute-1 nova_compute[225855]: 2026-01-20 14:30:18.745 225859 DEBUG nova.virt.hardware [None req-42bf5750-f887-41a7-8e0f-ec5c640634de 0cec872a00f742d78563d6d16fc545cb 78f151250c04467bb4f6a229dda16fc5 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 20 14:30:18 compute-1 nova_compute[225855]: 2026-01-20 14:30:18.745 225859 DEBUG nova.virt.hardware [None req-42bf5750-f887-41a7-8e0f-ec5c640634de 0cec872a00f742d78563d6d16fc545cb 78f151250c04467bb4f6a229dda16fc5 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 20 14:30:18 compute-1 nova_compute[225855]: 2026-01-20 14:30:18.745 225859 DEBUG nova.virt.hardware [None req-42bf5750-f887-41a7-8e0f-ec5c640634de 0cec872a00f742d78563d6d16fc545cb 78f151250c04467bb4f6a229dda16fc5 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 20 14:30:18 compute-1 nova_compute[225855]: 2026-01-20 14:30:18.749 225859 DEBUG oslo_concurrency.processutils [None req-42bf5750-f887-41a7-8e0f-ec5c640634de 0cec872a00f742d78563d6d16fc545cb 78f151250c04467bb4f6a229dda16fc5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:30:18 compute-1 nova_compute[225855]: 2026-01-20 14:30:18.773 225859 DEBUG nova.network.neutron [None req-ffb5ecbd-9a67-415f-a514-18a4e75e1236 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] [instance: f3faf996-e066-4b11-b7f3-30aeffff726e] Successfully updated port: f65050ac-6a44-490a-b4b9-8c82c1f61630 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 20 14:30:18 compute-1 nova_compute[225855]: 2026-01-20 14:30:18.789 225859 DEBUG oslo_concurrency.lockutils [None req-ffb5ecbd-9a67-415f-a514-18a4e75e1236 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] Acquiring lock "refresh_cache-f3faf996-e066-4b11-b7f3-30aeffff726e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:30:18 compute-1 nova_compute[225855]: 2026-01-20 14:30:18.790 225859 DEBUG oslo_concurrency.lockutils [None req-ffb5ecbd-9a67-415f-a514-18a4e75e1236 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] Acquired lock "refresh_cache-f3faf996-e066-4b11-b7f3-30aeffff726e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:30:18 compute-1 nova_compute[225855]: 2026-01-20 14:30:18.790 225859 DEBUG nova.network.neutron [None req-ffb5ecbd-9a67-415f-a514-18a4e75e1236 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] [instance: f3faf996-e066-4b11-b7f3-30aeffff726e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 14:30:18 compute-1 ceph-mon[81775]: pgmap v1188: 321 pgs: 321 active+clean; 505 MiB data, 631 MiB used, 20 GiB / 21 GiB avail; 872 KiB/s rd, 5.0 MiB/s wr, 142 op/s
Jan 20 14:30:18 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/61502453' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:30:18 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/1828975272' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:30:18 compute-1 nova_compute[225855]: 2026-01-20 14:30:18.949 225859 DEBUG nova.compute.manager [req-eb3753cc-2879-474d-9930-bd6934878356 req-9cfeba9d-76b7-48d9-8fbf-17ec4c1531b0 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f3faf996-e066-4b11-b7f3-30aeffff726e] Received event network-changed-f65050ac-6a44-490a-b4b9-8c82c1f61630 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:30:18 compute-1 nova_compute[225855]: 2026-01-20 14:30:18.950 225859 DEBUG nova.compute.manager [req-eb3753cc-2879-474d-9930-bd6934878356 req-9cfeba9d-76b7-48d9-8fbf-17ec4c1531b0 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f3faf996-e066-4b11-b7f3-30aeffff726e] Refreshing instance network info cache due to event network-changed-f65050ac-6a44-490a-b4b9-8c82c1f61630. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 14:30:18 compute-1 nova_compute[225855]: 2026-01-20 14:30:18.951 225859 DEBUG oslo_concurrency.lockutils [req-eb3753cc-2879-474d-9930-bd6934878356 req-9cfeba9d-76b7-48d9-8fbf-17ec4c1531b0 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-f3faf996-e066-4b11-b7f3-30aeffff726e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:30:19 compute-1 nova_compute[225855]: 2026-01-20 14:30:19.020 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:30:19 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:30:19 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:30:19 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:30:19.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:30:19 compute-1 nova_compute[225855]: 2026-01-20 14:30:19.137 225859 DEBUG nova.network.neutron [None req-ffb5ecbd-9a67-415f-a514-18a4e75e1236 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] [instance: f3faf996-e066-4b11-b7f3-30aeffff726e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 20 14:30:19 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:30:19 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2148169791' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:30:19 compute-1 nova_compute[225855]: 2026-01-20 14:30:19.160 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:30:19 compute-1 nova_compute[225855]: 2026-01-20 14:30:19.166 225859 DEBUG nova.compute.provider_tree [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:30:19 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:30:19 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2203175762' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:30:19 compute-1 nova_compute[225855]: 2026-01-20 14:30:19.189 225859 DEBUG oslo_concurrency.processutils [None req-42bf5750-f887-41a7-8e0f-ec5c640634de 0cec872a00f742d78563d6d16fc545cb 78f151250c04467bb4f6a229dda16fc5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:30:19 compute-1 nova_compute[225855]: 2026-01-20 14:30:19.211 225859 DEBUG nova.storage.rbd_utils [None req-42bf5750-f887-41a7-8e0f-ec5c640634de 0cec872a00f742d78563d6d16fc545cb 78f151250c04467bb4f6a229dda16fc5 - - default default] rbd image d08682f8-72ef-462c-b4b7-044cf16fc193_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:30:19 compute-1 nova_compute[225855]: 2026-01-20 14:30:19.214 225859 DEBUG oslo_concurrency.processutils [None req-42bf5750-f887-41a7-8e0f-ec5c640634de 0cec872a00f742d78563d6d16fc545cb 78f151250c04467bb4f6a229dda16fc5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:30:19 compute-1 nova_compute[225855]: 2026-01-20 14:30:19.236 225859 DEBUG nova.scheduler.client.report [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:30:19 compute-1 nova_compute[225855]: 2026-01-20 14:30:19.275 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 14:30:19 compute-1 nova_compute[225855]: 2026-01-20 14:30:19.276 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.759s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:30:19 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:30:19 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3241562052' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:30:19 compute-1 nova_compute[225855]: 2026-01-20 14:30:19.629 225859 DEBUG oslo_concurrency.processutils [None req-42bf5750-f887-41a7-8e0f-ec5c640634de 0cec872a00f742d78563d6d16fc545cb 78f151250c04467bb4f6a229dda16fc5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.415s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:30:19 compute-1 nova_compute[225855]: 2026-01-20 14:30:19.630 225859 DEBUG nova.virt.libvirt.vif [None req-42bf5750-f887-41a7-8e0f-ec5c640634de 0cec872a00f742d78563d6d16fc545cb 78f151250c04467bb4f6a229dda16fc5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T14:30:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-FloatingIPsAssociationTestJSON-server-952138345',display_name='tempest-FloatingIPsAssociationTestJSON-server-952138345',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-floatingipsassociationtestjson-server-952138345',id=31,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='78f151250c04467bb4f6a229dda16fc5',ramdisk_id='',reservation_id='r-bkzlkdyv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-FloatingIPsAssociationTestJSON-146254261',owner_user_name
='tempest-FloatingIPsAssociationTestJSON-146254261-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T14:30:14Z,user_data=None,user_id='0cec872a00f742d78563d6d16fc545cb',uuid=d08682f8-72ef-462c-b4b7-044cf16fc193,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e848e00f-d594-4d70-9026-cd650417bf47", "address": "fa:16:3e:89:9f:c4", "network": {"id": "01e6deef-9aca-4d36-8215-4517982a86a3", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-70171378-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "78f151250c04467bb4f6a229dda16fc5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape848e00f-d5", "ovs_interfaceid": "e848e00f-d594-4d70-9026-cd650417bf47", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 20 14:30:19 compute-1 nova_compute[225855]: 2026-01-20 14:30:19.631 225859 DEBUG nova.network.os_vif_util [None req-42bf5750-f887-41a7-8e0f-ec5c640634de 0cec872a00f742d78563d6d16fc545cb 78f151250c04467bb4f6a229dda16fc5 - - default default] Converting VIF {"id": "e848e00f-d594-4d70-9026-cd650417bf47", "address": "fa:16:3e:89:9f:c4", "network": {"id": "01e6deef-9aca-4d36-8215-4517982a86a3", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-70171378-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "78f151250c04467bb4f6a229dda16fc5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape848e00f-d5", "ovs_interfaceid": "e848e00f-d594-4d70-9026-cd650417bf47", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:30:19 compute-1 nova_compute[225855]: 2026-01-20 14:30:19.632 225859 DEBUG nova.network.os_vif_util [None req-42bf5750-f887-41a7-8e0f-ec5c640634de 0cec872a00f742d78563d6d16fc545cb 78f151250c04467bb4f6a229dda16fc5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:89:9f:c4,bridge_name='br-int',has_traffic_filtering=True,id=e848e00f-d594-4d70-9026-cd650417bf47,network=Network(01e6deef-9aca-4d36-8215-4517982a86a3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape848e00f-d5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:30:19 compute-1 nova_compute[225855]: 2026-01-20 14:30:19.633 225859 DEBUG nova.objects.instance [None req-42bf5750-f887-41a7-8e0f-ec5c640634de 0cec872a00f742d78563d6d16fc545cb 78f151250c04467bb4f6a229dda16fc5 - - default default] Lazy-loading 'pci_devices' on Instance uuid d08682f8-72ef-462c-b4b7-044cf16fc193 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:30:19 compute-1 nova_compute[225855]: 2026-01-20 14:30:19.647 225859 DEBUG nova.virt.libvirt.driver [None req-42bf5750-f887-41a7-8e0f-ec5c640634de 0cec872a00f742d78563d6d16fc545cb 78f151250c04467bb4f6a229dda16fc5 - - default default] [instance: d08682f8-72ef-462c-b4b7-044cf16fc193] End _get_guest_xml xml=<domain type="kvm">
Jan 20 14:30:19 compute-1 nova_compute[225855]:   <uuid>d08682f8-72ef-462c-b4b7-044cf16fc193</uuid>
Jan 20 14:30:19 compute-1 nova_compute[225855]:   <name>instance-0000001f</name>
Jan 20 14:30:19 compute-1 nova_compute[225855]:   <memory>131072</memory>
Jan 20 14:30:19 compute-1 nova_compute[225855]:   <vcpu>1</vcpu>
Jan 20 14:30:19 compute-1 nova_compute[225855]:   <metadata>
Jan 20 14:30:19 compute-1 nova_compute[225855]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 14:30:19 compute-1 nova_compute[225855]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 14:30:19 compute-1 nova_compute[225855]:       <nova:name>tempest-FloatingIPsAssociationTestJSON-server-952138345</nova:name>
Jan 20 14:30:19 compute-1 nova_compute[225855]:       <nova:creationTime>2026-01-20 14:30:18</nova:creationTime>
Jan 20 14:30:19 compute-1 nova_compute[225855]:       <nova:flavor name="m1.nano">
Jan 20 14:30:19 compute-1 nova_compute[225855]:         <nova:memory>128</nova:memory>
Jan 20 14:30:19 compute-1 nova_compute[225855]:         <nova:disk>1</nova:disk>
Jan 20 14:30:19 compute-1 nova_compute[225855]:         <nova:swap>0</nova:swap>
Jan 20 14:30:19 compute-1 nova_compute[225855]:         <nova:ephemeral>0</nova:ephemeral>
Jan 20 14:30:19 compute-1 nova_compute[225855]:         <nova:vcpus>1</nova:vcpus>
Jan 20 14:30:19 compute-1 nova_compute[225855]:       </nova:flavor>
Jan 20 14:30:19 compute-1 nova_compute[225855]:       <nova:owner>
Jan 20 14:30:19 compute-1 nova_compute[225855]:         <nova:user uuid="0cec872a00f742d78563d6d16fc545cb">tempest-FloatingIPsAssociationTestJSON-146254261-project-member</nova:user>
Jan 20 14:30:19 compute-1 nova_compute[225855]:         <nova:project uuid="78f151250c04467bb4f6a229dda16fc5">tempest-FloatingIPsAssociationTestJSON-146254261</nova:project>
Jan 20 14:30:19 compute-1 nova_compute[225855]:       </nova:owner>
Jan 20 14:30:19 compute-1 nova_compute[225855]:       <nova:root type="image" uuid="a32b3e07-16d8-46fd-9a7b-c242c432fcf9"/>
Jan 20 14:30:19 compute-1 nova_compute[225855]:       <nova:ports>
Jan 20 14:30:19 compute-1 nova_compute[225855]:         <nova:port uuid="e848e00f-d594-4d70-9026-cd650417bf47">
Jan 20 14:30:19 compute-1 nova_compute[225855]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Jan 20 14:30:19 compute-1 nova_compute[225855]:         </nova:port>
Jan 20 14:30:19 compute-1 nova_compute[225855]:       </nova:ports>
Jan 20 14:30:19 compute-1 nova_compute[225855]:     </nova:instance>
Jan 20 14:30:19 compute-1 nova_compute[225855]:   </metadata>
Jan 20 14:30:19 compute-1 nova_compute[225855]:   <sysinfo type="smbios">
Jan 20 14:30:19 compute-1 nova_compute[225855]:     <system>
Jan 20 14:30:19 compute-1 nova_compute[225855]:       <entry name="manufacturer">RDO</entry>
Jan 20 14:30:19 compute-1 nova_compute[225855]:       <entry name="product">OpenStack Compute</entry>
Jan 20 14:30:19 compute-1 nova_compute[225855]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 14:30:19 compute-1 nova_compute[225855]:       <entry name="serial">d08682f8-72ef-462c-b4b7-044cf16fc193</entry>
Jan 20 14:30:19 compute-1 nova_compute[225855]:       <entry name="uuid">d08682f8-72ef-462c-b4b7-044cf16fc193</entry>
Jan 20 14:30:19 compute-1 nova_compute[225855]:       <entry name="family">Virtual Machine</entry>
Jan 20 14:30:19 compute-1 nova_compute[225855]:     </system>
Jan 20 14:30:19 compute-1 nova_compute[225855]:   </sysinfo>
Jan 20 14:30:19 compute-1 nova_compute[225855]:   <os>
Jan 20 14:30:19 compute-1 nova_compute[225855]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 20 14:30:19 compute-1 nova_compute[225855]:     <boot dev="hd"/>
Jan 20 14:30:19 compute-1 nova_compute[225855]:     <smbios mode="sysinfo"/>
Jan 20 14:30:19 compute-1 nova_compute[225855]:   </os>
Jan 20 14:30:19 compute-1 nova_compute[225855]:   <features>
Jan 20 14:30:19 compute-1 nova_compute[225855]:     <acpi/>
Jan 20 14:30:19 compute-1 nova_compute[225855]:     <apic/>
Jan 20 14:30:19 compute-1 nova_compute[225855]:     <vmcoreinfo/>
Jan 20 14:30:19 compute-1 nova_compute[225855]:   </features>
Jan 20 14:30:19 compute-1 nova_compute[225855]:   <clock offset="utc">
Jan 20 14:30:19 compute-1 nova_compute[225855]:     <timer name="pit" tickpolicy="delay"/>
Jan 20 14:30:19 compute-1 nova_compute[225855]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 20 14:30:19 compute-1 nova_compute[225855]:     <timer name="hpet" present="no"/>
Jan 20 14:30:19 compute-1 nova_compute[225855]:   </clock>
Jan 20 14:30:19 compute-1 nova_compute[225855]:   <cpu mode="custom" match="exact">
Jan 20 14:30:19 compute-1 nova_compute[225855]:     <model>Nehalem</model>
Jan 20 14:30:19 compute-1 nova_compute[225855]:     <topology sockets="1" cores="1" threads="1"/>
Jan 20 14:30:19 compute-1 nova_compute[225855]:   </cpu>
Jan 20 14:30:19 compute-1 nova_compute[225855]:   <devices>
Jan 20 14:30:19 compute-1 nova_compute[225855]:     <disk type="network" device="disk">
Jan 20 14:30:19 compute-1 nova_compute[225855]:       <driver type="raw" cache="none"/>
Jan 20 14:30:19 compute-1 nova_compute[225855]:       <source protocol="rbd" name="vms/d08682f8-72ef-462c-b4b7-044cf16fc193_disk">
Jan 20 14:30:19 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:30:19 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:30:19 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:30:19 compute-1 nova_compute[225855]:       </source>
Jan 20 14:30:19 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 14:30:19 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:30:19 compute-1 nova_compute[225855]:       </auth>
Jan 20 14:30:19 compute-1 nova_compute[225855]:       <target dev="vda" bus="virtio"/>
Jan 20 14:30:19 compute-1 nova_compute[225855]:     </disk>
Jan 20 14:30:19 compute-1 nova_compute[225855]:     <disk type="network" device="cdrom">
Jan 20 14:30:19 compute-1 nova_compute[225855]:       <driver type="raw" cache="none"/>
Jan 20 14:30:19 compute-1 nova_compute[225855]:       <source protocol="rbd" name="vms/d08682f8-72ef-462c-b4b7-044cf16fc193_disk.config">
Jan 20 14:30:19 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:30:19 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:30:19 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:30:19 compute-1 nova_compute[225855]:       </source>
Jan 20 14:30:19 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 14:30:19 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:30:19 compute-1 nova_compute[225855]:       </auth>
Jan 20 14:30:19 compute-1 nova_compute[225855]:       <target dev="sda" bus="sata"/>
Jan 20 14:30:19 compute-1 nova_compute[225855]:     </disk>
Jan 20 14:30:19 compute-1 nova_compute[225855]:     <interface type="ethernet">
Jan 20 14:30:19 compute-1 nova_compute[225855]:       <mac address="fa:16:3e:89:9f:c4"/>
Jan 20 14:30:19 compute-1 nova_compute[225855]:       <model type="virtio"/>
Jan 20 14:30:19 compute-1 nova_compute[225855]:       <driver name="vhost" rx_queue_size="512"/>
Jan 20 14:30:19 compute-1 nova_compute[225855]:       <mtu size="1442"/>
Jan 20 14:30:19 compute-1 nova_compute[225855]:       <target dev="tape848e00f-d5"/>
Jan 20 14:30:19 compute-1 nova_compute[225855]:     </interface>
Jan 20 14:30:19 compute-1 nova_compute[225855]:     <serial type="pty">
Jan 20 14:30:19 compute-1 nova_compute[225855]:       <log file="/var/lib/nova/instances/d08682f8-72ef-462c-b4b7-044cf16fc193/console.log" append="off"/>
Jan 20 14:30:19 compute-1 nova_compute[225855]:     </serial>
Jan 20 14:30:19 compute-1 nova_compute[225855]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 20 14:30:19 compute-1 nova_compute[225855]:     <video>
Jan 20 14:30:19 compute-1 nova_compute[225855]:       <model type="virtio"/>
Jan 20 14:30:19 compute-1 nova_compute[225855]:     </video>
Jan 20 14:30:19 compute-1 nova_compute[225855]:     <input type="tablet" bus="usb"/>
Jan 20 14:30:19 compute-1 nova_compute[225855]:     <rng model="virtio">
Jan 20 14:30:19 compute-1 nova_compute[225855]:       <backend model="random">/dev/urandom</backend>
Jan 20 14:30:19 compute-1 nova_compute[225855]:     </rng>
Jan 20 14:30:19 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root"/>
Jan 20 14:30:19 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:30:19 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:30:19 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:30:19 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:30:19 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:30:19 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:30:19 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:30:19 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:30:19 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:30:19 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:30:19 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:30:19 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:30:19 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:30:19 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:30:19 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:30:19 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:30:19 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:30:19 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:30:19 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:30:19 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:30:19 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:30:19 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:30:19 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:30:19 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:30:19 compute-1 nova_compute[225855]:     <controller type="usb" index="0"/>
Jan 20 14:30:19 compute-1 nova_compute[225855]:     <memballoon model="virtio">
Jan 20 14:30:19 compute-1 nova_compute[225855]:       <stats period="10"/>
Jan 20 14:30:19 compute-1 nova_compute[225855]:     </memballoon>
Jan 20 14:30:19 compute-1 nova_compute[225855]:   </devices>
Jan 20 14:30:19 compute-1 nova_compute[225855]: </domain>
Jan 20 14:30:19 compute-1 nova_compute[225855]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 20 14:30:19 compute-1 nova_compute[225855]: 2026-01-20 14:30:19.649 225859 DEBUG nova.compute.manager [None req-42bf5750-f887-41a7-8e0f-ec5c640634de 0cec872a00f742d78563d6d16fc545cb 78f151250c04467bb4f6a229dda16fc5 - - default default] [instance: d08682f8-72ef-462c-b4b7-044cf16fc193] Preparing to wait for external event network-vif-plugged-e848e00f-d594-4d70-9026-cd650417bf47 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 20 14:30:19 compute-1 nova_compute[225855]: 2026-01-20 14:30:19.650 225859 DEBUG oslo_concurrency.lockutils [None req-42bf5750-f887-41a7-8e0f-ec5c640634de 0cec872a00f742d78563d6d16fc545cb 78f151250c04467bb4f6a229dda16fc5 - - default default] Acquiring lock "d08682f8-72ef-462c-b4b7-044cf16fc193-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:30:19 compute-1 nova_compute[225855]: 2026-01-20 14:30:19.650 225859 DEBUG oslo_concurrency.lockutils [None req-42bf5750-f887-41a7-8e0f-ec5c640634de 0cec872a00f742d78563d6d16fc545cb 78f151250c04467bb4f6a229dda16fc5 - - default default] Lock "d08682f8-72ef-462c-b4b7-044cf16fc193-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:30:19 compute-1 nova_compute[225855]: 2026-01-20 14:30:19.650 225859 DEBUG oslo_concurrency.lockutils [None req-42bf5750-f887-41a7-8e0f-ec5c640634de 0cec872a00f742d78563d6d16fc545cb 78f151250c04467bb4f6a229dda16fc5 - - default default] Lock "d08682f8-72ef-462c-b4b7-044cf16fc193-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:30:19 compute-1 nova_compute[225855]: 2026-01-20 14:30:19.651 225859 DEBUG nova.virt.libvirt.vif [None req-42bf5750-f887-41a7-8e0f-ec5c640634de 0cec872a00f742d78563d6d16fc545cb 78f151250c04467bb4f6a229dda16fc5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T14:30:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-FloatingIPsAssociationTestJSON-server-952138345',display_name='tempest-FloatingIPsAssociationTestJSON-server-952138345',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-floatingipsassociationtestjson-server-952138345',id=31,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='78f151250c04467bb4f6a229dda16fc5',ramdisk_id='',reservation_id='r-bkzlkdyv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-FloatingIPsAssociationTestJSON-146254261',owner_user_name='tempest-FloatingIPsAssociationTestJSON-146254261-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T14:30:14Z,user_data=None,user_id='0cec872a00f742d78563d6d16fc545cb',uuid=d08682f8-72ef-462c-b4b7-044cf16fc193,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e848e00f-d594-4d70-9026-cd650417bf47", "address": "fa:16:3e:89:9f:c4", "network": {"id": "01e6deef-9aca-4d36-8215-4517982a86a3", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-70171378-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "78f151250c04467bb4f6a229dda16fc5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape848e00f-d5", "ovs_interfaceid": "e848e00f-d594-4d70-9026-cd650417bf47", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 20 14:30:19 compute-1 nova_compute[225855]: 2026-01-20 14:30:19.651 225859 DEBUG nova.network.os_vif_util [None req-42bf5750-f887-41a7-8e0f-ec5c640634de 0cec872a00f742d78563d6d16fc545cb 78f151250c04467bb4f6a229dda16fc5 - - default default] Converting VIF {"id": "e848e00f-d594-4d70-9026-cd650417bf47", "address": "fa:16:3e:89:9f:c4", "network": {"id": "01e6deef-9aca-4d36-8215-4517982a86a3", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-70171378-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "78f151250c04467bb4f6a229dda16fc5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape848e00f-d5", "ovs_interfaceid": "e848e00f-d594-4d70-9026-cd650417bf47", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:30:19 compute-1 nova_compute[225855]: 2026-01-20 14:30:19.652 225859 DEBUG nova.network.os_vif_util [None req-42bf5750-f887-41a7-8e0f-ec5c640634de 0cec872a00f742d78563d6d16fc545cb 78f151250c04467bb4f6a229dda16fc5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:89:9f:c4,bridge_name='br-int',has_traffic_filtering=True,id=e848e00f-d594-4d70-9026-cd650417bf47,network=Network(01e6deef-9aca-4d36-8215-4517982a86a3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape848e00f-d5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:30:19 compute-1 nova_compute[225855]: 2026-01-20 14:30:19.652 225859 DEBUG os_vif [None req-42bf5750-f887-41a7-8e0f-ec5c640634de 0cec872a00f742d78563d6d16fc545cb 78f151250c04467bb4f6a229dda16fc5 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:89:9f:c4,bridge_name='br-int',has_traffic_filtering=True,id=e848e00f-d594-4d70-9026-cd650417bf47,network=Network(01e6deef-9aca-4d36-8215-4517982a86a3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape848e00f-d5') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 20 14:30:19 compute-1 nova_compute[225855]: 2026-01-20 14:30:19.653 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:30:19 compute-1 nova_compute[225855]: 2026-01-20 14:30:19.653 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:30:19 compute-1 nova_compute[225855]: 2026-01-20 14:30:19.654 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:30:19 compute-1 nova_compute[225855]: 2026-01-20 14:30:19.658 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:30:19 compute-1 nova_compute[225855]: 2026-01-20 14:30:19.658 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape848e00f-d5, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:30:19 compute-1 nova_compute[225855]: 2026-01-20 14:30:19.659 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tape848e00f-d5, col_values=(('external_ids', {'iface-id': 'e848e00f-d594-4d70-9026-cd650417bf47', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:89:9f:c4', 'vm-uuid': 'd08682f8-72ef-462c-b4b7-044cf16fc193'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:30:19 compute-1 nova_compute[225855]: 2026-01-20 14:30:19.660 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:30:19 compute-1 NetworkManager[49104]: <info>  [1768919419.6620] manager: (tape848e00f-d5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/51)
Jan 20 14:30:19 compute-1 nova_compute[225855]: 2026-01-20 14:30:19.662 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 14:30:19 compute-1 nova_compute[225855]: 2026-01-20 14:30:19.670 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:30:19 compute-1 nova_compute[225855]: 2026-01-20 14:30:19.671 225859 INFO os_vif [None req-42bf5750-f887-41a7-8e0f-ec5c640634de 0cec872a00f742d78563d6d16fc545cb 78f151250c04467bb4f6a229dda16fc5 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:89:9f:c4,bridge_name='br-int',has_traffic_filtering=True,id=e848e00f-d594-4d70-9026-cd650417bf47,network=Network(01e6deef-9aca-4d36-8215-4517982a86a3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape848e00f-d5')
Jan 20 14:30:19 compute-1 nova_compute[225855]: 2026-01-20 14:30:19.720 225859 DEBUG nova.virt.libvirt.driver [None req-42bf5750-f887-41a7-8e0f-ec5c640634de 0cec872a00f742d78563d6d16fc545cb 78f151250c04467bb4f6a229dda16fc5 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:30:19 compute-1 nova_compute[225855]: 2026-01-20 14:30:19.721 225859 DEBUG nova.virt.libvirt.driver [None req-42bf5750-f887-41a7-8e0f-ec5c640634de 0cec872a00f742d78563d6d16fc545cb 78f151250c04467bb4f6a229dda16fc5 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:30:19 compute-1 nova_compute[225855]: 2026-01-20 14:30:19.721 225859 DEBUG nova.virt.libvirt.driver [None req-42bf5750-f887-41a7-8e0f-ec5c640634de 0cec872a00f742d78563d6d16fc545cb 78f151250c04467bb4f6a229dda16fc5 - - default default] No VIF found with MAC fa:16:3e:89:9f:c4, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 20 14:30:19 compute-1 nova_compute[225855]: 2026-01-20 14:30:19.722 225859 INFO nova.virt.libvirt.driver [None req-42bf5750-f887-41a7-8e0f-ec5c640634de 0cec872a00f742d78563d6d16fc545cb 78f151250c04467bb4f6a229dda16fc5 - - default default] [instance: d08682f8-72ef-462c-b4b7-044cf16fc193] Using config drive
Jan 20 14:30:19 compute-1 nova_compute[225855]: 2026-01-20 14:30:19.751 225859 DEBUG nova.storage.rbd_utils [None req-42bf5750-f887-41a7-8e0f-ec5c640634de 0cec872a00f742d78563d6d16fc545cb 78f151250c04467bb4f6a229dda16fc5 - - default default] rbd image d08682f8-72ef-462c-b4b7-044cf16fc193_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:30:19 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:30:19 compute-1 nova_compute[225855]: 2026-01-20 14:30:19.835 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:30:19 compute-1 nova_compute[225855]: 2026-01-20 14:30:19.835 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:30:19 compute-1 nova_compute[225855]: 2026-01-20 14:30:19.835 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 14:30:19 compute-1 nova_compute[225855]: 2026-01-20 14:30:19.992 225859 DEBUG nova.network.neutron [None req-ffb5ecbd-9a67-415f-a514-18a4e75e1236 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] [instance: f3faf996-e066-4b11-b7f3-30aeffff726e] Updating instance_info_cache with network_info: [{"id": "f65050ac-6a44-490a-b4b9-8c82c1f61630", "address": "fa:16:3e:fc:ae:50", "network": {"id": "02f86d1d-5cad-49c5-9004-3de3e4739ad5", "bridge": "br-int", "label": "tempest-UpdateMultiattachVolumeNegativeTest-889517255-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0cee74dd60da4a839bb5eb0ba3137edf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf65050ac-6a", "ovs_interfaceid": "f65050ac-6a44-490a-b4b9-8c82c1f61630", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:30:20 compute-1 nova_compute[225855]: 2026-01-20 14:30:20.010 225859 DEBUG oslo_concurrency.lockutils [None req-ffb5ecbd-9a67-415f-a514-18a4e75e1236 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] Releasing lock "refresh_cache-f3faf996-e066-4b11-b7f3-30aeffff726e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:30:20 compute-1 nova_compute[225855]: 2026-01-20 14:30:20.011 225859 DEBUG nova.compute.manager [None req-ffb5ecbd-9a67-415f-a514-18a4e75e1236 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] [instance: f3faf996-e066-4b11-b7f3-30aeffff726e] Instance network_info: |[{"id": "f65050ac-6a44-490a-b4b9-8c82c1f61630", "address": "fa:16:3e:fc:ae:50", "network": {"id": "02f86d1d-5cad-49c5-9004-3de3e4739ad5", "bridge": "br-int", "label": "tempest-UpdateMultiattachVolumeNegativeTest-889517255-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0cee74dd60da4a839bb5eb0ba3137edf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf65050ac-6a", "ovs_interfaceid": "f65050ac-6a44-490a-b4b9-8c82c1f61630", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 20 14:30:20 compute-1 nova_compute[225855]: 2026-01-20 14:30:20.012 225859 DEBUG oslo_concurrency.lockutils [req-eb3753cc-2879-474d-9930-bd6934878356 req-9cfeba9d-76b7-48d9-8fbf-17ec4c1531b0 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-f3faf996-e066-4b11-b7f3-30aeffff726e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:30:20 compute-1 nova_compute[225855]: 2026-01-20 14:30:20.013 225859 DEBUG nova.network.neutron [req-eb3753cc-2879-474d-9930-bd6934878356 req-9cfeba9d-76b7-48d9-8fbf-17ec4c1531b0 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f3faf996-e066-4b11-b7f3-30aeffff726e] Refreshing network info cache for port f65050ac-6a44-490a-b4b9-8c82c1f61630 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 14:30:20 compute-1 nova_compute[225855]: 2026-01-20 14:30:20.018 225859 DEBUG nova.virt.libvirt.driver [None req-ffb5ecbd-9a67-415f-a514-18a4e75e1236 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] [instance: f3faf996-e066-4b11-b7f3-30aeffff726e] Start _get_guest_xml network_info=[{"id": "f65050ac-6a44-490a-b4b9-8c82c1f61630", "address": "fa:16:3e:fc:ae:50", "network": {"id": "02f86d1d-5cad-49c5-9004-3de3e4739ad5", "bridge": "br-int", "label": "tempest-UpdateMultiattachVolumeNegativeTest-889517255-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0cee74dd60da4a839bb5eb0ba3137edf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf65050ac-6a", "ovs_interfaceid": "f65050ac-6a44-490a-b4b9-8c82c1f61630", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'device_type': 'disk', 'encryption_options': None, 'size': 0, 'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'guest_format': None, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'image_id': 'a32b3e07-16d8-46fd-9a7b-c242c432fcf9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 20 14:30:20 compute-1 nova_compute[225855]: 2026-01-20 14:30:20.024 225859 WARNING nova.virt.libvirt.driver [None req-ffb5ecbd-9a67-415f-a514-18a4e75e1236 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 14:30:20 compute-1 nova_compute[225855]: 2026-01-20 14:30:20.028 225859 DEBUG nova.virt.libvirt.host [None req-ffb5ecbd-9a67-415f-a514-18a4e75e1236 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 20 14:30:20 compute-1 nova_compute[225855]: 2026-01-20 14:30:20.029 225859 DEBUG nova.virt.libvirt.host [None req-ffb5ecbd-9a67-415f-a514-18a4e75e1236 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 20 14:30:20 compute-1 nova_compute[225855]: 2026-01-20 14:30:20.034 225859 DEBUG nova.virt.libvirt.host [None req-ffb5ecbd-9a67-415f-a514-18a4e75e1236 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 20 14:30:20 compute-1 nova_compute[225855]: 2026-01-20 14:30:20.035 225859 DEBUG nova.virt.libvirt.host [None req-ffb5ecbd-9a67-415f-a514-18a4e75e1236 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 20 14:30:20 compute-1 nova_compute[225855]: 2026-01-20 14:30:20.037 225859 DEBUG nova.virt.libvirt.driver [None req-ffb5ecbd-9a67-415f-a514-18a4e75e1236 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 20 14:30:20 compute-1 nova_compute[225855]: 2026-01-20 14:30:20.038 225859 DEBUG nova.virt.hardware [None req-ffb5ecbd-9a67-415f-a514-18a4e75e1236 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-20T14:21:55Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='522deaab-a741-4dbb-932d-d8b13a211c33',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 20 14:30:20 compute-1 nova_compute[225855]: 2026-01-20 14:30:20.039 225859 DEBUG nova.virt.hardware [None req-ffb5ecbd-9a67-415f-a514-18a4e75e1236 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 20 14:30:20 compute-1 nova_compute[225855]: 2026-01-20 14:30:20.039 225859 DEBUG nova.virt.hardware [None req-ffb5ecbd-9a67-415f-a514-18a4e75e1236 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 20 14:30:20 compute-1 nova_compute[225855]: 2026-01-20 14:30:20.040 225859 DEBUG nova.virt.hardware [None req-ffb5ecbd-9a67-415f-a514-18a4e75e1236 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 20 14:30:20 compute-1 nova_compute[225855]: 2026-01-20 14:30:20.041 225859 DEBUG nova.virt.hardware [None req-ffb5ecbd-9a67-415f-a514-18a4e75e1236 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 20 14:30:20 compute-1 nova_compute[225855]: 2026-01-20 14:30:20.041 225859 DEBUG nova.virt.hardware [None req-ffb5ecbd-9a67-415f-a514-18a4e75e1236 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 20 14:30:20 compute-1 nova_compute[225855]: 2026-01-20 14:30:20.042 225859 DEBUG nova.virt.hardware [None req-ffb5ecbd-9a67-415f-a514-18a4e75e1236 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 20 14:30:20 compute-1 nova_compute[225855]: 2026-01-20 14:30:20.043 225859 DEBUG nova.virt.hardware [None req-ffb5ecbd-9a67-415f-a514-18a4e75e1236 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 20 14:30:20 compute-1 nova_compute[225855]: 2026-01-20 14:30:20.043 225859 DEBUG nova.virt.hardware [None req-ffb5ecbd-9a67-415f-a514-18a4e75e1236 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 20 14:30:20 compute-1 nova_compute[225855]: 2026-01-20 14:30:20.044 225859 DEBUG nova.virt.hardware [None req-ffb5ecbd-9a67-415f-a514-18a4e75e1236 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 20 14:30:20 compute-1 nova_compute[225855]: 2026-01-20 14:30:20.044 225859 DEBUG nova.virt.hardware [None req-ffb5ecbd-9a67-415f-a514-18a4e75e1236 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 20 14:30:20 compute-1 nova_compute[225855]: 2026-01-20 14:30:20.049 225859 DEBUG oslo_concurrency.processutils [None req-ffb5ecbd-9a67-415f-a514-18a4e75e1236 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:30:20 compute-1 nova_compute[225855]: 2026-01-20 14:30:20.163 225859 INFO nova.virt.libvirt.driver [None req-42bf5750-f887-41a7-8e0f-ec5c640634de 0cec872a00f742d78563d6d16fc545cb 78f151250c04467bb4f6a229dda16fc5 - - default default] [instance: d08682f8-72ef-462c-b4b7-044cf16fc193] Creating config drive at /var/lib/nova/instances/d08682f8-72ef-462c-b4b7-044cf16fc193/disk.config
Jan 20 14:30:20 compute-1 nova_compute[225855]: 2026-01-20 14:30:20.173 225859 DEBUG oslo_concurrency.processutils [None req-42bf5750-f887-41a7-8e0f-ec5c640634de 0cec872a00f742d78563d6d16fc545cb 78f151250c04467bb4f6a229dda16fc5 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d08682f8-72ef-462c-b4b7-044cf16fc193/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpeqdwh14w execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:30:20 compute-1 nova_compute[225855]: 2026-01-20 14:30:20.258 225859 DEBUG nova.network.neutron [req-a9aeec7f-be7c-489d-bb89-e11c2eb98e93 req-20df941b-e8d7-4122-a27c-481b964e4bca 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d08682f8-72ef-462c-b4b7-044cf16fc193] Updated VIF entry in instance network info cache for port e848e00f-d594-4d70-9026-cd650417bf47. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 14:30:20 compute-1 nova_compute[225855]: 2026-01-20 14:30:20.265 225859 DEBUG nova.network.neutron [req-a9aeec7f-be7c-489d-bb89-e11c2eb98e93 req-20df941b-e8d7-4122-a27c-481b964e4bca 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d08682f8-72ef-462c-b4b7-044cf16fc193] Updating instance_info_cache with network_info: [{"id": "e848e00f-d594-4d70-9026-cd650417bf47", "address": "fa:16:3e:89:9f:c4", "network": {"id": "01e6deef-9aca-4d36-8215-4517982a86a3", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-70171378-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "78f151250c04467bb4f6a229dda16fc5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape848e00f-d5", "ovs_interfaceid": "e848e00f-d594-4d70-9026-cd650417bf47", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:30:20 compute-1 nova_compute[225855]: 2026-01-20 14:30:20.293 225859 DEBUG oslo_concurrency.lockutils [req-a9aeec7f-be7c-489d-bb89-e11c2eb98e93 req-20df941b-e8d7-4122-a27c-481b964e4bca 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-d08682f8-72ef-462c-b4b7-044cf16fc193" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:30:20 compute-1 nova_compute[225855]: 2026-01-20 14:30:20.305 225859 DEBUG oslo_concurrency.processutils [None req-42bf5750-f887-41a7-8e0f-ec5c640634de 0cec872a00f742d78563d6d16fc545cb 78f151250c04467bb4f6a229dda16fc5 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d08682f8-72ef-462c-b4b7-044cf16fc193/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpeqdwh14w" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:30:20 compute-1 nova_compute[225855]: 2026-01-20 14:30:20.338 225859 DEBUG nova.storage.rbd_utils [None req-42bf5750-f887-41a7-8e0f-ec5c640634de 0cec872a00f742d78563d6d16fc545cb 78f151250c04467bb4f6a229dda16fc5 - - default default] rbd image d08682f8-72ef-462c-b4b7-044cf16fc193_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:30:20 compute-1 nova_compute[225855]: 2026-01-20 14:30:20.342 225859 DEBUG oslo_concurrency.processutils [None req-42bf5750-f887-41a7-8e0f-ec5c640634de 0cec872a00f742d78563d6d16fc545cb 78f151250c04467bb4f6a229dda16fc5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d08682f8-72ef-462c-b4b7-044cf16fc193/disk.config d08682f8-72ef-462c-b4b7-044cf16fc193_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:30:20 compute-1 nova_compute[225855]: 2026-01-20 14:30:20.367 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:30:20 compute-1 nova_compute[225855]: 2026-01-20 14:30:20.368 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:30:20 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:30:20 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:30:20 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:30:20.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:30:20 compute-1 nova_compute[225855]: 2026-01-20 14:30:20.399 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:30:20 compute-1 sudo[240107]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:30:20 compute-1 sudo[240107]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:30:20 compute-1 sudo[240107]: pam_unix(sudo:session): session closed for user root
Jan 20 14:30:20 compute-1 sudo[240147]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:30:20 compute-1 sudo[240147]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:30:20 compute-1 sudo[240147]: pam_unix(sudo:session): session closed for user root
Jan 20 14:30:20 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:30:20 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2267082636' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:30:20 compute-1 sudo[240172]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:30:20 compute-1 sudo[240172]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:30:20 compute-1 nova_compute[225855]: 2026-01-20 14:30:20.589 225859 DEBUG oslo_concurrency.processutils [None req-ffb5ecbd-9a67-415f-a514-18a4e75e1236 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.540s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:30:20 compute-1 sudo[240172]: pam_unix(sudo:session): session closed for user root
Jan 20 14:30:20 compute-1 nova_compute[225855]: 2026-01-20 14:30:20.633 225859 DEBUG nova.storage.rbd_utils [None req-ffb5ecbd-9a67-415f-a514-18a4e75e1236 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] rbd image f3faf996-e066-4b11-b7f3-30aeffff726e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:30:20 compute-1 nova_compute[225855]: 2026-01-20 14:30:20.639 225859 DEBUG oslo_concurrency.processutils [None req-ffb5ecbd-9a67-415f-a514-18a4e75e1236 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:30:20 compute-1 sudo[240203]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 14:30:20 compute-1 sudo[240203]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:30:20 compute-1 sudo[240203]: pam_unix(sudo:session): session closed for user root
Jan 20 14:30:21 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:30:21 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:30:21 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:30:21.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:30:21 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:30:21 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/891766828' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:30:21 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/2148169791' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:30:21 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/2203175762' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:30:21 compute-1 ceph-mon[81775]: pgmap v1189: 321 pgs: 321 active+clean; 520 MiB data, 623 MiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 5.1 MiB/s wr, 168 op/s
Jan 20 14:30:21 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/3241562052' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:30:21 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:30:21 compute-1 nova_compute[225855]: 2026-01-20 14:30:21.391 225859 DEBUG oslo_concurrency.processutils [None req-42bf5750-f887-41a7-8e0f-ec5c640634de 0cec872a00f742d78563d6d16fc545cb 78f151250c04467bb4f6a229dda16fc5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d08682f8-72ef-462c-b4b7-044cf16fc193/disk.config d08682f8-72ef-462c-b4b7-044cf16fc193_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.048s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:30:21 compute-1 nova_compute[225855]: 2026-01-20 14:30:21.393 225859 INFO nova.virt.libvirt.driver [None req-42bf5750-f887-41a7-8e0f-ec5c640634de 0cec872a00f742d78563d6d16fc545cb 78f151250c04467bb4f6a229dda16fc5 - - default default] [instance: d08682f8-72ef-462c-b4b7-044cf16fc193] Deleting local config drive /var/lib/nova/instances/d08682f8-72ef-462c-b4b7-044cf16fc193/disk.config because it was imported into RBD.
Jan 20 14:30:21 compute-1 kernel: tape848e00f-d5: entered promiscuous mode
Jan 20 14:30:21 compute-1 NetworkManager[49104]: <info>  [1768919421.4445] manager: (tape848e00f-d5): new Tun device (/org/freedesktop/NetworkManager/Devices/52)
Jan 20 14:30:21 compute-1 ovn_controller[130490]: 2026-01-20T14:30:21Z|00109|binding|INFO|Claiming lport e848e00f-d594-4d70-9026-cd650417bf47 for this chassis.
Jan 20 14:30:21 compute-1 ovn_controller[130490]: 2026-01-20T14:30:21Z|00110|binding|INFO|e848e00f-d594-4d70-9026-cd650417bf47: Claiming fa:16:3e:89:9f:c4 10.100.0.5
Jan 20 14:30:21 compute-1 nova_compute[225855]: 2026-01-20 14:30:21.489 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:30:21 compute-1 nova_compute[225855]: 2026-01-20 14:30:21.495 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:30:21 compute-1 nova_compute[225855]: 2026-01-20 14:30:21.502 225859 DEBUG oslo_concurrency.processutils [None req-ffb5ecbd-9a67-415f-a514-18a4e75e1236 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.863s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:30:21 compute-1 nova_compute[225855]: 2026-01-20 14:30:21.503 225859 DEBUG nova.virt.libvirt.vif [None req-ffb5ecbd-9a67-415f-a514-18a4e75e1236 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T14:30:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-UpdateMultiattachVolumeNegativeTest-server-1191836092',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-updatemultiattachvolumenegativetest-server-1191836092',id=30,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBH+01n3DJe3yYfRmwifZEomZrLtaFilErLasmr7ze/p0n1d6nPaSWQOHrHfJ9ubgBCwoqlwHjFIWrKKyRcRI1f3OIubHCG4LO7UMySAzmCXBSDkLJPz6Qzoln3dTb/xrow==',key_name='tempest-keypair-696534507',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0cee74dd60da4a839bb5eb0ba3137edf',ramdisk_id='',reservation_id='r-0tyxczv3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-UpdateMultiattachVolumeNegativeTest-859917658',owner_user_name='tempest-UpdateMultiattachVolumeNegativeTest-859917658-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T14:30:15Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='6a3fbc3f92a849e88cbf34d28ca17e43',uuid=f3faf996-e066-4b11-b7f3-30aeffff726e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f65050ac-6a44-490a-b4b9-8c82c1f61630", "address": "fa:16:3e:fc:ae:50", "network": {"id": "02f86d1d-5cad-49c5-9004-3de3e4739ad5", "bridge": "br-int", "label": "tempest-UpdateMultiattachVolumeNegativeTest-889517255-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": 
[{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0cee74dd60da4a839bb5eb0ba3137edf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf65050ac-6a", "ovs_interfaceid": "f65050ac-6a44-490a-b4b9-8c82c1f61630", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 20 14:30:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:30:21.505 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:89:9f:c4 10.100.0.5'], port_security=['fa:16:3e:89:9f:c4 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'd08682f8-72ef-462c-b4b7-044cf16fc193', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-01e6deef-9aca-4d36-8215-4517982a86a3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '78f151250c04467bb4f6a229dda16fc5', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'dd36e8d2-993a-4618-8fff-62abafaadfd3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=86700b79-bb44-47f0-88a5-d4c8eda3acbb, chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], logical_port=e848e00f-d594-4d70-9026-cd650417bf47) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:30:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:30:21.506 140354 INFO neutron.agent.ovn.metadata.agent [-] Port e848e00f-d594-4d70-9026-cd650417bf47 in datapath 01e6deef-9aca-4d36-8215-4517982a86a3 bound to our chassis
Jan 20 14:30:21 compute-1 nova_compute[225855]: 2026-01-20 14:30:21.504 225859 DEBUG nova.network.os_vif_util [None req-ffb5ecbd-9a67-415f-a514-18a4e75e1236 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] Converting VIF {"id": "f65050ac-6a44-490a-b4b9-8c82c1f61630", "address": "fa:16:3e:fc:ae:50", "network": {"id": "02f86d1d-5cad-49c5-9004-3de3e4739ad5", "bridge": "br-int", "label": "tempest-UpdateMultiattachVolumeNegativeTest-889517255-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0cee74dd60da4a839bb5eb0ba3137edf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf65050ac-6a", "ovs_interfaceid": "f65050ac-6a44-490a-b4b9-8c82c1f61630", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:30:21 compute-1 nova_compute[225855]: 2026-01-20 14:30:21.505 225859 DEBUG nova.network.os_vif_util [None req-ffb5ecbd-9a67-415f-a514-18a4e75e1236 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fc:ae:50,bridge_name='br-int',has_traffic_filtering=True,id=f65050ac-6a44-490a-b4b9-8c82c1f61630,network=Network(02f86d1d-5cad-49c5-9004-3de3e4739ad5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf65050ac-6a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:30:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:30:21.507 140354 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 01e6deef-9aca-4d36-8215-4517982a86a3
Jan 20 14:30:21 compute-1 nova_compute[225855]: 2026-01-20 14:30:21.506 225859 DEBUG nova.objects.instance [None req-ffb5ecbd-9a67-415f-a514-18a4e75e1236 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] Lazy-loading 'pci_devices' on Instance uuid f3faf996-e066-4b11-b7f3-30aeffff726e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:30:21 compute-1 systemd-machined[194361]: New machine qemu-15-instance-0000001f.
Jan 20 14:30:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:30:21.517 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[b136facf-5f1e-4233-991b-c3111542a368]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:30:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:30:21.518 140354 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap01e6deef-91 in ovnmeta-01e6deef-9aca-4d36-8215-4517982a86a3 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 20 14:30:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:30:21.519 229707 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap01e6deef-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 20 14:30:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:30:21.520 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[57b4d52d-dc1c-46f7-8166-a649eda8219b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:30:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:30:21.520 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[bbfed9dc-6deb-4416-9d4a-5e4d3551efd4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:30:21 compute-1 nova_compute[225855]: 2026-01-20 14:30:21.526 225859 DEBUG nova.virt.libvirt.driver [None req-ffb5ecbd-9a67-415f-a514-18a4e75e1236 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] [instance: f3faf996-e066-4b11-b7f3-30aeffff726e] End _get_guest_xml xml=<domain type="kvm">
Jan 20 14:30:21 compute-1 nova_compute[225855]:   <uuid>f3faf996-e066-4b11-b7f3-30aeffff726e</uuid>
Jan 20 14:30:21 compute-1 nova_compute[225855]:   <name>instance-0000001e</name>
Jan 20 14:30:21 compute-1 nova_compute[225855]:   <memory>131072</memory>
Jan 20 14:30:21 compute-1 nova_compute[225855]:   <vcpu>1</vcpu>
Jan 20 14:30:21 compute-1 nova_compute[225855]:   <metadata>
Jan 20 14:30:21 compute-1 nova_compute[225855]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 14:30:21 compute-1 nova_compute[225855]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 14:30:21 compute-1 nova_compute[225855]:       <nova:name>tempest-UpdateMultiattachVolumeNegativeTest-server-1191836092</nova:name>
Jan 20 14:30:21 compute-1 nova_compute[225855]:       <nova:creationTime>2026-01-20 14:30:20</nova:creationTime>
Jan 20 14:30:21 compute-1 nova_compute[225855]:       <nova:flavor name="m1.nano">
Jan 20 14:30:21 compute-1 nova_compute[225855]:         <nova:memory>128</nova:memory>
Jan 20 14:30:21 compute-1 nova_compute[225855]:         <nova:disk>1</nova:disk>
Jan 20 14:30:21 compute-1 nova_compute[225855]:         <nova:swap>0</nova:swap>
Jan 20 14:30:21 compute-1 nova_compute[225855]:         <nova:ephemeral>0</nova:ephemeral>
Jan 20 14:30:21 compute-1 nova_compute[225855]:         <nova:vcpus>1</nova:vcpus>
Jan 20 14:30:21 compute-1 nova_compute[225855]:       </nova:flavor>
Jan 20 14:30:21 compute-1 nova_compute[225855]:       <nova:owner>
Jan 20 14:30:21 compute-1 nova_compute[225855]:         <nova:user uuid="6a3fbc3f92a849e88cbf34d28ca17e43">tempest-UpdateMultiattachVolumeNegativeTest-859917658-project-member</nova:user>
Jan 20 14:30:21 compute-1 nova_compute[225855]:         <nova:project uuid="0cee74dd60da4a839bb5eb0ba3137edf">tempest-UpdateMultiattachVolumeNegativeTest-859917658</nova:project>
Jan 20 14:30:21 compute-1 nova_compute[225855]:       </nova:owner>
Jan 20 14:30:21 compute-1 nova_compute[225855]:       <nova:root type="image" uuid="a32b3e07-16d8-46fd-9a7b-c242c432fcf9"/>
Jan 20 14:30:21 compute-1 nova_compute[225855]:       <nova:ports>
Jan 20 14:30:21 compute-1 nova_compute[225855]:         <nova:port uuid="f65050ac-6a44-490a-b4b9-8c82c1f61630">
Jan 20 14:30:21 compute-1 nova_compute[225855]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Jan 20 14:30:21 compute-1 nova_compute[225855]:         </nova:port>
Jan 20 14:30:21 compute-1 nova_compute[225855]:       </nova:ports>
Jan 20 14:30:21 compute-1 nova_compute[225855]:     </nova:instance>
Jan 20 14:30:21 compute-1 nova_compute[225855]:   </metadata>
Jan 20 14:30:21 compute-1 nova_compute[225855]:   <sysinfo type="smbios">
Jan 20 14:30:21 compute-1 nova_compute[225855]:     <system>
Jan 20 14:30:21 compute-1 nova_compute[225855]:       <entry name="manufacturer">RDO</entry>
Jan 20 14:30:21 compute-1 nova_compute[225855]:       <entry name="product">OpenStack Compute</entry>
Jan 20 14:30:21 compute-1 nova_compute[225855]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 14:30:21 compute-1 nova_compute[225855]:       <entry name="serial">f3faf996-e066-4b11-b7f3-30aeffff726e</entry>
Jan 20 14:30:21 compute-1 nova_compute[225855]:       <entry name="uuid">f3faf996-e066-4b11-b7f3-30aeffff726e</entry>
Jan 20 14:30:21 compute-1 nova_compute[225855]:       <entry name="family">Virtual Machine</entry>
Jan 20 14:30:21 compute-1 nova_compute[225855]:     </system>
Jan 20 14:30:21 compute-1 nova_compute[225855]:   </sysinfo>
Jan 20 14:30:21 compute-1 nova_compute[225855]:   <os>
Jan 20 14:30:21 compute-1 nova_compute[225855]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 20 14:30:21 compute-1 nova_compute[225855]:     <boot dev="hd"/>
Jan 20 14:30:21 compute-1 nova_compute[225855]:     <smbios mode="sysinfo"/>
Jan 20 14:30:21 compute-1 nova_compute[225855]:   </os>
Jan 20 14:30:21 compute-1 nova_compute[225855]:   <features>
Jan 20 14:30:21 compute-1 nova_compute[225855]:     <acpi/>
Jan 20 14:30:21 compute-1 nova_compute[225855]:     <apic/>
Jan 20 14:30:21 compute-1 nova_compute[225855]:     <vmcoreinfo/>
Jan 20 14:30:21 compute-1 nova_compute[225855]:   </features>
Jan 20 14:30:21 compute-1 nova_compute[225855]:   <clock offset="utc">
Jan 20 14:30:21 compute-1 nova_compute[225855]:     <timer name="pit" tickpolicy="delay"/>
Jan 20 14:30:21 compute-1 nova_compute[225855]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 20 14:30:21 compute-1 nova_compute[225855]:     <timer name="hpet" present="no"/>
Jan 20 14:30:21 compute-1 nova_compute[225855]:   </clock>
Jan 20 14:30:21 compute-1 nova_compute[225855]:   <cpu mode="custom" match="exact">
Jan 20 14:30:21 compute-1 nova_compute[225855]:     <model>Nehalem</model>
Jan 20 14:30:21 compute-1 nova_compute[225855]:     <topology sockets="1" cores="1" threads="1"/>
Jan 20 14:30:21 compute-1 nova_compute[225855]:   </cpu>
Jan 20 14:30:21 compute-1 nova_compute[225855]:   <devices>
Jan 20 14:30:21 compute-1 nova_compute[225855]:     <disk type="network" device="disk">
Jan 20 14:30:21 compute-1 nova_compute[225855]:       <driver type="raw" cache="none"/>
Jan 20 14:30:21 compute-1 nova_compute[225855]:       <source protocol="rbd" name="vms/f3faf996-e066-4b11-b7f3-30aeffff726e_disk">
Jan 20 14:30:21 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:30:21 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:30:21 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:30:21 compute-1 nova_compute[225855]:       </source>
Jan 20 14:30:21 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 14:30:21 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:30:21 compute-1 nova_compute[225855]:       </auth>
Jan 20 14:30:21 compute-1 nova_compute[225855]:       <target dev="vda" bus="virtio"/>
Jan 20 14:30:21 compute-1 nova_compute[225855]:     </disk>
Jan 20 14:30:21 compute-1 nova_compute[225855]:     <disk type="network" device="cdrom">
Jan 20 14:30:21 compute-1 nova_compute[225855]:       <driver type="raw" cache="none"/>
Jan 20 14:30:21 compute-1 nova_compute[225855]:       <source protocol="rbd" name="vms/f3faf996-e066-4b11-b7f3-30aeffff726e_disk.config">
Jan 20 14:30:21 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:30:21 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:30:21 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:30:21 compute-1 nova_compute[225855]:       </source>
Jan 20 14:30:21 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 14:30:21 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:30:21 compute-1 nova_compute[225855]:       </auth>
Jan 20 14:30:21 compute-1 nova_compute[225855]:       <target dev="sda" bus="sata"/>
Jan 20 14:30:21 compute-1 nova_compute[225855]:     </disk>
Jan 20 14:30:21 compute-1 nova_compute[225855]:     <interface type="ethernet">
Jan 20 14:30:21 compute-1 nova_compute[225855]:       <mac address="fa:16:3e:fc:ae:50"/>
Jan 20 14:30:21 compute-1 nova_compute[225855]:       <model type="virtio"/>
Jan 20 14:30:21 compute-1 nova_compute[225855]:       <driver name="vhost" rx_queue_size="512"/>
Jan 20 14:30:21 compute-1 nova_compute[225855]:       <mtu size="1442"/>
Jan 20 14:30:21 compute-1 nova_compute[225855]:       <target dev="tapf65050ac-6a"/>
Jan 20 14:30:21 compute-1 nova_compute[225855]:     </interface>
Jan 20 14:30:21 compute-1 nova_compute[225855]:     <serial type="pty">
Jan 20 14:30:21 compute-1 nova_compute[225855]:       <log file="/var/lib/nova/instances/f3faf996-e066-4b11-b7f3-30aeffff726e/console.log" append="off"/>
Jan 20 14:30:21 compute-1 nova_compute[225855]:     </serial>
Jan 20 14:30:21 compute-1 nova_compute[225855]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 20 14:30:21 compute-1 nova_compute[225855]:     <video>
Jan 20 14:30:21 compute-1 nova_compute[225855]:       <model type="virtio"/>
Jan 20 14:30:21 compute-1 nova_compute[225855]:     </video>
Jan 20 14:30:21 compute-1 nova_compute[225855]:     <input type="tablet" bus="usb"/>
Jan 20 14:30:21 compute-1 nova_compute[225855]:     <rng model="virtio">
Jan 20 14:30:21 compute-1 nova_compute[225855]:       <backend model="random">/dev/urandom</backend>
Jan 20 14:30:21 compute-1 nova_compute[225855]:     </rng>
Jan 20 14:30:21 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root"/>
Jan 20 14:30:21 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:30:21 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:30:21 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:30:21 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:30:21 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:30:21 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:30:21 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:30:21 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:30:21 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:30:21 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:30:21 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:30:21 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:30:21 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:30:21 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:30:21 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:30:21 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:30:21 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:30:21 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:30:21 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:30:21 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:30:21 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:30:21 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:30:21 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:30:21 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:30:21 compute-1 nova_compute[225855]:     <controller type="usb" index="0"/>
Jan 20 14:30:21 compute-1 nova_compute[225855]:     <memballoon model="virtio">
Jan 20 14:30:21 compute-1 nova_compute[225855]:       <stats period="10"/>
Jan 20 14:30:21 compute-1 nova_compute[225855]:     </memballoon>
Jan 20 14:30:21 compute-1 nova_compute[225855]:   </devices>
Jan 20 14:30:21 compute-1 nova_compute[225855]: </domain>
Jan 20 14:30:21 compute-1 nova_compute[225855]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 20 14:30:21 compute-1 nova_compute[225855]: 2026-01-20 14:30:21.526 225859 DEBUG nova.compute.manager [None req-ffb5ecbd-9a67-415f-a514-18a4e75e1236 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] [instance: f3faf996-e066-4b11-b7f3-30aeffff726e] Preparing to wait for external event network-vif-plugged-f65050ac-6a44-490a-b4b9-8c82c1f61630 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 20 14:30:21 compute-1 nova_compute[225855]: 2026-01-20 14:30:21.527 225859 DEBUG oslo_concurrency.lockutils [None req-ffb5ecbd-9a67-415f-a514-18a4e75e1236 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] Acquiring lock "f3faf996-e066-4b11-b7f3-30aeffff726e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:30:21 compute-1 nova_compute[225855]: 2026-01-20 14:30:21.527 225859 DEBUG oslo_concurrency.lockutils [None req-ffb5ecbd-9a67-415f-a514-18a4e75e1236 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] Lock "f3faf996-e066-4b11-b7f3-30aeffff726e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:30:21 compute-1 nova_compute[225855]: 2026-01-20 14:30:21.527 225859 DEBUG oslo_concurrency.lockutils [None req-ffb5ecbd-9a67-415f-a514-18a4e75e1236 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] Lock "f3faf996-e066-4b11-b7f3-30aeffff726e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:30:21 compute-1 systemd[1]: Started Virtual Machine qemu-15-instance-0000001f.
Jan 20 14:30:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:30:21.533 140466 DEBUG oslo.privsep.daemon [-] privsep: reply[c5b16ded-db6c-4734-b048-897b3d8a2767]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:30:21 compute-1 nova_compute[225855]: 2026-01-20 14:30:21.534 225859 DEBUG nova.virt.libvirt.vif [None req-ffb5ecbd-9a67-415f-a514-18a4e75e1236 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T14:30:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-UpdateMultiattachVolumeNegativeTest-server-1191836092',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-updatemultiattachvolumenegativetest-server-1191836092',id=30,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBH+01n3DJe3yYfRmwifZEomZrLtaFilErLasmr7ze/p0n1d6nPaSWQOHrHfJ9ubgBCwoqlwHjFIWrKKyRcRI1f3OIubHCG4LO7UMySAzmCXBSDkLJPz6Qzoln3dTb/xrow==',key_name='tempest-keypair-696534507',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0cee74dd60da4a839bb5eb0ba3137edf',ramdisk_id='',reservation_id='r-0tyxczv3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-UpdateMultiattachVolumeNegativeTest-859917658',owner_user_name='tempest-UpdateMultiattachVolumeNegativeTest-859917658-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T14:30:15Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='6a3fbc3f92a849e88cbf34d28ca17e43',uuid=f3faf996-e066-4b11-b7f3-30aeffff726e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f65050ac-6a44-490a-b4b9-8c82c1f61630", "address": "fa:16:3e:fc:ae:50", "network": {"id": "02f86d1d-5cad-49c5-9004-3de3e4739ad5", "bridge": "br-int", "label": "tempest-UpdateMultiattachVolumeNegativeTest-889517255-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": 
{}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0cee74dd60da4a839bb5eb0ba3137edf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf65050ac-6a", "ovs_interfaceid": "f65050ac-6a44-490a-b4b9-8c82c1f61630", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 20 14:30:21 compute-1 nova_compute[225855]: 2026-01-20 14:30:21.535 225859 DEBUG nova.network.os_vif_util [None req-ffb5ecbd-9a67-415f-a514-18a4e75e1236 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] Converting VIF {"id": "f65050ac-6a44-490a-b4b9-8c82c1f61630", "address": "fa:16:3e:fc:ae:50", "network": {"id": "02f86d1d-5cad-49c5-9004-3de3e4739ad5", "bridge": "br-int", "label": "tempest-UpdateMultiattachVolumeNegativeTest-889517255-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0cee74dd60da4a839bb5eb0ba3137edf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf65050ac-6a", "ovs_interfaceid": "f65050ac-6a44-490a-b4b9-8c82c1f61630", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:30:21 compute-1 nova_compute[225855]: 2026-01-20 14:30:21.535 225859 DEBUG nova.network.os_vif_util [None req-ffb5ecbd-9a67-415f-a514-18a4e75e1236 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fc:ae:50,bridge_name='br-int',has_traffic_filtering=True,id=f65050ac-6a44-490a-b4b9-8c82c1f61630,network=Network(02f86d1d-5cad-49c5-9004-3de3e4739ad5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf65050ac-6a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:30:21 compute-1 nova_compute[225855]: 2026-01-20 14:30:21.536 225859 DEBUG os_vif [None req-ffb5ecbd-9a67-415f-a514-18a4e75e1236 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:fc:ae:50,bridge_name='br-int',has_traffic_filtering=True,id=f65050ac-6a44-490a-b4b9-8c82c1f61630,network=Network(02f86d1d-5cad-49c5-9004-3de3e4739ad5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf65050ac-6a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 20 14:30:21 compute-1 nova_compute[225855]: 2026-01-20 14:30:21.536 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:30:21 compute-1 nova_compute[225855]: 2026-01-20 14:30:21.537 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:30:21 compute-1 nova_compute[225855]: 2026-01-20 14:30:21.538 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:30:21 compute-1 nova_compute[225855]: 2026-01-20 14:30:21.542 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:30:21 compute-1 nova_compute[225855]: 2026-01-20 14:30:21.542 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf65050ac-6a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:30:21 compute-1 nova_compute[225855]: 2026-01-20 14:30:21.543 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf65050ac-6a, col_values=(('external_ids', {'iface-id': 'f65050ac-6a44-490a-b4b9-8c82c1f61630', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:fc:ae:50', 'vm-uuid': 'f3faf996-e066-4b11-b7f3-30aeffff726e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:30:21 compute-1 NetworkManager[49104]: <info>  [1768919421.5457] manager: (tapf65050ac-6a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/53)
Jan 20 14:30:21 compute-1 nova_compute[225855]: 2026-01-20 14:30:21.550 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:30:21 compute-1 nova_compute[225855]: 2026-01-20 14:30:21.552 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 14:30:21 compute-1 systemd-udevd[240284]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 14:30:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:30:21.558 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[f6ea8044-e2bd-4166-b940-189058460999]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:30:21 compute-1 NetworkManager[49104]: <info>  [1768919421.5682] device (tape848e00f-d5): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 20 14:30:21 compute-1 NetworkManager[49104]: <info>  [1768919421.5688] device (tape848e00f-d5): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 20 14:30:21 compute-1 nova_compute[225855]: 2026-01-20 14:30:21.579 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:30:21 compute-1 nova_compute[225855]: 2026-01-20 14:30:21.582 225859 DEBUG oslo_concurrency.lockutils [None req-c9b86ee8-0957-4e69-a1fc-4cc0a2f42a3b d4b36d8e19cb4f529d2185f573f5072a 2074a786307f4427bbbbc1103d4a9305 - - default default] Acquiring lock "refresh_cache-d95ca690-20e1-4b0c-919b-d64c9af25eba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:30:21 compute-1 nova_compute[225855]: 2026-01-20 14:30:21.582 225859 DEBUG oslo_concurrency.lockutils [None req-c9b86ee8-0957-4e69-a1fc-4cc0a2f42a3b d4b36d8e19cb4f529d2185f573f5072a 2074a786307f4427bbbbc1103d4a9305 - - default default] Acquired lock "refresh_cache-d95ca690-20e1-4b0c-919b-d64c9af25eba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:30:21 compute-1 nova_compute[225855]: 2026-01-20 14:30:21.582 225859 DEBUG nova.network.neutron [None req-c9b86ee8-0957-4e69-a1fc-4cc0a2f42a3b d4b36d8e19cb4f529d2185f573f5072a 2074a786307f4427bbbbc1103d4a9305 - - default default] [instance: d95ca690-20e1-4b0c-919b-d64c9af25eba] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 14:30:21 compute-1 nova_compute[225855]: 2026-01-20 14:30:21.585 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:30:21 compute-1 nova_compute[225855]: 2026-01-20 14:30:21.586 225859 INFO os_vif [None req-ffb5ecbd-9a67-415f-a514-18a4e75e1236 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:fc:ae:50,bridge_name='br-int',has_traffic_filtering=True,id=f65050ac-6a44-490a-b4b9-8c82c1f61630,network=Network(02f86d1d-5cad-49c5-9004-3de3e4739ad5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf65050ac-6a')
Jan 20 14:30:21 compute-1 ovn_controller[130490]: 2026-01-20T14:30:21Z|00111|binding|INFO|Setting lport e848e00f-d594-4d70-9026-cd650417bf47 ovn-installed in OVS
Jan 20 14:30:21 compute-1 ovn_controller[130490]: 2026-01-20T14:30:21Z|00112|binding|INFO|Setting lport e848e00f-d594-4d70-9026-cd650417bf47 up in Southbound
Jan 20 14:30:21 compute-1 nova_compute[225855]: 2026-01-20 14:30:21.588 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:30:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:30:21.595 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[d3c9050b-3c46-4b63-9ed1-9e4aa4050c0b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:30:21 compute-1 NetworkManager[49104]: <info>  [1768919421.6028] manager: (tap01e6deef-90): new Veth device (/org/freedesktop/NetworkManager/Devices/54)
Jan 20 14:30:21 compute-1 systemd-udevd[240292]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 14:30:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:30:21.603 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[024d256f-5dd2-46d9-bbf4-935c23bcd510]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:30:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:30:21.630 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[9a0981e1-161d-4071-9157-1d570366d8c0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:30:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:30:21.633 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[8b0adaff-a880-47c6-98b1-c475d6ae0a88]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:30:21 compute-1 NetworkManager[49104]: <info>  [1768919421.6524] device (tap01e6deef-90): carrier: link connected
Jan 20 14:30:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:30:21.658 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[df6c5693-4d8b-4179-bbcb-b9c64bd33c3b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:30:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:30:21.683 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[6f346454-e322-4888-ac0e-76ae184fb6ee]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap01e6deef-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:47:81:8c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 31], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 446058, 'reachable_time': 15941, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 240320, 'error': None, 'target': 'ovnmeta-01e6deef-9aca-4d36-8215-4517982a86a3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:30:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:30:21.697 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[73da090c-f7ff-4018-95ba-485308e62d9a]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe47:818c'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 446058, 'tstamp': 446058}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 240321, 'error': None, 'target': 'ovnmeta-01e6deef-9aca-4d36-8215-4517982a86a3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:30:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:30:21.711 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[4210faf5-171e-4123-9c2c-134508ff7036]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap01e6deef-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:47:81:8c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 31], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 446058, 'reachable_time': 15941, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 240322, 'error': None, 'target': 'ovnmeta-01e6deef-9aca-4d36-8215-4517982a86a3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:30:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:30:21.740 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[86694319-858e-4ffd-bfa3-f550188d29f1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:30:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:30:21.791 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[19f4afcd-3694-49b2-9b50-fa4c2296cc06]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:30:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:30:21.792 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap01e6deef-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:30:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:30:21.792 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:30:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:30:21.793 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap01e6deef-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:30:21 compute-1 NetworkManager[49104]: <info>  [1768919421.7952] manager: (tap01e6deef-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/55)
Jan 20 14:30:21 compute-1 kernel: tap01e6deef-90: entered promiscuous mode
Jan 20 14:30:21 compute-1 nova_compute[225855]: 2026-01-20 14:30:21.794 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:30:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:30:21.800 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap01e6deef-90, col_values=(('external_ids', {'iface-id': 'b3bfa880-f76c-4bab-98ca-24729b0d77e7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:30:21 compute-1 nova_compute[225855]: 2026-01-20 14:30:21.800 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:30:21 compute-1 ovn_controller[130490]: 2026-01-20T14:30:21Z|00113|binding|INFO|Releasing lport b3bfa880-f76c-4bab-98ca-24729b0d77e7 from this chassis (sb_readonly=0)
Jan 20 14:30:21 compute-1 nova_compute[225855]: 2026-01-20 14:30:21.802 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:30:21 compute-1 nova_compute[225855]: 2026-01-20 14:30:21.805 225859 DEBUG nova.virt.libvirt.driver [None req-ffb5ecbd-9a67-415f-a514-18a4e75e1236 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:30:21 compute-1 nova_compute[225855]: 2026-01-20 14:30:21.805 225859 DEBUG nova.virt.libvirt.driver [None req-ffb5ecbd-9a67-415f-a514-18a4e75e1236 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:30:21 compute-1 nova_compute[225855]: 2026-01-20 14:30:21.805 225859 DEBUG nova.virt.libvirt.driver [None req-ffb5ecbd-9a67-415f-a514-18a4e75e1236 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] No VIF found with MAC fa:16:3e:fc:ae:50, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 20 14:30:21 compute-1 nova_compute[225855]: 2026-01-20 14:30:21.805 225859 INFO nova.virt.libvirt.driver [None req-ffb5ecbd-9a67-415f-a514-18a4e75e1236 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] [instance: f3faf996-e066-4b11-b7f3-30aeffff726e] Using config drive
Jan 20 14:30:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:30:21.827 140354 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/01e6deef-9aca-4d36-8215-4517982a86a3.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/01e6deef-9aca-4d36-8215-4517982a86a3.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 20 14:30:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:30:21.828 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[ff0cd508-fcb4-455d-9da9-521ba8d6e7c3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:30:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:30:21.829 140354 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 20 14:30:21 compute-1 ovn_metadata_agent[140349]: global
Jan 20 14:30:21 compute-1 ovn_metadata_agent[140349]:     log         /dev/log local0 debug
Jan 20 14:30:21 compute-1 ovn_metadata_agent[140349]:     log-tag     haproxy-metadata-proxy-01e6deef-9aca-4d36-8215-4517982a86a3
Jan 20 14:30:21 compute-1 ovn_metadata_agent[140349]:     user        root
Jan 20 14:30:21 compute-1 ovn_metadata_agent[140349]:     group       root
Jan 20 14:30:21 compute-1 ovn_metadata_agent[140349]:     maxconn     1024
Jan 20 14:30:21 compute-1 ovn_metadata_agent[140349]:     pidfile     /var/lib/neutron/external/pids/01e6deef-9aca-4d36-8215-4517982a86a3.pid.haproxy
Jan 20 14:30:21 compute-1 ovn_metadata_agent[140349]:     daemon
Jan 20 14:30:21 compute-1 ovn_metadata_agent[140349]: 
Jan 20 14:30:21 compute-1 ovn_metadata_agent[140349]: defaults
Jan 20 14:30:21 compute-1 ovn_metadata_agent[140349]:     log global
Jan 20 14:30:21 compute-1 ovn_metadata_agent[140349]:     mode http
Jan 20 14:30:21 compute-1 ovn_metadata_agent[140349]:     option httplog
Jan 20 14:30:21 compute-1 ovn_metadata_agent[140349]:     option dontlognull
Jan 20 14:30:21 compute-1 ovn_metadata_agent[140349]:     option http-server-close
Jan 20 14:30:21 compute-1 ovn_metadata_agent[140349]:     option forwardfor
Jan 20 14:30:21 compute-1 ovn_metadata_agent[140349]:     retries                 3
Jan 20 14:30:21 compute-1 ovn_metadata_agent[140349]:     timeout http-request    30s
Jan 20 14:30:21 compute-1 ovn_metadata_agent[140349]:     timeout connect         30s
Jan 20 14:30:21 compute-1 ovn_metadata_agent[140349]:     timeout client          32s
Jan 20 14:30:21 compute-1 ovn_metadata_agent[140349]:     timeout server          32s
Jan 20 14:30:21 compute-1 ovn_metadata_agent[140349]:     timeout http-keep-alive 30s
Jan 20 14:30:21 compute-1 ovn_metadata_agent[140349]: 
Jan 20 14:30:21 compute-1 ovn_metadata_agent[140349]: 
Jan 20 14:30:21 compute-1 ovn_metadata_agent[140349]: listen listener
Jan 20 14:30:21 compute-1 ovn_metadata_agent[140349]:     bind 169.254.169.254:80
Jan 20 14:30:21 compute-1 ovn_metadata_agent[140349]:     server metadata /var/lib/neutron/metadata_proxy
Jan 20 14:30:21 compute-1 ovn_metadata_agent[140349]:     http-request add-header X-OVN-Network-ID 01e6deef-9aca-4d36-8215-4517982a86a3
Jan 20 14:30:21 compute-1 ovn_metadata_agent[140349]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 20 14:30:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:30:21.830 140354 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-01e6deef-9aca-4d36-8215-4517982a86a3', 'env', 'PROCESS_TAG=haproxy-01e6deef-9aca-4d36-8215-4517982a86a3', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/01e6deef-9aca-4d36-8215-4517982a86a3.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 20 14:30:21 compute-1 nova_compute[225855]: 2026-01-20 14:30:21.841 225859 DEBUG nova.storage.rbd_utils [None req-ffb5ecbd-9a67-415f-a514-18a4e75e1236 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] rbd image f3faf996-e066-4b11-b7f3-30aeffff726e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:30:21 compute-1 nova_compute[225855]: 2026-01-20 14:30:21.847 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:30:21 compute-1 nova_compute[225855]: 2026-01-20 14:30:21.963 225859 DEBUG nova.network.neutron [None req-c9b86ee8-0957-4e69-a1fc-4cc0a2f42a3b d4b36d8e19cb4f529d2185f573f5072a 2074a786307f4427bbbbc1103d4a9305 - - default default] [instance: d95ca690-20e1-4b0c-919b-d64c9af25eba] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 20 14:30:21 compute-1 nova_compute[225855]: 2026-01-20 14:30:21.968 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768919421.968317, d08682f8-72ef-462c-b4b7-044cf16fc193 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:30:21 compute-1 nova_compute[225855]: 2026-01-20 14:30:21.969 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: d08682f8-72ef-462c-b4b7-044cf16fc193] VM Started (Lifecycle Event)
Jan 20 14:30:21 compute-1 nova_compute[225855]: 2026-01-20 14:30:21.994 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: d08682f8-72ef-462c-b4b7-044cf16fc193] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:30:22 compute-1 nova_compute[225855]: 2026-01-20 14:30:22.000 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768919421.9706268, d08682f8-72ef-462c-b4b7-044cf16fc193 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:30:22 compute-1 nova_compute[225855]: 2026-01-20 14:30:22.001 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: d08682f8-72ef-462c-b4b7-044cf16fc193] VM Paused (Lifecycle Event)
Jan 20 14:30:22 compute-1 nova_compute[225855]: 2026-01-20 14:30:22.022 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: d08682f8-72ef-462c-b4b7-044cf16fc193] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:30:22 compute-1 nova_compute[225855]: 2026-01-20 14:30:22.026 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: d08682f8-72ef-462c-b4b7-044cf16fc193] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:30:22 compute-1 nova_compute[225855]: 2026-01-20 14:30:22.042 225859 DEBUG nova.network.neutron [req-eb3753cc-2879-474d-9930-bd6934878356 req-9cfeba9d-76b7-48d9-8fbf-17ec4c1531b0 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f3faf996-e066-4b11-b7f3-30aeffff726e] Updated VIF entry in instance network info cache for port f65050ac-6a44-490a-b4b9-8c82c1f61630. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 14:30:22 compute-1 nova_compute[225855]: 2026-01-20 14:30:22.042 225859 DEBUG nova.network.neutron [req-eb3753cc-2879-474d-9930-bd6934878356 req-9cfeba9d-76b7-48d9-8fbf-17ec4c1531b0 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f3faf996-e066-4b11-b7f3-30aeffff726e] Updating instance_info_cache with network_info: [{"id": "f65050ac-6a44-490a-b4b9-8c82c1f61630", "address": "fa:16:3e:fc:ae:50", "network": {"id": "02f86d1d-5cad-49c5-9004-3de3e4739ad5", "bridge": "br-int", "label": "tempest-UpdateMultiattachVolumeNegativeTest-889517255-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0cee74dd60da4a839bb5eb0ba3137edf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf65050ac-6a", "ovs_interfaceid": "f65050ac-6a44-490a-b4b9-8c82c1f61630", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:30:22 compute-1 nova_compute[225855]: 2026-01-20 14:30:22.079 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: d08682f8-72ef-462c-b4b7-044cf16fc193] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 14:30:22 compute-1 nova_compute[225855]: 2026-01-20 14:30:22.085 225859 DEBUG oslo_concurrency.lockutils [req-eb3753cc-2879-474d-9930-bd6934878356 req-9cfeba9d-76b7-48d9-8fbf-17ec4c1531b0 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-f3faf996-e066-4b11-b7f3-30aeffff726e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:30:22 compute-1 nova_compute[225855]: 2026-01-20 14:30:22.269 225859 DEBUG nova.compute.manager [req-b2519775-a15d-4431-8676-ad34523086ce req-5a8dcbbd-835c-4ea2-86ae-07ce63dfd60f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d08682f8-72ef-462c-b4b7-044cf16fc193] Received event network-vif-plugged-e848e00f-d594-4d70-9026-cd650417bf47 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:30:22 compute-1 nova_compute[225855]: 2026-01-20 14:30:22.270 225859 DEBUG oslo_concurrency.lockutils [req-b2519775-a15d-4431-8676-ad34523086ce req-5a8dcbbd-835c-4ea2-86ae-07ce63dfd60f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "d08682f8-72ef-462c-b4b7-044cf16fc193-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:30:22 compute-1 nova_compute[225855]: 2026-01-20 14:30:22.270 225859 DEBUG oslo_concurrency.lockutils [req-b2519775-a15d-4431-8676-ad34523086ce req-5a8dcbbd-835c-4ea2-86ae-07ce63dfd60f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "d08682f8-72ef-462c-b4b7-044cf16fc193-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:30:22 compute-1 nova_compute[225855]: 2026-01-20 14:30:22.271 225859 DEBUG oslo_concurrency.lockutils [req-b2519775-a15d-4431-8676-ad34523086ce req-5a8dcbbd-835c-4ea2-86ae-07ce63dfd60f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "d08682f8-72ef-462c-b4b7-044cf16fc193-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:30:22 compute-1 nova_compute[225855]: 2026-01-20 14:30:22.271 225859 DEBUG nova.compute.manager [req-b2519775-a15d-4431-8676-ad34523086ce req-5a8dcbbd-835c-4ea2-86ae-07ce63dfd60f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d08682f8-72ef-462c-b4b7-044cf16fc193] Processing event network-vif-plugged-e848e00f-d594-4d70-9026-cd650417bf47 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 20 14:30:22 compute-1 nova_compute[225855]: 2026-01-20 14:30:22.273 225859 DEBUG nova.compute.manager [None req-42bf5750-f887-41a7-8e0f-ec5c640634de 0cec872a00f742d78563d6d16fc545cb 78f151250c04467bb4f6a229dda16fc5 - - default default] [instance: d08682f8-72ef-462c-b4b7-044cf16fc193] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 20 14:30:22 compute-1 nova_compute[225855]: 2026-01-20 14:30:22.276 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768919422.2761977, d08682f8-72ef-462c-b4b7-044cf16fc193 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:30:22 compute-1 nova_compute[225855]: 2026-01-20 14:30:22.277 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: d08682f8-72ef-462c-b4b7-044cf16fc193] VM Resumed (Lifecycle Event)
Jan 20 14:30:22 compute-1 nova_compute[225855]: 2026-01-20 14:30:22.279 225859 DEBUG nova.virt.libvirt.driver [None req-42bf5750-f887-41a7-8e0f-ec5c640634de 0cec872a00f742d78563d6d16fc545cb 78f151250c04467bb4f6a229dda16fc5 - - default default] [instance: d08682f8-72ef-462c-b4b7-044cf16fc193] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 20 14:30:22 compute-1 nova_compute[225855]: 2026-01-20 14:30:22.285 225859 INFO nova.virt.libvirt.driver [-] [instance: d08682f8-72ef-462c-b4b7-044cf16fc193] Instance spawned successfully.
Jan 20 14:30:22 compute-1 nova_compute[225855]: 2026-01-20 14:30:22.286 225859 DEBUG nova.virt.libvirt.driver [None req-42bf5750-f887-41a7-8e0f-ec5c640634de 0cec872a00f742d78563d6d16fc545cb 78f151250c04467bb4f6a229dda16fc5 - - default default] [instance: d08682f8-72ef-462c-b4b7-044cf16fc193] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 20 14:30:22 compute-1 podman[240420]: 2026-01-20 14:30:22.197938504 +0000 UTC m=+0.025662653 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 20 14:30:22 compute-1 nova_compute[225855]: 2026-01-20 14:30:22.299 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: d08682f8-72ef-462c-b4b7-044cf16fc193] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:30:22 compute-1 nova_compute[225855]: 2026-01-20 14:30:22.304 225859 INFO nova.virt.libvirt.driver [None req-ffb5ecbd-9a67-415f-a514-18a4e75e1236 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] [instance: f3faf996-e066-4b11-b7f3-30aeffff726e] Creating config drive at /var/lib/nova/instances/f3faf996-e066-4b11-b7f3-30aeffff726e/disk.config
Jan 20 14:30:22 compute-1 nova_compute[225855]: 2026-01-20 14:30:22.316 225859 DEBUG oslo_concurrency.processutils [None req-ffb5ecbd-9a67-415f-a514-18a4e75e1236 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f3faf996-e066-4b11-b7f3-30aeffff726e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp291v285o execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:30:22 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:30:22 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/2267082636' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:30:22 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/891766828' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:30:22 compute-1 ceph-mon[81775]: pgmap v1190: 321 pgs: 321 active+clean; 582 MiB data, 652 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 7.5 MiB/s wr, 265 op/s
Jan 20 14:30:22 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/3075205744' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:30:22 compute-1 nova_compute[225855]: 2026-01-20 14:30:22.359 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: d08682f8-72ef-462c-b4b7-044cf16fc193] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:30:22 compute-1 nova_compute[225855]: 2026-01-20 14:30:22.362 225859 DEBUG nova.virt.libvirt.driver [None req-42bf5750-f887-41a7-8e0f-ec5c640634de 0cec872a00f742d78563d6d16fc545cb 78f151250c04467bb4f6a229dda16fc5 - - default default] [instance: d08682f8-72ef-462c-b4b7-044cf16fc193] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:30:22 compute-1 nova_compute[225855]: 2026-01-20 14:30:22.363 225859 DEBUG nova.virt.libvirt.driver [None req-42bf5750-f887-41a7-8e0f-ec5c640634de 0cec872a00f742d78563d6d16fc545cb 78f151250c04467bb4f6a229dda16fc5 - - default default] [instance: d08682f8-72ef-462c-b4b7-044cf16fc193] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:30:22 compute-1 nova_compute[225855]: 2026-01-20 14:30:22.363 225859 DEBUG nova.virt.libvirt.driver [None req-42bf5750-f887-41a7-8e0f-ec5c640634de 0cec872a00f742d78563d6d16fc545cb 78f151250c04467bb4f6a229dda16fc5 - - default default] [instance: d08682f8-72ef-462c-b4b7-044cf16fc193] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:30:22 compute-1 nova_compute[225855]: 2026-01-20 14:30:22.363 225859 DEBUG nova.virt.libvirt.driver [None req-42bf5750-f887-41a7-8e0f-ec5c640634de 0cec872a00f742d78563d6d16fc545cb 78f151250c04467bb4f6a229dda16fc5 - - default default] [instance: d08682f8-72ef-462c-b4b7-044cf16fc193] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:30:22 compute-1 nova_compute[225855]: 2026-01-20 14:30:22.364 225859 DEBUG nova.virt.libvirt.driver [None req-42bf5750-f887-41a7-8e0f-ec5c640634de 0cec872a00f742d78563d6d16fc545cb 78f151250c04467bb4f6a229dda16fc5 - - default default] [instance: d08682f8-72ef-462c-b4b7-044cf16fc193] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:30:22 compute-1 nova_compute[225855]: 2026-01-20 14:30:22.364 225859 DEBUG nova.virt.libvirt.driver [None req-42bf5750-f887-41a7-8e0f-ec5c640634de 0cec872a00f742d78563d6d16fc545cb 78f151250c04467bb4f6a229dda16fc5 - - default default] [instance: d08682f8-72ef-462c-b4b7-044cf16fc193] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:30:22 compute-1 nova_compute[225855]: 2026-01-20 14:30:22.389 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: d08682f8-72ef-462c-b4b7-044cf16fc193] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 14:30:22 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:30:22 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:30:22 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:30:22.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:30:22 compute-1 podman[240420]: 2026-01-20 14:30:22.408341899 +0000 UTC m=+0.236066028 container create 5d6a44e3e459bb355cdf6f9a47a3cf4c054c368c8501bc071f9e3e640145abc6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-01e6deef-9aca-4d36-8215-4517982a86a3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 20 14:30:22 compute-1 nova_compute[225855]: 2026-01-20 14:30:22.417 225859 INFO nova.compute.manager [None req-42bf5750-f887-41a7-8e0f-ec5c640634de 0cec872a00f742d78563d6d16fc545cb 78f151250c04467bb4f6a229dda16fc5 - - default default] [instance: d08682f8-72ef-462c-b4b7-044cf16fc193] Took 7.43 seconds to spawn the instance on the hypervisor.
Jan 20 14:30:22 compute-1 nova_compute[225855]: 2026-01-20 14:30:22.418 225859 DEBUG nova.compute.manager [None req-42bf5750-f887-41a7-8e0f-ec5c640634de 0cec872a00f742d78563d6d16fc545cb 78f151250c04467bb4f6a229dda16fc5 - - default default] [instance: d08682f8-72ef-462c-b4b7-044cf16fc193] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:30:22 compute-1 systemd[1]: Started libpod-conmon-5d6a44e3e459bb355cdf6f9a47a3cf4c054c368c8501bc071f9e3e640145abc6.scope.
Jan 20 14:30:22 compute-1 nova_compute[225855]: 2026-01-20 14:30:22.462 225859 DEBUG nova.network.neutron [None req-c9b86ee8-0957-4e69-a1fc-4cc0a2f42a3b d4b36d8e19cb4f529d2185f573f5072a 2074a786307f4427bbbbc1103d4a9305 - - default default] [instance: d95ca690-20e1-4b0c-919b-d64c9af25eba] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:30:22 compute-1 nova_compute[225855]: 2026-01-20 14:30:22.464 225859 DEBUG oslo_concurrency.processutils [None req-ffb5ecbd-9a67-415f-a514-18a4e75e1236 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f3faf996-e066-4b11-b7f3-30aeffff726e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp291v285o" returned: 0 in 0.148s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:30:22 compute-1 systemd[1]: Started libcrun container.
Jan 20 14:30:22 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c6a841c35edcc3f8a6f139ec1d95e7afdb44a50568f11f14c28989dc0834ec4/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 20 14:30:22 compute-1 nova_compute[225855]: 2026-01-20 14:30:22.494 225859 DEBUG nova.storage.rbd_utils [None req-ffb5ecbd-9a67-415f-a514-18a4e75e1236 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] rbd image f3faf996-e066-4b11-b7f3-30aeffff726e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:30:22 compute-1 podman[240420]: 2026-01-20 14:30:22.497962199 +0000 UTC m=+0.325686348 container init 5d6a44e3e459bb355cdf6f9a47a3cf4c054c368c8501bc071f9e3e640145abc6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-01e6deef-9aca-4d36-8215-4517982a86a3, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2)
Jan 20 14:30:22 compute-1 podman[240420]: 2026-01-20 14:30:22.503977368 +0000 UTC m=+0.331701497 container start 5d6a44e3e459bb355cdf6f9a47a3cf4c054c368c8501bc071f9e3e640145abc6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-01e6deef-9aca-4d36-8215-4517982a86a3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:30:22 compute-1 nova_compute[225855]: 2026-01-20 14:30:22.505 225859 DEBUG oslo_concurrency.processutils [None req-ffb5ecbd-9a67-415f-a514-18a4e75e1236 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/f3faf996-e066-4b11-b7f3-30aeffff726e/disk.config f3faf996-e066-4b11-b7f3-30aeffff726e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:30:22 compute-1 nova_compute[225855]: 2026-01-20 14:30:22.527 225859 DEBUG oslo_concurrency.lockutils [None req-c9b86ee8-0957-4e69-a1fc-4cc0a2f42a3b d4b36d8e19cb4f529d2185f573f5072a 2074a786307f4427bbbbc1103d4a9305 - - default default] Releasing lock "refresh_cache-d95ca690-20e1-4b0c-919b-d64c9af25eba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:30:22 compute-1 neutron-haproxy-ovnmeta-01e6deef-9aca-4d36-8215-4517982a86a3[240438]: [NOTICE]   (240460) : New worker (240463) forked
Jan 20 14:30:22 compute-1 neutron-haproxy-ovnmeta-01e6deef-9aca-4d36-8215-4517982a86a3[240438]: [NOTICE]   (240460) : Loading success.
Jan 20 14:30:22 compute-1 nova_compute[225855]: 2026-01-20 14:30:22.539 225859 INFO nova.compute.manager [None req-42bf5750-f887-41a7-8e0f-ec5c640634de 0cec872a00f742d78563d6d16fc545cb 78f151250c04467bb4f6a229dda16fc5 - - default default] [instance: d08682f8-72ef-462c-b4b7-044cf16fc193] Took 8.97 seconds to build instance.
Jan 20 14:30:22 compute-1 nova_compute[225855]: 2026-01-20 14:30:22.561 225859 DEBUG oslo_concurrency.lockutils [None req-42bf5750-f887-41a7-8e0f-ec5c640634de 0cec872a00f742d78563d6d16fc545cb 78f151250c04467bb4f6a229dda16fc5 - - default default] Lock "d08682f8-72ef-462c-b4b7-044cf16fc193" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.472s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:30:22 compute-1 nova_compute[225855]: 2026-01-20 14:30:22.615 225859 DEBUG nova.virt.libvirt.driver [None req-c9b86ee8-0957-4e69-a1fc-4cc0a2f42a3b d4b36d8e19cb4f529d2185f573f5072a 2074a786307f4427bbbbc1103d4a9305 - - default default] [instance: d95ca690-20e1-4b0c-919b-d64c9af25eba] Starting migrate_disk_and_power_off migrate_disk_and_power_off /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11511
Jan 20 14:30:22 compute-1 nova_compute[225855]: 2026-01-20 14:30:22.615 225859 DEBUG nova.virt.libvirt.volume.remotefs [None req-c9b86ee8-0957-4e69-a1fc-4cc0a2f42a3b d4b36d8e19cb4f529d2185f573f5072a 2074a786307f4427bbbbc1103d4a9305 - - default default] Creating file /var/lib/nova/instances/d95ca690-20e1-4b0c-919b-d64c9af25eba/35613d1c57984e108f84e93a2c7361cd.tmp on remote host 192.168.122.100 create_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:79
Jan 20 14:30:22 compute-1 nova_compute[225855]: 2026-01-20 14:30:22.615 225859 DEBUG oslo_concurrency.processutils [None req-c9b86ee8-0957-4e69-a1fc-4cc0a2f42a3b d4b36d8e19cb4f529d2185f573f5072a 2074a786307f4427bbbbc1103d4a9305 - - default default] Running cmd (subprocess): ssh -o BatchMode=yes 192.168.122.100 touch /var/lib/nova/instances/d95ca690-20e1-4b0c-919b-d64c9af25eba/35613d1c57984e108f84e93a2c7361cd.tmp execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:30:22 compute-1 nova_compute[225855]: 2026-01-20 14:30:22.797 225859 DEBUG oslo_concurrency.processutils [None req-ffb5ecbd-9a67-415f-a514-18a4e75e1236 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/f3faf996-e066-4b11-b7f3-30aeffff726e/disk.config f3faf996-e066-4b11-b7f3-30aeffff726e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.292s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:30:22 compute-1 nova_compute[225855]: 2026-01-20 14:30:22.799 225859 INFO nova.virt.libvirt.driver [None req-ffb5ecbd-9a67-415f-a514-18a4e75e1236 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] [instance: f3faf996-e066-4b11-b7f3-30aeffff726e] Deleting local config drive /var/lib/nova/instances/f3faf996-e066-4b11-b7f3-30aeffff726e/disk.config because it was imported into RBD.
Jan 20 14:30:22 compute-1 kernel: tapf65050ac-6a: entered promiscuous mode
Jan 20 14:30:22 compute-1 NetworkManager[49104]: <info>  [1768919422.8482] manager: (tapf65050ac-6a): new Tun device (/org/freedesktop/NetworkManager/Devices/56)
Jan 20 14:30:22 compute-1 nova_compute[225855]: 2026-01-20 14:30:22.849 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:30:22 compute-1 ovn_controller[130490]: 2026-01-20T14:30:22Z|00114|binding|INFO|Claiming lport f65050ac-6a44-490a-b4b9-8c82c1f61630 for this chassis.
Jan 20 14:30:22 compute-1 ovn_controller[130490]: 2026-01-20T14:30:22Z|00115|binding|INFO|f65050ac-6a44-490a-b4b9-8c82c1f61630: Claiming fa:16:3e:fc:ae:50 10.100.0.8
Jan 20 14:30:22 compute-1 nova_compute[225855]: 2026-01-20 14:30:22.858 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:30:22 compute-1 NetworkManager[49104]: <info>  [1768919422.8612] device (tapf65050ac-6a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 20 14:30:22 compute-1 NetworkManager[49104]: <info>  [1768919422.8618] device (tapf65050ac-6a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 20 14:30:22 compute-1 NetworkManager[49104]: <info>  [1768919422.8702] manager: (patch-provnet-b62c391b-f7a3-4a38-a0df-72ac0383ca74-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/57)
Jan 20 14:30:22 compute-1 NetworkManager[49104]: <info>  [1768919422.8707] manager: (patch-br-int-to-provnet-b62c391b-f7a3-4a38-a0df-72ac0383ca74): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/58)
Jan 20 14:30:22 compute-1 nova_compute[225855]: 2026-01-20 14:30:22.870 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:30:22 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:30:22.876 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fc:ae:50 10.100.0.8'], port_security=['fa:16:3e:fc:ae:50 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'f3faf996-e066-4b11-b7f3-30aeffff726e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-02f86d1d-5cad-49c5-9004-3de3e4739ad5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0cee74dd60da4a839bb5eb0ba3137edf', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'e08f10e3-3a95-4e33-b03d-21860ea0dc91', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1f4fb07a-2698-4a11-a9e3-5a66d678d9d5, chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], logical_port=f65050ac-6a44-490a-b4b9-8c82c1f61630) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:30:22 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:30:22.877 140354 INFO neutron.agent.ovn.metadata.agent [-] Port f65050ac-6a44-490a-b4b9-8c82c1f61630 in datapath 02f86d1d-5cad-49c5-9004-3de3e4739ad5 bound to our chassis
Jan 20 14:30:22 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:30:22.878 140354 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 02f86d1d-5cad-49c5-9004-3de3e4739ad5
Jan 20 14:30:22 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:30:22.890 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[07d84e0c-5f30-44c1-a901-b590069c115d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:30:22 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:30:22.892 140354 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap02f86d1d-51 in ovnmeta-02f86d1d-5cad-49c5-9004-3de3e4739ad5 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 20 14:30:22 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:30:22.894 229707 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap02f86d1d-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 20 14:30:22 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:30:22.894 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[06ffd71e-a3a7-4ca8-b17f-ed815d8b429e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:30:22 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:30:22.894 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[10cf60b8-fdba-484a-a6aa-06a44107e5dc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:30:22 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:30:22.906 140466 DEBUG oslo.privsep.daemon [-] privsep: reply[5d1958c6-da89-4dca-8739-164d25649d9d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:30:22 compute-1 systemd-machined[194361]: New machine qemu-16-instance-0000001e.
Jan 20 14:30:22 compute-1 systemd[1]: Started Virtual Machine qemu-16-instance-0000001e.
Jan 20 14:30:22 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:30:22.931 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[f347651b-ccca-4606-adfa-381baf9ac006]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:30:22 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:30:22.956 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[e337afdc-c1dd-4aeb-ba92-16c2130442e2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:30:22 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:30:22.966 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[730109c1-a20e-43e5-ade3-a5e3d9357c00]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:30:22 compute-1 NetworkManager[49104]: <info>  [1768919422.9675] manager: (tap02f86d1d-50): new Veth device (/org/freedesktop/NetworkManager/Devices/59)
Jan 20 14:30:22 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:30:22.994 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[c1c8eb4e-8e9b-4f0f-82b5-f74176e233a0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:30:23 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:30:23.006 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[16ec4420-664b-4790-8bc5-6ae3e4b86fd6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:30:23 compute-1 nova_compute[225855]: 2026-01-20 14:30:23.025 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:30:23 compute-1 NetworkManager[49104]: <info>  [1768919423.0299] device (tap02f86d1d-50): carrier: link connected
Jan 20 14:30:23 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:30:23.033 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[3151c8eb-f1ef-44fc-b7a7-831cf9f7cae5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:30:23 compute-1 nova_compute[225855]: 2026-01-20 14:30:23.034 225859 DEBUG oslo_concurrency.processutils [None req-c9b86ee8-0957-4e69-a1fc-4cc0a2f42a3b d4b36d8e19cb4f529d2185f573f5072a 2074a786307f4427bbbbc1103d4a9305 - - default default] CMD "ssh -o BatchMode=yes 192.168.122.100 touch /var/lib/nova/instances/d95ca690-20e1-4b0c-919b-d64c9af25eba/35613d1c57984e108f84e93a2c7361cd.tmp" returned: 1 in 0.419s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:30:23 compute-1 nova_compute[225855]: 2026-01-20 14:30:23.035 225859 DEBUG oslo_concurrency.processutils [None req-c9b86ee8-0957-4e69-a1fc-4cc0a2f42a3b d4b36d8e19cb4f529d2185f573f5072a 2074a786307f4427bbbbc1103d4a9305 - - default default] 'ssh -o BatchMode=yes 192.168.122.100 touch /var/lib/nova/instances/d95ca690-20e1-4b0c-919b-d64c9af25eba/35613d1c57984e108f84e93a2c7361cd.tmp' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
Jan 20 14:30:23 compute-1 nova_compute[225855]: 2026-01-20 14:30:23.035 225859 DEBUG nova.virt.libvirt.volume.remotefs [None req-c9b86ee8-0957-4e69-a1fc-4cc0a2f42a3b d4b36d8e19cb4f529d2185f573f5072a 2074a786307f4427bbbbc1103d4a9305 - - default default] Creating directory /var/lib/nova/instances/d95ca690-20e1-4b0c-919b-d64c9af25eba on remote host 192.168.122.100 create_dir /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:91
Jan 20 14:30:23 compute-1 nova_compute[225855]: 2026-01-20 14:30:23.035 225859 DEBUG oslo_concurrency.processutils [None req-c9b86ee8-0957-4e69-a1fc-4cc0a2f42a3b d4b36d8e19cb4f529d2185f573f5072a 2074a786307f4427bbbbc1103d4a9305 - - default default] Running cmd (subprocess): ssh -o BatchMode=yes 192.168.122.100 mkdir -p /var/lib/nova/instances/d95ca690-20e1-4b0c-919b-d64c9af25eba execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:30:23 compute-1 ovn_controller[130490]: 2026-01-20T14:30:23Z|00116|binding|INFO|Releasing lport b3bfa880-f76c-4bab-98ca-24729b0d77e7 from this chassis (sb_readonly=0)
Jan 20 14:30:23 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:30:23.048 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[f0e4d00a-4009-4e33-98d4-0d061a7bde45]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap02f86d1d-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c4:08:de'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 33], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 446195, 'reachable_time': 33326, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 240523, 'error': None, 'target': 'ovnmeta-02f86d1d-5cad-49c5-9004-3de3e4739ad5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:30:23 compute-1 nova_compute[225855]: 2026-01-20 14:30:23.053 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:30:23 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:30:23.062 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[920a26f1-b1f5-4954-b22a-72d877728b2c]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fec4:8de'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 446195, 'tstamp': 446195}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 240525, 'error': None, 'target': 'ovnmeta-02f86d1d-5cad-49c5-9004-3de3e4739ad5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:30:23 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:30:23 compute-1 ovn_controller[130490]: 2026-01-20T14:30:23Z|00117|binding|INFO|Setting lport f65050ac-6a44-490a-b4b9-8c82c1f61630 ovn-installed in OVS
Jan 20 14:30:23 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:30:23 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:30:23.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:30:23 compute-1 ovn_controller[130490]: 2026-01-20T14:30:23Z|00118|binding|INFO|Setting lport f65050ac-6a44-490a-b4b9-8c82c1f61630 up in Southbound
Jan 20 14:30:23 compute-1 nova_compute[225855]: 2026-01-20 14:30:23.065 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:30:23 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:30:23.079 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[e774e6a6-1378-4583-afc7-5a8b9538c5aa]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap02f86d1d-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c4:08:de'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 33], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 446195, 'reachable_time': 33326, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 240526, 'error': None, 'target': 'ovnmeta-02f86d1d-5cad-49c5-9004-3de3e4739ad5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:30:23 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:30:23.105 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[efd69c80-714f-4bae-974f-d540412ccf30]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:30:23 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:30:23.159 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[bcd3518d-e520-4628-844c-c7892f7d72b3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:30:23 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:30:23.160 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap02f86d1d-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:30:23 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:30:23.160 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:30:23 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:30:23.161 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap02f86d1d-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:30:23 compute-1 nova_compute[225855]: 2026-01-20 14:30:23.163 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:30:23 compute-1 NetworkManager[49104]: <info>  [1768919423.1637] manager: (tap02f86d1d-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/60)
Jan 20 14:30:23 compute-1 kernel: tap02f86d1d-50: entered promiscuous mode
Jan 20 14:30:23 compute-1 nova_compute[225855]: 2026-01-20 14:30:23.165 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:30:23 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:30:23.167 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap02f86d1d-50, col_values=(('external_ids', {'iface-id': '2f798c1c-f9b6-4141-904d-4124d05888ca'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:30:23 compute-1 nova_compute[225855]: 2026-01-20 14:30:23.169 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:30:23 compute-1 ovn_controller[130490]: 2026-01-20T14:30:23Z|00119|binding|INFO|Releasing lport 2f798c1c-f9b6-4141-904d-4124d05888ca from this chassis (sb_readonly=0)
Jan 20 14:30:23 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:30:23.170 140354 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/02f86d1d-5cad-49c5-9004-3de3e4739ad5.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/02f86d1d-5cad-49c5-9004-3de3e4739ad5.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 20 14:30:23 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:30:23.171 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[9790b14e-2ef5-4a85-8725-433dcc4d80b3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:30:23 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:30:23.173 140354 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 20 14:30:23 compute-1 ovn_metadata_agent[140349]: global
Jan 20 14:30:23 compute-1 ovn_metadata_agent[140349]:     log         /dev/log local0 debug
Jan 20 14:30:23 compute-1 ovn_metadata_agent[140349]:     log-tag     haproxy-metadata-proxy-02f86d1d-5cad-49c5-9004-3de3e4739ad5
Jan 20 14:30:23 compute-1 ovn_metadata_agent[140349]:     user        root
Jan 20 14:30:23 compute-1 ovn_metadata_agent[140349]:     group       root
Jan 20 14:30:23 compute-1 ovn_metadata_agent[140349]:     maxconn     1024
Jan 20 14:30:23 compute-1 ovn_metadata_agent[140349]:     pidfile     /var/lib/neutron/external/pids/02f86d1d-5cad-49c5-9004-3de3e4739ad5.pid.haproxy
Jan 20 14:30:23 compute-1 ovn_metadata_agent[140349]:     daemon
Jan 20 14:30:23 compute-1 ovn_metadata_agent[140349]: 
Jan 20 14:30:23 compute-1 ovn_metadata_agent[140349]: defaults
Jan 20 14:30:23 compute-1 ovn_metadata_agent[140349]:     log global
Jan 20 14:30:23 compute-1 ovn_metadata_agent[140349]:     mode http
Jan 20 14:30:23 compute-1 ovn_metadata_agent[140349]:     option httplog
Jan 20 14:30:23 compute-1 ovn_metadata_agent[140349]:     option dontlognull
Jan 20 14:30:23 compute-1 ovn_metadata_agent[140349]:     option http-server-close
Jan 20 14:30:23 compute-1 ovn_metadata_agent[140349]:     option forwardfor
Jan 20 14:30:23 compute-1 ovn_metadata_agent[140349]:     retries                 3
Jan 20 14:30:23 compute-1 ovn_metadata_agent[140349]:     timeout http-request    30s
Jan 20 14:30:23 compute-1 ovn_metadata_agent[140349]:     timeout connect         30s
Jan 20 14:30:23 compute-1 ovn_metadata_agent[140349]:     timeout client          32s
Jan 20 14:30:23 compute-1 ovn_metadata_agent[140349]:     timeout server          32s
Jan 20 14:30:23 compute-1 ovn_metadata_agent[140349]:     timeout http-keep-alive 30s
Jan 20 14:30:23 compute-1 ovn_metadata_agent[140349]: 
Jan 20 14:30:23 compute-1 ovn_metadata_agent[140349]: 
Jan 20 14:30:23 compute-1 ovn_metadata_agent[140349]: listen listener
Jan 20 14:30:23 compute-1 ovn_metadata_agent[140349]:     bind 169.254.169.254:80
Jan 20 14:30:23 compute-1 ovn_metadata_agent[140349]:     server metadata /var/lib/neutron/metadata_proxy
Jan 20 14:30:23 compute-1 ovn_metadata_agent[140349]:     http-request add-header X-OVN-Network-ID 02f86d1d-5cad-49c5-9004-3de3e4739ad5
Jan 20 14:30:23 compute-1 ovn_metadata_agent[140349]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 20 14:30:23 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:30:23.174 140354 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-02f86d1d-5cad-49c5-9004-3de3e4739ad5', 'env', 'PROCESS_TAG=haproxy-02f86d1d-5cad-49c5-9004-3de3e4739ad5', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/02f86d1d-5cad-49c5-9004-3de3e4739ad5.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 20 14:30:23 compute-1 nova_compute[225855]: 2026-01-20 14:30:23.185 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:30:23 compute-1 nova_compute[225855]: 2026-01-20 14:30:23.270 225859 DEBUG oslo_concurrency.processutils [None req-c9b86ee8-0957-4e69-a1fc-4cc0a2f42a3b d4b36d8e19cb4f529d2185f573f5072a 2074a786307f4427bbbbc1103d4a9305 - - default default] CMD "ssh -o BatchMode=yes 192.168.122.100 mkdir -p /var/lib/nova/instances/d95ca690-20e1-4b0c-919b-d64c9af25eba" returned: 0 in 0.235s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:30:23 compute-1 nova_compute[225855]: 2026-01-20 14:30:23.275 225859 DEBUG nova.virt.libvirt.driver [None req-c9b86ee8-0957-4e69-a1fc-4cc0a2f42a3b d4b36d8e19cb4f529d2185f573f5072a 2074a786307f4427bbbbc1103d4a9305 - - default default] [instance: d95ca690-20e1-4b0c-919b-d64c9af25eba] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Jan 20 14:30:23 compute-1 nova_compute[225855]: 2026-01-20 14:30:23.414 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768919423.4144607, f3faf996-e066-4b11-b7f3-30aeffff726e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:30:23 compute-1 nova_compute[225855]: 2026-01-20 14:30:23.420 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: f3faf996-e066-4b11-b7f3-30aeffff726e] VM Started (Lifecycle Event)
Jan 20 14:30:23 compute-1 nova_compute[225855]: 2026-01-20 14:30:23.441 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: f3faf996-e066-4b11-b7f3-30aeffff726e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:30:23 compute-1 nova_compute[225855]: 2026-01-20 14:30:23.445 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768919423.41456, f3faf996-e066-4b11-b7f3-30aeffff726e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:30:23 compute-1 nova_compute[225855]: 2026-01-20 14:30:23.445 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: f3faf996-e066-4b11-b7f3-30aeffff726e] VM Paused (Lifecycle Event)
Jan 20 14:30:23 compute-1 nova_compute[225855]: 2026-01-20 14:30:23.601 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: f3faf996-e066-4b11-b7f3-30aeffff726e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:30:23 compute-1 podman[240600]: 2026-01-20 14:30:23.507435139 +0000 UTC m=+0.028994196 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 20 14:30:23 compute-1 nova_compute[225855]: 2026-01-20 14:30:23.605 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: f3faf996-e066-4b11-b7f3-30aeffff726e] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:30:23 compute-1 nova_compute[225855]: 2026-01-20 14:30:23.634 225859 DEBUG nova.compute.manager [req-2ca9a36a-00d8-4ec1-afaa-0687a51a9397 req-50602aab-0cdb-49e4-9c52-2226624bc818 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f3faf996-e066-4b11-b7f3-30aeffff726e] Received event network-vif-plugged-f65050ac-6a44-490a-b4b9-8c82c1f61630 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:30:23 compute-1 nova_compute[225855]: 2026-01-20 14:30:23.635 225859 DEBUG oslo_concurrency.lockutils [req-2ca9a36a-00d8-4ec1-afaa-0687a51a9397 req-50602aab-0cdb-49e4-9c52-2226624bc818 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "f3faf996-e066-4b11-b7f3-30aeffff726e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:30:23 compute-1 nova_compute[225855]: 2026-01-20 14:30:23.635 225859 DEBUG oslo_concurrency.lockutils [req-2ca9a36a-00d8-4ec1-afaa-0687a51a9397 req-50602aab-0cdb-49e4-9c52-2226624bc818 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "f3faf996-e066-4b11-b7f3-30aeffff726e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:30:23 compute-1 nova_compute[225855]: 2026-01-20 14:30:23.635 225859 DEBUG oslo_concurrency.lockutils [req-2ca9a36a-00d8-4ec1-afaa-0687a51a9397 req-50602aab-0cdb-49e4-9c52-2226624bc818 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "f3faf996-e066-4b11-b7f3-30aeffff726e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:30:23 compute-1 nova_compute[225855]: 2026-01-20 14:30:23.635 225859 DEBUG nova.compute.manager [req-2ca9a36a-00d8-4ec1-afaa-0687a51a9397 req-50602aab-0cdb-49e4-9c52-2226624bc818 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f3faf996-e066-4b11-b7f3-30aeffff726e] Processing event network-vif-plugged-f65050ac-6a44-490a-b4b9-8c82c1f61630 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 20 14:30:23 compute-1 nova_compute[225855]: 2026-01-20 14:30:23.636 225859 DEBUG nova.compute.manager [None req-ffb5ecbd-9a67-415f-a514-18a4e75e1236 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] [instance: f3faf996-e066-4b11-b7f3-30aeffff726e] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 20 14:30:23 compute-1 nova_compute[225855]: 2026-01-20 14:30:23.637 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: f3faf996-e066-4b11-b7f3-30aeffff726e] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 14:30:23 compute-1 nova_compute[225855]: 2026-01-20 14:30:23.640 225859 DEBUG nova.virt.libvirt.driver [None req-ffb5ecbd-9a67-415f-a514-18a4e75e1236 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] [instance: f3faf996-e066-4b11-b7f3-30aeffff726e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 20 14:30:23 compute-1 nova_compute[225855]: 2026-01-20 14:30:23.640 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768919423.6407187, f3faf996-e066-4b11-b7f3-30aeffff726e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:30:23 compute-1 nova_compute[225855]: 2026-01-20 14:30:23.641 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: f3faf996-e066-4b11-b7f3-30aeffff726e] VM Resumed (Lifecycle Event)
Jan 20 14:30:23 compute-1 nova_compute[225855]: 2026-01-20 14:30:23.644 225859 INFO nova.virt.libvirt.driver [-] [instance: f3faf996-e066-4b11-b7f3-30aeffff726e] Instance spawned successfully.
Jan 20 14:30:23 compute-1 nova_compute[225855]: 2026-01-20 14:30:23.644 225859 DEBUG nova.virt.libvirt.driver [None req-ffb5ecbd-9a67-415f-a514-18a4e75e1236 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] [instance: f3faf996-e066-4b11-b7f3-30aeffff726e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 20 14:30:23 compute-1 nova_compute[225855]: 2026-01-20 14:30:23.671 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: f3faf996-e066-4b11-b7f3-30aeffff726e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:30:23 compute-1 podman[240600]: 2026-01-20 14:30:23.673448177 +0000 UTC m=+0.195007214 container create d8a0ee9d6ac43a9bc3bd7cb9d79f6a436d6ee7fbfe9aa9472ca3997bc110a44f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-02f86d1d-5cad-49c5-9004-3de3e4739ad5, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2)
Jan 20 14:30:23 compute-1 nova_compute[225855]: 2026-01-20 14:30:23.688 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: f3faf996-e066-4b11-b7f3-30aeffff726e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:30:23 compute-1 nova_compute[225855]: 2026-01-20 14:30:23.691 225859 DEBUG nova.virt.libvirt.driver [None req-ffb5ecbd-9a67-415f-a514-18a4e75e1236 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] [instance: f3faf996-e066-4b11-b7f3-30aeffff726e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:30:23 compute-1 nova_compute[225855]: 2026-01-20 14:30:23.691 225859 DEBUG nova.virt.libvirt.driver [None req-ffb5ecbd-9a67-415f-a514-18a4e75e1236 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] [instance: f3faf996-e066-4b11-b7f3-30aeffff726e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:30:23 compute-1 nova_compute[225855]: 2026-01-20 14:30:23.692 225859 DEBUG nova.virt.libvirt.driver [None req-ffb5ecbd-9a67-415f-a514-18a4e75e1236 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] [instance: f3faf996-e066-4b11-b7f3-30aeffff726e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:30:23 compute-1 nova_compute[225855]: 2026-01-20 14:30:23.692 225859 DEBUG nova.virt.libvirt.driver [None req-ffb5ecbd-9a67-415f-a514-18a4e75e1236 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] [instance: f3faf996-e066-4b11-b7f3-30aeffff726e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:30:23 compute-1 nova_compute[225855]: 2026-01-20 14:30:23.693 225859 DEBUG nova.virt.libvirt.driver [None req-ffb5ecbd-9a67-415f-a514-18a4e75e1236 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] [instance: f3faf996-e066-4b11-b7f3-30aeffff726e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:30:23 compute-1 nova_compute[225855]: 2026-01-20 14:30:23.693 225859 DEBUG nova.virt.libvirt.driver [None req-ffb5ecbd-9a67-415f-a514-18a4e75e1236 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] [instance: f3faf996-e066-4b11-b7f3-30aeffff726e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:30:23 compute-1 systemd[1]: Started libpod-conmon-d8a0ee9d6ac43a9bc3bd7cb9d79f6a436d6ee7fbfe9aa9472ca3997bc110a44f.scope.
Jan 20 14:30:23 compute-1 nova_compute[225855]: 2026-01-20 14:30:23.725 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: f3faf996-e066-4b11-b7f3-30aeffff726e] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 14:30:23 compute-1 systemd[1]: Started libcrun container.
Jan 20 14:30:23 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c2b8ddd195f8cf3eab1b6717aa28a9f762bb41de1ab6eed44d1d21f47344a69/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 20 14:30:23 compute-1 podman[240600]: 2026-01-20 14:30:23.758057215 +0000 UTC m=+0.279616272 container init d8a0ee9d6ac43a9bc3bd7cb9d79f6a436d6ee7fbfe9aa9472ca3997bc110a44f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-02f86d1d-5cad-49c5-9004-3de3e4739ad5, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 20 14:30:23 compute-1 podman[240600]: 2026-01-20 14:30:23.763729325 +0000 UTC m=+0.285288362 container start d8a0ee9d6ac43a9bc3bd7cb9d79f6a436d6ee7fbfe9aa9472ca3997bc110a44f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-02f86d1d-5cad-49c5-9004-3de3e4739ad5, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Jan 20 14:30:23 compute-1 neutron-haproxy-ovnmeta-02f86d1d-5cad-49c5-9004-3de3e4739ad5[240617]: [NOTICE]   (240621) : New worker (240623) forked
Jan 20 14:30:23 compute-1 neutron-haproxy-ovnmeta-02f86d1d-5cad-49c5-9004-3de3e4739ad5[240617]: [NOTICE]   (240621) : Loading success.
Jan 20 14:30:23 compute-1 nova_compute[225855]: 2026-01-20 14:30:23.782 225859 INFO nova.compute.manager [None req-ffb5ecbd-9a67-415f-a514-18a4e75e1236 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] [instance: f3faf996-e066-4b11-b7f3-30aeffff726e] Took 7.51 seconds to spawn the instance on the hypervisor.
Jan 20 14:30:23 compute-1 nova_compute[225855]: 2026-01-20 14:30:23.783 225859 DEBUG nova.compute.manager [None req-ffb5ecbd-9a67-415f-a514-18a4e75e1236 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] [instance: f3faf996-e066-4b11-b7f3-30aeffff726e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:30:23 compute-1 nova_compute[225855]: 2026-01-20 14:30:23.843 225859 INFO nova.compute.manager [None req-ffb5ecbd-9a67-415f-a514-18a4e75e1236 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] [instance: f3faf996-e066-4b11-b7f3-30aeffff726e] Took 10.28 seconds to build instance.
Jan 20 14:30:23 compute-1 nova_compute[225855]: 2026-01-20 14:30:23.881 225859 DEBUG oslo_concurrency.lockutils [None req-ffb5ecbd-9a67-415f-a514-18a4e75e1236 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] Lock "f3faf996-e066-4b11-b7f3-30aeffff726e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.853s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:30:24 compute-1 nova_compute[225855]: 2026-01-20 14:30:24.021 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:30:24 compute-1 nova_compute[225855]: 2026-01-20 14:30:24.384 225859 DEBUG nova.compute.manager [req-0300ed0f-b12a-41ea-a302-2b81b3e2e82f req-2da519f9-f123-43e5-8dfd-a2e15bcdfcb7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d08682f8-72ef-462c-b4b7-044cf16fc193] Received event network-vif-plugged-e848e00f-d594-4d70-9026-cd650417bf47 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:30:24 compute-1 nova_compute[225855]: 2026-01-20 14:30:24.385 225859 DEBUG oslo_concurrency.lockutils [req-0300ed0f-b12a-41ea-a302-2b81b3e2e82f req-2da519f9-f123-43e5-8dfd-a2e15bcdfcb7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "d08682f8-72ef-462c-b4b7-044cf16fc193-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:30:24 compute-1 nova_compute[225855]: 2026-01-20 14:30:24.385 225859 DEBUG oslo_concurrency.lockutils [req-0300ed0f-b12a-41ea-a302-2b81b3e2e82f req-2da519f9-f123-43e5-8dfd-a2e15bcdfcb7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "d08682f8-72ef-462c-b4b7-044cf16fc193-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:30:24 compute-1 nova_compute[225855]: 2026-01-20 14:30:24.386 225859 DEBUG oslo_concurrency.lockutils [req-0300ed0f-b12a-41ea-a302-2b81b3e2e82f req-2da519f9-f123-43e5-8dfd-a2e15bcdfcb7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "d08682f8-72ef-462c-b4b7-044cf16fc193-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:30:24 compute-1 nova_compute[225855]: 2026-01-20 14:30:24.386 225859 DEBUG nova.compute.manager [req-0300ed0f-b12a-41ea-a302-2b81b3e2e82f req-2da519f9-f123-43e5-8dfd-a2e15bcdfcb7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d08682f8-72ef-462c-b4b7-044cf16fc193] No waiting events found dispatching network-vif-plugged-e848e00f-d594-4d70-9026-cd650417bf47 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:30:24 compute-1 nova_compute[225855]: 2026-01-20 14:30:24.387 225859 WARNING nova.compute.manager [req-0300ed0f-b12a-41ea-a302-2b81b3e2e82f req-2da519f9-f123-43e5-8dfd-a2e15bcdfcb7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d08682f8-72ef-462c-b4b7-044cf16fc193] Received unexpected event network-vif-plugged-e848e00f-d594-4d70-9026-cd650417bf47 for instance with vm_state active and task_state None.
Jan 20 14:30:24 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:30:24 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:30:24 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:30:24.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:30:24 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:30:25 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:30:25 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:30:25 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:30:25.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:30:25 compute-1 ceph-mon[81775]: pgmap v1191: 321 pgs: 321 active+clean; 582 MiB data, 652 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 7.5 MiB/s wr, 265 op/s
Jan 20 14:30:25 compute-1 nova_compute[225855]: 2026-01-20 14:30:25.712 225859 DEBUG nova.compute.manager [req-80abcfc6-797e-4cf1-bd1a-136cccfc9592 req-65fb452a-b561-4f08-be4a-e8034101b770 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f3faf996-e066-4b11-b7f3-30aeffff726e] Received event network-vif-plugged-f65050ac-6a44-490a-b4b9-8c82c1f61630 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:30:25 compute-1 nova_compute[225855]: 2026-01-20 14:30:25.712 225859 DEBUG oslo_concurrency.lockutils [req-80abcfc6-797e-4cf1-bd1a-136cccfc9592 req-65fb452a-b561-4f08-be4a-e8034101b770 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "f3faf996-e066-4b11-b7f3-30aeffff726e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:30:25 compute-1 nova_compute[225855]: 2026-01-20 14:30:25.713 225859 DEBUG oslo_concurrency.lockutils [req-80abcfc6-797e-4cf1-bd1a-136cccfc9592 req-65fb452a-b561-4f08-be4a-e8034101b770 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "f3faf996-e066-4b11-b7f3-30aeffff726e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:30:25 compute-1 nova_compute[225855]: 2026-01-20 14:30:25.713 225859 DEBUG oslo_concurrency.lockutils [req-80abcfc6-797e-4cf1-bd1a-136cccfc9592 req-65fb452a-b561-4f08-be4a-e8034101b770 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "f3faf996-e066-4b11-b7f3-30aeffff726e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:30:25 compute-1 nova_compute[225855]: 2026-01-20 14:30:25.713 225859 DEBUG nova.compute.manager [req-80abcfc6-797e-4cf1-bd1a-136cccfc9592 req-65fb452a-b561-4f08-be4a-e8034101b770 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f3faf996-e066-4b11-b7f3-30aeffff726e] No waiting events found dispatching network-vif-plugged-f65050ac-6a44-490a-b4b9-8c82c1f61630 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:30:25 compute-1 nova_compute[225855]: 2026-01-20 14:30:25.713 225859 WARNING nova.compute.manager [req-80abcfc6-797e-4cf1-bd1a-136cccfc9592 req-65fb452a-b561-4f08-be4a-e8034101b770 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f3faf996-e066-4b11-b7f3-30aeffff726e] Received unexpected event network-vif-plugged-f65050ac-6a44-490a-b4b9-8c82c1f61630 for instance with vm_state active and task_state None.
Jan 20 14:30:25 compute-1 sshd-session[240396]: Connection closed by authenticating user root 45.179.5.170 port 58150 [preauth]
Jan 20 14:30:26 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:30:26 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:30:26 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:30:26.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:30:26 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/256037197' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:30:26 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/1585323425' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:30:26 compute-1 ceph-mon[81775]: pgmap v1192: 321 pgs: 321 active+clean; 583 MiB data, 652 MiB used, 20 GiB / 21 GiB avail; 4.5 MiB/s rd, 7.5 MiB/s wr, 338 op/s
Jan 20 14:30:26 compute-1 nova_compute[225855]: 2026-01-20 14:30:26.545 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:30:27 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:30:27 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:30:27 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:30:27.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:30:27 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/2870659096' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:30:27 compute-1 nova_compute[225855]: 2026-01-20 14:30:27.812 225859 DEBUG nova.compute.manager [req-5feb12f5-f148-4dd7-8e9e-4c1e1b23a535 req-cebe04e4-5266-4ccb-a562-4d3cd542d204 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f3faf996-e066-4b11-b7f3-30aeffff726e] Received event network-changed-f65050ac-6a44-490a-b4b9-8c82c1f61630 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:30:27 compute-1 nova_compute[225855]: 2026-01-20 14:30:27.812 225859 DEBUG nova.compute.manager [req-5feb12f5-f148-4dd7-8e9e-4c1e1b23a535 req-cebe04e4-5266-4ccb-a562-4d3cd542d204 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f3faf996-e066-4b11-b7f3-30aeffff726e] Refreshing instance network info cache due to event network-changed-f65050ac-6a44-490a-b4b9-8c82c1f61630. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 14:30:27 compute-1 nova_compute[225855]: 2026-01-20 14:30:27.812 225859 DEBUG oslo_concurrency.lockutils [req-5feb12f5-f148-4dd7-8e9e-4c1e1b23a535 req-cebe04e4-5266-4ccb-a562-4d3cd542d204 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-f3faf996-e066-4b11-b7f3-30aeffff726e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:30:27 compute-1 nova_compute[225855]: 2026-01-20 14:30:27.813 225859 DEBUG oslo_concurrency.lockutils [req-5feb12f5-f148-4dd7-8e9e-4c1e1b23a535 req-cebe04e4-5266-4ccb-a562-4d3cd542d204 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-f3faf996-e066-4b11-b7f3-30aeffff726e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:30:27 compute-1 nova_compute[225855]: 2026-01-20 14:30:27.813 225859 DEBUG nova.network.neutron [req-5feb12f5-f148-4dd7-8e9e-4c1e1b23a535 req-cebe04e4-5266-4ccb-a562-4d3cd542d204 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f3faf996-e066-4b11-b7f3-30aeffff726e] Refreshing network info cache for port f65050ac-6a44-490a-b4b9-8c82c1f61630 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 14:30:28 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:30:28 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:30:28 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:30:28.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:30:28 compute-1 ceph-mon[81775]: pgmap v1193: 321 pgs: 321 active+clean; 583 MiB data, 652 MiB used, 20 GiB / 21 GiB avail; 6.2 MiB/s rd, 5.1 MiB/s wr, 328 op/s
Jan 20 14:30:29 compute-1 nova_compute[225855]: 2026-01-20 14:30:29.024 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:30:29 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:30:29 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:30:29 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:30:29.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:30:29 compute-1 nova_compute[225855]: 2026-01-20 14:30:29.159 225859 DEBUG nova.network.neutron [req-5feb12f5-f148-4dd7-8e9e-4c1e1b23a535 req-cebe04e4-5266-4ccb-a562-4d3cd542d204 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f3faf996-e066-4b11-b7f3-30aeffff726e] Updated VIF entry in instance network info cache for port f65050ac-6a44-490a-b4b9-8c82c1f61630. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 14:30:29 compute-1 nova_compute[225855]: 2026-01-20 14:30:29.160 225859 DEBUG nova.network.neutron [req-5feb12f5-f148-4dd7-8e9e-4c1e1b23a535 req-cebe04e4-5266-4ccb-a562-4d3cd542d204 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f3faf996-e066-4b11-b7f3-30aeffff726e] Updating instance_info_cache with network_info: [{"id": "f65050ac-6a44-490a-b4b9-8c82c1f61630", "address": "fa:16:3e:fc:ae:50", "network": {"id": "02f86d1d-5cad-49c5-9004-3de3e4739ad5", "bridge": "br-int", "label": "tempest-UpdateMultiattachVolumeNegativeTest-889517255-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.239", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0cee74dd60da4a839bb5eb0ba3137edf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf65050ac-6a", "ovs_interfaceid": "f65050ac-6a44-490a-b4b9-8c82c1f61630", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:30:29 compute-1 nova_compute[225855]: 2026-01-20 14:30:29.183 225859 DEBUG oslo_concurrency.lockutils [req-5feb12f5-f148-4dd7-8e9e-4c1e1b23a535 req-cebe04e4-5266-4ccb-a562-4d3cd542d204 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-f3faf996-e066-4b11-b7f3-30aeffff726e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:30:29 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:30:30 compute-1 ceph-mon[81775]: pgmap v1194: 321 pgs: 321 active+clean; 590 MiB data, 653 MiB used, 20 GiB / 21 GiB avail; 5.9 MiB/s rd, 3.2 MiB/s wr, 302 op/s
Jan 20 14:30:30 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/2358003891' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:30:30 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:30:30 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:30:30 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:30:30.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:30:31 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:30:31 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:30:31 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:30:31.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:30:31 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/2341563966' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:30:31 compute-1 nova_compute[225855]: 2026-01-20 14:30:31.334 225859 DEBUG nova.compute.manager [req-00c5853e-97df-4614-8365-d715abbc4c54 req-75d7488c-648d-4568-995b-79396b8ca661 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d08682f8-72ef-462c-b4b7-044cf16fc193] Received event network-changed-e848e00f-d594-4d70-9026-cd650417bf47 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:30:31 compute-1 nova_compute[225855]: 2026-01-20 14:30:31.336 225859 DEBUG nova.compute.manager [req-00c5853e-97df-4614-8365-d715abbc4c54 req-75d7488c-648d-4568-995b-79396b8ca661 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d08682f8-72ef-462c-b4b7-044cf16fc193] Refreshing instance network info cache due to event network-changed-e848e00f-d594-4d70-9026-cd650417bf47. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 14:30:31 compute-1 nova_compute[225855]: 2026-01-20 14:30:31.336 225859 DEBUG oslo_concurrency.lockutils [req-00c5853e-97df-4614-8365-d715abbc4c54 req-75d7488c-648d-4568-995b-79396b8ca661 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-d08682f8-72ef-462c-b4b7-044cf16fc193" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:30:31 compute-1 nova_compute[225855]: 2026-01-20 14:30:31.337 225859 DEBUG oslo_concurrency.lockutils [req-00c5853e-97df-4614-8365-d715abbc4c54 req-75d7488c-648d-4568-995b-79396b8ca661 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-d08682f8-72ef-462c-b4b7-044cf16fc193" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:30:31 compute-1 nova_compute[225855]: 2026-01-20 14:30:31.337 225859 DEBUG nova.network.neutron [req-00c5853e-97df-4614-8365-d715abbc4c54 req-75d7488c-648d-4568-995b-79396b8ca661 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d08682f8-72ef-462c-b4b7-044cf16fc193] Refreshing network info cache for port e848e00f-d594-4d70-9026-cd650417bf47 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 14:30:31 compute-1 nova_compute[225855]: 2026-01-20 14:30:31.549 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:30:32 compute-1 ceph-mon[81775]: pgmap v1195: 321 pgs: 321 active+clean; 655 MiB data, 712 MiB used, 20 GiB / 21 GiB avail; 5.6 MiB/s rd, 6.1 MiB/s wr, 301 op/s
Jan 20 14:30:32 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:30:32 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:30:32 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:30:32.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:30:32 compute-1 nova_compute[225855]: 2026-01-20 14:30:32.407 225859 DEBUG nova.network.neutron [req-00c5853e-97df-4614-8365-d715abbc4c54 req-75d7488c-648d-4568-995b-79396b8ca661 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d08682f8-72ef-462c-b4b7-044cf16fc193] Updated VIF entry in instance network info cache for port e848e00f-d594-4d70-9026-cd650417bf47. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 14:30:32 compute-1 nova_compute[225855]: 2026-01-20 14:30:32.408 225859 DEBUG nova.network.neutron [req-00c5853e-97df-4614-8365-d715abbc4c54 req-75d7488c-648d-4568-995b-79396b8ca661 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d08682f8-72ef-462c-b4b7-044cf16fc193] Updating instance_info_cache with network_info: [{"id": "e848e00f-d594-4d70-9026-cd650417bf47", "address": "fa:16:3e:89:9f:c4", "network": {"id": "01e6deef-9aca-4d36-8215-4517982a86a3", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-70171378-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "78f151250c04467bb4f6a229dda16fc5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape848e00f-d5", "ovs_interfaceid": "e848e00f-d594-4d70-9026-cd650417bf47", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:30:32 compute-1 nova_compute[225855]: 2026-01-20 14:30:32.426 225859 DEBUG oslo_concurrency.lockutils [req-00c5853e-97df-4614-8365-d715abbc4c54 req-75d7488c-648d-4568-995b-79396b8ca661 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-d08682f8-72ef-462c-b4b7-044cf16fc193" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:30:33 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:30:33 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:30:33 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:30:33.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:30:33 compute-1 nova_compute[225855]: 2026-01-20 14:30:33.315 225859 DEBUG nova.virt.libvirt.driver [None req-c9b86ee8-0957-4e69-a1fc-4cc0a2f42a3b d4b36d8e19cb4f529d2185f573f5072a 2074a786307f4427bbbbc1103d4a9305 - - default default] [instance: d95ca690-20e1-4b0c-919b-d64c9af25eba] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Jan 20 14:30:34 compute-1 nova_compute[225855]: 2026-01-20 14:30:34.025 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:30:34 compute-1 podman[240637]: 2026-01-20 14:30:34.068706119 +0000 UTC m=+0.104554601 container health_status 72be95c12e041eb2cd1a16ad70dfa1391d9f03a5f3180a614ead9459dcfcd46f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 20 14:30:34 compute-1 ceph-mon[81775]: pgmap v1196: 321 pgs: 321 active+clean; 655 MiB data, 712 MiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 3.7 MiB/s wr, 203 op/s
Jan 20 14:30:34 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:30:34 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:30:34 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:30:34.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:30:34 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:30:35 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:30:35 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:30:35 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:30:35.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:30:35 compute-1 systemd[1]: machine-qemu\x2d14\x2dinstance\x2d0000001d.scope: Deactivated successfully.
Jan 20 14:30:35 compute-1 systemd[1]: machine-qemu\x2d14\x2dinstance\x2d0000001d.scope: Consumed 13.256s CPU time.
Jan 20 14:30:35 compute-1 systemd-machined[194361]: Machine qemu-14-instance-0000001d terminated.
Jan 20 14:30:35 compute-1 ceph-mon[81775]: pgmap v1197: 321 pgs: 321 active+clean; 674 MiB data, 727 MiB used, 20 GiB / 21 GiB avail; 4.7 MiB/s rd, 4.9 MiB/s wr, 281 op/s
Jan 20 14:30:36 compute-1 ovn_controller[130490]: 2026-01-20T14:30:36Z|00010|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:89:9f:c4 10.100.0.5
Jan 20 14:30:36 compute-1 ovn_controller[130490]: 2026-01-20T14:30:36Z|00011|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:89:9f:c4 10.100.0.5
Jan 20 14:30:36 compute-1 nova_compute[225855]: 2026-01-20 14:30:36.328 225859 INFO nova.virt.libvirt.driver [None req-c9b86ee8-0957-4e69-a1fc-4cc0a2f42a3b d4b36d8e19cb4f529d2185f573f5072a 2074a786307f4427bbbbc1103d4a9305 - - default default] [instance: d95ca690-20e1-4b0c-919b-d64c9af25eba] Instance shutdown successfully after 13 seconds.
Jan 20 14:30:36 compute-1 nova_compute[225855]: 2026-01-20 14:30:36.335 225859 INFO nova.virt.libvirt.driver [-] [instance: d95ca690-20e1-4b0c-919b-d64c9af25eba] Instance destroyed successfully.
Jan 20 14:30:36 compute-1 nova_compute[225855]: 2026-01-20 14:30:36.340 225859 DEBUG nova.virt.libvirt.driver [None req-c9b86ee8-0957-4e69-a1fc-4cc0a2f42a3b d4b36d8e19cb4f529d2185f573f5072a 2074a786307f4427bbbbc1103d4a9305 - - default default] skipping disk for instance-0000001d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 14:30:36 compute-1 nova_compute[225855]: 2026-01-20 14:30:36.341 225859 DEBUG nova.virt.libvirt.driver [None req-c9b86ee8-0957-4e69-a1fc-4cc0a2f42a3b d4b36d8e19cb4f529d2185f573f5072a 2074a786307f4427bbbbc1103d4a9305 - - default default] skipping disk for instance-0000001d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 14:30:36 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:30:36 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:30:36 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:30:36.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:30:36 compute-1 nova_compute[225855]: 2026-01-20 14:30:36.439 225859 DEBUG oslo_concurrency.lockutils [None req-c9b86ee8-0957-4e69-a1fc-4cc0a2f42a3b d4b36d8e19cb4f529d2185f573f5072a 2074a786307f4427bbbbc1103d4a9305 - - default default] Acquiring lock "d95ca690-20e1-4b0c-919b-d64c9af25eba-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:30:36 compute-1 nova_compute[225855]: 2026-01-20 14:30:36.440 225859 DEBUG oslo_concurrency.lockutils [None req-c9b86ee8-0957-4e69-a1fc-4cc0a2f42a3b d4b36d8e19cb4f529d2185f573f5072a 2074a786307f4427bbbbc1103d4a9305 - - default default] Lock "d95ca690-20e1-4b0c-919b-d64c9af25eba-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:30:36 compute-1 nova_compute[225855]: 2026-01-20 14:30:36.440 225859 DEBUG oslo_concurrency.lockutils [None req-c9b86ee8-0957-4e69-a1fc-4cc0a2f42a3b d4b36d8e19cb4f529d2185f573f5072a 2074a786307f4427bbbbc1103d4a9305 - - default default] Lock "d95ca690-20e1-4b0c-919b-d64c9af25eba-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:30:36 compute-1 nova_compute[225855]: 2026-01-20 14:30:36.552 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:30:37 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:30:37 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:30:37 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:30:37.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:30:37 compute-1 ceph-mon[81775]: pgmap v1198: 321 pgs: 321 active+clean; 689 MiB data, 735 MiB used, 20 GiB / 21 GiB avail; 4.7 MiB/s rd, 6.1 MiB/s wr, 288 op/s
Jan 20 14:30:38 compute-1 ovn_controller[130490]: 2026-01-20T14:30:38Z|00012|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:fc:ae:50 10.100.0.8
Jan 20 14:30:38 compute-1 ovn_controller[130490]: 2026-01-20T14:30:38Z|00013|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:fc:ae:50 10.100.0.8
Jan 20 14:30:38 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:30:38 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:30:38 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:30:38.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:30:38 compute-1 nova_compute[225855]: 2026-01-20 14:30:38.651 225859 DEBUG nova.compute.manager [req-6fbe3102-c26e-4376-bbf8-1f793a1760f9 req-446c2998-ba43-415f-a04c-3da6ba039f0a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d08682f8-72ef-462c-b4b7-044cf16fc193] Received event network-changed-e848e00f-d594-4d70-9026-cd650417bf47 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:30:38 compute-1 nova_compute[225855]: 2026-01-20 14:30:38.652 225859 DEBUG nova.compute.manager [req-6fbe3102-c26e-4376-bbf8-1f793a1760f9 req-446c2998-ba43-415f-a04c-3da6ba039f0a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d08682f8-72ef-462c-b4b7-044cf16fc193] Refreshing instance network info cache due to event network-changed-e848e00f-d594-4d70-9026-cd650417bf47. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 14:30:38 compute-1 nova_compute[225855]: 2026-01-20 14:30:38.652 225859 DEBUG oslo_concurrency.lockutils [req-6fbe3102-c26e-4376-bbf8-1f793a1760f9 req-446c2998-ba43-415f-a04c-3da6ba039f0a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-d08682f8-72ef-462c-b4b7-044cf16fc193" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:30:38 compute-1 nova_compute[225855]: 2026-01-20 14:30:38.653 225859 DEBUG oslo_concurrency.lockutils [req-6fbe3102-c26e-4376-bbf8-1f793a1760f9 req-446c2998-ba43-415f-a04c-3da6ba039f0a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-d08682f8-72ef-462c-b4b7-044cf16fc193" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:30:38 compute-1 nova_compute[225855]: 2026-01-20 14:30:38.653 225859 DEBUG nova.network.neutron [req-6fbe3102-c26e-4376-bbf8-1f793a1760f9 req-446c2998-ba43-415f-a04c-3da6ba039f0a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d08682f8-72ef-462c-b4b7-044cf16fc193] Refreshing network info cache for port e848e00f-d594-4d70-9026-cd650417bf47 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 14:30:39 compute-1 nova_compute[225855]: 2026-01-20 14:30:39.028 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:30:39 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:30:39 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:30:39 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:30:39.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:30:39 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e163 e163: 3 total, 3 up, 3 in
Jan 20 14:30:39 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/561453425' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:30:39 compute-1 ceph-mon[81775]: osdmap e163: 3 total, 3 up, 3 in
Jan 20 14:30:39 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:30:40 compute-1 ceph-mon[81775]: pgmap v1200: 321 pgs: 321 active+clean; 696 MiB data, 743 MiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 7.9 MiB/s wr, 276 op/s
Jan 20 14:30:40 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/742726590' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:30:40 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:30:40 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:30:40 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:30:40.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:30:40 compute-1 sudo[240668]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:30:40 compute-1 sudo[240668]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:30:40 compute-1 sudo[240668]: pam_unix(sudo:session): session closed for user root
Jan 20 14:30:40 compute-1 sudo[240693]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:30:40 compute-1 sudo[240693]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:30:40 compute-1 sudo[240693]: pam_unix(sudo:session): session closed for user root
Jan 20 14:30:40 compute-1 nova_compute[225855]: 2026-01-20 14:30:40.808 225859 DEBUG oslo_concurrency.lockutils [None req-75591914-9108-47b2-a433-16634e89d62b 0cec872a00f742d78563d6d16fc545cb 78f151250c04467bb4f6a229dda16fc5 - - default default] Acquiring lock "d08682f8-72ef-462c-b4b7-044cf16fc193" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:30:40 compute-1 nova_compute[225855]: 2026-01-20 14:30:40.809 225859 DEBUG oslo_concurrency.lockutils [None req-75591914-9108-47b2-a433-16634e89d62b 0cec872a00f742d78563d6d16fc545cb 78f151250c04467bb4f6a229dda16fc5 - - default default] Lock "d08682f8-72ef-462c-b4b7-044cf16fc193" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:30:40 compute-1 nova_compute[225855]: 2026-01-20 14:30:40.809 225859 DEBUG oslo_concurrency.lockutils [None req-75591914-9108-47b2-a433-16634e89d62b 0cec872a00f742d78563d6d16fc545cb 78f151250c04467bb4f6a229dda16fc5 - - default default] Acquiring lock "d08682f8-72ef-462c-b4b7-044cf16fc193-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:30:40 compute-1 nova_compute[225855]: 2026-01-20 14:30:40.810 225859 DEBUG oslo_concurrency.lockutils [None req-75591914-9108-47b2-a433-16634e89d62b 0cec872a00f742d78563d6d16fc545cb 78f151250c04467bb4f6a229dda16fc5 - - default default] Lock "d08682f8-72ef-462c-b4b7-044cf16fc193-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:30:40 compute-1 nova_compute[225855]: 2026-01-20 14:30:40.810 225859 DEBUG oslo_concurrency.lockutils [None req-75591914-9108-47b2-a433-16634e89d62b 0cec872a00f742d78563d6d16fc545cb 78f151250c04467bb4f6a229dda16fc5 - - default default] Lock "d08682f8-72ef-462c-b4b7-044cf16fc193-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:30:40 compute-1 nova_compute[225855]: 2026-01-20 14:30:40.812 225859 INFO nova.compute.manager [None req-75591914-9108-47b2-a433-16634e89d62b 0cec872a00f742d78563d6d16fc545cb 78f151250c04467bb4f6a229dda16fc5 - - default default] [instance: d08682f8-72ef-462c-b4b7-044cf16fc193] Terminating instance
Jan 20 14:30:40 compute-1 nova_compute[225855]: 2026-01-20 14:30:40.814 225859 DEBUG nova.compute.manager [None req-75591914-9108-47b2-a433-16634e89d62b 0cec872a00f742d78563d6d16fc545cb 78f151250c04467bb4f6a229dda16fc5 - - default default] [instance: d08682f8-72ef-462c-b4b7-044cf16fc193] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 20 14:30:40 compute-1 nova_compute[225855]: 2026-01-20 14:30:40.865 225859 DEBUG nova.network.neutron [req-6fbe3102-c26e-4376-bbf8-1f793a1760f9 req-446c2998-ba43-415f-a04c-3da6ba039f0a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d08682f8-72ef-462c-b4b7-044cf16fc193] Updated VIF entry in instance network info cache for port e848e00f-d594-4d70-9026-cd650417bf47. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 14:30:40 compute-1 nova_compute[225855]: 2026-01-20 14:30:40.865 225859 DEBUG nova.network.neutron [req-6fbe3102-c26e-4376-bbf8-1f793a1760f9 req-446c2998-ba43-415f-a04c-3da6ba039f0a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d08682f8-72ef-462c-b4b7-044cf16fc193] Updating instance_info_cache with network_info: [{"id": "e848e00f-d594-4d70-9026-cd650417bf47", "address": "fa:16:3e:89:9f:c4", "network": {"id": "01e6deef-9aca-4d36-8215-4517982a86a3", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-70171378-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "78f151250c04467bb4f6a229dda16fc5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape848e00f-d5", "ovs_interfaceid": "e848e00f-d594-4d70-9026-cd650417bf47", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:30:40 compute-1 kernel: tape848e00f-d5 (unregistering): left promiscuous mode
Jan 20 14:30:40 compute-1 NetworkManager[49104]: <info>  [1768919440.8779] device (tape848e00f-d5): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 20 14:30:40 compute-1 ovn_controller[130490]: 2026-01-20T14:30:40Z|00120|binding|INFO|Releasing lport e848e00f-d594-4d70-9026-cd650417bf47 from this chassis (sb_readonly=0)
Jan 20 14:30:40 compute-1 ovn_controller[130490]: 2026-01-20T14:30:40Z|00121|binding|INFO|Setting lport e848e00f-d594-4d70-9026-cd650417bf47 down in Southbound
Jan 20 14:30:40 compute-1 nova_compute[225855]: 2026-01-20 14:30:40.884 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:30:40 compute-1 ovn_controller[130490]: 2026-01-20T14:30:40Z|00122|binding|INFO|Removing iface tape848e00f-d5 ovn-installed in OVS
Jan 20 14:30:40 compute-1 nova_compute[225855]: 2026-01-20 14:30:40.888 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:30:40 compute-1 nova_compute[225855]: 2026-01-20 14:30:40.893 225859 DEBUG oslo_concurrency.lockutils [req-6fbe3102-c26e-4376-bbf8-1f793a1760f9 req-446c2998-ba43-415f-a04c-3da6ba039f0a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-d08682f8-72ef-462c-b4b7-044cf16fc193" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:30:40 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:30:40.895 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:89:9f:c4 10.100.0.5'], port_security=['fa:16:3e:89:9f:c4 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'd08682f8-72ef-462c-b4b7-044cf16fc193', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-01e6deef-9aca-4d36-8215-4517982a86a3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '78f151250c04467bb4f6a229dda16fc5', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'dd36e8d2-993a-4618-8fff-62abafaadfd3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=86700b79-bb44-47f0-88a5-d4c8eda3acbb, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], logical_port=e848e00f-d594-4d70-9026-cd650417bf47) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:30:40 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:30:40.897 140354 INFO neutron.agent.ovn.metadata.agent [-] Port e848e00f-d594-4d70-9026-cd650417bf47 in datapath 01e6deef-9aca-4d36-8215-4517982a86a3 unbound from our chassis
Jan 20 14:30:40 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:30:40.900 140354 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 01e6deef-9aca-4d36-8215-4517982a86a3, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 20 14:30:40 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:30:40.902 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[b9cc37aa-7132-46d0-84cc-d27cc5b5a9f9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:30:40 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:30:40.903 140354 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-01e6deef-9aca-4d36-8215-4517982a86a3 namespace which is not needed anymore
Jan 20 14:30:40 compute-1 nova_compute[225855]: 2026-01-20 14:30:40.925 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:30:40 compute-1 systemd[1]: machine-qemu\x2d15\x2dinstance\x2d0000001f.scope: Deactivated successfully.
Jan 20 14:30:40 compute-1 systemd[1]: machine-qemu\x2d15\x2dinstance\x2d0000001f.scope: Consumed 13.474s CPU time.
Jan 20 14:30:40 compute-1 systemd-machined[194361]: Machine qemu-15-instance-0000001f terminated.
Jan 20 14:30:40 compute-1 systemd[1]: virtproxyd.service: Deactivated successfully.
Jan 20 14:30:41 compute-1 nova_compute[225855]: 2026-01-20 14:30:41.054 225859 INFO nova.virt.libvirt.driver [-] [instance: d08682f8-72ef-462c-b4b7-044cf16fc193] Instance destroyed successfully.
Jan 20 14:30:41 compute-1 nova_compute[225855]: 2026-01-20 14:30:41.056 225859 DEBUG nova.objects.instance [None req-75591914-9108-47b2-a433-16634e89d62b 0cec872a00f742d78563d6d16fc545cb 78f151250c04467bb4f6a229dda16fc5 - - default default] Lazy-loading 'resources' on Instance uuid d08682f8-72ef-462c-b4b7-044cf16fc193 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:30:41 compute-1 nova_compute[225855]: 2026-01-20 14:30:41.073 225859 DEBUG nova.virt.libvirt.vif [None req-75591914-9108-47b2-a433-16634e89d62b 0cec872a00f742d78563d6d16fc545cb 78f151250c04467bb4f6a229dda16fc5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T14:30:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-FloatingIPsAssociationTestJSON-server-952138345',display_name='tempest-FloatingIPsAssociationTestJSON-server-952138345',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-floatingipsassociationtestjson-server-952138345',id=31,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-20T14:30:22Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='78f151250c04467bb4f6a229dda16fc5',ramdisk_id='',reservation_id='r-bkzlkdyv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-FloatingIPsAssociationTestJSON-146254261',owner_user_name='tempest-FloatingIPsAssociationTestJSON-146254261-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T14:30:22Z,user_data=None,user_id='0cec872a00f742d78563d6d16fc545cb',uuid=d08682f8-72ef-462c-b4b7-044cf16fc193,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e848e00f-d594-4d70-9026-cd650417bf47", "address": "fa:16:3e:89:9f:c4", "network": {"id": "01e6deef-9aca-4d36-8215-4517982a86a3", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-70171378-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "78f151250c04467bb4f6a229dda16fc5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape848e00f-d5", "ovs_interfaceid": "e848e00f-d594-4d70-9026-cd650417bf47", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 20 14:30:41 compute-1 nova_compute[225855]: 2026-01-20 14:30:41.074 225859 DEBUG nova.network.os_vif_util [None req-75591914-9108-47b2-a433-16634e89d62b 0cec872a00f742d78563d6d16fc545cb 78f151250c04467bb4f6a229dda16fc5 - - default default] Converting VIF {"id": "e848e00f-d594-4d70-9026-cd650417bf47", "address": "fa:16:3e:89:9f:c4", "network": {"id": "01e6deef-9aca-4d36-8215-4517982a86a3", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-70171378-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "78f151250c04467bb4f6a229dda16fc5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape848e00f-d5", "ovs_interfaceid": "e848e00f-d594-4d70-9026-cd650417bf47", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:30:41 compute-1 nova_compute[225855]: 2026-01-20 14:30:41.075 225859 DEBUG nova.network.os_vif_util [None req-75591914-9108-47b2-a433-16634e89d62b 0cec872a00f742d78563d6d16fc545cb 78f151250c04467bb4f6a229dda16fc5 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:89:9f:c4,bridge_name='br-int',has_traffic_filtering=True,id=e848e00f-d594-4d70-9026-cd650417bf47,network=Network(01e6deef-9aca-4d36-8215-4517982a86a3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape848e00f-d5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:30:41 compute-1 nova_compute[225855]: 2026-01-20 14:30:41.075 225859 DEBUG os_vif [None req-75591914-9108-47b2-a433-16634e89d62b 0cec872a00f742d78563d6d16fc545cb 78f151250c04467bb4f6a229dda16fc5 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:89:9f:c4,bridge_name='br-int',has_traffic_filtering=True,id=e848e00f-d594-4d70-9026-cd650417bf47,network=Network(01e6deef-9aca-4d36-8215-4517982a86a3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape848e00f-d5') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 20 14:30:41 compute-1 nova_compute[225855]: 2026-01-20 14:30:41.076 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:30:41 compute-1 nova_compute[225855]: 2026-01-20 14:30:41.077 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape848e00f-d5, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:30:41 compute-1 nova_compute[225855]: 2026-01-20 14:30:41.079 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:30:41 compute-1 nova_compute[225855]: 2026-01-20 14:30:41.081 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 14:30:41 compute-1 nova_compute[225855]: 2026-01-20 14:30:41.083 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:30:41 compute-1 nova_compute[225855]: 2026-01-20 14:30:41.087 225859 INFO os_vif [None req-75591914-9108-47b2-a433-16634e89d62b 0cec872a00f742d78563d6d16fc545cb 78f151250c04467bb4f6a229dda16fc5 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:89:9f:c4,bridge_name='br-int',has_traffic_filtering=True,id=e848e00f-d594-4d70-9026-cd650417bf47,network=Network(01e6deef-9aca-4d36-8215-4517982a86a3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape848e00f-d5')
Jan 20 14:30:41 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:30:41 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:30:41 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:30:41.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:30:41 compute-1 neutron-haproxy-ovnmeta-01e6deef-9aca-4d36-8215-4517982a86a3[240438]: [NOTICE]   (240460) : haproxy version is 2.8.14-c23fe91
Jan 20 14:30:41 compute-1 neutron-haproxy-ovnmeta-01e6deef-9aca-4d36-8215-4517982a86a3[240438]: [NOTICE]   (240460) : path to executable is /usr/sbin/haproxy
Jan 20 14:30:41 compute-1 neutron-haproxy-ovnmeta-01e6deef-9aca-4d36-8215-4517982a86a3[240438]: [WARNING]  (240460) : Exiting Master process...
Jan 20 14:30:41 compute-1 neutron-haproxy-ovnmeta-01e6deef-9aca-4d36-8215-4517982a86a3[240438]: [ALERT]    (240460) : Current worker (240463) exited with code 143 (Terminated)
Jan 20 14:30:41 compute-1 neutron-haproxy-ovnmeta-01e6deef-9aca-4d36-8215-4517982a86a3[240438]: [WARNING]  (240460) : All workers exited. Exiting... (0)
Jan 20 14:30:41 compute-1 systemd[1]: libpod-5d6a44e3e459bb355cdf6f9a47a3cf4c054c368c8501bc071f9e3e640145abc6.scope: Deactivated successfully.
Jan 20 14:30:41 compute-1 podman[240743]: 2026-01-20 14:30:41.104568905 +0000 UTC m=+0.071630205 container died 5d6a44e3e459bb355cdf6f9a47a3cf4c054c368c8501bc071f9e3e640145abc6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-01e6deef-9aca-4d36-8215-4517982a86a3, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Jan 20 14:30:41 compute-1 systemd[1]: var-lib-containers-storage-overlay-3c6a841c35edcc3f8a6f139ec1d95e7afdb44a50568f11f14c28989dc0834ec4-merged.mount: Deactivated successfully.
Jan 20 14:30:41 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5d6a44e3e459bb355cdf6f9a47a3cf4c054c368c8501bc071f9e3e640145abc6-userdata-shm.mount: Deactivated successfully.
Jan 20 14:30:41 compute-1 podman[240743]: 2026-01-20 14:30:41.157706289 +0000 UTC m=+0.124767569 container cleanup 5d6a44e3e459bb355cdf6f9a47a3cf4c054c368c8501bc071f9e3e640145abc6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-01e6deef-9aca-4d36-8215-4517982a86a3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Jan 20 14:30:41 compute-1 systemd[1]: libpod-conmon-5d6a44e3e459bb355cdf6f9a47a3cf4c054c368c8501bc071f9e3e640145abc6.scope: Deactivated successfully.
Jan 20 14:30:41 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/3766126878' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:30:41 compute-1 podman[240801]: 2026-01-20 14:30:41.233001266 +0000 UTC m=+0.052978861 container remove 5d6a44e3e459bb355cdf6f9a47a3cf4c054c368c8501bc071f9e3e640145abc6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-01e6deef-9aca-4d36-8215-4517982a86a3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 20 14:30:41 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:30:41.240 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[7d81d470-1833-48db-838c-3f5fd3fcc975]: (4, ('Tue Jan 20 02:30:41 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-01e6deef-9aca-4d36-8215-4517982a86a3 (5d6a44e3e459bb355cdf6f9a47a3cf4c054c368c8501bc071f9e3e640145abc6)\n5d6a44e3e459bb355cdf6f9a47a3cf4c054c368c8501bc071f9e3e640145abc6\nTue Jan 20 02:30:41 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-01e6deef-9aca-4d36-8215-4517982a86a3 (5d6a44e3e459bb355cdf6f9a47a3cf4c054c368c8501bc071f9e3e640145abc6)\n5d6a44e3e459bb355cdf6f9a47a3cf4c054c368c8501bc071f9e3e640145abc6\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:30:41 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:30:41.242 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[ea43f60b-752e-4341-97ca-068215ae09f5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:30:41 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:30:41.243 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap01e6deef-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:30:41 compute-1 nova_compute[225855]: 2026-01-20 14:30:41.245 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:30:41 compute-1 kernel: tap01e6deef-90: left promiscuous mode
Jan 20 14:30:41 compute-1 nova_compute[225855]: 2026-01-20 14:30:41.262 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:30:41 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:30:41.266 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[ffadfbec-5911-4589-9698-4bacd5f3306c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:30:41 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:30:41.282 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[c9592f51-b119-443e-8df2-0707ac53e830]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:30:41 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:30:41.283 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[ee09ed13-b0e7-4d37-9d51-8531b76900e6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:30:41 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:30:41.300 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[97026f3f-30ce-43ed-809c-3f6ccc8ba5dd]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 446051, 'reachable_time': 23571, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 240816, 'error': None, 'target': 'ovnmeta-01e6deef-9aca-4d36-8215-4517982a86a3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:30:41 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:30:41.303 140466 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-01e6deef-9aca-4d36-8215-4517982a86a3 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 20 14:30:41 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:30:41.303 140466 DEBUG oslo.privsep.daemon [-] privsep: reply[2afd49a1-396f-49f2-98c1-792f469e4345]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:30:41 compute-1 systemd[1]: run-netns-ovnmeta\x2d01e6deef\x2d9aca\x2d4d36\x2d8215\x2d4517982a86a3.mount: Deactivated successfully.
Jan 20 14:30:41 compute-1 nova_compute[225855]: 2026-01-20 14:30:41.497 225859 INFO nova.virt.libvirt.driver [None req-75591914-9108-47b2-a433-16634e89d62b 0cec872a00f742d78563d6d16fc545cb 78f151250c04467bb4f6a229dda16fc5 - - default default] [instance: d08682f8-72ef-462c-b4b7-044cf16fc193] Deleting instance files /var/lib/nova/instances/d08682f8-72ef-462c-b4b7-044cf16fc193_del
Jan 20 14:30:41 compute-1 nova_compute[225855]: 2026-01-20 14:30:41.498 225859 INFO nova.virt.libvirt.driver [None req-75591914-9108-47b2-a433-16634e89d62b 0cec872a00f742d78563d6d16fc545cb 78f151250c04467bb4f6a229dda16fc5 - - default default] [instance: d08682f8-72ef-462c-b4b7-044cf16fc193] Deletion of /var/lib/nova/instances/d08682f8-72ef-462c-b4b7-044cf16fc193_del complete
Jan 20 14:30:41 compute-1 nova_compute[225855]: 2026-01-20 14:30:41.578 225859 INFO nova.compute.manager [None req-75591914-9108-47b2-a433-16634e89d62b 0cec872a00f742d78563d6d16fc545cb 78f151250c04467bb4f6a229dda16fc5 - - default default] [instance: d08682f8-72ef-462c-b4b7-044cf16fc193] Took 0.76 seconds to destroy the instance on the hypervisor.
Jan 20 14:30:41 compute-1 nova_compute[225855]: 2026-01-20 14:30:41.579 225859 DEBUG oslo.service.loopingcall [None req-75591914-9108-47b2-a433-16634e89d62b 0cec872a00f742d78563d6d16fc545cb 78f151250c04467bb4f6a229dda16fc5 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 20 14:30:41 compute-1 nova_compute[225855]: 2026-01-20 14:30:41.579 225859 DEBUG nova.compute.manager [-] [instance: d08682f8-72ef-462c-b4b7-044cf16fc193] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 20 14:30:41 compute-1 nova_compute[225855]: 2026-01-20 14:30:41.579 225859 DEBUG nova.network.neutron [-] [instance: d08682f8-72ef-462c-b4b7-044cf16fc193] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 20 14:30:41 compute-1 nova_compute[225855]: 2026-01-20 14:30:41.737 225859 DEBUG nova.compute.manager [req-069e3248-ffcc-4ef3-b5c9-20fa54841ad8 req-8559ef34-6398-4700-8212-25fa2d987a16 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d08682f8-72ef-462c-b4b7-044cf16fc193] Received event network-vif-unplugged-e848e00f-d594-4d70-9026-cd650417bf47 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:30:41 compute-1 nova_compute[225855]: 2026-01-20 14:30:41.737 225859 DEBUG oslo_concurrency.lockutils [req-069e3248-ffcc-4ef3-b5c9-20fa54841ad8 req-8559ef34-6398-4700-8212-25fa2d987a16 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "d08682f8-72ef-462c-b4b7-044cf16fc193-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:30:41 compute-1 nova_compute[225855]: 2026-01-20 14:30:41.738 225859 DEBUG oslo_concurrency.lockutils [req-069e3248-ffcc-4ef3-b5c9-20fa54841ad8 req-8559ef34-6398-4700-8212-25fa2d987a16 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "d08682f8-72ef-462c-b4b7-044cf16fc193-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:30:41 compute-1 nova_compute[225855]: 2026-01-20 14:30:41.738 225859 DEBUG oslo_concurrency.lockutils [req-069e3248-ffcc-4ef3-b5c9-20fa54841ad8 req-8559ef34-6398-4700-8212-25fa2d987a16 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "d08682f8-72ef-462c-b4b7-044cf16fc193-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:30:41 compute-1 nova_compute[225855]: 2026-01-20 14:30:41.739 225859 DEBUG nova.compute.manager [req-069e3248-ffcc-4ef3-b5c9-20fa54841ad8 req-8559ef34-6398-4700-8212-25fa2d987a16 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d08682f8-72ef-462c-b4b7-044cf16fc193] No waiting events found dispatching network-vif-unplugged-e848e00f-d594-4d70-9026-cd650417bf47 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:30:41 compute-1 nova_compute[225855]: 2026-01-20 14:30:41.739 225859 DEBUG nova.compute.manager [req-069e3248-ffcc-4ef3-b5c9-20fa54841ad8 req-8559ef34-6398-4700-8212-25fa2d987a16 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d08682f8-72ef-462c-b4b7-044cf16fc193] Received event network-vif-unplugged-e848e00f-d594-4d70-9026-cd650417bf47 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 20 14:30:42 compute-1 ceph-mon[81775]: pgmap v1201: 321 pgs: 321 active+clean; 681 MiB data, 750 MiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 5.5 MiB/s wr, 341 op/s
Jan 20 14:30:42 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:30:42 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:30:42 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:30:42.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:30:42 compute-1 nova_compute[225855]: 2026-01-20 14:30:42.445 225859 DEBUG nova.network.neutron [-] [instance: d08682f8-72ef-462c-b4b7-044cf16fc193] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:30:42 compute-1 nova_compute[225855]: 2026-01-20 14:30:42.485 225859 INFO nova.compute.manager [-] [instance: d08682f8-72ef-462c-b4b7-044cf16fc193] Took 0.91 seconds to deallocate network for instance.
Jan 20 14:30:42 compute-1 nova_compute[225855]: 2026-01-20 14:30:42.558 225859 DEBUG oslo_concurrency.lockutils [None req-75591914-9108-47b2-a433-16634e89d62b 0cec872a00f742d78563d6d16fc545cb 78f151250c04467bb4f6a229dda16fc5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:30:42 compute-1 nova_compute[225855]: 2026-01-20 14:30:42.559 225859 DEBUG oslo_concurrency.lockutils [None req-75591914-9108-47b2-a433-16634e89d62b 0cec872a00f742d78563d6d16fc545cb 78f151250c04467bb4f6a229dda16fc5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:30:42 compute-1 nova_compute[225855]: 2026-01-20 14:30:42.637 225859 DEBUG nova.compute.manager [req-c15082bf-cb54-45d2-99f9-c1112a9f7ec7 req-8085d03c-b330-4294-a9dc-8420f2a28abb 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d08682f8-72ef-462c-b4b7-044cf16fc193] Received event network-vif-deleted-e848e00f-d594-4d70-9026-cd650417bf47 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:30:42 compute-1 nova_compute[225855]: 2026-01-20 14:30:42.669 225859 DEBUG oslo_concurrency.processutils [None req-75591914-9108-47b2-a433-16634e89d62b 0cec872a00f742d78563d6d16fc545cb 78f151250c04467bb4f6a229dda16fc5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:30:43 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:30:43.049 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=11, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '12:bb:42', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '06:92:24:f7:15:56'}, ipsec=False) old=SB_Global(nb_cfg=10) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:30:43 compute-1 nova_compute[225855]: 2026-01-20 14:30:43.049 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:30:43 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:30:43.051 140354 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 20 14:30:43 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:30:43 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2733108454' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:30:43 compute-1 nova_compute[225855]: 2026-01-20 14:30:43.084 225859 DEBUG oslo_concurrency.processutils [None req-75591914-9108-47b2-a433-16634e89d62b 0cec872a00f742d78563d6d16fc545cb 78f151250c04467bb4f6a229dda16fc5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.415s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:30:43 compute-1 nova_compute[225855]: 2026-01-20 14:30:43.090 225859 DEBUG nova.compute.provider_tree [None req-75591914-9108-47b2-a433-16634e89d62b 0cec872a00f742d78563d6d16fc545cb 78f151250c04467bb4f6a229dda16fc5 - - default default] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:30:43 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:30:43 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:30:43 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:30:43.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:30:43 compute-1 nova_compute[225855]: 2026-01-20 14:30:43.111 225859 DEBUG nova.scheduler.client.report [None req-75591914-9108-47b2-a433-16634e89d62b 0cec872a00f742d78563d6d16fc545cb 78f151250c04467bb4f6a229dda16fc5 - - default default] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:30:43 compute-1 nova_compute[225855]: 2026-01-20 14:30:43.134 225859 DEBUG oslo_concurrency.lockutils [None req-75591914-9108-47b2-a433-16634e89d62b 0cec872a00f742d78563d6d16fc545cb 78f151250c04467bb4f6a229dda16fc5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.575s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:30:43 compute-1 nova_compute[225855]: 2026-01-20 14:30:43.169 225859 INFO nova.scheduler.client.report [None req-75591914-9108-47b2-a433-16634e89d62b 0cec872a00f742d78563d6d16fc545cb 78f151250c04467bb4f6a229dda16fc5 - - default default] Deleted allocations for instance d08682f8-72ef-462c-b4b7-044cf16fc193
Jan 20 14:30:43 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/2733108454' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:30:43 compute-1 nova_compute[225855]: 2026-01-20 14:30:43.252 225859 DEBUG oslo_concurrency.lockutils [None req-75591914-9108-47b2-a433-16634e89d62b 0cec872a00f742d78563d6d16fc545cb 78f151250c04467bb4f6a229dda16fc5 - - default default] Lock "d08682f8-72ef-462c-b4b7-044cf16fc193" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.443s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:30:43 compute-1 nova_compute[225855]: 2026-01-20 14:30:43.846 225859 DEBUG nova.compute.manager [req-ee16a478-3fbe-470c-9254-d5c8b751d4d9 req-9c62e640-afde-4360-9fb0-56786d0cfbe8 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d08682f8-72ef-462c-b4b7-044cf16fc193] Received event network-vif-plugged-e848e00f-d594-4d70-9026-cd650417bf47 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:30:43 compute-1 nova_compute[225855]: 2026-01-20 14:30:43.846 225859 DEBUG oslo_concurrency.lockutils [req-ee16a478-3fbe-470c-9254-d5c8b751d4d9 req-9c62e640-afde-4360-9fb0-56786d0cfbe8 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "d08682f8-72ef-462c-b4b7-044cf16fc193-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:30:43 compute-1 nova_compute[225855]: 2026-01-20 14:30:43.847 225859 DEBUG oslo_concurrency.lockutils [req-ee16a478-3fbe-470c-9254-d5c8b751d4d9 req-9c62e640-afde-4360-9fb0-56786d0cfbe8 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "d08682f8-72ef-462c-b4b7-044cf16fc193-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:30:43 compute-1 nova_compute[225855]: 2026-01-20 14:30:43.848 225859 DEBUG oslo_concurrency.lockutils [req-ee16a478-3fbe-470c-9254-d5c8b751d4d9 req-9c62e640-afde-4360-9fb0-56786d0cfbe8 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "d08682f8-72ef-462c-b4b7-044cf16fc193-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:30:43 compute-1 nova_compute[225855]: 2026-01-20 14:30:43.848 225859 DEBUG nova.compute.manager [req-ee16a478-3fbe-470c-9254-d5c8b751d4d9 req-9c62e640-afde-4360-9fb0-56786d0cfbe8 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d08682f8-72ef-462c-b4b7-044cf16fc193] No waiting events found dispatching network-vif-plugged-e848e00f-d594-4d70-9026-cd650417bf47 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:30:43 compute-1 nova_compute[225855]: 2026-01-20 14:30:43.849 225859 WARNING nova.compute.manager [req-ee16a478-3fbe-470c-9254-d5c8b751d4d9 req-9c62e640-afde-4360-9fb0-56786d0cfbe8 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d08682f8-72ef-462c-b4b7-044cf16fc193] Received unexpected event network-vif-plugged-e848e00f-d594-4d70-9026-cd650417bf47 for instance with vm_state deleted and task_state None.
Jan 20 14:30:44 compute-1 nova_compute[225855]: 2026-01-20 14:30:44.030 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:30:44 compute-1 podman[240842]: 2026-01-20 14:30:44.034682472 +0000 UTC m=+0.067238421 container health_status 533cd8e0c3b8b7f910f15cef49e7f879ab46c12ce646a35628c15f13725eb500 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 20 14:30:44 compute-1 ceph-mon[81775]: pgmap v1202: 321 pgs: 321 active+clean; 681 MiB data, 750 MiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 5.5 MiB/s wr, 341 op/s
Jan 20 14:30:44 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/1040306816' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:30:44 compute-1 nova_compute[225855]: 2026-01-20 14:30:44.291 225859 INFO nova.compute.manager [None req-789e21ba-eed3-4ca4-aee2-f814206d5b1e 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] [instance: d95ca690-20e1-4b0c-919b-d64c9af25eba] Swapping old allocation on dict_keys(['bbb02880-a710-4ac1-8b2c-5c09765848d1']) held by migration 60518ded-c4ce-45b7-a976-2c06150ab129 for instance
Jan 20 14:30:44 compute-1 nova_compute[225855]: 2026-01-20 14:30:44.375 225859 DEBUG nova.scheduler.client.report [None req-789e21ba-eed3-4ca4-aee2-f814206d5b1e 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Overwriting current allocation {'allocations': {'068db7fd-4bd6-45a9-8bd6-a22cfe7596ed': {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}, 'generation': 24}}, 'project_id': 'f3c2e72a7148496394c8bcd618a19c80', 'user_id': '01a3d712f05049b19d4ecc7051720ad5', 'consumer_generation': 1} on consumer d95ca690-20e1-4b0c-919b-d64c9af25eba move_allocations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:2018
Jan 20 14:30:44 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:30:44 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:30:44 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:30:44.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:30:44 compute-1 nova_compute[225855]: 2026-01-20 14:30:44.606 225859 DEBUG oslo_concurrency.lockutils [None req-789e21ba-eed3-4ca4-aee2-f814206d5b1e 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Acquiring lock "refresh_cache-d95ca690-20e1-4b0c-919b-d64c9af25eba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:30:44 compute-1 nova_compute[225855]: 2026-01-20 14:30:44.607 225859 DEBUG oslo_concurrency.lockutils [None req-789e21ba-eed3-4ca4-aee2-f814206d5b1e 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Acquired lock "refresh_cache-d95ca690-20e1-4b0c-919b-d64c9af25eba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:30:44 compute-1 nova_compute[225855]: 2026-01-20 14:30:44.607 225859 DEBUG nova.network.neutron [None req-789e21ba-eed3-4ca4-aee2-f814206d5b1e 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] [instance: d95ca690-20e1-4b0c-919b-d64c9af25eba] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 14:30:44 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:30:44 compute-1 nova_compute[225855]: 2026-01-20 14:30:44.829 225859 DEBUG nova.network.neutron [None req-789e21ba-eed3-4ca4-aee2-f814206d5b1e 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] [instance: d95ca690-20e1-4b0c-919b-d64c9af25eba] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 20 14:30:45 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:30:45 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:30:45 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:30:45.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:30:45 compute-1 nova_compute[225855]: 2026-01-20 14:30:45.138 225859 DEBUG nova.network.neutron [None req-789e21ba-eed3-4ca4-aee2-f814206d5b1e 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] [instance: d95ca690-20e1-4b0c-919b-d64c9af25eba] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:30:45 compute-1 nova_compute[225855]: 2026-01-20 14:30:45.153 225859 DEBUG oslo_concurrency.lockutils [None req-789e21ba-eed3-4ca4-aee2-f814206d5b1e 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Releasing lock "refresh_cache-d95ca690-20e1-4b0c-919b-d64c9af25eba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:30:45 compute-1 nova_compute[225855]: 2026-01-20 14:30:45.154 225859 DEBUG nova.virt.libvirt.driver [None req-789e21ba-eed3-4ca4-aee2-f814206d5b1e 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] [instance: d95ca690-20e1-4b0c-919b-d64c9af25eba] Starting finish_revert_migration finish_revert_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11843
Jan 20 14:30:45 compute-1 nova_compute[225855]: 2026-01-20 14:30:45.248 225859 DEBUG nova.storage.rbd_utils [None req-789e21ba-eed3-4ca4-aee2-f814206d5b1e 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] rolling back rbd image(d95ca690-20e1-4b0c-919b-d64c9af25eba_disk) to snapshot(nova-resize) rollback_to_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:505
Jan 20 14:30:45 compute-1 nova_compute[225855]: 2026-01-20 14:30:45.374 225859 DEBUG nova.storage.rbd_utils [None req-789e21ba-eed3-4ca4-aee2-f814206d5b1e 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] removing snapshot(nova-resize) on rbd image(d95ca690-20e1-4b0c-919b-d64c9af25eba_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Jan 20 14:30:46 compute-1 nova_compute[225855]: 2026-01-20 14:30:46.080 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:30:46 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:30:46 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:30:46 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:30:46.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:30:46 compute-1 ceph-mon[81775]: pgmap v1203: 321 pgs: 321 active+clean; 632 MiB data, 720 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 4.0 MiB/s wr, 326 op/s
Jan 20 14:30:46 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e164 e164: 3 total, 3 up, 3 in
Jan 20 14:30:46 compute-1 nova_compute[225855]: 2026-01-20 14:30:46.941 225859 DEBUG nova.virt.libvirt.driver [None req-789e21ba-eed3-4ca4-aee2-f814206d5b1e 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] [instance: d95ca690-20e1-4b0c-919b-d64c9af25eba] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'device_type': 'disk', 'encryption_options': None, 'size': 0, 'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'guest_format': None, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'image_id': 'a32b3e07-16d8-46fd-9a7b-c242c432fcf9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 20 14:30:46 compute-1 nova_compute[225855]: 2026-01-20 14:30:46.948 225859 WARNING nova.virt.libvirt.driver [None req-789e21ba-eed3-4ca4-aee2-f814206d5b1e 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 14:30:46 compute-1 nova_compute[225855]: 2026-01-20 14:30:46.956 225859 DEBUG nova.virt.libvirt.host [None req-789e21ba-eed3-4ca4-aee2-f814206d5b1e 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 20 14:30:46 compute-1 nova_compute[225855]: 2026-01-20 14:30:46.957 225859 DEBUG nova.virt.libvirt.host [None req-789e21ba-eed3-4ca4-aee2-f814206d5b1e 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 20 14:30:46 compute-1 nova_compute[225855]: 2026-01-20 14:30:46.961 225859 DEBUG nova.virt.libvirt.host [None req-789e21ba-eed3-4ca4-aee2-f814206d5b1e 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 20 14:30:46 compute-1 nova_compute[225855]: 2026-01-20 14:30:46.962 225859 DEBUG nova.virt.libvirt.host [None req-789e21ba-eed3-4ca4-aee2-f814206d5b1e 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 20 14:30:46 compute-1 nova_compute[225855]: 2026-01-20 14:30:46.964 225859 DEBUG nova.virt.libvirt.driver [None req-789e21ba-eed3-4ca4-aee2-f814206d5b1e 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 20 14:30:46 compute-1 nova_compute[225855]: 2026-01-20 14:30:46.965 225859 DEBUG nova.virt.hardware [None req-789e21ba-eed3-4ca4-aee2-f814206d5b1e 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-20T14:21:55Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='522deaab-a741-4dbb-932d-d8b13a211c33',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 20 14:30:46 compute-1 nova_compute[225855]: 2026-01-20 14:30:46.966 225859 DEBUG nova.virt.hardware [None req-789e21ba-eed3-4ca4-aee2-f814206d5b1e 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 20 14:30:46 compute-1 nova_compute[225855]: 2026-01-20 14:30:46.966 225859 DEBUG nova.virt.hardware [None req-789e21ba-eed3-4ca4-aee2-f814206d5b1e 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 20 14:30:46 compute-1 nova_compute[225855]: 2026-01-20 14:30:46.967 225859 DEBUG nova.virt.hardware [None req-789e21ba-eed3-4ca4-aee2-f814206d5b1e 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 20 14:30:46 compute-1 nova_compute[225855]: 2026-01-20 14:30:46.967 225859 DEBUG nova.virt.hardware [None req-789e21ba-eed3-4ca4-aee2-f814206d5b1e 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 20 14:30:46 compute-1 nova_compute[225855]: 2026-01-20 14:30:46.968 225859 DEBUG nova.virt.hardware [None req-789e21ba-eed3-4ca4-aee2-f814206d5b1e 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 20 14:30:46 compute-1 nova_compute[225855]: 2026-01-20 14:30:46.969 225859 DEBUG nova.virt.hardware [None req-789e21ba-eed3-4ca4-aee2-f814206d5b1e 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 20 14:30:46 compute-1 nova_compute[225855]: 2026-01-20 14:30:46.969 225859 DEBUG nova.virt.hardware [None req-789e21ba-eed3-4ca4-aee2-f814206d5b1e 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 20 14:30:46 compute-1 nova_compute[225855]: 2026-01-20 14:30:46.970 225859 DEBUG nova.virt.hardware [None req-789e21ba-eed3-4ca4-aee2-f814206d5b1e 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 20 14:30:46 compute-1 nova_compute[225855]: 2026-01-20 14:30:46.971 225859 DEBUG nova.virt.hardware [None req-789e21ba-eed3-4ca4-aee2-f814206d5b1e 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 20 14:30:46 compute-1 nova_compute[225855]: 2026-01-20 14:30:46.971 225859 DEBUG nova.virt.hardware [None req-789e21ba-eed3-4ca4-aee2-f814206d5b1e 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 20 14:30:46 compute-1 nova_compute[225855]: 2026-01-20 14:30:46.972 225859 DEBUG nova.objects.instance [None req-789e21ba-eed3-4ca4-aee2-f814206d5b1e 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Lazy-loading 'vcpu_model' on Instance uuid d95ca690-20e1-4b0c-919b-d64c9af25eba obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:30:46 compute-1 nova_compute[225855]: 2026-01-20 14:30:46.992 225859 DEBUG oslo_concurrency.processutils [None req-789e21ba-eed3-4ca4-aee2-f814206d5b1e 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:30:47 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:30:47 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:30:47 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:30:47.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:30:47 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:30:47 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3386445853' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:30:47 compute-1 nova_compute[225855]: 2026-01-20 14:30:47.479 225859 DEBUG oslo_concurrency.processutils [None req-789e21ba-eed3-4ca4-aee2-f814206d5b1e 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:30:47 compute-1 nova_compute[225855]: 2026-01-20 14:30:47.510 225859 DEBUG oslo_concurrency.processutils [None req-789e21ba-eed3-4ca4-aee2-f814206d5b1e 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:30:47 compute-1 ceph-mon[81775]: osdmap e164: 3 total, 3 up, 3 in
Jan 20 14:30:47 compute-1 ceph-mon[81775]: pgmap v1205: 321 pgs: 321 active+clean; 602 MiB data, 703 MiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 2.0 MiB/s wr, 279 op/s
Jan 20 14:30:47 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/3386445853' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:30:47 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:30:47 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/462858156' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:30:47 compute-1 nova_compute[225855]: 2026-01-20 14:30:47.932 225859 DEBUG oslo_concurrency.processutils [None req-789e21ba-eed3-4ca4-aee2-f814206d5b1e 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.421s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:30:47 compute-1 nova_compute[225855]: 2026-01-20 14:30:47.937 225859 DEBUG nova.virt.libvirt.driver [None req-789e21ba-eed3-4ca4-aee2-f814206d5b1e 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] [instance: d95ca690-20e1-4b0c-919b-d64c9af25eba] End _get_guest_xml xml=<domain type="kvm">
Jan 20 14:30:47 compute-1 nova_compute[225855]:   <uuid>d95ca690-20e1-4b0c-919b-d64c9af25eba</uuid>
Jan 20 14:30:47 compute-1 nova_compute[225855]:   <name>instance-0000001d</name>
Jan 20 14:30:47 compute-1 nova_compute[225855]:   <memory>131072</memory>
Jan 20 14:30:47 compute-1 nova_compute[225855]:   <vcpu>1</vcpu>
Jan 20 14:30:47 compute-1 nova_compute[225855]:   <metadata>
Jan 20 14:30:47 compute-1 nova_compute[225855]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 14:30:47 compute-1 nova_compute[225855]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 14:30:47 compute-1 nova_compute[225855]:       <nova:name>tempest-MigrationsAdminTest-server-1542965426</nova:name>
Jan 20 14:30:47 compute-1 nova_compute[225855]:       <nova:creationTime>2026-01-20 14:30:46</nova:creationTime>
Jan 20 14:30:47 compute-1 nova_compute[225855]:       <nova:flavor name="m1.nano">
Jan 20 14:30:47 compute-1 nova_compute[225855]:         <nova:memory>128</nova:memory>
Jan 20 14:30:47 compute-1 nova_compute[225855]:         <nova:disk>1</nova:disk>
Jan 20 14:30:47 compute-1 nova_compute[225855]:         <nova:swap>0</nova:swap>
Jan 20 14:30:47 compute-1 nova_compute[225855]:         <nova:ephemeral>0</nova:ephemeral>
Jan 20 14:30:47 compute-1 nova_compute[225855]:         <nova:vcpus>1</nova:vcpus>
Jan 20 14:30:47 compute-1 nova_compute[225855]:       </nova:flavor>
Jan 20 14:30:47 compute-1 nova_compute[225855]:       <nova:owner>
Jan 20 14:30:47 compute-1 nova_compute[225855]:         <nova:user uuid="01a3d712f05049b19d4ecc7051720ad5">tempest-MigrationsAdminTest-1518611738-project-member</nova:user>
Jan 20 14:30:47 compute-1 nova_compute[225855]:         <nova:project uuid="f3c2e72a7148496394c8bcd618a19c80">tempest-MigrationsAdminTest-1518611738</nova:project>
Jan 20 14:30:47 compute-1 nova_compute[225855]:       </nova:owner>
Jan 20 14:30:47 compute-1 nova_compute[225855]:       <nova:root type="image" uuid="a32b3e07-16d8-46fd-9a7b-c242c432fcf9"/>
Jan 20 14:30:47 compute-1 nova_compute[225855]:       <nova:ports/>
Jan 20 14:30:47 compute-1 nova_compute[225855]:     </nova:instance>
Jan 20 14:30:47 compute-1 nova_compute[225855]:   </metadata>
Jan 20 14:30:47 compute-1 nova_compute[225855]:   <sysinfo type="smbios">
Jan 20 14:30:47 compute-1 nova_compute[225855]:     <system>
Jan 20 14:30:47 compute-1 nova_compute[225855]:       <entry name="manufacturer">RDO</entry>
Jan 20 14:30:47 compute-1 nova_compute[225855]:       <entry name="product">OpenStack Compute</entry>
Jan 20 14:30:47 compute-1 nova_compute[225855]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 14:30:47 compute-1 nova_compute[225855]:       <entry name="serial">d95ca690-20e1-4b0c-919b-d64c9af25eba</entry>
Jan 20 14:30:47 compute-1 nova_compute[225855]:       <entry name="uuid">d95ca690-20e1-4b0c-919b-d64c9af25eba</entry>
Jan 20 14:30:47 compute-1 nova_compute[225855]:       <entry name="family">Virtual Machine</entry>
Jan 20 14:30:47 compute-1 nova_compute[225855]:     </system>
Jan 20 14:30:47 compute-1 nova_compute[225855]:   </sysinfo>
Jan 20 14:30:47 compute-1 nova_compute[225855]:   <os>
Jan 20 14:30:47 compute-1 nova_compute[225855]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 20 14:30:47 compute-1 nova_compute[225855]:     <boot dev="hd"/>
Jan 20 14:30:47 compute-1 nova_compute[225855]:     <smbios mode="sysinfo"/>
Jan 20 14:30:47 compute-1 nova_compute[225855]:   </os>
Jan 20 14:30:47 compute-1 nova_compute[225855]:   <features>
Jan 20 14:30:47 compute-1 nova_compute[225855]:     <acpi/>
Jan 20 14:30:47 compute-1 nova_compute[225855]:     <apic/>
Jan 20 14:30:47 compute-1 nova_compute[225855]:     <vmcoreinfo/>
Jan 20 14:30:47 compute-1 nova_compute[225855]:   </features>
Jan 20 14:30:47 compute-1 nova_compute[225855]:   <clock offset="utc">
Jan 20 14:30:47 compute-1 nova_compute[225855]:     <timer name="pit" tickpolicy="delay"/>
Jan 20 14:30:47 compute-1 nova_compute[225855]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 20 14:30:47 compute-1 nova_compute[225855]:     <timer name="hpet" present="no"/>
Jan 20 14:30:47 compute-1 nova_compute[225855]:   </clock>
Jan 20 14:30:47 compute-1 nova_compute[225855]:   <cpu mode="custom" match="exact">
Jan 20 14:30:47 compute-1 nova_compute[225855]:     <model>Nehalem</model>
Jan 20 14:30:47 compute-1 nova_compute[225855]:     <topology sockets="1" cores="1" threads="1"/>
Jan 20 14:30:47 compute-1 nova_compute[225855]:   </cpu>
Jan 20 14:30:47 compute-1 nova_compute[225855]:   <devices>
Jan 20 14:30:47 compute-1 nova_compute[225855]:     <disk type="network" device="disk">
Jan 20 14:30:47 compute-1 nova_compute[225855]:       <driver type="raw" cache="none"/>
Jan 20 14:30:47 compute-1 nova_compute[225855]:       <source protocol="rbd" name="vms/d95ca690-20e1-4b0c-919b-d64c9af25eba_disk">
Jan 20 14:30:47 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:30:47 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:30:47 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:30:47 compute-1 nova_compute[225855]:       </source>
Jan 20 14:30:47 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 14:30:47 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:30:47 compute-1 nova_compute[225855]:       </auth>
Jan 20 14:30:47 compute-1 nova_compute[225855]:       <target dev="vda" bus="virtio"/>
Jan 20 14:30:47 compute-1 nova_compute[225855]:     </disk>
Jan 20 14:30:47 compute-1 nova_compute[225855]:     <disk type="network" device="cdrom">
Jan 20 14:30:47 compute-1 nova_compute[225855]:       <driver type="raw" cache="none"/>
Jan 20 14:30:47 compute-1 nova_compute[225855]:       <source protocol="rbd" name="vms/d95ca690-20e1-4b0c-919b-d64c9af25eba_disk.config">
Jan 20 14:30:47 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:30:47 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:30:47 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:30:47 compute-1 nova_compute[225855]:       </source>
Jan 20 14:30:47 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 14:30:47 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:30:47 compute-1 nova_compute[225855]:       </auth>
Jan 20 14:30:47 compute-1 nova_compute[225855]:       <target dev="sda" bus="sata"/>
Jan 20 14:30:47 compute-1 nova_compute[225855]:     </disk>
Jan 20 14:30:47 compute-1 nova_compute[225855]:     <serial type="pty">
Jan 20 14:30:47 compute-1 nova_compute[225855]:       <log file="/var/lib/nova/instances/d95ca690-20e1-4b0c-919b-d64c9af25eba/console.log" append="off"/>
Jan 20 14:30:47 compute-1 nova_compute[225855]:     </serial>
Jan 20 14:30:47 compute-1 nova_compute[225855]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 20 14:30:47 compute-1 nova_compute[225855]:     <video>
Jan 20 14:30:47 compute-1 nova_compute[225855]:       <model type="virtio"/>
Jan 20 14:30:47 compute-1 nova_compute[225855]:     </video>
Jan 20 14:30:47 compute-1 nova_compute[225855]:     <input type="tablet" bus="usb"/>
Jan 20 14:30:47 compute-1 nova_compute[225855]:     <input type="keyboard" bus="usb"/>
Jan 20 14:30:47 compute-1 nova_compute[225855]:     <rng model="virtio">
Jan 20 14:30:47 compute-1 nova_compute[225855]:       <backend model="random">/dev/urandom</backend>
Jan 20 14:30:47 compute-1 nova_compute[225855]:     </rng>
Jan 20 14:30:47 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root"/>
Jan 20 14:30:47 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:30:47 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:30:47 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:30:47 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:30:47 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:30:47 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:30:47 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:30:47 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:30:47 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:30:47 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:30:47 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:30:47 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:30:47 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:30:47 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:30:47 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:30:47 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:30:47 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:30:47 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:30:47 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:30:47 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:30:47 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:30:47 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:30:47 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:30:47 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:30:47 compute-1 nova_compute[225855]:     <controller type="usb" index="0"/>
Jan 20 14:30:47 compute-1 nova_compute[225855]:     <memballoon model="virtio">
Jan 20 14:30:47 compute-1 nova_compute[225855]:       <stats period="10"/>
Jan 20 14:30:47 compute-1 nova_compute[225855]:     </memballoon>
Jan 20 14:30:47 compute-1 nova_compute[225855]:   </devices>
Jan 20 14:30:47 compute-1 nova_compute[225855]: </domain>
Jan 20 14:30:47 compute-1 nova_compute[225855]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 20 14:30:48 compute-1 systemd-machined[194361]: New machine qemu-17-instance-0000001d.
Jan 20 14:30:48 compute-1 systemd[1]: Started Virtual Machine qemu-17-instance-0000001d.
Jan 20 14:30:48 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:30:48 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:30:48 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:30:48.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:30:48 compute-1 nova_compute[225855]: 2026-01-20 14:30:48.625 225859 DEBUG nova.virt.libvirt.host [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Removed pending event for d95ca690-20e1-4b0c-919b-d64c9af25eba due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Jan 20 14:30:48 compute-1 nova_compute[225855]: 2026-01-20 14:30:48.626 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768919448.6250632, d95ca690-20e1-4b0c-919b-d64c9af25eba => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:30:48 compute-1 nova_compute[225855]: 2026-01-20 14:30:48.626 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: d95ca690-20e1-4b0c-919b-d64c9af25eba] VM Resumed (Lifecycle Event)
Jan 20 14:30:48 compute-1 nova_compute[225855]: 2026-01-20 14:30:48.630 225859 DEBUG nova.compute.manager [None req-789e21ba-eed3-4ca4-aee2-f814206d5b1e 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] [instance: d95ca690-20e1-4b0c-919b-d64c9af25eba] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 20 14:30:48 compute-1 nova_compute[225855]: 2026-01-20 14:30:48.636 225859 INFO nova.virt.libvirt.driver [-] [instance: d95ca690-20e1-4b0c-919b-d64c9af25eba] Instance running successfully.
Jan 20 14:30:48 compute-1 nova_compute[225855]: 2026-01-20 14:30:48.637 225859 DEBUG nova.virt.libvirt.driver [None req-789e21ba-eed3-4ca4-aee2-f814206d5b1e 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] [instance: d95ca690-20e1-4b0c-919b-d64c9af25eba] finish_revert_migration finished successfully. finish_revert_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11887
Jan 20 14:30:48 compute-1 nova_compute[225855]: 2026-01-20 14:30:48.646 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: d95ca690-20e1-4b0c-919b-d64c9af25eba] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:30:48 compute-1 nova_compute[225855]: 2026-01-20 14:30:48.651 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: d95ca690-20e1-4b0c-919b-d64c9af25eba] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: resized, current task_state: resize_reverting, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:30:48 compute-1 nova_compute[225855]: 2026-01-20 14:30:48.674 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: d95ca690-20e1-4b0c-919b-d64c9af25eba] During sync_power_state the instance has a pending task (resize_reverting). Skip.
Jan 20 14:30:48 compute-1 nova_compute[225855]: 2026-01-20 14:30:48.674 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768919448.6295831, d95ca690-20e1-4b0c-919b-d64c9af25eba => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:30:48 compute-1 nova_compute[225855]: 2026-01-20 14:30:48.675 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: d95ca690-20e1-4b0c-919b-d64c9af25eba] VM Started (Lifecycle Event)
Jan 20 14:30:48 compute-1 nova_compute[225855]: 2026-01-20 14:30:48.712 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: d95ca690-20e1-4b0c-919b-d64c9af25eba] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:30:48 compute-1 nova_compute[225855]: 2026-01-20 14:30:48.717 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: d95ca690-20e1-4b0c-919b-d64c9af25eba] Synchronizing instance power state after lifecycle event "Started"; current vm_state: resized, current task_state: resize_reverting, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:30:48 compute-1 nova_compute[225855]: 2026-01-20 14:30:48.735 225859 INFO nova.compute.manager [None req-789e21ba-eed3-4ca4-aee2-f814206d5b1e 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] [instance: d95ca690-20e1-4b0c-919b-d64c9af25eba] Updating instance to original state: 'active'
Jan 20 14:30:48 compute-1 nova_compute[225855]: 2026-01-20 14:30:48.743 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: d95ca690-20e1-4b0c-919b-d64c9af25eba] During sync_power_state the instance has a pending task (resize_reverting). Skip.
Jan 20 14:30:48 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/462858156' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:30:49 compute-1 nova_compute[225855]: 2026-01-20 14:30:49.032 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:30:49 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:30:49.052 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5ffd4ac3-9266-4927-98ad-20a17782c725, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '11'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:30:49 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:30:49 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:30:49 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:30:49.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:30:49 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:30:49 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/2967623465' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:30:49 compute-1 ceph-mon[81775]: pgmap v1206: 321 pgs: 321 active+clean; 602 MiB data, 703 MiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 1.7 MiB/s wr, 234 op/s
Jan 20 14:30:50 compute-1 nova_compute[225855]: 2026-01-20 14:30:50.342 225859 DEBUG oslo_concurrency.lockutils [None req-b4645b9f-f5f3-448f-b29a-cb1a0fe3617b 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Acquiring lock "d95ca690-20e1-4b0c-919b-d64c9af25eba" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:30:50 compute-1 nova_compute[225855]: 2026-01-20 14:30:50.343 225859 DEBUG oslo_concurrency.lockutils [None req-b4645b9f-f5f3-448f-b29a-cb1a0fe3617b 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Lock "d95ca690-20e1-4b0c-919b-d64c9af25eba" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:30:50 compute-1 nova_compute[225855]: 2026-01-20 14:30:50.343 225859 DEBUG oslo_concurrency.lockutils [None req-b4645b9f-f5f3-448f-b29a-cb1a0fe3617b 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Acquiring lock "d95ca690-20e1-4b0c-919b-d64c9af25eba-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:30:50 compute-1 nova_compute[225855]: 2026-01-20 14:30:50.343 225859 DEBUG oslo_concurrency.lockutils [None req-b4645b9f-f5f3-448f-b29a-cb1a0fe3617b 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Lock "d95ca690-20e1-4b0c-919b-d64c9af25eba-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:30:50 compute-1 nova_compute[225855]: 2026-01-20 14:30:50.343 225859 DEBUG oslo_concurrency.lockutils [None req-b4645b9f-f5f3-448f-b29a-cb1a0fe3617b 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Lock "d95ca690-20e1-4b0c-919b-d64c9af25eba-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:30:50 compute-1 nova_compute[225855]: 2026-01-20 14:30:50.344 225859 INFO nova.compute.manager [None req-b4645b9f-f5f3-448f-b29a-cb1a0fe3617b 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] [instance: d95ca690-20e1-4b0c-919b-d64c9af25eba] Terminating instance
Jan 20 14:30:50 compute-1 nova_compute[225855]: 2026-01-20 14:30:50.345 225859 DEBUG oslo_concurrency.lockutils [None req-b4645b9f-f5f3-448f-b29a-cb1a0fe3617b 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Acquiring lock "refresh_cache-d95ca690-20e1-4b0c-919b-d64c9af25eba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:30:50 compute-1 nova_compute[225855]: 2026-01-20 14:30:50.345 225859 DEBUG oslo_concurrency.lockutils [None req-b4645b9f-f5f3-448f-b29a-cb1a0fe3617b 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Acquired lock "refresh_cache-d95ca690-20e1-4b0c-919b-d64c9af25eba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:30:50 compute-1 nova_compute[225855]: 2026-01-20 14:30:50.345 225859 DEBUG nova.network.neutron [None req-b4645b9f-f5f3-448f-b29a-cb1a0fe3617b 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] [instance: d95ca690-20e1-4b0c-919b-d64c9af25eba] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 14:30:50 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:30:50 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:30:50 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:30:50.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:30:50 compute-1 nova_compute[225855]: 2026-01-20 14:30:50.489 225859 DEBUG nova.network.neutron [None req-b4645b9f-f5f3-448f-b29a-cb1a0fe3617b 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] [instance: d95ca690-20e1-4b0c-919b-d64c9af25eba] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 20 14:30:51 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:30:51 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:30:51 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:30:51.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:30:51 compute-1 nova_compute[225855]: 2026-01-20 14:30:51.136 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:30:51 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/3855722107' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:30:51 compute-1 nova_compute[225855]: 2026-01-20 14:30:51.461 225859 DEBUG nova.network.neutron [None req-b4645b9f-f5f3-448f-b29a-cb1a0fe3617b 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] [instance: d95ca690-20e1-4b0c-919b-d64c9af25eba] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:30:51 compute-1 nova_compute[225855]: 2026-01-20 14:30:51.476 225859 DEBUG oslo_concurrency.lockutils [None req-b4645b9f-f5f3-448f-b29a-cb1a0fe3617b 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Releasing lock "refresh_cache-d95ca690-20e1-4b0c-919b-d64c9af25eba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:30:51 compute-1 nova_compute[225855]: 2026-01-20 14:30:51.477 225859 DEBUG nova.compute.manager [None req-b4645b9f-f5f3-448f-b29a-cb1a0fe3617b 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] [instance: d95ca690-20e1-4b0c-919b-d64c9af25eba] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 20 14:30:51 compute-1 systemd[1]: machine-qemu\x2d17\x2dinstance\x2d0000001d.scope: Deactivated successfully.
Jan 20 14:30:51 compute-1 systemd[1]: machine-qemu\x2d17\x2dinstance\x2d0000001d.scope: Consumed 3.616s CPU time.
Jan 20 14:30:51 compute-1 systemd-machined[194361]: Machine qemu-17-instance-0000001d terminated.
Jan 20 14:30:51 compute-1 nova_compute[225855]: 2026-01-20 14:30:51.698 225859 INFO nova.virt.libvirt.driver [-] [instance: d95ca690-20e1-4b0c-919b-d64c9af25eba] Instance destroyed successfully.
Jan 20 14:30:51 compute-1 nova_compute[225855]: 2026-01-20 14:30:51.699 225859 DEBUG nova.objects.instance [None req-b4645b9f-f5f3-448f-b29a-cb1a0fe3617b 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Lazy-loading 'resources' on Instance uuid d95ca690-20e1-4b0c-919b-d64c9af25eba obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:30:52 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/1012040475' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:30:52 compute-1 ceph-mon[81775]: pgmap v1207: 321 pgs: 321 active+clean; 640 MiB data, 705 MiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 1.8 MiB/s wr, 223 op/s
Jan 20 14:30:52 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:30:52 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:30:52 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:30:52.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:30:52 compute-1 nova_compute[225855]: 2026-01-20 14:30:52.608 225859 INFO nova.virt.libvirt.driver [None req-b4645b9f-f5f3-448f-b29a-cb1a0fe3617b 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] [instance: d95ca690-20e1-4b0c-919b-d64c9af25eba] Deleting instance files /var/lib/nova/instances/d95ca690-20e1-4b0c-919b-d64c9af25eba_del
Jan 20 14:30:52 compute-1 nova_compute[225855]: 2026-01-20 14:30:52.609 225859 INFO nova.virt.libvirt.driver [None req-b4645b9f-f5f3-448f-b29a-cb1a0fe3617b 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] [instance: d95ca690-20e1-4b0c-919b-d64c9af25eba] Deletion of /var/lib/nova/instances/d95ca690-20e1-4b0c-919b-d64c9af25eba_del complete
Jan 20 14:30:52 compute-1 nova_compute[225855]: 2026-01-20 14:30:52.691 225859 INFO nova.compute.manager [None req-b4645b9f-f5f3-448f-b29a-cb1a0fe3617b 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] [instance: d95ca690-20e1-4b0c-919b-d64c9af25eba] Took 1.21 seconds to destroy the instance on the hypervisor.
Jan 20 14:30:52 compute-1 nova_compute[225855]: 2026-01-20 14:30:52.691 225859 DEBUG oslo.service.loopingcall [None req-b4645b9f-f5f3-448f-b29a-cb1a0fe3617b 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 20 14:30:52 compute-1 nova_compute[225855]: 2026-01-20 14:30:52.692 225859 DEBUG nova.compute.manager [-] [instance: d95ca690-20e1-4b0c-919b-d64c9af25eba] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 20 14:30:52 compute-1 nova_compute[225855]: 2026-01-20 14:30:52.692 225859 DEBUG nova.network.neutron [-] [instance: d95ca690-20e1-4b0c-919b-d64c9af25eba] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 20 14:30:53 compute-1 nova_compute[225855]: 2026-01-20 14:30:53.013 225859 DEBUG nova.network.neutron [-] [instance: d95ca690-20e1-4b0c-919b-d64c9af25eba] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 20 14:30:53 compute-1 nova_compute[225855]: 2026-01-20 14:30:53.031 225859 DEBUG nova.network.neutron [-] [instance: d95ca690-20e1-4b0c-919b-d64c9af25eba] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:30:53 compute-1 nova_compute[225855]: 2026-01-20 14:30:53.043 225859 INFO nova.compute.manager [-] [instance: d95ca690-20e1-4b0c-919b-d64c9af25eba] Took 0.35 seconds to deallocate network for instance.
Jan 20 14:30:53 compute-1 nova_compute[225855]: 2026-01-20 14:30:53.084 225859 DEBUG oslo_concurrency.lockutils [None req-b4645b9f-f5f3-448f-b29a-cb1a0fe3617b 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:30:53 compute-1 nova_compute[225855]: 2026-01-20 14:30:53.084 225859 DEBUG oslo_concurrency.lockutils [None req-b4645b9f-f5f3-448f-b29a-cb1a0fe3617b 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:30:53 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:30:53 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:30:53 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:30:53.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:30:53 compute-1 nova_compute[225855]: 2026-01-20 14:30:53.206 225859 DEBUG oslo_concurrency.processutils [None req-b4645b9f-f5f3-448f-b29a-cb1a0fe3617b 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:30:53 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:30:53 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1259635317' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:30:53 compute-1 nova_compute[225855]: 2026-01-20 14:30:53.638 225859 DEBUG oslo_concurrency.processutils [None req-b4645b9f-f5f3-448f-b29a-cb1a0fe3617b 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:30:53 compute-1 nova_compute[225855]: 2026-01-20 14:30:53.643 225859 DEBUG nova.compute.provider_tree [None req-b4645b9f-f5f3-448f-b29a-cb1a0fe3617b 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:30:53 compute-1 nova_compute[225855]: 2026-01-20 14:30:53.670 225859 DEBUG nova.scheduler.client.report [None req-b4645b9f-f5f3-448f-b29a-cb1a0fe3617b 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:30:53 compute-1 nova_compute[225855]: 2026-01-20 14:30:53.697 225859 DEBUG oslo_concurrency.lockutils [None req-b4645b9f-f5f3-448f-b29a-cb1a0fe3617b 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.613s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:30:53 compute-1 nova_compute[225855]: 2026-01-20 14:30:53.733 225859 INFO nova.scheduler.client.report [None req-b4645b9f-f5f3-448f-b29a-cb1a0fe3617b 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Deleted allocations for instance d95ca690-20e1-4b0c-919b-d64c9af25eba
Jan 20 14:30:53 compute-1 nova_compute[225855]: 2026-01-20 14:30:53.803 225859 DEBUG oslo_concurrency.lockutils [None req-b4645b9f-f5f3-448f-b29a-cb1a0fe3617b 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Lock "d95ca690-20e1-4b0c-919b-d64c9af25eba" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.461s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:30:54 compute-1 nova_compute[225855]: 2026-01-20 14:30:54.083 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:30:54 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:30:54 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:30:54 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:30:54.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:30:54 compute-1 ceph-mon[81775]: pgmap v1208: 321 pgs: 321 active+clean; 640 MiB data, 705 MiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 1.8 MiB/s wr, 223 op/s
Jan 20 14:30:54 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/1259635317' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:30:54 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:30:55 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:30:55 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:30:55 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:30:55.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:30:56 compute-1 nova_compute[225855]: 2026-01-20 14:30:56.050 225859 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1768919441.049494, d08682f8-72ef-462c-b4b7-044cf16fc193 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:30:56 compute-1 nova_compute[225855]: 2026-01-20 14:30:56.051 225859 INFO nova.compute.manager [-] [instance: d08682f8-72ef-462c-b4b7-044cf16fc193] VM Stopped (Lifecycle Event)
Jan 20 14:30:56 compute-1 nova_compute[225855]: 2026-01-20 14:30:56.070 225859 DEBUG nova.compute.manager [None req-4108f74c-21b9-4dfb-8410-306005be1d76 - - - - - -] [instance: d08682f8-72ef-462c-b4b7-044cf16fc193] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:30:56 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/1852034554' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:30:56 compute-1 nova_compute[225855]: 2026-01-20 14:30:56.140 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:30:56 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:30:56 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:30:56 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:30:56.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:30:56 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e165 e165: 3 total, 3 up, 3 in
Jan 20 14:30:57 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:30:57 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:30:57 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:30:57.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:30:57 compute-1 ceph-mon[81775]: pgmap v1209: 321 pgs: 321 active+clean; 548 MiB data, 671 MiB used, 20 GiB / 21 GiB avail; 4.5 MiB/s rd, 2.2 MiB/s wr, 263 op/s
Jan 20 14:30:57 compute-1 ceph-mon[81775]: osdmap e165: 3 total, 3 up, 3 in
Jan 20 14:30:58 compute-1 ceph-mon[81775]: pgmap v1211: 321 pgs: 321 active+clean; 490 MiB data, 644 MiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 2.2 MiB/s wr, 279 op/s
Jan 20 14:30:58 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:30:58 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:30:58 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:30:58.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:30:58 compute-1 ovn_controller[130490]: 2026-01-20T14:30:58Z|00123|binding|INFO|Releasing lport 2f798c1c-f9b6-4141-904d-4124d05888ca from this chassis (sb_readonly=0)
Jan 20 14:30:59 compute-1 nova_compute[225855]: 2026-01-20 14:30:59.007 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:30:59 compute-1 nova_compute[225855]: 2026-01-20 14:30:59.086 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:30:59 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:30:59 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:30:59 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:30:59.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:30:59 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/951074256' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:30:59 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:31:00 compute-1 sshd-session[241086]: Connection closed by authenticating user root 45.179.5.170 port 46558 [preauth]
Jan 20 14:31:00 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:31:00 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:31:00 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:31:00.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:31:00 compute-1 ceph-mon[81775]: pgmap v1212: 321 pgs: 321 active+clean; 472 MiB data, 626 MiB used, 20 GiB / 21 GiB avail; 4.7 MiB/s rd, 2.2 MiB/s wr, 295 op/s
Jan 20 14:31:00 compute-1 sudo[241090]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:31:00 compute-1 sudo[241090]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:31:00 compute-1 sudo[241090]: pam_unix(sudo:session): session closed for user root
Jan 20 14:31:00 compute-1 sudo[241115]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:31:00 compute-1 sudo[241115]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:31:00 compute-1 sudo[241115]: pam_unix(sudo:session): session closed for user root
Jan 20 14:31:01 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:31:01 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:31:01 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:31:01.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:31:01 compute-1 nova_compute[225855]: 2026-01-20 14:31:01.179 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:31:01 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:31:01 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3285058670' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:31:01 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/3285058670' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:31:02 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:31:02 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:31:02 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:31:02.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:31:02 compute-1 nova_compute[225855]: 2026-01-20 14:31:02.909 225859 DEBUG oslo_concurrency.lockutils [None req-13fd3e8c-3da6-4d25-a48f-a47724a98b0e 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] Acquiring lock "f3faf996-e066-4b11-b7f3-30aeffff726e" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:31:02 compute-1 nova_compute[225855]: 2026-01-20 14:31:02.909 225859 DEBUG oslo_concurrency.lockutils [None req-13fd3e8c-3da6-4d25-a48f-a47724a98b0e 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] Lock "f3faf996-e066-4b11-b7f3-30aeffff726e" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:31:02 compute-1 nova_compute[225855]: 2026-01-20 14:31:02.931 225859 DEBUG nova.objects.instance [None req-13fd3e8c-3da6-4d25-a48f-a47724a98b0e 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] Lazy-loading 'flavor' on Instance uuid f3faf996-e066-4b11-b7f3-30aeffff726e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:31:03 compute-1 ceph-mon[81775]: pgmap v1213: 321 pgs: 321 active+clean; 409 MiB data, 585 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 423 KiB/s wr, 220 op/s
Jan 20 14:31:03 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:31:03 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:31:03 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:31:03.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:31:03 compute-1 nova_compute[225855]: 2026-01-20 14:31:03.360 225859 DEBUG oslo_concurrency.lockutils [None req-13fd3e8c-3da6-4d25-a48f-a47724a98b0e 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] Lock "f3faf996-e066-4b11-b7f3-30aeffff726e" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.450s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:31:04 compute-1 nova_compute[225855]: 2026-01-20 14:31:04.025 225859 DEBUG oslo_concurrency.lockutils [None req-13fd3e8c-3da6-4d25-a48f-a47724a98b0e 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] Acquiring lock "f3faf996-e066-4b11-b7f3-30aeffff726e" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:31:04 compute-1 nova_compute[225855]: 2026-01-20 14:31:04.025 225859 DEBUG oslo_concurrency.lockutils [None req-13fd3e8c-3da6-4d25-a48f-a47724a98b0e 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] Lock "f3faf996-e066-4b11-b7f3-30aeffff726e" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:31:04 compute-1 nova_compute[225855]: 2026-01-20 14:31:04.025 225859 INFO nova.compute.manager [None req-13fd3e8c-3da6-4d25-a48f-a47724a98b0e 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] [instance: f3faf996-e066-4b11-b7f3-30aeffff726e] Attaching volume 73c5b3f0-c4ca-48f3-9dc2-d2c15d3fd745 to /dev/vdb
Jan 20 14:31:04 compute-1 ceph-mon[81775]: pgmap v1214: 321 pgs: 321 active+clean; 409 MiB data, 585 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 423 KiB/s wr, 220 op/s
Jan 20 14:31:04 compute-1 nova_compute[225855]: 2026-01-20 14:31:04.089 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:31:04 compute-1 nova_compute[225855]: 2026-01-20 14:31:04.308 225859 DEBUG os_brick.utils [None req-13fd3e8c-3da6-4d25-a48f-a47724a98b0e 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.101', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-1.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Jan 20 14:31:04 compute-1 nova_compute[225855]: 2026-01-20 14:31:04.309 231081 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:31:04 compute-1 nova_compute[225855]: 2026-01-20 14:31:04.324 231081 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.014s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:31:04 compute-1 nova_compute[225855]: 2026-01-20 14:31:04.324 231081 DEBUG oslo.privsep.daemon [-] privsep: reply[18502c70-0055-46c8-a72b-696d21294f22]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:31:04 compute-1 nova_compute[225855]: 2026-01-20 14:31:04.326 231081 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:31:04 compute-1 nova_compute[225855]: 2026-01-20 14:31:04.335 231081 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:31:04 compute-1 nova_compute[225855]: 2026-01-20 14:31:04.336 231081 DEBUG oslo.privsep.daemon [-] privsep: reply[dfb344e0-43be-46e2-83b5-139ff61a28ed]: (4, ('InitiatorName=iqn.1994-05.com.redhat:1821ea3dc03d', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:31:04 compute-1 nova_compute[225855]: 2026-01-20 14:31:04.338 231081 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:31:04 compute-1 nova_compute[225855]: 2026-01-20 14:31:04.348 231081 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:31:04 compute-1 nova_compute[225855]: 2026-01-20 14:31:04.348 231081 DEBUG oslo.privsep.daemon [-] privsep: reply[297a0143-ee00-487b-beb5-3b6476bd7f00]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:31:04 compute-1 nova_compute[225855]: 2026-01-20 14:31:04.350 231081 DEBUG oslo.privsep.daemon [-] privsep: reply[aee7ee31-846f-44a4-babf-e372b36e4fd2]: (4, '870b1f1c-f19c-477b-b282-ee6eeba50974') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:31:04 compute-1 nova_compute[225855]: 2026-01-20 14:31:04.351 225859 DEBUG oslo_concurrency.processutils [None req-13fd3e8c-3da6-4d25-a48f-a47724a98b0e 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:31:04 compute-1 nova_compute[225855]: 2026-01-20 14:31:04.380 225859 DEBUG oslo_concurrency.processutils [None req-13fd3e8c-3da6-4d25-a48f-a47724a98b0e 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] CMD "nvme version" returned: 0 in 0.029s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:31:04 compute-1 nova_compute[225855]: 2026-01-20 14:31:04.385 225859 DEBUG os_brick.initiator.connectors.lightos [None req-13fd3e8c-3da6-4d25-a48f-a47724a98b0e 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Jan 20 14:31:04 compute-1 nova_compute[225855]: 2026-01-20 14:31:04.386 225859 DEBUG os_brick.initiator.connectors.lightos [None req-13fd3e8c-3da6-4d25-a48f-a47724a98b0e 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Jan 20 14:31:04 compute-1 nova_compute[225855]: 2026-01-20 14:31:04.386 225859 DEBUG os_brick.initiator.connectors.lightos [None req-13fd3e8c-3da6-4d25-a48f-a47724a98b0e 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:5350774e-8b5e-4dba-80a9-92d405981c1d dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Jan 20 14:31:04 compute-1 nova_compute[225855]: 2026-01-20 14:31:04.387 225859 DEBUG os_brick.utils [None req-13fd3e8c-3da6-4d25-a48f-a47724a98b0e 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] <== get_connector_properties: return (78ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.101', 'host': 'compute-1.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:1821ea3dc03d', 'do_local_attach': False, 'nvme_hostid': '5350774e-8b5e-4dba-80a9-92d405981c1d', 'system uuid': '870b1f1c-f19c-477b-b282-ee6eeba50974', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:5350774e-8b5e-4dba-80a9-92d405981c1d', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Jan 20 14:31:04 compute-1 nova_compute[225855]: 2026-01-20 14:31:04.388 225859 DEBUG nova.virt.block_device [None req-13fd3e8c-3da6-4d25-a48f-a47724a98b0e 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] [instance: f3faf996-e066-4b11-b7f3-30aeffff726e] Updating existing volume attachment record: 1b82c01d-d5c1-48cc-9d9f-078b75fe40c6 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Jan 20 14:31:04 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:31:04 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:31:04 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:31:04.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:31:04 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:31:05 compute-1 podman[241149]: 2026-01-20 14:31:05.105109255 +0000 UTC m=+0.132983888 container health_status 72be95c12e041eb2cd1a16ad70dfa1391d9f03a5f3180a614ead9459dcfcd46f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 20 14:31:05 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:31:05 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:31:05 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:31:05.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:31:05 compute-1 nova_compute[225855]: 2026-01-20 14:31:05.850 225859 DEBUG nova.objects.instance [None req-13fd3e8c-3da6-4d25-a48f-a47724a98b0e 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] Lazy-loading 'flavor' on Instance uuid f3faf996-e066-4b11-b7f3-30aeffff726e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:31:05 compute-1 nova_compute[225855]: 2026-01-20 14:31:05.908 225859 DEBUG nova.virt.libvirt.driver [None req-13fd3e8c-3da6-4d25-a48f-a47724a98b0e 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] [instance: f3faf996-e066-4b11-b7f3-30aeffff726e] Attempting to attach volume 73c5b3f0-c4ca-48f3-9dc2-d2c15d3fd745 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
Jan 20 14:31:05 compute-1 nova_compute[225855]: 2026-01-20 14:31:05.912 225859 DEBUG nova.virt.libvirt.guest [None req-13fd3e8c-3da6-4d25-a48f-a47724a98b0e 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] attach device xml: <disk type="network" device="disk">
Jan 20 14:31:05 compute-1 nova_compute[225855]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 20 14:31:05 compute-1 nova_compute[225855]:   <source protocol="rbd" name="volumes/volume-73c5b3f0-c4ca-48f3-9dc2-d2c15d3fd745">
Jan 20 14:31:05 compute-1 nova_compute[225855]:     <host name="192.168.122.100" port="6789"/>
Jan 20 14:31:05 compute-1 nova_compute[225855]:     <host name="192.168.122.102" port="6789"/>
Jan 20 14:31:05 compute-1 nova_compute[225855]:     <host name="192.168.122.101" port="6789"/>
Jan 20 14:31:05 compute-1 nova_compute[225855]:   </source>
Jan 20 14:31:05 compute-1 nova_compute[225855]:   <auth username="openstack">
Jan 20 14:31:05 compute-1 nova_compute[225855]:     <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:31:05 compute-1 nova_compute[225855]:   </auth>
Jan 20 14:31:05 compute-1 nova_compute[225855]:   <target dev="vdb" bus="virtio"/>
Jan 20 14:31:05 compute-1 nova_compute[225855]:   <serial>73c5b3f0-c4ca-48f3-9dc2-d2c15d3fd745</serial>
Jan 20 14:31:05 compute-1 nova_compute[225855]:   <shareable/>
Jan 20 14:31:05 compute-1 nova_compute[225855]: </disk>
Jan 20 14:31:05 compute-1 nova_compute[225855]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Jan 20 14:31:06 compute-1 nova_compute[225855]: 2026-01-20 14:31:06.183 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:31:06 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/3890482712' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:31:06 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:31:06 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:31:06 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:31:06.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:31:06 compute-1 nova_compute[225855]: 2026-01-20 14:31:06.698 225859 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1768919451.6964307, d95ca690-20e1-4b0c-919b-d64c9af25eba => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:31:06 compute-1 nova_compute[225855]: 2026-01-20 14:31:06.698 225859 INFO nova.compute.manager [-] [instance: d95ca690-20e1-4b0c-919b-d64c9af25eba] VM Stopped (Lifecycle Event)
Jan 20 14:31:07 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:31:07 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:31:07 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:31:07.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:31:07 compute-1 ceph-mon[81775]: pgmap v1215: 321 pgs: 321 active+clean; 355 MiB data, 558 MiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 5.1 KiB/s wr, 132 op/s
Jan 20 14:31:07 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/3548925211' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:31:07 compute-1 nova_compute[225855]: 2026-01-20 14:31:07.388 225859 DEBUG nova.compute.manager [None req-b95d732f-1d9c-4501-b569-5c03b1099505 - - - - - -] [instance: d95ca690-20e1-4b0c-919b-d64c9af25eba] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:31:07 compute-1 nova_compute[225855]: 2026-01-20 14:31:07.418 225859 DEBUG nova.virt.libvirt.driver [None req-13fd3e8c-3da6-4d25-a48f-a47724a98b0e 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:31:07 compute-1 nova_compute[225855]: 2026-01-20 14:31:07.419 225859 DEBUG nova.virt.libvirt.driver [None req-13fd3e8c-3da6-4d25-a48f-a47724a98b0e 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:31:07 compute-1 nova_compute[225855]: 2026-01-20 14:31:07.419 225859 DEBUG nova.virt.libvirt.driver [None req-13fd3e8c-3da6-4d25-a48f-a47724a98b0e 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:31:07 compute-1 nova_compute[225855]: 2026-01-20 14:31:07.419 225859 DEBUG nova.virt.libvirt.driver [None req-13fd3e8c-3da6-4d25-a48f-a47724a98b0e 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] No VIF found with MAC fa:16:3e:fc:ae:50, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 20 14:31:07 compute-1 nova_compute[225855]: 2026-01-20 14:31:07.708 225859 DEBUG oslo_concurrency.lockutils [None req-13fd3e8c-3da6-4d25-a48f-a47724a98b0e 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] Lock "f3faf996-e066-4b11-b7f3-30aeffff726e" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 3.683s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:31:07 compute-1 nova_compute[225855]: 2026-01-20 14:31:07.912 225859 DEBUG oslo_concurrency.lockutils [None req-78211a69-48e4-41e0-bc73-fcd33e4fcd7b 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Acquiring lock "29f0b4d4-abf0-46e7-bf67-38e71eb42e28" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:31:07 compute-1 nova_compute[225855]: 2026-01-20 14:31:07.912 225859 DEBUG oslo_concurrency.lockutils [None req-78211a69-48e4-41e0-bc73-fcd33e4fcd7b 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Lock "29f0b4d4-abf0-46e7-bf67-38e71eb42e28" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:31:07 compute-1 nova_compute[225855]: 2026-01-20 14:31:07.913 225859 DEBUG oslo_concurrency.lockutils [None req-78211a69-48e4-41e0-bc73-fcd33e4fcd7b 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Acquiring lock "29f0b4d4-abf0-46e7-bf67-38e71eb42e28-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:31:07 compute-1 nova_compute[225855]: 2026-01-20 14:31:07.913 225859 DEBUG oslo_concurrency.lockutils [None req-78211a69-48e4-41e0-bc73-fcd33e4fcd7b 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Lock "29f0b4d4-abf0-46e7-bf67-38e71eb42e28-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:31:07 compute-1 nova_compute[225855]: 2026-01-20 14:31:07.913 225859 DEBUG oslo_concurrency.lockutils [None req-78211a69-48e4-41e0-bc73-fcd33e4fcd7b 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Lock "29f0b4d4-abf0-46e7-bf67-38e71eb42e28-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:31:07 compute-1 nova_compute[225855]: 2026-01-20 14:31:07.914 225859 INFO nova.compute.manager [None req-78211a69-48e4-41e0-bc73-fcd33e4fcd7b 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] [instance: 29f0b4d4-abf0-46e7-bf67-38e71eb42e28] Terminating instance
Jan 20 14:31:07 compute-1 nova_compute[225855]: 2026-01-20 14:31:07.916 225859 DEBUG oslo_concurrency.lockutils [None req-78211a69-48e4-41e0-bc73-fcd33e4fcd7b 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Acquiring lock "refresh_cache-29f0b4d4-abf0-46e7-bf67-38e71eb42e28" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:31:07 compute-1 nova_compute[225855]: 2026-01-20 14:31:07.916 225859 DEBUG oslo_concurrency.lockutils [None req-78211a69-48e4-41e0-bc73-fcd33e4fcd7b 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Acquired lock "refresh_cache-29f0b4d4-abf0-46e7-bf67-38e71eb42e28" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:31:07 compute-1 nova_compute[225855]: 2026-01-20 14:31:07.916 225859 DEBUG nova.network.neutron [None req-78211a69-48e4-41e0-bc73-fcd33e4fcd7b 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] [instance: 29f0b4d4-abf0-46e7-bf67-38e71eb42e28] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 14:31:08 compute-1 ceph-mon[81775]: pgmap v1216: 321 pgs: 321 active+clean; 330 MiB data, 543 MiB used, 20 GiB / 21 GiB avail; 376 KiB/s rd, 115 KiB/s wr, 84 op/s
Jan 20 14:31:08 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:31:08 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:31:08 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:31:08.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:31:08 compute-1 nova_compute[225855]: 2026-01-20 14:31:08.617 225859 DEBUG nova.network.neutron [None req-78211a69-48e4-41e0-bc73-fcd33e4fcd7b 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] [instance: 29f0b4d4-abf0-46e7-bf67-38e71eb42e28] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 20 14:31:09 compute-1 nova_compute[225855]: 2026-01-20 14:31:09.055 225859 DEBUG nova.network.neutron [None req-78211a69-48e4-41e0-bc73-fcd33e4fcd7b 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] [instance: 29f0b4d4-abf0-46e7-bf67-38e71eb42e28] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:31:09 compute-1 nova_compute[225855]: 2026-01-20 14:31:09.078 225859 DEBUG oslo_concurrency.lockutils [None req-78211a69-48e4-41e0-bc73-fcd33e4fcd7b 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Releasing lock "refresh_cache-29f0b4d4-abf0-46e7-bf67-38e71eb42e28" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:31:09 compute-1 nova_compute[225855]: 2026-01-20 14:31:09.079 225859 DEBUG nova.compute.manager [None req-78211a69-48e4-41e0-bc73-fcd33e4fcd7b 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] [instance: 29f0b4d4-abf0-46e7-bf67-38e71eb42e28] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 20 14:31:09 compute-1 nova_compute[225855]: 2026-01-20 14:31:09.120 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:31:09 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:31:09 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:31:09 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:31:09.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:31:09 compute-1 systemd[1]: machine-qemu\x2d10\x2dinstance\x2d00000016.scope: Deactivated successfully.
Jan 20 14:31:09 compute-1 systemd[1]: machine-qemu\x2d10\x2dinstance\x2d00000016.scope: Consumed 19.693s CPU time.
Jan 20 14:31:09 compute-1 systemd-machined[194361]: Machine qemu-10-instance-00000016 terminated.
Jan 20 14:31:09 compute-1 nova_compute[225855]: 2026-01-20 14:31:09.299 225859 INFO nova.virt.libvirt.driver [-] [instance: 29f0b4d4-abf0-46e7-bf67-38e71eb42e28] Instance destroyed successfully.
Jan 20 14:31:09 compute-1 nova_compute[225855]: 2026-01-20 14:31:09.299 225859 DEBUG nova.objects.instance [None req-78211a69-48e4-41e0-bc73-fcd33e4fcd7b 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Lazy-loading 'resources' on Instance uuid 29f0b4d4-abf0-46e7-bf67-38e71eb42e28 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:31:09 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:31:10 compute-1 ceph-mon[81775]: pgmap v1217: 321 pgs: 321 active+clean; 322 MiB data, 552 MiB used, 20 GiB / 21 GiB avail; 392 KiB/s rd, 945 KiB/s wr, 93 op/s
Jan 20 14:31:10 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/1268478636' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:31:10 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:31:10 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:31:10 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:31:10.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:31:10 compute-1 nova_compute[225855]: 2026-01-20 14:31:10.548 225859 INFO nova.virt.libvirt.driver [None req-78211a69-48e4-41e0-bc73-fcd33e4fcd7b 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] [instance: 29f0b4d4-abf0-46e7-bf67-38e71eb42e28] Deleting instance files /var/lib/nova/instances/29f0b4d4-abf0-46e7-bf67-38e71eb42e28_del
Jan 20 14:31:10 compute-1 nova_compute[225855]: 2026-01-20 14:31:10.549 225859 INFO nova.virt.libvirt.driver [None req-78211a69-48e4-41e0-bc73-fcd33e4fcd7b 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] [instance: 29f0b4d4-abf0-46e7-bf67-38e71eb42e28] Deletion of /var/lib/nova/instances/29f0b4d4-abf0-46e7-bf67-38e71eb42e28_del complete
Jan 20 14:31:11 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:31:11 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:31:11 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:31:11.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:31:11 compute-1 nova_compute[225855]: 2026-01-20 14:31:11.176 225859 INFO nova.compute.manager [None req-78211a69-48e4-41e0-bc73-fcd33e4fcd7b 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] [instance: 29f0b4d4-abf0-46e7-bf67-38e71eb42e28] Took 2.10 seconds to destroy the instance on the hypervisor.
Jan 20 14:31:11 compute-1 nova_compute[225855]: 2026-01-20 14:31:11.177 225859 DEBUG oslo.service.loopingcall [None req-78211a69-48e4-41e0-bc73-fcd33e4fcd7b 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 20 14:31:11 compute-1 nova_compute[225855]: 2026-01-20 14:31:11.177 225859 DEBUG nova.compute.manager [-] [instance: 29f0b4d4-abf0-46e7-bf67-38e71eb42e28] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 20 14:31:11 compute-1 nova_compute[225855]: 2026-01-20 14:31:11.177 225859 DEBUG nova.network.neutron [-] [instance: 29f0b4d4-abf0-46e7-bf67-38e71eb42e28] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 20 14:31:11 compute-1 nova_compute[225855]: 2026-01-20 14:31:11.187 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:31:12 compute-1 ceph-mon[81775]: pgmap v1218: 321 pgs: 321 active+clean; 306 MiB data, 548 MiB used, 20 GiB / 21 GiB avail; 245 KiB/s rd, 1.2 MiB/s wr, 106 op/s
Jan 20 14:31:12 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:31:12 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:31:12 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:31:12.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:31:13 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:31:13 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:31:13 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:31:13.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:31:13 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/86488182' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 14:31:13 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/86488182' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 14:31:14 compute-1 nova_compute[225855]: 2026-01-20 14:31:14.177 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:31:14 compute-1 nova_compute[225855]: 2026-01-20 14:31:14.282 225859 DEBUG nova.network.neutron [-] [instance: 29f0b4d4-abf0-46e7-bf67-38e71eb42e28] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 20 14:31:14 compute-1 nova_compute[225855]: 2026-01-20 14:31:14.296 225859 DEBUG nova.network.neutron [-] [instance: 29f0b4d4-abf0-46e7-bf67-38e71eb42e28] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:31:14 compute-1 nova_compute[225855]: 2026-01-20 14:31:14.311 225859 INFO nova.compute.manager [-] [instance: 29f0b4d4-abf0-46e7-bf67-38e71eb42e28] Took 3.13 seconds to deallocate network for instance.
Jan 20 14:31:14 compute-1 nova_compute[225855]: 2026-01-20 14:31:14.338 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:31:14 compute-1 nova_compute[225855]: 2026-01-20 14:31:14.398 225859 DEBUG oslo_concurrency.lockutils [None req-78211a69-48e4-41e0-bc73-fcd33e4fcd7b 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:31:14 compute-1 nova_compute[225855]: 2026-01-20 14:31:14.398 225859 DEBUG oslo_concurrency.lockutils [None req-78211a69-48e4-41e0-bc73-fcd33e4fcd7b 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:31:14 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:31:14 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:31:14 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:31:14.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:31:14 compute-1 nova_compute[225855]: 2026-01-20 14:31:14.479 225859 DEBUG oslo_concurrency.processutils [None req-78211a69-48e4-41e0-bc73-fcd33e4fcd7b 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:31:14 compute-1 ceph-mon[81775]: pgmap v1219: 321 pgs: 321 active+clean; 263 MiB data, 525 MiB used, 20 GiB / 21 GiB avail; 340 KiB/s rd, 2.0 MiB/s wr, 119 op/s
Jan 20 14:31:14 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:31:14 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:31:14 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2038386253' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:31:14 compute-1 nova_compute[225855]: 2026-01-20 14:31:14.953 225859 DEBUG oslo_concurrency.processutils [None req-78211a69-48e4-41e0-bc73-fcd33e4fcd7b 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:31:14 compute-1 nova_compute[225855]: 2026-01-20 14:31:14.962 225859 DEBUG nova.compute.provider_tree [None req-78211a69-48e4-41e0-bc73-fcd33e4fcd7b 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:31:15 compute-1 nova_compute[225855]: 2026-01-20 14:31:15.011 225859 DEBUG nova.scheduler.client.report [None req-78211a69-48e4-41e0-bc73-fcd33e4fcd7b 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:31:15 compute-1 podman[241242]: 2026-01-20 14:31:15.014175006 +0000 UTC m=+0.060500797 container health_status 533cd8e0c3b8b7f910f15cef49e7f879ab46c12ce646a35628c15f13725eb500 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 20 14:31:15 compute-1 nova_compute[225855]: 2026-01-20 14:31:15.039 225859 DEBUG oslo_concurrency.lockutils [None req-78211a69-48e4-41e0-bc73-fcd33e4fcd7b 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.641s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:31:15 compute-1 nova_compute[225855]: 2026-01-20 14:31:15.097 225859 INFO nova.scheduler.client.report [None req-78211a69-48e4-41e0-bc73-fcd33e4fcd7b 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Deleted allocations for instance 29f0b4d4-abf0-46e7-bf67-38e71eb42e28
Jan 20 14:31:15 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:31:15 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:31:15 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:31:15.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:31:15 compute-1 nova_compute[225855]: 2026-01-20 14:31:15.190 225859 DEBUG oslo_concurrency.lockutils [None req-78211a69-48e4-41e0-bc73-fcd33e4fcd7b 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Lock "29f0b4d4-abf0-46e7-bf67-38e71eb42e28" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 7.278s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:31:15 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/2038386253' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:31:16 compute-1 nova_compute[225855]: 2026-01-20 14:31:16.191 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:31:16 compute-1 nova_compute[225855]: 2026-01-20 14:31:16.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:31:16 compute-1 nova_compute[225855]: 2026-01-20 14:31:16.339 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 14:31:16 compute-1 nova_compute[225855]: 2026-01-20 14:31:16.340 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 20 14:31:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:31:16.390 140354 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:31:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:31:16.391 140354 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:31:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:31:16.391 140354 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:31:16 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:31:16 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:31:16 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:31:16.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:31:17 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:31:17 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:31:17 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:31:17.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:31:17 compute-1 ceph-mon[81775]: pgmap v1220: 321 pgs: 321 active+clean; 200 MiB data, 474 MiB used, 21 GiB / 21 GiB avail; 355 KiB/s rd, 2.0 MiB/s wr, 140 op/s
Jan 20 14:31:18 compute-1 ceph-mon[81775]: pgmap v1221: 321 pgs: 321 active+clean; 200 MiB data, 474 MiB used, 21 GiB / 21 GiB avail; 331 KiB/s rd, 2.0 MiB/s wr, 114 op/s
Jan 20 14:31:18 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:31:18 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:31:18 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:31:18.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:31:19 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:31:19 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:31:19 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:31:19.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:31:19 compute-1 nova_compute[225855]: 2026-01-20 14:31:19.180 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:31:19 compute-1 nova_compute[225855]: 2026-01-20 14:31:19.545 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "refresh_cache-f3faf996-e066-4b11-b7f3-30aeffff726e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:31:19 compute-1 nova_compute[225855]: 2026-01-20 14:31:19.546 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquired lock "refresh_cache-f3faf996-e066-4b11-b7f3-30aeffff726e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:31:19 compute-1 nova_compute[225855]: 2026-01-20 14:31:19.546 225859 DEBUG nova.network.neutron [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] [instance: f3faf996-e066-4b11-b7f3-30aeffff726e] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 20 14:31:19 compute-1 nova_compute[225855]: 2026-01-20 14:31:19.547 225859 DEBUG nova.objects.instance [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lazy-loading 'info_cache' on Instance uuid f3faf996-e066-4b11-b7f3-30aeffff726e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:31:19 compute-1 nova_compute[225855]: 2026-01-20 14:31:19.652 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:31:19 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:31:20 compute-1 ceph-mon[81775]: pgmap v1222: 321 pgs: 321 active+clean; 200 MiB data, 474 MiB used, 21 GiB / 21 GiB avail; 316 KiB/s rd, 2.0 MiB/s wr, 103 op/s
Jan 20 14:31:20 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/3306767532' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:31:20 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:31:20 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:31:20 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:31:20.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:31:20 compute-1 sudo[241266]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:31:20 compute-1 sudo[241266]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:31:20 compute-1 sudo[241266]: pam_unix(sudo:session): session closed for user root
Jan 20 14:31:20 compute-1 sudo[241291]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:31:20 compute-1 sudo[241291]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:31:20 compute-1 sudo[241291]: pam_unix(sudo:session): session closed for user root
Jan 20 14:31:20 compute-1 sudo[241309]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:31:20 compute-1 sudo[241309]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:31:20 compute-1 sudo[241309]: pam_unix(sudo:session): session closed for user root
Jan 20 14:31:20 compute-1 sudo[241339]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:31:20 compute-1 sudo[241339]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:31:20 compute-1 sudo[241339]: pam_unix(sudo:session): session closed for user root
Jan 20 14:31:20 compute-1 sudo[241364]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:31:21 compute-1 sudo[241364]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:31:21 compute-1 sudo[241364]: pam_unix(sudo:session): session closed for user root
Jan 20 14:31:21 compute-1 sudo[241389]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Jan 20 14:31:21 compute-1 sudo[241389]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:31:21 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:31:21 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:31:21 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:31:21.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:31:21 compute-1 nova_compute[225855]: 2026-01-20 14:31:21.194 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:31:21 compute-1 nova_compute[225855]: 2026-01-20 14:31:21.344 225859 DEBUG oslo_concurrency.lockutils [None req-8cd05146-8312-492a-a9d0-9949b7655b8a 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] Acquiring lock "f3faf996-e066-4b11-b7f3-30aeffff726e" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:31:21 compute-1 nova_compute[225855]: 2026-01-20 14:31:21.345 225859 DEBUG oslo_concurrency.lockutils [None req-8cd05146-8312-492a-a9d0-9949b7655b8a 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] Lock "f3faf996-e066-4b11-b7f3-30aeffff726e" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:31:21 compute-1 nova_compute[225855]: 2026-01-20 14:31:21.403 225859 INFO nova.compute.manager [None req-8cd05146-8312-492a-a9d0-9949b7655b8a 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] [instance: f3faf996-e066-4b11-b7f3-30aeffff726e] Detaching volume 73c5b3f0-c4ca-48f3-9dc2-d2c15d3fd745
Jan 20 14:31:21 compute-1 podman[241487]: 2026-01-20 14:31:21.490971356 +0000 UTC m=+0.066430109 container exec 718ebba7a543e42aad7051248d2c7dc014068c35c89c5b87f27b82d4de39c009 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-crash-compute-1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:31:21 compute-1 podman[241487]: 2026-01-20 14:31:21.588820682 +0000 UTC m=+0.164279455 container exec_died 718ebba7a543e42aad7051248d2c7dc014068c35c89c5b87f27b82d4de39c009 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-crash-compute-1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:31:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:31:21.595 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=12, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '12:bb:42', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '06:92:24:f7:15:56'}, ipsec=False) old=SB_Global(nb_cfg=11) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:31:21 compute-1 nova_compute[225855]: 2026-01-20 14:31:21.596 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:31:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:31:21.596 140354 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 20 14:31:22 compute-1 nova_compute[225855]: 2026-01-20 14:31:22.002 225859 INFO nova.virt.block_device [None req-8cd05146-8312-492a-a9d0-9949b7655b8a 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] [instance: f3faf996-e066-4b11-b7f3-30aeffff726e] Attempting to driver detach volume 73c5b3f0-c4ca-48f3-9dc2-d2c15d3fd745 from mountpoint /dev/vdb
Jan 20 14:31:22 compute-1 nova_compute[225855]: 2026-01-20 14:31:22.012 225859 DEBUG nova.virt.libvirt.driver [None req-8cd05146-8312-492a-a9d0-9949b7655b8a 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] Attempting to detach device vdb from instance f3faf996-e066-4b11-b7f3-30aeffff726e from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Jan 20 14:31:22 compute-1 nova_compute[225855]: 2026-01-20 14:31:22.013 225859 DEBUG nova.virt.libvirt.guest [None req-8cd05146-8312-492a-a9d0-9949b7655b8a 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] detach device xml: <disk type="network" device="disk">
Jan 20 14:31:22 compute-1 nova_compute[225855]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 20 14:31:22 compute-1 nova_compute[225855]:   <source protocol="rbd" name="volumes/volume-73c5b3f0-c4ca-48f3-9dc2-d2c15d3fd745">
Jan 20 14:31:22 compute-1 nova_compute[225855]:     <host name="192.168.122.100" port="6789"/>
Jan 20 14:31:22 compute-1 nova_compute[225855]:     <host name="192.168.122.102" port="6789"/>
Jan 20 14:31:22 compute-1 nova_compute[225855]:     <host name="192.168.122.101" port="6789"/>
Jan 20 14:31:22 compute-1 nova_compute[225855]:   </source>
Jan 20 14:31:22 compute-1 nova_compute[225855]:   <target dev="vdb" bus="virtio"/>
Jan 20 14:31:22 compute-1 nova_compute[225855]:   <serial>73c5b3f0-c4ca-48f3-9dc2-d2c15d3fd745</serial>
Jan 20 14:31:22 compute-1 nova_compute[225855]:   <shareable/>
Jan 20 14:31:22 compute-1 nova_compute[225855]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Jan 20 14:31:22 compute-1 nova_compute[225855]: </disk>
Jan 20 14:31:22 compute-1 nova_compute[225855]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Jan 20 14:31:22 compute-1 nova_compute[225855]: 2026-01-20 14:31:22.021 225859 INFO nova.virt.libvirt.driver [None req-8cd05146-8312-492a-a9d0-9949b7655b8a 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] Successfully detached device vdb from instance f3faf996-e066-4b11-b7f3-30aeffff726e from the persistent domain config.
Jan 20 14:31:22 compute-1 nova_compute[225855]: 2026-01-20 14:31:22.021 225859 DEBUG nova.virt.libvirt.driver [None req-8cd05146-8312-492a-a9d0-9949b7655b8a 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance f3faf996-e066-4b11-b7f3-30aeffff726e from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Jan 20 14:31:22 compute-1 nova_compute[225855]: 2026-01-20 14:31:22.022 225859 DEBUG nova.virt.libvirt.guest [None req-8cd05146-8312-492a-a9d0-9949b7655b8a 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] detach device xml: <disk type="network" device="disk">
Jan 20 14:31:22 compute-1 nova_compute[225855]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 20 14:31:22 compute-1 nova_compute[225855]:   <source protocol="rbd" name="volumes/volume-73c5b3f0-c4ca-48f3-9dc2-d2c15d3fd745">
Jan 20 14:31:22 compute-1 nova_compute[225855]:     <host name="192.168.122.100" port="6789"/>
Jan 20 14:31:22 compute-1 nova_compute[225855]:     <host name="192.168.122.102" port="6789"/>
Jan 20 14:31:22 compute-1 nova_compute[225855]:     <host name="192.168.122.101" port="6789"/>
Jan 20 14:31:22 compute-1 nova_compute[225855]:   </source>
Jan 20 14:31:22 compute-1 nova_compute[225855]:   <target dev="vdb" bus="virtio"/>
Jan 20 14:31:22 compute-1 nova_compute[225855]:   <serial>73c5b3f0-c4ca-48f3-9dc2-d2c15d3fd745</serial>
Jan 20 14:31:22 compute-1 nova_compute[225855]:   <shareable/>
Jan 20 14:31:22 compute-1 nova_compute[225855]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Jan 20 14:31:22 compute-1 nova_compute[225855]: </disk>
Jan 20 14:31:22 compute-1 nova_compute[225855]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Jan 20 14:31:22 compute-1 nova_compute[225855]: 2026-01-20 14:31:22.071 225859 DEBUG nova.virt.libvirt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Received event <DeviceRemovedEvent: 1768919482.0713122, f3faf996-e066-4b11-b7f3-30aeffff726e => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Jan 20 14:31:22 compute-1 nova_compute[225855]: 2026-01-20 14:31:22.073 225859 DEBUG nova.virt.libvirt.driver [None req-8cd05146-8312-492a-a9d0-9949b7655b8a 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance f3faf996-e066-4b11-b7f3-30aeffff726e _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Jan 20 14:31:22 compute-1 nova_compute[225855]: 2026-01-20 14:31:22.075 225859 INFO nova.virt.libvirt.driver [None req-8cd05146-8312-492a-a9d0-9949b7655b8a 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] Successfully detached device vdb from instance f3faf996-e066-4b11-b7f3-30aeffff726e from the live domain config.
Jan 20 14:31:22 compute-1 podman[241642]: 2026-01-20 14:31:22.213143151 +0000 UTC m=+0.050927995 container exec 25e2c3387bc15944c21038272559da5fbf75910d8dd4add0faa995fb4e0f7788 (image=quay.io/ceph/haproxy:2.3, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-haproxy-rgw-default-compute-1-uyeocq)
Jan 20 14:31:22 compute-1 podman[241642]: 2026-01-20 14:31:22.22226055 +0000 UTC m=+0.060045364 container exec_died 25e2c3387bc15944c21038272559da5fbf75910d8dd4add0faa995fb4e0f7788 (image=quay.io/ceph/haproxy:2.3, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-haproxy-rgw-default-compute-1-uyeocq)
Jan 20 14:31:22 compute-1 podman[241707]: 2026-01-20 14:31:22.397542315 +0000 UTC m=+0.048579840 container exec e27b69e4cc956b06482c80498336e112a56122514cd7345d3d4b39a4d206f962 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-keepalived-rgw-default-compute-1-cevitz, build-date=2023-02-22T09:23:20, io.k8s.display-name=Keepalived on RHEL 9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, description=keepalived for Ceph, summary=Provides keepalived on RHEL 9 for Ceph., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, architecture=x86_64, vendor=Red Hat, Inc., distribution-scope=public, version=2.2.4, io.buildah.version=1.28.2, com.redhat.component=keepalived-container, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, name=keepalived, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.openshift.tags=Ceph keepalived, release=1793)
Jan 20 14:31:22 compute-1 podman[241707]: 2026-01-20 14:31:22.409012839 +0000 UTC m=+0.060050354 container exec_died e27b69e4cc956b06482c80498336e112a56122514cd7345d3d4b39a4d206f962 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-keepalived-rgw-default-compute-1-cevitz, description=keepalived for Ceph, distribution-scope=public, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.expose-services=, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, architecture=x86_64, build-date=2023-02-22T09:23:20, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides keepalived on RHEL 9 for Ceph., com.redhat.component=keepalived-container, vcs-type=git, name=keepalived, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, version=2.2.4, io.buildah.version=1.28.2, io.openshift.tags=Ceph keepalived, release=1793)
Jan 20 14:31:22 compute-1 ceph-mon[81775]: pgmap v1223: 321 pgs: 321 active+clean; 200 MiB data, 474 MiB used, 21 GiB / 21 GiB avail; 250 KiB/s rd, 1.1 MiB/s wr, 84 op/s
Jan 20 14:31:22 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/1483551337' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:31:22 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:31:22 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:31:22 compute-1 sudo[241389]: pam_unix(sudo:session): session closed for user root
Jan 20 14:31:22 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:31:22 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:31:22 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:31:22.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:31:22 compute-1 sudo[241740]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:31:22 compute-1 sudo[241740]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:31:22 compute-1 sudo[241740]: pam_unix(sudo:session): session closed for user root
Jan 20 14:31:22 compute-1 sudo[241765]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:31:22 compute-1 sudo[241765]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:31:22 compute-1 sudo[241765]: pam_unix(sudo:session): session closed for user root
Jan 20 14:31:22 compute-1 sudo[241790]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:31:22 compute-1 sudo[241790]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:31:22 compute-1 sudo[241790]: pam_unix(sudo:session): session closed for user root
Jan 20 14:31:22 compute-1 sudo[241815]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 20 14:31:22 compute-1 sudo[241815]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:31:23 compute-1 sudo[241815]: pam_unix(sudo:session): session closed for user root
Jan 20 14:31:23 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:31:23 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:31:23 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:31:23.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:31:23 compute-1 nova_compute[225855]: 2026-01-20 14:31:23.320 225859 DEBUG nova.objects.instance [None req-8cd05146-8312-492a-a9d0-9949b7655b8a 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] Lazy-loading 'flavor' on Instance uuid f3faf996-e066-4b11-b7f3-30aeffff726e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:31:23 compute-1 nova_compute[225855]: 2026-01-20 14:31:23.405 225859 DEBUG oslo_concurrency.lockutils [None req-8cd05146-8312-492a-a9d0-9949b7655b8a 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] Lock "f3faf996-e066-4b11-b7f3-30aeffff726e" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 2.060s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:31:23 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:31:23 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:31:23 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:31:23 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:31:24 compute-1 nova_compute[225855]: 2026-01-20 14:31:24.213 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:31:24 compute-1 nova_compute[225855]: 2026-01-20 14:31:24.297 225859 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1768919469.2962034, 29f0b4d4-abf0-46e7-bf67-38e71eb42e28 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:31:24 compute-1 nova_compute[225855]: 2026-01-20 14:31:24.297 225859 INFO nova.compute.manager [-] [instance: 29f0b4d4-abf0-46e7-bf67-38e71eb42e28] VM Stopped (Lifecycle Event)
Jan 20 14:31:24 compute-1 nova_compute[225855]: 2026-01-20 14:31:24.323 225859 DEBUG nova.compute.manager [None req-96d6591b-4df2-49e1-97f5-a65cc1323c5a - - - - - -] [instance: 29f0b4d4-abf0-46e7-bf67-38e71eb42e28] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:31:24 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:31:24 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:31:24 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:31:24.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:31:24 compute-1 nova_compute[225855]: 2026-01-20 14:31:24.534 225859 DEBUG nova.network.neutron [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] [instance: f3faf996-e066-4b11-b7f3-30aeffff726e] Updating instance_info_cache with network_info: [{"id": "f65050ac-6a44-490a-b4b9-8c82c1f61630", "address": "fa:16:3e:fc:ae:50", "network": {"id": "02f86d1d-5cad-49c5-9004-3de3e4739ad5", "bridge": "br-int", "label": "tempest-UpdateMultiattachVolumeNegativeTest-889517255-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.239", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0cee74dd60da4a839bb5eb0ba3137edf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf65050ac-6a", "ovs_interfaceid": "f65050ac-6a44-490a-b4b9-8c82c1f61630", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:31:24 compute-1 nova_compute[225855]: 2026-01-20 14:31:24.566 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Releasing lock "refresh_cache-f3faf996-e066-4b11-b7f3-30aeffff726e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:31:24 compute-1 nova_compute[225855]: 2026-01-20 14:31:24.566 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] [instance: f3faf996-e066-4b11-b7f3-30aeffff726e] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 20 14:31:24 compute-1 nova_compute[225855]: 2026-01-20 14:31:24.566 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:31:24 compute-1 nova_compute[225855]: 2026-01-20 14:31:24.567 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:31:24 compute-1 nova_compute[225855]: 2026-01-20 14:31:24.567 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:31:24 compute-1 nova_compute[225855]: 2026-01-20 14:31:24.567 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:31:24 compute-1 nova_compute[225855]: 2026-01-20 14:31:24.567 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:31:24 compute-1 nova_compute[225855]: 2026-01-20 14:31:24.567 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 14:31:24 compute-1 nova_compute[225855]: 2026-01-20 14:31:24.568 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:31:24 compute-1 nova_compute[225855]: 2026-01-20 14:31:24.602 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:31:24 compute-1 nova_compute[225855]: 2026-01-20 14:31:24.602 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:31:24 compute-1 nova_compute[225855]: 2026-01-20 14:31:24.603 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:31:24 compute-1 nova_compute[225855]: 2026-01-20 14:31:24.603 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 14:31:24 compute-1 nova_compute[225855]: 2026-01-20 14:31:24.603 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:31:24 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:31:24 compute-1 ceph-mon[81775]: pgmap v1224: 321 pgs: 321 active+clean; 200 MiB data, 474 MiB used, 21 GiB / 21 GiB avail; 123 KiB/s rd, 855 KiB/s wr, 54 op/s
Jan 20 14:31:24 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:31:24 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 14:31:24 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:31:24 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 14:31:24 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 14:31:24 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:31:24 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/4199824210' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:31:25 compute-1 nova_compute[225855]: 2026-01-20 14:31:25.053 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:31:25 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:31:25 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:31:25 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:31:25.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:31:25 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/183400782' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:31:25 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/1937716485' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:31:25 compute-1 ceph-mon[81775]: pgmap v1225: 321 pgs: 321 active+clean; 200 MiB data, 474 MiB used, 21 GiB / 21 GiB avail; 15 KiB/s rd, 9.1 KiB/s wr, 22 op/s
Jan 20 14:31:26 compute-1 nova_compute[225855]: 2026-01-20 14:31:26.197 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:31:26 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:31:26 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:31:26 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:31:26.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:31:27 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:31:27 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:31:27 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:31:27.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:31:27 compute-1 nova_compute[225855]: 2026-01-20 14:31:27.716 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:31:28 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:31:28 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:31:28 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:31:28.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:31:28 compute-1 ceph-mon[81775]: pgmap v1226: 321 pgs: 321 active+clean; 200 MiB data, 474 MiB used, 21 GiB / 21 GiB avail; 0 B/s rd, 8.7 KiB/s wr, 1 op/s
Jan 20 14:31:28 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:31:28.598 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5ffd4ac3-9266-4927-98ad-20a17782c725, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '12'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:31:29 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:31:29 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:31:29 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:31:29.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:31:29 compute-1 nova_compute[225855]: 2026-01-20 14:31:29.247 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:31:29 compute-1 nova_compute[225855]: 2026-01-20 14:31:29.317 225859 DEBUG nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] skipping disk for instance-0000001e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 14:31:29 compute-1 nova_compute[225855]: 2026-01-20 14:31:29.318 225859 DEBUG nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] skipping disk for instance-0000001e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 14:31:29 compute-1 nova_compute[225855]: 2026-01-20 14:31:29.480 225859 WARNING nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 14:31:29 compute-1 nova_compute[225855]: 2026-01-20 14:31:29.481 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4504MB free_disk=20.89706039428711GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 14:31:29 compute-1 nova_compute[225855]: 2026-01-20 14:31:29.481 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:31:29 compute-1 nova_compute[225855]: 2026-01-20 14:31:29.481 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:31:29 compute-1 nova_compute[225855]: 2026-01-20 14:31:29.664 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Instance f3faf996-e066-4b11-b7f3-30aeffff726e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 20 14:31:29 compute-1 nova_compute[225855]: 2026-01-20 14:31:29.665 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 14:31:29 compute-1 nova_compute[225855]: 2026-01-20 14:31:29.665 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 14:31:29 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:31:29 compute-1 nova_compute[225855]: 2026-01-20 14:31:29.793 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:31:30 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:31:30 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1825073402' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:31:30 compute-1 nova_compute[225855]: 2026-01-20 14:31:30.233 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:31:30 compute-1 nova_compute[225855]: 2026-01-20 14:31:30.241 225859 DEBUG nova.compute.provider_tree [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:31:30 compute-1 nova_compute[225855]: 2026-01-20 14:31:30.261 225859 DEBUG nova.scheduler.client.report [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:31:30 compute-1 nova_compute[225855]: 2026-01-20 14:31:30.326 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 14:31:30 compute-1 nova_compute[225855]: 2026-01-20 14:31:30.327 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.845s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:31:30 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:31:30 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:31:30 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:31:30.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:31:30 compute-1 ceph-mon[81775]: pgmap v1227: 321 pgs: 321 active+clean; 200 MiB data, 474 MiB used, 21 GiB / 21 GiB avail; 0 B/s rd, 8.7 KiB/s wr, 1 op/s
Jan 20 14:31:30 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/1825073402' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:31:31 compute-1 sudo[241924]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:31:31 compute-1 sudo[241924]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:31:31 compute-1 sudo[241924]: pam_unix(sudo:session): session closed for user root
Jan 20 14:31:31 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:31:31 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:31:31 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:31:31.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:31:31 compute-1 nova_compute[225855]: 2026-01-20 14:31:31.201 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:31:31 compute-1 sudo[241949]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 14:31:31 compute-1 sudo[241949]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:31:31 compute-1 sudo[241949]: pam_unix(sudo:session): session closed for user root
Jan 20 14:31:31 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:31:31 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:31:31 compute-1 ceph-mon[81775]: pgmap v1228: 321 pgs: 321 active+clean; 200 MiB data, 474 MiB used, 21 GiB / 21 GiB avail; 0 B/s rd, 8.7 KiB/s wr, 1 op/s
Jan 20 14:31:32 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:31:32 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:31:32 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:31:32.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:31:33 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:31:33 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:31:33 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:31:33.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:31:34 compute-1 nova_compute[225855]: 2026-01-20 14:31:34.296 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:31:34 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:31:34 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:31:34 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:31:34.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:31:34 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:31:34 compute-1 ceph-mon[81775]: pgmap v1229: 321 pgs: 321 active+clean; 200 MiB data, 474 MiB used, 21 GiB / 21 GiB avail; 8.6 KiB/s wr, 1 op/s
Jan 20 14:31:35 compute-1 sshd-session[241923]: Connection closed by authenticating user root 45.179.5.170 port 52536 [preauth]
Jan 20 14:31:35 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:31:35 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:31:35 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:31:35.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:31:35 compute-1 nova_compute[225855]: 2026-01-20 14:31:35.322 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:31:35 compute-1 nova_compute[225855]: 2026-01-20 14:31:35.531 225859 DEBUG oslo_concurrency.lockutils [None req-cc85b8e0-128a-4116-8005-9a2d9a2e89ef 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] Acquiring lock "f3faf996-e066-4b11-b7f3-30aeffff726e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:31:35 compute-1 nova_compute[225855]: 2026-01-20 14:31:35.532 225859 DEBUG oslo_concurrency.lockutils [None req-cc85b8e0-128a-4116-8005-9a2d9a2e89ef 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] Lock "f3faf996-e066-4b11-b7f3-30aeffff726e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:31:35 compute-1 nova_compute[225855]: 2026-01-20 14:31:35.532 225859 DEBUG oslo_concurrency.lockutils [None req-cc85b8e0-128a-4116-8005-9a2d9a2e89ef 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] Acquiring lock "f3faf996-e066-4b11-b7f3-30aeffff726e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:31:35 compute-1 nova_compute[225855]: 2026-01-20 14:31:35.532 225859 DEBUG oslo_concurrency.lockutils [None req-cc85b8e0-128a-4116-8005-9a2d9a2e89ef 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] Lock "f3faf996-e066-4b11-b7f3-30aeffff726e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:31:35 compute-1 nova_compute[225855]: 2026-01-20 14:31:35.532 225859 DEBUG oslo_concurrency.lockutils [None req-cc85b8e0-128a-4116-8005-9a2d9a2e89ef 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] Lock "f3faf996-e066-4b11-b7f3-30aeffff726e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:31:35 compute-1 nova_compute[225855]: 2026-01-20 14:31:35.534 225859 INFO nova.compute.manager [None req-cc85b8e0-128a-4116-8005-9a2d9a2e89ef 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] [instance: f3faf996-e066-4b11-b7f3-30aeffff726e] Terminating instance
Jan 20 14:31:35 compute-1 nova_compute[225855]: 2026-01-20 14:31:35.535 225859 DEBUG nova.compute.manager [None req-cc85b8e0-128a-4116-8005-9a2d9a2e89ef 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] [instance: f3faf996-e066-4b11-b7f3-30aeffff726e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 20 14:31:35 compute-1 kernel: tapf65050ac-6a (unregistering): left promiscuous mode
Jan 20 14:31:35 compute-1 NetworkManager[49104]: <info>  [1768919495.7124] device (tapf65050ac-6a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 20 14:31:35 compute-1 nova_compute[225855]: 2026-01-20 14:31:35.724 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:31:35 compute-1 ovn_controller[130490]: 2026-01-20T14:31:35Z|00124|binding|INFO|Releasing lport f65050ac-6a44-490a-b4b9-8c82c1f61630 from this chassis (sb_readonly=0)
Jan 20 14:31:35 compute-1 ovn_controller[130490]: 2026-01-20T14:31:35Z|00125|binding|INFO|Setting lport f65050ac-6a44-490a-b4b9-8c82c1f61630 down in Southbound
Jan 20 14:31:35 compute-1 ovn_controller[130490]: 2026-01-20T14:31:35Z|00126|binding|INFO|Removing iface tapf65050ac-6a ovn-installed in OVS
Jan 20 14:31:35 compute-1 nova_compute[225855]: 2026-01-20 14:31:35.726 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:31:35 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:31:35.732 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fc:ae:50 10.100.0.8'], port_security=['fa:16:3e:fc:ae:50 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'f3faf996-e066-4b11-b7f3-30aeffff726e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-02f86d1d-5cad-49c5-9004-3de3e4739ad5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0cee74dd60da4a839bb5eb0ba3137edf', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'e08f10e3-3a95-4e33-b03d-21860ea0dc91', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com', 'neutron:port_fip': '192.168.122.239'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1f4fb07a-2698-4a11-a9e3-5a66d678d9d5, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], logical_port=f65050ac-6a44-490a-b4b9-8c82c1f61630) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:31:35 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:31:35.733 140354 INFO neutron.agent.ovn.metadata.agent [-] Port f65050ac-6a44-490a-b4b9-8c82c1f61630 in datapath 02f86d1d-5cad-49c5-9004-3de3e4739ad5 unbound from our chassis
Jan 20 14:31:35 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:31:35.734 140354 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 02f86d1d-5cad-49c5-9004-3de3e4739ad5, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 20 14:31:35 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:31:35.736 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[42702bbe-6898-4272-a5a7-c3b8e8dd8ce9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:31:35 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:31:35.736 140354 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-02f86d1d-5cad-49c5-9004-3de3e4739ad5 namespace which is not needed anymore
Jan 20 14:31:35 compute-1 nova_compute[225855]: 2026-01-20 14:31:35.770 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:31:35 compute-1 systemd[1]: machine-qemu\x2d16\x2dinstance\x2d0000001e.scope: Deactivated successfully.
Jan 20 14:31:35 compute-1 systemd[1]: machine-qemu\x2d16\x2dinstance\x2d0000001e.scope: Consumed 16.166s CPU time.
Jan 20 14:31:35 compute-1 systemd-machined[194361]: Machine qemu-16-instance-0000001e terminated.
Jan 20 14:31:35 compute-1 podman[241978]: 2026-01-20 14:31:35.848971605 +0000 UTC m=+0.098533206 container health_status 72be95c12e041eb2cd1a16ad70dfa1391d9f03a5f3180a614ead9459dcfcd46f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Jan 20 14:31:35 compute-1 neutron-haproxy-ovnmeta-02f86d1d-5cad-49c5-9004-3de3e4739ad5[240617]: [NOTICE]   (240621) : haproxy version is 2.8.14-c23fe91
Jan 20 14:31:35 compute-1 neutron-haproxy-ovnmeta-02f86d1d-5cad-49c5-9004-3de3e4739ad5[240617]: [NOTICE]   (240621) : path to executable is /usr/sbin/haproxy
Jan 20 14:31:35 compute-1 neutron-haproxy-ovnmeta-02f86d1d-5cad-49c5-9004-3de3e4739ad5[240617]: [WARNING]  (240621) : Exiting Master process...
Jan 20 14:31:35 compute-1 neutron-haproxy-ovnmeta-02f86d1d-5cad-49c5-9004-3de3e4739ad5[240617]: [ALERT]    (240621) : Current worker (240623) exited with code 143 (Terminated)
Jan 20 14:31:35 compute-1 neutron-haproxy-ovnmeta-02f86d1d-5cad-49c5-9004-3de3e4739ad5[240617]: [WARNING]  (240621) : All workers exited. Exiting... (0)
Jan 20 14:31:35 compute-1 systemd[1]: libpod-d8a0ee9d6ac43a9bc3bd7cb9d79f6a436d6ee7fbfe9aa9472ca3997bc110a44f.scope: Deactivated successfully.
Jan 20 14:31:35 compute-1 podman[242018]: 2026-01-20 14:31:35.90068208 +0000 UTC m=+0.071875527 container died d8a0ee9d6ac43a9bc3bd7cb9d79f6a436d6ee7fbfe9aa9472ca3997bc110a44f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-02f86d1d-5cad-49c5-9004-3de3e4739ad5, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true)
Jan 20 14:31:35 compute-1 ceph-mon[81775]: pgmap v1230: 321 pgs: 321 active+clean; 200 MiB data, 474 MiB used, 21 GiB / 21 GiB avail; 8.6 KiB/s wr, 1 op/s
Jan 20 14:31:35 compute-1 kernel: tapf65050ac-6a: entered promiscuous mode
Jan 20 14:31:35 compute-1 systemd-udevd[241988]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 14:31:35 compute-1 kernel: tapf65050ac-6a (unregistering): left promiscuous mode
Jan 20 14:31:35 compute-1 NetworkManager[49104]: <info>  [1768919495.9563] manager: (tapf65050ac-6a): new Tun device (/org/freedesktop/NetworkManager/Devices/61)
Jan 20 14:31:35 compute-1 nova_compute[225855]: 2026-01-20 14:31:35.958 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:31:35 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d8a0ee9d6ac43a9bc3bd7cb9d79f6a436d6ee7fbfe9aa9472ca3997bc110a44f-userdata-shm.mount: Deactivated successfully.
Jan 20 14:31:35 compute-1 ovn_controller[130490]: 2026-01-20T14:31:35Z|00127|binding|INFO|Claiming lport f65050ac-6a44-490a-b4b9-8c82c1f61630 for this chassis.
Jan 20 14:31:35 compute-1 ovn_controller[130490]: 2026-01-20T14:31:35Z|00128|binding|INFO|f65050ac-6a44-490a-b4b9-8c82c1f61630: Claiming fa:16:3e:fc:ae:50 10.100.0.8
Jan 20 14:31:35 compute-1 systemd[1]: var-lib-containers-storage-overlay-7c2b8ddd195f8cf3eab1b6717aa28a9f762bb41de1ab6eed44d1d21f47344a69-merged.mount: Deactivated successfully.
Jan 20 14:31:35 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:31:35.971 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fc:ae:50 10.100.0.8'], port_security=['fa:16:3e:fc:ae:50 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'f3faf996-e066-4b11-b7f3-30aeffff726e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-02f86d1d-5cad-49c5-9004-3de3e4739ad5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0cee74dd60da4a839bb5eb0ba3137edf', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'e08f10e3-3a95-4e33-b03d-21860ea0dc91', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com', 'neutron:port_fip': '192.168.122.239'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1f4fb07a-2698-4a11-a9e3-5a66d678d9d5, chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], logical_port=f65050ac-6a44-490a-b4b9-8c82c1f61630) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:31:35 compute-1 podman[242018]: 2026-01-20 14:31:35.999244506 +0000 UTC m=+0.170437993 container cleanup d8a0ee9d6ac43a9bc3bd7cb9d79f6a436d6ee7fbfe9aa9472ca3997bc110a44f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-02f86d1d-5cad-49c5-9004-3de3e4739ad5, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 20 14:31:36 compute-1 ovn_controller[130490]: 2026-01-20T14:31:35Z|00129|binding|INFO|Setting lport f65050ac-6a44-490a-b4b9-8c82c1f61630 ovn-installed in OVS
Jan 20 14:31:36 compute-1 ovn_controller[130490]: 2026-01-20T14:31:35Z|00130|binding|INFO|Setting lport f65050ac-6a44-490a-b4b9-8c82c1f61630 up in Southbound
Jan 20 14:31:36 compute-1 nova_compute[225855]: 2026-01-20 14:31:36.001 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:31:36 compute-1 ovn_controller[130490]: 2026-01-20T14:31:36Z|00131|binding|INFO|Releasing lport f65050ac-6a44-490a-b4b9-8c82c1f61630 from this chassis (sb_readonly=1)
Jan 20 14:31:36 compute-1 ovn_controller[130490]: 2026-01-20T14:31:36Z|00132|binding|INFO|Removing iface tapf65050ac-6a ovn-installed in OVS
Jan 20 14:31:36 compute-1 ovn_controller[130490]: 2026-01-20T14:31:36Z|00133|if_status|INFO|Not setting lport f65050ac-6a44-490a-b4b9-8c82c1f61630 down as sb is readonly
Jan 20 14:31:36 compute-1 nova_compute[225855]: 2026-01-20 14:31:36.002 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:31:36 compute-1 nova_compute[225855]: 2026-01-20 14:31:36.003 225859 INFO nova.virt.libvirt.driver [-] [instance: f3faf996-e066-4b11-b7f3-30aeffff726e] Instance destroyed successfully.
Jan 20 14:31:36 compute-1 nova_compute[225855]: 2026-01-20 14:31:36.003 225859 DEBUG nova.objects.instance [None req-cc85b8e0-128a-4116-8005-9a2d9a2e89ef 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] Lazy-loading 'resources' on Instance uuid f3faf996-e066-4b11-b7f3-30aeffff726e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:31:36 compute-1 ovn_controller[130490]: 2026-01-20T14:31:36Z|00134|binding|INFO|Releasing lport f65050ac-6a44-490a-b4b9-8c82c1f61630 from this chassis (sb_readonly=0)
Jan 20 14:31:36 compute-1 ovn_controller[130490]: 2026-01-20T14:31:36Z|00135|binding|INFO|Setting lport f65050ac-6a44-490a-b4b9-8c82c1f61630 down in Southbound
Jan 20 14:31:36 compute-1 systemd[1]: libpod-conmon-d8a0ee9d6ac43a9bc3bd7cb9d79f6a436d6ee7fbfe9aa9472ca3997bc110a44f.scope: Deactivated successfully.
Jan 20 14:31:36 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:31:36.017 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fc:ae:50 10.100.0.8'], port_security=['fa:16:3e:fc:ae:50 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'f3faf996-e066-4b11-b7f3-30aeffff726e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-02f86d1d-5cad-49c5-9004-3de3e4739ad5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0cee74dd60da4a839bb5eb0ba3137edf', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'e08f10e3-3a95-4e33-b03d-21860ea0dc91', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com', 'neutron:port_fip': '192.168.122.239'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1f4fb07a-2698-4a11-a9e3-5a66d678d9d5, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], logical_port=f65050ac-6a44-490a-b4b9-8c82c1f61630) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:31:36 compute-1 nova_compute[225855]: 2026-01-20 14:31:36.020 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:31:36 compute-1 nova_compute[225855]: 2026-01-20 14:31:36.030 225859 DEBUG nova.virt.libvirt.vif [None req-cc85b8e0-128a-4116-8005-9a2d9a2e89ef 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T14:30:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-UpdateMultiattachVolumeNegativeTest-server-1191836092',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-updatemultiattachvolumenegativetest-server-1191836092',id=30,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBH+01n3DJe3yYfRmwifZEomZrLtaFilErLasmr7ze/p0n1d6nPaSWQOHrHfJ9ubgBCwoqlwHjFIWrKKyRcRI1f3OIubHCG4LO7UMySAzmCXBSDkLJPz6Qzoln3dTb/xrow==',key_name='tempest-keypair-696534507',keypairs=<?>,launch_index=0,launched_at=2026-01-20T14:30:23Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='0cee74dd60da4a839bb5eb0ba3137edf',ramdisk_id='',reservation_id='r-0tyxczv3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-UpdateMultiattachVolumeNegativeTest-859917658',owner_user_name='tempest-UpdateMultiattachVolumeNegativeTest-859917658-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T14:30:23Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='6a3fbc3f92a849e88cbf34d28ca17e43',uuid=f3faf996-e066-4b11-b7f3-30aeffff726e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f65050ac-6a44-490a-b4b9-8c82c1f61630", "address": "fa:16:3e:fc:ae:50", "network": {"id": "02f86d1d-5cad-49c5-9004-3de3e4739ad5", "bridge": "br-int", "label": "tempest-UpdateMultiattachVolumeNegativeTest-889517255-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.239", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0cee74dd60da4a839bb5eb0ba3137edf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf65050ac-6a", "ovs_interfaceid": "f65050ac-6a44-490a-b4b9-8c82c1f61630", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 20 14:31:36 compute-1 nova_compute[225855]: 2026-01-20 14:31:36.030 225859 DEBUG nova.network.os_vif_util [None req-cc85b8e0-128a-4116-8005-9a2d9a2e89ef 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] Converting VIF {"id": "f65050ac-6a44-490a-b4b9-8c82c1f61630", "address": "fa:16:3e:fc:ae:50", "network": {"id": "02f86d1d-5cad-49c5-9004-3de3e4739ad5", "bridge": "br-int", "label": "tempest-UpdateMultiattachVolumeNegativeTest-889517255-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.239", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0cee74dd60da4a839bb5eb0ba3137edf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf65050ac-6a", "ovs_interfaceid": "f65050ac-6a44-490a-b4b9-8c82c1f61630", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:31:36 compute-1 nova_compute[225855]: 2026-01-20 14:31:36.031 225859 DEBUG nova.network.os_vif_util [None req-cc85b8e0-128a-4116-8005-9a2d9a2e89ef 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:fc:ae:50,bridge_name='br-int',has_traffic_filtering=True,id=f65050ac-6a44-490a-b4b9-8c82c1f61630,network=Network(02f86d1d-5cad-49c5-9004-3de3e4739ad5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf65050ac-6a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:31:36 compute-1 nova_compute[225855]: 2026-01-20 14:31:36.031 225859 DEBUG os_vif [None req-cc85b8e0-128a-4116-8005-9a2d9a2e89ef 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:fc:ae:50,bridge_name='br-int',has_traffic_filtering=True,id=f65050ac-6a44-490a-b4b9-8c82c1f61630,network=Network(02f86d1d-5cad-49c5-9004-3de3e4739ad5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf65050ac-6a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 20 14:31:36 compute-1 nova_compute[225855]: 2026-01-20 14:31:36.033 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:31:36 compute-1 nova_compute[225855]: 2026-01-20 14:31:36.034 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf65050ac-6a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:31:36 compute-1 nova_compute[225855]: 2026-01-20 14:31:36.035 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:31:36 compute-1 nova_compute[225855]: 2026-01-20 14:31:36.037 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:31:36 compute-1 nova_compute[225855]: 2026-01-20 14:31:36.040 225859 INFO os_vif [None req-cc85b8e0-128a-4116-8005-9a2d9a2e89ef 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:fc:ae:50,bridge_name='br-int',has_traffic_filtering=True,id=f65050ac-6a44-490a-b4b9-8c82c1f61630,network=Network(02f86d1d-5cad-49c5-9004-3de3e4739ad5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf65050ac-6a')
Jan 20 14:31:36 compute-1 podman[242060]: 2026-01-20 14:31:36.077156497 +0000 UTC m=+0.052868457 container remove d8a0ee9d6ac43a9bc3bd7cb9d79f6a436d6ee7fbfe9aa9472ca3997bc110a44f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-02f86d1d-5cad-49c5-9004-3de3e4739ad5, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 20 14:31:36 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:31:36.084 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[ccad529c-f852-4a1e-8b17-2ca942213b4f]: (4, ('Tue Jan 20 02:31:35 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-02f86d1d-5cad-49c5-9004-3de3e4739ad5 (d8a0ee9d6ac43a9bc3bd7cb9d79f6a436d6ee7fbfe9aa9472ca3997bc110a44f)\nd8a0ee9d6ac43a9bc3bd7cb9d79f6a436d6ee7fbfe9aa9472ca3997bc110a44f\nTue Jan 20 02:31:36 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-02f86d1d-5cad-49c5-9004-3de3e4739ad5 (d8a0ee9d6ac43a9bc3bd7cb9d79f6a436d6ee7fbfe9aa9472ca3997bc110a44f)\nd8a0ee9d6ac43a9bc3bd7cb9d79f6a436d6ee7fbfe9aa9472ca3997bc110a44f\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:31:36 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:31:36.085 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[f315b231-c941-4785-a542-26a8d6d0abd1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:31:36 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:31:36.087 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap02f86d1d-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:31:36 compute-1 nova_compute[225855]: 2026-01-20 14:31:36.089 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:31:36 compute-1 kernel: tap02f86d1d-50: left promiscuous mode
Jan 20 14:31:36 compute-1 nova_compute[225855]: 2026-01-20 14:31:36.110 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:31:36 compute-1 nova_compute[225855]: 2026-01-20 14:31:36.111 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:31:36 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:31:36.117 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[f49f1d71-0929-4592-bc0d-802201e64cc3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:31:36 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:31:36.138 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[c284b254-7c6c-427a-a58c-75316f3b6e55]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:31:36 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:31:36.139 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[5d84458f-d193-4d6d-9c00-406be7e4c61e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:31:36 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:31:36.151 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[78cd529a-5f8c-448a-b5f4-bf99173fa543]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 446187, 'reachable_time': 33231, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 242093, 'error': None, 'target': 'ovnmeta-02f86d1d-5cad-49c5-9004-3de3e4739ad5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:31:36 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:31:36.154 140466 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-02f86d1d-5cad-49c5-9004-3de3e4739ad5 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 20 14:31:36 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:31:36.154 140466 DEBUG oslo.privsep.daemon [-] privsep: reply[c053d965-568e-4f47-bc28-12a2348037a8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:31:36 compute-1 systemd[1]: run-netns-ovnmeta\x2d02f86d1d\x2d5cad\x2d49c5\x2d9004\x2d3de3e4739ad5.mount: Deactivated successfully.
Jan 20 14:31:36 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:31:36.155 140354 INFO neutron.agent.ovn.metadata.agent [-] Port f65050ac-6a44-490a-b4b9-8c82c1f61630 in datapath 02f86d1d-5cad-49c5-9004-3de3e4739ad5 unbound from our chassis
Jan 20 14:31:36 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:31:36.156 140354 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 02f86d1d-5cad-49c5-9004-3de3e4739ad5, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 20 14:31:36 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:31:36.157 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[31c45d06-2b7b-42c3-838c-06ef399a19a8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:31:36 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:31:36.157 140354 INFO neutron.agent.ovn.metadata.agent [-] Port f65050ac-6a44-490a-b4b9-8c82c1f61630 in datapath 02f86d1d-5cad-49c5-9004-3de3e4739ad5 unbound from our chassis
Jan 20 14:31:36 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:31:36.158 140354 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 02f86d1d-5cad-49c5-9004-3de3e4739ad5, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 20 14:31:36 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:31:36.159 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[1c39594e-3d1c-4297-9a6b-592a711d2187]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:31:36 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:31:36 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:31:36 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:31:36.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:31:36 compute-1 nova_compute[225855]: 2026-01-20 14:31:36.470 225859 INFO nova.virt.libvirt.driver [None req-cc85b8e0-128a-4116-8005-9a2d9a2e89ef 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] [instance: f3faf996-e066-4b11-b7f3-30aeffff726e] Deleting instance files /var/lib/nova/instances/f3faf996-e066-4b11-b7f3-30aeffff726e_del
Jan 20 14:31:36 compute-1 nova_compute[225855]: 2026-01-20 14:31:36.471 225859 INFO nova.virt.libvirt.driver [None req-cc85b8e0-128a-4116-8005-9a2d9a2e89ef 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] [instance: f3faf996-e066-4b11-b7f3-30aeffff726e] Deletion of /var/lib/nova/instances/f3faf996-e066-4b11-b7f3-30aeffff726e_del complete
Jan 20 14:31:36 compute-1 nova_compute[225855]: 2026-01-20 14:31:36.568 225859 INFO nova.compute.manager [None req-cc85b8e0-128a-4116-8005-9a2d9a2e89ef 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] [instance: f3faf996-e066-4b11-b7f3-30aeffff726e] Took 1.03 seconds to destroy the instance on the hypervisor.
Jan 20 14:31:36 compute-1 nova_compute[225855]: 2026-01-20 14:31:36.569 225859 DEBUG oslo.service.loopingcall [None req-cc85b8e0-128a-4116-8005-9a2d9a2e89ef 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 20 14:31:36 compute-1 nova_compute[225855]: 2026-01-20 14:31:36.569 225859 DEBUG nova.compute.manager [-] [instance: f3faf996-e066-4b11-b7f3-30aeffff726e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 20 14:31:36 compute-1 nova_compute[225855]: 2026-01-20 14:31:36.569 225859 DEBUG nova.network.neutron [-] [instance: f3faf996-e066-4b11-b7f3-30aeffff726e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 20 14:31:37 compute-1 nova_compute[225855]: 2026-01-20 14:31:37.054 225859 DEBUG nova.compute.manager [req-3f9e0a72-0907-4a4e-94f7-7230b01add89 req-20cf6565-ffc2-4cf4-9147-7eae857ad506 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f3faf996-e066-4b11-b7f3-30aeffff726e] Received event network-vif-unplugged-f65050ac-6a44-490a-b4b9-8c82c1f61630 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:31:37 compute-1 nova_compute[225855]: 2026-01-20 14:31:37.055 225859 DEBUG oslo_concurrency.lockutils [req-3f9e0a72-0907-4a4e-94f7-7230b01add89 req-20cf6565-ffc2-4cf4-9147-7eae857ad506 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "f3faf996-e066-4b11-b7f3-30aeffff726e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:31:37 compute-1 nova_compute[225855]: 2026-01-20 14:31:37.055 225859 DEBUG oslo_concurrency.lockutils [req-3f9e0a72-0907-4a4e-94f7-7230b01add89 req-20cf6565-ffc2-4cf4-9147-7eae857ad506 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "f3faf996-e066-4b11-b7f3-30aeffff726e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:31:37 compute-1 nova_compute[225855]: 2026-01-20 14:31:37.056 225859 DEBUG oslo_concurrency.lockutils [req-3f9e0a72-0907-4a4e-94f7-7230b01add89 req-20cf6565-ffc2-4cf4-9147-7eae857ad506 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "f3faf996-e066-4b11-b7f3-30aeffff726e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:31:37 compute-1 nova_compute[225855]: 2026-01-20 14:31:37.056 225859 DEBUG nova.compute.manager [req-3f9e0a72-0907-4a4e-94f7-7230b01add89 req-20cf6565-ffc2-4cf4-9147-7eae857ad506 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f3faf996-e066-4b11-b7f3-30aeffff726e] No waiting events found dispatching network-vif-unplugged-f65050ac-6a44-490a-b4b9-8c82c1f61630 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:31:37 compute-1 nova_compute[225855]: 2026-01-20 14:31:37.057 225859 DEBUG nova.compute.manager [req-3f9e0a72-0907-4a4e-94f7-7230b01add89 req-20cf6565-ffc2-4cf4-9147-7eae857ad506 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f3faf996-e066-4b11-b7f3-30aeffff726e] Received event network-vif-unplugged-f65050ac-6a44-490a-b4b9-8c82c1f61630 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 20 14:31:37 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:31:37 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:31:37 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:31:37.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:31:38 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:31:38 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:31:38 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:31:38.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:31:38 compute-1 ceph-mon[81775]: pgmap v1231: 321 pgs: 321 active+clean; 175 MiB data, 474 MiB used, 21 GiB / 21 GiB avail; 2.9 KiB/s rd, 680 B/s wr, 5 op/s
Jan 20 14:31:39 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:31:39 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:31:39 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:31:39.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:31:39 compute-1 nova_compute[225855]: 2026-01-20 14:31:39.333 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:31:39 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/246380524' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:31:39 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:31:40 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:31:40 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:31:40 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:31:40.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:31:40 compute-1 ceph-mon[81775]: pgmap v1232: 321 pgs: 321 active+clean; 162 MiB data, 470 MiB used, 21 GiB / 21 GiB avail; 2.9 KiB/s rd, 682 B/s wr, 6 op/s
Jan 20 14:31:41 compute-1 nova_compute[225855]: 2026-01-20 14:31:41.037 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:31:41 compute-1 sudo[242097]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:31:41 compute-1 sudo[242097]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:31:41 compute-1 sudo[242097]: pam_unix(sudo:session): session closed for user root
Jan 20 14:31:41 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:31:41 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:31:41 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:31:41.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:31:41 compute-1 sudo[242122]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:31:41 compute-1 sudo[242122]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:31:41 compute-1 sudo[242122]: pam_unix(sudo:session): session closed for user root
Jan 20 14:31:41 compute-1 nova_compute[225855]: 2026-01-20 14:31:41.700 225859 DEBUG nova.compute.manager [req-3aa8789f-02ae-4d94-a233-8326f12f7be4 req-21a02727-99fe-4332-89ac-e6fc0270f9a1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f3faf996-e066-4b11-b7f3-30aeffff726e] Received event network-vif-plugged-f65050ac-6a44-490a-b4b9-8c82c1f61630 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:31:41 compute-1 nova_compute[225855]: 2026-01-20 14:31:41.701 225859 DEBUG oslo_concurrency.lockutils [req-3aa8789f-02ae-4d94-a233-8326f12f7be4 req-21a02727-99fe-4332-89ac-e6fc0270f9a1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "f3faf996-e066-4b11-b7f3-30aeffff726e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:31:41 compute-1 nova_compute[225855]: 2026-01-20 14:31:41.701 225859 DEBUG oslo_concurrency.lockutils [req-3aa8789f-02ae-4d94-a233-8326f12f7be4 req-21a02727-99fe-4332-89ac-e6fc0270f9a1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "f3faf996-e066-4b11-b7f3-30aeffff726e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:31:41 compute-1 nova_compute[225855]: 2026-01-20 14:31:41.702 225859 DEBUG oslo_concurrency.lockutils [req-3aa8789f-02ae-4d94-a233-8326f12f7be4 req-21a02727-99fe-4332-89ac-e6fc0270f9a1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "f3faf996-e066-4b11-b7f3-30aeffff726e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:31:41 compute-1 nova_compute[225855]: 2026-01-20 14:31:41.702 225859 DEBUG nova.compute.manager [req-3aa8789f-02ae-4d94-a233-8326f12f7be4 req-21a02727-99fe-4332-89ac-e6fc0270f9a1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f3faf996-e066-4b11-b7f3-30aeffff726e] No waiting events found dispatching network-vif-plugged-f65050ac-6a44-490a-b4b9-8c82c1f61630 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:31:41 compute-1 nova_compute[225855]: 2026-01-20 14:31:41.702 225859 WARNING nova.compute.manager [req-3aa8789f-02ae-4d94-a233-8326f12f7be4 req-21a02727-99fe-4332-89ac-e6fc0270f9a1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f3faf996-e066-4b11-b7f3-30aeffff726e] Received unexpected event network-vif-plugged-f65050ac-6a44-490a-b4b9-8c82c1f61630 for instance with vm_state active and task_state deleting.
Jan 20 14:31:41 compute-1 nova_compute[225855]: 2026-01-20 14:31:41.703 225859 DEBUG nova.compute.manager [req-3aa8789f-02ae-4d94-a233-8326f12f7be4 req-21a02727-99fe-4332-89ac-e6fc0270f9a1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f3faf996-e066-4b11-b7f3-30aeffff726e] Received event network-vif-plugged-f65050ac-6a44-490a-b4b9-8c82c1f61630 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:31:41 compute-1 nova_compute[225855]: 2026-01-20 14:31:41.704 225859 DEBUG oslo_concurrency.lockutils [req-3aa8789f-02ae-4d94-a233-8326f12f7be4 req-21a02727-99fe-4332-89ac-e6fc0270f9a1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "f3faf996-e066-4b11-b7f3-30aeffff726e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:31:41 compute-1 nova_compute[225855]: 2026-01-20 14:31:41.704 225859 DEBUG oslo_concurrency.lockutils [req-3aa8789f-02ae-4d94-a233-8326f12f7be4 req-21a02727-99fe-4332-89ac-e6fc0270f9a1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "f3faf996-e066-4b11-b7f3-30aeffff726e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:31:41 compute-1 nova_compute[225855]: 2026-01-20 14:31:41.704 225859 DEBUG oslo_concurrency.lockutils [req-3aa8789f-02ae-4d94-a233-8326f12f7be4 req-21a02727-99fe-4332-89ac-e6fc0270f9a1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "f3faf996-e066-4b11-b7f3-30aeffff726e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:31:41 compute-1 nova_compute[225855]: 2026-01-20 14:31:41.705 225859 DEBUG nova.compute.manager [req-3aa8789f-02ae-4d94-a233-8326f12f7be4 req-21a02727-99fe-4332-89ac-e6fc0270f9a1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f3faf996-e066-4b11-b7f3-30aeffff726e] No waiting events found dispatching network-vif-plugged-f65050ac-6a44-490a-b4b9-8c82c1f61630 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:31:41 compute-1 nova_compute[225855]: 2026-01-20 14:31:41.705 225859 WARNING nova.compute.manager [req-3aa8789f-02ae-4d94-a233-8326f12f7be4 req-21a02727-99fe-4332-89ac-e6fc0270f9a1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f3faf996-e066-4b11-b7f3-30aeffff726e] Received unexpected event network-vif-plugged-f65050ac-6a44-490a-b4b9-8c82c1f61630 for instance with vm_state active and task_state deleting.
Jan 20 14:31:41 compute-1 nova_compute[225855]: 2026-01-20 14:31:41.706 225859 DEBUG nova.compute.manager [req-3aa8789f-02ae-4d94-a233-8326f12f7be4 req-21a02727-99fe-4332-89ac-e6fc0270f9a1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f3faf996-e066-4b11-b7f3-30aeffff726e] Received event network-vif-plugged-f65050ac-6a44-490a-b4b9-8c82c1f61630 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:31:41 compute-1 nova_compute[225855]: 2026-01-20 14:31:41.706 225859 DEBUG oslo_concurrency.lockutils [req-3aa8789f-02ae-4d94-a233-8326f12f7be4 req-21a02727-99fe-4332-89ac-e6fc0270f9a1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "f3faf996-e066-4b11-b7f3-30aeffff726e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:31:41 compute-1 nova_compute[225855]: 2026-01-20 14:31:41.706 225859 DEBUG oslo_concurrency.lockutils [req-3aa8789f-02ae-4d94-a233-8326f12f7be4 req-21a02727-99fe-4332-89ac-e6fc0270f9a1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "f3faf996-e066-4b11-b7f3-30aeffff726e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:31:41 compute-1 nova_compute[225855]: 2026-01-20 14:31:41.707 225859 DEBUG oslo_concurrency.lockutils [req-3aa8789f-02ae-4d94-a233-8326f12f7be4 req-21a02727-99fe-4332-89ac-e6fc0270f9a1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "f3faf996-e066-4b11-b7f3-30aeffff726e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:31:41 compute-1 nova_compute[225855]: 2026-01-20 14:31:41.707 225859 DEBUG nova.compute.manager [req-3aa8789f-02ae-4d94-a233-8326f12f7be4 req-21a02727-99fe-4332-89ac-e6fc0270f9a1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f3faf996-e066-4b11-b7f3-30aeffff726e] No waiting events found dispatching network-vif-plugged-f65050ac-6a44-490a-b4b9-8c82c1f61630 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:31:41 compute-1 nova_compute[225855]: 2026-01-20 14:31:41.708 225859 WARNING nova.compute.manager [req-3aa8789f-02ae-4d94-a233-8326f12f7be4 req-21a02727-99fe-4332-89ac-e6fc0270f9a1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f3faf996-e066-4b11-b7f3-30aeffff726e] Received unexpected event network-vif-plugged-f65050ac-6a44-490a-b4b9-8c82c1f61630 for instance with vm_state active and task_state deleting.
Jan 20 14:31:41 compute-1 ceph-mon[81775]: pgmap v1233: 321 pgs: 321 active+clean; 121 MiB data, 428 MiB used, 21 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 20 14:31:42 compute-1 nova_compute[225855]: 2026-01-20 14:31:42.312 225859 DEBUG nova.network.neutron [-] [instance: f3faf996-e066-4b11-b7f3-30aeffff726e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:31:42 compute-1 nova_compute[225855]: 2026-01-20 14:31:42.403 225859 INFO nova.compute.manager [-] [instance: f3faf996-e066-4b11-b7f3-30aeffff726e] Took 5.83 seconds to deallocate network for instance.
Jan 20 14:31:42 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:31:42 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:31:42 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:31:42.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:31:42 compute-1 nova_compute[225855]: 2026-01-20 14:31:42.510 225859 DEBUG oslo_concurrency.lockutils [None req-cc85b8e0-128a-4116-8005-9a2d9a2e89ef 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:31:42 compute-1 nova_compute[225855]: 2026-01-20 14:31:42.510 225859 DEBUG oslo_concurrency.lockutils [None req-cc85b8e0-128a-4116-8005-9a2d9a2e89ef 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:31:42 compute-1 nova_compute[225855]: 2026-01-20 14:31:42.642 225859 DEBUG oslo_concurrency.processutils [None req-cc85b8e0-128a-4116-8005-9a2d9a2e89ef 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:31:42 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/3784413517' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:31:43 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:31:43 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2619803401' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:31:43 compute-1 nova_compute[225855]: 2026-01-20 14:31:43.055 225859 DEBUG oslo_concurrency.processutils [None req-cc85b8e0-128a-4116-8005-9a2d9a2e89ef 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.412s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:31:43 compute-1 nova_compute[225855]: 2026-01-20 14:31:43.061 225859 DEBUG nova.compute.provider_tree [None req-cc85b8e0-128a-4116-8005-9a2d9a2e89ef 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:31:43 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:31:43 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:31:43 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:31:43.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:31:43 compute-1 nova_compute[225855]: 2026-01-20 14:31:43.210 225859 DEBUG nova.compute.manager [req-665a852b-45e2-41dc-baa5-6dac3582993a req-1bd26dbc-f3f6-456e-9725-9c489e4d6dbc 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f3faf996-e066-4b11-b7f3-30aeffff726e] Received event network-vif-deleted-f65050ac-6a44-490a-b4b9-8c82c1f61630 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:31:43 compute-1 nova_compute[225855]: 2026-01-20 14:31:43.247 225859 DEBUG nova.scheduler.client.report [None req-cc85b8e0-128a-4116-8005-9a2d9a2e89ef 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:31:43 compute-1 nova_compute[225855]: 2026-01-20 14:31:43.295 225859 DEBUG oslo_concurrency.lockutils [None req-cc85b8e0-128a-4116-8005-9a2d9a2e89ef 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.785s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:31:43 compute-1 nova_compute[225855]: 2026-01-20 14:31:43.364 225859 INFO nova.scheduler.client.report [None req-cc85b8e0-128a-4116-8005-9a2d9a2e89ef 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] Deleted allocations for instance f3faf996-e066-4b11-b7f3-30aeffff726e
Jan 20 14:31:43 compute-1 nova_compute[225855]: 2026-01-20 14:31:43.514 225859 DEBUG oslo_concurrency.lockutils [None req-cc85b8e0-128a-4116-8005-9a2d9a2e89ef 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] Lock "f3faf996-e066-4b11-b7f3-30aeffff726e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 7.982s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:31:43 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/2619803401' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:31:43 compute-1 ceph-mon[81775]: pgmap v1234: 321 pgs: 321 active+clean; 121 MiB data, 428 MiB used, 21 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 20 14:31:44 compute-1 nova_compute[225855]: 2026-01-20 14:31:44.038 225859 DEBUG nova.compute.manager [req-ece0dc98-2deb-472e-a234-eb2b47d208cb req-f01e7aaf-914b-4d30-a66d-65fb4aa231ad 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f3faf996-e066-4b11-b7f3-30aeffff726e] Received event network-vif-plugged-f65050ac-6a44-490a-b4b9-8c82c1f61630 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:31:44 compute-1 nova_compute[225855]: 2026-01-20 14:31:44.038 225859 DEBUG oslo_concurrency.lockutils [req-ece0dc98-2deb-472e-a234-eb2b47d208cb req-f01e7aaf-914b-4d30-a66d-65fb4aa231ad 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "f3faf996-e066-4b11-b7f3-30aeffff726e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:31:44 compute-1 nova_compute[225855]: 2026-01-20 14:31:44.039 225859 DEBUG oslo_concurrency.lockutils [req-ece0dc98-2deb-472e-a234-eb2b47d208cb req-f01e7aaf-914b-4d30-a66d-65fb4aa231ad 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "f3faf996-e066-4b11-b7f3-30aeffff726e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:31:44 compute-1 nova_compute[225855]: 2026-01-20 14:31:44.039 225859 DEBUG oslo_concurrency.lockutils [req-ece0dc98-2deb-472e-a234-eb2b47d208cb req-f01e7aaf-914b-4d30-a66d-65fb4aa231ad 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "f3faf996-e066-4b11-b7f3-30aeffff726e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:31:44 compute-1 nova_compute[225855]: 2026-01-20 14:31:44.039 225859 DEBUG nova.compute.manager [req-ece0dc98-2deb-472e-a234-eb2b47d208cb req-f01e7aaf-914b-4d30-a66d-65fb4aa231ad 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f3faf996-e066-4b11-b7f3-30aeffff726e] No waiting events found dispatching network-vif-plugged-f65050ac-6a44-490a-b4b9-8c82c1f61630 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:31:44 compute-1 nova_compute[225855]: 2026-01-20 14:31:44.039 225859 WARNING nova.compute.manager [req-ece0dc98-2deb-472e-a234-eb2b47d208cb req-f01e7aaf-914b-4d30-a66d-65fb4aa231ad 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f3faf996-e066-4b11-b7f3-30aeffff726e] Received unexpected event network-vif-plugged-f65050ac-6a44-490a-b4b9-8c82c1f61630 for instance with vm_state deleted and task_state None.
Jan 20 14:31:44 compute-1 nova_compute[225855]: 2026-01-20 14:31:44.335 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:31:44 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:31:44 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:31:44 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:31:44.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:31:44 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:31:44 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/2511464146' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:31:44 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/1240311285' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:31:45 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:31:45 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:31:45 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:31:45.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:31:45 compute-1 ceph-mon[81775]: pgmap v1235: 321 pgs: 321 active+clean; 133 MiB data, 447 MiB used, 21 GiB / 21 GiB avail; 67 KiB/s rd, 1.8 MiB/s wr, 96 op/s
Jan 20 14:31:46 compute-1 podman[242172]: 2026-01-20 14:31:46.019617272 +0000 UTC m=+0.061332599 container health_status 533cd8e0c3b8b7f910f15cef49e7f879ab46c12ce646a35628c15f13725eb500 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 20 14:31:46 compute-1 nova_compute[225855]: 2026-01-20 14:31:46.040 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:31:46 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:31:46 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:31:46 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:31:46.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:31:47 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:31:47 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:31:47 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:31:47.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:31:48 compute-1 ceph-mon[81775]: pgmap v1236: 321 pgs: 321 active+clean; 127 MiB data, 440 MiB used, 21 GiB / 21 GiB avail; 75 KiB/s rd, 2.6 MiB/s wr, 111 op/s
Jan 20 14:31:48 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/1433866750' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:31:48 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:31:48 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:31:48 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:31:48.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:31:49 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:31:49 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:31:49 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:31:49.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:31:49 compute-1 nova_compute[225855]: 2026-01-20 14:31:49.338 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:31:49 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:31:50 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:31:50 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:31:50 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:31:50.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:31:50 compute-1 ceph-mon[81775]: pgmap v1237: 321 pgs: 321 active+clean; 134 MiB data, 449 MiB used, 21 GiB / 21 GiB avail; 183 KiB/s rd, 3.5 MiB/s wr, 116 op/s
Jan 20 14:31:51 compute-1 nova_compute[225855]: 2026-01-20 14:31:51.000 225859 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1768919495.9989328, f3faf996-e066-4b11-b7f3-30aeffff726e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:31:51 compute-1 nova_compute[225855]: 2026-01-20 14:31:51.001 225859 INFO nova.compute.manager [-] [instance: f3faf996-e066-4b11-b7f3-30aeffff726e] VM Stopped (Lifecycle Event)
Jan 20 14:31:51 compute-1 nova_compute[225855]: 2026-01-20 14:31:51.035 225859 DEBUG nova.compute.manager [None req-86a42ba4-7e30-43e8-8bed-b6a73b5db3aa - - - - - -] [instance: f3faf996-e066-4b11-b7f3-30aeffff726e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:31:51 compute-1 nova_compute[225855]: 2026-01-20 14:31:51.044 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:31:51 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:31:51 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:31:51 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:31:51.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:31:52 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:31:52 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:31:52 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:31:52.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:31:52 compute-1 ceph-mon[81775]: pgmap v1238: 321 pgs: 321 active+clean; 134 MiB data, 431 MiB used, 21 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 177 op/s
Jan 20 14:31:53 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:31:53 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:31:53 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:31:53.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:31:53 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/4014400721' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 14:31:53 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/4014400721' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 14:31:53 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/3938435753' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:31:53 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/715650878' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:31:54 compute-1 nova_compute[225855]: 2026-01-20 14:31:54.340 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:31:54 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:31:54 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:31:54 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:31:54.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:31:54 compute-1 ceph-mon[81775]: pgmap v1239: 321 pgs: 321 active+clean; 134 MiB data, 431 MiB used, 21 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 155 op/s
Jan 20 14:31:54 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/4270292619' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 14:31:54 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/4270292619' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 14:31:54 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:31:55 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:31:55 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:31:55 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:31:55.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:31:56 compute-1 nova_compute[225855]: 2026-01-20 14:31:56.046 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:31:56 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:31:56 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:31:56 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:31:56.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:31:56 compute-1 ceph-mon[81775]: pgmap v1240: 321 pgs: 321 active+clean; 134 MiB data, 427 MiB used, 21 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 167 op/s
Jan 20 14:31:57 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:31:57 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:31:57 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:31:57.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:31:58 compute-1 nova_compute[225855]: 2026-01-20 14:31:58.313 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:31:58 compute-1 nova_compute[225855]: 2026-01-20 14:31:58.484 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:31:58 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:31:58 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:31:58 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:31:58.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:31:58 compute-1 ceph-mon[81775]: pgmap v1241: 321 pgs: 321 active+clean; 134 MiB data, 424 MiB used, 21 GiB / 21 GiB avail; 2.8 MiB/s rd, 1.8 MiB/s wr, 144 op/s
Jan 20 14:31:59 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:31:59 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:31:59 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:31:59.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:31:59 compute-1 nova_compute[225855]: 2026-01-20 14:31:59.342 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:31:59 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:32:00 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:32:00 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:32:00 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:32:00.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:32:00 compute-1 ceph-mon[81775]: pgmap v1242: 321 pgs: 321 active+clean; 143 MiB data, 431 MiB used, 21 GiB / 21 GiB avail; 3.7 MiB/s rd, 1.6 MiB/s wr, 176 op/s
Jan 20 14:32:01 compute-1 nova_compute[225855]: 2026-01-20 14:32:01.049 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:32:01 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:32:01 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:32:01 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:32:01.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:32:01 compute-1 sudo[242200]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:32:01 compute-1 sudo[242200]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:32:01 compute-1 sudo[242200]: pam_unix(sudo:session): session closed for user root
Jan 20 14:32:01 compute-1 sudo[242225]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:32:01 compute-1 sudo[242225]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:32:01 compute-1 sudo[242225]: pam_unix(sudo:session): session closed for user root
Jan 20 14:32:01 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:32:01.784 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=13, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '12:bb:42', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '06:92:24:f7:15:56'}, ipsec=False) old=SB_Global(nb_cfg=12) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:32:01 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:32:01.784 140354 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 20 14:32:01 compute-1 nova_compute[225855]: 2026-01-20 14:32:01.785 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:32:02 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:32:02 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:32:02 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:32:02.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:32:03 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:32:03 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:32:03 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:32:03.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:32:03 compute-1 ceph-mon[81775]: pgmap v1243: 321 pgs: 321 active+clean; 166 MiB data, 449 MiB used, 21 GiB / 21 GiB avail; 3.9 MiB/s rd, 2.1 MiB/s wr, 208 op/s
Jan 20 14:32:04 compute-1 nova_compute[225855]: 2026-01-20 14:32:04.345 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:32:04 compute-1 ceph-mon[81775]: pgmap v1244: 321 pgs: 321 active+clean; 166 MiB data, 449 MiB used, 21 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 147 op/s
Jan 20 14:32:04 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:32:04 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:32:04 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:32:04.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:32:04 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:32:05 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:32:05 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:32:05 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:32:05.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:32:06 compute-1 podman[242254]: 2026-01-20 14:32:06.037888417 +0000 UTC m=+0.089063247 container health_status 72be95c12e041eb2cd1a16ad70dfa1391d9f03a5f3180a614ead9459dcfcd46f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3)
Jan 20 14:32:06 compute-1 nova_compute[225855]: 2026-01-20 14:32:06.051 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:32:06 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:32:06 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:32:06 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:32:06.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:32:06 compute-1 ceph-mon[81775]: pgmap v1245: 321 pgs: 321 active+clean; 122 MiB data, 420 MiB used, 21 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.2 MiB/s wr, 158 op/s
Jan 20 14:32:07 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:32:07 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:32:07 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:32:07.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:32:08 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:32:08 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:32:08 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:32:08.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:32:08 compute-1 ceph-mon[81775]: pgmap v1246: 321 pgs: 321 active+clean; 108 MiB data, 412 MiB used, 21 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.2 MiB/s wr, 160 op/s
Jan 20 14:32:09 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:32:09 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:32:09 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:32:09.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:32:09 compute-1 nova_compute[225855]: 2026-01-20 14:32:09.346 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:32:09 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:32:09.786 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5ffd4ac3-9266-4927-98ad-20a17782c725, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '13'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:32:09 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:32:09 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/1509807238' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:32:09 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/3429196109' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:32:10 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:32:10 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:32:10 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:32:10.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:32:10 compute-1 ceph-mon[81775]: pgmap v1247: 321 pgs: 321 active+clean; 113 MiB data, 405 MiB used, 21 GiB / 21 GiB avail; 1.4 MiB/s rd, 3.0 MiB/s wr, 156 op/s
Jan 20 14:32:11 compute-1 nova_compute[225855]: 2026-01-20 14:32:11.054 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:32:11 compute-1 sshd-session[242252]: Invalid user test from 45.179.5.170 port 51806
Jan 20 14:32:11 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:32:11 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:32:11 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:32:11.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:32:11 compute-1 ceph-mon[81775]: pgmap v1248: 321 pgs: 321 active+clean; 167 MiB data, 459 MiB used, 21 GiB / 21 GiB avail; 2.4 MiB/s rd, 5.5 MiB/s wr, 180 op/s
Jan 20 14:32:11 compute-1 sshd-session[242252]: Connection closed by invalid user test 45.179.5.170 port 51806 [preauth]
Jan 20 14:32:12 compute-1 nova_compute[225855]: 2026-01-20 14:32:12.033 225859 DEBUG oslo_concurrency.lockutils [None req-5a15f14b-d2e2-44fb-839f-cc33c32dc1af a76f8a7bf01145bf8c953695d87aed2a 16c25c6af46845c8b8f7beaa0a50bd38 - - default default] Acquiring lock "561d1914-3348-438c-84ba-1205f479c245" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:32:12 compute-1 nova_compute[225855]: 2026-01-20 14:32:12.034 225859 DEBUG oslo_concurrency.lockutils [None req-5a15f14b-d2e2-44fb-839f-cc33c32dc1af a76f8a7bf01145bf8c953695d87aed2a 16c25c6af46845c8b8f7beaa0a50bd38 - - default default] Lock "561d1914-3348-438c-84ba-1205f479c245" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:32:12 compute-1 nova_compute[225855]: 2026-01-20 14:32:12.053 225859 DEBUG nova.compute.manager [None req-5a15f14b-d2e2-44fb-839f-cc33c32dc1af a76f8a7bf01145bf8c953695d87aed2a 16c25c6af46845c8b8f7beaa0a50bd38 - - default default] [instance: 561d1914-3348-438c-84ba-1205f479c245] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 20 14:32:12 compute-1 nova_compute[225855]: 2026-01-20 14:32:12.200 225859 DEBUG oslo_concurrency.lockutils [None req-5a15f14b-d2e2-44fb-839f-cc33c32dc1af a76f8a7bf01145bf8c953695d87aed2a 16c25c6af46845c8b8f7beaa0a50bd38 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:32:12 compute-1 nova_compute[225855]: 2026-01-20 14:32:12.201 225859 DEBUG oslo_concurrency.lockutils [None req-5a15f14b-d2e2-44fb-839f-cc33c32dc1af a76f8a7bf01145bf8c953695d87aed2a 16c25c6af46845c8b8f7beaa0a50bd38 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:32:12 compute-1 nova_compute[225855]: 2026-01-20 14:32:12.211 225859 DEBUG nova.virt.hardware [None req-5a15f14b-d2e2-44fb-839f-cc33c32dc1af a76f8a7bf01145bf8c953695d87aed2a 16c25c6af46845c8b8f7beaa0a50bd38 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 20 14:32:12 compute-1 nova_compute[225855]: 2026-01-20 14:32:12.211 225859 INFO nova.compute.claims [None req-5a15f14b-d2e2-44fb-839f-cc33c32dc1af a76f8a7bf01145bf8c953695d87aed2a 16c25c6af46845c8b8f7beaa0a50bd38 - - default default] [instance: 561d1914-3348-438c-84ba-1205f479c245] Claim successful on node compute-1.ctlplane.example.com
Jan 20 14:32:12 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:32:12 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:32:12 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:32:12.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:32:12 compute-1 nova_compute[225855]: 2026-01-20 14:32:12.622 225859 DEBUG oslo_concurrency.processutils [None req-5a15f14b-d2e2-44fb-839f-cc33c32dc1af a76f8a7bf01145bf8c953695d87aed2a 16c25c6af46845c8b8f7beaa0a50bd38 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:32:13 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:32:13 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2127417258' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:32:13 compute-1 nova_compute[225855]: 2026-01-20 14:32:13.065 225859 DEBUG oslo_concurrency.processutils [None req-5a15f14b-d2e2-44fb-839f-cc33c32dc1af a76f8a7bf01145bf8c953695d87aed2a 16c25c6af46845c8b8f7beaa0a50bd38 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:32:13 compute-1 nova_compute[225855]: 2026-01-20 14:32:13.070 225859 DEBUG nova.compute.provider_tree [None req-5a15f14b-d2e2-44fb-839f-cc33c32dc1af a76f8a7bf01145bf8c953695d87aed2a 16c25c6af46845c8b8f7beaa0a50bd38 - - default default] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:32:13 compute-1 nova_compute[225855]: 2026-01-20 14:32:13.087 225859 DEBUG nova.scheduler.client.report [None req-5a15f14b-d2e2-44fb-839f-cc33c32dc1af a76f8a7bf01145bf8c953695d87aed2a 16c25c6af46845c8b8f7beaa0a50bd38 - - default default] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:32:13 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/2127417258' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:32:13 compute-1 nova_compute[225855]: 2026-01-20 14:32:13.113 225859 DEBUG oslo_concurrency.lockutils [None req-5a15f14b-d2e2-44fb-839f-cc33c32dc1af a76f8a7bf01145bf8c953695d87aed2a 16c25c6af46845c8b8f7beaa0a50bd38 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.912s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:32:13 compute-1 nova_compute[225855]: 2026-01-20 14:32:13.115 225859 DEBUG nova.compute.manager [None req-5a15f14b-d2e2-44fb-839f-cc33c32dc1af a76f8a7bf01145bf8c953695d87aed2a 16c25c6af46845c8b8f7beaa0a50bd38 - - default default] [instance: 561d1914-3348-438c-84ba-1205f479c245] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 20 14:32:13 compute-1 nova_compute[225855]: 2026-01-20 14:32:13.180 225859 DEBUG nova.compute.manager [None req-5a15f14b-d2e2-44fb-839f-cc33c32dc1af a76f8a7bf01145bf8c953695d87aed2a 16c25c6af46845c8b8f7beaa0a50bd38 - - default default] [instance: 561d1914-3348-438c-84ba-1205f479c245] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 20 14:32:13 compute-1 nova_compute[225855]: 2026-01-20 14:32:13.181 225859 DEBUG nova.network.neutron [None req-5a15f14b-d2e2-44fb-839f-cc33c32dc1af a76f8a7bf01145bf8c953695d87aed2a 16c25c6af46845c8b8f7beaa0a50bd38 - - default default] [instance: 561d1914-3348-438c-84ba-1205f479c245] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 20 14:32:13 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:32:13 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:32:13 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:32:13.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:32:13 compute-1 nova_compute[225855]: 2026-01-20 14:32:13.249 225859 INFO nova.virt.libvirt.driver [None req-5a15f14b-d2e2-44fb-839f-cc33c32dc1af a76f8a7bf01145bf8c953695d87aed2a 16c25c6af46845c8b8f7beaa0a50bd38 - - default default] [instance: 561d1914-3348-438c-84ba-1205f479c245] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 20 14:32:13 compute-1 nova_compute[225855]: 2026-01-20 14:32:13.269 225859 DEBUG nova.compute.manager [None req-5a15f14b-d2e2-44fb-839f-cc33c32dc1af a76f8a7bf01145bf8c953695d87aed2a 16c25c6af46845c8b8f7beaa0a50bd38 - - default default] [instance: 561d1914-3348-438c-84ba-1205f479c245] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 20 14:32:13 compute-1 nova_compute[225855]: 2026-01-20 14:32:13.410 225859 DEBUG nova.compute.manager [None req-5a15f14b-d2e2-44fb-839f-cc33c32dc1af a76f8a7bf01145bf8c953695d87aed2a 16c25c6af46845c8b8f7beaa0a50bd38 - - default default] [instance: 561d1914-3348-438c-84ba-1205f479c245] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 20 14:32:13 compute-1 nova_compute[225855]: 2026-01-20 14:32:13.411 225859 DEBUG nova.virt.libvirt.driver [None req-5a15f14b-d2e2-44fb-839f-cc33c32dc1af a76f8a7bf01145bf8c953695d87aed2a 16c25c6af46845c8b8f7beaa0a50bd38 - - default default] [instance: 561d1914-3348-438c-84ba-1205f479c245] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 20 14:32:13 compute-1 nova_compute[225855]: 2026-01-20 14:32:13.411 225859 INFO nova.virt.libvirt.driver [None req-5a15f14b-d2e2-44fb-839f-cc33c32dc1af a76f8a7bf01145bf8c953695d87aed2a 16c25c6af46845c8b8f7beaa0a50bd38 - - default default] [instance: 561d1914-3348-438c-84ba-1205f479c245] Creating image(s)
Jan 20 14:32:13 compute-1 nova_compute[225855]: 2026-01-20 14:32:13.441 225859 DEBUG nova.storage.rbd_utils [None req-5a15f14b-d2e2-44fb-839f-cc33c32dc1af a76f8a7bf01145bf8c953695d87aed2a 16c25c6af46845c8b8f7beaa0a50bd38 - - default default] rbd image 561d1914-3348-438c-84ba-1205f479c245_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:32:13 compute-1 nova_compute[225855]: 2026-01-20 14:32:13.469 225859 DEBUG nova.storage.rbd_utils [None req-5a15f14b-d2e2-44fb-839f-cc33c32dc1af a76f8a7bf01145bf8c953695d87aed2a 16c25c6af46845c8b8f7beaa0a50bd38 - - default default] rbd image 561d1914-3348-438c-84ba-1205f479c245_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:32:13 compute-1 nova_compute[225855]: 2026-01-20 14:32:13.497 225859 DEBUG nova.storage.rbd_utils [None req-5a15f14b-d2e2-44fb-839f-cc33c32dc1af a76f8a7bf01145bf8c953695d87aed2a 16c25c6af46845c8b8f7beaa0a50bd38 - - default default] rbd image 561d1914-3348-438c-84ba-1205f479c245_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:32:13 compute-1 nova_compute[225855]: 2026-01-20 14:32:13.501 225859 DEBUG oslo_concurrency.processutils [None req-5a15f14b-d2e2-44fb-839f-cc33c32dc1af a76f8a7bf01145bf8c953695d87aed2a 16c25c6af46845c8b8f7beaa0a50bd38 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:32:13 compute-1 nova_compute[225855]: 2026-01-20 14:32:13.598 225859 DEBUG oslo_concurrency.processutils [None req-5a15f14b-d2e2-44fb-839f-cc33c32dc1af a76f8a7bf01145bf8c953695d87aed2a 16c25c6af46845c8b8f7beaa0a50bd38 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json" returned: 0 in 0.097s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:32:13 compute-1 nova_compute[225855]: 2026-01-20 14:32:13.600 225859 DEBUG oslo_concurrency.lockutils [None req-5a15f14b-d2e2-44fb-839f-cc33c32dc1af a76f8a7bf01145bf8c953695d87aed2a 16c25c6af46845c8b8f7beaa0a50bd38 - - default default] Acquiring lock "82d5c1918fd7c974214c7a48c1793a7a82560462" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:32:13 compute-1 nova_compute[225855]: 2026-01-20 14:32:13.601 225859 DEBUG oslo_concurrency.lockutils [None req-5a15f14b-d2e2-44fb-839f-cc33c32dc1af a76f8a7bf01145bf8c953695d87aed2a 16c25c6af46845c8b8f7beaa0a50bd38 - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:32:13 compute-1 nova_compute[225855]: 2026-01-20 14:32:13.602 225859 DEBUG oslo_concurrency.lockutils [None req-5a15f14b-d2e2-44fb-839f-cc33c32dc1af a76f8a7bf01145bf8c953695d87aed2a 16c25c6af46845c8b8f7beaa0a50bd38 - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:32:13 compute-1 nova_compute[225855]: 2026-01-20 14:32:13.641 225859 DEBUG nova.storage.rbd_utils [None req-5a15f14b-d2e2-44fb-839f-cc33c32dc1af a76f8a7bf01145bf8c953695d87aed2a 16c25c6af46845c8b8f7beaa0a50bd38 - - default default] rbd image 561d1914-3348-438c-84ba-1205f479c245_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:32:13 compute-1 nova_compute[225855]: 2026-01-20 14:32:13.646 225859 DEBUG oslo_concurrency.processutils [None req-5a15f14b-d2e2-44fb-839f-cc33c32dc1af a76f8a7bf01145bf8c953695d87aed2a 16c25c6af46845c8b8f7beaa0a50bd38 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 561d1914-3348-438c-84ba-1205f479c245_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:32:13 compute-1 nova_compute[225855]: 2026-01-20 14:32:13.705 225859 DEBUG nova.network.neutron [None req-5a15f14b-d2e2-44fb-839f-cc33c32dc1af a76f8a7bf01145bf8c953695d87aed2a 16c25c6af46845c8b8f7beaa0a50bd38 - - default default] [instance: 561d1914-3348-438c-84ba-1205f479c245] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Jan 20 14:32:13 compute-1 nova_compute[225855]: 2026-01-20 14:32:13.706 225859 DEBUG nova.compute.manager [None req-5a15f14b-d2e2-44fb-839f-cc33c32dc1af a76f8a7bf01145bf8c953695d87aed2a 16c25c6af46845c8b8f7beaa0a50bd38 - - default default] [instance: 561d1914-3348-438c-84ba-1205f479c245] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 20 14:32:14 compute-1 nova_compute[225855]: 2026-01-20 14:32:14.030 225859 DEBUG oslo_concurrency.processutils [None req-5a15f14b-d2e2-44fb-839f-cc33c32dc1af a76f8a7bf01145bf8c953695d87aed2a 16c25c6af46845c8b8f7beaa0a50bd38 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 561d1914-3348-438c-84ba-1205f479c245_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.384s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:32:14 compute-1 nova_compute[225855]: 2026-01-20 14:32:14.091 225859 DEBUG nova.storage.rbd_utils [None req-5a15f14b-d2e2-44fb-839f-cc33c32dc1af a76f8a7bf01145bf8c953695d87aed2a 16c25c6af46845c8b8f7beaa0a50bd38 - - default default] resizing rbd image 561d1914-3348-438c-84ba-1205f479c245_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 20 14:32:14 compute-1 ceph-mon[81775]: pgmap v1249: 321 pgs: 321 active+clean; 167 MiB data, 459 MiB used, 21 GiB / 21 GiB avail; 2.1 MiB/s rd, 3.9 MiB/s wr, 138 op/s
Jan 20 14:32:14 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/3261774672' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 14:32:14 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/3261774672' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 14:32:14 compute-1 nova_compute[225855]: 2026-01-20 14:32:14.184 225859 DEBUG nova.objects.instance [None req-5a15f14b-d2e2-44fb-839f-cc33c32dc1af a76f8a7bf01145bf8c953695d87aed2a 16c25c6af46845c8b8f7beaa0a50bd38 - - default default] Lazy-loading 'migration_context' on Instance uuid 561d1914-3348-438c-84ba-1205f479c245 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:32:14 compute-1 nova_compute[225855]: 2026-01-20 14:32:14.202 225859 DEBUG nova.virt.libvirt.driver [None req-5a15f14b-d2e2-44fb-839f-cc33c32dc1af a76f8a7bf01145bf8c953695d87aed2a 16c25c6af46845c8b8f7beaa0a50bd38 - - default default] [instance: 561d1914-3348-438c-84ba-1205f479c245] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 20 14:32:14 compute-1 nova_compute[225855]: 2026-01-20 14:32:14.203 225859 DEBUG nova.virt.libvirt.driver [None req-5a15f14b-d2e2-44fb-839f-cc33c32dc1af a76f8a7bf01145bf8c953695d87aed2a 16c25c6af46845c8b8f7beaa0a50bd38 - - default default] [instance: 561d1914-3348-438c-84ba-1205f479c245] Ensure instance console log exists: /var/lib/nova/instances/561d1914-3348-438c-84ba-1205f479c245/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 20 14:32:14 compute-1 nova_compute[225855]: 2026-01-20 14:32:14.203 225859 DEBUG oslo_concurrency.lockutils [None req-5a15f14b-d2e2-44fb-839f-cc33c32dc1af a76f8a7bf01145bf8c953695d87aed2a 16c25c6af46845c8b8f7beaa0a50bd38 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:32:14 compute-1 nova_compute[225855]: 2026-01-20 14:32:14.203 225859 DEBUG oslo_concurrency.lockutils [None req-5a15f14b-d2e2-44fb-839f-cc33c32dc1af a76f8a7bf01145bf8c953695d87aed2a 16c25c6af46845c8b8f7beaa0a50bd38 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:32:14 compute-1 nova_compute[225855]: 2026-01-20 14:32:14.204 225859 DEBUG oslo_concurrency.lockutils [None req-5a15f14b-d2e2-44fb-839f-cc33c32dc1af a76f8a7bf01145bf8c953695d87aed2a 16c25c6af46845c8b8f7beaa0a50bd38 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:32:14 compute-1 nova_compute[225855]: 2026-01-20 14:32:14.206 225859 DEBUG nova.virt.libvirt.driver [None req-5a15f14b-d2e2-44fb-839f-cc33c32dc1af a76f8a7bf01145bf8c953695d87aed2a 16c25c6af46845c8b8f7beaa0a50bd38 - - default default] [instance: 561d1914-3348-438c-84ba-1205f479c245] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'device_type': 'disk', 'encryption_options': None, 'size': 0, 'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'guest_format': None, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'image_id': 'a32b3e07-16d8-46fd-9a7b-c242c432fcf9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 20 14:32:14 compute-1 nova_compute[225855]: 2026-01-20 14:32:14.210 225859 WARNING nova.virt.libvirt.driver [None req-5a15f14b-d2e2-44fb-839f-cc33c32dc1af a76f8a7bf01145bf8c953695d87aed2a 16c25c6af46845c8b8f7beaa0a50bd38 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 14:32:14 compute-1 nova_compute[225855]: 2026-01-20 14:32:14.215 225859 DEBUG nova.virt.libvirt.host [None req-5a15f14b-d2e2-44fb-839f-cc33c32dc1af a76f8a7bf01145bf8c953695d87aed2a 16c25c6af46845c8b8f7beaa0a50bd38 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 20 14:32:14 compute-1 nova_compute[225855]: 2026-01-20 14:32:14.216 225859 DEBUG nova.virt.libvirt.host [None req-5a15f14b-d2e2-44fb-839f-cc33c32dc1af a76f8a7bf01145bf8c953695d87aed2a 16c25c6af46845c8b8f7beaa0a50bd38 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 20 14:32:14 compute-1 nova_compute[225855]: 2026-01-20 14:32:14.220 225859 DEBUG nova.virt.libvirt.host [None req-5a15f14b-d2e2-44fb-839f-cc33c32dc1af a76f8a7bf01145bf8c953695d87aed2a 16c25c6af46845c8b8f7beaa0a50bd38 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 20 14:32:14 compute-1 nova_compute[225855]: 2026-01-20 14:32:14.220 225859 DEBUG nova.virt.libvirt.host [None req-5a15f14b-d2e2-44fb-839f-cc33c32dc1af a76f8a7bf01145bf8c953695d87aed2a 16c25c6af46845c8b8f7beaa0a50bd38 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 20 14:32:14 compute-1 nova_compute[225855]: 2026-01-20 14:32:14.221 225859 DEBUG nova.virt.libvirt.driver [None req-5a15f14b-d2e2-44fb-839f-cc33c32dc1af a76f8a7bf01145bf8c953695d87aed2a 16c25c6af46845c8b8f7beaa0a50bd38 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 20 14:32:14 compute-1 nova_compute[225855]: 2026-01-20 14:32:14.221 225859 DEBUG nova.virt.hardware [None req-5a15f14b-d2e2-44fb-839f-cc33c32dc1af a76f8a7bf01145bf8c953695d87aed2a 16c25c6af46845c8b8f7beaa0a50bd38 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-20T14:21:55Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='522deaab-a741-4dbb-932d-d8b13a211c33',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 20 14:32:14 compute-1 nova_compute[225855]: 2026-01-20 14:32:14.222 225859 DEBUG nova.virt.hardware [None req-5a15f14b-d2e2-44fb-839f-cc33c32dc1af a76f8a7bf01145bf8c953695d87aed2a 16c25c6af46845c8b8f7beaa0a50bd38 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 20 14:32:14 compute-1 nova_compute[225855]: 2026-01-20 14:32:14.222 225859 DEBUG nova.virt.hardware [None req-5a15f14b-d2e2-44fb-839f-cc33c32dc1af a76f8a7bf01145bf8c953695d87aed2a 16c25c6af46845c8b8f7beaa0a50bd38 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 20 14:32:14 compute-1 nova_compute[225855]: 2026-01-20 14:32:14.222 225859 DEBUG nova.virt.hardware [None req-5a15f14b-d2e2-44fb-839f-cc33c32dc1af a76f8a7bf01145bf8c953695d87aed2a 16c25c6af46845c8b8f7beaa0a50bd38 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 20 14:32:14 compute-1 nova_compute[225855]: 2026-01-20 14:32:14.223 225859 DEBUG nova.virt.hardware [None req-5a15f14b-d2e2-44fb-839f-cc33c32dc1af a76f8a7bf01145bf8c953695d87aed2a 16c25c6af46845c8b8f7beaa0a50bd38 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 20 14:32:14 compute-1 nova_compute[225855]: 2026-01-20 14:32:14.223 225859 DEBUG nova.virt.hardware [None req-5a15f14b-d2e2-44fb-839f-cc33c32dc1af a76f8a7bf01145bf8c953695d87aed2a 16c25c6af46845c8b8f7beaa0a50bd38 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 20 14:32:14 compute-1 nova_compute[225855]: 2026-01-20 14:32:14.223 225859 DEBUG nova.virt.hardware [None req-5a15f14b-d2e2-44fb-839f-cc33c32dc1af a76f8a7bf01145bf8c953695d87aed2a 16c25c6af46845c8b8f7beaa0a50bd38 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 20 14:32:14 compute-1 nova_compute[225855]: 2026-01-20 14:32:14.223 225859 DEBUG nova.virt.hardware [None req-5a15f14b-d2e2-44fb-839f-cc33c32dc1af a76f8a7bf01145bf8c953695d87aed2a 16c25c6af46845c8b8f7beaa0a50bd38 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 20 14:32:14 compute-1 nova_compute[225855]: 2026-01-20 14:32:14.224 225859 DEBUG nova.virt.hardware [None req-5a15f14b-d2e2-44fb-839f-cc33c32dc1af a76f8a7bf01145bf8c953695d87aed2a 16c25c6af46845c8b8f7beaa0a50bd38 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 20 14:32:14 compute-1 nova_compute[225855]: 2026-01-20 14:32:14.224 225859 DEBUG nova.virt.hardware [None req-5a15f14b-d2e2-44fb-839f-cc33c32dc1af a76f8a7bf01145bf8c953695d87aed2a 16c25c6af46845c8b8f7beaa0a50bd38 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 20 14:32:14 compute-1 nova_compute[225855]: 2026-01-20 14:32:14.224 225859 DEBUG nova.virt.hardware [None req-5a15f14b-d2e2-44fb-839f-cc33c32dc1af a76f8a7bf01145bf8c953695d87aed2a 16c25c6af46845c8b8f7beaa0a50bd38 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 20 14:32:14 compute-1 nova_compute[225855]: 2026-01-20 14:32:14.227 225859 DEBUG oslo_concurrency.processutils [None req-5a15f14b-d2e2-44fb-839f-cc33c32dc1af a76f8a7bf01145bf8c953695d87aed2a 16c25c6af46845c8b8f7beaa0a50bd38 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:32:14 compute-1 nova_compute[225855]: 2026-01-20 14:32:14.347 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:32:14 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:32:14 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:32:14 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:32:14.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:32:14 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:32:14 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/651665474' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:32:14 compute-1 nova_compute[225855]: 2026-01-20 14:32:14.671 225859 DEBUG oslo_concurrency.processutils [None req-5a15f14b-d2e2-44fb-839f-cc33c32dc1af a76f8a7bf01145bf8c953695d87aed2a 16c25c6af46845c8b8f7beaa0a50bd38 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:32:14 compute-1 nova_compute[225855]: 2026-01-20 14:32:14.710 225859 DEBUG nova.storage.rbd_utils [None req-5a15f14b-d2e2-44fb-839f-cc33c32dc1af a76f8a7bf01145bf8c953695d87aed2a 16c25c6af46845c8b8f7beaa0a50bd38 - - default default] rbd image 561d1914-3348-438c-84ba-1205f479c245_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:32:14 compute-1 nova_compute[225855]: 2026-01-20 14:32:14.715 225859 DEBUG oslo_concurrency.processutils [None req-5a15f14b-d2e2-44fb-839f-cc33c32dc1af a76f8a7bf01145bf8c953695d87aed2a 16c25c6af46845c8b8f7beaa0a50bd38 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:32:14 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:32:15 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:32:15 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1619627399' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:32:15 compute-1 nova_compute[225855]: 2026-01-20 14:32:15.145 225859 DEBUG oslo_concurrency.processutils [None req-5a15f14b-d2e2-44fb-839f-cc33c32dc1af a76f8a7bf01145bf8c953695d87aed2a 16c25c6af46845c8b8f7beaa0a50bd38 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:32:15 compute-1 nova_compute[225855]: 2026-01-20 14:32:15.147 225859 DEBUG nova.objects.instance [None req-5a15f14b-d2e2-44fb-839f-cc33c32dc1af a76f8a7bf01145bf8c953695d87aed2a 16c25c6af46845c8b8f7beaa0a50bd38 - - default default] Lazy-loading 'pci_devices' on Instance uuid 561d1914-3348-438c-84ba-1205f479c245 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:32:15 compute-1 nova_compute[225855]: 2026-01-20 14:32:15.172 225859 DEBUG nova.virt.libvirt.driver [None req-5a15f14b-d2e2-44fb-839f-cc33c32dc1af a76f8a7bf01145bf8c953695d87aed2a 16c25c6af46845c8b8f7beaa0a50bd38 - - default default] [instance: 561d1914-3348-438c-84ba-1205f479c245] End _get_guest_xml xml=<domain type="kvm">
Jan 20 14:32:15 compute-1 nova_compute[225855]:   <uuid>561d1914-3348-438c-84ba-1205f479c245</uuid>
Jan 20 14:32:15 compute-1 nova_compute[225855]:   <name>instance-00000024</name>
Jan 20 14:32:15 compute-1 nova_compute[225855]:   <memory>131072</memory>
Jan 20 14:32:15 compute-1 nova_compute[225855]:   <vcpu>1</vcpu>
Jan 20 14:32:15 compute-1 nova_compute[225855]:   <metadata>
Jan 20 14:32:15 compute-1 nova_compute[225855]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 14:32:15 compute-1 nova_compute[225855]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 14:32:15 compute-1 nova_compute[225855]:       <nova:name>tempest-ServerDiagnosticsNegativeTest-server-607115172</nova:name>
Jan 20 14:32:15 compute-1 nova_compute[225855]:       <nova:creationTime>2026-01-20 14:32:14</nova:creationTime>
Jan 20 14:32:15 compute-1 nova_compute[225855]:       <nova:flavor name="m1.nano">
Jan 20 14:32:15 compute-1 nova_compute[225855]:         <nova:memory>128</nova:memory>
Jan 20 14:32:15 compute-1 nova_compute[225855]:         <nova:disk>1</nova:disk>
Jan 20 14:32:15 compute-1 nova_compute[225855]:         <nova:swap>0</nova:swap>
Jan 20 14:32:15 compute-1 nova_compute[225855]:         <nova:ephemeral>0</nova:ephemeral>
Jan 20 14:32:15 compute-1 nova_compute[225855]:         <nova:vcpus>1</nova:vcpus>
Jan 20 14:32:15 compute-1 nova_compute[225855]:       </nova:flavor>
Jan 20 14:32:15 compute-1 nova_compute[225855]:       <nova:owner>
Jan 20 14:32:15 compute-1 nova_compute[225855]:         <nova:user uuid="a76f8a7bf01145bf8c953695d87aed2a">tempest-ServerDiagnosticsNegativeTest-1836592071-project-member</nova:user>
Jan 20 14:32:15 compute-1 nova_compute[225855]:         <nova:project uuid="16c25c6af46845c8b8f7beaa0a50bd38">tempest-ServerDiagnosticsNegativeTest-1836592071</nova:project>
Jan 20 14:32:15 compute-1 nova_compute[225855]:       </nova:owner>
Jan 20 14:32:15 compute-1 nova_compute[225855]:       <nova:root type="image" uuid="a32b3e07-16d8-46fd-9a7b-c242c432fcf9"/>
Jan 20 14:32:15 compute-1 nova_compute[225855]:       <nova:ports/>
Jan 20 14:32:15 compute-1 nova_compute[225855]:     </nova:instance>
Jan 20 14:32:15 compute-1 nova_compute[225855]:   </metadata>
Jan 20 14:32:15 compute-1 nova_compute[225855]:   <sysinfo type="smbios">
Jan 20 14:32:15 compute-1 nova_compute[225855]:     <system>
Jan 20 14:32:15 compute-1 nova_compute[225855]:       <entry name="manufacturer">RDO</entry>
Jan 20 14:32:15 compute-1 nova_compute[225855]:       <entry name="product">OpenStack Compute</entry>
Jan 20 14:32:15 compute-1 nova_compute[225855]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 14:32:15 compute-1 nova_compute[225855]:       <entry name="serial">561d1914-3348-438c-84ba-1205f479c245</entry>
Jan 20 14:32:15 compute-1 nova_compute[225855]:       <entry name="uuid">561d1914-3348-438c-84ba-1205f479c245</entry>
Jan 20 14:32:15 compute-1 nova_compute[225855]:       <entry name="family">Virtual Machine</entry>
Jan 20 14:32:15 compute-1 nova_compute[225855]:     </system>
Jan 20 14:32:15 compute-1 nova_compute[225855]:   </sysinfo>
Jan 20 14:32:15 compute-1 nova_compute[225855]:   <os>
Jan 20 14:32:15 compute-1 nova_compute[225855]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 20 14:32:15 compute-1 nova_compute[225855]:     <boot dev="hd"/>
Jan 20 14:32:15 compute-1 nova_compute[225855]:     <smbios mode="sysinfo"/>
Jan 20 14:32:15 compute-1 nova_compute[225855]:   </os>
Jan 20 14:32:15 compute-1 nova_compute[225855]:   <features>
Jan 20 14:32:15 compute-1 nova_compute[225855]:     <acpi/>
Jan 20 14:32:15 compute-1 nova_compute[225855]:     <apic/>
Jan 20 14:32:15 compute-1 nova_compute[225855]:     <vmcoreinfo/>
Jan 20 14:32:15 compute-1 nova_compute[225855]:   </features>
Jan 20 14:32:15 compute-1 nova_compute[225855]:   <clock offset="utc">
Jan 20 14:32:15 compute-1 nova_compute[225855]:     <timer name="pit" tickpolicy="delay"/>
Jan 20 14:32:15 compute-1 nova_compute[225855]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 20 14:32:15 compute-1 nova_compute[225855]:     <timer name="hpet" present="no"/>
Jan 20 14:32:15 compute-1 nova_compute[225855]:   </clock>
Jan 20 14:32:15 compute-1 nova_compute[225855]:   <cpu mode="custom" match="exact">
Jan 20 14:32:15 compute-1 nova_compute[225855]:     <model>Nehalem</model>
Jan 20 14:32:15 compute-1 nova_compute[225855]:     <topology sockets="1" cores="1" threads="1"/>
Jan 20 14:32:15 compute-1 nova_compute[225855]:   </cpu>
Jan 20 14:32:15 compute-1 nova_compute[225855]:   <devices>
Jan 20 14:32:15 compute-1 nova_compute[225855]:     <disk type="network" device="disk">
Jan 20 14:32:15 compute-1 nova_compute[225855]:       <driver type="raw" cache="none"/>
Jan 20 14:32:15 compute-1 nova_compute[225855]:       <source protocol="rbd" name="vms/561d1914-3348-438c-84ba-1205f479c245_disk">
Jan 20 14:32:15 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:32:15 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:32:15 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:32:15 compute-1 nova_compute[225855]:       </source>
Jan 20 14:32:15 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 14:32:15 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:32:15 compute-1 nova_compute[225855]:       </auth>
Jan 20 14:32:15 compute-1 nova_compute[225855]:       <target dev="vda" bus="virtio"/>
Jan 20 14:32:15 compute-1 nova_compute[225855]:     </disk>
Jan 20 14:32:15 compute-1 nova_compute[225855]:     <disk type="network" device="cdrom">
Jan 20 14:32:15 compute-1 nova_compute[225855]:       <driver type="raw" cache="none"/>
Jan 20 14:32:15 compute-1 nova_compute[225855]:       <source protocol="rbd" name="vms/561d1914-3348-438c-84ba-1205f479c245_disk.config">
Jan 20 14:32:15 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:32:15 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:32:15 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:32:15 compute-1 nova_compute[225855]:       </source>
Jan 20 14:32:15 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 14:32:15 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:32:15 compute-1 nova_compute[225855]:       </auth>
Jan 20 14:32:15 compute-1 nova_compute[225855]:       <target dev="sda" bus="sata"/>
Jan 20 14:32:15 compute-1 nova_compute[225855]:     </disk>
Jan 20 14:32:15 compute-1 nova_compute[225855]:     <serial type="pty">
Jan 20 14:32:15 compute-1 nova_compute[225855]:       <log file="/var/lib/nova/instances/561d1914-3348-438c-84ba-1205f479c245/console.log" append="off"/>
Jan 20 14:32:15 compute-1 nova_compute[225855]:     </serial>
Jan 20 14:32:15 compute-1 nova_compute[225855]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 20 14:32:15 compute-1 nova_compute[225855]:     <video>
Jan 20 14:32:15 compute-1 nova_compute[225855]:       <model type="virtio"/>
Jan 20 14:32:15 compute-1 nova_compute[225855]:     </video>
Jan 20 14:32:15 compute-1 nova_compute[225855]:     <input type="tablet" bus="usb"/>
Jan 20 14:32:15 compute-1 nova_compute[225855]:     <rng model="virtio">
Jan 20 14:32:15 compute-1 nova_compute[225855]:       <backend model="random">/dev/urandom</backend>
Jan 20 14:32:15 compute-1 nova_compute[225855]:     </rng>
Jan 20 14:32:15 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root"/>
Jan 20 14:32:15 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:32:15 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:32:15 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:32:15 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:32:15 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:32:15 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:32:15 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:32:15 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:32:15 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:32:15 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:32:15 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:32:15 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:32:15 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:32:15 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:32:15 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:32:15 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:32:15 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:32:15 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:32:15 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:32:15 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:32:15 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:32:15 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:32:15 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:32:15 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:32:15 compute-1 nova_compute[225855]:     <controller type="usb" index="0"/>
Jan 20 14:32:15 compute-1 nova_compute[225855]:     <memballoon model="virtio">
Jan 20 14:32:15 compute-1 nova_compute[225855]:       <stats period="10"/>
Jan 20 14:32:15 compute-1 nova_compute[225855]:     </memballoon>
Jan 20 14:32:15 compute-1 nova_compute[225855]:   </devices>
Jan 20 14:32:15 compute-1 nova_compute[225855]: </domain>
Jan 20 14:32:15 compute-1 nova_compute[225855]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 20 14:32:15 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:32:15 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:32:15 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:32:15.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:32:15 compute-1 nova_compute[225855]: 2026-01-20 14:32:15.234 225859 DEBUG nova.virt.libvirt.driver [None req-5a15f14b-d2e2-44fb-839f-cc33c32dc1af a76f8a7bf01145bf8c953695d87aed2a 16c25c6af46845c8b8f7beaa0a50bd38 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:32:15 compute-1 nova_compute[225855]: 2026-01-20 14:32:15.235 225859 DEBUG nova.virt.libvirt.driver [None req-5a15f14b-d2e2-44fb-839f-cc33c32dc1af a76f8a7bf01145bf8c953695d87aed2a 16c25c6af46845c8b8f7beaa0a50bd38 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:32:15 compute-1 nova_compute[225855]: 2026-01-20 14:32:15.235 225859 INFO nova.virt.libvirt.driver [None req-5a15f14b-d2e2-44fb-839f-cc33c32dc1af a76f8a7bf01145bf8c953695d87aed2a 16c25c6af46845c8b8f7beaa0a50bd38 - - default default] [instance: 561d1914-3348-438c-84ba-1205f479c245] Using config drive
Jan 20 14:32:15 compute-1 nova_compute[225855]: 2026-01-20 14:32:15.262 225859 DEBUG nova.storage.rbd_utils [None req-5a15f14b-d2e2-44fb-839f-cc33c32dc1af a76f8a7bf01145bf8c953695d87aed2a 16c25c6af46845c8b8f7beaa0a50bd38 - - default default] rbd image 561d1914-3348-438c-84ba-1205f479c245_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:32:15 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/651665474' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:32:15 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/1619627399' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:32:15 compute-1 nova_compute[225855]: 2026-01-20 14:32:15.338 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:32:15 compute-1 nova_compute[225855]: 2026-01-20 14:32:15.772 225859 INFO nova.virt.libvirt.driver [None req-5a15f14b-d2e2-44fb-839f-cc33c32dc1af a76f8a7bf01145bf8c953695d87aed2a 16c25c6af46845c8b8f7beaa0a50bd38 - - default default] [instance: 561d1914-3348-438c-84ba-1205f479c245] Creating config drive at /var/lib/nova/instances/561d1914-3348-438c-84ba-1205f479c245/disk.config
Jan 20 14:32:15 compute-1 nova_compute[225855]: 2026-01-20 14:32:15.779 225859 DEBUG oslo_concurrency.processutils [None req-5a15f14b-d2e2-44fb-839f-cc33c32dc1af a76f8a7bf01145bf8c953695d87aed2a 16c25c6af46845c8b8f7beaa0a50bd38 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/561d1914-3348-438c-84ba-1205f479c245/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpa2kmc1d9 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:32:15 compute-1 nova_compute[225855]: 2026-01-20 14:32:15.912 225859 DEBUG oslo_concurrency.processutils [None req-5a15f14b-d2e2-44fb-839f-cc33c32dc1af a76f8a7bf01145bf8c953695d87aed2a 16c25c6af46845c8b8f7beaa0a50bd38 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/561d1914-3348-438c-84ba-1205f479c245/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpa2kmc1d9" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:32:15 compute-1 nova_compute[225855]: 2026-01-20 14:32:15.947 225859 DEBUG nova.storage.rbd_utils [None req-5a15f14b-d2e2-44fb-839f-cc33c32dc1af a76f8a7bf01145bf8c953695d87aed2a 16c25c6af46845c8b8f7beaa0a50bd38 - - default default] rbd image 561d1914-3348-438c-84ba-1205f479c245_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:32:15 compute-1 nova_compute[225855]: 2026-01-20 14:32:15.952 225859 DEBUG oslo_concurrency.processutils [None req-5a15f14b-d2e2-44fb-839f-cc33c32dc1af a76f8a7bf01145bf8c953695d87aed2a 16c25c6af46845c8b8f7beaa0a50bd38 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/561d1914-3348-438c-84ba-1205f479c245/disk.config 561d1914-3348-438c-84ba-1205f479c245_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:32:16 compute-1 nova_compute[225855]: 2026-01-20 14:32:16.058 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:32:16 compute-1 nova_compute[225855]: 2026-01-20 14:32:16.093 225859 DEBUG oslo_concurrency.processutils [None req-5a15f14b-d2e2-44fb-839f-cc33c32dc1af a76f8a7bf01145bf8c953695d87aed2a 16c25c6af46845c8b8f7beaa0a50bd38 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/561d1914-3348-438c-84ba-1205f479c245/disk.config 561d1914-3348-438c-84ba-1205f479c245_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:32:16 compute-1 nova_compute[225855]: 2026-01-20 14:32:16.093 225859 INFO nova.virt.libvirt.driver [None req-5a15f14b-d2e2-44fb-839f-cc33c32dc1af a76f8a7bf01145bf8c953695d87aed2a 16c25c6af46845c8b8f7beaa0a50bd38 - - default default] [instance: 561d1914-3348-438c-84ba-1205f479c245] Deleting local config drive /var/lib/nova/instances/561d1914-3348-438c-84ba-1205f479c245/disk.config because it was imported into RBD.
Jan 20 14:32:16 compute-1 systemd-machined[194361]: New machine qemu-18-instance-00000024.
Jan 20 14:32:16 compute-1 systemd[1]: Started Virtual Machine qemu-18-instance-00000024.
Jan 20 14:32:16 compute-1 podman[242603]: 2026-01-20 14:32:16.21865295 +0000 UTC m=+0.054534953 container health_status 533cd8e0c3b8b7f910f15cef49e7f879ab46c12ce646a35628c15f13725eb500 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS)
Jan 20 14:32:16 compute-1 ceph-mon[81775]: pgmap v1250: 321 pgs: 321 active+clean; 186 MiB data, 472 MiB used, 21 GiB / 21 GiB avail; 3.4 MiB/s rd, 4.8 MiB/s wr, 210 op/s
Jan 20 14:32:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:32:16.391 140354 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:32:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:32:16.392 140354 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:32:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:32:16.392 140354 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:32:16 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:32:16 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:32:16 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:32:16.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:32:16 compute-1 nova_compute[225855]: 2026-01-20 14:32:16.655 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768919536.6547425, 561d1914-3348-438c-84ba-1205f479c245 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:32:16 compute-1 nova_compute[225855]: 2026-01-20 14:32:16.656 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 561d1914-3348-438c-84ba-1205f479c245] VM Resumed (Lifecycle Event)
Jan 20 14:32:16 compute-1 nova_compute[225855]: 2026-01-20 14:32:16.658 225859 DEBUG nova.compute.manager [None req-5a15f14b-d2e2-44fb-839f-cc33c32dc1af a76f8a7bf01145bf8c953695d87aed2a 16c25c6af46845c8b8f7beaa0a50bd38 - - default default] [instance: 561d1914-3348-438c-84ba-1205f479c245] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 20 14:32:16 compute-1 nova_compute[225855]: 2026-01-20 14:32:16.658 225859 DEBUG nova.virt.libvirt.driver [None req-5a15f14b-d2e2-44fb-839f-cc33c32dc1af a76f8a7bf01145bf8c953695d87aed2a 16c25c6af46845c8b8f7beaa0a50bd38 - - default default] [instance: 561d1914-3348-438c-84ba-1205f479c245] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 20 14:32:16 compute-1 nova_compute[225855]: 2026-01-20 14:32:16.662 225859 INFO nova.virt.libvirt.driver [-] [instance: 561d1914-3348-438c-84ba-1205f479c245] Instance spawned successfully.
Jan 20 14:32:16 compute-1 nova_compute[225855]: 2026-01-20 14:32:16.663 225859 DEBUG nova.virt.libvirt.driver [None req-5a15f14b-d2e2-44fb-839f-cc33c32dc1af a76f8a7bf01145bf8c953695d87aed2a 16c25c6af46845c8b8f7beaa0a50bd38 - - default default] [instance: 561d1914-3348-438c-84ba-1205f479c245] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 20 14:32:16 compute-1 nova_compute[225855]: 2026-01-20 14:32:16.685 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 561d1914-3348-438c-84ba-1205f479c245] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:32:16 compute-1 nova_compute[225855]: 2026-01-20 14:32:16.691 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 561d1914-3348-438c-84ba-1205f479c245] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:32:16 compute-1 nova_compute[225855]: 2026-01-20 14:32:16.694 225859 DEBUG nova.virt.libvirt.driver [None req-5a15f14b-d2e2-44fb-839f-cc33c32dc1af a76f8a7bf01145bf8c953695d87aed2a 16c25c6af46845c8b8f7beaa0a50bd38 - - default default] [instance: 561d1914-3348-438c-84ba-1205f479c245] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:32:16 compute-1 nova_compute[225855]: 2026-01-20 14:32:16.694 225859 DEBUG nova.virt.libvirt.driver [None req-5a15f14b-d2e2-44fb-839f-cc33c32dc1af a76f8a7bf01145bf8c953695d87aed2a 16c25c6af46845c8b8f7beaa0a50bd38 - - default default] [instance: 561d1914-3348-438c-84ba-1205f479c245] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:32:16 compute-1 nova_compute[225855]: 2026-01-20 14:32:16.694 225859 DEBUG nova.virt.libvirt.driver [None req-5a15f14b-d2e2-44fb-839f-cc33c32dc1af a76f8a7bf01145bf8c953695d87aed2a 16c25c6af46845c8b8f7beaa0a50bd38 - - default default] [instance: 561d1914-3348-438c-84ba-1205f479c245] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:32:16 compute-1 nova_compute[225855]: 2026-01-20 14:32:16.695 225859 DEBUG nova.virt.libvirt.driver [None req-5a15f14b-d2e2-44fb-839f-cc33c32dc1af a76f8a7bf01145bf8c953695d87aed2a 16c25c6af46845c8b8f7beaa0a50bd38 - - default default] [instance: 561d1914-3348-438c-84ba-1205f479c245] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:32:16 compute-1 nova_compute[225855]: 2026-01-20 14:32:16.695 225859 DEBUG nova.virt.libvirt.driver [None req-5a15f14b-d2e2-44fb-839f-cc33c32dc1af a76f8a7bf01145bf8c953695d87aed2a 16c25c6af46845c8b8f7beaa0a50bd38 - - default default] [instance: 561d1914-3348-438c-84ba-1205f479c245] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:32:16 compute-1 nova_compute[225855]: 2026-01-20 14:32:16.696 225859 DEBUG nova.virt.libvirt.driver [None req-5a15f14b-d2e2-44fb-839f-cc33c32dc1af a76f8a7bf01145bf8c953695d87aed2a 16c25c6af46845c8b8f7beaa0a50bd38 - - default default] [instance: 561d1914-3348-438c-84ba-1205f479c245] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:32:16 compute-1 nova_compute[225855]: 2026-01-20 14:32:16.723 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 561d1914-3348-438c-84ba-1205f479c245] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 14:32:16 compute-1 nova_compute[225855]: 2026-01-20 14:32:16.724 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768919536.6574569, 561d1914-3348-438c-84ba-1205f479c245 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:32:16 compute-1 nova_compute[225855]: 2026-01-20 14:32:16.724 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 561d1914-3348-438c-84ba-1205f479c245] VM Started (Lifecycle Event)
Jan 20 14:32:16 compute-1 nova_compute[225855]: 2026-01-20 14:32:16.751 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 561d1914-3348-438c-84ba-1205f479c245] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:32:16 compute-1 nova_compute[225855]: 2026-01-20 14:32:16.755 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 561d1914-3348-438c-84ba-1205f479c245] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:32:16 compute-1 nova_compute[225855]: 2026-01-20 14:32:16.785 225859 INFO nova.compute.manager [None req-5a15f14b-d2e2-44fb-839f-cc33c32dc1af a76f8a7bf01145bf8c953695d87aed2a 16c25c6af46845c8b8f7beaa0a50bd38 - - default default] [instance: 561d1914-3348-438c-84ba-1205f479c245] Took 3.38 seconds to spawn the instance on the hypervisor.
Jan 20 14:32:16 compute-1 nova_compute[225855]: 2026-01-20 14:32:16.786 225859 DEBUG nova.compute.manager [None req-5a15f14b-d2e2-44fb-839f-cc33c32dc1af a76f8a7bf01145bf8c953695d87aed2a 16c25c6af46845c8b8f7beaa0a50bd38 - - default default] [instance: 561d1914-3348-438c-84ba-1205f479c245] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:32:16 compute-1 nova_compute[225855]: 2026-01-20 14:32:16.794 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 561d1914-3348-438c-84ba-1205f479c245] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 14:32:16 compute-1 nova_compute[225855]: 2026-01-20 14:32:16.919 225859 INFO nova.compute.manager [None req-5a15f14b-d2e2-44fb-839f-cc33c32dc1af a76f8a7bf01145bf8c953695d87aed2a 16c25c6af46845c8b8f7beaa0a50bd38 - - default default] [instance: 561d1914-3348-438c-84ba-1205f479c245] Took 4.77 seconds to build instance.
Jan 20 14:32:16 compute-1 nova_compute[225855]: 2026-01-20 14:32:16.956 225859 DEBUG oslo_concurrency.lockutils [None req-5a15f14b-d2e2-44fb-839f-cc33c32dc1af a76f8a7bf01145bf8c953695d87aed2a 16c25c6af46845c8b8f7beaa0a50bd38 - - default default] Lock "561d1914-3348-438c-84ba-1205f479c245" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 4.923s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:32:17 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:32:17 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:32:17 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:32:17.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:32:17 compute-1 nova_compute[225855]: 2026-01-20 14:32:17.340 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:32:17 compute-1 nova_compute[225855]: 2026-01-20 14:32:17.340 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 14:32:17 compute-1 nova_compute[225855]: 2026-01-20 14:32:17.341 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 20 14:32:17 compute-1 nova_compute[225855]: 2026-01-20 14:32:17.639 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "refresh_cache-561d1914-3348-438c-84ba-1205f479c245" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:32:17 compute-1 nova_compute[225855]: 2026-01-20 14:32:17.639 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquired lock "refresh_cache-561d1914-3348-438c-84ba-1205f479c245" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:32:17 compute-1 nova_compute[225855]: 2026-01-20 14:32:17.640 225859 DEBUG nova.network.neutron [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] [instance: 561d1914-3348-438c-84ba-1205f479c245] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 20 14:32:17 compute-1 nova_compute[225855]: 2026-01-20 14:32:17.640 225859 DEBUG nova.objects.instance [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 561d1914-3348-438c-84ba-1205f479c245 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:32:17 compute-1 nova_compute[225855]: 2026-01-20 14:32:17.730 225859 DEBUG oslo_concurrency.lockutils [None req-15eb9962-6ec7-4032-be04-a45efdd89a5d a76f8a7bf01145bf8c953695d87aed2a 16c25c6af46845c8b8f7beaa0a50bd38 - - default default] Acquiring lock "561d1914-3348-438c-84ba-1205f479c245" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:32:17 compute-1 nova_compute[225855]: 2026-01-20 14:32:17.731 225859 DEBUG oslo_concurrency.lockutils [None req-15eb9962-6ec7-4032-be04-a45efdd89a5d a76f8a7bf01145bf8c953695d87aed2a 16c25c6af46845c8b8f7beaa0a50bd38 - - default default] Lock "561d1914-3348-438c-84ba-1205f479c245" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:32:17 compute-1 nova_compute[225855]: 2026-01-20 14:32:17.731 225859 DEBUG oslo_concurrency.lockutils [None req-15eb9962-6ec7-4032-be04-a45efdd89a5d a76f8a7bf01145bf8c953695d87aed2a 16c25c6af46845c8b8f7beaa0a50bd38 - - default default] Acquiring lock "561d1914-3348-438c-84ba-1205f479c245-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:32:17 compute-1 nova_compute[225855]: 2026-01-20 14:32:17.731 225859 DEBUG oslo_concurrency.lockutils [None req-15eb9962-6ec7-4032-be04-a45efdd89a5d a76f8a7bf01145bf8c953695d87aed2a 16c25c6af46845c8b8f7beaa0a50bd38 - - default default] Lock "561d1914-3348-438c-84ba-1205f479c245-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:32:17 compute-1 nova_compute[225855]: 2026-01-20 14:32:17.732 225859 DEBUG oslo_concurrency.lockutils [None req-15eb9962-6ec7-4032-be04-a45efdd89a5d a76f8a7bf01145bf8c953695d87aed2a 16c25c6af46845c8b8f7beaa0a50bd38 - - default default] Lock "561d1914-3348-438c-84ba-1205f479c245-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:32:17 compute-1 nova_compute[225855]: 2026-01-20 14:32:17.733 225859 INFO nova.compute.manager [None req-15eb9962-6ec7-4032-be04-a45efdd89a5d a76f8a7bf01145bf8c953695d87aed2a 16c25c6af46845c8b8f7beaa0a50bd38 - - default default] [instance: 561d1914-3348-438c-84ba-1205f479c245] Terminating instance
Jan 20 14:32:17 compute-1 nova_compute[225855]: 2026-01-20 14:32:17.734 225859 DEBUG oslo_concurrency.lockutils [None req-15eb9962-6ec7-4032-be04-a45efdd89a5d a76f8a7bf01145bf8c953695d87aed2a 16c25c6af46845c8b8f7beaa0a50bd38 - - default default] Acquiring lock "refresh_cache-561d1914-3348-438c-84ba-1205f479c245" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:32:18 compute-1 nova_compute[225855]: 2026-01-20 14:32:18.004 225859 DEBUG nova.network.neutron [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] [instance: 561d1914-3348-438c-84ba-1205f479c245] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 20 14:32:18 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:32:18 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:32:18 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:32:18.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:32:18 compute-1 ceph-mon[81775]: pgmap v1251: 321 pgs: 321 active+clean; 206 MiB data, 483 MiB used, 21 GiB / 21 GiB avail; 4.1 MiB/s rd, 5.6 MiB/s wr, 229 op/s
Jan 20 14:32:18 compute-1 nova_compute[225855]: 2026-01-20 14:32:18.968 225859 DEBUG nova.network.neutron [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] [instance: 561d1914-3348-438c-84ba-1205f479c245] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:32:18 compute-1 nova_compute[225855]: 2026-01-20 14:32:18.996 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Releasing lock "refresh_cache-561d1914-3348-438c-84ba-1205f479c245" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:32:18 compute-1 nova_compute[225855]: 2026-01-20 14:32:18.997 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] [instance: 561d1914-3348-438c-84ba-1205f479c245] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 20 14:32:18 compute-1 nova_compute[225855]: 2026-01-20 14:32:18.997 225859 DEBUG oslo_concurrency.lockutils [None req-15eb9962-6ec7-4032-be04-a45efdd89a5d a76f8a7bf01145bf8c953695d87aed2a 16c25c6af46845c8b8f7beaa0a50bd38 - - default default] Acquired lock "refresh_cache-561d1914-3348-438c-84ba-1205f479c245" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:32:18 compute-1 nova_compute[225855]: 2026-01-20 14:32:18.998 225859 DEBUG nova.network.neutron [None req-15eb9962-6ec7-4032-be04-a45efdd89a5d a76f8a7bf01145bf8c953695d87aed2a 16c25c6af46845c8b8f7beaa0a50bd38 - - default default] [instance: 561d1914-3348-438c-84ba-1205f479c245] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 14:32:19 compute-1 nova_compute[225855]: 2026-01-20 14:32:19.000 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:32:19 compute-1 nova_compute[225855]: 2026-01-20 14:32:19.000 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 14:32:19 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:32:19 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:32:19 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:32:19.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:32:19 compute-1 nova_compute[225855]: 2026-01-20 14:32:19.341 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:32:19 compute-1 nova_compute[225855]: 2026-01-20 14:32:19.341 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:32:19 compute-1 nova_compute[225855]: 2026-01-20 14:32:19.349 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:32:19 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/2069181788' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:32:19 compute-1 nova_compute[225855]: 2026-01-20 14:32:19.771 225859 DEBUG nova.network.neutron [None req-15eb9962-6ec7-4032-be04-a45efdd89a5d a76f8a7bf01145bf8c953695d87aed2a 16c25c6af46845c8b8f7beaa0a50bd38 - - default default] [instance: 561d1914-3348-438c-84ba-1205f479c245] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 20 14:32:19 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:32:20 compute-1 nova_compute[225855]: 2026-01-20 14:32:20.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:32:20 compute-1 nova_compute[225855]: 2026-01-20 14:32:20.369 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:32:20 compute-1 nova_compute[225855]: 2026-01-20 14:32:20.369 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:32:20 compute-1 nova_compute[225855]: 2026-01-20 14:32:20.369 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:32:20 compute-1 nova_compute[225855]: 2026-01-20 14:32:20.369 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 14:32:20 compute-1 nova_compute[225855]: 2026-01-20 14:32:20.370 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:32:20 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:32:20 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:32:20 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:32:20.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:32:20 compute-1 nova_compute[225855]: 2026-01-20 14:32:20.762 225859 DEBUG nova.network.neutron [None req-15eb9962-6ec7-4032-be04-a45efdd89a5d a76f8a7bf01145bf8c953695d87aed2a 16c25c6af46845c8b8f7beaa0a50bd38 - - default default] [instance: 561d1914-3348-438c-84ba-1205f479c245] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:32:20 compute-1 nova_compute[225855]: 2026-01-20 14:32:20.801 225859 DEBUG oslo_concurrency.lockutils [None req-15eb9962-6ec7-4032-be04-a45efdd89a5d a76f8a7bf01145bf8c953695d87aed2a 16c25c6af46845c8b8f7beaa0a50bd38 - - default default] Releasing lock "refresh_cache-561d1914-3348-438c-84ba-1205f479c245" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:32:20 compute-1 nova_compute[225855]: 2026-01-20 14:32:20.802 225859 DEBUG nova.compute.manager [None req-15eb9962-6ec7-4032-be04-a45efdd89a5d a76f8a7bf01145bf8c953695d87aed2a 16c25c6af46845c8b8f7beaa0a50bd38 - - default default] [instance: 561d1914-3348-438c-84ba-1205f479c245] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 20 14:32:20 compute-1 ceph-mon[81775]: pgmap v1252: 321 pgs: 321 active+clean; 214 MiB data, 484 MiB used, 21 GiB / 21 GiB avail; 4.1 MiB/s rd, 5.7 MiB/s wr, 218 op/s
Jan 20 14:32:20 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/1619202486' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:32:20 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:32:20 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1361981238' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:32:21 compute-1 nova_compute[225855]: 2026-01-20 14:32:21.001 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.632s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:32:21 compute-1 systemd[1]: machine-qemu\x2d18\x2dinstance\x2d00000024.scope: Deactivated successfully.
Jan 20 14:32:21 compute-1 systemd[1]: machine-qemu\x2d18\x2dinstance\x2d00000024.scope: Consumed 4.845s CPU time.
Jan 20 14:32:21 compute-1 systemd-machined[194361]: Machine qemu-18-instance-00000024 terminated.
Jan 20 14:32:21 compute-1 nova_compute[225855]: 2026-01-20 14:32:21.061 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:32:21 compute-1 nova_compute[225855]: 2026-01-20 14:32:21.224 225859 INFO nova.virt.libvirt.driver [-] [instance: 561d1914-3348-438c-84ba-1205f479c245] Instance destroyed successfully.
Jan 20 14:32:21 compute-1 nova_compute[225855]: 2026-01-20 14:32:21.225 225859 DEBUG nova.objects.instance [None req-15eb9962-6ec7-4032-be04-a45efdd89a5d a76f8a7bf01145bf8c953695d87aed2a 16c25c6af46845c8b8f7beaa0a50bd38 - - default default] Lazy-loading 'resources' on Instance uuid 561d1914-3348-438c-84ba-1205f479c245 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:32:21 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:32:21 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:32:21 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:32:21.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:32:21 compute-1 nova_compute[225855]: 2026-01-20 14:32:21.352 225859 DEBUG nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] skipping disk for instance-00000024 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 14:32:21 compute-1 nova_compute[225855]: 2026-01-20 14:32:21.352 225859 DEBUG nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] skipping disk for instance-00000024 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 14:32:21 compute-1 sudo[242719]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:32:21 compute-1 sudo[242719]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:32:21 compute-1 sudo[242719]: pam_unix(sudo:session): session closed for user root
Jan 20 14:32:21 compute-1 sudo[242744]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:32:21 compute-1 sudo[242744]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:32:21 compute-1 sudo[242744]: pam_unix(sudo:session): session closed for user root
Jan 20 14:32:21 compute-1 nova_compute[225855]: 2026-01-20 14:32:21.546 225859 WARNING nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 14:32:21 compute-1 nova_compute[225855]: 2026-01-20 14:32:21.548 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4706MB free_disk=20.9010009765625GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 14:32:21 compute-1 nova_compute[225855]: 2026-01-20 14:32:21.548 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:32:21 compute-1 nova_compute[225855]: 2026-01-20 14:32:21.548 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:32:21 compute-1 nova_compute[225855]: 2026-01-20 14:32:21.679 225859 INFO nova.virt.libvirt.driver [None req-15eb9962-6ec7-4032-be04-a45efdd89a5d a76f8a7bf01145bf8c953695d87aed2a 16c25c6af46845c8b8f7beaa0a50bd38 - - default default] [instance: 561d1914-3348-438c-84ba-1205f479c245] Deleting instance files /var/lib/nova/instances/561d1914-3348-438c-84ba-1205f479c245_del
Jan 20 14:32:21 compute-1 nova_compute[225855]: 2026-01-20 14:32:21.680 225859 INFO nova.virt.libvirt.driver [None req-15eb9962-6ec7-4032-be04-a45efdd89a5d a76f8a7bf01145bf8c953695d87aed2a 16c25c6af46845c8b8f7beaa0a50bd38 - - default default] [instance: 561d1914-3348-438c-84ba-1205f479c245] Deletion of /var/lib/nova/instances/561d1914-3348-438c-84ba-1205f479c245_del complete
Jan 20 14:32:21 compute-1 nova_compute[225855]: 2026-01-20 14:32:21.686 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Instance 561d1914-3348-438c-84ba-1205f479c245 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 20 14:32:21 compute-1 nova_compute[225855]: 2026-01-20 14:32:21.687 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 14:32:21 compute-1 nova_compute[225855]: 2026-01-20 14:32:21.687 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 14:32:21 compute-1 nova_compute[225855]: 2026-01-20 14:32:21.730 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:32:21 compute-1 nova_compute[225855]: 2026-01-20 14:32:21.764 225859 INFO nova.compute.manager [None req-15eb9962-6ec7-4032-be04-a45efdd89a5d a76f8a7bf01145bf8c953695d87aed2a 16c25c6af46845c8b8f7beaa0a50bd38 - - default default] [instance: 561d1914-3348-438c-84ba-1205f479c245] Took 0.96 seconds to destroy the instance on the hypervisor.
Jan 20 14:32:21 compute-1 nova_compute[225855]: 2026-01-20 14:32:21.765 225859 DEBUG oslo.service.loopingcall [None req-15eb9962-6ec7-4032-be04-a45efdd89a5d a76f8a7bf01145bf8c953695d87aed2a 16c25c6af46845c8b8f7beaa0a50bd38 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 20 14:32:21 compute-1 nova_compute[225855]: 2026-01-20 14:32:21.766 225859 DEBUG nova.compute.manager [-] [instance: 561d1914-3348-438c-84ba-1205f479c245] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 20 14:32:21 compute-1 nova_compute[225855]: 2026-01-20 14:32:21.766 225859 DEBUG nova.network.neutron [-] [instance: 561d1914-3348-438c-84ba-1205f479c245] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 20 14:32:21 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/1361981238' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:32:21 compute-1 ceph-mon[81775]: pgmap v1253: 321 pgs: 321 active+clean; 214 MiB data, 488 MiB used, 21 GiB / 21 GiB avail; 5.9 MiB/s rd, 4.8 MiB/s wr, 244 op/s
Jan 20 14:32:21 compute-1 nova_compute[225855]: 2026-01-20 14:32:21.971 225859 DEBUG nova.network.neutron [-] [instance: 561d1914-3348-438c-84ba-1205f479c245] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 20 14:32:21 compute-1 nova_compute[225855]: 2026-01-20 14:32:21.989 225859 DEBUG nova.network.neutron [-] [instance: 561d1914-3348-438c-84ba-1205f479c245] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:32:22 compute-1 nova_compute[225855]: 2026-01-20 14:32:22.003 225859 INFO nova.compute.manager [-] [instance: 561d1914-3348-438c-84ba-1205f479c245] Took 0.24 seconds to deallocate network for instance.
Jan 20 14:32:22 compute-1 nova_compute[225855]: 2026-01-20 14:32:22.049 225859 DEBUG oslo_concurrency.lockutils [None req-15eb9962-6ec7-4032-be04-a45efdd89a5d a76f8a7bf01145bf8c953695d87aed2a 16c25c6af46845c8b8f7beaa0a50bd38 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:32:22 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:32:22 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3324155321' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:32:22 compute-1 nova_compute[225855]: 2026-01-20 14:32:22.236 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.506s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:32:22 compute-1 nova_compute[225855]: 2026-01-20 14:32:22.242 225859 DEBUG nova.compute.provider_tree [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:32:22 compute-1 nova_compute[225855]: 2026-01-20 14:32:22.271 225859 DEBUG nova.scheduler.client.report [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:32:22 compute-1 nova_compute[225855]: 2026-01-20 14:32:22.295 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 14:32:22 compute-1 nova_compute[225855]: 2026-01-20 14:32:22.295 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.747s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:32:22 compute-1 nova_compute[225855]: 2026-01-20 14:32:22.296 225859 DEBUG oslo_concurrency.lockutils [None req-15eb9962-6ec7-4032-be04-a45efdd89a5d a76f8a7bf01145bf8c953695d87aed2a 16c25c6af46845c8b8f7beaa0a50bd38 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.247s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:32:22 compute-1 nova_compute[225855]: 2026-01-20 14:32:22.336 225859 DEBUG oslo_concurrency.processutils [None req-15eb9962-6ec7-4032-be04-a45efdd89a5d a76f8a7bf01145bf8c953695d87aed2a 16c25c6af46845c8b8f7beaa0a50bd38 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:32:22 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:32:22 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:32:22 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:32:22.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:32:22 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:32:22 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2670042754' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:32:22 compute-1 nova_compute[225855]: 2026-01-20 14:32:22.812 225859 DEBUG oslo_concurrency.processutils [None req-15eb9962-6ec7-4032-be04-a45efdd89a5d a76f8a7bf01145bf8c953695d87aed2a 16c25c6af46845c8b8f7beaa0a50bd38 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:32:22 compute-1 nova_compute[225855]: 2026-01-20 14:32:22.819 225859 DEBUG nova.compute.provider_tree [None req-15eb9962-6ec7-4032-be04-a45efdd89a5d a76f8a7bf01145bf8c953695d87aed2a 16c25c6af46845c8b8f7beaa0a50bd38 - - default default] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:32:22 compute-1 nova_compute[225855]: 2026-01-20 14:32:22.835 225859 DEBUG nova.scheduler.client.report [None req-15eb9962-6ec7-4032-be04-a45efdd89a5d a76f8a7bf01145bf8c953695d87aed2a 16c25c6af46845c8b8f7beaa0a50bd38 - - default default] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:32:22 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/3324155321' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:32:22 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/2670042754' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:32:22 compute-1 nova_compute[225855]: 2026-01-20 14:32:22.863 225859 DEBUG oslo_concurrency.lockutils [None req-15eb9962-6ec7-4032-be04-a45efdd89a5d a76f8a7bf01145bf8c953695d87aed2a 16c25c6af46845c8b8f7beaa0a50bd38 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.567s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:32:22 compute-1 nova_compute[225855]: 2026-01-20 14:32:22.907 225859 INFO nova.scheduler.client.report [None req-15eb9962-6ec7-4032-be04-a45efdd89a5d a76f8a7bf01145bf8c953695d87aed2a 16c25c6af46845c8b8f7beaa0a50bd38 - - default default] Deleted allocations for instance 561d1914-3348-438c-84ba-1205f479c245
Jan 20 14:32:22 compute-1 nova_compute[225855]: 2026-01-20 14:32:22.987 225859 DEBUG oslo_concurrency.lockutils [None req-15eb9962-6ec7-4032-be04-a45efdd89a5d a76f8a7bf01145bf8c953695d87aed2a 16c25c6af46845c8b8f7beaa0a50bd38 - - default default] Lock "561d1914-3348-438c-84ba-1205f479c245" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.256s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:32:23 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:32:23 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:32:23 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:32:23.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:32:23 compute-1 nova_compute[225855]: 2026-01-20 14:32:23.296 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:32:23 compute-1 nova_compute[225855]: 2026-01-20 14:32:23.297 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:32:23 compute-1 nova_compute[225855]: 2026-01-20 14:32:23.335 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:32:23 compute-1 ceph-mon[81775]: pgmap v1254: 321 pgs: 321 active+clean; 214 MiB data, 488 MiB used, 21 GiB / 21 GiB avail; 3.8 MiB/s rd, 1.8 MiB/s wr, 172 op/s
Jan 20 14:32:24 compute-1 nova_compute[225855]: 2026-01-20 14:32:24.350 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:32:24 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:32:24 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:32:24 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:32:24.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:32:24 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:32:24 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/2967665820' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:32:25 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:32:25 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:32:25 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:32:25.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:32:25 compute-1 nova_compute[225855]: 2026-01-20 14:32:25.334 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:32:25 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/157752220' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:32:25 compute-1 ceph-mon[81775]: pgmap v1255: 321 pgs: 321 active+clean; 210 MiB data, 490 MiB used, 21 GiB / 21 GiB avail; 4.0 MiB/s rd, 2.9 MiB/s wr, 219 op/s
Jan 20 14:32:26 compute-1 nova_compute[225855]: 2026-01-20 14:32:26.064 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:32:26 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:32:26 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:32:26 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:32:26.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:32:27 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:32:27 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:32:27 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:32:27.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:32:28 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:32:28 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:32:28 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:32:28.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:32:28 compute-1 ceph-mon[81775]: pgmap v1256: 321 pgs: 321 active+clean; 200 MiB data, 492 MiB used, 21 GiB / 21 GiB avail; 2.9 MiB/s rd, 3.1 MiB/s wr, 188 op/s
Jan 20 14:32:29 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:32:29 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:32:29 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:32:29.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:32:29 compute-1 nova_compute[225855]: 2026-01-20 14:32:29.353 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:32:29 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:32:30 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:32:30 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:32:30 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:32:30.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:32:30 compute-1 ceph-mon[81775]: pgmap v1257: 321 pgs: 321 active+clean; 140 MiB data, 492 MiB used, 21 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.2 MiB/s wr, 176 op/s
Jan 20 14:32:30 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/4060471299' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:32:30 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/115828176' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:32:30 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/830453568' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:32:31 compute-1 nova_compute[225855]: 2026-01-20 14:32:31.067 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:32:31 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:32:31 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:32:31 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:32:31.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:32:31 compute-1 sudo[242820]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:32:31 compute-1 sudo[242820]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:32:31 compute-1 sudo[242820]: pam_unix(sudo:session): session closed for user root
Jan 20 14:32:31 compute-1 sudo[242845]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:32:31 compute-1 sudo[242845]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:32:31 compute-1 sudo[242845]: pam_unix(sudo:session): session closed for user root
Jan 20 14:32:31 compute-1 sudo[242870]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:32:31 compute-1 sudo[242870]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:32:31 compute-1 sudo[242870]: pam_unix(sudo:session): session closed for user root
Jan 20 14:32:31 compute-1 sudo[242896]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 20 14:32:31 compute-1 sudo[242896]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:32:32 compute-1 sudo[242896]: pam_unix(sudo:session): session closed for user root
Jan 20 14:32:32 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:32:32 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:32:32 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:32:32.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:32:32 compute-1 ceph-mon[81775]: pgmap v1258: 321 pgs: 321 active+clean; 80 MiB data, 451 MiB used, 21 GiB / 21 GiB avail; 2.2 MiB/s rd, 3.8 MiB/s wr, 239 op/s
Jan 20 14:32:32 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:32:32 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 14:32:32 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:32:32 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 14:32:32 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 14:32:32 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:32:33 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:32:33 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:32:33 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:32:33.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:32:34 compute-1 nova_compute[225855]: 2026-01-20 14:32:34.396 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:32:34 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:32:34 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:32:34 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:32:34.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:32:34 compute-1 ceph-mon[81775]: pgmap v1259: 321 pgs: 321 active+clean; 80 MiB data, 451 MiB used, 21 GiB / 21 GiB avail; 394 KiB/s rd, 3.8 MiB/s wr, 172 op/s
Jan 20 14:32:34 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:32:35 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:32:35 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:32:35 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:32:35.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:32:36 compute-1 nova_compute[225855]: 2026-01-20 14:32:36.070 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:32:36 compute-1 nova_compute[225855]: 2026-01-20 14:32:36.221 225859 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1768919541.220913, 561d1914-3348-438c-84ba-1205f479c245 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:32:36 compute-1 nova_compute[225855]: 2026-01-20 14:32:36.222 225859 INFO nova.compute.manager [-] [instance: 561d1914-3348-438c-84ba-1205f479c245] VM Stopped (Lifecycle Event)
Jan 20 14:32:36 compute-1 nova_compute[225855]: 2026-01-20 14:32:36.256 225859 DEBUG nova.compute.manager [None req-a0129269-2c75-4ad8-8d4f-eec756c10b21 - - - - - -] [instance: 561d1914-3348-438c-84ba-1205f479c245] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:32:36 compute-1 podman[242955]: 2026-01-20 14:32:36.332899141 +0000 UTC m=+0.078173909 container health_status 72be95c12e041eb2cd1a16ad70dfa1391d9f03a5f3180a614ead9459dcfcd46f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:32:36 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:32:36 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:32:36 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:32:36.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:32:36 compute-1 ceph-mon[81775]: pgmap v1260: 321 pgs: 321 active+clean; 88 MiB data, 420 MiB used, 21 GiB / 21 GiB avail; 1020 KiB/s rd, 3.9 MiB/s wr, 202 op/s
Jan 20 14:32:36 compute-1 nova_compute[225855]: 2026-01-20 14:32:36.758 225859 DEBUG oslo_concurrency.lockutils [None req-bdc1104b-921d-426f-88bf-03a1336d4cae 360feb9a3f0146f0b84b6c28241e41a9 bd830773c45f46e3b6fd28d50255c383 - - default default] Acquiring lock "965c89d3-c13e-442b-8d32-f351bf95dda5" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:32:36 compute-1 nova_compute[225855]: 2026-01-20 14:32:36.758 225859 DEBUG oslo_concurrency.lockutils [None req-bdc1104b-921d-426f-88bf-03a1336d4cae 360feb9a3f0146f0b84b6c28241e41a9 bd830773c45f46e3b6fd28d50255c383 - - default default] Lock "965c89d3-c13e-442b-8d32-f351bf95dda5" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:32:36 compute-1 nova_compute[225855]: 2026-01-20 14:32:36.780 225859 DEBUG nova.compute.manager [None req-bdc1104b-921d-426f-88bf-03a1336d4cae 360feb9a3f0146f0b84b6c28241e41a9 bd830773c45f46e3b6fd28d50255c383 - - default default] [instance: 965c89d3-c13e-442b-8d32-f351bf95dda5] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 20 14:32:36 compute-1 nova_compute[225855]: 2026-01-20 14:32:36.863 225859 DEBUG oslo_concurrency.lockutils [None req-bdc1104b-921d-426f-88bf-03a1336d4cae 360feb9a3f0146f0b84b6c28241e41a9 bd830773c45f46e3b6fd28d50255c383 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:32:36 compute-1 nova_compute[225855]: 2026-01-20 14:32:36.863 225859 DEBUG oslo_concurrency.lockutils [None req-bdc1104b-921d-426f-88bf-03a1336d4cae 360feb9a3f0146f0b84b6c28241e41a9 bd830773c45f46e3b6fd28d50255c383 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:32:36 compute-1 nova_compute[225855]: 2026-01-20 14:32:36.869 225859 DEBUG nova.virt.hardware [None req-bdc1104b-921d-426f-88bf-03a1336d4cae 360feb9a3f0146f0b84b6c28241e41a9 bd830773c45f46e3b6fd28d50255c383 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 20 14:32:36 compute-1 nova_compute[225855]: 2026-01-20 14:32:36.869 225859 INFO nova.compute.claims [None req-bdc1104b-921d-426f-88bf-03a1336d4cae 360feb9a3f0146f0b84b6c28241e41a9 bd830773c45f46e3b6fd28d50255c383 - - default default] [instance: 965c89d3-c13e-442b-8d32-f351bf95dda5] Claim successful on node compute-1.ctlplane.example.com
Jan 20 14:32:37 compute-1 nova_compute[225855]: 2026-01-20 14:32:37.022 225859 DEBUG oslo_concurrency.processutils [None req-bdc1104b-921d-426f-88bf-03a1336d4cae 360feb9a3f0146f0b84b6c28241e41a9 bd830773c45f46e3b6fd28d50255c383 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:32:37 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:32:37 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:32:37 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:32:37.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:32:37 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:32:37 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3125161546' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:32:37 compute-1 nova_compute[225855]: 2026-01-20 14:32:37.439 225859 DEBUG oslo_concurrency.processutils [None req-bdc1104b-921d-426f-88bf-03a1336d4cae 360feb9a3f0146f0b84b6c28241e41a9 bd830773c45f46e3b6fd28d50255c383 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.417s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:32:37 compute-1 nova_compute[225855]: 2026-01-20 14:32:37.445 225859 DEBUG nova.compute.provider_tree [None req-bdc1104b-921d-426f-88bf-03a1336d4cae 360feb9a3f0146f0b84b6c28241e41a9 bd830773c45f46e3b6fd28d50255c383 - - default default] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:32:37 compute-1 nova_compute[225855]: 2026-01-20 14:32:37.488 225859 DEBUG nova.scheduler.client.report [None req-bdc1104b-921d-426f-88bf-03a1336d4cae 360feb9a3f0146f0b84b6c28241e41a9 bd830773c45f46e3b6fd28d50255c383 - - default default] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:32:37 compute-1 nova_compute[225855]: 2026-01-20 14:32:37.519 225859 DEBUG oslo_concurrency.lockutils [None req-bdc1104b-921d-426f-88bf-03a1336d4cae 360feb9a3f0146f0b84b6c28241e41a9 bd830773c45f46e3b6fd28d50255c383 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.656s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:32:37 compute-1 nova_compute[225855]: 2026-01-20 14:32:37.520 225859 DEBUG nova.compute.manager [None req-bdc1104b-921d-426f-88bf-03a1336d4cae 360feb9a3f0146f0b84b6c28241e41a9 bd830773c45f46e3b6fd28d50255c383 - - default default] [instance: 965c89d3-c13e-442b-8d32-f351bf95dda5] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 20 14:32:37 compute-1 nova_compute[225855]: 2026-01-20 14:32:37.639 225859 DEBUG nova.compute.manager [None req-bdc1104b-921d-426f-88bf-03a1336d4cae 360feb9a3f0146f0b84b6c28241e41a9 bd830773c45f46e3b6fd28d50255c383 - - default default] [instance: 965c89d3-c13e-442b-8d32-f351bf95dda5] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 20 14:32:37 compute-1 nova_compute[225855]: 2026-01-20 14:32:37.640 225859 DEBUG nova.network.neutron [None req-bdc1104b-921d-426f-88bf-03a1336d4cae 360feb9a3f0146f0b84b6c28241e41a9 bd830773c45f46e3b6fd28d50255c383 - - default default] [instance: 965c89d3-c13e-442b-8d32-f351bf95dda5] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 20 14:32:37 compute-1 nova_compute[225855]: 2026-01-20 14:32:37.674 225859 INFO nova.virt.libvirt.driver [None req-bdc1104b-921d-426f-88bf-03a1336d4cae 360feb9a3f0146f0b84b6c28241e41a9 bd830773c45f46e3b6fd28d50255c383 - - default default] [instance: 965c89d3-c13e-442b-8d32-f351bf95dda5] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 20 14:32:37 compute-1 sudo[243004]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:32:37 compute-1 sudo[243004]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:32:37 compute-1 sudo[243004]: pam_unix(sudo:session): session closed for user root
Jan 20 14:32:37 compute-1 nova_compute[225855]: 2026-01-20 14:32:37.819 225859 DEBUG nova.compute.manager [None req-bdc1104b-921d-426f-88bf-03a1336d4cae 360feb9a3f0146f0b84b6c28241e41a9 bd830773c45f46e3b6fd28d50255c383 - - default default] [instance: 965c89d3-c13e-442b-8d32-f351bf95dda5] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 20 14:32:37 compute-1 sudo[243029]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 14:32:37 compute-1 sudo[243029]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:32:37 compute-1 sudo[243029]: pam_unix(sudo:session): session closed for user root
Jan 20 14:32:37 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/4194897972' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:32:37 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/3125161546' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:32:37 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:32:37 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:32:37 compute-1 nova_compute[225855]: 2026-01-20 14:32:37.985 225859 DEBUG nova.compute.manager [None req-bdc1104b-921d-426f-88bf-03a1336d4cae 360feb9a3f0146f0b84b6c28241e41a9 bd830773c45f46e3b6fd28d50255c383 - - default default] [instance: 965c89d3-c13e-442b-8d32-f351bf95dda5] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 20 14:32:37 compute-1 nova_compute[225855]: 2026-01-20 14:32:37.986 225859 DEBUG nova.virt.libvirt.driver [None req-bdc1104b-921d-426f-88bf-03a1336d4cae 360feb9a3f0146f0b84b6c28241e41a9 bd830773c45f46e3b6fd28d50255c383 - - default default] [instance: 965c89d3-c13e-442b-8d32-f351bf95dda5] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 20 14:32:37 compute-1 nova_compute[225855]: 2026-01-20 14:32:37.987 225859 INFO nova.virt.libvirt.driver [None req-bdc1104b-921d-426f-88bf-03a1336d4cae 360feb9a3f0146f0b84b6c28241e41a9 bd830773c45f46e3b6fd28d50255c383 - - default default] [instance: 965c89d3-c13e-442b-8d32-f351bf95dda5] Creating image(s)
Jan 20 14:32:38 compute-1 nova_compute[225855]: 2026-01-20 14:32:38.014 225859 DEBUG nova.storage.rbd_utils [None req-bdc1104b-921d-426f-88bf-03a1336d4cae 360feb9a3f0146f0b84b6c28241e41a9 bd830773c45f46e3b6fd28d50255c383 - - default default] rbd image 965c89d3-c13e-442b-8d32-f351bf95dda5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:32:38 compute-1 nova_compute[225855]: 2026-01-20 14:32:38.048 225859 DEBUG nova.storage.rbd_utils [None req-bdc1104b-921d-426f-88bf-03a1336d4cae 360feb9a3f0146f0b84b6c28241e41a9 bd830773c45f46e3b6fd28d50255c383 - - default default] rbd image 965c89d3-c13e-442b-8d32-f351bf95dda5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:32:38 compute-1 nova_compute[225855]: 2026-01-20 14:32:38.076 225859 DEBUG nova.storage.rbd_utils [None req-bdc1104b-921d-426f-88bf-03a1336d4cae 360feb9a3f0146f0b84b6c28241e41a9 bd830773c45f46e3b6fd28d50255c383 - - default default] rbd image 965c89d3-c13e-442b-8d32-f351bf95dda5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:32:38 compute-1 nova_compute[225855]: 2026-01-20 14:32:38.080 225859 DEBUG oslo_concurrency.processutils [None req-bdc1104b-921d-426f-88bf-03a1336d4cae 360feb9a3f0146f0b84b6c28241e41a9 bd830773c45f46e3b6fd28d50255c383 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:32:38 compute-1 nova_compute[225855]: 2026-01-20 14:32:38.139 225859 DEBUG oslo_concurrency.processutils [None req-bdc1104b-921d-426f-88bf-03a1336d4cae 360feb9a3f0146f0b84b6c28241e41a9 bd830773c45f46e3b6fd28d50255c383 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:32:38 compute-1 nova_compute[225855]: 2026-01-20 14:32:38.140 225859 DEBUG oslo_concurrency.lockutils [None req-bdc1104b-921d-426f-88bf-03a1336d4cae 360feb9a3f0146f0b84b6c28241e41a9 bd830773c45f46e3b6fd28d50255c383 - - default default] Acquiring lock "82d5c1918fd7c974214c7a48c1793a7a82560462" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:32:38 compute-1 nova_compute[225855]: 2026-01-20 14:32:38.140 225859 DEBUG oslo_concurrency.lockutils [None req-bdc1104b-921d-426f-88bf-03a1336d4cae 360feb9a3f0146f0b84b6c28241e41a9 bd830773c45f46e3b6fd28d50255c383 - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:32:38 compute-1 nova_compute[225855]: 2026-01-20 14:32:38.141 225859 DEBUG oslo_concurrency.lockutils [None req-bdc1104b-921d-426f-88bf-03a1336d4cae 360feb9a3f0146f0b84b6c28241e41a9 bd830773c45f46e3b6fd28d50255c383 - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:32:38 compute-1 nova_compute[225855]: 2026-01-20 14:32:38.168 225859 DEBUG nova.storage.rbd_utils [None req-bdc1104b-921d-426f-88bf-03a1336d4cae 360feb9a3f0146f0b84b6c28241e41a9 bd830773c45f46e3b6fd28d50255c383 - - default default] rbd image 965c89d3-c13e-442b-8d32-f351bf95dda5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:32:38 compute-1 nova_compute[225855]: 2026-01-20 14:32:38.171 225859 DEBUG oslo_concurrency.processutils [None req-bdc1104b-921d-426f-88bf-03a1336d4cae 360feb9a3f0146f0b84b6c28241e41a9 bd830773c45f46e3b6fd28d50255c383 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 965c89d3-c13e-442b-8d32-f351bf95dda5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:32:38 compute-1 ceph-osd[79119]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #45. Immutable memtables: 2.
Jan 20 14:32:38 compute-1 nova_compute[225855]: 2026-01-20 14:32:38.462 225859 DEBUG oslo_concurrency.processutils [None req-bdc1104b-921d-426f-88bf-03a1336d4cae 360feb9a3f0146f0b84b6c28241e41a9 bd830773c45f46e3b6fd28d50255c383 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 965c89d3-c13e-442b-8d32-f351bf95dda5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.290s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:32:38 compute-1 nova_compute[225855]: 2026-01-20 14:32:38.518 225859 DEBUG nova.storage.rbd_utils [None req-bdc1104b-921d-426f-88bf-03a1336d4cae 360feb9a3f0146f0b84b6c28241e41a9 bd830773c45f46e3b6fd28d50255c383 - - default default] resizing rbd image 965c89d3-c13e-442b-8d32-f351bf95dda5_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 20 14:32:38 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:32:38 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:32:38 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:32:38.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:32:38 compute-1 nova_compute[225855]: 2026-01-20 14:32:38.603 225859 DEBUG nova.objects.instance [None req-bdc1104b-921d-426f-88bf-03a1336d4cae 360feb9a3f0146f0b84b6c28241e41a9 bd830773c45f46e3b6fd28d50255c383 - - default default] Lazy-loading 'migration_context' on Instance uuid 965c89d3-c13e-442b-8d32-f351bf95dda5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:32:38 compute-1 nova_compute[225855]: 2026-01-20 14:32:38.768 225859 DEBUG nova.virt.libvirt.driver [None req-bdc1104b-921d-426f-88bf-03a1336d4cae 360feb9a3f0146f0b84b6c28241e41a9 bd830773c45f46e3b6fd28d50255c383 - - default default] [instance: 965c89d3-c13e-442b-8d32-f351bf95dda5] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 20 14:32:38 compute-1 nova_compute[225855]: 2026-01-20 14:32:38.769 225859 DEBUG nova.virt.libvirt.driver [None req-bdc1104b-921d-426f-88bf-03a1336d4cae 360feb9a3f0146f0b84b6c28241e41a9 bd830773c45f46e3b6fd28d50255c383 - - default default] [instance: 965c89d3-c13e-442b-8d32-f351bf95dda5] Ensure instance console log exists: /var/lib/nova/instances/965c89d3-c13e-442b-8d32-f351bf95dda5/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 20 14:32:38 compute-1 nova_compute[225855]: 2026-01-20 14:32:38.769 225859 DEBUG oslo_concurrency.lockutils [None req-bdc1104b-921d-426f-88bf-03a1336d4cae 360feb9a3f0146f0b84b6c28241e41a9 bd830773c45f46e3b6fd28d50255c383 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:32:38 compute-1 nova_compute[225855]: 2026-01-20 14:32:38.769 225859 DEBUG oslo_concurrency.lockutils [None req-bdc1104b-921d-426f-88bf-03a1336d4cae 360feb9a3f0146f0b84b6c28241e41a9 bd830773c45f46e3b6fd28d50255c383 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:32:38 compute-1 nova_compute[225855]: 2026-01-20 14:32:38.770 225859 DEBUG oslo_concurrency.lockutils [None req-bdc1104b-921d-426f-88bf-03a1336d4cae 360feb9a3f0146f0b84b6c28241e41a9 bd830773c45f46e3b6fd28d50255c383 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:32:38 compute-1 nova_compute[225855]: 2026-01-20 14:32:38.786 225859 DEBUG nova.network.neutron [None req-bdc1104b-921d-426f-88bf-03a1336d4cae 360feb9a3f0146f0b84b6c28241e41a9 bd830773c45f46e3b6fd28d50255c383 - - default default] [instance: 965c89d3-c13e-442b-8d32-f351bf95dda5] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Jan 20 14:32:38 compute-1 nova_compute[225855]: 2026-01-20 14:32:38.787 225859 DEBUG nova.compute.manager [None req-bdc1104b-921d-426f-88bf-03a1336d4cae 360feb9a3f0146f0b84b6c28241e41a9 bd830773c45f46e3b6fd28d50255c383 - - default default] [instance: 965c89d3-c13e-442b-8d32-f351bf95dda5] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 20 14:32:38 compute-1 nova_compute[225855]: 2026-01-20 14:32:38.788 225859 DEBUG nova.virt.libvirt.driver [None req-bdc1104b-921d-426f-88bf-03a1336d4cae 360feb9a3f0146f0b84b6c28241e41a9 bd830773c45f46e3b6fd28d50255c383 - - default default] [instance: 965c89d3-c13e-442b-8d32-f351bf95dda5] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'device_type': 'disk', 'encryption_options': None, 'size': 0, 'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'guest_format': None, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'image_id': 'a32b3e07-16d8-46fd-9a7b-c242c432fcf9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 20 14:32:38 compute-1 nova_compute[225855]: 2026-01-20 14:32:38.791 225859 WARNING nova.virt.libvirt.driver [None req-bdc1104b-921d-426f-88bf-03a1336d4cae 360feb9a3f0146f0b84b6c28241e41a9 bd830773c45f46e3b6fd28d50255c383 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 14:32:38 compute-1 nova_compute[225855]: 2026-01-20 14:32:38.794 225859 DEBUG nova.virt.libvirt.host [None req-bdc1104b-921d-426f-88bf-03a1336d4cae 360feb9a3f0146f0b84b6c28241e41a9 bd830773c45f46e3b6fd28d50255c383 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 20 14:32:38 compute-1 nova_compute[225855]: 2026-01-20 14:32:38.795 225859 DEBUG nova.virt.libvirt.host [None req-bdc1104b-921d-426f-88bf-03a1336d4cae 360feb9a3f0146f0b84b6c28241e41a9 bd830773c45f46e3b6fd28d50255c383 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 20 14:32:38 compute-1 nova_compute[225855]: 2026-01-20 14:32:38.799 225859 DEBUG nova.virt.libvirt.host [None req-bdc1104b-921d-426f-88bf-03a1336d4cae 360feb9a3f0146f0b84b6c28241e41a9 bd830773c45f46e3b6fd28d50255c383 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 20 14:32:38 compute-1 nova_compute[225855]: 2026-01-20 14:32:38.799 225859 DEBUG nova.virt.libvirt.host [None req-bdc1104b-921d-426f-88bf-03a1336d4cae 360feb9a3f0146f0b84b6c28241e41a9 bd830773c45f46e3b6fd28d50255c383 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 20 14:32:38 compute-1 nova_compute[225855]: 2026-01-20 14:32:38.800 225859 DEBUG nova.virt.libvirt.driver [None req-bdc1104b-921d-426f-88bf-03a1336d4cae 360feb9a3f0146f0b84b6c28241e41a9 bd830773c45f46e3b6fd28d50255c383 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 20 14:32:38 compute-1 nova_compute[225855]: 2026-01-20 14:32:38.800 225859 DEBUG nova.virt.hardware [None req-bdc1104b-921d-426f-88bf-03a1336d4cae 360feb9a3f0146f0b84b6c28241e41a9 bd830773c45f46e3b6fd28d50255c383 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-20T14:21:55Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='522deaab-a741-4dbb-932d-d8b13a211c33',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 20 14:32:38 compute-1 nova_compute[225855]: 2026-01-20 14:32:38.801 225859 DEBUG nova.virt.hardware [None req-bdc1104b-921d-426f-88bf-03a1336d4cae 360feb9a3f0146f0b84b6c28241e41a9 bd830773c45f46e3b6fd28d50255c383 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 20 14:32:38 compute-1 nova_compute[225855]: 2026-01-20 14:32:38.801 225859 DEBUG nova.virt.hardware [None req-bdc1104b-921d-426f-88bf-03a1336d4cae 360feb9a3f0146f0b84b6c28241e41a9 bd830773c45f46e3b6fd28d50255c383 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 20 14:32:38 compute-1 nova_compute[225855]: 2026-01-20 14:32:38.801 225859 DEBUG nova.virt.hardware [None req-bdc1104b-921d-426f-88bf-03a1336d4cae 360feb9a3f0146f0b84b6c28241e41a9 bd830773c45f46e3b6fd28d50255c383 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 20 14:32:38 compute-1 nova_compute[225855]: 2026-01-20 14:32:38.801 225859 DEBUG nova.virt.hardware [None req-bdc1104b-921d-426f-88bf-03a1336d4cae 360feb9a3f0146f0b84b6c28241e41a9 bd830773c45f46e3b6fd28d50255c383 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 20 14:32:38 compute-1 nova_compute[225855]: 2026-01-20 14:32:38.801 225859 DEBUG nova.virt.hardware [None req-bdc1104b-921d-426f-88bf-03a1336d4cae 360feb9a3f0146f0b84b6c28241e41a9 bd830773c45f46e3b6fd28d50255c383 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 20 14:32:38 compute-1 nova_compute[225855]: 2026-01-20 14:32:38.802 225859 DEBUG nova.virt.hardware [None req-bdc1104b-921d-426f-88bf-03a1336d4cae 360feb9a3f0146f0b84b6c28241e41a9 bd830773c45f46e3b6fd28d50255c383 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 20 14:32:38 compute-1 nova_compute[225855]: 2026-01-20 14:32:38.802 225859 DEBUG nova.virt.hardware [None req-bdc1104b-921d-426f-88bf-03a1336d4cae 360feb9a3f0146f0b84b6c28241e41a9 bd830773c45f46e3b6fd28d50255c383 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 20 14:32:38 compute-1 nova_compute[225855]: 2026-01-20 14:32:38.802 225859 DEBUG nova.virt.hardware [None req-bdc1104b-921d-426f-88bf-03a1336d4cae 360feb9a3f0146f0b84b6c28241e41a9 bd830773c45f46e3b6fd28d50255c383 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 20 14:32:38 compute-1 nova_compute[225855]: 2026-01-20 14:32:38.802 225859 DEBUG nova.virt.hardware [None req-bdc1104b-921d-426f-88bf-03a1336d4cae 360feb9a3f0146f0b84b6c28241e41a9 bd830773c45f46e3b6fd28d50255c383 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 20 14:32:38 compute-1 nova_compute[225855]: 2026-01-20 14:32:38.802 225859 DEBUG nova.virt.hardware [None req-bdc1104b-921d-426f-88bf-03a1336d4cae 360feb9a3f0146f0b84b6c28241e41a9 bd830773c45f46e3b6fd28d50255c383 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 20 14:32:38 compute-1 nova_compute[225855]: 2026-01-20 14:32:38.805 225859 DEBUG oslo_concurrency.processutils [None req-bdc1104b-921d-426f-88bf-03a1336d4cae 360feb9a3f0146f0b84b6c28241e41a9 bd830773c45f46e3b6fd28d50255c383 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:32:38 compute-1 ceph-mon[81775]: pgmap v1261: 321 pgs: 321 active+clean; 62 MiB data, 420 MiB used, 21 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.8 MiB/s wr, 204 op/s
Jan 20 14:32:39 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:32:39 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/151885523' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:32:39 compute-1 nova_compute[225855]: 2026-01-20 14:32:39.197 225859 DEBUG oslo_concurrency.processutils [None req-bdc1104b-921d-426f-88bf-03a1336d4cae 360feb9a3f0146f0b84b6c28241e41a9 bd830773c45f46e3b6fd28d50255c383 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.392s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:32:39 compute-1 nova_compute[225855]: 2026-01-20 14:32:39.219 225859 DEBUG nova.storage.rbd_utils [None req-bdc1104b-921d-426f-88bf-03a1336d4cae 360feb9a3f0146f0b84b6c28241e41a9 bd830773c45f46e3b6fd28d50255c383 - - default default] rbd image 965c89d3-c13e-442b-8d32-f351bf95dda5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:32:39 compute-1 nova_compute[225855]: 2026-01-20 14:32:39.224 225859 DEBUG oslo_concurrency.processutils [None req-bdc1104b-921d-426f-88bf-03a1336d4cae 360feb9a3f0146f0b84b6c28241e41a9 bd830773c45f46e3b6fd28d50255c383 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:32:39 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:32:39 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:32:39 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:32:39.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:32:39 compute-1 nova_compute[225855]: 2026-01-20 14:32:39.396 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:32:39 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:32:39 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4006511380' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:32:39 compute-1 nova_compute[225855]: 2026-01-20 14:32:39.634 225859 DEBUG oslo_concurrency.processutils [None req-bdc1104b-921d-426f-88bf-03a1336d4cae 360feb9a3f0146f0b84b6c28241e41a9 bd830773c45f46e3b6fd28d50255c383 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.410s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:32:39 compute-1 nova_compute[225855]: 2026-01-20 14:32:39.636 225859 DEBUG nova.objects.instance [None req-bdc1104b-921d-426f-88bf-03a1336d4cae 360feb9a3f0146f0b84b6c28241e41a9 bd830773c45f46e3b6fd28d50255c383 - - default default] Lazy-loading 'pci_devices' on Instance uuid 965c89d3-c13e-442b-8d32-f351bf95dda5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:32:39 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:32:40 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/151885523' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:32:40 compute-1 ceph-mon[81775]: pgmap v1262: 321 pgs: 321 active+clean; 70 MiB data, 425 MiB used, 21 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.4 MiB/s wr, 185 op/s
Jan 20 14:32:40 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/4006511380' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:32:40 compute-1 nova_compute[225855]: 2026-01-20 14:32:40.239 225859 DEBUG nova.virt.libvirt.driver [None req-bdc1104b-921d-426f-88bf-03a1336d4cae 360feb9a3f0146f0b84b6c28241e41a9 bd830773c45f46e3b6fd28d50255c383 - - default default] [instance: 965c89d3-c13e-442b-8d32-f351bf95dda5] End _get_guest_xml xml=<domain type="kvm">
Jan 20 14:32:40 compute-1 nova_compute[225855]:   <uuid>965c89d3-c13e-442b-8d32-f351bf95dda5</uuid>
Jan 20 14:32:40 compute-1 nova_compute[225855]:   <name>instance-00000025</name>
Jan 20 14:32:40 compute-1 nova_compute[225855]:   <memory>131072</memory>
Jan 20 14:32:40 compute-1 nova_compute[225855]:   <vcpu>1</vcpu>
Jan 20 14:32:40 compute-1 nova_compute[225855]:   <metadata>
Jan 20 14:32:40 compute-1 nova_compute[225855]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 14:32:40 compute-1 nova_compute[225855]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 14:32:40 compute-1 nova_compute[225855]:       <nova:name>tempest-ServerExternalEventsTest-server-1702471022</nova:name>
Jan 20 14:32:40 compute-1 nova_compute[225855]:       <nova:creationTime>2026-01-20 14:32:38</nova:creationTime>
Jan 20 14:32:40 compute-1 nova_compute[225855]:       <nova:flavor name="m1.nano">
Jan 20 14:32:40 compute-1 nova_compute[225855]:         <nova:memory>128</nova:memory>
Jan 20 14:32:40 compute-1 nova_compute[225855]:         <nova:disk>1</nova:disk>
Jan 20 14:32:40 compute-1 nova_compute[225855]:         <nova:swap>0</nova:swap>
Jan 20 14:32:40 compute-1 nova_compute[225855]:         <nova:ephemeral>0</nova:ephemeral>
Jan 20 14:32:40 compute-1 nova_compute[225855]:         <nova:vcpus>1</nova:vcpus>
Jan 20 14:32:40 compute-1 nova_compute[225855]:       </nova:flavor>
Jan 20 14:32:40 compute-1 nova_compute[225855]:       <nova:owner>
Jan 20 14:32:40 compute-1 nova_compute[225855]:         <nova:user uuid="360feb9a3f0146f0b84b6c28241e41a9">tempest-ServerExternalEventsTest-1124989951-project-member</nova:user>
Jan 20 14:32:40 compute-1 nova_compute[225855]:         <nova:project uuid="bd830773c45f46e3b6fd28d50255c383">tempest-ServerExternalEventsTest-1124989951</nova:project>
Jan 20 14:32:40 compute-1 nova_compute[225855]:       </nova:owner>
Jan 20 14:32:40 compute-1 nova_compute[225855]:       <nova:root type="image" uuid="a32b3e07-16d8-46fd-9a7b-c242c432fcf9"/>
Jan 20 14:32:40 compute-1 nova_compute[225855]:       <nova:ports/>
Jan 20 14:32:40 compute-1 nova_compute[225855]:     </nova:instance>
Jan 20 14:32:40 compute-1 nova_compute[225855]:   </metadata>
Jan 20 14:32:40 compute-1 nova_compute[225855]:   <sysinfo type="smbios">
Jan 20 14:32:40 compute-1 nova_compute[225855]:     <system>
Jan 20 14:32:40 compute-1 nova_compute[225855]:       <entry name="manufacturer">RDO</entry>
Jan 20 14:32:40 compute-1 nova_compute[225855]:       <entry name="product">OpenStack Compute</entry>
Jan 20 14:32:40 compute-1 nova_compute[225855]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 14:32:40 compute-1 nova_compute[225855]:       <entry name="serial">965c89d3-c13e-442b-8d32-f351bf95dda5</entry>
Jan 20 14:32:40 compute-1 nova_compute[225855]:       <entry name="uuid">965c89d3-c13e-442b-8d32-f351bf95dda5</entry>
Jan 20 14:32:40 compute-1 nova_compute[225855]:       <entry name="family">Virtual Machine</entry>
Jan 20 14:32:40 compute-1 nova_compute[225855]:     </system>
Jan 20 14:32:40 compute-1 nova_compute[225855]:   </sysinfo>
Jan 20 14:32:40 compute-1 nova_compute[225855]:   <os>
Jan 20 14:32:40 compute-1 nova_compute[225855]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 20 14:32:40 compute-1 nova_compute[225855]:     <boot dev="hd"/>
Jan 20 14:32:40 compute-1 nova_compute[225855]:     <smbios mode="sysinfo"/>
Jan 20 14:32:40 compute-1 nova_compute[225855]:   </os>
Jan 20 14:32:40 compute-1 nova_compute[225855]:   <features>
Jan 20 14:32:40 compute-1 nova_compute[225855]:     <acpi/>
Jan 20 14:32:40 compute-1 nova_compute[225855]:     <apic/>
Jan 20 14:32:40 compute-1 nova_compute[225855]:     <vmcoreinfo/>
Jan 20 14:32:40 compute-1 nova_compute[225855]:   </features>
Jan 20 14:32:40 compute-1 nova_compute[225855]:   <clock offset="utc">
Jan 20 14:32:40 compute-1 nova_compute[225855]:     <timer name="pit" tickpolicy="delay"/>
Jan 20 14:32:40 compute-1 nova_compute[225855]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 20 14:32:40 compute-1 nova_compute[225855]:     <timer name="hpet" present="no"/>
Jan 20 14:32:40 compute-1 nova_compute[225855]:   </clock>
Jan 20 14:32:40 compute-1 nova_compute[225855]:   <cpu mode="custom" match="exact">
Jan 20 14:32:40 compute-1 nova_compute[225855]:     <model>Nehalem</model>
Jan 20 14:32:40 compute-1 nova_compute[225855]:     <topology sockets="1" cores="1" threads="1"/>
Jan 20 14:32:40 compute-1 nova_compute[225855]:   </cpu>
Jan 20 14:32:40 compute-1 nova_compute[225855]:   <devices>
Jan 20 14:32:40 compute-1 nova_compute[225855]:     <disk type="network" device="disk">
Jan 20 14:32:40 compute-1 nova_compute[225855]:       <driver type="raw" cache="none"/>
Jan 20 14:32:40 compute-1 nova_compute[225855]:       <source protocol="rbd" name="vms/965c89d3-c13e-442b-8d32-f351bf95dda5_disk">
Jan 20 14:32:40 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:32:40 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:32:40 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:32:40 compute-1 nova_compute[225855]:       </source>
Jan 20 14:32:40 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 14:32:40 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:32:40 compute-1 nova_compute[225855]:       </auth>
Jan 20 14:32:40 compute-1 nova_compute[225855]:       <target dev="vda" bus="virtio"/>
Jan 20 14:32:40 compute-1 nova_compute[225855]:     </disk>
Jan 20 14:32:40 compute-1 nova_compute[225855]:     <disk type="network" device="cdrom">
Jan 20 14:32:40 compute-1 nova_compute[225855]:       <driver type="raw" cache="none"/>
Jan 20 14:32:40 compute-1 nova_compute[225855]:       <source protocol="rbd" name="vms/965c89d3-c13e-442b-8d32-f351bf95dda5_disk.config">
Jan 20 14:32:40 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:32:40 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:32:40 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:32:40 compute-1 nova_compute[225855]:       </source>
Jan 20 14:32:40 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 14:32:40 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:32:40 compute-1 nova_compute[225855]:       </auth>
Jan 20 14:32:40 compute-1 nova_compute[225855]:       <target dev="sda" bus="sata"/>
Jan 20 14:32:40 compute-1 nova_compute[225855]:     </disk>
Jan 20 14:32:40 compute-1 nova_compute[225855]:     <serial type="pty">
Jan 20 14:32:40 compute-1 nova_compute[225855]:       <log file="/var/lib/nova/instances/965c89d3-c13e-442b-8d32-f351bf95dda5/console.log" append="off"/>
Jan 20 14:32:40 compute-1 nova_compute[225855]:     </serial>
Jan 20 14:32:40 compute-1 nova_compute[225855]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 20 14:32:40 compute-1 nova_compute[225855]:     <video>
Jan 20 14:32:40 compute-1 nova_compute[225855]:       <model type="virtio"/>
Jan 20 14:32:40 compute-1 nova_compute[225855]:     </video>
Jan 20 14:32:40 compute-1 nova_compute[225855]:     <input type="tablet" bus="usb"/>
Jan 20 14:32:40 compute-1 nova_compute[225855]:     <rng model="virtio">
Jan 20 14:32:40 compute-1 nova_compute[225855]:       <backend model="random">/dev/urandom</backend>
Jan 20 14:32:40 compute-1 nova_compute[225855]:     </rng>
Jan 20 14:32:40 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root"/>
Jan 20 14:32:40 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:32:40 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:32:40 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:32:40 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:32:40 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:32:40 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:32:40 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:32:40 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:32:40 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:32:40 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:32:40 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:32:40 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:32:40 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:32:40 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:32:40 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:32:40 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:32:40 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:32:40 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:32:40 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:32:40 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:32:40 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:32:40 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:32:40 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:32:40 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:32:40 compute-1 nova_compute[225855]:     <controller type="usb" index="0"/>
Jan 20 14:32:40 compute-1 nova_compute[225855]:     <memballoon model="virtio">
Jan 20 14:32:40 compute-1 nova_compute[225855]:       <stats period="10"/>
Jan 20 14:32:40 compute-1 nova_compute[225855]:     </memballoon>
Jan 20 14:32:40 compute-1 nova_compute[225855]:   </devices>
Jan 20 14:32:40 compute-1 nova_compute[225855]: </domain>
Jan 20 14:32:40 compute-1 nova_compute[225855]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 20 14:32:40 compute-1 nova_compute[225855]: 2026-01-20 14:32:40.358 225859 DEBUG nova.virt.libvirt.driver [None req-bdc1104b-921d-426f-88bf-03a1336d4cae 360feb9a3f0146f0b84b6c28241e41a9 bd830773c45f46e3b6fd28d50255c383 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:32:40 compute-1 nova_compute[225855]: 2026-01-20 14:32:40.358 225859 DEBUG nova.virt.libvirt.driver [None req-bdc1104b-921d-426f-88bf-03a1336d4cae 360feb9a3f0146f0b84b6c28241e41a9 bd830773c45f46e3b6fd28d50255c383 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:32:40 compute-1 nova_compute[225855]: 2026-01-20 14:32:40.359 225859 INFO nova.virt.libvirt.driver [None req-bdc1104b-921d-426f-88bf-03a1336d4cae 360feb9a3f0146f0b84b6c28241e41a9 bd830773c45f46e3b6fd28d50255c383 - - default default] [instance: 965c89d3-c13e-442b-8d32-f351bf95dda5] Using config drive
Jan 20 14:32:40 compute-1 nova_compute[225855]: 2026-01-20 14:32:40.381 225859 DEBUG nova.storage.rbd_utils [None req-bdc1104b-921d-426f-88bf-03a1336d4cae 360feb9a3f0146f0b84b6c28241e41a9 bd830773c45f46e3b6fd28d50255c383 - - default default] rbd image 965c89d3-c13e-442b-8d32-f351bf95dda5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:32:40 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:32:40 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:32:40 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:32:40.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:32:40 compute-1 nova_compute[225855]: 2026-01-20 14:32:40.571 225859 INFO nova.virt.libvirt.driver [None req-bdc1104b-921d-426f-88bf-03a1336d4cae 360feb9a3f0146f0b84b6c28241e41a9 bd830773c45f46e3b6fd28d50255c383 - - default default] [instance: 965c89d3-c13e-442b-8d32-f351bf95dda5] Creating config drive at /var/lib/nova/instances/965c89d3-c13e-442b-8d32-f351bf95dda5/disk.config
Jan 20 14:32:40 compute-1 nova_compute[225855]: 2026-01-20 14:32:40.577 225859 DEBUG oslo_concurrency.processutils [None req-bdc1104b-921d-426f-88bf-03a1336d4cae 360feb9a3f0146f0b84b6c28241e41a9 bd830773c45f46e3b6fd28d50255c383 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/965c89d3-c13e-442b-8d32-f351bf95dda5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmph84qe834 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:32:40 compute-1 nova_compute[225855]: 2026-01-20 14:32:40.701 225859 DEBUG oslo_concurrency.processutils [None req-bdc1104b-921d-426f-88bf-03a1336d4cae 360feb9a3f0146f0b84b6c28241e41a9 bd830773c45f46e3b6fd28d50255c383 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/965c89d3-c13e-442b-8d32-f351bf95dda5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmph84qe834" returned: 0 in 0.124s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:32:40 compute-1 nova_compute[225855]: 2026-01-20 14:32:40.725 225859 DEBUG nova.storage.rbd_utils [None req-bdc1104b-921d-426f-88bf-03a1336d4cae 360feb9a3f0146f0b84b6c28241e41a9 bd830773c45f46e3b6fd28d50255c383 - - default default] rbd image 965c89d3-c13e-442b-8d32-f351bf95dda5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:32:40 compute-1 nova_compute[225855]: 2026-01-20 14:32:40.728 225859 DEBUG oslo_concurrency.processutils [None req-bdc1104b-921d-426f-88bf-03a1336d4cae 360feb9a3f0146f0b84b6c28241e41a9 bd830773c45f46e3b6fd28d50255c383 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/965c89d3-c13e-442b-8d32-f351bf95dda5/disk.config 965c89d3-c13e-442b-8d32-f351bf95dda5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:32:40 compute-1 nova_compute[225855]: 2026-01-20 14:32:40.894 225859 DEBUG oslo_concurrency.processutils [None req-bdc1104b-921d-426f-88bf-03a1336d4cae 360feb9a3f0146f0b84b6c28241e41a9 bd830773c45f46e3b6fd28d50255c383 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/965c89d3-c13e-442b-8d32-f351bf95dda5/disk.config 965c89d3-c13e-442b-8d32-f351bf95dda5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.166s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:32:40 compute-1 nova_compute[225855]: 2026-01-20 14:32:40.895 225859 INFO nova.virt.libvirt.driver [None req-bdc1104b-921d-426f-88bf-03a1336d4cae 360feb9a3f0146f0b84b6c28241e41a9 bd830773c45f46e3b6fd28d50255c383 - - default default] [instance: 965c89d3-c13e-442b-8d32-f351bf95dda5] Deleting local config drive /var/lib/nova/instances/965c89d3-c13e-442b-8d32-f351bf95dda5/disk.config because it was imported into RBD.
Jan 20 14:32:40 compute-1 systemd-machined[194361]: New machine qemu-19-instance-00000025.
Jan 20 14:32:40 compute-1 systemd[1]: Started Virtual Machine qemu-19-instance-00000025.
Jan 20 14:32:41 compute-1 nova_compute[225855]: 2026-01-20 14:32:41.073 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:32:41 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:32:41 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:32:41 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:32:41.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:32:41 compute-1 nova_compute[225855]: 2026-01-20 14:32:41.353 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768919561.3523674, 965c89d3-c13e-442b-8d32-f351bf95dda5 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:32:41 compute-1 nova_compute[225855]: 2026-01-20 14:32:41.353 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 965c89d3-c13e-442b-8d32-f351bf95dda5] VM Resumed (Lifecycle Event)
Jan 20 14:32:41 compute-1 nova_compute[225855]: 2026-01-20 14:32:41.358 225859 DEBUG nova.compute.manager [None req-bdc1104b-921d-426f-88bf-03a1336d4cae 360feb9a3f0146f0b84b6c28241e41a9 bd830773c45f46e3b6fd28d50255c383 - - default default] [instance: 965c89d3-c13e-442b-8d32-f351bf95dda5] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 20 14:32:41 compute-1 nova_compute[225855]: 2026-01-20 14:32:41.358 225859 DEBUG nova.virt.libvirt.driver [None req-bdc1104b-921d-426f-88bf-03a1336d4cae 360feb9a3f0146f0b84b6c28241e41a9 bd830773c45f46e3b6fd28d50255c383 - - default default] [instance: 965c89d3-c13e-442b-8d32-f351bf95dda5] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 20 14:32:41 compute-1 nova_compute[225855]: 2026-01-20 14:32:41.363 225859 INFO nova.virt.libvirt.driver [-] [instance: 965c89d3-c13e-442b-8d32-f351bf95dda5] Instance spawned successfully.
Jan 20 14:32:41 compute-1 nova_compute[225855]: 2026-01-20 14:32:41.364 225859 DEBUG nova.virt.libvirt.driver [None req-bdc1104b-921d-426f-88bf-03a1336d4cae 360feb9a3f0146f0b84b6c28241e41a9 bd830773c45f46e3b6fd28d50255c383 - - default default] [instance: 965c89d3-c13e-442b-8d32-f351bf95dda5] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 20 14:32:41 compute-1 nova_compute[225855]: 2026-01-20 14:32:41.385 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 965c89d3-c13e-442b-8d32-f351bf95dda5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:32:41 compute-1 nova_compute[225855]: 2026-01-20 14:32:41.392 225859 DEBUG nova.virt.libvirt.driver [None req-bdc1104b-921d-426f-88bf-03a1336d4cae 360feb9a3f0146f0b84b6c28241e41a9 bd830773c45f46e3b6fd28d50255c383 - - default default] [instance: 965c89d3-c13e-442b-8d32-f351bf95dda5] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:32:41 compute-1 nova_compute[225855]: 2026-01-20 14:32:41.393 225859 DEBUG nova.virt.libvirt.driver [None req-bdc1104b-921d-426f-88bf-03a1336d4cae 360feb9a3f0146f0b84b6c28241e41a9 bd830773c45f46e3b6fd28d50255c383 - - default default] [instance: 965c89d3-c13e-442b-8d32-f351bf95dda5] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:32:41 compute-1 nova_compute[225855]: 2026-01-20 14:32:41.394 225859 DEBUG nova.virt.libvirt.driver [None req-bdc1104b-921d-426f-88bf-03a1336d4cae 360feb9a3f0146f0b84b6c28241e41a9 bd830773c45f46e3b6fd28d50255c383 - - default default] [instance: 965c89d3-c13e-442b-8d32-f351bf95dda5] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:32:41 compute-1 nova_compute[225855]: 2026-01-20 14:32:41.394 225859 DEBUG nova.virt.libvirt.driver [None req-bdc1104b-921d-426f-88bf-03a1336d4cae 360feb9a3f0146f0b84b6c28241e41a9 bd830773c45f46e3b6fd28d50255c383 - - default default] [instance: 965c89d3-c13e-442b-8d32-f351bf95dda5] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:32:41 compute-1 nova_compute[225855]: 2026-01-20 14:32:41.395 225859 DEBUG nova.virt.libvirt.driver [None req-bdc1104b-921d-426f-88bf-03a1336d4cae 360feb9a3f0146f0b84b6c28241e41a9 bd830773c45f46e3b6fd28d50255c383 - - default default] [instance: 965c89d3-c13e-442b-8d32-f351bf95dda5] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:32:41 compute-1 nova_compute[225855]: 2026-01-20 14:32:41.395 225859 DEBUG nova.virt.libvirt.driver [None req-bdc1104b-921d-426f-88bf-03a1336d4cae 360feb9a3f0146f0b84b6c28241e41a9 bd830773c45f46e3b6fd28d50255c383 - - default default] [instance: 965c89d3-c13e-442b-8d32-f351bf95dda5] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:32:41 compute-1 nova_compute[225855]: 2026-01-20 14:32:41.402 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 965c89d3-c13e-442b-8d32-f351bf95dda5] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:32:41 compute-1 nova_compute[225855]: 2026-01-20 14:32:41.451 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 965c89d3-c13e-442b-8d32-f351bf95dda5] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 14:32:41 compute-1 nova_compute[225855]: 2026-01-20 14:32:41.452 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768919561.3558192, 965c89d3-c13e-442b-8d32-f351bf95dda5 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:32:41 compute-1 nova_compute[225855]: 2026-01-20 14:32:41.452 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 965c89d3-c13e-442b-8d32-f351bf95dda5] VM Started (Lifecycle Event)
Jan 20 14:32:41 compute-1 nova_compute[225855]: 2026-01-20 14:32:41.474 225859 INFO nova.compute.manager [None req-bdc1104b-921d-426f-88bf-03a1336d4cae 360feb9a3f0146f0b84b6c28241e41a9 bd830773c45f46e3b6fd28d50255c383 - - default default] [instance: 965c89d3-c13e-442b-8d32-f351bf95dda5] Took 3.49 seconds to spawn the instance on the hypervisor.
Jan 20 14:32:41 compute-1 nova_compute[225855]: 2026-01-20 14:32:41.475 225859 DEBUG nova.compute.manager [None req-bdc1104b-921d-426f-88bf-03a1336d4cae 360feb9a3f0146f0b84b6c28241e41a9 bd830773c45f46e3b6fd28d50255c383 - - default default] [instance: 965c89d3-c13e-442b-8d32-f351bf95dda5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:32:41 compute-1 nova_compute[225855]: 2026-01-20 14:32:41.477 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 965c89d3-c13e-442b-8d32-f351bf95dda5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:32:41 compute-1 nova_compute[225855]: 2026-01-20 14:32:41.486 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 965c89d3-c13e-442b-8d32-f351bf95dda5] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:32:41 compute-1 nova_compute[225855]: 2026-01-20 14:32:41.514 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 965c89d3-c13e-442b-8d32-f351bf95dda5] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 14:32:41 compute-1 nova_compute[225855]: 2026-01-20 14:32:41.552 225859 INFO nova.compute.manager [None req-bdc1104b-921d-426f-88bf-03a1336d4cae 360feb9a3f0146f0b84b6c28241e41a9 bd830773c45f46e3b6fd28d50255c383 - - default default] [instance: 965c89d3-c13e-442b-8d32-f351bf95dda5] Took 4.71 seconds to build instance.
Jan 20 14:32:41 compute-1 ceph-mon[81775]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #58. Immutable memtables: 0.
Jan 20 14:32:41 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:32:41.583662) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 20 14:32:41 compute-1 ceph-mon[81775]: rocksdb: [db/flush_job.cc:856] [default] [JOB 33] Flushing memtable with next log file: 58
Jan 20 14:32:41 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768919561583759, "job": 33, "event": "flush_started", "num_memtables": 1, "num_entries": 2490, "num_deletes": 505, "total_data_size": 5062463, "memory_usage": 5140560, "flush_reason": "Manual Compaction"}
Jan 20 14:32:41 compute-1 ceph-mon[81775]: rocksdb: [db/flush_job.cc:885] [default] [JOB 33] Level-0 flush table #59: started
Jan 20 14:32:41 compute-1 sudo[243399]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:32:41 compute-1 sudo[243399]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:32:41 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768919561610250, "cf_name": "default", "job": 33, "event": "table_file_creation", "file_number": 59, "file_size": 2936030, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 28321, "largest_seqno": 30806, "table_properties": {"data_size": 2926940, "index_size": 5008, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3013, "raw_key_size": 24239, "raw_average_key_size": 20, "raw_value_size": 2906012, "raw_average_value_size": 2448, "num_data_blocks": 218, "num_entries": 1187, "num_filter_entries": 1187, "num_deletions": 505, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768919386, "oldest_key_time": 1768919386, "file_creation_time": 1768919561, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1539d774-8a6f-4e48-b253-137c44586344", "db_session_id": "LFF7G2OZDOU7TKQ8MKAH", "orig_file_number": 59, "seqno_to_time_mapping": "N/A"}}
Jan 20 14:32:41 compute-1 ceph-mon[81775]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 33] Flush lasted 26682 microseconds, and 6510 cpu microseconds.
Jan 20 14:32:41 compute-1 ceph-mon[81775]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 14:32:41 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:32:41.610359) [db/flush_job.cc:967] [default] [JOB 33] Level-0 flush table #59: 2936030 bytes OK
Jan 20 14:32:41 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:32:41.610398) [db/memtable_list.cc:519] [default] Level-0 commit table #59 started
Jan 20 14:32:41 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:32:41.611493) [db/memtable_list.cc:722] [default] Level-0 commit table #59: memtable #1 done
Jan 20 14:32:41 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:32:41.611507) EVENT_LOG_v1 {"time_micros": 1768919561611503, "job": 33, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 20 14:32:41 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:32:41.611525) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 20 14:32:41 compute-1 ceph-mon[81775]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 33] Try to delete WAL files size 5050420, prev total WAL file size 5050420, number of live WAL files 2.
Jan 20 14:32:41 compute-1 ceph-mon[81775]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000055.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 14:32:41 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:32:41.613219) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032323539' seq:72057594037927935, type:22 .. '7061786F730032353131' seq:0, type:0; will stop at (end)
Jan 20 14:32:41 compute-1 ceph-mon[81775]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 34] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 20 14:32:41 compute-1 ceph-mon[81775]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 33 Base level 0, inputs: [59(2867KB)], [57(10MB)]
Jan 20 14:32:41 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768919561613287, "job": 34, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [59], "files_L6": [57], "score": -1, "input_data_size": 13916161, "oldest_snapshot_seqno": -1}
Jan 20 14:32:41 compute-1 sudo[243399]: pam_unix(sudo:session): session closed for user root
Jan 20 14:32:41 compute-1 nova_compute[225855]: 2026-01-20 14:32:41.621 225859 DEBUG oslo_concurrency.lockutils [None req-bdc1104b-921d-426f-88bf-03a1336d4cae 360feb9a3f0146f0b84b6c28241e41a9 bd830773c45f46e3b6fd28d50255c383 - - default default] Lock "965c89d3-c13e-442b-8d32-f351bf95dda5" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 4.863s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:32:41 compute-1 sudo[243424]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:32:41 compute-1 sudo[243424]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:32:41 compute-1 sudo[243424]: pam_unix(sudo:session): session closed for user root
Jan 20 14:32:41 compute-1 ceph-mon[81775]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 34] Generated table #60: 5445 keys, 8452471 bytes, temperature: kUnknown
Jan 20 14:32:41 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768919561707442, "cf_name": "default", "job": 34, "event": "table_file_creation", "file_number": 60, "file_size": 8452471, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8416859, "index_size": 20910, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13637, "raw_key_size": 139192, "raw_average_key_size": 25, "raw_value_size": 8319448, "raw_average_value_size": 1527, "num_data_blocks": 843, "num_entries": 5445, "num_filter_entries": 5445, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768917474, "oldest_key_time": 0, "file_creation_time": 1768919561, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1539d774-8a6f-4e48-b253-137c44586344", "db_session_id": "LFF7G2OZDOU7TKQ8MKAH", "orig_file_number": 60, "seqno_to_time_mapping": "N/A"}}
Jan 20 14:32:41 compute-1 ceph-mon[81775]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 14:32:41 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:32:41.707752) [db/compaction/compaction_job.cc:1663] [default] [JOB 34] Compacted 1@0 + 1@6 files to L6 => 8452471 bytes
Jan 20 14:32:41 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:32:41.710197) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 147.6 rd, 89.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.8, 10.5 +0.0 blob) out(8.1 +0.0 blob), read-write-amplify(7.6) write-amplify(2.9) OK, records in: 6452, records dropped: 1007 output_compression: NoCompression
Jan 20 14:32:41 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:32:41.710229) EVENT_LOG_v1 {"time_micros": 1768919561710214, "job": 34, "event": "compaction_finished", "compaction_time_micros": 94261, "compaction_time_cpu_micros": 21563, "output_level": 6, "num_output_files": 1, "total_output_size": 8452471, "num_input_records": 6452, "num_output_records": 5445, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 20 14:32:41 compute-1 ceph-mon[81775]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000059.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 14:32:41 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768919561711255, "job": 34, "event": "table_file_deletion", "file_number": 59}
Jan 20 14:32:41 compute-1 ceph-mon[81775]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000057.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 14:32:41 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768919561714709, "job": 34, "event": "table_file_deletion", "file_number": 57}
Jan 20 14:32:41 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:32:41.613094) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:32:41 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:32:41.714806) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:32:41 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:32:41.714811) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:32:41 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:32:41.714812) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:32:41 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:32:41.714814) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:32:41 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:32:41.714816) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:32:42 compute-1 nova_compute[225855]: 2026-01-20 14:32:42.516 225859 DEBUG nova.compute.manager [None req-c5c0e48a-d44c-438a-b8ed-0cf5dae9c9ae fab698d5ba454eb38c305f93992c91ab 0e2b622671af4e3fbcdb594e88f55f4c - - default default] [instance: 965c89d3-c13e-442b-8d32-f351bf95dda5] Received event network-changed external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:32:42 compute-1 nova_compute[225855]: 2026-01-20 14:32:42.516 225859 DEBUG nova.compute.manager [None req-c5c0e48a-d44c-438a-b8ed-0cf5dae9c9ae fab698d5ba454eb38c305f93992c91ab 0e2b622671af4e3fbcdb594e88f55f4c - - default default] [instance: 965c89d3-c13e-442b-8d32-f351bf95dda5] Refreshing instance network info cache due to event network-changed. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 14:32:42 compute-1 nova_compute[225855]: 2026-01-20 14:32:42.517 225859 DEBUG oslo_concurrency.lockutils [None req-c5c0e48a-d44c-438a-b8ed-0cf5dae9c9ae fab698d5ba454eb38c305f93992c91ab 0e2b622671af4e3fbcdb594e88f55f4c - - default default] Acquiring lock "refresh_cache-965c89d3-c13e-442b-8d32-f351bf95dda5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:32:42 compute-1 nova_compute[225855]: 2026-01-20 14:32:42.517 225859 DEBUG oslo_concurrency.lockutils [None req-c5c0e48a-d44c-438a-b8ed-0cf5dae9c9ae fab698d5ba454eb38c305f93992c91ab 0e2b622671af4e3fbcdb594e88f55f4c - - default default] Acquired lock "refresh_cache-965c89d3-c13e-442b-8d32-f351bf95dda5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:32:42 compute-1 nova_compute[225855]: 2026-01-20 14:32:42.517 225859 DEBUG nova.network.neutron [None req-c5c0e48a-d44c-438a-b8ed-0cf5dae9c9ae fab698d5ba454eb38c305f93992c91ab 0e2b622671af4e3fbcdb594e88f55f4c - - default default] [instance: 965c89d3-c13e-442b-8d32-f351bf95dda5] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 14:32:42 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:32:42 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:32:42 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:32:42.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:32:42 compute-1 ceph-mon[81775]: pgmap v1263: 321 pgs: 321 active+clean; 88 MiB data, 437 MiB used, 21 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 196 op/s
Jan 20 14:32:42 compute-1 nova_compute[225855]: 2026-01-20 14:32:42.788 225859 DEBUG oslo_concurrency.lockutils [None req-4c37bd00-6c2c-41c6-b3b4-146fc6d36cc7 360feb9a3f0146f0b84b6c28241e41a9 bd830773c45f46e3b6fd28d50255c383 - - default default] Acquiring lock "965c89d3-c13e-442b-8d32-f351bf95dda5" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:32:42 compute-1 nova_compute[225855]: 2026-01-20 14:32:42.789 225859 DEBUG oslo_concurrency.lockutils [None req-4c37bd00-6c2c-41c6-b3b4-146fc6d36cc7 360feb9a3f0146f0b84b6c28241e41a9 bd830773c45f46e3b6fd28d50255c383 - - default default] Lock "965c89d3-c13e-442b-8d32-f351bf95dda5" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:32:42 compute-1 nova_compute[225855]: 2026-01-20 14:32:42.789 225859 DEBUG oslo_concurrency.lockutils [None req-4c37bd00-6c2c-41c6-b3b4-146fc6d36cc7 360feb9a3f0146f0b84b6c28241e41a9 bd830773c45f46e3b6fd28d50255c383 - - default default] Acquiring lock "965c89d3-c13e-442b-8d32-f351bf95dda5-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:32:42 compute-1 nova_compute[225855]: 2026-01-20 14:32:42.790 225859 DEBUG oslo_concurrency.lockutils [None req-4c37bd00-6c2c-41c6-b3b4-146fc6d36cc7 360feb9a3f0146f0b84b6c28241e41a9 bd830773c45f46e3b6fd28d50255c383 - - default default] Lock "965c89d3-c13e-442b-8d32-f351bf95dda5-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:32:42 compute-1 nova_compute[225855]: 2026-01-20 14:32:42.790 225859 DEBUG oslo_concurrency.lockutils [None req-4c37bd00-6c2c-41c6-b3b4-146fc6d36cc7 360feb9a3f0146f0b84b6c28241e41a9 bd830773c45f46e3b6fd28d50255c383 - - default default] Lock "965c89d3-c13e-442b-8d32-f351bf95dda5-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:32:42 compute-1 nova_compute[225855]: 2026-01-20 14:32:42.793 225859 INFO nova.compute.manager [None req-4c37bd00-6c2c-41c6-b3b4-146fc6d36cc7 360feb9a3f0146f0b84b6c28241e41a9 bd830773c45f46e3b6fd28d50255c383 - - default default] [instance: 965c89d3-c13e-442b-8d32-f351bf95dda5] Terminating instance
Jan 20 14:32:42 compute-1 nova_compute[225855]: 2026-01-20 14:32:42.795 225859 DEBUG oslo_concurrency.lockutils [None req-4c37bd00-6c2c-41c6-b3b4-146fc6d36cc7 360feb9a3f0146f0b84b6c28241e41a9 bd830773c45f46e3b6fd28d50255c383 - - default default] Acquiring lock "refresh_cache-965c89d3-c13e-442b-8d32-f351bf95dda5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:32:42 compute-1 nova_compute[225855]: 2026-01-20 14:32:42.841 225859 DEBUG nova.network.neutron [None req-c5c0e48a-d44c-438a-b8ed-0cf5dae9c9ae fab698d5ba454eb38c305f93992c91ab 0e2b622671af4e3fbcdb594e88f55f4c - - default default] [instance: 965c89d3-c13e-442b-8d32-f351bf95dda5] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 20 14:32:43 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:32:43 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:32:43 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:32:43.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:32:43 compute-1 nova_compute[225855]: 2026-01-20 14:32:43.419 225859 DEBUG nova.network.neutron [None req-c5c0e48a-d44c-438a-b8ed-0cf5dae9c9ae fab698d5ba454eb38c305f93992c91ab 0e2b622671af4e3fbcdb594e88f55f4c - - default default] [instance: 965c89d3-c13e-442b-8d32-f351bf95dda5] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:32:43 compute-1 nova_compute[225855]: 2026-01-20 14:32:43.464 225859 DEBUG oslo_concurrency.lockutils [None req-c5c0e48a-d44c-438a-b8ed-0cf5dae9c9ae fab698d5ba454eb38c305f93992c91ab 0e2b622671af4e3fbcdb594e88f55f4c - - default default] Releasing lock "refresh_cache-965c89d3-c13e-442b-8d32-f351bf95dda5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:32:43 compute-1 nova_compute[225855]: 2026-01-20 14:32:43.464 225859 DEBUG oslo_concurrency.lockutils [None req-4c37bd00-6c2c-41c6-b3b4-146fc6d36cc7 360feb9a3f0146f0b84b6c28241e41a9 bd830773c45f46e3b6fd28d50255c383 - - default default] Acquired lock "refresh_cache-965c89d3-c13e-442b-8d32-f351bf95dda5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:32:43 compute-1 nova_compute[225855]: 2026-01-20 14:32:43.465 225859 DEBUG nova.network.neutron [None req-4c37bd00-6c2c-41c6-b3b4-146fc6d36cc7 360feb9a3f0146f0b84b6c28241e41a9 bd830773c45f46e3b6fd28d50255c383 - - default default] [instance: 965c89d3-c13e-442b-8d32-f351bf95dda5] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 14:32:43 compute-1 nova_compute[225855]: 2026-01-20 14:32:43.735 225859 DEBUG nova.network.neutron [None req-4c37bd00-6c2c-41c6-b3b4-146fc6d36cc7 360feb9a3f0146f0b84b6c28241e41a9 bd830773c45f46e3b6fd28d50255c383 - - default default] [instance: 965c89d3-c13e-442b-8d32-f351bf95dda5] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 20 14:32:44 compute-1 nova_compute[225855]: 2026-01-20 14:32:44.399 225859 DEBUG nova.network.neutron [None req-4c37bd00-6c2c-41c6-b3b4-146fc6d36cc7 360feb9a3f0146f0b84b6c28241e41a9 bd830773c45f46e3b6fd28d50255c383 - - default default] [instance: 965c89d3-c13e-442b-8d32-f351bf95dda5] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:32:44 compute-1 nova_compute[225855]: 2026-01-20 14:32:44.400 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:32:44 compute-1 nova_compute[225855]: 2026-01-20 14:32:44.449 225859 DEBUG oslo_concurrency.lockutils [None req-4c37bd00-6c2c-41c6-b3b4-146fc6d36cc7 360feb9a3f0146f0b84b6c28241e41a9 bd830773c45f46e3b6fd28d50255c383 - - default default] Releasing lock "refresh_cache-965c89d3-c13e-442b-8d32-f351bf95dda5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:32:44 compute-1 nova_compute[225855]: 2026-01-20 14:32:44.450 225859 DEBUG nova.compute.manager [None req-4c37bd00-6c2c-41c6-b3b4-146fc6d36cc7 360feb9a3f0146f0b84b6c28241e41a9 bd830773c45f46e3b6fd28d50255c383 - - default default] [instance: 965c89d3-c13e-442b-8d32-f351bf95dda5] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 20 14:32:44 compute-1 systemd[1]: machine-qemu\x2d19\x2dinstance\x2d00000025.scope: Deactivated successfully.
Jan 20 14:32:44 compute-1 systemd[1]: machine-qemu\x2d19\x2dinstance\x2d00000025.scope: Consumed 3.545s CPU time.
Jan 20 14:32:44 compute-1 systemd-machined[194361]: Machine qemu-19-instance-00000025 terminated.
Jan 20 14:32:44 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:32:44 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:32:44 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:32:44.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:32:44 compute-1 ceph-mon[81775]: pgmap v1264: 321 pgs: 321 active+clean; 88 MiB data, 437 MiB used, 21 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.9 MiB/s wr, 129 op/s
Jan 20 14:32:44 compute-1 nova_compute[225855]: 2026-01-20 14:32:44.670 225859 INFO nova.virt.libvirt.driver [-] [instance: 965c89d3-c13e-442b-8d32-f351bf95dda5] Instance destroyed successfully.
Jan 20 14:32:44 compute-1 nova_compute[225855]: 2026-01-20 14:32:44.670 225859 DEBUG nova.objects.instance [None req-4c37bd00-6c2c-41c6-b3b4-146fc6d36cc7 360feb9a3f0146f0b84b6c28241e41a9 bd830773c45f46e3b6fd28d50255c383 - - default default] Lazy-loading 'resources' on Instance uuid 965c89d3-c13e-442b-8d32-f351bf95dda5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:32:44 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:32:44.702 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=14, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '12:bb:42', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '06:92:24:f7:15:56'}, ipsec=False) old=SB_Global(nb_cfg=13) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:32:44 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:32:44.703 140354 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 20 14:32:44 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:32:44.703 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5ffd4ac3-9266-4927-98ad-20a17782c725, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '14'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:32:44 compute-1 nova_compute[225855]: 2026-01-20 14:32:44.704 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:32:44 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:32:45 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:32:45 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:32:45 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:32:45.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:32:45 compute-1 nova_compute[225855]: 2026-01-20 14:32:45.336 225859 INFO nova.virt.libvirt.driver [None req-4c37bd00-6c2c-41c6-b3b4-146fc6d36cc7 360feb9a3f0146f0b84b6c28241e41a9 bd830773c45f46e3b6fd28d50255c383 - - default default] [instance: 965c89d3-c13e-442b-8d32-f351bf95dda5] Deleting instance files /var/lib/nova/instances/965c89d3-c13e-442b-8d32-f351bf95dda5_del
Jan 20 14:32:45 compute-1 nova_compute[225855]: 2026-01-20 14:32:45.337 225859 INFO nova.virt.libvirt.driver [None req-4c37bd00-6c2c-41c6-b3b4-146fc6d36cc7 360feb9a3f0146f0b84b6c28241e41a9 bd830773c45f46e3b6fd28d50255c383 - - default default] [instance: 965c89d3-c13e-442b-8d32-f351bf95dda5] Deletion of /var/lib/nova/instances/965c89d3-c13e-442b-8d32-f351bf95dda5_del complete
Jan 20 14:32:45 compute-1 nova_compute[225855]: 2026-01-20 14:32:45.414 225859 INFO nova.compute.manager [None req-4c37bd00-6c2c-41c6-b3b4-146fc6d36cc7 360feb9a3f0146f0b84b6c28241e41a9 bd830773c45f46e3b6fd28d50255c383 - - default default] [instance: 965c89d3-c13e-442b-8d32-f351bf95dda5] Took 0.96 seconds to destroy the instance on the hypervisor.
Jan 20 14:32:45 compute-1 nova_compute[225855]: 2026-01-20 14:32:45.414 225859 DEBUG oslo.service.loopingcall [None req-4c37bd00-6c2c-41c6-b3b4-146fc6d36cc7 360feb9a3f0146f0b84b6c28241e41a9 bd830773c45f46e3b6fd28d50255c383 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 20 14:32:45 compute-1 nova_compute[225855]: 2026-01-20 14:32:45.415 225859 DEBUG nova.compute.manager [-] [instance: 965c89d3-c13e-442b-8d32-f351bf95dda5] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 20 14:32:45 compute-1 nova_compute[225855]: 2026-01-20 14:32:45.415 225859 DEBUG nova.network.neutron [-] [instance: 965c89d3-c13e-442b-8d32-f351bf95dda5] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 20 14:32:45 compute-1 sshd-session[243449]: Invalid user user from 45.179.5.170 port 49820
Jan 20 14:32:45 compute-1 nova_compute[225855]: 2026-01-20 14:32:45.708 225859 DEBUG nova.network.neutron [-] [instance: 965c89d3-c13e-442b-8d32-f351bf95dda5] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 20 14:32:45 compute-1 nova_compute[225855]: 2026-01-20 14:32:45.747 225859 DEBUG nova.network.neutron [-] [instance: 965c89d3-c13e-442b-8d32-f351bf95dda5] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:32:45 compute-1 nova_compute[225855]: 2026-01-20 14:32:45.781 225859 INFO nova.compute.manager [-] [instance: 965c89d3-c13e-442b-8d32-f351bf95dda5] Took 0.37 seconds to deallocate network for instance.
Jan 20 14:32:45 compute-1 nova_compute[225855]: 2026-01-20 14:32:45.849 225859 DEBUG oslo_concurrency.lockutils [None req-4c37bd00-6c2c-41c6-b3b4-146fc6d36cc7 360feb9a3f0146f0b84b6c28241e41a9 bd830773c45f46e3b6fd28d50255c383 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:32:45 compute-1 nova_compute[225855]: 2026-01-20 14:32:45.850 225859 DEBUG oslo_concurrency.lockutils [None req-4c37bd00-6c2c-41c6-b3b4-146fc6d36cc7 360feb9a3f0146f0b84b6c28241e41a9 bd830773c45f46e3b6fd28d50255c383 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:32:45 compute-1 nova_compute[225855]: 2026-01-20 14:32:45.962 225859 DEBUG oslo_concurrency.processutils [None req-4c37bd00-6c2c-41c6-b3b4-146fc6d36cc7 360feb9a3f0146f0b84b6c28241e41a9 bd830773c45f46e3b6fd28d50255c383 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:32:46 compute-1 nova_compute[225855]: 2026-01-20 14:32:46.077 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:32:46 compute-1 sshd-session[243449]: Connection closed by invalid user user 45.179.5.170 port 49820 [preauth]
Jan 20 14:32:46 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:32:46 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/260108864' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:32:46 compute-1 nova_compute[225855]: 2026-01-20 14:32:46.425 225859 DEBUG oslo_concurrency.processutils [None req-4c37bd00-6c2c-41c6-b3b4-146fc6d36cc7 360feb9a3f0146f0b84b6c28241e41a9 bd830773c45f46e3b6fd28d50255c383 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:32:46 compute-1 nova_compute[225855]: 2026-01-20 14:32:46.431 225859 DEBUG nova.compute.provider_tree [None req-4c37bd00-6c2c-41c6-b3b4-146fc6d36cc7 360feb9a3f0146f0b84b6c28241e41a9 bd830773c45f46e3b6fd28d50255c383 - - default default] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:32:46 compute-1 nova_compute[225855]: 2026-01-20 14:32:46.466 225859 DEBUG nova.scheduler.client.report [None req-4c37bd00-6c2c-41c6-b3b4-146fc6d36cc7 360feb9a3f0146f0b84b6c28241e41a9 bd830773c45f46e3b6fd28d50255c383 - - default default] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:32:46 compute-1 nova_compute[225855]: 2026-01-20 14:32:46.511 225859 DEBUG oslo_concurrency.lockutils [None req-4c37bd00-6c2c-41c6-b3b4-146fc6d36cc7 360feb9a3f0146f0b84b6c28241e41a9 bd830773c45f46e3b6fd28d50255c383 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.660s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:32:46 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:32:46 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:32:46 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:32:46.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:32:46 compute-1 nova_compute[225855]: 2026-01-20 14:32:46.572 225859 INFO nova.scheduler.client.report [None req-4c37bd00-6c2c-41c6-b3b4-146fc6d36cc7 360feb9a3f0146f0b84b6c28241e41a9 bd830773c45f46e3b6fd28d50255c383 - - default default] Deleted allocations for instance 965c89d3-c13e-442b-8d32-f351bf95dda5
Jan 20 14:32:46 compute-1 nova_compute[225855]: 2026-01-20 14:32:46.646 225859 DEBUG oslo_concurrency.lockutils [None req-4c37bd00-6c2c-41c6-b3b4-146fc6d36cc7 360feb9a3f0146f0b84b6c28241e41a9 bd830773c45f46e3b6fd28d50255c383 - - default default] Lock "965c89d3-c13e-442b-8d32-f351bf95dda5" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.857s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:32:46 compute-1 ceph-mon[81775]: pgmap v1265: 321 pgs: 321 active+clean; 68 MiB data, 438 MiB used, 21 GiB / 21 GiB avail; 3.3 MiB/s rd, 1.9 MiB/s wr, 183 op/s
Jan 20 14:32:46 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/260108864' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:32:47 compute-1 podman[243497]: 2026-01-20 14:32:47.024505069 +0000 UTC m=+0.055295564 container health_status 533cd8e0c3b8b7f910f15cef49e7f879ab46c12ce646a35628c15f13725eb500 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Jan 20 14:32:47 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:32:47 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:32:47 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:32:47.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:32:48 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:32:48 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:32:48 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:32:48.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:32:48 compute-1 ceph-mon[81775]: pgmap v1266: 321 pgs: 321 active+clean; 57 MiB data, 431 MiB used, 21 GiB / 21 GiB avail; 3.3 MiB/s rd, 1.8 MiB/s wr, 183 op/s
Jan 20 14:32:49 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:32:49 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:32:49 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:32:49.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:32:49 compute-1 nova_compute[225855]: 2026-01-20 14:32:49.400 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:32:49 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:32:50 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:32:50 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:32:50 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:32:50.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:32:50 compute-1 ceph-mon[81775]: pgmap v1267: 321 pgs: 321 active+clean; 41 MiB data, 428 MiB used, 21 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 140 op/s
Jan 20 14:32:51 compute-1 nova_compute[225855]: 2026-01-20 14:32:51.082 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:32:51 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:32:51 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:32:51 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:32:51.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:32:52 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:32:52 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:32:52 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:32:52.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:32:52 compute-1 ceph-mon[81775]: pgmap v1268: 321 pgs: 321 active+clean; 41 MiB data, 420 MiB used, 21 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.2 MiB/s wr, 127 op/s
Jan 20 14:32:53 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:32:53 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:32:53 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:32:53.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:32:54 compute-1 nova_compute[225855]: 2026-01-20 14:32:54.402 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:32:54 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:32:54 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:32:54 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:32:54.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:32:54 compute-1 ceph-mon[81775]: pgmap v1269: 321 pgs: 321 active+clean; 41 MiB data, 420 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 97 op/s
Jan 20 14:32:54 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:32:55 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:32:55 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:32:55 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:32:55.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:32:55 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/1950850921' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:32:56 compute-1 nova_compute[225855]: 2026-01-20 14:32:56.085 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:32:56 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:32:56 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:32:56 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:32:56.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:32:56 compute-1 ceph-mon[81775]: pgmap v1270: 321 pgs: 321 active+clean; 41 MiB data, 420 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 97 op/s
Jan 20 14:32:57 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:32:57 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:32:57 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:32:57.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:32:57 compute-1 ceph-mon[81775]: pgmap v1271: 321 pgs: 321 active+clean; 41 MiB data, 420 MiB used, 21 GiB / 21 GiB avail; 625 KiB/s rd, 1.2 KiB/s wr, 44 op/s
Jan 20 14:32:58 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:32:58 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:32:58 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:32:58.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:32:59 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:32:59 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:32:59 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:32:59.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:32:59 compute-1 nova_compute[225855]: 2026-01-20 14:32:59.404 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:32:59 compute-1 nova_compute[225855]: 2026-01-20 14:32:59.669 225859 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1768919564.6684756, 965c89d3-c13e-442b-8d32-f351bf95dda5 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:32:59 compute-1 nova_compute[225855]: 2026-01-20 14:32:59.670 225859 INFO nova.compute.manager [-] [instance: 965c89d3-c13e-442b-8d32-f351bf95dda5] VM Stopped (Lifecycle Event)
Jan 20 14:32:59 compute-1 nova_compute[225855]: 2026-01-20 14:32:59.736 225859 DEBUG nova.compute.manager [None req-9501c16f-bf8b-4821-9210-ef25ad4a15b9 - - - - - -] [instance: 965c89d3-c13e-442b-8d32-f351bf95dda5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:32:59 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:33:00 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:33:00 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:33:00 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:33:00.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:33:00 compute-1 ceph-mon[81775]: pgmap v1272: 321 pgs: 321 active+clean; 45 MiB data, 420 MiB used, 21 GiB / 21 GiB avail; 10 KiB/s rd, 11 KiB/s wr, 14 op/s
Jan 20 14:33:01 compute-1 nova_compute[225855]: 2026-01-20 14:33:01.088 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:33:01 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:33:01 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:33:01 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:33:01.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:33:01 compute-1 sudo[243528]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:33:01 compute-1 sudo[243528]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:33:01 compute-1 sudo[243528]: pam_unix(sudo:session): session closed for user root
Jan 20 14:33:01 compute-1 sudo[243553]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:33:01 compute-1 sudo[243553]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:33:01 compute-1 sudo[243553]: pam_unix(sudo:session): session closed for user root
Jan 20 14:33:02 compute-1 ceph-mon[81775]: pgmap v1273: 321 pgs: 321 active+clean; 88 MiB data, 442 MiB used, 21 GiB / 21 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 35 op/s
Jan 20 14:33:02 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:33:02 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:33:02 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:33:02.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:33:03 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:33:03 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:33:03 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:33:03.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:33:03 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/4212914262' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:33:03 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/4157934418' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:33:04 compute-1 nova_compute[225855]: 2026-01-20 14:33:04.406 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:33:04 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:33:04 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:33:04 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:33:04.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:33:04 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:33:04 compute-1 ceph-mon[81775]: pgmap v1274: 321 pgs: 321 active+clean; 88 MiB data, 442 MiB used, 21 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 20 14:33:05 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:33:05 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:33:05 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:33:05.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:33:05 compute-1 ceph-mon[81775]: pgmap v1275: 321 pgs: 321 active+clean; 88 MiB data, 442 MiB used, 21 GiB / 21 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Jan 20 14:33:06 compute-1 nova_compute[225855]: 2026-01-20 14:33:06.092 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:33:06 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:33:06 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:33:06 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:33:06.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:33:07 compute-1 podman[243580]: 2026-01-20 14:33:07.041094848 +0000 UTC m=+0.085486230 container health_status 72be95c12e041eb2cd1a16ad70dfa1391d9f03a5f3180a614ead9459dcfcd46f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Jan 20 14:33:07 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:33:07 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:33:07 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:33:07.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:33:08 compute-1 nova_compute[225855]: 2026-01-20 14:33:08.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:33:08 compute-1 ceph-mon[81775]: pgmap v1276: 321 pgs: 321 active+clean; 88 MiB data, 442 MiB used, 21 GiB / 21 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 34 op/s
Jan 20 14:33:08 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:33:08 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:33:08 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:33:08.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:33:08 compute-1 ovn_controller[130490]: 2026-01-20T14:33:08Z|00136|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Jan 20 14:33:09 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:33:09 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:33:09 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:33:09.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:33:09 compute-1 nova_compute[225855]: 2026-01-20 14:33:09.408 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:33:09 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:33:10 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:33:10 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:33:10 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:33:10.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:33:10 compute-1 ceph-mon[81775]: pgmap v1277: 321 pgs: 321 active+clean; 88 MiB data, 442 MiB used, 21 GiB / 21 GiB avail; 38 KiB/s rd, 1.8 MiB/s wr, 34 op/s
Jan 20 14:33:11 compute-1 nova_compute[225855]: 2026-01-20 14:33:11.096 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:33:11 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:33:11 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:33:11 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:33:11.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:33:12 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:33:12 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:33:12 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:33:12.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:33:12 compute-1 ceph-mon[81775]: pgmap v1278: 321 pgs: 321 active+clean; 88 MiB data, 442 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Jan 20 14:33:13 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:33:13 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:33:13 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:33:13.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:33:13 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/2650109795' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 14:33:13 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/2650109795' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 14:33:14 compute-1 nova_compute[225855]: 2026-01-20 14:33:14.410 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:33:14 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:33:14 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:33:14 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:33:14.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:33:14 compute-1 ceph-mon[81775]: pgmap v1279: 321 pgs: 321 active+clean; 88 MiB data, 442 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 73 op/s
Jan 20 14:33:14 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:33:15 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:33:15 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:33:15 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:33:15.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:33:15 compute-1 ceph-mon[81775]: pgmap v1280: 321 pgs: 321 active+clean; 88 MiB data, 442 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 73 op/s
Jan 20 14:33:16 compute-1 nova_compute[225855]: 2026-01-20 14:33:16.099 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:33:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:33:16.392 140354 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:33:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:33:16.393 140354 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:33:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:33:16.394 140354 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:33:16 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:33:16 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:33:16 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:33:16.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:33:17 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:33:17 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:33:17 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:33:17.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:33:17 compute-1 nova_compute[225855]: 2026-01-20 14:33:17.372 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:33:18 compute-1 podman[243611]: 2026-01-20 14:33:18.038005116 +0000 UTC m=+0.083960668 container health_status 533cd8e0c3b8b7f910f15cef49e7f879ab46c12ce646a35628c15f13725eb500 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 20 14:33:18 compute-1 nova_compute[225855]: 2026-01-20 14:33:18.340 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:33:18 compute-1 nova_compute[225855]: 2026-01-20 14:33:18.340 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 14:33:18 compute-1 nova_compute[225855]: 2026-01-20 14:33:18.340 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 20 14:33:18 compute-1 nova_compute[225855]: 2026-01-20 14:33:18.357 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 20 14:33:18 compute-1 nova_compute[225855]: 2026-01-20 14:33:18.357 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:33:18 compute-1 nova_compute[225855]: 2026-01-20 14:33:18.358 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 14:33:18 compute-1 ceph-mon[81775]: pgmap v1281: 321 pgs: 321 active+clean; 88 MiB data, 442 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 72 op/s
Jan 20 14:33:18 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:33:18 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:33:18 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:33:18.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:33:19 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:33:19 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:33:19 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:33:19.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:33:19 compute-1 nova_compute[225855]: 2026-01-20 14:33:19.340 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:33:19 compute-1 nova_compute[225855]: 2026-01-20 14:33:19.412 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:33:19 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:33:20 compute-1 nova_compute[225855]: 2026-01-20 14:33:20.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:33:20 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:33:20 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:33:20 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:33:20.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:33:20 compute-1 ceph-mon[81775]: pgmap v1282: 321 pgs: 321 active+clean; 96 MiB data, 448 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 560 KiB/s wr, 78 op/s
Jan 20 14:33:20 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/1830974497' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:33:20 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e166 e166: 3 total, 3 up, 3 in
Jan 20 14:33:21 compute-1 nova_compute[225855]: 2026-01-20 14:33:21.102 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:33:21 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:33:21 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:33:21 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:33:21.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:33:21 compute-1 nova_compute[225855]: 2026-01-20 14:33:21.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:33:21 compute-1 nova_compute[225855]: 2026-01-20 14:33:21.411 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:33:21 compute-1 nova_compute[225855]: 2026-01-20 14:33:21.411 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:33:21 compute-1 nova_compute[225855]: 2026-01-20 14:33:21.411 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:33:21 compute-1 nova_compute[225855]: 2026-01-20 14:33:21.412 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 14:33:21 compute-1 nova_compute[225855]: 2026-01-20 14:33:21.412 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:33:21 compute-1 ceph-mon[81775]: osdmap e166: 3 total, 3 up, 3 in
Jan 20 14:33:21 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:33:21 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1270072728' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:33:21 compute-1 nova_compute[225855]: 2026-01-20 14:33:21.840 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:33:21 compute-1 sudo[243656]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:33:21 compute-1 sudo[243656]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:33:21 compute-1 sudo[243656]: pam_unix(sudo:session): session closed for user root
Jan 20 14:33:21 compute-1 sudo[243682]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:33:21 compute-1 sudo[243682]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:33:21 compute-1 sudo[243682]: pam_unix(sudo:session): session closed for user root
Jan 20 14:33:22 compute-1 nova_compute[225855]: 2026-01-20 14:33:22.035 225859 WARNING nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 14:33:22 compute-1 nova_compute[225855]: 2026-01-20 14:33:22.036 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4747MB free_disk=20.94301986694336GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 14:33:22 compute-1 nova_compute[225855]: 2026-01-20 14:33:22.037 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:33:22 compute-1 nova_compute[225855]: 2026-01-20 14:33:22.037 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:33:22 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:33:22.170 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=15, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '12:bb:42', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '06:92:24:f7:15:56'}, ipsec=False) old=SB_Global(nb_cfg=14) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:33:22 compute-1 nova_compute[225855]: 2026-01-20 14:33:22.170 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:33:22 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:33:22.171 140354 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 20 14:33:22 compute-1 nova_compute[225855]: 2026-01-20 14:33:22.572 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 14:33:22 compute-1 nova_compute[225855]: 2026-01-20 14:33:22.573 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 14:33:22 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:33:22 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:33:22 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:33:22.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:33:22 compute-1 ceph-mon[81775]: pgmap v1284: 321 pgs: 321 active+clean; 114 MiB data, 471 MiB used, 21 GiB / 21 GiB avail; 367 KiB/s rd, 2.5 MiB/s wr, 72 op/s
Jan 20 14:33:22 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/1270072728' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:33:22 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/1496720829' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:33:22 compute-1 nova_compute[225855]: 2026-01-20 14:33:22.808 225859 DEBUG nova.scheduler.client.report [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Refreshing inventories for resource provider bbb02880-a710-4ac1-8b2c-5c09765848d1 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 20 14:33:22 compute-1 nova_compute[225855]: 2026-01-20 14:33:22.828 225859 DEBUG nova.scheduler.client.report [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Updating ProviderTree inventory for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 20 14:33:22 compute-1 nova_compute[225855]: 2026-01-20 14:33:22.828 225859 DEBUG nova.compute.provider_tree [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Updating inventory in ProviderTree for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 20 14:33:22 compute-1 nova_compute[225855]: 2026-01-20 14:33:22.857 225859 DEBUG nova.scheduler.client.report [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Refreshing aggregate associations for resource provider bbb02880-a710-4ac1-8b2c-5c09765848d1, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 20 14:33:22 compute-1 nova_compute[225855]: 2026-01-20 14:33:22.905 225859 DEBUG nova.scheduler.client.report [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Refreshing trait associations for resource provider bbb02880-a710-4ac1-8b2c-5c09765848d1, traits: COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_STORAGE_BUS_SATA,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_TRUSTED_CERTS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSSE3,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_VOLUME_EXTEND,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_RESCUE_BFV,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_USB,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE42,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SOCKET_PCI_NUMA_AFFINITY _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 20 14:33:22 compute-1 nova_compute[225855]: 2026-01-20 14:33:22.927 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:33:23 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:33:23 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:33:23 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:33:23.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:33:23 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:33:23 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4270915150' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:33:23 compute-1 nova_compute[225855]: 2026-01-20 14:33:23.355 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:33:23 compute-1 nova_compute[225855]: 2026-01-20 14:33:23.362 225859 DEBUG nova.compute.provider_tree [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:33:23 compute-1 nova_compute[225855]: 2026-01-20 14:33:23.392 225859 DEBUG nova.scheduler.client.report [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:33:23 compute-1 nova_compute[225855]: 2026-01-20 14:33:23.418 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 14:33:23 compute-1 nova_compute[225855]: 2026-01-20 14:33:23.419 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.382s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:33:23 compute-1 nova_compute[225855]: 2026-01-20 14:33:23.420 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:33:23 compute-1 nova_compute[225855]: 2026-01-20 14:33:23.421 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 20 14:33:23 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/4270915150' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:33:24 compute-1 nova_compute[225855]: 2026-01-20 14:33:24.360 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:33:24 compute-1 nova_compute[225855]: 2026-01-20 14:33:24.361 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:33:24 compute-1 nova_compute[225855]: 2026-01-20 14:33:24.361 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:33:24 compute-1 nova_compute[225855]: 2026-01-20 14:33:24.362 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 20 14:33:24 compute-1 nova_compute[225855]: 2026-01-20 14:33:24.386 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 20 14:33:24 compute-1 nova_compute[225855]: 2026-01-20 14:33:24.414 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:33:24 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:33:24 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:33:24 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:33:24.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:33:24 compute-1 ceph-mon[81775]: pgmap v1285: 321 pgs: 321 active+clean; 114 MiB data, 471 MiB used, 21 GiB / 21 GiB avail; 367 KiB/s rd, 2.5 MiB/s wr, 72 op/s
Jan 20 14:33:24 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e167 e167: 3 total, 3 up, 3 in
Jan 20 14:33:24 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:33:25 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:33:25 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:33:25 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:33:25.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:33:25 compute-1 nova_compute[225855]: 2026-01-20 14:33:25.361 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:33:25 compute-1 ceph-mon[81775]: osdmap e167: 3 total, 3 up, 3 in
Jan 20 14:33:25 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/2112737495' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:33:25 compute-1 ceph-mon[81775]: pgmap v1287: 321 pgs: 321 active+clean; 121 MiB data, 471 MiB used, 21 GiB / 21 GiB avail; 545 KiB/s rd, 3.2 MiB/s wr, 118 op/s
Jan 20 14:33:26 compute-1 nova_compute[225855]: 2026-01-20 14:33:26.105 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:33:26 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:33:26 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:33:26 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:33:26.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:33:27 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:33:27 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:33:27 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:33:27.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:33:27 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/1012833942' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:33:28 compute-1 sshd-session[243609]: ssh_dispatch_run_fatal: Connection from 45.179.5.170 port 36274: Connection timed out [preauth]
Jan 20 14:33:28 compute-1 ceph-mon[81775]: pgmap v1288: 321 pgs: 321 active+clean; 121 MiB data, 471 MiB used, 21 GiB / 21 GiB avail; 511 KiB/s rd, 2.4 MiB/s wr, 103 op/s
Jan 20 14:33:28 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:33:28 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:33:28 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:33:28.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:33:29 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:33:29 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:33:29 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:33:29.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:33:29 compute-1 nova_compute[225855]: 2026-01-20 14:33:29.415 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:33:29 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:33:30 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:33:30 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/659662857' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:33:30 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:33:30 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:33:30 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:33:30.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:33:30 compute-1 ceph-mon[81775]: pgmap v1289: 321 pgs: 321 active+clean; 121 MiB data, 471 MiB used, 21 GiB / 21 GiB avail; 113 KiB/s rd, 379 KiB/s wr, 68 op/s
Jan 20 14:33:30 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/659662857' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:33:31 compute-1 nova_compute[225855]: 2026-01-20 14:33:31.109 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:33:31 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:33:31.172 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5ffd4ac3-9266-4927-98ad-20a17782c725, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '15'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:33:31 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:33:31 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:33:31 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:33:31.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:33:31 compute-1 ceph-mon[81775]: pgmap v1290: 321 pgs: 321 active+clean; 121 MiB data, 471 MiB used, 21 GiB / 21 GiB avail; 88 KiB/s rd, 38 KiB/s wr, 48 op/s
Jan 20 14:33:32 compute-1 nova_compute[225855]: 2026-01-20 14:33:32.024 225859 DEBUG oslo_concurrency.lockutils [None req-36c56da3-e6dd-43b3-8db2-5f7143100150 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Acquiring lock "2ec7b07d-b593-46b7-9751-b6116e4d2cec" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:33:32 compute-1 nova_compute[225855]: 2026-01-20 14:33:32.024 225859 DEBUG oslo_concurrency.lockutils [None req-36c56da3-e6dd-43b3-8db2-5f7143100150 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Lock "2ec7b07d-b593-46b7-9751-b6116e4d2cec" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:33:32 compute-1 nova_compute[225855]: 2026-01-20 14:33:32.051 225859 DEBUG nova.compute.manager [None req-36c56da3-e6dd-43b3-8db2-5f7143100150 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] [instance: 2ec7b07d-b593-46b7-9751-b6116e4d2cec] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 20 14:33:32 compute-1 nova_compute[225855]: 2026-01-20 14:33:32.162 225859 DEBUG oslo_concurrency.lockutils [None req-36c56da3-e6dd-43b3-8db2-5f7143100150 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:33:32 compute-1 nova_compute[225855]: 2026-01-20 14:33:32.162 225859 DEBUG oslo_concurrency.lockutils [None req-36c56da3-e6dd-43b3-8db2-5f7143100150 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:33:32 compute-1 nova_compute[225855]: 2026-01-20 14:33:32.168 225859 DEBUG nova.virt.hardware [None req-36c56da3-e6dd-43b3-8db2-5f7143100150 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 20 14:33:32 compute-1 nova_compute[225855]: 2026-01-20 14:33:32.169 225859 INFO nova.compute.claims [None req-36c56da3-e6dd-43b3-8db2-5f7143100150 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] [instance: 2ec7b07d-b593-46b7-9751-b6116e4d2cec] Claim successful on node compute-1.ctlplane.example.com
Jan 20 14:33:32 compute-1 nova_compute[225855]: 2026-01-20 14:33:32.307 225859 DEBUG oslo_concurrency.processutils [None req-36c56da3-e6dd-43b3-8db2-5f7143100150 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:33:32 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e168 e168: 3 total, 3 up, 3 in
Jan 20 14:33:32 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:33:32 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:33:32 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:33:32.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:33:32 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:33:32 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3926106385' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:33:32 compute-1 nova_compute[225855]: 2026-01-20 14:33:32.778 225859 DEBUG oslo_concurrency.processutils [None req-36c56da3-e6dd-43b3-8db2-5f7143100150 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:33:32 compute-1 nova_compute[225855]: 2026-01-20 14:33:32.786 225859 DEBUG nova.compute.provider_tree [None req-36c56da3-e6dd-43b3-8db2-5f7143100150 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:33:32 compute-1 nova_compute[225855]: 2026-01-20 14:33:32.816 225859 DEBUG nova.scheduler.client.report [None req-36c56da3-e6dd-43b3-8db2-5f7143100150 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:33:32 compute-1 nova_compute[225855]: 2026-01-20 14:33:32.845 225859 DEBUG oslo_concurrency.lockutils [None req-36c56da3-e6dd-43b3-8db2-5f7143100150 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.683s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:33:32 compute-1 nova_compute[225855]: 2026-01-20 14:33:32.846 225859 DEBUG nova.compute.manager [None req-36c56da3-e6dd-43b3-8db2-5f7143100150 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] [instance: 2ec7b07d-b593-46b7-9751-b6116e4d2cec] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 20 14:33:32 compute-1 nova_compute[225855]: 2026-01-20 14:33:32.953 225859 DEBUG nova.compute.manager [None req-36c56da3-e6dd-43b3-8db2-5f7143100150 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] [instance: 2ec7b07d-b593-46b7-9751-b6116e4d2cec] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 20 14:33:32 compute-1 nova_compute[225855]: 2026-01-20 14:33:32.953 225859 DEBUG nova.network.neutron [None req-36c56da3-e6dd-43b3-8db2-5f7143100150 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] [instance: 2ec7b07d-b593-46b7-9751-b6116e4d2cec] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 20 14:33:32 compute-1 nova_compute[225855]: 2026-01-20 14:33:32.983 225859 INFO nova.virt.libvirt.driver [None req-36c56da3-e6dd-43b3-8db2-5f7143100150 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] [instance: 2ec7b07d-b593-46b7-9751-b6116e4d2cec] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 20 14:33:33 compute-1 nova_compute[225855]: 2026-01-20 14:33:33.004 225859 DEBUG nova.compute.manager [None req-36c56da3-e6dd-43b3-8db2-5f7143100150 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] [instance: 2ec7b07d-b593-46b7-9751-b6116e4d2cec] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 20 14:33:33 compute-1 nova_compute[225855]: 2026-01-20 14:33:33.116 225859 DEBUG nova.compute.manager [None req-36c56da3-e6dd-43b3-8db2-5f7143100150 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] [instance: 2ec7b07d-b593-46b7-9751-b6116e4d2cec] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 20 14:33:33 compute-1 nova_compute[225855]: 2026-01-20 14:33:33.118 225859 DEBUG nova.virt.libvirt.driver [None req-36c56da3-e6dd-43b3-8db2-5f7143100150 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] [instance: 2ec7b07d-b593-46b7-9751-b6116e4d2cec] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 20 14:33:33 compute-1 nova_compute[225855]: 2026-01-20 14:33:33.119 225859 INFO nova.virt.libvirt.driver [None req-36c56da3-e6dd-43b3-8db2-5f7143100150 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] [instance: 2ec7b07d-b593-46b7-9751-b6116e4d2cec] Creating image(s)
Jan 20 14:33:33 compute-1 nova_compute[225855]: 2026-01-20 14:33:33.225 225859 DEBUG nova.storage.rbd_utils [None req-36c56da3-e6dd-43b3-8db2-5f7143100150 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] rbd image 2ec7b07d-b593-46b7-9751-b6116e4d2cec_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:33:33 compute-1 nova_compute[225855]: 2026-01-20 14:33:33.254 225859 DEBUG nova.storage.rbd_utils [None req-36c56da3-e6dd-43b3-8db2-5f7143100150 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] rbd image 2ec7b07d-b593-46b7-9751-b6116e4d2cec_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:33:33 compute-1 nova_compute[225855]: 2026-01-20 14:33:33.288 225859 DEBUG nova.storage.rbd_utils [None req-36c56da3-e6dd-43b3-8db2-5f7143100150 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] rbd image 2ec7b07d-b593-46b7-9751-b6116e4d2cec_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:33:33 compute-1 nova_compute[225855]: 2026-01-20 14:33:33.292 225859 DEBUG oslo_concurrency.processutils [None req-36c56da3-e6dd-43b3-8db2-5f7143100150 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:33:33 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:33:33 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:33:33 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:33:33.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:33:33 compute-1 nova_compute[225855]: 2026-01-20 14:33:33.374 225859 DEBUG nova.policy [None req-36c56da3-e6dd-43b3-8db2-5f7143100150 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'f51c395107c84dbd9067113b84ff01dd', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a841e7a1434c488390475174e10bc161', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 20 14:33:33 compute-1 nova_compute[225855]: 2026-01-20 14:33:33.391 225859 DEBUG oslo_concurrency.processutils [None req-36c56da3-e6dd-43b3-8db2-5f7143100150 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json" returned: 0 in 0.099s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:33:33 compute-1 nova_compute[225855]: 2026-01-20 14:33:33.392 225859 DEBUG oslo_concurrency.lockutils [None req-36c56da3-e6dd-43b3-8db2-5f7143100150 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Acquiring lock "82d5c1918fd7c974214c7a48c1793a7a82560462" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:33:33 compute-1 nova_compute[225855]: 2026-01-20 14:33:33.392 225859 DEBUG oslo_concurrency.lockutils [None req-36c56da3-e6dd-43b3-8db2-5f7143100150 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:33:33 compute-1 nova_compute[225855]: 2026-01-20 14:33:33.393 225859 DEBUG oslo_concurrency.lockutils [None req-36c56da3-e6dd-43b3-8db2-5f7143100150 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:33:33 compute-1 nova_compute[225855]: 2026-01-20 14:33:33.417 225859 DEBUG nova.storage.rbd_utils [None req-36c56da3-e6dd-43b3-8db2-5f7143100150 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] rbd image 2ec7b07d-b593-46b7-9751-b6116e4d2cec_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:33:33 compute-1 nova_compute[225855]: 2026-01-20 14:33:33.420 225859 DEBUG oslo_concurrency.processutils [None req-36c56da3-e6dd-43b3-8db2-5f7143100150 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 2ec7b07d-b593-46b7-9751-b6116e4d2cec_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:33:33 compute-1 ceph-mon[81775]: osdmap e168: 3 total, 3 up, 3 in
Jan 20 14:33:33 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/3926106385' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:33:33 compute-1 nova_compute[225855]: 2026-01-20 14:33:33.864 225859 DEBUG oslo_concurrency.processutils [None req-36c56da3-e6dd-43b3-8db2-5f7143100150 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 2ec7b07d-b593-46b7-9751-b6116e4d2cec_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:33:33 compute-1 nova_compute[225855]: 2026-01-20 14:33:33.975 225859 DEBUG nova.storage.rbd_utils [None req-36c56da3-e6dd-43b3-8db2-5f7143100150 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] resizing rbd image 2ec7b07d-b593-46b7-9751-b6116e4d2cec_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 20 14:33:34 compute-1 nova_compute[225855]: 2026-01-20 14:33:34.105 225859 DEBUG nova.objects.instance [None req-36c56da3-e6dd-43b3-8db2-5f7143100150 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Lazy-loading 'migration_context' on Instance uuid 2ec7b07d-b593-46b7-9751-b6116e4d2cec obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:33:34 compute-1 nova_compute[225855]: 2026-01-20 14:33:34.122 225859 DEBUG nova.virt.libvirt.driver [None req-36c56da3-e6dd-43b3-8db2-5f7143100150 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] [instance: 2ec7b07d-b593-46b7-9751-b6116e4d2cec] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 20 14:33:34 compute-1 nova_compute[225855]: 2026-01-20 14:33:34.123 225859 DEBUG nova.virt.libvirt.driver [None req-36c56da3-e6dd-43b3-8db2-5f7143100150 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] [instance: 2ec7b07d-b593-46b7-9751-b6116e4d2cec] Ensure instance console log exists: /var/lib/nova/instances/2ec7b07d-b593-46b7-9751-b6116e4d2cec/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 20 14:33:34 compute-1 nova_compute[225855]: 2026-01-20 14:33:34.124 225859 DEBUG oslo_concurrency.lockutils [None req-36c56da3-e6dd-43b3-8db2-5f7143100150 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:33:34 compute-1 nova_compute[225855]: 2026-01-20 14:33:34.124 225859 DEBUG oslo_concurrency.lockutils [None req-36c56da3-e6dd-43b3-8db2-5f7143100150 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:33:34 compute-1 nova_compute[225855]: 2026-01-20 14:33:34.124 225859 DEBUG oslo_concurrency.lockutils [None req-36c56da3-e6dd-43b3-8db2-5f7143100150 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:33:34 compute-1 nova_compute[225855]: 2026-01-20 14:33:34.417 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:33:34 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:33:34 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:33:34 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:33:34.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:33:34 compute-1 nova_compute[225855]: 2026-01-20 14:33:34.674 225859 DEBUG nova.network.neutron [None req-36c56da3-e6dd-43b3-8db2-5f7143100150 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] [instance: 2ec7b07d-b593-46b7-9751-b6116e4d2cec] Successfully created port: 73e232f9-3860-4b9a-9cec-535fa2fb0c9f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 20 14:33:34 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e168 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:33:35 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:33:35 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:33:35 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:33:35.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:33:36 compute-1 ceph-mon[81775]: pgmap v1292: 321 pgs: 321 active+clean; 121 MiB data, 471 MiB used, 21 GiB / 21 GiB avail; 74 KiB/s rd, 20 KiB/s wr, 36 op/s
Jan 20 14:33:36 compute-1 nova_compute[225855]: 2026-01-20 14:33:36.112 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:33:36 compute-1 nova_compute[225855]: 2026-01-20 14:33:36.210 225859 DEBUG nova.network.neutron [None req-36c56da3-e6dd-43b3-8db2-5f7143100150 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] [instance: 2ec7b07d-b593-46b7-9751-b6116e4d2cec] Successfully updated port: 73e232f9-3860-4b9a-9cec-535fa2fb0c9f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 20 14:33:36 compute-1 nova_compute[225855]: 2026-01-20 14:33:36.234 225859 DEBUG oslo_concurrency.lockutils [None req-36c56da3-e6dd-43b3-8db2-5f7143100150 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Acquiring lock "refresh_cache-2ec7b07d-b593-46b7-9751-b6116e4d2cec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:33:36 compute-1 nova_compute[225855]: 2026-01-20 14:33:36.234 225859 DEBUG oslo_concurrency.lockutils [None req-36c56da3-e6dd-43b3-8db2-5f7143100150 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Acquired lock "refresh_cache-2ec7b07d-b593-46b7-9751-b6116e4d2cec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:33:36 compute-1 nova_compute[225855]: 2026-01-20 14:33:36.234 225859 DEBUG nova.network.neutron [None req-36c56da3-e6dd-43b3-8db2-5f7143100150 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] [instance: 2ec7b07d-b593-46b7-9751-b6116e4d2cec] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 14:33:36 compute-1 nova_compute[225855]: 2026-01-20 14:33:36.495 225859 DEBUG nova.compute.manager [req-691c09c7-5da3-4b8f-b87d-e0cc6eca78cf req-1e9ac1fd-0c15-4f3e-9cb9-078609bea9a9 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 2ec7b07d-b593-46b7-9751-b6116e4d2cec] Received event network-changed-73e232f9-3860-4b9a-9cec-535fa2fb0c9f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:33:36 compute-1 nova_compute[225855]: 2026-01-20 14:33:36.496 225859 DEBUG nova.compute.manager [req-691c09c7-5da3-4b8f-b87d-e0cc6eca78cf req-1e9ac1fd-0c15-4f3e-9cb9-078609bea9a9 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 2ec7b07d-b593-46b7-9751-b6116e4d2cec] Refreshing instance network info cache due to event network-changed-73e232f9-3860-4b9a-9cec-535fa2fb0c9f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 14:33:36 compute-1 nova_compute[225855]: 2026-01-20 14:33:36.496 225859 DEBUG oslo_concurrency.lockutils [req-691c09c7-5da3-4b8f-b87d-e0cc6eca78cf req-1e9ac1fd-0c15-4f3e-9cb9-078609bea9a9 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-2ec7b07d-b593-46b7-9751-b6116e4d2cec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:33:36 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:33:36 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:33:36 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:33:36.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:33:36 compute-1 nova_compute[225855]: 2026-01-20 14:33:36.615 225859 DEBUG nova.network.neutron [None req-36c56da3-e6dd-43b3-8db2-5f7143100150 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] [instance: 2ec7b07d-b593-46b7-9751-b6116e4d2cec] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 20 14:33:37 compute-1 ceph-mon[81775]: pgmap v1293: 321 pgs: 321 active+clean; 150 MiB data, 491 MiB used, 21 GiB / 21 GiB avail; 41 KiB/s rd, 1.3 MiB/s wr, 57 op/s
Jan 20 14:33:37 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/2118206802' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:33:37 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:33:37 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:33:37 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:33:37.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:33:38 compute-1 podman[243925]: 2026-01-20 14:33:38.072801045 +0000 UTC m=+0.111618084 container health_status 72be95c12e041eb2cd1a16ad70dfa1391d9f03a5f3180a614ead9459dcfcd46f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Jan 20 14:33:38 compute-1 sudo[243938]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:33:38 compute-1 sudo[243938]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:33:38 compute-1 sudo[243938]: pam_unix(sudo:session): session closed for user root
Jan 20 14:33:38 compute-1 nova_compute[225855]: 2026-01-20 14:33:38.145 225859 DEBUG nova.network.neutron [None req-36c56da3-e6dd-43b3-8db2-5f7143100150 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] [instance: 2ec7b07d-b593-46b7-9751-b6116e4d2cec] Updating instance_info_cache with network_info: [{"id": "73e232f9-3860-4b9a-9cec-535fa2fb0c9f", "address": "fa:16:3e:17:6a:15", "network": {"id": "33c9a20a-d976-42a8-b8bf-f83ddfc97c9a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-202342440-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a841e7a1434c488390475174e10bc161", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap73e232f9-38", "ovs_interfaceid": "73e232f9-3860-4b9a-9cec-535fa2fb0c9f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:33:38 compute-1 sudo[243977]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:33:38 compute-1 sudo[243977]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:33:38 compute-1 sudo[243977]: pam_unix(sudo:session): session closed for user root
Jan 20 14:33:38 compute-1 nova_compute[225855]: 2026-01-20 14:33:38.187 225859 DEBUG oslo_concurrency.lockutils [None req-36c56da3-e6dd-43b3-8db2-5f7143100150 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Releasing lock "refresh_cache-2ec7b07d-b593-46b7-9751-b6116e4d2cec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:33:38 compute-1 nova_compute[225855]: 2026-01-20 14:33:38.187 225859 DEBUG nova.compute.manager [None req-36c56da3-e6dd-43b3-8db2-5f7143100150 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] [instance: 2ec7b07d-b593-46b7-9751-b6116e4d2cec] Instance network_info: |[{"id": "73e232f9-3860-4b9a-9cec-535fa2fb0c9f", "address": "fa:16:3e:17:6a:15", "network": {"id": "33c9a20a-d976-42a8-b8bf-f83ddfc97c9a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-202342440-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a841e7a1434c488390475174e10bc161", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap73e232f9-38", "ovs_interfaceid": "73e232f9-3860-4b9a-9cec-535fa2fb0c9f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 20 14:33:38 compute-1 nova_compute[225855]: 2026-01-20 14:33:38.188 225859 DEBUG oslo_concurrency.lockutils [req-691c09c7-5da3-4b8f-b87d-e0cc6eca78cf req-1e9ac1fd-0c15-4f3e-9cb9-078609bea9a9 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-2ec7b07d-b593-46b7-9751-b6116e4d2cec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:33:38 compute-1 nova_compute[225855]: 2026-01-20 14:33:38.188 225859 DEBUG nova.network.neutron [req-691c09c7-5da3-4b8f-b87d-e0cc6eca78cf req-1e9ac1fd-0c15-4f3e-9cb9-078609bea9a9 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 2ec7b07d-b593-46b7-9751-b6116e4d2cec] Refreshing network info cache for port 73e232f9-3860-4b9a-9cec-535fa2fb0c9f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 14:33:38 compute-1 nova_compute[225855]: 2026-01-20 14:33:38.191 225859 DEBUG nova.virt.libvirt.driver [None req-36c56da3-e6dd-43b3-8db2-5f7143100150 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] [instance: 2ec7b07d-b593-46b7-9751-b6116e4d2cec] Start _get_guest_xml network_info=[{"id": "73e232f9-3860-4b9a-9cec-535fa2fb0c9f", "address": "fa:16:3e:17:6a:15", "network": {"id": "33c9a20a-d976-42a8-b8bf-f83ddfc97c9a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-202342440-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a841e7a1434c488390475174e10bc161", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap73e232f9-38", "ovs_interfaceid": "73e232f9-3860-4b9a-9cec-535fa2fb0c9f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'device_type': 'disk', 'encryption_options': None, 'size': 0, 'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'guest_format': None, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'image_id': 'a32b3e07-16d8-46fd-9a7b-c242c432fcf9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 20 14:33:38 compute-1 nova_compute[225855]: 2026-01-20 14:33:38.196 225859 WARNING nova.virt.libvirt.driver [None req-36c56da3-e6dd-43b3-8db2-5f7143100150 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 14:33:38 compute-1 nova_compute[225855]: 2026-01-20 14:33:38.202 225859 DEBUG nova.virt.libvirt.host [None req-36c56da3-e6dd-43b3-8db2-5f7143100150 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 20 14:33:38 compute-1 nova_compute[225855]: 2026-01-20 14:33:38.203 225859 DEBUG nova.virt.libvirt.host [None req-36c56da3-e6dd-43b3-8db2-5f7143100150 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 20 14:33:38 compute-1 nova_compute[225855]: 2026-01-20 14:33:38.206 225859 DEBUG nova.virt.libvirt.host [None req-36c56da3-e6dd-43b3-8db2-5f7143100150 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 20 14:33:38 compute-1 nova_compute[225855]: 2026-01-20 14:33:38.207 225859 DEBUG nova.virt.libvirt.host [None req-36c56da3-e6dd-43b3-8db2-5f7143100150 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 20 14:33:38 compute-1 nova_compute[225855]: 2026-01-20 14:33:38.209 225859 DEBUG nova.virt.libvirt.driver [None req-36c56da3-e6dd-43b3-8db2-5f7143100150 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 20 14:33:38 compute-1 nova_compute[225855]: 2026-01-20 14:33:38.209 225859 DEBUG nova.virt.hardware [None req-36c56da3-e6dd-43b3-8db2-5f7143100150 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-20T14:21:55Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='522deaab-a741-4dbb-932d-d8b13a211c33',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 20 14:33:38 compute-1 nova_compute[225855]: 2026-01-20 14:33:38.209 225859 DEBUG nova.virt.hardware [None req-36c56da3-e6dd-43b3-8db2-5f7143100150 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 20 14:33:38 compute-1 nova_compute[225855]: 2026-01-20 14:33:38.210 225859 DEBUG nova.virt.hardware [None req-36c56da3-e6dd-43b3-8db2-5f7143100150 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 20 14:33:38 compute-1 nova_compute[225855]: 2026-01-20 14:33:38.210 225859 DEBUG nova.virt.hardware [None req-36c56da3-e6dd-43b3-8db2-5f7143100150 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 20 14:33:38 compute-1 nova_compute[225855]: 2026-01-20 14:33:38.210 225859 DEBUG nova.virt.hardware [None req-36c56da3-e6dd-43b3-8db2-5f7143100150 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 20 14:33:38 compute-1 nova_compute[225855]: 2026-01-20 14:33:38.210 225859 DEBUG nova.virt.hardware [None req-36c56da3-e6dd-43b3-8db2-5f7143100150 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 20 14:33:38 compute-1 nova_compute[225855]: 2026-01-20 14:33:38.211 225859 DEBUG nova.virt.hardware [None req-36c56da3-e6dd-43b3-8db2-5f7143100150 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 20 14:33:38 compute-1 nova_compute[225855]: 2026-01-20 14:33:38.211 225859 DEBUG nova.virt.hardware [None req-36c56da3-e6dd-43b3-8db2-5f7143100150 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 20 14:33:38 compute-1 nova_compute[225855]: 2026-01-20 14:33:38.211 225859 DEBUG nova.virt.hardware [None req-36c56da3-e6dd-43b3-8db2-5f7143100150 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 20 14:33:38 compute-1 nova_compute[225855]: 2026-01-20 14:33:38.212 225859 DEBUG nova.virt.hardware [None req-36c56da3-e6dd-43b3-8db2-5f7143100150 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 20 14:33:38 compute-1 nova_compute[225855]: 2026-01-20 14:33:38.212 225859 DEBUG nova.virt.hardware [None req-36c56da3-e6dd-43b3-8db2-5f7143100150 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 20 14:33:38 compute-1 nova_compute[225855]: 2026-01-20 14:33:38.215 225859 DEBUG oslo_concurrency.processutils [None req-36c56da3-e6dd-43b3-8db2-5f7143100150 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:33:38 compute-1 sudo[244003]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:33:38 compute-1 sudo[244003]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:33:38 compute-1 sudo[244003]: pam_unix(sudo:session): session closed for user root
Jan 20 14:33:38 compute-1 sudo[244029]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 20 14:33:38 compute-1 sudo[244029]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:33:38 compute-1 ceph-mon[81775]: pgmap v1294: 321 pgs: 321 active+clean; 156 MiB data, 492 MiB used, 21 GiB / 21 GiB avail; 41 KiB/s rd, 1.5 MiB/s wr, 60 op/s
Jan 20 14:33:38 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:33:38 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:33:38 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:33:38.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:33:38 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:33:38 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/246388539' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:33:38 compute-1 nova_compute[225855]: 2026-01-20 14:33:38.662 225859 DEBUG oslo_concurrency.processutils [None req-36c56da3-e6dd-43b3-8db2-5f7143100150 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:33:38 compute-1 nova_compute[225855]: 2026-01-20 14:33:38.688 225859 DEBUG nova.storage.rbd_utils [None req-36c56da3-e6dd-43b3-8db2-5f7143100150 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] rbd image 2ec7b07d-b593-46b7-9751-b6116e4d2cec_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:33:38 compute-1 nova_compute[225855]: 2026-01-20 14:33:38.691 225859 DEBUG oslo_concurrency.processutils [None req-36c56da3-e6dd-43b3-8db2-5f7143100150 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:33:38 compute-1 sudo[244029]: pam_unix(sudo:session): session closed for user root
Jan 20 14:33:39 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:33:39 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1858393990' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:33:39 compute-1 nova_compute[225855]: 2026-01-20 14:33:39.146 225859 DEBUG oslo_concurrency.processutils [None req-36c56da3-e6dd-43b3-8db2-5f7143100150 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:33:39 compute-1 nova_compute[225855]: 2026-01-20 14:33:39.149 225859 DEBUG nova.virt.libvirt.vif [None req-36c56da3-e6dd-43b3-8db2-5f7143100150 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T14:33:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1907009380',display_name='tempest-ServersAdminTestJSON-server-1907009380',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1907009380',id=39,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a841e7a1434c488390475174e10bc161',ramdisk_id='',reservation_id='r-8k5b63bj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-1261404595',owner_user_name='tempest-ServersAdminTestJSON-1261404595-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T14:33:33Z,user_data=None,user_id='f51c395107c84dbd9067113b84ff01dd',uuid=2ec7b07d-b593-46b7-9751-b6116e4d2cec,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "73e232f9-3860-4b9a-9cec-535fa2fb0c9f", "address": "fa:16:3e:17:6a:15", "network": {"id": "33c9a20a-d976-42a8-b8bf-f83ddfc97c9a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-202342440-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a841e7a1434c488390475174e10bc161", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap73e232f9-38", "ovs_interfaceid": "73e232f9-3860-4b9a-9cec-535fa2fb0c9f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 20 14:33:39 compute-1 nova_compute[225855]: 2026-01-20 14:33:39.150 225859 DEBUG nova.network.os_vif_util [None req-36c56da3-e6dd-43b3-8db2-5f7143100150 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Converting VIF {"id": "73e232f9-3860-4b9a-9cec-535fa2fb0c9f", "address": "fa:16:3e:17:6a:15", "network": {"id": "33c9a20a-d976-42a8-b8bf-f83ddfc97c9a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-202342440-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a841e7a1434c488390475174e10bc161", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap73e232f9-38", "ovs_interfaceid": "73e232f9-3860-4b9a-9cec-535fa2fb0c9f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:33:39 compute-1 nova_compute[225855]: 2026-01-20 14:33:39.151 225859 DEBUG nova.network.os_vif_util [None req-36c56da3-e6dd-43b3-8db2-5f7143100150 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:17:6a:15,bridge_name='br-int',has_traffic_filtering=True,id=73e232f9-3860-4b9a-9cec-535fa2fb0c9f,network=Network(33c9a20a-d976-42a8-b8bf-f83ddfc97c9a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap73e232f9-38') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:33:39 compute-1 nova_compute[225855]: 2026-01-20 14:33:39.154 225859 DEBUG nova.objects.instance [None req-36c56da3-e6dd-43b3-8db2-5f7143100150 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Lazy-loading 'pci_devices' on Instance uuid 2ec7b07d-b593-46b7-9751-b6116e4d2cec obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:33:39 compute-1 nova_compute[225855]: 2026-01-20 14:33:39.180 225859 DEBUG nova.virt.libvirt.driver [None req-36c56da3-e6dd-43b3-8db2-5f7143100150 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] [instance: 2ec7b07d-b593-46b7-9751-b6116e4d2cec] End _get_guest_xml xml=<domain type="kvm">
Jan 20 14:33:39 compute-1 nova_compute[225855]:   <uuid>2ec7b07d-b593-46b7-9751-b6116e4d2cec</uuid>
Jan 20 14:33:39 compute-1 nova_compute[225855]:   <name>instance-00000027</name>
Jan 20 14:33:39 compute-1 nova_compute[225855]:   <memory>131072</memory>
Jan 20 14:33:39 compute-1 nova_compute[225855]:   <vcpu>1</vcpu>
Jan 20 14:33:39 compute-1 nova_compute[225855]:   <metadata>
Jan 20 14:33:39 compute-1 nova_compute[225855]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 14:33:39 compute-1 nova_compute[225855]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 14:33:39 compute-1 nova_compute[225855]:       <nova:name>tempest-ServersAdminTestJSON-server-1907009380</nova:name>
Jan 20 14:33:39 compute-1 nova_compute[225855]:       <nova:creationTime>2026-01-20 14:33:38</nova:creationTime>
Jan 20 14:33:39 compute-1 nova_compute[225855]:       <nova:flavor name="m1.nano">
Jan 20 14:33:39 compute-1 nova_compute[225855]:         <nova:memory>128</nova:memory>
Jan 20 14:33:39 compute-1 nova_compute[225855]:         <nova:disk>1</nova:disk>
Jan 20 14:33:39 compute-1 nova_compute[225855]:         <nova:swap>0</nova:swap>
Jan 20 14:33:39 compute-1 nova_compute[225855]:         <nova:ephemeral>0</nova:ephemeral>
Jan 20 14:33:39 compute-1 nova_compute[225855]:         <nova:vcpus>1</nova:vcpus>
Jan 20 14:33:39 compute-1 nova_compute[225855]:       </nova:flavor>
Jan 20 14:33:39 compute-1 nova_compute[225855]:       <nova:owner>
Jan 20 14:33:39 compute-1 nova_compute[225855]:         <nova:user uuid="f51c395107c84dbd9067113b84ff01dd">tempest-ServersAdminTestJSON-1261404595-project-member</nova:user>
Jan 20 14:33:39 compute-1 nova_compute[225855]:         <nova:project uuid="a841e7a1434c488390475174e10bc161">tempest-ServersAdminTestJSON-1261404595</nova:project>
Jan 20 14:33:39 compute-1 nova_compute[225855]:       </nova:owner>
Jan 20 14:33:39 compute-1 nova_compute[225855]:       <nova:root type="image" uuid="a32b3e07-16d8-46fd-9a7b-c242c432fcf9"/>
Jan 20 14:33:39 compute-1 nova_compute[225855]:       <nova:ports>
Jan 20 14:33:39 compute-1 nova_compute[225855]:         <nova:port uuid="73e232f9-3860-4b9a-9cec-535fa2fb0c9f">
Jan 20 14:33:39 compute-1 nova_compute[225855]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Jan 20 14:33:39 compute-1 nova_compute[225855]:         </nova:port>
Jan 20 14:33:39 compute-1 nova_compute[225855]:       </nova:ports>
Jan 20 14:33:39 compute-1 nova_compute[225855]:     </nova:instance>
Jan 20 14:33:39 compute-1 nova_compute[225855]:   </metadata>
Jan 20 14:33:39 compute-1 nova_compute[225855]:   <sysinfo type="smbios">
Jan 20 14:33:39 compute-1 nova_compute[225855]:     <system>
Jan 20 14:33:39 compute-1 nova_compute[225855]:       <entry name="manufacturer">RDO</entry>
Jan 20 14:33:39 compute-1 nova_compute[225855]:       <entry name="product">OpenStack Compute</entry>
Jan 20 14:33:39 compute-1 nova_compute[225855]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 14:33:39 compute-1 nova_compute[225855]:       <entry name="serial">2ec7b07d-b593-46b7-9751-b6116e4d2cec</entry>
Jan 20 14:33:39 compute-1 nova_compute[225855]:       <entry name="uuid">2ec7b07d-b593-46b7-9751-b6116e4d2cec</entry>
Jan 20 14:33:39 compute-1 nova_compute[225855]:       <entry name="family">Virtual Machine</entry>
Jan 20 14:33:39 compute-1 nova_compute[225855]:     </system>
Jan 20 14:33:39 compute-1 nova_compute[225855]:   </sysinfo>
Jan 20 14:33:39 compute-1 nova_compute[225855]:   <os>
Jan 20 14:33:39 compute-1 nova_compute[225855]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 20 14:33:39 compute-1 nova_compute[225855]:     <boot dev="hd"/>
Jan 20 14:33:39 compute-1 nova_compute[225855]:     <smbios mode="sysinfo"/>
Jan 20 14:33:39 compute-1 nova_compute[225855]:   </os>
Jan 20 14:33:39 compute-1 nova_compute[225855]:   <features>
Jan 20 14:33:39 compute-1 nova_compute[225855]:     <acpi/>
Jan 20 14:33:39 compute-1 nova_compute[225855]:     <apic/>
Jan 20 14:33:39 compute-1 nova_compute[225855]:     <vmcoreinfo/>
Jan 20 14:33:39 compute-1 nova_compute[225855]:   </features>
Jan 20 14:33:39 compute-1 nova_compute[225855]:   <clock offset="utc">
Jan 20 14:33:39 compute-1 nova_compute[225855]:     <timer name="pit" tickpolicy="delay"/>
Jan 20 14:33:39 compute-1 nova_compute[225855]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 20 14:33:39 compute-1 nova_compute[225855]:     <timer name="hpet" present="no"/>
Jan 20 14:33:39 compute-1 nova_compute[225855]:   </clock>
Jan 20 14:33:39 compute-1 nova_compute[225855]:   <cpu mode="custom" match="exact">
Jan 20 14:33:39 compute-1 nova_compute[225855]:     <model>Nehalem</model>
Jan 20 14:33:39 compute-1 nova_compute[225855]:     <topology sockets="1" cores="1" threads="1"/>
Jan 20 14:33:39 compute-1 nova_compute[225855]:   </cpu>
Jan 20 14:33:39 compute-1 nova_compute[225855]:   <devices>
Jan 20 14:33:39 compute-1 nova_compute[225855]:     <disk type="network" device="disk">
Jan 20 14:33:39 compute-1 nova_compute[225855]:       <driver type="raw" cache="none"/>
Jan 20 14:33:39 compute-1 nova_compute[225855]:       <source protocol="rbd" name="vms/2ec7b07d-b593-46b7-9751-b6116e4d2cec_disk">
Jan 20 14:33:39 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:33:39 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:33:39 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:33:39 compute-1 nova_compute[225855]:       </source>
Jan 20 14:33:39 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 14:33:39 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:33:39 compute-1 nova_compute[225855]:       </auth>
Jan 20 14:33:39 compute-1 nova_compute[225855]:       <target dev="vda" bus="virtio"/>
Jan 20 14:33:39 compute-1 nova_compute[225855]:     </disk>
Jan 20 14:33:39 compute-1 nova_compute[225855]:     <disk type="network" device="cdrom">
Jan 20 14:33:39 compute-1 nova_compute[225855]:       <driver type="raw" cache="none"/>
Jan 20 14:33:39 compute-1 nova_compute[225855]:       <source protocol="rbd" name="vms/2ec7b07d-b593-46b7-9751-b6116e4d2cec_disk.config">
Jan 20 14:33:39 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:33:39 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:33:39 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:33:39 compute-1 nova_compute[225855]:       </source>
Jan 20 14:33:39 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 14:33:39 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:33:39 compute-1 nova_compute[225855]:       </auth>
Jan 20 14:33:39 compute-1 nova_compute[225855]:       <target dev="sda" bus="sata"/>
Jan 20 14:33:39 compute-1 nova_compute[225855]:     </disk>
Jan 20 14:33:39 compute-1 nova_compute[225855]:     <interface type="ethernet">
Jan 20 14:33:39 compute-1 nova_compute[225855]:       <mac address="fa:16:3e:17:6a:15"/>
Jan 20 14:33:39 compute-1 nova_compute[225855]:       <model type="virtio"/>
Jan 20 14:33:39 compute-1 nova_compute[225855]:       <driver name="vhost" rx_queue_size="512"/>
Jan 20 14:33:39 compute-1 nova_compute[225855]:       <mtu size="1442"/>
Jan 20 14:33:39 compute-1 nova_compute[225855]:       <target dev="tap73e232f9-38"/>
Jan 20 14:33:39 compute-1 nova_compute[225855]:     </interface>
Jan 20 14:33:39 compute-1 nova_compute[225855]:     <serial type="pty">
Jan 20 14:33:39 compute-1 nova_compute[225855]:       <log file="/var/lib/nova/instances/2ec7b07d-b593-46b7-9751-b6116e4d2cec/console.log" append="off"/>
Jan 20 14:33:39 compute-1 nova_compute[225855]:     </serial>
Jan 20 14:33:39 compute-1 nova_compute[225855]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 20 14:33:39 compute-1 nova_compute[225855]:     <video>
Jan 20 14:33:39 compute-1 nova_compute[225855]:       <model type="virtio"/>
Jan 20 14:33:39 compute-1 nova_compute[225855]:     </video>
Jan 20 14:33:39 compute-1 nova_compute[225855]:     <input type="tablet" bus="usb"/>
Jan 20 14:33:39 compute-1 nova_compute[225855]:     <rng model="virtio">
Jan 20 14:33:39 compute-1 nova_compute[225855]:       <backend model="random">/dev/urandom</backend>
Jan 20 14:33:39 compute-1 nova_compute[225855]:     </rng>
Jan 20 14:33:39 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root"/>
Jan 20 14:33:39 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:33:39 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:33:39 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:33:39 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:33:39 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:33:39 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:33:39 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:33:39 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:33:39 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:33:39 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:33:39 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:33:39 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:33:39 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:33:39 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:33:39 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:33:39 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:33:39 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:33:39 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:33:39 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:33:39 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:33:39 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:33:39 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:33:39 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:33:39 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:33:39 compute-1 nova_compute[225855]:     <controller type="usb" index="0"/>
Jan 20 14:33:39 compute-1 nova_compute[225855]:     <memballoon model="virtio">
Jan 20 14:33:39 compute-1 nova_compute[225855]:       <stats period="10"/>
Jan 20 14:33:39 compute-1 nova_compute[225855]:     </memballoon>
Jan 20 14:33:39 compute-1 nova_compute[225855]:   </devices>
Jan 20 14:33:39 compute-1 nova_compute[225855]: </domain>
Jan 20 14:33:39 compute-1 nova_compute[225855]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 20 14:33:39 compute-1 nova_compute[225855]: 2026-01-20 14:33:39.181 225859 DEBUG nova.compute.manager [None req-36c56da3-e6dd-43b3-8db2-5f7143100150 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] [instance: 2ec7b07d-b593-46b7-9751-b6116e4d2cec] Preparing to wait for external event network-vif-plugged-73e232f9-3860-4b9a-9cec-535fa2fb0c9f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 20 14:33:39 compute-1 nova_compute[225855]: 2026-01-20 14:33:39.182 225859 DEBUG oslo_concurrency.lockutils [None req-36c56da3-e6dd-43b3-8db2-5f7143100150 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Acquiring lock "2ec7b07d-b593-46b7-9751-b6116e4d2cec-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:33:39 compute-1 nova_compute[225855]: 2026-01-20 14:33:39.182 225859 DEBUG oslo_concurrency.lockutils [None req-36c56da3-e6dd-43b3-8db2-5f7143100150 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Lock "2ec7b07d-b593-46b7-9751-b6116e4d2cec-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:33:39 compute-1 nova_compute[225855]: 2026-01-20 14:33:39.182 225859 DEBUG oslo_concurrency.lockutils [None req-36c56da3-e6dd-43b3-8db2-5f7143100150 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Lock "2ec7b07d-b593-46b7-9751-b6116e4d2cec-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:33:39 compute-1 nova_compute[225855]: 2026-01-20 14:33:39.183 225859 DEBUG nova.virt.libvirt.vif [None req-36c56da3-e6dd-43b3-8db2-5f7143100150 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T14:33:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1907009380',display_name='tempest-ServersAdminTestJSON-server-1907009380',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1907009380',id=39,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a841e7a1434c488390475174e10bc161',ramdisk_id='',reservation_id='r-8k5b63bj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-1261404595',owner_user_name='tempest-ServersAdminTest
JSON-1261404595-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T14:33:33Z,user_data=None,user_id='f51c395107c84dbd9067113b84ff01dd',uuid=2ec7b07d-b593-46b7-9751-b6116e4d2cec,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "73e232f9-3860-4b9a-9cec-535fa2fb0c9f", "address": "fa:16:3e:17:6a:15", "network": {"id": "33c9a20a-d976-42a8-b8bf-f83ddfc97c9a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-202342440-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a841e7a1434c488390475174e10bc161", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap73e232f9-38", "ovs_interfaceid": "73e232f9-3860-4b9a-9cec-535fa2fb0c9f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 20 14:33:39 compute-1 nova_compute[225855]: 2026-01-20 14:33:39.183 225859 DEBUG nova.network.os_vif_util [None req-36c56da3-e6dd-43b3-8db2-5f7143100150 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Converting VIF {"id": "73e232f9-3860-4b9a-9cec-535fa2fb0c9f", "address": "fa:16:3e:17:6a:15", "network": {"id": "33c9a20a-d976-42a8-b8bf-f83ddfc97c9a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-202342440-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a841e7a1434c488390475174e10bc161", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap73e232f9-38", "ovs_interfaceid": "73e232f9-3860-4b9a-9cec-535fa2fb0c9f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:33:39 compute-1 nova_compute[225855]: 2026-01-20 14:33:39.184 225859 DEBUG nova.network.os_vif_util [None req-36c56da3-e6dd-43b3-8db2-5f7143100150 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:17:6a:15,bridge_name='br-int',has_traffic_filtering=True,id=73e232f9-3860-4b9a-9cec-535fa2fb0c9f,network=Network(33c9a20a-d976-42a8-b8bf-f83ddfc97c9a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap73e232f9-38') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:33:39 compute-1 nova_compute[225855]: 2026-01-20 14:33:39.184 225859 DEBUG os_vif [None req-36c56da3-e6dd-43b3-8db2-5f7143100150 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:17:6a:15,bridge_name='br-int',has_traffic_filtering=True,id=73e232f9-3860-4b9a-9cec-535fa2fb0c9f,network=Network(33c9a20a-d976-42a8-b8bf-f83ddfc97c9a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap73e232f9-38') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 20 14:33:39 compute-1 nova_compute[225855]: 2026-01-20 14:33:39.185 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:33:39 compute-1 nova_compute[225855]: 2026-01-20 14:33:39.185 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:33:39 compute-1 nova_compute[225855]: 2026-01-20 14:33:39.186 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:33:39 compute-1 nova_compute[225855]: 2026-01-20 14:33:39.189 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:33:39 compute-1 nova_compute[225855]: 2026-01-20 14:33:39.190 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap73e232f9-38, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:33:39 compute-1 nova_compute[225855]: 2026-01-20 14:33:39.190 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap73e232f9-38, col_values=(('external_ids', {'iface-id': '73e232f9-3860-4b9a-9cec-535fa2fb0c9f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:17:6a:15', 'vm-uuid': '2ec7b07d-b593-46b7-9751-b6116e4d2cec'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:33:39 compute-1 nova_compute[225855]: 2026-01-20 14:33:39.192 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:33:39 compute-1 NetworkManager[49104]: <info>  [1768919619.1934] manager: (tap73e232f9-38): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/62)
Jan 20 14:33:39 compute-1 nova_compute[225855]: 2026-01-20 14:33:39.194 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 14:33:39 compute-1 nova_compute[225855]: 2026-01-20 14:33:39.198 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:33:39 compute-1 nova_compute[225855]: 2026-01-20 14:33:39.199 225859 INFO os_vif [None req-36c56da3-e6dd-43b3-8db2-5f7143100150 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:17:6a:15,bridge_name='br-int',has_traffic_filtering=True,id=73e232f9-3860-4b9a-9cec-535fa2fb0c9f,network=Network(33c9a20a-d976-42a8-b8bf-f83ddfc97c9a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap73e232f9-38')
Jan 20 14:33:39 compute-1 nova_compute[225855]: 2026-01-20 14:33:39.265 225859 DEBUG nova.virt.libvirt.driver [None req-36c56da3-e6dd-43b3-8db2-5f7143100150 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:33:39 compute-1 nova_compute[225855]: 2026-01-20 14:33:39.266 225859 DEBUG nova.virt.libvirt.driver [None req-36c56da3-e6dd-43b3-8db2-5f7143100150 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:33:39 compute-1 nova_compute[225855]: 2026-01-20 14:33:39.266 225859 DEBUG nova.virt.libvirt.driver [None req-36c56da3-e6dd-43b3-8db2-5f7143100150 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] No VIF found with MAC fa:16:3e:17:6a:15, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 20 14:33:39 compute-1 nova_compute[225855]: 2026-01-20 14:33:39.267 225859 INFO nova.virt.libvirt.driver [None req-36c56da3-e6dd-43b3-8db2-5f7143100150 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] [instance: 2ec7b07d-b593-46b7-9751-b6116e4d2cec] Using config drive
Jan 20 14:33:39 compute-1 nova_compute[225855]: 2026-01-20 14:33:39.294 225859 DEBUG nova.storage.rbd_utils [None req-36c56da3-e6dd-43b3-8db2-5f7143100150 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] rbd image 2ec7b07d-b593-46b7-9751-b6116e4d2cec_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:33:39 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:33:39 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:33:39 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:33:39.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:33:39 compute-1 nova_compute[225855]: 2026-01-20 14:33:39.419 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:33:39 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/246388539' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:33:39 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:33:39 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 14:33:39 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:33:39 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 14:33:39 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 14:33:39 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:33:39 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/1858393990' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:33:39 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e168 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:33:40 compute-1 nova_compute[225855]: 2026-01-20 14:33:40.148 225859 DEBUG nova.network.neutron [req-691c09c7-5da3-4b8f-b87d-e0cc6eca78cf req-1e9ac1fd-0c15-4f3e-9cb9-078609bea9a9 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 2ec7b07d-b593-46b7-9751-b6116e4d2cec] Updated VIF entry in instance network info cache for port 73e232f9-3860-4b9a-9cec-535fa2fb0c9f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 14:33:40 compute-1 nova_compute[225855]: 2026-01-20 14:33:40.149 225859 DEBUG nova.network.neutron [req-691c09c7-5da3-4b8f-b87d-e0cc6eca78cf req-1e9ac1fd-0c15-4f3e-9cb9-078609bea9a9 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 2ec7b07d-b593-46b7-9751-b6116e4d2cec] Updating instance_info_cache with network_info: [{"id": "73e232f9-3860-4b9a-9cec-535fa2fb0c9f", "address": "fa:16:3e:17:6a:15", "network": {"id": "33c9a20a-d976-42a8-b8bf-f83ddfc97c9a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-202342440-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a841e7a1434c488390475174e10bc161", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap73e232f9-38", "ovs_interfaceid": "73e232f9-3860-4b9a-9cec-535fa2fb0c9f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:33:40 compute-1 nova_compute[225855]: 2026-01-20 14:33:40.165 225859 DEBUG oslo_concurrency.lockutils [req-691c09c7-5da3-4b8f-b87d-e0cc6eca78cf req-1e9ac1fd-0c15-4f3e-9cb9-078609bea9a9 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-2ec7b07d-b593-46b7-9751-b6116e4d2cec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:33:40 compute-1 nova_compute[225855]: 2026-01-20 14:33:40.458 225859 INFO nova.virt.libvirt.driver [None req-36c56da3-e6dd-43b3-8db2-5f7143100150 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] [instance: 2ec7b07d-b593-46b7-9751-b6116e4d2cec] Creating config drive at /var/lib/nova/instances/2ec7b07d-b593-46b7-9751-b6116e4d2cec/disk.config
Jan 20 14:33:40 compute-1 nova_compute[225855]: 2026-01-20 14:33:40.468 225859 DEBUG oslo_concurrency.processutils [None req-36c56da3-e6dd-43b3-8db2-5f7143100150 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/2ec7b07d-b593-46b7-9751-b6116e4d2cec/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpsa93env5 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:33:40 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:33:40 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:33:40 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:33:40.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:33:40 compute-1 nova_compute[225855]: 2026-01-20 14:33:40.601 225859 DEBUG oslo_concurrency.processutils [None req-36c56da3-e6dd-43b3-8db2-5f7143100150 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/2ec7b07d-b593-46b7-9751-b6116e4d2cec/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpsa93env5" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:33:40 compute-1 nova_compute[225855]: 2026-01-20 14:33:40.648 225859 DEBUG nova.storage.rbd_utils [None req-36c56da3-e6dd-43b3-8db2-5f7143100150 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] rbd image 2ec7b07d-b593-46b7-9751-b6116e4d2cec_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:33:40 compute-1 nova_compute[225855]: 2026-01-20 14:33:40.652 225859 DEBUG oslo_concurrency.processutils [None req-36c56da3-e6dd-43b3-8db2-5f7143100150 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/2ec7b07d-b593-46b7-9751-b6116e4d2cec/disk.config 2ec7b07d-b593-46b7-9751-b6116e4d2cec_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:33:40 compute-1 ceph-mon[81775]: pgmap v1295: 321 pgs: 321 active+clean; 180 MiB data, 496 MiB used, 21 GiB / 21 GiB avail; 25 KiB/s rd, 2.5 MiB/s wr, 37 op/s
Jan 20 14:33:41 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:33:41 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:33:41 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:33:41.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:33:41 compute-1 nova_compute[225855]: 2026-01-20 14:33:41.376 225859 DEBUG oslo_concurrency.processutils [None req-36c56da3-e6dd-43b3-8db2-5f7143100150 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/2ec7b07d-b593-46b7-9751-b6116e4d2cec/disk.config 2ec7b07d-b593-46b7-9751-b6116e4d2cec_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.724s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:33:41 compute-1 nova_compute[225855]: 2026-01-20 14:33:41.378 225859 INFO nova.virt.libvirt.driver [None req-36c56da3-e6dd-43b3-8db2-5f7143100150 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] [instance: 2ec7b07d-b593-46b7-9751-b6116e4d2cec] Deleting local config drive /var/lib/nova/instances/2ec7b07d-b593-46b7-9751-b6116e4d2cec/disk.config because it was imported into RBD.
Jan 20 14:33:41 compute-1 kernel: tap73e232f9-38: entered promiscuous mode
Jan 20 14:33:41 compute-1 NetworkManager[49104]: <info>  [1768919621.4491] manager: (tap73e232f9-38): new Tun device (/org/freedesktop/NetworkManager/Devices/63)
Jan 20 14:33:41 compute-1 ovn_controller[130490]: 2026-01-20T14:33:41Z|00137|binding|INFO|Claiming lport 73e232f9-3860-4b9a-9cec-535fa2fb0c9f for this chassis.
Jan 20 14:33:41 compute-1 ovn_controller[130490]: 2026-01-20T14:33:41Z|00138|binding|INFO|73e232f9-3860-4b9a-9cec-535fa2fb0c9f: Claiming fa:16:3e:17:6a:15 10.100.0.6
Jan 20 14:33:41 compute-1 nova_compute[225855]: 2026-01-20 14:33:41.450 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:33:41 compute-1 nova_compute[225855]: 2026-01-20 14:33:41.453 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:33:41 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:33:41.464 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:17:6a:15 10.100.0.6'], port_security=['fa:16:3e:17:6a:15 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '2ec7b07d-b593-46b7-9751-b6116e4d2cec', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-33c9a20a-d976-42a8-b8bf-f83ddfc97c9a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a841e7a1434c488390475174e10bc161', 'neutron:revision_number': '2', 'neutron:security_group_ids': '0bbdea05-fba7-47c7-ba4e-5dac58212a25', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c43dea88-ea55-4069-a4be-2c30a432a754, chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], logical_port=73e232f9-3860-4b9a-9cec-535fa2fb0c9f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:33:41 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:33:41.466 140354 INFO neutron.agent.ovn.metadata.agent [-] Port 73e232f9-3860-4b9a-9cec-535fa2fb0c9f in datapath 33c9a20a-d976-42a8-b8bf-f83ddfc97c9a bound to our chassis
Jan 20 14:33:41 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:33:41.468 140354 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 33c9a20a-d976-42a8-b8bf-f83ddfc97c9a
Jan 20 14:33:41 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:33:41.483 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[6a28aced-99c2-4e7b-b7e1-c06012c9b41f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:33:41 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:33:41.484 140354 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap33c9a20a-d1 in ovnmeta-33c9a20a-d976-42a8-b8bf-f83ddfc97c9a namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 20 14:33:41 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:33:41.486 229707 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap33c9a20a-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 20 14:33:41 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:33:41.486 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[45615288-289e-421e-82c4-19a0aaf45265]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:33:41 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:33:41.487 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[767f890e-ead5-46ab-badc-e19461397440]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:33:41 compute-1 systemd-machined[194361]: New machine qemu-20-instance-00000027.
Jan 20 14:33:41 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:33:41.502 140466 DEBUG oslo.privsep.daemon [-] privsep: reply[06837b5d-3352-4254-8270-96974dc7cbf8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:33:41 compute-1 systemd[1]: Started Virtual Machine qemu-20-instance-00000027.
Jan 20 14:33:41 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:33:41.525 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[4ba63c86-38c7-4925-a759-daa1697f1099]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:33:41 compute-1 systemd-udevd[244224]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 14:33:41 compute-1 NetworkManager[49104]: <info>  [1768919621.5440] device (tap73e232f9-38): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 20 14:33:41 compute-1 NetworkManager[49104]: <info>  [1768919621.5449] device (tap73e232f9-38): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 20 14:33:41 compute-1 nova_compute[225855]: 2026-01-20 14:33:41.549 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:33:41 compute-1 ovn_controller[130490]: 2026-01-20T14:33:41Z|00139|binding|INFO|Setting lport 73e232f9-3860-4b9a-9cec-535fa2fb0c9f ovn-installed in OVS
Jan 20 14:33:41 compute-1 ovn_controller[130490]: 2026-01-20T14:33:41Z|00140|binding|INFO|Setting lport 73e232f9-3860-4b9a-9cec-535fa2fb0c9f up in Southbound
Jan 20 14:33:41 compute-1 nova_compute[225855]: 2026-01-20 14:33:41.557 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:33:41 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:33:41.563 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[8f6fc406-0c5e-406f-be89-8b60e399207a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:33:41 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:33:41.568 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[f99db80f-a8e7-49d4-9a97-4fa82f9ad190]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:33:41 compute-1 NetworkManager[49104]: <info>  [1768919621.5691] manager: (tap33c9a20a-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/64)
Jan 20 14:33:41 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:33:41.602 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[836acec3-7e29-4a71-9f1c-e296c72f2740]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:33:41 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:33:41.605 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[afeaf2fb-b75d-46ed-8bd0-31a3d0c18033]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:33:41 compute-1 NetworkManager[49104]: <info>  [1768919621.6249] device (tap33c9a20a-d0): carrier: link connected
Jan 20 14:33:41 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:33:41.629 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[c3b01108-15b1-4fdb-9c7d-6c4bfbc3976e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:33:41 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:33:41.643 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[e53c7bed-6bcb-4841-bb60-2e7b887f45a4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap33c9a20a-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:89:8e:bd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 37], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 466055, 'reachable_time': 29489, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 244255, 'error': None, 'target': 'ovnmeta-33c9a20a-d976-42a8-b8bf-f83ddfc97c9a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:33:41 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:33:41.657 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[42276c09-2b87-4199-80df-f90acaed1401]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe89:8ebd'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 466055, 'tstamp': 466055}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 244256, 'error': None, 'target': 'ovnmeta-33c9a20a-d976-42a8-b8bf-f83ddfc97c9a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:33:41 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:33:41.671 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[32288afa-dc8b-410f-b6a7-9403acf6a732]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap33c9a20a-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:89:8e:bd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 37], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 466055, 'reachable_time': 29489, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 244257, 'error': None, 'target': 'ovnmeta-33c9a20a-d976-42a8-b8bf-f83ddfc97c9a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:33:41 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:33:41.702 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[91b860d9-3574-41ab-a6c4-ec1d1884583f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:33:41 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:33:41.757 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[748c4050-38bd-416d-9c36-b5f2af429643]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:33:41 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:33:41.758 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap33c9a20a-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:33:41 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:33:41.758 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:33:41 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:33:41.758 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap33c9a20a-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:33:41 compute-1 kernel: tap33c9a20a-d0: entered promiscuous mode
Jan 20 14:33:41 compute-1 nova_compute[225855]: 2026-01-20 14:33:41.760 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:33:41 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:33:41.763 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap33c9a20a-d0, col_values=(('external_ids', {'iface-id': '90c69687-c788-4dba-881f-3ed4a5ee6007'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:33:41 compute-1 ovn_controller[130490]: 2026-01-20T14:33:41Z|00141|binding|INFO|Releasing lport 90c69687-c788-4dba-881f-3ed4a5ee6007 from this chassis (sb_readonly=0)
Jan 20 14:33:41 compute-1 NetworkManager[49104]: <info>  [1768919621.7651] manager: (tap33c9a20a-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/65)
Jan 20 14:33:41 compute-1 nova_compute[225855]: 2026-01-20 14:33:41.766 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:33:41 compute-1 nova_compute[225855]: 2026-01-20 14:33:41.777 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:33:41 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:33:41.778 140354 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/33c9a20a-d976-42a8-b8bf-f83ddfc97c9a.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/33c9a20a-d976-42a8-b8bf-f83ddfc97c9a.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 20 14:33:41 compute-1 nova_compute[225855]: 2026-01-20 14:33:41.779 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:33:41 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:33:41.779 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[6bd7344c-3d09-4004-a222-40748b8cc6ee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:33:41 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:33:41.780 140354 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 20 14:33:41 compute-1 ovn_metadata_agent[140349]: global
Jan 20 14:33:41 compute-1 ovn_metadata_agent[140349]:     log         /dev/log local0 debug
Jan 20 14:33:41 compute-1 ovn_metadata_agent[140349]:     log-tag     haproxy-metadata-proxy-33c9a20a-d976-42a8-b8bf-f83ddfc97c9a
Jan 20 14:33:41 compute-1 ovn_metadata_agent[140349]:     user        root
Jan 20 14:33:41 compute-1 ovn_metadata_agent[140349]:     group       root
Jan 20 14:33:41 compute-1 ovn_metadata_agent[140349]:     maxconn     1024
Jan 20 14:33:41 compute-1 ovn_metadata_agent[140349]:     pidfile     /var/lib/neutron/external/pids/33c9a20a-d976-42a8-b8bf-f83ddfc97c9a.pid.haproxy
Jan 20 14:33:41 compute-1 ovn_metadata_agent[140349]:     daemon
Jan 20 14:33:41 compute-1 ovn_metadata_agent[140349]: 
Jan 20 14:33:41 compute-1 ovn_metadata_agent[140349]: defaults
Jan 20 14:33:41 compute-1 ovn_metadata_agent[140349]:     log global
Jan 20 14:33:41 compute-1 ovn_metadata_agent[140349]:     mode http
Jan 20 14:33:41 compute-1 ovn_metadata_agent[140349]:     option httplog
Jan 20 14:33:41 compute-1 ovn_metadata_agent[140349]:     option dontlognull
Jan 20 14:33:41 compute-1 ovn_metadata_agent[140349]:     option http-server-close
Jan 20 14:33:41 compute-1 ovn_metadata_agent[140349]:     option forwardfor
Jan 20 14:33:41 compute-1 ovn_metadata_agent[140349]:     retries                 3
Jan 20 14:33:41 compute-1 ovn_metadata_agent[140349]:     timeout http-request    30s
Jan 20 14:33:41 compute-1 ovn_metadata_agent[140349]:     timeout connect         30s
Jan 20 14:33:41 compute-1 ovn_metadata_agent[140349]:     timeout client          32s
Jan 20 14:33:41 compute-1 ovn_metadata_agent[140349]:     timeout server          32s
Jan 20 14:33:41 compute-1 ovn_metadata_agent[140349]:     timeout http-keep-alive 30s
Jan 20 14:33:41 compute-1 ovn_metadata_agent[140349]: 
Jan 20 14:33:41 compute-1 ovn_metadata_agent[140349]: 
Jan 20 14:33:41 compute-1 ovn_metadata_agent[140349]: listen listener
Jan 20 14:33:41 compute-1 ovn_metadata_agent[140349]:     bind 169.254.169.254:80
Jan 20 14:33:41 compute-1 ovn_metadata_agent[140349]:     server metadata /var/lib/neutron/metadata_proxy
Jan 20 14:33:41 compute-1 ovn_metadata_agent[140349]:     http-request add-header X-OVN-Network-ID 33c9a20a-d976-42a8-b8bf-f83ddfc97c9a
Jan 20 14:33:41 compute-1 ovn_metadata_agent[140349]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 20 14:33:41 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:33:41.780 140354 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-33c9a20a-d976-42a8-b8bf-f83ddfc97c9a', 'env', 'PROCESS_TAG=haproxy-33c9a20a-d976-42a8-b8bf-f83ddfc97c9a', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/33c9a20a-d976-42a8-b8bf-f83ddfc97c9a.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 20 14:33:42 compute-1 sudo[244271]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:33:42 compute-1 sudo[244271]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:33:42 compute-1 sudo[244271]: pam_unix(sudo:session): session closed for user root
Jan 20 14:33:42 compute-1 sudo[244306]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:33:42 compute-1 sudo[244306]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:33:42 compute-1 sudo[244306]: pam_unix(sudo:session): session closed for user root
Jan 20 14:33:42 compute-1 podman[244318]: 2026-01-20 14:33:42.140394803 +0000 UTC m=+0.051027527 container create ab810e99d7949df31ae8ecec1ce4cb6e08341183c61e4c0ebc73d5a2b21ea83d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-33c9a20a-d976-42a8-b8bf-f83ddfc97c9a, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 20 14:33:42 compute-1 systemd[1]: Started libpod-conmon-ab810e99d7949df31ae8ecec1ce4cb6e08341183c61e4c0ebc73d5a2b21ea83d.scope.
Jan 20 14:33:42 compute-1 podman[244318]: 2026-01-20 14:33:42.113576149 +0000 UTC m=+0.024208893 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 20 14:33:42 compute-1 systemd[1]: Started libcrun container.
Jan 20 14:33:42 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/61f71f999ebe55922d4470c26bf6ec7028f2091bfc297c60f9663a1040a21c70/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 20 14:33:42 compute-1 podman[244318]: 2026-01-20 14:33:42.247746929 +0000 UTC m=+0.158379673 container init ab810e99d7949df31ae8ecec1ce4cb6e08341183c61e4c0ebc73d5a2b21ea83d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-33c9a20a-d976-42a8-b8bf-f83ddfc97c9a, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Jan 20 14:33:42 compute-1 podman[244318]: 2026-01-20 14:33:42.257783924 +0000 UTC m=+0.168416648 container start ab810e99d7949df31ae8ecec1ce4cb6e08341183c61e4c0ebc73d5a2b21ea83d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-33c9a20a-d976-42a8-b8bf-f83ddfc97c9a, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 20 14:33:42 compute-1 neutron-haproxy-ovnmeta-33c9a20a-d976-42a8-b8bf-f83ddfc97c9a[244351]: [NOTICE]   (244356) : New worker (244358) forked
Jan 20 14:33:42 compute-1 neutron-haproxy-ovnmeta-33c9a20a-d976-42a8-b8bf-f83ddfc97c9a[244351]: [NOTICE]   (244356) : Loading success.
Jan 20 14:33:42 compute-1 nova_compute[225855]: 2026-01-20 14:33:42.371 225859 DEBUG nova.compute.manager [req-2b3a0c21-208a-4e3a-94ca-593540b554b0 req-da37091d-4475-4c01-a2f6-3ed8875f23d0 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 2ec7b07d-b593-46b7-9751-b6116e4d2cec] Received event network-vif-plugged-73e232f9-3860-4b9a-9cec-535fa2fb0c9f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:33:42 compute-1 nova_compute[225855]: 2026-01-20 14:33:42.371 225859 DEBUG oslo_concurrency.lockutils [req-2b3a0c21-208a-4e3a-94ca-593540b554b0 req-da37091d-4475-4c01-a2f6-3ed8875f23d0 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "2ec7b07d-b593-46b7-9751-b6116e4d2cec-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:33:42 compute-1 nova_compute[225855]: 2026-01-20 14:33:42.372 225859 DEBUG oslo_concurrency.lockutils [req-2b3a0c21-208a-4e3a-94ca-593540b554b0 req-da37091d-4475-4c01-a2f6-3ed8875f23d0 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "2ec7b07d-b593-46b7-9751-b6116e4d2cec-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:33:42 compute-1 nova_compute[225855]: 2026-01-20 14:33:42.372 225859 DEBUG oslo_concurrency.lockutils [req-2b3a0c21-208a-4e3a-94ca-593540b554b0 req-da37091d-4475-4c01-a2f6-3ed8875f23d0 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "2ec7b07d-b593-46b7-9751-b6116e4d2cec-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:33:42 compute-1 nova_compute[225855]: 2026-01-20 14:33:42.372 225859 DEBUG nova.compute.manager [req-2b3a0c21-208a-4e3a-94ca-593540b554b0 req-da37091d-4475-4c01-a2f6-3ed8875f23d0 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 2ec7b07d-b593-46b7-9751-b6116e4d2cec] Processing event network-vif-plugged-73e232f9-3860-4b9a-9cec-535fa2fb0c9f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 20 14:33:42 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:33:42 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:33:42 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:33:42.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:33:43 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:33:43 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:33:43 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:33:43.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:33:43 compute-1 ceph-mon[81775]: pgmap v1296: 321 pgs: 321 active+clean; 213 MiB data, 514 MiB used, 20 GiB / 21 GiB avail; 44 KiB/s rd, 4.3 MiB/s wr, 70 op/s
Jan 20 14:33:43 compute-1 nova_compute[225855]: 2026-01-20 14:33:43.579 225859 DEBUG nova.compute.manager [None req-36c56da3-e6dd-43b3-8db2-5f7143100150 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] [instance: 2ec7b07d-b593-46b7-9751-b6116e4d2cec] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 20 14:33:43 compute-1 nova_compute[225855]: 2026-01-20 14:33:43.580 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768919623.5791976, 2ec7b07d-b593-46b7-9751-b6116e4d2cec => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:33:43 compute-1 nova_compute[225855]: 2026-01-20 14:33:43.580 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 2ec7b07d-b593-46b7-9751-b6116e4d2cec] VM Started (Lifecycle Event)
Jan 20 14:33:43 compute-1 nova_compute[225855]: 2026-01-20 14:33:43.583 225859 DEBUG nova.virt.libvirt.driver [None req-36c56da3-e6dd-43b3-8db2-5f7143100150 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] [instance: 2ec7b07d-b593-46b7-9751-b6116e4d2cec] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 20 14:33:43 compute-1 nova_compute[225855]: 2026-01-20 14:33:43.586 225859 INFO nova.virt.libvirt.driver [-] [instance: 2ec7b07d-b593-46b7-9751-b6116e4d2cec] Instance spawned successfully.
Jan 20 14:33:43 compute-1 nova_compute[225855]: 2026-01-20 14:33:43.587 225859 DEBUG nova.virt.libvirt.driver [None req-36c56da3-e6dd-43b3-8db2-5f7143100150 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] [instance: 2ec7b07d-b593-46b7-9751-b6116e4d2cec] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 20 14:33:43 compute-1 nova_compute[225855]: 2026-01-20 14:33:43.615 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 2ec7b07d-b593-46b7-9751-b6116e4d2cec] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:33:43 compute-1 nova_compute[225855]: 2026-01-20 14:33:43.620 225859 DEBUG nova.virt.libvirt.driver [None req-36c56da3-e6dd-43b3-8db2-5f7143100150 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] [instance: 2ec7b07d-b593-46b7-9751-b6116e4d2cec] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:33:43 compute-1 nova_compute[225855]: 2026-01-20 14:33:43.620 225859 DEBUG nova.virt.libvirt.driver [None req-36c56da3-e6dd-43b3-8db2-5f7143100150 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] [instance: 2ec7b07d-b593-46b7-9751-b6116e4d2cec] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:33:43 compute-1 nova_compute[225855]: 2026-01-20 14:33:43.621 225859 DEBUG nova.virt.libvirt.driver [None req-36c56da3-e6dd-43b3-8db2-5f7143100150 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] [instance: 2ec7b07d-b593-46b7-9751-b6116e4d2cec] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:33:43 compute-1 nova_compute[225855]: 2026-01-20 14:33:43.621 225859 DEBUG nova.virt.libvirt.driver [None req-36c56da3-e6dd-43b3-8db2-5f7143100150 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] [instance: 2ec7b07d-b593-46b7-9751-b6116e4d2cec] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:33:43 compute-1 nova_compute[225855]: 2026-01-20 14:33:43.622 225859 DEBUG nova.virt.libvirt.driver [None req-36c56da3-e6dd-43b3-8db2-5f7143100150 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] [instance: 2ec7b07d-b593-46b7-9751-b6116e4d2cec] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:33:43 compute-1 nova_compute[225855]: 2026-01-20 14:33:43.622 225859 DEBUG nova.virt.libvirt.driver [None req-36c56da3-e6dd-43b3-8db2-5f7143100150 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] [instance: 2ec7b07d-b593-46b7-9751-b6116e4d2cec] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:33:43 compute-1 nova_compute[225855]: 2026-01-20 14:33:43.626 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 2ec7b07d-b593-46b7-9751-b6116e4d2cec] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:33:43 compute-1 nova_compute[225855]: 2026-01-20 14:33:43.682 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 2ec7b07d-b593-46b7-9751-b6116e4d2cec] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 14:33:43 compute-1 nova_compute[225855]: 2026-01-20 14:33:43.682 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768919623.5800834, 2ec7b07d-b593-46b7-9751-b6116e4d2cec => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:33:43 compute-1 nova_compute[225855]: 2026-01-20 14:33:43.683 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 2ec7b07d-b593-46b7-9751-b6116e4d2cec] VM Paused (Lifecycle Event)
Jan 20 14:33:43 compute-1 nova_compute[225855]: 2026-01-20 14:33:43.712 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 2ec7b07d-b593-46b7-9751-b6116e4d2cec] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:33:43 compute-1 nova_compute[225855]: 2026-01-20 14:33:43.715 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768919623.58235, 2ec7b07d-b593-46b7-9751-b6116e4d2cec => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:33:43 compute-1 nova_compute[225855]: 2026-01-20 14:33:43.715 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 2ec7b07d-b593-46b7-9751-b6116e4d2cec] VM Resumed (Lifecycle Event)
Jan 20 14:33:43 compute-1 nova_compute[225855]: 2026-01-20 14:33:43.736 225859 INFO nova.compute.manager [None req-36c56da3-e6dd-43b3-8db2-5f7143100150 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] [instance: 2ec7b07d-b593-46b7-9751-b6116e4d2cec] Took 10.62 seconds to spawn the instance on the hypervisor.
Jan 20 14:33:43 compute-1 nova_compute[225855]: 2026-01-20 14:33:43.736 225859 DEBUG nova.compute.manager [None req-36c56da3-e6dd-43b3-8db2-5f7143100150 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] [instance: 2ec7b07d-b593-46b7-9751-b6116e4d2cec] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:33:43 compute-1 nova_compute[225855]: 2026-01-20 14:33:43.742 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 2ec7b07d-b593-46b7-9751-b6116e4d2cec] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:33:43 compute-1 nova_compute[225855]: 2026-01-20 14:33:43.744 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 2ec7b07d-b593-46b7-9751-b6116e4d2cec] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:33:43 compute-1 nova_compute[225855]: 2026-01-20 14:33:43.769 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 2ec7b07d-b593-46b7-9751-b6116e4d2cec] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 14:33:43 compute-1 nova_compute[225855]: 2026-01-20 14:33:43.804 225859 INFO nova.compute.manager [None req-36c56da3-e6dd-43b3-8db2-5f7143100150 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] [instance: 2ec7b07d-b593-46b7-9751-b6116e4d2cec] Took 11.67 seconds to build instance.
Jan 20 14:33:43 compute-1 nova_compute[225855]: 2026-01-20 14:33:43.828 225859 DEBUG oslo_concurrency.lockutils [None req-36c56da3-e6dd-43b3-8db2-5f7143100150 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Lock "2ec7b07d-b593-46b7-9751-b6116e4d2cec" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.804s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:33:44 compute-1 nova_compute[225855]: 2026-01-20 14:33:44.194 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:33:44 compute-1 nova_compute[225855]: 2026-01-20 14:33:44.421 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:33:44 compute-1 nova_compute[225855]: 2026-01-20 14:33:44.510 225859 DEBUG nova.compute.manager [req-3a254025-3885-43c8-811e-99352aa32552 req-bb977258-209d-4e70-9691-3a34a9fe326a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 2ec7b07d-b593-46b7-9751-b6116e4d2cec] Received event network-vif-plugged-73e232f9-3860-4b9a-9cec-535fa2fb0c9f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:33:44 compute-1 nova_compute[225855]: 2026-01-20 14:33:44.510 225859 DEBUG oslo_concurrency.lockutils [req-3a254025-3885-43c8-811e-99352aa32552 req-bb977258-209d-4e70-9691-3a34a9fe326a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "2ec7b07d-b593-46b7-9751-b6116e4d2cec-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:33:44 compute-1 nova_compute[225855]: 2026-01-20 14:33:44.511 225859 DEBUG oslo_concurrency.lockutils [req-3a254025-3885-43c8-811e-99352aa32552 req-bb977258-209d-4e70-9691-3a34a9fe326a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "2ec7b07d-b593-46b7-9751-b6116e4d2cec-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:33:44 compute-1 nova_compute[225855]: 2026-01-20 14:33:44.511 225859 DEBUG oslo_concurrency.lockutils [req-3a254025-3885-43c8-811e-99352aa32552 req-bb977258-209d-4e70-9691-3a34a9fe326a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "2ec7b07d-b593-46b7-9751-b6116e4d2cec-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:33:44 compute-1 nova_compute[225855]: 2026-01-20 14:33:44.512 225859 DEBUG nova.compute.manager [req-3a254025-3885-43c8-811e-99352aa32552 req-bb977258-209d-4e70-9691-3a34a9fe326a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 2ec7b07d-b593-46b7-9751-b6116e4d2cec] No waiting events found dispatching network-vif-plugged-73e232f9-3860-4b9a-9cec-535fa2fb0c9f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:33:44 compute-1 nova_compute[225855]: 2026-01-20 14:33:44.512 225859 WARNING nova.compute.manager [req-3a254025-3885-43c8-811e-99352aa32552 req-bb977258-209d-4e70-9691-3a34a9fe326a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 2ec7b07d-b593-46b7-9751-b6116e4d2cec] Received unexpected event network-vif-plugged-73e232f9-3860-4b9a-9cec-535fa2fb0c9f for instance with vm_state active and task_state None.
Jan 20 14:33:44 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:33:44 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:33:44 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:33:44.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:33:44 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/1412916936' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:33:44 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/2270142018' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:33:44 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/2864710443' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:33:44 compute-1 ceph-mon[81775]: pgmap v1297: 321 pgs: 321 active+clean; 213 MiB data, 514 MiB used, 20 GiB / 21 GiB avail; 40 KiB/s rd, 3.9 MiB/s wr, 63 op/s
Jan 20 14:33:44 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e168 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:33:45 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:33:45 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:33:45 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:33:45.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:33:46 compute-1 ceph-mon[81775]: pgmap v1298: 321 pgs: 321 active+clean; 229 MiB data, 520 MiB used, 20 GiB / 21 GiB avail; 968 KiB/s rd, 4.1 MiB/s wr, 110 op/s
Jan 20 14:33:46 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:33:46 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:33:46 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:33:46.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:33:47 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:33:47 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:33:47 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:33:47.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:33:48 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:33:48 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:33:48 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:33:48.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:33:49 compute-1 podman[244412]: 2026-01-20 14:33:49.023084387 +0000 UTC m=+0.066953372 container health_status 533cd8e0c3b8b7f910f15cef49e7f879ab46c12ce646a35628c15f13725eb500 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 20 14:33:49 compute-1 ceph-mon[81775]: pgmap v1299: 321 pgs: 321 active+clean; 252 MiB data, 528 MiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 3.7 MiB/s wr, 111 op/s
Jan 20 14:33:49 compute-1 nova_compute[225855]: 2026-01-20 14:33:49.198 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:33:49 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:33:49 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:33:49 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:33:49.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:33:49 compute-1 nova_compute[225855]: 2026-01-20 14:33:49.423 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:33:49 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e168 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:33:50 compute-1 ceph-mon[81775]: pgmap v1300: 321 pgs: 321 active+clean; 260 MiB data, 535 MiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 4.1 MiB/s wr, 133 op/s
Jan 20 14:33:50 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:33:50 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:33:50 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:33:50.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:33:51 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:33:51 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:33:51 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:33:51.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:33:51 compute-1 sudo[244435]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:33:51 compute-1 sudo[244435]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:33:51 compute-1 sudo[244435]: pam_unix(sudo:session): session closed for user root
Jan 20 14:33:51 compute-1 sudo[244460]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 14:33:51 compute-1 sudo[244460]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:33:51 compute-1 sudo[244460]: pam_unix(sudo:session): session closed for user root
Jan 20 14:33:52 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:33:52 compute-1 ceph-mon[81775]: pgmap v1301: 321 pgs: 321 active+clean; 260 MiB data, 535 MiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 3.3 MiB/s wr, 190 op/s
Jan 20 14:33:52 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:33:52 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:33:52 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:33:52 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:33:52.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:33:53 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:33:53 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:33:53 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:33:53.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:33:53 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/3854223112' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:33:54 compute-1 nova_compute[225855]: 2026-01-20 14:33:54.202 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:33:54 compute-1 nova_compute[225855]: 2026-01-20 14:33:54.458 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:33:54 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:33:54 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:33:54 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:33:54.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:33:54 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e168 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:33:54 compute-1 ceph-mon[81775]: pgmap v1302: 321 pgs: 321 active+clean; 260 MiB data, 535 MiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 1.8 MiB/s wr, 161 op/s
Jan 20 14:33:54 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/867397850' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:33:55 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:33:55 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:33:55 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:33:55.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:33:56 compute-1 ovn_controller[130490]: 2026-01-20T14:33:56Z|00014|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:17:6a:15 10.100.0.6
Jan 20 14:33:56 compute-1 ovn_controller[130490]: 2026-01-20T14:33:56Z|00015|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:17:6a:15 10.100.0.6
Jan 20 14:33:56 compute-1 ceph-mon[81775]: pgmap v1303: 321 pgs: 321 active+clean; 260 MiB data, 535 MiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 1.8 MiB/s wr, 173 op/s
Jan 20 14:33:56 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:33:56 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:33:56 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:33:56.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:33:57 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:33:57 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:33:57 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:33:57.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:33:58 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:33:58 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:33:58 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:33:58.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:33:58 compute-1 ceph-mon[81775]: pgmap v1304: 321 pgs: 321 active+clean; 273 MiB data, 544 MiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 2.0 MiB/s wr, 135 op/s
Jan 20 14:33:59 compute-1 nova_compute[225855]: 2026-01-20 14:33:59.206 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:33:59 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:33:59 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:33:59 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:33:59.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:33:59 compute-1 nova_compute[225855]: 2026-01-20 14:33:59.460 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:33:59 compute-1 nova_compute[225855]: 2026-01-20 14:33:59.502 225859 DEBUG oslo_concurrency.lockutils [None req-fad19961-02ca-4563-bcb6-95c343d93c0e f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Acquiring lock "fdba30ff-e02a-4857-92f6-1828ce3ab175" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:33:59 compute-1 nova_compute[225855]: 2026-01-20 14:33:59.503 225859 DEBUG oslo_concurrency.lockutils [None req-fad19961-02ca-4563-bcb6-95c343d93c0e f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Lock "fdba30ff-e02a-4857-92f6-1828ce3ab175" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:33:59 compute-1 nova_compute[225855]: 2026-01-20 14:33:59.531 225859 DEBUG nova.compute.manager [None req-fad19961-02ca-4563-bcb6-95c343d93c0e f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] [instance: fdba30ff-e02a-4857-92f6-1828ce3ab175] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 20 14:33:59 compute-1 nova_compute[225855]: 2026-01-20 14:33:59.679 225859 DEBUG oslo_concurrency.lockutils [None req-fad19961-02ca-4563-bcb6-95c343d93c0e f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:33:59 compute-1 nova_compute[225855]: 2026-01-20 14:33:59.679 225859 DEBUG oslo_concurrency.lockutils [None req-fad19961-02ca-4563-bcb6-95c343d93c0e f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:33:59 compute-1 nova_compute[225855]: 2026-01-20 14:33:59.686 225859 DEBUG nova.virt.hardware [None req-fad19961-02ca-4563-bcb6-95c343d93c0e f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 20 14:33:59 compute-1 nova_compute[225855]: 2026-01-20 14:33:59.687 225859 INFO nova.compute.claims [None req-fad19961-02ca-4563-bcb6-95c343d93c0e f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] [instance: fdba30ff-e02a-4857-92f6-1828ce3ab175] Claim successful on node compute-1.ctlplane.example.com
Jan 20 14:33:59 compute-1 nova_compute[225855]: 2026-01-20 14:33:59.934 225859 DEBUG oslo_concurrency.processutils [None req-fad19961-02ca-4563-bcb6-95c343d93c0e f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:34:00 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e168 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:34:00 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:34:00 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2444071069' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:34:00 compute-1 nova_compute[225855]: 2026-01-20 14:34:00.472 225859 DEBUG oslo_concurrency.processutils [None req-fad19961-02ca-4563-bcb6-95c343d93c0e f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.537s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:34:00 compute-1 nova_compute[225855]: 2026-01-20 14:34:00.478 225859 DEBUG nova.compute.provider_tree [None req-fad19961-02ca-4563-bcb6-95c343d93c0e f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:34:00 compute-1 nova_compute[225855]: 2026-01-20 14:34:00.495 225859 DEBUG nova.scheduler.client.report [None req-fad19961-02ca-4563-bcb6-95c343d93c0e f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:34:00 compute-1 ceph-mon[81775]: pgmap v1305: 321 pgs: 321 active+clean; 278 MiB data, 548 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 1.6 MiB/s wr, 127 op/s
Jan 20 14:34:00 compute-1 nova_compute[225855]: 2026-01-20 14:34:00.539 225859 DEBUG oslo_concurrency.lockutils [None req-fad19961-02ca-4563-bcb6-95c343d93c0e f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.860s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:34:00 compute-1 nova_compute[225855]: 2026-01-20 14:34:00.540 225859 DEBUG nova.compute.manager [None req-fad19961-02ca-4563-bcb6-95c343d93c0e f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] [instance: fdba30ff-e02a-4857-92f6-1828ce3ab175] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 20 14:34:00 compute-1 nova_compute[225855]: 2026-01-20 14:34:00.590 225859 DEBUG nova.compute.manager [None req-fad19961-02ca-4563-bcb6-95c343d93c0e f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] [instance: fdba30ff-e02a-4857-92f6-1828ce3ab175] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 20 14:34:00 compute-1 nova_compute[225855]: 2026-01-20 14:34:00.591 225859 DEBUG nova.network.neutron [None req-fad19961-02ca-4563-bcb6-95c343d93c0e f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] [instance: fdba30ff-e02a-4857-92f6-1828ce3ab175] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 20 14:34:00 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:34:00 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:34:00 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:34:00.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:34:00 compute-1 nova_compute[225855]: 2026-01-20 14:34:00.616 225859 INFO nova.virt.libvirt.driver [None req-fad19961-02ca-4563-bcb6-95c343d93c0e f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] [instance: fdba30ff-e02a-4857-92f6-1828ce3ab175] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 20 14:34:00 compute-1 nova_compute[225855]: 2026-01-20 14:34:00.634 225859 DEBUG nova.compute.manager [None req-fad19961-02ca-4563-bcb6-95c343d93c0e f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] [instance: fdba30ff-e02a-4857-92f6-1828ce3ab175] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 20 14:34:00 compute-1 nova_compute[225855]: 2026-01-20 14:34:00.735 225859 DEBUG nova.compute.manager [None req-fad19961-02ca-4563-bcb6-95c343d93c0e f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] [instance: fdba30ff-e02a-4857-92f6-1828ce3ab175] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 20 14:34:00 compute-1 nova_compute[225855]: 2026-01-20 14:34:00.736 225859 DEBUG nova.virt.libvirt.driver [None req-fad19961-02ca-4563-bcb6-95c343d93c0e f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] [instance: fdba30ff-e02a-4857-92f6-1828ce3ab175] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 20 14:34:00 compute-1 nova_compute[225855]: 2026-01-20 14:34:00.736 225859 INFO nova.virt.libvirt.driver [None req-fad19961-02ca-4563-bcb6-95c343d93c0e f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] [instance: fdba30ff-e02a-4857-92f6-1828ce3ab175] Creating image(s)
Jan 20 14:34:00 compute-1 nova_compute[225855]: 2026-01-20 14:34:00.762 225859 DEBUG nova.storage.rbd_utils [None req-fad19961-02ca-4563-bcb6-95c343d93c0e f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] rbd image fdba30ff-e02a-4857-92f6-1828ce3ab175_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:34:00 compute-1 nova_compute[225855]: 2026-01-20 14:34:00.788 225859 DEBUG nova.storage.rbd_utils [None req-fad19961-02ca-4563-bcb6-95c343d93c0e f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] rbd image fdba30ff-e02a-4857-92f6-1828ce3ab175_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:34:00 compute-1 nova_compute[225855]: 2026-01-20 14:34:00.826 225859 DEBUG nova.storage.rbd_utils [None req-fad19961-02ca-4563-bcb6-95c343d93c0e f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] rbd image fdba30ff-e02a-4857-92f6-1828ce3ab175_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:34:00 compute-1 nova_compute[225855]: 2026-01-20 14:34:00.831 225859 DEBUG oslo_concurrency.processutils [None req-fad19961-02ca-4563-bcb6-95c343d93c0e f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:34:00 compute-1 nova_compute[225855]: 2026-01-20 14:34:00.909 225859 DEBUG oslo_concurrency.processutils [None req-fad19961-02ca-4563-bcb6-95c343d93c0e f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:34:00 compute-1 nova_compute[225855]: 2026-01-20 14:34:00.910 225859 DEBUG oslo_concurrency.lockutils [None req-fad19961-02ca-4563-bcb6-95c343d93c0e f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Acquiring lock "82d5c1918fd7c974214c7a48c1793a7a82560462" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:34:00 compute-1 nova_compute[225855]: 2026-01-20 14:34:00.910 225859 DEBUG oslo_concurrency.lockutils [None req-fad19961-02ca-4563-bcb6-95c343d93c0e f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:34:00 compute-1 nova_compute[225855]: 2026-01-20 14:34:00.911 225859 DEBUG oslo_concurrency.lockutils [None req-fad19961-02ca-4563-bcb6-95c343d93c0e f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:34:00 compute-1 nova_compute[225855]: 2026-01-20 14:34:00.935 225859 DEBUG nova.storage.rbd_utils [None req-fad19961-02ca-4563-bcb6-95c343d93c0e f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] rbd image fdba30ff-e02a-4857-92f6-1828ce3ab175_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:34:00 compute-1 nova_compute[225855]: 2026-01-20 14:34:00.939 225859 DEBUG oslo_concurrency.processutils [None req-fad19961-02ca-4563-bcb6-95c343d93c0e f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 fdba30ff-e02a-4857-92f6-1828ce3ab175_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:34:01 compute-1 nova_compute[225855]: 2026-01-20 14:34:01.017 225859 DEBUG nova.policy [None req-fad19961-02ca-4563-bcb6-95c343d93c0e f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'f51c395107c84dbd9067113b84ff01dd', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a841e7a1434c488390475174e10bc161', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 20 14:34:01 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:34:01 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:34:01 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:34:01.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:34:01 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/2444071069' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:34:01 compute-1 nova_compute[225855]: 2026-01-20 14:34:01.824 225859 DEBUG oslo_concurrency.processutils [None req-fad19961-02ca-4563-bcb6-95c343d93c0e f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 fdba30ff-e02a-4857-92f6-1828ce3ab175_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.886s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:34:01 compute-1 nova_compute[225855]: 2026-01-20 14:34:01.894 225859 DEBUG nova.storage.rbd_utils [None req-fad19961-02ca-4563-bcb6-95c343d93c0e f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] resizing rbd image fdba30ff-e02a-4857-92f6-1828ce3ab175_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 20 14:34:01 compute-1 nova_compute[225855]: 2026-01-20 14:34:01.987 225859 DEBUG nova.objects.instance [None req-fad19961-02ca-4563-bcb6-95c343d93c0e f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Lazy-loading 'migration_context' on Instance uuid fdba30ff-e02a-4857-92f6-1828ce3ab175 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:34:02 compute-1 nova_compute[225855]: 2026-01-20 14:34:02.004 225859 DEBUG nova.virt.libvirt.driver [None req-fad19961-02ca-4563-bcb6-95c343d93c0e f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] [instance: fdba30ff-e02a-4857-92f6-1828ce3ab175] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 20 14:34:02 compute-1 nova_compute[225855]: 2026-01-20 14:34:02.005 225859 DEBUG nova.virt.libvirt.driver [None req-fad19961-02ca-4563-bcb6-95c343d93c0e f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] [instance: fdba30ff-e02a-4857-92f6-1828ce3ab175] Ensure instance console log exists: /var/lib/nova/instances/fdba30ff-e02a-4857-92f6-1828ce3ab175/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 20 14:34:02 compute-1 nova_compute[225855]: 2026-01-20 14:34:02.005 225859 DEBUG oslo_concurrency.lockutils [None req-fad19961-02ca-4563-bcb6-95c343d93c0e f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:34:02 compute-1 nova_compute[225855]: 2026-01-20 14:34:02.006 225859 DEBUG oslo_concurrency.lockutils [None req-fad19961-02ca-4563-bcb6-95c343d93c0e f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:34:02 compute-1 nova_compute[225855]: 2026-01-20 14:34:02.006 225859 DEBUG oslo_concurrency.lockutils [None req-fad19961-02ca-4563-bcb6-95c343d93c0e f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:34:02 compute-1 sudo[244679]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:34:02 compute-1 sudo[244679]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:34:02 compute-1 sudo[244679]: pam_unix(sudo:session): session closed for user root
Jan 20 14:34:02 compute-1 sudo[244704]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:34:02 compute-1 sudo[244704]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:34:02 compute-1 sudo[244704]: pam_unix(sudo:session): session closed for user root
Jan 20 14:34:02 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:34:02 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:34:02 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:34:02.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:34:02 compute-1 ceph-mon[81775]: pgmap v1306: 321 pgs: 321 active+clean; 304 MiB data, 572 MiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 3.1 MiB/s wr, 194 op/s
Jan 20 14:34:03 compute-1 nova_compute[225855]: 2026-01-20 14:34:03.340 225859 DEBUG nova.network.neutron [None req-fad19961-02ca-4563-bcb6-95c343d93c0e f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] [instance: fdba30ff-e02a-4857-92f6-1828ce3ab175] Successfully created port: 87a0a5ba-6446-4265-8ada-94d1bd815aed _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 20 14:34:03 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:34:03 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:34:03 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:34:03.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:34:03 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 20 14:34:03 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/424104226' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 14:34:03 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 20 14:34:03 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/424104226' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 14:34:04 compute-1 nova_compute[225855]: 2026-01-20 14:34:04.209 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:34:04 compute-1 nova_compute[225855]: 2026-01-20 14:34:04.462 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:34:04 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:34:04 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:34:04 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:34:04.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:34:04 compute-1 nova_compute[225855]: 2026-01-20 14:34:04.692 225859 DEBUG nova.network.neutron [None req-fad19961-02ca-4563-bcb6-95c343d93c0e f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] [instance: fdba30ff-e02a-4857-92f6-1828ce3ab175] Successfully updated port: 87a0a5ba-6446-4265-8ada-94d1bd815aed _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 20 14:34:04 compute-1 nova_compute[225855]: 2026-01-20 14:34:04.707 225859 DEBUG oslo_concurrency.lockutils [None req-fad19961-02ca-4563-bcb6-95c343d93c0e f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Acquiring lock "refresh_cache-fdba30ff-e02a-4857-92f6-1828ce3ab175" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:34:04 compute-1 nova_compute[225855]: 2026-01-20 14:34:04.707 225859 DEBUG oslo_concurrency.lockutils [None req-fad19961-02ca-4563-bcb6-95c343d93c0e f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Acquired lock "refresh_cache-fdba30ff-e02a-4857-92f6-1828ce3ab175" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:34:04 compute-1 nova_compute[225855]: 2026-01-20 14:34:04.707 225859 DEBUG nova.network.neutron [None req-fad19961-02ca-4563-bcb6-95c343d93c0e f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] [instance: fdba30ff-e02a-4857-92f6-1828ce3ab175] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 14:34:04 compute-1 ceph-mon[81775]: pgmap v1307: 321 pgs: 321 active+clean; 304 MiB data, 572 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.1 MiB/s wr, 137 op/s
Jan 20 14:34:04 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/424104226' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 14:34:04 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/424104226' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 14:34:05 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e168 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:34:05 compute-1 nova_compute[225855]: 2026-01-20 14:34:05.120 225859 DEBUG nova.compute.manager [req-e6e71d3b-b359-4542-89e0-33c13e68dac2 req-6433cca5-2ea3-4f3f-bfa8-5d7817c303df 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: fdba30ff-e02a-4857-92f6-1828ce3ab175] Received event network-changed-87a0a5ba-6446-4265-8ada-94d1bd815aed external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:34:05 compute-1 nova_compute[225855]: 2026-01-20 14:34:05.121 225859 DEBUG nova.compute.manager [req-e6e71d3b-b359-4542-89e0-33c13e68dac2 req-6433cca5-2ea3-4f3f-bfa8-5d7817c303df 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: fdba30ff-e02a-4857-92f6-1828ce3ab175] Refreshing instance network info cache due to event network-changed-87a0a5ba-6446-4265-8ada-94d1bd815aed. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 14:34:05 compute-1 nova_compute[225855]: 2026-01-20 14:34:05.121 225859 DEBUG oslo_concurrency.lockutils [req-e6e71d3b-b359-4542-89e0-33c13e68dac2 req-6433cca5-2ea3-4f3f-bfa8-5d7817c303df 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-fdba30ff-e02a-4857-92f6-1828ce3ab175" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:34:05 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:34:05 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:34:05 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:34:05.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:34:05 compute-1 nova_compute[225855]: 2026-01-20 14:34:05.509 225859 DEBUG nova.network.neutron [None req-fad19961-02ca-4563-bcb6-95c343d93c0e f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] [instance: fdba30ff-e02a-4857-92f6-1828ce3ab175] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 20 14:34:05 compute-1 ceph-mon[81775]: pgmap v1308: 321 pgs: 321 active+clean; 357 MiB data, 602 MiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 5.7 MiB/s wr, 237 op/s
Jan 20 14:34:06 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:34:06 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:34:06 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:34:06.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:34:07 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:34:07 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:34:07 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:34:07.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:34:08 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:34:08 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:34:08 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:34:08.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:34:09 compute-1 ceph-mon[81775]: pgmap v1309: 321 pgs: 321 active+clean; 349 MiB data, 599 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 6.0 MiB/s wr, 251 op/s
Jan 20 14:34:09 compute-1 podman[244732]: 2026-01-20 14:34:09.127758117 +0000 UTC m=+0.170242038 container health_status 72be95c12e041eb2cd1a16ad70dfa1391d9f03a5f3180a614ead9459dcfcd46f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Jan 20 14:34:09 compute-1 nova_compute[225855]: 2026-01-20 14:34:09.212 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:34:09 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:34:09 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:34:09 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:34:09.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:34:09 compute-1 nova_compute[225855]: 2026-01-20 14:34:09.464 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:34:10 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e168 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:34:10 compute-1 nova_compute[225855]: 2026-01-20 14:34:10.248 225859 DEBUG nova.network.neutron [None req-fad19961-02ca-4563-bcb6-95c343d93c0e f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] [instance: fdba30ff-e02a-4857-92f6-1828ce3ab175] Updating instance_info_cache with network_info: [{"id": "87a0a5ba-6446-4265-8ada-94d1bd815aed", "address": "fa:16:3e:70:1f:46", "network": {"id": "33c9a20a-d976-42a8-b8bf-f83ddfc97c9a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-202342440-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a841e7a1434c488390475174e10bc161", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap87a0a5ba-64", "ovs_interfaceid": "87a0a5ba-6446-4265-8ada-94d1bd815aed", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:34:10 compute-1 nova_compute[225855]: 2026-01-20 14:34:10.294 225859 DEBUG oslo_concurrency.lockutils [None req-fad19961-02ca-4563-bcb6-95c343d93c0e f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Releasing lock "refresh_cache-fdba30ff-e02a-4857-92f6-1828ce3ab175" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:34:10 compute-1 nova_compute[225855]: 2026-01-20 14:34:10.294 225859 DEBUG nova.compute.manager [None req-fad19961-02ca-4563-bcb6-95c343d93c0e f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] [instance: fdba30ff-e02a-4857-92f6-1828ce3ab175] Instance network_info: |[{"id": "87a0a5ba-6446-4265-8ada-94d1bd815aed", "address": "fa:16:3e:70:1f:46", "network": {"id": "33c9a20a-d976-42a8-b8bf-f83ddfc97c9a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-202342440-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a841e7a1434c488390475174e10bc161", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap87a0a5ba-64", "ovs_interfaceid": "87a0a5ba-6446-4265-8ada-94d1bd815aed", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 20 14:34:10 compute-1 nova_compute[225855]: 2026-01-20 14:34:10.295 225859 DEBUG oslo_concurrency.lockutils [req-e6e71d3b-b359-4542-89e0-33c13e68dac2 req-6433cca5-2ea3-4f3f-bfa8-5d7817c303df 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-fdba30ff-e02a-4857-92f6-1828ce3ab175" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:34:10 compute-1 nova_compute[225855]: 2026-01-20 14:34:10.295 225859 DEBUG nova.network.neutron [req-e6e71d3b-b359-4542-89e0-33c13e68dac2 req-6433cca5-2ea3-4f3f-bfa8-5d7817c303df 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: fdba30ff-e02a-4857-92f6-1828ce3ab175] Refreshing network info cache for port 87a0a5ba-6446-4265-8ada-94d1bd815aed _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 14:34:10 compute-1 nova_compute[225855]: 2026-01-20 14:34:10.298 225859 DEBUG nova.virt.libvirt.driver [None req-fad19961-02ca-4563-bcb6-95c343d93c0e f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] [instance: fdba30ff-e02a-4857-92f6-1828ce3ab175] Start _get_guest_xml network_info=[{"id": "87a0a5ba-6446-4265-8ada-94d1bd815aed", "address": "fa:16:3e:70:1f:46", "network": {"id": "33c9a20a-d976-42a8-b8bf-f83ddfc97c9a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-202342440-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a841e7a1434c488390475174e10bc161", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap87a0a5ba-64", "ovs_interfaceid": "87a0a5ba-6446-4265-8ada-94d1bd815aed", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'device_type': 'disk', 'encryption_options': None, 'size': 0, 'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'guest_format': None, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'image_id': 'a32b3e07-16d8-46fd-9a7b-c242c432fcf9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 20 14:34:10 compute-1 nova_compute[225855]: 2026-01-20 14:34:10.302 225859 WARNING nova.virt.libvirt.driver [None req-fad19961-02ca-4563-bcb6-95c343d93c0e f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 14:34:10 compute-1 nova_compute[225855]: 2026-01-20 14:34:10.313 225859 DEBUG nova.virt.libvirt.host [None req-fad19961-02ca-4563-bcb6-95c343d93c0e f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 20 14:34:10 compute-1 nova_compute[225855]: 2026-01-20 14:34:10.314 225859 DEBUG nova.virt.libvirt.host [None req-fad19961-02ca-4563-bcb6-95c343d93c0e f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 20 14:34:10 compute-1 nova_compute[225855]: 2026-01-20 14:34:10.317 225859 DEBUG nova.virt.libvirt.host [None req-fad19961-02ca-4563-bcb6-95c343d93c0e f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 20 14:34:10 compute-1 nova_compute[225855]: 2026-01-20 14:34:10.318 225859 DEBUG nova.virt.libvirt.host [None req-fad19961-02ca-4563-bcb6-95c343d93c0e f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 20 14:34:10 compute-1 nova_compute[225855]: 2026-01-20 14:34:10.319 225859 DEBUG nova.virt.libvirt.driver [None req-fad19961-02ca-4563-bcb6-95c343d93c0e f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 20 14:34:10 compute-1 nova_compute[225855]: 2026-01-20 14:34:10.319 225859 DEBUG nova.virt.hardware [None req-fad19961-02ca-4563-bcb6-95c343d93c0e f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-20T14:21:55Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='522deaab-a741-4dbb-932d-d8b13a211c33',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 20 14:34:10 compute-1 nova_compute[225855]: 2026-01-20 14:34:10.320 225859 DEBUG nova.virt.hardware [None req-fad19961-02ca-4563-bcb6-95c343d93c0e f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 20 14:34:10 compute-1 nova_compute[225855]: 2026-01-20 14:34:10.320 225859 DEBUG nova.virt.hardware [None req-fad19961-02ca-4563-bcb6-95c343d93c0e f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 20 14:34:10 compute-1 nova_compute[225855]: 2026-01-20 14:34:10.320 225859 DEBUG nova.virt.hardware [None req-fad19961-02ca-4563-bcb6-95c343d93c0e f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 20 14:34:10 compute-1 nova_compute[225855]: 2026-01-20 14:34:10.321 225859 DEBUG nova.virt.hardware [None req-fad19961-02ca-4563-bcb6-95c343d93c0e f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 20 14:34:10 compute-1 nova_compute[225855]: 2026-01-20 14:34:10.321 225859 DEBUG nova.virt.hardware [None req-fad19961-02ca-4563-bcb6-95c343d93c0e f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 20 14:34:10 compute-1 nova_compute[225855]: 2026-01-20 14:34:10.321 225859 DEBUG nova.virt.hardware [None req-fad19961-02ca-4563-bcb6-95c343d93c0e f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 20 14:34:10 compute-1 nova_compute[225855]: 2026-01-20 14:34:10.321 225859 DEBUG nova.virt.hardware [None req-fad19961-02ca-4563-bcb6-95c343d93c0e f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 20 14:34:10 compute-1 nova_compute[225855]: 2026-01-20 14:34:10.321 225859 DEBUG nova.virt.hardware [None req-fad19961-02ca-4563-bcb6-95c343d93c0e f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 20 14:34:10 compute-1 nova_compute[225855]: 2026-01-20 14:34:10.322 225859 DEBUG nova.virt.hardware [None req-fad19961-02ca-4563-bcb6-95c343d93c0e f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 20 14:34:10 compute-1 nova_compute[225855]: 2026-01-20 14:34:10.322 225859 DEBUG nova.virt.hardware [None req-fad19961-02ca-4563-bcb6-95c343d93c0e f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 20 14:34:10 compute-1 nova_compute[225855]: 2026-01-20 14:34:10.324 225859 DEBUG oslo_concurrency.processutils [None req-fad19961-02ca-4563-bcb6-95c343d93c0e f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:34:10 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:34:10 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:34:10 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:34:10.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:34:10 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:34:10 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3250290687' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:34:10 compute-1 ceph-mon[81775]: pgmap v1310: 321 pgs: 321 active+clean; 341 MiB data, 592 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 5.3 MiB/s wr, 248 op/s
Jan 20 14:34:11 compute-1 nova_compute[225855]: 2026-01-20 14:34:11.214 225859 DEBUG oslo_concurrency.processutils [None req-fad19961-02ca-4563-bcb6-95c343d93c0e f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.890s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:34:11 compute-1 nova_compute[225855]: 2026-01-20 14:34:11.247 225859 DEBUG nova.storage.rbd_utils [None req-fad19961-02ca-4563-bcb6-95c343d93c0e f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] rbd image fdba30ff-e02a-4857-92f6-1828ce3ab175_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:34:11 compute-1 nova_compute[225855]: 2026-01-20 14:34:11.252 225859 DEBUG oslo_concurrency.processutils [None req-fad19961-02ca-4563-bcb6-95c343d93c0e f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:34:11 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:34:11 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:34:11 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:34:11.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:34:11 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:34:11 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1227090643' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:34:11 compute-1 nova_compute[225855]: 2026-01-20 14:34:11.711 225859 DEBUG oslo_concurrency.processutils [None req-fad19961-02ca-4563-bcb6-95c343d93c0e f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:34:11 compute-1 nova_compute[225855]: 2026-01-20 14:34:11.713 225859 DEBUG nova.virt.libvirt.vif [None req-fad19961-02ca-4563-bcb6-95c343d93c0e f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T14:33:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1832306325',display_name='tempest-ServersAdminTestJSON-server-1832306325',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1832306325',id=42,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a841e7a1434c488390475174e10bc161',ramdisk_id='',reservation_id='r-sb3w0f0c',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-1261404595',owner_user_name='tempest-ServersAdminTestJSON-1261404595-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T14:34:00Z,user_data=None,user_id='f51c395107c84dbd9067113b84ff01dd',uuid=fdba30ff-e02a-4857-92f6-1828ce3ab175,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "87a0a5ba-6446-4265-8ada-94d1bd815aed", "address": "fa:16:3e:70:1f:46", "network": {"id": "33c9a20a-d976-42a8-b8bf-f83ddfc97c9a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-202342440-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a841e7a1434c488390475174e10bc161", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap87a0a5ba-64", "ovs_interfaceid": "87a0a5ba-6446-4265-8ada-94d1bd815aed", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 20 14:34:11 compute-1 nova_compute[225855]: 2026-01-20 14:34:11.713 225859 DEBUG nova.network.os_vif_util [None req-fad19961-02ca-4563-bcb6-95c343d93c0e f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Converting VIF {"id": "87a0a5ba-6446-4265-8ada-94d1bd815aed", "address": "fa:16:3e:70:1f:46", "network": {"id": "33c9a20a-d976-42a8-b8bf-f83ddfc97c9a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-202342440-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a841e7a1434c488390475174e10bc161", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap87a0a5ba-64", "ovs_interfaceid": "87a0a5ba-6446-4265-8ada-94d1bd815aed", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:34:11 compute-1 nova_compute[225855]: 2026-01-20 14:34:11.714 225859 DEBUG nova.network.os_vif_util [None req-fad19961-02ca-4563-bcb6-95c343d93c0e f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:70:1f:46,bridge_name='br-int',has_traffic_filtering=True,id=87a0a5ba-6446-4265-8ada-94d1bd815aed,network=Network(33c9a20a-d976-42a8-b8bf-f83ddfc97c9a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap87a0a5ba-64') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:34:11 compute-1 nova_compute[225855]: 2026-01-20 14:34:11.716 225859 DEBUG nova.objects.instance [None req-fad19961-02ca-4563-bcb6-95c343d93c0e f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Lazy-loading 'pci_devices' on Instance uuid fdba30ff-e02a-4857-92f6-1828ce3ab175 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:34:11 compute-1 nova_compute[225855]: 2026-01-20 14:34:11.741 225859 DEBUG nova.virt.libvirt.driver [None req-fad19961-02ca-4563-bcb6-95c343d93c0e f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] [instance: fdba30ff-e02a-4857-92f6-1828ce3ab175] End _get_guest_xml xml=<domain type="kvm">
Jan 20 14:34:11 compute-1 nova_compute[225855]:   <uuid>fdba30ff-e02a-4857-92f6-1828ce3ab175</uuid>
Jan 20 14:34:11 compute-1 nova_compute[225855]:   <name>instance-0000002a</name>
Jan 20 14:34:11 compute-1 nova_compute[225855]:   <memory>131072</memory>
Jan 20 14:34:11 compute-1 nova_compute[225855]:   <vcpu>1</vcpu>
Jan 20 14:34:11 compute-1 nova_compute[225855]:   <metadata>
Jan 20 14:34:11 compute-1 nova_compute[225855]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 14:34:11 compute-1 nova_compute[225855]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 14:34:11 compute-1 nova_compute[225855]:       <nova:name>tempest-ServersAdminTestJSON-server-1832306325</nova:name>
Jan 20 14:34:11 compute-1 nova_compute[225855]:       <nova:creationTime>2026-01-20 14:34:10</nova:creationTime>
Jan 20 14:34:11 compute-1 nova_compute[225855]:       <nova:flavor name="m1.nano">
Jan 20 14:34:11 compute-1 nova_compute[225855]:         <nova:memory>128</nova:memory>
Jan 20 14:34:11 compute-1 nova_compute[225855]:         <nova:disk>1</nova:disk>
Jan 20 14:34:11 compute-1 nova_compute[225855]:         <nova:swap>0</nova:swap>
Jan 20 14:34:11 compute-1 nova_compute[225855]:         <nova:ephemeral>0</nova:ephemeral>
Jan 20 14:34:11 compute-1 nova_compute[225855]:         <nova:vcpus>1</nova:vcpus>
Jan 20 14:34:11 compute-1 nova_compute[225855]:       </nova:flavor>
Jan 20 14:34:11 compute-1 nova_compute[225855]:       <nova:owner>
Jan 20 14:34:11 compute-1 nova_compute[225855]:         <nova:user uuid="f51c395107c84dbd9067113b84ff01dd">tempest-ServersAdminTestJSON-1261404595-project-member</nova:user>
Jan 20 14:34:11 compute-1 nova_compute[225855]:         <nova:project uuid="a841e7a1434c488390475174e10bc161">tempest-ServersAdminTestJSON-1261404595</nova:project>
Jan 20 14:34:11 compute-1 nova_compute[225855]:       </nova:owner>
Jan 20 14:34:11 compute-1 nova_compute[225855]:       <nova:root type="image" uuid="a32b3e07-16d8-46fd-9a7b-c242c432fcf9"/>
Jan 20 14:34:11 compute-1 nova_compute[225855]:       <nova:ports>
Jan 20 14:34:11 compute-1 nova_compute[225855]:         <nova:port uuid="87a0a5ba-6446-4265-8ada-94d1bd815aed">
Jan 20 14:34:11 compute-1 nova_compute[225855]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Jan 20 14:34:11 compute-1 nova_compute[225855]:         </nova:port>
Jan 20 14:34:11 compute-1 nova_compute[225855]:       </nova:ports>
Jan 20 14:34:11 compute-1 nova_compute[225855]:     </nova:instance>
Jan 20 14:34:11 compute-1 nova_compute[225855]:   </metadata>
Jan 20 14:34:11 compute-1 nova_compute[225855]:   <sysinfo type="smbios">
Jan 20 14:34:11 compute-1 nova_compute[225855]:     <system>
Jan 20 14:34:11 compute-1 nova_compute[225855]:       <entry name="manufacturer">RDO</entry>
Jan 20 14:34:11 compute-1 nova_compute[225855]:       <entry name="product">OpenStack Compute</entry>
Jan 20 14:34:11 compute-1 nova_compute[225855]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 14:34:11 compute-1 nova_compute[225855]:       <entry name="serial">fdba30ff-e02a-4857-92f6-1828ce3ab175</entry>
Jan 20 14:34:11 compute-1 nova_compute[225855]:       <entry name="uuid">fdba30ff-e02a-4857-92f6-1828ce3ab175</entry>
Jan 20 14:34:11 compute-1 nova_compute[225855]:       <entry name="family">Virtual Machine</entry>
Jan 20 14:34:11 compute-1 nova_compute[225855]:     </system>
Jan 20 14:34:11 compute-1 nova_compute[225855]:   </sysinfo>
Jan 20 14:34:11 compute-1 nova_compute[225855]:   <os>
Jan 20 14:34:11 compute-1 nova_compute[225855]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 20 14:34:11 compute-1 nova_compute[225855]:     <boot dev="hd"/>
Jan 20 14:34:11 compute-1 nova_compute[225855]:     <smbios mode="sysinfo"/>
Jan 20 14:34:11 compute-1 nova_compute[225855]:   </os>
Jan 20 14:34:11 compute-1 nova_compute[225855]:   <features>
Jan 20 14:34:11 compute-1 nova_compute[225855]:     <acpi/>
Jan 20 14:34:11 compute-1 nova_compute[225855]:     <apic/>
Jan 20 14:34:11 compute-1 nova_compute[225855]:     <vmcoreinfo/>
Jan 20 14:34:11 compute-1 nova_compute[225855]:   </features>
Jan 20 14:34:11 compute-1 nova_compute[225855]:   <clock offset="utc">
Jan 20 14:34:11 compute-1 nova_compute[225855]:     <timer name="pit" tickpolicy="delay"/>
Jan 20 14:34:11 compute-1 nova_compute[225855]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 20 14:34:11 compute-1 nova_compute[225855]:     <timer name="hpet" present="no"/>
Jan 20 14:34:11 compute-1 nova_compute[225855]:   </clock>
Jan 20 14:34:11 compute-1 nova_compute[225855]:   <cpu mode="custom" match="exact">
Jan 20 14:34:11 compute-1 nova_compute[225855]:     <model>Nehalem</model>
Jan 20 14:34:11 compute-1 nova_compute[225855]:     <topology sockets="1" cores="1" threads="1"/>
Jan 20 14:34:11 compute-1 nova_compute[225855]:   </cpu>
Jan 20 14:34:11 compute-1 nova_compute[225855]:   <devices>
Jan 20 14:34:11 compute-1 nova_compute[225855]:     <disk type="network" device="disk">
Jan 20 14:34:11 compute-1 nova_compute[225855]:       <driver type="raw" cache="none"/>
Jan 20 14:34:11 compute-1 nova_compute[225855]:       <source protocol="rbd" name="vms/fdba30ff-e02a-4857-92f6-1828ce3ab175_disk">
Jan 20 14:34:11 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:34:11 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:34:11 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:34:11 compute-1 nova_compute[225855]:       </source>
Jan 20 14:34:11 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 14:34:11 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:34:11 compute-1 nova_compute[225855]:       </auth>
Jan 20 14:34:11 compute-1 nova_compute[225855]:       <target dev="vda" bus="virtio"/>
Jan 20 14:34:11 compute-1 nova_compute[225855]:     </disk>
Jan 20 14:34:11 compute-1 nova_compute[225855]:     <disk type="network" device="cdrom">
Jan 20 14:34:11 compute-1 nova_compute[225855]:       <driver type="raw" cache="none"/>
Jan 20 14:34:11 compute-1 nova_compute[225855]:       <source protocol="rbd" name="vms/fdba30ff-e02a-4857-92f6-1828ce3ab175_disk.config">
Jan 20 14:34:11 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:34:11 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:34:11 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:34:11 compute-1 nova_compute[225855]:       </source>
Jan 20 14:34:11 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 14:34:11 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:34:11 compute-1 nova_compute[225855]:       </auth>
Jan 20 14:34:11 compute-1 nova_compute[225855]:       <target dev="sda" bus="sata"/>
Jan 20 14:34:11 compute-1 nova_compute[225855]:     </disk>
Jan 20 14:34:11 compute-1 nova_compute[225855]:     <interface type="ethernet">
Jan 20 14:34:11 compute-1 nova_compute[225855]:       <mac address="fa:16:3e:70:1f:46"/>
Jan 20 14:34:11 compute-1 nova_compute[225855]:       <model type="virtio"/>
Jan 20 14:34:11 compute-1 nova_compute[225855]:       <driver name="vhost" rx_queue_size="512"/>
Jan 20 14:34:11 compute-1 nova_compute[225855]:       <mtu size="1442"/>
Jan 20 14:34:11 compute-1 nova_compute[225855]:       <target dev="tap87a0a5ba-64"/>
Jan 20 14:34:11 compute-1 nova_compute[225855]:     </interface>
Jan 20 14:34:11 compute-1 nova_compute[225855]:     <serial type="pty">
Jan 20 14:34:11 compute-1 nova_compute[225855]:       <log file="/var/lib/nova/instances/fdba30ff-e02a-4857-92f6-1828ce3ab175/console.log" append="off"/>
Jan 20 14:34:11 compute-1 nova_compute[225855]:     </serial>
Jan 20 14:34:11 compute-1 nova_compute[225855]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 20 14:34:11 compute-1 nova_compute[225855]:     <video>
Jan 20 14:34:11 compute-1 nova_compute[225855]:       <model type="virtio"/>
Jan 20 14:34:11 compute-1 nova_compute[225855]:     </video>
Jan 20 14:34:11 compute-1 nova_compute[225855]:     <input type="tablet" bus="usb"/>
Jan 20 14:34:11 compute-1 nova_compute[225855]:     <rng model="virtio">
Jan 20 14:34:11 compute-1 nova_compute[225855]:       <backend model="random">/dev/urandom</backend>
Jan 20 14:34:11 compute-1 nova_compute[225855]:     </rng>
Jan 20 14:34:11 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root"/>
Jan 20 14:34:11 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:34:11 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:34:11 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:34:11 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:34:11 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:34:11 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:34:11 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:34:11 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:34:11 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:34:11 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:34:11 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:34:11 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:34:11 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:34:11 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:34:11 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:34:11 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:34:11 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:34:11 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:34:11 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:34:11 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:34:11 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:34:11 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:34:11 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:34:11 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:34:11 compute-1 nova_compute[225855]:     <controller type="usb" index="0"/>
Jan 20 14:34:11 compute-1 nova_compute[225855]:     <memballoon model="virtio">
Jan 20 14:34:11 compute-1 nova_compute[225855]:       <stats period="10"/>
Jan 20 14:34:11 compute-1 nova_compute[225855]:     </memballoon>
Jan 20 14:34:11 compute-1 nova_compute[225855]:   </devices>
Jan 20 14:34:11 compute-1 nova_compute[225855]: </domain>
Jan 20 14:34:11 compute-1 nova_compute[225855]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 20 14:34:11 compute-1 nova_compute[225855]: 2026-01-20 14:34:11.744 225859 DEBUG nova.compute.manager [None req-fad19961-02ca-4563-bcb6-95c343d93c0e f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] [instance: fdba30ff-e02a-4857-92f6-1828ce3ab175] Preparing to wait for external event network-vif-plugged-87a0a5ba-6446-4265-8ada-94d1bd815aed prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 20 14:34:11 compute-1 nova_compute[225855]: 2026-01-20 14:34:11.744 225859 DEBUG oslo_concurrency.lockutils [None req-fad19961-02ca-4563-bcb6-95c343d93c0e f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Acquiring lock "fdba30ff-e02a-4857-92f6-1828ce3ab175-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:34:11 compute-1 nova_compute[225855]: 2026-01-20 14:34:11.745 225859 DEBUG oslo_concurrency.lockutils [None req-fad19961-02ca-4563-bcb6-95c343d93c0e f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Lock "fdba30ff-e02a-4857-92f6-1828ce3ab175-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:34:11 compute-1 nova_compute[225855]: 2026-01-20 14:34:11.745 225859 DEBUG oslo_concurrency.lockutils [None req-fad19961-02ca-4563-bcb6-95c343d93c0e f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Lock "fdba30ff-e02a-4857-92f6-1828ce3ab175-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:34:11 compute-1 nova_compute[225855]: 2026-01-20 14:34:11.747 225859 DEBUG nova.virt.libvirt.vif [None req-fad19961-02ca-4563-bcb6-95c343d93c0e f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T14:33:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1832306325',display_name='tempest-ServersAdminTestJSON-server-1832306325',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1832306325',id=42,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a841e7a1434c488390475174e10bc161',ramdisk_id='',reservation_id='r-sb3w0f0c',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-1261404595',owner_user_name='tempest-ServersAdminTestJSON-1261404595-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T14:34:00Z,user_data=None,user_id='f51c395107c84dbd9067113b84ff01dd',uuid=fdba30ff-e02a-4857-92f6-1828ce3ab175,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "87a0a5ba-6446-4265-8ada-94d1bd815aed", "address": "fa:16:3e:70:1f:46", "network": {"id": "33c9a20a-d976-42a8-b8bf-f83ddfc97c9a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-202342440-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a841e7a1434c488390475174e10bc161", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap87a0a5ba-64", "ovs_interfaceid": "87a0a5ba-6446-4265-8ada-94d1bd815aed", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 20 14:34:11 compute-1 nova_compute[225855]: 2026-01-20 14:34:11.747 225859 DEBUG nova.network.os_vif_util [None req-fad19961-02ca-4563-bcb6-95c343d93c0e f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Converting VIF {"id": "87a0a5ba-6446-4265-8ada-94d1bd815aed", "address": "fa:16:3e:70:1f:46", "network": {"id": "33c9a20a-d976-42a8-b8bf-f83ddfc97c9a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-202342440-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a841e7a1434c488390475174e10bc161", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap87a0a5ba-64", "ovs_interfaceid": "87a0a5ba-6446-4265-8ada-94d1bd815aed", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:34:11 compute-1 nova_compute[225855]: 2026-01-20 14:34:11.748 225859 DEBUG nova.network.os_vif_util [None req-fad19961-02ca-4563-bcb6-95c343d93c0e f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:70:1f:46,bridge_name='br-int',has_traffic_filtering=True,id=87a0a5ba-6446-4265-8ada-94d1bd815aed,network=Network(33c9a20a-d976-42a8-b8bf-f83ddfc97c9a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap87a0a5ba-64') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:34:11 compute-1 nova_compute[225855]: 2026-01-20 14:34:11.749 225859 DEBUG os_vif [None req-fad19961-02ca-4563-bcb6-95c343d93c0e f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:70:1f:46,bridge_name='br-int',has_traffic_filtering=True,id=87a0a5ba-6446-4265-8ada-94d1bd815aed,network=Network(33c9a20a-d976-42a8-b8bf-f83ddfc97c9a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap87a0a5ba-64') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 20 14:34:11 compute-1 nova_compute[225855]: 2026-01-20 14:34:11.750 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:34:11 compute-1 nova_compute[225855]: 2026-01-20 14:34:11.751 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:34:11 compute-1 nova_compute[225855]: 2026-01-20 14:34:11.752 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:34:11 compute-1 nova_compute[225855]: 2026-01-20 14:34:11.757 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:34:11 compute-1 nova_compute[225855]: 2026-01-20 14:34:11.758 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap87a0a5ba-64, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:34:11 compute-1 nova_compute[225855]: 2026-01-20 14:34:11.759 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap87a0a5ba-64, col_values=(('external_ids', {'iface-id': '87a0a5ba-6446-4265-8ada-94d1bd815aed', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:70:1f:46', 'vm-uuid': 'fdba30ff-e02a-4857-92f6-1828ce3ab175'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:34:11 compute-1 nova_compute[225855]: 2026-01-20 14:34:11.803 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:34:11 compute-1 NetworkManager[49104]: <info>  [1768919651.8041] manager: (tap87a0a5ba-64): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/66)
Jan 20 14:34:11 compute-1 nova_compute[225855]: 2026-01-20 14:34:11.806 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 14:34:11 compute-1 nova_compute[225855]: 2026-01-20 14:34:11.810 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:34:11 compute-1 nova_compute[225855]: 2026-01-20 14:34:11.812 225859 INFO os_vif [None req-fad19961-02ca-4563-bcb6-95c343d93c0e f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:70:1f:46,bridge_name='br-int',has_traffic_filtering=True,id=87a0a5ba-6446-4265-8ada-94d1bd815aed,network=Network(33c9a20a-d976-42a8-b8bf-f83ddfc97c9a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap87a0a5ba-64')
Jan 20 14:34:11 compute-1 nova_compute[225855]: 2026-01-20 14:34:11.870 225859 DEBUG nova.virt.libvirt.driver [None req-fad19961-02ca-4563-bcb6-95c343d93c0e f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:34:11 compute-1 nova_compute[225855]: 2026-01-20 14:34:11.871 225859 DEBUG nova.virt.libvirt.driver [None req-fad19961-02ca-4563-bcb6-95c343d93c0e f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:34:11 compute-1 nova_compute[225855]: 2026-01-20 14:34:11.871 225859 DEBUG nova.virt.libvirt.driver [None req-fad19961-02ca-4563-bcb6-95c343d93c0e f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] No VIF found with MAC fa:16:3e:70:1f:46, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 20 14:34:11 compute-1 nova_compute[225855]: 2026-01-20 14:34:11.872 225859 INFO nova.virt.libvirt.driver [None req-fad19961-02ca-4563-bcb6-95c343d93c0e f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] [instance: fdba30ff-e02a-4857-92f6-1828ce3ab175] Using config drive
Jan 20 14:34:11 compute-1 nova_compute[225855]: 2026-01-20 14:34:11.897 225859 DEBUG nova.storage.rbd_utils [None req-fad19961-02ca-4563-bcb6-95c343d93c0e f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] rbd image fdba30ff-e02a-4857-92f6-1828ce3ab175_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:34:12 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/3250290687' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:34:12 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/1936457463' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:34:12 compute-1 ceph-mon[81775]: pgmap v1311: 321 pgs: 321 active+clean; 326 MiB data, 585 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 5.0 MiB/s wr, 239 op/s
Jan 20 14:34:12 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/1227090643' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:34:12 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:34:12 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:34:12 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:34:12.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:34:12 compute-1 nova_compute[225855]: 2026-01-20 14:34:12.813 225859 INFO nova.virt.libvirt.driver [None req-fad19961-02ca-4563-bcb6-95c343d93c0e f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] [instance: fdba30ff-e02a-4857-92f6-1828ce3ab175] Creating config drive at /var/lib/nova/instances/fdba30ff-e02a-4857-92f6-1828ce3ab175/disk.config
Jan 20 14:34:12 compute-1 nova_compute[225855]: 2026-01-20 14:34:12.822 225859 DEBUG oslo_concurrency.processutils [None req-fad19961-02ca-4563-bcb6-95c343d93c0e f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/fdba30ff-e02a-4857-92f6-1828ce3ab175/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_b6rd1t4 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:34:12 compute-1 nova_compute[225855]: 2026-01-20 14:34:12.961 225859 DEBUG oslo_concurrency.processutils [None req-fad19961-02ca-4563-bcb6-95c343d93c0e f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/fdba30ff-e02a-4857-92f6-1828ce3ab175/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_b6rd1t4" returned: 0 in 0.139s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:34:13 compute-1 nova_compute[225855]: 2026-01-20 14:34:12.999 225859 DEBUG nova.storage.rbd_utils [None req-fad19961-02ca-4563-bcb6-95c343d93c0e f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] rbd image fdba30ff-e02a-4857-92f6-1828ce3ab175_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:34:13 compute-1 nova_compute[225855]: 2026-01-20 14:34:13.002 225859 DEBUG oslo_concurrency.processutils [None req-fad19961-02ca-4563-bcb6-95c343d93c0e f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/fdba30ff-e02a-4857-92f6-1828ce3ab175/disk.config fdba30ff-e02a-4857-92f6-1828ce3ab175_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:34:13 compute-1 nova_compute[225855]: 2026-01-20 14:34:13.100 225859 DEBUG nova.network.neutron [req-e6e71d3b-b359-4542-89e0-33c13e68dac2 req-6433cca5-2ea3-4f3f-bfa8-5d7817c303df 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: fdba30ff-e02a-4857-92f6-1828ce3ab175] Updated VIF entry in instance network info cache for port 87a0a5ba-6446-4265-8ada-94d1bd815aed. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 14:34:13 compute-1 nova_compute[225855]: 2026-01-20 14:34:13.101 225859 DEBUG nova.network.neutron [req-e6e71d3b-b359-4542-89e0-33c13e68dac2 req-6433cca5-2ea3-4f3f-bfa8-5d7817c303df 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: fdba30ff-e02a-4857-92f6-1828ce3ab175] Updating instance_info_cache with network_info: [{"id": "87a0a5ba-6446-4265-8ada-94d1bd815aed", "address": "fa:16:3e:70:1f:46", "network": {"id": "33c9a20a-d976-42a8-b8bf-f83ddfc97c9a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-202342440-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a841e7a1434c488390475174e10bc161", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap87a0a5ba-64", "ovs_interfaceid": "87a0a5ba-6446-4265-8ada-94d1bd815aed", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:34:13 compute-1 nova_compute[225855]: 2026-01-20 14:34:13.137 225859 DEBUG oslo_concurrency.lockutils [req-e6e71d3b-b359-4542-89e0-33c13e68dac2 req-6433cca5-2ea3-4f3f-bfa8-5d7817c303df 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-fdba30ff-e02a-4857-92f6-1828ce3ab175" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:34:13 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:34:13 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:34:13 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:34:13.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:34:13 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 20 14:34:13 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/467321397' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 14:34:13 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 20 14:34:13 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/467321397' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 14:34:14 compute-1 nova_compute[225855]: 2026-01-20 14:34:14.375 225859 DEBUG oslo_concurrency.processutils [None req-fad19961-02ca-4563-bcb6-95c343d93c0e f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/fdba30ff-e02a-4857-92f6-1828ce3ab175/disk.config fdba30ff-e02a-4857-92f6-1828ce3ab175_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.373s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:34:14 compute-1 nova_compute[225855]: 2026-01-20 14:34:14.376 225859 INFO nova.virt.libvirt.driver [None req-fad19961-02ca-4563-bcb6-95c343d93c0e f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] [instance: fdba30ff-e02a-4857-92f6-1828ce3ab175] Deleting local config drive /var/lib/nova/instances/fdba30ff-e02a-4857-92f6-1828ce3ab175/disk.config because it was imported into RBD.
Jan 20 14:34:14 compute-1 kernel: tap87a0a5ba-64: entered promiscuous mode
Jan 20 14:34:14 compute-1 NetworkManager[49104]: <info>  [1768919654.4383] manager: (tap87a0a5ba-64): new Tun device (/org/freedesktop/NetworkManager/Devices/67)
Jan 20 14:34:14 compute-1 ovn_controller[130490]: 2026-01-20T14:34:14Z|00142|binding|INFO|Claiming lport 87a0a5ba-6446-4265-8ada-94d1bd815aed for this chassis.
Jan 20 14:34:14 compute-1 ovn_controller[130490]: 2026-01-20T14:34:14Z|00143|binding|INFO|87a0a5ba-6446-4265-8ada-94d1bd815aed: Claiming fa:16:3e:70:1f:46 10.100.0.10
Jan 20 14:34:14 compute-1 nova_compute[225855]: 2026-01-20 14:34:14.441 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:34:14 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:34:14.452 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:70:1f:46 10.100.0.10'], port_security=['fa:16:3e:70:1f:46 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'fdba30ff-e02a-4857-92f6-1828ce3ab175', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-33c9a20a-d976-42a8-b8bf-f83ddfc97c9a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a841e7a1434c488390475174e10bc161', 'neutron:revision_number': '2', 'neutron:security_group_ids': '0bbdea05-fba7-47c7-ba4e-5dac58212a25', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c43dea88-ea55-4069-a4be-2c30a432a754, chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], logical_port=87a0a5ba-6446-4265-8ada-94d1bd815aed) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:34:14 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:34:14.455 140354 INFO neutron.agent.ovn.metadata.agent [-] Port 87a0a5ba-6446-4265-8ada-94d1bd815aed in datapath 33c9a20a-d976-42a8-b8bf-f83ddfc97c9a bound to our chassis
Jan 20 14:34:14 compute-1 ovn_controller[130490]: 2026-01-20T14:34:14Z|00144|binding|INFO|Setting lport 87a0a5ba-6446-4265-8ada-94d1bd815aed ovn-installed in OVS
Jan 20 14:34:14 compute-1 ovn_controller[130490]: 2026-01-20T14:34:14Z|00145|binding|INFO|Setting lport 87a0a5ba-6446-4265-8ada-94d1bd815aed up in Southbound
Jan 20 14:34:14 compute-1 nova_compute[225855]: 2026-01-20 14:34:14.458 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:34:14 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:34:14.458 140354 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 33c9a20a-d976-42a8-b8bf-f83ddfc97c9a
Jan 20 14:34:14 compute-1 nova_compute[225855]: 2026-01-20 14:34:14.463 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:34:14 compute-1 nova_compute[225855]: 2026-01-20 14:34:14.465 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:34:14 compute-1 systemd-machined[194361]: New machine qemu-21-instance-0000002a.
Jan 20 14:34:14 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:34:14.475 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[8de91ed2-086f-49d4-841f-14895bcf6adb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:34:14 compute-1 systemd[1]: Started Virtual Machine qemu-21-instance-0000002a.
Jan 20 14:34:14 compute-1 systemd-udevd[244901]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 14:34:14 compute-1 NetworkManager[49104]: <info>  [1768919654.5142] device (tap87a0a5ba-64): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 20 14:34:14 compute-1 NetworkManager[49104]: <info>  [1768919654.5153] device (tap87a0a5ba-64): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 20 14:34:14 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:34:14.516 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[2672983d-b4ba-4f64-a515-f040ee4837c8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:34:14 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:34:14.519 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[4f6566d4-096f-41e6-bdae-74d806ff7ebc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:34:14 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:34:14.554 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[4259060a-1af6-40d6-aed6-7dd314c6af57]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:34:14 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:34:14.575 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[1c974546-29f8-4316-8308-23066b2692af]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap33c9a20a-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:89:8e:bd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 37], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 466055, 'reachable_time': 29489, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 244911, 'error': None, 'target': 'ovnmeta-33c9a20a-d976-42a8-b8bf-f83ddfc97c9a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:34:14 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:34:14.594 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[c8d0bae0-23e6-488f-ac49-5883a0190f3e]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap33c9a20a-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 466065, 'tstamp': 466065}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 244913, 'error': None, 'target': 'ovnmeta-33c9a20a-d976-42a8-b8bf-f83ddfc97c9a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap33c9a20a-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 466068, 'tstamp': 466068}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 244913, 'error': None, 'target': 'ovnmeta-33c9a20a-d976-42a8-b8bf-f83ddfc97c9a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:34:14 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:34:14.596 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap33c9a20a-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:34:14 compute-1 nova_compute[225855]: 2026-01-20 14:34:14.597 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:34:14 compute-1 nova_compute[225855]: 2026-01-20 14:34:14.599 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:34:14 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:34:14.599 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap33c9a20a-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:34:14 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:34:14.600 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:34:14 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:34:14.600 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap33c9a20a-d0, col_values=(('external_ids', {'iface-id': '90c69687-c788-4dba-881f-3ed4a5ee6007'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:34:14 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:34:14.600 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:34:14 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:34:14 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:34:14 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:34:14.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:34:14 compute-1 nova_compute[225855]: 2026-01-20 14:34:14.963 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768919654.962634, fdba30ff-e02a-4857-92f6-1828ce3ab175 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:34:14 compute-1 nova_compute[225855]: 2026-01-20 14:34:14.963 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: fdba30ff-e02a-4857-92f6-1828ce3ab175] VM Started (Lifecycle Event)
Jan 20 14:34:14 compute-1 nova_compute[225855]: 2026-01-20 14:34:14.988 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: fdba30ff-e02a-4857-92f6-1828ce3ab175] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:34:14 compute-1 nova_compute[225855]: 2026-01-20 14:34:14.993 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768919654.9627392, fdba30ff-e02a-4857-92f6-1828ce3ab175 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:34:14 compute-1 nova_compute[225855]: 2026-01-20 14:34:14.993 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: fdba30ff-e02a-4857-92f6-1828ce3ab175] VM Paused (Lifecycle Event)
Jan 20 14:34:15 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e168 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:34:15 compute-1 nova_compute[225855]: 2026-01-20 14:34:15.012 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: fdba30ff-e02a-4857-92f6-1828ce3ab175] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:34:15 compute-1 nova_compute[225855]: 2026-01-20 14:34:15.015 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: fdba30ff-e02a-4857-92f6-1828ce3ab175] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:34:15 compute-1 nova_compute[225855]: 2026-01-20 14:34:15.033 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: fdba30ff-e02a-4857-92f6-1828ce3ab175] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 14:34:15 compute-1 nova_compute[225855]: 2026-01-20 14:34:15.103 225859 DEBUG nova.compute.manager [req-369ef029-38d8-4f4f-84c5-bdc8c0258d9a req-4a938ab4-c5d1-4026-a47f-9a7d908ebddd 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: fdba30ff-e02a-4857-92f6-1828ce3ab175] Received event network-vif-plugged-87a0a5ba-6446-4265-8ada-94d1bd815aed external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:34:15 compute-1 nova_compute[225855]: 2026-01-20 14:34:15.104 225859 DEBUG oslo_concurrency.lockutils [req-369ef029-38d8-4f4f-84c5-bdc8c0258d9a req-4a938ab4-c5d1-4026-a47f-9a7d908ebddd 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "fdba30ff-e02a-4857-92f6-1828ce3ab175-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:34:15 compute-1 nova_compute[225855]: 2026-01-20 14:34:15.104 225859 DEBUG oslo_concurrency.lockutils [req-369ef029-38d8-4f4f-84c5-bdc8c0258d9a req-4a938ab4-c5d1-4026-a47f-9a7d908ebddd 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "fdba30ff-e02a-4857-92f6-1828ce3ab175-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:34:15 compute-1 nova_compute[225855]: 2026-01-20 14:34:15.104 225859 DEBUG oslo_concurrency.lockutils [req-369ef029-38d8-4f4f-84c5-bdc8c0258d9a req-4a938ab4-c5d1-4026-a47f-9a7d908ebddd 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "fdba30ff-e02a-4857-92f6-1828ce3ab175-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:34:15 compute-1 nova_compute[225855]: 2026-01-20 14:34:15.105 225859 DEBUG nova.compute.manager [req-369ef029-38d8-4f4f-84c5-bdc8c0258d9a req-4a938ab4-c5d1-4026-a47f-9a7d908ebddd 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: fdba30ff-e02a-4857-92f6-1828ce3ab175] Processing event network-vif-plugged-87a0a5ba-6446-4265-8ada-94d1bd815aed _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 20 14:34:15 compute-1 nova_compute[225855]: 2026-01-20 14:34:15.105 225859 DEBUG nova.compute.manager [None req-fad19961-02ca-4563-bcb6-95c343d93c0e f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] [instance: fdba30ff-e02a-4857-92f6-1828ce3ab175] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 20 14:34:15 compute-1 nova_compute[225855]: 2026-01-20 14:34:15.109 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768919655.1083775, fdba30ff-e02a-4857-92f6-1828ce3ab175 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:34:15 compute-1 nova_compute[225855]: 2026-01-20 14:34:15.109 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: fdba30ff-e02a-4857-92f6-1828ce3ab175] VM Resumed (Lifecycle Event)
Jan 20 14:34:15 compute-1 nova_compute[225855]: 2026-01-20 14:34:15.110 225859 DEBUG nova.virt.libvirt.driver [None req-fad19961-02ca-4563-bcb6-95c343d93c0e f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] [instance: fdba30ff-e02a-4857-92f6-1828ce3ab175] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 20 14:34:15 compute-1 nova_compute[225855]: 2026-01-20 14:34:15.113 225859 INFO nova.virt.libvirt.driver [-] [instance: fdba30ff-e02a-4857-92f6-1828ce3ab175] Instance spawned successfully.
Jan 20 14:34:15 compute-1 nova_compute[225855]: 2026-01-20 14:34:15.114 225859 DEBUG nova.virt.libvirt.driver [None req-fad19961-02ca-4563-bcb6-95c343d93c0e f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] [instance: fdba30ff-e02a-4857-92f6-1828ce3ab175] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 20 14:34:15 compute-1 nova_compute[225855]: 2026-01-20 14:34:15.142 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: fdba30ff-e02a-4857-92f6-1828ce3ab175] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:34:15 compute-1 nova_compute[225855]: 2026-01-20 14:34:15.147 225859 DEBUG nova.virt.libvirt.driver [None req-fad19961-02ca-4563-bcb6-95c343d93c0e f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] [instance: fdba30ff-e02a-4857-92f6-1828ce3ab175] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:34:15 compute-1 nova_compute[225855]: 2026-01-20 14:34:15.147 225859 DEBUG nova.virt.libvirt.driver [None req-fad19961-02ca-4563-bcb6-95c343d93c0e f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] [instance: fdba30ff-e02a-4857-92f6-1828ce3ab175] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:34:15 compute-1 nova_compute[225855]: 2026-01-20 14:34:15.148 225859 DEBUG nova.virt.libvirt.driver [None req-fad19961-02ca-4563-bcb6-95c343d93c0e f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] [instance: fdba30ff-e02a-4857-92f6-1828ce3ab175] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:34:15 compute-1 nova_compute[225855]: 2026-01-20 14:34:15.148 225859 DEBUG nova.virt.libvirt.driver [None req-fad19961-02ca-4563-bcb6-95c343d93c0e f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] [instance: fdba30ff-e02a-4857-92f6-1828ce3ab175] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:34:15 compute-1 nova_compute[225855]: 2026-01-20 14:34:15.149 225859 DEBUG nova.virt.libvirt.driver [None req-fad19961-02ca-4563-bcb6-95c343d93c0e f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] [instance: fdba30ff-e02a-4857-92f6-1828ce3ab175] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:34:15 compute-1 nova_compute[225855]: 2026-01-20 14:34:15.149 225859 DEBUG nova.virt.libvirt.driver [None req-fad19961-02ca-4563-bcb6-95c343d93c0e f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] [instance: fdba30ff-e02a-4857-92f6-1828ce3ab175] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:34:15 compute-1 nova_compute[225855]: 2026-01-20 14:34:15.153 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: fdba30ff-e02a-4857-92f6-1828ce3ab175] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:34:15 compute-1 ceph-mon[81775]: pgmap v1312: 321 pgs: 321 active+clean; 326 MiB data, 585 MiB used, 20 GiB / 21 GiB avail; 1.0 MiB/s rd, 3.0 MiB/s wr, 147 op/s
Jan 20 14:34:15 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/467321397' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 14:34:15 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/467321397' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 14:34:15 compute-1 nova_compute[225855]: 2026-01-20 14:34:15.201 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: fdba30ff-e02a-4857-92f6-1828ce3ab175] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 14:34:15 compute-1 nova_compute[225855]: 2026-01-20 14:34:15.224 225859 INFO nova.compute.manager [None req-fad19961-02ca-4563-bcb6-95c343d93c0e f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] [instance: fdba30ff-e02a-4857-92f6-1828ce3ab175] Took 14.49 seconds to spawn the instance on the hypervisor.
Jan 20 14:34:15 compute-1 nova_compute[225855]: 2026-01-20 14:34:15.225 225859 DEBUG nova.compute.manager [None req-fad19961-02ca-4563-bcb6-95c343d93c0e f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] [instance: fdba30ff-e02a-4857-92f6-1828ce3ab175] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:34:15 compute-1 nova_compute[225855]: 2026-01-20 14:34:15.312 225859 INFO nova.compute.manager [None req-fad19961-02ca-4563-bcb6-95c343d93c0e f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] [instance: fdba30ff-e02a-4857-92f6-1828ce3ab175] Took 15.73 seconds to build instance.
Jan 20 14:34:15 compute-1 nova_compute[225855]: 2026-01-20 14:34:15.332 225859 DEBUG oslo_concurrency.lockutils [None req-fad19961-02ca-4563-bcb6-95c343d93c0e f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Lock "fdba30ff-e02a-4857-92f6-1828ce3ab175" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 15.829s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:34:15 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:34:15 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:34:15 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:34:15.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:34:16 compute-1 ceph-mon[81775]: pgmap v1313: 321 pgs: 321 active+clean; 326 MiB data, 586 MiB used, 20 GiB / 21 GiB avail; 1.0 MiB/s rd, 3.0 MiB/s wr, 155 op/s
Jan 20 14:34:16 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/4048469168' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 14:34:16 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/4048469168' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 14:34:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:34:16.392 140354 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:34:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:34:16.393 140354 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:34:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:34:16.393 140354 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:34:16 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:34:16 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:34:16 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:34:16.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:34:16 compute-1 nova_compute[225855]: 2026-01-20 14:34:16.805 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:34:17 compute-1 nova_compute[225855]: 2026-01-20 14:34:17.210 225859 DEBUG nova.compute.manager [req-ad17f70f-ff76-489a-a0c9-eeca4723e611 req-854a633c-2da8-4aed-aabf-298119db7c86 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: fdba30ff-e02a-4857-92f6-1828ce3ab175] Received event network-vif-plugged-87a0a5ba-6446-4265-8ada-94d1bd815aed external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:34:17 compute-1 nova_compute[225855]: 2026-01-20 14:34:17.210 225859 DEBUG oslo_concurrency.lockutils [req-ad17f70f-ff76-489a-a0c9-eeca4723e611 req-854a633c-2da8-4aed-aabf-298119db7c86 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "fdba30ff-e02a-4857-92f6-1828ce3ab175-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:34:17 compute-1 nova_compute[225855]: 2026-01-20 14:34:17.211 225859 DEBUG oslo_concurrency.lockutils [req-ad17f70f-ff76-489a-a0c9-eeca4723e611 req-854a633c-2da8-4aed-aabf-298119db7c86 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "fdba30ff-e02a-4857-92f6-1828ce3ab175-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:34:17 compute-1 nova_compute[225855]: 2026-01-20 14:34:17.211 225859 DEBUG oslo_concurrency.lockutils [req-ad17f70f-ff76-489a-a0c9-eeca4723e611 req-854a633c-2da8-4aed-aabf-298119db7c86 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "fdba30ff-e02a-4857-92f6-1828ce3ab175-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:34:17 compute-1 nova_compute[225855]: 2026-01-20 14:34:17.212 225859 DEBUG nova.compute.manager [req-ad17f70f-ff76-489a-a0c9-eeca4723e611 req-854a633c-2da8-4aed-aabf-298119db7c86 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: fdba30ff-e02a-4857-92f6-1828ce3ab175] No waiting events found dispatching network-vif-plugged-87a0a5ba-6446-4265-8ada-94d1bd815aed pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:34:17 compute-1 nova_compute[225855]: 2026-01-20 14:34:17.212 225859 WARNING nova.compute.manager [req-ad17f70f-ff76-489a-a0c9-eeca4723e611 req-854a633c-2da8-4aed-aabf-298119db7c86 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: fdba30ff-e02a-4857-92f6-1828ce3ab175] Received unexpected event network-vif-plugged-87a0a5ba-6446-4265-8ada-94d1bd815aed for instance with vm_state active and task_state None.
Jan 20 14:34:17 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:34:17 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:34:17 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:34:17.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:34:18 compute-1 nova_compute[225855]: 2026-01-20 14:34:18.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:34:18 compute-1 nova_compute[225855]: 2026-01-20 14:34:18.339 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 14:34:18 compute-1 ceph-mon[81775]: pgmap v1314: 321 pgs: 321 active+clean; 326 MiB data, 586 MiB used, 20 GiB / 21 GiB avail; 398 KiB/s rd, 389 KiB/s wr, 70 op/s
Jan 20 14:34:18 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:34:18 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:34:18 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:34:18.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:34:19 compute-1 nova_compute[225855]: 2026-01-20 14:34:19.340 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:34:19 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:34:19 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:34:19 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:34:19.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:34:19 compute-1 nova_compute[225855]: 2026-01-20 14:34:19.467 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:34:19 compute-1 ceph-mon[81775]: pgmap v1315: 321 pgs: 321 active+clean; 308 MiB data, 579 MiB used, 20 GiB / 21 GiB avail; 600 KiB/s rd, 27 KiB/s wr, 57 op/s
Jan 20 14:34:19 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/1805719283' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:34:20 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e168 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:34:20 compute-1 podman[244959]: 2026-01-20 14:34:20.023638502 +0000 UTC m=+0.065904764 container health_status 533cd8e0c3b8b7f910f15cef49e7f879ab46c12ce646a35628c15f13725eb500 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202)
Jan 20 14:34:20 compute-1 nova_compute[225855]: 2026-01-20 14:34:20.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:34:20 compute-1 nova_compute[225855]: 2026-01-20 14:34:20.340 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 14:34:20 compute-1 nova_compute[225855]: 2026-01-20 14:34:20.340 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 20 14:34:20 compute-1 nova_compute[225855]: 2026-01-20 14:34:20.590 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "refresh_cache-2ec7b07d-b593-46b7-9751-b6116e4d2cec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:34:20 compute-1 nova_compute[225855]: 2026-01-20 14:34:20.590 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquired lock "refresh_cache-2ec7b07d-b593-46b7-9751-b6116e4d2cec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:34:20 compute-1 nova_compute[225855]: 2026-01-20 14:34:20.591 225859 DEBUG nova.network.neutron [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] [instance: 2ec7b07d-b593-46b7-9751-b6116e4d2cec] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 20 14:34:20 compute-1 nova_compute[225855]: 2026-01-20 14:34:20.592 225859 DEBUG nova.objects.instance [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 2ec7b07d-b593-46b7-9751-b6116e4d2cec obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:34:20 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:34:20 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:34:20 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:34:20.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:34:21 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:34:21 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:34:21 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:34:21.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:34:21 compute-1 nova_compute[225855]: 2026-01-20 14:34:21.808 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:34:21 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/3074879648' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:34:22 compute-1 sudo[244980]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:34:22 compute-1 sudo[244980]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:34:22 compute-1 sudo[244980]: pam_unix(sudo:session): session closed for user root
Jan 20 14:34:22 compute-1 sudo[245005]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:34:22 compute-1 sudo[245005]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:34:22 compute-1 sudo[245005]: pam_unix(sudo:session): session closed for user root
Jan 20 14:34:22 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:34:22 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:34:22 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:34:22.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:34:22 compute-1 ceph-mon[81775]: pgmap v1316: 321 pgs: 321 active+clean; 246 MiB data, 539 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 28 KiB/s wr, 125 op/s
Jan 20 14:34:22 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/1462991547' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:34:23 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:34:23 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:34:23 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:34:23.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:34:23 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:34:23.630 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=16, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '12:bb:42', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '06:92:24:f7:15:56'}, ipsec=False) old=SB_Global(nb_cfg=15) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:34:23 compute-1 nova_compute[225855]: 2026-01-20 14:34:23.630 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:34:23 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:34:23.633 140354 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 20 14:34:23 compute-1 ceph-mon[81775]: pgmap v1317: 321 pgs: 321 active+clean; 246 MiB data, 539 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 26 KiB/s wr, 115 op/s
Jan 20 14:34:23 compute-1 nova_compute[225855]: 2026-01-20 14:34:23.998 225859 DEBUG nova.network.neutron [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] [instance: 2ec7b07d-b593-46b7-9751-b6116e4d2cec] Updating instance_info_cache with network_info: [{"id": "73e232f9-3860-4b9a-9cec-535fa2fb0c9f", "address": "fa:16:3e:17:6a:15", "network": {"id": "33c9a20a-d976-42a8-b8bf-f83ddfc97c9a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-202342440-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a841e7a1434c488390475174e10bc161", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap73e232f9-38", "ovs_interfaceid": "73e232f9-3860-4b9a-9cec-535fa2fb0c9f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:34:24 compute-1 nova_compute[225855]: 2026-01-20 14:34:24.024 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Releasing lock "refresh_cache-2ec7b07d-b593-46b7-9751-b6116e4d2cec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:34:24 compute-1 nova_compute[225855]: 2026-01-20 14:34:24.024 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] [instance: 2ec7b07d-b593-46b7-9751-b6116e4d2cec] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 20 14:34:24 compute-1 nova_compute[225855]: 2026-01-20 14:34:24.025 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:34:24 compute-1 nova_compute[225855]: 2026-01-20 14:34:24.025 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:34:24 compute-1 nova_compute[225855]: 2026-01-20 14:34:24.026 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:34:24 compute-1 nova_compute[225855]: 2026-01-20 14:34:24.051 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:34:24 compute-1 nova_compute[225855]: 2026-01-20 14:34:24.052 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:34:24 compute-1 nova_compute[225855]: 2026-01-20 14:34:24.052 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:34:24 compute-1 nova_compute[225855]: 2026-01-20 14:34:24.053 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 14:34:24 compute-1 nova_compute[225855]: 2026-01-20 14:34:24.053 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:34:24 compute-1 nova_compute[225855]: 2026-01-20 14:34:24.521 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:34:24 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:34:24 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3789156520' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:34:24 compute-1 nova_compute[225855]: 2026-01-20 14:34:24.553 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.500s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:34:24 compute-1 nova_compute[225855]: 2026-01-20 14:34:24.628 225859 DEBUG nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] skipping disk for instance-0000002a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 14:34:24 compute-1 nova_compute[225855]: 2026-01-20 14:34:24.629 225859 DEBUG nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] skipping disk for instance-0000002a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 14:34:24 compute-1 nova_compute[225855]: 2026-01-20 14:34:24.633 225859 DEBUG nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] skipping disk for instance-00000027 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 14:34:24 compute-1 nova_compute[225855]: 2026-01-20 14:34:24.633 225859 DEBUG nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] skipping disk for instance-00000027 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 14:34:24 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:34:24 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:34:24 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:34:24.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:34:24 compute-1 nova_compute[225855]: 2026-01-20 14:34:24.788 225859 WARNING nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 14:34:24 compute-1 nova_compute[225855]: 2026-01-20 14:34:24.790 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4375MB free_disk=20.876426696777344GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 14:34:24 compute-1 nova_compute[225855]: 2026-01-20 14:34:24.791 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:34:24 compute-1 nova_compute[225855]: 2026-01-20 14:34:24.791 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:34:24 compute-1 nova_compute[225855]: 2026-01-20 14:34:24.861 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Instance 2ec7b07d-b593-46b7-9751-b6116e4d2cec actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 20 14:34:24 compute-1 nova_compute[225855]: 2026-01-20 14:34:24.862 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Instance fdba30ff-e02a-4857-92f6-1828ce3ab175 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 20 14:34:24 compute-1 nova_compute[225855]: 2026-01-20 14:34:24.862 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 14:34:24 compute-1 nova_compute[225855]: 2026-01-20 14:34:24.862 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 14:34:24 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/3128845683' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:34:24 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/3789156520' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:34:24 compute-1 nova_compute[225855]: 2026-01-20 14:34:24.953 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:34:25 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e168 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:34:25 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:34:25 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1416592101' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:34:25 compute-1 nova_compute[225855]: 2026-01-20 14:34:25.387 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:34:25 compute-1 nova_compute[225855]: 2026-01-20 14:34:25.395 225859 DEBUG nova.compute.provider_tree [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:34:25 compute-1 nova_compute[225855]: 2026-01-20 14:34:25.415 225859 DEBUG nova.scheduler.client.report [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:34:25 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:34:25 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:34:25 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:34:25.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:34:25 compute-1 nova_compute[225855]: 2026-01-20 14:34:25.446 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 14:34:25 compute-1 nova_compute[225855]: 2026-01-20 14:34:25.447 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.656s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:34:25 compute-1 nova_compute[225855]: 2026-01-20 14:34:25.761 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:34:25 compute-1 nova_compute[225855]: 2026-01-20 14:34:25.762 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:34:25 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/1416592101' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:34:25 compute-1 ceph-mon[81775]: pgmap v1318: 321 pgs: 321 active+clean; 281 MiB data, 555 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.4 MiB/s wr, 140 op/s
Jan 20 14:34:26 compute-1 nova_compute[225855]: 2026-01-20 14:34:26.334 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:34:26 compute-1 nova_compute[225855]: 2026-01-20 14:34:26.335 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:34:26 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:34:26 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:34:26 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:34:26.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:34:26 compute-1 nova_compute[225855]: 2026-01-20 14:34:26.858 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:34:27 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/2572931249' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:34:27 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/3329220002' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:34:27 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:34:27 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:34:27 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:34:27.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:34:28 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/252065068' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:34:28 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/247561712' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:34:28 compute-1 ceph-mon[81775]: pgmap v1319: 321 pgs: 321 active+clean; 293 MiB data, 560 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 134 op/s
Jan 20 14:34:28 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:34:28 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:34:28 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:34:28.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:34:28 compute-1 ovn_controller[130490]: 2026-01-20T14:34:28Z|00016|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:70:1f:46 10.100.0.10
Jan 20 14:34:28 compute-1 ovn_controller[130490]: 2026-01-20T14:34:28Z|00017|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:70:1f:46 10.100.0.10
Jan 20 14:34:29 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:34:29 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:34:29 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:34:29.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:34:29 compute-1 nova_compute[225855]: 2026-01-20 14:34:29.524 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:34:30 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e168 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:34:30 compute-1 ceph-mon[81775]: pgmap v1320: 321 pgs: 321 active+clean; 299 MiB data, 564 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 2.1 MiB/s wr, 133 op/s
Jan 20 14:34:30 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:34:30 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:34:30 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:34:30.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:34:31 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:34:31 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:34:31 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:34:31.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:34:31 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:34:31.636 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5ffd4ac3-9266-4927-98ad-20a17782c725, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '16'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:34:31 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/1462311559' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:34:31 compute-1 nova_compute[225855]: 2026-01-20 14:34:31.860 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:34:32 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:34:32 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:34:32 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:34:32.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:34:32 compute-1 ceph-mon[81775]: pgmap v1321: 321 pgs: 321 active+clean; 326 MiB data, 621 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 193 op/s
Jan 20 14:34:33 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:34:33 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:34:33 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:34:33.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:34:34 compute-1 nova_compute[225855]: 2026-01-20 14:34:34.526 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:34:34 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:34:34 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:34:34 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:34:34.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:34:35 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e168 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:34:35 compute-1 ceph-mon[81775]: pgmap v1322: 321 pgs: 321 active+clean; 326 MiB data, 621 MiB used, 20 GiB / 21 GiB avail; 843 KiB/s rd, 3.9 MiB/s wr, 115 op/s
Jan 20 14:34:35 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:34:35 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.003000082s ======
Jan 20 14:34:35 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:34:35.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000082s
Jan 20 14:34:36 compute-1 ceph-mon[81775]: pgmap v1323: 321 pgs: 321 active+clean; 361 MiB data, 639 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 5.4 MiB/s wr, 190 op/s
Jan 20 14:34:36 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:34:36 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:34:36 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:34:36.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:34:36 compute-1 nova_compute[225855]: 2026-01-20 14:34:36.864 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:34:37 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:34:37 compute-1 nova_compute[225855]: 2026-01-20 14:34:37.453 225859 INFO nova.compute.manager [None req-794cd3f2-804a-4c65-baab-dd821341efb7 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] [instance: 2ec7b07d-b593-46b7-9751-b6116e4d2cec] Rebuilding instance
Jan 20 14:34:37 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:34:37 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:34:37.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:34:37 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/366852801' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:34:37 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/3690699125' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:34:37 compute-1 nova_compute[225855]: 2026-01-20 14:34:37.764 225859 DEBUG nova.objects.instance [None req-794cd3f2-804a-4c65-baab-dd821341efb7 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 2ec7b07d-b593-46b7-9751-b6116e4d2cec obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:34:37 compute-1 nova_compute[225855]: 2026-01-20 14:34:37.885 225859 DEBUG nova.compute.manager [None req-794cd3f2-804a-4c65-baab-dd821341efb7 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] [instance: 2ec7b07d-b593-46b7-9751-b6116e4d2cec] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:34:37 compute-1 nova_compute[225855]: 2026-01-20 14:34:37.939 225859 DEBUG nova.objects.instance [None req-794cd3f2-804a-4c65-baab-dd821341efb7 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Lazy-loading 'pci_requests' on Instance uuid 2ec7b07d-b593-46b7-9751-b6116e4d2cec obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:34:37 compute-1 nova_compute[225855]: 2026-01-20 14:34:37.953 225859 DEBUG nova.objects.instance [None req-794cd3f2-804a-4c65-baab-dd821341efb7 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Lazy-loading 'pci_devices' on Instance uuid 2ec7b07d-b593-46b7-9751-b6116e4d2cec obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:34:37 compute-1 nova_compute[225855]: 2026-01-20 14:34:37.970 225859 DEBUG nova.objects.instance [None req-794cd3f2-804a-4c65-baab-dd821341efb7 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Lazy-loading 'resources' on Instance uuid 2ec7b07d-b593-46b7-9751-b6116e4d2cec obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:34:37 compute-1 nova_compute[225855]: 2026-01-20 14:34:37.985 225859 DEBUG nova.objects.instance [None req-794cd3f2-804a-4c65-baab-dd821341efb7 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Lazy-loading 'migration_context' on Instance uuid 2ec7b07d-b593-46b7-9751-b6116e4d2cec obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:34:38 compute-1 nova_compute[225855]: 2026-01-20 14:34:38.002 225859 DEBUG nova.objects.instance [None req-794cd3f2-804a-4c65-baab-dd821341efb7 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] [instance: 2ec7b07d-b593-46b7-9751-b6116e4d2cec] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Jan 20 14:34:38 compute-1 nova_compute[225855]: 2026-01-20 14:34:38.005 225859 DEBUG nova.virt.libvirt.driver [None req-794cd3f2-804a-4c65-baab-dd821341efb7 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] [instance: 2ec7b07d-b593-46b7-9751-b6116e4d2cec] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Jan 20 14:34:38 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:34:38 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:34:38 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:34:38.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:34:38 compute-1 ceph-mon[81775]: pgmap v1324: 321 pgs: 321 active+clean; 372 MiB data, 642 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 4.3 MiB/s wr, 167 op/s
Jan 20 14:34:39 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e169 e169: 3 total, 3 up, 3 in
Jan 20 14:34:39 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:34:39 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:34:39 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:34:39.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:34:39 compute-1 nova_compute[225855]: 2026-01-20 14:34:39.530 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:34:40 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:34:40 compute-1 podman[245084]: 2026-01-20 14:34:40.100478708 +0000 UTC m=+0.120281061 container health_status 72be95c12e041eb2cd1a16ad70dfa1391d9f03a5f3180a614ead9459dcfcd46f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Jan 20 14:34:40 compute-1 ceph-mon[81775]: osdmap e169: 3 total, 3 up, 3 in
Jan 20 14:34:40 compute-1 ceph-mon[81775]: pgmap v1326: 321 pgs: 321 active+clean; 372 MiB data, 642 MiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 4.3 MiB/s wr, 189 op/s
Jan 20 14:34:40 compute-1 kernel: tap73e232f9-38 (unregistering): left promiscuous mode
Jan 20 14:34:40 compute-1 NetworkManager[49104]: <info>  [1768919680.5024] device (tap73e232f9-38): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 20 14:34:40 compute-1 ovn_controller[130490]: 2026-01-20T14:34:40Z|00146|binding|INFO|Releasing lport 73e232f9-3860-4b9a-9cec-535fa2fb0c9f from this chassis (sb_readonly=0)
Jan 20 14:34:40 compute-1 ovn_controller[130490]: 2026-01-20T14:34:40Z|00147|binding|INFO|Setting lport 73e232f9-3860-4b9a-9cec-535fa2fb0c9f down in Southbound
Jan 20 14:34:40 compute-1 ovn_controller[130490]: 2026-01-20T14:34:40Z|00148|binding|INFO|Removing iface tap73e232f9-38 ovn-installed in OVS
Jan 20 14:34:40 compute-1 nova_compute[225855]: 2026-01-20 14:34:40.525 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:34:40 compute-1 nova_compute[225855]: 2026-01-20 14:34:40.528 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:34:40 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:34:40.533 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:17:6a:15 10.100.0.6'], port_security=['fa:16:3e:17:6a:15 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '2ec7b07d-b593-46b7-9751-b6116e4d2cec', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-33c9a20a-d976-42a8-b8bf-f83ddfc97c9a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a841e7a1434c488390475174e10bc161', 'neutron:revision_number': '4', 'neutron:security_group_ids': '0bbdea05-fba7-47c7-ba4e-5dac58212a25', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c43dea88-ea55-4069-a4be-2c30a432a754, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], logical_port=73e232f9-3860-4b9a-9cec-535fa2fb0c9f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:34:40 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:34:40.534 140354 INFO neutron.agent.ovn.metadata.agent [-] Port 73e232f9-3860-4b9a-9cec-535fa2fb0c9f in datapath 33c9a20a-d976-42a8-b8bf-f83ddfc97c9a unbound from our chassis
Jan 20 14:34:40 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:34:40.535 140354 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 33c9a20a-d976-42a8-b8bf-f83ddfc97c9a
Jan 20 14:34:40 compute-1 nova_compute[225855]: 2026-01-20 14:34:40.545 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:34:40 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:34:40.558 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[a24ad195-11dc-492e-978a-7fab7eb64e4e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:34:40 compute-1 systemd[1]: machine-qemu\x2d20\x2dinstance\x2d00000027.scope: Deactivated successfully.
Jan 20 14:34:40 compute-1 systemd[1]: machine-qemu\x2d20\x2dinstance\x2d00000027.scope: Consumed 16.355s CPU time.
Jan 20 14:34:40 compute-1 systemd-machined[194361]: Machine qemu-20-instance-00000027 terminated.
Jan 20 14:34:40 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:34:40.597 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[bd907b1c-2921-4e4b-9235-d294c88437a7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:34:40 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:34:40.602 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[d8cd6ad1-786f-47c7-a4c9-35b3e180237c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:34:40 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:34:40.633 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[d2f32a5a-3a38-4346-962f-5b83ffeb46f8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:34:40 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:34:40 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.002000054s ======
Jan 20 14:34:40 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:34:40.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000054s
Jan 20 14:34:40 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:34:40.656 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[0199eefa-c47f-469c-b4df-29ceaef26d29]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap33c9a20a-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:89:8e:bd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 37], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 466055, 'reachable_time': 29489, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 245123, 'error': None, 'target': 'ovnmeta-33c9a20a-d976-42a8-b8bf-f83ddfc97c9a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:34:40 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:34:40.677 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[b9c4ede9-c547-47ec-b9f3-653f018ff56e]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap33c9a20a-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 466065, 'tstamp': 466065}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 245124, 'error': None, 'target': 'ovnmeta-33c9a20a-d976-42a8-b8bf-f83ddfc97c9a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap33c9a20a-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 466068, 'tstamp': 466068}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 245124, 'error': None, 'target': 'ovnmeta-33c9a20a-d976-42a8-b8bf-f83ddfc97c9a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:34:40 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:34:40.679 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap33c9a20a-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:34:40 compute-1 nova_compute[225855]: 2026-01-20 14:34:40.680 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:34:40 compute-1 nova_compute[225855]: 2026-01-20 14:34:40.686 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:34:40 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:34:40.687 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap33c9a20a-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:34:40 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:34:40.688 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:34:40 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:34:40.688 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap33c9a20a-d0, col_values=(('external_ids', {'iface-id': '90c69687-c788-4dba-881f-3ed4a5ee6007'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:34:40 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:34:40.689 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:34:40 compute-1 nova_compute[225855]: 2026-01-20 14:34:40.737 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:34:40 compute-1 nova_compute[225855]: 2026-01-20 14:34:40.742 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:34:40 compute-1 nova_compute[225855]: 2026-01-20 14:34:40.908 225859 DEBUG nova.compute.manager [req-74e8a092-04cf-44ad-9d96-951b1c1821b4 req-8325b682-ef38-4fa9-bc7d-1182d037eb70 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 2ec7b07d-b593-46b7-9751-b6116e4d2cec] Received event network-vif-unplugged-73e232f9-3860-4b9a-9cec-535fa2fb0c9f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:34:40 compute-1 nova_compute[225855]: 2026-01-20 14:34:40.910 225859 DEBUG oslo_concurrency.lockutils [req-74e8a092-04cf-44ad-9d96-951b1c1821b4 req-8325b682-ef38-4fa9-bc7d-1182d037eb70 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "2ec7b07d-b593-46b7-9751-b6116e4d2cec-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:34:40 compute-1 nova_compute[225855]: 2026-01-20 14:34:40.910 225859 DEBUG oslo_concurrency.lockutils [req-74e8a092-04cf-44ad-9d96-951b1c1821b4 req-8325b682-ef38-4fa9-bc7d-1182d037eb70 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "2ec7b07d-b593-46b7-9751-b6116e4d2cec-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:34:40 compute-1 nova_compute[225855]: 2026-01-20 14:34:40.911 225859 DEBUG oslo_concurrency.lockutils [req-74e8a092-04cf-44ad-9d96-951b1c1821b4 req-8325b682-ef38-4fa9-bc7d-1182d037eb70 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "2ec7b07d-b593-46b7-9751-b6116e4d2cec-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:34:40 compute-1 nova_compute[225855]: 2026-01-20 14:34:40.911 225859 DEBUG nova.compute.manager [req-74e8a092-04cf-44ad-9d96-951b1c1821b4 req-8325b682-ef38-4fa9-bc7d-1182d037eb70 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 2ec7b07d-b593-46b7-9751-b6116e4d2cec] No waiting events found dispatching network-vif-unplugged-73e232f9-3860-4b9a-9cec-535fa2fb0c9f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:34:40 compute-1 nova_compute[225855]: 2026-01-20 14:34:40.912 225859 WARNING nova.compute.manager [req-74e8a092-04cf-44ad-9d96-951b1c1821b4 req-8325b682-ef38-4fa9-bc7d-1182d037eb70 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 2ec7b07d-b593-46b7-9751-b6116e4d2cec] Received unexpected event network-vif-unplugged-73e232f9-3860-4b9a-9cec-535fa2fb0c9f for instance with vm_state error and task_state rebuilding.
Jan 20 14:34:41 compute-1 nova_compute[225855]: 2026-01-20 14:34:41.026 225859 INFO nova.virt.libvirt.driver [None req-794cd3f2-804a-4c65-baab-dd821341efb7 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] [instance: 2ec7b07d-b593-46b7-9751-b6116e4d2cec] Instance shutdown successfully after 3 seconds.
Jan 20 14:34:41 compute-1 nova_compute[225855]: 2026-01-20 14:34:41.036 225859 INFO nova.virt.libvirt.driver [-] [instance: 2ec7b07d-b593-46b7-9751-b6116e4d2cec] Instance destroyed successfully.
Jan 20 14:34:41 compute-1 nova_compute[225855]: 2026-01-20 14:34:41.044 225859 INFO nova.virt.libvirt.driver [-] [instance: 2ec7b07d-b593-46b7-9751-b6116e4d2cec] Instance destroyed successfully.
Jan 20 14:34:41 compute-1 nova_compute[225855]: 2026-01-20 14:34:41.045 225859 DEBUG nova.virt.libvirt.vif [None req-794cd3f2-804a-4c65-baab-dd821341efb7 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T14:33:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1907009380',display_name='tempest-ServersAdminTestJSON-server-1907009380',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1907009380',id=39,image_ref='26699514-f465-4b50-98b7-36f2cfc6a308',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-20T14:33:43Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='a841e7a1434c488390475174e10bc161',ramdisk_id='',reservation_id='r-8k5b63bj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='26699514-f465-4b50-98b7-36f2cfc6a308',image_container_format='bare',image_disk_format='qcow2',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-1261404595',owner_user_name='tempest-ServersAdminTestJSON-1261404595-project-member'},tags=<?>,tas
k_state='rebuilding',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T14:34:36Z,user_data=None,user_id='f51c395107c84dbd9067113b84ff01dd',uuid=2ec7b07d-b593-46b7-9751-b6116e4d2cec,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='error') vif={"id": "73e232f9-3860-4b9a-9cec-535fa2fb0c9f", "address": "fa:16:3e:17:6a:15", "network": {"id": "33c9a20a-d976-42a8-b8bf-f83ddfc97c9a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-202342440-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a841e7a1434c488390475174e10bc161", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap73e232f9-38", "ovs_interfaceid": "73e232f9-3860-4b9a-9cec-535fa2fb0c9f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 20 14:34:41 compute-1 nova_compute[225855]: 2026-01-20 14:34:41.046 225859 DEBUG nova.network.os_vif_util [None req-794cd3f2-804a-4c65-baab-dd821341efb7 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Converting VIF {"id": "73e232f9-3860-4b9a-9cec-535fa2fb0c9f", "address": "fa:16:3e:17:6a:15", "network": {"id": "33c9a20a-d976-42a8-b8bf-f83ddfc97c9a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-202342440-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a841e7a1434c488390475174e10bc161", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap73e232f9-38", "ovs_interfaceid": "73e232f9-3860-4b9a-9cec-535fa2fb0c9f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:34:41 compute-1 nova_compute[225855]: 2026-01-20 14:34:41.046 225859 DEBUG nova.network.os_vif_util [None req-794cd3f2-804a-4c65-baab-dd821341efb7 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:17:6a:15,bridge_name='br-int',has_traffic_filtering=True,id=73e232f9-3860-4b9a-9cec-535fa2fb0c9f,network=Network(33c9a20a-d976-42a8-b8bf-f83ddfc97c9a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap73e232f9-38') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:34:41 compute-1 nova_compute[225855]: 2026-01-20 14:34:41.047 225859 DEBUG os_vif [None req-794cd3f2-804a-4c65-baab-dd821341efb7 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:17:6a:15,bridge_name='br-int',has_traffic_filtering=True,id=73e232f9-3860-4b9a-9cec-535fa2fb0c9f,network=Network(33c9a20a-d976-42a8-b8bf-f83ddfc97c9a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap73e232f9-38') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 20 14:34:41 compute-1 nova_compute[225855]: 2026-01-20 14:34:41.049 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:34:41 compute-1 nova_compute[225855]: 2026-01-20 14:34:41.049 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap73e232f9-38, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:34:41 compute-1 nova_compute[225855]: 2026-01-20 14:34:41.051 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:34:41 compute-1 nova_compute[225855]: 2026-01-20 14:34:41.053 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:34:41 compute-1 nova_compute[225855]: 2026-01-20 14:34:41.056 225859 INFO os_vif [None req-794cd3f2-804a-4c65-baab-dd821341efb7 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:17:6a:15,bridge_name='br-int',has_traffic_filtering=True,id=73e232f9-3860-4b9a-9cec-535fa2fb0c9f,network=Network(33c9a20a-d976-42a8-b8bf-f83ddfc97c9a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap73e232f9-38')
Jan 20 14:34:41 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:34:41 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:34:41 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:34:41.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:34:42 compute-1 ceph-mon[81775]: pgmap v1327: 321 pgs: 321 active+clean; 372 MiB data, 642 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 2.2 MiB/s wr, 142 op/s
Jan 20 14:34:42 compute-1 sudo[245155]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:34:42 compute-1 sudo[245155]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:34:42 compute-1 sudo[245155]: pam_unix(sudo:session): session closed for user root
Jan 20 14:34:42 compute-1 sudo[245180]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:34:42 compute-1 sudo[245180]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:34:42 compute-1 sudo[245180]: pam_unix(sudo:session): session closed for user root
Jan 20 14:34:42 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:34:42 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:34:42 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:34:42.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:34:43 compute-1 nova_compute[225855]: 2026-01-20 14:34:42.998 225859 DEBUG nova.compute.manager [req-fc1a4841-4f7d-4d2b-a19c-ce0ab9853f2d req-e28d6c9f-49ac-4838-a5cf-3112179360c7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 2ec7b07d-b593-46b7-9751-b6116e4d2cec] Received event network-vif-plugged-73e232f9-3860-4b9a-9cec-535fa2fb0c9f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:34:43 compute-1 nova_compute[225855]: 2026-01-20 14:34:43.000 225859 DEBUG oslo_concurrency.lockutils [req-fc1a4841-4f7d-4d2b-a19c-ce0ab9853f2d req-e28d6c9f-49ac-4838-a5cf-3112179360c7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "2ec7b07d-b593-46b7-9751-b6116e4d2cec-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:34:43 compute-1 nova_compute[225855]: 2026-01-20 14:34:43.000 225859 DEBUG oslo_concurrency.lockutils [req-fc1a4841-4f7d-4d2b-a19c-ce0ab9853f2d req-e28d6c9f-49ac-4838-a5cf-3112179360c7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "2ec7b07d-b593-46b7-9751-b6116e4d2cec-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:34:43 compute-1 nova_compute[225855]: 2026-01-20 14:34:43.001 225859 DEBUG oslo_concurrency.lockutils [req-fc1a4841-4f7d-4d2b-a19c-ce0ab9853f2d req-e28d6c9f-49ac-4838-a5cf-3112179360c7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "2ec7b07d-b593-46b7-9751-b6116e4d2cec-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:34:43 compute-1 nova_compute[225855]: 2026-01-20 14:34:43.001 225859 DEBUG nova.compute.manager [req-fc1a4841-4f7d-4d2b-a19c-ce0ab9853f2d req-e28d6c9f-49ac-4838-a5cf-3112179360c7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 2ec7b07d-b593-46b7-9751-b6116e4d2cec] No waiting events found dispatching network-vif-plugged-73e232f9-3860-4b9a-9cec-535fa2fb0c9f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:34:43 compute-1 nova_compute[225855]: 2026-01-20 14:34:43.002 225859 WARNING nova.compute.manager [req-fc1a4841-4f7d-4d2b-a19c-ce0ab9853f2d req-e28d6c9f-49ac-4838-a5cf-3112179360c7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 2ec7b07d-b593-46b7-9751-b6116e4d2cec] Received unexpected event network-vif-plugged-73e232f9-3860-4b9a-9cec-535fa2fb0c9f for instance with vm_state error and task_state rebuilding.
Jan 20 14:34:43 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:34:43 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:34:43 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:34:43.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:34:44 compute-1 ceph-mon[81775]: pgmap v1328: 321 pgs: 321 active+clean; 372 MiB data, 642 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 2.2 MiB/s wr, 142 op/s
Jan 20 14:34:44 compute-1 nova_compute[225855]: 2026-01-20 14:34:44.532 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:34:44 compute-1 nova_compute[225855]: 2026-01-20 14:34:44.542 225859 INFO nova.virt.libvirt.driver [None req-794cd3f2-804a-4c65-baab-dd821341efb7 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] [instance: 2ec7b07d-b593-46b7-9751-b6116e4d2cec] Deleting instance files /var/lib/nova/instances/2ec7b07d-b593-46b7-9751-b6116e4d2cec_del
Jan 20 14:34:44 compute-1 nova_compute[225855]: 2026-01-20 14:34:44.543 225859 INFO nova.virt.libvirt.driver [None req-794cd3f2-804a-4c65-baab-dd821341efb7 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] [instance: 2ec7b07d-b593-46b7-9751-b6116e4d2cec] Deletion of /var/lib/nova/instances/2ec7b07d-b593-46b7-9751-b6116e4d2cec_del complete
Jan 20 14:34:44 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:34:44 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:34:44 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:34:44.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:34:44 compute-1 nova_compute[225855]: 2026-01-20 14:34:44.693 225859 DEBUG nova.virt.libvirt.driver [None req-794cd3f2-804a-4c65-baab-dd821341efb7 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] [instance: 2ec7b07d-b593-46b7-9751-b6116e4d2cec] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 20 14:34:44 compute-1 nova_compute[225855]: 2026-01-20 14:34:44.694 225859 INFO nova.virt.libvirt.driver [None req-794cd3f2-804a-4c65-baab-dd821341efb7 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] [instance: 2ec7b07d-b593-46b7-9751-b6116e4d2cec] Creating image(s)
Jan 20 14:34:44 compute-1 nova_compute[225855]: 2026-01-20 14:34:44.724 225859 DEBUG nova.storage.rbd_utils [None req-794cd3f2-804a-4c65-baab-dd821341efb7 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] rbd image 2ec7b07d-b593-46b7-9751-b6116e4d2cec_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:34:44 compute-1 nova_compute[225855]: 2026-01-20 14:34:44.759 225859 DEBUG nova.storage.rbd_utils [None req-794cd3f2-804a-4c65-baab-dd821341efb7 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] rbd image 2ec7b07d-b593-46b7-9751-b6116e4d2cec_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:34:44 compute-1 nova_compute[225855]: 2026-01-20 14:34:44.789 225859 DEBUG nova.storage.rbd_utils [None req-794cd3f2-804a-4c65-baab-dd821341efb7 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] rbd image 2ec7b07d-b593-46b7-9751-b6116e4d2cec_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:34:44 compute-1 nova_compute[225855]: 2026-01-20 14:34:44.794 225859 DEBUG oslo_concurrency.lockutils [None req-794cd3f2-804a-4c65-baab-dd821341efb7 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Acquiring lock "a4ed0d2b98aa460c005e878d78a49ccb6f511f7c" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:34:44 compute-1 nova_compute[225855]: 2026-01-20 14:34:44.795 225859 DEBUG oslo_concurrency.lockutils [None req-794cd3f2-804a-4c65-baab-dd821341efb7 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Lock "a4ed0d2b98aa460c005e878d78a49ccb6f511f7c" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:34:45 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:34:45 compute-1 nova_compute[225855]: 2026-01-20 14:34:45.211 225859 DEBUG nova.virt.libvirt.imagebackend [None req-794cd3f2-804a-4c65-baab-dd821341efb7 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Image locations are: [{'url': 'rbd://e399cf45-e6b6-5393-99f1-75c601d3f188/images/26699514-f465-4b50-98b7-36f2cfc6a308/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://e399cf45-e6b6-5393-99f1-75c601d3f188/images/26699514-f465-4b50-98b7-36f2cfc6a308/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Jan 20 14:34:45 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:34:45 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:34:45 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:34:45.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:34:46 compute-1 nova_compute[225855]: 2026-01-20 14:34:46.052 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:34:46 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:34:46 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:34:46 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:34:46.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:34:46 compute-1 nova_compute[225855]: 2026-01-20 14:34:46.788 225859 DEBUG oslo_concurrency.processutils [None req-794cd3f2-804a-4c65-baab-dd821341efb7 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a4ed0d2b98aa460c005e878d78a49ccb6f511f7c.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:34:46 compute-1 ceph-mon[81775]: pgmap v1329: 321 pgs: 321 active+clean; 338 MiB data, 623 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 2.0 MiB/s wr, 197 op/s
Jan 20 14:34:46 compute-1 nova_compute[225855]: 2026-01-20 14:34:46.857 225859 DEBUG oslo_concurrency.processutils [None req-794cd3f2-804a-4c65-baab-dd821341efb7 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a4ed0d2b98aa460c005e878d78a49ccb6f511f7c.part --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:34:46 compute-1 nova_compute[225855]: 2026-01-20 14:34:46.858 225859 DEBUG nova.virt.images [None req-794cd3f2-804a-4c65-baab-dd821341efb7 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] 26699514-f465-4b50-98b7-36f2cfc6a308 was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242
Jan 20 14:34:46 compute-1 nova_compute[225855]: 2026-01-20 14:34:46.859 225859 DEBUG nova.privsep.utils [None req-794cd3f2-804a-4c65-baab-dd821341efb7 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Jan 20 14:34:46 compute-1 nova_compute[225855]: 2026-01-20 14:34:46.859 225859 DEBUG oslo_concurrency.processutils [None req-794cd3f2-804a-4c65-baab-dd821341efb7 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/a4ed0d2b98aa460c005e878d78a49ccb6f511f7c.part /var/lib/nova/instances/_base/a4ed0d2b98aa460c005e878d78a49ccb6f511f7c.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:34:46 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e170 e170: 3 total, 3 up, 3 in
Jan 20 14:34:47 compute-1 nova_compute[225855]: 2026-01-20 14:34:47.082 225859 DEBUG oslo_concurrency.processutils [None req-794cd3f2-804a-4c65-baab-dd821341efb7 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/a4ed0d2b98aa460c005e878d78a49ccb6f511f7c.part /var/lib/nova/instances/_base/a4ed0d2b98aa460c005e878d78a49ccb6f511f7c.converted" returned: 0 in 0.223s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:34:47 compute-1 nova_compute[225855]: 2026-01-20 14:34:47.087 225859 DEBUG oslo_concurrency.processutils [None req-794cd3f2-804a-4c65-baab-dd821341efb7 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a4ed0d2b98aa460c005e878d78a49ccb6f511f7c.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:34:47 compute-1 nova_compute[225855]: 2026-01-20 14:34:47.144 225859 DEBUG oslo_concurrency.processutils [None req-794cd3f2-804a-4c65-baab-dd821341efb7 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a4ed0d2b98aa460c005e878d78a49ccb6f511f7c.converted --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:34:47 compute-1 nova_compute[225855]: 2026-01-20 14:34:47.145 225859 DEBUG oslo_concurrency.lockutils [None req-794cd3f2-804a-4c65-baab-dd821341efb7 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Lock "a4ed0d2b98aa460c005e878d78a49ccb6f511f7c" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 2.350s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:34:47 compute-1 nova_compute[225855]: 2026-01-20 14:34:47.176 225859 DEBUG nova.storage.rbd_utils [None req-794cd3f2-804a-4c65-baab-dd821341efb7 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] rbd image 2ec7b07d-b593-46b7-9751-b6116e4d2cec_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:34:47 compute-1 nova_compute[225855]: 2026-01-20 14:34:47.179 225859 DEBUG oslo_concurrency.processutils [None req-794cd3f2-804a-4c65-baab-dd821341efb7 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a4ed0d2b98aa460c005e878d78a49ccb6f511f7c 2ec7b07d-b593-46b7-9751-b6116e4d2cec_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:34:47 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:34:47 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:34:47 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:34:47.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:34:47 compute-1 nova_compute[225855]: 2026-01-20 14:34:47.516 225859 DEBUG oslo_concurrency.processutils [None req-794cd3f2-804a-4c65-baab-dd821341efb7 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a4ed0d2b98aa460c005e878d78a49ccb6f511f7c 2ec7b07d-b593-46b7-9751-b6116e4d2cec_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.336s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:34:47 compute-1 nova_compute[225855]: 2026-01-20 14:34:47.601 225859 DEBUG nova.storage.rbd_utils [None req-794cd3f2-804a-4c65-baab-dd821341efb7 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] resizing rbd image 2ec7b07d-b593-46b7-9751-b6116e4d2cec_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 20 14:34:47 compute-1 nova_compute[225855]: 2026-01-20 14:34:47.704 225859 DEBUG nova.virt.libvirt.driver [None req-794cd3f2-804a-4c65-baab-dd821341efb7 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] [instance: 2ec7b07d-b593-46b7-9751-b6116e4d2cec] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 20 14:34:47 compute-1 nova_compute[225855]: 2026-01-20 14:34:47.704 225859 DEBUG nova.virt.libvirt.driver [None req-794cd3f2-804a-4c65-baab-dd821341efb7 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] [instance: 2ec7b07d-b593-46b7-9751-b6116e4d2cec] Ensure instance console log exists: /var/lib/nova/instances/2ec7b07d-b593-46b7-9751-b6116e4d2cec/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 20 14:34:47 compute-1 nova_compute[225855]: 2026-01-20 14:34:47.705 225859 DEBUG oslo_concurrency.lockutils [None req-794cd3f2-804a-4c65-baab-dd821341efb7 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:34:47 compute-1 nova_compute[225855]: 2026-01-20 14:34:47.705 225859 DEBUG oslo_concurrency.lockutils [None req-794cd3f2-804a-4c65-baab-dd821341efb7 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:34:47 compute-1 nova_compute[225855]: 2026-01-20 14:34:47.705 225859 DEBUG oslo_concurrency.lockutils [None req-794cd3f2-804a-4c65-baab-dd821341efb7 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:34:47 compute-1 nova_compute[225855]: 2026-01-20 14:34:47.707 225859 DEBUG nova.virt.libvirt.driver [None req-794cd3f2-804a-4c65-baab-dd821341efb7 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] [instance: 2ec7b07d-b593-46b7-9751-b6116e4d2cec] Start _get_guest_xml network_info=[{"id": "73e232f9-3860-4b9a-9cec-535fa2fb0c9f", "address": "fa:16:3e:17:6a:15", "network": {"id": "33c9a20a-d976-42a8-b8bf-f83ddfc97c9a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-202342440-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a841e7a1434c488390475174e10bc161", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap73e232f9-38", "ovs_interfaceid": "73e232f9-3860-4b9a-9cec-535fa2fb0c9f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:22:02Z,direct_url=<?>,disk_format='qcow2',id=26699514-f465-4b50-98b7-36f2cfc6a308,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:04Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'device_type': 'disk', 'encryption_options': None, 'size': 0, 'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'guest_format': None, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'image_id': 'a32b3e07-16d8-46fd-9a7b-c242c432fcf9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 20 14:34:47 compute-1 nova_compute[225855]: 2026-01-20 14:34:47.711 225859 WARNING nova.virt.libvirt.driver [None req-794cd3f2-804a-4c65-baab-dd821341efb7 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError
Jan 20 14:34:47 compute-1 nova_compute[225855]: 2026-01-20 14:34:47.724 225859 DEBUG nova.virt.libvirt.host [None req-794cd3f2-804a-4c65-baab-dd821341efb7 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 20 14:34:47 compute-1 nova_compute[225855]: 2026-01-20 14:34:47.725 225859 DEBUG nova.virt.libvirt.host [None req-794cd3f2-804a-4c65-baab-dd821341efb7 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 20 14:34:47 compute-1 nova_compute[225855]: 2026-01-20 14:34:47.737 225859 DEBUG nova.virt.libvirt.host [None req-794cd3f2-804a-4c65-baab-dd821341efb7 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 20 14:34:47 compute-1 nova_compute[225855]: 2026-01-20 14:34:47.737 225859 DEBUG nova.virt.libvirt.host [None req-794cd3f2-804a-4c65-baab-dd821341efb7 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 20 14:34:47 compute-1 nova_compute[225855]: 2026-01-20 14:34:47.738 225859 DEBUG nova.virt.libvirt.driver [None req-794cd3f2-804a-4c65-baab-dd821341efb7 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 20 14:34:47 compute-1 nova_compute[225855]: 2026-01-20 14:34:47.738 225859 DEBUG nova.virt.hardware [None req-794cd3f2-804a-4c65-baab-dd821341efb7 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-20T14:21:55Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='522deaab-a741-4dbb-932d-d8b13a211c33',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:22:02Z,direct_url=<?>,disk_format='qcow2',id=26699514-f465-4b50-98b7-36f2cfc6a308,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:04Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 20 14:34:47 compute-1 nova_compute[225855]: 2026-01-20 14:34:47.739 225859 DEBUG nova.virt.hardware [None req-794cd3f2-804a-4c65-baab-dd821341efb7 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 20 14:34:47 compute-1 nova_compute[225855]: 2026-01-20 14:34:47.739 225859 DEBUG nova.virt.hardware [None req-794cd3f2-804a-4c65-baab-dd821341efb7 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 20 14:34:47 compute-1 nova_compute[225855]: 2026-01-20 14:34:47.739 225859 DEBUG nova.virt.hardware [None req-794cd3f2-804a-4c65-baab-dd821341efb7 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 20 14:34:47 compute-1 nova_compute[225855]: 2026-01-20 14:34:47.739 225859 DEBUG nova.virt.hardware [None req-794cd3f2-804a-4c65-baab-dd821341efb7 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 20 14:34:47 compute-1 nova_compute[225855]: 2026-01-20 14:34:47.739 225859 DEBUG nova.virt.hardware [None req-794cd3f2-804a-4c65-baab-dd821341efb7 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 20 14:34:47 compute-1 nova_compute[225855]: 2026-01-20 14:34:47.740 225859 DEBUG nova.virt.hardware [None req-794cd3f2-804a-4c65-baab-dd821341efb7 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 20 14:34:47 compute-1 nova_compute[225855]: 2026-01-20 14:34:47.740 225859 DEBUG nova.virt.hardware [None req-794cd3f2-804a-4c65-baab-dd821341efb7 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 20 14:34:47 compute-1 nova_compute[225855]: 2026-01-20 14:34:47.740 225859 DEBUG nova.virt.hardware [None req-794cd3f2-804a-4c65-baab-dd821341efb7 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 20 14:34:47 compute-1 nova_compute[225855]: 2026-01-20 14:34:47.740 225859 DEBUG nova.virt.hardware [None req-794cd3f2-804a-4c65-baab-dd821341efb7 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 20 14:34:47 compute-1 nova_compute[225855]: 2026-01-20 14:34:47.741 225859 DEBUG nova.virt.hardware [None req-794cd3f2-804a-4c65-baab-dd821341efb7 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 20 14:34:47 compute-1 nova_compute[225855]: 2026-01-20 14:34:47.741 225859 DEBUG nova.objects.instance [None req-794cd3f2-804a-4c65-baab-dd821341efb7 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 2ec7b07d-b593-46b7-9751-b6116e4d2cec obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:34:47 compute-1 nova_compute[225855]: 2026-01-20 14:34:47.765 225859 DEBUG oslo_concurrency.processutils [None req-794cd3f2-804a-4c65-baab-dd821341efb7 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:34:47 compute-1 ceph-mon[81775]: osdmap e170: 3 total, 3 up, 3 in
Jan 20 14:34:47 compute-1 ceph-mon[81775]: pgmap v1331: 321 pgs: 321 active+clean; 322 MiB data, 620 MiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 3.1 MiB/s wr, 252 op/s
Jan 20 14:34:47 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e171 e171: 3 total, 3 up, 3 in
Jan 20 14:34:48 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:34:48 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/80779919' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:34:48 compute-1 nova_compute[225855]: 2026-01-20 14:34:48.243 225859 DEBUG oslo_concurrency.processutils [None req-794cd3f2-804a-4c65-baab-dd821341efb7 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:34:48 compute-1 nova_compute[225855]: 2026-01-20 14:34:48.271 225859 DEBUG nova.storage.rbd_utils [None req-794cd3f2-804a-4c65-baab-dd821341efb7 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] rbd image 2ec7b07d-b593-46b7-9751-b6116e4d2cec_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:34:48 compute-1 nova_compute[225855]: 2026-01-20 14:34:48.275 225859 DEBUG oslo_concurrency.processutils [None req-794cd3f2-804a-4c65-baab-dd821341efb7 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:34:48 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:34:48 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:34:48 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:34:48.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:34:48 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:34:48 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/578134844' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:34:48 compute-1 nova_compute[225855]: 2026-01-20 14:34:48.698 225859 DEBUG oslo_concurrency.processutils [None req-794cd3f2-804a-4c65-baab-dd821341efb7 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.423s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:34:48 compute-1 nova_compute[225855]: 2026-01-20 14:34:48.701 225859 DEBUG nova.virt.libvirt.vif [None req-794cd3f2-804a-4c65-baab-dd821341efb7 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-20T14:33:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1907009380',display_name='tempest-ServersAdminTestJSON-server-1907009380',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1907009380',id=39,image_ref='26699514-f465-4b50-98b7-36f2cfc6a308',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-20T14:33:43Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='a841e7a1434c488390475174e10bc161',ramdisk_id='',reservation_id='r-8k5b63bj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='26699514-f465-4b50-98b7-36f2cfc6a308',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-1261404595',owner_user_name='tempest-ServersAdminTestJSON-1261404595-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T14:34:44Z,user_data=None,user_id='f51c395107c84dbd9067113b84ff01dd',uuid=2ec7b07d-b593-46b7-9751-b6116e4d2cec,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='error') vif={"id": "73e232f9-3860-4b9a-9cec-535fa2fb0c9f", "address": "fa:16:3e:17:6a:15", "network": {"id": "33c9a20a-d976-42a8-b8bf-f83ddfc97c9a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-202342440-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a841e7a1434c488390475174e10bc161", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap73e232f9-38", "ovs_interfaceid": "73e232f9-3860-4b9a-9cec-535fa2fb0c9f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 20 14:34:48 compute-1 nova_compute[225855]: 2026-01-20 14:34:48.701 225859 DEBUG nova.network.os_vif_util [None req-794cd3f2-804a-4c65-baab-dd821341efb7 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Converting VIF {"id": "73e232f9-3860-4b9a-9cec-535fa2fb0c9f", "address": "fa:16:3e:17:6a:15", "network": {"id": "33c9a20a-d976-42a8-b8bf-f83ddfc97c9a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-202342440-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a841e7a1434c488390475174e10bc161", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap73e232f9-38", "ovs_interfaceid": "73e232f9-3860-4b9a-9cec-535fa2fb0c9f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:34:48 compute-1 nova_compute[225855]: 2026-01-20 14:34:48.702 225859 DEBUG nova.network.os_vif_util [None req-794cd3f2-804a-4c65-baab-dd821341efb7 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:17:6a:15,bridge_name='br-int',has_traffic_filtering=True,id=73e232f9-3860-4b9a-9cec-535fa2fb0c9f,network=Network(33c9a20a-d976-42a8-b8bf-f83ddfc97c9a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap73e232f9-38') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:34:48 compute-1 nova_compute[225855]: 2026-01-20 14:34:48.706 225859 DEBUG nova.virt.libvirt.driver [None req-794cd3f2-804a-4c65-baab-dd821341efb7 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] [instance: 2ec7b07d-b593-46b7-9751-b6116e4d2cec] End _get_guest_xml xml=<domain type="kvm">
Jan 20 14:34:48 compute-1 nova_compute[225855]:   <uuid>2ec7b07d-b593-46b7-9751-b6116e4d2cec</uuid>
Jan 20 14:34:48 compute-1 nova_compute[225855]:   <name>instance-00000027</name>
Jan 20 14:34:48 compute-1 nova_compute[225855]:   <memory>131072</memory>
Jan 20 14:34:48 compute-1 nova_compute[225855]:   <vcpu>1</vcpu>
Jan 20 14:34:48 compute-1 nova_compute[225855]:   <metadata>
Jan 20 14:34:48 compute-1 nova_compute[225855]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 14:34:48 compute-1 nova_compute[225855]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 14:34:48 compute-1 nova_compute[225855]:       <nova:name>tempest-ServersAdminTestJSON-server-1907009380</nova:name>
Jan 20 14:34:48 compute-1 nova_compute[225855]:       <nova:creationTime>2026-01-20 14:34:47</nova:creationTime>
Jan 20 14:34:48 compute-1 nova_compute[225855]:       <nova:flavor name="m1.nano">
Jan 20 14:34:48 compute-1 nova_compute[225855]:         <nova:memory>128</nova:memory>
Jan 20 14:34:48 compute-1 nova_compute[225855]:         <nova:disk>1</nova:disk>
Jan 20 14:34:48 compute-1 nova_compute[225855]:         <nova:swap>0</nova:swap>
Jan 20 14:34:48 compute-1 nova_compute[225855]:         <nova:ephemeral>0</nova:ephemeral>
Jan 20 14:34:48 compute-1 nova_compute[225855]:         <nova:vcpus>1</nova:vcpus>
Jan 20 14:34:48 compute-1 nova_compute[225855]:       </nova:flavor>
Jan 20 14:34:48 compute-1 nova_compute[225855]:       <nova:owner>
Jan 20 14:34:48 compute-1 nova_compute[225855]:         <nova:user uuid="f51c395107c84dbd9067113b84ff01dd">tempest-ServersAdminTestJSON-1261404595-project-member</nova:user>
Jan 20 14:34:48 compute-1 nova_compute[225855]:         <nova:project uuid="a841e7a1434c488390475174e10bc161">tempest-ServersAdminTestJSON-1261404595</nova:project>
Jan 20 14:34:48 compute-1 nova_compute[225855]:       </nova:owner>
Jan 20 14:34:48 compute-1 nova_compute[225855]:       <nova:root type="image" uuid="26699514-f465-4b50-98b7-36f2cfc6a308"/>
Jan 20 14:34:48 compute-1 nova_compute[225855]:       <nova:ports>
Jan 20 14:34:48 compute-1 nova_compute[225855]:         <nova:port uuid="73e232f9-3860-4b9a-9cec-535fa2fb0c9f">
Jan 20 14:34:48 compute-1 nova_compute[225855]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Jan 20 14:34:48 compute-1 nova_compute[225855]:         </nova:port>
Jan 20 14:34:48 compute-1 nova_compute[225855]:       </nova:ports>
Jan 20 14:34:48 compute-1 nova_compute[225855]:     </nova:instance>
Jan 20 14:34:48 compute-1 nova_compute[225855]:   </metadata>
Jan 20 14:34:48 compute-1 nova_compute[225855]:   <sysinfo type="smbios">
Jan 20 14:34:48 compute-1 nova_compute[225855]:     <system>
Jan 20 14:34:48 compute-1 nova_compute[225855]:       <entry name="manufacturer">RDO</entry>
Jan 20 14:34:48 compute-1 nova_compute[225855]:       <entry name="product">OpenStack Compute</entry>
Jan 20 14:34:48 compute-1 nova_compute[225855]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 14:34:48 compute-1 nova_compute[225855]:       <entry name="serial">2ec7b07d-b593-46b7-9751-b6116e4d2cec</entry>
Jan 20 14:34:48 compute-1 nova_compute[225855]:       <entry name="uuid">2ec7b07d-b593-46b7-9751-b6116e4d2cec</entry>
Jan 20 14:34:48 compute-1 nova_compute[225855]:       <entry name="family">Virtual Machine</entry>
Jan 20 14:34:48 compute-1 nova_compute[225855]:     </system>
Jan 20 14:34:48 compute-1 nova_compute[225855]:   </sysinfo>
Jan 20 14:34:48 compute-1 nova_compute[225855]:   <os>
Jan 20 14:34:48 compute-1 nova_compute[225855]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 20 14:34:48 compute-1 nova_compute[225855]:     <boot dev="hd"/>
Jan 20 14:34:48 compute-1 nova_compute[225855]:     <smbios mode="sysinfo"/>
Jan 20 14:34:48 compute-1 nova_compute[225855]:   </os>
Jan 20 14:34:48 compute-1 nova_compute[225855]:   <features>
Jan 20 14:34:48 compute-1 nova_compute[225855]:     <acpi/>
Jan 20 14:34:48 compute-1 nova_compute[225855]:     <apic/>
Jan 20 14:34:48 compute-1 nova_compute[225855]:     <vmcoreinfo/>
Jan 20 14:34:48 compute-1 nova_compute[225855]:   </features>
Jan 20 14:34:48 compute-1 nova_compute[225855]:   <clock offset="utc">
Jan 20 14:34:48 compute-1 nova_compute[225855]:     <timer name="pit" tickpolicy="delay"/>
Jan 20 14:34:48 compute-1 nova_compute[225855]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 20 14:34:48 compute-1 nova_compute[225855]:     <timer name="hpet" present="no"/>
Jan 20 14:34:48 compute-1 nova_compute[225855]:   </clock>
Jan 20 14:34:48 compute-1 nova_compute[225855]:   <cpu mode="custom" match="exact">
Jan 20 14:34:48 compute-1 nova_compute[225855]:     <model>Nehalem</model>
Jan 20 14:34:48 compute-1 nova_compute[225855]:     <topology sockets="1" cores="1" threads="1"/>
Jan 20 14:34:48 compute-1 nova_compute[225855]:   </cpu>
Jan 20 14:34:48 compute-1 nova_compute[225855]:   <devices>
Jan 20 14:34:48 compute-1 nova_compute[225855]:     <disk type="network" device="disk">
Jan 20 14:34:48 compute-1 nova_compute[225855]:       <driver type="raw" cache="none"/>
Jan 20 14:34:48 compute-1 nova_compute[225855]:       <source protocol="rbd" name="vms/2ec7b07d-b593-46b7-9751-b6116e4d2cec_disk">
Jan 20 14:34:48 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:34:48 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:34:48 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:34:48 compute-1 nova_compute[225855]:       </source>
Jan 20 14:34:48 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 14:34:48 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:34:48 compute-1 nova_compute[225855]:       </auth>
Jan 20 14:34:48 compute-1 nova_compute[225855]:       <target dev="vda" bus="virtio"/>
Jan 20 14:34:48 compute-1 nova_compute[225855]:     </disk>
Jan 20 14:34:48 compute-1 nova_compute[225855]:     <disk type="network" device="cdrom">
Jan 20 14:34:48 compute-1 nova_compute[225855]:       <driver type="raw" cache="none"/>
Jan 20 14:34:48 compute-1 nova_compute[225855]:       <source protocol="rbd" name="vms/2ec7b07d-b593-46b7-9751-b6116e4d2cec_disk.config">
Jan 20 14:34:48 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:34:48 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:34:48 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:34:48 compute-1 nova_compute[225855]:       </source>
Jan 20 14:34:48 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 14:34:48 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:34:48 compute-1 nova_compute[225855]:       </auth>
Jan 20 14:34:48 compute-1 nova_compute[225855]:       <target dev="sda" bus="sata"/>
Jan 20 14:34:48 compute-1 nova_compute[225855]:     </disk>
Jan 20 14:34:48 compute-1 nova_compute[225855]:     <interface type="ethernet">
Jan 20 14:34:48 compute-1 nova_compute[225855]:       <mac address="fa:16:3e:17:6a:15"/>
Jan 20 14:34:48 compute-1 nova_compute[225855]:       <model type="virtio"/>
Jan 20 14:34:48 compute-1 nova_compute[225855]:       <driver name="vhost" rx_queue_size="512"/>
Jan 20 14:34:48 compute-1 nova_compute[225855]:       <mtu size="1442"/>
Jan 20 14:34:48 compute-1 nova_compute[225855]:       <target dev="tap73e232f9-38"/>
Jan 20 14:34:48 compute-1 nova_compute[225855]:     </interface>
Jan 20 14:34:48 compute-1 nova_compute[225855]:     <serial type="pty">
Jan 20 14:34:48 compute-1 nova_compute[225855]:       <log file="/var/lib/nova/instances/2ec7b07d-b593-46b7-9751-b6116e4d2cec/console.log" append="off"/>
Jan 20 14:34:48 compute-1 nova_compute[225855]:     </serial>
Jan 20 14:34:48 compute-1 nova_compute[225855]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 20 14:34:48 compute-1 nova_compute[225855]:     <video>
Jan 20 14:34:48 compute-1 nova_compute[225855]:       <model type="virtio"/>
Jan 20 14:34:48 compute-1 nova_compute[225855]:     </video>
Jan 20 14:34:48 compute-1 nova_compute[225855]:     <input type="tablet" bus="usb"/>
Jan 20 14:34:48 compute-1 nova_compute[225855]:     <rng model="virtio">
Jan 20 14:34:48 compute-1 nova_compute[225855]:       <backend model="random">/dev/urandom</backend>
Jan 20 14:34:48 compute-1 nova_compute[225855]:     </rng>
Jan 20 14:34:48 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root"/>
Jan 20 14:34:48 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:34:48 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:34:48 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:34:48 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:34:48 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:34:48 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:34:48 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:34:48 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:34:48 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:34:48 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:34:48 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:34:48 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:34:48 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:34:48 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:34:48 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:34:48 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:34:48 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:34:48 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:34:48 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:34:48 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:34:48 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:34:48 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:34:48 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:34:48 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:34:48 compute-1 nova_compute[225855]:     <controller type="usb" index="0"/>
Jan 20 14:34:48 compute-1 nova_compute[225855]:     <memballoon model="virtio">
Jan 20 14:34:48 compute-1 nova_compute[225855]:       <stats period="10"/>
Jan 20 14:34:48 compute-1 nova_compute[225855]:     </memballoon>
Jan 20 14:34:48 compute-1 nova_compute[225855]:   </devices>
Jan 20 14:34:48 compute-1 nova_compute[225855]: </domain>
Jan 20 14:34:48 compute-1 nova_compute[225855]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 20 14:34:48 compute-1 nova_compute[225855]: 2026-01-20 14:34:48.708 225859 DEBUG nova.virt.libvirt.vif [None req-794cd3f2-804a-4c65-baab-dd821341efb7 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-20T14:33:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1907009380',display_name='tempest-ServersAdminTestJSON-server-1907009380',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1907009380',id=39,image_ref='26699514-f465-4b50-98b7-36f2cfc6a308',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-20T14:33:43Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='a841e7a1434c488390475174e10bc161',ramdisk_id='',reservation_id='r-8k5b63bj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='26699514-f465-4b50-98b7-36f2cfc6a308',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-1261404595',owner_user_name='tempest-ServersAdminTestJSON-1261404595-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T14:34:44Z,user_data=None,user_id='f51c395107c84dbd9067113b84ff01dd',uuid=2ec7b07d-b593-46b7-9751-b6116e4d2cec,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='error') vif={"id": "73e232f9-3860-4b9a-9cec-535fa2fb0c9f", "address": "fa:16:3e:17:6a:15", "network": {"id": "33c9a20a-d976-42a8-b8bf-f83ddfc97c9a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-202342440-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a841e7a1434c488390475174e10bc161", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap73e232f9-38", "ovs_interfaceid": "73e232f9-3860-4b9a-9cec-535fa2fb0c9f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 20 14:34:48 compute-1 nova_compute[225855]: 2026-01-20 14:34:48.709 225859 DEBUG nova.network.os_vif_util [None req-794cd3f2-804a-4c65-baab-dd821341efb7 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Converting VIF {"id": "73e232f9-3860-4b9a-9cec-535fa2fb0c9f", "address": "fa:16:3e:17:6a:15", "network": {"id": "33c9a20a-d976-42a8-b8bf-f83ddfc97c9a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-202342440-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a841e7a1434c488390475174e10bc161", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap73e232f9-38", "ovs_interfaceid": "73e232f9-3860-4b9a-9cec-535fa2fb0c9f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:34:48 compute-1 nova_compute[225855]: 2026-01-20 14:34:48.710 225859 DEBUG nova.network.os_vif_util [None req-794cd3f2-804a-4c65-baab-dd821341efb7 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:17:6a:15,bridge_name='br-int',has_traffic_filtering=True,id=73e232f9-3860-4b9a-9cec-535fa2fb0c9f,network=Network(33c9a20a-d976-42a8-b8bf-f83ddfc97c9a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap73e232f9-38') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:34:48 compute-1 nova_compute[225855]: 2026-01-20 14:34:48.710 225859 DEBUG os_vif [None req-794cd3f2-804a-4c65-baab-dd821341efb7 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:17:6a:15,bridge_name='br-int',has_traffic_filtering=True,id=73e232f9-3860-4b9a-9cec-535fa2fb0c9f,network=Network(33c9a20a-d976-42a8-b8bf-f83ddfc97c9a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap73e232f9-38') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 20 14:34:48 compute-1 nova_compute[225855]: 2026-01-20 14:34:48.711 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:34:48 compute-1 nova_compute[225855]: 2026-01-20 14:34:48.712 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:34:48 compute-1 nova_compute[225855]: 2026-01-20 14:34:48.712 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:34:48 compute-1 nova_compute[225855]: 2026-01-20 14:34:48.714 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:34:48 compute-1 nova_compute[225855]: 2026-01-20 14:34:48.715 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap73e232f9-38, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:34:48 compute-1 nova_compute[225855]: 2026-01-20 14:34:48.715 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap73e232f9-38, col_values=(('external_ids', {'iface-id': '73e232f9-3860-4b9a-9cec-535fa2fb0c9f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:17:6a:15', 'vm-uuid': '2ec7b07d-b593-46b7-9751-b6116e4d2cec'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:34:48 compute-1 NetworkManager[49104]: <info>  [1768919688.7177] manager: (tap73e232f9-38): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/68)
Jan 20 14:34:48 compute-1 nova_compute[225855]: 2026-01-20 14:34:48.719 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 14:34:48 compute-1 nova_compute[225855]: 2026-01-20 14:34:48.722 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:34:48 compute-1 nova_compute[225855]: 2026-01-20 14:34:48.723 225859 INFO os_vif [None req-794cd3f2-804a-4c65-baab-dd821341efb7 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:17:6a:15,bridge_name='br-int',has_traffic_filtering=True,id=73e232f9-3860-4b9a-9cec-535fa2fb0c9f,network=Network(33c9a20a-d976-42a8-b8bf-f83ddfc97c9a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap73e232f9-38')
Jan 20 14:34:48 compute-1 nova_compute[225855]: 2026-01-20 14:34:48.777 225859 DEBUG nova.virt.libvirt.driver [None req-794cd3f2-804a-4c65-baab-dd821341efb7 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:34:48 compute-1 nova_compute[225855]: 2026-01-20 14:34:48.777 225859 DEBUG nova.virt.libvirt.driver [None req-794cd3f2-804a-4c65-baab-dd821341efb7 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:34:48 compute-1 nova_compute[225855]: 2026-01-20 14:34:48.778 225859 DEBUG nova.virt.libvirt.driver [None req-794cd3f2-804a-4c65-baab-dd821341efb7 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] No VIF found with MAC fa:16:3e:17:6a:15, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 20 14:34:48 compute-1 nova_compute[225855]: 2026-01-20 14:34:48.778 225859 INFO nova.virt.libvirt.driver [None req-794cd3f2-804a-4c65-baab-dd821341efb7 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] [instance: 2ec7b07d-b593-46b7-9751-b6116e4d2cec] Using config drive
Jan 20 14:34:48 compute-1 nova_compute[225855]: 2026-01-20 14:34:48.811 225859 DEBUG nova.storage.rbd_utils [None req-794cd3f2-804a-4c65-baab-dd821341efb7 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] rbd image 2ec7b07d-b593-46b7-9751-b6116e4d2cec_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:34:48 compute-1 nova_compute[225855]: 2026-01-20 14:34:48.831 225859 DEBUG nova.objects.instance [None req-794cd3f2-804a-4c65-baab-dd821341efb7 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 2ec7b07d-b593-46b7-9751-b6116e4d2cec obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:34:48 compute-1 nova_compute[225855]: 2026-01-20 14:34:48.856 225859 DEBUG nova.objects.instance [None req-794cd3f2-804a-4c65-baab-dd821341efb7 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Lazy-loading 'keypairs' on Instance uuid 2ec7b07d-b593-46b7-9751-b6116e4d2cec obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:34:48 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e172 e172: 3 total, 3 up, 3 in
Jan 20 14:34:48 compute-1 ceph-mon[81775]: osdmap e171: 3 total, 3 up, 3 in
Jan 20 14:34:48 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/80779919' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:34:48 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/578134844' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:34:49 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:34:49 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:34:49 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:34:49.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:34:49 compute-1 nova_compute[225855]: 2026-01-20 14:34:49.535 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:34:49 compute-1 nova_compute[225855]: 2026-01-20 14:34:49.755 225859 INFO nova.virt.libvirt.driver [None req-794cd3f2-804a-4c65-baab-dd821341efb7 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] [instance: 2ec7b07d-b593-46b7-9751-b6116e4d2cec] Creating config drive at /var/lib/nova/instances/2ec7b07d-b593-46b7-9751-b6116e4d2cec/disk.config
Jan 20 14:34:49 compute-1 nova_compute[225855]: 2026-01-20 14:34:49.761 225859 DEBUG oslo_concurrency.processutils [None req-794cd3f2-804a-4c65-baab-dd821341efb7 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/2ec7b07d-b593-46b7-9751-b6116e4d2cec/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmptkclbi5w execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:34:49 compute-1 ceph-mon[81775]: osdmap e172: 3 total, 3 up, 3 in
Jan 20 14:34:49 compute-1 ceph-mon[81775]: pgmap v1334: 321 pgs: 321 active+clean; 347 MiB data, 634 MiB used, 20 GiB / 21 GiB avail; 8.1 MiB/s rd, 6.4 MiB/s wr, 367 op/s
Jan 20 14:34:49 compute-1 nova_compute[225855]: 2026-01-20 14:34:49.902 225859 DEBUG oslo_concurrency.processutils [None req-794cd3f2-804a-4c65-baab-dd821341efb7 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/2ec7b07d-b593-46b7-9751-b6116e4d2cec/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmptkclbi5w" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:34:49 compute-1 nova_compute[225855]: 2026-01-20 14:34:49.940 225859 DEBUG nova.storage.rbd_utils [None req-794cd3f2-804a-4c65-baab-dd821341efb7 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] rbd image 2ec7b07d-b593-46b7-9751-b6116e4d2cec_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:34:49 compute-1 nova_compute[225855]: 2026-01-20 14:34:49.944 225859 DEBUG oslo_concurrency.processutils [None req-794cd3f2-804a-4c65-baab-dd821341efb7 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/2ec7b07d-b593-46b7-9751-b6116e4d2cec/disk.config 2ec7b07d-b593-46b7-9751-b6116e4d2cec_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:34:50 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e172 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:34:50 compute-1 nova_compute[225855]: 2026-01-20 14:34:50.142 225859 DEBUG oslo_concurrency.processutils [None req-794cd3f2-804a-4c65-baab-dd821341efb7 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/2ec7b07d-b593-46b7-9751-b6116e4d2cec/disk.config 2ec7b07d-b593-46b7-9751-b6116e4d2cec_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.197s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:34:50 compute-1 nova_compute[225855]: 2026-01-20 14:34:50.143 225859 INFO nova.virt.libvirt.driver [None req-794cd3f2-804a-4c65-baab-dd821341efb7 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] [instance: 2ec7b07d-b593-46b7-9751-b6116e4d2cec] Deleting local config drive /var/lib/nova/instances/2ec7b07d-b593-46b7-9751-b6116e4d2cec/disk.config because it was imported into RBD.
Jan 20 14:34:50 compute-1 kernel: tap73e232f9-38: entered promiscuous mode
Jan 20 14:34:50 compute-1 nova_compute[225855]: 2026-01-20 14:34:50.215 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:34:50 compute-1 ovn_controller[130490]: 2026-01-20T14:34:50Z|00149|binding|INFO|Claiming lport 73e232f9-3860-4b9a-9cec-535fa2fb0c9f for this chassis.
Jan 20 14:34:50 compute-1 ovn_controller[130490]: 2026-01-20T14:34:50Z|00150|binding|INFO|73e232f9-3860-4b9a-9cec-535fa2fb0c9f: Claiming fa:16:3e:17:6a:15 10.100.0.6
Jan 20 14:34:50 compute-1 NetworkManager[49104]: <info>  [1768919690.2155] manager: (tap73e232f9-38): new Tun device (/org/freedesktop/NetworkManager/Devices/69)
Jan 20 14:34:50 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:34:50.225 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:17:6a:15 10.100.0.6'], port_security=['fa:16:3e:17:6a:15 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '2ec7b07d-b593-46b7-9751-b6116e4d2cec', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-33c9a20a-d976-42a8-b8bf-f83ddfc97c9a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a841e7a1434c488390475174e10bc161', 'neutron:revision_number': '5', 'neutron:security_group_ids': '0bbdea05-fba7-47c7-ba4e-5dac58212a25', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c43dea88-ea55-4069-a4be-2c30a432a754, chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], logical_port=73e232f9-3860-4b9a-9cec-535fa2fb0c9f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:34:50 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:34:50.228 140354 INFO neutron.agent.ovn.metadata.agent [-] Port 73e232f9-3860-4b9a-9cec-535fa2fb0c9f in datapath 33c9a20a-d976-42a8-b8bf-f83ddfc97c9a bound to our chassis
Jan 20 14:34:50 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:34:50.230 140354 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 33c9a20a-d976-42a8-b8bf-f83ddfc97c9a
Jan 20 14:34:50 compute-1 ovn_controller[130490]: 2026-01-20T14:34:50Z|00151|binding|INFO|Setting lport 73e232f9-3860-4b9a-9cec-535fa2fb0c9f ovn-installed in OVS
Jan 20 14:34:50 compute-1 ovn_controller[130490]: 2026-01-20T14:34:50Z|00152|binding|INFO|Setting lport 73e232f9-3860-4b9a-9cec-535fa2fb0c9f up in Southbound
Jan 20 14:34:50 compute-1 nova_compute[225855]: 2026-01-20 14:34:50.233 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:34:50 compute-1 nova_compute[225855]: 2026-01-20 14:34:50.238 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:34:50 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:34:50.247 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[abb1a060-1475-49c3-8c07-62eae2cd9652]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:34:50 compute-1 systemd-udevd[245533]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 14:34:50 compute-1 systemd-machined[194361]: New machine qemu-22-instance-00000027.
Jan 20 14:34:50 compute-1 systemd[1]: Started Virtual Machine qemu-22-instance-00000027.
Jan 20 14:34:50 compute-1 NetworkManager[49104]: <info>  [1768919690.2800] device (tap73e232f9-38): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 20 14:34:50 compute-1 NetworkManager[49104]: <info>  [1768919690.2810] device (tap73e232f9-38): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 20 14:34:50 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:34:50.288 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[9b9fe3f9-894f-4fc7-ade2-948f5e9afe68]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:34:50 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:34:50.293 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[ce0c6e71-7096-40ff-a05a-04fc9e626ecd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:34:50 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:34:50.331 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[cb0c72ab-4057-4705-b901-98becdbad45c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:34:50 compute-1 podman[245522]: 2026-01-20 14:34:50.33318228 +0000 UTC m=+0.082209029 container health_status 533cd8e0c3b8b7f910f15cef49e7f879ab46c12ce646a35628c15f13725eb500 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2)
Jan 20 14:34:50 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:34:50.349 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[ced0a71e-445b-4e10-a232-7f3e47d2ac22]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap33c9a20a-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:89:8e:bd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 9, 'rx_bytes': 700, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 9, 'rx_bytes': 700, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 37], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 466055, 'reachable_time': 29489, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 245556, 'error': None, 'target': 'ovnmeta-33c9a20a-d976-42a8-b8bf-f83ddfc97c9a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:34:50 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:34:50.369 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[dff825a1-5291-47b9-a6d1-f8e0a4f2370a]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap33c9a20a-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 466065, 'tstamp': 466065}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 245557, 'error': None, 'target': 'ovnmeta-33c9a20a-d976-42a8-b8bf-f83ddfc97c9a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap33c9a20a-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 466068, 'tstamp': 466068}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 245557, 'error': None, 'target': 'ovnmeta-33c9a20a-d976-42a8-b8bf-f83ddfc97c9a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:34:50 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:34:50.370 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap33c9a20a-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:34:50 compute-1 nova_compute[225855]: 2026-01-20 14:34:50.372 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:34:50 compute-1 nova_compute[225855]: 2026-01-20 14:34:50.373 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:34:50 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:34:50.373 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap33c9a20a-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:34:50 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:34:50.374 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:34:50 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:34:50.374 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap33c9a20a-d0, col_values=(('external_ids', {'iface-id': '90c69687-c788-4dba-881f-3ed4a5ee6007'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:34:50 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:34:50.374 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:34:50 compute-1 nova_compute[225855]: 2026-01-20 14:34:50.516 225859 DEBUG nova.compute.manager [req-54fcfc19-c368-470e-834a-fa9cb874c9b7 req-e99bbeea-af60-4fcc-82cb-61f48007815e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 2ec7b07d-b593-46b7-9751-b6116e4d2cec] Received event network-vif-plugged-73e232f9-3860-4b9a-9cec-535fa2fb0c9f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:34:50 compute-1 nova_compute[225855]: 2026-01-20 14:34:50.516 225859 DEBUG oslo_concurrency.lockutils [req-54fcfc19-c368-470e-834a-fa9cb874c9b7 req-e99bbeea-af60-4fcc-82cb-61f48007815e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "2ec7b07d-b593-46b7-9751-b6116e4d2cec-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:34:50 compute-1 nova_compute[225855]: 2026-01-20 14:34:50.517 225859 DEBUG oslo_concurrency.lockutils [req-54fcfc19-c368-470e-834a-fa9cb874c9b7 req-e99bbeea-af60-4fcc-82cb-61f48007815e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "2ec7b07d-b593-46b7-9751-b6116e4d2cec-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:34:50 compute-1 nova_compute[225855]: 2026-01-20 14:34:50.517 225859 DEBUG oslo_concurrency.lockutils [req-54fcfc19-c368-470e-834a-fa9cb874c9b7 req-e99bbeea-af60-4fcc-82cb-61f48007815e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "2ec7b07d-b593-46b7-9751-b6116e4d2cec-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:34:50 compute-1 nova_compute[225855]: 2026-01-20 14:34:50.518 225859 DEBUG nova.compute.manager [req-54fcfc19-c368-470e-834a-fa9cb874c9b7 req-e99bbeea-af60-4fcc-82cb-61f48007815e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 2ec7b07d-b593-46b7-9751-b6116e4d2cec] No waiting events found dispatching network-vif-plugged-73e232f9-3860-4b9a-9cec-535fa2fb0c9f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:34:50 compute-1 nova_compute[225855]: 2026-01-20 14:34:50.518 225859 WARNING nova.compute.manager [req-54fcfc19-c368-470e-834a-fa9cb874c9b7 req-e99bbeea-af60-4fcc-82cb-61f48007815e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 2ec7b07d-b593-46b7-9751-b6116e4d2cec] Received unexpected event network-vif-plugged-73e232f9-3860-4b9a-9cec-535fa2fb0c9f for instance with vm_state error and task_state rebuild_spawning.
Jan 20 14:34:50 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:34:50 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:34:50 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:34:50.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:34:51 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e173 e173: 3 total, 3 up, 3 in
Jan 20 14:34:51 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:34:51 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:34:51 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:34:51.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:34:51 compute-1 nova_compute[225855]: 2026-01-20 14:34:51.726 225859 DEBUG nova.virt.libvirt.host [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Removed pending event for 2ec7b07d-b593-46b7-9751-b6116e4d2cec due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Jan 20 14:34:51 compute-1 nova_compute[225855]: 2026-01-20 14:34:51.727 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768919691.7263894, 2ec7b07d-b593-46b7-9751-b6116e4d2cec => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:34:51 compute-1 nova_compute[225855]: 2026-01-20 14:34:51.727 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 2ec7b07d-b593-46b7-9751-b6116e4d2cec] VM Resumed (Lifecycle Event)
Jan 20 14:34:51 compute-1 nova_compute[225855]: 2026-01-20 14:34:51.731 225859 DEBUG nova.compute.manager [None req-794cd3f2-804a-4c65-baab-dd821341efb7 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] [instance: 2ec7b07d-b593-46b7-9751-b6116e4d2cec] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 20 14:34:51 compute-1 nova_compute[225855]: 2026-01-20 14:34:51.732 225859 DEBUG nova.virt.libvirt.driver [None req-794cd3f2-804a-4c65-baab-dd821341efb7 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] [instance: 2ec7b07d-b593-46b7-9751-b6116e4d2cec] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 20 14:34:51 compute-1 nova_compute[225855]: 2026-01-20 14:34:51.736 225859 INFO nova.virt.libvirt.driver [-] [instance: 2ec7b07d-b593-46b7-9751-b6116e4d2cec] Instance spawned successfully.
Jan 20 14:34:51 compute-1 nova_compute[225855]: 2026-01-20 14:34:51.736 225859 DEBUG nova.virt.libvirt.driver [None req-794cd3f2-804a-4c65-baab-dd821341efb7 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] [instance: 2ec7b07d-b593-46b7-9751-b6116e4d2cec] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 20 14:34:51 compute-1 nova_compute[225855]: 2026-01-20 14:34:51.757 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 2ec7b07d-b593-46b7-9751-b6116e4d2cec] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:34:51 compute-1 nova_compute[225855]: 2026-01-20 14:34:51.768 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 2ec7b07d-b593-46b7-9751-b6116e4d2cec] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: error, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:34:51 compute-1 nova_compute[225855]: 2026-01-20 14:34:51.772 225859 DEBUG nova.virt.libvirt.driver [None req-794cd3f2-804a-4c65-baab-dd821341efb7 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] [instance: 2ec7b07d-b593-46b7-9751-b6116e4d2cec] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:34:51 compute-1 nova_compute[225855]: 2026-01-20 14:34:51.772 225859 DEBUG nova.virt.libvirt.driver [None req-794cd3f2-804a-4c65-baab-dd821341efb7 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] [instance: 2ec7b07d-b593-46b7-9751-b6116e4d2cec] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:34:51 compute-1 nova_compute[225855]: 2026-01-20 14:34:51.773 225859 DEBUG nova.virt.libvirt.driver [None req-794cd3f2-804a-4c65-baab-dd821341efb7 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] [instance: 2ec7b07d-b593-46b7-9751-b6116e4d2cec] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:34:51 compute-1 nova_compute[225855]: 2026-01-20 14:34:51.773 225859 DEBUG nova.virt.libvirt.driver [None req-794cd3f2-804a-4c65-baab-dd821341efb7 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] [instance: 2ec7b07d-b593-46b7-9751-b6116e4d2cec] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:34:51 compute-1 nova_compute[225855]: 2026-01-20 14:34:51.774 225859 DEBUG nova.virt.libvirt.driver [None req-794cd3f2-804a-4c65-baab-dd821341efb7 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] [instance: 2ec7b07d-b593-46b7-9751-b6116e4d2cec] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:34:51 compute-1 nova_compute[225855]: 2026-01-20 14:34:51.775 225859 DEBUG nova.virt.libvirt.driver [None req-794cd3f2-804a-4c65-baab-dd821341efb7 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] [instance: 2ec7b07d-b593-46b7-9751-b6116e4d2cec] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:34:51 compute-1 nova_compute[225855]: 2026-01-20 14:34:51.804 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 2ec7b07d-b593-46b7-9751-b6116e4d2cec] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Jan 20 14:34:51 compute-1 nova_compute[225855]: 2026-01-20 14:34:51.805 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768919691.7274077, 2ec7b07d-b593-46b7-9751-b6116e4d2cec => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:34:51 compute-1 nova_compute[225855]: 2026-01-20 14:34:51.805 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 2ec7b07d-b593-46b7-9751-b6116e4d2cec] VM Started (Lifecycle Event)
Jan 20 14:34:51 compute-1 nova_compute[225855]: 2026-01-20 14:34:51.832 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 2ec7b07d-b593-46b7-9751-b6116e4d2cec] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:34:51 compute-1 nova_compute[225855]: 2026-01-20 14:34:51.835 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 2ec7b07d-b593-46b7-9751-b6116e4d2cec] Synchronizing instance power state after lifecycle event "Started"; current vm_state: error, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:34:51 compute-1 nova_compute[225855]: 2026-01-20 14:34:51.842 225859 DEBUG nova.compute.manager [None req-794cd3f2-804a-4c65-baab-dd821341efb7 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] [instance: 2ec7b07d-b593-46b7-9751-b6116e4d2cec] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:34:51 compute-1 nova_compute[225855]: 2026-01-20 14:34:51.852 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 2ec7b07d-b593-46b7-9751-b6116e4d2cec] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Jan 20 14:34:51 compute-1 nova_compute[225855]: 2026-01-20 14:34:51.904 225859 DEBUG oslo_concurrency.lockutils [None req-794cd3f2-804a-4c65-baab-dd821341efb7 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:34:51 compute-1 nova_compute[225855]: 2026-01-20 14:34:51.904 225859 DEBUG oslo_concurrency.lockutils [None req-794cd3f2-804a-4c65-baab-dd821341efb7 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:34:51 compute-1 nova_compute[225855]: 2026-01-20 14:34:51.905 225859 DEBUG nova.objects.instance [None req-794cd3f2-804a-4c65-baab-dd821341efb7 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] [instance: 2ec7b07d-b593-46b7-9751-b6116e4d2cec] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Jan 20 14:34:51 compute-1 sudo[245602]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:34:51 compute-1 sudo[245602]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:34:51 compute-1 sudo[245602]: pam_unix(sudo:session): session closed for user root
Jan 20 14:34:52 compute-1 nova_compute[225855]: 2026-01-20 14:34:52.041 225859 DEBUG oslo_concurrency.lockutils [None req-794cd3f2-804a-4c65-baab-dd821341efb7 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.137s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:34:52 compute-1 sudo[245627]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:34:52 compute-1 sudo[245627]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:34:52 compute-1 sudo[245627]: pam_unix(sudo:session): session closed for user root
Jan 20 14:34:52 compute-1 sudo[245652]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:34:52 compute-1 sudo[245652]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:34:52 compute-1 sudo[245652]: pam_unix(sudo:session): session closed for user root
Jan 20 14:34:52 compute-1 sudo[245677]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 20 14:34:52 compute-1 sudo[245677]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:34:52 compute-1 ceph-mon[81775]: osdmap e173: 3 total, 3 up, 3 in
Jan 20 14:34:52 compute-1 ceph-mon[81775]: pgmap v1336: 321 pgs: 321 active+clean; 418 MiB data, 668 MiB used, 20 GiB / 21 GiB avail; 4.8 MiB/s rd, 9.4 MiB/s wr, 298 op/s
Jan 20 14:34:52 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e174 e174: 3 total, 3 up, 3 in
Jan 20 14:34:52 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e175 e175: 3 total, 3 up, 3 in
Jan 20 14:34:52 compute-1 nova_compute[225855]: 2026-01-20 14:34:52.617 225859 DEBUG nova.compute.manager [req-a0af8d46-3a12-443e-b9a2-8f602873e7e8 req-6ab3ced8-2443-41ba-909e-a2f5350e50a6 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 2ec7b07d-b593-46b7-9751-b6116e4d2cec] Received event network-vif-plugged-73e232f9-3860-4b9a-9cec-535fa2fb0c9f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:34:52 compute-1 nova_compute[225855]: 2026-01-20 14:34:52.617 225859 DEBUG oslo_concurrency.lockutils [req-a0af8d46-3a12-443e-b9a2-8f602873e7e8 req-6ab3ced8-2443-41ba-909e-a2f5350e50a6 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "2ec7b07d-b593-46b7-9751-b6116e4d2cec-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:34:52 compute-1 nova_compute[225855]: 2026-01-20 14:34:52.618 225859 DEBUG oslo_concurrency.lockutils [req-a0af8d46-3a12-443e-b9a2-8f602873e7e8 req-6ab3ced8-2443-41ba-909e-a2f5350e50a6 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "2ec7b07d-b593-46b7-9751-b6116e4d2cec-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:34:52 compute-1 nova_compute[225855]: 2026-01-20 14:34:52.618 225859 DEBUG oslo_concurrency.lockutils [req-a0af8d46-3a12-443e-b9a2-8f602873e7e8 req-6ab3ced8-2443-41ba-909e-a2f5350e50a6 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "2ec7b07d-b593-46b7-9751-b6116e4d2cec-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:34:52 compute-1 nova_compute[225855]: 2026-01-20 14:34:52.618 225859 DEBUG nova.compute.manager [req-a0af8d46-3a12-443e-b9a2-8f602873e7e8 req-6ab3ced8-2443-41ba-909e-a2f5350e50a6 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 2ec7b07d-b593-46b7-9751-b6116e4d2cec] No waiting events found dispatching network-vif-plugged-73e232f9-3860-4b9a-9cec-535fa2fb0c9f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:34:52 compute-1 nova_compute[225855]: 2026-01-20 14:34:52.618 225859 WARNING nova.compute.manager [req-a0af8d46-3a12-443e-b9a2-8f602873e7e8 req-6ab3ced8-2443-41ba-909e-a2f5350e50a6 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 2ec7b07d-b593-46b7-9751-b6116e4d2cec] Received unexpected event network-vif-plugged-73e232f9-3860-4b9a-9cec-535fa2fb0c9f for instance with vm_state active and task_state None.
Jan 20 14:34:52 compute-1 sudo[245677]: pam_unix(sudo:session): session closed for user root
Jan 20 14:34:52 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:34:52 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:34:52 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:34:52.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:34:53 compute-1 ceph-mon[81775]: osdmap e174: 3 total, 3 up, 3 in
Jan 20 14:34:53 compute-1 ceph-mon[81775]: osdmap e175: 3 total, 3 up, 3 in
Jan 20 14:34:53 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:34:53 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 14:34:53 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:34:53 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 14:34:53 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 14:34:53 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:34:53 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:34:53 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:34:53 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:34:53.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:34:53 compute-1 nova_compute[225855]: 2026-01-20 14:34:53.718 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:34:54 compute-1 ceph-mon[81775]: pgmap v1339: 321 pgs: 321 active+clean; 418 MiB data, 668 MiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 6.5 MiB/s wr, 180 op/s
Jan 20 14:34:54 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e176 e176: 3 total, 3 up, 3 in
Jan 20 14:34:54 compute-1 nova_compute[225855]: 2026-01-20 14:34:54.537 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:34:54 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:34:54 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:34:54 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:34:54.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:34:54 compute-1 nova_compute[225855]: 2026-01-20 14:34:54.675 225859 INFO nova.compute.manager [None req-85a922c4-6375-4395-a9cf-d7257192f2c5 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] [instance: 2ec7b07d-b593-46b7-9751-b6116e4d2cec] Rebuilding instance
Jan 20 14:34:54 compute-1 nova_compute[225855]: 2026-01-20 14:34:54.996 225859 DEBUG nova.objects.instance [None req-85a922c4-6375-4395-a9cf-d7257192f2c5 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 2ec7b07d-b593-46b7-9751-b6116e4d2cec obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:34:55 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e176 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:34:55 compute-1 nova_compute[225855]: 2026-01-20 14:34:55.015 225859 DEBUG nova.compute.manager [None req-85a922c4-6375-4395-a9cf-d7257192f2c5 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] [instance: 2ec7b07d-b593-46b7-9751-b6116e4d2cec] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:34:55 compute-1 nova_compute[225855]: 2026-01-20 14:34:55.067 225859 DEBUG nova.objects.instance [None req-85a922c4-6375-4395-a9cf-d7257192f2c5 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Lazy-loading 'pci_requests' on Instance uuid 2ec7b07d-b593-46b7-9751-b6116e4d2cec obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:34:55 compute-1 nova_compute[225855]: 2026-01-20 14:34:55.080 225859 DEBUG nova.objects.instance [None req-85a922c4-6375-4395-a9cf-d7257192f2c5 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Lazy-loading 'pci_devices' on Instance uuid 2ec7b07d-b593-46b7-9751-b6116e4d2cec obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:34:55 compute-1 nova_compute[225855]: 2026-01-20 14:34:55.091 225859 DEBUG nova.objects.instance [None req-85a922c4-6375-4395-a9cf-d7257192f2c5 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Lazy-loading 'resources' on Instance uuid 2ec7b07d-b593-46b7-9751-b6116e4d2cec obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:34:55 compute-1 nova_compute[225855]: 2026-01-20 14:34:55.101 225859 DEBUG nova.objects.instance [None req-85a922c4-6375-4395-a9cf-d7257192f2c5 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Lazy-loading 'migration_context' on Instance uuid 2ec7b07d-b593-46b7-9751-b6116e4d2cec obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:34:55 compute-1 nova_compute[225855]: 2026-01-20 14:34:55.112 225859 DEBUG nova.objects.instance [None req-85a922c4-6375-4395-a9cf-d7257192f2c5 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] [instance: 2ec7b07d-b593-46b7-9751-b6116e4d2cec] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Jan 20 14:34:55 compute-1 nova_compute[225855]: 2026-01-20 14:34:55.116 225859 DEBUG nova.virt.libvirt.driver [None req-85a922c4-6375-4395-a9cf-d7257192f2c5 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] [instance: 2ec7b07d-b593-46b7-9751-b6116e4d2cec] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Jan 20 14:34:55 compute-1 ceph-mon[81775]: osdmap e176: 3 total, 3 up, 3 in
Jan 20 14:34:55 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:34:55 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:34:55 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:34:55.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:34:56 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/2671332732' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:34:56 compute-1 ceph-mon[81775]: pgmap v1341: 321 pgs: 321 active+clean; 356 MiB data, 639 MiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 4.7 MiB/s wr, 363 op/s
Jan 20 14:34:56 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:34:56 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:34:56 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:34:56.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:34:57 compute-1 nova_compute[225855]: 2026-01-20 14:34:57.391 225859 DEBUG oslo_concurrency.lockutils [None req-d25f88da-e047-4e24-89f2-d9e5dffd2790 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] Acquiring lock "680a9e49-0486-46a0-8857-99a7a56c46e1" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:34:57 compute-1 nova_compute[225855]: 2026-01-20 14:34:57.392 225859 DEBUG oslo_concurrency.lockutils [None req-d25f88da-e047-4e24-89f2-d9e5dffd2790 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] Lock "680a9e49-0486-46a0-8857-99a7a56c46e1" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:34:57 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e177 e177: 3 total, 3 up, 3 in
Jan 20 14:34:57 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/3288395047' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:34:57 compute-1 nova_compute[225855]: 2026-01-20 14:34:57.417 225859 DEBUG nova.compute.manager [None req-d25f88da-e047-4e24-89f2-d9e5dffd2790 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] [instance: 680a9e49-0486-46a0-8857-99a7a56c46e1] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 20 14:34:57 compute-1 nova_compute[225855]: 2026-01-20 14:34:57.483 225859 DEBUG oslo_concurrency.lockutils [None req-d25f88da-e047-4e24-89f2-d9e5dffd2790 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:34:57 compute-1 nova_compute[225855]: 2026-01-20 14:34:57.484 225859 DEBUG oslo_concurrency.lockutils [None req-d25f88da-e047-4e24-89f2-d9e5dffd2790 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:34:57 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:34:57 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:34:57 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:34:57.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:34:57 compute-1 nova_compute[225855]: 2026-01-20 14:34:57.493 225859 DEBUG nova.virt.hardware [None req-d25f88da-e047-4e24-89f2-d9e5dffd2790 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 20 14:34:57 compute-1 nova_compute[225855]: 2026-01-20 14:34:57.494 225859 INFO nova.compute.claims [None req-d25f88da-e047-4e24-89f2-d9e5dffd2790 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] [instance: 680a9e49-0486-46a0-8857-99a7a56c46e1] Claim successful on node compute-1.ctlplane.example.com
Jan 20 14:34:57 compute-1 nova_compute[225855]: 2026-01-20 14:34:57.641 225859 DEBUG oslo_concurrency.processutils [None req-d25f88da-e047-4e24-89f2-d9e5dffd2790 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:34:58 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:34:58 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3207990597' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:34:58 compute-1 nova_compute[225855]: 2026-01-20 14:34:58.109 225859 DEBUG oslo_concurrency.processutils [None req-d25f88da-e047-4e24-89f2-d9e5dffd2790 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:34:58 compute-1 nova_compute[225855]: 2026-01-20 14:34:58.115 225859 DEBUG nova.compute.provider_tree [None req-d25f88da-e047-4e24-89f2-d9e5dffd2790 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:34:58 compute-1 nova_compute[225855]: 2026-01-20 14:34:58.132 225859 DEBUG nova.scheduler.client.report [None req-d25f88da-e047-4e24-89f2-d9e5dffd2790 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:34:58 compute-1 nova_compute[225855]: 2026-01-20 14:34:58.151 225859 DEBUG oslo_concurrency.lockutils [None req-d25f88da-e047-4e24-89f2-d9e5dffd2790 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.667s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:34:58 compute-1 nova_compute[225855]: 2026-01-20 14:34:58.152 225859 DEBUG nova.compute.manager [None req-d25f88da-e047-4e24-89f2-d9e5dffd2790 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] [instance: 680a9e49-0486-46a0-8857-99a7a56c46e1] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 20 14:34:58 compute-1 nova_compute[225855]: 2026-01-20 14:34:58.191 225859 DEBUG nova.compute.manager [None req-d25f88da-e047-4e24-89f2-d9e5dffd2790 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] [instance: 680a9e49-0486-46a0-8857-99a7a56c46e1] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 20 14:34:58 compute-1 nova_compute[225855]: 2026-01-20 14:34:58.191 225859 DEBUG nova.network.neutron [None req-d25f88da-e047-4e24-89f2-d9e5dffd2790 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] [instance: 680a9e49-0486-46a0-8857-99a7a56c46e1] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 20 14:34:58 compute-1 nova_compute[225855]: 2026-01-20 14:34:58.211 225859 INFO nova.virt.libvirt.driver [None req-d25f88da-e047-4e24-89f2-d9e5dffd2790 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] [instance: 680a9e49-0486-46a0-8857-99a7a56c46e1] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 20 14:34:58 compute-1 nova_compute[225855]: 2026-01-20 14:34:58.230 225859 DEBUG nova.compute.manager [None req-d25f88da-e047-4e24-89f2-d9e5dffd2790 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] [instance: 680a9e49-0486-46a0-8857-99a7a56c46e1] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 20 14:34:58 compute-1 nova_compute[225855]: 2026-01-20 14:34:58.357 225859 DEBUG nova.compute.manager [None req-d25f88da-e047-4e24-89f2-d9e5dffd2790 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] [instance: 680a9e49-0486-46a0-8857-99a7a56c46e1] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 20 14:34:58 compute-1 nova_compute[225855]: 2026-01-20 14:34:58.359 225859 DEBUG nova.virt.libvirt.driver [None req-d25f88da-e047-4e24-89f2-d9e5dffd2790 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] [instance: 680a9e49-0486-46a0-8857-99a7a56c46e1] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 20 14:34:58 compute-1 nova_compute[225855]: 2026-01-20 14:34:58.359 225859 INFO nova.virt.libvirt.driver [None req-d25f88da-e047-4e24-89f2-d9e5dffd2790 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] [instance: 680a9e49-0486-46a0-8857-99a7a56c46e1] Creating image(s)
Jan 20 14:34:58 compute-1 nova_compute[225855]: 2026-01-20 14:34:58.387 225859 DEBUG nova.storage.rbd_utils [None req-d25f88da-e047-4e24-89f2-d9e5dffd2790 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] rbd image 680a9e49-0486-46a0-8857-99a7a56c46e1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:34:58 compute-1 nova_compute[225855]: 2026-01-20 14:34:58.412 225859 DEBUG nova.storage.rbd_utils [None req-d25f88da-e047-4e24-89f2-d9e5dffd2790 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] rbd image 680a9e49-0486-46a0-8857-99a7a56c46e1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:34:58 compute-1 ceph-mon[81775]: osdmap e177: 3 total, 3 up, 3 in
Jan 20 14:34:58 compute-1 ceph-mon[81775]: pgmap v1343: 321 pgs: 321 active+clean; 326 MiB data, 625 MiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 38 KiB/s wr, 333 op/s
Jan 20 14:34:58 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/3207990597' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:34:58 compute-1 nova_compute[225855]: 2026-01-20 14:34:58.612 225859 DEBUG nova.storage.rbd_utils [None req-d25f88da-e047-4e24-89f2-d9e5dffd2790 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] rbd image 680a9e49-0486-46a0-8857-99a7a56c46e1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:34:58 compute-1 nova_compute[225855]: 2026-01-20 14:34:58.617 225859 DEBUG oslo_concurrency.processutils [None req-d25f88da-e047-4e24-89f2-d9e5dffd2790 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:34:58 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:34:58 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:34:58 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:34:58.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:34:58 compute-1 nova_compute[225855]: 2026-01-20 14:34:58.671 225859 DEBUG oslo_concurrency.processutils [None req-d25f88da-e047-4e24-89f2-d9e5dffd2790 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json" returned: 0 in 0.054s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:34:58 compute-1 nova_compute[225855]: 2026-01-20 14:34:58.672 225859 DEBUG oslo_concurrency.lockutils [None req-d25f88da-e047-4e24-89f2-d9e5dffd2790 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] Acquiring lock "82d5c1918fd7c974214c7a48c1793a7a82560462" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:34:58 compute-1 nova_compute[225855]: 2026-01-20 14:34:58.673 225859 DEBUG oslo_concurrency.lockutils [None req-d25f88da-e047-4e24-89f2-d9e5dffd2790 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:34:58 compute-1 nova_compute[225855]: 2026-01-20 14:34:58.673 225859 DEBUG oslo_concurrency.lockutils [None req-d25f88da-e047-4e24-89f2-d9e5dffd2790 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:34:58 compute-1 nova_compute[225855]: 2026-01-20 14:34:58.779 225859 DEBUG nova.storage.rbd_utils [None req-d25f88da-e047-4e24-89f2-d9e5dffd2790 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] rbd image 680a9e49-0486-46a0-8857-99a7a56c46e1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:34:58 compute-1 nova_compute[225855]: 2026-01-20 14:34:58.783 225859 DEBUG oslo_concurrency.processutils [None req-d25f88da-e047-4e24-89f2-d9e5dffd2790 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 680a9e49-0486-46a0-8857-99a7a56c46e1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:34:58 compute-1 nova_compute[225855]: 2026-01-20 14:34:58.802 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:34:58 compute-1 nova_compute[225855]: 2026-01-20 14:34:58.840 225859 DEBUG nova.network.neutron [None req-d25f88da-e047-4e24-89f2-d9e5dffd2790 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] [instance: 680a9e49-0486-46a0-8857-99a7a56c46e1] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Jan 20 14:34:58 compute-1 nova_compute[225855]: 2026-01-20 14:34:58.840 225859 DEBUG nova.compute.manager [None req-d25f88da-e047-4e24-89f2-d9e5dffd2790 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] [instance: 680a9e49-0486-46a0-8857-99a7a56c46e1] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 20 14:34:59 compute-1 nova_compute[225855]: 2026-01-20 14:34:59.081 225859 DEBUG oslo_concurrency.processutils [None req-d25f88da-e047-4e24-89f2-d9e5dffd2790 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 680a9e49-0486-46a0-8857-99a7a56c46e1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.298s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:34:59 compute-1 nova_compute[225855]: 2026-01-20 14:34:59.163 225859 DEBUG nova.storage.rbd_utils [None req-d25f88da-e047-4e24-89f2-d9e5dffd2790 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] resizing rbd image 680a9e49-0486-46a0-8857-99a7a56c46e1_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 20 14:34:59 compute-1 sudo[245907]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:34:59 compute-1 sudo[245907]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:34:59 compute-1 sudo[245907]: pam_unix(sudo:session): session closed for user root
Jan 20 14:34:59 compute-1 nova_compute[225855]: 2026-01-20 14:34:59.340 225859 DEBUG nova.objects.instance [None req-d25f88da-e047-4e24-89f2-d9e5dffd2790 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] Lazy-loading 'migration_context' on Instance uuid 680a9e49-0486-46a0-8857-99a7a56c46e1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:34:59 compute-1 nova_compute[225855]: 2026-01-20 14:34:59.358 225859 DEBUG nova.virt.libvirt.driver [None req-d25f88da-e047-4e24-89f2-d9e5dffd2790 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] [instance: 680a9e49-0486-46a0-8857-99a7a56c46e1] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 20 14:34:59 compute-1 nova_compute[225855]: 2026-01-20 14:34:59.359 225859 DEBUG nova.virt.libvirt.driver [None req-d25f88da-e047-4e24-89f2-d9e5dffd2790 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] [instance: 680a9e49-0486-46a0-8857-99a7a56c46e1] Ensure instance console log exists: /var/lib/nova/instances/680a9e49-0486-46a0-8857-99a7a56c46e1/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 20 14:34:59 compute-1 nova_compute[225855]: 2026-01-20 14:34:59.360 225859 DEBUG oslo_concurrency.lockutils [None req-d25f88da-e047-4e24-89f2-d9e5dffd2790 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:34:59 compute-1 nova_compute[225855]: 2026-01-20 14:34:59.361 225859 DEBUG oslo_concurrency.lockutils [None req-d25f88da-e047-4e24-89f2-d9e5dffd2790 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:34:59 compute-1 nova_compute[225855]: 2026-01-20 14:34:59.361 225859 DEBUG oslo_concurrency.lockutils [None req-d25f88da-e047-4e24-89f2-d9e5dffd2790 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:34:59 compute-1 nova_compute[225855]: 2026-01-20 14:34:59.364 225859 DEBUG nova.virt.libvirt.driver [None req-d25f88da-e047-4e24-89f2-d9e5dffd2790 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] [instance: 680a9e49-0486-46a0-8857-99a7a56c46e1] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'device_type': 'disk', 'encryption_options': None, 'size': 0, 'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'guest_format': None, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'image_id': 'a32b3e07-16d8-46fd-9a7b-c242c432fcf9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 20 14:34:59 compute-1 sudo[245942]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 14:34:59 compute-1 nova_compute[225855]: 2026-01-20 14:34:59.370 225859 WARNING nova.virt.libvirt.driver [None req-d25f88da-e047-4e24-89f2-d9e5dffd2790 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 14:34:59 compute-1 sudo[245942]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:34:59 compute-1 nova_compute[225855]: 2026-01-20 14:34:59.377 225859 DEBUG nova.virt.libvirt.host [None req-d25f88da-e047-4e24-89f2-d9e5dffd2790 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 20 14:34:59 compute-1 nova_compute[225855]: 2026-01-20 14:34:59.379 225859 DEBUG nova.virt.libvirt.host [None req-d25f88da-e047-4e24-89f2-d9e5dffd2790 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 20 14:34:59 compute-1 sudo[245942]: pam_unix(sudo:session): session closed for user root
Jan 20 14:34:59 compute-1 nova_compute[225855]: 2026-01-20 14:34:59.384 225859 DEBUG nova.virt.libvirt.host [None req-d25f88da-e047-4e24-89f2-d9e5dffd2790 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 20 14:34:59 compute-1 nova_compute[225855]: 2026-01-20 14:34:59.385 225859 DEBUG nova.virt.libvirt.host [None req-d25f88da-e047-4e24-89f2-d9e5dffd2790 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 20 14:34:59 compute-1 nova_compute[225855]: 2026-01-20 14:34:59.387 225859 DEBUG nova.virt.libvirt.driver [None req-d25f88da-e047-4e24-89f2-d9e5dffd2790 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 20 14:34:59 compute-1 nova_compute[225855]: 2026-01-20 14:34:59.388 225859 DEBUG nova.virt.hardware [None req-d25f88da-e047-4e24-89f2-d9e5dffd2790 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-20T14:21:55Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='522deaab-a741-4dbb-932d-d8b13a211c33',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 20 14:34:59 compute-1 nova_compute[225855]: 2026-01-20 14:34:59.389 225859 DEBUG nova.virt.hardware [None req-d25f88da-e047-4e24-89f2-d9e5dffd2790 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 20 14:34:59 compute-1 nova_compute[225855]: 2026-01-20 14:34:59.389 225859 DEBUG nova.virt.hardware [None req-d25f88da-e047-4e24-89f2-d9e5dffd2790 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 20 14:34:59 compute-1 nova_compute[225855]: 2026-01-20 14:34:59.390 225859 DEBUG nova.virt.hardware [None req-d25f88da-e047-4e24-89f2-d9e5dffd2790 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 20 14:34:59 compute-1 nova_compute[225855]: 2026-01-20 14:34:59.390 225859 DEBUG nova.virt.hardware [None req-d25f88da-e047-4e24-89f2-d9e5dffd2790 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 20 14:34:59 compute-1 nova_compute[225855]: 2026-01-20 14:34:59.391 225859 DEBUG nova.virt.hardware [None req-d25f88da-e047-4e24-89f2-d9e5dffd2790 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 20 14:34:59 compute-1 nova_compute[225855]: 2026-01-20 14:34:59.391 225859 DEBUG nova.virt.hardware [None req-d25f88da-e047-4e24-89f2-d9e5dffd2790 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 20 14:34:59 compute-1 nova_compute[225855]: 2026-01-20 14:34:59.392 225859 DEBUG nova.virt.hardware [None req-d25f88da-e047-4e24-89f2-d9e5dffd2790 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 20 14:34:59 compute-1 nova_compute[225855]: 2026-01-20 14:34:59.393 225859 DEBUG nova.virt.hardware [None req-d25f88da-e047-4e24-89f2-d9e5dffd2790 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 20 14:34:59 compute-1 nova_compute[225855]: 2026-01-20 14:34:59.393 225859 DEBUG nova.virt.hardware [None req-d25f88da-e047-4e24-89f2-d9e5dffd2790 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 20 14:34:59 compute-1 nova_compute[225855]: 2026-01-20 14:34:59.394 225859 DEBUG nova.virt.hardware [None req-d25f88da-e047-4e24-89f2-d9e5dffd2790 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 20 14:34:59 compute-1 nova_compute[225855]: 2026-01-20 14:34:59.398 225859 DEBUG oslo_concurrency.processutils [None req-d25f88da-e047-4e24-89f2-d9e5dffd2790 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:34:59 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:34:59 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:34:59 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:34:59.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:34:59 compute-1 nova_compute[225855]: 2026-01-20 14:34:59.538 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:34:59 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:34:59 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2891658671' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:34:59 compute-1 nova_compute[225855]: 2026-01-20 14:34:59.830 225859 DEBUG oslo_concurrency.processutils [None req-d25f88da-e047-4e24-89f2-d9e5dffd2790 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:34:59 compute-1 nova_compute[225855]: 2026-01-20 14:34:59.855 225859 DEBUG nova.storage.rbd_utils [None req-d25f88da-e047-4e24-89f2-d9e5dffd2790 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] rbd image 680a9e49-0486-46a0-8857-99a7a56c46e1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:34:59 compute-1 nova_compute[225855]: 2026-01-20 14:34:59.859 225859 DEBUG oslo_concurrency.processutils [None req-d25f88da-e047-4e24-89f2-d9e5dffd2790 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:35:00 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e177 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:35:00 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:35:00 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:35:00 compute-1 ceph-mon[81775]: pgmap v1344: 321 pgs: 321 active+clean; 343 MiB data, 634 MiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 1.3 MiB/s wr, 280 op/s
Jan 20 14:35:00 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/3333457106' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:35:00 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/2891658671' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:35:00 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:35:00 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/38576092' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:35:00 compute-1 nova_compute[225855]: 2026-01-20 14:35:00.321 225859 DEBUG oslo_concurrency.processutils [None req-d25f88da-e047-4e24-89f2-d9e5dffd2790 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:35:00 compute-1 nova_compute[225855]: 2026-01-20 14:35:00.323 225859 DEBUG nova.objects.instance [None req-d25f88da-e047-4e24-89f2-d9e5dffd2790 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] Lazy-loading 'pci_devices' on Instance uuid 680a9e49-0486-46a0-8857-99a7a56c46e1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:35:00 compute-1 nova_compute[225855]: 2026-01-20 14:35:00.342 225859 DEBUG nova.virt.libvirt.driver [None req-d25f88da-e047-4e24-89f2-d9e5dffd2790 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] [instance: 680a9e49-0486-46a0-8857-99a7a56c46e1] End _get_guest_xml xml=<domain type="kvm">
Jan 20 14:35:00 compute-1 nova_compute[225855]:   <uuid>680a9e49-0486-46a0-8857-99a7a56c46e1</uuid>
Jan 20 14:35:00 compute-1 nova_compute[225855]:   <name>instance-0000002e</name>
Jan 20 14:35:00 compute-1 nova_compute[225855]:   <memory>131072</memory>
Jan 20 14:35:00 compute-1 nova_compute[225855]:   <vcpu>1</vcpu>
Jan 20 14:35:00 compute-1 nova_compute[225855]:   <metadata>
Jan 20 14:35:00 compute-1 nova_compute[225855]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 14:35:00 compute-1 nova_compute[225855]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 14:35:00 compute-1 nova_compute[225855]:       <nova:name>tempest-ListImageFiltersTestJSON-server-1834946582</nova:name>
Jan 20 14:35:00 compute-1 nova_compute[225855]:       <nova:creationTime>2026-01-20 14:34:59</nova:creationTime>
Jan 20 14:35:00 compute-1 nova_compute[225855]:       <nova:flavor name="m1.nano">
Jan 20 14:35:00 compute-1 nova_compute[225855]:         <nova:memory>128</nova:memory>
Jan 20 14:35:00 compute-1 nova_compute[225855]:         <nova:disk>1</nova:disk>
Jan 20 14:35:00 compute-1 nova_compute[225855]:         <nova:swap>0</nova:swap>
Jan 20 14:35:00 compute-1 nova_compute[225855]:         <nova:ephemeral>0</nova:ephemeral>
Jan 20 14:35:00 compute-1 nova_compute[225855]:         <nova:vcpus>1</nova:vcpus>
Jan 20 14:35:00 compute-1 nova_compute[225855]:       </nova:flavor>
Jan 20 14:35:00 compute-1 nova_compute[225855]:       <nova:owner>
Jan 20 14:35:00 compute-1 nova_compute[225855]:         <nova:user uuid="72ad8e217e1348378596753eefca1452">tempest-ListImageFiltersTestJSON-1649594432-project-member</nova:user>
Jan 20 14:35:00 compute-1 nova_compute[225855]:         <nova:project uuid="9e10f687e8a14fc3bfa98df19df5befd">tempest-ListImageFiltersTestJSON-1649594432</nova:project>
Jan 20 14:35:00 compute-1 nova_compute[225855]:       </nova:owner>
Jan 20 14:35:00 compute-1 nova_compute[225855]:       <nova:root type="image" uuid="a32b3e07-16d8-46fd-9a7b-c242c432fcf9"/>
Jan 20 14:35:00 compute-1 nova_compute[225855]:       <nova:ports/>
Jan 20 14:35:00 compute-1 nova_compute[225855]:     </nova:instance>
Jan 20 14:35:00 compute-1 nova_compute[225855]:   </metadata>
Jan 20 14:35:00 compute-1 nova_compute[225855]:   <sysinfo type="smbios">
Jan 20 14:35:00 compute-1 nova_compute[225855]:     <system>
Jan 20 14:35:00 compute-1 nova_compute[225855]:       <entry name="manufacturer">RDO</entry>
Jan 20 14:35:00 compute-1 nova_compute[225855]:       <entry name="product">OpenStack Compute</entry>
Jan 20 14:35:00 compute-1 nova_compute[225855]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 14:35:00 compute-1 nova_compute[225855]:       <entry name="serial">680a9e49-0486-46a0-8857-99a7a56c46e1</entry>
Jan 20 14:35:00 compute-1 nova_compute[225855]:       <entry name="uuid">680a9e49-0486-46a0-8857-99a7a56c46e1</entry>
Jan 20 14:35:00 compute-1 nova_compute[225855]:       <entry name="family">Virtual Machine</entry>
Jan 20 14:35:00 compute-1 nova_compute[225855]:     </system>
Jan 20 14:35:00 compute-1 nova_compute[225855]:   </sysinfo>
Jan 20 14:35:00 compute-1 nova_compute[225855]:   <os>
Jan 20 14:35:00 compute-1 nova_compute[225855]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 20 14:35:00 compute-1 nova_compute[225855]:     <boot dev="hd"/>
Jan 20 14:35:00 compute-1 nova_compute[225855]:     <smbios mode="sysinfo"/>
Jan 20 14:35:00 compute-1 nova_compute[225855]:   </os>
Jan 20 14:35:00 compute-1 nova_compute[225855]:   <features>
Jan 20 14:35:00 compute-1 nova_compute[225855]:     <acpi/>
Jan 20 14:35:00 compute-1 nova_compute[225855]:     <apic/>
Jan 20 14:35:00 compute-1 nova_compute[225855]:     <vmcoreinfo/>
Jan 20 14:35:00 compute-1 nova_compute[225855]:   </features>
Jan 20 14:35:00 compute-1 nova_compute[225855]:   <clock offset="utc">
Jan 20 14:35:00 compute-1 nova_compute[225855]:     <timer name="pit" tickpolicy="delay"/>
Jan 20 14:35:00 compute-1 nova_compute[225855]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 20 14:35:00 compute-1 nova_compute[225855]:     <timer name="hpet" present="no"/>
Jan 20 14:35:00 compute-1 nova_compute[225855]:   </clock>
Jan 20 14:35:00 compute-1 nova_compute[225855]:   <cpu mode="custom" match="exact">
Jan 20 14:35:00 compute-1 nova_compute[225855]:     <model>Nehalem</model>
Jan 20 14:35:00 compute-1 nova_compute[225855]:     <topology sockets="1" cores="1" threads="1"/>
Jan 20 14:35:00 compute-1 nova_compute[225855]:   </cpu>
Jan 20 14:35:00 compute-1 nova_compute[225855]:   <devices>
Jan 20 14:35:00 compute-1 nova_compute[225855]:     <disk type="network" device="disk">
Jan 20 14:35:00 compute-1 nova_compute[225855]:       <driver type="raw" cache="none"/>
Jan 20 14:35:00 compute-1 nova_compute[225855]:       <source protocol="rbd" name="vms/680a9e49-0486-46a0-8857-99a7a56c46e1_disk">
Jan 20 14:35:00 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:35:00 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:35:00 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:35:00 compute-1 nova_compute[225855]:       </source>
Jan 20 14:35:00 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 14:35:00 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:35:00 compute-1 nova_compute[225855]:       </auth>
Jan 20 14:35:00 compute-1 nova_compute[225855]:       <target dev="vda" bus="virtio"/>
Jan 20 14:35:00 compute-1 nova_compute[225855]:     </disk>
Jan 20 14:35:00 compute-1 nova_compute[225855]:     <disk type="network" device="cdrom">
Jan 20 14:35:00 compute-1 nova_compute[225855]:       <driver type="raw" cache="none"/>
Jan 20 14:35:00 compute-1 nova_compute[225855]:       <source protocol="rbd" name="vms/680a9e49-0486-46a0-8857-99a7a56c46e1_disk.config">
Jan 20 14:35:00 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:35:00 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:35:00 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:35:00 compute-1 nova_compute[225855]:       </source>
Jan 20 14:35:00 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 14:35:00 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:35:00 compute-1 nova_compute[225855]:       </auth>
Jan 20 14:35:00 compute-1 nova_compute[225855]:       <target dev="sda" bus="sata"/>
Jan 20 14:35:00 compute-1 nova_compute[225855]:     </disk>
Jan 20 14:35:00 compute-1 nova_compute[225855]:     <serial type="pty">
Jan 20 14:35:00 compute-1 nova_compute[225855]:       <log file="/var/lib/nova/instances/680a9e49-0486-46a0-8857-99a7a56c46e1/console.log" append="off"/>
Jan 20 14:35:00 compute-1 nova_compute[225855]:     </serial>
Jan 20 14:35:00 compute-1 nova_compute[225855]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 20 14:35:00 compute-1 nova_compute[225855]:     <video>
Jan 20 14:35:00 compute-1 nova_compute[225855]:       <model type="virtio"/>
Jan 20 14:35:00 compute-1 nova_compute[225855]:     </video>
Jan 20 14:35:00 compute-1 nova_compute[225855]:     <input type="tablet" bus="usb"/>
Jan 20 14:35:00 compute-1 nova_compute[225855]:     <rng model="virtio">
Jan 20 14:35:00 compute-1 nova_compute[225855]:       <backend model="random">/dev/urandom</backend>
Jan 20 14:35:00 compute-1 nova_compute[225855]:     </rng>
Jan 20 14:35:00 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root"/>
Jan 20 14:35:00 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:35:00 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:35:00 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:35:00 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:35:00 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:35:00 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:35:00 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:35:00 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:35:00 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:35:00 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:35:00 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:35:00 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:35:00 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:35:00 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:35:00 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:35:00 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:35:00 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:35:00 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:35:00 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:35:00 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:35:00 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:35:00 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:35:00 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:35:00 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:35:00 compute-1 nova_compute[225855]:     <controller type="usb" index="0"/>
Jan 20 14:35:00 compute-1 nova_compute[225855]:     <memballoon model="virtio">
Jan 20 14:35:00 compute-1 nova_compute[225855]:       <stats period="10"/>
Jan 20 14:35:00 compute-1 nova_compute[225855]:     </memballoon>
Jan 20 14:35:00 compute-1 nova_compute[225855]:   </devices>
Jan 20 14:35:00 compute-1 nova_compute[225855]: </domain>
Jan 20 14:35:00 compute-1 nova_compute[225855]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 20 14:35:00 compute-1 nova_compute[225855]: 2026-01-20 14:35:00.469 225859 DEBUG nova.virt.libvirt.driver [None req-d25f88da-e047-4e24-89f2-d9e5dffd2790 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:35:00 compute-1 nova_compute[225855]: 2026-01-20 14:35:00.469 225859 DEBUG nova.virt.libvirt.driver [None req-d25f88da-e047-4e24-89f2-d9e5dffd2790 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:35:00 compute-1 nova_compute[225855]: 2026-01-20 14:35:00.469 225859 INFO nova.virt.libvirt.driver [None req-d25f88da-e047-4e24-89f2-d9e5dffd2790 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] [instance: 680a9e49-0486-46a0-8857-99a7a56c46e1] Using config drive
Jan 20 14:35:00 compute-1 nova_compute[225855]: 2026-01-20 14:35:00.492 225859 DEBUG nova.storage.rbd_utils [None req-d25f88da-e047-4e24-89f2-d9e5dffd2790 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] rbd image 680a9e49-0486-46a0-8857-99a7a56c46e1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:35:00 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:35:00 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:35:00 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:35:00.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:35:00 compute-1 nova_compute[225855]: 2026-01-20 14:35:00.775 225859 INFO nova.virt.libvirt.driver [None req-d25f88da-e047-4e24-89f2-d9e5dffd2790 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] [instance: 680a9e49-0486-46a0-8857-99a7a56c46e1] Creating config drive at /var/lib/nova/instances/680a9e49-0486-46a0-8857-99a7a56c46e1/disk.config
Jan 20 14:35:00 compute-1 nova_compute[225855]: 2026-01-20 14:35:00.780 225859 DEBUG oslo_concurrency.processutils [None req-d25f88da-e047-4e24-89f2-d9e5dffd2790 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/680a9e49-0486-46a0-8857-99a7a56c46e1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpq21yowwm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:35:00 compute-1 nova_compute[225855]: 2026-01-20 14:35:00.909 225859 DEBUG oslo_concurrency.processutils [None req-d25f88da-e047-4e24-89f2-d9e5dffd2790 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/680a9e49-0486-46a0-8857-99a7a56c46e1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpq21yowwm" returned: 0 in 0.128s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:35:00 compute-1 nova_compute[225855]: 2026-01-20 14:35:00.937 225859 DEBUG nova.storage.rbd_utils [None req-d25f88da-e047-4e24-89f2-d9e5dffd2790 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] rbd image 680a9e49-0486-46a0-8857-99a7a56c46e1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:35:00 compute-1 nova_compute[225855]: 2026-01-20 14:35:00.941 225859 DEBUG oslo_concurrency.processutils [None req-d25f88da-e047-4e24-89f2-d9e5dffd2790 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/680a9e49-0486-46a0-8857-99a7a56c46e1/disk.config 680a9e49-0486-46a0-8857-99a7a56c46e1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:35:01 compute-1 nova_compute[225855]: 2026-01-20 14:35:01.134 225859 DEBUG oslo_concurrency.processutils [None req-d25f88da-e047-4e24-89f2-d9e5dffd2790 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/680a9e49-0486-46a0-8857-99a7a56c46e1/disk.config 680a9e49-0486-46a0-8857-99a7a56c46e1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.193s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:35:01 compute-1 nova_compute[225855]: 2026-01-20 14:35:01.135 225859 INFO nova.virt.libvirt.driver [None req-d25f88da-e047-4e24-89f2-d9e5dffd2790 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] [instance: 680a9e49-0486-46a0-8857-99a7a56c46e1] Deleting local config drive /var/lib/nova/instances/680a9e49-0486-46a0-8857-99a7a56c46e1/disk.config because it was imported into RBD.
Jan 20 14:35:01 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/38576092' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:35:01 compute-1 systemd-machined[194361]: New machine qemu-23-instance-0000002e.
Jan 20 14:35:01 compute-1 systemd[1]: Started Virtual Machine qemu-23-instance-0000002e.
Jan 20 14:35:01 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:35:01 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:35:01 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:35:01.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:35:01 compute-1 nova_compute[225855]: 2026-01-20 14:35:01.617 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768919701.6168437, 680a9e49-0486-46a0-8857-99a7a56c46e1 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:35:01 compute-1 nova_compute[225855]: 2026-01-20 14:35:01.617 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 680a9e49-0486-46a0-8857-99a7a56c46e1] VM Resumed (Lifecycle Event)
Jan 20 14:35:01 compute-1 nova_compute[225855]: 2026-01-20 14:35:01.621 225859 DEBUG nova.compute.manager [None req-d25f88da-e047-4e24-89f2-d9e5dffd2790 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] [instance: 680a9e49-0486-46a0-8857-99a7a56c46e1] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 20 14:35:01 compute-1 nova_compute[225855]: 2026-01-20 14:35:01.621 225859 DEBUG nova.virt.libvirt.driver [None req-d25f88da-e047-4e24-89f2-d9e5dffd2790 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] [instance: 680a9e49-0486-46a0-8857-99a7a56c46e1] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 20 14:35:01 compute-1 nova_compute[225855]: 2026-01-20 14:35:01.624 225859 INFO nova.virt.libvirt.driver [-] [instance: 680a9e49-0486-46a0-8857-99a7a56c46e1] Instance spawned successfully.
Jan 20 14:35:01 compute-1 nova_compute[225855]: 2026-01-20 14:35:01.625 225859 DEBUG nova.virt.libvirt.driver [None req-d25f88da-e047-4e24-89f2-d9e5dffd2790 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] [instance: 680a9e49-0486-46a0-8857-99a7a56c46e1] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 20 14:35:01 compute-1 nova_compute[225855]: 2026-01-20 14:35:01.643 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 680a9e49-0486-46a0-8857-99a7a56c46e1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:35:01 compute-1 nova_compute[225855]: 2026-01-20 14:35:01.647 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 680a9e49-0486-46a0-8857-99a7a56c46e1] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:35:01 compute-1 nova_compute[225855]: 2026-01-20 14:35:01.651 225859 DEBUG nova.virt.libvirt.driver [None req-d25f88da-e047-4e24-89f2-d9e5dffd2790 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] [instance: 680a9e49-0486-46a0-8857-99a7a56c46e1] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:35:01 compute-1 nova_compute[225855]: 2026-01-20 14:35:01.652 225859 DEBUG nova.virt.libvirt.driver [None req-d25f88da-e047-4e24-89f2-d9e5dffd2790 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] [instance: 680a9e49-0486-46a0-8857-99a7a56c46e1] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:35:01 compute-1 nova_compute[225855]: 2026-01-20 14:35:01.652 225859 DEBUG nova.virt.libvirt.driver [None req-d25f88da-e047-4e24-89f2-d9e5dffd2790 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] [instance: 680a9e49-0486-46a0-8857-99a7a56c46e1] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:35:01 compute-1 nova_compute[225855]: 2026-01-20 14:35:01.653 225859 DEBUG nova.virt.libvirt.driver [None req-d25f88da-e047-4e24-89f2-d9e5dffd2790 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] [instance: 680a9e49-0486-46a0-8857-99a7a56c46e1] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:35:01 compute-1 nova_compute[225855]: 2026-01-20 14:35:01.653 225859 DEBUG nova.virt.libvirt.driver [None req-d25f88da-e047-4e24-89f2-d9e5dffd2790 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] [instance: 680a9e49-0486-46a0-8857-99a7a56c46e1] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:35:01 compute-1 nova_compute[225855]: 2026-01-20 14:35:01.654 225859 DEBUG nova.virt.libvirt.driver [None req-d25f88da-e047-4e24-89f2-d9e5dffd2790 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] [instance: 680a9e49-0486-46a0-8857-99a7a56c46e1] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:35:01 compute-1 nova_compute[225855]: 2026-01-20 14:35:01.676 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 680a9e49-0486-46a0-8857-99a7a56c46e1] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 14:35:01 compute-1 nova_compute[225855]: 2026-01-20 14:35:01.676 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768919701.6206298, 680a9e49-0486-46a0-8857-99a7a56c46e1 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:35:01 compute-1 nova_compute[225855]: 2026-01-20 14:35:01.677 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 680a9e49-0486-46a0-8857-99a7a56c46e1] VM Started (Lifecycle Event)
Jan 20 14:35:01 compute-1 nova_compute[225855]: 2026-01-20 14:35:01.701 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 680a9e49-0486-46a0-8857-99a7a56c46e1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:35:01 compute-1 nova_compute[225855]: 2026-01-20 14:35:01.705 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 680a9e49-0486-46a0-8857-99a7a56c46e1] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:35:01 compute-1 nova_compute[225855]: 2026-01-20 14:35:01.710 225859 INFO nova.compute.manager [None req-d25f88da-e047-4e24-89f2-d9e5dffd2790 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] [instance: 680a9e49-0486-46a0-8857-99a7a56c46e1] Took 3.35 seconds to spawn the instance on the hypervisor.
Jan 20 14:35:01 compute-1 nova_compute[225855]: 2026-01-20 14:35:01.711 225859 DEBUG nova.compute.manager [None req-d25f88da-e047-4e24-89f2-d9e5dffd2790 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] [instance: 680a9e49-0486-46a0-8857-99a7a56c46e1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:35:01 compute-1 nova_compute[225855]: 2026-01-20 14:35:01.742 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 680a9e49-0486-46a0-8857-99a7a56c46e1] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 14:35:01 compute-1 nova_compute[225855]: 2026-01-20 14:35:01.791 225859 INFO nova.compute.manager [None req-d25f88da-e047-4e24-89f2-d9e5dffd2790 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] [instance: 680a9e49-0486-46a0-8857-99a7a56c46e1] Took 4.33 seconds to build instance.
Jan 20 14:35:01 compute-1 nova_compute[225855]: 2026-01-20 14:35:01.811 225859 DEBUG oslo_concurrency.lockutils [None req-d25f88da-e047-4e24-89f2-d9e5dffd2790 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] Lock "680a9e49-0486-46a0-8857-99a7a56c46e1" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 4.419s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:35:02 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/2503250766' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:35:02 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/3221613903' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:35:02 compute-1 ceph-mon[81775]: pgmap v1345: 321 pgs: 321 active+clean; 429 MiB data, 678 MiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 6.6 MiB/s wr, 322 op/s
Jan 20 14:35:02 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/3622645078' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:35:02 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/2378119403' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:35:02 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e178 e178: 3 total, 3 up, 3 in
Jan 20 14:35:02 compute-1 sudo[246155]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:35:02 compute-1 sudo[246155]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:35:02 compute-1 sudo[246155]: pam_unix(sudo:session): session closed for user root
Jan 20 14:35:02 compute-1 sudo[246180]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:35:02 compute-1 sudo[246180]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:35:02 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:35:02 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:35:02 compute-1 sudo[246180]: pam_unix(sudo:session): session closed for user root
Jan 20 14:35:02 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:35:02.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:35:03 compute-1 ceph-mon[81775]: osdmap e178: 3 total, 3 up, 3 in
Jan 20 14:35:03 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:35:03 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:35:03 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:35:03.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:35:03 compute-1 nova_compute[225855]: 2026-01-20 14:35:03.805 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:35:04 compute-1 ceph-mon[81775]: pgmap v1347: 321 pgs: 321 active+clean; 429 MiB data, 678 MiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 6.6 MiB/s wr, 177 op/s
Jan 20 14:35:04 compute-1 nova_compute[225855]: 2026-01-20 14:35:04.541 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:35:04 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:35:04 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:35:04 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:35:04.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:35:05 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e178 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:35:05 compute-1 ovn_controller[130490]: 2026-01-20T14:35:05Z|00018|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:17:6a:15 10.100.0.6
Jan 20 14:35:05 compute-1 ovn_controller[130490]: 2026-01-20T14:35:05Z|00019|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:17:6a:15 10.100.0.6
Jan 20 14:35:05 compute-1 nova_compute[225855]: 2026-01-20 14:35:05.230 225859 DEBUG nova.virt.libvirt.driver [None req-85a922c4-6375-4395-a9cf-d7257192f2c5 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] [instance: 2ec7b07d-b593-46b7-9751-b6116e4d2cec] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Jan 20 14:35:05 compute-1 nova_compute[225855]: 2026-01-20 14:35:05.334 225859 DEBUG nova.compute.manager [None req-20e68329-0731-4f9d-a2b5-67fd5d6960b8 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] [instance: 680a9e49-0486-46a0-8857-99a7a56c46e1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:35:05 compute-1 nova_compute[225855]: 2026-01-20 14:35:05.380 225859 INFO nova.compute.manager [None req-20e68329-0731-4f9d-a2b5-67fd5d6960b8 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] [instance: 680a9e49-0486-46a0-8857-99a7a56c46e1] instance snapshotting
Jan 20 14:35:05 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:35:05 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:35:05 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:35:05.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:35:05 compute-1 nova_compute[225855]: 2026-01-20 14:35:05.677 225859 INFO nova.virt.libvirt.driver [None req-20e68329-0731-4f9d-a2b5-67fd5d6960b8 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] [instance: 680a9e49-0486-46a0-8857-99a7a56c46e1] Beginning live snapshot process
Jan 20 14:35:05 compute-1 nova_compute[225855]: 2026-01-20 14:35:05.832 225859 DEBUG nova.virt.libvirt.imagebackend [None req-20e68329-0731-4f9d-a2b5-67fd5d6960b8 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] No parent info for a32b3e07-16d8-46fd-9a7b-c242c432fcf9; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Jan 20 14:35:06 compute-1 nova_compute[225855]: 2026-01-20 14:35:06.143 225859 DEBUG nova.storage.rbd_utils [None req-20e68329-0731-4f9d-a2b5-67fd5d6960b8 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] creating snapshot(e99d7a396ba9481c94fa6ca492217c0c) on rbd image(680a9e49-0486-46a0-8857-99a7a56c46e1_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 20 14:35:06 compute-1 ceph-mon[81775]: pgmap v1348: 321 pgs: 321 active+clean; 490 MiB data, 706 MiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 10 MiB/s wr, 327 op/s
Jan 20 14:35:06 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:35:06 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:35:06 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:35:06.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:35:06 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e179 e179: 3 total, 3 up, 3 in
Jan 20 14:35:06 compute-1 nova_compute[225855]: 2026-01-20 14:35:06.784 225859 DEBUG nova.storage.rbd_utils [None req-20e68329-0731-4f9d-a2b5-67fd5d6960b8 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] cloning vms/680a9e49-0486-46a0-8857-99a7a56c46e1_disk@e99d7a396ba9481c94fa6ca492217c0c to images/70955243-a059-4d15-b65b-03ec50f95c21 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Jan 20 14:35:06 compute-1 nova_compute[225855]: 2026-01-20 14:35:06.936 225859 DEBUG nova.storage.rbd_utils [None req-20e68329-0731-4f9d-a2b5-67fd5d6960b8 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] flattening images/70955243-a059-4d15-b65b-03ec50f95c21 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Jan 20 14:35:07 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:35:07.210 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=17, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '12:bb:42', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '06:92:24:f7:15:56'}, ipsec=False) old=SB_Global(nb_cfg=16) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:35:07 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:35:07.212 140354 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 20 14:35:07 compute-1 nova_compute[225855]: 2026-01-20 14:35:07.211 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:35:07 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:35:07 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:35:07 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:35:07.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:35:07 compute-1 nova_compute[225855]: 2026-01-20 14:35:07.504 225859 DEBUG nova.storage.rbd_utils [None req-20e68329-0731-4f9d-a2b5-67fd5d6960b8 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] removing snapshot(e99d7a396ba9481c94fa6ca492217c0c) on rbd image(680a9e49-0486-46a0-8857-99a7a56c46e1_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Jan 20 14:35:07 compute-1 kernel: tap73e232f9-38 (unregistering): left promiscuous mode
Jan 20 14:35:07 compute-1 NetworkManager[49104]: <info>  [1768919707.5139] device (tap73e232f9-38): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 20 14:35:07 compute-1 ovn_controller[130490]: 2026-01-20T14:35:07Z|00153|binding|INFO|Releasing lport 73e232f9-3860-4b9a-9cec-535fa2fb0c9f from this chassis (sb_readonly=0)
Jan 20 14:35:07 compute-1 ovn_controller[130490]: 2026-01-20T14:35:07Z|00154|binding|INFO|Setting lport 73e232f9-3860-4b9a-9cec-535fa2fb0c9f down in Southbound
Jan 20 14:35:07 compute-1 nova_compute[225855]: 2026-01-20 14:35:07.522 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:35:07 compute-1 ovn_controller[130490]: 2026-01-20T14:35:07Z|00155|binding|INFO|Removing iface tap73e232f9-38 ovn-installed in OVS
Jan 20 14:35:07 compute-1 nova_compute[225855]: 2026-01-20 14:35:07.524 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:35:07 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:35:07.528 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:17:6a:15 10.100.0.6'], port_security=['fa:16:3e:17:6a:15 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '2ec7b07d-b593-46b7-9751-b6116e4d2cec', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-33c9a20a-d976-42a8-b8bf-f83ddfc97c9a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a841e7a1434c488390475174e10bc161', 'neutron:revision_number': '6', 'neutron:security_group_ids': '0bbdea05-fba7-47c7-ba4e-5dac58212a25', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c43dea88-ea55-4069-a4be-2c30a432a754, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], logical_port=73e232f9-3860-4b9a-9cec-535fa2fb0c9f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:35:07 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:35:07.530 140354 INFO neutron.agent.ovn.metadata.agent [-] Port 73e232f9-3860-4b9a-9cec-535fa2fb0c9f in datapath 33c9a20a-d976-42a8-b8bf-f83ddfc97c9a unbound from our chassis
Jan 20 14:35:07 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:35:07.531 140354 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 33c9a20a-d976-42a8-b8bf-f83ddfc97c9a
Jan 20 14:35:07 compute-1 nova_compute[225855]: 2026-01-20 14:35:07.539 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:35:07 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:35:07.552 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[6089a0bd-8d78-4514-8e8b-1881575ad308]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:35:07 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:35:07.576 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[489c76ed-5397-4375-989f-386e2ec0913e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:35:07 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:35:07.579 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[c212f49a-6b9a-4048-8aa4-0bdfabdf8bee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:35:07 compute-1 systemd[1]: machine-qemu\x2d22\x2dinstance\x2d00000027.scope: Deactivated successfully.
Jan 20 14:35:07 compute-1 systemd[1]: machine-qemu\x2d22\x2dinstance\x2d00000027.scope: Consumed 14.305s CPU time.
Jan 20 14:35:07 compute-1 systemd-machined[194361]: Machine qemu-22-instance-00000027 terminated.
Jan 20 14:35:07 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:35:07.606 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[05367c25-a7dd-4856-9951-3e1e537181bc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:35:07 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:35:07.621 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[a6894c2f-536c-4299-8e08-4d649c201b35]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap33c9a20a-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:89:8e:bd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 11, 'rx_bytes': 700, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 11, 'rx_bytes': 700, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 37], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 466055, 'reachable_time': 29489, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 246342, 'error': None, 'target': 'ovnmeta-33c9a20a-d976-42a8-b8bf-f83ddfc97c9a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:35:07 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:35:07.636 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[f4f14451-1fff-44a3-9fcb-7d50162e8953]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap33c9a20a-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 466065, 'tstamp': 466065}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 246343, 'error': None, 'target': 'ovnmeta-33c9a20a-d976-42a8-b8bf-f83ddfc97c9a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap33c9a20a-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 466068, 'tstamp': 466068}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 246343, 'error': None, 'target': 'ovnmeta-33c9a20a-d976-42a8-b8bf-f83ddfc97c9a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:35:07 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:35:07.637 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap33c9a20a-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:35:07 compute-1 nova_compute[225855]: 2026-01-20 14:35:07.638 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:35:07 compute-1 nova_compute[225855]: 2026-01-20 14:35:07.642 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:35:07 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:35:07.642 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap33c9a20a-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:35:07 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:35:07.642 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:35:07 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:35:07.643 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap33c9a20a-d0, col_values=(('external_ids', {'iface-id': '90c69687-c788-4dba-881f-3ed4a5ee6007'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:35:07 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:35:07.643 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:35:07 compute-1 ceph-mon[81775]: osdmap e179: 3 total, 3 up, 3 in
Jan 20 14:35:07 compute-1 nova_compute[225855]: 2026-01-20 14:35:07.721 225859 DEBUG nova.compute.manager [req-5429dd98-3169-4734-8738-233da46ad54c req-a24801e7-8a32-4b74-b90f-7b1698d75453 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 2ec7b07d-b593-46b7-9751-b6116e4d2cec] Received event network-vif-unplugged-73e232f9-3860-4b9a-9cec-535fa2fb0c9f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:35:07 compute-1 nova_compute[225855]: 2026-01-20 14:35:07.721 225859 DEBUG oslo_concurrency.lockutils [req-5429dd98-3169-4734-8738-233da46ad54c req-a24801e7-8a32-4b74-b90f-7b1698d75453 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "2ec7b07d-b593-46b7-9751-b6116e4d2cec-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:35:07 compute-1 nova_compute[225855]: 2026-01-20 14:35:07.721 225859 DEBUG oslo_concurrency.lockutils [req-5429dd98-3169-4734-8738-233da46ad54c req-a24801e7-8a32-4b74-b90f-7b1698d75453 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "2ec7b07d-b593-46b7-9751-b6116e4d2cec-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:35:07 compute-1 nova_compute[225855]: 2026-01-20 14:35:07.721 225859 DEBUG oslo_concurrency.lockutils [req-5429dd98-3169-4734-8738-233da46ad54c req-a24801e7-8a32-4b74-b90f-7b1698d75453 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "2ec7b07d-b593-46b7-9751-b6116e4d2cec-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:35:07 compute-1 nova_compute[225855]: 2026-01-20 14:35:07.722 225859 DEBUG nova.compute.manager [req-5429dd98-3169-4734-8738-233da46ad54c req-a24801e7-8a32-4b74-b90f-7b1698d75453 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 2ec7b07d-b593-46b7-9751-b6116e4d2cec] No waiting events found dispatching network-vif-unplugged-73e232f9-3860-4b9a-9cec-535fa2fb0c9f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:35:07 compute-1 nova_compute[225855]: 2026-01-20 14:35:07.722 225859 WARNING nova.compute.manager [req-5429dd98-3169-4734-8738-233da46ad54c req-a24801e7-8a32-4b74-b90f-7b1698d75453 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 2ec7b07d-b593-46b7-9751-b6116e4d2cec] Received unexpected event network-vif-unplugged-73e232f9-3860-4b9a-9cec-535fa2fb0c9f for instance with vm_state active and task_state rebuilding.
Jan 20 14:35:07 compute-1 nova_compute[225855]: 2026-01-20 14:35:07.757 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:35:07 compute-1 nova_compute[225855]: 2026-01-20 14:35:07.765 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:35:08 compute-1 nova_compute[225855]: 2026-01-20 14:35:08.243 225859 INFO nova.virt.libvirt.driver [None req-85a922c4-6375-4395-a9cf-d7257192f2c5 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] [instance: 2ec7b07d-b593-46b7-9751-b6116e4d2cec] Instance shutdown successfully after 13 seconds.
Jan 20 14:35:08 compute-1 nova_compute[225855]: 2026-01-20 14:35:08.250 225859 INFO nova.virt.libvirt.driver [-] [instance: 2ec7b07d-b593-46b7-9751-b6116e4d2cec] Instance destroyed successfully.
Jan 20 14:35:08 compute-1 nova_compute[225855]: 2026-01-20 14:35:08.256 225859 INFO nova.virt.libvirt.driver [-] [instance: 2ec7b07d-b593-46b7-9751-b6116e4d2cec] Instance destroyed successfully.
Jan 20 14:35:08 compute-1 nova_compute[225855]: 2026-01-20 14:35:08.261 225859 DEBUG nova.virt.libvirt.vif [None req-85a922c4-6375-4395-a9cf-d7257192f2c5 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-20T14:33:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1907009380',display_name='tempest-ServersAdminTestJSON-server-1907009380',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1907009380',id=39,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-20T14:34:51Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='a841e7a1434c488390475174e10bc161',ramdisk_id='',reservation_id='r-8k5b63bj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-1261404595',owner_user_name='tempest-ServersAdminTestJSON-1261404595-project-mem
ber'},tags=<?>,task_state='rebuilding',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T14:34:54Z,user_data=None,user_id='f51c395107c84dbd9067113b84ff01dd',uuid=2ec7b07d-b593-46b7-9751-b6116e4d2cec,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "73e232f9-3860-4b9a-9cec-535fa2fb0c9f", "address": "fa:16:3e:17:6a:15", "network": {"id": "33c9a20a-d976-42a8-b8bf-f83ddfc97c9a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-202342440-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a841e7a1434c488390475174e10bc161", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap73e232f9-38", "ovs_interfaceid": "73e232f9-3860-4b9a-9cec-535fa2fb0c9f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 20 14:35:08 compute-1 nova_compute[225855]: 2026-01-20 14:35:08.262 225859 DEBUG nova.network.os_vif_util [None req-85a922c4-6375-4395-a9cf-d7257192f2c5 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Converting VIF {"id": "73e232f9-3860-4b9a-9cec-535fa2fb0c9f", "address": "fa:16:3e:17:6a:15", "network": {"id": "33c9a20a-d976-42a8-b8bf-f83ddfc97c9a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-202342440-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a841e7a1434c488390475174e10bc161", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap73e232f9-38", "ovs_interfaceid": "73e232f9-3860-4b9a-9cec-535fa2fb0c9f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:35:08 compute-1 nova_compute[225855]: 2026-01-20 14:35:08.264 225859 DEBUG nova.network.os_vif_util [None req-85a922c4-6375-4395-a9cf-d7257192f2c5 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:17:6a:15,bridge_name='br-int',has_traffic_filtering=True,id=73e232f9-3860-4b9a-9cec-535fa2fb0c9f,network=Network(33c9a20a-d976-42a8-b8bf-f83ddfc97c9a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap73e232f9-38') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:35:08 compute-1 nova_compute[225855]: 2026-01-20 14:35:08.268 225859 DEBUG os_vif [None req-85a922c4-6375-4395-a9cf-d7257192f2c5 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:17:6a:15,bridge_name='br-int',has_traffic_filtering=True,id=73e232f9-3860-4b9a-9cec-535fa2fb0c9f,network=Network(33c9a20a-d976-42a8-b8bf-f83ddfc97c9a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap73e232f9-38') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 20 14:35:08 compute-1 nova_compute[225855]: 2026-01-20 14:35:08.277 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:35:08 compute-1 nova_compute[225855]: 2026-01-20 14:35:08.279 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap73e232f9-38, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:35:08 compute-1 nova_compute[225855]: 2026-01-20 14:35:08.287 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:35:08 compute-1 nova_compute[225855]: 2026-01-20 14:35:08.290 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 14:35:08 compute-1 nova_compute[225855]: 2026-01-20 14:35:08.295 225859 INFO os_vif [None req-85a922c4-6375-4395-a9cf-d7257192f2c5 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:17:6a:15,bridge_name='br-int',has_traffic_filtering=True,id=73e232f9-3860-4b9a-9cec-535fa2fb0c9f,network=Network(33c9a20a-d976-42a8-b8bf-f83ddfc97c9a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap73e232f9-38')
Jan 20 14:35:08 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:35:08 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:35:08 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:35:08.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:35:08 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e180 e180: 3 total, 3 up, 3 in
Jan 20 14:35:08 compute-1 ceph-mon[81775]: pgmap v1350: 321 pgs: 321 active+clean; 497 MiB data, 715 MiB used, 20 GiB / 21 GiB avail; 6.6 MiB/s rd, 10 MiB/s wr, 466 op/s
Jan 20 14:35:09 compute-1 nova_compute[225855]: 2026-01-20 14:35:09.029 225859 DEBUG nova.storage.rbd_utils [None req-20e68329-0731-4f9d-a2b5-67fd5d6960b8 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] creating snapshot(snap) on rbd image(70955243-a059-4d15-b65b-03ec50f95c21) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 20 14:35:09 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:35:09.214 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5ffd4ac3-9266-4927-98ad-20a17782c725, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '17'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:35:09 compute-1 nova_compute[225855]: 2026-01-20 14:35:09.336 225859 INFO nova.virt.libvirt.driver [None req-85a922c4-6375-4395-a9cf-d7257192f2c5 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] [instance: 2ec7b07d-b593-46b7-9751-b6116e4d2cec] Deleting instance files /var/lib/nova/instances/2ec7b07d-b593-46b7-9751-b6116e4d2cec_del
Jan 20 14:35:09 compute-1 nova_compute[225855]: 2026-01-20 14:35:09.338 225859 INFO nova.virt.libvirt.driver [None req-85a922c4-6375-4395-a9cf-d7257192f2c5 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] [instance: 2ec7b07d-b593-46b7-9751-b6116e4d2cec] Deletion of /var/lib/nova/instances/2ec7b07d-b593-46b7-9751-b6116e4d2cec_del complete
Jan 20 14:35:09 compute-1 nova_compute[225855]: 2026-01-20 14:35:09.493 225859 DEBUG nova.virt.libvirt.driver [None req-85a922c4-6375-4395-a9cf-d7257192f2c5 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] [instance: 2ec7b07d-b593-46b7-9751-b6116e4d2cec] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 20 14:35:09 compute-1 nova_compute[225855]: 2026-01-20 14:35:09.493 225859 INFO nova.virt.libvirt.driver [None req-85a922c4-6375-4395-a9cf-d7257192f2c5 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] [instance: 2ec7b07d-b593-46b7-9751-b6116e4d2cec] Creating image(s)
Jan 20 14:35:09 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:35:09 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:35:09 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:35:09.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:35:09 compute-1 nova_compute[225855]: 2026-01-20 14:35:09.528 225859 DEBUG nova.storage.rbd_utils [None req-85a922c4-6375-4395-a9cf-d7257192f2c5 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] rbd image 2ec7b07d-b593-46b7-9751-b6116e4d2cec_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:35:09 compute-1 nova_compute[225855]: 2026-01-20 14:35:09.560 225859 DEBUG nova.storage.rbd_utils [None req-85a922c4-6375-4395-a9cf-d7257192f2c5 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] rbd image 2ec7b07d-b593-46b7-9751-b6116e4d2cec_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:35:09 compute-1 nova_compute[225855]: 2026-01-20 14:35:09.594 225859 DEBUG nova.storage.rbd_utils [None req-85a922c4-6375-4395-a9cf-d7257192f2c5 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] rbd image 2ec7b07d-b593-46b7-9751-b6116e4d2cec_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:35:09 compute-1 nova_compute[225855]: 2026-01-20 14:35:09.601 225859 DEBUG oslo_concurrency.processutils [None req-85a922c4-6375-4395-a9cf-d7257192f2c5 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:35:09 compute-1 nova_compute[225855]: 2026-01-20 14:35:09.631 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:35:09 compute-1 nova_compute[225855]: 2026-01-20 14:35:09.681 225859 DEBUG oslo_concurrency.processutils [None req-85a922c4-6375-4395-a9cf-d7257192f2c5 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:35:09 compute-1 nova_compute[225855]: 2026-01-20 14:35:09.682 225859 DEBUG oslo_concurrency.lockutils [None req-85a922c4-6375-4395-a9cf-d7257192f2c5 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Acquiring lock "82d5c1918fd7c974214c7a48c1793a7a82560462" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:35:09 compute-1 nova_compute[225855]: 2026-01-20 14:35:09.684 225859 DEBUG oslo_concurrency.lockutils [None req-85a922c4-6375-4395-a9cf-d7257192f2c5 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:35:09 compute-1 nova_compute[225855]: 2026-01-20 14:35:09.685 225859 DEBUG oslo_concurrency.lockutils [None req-85a922c4-6375-4395-a9cf-d7257192f2c5 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:35:09 compute-1 nova_compute[225855]: 2026-01-20 14:35:09.716 225859 DEBUG nova.storage.rbd_utils [None req-85a922c4-6375-4395-a9cf-d7257192f2c5 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] rbd image 2ec7b07d-b593-46b7-9751-b6116e4d2cec_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:35:09 compute-1 nova_compute[225855]: 2026-01-20 14:35:09.721 225859 DEBUG oslo_concurrency.processutils [None req-85a922c4-6375-4395-a9cf-d7257192f2c5 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 2ec7b07d-b593-46b7-9751-b6116e4d2cec_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:35:09 compute-1 ceph-mon[81775]: osdmap e180: 3 total, 3 up, 3 in
Jan 20 14:35:09 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e181 e181: 3 total, 3 up, 3 in
Jan 20 14:35:09 compute-1 nova_compute[225855]: 2026-01-20 14:35:09.859 225859 DEBUG nova.compute.manager [req-1bba3f56-9e7f-438d-81f2-50dd564fd169 req-0f2f8756-eecb-4c54-98ea-83e847d243d7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 2ec7b07d-b593-46b7-9751-b6116e4d2cec] Received event network-vif-plugged-73e232f9-3860-4b9a-9cec-535fa2fb0c9f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:35:09 compute-1 nova_compute[225855]: 2026-01-20 14:35:09.861 225859 DEBUG oslo_concurrency.lockutils [req-1bba3f56-9e7f-438d-81f2-50dd564fd169 req-0f2f8756-eecb-4c54-98ea-83e847d243d7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "2ec7b07d-b593-46b7-9751-b6116e4d2cec-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:35:09 compute-1 nova_compute[225855]: 2026-01-20 14:35:09.862 225859 DEBUG oslo_concurrency.lockutils [req-1bba3f56-9e7f-438d-81f2-50dd564fd169 req-0f2f8756-eecb-4c54-98ea-83e847d243d7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "2ec7b07d-b593-46b7-9751-b6116e4d2cec-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:35:09 compute-1 nova_compute[225855]: 2026-01-20 14:35:09.862 225859 DEBUG oslo_concurrency.lockutils [req-1bba3f56-9e7f-438d-81f2-50dd564fd169 req-0f2f8756-eecb-4c54-98ea-83e847d243d7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "2ec7b07d-b593-46b7-9751-b6116e4d2cec-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:35:09 compute-1 nova_compute[225855]: 2026-01-20 14:35:09.862 225859 DEBUG nova.compute.manager [req-1bba3f56-9e7f-438d-81f2-50dd564fd169 req-0f2f8756-eecb-4c54-98ea-83e847d243d7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 2ec7b07d-b593-46b7-9751-b6116e4d2cec] No waiting events found dispatching network-vif-plugged-73e232f9-3860-4b9a-9cec-535fa2fb0c9f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:35:09 compute-1 nova_compute[225855]: 2026-01-20 14:35:09.862 225859 WARNING nova.compute.manager [req-1bba3f56-9e7f-438d-81f2-50dd564fd169 req-0f2f8756-eecb-4c54-98ea-83e847d243d7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 2ec7b07d-b593-46b7-9751-b6116e4d2cec] Received unexpected event network-vif-plugged-73e232f9-3860-4b9a-9cec-535fa2fb0c9f for instance with vm_state active and task_state rebuild_spawning.
Jan 20 14:35:10 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e181 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:35:10 compute-1 nova_compute[225855]: 2026-01-20 14:35:10.049 225859 DEBUG oslo_concurrency.processutils [None req-85a922c4-6375-4395-a9cf-d7257192f2c5 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 2ec7b07d-b593-46b7-9751-b6116e4d2cec_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.328s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:35:10 compute-1 nova_compute[225855]: 2026-01-20 14:35:10.154 225859 DEBUG nova.storage.rbd_utils [None req-85a922c4-6375-4395-a9cf-d7257192f2c5 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] resizing rbd image 2ec7b07d-b593-46b7-9751-b6116e4d2cec_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 20 14:35:10 compute-1 nova_compute[225855]: 2026-01-20 14:35:10.278 225859 DEBUG nova.virt.libvirt.driver [None req-85a922c4-6375-4395-a9cf-d7257192f2c5 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] [instance: 2ec7b07d-b593-46b7-9751-b6116e4d2cec] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 20 14:35:10 compute-1 nova_compute[225855]: 2026-01-20 14:35:10.278 225859 DEBUG nova.virt.libvirt.driver [None req-85a922c4-6375-4395-a9cf-d7257192f2c5 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] [instance: 2ec7b07d-b593-46b7-9751-b6116e4d2cec] Ensure instance console log exists: /var/lib/nova/instances/2ec7b07d-b593-46b7-9751-b6116e4d2cec/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 20 14:35:10 compute-1 nova_compute[225855]: 2026-01-20 14:35:10.279 225859 DEBUG oslo_concurrency.lockutils [None req-85a922c4-6375-4395-a9cf-d7257192f2c5 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:35:10 compute-1 nova_compute[225855]: 2026-01-20 14:35:10.279 225859 DEBUG oslo_concurrency.lockutils [None req-85a922c4-6375-4395-a9cf-d7257192f2c5 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:35:10 compute-1 nova_compute[225855]: 2026-01-20 14:35:10.280 225859 DEBUG oslo_concurrency.lockutils [None req-85a922c4-6375-4395-a9cf-d7257192f2c5 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:35:10 compute-1 nova_compute[225855]: 2026-01-20 14:35:10.282 225859 DEBUG nova.virt.libvirt.driver [None req-85a922c4-6375-4395-a9cf-d7257192f2c5 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] [instance: 2ec7b07d-b593-46b7-9751-b6116e4d2cec] Start _get_guest_xml network_info=[{"id": "73e232f9-3860-4b9a-9cec-535fa2fb0c9f", "address": "fa:16:3e:17:6a:15", "network": {"id": "33c9a20a-d976-42a8-b8bf-f83ddfc97c9a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-202342440-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a841e7a1434c488390475174e10bc161", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap73e232f9-38", "ovs_interfaceid": "73e232f9-3860-4b9a-9cec-535fa2fb0c9f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'device_type': 'disk', 'encryption_options': None, 'size': 0, 'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'guest_format': None, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'image_id': 'a32b3e07-16d8-46fd-9a7b-c242c432fcf9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 20 14:35:10 compute-1 nova_compute[225855]: 2026-01-20 14:35:10.287 225859 WARNING nova.virt.libvirt.driver [None req-85a922c4-6375-4395-a9cf-d7257192f2c5 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError
Jan 20 14:35:10 compute-1 nova_compute[225855]: 2026-01-20 14:35:10.312 225859 DEBUG nova.virt.libvirt.host [None req-85a922c4-6375-4395-a9cf-d7257192f2c5 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 20 14:35:10 compute-1 nova_compute[225855]: 2026-01-20 14:35:10.312 225859 DEBUG nova.virt.libvirt.host [None req-85a922c4-6375-4395-a9cf-d7257192f2c5 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 20 14:35:10 compute-1 nova_compute[225855]: 2026-01-20 14:35:10.317 225859 DEBUG nova.virt.libvirt.host [None req-85a922c4-6375-4395-a9cf-d7257192f2c5 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 20 14:35:10 compute-1 nova_compute[225855]: 2026-01-20 14:35:10.318 225859 DEBUG nova.virt.libvirt.host [None req-85a922c4-6375-4395-a9cf-d7257192f2c5 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 20 14:35:10 compute-1 nova_compute[225855]: 2026-01-20 14:35:10.319 225859 DEBUG nova.virt.libvirt.driver [None req-85a922c4-6375-4395-a9cf-d7257192f2c5 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 20 14:35:10 compute-1 nova_compute[225855]: 2026-01-20 14:35:10.319 225859 DEBUG nova.virt.hardware [None req-85a922c4-6375-4395-a9cf-d7257192f2c5 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-20T14:21:55Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='522deaab-a741-4dbb-932d-d8b13a211c33',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 20 14:35:10 compute-1 nova_compute[225855]: 2026-01-20 14:35:10.320 225859 DEBUG nova.virt.hardware [None req-85a922c4-6375-4395-a9cf-d7257192f2c5 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 20 14:35:10 compute-1 nova_compute[225855]: 2026-01-20 14:35:10.320 225859 DEBUG nova.virt.hardware [None req-85a922c4-6375-4395-a9cf-d7257192f2c5 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 20 14:35:10 compute-1 nova_compute[225855]: 2026-01-20 14:35:10.320 225859 DEBUG nova.virt.hardware [None req-85a922c4-6375-4395-a9cf-d7257192f2c5 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 20 14:35:10 compute-1 nova_compute[225855]: 2026-01-20 14:35:10.321 225859 DEBUG nova.virt.hardware [None req-85a922c4-6375-4395-a9cf-d7257192f2c5 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 20 14:35:10 compute-1 nova_compute[225855]: 2026-01-20 14:35:10.321 225859 DEBUG nova.virt.hardware [None req-85a922c4-6375-4395-a9cf-d7257192f2c5 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 20 14:35:10 compute-1 nova_compute[225855]: 2026-01-20 14:35:10.321 225859 DEBUG nova.virt.hardware [None req-85a922c4-6375-4395-a9cf-d7257192f2c5 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 20 14:35:10 compute-1 nova_compute[225855]: 2026-01-20 14:35:10.321 225859 DEBUG nova.virt.hardware [None req-85a922c4-6375-4395-a9cf-d7257192f2c5 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 20 14:35:10 compute-1 nova_compute[225855]: 2026-01-20 14:35:10.322 225859 DEBUG nova.virt.hardware [None req-85a922c4-6375-4395-a9cf-d7257192f2c5 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 20 14:35:10 compute-1 nova_compute[225855]: 2026-01-20 14:35:10.322 225859 DEBUG nova.virt.hardware [None req-85a922c4-6375-4395-a9cf-d7257192f2c5 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 20 14:35:10 compute-1 nova_compute[225855]: 2026-01-20 14:35:10.322 225859 DEBUG nova.virt.hardware [None req-85a922c4-6375-4395-a9cf-d7257192f2c5 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 20 14:35:10 compute-1 nova_compute[225855]: 2026-01-20 14:35:10.323 225859 DEBUG nova.objects.instance [None req-85a922c4-6375-4395-a9cf-d7257192f2c5 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 2ec7b07d-b593-46b7-9751-b6116e4d2cec obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:35:10 compute-1 nova_compute[225855]: 2026-01-20 14:35:10.368 225859 DEBUG oslo_concurrency.processutils [None req-85a922c4-6375-4395-a9cf-d7257192f2c5 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:35:10 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:35:10 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:35:10 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:35:10.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:35:10 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:35:10 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1346870109' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:35:10 compute-1 ceph-mon[81775]: pgmap v1352: 321 pgs: 2 active+clean+snaptrim, 5 active+clean+snaptrim_wait, 314 active+clean; 507 MiB data, 716 MiB used, 20 GiB / 21 GiB avail; 10 MiB/s rd, 7.0 MiB/s wr, 609 op/s
Jan 20 14:35:10 compute-1 ceph-mon[81775]: osdmap e181: 3 total, 3 up, 3 in
Jan 20 14:35:10 compute-1 nova_compute[225855]: 2026-01-20 14:35:10.838 225859 DEBUG oslo_concurrency.processutils [None req-85a922c4-6375-4395-a9cf-d7257192f2c5 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:35:10 compute-1 nova_compute[225855]: 2026-01-20 14:35:10.867 225859 DEBUG nova.storage.rbd_utils [None req-85a922c4-6375-4395-a9cf-d7257192f2c5 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] rbd image 2ec7b07d-b593-46b7-9751-b6116e4d2cec_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:35:10 compute-1 nova_compute[225855]: 2026-01-20 14:35:10.872 225859 DEBUG oslo_concurrency.processutils [None req-85a922c4-6375-4395-a9cf-d7257192f2c5 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:35:11 compute-1 podman[246598]: 2026-01-20 14:35:11.072177958 +0000 UTC m=+0.120351578 container health_status 72be95c12e041eb2cd1a16ad70dfa1391d9f03a5f3180a614ead9459dcfcd46f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Jan 20 14:35:11 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:35:11 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2312094919' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:35:11 compute-1 nova_compute[225855]: 2026-01-20 14:35:11.328 225859 DEBUG oslo_concurrency.processutils [None req-85a922c4-6375-4395-a9cf-d7257192f2c5 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:35:11 compute-1 nova_compute[225855]: 2026-01-20 14:35:11.331 225859 DEBUG nova.virt.libvirt.vif [None req-85a922c4-6375-4395-a9cf-d7257192f2c5 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-20T14:33:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1907009380',display_name='tempest-ServersAdminTestJSON-server-1907009380',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1907009380',id=39,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-20T14:34:51Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='a841e7a1434c488390475174e10bc161',ramdisk_id='',reservation_id='r-8k5b63bj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='2',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-1261404595',owner_user_name='tempest-ServersAdminTestJSON-1261404595-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T14:35:09Z,user_data=None,user_id='f51c395107c84dbd9067113b84ff01dd',uuid=2ec7b07d-b593-46b7-9751-b6116e4d2cec,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "73e232f9-3860-4b9a-9cec-535fa2fb0c9f", "address": "fa:16:3e:17:6a:15", "network": {"id": "33c9a20a-d976-42a8-b8bf-f83ddfc97c9a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-202342440-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a841e7a1434c488390475174e10bc161", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap73e232f9-38", "ovs_interfaceid": "73e232f9-3860-4b9a-9cec-535fa2fb0c9f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 20 14:35:11 compute-1 nova_compute[225855]: 2026-01-20 14:35:11.331 225859 DEBUG nova.network.os_vif_util [None req-85a922c4-6375-4395-a9cf-d7257192f2c5 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Converting VIF {"id": "73e232f9-3860-4b9a-9cec-535fa2fb0c9f", "address": "fa:16:3e:17:6a:15", "network": {"id": "33c9a20a-d976-42a8-b8bf-f83ddfc97c9a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-202342440-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a841e7a1434c488390475174e10bc161", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap73e232f9-38", "ovs_interfaceid": "73e232f9-3860-4b9a-9cec-535fa2fb0c9f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:35:11 compute-1 nova_compute[225855]: 2026-01-20 14:35:11.332 225859 DEBUG nova.network.os_vif_util [None req-85a922c4-6375-4395-a9cf-d7257192f2c5 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:17:6a:15,bridge_name='br-int',has_traffic_filtering=True,id=73e232f9-3860-4b9a-9cec-535fa2fb0c9f,network=Network(33c9a20a-d976-42a8-b8bf-f83ddfc97c9a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap73e232f9-38') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:35:11 compute-1 nova_compute[225855]: 2026-01-20 14:35:11.336 225859 DEBUG nova.virt.libvirt.driver [None req-85a922c4-6375-4395-a9cf-d7257192f2c5 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] [instance: 2ec7b07d-b593-46b7-9751-b6116e4d2cec] End _get_guest_xml xml=<domain type="kvm">
Jan 20 14:35:11 compute-1 nova_compute[225855]:   <uuid>2ec7b07d-b593-46b7-9751-b6116e4d2cec</uuid>
Jan 20 14:35:11 compute-1 nova_compute[225855]:   <name>instance-00000027</name>
Jan 20 14:35:11 compute-1 nova_compute[225855]:   <memory>131072</memory>
Jan 20 14:35:11 compute-1 nova_compute[225855]:   <vcpu>1</vcpu>
Jan 20 14:35:11 compute-1 nova_compute[225855]:   <metadata>
Jan 20 14:35:11 compute-1 nova_compute[225855]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 14:35:11 compute-1 nova_compute[225855]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 14:35:11 compute-1 nova_compute[225855]:       <nova:name>tempest-ServersAdminTestJSON-server-1907009380</nova:name>
Jan 20 14:35:11 compute-1 nova_compute[225855]:       <nova:creationTime>2026-01-20 14:35:10</nova:creationTime>
Jan 20 14:35:11 compute-1 nova_compute[225855]:       <nova:flavor name="m1.nano">
Jan 20 14:35:11 compute-1 nova_compute[225855]:         <nova:memory>128</nova:memory>
Jan 20 14:35:11 compute-1 nova_compute[225855]:         <nova:disk>1</nova:disk>
Jan 20 14:35:11 compute-1 nova_compute[225855]:         <nova:swap>0</nova:swap>
Jan 20 14:35:11 compute-1 nova_compute[225855]:         <nova:ephemeral>0</nova:ephemeral>
Jan 20 14:35:11 compute-1 nova_compute[225855]:         <nova:vcpus>1</nova:vcpus>
Jan 20 14:35:11 compute-1 nova_compute[225855]:       </nova:flavor>
Jan 20 14:35:11 compute-1 nova_compute[225855]:       <nova:owner>
Jan 20 14:35:11 compute-1 nova_compute[225855]:         <nova:user uuid="f51c395107c84dbd9067113b84ff01dd">tempest-ServersAdminTestJSON-1261404595-project-member</nova:user>
Jan 20 14:35:11 compute-1 nova_compute[225855]:         <nova:project uuid="a841e7a1434c488390475174e10bc161">tempest-ServersAdminTestJSON-1261404595</nova:project>
Jan 20 14:35:11 compute-1 nova_compute[225855]:       </nova:owner>
Jan 20 14:35:11 compute-1 nova_compute[225855]:       <nova:root type="image" uuid="a32b3e07-16d8-46fd-9a7b-c242c432fcf9"/>
Jan 20 14:35:11 compute-1 nova_compute[225855]:       <nova:ports>
Jan 20 14:35:11 compute-1 nova_compute[225855]:         <nova:port uuid="73e232f9-3860-4b9a-9cec-535fa2fb0c9f">
Jan 20 14:35:11 compute-1 nova_compute[225855]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Jan 20 14:35:11 compute-1 nova_compute[225855]:         </nova:port>
Jan 20 14:35:11 compute-1 nova_compute[225855]:       </nova:ports>
Jan 20 14:35:11 compute-1 nova_compute[225855]:     </nova:instance>
Jan 20 14:35:11 compute-1 nova_compute[225855]:   </metadata>
Jan 20 14:35:11 compute-1 nova_compute[225855]:   <sysinfo type="smbios">
Jan 20 14:35:11 compute-1 nova_compute[225855]:     <system>
Jan 20 14:35:11 compute-1 nova_compute[225855]:       <entry name="manufacturer">RDO</entry>
Jan 20 14:35:11 compute-1 nova_compute[225855]:       <entry name="product">OpenStack Compute</entry>
Jan 20 14:35:11 compute-1 nova_compute[225855]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 14:35:11 compute-1 nova_compute[225855]:       <entry name="serial">2ec7b07d-b593-46b7-9751-b6116e4d2cec</entry>
Jan 20 14:35:11 compute-1 nova_compute[225855]:       <entry name="uuid">2ec7b07d-b593-46b7-9751-b6116e4d2cec</entry>
Jan 20 14:35:11 compute-1 nova_compute[225855]:       <entry name="family">Virtual Machine</entry>
Jan 20 14:35:11 compute-1 nova_compute[225855]:     </system>
Jan 20 14:35:11 compute-1 nova_compute[225855]:   </sysinfo>
Jan 20 14:35:11 compute-1 nova_compute[225855]:   <os>
Jan 20 14:35:11 compute-1 nova_compute[225855]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 20 14:35:11 compute-1 nova_compute[225855]:     <boot dev="hd"/>
Jan 20 14:35:11 compute-1 nova_compute[225855]:     <smbios mode="sysinfo"/>
Jan 20 14:35:11 compute-1 nova_compute[225855]:   </os>
Jan 20 14:35:11 compute-1 nova_compute[225855]:   <features>
Jan 20 14:35:11 compute-1 nova_compute[225855]:     <acpi/>
Jan 20 14:35:11 compute-1 nova_compute[225855]:     <apic/>
Jan 20 14:35:11 compute-1 nova_compute[225855]:     <vmcoreinfo/>
Jan 20 14:35:11 compute-1 nova_compute[225855]:   </features>
Jan 20 14:35:11 compute-1 nova_compute[225855]:   <clock offset="utc">
Jan 20 14:35:11 compute-1 nova_compute[225855]:     <timer name="pit" tickpolicy="delay"/>
Jan 20 14:35:11 compute-1 nova_compute[225855]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 20 14:35:11 compute-1 nova_compute[225855]:     <timer name="hpet" present="no"/>
Jan 20 14:35:11 compute-1 nova_compute[225855]:   </clock>
Jan 20 14:35:11 compute-1 nova_compute[225855]:   <cpu mode="custom" match="exact">
Jan 20 14:35:11 compute-1 nova_compute[225855]:     <model>Nehalem</model>
Jan 20 14:35:11 compute-1 nova_compute[225855]:     <topology sockets="1" cores="1" threads="1"/>
Jan 20 14:35:11 compute-1 nova_compute[225855]:   </cpu>
Jan 20 14:35:11 compute-1 nova_compute[225855]:   <devices>
Jan 20 14:35:11 compute-1 nova_compute[225855]:     <disk type="network" device="disk">
Jan 20 14:35:11 compute-1 nova_compute[225855]:       <driver type="raw" cache="none"/>
Jan 20 14:35:11 compute-1 nova_compute[225855]:       <source protocol="rbd" name="vms/2ec7b07d-b593-46b7-9751-b6116e4d2cec_disk">
Jan 20 14:35:11 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:35:11 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:35:11 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:35:11 compute-1 nova_compute[225855]:       </source>
Jan 20 14:35:11 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 14:35:11 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:35:11 compute-1 nova_compute[225855]:       </auth>
Jan 20 14:35:11 compute-1 nova_compute[225855]:       <target dev="vda" bus="virtio"/>
Jan 20 14:35:11 compute-1 nova_compute[225855]:     </disk>
Jan 20 14:35:11 compute-1 nova_compute[225855]:     <disk type="network" device="cdrom">
Jan 20 14:35:11 compute-1 nova_compute[225855]:       <driver type="raw" cache="none"/>
Jan 20 14:35:11 compute-1 nova_compute[225855]:       <source protocol="rbd" name="vms/2ec7b07d-b593-46b7-9751-b6116e4d2cec_disk.config">
Jan 20 14:35:11 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:35:11 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:35:11 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:35:11 compute-1 nova_compute[225855]:       </source>
Jan 20 14:35:11 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 14:35:11 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:35:11 compute-1 nova_compute[225855]:       </auth>
Jan 20 14:35:11 compute-1 nova_compute[225855]:       <target dev="sda" bus="sata"/>
Jan 20 14:35:11 compute-1 nova_compute[225855]:     </disk>
Jan 20 14:35:11 compute-1 nova_compute[225855]:     <interface type="ethernet">
Jan 20 14:35:11 compute-1 nova_compute[225855]:       <mac address="fa:16:3e:17:6a:15"/>
Jan 20 14:35:11 compute-1 nova_compute[225855]:       <model type="virtio"/>
Jan 20 14:35:11 compute-1 nova_compute[225855]:       <driver name="vhost" rx_queue_size="512"/>
Jan 20 14:35:11 compute-1 nova_compute[225855]:       <mtu size="1442"/>
Jan 20 14:35:11 compute-1 nova_compute[225855]:       <target dev="tap73e232f9-38"/>
Jan 20 14:35:11 compute-1 nova_compute[225855]:     </interface>
Jan 20 14:35:11 compute-1 nova_compute[225855]:     <serial type="pty">
Jan 20 14:35:11 compute-1 nova_compute[225855]:       <log file="/var/lib/nova/instances/2ec7b07d-b593-46b7-9751-b6116e4d2cec/console.log" append="off"/>
Jan 20 14:35:11 compute-1 nova_compute[225855]:     </serial>
Jan 20 14:35:11 compute-1 nova_compute[225855]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 20 14:35:11 compute-1 nova_compute[225855]:     <video>
Jan 20 14:35:11 compute-1 nova_compute[225855]:       <model type="virtio"/>
Jan 20 14:35:11 compute-1 nova_compute[225855]:     </video>
Jan 20 14:35:11 compute-1 nova_compute[225855]:     <input type="tablet" bus="usb"/>
Jan 20 14:35:11 compute-1 nova_compute[225855]:     <rng model="virtio">
Jan 20 14:35:11 compute-1 nova_compute[225855]:       <backend model="random">/dev/urandom</backend>
Jan 20 14:35:11 compute-1 nova_compute[225855]:     </rng>
Jan 20 14:35:11 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root"/>
Jan 20 14:35:11 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:35:11 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:35:11 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:35:11 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:35:11 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:35:11 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:35:11 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:35:11 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:35:11 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:35:11 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:35:11 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:35:11 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:35:11 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:35:11 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:35:11 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:35:11 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:35:11 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:35:11 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:35:11 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:35:11 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:35:11 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:35:11 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:35:11 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:35:11 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:35:11 compute-1 nova_compute[225855]:     <controller type="usb" index="0"/>
Jan 20 14:35:11 compute-1 nova_compute[225855]:     <memballoon model="virtio">
Jan 20 14:35:11 compute-1 nova_compute[225855]:       <stats period="10"/>
Jan 20 14:35:11 compute-1 nova_compute[225855]:     </memballoon>
Jan 20 14:35:11 compute-1 nova_compute[225855]:   </devices>
Jan 20 14:35:11 compute-1 nova_compute[225855]: </domain>
Jan 20 14:35:11 compute-1 nova_compute[225855]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 20 14:35:11 compute-1 nova_compute[225855]: 2026-01-20 14:35:11.344 225859 DEBUG nova.virt.libvirt.vif [None req-85a922c4-6375-4395-a9cf-d7257192f2c5 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-20T14:33:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1907009380',display_name='tempest-ServersAdminTestJSON-server-1907009380',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1907009380',id=39,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-20T14:34:51Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='a841e7a1434c488390475174e10bc161',ramdisk_id='',reservation_id='r-8k5b63bj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='2',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-1261404595',owner_user_name='tempest-ServersAdminTestJSON-1261404595-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T14:35:09Z,user_data=None,user_id='f51c395107c84dbd9067113b84ff01dd',uuid=2ec7b07d-b593-46b7-9751-b6116e4d2cec,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "73e232f9-3860-4b9a-9cec-535fa2fb0c9f", "address": "fa:16:3e:17:6a:15", "network": {"id": "33c9a20a-d976-42a8-b8bf-f83ddfc97c9a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-202342440-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a841e7a1434c488390475174e10bc161", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap73e232f9-38", "ovs_interfaceid": "73e232f9-3860-4b9a-9cec-535fa2fb0c9f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 20 14:35:11 compute-1 nova_compute[225855]: 2026-01-20 14:35:11.344 225859 DEBUG nova.network.os_vif_util [None req-85a922c4-6375-4395-a9cf-d7257192f2c5 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Converting VIF {"id": "73e232f9-3860-4b9a-9cec-535fa2fb0c9f", "address": "fa:16:3e:17:6a:15", "network": {"id": "33c9a20a-d976-42a8-b8bf-f83ddfc97c9a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-202342440-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a841e7a1434c488390475174e10bc161", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap73e232f9-38", "ovs_interfaceid": "73e232f9-3860-4b9a-9cec-535fa2fb0c9f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:35:11 compute-1 nova_compute[225855]: 2026-01-20 14:35:11.345 225859 DEBUG nova.network.os_vif_util [None req-85a922c4-6375-4395-a9cf-d7257192f2c5 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:17:6a:15,bridge_name='br-int',has_traffic_filtering=True,id=73e232f9-3860-4b9a-9cec-535fa2fb0c9f,network=Network(33c9a20a-d976-42a8-b8bf-f83ddfc97c9a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap73e232f9-38') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:35:11 compute-1 nova_compute[225855]: 2026-01-20 14:35:11.346 225859 DEBUG os_vif [None req-85a922c4-6375-4395-a9cf-d7257192f2c5 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:17:6a:15,bridge_name='br-int',has_traffic_filtering=True,id=73e232f9-3860-4b9a-9cec-535fa2fb0c9f,network=Network(33c9a20a-d976-42a8-b8bf-f83ddfc97c9a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap73e232f9-38') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 20 14:35:11 compute-1 nova_compute[225855]: 2026-01-20 14:35:11.347 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:35:11 compute-1 nova_compute[225855]: 2026-01-20 14:35:11.348 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:35:11 compute-1 nova_compute[225855]: 2026-01-20 14:35:11.348 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:35:11 compute-1 nova_compute[225855]: 2026-01-20 14:35:11.351 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:35:11 compute-1 nova_compute[225855]: 2026-01-20 14:35:11.351 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap73e232f9-38, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:35:11 compute-1 nova_compute[225855]: 2026-01-20 14:35:11.352 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap73e232f9-38, col_values=(('external_ids', {'iface-id': '73e232f9-3860-4b9a-9cec-535fa2fb0c9f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:17:6a:15', 'vm-uuid': '2ec7b07d-b593-46b7-9751-b6116e4d2cec'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:35:11 compute-1 nova_compute[225855]: 2026-01-20 14:35:11.353 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:35:11 compute-1 NetworkManager[49104]: <info>  [1768919711.3543] manager: (tap73e232f9-38): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/70)
Jan 20 14:35:11 compute-1 nova_compute[225855]: 2026-01-20 14:35:11.358 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 14:35:11 compute-1 nova_compute[225855]: 2026-01-20 14:35:11.359 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:35:11 compute-1 nova_compute[225855]: 2026-01-20 14:35:11.360 225859 INFO os_vif [None req-85a922c4-6375-4395-a9cf-d7257192f2c5 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:17:6a:15,bridge_name='br-int',has_traffic_filtering=True,id=73e232f9-3860-4b9a-9cec-535fa2fb0c9f,network=Network(33c9a20a-d976-42a8-b8bf-f83ddfc97c9a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap73e232f9-38')
Jan 20 14:35:11 compute-1 nova_compute[225855]: 2026-01-20 14:35:11.364 225859 INFO nova.virt.libvirt.driver [None req-20e68329-0731-4f9d-a2b5-67fd5d6960b8 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] [instance: 680a9e49-0486-46a0-8857-99a7a56c46e1] Snapshot image upload complete
Jan 20 14:35:11 compute-1 nova_compute[225855]: 2026-01-20 14:35:11.364 225859 INFO nova.compute.manager [None req-20e68329-0731-4f9d-a2b5-67fd5d6960b8 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] [instance: 680a9e49-0486-46a0-8857-99a7a56c46e1] Took 5.98 seconds to snapshot the instance on the hypervisor.
Jan 20 14:35:11 compute-1 nova_compute[225855]: 2026-01-20 14:35:11.432 225859 DEBUG nova.virt.libvirt.driver [None req-85a922c4-6375-4395-a9cf-d7257192f2c5 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:35:11 compute-1 nova_compute[225855]: 2026-01-20 14:35:11.433 225859 DEBUG nova.virt.libvirt.driver [None req-85a922c4-6375-4395-a9cf-d7257192f2c5 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:35:11 compute-1 nova_compute[225855]: 2026-01-20 14:35:11.434 225859 DEBUG nova.virt.libvirt.driver [None req-85a922c4-6375-4395-a9cf-d7257192f2c5 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] No VIF found with MAC fa:16:3e:17:6a:15, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 20 14:35:11 compute-1 nova_compute[225855]: 2026-01-20 14:35:11.434 225859 INFO nova.virt.libvirt.driver [None req-85a922c4-6375-4395-a9cf-d7257192f2c5 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] [instance: 2ec7b07d-b593-46b7-9751-b6116e4d2cec] Using config drive
Jan 20 14:35:11 compute-1 nova_compute[225855]: 2026-01-20 14:35:11.461 225859 DEBUG nova.storage.rbd_utils [None req-85a922c4-6375-4395-a9cf-d7257192f2c5 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] rbd image 2ec7b07d-b593-46b7-9751-b6116e4d2cec_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:35:11 compute-1 nova_compute[225855]: 2026-01-20 14:35:11.477 225859 DEBUG nova.objects.instance [None req-85a922c4-6375-4395-a9cf-d7257192f2c5 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 2ec7b07d-b593-46b7-9751-b6116e4d2cec obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:35:11 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:35:11 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:35:11 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:35:11.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:35:11 compute-1 nova_compute[225855]: 2026-01-20 14:35:11.508 225859 DEBUG nova.objects.instance [None req-85a922c4-6375-4395-a9cf-d7257192f2c5 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Lazy-loading 'keypairs' on Instance uuid 2ec7b07d-b593-46b7-9751-b6116e4d2cec obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:35:11 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/1346870109' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:35:11 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/2312094919' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:35:12 compute-1 nova_compute[225855]: 2026-01-20 14:35:12.092 225859 INFO nova.virt.libvirt.driver [None req-85a922c4-6375-4395-a9cf-d7257192f2c5 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] [instance: 2ec7b07d-b593-46b7-9751-b6116e4d2cec] Creating config drive at /var/lib/nova/instances/2ec7b07d-b593-46b7-9751-b6116e4d2cec/disk.config
Jan 20 14:35:12 compute-1 nova_compute[225855]: 2026-01-20 14:35:12.098 225859 DEBUG oslo_concurrency.processutils [None req-85a922c4-6375-4395-a9cf-d7257192f2c5 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/2ec7b07d-b593-46b7-9751-b6116e4d2cec/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2dbsfxiu execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:35:12 compute-1 nova_compute[225855]: 2026-01-20 14:35:12.246 225859 DEBUG oslo_concurrency.processutils [None req-85a922c4-6375-4395-a9cf-d7257192f2c5 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/2ec7b07d-b593-46b7-9751-b6116e4d2cec/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2dbsfxiu" returned: 0 in 0.148s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:35:12 compute-1 nova_compute[225855]: 2026-01-20 14:35:12.287 225859 DEBUG nova.storage.rbd_utils [None req-85a922c4-6375-4395-a9cf-d7257192f2c5 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] rbd image 2ec7b07d-b593-46b7-9751-b6116e4d2cec_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:35:12 compute-1 nova_compute[225855]: 2026-01-20 14:35:12.292 225859 DEBUG oslo_concurrency.processutils [None req-85a922c4-6375-4395-a9cf-d7257192f2c5 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/2ec7b07d-b593-46b7-9751-b6116e4d2cec/disk.config 2ec7b07d-b593-46b7-9751-b6116e4d2cec_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:35:12 compute-1 nova_compute[225855]: 2026-01-20 14:35:12.487 225859 DEBUG oslo_concurrency.processutils [None req-85a922c4-6375-4395-a9cf-d7257192f2c5 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/2ec7b07d-b593-46b7-9751-b6116e4d2cec/disk.config 2ec7b07d-b593-46b7-9751-b6116e4d2cec_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.195s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:35:12 compute-1 nova_compute[225855]: 2026-01-20 14:35:12.488 225859 INFO nova.virt.libvirt.driver [None req-85a922c4-6375-4395-a9cf-d7257192f2c5 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] [instance: 2ec7b07d-b593-46b7-9751-b6116e4d2cec] Deleting local config drive /var/lib/nova/instances/2ec7b07d-b593-46b7-9751-b6116e4d2cec/disk.config because it was imported into RBD.
Jan 20 14:35:12 compute-1 kernel: tap73e232f9-38: entered promiscuous mode
Jan 20 14:35:12 compute-1 NetworkManager[49104]: <info>  [1768919712.5564] manager: (tap73e232f9-38): new Tun device (/org/freedesktop/NetworkManager/Devices/71)
Jan 20 14:35:12 compute-1 ovn_controller[130490]: 2026-01-20T14:35:12Z|00156|binding|INFO|Claiming lport 73e232f9-3860-4b9a-9cec-535fa2fb0c9f for this chassis.
Jan 20 14:35:12 compute-1 ovn_controller[130490]: 2026-01-20T14:35:12Z|00157|binding|INFO|73e232f9-3860-4b9a-9cec-535fa2fb0c9f: Claiming fa:16:3e:17:6a:15 10.100.0.6
Jan 20 14:35:12 compute-1 nova_compute[225855]: 2026-01-20 14:35:12.559 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:35:12 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:35:12.567 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:17:6a:15 10.100.0.6'], port_security=['fa:16:3e:17:6a:15 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '2ec7b07d-b593-46b7-9751-b6116e4d2cec', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-33c9a20a-d976-42a8-b8bf-f83ddfc97c9a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a841e7a1434c488390475174e10bc161', 'neutron:revision_number': '7', 'neutron:security_group_ids': '0bbdea05-fba7-47c7-ba4e-5dac58212a25', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c43dea88-ea55-4069-a4be-2c30a432a754, chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], logical_port=73e232f9-3860-4b9a-9cec-535fa2fb0c9f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:35:12 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:35:12.570 140354 INFO neutron.agent.ovn.metadata.agent [-] Port 73e232f9-3860-4b9a-9cec-535fa2fb0c9f in datapath 33c9a20a-d976-42a8-b8bf-f83ddfc97c9a bound to our chassis
Jan 20 14:35:12 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:35:12.573 140354 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 33c9a20a-d976-42a8-b8bf-f83ddfc97c9a
Jan 20 14:35:12 compute-1 ovn_controller[130490]: 2026-01-20T14:35:12Z|00158|binding|INFO|Setting lport 73e232f9-3860-4b9a-9cec-535fa2fb0c9f ovn-installed in OVS
Jan 20 14:35:12 compute-1 ovn_controller[130490]: 2026-01-20T14:35:12Z|00159|binding|INFO|Setting lport 73e232f9-3860-4b9a-9cec-535fa2fb0c9f up in Southbound
Jan 20 14:35:12 compute-1 nova_compute[225855]: 2026-01-20 14:35:12.601 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:35:12 compute-1 nova_compute[225855]: 2026-01-20 14:35:12.603 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:35:12 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:35:12.605 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[3c6fe1a0-4d8f-4ab2-8695-0046580916ed]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:35:12 compute-1 systemd-machined[194361]: New machine qemu-24-instance-00000027.
Jan 20 14:35:12 compute-1 systemd-udevd[246722]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 14:35:12 compute-1 systemd[1]: Started Virtual Machine qemu-24-instance-00000027.
Jan 20 14:35:12 compute-1 NetworkManager[49104]: <info>  [1768919712.6449] device (tap73e232f9-38): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 20 14:35:12 compute-1 NetworkManager[49104]: <info>  [1768919712.6455] device (tap73e232f9-38): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 20 14:35:12 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:35:12.657 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[f300e6ac-5de0-4ce6-9927-68bba72a358d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:35:12 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:35:12.661 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[b4fb90b7-e1c8-4742-91cf-9eccddfbaa54]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:35:12 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:35:12 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:35:12 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:35:12.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:35:12 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:35:12.699 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[1043de85-5640-4753-abf8-5421be5a8c07]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:35:12 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:35:12.763 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[5a779436-d0bd-4367-876a-32422ec9d81a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap33c9a20a-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:89:8e:bd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 13, 'rx_bytes': 700, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 13, 'rx_bytes': 700, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 37], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 466055, 'reachable_time': 29489, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 246733, 'error': None, 'target': 'ovnmeta-33c9a20a-d976-42a8-b8bf-f83ddfc97c9a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:35:12 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:35:12.782 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[97ddac91-a36e-4dca-ba25-f70c3524dd11]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap33c9a20a-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 466065, 'tstamp': 466065}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 246735, 'error': None, 'target': 'ovnmeta-33c9a20a-d976-42a8-b8bf-f83ddfc97c9a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap33c9a20a-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 466068, 'tstamp': 466068}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 246735, 'error': None, 'target': 'ovnmeta-33c9a20a-d976-42a8-b8bf-f83ddfc97c9a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:35:12 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:35:12.784 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap33c9a20a-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:35:12 compute-1 nova_compute[225855]: 2026-01-20 14:35:12.787 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:35:12 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:35:12.789 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap33c9a20a-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:35:12 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:35:12.789 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:35:12 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:35:12.790 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap33c9a20a-d0, col_values=(('external_ids', {'iface-id': '90c69687-c788-4dba-881f-3ed4a5ee6007'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:35:12 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:35:12.790 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:35:12 compute-1 ceph-mon[81775]: pgmap v1354: 321 pgs: 2 active+clean+snaptrim, 5 active+clean+snaptrim_wait, 314 active+clean; 473 MiB data, 703 MiB used, 20 GiB / 21 GiB avail; 12 MiB/s rd, 5.9 MiB/s wr, 533 op/s
Jan 20 14:35:13 compute-1 nova_compute[225855]: 2026-01-20 14:35:13.074 225859 DEBUG nova.compute.manager [req-2ae57ba0-92b4-4474-ba32-94190499f283 req-fb00b422-9f35-4267-9b8a-b530fe2e4597 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 2ec7b07d-b593-46b7-9751-b6116e4d2cec] Received event network-vif-plugged-73e232f9-3860-4b9a-9cec-535fa2fb0c9f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:35:13 compute-1 nova_compute[225855]: 2026-01-20 14:35:13.075 225859 DEBUG oslo_concurrency.lockutils [req-2ae57ba0-92b4-4474-ba32-94190499f283 req-fb00b422-9f35-4267-9b8a-b530fe2e4597 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "2ec7b07d-b593-46b7-9751-b6116e4d2cec-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:35:13 compute-1 nova_compute[225855]: 2026-01-20 14:35:13.075 225859 DEBUG oslo_concurrency.lockutils [req-2ae57ba0-92b4-4474-ba32-94190499f283 req-fb00b422-9f35-4267-9b8a-b530fe2e4597 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "2ec7b07d-b593-46b7-9751-b6116e4d2cec-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:35:13 compute-1 nova_compute[225855]: 2026-01-20 14:35:13.076 225859 DEBUG oslo_concurrency.lockutils [req-2ae57ba0-92b4-4474-ba32-94190499f283 req-fb00b422-9f35-4267-9b8a-b530fe2e4597 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "2ec7b07d-b593-46b7-9751-b6116e4d2cec-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:35:13 compute-1 nova_compute[225855]: 2026-01-20 14:35:13.076 225859 DEBUG nova.compute.manager [req-2ae57ba0-92b4-4474-ba32-94190499f283 req-fb00b422-9f35-4267-9b8a-b530fe2e4597 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 2ec7b07d-b593-46b7-9751-b6116e4d2cec] No waiting events found dispatching network-vif-plugged-73e232f9-3860-4b9a-9cec-535fa2fb0c9f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:35:13 compute-1 nova_compute[225855]: 2026-01-20 14:35:13.077 225859 WARNING nova.compute.manager [req-2ae57ba0-92b4-4474-ba32-94190499f283 req-fb00b422-9f35-4267-9b8a-b530fe2e4597 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 2ec7b07d-b593-46b7-9751-b6116e4d2cec] Received unexpected event network-vif-plugged-73e232f9-3860-4b9a-9cec-535fa2fb0c9f for instance with vm_state active and task_state rebuild_spawning.
Jan 20 14:35:13 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:35:13 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:35:13 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:35:13.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:35:13 compute-1 nova_compute[225855]: 2026-01-20 14:35:13.607 225859 DEBUG nova.virt.libvirt.host [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Removed pending event for 2ec7b07d-b593-46b7-9751-b6116e4d2cec due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Jan 20 14:35:13 compute-1 nova_compute[225855]: 2026-01-20 14:35:13.607 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768919713.6066756, 2ec7b07d-b593-46b7-9751-b6116e4d2cec => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:35:13 compute-1 nova_compute[225855]: 2026-01-20 14:35:13.608 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 2ec7b07d-b593-46b7-9751-b6116e4d2cec] VM Resumed (Lifecycle Event)
Jan 20 14:35:13 compute-1 nova_compute[225855]: 2026-01-20 14:35:13.610 225859 DEBUG nova.compute.manager [None req-85a922c4-6375-4395-a9cf-d7257192f2c5 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] [instance: 2ec7b07d-b593-46b7-9751-b6116e4d2cec] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 20 14:35:13 compute-1 nova_compute[225855]: 2026-01-20 14:35:13.610 225859 DEBUG nova.virt.libvirt.driver [None req-85a922c4-6375-4395-a9cf-d7257192f2c5 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] [instance: 2ec7b07d-b593-46b7-9751-b6116e4d2cec] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 20 14:35:13 compute-1 nova_compute[225855]: 2026-01-20 14:35:13.614 225859 INFO nova.virt.libvirt.driver [-] [instance: 2ec7b07d-b593-46b7-9751-b6116e4d2cec] Instance spawned successfully.
Jan 20 14:35:13 compute-1 nova_compute[225855]: 2026-01-20 14:35:13.614 225859 DEBUG nova.virt.libvirt.driver [None req-85a922c4-6375-4395-a9cf-d7257192f2c5 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] [instance: 2ec7b07d-b593-46b7-9751-b6116e4d2cec] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 20 14:35:13 compute-1 nova_compute[225855]: 2026-01-20 14:35:13.631 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 2ec7b07d-b593-46b7-9751-b6116e4d2cec] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:35:13 compute-1 nova_compute[225855]: 2026-01-20 14:35:13.639 225859 DEBUG nova.virt.libvirt.driver [None req-85a922c4-6375-4395-a9cf-d7257192f2c5 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] [instance: 2ec7b07d-b593-46b7-9751-b6116e4d2cec] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:35:13 compute-1 nova_compute[225855]: 2026-01-20 14:35:13.640 225859 DEBUG nova.virt.libvirt.driver [None req-85a922c4-6375-4395-a9cf-d7257192f2c5 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] [instance: 2ec7b07d-b593-46b7-9751-b6116e4d2cec] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:35:13 compute-1 nova_compute[225855]: 2026-01-20 14:35:13.641 225859 DEBUG nova.virt.libvirt.driver [None req-85a922c4-6375-4395-a9cf-d7257192f2c5 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] [instance: 2ec7b07d-b593-46b7-9751-b6116e4d2cec] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:35:13 compute-1 nova_compute[225855]: 2026-01-20 14:35:13.641 225859 DEBUG nova.virt.libvirt.driver [None req-85a922c4-6375-4395-a9cf-d7257192f2c5 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] [instance: 2ec7b07d-b593-46b7-9751-b6116e4d2cec] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:35:13 compute-1 nova_compute[225855]: 2026-01-20 14:35:13.641 225859 DEBUG nova.virt.libvirt.driver [None req-85a922c4-6375-4395-a9cf-d7257192f2c5 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] [instance: 2ec7b07d-b593-46b7-9751-b6116e4d2cec] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:35:13 compute-1 nova_compute[225855]: 2026-01-20 14:35:13.642 225859 DEBUG nova.virt.libvirt.driver [None req-85a922c4-6375-4395-a9cf-d7257192f2c5 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] [instance: 2ec7b07d-b593-46b7-9751-b6116e4d2cec] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:35:13 compute-1 nova_compute[225855]: 2026-01-20 14:35:13.647 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 2ec7b07d-b593-46b7-9751-b6116e4d2cec] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:35:13 compute-1 nova_compute[225855]: 2026-01-20 14:35:13.685 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 2ec7b07d-b593-46b7-9751-b6116e4d2cec] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Jan 20 14:35:13 compute-1 nova_compute[225855]: 2026-01-20 14:35:13.686 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768919713.6102376, 2ec7b07d-b593-46b7-9751-b6116e4d2cec => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:35:13 compute-1 nova_compute[225855]: 2026-01-20 14:35:13.686 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 2ec7b07d-b593-46b7-9751-b6116e4d2cec] VM Started (Lifecycle Event)
Jan 20 14:35:13 compute-1 nova_compute[225855]: 2026-01-20 14:35:13.711 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 2ec7b07d-b593-46b7-9751-b6116e4d2cec] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:35:13 compute-1 nova_compute[225855]: 2026-01-20 14:35:13.714 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 2ec7b07d-b593-46b7-9751-b6116e4d2cec] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:35:13 compute-1 nova_compute[225855]: 2026-01-20 14:35:13.722 225859 DEBUG nova.compute.manager [None req-85a922c4-6375-4395-a9cf-d7257192f2c5 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] [instance: 2ec7b07d-b593-46b7-9751-b6116e4d2cec] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:35:13 compute-1 nova_compute[225855]: 2026-01-20 14:35:13.731 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 2ec7b07d-b593-46b7-9751-b6116e4d2cec] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Jan 20 14:35:13 compute-1 nova_compute[225855]: 2026-01-20 14:35:13.776 225859 DEBUG oslo_concurrency.lockutils [None req-85a922c4-6375-4395-a9cf-d7257192f2c5 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:35:13 compute-1 nova_compute[225855]: 2026-01-20 14:35:13.777 225859 DEBUG oslo_concurrency.lockutils [None req-85a922c4-6375-4395-a9cf-d7257192f2c5 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:35:13 compute-1 nova_compute[225855]: 2026-01-20 14:35:13.777 225859 DEBUG nova.objects.instance [None req-85a922c4-6375-4395-a9cf-d7257192f2c5 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] [instance: 2ec7b07d-b593-46b7-9751-b6116e4d2cec] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Jan 20 14:35:13 compute-1 nova_compute[225855]: 2026-01-20 14:35:13.847 225859 DEBUG oslo_concurrency.lockutils [None req-85a922c4-6375-4395-a9cf-d7257192f2c5 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.071s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:35:14 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/4216979389' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:35:14 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/52686846' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 14:35:14 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/52686846' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 14:35:14 compute-1 nova_compute[225855]: 2026-01-20 14:35:14.570 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:35:14 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:35:14 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:35:14 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:35:14.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:35:15 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e181 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:35:15 compute-1 ceph-mon[81775]: pgmap v1355: 321 pgs: 2 active+clean+snaptrim, 5 active+clean+snaptrim_wait, 314 active+clean; 473 MiB data, 703 MiB used, 20 GiB / 21 GiB avail; 6.2 MiB/s rd, 3.9 MiB/s wr, 273 op/s
Jan 20 14:35:15 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e182 e182: 3 total, 3 up, 3 in
Jan 20 14:35:15 compute-1 nova_compute[225855]: 2026-01-20 14:35:15.191 225859 DEBUG nova.compute.manager [req-648a3480-a7a3-4db4-b73d-4b7f6b33b956 req-8c3597ac-22aa-4679-a8fe-f0c922e8b3b4 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 2ec7b07d-b593-46b7-9751-b6116e4d2cec] Received event network-vif-plugged-73e232f9-3860-4b9a-9cec-535fa2fb0c9f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:35:15 compute-1 nova_compute[225855]: 2026-01-20 14:35:15.192 225859 DEBUG oslo_concurrency.lockutils [req-648a3480-a7a3-4db4-b73d-4b7f6b33b956 req-8c3597ac-22aa-4679-a8fe-f0c922e8b3b4 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "2ec7b07d-b593-46b7-9751-b6116e4d2cec-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:35:15 compute-1 nova_compute[225855]: 2026-01-20 14:35:15.192 225859 DEBUG oslo_concurrency.lockutils [req-648a3480-a7a3-4db4-b73d-4b7f6b33b956 req-8c3597ac-22aa-4679-a8fe-f0c922e8b3b4 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "2ec7b07d-b593-46b7-9751-b6116e4d2cec-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:35:15 compute-1 nova_compute[225855]: 2026-01-20 14:35:15.192 225859 DEBUG oslo_concurrency.lockutils [req-648a3480-a7a3-4db4-b73d-4b7f6b33b956 req-8c3597ac-22aa-4679-a8fe-f0c922e8b3b4 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "2ec7b07d-b593-46b7-9751-b6116e4d2cec-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:35:15 compute-1 nova_compute[225855]: 2026-01-20 14:35:15.192 225859 DEBUG nova.compute.manager [req-648a3480-a7a3-4db4-b73d-4b7f6b33b956 req-8c3597ac-22aa-4679-a8fe-f0c922e8b3b4 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 2ec7b07d-b593-46b7-9751-b6116e4d2cec] No waiting events found dispatching network-vif-plugged-73e232f9-3860-4b9a-9cec-535fa2fb0c9f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:35:15 compute-1 nova_compute[225855]: 2026-01-20 14:35:15.192 225859 WARNING nova.compute.manager [req-648a3480-a7a3-4db4-b73d-4b7f6b33b956 req-8c3597ac-22aa-4679-a8fe-f0c922e8b3b4 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 2ec7b07d-b593-46b7-9751-b6116e4d2cec] Received unexpected event network-vif-plugged-73e232f9-3860-4b9a-9cec-535fa2fb0c9f for instance with vm_state active and task_state None.
Jan 20 14:35:15 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:35:15 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:35:15 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:35:15.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:35:16 compute-1 ceph-mon[81775]: osdmap e182: 3 total, 3 up, 3 in
Jan 20 14:35:16 compute-1 ceph-mon[81775]: pgmap v1357: 321 pgs: 321 active+clean; 562 MiB data, 741 MiB used, 20 GiB / 21 GiB avail; 7.6 MiB/s rd, 11 MiB/s wr, 478 op/s
Jan 20 14:35:16 compute-1 nova_compute[225855]: 2026-01-20 14:35:16.355 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:35:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:35:16.392 140354 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:35:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:35:16.393 140354 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:35:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:35:16.393 140354 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:35:16 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:35:16 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:35:16 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:35:16.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:35:17 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e183 e183: 3 total, 3 up, 3 in
Jan 20 14:35:17 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:35:17 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:35:17 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:35:17.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:35:18 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e184 e184: 3 total, 3 up, 3 in
Jan 20 14:35:18 compute-1 ceph-mon[81775]: osdmap e183: 3 total, 3 up, 3 in
Jan 20 14:35:18 compute-1 ceph-mon[81775]: pgmap v1359: 321 pgs: 321 active+clean; 631 MiB data, 782 MiB used, 20 GiB / 21 GiB avail; 6.9 MiB/s rd, 14 MiB/s wr, 352 op/s
Jan 20 14:35:18 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/3187377245' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:35:18 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:35:18 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:35:18 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:35:18.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:35:19 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:35:19 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:35:19 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:35:19.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:35:19 compute-1 ceph-mon[81775]: osdmap e184: 3 total, 3 up, 3 in
Jan 20 14:35:19 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/1954847869' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:35:19 compute-1 nova_compute[225855]: 2026-01-20 14:35:19.573 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:35:20 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e184 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:35:20 compute-1 nova_compute[225855]: 2026-01-20 14:35:20.340 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:35:20 compute-1 nova_compute[225855]: 2026-01-20 14:35:20.340 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 14:35:20 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:35:20 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:35:20 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:35:20.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:35:20 compute-1 ceph-mon[81775]: pgmap v1361: 321 pgs: 321 active+clean; 662 MiB data, 835 MiB used, 20 GiB / 21 GiB avail; 8.2 MiB/s rd, 18 MiB/s wr, 562 op/s
Jan 20 14:35:21 compute-1 podman[246782]: 2026-01-20 14:35:21.027651737 +0000 UTC m=+0.067810304 container health_status 533cd8e0c3b8b7f910f15cef49e7f879ab46c12ce646a35628c15f13725eb500 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent)
Jan 20 14:35:21 compute-1 nova_compute[225855]: 2026-01-20 14:35:21.340 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:35:21 compute-1 nova_compute[225855]: 2026-01-20 14:35:21.357 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:35:21 compute-1 nova_compute[225855]: 2026-01-20 14:35:21.516 225859 DEBUG oslo_concurrency.lockutils [None req-818c50c4-732e-447e-be1c-00273f386d27 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Acquiring lock "fdba30ff-e02a-4857-92f6-1828ce3ab175" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:35:21 compute-1 nova_compute[225855]: 2026-01-20 14:35:21.516 225859 DEBUG oslo_concurrency.lockutils [None req-818c50c4-732e-447e-be1c-00273f386d27 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Lock "fdba30ff-e02a-4857-92f6-1828ce3ab175" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:35:21 compute-1 nova_compute[225855]: 2026-01-20 14:35:21.516 225859 DEBUG oslo_concurrency.lockutils [None req-818c50c4-732e-447e-be1c-00273f386d27 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Acquiring lock "fdba30ff-e02a-4857-92f6-1828ce3ab175-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:35:21 compute-1 nova_compute[225855]: 2026-01-20 14:35:21.517 225859 DEBUG oslo_concurrency.lockutils [None req-818c50c4-732e-447e-be1c-00273f386d27 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Lock "fdba30ff-e02a-4857-92f6-1828ce3ab175-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:35:21 compute-1 nova_compute[225855]: 2026-01-20 14:35:21.517 225859 DEBUG oslo_concurrency.lockutils [None req-818c50c4-732e-447e-be1c-00273f386d27 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Lock "fdba30ff-e02a-4857-92f6-1828ce3ab175-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:35:21 compute-1 nova_compute[225855]: 2026-01-20 14:35:21.518 225859 INFO nova.compute.manager [None req-818c50c4-732e-447e-be1c-00273f386d27 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] [instance: fdba30ff-e02a-4857-92f6-1828ce3ab175] Terminating instance
Jan 20 14:35:21 compute-1 nova_compute[225855]: 2026-01-20 14:35:21.519 225859 DEBUG nova.compute.manager [None req-818c50c4-732e-447e-be1c-00273f386d27 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] [instance: fdba30ff-e02a-4857-92f6-1828ce3ab175] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 20 14:35:21 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:35:21 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:35:21 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:35:21.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:35:21 compute-1 kernel: tap87a0a5ba-64 (unregistering): left promiscuous mode
Jan 20 14:35:21 compute-1 NetworkManager[49104]: <info>  [1768919721.5739] device (tap87a0a5ba-64): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 20 14:35:21 compute-1 ovn_controller[130490]: 2026-01-20T14:35:21Z|00160|binding|INFO|Releasing lport 87a0a5ba-6446-4265-8ada-94d1bd815aed from this chassis (sb_readonly=0)
Jan 20 14:35:21 compute-1 ovn_controller[130490]: 2026-01-20T14:35:21Z|00161|binding|INFO|Setting lport 87a0a5ba-6446-4265-8ada-94d1bd815aed down in Southbound
Jan 20 14:35:21 compute-1 ovn_controller[130490]: 2026-01-20T14:35:21Z|00162|binding|INFO|Removing iface tap87a0a5ba-64 ovn-installed in OVS
Jan 20 14:35:21 compute-1 nova_compute[225855]: 2026-01-20 14:35:21.582 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:35:21 compute-1 nova_compute[225855]: 2026-01-20 14:35:21.584 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:35:21 compute-1 nova_compute[225855]: 2026-01-20 14:35:21.599 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:35:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:35:21.614 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:70:1f:46 10.100.0.10'], port_security=['fa:16:3e:70:1f:46 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'fdba30ff-e02a-4857-92f6-1828ce3ab175', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-33c9a20a-d976-42a8-b8bf-f83ddfc97c9a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a841e7a1434c488390475174e10bc161', 'neutron:revision_number': '4', 'neutron:security_group_ids': '0bbdea05-fba7-47c7-ba4e-5dac58212a25', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c43dea88-ea55-4069-a4be-2c30a432a754, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], logical_port=87a0a5ba-6446-4265-8ada-94d1bd815aed) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:35:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:35:21.615 140354 INFO neutron.agent.ovn.metadata.agent [-] Port 87a0a5ba-6446-4265-8ada-94d1bd815aed in datapath 33c9a20a-d976-42a8-b8bf-f83ddfc97c9a unbound from our chassis
Jan 20 14:35:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:35:21.617 140354 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 33c9a20a-d976-42a8-b8bf-f83ddfc97c9a
Jan 20 14:35:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:35:21.630 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[d1d2848a-34af-4663-851c-1ab4bdacbbe0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:35:21 compute-1 systemd[1]: machine-qemu\x2d21\x2dinstance\x2d0000002a.scope: Deactivated successfully.
Jan 20 14:35:21 compute-1 systemd[1]: machine-qemu\x2d21\x2dinstance\x2d0000002a.scope: Consumed 15.401s CPU time.
Jan 20 14:35:21 compute-1 systemd-machined[194361]: Machine qemu-21-instance-0000002a terminated.
Jan 20 14:35:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:35:21.659 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[a106108d-454b-40c8-8ca4-5c97073baca4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:35:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:35:21.662 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[5189f3f5-2926-4a6f-ab69-cc79ebb485a4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:35:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:35:21.692 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[f46c9af9-a5cc-4297-824d-6d629f46390d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:35:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:35:21.709 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[e0df43db-303c-4eb5-9939-86e06a2974e5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap33c9a20a-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:89:8e:bd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 15, 'rx_bytes': 700, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 15, 'rx_bytes': 700, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 37], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 466055, 'reachable_time': 29489, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 246814, 'error': None, 'target': 'ovnmeta-33c9a20a-d976-42a8-b8bf-f83ddfc97c9a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:35:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:35:21.724 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[f5a94a23-404e-4a36-83ae-e0e72ffab2f5]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap33c9a20a-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 466065, 'tstamp': 466065}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 246815, 'error': None, 'target': 'ovnmeta-33c9a20a-d976-42a8-b8bf-f83ddfc97c9a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap33c9a20a-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 466068, 'tstamp': 466068}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 246815, 'error': None, 'target': 'ovnmeta-33c9a20a-d976-42a8-b8bf-f83ddfc97c9a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:35:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:35:21.725 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap33c9a20a-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:35:21 compute-1 nova_compute[225855]: 2026-01-20 14:35:21.763 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:35:21 compute-1 nova_compute[225855]: 2026-01-20 14:35:21.765 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:35:21 compute-1 nova_compute[225855]: 2026-01-20 14:35:21.770 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:35:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:35:21.771 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap33c9a20a-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:35:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:35:21.771 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:35:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:35:21.772 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap33c9a20a-d0, col_values=(('external_ids', {'iface-id': '90c69687-c788-4dba-881f-3ed4a5ee6007'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:35:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:35:21.772 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:35:21 compute-1 nova_compute[225855]: 2026-01-20 14:35:21.782 225859 INFO nova.virt.libvirt.driver [-] [instance: fdba30ff-e02a-4857-92f6-1828ce3ab175] Instance destroyed successfully.
Jan 20 14:35:21 compute-1 nova_compute[225855]: 2026-01-20 14:35:21.783 225859 DEBUG nova.objects.instance [None req-818c50c4-732e-447e-be1c-00273f386d27 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Lazy-loading 'resources' on Instance uuid fdba30ff-e02a-4857-92f6-1828ce3ab175 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:35:21 compute-1 nova_compute[225855]: 2026-01-20 14:35:21.799 225859 DEBUG nova.virt.libvirt.vif [None req-818c50c4-732e-447e-be1c-00273f386d27 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T14:33:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1832306325',display_name='tempest-ServersAdminTestJSON-server-1832306325',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1832306325',id=42,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-20T14:34:15Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a841e7a1434c488390475174e10bc161',ramdisk_id='',reservation_id='r-sb3w0f0c',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_di
sk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-1261404595',owner_user_name='tempest-ServersAdminTestJSON-1261404595-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T14:34:15Z,user_data=None,user_id='f51c395107c84dbd9067113b84ff01dd',uuid=fdba30ff-e02a-4857-92f6-1828ce3ab175,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "87a0a5ba-6446-4265-8ada-94d1bd815aed", "address": "fa:16:3e:70:1f:46", "network": {"id": "33c9a20a-d976-42a8-b8bf-f83ddfc97c9a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-202342440-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a841e7a1434c488390475174e10bc161", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap87a0a5ba-64", "ovs_interfaceid": "87a0a5ba-6446-4265-8ada-94d1bd815aed", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 20 14:35:21 compute-1 nova_compute[225855]: 2026-01-20 14:35:21.800 225859 DEBUG nova.network.os_vif_util [None req-818c50c4-732e-447e-be1c-00273f386d27 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Converting VIF {"id": "87a0a5ba-6446-4265-8ada-94d1bd815aed", "address": "fa:16:3e:70:1f:46", "network": {"id": "33c9a20a-d976-42a8-b8bf-f83ddfc97c9a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-202342440-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a841e7a1434c488390475174e10bc161", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap87a0a5ba-64", "ovs_interfaceid": "87a0a5ba-6446-4265-8ada-94d1bd815aed", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:35:21 compute-1 nova_compute[225855]: 2026-01-20 14:35:21.800 225859 DEBUG nova.network.os_vif_util [None req-818c50c4-732e-447e-be1c-00273f386d27 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:70:1f:46,bridge_name='br-int',has_traffic_filtering=True,id=87a0a5ba-6446-4265-8ada-94d1bd815aed,network=Network(33c9a20a-d976-42a8-b8bf-f83ddfc97c9a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap87a0a5ba-64') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:35:21 compute-1 nova_compute[225855]: 2026-01-20 14:35:21.801 225859 DEBUG os_vif [None req-818c50c4-732e-447e-be1c-00273f386d27 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:70:1f:46,bridge_name='br-int',has_traffic_filtering=True,id=87a0a5ba-6446-4265-8ada-94d1bd815aed,network=Network(33c9a20a-d976-42a8-b8bf-f83ddfc97c9a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap87a0a5ba-64') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 20 14:35:21 compute-1 nova_compute[225855]: 2026-01-20 14:35:21.802 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:35:21 compute-1 nova_compute[225855]: 2026-01-20 14:35:21.803 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap87a0a5ba-64, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:35:21 compute-1 nova_compute[225855]: 2026-01-20 14:35:21.807 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 14:35:21 compute-1 nova_compute[225855]: 2026-01-20 14:35:21.810 225859 INFO os_vif [None req-818c50c4-732e-447e-be1c-00273f386d27 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:70:1f:46,bridge_name='br-int',has_traffic_filtering=True,id=87a0a5ba-6446-4265-8ada-94d1bd815aed,network=Network(33c9a20a-d976-42a8-b8bf-f83ddfc97c9a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap87a0a5ba-64')
Jan 20 14:35:21 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/2306401299' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:35:22 compute-1 nova_compute[225855]: 2026-01-20 14:35:22.155 225859 DEBUG nova.compute.manager [None req-101774a7-9f44-44b5-b6ce-5de2fe11cb61 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] [instance: 680a9e49-0486-46a0-8857-99a7a56c46e1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:35:22 compute-1 nova_compute[225855]: 2026-01-20 14:35:22.198 225859 INFO nova.compute.manager [None req-101774a7-9f44-44b5-b6ce-5de2fe11cb61 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] [instance: 680a9e49-0486-46a0-8857-99a7a56c46e1] instance snapshotting
Jan 20 14:35:22 compute-1 nova_compute[225855]: 2026-01-20 14:35:22.272 225859 INFO nova.virt.libvirt.driver [None req-818c50c4-732e-447e-be1c-00273f386d27 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] [instance: fdba30ff-e02a-4857-92f6-1828ce3ab175] Deleting instance files /var/lib/nova/instances/fdba30ff-e02a-4857-92f6-1828ce3ab175_del
Jan 20 14:35:22 compute-1 nova_compute[225855]: 2026-01-20 14:35:22.273 225859 INFO nova.virt.libvirt.driver [None req-818c50c4-732e-447e-be1c-00273f386d27 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] [instance: fdba30ff-e02a-4857-92f6-1828ce3ab175] Deletion of /var/lib/nova/instances/fdba30ff-e02a-4857-92f6-1828ce3ab175_del complete
Jan 20 14:35:22 compute-1 nova_compute[225855]: 2026-01-20 14:35:22.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:35:22 compute-1 nova_compute[225855]: 2026-01-20 14:35:22.340 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 14:35:22 compute-1 nova_compute[225855]: 2026-01-20 14:35:22.342 225859 INFO nova.compute.manager [None req-818c50c4-732e-447e-be1c-00273f386d27 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] [instance: fdba30ff-e02a-4857-92f6-1828ce3ab175] Took 0.82 seconds to destroy the instance on the hypervisor.
Jan 20 14:35:22 compute-1 nova_compute[225855]: 2026-01-20 14:35:22.342 225859 DEBUG oslo.service.loopingcall [None req-818c50c4-732e-447e-be1c-00273f386d27 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 20 14:35:22 compute-1 nova_compute[225855]: 2026-01-20 14:35:22.343 225859 DEBUG nova.compute.manager [-] [instance: fdba30ff-e02a-4857-92f6-1828ce3ab175] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 20 14:35:22 compute-1 nova_compute[225855]: 2026-01-20 14:35:22.343 225859 DEBUG nova.network.neutron [-] [instance: fdba30ff-e02a-4857-92f6-1828ce3ab175] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 20 14:35:22 compute-1 nova_compute[225855]: 2026-01-20 14:35:22.376 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] [instance: fdba30ff-e02a-4857-92f6-1828ce3ab175] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9907
Jan 20 14:35:22 compute-1 nova_compute[225855]: 2026-01-20 14:35:22.377 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 20 14:35:22 compute-1 nova_compute[225855]: 2026-01-20 14:35:22.377 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:35:22 compute-1 nova_compute[225855]: 2026-01-20 14:35:22.377 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:35:22 compute-1 nova_compute[225855]: 2026-01-20 14:35:22.398 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:35:22 compute-1 nova_compute[225855]: 2026-01-20 14:35:22.399 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:35:22 compute-1 nova_compute[225855]: 2026-01-20 14:35:22.399 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:35:22 compute-1 nova_compute[225855]: 2026-01-20 14:35:22.399 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 14:35:22 compute-1 nova_compute[225855]: 2026-01-20 14:35:22.400 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:35:22 compute-1 nova_compute[225855]: 2026-01-20 14:35:22.470 225859 INFO nova.virt.libvirt.driver [None req-101774a7-9f44-44b5-b6ce-5de2fe11cb61 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] [instance: 680a9e49-0486-46a0-8857-99a7a56c46e1] Beginning live snapshot process
Jan 20 14:35:22 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e185 e185: 3 total, 3 up, 3 in
Jan 20 14:35:22 compute-1 nova_compute[225855]: 2026-01-20 14:35:22.623 225859 DEBUG nova.virt.libvirt.imagebackend [None req-101774a7-9f44-44b5-b6ce-5de2fe11cb61 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] No parent info for a32b3e07-16d8-46fd-9a7b-c242c432fcf9; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Jan 20 14:35:22 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:35:22 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:35:22 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:35:22.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:35:22 compute-1 sudo[246897]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:35:22 compute-1 sudo[246897]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:35:22 compute-1 sudo[246897]: pam_unix(sudo:session): session closed for user root
Jan 20 14:35:22 compute-1 sudo[246922]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:35:22 compute-1 nova_compute[225855]: 2026-01-20 14:35:22.808 225859 DEBUG nova.storage.rbd_utils [None req-101774a7-9f44-44b5-b6ce-5de2fe11cb61 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] creating snapshot(336cafe49ed3471a888416cc0350ffb9) on rbd image(680a9e49-0486-46a0-8857-99a7a56c46e1_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 20 14:35:22 compute-1 sudo[246922]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:35:22 compute-1 sudo[246922]: pam_unix(sudo:session): session closed for user root
Jan 20 14:35:22 compute-1 ceph-mon[81775]: pgmap v1362: 321 pgs: 321 active+clean; 650 MiB data, 847 MiB used, 20 GiB / 21 GiB avail; 7.5 MiB/s rd, 14 MiB/s wr, 583 op/s
Jan 20 14:35:22 compute-1 ceph-mon[81775]: osdmap e185: 3 total, 3 up, 3 in
Jan 20 14:35:22 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:35:22 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/832753989' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:35:22 compute-1 nova_compute[225855]: 2026-01-20 14:35:22.899 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.499s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:35:22 compute-1 nova_compute[225855]: 2026-01-20 14:35:22.992 225859 DEBUG nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] skipping disk for instance-0000002e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 14:35:22 compute-1 nova_compute[225855]: 2026-01-20 14:35:22.994 225859 DEBUG nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] skipping disk for instance-0000002e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 14:35:22 compute-1 nova_compute[225855]: 2026-01-20 14:35:22.998 225859 DEBUG nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] skipping disk for instance-00000027 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 14:35:22 compute-1 nova_compute[225855]: 2026-01-20 14:35:22.999 225859 DEBUG nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] skipping disk for instance-00000027 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 14:35:23 compute-1 nova_compute[225855]: 2026-01-20 14:35:23.172 225859 WARNING nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 14:35:23 compute-1 nova_compute[225855]: 2026-01-20 14:35:23.173 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4267MB free_disk=20.703880310058594GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 14:35:23 compute-1 nova_compute[225855]: 2026-01-20 14:35:23.174 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:35:23 compute-1 nova_compute[225855]: 2026-01-20 14:35:23.174 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:35:23 compute-1 nova_compute[225855]: 2026-01-20 14:35:23.287 225859 DEBUG nova.compute.manager [req-cff80749-54f6-42a9-84e1-a206c503f0bc req-562abf7a-5d07-4765-8066-4a8f96c33af0 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: fdba30ff-e02a-4857-92f6-1828ce3ab175] Received event network-vif-unplugged-87a0a5ba-6446-4265-8ada-94d1bd815aed external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:35:23 compute-1 nova_compute[225855]: 2026-01-20 14:35:23.287 225859 DEBUG oslo_concurrency.lockutils [req-cff80749-54f6-42a9-84e1-a206c503f0bc req-562abf7a-5d07-4765-8066-4a8f96c33af0 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "fdba30ff-e02a-4857-92f6-1828ce3ab175-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:35:23 compute-1 nova_compute[225855]: 2026-01-20 14:35:23.288 225859 DEBUG oslo_concurrency.lockutils [req-cff80749-54f6-42a9-84e1-a206c503f0bc req-562abf7a-5d07-4765-8066-4a8f96c33af0 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "fdba30ff-e02a-4857-92f6-1828ce3ab175-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:35:23 compute-1 nova_compute[225855]: 2026-01-20 14:35:23.288 225859 DEBUG oslo_concurrency.lockutils [req-cff80749-54f6-42a9-84e1-a206c503f0bc req-562abf7a-5d07-4765-8066-4a8f96c33af0 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "fdba30ff-e02a-4857-92f6-1828ce3ab175-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:35:23 compute-1 nova_compute[225855]: 2026-01-20 14:35:23.288 225859 DEBUG nova.compute.manager [req-cff80749-54f6-42a9-84e1-a206c503f0bc req-562abf7a-5d07-4765-8066-4a8f96c33af0 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: fdba30ff-e02a-4857-92f6-1828ce3ab175] No waiting events found dispatching network-vif-unplugged-87a0a5ba-6446-4265-8ada-94d1bd815aed pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:35:23 compute-1 nova_compute[225855]: 2026-01-20 14:35:23.289 225859 DEBUG nova.compute.manager [req-cff80749-54f6-42a9-84e1-a206c503f0bc req-562abf7a-5d07-4765-8066-4a8f96c33af0 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: fdba30ff-e02a-4857-92f6-1828ce3ab175] Received event network-vif-unplugged-87a0a5ba-6446-4265-8ada-94d1bd815aed for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 20 14:35:23 compute-1 nova_compute[225855]: 2026-01-20 14:35:23.289 225859 DEBUG nova.compute.manager [req-cff80749-54f6-42a9-84e1-a206c503f0bc req-562abf7a-5d07-4765-8066-4a8f96c33af0 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: fdba30ff-e02a-4857-92f6-1828ce3ab175] Received event network-vif-plugged-87a0a5ba-6446-4265-8ada-94d1bd815aed external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:35:23 compute-1 nova_compute[225855]: 2026-01-20 14:35:23.289 225859 DEBUG oslo_concurrency.lockutils [req-cff80749-54f6-42a9-84e1-a206c503f0bc req-562abf7a-5d07-4765-8066-4a8f96c33af0 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "fdba30ff-e02a-4857-92f6-1828ce3ab175-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:35:23 compute-1 nova_compute[225855]: 2026-01-20 14:35:23.290 225859 DEBUG oslo_concurrency.lockutils [req-cff80749-54f6-42a9-84e1-a206c503f0bc req-562abf7a-5d07-4765-8066-4a8f96c33af0 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "fdba30ff-e02a-4857-92f6-1828ce3ab175-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:35:23 compute-1 nova_compute[225855]: 2026-01-20 14:35:23.290 225859 DEBUG oslo_concurrency.lockutils [req-cff80749-54f6-42a9-84e1-a206c503f0bc req-562abf7a-5d07-4765-8066-4a8f96c33af0 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "fdba30ff-e02a-4857-92f6-1828ce3ab175-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:35:23 compute-1 nova_compute[225855]: 2026-01-20 14:35:23.290 225859 DEBUG nova.compute.manager [req-cff80749-54f6-42a9-84e1-a206c503f0bc req-562abf7a-5d07-4765-8066-4a8f96c33af0 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: fdba30ff-e02a-4857-92f6-1828ce3ab175] No waiting events found dispatching network-vif-plugged-87a0a5ba-6446-4265-8ada-94d1bd815aed pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:35:23 compute-1 nova_compute[225855]: 2026-01-20 14:35:23.290 225859 WARNING nova.compute.manager [req-cff80749-54f6-42a9-84e1-a206c503f0bc req-562abf7a-5d07-4765-8066-4a8f96c33af0 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: fdba30ff-e02a-4857-92f6-1828ce3ab175] Received unexpected event network-vif-plugged-87a0a5ba-6446-4265-8ada-94d1bd815aed for instance with vm_state active and task_state deleting.
Jan 20 14:35:23 compute-1 nova_compute[225855]: 2026-01-20 14:35:23.315 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Instance 2ec7b07d-b593-46b7-9751-b6116e4d2cec actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 20 14:35:23 compute-1 nova_compute[225855]: 2026-01-20 14:35:23.315 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Instance fdba30ff-e02a-4857-92f6-1828ce3ab175 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 20 14:35:23 compute-1 nova_compute[225855]: 2026-01-20 14:35:23.316 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Instance 680a9e49-0486-46a0-8857-99a7a56c46e1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 20 14:35:23 compute-1 nova_compute[225855]: 2026-01-20 14:35:23.316 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 14:35:23 compute-1 nova_compute[225855]: 2026-01-20 14:35:23.316 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=20GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 14:35:23 compute-1 nova_compute[225855]: 2026-01-20 14:35:23.410 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:35:23 compute-1 nova_compute[225855]: 2026-01-20 14:35:23.482 225859 DEBUG nova.network.neutron [-] [instance: fdba30ff-e02a-4857-92f6-1828ce3ab175] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:35:23 compute-1 nova_compute[225855]: 2026-01-20 14:35:23.499 225859 INFO nova.compute.manager [-] [instance: fdba30ff-e02a-4857-92f6-1828ce3ab175] Took 1.16 seconds to deallocate network for instance.
Jan 20 14:35:23 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:35:23 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:35:23 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:35:23.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:35:23 compute-1 nova_compute[225855]: 2026-01-20 14:35:23.535 225859 DEBUG nova.compute.manager [req-0b0ea41c-d08a-459c-b021-13256e7963b3 req-8f390230-be26-4bac-988f-71dbef7b8e7d 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: fdba30ff-e02a-4857-92f6-1828ce3ab175] Received event network-vif-deleted-87a0a5ba-6446-4265-8ada-94d1bd815aed external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:35:23 compute-1 nova_compute[225855]: 2026-01-20 14:35:23.545 225859 DEBUG oslo_concurrency.lockutils [None req-818c50c4-732e-447e-be1c-00273f386d27 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:35:23 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e186 e186: 3 total, 3 up, 3 in
Jan 20 14:35:23 compute-1 nova_compute[225855]: 2026-01-20 14:35:23.602 225859 DEBUG nova.storage.rbd_utils [None req-101774a7-9f44-44b5-b6ce-5de2fe11cb61 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] cloning vms/680a9e49-0486-46a0-8857-99a7a56c46e1_disk@336cafe49ed3471a888416cc0350ffb9 to images/606f74dc-79ac-4b4e-b154-695b258203bd clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Jan 20 14:35:23 compute-1 nova_compute[225855]: 2026-01-20 14:35:23.727 225859 DEBUG nova.storage.rbd_utils [None req-101774a7-9f44-44b5-b6ce-5de2fe11cb61 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] flattening images/606f74dc-79ac-4b4e-b154-695b258203bd flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Jan 20 14:35:23 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:35:23 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3777007206' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:35:23 compute-1 nova_compute[225855]: 2026-01-20 14:35:23.852 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:35:23 compute-1 nova_compute[225855]: 2026-01-20 14:35:23.858 225859 DEBUG nova.compute.provider_tree [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:35:23 compute-1 nova_compute[225855]: 2026-01-20 14:35:23.888 225859 DEBUG nova.scheduler.client.report [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:35:23 compute-1 nova_compute[225855]: 2026-01-20 14:35:23.919 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 14:35:23 compute-1 nova_compute[225855]: 2026-01-20 14:35:23.920 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.745s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:35:23 compute-1 nova_compute[225855]: 2026-01-20 14:35:23.920 225859 DEBUG oslo_concurrency.lockutils [None req-818c50c4-732e-447e-be1c-00273f386d27 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.375s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:35:23 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/832753989' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:35:23 compute-1 ceph-mon[81775]: pgmap v1364: 321 pgs: 321 active+clean; 650 MiB data, 847 MiB used, 20 GiB / 21 GiB avail; 4.5 MiB/s rd, 7.7 MiB/s wr, 503 op/s
Jan 20 14:35:23 compute-1 ceph-mon[81775]: osdmap e186: 3 total, 3 up, 3 in
Jan 20 14:35:23 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/3777007206' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:35:24 compute-1 nova_compute[225855]: 2026-01-20 14:35:24.015 225859 DEBUG oslo_concurrency.processutils [None req-818c50c4-732e-447e-be1c-00273f386d27 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:35:24 compute-1 nova_compute[225855]: 2026-01-20 14:35:24.126 225859 DEBUG nova.storage.rbd_utils [None req-101774a7-9f44-44b5-b6ce-5de2fe11cb61 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] removing snapshot(336cafe49ed3471a888416cc0350ffb9) on rbd image(680a9e49-0486-46a0-8857-99a7a56c46e1_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Jan 20 14:35:24 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:35:24 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/685615446' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:35:24 compute-1 nova_compute[225855]: 2026-01-20 14:35:24.487 225859 DEBUG oslo_concurrency.processutils [None req-818c50c4-732e-447e-be1c-00273f386d27 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:35:24 compute-1 nova_compute[225855]: 2026-01-20 14:35:24.496 225859 DEBUG nova.compute.provider_tree [None req-818c50c4-732e-447e-be1c-00273f386d27 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:35:24 compute-1 nova_compute[225855]: 2026-01-20 14:35:24.512 225859 DEBUG nova.scheduler.client.report [None req-818c50c4-732e-447e-be1c-00273f386d27 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:35:24 compute-1 nova_compute[225855]: 2026-01-20 14:35:24.540 225859 DEBUG oslo_concurrency.lockutils [None req-818c50c4-732e-447e-be1c-00273f386d27 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.619s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:35:24 compute-1 nova_compute[225855]: 2026-01-20 14:35:24.570 225859 INFO nova.scheduler.client.report [None req-818c50c4-732e-447e-be1c-00273f386d27 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Deleted allocations for instance fdba30ff-e02a-4857-92f6-1828ce3ab175
Jan 20 14:35:24 compute-1 nova_compute[225855]: 2026-01-20 14:35:24.578 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:35:24 compute-1 nova_compute[225855]: 2026-01-20 14:35:24.632 225859 DEBUG oslo_concurrency.lockutils [None req-818c50c4-732e-447e-be1c-00273f386d27 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Lock "fdba30ff-e02a-4857-92f6-1828ce3ab175" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.116s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:35:24 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:35:24 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:35:24 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:35:24.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:35:24 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e187 e187: 3 total, 3 up, 3 in
Jan 20 14:35:24 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/685615446' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:35:24 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/4037861475' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:35:24 compute-1 nova_compute[225855]: 2026-01-20 14:35:24.953 225859 DEBUG nova.storage.rbd_utils [None req-101774a7-9f44-44b5-b6ce-5de2fe11cb61 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] creating snapshot(snap) on rbd image(606f74dc-79ac-4b4e-b154-695b258203bd) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 20 14:35:25 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e187 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:35:25 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:35:25 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:35:25 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:35:25.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:35:25 compute-1 nova_compute[225855]: 2026-01-20 14:35:25.884 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:35:25 compute-1 nova_compute[225855]: 2026-01-20 14:35:25.884 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:35:25 compute-1 ceph-mon[81775]: osdmap e187: 3 total, 3 up, 3 in
Jan 20 14:35:25 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/2618122951' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:35:25 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/979848207' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:35:25 compute-1 ceph-mon[81775]: pgmap v1367: 321 pgs: 321 active+clean; 546 MiB data, 790 MiB used, 20 GiB / 21 GiB avail; 8.4 MiB/s rd, 5.7 MiB/s wr, 494 op/s
Jan 20 14:35:25 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e188 e188: 3 total, 3 up, 3 in
Jan 20 14:35:26 compute-1 ovn_controller[130490]: 2026-01-20T14:35:26Z|00020|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:17:6a:15 10.100.0.6
Jan 20 14:35:26 compute-1 ovn_controller[130490]: 2026-01-20T14:35:26Z|00021|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:17:6a:15 10.100.0.6
Jan 20 14:35:26 compute-1 nova_compute[225855]: 2026-01-20 14:35:26.335 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:35:26 compute-1 nova_compute[225855]: 2026-01-20 14:35:26.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:35:26 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:35:26 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:35:26 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:35:26.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:35:26 compute-1 nova_compute[225855]: 2026-01-20 14:35:26.817 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:35:26 compute-1 ceph-mon[81775]: osdmap e188: 3 total, 3 up, 3 in
Jan 20 14:35:27 compute-1 nova_compute[225855]: 2026-01-20 14:35:27.463 225859 INFO nova.virt.libvirt.driver [None req-101774a7-9f44-44b5-b6ce-5de2fe11cb61 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] [instance: 680a9e49-0486-46a0-8857-99a7a56c46e1] Snapshot image upload complete
Jan 20 14:35:27 compute-1 nova_compute[225855]: 2026-01-20 14:35:27.464 225859 INFO nova.compute.manager [None req-101774a7-9f44-44b5-b6ce-5de2fe11cb61 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] [instance: 680a9e49-0486-46a0-8857-99a7a56c46e1] Took 5.26 seconds to snapshot the instance on the hypervisor.
Jan 20 14:35:27 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:35:27 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:35:27 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:35:27.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:35:27 compute-1 ceph-mon[81775]: pgmap v1369: 321 pgs: 321 active+clean; 522 MiB data, 795 MiB used, 20 GiB / 21 GiB avail; 14 MiB/s rd, 7.5 MiB/s wr, 531 op/s
Jan 20 14:35:28 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:35:28 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:35:28 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:35:28.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:35:29 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/2919091173' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:35:29 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/3279341162' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:35:29 compute-1 nova_compute[225855]: 2026-01-20 14:35:29.367 225859 DEBUG oslo_concurrency.lockutils [None req-d3e49ae9-5b42-4e29-8be6-815fdb92fbdb f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Acquiring lock "2ec7b07d-b593-46b7-9751-b6116e4d2cec" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:35:29 compute-1 nova_compute[225855]: 2026-01-20 14:35:29.367 225859 DEBUG oslo_concurrency.lockutils [None req-d3e49ae9-5b42-4e29-8be6-815fdb92fbdb f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Lock "2ec7b07d-b593-46b7-9751-b6116e4d2cec" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:35:29 compute-1 nova_compute[225855]: 2026-01-20 14:35:29.367 225859 DEBUG oslo_concurrency.lockutils [None req-d3e49ae9-5b42-4e29-8be6-815fdb92fbdb f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Acquiring lock "2ec7b07d-b593-46b7-9751-b6116e4d2cec-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:35:29 compute-1 nova_compute[225855]: 2026-01-20 14:35:29.368 225859 DEBUG oslo_concurrency.lockutils [None req-d3e49ae9-5b42-4e29-8be6-815fdb92fbdb f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Lock "2ec7b07d-b593-46b7-9751-b6116e4d2cec-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:35:29 compute-1 nova_compute[225855]: 2026-01-20 14:35:29.368 225859 DEBUG oslo_concurrency.lockutils [None req-d3e49ae9-5b42-4e29-8be6-815fdb92fbdb f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Lock "2ec7b07d-b593-46b7-9751-b6116e4d2cec-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:35:29 compute-1 nova_compute[225855]: 2026-01-20 14:35:29.369 225859 INFO nova.compute.manager [None req-d3e49ae9-5b42-4e29-8be6-815fdb92fbdb f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] [instance: 2ec7b07d-b593-46b7-9751-b6116e4d2cec] Terminating instance
Jan 20 14:35:29 compute-1 nova_compute[225855]: 2026-01-20 14:35:29.370 225859 DEBUG nova.compute.manager [None req-d3e49ae9-5b42-4e29-8be6-815fdb92fbdb f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] [instance: 2ec7b07d-b593-46b7-9751-b6116e4d2cec] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 20 14:35:29 compute-1 kernel: tap73e232f9-38 (unregistering): left promiscuous mode
Jan 20 14:35:29 compute-1 NetworkManager[49104]: <info>  [1768919729.4358] device (tap73e232f9-38): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 20 14:35:29 compute-1 nova_compute[225855]: 2026-01-20 14:35:29.445 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:35:29 compute-1 ovn_controller[130490]: 2026-01-20T14:35:29Z|00163|binding|INFO|Releasing lport 73e232f9-3860-4b9a-9cec-535fa2fb0c9f from this chassis (sb_readonly=0)
Jan 20 14:35:29 compute-1 ovn_controller[130490]: 2026-01-20T14:35:29Z|00164|binding|INFO|Setting lport 73e232f9-3860-4b9a-9cec-535fa2fb0c9f down in Southbound
Jan 20 14:35:29 compute-1 ovn_controller[130490]: 2026-01-20T14:35:29Z|00165|binding|INFO|Removing iface tap73e232f9-38 ovn-installed in OVS
Jan 20 14:35:29 compute-1 nova_compute[225855]: 2026-01-20 14:35:29.448 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:35:29 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:35:29.452 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:17:6a:15 10.100.0.6'], port_security=['fa:16:3e:17:6a:15 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '2ec7b07d-b593-46b7-9751-b6116e4d2cec', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-33c9a20a-d976-42a8-b8bf-f83ddfc97c9a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a841e7a1434c488390475174e10bc161', 'neutron:revision_number': '8', 'neutron:security_group_ids': '0bbdea05-fba7-47c7-ba4e-5dac58212a25', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c43dea88-ea55-4069-a4be-2c30a432a754, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], logical_port=73e232f9-3860-4b9a-9cec-535fa2fb0c9f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:35:29 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:35:29.454 140354 INFO neutron.agent.ovn.metadata.agent [-] Port 73e232f9-3860-4b9a-9cec-535fa2fb0c9f in datapath 33c9a20a-d976-42a8-b8bf-f83ddfc97c9a unbound from our chassis
Jan 20 14:35:29 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:35:29.455 140354 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 33c9a20a-d976-42a8-b8bf-f83ddfc97c9a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 20 14:35:29 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:35:29.457 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[46e86d9c-60e4-469e-a549-feb877e22561]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:35:29 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:35:29.457 140354 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-33c9a20a-d976-42a8-b8bf-f83ddfc97c9a namespace which is not needed anymore
Jan 20 14:35:29 compute-1 nova_compute[225855]: 2026-01-20 14:35:29.481 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:35:29 compute-1 systemd[1]: machine-qemu\x2d24\x2dinstance\x2d00000027.scope: Deactivated successfully.
Jan 20 14:35:29 compute-1 systemd[1]: machine-qemu\x2d24\x2dinstance\x2d00000027.scope: Consumed 13.575s CPU time.
Jan 20 14:35:29 compute-1 systemd-machined[194361]: Machine qemu-24-instance-00000027 terminated.
Jan 20 14:35:29 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:35:29 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:35:29 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:35:29.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:35:29 compute-1 nova_compute[225855]: 2026-01-20 14:35:29.580 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:35:29 compute-1 nova_compute[225855]: 2026-01-20 14:35:29.589 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:35:29 compute-1 nova_compute[225855]: 2026-01-20 14:35:29.594 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:35:29 compute-1 nova_compute[225855]: 2026-01-20 14:35:29.606 225859 INFO nova.virt.libvirt.driver [-] [instance: 2ec7b07d-b593-46b7-9751-b6116e4d2cec] Instance destroyed successfully.
Jan 20 14:35:29 compute-1 nova_compute[225855]: 2026-01-20 14:35:29.607 225859 DEBUG nova.objects.instance [None req-d3e49ae9-5b42-4e29-8be6-815fdb92fbdb f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Lazy-loading 'resources' on Instance uuid 2ec7b07d-b593-46b7-9751-b6116e4d2cec obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:35:29 compute-1 neutron-haproxy-ovnmeta-33c9a20a-d976-42a8-b8bf-f83ddfc97c9a[244351]: [NOTICE]   (244356) : haproxy version is 2.8.14-c23fe91
Jan 20 14:35:29 compute-1 neutron-haproxy-ovnmeta-33c9a20a-d976-42a8-b8bf-f83ddfc97c9a[244351]: [NOTICE]   (244356) : path to executable is /usr/sbin/haproxy
Jan 20 14:35:29 compute-1 neutron-haproxy-ovnmeta-33c9a20a-d976-42a8-b8bf-f83ddfc97c9a[244351]: [WARNING]  (244356) : Exiting Master process...
Jan 20 14:35:29 compute-1 neutron-haproxy-ovnmeta-33c9a20a-d976-42a8-b8bf-f83ddfc97c9a[244351]: [WARNING]  (244356) : Exiting Master process...
Jan 20 14:35:29 compute-1 neutron-haproxy-ovnmeta-33c9a20a-d976-42a8-b8bf-f83ddfc97c9a[244351]: [ALERT]    (244356) : Current worker (244358) exited with code 143 (Terminated)
Jan 20 14:35:29 compute-1 neutron-haproxy-ovnmeta-33c9a20a-d976-42a8-b8bf-f83ddfc97c9a[244351]: [WARNING]  (244356) : All workers exited. Exiting... (0)
Jan 20 14:35:29 compute-1 systemd[1]: libpod-ab810e99d7949df31ae8ecec1ce4cb6e08341183c61e4c0ebc73d5a2b21ea83d.scope: Deactivated successfully.
Jan 20 14:35:29 compute-1 nova_compute[225855]: 2026-01-20 14:35:29.620 225859 DEBUG nova.virt.libvirt.vif [None req-d3e49ae9-5b42-4e29-8be6-815fdb92fbdb f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-20T14:33:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1907009380',display_name='tempest-ServersAdminTestJSON-server-1907009380',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1907009380',id=39,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-20T14:35:13Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a841e7a1434c488390475174e10bc161',ramdisk_id='',reservation_id='r-8k5b63bj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='2',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-1261404595',owner_user_name='tempest-ServersAdminTestJSON-1261404595-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T14:35:16Z,user_data=None,user_id='f51c395107c84dbd9067113b84ff01dd',uuid=2ec7b07d-b593-46b7-9751-b6116e4d2cec,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "73e232f9-3860-4b9a-9cec-535fa2fb0c9f", "address": "fa:16:3e:17:6a:15", "network": {"id": "33c9a20a-d976-42a8-b8bf-f83ddfc97c9a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-202342440-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a841e7a1434c488390475174e10bc161", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap73e232f9-38", "ovs_interfaceid": "73e232f9-3860-4b9a-9cec-535fa2fb0c9f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 20 14:35:29 compute-1 nova_compute[225855]: 2026-01-20 14:35:29.621 225859 DEBUG nova.network.os_vif_util [None req-d3e49ae9-5b42-4e29-8be6-815fdb92fbdb f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Converting VIF {"id": "73e232f9-3860-4b9a-9cec-535fa2fb0c9f", "address": "fa:16:3e:17:6a:15", "network": {"id": "33c9a20a-d976-42a8-b8bf-f83ddfc97c9a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-202342440-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a841e7a1434c488390475174e10bc161", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap73e232f9-38", "ovs_interfaceid": "73e232f9-3860-4b9a-9cec-535fa2fb0c9f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:35:29 compute-1 podman[247132]: 2026-01-20 14:35:29.621629097 +0000 UTC m=+0.058375389 container died ab810e99d7949df31ae8ecec1ce4cb6e08341183c61e4c0ebc73d5a2b21ea83d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-33c9a20a-d976-42a8-b8bf-f83ddfc97c9a, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 20 14:35:29 compute-1 nova_compute[225855]: 2026-01-20 14:35:29.622 225859 DEBUG nova.network.os_vif_util [None req-d3e49ae9-5b42-4e29-8be6-815fdb92fbdb f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:17:6a:15,bridge_name='br-int',has_traffic_filtering=True,id=73e232f9-3860-4b9a-9cec-535fa2fb0c9f,network=Network(33c9a20a-d976-42a8-b8bf-f83ddfc97c9a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap73e232f9-38') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:35:29 compute-1 nova_compute[225855]: 2026-01-20 14:35:29.623 225859 DEBUG os_vif [None req-d3e49ae9-5b42-4e29-8be6-815fdb92fbdb f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:17:6a:15,bridge_name='br-int',has_traffic_filtering=True,id=73e232f9-3860-4b9a-9cec-535fa2fb0c9f,network=Network(33c9a20a-d976-42a8-b8bf-f83ddfc97c9a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap73e232f9-38') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 20 14:35:29 compute-1 nova_compute[225855]: 2026-01-20 14:35:29.625 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:35:29 compute-1 nova_compute[225855]: 2026-01-20 14:35:29.625 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap73e232f9-38, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:35:29 compute-1 nova_compute[225855]: 2026-01-20 14:35:29.628 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:35:29 compute-1 nova_compute[225855]: 2026-01-20 14:35:29.632 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 14:35:29 compute-1 nova_compute[225855]: 2026-01-20 14:35:29.633 225859 INFO os_vif [None req-d3e49ae9-5b42-4e29-8be6-815fdb92fbdb f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:17:6a:15,bridge_name='br-int',has_traffic_filtering=True,id=73e232f9-3860-4b9a-9cec-535fa2fb0c9f,network=Network(33c9a20a-d976-42a8-b8bf-f83ddfc97c9a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap73e232f9-38')
Jan 20 14:35:29 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ab810e99d7949df31ae8ecec1ce4cb6e08341183c61e4c0ebc73d5a2b21ea83d-userdata-shm.mount: Deactivated successfully.
Jan 20 14:35:29 compute-1 systemd[1]: var-lib-containers-storage-overlay-61f71f999ebe55922d4470c26bf6ec7028f2091bfc297c60f9663a1040a21c70-merged.mount: Deactivated successfully.
Jan 20 14:35:29 compute-1 podman[247132]: 2026-01-20 14:35:29.677254487 +0000 UTC m=+0.114000779 container cleanup ab810e99d7949df31ae8ecec1ce4cb6e08341183c61e4c0ebc73d5a2b21ea83d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-33c9a20a-d976-42a8-b8bf-f83ddfc97c9a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202)
Jan 20 14:35:29 compute-1 systemd[1]: libpod-conmon-ab810e99d7949df31ae8ecec1ce4cb6e08341183c61e4c0ebc73d5a2b21ea83d.scope: Deactivated successfully.
Jan 20 14:35:29 compute-1 nova_compute[225855]: 2026-01-20 14:35:29.686 225859 DEBUG nova.compute.manager [req-2478d1ca-616a-4a9f-8baf-f0115f84292d req-8d128d77-2b1f-4992-848c-204bb58fce0a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 2ec7b07d-b593-46b7-9751-b6116e4d2cec] Received event network-vif-unplugged-73e232f9-3860-4b9a-9cec-535fa2fb0c9f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:35:29 compute-1 nova_compute[225855]: 2026-01-20 14:35:29.686 225859 DEBUG oslo_concurrency.lockutils [req-2478d1ca-616a-4a9f-8baf-f0115f84292d req-8d128d77-2b1f-4992-848c-204bb58fce0a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "2ec7b07d-b593-46b7-9751-b6116e4d2cec-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:35:29 compute-1 nova_compute[225855]: 2026-01-20 14:35:29.686 225859 DEBUG oslo_concurrency.lockutils [req-2478d1ca-616a-4a9f-8baf-f0115f84292d req-8d128d77-2b1f-4992-848c-204bb58fce0a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "2ec7b07d-b593-46b7-9751-b6116e4d2cec-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:35:29 compute-1 nova_compute[225855]: 2026-01-20 14:35:29.686 225859 DEBUG oslo_concurrency.lockutils [req-2478d1ca-616a-4a9f-8baf-f0115f84292d req-8d128d77-2b1f-4992-848c-204bb58fce0a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "2ec7b07d-b593-46b7-9751-b6116e4d2cec-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:35:29 compute-1 nova_compute[225855]: 2026-01-20 14:35:29.687 225859 DEBUG nova.compute.manager [req-2478d1ca-616a-4a9f-8baf-f0115f84292d req-8d128d77-2b1f-4992-848c-204bb58fce0a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 2ec7b07d-b593-46b7-9751-b6116e4d2cec] No waiting events found dispatching network-vif-unplugged-73e232f9-3860-4b9a-9cec-535fa2fb0c9f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:35:29 compute-1 nova_compute[225855]: 2026-01-20 14:35:29.687 225859 DEBUG nova.compute.manager [req-2478d1ca-616a-4a9f-8baf-f0115f84292d req-8d128d77-2b1f-4992-848c-204bb58fce0a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 2ec7b07d-b593-46b7-9751-b6116e4d2cec] Received event network-vif-unplugged-73e232f9-3860-4b9a-9cec-535fa2fb0c9f for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 20 14:35:29 compute-1 podman[247187]: 2026-01-20 14:35:29.74006577 +0000 UTC m=+0.042363187 container remove ab810e99d7949df31ae8ecec1ce4cb6e08341183c61e4c0ebc73d5a2b21ea83d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-33c9a20a-d976-42a8-b8bf-f83ddfc97c9a, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 20 14:35:29 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:35:29.747 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[d1bc923d-4dce-4577-a0e4-008fa21eb39c]: (4, ('Tue Jan 20 02:35:29 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-33c9a20a-d976-42a8-b8bf-f83ddfc97c9a (ab810e99d7949df31ae8ecec1ce4cb6e08341183c61e4c0ebc73d5a2b21ea83d)\nab810e99d7949df31ae8ecec1ce4cb6e08341183c61e4c0ebc73d5a2b21ea83d\nTue Jan 20 02:35:29 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-33c9a20a-d976-42a8-b8bf-f83ddfc97c9a (ab810e99d7949df31ae8ecec1ce4cb6e08341183c61e4c0ebc73d5a2b21ea83d)\nab810e99d7949df31ae8ecec1ce4cb6e08341183c61e4c0ebc73d5a2b21ea83d\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:35:29 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:35:29.748 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[17df3e05-32af-4710-ae51-1fd1710ba162]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:35:29 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:35:29.749 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap33c9a20a-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:35:29 compute-1 kernel: tap33c9a20a-d0: left promiscuous mode
Jan 20 14:35:29 compute-1 nova_compute[225855]: 2026-01-20 14:35:29.751 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:35:29 compute-1 nova_compute[225855]: 2026-01-20 14:35:29.765 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:35:29 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:35:29.768 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[8713dca9-eb06-44c2-a4f6-13b6e74995bd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:35:29 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:35:29.788 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[b5582c61-20f7-4993-a661-f54d12ca6af6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:35:29 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:35:29.789 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[283fbf91-f3f4-4461-bb0d-009c3bb6bd48]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:35:29 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:35:29.808 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[b63a1dfd-14ab-409e-af6e-6591abf9dcfa]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 466048, 'reachable_time': 17265, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 247202, 'error': None, 'target': 'ovnmeta-33c9a20a-d976-42a8-b8bf-f83ddfc97c9a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:35:29 compute-1 systemd[1]: run-netns-ovnmeta\x2d33c9a20a\x2dd976\x2d42a8\x2db8bf\x2df83ddfc97c9a.mount: Deactivated successfully.
Jan 20 14:35:29 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:35:29.811 140466 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-33c9a20a-d976-42a8-b8bf-f83ddfc97c9a deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 20 14:35:29 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:35:29.811 140466 DEBUG oslo.privsep.daemon [-] privsep: reply[1e6184e2-a0ec-4d78-87c8-9bbca2e0b207]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:35:30 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:35:30 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/3897833887' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:35:30 compute-1 ceph-mon[81775]: pgmap v1370: 321 pgs: 321 active+clean; 533 MiB data, 805 MiB used, 20 GiB / 21 GiB avail; 12 MiB/s rd, 10 MiB/s wr, 506 op/s
Jan 20 14:35:30 compute-1 nova_compute[225855]: 2026-01-20 14:35:30.117 225859 INFO nova.virt.libvirt.driver [None req-d3e49ae9-5b42-4e29-8be6-815fdb92fbdb f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] [instance: 2ec7b07d-b593-46b7-9751-b6116e4d2cec] Deleting instance files /var/lib/nova/instances/2ec7b07d-b593-46b7-9751-b6116e4d2cec_del
Jan 20 14:35:30 compute-1 nova_compute[225855]: 2026-01-20 14:35:30.117 225859 INFO nova.virt.libvirt.driver [None req-d3e49ae9-5b42-4e29-8be6-815fdb92fbdb f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] [instance: 2ec7b07d-b593-46b7-9751-b6116e4d2cec] Deletion of /var/lib/nova/instances/2ec7b07d-b593-46b7-9751-b6116e4d2cec_del complete
Jan 20 14:35:30 compute-1 nova_compute[225855]: 2026-01-20 14:35:30.165 225859 INFO nova.compute.manager [None req-d3e49ae9-5b42-4e29-8be6-815fdb92fbdb f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] [instance: 2ec7b07d-b593-46b7-9751-b6116e4d2cec] Took 0.80 seconds to destroy the instance on the hypervisor.
Jan 20 14:35:30 compute-1 nova_compute[225855]: 2026-01-20 14:35:30.166 225859 DEBUG oslo.service.loopingcall [None req-d3e49ae9-5b42-4e29-8be6-815fdb92fbdb f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 20 14:35:30 compute-1 nova_compute[225855]: 2026-01-20 14:35:30.166 225859 DEBUG nova.compute.manager [-] [instance: 2ec7b07d-b593-46b7-9751-b6116e4d2cec] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 20 14:35:30 compute-1 nova_compute[225855]: 2026-01-20 14:35:30.166 225859 DEBUG nova.network.neutron [-] [instance: 2ec7b07d-b593-46b7-9751-b6116e4d2cec] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 20 14:35:30 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:35:30 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:35:30 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:35:30.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:35:31 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:35:31 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:35:31 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:35:31.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:35:31 compute-1 nova_compute[225855]: 2026-01-20 14:35:31.602 225859 DEBUG nova.network.neutron [-] [instance: 2ec7b07d-b593-46b7-9751-b6116e4d2cec] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:35:31 compute-1 nova_compute[225855]: 2026-01-20 14:35:31.623 225859 INFO nova.compute.manager [-] [instance: 2ec7b07d-b593-46b7-9751-b6116e4d2cec] Took 1.46 seconds to deallocate network for instance.
Jan 20 14:35:31 compute-1 nova_compute[225855]: 2026-01-20 14:35:31.665 225859 DEBUG oslo_concurrency.lockutils [None req-d3e49ae9-5b42-4e29-8be6-815fdb92fbdb f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:35:31 compute-1 nova_compute[225855]: 2026-01-20 14:35:31.665 225859 DEBUG oslo_concurrency.lockutils [None req-d3e49ae9-5b42-4e29-8be6-815fdb92fbdb f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:35:31 compute-1 nova_compute[225855]: 2026-01-20 14:35:31.725 225859 DEBUG oslo_concurrency.processutils [None req-d3e49ae9-5b42-4e29-8be6-815fdb92fbdb f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:35:31 compute-1 nova_compute[225855]: 2026-01-20 14:35:31.798 225859 DEBUG nova.compute.manager [req-1d4859ea-c912-4cf8-b487-337fbe4305ad req-66f290ba-03cd-43bd-ba73-500119eb195f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 2ec7b07d-b593-46b7-9751-b6116e4d2cec] Received event network-vif-plugged-73e232f9-3860-4b9a-9cec-535fa2fb0c9f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:35:31 compute-1 nova_compute[225855]: 2026-01-20 14:35:31.799 225859 DEBUG oslo_concurrency.lockutils [req-1d4859ea-c912-4cf8-b487-337fbe4305ad req-66f290ba-03cd-43bd-ba73-500119eb195f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "2ec7b07d-b593-46b7-9751-b6116e4d2cec-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:35:31 compute-1 nova_compute[225855]: 2026-01-20 14:35:31.801 225859 DEBUG oslo_concurrency.lockutils [req-1d4859ea-c912-4cf8-b487-337fbe4305ad req-66f290ba-03cd-43bd-ba73-500119eb195f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "2ec7b07d-b593-46b7-9751-b6116e4d2cec-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:35:31 compute-1 nova_compute[225855]: 2026-01-20 14:35:31.801 225859 DEBUG oslo_concurrency.lockutils [req-1d4859ea-c912-4cf8-b487-337fbe4305ad req-66f290ba-03cd-43bd-ba73-500119eb195f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "2ec7b07d-b593-46b7-9751-b6116e4d2cec-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:35:31 compute-1 nova_compute[225855]: 2026-01-20 14:35:31.802 225859 DEBUG nova.compute.manager [req-1d4859ea-c912-4cf8-b487-337fbe4305ad req-66f290ba-03cd-43bd-ba73-500119eb195f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 2ec7b07d-b593-46b7-9751-b6116e4d2cec] No waiting events found dispatching network-vif-plugged-73e232f9-3860-4b9a-9cec-535fa2fb0c9f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:35:31 compute-1 nova_compute[225855]: 2026-01-20 14:35:31.802 225859 WARNING nova.compute.manager [req-1d4859ea-c912-4cf8-b487-337fbe4305ad req-66f290ba-03cd-43bd-ba73-500119eb195f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 2ec7b07d-b593-46b7-9751-b6116e4d2cec] Received unexpected event network-vif-plugged-73e232f9-3860-4b9a-9cec-535fa2fb0c9f for instance with vm_state deleted and task_state None.
Jan 20 14:35:31 compute-1 nova_compute[225855]: 2026-01-20 14:35:31.803 225859 DEBUG nova.compute.manager [req-1d4859ea-c912-4cf8-b487-337fbe4305ad req-66f290ba-03cd-43bd-ba73-500119eb195f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 2ec7b07d-b593-46b7-9751-b6116e4d2cec] Received event network-vif-deleted-73e232f9-3860-4b9a-9cec-535fa2fb0c9f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:35:32 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:35:32 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1217843012' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:35:32 compute-1 nova_compute[225855]: 2026-01-20 14:35:32.183 225859 DEBUG oslo_concurrency.processutils [None req-d3e49ae9-5b42-4e29-8be6-815fdb92fbdb f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:35:32 compute-1 nova_compute[225855]: 2026-01-20 14:35:32.190 225859 DEBUG nova.compute.provider_tree [None req-d3e49ae9-5b42-4e29-8be6-815fdb92fbdb f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:35:32 compute-1 nova_compute[225855]: 2026-01-20 14:35:32.202 225859 DEBUG nova.scheduler.client.report [None req-d3e49ae9-5b42-4e29-8be6-815fdb92fbdb f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:35:32 compute-1 nova_compute[225855]: 2026-01-20 14:35:32.226 225859 DEBUG oslo_concurrency.lockutils [None req-d3e49ae9-5b42-4e29-8be6-815fdb92fbdb f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.561s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:35:32 compute-1 nova_compute[225855]: 2026-01-20 14:35:32.256 225859 INFO nova.scheduler.client.report [None req-d3e49ae9-5b42-4e29-8be6-815fdb92fbdb f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Deleted allocations for instance 2ec7b07d-b593-46b7-9751-b6116e4d2cec
Jan 20 14:35:32 compute-1 nova_compute[225855]: 2026-01-20 14:35:32.324 225859 DEBUG oslo_concurrency.lockutils [None req-d3e49ae9-5b42-4e29-8be6-815fdb92fbdb f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Lock "2ec7b07d-b593-46b7-9751-b6116e4d2cec" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.957s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:35:32 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e189 e189: 3 total, 3 up, 3 in
Jan 20 14:35:32 compute-1 ceph-mon[81775]: pgmap v1371: 321 pgs: 321 active+clean; 485 MiB data, 775 MiB used, 20 GiB / 21 GiB avail; 9.4 MiB/s rd, 9.1 MiB/s wr, 499 op/s
Jan 20 14:35:32 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/1217843012' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:35:32 compute-1 ceph-mon[81775]: osdmap e189: 3 total, 3 up, 3 in
Jan 20 14:35:32 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:35:32 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:35:32 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:35:32.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:35:33 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:35:33 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:35:33 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:35:33.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:35:33 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e190 e190: 3 total, 3 up, 3 in
Jan 20 14:35:34 compute-1 ceph-mon[81775]: pgmap v1373: 321 pgs: 321 active+clean; 485 MiB data, 775 MiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 8.3 MiB/s wr, 333 op/s
Jan 20 14:35:34 compute-1 ceph-mon[81775]: osdmap e190: 3 total, 3 up, 3 in
Jan 20 14:35:34 compute-1 nova_compute[225855]: 2026-01-20 14:35:34.582 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:35:34 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e191 e191: 3 total, 3 up, 3 in
Jan 20 14:35:34 compute-1 nova_compute[225855]: 2026-01-20 14:35:34.628 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:35:34 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:35:34 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:35:34 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:35:34.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:35:35 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e191 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:35:35 compute-1 nova_compute[225855]: 2026-01-20 14:35:35.067 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:35:35 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:35:35 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:35:35 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:35:35.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:35:35 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e192 e192: 3 total, 3 up, 3 in
Jan 20 14:35:35 compute-1 ceph-mon[81775]: osdmap e191: 3 total, 3 up, 3 in
Jan 20 14:35:36 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:35:36 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:35:36 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:35:36.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:35:36 compute-1 nova_compute[225855]: 2026-01-20 14:35:36.781 225859 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1768919721.7790616, fdba30ff-e02a-4857-92f6-1828ce3ab175 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:35:36 compute-1 nova_compute[225855]: 2026-01-20 14:35:36.781 225859 INFO nova.compute.manager [-] [instance: fdba30ff-e02a-4857-92f6-1828ce3ab175] VM Stopped (Lifecycle Event)
Jan 20 14:35:36 compute-1 nova_compute[225855]: 2026-01-20 14:35:36.821 225859 DEBUG nova.compute.manager [None req-c9863632-9eb7-41f9-8857-36080c01ae35 - - - - - -] [instance: fdba30ff-e02a-4857-92f6-1828ce3ab175] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:35:36 compute-1 ceph-mon[81775]: pgmap v1376: 321 pgs: 321 active+clean; 470 MiB data, 750 MiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 4.3 MiB/s wr, 236 op/s
Jan 20 14:35:36 compute-1 ceph-mon[81775]: osdmap e192: 3 total, 3 up, 3 in
Jan 20 14:35:36 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e193 e193: 3 total, 3 up, 3 in
Jan 20 14:35:37 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:35:37 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:35:37 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:35:37.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:35:37 compute-1 ceph-mon[81775]: osdmap e193: 3 total, 3 up, 3 in
Jan 20 14:35:37 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e194 e194: 3 total, 3 up, 3 in
Jan 20 14:35:38 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:35:38 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:35:38 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:35:38.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:35:38 compute-1 ceph-mon[81775]: pgmap v1379: 321 pgs: 321 active+clean; 503 MiB data, 765 MiB used, 20 GiB / 21 GiB avail; 7.7 MiB/s rd, 7.0 MiB/s wr, 205 op/s
Jan 20 14:35:38 compute-1 ceph-mon[81775]: osdmap e194: 3 total, 3 up, 3 in
Jan 20 14:35:38 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e195 e195: 3 total, 3 up, 3 in
Jan 20 14:35:39 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:35:39 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:35:39 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:35:39.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:35:39 compute-1 nova_compute[225855]: 2026-01-20 14:35:39.583 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:35:39 compute-1 nova_compute[225855]: 2026-01-20 14:35:39.630 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:35:39 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e196 e196: 3 total, 3 up, 3 in
Jan 20 14:35:39 compute-1 ceph-mon[81775]: osdmap e195: 3 total, 3 up, 3 in
Jan 20 14:35:39 compute-1 ceph-mon[81775]: pgmap v1382: 321 pgs: 321 active+clean; 503 MiB data, 770 MiB used, 20 GiB / 21 GiB avail; 7.5 MiB/s rd, 8.4 MiB/s wr, 190 op/s
Jan 20 14:35:40 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e196 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:35:40 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:35:40 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:35:40 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:35:40.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:35:40 compute-1 ceph-mon[81775]: osdmap e196: 3 total, 3 up, 3 in
Jan 20 14:35:41 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:35:41 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:35:41 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:35:41.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:35:41 compute-1 ceph-mon[81775]: pgmap v1384: 321 pgs: 2 active+clean+snaptrim, 5 active+clean+snaptrim_wait, 314 active+clean; 283 MiB data, 667 MiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 4.0 MiB/s wr, 275 op/s
Jan 20 14:35:42 compute-1 podman[247232]: 2026-01-20 14:35:42.080570797 +0000 UTC m=+0.117558569 container health_status 72be95c12e041eb2cd1a16ad70dfa1391d9f03a5f3180a614ead9459dcfcd46f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, container_name=ovn_controller, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, 
tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 20 14:35:42 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:35:42 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:35:42 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:35:42.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:35:42 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e197 e197: 3 total, 3 up, 3 in
Jan 20 14:35:42 compute-1 sudo[247259]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:35:42 compute-1 sudo[247259]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:35:42 compute-1 sudo[247259]: pam_unix(sudo:session): session closed for user root
Jan 20 14:35:42 compute-1 sudo[247284]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:35:42 compute-1 sudo[247284]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:35:42 compute-1 sudo[247284]: pam_unix(sudo:session): session closed for user root
Jan 20 14:35:43 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/4269044680' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:35:43 compute-1 ceph-mon[81775]: osdmap e197: 3 total, 3 up, 3 in
Jan 20 14:35:43 compute-1 nova_compute[225855]: 2026-01-20 14:35:43.385 225859 DEBUG oslo_concurrency.lockutils [None req-b9ab3b0c-4f1e-446f-b6f4-8690c5ef6d3d 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] Acquiring lock "680a9e49-0486-46a0-8857-99a7a56c46e1" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:35:43 compute-1 nova_compute[225855]: 2026-01-20 14:35:43.386 225859 DEBUG oslo_concurrency.lockutils [None req-b9ab3b0c-4f1e-446f-b6f4-8690c5ef6d3d 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] Lock "680a9e49-0486-46a0-8857-99a7a56c46e1" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:35:43 compute-1 nova_compute[225855]: 2026-01-20 14:35:43.387 225859 DEBUG oslo_concurrency.lockutils [None req-b9ab3b0c-4f1e-446f-b6f4-8690c5ef6d3d 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] Acquiring lock "680a9e49-0486-46a0-8857-99a7a56c46e1-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:35:43 compute-1 nova_compute[225855]: 2026-01-20 14:35:43.389 225859 DEBUG oslo_concurrency.lockutils [None req-b9ab3b0c-4f1e-446f-b6f4-8690c5ef6d3d 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] Lock "680a9e49-0486-46a0-8857-99a7a56c46e1-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:35:43 compute-1 nova_compute[225855]: 2026-01-20 14:35:43.389 225859 DEBUG oslo_concurrency.lockutils [None req-b9ab3b0c-4f1e-446f-b6f4-8690c5ef6d3d 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] Lock "680a9e49-0486-46a0-8857-99a7a56c46e1-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:35:43 compute-1 nova_compute[225855]: 2026-01-20 14:35:43.391 225859 INFO nova.compute.manager [None req-b9ab3b0c-4f1e-446f-b6f4-8690c5ef6d3d 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] [instance: 680a9e49-0486-46a0-8857-99a7a56c46e1] Terminating instance
Jan 20 14:35:43 compute-1 nova_compute[225855]: 2026-01-20 14:35:43.394 225859 DEBUG oslo_concurrency.lockutils [None req-b9ab3b0c-4f1e-446f-b6f4-8690c5ef6d3d 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] Acquiring lock "refresh_cache-680a9e49-0486-46a0-8857-99a7a56c46e1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:35:43 compute-1 nova_compute[225855]: 2026-01-20 14:35:43.394 225859 DEBUG oslo_concurrency.lockutils [None req-b9ab3b0c-4f1e-446f-b6f4-8690c5ef6d3d 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] Acquired lock "refresh_cache-680a9e49-0486-46a0-8857-99a7a56c46e1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:35:43 compute-1 nova_compute[225855]: 2026-01-20 14:35:43.395 225859 DEBUG nova.network.neutron [None req-b9ab3b0c-4f1e-446f-b6f4-8690c5ef6d3d 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] [instance: 680a9e49-0486-46a0-8857-99a7a56c46e1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 14:35:43 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:35:43 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:35:43 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:35:43.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:35:43 compute-1 nova_compute[225855]: 2026-01-20 14:35:43.594 225859 DEBUG nova.network.neutron [None req-b9ab3b0c-4f1e-446f-b6f4-8690c5ef6d3d 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] [instance: 680a9e49-0486-46a0-8857-99a7a56c46e1] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 20 14:35:43 compute-1 nova_compute[225855]: 2026-01-20 14:35:43.888 225859 DEBUG nova.network.neutron [None req-b9ab3b0c-4f1e-446f-b6f4-8690c5ef6d3d 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] [instance: 680a9e49-0486-46a0-8857-99a7a56c46e1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:35:43 compute-1 nova_compute[225855]: 2026-01-20 14:35:43.904 225859 DEBUG oslo_concurrency.lockutils [None req-b9ab3b0c-4f1e-446f-b6f4-8690c5ef6d3d 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] Releasing lock "refresh_cache-680a9e49-0486-46a0-8857-99a7a56c46e1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:35:43 compute-1 nova_compute[225855]: 2026-01-20 14:35:43.905 225859 DEBUG nova.compute.manager [None req-b9ab3b0c-4f1e-446f-b6f4-8690c5ef6d3d 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] [instance: 680a9e49-0486-46a0-8857-99a7a56c46e1] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 20 14:35:44 compute-1 systemd[1]: machine-qemu\x2d23\x2dinstance\x2d0000002e.scope: Deactivated successfully.
Jan 20 14:35:44 compute-1 systemd[1]: machine-qemu\x2d23\x2dinstance\x2d0000002e.scope: Consumed 14.256s CPU time.
Jan 20 14:35:44 compute-1 systemd-machined[194361]: Machine qemu-23-instance-0000002e terminated.
Jan 20 14:35:44 compute-1 nova_compute[225855]: 2026-01-20 14:35:44.332 225859 INFO nova.virt.libvirt.driver [-] [instance: 680a9e49-0486-46a0-8857-99a7a56c46e1] Instance destroyed successfully.
Jan 20 14:35:44 compute-1 nova_compute[225855]: 2026-01-20 14:35:44.333 225859 DEBUG nova.objects.instance [None req-b9ab3b0c-4f1e-446f-b6f4-8690c5ef6d3d 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] Lazy-loading 'resources' on Instance uuid 680a9e49-0486-46a0-8857-99a7a56c46e1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:35:44 compute-1 ceph-mon[81775]: pgmap v1386: 321 pgs: 2 active+clean+snaptrim, 5 active+clean+snaptrim_wait, 314 active+clean; 283 MiB data, 667 MiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 3.3 MiB/s wr, 227 op/s
Jan 20 14:35:44 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/1152927409' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:35:44 compute-1 nova_compute[225855]: 2026-01-20 14:35:44.586 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:35:44 compute-1 nova_compute[225855]: 2026-01-20 14:35:44.605 225859 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1768919729.6044123, 2ec7b07d-b593-46b7-9751-b6116e4d2cec => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:35:44 compute-1 nova_compute[225855]: 2026-01-20 14:35:44.605 225859 INFO nova.compute.manager [-] [instance: 2ec7b07d-b593-46b7-9751-b6116e4d2cec] VM Stopped (Lifecycle Event)
Jan 20 14:35:44 compute-1 nova_compute[225855]: 2026-01-20 14:35:44.630 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:35:44 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:35:44 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:35:44 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:35:44.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:35:44 compute-1 nova_compute[225855]: 2026-01-20 14:35:44.865 225859 DEBUG nova.compute.manager [None req-1294c7fe-eb58-4f5d-9aee-24b8391d9120 - - - - - -] [instance: 2ec7b07d-b593-46b7-9751-b6116e4d2cec] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:35:45 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:35:45 compute-1 nova_compute[225855]: 2026-01-20 14:35:45.316 225859 INFO nova.virt.libvirt.driver [None req-b9ab3b0c-4f1e-446f-b6f4-8690c5ef6d3d 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] [instance: 680a9e49-0486-46a0-8857-99a7a56c46e1] Deleting instance files /var/lib/nova/instances/680a9e49-0486-46a0-8857-99a7a56c46e1_del
Jan 20 14:35:45 compute-1 nova_compute[225855]: 2026-01-20 14:35:45.317 225859 INFO nova.virt.libvirt.driver [None req-b9ab3b0c-4f1e-446f-b6f4-8690c5ef6d3d 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] [instance: 680a9e49-0486-46a0-8857-99a7a56c46e1] Deletion of /var/lib/nova/instances/680a9e49-0486-46a0-8857-99a7a56c46e1_del complete
Jan 20 14:35:45 compute-1 nova_compute[225855]: 2026-01-20 14:35:45.554 225859 INFO nova.compute.manager [None req-b9ab3b0c-4f1e-446f-b6f4-8690c5ef6d3d 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] [instance: 680a9e49-0486-46a0-8857-99a7a56c46e1] Took 1.65 seconds to destroy the instance on the hypervisor.
Jan 20 14:35:45 compute-1 nova_compute[225855]: 2026-01-20 14:35:45.555 225859 DEBUG oslo.service.loopingcall [None req-b9ab3b0c-4f1e-446f-b6f4-8690c5ef6d3d 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 20 14:35:45 compute-1 nova_compute[225855]: 2026-01-20 14:35:45.555 225859 DEBUG nova.compute.manager [-] [instance: 680a9e49-0486-46a0-8857-99a7a56c46e1] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 20 14:35:45 compute-1 nova_compute[225855]: 2026-01-20 14:35:45.556 225859 DEBUG nova.network.neutron [-] [instance: 680a9e49-0486-46a0-8857-99a7a56c46e1] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 20 14:35:45 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:35:45 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:35:45 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:35:45.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:35:45 compute-1 nova_compute[225855]: 2026-01-20 14:35:45.931 225859 DEBUG nova.network.neutron [-] [instance: 680a9e49-0486-46a0-8857-99a7a56c46e1] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 20 14:35:45 compute-1 nova_compute[225855]: 2026-01-20 14:35:45.953 225859 DEBUG nova.network.neutron [-] [instance: 680a9e49-0486-46a0-8857-99a7a56c46e1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:35:45 compute-1 nova_compute[225855]: 2026-01-20 14:35:45.967 225859 INFO nova.compute.manager [-] [instance: 680a9e49-0486-46a0-8857-99a7a56c46e1] Took 0.41 seconds to deallocate network for instance.
Jan 20 14:35:46 compute-1 nova_compute[225855]: 2026-01-20 14:35:46.293 225859 DEBUG oslo_concurrency.lockutils [None req-b9ab3b0c-4f1e-446f-b6f4-8690c5ef6d3d 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:35:46 compute-1 nova_compute[225855]: 2026-01-20 14:35:46.293 225859 DEBUG oslo_concurrency.lockutils [None req-b9ab3b0c-4f1e-446f-b6f4-8690c5ef6d3d 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:35:46 compute-1 nova_compute[225855]: 2026-01-20 14:35:46.365 225859 DEBUG oslo_concurrency.processutils [None req-b9ab3b0c-4f1e-446f-b6f4-8690c5ef6d3d 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:35:46 compute-1 ceph-mon[81775]: pgmap v1387: 321 pgs: 321 active+clean; 121 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 173 KiB/s rd, 9.9 KiB/s wr, 252 op/s
Jan 20 14:35:46 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/1925318780' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:35:46 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:35:46 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:35:46 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:35:46.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:35:46 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:35:46 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4051494342' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:35:46 compute-1 nova_compute[225855]: 2026-01-20 14:35:46.804 225859 DEBUG oslo_concurrency.processutils [None req-b9ab3b0c-4f1e-446f-b6f4-8690c5ef6d3d 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:35:46 compute-1 nova_compute[225855]: 2026-01-20 14:35:46.810 225859 DEBUG nova.compute.provider_tree [None req-b9ab3b0c-4f1e-446f-b6f4-8690c5ef6d3d 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:35:46 compute-1 nova_compute[225855]: 2026-01-20 14:35:46.840 225859 DEBUG nova.scheduler.client.report [None req-b9ab3b0c-4f1e-446f-b6f4-8690c5ef6d3d 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:35:46 compute-1 nova_compute[225855]: 2026-01-20 14:35:46.995 225859 DEBUG oslo_concurrency.lockutils [None req-b9ab3b0c-4f1e-446f-b6f4-8690c5ef6d3d 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.702s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:35:47 compute-1 nova_compute[225855]: 2026-01-20 14:35:47.041 225859 INFO nova.scheduler.client.report [None req-b9ab3b0c-4f1e-446f-b6f4-8690c5ef6d3d 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] Deleted allocations for instance 680a9e49-0486-46a0-8857-99a7a56c46e1
Jan 20 14:35:47 compute-1 nova_compute[225855]: 2026-01-20 14:35:47.208 225859 DEBUG oslo_concurrency.lockutils [None req-b9ab3b0c-4f1e-446f-b6f4-8690c5ef6d3d 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] Lock "680a9e49-0486-46a0-8857-99a7a56c46e1" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.822s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:35:47 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e198 e198: 3 total, 3 up, 3 in
Jan 20 14:35:47 compute-1 ceph-mon[81775]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #61. Immutable memtables: 0.
Jan 20 14:35:47 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:35:47.558764) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 20 14:35:47 compute-1 ceph-mon[81775]: rocksdb: [db/flush_job.cc:856] [default] [JOB 35] Flushing memtable with next log file: 61
Jan 20 14:35:47 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768919747558890, "job": 35, "event": "flush_started", "num_memtables": 1, "num_entries": 2549, "num_deletes": 263, "total_data_size": 5587969, "memory_usage": 5645232, "flush_reason": "Manual Compaction"}
Jan 20 14:35:47 compute-1 ceph-mon[81775]: rocksdb: [db/flush_job.cc:885] [default] [JOB 35] Level-0 flush table #62: started
Jan 20 14:35:47 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:35:47 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:35:47 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:35:47.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:35:47 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768919747627484, "cf_name": "default", "job": 35, "event": "table_file_creation", "file_number": 62, "file_size": 3669107, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 30811, "largest_seqno": 33355, "table_properties": {"data_size": 3658662, "index_size": 6683, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2693, "raw_key_size": 22357, "raw_average_key_size": 21, "raw_value_size": 3637541, "raw_average_value_size": 3434, "num_data_blocks": 288, "num_entries": 1059, "num_filter_entries": 1059, "num_deletions": 263, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768919562, "oldest_key_time": 1768919562, "file_creation_time": 1768919747, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1539d774-8a6f-4e48-b253-137c44586344", "db_session_id": "LFF7G2OZDOU7TKQ8MKAH", "orig_file_number": 62, "seqno_to_time_mapping": "N/A"}}
Jan 20 14:35:47 compute-1 ceph-mon[81775]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 35] Flush lasted 68772 microseconds, and 10887 cpu microseconds.
Jan 20 14:35:47 compute-1 ceph-mon[81775]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 14:35:47 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:35:47.627532) [db/flush_job.cc:967] [default] [JOB 35] Level-0 flush table #62: 3669107 bytes OK
Jan 20 14:35:47 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:35:47.627552) [db/memtable_list.cc:519] [default] Level-0 commit table #62 started
Jan 20 14:35:47 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:35:47.636896) [db/memtable_list.cc:722] [default] Level-0 commit table #62: memtable #1 done
Jan 20 14:35:47 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:35:47.636933) EVENT_LOG_v1 {"time_micros": 1768919747636924, "job": 35, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 20 14:35:47 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:35:47.636956) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 20 14:35:47 compute-1 ceph-mon[81775]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 35] Try to delete WAL files size 5576607, prev total WAL file size 5578306, number of live WAL files 2.
Jan 20 14:35:47 compute-1 ceph-mon[81775]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000058.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 14:35:47 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:35:47.638842) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032353130' seq:72057594037927935, type:22 .. '7061786F730032373632' seq:0, type:0; will stop at (end)
Jan 20 14:35:47 compute-1 ceph-mon[81775]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 36] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 20 14:35:47 compute-1 ceph-mon[81775]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 35 Base level 0, inputs: [62(3583KB)], [60(8254KB)]
Jan 20 14:35:47 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768919747638897, "job": 36, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [62], "files_L6": [60], "score": -1, "input_data_size": 12121578, "oldest_snapshot_seqno": -1}
Jan 20 14:35:47 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/4051494342' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:35:47 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/3882842628' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:35:47 compute-1 ceph-mon[81775]: osdmap e198: 3 total, 3 up, 3 in
Jan 20 14:35:47 compute-1 ceph-mon[81775]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 36] Generated table #63: 5968 keys, 10180835 bytes, temperature: kUnknown
Jan 20 14:35:47 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768919747758787, "cf_name": "default", "job": 36, "event": "table_file_creation", "file_number": 63, "file_size": 10180835, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10139950, "index_size": 24839, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14981, "raw_key_size": 151248, "raw_average_key_size": 25, "raw_value_size": 10031679, "raw_average_value_size": 1680, "num_data_blocks": 1003, "num_entries": 5968, "num_filter_entries": 5968, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768917474, "oldest_key_time": 0, "file_creation_time": 1768919747, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1539d774-8a6f-4e48-b253-137c44586344", "db_session_id": "LFF7G2OZDOU7TKQ8MKAH", "orig_file_number": 63, "seqno_to_time_mapping": "N/A"}}
Jan 20 14:35:47 compute-1 ceph-mon[81775]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 14:35:47 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:35:47.759011) [db/compaction/compaction_job.cc:1663] [default] [JOB 36] Compacted 1@0 + 1@6 files to L6 => 10180835 bytes
Jan 20 14:35:47 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:35:47.761512) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 101.1 rd, 84.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.5, 8.1 +0.0 blob) out(9.7 +0.0 blob), read-write-amplify(6.1) write-amplify(2.8) OK, records in: 6504, records dropped: 536 output_compression: NoCompression
Jan 20 14:35:47 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:35:47.761547) EVENT_LOG_v1 {"time_micros": 1768919747761536, "job": 36, "event": "compaction_finished", "compaction_time_micros": 119945, "compaction_time_cpu_micros": 22558, "output_level": 6, "num_output_files": 1, "total_output_size": 10180835, "num_input_records": 6504, "num_output_records": 5968, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 20 14:35:47 compute-1 ceph-mon[81775]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000062.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 14:35:47 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768919747762261, "job": 36, "event": "table_file_deletion", "file_number": 62}
Jan 20 14:35:47 compute-1 ceph-mon[81775]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000060.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 14:35:47 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768919747763616, "job": 36, "event": "table_file_deletion", "file_number": 60}
Jan 20 14:35:47 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:35:47.638542) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:35:47 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:35:47.763662) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:35:47 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:35:47.763668) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:35:47 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:35:47.763671) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:35:47 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:35:47.763674) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:35:47 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:35:47.763676) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:35:48 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:35:48 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:35:48 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:35:48.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:35:48 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e199 e199: 3 total, 3 up, 3 in
Jan 20 14:35:48 compute-1 ceph-mon[81775]: pgmap v1388: 321 pgs: 321 active+clean; 97 MiB data, 547 MiB used, 20 GiB / 21 GiB avail; 148 KiB/s rd, 9.5 KiB/s wr, 218 op/s
Jan 20 14:35:49 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:35:49 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:35:49 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:35:49.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:35:49 compute-1 nova_compute[225855]: 2026-01-20 14:35:49.587 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:35:49 compute-1 nova_compute[225855]: 2026-01-20 14:35:49.632 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:35:49 compute-1 ceph-mon[81775]: osdmap e199: 3 total, 3 up, 3 in
Jan 20 14:35:50 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e199 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:35:50 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:35:50 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:35:50 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:35:50.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:35:50 compute-1 ceph-mon[81775]: pgmap v1391: 321 pgs: 4 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 315 active+clean; 89 MiB data, 534 MiB used, 20 GiB / 21 GiB avail; 119 KiB/s rd, 780 KiB/s wr, 172 op/s
Jan 20 14:35:50 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/260268613' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:35:50 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e200 e200: 3 total, 3 up, 3 in
Jan 20 14:35:51 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:35:51 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:35:51 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:35:51.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:35:51 compute-1 ceph-mon[81775]: osdmap e200: 3 total, 3 up, 3 in
Jan 20 14:35:51 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/2648038281' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:35:51 compute-1 ceph-mon[81775]: pgmap v1393: 321 pgs: 4 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 315 active+clean; 134 MiB data, 550 MiB used, 20 GiB / 21 GiB avail; 150 KiB/s rd, 7.1 MiB/s wr, 225 op/s
Jan 20 14:35:51 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/659460142' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:35:51 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/2898319916' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:35:51 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e201 e201: 3 total, 3 up, 3 in
Jan 20 14:35:52 compute-1 podman[247358]: 2026-01-20 14:35:52.016728638 +0000 UTC m=+0.055571129 container health_status 533cd8e0c3b8b7f910f15cef49e7f879ab46c12ce646a35628c15f13725eb500 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Jan 20 14:35:52 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:35:52 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:35:52 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:35:52.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:35:52 compute-1 ceph-mon[81775]: osdmap e201: 3 total, 3 up, 3 in
Jan 20 14:35:52 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/3689223483' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:35:52 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/635606860' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:35:53 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:35:53 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:35:53 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:35:53.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:35:53 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/1932494090' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:35:53 compute-1 ceph-mon[81775]: pgmap v1395: 321 pgs: 4 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 315 active+clean; 134 MiB data, 550 MiB used, 20 GiB / 21 GiB avail; 144 KiB/s rd, 7.1 MiB/s wr, 214 op/s
Jan 20 14:35:54 compute-1 nova_compute[225855]: 2026-01-20 14:35:54.588 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:35:54 compute-1 nova_compute[225855]: 2026-01-20 14:35:54.634 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:35:54 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:35:54 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:35:54 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:35:54.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:35:55 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e201 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:35:55 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:35:55 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:35:55 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:35:55.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:35:56 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:35:56.049 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=18, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '12:bb:42', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '06:92:24:f7:15:56'}, ipsec=False) old=SB_Global(nb_cfg=17) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:35:56 compute-1 nova_compute[225855]: 2026-01-20 14:35:56.049 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:35:56 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:35:56.050 140354 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 20 14:35:56 compute-1 ceph-mon[81775]: pgmap v1396: 321 pgs: 321 active+clean; 165 MiB data, 567 MiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 8.7 MiB/s wr, 313 op/s
Jan 20 14:35:56 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:35:56 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:35:56 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:35:56.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:35:57 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e202 e202: 3 total, 3 up, 3 in
Jan 20 14:35:57 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:35:57 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:35:57 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:35:57.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:35:58 compute-1 ceph-mon[81775]: osdmap e202: 3 total, 3 up, 3 in
Jan 20 14:35:58 compute-1 ceph-mon[81775]: pgmap v1398: 321 pgs: 321 active+clean; 181 MiB data, 572 MiB used, 20 GiB / 21 GiB avail; 5.7 MiB/s rd, 6.6 MiB/s wr, 438 op/s
Jan 20 14:35:58 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:35:58 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:35:58 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:35:58.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:35:59 compute-1 nova_compute[225855]: 2026-01-20 14:35:59.329 225859 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1768919744.3280523, 680a9e49-0486-46a0-8857-99a7a56c46e1 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:35:59 compute-1 nova_compute[225855]: 2026-01-20 14:35:59.330 225859 INFO nova.compute.manager [-] [instance: 680a9e49-0486-46a0-8857-99a7a56c46e1] VM Stopped (Lifecycle Event)
Jan 20 14:35:59 compute-1 nova_compute[225855]: 2026-01-20 14:35:59.434 225859 DEBUG nova.compute.manager [None req-a808b2ad-3404-4249-9449-f3120c536a7d - - - - - -] [instance: 680a9e49-0486-46a0-8857-99a7a56c46e1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:35:59 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:35:59 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:35:59 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:35:59.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:35:59 compute-1 sudo[247383]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:35:59 compute-1 sudo[247383]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:35:59 compute-1 nova_compute[225855]: 2026-01-20 14:35:59.592 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:35:59 compute-1 sudo[247383]: pam_unix(sudo:session): session closed for user root
Jan 20 14:35:59 compute-1 nova_compute[225855]: 2026-01-20 14:35:59.634 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:35:59 compute-1 sudo[247409]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:35:59 compute-1 sudo[247409]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:35:59 compute-1 sudo[247409]: pam_unix(sudo:session): session closed for user root
Jan 20 14:35:59 compute-1 sudo[247434]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:35:59 compute-1 sudo[247434]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:35:59 compute-1 sudo[247434]: pam_unix(sudo:session): session closed for user root
Jan 20 14:35:59 compute-1 sudo[247459]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 20 14:35:59 compute-1 sudo[247459]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:36:00 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e202 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:36:00 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:36:00.052 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5ffd4ac3-9266-4927-98ad-20a17782c725, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '18'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:36:00 compute-1 sudo[247459]: pam_unix(sudo:session): session closed for user root
Jan 20 14:36:00 compute-1 ceph-mon[81775]: pgmap v1399: 321 pgs: 321 active+clean; 181 MiB data, 572 MiB used, 20 GiB / 21 GiB avail; 6.1 MiB/s rd, 2.7 MiB/s wr, 335 op/s
Jan 20 14:36:00 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:36:00 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 14:36:00 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:36:00 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 14:36:00 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 14:36:00 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:36:00 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:36:00 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:36:00 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:36:00.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:36:01 compute-1 nova_compute[225855]: 2026-01-20 14:36:01.226 225859 DEBUG oslo_concurrency.lockutils [None req-a10aa867-0a22-4aaa-a7f8-bc926ace5b4c ecab37cbd7714ddd81e1db5b37ba85b3 b6594bd13c35449abc258d30a1a2509b - - default default] Acquiring lock "eb82fc99-1632-42b0-90d2-7ce2b9d542a2" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:36:01 compute-1 nova_compute[225855]: 2026-01-20 14:36:01.226 225859 DEBUG oslo_concurrency.lockutils [None req-a10aa867-0a22-4aaa-a7f8-bc926ace5b4c ecab37cbd7714ddd81e1db5b37ba85b3 b6594bd13c35449abc258d30a1a2509b - - default default] Lock "eb82fc99-1632-42b0-90d2-7ce2b9d542a2" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:36:01 compute-1 nova_compute[225855]: 2026-01-20 14:36:01.244 225859 DEBUG nova.compute.manager [None req-a10aa867-0a22-4aaa-a7f8-bc926ace5b4c ecab37cbd7714ddd81e1db5b37ba85b3 b6594bd13c35449abc258d30a1a2509b - - default default] [instance: eb82fc99-1632-42b0-90d2-7ce2b9d542a2] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 20 14:36:01 compute-1 nova_compute[225855]: 2026-01-20 14:36:01.340 225859 DEBUG oslo_concurrency.lockutils [None req-a10aa867-0a22-4aaa-a7f8-bc926ace5b4c ecab37cbd7714ddd81e1db5b37ba85b3 b6594bd13c35449abc258d30a1a2509b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:36:01 compute-1 nova_compute[225855]: 2026-01-20 14:36:01.341 225859 DEBUG oslo_concurrency.lockutils [None req-a10aa867-0a22-4aaa-a7f8-bc926ace5b4c ecab37cbd7714ddd81e1db5b37ba85b3 b6594bd13c35449abc258d30a1a2509b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:36:01 compute-1 nova_compute[225855]: 2026-01-20 14:36:01.346 225859 DEBUG nova.virt.hardware [None req-a10aa867-0a22-4aaa-a7f8-bc926ace5b4c ecab37cbd7714ddd81e1db5b37ba85b3 b6594bd13c35449abc258d30a1a2509b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 20 14:36:01 compute-1 nova_compute[225855]: 2026-01-20 14:36:01.346 225859 INFO nova.compute.claims [None req-a10aa867-0a22-4aaa-a7f8-bc926ace5b4c ecab37cbd7714ddd81e1db5b37ba85b3 b6594bd13c35449abc258d30a1a2509b - - default default] [instance: eb82fc99-1632-42b0-90d2-7ce2b9d542a2] Claim successful on node compute-1.ctlplane.example.com
Jan 20 14:36:01 compute-1 nova_compute[225855]: 2026-01-20 14:36:01.458 225859 DEBUG oslo_concurrency.processutils [None req-a10aa867-0a22-4aaa-a7f8-bc926ace5b4c ecab37cbd7714ddd81e1db5b37ba85b3 b6594bd13c35449abc258d30a1a2509b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:36:01 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e203 e203: 3 total, 3 up, 3 in
Jan 20 14:36:01 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:36:01 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:36:01 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:36:01.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:36:01 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:36:01 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3097242567' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:36:01 compute-1 nova_compute[225855]: 2026-01-20 14:36:01.930 225859 DEBUG oslo_concurrency.processutils [None req-a10aa867-0a22-4aaa-a7f8-bc926ace5b4c ecab37cbd7714ddd81e1db5b37ba85b3 b6594bd13c35449abc258d30a1a2509b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:36:01 compute-1 nova_compute[225855]: 2026-01-20 14:36:01.937 225859 DEBUG nova.compute.provider_tree [None req-a10aa867-0a22-4aaa-a7f8-bc926ace5b4c ecab37cbd7714ddd81e1db5b37ba85b3 b6594bd13c35449abc258d30a1a2509b - - default default] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:36:01 compute-1 nova_compute[225855]: 2026-01-20 14:36:01.957 225859 DEBUG nova.scheduler.client.report [None req-a10aa867-0a22-4aaa-a7f8-bc926ace5b4c ecab37cbd7714ddd81e1db5b37ba85b3 b6594bd13c35449abc258d30a1a2509b - - default default] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:36:01 compute-1 nova_compute[225855]: 2026-01-20 14:36:01.983 225859 DEBUG oslo_concurrency.lockutils [None req-a10aa867-0a22-4aaa-a7f8-bc926ace5b4c ecab37cbd7714ddd81e1db5b37ba85b3 b6594bd13c35449abc258d30a1a2509b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.642s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:36:01 compute-1 nova_compute[225855]: 2026-01-20 14:36:01.983 225859 DEBUG nova.compute.manager [None req-a10aa867-0a22-4aaa-a7f8-bc926ace5b4c ecab37cbd7714ddd81e1db5b37ba85b3 b6594bd13c35449abc258d30a1a2509b - - default default] [instance: eb82fc99-1632-42b0-90d2-7ce2b9d542a2] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 20 14:36:02 compute-1 nova_compute[225855]: 2026-01-20 14:36:02.039 225859 DEBUG nova.compute.manager [None req-a10aa867-0a22-4aaa-a7f8-bc926ace5b4c ecab37cbd7714ddd81e1db5b37ba85b3 b6594bd13c35449abc258d30a1a2509b - - default default] [instance: eb82fc99-1632-42b0-90d2-7ce2b9d542a2] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 20 14:36:02 compute-1 nova_compute[225855]: 2026-01-20 14:36:02.039 225859 DEBUG nova.network.neutron [None req-a10aa867-0a22-4aaa-a7f8-bc926ace5b4c ecab37cbd7714ddd81e1db5b37ba85b3 b6594bd13c35449abc258d30a1a2509b - - default default] [instance: eb82fc99-1632-42b0-90d2-7ce2b9d542a2] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 20 14:36:02 compute-1 nova_compute[225855]: 2026-01-20 14:36:02.059 225859 INFO nova.virt.libvirt.driver [None req-a10aa867-0a22-4aaa-a7f8-bc926ace5b4c ecab37cbd7714ddd81e1db5b37ba85b3 b6594bd13c35449abc258d30a1a2509b - - default default] [instance: eb82fc99-1632-42b0-90d2-7ce2b9d542a2] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 20 14:36:02 compute-1 nova_compute[225855]: 2026-01-20 14:36:02.086 225859 DEBUG nova.compute.manager [None req-a10aa867-0a22-4aaa-a7f8-bc926ace5b4c ecab37cbd7714ddd81e1db5b37ba85b3 b6594bd13c35449abc258d30a1a2509b - - default default] [instance: eb82fc99-1632-42b0-90d2-7ce2b9d542a2] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 20 14:36:02 compute-1 nova_compute[225855]: 2026-01-20 14:36:02.173 225859 DEBUG nova.compute.manager [None req-a10aa867-0a22-4aaa-a7f8-bc926ace5b4c ecab37cbd7714ddd81e1db5b37ba85b3 b6594bd13c35449abc258d30a1a2509b - - default default] [instance: eb82fc99-1632-42b0-90d2-7ce2b9d542a2] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 20 14:36:02 compute-1 nova_compute[225855]: 2026-01-20 14:36:02.175 225859 DEBUG nova.virt.libvirt.driver [None req-a10aa867-0a22-4aaa-a7f8-bc926ace5b4c ecab37cbd7714ddd81e1db5b37ba85b3 b6594bd13c35449abc258d30a1a2509b - - default default] [instance: eb82fc99-1632-42b0-90d2-7ce2b9d542a2] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 20 14:36:02 compute-1 nova_compute[225855]: 2026-01-20 14:36:02.176 225859 INFO nova.virt.libvirt.driver [None req-a10aa867-0a22-4aaa-a7f8-bc926ace5b4c ecab37cbd7714ddd81e1db5b37ba85b3 b6594bd13c35449abc258d30a1a2509b - - default default] [instance: eb82fc99-1632-42b0-90d2-7ce2b9d542a2] Creating image(s)
Jan 20 14:36:02 compute-1 nova_compute[225855]: 2026-01-20 14:36:02.209 225859 DEBUG nova.storage.rbd_utils [None req-a10aa867-0a22-4aaa-a7f8-bc926ace5b4c ecab37cbd7714ddd81e1db5b37ba85b3 b6594bd13c35449abc258d30a1a2509b - - default default] rbd image eb82fc99-1632-42b0-90d2-7ce2b9d542a2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:36:02 compute-1 nova_compute[225855]: 2026-01-20 14:36:02.244 225859 DEBUG nova.storage.rbd_utils [None req-a10aa867-0a22-4aaa-a7f8-bc926ace5b4c ecab37cbd7714ddd81e1db5b37ba85b3 b6594bd13c35449abc258d30a1a2509b - - default default] rbd image eb82fc99-1632-42b0-90d2-7ce2b9d542a2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:36:02 compute-1 nova_compute[225855]: 2026-01-20 14:36:02.277 225859 DEBUG nova.storage.rbd_utils [None req-a10aa867-0a22-4aaa-a7f8-bc926ace5b4c ecab37cbd7714ddd81e1db5b37ba85b3 b6594bd13c35449abc258d30a1a2509b - - default default] rbd image eb82fc99-1632-42b0-90d2-7ce2b9d542a2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:36:02 compute-1 nova_compute[225855]: 2026-01-20 14:36:02.281 225859 DEBUG oslo_concurrency.processutils [None req-a10aa867-0a22-4aaa-a7f8-bc926ace5b4c ecab37cbd7714ddd81e1db5b37ba85b3 b6594bd13c35449abc258d30a1a2509b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:36:02 compute-1 nova_compute[225855]: 2026-01-20 14:36:02.357 225859 DEBUG oslo_concurrency.processutils [None req-a10aa867-0a22-4aaa-a7f8-bc926ace5b4c ecab37cbd7714ddd81e1db5b37ba85b3 b6594bd13c35449abc258d30a1a2509b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:36:02 compute-1 nova_compute[225855]: 2026-01-20 14:36:02.358 225859 DEBUG oslo_concurrency.lockutils [None req-a10aa867-0a22-4aaa-a7f8-bc926ace5b4c ecab37cbd7714ddd81e1db5b37ba85b3 b6594bd13c35449abc258d30a1a2509b - - default default] Acquiring lock "82d5c1918fd7c974214c7a48c1793a7a82560462" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:36:02 compute-1 nova_compute[225855]: 2026-01-20 14:36:02.359 225859 DEBUG oslo_concurrency.lockutils [None req-a10aa867-0a22-4aaa-a7f8-bc926ace5b4c ecab37cbd7714ddd81e1db5b37ba85b3 b6594bd13c35449abc258d30a1a2509b - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:36:02 compute-1 nova_compute[225855]: 2026-01-20 14:36:02.359 225859 DEBUG oslo_concurrency.lockutils [None req-a10aa867-0a22-4aaa-a7f8-bc926ace5b4c ecab37cbd7714ddd81e1db5b37ba85b3 b6594bd13c35449abc258d30a1a2509b - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:36:02 compute-1 nova_compute[225855]: 2026-01-20 14:36:02.389 225859 DEBUG nova.storage.rbd_utils [None req-a10aa867-0a22-4aaa-a7f8-bc926ace5b4c ecab37cbd7714ddd81e1db5b37ba85b3 b6594bd13c35449abc258d30a1a2509b - - default default] rbd image eb82fc99-1632-42b0-90d2-7ce2b9d542a2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:36:02 compute-1 nova_compute[225855]: 2026-01-20 14:36:02.393 225859 DEBUG oslo_concurrency.processutils [None req-a10aa867-0a22-4aaa-a7f8-bc926ace5b4c ecab37cbd7714ddd81e1db5b37ba85b3 b6594bd13c35449abc258d30a1a2509b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 eb82fc99-1632-42b0-90d2-7ce2b9d542a2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:36:02 compute-1 nova_compute[225855]: 2026-01-20 14:36:02.448 225859 DEBUG nova.network.neutron [None req-a10aa867-0a22-4aaa-a7f8-bc926ace5b4c ecab37cbd7714ddd81e1db5b37ba85b3 b6594bd13c35449abc258d30a1a2509b - - default default] [instance: eb82fc99-1632-42b0-90d2-7ce2b9d542a2] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Jan 20 14:36:02 compute-1 nova_compute[225855]: 2026-01-20 14:36:02.449 225859 DEBUG nova.compute.manager [None req-a10aa867-0a22-4aaa-a7f8-bc926ace5b4c ecab37cbd7714ddd81e1db5b37ba85b3 b6594bd13c35449abc258d30a1a2509b - - default default] [instance: eb82fc99-1632-42b0-90d2-7ce2b9d542a2] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 20 14:36:02 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e204 e204: 3 total, 3 up, 3 in
Jan 20 14:36:02 compute-1 ceph-mon[81775]: pgmap v1400: 321 pgs: 321 active+clean; 181 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 7.3 MiB/s rd, 2.3 MiB/s wr, 351 op/s
Jan 20 14:36:02 compute-1 ceph-mon[81775]: osdmap e203: 3 total, 3 up, 3 in
Jan 20 14:36:02 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/3097242567' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:36:02 compute-1 nova_compute[225855]: 2026-01-20 14:36:02.714 225859 DEBUG oslo_concurrency.processutils [None req-a10aa867-0a22-4aaa-a7f8-bc926ace5b4c ecab37cbd7714ddd81e1db5b37ba85b3 b6594bd13c35449abc258d30a1a2509b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 eb82fc99-1632-42b0-90d2-7ce2b9d542a2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.322s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:36:02 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:36:02 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:36:02 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:36:02.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:36:02 compute-1 nova_compute[225855]: 2026-01-20 14:36:02.785 225859 DEBUG nova.storage.rbd_utils [None req-a10aa867-0a22-4aaa-a7f8-bc926ace5b4c ecab37cbd7714ddd81e1db5b37ba85b3 b6594bd13c35449abc258d30a1a2509b - - default default] resizing rbd image eb82fc99-1632-42b0-90d2-7ce2b9d542a2_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 20 14:36:02 compute-1 nova_compute[225855]: 2026-01-20 14:36:02.899 225859 DEBUG nova.objects.instance [None req-a10aa867-0a22-4aaa-a7f8-bc926ace5b4c ecab37cbd7714ddd81e1db5b37ba85b3 b6594bd13c35449abc258d30a1a2509b - - default default] Lazy-loading 'migration_context' on Instance uuid eb82fc99-1632-42b0-90d2-7ce2b9d542a2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:36:02 compute-1 nova_compute[225855]: 2026-01-20 14:36:02.924 225859 DEBUG nova.virt.libvirt.driver [None req-a10aa867-0a22-4aaa-a7f8-bc926ace5b4c ecab37cbd7714ddd81e1db5b37ba85b3 b6594bd13c35449abc258d30a1a2509b - - default default] [instance: eb82fc99-1632-42b0-90d2-7ce2b9d542a2] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 20 14:36:02 compute-1 nova_compute[225855]: 2026-01-20 14:36:02.924 225859 DEBUG nova.virt.libvirt.driver [None req-a10aa867-0a22-4aaa-a7f8-bc926ace5b4c ecab37cbd7714ddd81e1db5b37ba85b3 b6594bd13c35449abc258d30a1a2509b - - default default] [instance: eb82fc99-1632-42b0-90d2-7ce2b9d542a2] Ensure instance console log exists: /var/lib/nova/instances/eb82fc99-1632-42b0-90d2-7ce2b9d542a2/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 20 14:36:02 compute-1 nova_compute[225855]: 2026-01-20 14:36:02.925 225859 DEBUG oslo_concurrency.lockutils [None req-a10aa867-0a22-4aaa-a7f8-bc926ace5b4c ecab37cbd7714ddd81e1db5b37ba85b3 b6594bd13c35449abc258d30a1a2509b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:36:02 compute-1 nova_compute[225855]: 2026-01-20 14:36:02.926 225859 DEBUG oslo_concurrency.lockutils [None req-a10aa867-0a22-4aaa-a7f8-bc926ace5b4c ecab37cbd7714ddd81e1db5b37ba85b3 b6594bd13c35449abc258d30a1a2509b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:36:02 compute-1 nova_compute[225855]: 2026-01-20 14:36:02.927 225859 DEBUG oslo_concurrency.lockutils [None req-a10aa867-0a22-4aaa-a7f8-bc926ace5b4c ecab37cbd7714ddd81e1db5b37ba85b3 b6594bd13c35449abc258d30a1a2509b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:36:02 compute-1 nova_compute[225855]: 2026-01-20 14:36:02.929 225859 DEBUG nova.virt.libvirt.driver [None req-a10aa867-0a22-4aaa-a7f8-bc926ace5b4c ecab37cbd7714ddd81e1db5b37ba85b3 b6594bd13c35449abc258d30a1a2509b - - default default] [instance: eb82fc99-1632-42b0-90d2-7ce2b9d542a2] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'device_type': 'disk', 'encryption_options': None, 'size': 0, 'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'guest_format': None, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'image_id': 'a32b3e07-16d8-46fd-9a7b-c242c432fcf9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 20 14:36:02 compute-1 nova_compute[225855]: 2026-01-20 14:36:02.935 225859 WARNING nova.virt.libvirt.driver [None req-a10aa867-0a22-4aaa-a7f8-bc926ace5b4c ecab37cbd7714ddd81e1db5b37ba85b3 b6594bd13c35449abc258d30a1a2509b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 14:36:03 compute-1 nova_compute[225855]: 2026-01-20 14:36:03.080 225859 DEBUG nova.virt.libvirt.host [None req-a10aa867-0a22-4aaa-a7f8-bc926ace5b4c ecab37cbd7714ddd81e1db5b37ba85b3 b6594bd13c35449abc258d30a1a2509b - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 20 14:36:03 compute-1 nova_compute[225855]: 2026-01-20 14:36:03.082 225859 DEBUG nova.virt.libvirt.host [None req-a10aa867-0a22-4aaa-a7f8-bc926ace5b4c ecab37cbd7714ddd81e1db5b37ba85b3 b6594bd13c35449abc258d30a1a2509b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 20 14:36:03 compute-1 nova_compute[225855]: 2026-01-20 14:36:03.091 225859 DEBUG nova.virt.libvirt.host [None req-a10aa867-0a22-4aaa-a7f8-bc926ace5b4c ecab37cbd7714ddd81e1db5b37ba85b3 b6594bd13c35449abc258d30a1a2509b - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 20 14:36:03 compute-1 nova_compute[225855]: 2026-01-20 14:36:03.092 225859 DEBUG nova.virt.libvirt.host [None req-a10aa867-0a22-4aaa-a7f8-bc926ace5b4c ecab37cbd7714ddd81e1db5b37ba85b3 b6594bd13c35449abc258d30a1a2509b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 20 14:36:03 compute-1 sudo[247705]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:36:03 compute-1 nova_compute[225855]: 2026-01-20 14:36:03.095 225859 DEBUG nova.virt.libvirt.driver [None req-a10aa867-0a22-4aaa-a7f8-bc926ace5b4c ecab37cbd7714ddd81e1db5b37ba85b3 b6594bd13c35449abc258d30a1a2509b - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 20 14:36:03 compute-1 nova_compute[225855]: 2026-01-20 14:36:03.096 225859 DEBUG nova.virt.hardware [None req-a10aa867-0a22-4aaa-a7f8-bc926ace5b4c ecab37cbd7714ddd81e1db5b37ba85b3 b6594bd13c35449abc258d30a1a2509b - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-20T14:21:55Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='522deaab-a741-4dbb-932d-d8b13a211c33',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 20 14:36:03 compute-1 nova_compute[225855]: 2026-01-20 14:36:03.098 225859 DEBUG nova.virt.hardware [None req-a10aa867-0a22-4aaa-a7f8-bc926ace5b4c ecab37cbd7714ddd81e1db5b37ba85b3 b6594bd13c35449abc258d30a1a2509b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 20 14:36:03 compute-1 sudo[247705]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:36:03 compute-1 nova_compute[225855]: 2026-01-20 14:36:03.098 225859 DEBUG nova.virt.hardware [None req-a10aa867-0a22-4aaa-a7f8-bc926ace5b4c ecab37cbd7714ddd81e1db5b37ba85b3 b6594bd13c35449abc258d30a1a2509b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 20 14:36:03 compute-1 nova_compute[225855]: 2026-01-20 14:36:03.099 225859 DEBUG nova.virt.hardware [None req-a10aa867-0a22-4aaa-a7f8-bc926ace5b4c ecab37cbd7714ddd81e1db5b37ba85b3 b6594bd13c35449abc258d30a1a2509b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 20 14:36:03 compute-1 nova_compute[225855]: 2026-01-20 14:36:03.100 225859 DEBUG nova.virt.hardware [None req-a10aa867-0a22-4aaa-a7f8-bc926ace5b4c ecab37cbd7714ddd81e1db5b37ba85b3 b6594bd13c35449abc258d30a1a2509b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 20 14:36:03 compute-1 nova_compute[225855]: 2026-01-20 14:36:03.100 225859 DEBUG nova.virt.hardware [None req-a10aa867-0a22-4aaa-a7f8-bc926ace5b4c ecab37cbd7714ddd81e1db5b37ba85b3 b6594bd13c35449abc258d30a1a2509b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 20 14:36:03 compute-1 sudo[247705]: pam_unix(sudo:session): session closed for user root
Jan 20 14:36:03 compute-1 nova_compute[225855]: 2026-01-20 14:36:03.101 225859 DEBUG nova.virt.hardware [None req-a10aa867-0a22-4aaa-a7f8-bc926ace5b4c ecab37cbd7714ddd81e1db5b37ba85b3 b6594bd13c35449abc258d30a1a2509b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 20 14:36:03 compute-1 nova_compute[225855]: 2026-01-20 14:36:03.102 225859 DEBUG nova.virt.hardware [None req-a10aa867-0a22-4aaa-a7f8-bc926ace5b4c ecab37cbd7714ddd81e1db5b37ba85b3 b6594bd13c35449abc258d30a1a2509b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 20 14:36:03 compute-1 nova_compute[225855]: 2026-01-20 14:36:03.102 225859 DEBUG nova.virt.hardware [None req-a10aa867-0a22-4aaa-a7f8-bc926ace5b4c ecab37cbd7714ddd81e1db5b37ba85b3 b6594bd13c35449abc258d30a1a2509b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 20 14:36:03 compute-1 nova_compute[225855]: 2026-01-20 14:36:03.103 225859 DEBUG nova.virt.hardware [None req-a10aa867-0a22-4aaa-a7f8-bc926ace5b4c ecab37cbd7714ddd81e1db5b37ba85b3 b6594bd13c35449abc258d30a1a2509b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 20 14:36:03 compute-1 nova_compute[225855]: 2026-01-20 14:36:03.104 225859 DEBUG nova.virt.hardware [None req-a10aa867-0a22-4aaa-a7f8-bc926ace5b4c ecab37cbd7714ddd81e1db5b37ba85b3 b6594bd13c35449abc258d30a1a2509b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 20 14:36:03 compute-1 nova_compute[225855]: 2026-01-20 14:36:03.109 225859 DEBUG oslo_concurrency.processutils [None req-a10aa867-0a22-4aaa-a7f8-bc926ace5b4c ecab37cbd7714ddd81e1db5b37ba85b3 b6594bd13c35449abc258d30a1a2509b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:36:03 compute-1 sudo[247730]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:36:03 compute-1 sudo[247730]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:36:03 compute-1 sudo[247730]: pam_unix(sudo:session): session closed for user root
Jan 20 14:36:03 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:36:03 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/791379787' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:36:03 compute-1 nova_compute[225855]: 2026-01-20 14:36:03.581 225859 DEBUG oslo_concurrency.processutils [None req-a10aa867-0a22-4aaa-a7f8-bc926ace5b4c ecab37cbd7714ddd81e1db5b37ba85b3 b6594bd13c35449abc258d30a1a2509b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:36:03 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:36:03 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:36:03 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:36:03.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:36:03 compute-1 nova_compute[225855]: 2026-01-20 14:36:03.614 225859 DEBUG nova.storage.rbd_utils [None req-a10aa867-0a22-4aaa-a7f8-bc926ace5b4c ecab37cbd7714ddd81e1db5b37ba85b3 b6594bd13c35449abc258d30a1a2509b - - default default] rbd image eb82fc99-1632-42b0-90d2-7ce2b9d542a2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:36:03 compute-1 nova_compute[225855]: 2026-01-20 14:36:03.618 225859 DEBUG oslo_concurrency.processutils [None req-a10aa867-0a22-4aaa-a7f8-bc926ace5b4c ecab37cbd7714ddd81e1db5b37ba85b3 b6594bd13c35449abc258d30a1a2509b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:36:03 compute-1 ceph-mon[81775]: osdmap e204: 3 total, 3 up, 3 in
Jan 20 14:36:03 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/791379787' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:36:03 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e205 e205: 3 total, 3 up, 3 in
Jan 20 14:36:04 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:36:04 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2849367976' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:36:04 compute-1 nova_compute[225855]: 2026-01-20 14:36:04.098 225859 DEBUG oslo_concurrency.processutils [None req-a10aa867-0a22-4aaa-a7f8-bc926ace5b4c ecab37cbd7714ddd81e1db5b37ba85b3 b6594bd13c35449abc258d30a1a2509b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:36:04 compute-1 nova_compute[225855]: 2026-01-20 14:36:04.099 225859 DEBUG nova.objects.instance [None req-a10aa867-0a22-4aaa-a7f8-bc926ace5b4c ecab37cbd7714ddd81e1db5b37ba85b3 b6594bd13c35449abc258d30a1a2509b - - default default] Lazy-loading 'pci_devices' on Instance uuid eb82fc99-1632-42b0-90d2-7ce2b9d542a2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:36:04 compute-1 nova_compute[225855]: 2026-01-20 14:36:04.115 225859 DEBUG nova.virt.libvirt.driver [None req-a10aa867-0a22-4aaa-a7f8-bc926ace5b4c ecab37cbd7714ddd81e1db5b37ba85b3 b6594bd13c35449abc258d30a1a2509b - - default default] [instance: eb82fc99-1632-42b0-90d2-7ce2b9d542a2] End _get_guest_xml xml=<domain type="kvm">
Jan 20 14:36:04 compute-1 nova_compute[225855]:   <uuid>eb82fc99-1632-42b0-90d2-7ce2b9d542a2</uuid>
Jan 20 14:36:04 compute-1 nova_compute[225855]:   <name>instance-00000034</name>
Jan 20 14:36:04 compute-1 nova_compute[225855]:   <memory>131072</memory>
Jan 20 14:36:04 compute-1 nova_compute[225855]:   <vcpu>1</vcpu>
Jan 20 14:36:04 compute-1 nova_compute[225855]:   <metadata>
Jan 20 14:36:04 compute-1 nova_compute[225855]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 14:36:04 compute-1 nova_compute[225855]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 14:36:04 compute-1 nova_compute[225855]:       <nova:name>tempest-ServersAdminNegativeTestJSON-server-1842602227</nova:name>
Jan 20 14:36:04 compute-1 nova_compute[225855]:       <nova:creationTime>2026-01-20 14:36:02</nova:creationTime>
Jan 20 14:36:04 compute-1 nova_compute[225855]:       <nova:flavor name="m1.nano">
Jan 20 14:36:04 compute-1 nova_compute[225855]:         <nova:memory>128</nova:memory>
Jan 20 14:36:04 compute-1 nova_compute[225855]:         <nova:disk>1</nova:disk>
Jan 20 14:36:04 compute-1 nova_compute[225855]:         <nova:swap>0</nova:swap>
Jan 20 14:36:04 compute-1 nova_compute[225855]:         <nova:ephemeral>0</nova:ephemeral>
Jan 20 14:36:04 compute-1 nova_compute[225855]:         <nova:vcpus>1</nova:vcpus>
Jan 20 14:36:04 compute-1 nova_compute[225855]:       </nova:flavor>
Jan 20 14:36:04 compute-1 nova_compute[225855]:       <nova:owner>
Jan 20 14:36:04 compute-1 nova_compute[225855]:         <nova:user uuid="ecab37cbd7714ddd81e1db5b37ba85b3">tempest-ServersAdminNegativeTestJSON-1522974762-project-member</nova:user>
Jan 20 14:36:04 compute-1 nova_compute[225855]:         <nova:project uuid="b6594bd13c35449abc258d30a1a2509b">tempest-ServersAdminNegativeTestJSON-1522974762</nova:project>
Jan 20 14:36:04 compute-1 nova_compute[225855]:       </nova:owner>
Jan 20 14:36:04 compute-1 nova_compute[225855]:       <nova:root type="image" uuid="a32b3e07-16d8-46fd-9a7b-c242c432fcf9"/>
Jan 20 14:36:04 compute-1 nova_compute[225855]:       <nova:ports/>
Jan 20 14:36:04 compute-1 nova_compute[225855]:     </nova:instance>
Jan 20 14:36:04 compute-1 nova_compute[225855]:   </metadata>
Jan 20 14:36:04 compute-1 nova_compute[225855]:   <sysinfo type="smbios">
Jan 20 14:36:04 compute-1 nova_compute[225855]:     <system>
Jan 20 14:36:04 compute-1 nova_compute[225855]:       <entry name="manufacturer">RDO</entry>
Jan 20 14:36:04 compute-1 nova_compute[225855]:       <entry name="product">OpenStack Compute</entry>
Jan 20 14:36:04 compute-1 nova_compute[225855]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 14:36:04 compute-1 nova_compute[225855]:       <entry name="serial">eb82fc99-1632-42b0-90d2-7ce2b9d542a2</entry>
Jan 20 14:36:04 compute-1 nova_compute[225855]:       <entry name="uuid">eb82fc99-1632-42b0-90d2-7ce2b9d542a2</entry>
Jan 20 14:36:04 compute-1 nova_compute[225855]:       <entry name="family">Virtual Machine</entry>
Jan 20 14:36:04 compute-1 nova_compute[225855]:     </system>
Jan 20 14:36:04 compute-1 nova_compute[225855]:   </sysinfo>
Jan 20 14:36:04 compute-1 nova_compute[225855]:   <os>
Jan 20 14:36:04 compute-1 nova_compute[225855]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 20 14:36:04 compute-1 nova_compute[225855]:     <boot dev="hd"/>
Jan 20 14:36:04 compute-1 nova_compute[225855]:     <smbios mode="sysinfo"/>
Jan 20 14:36:04 compute-1 nova_compute[225855]:   </os>
Jan 20 14:36:04 compute-1 nova_compute[225855]:   <features>
Jan 20 14:36:04 compute-1 nova_compute[225855]:     <acpi/>
Jan 20 14:36:04 compute-1 nova_compute[225855]:     <apic/>
Jan 20 14:36:04 compute-1 nova_compute[225855]:     <vmcoreinfo/>
Jan 20 14:36:04 compute-1 nova_compute[225855]:   </features>
Jan 20 14:36:04 compute-1 nova_compute[225855]:   <clock offset="utc">
Jan 20 14:36:04 compute-1 nova_compute[225855]:     <timer name="pit" tickpolicy="delay"/>
Jan 20 14:36:04 compute-1 nova_compute[225855]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 20 14:36:04 compute-1 nova_compute[225855]:     <timer name="hpet" present="no"/>
Jan 20 14:36:04 compute-1 nova_compute[225855]:   </clock>
Jan 20 14:36:04 compute-1 nova_compute[225855]:   <cpu mode="custom" match="exact">
Jan 20 14:36:04 compute-1 nova_compute[225855]:     <model>Nehalem</model>
Jan 20 14:36:04 compute-1 nova_compute[225855]:     <topology sockets="1" cores="1" threads="1"/>
Jan 20 14:36:04 compute-1 nova_compute[225855]:   </cpu>
Jan 20 14:36:04 compute-1 nova_compute[225855]:   <devices>
Jan 20 14:36:04 compute-1 nova_compute[225855]:     <disk type="network" device="disk">
Jan 20 14:36:04 compute-1 nova_compute[225855]:       <driver type="raw" cache="none"/>
Jan 20 14:36:04 compute-1 nova_compute[225855]:       <source protocol="rbd" name="vms/eb82fc99-1632-42b0-90d2-7ce2b9d542a2_disk">
Jan 20 14:36:04 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:36:04 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:36:04 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:36:04 compute-1 nova_compute[225855]:       </source>
Jan 20 14:36:04 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 14:36:04 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:36:04 compute-1 nova_compute[225855]:       </auth>
Jan 20 14:36:04 compute-1 nova_compute[225855]:       <target dev="vda" bus="virtio"/>
Jan 20 14:36:04 compute-1 nova_compute[225855]:     </disk>
Jan 20 14:36:04 compute-1 nova_compute[225855]:     <disk type="network" device="cdrom">
Jan 20 14:36:04 compute-1 nova_compute[225855]:       <driver type="raw" cache="none"/>
Jan 20 14:36:04 compute-1 nova_compute[225855]:       <source protocol="rbd" name="vms/eb82fc99-1632-42b0-90d2-7ce2b9d542a2_disk.config">
Jan 20 14:36:04 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:36:04 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:36:04 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:36:04 compute-1 nova_compute[225855]:       </source>
Jan 20 14:36:04 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 14:36:04 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:36:04 compute-1 nova_compute[225855]:       </auth>
Jan 20 14:36:04 compute-1 nova_compute[225855]:       <target dev="sda" bus="sata"/>
Jan 20 14:36:04 compute-1 nova_compute[225855]:     </disk>
Jan 20 14:36:04 compute-1 nova_compute[225855]:     <serial type="pty">
Jan 20 14:36:04 compute-1 nova_compute[225855]:       <log file="/var/lib/nova/instances/eb82fc99-1632-42b0-90d2-7ce2b9d542a2/console.log" append="off"/>
Jan 20 14:36:04 compute-1 nova_compute[225855]:     </serial>
Jan 20 14:36:04 compute-1 nova_compute[225855]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 20 14:36:04 compute-1 nova_compute[225855]:     <video>
Jan 20 14:36:04 compute-1 nova_compute[225855]:       <model type="virtio"/>
Jan 20 14:36:04 compute-1 nova_compute[225855]:     </video>
Jan 20 14:36:04 compute-1 nova_compute[225855]:     <input type="tablet" bus="usb"/>
Jan 20 14:36:04 compute-1 nova_compute[225855]:     <rng model="virtio">
Jan 20 14:36:04 compute-1 nova_compute[225855]:       <backend model="random">/dev/urandom</backend>
Jan 20 14:36:04 compute-1 nova_compute[225855]:     </rng>
Jan 20 14:36:04 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root"/>
Jan 20 14:36:04 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:36:04 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:36:04 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:36:04 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:36:04 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:36:04 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:36:04 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:36:04 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:36:04 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:36:04 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:36:04 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:36:04 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:36:04 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:36:04 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:36:04 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:36:04 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:36:04 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:36:04 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:36:04 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:36:04 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:36:04 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:36:04 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:36:04 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:36:04 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:36:04 compute-1 nova_compute[225855]:     <controller type="usb" index="0"/>
Jan 20 14:36:04 compute-1 nova_compute[225855]:     <memballoon model="virtio">
Jan 20 14:36:04 compute-1 nova_compute[225855]:       <stats period="10"/>
Jan 20 14:36:04 compute-1 nova_compute[225855]:     </memballoon>
Jan 20 14:36:04 compute-1 nova_compute[225855]:   </devices>
Jan 20 14:36:04 compute-1 nova_compute[225855]: </domain>
Jan 20 14:36:04 compute-1 nova_compute[225855]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 20 14:36:04 compute-1 nova_compute[225855]: 2026-01-20 14:36:04.173 225859 DEBUG nova.virt.libvirt.driver [None req-a10aa867-0a22-4aaa-a7f8-bc926ace5b4c ecab37cbd7714ddd81e1db5b37ba85b3 b6594bd13c35449abc258d30a1a2509b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:36:04 compute-1 nova_compute[225855]: 2026-01-20 14:36:04.173 225859 DEBUG nova.virt.libvirt.driver [None req-a10aa867-0a22-4aaa-a7f8-bc926ace5b4c ecab37cbd7714ddd81e1db5b37ba85b3 b6594bd13c35449abc258d30a1a2509b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:36:04 compute-1 nova_compute[225855]: 2026-01-20 14:36:04.174 225859 INFO nova.virt.libvirt.driver [None req-a10aa867-0a22-4aaa-a7f8-bc926ace5b4c ecab37cbd7714ddd81e1db5b37ba85b3 b6594bd13c35449abc258d30a1a2509b - - default default] [instance: eb82fc99-1632-42b0-90d2-7ce2b9d542a2] Using config drive
Jan 20 14:36:04 compute-1 nova_compute[225855]: 2026-01-20 14:36:04.198 225859 DEBUG nova.storage.rbd_utils [None req-a10aa867-0a22-4aaa-a7f8-bc926ace5b4c ecab37cbd7714ddd81e1db5b37ba85b3 b6594bd13c35449abc258d30a1a2509b - - default default] rbd image eb82fc99-1632-42b0-90d2-7ce2b9d542a2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:36:04 compute-1 nova_compute[225855]: 2026-01-20 14:36:04.390 225859 INFO nova.virt.libvirt.driver [None req-a10aa867-0a22-4aaa-a7f8-bc926ace5b4c ecab37cbd7714ddd81e1db5b37ba85b3 b6594bd13c35449abc258d30a1a2509b - - default default] [instance: eb82fc99-1632-42b0-90d2-7ce2b9d542a2] Creating config drive at /var/lib/nova/instances/eb82fc99-1632-42b0-90d2-7ce2b9d542a2/disk.config
Jan 20 14:36:04 compute-1 nova_compute[225855]: 2026-01-20 14:36:04.395 225859 DEBUG oslo_concurrency.processutils [None req-a10aa867-0a22-4aaa-a7f8-bc926ace5b4c ecab37cbd7714ddd81e1db5b37ba85b3 b6594bd13c35449abc258d30a1a2509b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/eb82fc99-1632-42b0-90d2-7ce2b9d542a2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9wgvap5w execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:36:04 compute-1 nova_compute[225855]: 2026-01-20 14:36:04.525 225859 DEBUG oslo_concurrency.processutils [None req-a10aa867-0a22-4aaa-a7f8-bc926ace5b4c ecab37cbd7714ddd81e1db5b37ba85b3 b6594bd13c35449abc258d30a1a2509b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/eb82fc99-1632-42b0-90d2-7ce2b9d542a2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9wgvap5w" returned: 0 in 0.130s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:36:04 compute-1 nova_compute[225855]: 2026-01-20 14:36:04.560 225859 DEBUG nova.storage.rbd_utils [None req-a10aa867-0a22-4aaa-a7f8-bc926ace5b4c ecab37cbd7714ddd81e1db5b37ba85b3 b6594bd13c35449abc258d30a1a2509b - - default default] rbd image eb82fc99-1632-42b0-90d2-7ce2b9d542a2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:36:04 compute-1 nova_compute[225855]: 2026-01-20 14:36:04.565 225859 DEBUG oslo_concurrency.processutils [None req-a10aa867-0a22-4aaa-a7f8-bc926ace5b4c ecab37cbd7714ddd81e1db5b37ba85b3 b6594bd13c35449abc258d30a1a2509b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/eb82fc99-1632-42b0-90d2-7ce2b9d542a2/disk.config eb82fc99-1632-42b0-90d2-7ce2b9d542a2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:36:04 compute-1 nova_compute[225855]: 2026-01-20 14:36:04.602 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:36:04 compute-1 nova_compute[225855]: 2026-01-20 14:36:04.637 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:36:04 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:36:04 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:36:04 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:36:04.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:36:05 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e205 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:36:05 compute-1 ceph-mon[81775]: pgmap v1403: 321 pgs: 321 active+clean; 181 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 5.3 MiB/s rd, 170 B/s wr, 177 op/s
Jan 20 14:36:05 compute-1 ceph-mon[81775]: osdmap e205: 3 total, 3 up, 3 in
Jan 20 14:36:05 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/2849367976' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:36:05 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e206 e206: 3 total, 3 up, 3 in
Jan 20 14:36:05 compute-1 nova_compute[225855]: 2026-01-20 14:36:05.152 225859 DEBUG oslo_concurrency.processutils [None req-a10aa867-0a22-4aaa-a7f8-bc926ace5b4c ecab37cbd7714ddd81e1db5b37ba85b3 b6594bd13c35449abc258d30a1a2509b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/eb82fc99-1632-42b0-90d2-7ce2b9d542a2/disk.config eb82fc99-1632-42b0-90d2-7ce2b9d542a2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.587s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:36:05 compute-1 nova_compute[225855]: 2026-01-20 14:36:05.153 225859 INFO nova.virt.libvirt.driver [None req-a10aa867-0a22-4aaa-a7f8-bc926ace5b4c ecab37cbd7714ddd81e1db5b37ba85b3 b6594bd13c35449abc258d30a1a2509b - - default default] [instance: eb82fc99-1632-42b0-90d2-7ce2b9d542a2] Deleting local config drive /var/lib/nova/instances/eb82fc99-1632-42b0-90d2-7ce2b9d542a2/disk.config because it was imported into RBD.
Jan 20 14:36:05 compute-1 systemd-machined[194361]: New machine qemu-25-instance-00000034.
Jan 20 14:36:05 compute-1 systemd[1]: Started Virtual Machine qemu-25-instance-00000034.
Jan 20 14:36:05 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:36:05 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:36:05 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:36:05.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:36:05 compute-1 nova_compute[225855]: 2026-01-20 14:36:05.858 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768919765.8583286, eb82fc99-1632-42b0-90d2-7ce2b9d542a2 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:36:05 compute-1 nova_compute[225855]: 2026-01-20 14:36:05.860 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: eb82fc99-1632-42b0-90d2-7ce2b9d542a2] VM Resumed (Lifecycle Event)
Jan 20 14:36:05 compute-1 nova_compute[225855]: 2026-01-20 14:36:05.864 225859 DEBUG nova.compute.manager [None req-a10aa867-0a22-4aaa-a7f8-bc926ace5b4c ecab37cbd7714ddd81e1db5b37ba85b3 b6594bd13c35449abc258d30a1a2509b - - default default] [instance: eb82fc99-1632-42b0-90d2-7ce2b9d542a2] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 20 14:36:05 compute-1 nova_compute[225855]: 2026-01-20 14:36:05.864 225859 DEBUG nova.virt.libvirt.driver [None req-a10aa867-0a22-4aaa-a7f8-bc926ace5b4c ecab37cbd7714ddd81e1db5b37ba85b3 b6594bd13c35449abc258d30a1a2509b - - default default] [instance: eb82fc99-1632-42b0-90d2-7ce2b9d542a2] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 20 14:36:05 compute-1 nova_compute[225855]: 2026-01-20 14:36:05.868 225859 INFO nova.virt.libvirt.driver [-] [instance: eb82fc99-1632-42b0-90d2-7ce2b9d542a2] Instance spawned successfully.
Jan 20 14:36:05 compute-1 nova_compute[225855]: 2026-01-20 14:36:05.868 225859 DEBUG nova.virt.libvirt.driver [None req-a10aa867-0a22-4aaa-a7f8-bc926ace5b4c ecab37cbd7714ddd81e1db5b37ba85b3 b6594bd13c35449abc258d30a1a2509b - - default default] [instance: eb82fc99-1632-42b0-90d2-7ce2b9d542a2] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 20 14:36:05 compute-1 nova_compute[225855]: 2026-01-20 14:36:05.899 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: eb82fc99-1632-42b0-90d2-7ce2b9d542a2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:36:05 compute-1 nova_compute[225855]: 2026-01-20 14:36:05.905 225859 DEBUG nova.virt.libvirt.driver [None req-a10aa867-0a22-4aaa-a7f8-bc926ace5b4c ecab37cbd7714ddd81e1db5b37ba85b3 b6594bd13c35449abc258d30a1a2509b - - default default] [instance: eb82fc99-1632-42b0-90d2-7ce2b9d542a2] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:36:05 compute-1 nova_compute[225855]: 2026-01-20 14:36:05.906 225859 DEBUG nova.virt.libvirt.driver [None req-a10aa867-0a22-4aaa-a7f8-bc926ace5b4c ecab37cbd7714ddd81e1db5b37ba85b3 b6594bd13c35449abc258d30a1a2509b - - default default] [instance: eb82fc99-1632-42b0-90d2-7ce2b9d542a2] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:36:05 compute-1 nova_compute[225855]: 2026-01-20 14:36:05.906 225859 DEBUG nova.virt.libvirt.driver [None req-a10aa867-0a22-4aaa-a7f8-bc926ace5b4c ecab37cbd7714ddd81e1db5b37ba85b3 b6594bd13c35449abc258d30a1a2509b - - default default] [instance: eb82fc99-1632-42b0-90d2-7ce2b9d542a2] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:36:05 compute-1 nova_compute[225855]: 2026-01-20 14:36:05.907 225859 DEBUG nova.virt.libvirt.driver [None req-a10aa867-0a22-4aaa-a7f8-bc926ace5b4c ecab37cbd7714ddd81e1db5b37ba85b3 b6594bd13c35449abc258d30a1a2509b - - default default] [instance: eb82fc99-1632-42b0-90d2-7ce2b9d542a2] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:36:05 compute-1 nova_compute[225855]: 2026-01-20 14:36:05.908 225859 DEBUG nova.virt.libvirt.driver [None req-a10aa867-0a22-4aaa-a7f8-bc926ace5b4c ecab37cbd7714ddd81e1db5b37ba85b3 b6594bd13c35449abc258d30a1a2509b - - default default] [instance: eb82fc99-1632-42b0-90d2-7ce2b9d542a2] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:36:05 compute-1 nova_compute[225855]: 2026-01-20 14:36:05.908 225859 DEBUG nova.virt.libvirt.driver [None req-a10aa867-0a22-4aaa-a7f8-bc926ace5b4c ecab37cbd7714ddd81e1db5b37ba85b3 b6594bd13c35449abc258d30a1a2509b - - default default] [instance: eb82fc99-1632-42b0-90d2-7ce2b9d542a2] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:36:05 compute-1 nova_compute[225855]: 2026-01-20 14:36:05.913 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: eb82fc99-1632-42b0-90d2-7ce2b9d542a2] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:36:05 compute-1 nova_compute[225855]: 2026-01-20 14:36:05.955 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: eb82fc99-1632-42b0-90d2-7ce2b9d542a2] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 14:36:05 compute-1 nova_compute[225855]: 2026-01-20 14:36:05.955 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768919765.8631897, eb82fc99-1632-42b0-90d2-7ce2b9d542a2 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:36:05 compute-1 nova_compute[225855]: 2026-01-20 14:36:05.956 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: eb82fc99-1632-42b0-90d2-7ce2b9d542a2] VM Started (Lifecycle Event)
Jan 20 14:36:05 compute-1 nova_compute[225855]: 2026-01-20 14:36:05.994 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: eb82fc99-1632-42b0-90d2-7ce2b9d542a2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:36:06 compute-1 nova_compute[225855]: 2026-01-20 14:36:06.000 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: eb82fc99-1632-42b0-90d2-7ce2b9d542a2] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:36:06 compute-1 nova_compute[225855]: 2026-01-20 14:36:06.017 225859 INFO nova.compute.manager [None req-a10aa867-0a22-4aaa-a7f8-bc926ace5b4c ecab37cbd7714ddd81e1db5b37ba85b3 b6594bd13c35449abc258d30a1a2509b - - default default] [instance: eb82fc99-1632-42b0-90d2-7ce2b9d542a2] Took 3.84 seconds to spawn the instance on the hypervisor.
Jan 20 14:36:06 compute-1 nova_compute[225855]: 2026-01-20 14:36:06.018 225859 DEBUG nova.compute.manager [None req-a10aa867-0a22-4aaa-a7f8-bc926ace5b4c ecab37cbd7714ddd81e1db5b37ba85b3 b6594bd13c35449abc258d30a1a2509b - - default default] [instance: eb82fc99-1632-42b0-90d2-7ce2b9d542a2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:36:06 compute-1 nova_compute[225855]: 2026-01-20 14:36:06.049 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: eb82fc99-1632-42b0-90d2-7ce2b9d542a2] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 14:36:06 compute-1 ceph-mon[81775]: osdmap e206: 3 total, 3 up, 3 in
Jan 20 14:36:06 compute-1 ceph-mon[81775]: pgmap v1406: 321 pgs: 321 active+clean; 247 MiB data, 611 MiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 9.6 MiB/s wr, 188 op/s
Jan 20 14:36:06 compute-1 nova_compute[225855]: 2026-01-20 14:36:06.121 225859 INFO nova.compute.manager [None req-a10aa867-0a22-4aaa-a7f8-bc926ace5b4c ecab37cbd7714ddd81e1db5b37ba85b3 b6594bd13c35449abc258d30a1a2509b - - default default] [instance: eb82fc99-1632-42b0-90d2-7ce2b9d542a2] Took 4.81 seconds to build instance.
Jan 20 14:36:06 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e207 e207: 3 total, 3 up, 3 in
Jan 20 14:36:06 compute-1 nova_compute[225855]: 2026-01-20 14:36:06.164 225859 DEBUG oslo_concurrency.lockutils [None req-a10aa867-0a22-4aaa-a7f8-bc926ace5b4c ecab37cbd7714ddd81e1db5b37ba85b3 b6594bd13c35449abc258d30a1a2509b - - default default] Lock "eb82fc99-1632-42b0-90d2-7ce2b9d542a2" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 4.938s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:36:06 compute-1 sudo[247936]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:36:06 compute-1 sudo[247936]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:36:06 compute-1 sudo[247936]: pam_unix(sudo:session): session closed for user root
Jan 20 14:36:06 compute-1 sudo[247961]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 14:36:06 compute-1 sudo[247961]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:36:06 compute-1 sudo[247961]: pam_unix(sudo:session): session closed for user root
Jan 20 14:36:06 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:36:06 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:36:06 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:36:06.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:36:07 compute-1 ceph-mon[81775]: osdmap e207: 3 total, 3 up, 3 in
Jan 20 14:36:07 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:36:07 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:36:07 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e208 e208: 3 total, 3 up, 3 in
Jan 20 14:36:07 compute-1 nova_compute[225855]: 2026-01-20 14:36:07.458 225859 DEBUG nova.objects.instance [None req-8d12183e-a531-4316-927b-0a95364feca9 3d7010bab0db493e8ba3b1a86ad4cf7d 5202cc9c82134fadb20a0003e1f09cf3 - - default default] Lazy-loading 'pci_devices' on Instance uuid eb82fc99-1632-42b0-90d2-7ce2b9d542a2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:36:07 compute-1 nova_compute[225855]: 2026-01-20 14:36:07.479 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768919767.4789515, eb82fc99-1632-42b0-90d2-7ce2b9d542a2 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:36:07 compute-1 nova_compute[225855]: 2026-01-20 14:36:07.480 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: eb82fc99-1632-42b0-90d2-7ce2b9d542a2] VM Paused (Lifecycle Event)
Jan 20 14:36:07 compute-1 nova_compute[225855]: 2026-01-20 14:36:07.508 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: eb82fc99-1632-42b0-90d2-7ce2b9d542a2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:36:07 compute-1 nova_compute[225855]: 2026-01-20 14:36:07.514 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: eb82fc99-1632-42b0-90d2-7ce2b9d542a2] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:36:07 compute-1 nova_compute[225855]: 2026-01-20 14:36:07.552 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: eb82fc99-1632-42b0-90d2-7ce2b9d542a2] During sync_power_state the instance has a pending task (suspending). Skip.
Jan 20 14:36:07 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:36:07 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:36:07 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:36:07.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:36:08 compute-1 systemd[1]: machine-qemu\x2d25\x2dinstance\x2d00000034.scope: Deactivated successfully.
Jan 20 14:36:08 compute-1 systemd[1]: machine-qemu\x2d25\x2dinstance\x2d00000034.scope: Consumed 2.295s CPU time.
Jan 20 14:36:08 compute-1 systemd-machined[194361]: Machine qemu-25-instance-00000034 terminated.
Jan 20 14:36:08 compute-1 ceph-mon[81775]: osdmap e208: 3 total, 3 up, 3 in
Jan 20 14:36:08 compute-1 ceph-mon[81775]: pgmap v1409: 321 pgs: 321 active+clean; 310 MiB data, 636 MiB used, 20 GiB / 21 GiB avail; 9.3 MiB/s rd, 17 MiB/s wr, 336 op/s
Jan 20 14:36:08 compute-1 nova_compute[225855]: 2026-01-20 14:36:08.307 225859 DEBUG nova.compute.manager [None req-8d12183e-a531-4316-927b-0a95364feca9 3d7010bab0db493e8ba3b1a86ad4cf7d 5202cc9c82134fadb20a0003e1f09cf3 - - default default] [instance: eb82fc99-1632-42b0-90d2-7ce2b9d542a2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:36:08 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:36:08 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:36:08 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:36:08.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:36:09 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e209 e209: 3 total, 3 up, 3 in
Jan 20 14:36:09 compute-1 nova_compute[225855]: 2026-01-20 14:36:09.597 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:36:09 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:36:09 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:36:09 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:36:09.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:36:09 compute-1 nova_compute[225855]: 2026-01-20 14:36:09.638 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:36:10 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e209 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:36:10 compute-1 ceph-mon[81775]: pgmap v1410: 321 pgs: 321 active+clean; 329 MiB data, 649 MiB used, 20 GiB / 21 GiB avail; 9.1 MiB/s rd, 12 MiB/s wr, 398 op/s
Jan 20 14:36:10 compute-1 ceph-mon[81775]: osdmap e209: 3 total, 3 up, 3 in
Jan 20 14:36:10 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:36:10 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:36:10 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:36:10.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:36:11 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:36:11 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:36:11 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:36:11.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:36:11 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e210 e210: 3 total, 3 up, 3 in
Jan 20 14:36:12 compute-1 ovn_controller[130490]: 2026-01-20T14:36:12Z|00166|memory_trim|INFO|Detected inactivity (last active 30004 ms ago): trimming memory
Jan 20 14:36:12 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e211 e211: 3 total, 3 up, 3 in
Jan 20 14:36:12 compute-1 ceph-mon[81775]: pgmap v1412: 321 pgs: 321 active+clean; 367 MiB data, 708 MiB used, 20 GiB / 21 GiB avail; 10 MiB/s rd, 13 MiB/s wr, 594 op/s
Jan 20 14:36:12 compute-1 ceph-mon[81775]: osdmap e210: 3 total, 3 up, 3 in
Jan 20 14:36:12 compute-1 ceph-mon[81775]: osdmap e211: 3 total, 3 up, 3 in
Jan 20 14:36:12 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:36:12 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:36:12 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:36:12.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:36:13 compute-1 podman[247994]: 2026-01-20 14:36:13.065685236 +0000 UTC m=+0.107160425 container health_status 72be95c12e041eb2cd1a16ad70dfa1391d9f03a5f3180a614ead9459dcfcd46f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.build-date=20251202)
Jan 20 14:36:13 compute-1 nova_compute[225855]: 2026-01-20 14:36:13.481 225859 DEBUG oslo_concurrency.lockutils [None req-c61c69e0-76cc-4797-9cf1-d369ea29c6b0 ecab37cbd7714ddd81e1db5b37ba85b3 b6594bd13c35449abc258d30a1a2509b - - default default] Acquiring lock "eb82fc99-1632-42b0-90d2-7ce2b9d542a2" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:36:13 compute-1 nova_compute[225855]: 2026-01-20 14:36:13.482 225859 DEBUG oslo_concurrency.lockutils [None req-c61c69e0-76cc-4797-9cf1-d369ea29c6b0 ecab37cbd7714ddd81e1db5b37ba85b3 b6594bd13c35449abc258d30a1a2509b - - default default] Lock "eb82fc99-1632-42b0-90d2-7ce2b9d542a2" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:36:13 compute-1 nova_compute[225855]: 2026-01-20 14:36:13.482 225859 DEBUG oslo_concurrency.lockutils [None req-c61c69e0-76cc-4797-9cf1-d369ea29c6b0 ecab37cbd7714ddd81e1db5b37ba85b3 b6594bd13c35449abc258d30a1a2509b - - default default] Acquiring lock "eb82fc99-1632-42b0-90d2-7ce2b9d542a2-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:36:13 compute-1 nova_compute[225855]: 2026-01-20 14:36:13.482 225859 DEBUG oslo_concurrency.lockutils [None req-c61c69e0-76cc-4797-9cf1-d369ea29c6b0 ecab37cbd7714ddd81e1db5b37ba85b3 b6594bd13c35449abc258d30a1a2509b - - default default] Lock "eb82fc99-1632-42b0-90d2-7ce2b9d542a2-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:36:13 compute-1 nova_compute[225855]: 2026-01-20 14:36:13.482 225859 DEBUG oslo_concurrency.lockutils [None req-c61c69e0-76cc-4797-9cf1-d369ea29c6b0 ecab37cbd7714ddd81e1db5b37ba85b3 b6594bd13c35449abc258d30a1a2509b - - default default] Lock "eb82fc99-1632-42b0-90d2-7ce2b9d542a2-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:36:13 compute-1 nova_compute[225855]: 2026-01-20 14:36:13.483 225859 INFO nova.compute.manager [None req-c61c69e0-76cc-4797-9cf1-d369ea29c6b0 ecab37cbd7714ddd81e1db5b37ba85b3 b6594bd13c35449abc258d30a1a2509b - - default default] [instance: eb82fc99-1632-42b0-90d2-7ce2b9d542a2] Terminating instance
Jan 20 14:36:13 compute-1 nova_compute[225855]: 2026-01-20 14:36:13.484 225859 DEBUG oslo_concurrency.lockutils [None req-c61c69e0-76cc-4797-9cf1-d369ea29c6b0 ecab37cbd7714ddd81e1db5b37ba85b3 b6594bd13c35449abc258d30a1a2509b - - default default] Acquiring lock "refresh_cache-eb82fc99-1632-42b0-90d2-7ce2b9d542a2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:36:13 compute-1 nova_compute[225855]: 2026-01-20 14:36:13.484 225859 DEBUG oslo_concurrency.lockutils [None req-c61c69e0-76cc-4797-9cf1-d369ea29c6b0 ecab37cbd7714ddd81e1db5b37ba85b3 b6594bd13c35449abc258d30a1a2509b - - default default] Acquired lock "refresh_cache-eb82fc99-1632-42b0-90d2-7ce2b9d542a2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:36:13 compute-1 nova_compute[225855]: 2026-01-20 14:36:13.484 225859 DEBUG nova.network.neutron [None req-c61c69e0-76cc-4797-9cf1-d369ea29c6b0 ecab37cbd7714ddd81e1db5b37ba85b3 b6594bd13c35449abc258d30a1a2509b - - default default] [instance: eb82fc99-1632-42b0-90d2-7ce2b9d542a2] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 14:36:13 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 20 14:36:13 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/920590240' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 14:36:13 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 20 14:36:13 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/920590240' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 14:36:13 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:36:13 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:36:13 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:36:13.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:36:13 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/920590240' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 14:36:13 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/920590240' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 14:36:13 compute-1 nova_compute[225855]: 2026-01-20 14:36:13.800 225859 DEBUG nova.network.neutron [None req-c61c69e0-76cc-4797-9cf1-d369ea29c6b0 ecab37cbd7714ddd81e1db5b37ba85b3 b6594bd13c35449abc258d30a1a2509b - - default default] [instance: eb82fc99-1632-42b0-90d2-7ce2b9d542a2] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 20 14:36:14 compute-1 nova_compute[225855]: 2026-01-20 14:36:14.599 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:36:14 compute-1 nova_compute[225855]: 2026-01-20 14:36:14.639 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:36:14 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:36:14 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:36:14 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:36:14.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:36:14 compute-1 ceph-mon[81775]: pgmap v1415: 321 pgs: 321 active+clean; 367 MiB data, 708 MiB used, 20 GiB / 21 GiB avail; 6.3 MiB/s rd, 7.8 MiB/s wr, 495 op/s
Jan 20 14:36:14 compute-1 nova_compute[225855]: 2026-01-20 14:36:14.810 225859 DEBUG nova.network.neutron [None req-c61c69e0-76cc-4797-9cf1-d369ea29c6b0 ecab37cbd7714ddd81e1db5b37ba85b3 b6594bd13c35449abc258d30a1a2509b - - default default] [instance: eb82fc99-1632-42b0-90d2-7ce2b9d542a2] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:36:14 compute-1 nova_compute[225855]: 2026-01-20 14:36:14.830 225859 DEBUG oslo_concurrency.lockutils [None req-c61c69e0-76cc-4797-9cf1-d369ea29c6b0 ecab37cbd7714ddd81e1db5b37ba85b3 b6594bd13c35449abc258d30a1a2509b - - default default] Releasing lock "refresh_cache-eb82fc99-1632-42b0-90d2-7ce2b9d542a2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:36:14 compute-1 nova_compute[225855]: 2026-01-20 14:36:14.831 225859 DEBUG nova.compute.manager [None req-c61c69e0-76cc-4797-9cf1-d369ea29c6b0 ecab37cbd7714ddd81e1db5b37ba85b3 b6594bd13c35449abc258d30a1a2509b - - default default] [instance: eb82fc99-1632-42b0-90d2-7ce2b9d542a2] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 20 14:36:14 compute-1 nova_compute[225855]: 2026-01-20 14:36:14.841 225859 INFO nova.virt.libvirt.driver [-] [instance: eb82fc99-1632-42b0-90d2-7ce2b9d542a2] Instance destroyed successfully.
Jan 20 14:36:14 compute-1 nova_compute[225855]: 2026-01-20 14:36:14.842 225859 DEBUG nova.objects.instance [None req-c61c69e0-76cc-4797-9cf1-d369ea29c6b0 ecab37cbd7714ddd81e1db5b37ba85b3 b6594bd13c35449abc258d30a1a2509b - - default default] Lazy-loading 'resources' on Instance uuid eb82fc99-1632-42b0-90d2-7ce2b9d542a2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:36:15 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e211 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:36:15 compute-1 nova_compute[225855]: 2026-01-20 14:36:15.359 225859 INFO nova.virt.libvirt.driver [None req-c61c69e0-76cc-4797-9cf1-d369ea29c6b0 ecab37cbd7714ddd81e1db5b37ba85b3 b6594bd13c35449abc258d30a1a2509b - - default default] [instance: eb82fc99-1632-42b0-90d2-7ce2b9d542a2] Deleting instance files /var/lib/nova/instances/eb82fc99-1632-42b0-90d2-7ce2b9d542a2_del
Jan 20 14:36:15 compute-1 nova_compute[225855]: 2026-01-20 14:36:15.360 225859 INFO nova.virt.libvirt.driver [None req-c61c69e0-76cc-4797-9cf1-d369ea29c6b0 ecab37cbd7714ddd81e1db5b37ba85b3 b6594bd13c35449abc258d30a1a2509b - - default default] [instance: eb82fc99-1632-42b0-90d2-7ce2b9d542a2] Deletion of /var/lib/nova/instances/eb82fc99-1632-42b0-90d2-7ce2b9d542a2_del complete
Jan 20 14:36:15 compute-1 nova_compute[225855]: 2026-01-20 14:36:15.425 225859 INFO nova.compute.manager [None req-c61c69e0-76cc-4797-9cf1-d369ea29c6b0 ecab37cbd7714ddd81e1db5b37ba85b3 b6594bd13c35449abc258d30a1a2509b - - default default] [instance: eb82fc99-1632-42b0-90d2-7ce2b9d542a2] Took 0.59 seconds to destroy the instance on the hypervisor.
Jan 20 14:36:15 compute-1 nova_compute[225855]: 2026-01-20 14:36:15.426 225859 DEBUG oslo.service.loopingcall [None req-c61c69e0-76cc-4797-9cf1-d369ea29c6b0 ecab37cbd7714ddd81e1db5b37ba85b3 b6594bd13c35449abc258d30a1a2509b - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 20 14:36:15 compute-1 nova_compute[225855]: 2026-01-20 14:36:15.426 225859 DEBUG nova.compute.manager [-] [instance: eb82fc99-1632-42b0-90d2-7ce2b9d542a2] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 20 14:36:15 compute-1 nova_compute[225855]: 2026-01-20 14:36:15.426 225859 DEBUG nova.network.neutron [-] [instance: eb82fc99-1632-42b0-90d2-7ce2b9d542a2] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 20 14:36:15 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:36:15 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:36:15 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:36:15.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:36:15 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e212 e212: 3 total, 3 up, 3 in
Jan 20 14:36:16 compute-1 nova_compute[225855]: 2026-01-20 14:36:16.112 225859 DEBUG nova.network.neutron [-] [instance: eb82fc99-1632-42b0-90d2-7ce2b9d542a2] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 20 14:36:16 compute-1 nova_compute[225855]: 2026-01-20 14:36:16.128 225859 DEBUG nova.network.neutron [-] [instance: eb82fc99-1632-42b0-90d2-7ce2b9d542a2] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:36:16 compute-1 nova_compute[225855]: 2026-01-20 14:36:16.145 225859 INFO nova.compute.manager [-] [instance: eb82fc99-1632-42b0-90d2-7ce2b9d542a2] Took 0.72 seconds to deallocate network for instance.
Jan 20 14:36:16 compute-1 nova_compute[225855]: 2026-01-20 14:36:16.212 225859 DEBUG oslo_concurrency.lockutils [None req-c61c69e0-76cc-4797-9cf1-d369ea29c6b0 ecab37cbd7714ddd81e1db5b37ba85b3 b6594bd13c35449abc258d30a1a2509b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:36:16 compute-1 nova_compute[225855]: 2026-01-20 14:36:16.212 225859 DEBUG oslo_concurrency.lockutils [None req-c61c69e0-76cc-4797-9cf1-d369ea29c6b0 ecab37cbd7714ddd81e1db5b37ba85b3 b6594bd13c35449abc258d30a1a2509b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:36:16 compute-1 nova_compute[225855]: 2026-01-20 14:36:16.268 225859 DEBUG oslo_concurrency.processutils [None req-c61c69e0-76cc-4797-9cf1-d369ea29c6b0 ecab37cbd7714ddd81e1db5b37ba85b3 b6594bd13c35449abc258d30a1a2509b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:36:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:36:16.394 140354 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:36:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:36:16.395 140354 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:36:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:36:16.395 140354 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:36:16 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:36:16 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/717322656' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:36:16 compute-1 nova_compute[225855]: 2026-01-20 14:36:16.728 225859 DEBUG oslo_concurrency.processutils [None req-c61c69e0-76cc-4797-9cf1-d369ea29c6b0 ecab37cbd7714ddd81e1db5b37ba85b3 b6594bd13c35449abc258d30a1a2509b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:36:16 compute-1 nova_compute[225855]: 2026-01-20 14:36:16.734 225859 DEBUG nova.compute.provider_tree [None req-c61c69e0-76cc-4797-9cf1-d369ea29c6b0 ecab37cbd7714ddd81e1db5b37ba85b3 b6594bd13c35449abc258d30a1a2509b - - default default] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:36:16 compute-1 nova_compute[225855]: 2026-01-20 14:36:16.751 225859 DEBUG nova.scheduler.client.report [None req-c61c69e0-76cc-4797-9cf1-d369ea29c6b0 ecab37cbd7714ddd81e1db5b37ba85b3 b6594bd13c35449abc258d30a1a2509b - - default default] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:36:16 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:36:16 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:36:16 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:36:16.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:36:16 compute-1 nova_compute[225855]: 2026-01-20 14:36:16.777 225859 DEBUG oslo_concurrency.lockutils [None req-c61c69e0-76cc-4797-9cf1-d369ea29c6b0 ecab37cbd7714ddd81e1db5b37ba85b3 b6594bd13c35449abc258d30a1a2509b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.565s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:36:16 compute-1 ceph-mon[81775]: pgmap v1416: 321 pgs: 321 active+clean; 298 MiB data, 679 MiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 5.5 MiB/s wr, 436 op/s
Jan 20 14:36:16 compute-1 ceph-mon[81775]: osdmap e212: 3 total, 3 up, 3 in
Jan 20 14:36:16 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/717322656' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:36:16 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e213 e213: 3 total, 3 up, 3 in
Jan 20 14:36:16 compute-1 nova_compute[225855]: 2026-01-20 14:36:16.821 225859 INFO nova.scheduler.client.report [None req-c61c69e0-76cc-4797-9cf1-d369ea29c6b0 ecab37cbd7714ddd81e1db5b37ba85b3 b6594bd13c35449abc258d30a1a2509b - - default default] Deleted allocations for instance eb82fc99-1632-42b0-90d2-7ce2b9d542a2
Jan 20 14:36:16 compute-1 nova_compute[225855]: 2026-01-20 14:36:16.938 225859 DEBUG oslo_concurrency.lockutils [None req-c61c69e0-76cc-4797-9cf1-d369ea29c6b0 ecab37cbd7714ddd81e1db5b37ba85b3 b6594bd13c35449abc258d30a1a2509b - - default default] Lock "eb82fc99-1632-42b0-90d2-7ce2b9d542a2" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.456s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:36:17 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e214 e214: 3 total, 3 up, 3 in
Jan 20 14:36:17 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:36:17 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:36:17 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:36:17.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:36:17 compute-1 nova_compute[225855]: 2026-01-20 14:36:17.691 225859 DEBUG oslo_concurrency.lockutils [None req-728666bb-0e91-4b68-b737-aae579f8c119 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] Acquiring lock "59387c9d-df91-4f43-b389-00174486fc84" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:36:17 compute-1 nova_compute[225855]: 2026-01-20 14:36:17.692 225859 DEBUG oslo_concurrency.lockutils [None req-728666bb-0e91-4b68-b737-aae579f8c119 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] Lock "59387c9d-df91-4f43-b389-00174486fc84" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:36:17 compute-1 nova_compute[225855]: 2026-01-20 14:36:17.714 225859 DEBUG nova.compute.manager [None req-728666bb-0e91-4b68-b737-aae579f8c119 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] [instance: 59387c9d-df91-4f43-b389-00174486fc84] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 20 14:36:17 compute-1 nova_compute[225855]: 2026-01-20 14:36:17.783 225859 DEBUG oslo_concurrency.lockutils [None req-728666bb-0e91-4b68-b737-aae579f8c119 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:36:17 compute-1 nova_compute[225855]: 2026-01-20 14:36:17.784 225859 DEBUG oslo_concurrency.lockutils [None req-728666bb-0e91-4b68-b737-aae579f8c119 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:36:17 compute-1 nova_compute[225855]: 2026-01-20 14:36:17.793 225859 DEBUG nova.virt.hardware [None req-728666bb-0e91-4b68-b737-aae579f8c119 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 20 14:36:17 compute-1 nova_compute[225855]: 2026-01-20 14:36:17.794 225859 INFO nova.compute.claims [None req-728666bb-0e91-4b68-b737-aae579f8c119 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] [instance: 59387c9d-df91-4f43-b389-00174486fc84] Claim successful on node compute-1.ctlplane.example.com
Jan 20 14:36:17 compute-1 ceph-mon[81775]: osdmap e213: 3 total, 3 up, 3 in
Jan 20 14:36:17 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/2591651344' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:36:17 compute-1 ceph-mon[81775]: osdmap e214: 3 total, 3 up, 3 in
Jan 20 14:36:17 compute-1 nova_compute[225855]: 2026-01-20 14:36:17.919 225859 DEBUG oslo_concurrency.processutils [None req-728666bb-0e91-4b68-b737-aae579f8c119 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:36:18 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:36:18 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/169692220' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:36:18 compute-1 nova_compute[225855]: 2026-01-20 14:36:18.340 225859 DEBUG oslo_concurrency.processutils [None req-728666bb-0e91-4b68-b737-aae579f8c119 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.421s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:36:18 compute-1 nova_compute[225855]: 2026-01-20 14:36:18.345 225859 DEBUG nova.compute.provider_tree [None req-728666bb-0e91-4b68-b737-aae579f8c119 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:36:18 compute-1 nova_compute[225855]: 2026-01-20 14:36:18.398 225859 DEBUG nova.scheduler.client.report [None req-728666bb-0e91-4b68-b737-aae579f8c119 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:36:18 compute-1 nova_compute[225855]: 2026-01-20 14:36:18.522 225859 DEBUG oslo_concurrency.lockutils [None req-728666bb-0e91-4b68-b737-aae579f8c119 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.738s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:36:18 compute-1 nova_compute[225855]: 2026-01-20 14:36:18.523 225859 DEBUG nova.compute.manager [None req-728666bb-0e91-4b68-b737-aae579f8c119 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] [instance: 59387c9d-df91-4f43-b389-00174486fc84] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 20 14:36:18 compute-1 nova_compute[225855]: 2026-01-20 14:36:18.571 225859 DEBUG nova.compute.manager [None req-728666bb-0e91-4b68-b737-aae579f8c119 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] [instance: 59387c9d-df91-4f43-b389-00174486fc84] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 20 14:36:18 compute-1 nova_compute[225855]: 2026-01-20 14:36:18.572 225859 DEBUG nova.network.neutron [None req-728666bb-0e91-4b68-b737-aae579f8c119 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] [instance: 59387c9d-df91-4f43-b389-00174486fc84] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 20 14:36:18 compute-1 nova_compute[225855]: 2026-01-20 14:36:18.591 225859 INFO nova.virt.libvirt.driver [None req-728666bb-0e91-4b68-b737-aae579f8c119 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] [instance: 59387c9d-df91-4f43-b389-00174486fc84] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 20 14:36:18 compute-1 nova_compute[225855]: 2026-01-20 14:36:18.615 225859 DEBUG nova.compute.manager [None req-728666bb-0e91-4b68-b737-aae579f8c119 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] [instance: 59387c9d-df91-4f43-b389-00174486fc84] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 20 14:36:18 compute-1 nova_compute[225855]: 2026-01-20 14:36:18.703 225859 DEBUG nova.compute.manager [None req-728666bb-0e91-4b68-b737-aae579f8c119 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] [instance: 59387c9d-df91-4f43-b389-00174486fc84] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 20 14:36:18 compute-1 nova_compute[225855]: 2026-01-20 14:36:18.704 225859 DEBUG nova.virt.libvirt.driver [None req-728666bb-0e91-4b68-b737-aae579f8c119 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] [instance: 59387c9d-df91-4f43-b389-00174486fc84] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 20 14:36:18 compute-1 nova_compute[225855]: 2026-01-20 14:36:18.704 225859 INFO nova.virt.libvirt.driver [None req-728666bb-0e91-4b68-b737-aae579f8c119 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] [instance: 59387c9d-df91-4f43-b389-00174486fc84] Creating image(s)
Jan 20 14:36:18 compute-1 nova_compute[225855]: 2026-01-20 14:36:18.733 225859 DEBUG nova.storage.rbd_utils [None req-728666bb-0e91-4b68-b737-aae579f8c119 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] rbd image 59387c9d-df91-4f43-b389-00174486fc84_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:36:18 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:36:18 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:36:18 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:36:18.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:36:18 compute-1 nova_compute[225855]: 2026-01-20 14:36:18.760 225859 DEBUG nova.storage.rbd_utils [None req-728666bb-0e91-4b68-b737-aae579f8c119 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] rbd image 59387c9d-df91-4f43-b389-00174486fc84_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:36:18 compute-1 nova_compute[225855]: 2026-01-20 14:36:18.785 225859 DEBUG nova.storage.rbd_utils [None req-728666bb-0e91-4b68-b737-aae579f8c119 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] rbd image 59387c9d-df91-4f43-b389-00174486fc84_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:36:18 compute-1 nova_compute[225855]: 2026-01-20 14:36:18.789 225859 DEBUG oslo_concurrency.processutils [None req-728666bb-0e91-4b68-b737-aae579f8c119 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:36:18 compute-1 ceph-mon[81775]: pgmap v1419: 321 pgs: 321 active+clean; 270 MiB data, 644 MiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 3.8 MiB/s wr, 233 op/s
Jan 20 14:36:18 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/169692220' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:36:18 compute-1 nova_compute[225855]: 2026-01-20 14:36:18.869 225859 DEBUG oslo_concurrency.processutils [None req-728666bb-0e91-4b68-b737-aae579f8c119 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:36:18 compute-1 nova_compute[225855]: 2026-01-20 14:36:18.870 225859 DEBUG oslo_concurrency.lockutils [None req-728666bb-0e91-4b68-b737-aae579f8c119 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] Acquiring lock "82d5c1918fd7c974214c7a48c1793a7a82560462" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:36:18 compute-1 nova_compute[225855]: 2026-01-20 14:36:18.871 225859 DEBUG oslo_concurrency.lockutils [None req-728666bb-0e91-4b68-b737-aae579f8c119 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:36:18 compute-1 nova_compute[225855]: 2026-01-20 14:36:18.871 225859 DEBUG oslo_concurrency.lockutils [None req-728666bb-0e91-4b68-b737-aae579f8c119 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:36:18 compute-1 nova_compute[225855]: 2026-01-20 14:36:18.894 225859 DEBUG nova.storage.rbd_utils [None req-728666bb-0e91-4b68-b737-aae579f8c119 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] rbd image 59387c9d-df91-4f43-b389-00174486fc84_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:36:18 compute-1 nova_compute[225855]: 2026-01-20 14:36:18.897 225859 DEBUG oslo_concurrency.processutils [None req-728666bb-0e91-4b68-b737-aae579f8c119 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 59387c9d-df91-4f43-b389-00174486fc84_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:36:19 compute-1 ceph-osd[79119]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #46. Immutable memtables: 3.
Jan 20 14:36:19 compute-1 nova_compute[225855]: 2026-01-20 14:36:19.154 225859 DEBUG nova.policy [None req-728666bb-0e91-4b68-b737-aae579f8c119 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '56e2959629114d3d8a48e7a80ed96c4b', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '3750c56415134773aa9d9880038f1749', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 20 14:36:19 compute-1 nova_compute[225855]: 2026-01-20 14:36:19.291 225859 DEBUG oslo_concurrency.processutils [None req-728666bb-0e91-4b68-b737-aae579f8c119 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 59387c9d-df91-4f43-b389-00174486fc84_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.393s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:36:19 compute-1 nova_compute[225855]: 2026-01-20 14:36:19.343 225859 DEBUG nova.storage.rbd_utils [None req-728666bb-0e91-4b68-b737-aae579f8c119 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] resizing rbd image 59387c9d-df91-4f43-b389-00174486fc84_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 20 14:36:19 compute-1 nova_compute[225855]: 2026-01-20 14:36:19.471 225859 DEBUG nova.objects.instance [None req-728666bb-0e91-4b68-b737-aae579f8c119 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] Lazy-loading 'migration_context' on Instance uuid 59387c9d-df91-4f43-b389-00174486fc84 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:36:19 compute-1 nova_compute[225855]: 2026-01-20 14:36:19.488 225859 DEBUG nova.virt.libvirt.driver [None req-728666bb-0e91-4b68-b737-aae579f8c119 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] [instance: 59387c9d-df91-4f43-b389-00174486fc84] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 20 14:36:19 compute-1 nova_compute[225855]: 2026-01-20 14:36:19.488 225859 DEBUG nova.virt.libvirt.driver [None req-728666bb-0e91-4b68-b737-aae579f8c119 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] [instance: 59387c9d-df91-4f43-b389-00174486fc84] Ensure instance console log exists: /var/lib/nova/instances/59387c9d-df91-4f43-b389-00174486fc84/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 20 14:36:19 compute-1 nova_compute[225855]: 2026-01-20 14:36:19.489 225859 DEBUG oslo_concurrency.lockutils [None req-728666bb-0e91-4b68-b737-aae579f8c119 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:36:19 compute-1 nova_compute[225855]: 2026-01-20 14:36:19.490 225859 DEBUG oslo_concurrency.lockutils [None req-728666bb-0e91-4b68-b737-aae579f8c119 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:36:19 compute-1 nova_compute[225855]: 2026-01-20 14:36:19.490 225859 DEBUG oslo_concurrency.lockutils [None req-728666bb-0e91-4b68-b737-aae579f8c119 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:36:19 compute-1 nova_compute[225855]: 2026-01-20 14:36:19.600 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:36:19 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:36:19 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:36:19 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:36:19.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:36:19 compute-1 nova_compute[225855]: 2026-01-20 14:36:19.641 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:36:19 compute-1 nova_compute[225855]: 2026-01-20 14:36:19.965 225859 DEBUG nova.network.neutron [None req-728666bb-0e91-4b68-b737-aae579f8c119 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] [instance: 59387c9d-df91-4f43-b389-00174486fc84] Successfully created port: f9059531-e6dc-4451-9c17-ec3b63e4b85f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 20 14:36:20 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e214 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:36:20 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:36:20 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:36:20 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:36:20.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:36:20 compute-1 ceph-mon[81775]: pgmap v1421: 321 pgs: 321 active+clean; 261 MiB data, 643 MiB used, 20 GiB / 21 GiB avail; 5.1 MiB/s rd, 6.0 MiB/s wr, 254 op/s
Jan 20 14:36:20 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/3748006424' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:36:21 compute-1 nova_compute[225855]: 2026-01-20 14:36:21.340 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:36:21 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:36:21 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:36:21 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:36:21.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:36:21 compute-1 ceph-mon[81775]: pgmap v1422: 321 pgs: 321 active+clean; 263 MiB data, 697 MiB used, 20 GiB / 21 GiB avail; 7.9 MiB/s rd, 11 MiB/s wr, 340 op/s
Jan 20 14:36:22 compute-1 nova_compute[225855]: 2026-01-20 14:36:22.181 225859 DEBUG nova.network.neutron [None req-728666bb-0e91-4b68-b737-aae579f8c119 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] [instance: 59387c9d-df91-4f43-b389-00174486fc84] Successfully updated port: f9059531-e6dc-4451-9c17-ec3b63e4b85f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 20 14:36:22 compute-1 nova_compute[225855]: 2026-01-20 14:36:22.202 225859 DEBUG oslo_concurrency.lockutils [None req-728666bb-0e91-4b68-b737-aae579f8c119 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] Acquiring lock "refresh_cache-59387c9d-df91-4f43-b389-00174486fc84" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:36:22 compute-1 nova_compute[225855]: 2026-01-20 14:36:22.202 225859 DEBUG oslo_concurrency.lockutils [None req-728666bb-0e91-4b68-b737-aae579f8c119 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] Acquired lock "refresh_cache-59387c9d-df91-4f43-b389-00174486fc84" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:36:22 compute-1 nova_compute[225855]: 2026-01-20 14:36:22.202 225859 DEBUG nova.network.neutron [None req-728666bb-0e91-4b68-b737-aae579f8c119 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] [instance: 59387c9d-df91-4f43-b389-00174486fc84] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 14:36:22 compute-1 nova_compute[225855]: 2026-01-20 14:36:22.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:36:22 compute-1 nova_compute[225855]: 2026-01-20 14:36:22.339 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 14:36:22 compute-1 nova_compute[225855]: 2026-01-20 14:36:22.456 225859 DEBUG nova.network.neutron [None req-728666bb-0e91-4b68-b737-aae579f8c119 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] [instance: 59387c9d-df91-4f43-b389-00174486fc84] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 20 14:36:22 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e215 e215: 3 total, 3 up, 3 in
Jan 20 14:36:22 compute-1 nova_compute[225855]: 2026-01-20 14:36:22.739 225859 DEBUG nova.compute.manager [req-42fd0086-13c1-4d18-a974-8ac33f18fb9f req-0561c550-7ff4-4448-aaa6-8ca74d01ae27 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 59387c9d-df91-4f43-b389-00174486fc84] Received event network-changed-f9059531-e6dc-4451-9c17-ec3b63e4b85f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:36:22 compute-1 nova_compute[225855]: 2026-01-20 14:36:22.740 225859 DEBUG nova.compute.manager [req-42fd0086-13c1-4d18-a974-8ac33f18fb9f req-0561c550-7ff4-4448-aaa6-8ca74d01ae27 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 59387c9d-df91-4f43-b389-00174486fc84] Refreshing instance network info cache due to event network-changed-f9059531-e6dc-4451-9c17-ec3b63e4b85f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 14:36:22 compute-1 nova_compute[225855]: 2026-01-20 14:36:22.740 225859 DEBUG oslo_concurrency.lockutils [req-42fd0086-13c1-4d18-a974-8ac33f18fb9f req-0561c550-7ff4-4448-aaa6-8ca74d01ae27 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-59387c9d-df91-4f43-b389-00174486fc84" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:36:22 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:36:22 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:36:22 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:36:22.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:36:23 compute-1 podman[248254]: 2026-01-20 14:36:23.02530031 +0000 UTC m=+0.058952065 container health_status 533cd8e0c3b8b7f910f15cef49e7f879ab46c12ce646a35628c15f13725eb500 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:36:23 compute-1 sudo[248273]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:36:23 compute-1 sudo[248273]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:36:23 compute-1 sudo[248273]: pam_unix(sudo:session): session closed for user root
Jan 20 14:36:23 compute-1 sudo[248298]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:36:23 compute-1 sudo[248298]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:36:23 compute-1 sudo[248298]: pam_unix(sudo:session): session closed for user root
Jan 20 14:36:23 compute-1 nova_compute[225855]: 2026-01-20 14:36:23.308 225859 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1768919768.3072345, eb82fc99-1632-42b0-90d2-7ce2b9d542a2 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:36:23 compute-1 nova_compute[225855]: 2026-01-20 14:36:23.309 225859 INFO nova.compute.manager [-] [instance: eb82fc99-1632-42b0-90d2-7ce2b9d542a2] VM Stopped (Lifecycle Event)
Jan 20 14:36:23 compute-1 nova_compute[225855]: 2026-01-20 14:36:23.331 225859 DEBUG nova.compute.manager [None req-69c21bcf-2fd0-42da-9c3e-d33b466a321e - - - - - -] [instance: eb82fc99-1632-42b0-90d2-7ce2b9d542a2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:36:23 compute-1 nova_compute[225855]: 2026-01-20 14:36:23.341 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:36:23 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:36:23 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:36:23 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:36:23.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:36:23 compute-1 ceph-mon[81775]: osdmap e215: 3 total, 3 up, 3 in
Jan 20 14:36:23 compute-1 nova_compute[225855]: 2026-01-20 14:36:23.959 225859 DEBUG nova.network.neutron [None req-728666bb-0e91-4b68-b737-aae579f8c119 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] [instance: 59387c9d-df91-4f43-b389-00174486fc84] Updating instance_info_cache with network_info: [{"id": "f9059531-e6dc-4451-9c17-ec3b63e4b85f", "address": "fa:16:3e:5e:a4:97", "network": {"id": "abb83e3e-0b12-431b-ad86-a1d271b5b46a", "bridge": "br-int", "label": "tempest-ImagesTestJSON-766235638-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3750c56415134773aa9d9880038f1749", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf9059531-e6", "ovs_interfaceid": "f9059531-e6dc-4451-9c17-ec3b63e4b85f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:36:23 compute-1 nova_compute[225855]: 2026-01-20 14:36:23.985 225859 DEBUG oslo_concurrency.lockutils [None req-728666bb-0e91-4b68-b737-aae579f8c119 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] Releasing lock "refresh_cache-59387c9d-df91-4f43-b389-00174486fc84" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:36:23 compute-1 nova_compute[225855]: 2026-01-20 14:36:23.985 225859 DEBUG nova.compute.manager [None req-728666bb-0e91-4b68-b737-aae579f8c119 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] [instance: 59387c9d-df91-4f43-b389-00174486fc84] Instance network_info: |[{"id": "f9059531-e6dc-4451-9c17-ec3b63e4b85f", "address": "fa:16:3e:5e:a4:97", "network": {"id": "abb83e3e-0b12-431b-ad86-a1d271b5b46a", "bridge": "br-int", "label": "tempest-ImagesTestJSON-766235638-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3750c56415134773aa9d9880038f1749", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf9059531-e6", "ovs_interfaceid": "f9059531-e6dc-4451-9c17-ec3b63e4b85f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 20 14:36:23 compute-1 nova_compute[225855]: 2026-01-20 14:36:23.985 225859 DEBUG oslo_concurrency.lockutils [req-42fd0086-13c1-4d18-a974-8ac33f18fb9f req-0561c550-7ff4-4448-aaa6-8ca74d01ae27 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-59387c9d-df91-4f43-b389-00174486fc84" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:36:23 compute-1 nova_compute[225855]: 2026-01-20 14:36:23.986 225859 DEBUG nova.network.neutron [req-42fd0086-13c1-4d18-a974-8ac33f18fb9f req-0561c550-7ff4-4448-aaa6-8ca74d01ae27 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 59387c9d-df91-4f43-b389-00174486fc84] Refreshing network info cache for port f9059531-e6dc-4451-9c17-ec3b63e4b85f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 14:36:23 compute-1 nova_compute[225855]: 2026-01-20 14:36:23.988 225859 DEBUG nova.virt.libvirt.driver [None req-728666bb-0e91-4b68-b737-aae579f8c119 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] [instance: 59387c9d-df91-4f43-b389-00174486fc84] Start _get_guest_xml network_info=[{"id": "f9059531-e6dc-4451-9c17-ec3b63e4b85f", "address": "fa:16:3e:5e:a4:97", "network": {"id": "abb83e3e-0b12-431b-ad86-a1d271b5b46a", "bridge": "br-int", "label": "tempest-ImagesTestJSON-766235638-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3750c56415134773aa9d9880038f1749", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf9059531-e6", "ovs_interfaceid": "f9059531-e6dc-4451-9c17-ec3b63e4b85f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'device_type': 'disk', 'encryption_options': None, 'size': 0, 'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'guest_format': None, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'image_id': 'a32b3e07-16d8-46fd-9a7b-c242c432fcf9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 20 14:36:23 compute-1 nova_compute[225855]: 2026-01-20 14:36:23.993 225859 WARNING nova.virt.libvirt.driver [None req-728666bb-0e91-4b68-b737-aae579f8c119 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 14:36:23 compute-1 nova_compute[225855]: 2026-01-20 14:36:23.998 225859 DEBUG nova.virt.libvirt.host [None req-728666bb-0e91-4b68-b737-aae579f8c119 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 20 14:36:23 compute-1 nova_compute[225855]: 2026-01-20 14:36:23.999 225859 DEBUG nova.virt.libvirt.host [None req-728666bb-0e91-4b68-b737-aae579f8c119 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 20 14:36:24 compute-1 nova_compute[225855]: 2026-01-20 14:36:24.005 225859 DEBUG nova.virt.libvirt.host [None req-728666bb-0e91-4b68-b737-aae579f8c119 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 20 14:36:24 compute-1 nova_compute[225855]: 2026-01-20 14:36:24.005 225859 DEBUG nova.virt.libvirt.host [None req-728666bb-0e91-4b68-b737-aae579f8c119 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 20 14:36:24 compute-1 nova_compute[225855]: 2026-01-20 14:36:24.008 225859 DEBUG nova.virt.libvirt.driver [None req-728666bb-0e91-4b68-b737-aae579f8c119 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 20 14:36:24 compute-1 nova_compute[225855]: 2026-01-20 14:36:24.008 225859 DEBUG nova.virt.hardware [None req-728666bb-0e91-4b68-b737-aae579f8c119 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-20T14:21:55Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='522deaab-a741-4dbb-932d-d8b13a211c33',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 20 14:36:24 compute-1 nova_compute[225855]: 2026-01-20 14:36:24.009 225859 DEBUG nova.virt.hardware [None req-728666bb-0e91-4b68-b737-aae579f8c119 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 20 14:36:24 compute-1 nova_compute[225855]: 2026-01-20 14:36:24.009 225859 DEBUG nova.virt.hardware [None req-728666bb-0e91-4b68-b737-aae579f8c119 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 20 14:36:24 compute-1 nova_compute[225855]: 2026-01-20 14:36:24.009 225859 DEBUG nova.virt.hardware [None req-728666bb-0e91-4b68-b737-aae579f8c119 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 20 14:36:24 compute-1 nova_compute[225855]: 2026-01-20 14:36:24.009 225859 DEBUG nova.virt.hardware [None req-728666bb-0e91-4b68-b737-aae579f8c119 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 20 14:36:24 compute-1 nova_compute[225855]: 2026-01-20 14:36:24.010 225859 DEBUG nova.virt.hardware [None req-728666bb-0e91-4b68-b737-aae579f8c119 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 20 14:36:24 compute-1 nova_compute[225855]: 2026-01-20 14:36:24.010 225859 DEBUG nova.virt.hardware [None req-728666bb-0e91-4b68-b737-aae579f8c119 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 20 14:36:24 compute-1 nova_compute[225855]: 2026-01-20 14:36:24.010 225859 DEBUG nova.virt.hardware [None req-728666bb-0e91-4b68-b737-aae579f8c119 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 20 14:36:24 compute-1 nova_compute[225855]: 2026-01-20 14:36:24.010 225859 DEBUG nova.virt.hardware [None req-728666bb-0e91-4b68-b737-aae579f8c119 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 20 14:36:24 compute-1 nova_compute[225855]: 2026-01-20 14:36:24.010 225859 DEBUG nova.virt.hardware [None req-728666bb-0e91-4b68-b737-aae579f8c119 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 20 14:36:24 compute-1 nova_compute[225855]: 2026-01-20 14:36:24.010 225859 DEBUG nova.virt.hardware [None req-728666bb-0e91-4b68-b737-aae579f8c119 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 20 14:36:24 compute-1 nova_compute[225855]: 2026-01-20 14:36:24.013 225859 DEBUG oslo_concurrency.processutils [None req-728666bb-0e91-4b68-b737-aae579f8c119 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:36:24 compute-1 nova_compute[225855]: 2026-01-20 14:36:24.340 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:36:24 compute-1 nova_compute[225855]: 2026-01-20 14:36:24.341 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 14:36:24 compute-1 nova_compute[225855]: 2026-01-20 14:36:24.341 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 20 14:36:24 compute-1 nova_compute[225855]: 2026-01-20 14:36:24.364 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] [instance: 59387c9d-df91-4f43-b389-00174486fc84] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Jan 20 14:36:24 compute-1 nova_compute[225855]: 2026-01-20 14:36:24.365 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 20 14:36:24 compute-1 nova_compute[225855]: 2026-01-20 14:36:24.366 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:36:24 compute-1 nova_compute[225855]: 2026-01-20 14:36:24.367 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:36:24 compute-1 nova_compute[225855]: 2026-01-20 14:36:24.389 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:36:24 compute-1 nova_compute[225855]: 2026-01-20 14:36:24.391 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:36:24 compute-1 nova_compute[225855]: 2026-01-20 14:36:24.391 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:36:24 compute-1 nova_compute[225855]: 2026-01-20 14:36:24.392 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 14:36:24 compute-1 nova_compute[225855]: 2026-01-20 14:36:24.392 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:36:24 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:36:24 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/240091767' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:36:24 compute-1 nova_compute[225855]: 2026-01-20 14:36:24.449 225859 DEBUG oslo_concurrency.processutils [None req-728666bb-0e91-4b68-b737-aae579f8c119 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:36:24 compute-1 nova_compute[225855]: 2026-01-20 14:36:24.489 225859 DEBUG nova.storage.rbd_utils [None req-728666bb-0e91-4b68-b737-aae579f8c119 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] rbd image 59387c9d-df91-4f43-b389-00174486fc84_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:36:24 compute-1 nova_compute[225855]: 2026-01-20 14:36:24.493 225859 DEBUG oslo_concurrency.processutils [None req-728666bb-0e91-4b68-b737-aae579f8c119 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:36:24 compute-1 nova_compute[225855]: 2026-01-20 14:36:24.602 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:36:24 compute-1 nova_compute[225855]: 2026-01-20 14:36:24.642 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:36:24 compute-1 ceph-mon[81775]: pgmap v1424: 321 pgs: 321 active+clean; 263 MiB data, 697 MiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 6.7 MiB/s wr, 219 op/s
Jan 20 14:36:24 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/240091767' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:36:24 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:36:24 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:36:24 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:36:24.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:36:24 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:36:24 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3193668944' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:36:24 compute-1 nova_compute[225855]: 2026-01-20 14:36:24.831 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:36:24 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:36:24 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1706595247' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:36:24 compute-1 nova_compute[225855]: 2026-01-20 14:36:24.964 225859 DEBUG oslo_concurrency.processutils [None req-728666bb-0e91-4b68-b737-aae579f8c119 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:36:24 compute-1 nova_compute[225855]: 2026-01-20 14:36:24.966 225859 DEBUG nova.virt.libvirt.vif [None req-728666bb-0e91-4b68-b737-aae579f8c119 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T14:36:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-1053628830',display_name='tempest-ImagesTestJSON-server-1053628830',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-imagestestjson-server-1053628830',id=53,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3750c56415134773aa9d9880038f1749',ramdisk_id='',reservation_id='r-2v7i5wvb',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-338390217',owner_user_name='tempest-ImagesTestJSON-338390217-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T14:36:18Z,user_data=None,user_id='56e2959629114d3d8a48e7a80ed96c4b',uuid=59387c9d-df91-4f43-b389-00174486fc84,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f9059531-e6dc-4451-9c17-ec3b63e4b85f", "address": "fa:16:3e:5e:a4:97", "network": {"id": "abb83e3e-0b12-431b-ad86-a1d271b5b46a", "bridge": "br-int", "label": "tempest-ImagesTestJSON-766235638-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3750c56415134773aa9d9880038f1749", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf9059531-e6", "ovs_interfaceid": "f9059531-e6dc-4451-9c17-ec3b63e4b85f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 20 14:36:24 compute-1 nova_compute[225855]: 2026-01-20 14:36:24.967 225859 DEBUG nova.network.os_vif_util [None req-728666bb-0e91-4b68-b737-aae579f8c119 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] Converting VIF {"id": "f9059531-e6dc-4451-9c17-ec3b63e4b85f", "address": "fa:16:3e:5e:a4:97", "network": {"id": "abb83e3e-0b12-431b-ad86-a1d271b5b46a", "bridge": "br-int", "label": "tempest-ImagesTestJSON-766235638-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3750c56415134773aa9d9880038f1749", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf9059531-e6", "ovs_interfaceid": "f9059531-e6dc-4451-9c17-ec3b63e4b85f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:36:24 compute-1 nova_compute[225855]: 2026-01-20 14:36:24.969 225859 DEBUG nova.network.os_vif_util [None req-728666bb-0e91-4b68-b737-aae579f8c119 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5e:a4:97,bridge_name='br-int',has_traffic_filtering=True,id=f9059531-e6dc-4451-9c17-ec3b63e4b85f,network=Network(abb83e3e-0b12-431b-ad86-a1d271b5b46a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf9059531-e6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:36:24 compute-1 nova_compute[225855]: 2026-01-20 14:36:24.972 225859 DEBUG nova.objects.instance [None req-728666bb-0e91-4b68-b737-aae579f8c119 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] Lazy-loading 'pci_devices' on Instance uuid 59387c9d-df91-4f43-b389-00174486fc84 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:36:24 compute-1 nova_compute[225855]: 2026-01-20 14:36:24.990 225859 DEBUG nova.virt.libvirt.driver [None req-728666bb-0e91-4b68-b737-aae579f8c119 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] [instance: 59387c9d-df91-4f43-b389-00174486fc84] End _get_guest_xml xml=<domain type="kvm">
Jan 20 14:36:24 compute-1 nova_compute[225855]:   <uuid>59387c9d-df91-4f43-b389-00174486fc84</uuid>
Jan 20 14:36:24 compute-1 nova_compute[225855]:   <name>instance-00000035</name>
Jan 20 14:36:24 compute-1 nova_compute[225855]:   <memory>131072</memory>
Jan 20 14:36:24 compute-1 nova_compute[225855]:   <vcpu>1</vcpu>
Jan 20 14:36:24 compute-1 nova_compute[225855]:   <metadata>
Jan 20 14:36:24 compute-1 nova_compute[225855]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 14:36:24 compute-1 nova_compute[225855]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 14:36:24 compute-1 nova_compute[225855]:       <nova:name>tempest-ImagesTestJSON-server-1053628830</nova:name>
Jan 20 14:36:24 compute-1 nova_compute[225855]:       <nova:creationTime>2026-01-20 14:36:23</nova:creationTime>
Jan 20 14:36:24 compute-1 nova_compute[225855]:       <nova:flavor name="m1.nano">
Jan 20 14:36:24 compute-1 nova_compute[225855]:         <nova:memory>128</nova:memory>
Jan 20 14:36:24 compute-1 nova_compute[225855]:         <nova:disk>1</nova:disk>
Jan 20 14:36:24 compute-1 nova_compute[225855]:         <nova:swap>0</nova:swap>
Jan 20 14:36:24 compute-1 nova_compute[225855]:         <nova:ephemeral>0</nova:ephemeral>
Jan 20 14:36:24 compute-1 nova_compute[225855]:         <nova:vcpus>1</nova:vcpus>
Jan 20 14:36:24 compute-1 nova_compute[225855]:       </nova:flavor>
Jan 20 14:36:24 compute-1 nova_compute[225855]:       <nova:owner>
Jan 20 14:36:24 compute-1 nova_compute[225855]:         <nova:user uuid="56e2959629114d3d8a48e7a80ed96c4b">tempest-ImagesTestJSON-338390217-project-member</nova:user>
Jan 20 14:36:24 compute-1 nova_compute[225855]:         <nova:project uuid="3750c56415134773aa9d9880038f1749">tempest-ImagesTestJSON-338390217</nova:project>
Jan 20 14:36:24 compute-1 nova_compute[225855]:       </nova:owner>
Jan 20 14:36:24 compute-1 nova_compute[225855]:       <nova:root type="image" uuid="a32b3e07-16d8-46fd-9a7b-c242c432fcf9"/>
Jan 20 14:36:24 compute-1 nova_compute[225855]:       <nova:ports>
Jan 20 14:36:24 compute-1 nova_compute[225855]:         <nova:port uuid="f9059531-e6dc-4451-9c17-ec3b63e4b85f">
Jan 20 14:36:24 compute-1 nova_compute[225855]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Jan 20 14:36:24 compute-1 nova_compute[225855]:         </nova:port>
Jan 20 14:36:24 compute-1 nova_compute[225855]:       </nova:ports>
Jan 20 14:36:24 compute-1 nova_compute[225855]:     </nova:instance>
Jan 20 14:36:24 compute-1 nova_compute[225855]:   </metadata>
Jan 20 14:36:24 compute-1 nova_compute[225855]:   <sysinfo type="smbios">
Jan 20 14:36:24 compute-1 nova_compute[225855]:     <system>
Jan 20 14:36:24 compute-1 nova_compute[225855]:       <entry name="manufacturer">RDO</entry>
Jan 20 14:36:24 compute-1 nova_compute[225855]:       <entry name="product">OpenStack Compute</entry>
Jan 20 14:36:24 compute-1 nova_compute[225855]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 14:36:24 compute-1 nova_compute[225855]:       <entry name="serial">59387c9d-df91-4f43-b389-00174486fc84</entry>
Jan 20 14:36:24 compute-1 nova_compute[225855]:       <entry name="uuid">59387c9d-df91-4f43-b389-00174486fc84</entry>
Jan 20 14:36:24 compute-1 nova_compute[225855]:       <entry name="family">Virtual Machine</entry>
Jan 20 14:36:24 compute-1 nova_compute[225855]:     </system>
Jan 20 14:36:24 compute-1 nova_compute[225855]:   </sysinfo>
Jan 20 14:36:24 compute-1 nova_compute[225855]:   <os>
Jan 20 14:36:24 compute-1 nova_compute[225855]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 20 14:36:24 compute-1 nova_compute[225855]:     <boot dev="hd"/>
Jan 20 14:36:24 compute-1 nova_compute[225855]:     <smbios mode="sysinfo"/>
Jan 20 14:36:24 compute-1 nova_compute[225855]:   </os>
Jan 20 14:36:24 compute-1 nova_compute[225855]:   <features>
Jan 20 14:36:24 compute-1 nova_compute[225855]:     <acpi/>
Jan 20 14:36:24 compute-1 nova_compute[225855]:     <apic/>
Jan 20 14:36:24 compute-1 nova_compute[225855]:     <vmcoreinfo/>
Jan 20 14:36:24 compute-1 nova_compute[225855]:   </features>
Jan 20 14:36:24 compute-1 nova_compute[225855]:   <clock offset="utc">
Jan 20 14:36:24 compute-1 nova_compute[225855]:     <timer name="pit" tickpolicy="delay"/>
Jan 20 14:36:24 compute-1 nova_compute[225855]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 20 14:36:24 compute-1 nova_compute[225855]:     <timer name="hpet" present="no"/>
Jan 20 14:36:24 compute-1 nova_compute[225855]:   </clock>
Jan 20 14:36:24 compute-1 nova_compute[225855]:   <cpu mode="custom" match="exact">
Jan 20 14:36:24 compute-1 nova_compute[225855]:     <model>Nehalem</model>
Jan 20 14:36:24 compute-1 nova_compute[225855]:     <topology sockets="1" cores="1" threads="1"/>
Jan 20 14:36:24 compute-1 nova_compute[225855]:   </cpu>
Jan 20 14:36:24 compute-1 nova_compute[225855]:   <devices>
Jan 20 14:36:24 compute-1 nova_compute[225855]:     <disk type="network" device="disk">
Jan 20 14:36:24 compute-1 nova_compute[225855]:       <driver type="raw" cache="none"/>
Jan 20 14:36:24 compute-1 nova_compute[225855]:       <source protocol="rbd" name="vms/59387c9d-df91-4f43-b389-00174486fc84_disk">
Jan 20 14:36:24 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:36:24 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:36:24 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:36:24 compute-1 nova_compute[225855]:       </source>
Jan 20 14:36:24 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 14:36:24 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:36:24 compute-1 nova_compute[225855]:       </auth>
Jan 20 14:36:24 compute-1 nova_compute[225855]:       <target dev="vda" bus="virtio"/>
Jan 20 14:36:24 compute-1 nova_compute[225855]:     </disk>
Jan 20 14:36:24 compute-1 nova_compute[225855]:     <disk type="network" device="cdrom">
Jan 20 14:36:24 compute-1 nova_compute[225855]:       <driver type="raw" cache="none"/>
Jan 20 14:36:24 compute-1 nova_compute[225855]:       <source protocol="rbd" name="vms/59387c9d-df91-4f43-b389-00174486fc84_disk.config">
Jan 20 14:36:24 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:36:24 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:36:24 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:36:24 compute-1 nova_compute[225855]:       </source>
Jan 20 14:36:24 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 14:36:24 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:36:24 compute-1 nova_compute[225855]:       </auth>
Jan 20 14:36:24 compute-1 nova_compute[225855]:       <target dev="sda" bus="sata"/>
Jan 20 14:36:24 compute-1 nova_compute[225855]:     </disk>
Jan 20 14:36:24 compute-1 nova_compute[225855]:     <interface type="ethernet">
Jan 20 14:36:24 compute-1 nova_compute[225855]:       <mac address="fa:16:3e:5e:a4:97"/>
Jan 20 14:36:24 compute-1 nova_compute[225855]:       <model type="virtio"/>
Jan 20 14:36:24 compute-1 nova_compute[225855]:       <driver name="vhost" rx_queue_size="512"/>
Jan 20 14:36:24 compute-1 nova_compute[225855]:       <mtu size="1442"/>
Jan 20 14:36:24 compute-1 nova_compute[225855]:       <target dev="tapf9059531-e6"/>
Jan 20 14:36:24 compute-1 nova_compute[225855]:     </interface>
Jan 20 14:36:24 compute-1 nova_compute[225855]:     <serial type="pty">
Jan 20 14:36:24 compute-1 nova_compute[225855]:       <log file="/var/lib/nova/instances/59387c9d-df91-4f43-b389-00174486fc84/console.log" append="off"/>
Jan 20 14:36:24 compute-1 nova_compute[225855]:     </serial>
Jan 20 14:36:24 compute-1 nova_compute[225855]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 20 14:36:24 compute-1 nova_compute[225855]:     <video>
Jan 20 14:36:24 compute-1 nova_compute[225855]:       <model type="virtio"/>
Jan 20 14:36:24 compute-1 nova_compute[225855]:     </video>
Jan 20 14:36:24 compute-1 nova_compute[225855]:     <input type="tablet" bus="usb"/>
Jan 20 14:36:24 compute-1 nova_compute[225855]:     <rng model="virtio">
Jan 20 14:36:24 compute-1 nova_compute[225855]:       <backend model="random">/dev/urandom</backend>
Jan 20 14:36:24 compute-1 nova_compute[225855]:     </rng>
Jan 20 14:36:24 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root"/>
Jan 20 14:36:24 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:36:24 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:36:24 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:36:24 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:36:24 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:36:24 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:36:24 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:36:24 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:36:24 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:36:24 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:36:24 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:36:24 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:36:24 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:36:24 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:36:24 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:36:24 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:36:24 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:36:24 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:36:24 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:36:24 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:36:24 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:36:24 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:36:24 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:36:24 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:36:24 compute-1 nova_compute[225855]:     <controller type="usb" index="0"/>
Jan 20 14:36:24 compute-1 nova_compute[225855]:     <memballoon model="virtio">
Jan 20 14:36:24 compute-1 nova_compute[225855]:       <stats period="10"/>
Jan 20 14:36:24 compute-1 nova_compute[225855]:     </memballoon>
Jan 20 14:36:24 compute-1 nova_compute[225855]:   </devices>
Jan 20 14:36:24 compute-1 nova_compute[225855]: </domain>
Jan 20 14:36:24 compute-1 nova_compute[225855]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 20 14:36:24 compute-1 nova_compute[225855]: 2026-01-20 14:36:24.992 225859 DEBUG nova.compute.manager [None req-728666bb-0e91-4b68-b737-aae579f8c119 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] [instance: 59387c9d-df91-4f43-b389-00174486fc84] Preparing to wait for external event network-vif-plugged-f9059531-e6dc-4451-9c17-ec3b63e4b85f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 20 14:36:24 compute-1 nova_compute[225855]: 2026-01-20 14:36:24.993 225859 DEBUG oslo_concurrency.lockutils [None req-728666bb-0e91-4b68-b737-aae579f8c119 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] Acquiring lock "59387c9d-df91-4f43-b389-00174486fc84-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:36:24 compute-1 nova_compute[225855]: 2026-01-20 14:36:24.993 225859 DEBUG oslo_concurrency.lockutils [None req-728666bb-0e91-4b68-b737-aae579f8c119 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] Lock "59387c9d-df91-4f43-b389-00174486fc84-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:36:24 compute-1 nova_compute[225855]: 2026-01-20 14:36:24.993 225859 DEBUG oslo_concurrency.lockutils [None req-728666bb-0e91-4b68-b737-aae579f8c119 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] Lock "59387c9d-df91-4f43-b389-00174486fc84-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:36:24 compute-1 nova_compute[225855]: 2026-01-20 14:36:24.994 225859 DEBUG nova.virt.libvirt.vif [None req-728666bb-0e91-4b68-b737-aae579f8c119 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T14:36:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-1053628830',display_name='tempest-ImagesTestJSON-server-1053628830',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-imagestestjson-server-1053628830',id=53,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3750c56415134773aa9d9880038f1749',ramdisk_id='',reservation_id='r-2v7i5wvb',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-338390217',owner_user_name='tempest-ImagesTestJSON-338390217-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T14:36:18Z,user_data=None,user_id='56e2959629114d3d8a48e7a80ed96c4b',uuid=59387c9d-df91-4f43-b389-00174486fc84,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f9059531-e6dc-4451-9c17-ec3b63e4b85f", "address": "fa:16:3e:5e:a4:97", "network": {"id": "abb83e3e-0b12-431b-ad86-a1d271b5b46a", "bridge": "br-int", "label": "tempest-ImagesTestJSON-766235638-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3750c56415134773aa9d9880038f1749", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf9059531-e6", "ovs_interfaceid": "f9059531-e6dc-4451-9c17-ec3b63e4b85f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 20 14:36:24 compute-1 nova_compute[225855]: 2026-01-20 14:36:24.995 225859 DEBUG nova.network.os_vif_util [None req-728666bb-0e91-4b68-b737-aae579f8c119 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] Converting VIF {"id": "f9059531-e6dc-4451-9c17-ec3b63e4b85f", "address": "fa:16:3e:5e:a4:97", "network": {"id": "abb83e3e-0b12-431b-ad86-a1d271b5b46a", "bridge": "br-int", "label": "tempest-ImagesTestJSON-766235638-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3750c56415134773aa9d9880038f1749", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf9059531-e6", "ovs_interfaceid": "f9059531-e6dc-4451-9c17-ec3b63e4b85f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:36:24 compute-1 nova_compute[225855]: 2026-01-20 14:36:24.995 225859 DEBUG nova.network.os_vif_util [None req-728666bb-0e91-4b68-b737-aae579f8c119 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5e:a4:97,bridge_name='br-int',has_traffic_filtering=True,id=f9059531-e6dc-4451-9c17-ec3b63e4b85f,network=Network(abb83e3e-0b12-431b-ad86-a1d271b5b46a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf9059531-e6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:36:24 compute-1 nova_compute[225855]: 2026-01-20 14:36:24.996 225859 DEBUG os_vif [None req-728666bb-0e91-4b68-b737-aae579f8c119 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5e:a4:97,bridge_name='br-int',has_traffic_filtering=True,id=f9059531-e6dc-4451-9c17-ec3b63e4b85f,network=Network(abb83e3e-0b12-431b-ad86-a1d271b5b46a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf9059531-e6') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 20 14:36:24 compute-1 nova_compute[225855]: 2026-01-20 14:36:24.997 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:36:24 compute-1 nova_compute[225855]: 2026-01-20 14:36:24.997 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:36:24 compute-1 nova_compute[225855]: 2026-01-20 14:36:24.997 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:36:25 compute-1 nova_compute[225855]: 2026-01-20 14:36:25.002 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:36:25 compute-1 nova_compute[225855]: 2026-01-20 14:36:25.002 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf9059531-e6, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:36:25 compute-1 nova_compute[225855]: 2026-01-20 14:36:25.003 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf9059531-e6, col_values=(('external_ids', {'iface-id': 'f9059531-e6dc-4451-9c17-ec3b63e4b85f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:5e:a4:97', 'vm-uuid': '59387c9d-df91-4f43-b389-00174486fc84'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:36:25 compute-1 nova_compute[225855]: 2026-01-20 14:36:25.005 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:36:25 compute-1 NetworkManager[49104]: <info>  [1768919785.0064] manager: (tapf9059531-e6): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/72)
Jan 20 14:36:25 compute-1 nova_compute[225855]: 2026-01-20 14:36:25.016 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 14:36:25 compute-1 nova_compute[225855]: 2026-01-20 14:36:25.017 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:36:25 compute-1 nova_compute[225855]: 2026-01-20 14:36:25.019 225859 INFO os_vif [None req-728666bb-0e91-4b68-b737-aae579f8c119 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5e:a4:97,bridge_name='br-int',has_traffic_filtering=True,id=f9059531-e6dc-4451-9c17-ec3b63e4b85f,network=Network(abb83e3e-0b12-431b-ad86-a1d271b5b46a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf9059531-e6')
Jan 20 14:36:25 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e215 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:36:25 compute-1 nova_compute[225855]: 2026-01-20 14:36:25.057 225859 WARNING nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 14:36:25 compute-1 nova_compute[225855]: 2026-01-20 14:36:25.059 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4560MB free_disk=20.912002563476562GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 14:36:25 compute-1 nova_compute[225855]: 2026-01-20 14:36:25.059 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:36:25 compute-1 nova_compute[225855]: 2026-01-20 14:36:25.060 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:36:25 compute-1 nova_compute[225855]: 2026-01-20 14:36:25.070 225859 DEBUG nova.virt.libvirt.driver [None req-728666bb-0e91-4b68-b737-aae579f8c119 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:36:25 compute-1 nova_compute[225855]: 2026-01-20 14:36:25.071 225859 DEBUG nova.virt.libvirt.driver [None req-728666bb-0e91-4b68-b737-aae579f8c119 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:36:25 compute-1 nova_compute[225855]: 2026-01-20 14:36:25.071 225859 DEBUG nova.virt.libvirt.driver [None req-728666bb-0e91-4b68-b737-aae579f8c119 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] No VIF found with MAC fa:16:3e:5e:a4:97, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 20 14:36:25 compute-1 nova_compute[225855]: 2026-01-20 14:36:25.072 225859 INFO nova.virt.libvirt.driver [None req-728666bb-0e91-4b68-b737-aae579f8c119 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] [instance: 59387c9d-df91-4f43-b389-00174486fc84] Using config drive
Jan 20 14:36:25 compute-1 nova_compute[225855]: 2026-01-20 14:36:25.098 225859 DEBUG nova.storage.rbd_utils [None req-728666bb-0e91-4b68-b737-aae579f8c119 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] rbd image 59387c9d-df91-4f43-b389-00174486fc84_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:36:25 compute-1 nova_compute[225855]: 2026-01-20 14:36:25.160 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Instance 59387c9d-df91-4f43-b389-00174486fc84 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 20 14:36:25 compute-1 nova_compute[225855]: 2026-01-20 14:36:25.160 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 14:36:25 compute-1 nova_compute[225855]: 2026-01-20 14:36:25.160 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 14:36:25 compute-1 nova_compute[225855]: 2026-01-20 14:36:25.205 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:36:25 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:36:25 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:36:25 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:36:25.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:36:25 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:36:25 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2085895492' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:36:25 compute-1 nova_compute[225855]: 2026-01-20 14:36:25.675 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:36:25 compute-1 nova_compute[225855]: 2026-01-20 14:36:25.680 225859 DEBUG nova.compute.provider_tree [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:36:25 compute-1 nova_compute[225855]: 2026-01-20 14:36:25.696 225859 DEBUG nova.scheduler.client.report [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:36:25 compute-1 nova_compute[225855]: 2026-01-20 14:36:25.711 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 14:36:25 compute-1 nova_compute[225855]: 2026-01-20 14:36:25.712 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.652s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:36:25 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/3193668944' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:36:25 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/1706595247' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:36:25 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/2085895492' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:36:26 compute-1 nova_compute[225855]: 2026-01-20 14:36:26.131 225859 INFO nova.virt.libvirt.driver [None req-728666bb-0e91-4b68-b737-aae579f8c119 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] [instance: 59387c9d-df91-4f43-b389-00174486fc84] Creating config drive at /var/lib/nova/instances/59387c9d-df91-4f43-b389-00174486fc84/disk.config
Jan 20 14:36:26 compute-1 nova_compute[225855]: 2026-01-20 14:36:26.136 225859 DEBUG oslo_concurrency.processutils [None req-728666bb-0e91-4b68-b737-aae579f8c119 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/59387c9d-df91-4f43-b389-00174486fc84/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpg9qke6yn execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:36:26 compute-1 nova_compute[225855]: 2026-01-20 14:36:26.264 225859 DEBUG oslo_concurrency.processutils [None req-728666bb-0e91-4b68-b737-aae579f8c119 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/59387c9d-df91-4f43-b389-00174486fc84/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpg9qke6yn" returned: 0 in 0.128s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:36:26 compute-1 nova_compute[225855]: 2026-01-20 14:36:26.295 225859 DEBUG nova.storage.rbd_utils [None req-728666bb-0e91-4b68-b737-aae579f8c119 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] rbd image 59387c9d-df91-4f43-b389-00174486fc84_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:36:26 compute-1 nova_compute[225855]: 2026-01-20 14:36:26.300 225859 DEBUG oslo_concurrency.processutils [None req-728666bb-0e91-4b68-b737-aae579f8c119 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/59387c9d-df91-4f43-b389-00174486fc84/disk.config 59387c9d-df91-4f43-b389-00174486fc84_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:36:26 compute-1 nova_compute[225855]: 2026-01-20 14:36:26.327 225859 DEBUG nova.network.neutron [req-42fd0086-13c1-4d18-a974-8ac33f18fb9f req-0561c550-7ff4-4448-aaa6-8ca74d01ae27 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 59387c9d-df91-4f43-b389-00174486fc84] Updated VIF entry in instance network info cache for port f9059531-e6dc-4451-9c17-ec3b63e4b85f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 14:36:26 compute-1 nova_compute[225855]: 2026-01-20 14:36:26.329 225859 DEBUG nova.network.neutron [req-42fd0086-13c1-4d18-a974-8ac33f18fb9f req-0561c550-7ff4-4448-aaa6-8ca74d01ae27 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 59387c9d-df91-4f43-b389-00174486fc84] Updating instance_info_cache with network_info: [{"id": "f9059531-e6dc-4451-9c17-ec3b63e4b85f", "address": "fa:16:3e:5e:a4:97", "network": {"id": "abb83e3e-0b12-431b-ad86-a1d271b5b46a", "bridge": "br-int", "label": "tempest-ImagesTestJSON-766235638-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3750c56415134773aa9d9880038f1749", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf9059531-e6", "ovs_interfaceid": "f9059531-e6dc-4451-9c17-ec3b63e4b85f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:36:26 compute-1 nova_compute[225855]: 2026-01-20 14:36:26.350 225859 DEBUG oslo_concurrency.lockutils [req-42fd0086-13c1-4d18-a974-8ac33f18fb9f req-0561c550-7ff4-4448-aaa6-8ca74d01ae27 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-59387c9d-df91-4f43-b389-00174486fc84" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:36:26 compute-1 nova_compute[225855]: 2026-01-20 14:36:26.660 225859 DEBUG oslo_concurrency.processutils [None req-728666bb-0e91-4b68-b737-aae579f8c119 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/59387c9d-df91-4f43-b389-00174486fc84/disk.config 59387c9d-df91-4f43-b389-00174486fc84_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.361s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:36:26 compute-1 nova_compute[225855]: 2026-01-20 14:36:26.661 225859 INFO nova.virt.libvirt.driver [None req-728666bb-0e91-4b68-b737-aae579f8c119 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] [instance: 59387c9d-df91-4f43-b389-00174486fc84] Deleting local config drive /var/lib/nova/instances/59387c9d-df91-4f43-b389-00174486fc84/disk.config because it was imported into RBD.
Jan 20 14:36:26 compute-1 kernel: tapf9059531-e6: entered promiscuous mode
Jan 20 14:36:26 compute-1 NetworkManager[49104]: <info>  [1768919786.7147] manager: (tapf9059531-e6): new Tun device (/org/freedesktop/NetworkManager/Devices/73)
Jan 20 14:36:26 compute-1 ovn_controller[130490]: 2026-01-20T14:36:26Z|00167|binding|INFO|Claiming lport f9059531-e6dc-4451-9c17-ec3b63e4b85f for this chassis.
Jan 20 14:36:26 compute-1 nova_compute[225855]: 2026-01-20 14:36:26.716 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:36:26 compute-1 ovn_controller[130490]: 2026-01-20T14:36:26Z|00168|binding|INFO|f9059531-e6dc-4451-9c17-ec3b63e4b85f: Claiming fa:16:3e:5e:a4:97 10.100.0.13
Jan 20 14:36:26 compute-1 nova_compute[225855]: 2026-01-20 14:36:26.732 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:36:26 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:36:26.739 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5e:a4:97 10.100.0.13'], port_security=['fa:16:3e:5e:a4:97 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '59387c9d-df91-4f43-b389-00174486fc84', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-abb83e3e-0b12-431b-ad86-a1d271b5b46a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3750c56415134773aa9d9880038f1749', 'neutron:revision_number': '2', 'neutron:security_group_ids': '2e302063-2ccd-4f7c-8835-ef521762a486', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4125934e-1dea-4e34-a38d-5291c850f0b2, chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], logical_port=f9059531-e6dc-4451-9c17-ec3b63e4b85f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:36:26 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:36:26.740 140354 INFO neutron.agent.ovn.metadata.agent [-] Port f9059531-e6dc-4451-9c17-ec3b63e4b85f in datapath abb83e3e-0b12-431b-ad86-a1d271b5b46a bound to our chassis
Jan 20 14:36:26 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:36:26.741 140354 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network abb83e3e-0b12-431b-ad86-a1d271b5b46a
Jan 20 14:36:26 compute-1 systemd-udevd[248504]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 14:36:26 compute-1 systemd-machined[194361]: New machine qemu-26-instance-00000035.
Jan 20 14:36:26 compute-1 NetworkManager[49104]: <info>  [1768919786.7582] device (tapf9059531-e6): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 20 14:36:26 compute-1 NetworkManager[49104]: <info>  [1768919786.7588] device (tapf9059531-e6): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 20 14:36:26 compute-1 systemd[1]: Started Virtual Machine qemu-26-instance-00000035.
Jan 20 14:36:26 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:36:26 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:36:26 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:36:26.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:36:26 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:36:26.766 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[48e5a0da-ce41-43a3-b2b5-eda0d0d44fe9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:36:26 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:36:26.767 140354 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapabb83e3e-01 in ovnmeta-abb83e3e-0b12-431b-ad86-a1d271b5b46a namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 20 14:36:26 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:36:26.769 229707 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapabb83e3e-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 20 14:36:26 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:36:26.769 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[609cd498-8d46-41d3-9259-df616264300e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:36:26 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:36:26.769 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[33fdcc8d-5d70-4afe-8fc8-de39f835bfff]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:36:26 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:36:26.786 140466 DEBUG oslo.privsep.daemon [-] privsep: reply[7517bff3-2aaa-4117-beac-9a210aa018d4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:36:26 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:36:26.818 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[8a3ee5e4-125c-4e9e-8268-5af005c73983]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:36:26 compute-1 ovn_controller[130490]: 2026-01-20T14:36:26Z|00169|binding|INFO|Setting lport f9059531-e6dc-4451-9c17-ec3b63e4b85f ovn-installed in OVS
Jan 20 14:36:26 compute-1 ovn_controller[130490]: 2026-01-20T14:36:26Z|00170|binding|INFO|Setting lport f9059531-e6dc-4451-9c17-ec3b63e4b85f up in Southbound
Jan 20 14:36:26 compute-1 nova_compute[225855]: 2026-01-20 14:36:26.839 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:36:26 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:36:26.853 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[3e508503-b09d-4983-be58-96de858c7735]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:36:26 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:36:26.857 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[d533ecc1-9266-4971-a7e3-dd4968cadb0b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:36:26 compute-1 systemd-udevd[248508]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 14:36:26 compute-1 NetworkManager[49104]: <info>  [1768919786.8586] manager: (tapabb83e3e-00): new Veth device (/org/freedesktop/NetworkManager/Devices/74)
Jan 20 14:36:26 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:36:26.889 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[ba8995e6-baf5-403f-9c49-b0a03298610c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:36:26 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:36:26.892 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[51f40e0a-3b92-42b8-b47b-4717e1cde276]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:36:26 compute-1 NetworkManager[49104]: <info>  [1768919786.9146] device (tapabb83e3e-00): carrier: link connected
Jan 20 14:36:26 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:36:26.919 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[6c90e64d-c606-4893-b6ac-3c009c820190]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:36:26 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:36:26.933 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[d3e86e03-5417-42cf-a326-dd3bd2bfc718]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapabb83e3e-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:fd:0b:d2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 46], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 482584, 'reachable_time': 25192, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 248538, 'error': None, 'target': 'ovnmeta-abb83e3e-0b12-431b-ad86-a1d271b5b46a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:36:26 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:36:26.946 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[f81fb0c8-ed51-4503-90fc-04f3f5fc3d1d]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fefd:bd2'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 482584, 'tstamp': 482584}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 248539, 'error': None, 'target': 'ovnmeta-abb83e3e-0b12-431b-ad86-a1d271b5b46a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:36:26 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:36:26.960 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[0c69b903-9598-4fd0-8c72-43ffb417bc20]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapabb83e3e-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:fd:0b:d2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 46], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 482584, 'reachable_time': 25192, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 248540, 'error': None, 'target': 'ovnmeta-abb83e3e-0b12-431b-ad86-a1d271b5b46a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:36:26 compute-1 ceph-mon[81775]: pgmap v1425: 321 pgs: 321 active+clean; 168 MiB data, 632 MiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 5.7 MiB/s wr, 231 op/s
Jan 20 14:36:26 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/430201608' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:36:26 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:36:26.987 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[18d35d01-448c-4661-8134-8a3fa05e3e0f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:36:27 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:36:27.042 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[e8a01cb7-2b72-4619-9f12-d8b57273d427]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:36:27 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:36:27.044 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapabb83e3e-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:36:27 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:36:27.045 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:36:27 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:36:27.045 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapabb83e3e-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:36:27 compute-1 nova_compute[225855]: 2026-01-20 14:36:27.048 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:36:27 compute-1 kernel: tapabb83e3e-00: entered promiscuous mode
Jan 20 14:36:27 compute-1 nova_compute[225855]: 2026-01-20 14:36:27.051 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:36:27 compute-1 NetworkManager[49104]: <info>  [1768919787.0527] manager: (tapabb83e3e-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/75)
Jan 20 14:36:27 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:36:27.055 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapabb83e3e-00, col_values=(('external_ids', {'iface-id': 'dfacaf19-f896-4c13-a7ad-47b57cf03fc1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:36:27 compute-1 nova_compute[225855]: 2026-01-20 14:36:27.057 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:36:27 compute-1 nova_compute[225855]: 2026-01-20 14:36:27.058 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:36:27 compute-1 ovn_controller[130490]: 2026-01-20T14:36:27Z|00171|binding|INFO|Releasing lport dfacaf19-f896-4c13-a7ad-47b57cf03fc1 from this chassis (sb_readonly=0)
Jan 20 14:36:27 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:36:27.059 140354 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/abb83e3e-0b12-431b-ad86-a1d271b5b46a.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/abb83e3e-0b12-431b-ad86-a1d271b5b46a.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 20 14:36:27 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:36:27.060 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[2c7fbe1f-65b1-4bbd-b8bf-f52eb645a0b8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:36:27 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:36:27.061 140354 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 20 14:36:27 compute-1 ovn_metadata_agent[140349]: global
Jan 20 14:36:27 compute-1 ovn_metadata_agent[140349]:     log         /dev/log local0 debug
Jan 20 14:36:27 compute-1 ovn_metadata_agent[140349]:     log-tag     haproxy-metadata-proxy-abb83e3e-0b12-431b-ad86-a1d271b5b46a
Jan 20 14:36:27 compute-1 ovn_metadata_agent[140349]:     user        root
Jan 20 14:36:27 compute-1 ovn_metadata_agent[140349]:     group       root
Jan 20 14:36:27 compute-1 ovn_metadata_agent[140349]:     maxconn     1024
Jan 20 14:36:27 compute-1 ovn_metadata_agent[140349]:     pidfile     /var/lib/neutron/external/pids/abb83e3e-0b12-431b-ad86-a1d271b5b46a.pid.haproxy
Jan 20 14:36:27 compute-1 ovn_metadata_agent[140349]:     daemon
Jan 20 14:36:27 compute-1 ovn_metadata_agent[140349]: 
Jan 20 14:36:27 compute-1 ovn_metadata_agent[140349]: defaults
Jan 20 14:36:27 compute-1 ovn_metadata_agent[140349]:     log global
Jan 20 14:36:27 compute-1 ovn_metadata_agent[140349]:     mode http
Jan 20 14:36:27 compute-1 ovn_metadata_agent[140349]:     option httplog
Jan 20 14:36:27 compute-1 ovn_metadata_agent[140349]:     option dontlognull
Jan 20 14:36:27 compute-1 ovn_metadata_agent[140349]:     option http-server-close
Jan 20 14:36:27 compute-1 ovn_metadata_agent[140349]:     option forwardfor
Jan 20 14:36:27 compute-1 ovn_metadata_agent[140349]:     retries                 3
Jan 20 14:36:27 compute-1 ovn_metadata_agent[140349]:     timeout http-request    30s
Jan 20 14:36:27 compute-1 ovn_metadata_agent[140349]:     timeout connect         30s
Jan 20 14:36:27 compute-1 ovn_metadata_agent[140349]:     timeout client          32s
Jan 20 14:36:27 compute-1 ovn_metadata_agent[140349]:     timeout server          32s
Jan 20 14:36:27 compute-1 ovn_metadata_agent[140349]:     timeout http-keep-alive 30s
Jan 20 14:36:27 compute-1 ovn_metadata_agent[140349]: 
Jan 20 14:36:27 compute-1 ovn_metadata_agent[140349]: 
Jan 20 14:36:27 compute-1 ovn_metadata_agent[140349]: listen listener
Jan 20 14:36:27 compute-1 ovn_metadata_agent[140349]:     bind 169.254.169.254:80
Jan 20 14:36:27 compute-1 ovn_metadata_agent[140349]:     server metadata /var/lib/neutron/metadata_proxy
Jan 20 14:36:27 compute-1 ovn_metadata_agent[140349]:     http-request add-header X-OVN-Network-ID abb83e3e-0b12-431b-ad86-a1d271b5b46a
Jan 20 14:36:27 compute-1 ovn_metadata_agent[140349]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 20 14:36:27 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:36:27.062 140354 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-abb83e3e-0b12-431b-ad86-a1d271b5b46a', 'env', 'PROCESS_TAG=haproxy-abb83e3e-0b12-431b-ad86-a1d271b5b46a', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/abb83e3e-0b12-431b-ad86-a1d271b5b46a.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 20 14:36:27 compute-1 nova_compute[225855]: 2026-01-20 14:36:27.073 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:36:27 compute-1 nova_compute[225855]: 2026-01-20 14:36:27.283 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768919787.2827053, 59387c9d-df91-4f43-b389-00174486fc84 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:36:27 compute-1 nova_compute[225855]: 2026-01-20 14:36:27.283 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 59387c9d-df91-4f43-b389-00174486fc84] VM Started (Lifecycle Event)
Jan 20 14:36:27 compute-1 nova_compute[225855]: 2026-01-20 14:36:27.305 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 59387c9d-df91-4f43-b389-00174486fc84] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:36:27 compute-1 nova_compute[225855]: 2026-01-20 14:36:27.310 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768919787.2835515, 59387c9d-df91-4f43-b389-00174486fc84 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:36:27 compute-1 nova_compute[225855]: 2026-01-20 14:36:27.310 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 59387c9d-df91-4f43-b389-00174486fc84] VM Paused (Lifecycle Event)
Jan 20 14:36:27 compute-1 nova_compute[225855]: 2026-01-20 14:36:27.331 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 59387c9d-df91-4f43-b389-00174486fc84] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:36:27 compute-1 nova_compute[225855]: 2026-01-20 14:36:27.335 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 59387c9d-df91-4f43-b389-00174486fc84] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:36:27 compute-1 nova_compute[225855]: 2026-01-20 14:36:27.359 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 59387c9d-df91-4f43-b389-00174486fc84] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 14:36:27 compute-1 podman[248614]: 2026-01-20 14:36:27.453952531 +0000 UTC m=+0.062790113 container create a034a5d55e70bf048ac50890668baf65a5a1a8160f7f28acce0ccefec6537cac (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-abb83e3e-0b12-431b-ad86-a1d271b5b46a, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:36:27 compute-1 systemd[1]: Started libpod-conmon-a034a5d55e70bf048ac50890668baf65a5a1a8160f7f28acce0ccefec6537cac.scope.
Jan 20 14:36:27 compute-1 podman[248614]: 2026-01-20 14:36:27.417678858 +0000 UTC m=+0.026516440 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 20 14:36:27 compute-1 systemd[1]: Started libcrun container.
Jan 20 14:36:27 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4be9f6ce425d00a5cc617685f10663b2cda4fb6c10c8d6add8670f4d6e110ceb/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 20 14:36:27 compute-1 podman[248614]: 2026-01-20 14:36:27.564381068 +0000 UTC m=+0.173218680 container init a034a5d55e70bf048ac50890668baf65a5a1a8160f7f28acce0ccefec6537cac (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-abb83e3e-0b12-431b-ad86-a1d271b5b46a, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:36:27 compute-1 podman[248614]: 2026-01-20 14:36:27.574604507 +0000 UTC m=+0.183442089 container start a034a5d55e70bf048ac50890668baf65a5a1a8160f7f28acce0ccefec6537cac (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-abb83e3e-0b12-431b-ad86-a1d271b5b46a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 20 14:36:27 compute-1 neutron-haproxy-ovnmeta-abb83e3e-0b12-431b-ad86-a1d271b5b46a[248629]: [NOTICE]   (248634) : New worker (248636) forked
Jan 20 14:36:27 compute-1 neutron-haproxy-ovnmeta-abb83e3e-0b12-431b-ad86-a1d271b5b46a[248629]: [NOTICE]   (248634) : Loading success.
Jan 20 14:36:27 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:36:27 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:36:27 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:36:27.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:36:27 compute-1 nova_compute[225855]: 2026-01-20 14:36:27.684 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:36:27 compute-1 nova_compute[225855]: 2026-01-20 14:36:27.703 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:36:27 compute-1 nova_compute[225855]: 2026-01-20 14:36:27.703 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:36:27 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/2077652599' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:36:27 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/3674391804' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:36:27 compute-1 ceph-mon[81775]: pgmap v1426: 321 pgs: 321 active+clean; 99 MiB data, 586 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 4.6 MiB/s wr, 210 op/s
Jan 20 14:36:28 compute-1 nova_compute[225855]: 2026-01-20 14:36:28.353 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:36:28 compute-1 nova_compute[225855]: 2026-01-20 14:36:28.533 225859 DEBUG nova.compute.manager [req-b03cf418-d8ee-411c-a06e-eb99eef412af req-81e1b353-c8b5-4a1c-92af-585e8882ed8f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 59387c9d-df91-4f43-b389-00174486fc84] Received event network-vif-plugged-f9059531-e6dc-4451-9c17-ec3b63e4b85f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:36:28 compute-1 nova_compute[225855]: 2026-01-20 14:36:28.534 225859 DEBUG oslo_concurrency.lockutils [req-b03cf418-d8ee-411c-a06e-eb99eef412af req-81e1b353-c8b5-4a1c-92af-585e8882ed8f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "59387c9d-df91-4f43-b389-00174486fc84-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:36:28 compute-1 nova_compute[225855]: 2026-01-20 14:36:28.534 225859 DEBUG oslo_concurrency.lockutils [req-b03cf418-d8ee-411c-a06e-eb99eef412af req-81e1b353-c8b5-4a1c-92af-585e8882ed8f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "59387c9d-df91-4f43-b389-00174486fc84-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:36:28 compute-1 nova_compute[225855]: 2026-01-20 14:36:28.534 225859 DEBUG oslo_concurrency.lockutils [req-b03cf418-d8ee-411c-a06e-eb99eef412af req-81e1b353-c8b5-4a1c-92af-585e8882ed8f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "59387c9d-df91-4f43-b389-00174486fc84-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:36:28 compute-1 nova_compute[225855]: 2026-01-20 14:36:28.535 225859 DEBUG nova.compute.manager [req-b03cf418-d8ee-411c-a06e-eb99eef412af req-81e1b353-c8b5-4a1c-92af-585e8882ed8f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 59387c9d-df91-4f43-b389-00174486fc84] Processing event network-vif-plugged-f9059531-e6dc-4451-9c17-ec3b63e4b85f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 20 14:36:28 compute-1 nova_compute[225855]: 2026-01-20 14:36:28.535 225859 DEBUG nova.compute.manager [req-b03cf418-d8ee-411c-a06e-eb99eef412af req-81e1b353-c8b5-4a1c-92af-585e8882ed8f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 59387c9d-df91-4f43-b389-00174486fc84] Received event network-vif-plugged-f9059531-e6dc-4451-9c17-ec3b63e4b85f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:36:28 compute-1 nova_compute[225855]: 2026-01-20 14:36:28.535 225859 DEBUG oslo_concurrency.lockutils [req-b03cf418-d8ee-411c-a06e-eb99eef412af req-81e1b353-c8b5-4a1c-92af-585e8882ed8f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "59387c9d-df91-4f43-b389-00174486fc84-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:36:28 compute-1 nova_compute[225855]: 2026-01-20 14:36:28.535 225859 DEBUG oslo_concurrency.lockutils [req-b03cf418-d8ee-411c-a06e-eb99eef412af req-81e1b353-c8b5-4a1c-92af-585e8882ed8f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "59387c9d-df91-4f43-b389-00174486fc84-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:36:28 compute-1 nova_compute[225855]: 2026-01-20 14:36:28.536 225859 DEBUG oslo_concurrency.lockutils [req-b03cf418-d8ee-411c-a06e-eb99eef412af req-81e1b353-c8b5-4a1c-92af-585e8882ed8f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "59387c9d-df91-4f43-b389-00174486fc84-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:36:28 compute-1 nova_compute[225855]: 2026-01-20 14:36:28.536 225859 DEBUG nova.compute.manager [req-b03cf418-d8ee-411c-a06e-eb99eef412af req-81e1b353-c8b5-4a1c-92af-585e8882ed8f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 59387c9d-df91-4f43-b389-00174486fc84] No waiting events found dispatching network-vif-plugged-f9059531-e6dc-4451-9c17-ec3b63e4b85f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:36:28 compute-1 nova_compute[225855]: 2026-01-20 14:36:28.536 225859 WARNING nova.compute.manager [req-b03cf418-d8ee-411c-a06e-eb99eef412af req-81e1b353-c8b5-4a1c-92af-585e8882ed8f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 59387c9d-df91-4f43-b389-00174486fc84] Received unexpected event network-vif-plugged-f9059531-e6dc-4451-9c17-ec3b63e4b85f for instance with vm_state building and task_state spawning.
Jan 20 14:36:28 compute-1 nova_compute[225855]: 2026-01-20 14:36:28.537 225859 DEBUG nova.compute.manager [None req-728666bb-0e91-4b68-b737-aae579f8c119 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] [instance: 59387c9d-df91-4f43-b389-00174486fc84] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 20 14:36:28 compute-1 nova_compute[225855]: 2026-01-20 14:36:28.544 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768919788.544152, 59387c9d-df91-4f43-b389-00174486fc84 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:36:28 compute-1 nova_compute[225855]: 2026-01-20 14:36:28.544 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 59387c9d-df91-4f43-b389-00174486fc84] VM Resumed (Lifecycle Event)
Jan 20 14:36:28 compute-1 nova_compute[225855]: 2026-01-20 14:36:28.547 225859 DEBUG nova.virt.libvirt.driver [None req-728666bb-0e91-4b68-b737-aae579f8c119 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] [instance: 59387c9d-df91-4f43-b389-00174486fc84] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 20 14:36:28 compute-1 nova_compute[225855]: 2026-01-20 14:36:28.550 225859 INFO nova.virt.libvirt.driver [-] [instance: 59387c9d-df91-4f43-b389-00174486fc84] Instance spawned successfully.
Jan 20 14:36:28 compute-1 nova_compute[225855]: 2026-01-20 14:36:28.551 225859 DEBUG nova.virt.libvirt.driver [None req-728666bb-0e91-4b68-b737-aae579f8c119 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] [instance: 59387c9d-df91-4f43-b389-00174486fc84] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 20 14:36:28 compute-1 nova_compute[225855]: 2026-01-20 14:36:28.578 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 59387c9d-df91-4f43-b389-00174486fc84] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:36:28 compute-1 nova_compute[225855]: 2026-01-20 14:36:28.583 225859 DEBUG nova.virt.libvirt.driver [None req-728666bb-0e91-4b68-b737-aae579f8c119 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] [instance: 59387c9d-df91-4f43-b389-00174486fc84] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:36:28 compute-1 nova_compute[225855]: 2026-01-20 14:36:28.584 225859 DEBUG nova.virt.libvirt.driver [None req-728666bb-0e91-4b68-b737-aae579f8c119 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] [instance: 59387c9d-df91-4f43-b389-00174486fc84] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:36:28 compute-1 nova_compute[225855]: 2026-01-20 14:36:28.585 225859 DEBUG nova.virt.libvirt.driver [None req-728666bb-0e91-4b68-b737-aae579f8c119 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] [instance: 59387c9d-df91-4f43-b389-00174486fc84] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:36:28 compute-1 nova_compute[225855]: 2026-01-20 14:36:28.585 225859 DEBUG nova.virt.libvirt.driver [None req-728666bb-0e91-4b68-b737-aae579f8c119 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] [instance: 59387c9d-df91-4f43-b389-00174486fc84] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:36:28 compute-1 nova_compute[225855]: 2026-01-20 14:36:28.586 225859 DEBUG nova.virt.libvirt.driver [None req-728666bb-0e91-4b68-b737-aae579f8c119 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] [instance: 59387c9d-df91-4f43-b389-00174486fc84] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:36:28 compute-1 nova_compute[225855]: 2026-01-20 14:36:28.586 225859 DEBUG nova.virt.libvirt.driver [None req-728666bb-0e91-4b68-b737-aae579f8c119 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] [instance: 59387c9d-df91-4f43-b389-00174486fc84] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:36:28 compute-1 nova_compute[225855]: 2026-01-20 14:36:28.591 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 59387c9d-df91-4f43-b389-00174486fc84] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:36:28 compute-1 nova_compute[225855]: 2026-01-20 14:36:28.623 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 59387c9d-df91-4f43-b389-00174486fc84] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 14:36:28 compute-1 nova_compute[225855]: 2026-01-20 14:36:28.661 225859 INFO nova.compute.manager [None req-728666bb-0e91-4b68-b737-aae579f8c119 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] [instance: 59387c9d-df91-4f43-b389-00174486fc84] Took 9.96 seconds to spawn the instance on the hypervisor.
Jan 20 14:36:28 compute-1 nova_compute[225855]: 2026-01-20 14:36:28.661 225859 DEBUG nova.compute.manager [None req-728666bb-0e91-4b68-b737-aae579f8c119 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] [instance: 59387c9d-df91-4f43-b389-00174486fc84] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:36:28 compute-1 nova_compute[225855]: 2026-01-20 14:36:28.721 225859 INFO nova.compute.manager [None req-728666bb-0e91-4b68-b737-aae579f8c119 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] [instance: 59387c9d-df91-4f43-b389-00174486fc84] Took 10.96 seconds to build instance.
Jan 20 14:36:28 compute-1 nova_compute[225855]: 2026-01-20 14:36:28.748 225859 DEBUG oslo_concurrency.lockutils [None req-728666bb-0e91-4b68-b737-aae579f8c119 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] Lock "59387c9d-df91-4f43-b389-00174486fc84" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.056s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:36:28 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:36:28 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:36:28 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:36:28.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:36:28 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/2774003200' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:36:28 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/3876039807' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:36:29 compute-1 nova_compute[225855]: 2026-01-20 14:36:29.604 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:36:29 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:36:29 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:36:29 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:36:29.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:36:29 compute-1 ceph-mon[81775]: pgmap v1427: 321 pgs: 321 active+clean; 88 MiB data, 575 MiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 3.2 MiB/s wr, 199 op/s
Jan 20 14:36:30 compute-1 nova_compute[225855]: 2026-01-20 14:36:30.005 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:36:30 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e215 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:36:30 compute-1 nova_compute[225855]: 2026-01-20 14:36:30.641 225859 DEBUG nova.compute.manager [None req-8f0d650b-1046-4a77-a3f4-7f3e4363c0c0 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] [instance: 59387c9d-df91-4f43-b389-00174486fc84] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:36:30 compute-1 nova_compute[225855]: 2026-01-20 14:36:30.689 225859 INFO nova.compute.manager [None req-8f0d650b-1046-4a77-a3f4-7f3e4363c0c0 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] [instance: 59387c9d-df91-4f43-b389-00174486fc84] instance snapshotting
Jan 20 14:36:30 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:36:30 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:36:30 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:36:30.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:36:30 compute-1 nova_compute[225855]: 2026-01-20 14:36:30.975 225859 INFO nova.virt.libvirt.driver [None req-8f0d650b-1046-4a77-a3f4-7f3e4363c0c0 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] [instance: 59387c9d-df91-4f43-b389-00174486fc84] Beginning live snapshot process
Jan 20 14:36:31 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/3016561653' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:36:31 compute-1 nova_compute[225855]: 2026-01-20 14:36:31.399 225859 DEBUG nova.virt.libvirt.imagebackend [None req-8f0d650b-1046-4a77-a3f4-7f3e4363c0c0 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] No parent info for a32b3e07-16d8-46fd-9a7b-c242c432fcf9; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Jan 20 14:36:31 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:36:31 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:36:31 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:36:31.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:36:31 compute-1 nova_compute[225855]: 2026-01-20 14:36:31.771 225859 DEBUG nova.storage.rbd_utils [None req-8f0d650b-1046-4a77-a3f4-7f3e4363c0c0 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] creating snapshot(541e8cdc347946dcb7aca41472a67483) on rbd image(59387c9d-df91-4f43-b389-00174486fc84_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 20 14:36:32 compute-1 ceph-mon[81775]: pgmap v1428: 321 pgs: 321 active+clean; 93 MiB data, 574 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 308 KiB/s wr, 148 op/s
Jan 20 14:36:32 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e216 e216: 3 total, 3 up, 3 in
Jan 20 14:36:32 compute-1 nova_compute[225855]: 2026-01-20 14:36:32.387 225859 DEBUG nova.storage.rbd_utils [None req-8f0d650b-1046-4a77-a3f4-7f3e4363c0c0 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] cloning vms/59387c9d-df91-4f43-b389-00174486fc84_disk@541e8cdc347946dcb7aca41472a67483 to images/d9608a6b-abac-47e3-a9dd-70a6230a92c0 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Jan 20 14:36:32 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e217 e217: 3 total, 3 up, 3 in
Jan 20 14:36:32 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:36:32 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:36:32 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:36:32.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:36:32 compute-1 nova_compute[225855]: 2026-01-20 14:36:32.802 225859 DEBUG nova.storage.rbd_utils [None req-8f0d650b-1046-4a77-a3f4-7f3e4363c0c0 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] flattening images/d9608a6b-abac-47e3-a9dd-70a6230a92c0 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Jan 20 14:36:33 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:36:33 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:36:33 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:36:33.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:36:33 compute-1 ceph-mon[81775]: osdmap e216: 3 total, 3 up, 3 in
Jan 20 14:36:33 compute-1 ceph-mon[81775]: osdmap e217: 3 total, 3 up, 3 in
Jan 20 14:36:33 compute-1 nova_compute[225855]: 2026-01-20 14:36:33.946 225859 DEBUG nova.storage.rbd_utils [None req-8f0d650b-1046-4a77-a3f4-7f3e4363c0c0 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] removing snapshot(541e8cdc347946dcb7aca41472a67483) on rbd image(59387c9d-df91-4f43-b389-00174486fc84_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Jan 20 14:36:34 compute-1 nova_compute[225855]: 2026-01-20 14:36:34.605 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:36:34 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:36:34 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:36:34 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:36:34.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:36:34 compute-1 ceph-mon[81775]: pgmap v1431: 321 pgs: 321 active+clean; 93 MiB data, 574 MiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 283 KiB/s wr, 137 op/s
Jan 20 14:36:34 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e218 e218: 3 total, 3 up, 3 in
Jan 20 14:36:35 compute-1 nova_compute[225855]: 2026-01-20 14:36:35.006 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:36:35 compute-1 nova_compute[225855]: 2026-01-20 14:36:35.060 225859 DEBUG nova.storage.rbd_utils [None req-8f0d650b-1046-4a77-a3f4-7f3e4363c0c0 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] creating snapshot(snap) on rbd image(d9608a6b-abac-47e3-a9dd-70a6230a92c0) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 20 14:36:35 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e218 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:36:35 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:36:35 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:36:35 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:36:35.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:36:35 compute-1 ceph-mon[81775]: osdmap e218: 3 total, 3 up, 3 in
Jan 20 14:36:35 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/1394839658' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:36:35 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/3940471981' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:36:35 compute-1 ceph-mon[81775]: pgmap v1433: 321 pgs: 321 active+clean; 168 MiB data, 611 MiB used, 20 GiB / 21 GiB avail; 6.9 MiB/s rd, 6.6 MiB/s wr, 233 op/s
Jan 20 14:36:36 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e219 e219: 3 total, 3 up, 3 in
Jan 20 14:36:36 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:36:36 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:36:36 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:36:36.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:36:37 compute-1 ceph-mon[81775]: osdmap e219: 3 total, 3 up, 3 in
Jan 20 14:36:37 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:36:37 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:36:37 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:36:37.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:36:38 compute-1 ceph-mon[81775]: pgmap v1435: 321 pgs: 321 active+clean; 180 MiB data, 616 MiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 7.7 MiB/s wr, 189 op/s
Jan 20 14:36:38 compute-1 nova_compute[225855]: 2026-01-20 14:36:38.423 225859 INFO nova.virt.libvirt.driver [None req-8f0d650b-1046-4a77-a3f4-7f3e4363c0c0 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] [instance: 59387c9d-df91-4f43-b389-00174486fc84] Snapshot image upload complete
Jan 20 14:36:38 compute-1 nova_compute[225855]: 2026-01-20 14:36:38.424 225859 INFO nova.compute.manager [None req-8f0d650b-1046-4a77-a3f4-7f3e4363c0c0 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] [instance: 59387c9d-df91-4f43-b389-00174486fc84] Took 7.73 seconds to snapshot the instance on the hypervisor.
Jan 20 14:36:38 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:36:38 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:36:38 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:36:38.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:36:39 compute-1 nova_compute[225855]: 2026-01-20 14:36:39.608 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:36:39 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:36:39 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:36:39 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:36:39.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:36:40 compute-1 nova_compute[225855]: 2026-01-20 14:36:40.009 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:36:40 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e219 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:36:40 compute-1 ceph-mon[81775]: pgmap v1436: 321 pgs: 321 active+clean; 181 MiB data, 616 MiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 6.0 MiB/s wr, 154 op/s
Jan 20 14:36:40 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:36:40 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:36:40 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:36:40.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:36:41 compute-1 ovn_controller[130490]: 2026-01-20T14:36:41Z|00022|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:5e:a4:97 10.100.0.13
Jan 20 14:36:41 compute-1 ovn_controller[130490]: 2026-01-20T14:36:41Z|00023|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:5e:a4:97 10.100.0.13
Jan 20 14:36:41 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:36:41 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:36:41 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:36:41.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:36:42 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:36:42 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:36:42 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:36:42.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:36:42 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e220 e220: 3 total, 3 up, 3 in
Jan 20 14:36:42 compute-1 ceph-mon[81775]: pgmap v1437: 321 pgs: 321 active+clean; 187 MiB data, 617 MiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 6.2 MiB/s wr, 194 op/s
Jan 20 14:36:43 compute-1 sudo[248793]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:36:43 compute-1 sudo[248793]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:36:43 compute-1 sudo[248793]: pam_unix(sudo:session): session closed for user root
Jan 20 14:36:43 compute-1 sudo[248819]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:36:43 compute-1 sudo[248819]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:36:43 compute-1 sudo[248819]: pam_unix(sudo:session): session closed for user root
Jan 20 14:36:43 compute-1 podman[248817]: 2026-01-20 14:36:43.641425851 +0000 UTC m=+0.150310144 container health_status 72be95c12e041eb2cd1a16ad70dfa1391d9f03a5f3180a614ead9459dcfcd46f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 20 14:36:43 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:36:43 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:36:43 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:36:43.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:36:44 compute-1 ceph-mon[81775]: osdmap e220: 3 total, 3 up, 3 in
Jan 20 14:36:44 compute-1 nova_compute[225855]: 2026-01-20 14:36:44.613 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:36:44 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:36:44 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:36:44 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:36:44.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:36:45 compute-1 ceph-mon[81775]: pgmap v1439: 321 pgs: 321 active+clean; 187 MiB data, 617 MiB used, 20 GiB / 21 GiB avail; 593 KiB/s rd, 1.5 MiB/s wr, 119 op/s
Jan 20 14:36:45 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/658670013' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:36:45 compute-1 nova_compute[225855]: 2026-01-20 14:36:45.012 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:36:45 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e220 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:36:45 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:36:45 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:36:45 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:36:45.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:36:46 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/1444865890' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:36:46 compute-1 ceph-mon[81775]: pgmap v1440: 321 pgs: 321 active+clean; 210 MiB data, 641 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 3.0 MiB/s wr, 182 op/s
Jan 20 14:36:46 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:36:46.189 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=19, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '12:bb:42', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '06:92:24:f7:15:56'}, ipsec=False) old=SB_Global(nb_cfg=18) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:36:46 compute-1 nova_compute[225855]: 2026-01-20 14:36:46.191 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:36:46 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:36:46.192 140354 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 20 14:36:46 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:36:46 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:36:46 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:36:46.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:36:47 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:36:47 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:36:47 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:36:47.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:36:48 compute-1 ceph-mon[81775]: pgmap v1441: 321 pgs: 321 active+clean; 238 MiB data, 651 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 3.5 MiB/s wr, 166 op/s
Jan 20 14:36:48 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:36:48 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:36:48 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:36:48.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:36:49 compute-1 nova_compute[225855]: 2026-01-20 14:36:49.613 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:36:49 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:36:49 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:36:49 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:36:49.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:36:50 compute-1 nova_compute[225855]: 2026-01-20 14:36:50.014 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:36:50 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e220 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:36:50 compute-1 ceph-mon[81775]: pgmap v1442: 321 pgs: 321 active+clean; 248 MiB data, 658 MiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 4.2 MiB/s wr, 201 op/s
Jan 20 14:36:50 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/347259012' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:36:50 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/3157347975' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:36:50 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:36:50 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:36:50 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:36:50.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:36:51 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:36:51 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:36:51 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:36:51.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:36:52 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:36:52.195 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5ffd4ac3-9266-4927-98ad-20a17782c725, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '19'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:36:52 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:36:52 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:36:52 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:36:52.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:36:52 compute-1 ceph-mon[81775]: pgmap v1443: 321 pgs: 321 active+clean; 260 MiB data, 647 MiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 3.8 MiB/s wr, 190 op/s
Jan 20 14:36:52 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/3250829967' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:36:52 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/3598022065' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:36:53 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:36:53 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:36:53 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:36:53.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:36:54 compute-1 podman[248876]: 2026-01-20 14:36:54.023925552 +0000 UTC m=+0.060154308 container health_status 533cd8e0c3b8b7f910f15cef49e7f879ab46c12ce646a35628c15f13725eb500 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 20 14:36:54 compute-1 nova_compute[225855]: 2026-01-20 14:36:54.616 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:36:54 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:36:54 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:36:54 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:36:54.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:36:54 compute-1 ceph-mon[81775]: pgmap v1444: 321 pgs: 321 active+clean; 260 MiB data, 647 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 3.6 MiB/s wr, 178 op/s
Jan 20 14:36:55 compute-1 nova_compute[225855]: 2026-01-20 14:36:55.016 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:36:55 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e220 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:36:55 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:36:55 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:36:55 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:36:55.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:36:55 compute-1 ceph-mon[81775]: pgmap v1445: 321 pgs: 321 active+clean; 260 MiB data, 647 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 3.2 MiB/s wr, 165 op/s
Jan 20 14:36:56 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:36:56 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:36:56 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:36:56.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:36:57 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:36:57 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:36:57 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:36:57.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:36:58 compute-1 ceph-mon[81775]: pgmap v1446: 321 pgs: 321 active+clean; 262 MiB data, 649 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 2.0 MiB/s wr, 131 op/s
Jan 20 14:36:58 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:36:58 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:36:58 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:36:58.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:36:59 compute-1 nova_compute[225855]: 2026-01-20 14:36:59.620 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:36:59 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:36:59 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:36:59 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:36:59.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:36:59 compute-1 ceph-mon[81775]: pgmap v1447: 321 pgs: 321 active+clean; 274 MiB data, 661 MiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 2.2 MiB/s wr, 166 op/s
Jan 20 14:37:00 compute-1 nova_compute[225855]: 2026-01-20 14:37:00.018 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:37:00 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e220 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:37:00 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:37:00 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:37:00 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:37:00.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:37:01 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:37:01 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:37:01 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:37:01.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:37:02 compute-1 ceph-mon[81775]: pgmap v1448: 321 pgs: 321 active+clean; 283 MiB data, 685 MiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 2.6 MiB/s wr, 228 op/s
Jan 20 14:37:02 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:37:02 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:37:02 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:37:02.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:37:02 compute-1 rsyslogd[1002]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 20 14:37:02 compute-1 rsyslogd[1002]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 20 14:37:03 compute-1 sudo[248901]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:37:03 compute-1 sudo[248901]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:37:03 compute-1 sudo[248901]: pam_unix(sudo:session): session closed for user root
Jan 20 14:37:03 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:37:03 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:37:03 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:37:03.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:37:03 compute-1 sudo[248926]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:37:03 compute-1 sudo[248926]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:37:03 compute-1 sudo[248926]: pam_unix(sudo:session): session closed for user root
Jan 20 14:37:04 compute-1 ceph-mon[81775]: pgmap v1449: 321 pgs: 321 active+clean; 283 MiB data, 685 MiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 2.2 MiB/s wr, 195 op/s
Jan 20 14:37:04 compute-1 nova_compute[225855]: 2026-01-20 14:37:04.685 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:37:04 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:37:04 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:37:04 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:37:04.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:37:05 compute-1 nova_compute[225855]: 2026-01-20 14:37:05.020 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:37:05 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e220 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:37:05 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/287032635' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:37:05 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:37:05 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:37:05 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:37:05.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:37:06 compute-1 ceph-mon[81775]: pgmap v1450: 321 pgs: 321 active+clean; 213 MiB data, 647 MiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 2.2 MiB/s wr, 242 op/s
Jan 20 14:37:06 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:37:06 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:37:06 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:37:06.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:37:06 compute-1 sudo[248952]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:37:06 compute-1 sudo[248952]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:37:06 compute-1 sudo[248952]: pam_unix(sudo:session): session closed for user root
Jan 20 14:37:06 compute-1 sudo[248977]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:37:06 compute-1 sudo[248977]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:37:06 compute-1 sudo[248977]: pam_unix(sudo:session): session closed for user root
Jan 20 14:37:06 compute-1 sudo[249003]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:37:06 compute-1 sudo[249003]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:37:06 compute-1 sudo[249003]: pam_unix(sudo:session): session closed for user root
Jan 20 14:37:07 compute-1 sudo[249028]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 20 14:37:07 compute-1 sudo[249028]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:37:07 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/3114007443' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:37:07 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e221 e221: 3 total, 3 up, 3 in
Jan 20 14:37:07 compute-1 sudo[249028]: pam_unix(sudo:session): session closed for user root
Jan 20 14:37:07 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:37:07 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:37:07 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:37:07.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:37:07 compute-1 nova_compute[225855]: 2026-01-20 14:37:07.963 225859 DEBUG oslo_concurrency.lockutils [None req-b1a140e3-48f6-4754-b3f1-f895eb3c6fb2 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] Acquiring lock "59387c9d-df91-4f43-b389-00174486fc84" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:37:07 compute-1 nova_compute[225855]: 2026-01-20 14:37:07.963 225859 DEBUG oslo_concurrency.lockutils [None req-b1a140e3-48f6-4754-b3f1-f895eb3c6fb2 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] Lock "59387c9d-df91-4f43-b389-00174486fc84" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:37:07 compute-1 nova_compute[225855]: 2026-01-20 14:37:07.963 225859 DEBUG oslo_concurrency.lockutils [None req-b1a140e3-48f6-4754-b3f1-f895eb3c6fb2 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] Acquiring lock "59387c9d-df91-4f43-b389-00174486fc84-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:37:07 compute-1 nova_compute[225855]: 2026-01-20 14:37:07.964 225859 DEBUG oslo_concurrency.lockutils [None req-b1a140e3-48f6-4754-b3f1-f895eb3c6fb2 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] Lock "59387c9d-df91-4f43-b389-00174486fc84-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:37:07 compute-1 nova_compute[225855]: 2026-01-20 14:37:07.964 225859 DEBUG oslo_concurrency.lockutils [None req-b1a140e3-48f6-4754-b3f1-f895eb3c6fb2 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] Lock "59387c9d-df91-4f43-b389-00174486fc84-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:37:07 compute-1 nova_compute[225855]: 2026-01-20 14:37:07.965 225859 INFO nova.compute.manager [None req-b1a140e3-48f6-4754-b3f1-f895eb3c6fb2 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] [instance: 59387c9d-df91-4f43-b389-00174486fc84] Terminating instance
Jan 20 14:37:07 compute-1 nova_compute[225855]: 2026-01-20 14:37:07.966 225859 DEBUG nova.compute.manager [None req-b1a140e3-48f6-4754-b3f1-f895eb3c6fb2 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] [instance: 59387c9d-df91-4f43-b389-00174486fc84] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 20 14:37:08 compute-1 kernel: tapf9059531-e6 (unregistering): left promiscuous mode
Jan 20 14:37:08 compute-1 NetworkManager[49104]: <info>  [1768919828.0236] device (tapf9059531-e6): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 20 14:37:08 compute-1 ovn_controller[130490]: 2026-01-20T14:37:08Z|00172|binding|INFO|Releasing lport f9059531-e6dc-4451-9c17-ec3b63e4b85f from this chassis (sb_readonly=0)
Jan 20 14:37:08 compute-1 ovn_controller[130490]: 2026-01-20T14:37:08Z|00173|binding|INFO|Setting lport f9059531-e6dc-4451-9c17-ec3b63e4b85f down in Southbound
Jan 20 14:37:08 compute-1 ovn_controller[130490]: 2026-01-20T14:37:08Z|00174|binding|INFO|Removing iface tapf9059531-e6 ovn-installed in OVS
Jan 20 14:37:08 compute-1 nova_compute[225855]: 2026-01-20 14:37:08.031 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:37:08 compute-1 nova_compute[225855]: 2026-01-20 14:37:08.034 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:37:08 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:37:08.043 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5e:a4:97 10.100.0.13'], port_security=['fa:16:3e:5e:a4:97 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '59387c9d-df91-4f43-b389-00174486fc84', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-abb83e3e-0b12-431b-ad86-a1d271b5b46a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3750c56415134773aa9d9880038f1749', 'neutron:revision_number': '4', 'neutron:security_group_ids': '2e302063-2ccd-4f7c-8835-ef521762a486', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4125934e-1dea-4e34-a38d-5291c850f0b2, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], logical_port=f9059531-e6dc-4451-9c17-ec3b63e4b85f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:37:08 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:37:08.045 140354 INFO neutron.agent.ovn.metadata.agent [-] Port f9059531-e6dc-4451-9c17-ec3b63e4b85f in datapath abb83e3e-0b12-431b-ad86-a1d271b5b46a unbound from our chassis
Jan 20 14:37:08 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:37:08.046 140354 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network abb83e3e-0b12-431b-ad86-a1d271b5b46a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 20 14:37:08 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:37:08.047 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[0ea57fca-8e36-481f-8b74-04e9dbd7f7e2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:37:08 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:37:08.048 140354 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-abb83e3e-0b12-431b-ad86-a1d271b5b46a namespace which is not needed anymore
Jan 20 14:37:08 compute-1 nova_compute[225855]: 2026-01-20 14:37:08.054 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:37:08 compute-1 systemd[1]: machine-qemu\x2d26\x2dinstance\x2d00000035.scope: Deactivated successfully.
Jan 20 14:37:08 compute-1 systemd[1]: machine-qemu\x2d26\x2dinstance\x2d00000035.scope: Consumed 14.303s CPU time.
Jan 20 14:37:08 compute-1 systemd-machined[194361]: Machine qemu-26-instance-00000035 terminated.
Jan 20 14:37:08 compute-1 neutron-haproxy-ovnmeta-abb83e3e-0b12-431b-ad86-a1d271b5b46a[248629]: [NOTICE]   (248634) : haproxy version is 2.8.14-c23fe91
Jan 20 14:37:08 compute-1 neutron-haproxy-ovnmeta-abb83e3e-0b12-431b-ad86-a1d271b5b46a[248629]: [NOTICE]   (248634) : path to executable is /usr/sbin/haproxy
Jan 20 14:37:08 compute-1 neutron-haproxy-ovnmeta-abb83e3e-0b12-431b-ad86-a1d271b5b46a[248629]: [WARNING]  (248634) : Exiting Master process...
Jan 20 14:37:08 compute-1 neutron-haproxy-ovnmeta-abb83e3e-0b12-431b-ad86-a1d271b5b46a[248629]: [WARNING]  (248634) : Exiting Master process...
Jan 20 14:37:08 compute-1 neutron-haproxy-ovnmeta-abb83e3e-0b12-431b-ad86-a1d271b5b46a[248629]: [ALERT]    (248634) : Current worker (248636) exited with code 143 (Terminated)
Jan 20 14:37:08 compute-1 neutron-haproxy-ovnmeta-abb83e3e-0b12-431b-ad86-a1d271b5b46a[248629]: [WARNING]  (248634) : All workers exited. Exiting... (0)
Jan 20 14:37:08 compute-1 systemd[1]: libpod-a034a5d55e70bf048ac50890668baf65a5a1a8160f7f28acce0ccefec6537cac.scope: Deactivated successfully.
Jan 20 14:37:08 compute-1 podman[249107]: 2026-01-20 14:37:08.193097123 +0000 UTC m=+0.055031754 container died a034a5d55e70bf048ac50890668baf65a5a1a8160f7f28acce0ccefec6537cac (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-abb83e3e-0b12-431b-ad86-a1d271b5b46a, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true)
Jan 20 14:37:08 compute-1 nova_compute[225855]: 2026-01-20 14:37:08.206 225859 INFO nova.virt.libvirt.driver [-] [instance: 59387c9d-df91-4f43-b389-00174486fc84] Instance destroyed successfully.
Jan 20 14:37:08 compute-1 nova_compute[225855]: 2026-01-20 14:37:08.208 225859 DEBUG nova.objects.instance [None req-b1a140e3-48f6-4754-b3f1-f895eb3c6fb2 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] Lazy-loading 'resources' on Instance uuid 59387c9d-df91-4f43-b389-00174486fc84 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:37:08 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a034a5d55e70bf048ac50890668baf65a5a1a8160f7f28acce0ccefec6537cac-userdata-shm.mount: Deactivated successfully.
Jan 20 14:37:08 compute-1 systemd[1]: var-lib-containers-storage-overlay-4be9f6ce425d00a5cc617685f10663b2cda4fb6c10c8d6add8670f4d6e110ceb-merged.mount: Deactivated successfully.
Jan 20 14:37:08 compute-1 podman[249107]: 2026-01-20 14:37:08.230353945 +0000 UTC m=+0.092288576 container cleanup a034a5d55e70bf048ac50890668baf65a5a1a8160f7f28acce0ccefec6537cac (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-abb83e3e-0b12-431b-ad86-a1d271b5b46a, org.label-schema.build-date=20251202, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team)
Jan 20 14:37:08 compute-1 nova_compute[225855]: 2026-01-20 14:37:08.236 225859 DEBUG nova.virt.libvirt.vif [None req-b1a140e3-48f6-4754-b3f1-f895eb3c6fb2 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T14:36:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-1053628830',display_name='tempest-ImagesTestJSON-server-1053628830',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-imagestestjson-server-1053628830',id=53,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-20T14:36:28Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3750c56415134773aa9d9880038f1749',ramdisk_id='',reservation_id='r-2v7i5wvb',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ImagesTestJSON-338390217',owner_user_name='tempest-ImagesTestJSON-338390217-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T14:36:38Z,user_data=None,user_id='56e2959629114d3d8a48e7a80ed96c4b',uuid=59387c9d-df91-4f43-b389-00174486fc84,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f9059531-e6dc-4451-9c17-ec3b63e4b85f", "address": "fa:16:3e:5e:a4:97", "network": {"id": "abb83e3e-0b12-431b-ad86-a1d271b5b46a", "bridge": "br-int", "label": "tempest-ImagesTestJSON-766235638-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3750c56415134773aa9d9880038f1749", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf9059531-e6", "ovs_interfaceid": "f9059531-e6dc-4451-9c17-ec3b63e4b85f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 20 14:37:08 compute-1 nova_compute[225855]: 2026-01-20 14:37:08.237 225859 DEBUG nova.network.os_vif_util [None req-b1a140e3-48f6-4754-b3f1-f895eb3c6fb2 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] Converting VIF {"id": "f9059531-e6dc-4451-9c17-ec3b63e4b85f", "address": "fa:16:3e:5e:a4:97", "network": {"id": "abb83e3e-0b12-431b-ad86-a1d271b5b46a", "bridge": "br-int", "label": "tempest-ImagesTestJSON-766235638-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3750c56415134773aa9d9880038f1749", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf9059531-e6", "ovs_interfaceid": "f9059531-e6dc-4451-9c17-ec3b63e4b85f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:37:08 compute-1 nova_compute[225855]: 2026-01-20 14:37:08.238 225859 DEBUG nova.network.os_vif_util [None req-b1a140e3-48f6-4754-b3f1-f895eb3c6fb2 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5e:a4:97,bridge_name='br-int',has_traffic_filtering=True,id=f9059531-e6dc-4451-9c17-ec3b63e4b85f,network=Network(abb83e3e-0b12-431b-ad86-a1d271b5b46a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf9059531-e6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:37:08 compute-1 nova_compute[225855]: 2026-01-20 14:37:08.238 225859 DEBUG os_vif [None req-b1a140e3-48f6-4754-b3f1-f895eb3c6fb2 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5e:a4:97,bridge_name='br-int',has_traffic_filtering=True,id=f9059531-e6dc-4451-9c17-ec3b63e4b85f,network=Network(abb83e3e-0b12-431b-ad86-a1d271b5b46a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf9059531-e6') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 20 14:37:08 compute-1 nova_compute[225855]: 2026-01-20 14:37:08.240 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:37:08 compute-1 nova_compute[225855]: 2026-01-20 14:37:08.241 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf9059531-e6, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:37:08 compute-1 nova_compute[225855]: 2026-01-20 14:37:08.243 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:37:08 compute-1 nova_compute[225855]: 2026-01-20 14:37:08.244 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:37:08 compute-1 nova_compute[225855]: 2026-01-20 14:37:08.247 225859 INFO os_vif [None req-b1a140e3-48f6-4754-b3f1-f895eb3c6fb2 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5e:a4:97,bridge_name='br-int',has_traffic_filtering=True,id=f9059531-e6dc-4451-9c17-ec3b63e4b85f,network=Network(abb83e3e-0b12-431b-ad86-a1d271b5b46a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf9059531-e6')
Jan 20 14:37:08 compute-1 systemd[1]: libpod-conmon-a034a5d55e70bf048ac50890668baf65a5a1a8160f7f28acce0ccefec6537cac.scope: Deactivated successfully.
Jan 20 14:37:08 compute-1 ceph-mon[81775]: osdmap e221: 3 total, 3 up, 3 in
Jan 20 14:37:08 compute-1 ceph-mon[81775]: pgmap v1452: 321 pgs: 321 active+clean; 213 MiB data, 647 MiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 2.4 MiB/s wr, 246 op/s
Jan 20 14:37:08 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:37:08 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 14:37:08 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:37:08 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 14:37:08 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 14:37:08 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:37:08 compute-1 podman[249150]: 2026-01-20 14:37:08.308027747 +0000 UTC m=+0.051198366 container remove a034a5d55e70bf048ac50890668baf65a5a1a8160f7f28acce0ccefec6537cac (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-abb83e3e-0b12-431b-ad86-a1d271b5b46a, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:37:08 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:37:08.313 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[b24ddd82-f78d-441f-aecb-a72f47074a8b]: (4, ('Tue Jan 20 02:37:08 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-abb83e3e-0b12-431b-ad86-a1d271b5b46a (a034a5d55e70bf048ac50890668baf65a5a1a8160f7f28acce0ccefec6537cac)\na034a5d55e70bf048ac50890668baf65a5a1a8160f7f28acce0ccefec6537cac\nTue Jan 20 02:37:08 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-abb83e3e-0b12-431b-ad86-a1d271b5b46a (a034a5d55e70bf048ac50890668baf65a5a1a8160f7f28acce0ccefec6537cac)\na034a5d55e70bf048ac50890668baf65a5a1a8160f7f28acce0ccefec6537cac\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:37:08 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:37:08.315 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[a1db865d-9d28-422e-83d3-f13e2623bfd4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:37:08 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:37:08.316 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapabb83e3e-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:37:08 compute-1 kernel: tapabb83e3e-00: left promiscuous mode
Jan 20 14:37:08 compute-1 nova_compute[225855]: 2026-01-20 14:37:08.446 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:37:08 compute-1 nova_compute[225855]: 2026-01-20 14:37:08.461 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:37:08 compute-1 nova_compute[225855]: 2026-01-20 14:37:08.462 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:37:08 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:37:08.464 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[c369fa64-066b-4c87-8ec2-4fe65b6258a7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:37:08 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:37:08.478 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[286dde08-fdce-4b7c-8057-11fdaffbea1a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:37:08 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:37:08.480 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[c31d67c2-5f69-45e3-b527-1e63d036dc58]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:37:08 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:37:08.495 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[e061ba13-fb95-4106-ab46-23205a676493]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 482577, 'reachable_time': 27519, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 249184, 'error': None, 'target': 'ovnmeta-abb83e3e-0b12-431b-ad86-a1d271b5b46a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:37:08 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:37:08.498 140466 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-abb83e3e-0b12-431b-ad86-a1d271b5b46a deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 20 14:37:08 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:37:08.498 140466 DEBUG oslo.privsep.daemon [-] privsep: reply[f580e933-e19a-4e42-b5d1-45606bb74bfc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:37:08 compute-1 systemd[1]: run-netns-ovnmeta\x2dabb83e3e\x2d0b12\x2d431b\x2dad86\x2da1d271b5b46a.mount: Deactivated successfully.
Jan 20 14:37:08 compute-1 nova_compute[225855]: 2026-01-20 14:37:08.760 225859 INFO nova.virt.libvirt.driver [None req-b1a140e3-48f6-4754-b3f1-f895eb3c6fb2 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] [instance: 59387c9d-df91-4f43-b389-00174486fc84] Deleting instance files /var/lib/nova/instances/59387c9d-df91-4f43-b389-00174486fc84_del
Jan 20 14:37:08 compute-1 nova_compute[225855]: 2026-01-20 14:37:08.761 225859 INFO nova.virt.libvirt.driver [None req-b1a140e3-48f6-4754-b3f1-f895eb3c6fb2 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] [instance: 59387c9d-df91-4f43-b389-00174486fc84] Deletion of /var/lib/nova/instances/59387c9d-df91-4f43-b389-00174486fc84_del complete
Jan 20 14:37:08 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:37:08 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:37:08 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:37:08.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:37:08 compute-1 nova_compute[225855]: 2026-01-20 14:37:08.840 225859 INFO nova.compute.manager [None req-b1a140e3-48f6-4754-b3f1-f895eb3c6fb2 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] [instance: 59387c9d-df91-4f43-b389-00174486fc84] Took 0.87 seconds to destroy the instance on the hypervisor.
Jan 20 14:37:08 compute-1 nova_compute[225855]: 2026-01-20 14:37:08.840 225859 DEBUG oslo.service.loopingcall [None req-b1a140e3-48f6-4754-b3f1-f895eb3c6fb2 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 20 14:37:08 compute-1 nova_compute[225855]: 2026-01-20 14:37:08.841 225859 DEBUG nova.compute.manager [-] [instance: 59387c9d-df91-4f43-b389-00174486fc84] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 20 14:37:08 compute-1 nova_compute[225855]: 2026-01-20 14:37:08.841 225859 DEBUG nova.network.neutron [-] [instance: 59387c9d-df91-4f43-b389-00174486fc84] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 20 14:37:09 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:37:09 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:37:09 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:37:09.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:37:09 compute-1 nova_compute[225855]: 2026-01-20 14:37:09.725 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:37:10 compute-1 nova_compute[225855]: 2026-01-20 14:37:10.477 225859 DEBUG nova.network.neutron [-] [instance: 59387c9d-df91-4f43-b389-00174486fc84] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:37:10 compute-1 nova_compute[225855]: 2026-01-20 14:37:10.501 225859 INFO nova.compute.manager [-] [instance: 59387c9d-df91-4f43-b389-00174486fc84] Took 1.66 seconds to deallocate network for instance.
Jan 20 14:37:10 compute-1 nova_compute[225855]: 2026-01-20 14:37:10.588 225859 DEBUG oslo_concurrency.lockutils [None req-b1a140e3-48f6-4754-b3f1-f895eb3c6fb2 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:37:10 compute-1 nova_compute[225855]: 2026-01-20 14:37:10.589 225859 DEBUG oslo_concurrency.lockutils [None req-b1a140e3-48f6-4754-b3f1-f895eb3c6fb2 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:37:10 compute-1 nova_compute[225855]: 2026-01-20 14:37:10.634 225859 DEBUG nova.compute.manager [req-6fa74c7a-287b-422b-95b6-e62120fe6281 req-7e30ef4e-162a-40f3-829c-5079f6ed9100 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 59387c9d-df91-4f43-b389-00174486fc84] Received event network-vif-deleted-f9059531-e6dc-4451-9c17-ec3b63e4b85f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:37:10 compute-1 nova_compute[225855]: 2026-01-20 14:37:10.655 225859 DEBUG oslo_concurrency.processutils [None req-b1a140e3-48f6-4754-b3f1-f895eb3c6fb2 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:37:10 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:37:10 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:37:10 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:37:10.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:37:10 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e221 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:37:10 compute-1 ceph-mon[81775]: pgmap v1453: 321 pgs: 321 active+clean; 185 MiB data, 637 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.4 MiB/s wr, 196 op/s
Jan 20 14:37:11 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:37:11 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/998884099' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:37:11 compute-1 nova_compute[225855]: 2026-01-20 14:37:11.253 225859 DEBUG oslo_concurrency.processutils [None req-b1a140e3-48f6-4754-b3f1-f895eb3c6fb2 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.598s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:37:11 compute-1 nova_compute[225855]: 2026-01-20 14:37:11.259 225859 DEBUG nova.compute.provider_tree [None req-b1a140e3-48f6-4754-b3f1-f895eb3c6fb2 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:37:11 compute-1 nova_compute[225855]: 2026-01-20 14:37:11.289 225859 DEBUG nova.scheduler.client.report [None req-b1a140e3-48f6-4754-b3f1-f895eb3c6fb2 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:37:11 compute-1 nova_compute[225855]: 2026-01-20 14:37:11.356 225859 DEBUG oslo_concurrency.lockutils [None req-b1a140e3-48f6-4754-b3f1-f895eb3c6fb2 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.768s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:37:11 compute-1 nova_compute[225855]: 2026-01-20 14:37:11.394 225859 INFO nova.scheduler.client.report [None req-b1a140e3-48f6-4754-b3f1-f895eb3c6fb2 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] Deleted allocations for instance 59387c9d-df91-4f43-b389-00174486fc84
Jan 20 14:37:11 compute-1 nova_compute[225855]: 2026-01-20 14:37:11.521 225859 DEBUG oslo_concurrency.lockutils [None req-b1a140e3-48f6-4754-b3f1-f895eb3c6fb2 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] Lock "59387c9d-df91-4f43-b389-00174486fc84" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.557s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:37:11 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:37:11 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:37:11 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:37:11.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:37:11 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e222 e222: 3 total, 3 up, 3 in
Jan 20 14:37:11 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/998884099' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:37:11 compute-1 ceph-mon[81775]: pgmap v1454: 321 pgs: 321 active+clean; 121 MiB data, 612 MiB used, 20 GiB / 21 GiB avail; 471 KiB/s rd, 2.6 MiB/s wr, 194 op/s
Jan 20 14:37:12 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:37:12 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:37:12 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:37:12.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:37:12 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e223 e223: 3 total, 3 up, 3 in
Jan 20 14:37:12 compute-1 ceph-mon[81775]: osdmap e222: 3 total, 3 up, 3 in
Jan 20 14:37:12 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/3321828056' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:37:12 compute-1 ceph-mon[81775]: osdmap e223: 3 total, 3 up, 3 in
Jan 20 14:37:13 compute-1 sudo[249210]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:37:13 compute-1 sudo[249210]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:37:13 compute-1 sudo[249210]: pam_unix(sudo:session): session closed for user root
Jan 20 14:37:13 compute-1 nova_compute[225855]: 2026-01-20 14:37:13.263 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:37:13 compute-1 sudo[249235]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 14:37:13 compute-1 sudo[249235]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:37:13 compute-1 sudo[249235]: pam_unix(sudo:session): session closed for user root
Jan 20 14:37:13 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 20 14:37:13 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1783961933' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 14:37:13 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 20 14:37:13 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1783961933' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 14:37:13 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:37:13 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:37:13 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:37:13.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:37:13 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e224 e224: 3 total, 3 up, 3 in
Jan 20 14:37:13 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:37:13 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:37:13 compute-1 ceph-mon[81775]: pgmap v1457: 321 pgs: 321 active+clean; 121 MiB data, 612 MiB used, 20 GiB / 21 GiB avail; 683 KiB/s rd, 4.1 MiB/s wr, 219 op/s
Jan 20 14:37:13 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/1783961933' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 14:37:13 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/1783961933' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 14:37:13 compute-1 ceph-mon[81775]: osdmap e224: 3 total, 3 up, 3 in
Jan 20 14:37:14 compute-1 podman[249261]: 2026-01-20 14:37:14.037148805 +0000 UTC m=+0.076855041 container health_status 72be95c12e041eb2cd1a16ad70dfa1391d9f03a5f3180a614ead9459dcfcd46f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Jan 20 14:37:14 compute-1 nova_compute[225855]: 2026-01-20 14:37:14.742 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:37:14 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:37:14 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:37:14 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:37:14.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:37:15 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e225 e225: 3 total, 3 up, 3 in
Jan 20 14:37:15 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:37:15 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:37:15 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:37:15.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:37:15 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e225 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:37:16 compute-1 ceph-mon[81775]: osdmap e225: 3 total, 3 up, 3 in
Jan 20 14:37:16 compute-1 ceph-mon[81775]: pgmap v1460: 321 pgs: 321 active+clean; 180 MiB data, 638 MiB used, 20 GiB / 21 GiB avail; 6.2 MiB/s rd, 9.3 MiB/s wr, 142 op/s
Jan 20 14:37:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:37:16.396 140354 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:37:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:37:16.397 140354 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:37:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:37:16.397 140354 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:37:16 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:37:16 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:37:16 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:37:16.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:37:17 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:37:17 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:37:17 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:37:17.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:37:17 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e226 e226: 3 total, 3 up, 3 in
Jan 20 14:37:18 compute-1 nova_compute[225855]: 2026-01-20 14:37:18.265 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:37:18 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:37:18 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:37:18 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:37:18.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:37:18 compute-1 ceph-mon[81775]: pgmap v1461: 321 pgs: 321 active+clean; 212 MiB data, 661 MiB used, 20 GiB / 21 GiB avail; 8.4 MiB/s rd, 12 MiB/s wr, 262 op/s
Jan 20 14:37:18 compute-1 ceph-mon[81775]: osdmap e226: 3 total, 3 up, 3 in
Jan 20 14:37:19 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:37:19 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:37:19 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:37:19.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:37:19 compute-1 nova_compute[225855]: 2026-01-20 14:37:19.744 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:37:19 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/1595490804' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:37:19 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/3192988678' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:37:20 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:37:20 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:37:20 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:37:20.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:37:20 compute-1 ceph-mon[81775]: pgmap v1463: 321 pgs: 321 active+clean; 185 MiB data, 648 MiB used, 20 GiB / 21 GiB avail; 7.9 MiB/s rd, 11 MiB/s wr, 250 op/s
Jan 20 14:37:20 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e226 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:37:21 compute-1 nova_compute[225855]: 2026-01-20 14:37:21.340 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:37:21 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:37:21 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:37:21 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:37:21.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:37:22 compute-1 ceph-mon[81775]: pgmap v1464: 321 pgs: 321 active+clean; 101 MiB data, 589 MiB used, 20 GiB / 21 GiB avail; 4.8 MiB/s rd, 6.5 MiB/s wr, 206 op/s
Jan 20 14:37:22 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/1752266427' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:37:22 compute-1 ceph-osd[79119]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 20 14:37:22 compute-1 ceph-osd[79119]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.1 total, 600.0 interval
                                           Cumulative writes: 23K writes, 92K keys, 23K commit groups, 1.0 writes per commit group, ingest: 0.08 GB, 0.04 MB/s
                                           Cumulative WAL: 23K writes, 7643 syncs, 3.01 writes per sync, written: 0.08 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 11K writes, 49K keys, 11K commit groups, 1.0 writes per commit group, ingest: 48.28 MB, 0.08 MB/s
                                           Interval WAL: 11K writes, 4527 syncs, 2.61 writes per sync, written: 0.05 GB, 0.08 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 20 14:37:22 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:37:22 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:37:22 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:37:22.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:37:22 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e227 e227: 3 total, 3 up, 3 in
Jan 20 14:37:23 compute-1 nova_compute[225855]: 2026-01-20 14:37:23.205 225859 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1768919828.2039974, 59387c9d-df91-4f43-b389-00174486fc84 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:37:23 compute-1 nova_compute[225855]: 2026-01-20 14:37:23.206 225859 INFO nova.compute.manager [-] [instance: 59387c9d-df91-4f43-b389-00174486fc84] VM Stopped (Lifecycle Event)
Jan 20 14:37:23 compute-1 nova_compute[225855]: 2026-01-20 14:37:23.229 225859 DEBUG nova.compute.manager [None req-dca99096-b13a-4f5d-a020-2b949e73ed1b - - - - - -] [instance: 59387c9d-df91-4f43-b389-00174486fc84] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:37:23 compute-1 nova_compute[225855]: 2026-01-20 14:37:23.267 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:37:23 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:37:23 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:37:23 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:37:23.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:37:23 compute-1 sudo[249292]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:37:23 compute-1 sudo[249292]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:37:23 compute-1 sudo[249292]: pam_unix(sudo:session): session closed for user root
Jan 20 14:37:23 compute-1 ceph-mon[81775]: osdmap e227: 3 total, 3 up, 3 in
Jan 20 14:37:23 compute-1 sudo[249317]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:37:23 compute-1 sudo[249317]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:37:23 compute-1 sudo[249317]: pam_unix(sudo:session): session closed for user root
Jan 20 14:37:24 compute-1 nova_compute[225855]: 2026-01-20 14:37:24.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:37:24 compute-1 nova_compute[225855]: 2026-01-20 14:37:24.339 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 14:37:24 compute-1 nova_compute[225855]: 2026-01-20 14:37:24.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:37:24 compute-1 nova_compute[225855]: 2026-01-20 14:37:24.522 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:37:24 compute-1 nova_compute[225855]: 2026-01-20 14:37:24.522 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:37:24 compute-1 nova_compute[225855]: 2026-01-20 14:37:24.522 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:37:24 compute-1 nova_compute[225855]: 2026-01-20 14:37:24.522 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 14:37:24 compute-1 nova_compute[225855]: 2026-01-20 14:37:24.523 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:37:24 compute-1 nova_compute[225855]: 2026-01-20 14:37:24.746 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:37:24 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:37:24 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:37:24 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:37:24.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:37:24 compute-1 ceph-mon[81775]: pgmap v1466: 321 pgs: 321 active+clean; 101 MiB data, 589 MiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 3.8 MiB/s wr, 151 op/s
Jan 20 14:37:24 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:37:24 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/754849335' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:37:25 compute-1 nova_compute[225855]: 2026-01-20 14:37:25.006 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:37:25 compute-1 podman[249362]: 2026-01-20 14:37:25.026377633 +0000 UTC m=+0.070099919 container health_status 533cd8e0c3b8b7f910f15cef49e7f879ab46c12ce646a35628c15f13725eb500 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Jan 20 14:37:25 compute-1 nova_compute[225855]: 2026-01-20 14:37:25.171 225859 WARNING nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 14:37:25 compute-1 nova_compute[225855]: 2026-01-20 14:37:25.173 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4628MB free_disk=20.957855224609375GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 14:37:25 compute-1 nova_compute[225855]: 2026-01-20 14:37:25.173 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:37:25 compute-1 nova_compute[225855]: 2026-01-20 14:37:25.173 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:37:25 compute-1 nova_compute[225855]: 2026-01-20 14:37:25.270 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 14:37:25 compute-1 nova_compute[225855]: 2026-01-20 14:37:25.270 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 14:37:25 compute-1 nova_compute[225855]: 2026-01-20 14:37:25.310 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:37:25 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:37:25 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:37:25 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:37:25.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:37:25 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:37:25 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4289147269' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:37:25 compute-1 nova_compute[225855]: 2026-01-20 14:37:25.755 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:37:25 compute-1 nova_compute[225855]: 2026-01-20 14:37:25.760 225859 DEBUG nova.compute.provider_tree [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:37:25 compute-1 nova_compute[225855]: 2026-01-20 14:37:25.791 225859 DEBUG nova.scheduler.client.report [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:37:25 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/754849335' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:37:25 compute-1 ceph-mon[81775]: pgmap v1467: 321 pgs: 321 active+clean; 88 MiB data, 579 MiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 21 KiB/s wr, 92 op/s
Jan 20 14:37:25 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/4289147269' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:37:25 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e227 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:37:25 compute-1 nova_compute[225855]: 2026-01-20 14:37:25.950 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 14:37:25 compute-1 nova_compute[225855]: 2026-01-20 14:37:25.951 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.777s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:37:26 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:37:26 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:37:26 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:37:26.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:37:26 compute-1 nova_compute[225855]: 2026-01-20 14:37:26.952 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:37:26 compute-1 nova_compute[225855]: 2026-01-20 14:37:26.952 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 14:37:26 compute-1 nova_compute[225855]: 2026-01-20 14:37:26.952 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 20 14:37:26 compute-1 nova_compute[225855]: 2026-01-20 14:37:26.967 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 20 14:37:26 compute-1 nova_compute[225855]: 2026-01-20 14:37:26.968 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:37:26 compute-1 nova_compute[225855]: 2026-01-20 14:37:26.968 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:37:26 compute-1 nova_compute[225855]: 2026-01-20 14:37:26.969 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:37:26 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e228 e228: 3 total, 3 up, 3 in
Jan 20 14:37:27 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:37:27 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:37:27 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:37:27.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:37:28 compute-1 ceph-mon[81775]: osdmap e228: 3 total, 3 up, 3 in
Jan 20 14:37:28 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/2516451502' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:37:28 compute-1 ceph-mon[81775]: pgmap v1469: 321 pgs: 321 active+clean; 88 MiB data, 579 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 20 KiB/s wr, 160 op/s
Jan 20 14:37:28 compute-1 nova_compute[225855]: 2026-01-20 14:37:28.269 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:37:28 compute-1 nova_compute[225855]: 2026-01-20 14:37:28.341 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:37:28 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:37:28 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:37:28 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:37:28.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:37:29 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/907367584' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:37:29 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/22471952' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:37:29 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e229 e229: 3 total, 3 up, 3 in
Jan 20 14:37:29 compute-1 nova_compute[225855]: 2026-01-20 14:37:29.335 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:37:29 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:37:29 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:37:29 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:37:29.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:37:29 compute-1 nova_compute[225855]: 2026-01-20 14:37:29.748 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:37:29 compute-1 nova_compute[225855]: 2026-01-20 14:37:29.806 225859 DEBUG oslo_concurrency.lockutils [None req-59c3a9a4-7848-4b64-b142-2499305cdad5 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] Acquiring lock "98e22622-b8b8-44a5-befe-1bd745f9c946" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:37:29 compute-1 nova_compute[225855]: 2026-01-20 14:37:29.807 225859 DEBUG oslo_concurrency.lockutils [None req-59c3a9a4-7848-4b64-b142-2499305cdad5 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] Lock "98e22622-b8b8-44a5-befe-1bd745f9c946" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:37:29 compute-1 nova_compute[225855]: 2026-01-20 14:37:29.823 225859 DEBUG nova.compute.manager [None req-59c3a9a4-7848-4b64-b142-2499305cdad5 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] [instance: 98e22622-b8b8-44a5-befe-1bd745f9c946] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 20 14:37:29 compute-1 nova_compute[225855]: 2026-01-20 14:37:29.920 225859 DEBUG oslo_concurrency.lockutils [None req-59c3a9a4-7848-4b64-b142-2499305cdad5 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:37:29 compute-1 nova_compute[225855]: 2026-01-20 14:37:29.920 225859 DEBUG oslo_concurrency.lockutils [None req-59c3a9a4-7848-4b64-b142-2499305cdad5 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:37:29 compute-1 nova_compute[225855]: 2026-01-20 14:37:29.925 225859 DEBUG nova.virt.hardware [None req-59c3a9a4-7848-4b64-b142-2499305cdad5 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 20 14:37:29 compute-1 nova_compute[225855]: 2026-01-20 14:37:29.926 225859 INFO nova.compute.claims [None req-59c3a9a4-7848-4b64-b142-2499305cdad5 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] [instance: 98e22622-b8b8-44a5-befe-1bd745f9c946] Claim successful on node compute-1.ctlplane.example.com
Jan 20 14:37:30 compute-1 nova_compute[225855]: 2026-01-20 14:37:30.047 225859 DEBUG oslo_concurrency.processutils [None req-59c3a9a4-7848-4b64-b142-2499305cdad5 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:37:30 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e230 e230: 3 total, 3 up, 3 in
Jan 20 14:37:30 compute-1 ceph-mon[81775]: osdmap e229: 3 total, 3 up, 3 in
Jan 20 14:37:30 compute-1 ceph-mon[81775]: pgmap v1471: 321 pgs: 321 active+clean; 105 MiB data, 589 MiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 1.4 MiB/s wr, 173 op/s
Jan 20 14:37:30 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/1863201075' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:37:30 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:37:30 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2696979408' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:37:30 compute-1 nova_compute[225855]: 2026-01-20 14:37:30.467 225859 DEBUG oslo_concurrency.processutils [None req-59c3a9a4-7848-4b64-b142-2499305cdad5 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.420s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:37:30 compute-1 nova_compute[225855]: 2026-01-20 14:37:30.473 225859 DEBUG nova.compute.provider_tree [None req-59c3a9a4-7848-4b64-b142-2499305cdad5 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:37:30 compute-1 nova_compute[225855]: 2026-01-20 14:37:30.492 225859 DEBUG nova.scheduler.client.report [None req-59c3a9a4-7848-4b64-b142-2499305cdad5 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:37:30 compute-1 nova_compute[225855]: 2026-01-20 14:37:30.516 225859 DEBUG oslo_concurrency.lockutils [None req-59c3a9a4-7848-4b64-b142-2499305cdad5 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.595s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:37:30 compute-1 nova_compute[225855]: 2026-01-20 14:37:30.517 225859 DEBUG nova.compute.manager [None req-59c3a9a4-7848-4b64-b142-2499305cdad5 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] [instance: 98e22622-b8b8-44a5-befe-1bd745f9c946] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 20 14:37:30 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:37:30.533 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=20, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '12:bb:42', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '06:92:24:f7:15:56'}, ipsec=False) old=SB_Global(nb_cfg=19) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:37:30 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:37:30.534 140354 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 20 14:37:30 compute-1 nova_compute[225855]: 2026-01-20 14:37:30.626 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:37:30 compute-1 nova_compute[225855]: 2026-01-20 14:37:30.633 225859 DEBUG nova.compute.manager [None req-59c3a9a4-7848-4b64-b142-2499305cdad5 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] [instance: 98e22622-b8b8-44a5-befe-1bd745f9c946] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 20 14:37:30 compute-1 nova_compute[225855]: 2026-01-20 14:37:30.633 225859 DEBUG nova.network.neutron [None req-59c3a9a4-7848-4b64-b142-2499305cdad5 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] [instance: 98e22622-b8b8-44a5-befe-1bd745f9c946] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 20 14:37:30 compute-1 nova_compute[225855]: 2026-01-20 14:37:30.653 225859 INFO nova.virt.libvirt.driver [None req-59c3a9a4-7848-4b64-b142-2499305cdad5 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] [instance: 98e22622-b8b8-44a5-befe-1bd745f9c946] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 20 14:37:30 compute-1 nova_compute[225855]: 2026-01-20 14:37:30.673 225859 DEBUG nova.compute.manager [None req-59c3a9a4-7848-4b64-b142-2499305cdad5 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] [instance: 98e22622-b8b8-44a5-befe-1bd745f9c946] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 20 14:37:30 compute-1 nova_compute[225855]: 2026-01-20 14:37:30.803 225859 DEBUG nova.compute.manager [None req-59c3a9a4-7848-4b64-b142-2499305cdad5 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] [instance: 98e22622-b8b8-44a5-befe-1bd745f9c946] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 20 14:37:30 compute-1 nova_compute[225855]: 2026-01-20 14:37:30.804 225859 DEBUG nova.virt.libvirt.driver [None req-59c3a9a4-7848-4b64-b142-2499305cdad5 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] [instance: 98e22622-b8b8-44a5-befe-1bd745f9c946] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 20 14:37:30 compute-1 nova_compute[225855]: 2026-01-20 14:37:30.805 225859 INFO nova.virt.libvirt.driver [None req-59c3a9a4-7848-4b64-b142-2499305cdad5 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] [instance: 98e22622-b8b8-44a5-befe-1bd745f9c946] Creating image(s)
Jan 20 14:37:30 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:37:30 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:37:30 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:37:30.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:37:30 compute-1 nova_compute[225855]: 2026-01-20 14:37:30.833 225859 DEBUG nova.storage.rbd_utils [None req-59c3a9a4-7848-4b64-b142-2499305cdad5 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] rbd image 98e22622-b8b8-44a5-befe-1bd745f9c946_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:37:30 compute-1 nova_compute[225855]: 2026-01-20 14:37:30.858 225859 DEBUG nova.storage.rbd_utils [None req-59c3a9a4-7848-4b64-b142-2499305cdad5 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] rbd image 98e22622-b8b8-44a5-befe-1bd745f9c946_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:37:30 compute-1 nova_compute[225855]: 2026-01-20 14:37:30.886 225859 DEBUG nova.storage.rbd_utils [None req-59c3a9a4-7848-4b64-b142-2499305cdad5 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] rbd image 98e22622-b8b8-44a5-befe-1bd745f9c946_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:37:30 compute-1 nova_compute[225855]: 2026-01-20 14:37:30.890 225859 DEBUG oslo_concurrency.processutils [None req-59c3a9a4-7848-4b64-b142-2499305cdad5 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:37:30 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e230 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:37:30 compute-1 nova_compute[225855]: 2026-01-20 14:37:30.948 225859 DEBUG nova.policy [None req-59c3a9a4-7848-4b64-b142-2499305cdad5 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '592a0204f38a4596ab1ab81774214a6d', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '7d78990d13704d629a8a3e8910d005c5', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 20 14:37:30 compute-1 nova_compute[225855]: 2026-01-20 14:37:30.956 225859 DEBUG oslo_concurrency.processutils [None req-59c3a9a4-7848-4b64-b142-2499305cdad5 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:37:30 compute-1 nova_compute[225855]: 2026-01-20 14:37:30.956 225859 DEBUG oslo_concurrency.lockutils [None req-59c3a9a4-7848-4b64-b142-2499305cdad5 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] Acquiring lock "82d5c1918fd7c974214c7a48c1793a7a82560462" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:37:30 compute-1 nova_compute[225855]: 2026-01-20 14:37:30.957 225859 DEBUG oslo_concurrency.lockutils [None req-59c3a9a4-7848-4b64-b142-2499305cdad5 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:37:30 compute-1 nova_compute[225855]: 2026-01-20 14:37:30.957 225859 DEBUG oslo_concurrency.lockutils [None req-59c3a9a4-7848-4b64-b142-2499305cdad5 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:37:30 compute-1 nova_compute[225855]: 2026-01-20 14:37:30.982 225859 DEBUG nova.storage.rbd_utils [None req-59c3a9a4-7848-4b64-b142-2499305cdad5 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] rbd image 98e22622-b8b8-44a5-befe-1bd745f9c946_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:37:30 compute-1 nova_compute[225855]: 2026-01-20 14:37:30.986 225859 DEBUG oslo_concurrency.processutils [None req-59c3a9a4-7848-4b64-b142-2499305cdad5 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 98e22622-b8b8-44a5-befe-1bd745f9c946_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:37:31 compute-1 ceph-mon[81775]: osdmap e230: 3 total, 3 up, 3 in
Jan 20 14:37:31 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/2696979408' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:37:31 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e231 e231: 3 total, 3 up, 3 in
Jan 20 14:37:31 compute-1 nova_compute[225855]: 2026-01-20 14:37:31.267 225859 DEBUG oslo_concurrency.processutils [None req-59c3a9a4-7848-4b64-b142-2499305cdad5 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 98e22622-b8b8-44a5-befe-1bd745f9c946_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.281s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:37:31 compute-1 nova_compute[225855]: 2026-01-20 14:37:31.355 225859 DEBUG nova.storage.rbd_utils [None req-59c3a9a4-7848-4b64-b142-2499305cdad5 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] resizing rbd image 98e22622-b8b8-44a5-befe-1bd745f9c946_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 20 14:37:31 compute-1 nova_compute[225855]: 2026-01-20 14:37:31.456 225859 DEBUG nova.objects.instance [None req-59c3a9a4-7848-4b64-b142-2499305cdad5 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] Lazy-loading 'migration_context' on Instance uuid 98e22622-b8b8-44a5-befe-1bd745f9c946 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:37:31 compute-1 nova_compute[225855]: 2026-01-20 14:37:31.469 225859 DEBUG nova.virt.libvirt.driver [None req-59c3a9a4-7848-4b64-b142-2499305cdad5 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] [instance: 98e22622-b8b8-44a5-befe-1bd745f9c946] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 20 14:37:31 compute-1 nova_compute[225855]: 2026-01-20 14:37:31.470 225859 DEBUG nova.virt.libvirt.driver [None req-59c3a9a4-7848-4b64-b142-2499305cdad5 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] [instance: 98e22622-b8b8-44a5-befe-1bd745f9c946] Ensure instance console log exists: /var/lib/nova/instances/98e22622-b8b8-44a5-befe-1bd745f9c946/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 20 14:37:31 compute-1 nova_compute[225855]: 2026-01-20 14:37:31.470 225859 DEBUG oslo_concurrency.lockutils [None req-59c3a9a4-7848-4b64-b142-2499305cdad5 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:37:31 compute-1 nova_compute[225855]: 2026-01-20 14:37:31.471 225859 DEBUG oslo_concurrency.lockutils [None req-59c3a9a4-7848-4b64-b142-2499305cdad5 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:37:31 compute-1 nova_compute[225855]: 2026-01-20 14:37:31.471 225859 DEBUG oslo_concurrency.lockutils [None req-59c3a9a4-7848-4b64-b142-2499305cdad5 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:37:31 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:37:31 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:37:31 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:37:31.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:37:32 compute-1 ceph-mon[81775]: osdmap e231: 3 total, 3 up, 3 in
Jan 20 14:37:32 compute-1 ceph-mon[81775]: pgmap v1474: 321 pgs: 3 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 316 active+clean; 150 MiB data, 618 MiB used, 20 GiB / 21 GiB avail; 4.7 MiB/s rd, 6.3 MiB/s wr, 156 op/s
Jan 20 14:37:32 compute-1 nova_compute[225855]: 2026-01-20 14:37:32.540 225859 DEBUG nova.network.neutron [None req-59c3a9a4-7848-4b64-b142-2499305cdad5 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] [instance: 98e22622-b8b8-44a5-befe-1bd745f9c946] Successfully created port: 25ba0729-4796-48e4-9b7a-6c0716d26545 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 20 14:37:32 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:37:32 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:37:32 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:37:32.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:37:33 compute-1 nova_compute[225855]: 2026-01-20 14:37:33.271 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:37:33 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:37:33 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:37:33 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:37:33.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:37:33 compute-1 nova_compute[225855]: 2026-01-20 14:37:33.848 225859 DEBUG nova.network.neutron [None req-59c3a9a4-7848-4b64-b142-2499305cdad5 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] [instance: 98e22622-b8b8-44a5-befe-1bd745f9c946] Successfully updated port: 25ba0729-4796-48e4-9b7a-6c0716d26545 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 20 14:37:33 compute-1 nova_compute[225855]: 2026-01-20 14:37:33.881 225859 DEBUG oslo_concurrency.lockutils [None req-59c3a9a4-7848-4b64-b142-2499305cdad5 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] Acquiring lock "refresh_cache-98e22622-b8b8-44a5-befe-1bd745f9c946" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:37:33 compute-1 nova_compute[225855]: 2026-01-20 14:37:33.881 225859 DEBUG oslo_concurrency.lockutils [None req-59c3a9a4-7848-4b64-b142-2499305cdad5 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] Acquired lock "refresh_cache-98e22622-b8b8-44a5-befe-1bd745f9c946" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:37:33 compute-1 nova_compute[225855]: 2026-01-20 14:37:33.881 225859 DEBUG nova.network.neutron [None req-59c3a9a4-7848-4b64-b142-2499305cdad5 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] [instance: 98e22622-b8b8-44a5-befe-1bd745f9c946] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 14:37:34 compute-1 nova_compute[225855]: 2026-01-20 14:37:34.297 225859 DEBUG nova.network.neutron [None req-59c3a9a4-7848-4b64-b142-2499305cdad5 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] [instance: 98e22622-b8b8-44a5-befe-1bd745f9c946] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 20 14:37:34 compute-1 nova_compute[225855]: 2026-01-20 14:37:34.520 225859 DEBUG nova.compute.manager [req-38e57a0d-2523-429c-8006-28c6843ec8f2 req-2f3b7776-5a31-4a72-ab50-2a44f88596ca 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 98e22622-b8b8-44a5-befe-1bd745f9c946] Received event network-changed-25ba0729-4796-48e4-9b7a-6c0716d26545 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:37:34 compute-1 nova_compute[225855]: 2026-01-20 14:37:34.521 225859 DEBUG nova.compute.manager [req-38e57a0d-2523-429c-8006-28c6843ec8f2 req-2f3b7776-5a31-4a72-ab50-2a44f88596ca 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 98e22622-b8b8-44a5-befe-1bd745f9c946] Refreshing instance network info cache due to event network-changed-25ba0729-4796-48e4-9b7a-6c0716d26545. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 14:37:34 compute-1 nova_compute[225855]: 2026-01-20 14:37:34.521 225859 DEBUG oslo_concurrency.lockutils [req-38e57a0d-2523-429c-8006-28c6843ec8f2 req-2f3b7776-5a31-4a72-ab50-2a44f88596ca 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-98e22622-b8b8-44a5-befe-1bd745f9c946" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:37:34 compute-1 ceph-mon[81775]: pgmap v1475: 321 pgs: 3 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 316 active+clean; 150 MiB data, 618 MiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 4.8 MiB/s wr, 119 op/s
Jan 20 14:37:34 compute-1 nova_compute[225855]: 2026-01-20 14:37:34.749 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:37:34 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:37:34 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:37:34 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:37:34.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:37:35 compute-1 nova_compute[225855]: 2026-01-20 14:37:35.487 225859 DEBUG nova.network.neutron [None req-59c3a9a4-7848-4b64-b142-2499305cdad5 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] [instance: 98e22622-b8b8-44a5-befe-1bd745f9c946] Updating instance_info_cache with network_info: [{"id": "25ba0729-4796-48e4-9b7a-6c0716d26545", "address": "fa:16:3e:23:1a:21", "network": {"id": "b1f372f9-fbd1-4ef7-9be7-ace7ce14bb23", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-462971735-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d78990d13704d629a8a3e8910d005c5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25ba0729-47", "ovs_interfaceid": "25ba0729-4796-48e4-9b7a-6c0716d26545", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:37:35 compute-1 nova_compute[225855]: 2026-01-20 14:37:35.557 225859 DEBUG oslo_concurrency.lockutils [None req-59c3a9a4-7848-4b64-b142-2499305cdad5 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] Releasing lock "refresh_cache-98e22622-b8b8-44a5-befe-1bd745f9c946" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:37:35 compute-1 nova_compute[225855]: 2026-01-20 14:37:35.557 225859 DEBUG nova.compute.manager [None req-59c3a9a4-7848-4b64-b142-2499305cdad5 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] [instance: 98e22622-b8b8-44a5-befe-1bd745f9c946] Instance network_info: |[{"id": "25ba0729-4796-48e4-9b7a-6c0716d26545", "address": "fa:16:3e:23:1a:21", "network": {"id": "b1f372f9-fbd1-4ef7-9be7-ace7ce14bb23", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-462971735-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d78990d13704d629a8a3e8910d005c5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25ba0729-47", "ovs_interfaceid": "25ba0729-4796-48e4-9b7a-6c0716d26545", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 20 14:37:35 compute-1 nova_compute[225855]: 2026-01-20 14:37:35.557 225859 DEBUG oslo_concurrency.lockutils [req-38e57a0d-2523-429c-8006-28c6843ec8f2 req-2f3b7776-5a31-4a72-ab50-2a44f88596ca 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-98e22622-b8b8-44a5-befe-1bd745f9c946" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:37:35 compute-1 nova_compute[225855]: 2026-01-20 14:37:35.558 225859 DEBUG nova.network.neutron [req-38e57a0d-2523-429c-8006-28c6843ec8f2 req-2f3b7776-5a31-4a72-ab50-2a44f88596ca 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 98e22622-b8b8-44a5-befe-1bd745f9c946] Refreshing network info cache for port 25ba0729-4796-48e4-9b7a-6c0716d26545 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 14:37:35 compute-1 nova_compute[225855]: 2026-01-20 14:37:35.560 225859 DEBUG nova.virt.libvirt.driver [None req-59c3a9a4-7848-4b64-b142-2499305cdad5 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] [instance: 98e22622-b8b8-44a5-befe-1bd745f9c946] Start _get_guest_xml network_info=[{"id": "25ba0729-4796-48e4-9b7a-6c0716d26545", "address": "fa:16:3e:23:1a:21", "network": {"id": "b1f372f9-fbd1-4ef7-9be7-ace7ce14bb23", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-462971735-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d78990d13704d629a8a3e8910d005c5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25ba0729-47", "ovs_interfaceid": "25ba0729-4796-48e4-9b7a-6c0716d26545", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'device_type': 'disk', 'encryption_options': None, 'size': 0, 'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'guest_format': None, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'image_id': 'a32b3e07-16d8-46fd-9a7b-c242c432fcf9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 20 14:37:35 compute-1 nova_compute[225855]: 2026-01-20 14:37:35.566 225859 WARNING nova.virt.libvirt.driver [None req-59c3a9a4-7848-4b64-b142-2499305cdad5 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 14:37:35 compute-1 nova_compute[225855]: 2026-01-20 14:37:35.570 225859 DEBUG nova.virt.libvirt.host [None req-59c3a9a4-7848-4b64-b142-2499305cdad5 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 20 14:37:35 compute-1 nova_compute[225855]: 2026-01-20 14:37:35.570 225859 DEBUG nova.virt.libvirt.host [None req-59c3a9a4-7848-4b64-b142-2499305cdad5 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 20 14:37:35 compute-1 nova_compute[225855]: 2026-01-20 14:37:35.573 225859 DEBUG nova.virt.libvirt.host [None req-59c3a9a4-7848-4b64-b142-2499305cdad5 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 20 14:37:35 compute-1 nova_compute[225855]: 2026-01-20 14:37:35.573 225859 DEBUG nova.virt.libvirt.host [None req-59c3a9a4-7848-4b64-b142-2499305cdad5 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 20 14:37:35 compute-1 nova_compute[225855]: 2026-01-20 14:37:35.574 225859 DEBUG nova.virt.libvirt.driver [None req-59c3a9a4-7848-4b64-b142-2499305cdad5 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 20 14:37:35 compute-1 nova_compute[225855]: 2026-01-20 14:37:35.574 225859 DEBUG nova.virt.hardware [None req-59c3a9a4-7848-4b64-b142-2499305cdad5 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-20T14:21:55Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='522deaab-a741-4dbb-932d-d8b13a211c33',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 20 14:37:35 compute-1 nova_compute[225855]: 2026-01-20 14:37:35.575 225859 DEBUG nova.virt.hardware [None req-59c3a9a4-7848-4b64-b142-2499305cdad5 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 20 14:37:35 compute-1 nova_compute[225855]: 2026-01-20 14:37:35.575 225859 DEBUG nova.virt.hardware [None req-59c3a9a4-7848-4b64-b142-2499305cdad5 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 20 14:37:35 compute-1 nova_compute[225855]: 2026-01-20 14:37:35.575 225859 DEBUG nova.virt.hardware [None req-59c3a9a4-7848-4b64-b142-2499305cdad5 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 20 14:37:35 compute-1 nova_compute[225855]: 2026-01-20 14:37:35.575 225859 DEBUG nova.virt.hardware [None req-59c3a9a4-7848-4b64-b142-2499305cdad5 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 20 14:37:35 compute-1 nova_compute[225855]: 2026-01-20 14:37:35.576 225859 DEBUG nova.virt.hardware [None req-59c3a9a4-7848-4b64-b142-2499305cdad5 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 20 14:37:35 compute-1 nova_compute[225855]: 2026-01-20 14:37:35.576 225859 DEBUG nova.virt.hardware [None req-59c3a9a4-7848-4b64-b142-2499305cdad5 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 20 14:37:35 compute-1 nova_compute[225855]: 2026-01-20 14:37:35.576 225859 DEBUG nova.virt.hardware [None req-59c3a9a4-7848-4b64-b142-2499305cdad5 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 20 14:37:35 compute-1 nova_compute[225855]: 2026-01-20 14:37:35.576 225859 DEBUG nova.virt.hardware [None req-59c3a9a4-7848-4b64-b142-2499305cdad5 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 20 14:37:35 compute-1 nova_compute[225855]: 2026-01-20 14:37:35.577 225859 DEBUG nova.virt.hardware [None req-59c3a9a4-7848-4b64-b142-2499305cdad5 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 20 14:37:35 compute-1 nova_compute[225855]: 2026-01-20 14:37:35.577 225859 DEBUG nova.virt.hardware [None req-59c3a9a4-7848-4b64-b142-2499305cdad5 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 20 14:37:35 compute-1 nova_compute[225855]: 2026-01-20 14:37:35.579 225859 DEBUG oslo_concurrency.processutils [None req-59c3a9a4-7848-4b64-b142-2499305cdad5 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:37:35 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/471447498' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:37:35 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:37:35 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:37:35 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:37:35.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:37:35 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:37:35 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:37:35 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2472477865' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:37:36 compute-1 nova_compute[225855]: 2026-01-20 14:37:36.012 225859 DEBUG oslo_concurrency.processutils [None req-59c3a9a4-7848-4b64-b142-2499305cdad5 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:37:36 compute-1 nova_compute[225855]: 2026-01-20 14:37:36.040 225859 DEBUG nova.storage.rbd_utils [None req-59c3a9a4-7848-4b64-b142-2499305cdad5 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] rbd image 98e22622-b8b8-44a5-befe-1bd745f9c946_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:37:36 compute-1 nova_compute[225855]: 2026-01-20 14:37:36.045 225859 DEBUG oslo_concurrency.processutils [None req-59c3a9a4-7848-4b64-b142-2499305cdad5 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:37:36 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:37:36 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3403050555' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:37:36 compute-1 nova_compute[225855]: 2026-01-20 14:37:36.524 225859 DEBUG oslo_concurrency.processutils [None req-59c3a9a4-7848-4b64-b142-2499305cdad5 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:37:36 compute-1 nova_compute[225855]: 2026-01-20 14:37:36.528 225859 DEBUG nova.virt.libvirt.vif [None req-59c3a9a4-7848-4b64-b142-2499305cdad5 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T14:37:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-127847800',display_name='tempest-ImagesOneServerNegativeTestJSON-server-127847800',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-127847800',id=58,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='7d78990d13704d629a8a3e8910d005c5',ramdisk_id='',reservation_id='r-5l39qm5d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-866315696',owner_user_
name='tempest-ImagesOneServerNegativeTestJSON-866315696-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T14:37:30Z,user_data=None,user_id='592a0204f38a4596ab1ab81774214a6d',uuid=98e22622-b8b8-44a5-befe-1bd745f9c946,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "25ba0729-4796-48e4-9b7a-6c0716d26545", "address": "fa:16:3e:23:1a:21", "network": {"id": "b1f372f9-fbd1-4ef7-9be7-ace7ce14bb23", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-462971735-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d78990d13704d629a8a3e8910d005c5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25ba0729-47", "ovs_interfaceid": "25ba0729-4796-48e4-9b7a-6c0716d26545", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 20 14:37:36 compute-1 nova_compute[225855]: 2026-01-20 14:37:36.529 225859 DEBUG nova.network.os_vif_util [None req-59c3a9a4-7848-4b64-b142-2499305cdad5 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] Converting VIF {"id": "25ba0729-4796-48e4-9b7a-6c0716d26545", "address": "fa:16:3e:23:1a:21", "network": {"id": "b1f372f9-fbd1-4ef7-9be7-ace7ce14bb23", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-462971735-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d78990d13704d629a8a3e8910d005c5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25ba0729-47", "ovs_interfaceid": "25ba0729-4796-48e4-9b7a-6c0716d26545", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:37:36 compute-1 nova_compute[225855]: 2026-01-20 14:37:36.531 225859 DEBUG nova.network.os_vif_util [None req-59c3a9a4-7848-4b64-b142-2499305cdad5 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:23:1a:21,bridge_name='br-int',has_traffic_filtering=True,id=25ba0729-4796-48e4-9b7a-6c0716d26545,network=Network(b1f372f9-fbd1-4ef7-9be7-ace7ce14bb23),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap25ba0729-47') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:37:36 compute-1 nova_compute[225855]: 2026-01-20 14:37:36.534 225859 DEBUG nova.objects.instance [None req-59c3a9a4-7848-4b64-b142-2499305cdad5 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] Lazy-loading 'pci_devices' on Instance uuid 98e22622-b8b8-44a5-befe-1bd745f9c946 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:37:36 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:37:36.536 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5ffd4ac3-9266-4927-98ad-20a17782c725, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '20'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:37:36 compute-1 nova_compute[225855]: 2026-01-20 14:37:36.565 225859 DEBUG nova.virt.libvirt.driver [None req-59c3a9a4-7848-4b64-b142-2499305cdad5 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] [instance: 98e22622-b8b8-44a5-befe-1bd745f9c946] End _get_guest_xml xml=<domain type="kvm">
Jan 20 14:37:36 compute-1 nova_compute[225855]:   <uuid>98e22622-b8b8-44a5-befe-1bd745f9c946</uuid>
Jan 20 14:37:36 compute-1 nova_compute[225855]:   <name>instance-0000003a</name>
Jan 20 14:37:36 compute-1 nova_compute[225855]:   <memory>131072</memory>
Jan 20 14:37:36 compute-1 nova_compute[225855]:   <vcpu>1</vcpu>
Jan 20 14:37:36 compute-1 nova_compute[225855]:   <metadata>
Jan 20 14:37:36 compute-1 nova_compute[225855]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 14:37:36 compute-1 nova_compute[225855]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 14:37:36 compute-1 nova_compute[225855]:       <nova:name>tempest-ImagesOneServerNegativeTestJSON-server-127847800</nova:name>
Jan 20 14:37:36 compute-1 nova_compute[225855]:       <nova:creationTime>2026-01-20 14:37:35</nova:creationTime>
Jan 20 14:37:36 compute-1 nova_compute[225855]:       <nova:flavor name="m1.nano">
Jan 20 14:37:36 compute-1 nova_compute[225855]:         <nova:memory>128</nova:memory>
Jan 20 14:37:36 compute-1 nova_compute[225855]:         <nova:disk>1</nova:disk>
Jan 20 14:37:36 compute-1 nova_compute[225855]:         <nova:swap>0</nova:swap>
Jan 20 14:37:36 compute-1 nova_compute[225855]:         <nova:ephemeral>0</nova:ephemeral>
Jan 20 14:37:36 compute-1 nova_compute[225855]:         <nova:vcpus>1</nova:vcpus>
Jan 20 14:37:36 compute-1 nova_compute[225855]:       </nova:flavor>
Jan 20 14:37:36 compute-1 nova_compute[225855]:       <nova:owner>
Jan 20 14:37:36 compute-1 nova_compute[225855]:         <nova:user uuid="592a0204f38a4596ab1ab81774214a6d">tempest-ImagesOneServerNegativeTestJSON-866315696-project-member</nova:user>
Jan 20 14:37:36 compute-1 nova_compute[225855]:         <nova:project uuid="7d78990d13704d629a8a3e8910d005c5">tempest-ImagesOneServerNegativeTestJSON-866315696</nova:project>
Jan 20 14:37:36 compute-1 nova_compute[225855]:       </nova:owner>
Jan 20 14:37:36 compute-1 nova_compute[225855]:       <nova:root type="image" uuid="a32b3e07-16d8-46fd-9a7b-c242c432fcf9"/>
Jan 20 14:37:36 compute-1 nova_compute[225855]:       <nova:ports>
Jan 20 14:37:36 compute-1 nova_compute[225855]:         <nova:port uuid="25ba0729-4796-48e4-9b7a-6c0716d26545">
Jan 20 14:37:36 compute-1 nova_compute[225855]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Jan 20 14:37:36 compute-1 nova_compute[225855]:         </nova:port>
Jan 20 14:37:36 compute-1 nova_compute[225855]:       </nova:ports>
Jan 20 14:37:36 compute-1 nova_compute[225855]:     </nova:instance>
Jan 20 14:37:36 compute-1 nova_compute[225855]:   </metadata>
Jan 20 14:37:36 compute-1 nova_compute[225855]:   <sysinfo type="smbios">
Jan 20 14:37:36 compute-1 nova_compute[225855]:     <system>
Jan 20 14:37:36 compute-1 nova_compute[225855]:       <entry name="manufacturer">RDO</entry>
Jan 20 14:37:36 compute-1 nova_compute[225855]:       <entry name="product">OpenStack Compute</entry>
Jan 20 14:37:36 compute-1 nova_compute[225855]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 14:37:36 compute-1 nova_compute[225855]:       <entry name="serial">98e22622-b8b8-44a5-befe-1bd745f9c946</entry>
Jan 20 14:37:36 compute-1 nova_compute[225855]:       <entry name="uuid">98e22622-b8b8-44a5-befe-1bd745f9c946</entry>
Jan 20 14:37:36 compute-1 nova_compute[225855]:       <entry name="family">Virtual Machine</entry>
Jan 20 14:37:36 compute-1 nova_compute[225855]:     </system>
Jan 20 14:37:36 compute-1 nova_compute[225855]:   </sysinfo>
Jan 20 14:37:36 compute-1 nova_compute[225855]:   <os>
Jan 20 14:37:36 compute-1 nova_compute[225855]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 20 14:37:36 compute-1 nova_compute[225855]:     <boot dev="hd"/>
Jan 20 14:37:36 compute-1 nova_compute[225855]:     <smbios mode="sysinfo"/>
Jan 20 14:37:36 compute-1 nova_compute[225855]:   </os>
Jan 20 14:37:36 compute-1 nova_compute[225855]:   <features>
Jan 20 14:37:36 compute-1 nova_compute[225855]:     <acpi/>
Jan 20 14:37:36 compute-1 nova_compute[225855]:     <apic/>
Jan 20 14:37:36 compute-1 nova_compute[225855]:     <vmcoreinfo/>
Jan 20 14:37:36 compute-1 nova_compute[225855]:   </features>
Jan 20 14:37:36 compute-1 nova_compute[225855]:   <clock offset="utc">
Jan 20 14:37:36 compute-1 nova_compute[225855]:     <timer name="pit" tickpolicy="delay"/>
Jan 20 14:37:36 compute-1 nova_compute[225855]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 20 14:37:36 compute-1 nova_compute[225855]:     <timer name="hpet" present="no"/>
Jan 20 14:37:36 compute-1 nova_compute[225855]:   </clock>
Jan 20 14:37:36 compute-1 nova_compute[225855]:   <cpu mode="custom" match="exact">
Jan 20 14:37:36 compute-1 nova_compute[225855]:     <model>Nehalem</model>
Jan 20 14:37:36 compute-1 nova_compute[225855]:     <topology sockets="1" cores="1" threads="1"/>
Jan 20 14:37:36 compute-1 nova_compute[225855]:   </cpu>
Jan 20 14:37:36 compute-1 nova_compute[225855]:   <devices>
Jan 20 14:37:36 compute-1 nova_compute[225855]:     <disk type="network" device="disk">
Jan 20 14:37:36 compute-1 nova_compute[225855]:       <driver type="raw" cache="none"/>
Jan 20 14:37:36 compute-1 nova_compute[225855]:       <source protocol="rbd" name="vms/98e22622-b8b8-44a5-befe-1bd745f9c946_disk">
Jan 20 14:37:36 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:37:36 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:37:36 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:37:36 compute-1 nova_compute[225855]:       </source>
Jan 20 14:37:36 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 14:37:36 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:37:36 compute-1 nova_compute[225855]:       </auth>
Jan 20 14:37:36 compute-1 nova_compute[225855]:       <target dev="vda" bus="virtio"/>
Jan 20 14:37:36 compute-1 nova_compute[225855]:     </disk>
Jan 20 14:37:36 compute-1 nova_compute[225855]:     <disk type="network" device="cdrom">
Jan 20 14:37:36 compute-1 nova_compute[225855]:       <driver type="raw" cache="none"/>
Jan 20 14:37:36 compute-1 nova_compute[225855]:       <source protocol="rbd" name="vms/98e22622-b8b8-44a5-befe-1bd745f9c946_disk.config">
Jan 20 14:37:36 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:37:36 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:37:36 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:37:36 compute-1 nova_compute[225855]:       </source>
Jan 20 14:37:36 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 14:37:36 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:37:36 compute-1 nova_compute[225855]:       </auth>
Jan 20 14:37:36 compute-1 nova_compute[225855]:       <target dev="sda" bus="sata"/>
Jan 20 14:37:36 compute-1 nova_compute[225855]:     </disk>
Jan 20 14:37:36 compute-1 nova_compute[225855]:     <interface type="ethernet">
Jan 20 14:37:36 compute-1 nova_compute[225855]:       <mac address="fa:16:3e:23:1a:21"/>
Jan 20 14:37:36 compute-1 nova_compute[225855]:       <model type="virtio"/>
Jan 20 14:37:36 compute-1 nova_compute[225855]:       <driver name="vhost" rx_queue_size="512"/>
Jan 20 14:37:36 compute-1 nova_compute[225855]:       <mtu size="1442"/>
Jan 20 14:37:36 compute-1 nova_compute[225855]:       <target dev="tap25ba0729-47"/>
Jan 20 14:37:36 compute-1 nova_compute[225855]:     </interface>
Jan 20 14:37:36 compute-1 nova_compute[225855]:     <serial type="pty">
Jan 20 14:37:36 compute-1 nova_compute[225855]:       <log file="/var/lib/nova/instances/98e22622-b8b8-44a5-befe-1bd745f9c946/console.log" append="off"/>
Jan 20 14:37:36 compute-1 nova_compute[225855]:     </serial>
Jan 20 14:37:36 compute-1 nova_compute[225855]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 20 14:37:36 compute-1 nova_compute[225855]:     <video>
Jan 20 14:37:36 compute-1 nova_compute[225855]:       <model type="virtio"/>
Jan 20 14:37:36 compute-1 nova_compute[225855]:     </video>
Jan 20 14:37:36 compute-1 nova_compute[225855]:     <input type="tablet" bus="usb"/>
Jan 20 14:37:36 compute-1 nova_compute[225855]:     <rng model="virtio">
Jan 20 14:37:36 compute-1 nova_compute[225855]:       <backend model="random">/dev/urandom</backend>
Jan 20 14:37:36 compute-1 nova_compute[225855]:     </rng>
Jan 20 14:37:36 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root"/>
Jan 20 14:37:36 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:37:36 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:37:36 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:37:36 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:37:36 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:37:36 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:37:36 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:37:36 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:37:36 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:37:36 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:37:36 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:37:36 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:37:36 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:37:36 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:37:36 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:37:36 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:37:36 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:37:36 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:37:36 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:37:36 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:37:36 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:37:36 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:37:36 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:37:36 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:37:36 compute-1 nova_compute[225855]:     <controller type="usb" index="0"/>
Jan 20 14:37:36 compute-1 nova_compute[225855]:     <memballoon model="virtio">
Jan 20 14:37:36 compute-1 nova_compute[225855]:       <stats period="10"/>
Jan 20 14:37:36 compute-1 nova_compute[225855]:     </memballoon>
Jan 20 14:37:36 compute-1 nova_compute[225855]:   </devices>
Jan 20 14:37:36 compute-1 nova_compute[225855]: </domain>
Jan 20 14:37:36 compute-1 nova_compute[225855]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 20 14:37:36 compute-1 nova_compute[225855]: 2026-01-20 14:37:36.567 225859 DEBUG nova.compute.manager [None req-59c3a9a4-7848-4b64-b142-2499305cdad5 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] [instance: 98e22622-b8b8-44a5-befe-1bd745f9c946] Preparing to wait for external event network-vif-plugged-25ba0729-4796-48e4-9b7a-6c0716d26545 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 20 14:37:36 compute-1 nova_compute[225855]: 2026-01-20 14:37:36.568 225859 DEBUG oslo_concurrency.lockutils [None req-59c3a9a4-7848-4b64-b142-2499305cdad5 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] Acquiring lock "98e22622-b8b8-44a5-befe-1bd745f9c946-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:37:36 compute-1 nova_compute[225855]: 2026-01-20 14:37:36.568 225859 DEBUG oslo_concurrency.lockutils [None req-59c3a9a4-7848-4b64-b142-2499305cdad5 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] Lock "98e22622-b8b8-44a5-befe-1bd745f9c946-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:37:36 compute-1 nova_compute[225855]: 2026-01-20 14:37:36.568 225859 DEBUG oslo_concurrency.lockutils [None req-59c3a9a4-7848-4b64-b142-2499305cdad5 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] Lock "98e22622-b8b8-44a5-befe-1bd745f9c946-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:37:36 compute-1 nova_compute[225855]: 2026-01-20 14:37:36.569 225859 DEBUG nova.virt.libvirt.vif [None req-59c3a9a4-7848-4b64-b142-2499305cdad5 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T14:37:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-127847800',display_name='tempest-ImagesOneServerNegativeTestJSON-server-127847800',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-127847800',id=58,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='7d78990d13704d629a8a3e8910d005c5',ramdisk_id='',reservation_id='r-5l39qm5d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-866315696',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-866315696-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T14:37:30Z,user_data=None,user_id='592a0204f38a4596ab1ab81774214a6d',uuid=98e22622-b8b8-44a5-befe-1bd745f9c946,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "25ba0729-4796-48e4-9b7a-6c0716d26545", "address": "fa:16:3e:23:1a:21", "network": {"id": "b1f372f9-fbd1-4ef7-9be7-ace7ce14bb23", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-462971735-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d78990d13704d629a8a3e8910d005c5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25ba0729-47", "ovs_interfaceid": "25ba0729-4796-48e4-9b7a-6c0716d26545", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 20 14:37:36 compute-1 nova_compute[225855]: 2026-01-20 14:37:36.569 225859 DEBUG nova.network.os_vif_util [None req-59c3a9a4-7848-4b64-b142-2499305cdad5 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] Converting VIF {"id": "25ba0729-4796-48e4-9b7a-6c0716d26545", "address": "fa:16:3e:23:1a:21", "network": {"id": "b1f372f9-fbd1-4ef7-9be7-ace7ce14bb23", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-462971735-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d78990d13704d629a8a3e8910d005c5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25ba0729-47", "ovs_interfaceid": "25ba0729-4796-48e4-9b7a-6c0716d26545", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:37:36 compute-1 nova_compute[225855]: 2026-01-20 14:37:36.569 225859 DEBUG nova.network.os_vif_util [None req-59c3a9a4-7848-4b64-b142-2499305cdad5 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:23:1a:21,bridge_name='br-int',has_traffic_filtering=True,id=25ba0729-4796-48e4-9b7a-6c0716d26545,network=Network(b1f372f9-fbd1-4ef7-9be7-ace7ce14bb23),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap25ba0729-47') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:37:36 compute-1 nova_compute[225855]: 2026-01-20 14:37:36.570 225859 DEBUG os_vif [None req-59c3a9a4-7848-4b64-b142-2499305cdad5 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:23:1a:21,bridge_name='br-int',has_traffic_filtering=True,id=25ba0729-4796-48e4-9b7a-6c0716d26545,network=Network(b1f372f9-fbd1-4ef7-9be7-ace7ce14bb23),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap25ba0729-47') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 20 14:37:36 compute-1 nova_compute[225855]: 2026-01-20 14:37:36.570 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:37:36 compute-1 nova_compute[225855]: 2026-01-20 14:37:36.571 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:37:36 compute-1 nova_compute[225855]: 2026-01-20 14:37:36.571 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:37:36 compute-1 nova_compute[225855]: 2026-01-20 14:37:36.574 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:37:36 compute-1 nova_compute[225855]: 2026-01-20 14:37:36.574 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap25ba0729-47, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:37:36 compute-1 nova_compute[225855]: 2026-01-20 14:37:36.575 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap25ba0729-47, col_values=(('external_ids', {'iface-id': '25ba0729-4796-48e4-9b7a-6c0716d26545', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:23:1a:21', 'vm-uuid': '98e22622-b8b8-44a5-befe-1bd745f9c946'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:37:36 compute-1 nova_compute[225855]: 2026-01-20 14:37:36.576 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:37:36 compute-1 NetworkManager[49104]: <info>  [1768919856.5772] manager: (tap25ba0729-47): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/76)
Jan 20 14:37:36 compute-1 nova_compute[225855]: 2026-01-20 14:37:36.578 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 14:37:36 compute-1 nova_compute[225855]: 2026-01-20 14:37:36.582 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:37:36 compute-1 nova_compute[225855]: 2026-01-20 14:37:36.582 225859 INFO os_vif [None req-59c3a9a4-7848-4b64-b142-2499305cdad5 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:23:1a:21,bridge_name='br-int',has_traffic_filtering=True,id=25ba0729-4796-48e4-9b7a-6c0716d26545,network=Network(b1f372f9-fbd1-4ef7-9be7-ace7ce14bb23),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap25ba0729-47')
Jan 20 14:37:36 compute-1 nova_compute[225855]: 2026-01-20 14:37:36.623 225859 DEBUG nova.virt.libvirt.driver [None req-59c3a9a4-7848-4b64-b142-2499305cdad5 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:37:36 compute-1 nova_compute[225855]: 2026-01-20 14:37:36.624 225859 DEBUG nova.virt.libvirt.driver [None req-59c3a9a4-7848-4b64-b142-2499305cdad5 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:37:36 compute-1 nova_compute[225855]: 2026-01-20 14:37:36.624 225859 DEBUG nova.virt.libvirt.driver [None req-59c3a9a4-7848-4b64-b142-2499305cdad5 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] No VIF found with MAC fa:16:3e:23:1a:21, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 20 14:37:36 compute-1 nova_compute[225855]: 2026-01-20 14:37:36.624 225859 INFO nova.virt.libvirt.driver [None req-59c3a9a4-7848-4b64-b142-2499305cdad5 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] [instance: 98e22622-b8b8-44a5-befe-1bd745f9c946] Using config drive
Jan 20 14:37:36 compute-1 nova_compute[225855]: 2026-01-20 14:37:36.651 225859 DEBUG nova.storage.rbd_utils [None req-59c3a9a4-7848-4b64-b142-2499305cdad5 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] rbd image 98e22622-b8b8-44a5-befe-1bd745f9c946_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:37:36 compute-1 ceph-mon[81775]: pgmap v1476: 321 pgs: 3 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 316 active+clean; 133 MiB data, 610 MiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 5.3 MiB/s wr, 178 op/s
Jan 20 14:37:36 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/160214948' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:37:36 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/2472477865' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:37:36 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/3403050555' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:37:36 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:37:36 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.002000057s ======
Jan 20 14:37:36 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:37:36.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000057s
Jan 20 14:37:37 compute-1 nova_compute[225855]: 2026-01-20 14:37:37.176 225859 INFO nova.virt.libvirt.driver [None req-59c3a9a4-7848-4b64-b142-2499305cdad5 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] [instance: 98e22622-b8b8-44a5-befe-1bd745f9c946] Creating config drive at /var/lib/nova/instances/98e22622-b8b8-44a5-befe-1bd745f9c946/disk.config
Jan 20 14:37:37 compute-1 nova_compute[225855]: 2026-01-20 14:37:37.180 225859 DEBUG oslo_concurrency.processutils [None req-59c3a9a4-7848-4b64-b142-2499305cdad5 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/98e22622-b8b8-44a5-befe-1bd745f9c946/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp1zyaq7rr execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:37:37 compute-1 nova_compute[225855]: 2026-01-20 14:37:37.328 225859 DEBUG oslo_concurrency.processutils [None req-59c3a9a4-7848-4b64-b142-2499305cdad5 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/98e22622-b8b8-44a5-befe-1bd745f9c946/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp1zyaq7rr" returned: 0 in 0.148s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:37:37 compute-1 nova_compute[225855]: 2026-01-20 14:37:37.362 225859 DEBUG nova.storage.rbd_utils [None req-59c3a9a4-7848-4b64-b142-2499305cdad5 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] rbd image 98e22622-b8b8-44a5-befe-1bd745f9c946_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:37:37 compute-1 nova_compute[225855]: 2026-01-20 14:37:37.367 225859 DEBUG oslo_concurrency.processutils [None req-59c3a9a4-7848-4b64-b142-2499305cdad5 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/98e22622-b8b8-44a5-befe-1bd745f9c946/disk.config 98e22622-b8b8-44a5-befe-1bd745f9c946_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:37:37 compute-1 nova_compute[225855]: 2026-01-20 14:37:37.432 225859 DEBUG nova.network.neutron [req-38e57a0d-2523-429c-8006-28c6843ec8f2 req-2f3b7776-5a31-4a72-ab50-2a44f88596ca 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 98e22622-b8b8-44a5-befe-1bd745f9c946] Updated VIF entry in instance network info cache for port 25ba0729-4796-48e4-9b7a-6c0716d26545. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 14:37:37 compute-1 nova_compute[225855]: 2026-01-20 14:37:37.433 225859 DEBUG nova.network.neutron [req-38e57a0d-2523-429c-8006-28c6843ec8f2 req-2f3b7776-5a31-4a72-ab50-2a44f88596ca 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 98e22622-b8b8-44a5-befe-1bd745f9c946] Updating instance_info_cache with network_info: [{"id": "25ba0729-4796-48e4-9b7a-6c0716d26545", "address": "fa:16:3e:23:1a:21", "network": {"id": "b1f372f9-fbd1-4ef7-9be7-ace7ce14bb23", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-462971735-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d78990d13704d629a8a3e8910d005c5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25ba0729-47", "ovs_interfaceid": "25ba0729-4796-48e4-9b7a-6c0716d26545", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:37:37 compute-1 nova_compute[225855]: 2026-01-20 14:37:37.452 225859 DEBUG oslo_concurrency.lockutils [req-38e57a0d-2523-429c-8006-28c6843ec8f2 req-2f3b7776-5a31-4a72-ab50-2a44f88596ca 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-98e22622-b8b8-44a5-befe-1bd745f9c946" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:37:37 compute-1 nova_compute[225855]: 2026-01-20 14:37:37.624 225859 DEBUG oslo_concurrency.processutils [None req-59c3a9a4-7848-4b64-b142-2499305cdad5 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/98e22622-b8b8-44a5-befe-1bd745f9c946/disk.config 98e22622-b8b8-44a5-befe-1bd745f9c946_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.257s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:37:37 compute-1 nova_compute[225855]: 2026-01-20 14:37:37.624 225859 INFO nova.virt.libvirt.driver [None req-59c3a9a4-7848-4b64-b142-2499305cdad5 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] [instance: 98e22622-b8b8-44a5-befe-1bd745f9c946] Deleting local config drive /var/lib/nova/instances/98e22622-b8b8-44a5-befe-1bd745f9c946/disk.config because it was imported into RBD.
Jan 20 14:37:37 compute-1 kernel: tap25ba0729-47: entered promiscuous mode
Jan 20 14:37:37 compute-1 nova_compute[225855]: 2026-01-20 14:37:37.685 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:37:37 compute-1 NetworkManager[49104]: <info>  [1768919857.6877] manager: (tap25ba0729-47): new Tun device (/org/freedesktop/NetworkManager/Devices/77)
Jan 20 14:37:37 compute-1 ovn_controller[130490]: 2026-01-20T14:37:37Z|00175|binding|INFO|Claiming lport 25ba0729-4796-48e4-9b7a-6c0716d26545 for this chassis.
Jan 20 14:37:37 compute-1 ovn_controller[130490]: 2026-01-20T14:37:37Z|00176|binding|INFO|25ba0729-4796-48e4-9b7a-6c0716d26545: Claiming fa:16:3e:23:1a:21 10.100.0.11
Jan 20 14:37:37 compute-1 nova_compute[225855]: 2026-01-20 14:37:37.691 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:37:37 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:37:37.697 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:23:1a:21 10.100.0.11'], port_security=['fa:16:3e:23:1a:21 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '98e22622-b8b8-44a5-befe-1bd745f9c946', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b1f372f9-fbd1-4ef7-9be7-ace7ce14bb23', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7d78990d13704d629a8a3e8910d005c5', 'neutron:revision_number': '2', 'neutron:security_group_ids': '3763ece7-c739-40ca-8e07-6dde1584ba85', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a613141e-df34-49c4-9712-c3d232327d6b, chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], logical_port=25ba0729-4796-48e4-9b7a-6c0716d26545) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:37:37 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:37:37.698 140354 INFO neutron.agent.ovn.metadata.agent [-] Port 25ba0729-4796-48e4-9b7a-6c0716d26545 in datapath b1f372f9-fbd1-4ef7-9be7-ace7ce14bb23 bound to our chassis
Jan 20 14:37:37 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:37:37.700 140354 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b1f372f9-fbd1-4ef7-9be7-ace7ce14bb23
Jan 20 14:37:37 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:37:37.713 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[1de23175-9516-4af4-adb4-f0f726f43e66]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:37:37 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:37:37.714 140354 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapb1f372f9-f1 in ovnmeta-b1f372f9-fbd1-4ef7-9be7-ace7ce14bb23 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 20 14:37:37 compute-1 systemd-machined[194361]: New machine qemu-27-instance-0000003a.
Jan 20 14:37:37 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:37:37.716 229707 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapb1f372f9-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 20 14:37:37 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:37:37.716 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[a0e9d187-d127-4f37-b064-a15c2d1b163a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:37:37 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:37:37.717 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[eb0977e0-c27e-4bf3-8e46-8e6bcdc5f8bf]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:37:37 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:37:37 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:37:37 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:37:37.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:37:37 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:37:37.734 140466 DEBUG oslo.privsep.daemon [-] privsep: reply[1e90133e-ee08-4cff-bca8-6f1952d369ee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:37:37 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:37:37.757 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[18d913dd-867f-4190-adb6-0b3d46834648]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:37:37 compute-1 systemd[1]: Started Virtual Machine qemu-27-instance-0000003a.
Jan 20 14:37:37 compute-1 nova_compute[225855]: 2026-01-20 14:37:37.790 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:37:37 compute-1 ovn_controller[130490]: 2026-01-20T14:37:37Z|00177|binding|INFO|Setting lport 25ba0729-4796-48e4-9b7a-6c0716d26545 ovn-installed in OVS
Jan 20 14:37:37 compute-1 ovn_controller[130490]: 2026-01-20T14:37:37Z|00178|binding|INFO|Setting lport 25ba0729-4796-48e4-9b7a-6c0716d26545 up in Southbound
Jan 20 14:37:37 compute-1 nova_compute[225855]: 2026-01-20 14:37:37.798 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:37:37 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:37:37.802 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[727c94d8-09ff-4286-8b27-c1debbae1728]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:37:37 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:37:37.806 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[d5bf822d-2f62-449e-a813-3bec8b04ac1b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:37:37 compute-1 NetworkManager[49104]: <info>  [1768919857.8085] manager: (tapb1f372f9-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/78)
Jan 20 14:37:37 compute-1 systemd-udevd[249742]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 14:37:37 compute-1 systemd-udevd[249743]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 14:37:37 compute-1 NetworkManager[49104]: <info>  [1768919857.8285] device (tap25ba0729-47): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 20 14:37:37 compute-1 NetworkManager[49104]: <info>  [1768919857.8294] device (tap25ba0729-47): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 20 14:37:37 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:37:37.840 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[1c1aff8a-12b7-4970-a598-71710c1fc827]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:37:37 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:37:37.844 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[f3310ea2-f7f2-4bcc-b029-f963ca566261]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:37:37 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e232 e232: 3 total, 3 up, 3 in
Jan 20 14:37:37 compute-1 NetworkManager[49104]: <info>  [1768919857.8805] device (tapb1f372f9-f0): carrier: link connected
Jan 20 14:37:37 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:37:37.888 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[e0772fcb-cf5c-4c9b-a75d-c34cada52911]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:37:37 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:37:37.906 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[2d73acf6-f5f7-49d0-8391-fa4622203461]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb1f372f9-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:52:d0:c5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 49], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 489680, 'reachable_time': 22290, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 249769, 'error': None, 'target': 'ovnmeta-b1f372f9-fbd1-4ef7-9be7-ace7ce14bb23', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:37:37 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:37:37.923 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[f0615591-7411-40c5-a82e-220d4760102e]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe52:d0c5'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 489680, 'tstamp': 489680}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 249771, 'error': None, 'target': 'ovnmeta-b1f372f9-fbd1-4ef7-9be7-ace7ce14bb23', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:37:37 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:37:37.945 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[cc732bf7-09f4-4ed9-a123-d2283da7ebfb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb1f372f9-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:52:d0:c5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 49], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 489680, 'reachable_time': 22290, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 249772, 'error': None, 'target': 'ovnmeta-b1f372f9-fbd1-4ef7-9be7-ace7ce14bb23', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:37:37 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:37:37.980 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[db0dfd15-56a9-4396-990a-be9daca7154e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:37:38 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:37:38.040 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[ac919d1e-1d9e-4935-b6d0-16b723386a36]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:37:38 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:37:38.041 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb1f372f9-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:37:38 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:37:38.041 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:37:38 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:37:38.042 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb1f372f9-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:37:38 compute-1 nova_compute[225855]: 2026-01-20 14:37:38.044 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:37:38 compute-1 NetworkManager[49104]: <info>  [1768919858.0448] manager: (tapb1f372f9-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/79)
Jan 20 14:37:38 compute-1 kernel: tapb1f372f9-f0: entered promiscuous mode
Jan 20 14:37:38 compute-1 nova_compute[225855]: 2026-01-20 14:37:38.050 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:37:38 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:37:38.051 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb1f372f9-f0, col_values=(('external_ids', {'iface-id': 'f0137d70-4bff-4646-9f70-7e0c82ac1e88'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:37:38 compute-1 nova_compute[225855]: 2026-01-20 14:37:38.054 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:37:38 compute-1 ovn_controller[130490]: 2026-01-20T14:37:38Z|00179|binding|INFO|Releasing lport f0137d70-4bff-4646-9f70-7e0c82ac1e88 from this chassis (sb_readonly=0)
Jan 20 14:37:38 compute-1 nova_compute[225855]: 2026-01-20 14:37:38.075 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:37:38 compute-1 nova_compute[225855]: 2026-01-20 14:37:38.076 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:37:38 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:37:38.078 140354 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/b1f372f9-fbd1-4ef7-9be7-ace7ce14bb23.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/b1f372f9-fbd1-4ef7-9be7-ace7ce14bb23.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 20 14:37:38 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:37:38.079 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[35c2221a-b970-4061-a6c1-94afdca79965]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:37:38 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:37:38.079 140354 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 20 14:37:38 compute-1 ovn_metadata_agent[140349]: global
Jan 20 14:37:38 compute-1 ovn_metadata_agent[140349]:     log         /dev/log local0 debug
Jan 20 14:37:38 compute-1 ovn_metadata_agent[140349]:     log-tag     haproxy-metadata-proxy-b1f372f9-fbd1-4ef7-9be7-ace7ce14bb23
Jan 20 14:37:38 compute-1 ovn_metadata_agent[140349]:     user        root
Jan 20 14:37:38 compute-1 ovn_metadata_agent[140349]:     group       root
Jan 20 14:37:38 compute-1 ovn_metadata_agent[140349]:     maxconn     1024
Jan 20 14:37:38 compute-1 ovn_metadata_agent[140349]:     pidfile     /var/lib/neutron/external/pids/b1f372f9-fbd1-4ef7-9be7-ace7ce14bb23.pid.haproxy
Jan 20 14:37:38 compute-1 ovn_metadata_agent[140349]:     daemon
Jan 20 14:37:38 compute-1 ovn_metadata_agent[140349]: 
Jan 20 14:37:38 compute-1 ovn_metadata_agent[140349]: defaults
Jan 20 14:37:38 compute-1 ovn_metadata_agent[140349]:     log global
Jan 20 14:37:38 compute-1 ovn_metadata_agent[140349]:     mode http
Jan 20 14:37:38 compute-1 ovn_metadata_agent[140349]:     option httplog
Jan 20 14:37:38 compute-1 ovn_metadata_agent[140349]:     option dontlognull
Jan 20 14:37:38 compute-1 ovn_metadata_agent[140349]:     option http-server-close
Jan 20 14:37:38 compute-1 ovn_metadata_agent[140349]:     option forwardfor
Jan 20 14:37:38 compute-1 ovn_metadata_agent[140349]:     retries                 3
Jan 20 14:37:38 compute-1 ovn_metadata_agent[140349]:     timeout http-request    30s
Jan 20 14:37:38 compute-1 ovn_metadata_agent[140349]:     timeout connect         30s
Jan 20 14:37:38 compute-1 ovn_metadata_agent[140349]:     timeout client          32s
Jan 20 14:37:38 compute-1 ovn_metadata_agent[140349]:     timeout server          32s
Jan 20 14:37:38 compute-1 ovn_metadata_agent[140349]:     timeout http-keep-alive 30s
Jan 20 14:37:38 compute-1 ovn_metadata_agent[140349]: 
Jan 20 14:37:38 compute-1 ovn_metadata_agent[140349]: 
Jan 20 14:37:38 compute-1 ovn_metadata_agent[140349]: listen listener
Jan 20 14:37:38 compute-1 ovn_metadata_agent[140349]:     bind 169.254.169.254:80
Jan 20 14:37:38 compute-1 ovn_metadata_agent[140349]:     server metadata /var/lib/neutron/metadata_proxy
Jan 20 14:37:38 compute-1 ovn_metadata_agent[140349]:     http-request add-header X-OVN-Network-ID b1f372f9-fbd1-4ef7-9be7-ace7ce14bb23
Jan 20 14:37:38 compute-1 ovn_metadata_agent[140349]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 20 14:37:38 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:37:38.080 140354 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-b1f372f9-fbd1-4ef7-9be7-ace7ce14bb23', 'env', 'PROCESS_TAG=haproxy-b1f372f9-fbd1-4ef7-9be7-ace7ce14bb23', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/b1f372f9-fbd1-4ef7-9be7-ace7ce14bb23.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 20 14:37:38 compute-1 nova_compute[225855]: 2026-01-20 14:37:38.431 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768919858.4316707, 98e22622-b8b8-44a5-befe-1bd745f9c946 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:37:38 compute-1 nova_compute[225855]: 2026-01-20 14:37:38.432 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 98e22622-b8b8-44a5-befe-1bd745f9c946] VM Started (Lifecycle Event)
Jan 20 14:37:38 compute-1 podman[249845]: 2026-01-20 14:37:38.450493335 +0000 UTC m=+0.053788989 container create 184b158e09711f91160511cc9d1e573ae340b527e39ab76701e8a51b9a334aa0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b1f372f9-fbd1-4ef7-9be7-ace7ce14bb23, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 20 14:37:38 compute-1 systemd[1]: Started libpod-conmon-184b158e09711f91160511cc9d1e573ae340b527e39ab76701e8a51b9a334aa0.scope.
Jan 20 14:37:38 compute-1 nova_compute[225855]: 2026-01-20 14:37:38.493 225859 DEBUG nova.compute.manager [req-d5282684-34f6-48d7-b01a-d8c5462da80b req-a46a1206-fc7d-491e-9271-5c4fc2ce1f01 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 98e22622-b8b8-44a5-befe-1bd745f9c946] Received event network-vif-plugged-25ba0729-4796-48e4-9b7a-6c0716d26545 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:37:38 compute-1 nova_compute[225855]: 2026-01-20 14:37:38.494 225859 DEBUG oslo_concurrency.lockutils [req-d5282684-34f6-48d7-b01a-d8c5462da80b req-a46a1206-fc7d-491e-9271-5c4fc2ce1f01 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "98e22622-b8b8-44a5-befe-1bd745f9c946-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:37:38 compute-1 nova_compute[225855]: 2026-01-20 14:37:38.494 225859 DEBUG oslo_concurrency.lockutils [req-d5282684-34f6-48d7-b01a-d8c5462da80b req-a46a1206-fc7d-491e-9271-5c4fc2ce1f01 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "98e22622-b8b8-44a5-befe-1bd745f9c946-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:37:38 compute-1 nova_compute[225855]: 2026-01-20 14:37:38.495 225859 DEBUG oslo_concurrency.lockutils [req-d5282684-34f6-48d7-b01a-d8c5462da80b req-a46a1206-fc7d-491e-9271-5c4fc2ce1f01 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "98e22622-b8b8-44a5-befe-1bd745f9c946-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:37:38 compute-1 nova_compute[225855]: 2026-01-20 14:37:38.495 225859 DEBUG nova.compute.manager [req-d5282684-34f6-48d7-b01a-d8c5462da80b req-a46a1206-fc7d-491e-9271-5c4fc2ce1f01 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 98e22622-b8b8-44a5-befe-1bd745f9c946] Processing event network-vif-plugged-25ba0729-4796-48e4-9b7a-6c0716d26545 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 20 14:37:38 compute-1 nova_compute[225855]: 2026-01-20 14:37:38.495 225859 DEBUG nova.compute.manager [None req-59c3a9a4-7848-4b64-b142-2499305cdad5 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] [instance: 98e22622-b8b8-44a5-befe-1bd745f9c946] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 20 14:37:38 compute-1 nova_compute[225855]: 2026-01-20 14:37:38.500 225859 DEBUG nova.virt.libvirt.driver [None req-59c3a9a4-7848-4b64-b142-2499305cdad5 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] [instance: 98e22622-b8b8-44a5-befe-1bd745f9c946] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 20 14:37:38 compute-1 nova_compute[225855]: 2026-01-20 14:37:38.502 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 98e22622-b8b8-44a5-befe-1bd745f9c946] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:37:38 compute-1 nova_compute[225855]: 2026-01-20 14:37:38.506 225859 INFO nova.virt.libvirt.driver [-] [instance: 98e22622-b8b8-44a5-befe-1bd745f9c946] Instance spawned successfully.
Jan 20 14:37:38 compute-1 nova_compute[225855]: 2026-01-20 14:37:38.506 225859 DEBUG nova.virt.libvirt.driver [None req-59c3a9a4-7848-4b64-b142-2499305cdad5 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] [instance: 98e22622-b8b8-44a5-befe-1bd745f9c946] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 20 14:37:38 compute-1 nova_compute[225855]: 2026-01-20 14:37:38.510 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 98e22622-b8b8-44a5-befe-1bd745f9c946] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:37:38 compute-1 podman[249845]: 2026-01-20 14:37:38.422142395 +0000 UTC m=+0.025438069 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 20 14:37:38 compute-1 systemd[1]: Started libcrun container.
Jan 20 14:37:38 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2be3ebbad909f05951835ad49a3aeee9e1168ceb25c4d16b8266150b05185476/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 20 14:37:38 compute-1 nova_compute[225855]: 2026-01-20 14:37:38.530 225859 DEBUG nova.virt.libvirt.driver [None req-59c3a9a4-7848-4b64-b142-2499305cdad5 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] [instance: 98e22622-b8b8-44a5-befe-1bd745f9c946] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:37:38 compute-1 nova_compute[225855]: 2026-01-20 14:37:38.531 225859 DEBUG nova.virt.libvirt.driver [None req-59c3a9a4-7848-4b64-b142-2499305cdad5 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] [instance: 98e22622-b8b8-44a5-befe-1bd745f9c946] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:37:38 compute-1 nova_compute[225855]: 2026-01-20 14:37:38.531 225859 DEBUG nova.virt.libvirt.driver [None req-59c3a9a4-7848-4b64-b142-2499305cdad5 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] [instance: 98e22622-b8b8-44a5-befe-1bd745f9c946] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:37:38 compute-1 nova_compute[225855]: 2026-01-20 14:37:38.532 225859 DEBUG nova.virt.libvirt.driver [None req-59c3a9a4-7848-4b64-b142-2499305cdad5 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] [instance: 98e22622-b8b8-44a5-befe-1bd745f9c946] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:37:38 compute-1 nova_compute[225855]: 2026-01-20 14:37:38.532 225859 DEBUG nova.virt.libvirt.driver [None req-59c3a9a4-7848-4b64-b142-2499305cdad5 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] [instance: 98e22622-b8b8-44a5-befe-1bd745f9c946] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:37:38 compute-1 nova_compute[225855]: 2026-01-20 14:37:38.533 225859 DEBUG nova.virt.libvirt.driver [None req-59c3a9a4-7848-4b64-b142-2499305cdad5 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] [instance: 98e22622-b8b8-44a5-befe-1bd745f9c946] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:37:38 compute-1 nova_compute[225855]: 2026-01-20 14:37:38.536 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 98e22622-b8b8-44a5-befe-1bd745f9c946] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 14:37:38 compute-1 nova_compute[225855]: 2026-01-20 14:37:38.536 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768919858.4323697, 98e22622-b8b8-44a5-befe-1bd745f9c946 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:37:38 compute-1 nova_compute[225855]: 2026-01-20 14:37:38.536 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 98e22622-b8b8-44a5-befe-1bd745f9c946] VM Paused (Lifecycle Event)
Jan 20 14:37:38 compute-1 podman[249845]: 2026-01-20 14:37:38.539081256 +0000 UTC m=+0.142376930 container init 184b158e09711f91160511cc9d1e573ae340b527e39ab76701e8a51b9a334aa0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b1f372f9-fbd1-4ef7-9be7-ace7ce14bb23, tcib_managed=true, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 20 14:37:38 compute-1 podman[249845]: 2026-01-20 14:37:38.544917451 +0000 UTC m=+0.148213125 container start 184b158e09711f91160511cc9d1e573ae340b527e39ab76701e8a51b9a334aa0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b1f372f9-fbd1-4ef7-9be7-ace7ce14bb23, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:37:38 compute-1 neutron-haproxy-ovnmeta-b1f372f9-fbd1-4ef7-9be7-ace7ce14bb23[249861]: [NOTICE]   (249865) : New worker (249867) forked
Jan 20 14:37:38 compute-1 neutron-haproxy-ovnmeta-b1f372f9-fbd1-4ef7-9be7-ace7ce14bb23[249861]: [NOTICE]   (249865) : Loading success.
Jan 20 14:37:38 compute-1 nova_compute[225855]: 2026-01-20 14:37:38.570 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 98e22622-b8b8-44a5-befe-1bd745f9c946] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:37:38 compute-1 nova_compute[225855]: 2026-01-20 14:37:38.575 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768919858.4996648, 98e22622-b8b8-44a5-befe-1bd745f9c946 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:37:38 compute-1 nova_compute[225855]: 2026-01-20 14:37:38.575 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 98e22622-b8b8-44a5-befe-1bd745f9c946] VM Resumed (Lifecycle Event)
Jan 20 14:37:38 compute-1 nova_compute[225855]: 2026-01-20 14:37:38.693 225859 INFO nova.compute.manager [None req-59c3a9a4-7848-4b64-b142-2499305cdad5 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] [instance: 98e22622-b8b8-44a5-befe-1bd745f9c946] Took 7.89 seconds to spawn the instance on the hypervisor.
Jan 20 14:37:38 compute-1 nova_compute[225855]: 2026-01-20 14:37:38.693 225859 DEBUG nova.compute.manager [None req-59c3a9a4-7848-4b64-b142-2499305cdad5 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] [instance: 98e22622-b8b8-44a5-befe-1bd745f9c946] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:37:38 compute-1 nova_compute[225855]: 2026-01-20 14:37:38.723 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 98e22622-b8b8-44a5-befe-1bd745f9c946] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:37:38 compute-1 nova_compute[225855]: 2026-01-20 14:37:38.726 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 98e22622-b8b8-44a5-befe-1bd745f9c946] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:37:38 compute-1 nova_compute[225855]: 2026-01-20 14:37:38.753 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 98e22622-b8b8-44a5-befe-1bd745f9c946] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 14:37:38 compute-1 nova_compute[225855]: 2026-01-20 14:37:38.766 225859 INFO nova.compute.manager [None req-59c3a9a4-7848-4b64-b142-2499305cdad5 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] [instance: 98e22622-b8b8-44a5-befe-1bd745f9c946] Took 8.87 seconds to build instance.
Jan 20 14:37:38 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:37:38 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:37:38 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:37:38.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:37:38 compute-1 ceph-mon[81775]: pgmap v1477: 321 pgs: 321 active+clean; 107 MiB data, 598 MiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 5.5 MiB/s wr, 197 op/s
Jan 20 14:37:38 compute-1 ceph-mon[81775]: osdmap e232: 3 total, 3 up, 3 in
Jan 20 14:37:38 compute-1 nova_compute[225855]: 2026-01-20 14:37:38.908 225859 DEBUG oslo_concurrency.lockutils [None req-59c3a9a4-7848-4b64-b142-2499305cdad5 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] Lock "98e22622-b8b8-44a5-befe-1bd745f9c946" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.101s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:37:39 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:37:39 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:37:39 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:37:39.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:37:39 compute-1 nova_compute[225855]: 2026-01-20 14:37:39.781 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:37:40 compute-1 nova_compute[225855]: 2026-01-20 14:37:40.657 225859 DEBUG nova.compute.manager [req-62bc4a7d-1291-4b41-b1c5-019b6b572702 req-79e4a9d5-61c6-40c0-b143-8a43d128927a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 98e22622-b8b8-44a5-befe-1bd745f9c946] Received event network-vif-plugged-25ba0729-4796-48e4-9b7a-6c0716d26545 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:37:40 compute-1 nova_compute[225855]: 2026-01-20 14:37:40.658 225859 DEBUG oslo_concurrency.lockutils [req-62bc4a7d-1291-4b41-b1c5-019b6b572702 req-79e4a9d5-61c6-40c0-b143-8a43d128927a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "98e22622-b8b8-44a5-befe-1bd745f9c946-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:37:40 compute-1 nova_compute[225855]: 2026-01-20 14:37:40.659 225859 DEBUG oslo_concurrency.lockutils [req-62bc4a7d-1291-4b41-b1c5-019b6b572702 req-79e4a9d5-61c6-40c0-b143-8a43d128927a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "98e22622-b8b8-44a5-befe-1bd745f9c946-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:37:40 compute-1 nova_compute[225855]: 2026-01-20 14:37:40.659 225859 DEBUG oslo_concurrency.lockutils [req-62bc4a7d-1291-4b41-b1c5-019b6b572702 req-79e4a9d5-61c6-40c0-b143-8a43d128927a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "98e22622-b8b8-44a5-befe-1bd745f9c946-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:37:40 compute-1 nova_compute[225855]: 2026-01-20 14:37:40.660 225859 DEBUG nova.compute.manager [req-62bc4a7d-1291-4b41-b1c5-019b6b572702 req-79e4a9d5-61c6-40c0-b143-8a43d128927a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 98e22622-b8b8-44a5-befe-1bd745f9c946] No waiting events found dispatching network-vif-plugged-25ba0729-4796-48e4-9b7a-6c0716d26545 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:37:40 compute-1 nova_compute[225855]: 2026-01-20 14:37:40.660 225859 WARNING nova.compute.manager [req-62bc4a7d-1291-4b41-b1c5-019b6b572702 req-79e4a9d5-61c6-40c0-b143-8a43d128927a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 98e22622-b8b8-44a5-befe-1bd745f9c946] Received unexpected event network-vif-plugged-25ba0729-4796-48e4-9b7a-6c0716d26545 for instance with vm_state active and task_state None.
Jan 20 14:37:40 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:37:40 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:37:40 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:37:40.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:37:40 compute-1 ceph-mon[81775]: pgmap v1479: 321 pgs: 321 active+clean; 112 MiB data, 599 MiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 5.0 MiB/s wr, 174 op/s
Jan 20 14:37:40 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:37:41 compute-1 nova_compute[225855]: 2026-01-20 14:37:41.397 225859 DEBUG nova.compute.manager [None req-a9a54335-d43d-496a-b97d-426d1d18ffe5 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] [instance: 98e22622-b8b8-44a5-befe-1bd745f9c946] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:37:41 compute-1 nova_compute[225855]: 2026-01-20 14:37:41.575 225859 INFO nova.compute.manager [None req-a9a54335-d43d-496a-b97d-426d1d18ffe5 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] [instance: 98e22622-b8b8-44a5-befe-1bd745f9c946] instance snapshotting
Jan 20 14:37:41 compute-1 nova_compute[225855]: 2026-01-20 14:37:41.579 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:37:41 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:37:41 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:37:41 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:37:41.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:37:41 compute-1 ceph-mon[81775]: pgmap v1480: 321 pgs: 321 active+clean; 134 MiB data, 618 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 3.5 MiB/s wr, 184 op/s
Jan 20 14:37:41 compute-1 nova_compute[225855]: 2026-01-20 14:37:41.938 225859 WARNING nova.compute.manager [None req-a9a54335-d43d-496a-b97d-426d1d18ffe5 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] [instance: 98e22622-b8b8-44a5-befe-1bd745f9c946] Image not found during snapshot: nova.exception.ImageNotFound: Image d6eb065b-6bd9-4a87-ab49-a63678d86cff could not be found.
Jan 20 14:37:42 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:37:42 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:37:42 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:37:42.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:37:42 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/3670143803' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:37:42 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/3145099478' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:37:42 compute-1 nova_compute[225855]: 2026-01-20 14:37:42.978 225859 DEBUG oslo_concurrency.lockutils [None req-35de1d14-6ccc-4867-8f50-13b31c1a82ab 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] Acquiring lock "98e22622-b8b8-44a5-befe-1bd745f9c946" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:37:42 compute-1 nova_compute[225855]: 2026-01-20 14:37:42.978 225859 DEBUG oslo_concurrency.lockutils [None req-35de1d14-6ccc-4867-8f50-13b31c1a82ab 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] Lock "98e22622-b8b8-44a5-befe-1bd745f9c946" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:37:42 compute-1 nova_compute[225855]: 2026-01-20 14:37:42.978 225859 DEBUG oslo_concurrency.lockutils [None req-35de1d14-6ccc-4867-8f50-13b31c1a82ab 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] Acquiring lock "98e22622-b8b8-44a5-befe-1bd745f9c946-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:37:42 compute-1 nova_compute[225855]: 2026-01-20 14:37:42.979 225859 DEBUG oslo_concurrency.lockutils [None req-35de1d14-6ccc-4867-8f50-13b31c1a82ab 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] Lock "98e22622-b8b8-44a5-befe-1bd745f9c946-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:37:42 compute-1 nova_compute[225855]: 2026-01-20 14:37:42.979 225859 DEBUG oslo_concurrency.lockutils [None req-35de1d14-6ccc-4867-8f50-13b31c1a82ab 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] Lock "98e22622-b8b8-44a5-befe-1bd745f9c946-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:37:42 compute-1 nova_compute[225855]: 2026-01-20 14:37:42.980 225859 INFO nova.compute.manager [None req-35de1d14-6ccc-4867-8f50-13b31c1a82ab 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] [instance: 98e22622-b8b8-44a5-befe-1bd745f9c946] Terminating instance
Jan 20 14:37:42 compute-1 nova_compute[225855]: 2026-01-20 14:37:42.981 225859 DEBUG nova.compute.manager [None req-35de1d14-6ccc-4867-8f50-13b31c1a82ab 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] [instance: 98e22622-b8b8-44a5-befe-1bd745f9c946] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 20 14:37:43 compute-1 kernel: tap25ba0729-47 (unregistering): left promiscuous mode
Jan 20 14:37:43 compute-1 NetworkManager[49104]: <info>  [1768919863.0149] device (tap25ba0729-47): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 20 14:37:43 compute-1 nova_compute[225855]: 2026-01-20 14:37:43.019 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:37:43 compute-1 ovn_controller[130490]: 2026-01-20T14:37:43Z|00180|binding|INFO|Releasing lport 25ba0729-4796-48e4-9b7a-6c0716d26545 from this chassis (sb_readonly=0)
Jan 20 14:37:43 compute-1 ovn_controller[130490]: 2026-01-20T14:37:43Z|00181|binding|INFO|Setting lport 25ba0729-4796-48e4-9b7a-6c0716d26545 down in Southbound
Jan 20 14:37:43 compute-1 ovn_controller[130490]: 2026-01-20T14:37:43Z|00182|binding|INFO|Removing iface tap25ba0729-47 ovn-installed in OVS
Jan 20 14:37:43 compute-1 nova_compute[225855]: 2026-01-20 14:37:43.022 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:37:43 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:37:43.027 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:23:1a:21 10.100.0.11'], port_security=['fa:16:3e:23:1a:21 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '98e22622-b8b8-44a5-befe-1bd745f9c946', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b1f372f9-fbd1-4ef7-9be7-ace7ce14bb23', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7d78990d13704d629a8a3e8910d005c5', 'neutron:revision_number': '4', 'neutron:security_group_ids': '3763ece7-c739-40ca-8e07-6dde1584ba85', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a613141e-df34-49c4-9712-c3d232327d6b, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], logical_port=25ba0729-4796-48e4-9b7a-6c0716d26545) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:37:43 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:37:43.028 140354 INFO neutron.agent.ovn.metadata.agent [-] Port 25ba0729-4796-48e4-9b7a-6c0716d26545 in datapath b1f372f9-fbd1-4ef7-9be7-ace7ce14bb23 unbound from our chassis
Jan 20 14:37:43 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:37:43.030 140354 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b1f372f9-fbd1-4ef7-9be7-ace7ce14bb23, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 20 14:37:43 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:37:43.032 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[497523bb-f590-4d1c-a90d-58959fc76151]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:37:43 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:37:43.034 140354 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-b1f372f9-fbd1-4ef7-9be7-ace7ce14bb23 namespace which is not needed anymore
Jan 20 14:37:43 compute-1 nova_compute[225855]: 2026-01-20 14:37:43.044 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:37:43 compute-1 systemd[1]: machine-qemu\x2d27\x2dinstance\x2d0000003a.scope: Deactivated successfully.
Jan 20 14:37:43 compute-1 systemd[1]: machine-qemu\x2d27\x2dinstance\x2d0000003a.scope: Consumed 5.263s CPU time.
Jan 20 14:37:43 compute-1 systemd-machined[194361]: Machine qemu-27-instance-0000003a terminated.
Jan 20 14:37:43 compute-1 neutron-haproxy-ovnmeta-b1f372f9-fbd1-4ef7-9be7-ace7ce14bb23[249861]: [NOTICE]   (249865) : haproxy version is 2.8.14-c23fe91
Jan 20 14:37:43 compute-1 neutron-haproxy-ovnmeta-b1f372f9-fbd1-4ef7-9be7-ace7ce14bb23[249861]: [NOTICE]   (249865) : path to executable is /usr/sbin/haproxy
Jan 20 14:37:43 compute-1 neutron-haproxy-ovnmeta-b1f372f9-fbd1-4ef7-9be7-ace7ce14bb23[249861]: [WARNING]  (249865) : Exiting Master process...
Jan 20 14:37:43 compute-1 neutron-haproxy-ovnmeta-b1f372f9-fbd1-4ef7-9be7-ace7ce14bb23[249861]: [ALERT]    (249865) : Current worker (249867) exited with code 143 (Terminated)
Jan 20 14:37:43 compute-1 neutron-haproxy-ovnmeta-b1f372f9-fbd1-4ef7-9be7-ace7ce14bb23[249861]: [WARNING]  (249865) : All workers exited. Exiting... (0)
Jan 20 14:37:43 compute-1 systemd[1]: libpod-184b158e09711f91160511cc9d1e573ae340b527e39ab76701e8a51b9a334aa0.scope: Deactivated successfully.
Jan 20 14:37:43 compute-1 podman[249902]: 2026-01-20 14:37:43.168181675 +0000 UTC m=+0.041409730 container died 184b158e09711f91160511cc9d1e573ae340b527e39ab76701e8a51b9a334aa0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b1f372f9-fbd1-4ef7-9be7-ace7ce14bb23, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 20 14:37:43 compute-1 systemd[1]: var-lib-containers-storage-overlay-2be3ebbad909f05951835ad49a3aeee9e1168ceb25c4d16b8266150b05185476-merged.mount: Deactivated successfully.
Jan 20 14:37:43 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-184b158e09711f91160511cc9d1e573ae340b527e39ab76701e8a51b9a334aa0-userdata-shm.mount: Deactivated successfully.
Jan 20 14:37:43 compute-1 podman[249902]: 2026-01-20 14:37:43.212805235 +0000 UTC m=+0.086033280 container cleanup 184b158e09711f91160511cc9d1e573ae340b527e39ab76701e8a51b9a334aa0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b1f372f9-fbd1-4ef7-9be7-ace7ce14bb23, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true)
Jan 20 14:37:43 compute-1 nova_compute[225855]: 2026-01-20 14:37:43.218 225859 INFO nova.virt.libvirt.driver [-] [instance: 98e22622-b8b8-44a5-befe-1bd745f9c946] Instance destroyed successfully.
Jan 20 14:37:43 compute-1 nova_compute[225855]: 2026-01-20 14:37:43.219 225859 DEBUG nova.objects.instance [None req-35de1d14-6ccc-4867-8f50-13b31c1a82ab 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] Lazy-loading 'resources' on Instance uuid 98e22622-b8b8-44a5-befe-1bd745f9c946 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:37:43 compute-1 systemd[1]: libpod-conmon-184b158e09711f91160511cc9d1e573ae340b527e39ab76701e8a51b9a334aa0.scope: Deactivated successfully.
Jan 20 14:37:43 compute-1 nova_compute[225855]: 2026-01-20 14:37:43.239 225859 DEBUG nova.virt.libvirt.vif [None req-35de1d14-6ccc-4867-8f50-13b31c1a82ab 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T14:37:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-127847800',display_name='tempest-ImagesOneServerNegativeTestJSON-server-127847800',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-127847800',id=58,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-20T14:37:38Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='7d78990d13704d629a8a3e8910d005c5',ramdisk_id='',reservation_id='r-5l39qm5d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-866315696',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-866315696-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T14:37:41Z,user_data=None,user_id='592a0204f38a4596ab1ab81774214a6d',uuid=98e22622-b8b8-44a5-befe-1bd745f9c946,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "25ba0729-4796-48e4-9b7a-6c0716d26545", "address": "fa:16:3e:23:1a:21", "network": {"id": "b1f372f9-fbd1-4ef7-9be7-ace7ce14bb23", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-462971735-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d78990d13704d629a8a3e8910d005c5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25ba0729-47", "ovs_interfaceid": "25ba0729-4796-48e4-9b7a-6c0716d26545", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 20 14:37:43 compute-1 nova_compute[225855]: 2026-01-20 14:37:43.240 225859 DEBUG nova.network.os_vif_util [None req-35de1d14-6ccc-4867-8f50-13b31c1a82ab 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] Converting VIF {"id": "25ba0729-4796-48e4-9b7a-6c0716d26545", "address": "fa:16:3e:23:1a:21", "network": {"id": "b1f372f9-fbd1-4ef7-9be7-ace7ce14bb23", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-462971735-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d78990d13704d629a8a3e8910d005c5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25ba0729-47", "ovs_interfaceid": "25ba0729-4796-48e4-9b7a-6c0716d26545", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:37:43 compute-1 nova_compute[225855]: 2026-01-20 14:37:43.241 225859 DEBUG nova.network.os_vif_util [None req-35de1d14-6ccc-4867-8f50-13b31c1a82ab 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:23:1a:21,bridge_name='br-int',has_traffic_filtering=True,id=25ba0729-4796-48e4-9b7a-6c0716d26545,network=Network(b1f372f9-fbd1-4ef7-9be7-ace7ce14bb23),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap25ba0729-47') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:37:43 compute-1 nova_compute[225855]: 2026-01-20 14:37:43.241 225859 DEBUG os_vif [None req-35de1d14-6ccc-4867-8f50-13b31c1a82ab 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:23:1a:21,bridge_name='br-int',has_traffic_filtering=True,id=25ba0729-4796-48e4-9b7a-6c0716d26545,network=Network(b1f372f9-fbd1-4ef7-9be7-ace7ce14bb23),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap25ba0729-47') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 20 14:37:43 compute-1 nova_compute[225855]: 2026-01-20 14:37:43.243 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:37:43 compute-1 nova_compute[225855]: 2026-01-20 14:37:43.244 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap25ba0729-47, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:37:43 compute-1 nova_compute[225855]: 2026-01-20 14:37:43.262 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:37:43 compute-1 nova_compute[225855]: 2026-01-20 14:37:43.264 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:37:43 compute-1 nova_compute[225855]: 2026-01-20 14:37:43.266 225859 INFO os_vif [None req-35de1d14-6ccc-4867-8f50-13b31c1a82ab 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:23:1a:21,bridge_name='br-int',has_traffic_filtering=True,id=25ba0729-4796-48e4-9b7a-6c0716d26545,network=Network(b1f372f9-fbd1-4ef7-9be7-ace7ce14bb23),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap25ba0729-47')
Jan 20 14:37:43 compute-1 podman[249943]: 2026-01-20 14:37:43.293365569 +0000 UTC m=+0.056424424 container remove 184b158e09711f91160511cc9d1e573ae340b527e39ab76701e8a51b9a334aa0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b1f372f9-fbd1-4ef7-9be7-ace7ce14bb23, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:37:43 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:37:43.300 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[ebf4b644-7401-49b3-aabf-c72e8accda74]: (4, ('Tue Jan 20 02:37:43 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-b1f372f9-fbd1-4ef7-9be7-ace7ce14bb23 (184b158e09711f91160511cc9d1e573ae340b527e39ab76701e8a51b9a334aa0)\n184b158e09711f91160511cc9d1e573ae340b527e39ab76701e8a51b9a334aa0\nTue Jan 20 02:37:43 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-b1f372f9-fbd1-4ef7-9be7-ace7ce14bb23 (184b158e09711f91160511cc9d1e573ae340b527e39ab76701e8a51b9a334aa0)\n184b158e09711f91160511cc9d1e573ae340b527e39ab76701e8a51b9a334aa0\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:37:43 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:37:43.302 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[0b0d3c93-c5ab-45fe-b34c-3df22d03e875]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:37:43 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:37:43.304 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb1f372f9-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:37:43 compute-1 nova_compute[225855]: 2026-01-20 14:37:43.305 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:37:43 compute-1 kernel: tapb1f372f9-f0: left promiscuous mode
Jan 20 14:37:43 compute-1 nova_compute[225855]: 2026-01-20 14:37:43.321 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:37:43 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:37:43.323 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[9e752905-f366-46fd-853b-2f96250c9205]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:37:43 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:37:43.343 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[00ff6918-7666-492c-a63e-8046fbe27a47]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:37:43 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:37:43.345 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[5e9de64a-e5d8-4aa3-8d7f-913531ad7141]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:37:43 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:37:43.357 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[38cf2ee7-487f-4329-a3bd-7a371e31d252]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 489671, 'reachable_time': 30057, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 249976, 'error': None, 'target': 'ovnmeta-b1f372f9-fbd1-4ef7-9be7-ace7ce14bb23', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:37:43 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:37:43.359 140466 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-b1f372f9-fbd1-4ef7-9be7-ace7ce14bb23 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 20 14:37:43 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:37:43.359 140466 DEBUG oslo.privsep.daemon [-] privsep: reply[520f0b2b-72b8-425d-a806-e92e43592525]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:37:43 compute-1 systemd[1]: run-netns-ovnmeta\x2db1f372f9\x2dfbd1\x2d4ef7\x2d9be7\x2dace7ce14bb23.mount: Deactivated successfully.
Jan 20 14:37:43 compute-1 nova_compute[225855]: 2026-01-20 14:37:43.405 225859 DEBUG nova.compute.manager [req-cf1312e4-9450-4142-8a57-f16ad715ee07 req-dfbd023c-25bd-434b-8a30-36fe27392125 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 98e22622-b8b8-44a5-befe-1bd745f9c946] Received event network-vif-unplugged-25ba0729-4796-48e4-9b7a-6c0716d26545 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:37:43 compute-1 nova_compute[225855]: 2026-01-20 14:37:43.405 225859 DEBUG oslo_concurrency.lockutils [req-cf1312e4-9450-4142-8a57-f16ad715ee07 req-dfbd023c-25bd-434b-8a30-36fe27392125 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "98e22622-b8b8-44a5-befe-1bd745f9c946-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:37:43 compute-1 nova_compute[225855]: 2026-01-20 14:37:43.406 225859 DEBUG oslo_concurrency.lockutils [req-cf1312e4-9450-4142-8a57-f16ad715ee07 req-dfbd023c-25bd-434b-8a30-36fe27392125 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "98e22622-b8b8-44a5-befe-1bd745f9c946-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:37:43 compute-1 nova_compute[225855]: 2026-01-20 14:37:43.406 225859 DEBUG oslo_concurrency.lockutils [req-cf1312e4-9450-4142-8a57-f16ad715ee07 req-dfbd023c-25bd-434b-8a30-36fe27392125 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "98e22622-b8b8-44a5-befe-1bd745f9c946-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:37:43 compute-1 nova_compute[225855]: 2026-01-20 14:37:43.406 225859 DEBUG nova.compute.manager [req-cf1312e4-9450-4142-8a57-f16ad715ee07 req-dfbd023c-25bd-434b-8a30-36fe27392125 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 98e22622-b8b8-44a5-befe-1bd745f9c946] No waiting events found dispatching network-vif-unplugged-25ba0729-4796-48e4-9b7a-6c0716d26545 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:37:43 compute-1 nova_compute[225855]: 2026-01-20 14:37:43.406 225859 DEBUG nova.compute.manager [req-cf1312e4-9450-4142-8a57-f16ad715ee07 req-dfbd023c-25bd-434b-8a30-36fe27392125 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 98e22622-b8b8-44a5-befe-1bd745f9c946] Received event network-vif-unplugged-25ba0729-4796-48e4-9b7a-6c0716d26545 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 20 14:37:43 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:37:43 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:37:43 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:37:43.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:37:43 compute-1 nova_compute[225855]: 2026-01-20 14:37:43.810 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:37:43 compute-1 sudo[249979]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:37:43 compute-1 sudo[249979]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:37:43 compute-1 nova_compute[225855]: 2026-01-20 14:37:43.965 225859 INFO nova.virt.libvirt.driver [None req-35de1d14-6ccc-4867-8f50-13b31c1a82ab 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] [instance: 98e22622-b8b8-44a5-befe-1bd745f9c946] Deleting instance files /var/lib/nova/instances/98e22622-b8b8-44a5-befe-1bd745f9c946_del
Jan 20 14:37:43 compute-1 sudo[249979]: pam_unix(sudo:session): session closed for user root
Jan 20 14:37:43 compute-1 nova_compute[225855]: 2026-01-20 14:37:43.966 225859 INFO nova.virt.libvirt.driver [None req-35de1d14-6ccc-4867-8f50-13b31c1a82ab 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] [instance: 98e22622-b8b8-44a5-befe-1bd745f9c946] Deletion of /var/lib/nova/instances/98e22622-b8b8-44a5-befe-1bd745f9c946_del complete
Jan 20 14:37:43 compute-1 ceph-mon[81775]: pgmap v1481: 321 pgs: 321 active+clean; 134 MiB data, 618 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 3.5 MiB/s wr, 184 op/s
Jan 20 14:37:44 compute-1 nova_compute[225855]: 2026-01-20 14:37:44.020 225859 INFO nova.compute.manager [None req-35de1d14-6ccc-4867-8f50-13b31c1a82ab 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] [instance: 98e22622-b8b8-44a5-befe-1bd745f9c946] Took 1.04 seconds to destroy the instance on the hypervisor.
Jan 20 14:37:44 compute-1 nova_compute[225855]: 2026-01-20 14:37:44.021 225859 DEBUG oslo.service.loopingcall [None req-35de1d14-6ccc-4867-8f50-13b31c1a82ab 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 20 14:37:44 compute-1 nova_compute[225855]: 2026-01-20 14:37:44.021 225859 DEBUG nova.compute.manager [-] [instance: 98e22622-b8b8-44a5-befe-1bd745f9c946] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 20 14:37:44 compute-1 nova_compute[225855]: 2026-01-20 14:37:44.021 225859 DEBUG nova.network.neutron [-] [instance: 98e22622-b8b8-44a5-befe-1bd745f9c946] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 20 14:37:44 compute-1 sudo[250004]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:37:44 compute-1 sudo[250004]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:37:44 compute-1 sudo[250004]: pam_unix(sudo:session): session closed for user root
Jan 20 14:37:44 compute-1 nova_compute[225855]: 2026-01-20 14:37:44.783 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:37:44 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:37:44 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:37:44 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:37:44.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:37:44 compute-1 nova_compute[225855]: 2026-01-20 14:37:44.963 225859 DEBUG nova.network.neutron [-] [instance: 98e22622-b8b8-44a5-befe-1bd745f9c946] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:37:45 compute-1 nova_compute[225855]: 2026-01-20 14:37:45.005 225859 INFO nova.compute.manager [-] [instance: 98e22622-b8b8-44a5-befe-1bd745f9c946] Took 0.98 seconds to deallocate network for instance.
Jan 20 14:37:45 compute-1 nova_compute[225855]: 2026-01-20 14:37:45.066 225859 DEBUG oslo_concurrency.lockutils [None req-35de1d14-6ccc-4867-8f50-13b31c1a82ab 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:37:45 compute-1 nova_compute[225855]: 2026-01-20 14:37:45.067 225859 DEBUG oslo_concurrency.lockutils [None req-35de1d14-6ccc-4867-8f50-13b31c1a82ab 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:37:45 compute-1 podman[250029]: 2026-01-20 14:37:45.076023405 +0000 UTC m=+0.120290467 container health_status 72be95c12e041eb2cd1a16ad70dfa1391d9f03a5f3180a614ead9459dcfcd46f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 20 14:37:45 compute-1 nova_compute[225855]: 2026-01-20 14:37:45.117 225859 DEBUG oslo_concurrency.processutils [None req-35de1d14-6ccc-4867-8f50-13b31c1a82ab 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:37:45 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:37:45 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1773620489' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:37:45 compute-1 nova_compute[225855]: 2026-01-20 14:37:45.605 225859 DEBUG oslo_concurrency.processutils [None req-35de1d14-6ccc-4867-8f50-13b31c1a82ab 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:37:45 compute-1 nova_compute[225855]: 2026-01-20 14:37:45.611 225859 DEBUG nova.compute.provider_tree [None req-35de1d14-6ccc-4867-8f50-13b31c1a82ab 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:37:45 compute-1 nova_compute[225855]: 2026-01-20 14:37:45.631 225859 DEBUG nova.scheduler.client.report [None req-35de1d14-6ccc-4867-8f50-13b31c1a82ab 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:37:45 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/1773620489' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:37:45 compute-1 nova_compute[225855]: 2026-01-20 14:37:45.655 225859 DEBUG oslo_concurrency.lockutils [None req-35de1d14-6ccc-4867-8f50-13b31c1a82ab 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.588s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:37:45 compute-1 nova_compute[225855]: 2026-01-20 14:37:45.706 225859 INFO nova.scheduler.client.report [None req-35de1d14-6ccc-4867-8f50-13b31c1a82ab 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] Deleted allocations for instance 98e22622-b8b8-44a5-befe-1bd745f9c946
Jan 20 14:37:45 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:37:45 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:37:45 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:37:45.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:37:45 compute-1 nova_compute[225855]: 2026-01-20 14:37:45.788 225859 DEBUG nova.compute.manager [req-9a69a638-ed08-43c2-b400-a01496fc2638 req-5d87e747-a072-4c17-a20e-6c6dc45d05f4 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 98e22622-b8b8-44a5-befe-1bd745f9c946] Received event network-vif-plugged-25ba0729-4796-48e4-9b7a-6c0716d26545 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:37:45 compute-1 nova_compute[225855]: 2026-01-20 14:37:45.789 225859 DEBUG oslo_concurrency.lockutils [req-9a69a638-ed08-43c2-b400-a01496fc2638 req-5d87e747-a072-4c17-a20e-6c6dc45d05f4 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "98e22622-b8b8-44a5-befe-1bd745f9c946-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:37:45 compute-1 nova_compute[225855]: 2026-01-20 14:37:45.789 225859 DEBUG oslo_concurrency.lockutils [req-9a69a638-ed08-43c2-b400-a01496fc2638 req-5d87e747-a072-4c17-a20e-6c6dc45d05f4 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "98e22622-b8b8-44a5-befe-1bd745f9c946-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:37:45 compute-1 nova_compute[225855]: 2026-01-20 14:37:45.789 225859 DEBUG oslo_concurrency.lockutils [req-9a69a638-ed08-43c2-b400-a01496fc2638 req-5d87e747-a072-4c17-a20e-6c6dc45d05f4 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "98e22622-b8b8-44a5-befe-1bd745f9c946-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:37:45 compute-1 nova_compute[225855]: 2026-01-20 14:37:45.790 225859 DEBUG nova.compute.manager [req-9a69a638-ed08-43c2-b400-a01496fc2638 req-5d87e747-a072-4c17-a20e-6c6dc45d05f4 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 98e22622-b8b8-44a5-befe-1bd745f9c946] No waiting events found dispatching network-vif-plugged-25ba0729-4796-48e4-9b7a-6c0716d26545 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:37:45 compute-1 nova_compute[225855]: 2026-01-20 14:37:45.790 225859 WARNING nova.compute.manager [req-9a69a638-ed08-43c2-b400-a01496fc2638 req-5d87e747-a072-4c17-a20e-6c6dc45d05f4 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 98e22622-b8b8-44a5-befe-1bd745f9c946] Received unexpected event network-vif-plugged-25ba0729-4796-48e4-9b7a-6c0716d26545 for instance with vm_state deleted and task_state None.
Jan 20 14:37:45 compute-1 nova_compute[225855]: 2026-01-20 14:37:45.809 225859 DEBUG oslo_concurrency.lockutils [None req-35de1d14-6ccc-4867-8f50-13b31c1a82ab 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] Lock "98e22622-b8b8-44a5-befe-1bd745f9c946" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.831s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:37:45 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:37:46 compute-1 nova_compute[225855]: 2026-01-20 14:37:46.780 225859 DEBUG nova.compute.manager [req-adaa3bc1-cf1e-43d1-aa74-31e517b73dbe req-9b3e73bc-0df7-440c-90ba-3b382dd0c099 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 98e22622-b8b8-44a5-befe-1bd745f9c946] Received event network-vif-deleted-25ba0729-4796-48e4-9b7a-6c0716d26545 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:37:46 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:37:46 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:37:46 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:37:46.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:37:46 compute-1 ceph-mon[81775]: pgmap v1482: 321 pgs: 321 active+clean; 104 MiB data, 608 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.2 MiB/s wr, 160 op/s
Jan 20 14:37:47 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:37:47 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:37:47 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:37:47.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:37:48 compute-1 nova_compute[225855]: 2026-01-20 14:37:48.264 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:37:48 compute-1 ceph-mon[81775]: pgmap v1483: 321 pgs: 321 active+clean; 88 MiB data, 602 MiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 1.0 MiB/s wr, 177 op/s
Jan 20 14:37:48 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:37:48 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:37:48 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:37:48.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:37:49 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:37:49 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:37:49 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:37:49.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:37:49 compute-1 nova_compute[225855]: 2026-01-20 14:37:49.785 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:37:50 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:37:50 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:37:50 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:37:50.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:37:50 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:37:50 compute-1 ceph-mon[81775]: pgmap v1484: 321 pgs: 321 active+clean; 88 MiB data, 599 MiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 915 KiB/s wr, 159 op/s
Jan 20 14:37:51 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:37:51 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:37:51 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:37:51.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:37:52 compute-1 ceph-mon[81775]: pgmap v1485: 321 pgs: 321 active+clean; 88 MiB data, 599 MiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 717 KiB/s wr, 179 op/s
Jan 20 14:37:52 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:37:52 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:37:52 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:37:52.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:37:53 compute-1 nova_compute[225855]: 2026-01-20 14:37:53.266 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:37:53 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:37:53 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:37:53 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:37:53.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:37:54 compute-1 ceph-mon[81775]: pgmap v1486: 321 pgs: 321 active+clean; 88 MiB data, 599 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 16 KiB/s wr, 113 op/s
Jan 20 14:37:54 compute-1 ceph-mon[81775]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 20 14:37:54 compute-1 ceph-mon[81775]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.0 total, 600.0 interval
                                           Cumulative writes: 6847 writes, 35K keys, 6847 commit groups, 1.0 writes per commit group, ingest: 0.07 GB, 0.03 MB/s
                                           Cumulative WAL: 6846 writes, 6846 syncs, 1.00 writes per sync, written: 0.07 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1811 writes, 8924 keys, 1811 commit groups, 1.0 writes per commit group, ingest: 17.43 MB, 0.03 MB/s
                                           Interval WAL: 1810 writes, 1810 syncs, 1.00 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     68.2      0.58              0.14        18    0.032       0      0       0.0       0.0
                                             L6      1/0    9.71 MB   0.0      0.2     0.0      0.1       0.1      0.0       0.0   3.6    111.4     92.0      1.55              0.48        17    0.091     86K   9392       0.0       0.0
                                            Sum      1/0    9.71 MB   0.0      0.2     0.0      0.1       0.2      0.0       0.0   4.6     80.9     85.5      2.14              0.63        35    0.061     86K   9392       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   4.8     93.0     95.0      0.51              0.12         8    0.064     24K   2592       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.2     0.0      0.1       0.1      0.0       0.0   0.0    111.4     92.0      1.55              0.48        17    0.091     86K   9392       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     68.5      0.58              0.14        17    0.034       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.7      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 2400.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.039, interval 0.010
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.18 GB write, 0.08 MB/s write, 0.17 GB read, 0.07 MB/s read, 2.1 seconds
                                           Interval compaction: 0.05 GB write, 0.08 MB/s write, 0.05 GB read, 0.08 MB/s read, 0.5 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564d515a71f0#2 capacity: 304.00 MB usage: 19.97 MB table_size: 0 occupancy: 18446744073709551615 collections: 5 last_copies: 0 last_secs: 0.000149 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1154,19.29 MB,6.34496%) FilterBlock(35,250.05 KB,0.0803245%) IndexBlock(35,444.61 KB,0.142825%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Jan 20 14:37:54 compute-1 nova_compute[225855]: 2026-01-20 14:37:54.787 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:37:54 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:37:54 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:37:54 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:37:54.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:37:55 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/1198049895' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:37:55 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:37:55 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:37:55 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:37:55.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:37:55 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:37:56 compute-1 podman[250086]: 2026-01-20 14:37:56.01555912 +0000 UTC m=+0.064361378 container health_status 533cd8e0c3b8b7f910f15cef49e7f879ab46c12ce646a35628c15f13725eb500 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 20 14:37:56 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:37:56 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:37:56 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:37:56.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:37:57 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:37:57 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:37:57 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:37:57.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:37:58 compute-1 ceph-mon[81775]: pgmap v1487: 321 pgs: 321 active+clean; 88 MiB data, 599 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 16 KiB/s wr, 114 op/s
Jan 20 14:37:58 compute-1 nova_compute[225855]: 2026-01-20 14:37:58.215 225859 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1768919863.2139645, 98e22622-b8b8-44a5-befe-1bd745f9c946 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:37:58 compute-1 nova_compute[225855]: 2026-01-20 14:37:58.215 225859 INFO nova.compute.manager [-] [instance: 98e22622-b8b8-44a5-befe-1bd745f9c946] VM Stopped (Lifecycle Event)
Jan 20 14:37:58 compute-1 nova_compute[225855]: 2026-01-20 14:37:58.255 225859 DEBUG nova.compute.manager [None req-ac9f89af-dfa1-4fc6-b423-866abea6a614 - - - - - -] [instance: 98e22622-b8b8-44a5-befe-1bd745f9c946] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:37:58 compute-1 nova_compute[225855]: 2026-01-20 14:37:58.303 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:37:58 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:37:58 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:37:58 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:37:58.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:37:59 compute-1 ceph-mon[81775]: pgmap v1488: 321 pgs: 321 active+clean; 88 MiB data, 602 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 148 KiB/s wr, 89 op/s
Jan 20 14:37:59 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:37:59 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:37:59 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:37:59.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:37:59 compute-1 nova_compute[225855]: 2026-01-20 14:37:59.790 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:38:00 compute-1 ceph-mon[81775]: pgmap v1489: 321 pgs: 321 active+clean; 99 MiB data, 611 MiB used, 20 GiB / 21 GiB avail; 962 KiB/s rd, 938 KiB/s wr, 42 op/s
Jan 20 14:38:00 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:38:00 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:38:00 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:38:00.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:38:00 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:38:01 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/2702146314' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:38:01 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:38:01 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:38:01 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:38:01.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:38:02 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:38:02 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:38:02 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:38:02.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:38:03 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/2126701600' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:38:03 compute-1 ceph-mon[81775]: pgmap v1490: 321 pgs: 321 active+clean; 111 MiB data, 626 MiB used, 20 GiB / 21 GiB avail; 804 KiB/s rd, 1.7 MiB/s wr, 48 op/s
Jan 20 14:38:03 compute-1 nova_compute[225855]: 2026-01-20 14:38:03.305 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:38:03 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:38:03 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:38:03 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:38:03.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:38:04 compute-1 sudo[250110]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:38:04 compute-1 sudo[250110]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:38:04 compute-1 sudo[250110]: pam_unix(sudo:session): session closed for user root
Jan 20 14:38:04 compute-1 sudo[250135]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:38:04 compute-1 sudo[250135]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:38:04 compute-1 sudo[250135]: pam_unix(sudo:session): session closed for user root
Jan 20 14:38:04 compute-1 ceph-mon[81775]: pgmap v1491: 321 pgs: 321 active+clean; 129 MiB data, 635 MiB used, 20 GiB / 21 GiB avail; 139 KiB/s rd, 2.7 MiB/s wr, 51 op/s
Jan 20 14:38:04 compute-1 nova_compute[225855]: 2026-01-20 14:38:04.792 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:38:04 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:38:04 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:38:04 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:38:04.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:38:05 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:38:05 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:38:05 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:38:05.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:38:05 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:38:06 compute-1 ceph-mon[81775]: pgmap v1492: 321 pgs: 321 active+clean; 167 MiB data, 644 MiB used, 20 GiB / 21 GiB avail; 371 KiB/s rd, 3.9 MiB/s wr, 100 op/s
Jan 20 14:38:06 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:38:06 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:38:06 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:38:06.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:38:07 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:38:07 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:38:07 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:38:07.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:38:08 compute-1 nova_compute[225855]: 2026-01-20 14:38:08.307 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:38:08 compute-1 ceph-mon[81775]: pgmap v1493: 321 pgs: 321 active+clean; 167 MiB data, 644 MiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 3.9 MiB/s wr, 125 op/s
Jan 20 14:38:08 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:38:08 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:38:08 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:38:08.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:38:09 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:38:09 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:38:09 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:38:09.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:38:09 compute-1 nova_compute[225855]: 2026-01-20 14:38:09.839 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:38:10 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:38:10 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:38:10 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:38:10.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:38:10 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:38:10 compute-1 ceph-mon[81775]: pgmap v1494: 321 pgs: 321 active+clean; 150 MiB data, 644 MiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 3.8 MiB/s wr, 160 op/s
Jan 20 14:38:10 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/3441600990' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:38:11 compute-1 nova_compute[225855]: 2026-01-20 14:38:11.340 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:38:11 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:38:11 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:38:11 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:38:11.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:38:12 compute-1 ceph-mon[81775]: pgmap v1495: 321 pgs: 321 active+clean; 136 MiB data, 637 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 3.0 MiB/s wr, 180 op/s
Jan 20 14:38:12 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:38:12 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:38:12 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:38:12.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:38:13 compute-1 nova_compute[225855]: 2026-01-20 14:38:13.310 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:38:13 compute-1 sudo[250164]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:38:13 compute-1 sudo[250164]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:38:13 compute-1 sudo[250164]: pam_unix(sudo:session): session closed for user root
Jan 20 14:38:13 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 20 14:38:13 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1969300733' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 14:38:13 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 20 14:38:13 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1969300733' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 14:38:13 compute-1 sudo[250189]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:38:13 compute-1 sudo[250189]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:38:13 compute-1 sudo[250189]: pam_unix(sudo:session): session closed for user root
Jan 20 14:38:13 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/1969300733' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 14:38:13 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/1969300733' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 14:38:13 compute-1 sudo[250215]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:38:13 compute-1 sudo[250215]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:38:13 compute-1 sudo[250215]: pam_unix(sudo:session): session closed for user root
Jan 20 14:38:13 compute-1 sudo[250240]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 20 14:38:13 compute-1 sudo[250240]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:38:13 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:38:13 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:38:13 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:38:13.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:38:14 compute-1 sudo[250240]: pam_unix(sudo:session): session closed for user root
Jan 20 14:38:14 compute-1 ceph-mon[81775]: pgmap v1496: 321 pgs: 321 active+clean; 121 MiB data, 634 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.2 MiB/s wr, 172 op/s
Jan 20 14:38:14 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:38:14 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:38:14 compute-1 nova_compute[225855]: 2026-01-20 14:38:14.849 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:38:14 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:38:14 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:38:14 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:38:14.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:38:15 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:38:15 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 14:38:15 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:38:15 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 14:38:15 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 14:38:15 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:38:15 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:38:15 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:38:15 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:38:15.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:38:15 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:38:16 compute-1 podman[250298]: 2026-01-20 14:38:16.115521601 +0000 UTC m=+0.150174860 container health_status 72be95c12e041eb2cd1a16ad70dfa1391d9f03a5f3180a614ead9459dcfcd46f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Jan 20 14:38:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:38:16.397 140354 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:38:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:38:16.398 140354 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:38:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:38:16.398 140354 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:38:16 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:38:16 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:38:16 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:38:16.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:38:17 compute-1 ceph-mon[81775]: pgmap v1497: 321 pgs: 321 active+clean; 121 MiB data, 626 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 1.2 MiB/s wr, 143 op/s
Jan 20 14:38:17 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:38:17 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:38:17 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:38:17.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:38:18 compute-1 ceph-mon[81775]: pgmap v1498: 321 pgs: 321 active+clean; 121 MiB data, 626 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 17 KiB/s wr, 94 op/s
Jan 20 14:38:18 compute-1 nova_compute[225855]: 2026-01-20 14:38:18.312 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:38:18 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:38:18 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:38:18 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:38:18.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:38:19 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:38:19 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:38:19 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:38:19.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:38:19 compute-1 nova_compute[225855]: 2026-01-20 14:38:19.899 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:38:20 compute-1 ceph-mon[81775]: pgmap v1499: 321 pgs: 321 active+clean; 121 MiB data, 626 MiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 16 KiB/s wr, 68 op/s
Jan 20 14:38:20 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:38:20 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:38:20 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:38:20.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:38:20 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:38:21 compute-1 sudo[250328]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:38:21 compute-1 sudo[250328]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:38:21 compute-1 sudo[250328]: pam_unix(sudo:session): session closed for user root
Jan 20 14:38:21 compute-1 sudo[250353]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 14:38:21 compute-1 sudo[250353]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:38:21 compute-1 sudo[250353]: pam_unix(sudo:session): session closed for user root
Jan 20 14:38:21 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:38:21 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:38:21 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:38:21.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:38:22 compute-1 nova_compute[225855]: 2026-01-20 14:38:22.353 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:38:22 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:38:22 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:38:22 compute-1 ceph-mon[81775]: pgmap v1500: 321 pgs: 321 active+clean; 121 MiB data, 626 MiB used, 20 GiB / 21 GiB avail; 507 KiB/s rd, 17 KiB/s wr, 31 op/s
Jan 20 14:38:22 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:38:22 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:38:22 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:38:22.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:38:23 compute-1 nova_compute[225855]: 2026-01-20 14:38:23.315 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:38:23 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:38:23 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:38:23 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:38:23.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:38:24 compute-1 sudo[250379]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:38:24 compute-1 sudo[250379]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:38:24 compute-1 sudo[250379]: pam_unix(sudo:session): session closed for user root
Jan 20 14:38:24 compute-1 sudo[250404]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:38:24 compute-1 sudo[250404]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:38:24 compute-1 sudo[250404]: pam_unix(sudo:session): session closed for user root
Jan 20 14:38:24 compute-1 nova_compute[225855]: 2026-01-20 14:38:24.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:38:24 compute-1 nova_compute[225855]: 2026-01-20 14:38:24.339 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 20 14:38:24 compute-1 nova_compute[225855]: 2026-01-20 14:38:24.360 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 20 14:38:24 compute-1 ceph-mon[81775]: pgmap v1501: 321 pgs: 321 active+clean; 121 MiB data, 626 MiB used, 20 GiB / 21 GiB avail; 3.3 KiB/s rd, 5.5 KiB/s wr, 6 op/s
Jan 20 14:38:24 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:38:24 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:38:24 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:38:24.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:38:24 compute-1 nova_compute[225855]: 2026-01-20 14:38:24.936 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:38:25 compute-1 nova_compute[225855]: 2026-01-20 14:38:25.360 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:38:25 compute-1 nova_compute[225855]: 2026-01-20 14:38:25.361 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:38:25 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:38:25 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:38:25 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:38:25.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:38:25 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:38:26 compute-1 nova_compute[225855]: 2026-01-20 14:38:26.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:38:26 compute-1 nova_compute[225855]: 2026-01-20 14:38:26.340 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 14:38:26 compute-1 nova_compute[225855]: 2026-01-20 14:38:26.340 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 20 14:38:26 compute-1 nova_compute[225855]: 2026-01-20 14:38:26.379 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 20 14:38:26 compute-1 nova_compute[225855]: 2026-01-20 14:38:26.380 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:38:26 compute-1 nova_compute[225855]: 2026-01-20 14:38:26.381 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:38:26 compute-1 nova_compute[225855]: 2026-01-20 14:38:26.382 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 14:38:26 compute-1 nova_compute[225855]: 2026-01-20 14:38:26.382 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:38:26 compute-1 nova_compute[225855]: 2026-01-20 14:38:26.425 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:38:26 compute-1 nova_compute[225855]: 2026-01-20 14:38:26.426 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:38:26 compute-1 nova_compute[225855]: 2026-01-20 14:38:26.426 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:38:26 compute-1 nova_compute[225855]: 2026-01-20 14:38:26.426 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 14:38:26 compute-1 nova_compute[225855]: 2026-01-20 14:38:26.427 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:38:26 compute-1 ceph-mon[81775]: pgmap v1502: 321 pgs: 321 active+clean; 121 MiB data, 626 MiB used, 20 GiB / 21 GiB avail; 1023 B/s rd, 4.7 KiB/s wr, 0 op/s
Jan 20 14:38:26 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:38:26 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:38:26 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:38:26.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:38:26 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:38:26 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2281049994' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:38:26 compute-1 nova_compute[225855]: 2026-01-20 14:38:26.926 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.499s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:38:27 compute-1 podman[250452]: 2026-01-20 14:38:27.02437264 +0000 UTC m=+0.063839853 container health_status 533cd8e0c3b8b7f910f15cef49e7f879ab46c12ce646a35628c15f13725eb500 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 20 14:38:27 compute-1 nova_compute[225855]: 2026-01-20 14:38:27.174 225859 WARNING nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 14:38:27 compute-1 nova_compute[225855]: 2026-01-20 14:38:27.175 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4678MB free_disk=20.942676544189453GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 14:38:27 compute-1 nova_compute[225855]: 2026-01-20 14:38:27.176 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:38:27 compute-1 nova_compute[225855]: 2026-01-20 14:38:27.176 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:38:27 compute-1 nova_compute[225855]: 2026-01-20 14:38:27.254 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 14:38:27 compute-1 nova_compute[225855]: 2026-01-20 14:38:27.254 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 14:38:27 compute-1 nova_compute[225855]: 2026-01-20 14:38:27.417 225859 DEBUG nova.scheduler.client.report [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Refreshing inventories for resource provider bbb02880-a710-4ac1-8b2c-5c09765848d1 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 20 14:38:27 compute-1 nova_compute[225855]: 2026-01-20 14:38:27.436 225859 DEBUG nova.scheduler.client.report [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Updating ProviderTree inventory for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 20 14:38:27 compute-1 nova_compute[225855]: 2026-01-20 14:38:27.436 225859 DEBUG nova.compute.provider_tree [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Updating inventory in ProviderTree for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 20 14:38:27 compute-1 nova_compute[225855]: 2026-01-20 14:38:27.450 225859 DEBUG nova.scheduler.client.report [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Refreshing aggregate associations for resource provider bbb02880-a710-4ac1-8b2c-5c09765848d1, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 20 14:38:27 compute-1 nova_compute[225855]: 2026-01-20 14:38:27.470 225859 DEBUG nova.scheduler.client.report [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Refreshing trait associations for resource provider bbb02880-a710-4ac1-8b2c-5c09765848d1, traits: COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_STORAGE_BUS_SATA,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_TRUSTED_CERTS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSSE3,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_VOLUME_EXTEND,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_RESCUE_BFV,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_USB,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE42,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SOCKET_PCI_NUMA_AFFINITY _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 20 14:38:27 compute-1 nova_compute[225855]: 2026-01-20 14:38:27.616 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:38:27 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/2281049994' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:38:27 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:38:27 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:38:27 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:38:27.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:38:28 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:38:28 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1324154779' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:38:28 compute-1 nova_compute[225855]: 2026-01-20 14:38:28.062 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:38:28 compute-1 nova_compute[225855]: 2026-01-20 14:38:28.068 225859 DEBUG nova.compute.provider_tree [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:38:28 compute-1 nova_compute[225855]: 2026-01-20 14:38:28.091 225859 DEBUG nova.scheduler.client.report [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:38:28 compute-1 nova_compute[225855]: 2026-01-20 14:38:28.113 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 14:38:28 compute-1 nova_compute[225855]: 2026-01-20 14:38:28.113 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.937s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:38:28 compute-1 nova_compute[225855]: 2026-01-20 14:38:28.317 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:38:28 compute-1 ceph-mon[81775]: pgmap v1503: 321 pgs: 321 active+clean; 99 MiB data, 610 MiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 KiB/s wr, 22 op/s
Jan 20 14:38:28 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/1324154779' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:38:28 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/297649950' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:38:28 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/1204876683' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:38:28 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:38:28 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:38:28 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:38:28.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:38:29 compute-1 nova_compute[225855]: 2026-01-20 14:38:29.340 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:38:29 compute-1 nova_compute[225855]: 2026-01-20 14:38:29.365 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:38:29 compute-1 nova_compute[225855]: 2026-01-20 14:38:29.366 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:38:29 compute-1 nova_compute[225855]: 2026-01-20 14:38:29.366 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 20 14:38:29 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/416411339' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:38:29 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/3156133779' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:38:29 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:38:29 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:38:29 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:38:29.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:38:29 compute-1 nova_compute[225855]: 2026-01-20 14:38:29.938 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:38:30 compute-1 ceph-mon[81775]: pgmap v1504: 321 pgs: 321 active+clean; 78 MiB data, 598 MiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 2.2 KiB/s wr, 25 op/s
Jan 20 14:38:30 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/3805276064' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:38:30 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:38:30 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:38:30 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:38:30.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:38:30 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:38:31 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:38:31.005 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=21, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '12:bb:42', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '06:92:24:f7:15:56'}, ipsec=False) old=SB_Global(nb_cfg=20) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:38:31 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:38:31.006 140354 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 20 14:38:31 compute-1 nova_compute[225855]: 2026-01-20 14:38:31.088 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:38:31 compute-1 nova_compute[225855]: 2026-01-20 14:38:31.374 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:38:31 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:38:31 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:38:31 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:38:31.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:38:32 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:38:32.008 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5ffd4ac3-9266-4927-98ad-20a17782c725, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '21'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:38:32 compute-1 ceph-mon[81775]: pgmap v1505: 321 pgs: 321 active+clean; 41 MiB data, 580 MiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 2.2 KiB/s wr, 28 op/s
Jan 20 14:38:32 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:38:32 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:38:32 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:38:32.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:38:33 compute-1 nova_compute[225855]: 2026-01-20 14:38:33.318 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:38:33 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:38:33 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:38:33 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:38:33.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:38:34 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:38:34 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:38:34 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:38:34.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:38:34 compute-1 nova_compute[225855]: 2026-01-20 14:38:34.990 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:38:35 compute-1 ceph-mon[81775]: pgmap v1506: 321 pgs: 321 active+clean; 41 MiB data, 580 MiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 20 14:38:35 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:38:35 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:38:35 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:38:35.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:38:35 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:38:35 compute-1 ceph-mon[81775]: pgmap v1507: 321 pgs: 321 active+clean; 41 MiB data, 580 MiB used, 20 GiB / 21 GiB avail; 25 KiB/s rd, 1.3 KiB/s wr, 34 op/s
Jan 20 14:38:36 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:38:36 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:38:36 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:38:36.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:38:37 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:38:37 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:38:37 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:38:37.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:38:38 compute-1 ceph-mon[81775]: pgmap v1508: 321 pgs: 321 active+clean; 41 MiB data, 580 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.5 KiB/s wr, 35 op/s
Jan 20 14:38:38 compute-1 nova_compute[225855]: 2026-01-20 14:38:38.320 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:38:38 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:38:38 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:38:38 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:38:38.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:38:39 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:38:39 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:38:39 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:38:39.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:38:39 compute-1 nova_compute[225855]: 2026-01-20 14:38:39.992 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:38:40 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:38:40 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:38:40 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:38:40.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:38:40 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:38:40 compute-1 ceph-mon[81775]: pgmap v1509: 321 pgs: 321 active+clean; 62 MiB data, 581 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 678 KiB/s wr, 21 op/s
Jan 20 14:38:41 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:38:41 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:38:41 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:38:41.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:38:41 compute-1 ceph-mon[81775]: pgmap v1510: 321 pgs: 321 active+clean; 84 MiB data, 594 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 41 op/s
Jan 20 14:38:42 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:38:42 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:38:42 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:38:42.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:38:43 compute-1 nova_compute[225855]: 2026-01-20 14:38:43.322 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:38:43 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:38:43 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:38:43 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:38:43.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:38:44 compute-1 sudo[250503]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:38:44 compute-1 sudo[250503]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:38:44 compute-1 sudo[250503]: pam_unix(sudo:session): session closed for user root
Jan 20 14:38:44 compute-1 sudo[250528]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:38:44 compute-1 sudo[250528]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:38:44 compute-1 sudo[250528]: pam_unix(sudo:session): session closed for user root
Jan 20 14:38:44 compute-1 ceph-mon[81775]: pgmap v1511: 321 pgs: 321 active+clean; 88 MiB data, 595 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 42 op/s
Jan 20 14:38:44 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/2224742800' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:38:44 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:38:44 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:38:44 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:38:44.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:38:45 compute-1 nova_compute[225855]: 2026-01-20 14:38:45.042 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:38:45 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:38:45 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:38:45 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:38:45.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:38:45 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:38:46 compute-1 ceph-mon[81775]: pgmap v1512: 321 pgs: 321 active+clean; 88 MiB data, 601 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 43 op/s
Jan 20 14:38:46 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:38:46 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:38:46 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:38:46.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:38:47 compute-1 podman[250554]: 2026-01-20 14:38:47.044044085 +0000 UTC m=+0.088079897 container health_status 72be95c12e041eb2cd1a16ad70dfa1391d9f03a5f3180a614ead9459dcfcd46f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Jan 20 14:38:47 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:38:47 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:38:47 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:38:47.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:38:47 compute-1 ceph-mon[81775]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #64. Immutable memtables: 0.
Jan 20 14:38:47 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:38:47.887048) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 20 14:38:47 compute-1 ceph-mon[81775]: rocksdb: [db/flush_job.cc:856] [default] [JOB 37] Flushing memtable with next log file: 64
Jan 20 14:38:47 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768919927887519, "job": 37, "event": "flush_started", "num_memtables": 1, "num_entries": 2829, "num_deletes": 526, "total_data_size": 5727781, "memory_usage": 5817688, "flush_reason": "Manual Compaction"}
Jan 20 14:38:47 compute-1 ceph-mon[81775]: rocksdb: [db/flush_job.cc:885] [default] [JOB 37] Level-0 flush table #65: started
Jan 20 14:38:47 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768919927926680, "cf_name": "default", "job": 37, "event": "table_file_creation", "file_number": 65, "file_size": 3737600, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 33360, "largest_seqno": 36184, "table_properties": {"data_size": 3726364, "index_size": 6834, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3333, "raw_key_size": 26949, "raw_average_key_size": 20, "raw_value_size": 3701618, "raw_average_value_size": 2791, "num_data_blocks": 296, "num_entries": 1326, "num_filter_entries": 1326, "num_deletions": 526, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768919747, "oldest_key_time": 1768919747, "file_creation_time": 1768919927, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1539d774-8a6f-4e48-b253-137c44586344", "db_session_id": "LFF7G2OZDOU7TKQ8MKAH", "orig_file_number": 65, "seqno_to_time_mapping": "N/A"}}
Jan 20 14:38:47 compute-1 ceph-mon[81775]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 37] Flush lasted 39703 microseconds, and 7470 cpu microseconds.
Jan 20 14:38:47 compute-1 ceph-mon[81775]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 14:38:47 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:38:47.926751) [db/flush_job.cc:967] [default] [JOB 37] Level-0 flush table #65: 3737600 bytes OK
Jan 20 14:38:47 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:38:47.926783) [db/memtable_list.cc:519] [default] Level-0 commit table #65 started
Jan 20 14:38:47 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:38:47.928922) [db/memtable_list.cc:722] [default] Level-0 commit table #65: memtable #1 done
Jan 20 14:38:47 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:38:47.928940) EVENT_LOG_v1 {"time_micros": 1768919927928934, "job": 37, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 20 14:38:47 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:38:47.928961) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 20 14:38:47 compute-1 ceph-mon[81775]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 37] Try to delete WAL files size 5714208, prev total WAL file size 5714208, number of live WAL files 2.
Jan 20 14:38:47 compute-1 ceph-mon[81775]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000061.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 14:38:47 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:38:47.930699) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032373631' seq:72057594037927935, type:22 .. '7061786F730033303133' seq:0, type:0; will stop at (end)
Jan 20 14:38:47 compute-1 ceph-mon[81775]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 38] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 20 14:38:47 compute-1 ceph-mon[81775]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 37 Base level 0, inputs: [65(3650KB)], [63(9942KB)]
Jan 20 14:38:47 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768919927930730, "job": 38, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [65], "files_L6": [63], "score": -1, "input_data_size": 13918435, "oldest_snapshot_seqno": -1}
Jan 20 14:38:48 compute-1 ceph-mon[81775]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 38] Generated table #66: 6238 keys, 11936636 bytes, temperature: kUnknown
Jan 20 14:38:48 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768919928061241, "cf_name": "default", "job": 38, "event": "table_file_creation", "file_number": 66, "file_size": 11936636, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11891289, "index_size": 28661, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15621, "raw_key_size": 159267, "raw_average_key_size": 25, "raw_value_size": 11775603, "raw_average_value_size": 1887, "num_data_blocks": 1154, "num_entries": 6238, "num_filter_entries": 6238, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768917474, "oldest_key_time": 0, "file_creation_time": 1768919927, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1539d774-8a6f-4e48-b253-137c44586344", "db_session_id": "LFF7G2OZDOU7TKQ8MKAH", "orig_file_number": 66, "seqno_to_time_mapping": "N/A"}}
Jan 20 14:38:48 compute-1 ceph-mon[81775]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 14:38:48 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:38:48.061527) [db/compaction/compaction_job.cc:1663] [default] [JOB 38] Compacted 1@0 + 1@6 files to L6 => 11936636 bytes
Jan 20 14:38:48 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:38:48.063424) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 106.6 rd, 91.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.6, 9.7 +0.0 blob) out(11.4 +0.0 blob), read-write-amplify(6.9) write-amplify(3.2) OK, records in: 7294, records dropped: 1056 output_compression: NoCompression
Jan 20 14:38:48 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:38:48.063444) EVENT_LOG_v1 {"time_micros": 1768919928063434, "job": 38, "event": "compaction_finished", "compaction_time_micros": 130624, "compaction_time_cpu_micros": 25289, "output_level": 6, "num_output_files": 1, "total_output_size": 11936636, "num_input_records": 7294, "num_output_records": 6238, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 20 14:38:48 compute-1 ceph-mon[81775]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000065.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 14:38:48 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768919928064568, "job": 38, "event": "table_file_deletion", "file_number": 65}
Jan 20 14:38:48 compute-1 ceph-mon[81775]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000063.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 14:38:48 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768919928066698, "job": 38, "event": "table_file_deletion", "file_number": 63}
Jan 20 14:38:48 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:38:47.930570) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:38:48 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:38:48.066776) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:38:48 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:38:48.066780) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:38:48 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:38:48.066783) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:38:48 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:38:48.066784) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:38:48 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:38:48.066786) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:38:48 compute-1 nova_compute[225855]: 2026-01-20 14:38:48.324 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:38:48 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:38:48 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:38:48 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:38:48.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:38:48 compute-1 ceph-mon[81775]: pgmap v1513: 321 pgs: 321 active+clean; 97 MiB data, 605 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 2.1 MiB/s wr, 40 op/s
Jan 20 14:38:49 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:38:49 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:38:49 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:38:49.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:38:49 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/3403050049' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:38:49 compute-1 ceph-mon[81775]: pgmap v1514: 321 pgs: 321 active+clean; 114 MiB data, 614 MiB used, 20 GiB / 21 GiB avail; 23 KiB/s rd, 2.9 MiB/s wr, 39 op/s
Jan 20 14:38:50 compute-1 nova_compute[225855]: 2026-01-20 14:38:50.044 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:38:50 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:38:50 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:38:50 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:38:50 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:38:50.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:38:50 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/4123654074' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:38:51 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:38:51 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:38:51 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:38:51.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:38:51 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/4161260471' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:38:51 compute-1 ceph-mon[81775]: pgmap v1515: 321 pgs: 321 active+clean; 147 MiB data, 622 MiB used, 20 GiB / 21 GiB avail; 46 KiB/s rd, 3.1 MiB/s wr, 74 op/s
Jan 20 14:38:52 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:38:52 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:38:52 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:38:52.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:38:53 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/3938428519' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:38:53 compute-1 nova_compute[225855]: 2026-01-20 14:38:53.326 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:38:53 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:38:53 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:38:53 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:38:53.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:38:54 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/971674263' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:38:54 compute-1 ceph-mon[81775]: pgmap v1516: 321 pgs: 321 active+clean; 166 MiB data, 635 MiB used, 20 GiB / 21 GiB avail; 66 KiB/s rd, 3.1 MiB/s wr, 106 op/s
Jan 20 14:38:54 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:38:54 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:38:54 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:38:54.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:38:55 compute-1 nova_compute[225855]: 2026-01-20 14:38:55.046 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:38:55 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:38:55 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:38:55 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:38:55.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:38:55 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:38:56 compute-1 ceph-mon[81775]: pgmap v1517: 321 pgs: 321 active+clean; 181 MiB data, 648 MiB used, 20 GiB / 21 GiB avail; 858 KiB/s rd, 3.6 MiB/s wr, 243 op/s
Jan 20 14:38:56 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:38:56 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:38:56 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:38:56.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:38:57 compute-1 ovn_controller[130490]: 2026-01-20T14:38:57Z|00183|memory_trim|INFO|Detected inactivity (last active 30002 ms ago): trimming memory
Jan 20 14:38:57 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:38:57 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:38:57 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:38:57.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:38:58 compute-1 podman[250586]: 2026-01-20 14:38:58.018064239 +0000 UTC m=+0.060561541 container health_status 533cd8e0c3b8b7f910f15cef49e7f879ab46c12ce646a35628c15f13725eb500 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 20 14:38:58 compute-1 nova_compute[225855]: 2026-01-20 14:38:58.327 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:38:58 compute-1 ceph-mon[81775]: pgmap v1518: 321 pgs: 321 active+clean; 181 MiB data, 632 MiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 3.6 MiB/s wr, 272 op/s
Jan 20 14:38:58 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:38:58 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:38:58 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:38:58.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:38:59 compute-1 nova_compute[225855]: 2026-01-20 14:38:59.665 225859 DEBUG oslo_concurrency.lockutils [None req-d8f41aaa-9211-42c1-843f-fe24927da5f5 ddee6eb6c32d451ca50c9ea499a23c1a 4dadd5f5212f432693d35e765126f4df - - default default] Acquiring lock "c12d0bd2-ff69-4827-a5a0-8bf5e44094f7" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:38:59 compute-1 nova_compute[225855]: 2026-01-20 14:38:59.666 225859 DEBUG oslo_concurrency.lockutils [None req-d8f41aaa-9211-42c1-843f-fe24927da5f5 ddee6eb6c32d451ca50c9ea499a23c1a 4dadd5f5212f432693d35e765126f4df - - default default] Lock "c12d0bd2-ff69-4827-a5a0-8bf5e44094f7" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:38:59 compute-1 nova_compute[225855]: 2026-01-20 14:38:59.692 225859 DEBUG nova.compute.manager [None req-d8f41aaa-9211-42c1-843f-fe24927da5f5 ddee6eb6c32d451ca50c9ea499a23c1a 4dadd5f5212f432693d35e765126f4df - - default default] [instance: c12d0bd2-ff69-4827-a5a0-8bf5e44094f7] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 20 14:38:59 compute-1 nova_compute[225855]: 2026-01-20 14:38:59.797 225859 DEBUG oslo_concurrency.lockutils [None req-d8f41aaa-9211-42c1-843f-fe24927da5f5 ddee6eb6c32d451ca50c9ea499a23c1a 4dadd5f5212f432693d35e765126f4df - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:38:59 compute-1 nova_compute[225855]: 2026-01-20 14:38:59.798 225859 DEBUG oslo_concurrency.lockutils [None req-d8f41aaa-9211-42c1-843f-fe24927da5f5 ddee6eb6c32d451ca50c9ea499a23c1a 4dadd5f5212f432693d35e765126f4df - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:38:59 compute-1 nova_compute[225855]: 2026-01-20 14:38:59.805 225859 DEBUG nova.virt.hardware [None req-d8f41aaa-9211-42c1-843f-fe24927da5f5 ddee6eb6c32d451ca50c9ea499a23c1a 4dadd5f5212f432693d35e765126f4df - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 20 14:38:59 compute-1 nova_compute[225855]: 2026-01-20 14:38:59.805 225859 INFO nova.compute.claims [None req-d8f41aaa-9211-42c1-843f-fe24927da5f5 ddee6eb6c32d451ca50c9ea499a23c1a 4dadd5f5212f432693d35e765126f4df - - default default] [instance: c12d0bd2-ff69-4827-a5a0-8bf5e44094f7] Claim successful on node compute-1.ctlplane.example.com
Jan 20 14:38:59 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:38:59 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:38:59 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:38:59.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:38:59 compute-1 nova_compute[225855]: 2026-01-20 14:38:59.906 225859 DEBUG oslo_concurrency.processutils [None req-d8f41aaa-9211-42c1-843f-fe24927da5f5 ddee6eb6c32d451ca50c9ea499a23c1a 4dadd5f5212f432693d35e765126f4df - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:39:00 compute-1 nova_compute[225855]: 2026-01-20 14:39:00.048 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:39:00 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:39:00 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/616156782' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:39:00 compute-1 nova_compute[225855]: 2026-01-20 14:39:00.384 225859 DEBUG oslo_concurrency.processutils [None req-d8f41aaa-9211-42c1-843f-fe24927da5f5 ddee6eb6c32d451ca50c9ea499a23c1a 4dadd5f5212f432693d35e765126f4df - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:39:00 compute-1 nova_compute[225855]: 2026-01-20 14:39:00.391 225859 DEBUG nova.compute.provider_tree [None req-d8f41aaa-9211-42c1-843f-fe24927da5f5 ddee6eb6c32d451ca50c9ea499a23c1a 4dadd5f5212f432693d35e765126f4df - - default default] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:39:00 compute-1 nova_compute[225855]: 2026-01-20 14:39:00.420 225859 DEBUG nova.scheduler.client.report [None req-d8f41aaa-9211-42c1-843f-fe24927da5f5 ddee6eb6c32d451ca50c9ea499a23c1a 4dadd5f5212f432693d35e765126f4df - - default default] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:39:00 compute-1 nova_compute[225855]: 2026-01-20 14:39:00.444 225859 DEBUG oslo_concurrency.lockutils [None req-d8f41aaa-9211-42c1-843f-fe24927da5f5 ddee6eb6c32d451ca50c9ea499a23c1a 4dadd5f5212f432693d35e765126f4df - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.646s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:39:00 compute-1 nova_compute[225855]: 2026-01-20 14:39:00.445 225859 DEBUG nova.compute.manager [None req-d8f41aaa-9211-42c1-843f-fe24927da5f5 ddee6eb6c32d451ca50c9ea499a23c1a 4dadd5f5212f432693d35e765126f4df - - default default] [instance: c12d0bd2-ff69-4827-a5a0-8bf5e44094f7] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 20 14:39:00 compute-1 nova_compute[225855]: 2026-01-20 14:39:00.502 225859 DEBUG nova.compute.manager [None req-d8f41aaa-9211-42c1-843f-fe24927da5f5 ddee6eb6c32d451ca50c9ea499a23c1a 4dadd5f5212f432693d35e765126f4df - - default default] [instance: c12d0bd2-ff69-4827-a5a0-8bf5e44094f7] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 20 14:39:00 compute-1 nova_compute[225855]: 2026-01-20 14:39:00.503 225859 DEBUG nova.network.neutron [None req-d8f41aaa-9211-42c1-843f-fe24927da5f5 ddee6eb6c32d451ca50c9ea499a23c1a 4dadd5f5212f432693d35e765126f4df - - default default] [instance: c12d0bd2-ff69-4827-a5a0-8bf5e44094f7] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 20 14:39:00 compute-1 nova_compute[225855]: 2026-01-20 14:39:00.523 225859 INFO nova.virt.libvirt.driver [None req-d8f41aaa-9211-42c1-843f-fe24927da5f5 ddee6eb6c32d451ca50c9ea499a23c1a 4dadd5f5212f432693d35e765126f4df - - default default] [instance: c12d0bd2-ff69-4827-a5a0-8bf5e44094f7] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 20 14:39:00 compute-1 nova_compute[225855]: 2026-01-20 14:39:00.544 225859 DEBUG nova.compute.manager [None req-d8f41aaa-9211-42c1-843f-fe24927da5f5 ddee6eb6c32d451ca50c9ea499a23c1a 4dadd5f5212f432693d35e765126f4df - - default default] [instance: c12d0bd2-ff69-4827-a5a0-8bf5e44094f7] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 20 14:39:00 compute-1 nova_compute[225855]: 2026-01-20 14:39:00.658 225859 DEBUG nova.compute.manager [None req-d8f41aaa-9211-42c1-843f-fe24927da5f5 ddee6eb6c32d451ca50c9ea499a23c1a 4dadd5f5212f432693d35e765126f4df - - default default] [instance: c12d0bd2-ff69-4827-a5a0-8bf5e44094f7] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 20 14:39:00 compute-1 nova_compute[225855]: 2026-01-20 14:39:00.660 225859 DEBUG nova.virt.libvirt.driver [None req-d8f41aaa-9211-42c1-843f-fe24927da5f5 ddee6eb6c32d451ca50c9ea499a23c1a 4dadd5f5212f432693d35e765126f4df - - default default] [instance: c12d0bd2-ff69-4827-a5a0-8bf5e44094f7] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 20 14:39:00 compute-1 nova_compute[225855]: 2026-01-20 14:39:00.661 225859 INFO nova.virt.libvirt.driver [None req-d8f41aaa-9211-42c1-843f-fe24927da5f5 ddee6eb6c32d451ca50c9ea499a23c1a 4dadd5f5212f432693d35e765126f4df - - default default] [instance: c12d0bd2-ff69-4827-a5a0-8bf5e44094f7] Creating image(s)
Jan 20 14:39:00 compute-1 ceph-mon[81775]: pgmap v1519: 321 pgs: 321 active+clean; 181 MiB data, 632 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 3.2 MiB/s wr, 312 op/s
Jan 20 14:39:00 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/616156782' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:39:00 compute-1 nova_compute[225855]: 2026-01-20 14:39:00.699 225859 DEBUG nova.storage.rbd_utils [None req-d8f41aaa-9211-42c1-843f-fe24927da5f5 ddee6eb6c32d451ca50c9ea499a23c1a 4dadd5f5212f432693d35e765126f4df - - default default] rbd image c12d0bd2-ff69-4827-a5a0-8bf5e44094f7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:39:00 compute-1 nova_compute[225855]: 2026-01-20 14:39:00.730 225859 DEBUG nova.storage.rbd_utils [None req-d8f41aaa-9211-42c1-843f-fe24927da5f5 ddee6eb6c32d451ca50c9ea499a23c1a 4dadd5f5212f432693d35e765126f4df - - default default] rbd image c12d0bd2-ff69-4827-a5a0-8bf5e44094f7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:39:00 compute-1 nova_compute[225855]: 2026-01-20 14:39:00.758 225859 DEBUG nova.storage.rbd_utils [None req-d8f41aaa-9211-42c1-843f-fe24927da5f5 ddee6eb6c32d451ca50c9ea499a23c1a 4dadd5f5212f432693d35e765126f4df - - default default] rbd image c12d0bd2-ff69-4827-a5a0-8bf5e44094f7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:39:00 compute-1 nova_compute[225855]: 2026-01-20 14:39:00.761 225859 DEBUG oslo_concurrency.processutils [None req-d8f41aaa-9211-42c1-843f-fe24927da5f5 ddee6eb6c32d451ca50c9ea499a23c1a 4dadd5f5212f432693d35e765126f4df - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:39:00 compute-1 nova_compute[225855]: 2026-01-20 14:39:00.838 225859 DEBUG oslo_concurrency.processutils [None req-d8f41aaa-9211-42c1-843f-fe24927da5f5 ddee6eb6c32d451ca50c9ea499a23c1a 4dadd5f5212f432693d35e765126f4df - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:39:00 compute-1 nova_compute[225855]: 2026-01-20 14:39:00.839 225859 DEBUG oslo_concurrency.lockutils [None req-d8f41aaa-9211-42c1-843f-fe24927da5f5 ddee6eb6c32d451ca50c9ea499a23c1a 4dadd5f5212f432693d35e765126f4df - - default default] Acquiring lock "82d5c1918fd7c974214c7a48c1793a7a82560462" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:39:00 compute-1 nova_compute[225855]: 2026-01-20 14:39:00.839 225859 DEBUG oslo_concurrency.lockutils [None req-d8f41aaa-9211-42c1-843f-fe24927da5f5 ddee6eb6c32d451ca50c9ea499a23c1a 4dadd5f5212f432693d35e765126f4df - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:39:00 compute-1 nova_compute[225855]: 2026-01-20 14:39:00.840 225859 DEBUG oslo_concurrency.lockutils [None req-d8f41aaa-9211-42c1-843f-fe24927da5f5 ddee6eb6c32d451ca50c9ea499a23c1a 4dadd5f5212f432693d35e765126f4df - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:39:00 compute-1 nova_compute[225855]: 2026-01-20 14:39:00.863 225859 DEBUG nova.storage.rbd_utils [None req-d8f41aaa-9211-42c1-843f-fe24927da5f5 ddee6eb6c32d451ca50c9ea499a23c1a 4dadd5f5212f432693d35e765126f4df - - default default] rbd image c12d0bd2-ff69-4827-a5a0-8bf5e44094f7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:39:00 compute-1 nova_compute[225855]: 2026-01-20 14:39:00.866 225859 DEBUG oslo_concurrency.processutils [None req-d8f41aaa-9211-42c1-843f-fe24927da5f5 ddee6eb6c32d451ca50c9ea499a23c1a 4dadd5f5212f432693d35e765126f4df - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 c12d0bd2-ff69-4827-a5a0-8bf5e44094f7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:39:00 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:39:00 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:39:00 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:39:00 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:39:00.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:39:01 compute-1 nova_compute[225855]: 2026-01-20 14:39:01.190 225859 DEBUG oslo_concurrency.processutils [None req-d8f41aaa-9211-42c1-843f-fe24927da5f5 ddee6eb6c32d451ca50c9ea499a23c1a 4dadd5f5212f432693d35e765126f4df - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 c12d0bd2-ff69-4827-a5a0-8bf5e44094f7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.324s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:39:01 compute-1 nova_compute[225855]: 2026-01-20 14:39:01.262 225859 DEBUG nova.storage.rbd_utils [None req-d8f41aaa-9211-42c1-843f-fe24927da5f5 ddee6eb6c32d451ca50c9ea499a23c1a 4dadd5f5212f432693d35e765126f4df - - default default] resizing rbd image c12d0bd2-ff69-4827-a5a0-8bf5e44094f7_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 20 14:39:01 compute-1 nova_compute[225855]: 2026-01-20 14:39:01.372 225859 DEBUG nova.policy [None req-d8f41aaa-9211-42c1-843f-fe24927da5f5 ddee6eb6c32d451ca50c9ea499a23c1a 4dadd5f5212f432693d35e765126f4df - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ddee6eb6c32d451ca50c9ea499a23c1a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '4dadd5f5212f432693d35e765126f4df', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 20 14:39:01 compute-1 nova_compute[225855]: 2026-01-20 14:39:01.379 225859 DEBUG nova.objects.instance [None req-d8f41aaa-9211-42c1-843f-fe24927da5f5 ddee6eb6c32d451ca50c9ea499a23c1a 4dadd5f5212f432693d35e765126f4df - - default default] Lazy-loading 'migration_context' on Instance uuid c12d0bd2-ff69-4827-a5a0-8bf5e44094f7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:39:01 compute-1 nova_compute[225855]: 2026-01-20 14:39:01.392 225859 DEBUG nova.virt.libvirt.driver [None req-d8f41aaa-9211-42c1-843f-fe24927da5f5 ddee6eb6c32d451ca50c9ea499a23c1a 4dadd5f5212f432693d35e765126f4df - - default default] [instance: c12d0bd2-ff69-4827-a5a0-8bf5e44094f7] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 20 14:39:01 compute-1 nova_compute[225855]: 2026-01-20 14:39:01.393 225859 DEBUG nova.virt.libvirt.driver [None req-d8f41aaa-9211-42c1-843f-fe24927da5f5 ddee6eb6c32d451ca50c9ea499a23c1a 4dadd5f5212f432693d35e765126f4df - - default default] [instance: c12d0bd2-ff69-4827-a5a0-8bf5e44094f7] Ensure instance console log exists: /var/lib/nova/instances/c12d0bd2-ff69-4827-a5a0-8bf5e44094f7/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 20 14:39:01 compute-1 nova_compute[225855]: 2026-01-20 14:39:01.393 225859 DEBUG oslo_concurrency.lockutils [None req-d8f41aaa-9211-42c1-843f-fe24927da5f5 ddee6eb6c32d451ca50c9ea499a23c1a 4dadd5f5212f432693d35e765126f4df - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:39:01 compute-1 nova_compute[225855]: 2026-01-20 14:39:01.394 225859 DEBUG oslo_concurrency.lockutils [None req-d8f41aaa-9211-42c1-843f-fe24927da5f5 ddee6eb6c32d451ca50c9ea499a23c1a 4dadd5f5212f432693d35e765126f4df - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:39:01 compute-1 nova_compute[225855]: 2026-01-20 14:39:01.394 225859 DEBUG oslo_concurrency.lockutils [None req-d8f41aaa-9211-42c1-843f-fe24927da5f5 ddee6eb6c32d451ca50c9ea499a23c1a 4dadd5f5212f432693d35e765126f4df - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:39:01 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:39:01 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:39:01 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:39:01.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:39:01 compute-1 nova_compute[225855]: 2026-01-20 14:39:01.888 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:39:01 compute-1 nova_compute[225855]: 2026-01-20 14:39:01.922 225859 WARNING nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] While synchronizing instance power states, found 1 instances in the database and 0 instances on the hypervisor.
Jan 20 14:39:01 compute-1 nova_compute[225855]: 2026-01-20 14:39:01.923 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Triggering sync for uuid c12d0bd2-ff69-4827-a5a0-8bf5e44094f7 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Jan 20 14:39:01 compute-1 nova_compute[225855]: 2026-01-20 14:39:01.924 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "c12d0bd2-ff69-4827-a5a0-8bf5e44094f7" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:39:01 compute-1 anacron[89046]: Job `cron.daily' started
Jan 20 14:39:02 compute-1 anacron[89046]: Job `cron.daily' terminated
Jan 20 14:39:02 compute-1 nova_compute[225855]: 2026-01-20 14:39:02.433 225859 DEBUG nova.network.neutron [None req-d8f41aaa-9211-42c1-843f-fe24927da5f5 ddee6eb6c32d451ca50c9ea499a23c1a 4dadd5f5212f432693d35e765126f4df - - default default] [instance: c12d0bd2-ff69-4827-a5a0-8bf5e44094f7] Successfully created port: 722de795-61c5-4a11-ade3-6c19621e1054 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 20 14:39:02 compute-1 ceph-mon[81775]: pgmap v1520: 321 pgs: 321 active+clean; 181 MiB data, 632 MiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 2.5 MiB/s wr, 323 op/s
Jan 20 14:39:02 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:39:02 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:39:02 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:39:02.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:39:03 compute-1 nova_compute[225855]: 2026-01-20 14:39:03.328 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:39:03 compute-1 nova_compute[225855]: 2026-01-20 14:39:03.462 225859 DEBUG nova.network.neutron [None req-d8f41aaa-9211-42c1-843f-fe24927da5f5 ddee6eb6c32d451ca50c9ea499a23c1a 4dadd5f5212f432693d35e765126f4df - - default default] [instance: c12d0bd2-ff69-4827-a5a0-8bf5e44094f7] Successfully updated port: 722de795-61c5-4a11-ade3-6c19621e1054 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 20 14:39:03 compute-1 nova_compute[225855]: 2026-01-20 14:39:03.482 225859 DEBUG oslo_concurrency.lockutils [None req-d8f41aaa-9211-42c1-843f-fe24927da5f5 ddee6eb6c32d451ca50c9ea499a23c1a 4dadd5f5212f432693d35e765126f4df - - default default] Acquiring lock "refresh_cache-c12d0bd2-ff69-4827-a5a0-8bf5e44094f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:39:03 compute-1 nova_compute[225855]: 2026-01-20 14:39:03.482 225859 DEBUG oslo_concurrency.lockutils [None req-d8f41aaa-9211-42c1-843f-fe24927da5f5 ddee6eb6c32d451ca50c9ea499a23c1a 4dadd5f5212f432693d35e765126f4df - - default default] Acquired lock "refresh_cache-c12d0bd2-ff69-4827-a5a0-8bf5e44094f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:39:03 compute-1 nova_compute[225855]: 2026-01-20 14:39:03.482 225859 DEBUG nova.network.neutron [None req-d8f41aaa-9211-42c1-843f-fe24927da5f5 ddee6eb6c32d451ca50c9ea499a23c1a 4dadd5f5212f432693d35e765126f4df - - default default] [instance: c12d0bd2-ff69-4827-a5a0-8bf5e44094f7] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 14:39:03 compute-1 nova_compute[225855]: 2026-01-20 14:39:03.554 225859 DEBUG nova.compute.manager [req-a13f5730-2c44-4baf-8ad3-87ea95078a0b req-638fa99b-ce39-4b86-a40e-f22791c5e373 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: c12d0bd2-ff69-4827-a5a0-8bf5e44094f7] Received event network-changed-722de795-61c5-4a11-ade3-6c19621e1054 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:39:03 compute-1 nova_compute[225855]: 2026-01-20 14:39:03.555 225859 DEBUG nova.compute.manager [req-a13f5730-2c44-4baf-8ad3-87ea95078a0b req-638fa99b-ce39-4b86-a40e-f22791c5e373 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: c12d0bd2-ff69-4827-a5a0-8bf5e44094f7] Refreshing instance network info cache due to event network-changed-722de795-61c5-4a11-ade3-6c19621e1054. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 14:39:03 compute-1 nova_compute[225855]: 2026-01-20 14:39:03.556 225859 DEBUG oslo_concurrency.lockutils [req-a13f5730-2c44-4baf-8ad3-87ea95078a0b req-638fa99b-ce39-4b86-a40e-f22791c5e373 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-c12d0bd2-ff69-4827-a5a0-8bf5e44094f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:39:03 compute-1 nova_compute[225855]: 2026-01-20 14:39:03.635 225859 DEBUG nova.network.neutron [None req-d8f41aaa-9211-42c1-843f-fe24927da5f5 ddee6eb6c32d451ca50c9ea499a23c1a 4dadd5f5212f432693d35e765126f4df - - default default] [instance: c12d0bd2-ff69-4827-a5a0-8bf5e44094f7] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 20 14:39:03 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:39:03 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:39:03 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:39:03.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:39:04 compute-1 nova_compute[225855]: 2026-01-20 14:39:04.382 225859 DEBUG nova.network.neutron [None req-d8f41aaa-9211-42c1-843f-fe24927da5f5 ddee6eb6c32d451ca50c9ea499a23c1a 4dadd5f5212f432693d35e765126f4df - - default default] [instance: c12d0bd2-ff69-4827-a5a0-8bf5e44094f7] Updating instance_info_cache with network_info: [{"id": "722de795-61c5-4a11-ade3-6c19621e1054", "address": "fa:16:3e:be:4b:da", "network": {"id": "c5a9008d-9eea-43f2-a495-bf2e645a81fb", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-2127234184-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4dadd5f5212f432693d35e765126f4df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap722de795-61", "ovs_interfaceid": "722de795-61c5-4a11-ade3-6c19621e1054", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:39:04 compute-1 nova_compute[225855]: 2026-01-20 14:39:04.400 225859 DEBUG oslo_concurrency.lockutils [None req-d8f41aaa-9211-42c1-843f-fe24927da5f5 ddee6eb6c32d451ca50c9ea499a23c1a 4dadd5f5212f432693d35e765126f4df - - default default] Releasing lock "refresh_cache-c12d0bd2-ff69-4827-a5a0-8bf5e44094f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:39:04 compute-1 nova_compute[225855]: 2026-01-20 14:39:04.400 225859 DEBUG nova.compute.manager [None req-d8f41aaa-9211-42c1-843f-fe24927da5f5 ddee6eb6c32d451ca50c9ea499a23c1a 4dadd5f5212f432693d35e765126f4df - - default default] [instance: c12d0bd2-ff69-4827-a5a0-8bf5e44094f7] Instance network_info: |[{"id": "722de795-61c5-4a11-ade3-6c19621e1054", "address": "fa:16:3e:be:4b:da", "network": {"id": "c5a9008d-9eea-43f2-a495-bf2e645a81fb", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-2127234184-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4dadd5f5212f432693d35e765126f4df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap722de795-61", "ovs_interfaceid": "722de795-61c5-4a11-ade3-6c19621e1054", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 20 14:39:04 compute-1 nova_compute[225855]: 2026-01-20 14:39:04.401 225859 DEBUG oslo_concurrency.lockutils [req-a13f5730-2c44-4baf-8ad3-87ea95078a0b req-638fa99b-ce39-4b86-a40e-f22791c5e373 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-c12d0bd2-ff69-4827-a5a0-8bf5e44094f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:39:04 compute-1 nova_compute[225855]: 2026-01-20 14:39:04.401 225859 DEBUG nova.network.neutron [req-a13f5730-2c44-4baf-8ad3-87ea95078a0b req-638fa99b-ce39-4b86-a40e-f22791c5e373 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: c12d0bd2-ff69-4827-a5a0-8bf5e44094f7] Refreshing network info cache for port 722de795-61c5-4a11-ade3-6c19621e1054 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 14:39:04 compute-1 nova_compute[225855]: 2026-01-20 14:39:04.406 225859 DEBUG nova.virt.libvirt.driver [None req-d8f41aaa-9211-42c1-843f-fe24927da5f5 ddee6eb6c32d451ca50c9ea499a23c1a 4dadd5f5212f432693d35e765126f4df - - default default] [instance: c12d0bd2-ff69-4827-a5a0-8bf5e44094f7] Start _get_guest_xml network_info=[{"id": "722de795-61c5-4a11-ade3-6c19621e1054", "address": "fa:16:3e:be:4b:da", "network": {"id": "c5a9008d-9eea-43f2-a495-bf2e645a81fb", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-2127234184-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4dadd5f5212f432693d35e765126f4df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap722de795-61", "ovs_interfaceid": "722de795-61c5-4a11-ade3-6c19621e1054", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'device_type': 'disk', 'encryption_options': None, 'size': 0, 'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'guest_format': None, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'image_id': 'a32b3e07-16d8-46fd-9a7b-c242c432fcf9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 20 14:39:04 compute-1 nova_compute[225855]: 2026-01-20 14:39:04.412 225859 WARNING nova.virt.libvirt.driver [None req-d8f41aaa-9211-42c1-843f-fe24927da5f5 ddee6eb6c32d451ca50c9ea499a23c1a 4dadd5f5212f432693d35e765126f4df - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 14:39:04 compute-1 nova_compute[225855]: 2026-01-20 14:39:04.419 225859 DEBUG nova.virt.libvirt.host [None req-d8f41aaa-9211-42c1-843f-fe24927da5f5 ddee6eb6c32d451ca50c9ea499a23c1a 4dadd5f5212f432693d35e765126f4df - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 20 14:39:04 compute-1 nova_compute[225855]: 2026-01-20 14:39:04.420 225859 DEBUG nova.virt.libvirt.host [None req-d8f41aaa-9211-42c1-843f-fe24927da5f5 ddee6eb6c32d451ca50c9ea499a23c1a 4dadd5f5212f432693d35e765126f4df - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 20 14:39:04 compute-1 nova_compute[225855]: 2026-01-20 14:39:04.428 225859 DEBUG nova.virt.libvirt.host [None req-d8f41aaa-9211-42c1-843f-fe24927da5f5 ddee6eb6c32d451ca50c9ea499a23c1a 4dadd5f5212f432693d35e765126f4df - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 20 14:39:04 compute-1 nova_compute[225855]: 2026-01-20 14:39:04.429 225859 DEBUG nova.virt.libvirt.host [None req-d8f41aaa-9211-42c1-843f-fe24927da5f5 ddee6eb6c32d451ca50c9ea499a23c1a 4dadd5f5212f432693d35e765126f4df - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 20 14:39:04 compute-1 nova_compute[225855]: 2026-01-20 14:39:04.431 225859 DEBUG nova.virt.libvirt.driver [None req-d8f41aaa-9211-42c1-843f-fe24927da5f5 ddee6eb6c32d451ca50c9ea499a23c1a 4dadd5f5212f432693d35e765126f4df - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 20 14:39:04 compute-1 nova_compute[225855]: 2026-01-20 14:39:04.432 225859 DEBUG nova.virt.hardware [None req-d8f41aaa-9211-42c1-843f-fe24927da5f5 ddee6eb6c32d451ca50c9ea499a23c1a 4dadd5f5212f432693d35e765126f4df - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-20T14:21:55Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='522deaab-a741-4dbb-932d-d8b13a211c33',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 20 14:39:04 compute-1 nova_compute[225855]: 2026-01-20 14:39:04.432 225859 DEBUG nova.virt.hardware [None req-d8f41aaa-9211-42c1-843f-fe24927da5f5 ddee6eb6c32d451ca50c9ea499a23c1a 4dadd5f5212f432693d35e765126f4df - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 20 14:39:04 compute-1 nova_compute[225855]: 2026-01-20 14:39:04.433 225859 DEBUG nova.virt.hardware [None req-d8f41aaa-9211-42c1-843f-fe24927da5f5 ddee6eb6c32d451ca50c9ea499a23c1a 4dadd5f5212f432693d35e765126f4df - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 20 14:39:04 compute-1 nova_compute[225855]: 2026-01-20 14:39:04.433 225859 DEBUG nova.virt.hardware [None req-d8f41aaa-9211-42c1-843f-fe24927da5f5 ddee6eb6c32d451ca50c9ea499a23c1a 4dadd5f5212f432693d35e765126f4df - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 20 14:39:04 compute-1 nova_compute[225855]: 2026-01-20 14:39:04.434 225859 DEBUG nova.virt.hardware [None req-d8f41aaa-9211-42c1-843f-fe24927da5f5 ddee6eb6c32d451ca50c9ea499a23c1a 4dadd5f5212f432693d35e765126f4df - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 20 14:39:04 compute-1 nova_compute[225855]: 2026-01-20 14:39:04.434 225859 DEBUG nova.virt.hardware [None req-d8f41aaa-9211-42c1-843f-fe24927da5f5 ddee6eb6c32d451ca50c9ea499a23c1a 4dadd5f5212f432693d35e765126f4df - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 20 14:39:04 compute-1 nova_compute[225855]: 2026-01-20 14:39:04.434 225859 DEBUG nova.virt.hardware [None req-d8f41aaa-9211-42c1-843f-fe24927da5f5 ddee6eb6c32d451ca50c9ea499a23c1a 4dadd5f5212f432693d35e765126f4df - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 20 14:39:04 compute-1 nova_compute[225855]: 2026-01-20 14:39:04.435 225859 DEBUG nova.virt.hardware [None req-d8f41aaa-9211-42c1-843f-fe24927da5f5 ddee6eb6c32d451ca50c9ea499a23c1a 4dadd5f5212f432693d35e765126f4df - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 20 14:39:04 compute-1 nova_compute[225855]: 2026-01-20 14:39:04.435 225859 DEBUG nova.virt.hardware [None req-d8f41aaa-9211-42c1-843f-fe24927da5f5 ddee6eb6c32d451ca50c9ea499a23c1a 4dadd5f5212f432693d35e765126f4df - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 20 14:39:04 compute-1 nova_compute[225855]: 2026-01-20 14:39:04.436 225859 DEBUG nova.virt.hardware [None req-d8f41aaa-9211-42c1-843f-fe24927da5f5 ddee6eb6c32d451ca50c9ea499a23c1a 4dadd5f5212f432693d35e765126f4df - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 20 14:39:04 compute-1 nova_compute[225855]: 2026-01-20 14:39:04.436 225859 DEBUG nova.virt.hardware [None req-d8f41aaa-9211-42c1-843f-fe24927da5f5 ddee6eb6c32d451ca50c9ea499a23c1a 4dadd5f5212f432693d35e765126f4df - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 20 14:39:04 compute-1 nova_compute[225855]: 2026-01-20 14:39:04.441 225859 DEBUG oslo_concurrency.processutils [None req-d8f41aaa-9211-42c1-843f-fe24927da5f5 ddee6eb6c32d451ca50c9ea499a23c1a 4dadd5f5212f432693d35e765126f4df - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:39:04 compute-1 sudo[250799]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:39:04 compute-1 sudo[250799]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:39:04 compute-1 sudo[250799]: pam_unix(sudo:session): session closed for user root
Jan 20 14:39:04 compute-1 sudo[250824]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:39:04 compute-1 sudo[250824]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:39:04 compute-1 sudo[250824]: pam_unix(sudo:session): session closed for user root
Jan 20 14:39:04 compute-1 ceph-mon[81775]: pgmap v1521: 321 pgs: 321 active+clean; 197 MiB data, 637 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 2.0 MiB/s wr, 330 op/s
Jan 20 14:39:04 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:39:04 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3372781718' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:39:04 compute-1 nova_compute[225855]: 2026-01-20 14:39:04.925 225859 DEBUG oslo_concurrency.processutils [None req-d8f41aaa-9211-42c1-843f-fe24927da5f5 ddee6eb6c32d451ca50c9ea499a23c1a 4dadd5f5212f432693d35e765126f4df - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:39:04 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:39:04 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:39:04 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:39:04.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:39:04 compute-1 nova_compute[225855]: 2026-01-20 14:39:04.953 225859 DEBUG nova.storage.rbd_utils [None req-d8f41aaa-9211-42c1-843f-fe24927da5f5 ddee6eb6c32d451ca50c9ea499a23c1a 4dadd5f5212f432693d35e765126f4df - - default default] rbd image c12d0bd2-ff69-4827-a5a0-8bf5e44094f7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:39:04 compute-1 nova_compute[225855]: 2026-01-20 14:39:04.957 225859 DEBUG oslo_concurrency.processutils [None req-d8f41aaa-9211-42c1-843f-fe24927da5f5 ddee6eb6c32d451ca50c9ea499a23c1a 4dadd5f5212f432693d35e765126f4df - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:39:05 compute-1 nova_compute[225855]: 2026-01-20 14:39:05.049 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:39:05 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:39:05 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2327636316' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:39:05 compute-1 nova_compute[225855]: 2026-01-20 14:39:05.385 225859 DEBUG oslo_concurrency.processutils [None req-d8f41aaa-9211-42c1-843f-fe24927da5f5 ddee6eb6c32d451ca50c9ea499a23c1a 4dadd5f5212f432693d35e765126f4df - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:39:05 compute-1 nova_compute[225855]: 2026-01-20 14:39:05.387 225859 DEBUG nova.virt.libvirt.vif [None req-d8f41aaa-9211-42c1-843f-fe24927da5f5 ddee6eb6c32d451ca50c9ea499a23c1a 4dadd5f5212f432693d35e765126f4df - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T14:38:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestManualDisk-server-1848412501',display_name='tempest-ServersTestManualDisk-server-1848412501',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverstestmanualdisk-server-1848412501',id=63,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBN1H3ohoasKNqW9/qCu5grGBs3IV04fg9gxxB4grih5Zd5WxPzj2gaQyovrov9cUlcTcdLXAKoF+QUCFPVVxhI1Y4NXPI0qz/O7wrYwAYL2Je6ImmzeATRgxmFMwN+zj/A==',key_name='tempest-keypair-1139962663',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4dadd5f5212f432693d35e765126f4df',ramdisk_id='',reservation_id='r-l9tyz3ue',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestManualDisk-665340674',owner_user_name='tempest-ServersTestManualDisk-665340674-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T14:39:00Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ddee6eb6c32d451ca50c9ea499a23c1a',uuid=c12d0bd2-ff69-4827-a5a0-8bf5e44094f7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "722de795-61c5-4a11-ade3-6c19621e1054", "address": "fa:16:3e:be:4b:da", "network": {"id": "c5a9008d-9eea-43f2-a495-bf2e645a81fb", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-2127234184-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4dadd5f5212f432693d35e765126f4df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap722de795-61", "ovs_interfaceid": "722de795-61c5-4a11-ade3-6c19621e1054", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 20 14:39:05 compute-1 nova_compute[225855]: 2026-01-20 14:39:05.388 225859 DEBUG nova.network.os_vif_util [None req-d8f41aaa-9211-42c1-843f-fe24927da5f5 ddee6eb6c32d451ca50c9ea499a23c1a 4dadd5f5212f432693d35e765126f4df - - default default] Converting VIF {"id": "722de795-61c5-4a11-ade3-6c19621e1054", "address": "fa:16:3e:be:4b:da", "network": {"id": "c5a9008d-9eea-43f2-a495-bf2e645a81fb", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-2127234184-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4dadd5f5212f432693d35e765126f4df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap722de795-61", "ovs_interfaceid": "722de795-61c5-4a11-ade3-6c19621e1054", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:39:05 compute-1 nova_compute[225855]: 2026-01-20 14:39:05.389 225859 DEBUG nova.network.os_vif_util [None req-d8f41aaa-9211-42c1-843f-fe24927da5f5 ddee6eb6c32d451ca50c9ea499a23c1a 4dadd5f5212f432693d35e765126f4df - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:be:4b:da,bridge_name='br-int',has_traffic_filtering=True,id=722de795-61c5-4a11-ade3-6c19621e1054,network=Network(c5a9008d-9eea-43f2-a495-bf2e645a81fb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap722de795-61') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:39:05 compute-1 nova_compute[225855]: 2026-01-20 14:39:05.390 225859 DEBUG nova.objects.instance [None req-d8f41aaa-9211-42c1-843f-fe24927da5f5 ddee6eb6c32d451ca50c9ea499a23c1a 4dadd5f5212f432693d35e765126f4df - - default default] Lazy-loading 'pci_devices' on Instance uuid c12d0bd2-ff69-4827-a5a0-8bf5e44094f7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:39:05 compute-1 nova_compute[225855]: 2026-01-20 14:39:05.408 225859 DEBUG nova.virt.libvirt.driver [None req-d8f41aaa-9211-42c1-843f-fe24927da5f5 ddee6eb6c32d451ca50c9ea499a23c1a 4dadd5f5212f432693d35e765126f4df - - default default] [instance: c12d0bd2-ff69-4827-a5a0-8bf5e44094f7] End _get_guest_xml xml=<domain type="kvm">
Jan 20 14:39:05 compute-1 nova_compute[225855]:   <uuid>c12d0bd2-ff69-4827-a5a0-8bf5e44094f7</uuid>
Jan 20 14:39:05 compute-1 nova_compute[225855]:   <name>instance-0000003f</name>
Jan 20 14:39:05 compute-1 nova_compute[225855]:   <memory>131072</memory>
Jan 20 14:39:05 compute-1 nova_compute[225855]:   <vcpu>1</vcpu>
Jan 20 14:39:05 compute-1 nova_compute[225855]:   <metadata>
Jan 20 14:39:05 compute-1 nova_compute[225855]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 14:39:05 compute-1 nova_compute[225855]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 14:39:05 compute-1 nova_compute[225855]:       <nova:name>tempest-ServersTestManualDisk-server-1848412501</nova:name>
Jan 20 14:39:05 compute-1 nova_compute[225855]:       <nova:creationTime>2026-01-20 14:39:04</nova:creationTime>
Jan 20 14:39:05 compute-1 nova_compute[225855]:       <nova:flavor name="m1.nano">
Jan 20 14:39:05 compute-1 nova_compute[225855]:         <nova:memory>128</nova:memory>
Jan 20 14:39:05 compute-1 nova_compute[225855]:         <nova:disk>1</nova:disk>
Jan 20 14:39:05 compute-1 nova_compute[225855]:         <nova:swap>0</nova:swap>
Jan 20 14:39:05 compute-1 nova_compute[225855]:         <nova:ephemeral>0</nova:ephemeral>
Jan 20 14:39:05 compute-1 nova_compute[225855]:         <nova:vcpus>1</nova:vcpus>
Jan 20 14:39:05 compute-1 nova_compute[225855]:       </nova:flavor>
Jan 20 14:39:05 compute-1 nova_compute[225855]:       <nova:owner>
Jan 20 14:39:05 compute-1 nova_compute[225855]:         <nova:user uuid="ddee6eb6c32d451ca50c9ea499a23c1a">tempest-ServersTestManualDisk-665340674-project-member</nova:user>
Jan 20 14:39:05 compute-1 nova_compute[225855]:         <nova:project uuid="4dadd5f5212f432693d35e765126f4df">tempest-ServersTestManualDisk-665340674</nova:project>
Jan 20 14:39:05 compute-1 nova_compute[225855]:       </nova:owner>
Jan 20 14:39:05 compute-1 nova_compute[225855]:       <nova:root type="image" uuid="a32b3e07-16d8-46fd-9a7b-c242c432fcf9"/>
Jan 20 14:39:05 compute-1 nova_compute[225855]:       <nova:ports>
Jan 20 14:39:05 compute-1 nova_compute[225855]:         <nova:port uuid="722de795-61c5-4a11-ade3-6c19621e1054">
Jan 20 14:39:05 compute-1 nova_compute[225855]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Jan 20 14:39:05 compute-1 nova_compute[225855]:         </nova:port>
Jan 20 14:39:05 compute-1 nova_compute[225855]:       </nova:ports>
Jan 20 14:39:05 compute-1 nova_compute[225855]:     </nova:instance>
Jan 20 14:39:05 compute-1 nova_compute[225855]:   </metadata>
Jan 20 14:39:05 compute-1 nova_compute[225855]:   <sysinfo type="smbios">
Jan 20 14:39:05 compute-1 nova_compute[225855]:     <system>
Jan 20 14:39:05 compute-1 nova_compute[225855]:       <entry name="manufacturer">RDO</entry>
Jan 20 14:39:05 compute-1 nova_compute[225855]:       <entry name="product">OpenStack Compute</entry>
Jan 20 14:39:05 compute-1 nova_compute[225855]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 14:39:05 compute-1 nova_compute[225855]:       <entry name="serial">c12d0bd2-ff69-4827-a5a0-8bf5e44094f7</entry>
Jan 20 14:39:05 compute-1 nova_compute[225855]:       <entry name="uuid">c12d0bd2-ff69-4827-a5a0-8bf5e44094f7</entry>
Jan 20 14:39:05 compute-1 nova_compute[225855]:       <entry name="family">Virtual Machine</entry>
Jan 20 14:39:05 compute-1 nova_compute[225855]:     </system>
Jan 20 14:39:05 compute-1 nova_compute[225855]:   </sysinfo>
Jan 20 14:39:05 compute-1 nova_compute[225855]:   <os>
Jan 20 14:39:05 compute-1 nova_compute[225855]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 20 14:39:05 compute-1 nova_compute[225855]:     <boot dev="hd"/>
Jan 20 14:39:05 compute-1 nova_compute[225855]:     <smbios mode="sysinfo"/>
Jan 20 14:39:05 compute-1 nova_compute[225855]:   </os>
Jan 20 14:39:05 compute-1 nova_compute[225855]:   <features>
Jan 20 14:39:05 compute-1 nova_compute[225855]:     <acpi/>
Jan 20 14:39:05 compute-1 nova_compute[225855]:     <apic/>
Jan 20 14:39:05 compute-1 nova_compute[225855]:     <vmcoreinfo/>
Jan 20 14:39:05 compute-1 nova_compute[225855]:   </features>
Jan 20 14:39:05 compute-1 nova_compute[225855]:   <clock offset="utc">
Jan 20 14:39:05 compute-1 nova_compute[225855]:     <timer name="pit" tickpolicy="delay"/>
Jan 20 14:39:05 compute-1 nova_compute[225855]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 20 14:39:05 compute-1 nova_compute[225855]:     <timer name="hpet" present="no"/>
Jan 20 14:39:05 compute-1 nova_compute[225855]:   </clock>
Jan 20 14:39:05 compute-1 nova_compute[225855]:   <cpu mode="custom" match="exact">
Jan 20 14:39:05 compute-1 nova_compute[225855]:     <model>Nehalem</model>
Jan 20 14:39:05 compute-1 nova_compute[225855]:     <topology sockets="1" cores="1" threads="1"/>
Jan 20 14:39:05 compute-1 nova_compute[225855]:   </cpu>
Jan 20 14:39:05 compute-1 nova_compute[225855]:   <devices>
Jan 20 14:39:05 compute-1 nova_compute[225855]:     <disk type="network" device="disk">
Jan 20 14:39:05 compute-1 nova_compute[225855]:       <driver type="raw" cache="none"/>
Jan 20 14:39:05 compute-1 nova_compute[225855]:       <source protocol="rbd" name="vms/c12d0bd2-ff69-4827-a5a0-8bf5e44094f7_disk">
Jan 20 14:39:05 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:39:05 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:39:05 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:39:05 compute-1 nova_compute[225855]:       </source>
Jan 20 14:39:05 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 14:39:05 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:39:05 compute-1 nova_compute[225855]:       </auth>
Jan 20 14:39:05 compute-1 nova_compute[225855]:       <target dev="vda" bus="virtio"/>
Jan 20 14:39:05 compute-1 nova_compute[225855]:     </disk>
Jan 20 14:39:05 compute-1 nova_compute[225855]:     <disk type="network" device="cdrom">
Jan 20 14:39:05 compute-1 nova_compute[225855]:       <driver type="raw" cache="none"/>
Jan 20 14:39:05 compute-1 nova_compute[225855]:       <source protocol="rbd" name="vms/c12d0bd2-ff69-4827-a5a0-8bf5e44094f7_disk.config">
Jan 20 14:39:05 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:39:05 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:39:05 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:39:05 compute-1 nova_compute[225855]:       </source>
Jan 20 14:39:05 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 14:39:05 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:39:05 compute-1 nova_compute[225855]:       </auth>
Jan 20 14:39:05 compute-1 nova_compute[225855]:       <target dev="sda" bus="sata"/>
Jan 20 14:39:05 compute-1 nova_compute[225855]:     </disk>
Jan 20 14:39:05 compute-1 nova_compute[225855]:     <interface type="ethernet">
Jan 20 14:39:05 compute-1 nova_compute[225855]:       <mac address="fa:16:3e:be:4b:da"/>
Jan 20 14:39:05 compute-1 nova_compute[225855]:       <model type="virtio"/>
Jan 20 14:39:05 compute-1 nova_compute[225855]:       <driver name="vhost" rx_queue_size="512"/>
Jan 20 14:39:05 compute-1 nova_compute[225855]:       <mtu size="1442"/>
Jan 20 14:39:05 compute-1 nova_compute[225855]:       <target dev="tap722de795-61"/>
Jan 20 14:39:05 compute-1 nova_compute[225855]:     </interface>
Jan 20 14:39:05 compute-1 nova_compute[225855]:     <serial type="pty">
Jan 20 14:39:05 compute-1 nova_compute[225855]:       <log file="/var/lib/nova/instances/c12d0bd2-ff69-4827-a5a0-8bf5e44094f7/console.log" append="off"/>
Jan 20 14:39:05 compute-1 nova_compute[225855]:     </serial>
Jan 20 14:39:05 compute-1 nova_compute[225855]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 20 14:39:05 compute-1 nova_compute[225855]:     <video>
Jan 20 14:39:05 compute-1 nova_compute[225855]:       <model type="virtio"/>
Jan 20 14:39:05 compute-1 nova_compute[225855]:     </video>
Jan 20 14:39:05 compute-1 nova_compute[225855]:     <input type="tablet" bus="usb"/>
Jan 20 14:39:05 compute-1 nova_compute[225855]:     <rng model="virtio">
Jan 20 14:39:05 compute-1 nova_compute[225855]:       <backend model="random">/dev/urandom</backend>
Jan 20 14:39:05 compute-1 nova_compute[225855]:     </rng>
Jan 20 14:39:05 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root"/>
Jan 20 14:39:05 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:39:05 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:39:05 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:39:05 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:39:05 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:39:05 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:39:05 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:39:05 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:39:05 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:39:05 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:39:05 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:39:05 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:39:05 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:39:05 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:39:05 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:39:05 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:39:05 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:39:05 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:39:05 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:39:05 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:39:05 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:39:05 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:39:05 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:39:05 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:39:05 compute-1 nova_compute[225855]:     <controller type="usb" index="0"/>
Jan 20 14:39:05 compute-1 nova_compute[225855]:     <memballoon model="virtio">
Jan 20 14:39:05 compute-1 nova_compute[225855]:       <stats period="10"/>
Jan 20 14:39:05 compute-1 nova_compute[225855]:     </memballoon>
Jan 20 14:39:05 compute-1 nova_compute[225855]:   </devices>
Jan 20 14:39:05 compute-1 nova_compute[225855]: </domain>
Jan 20 14:39:05 compute-1 nova_compute[225855]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 20 14:39:05 compute-1 nova_compute[225855]: 2026-01-20 14:39:05.410 225859 DEBUG nova.compute.manager [None req-d8f41aaa-9211-42c1-843f-fe24927da5f5 ddee6eb6c32d451ca50c9ea499a23c1a 4dadd5f5212f432693d35e765126f4df - - default default] [instance: c12d0bd2-ff69-4827-a5a0-8bf5e44094f7] Preparing to wait for external event network-vif-plugged-722de795-61c5-4a11-ade3-6c19621e1054 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 20 14:39:05 compute-1 nova_compute[225855]: 2026-01-20 14:39:05.410 225859 DEBUG oslo_concurrency.lockutils [None req-d8f41aaa-9211-42c1-843f-fe24927da5f5 ddee6eb6c32d451ca50c9ea499a23c1a 4dadd5f5212f432693d35e765126f4df - - default default] Acquiring lock "c12d0bd2-ff69-4827-a5a0-8bf5e44094f7-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:39:05 compute-1 nova_compute[225855]: 2026-01-20 14:39:05.410 225859 DEBUG oslo_concurrency.lockutils [None req-d8f41aaa-9211-42c1-843f-fe24927da5f5 ddee6eb6c32d451ca50c9ea499a23c1a 4dadd5f5212f432693d35e765126f4df - - default default] Lock "c12d0bd2-ff69-4827-a5a0-8bf5e44094f7-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:39:05 compute-1 nova_compute[225855]: 2026-01-20 14:39:05.410 225859 DEBUG oslo_concurrency.lockutils [None req-d8f41aaa-9211-42c1-843f-fe24927da5f5 ddee6eb6c32d451ca50c9ea499a23c1a 4dadd5f5212f432693d35e765126f4df - - default default] Lock "c12d0bd2-ff69-4827-a5a0-8bf5e44094f7-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:39:05 compute-1 nova_compute[225855]: 2026-01-20 14:39:05.411 225859 DEBUG nova.virt.libvirt.vif [None req-d8f41aaa-9211-42c1-843f-fe24927da5f5 ddee6eb6c32d451ca50c9ea499a23c1a 4dadd5f5212f432693d35e765126f4df - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T14:38:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestManualDisk-server-1848412501',display_name='tempest-ServersTestManualDisk-server-1848412501',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverstestmanualdisk-server-1848412501',id=63,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBN1H3ohoasKNqW9/qCu5grGBs3IV04fg9gxxB4grih5Zd5WxPzj2gaQyovrov9cUlcTcdLXAKoF+QUCFPVVxhI1Y4NXPI0qz/O7wrYwAYL2Je6ImmzeATRgxmFMwN+zj/A==',key_name='tempest-keypair-1139962663',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4dadd5f5212f432693d35e765126f4df',ramdisk_id='',reservation_id='r-l9tyz3ue',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestManualDisk-665340674',owner_user_name='tempest-ServersTestManualDisk-665340674-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T14:39:00Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ddee6eb6c32d451ca50c9ea499a23c1a',uuid=c12d0bd2-ff69-4827-a5a0-8bf5e44094f7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "722de795-61c5-4a11-ade3-6c19621e1054", "address": "fa:16:3e:be:4b:da", "network": {"id": "c5a9008d-9eea-43f2-a495-bf2e645a81fb", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-2127234184-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4dadd5f5212f432693d35e765126f4df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap722de795-61", "ovs_interfaceid": "722de795-61c5-4a11-ade3-6c19621e1054", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 20 14:39:05 compute-1 nova_compute[225855]: 2026-01-20 14:39:05.411 225859 DEBUG nova.network.os_vif_util [None req-d8f41aaa-9211-42c1-843f-fe24927da5f5 ddee6eb6c32d451ca50c9ea499a23c1a 4dadd5f5212f432693d35e765126f4df - - default default] Converting VIF {"id": "722de795-61c5-4a11-ade3-6c19621e1054", "address": "fa:16:3e:be:4b:da", "network": {"id": "c5a9008d-9eea-43f2-a495-bf2e645a81fb", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-2127234184-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4dadd5f5212f432693d35e765126f4df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap722de795-61", "ovs_interfaceid": "722de795-61c5-4a11-ade3-6c19621e1054", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:39:05 compute-1 nova_compute[225855]: 2026-01-20 14:39:05.412 225859 DEBUG nova.network.os_vif_util [None req-d8f41aaa-9211-42c1-843f-fe24927da5f5 ddee6eb6c32d451ca50c9ea499a23c1a 4dadd5f5212f432693d35e765126f4df - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:be:4b:da,bridge_name='br-int',has_traffic_filtering=True,id=722de795-61c5-4a11-ade3-6c19621e1054,network=Network(c5a9008d-9eea-43f2-a495-bf2e645a81fb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap722de795-61') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:39:05 compute-1 nova_compute[225855]: 2026-01-20 14:39:05.412 225859 DEBUG os_vif [None req-d8f41aaa-9211-42c1-843f-fe24927da5f5 ddee6eb6c32d451ca50c9ea499a23c1a 4dadd5f5212f432693d35e765126f4df - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:be:4b:da,bridge_name='br-int',has_traffic_filtering=True,id=722de795-61c5-4a11-ade3-6c19621e1054,network=Network(c5a9008d-9eea-43f2-a495-bf2e645a81fb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap722de795-61') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 20 14:39:05 compute-1 nova_compute[225855]: 2026-01-20 14:39:05.413 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:39:05 compute-1 nova_compute[225855]: 2026-01-20 14:39:05.413 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:39:05 compute-1 nova_compute[225855]: 2026-01-20 14:39:05.413 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:39:05 compute-1 nova_compute[225855]: 2026-01-20 14:39:05.416 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:39:05 compute-1 nova_compute[225855]: 2026-01-20 14:39:05.416 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap722de795-61, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:39:05 compute-1 nova_compute[225855]: 2026-01-20 14:39:05.417 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap722de795-61, col_values=(('external_ids', {'iface-id': '722de795-61c5-4a11-ade3-6c19621e1054', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:be:4b:da', 'vm-uuid': 'c12d0bd2-ff69-4827-a5a0-8bf5e44094f7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:39:05 compute-1 nova_compute[225855]: 2026-01-20 14:39:05.418 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:39:05 compute-1 NetworkManager[49104]: <info>  [1768919945.4190] manager: (tap722de795-61): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/80)
Jan 20 14:39:05 compute-1 nova_compute[225855]: 2026-01-20 14:39:05.423 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 14:39:05 compute-1 nova_compute[225855]: 2026-01-20 14:39:05.425 225859 INFO os_vif [None req-d8f41aaa-9211-42c1-843f-fe24927da5f5 ddee6eb6c32d451ca50c9ea499a23c1a 4dadd5f5212f432693d35e765126f4df - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:be:4b:da,bridge_name='br-int',has_traffic_filtering=True,id=722de795-61c5-4a11-ade3-6c19621e1054,network=Network(c5a9008d-9eea-43f2-a495-bf2e645a81fb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap722de795-61')
Jan 20 14:39:05 compute-1 nova_compute[225855]: 2026-01-20 14:39:05.465 225859 DEBUG nova.virt.libvirt.driver [None req-d8f41aaa-9211-42c1-843f-fe24927da5f5 ddee6eb6c32d451ca50c9ea499a23c1a 4dadd5f5212f432693d35e765126f4df - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:39:05 compute-1 nova_compute[225855]: 2026-01-20 14:39:05.465 225859 DEBUG nova.virt.libvirt.driver [None req-d8f41aaa-9211-42c1-843f-fe24927da5f5 ddee6eb6c32d451ca50c9ea499a23c1a 4dadd5f5212f432693d35e765126f4df - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:39:05 compute-1 nova_compute[225855]: 2026-01-20 14:39:05.465 225859 DEBUG nova.virt.libvirt.driver [None req-d8f41aaa-9211-42c1-843f-fe24927da5f5 ddee6eb6c32d451ca50c9ea499a23c1a 4dadd5f5212f432693d35e765126f4df - - default default] No VIF found with MAC fa:16:3e:be:4b:da, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 20 14:39:05 compute-1 nova_compute[225855]: 2026-01-20 14:39:05.466 225859 INFO nova.virt.libvirt.driver [None req-d8f41aaa-9211-42c1-843f-fe24927da5f5 ddee6eb6c32d451ca50c9ea499a23c1a 4dadd5f5212f432693d35e765126f4df - - default default] [instance: c12d0bd2-ff69-4827-a5a0-8bf5e44094f7] Using config drive
Jan 20 14:39:05 compute-1 nova_compute[225855]: 2026-01-20 14:39:05.487 225859 DEBUG nova.storage.rbd_utils [None req-d8f41aaa-9211-42c1-843f-fe24927da5f5 ddee6eb6c32d451ca50c9ea499a23c1a 4dadd5f5212f432693d35e765126f4df - - default default] rbd image c12d0bd2-ff69-4827-a5a0-8bf5e44094f7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:39:05 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/3372781718' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:39:05 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/2327636316' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:39:05 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:39:05 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:39:05 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:39:05.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:39:05 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:39:06 compute-1 nova_compute[225855]: 2026-01-20 14:39:06.327 225859 DEBUG nova.network.neutron [req-a13f5730-2c44-4baf-8ad3-87ea95078a0b req-638fa99b-ce39-4b86-a40e-f22791c5e373 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: c12d0bd2-ff69-4827-a5a0-8bf5e44094f7] Updated VIF entry in instance network info cache for port 722de795-61c5-4a11-ade3-6c19621e1054. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 14:39:06 compute-1 nova_compute[225855]: 2026-01-20 14:39:06.328 225859 DEBUG nova.network.neutron [req-a13f5730-2c44-4baf-8ad3-87ea95078a0b req-638fa99b-ce39-4b86-a40e-f22791c5e373 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: c12d0bd2-ff69-4827-a5a0-8bf5e44094f7] Updating instance_info_cache with network_info: [{"id": "722de795-61c5-4a11-ade3-6c19621e1054", "address": "fa:16:3e:be:4b:da", "network": {"id": "c5a9008d-9eea-43f2-a495-bf2e645a81fb", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-2127234184-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4dadd5f5212f432693d35e765126f4df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap722de795-61", "ovs_interfaceid": "722de795-61c5-4a11-ade3-6c19621e1054", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:39:06 compute-1 nova_compute[225855]: 2026-01-20 14:39:06.353 225859 INFO nova.virt.libvirt.driver [None req-d8f41aaa-9211-42c1-843f-fe24927da5f5 ddee6eb6c32d451ca50c9ea499a23c1a 4dadd5f5212f432693d35e765126f4df - - default default] [instance: c12d0bd2-ff69-4827-a5a0-8bf5e44094f7] Creating config drive at /var/lib/nova/instances/c12d0bd2-ff69-4827-a5a0-8bf5e44094f7/disk.config
Jan 20 14:39:06 compute-1 nova_compute[225855]: 2026-01-20 14:39:06.364 225859 DEBUG oslo_concurrency.processutils [None req-d8f41aaa-9211-42c1-843f-fe24927da5f5 ddee6eb6c32d451ca50c9ea499a23c1a 4dadd5f5212f432693d35e765126f4df - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c12d0bd2-ff69-4827-a5a0-8bf5e44094f7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpq_kelpup execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:39:06 compute-1 nova_compute[225855]: 2026-01-20 14:39:06.393 225859 DEBUG oslo_concurrency.lockutils [req-a13f5730-2c44-4baf-8ad3-87ea95078a0b req-638fa99b-ce39-4b86-a40e-f22791c5e373 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-c12d0bd2-ff69-4827-a5a0-8bf5e44094f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:39:06 compute-1 nova_compute[225855]: 2026-01-20 14:39:06.498 225859 DEBUG oslo_concurrency.processutils [None req-d8f41aaa-9211-42c1-843f-fe24927da5f5 ddee6eb6c32d451ca50c9ea499a23c1a 4dadd5f5212f432693d35e765126f4df - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c12d0bd2-ff69-4827-a5a0-8bf5e44094f7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpq_kelpup" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:39:06 compute-1 nova_compute[225855]: 2026-01-20 14:39:06.530 225859 DEBUG nova.storage.rbd_utils [None req-d8f41aaa-9211-42c1-843f-fe24927da5f5 ddee6eb6c32d451ca50c9ea499a23c1a 4dadd5f5212f432693d35e765126f4df - - default default] rbd image c12d0bd2-ff69-4827-a5a0-8bf5e44094f7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:39:06 compute-1 nova_compute[225855]: 2026-01-20 14:39:06.534 225859 DEBUG oslo_concurrency.processutils [None req-d8f41aaa-9211-42c1-843f-fe24927da5f5 ddee6eb6c32d451ca50c9ea499a23c1a 4dadd5f5212f432693d35e765126f4df - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c12d0bd2-ff69-4827-a5a0-8bf5e44094f7/disk.config c12d0bd2-ff69-4827-a5a0-8bf5e44094f7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:39:06 compute-1 nova_compute[225855]: 2026-01-20 14:39:06.678 225859 DEBUG oslo_concurrency.processutils [None req-d8f41aaa-9211-42c1-843f-fe24927da5f5 ddee6eb6c32d451ca50c9ea499a23c1a 4dadd5f5212f432693d35e765126f4df - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c12d0bd2-ff69-4827-a5a0-8bf5e44094f7/disk.config c12d0bd2-ff69-4827-a5a0-8bf5e44094f7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:39:06 compute-1 nova_compute[225855]: 2026-01-20 14:39:06.679 225859 INFO nova.virt.libvirt.driver [None req-d8f41aaa-9211-42c1-843f-fe24927da5f5 ddee6eb6c32d451ca50c9ea499a23c1a 4dadd5f5212f432693d35e765126f4df - - default default] [instance: c12d0bd2-ff69-4827-a5a0-8bf5e44094f7] Deleting local config drive /var/lib/nova/instances/c12d0bd2-ff69-4827-a5a0-8bf5e44094f7/disk.config because it was imported into RBD.
Jan 20 14:39:06 compute-1 kernel: tap722de795-61: entered promiscuous mode
Jan 20 14:39:06 compute-1 NetworkManager[49104]: <info>  [1768919946.7339] manager: (tap722de795-61): new Tun device (/org/freedesktop/NetworkManager/Devices/81)
Jan 20 14:39:06 compute-1 ovn_controller[130490]: 2026-01-20T14:39:06Z|00184|binding|INFO|Claiming lport 722de795-61c5-4a11-ade3-6c19621e1054 for this chassis.
Jan 20 14:39:06 compute-1 ovn_controller[130490]: 2026-01-20T14:39:06Z|00185|binding|INFO|722de795-61c5-4a11-ade3-6c19621e1054: Claiming fa:16:3e:be:4b:da 10.100.0.4
Jan 20 14:39:06 compute-1 nova_compute[225855]: 2026-01-20 14:39:06.734 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:39:06 compute-1 nova_compute[225855]: 2026-01-20 14:39:06.738 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:39:06 compute-1 nova_compute[225855]: 2026-01-20 14:39:06.740 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:39:06 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:39:06.751 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:be:4b:da 10.100.0.4'], port_security=['fa:16:3e:be:4b:da 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'c12d0bd2-ff69-4827-a5a0-8bf5e44094f7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c5a9008d-9eea-43f2-a495-bf2e645a81fb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4dadd5f5212f432693d35e765126f4df', 'neutron:revision_number': '2', 'neutron:security_group_ids': '333884db-2591-4fb4-b140-3b52543605e8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2b4a1cca-fd45-4fc9-bc45-b55a7f22b84a, chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], logical_port=722de795-61c5-4a11-ade3-6c19621e1054) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:39:06 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:39:06.754 140354 INFO neutron.agent.ovn.metadata.agent [-] Port 722de795-61c5-4a11-ade3-6c19621e1054 in datapath c5a9008d-9eea-43f2-a495-bf2e645a81fb bound to our chassis
Jan 20 14:39:06 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:39:06.758 140354 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c5a9008d-9eea-43f2-a495-bf2e645a81fb
Jan 20 14:39:06 compute-1 ceph-mon[81775]: pgmap v1522: 321 pgs: 321 active+clean; 227 MiB data, 653 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 2.3 MiB/s wr, 290 op/s
Jan 20 14:39:06 compute-1 systemd-machined[194361]: New machine qemu-28-instance-0000003f.
Jan 20 14:39:06 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:39:06.767 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[84c95e56-e503-4026-a321-e0faf72904d1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:39:06 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:39:06.769 140354 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapc5a9008d-91 in ovnmeta-c5a9008d-9eea-43f2-a495-bf2e645a81fb namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 20 14:39:06 compute-1 systemd-udevd[250984]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 14:39:06 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:39:06.771 229707 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapc5a9008d-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 20 14:39:06 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:39:06.771 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[976e543c-60f2-4f58-81ed-77f3c7fad264]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:39:06 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:39:06.772 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[3ca2c753-b802-46a9-a798-35555242802f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:39:06 compute-1 NetworkManager[49104]: <info>  [1768919946.7802] device (tap722de795-61): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 20 14:39:06 compute-1 NetworkManager[49104]: <info>  [1768919946.7811] device (tap722de795-61): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 20 14:39:06 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:39:06.783 140466 DEBUG oslo.privsep.daemon [-] privsep: reply[c76002d9-93be-4988-91d5-9ca4d622bb56]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:39:06 compute-1 nova_compute[225855]: 2026-01-20 14:39:06.803 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:39:06 compute-1 systemd[1]: Started Virtual Machine qemu-28-instance-0000003f.
Jan 20 14:39:06 compute-1 ovn_controller[130490]: 2026-01-20T14:39:06Z|00186|binding|INFO|Setting lport 722de795-61c5-4a11-ade3-6c19621e1054 ovn-installed in OVS
Jan 20 14:39:06 compute-1 ovn_controller[130490]: 2026-01-20T14:39:06Z|00187|binding|INFO|Setting lport 722de795-61c5-4a11-ade3-6c19621e1054 up in Southbound
Jan 20 14:39:06 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:39:06.807 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[ea492631-f7d0-46cd-8362-ac13bdce486e]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:39:06 compute-1 nova_compute[225855]: 2026-01-20 14:39:06.808 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:39:06 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:39:06.835 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[9840bbcf-2b57-46e7-987f-8bf035d44d1a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:39:06 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:39:06.840 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[d303b835-bc0d-428a-82e8-364afab305b4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:39:06 compute-1 systemd-udevd[250988]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 14:39:06 compute-1 NetworkManager[49104]: <info>  [1768919946.8411] manager: (tapc5a9008d-90): new Veth device (/org/freedesktop/NetworkManager/Devices/82)
Jan 20 14:39:06 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:39:06.869 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[e33ad1ac-bdf1-4b8e-ad0d-d056653c8b94]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:39:06 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:39:06.872 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[0404ea2c-ad28-4755-9064-17a2f34fb7b8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:39:06 compute-1 NetworkManager[49104]: <info>  [1768919946.8943] device (tapc5a9008d-90): carrier: link connected
Jan 20 14:39:06 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:39:06.899 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[f3f93ba5-66e1-4175-899d-cf9f6b4daaa5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:39:06 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:39:06.914 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[1c0cd223-22d8-493b-b334-932e2e7c31b9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc5a9008d-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:fa:2d:fa'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 52], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 498582, 'reachable_time': 19181, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 251017, 'error': None, 'target': 'ovnmeta-c5a9008d-9eea-43f2-a495-bf2e645a81fb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:39:06 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:39:06.927 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[51fa788e-3843-4a44-bf1c-c61e5c275485]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fefa:2dfa'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 498582, 'tstamp': 498582}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 251018, 'error': None, 'target': 'ovnmeta-c5a9008d-9eea-43f2-a495-bf2e645a81fb', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:39:06 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:39:06 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:39:06 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:39:06.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:39:06 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:39:06.943 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[e22893c6-b2aa-4e0e-8c4e-cf074020d2a1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc5a9008d-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:fa:2d:fa'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 52], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 498582, 'reachable_time': 19181, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 251019, 'error': None, 'target': 'ovnmeta-c5a9008d-9eea-43f2-a495-bf2e645a81fb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:39:06 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:39:06.966 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[6510ed5b-5061-41a1-9527-330ec295c697]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:39:07 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:39:07.018 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[ce57d9d2-3988-41f5-a168-913e4b822915]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:39:07 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:39:07.020 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc5a9008d-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:39:07 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:39:07.020 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:39:07 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:39:07.020 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc5a9008d-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:39:07 compute-1 nova_compute[225855]: 2026-01-20 14:39:07.022 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:39:07 compute-1 NetworkManager[49104]: <info>  [1768919947.0226] manager: (tapc5a9008d-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/83)
Jan 20 14:39:07 compute-1 kernel: tapc5a9008d-90: entered promiscuous mode
Jan 20 14:39:07 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:39:07.024 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc5a9008d-90, col_values=(('external_ids', {'iface-id': 'f398fb65-c4f7-4041-baf6-a23646124813'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:39:07 compute-1 nova_compute[225855]: 2026-01-20 14:39:07.025 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:39:07 compute-1 ovn_controller[130490]: 2026-01-20T14:39:07Z|00188|binding|INFO|Releasing lport f398fb65-c4f7-4041-baf6-a23646124813 from this chassis (sb_readonly=0)
Jan 20 14:39:07 compute-1 nova_compute[225855]: 2026-01-20 14:39:07.039 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:39:07 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:39:07.040 140354 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/c5a9008d-9eea-43f2-a495-bf2e645a81fb.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/c5a9008d-9eea-43f2-a495-bf2e645a81fb.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 20 14:39:07 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:39:07.041 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[22bbd639-aaf4-4b81-860d-482241bf5aa0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:39:07 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:39:07.042 140354 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 20 14:39:07 compute-1 ovn_metadata_agent[140349]: global
Jan 20 14:39:07 compute-1 ovn_metadata_agent[140349]:     log         /dev/log local0 debug
Jan 20 14:39:07 compute-1 ovn_metadata_agent[140349]:     log-tag     haproxy-metadata-proxy-c5a9008d-9eea-43f2-a495-bf2e645a81fb
Jan 20 14:39:07 compute-1 ovn_metadata_agent[140349]:     user        root
Jan 20 14:39:07 compute-1 ovn_metadata_agent[140349]:     group       root
Jan 20 14:39:07 compute-1 ovn_metadata_agent[140349]:     maxconn     1024
Jan 20 14:39:07 compute-1 ovn_metadata_agent[140349]:     pidfile     /var/lib/neutron/external/pids/c5a9008d-9eea-43f2-a495-bf2e645a81fb.pid.haproxy
Jan 20 14:39:07 compute-1 ovn_metadata_agent[140349]:     daemon
Jan 20 14:39:07 compute-1 ovn_metadata_agent[140349]: 
Jan 20 14:39:07 compute-1 ovn_metadata_agent[140349]: defaults
Jan 20 14:39:07 compute-1 ovn_metadata_agent[140349]:     log global
Jan 20 14:39:07 compute-1 ovn_metadata_agent[140349]:     mode http
Jan 20 14:39:07 compute-1 ovn_metadata_agent[140349]:     option httplog
Jan 20 14:39:07 compute-1 ovn_metadata_agent[140349]:     option dontlognull
Jan 20 14:39:07 compute-1 ovn_metadata_agent[140349]:     option http-server-close
Jan 20 14:39:07 compute-1 ovn_metadata_agent[140349]:     option forwardfor
Jan 20 14:39:07 compute-1 ovn_metadata_agent[140349]:     retries                 3
Jan 20 14:39:07 compute-1 ovn_metadata_agent[140349]:     timeout http-request    30s
Jan 20 14:39:07 compute-1 ovn_metadata_agent[140349]:     timeout connect         30s
Jan 20 14:39:07 compute-1 ovn_metadata_agent[140349]:     timeout client          32s
Jan 20 14:39:07 compute-1 ovn_metadata_agent[140349]:     timeout server          32s
Jan 20 14:39:07 compute-1 ovn_metadata_agent[140349]:     timeout http-keep-alive 30s
Jan 20 14:39:07 compute-1 ovn_metadata_agent[140349]: 
Jan 20 14:39:07 compute-1 ovn_metadata_agent[140349]: 
Jan 20 14:39:07 compute-1 ovn_metadata_agent[140349]: listen listener
Jan 20 14:39:07 compute-1 ovn_metadata_agent[140349]:     bind 169.254.169.254:80
Jan 20 14:39:07 compute-1 ovn_metadata_agent[140349]:     server metadata /var/lib/neutron/metadata_proxy
Jan 20 14:39:07 compute-1 ovn_metadata_agent[140349]:     http-request add-header X-OVN-Network-ID c5a9008d-9eea-43f2-a495-bf2e645a81fb
Jan 20 14:39:07 compute-1 ovn_metadata_agent[140349]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 20 14:39:07 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:39:07.046 140354 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-c5a9008d-9eea-43f2-a495-bf2e645a81fb', 'env', 'PROCESS_TAG=haproxy-c5a9008d-9eea-43f2-a495-bf2e645a81fb', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/c5a9008d-9eea-43f2-a495-bf2e645a81fb.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 20 14:39:07 compute-1 podman[251051]: 2026-01-20 14:39:07.435988424 +0000 UTC m=+0.052041920 container create 940eb6ae8742c65c6962999621b5cbe09e821ea1435acfd2e7c484c1e998c809 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c5a9008d-9eea-43f2-a495-bf2e645a81fb, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 20 14:39:07 compute-1 nova_compute[225855]: 2026-01-20 14:39:07.446 225859 DEBUG nova.compute.manager [req-e889511f-ece4-45fe-ba9d-2e048063f26e req-0af0c0bf-3b31-4b56-bb79-af67e26d7b32 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: c12d0bd2-ff69-4827-a5a0-8bf5e44094f7] Received event network-vif-plugged-722de795-61c5-4a11-ade3-6c19621e1054 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:39:07 compute-1 nova_compute[225855]: 2026-01-20 14:39:07.446 225859 DEBUG oslo_concurrency.lockutils [req-e889511f-ece4-45fe-ba9d-2e048063f26e req-0af0c0bf-3b31-4b56-bb79-af67e26d7b32 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "c12d0bd2-ff69-4827-a5a0-8bf5e44094f7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:39:07 compute-1 nova_compute[225855]: 2026-01-20 14:39:07.447 225859 DEBUG oslo_concurrency.lockutils [req-e889511f-ece4-45fe-ba9d-2e048063f26e req-0af0c0bf-3b31-4b56-bb79-af67e26d7b32 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "c12d0bd2-ff69-4827-a5a0-8bf5e44094f7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:39:07 compute-1 nova_compute[225855]: 2026-01-20 14:39:07.447 225859 DEBUG oslo_concurrency.lockutils [req-e889511f-ece4-45fe-ba9d-2e048063f26e req-0af0c0bf-3b31-4b56-bb79-af67e26d7b32 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "c12d0bd2-ff69-4827-a5a0-8bf5e44094f7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:39:07 compute-1 nova_compute[225855]: 2026-01-20 14:39:07.447 225859 DEBUG nova.compute.manager [req-e889511f-ece4-45fe-ba9d-2e048063f26e req-0af0c0bf-3b31-4b56-bb79-af67e26d7b32 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: c12d0bd2-ff69-4827-a5a0-8bf5e44094f7] Processing event network-vif-plugged-722de795-61c5-4a11-ade3-6c19621e1054 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 20 14:39:07 compute-1 systemd[1]: Started libpod-conmon-940eb6ae8742c65c6962999621b5cbe09e821ea1435acfd2e7c484c1e998c809.scope.
Jan 20 14:39:07 compute-1 systemd[1]: Started libcrun container.
Jan 20 14:39:07 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ced956dcc0e0a49545b20b665d5cc72b5ad48cfb671bd1d0627044fdd6837f48/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 20 14:39:07 compute-1 podman[251051]: 2026-01-20 14:39:07.407409307 +0000 UTC m=+0.023462803 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 20 14:39:07 compute-1 podman[251051]: 2026-01-20 14:39:07.515748295 +0000 UTC m=+0.131801811 container init 940eb6ae8742c65c6962999621b5cbe09e821ea1435acfd2e7c484c1e998c809 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c5a9008d-9eea-43f2-a495-bf2e645a81fb, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team)
Jan 20 14:39:07 compute-1 podman[251051]: 2026-01-20 14:39:07.520387596 +0000 UTC m=+0.136441082 container start 940eb6ae8742c65c6962999621b5cbe09e821ea1435acfd2e7c484c1e998c809 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c5a9008d-9eea-43f2-a495-bf2e645a81fb, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2)
Jan 20 14:39:07 compute-1 neutron-haproxy-ovnmeta-c5a9008d-9eea-43f2-a495-bf2e645a81fb[251107]: [NOTICE]   (251112) : New worker (251116) forked
Jan 20 14:39:07 compute-1 neutron-haproxy-ovnmeta-c5a9008d-9eea-43f2-a495-bf2e645a81fb[251107]: [NOTICE]   (251112) : Loading success.
Jan 20 14:39:07 compute-1 nova_compute[225855]: 2026-01-20 14:39:07.588 225859 DEBUG nova.compute.manager [None req-d8f41aaa-9211-42c1-843f-fe24927da5f5 ddee6eb6c32d451ca50c9ea499a23c1a 4dadd5f5212f432693d35e765126f4df - - default default] [instance: c12d0bd2-ff69-4827-a5a0-8bf5e44094f7] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 20 14:39:07 compute-1 nova_compute[225855]: 2026-01-20 14:39:07.590 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768919947.5888853, c12d0bd2-ff69-4827-a5a0-8bf5e44094f7 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:39:07 compute-1 nova_compute[225855]: 2026-01-20 14:39:07.590 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: c12d0bd2-ff69-4827-a5a0-8bf5e44094f7] VM Started (Lifecycle Event)
Jan 20 14:39:07 compute-1 nova_compute[225855]: 2026-01-20 14:39:07.592 225859 DEBUG nova.virt.libvirt.driver [None req-d8f41aaa-9211-42c1-843f-fe24927da5f5 ddee6eb6c32d451ca50c9ea499a23c1a 4dadd5f5212f432693d35e765126f4df - - default default] [instance: c12d0bd2-ff69-4827-a5a0-8bf5e44094f7] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 20 14:39:07 compute-1 nova_compute[225855]: 2026-01-20 14:39:07.596 225859 INFO nova.virt.libvirt.driver [-] [instance: c12d0bd2-ff69-4827-a5a0-8bf5e44094f7] Instance spawned successfully.
Jan 20 14:39:07 compute-1 nova_compute[225855]: 2026-01-20 14:39:07.596 225859 DEBUG nova.virt.libvirt.driver [None req-d8f41aaa-9211-42c1-843f-fe24927da5f5 ddee6eb6c32d451ca50c9ea499a23c1a 4dadd5f5212f432693d35e765126f4df - - default default] [instance: c12d0bd2-ff69-4827-a5a0-8bf5e44094f7] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 20 14:39:07 compute-1 nova_compute[225855]: 2026-01-20 14:39:07.619 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: c12d0bd2-ff69-4827-a5a0-8bf5e44094f7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:39:07 compute-1 nova_compute[225855]: 2026-01-20 14:39:07.624 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: c12d0bd2-ff69-4827-a5a0-8bf5e44094f7] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:39:07 compute-1 nova_compute[225855]: 2026-01-20 14:39:07.629 225859 DEBUG nova.virt.libvirt.driver [None req-d8f41aaa-9211-42c1-843f-fe24927da5f5 ddee6eb6c32d451ca50c9ea499a23c1a 4dadd5f5212f432693d35e765126f4df - - default default] [instance: c12d0bd2-ff69-4827-a5a0-8bf5e44094f7] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:39:07 compute-1 nova_compute[225855]: 2026-01-20 14:39:07.630 225859 DEBUG nova.virt.libvirt.driver [None req-d8f41aaa-9211-42c1-843f-fe24927da5f5 ddee6eb6c32d451ca50c9ea499a23c1a 4dadd5f5212f432693d35e765126f4df - - default default] [instance: c12d0bd2-ff69-4827-a5a0-8bf5e44094f7] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:39:07 compute-1 nova_compute[225855]: 2026-01-20 14:39:07.630 225859 DEBUG nova.virt.libvirt.driver [None req-d8f41aaa-9211-42c1-843f-fe24927da5f5 ddee6eb6c32d451ca50c9ea499a23c1a 4dadd5f5212f432693d35e765126f4df - - default default] [instance: c12d0bd2-ff69-4827-a5a0-8bf5e44094f7] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:39:07 compute-1 nova_compute[225855]: 2026-01-20 14:39:07.631 225859 DEBUG nova.virt.libvirt.driver [None req-d8f41aaa-9211-42c1-843f-fe24927da5f5 ddee6eb6c32d451ca50c9ea499a23c1a 4dadd5f5212f432693d35e765126f4df - - default default] [instance: c12d0bd2-ff69-4827-a5a0-8bf5e44094f7] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:39:07 compute-1 nova_compute[225855]: 2026-01-20 14:39:07.631 225859 DEBUG nova.virt.libvirt.driver [None req-d8f41aaa-9211-42c1-843f-fe24927da5f5 ddee6eb6c32d451ca50c9ea499a23c1a 4dadd5f5212f432693d35e765126f4df - - default default] [instance: c12d0bd2-ff69-4827-a5a0-8bf5e44094f7] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:39:07 compute-1 nova_compute[225855]: 2026-01-20 14:39:07.632 225859 DEBUG nova.virt.libvirt.driver [None req-d8f41aaa-9211-42c1-843f-fe24927da5f5 ddee6eb6c32d451ca50c9ea499a23c1a 4dadd5f5212f432693d35e765126f4df - - default default] [instance: c12d0bd2-ff69-4827-a5a0-8bf5e44094f7] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:39:07 compute-1 nova_compute[225855]: 2026-01-20 14:39:07.660 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: c12d0bd2-ff69-4827-a5a0-8bf5e44094f7] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 14:39:07 compute-1 nova_compute[225855]: 2026-01-20 14:39:07.661 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768919947.5891743, c12d0bd2-ff69-4827-a5a0-8bf5e44094f7 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:39:07 compute-1 nova_compute[225855]: 2026-01-20 14:39:07.661 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: c12d0bd2-ff69-4827-a5a0-8bf5e44094f7] VM Paused (Lifecycle Event)
Jan 20 14:39:07 compute-1 nova_compute[225855]: 2026-01-20 14:39:07.704 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: c12d0bd2-ff69-4827-a5a0-8bf5e44094f7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:39:07 compute-1 nova_compute[225855]: 2026-01-20 14:39:07.710 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768919947.5918474, c12d0bd2-ff69-4827-a5a0-8bf5e44094f7 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:39:07 compute-1 nova_compute[225855]: 2026-01-20 14:39:07.710 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: c12d0bd2-ff69-4827-a5a0-8bf5e44094f7] VM Resumed (Lifecycle Event)
Jan 20 14:39:07 compute-1 nova_compute[225855]: 2026-01-20 14:39:07.746 225859 INFO nova.compute.manager [None req-d8f41aaa-9211-42c1-843f-fe24927da5f5 ddee6eb6c32d451ca50c9ea499a23c1a 4dadd5f5212f432693d35e765126f4df - - default default] [instance: c12d0bd2-ff69-4827-a5a0-8bf5e44094f7] Took 7.09 seconds to spawn the instance on the hypervisor.
Jan 20 14:39:07 compute-1 nova_compute[225855]: 2026-01-20 14:39:07.746 225859 DEBUG nova.compute.manager [None req-d8f41aaa-9211-42c1-843f-fe24927da5f5 ddee6eb6c32d451ca50c9ea499a23c1a 4dadd5f5212f432693d35e765126f4df - - default default] [instance: c12d0bd2-ff69-4827-a5a0-8bf5e44094f7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:39:07 compute-1 nova_compute[225855]: 2026-01-20 14:39:07.748 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: c12d0bd2-ff69-4827-a5a0-8bf5e44094f7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:39:07 compute-1 nova_compute[225855]: 2026-01-20 14:39:07.755 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: c12d0bd2-ff69-4827-a5a0-8bf5e44094f7] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:39:07 compute-1 nova_compute[225855]: 2026-01-20 14:39:07.796 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: c12d0bd2-ff69-4827-a5a0-8bf5e44094f7] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 14:39:07 compute-1 nova_compute[225855]: 2026-01-20 14:39:07.829 225859 INFO nova.compute.manager [None req-d8f41aaa-9211-42c1-843f-fe24927da5f5 ddee6eb6c32d451ca50c9ea499a23c1a 4dadd5f5212f432693d35e765126f4df - - default default] [instance: c12d0bd2-ff69-4827-a5a0-8bf5e44094f7] Took 8.07 seconds to build instance.
Jan 20 14:39:07 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/1592924687' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:39:07 compute-1 nova_compute[225855]: 2026-01-20 14:39:07.845 225859 DEBUG oslo_concurrency.lockutils [None req-d8f41aaa-9211-42c1-843f-fe24927da5f5 ddee6eb6c32d451ca50c9ea499a23c1a 4dadd5f5212f432693d35e765126f4df - - default default] Lock "c12d0bd2-ff69-4827-a5a0-8bf5e44094f7" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.179s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:39:07 compute-1 nova_compute[225855]: 2026-01-20 14:39:07.847 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "c12d0bd2-ff69-4827-a5a0-8bf5e44094f7" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 5.923s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:39:07 compute-1 nova_compute[225855]: 2026-01-20 14:39:07.847 225859 INFO nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] [instance: c12d0bd2-ff69-4827-a5a0-8bf5e44094f7] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 14:39:07 compute-1 nova_compute[225855]: 2026-01-20 14:39:07.847 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "c12d0bd2-ff69-4827-a5a0-8bf5e44094f7" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:39:07 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:39:07 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:39:07 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:39:07.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:39:08 compute-1 ceph-mon[81775]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #67. Immutable memtables: 0.
Jan 20 14:39:08 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:39:08.132792) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 20 14:39:08 compute-1 ceph-mon[81775]: rocksdb: [db/flush_job.cc:856] [default] [JOB 39] Flushing memtable with next log file: 67
Jan 20 14:39:08 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768919948132832, "job": 39, "event": "flush_started", "num_memtables": 1, "num_entries": 459, "num_deletes": 251, "total_data_size": 527068, "memory_usage": 535360, "flush_reason": "Manual Compaction"}
Jan 20 14:39:08 compute-1 ceph-mon[81775]: rocksdb: [db/flush_job.cc:885] [default] [JOB 39] Level-0 flush table #68: started
Jan 20 14:39:08 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768919948149701, "cf_name": "default", "job": 39, "event": "table_file_creation", "file_number": 68, "file_size": 285520, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 36189, "largest_seqno": 36643, "table_properties": {"data_size": 283148, "index_size": 472, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 837, "raw_key_size": 6627, "raw_average_key_size": 20, "raw_value_size": 278274, "raw_average_value_size": 856, "num_data_blocks": 21, "num_entries": 325, "num_filter_entries": 325, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768919928, "oldest_key_time": 1768919928, "file_creation_time": 1768919948, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1539d774-8a6f-4e48-b253-137c44586344", "db_session_id": "LFF7G2OZDOU7TKQ8MKAH", "orig_file_number": 68, "seqno_to_time_mapping": "N/A"}}
Jan 20 14:39:08 compute-1 ceph-mon[81775]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 39] Flush lasted 16950 microseconds, and 2510 cpu microseconds.
Jan 20 14:39:08 compute-1 ceph-mon[81775]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 14:39:08 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:39:08.149738) [db/flush_job.cc:967] [default] [JOB 39] Level-0 flush table #68: 285520 bytes OK
Jan 20 14:39:08 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:39:08.149758) [db/memtable_list.cc:519] [default] Level-0 commit table #68 started
Jan 20 14:39:08 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:39:08.183298) [db/memtable_list.cc:722] [default] Level-0 commit table #68: memtable #1 done
Jan 20 14:39:08 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:39:08.183339) EVENT_LOG_v1 {"time_micros": 1768919948183328, "job": 39, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 20 14:39:08 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:39:08.183366) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 20 14:39:08 compute-1 ceph-mon[81775]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 39] Try to delete WAL files size 524240, prev total WAL file size 524240, number of live WAL files 2.
Jan 20 14:39:08 compute-1 ceph-mon[81775]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000064.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 14:39:08 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:39:08.183921) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031303032' seq:72057594037927935, type:22 .. '6D6772737461740031323534' seq:0, type:0; will stop at (end)
Jan 20 14:39:08 compute-1 ceph-mon[81775]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 40] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 20 14:39:08 compute-1 ceph-mon[81775]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 39 Base level 0, inputs: [68(278KB)], [66(11MB)]
Jan 20 14:39:08 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768919948183951, "job": 40, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [68], "files_L6": [66], "score": -1, "input_data_size": 12222156, "oldest_snapshot_seqno": -1}
Jan 20 14:39:08 compute-1 ceph-mon[81775]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 40] Generated table #69: 6056 keys, 8440354 bytes, temperature: kUnknown
Jan 20 14:39:08 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768919948291051, "cf_name": "default", "job": 40, "event": "table_file_creation", "file_number": 69, "file_size": 8440354, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8400921, "index_size": 23195, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15173, "raw_key_size": 155690, "raw_average_key_size": 25, "raw_value_size": 8293075, "raw_average_value_size": 1369, "num_data_blocks": 926, "num_entries": 6056, "num_filter_entries": 6056, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768917474, "oldest_key_time": 0, "file_creation_time": 1768919948, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1539d774-8a6f-4e48-b253-137c44586344", "db_session_id": "LFF7G2OZDOU7TKQ8MKAH", "orig_file_number": 69, "seqno_to_time_mapping": "N/A"}}
Jan 20 14:39:08 compute-1 ceph-mon[81775]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 14:39:08 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:39:08.291263) [db/compaction/compaction_job.cc:1663] [default] [JOB 40] Compacted 1@0 + 1@6 files to L6 => 8440354 bytes
Jan 20 14:39:08 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:39:08.412925) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 114.1 rd, 78.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.3, 11.4 +0.0 blob) out(8.0 +0.0 blob), read-write-amplify(72.4) write-amplify(29.6) OK, records in: 6563, records dropped: 507 output_compression: NoCompression
Jan 20 14:39:08 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:39:08.412990) EVENT_LOG_v1 {"time_micros": 1768919948412966, "job": 40, "event": "compaction_finished", "compaction_time_micros": 107163, "compaction_time_cpu_micros": 38022, "output_level": 6, "num_output_files": 1, "total_output_size": 8440354, "num_input_records": 6563, "num_output_records": 6056, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 20 14:39:08 compute-1 ceph-mon[81775]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000068.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 14:39:08 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768919948413427, "job": 40, "event": "table_file_deletion", "file_number": 68}
Jan 20 14:39:08 compute-1 ceph-mon[81775]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000066.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 14:39:08 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768919948417772, "job": 40, "event": "table_file_deletion", "file_number": 66}
Jan 20 14:39:08 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:39:08.183831) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:39:08 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:39:08.417832) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:39:08 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:39:08.417839) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:39:08 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:39:08.417841) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:39:08 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:39:08.417843) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:39:08 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:39:08.417845) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:39:08 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:39:08 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:39:08 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:39:08.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:39:08 compute-1 ceph-mon[81775]: pgmap v1523: 321 pgs: 321 active+clean; 232 MiB data, 659 MiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 2.3 MiB/s wr, 169 op/s
Jan 20 14:39:08 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/1148979419' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:39:09 compute-1 nova_compute[225855]: 2026-01-20 14:39:09.542 225859 DEBUG nova.compute.manager [req-b11ea75b-9b5c-4d46-8fd4-897161483b4b req-ebadc5b0-918c-4248-b1e0-fa1a0d134c52 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: c12d0bd2-ff69-4827-a5a0-8bf5e44094f7] Received event network-vif-plugged-722de795-61c5-4a11-ade3-6c19621e1054 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:39:09 compute-1 nova_compute[225855]: 2026-01-20 14:39:09.543 225859 DEBUG oslo_concurrency.lockutils [req-b11ea75b-9b5c-4d46-8fd4-897161483b4b req-ebadc5b0-918c-4248-b1e0-fa1a0d134c52 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "c12d0bd2-ff69-4827-a5a0-8bf5e44094f7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:39:09 compute-1 nova_compute[225855]: 2026-01-20 14:39:09.543 225859 DEBUG oslo_concurrency.lockutils [req-b11ea75b-9b5c-4d46-8fd4-897161483b4b req-ebadc5b0-918c-4248-b1e0-fa1a0d134c52 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "c12d0bd2-ff69-4827-a5a0-8bf5e44094f7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:39:09 compute-1 nova_compute[225855]: 2026-01-20 14:39:09.543 225859 DEBUG oslo_concurrency.lockutils [req-b11ea75b-9b5c-4d46-8fd4-897161483b4b req-ebadc5b0-918c-4248-b1e0-fa1a0d134c52 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "c12d0bd2-ff69-4827-a5a0-8bf5e44094f7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:39:09 compute-1 nova_compute[225855]: 2026-01-20 14:39:09.544 225859 DEBUG nova.compute.manager [req-b11ea75b-9b5c-4d46-8fd4-897161483b4b req-ebadc5b0-918c-4248-b1e0-fa1a0d134c52 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: c12d0bd2-ff69-4827-a5a0-8bf5e44094f7] No waiting events found dispatching network-vif-plugged-722de795-61c5-4a11-ade3-6c19621e1054 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:39:09 compute-1 nova_compute[225855]: 2026-01-20 14:39:09.544 225859 WARNING nova.compute.manager [req-b11ea75b-9b5c-4d46-8fd4-897161483b4b req-ebadc5b0-918c-4248-b1e0-fa1a0d134c52 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: c12d0bd2-ff69-4827-a5a0-8bf5e44094f7] Received unexpected event network-vif-plugged-722de795-61c5-4a11-ade3-6c19621e1054 for instance with vm_state active and task_state None.
Jan 20 14:39:09 compute-1 NetworkManager[49104]: <info>  [1768919949.5682] manager: (patch-br-int-to-provnet-b62c391b-f7a3-4a38-a0df-72ac0383ca74): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/84)
Jan 20 14:39:09 compute-1 NetworkManager[49104]: <info>  [1768919949.5698] manager: (patch-provnet-b62c391b-f7a3-4a38-a0df-72ac0383ca74-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/85)
Jan 20 14:39:09 compute-1 nova_compute[225855]: 2026-01-20 14:39:09.567 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:39:09 compute-1 nova_compute[225855]: 2026-01-20 14:39:09.763 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:39:09 compute-1 ovn_controller[130490]: 2026-01-20T14:39:09Z|00189|binding|INFO|Releasing lport f398fb65-c4f7-4041-baf6-a23646124813 from this chassis (sb_readonly=0)
Jan 20 14:39:09 compute-1 nova_compute[225855]: 2026-01-20 14:39:09.798 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:39:09 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:39:09 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:39:09 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:39:09.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:39:09 compute-1 ceph-mon[81775]: pgmap v1524: 321 pgs: 321 active+clean; 245 MiB data, 668 MiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 3.6 MiB/s wr, 174 op/s
Jan 20 14:39:09 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/3539387829' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:39:10 compute-1 nova_compute[225855]: 2026-01-20 14:39:10.052 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:39:10 compute-1 nova_compute[225855]: 2026-01-20 14:39:10.419 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:39:10 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:39:10 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:39:10 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:39:10 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:39:10.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:39:11 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/1234018470' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:39:11 compute-1 nova_compute[225855]: 2026-01-20 14:39:11.686 225859 DEBUG nova.compute.manager [req-597f19f7-5dd2-493f-a826-3d3a02e718a7 req-49ec243d-2387-49f9-9ba5-04ba91dd1c72 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: c12d0bd2-ff69-4827-a5a0-8bf5e44094f7] Received event network-changed-722de795-61c5-4a11-ade3-6c19621e1054 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:39:11 compute-1 nova_compute[225855]: 2026-01-20 14:39:11.687 225859 DEBUG nova.compute.manager [req-597f19f7-5dd2-493f-a826-3d3a02e718a7 req-49ec243d-2387-49f9-9ba5-04ba91dd1c72 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: c12d0bd2-ff69-4827-a5a0-8bf5e44094f7] Refreshing instance network info cache due to event network-changed-722de795-61c5-4a11-ade3-6c19621e1054. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 14:39:11 compute-1 nova_compute[225855]: 2026-01-20 14:39:11.687 225859 DEBUG oslo_concurrency.lockutils [req-597f19f7-5dd2-493f-a826-3d3a02e718a7 req-49ec243d-2387-49f9-9ba5-04ba91dd1c72 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-c12d0bd2-ff69-4827-a5a0-8bf5e44094f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:39:11 compute-1 nova_compute[225855]: 2026-01-20 14:39:11.687 225859 DEBUG oslo_concurrency.lockutils [req-597f19f7-5dd2-493f-a826-3d3a02e718a7 req-49ec243d-2387-49f9-9ba5-04ba91dd1c72 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-c12d0bd2-ff69-4827-a5a0-8bf5e44094f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:39:11 compute-1 nova_compute[225855]: 2026-01-20 14:39:11.688 225859 DEBUG nova.network.neutron [req-597f19f7-5dd2-493f-a826-3d3a02e718a7 req-49ec243d-2387-49f9-9ba5-04ba91dd1c72 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: c12d0bd2-ff69-4827-a5a0-8bf5e44094f7] Refreshing network info cache for port 722de795-61c5-4a11-ade3-6c19621e1054 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 14:39:11 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:39:11 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:39:11 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:39:11.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:39:12 compute-1 ceph-mon[81775]: pgmap v1525: 321 pgs: 321 active+clean; 268 MiB data, 683 MiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 5.2 MiB/s wr, 224 op/s
Jan 20 14:39:12 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:39:12 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:39:12 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:39:12.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:39:13 compute-1 nova_compute[225855]: 2026-01-20 14:39:13.203 225859 DEBUG nova.network.neutron [req-597f19f7-5dd2-493f-a826-3d3a02e718a7 req-49ec243d-2387-49f9-9ba5-04ba91dd1c72 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: c12d0bd2-ff69-4827-a5a0-8bf5e44094f7] Updated VIF entry in instance network info cache for port 722de795-61c5-4a11-ade3-6c19621e1054. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 14:39:13 compute-1 nova_compute[225855]: 2026-01-20 14:39:13.211 225859 DEBUG nova.network.neutron [req-597f19f7-5dd2-493f-a826-3d3a02e718a7 req-49ec243d-2387-49f9-9ba5-04ba91dd1c72 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: c12d0bd2-ff69-4827-a5a0-8bf5e44094f7] Updating instance_info_cache with network_info: [{"id": "722de795-61c5-4a11-ade3-6c19621e1054", "address": "fa:16:3e:be:4b:da", "network": {"id": "c5a9008d-9eea-43f2-a495-bf2e645a81fb", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-2127234184-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4dadd5f5212f432693d35e765126f4df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap722de795-61", "ovs_interfaceid": "722de795-61c5-4a11-ade3-6c19621e1054", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:39:13 compute-1 nova_compute[225855]: 2026-01-20 14:39:13.242 225859 DEBUG oslo_concurrency.lockutils [req-597f19f7-5dd2-493f-a826-3d3a02e718a7 req-49ec243d-2387-49f9-9ba5-04ba91dd1c72 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-c12d0bd2-ff69-4827-a5a0-8bf5e44094f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:39:13 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/3028375435' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:39:13 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:39:13 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:39:13 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:39:13.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:39:14 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/2354170290' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 14:39:14 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/2354170290' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 14:39:14 compute-1 ceph-mon[81775]: pgmap v1526: 321 pgs: 321 active+clean; 281 MiB data, 690 MiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 5.8 MiB/s wr, 246 op/s
Jan 20 14:39:14 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:39:14 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:39:14 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:39:14.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:39:15 compute-1 nova_compute[225855]: 2026-01-20 14:39:15.055 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:39:15 compute-1 nova_compute[225855]: 2026-01-20 14:39:15.459 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:39:15 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:39:15 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:39:15 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:39:15.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:39:15 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:39:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:39:16.399 140354 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:39:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:39:16.400 140354 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:39:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:39:16.401 140354 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:39:16 compute-1 ceph-mon[81775]: pgmap v1527: 321 pgs: 321 active+clean; 329 MiB data, 717 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 6.8 MiB/s wr, 232 op/s
Jan 20 14:39:16 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:39:16 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:39:16 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:39:16.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:39:17 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/3294190681' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:39:17 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/248444674' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:39:17 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:39:17 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:39:17 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:39:17.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:39:18 compute-1 podman[251132]: 2026-01-20 14:39:18.052191511 +0000 UTC m=+0.085704370 container health_status 72be95c12e041eb2cd1a16ad70dfa1391d9f03a5f3180a614ead9459dcfcd46f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 20 14:39:18 compute-1 ceph-mon[81775]: pgmap v1528: 321 pgs: 321 active+clean; 339 MiB data, 725 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 6.1 MiB/s wr, 230 op/s
Jan 20 14:39:18 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:39:18 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:39:18 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:39:18.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:39:19 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:39:19 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:39:19 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:39:19.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:39:20 compute-1 nova_compute[225855]: 2026-01-20 14:39:20.054 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:39:20 compute-1 ovn_controller[130490]: 2026-01-20T14:39:20Z|00024|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:be:4b:da 10.100.0.4
Jan 20 14:39:20 compute-1 ovn_controller[130490]: 2026-01-20T14:39:20Z|00025|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:be:4b:da 10.100.0.4
Jan 20 14:39:20 compute-1 nova_compute[225855]: 2026-01-20 14:39:20.509 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:39:20 compute-1 ceph-mon[81775]: pgmap v1529: 321 pgs: 321 active+clean; 348 MiB data, 725 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 6.3 MiB/s wr, 217 op/s
Jan 20 14:39:20 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:39:20 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:39:20 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:39:20 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:39:20.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:39:21 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:39:21 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:39:21 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:39:21.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:39:21 compute-1 ceph-mon[81775]: pgmap v1530: 321 pgs: 321 active+clean; 352 MiB data, 732 MiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 5.5 MiB/s wr, 223 op/s
Jan 20 14:39:22 compute-1 sudo[251162]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:39:22 compute-1 sudo[251162]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:39:22 compute-1 sudo[251162]: pam_unix(sudo:session): session closed for user root
Jan 20 14:39:22 compute-1 sudo[251187]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:39:22 compute-1 sudo[251187]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:39:22 compute-1 sudo[251187]: pam_unix(sudo:session): session closed for user root
Jan 20 14:39:22 compute-1 sudo[251212]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:39:22 compute-1 sudo[251212]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:39:22 compute-1 sudo[251212]: pam_unix(sudo:session): session closed for user root
Jan 20 14:39:22 compute-1 sudo[251237]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Jan 20 14:39:22 compute-1 sudo[251237]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:39:22 compute-1 nova_compute[225855]: 2026-01-20 14:39:22.375 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:39:22 compute-1 sudo[251237]: pam_unix(sudo:session): session closed for user root
Jan 20 14:39:22 compute-1 sudo[251283]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:39:22 compute-1 sudo[251283]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:39:22 compute-1 sudo[251283]: pam_unix(sudo:session): session closed for user root
Jan 20 14:39:22 compute-1 sudo[251308]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:39:22 compute-1 sudo[251308]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:39:22 compute-1 sudo[251308]: pam_unix(sudo:session): session closed for user root
Jan 20 14:39:22 compute-1 sudo[251333]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:39:22 compute-1 sudo[251333]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:39:22 compute-1 sudo[251333]: pam_unix(sudo:session): session closed for user root
Jan 20 14:39:22 compute-1 sudo[251358]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 20 14:39:22 compute-1 sudo[251358]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:39:22 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:39:22 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:39:22 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:39:22.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:39:23 compute-1 sudo[251358]: pam_unix(sudo:session): session closed for user root
Jan 20 14:39:23 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:39:23 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:39:23 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:39:23 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:39:23 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 20 14:39:23 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 20 14:39:23 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:39:23 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:39:23 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:39:23.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:39:24 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:39:24 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 14:39:24 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:39:24 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 14:39:24 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 14:39:24 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:39:24 compute-1 ceph-mon[81775]: pgmap v1531: 321 pgs: 321 active+clean; 363 MiB data, 741 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 4.7 MiB/s wr, 171 op/s
Jan 20 14:39:24 compute-1 sudo[251416]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:39:24 compute-1 sudo[251416]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:39:24 compute-1 sudo[251416]: pam_unix(sudo:session): session closed for user root
Jan 20 14:39:24 compute-1 sudo[251441]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:39:24 compute-1 sudo[251441]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:39:24 compute-1 sudo[251441]: pam_unix(sudo:session): session closed for user root
Jan 20 14:39:24 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:39:24 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:39:24 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:39:24.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:39:25 compute-1 nova_compute[225855]: 2026-01-20 14:39:25.056 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:39:25 compute-1 nova_compute[225855]: 2026-01-20 14:39:25.554 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:39:25 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:39:25 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:39:25 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:39:25.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:39:25 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:39:26 compute-1 nova_compute[225855]: 2026-01-20 14:39:26.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:39:26 compute-1 nova_compute[225855]: 2026-01-20 14:39:26.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:39:26 compute-1 nova_compute[225855]: 2026-01-20 14:39:26.340 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:39:26 compute-1 nova_compute[225855]: 2026-01-20 14:39:26.340 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 14:39:26 compute-1 ceph-mon[81775]: pgmap v1532: 321 pgs: 321 active+clean; 372 MiB data, 750 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 4.2 MiB/s wr, 189 op/s
Jan 20 14:39:26 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:39:26 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:39:26 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:39:26.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:39:27 compute-1 nova_compute[225855]: 2026-01-20 14:39:27.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:39:27 compute-1 nova_compute[225855]: 2026-01-20 14:39:27.340 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 14:39:27 compute-1 nova_compute[225855]: 2026-01-20 14:39:27.340 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 20 14:39:27 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:39:27 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:39:27 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:39:27.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:39:28 compute-1 nova_compute[225855]: 2026-01-20 14:39:28.274 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "refresh_cache-c12d0bd2-ff69-4827-a5a0-8bf5e44094f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:39:28 compute-1 nova_compute[225855]: 2026-01-20 14:39:28.274 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquired lock "refresh_cache-c12d0bd2-ff69-4827-a5a0-8bf5e44094f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:39:28 compute-1 nova_compute[225855]: 2026-01-20 14:39:28.274 225859 DEBUG nova.network.neutron [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] [instance: c12d0bd2-ff69-4827-a5a0-8bf5e44094f7] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 20 14:39:28 compute-1 nova_compute[225855]: 2026-01-20 14:39:28.274 225859 DEBUG nova.objects.instance [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lazy-loading 'info_cache' on Instance uuid c12d0bd2-ff69-4827-a5a0-8bf5e44094f7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:39:28 compute-1 nova_compute[225855]: 2026-01-20 14:39:28.687 225859 DEBUG oslo_concurrency.lockutils [None req-b4b8d585-b3c9-4471-ae80-c1c039e1eef1 ddee6eb6c32d451ca50c9ea499a23c1a 4dadd5f5212f432693d35e765126f4df - - default default] Acquiring lock "c12d0bd2-ff69-4827-a5a0-8bf5e44094f7" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:39:28 compute-1 nova_compute[225855]: 2026-01-20 14:39:28.687 225859 DEBUG oslo_concurrency.lockutils [None req-b4b8d585-b3c9-4471-ae80-c1c039e1eef1 ddee6eb6c32d451ca50c9ea499a23c1a 4dadd5f5212f432693d35e765126f4df - - default default] Lock "c12d0bd2-ff69-4827-a5a0-8bf5e44094f7" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:39:28 compute-1 nova_compute[225855]: 2026-01-20 14:39:28.688 225859 DEBUG oslo_concurrency.lockutils [None req-b4b8d585-b3c9-4471-ae80-c1c039e1eef1 ddee6eb6c32d451ca50c9ea499a23c1a 4dadd5f5212f432693d35e765126f4df - - default default] Acquiring lock "c12d0bd2-ff69-4827-a5a0-8bf5e44094f7-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:39:28 compute-1 nova_compute[225855]: 2026-01-20 14:39:28.688 225859 DEBUG oslo_concurrency.lockutils [None req-b4b8d585-b3c9-4471-ae80-c1c039e1eef1 ddee6eb6c32d451ca50c9ea499a23c1a 4dadd5f5212f432693d35e765126f4df - - default default] Lock "c12d0bd2-ff69-4827-a5a0-8bf5e44094f7-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:39:28 compute-1 nova_compute[225855]: 2026-01-20 14:39:28.688 225859 DEBUG oslo_concurrency.lockutils [None req-b4b8d585-b3c9-4471-ae80-c1c039e1eef1 ddee6eb6c32d451ca50c9ea499a23c1a 4dadd5f5212f432693d35e765126f4df - - default default] Lock "c12d0bd2-ff69-4827-a5a0-8bf5e44094f7-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:39:28 compute-1 nova_compute[225855]: 2026-01-20 14:39:28.689 225859 INFO nova.compute.manager [None req-b4b8d585-b3c9-4471-ae80-c1c039e1eef1 ddee6eb6c32d451ca50c9ea499a23c1a 4dadd5f5212f432693d35e765126f4df - - default default] [instance: c12d0bd2-ff69-4827-a5a0-8bf5e44094f7] Terminating instance
Jan 20 14:39:28 compute-1 nova_compute[225855]: 2026-01-20 14:39:28.691 225859 DEBUG nova.compute.manager [None req-b4b8d585-b3c9-4471-ae80-c1c039e1eef1 ddee6eb6c32d451ca50c9ea499a23c1a 4dadd5f5212f432693d35e765126f4df - - default default] [instance: c12d0bd2-ff69-4827-a5a0-8bf5e44094f7] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 20 14:39:28 compute-1 ceph-mon[81775]: pgmap v1533: 321 pgs: 321 active+clean; 372 MiB data, 750 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.8 MiB/s wr, 153 op/s
Jan 20 14:39:28 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/148360181' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:39:28 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/148624311' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:39:28 compute-1 kernel: tap722de795-61 (unregistering): left promiscuous mode
Jan 20 14:39:28 compute-1 NetworkManager[49104]: <info>  [1768919968.7992] device (tap722de795-61): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 20 14:39:28 compute-1 nova_compute[225855]: 2026-01-20 14:39:28.849 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:39:28 compute-1 ovn_controller[130490]: 2026-01-20T14:39:28Z|00190|binding|INFO|Releasing lport 722de795-61c5-4a11-ade3-6c19621e1054 from this chassis (sb_readonly=0)
Jan 20 14:39:28 compute-1 ovn_controller[130490]: 2026-01-20T14:39:28Z|00191|binding|INFO|Setting lport 722de795-61c5-4a11-ade3-6c19621e1054 down in Southbound
Jan 20 14:39:28 compute-1 ovn_controller[130490]: 2026-01-20T14:39:28Z|00192|binding|INFO|Removing iface tap722de795-61 ovn-installed in OVS
Jan 20 14:39:28 compute-1 nova_compute[225855]: 2026-01-20 14:39:28.852 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:39:28 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:39:28.870 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:be:4b:da 10.100.0.4'], port_security=['fa:16:3e:be:4b:da 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'c12d0bd2-ff69-4827-a5a0-8bf5e44094f7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c5a9008d-9eea-43f2-a495-bf2e645a81fb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4dadd5f5212f432693d35e765126f4df', 'neutron:revision_number': '4', 'neutron:security_group_ids': '333884db-2591-4fb4-b140-3b52543605e8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com', 'neutron:port_fip': '192.168.122.191'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2b4a1cca-fd45-4fc9-bc45-b55a7f22b84a, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], logical_port=722de795-61c5-4a11-ade3-6c19621e1054) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:39:28 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:39:28.872 140354 INFO neutron.agent.ovn.metadata.agent [-] Port 722de795-61c5-4a11-ade3-6c19621e1054 in datapath c5a9008d-9eea-43f2-a495-bf2e645a81fb unbound from our chassis
Jan 20 14:39:28 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:39:28.873 140354 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c5a9008d-9eea-43f2-a495-bf2e645a81fb, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 20 14:39:28 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:39:28.874 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[98d30263-ee8a-44bf-ac37-1043477b9bbc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:39:28 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:39:28.875 140354 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-c5a9008d-9eea-43f2-a495-bf2e645a81fb namespace which is not needed anymore
Jan 20 14:39:28 compute-1 nova_compute[225855]: 2026-01-20 14:39:28.884 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:39:28 compute-1 systemd[1]: machine-qemu\x2d28\x2dinstance\x2d0000003f.scope: Deactivated successfully.
Jan 20 14:39:28 compute-1 systemd[1]: machine-qemu\x2d28\x2dinstance\x2d0000003f.scope: Consumed 13.404s CPU time.
Jan 20 14:39:28 compute-1 systemd-machined[194361]: Machine qemu-28-instance-0000003f terminated.
Jan 20 14:39:28 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:39:28 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:39:28 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:39:28.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:39:28 compute-1 podman[251468]: 2026-01-20 14:39:28.963930164 +0000 UTC m=+0.078530475 container health_status 533cd8e0c3b8b7f910f15cef49e7f879ab46c12ce646a35628c15f13725eb500 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 20 14:39:28 compute-1 neutron-haproxy-ovnmeta-c5a9008d-9eea-43f2-a495-bf2e645a81fb[251107]: [NOTICE]   (251112) : haproxy version is 2.8.14-c23fe91
Jan 20 14:39:28 compute-1 neutron-haproxy-ovnmeta-c5a9008d-9eea-43f2-a495-bf2e645a81fb[251107]: [NOTICE]   (251112) : path to executable is /usr/sbin/haproxy
Jan 20 14:39:28 compute-1 neutron-haproxy-ovnmeta-c5a9008d-9eea-43f2-a495-bf2e645a81fb[251107]: [WARNING]  (251112) : Exiting Master process...
Jan 20 14:39:28 compute-1 neutron-haproxy-ovnmeta-c5a9008d-9eea-43f2-a495-bf2e645a81fb[251107]: [WARNING]  (251112) : Exiting Master process...
Jan 20 14:39:28 compute-1 neutron-haproxy-ovnmeta-c5a9008d-9eea-43f2-a495-bf2e645a81fb[251107]: [ALERT]    (251112) : Current worker (251116) exited with code 143 (Terminated)
Jan 20 14:39:28 compute-1 neutron-haproxy-ovnmeta-c5a9008d-9eea-43f2-a495-bf2e645a81fb[251107]: [WARNING]  (251112) : All workers exited. Exiting... (0)
Jan 20 14:39:28 compute-1 systemd[1]: libpod-940eb6ae8742c65c6962999621b5cbe09e821ea1435acfd2e7c484c1e998c809.scope: Deactivated successfully.
Jan 20 14:39:29 compute-1 podman[251510]: 2026-01-20 14:39:29.005784069 +0000 UTC m=+0.043591326 container died 940eb6ae8742c65c6962999621b5cbe09e821ea1435acfd2e7c484c1e998c809 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c5a9008d-9eea-43f2-a495-bf2e645a81fb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 20 14:39:29 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-940eb6ae8742c65c6962999621b5cbe09e821ea1435acfd2e7c484c1e998c809-userdata-shm.mount: Deactivated successfully.
Jan 20 14:39:29 compute-1 systemd[1]: var-lib-containers-storage-overlay-ced956dcc0e0a49545b20b665d5cc72b5ad48cfb671bd1d0627044fdd6837f48-merged.mount: Deactivated successfully.
Jan 20 14:39:29 compute-1 podman[251510]: 2026-01-20 14:39:29.042719265 +0000 UTC m=+0.080526522 container cleanup 940eb6ae8742c65c6962999621b5cbe09e821ea1435acfd2e7c484c1e998c809 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c5a9008d-9eea-43f2-a495-bf2e645a81fb, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:39:29 compute-1 systemd[1]: libpod-conmon-940eb6ae8742c65c6962999621b5cbe09e821ea1435acfd2e7c484c1e998c809.scope: Deactivated successfully.
Jan 20 14:39:29 compute-1 podman[251541]: 2026-01-20 14:39:29.102660613 +0000 UTC m=+0.040498258 container remove 940eb6ae8742c65c6962999621b5cbe09e821ea1435acfd2e7c484c1e998c809 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c5a9008d-9eea-43f2-a495-bf2e645a81fb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 20 14:39:29 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:39:29.110 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[54423d33-90ab-4c7a-b4a1-6a687ae99e5a]: (4, ('Tue Jan 20 02:39:28 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-c5a9008d-9eea-43f2-a495-bf2e645a81fb (940eb6ae8742c65c6962999621b5cbe09e821ea1435acfd2e7c484c1e998c809)\n940eb6ae8742c65c6962999621b5cbe09e821ea1435acfd2e7c484c1e998c809\nTue Jan 20 02:39:29 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-c5a9008d-9eea-43f2-a495-bf2e645a81fb (940eb6ae8742c65c6962999621b5cbe09e821ea1435acfd2e7c484c1e998c809)\n940eb6ae8742c65c6962999621b5cbe09e821ea1435acfd2e7c484c1e998c809\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:39:29 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:39:29.112 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[4b4871bb-e188-4b00-b11e-6b6e519fa423]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:39:29 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:39:29.113 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc5a9008d-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:39:29 compute-1 nova_compute[225855]: 2026-01-20 14:39:29.115 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:39:29 compute-1 nova_compute[225855]: 2026-01-20 14:39:29.132 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:39:29 compute-1 kernel: tapc5a9008d-90: left promiscuous mode
Jan 20 14:39:29 compute-1 nova_compute[225855]: 2026-01-20 14:39:29.133 225859 INFO nova.virt.libvirt.driver [-] [instance: c12d0bd2-ff69-4827-a5a0-8bf5e44094f7] Instance destroyed successfully.
Jan 20 14:39:29 compute-1 nova_compute[225855]: 2026-01-20 14:39:29.133 225859 DEBUG nova.objects.instance [None req-b4b8d585-b3c9-4471-ae80-c1c039e1eef1 ddee6eb6c32d451ca50c9ea499a23c1a 4dadd5f5212f432693d35e765126f4df - - default default] Lazy-loading 'resources' on Instance uuid c12d0bd2-ff69-4827-a5a0-8bf5e44094f7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:39:29 compute-1 nova_compute[225855]: 2026-01-20 14:39:29.137 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:39:29 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:39:29.140 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[d28e9fa4-31bb-4420-a0f3-2f1529c2f1cc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:39:29 compute-1 nova_compute[225855]: 2026-01-20 14:39:29.147 225859 DEBUG nova.virt.libvirt.vif [None req-b4b8d585-b3c9-4471-ae80-c1c039e1eef1 ddee6eb6c32d451ca50c9ea499a23c1a 4dadd5f5212f432693d35e765126f4df - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T14:38:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestManualDisk-server-1848412501',display_name='tempest-ServersTestManualDisk-server-1848412501',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverstestmanualdisk-server-1848412501',id=63,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBN1H3ohoasKNqW9/qCu5grGBs3IV04fg9gxxB4grih5Zd5WxPzj2gaQyovrov9cUlcTcdLXAKoF+QUCFPVVxhI1Y4NXPI0qz/O7wrYwAYL2Je6ImmzeATRgxmFMwN+zj/A==',key_name='tempest-keypair-1139962663',keypairs=<?>,launch_index=0,launched_at=2026-01-20T14:39:07Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='4dadd5f5212f432693d35e765126f4df',ramdisk_id='',reservation_id='r-l9tyz3ue',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestManualDisk-665340674',owner_user_name='tempest-ServersTestManualDisk-665340674-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T14:39:07Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ddee6eb6c32d451ca50c9ea499a23c1a',uuid=c12d0bd2-ff69-4827-a5a0-8bf5e44094f7,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "722de795-61c5-4a11-ade3-6c19621e1054", "address": "fa:16:3e:be:4b:da", "network": {"id": "c5a9008d-9eea-43f2-a495-bf2e645a81fb", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-2127234184-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": 
[], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4dadd5f5212f432693d35e765126f4df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap722de795-61", "ovs_interfaceid": "722de795-61c5-4a11-ade3-6c19621e1054", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 20 14:39:29 compute-1 nova_compute[225855]: 2026-01-20 14:39:29.148 225859 DEBUG nova.network.os_vif_util [None req-b4b8d585-b3c9-4471-ae80-c1c039e1eef1 ddee6eb6c32d451ca50c9ea499a23c1a 4dadd5f5212f432693d35e765126f4df - - default default] Converting VIF {"id": "722de795-61c5-4a11-ade3-6c19621e1054", "address": "fa:16:3e:be:4b:da", "network": {"id": "c5a9008d-9eea-43f2-a495-bf2e645a81fb", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-2127234184-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4dadd5f5212f432693d35e765126f4df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap722de795-61", "ovs_interfaceid": "722de795-61c5-4a11-ade3-6c19621e1054", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:39:29 compute-1 nova_compute[225855]: 2026-01-20 14:39:29.149 225859 DEBUG nova.network.os_vif_util [None req-b4b8d585-b3c9-4471-ae80-c1c039e1eef1 ddee6eb6c32d451ca50c9ea499a23c1a 4dadd5f5212f432693d35e765126f4df - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:be:4b:da,bridge_name='br-int',has_traffic_filtering=True,id=722de795-61c5-4a11-ade3-6c19621e1054,network=Network(c5a9008d-9eea-43f2-a495-bf2e645a81fb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap722de795-61') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:39:29 compute-1 nova_compute[225855]: 2026-01-20 14:39:29.149 225859 DEBUG os_vif [None req-b4b8d585-b3c9-4471-ae80-c1c039e1eef1 ddee6eb6c32d451ca50c9ea499a23c1a 4dadd5f5212f432693d35e765126f4df - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:be:4b:da,bridge_name='br-int',has_traffic_filtering=True,id=722de795-61c5-4a11-ade3-6c19621e1054,network=Network(c5a9008d-9eea-43f2-a495-bf2e645a81fb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap722de795-61') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 20 14:39:29 compute-1 nova_compute[225855]: 2026-01-20 14:39:29.152 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:39:29 compute-1 nova_compute[225855]: 2026-01-20 14:39:29.152 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap722de795-61, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:39:29 compute-1 nova_compute[225855]: 2026-01-20 14:39:29.154 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:39:29 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:39:29.155 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[d15dcefd-4d3d-4229-9d9e-0c242e4e8b9d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:39:29 compute-1 nova_compute[225855]: 2026-01-20 14:39:29.156 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 14:39:29 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:39:29.156 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[aad12c12-1571-4a45-8d28-68dcbc697565]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:39:29 compute-1 nova_compute[225855]: 2026-01-20 14:39:29.157 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:39:29 compute-1 nova_compute[225855]: 2026-01-20 14:39:29.161 225859 INFO os_vif [None req-b4b8d585-b3c9-4471-ae80-c1c039e1eef1 ddee6eb6c32d451ca50c9ea499a23c1a 4dadd5f5212f432693d35e765126f4df - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:be:4b:da,bridge_name='br-int',has_traffic_filtering=True,id=722de795-61c5-4a11-ade3-6c19621e1054,network=Network(c5a9008d-9eea-43f2-a495-bf2e645a81fb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap722de795-61')
Jan 20 14:39:29 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:39:29.172 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[5d84890d-9f8c-4c37-a145-aa4f0b57330c]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 498575, 'reachable_time': 35908, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 251571, 'error': None, 'target': 'ovnmeta-c5a9008d-9eea-43f2-a495-bf2e645a81fb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:39:29 compute-1 systemd[1]: run-netns-ovnmeta\x2dc5a9008d\x2d9eea\x2d43f2\x2da495\x2dbf2e645a81fb.mount: Deactivated successfully.
Jan 20 14:39:29 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:39:29.176 140466 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-c5a9008d-9eea-43f2-a495-bf2e645a81fb deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 20 14:39:29 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:39:29.176 140466 DEBUG oslo.privsep.daemon [-] privsep: reply[9c3c907e-3157-413a-b6e5-e892c16d526c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:39:29 compute-1 nova_compute[225855]: 2026-01-20 14:39:29.541 225859 INFO nova.virt.libvirt.driver [None req-b4b8d585-b3c9-4471-ae80-c1c039e1eef1 ddee6eb6c32d451ca50c9ea499a23c1a 4dadd5f5212f432693d35e765126f4df - - default default] [instance: c12d0bd2-ff69-4827-a5a0-8bf5e44094f7] Deleting instance files /var/lib/nova/instances/c12d0bd2-ff69-4827-a5a0-8bf5e44094f7_del
Jan 20 14:39:29 compute-1 nova_compute[225855]: 2026-01-20 14:39:29.542 225859 INFO nova.virt.libvirt.driver [None req-b4b8d585-b3c9-4471-ae80-c1c039e1eef1 ddee6eb6c32d451ca50c9ea499a23c1a 4dadd5f5212f432693d35e765126f4df - - default default] [instance: c12d0bd2-ff69-4827-a5a0-8bf5e44094f7] Deletion of /var/lib/nova/instances/c12d0bd2-ff69-4827-a5a0-8bf5e44094f7_del complete
Jan 20 14:39:29 compute-1 nova_compute[225855]: 2026-01-20 14:39:29.596 225859 INFO nova.compute.manager [None req-b4b8d585-b3c9-4471-ae80-c1c039e1eef1 ddee6eb6c32d451ca50c9ea499a23c1a 4dadd5f5212f432693d35e765126f4df - - default default] [instance: c12d0bd2-ff69-4827-a5a0-8bf5e44094f7] Took 0.91 seconds to destroy the instance on the hypervisor.
Jan 20 14:39:29 compute-1 nova_compute[225855]: 2026-01-20 14:39:29.597 225859 DEBUG oslo.service.loopingcall [None req-b4b8d585-b3c9-4471-ae80-c1c039e1eef1 ddee6eb6c32d451ca50c9ea499a23c1a 4dadd5f5212f432693d35e765126f4df - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 20 14:39:29 compute-1 nova_compute[225855]: 2026-01-20 14:39:29.598 225859 DEBUG nova.compute.manager [-] [instance: c12d0bd2-ff69-4827-a5a0-8bf5e44094f7] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 20 14:39:29 compute-1 nova_compute[225855]: 2026-01-20 14:39:29.598 225859 DEBUG nova.network.neutron [-] [instance: c12d0bd2-ff69-4827-a5a0-8bf5e44094f7] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 20 14:39:29 compute-1 sudo[251592]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:39:29 compute-1 sudo[251592]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:39:29 compute-1 sudo[251592]: pam_unix(sudo:session): session closed for user root
Jan 20 14:39:29 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/1337503018' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:39:29 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/3331275778' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:39:29 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:39:29 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:39:29 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/1254287090' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:39:29 compute-1 sudo[251617]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 14:39:29 compute-1 sudo[251617]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:39:29 compute-1 sudo[251617]: pam_unix(sudo:session): session closed for user root
Jan 20 14:39:29 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:39:29 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:39:29 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:39:29.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:39:29 compute-1 nova_compute[225855]: 2026-01-20 14:39:29.918 225859 DEBUG nova.network.neutron [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] [instance: c12d0bd2-ff69-4827-a5a0-8bf5e44094f7] Updating instance_info_cache with network_info: [{"id": "722de795-61c5-4a11-ade3-6c19621e1054", "address": "fa:16:3e:be:4b:da", "network": {"id": "c5a9008d-9eea-43f2-a495-bf2e645a81fb", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-2127234184-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4dadd5f5212f432693d35e765126f4df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap722de795-61", "ovs_interfaceid": "722de795-61c5-4a11-ade3-6c19621e1054", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:39:29 compute-1 nova_compute[225855]: 2026-01-20 14:39:29.936 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Releasing lock "refresh_cache-c12d0bd2-ff69-4827-a5a0-8bf5e44094f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:39:29 compute-1 nova_compute[225855]: 2026-01-20 14:39:29.937 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] [instance: c12d0bd2-ff69-4827-a5a0-8bf5e44094f7] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 20 14:39:29 compute-1 nova_compute[225855]: 2026-01-20 14:39:29.937 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:39:29 compute-1 nova_compute[225855]: 2026-01-20 14:39:29.937 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:39:29 compute-1 nova_compute[225855]: 2026-01-20 14:39:29.940 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:39:29 compute-1 nova_compute[225855]: 2026-01-20 14:39:29.961 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:39:29 compute-1 nova_compute[225855]: 2026-01-20 14:39:29.961 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:39:29 compute-1 nova_compute[225855]: 2026-01-20 14:39:29.962 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:39:29 compute-1 nova_compute[225855]: 2026-01-20 14:39:29.962 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 14:39:29 compute-1 nova_compute[225855]: 2026-01-20 14:39:29.963 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:39:30 compute-1 nova_compute[225855]: 2026-01-20 14:39:30.059 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:39:30 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:39:30 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/837437866' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:39:30 compute-1 nova_compute[225855]: 2026-01-20 14:39:30.403 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:39:30 compute-1 nova_compute[225855]: 2026-01-20 14:39:30.570 225859 WARNING nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 14:39:30 compute-1 nova_compute[225855]: 2026-01-20 14:39:30.571 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4593MB free_disk=20.83075714111328GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 14:39:30 compute-1 nova_compute[225855]: 2026-01-20 14:39:30.572 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:39:30 compute-1 nova_compute[225855]: 2026-01-20 14:39:30.572 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:39:30 compute-1 nova_compute[225855]: 2026-01-20 14:39:30.649 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Instance c12d0bd2-ff69-4827-a5a0-8bf5e44094f7 actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 20 14:39:30 compute-1 nova_compute[225855]: 2026-01-20 14:39:30.650 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 14:39:30 compute-1 nova_compute[225855]: 2026-01-20 14:39:30.650 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 14:39:30 compute-1 nova_compute[225855]: 2026-01-20 14:39:30.668 225859 DEBUG nova.network.neutron [-] [instance: c12d0bd2-ff69-4827-a5a0-8bf5e44094f7] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:39:30 compute-1 nova_compute[225855]: 2026-01-20 14:39:30.696 225859 INFO nova.compute.manager [-] [instance: c12d0bd2-ff69-4827-a5a0-8bf5e44094f7] Took 1.10 seconds to deallocate network for instance.
Jan 20 14:39:30 compute-1 nova_compute[225855]: 2026-01-20 14:39:30.736 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:39:30 compute-1 nova_compute[225855]: 2026-01-20 14:39:30.772 225859 DEBUG oslo_concurrency.lockutils [None req-b4b8d585-b3c9-4471-ae80-c1c039e1eef1 ddee6eb6c32d451ca50c9ea499a23c1a 4dadd5f5212f432693d35e765126f4df - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:39:30 compute-1 ceph-mon[81775]: pgmap v1534: 321 pgs: 321 active+clean; 372 MiB data, 750 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.2 MiB/s wr, 143 op/s
Jan 20 14:39:30 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/837437866' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:39:30 compute-1 nova_compute[225855]: 2026-01-20 14:39:30.808 225859 DEBUG nova.compute.manager [req-70eb112b-a7c4-414a-9961-505215ac1462 req-dfcc9e80-7dae-4df3-86c7-2d4281107499 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: c12d0bd2-ff69-4827-a5a0-8bf5e44094f7] Received event network-vif-deleted-722de795-61c5-4a11-ade3-6c19621e1054 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:39:30 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:39:30 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:39:30 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:39:30 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:39:30.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:39:31 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:39:31 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/219743222' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:39:31 compute-1 nova_compute[225855]: 2026-01-20 14:39:31.138 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.403s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:39:31 compute-1 nova_compute[225855]: 2026-01-20 14:39:31.144 225859 DEBUG nova.compute.provider_tree [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:39:31 compute-1 nova_compute[225855]: 2026-01-20 14:39:31.163 225859 DEBUG nova.scheduler.client.report [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:39:31 compute-1 nova_compute[225855]: 2026-01-20 14:39:31.193 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 14:39:31 compute-1 nova_compute[225855]: 2026-01-20 14:39:31.193 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.621s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:39:31 compute-1 nova_compute[225855]: 2026-01-20 14:39:31.193 225859 DEBUG oslo_concurrency.lockutils [None req-b4b8d585-b3c9-4471-ae80-c1c039e1eef1 ddee6eb6c32d451ca50c9ea499a23c1a 4dadd5f5212f432693d35e765126f4df - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.421s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:39:31 compute-1 nova_compute[225855]: 2026-01-20 14:39:31.231 225859 DEBUG nova.compute.manager [req-5567d833-541e-4ddc-9254-13c63f724321 req-2f09011f-d27c-4140-a389-8f7136e7da16 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: c12d0bd2-ff69-4827-a5a0-8bf5e44094f7] Received event network-vif-unplugged-722de795-61c5-4a11-ade3-6c19621e1054 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:39:31 compute-1 nova_compute[225855]: 2026-01-20 14:39:31.231 225859 DEBUG oslo_concurrency.lockutils [req-5567d833-541e-4ddc-9254-13c63f724321 req-2f09011f-d27c-4140-a389-8f7136e7da16 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "c12d0bd2-ff69-4827-a5a0-8bf5e44094f7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:39:31 compute-1 nova_compute[225855]: 2026-01-20 14:39:31.231 225859 DEBUG oslo_concurrency.lockutils [req-5567d833-541e-4ddc-9254-13c63f724321 req-2f09011f-d27c-4140-a389-8f7136e7da16 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "c12d0bd2-ff69-4827-a5a0-8bf5e44094f7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:39:31 compute-1 nova_compute[225855]: 2026-01-20 14:39:31.231 225859 DEBUG oslo_concurrency.lockutils [req-5567d833-541e-4ddc-9254-13c63f724321 req-2f09011f-d27c-4140-a389-8f7136e7da16 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "c12d0bd2-ff69-4827-a5a0-8bf5e44094f7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:39:31 compute-1 nova_compute[225855]: 2026-01-20 14:39:31.232 225859 DEBUG nova.compute.manager [req-5567d833-541e-4ddc-9254-13c63f724321 req-2f09011f-d27c-4140-a389-8f7136e7da16 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: c12d0bd2-ff69-4827-a5a0-8bf5e44094f7] No waiting events found dispatching network-vif-unplugged-722de795-61c5-4a11-ade3-6c19621e1054 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:39:31 compute-1 nova_compute[225855]: 2026-01-20 14:39:31.232 225859 WARNING nova.compute.manager [req-5567d833-541e-4ddc-9254-13c63f724321 req-2f09011f-d27c-4140-a389-8f7136e7da16 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: c12d0bd2-ff69-4827-a5a0-8bf5e44094f7] Received unexpected event network-vif-unplugged-722de795-61c5-4a11-ade3-6c19621e1054 for instance with vm_state deleted and task_state None.
Jan 20 14:39:31 compute-1 nova_compute[225855]: 2026-01-20 14:39:31.232 225859 DEBUG nova.compute.manager [req-5567d833-541e-4ddc-9254-13c63f724321 req-2f09011f-d27c-4140-a389-8f7136e7da16 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: c12d0bd2-ff69-4827-a5a0-8bf5e44094f7] Received event network-vif-plugged-722de795-61c5-4a11-ade3-6c19621e1054 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:39:31 compute-1 nova_compute[225855]: 2026-01-20 14:39:31.232 225859 DEBUG oslo_concurrency.lockutils [req-5567d833-541e-4ddc-9254-13c63f724321 req-2f09011f-d27c-4140-a389-8f7136e7da16 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "c12d0bd2-ff69-4827-a5a0-8bf5e44094f7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:39:31 compute-1 nova_compute[225855]: 2026-01-20 14:39:31.232 225859 DEBUG oslo_concurrency.lockutils [req-5567d833-541e-4ddc-9254-13c63f724321 req-2f09011f-d27c-4140-a389-8f7136e7da16 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "c12d0bd2-ff69-4827-a5a0-8bf5e44094f7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:39:31 compute-1 nova_compute[225855]: 2026-01-20 14:39:31.233 225859 DEBUG oslo_concurrency.lockutils [req-5567d833-541e-4ddc-9254-13c63f724321 req-2f09011f-d27c-4140-a389-8f7136e7da16 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "c12d0bd2-ff69-4827-a5a0-8bf5e44094f7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:39:31 compute-1 nova_compute[225855]: 2026-01-20 14:39:31.233 225859 DEBUG nova.compute.manager [req-5567d833-541e-4ddc-9254-13c63f724321 req-2f09011f-d27c-4140-a389-8f7136e7da16 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: c12d0bd2-ff69-4827-a5a0-8bf5e44094f7] No waiting events found dispatching network-vif-plugged-722de795-61c5-4a11-ade3-6c19621e1054 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:39:31 compute-1 nova_compute[225855]: 2026-01-20 14:39:31.233 225859 WARNING nova.compute.manager [req-5567d833-541e-4ddc-9254-13c63f724321 req-2f09011f-d27c-4140-a389-8f7136e7da16 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: c12d0bd2-ff69-4827-a5a0-8bf5e44094f7] Received unexpected event network-vif-plugged-722de795-61c5-4a11-ade3-6c19621e1054 for instance with vm_state deleted and task_state None.
Jan 20 14:39:31 compute-1 nova_compute[225855]: 2026-01-20 14:39:31.250 225859 DEBUG oslo_concurrency.processutils [None req-b4b8d585-b3c9-4471-ae80-c1c039e1eef1 ddee6eb6c32d451ca50c9ea499a23c1a 4dadd5f5212f432693d35e765126f4df - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:39:31 compute-1 nova_compute[225855]: 2026-01-20 14:39:31.670 225859 DEBUG oslo_concurrency.processutils [None req-b4b8d585-b3c9-4471-ae80-c1c039e1eef1 ddee6eb6c32d451ca50c9ea499a23c1a 4dadd5f5212f432693d35e765126f4df - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.420s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:39:31 compute-1 nova_compute[225855]: 2026-01-20 14:39:31.676 225859 DEBUG nova.compute.provider_tree [None req-b4b8d585-b3c9-4471-ae80-c1c039e1eef1 ddee6eb6c32d451ca50c9ea499a23c1a 4dadd5f5212f432693d35e765126f4df - - default default] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:39:31 compute-1 nova_compute[225855]: 2026-01-20 14:39:31.695 225859 DEBUG nova.scheduler.client.report [None req-b4b8d585-b3c9-4471-ae80-c1c039e1eef1 ddee6eb6c32d451ca50c9ea499a23c1a 4dadd5f5212f432693d35e765126f4df - - default default] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:39:31 compute-1 nova_compute[225855]: 2026-01-20 14:39:31.719 225859 DEBUG oslo_concurrency.lockutils [None req-b4b8d585-b3c9-4471-ae80-c1c039e1eef1 ddee6eb6c32d451ca50c9ea499a23c1a 4dadd5f5212f432693d35e765126f4df - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.525s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:39:31 compute-1 nova_compute[225855]: 2026-01-20 14:39:31.753 225859 INFO nova.scheduler.client.report [None req-b4b8d585-b3c9-4471-ae80-c1c039e1eef1 ddee6eb6c32d451ca50c9ea499a23c1a 4dadd5f5212f432693d35e765126f4df - - default default] Deleted allocations for instance c12d0bd2-ff69-4827-a5a0-8bf5e44094f7
Jan 20 14:39:31 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/219743222' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:39:31 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/722968545' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:39:31 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:39:31 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:39:31 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:39:31.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:39:32 compute-1 nova_compute[225855]: 2026-01-20 14:39:32.022 225859 DEBUG oslo_concurrency.lockutils [None req-b4b8d585-b3c9-4471-ae80-c1c039e1eef1 ddee6eb6c32d451ca50c9ea499a23c1a 4dadd5f5212f432693d35e765126f4df - - default default] Lock "c12d0bd2-ff69-4827-a5a0-8bf5e44094f7" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.334s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:39:32 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:39:32.079 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=22, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '12:bb:42', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '06:92:24:f7:15:56'}, ipsec=False) old=SB_Global(nb_cfg=21) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:39:32 compute-1 nova_compute[225855]: 2026-01-20 14:39:32.080 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:39:32 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:39:32.082 140354 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 20 14:39:32 compute-1 ceph-mon[81775]: pgmap v1535: 321 pgs: 321 active+clean; 337 MiB data, 750 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 1.5 MiB/s wr, 147 op/s
Jan 20 14:39:32 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/2096865750' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:39:32 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:39:32 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:39:32 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:39:32.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:39:33 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/2502978402' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:39:33 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:39:33 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:39:33 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:39:33.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:39:34 compute-1 nova_compute[225855]: 2026-01-20 14:39:34.154 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:39:34 compute-1 ceph-mon[81775]: pgmap v1536: 321 pgs: 321 active+clean; 282 MiB data, 717 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 919 KiB/s wr, 154 op/s
Jan 20 14:39:34 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/1735536733' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:39:34 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:39:34 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:39:34 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:39:34.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:39:35 compute-1 nova_compute[225855]: 2026-01-20 14:39:35.061 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:39:35 compute-1 nova_compute[225855]: 2026-01-20 14:39:35.189 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:39:35 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:39:35 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:39:35 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:39:35 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:39:35.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:39:36 compute-1 ceph-mon[81775]: pgmap v1537: 321 pgs: 321 active+clean; 177 MiB data, 657 MiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 81 KiB/s wr, 218 op/s
Jan 20 14:39:36 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/3025398922' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:39:36 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:39:36 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:39:36 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:39:36.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:39:37 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:39:37 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:39:37 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:39:37.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:39:38 compute-1 ceph-mon[81775]: pgmap v1538: 321 pgs: 321 active+clean; 167 MiB data, 650 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 34 KiB/s wr, 172 op/s
Jan 20 14:39:38 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:39:38 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:39:38 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:39:38.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:39:39 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:39:39.084 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5ffd4ac3-9266-4927-98ad-20a17782c725, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '22'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:39:39 compute-1 nova_compute[225855]: 2026-01-20 14:39:39.155 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:39:39 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:39:39 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:39:39 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:39:39.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:39:40 compute-1 nova_compute[225855]: 2026-01-20 14:39:40.063 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:39:40 compute-1 nova_compute[225855]: 2026-01-20 14:39:40.377 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:39:40 compute-1 nova_compute[225855]: 2026-01-20 14:39:40.588 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:39:40 compute-1 ceph-mon[81775]: pgmap v1539: 321 pgs: 321 active+clean; 167 MiB data, 650 MiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 22 KiB/s wr, 206 op/s
Jan 20 14:39:40 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:39:40 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:39:40 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:39:40 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:39:40.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:39:41 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:39:41 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:39:41 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:39:41.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:39:42 compute-1 ceph-mon[81775]: pgmap v1540: 321 pgs: 321 active+clean; 167 MiB data, 636 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 21 KiB/s wr, 229 op/s
Jan 20 14:39:42 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:39:42 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:39:42 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:39:42.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:39:43 compute-1 nova_compute[225855]: 2026-01-20 14:39:43.496 225859 DEBUG oslo_concurrency.lockutils [None req-bc9b2d72-ce4e-43cd-962e-9257da3c9038 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Acquiring lock "d5c2df9d-748f-4df2-9392-b45741975f65" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:39:43 compute-1 nova_compute[225855]: 2026-01-20 14:39:43.496 225859 DEBUG oslo_concurrency.lockutils [None req-bc9b2d72-ce4e-43cd-962e-9257da3c9038 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Lock "d5c2df9d-748f-4df2-9392-b45741975f65" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:39:43 compute-1 nova_compute[225855]: 2026-01-20 14:39:43.526 225859 DEBUG nova.compute.manager [None req-bc9b2d72-ce4e-43cd-962e-9257da3c9038 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] [instance: d5c2df9d-748f-4df2-9392-b45741975f65] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 20 14:39:43 compute-1 nova_compute[225855]: 2026-01-20 14:39:43.604 225859 DEBUG oslo_concurrency.lockutils [None req-bc9b2d72-ce4e-43cd-962e-9257da3c9038 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:39:43 compute-1 nova_compute[225855]: 2026-01-20 14:39:43.605 225859 DEBUG oslo_concurrency.lockutils [None req-bc9b2d72-ce4e-43cd-962e-9257da3c9038 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:39:43 compute-1 nova_compute[225855]: 2026-01-20 14:39:43.611 225859 DEBUG nova.virt.hardware [None req-bc9b2d72-ce4e-43cd-962e-9257da3c9038 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 20 14:39:43 compute-1 nova_compute[225855]: 2026-01-20 14:39:43.611 225859 INFO nova.compute.claims [None req-bc9b2d72-ce4e-43cd-962e-9257da3c9038 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] [instance: d5c2df9d-748f-4df2-9392-b45741975f65] Claim successful on node compute-1.ctlplane.example.com
Jan 20 14:39:43 compute-1 nova_compute[225855]: 2026-01-20 14:39:43.708 225859 DEBUG oslo_concurrency.processutils [None req-bc9b2d72-ce4e-43cd-962e-9257da3c9038 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:39:43 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:39:43 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:39:43 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:39:43.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:39:44 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:39:44 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1901219903' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:39:44 compute-1 nova_compute[225855]: 2026-01-20 14:39:44.129 225859 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1768919969.128188, c12d0bd2-ff69-4827-a5a0-8bf5e44094f7 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:39:44 compute-1 nova_compute[225855]: 2026-01-20 14:39:44.130 225859 INFO nova.compute.manager [-] [instance: c12d0bd2-ff69-4827-a5a0-8bf5e44094f7] VM Stopped (Lifecycle Event)
Jan 20 14:39:44 compute-1 nova_compute[225855]: 2026-01-20 14:39:44.145 225859 DEBUG oslo_concurrency.processutils [None req-bc9b2d72-ce4e-43cd-962e-9257da3c9038 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:39:44 compute-1 nova_compute[225855]: 2026-01-20 14:39:44.150 225859 DEBUG nova.compute.provider_tree [None req-bc9b2d72-ce4e-43cd-962e-9257da3c9038 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:39:44 compute-1 nova_compute[225855]: 2026-01-20 14:39:44.157 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:39:44 compute-1 nova_compute[225855]: 2026-01-20 14:39:44.730 225859 DEBUG nova.compute.manager [None req-00d7702b-bd2e-42a2-bfb2-eb448dd896eb - - - - - -] [instance: c12d0bd2-ff69-4827-a5a0-8bf5e44094f7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:39:44 compute-1 nova_compute[225855]: 2026-01-20 14:39:44.732 225859 DEBUG nova.scheduler.client.report [None req-bc9b2d72-ce4e-43cd-962e-9257da3c9038 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:39:44 compute-1 nova_compute[225855]: 2026-01-20 14:39:44.831 225859 DEBUG oslo_concurrency.lockutils [None req-bc9b2d72-ce4e-43cd-962e-9257da3c9038 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.226s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:39:44 compute-1 nova_compute[225855]: 2026-01-20 14:39:44.832 225859 DEBUG nova.compute.manager [None req-bc9b2d72-ce4e-43cd-962e-9257da3c9038 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] [instance: d5c2df9d-748f-4df2-9392-b45741975f65] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 20 14:39:44 compute-1 sudo[251739]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:39:44 compute-1 sudo[251739]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:39:44 compute-1 sudo[251739]: pam_unix(sudo:session): session closed for user root
Jan 20 14:39:44 compute-1 nova_compute[225855]: 2026-01-20 14:39:44.891 225859 DEBUG nova.compute.manager [None req-bc9b2d72-ce4e-43cd-962e-9257da3c9038 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] [instance: d5c2df9d-748f-4df2-9392-b45741975f65] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 20 14:39:44 compute-1 nova_compute[225855]: 2026-01-20 14:39:44.892 225859 DEBUG nova.network.neutron [None req-bc9b2d72-ce4e-43cd-962e-9257da3c9038 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] [instance: d5c2df9d-748f-4df2-9392-b45741975f65] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 20 14:39:44 compute-1 sudo[251764]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:39:44 compute-1 sudo[251764]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:39:44 compute-1 sudo[251764]: pam_unix(sudo:session): session closed for user root
Jan 20 14:39:44 compute-1 nova_compute[225855]: 2026-01-20 14:39:44.947 225859 INFO nova.virt.libvirt.driver [None req-bc9b2d72-ce4e-43cd-962e-9257da3c9038 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] [instance: d5c2df9d-748f-4df2-9392-b45741975f65] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 20 14:39:44 compute-1 nova_compute[225855]: 2026-01-20 14:39:44.963 225859 DEBUG nova.compute.manager [None req-bc9b2d72-ce4e-43cd-962e-9257da3c9038 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] [instance: d5c2df9d-748f-4df2-9392-b45741975f65] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 20 14:39:44 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:39:44 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:39:44 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:39:44.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:39:45 compute-1 nova_compute[225855]: 2026-01-20 14:39:45.064 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:39:45 compute-1 ceph-mon[81775]: pgmap v1541: 321 pgs: 321 active+clean; 167 MiB data, 636 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 22 KiB/s wr, 219 op/s
Jan 20 14:39:45 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/1901219903' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:39:45 compute-1 nova_compute[225855]: 2026-01-20 14:39:45.284 225859 DEBUG nova.compute.manager [None req-bc9b2d72-ce4e-43cd-962e-9257da3c9038 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] [instance: d5c2df9d-748f-4df2-9392-b45741975f65] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 20 14:39:45 compute-1 nova_compute[225855]: 2026-01-20 14:39:45.287 225859 DEBUG nova.virt.libvirt.driver [None req-bc9b2d72-ce4e-43cd-962e-9257da3c9038 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] [instance: d5c2df9d-748f-4df2-9392-b45741975f65] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 20 14:39:45 compute-1 nova_compute[225855]: 2026-01-20 14:39:45.288 225859 INFO nova.virt.libvirt.driver [None req-bc9b2d72-ce4e-43cd-962e-9257da3c9038 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] [instance: d5c2df9d-748f-4df2-9392-b45741975f65] Creating image(s)
Jan 20 14:39:45 compute-1 nova_compute[225855]: 2026-01-20 14:39:45.328 225859 DEBUG nova.storage.rbd_utils [None req-bc9b2d72-ce4e-43cd-962e-9257da3c9038 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] rbd image d5c2df9d-748f-4df2-9392-b45741975f65_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:39:45 compute-1 nova_compute[225855]: 2026-01-20 14:39:45.362 225859 DEBUG nova.storage.rbd_utils [None req-bc9b2d72-ce4e-43cd-962e-9257da3c9038 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] rbd image d5c2df9d-748f-4df2-9392-b45741975f65_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:39:45 compute-1 nova_compute[225855]: 2026-01-20 14:39:45.395 225859 DEBUG nova.storage.rbd_utils [None req-bc9b2d72-ce4e-43cd-962e-9257da3c9038 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] rbd image d5c2df9d-748f-4df2-9392-b45741975f65_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:39:45 compute-1 nova_compute[225855]: 2026-01-20 14:39:45.399 225859 DEBUG oslo_concurrency.processutils [None req-bc9b2d72-ce4e-43cd-962e-9257da3c9038 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:39:45 compute-1 nova_compute[225855]: 2026-01-20 14:39:45.431 225859 DEBUG nova.policy [None req-bc9b2d72-ce4e-43cd-962e-9257da3c9038 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'c8a9fb458d27434495a77a94827b6097', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'e3f93fd4b2154dda9f38e62334904303', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 20 14:39:45 compute-1 nova_compute[225855]: 2026-01-20 14:39:45.589 225859 DEBUG oslo_concurrency.processutils [None req-bc9b2d72-ce4e-43cd-962e-9257da3c9038 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json" returned: 0 in 0.190s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:39:45 compute-1 nova_compute[225855]: 2026-01-20 14:39:45.590 225859 DEBUG oslo_concurrency.lockutils [None req-bc9b2d72-ce4e-43cd-962e-9257da3c9038 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Acquiring lock "82d5c1918fd7c974214c7a48c1793a7a82560462" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:39:45 compute-1 nova_compute[225855]: 2026-01-20 14:39:45.591 225859 DEBUG oslo_concurrency.lockutils [None req-bc9b2d72-ce4e-43cd-962e-9257da3c9038 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:39:45 compute-1 nova_compute[225855]: 2026-01-20 14:39:45.591 225859 DEBUG oslo_concurrency.lockutils [None req-bc9b2d72-ce4e-43cd-962e-9257da3c9038 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:39:45 compute-1 nova_compute[225855]: 2026-01-20 14:39:45.615 225859 DEBUG nova.storage.rbd_utils [None req-bc9b2d72-ce4e-43cd-962e-9257da3c9038 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] rbd image d5c2df9d-748f-4df2-9392-b45741975f65_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:39:45 compute-1 nova_compute[225855]: 2026-01-20 14:39:45.619 225859 DEBUG oslo_concurrency.processutils [None req-bc9b2d72-ce4e-43cd-962e-9257da3c9038 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 d5c2df9d-748f-4df2-9392-b45741975f65_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:39:45 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:39:45 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:39:45 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:39:45 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:39:45.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:39:45 compute-1 nova_compute[225855]: 2026-01-20 14:39:45.940 225859 DEBUG oslo_concurrency.processutils [None req-bc9b2d72-ce4e-43cd-962e-9257da3c9038 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 d5c2df9d-748f-4df2-9392-b45741975f65_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.321s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:39:46 compute-1 nova_compute[225855]: 2026-01-20 14:39:46.054 225859 DEBUG nova.storage.rbd_utils [None req-bc9b2d72-ce4e-43cd-962e-9257da3c9038 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] resizing rbd image d5c2df9d-748f-4df2-9392-b45741975f65_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 20 14:39:46 compute-1 ceph-mon[81775]: pgmap v1542: 321 pgs: 321 active+clean; 167 MiB data, 636 MiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 19 KiB/s wr, 175 op/s
Jan 20 14:39:46 compute-1 nova_compute[225855]: 2026-01-20 14:39:46.595 225859 DEBUG nova.objects.instance [None req-bc9b2d72-ce4e-43cd-962e-9257da3c9038 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Lazy-loading 'migration_context' on Instance uuid d5c2df9d-748f-4df2-9392-b45741975f65 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:39:46 compute-1 nova_compute[225855]: 2026-01-20 14:39:46.616 225859 DEBUG nova.virt.libvirt.driver [None req-bc9b2d72-ce4e-43cd-962e-9257da3c9038 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] [instance: d5c2df9d-748f-4df2-9392-b45741975f65] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 20 14:39:46 compute-1 nova_compute[225855]: 2026-01-20 14:39:46.617 225859 DEBUG nova.virt.libvirt.driver [None req-bc9b2d72-ce4e-43cd-962e-9257da3c9038 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] [instance: d5c2df9d-748f-4df2-9392-b45741975f65] Ensure instance console log exists: /var/lib/nova/instances/d5c2df9d-748f-4df2-9392-b45741975f65/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 20 14:39:46 compute-1 nova_compute[225855]: 2026-01-20 14:39:46.618 225859 DEBUG oslo_concurrency.lockutils [None req-bc9b2d72-ce4e-43cd-962e-9257da3c9038 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:39:46 compute-1 nova_compute[225855]: 2026-01-20 14:39:46.618 225859 DEBUG oslo_concurrency.lockutils [None req-bc9b2d72-ce4e-43cd-962e-9257da3c9038 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:39:46 compute-1 nova_compute[225855]: 2026-01-20 14:39:46.619 225859 DEBUG oslo_concurrency.lockutils [None req-bc9b2d72-ce4e-43cd-962e-9257da3c9038 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:39:46 compute-1 nova_compute[225855]: 2026-01-20 14:39:46.910 225859 DEBUG nova.network.neutron [None req-bc9b2d72-ce4e-43cd-962e-9257da3c9038 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] [instance: d5c2df9d-748f-4df2-9392-b45741975f65] Successfully created port: b48170b0-717d-48f0-8172-742a4a8596e9 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 20 14:39:46 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:39:46 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:39:46 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:39:46.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:39:47 compute-1 nova_compute[225855]: 2026-01-20 14:39:47.792 225859 DEBUG nova.network.neutron [None req-bc9b2d72-ce4e-43cd-962e-9257da3c9038 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] [instance: d5c2df9d-748f-4df2-9392-b45741975f65] Successfully updated port: b48170b0-717d-48f0-8172-742a4a8596e9 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 20 14:39:47 compute-1 nova_compute[225855]: 2026-01-20 14:39:47.812 225859 DEBUG oslo_concurrency.lockutils [None req-bc9b2d72-ce4e-43cd-962e-9257da3c9038 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Acquiring lock "refresh_cache-d5c2df9d-748f-4df2-9392-b45741975f65" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:39:47 compute-1 nova_compute[225855]: 2026-01-20 14:39:47.812 225859 DEBUG oslo_concurrency.lockutils [None req-bc9b2d72-ce4e-43cd-962e-9257da3c9038 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Acquired lock "refresh_cache-d5c2df9d-748f-4df2-9392-b45741975f65" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:39:47 compute-1 nova_compute[225855]: 2026-01-20 14:39:47.812 225859 DEBUG nova.network.neutron [None req-bc9b2d72-ce4e-43cd-962e-9257da3c9038 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] [instance: d5c2df9d-748f-4df2-9392-b45741975f65] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 14:39:47 compute-1 nova_compute[225855]: 2026-01-20 14:39:47.912 225859 DEBUG nova.compute.manager [req-c581aa3d-73ba-44e7-aa5d-6b87cb19210d req-4b156c00-7c0c-43c2-8f0f-abb11063dbcf 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d5c2df9d-748f-4df2-9392-b45741975f65] Received event network-changed-b48170b0-717d-48f0-8172-742a4a8596e9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:39:47 compute-1 nova_compute[225855]: 2026-01-20 14:39:47.912 225859 DEBUG nova.compute.manager [req-c581aa3d-73ba-44e7-aa5d-6b87cb19210d req-4b156c00-7c0c-43c2-8f0f-abb11063dbcf 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d5c2df9d-748f-4df2-9392-b45741975f65] Refreshing instance network info cache due to event network-changed-b48170b0-717d-48f0-8172-742a4a8596e9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 14:39:47 compute-1 nova_compute[225855]: 2026-01-20 14:39:47.913 225859 DEBUG oslo_concurrency.lockutils [req-c581aa3d-73ba-44e7-aa5d-6b87cb19210d req-4b156c00-7c0c-43c2-8f0f-abb11063dbcf 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-d5c2df9d-748f-4df2-9392-b45741975f65" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:39:47 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:39:47 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:39:47 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:39:47.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:39:47 compute-1 nova_compute[225855]: 2026-01-20 14:39:47.981 225859 DEBUG nova.network.neutron [None req-bc9b2d72-ce4e-43cd-962e-9257da3c9038 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] [instance: d5c2df9d-748f-4df2-9392-b45741975f65] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 20 14:39:48 compute-1 ceph-mon[81775]: pgmap v1543: 321 pgs: 321 active+clean; 163 MiB data, 635 MiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 930 KiB/s wr, 83 op/s
Jan 20 14:39:48 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/1926474629' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:39:48 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:39:48 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:39:48 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:39:48.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:39:49 compute-1 podman[251957]: 2026-01-20 14:39:49.037449258 +0000 UTC m=+0.081546631 container health_status 72be95c12e041eb2cd1a16ad70dfa1391d9f03a5f3180a614ead9459dcfcd46f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 20 14:39:49 compute-1 nova_compute[225855]: 2026-01-20 14:39:49.158 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:39:49 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:39:49 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:39:49 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:39:49.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:39:50 compute-1 nova_compute[225855]: 2026-01-20 14:39:50.034 225859 DEBUG nova.network.neutron [None req-bc9b2d72-ce4e-43cd-962e-9257da3c9038 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] [instance: d5c2df9d-748f-4df2-9392-b45741975f65] Updating instance_info_cache with network_info: [{"id": "b48170b0-717d-48f0-8172-742a4a8596e9", "address": "fa:16:3e:3b:35:f2", "network": {"id": "fc21b99b-4e34-422c-be05-0a440009dac4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-808285772-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3f93fd4b2154dda9f38e62334904303", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb48170b0-71", "ovs_interfaceid": "b48170b0-717d-48f0-8172-742a4a8596e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:39:50 compute-1 nova_compute[225855]: 2026-01-20 14:39:50.050 225859 DEBUG oslo_concurrency.lockutils [None req-bc9b2d72-ce4e-43cd-962e-9257da3c9038 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Releasing lock "refresh_cache-d5c2df9d-748f-4df2-9392-b45741975f65" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:39:50 compute-1 nova_compute[225855]: 2026-01-20 14:39:50.051 225859 DEBUG nova.compute.manager [None req-bc9b2d72-ce4e-43cd-962e-9257da3c9038 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] [instance: d5c2df9d-748f-4df2-9392-b45741975f65] Instance network_info: |[{"id": "b48170b0-717d-48f0-8172-742a4a8596e9", "address": "fa:16:3e:3b:35:f2", "network": {"id": "fc21b99b-4e34-422c-be05-0a440009dac4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-808285772-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3f93fd4b2154dda9f38e62334904303", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb48170b0-71", "ovs_interfaceid": "b48170b0-717d-48f0-8172-742a4a8596e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 20 14:39:50 compute-1 nova_compute[225855]: 2026-01-20 14:39:50.051 225859 DEBUG oslo_concurrency.lockutils [req-c581aa3d-73ba-44e7-aa5d-6b87cb19210d req-4b156c00-7c0c-43c2-8f0f-abb11063dbcf 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-d5c2df9d-748f-4df2-9392-b45741975f65" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:39:50 compute-1 nova_compute[225855]: 2026-01-20 14:39:50.051 225859 DEBUG nova.network.neutron [req-c581aa3d-73ba-44e7-aa5d-6b87cb19210d req-4b156c00-7c0c-43c2-8f0f-abb11063dbcf 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d5c2df9d-748f-4df2-9392-b45741975f65] Refreshing network info cache for port b48170b0-717d-48f0-8172-742a4a8596e9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 14:39:50 compute-1 nova_compute[225855]: 2026-01-20 14:39:50.053 225859 DEBUG nova.virt.libvirt.driver [None req-bc9b2d72-ce4e-43cd-962e-9257da3c9038 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] [instance: d5c2df9d-748f-4df2-9392-b45741975f65] Start _get_guest_xml network_info=[{"id": "b48170b0-717d-48f0-8172-742a4a8596e9", "address": "fa:16:3e:3b:35:f2", "network": {"id": "fc21b99b-4e34-422c-be05-0a440009dac4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-808285772-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3f93fd4b2154dda9f38e62334904303", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb48170b0-71", "ovs_interfaceid": "b48170b0-717d-48f0-8172-742a4a8596e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'device_type': 'disk', 'encryption_options': None, 'size': 0, 'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'guest_format': None, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'image_id': 'a32b3e07-16d8-46fd-9a7b-c242c432fcf9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 20 14:39:50 compute-1 nova_compute[225855]: 2026-01-20 14:39:50.057 225859 WARNING nova.virt.libvirt.driver [None req-bc9b2d72-ce4e-43cd-962e-9257da3c9038 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 14:39:50 compute-1 nova_compute[225855]: 2026-01-20 14:39:50.062 225859 DEBUG nova.virt.libvirt.host [None req-bc9b2d72-ce4e-43cd-962e-9257da3c9038 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 20 14:39:50 compute-1 nova_compute[225855]: 2026-01-20 14:39:50.062 225859 DEBUG nova.virt.libvirt.host [None req-bc9b2d72-ce4e-43cd-962e-9257da3c9038 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 20 14:39:50 compute-1 nova_compute[225855]: 2026-01-20 14:39:50.065 225859 DEBUG nova.virt.libvirt.host [None req-bc9b2d72-ce4e-43cd-962e-9257da3c9038 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 20 14:39:50 compute-1 nova_compute[225855]: 2026-01-20 14:39:50.065 225859 DEBUG nova.virt.libvirt.host [None req-bc9b2d72-ce4e-43cd-962e-9257da3c9038 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 20 14:39:50 compute-1 nova_compute[225855]: 2026-01-20 14:39:50.066 225859 DEBUG nova.virt.libvirt.driver [None req-bc9b2d72-ce4e-43cd-962e-9257da3c9038 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 20 14:39:50 compute-1 nova_compute[225855]: 2026-01-20 14:39:50.066 225859 DEBUG nova.virt.hardware [None req-bc9b2d72-ce4e-43cd-962e-9257da3c9038 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-20T14:21:55Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='522deaab-a741-4dbb-932d-d8b13a211c33',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 20 14:39:50 compute-1 nova_compute[225855]: 2026-01-20 14:39:50.067 225859 DEBUG nova.virt.hardware [None req-bc9b2d72-ce4e-43cd-962e-9257da3c9038 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 20 14:39:50 compute-1 nova_compute[225855]: 2026-01-20 14:39:50.067 225859 DEBUG nova.virt.hardware [None req-bc9b2d72-ce4e-43cd-962e-9257da3c9038 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 20 14:39:50 compute-1 nova_compute[225855]: 2026-01-20 14:39:50.067 225859 DEBUG nova.virt.hardware [None req-bc9b2d72-ce4e-43cd-962e-9257da3c9038 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 20 14:39:50 compute-1 nova_compute[225855]: 2026-01-20 14:39:50.067 225859 DEBUG nova.virt.hardware [None req-bc9b2d72-ce4e-43cd-962e-9257da3c9038 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 20 14:39:50 compute-1 nova_compute[225855]: 2026-01-20 14:39:50.067 225859 DEBUG nova.virt.hardware [None req-bc9b2d72-ce4e-43cd-962e-9257da3c9038 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 20 14:39:50 compute-1 nova_compute[225855]: 2026-01-20 14:39:50.067 225859 DEBUG nova.virt.hardware [None req-bc9b2d72-ce4e-43cd-962e-9257da3c9038 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 20 14:39:50 compute-1 nova_compute[225855]: 2026-01-20 14:39:50.068 225859 DEBUG nova.virt.hardware [None req-bc9b2d72-ce4e-43cd-962e-9257da3c9038 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 20 14:39:50 compute-1 nova_compute[225855]: 2026-01-20 14:39:50.068 225859 DEBUG nova.virt.hardware [None req-bc9b2d72-ce4e-43cd-962e-9257da3c9038 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 20 14:39:50 compute-1 nova_compute[225855]: 2026-01-20 14:39:50.068 225859 DEBUG nova.virt.hardware [None req-bc9b2d72-ce4e-43cd-962e-9257da3c9038 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 20 14:39:50 compute-1 nova_compute[225855]: 2026-01-20 14:39:50.068 225859 DEBUG nova.virt.hardware [None req-bc9b2d72-ce4e-43cd-962e-9257da3c9038 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 20 14:39:50 compute-1 nova_compute[225855]: 2026-01-20 14:39:50.070 225859 DEBUG oslo_concurrency.processutils [None req-bc9b2d72-ce4e-43cd-962e-9257da3c9038 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:39:50 compute-1 nova_compute[225855]: 2026-01-20 14:39:50.093 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:39:50 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:39:50 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4224668871' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:39:50 compute-1 nova_compute[225855]: 2026-01-20 14:39:50.479 225859 DEBUG oslo_concurrency.processutils [None req-bc9b2d72-ce4e-43cd-962e-9257da3c9038 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.409s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:39:50 compute-1 nova_compute[225855]: 2026-01-20 14:39:50.508 225859 DEBUG nova.storage.rbd_utils [None req-bc9b2d72-ce4e-43cd-962e-9257da3c9038 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] rbd image d5c2df9d-748f-4df2-9392-b45741975f65_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:39:50 compute-1 nova_compute[225855]: 2026-01-20 14:39:50.512 225859 DEBUG oslo_concurrency.processutils [None req-bc9b2d72-ce4e-43cd-962e-9257da3c9038 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:39:50 compute-1 ceph-mon[81775]: pgmap v1544: 321 pgs: 321 active+clean; 166 MiB data, 634 MiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 2.4 MiB/s wr, 132 op/s
Jan 20 14:39:50 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/4224668871' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:39:50 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:39:50 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:39:50 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2535468181' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:39:50 compute-1 nova_compute[225855]: 2026-01-20 14:39:50.970 225859 DEBUG oslo_concurrency.processutils [None req-bc9b2d72-ce4e-43cd-962e-9257da3c9038 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:39:50 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:39:50 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:39:50 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:39:50.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:39:50 compute-1 nova_compute[225855]: 2026-01-20 14:39:50.973 225859 DEBUG nova.virt.libvirt.vif [None req-bc9b2d72-ce4e-43cd-962e-9257da3c9038 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T14:39:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-2088149366',display_name='tempest-AttachInterfacesTestJSON-server-2088149366',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-2088149366',id=66,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJZwLcyXdmuPk9iAZlMOAxeFM3EdHKE0x5nJT3i2GTbVf6EkhYVj3hEmoeSwYo6iZrNjT6w/g2TndK4CzLIvGDWLEyKfIPgg2vbEtoL1oIxCHYN2ytrctbkHi1netydaRQ==',key_name='tempest-keypair-230916378',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e3f93fd4b2154dda9f38e62334904303',ramdisk_id='',reservation_id='r-odubqy1o',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesTestJSON-305746947',owner_user_name='tempest-AttachInterfacesTestJSON-305746947-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T14:39:44Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c8a9fb458d27434495a77a94827b6097',uuid=d5c2df9d-748f-4df2-9392-b45741975f65,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b48170b0-717d-48f0-8172-742a4a8596e9", "address": "fa:16:3e:3b:35:f2", "network": {"id": "fc21b99b-4e34-422c-be05-0a440009dac4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-808285772-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3f93fd4b2154dda9f38e62334904303", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb48170b0-71", "ovs_interfaceid": "b48170b0-717d-48f0-8172-742a4a8596e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 20 14:39:50 compute-1 nova_compute[225855]: 2026-01-20 14:39:50.974 225859 DEBUG nova.network.os_vif_util [None req-bc9b2d72-ce4e-43cd-962e-9257da3c9038 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Converting VIF {"id": "b48170b0-717d-48f0-8172-742a4a8596e9", "address": "fa:16:3e:3b:35:f2", "network": {"id": "fc21b99b-4e34-422c-be05-0a440009dac4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-808285772-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3f93fd4b2154dda9f38e62334904303", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb48170b0-71", "ovs_interfaceid": "b48170b0-717d-48f0-8172-742a4a8596e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:39:50 compute-1 nova_compute[225855]: 2026-01-20 14:39:50.975 225859 DEBUG nova.network.os_vif_util [None req-bc9b2d72-ce4e-43cd-962e-9257da3c9038 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3b:35:f2,bridge_name='br-int',has_traffic_filtering=True,id=b48170b0-717d-48f0-8172-742a4a8596e9,network=Network(fc21b99b-4e34-422c-be05-0a440009dac4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb48170b0-71') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:39:50 compute-1 nova_compute[225855]: 2026-01-20 14:39:50.977 225859 DEBUG nova.objects.instance [None req-bc9b2d72-ce4e-43cd-962e-9257da3c9038 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Lazy-loading 'pci_devices' on Instance uuid d5c2df9d-748f-4df2-9392-b45741975f65 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:39:51 compute-1 nova_compute[225855]: 2026-01-20 14:39:50.999 225859 DEBUG nova.virt.libvirt.driver [None req-bc9b2d72-ce4e-43cd-962e-9257da3c9038 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] [instance: d5c2df9d-748f-4df2-9392-b45741975f65] End _get_guest_xml xml=<domain type="kvm">
Jan 20 14:39:51 compute-1 nova_compute[225855]:   <uuid>d5c2df9d-748f-4df2-9392-b45741975f65</uuid>
Jan 20 14:39:51 compute-1 nova_compute[225855]:   <name>instance-00000042</name>
Jan 20 14:39:51 compute-1 nova_compute[225855]:   <memory>131072</memory>
Jan 20 14:39:51 compute-1 nova_compute[225855]:   <vcpu>1</vcpu>
Jan 20 14:39:51 compute-1 nova_compute[225855]:   <metadata>
Jan 20 14:39:51 compute-1 nova_compute[225855]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 14:39:51 compute-1 nova_compute[225855]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 14:39:51 compute-1 nova_compute[225855]:       <nova:name>tempest-AttachInterfacesTestJSON-server-2088149366</nova:name>
Jan 20 14:39:51 compute-1 nova_compute[225855]:       <nova:creationTime>2026-01-20 14:39:50</nova:creationTime>
Jan 20 14:39:51 compute-1 nova_compute[225855]:       <nova:flavor name="m1.nano">
Jan 20 14:39:51 compute-1 nova_compute[225855]:         <nova:memory>128</nova:memory>
Jan 20 14:39:51 compute-1 nova_compute[225855]:         <nova:disk>1</nova:disk>
Jan 20 14:39:51 compute-1 nova_compute[225855]:         <nova:swap>0</nova:swap>
Jan 20 14:39:51 compute-1 nova_compute[225855]:         <nova:ephemeral>0</nova:ephemeral>
Jan 20 14:39:51 compute-1 nova_compute[225855]:         <nova:vcpus>1</nova:vcpus>
Jan 20 14:39:51 compute-1 nova_compute[225855]:       </nova:flavor>
Jan 20 14:39:51 compute-1 nova_compute[225855]:       <nova:owner>
Jan 20 14:39:51 compute-1 nova_compute[225855]:         <nova:user uuid="c8a9fb458d27434495a77a94827b6097">tempest-AttachInterfacesTestJSON-305746947-project-member</nova:user>
Jan 20 14:39:51 compute-1 nova_compute[225855]:         <nova:project uuid="e3f93fd4b2154dda9f38e62334904303">tempest-AttachInterfacesTestJSON-305746947</nova:project>
Jan 20 14:39:51 compute-1 nova_compute[225855]:       </nova:owner>
Jan 20 14:39:51 compute-1 nova_compute[225855]:       <nova:root type="image" uuid="a32b3e07-16d8-46fd-9a7b-c242c432fcf9"/>
Jan 20 14:39:51 compute-1 nova_compute[225855]:       <nova:ports>
Jan 20 14:39:51 compute-1 nova_compute[225855]:         <nova:port uuid="b48170b0-717d-48f0-8172-742a4a8596e9">
Jan 20 14:39:51 compute-1 nova_compute[225855]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Jan 20 14:39:51 compute-1 nova_compute[225855]:         </nova:port>
Jan 20 14:39:51 compute-1 nova_compute[225855]:       </nova:ports>
Jan 20 14:39:51 compute-1 nova_compute[225855]:     </nova:instance>
Jan 20 14:39:51 compute-1 nova_compute[225855]:   </metadata>
Jan 20 14:39:51 compute-1 nova_compute[225855]:   <sysinfo type="smbios">
Jan 20 14:39:51 compute-1 nova_compute[225855]:     <system>
Jan 20 14:39:51 compute-1 nova_compute[225855]:       <entry name="manufacturer">RDO</entry>
Jan 20 14:39:51 compute-1 nova_compute[225855]:       <entry name="product">OpenStack Compute</entry>
Jan 20 14:39:51 compute-1 nova_compute[225855]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 14:39:51 compute-1 nova_compute[225855]:       <entry name="serial">d5c2df9d-748f-4df2-9392-b45741975f65</entry>
Jan 20 14:39:51 compute-1 nova_compute[225855]:       <entry name="uuid">d5c2df9d-748f-4df2-9392-b45741975f65</entry>
Jan 20 14:39:51 compute-1 nova_compute[225855]:       <entry name="family">Virtual Machine</entry>
Jan 20 14:39:51 compute-1 nova_compute[225855]:     </system>
Jan 20 14:39:51 compute-1 nova_compute[225855]:   </sysinfo>
Jan 20 14:39:51 compute-1 nova_compute[225855]:   <os>
Jan 20 14:39:51 compute-1 nova_compute[225855]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 20 14:39:51 compute-1 nova_compute[225855]:     <boot dev="hd"/>
Jan 20 14:39:51 compute-1 nova_compute[225855]:     <smbios mode="sysinfo"/>
Jan 20 14:39:51 compute-1 nova_compute[225855]:   </os>
Jan 20 14:39:51 compute-1 nova_compute[225855]:   <features>
Jan 20 14:39:51 compute-1 nova_compute[225855]:     <acpi/>
Jan 20 14:39:51 compute-1 nova_compute[225855]:     <apic/>
Jan 20 14:39:51 compute-1 nova_compute[225855]:     <vmcoreinfo/>
Jan 20 14:39:51 compute-1 nova_compute[225855]:   </features>
Jan 20 14:39:51 compute-1 nova_compute[225855]:   <clock offset="utc">
Jan 20 14:39:51 compute-1 nova_compute[225855]:     <timer name="pit" tickpolicy="delay"/>
Jan 20 14:39:51 compute-1 nova_compute[225855]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 20 14:39:51 compute-1 nova_compute[225855]:     <timer name="hpet" present="no"/>
Jan 20 14:39:51 compute-1 nova_compute[225855]:   </clock>
Jan 20 14:39:51 compute-1 nova_compute[225855]:   <cpu mode="custom" match="exact">
Jan 20 14:39:51 compute-1 nova_compute[225855]:     <model>Nehalem</model>
Jan 20 14:39:51 compute-1 nova_compute[225855]:     <topology sockets="1" cores="1" threads="1"/>
Jan 20 14:39:51 compute-1 nova_compute[225855]:   </cpu>
Jan 20 14:39:51 compute-1 nova_compute[225855]:   <devices>
Jan 20 14:39:51 compute-1 nova_compute[225855]:     <disk type="network" device="disk">
Jan 20 14:39:51 compute-1 nova_compute[225855]:       <driver type="raw" cache="none"/>
Jan 20 14:39:51 compute-1 nova_compute[225855]:       <source protocol="rbd" name="vms/d5c2df9d-748f-4df2-9392-b45741975f65_disk">
Jan 20 14:39:51 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:39:51 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:39:51 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:39:51 compute-1 nova_compute[225855]:       </source>
Jan 20 14:39:51 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 14:39:51 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:39:51 compute-1 nova_compute[225855]:       </auth>
Jan 20 14:39:51 compute-1 nova_compute[225855]:       <target dev="vda" bus="virtio"/>
Jan 20 14:39:51 compute-1 nova_compute[225855]:     </disk>
Jan 20 14:39:51 compute-1 nova_compute[225855]:     <disk type="network" device="cdrom">
Jan 20 14:39:51 compute-1 nova_compute[225855]:       <driver type="raw" cache="none"/>
Jan 20 14:39:51 compute-1 nova_compute[225855]:       <source protocol="rbd" name="vms/d5c2df9d-748f-4df2-9392-b45741975f65_disk.config">
Jan 20 14:39:51 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:39:51 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:39:51 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:39:51 compute-1 nova_compute[225855]:       </source>
Jan 20 14:39:51 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 14:39:51 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:39:51 compute-1 nova_compute[225855]:       </auth>
Jan 20 14:39:51 compute-1 nova_compute[225855]:       <target dev="sda" bus="sata"/>
Jan 20 14:39:51 compute-1 nova_compute[225855]:     </disk>
Jan 20 14:39:51 compute-1 nova_compute[225855]:     <interface type="ethernet">
Jan 20 14:39:51 compute-1 nova_compute[225855]:       <mac address="fa:16:3e:3b:35:f2"/>
Jan 20 14:39:51 compute-1 nova_compute[225855]:       <model type="virtio"/>
Jan 20 14:39:51 compute-1 nova_compute[225855]:       <driver name="vhost" rx_queue_size="512"/>
Jan 20 14:39:51 compute-1 nova_compute[225855]:       <mtu size="1442"/>
Jan 20 14:39:51 compute-1 nova_compute[225855]:       <target dev="tapb48170b0-71"/>
Jan 20 14:39:51 compute-1 nova_compute[225855]:     </interface>
Jan 20 14:39:51 compute-1 nova_compute[225855]:     <serial type="pty">
Jan 20 14:39:51 compute-1 nova_compute[225855]:       <log file="/var/lib/nova/instances/d5c2df9d-748f-4df2-9392-b45741975f65/console.log" append="off"/>
Jan 20 14:39:51 compute-1 nova_compute[225855]:     </serial>
Jan 20 14:39:51 compute-1 nova_compute[225855]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 20 14:39:51 compute-1 nova_compute[225855]:     <video>
Jan 20 14:39:51 compute-1 nova_compute[225855]:       <model type="virtio"/>
Jan 20 14:39:51 compute-1 nova_compute[225855]:     </video>
Jan 20 14:39:51 compute-1 nova_compute[225855]:     <input type="tablet" bus="usb"/>
Jan 20 14:39:51 compute-1 nova_compute[225855]:     <rng model="virtio">
Jan 20 14:39:51 compute-1 nova_compute[225855]:       <backend model="random">/dev/urandom</backend>
Jan 20 14:39:51 compute-1 nova_compute[225855]:     </rng>
Jan 20 14:39:51 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root"/>
Jan 20 14:39:51 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:39:51 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:39:51 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:39:51 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:39:51 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:39:51 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:39:51 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:39:51 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:39:51 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:39:51 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:39:51 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:39:51 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:39:51 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:39:51 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:39:51 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:39:51 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:39:51 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:39:51 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:39:51 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:39:51 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:39:51 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:39:51 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:39:51 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:39:51 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:39:51 compute-1 nova_compute[225855]:     <controller type="usb" index="0"/>
Jan 20 14:39:51 compute-1 nova_compute[225855]:     <memballoon model="virtio">
Jan 20 14:39:51 compute-1 nova_compute[225855]:       <stats period="10"/>
Jan 20 14:39:51 compute-1 nova_compute[225855]:     </memballoon>
Jan 20 14:39:51 compute-1 nova_compute[225855]:   </devices>
Jan 20 14:39:51 compute-1 nova_compute[225855]: </domain>
Jan 20 14:39:51 compute-1 nova_compute[225855]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 20 14:39:51 compute-1 nova_compute[225855]: 2026-01-20 14:39:51.001 225859 DEBUG nova.compute.manager [None req-bc9b2d72-ce4e-43cd-962e-9257da3c9038 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] [instance: d5c2df9d-748f-4df2-9392-b45741975f65] Preparing to wait for external event network-vif-plugged-b48170b0-717d-48f0-8172-742a4a8596e9 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 20 14:39:51 compute-1 nova_compute[225855]: 2026-01-20 14:39:51.001 225859 DEBUG oslo_concurrency.lockutils [None req-bc9b2d72-ce4e-43cd-962e-9257da3c9038 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Acquiring lock "d5c2df9d-748f-4df2-9392-b45741975f65-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:39:51 compute-1 nova_compute[225855]: 2026-01-20 14:39:51.001 225859 DEBUG oslo_concurrency.lockutils [None req-bc9b2d72-ce4e-43cd-962e-9257da3c9038 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Lock "d5c2df9d-748f-4df2-9392-b45741975f65-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:39:51 compute-1 nova_compute[225855]: 2026-01-20 14:39:51.002 225859 DEBUG oslo_concurrency.lockutils [None req-bc9b2d72-ce4e-43cd-962e-9257da3c9038 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Lock "d5c2df9d-748f-4df2-9392-b45741975f65-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:39:51 compute-1 nova_compute[225855]: 2026-01-20 14:39:51.002 225859 DEBUG nova.virt.libvirt.vif [None req-bc9b2d72-ce4e-43cd-962e-9257da3c9038 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T14:39:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-2088149366',display_name='tempest-AttachInterfacesTestJSON-server-2088149366',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-2088149366',id=66,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJZwLcyXdmuPk9iAZlMOAxeFM3EdHKE0x5nJT3i2GTbVf6EkhYVj3hEmoeSwYo6iZrNjT6w/g2TndK4CzLIvGDWLEyKfIPgg2vbEtoL1oIxCHYN2ytrctbkHi1netydaRQ==',key_name='tempest-keypair-230916378',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e3f93fd4b2154dda9f38e62334904303',ramdisk_id='',reservation_id='r-odubqy1o',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesTestJSON-305746947',owner_user_name='tempest-AttachInterfacesTestJSON-305746947-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T14:39:44Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c8a9fb458d27434495a77a94827b6097',uuid=d5c2df9d-748f-4df2-9392-b45741975f65,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b48170b0-717d-48f0-8172-742a4a8596e9", "address": "fa:16:3e:3b:35:f2", "network": {"id": "fc21b99b-4e34-422c-be05-0a440009dac4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-808285772-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3f93fd4b2154dda9f38e62334904303", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb48170b0-71", "ovs_interfaceid": "b48170b0-717d-48f0-8172-742a4a8596e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 20 14:39:51 compute-1 nova_compute[225855]: 2026-01-20 14:39:51.003 225859 DEBUG nova.network.os_vif_util [None req-bc9b2d72-ce4e-43cd-962e-9257da3c9038 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Converting VIF {"id": "b48170b0-717d-48f0-8172-742a4a8596e9", "address": "fa:16:3e:3b:35:f2", "network": {"id": "fc21b99b-4e34-422c-be05-0a440009dac4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-808285772-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3f93fd4b2154dda9f38e62334904303", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb48170b0-71", "ovs_interfaceid": "b48170b0-717d-48f0-8172-742a4a8596e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:39:51 compute-1 nova_compute[225855]: 2026-01-20 14:39:51.003 225859 DEBUG nova.network.os_vif_util [None req-bc9b2d72-ce4e-43cd-962e-9257da3c9038 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3b:35:f2,bridge_name='br-int',has_traffic_filtering=True,id=b48170b0-717d-48f0-8172-742a4a8596e9,network=Network(fc21b99b-4e34-422c-be05-0a440009dac4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb48170b0-71') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:39:51 compute-1 nova_compute[225855]: 2026-01-20 14:39:51.004 225859 DEBUG os_vif [None req-bc9b2d72-ce4e-43cd-962e-9257da3c9038 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3b:35:f2,bridge_name='br-int',has_traffic_filtering=True,id=b48170b0-717d-48f0-8172-742a4a8596e9,network=Network(fc21b99b-4e34-422c-be05-0a440009dac4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb48170b0-71') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 20 14:39:51 compute-1 nova_compute[225855]: 2026-01-20 14:39:51.005 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:39:51 compute-1 nova_compute[225855]: 2026-01-20 14:39:51.005 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:39:51 compute-1 nova_compute[225855]: 2026-01-20 14:39:51.006 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:39:51 compute-1 nova_compute[225855]: 2026-01-20 14:39:51.010 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:39:51 compute-1 nova_compute[225855]: 2026-01-20 14:39:51.011 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb48170b0-71, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:39:51 compute-1 nova_compute[225855]: 2026-01-20 14:39:51.012 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb48170b0-71, col_values=(('external_ids', {'iface-id': 'b48170b0-717d-48f0-8172-742a4a8596e9', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:3b:35:f2', 'vm-uuid': 'd5c2df9d-748f-4df2-9392-b45741975f65'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:39:51 compute-1 NetworkManager[49104]: <info>  [1768919991.0553] manager: (tapb48170b0-71): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/86)
Jan 20 14:39:51 compute-1 nova_compute[225855]: 2026-01-20 14:39:51.054 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:39:51 compute-1 nova_compute[225855]: 2026-01-20 14:39:51.059 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 14:39:51 compute-1 nova_compute[225855]: 2026-01-20 14:39:51.061 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:39:51 compute-1 nova_compute[225855]: 2026-01-20 14:39:51.062 225859 INFO os_vif [None req-bc9b2d72-ce4e-43cd-962e-9257da3c9038 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3b:35:f2,bridge_name='br-int',has_traffic_filtering=True,id=b48170b0-717d-48f0-8172-742a4a8596e9,network=Network(fc21b99b-4e34-422c-be05-0a440009dac4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb48170b0-71')
Jan 20 14:39:51 compute-1 nova_compute[225855]: 2026-01-20 14:39:51.137 225859 DEBUG nova.virt.libvirt.driver [None req-bc9b2d72-ce4e-43cd-962e-9257da3c9038 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:39:51 compute-1 nova_compute[225855]: 2026-01-20 14:39:51.138 225859 DEBUG nova.virt.libvirt.driver [None req-bc9b2d72-ce4e-43cd-962e-9257da3c9038 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:39:51 compute-1 nova_compute[225855]: 2026-01-20 14:39:51.138 225859 DEBUG nova.virt.libvirt.driver [None req-bc9b2d72-ce4e-43cd-962e-9257da3c9038 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] No VIF found with MAC fa:16:3e:3b:35:f2, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 20 14:39:51 compute-1 nova_compute[225855]: 2026-01-20 14:39:51.139 225859 INFO nova.virt.libvirt.driver [None req-bc9b2d72-ce4e-43cd-962e-9257da3c9038 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] [instance: d5c2df9d-748f-4df2-9392-b45741975f65] Using config drive
Jan 20 14:39:51 compute-1 nova_compute[225855]: 2026-01-20 14:39:51.172 225859 DEBUG nova.storage.rbd_utils [None req-bc9b2d72-ce4e-43cd-962e-9257da3c9038 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] rbd image d5c2df9d-748f-4df2-9392-b45741975f65_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:39:51 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/2535468181' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:39:51 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:39:51 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:39:51 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:39:51.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:39:52 compute-1 nova_compute[225855]: 2026-01-20 14:39:52.564 225859 INFO nova.virt.libvirt.driver [None req-bc9b2d72-ce4e-43cd-962e-9257da3c9038 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] [instance: d5c2df9d-748f-4df2-9392-b45741975f65] Creating config drive at /var/lib/nova/instances/d5c2df9d-748f-4df2-9392-b45741975f65/disk.config
Jan 20 14:39:52 compute-1 nova_compute[225855]: 2026-01-20 14:39:52.573 225859 DEBUG oslo_concurrency.processutils [None req-bc9b2d72-ce4e-43cd-962e-9257da3c9038 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d5c2df9d-748f-4df2-9392-b45741975f65/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpc7a84mwk execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:39:52 compute-1 nova_compute[225855]: 2026-01-20 14:39:52.712 225859 DEBUG oslo_concurrency.processutils [None req-bc9b2d72-ce4e-43cd-962e-9257da3c9038 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d5c2df9d-748f-4df2-9392-b45741975f65/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpc7a84mwk" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:39:52 compute-1 nova_compute[225855]: 2026-01-20 14:39:52.742 225859 DEBUG nova.storage.rbd_utils [None req-bc9b2d72-ce4e-43cd-962e-9257da3c9038 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] rbd image d5c2df9d-748f-4df2-9392-b45741975f65_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:39:52 compute-1 nova_compute[225855]: 2026-01-20 14:39:52.745 225859 DEBUG oslo_concurrency.processutils [None req-bc9b2d72-ce4e-43cd-962e-9257da3c9038 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d5c2df9d-748f-4df2-9392-b45741975f65/disk.config d5c2df9d-748f-4df2-9392-b45741975f65_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:39:52 compute-1 ceph-mon[81775]: pgmap v1545: 321 pgs: 321 active+clean; 165 MiB data, 634 MiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 3.9 MiB/s wr, 147 op/s
Jan 20 14:39:52 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:39:52 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:39:52 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:39:52.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:39:53 compute-1 nova_compute[225855]: 2026-01-20 14:39:53.103 225859 DEBUG oslo_concurrency.processutils [None req-bc9b2d72-ce4e-43cd-962e-9257da3c9038 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d5c2df9d-748f-4df2-9392-b45741975f65/disk.config d5c2df9d-748f-4df2-9392-b45741975f65_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.357s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:39:53 compute-1 nova_compute[225855]: 2026-01-20 14:39:53.104 225859 INFO nova.virt.libvirt.driver [None req-bc9b2d72-ce4e-43cd-962e-9257da3c9038 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] [instance: d5c2df9d-748f-4df2-9392-b45741975f65] Deleting local config drive /var/lib/nova/instances/d5c2df9d-748f-4df2-9392-b45741975f65/disk.config because it was imported into RBD.
Jan 20 14:39:53 compute-1 kernel: tapb48170b0-71: entered promiscuous mode
Jan 20 14:39:53 compute-1 NetworkManager[49104]: <info>  [1768919993.1734] manager: (tapb48170b0-71): new Tun device (/org/freedesktop/NetworkManager/Devices/87)
Jan 20 14:39:53 compute-1 ovn_controller[130490]: 2026-01-20T14:39:53Z|00193|binding|INFO|Claiming lport b48170b0-717d-48f0-8172-742a4a8596e9 for this chassis.
Jan 20 14:39:53 compute-1 ovn_controller[130490]: 2026-01-20T14:39:53Z|00194|binding|INFO|b48170b0-717d-48f0-8172-742a4a8596e9: Claiming fa:16:3e:3b:35:f2 10.100.0.13
Jan 20 14:39:53 compute-1 nova_compute[225855]: 2026-01-20 14:39:53.174 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:39:53 compute-1 nova_compute[225855]: 2026-01-20 14:39:53.179 225859 DEBUG nova.network.neutron [req-c581aa3d-73ba-44e7-aa5d-6b87cb19210d req-4b156c00-7c0c-43c2-8f0f-abb11063dbcf 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d5c2df9d-748f-4df2-9392-b45741975f65] Updated VIF entry in instance network info cache for port b48170b0-717d-48f0-8172-742a4a8596e9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 14:39:53 compute-1 nova_compute[225855]: 2026-01-20 14:39:53.180 225859 DEBUG nova.network.neutron [req-c581aa3d-73ba-44e7-aa5d-6b87cb19210d req-4b156c00-7c0c-43c2-8f0f-abb11063dbcf 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d5c2df9d-748f-4df2-9392-b45741975f65] Updating instance_info_cache with network_info: [{"id": "b48170b0-717d-48f0-8172-742a4a8596e9", "address": "fa:16:3e:3b:35:f2", "network": {"id": "fc21b99b-4e34-422c-be05-0a440009dac4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-808285772-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3f93fd4b2154dda9f38e62334904303", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb48170b0-71", "ovs_interfaceid": "b48170b0-717d-48f0-8172-742a4a8596e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:39:53 compute-1 nova_compute[225855]: 2026-01-20 14:39:53.183 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:39:53 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:39:53.202 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3b:35:f2 10.100.0.13'], port_security=['fa:16:3e:3b:35:f2 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'd5c2df9d-748f-4df2-9392-b45741975f65', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fc21b99b-4e34-422c-be05-0a440009dac4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e3f93fd4b2154dda9f38e62334904303', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b03ad0a9-4e4a-464d-b7d2-84d77d6554bc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7af6b6bc-3cbd-48be-9f10-23ec011e0426, chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], logical_port=b48170b0-717d-48f0-8172-742a4a8596e9) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:39:53 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:39:53.204 140354 INFO neutron.agent.ovn.metadata.agent [-] Port b48170b0-717d-48f0-8172-742a4a8596e9 in datapath fc21b99b-4e34-422c-be05-0a440009dac4 bound to our chassis
Jan 20 14:39:53 compute-1 systemd-udevd[252119]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 14:39:53 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:39:53.206 140354 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fc21b99b-4e34-422c-be05-0a440009dac4
Jan 20 14:39:53 compute-1 systemd-machined[194361]: New machine qemu-29-instance-00000042.
Jan 20 14:39:53 compute-1 nova_compute[225855]: 2026-01-20 14:39:53.214 225859 DEBUG oslo_concurrency.lockutils [req-c581aa3d-73ba-44e7-aa5d-6b87cb19210d req-4b156c00-7c0c-43c2-8f0f-abb11063dbcf 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-d5c2df9d-748f-4df2-9392-b45741975f65" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:39:53 compute-1 NetworkManager[49104]: <info>  [1768919993.2184] device (tapb48170b0-71): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 20 14:39:53 compute-1 NetworkManager[49104]: <info>  [1768919993.2196] device (tapb48170b0-71): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 20 14:39:53 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:39:53.219 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[8a120482-2f70-4ea2-980a-a00c853da07c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:39:53 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:39:53.220 140354 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapfc21b99b-41 in ovnmeta-fc21b99b-4e34-422c-be05-0a440009dac4 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 20 14:39:53 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:39:53.222 229707 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapfc21b99b-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 20 14:39:53 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:39:53.222 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[2d6795bf-5883-4c27-99fb-7e50539ffd47]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:39:53 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:39:53.223 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[6bd754d9-0603-4b68-876e-0db00d5a14b3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:39:53 compute-1 systemd[1]: Started Virtual Machine qemu-29-instance-00000042.
Jan 20 14:39:53 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:39:53.238 140466 DEBUG oslo.privsep.daemon [-] privsep: reply[5b050288-b431-454a-ac37-1514e860c620]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:39:53 compute-1 ovn_controller[130490]: 2026-01-20T14:39:53Z|00195|binding|INFO|Setting lport b48170b0-717d-48f0-8172-742a4a8596e9 ovn-installed in OVS
Jan 20 14:39:53 compute-1 ovn_controller[130490]: 2026-01-20T14:39:53Z|00196|binding|INFO|Setting lport b48170b0-717d-48f0-8172-742a4a8596e9 up in Southbound
Jan 20 14:39:53 compute-1 nova_compute[225855]: 2026-01-20 14:39:53.295 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:39:53 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:39:53.308 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[aed39129-127d-4ec6-9dd1-6b91268aa5d4]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:39:53 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:39:53.339 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[3756f0c5-21bb-4eec-84ad-63f0c401b381]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:39:53 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:39:53.345 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[1e17b826-19dd-4086-b1d1-6516b75ce132]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:39:53 compute-1 NetworkManager[49104]: <info>  [1768919993.3465] manager: (tapfc21b99b-40): new Veth device (/org/freedesktop/NetworkManager/Devices/88)
Jan 20 14:39:53 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:39:53.377 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[98fcb32b-3e1e-4f00-ad42-62f3aaa9f936]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:39:53 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:39:53.381 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[74a5e366-f194-400c-9eb8-aae384e5d29e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:39:53 compute-1 NetworkManager[49104]: <info>  [1768919993.4109] device (tapfc21b99b-40): carrier: link connected
Jan 20 14:39:53 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:39:53.412 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[78c36fc3-3ce7-4e21-8f8d-236d23e1ad08]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:39:53 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:39:53.428 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[ff0a808e-ef11-4160-85a2-8c09b99fec49]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfc21b99b-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b1:5b:d2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 55], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 503233, 'reachable_time': 15488, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 252153, 'error': None, 'target': 'ovnmeta-fc21b99b-4e34-422c-be05-0a440009dac4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:39:53 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:39:53.443 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[4e52b0b1-4ee3-4319-ad94-2d424ec06b59]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feb1:5bd2'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 503233, 'tstamp': 503233}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 252154, 'error': None, 'target': 'ovnmeta-fc21b99b-4e34-422c-be05-0a440009dac4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:39:53 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:39:53.462 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[a7dde585-17be-4b71-956d-16f05c9d896f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfc21b99b-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b1:5b:d2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 55], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 503233, 'reachable_time': 15488, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 252155, 'error': None, 'target': 'ovnmeta-fc21b99b-4e34-422c-be05-0a440009dac4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:39:53 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:39:53.494 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[5f5bbf51-4a3b-4bba-a65e-1df499d43df0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:39:53 compute-1 nova_compute[225855]: 2026-01-20 14:39:53.496 225859 DEBUG nova.compute.manager [req-20a05fac-2422-4b0a-a0af-0a7fbd614100 req-1250e884-0fb0-4ead-9c23-0ed937f394e3 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d5c2df9d-748f-4df2-9392-b45741975f65] Received event network-vif-plugged-b48170b0-717d-48f0-8172-742a4a8596e9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:39:53 compute-1 nova_compute[225855]: 2026-01-20 14:39:53.497 225859 DEBUG oslo_concurrency.lockutils [req-20a05fac-2422-4b0a-a0af-0a7fbd614100 req-1250e884-0fb0-4ead-9c23-0ed937f394e3 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "d5c2df9d-748f-4df2-9392-b45741975f65-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:39:53 compute-1 nova_compute[225855]: 2026-01-20 14:39:53.497 225859 DEBUG oslo_concurrency.lockutils [req-20a05fac-2422-4b0a-a0af-0a7fbd614100 req-1250e884-0fb0-4ead-9c23-0ed937f394e3 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "d5c2df9d-748f-4df2-9392-b45741975f65-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:39:53 compute-1 nova_compute[225855]: 2026-01-20 14:39:53.497 225859 DEBUG oslo_concurrency.lockutils [req-20a05fac-2422-4b0a-a0af-0a7fbd614100 req-1250e884-0fb0-4ead-9c23-0ed937f394e3 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "d5c2df9d-748f-4df2-9392-b45741975f65-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:39:53 compute-1 nova_compute[225855]: 2026-01-20 14:39:53.498 225859 DEBUG nova.compute.manager [req-20a05fac-2422-4b0a-a0af-0a7fbd614100 req-1250e884-0fb0-4ead-9c23-0ed937f394e3 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d5c2df9d-748f-4df2-9392-b45741975f65] Processing event network-vif-plugged-b48170b0-717d-48f0-8172-742a4a8596e9 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 20 14:39:53 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:39:53.567 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[083c7f52-1c8f-4779-a206-6a6220076016]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:39:53 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:39:53.568 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfc21b99b-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:39:53 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:39:53.568 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:39:53 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:39:53.569 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfc21b99b-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:39:53 compute-1 nova_compute[225855]: 2026-01-20 14:39:53.570 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:39:53 compute-1 NetworkManager[49104]: <info>  [1768919993.5711] manager: (tapfc21b99b-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/89)
Jan 20 14:39:53 compute-1 kernel: tapfc21b99b-40: entered promiscuous mode
Jan 20 14:39:53 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:39:53.574 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfc21b99b-40, col_values=(('external_ids', {'iface-id': '583df905-1d9f-49c1-b209-4b7fad1599f6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:39:53 compute-1 nova_compute[225855]: 2026-01-20 14:39:53.573 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:39:53 compute-1 ovn_controller[130490]: 2026-01-20T14:39:53Z|00197|binding|INFO|Releasing lport 583df905-1d9f-49c1-b209-4b7fad1599f6 from this chassis (sb_readonly=0)
Jan 20 14:39:53 compute-1 nova_compute[225855]: 2026-01-20 14:39:53.590 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:39:53 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:39:53.592 140354 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/fc21b99b-4e34-422c-be05-0a440009dac4.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/fc21b99b-4e34-422c-be05-0a440009dac4.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 20 14:39:53 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:39:53.592 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[cdf6dd0e-b96d-4336-ad73-8c8b5c627f7a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:39:53 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:39:53.593 140354 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 20 14:39:53 compute-1 ovn_metadata_agent[140349]: global
Jan 20 14:39:53 compute-1 ovn_metadata_agent[140349]:     log         /dev/log local0 debug
Jan 20 14:39:53 compute-1 ovn_metadata_agent[140349]:     log-tag     haproxy-metadata-proxy-fc21b99b-4e34-422c-be05-0a440009dac4
Jan 20 14:39:53 compute-1 ovn_metadata_agent[140349]:     user        root
Jan 20 14:39:53 compute-1 ovn_metadata_agent[140349]:     group       root
Jan 20 14:39:53 compute-1 ovn_metadata_agent[140349]:     maxconn     1024
Jan 20 14:39:53 compute-1 ovn_metadata_agent[140349]:     pidfile     /var/lib/neutron/external/pids/fc21b99b-4e34-422c-be05-0a440009dac4.pid.haproxy
Jan 20 14:39:53 compute-1 ovn_metadata_agent[140349]:     daemon
Jan 20 14:39:53 compute-1 ovn_metadata_agent[140349]: 
Jan 20 14:39:53 compute-1 ovn_metadata_agent[140349]: defaults
Jan 20 14:39:53 compute-1 ovn_metadata_agent[140349]:     log global
Jan 20 14:39:53 compute-1 ovn_metadata_agent[140349]:     mode http
Jan 20 14:39:53 compute-1 ovn_metadata_agent[140349]:     option httplog
Jan 20 14:39:53 compute-1 ovn_metadata_agent[140349]:     option dontlognull
Jan 20 14:39:53 compute-1 ovn_metadata_agent[140349]:     option http-server-close
Jan 20 14:39:53 compute-1 ovn_metadata_agent[140349]:     option forwardfor
Jan 20 14:39:53 compute-1 ovn_metadata_agent[140349]:     retries                 3
Jan 20 14:39:53 compute-1 ovn_metadata_agent[140349]:     timeout http-request    30s
Jan 20 14:39:53 compute-1 ovn_metadata_agent[140349]:     timeout connect         30s
Jan 20 14:39:53 compute-1 ovn_metadata_agent[140349]:     timeout client          32s
Jan 20 14:39:53 compute-1 ovn_metadata_agent[140349]:     timeout server          32s
Jan 20 14:39:53 compute-1 ovn_metadata_agent[140349]:     timeout http-keep-alive 30s
Jan 20 14:39:53 compute-1 ovn_metadata_agent[140349]: 
Jan 20 14:39:53 compute-1 ovn_metadata_agent[140349]: 
Jan 20 14:39:53 compute-1 ovn_metadata_agent[140349]: listen listener
Jan 20 14:39:53 compute-1 ovn_metadata_agent[140349]:     bind 169.254.169.254:80
Jan 20 14:39:53 compute-1 ovn_metadata_agent[140349]:     server metadata /var/lib/neutron/metadata_proxy
Jan 20 14:39:53 compute-1 ovn_metadata_agent[140349]:     http-request add-header X-OVN-Network-ID fc21b99b-4e34-422c-be05-0a440009dac4
Jan 20 14:39:53 compute-1 ovn_metadata_agent[140349]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 20 14:39:53 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:39:53.594 140354 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-fc21b99b-4e34-422c-be05-0a440009dac4', 'env', 'PROCESS_TAG=haproxy-fc21b99b-4e34-422c-be05-0a440009dac4', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/fc21b99b-4e34-422c-be05-0a440009dac4.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 20 14:39:53 compute-1 nova_compute[225855]: 2026-01-20 14:39:53.694 225859 DEBUG nova.compute.manager [None req-bc9b2d72-ce4e-43cd-962e-9257da3c9038 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] [instance: d5c2df9d-748f-4df2-9392-b45741975f65] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 20 14:39:53 compute-1 nova_compute[225855]: 2026-01-20 14:39:53.695 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768919993.6939595, d5c2df9d-748f-4df2-9392-b45741975f65 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:39:53 compute-1 nova_compute[225855]: 2026-01-20 14:39:53.695 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: d5c2df9d-748f-4df2-9392-b45741975f65] VM Started (Lifecycle Event)
Jan 20 14:39:53 compute-1 nova_compute[225855]: 2026-01-20 14:39:53.701 225859 DEBUG nova.virt.libvirt.driver [None req-bc9b2d72-ce4e-43cd-962e-9257da3c9038 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] [instance: d5c2df9d-748f-4df2-9392-b45741975f65] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 20 14:39:53 compute-1 nova_compute[225855]: 2026-01-20 14:39:53.704 225859 INFO nova.virt.libvirt.driver [-] [instance: d5c2df9d-748f-4df2-9392-b45741975f65] Instance spawned successfully.
Jan 20 14:39:53 compute-1 nova_compute[225855]: 2026-01-20 14:39:53.705 225859 DEBUG nova.virt.libvirt.driver [None req-bc9b2d72-ce4e-43cd-962e-9257da3c9038 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] [instance: d5c2df9d-748f-4df2-9392-b45741975f65] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 20 14:39:53 compute-1 nova_compute[225855]: 2026-01-20 14:39:53.713 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: d5c2df9d-748f-4df2-9392-b45741975f65] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:39:53 compute-1 nova_compute[225855]: 2026-01-20 14:39:53.716 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: d5c2df9d-748f-4df2-9392-b45741975f65] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:39:53 compute-1 nova_compute[225855]: 2026-01-20 14:39:53.723 225859 DEBUG nova.virt.libvirt.driver [None req-bc9b2d72-ce4e-43cd-962e-9257da3c9038 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] [instance: d5c2df9d-748f-4df2-9392-b45741975f65] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:39:53 compute-1 nova_compute[225855]: 2026-01-20 14:39:53.724 225859 DEBUG nova.virt.libvirt.driver [None req-bc9b2d72-ce4e-43cd-962e-9257da3c9038 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] [instance: d5c2df9d-748f-4df2-9392-b45741975f65] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:39:53 compute-1 nova_compute[225855]: 2026-01-20 14:39:53.724 225859 DEBUG nova.virt.libvirt.driver [None req-bc9b2d72-ce4e-43cd-962e-9257da3c9038 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] [instance: d5c2df9d-748f-4df2-9392-b45741975f65] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:39:53 compute-1 nova_compute[225855]: 2026-01-20 14:39:53.725 225859 DEBUG nova.virt.libvirt.driver [None req-bc9b2d72-ce4e-43cd-962e-9257da3c9038 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] [instance: d5c2df9d-748f-4df2-9392-b45741975f65] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:39:53 compute-1 nova_compute[225855]: 2026-01-20 14:39:53.725 225859 DEBUG nova.virt.libvirt.driver [None req-bc9b2d72-ce4e-43cd-962e-9257da3c9038 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] [instance: d5c2df9d-748f-4df2-9392-b45741975f65] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:39:53 compute-1 nova_compute[225855]: 2026-01-20 14:39:53.726 225859 DEBUG nova.virt.libvirt.driver [None req-bc9b2d72-ce4e-43cd-962e-9257da3c9038 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] [instance: d5c2df9d-748f-4df2-9392-b45741975f65] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:39:53 compute-1 nova_compute[225855]: 2026-01-20 14:39:53.758 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: d5c2df9d-748f-4df2-9392-b45741975f65] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 14:39:53 compute-1 nova_compute[225855]: 2026-01-20 14:39:53.759 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768919993.6946414, d5c2df9d-748f-4df2-9392-b45741975f65 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:39:53 compute-1 nova_compute[225855]: 2026-01-20 14:39:53.759 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: d5c2df9d-748f-4df2-9392-b45741975f65] VM Paused (Lifecycle Event)
Jan 20 14:39:53 compute-1 nova_compute[225855]: 2026-01-20 14:39:53.785 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: d5c2df9d-748f-4df2-9392-b45741975f65] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:39:53 compute-1 nova_compute[225855]: 2026-01-20 14:39:53.789 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768919993.700576, d5c2df9d-748f-4df2-9392-b45741975f65 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:39:53 compute-1 nova_compute[225855]: 2026-01-20 14:39:53.789 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: d5c2df9d-748f-4df2-9392-b45741975f65] VM Resumed (Lifecycle Event)
Jan 20 14:39:53 compute-1 nova_compute[225855]: 2026-01-20 14:39:53.798 225859 INFO nova.compute.manager [None req-bc9b2d72-ce4e-43cd-962e-9257da3c9038 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] [instance: d5c2df9d-748f-4df2-9392-b45741975f65] Took 8.51 seconds to spawn the instance on the hypervisor.
Jan 20 14:39:53 compute-1 nova_compute[225855]: 2026-01-20 14:39:53.798 225859 DEBUG nova.compute.manager [None req-bc9b2d72-ce4e-43cd-962e-9257da3c9038 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] [instance: d5c2df9d-748f-4df2-9392-b45741975f65] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:39:53 compute-1 nova_compute[225855]: 2026-01-20 14:39:53.807 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: d5c2df9d-748f-4df2-9392-b45741975f65] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:39:53 compute-1 nova_compute[225855]: 2026-01-20 14:39:53.809 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: d5c2df9d-748f-4df2-9392-b45741975f65] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:39:53 compute-1 ceph-mon[81775]: pgmap v1546: 321 pgs: 321 active+clean; 167 MiB data, 635 MiB used, 20 GiB / 21 GiB avail; 466 KiB/s rd, 3.9 MiB/s wr, 123 op/s
Jan 20 14:39:53 compute-1 podman[252230]: 2026-01-20 14:39:53.942346769 +0000 UTC m=+0.053245419 container create ad0796275d2b7bc69c4b451ca13c4323d2608039a8c7168dc31ea54a35ab1b48 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fc21b99b-4e34-422c-be05-0a440009dac4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202)
Jan 20 14:39:53 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:39:53 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:39:53 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:39:53.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:39:53 compute-1 systemd[1]: Started libpod-conmon-ad0796275d2b7bc69c4b451ca13c4323d2608039a8c7168dc31ea54a35ab1b48.scope.
Jan 20 14:39:54 compute-1 podman[252230]: 2026-01-20 14:39:53.914655585 +0000 UTC m=+0.025554245 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 20 14:39:54 compute-1 systemd[1]: Started libcrun container.
Jan 20 14:39:54 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1daf9cb422d92ee82699c7df2fb191ba24031863f4d9fd4cf2bcb41f474f37a6/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 20 14:39:54 compute-1 podman[252230]: 2026-01-20 14:39:54.082510419 +0000 UTC m=+0.193409099 container init ad0796275d2b7bc69c4b451ca13c4323d2608039a8c7168dc31ea54a35ab1b48 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fc21b99b-4e34-422c-be05-0a440009dac4, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 20 14:39:54 compute-1 podman[252230]: 2026-01-20 14:39:54.091490043 +0000 UTC m=+0.202388683 container start ad0796275d2b7bc69c4b451ca13c4323d2608039a8c7168dc31ea54a35ab1b48 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fc21b99b-4e34-422c-be05-0a440009dac4, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 20 14:39:54 compute-1 neutron-haproxy-ovnmeta-fc21b99b-4e34-422c-be05-0a440009dac4[252245]: [NOTICE]   (252249) : New worker (252251) forked
Jan 20 14:39:54 compute-1 neutron-haproxy-ovnmeta-fc21b99b-4e34-422c-be05-0a440009dac4[252245]: [NOTICE]   (252249) : Loading success.
Jan 20 14:39:54 compute-1 nova_compute[225855]: 2026-01-20 14:39:54.170 225859 INFO nova.compute.manager [None req-bc9b2d72-ce4e-43cd-962e-9257da3c9038 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] [instance: d5c2df9d-748f-4df2-9392-b45741975f65] Took 10.59 seconds to build instance.
Jan 20 14:39:54 compute-1 nova_compute[225855]: 2026-01-20 14:39:54.203 225859 DEBUG oslo_concurrency.lockutils [None req-bc9b2d72-ce4e-43cd-962e-9257da3c9038 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Lock "d5c2df9d-748f-4df2-9392-b45741975f65" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.707s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:39:54 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:39:54 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:39:54 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:39:54.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:39:55 compute-1 nova_compute[225855]: 2026-01-20 14:39:55.067 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:39:55 compute-1 nova_compute[225855]: 2026-01-20 14:39:55.621 225859 DEBUG nova.compute.manager [req-815f991c-db2b-4a4d-b4c2-9b0bfeb10db6 req-2708a757-a18b-4e0d-945f-e9c7f4ee9504 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d5c2df9d-748f-4df2-9392-b45741975f65] Received event network-vif-plugged-b48170b0-717d-48f0-8172-742a4a8596e9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:39:55 compute-1 nova_compute[225855]: 2026-01-20 14:39:55.622 225859 DEBUG oslo_concurrency.lockutils [req-815f991c-db2b-4a4d-b4c2-9b0bfeb10db6 req-2708a757-a18b-4e0d-945f-e9c7f4ee9504 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "d5c2df9d-748f-4df2-9392-b45741975f65-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:39:55 compute-1 nova_compute[225855]: 2026-01-20 14:39:55.623 225859 DEBUG oslo_concurrency.lockutils [req-815f991c-db2b-4a4d-b4c2-9b0bfeb10db6 req-2708a757-a18b-4e0d-945f-e9c7f4ee9504 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "d5c2df9d-748f-4df2-9392-b45741975f65-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:39:55 compute-1 nova_compute[225855]: 2026-01-20 14:39:55.623 225859 DEBUG oslo_concurrency.lockutils [req-815f991c-db2b-4a4d-b4c2-9b0bfeb10db6 req-2708a757-a18b-4e0d-945f-e9c7f4ee9504 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "d5c2df9d-748f-4df2-9392-b45741975f65-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:39:55 compute-1 nova_compute[225855]: 2026-01-20 14:39:55.623 225859 DEBUG nova.compute.manager [req-815f991c-db2b-4a4d-b4c2-9b0bfeb10db6 req-2708a757-a18b-4e0d-945f-e9c7f4ee9504 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d5c2df9d-748f-4df2-9392-b45741975f65] No waiting events found dispatching network-vif-plugged-b48170b0-717d-48f0-8172-742a4a8596e9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:39:55 compute-1 nova_compute[225855]: 2026-01-20 14:39:55.624 225859 WARNING nova.compute.manager [req-815f991c-db2b-4a4d-b4c2-9b0bfeb10db6 req-2708a757-a18b-4e0d-945f-e9c7f4ee9504 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d5c2df9d-748f-4df2-9392-b45741975f65] Received unexpected event network-vif-plugged-b48170b0-717d-48f0-8172-742a4a8596e9 for instance with vm_state active and task_state None.
Jan 20 14:39:55 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:39:55 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:39:55 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:39:55 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:39:55.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:39:56 compute-1 nova_compute[225855]: 2026-01-20 14:39:56.054 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:39:56 compute-1 ceph-mon[81775]: pgmap v1547: 321 pgs: 321 active+clean; 167 MiB data, 636 MiB used, 20 GiB / 21 GiB avail; 1.0 MiB/s rd, 3.9 MiB/s wr, 150 op/s
Jan 20 14:39:56 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:39:56 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:39:56 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:39:56.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:39:57 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:39:57 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:39:57 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:39:57.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:39:58 compute-1 nova_compute[225855]: 2026-01-20 14:39:58.541 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:39:58 compute-1 NetworkManager[49104]: <info>  [1768919998.5473] manager: (patch-provnet-b62c391b-f7a3-4a38-a0df-72ac0383ca74-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/90)
Jan 20 14:39:58 compute-1 NetworkManager[49104]: <info>  [1768919998.5504] manager: (patch-br-int-to-provnet-b62c391b-f7a3-4a38-a0df-72ac0383ca74): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/91)
Jan 20 14:39:58 compute-1 nova_compute[225855]: 2026-01-20 14:39:58.700 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:39:58 compute-1 ovn_controller[130490]: 2026-01-20T14:39:58Z|00198|binding|INFO|Releasing lport 583df905-1d9f-49c1-b209-4b7fad1599f6 from this chassis (sb_readonly=0)
Jan 20 14:39:58 compute-1 nova_compute[225855]: 2026-01-20 14:39:58.716 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:39:58 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:39:58 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:39:58 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:39:58.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:39:59 compute-1 ceph-mon[81775]: pgmap v1548: 321 pgs: 321 active+clean; 167 MiB data, 636 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 3.9 MiB/s wr, 177 op/s
Jan 20 14:39:59 compute-1 nova_compute[225855]: 2026-01-20 14:39:59.151 225859 DEBUG nova.compute.manager [req-e6f14b1c-a0be-43e3-9ece-38c4b95ee32b req-6dd3ee31-83b2-401e-9264-719bc6bd8306 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d5c2df9d-748f-4df2-9392-b45741975f65] Received event network-changed-b48170b0-717d-48f0-8172-742a4a8596e9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:39:59 compute-1 nova_compute[225855]: 2026-01-20 14:39:59.152 225859 DEBUG nova.compute.manager [req-e6f14b1c-a0be-43e3-9ece-38c4b95ee32b req-6dd3ee31-83b2-401e-9264-719bc6bd8306 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d5c2df9d-748f-4df2-9392-b45741975f65] Refreshing instance network info cache due to event network-changed-b48170b0-717d-48f0-8172-742a4a8596e9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 14:39:59 compute-1 nova_compute[225855]: 2026-01-20 14:39:59.152 225859 DEBUG oslo_concurrency.lockutils [req-e6f14b1c-a0be-43e3-9ece-38c4b95ee32b req-6dd3ee31-83b2-401e-9264-719bc6bd8306 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-d5c2df9d-748f-4df2-9392-b45741975f65" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:39:59 compute-1 nova_compute[225855]: 2026-01-20 14:39:59.152 225859 DEBUG oslo_concurrency.lockutils [req-e6f14b1c-a0be-43e3-9ece-38c4b95ee32b req-6dd3ee31-83b2-401e-9264-719bc6bd8306 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-d5c2df9d-748f-4df2-9392-b45741975f65" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:39:59 compute-1 nova_compute[225855]: 2026-01-20 14:39:59.152 225859 DEBUG nova.network.neutron [req-e6f14b1c-a0be-43e3-9ece-38c4b95ee32b req-6dd3ee31-83b2-401e-9264-719bc6bd8306 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d5c2df9d-748f-4df2-9392-b45741975f65] Refreshing network info cache for port b48170b0-717d-48f0-8172-742a4a8596e9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 14:39:59 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:39:59 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:39:59 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:39:59.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:40:00 compute-1 ceph-mon[81775]: pgmap v1549: 321 pgs: 321 active+clean; 167 MiB data, 636 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 3.0 MiB/s wr, 180 op/s
Jan 20 14:40:00 compute-1 podman[252264]: 2026-01-20 14:40:00.008101832 +0000 UTC m=+0.056825251 container health_status 533cd8e0c3b8b7f910f15cef49e7f879ab46c12ce646a35628c15f13725eb500 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS)
Jan 20 14:40:00 compute-1 nova_compute[225855]: 2026-01-20 14:40:00.069 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:40:00 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:40:00 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:40:00 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:40:00 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:40:00.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:40:01 compute-1 ceph-mon[81775]: overall HEALTH_OK
Jan 20 14:40:01 compute-1 nova_compute[225855]: 2026-01-20 14:40:01.057 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:40:01 compute-1 nova_compute[225855]: 2026-01-20 14:40:01.264 225859 DEBUG nova.network.neutron [req-e6f14b1c-a0be-43e3-9ece-38c4b95ee32b req-6dd3ee31-83b2-401e-9264-719bc6bd8306 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d5c2df9d-748f-4df2-9392-b45741975f65] Updated VIF entry in instance network info cache for port b48170b0-717d-48f0-8172-742a4a8596e9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 14:40:01 compute-1 nova_compute[225855]: 2026-01-20 14:40:01.265 225859 DEBUG nova.network.neutron [req-e6f14b1c-a0be-43e3-9ece-38c4b95ee32b req-6dd3ee31-83b2-401e-9264-719bc6bd8306 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d5c2df9d-748f-4df2-9392-b45741975f65] Updating instance_info_cache with network_info: [{"id": "b48170b0-717d-48f0-8172-742a4a8596e9", "address": "fa:16:3e:3b:35:f2", "network": {"id": "fc21b99b-4e34-422c-be05-0a440009dac4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-808285772-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.174", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3f93fd4b2154dda9f38e62334904303", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb48170b0-71", "ovs_interfaceid": "b48170b0-717d-48f0-8172-742a4a8596e9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:40:01 compute-1 nova_compute[225855]: 2026-01-20 14:40:01.294 225859 DEBUG oslo_concurrency.lockutils [req-e6f14b1c-a0be-43e3-9ece-38c4b95ee32b req-6dd3ee31-83b2-401e-9264-719bc6bd8306 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-d5c2df9d-748f-4df2-9392-b45741975f65" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:40:01 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:40:01 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:40:01 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:40:01.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:40:02 compute-1 ceph-mon[81775]: pgmap v1550: 321 pgs: 321 active+clean; 167 MiB data, 636 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 1.5 MiB/s wr, 125 op/s
Jan 20 14:40:02 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:40:02 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:40:02 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:40:02.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:40:03 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:40:03 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:40:03 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:40:03.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:40:04 compute-1 ceph-mon[81775]: pgmap v1551: 321 pgs: 321 active+clean; 167 MiB data, 636 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 35 KiB/s wr, 75 op/s
Jan 20 14:40:04 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:40:04 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:40:04 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:40:04.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:40:05 compute-1 sudo[252285]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:40:05 compute-1 sudo[252285]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:40:05 compute-1 sudo[252285]: pam_unix(sudo:session): session closed for user root
Jan 20 14:40:05 compute-1 nova_compute[225855]: 2026-01-20 14:40:05.070 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:40:05 compute-1 sudo[252310]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:40:05 compute-1 sudo[252310]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:40:05 compute-1 sudo[252310]: pam_unix(sudo:session): session closed for user root
Jan 20 14:40:05 compute-1 nova_compute[225855]: 2026-01-20 14:40:05.600 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:40:05 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/3907619847' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:40:05 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:40:05 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:40:05 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:40:05 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:40:05.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:40:06 compute-1 nova_compute[225855]: 2026-01-20 14:40:06.118 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:40:06 compute-1 ceph-mon[81775]: pgmap v1552: 321 pgs: 321 active+clean; 167 MiB data, 636 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 32 KiB/s wr, 81 op/s
Jan 20 14:40:06 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:40:06 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:40:06 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:40:06.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:40:07 compute-1 ovn_controller[130490]: 2026-01-20T14:40:07Z|00026|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:3b:35:f2 10.100.0.13
Jan 20 14:40:07 compute-1 ovn_controller[130490]: 2026-01-20T14:40:07Z|00027|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:3b:35:f2 10.100.0.13
Jan 20 14:40:07 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:40:07 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:40:07 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:40:07.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:40:08 compute-1 ceph-mon[81775]: pgmap v1553: 321 pgs: 321 active+clean; 197 MiB data, 654 MiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 1.6 MiB/s wr, 60 op/s
Jan 20 14:40:08 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:40:08 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:40:08 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:40:08.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:40:09 compute-1 nova_compute[225855]: 2026-01-20 14:40:09.659 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:40:09 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/3459109517' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:40:09 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:40:09 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:40:09 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:40:09.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:40:10 compute-1 nova_compute[225855]: 2026-01-20 14:40:10.073 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:40:10 compute-1 ceph-mon[81775]: pgmap v1554: 321 pgs: 321 active+clean; 207 MiB data, 677 MiB used, 20 GiB / 21 GiB avail; 658 KiB/s rd, 2.0 MiB/s wr, 44 op/s
Jan 20 14:40:10 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/1895402904' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:40:10 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:40:10 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:40:10 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:40:10 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:40:10.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:40:11 compute-1 nova_compute[225855]: 2026-01-20 14:40:11.122 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:40:11 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:40:11 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:40:11 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:40:11.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:40:12 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:40:12 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:40:12 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:40:12.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:40:13 compute-1 ceph-mon[81775]: pgmap v1555: 321 pgs: 321 active+clean; 247 MiB data, 700 MiB used, 20 GiB / 21 GiB avail; 346 KiB/s rd, 3.9 MiB/s wr, 92 op/s
Jan 20 14:40:13 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 20 14:40:13 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1177646554' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 14:40:13 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 20 14:40:13 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1177646554' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 14:40:13 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:40:13 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:40:13 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:40:13.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:40:14 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/1177646554' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 14:40:14 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/1177646554' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 14:40:14 compute-1 ceph-mon[81775]: pgmap v1556: 321 pgs: 321 active+clean; 247 MiB data, 700 MiB used, 20 GiB / 21 GiB avail; 346 KiB/s rd, 3.9 MiB/s wr, 93 op/s
Jan 20 14:40:14 compute-1 nova_compute[225855]: 2026-01-20 14:40:14.719 225859 DEBUG oslo_concurrency.lockutils [None req-30feda58-4032-4227-b08f-128214a95310 16c05e1ac16f428bab6b36346856235e b50ce2f25e8943e28ddf8bf69c721e75 - - default default] Acquiring lock "a96ccadd-ac1d-4040-8bcc-bebb460ee233" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:40:14 compute-1 nova_compute[225855]: 2026-01-20 14:40:14.719 225859 DEBUG oslo_concurrency.lockutils [None req-30feda58-4032-4227-b08f-128214a95310 16c05e1ac16f428bab6b36346856235e b50ce2f25e8943e28ddf8bf69c721e75 - - default default] Lock "a96ccadd-ac1d-4040-8bcc-bebb460ee233" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:40:14 compute-1 nova_compute[225855]: 2026-01-20 14:40:14.750 225859 DEBUG nova.compute.manager [None req-30feda58-4032-4227-b08f-128214a95310 16c05e1ac16f428bab6b36346856235e b50ce2f25e8943e28ddf8bf69c721e75 - - default default] [instance: a96ccadd-ac1d-4040-8bcc-bebb460ee233] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 20 14:40:14 compute-1 nova_compute[225855]: 2026-01-20 14:40:14.840 225859 DEBUG oslo_concurrency.lockutils [None req-30feda58-4032-4227-b08f-128214a95310 16c05e1ac16f428bab6b36346856235e b50ce2f25e8943e28ddf8bf69c721e75 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:40:14 compute-1 nova_compute[225855]: 2026-01-20 14:40:14.841 225859 DEBUG oslo_concurrency.lockutils [None req-30feda58-4032-4227-b08f-128214a95310 16c05e1ac16f428bab6b36346856235e b50ce2f25e8943e28ddf8bf69c721e75 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:40:14 compute-1 nova_compute[225855]: 2026-01-20 14:40:14.849 225859 DEBUG nova.virt.hardware [None req-30feda58-4032-4227-b08f-128214a95310 16c05e1ac16f428bab6b36346856235e b50ce2f25e8943e28ddf8bf69c721e75 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 20 14:40:14 compute-1 nova_compute[225855]: 2026-01-20 14:40:14.849 225859 INFO nova.compute.claims [None req-30feda58-4032-4227-b08f-128214a95310 16c05e1ac16f428bab6b36346856235e b50ce2f25e8943e28ddf8bf69c721e75 - - default default] [instance: a96ccadd-ac1d-4040-8bcc-bebb460ee233] Claim successful on node compute-1.ctlplane.example.com
Jan 20 14:40:14 compute-1 nova_compute[225855]: 2026-01-20 14:40:14.983 225859 DEBUG oslo_concurrency.processutils [None req-30feda58-4032-4227-b08f-128214a95310 16c05e1ac16f428bab6b36346856235e b50ce2f25e8943e28ddf8bf69c721e75 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:40:14 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:40:14 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:40:14 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:40:14.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:40:15 compute-1 nova_compute[225855]: 2026-01-20 14:40:15.076 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:40:15 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:40:15 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2441771306' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:40:15 compute-1 nova_compute[225855]: 2026-01-20 14:40:15.421 225859 DEBUG oslo_concurrency.processutils [None req-30feda58-4032-4227-b08f-128214a95310 16c05e1ac16f428bab6b36346856235e b50ce2f25e8943e28ddf8bf69c721e75 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:40:15 compute-1 nova_compute[225855]: 2026-01-20 14:40:15.427 225859 DEBUG nova.compute.provider_tree [None req-30feda58-4032-4227-b08f-128214a95310 16c05e1ac16f428bab6b36346856235e b50ce2f25e8943e28ddf8bf69c721e75 - - default default] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:40:15 compute-1 nova_compute[225855]: 2026-01-20 14:40:15.442 225859 DEBUG nova.scheduler.client.report [None req-30feda58-4032-4227-b08f-128214a95310 16c05e1ac16f428bab6b36346856235e b50ce2f25e8943e28ddf8bf69c721e75 - - default default] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:40:15 compute-1 nova_compute[225855]: 2026-01-20 14:40:15.461 225859 DEBUG oslo_concurrency.lockutils [None req-30feda58-4032-4227-b08f-128214a95310 16c05e1ac16f428bab6b36346856235e b50ce2f25e8943e28ddf8bf69c721e75 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.620s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:40:15 compute-1 nova_compute[225855]: 2026-01-20 14:40:15.462 225859 DEBUG nova.compute.manager [None req-30feda58-4032-4227-b08f-128214a95310 16c05e1ac16f428bab6b36346856235e b50ce2f25e8943e28ddf8bf69c721e75 - - default default] [instance: a96ccadd-ac1d-4040-8bcc-bebb460ee233] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 20 14:40:15 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/2441771306' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:40:15 compute-1 nova_compute[225855]: 2026-01-20 14:40:15.509 225859 DEBUG nova.compute.manager [None req-30feda58-4032-4227-b08f-128214a95310 16c05e1ac16f428bab6b36346856235e b50ce2f25e8943e28ddf8bf69c721e75 - - default default] [instance: a96ccadd-ac1d-4040-8bcc-bebb460ee233] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 20 14:40:15 compute-1 nova_compute[225855]: 2026-01-20 14:40:15.510 225859 DEBUG nova.network.neutron [None req-30feda58-4032-4227-b08f-128214a95310 16c05e1ac16f428bab6b36346856235e b50ce2f25e8943e28ddf8bf69c721e75 - - default default] [instance: a96ccadd-ac1d-4040-8bcc-bebb460ee233] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 20 14:40:15 compute-1 nova_compute[225855]: 2026-01-20 14:40:15.526 225859 INFO nova.virt.libvirt.driver [None req-30feda58-4032-4227-b08f-128214a95310 16c05e1ac16f428bab6b36346856235e b50ce2f25e8943e28ddf8bf69c721e75 - - default default] [instance: a96ccadd-ac1d-4040-8bcc-bebb460ee233] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 20 14:40:15 compute-1 nova_compute[225855]: 2026-01-20 14:40:15.542 225859 DEBUG nova.compute.manager [None req-30feda58-4032-4227-b08f-128214a95310 16c05e1ac16f428bab6b36346856235e b50ce2f25e8943e28ddf8bf69c721e75 - - default default] [instance: a96ccadd-ac1d-4040-8bcc-bebb460ee233] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 20 14:40:15 compute-1 nova_compute[225855]: 2026-01-20 14:40:15.621 225859 DEBUG nova.compute.manager [None req-30feda58-4032-4227-b08f-128214a95310 16c05e1ac16f428bab6b36346856235e b50ce2f25e8943e28ddf8bf69c721e75 - - default default] [instance: a96ccadd-ac1d-4040-8bcc-bebb460ee233] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 20 14:40:15 compute-1 nova_compute[225855]: 2026-01-20 14:40:15.622 225859 DEBUG nova.virt.libvirt.driver [None req-30feda58-4032-4227-b08f-128214a95310 16c05e1ac16f428bab6b36346856235e b50ce2f25e8943e28ddf8bf69c721e75 - - default default] [instance: a96ccadd-ac1d-4040-8bcc-bebb460ee233] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 20 14:40:15 compute-1 nova_compute[225855]: 2026-01-20 14:40:15.623 225859 INFO nova.virt.libvirt.driver [None req-30feda58-4032-4227-b08f-128214a95310 16c05e1ac16f428bab6b36346856235e b50ce2f25e8943e28ddf8bf69c721e75 - - default default] [instance: a96ccadd-ac1d-4040-8bcc-bebb460ee233] Creating image(s)
Jan 20 14:40:15 compute-1 nova_compute[225855]: 2026-01-20 14:40:15.654 225859 DEBUG nova.storage.rbd_utils [None req-30feda58-4032-4227-b08f-128214a95310 16c05e1ac16f428bab6b36346856235e b50ce2f25e8943e28ddf8bf69c721e75 - - default default] rbd image a96ccadd-ac1d-4040-8bcc-bebb460ee233_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:40:15 compute-1 nova_compute[225855]: 2026-01-20 14:40:15.684 225859 DEBUG nova.storage.rbd_utils [None req-30feda58-4032-4227-b08f-128214a95310 16c05e1ac16f428bab6b36346856235e b50ce2f25e8943e28ddf8bf69c721e75 - - default default] rbd image a96ccadd-ac1d-4040-8bcc-bebb460ee233_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:40:15 compute-1 nova_compute[225855]: 2026-01-20 14:40:15.713 225859 DEBUG nova.storage.rbd_utils [None req-30feda58-4032-4227-b08f-128214a95310 16c05e1ac16f428bab6b36346856235e b50ce2f25e8943e28ddf8bf69c721e75 - - default default] rbd image a96ccadd-ac1d-4040-8bcc-bebb460ee233_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:40:15 compute-1 nova_compute[225855]: 2026-01-20 14:40:15.717 225859 DEBUG oslo_concurrency.processutils [None req-30feda58-4032-4227-b08f-128214a95310 16c05e1ac16f428bab6b36346856235e b50ce2f25e8943e28ddf8bf69c721e75 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:40:15 compute-1 nova_compute[225855]: 2026-01-20 14:40:15.757 225859 DEBUG nova.policy [None req-30feda58-4032-4227-b08f-128214a95310 16c05e1ac16f428bab6b36346856235e b50ce2f25e8943e28ddf8bf69c721e75 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '16c05e1ac16f428bab6b36346856235e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b50ce2f25e8943e28ddf8bf69c721e75', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 20 14:40:15 compute-1 nova_compute[225855]: 2026-01-20 14:40:15.784 225859 DEBUG oslo_concurrency.processutils [None req-30feda58-4032-4227-b08f-128214a95310 16c05e1ac16f428bab6b36346856235e b50ce2f25e8943e28ddf8bf69c721e75 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:40:15 compute-1 nova_compute[225855]: 2026-01-20 14:40:15.785 225859 DEBUG oslo_concurrency.lockutils [None req-30feda58-4032-4227-b08f-128214a95310 16c05e1ac16f428bab6b36346856235e b50ce2f25e8943e28ddf8bf69c721e75 - - default default] Acquiring lock "82d5c1918fd7c974214c7a48c1793a7a82560462" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:40:15 compute-1 nova_compute[225855]: 2026-01-20 14:40:15.786 225859 DEBUG oslo_concurrency.lockutils [None req-30feda58-4032-4227-b08f-128214a95310 16c05e1ac16f428bab6b36346856235e b50ce2f25e8943e28ddf8bf69c721e75 - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:40:15 compute-1 nova_compute[225855]: 2026-01-20 14:40:15.786 225859 DEBUG oslo_concurrency.lockutils [None req-30feda58-4032-4227-b08f-128214a95310 16c05e1ac16f428bab6b36346856235e b50ce2f25e8943e28ddf8bf69c721e75 - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:40:15 compute-1 nova_compute[225855]: 2026-01-20 14:40:15.811 225859 DEBUG nova.storage.rbd_utils [None req-30feda58-4032-4227-b08f-128214a95310 16c05e1ac16f428bab6b36346856235e b50ce2f25e8943e28ddf8bf69c721e75 - - default default] rbd image a96ccadd-ac1d-4040-8bcc-bebb460ee233_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:40:15 compute-1 nova_compute[225855]: 2026-01-20 14:40:15.815 225859 DEBUG oslo_concurrency.processutils [None req-30feda58-4032-4227-b08f-128214a95310 16c05e1ac16f428bab6b36346856235e b50ce2f25e8943e28ddf8bf69c721e75 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 a96ccadd-ac1d-4040-8bcc-bebb460ee233_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:40:15 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:40:15 compute-1 nova_compute[225855]: 2026-01-20 14:40:15.963 225859 DEBUG oslo_concurrency.lockutils [None req-bc0698b6-4f44-4581-89b2-1332a8a98461 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Acquiring lock "interface-d5c2df9d-748f-4df2-9392-b45741975f65-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:40:15 compute-1 nova_compute[225855]: 2026-01-20 14:40:15.964 225859 DEBUG oslo_concurrency.lockutils [None req-bc0698b6-4f44-4581-89b2-1332a8a98461 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Lock "interface-d5c2df9d-748f-4df2-9392-b45741975f65-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:40:15 compute-1 nova_compute[225855]: 2026-01-20 14:40:15.964 225859 DEBUG nova.objects.instance [None req-bc0698b6-4f44-4581-89b2-1332a8a98461 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Lazy-loading 'flavor' on Instance uuid d5c2df9d-748f-4df2-9392-b45741975f65 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:40:15 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:40:15 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:40:15 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:40:15.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:40:15 compute-1 nova_compute[225855]: 2026-01-20 14:40:15.988 225859 DEBUG nova.objects.instance [None req-bc0698b6-4f44-4581-89b2-1332a8a98461 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Lazy-loading 'pci_requests' on Instance uuid d5c2df9d-748f-4df2-9392-b45741975f65 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:40:16 compute-1 nova_compute[225855]: 2026-01-20 14:40:16.001 225859 DEBUG nova.network.neutron [None req-bc0698b6-4f44-4581-89b2-1332a8a98461 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] [instance: d5c2df9d-748f-4df2-9392-b45741975f65] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 20 14:40:16 compute-1 nova_compute[225855]: 2026-01-20 14:40:16.124 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:40:16 compute-1 nova_compute[225855]: 2026-01-20 14:40:16.151 225859 DEBUG oslo_concurrency.processutils [None req-30feda58-4032-4227-b08f-128214a95310 16c05e1ac16f428bab6b36346856235e b50ce2f25e8943e28ddf8bf69c721e75 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 a96ccadd-ac1d-4040-8bcc-bebb460ee233_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.336s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:40:16 compute-1 nova_compute[225855]: 2026-01-20 14:40:16.221 225859 DEBUG nova.storage.rbd_utils [None req-30feda58-4032-4227-b08f-128214a95310 16c05e1ac16f428bab6b36346856235e b50ce2f25e8943e28ddf8bf69c721e75 - - default default] resizing rbd image a96ccadd-ac1d-4040-8bcc-bebb460ee233_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 20 14:40:16 compute-1 nova_compute[225855]: 2026-01-20 14:40:16.326 225859 DEBUG nova.objects.instance [None req-30feda58-4032-4227-b08f-128214a95310 16c05e1ac16f428bab6b36346856235e b50ce2f25e8943e28ddf8bf69c721e75 - - default default] Lazy-loading 'migration_context' on Instance uuid a96ccadd-ac1d-4040-8bcc-bebb460ee233 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:40:16 compute-1 nova_compute[225855]: 2026-01-20 14:40:16.352 225859 DEBUG nova.virt.libvirt.driver [None req-30feda58-4032-4227-b08f-128214a95310 16c05e1ac16f428bab6b36346856235e b50ce2f25e8943e28ddf8bf69c721e75 - - default default] [instance: a96ccadd-ac1d-4040-8bcc-bebb460ee233] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 20 14:40:16 compute-1 nova_compute[225855]: 2026-01-20 14:40:16.353 225859 DEBUG nova.virt.libvirt.driver [None req-30feda58-4032-4227-b08f-128214a95310 16c05e1ac16f428bab6b36346856235e b50ce2f25e8943e28ddf8bf69c721e75 - - default default] [instance: a96ccadd-ac1d-4040-8bcc-bebb460ee233] Ensure instance console log exists: /var/lib/nova/instances/a96ccadd-ac1d-4040-8bcc-bebb460ee233/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 20 14:40:16 compute-1 nova_compute[225855]: 2026-01-20 14:40:16.353 225859 DEBUG oslo_concurrency.lockutils [None req-30feda58-4032-4227-b08f-128214a95310 16c05e1ac16f428bab6b36346856235e b50ce2f25e8943e28ddf8bf69c721e75 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:40:16 compute-1 nova_compute[225855]: 2026-01-20 14:40:16.353 225859 DEBUG oslo_concurrency.lockutils [None req-30feda58-4032-4227-b08f-128214a95310 16c05e1ac16f428bab6b36346856235e b50ce2f25e8943e28ddf8bf69c721e75 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:40:16 compute-1 nova_compute[225855]: 2026-01-20 14:40:16.354 225859 DEBUG oslo_concurrency.lockutils [None req-30feda58-4032-4227-b08f-128214a95310 16c05e1ac16f428bab6b36346856235e b50ce2f25e8943e28ddf8bf69c721e75 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:40:16 compute-1 nova_compute[225855]: 2026-01-20 14:40:16.362 225859 DEBUG nova.policy [None req-bc0698b6-4f44-4581-89b2-1332a8a98461 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'c8a9fb458d27434495a77a94827b6097', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'e3f93fd4b2154dda9f38e62334904303', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 20 14:40:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:40:16.400 140354 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:40:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:40:16.401 140354 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:40:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:40:16.402 140354 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:40:16 compute-1 ceph-mon[81775]: pgmap v1557: 321 pgs: 321 active+clean; 247 MiB data, 700 MiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 3.9 MiB/s wr, 147 op/s
Jan 20 14:40:16 compute-1 nova_compute[225855]: 2026-01-20 14:40:16.926 225859 DEBUG nova.network.neutron [None req-30feda58-4032-4227-b08f-128214a95310 16c05e1ac16f428bab6b36346856235e b50ce2f25e8943e28ddf8bf69c721e75 - - default default] [instance: a96ccadd-ac1d-4040-8bcc-bebb460ee233] Successfully created port: 19a89daa-770c-4c3f-970c-a9a462503b06 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 20 14:40:16 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:40:17 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:40:17 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:40:16.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:40:17 compute-1 nova_compute[225855]: 2026-01-20 14:40:17.021 225859 DEBUG nova.network.neutron [None req-bc0698b6-4f44-4581-89b2-1332a8a98461 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] [instance: d5c2df9d-748f-4df2-9392-b45741975f65] Successfully created port: db46acd4-809b-4127-ad48-870ae429b4d6 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 20 14:40:17 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:40:17 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:40:17 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:40:17.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:40:18 compute-1 nova_compute[225855]: 2026-01-20 14:40:18.029 225859 DEBUG nova.network.neutron [None req-bc0698b6-4f44-4581-89b2-1332a8a98461 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] [instance: d5c2df9d-748f-4df2-9392-b45741975f65] Successfully updated port: db46acd4-809b-4127-ad48-870ae429b4d6 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 20 14:40:18 compute-1 nova_compute[225855]: 2026-01-20 14:40:18.059 225859 DEBUG oslo_concurrency.lockutils [None req-bc0698b6-4f44-4581-89b2-1332a8a98461 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Acquiring lock "refresh_cache-d5c2df9d-748f-4df2-9392-b45741975f65" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:40:18 compute-1 nova_compute[225855]: 2026-01-20 14:40:18.060 225859 DEBUG oslo_concurrency.lockutils [None req-bc0698b6-4f44-4581-89b2-1332a8a98461 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Acquired lock "refresh_cache-d5c2df9d-748f-4df2-9392-b45741975f65" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:40:18 compute-1 nova_compute[225855]: 2026-01-20 14:40:18.060 225859 DEBUG nova.network.neutron [None req-bc0698b6-4f44-4581-89b2-1332a8a98461 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] [instance: d5c2df9d-748f-4df2-9392-b45741975f65] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 14:40:18 compute-1 nova_compute[225855]: 2026-01-20 14:40:18.103 225859 DEBUG nova.compute.manager [req-d7555cd0-1901-4197-b5d5-bed5a20edf60 req-68a01821-fc6f-463a-9132-16e9cd02bc48 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d5c2df9d-748f-4df2-9392-b45741975f65] Received event network-changed-db46acd4-809b-4127-ad48-870ae429b4d6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:40:18 compute-1 nova_compute[225855]: 2026-01-20 14:40:18.104 225859 DEBUG nova.compute.manager [req-d7555cd0-1901-4197-b5d5-bed5a20edf60 req-68a01821-fc6f-463a-9132-16e9cd02bc48 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d5c2df9d-748f-4df2-9392-b45741975f65] Refreshing instance network info cache due to event network-changed-db46acd4-809b-4127-ad48-870ae429b4d6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 14:40:18 compute-1 nova_compute[225855]: 2026-01-20 14:40:18.104 225859 DEBUG oslo_concurrency.lockutils [req-d7555cd0-1901-4197-b5d5-bed5a20edf60 req-68a01821-fc6f-463a-9132-16e9cd02bc48 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-d5c2df9d-748f-4df2-9392-b45741975f65" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:40:18 compute-1 nova_compute[225855]: 2026-01-20 14:40:18.298 225859 WARNING nova.network.neutron [None req-bc0698b6-4f44-4581-89b2-1332a8a98461 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] [instance: d5c2df9d-748f-4df2-9392-b45741975f65] fc21b99b-4e34-422c-be05-0a440009dac4 already exists in list: networks containing: ['fc21b99b-4e34-422c-be05-0a440009dac4']. ignoring it
Jan 20 14:40:18 compute-1 nova_compute[225855]: 2026-01-20 14:40:18.568 225859 DEBUG nova.network.neutron [None req-30feda58-4032-4227-b08f-128214a95310 16c05e1ac16f428bab6b36346856235e b50ce2f25e8943e28ddf8bf69c721e75 - - default default] [instance: a96ccadd-ac1d-4040-8bcc-bebb460ee233] Successfully updated port: 19a89daa-770c-4c3f-970c-a9a462503b06 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 20 14:40:18 compute-1 nova_compute[225855]: 2026-01-20 14:40:18.595 225859 DEBUG oslo_concurrency.lockutils [None req-30feda58-4032-4227-b08f-128214a95310 16c05e1ac16f428bab6b36346856235e b50ce2f25e8943e28ddf8bf69c721e75 - - default default] Acquiring lock "refresh_cache-a96ccadd-ac1d-4040-8bcc-bebb460ee233" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:40:18 compute-1 nova_compute[225855]: 2026-01-20 14:40:18.595 225859 DEBUG oslo_concurrency.lockutils [None req-30feda58-4032-4227-b08f-128214a95310 16c05e1ac16f428bab6b36346856235e b50ce2f25e8943e28ddf8bf69c721e75 - - default default] Acquired lock "refresh_cache-a96ccadd-ac1d-4040-8bcc-bebb460ee233" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:40:18 compute-1 nova_compute[225855]: 2026-01-20 14:40:18.595 225859 DEBUG nova.network.neutron [None req-30feda58-4032-4227-b08f-128214a95310 16c05e1ac16f428bab6b36346856235e b50ce2f25e8943e28ddf8bf69c721e75 - - default default] [instance: a96ccadd-ac1d-4040-8bcc-bebb460ee233] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 14:40:18 compute-1 ceph-mon[81775]: pgmap v1558: 321 pgs: 321 active+clean; 268 MiB data, 710 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 4.8 MiB/s wr, 177 op/s
Jan 20 14:40:18 compute-1 nova_compute[225855]: 2026-01-20 14:40:18.840 225859 DEBUG nova.network.neutron [None req-30feda58-4032-4227-b08f-128214a95310 16c05e1ac16f428bab6b36346856235e b50ce2f25e8943e28ddf8bf69c721e75 - - default default] [instance: a96ccadd-ac1d-4040-8bcc-bebb460ee233] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 20 14:40:19 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:40:19 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:40:19 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:40:19.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:40:19 compute-1 nova_compute[225855]: 2026-01-20 14:40:19.760 225859 DEBUG nova.compute.manager [req-9aa08c3b-df5d-455d-a6c7-b122e663321f req-3850f364-33e1-4a03-8ef0-db34ae5d7263 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: a96ccadd-ac1d-4040-8bcc-bebb460ee233] Received event network-changed-19a89daa-770c-4c3f-970c-a9a462503b06 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:40:19 compute-1 nova_compute[225855]: 2026-01-20 14:40:19.760 225859 DEBUG nova.compute.manager [req-9aa08c3b-df5d-455d-a6c7-b122e663321f req-3850f364-33e1-4a03-8ef0-db34ae5d7263 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: a96ccadd-ac1d-4040-8bcc-bebb460ee233] Refreshing instance network info cache due to event network-changed-19a89daa-770c-4c3f-970c-a9a462503b06. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 14:40:19 compute-1 nova_compute[225855]: 2026-01-20 14:40:19.760 225859 DEBUG oslo_concurrency.lockutils [req-9aa08c3b-df5d-455d-a6c7-b122e663321f req-3850f364-33e1-4a03-8ef0-db34ae5d7263 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-a96ccadd-ac1d-4040-8bcc-bebb460ee233" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:40:19 compute-1 nova_compute[225855]: 2026-01-20 14:40:19.776 225859 DEBUG nova.network.neutron [None req-30feda58-4032-4227-b08f-128214a95310 16c05e1ac16f428bab6b36346856235e b50ce2f25e8943e28ddf8bf69c721e75 - - default default] [instance: a96ccadd-ac1d-4040-8bcc-bebb460ee233] Updating instance_info_cache with network_info: [{"id": "19a89daa-770c-4c3f-970c-a9a462503b06", "address": "fa:16:3e:7c:c6:66", "network": {"id": "1002188b-c6a7-4b59-9326-3a1a837a00fd", "bridge": "br-int", "label": "tempest-ServersTestFqdnHostnames-1606456759-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b50ce2f25e8943e28ddf8bf69c721e75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap19a89daa-77", "ovs_interfaceid": "19a89daa-770c-4c3f-970c-a9a462503b06", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:40:19 compute-1 nova_compute[225855]: 2026-01-20 14:40:19.803 225859 DEBUG oslo_concurrency.lockutils [None req-30feda58-4032-4227-b08f-128214a95310 16c05e1ac16f428bab6b36346856235e b50ce2f25e8943e28ddf8bf69c721e75 - - default default] Releasing lock "refresh_cache-a96ccadd-ac1d-4040-8bcc-bebb460ee233" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:40:19 compute-1 nova_compute[225855]: 2026-01-20 14:40:19.803 225859 DEBUG nova.compute.manager [None req-30feda58-4032-4227-b08f-128214a95310 16c05e1ac16f428bab6b36346856235e b50ce2f25e8943e28ddf8bf69c721e75 - - default default] [instance: a96ccadd-ac1d-4040-8bcc-bebb460ee233] Instance network_info: |[{"id": "19a89daa-770c-4c3f-970c-a9a462503b06", "address": "fa:16:3e:7c:c6:66", "network": {"id": "1002188b-c6a7-4b59-9326-3a1a837a00fd", "bridge": "br-int", "label": "tempest-ServersTestFqdnHostnames-1606456759-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b50ce2f25e8943e28ddf8bf69c721e75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap19a89daa-77", "ovs_interfaceid": "19a89daa-770c-4c3f-970c-a9a462503b06", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 20 14:40:19 compute-1 nova_compute[225855]: 2026-01-20 14:40:19.804 225859 DEBUG oslo_concurrency.lockutils [req-9aa08c3b-df5d-455d-a6c7-b122e663321f req-3850f364-33e1-4a03-8ef0-db34ae5d7263 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-a96ccadd-ac1d-4040-8bcc-bebb460ee233" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:40:19 compute-1 nova_compute[225855]: 2026-01-20 14:40:19.804 225859 DEBUG nova.network.neutron [req-9aa08c3b-df5d-455d-a6c7-b122e663321f req-3850f364-33e1-4a03-8ef0-db34ae5d7263 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: a96ccadd-ac1d-4040-8bcc-bebb460ee233] Refreshing network info cache for port 19a89daa-770c-4c3f-970c-a9a462503b06 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 14:40:19 compute-1 nova_compute[225855]: 2026-01-20 14:40:19.807 225859 DEBUG nova.virt.libvirt.driver [None req-30feda58-4032-4227-b08f-128214a95310 16c05e1ac16f428bab6b36346856235e b50ce2f25e8943e28ddf8bf69c721e75 - - default default] [instance: a96ccadd-ac1d-4040-8bcc-bebb460ee233] Start _get_guest_xml network_info=[{"id": "19a89daa-770c-4c3f-970c-a9a462503b06", "address": "fa:16:3e:7c:c6:66", "network": {"id": "1002188b-c6a7-4b59-9326-3a1a837a00fd", "bridge": "br-int", "label": "tempest-ServersTestFqdnHostnames-1606456759-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b50ce2f25e8943e28ddf8bf69c721e75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap19a89daa-77", "ovs_interfaceid": "19a89daa-770c-4c3f-970c-a9a462503b06", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'device_type': 'disk', 'encryption_options': None, 'size': 0, 'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'guest_format': None, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'image_id': 'a32b3e07-16d8-46fd-9a7b-c242c432fcf9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 20 14:40:19 compute-1 nova_compute[225855]: 2026-01-20 14:40:19.811 225859 WARNING nova.virt.libvirt.driver [None req-30feda58-4032-4227-b08f-128214a95310 16c05e1ac16f428bab6b36346856235e b50ce2f25e8943e28ddf8bf69c721e75 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 14:40:19 compute-1 nova_compute[225855]: 2026-01-20 14:40:19.815 225859 DEBUG nova.virt.libvirt.host [None req-30feda58-4032-4227-b08f-128214a95310 16c05e1ac16f428bab6b36346856235e b50ce2f25e8943e28ddf8bf69c721e75 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 20 14:40:19 compute-1 nova_compute[225855]: 2026-01-20 14:40:19.815 225859 DEBUG nova.virt.libvirt.host [None req-30feda58-4032-4227-b08f-128214a95310 16c05e1ac16f428bab6b36346856235e b50ce2f25e8943e28ddf8bf69c721e75 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 20 14:40:19 compute-1 nova_compute[225855]: 2026-01-20 14:40:19.819 225859 DEBUG nova.virt.libvirt.host [None req-30feda58-4032-4227-b08f-128214a95310 16c05e1ac16f428bab6b36346856235e b50ce2f25e8943e28ddf8bf69c721e75 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 20 14:40:19 compute-1 nova_compute[225855]: 2026-01-20 14:40:19.819 225859 DEBUG nova.virt.libvirt.host [None req-30feda58-4032-4227-b08f-128214a95310 16c05e1ac16f428bab6b36346856235e b50ce2f25e8943e28ddf8bf69c721e75 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 20 14:40:19 compute-1 nova_compute[225855]: 2026-01-20 14:40:19.820 225859 DEBUG nova.virt.libvirt.driver [None req-30feda58-4032-4227-b08f-128214a95310 16c05e1ac16f428bab6b36346856235e b50ce2f25e8943e28ddf8bf69c721e75 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 20 14:40:19 compute-1 nova_compute[225855]: 2026-01-20 14:40:19.820 225859 DEBUG nova.virt.hardware [None req-30feda58-4032-4227-b08f-128214a95310 16c05e1ac16f428bab6b36346856235e b50ce2f25e8943e28ddf8bf69c721e75 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-20T14:21:55Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='522deaab-a741-4dbb-932d-d8b13a211c33',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 20 14:40:19 compute-1 nova_compute[225855]: 2026-01-20 14:40:19.821 225859 DEBUG nova.virt.hardware [None req-30feda58-4032-4227-b08f-128214a95310 16c05e1ac16f428bab6b36346856235e b50ce2f25e8943e28ddf8bf69c721e75 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 20 14:40:19 compute-1 nova_compute[225855]: 2026-01-20 14:40:19.821 225859 DEBUG nova.virt.hardware [None req-30feda58-4032-4227-b08f-128214a95310 16c05e1ac16f428bab6b36346856235e b50ce2f25e8943e28ddf8bf69c721e75 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 20 14:40:19 compute-1 nova_compute[225855]: 2026-01-20 14:40:19.822 225859 DEBUG nova.virt.hardware [None req-30feda58-4032-4227-b08f-128214a95310 16c05e1ac16f428bab6b36346856235e b50ce2f25e8943e28ddf8bf69c721e75 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 20 14:40:19 compute-1 nova_compute[225855]: 2026-01-20 14:40:19.822 225859 DEBUG nova.virt.hardware [None req-30feda58-4032-4227-b08f-128214a95310 16c05e1ac16f428bab6b36346856235e b50ce2f25e8943e28ddf8bf69c721e75 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 20 14:40:19 compute-1 nova_compute[225855]: 2026-01-20 14:40:19.822 225859 DEBUG nova.virt.hardware [None req-30feda58-4032-4227-b08f-128214a95310 16c05e1ac16f428bab6b36346856235e b50ce2f25e8943e28ddf8bf69c721e75 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 20 14:40:19 compute-1 nova_compute[225855]: 2026-01-20 14:40:19.822 225859 DEBUG nova.virt.hardware [None req-30feda58-4032-4227-b08f-128214a95310 16c05e1ac16f428bab6b36346856235e b50ce2f25e8943e28ddf8bf69c721e75 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 20 14:40:19 compute-1 nova_compute[225855]: 2026-01-20 14:40:19.823 225859 DEBUG nova.virt.hardware [None req-30feda58-4032-4227-b08f-128214a95310 16c05e1ac16f428bab6b36346856235e b50ce2f25e8943e28ddf8bf69c721e75 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 20 14:40:19 compute-1 nova_compute[225855]: 2026-01-20 14:40:19.823 225859 DEBUG nova.virt.hardware [None req-30feda58-4032-4227-b08f-128214a95310 16c05e1ac16f428bab6b36346856235e b50ce2f25e8943e28ddf8bf69c721e75 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 20 14:40:19 compute-1 nova_compute[225855]: 2026-01-20 14:40:19.824 225859 DEBUG nova.virt.hardware [None req-30feda58-4032-4227-b08f-128214a95310 16c05e1ac16f428bab6b36346856235e b50ce2f25e8943e28ddf8bf69c721e75 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 20 14:40:19 compute-1 nova_compute[225855]: 2026-01-20 14:40:19.824 225859 DEBUG nova.virt.hardware [None req-30feda58-4032-4227-b08f-128214a95310 16c05e1ac16f428bab6b36346856235e b50ce2f25e8943e28ddf8bf69c721e75 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 20 14:40:19 compute-1 nova_compute[225855]: 2026-01-20 14:40:19.827 225859 DEBUG oslo_concurrency.processutils [None req-30feda58-4032-4227-b08f-128214a95310 16c05e1ac16f428bab6b36346856235e b50ce2f25e8943e28ddf8bf69c721e75 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:40:19 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:40:19 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:40:19 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:40:19.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:40:20 compute-1 nova_compute[225855]: 2026-01-20 14:40:20.051 225859 DEBUG nova.network.neutron [None req-bc0698b6-4f44-4581-89b2-1332a8a98461 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] [instance: d5c2df9d-748f-4df2-9392-b45741975f65] Updating instance_info_cache with network_info: [{"id": "b48170b0-717d-48f0-8172-742a4a8596e9", "address": "fa:16:3e:3b:35:f2", "network": {"id": "fc21b99b-4e34-422c-be05-0a440009dac4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-808285772-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.174", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3f93fd4b2154dda9f38e62334904303", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb48170b0-71", "ovs_interfaceid": "b48170b0-717d-48f0-8172-742a4a8596e9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "db46acd4-809b-4127-ad48-870ae429b4d6", "address": "fa:16:3e:24:df:88", "network": {"id": "fc21b99b-4e34-422c-be05-0a440009dac4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-808285772-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3f93fd4b2154dda9f38e62334904303", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb46acd4-80", "ovs_interfaceid": "db46acd4-809b-4127-ad48-870ae429b4d6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:40:20 compute-1 nova_compute[225855]: 2026-01-20 14:40:20.073 225859 DEBUG oslo_concurrency.lockutils [None req-bc0698b6-4f44-4581-89b2-1332a8a98461 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Releasing lock "refresh_cache-d5c2df9d-748f-4df2-9392-b45741975f65" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:40:20 compute-1 nova_compute[225855]: 2026-01-20 14:40:20.074 225859 DEBUG oslo_concurrency.lockutils [req-d7555cd0-1901-4197-b5d5-bed5a20edf60 req-68a01821-fc6f-463a-9132-16e9cd02bc48 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-d5c2df9d-748f-4df2-9392-b45741975f65" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:40:20 compute-1 nova_compute[225855]: 2026-01-20 14:40:20.074 225859 DEBUG nova.network.neutron [req-d7555cd0-1901-4197-b5d5-bed5a20edf60 req-68a01821-fc6f-463a-9132-16e9cd02bc48 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d5c2df9d-748f-4df2-9392-b45741975f65] Refreshing network info cache for port db46acd4-809b-4127-ad48-870ae429b4d6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 14:40:20 compute-1 nova_compute[225855]: 2026-01-20 14:40:20.078 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:40:20 compute-1 nova_compute[225855]: 2026-01-20 14:40:20.083 225859 DEBUG nova.virt.libvirt.vif [None req-bc0698b6-4f44-4581-89b2-1332a8a98461 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T14:39:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-2088149366',display_name='tempest-AttachInterfacesTestJSON-server-2088149366',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-2088149366',id=66,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJZwLcyXdmuPk9iAZlMOAxeFM3EdHKE0x5nJT3i2GTbVf6EkhYVj3hEmoeSwYo6iZrNjT6w/g2TndK4CzLIvGDWLEyKfIPgg2vbEtoL1oIxCHYN2ytrctbkHi1netydaRQ==',key_name='tempest-keypair-230916378',keypairs=<?>,launch_index=0,launched_at=2026-01-20T14:39:53Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='e3f93fd4b2154dda9f38e62334904303',ramdisk_id='',reservation_id='r-odubqy1o',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-305746947',owner_user_name='tempest-AttachInterfacesTestJSON-305746947-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T14:39:53Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c8a9fb458d27434495a77a94827b6097',uuid=d5c2df9d-748f-4df2-9392-b45741975f65,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "db46acd4-809b-4127-ad48-870ae429b4d6", "address": "fa:16:3e:24:df:88", "network": {"id": "fc21b99b-4e34-422c-be05-0a440009dac4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-808285772-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3f93fd4b2154dda9f38e62334904303", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb46acd4-80", "ovs_interfaceid": "db46acd4-809b-4127-ad48-870ae429b4d6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 20 14:40:20 compute-1 nova_compute[225855]: 2026-01-20 14:40:20.084 225859 DEBUG nova.network.os_vif_util [None req-bc0698b6-4f44-4581-89b2-1332a8a98461 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Converting VIF {"id": "db46acd4-809b-4127-ad48-870ae429b4d6", "address": "fa:16:3e:24:df:88", "network": {"id": "fc21b99b-4e34-422c-be05-0a440009dac4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-808285772-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3f93fd4b2154dda9f38e62334904303", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb46acd4-80", "ovs_interfaceid": "db46acd4-809b-4127-ad48-870ae429b4d6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:40:20 compute-1 nova_compute[225855]: 2026-01-20 14:40:20.085 225859 DEBUG nova.network.os_vif_util [None req-bc0698b6-4f44-4581-89b2-1332a8a98461 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:24:df:88,bridge_name='br-int',has_traffic_filtering=True,id=db46acd4-809b-4127-ad48-870ae429b4d6,network=Network(fc21b99b-4e34-422c-be05-0a440009dac4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdb46acd4-80') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:40:20 compute-1 nova_compute[225855]: 2026-01-20 14:40:20.087 225859 DEBUG os_vif [None req-bc0698b6-4f44-4581-89b2-1332a8a98461 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:24:df:88,bridge_name='br-int',has_traffic_filtering=True,id=db46acd4-809b-4127-ad48-870ae429b4d6,network=Network(fc21b99b-4e34-422c-be05-0a440009dac4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdb46acd4-80') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 20 14:40:20 compute-1 nova_compute[225855]: 2026-01-20 14:40:20.088 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:40:20 compute-1 nova_compute[225855]: 2026-01-20 14:40:20.088 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:40:20 compute-1 nova_compute[225855]: 2026-01-20 14:40:20.089 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:40:20 compute-1 nova_compute[225855]: 2026-01-20 14:40:20.093 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:40:20 compute-1 nova_compute[225855]: 2026-01-20 14:40:20.093 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapdb46acd4-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:40:20 compute-1 nova_compute[225855]: 2026-01-20 14:40:20.094 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapdb46acd4-80, col_values=(('external_ids', {'iface-id': 'db46acd4-809b-4127-ad48-870ae429b4d6', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:24:df:88', 'vm-uuid': 'd5c2df9d-748f-4df2-9392-b45741975f65'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:40:20 compute-1 nova_compute[225855]: 2026-01-20 14:40:20.095 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:40:20 compute-1 NetworkManager[49104]: <info>  [1768920020.0962] manager: (tapdb46acd4-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/92)
Jan 20 14:40:20 compute-1 nova_compute[225855]: 2026-01-20 14:40:20.098 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 14:40:20 compute-1 nova_compute[225855]: 2026-01-20 14:40:20.104 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:40:20 compute-1 nova_compute[225855]: 2026-01-20 14:40:20.105 225859 INFO os_vif [None req-bc0698b6-4f44-4581-89b2-1332a8a98461 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:24:df:88,bridge_name='br-int',has_traffic_filtering=True,id=db46acd4-809b-4127-ad48-870ae429b4d6,network=Network(fc21b99b-4e34-422c-be05-0a440009dac4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdb46acd4-80')
Jan 20 14:40:20 compute-1 nova_compute[225855]: 2026-01-20 14:40:20.105 225859 DEBUG nova.virt.libvirt.vif [None req-bc0698b6-4f44-4581-89b2-1332a8a98461 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T14:39:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-2088149366',display_name='tempest-AttachInterfacesTestJSON-server-2088149366',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-2088149366',id=66,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJZwLcyXdmuPk9iAZlMOAxeFM3EdHKE0x5nJT3i2GTbVf6EkhYVj3hEmoeSwYo6iZrNjT6w/g2TndK4CzLIvGDWLEyKfIPgg2vbEtoL1oIxCHYN2ytrctbkHi1netydaRQ==',key_name='tempest-keypair-230916378',keypairs=<?>,launch_index=0,launched_at=2026-01-20T14:39:53Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='e3f93fd4b2154dda9f38e62334904303',ramdisk_id='',reservation_id='r-odubqy1o',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-305746947',owner_user_name='tempest-AttachInterfacesTestJSON-305746947-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T14:39:53Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c8a9fb458d27434495a77a94827b6097',uuid=d5c2df9d-748f-4df2-9392-b45741975f65,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "db46acd4-809b-4127-ad48-870ae429b4d6", "address": "fa:16:3e:24:df:88", "network": {"id": "fc21b99b-4e34-422c-be05-0a440009dac4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-808285772-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3f93fd4b2154dda9f38e62334904303", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb46acd4-80", "ovs_interfaceid": "db46acd4-809b-4127-ad48-870ae429b4d6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 20 14:40:20 compute-1 nova_compute[225855]: 2026-01-20 14:40:20.106 225859 DEBUG nova.network.os_vif_util [None req-bc0698b6-4f44-4581-89b2-1332a8a98461 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Converting VIF {"id": "db46acd4-809b-4127-ad48-870ae429b4d6", "address": "fa:16:3e:24:df:88", "network": {"id": "fc21b99b-4e34-422c-be05-0a440009dac4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-808285772-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3f93fd4b2154dda9f38e62334904303", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb46acd4-80", "ovs_interfaceid": "db46acd4-809b-4127-ad48-870ae429b4d6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:40:20 compute-1 nova_compute[225855]: 2026-01-20 14:40:20.106 225859 DEBUG nova.network.os_vif_util [None req-bc0698b6-4f44-4581-89b2-1332a8a98461 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:24:df:88,bridge_name='br-int',has_traffic_filtering=True,id=db46acd4-809b-4127-ad48-870ae429b4d6,network=Network(fc21b99b-4e34-422c-be05-0a440009dac4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdb46acd4-80') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:40:20 compute-1 nova_compute[225855]: 2026-01-20 14:40:20.109 225859 DEBUG nova.virt.libvirt.guest [None req-bc0698b6-4f44-4581-89b2-1332a8a98461 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] attach device xml: <interface type="ethernet">
Jan 20 14:40:20 compute-1 nova_compute[225855]:   <mac address="fa:16:3e:24:df:88"/>
Jan 20 14:40:20 compute-1 nova_compute[225855]:   <model type="virtio"/>
Jan 20 14:40:20 compute-1 nova_compute[225855]:   <driver name="vhost" rx_queue_size="512"/>
Jan 20 14:40:20 compute-1 nova_compute[225855]:   <mtu size="1442"/>
Jan 20 14:40:20 compute-1 nova_compute[225855]:   <target dev="tapdb46acd4-80"/>
Jan 20 14:40:20 compute-1 nova_compute[225855]: </interface>
Jan 20 14:40:20 compute-1 nova_compute[225855]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Jan 20 14:40:20 compute-1 podman[252533]: 2026-01-20 14:40:20.113997781 +0000 UTC m=+0.140360406 container health_status 72be95c12e041eb2cd1a16ad70dfa1391d9f03a5f3180a614ead9459dcfcd46f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible)
Jan 20 14:40:20 compute-1 kernel: tapdb46acd4-80: entered promiscuous mode
Jan 20 14:40:20 compute-1 NetworkManager[49104]: <info>  [1768920020.1213] manager: (tapdb46acd4-80): new Tun device (/org/freedesktop/NetworkManager/Devices/93)
Jan 20 14:40:20 compute-1 ovn_controller[130490]: 2026-01-20T14:40:20Z|00199|binding|INFO|Claiming lport db46acd4-809b-4127-ad48-870ae429b4d6 for this chassis.
Jan 20 14:40:20 compute-1 ovn_controller[130490]: 2026-01-20T14:40:20Z|00200|binding|INFO|db46acd4-809b-4127-ad48-870ae429b4d6: Claiming fa:16:3e:24:df:88 10.100.0.12
Jan 20 14:40:20 compute-1 nova_compute[225855]: 2026-01-20 14:40:20.125 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:40:20 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:40:20.131 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:24:df:88 10.100.0.12'], port_security=['fa:16:3e:24:df:88 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'd5c2df9d-748f-4df2-9392-b45741975f65', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fc21b99b-4e34-422c-be05-0a440009dac4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e3f93fd4b2154dda9f38e62334904303', 'neutron:revision_number': '2', 'neutron:security_group_ids': '52cb9fb4-4318-4f53-9b5a-002d95792517', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7af6b6bc-3cbd-48be-9f10-23ec011e0426, chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], logical_port=db46acd4-809b-4127-ad48-870ae429b4d6) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:40:20 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:40:20.132 140354 INFO neutron.agent.ovn.metadata.agent [-] Port db46acd4-809b-4127-ad48-870ae429b4d6 in datapath fc21b99b-4e34-422c-be05-0a440009dac4 bound to our chassis
Jan 20 14:40:20 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:40:20.134 140354 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fc21b99b-4e34-422c-be05-0a440009dac4
Jan 20 14:40:20 compute-1 ovn_controller[130490]: 2026-01-20T14:40:20Z|00201|binding|INFO|Setting lport db46acd4-809b-4127-ad48-870ae429b4d6 ovn-installed in OVS
Jan 20 14:40:20 compute-1 ovn_controller[130490]: 2026-01-20T14:40:20Z|00202|binding|INFO|Setting lport db46acd4-809b-4127-ad48-870ae429b4d6 up in Southbound
Jan 20 14:40:20 compute-1 nova_compute[225855]: 2026-01-20 14:40:20.142 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:40:20 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:40:20.163 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[cc2cf441-3877-49ac-9507-4c443d1b940f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:40:20 compute-1 systemd-udevd[252585]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 14:40:20 compute-1 NetworkManager[49104]: <info>  [1768920020.1925] device (tapdb46acd4-80): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 20 14:40:20 compute-1 NetworkManager[49104]: <info>  [1768920020.1935] device (tapdb46acd4-80): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 20 14:40:20 compute-1 nova_compute[225855]: 2026-01-20 14:40:20.199 225859 DEBUG nova.virt.libvirt.driver [None req-bc0698b6-4f44-4581-89b2-1332a8a98461 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:40:20 compute-1 nova_compute[225855]: 2026-01-20 14:40:20.199 225859 DEBUG nova.virt.libvirt.driver [None req-bc0698b6-4f44-4581-89b2-1332a8a98461 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:40:20 compute-1 nova_compute[225855]: 2026-01-20 14:40:20.199 225859 DEBUG nova.virt.libvirt.driver [None req-bc0698b6-4f44-4581-89b2-1332a8a98461 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] No VIF found with MAC fa:16:3e:3b:35:f2, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 20 14:40:20 compute-1 nova_compute[225855]: 2026-01-20 14:40:20.200 225859 DEBUG nova.virt.libvirt.driver [None req-bc0698b6-4f44-4581-89b2-1332a8a98461 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] No VIF found with MAC fa:16:3e:24:df:88, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 20 14:40:20 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:40:20.199 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[bef15fdd-ff0f-427c-b842-aaef978e6fd3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:40:20 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:40:20.202 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[f953f56c-5726-4e48-b3dd-b13438ab0689]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:40:20 compute-1 nova_compute[225855]: 2026-01-20 14:40:20.225 225859 DEBUG nova.virt.libvirt.guest [None req-bc0698b6-4f44-4581-89b2-1332a8a98461 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 14:40:20 compute-1 nova_compute[225855]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 14:40:20 compute-1 nova_compute[225855]:   <nova:name>tempest-AttachInterfacesTestJSON-server-2088149366</nova:name>
Jan 20 14:40:20 compute-1 nova_compute[225855]:   <nova:creationTime>2026-01-20 14:40:20</nova:creationTime>
Jan 20 14:40:20 compute-1 nova_compute[225855]:   <nova:flavor name="m1.nano">
Jan 20 14:40:20 compute-1 nova_compute[225855]:     <nova:memory>128</nova:memory>
Jan 20 14:40:20 compute-1 nova_compute[225855]:     <nova:disk>1</nova:disk>
Jan 20 14:40:20 compute-1 nova_compute[225855]:     <nova:swap>0</nova:swap>
Jan 20 14:40:20 compute-1 nova_compute[225855]:     <nova:ephemeral>0</nova:ephemeral>
Jan 20 14:40:20 compute-1 nova_compute[225855]:     <nova:vcpus>1</nova:vcpus>
Jan 20 14:40:20 compute-1 nova_compute[225855]:   </nova:flavor>
Jan 20 14:40:20 compute-1 nova_compute[225855]:   <nova:owner>
Jan 20 14:40:20 compute-1 nova_compute[225855]:     <nova:user uuid="c8a9fb458d27434495a77a94827b6097">tempest-AttachInterfacesTestJSON-305746947-project-member</nova:user>
Jan 20 14:40:20 compute-1 nova_compute[225855]:     <nova:project uuid="e3f93fd4b2154dda9f38e62334904303">tempest-AttachInterfacesTestJSON-305746947</nova:project>
Jan 20 14:40:20 compute-1 nova_compute[225855]:   </nova:owner>
Jan 20 14:40:20 compute-1 nova_compute[225855]:   <nova:root type="image" uuid="a32b3e07-16d8-46fd-9a7b-c242c432fcf9"/>
Jan 20 14:40:20 compute-1 nova_compute[225855]:   <nova:ports>
Jan 20 14:40:20 compute-1 nova_compute[225855]:     <nova:port uuid="b48170b0-717d-48f0-8172-742a4a8596e9">
Jan 20 14:40:20 compute-1 nova_compute[225855]:       <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Jan 20 14:40:20 compute-1 nova_compute[225855]:     </nova:port>
Jan 20 14:40:20 compute-1 nova_compute[225855]:     <nova:port uuid="db46acd4-809b-4127-ad48-870ae429b4d6">
Jan 20 14:40:20 compute-1 nova_compute[225855]:       <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Jan 20 14:40:20 compute-1 nova_compute[225855]:     </nova:port>
Jan 20 14:40:20 compute-1 nova_compute[225855]:   </nova:ports>
Jan 20 14:40:20 compute-1 nova_compute[225855]: </nova:instance>
Jan 20 14:40:20 compute-1 nova_compute[225855]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Jan 20 14:40:20 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:40:20.233 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[0e8eadce-0c54-4324-b474-92960aa1cb23]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:40:20 compute-1 nova_compute[225855]: 2026-01-20 14:40:20.248 225859 DEBUG oslo_concurrency.lockutils [None req-bc0698b6-4f44-4581-89b2-1332a8a98461 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Lock "interface-d5c2df9d-748f-4df2-9392-b45741975f65-None" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 4.285s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:40:20 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:40:20.249 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[aec06468-7fdb-4a3f-8c9f-eeafa279ca7c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfc21b99b-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b1:5b:d2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 55], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 503233, 'reachable_time': 15488, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 252592, 'error': None, 'target': 'ovnmeta-fc21b99b-4e34-422c-be05-0a440009dac4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:40:20 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:40:20.266 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[13309b0f-81bb-4914-86aa-6e25371b19fa]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapfc21b99b-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 503245, 'tstamp': 503245}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 252593, 'error': None, 'target': 'ovnmeta-fc21b99b-4e34-422c-be05-0a440009dac4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapfc21b99b-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 503248, 'tstamp': 503248}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 252593, 'error': None, 'target': 'ovnmeta-fc21b99b-4e34-422c-be05-0a440009dac4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:40:20 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:40:20.268 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfc21b99b-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:40:20 compute-1 nova_compute[225855]: 2026-01-20 14:40:20.270 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:40:20 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:40:20.271 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfc21b99b-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:40:20 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:40:20.272 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:40:20 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:40:20.272 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfc21b99b-40, col_values=(('external_ids', {'iface-id': '583df905-1d9f-49c1-b209-4b7fad1599f6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:40:20 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:40:20.273 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:40:20 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:40:20 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2413042588' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:40:20 compute-1 nova_compute[225855]: 2026-01-20 14:40:20.306 225859 DEBUG oslo_concurrency.processutils [None req-30feda58-4032-4227-b08f-128214a95310 16c05e1ac16f428bab6b36346856235e b50ce2f25e8943e28ddf8bf69c721e75 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:40:20 compute-1 nova_compute[225855]: 2026-01-20 14:40:20.331 225859 DEBUG nova.storage.rbd_utils [None req-30feda58-4032-4227-b08f-128214a95310 16c05e1ac16f428bab6b36346856235e b50ce2f25e8943e28ddf8bf69c721e75 - - default default] rbd image a96ccadd-ac1d-4040-8bcc-bebb460ee233_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:40:20 compute-1 nova_compute[225855]: 2026-01-20 14:40:20.335 225859 DEBUG oslo_concurrency.processutils [None req-30feda58-4032-4227-b08f-128214a95310 16c05e1ac16f428bab6b36346856235e b50ce2f25e8943e28ddf8bf69c721e75 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:40:20 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:40:20 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/564781149' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:40:20 compute-1 nova_compute[225855]: 2026-01-20 14:40:20.772 225859 DEBUG oslo_concurrency.processutils [None req-30feda58-4032-4227-b08f-128214a95310 16c05e1ac16f428bab6b36346856235e b50ce2f25e8943e28ddf8bf69c721e75 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:40:20 compute-1 nova_compute[225855]: 2026-01-20 14:40:20.774 225859 DEBUG nova.virt.libvirt.vif [None req-30feda58-4032-4227-b08f-128214a95310 16c05e1ac16f428bab6b36346856235e b50ce2f25e8943e28ddf8bf69c721e75 - - default default] vif_type=ovs instance=Instance(access_ip_v4=2.2.2.2,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T14:40:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='guest-instance-1.domain.com',display_name='guest-instance-1.domain.com',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='guest-instance-1-domain-com',id=68,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEbOWirsAXlTIoevt4kXzpqBpapeg8X6KpUPmzDXnXlw7wqoLKHnmHfUIYL+FmHPJoWs+SV643EEJY+tqAkcrZlCPnWit4UcMgPhE0LGoYJ6xDnxZGwNzSj5VV503kGh5A==',key_name='tempest-keypair-1553012660',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b50ce2f25e8943e28ddf8bf69c721e75',ramdisk_id='',reservation_id='r-w26et9o4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',im
age_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestFqdnHostnames-626473092',owner_user_name='tempest-ServersTestFqdnHostnames-626473092-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T14:40:15Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='16c05e1ac16f428bab6b36346856235e',uuid=a96ccadd-ac1d-4040-8bcc-bebb460ee233,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "19a89daa-770c-4c3f-970c-a9a462503b06", "address": "fa:16:3e:7c:c6:66", "network": {"id": "1002188b-c6a7-4b59-9326-3a1a837a00fd", "bridge": "br-int", "label": "tempest-ServersTestFqdnHostnames-1606456759-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b50ce2f25e8943e28ddf8bf69c721e75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap19a89daa-77", "ovs_interfaceid": "19a89daa-770c-4c3f-970c-a9a462503b06", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 20 14:40:20 compute-1 nova_compute[225855]: 2026-01-20 14:40:20.775 225859 DEBUG nova.network.os_vif_util [None req-30feda58-4032-4227-b08f-128214a95310 16c05e1ac16f428bab6b36346856235e b50ce2f25e8943e28ddf8bf69c721e75 - - default default] Converting VIF {"id": "19a89daa-770c-4c3f-970c-a9a462503b06", "address": "fa:16:3e:7c:c6:66", "network": {"id": "1002188b-c6a7-4b59-9326-3a1a837a00fd", "bridge": "br-int", "label": "tempest-ServersTestFqdnHostnames-1606456759-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b50ce2f25e8943e28ddf8bf69c721e75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap19a89daa-77", "ovs_interfaceid": "19a89daa-770c-4c3f-970c-a9a462503b06", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:40:20 compute-1 nova_compute[225855]: 2026-01-20 14:40:20.776 225859 DEBUG nova.network.os_vif_util [None req-30feda58-4032-4227-b08f-128214a95310 16c05e1ac16f428bab6b36346856235e b50ce2f25e8943e28ddf8bf69c721e75 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7c:c6:66,bridge_name='br-int',has_traffic_filtering=True,id=19a89daa-770c-4c3f-970c-a9a462503b06,network=Network(1002188b-c6a7-4b59-9326-3a1a837a00fd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap19a89daa-77') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:40:20 compute-1 nova_compute[225855]: 2026-01-20 14:40:20.777 225859 DEBUG nova.objects.instance [None req-30feda58-4032-4227-b08f-128214a95310 16c05e1ac16f428bab6b36346856235e b50ce2f25e8943e28ddf8bf69c721e75 - - default default] Lazy-loading 'pci_devices' on Instance uuid a96ccadd-ac1d-4040-8bcc-bebb460ee233 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:40:20 compute-1 nova_compute[225855]: 2026-01-20 14:40:20.795 225859 DEBUG nova.virt.libvirt.driver [None req-30feda58-4032-4227-b08f-128214a95310 16c05e1ac16f428bab6b36346856235e b50ce2f25e8943e28ddf8bf69c721e75 - - default default] [instance: a96ccadd-ac1d-4040-8bcc-bebb460ee233] End _get_guest_xml xml=<domain type="kvm">
Jan 20 14:40:20 compute-1 nova_compute[225855]:   <uuid>a96ccadd-ac1d-4040-8bcc-bebb460ee233</uuid>
Jan 20 14:40:20 compute-1 nova_compute[225855]:   <name>instance-00000044</name>
Jan 20 14:40:20 compute-1 nova_compute[225855]:   <memory>131072</memory>
Jan 20 14:40:20 compute-1 nova_compute[225855]:   <vcpu>1</vcpu>
Jan 20 14:40:20 compute-1 nova_compute[225855]:   <metadata>
Jan 20 14:40:20 compute-1 nova_compute[225855]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 14:40:20 compute-1 nova_compute[225855]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 14:40:20 compute-1 nova_compute[225855]:       <nova:name>guest-instance-1.domain.com</nova:name>
Jan 20 14:40:20 compute-1 nova_compute[225855]:       <nova:creationTime>2026-01-20 14:40:19</nova:creationTime>
Jan 20 14:40:20 compute-1 nova_compute[225855]:       <nova:flavor name="m1.nano">
Jan 20 14:40:20 compute-1 nova_compute[225855]:         <nova:memory>128</nova:memory>
Jan 20 14:40:20 compute-1 nova_compute[225855]:         <nova:disk>1</nova:disk>
Jan 20 14:40:20 compute-1 nova_compute[225855]:         <nova:swap>0</nova:swap>
Jan 20 14:40:20 compute-1 nova_compute[225855]:         <nova:ephemeral>0</nova:ephemeral>
Jan 20 14:40:20 compute-1 nova_compute[225855]:         <nova:vcpus>1</nova:vcpus>
Jan 20 14:40:20 compute-1 nova_compute[225855]:       </nova:flavor>
Jan 20 14:40:20 compute-1 nova_compute[225855]:       <nova:owner>
Jan 20 14:40:20 compute-1 nova_compute[225855]:         <nova:user uuid="16c05e1ac16f428bab6b36346856235e">tempest-ServersTestFqdnHostnames-626473092-project-member</nova:user>
Jan 20 14:40:20 compute-1 nova_compute[225855]:         <nova:project uuid="b50ce2f25e8943e28ddf8bf69c721e75">tempest-ServersTestFqdnHostnames-626473092</nova:project>
Jan 20 14:40:20 compute-1 nova_compute[225855]:       </nova:owner>
Jan 20 14:40:20 compute-1 nova_compute[225855]:       <nova:root type="image" uuid="a32b3e07-16d8-46fd-9a7b-c242c432fcf9"/>
Jan 20 14:40:20 compute-1 nova_compute[225855]:       <nova:ports>
Jan 20 14:40:20 compute-1 nova_compute[225855]:         <nova:port uuid="19a89daa-770c-4c3f-970c-a9a462503b06">
Jan 20 14:40:20 compute-1 nova_compute[225855]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Jan 20 14:40:20 compute-1 nova_compute[225855]:         </nova:port>
Jan 20 14:40:20 compute-1 nova_compute[225855]:       </nova:ports>
Jan 20 14:40:20 compute-1 nova_compute[225855]:     </nova:instance>
Jan 20 14:40:20 compute-1 nova_compute[225855]:   </metadata>
Jan 20 14:40:20 compute-1 nova_compute[225855]:   <sysinfo type="smbios">
Jan 20 14:40:20 compute-1 nova_compute[225855]:     <system>
Jan 20 14:40:20 compute-1 nova_compute[225855]:       <entry name="manufacturer">RDO</entry>
Jan 20 14:40:20 compute-1 nova_compute[225855]:       <entry name="product">OpenStack Compute</entry>
Jan 20 14:40:20 compute-1 nova_compute[225855]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 14:40:20 compute-1 nova_compute[225855]:       <entry name="serial">a96ccadd-ac1d-4040-8bcc-bebb460ee233</entry>
Jan 20 14:40:20 compute-1 nova_compute[225855]:       <entry name="uuid">a96ccadd-ac1d-4040-8bcc-bebb460ee233</entry>
Jan 20 14:40:20 compute-1 nova_compute[225855]:       <entry name="family">Virtual Machine</entry>
Jan 20 14:40:20 compute-1 nova_compute[225855]:     </system>
Jan 20 14:40:20 compute-1 nova_compute[225855]:   </sysinfo>
Jan 20 14:40:20 compute-1 nova_compute[225855]:   <os>
Jan 20 14:40:20 compute-1 nova_compute[225855]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 20 14:40:20 compute-1 nova_compute[225855]:     <boot dev="hd"/>
Jan 20 14:40:20 compute-1 nova_compute[225855]:     <smbios mode="sysinfo"/>
Jan 20 14:40:20 compute-1 nova_compute[225855]:   </os>
Jan 20 14:40:20 compute-1 nova_compute[225855]:   <features>
Jan 20 14:40:20 compute-1 nova_compute[225855]:     <acpi/>
Jan 20 14:40:20 compute-1 nova_compute[225855]:     <apic/>
Jan 20 14:40:20 compute-1 nova_compute[225855]:     <vmcoreinfo/>
Jan 20 14:40:20 compute-1 nova_compute[225855]:   </features>
Jan 20 14:40:20 compute-1 nova_compute[225855]:   <clock offset="utc">
Jan 20 14:40:20 compute-1 nova_compute[225855]:     <timer name="pit" tickpolicy="delay"/>
Jan 20 14:40:20 compute-1 nova_compute[225855]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 20 14:40:20 compute-1 nova_compute[225855]:     <timer name="hpet" present="no"/>
Jan 20 14:40:20 compute-1 nova_compute[225855]:   </clock>
Jan 20 14:40:20 compute-1 nova_compute[225855]:   <cpu mode="custom" match="exact">
Jan 20 14:40:20 compute-1 nova_compute[225855]:     <model>Nehalem</model>
Jan 20 14:40:20 compute-1 nova_compute[225855]:     <topology sockets="1" cores="1" threads="1"/>
Jan 20 14:40:20 compute-1 nova_compute[225855]:   </cpu>
Jan 20 14:40:20 compute-1 nova_compute[225855]:   <devices>
Jan 20 14:40:20 compute-1 nova_compute[225855]:     <disk type="network" device="disk">
Jan 20 14:40:20 compute-1 nova_compute[225855]:       <driver type="raw" cache="none"/>
Jan 20 14:40:20 compute-1 nova_compute[225855]:       <source protocol="rbd" name="vms/a96ccadd-ac1d-4040-8bcc-bebb460ee233_disk">
Jan 20 14:40:20 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:40:20 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:40:20 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:40:20 compute-1 nova_compute[225855]:       </source>
Jan 20 14:40:20 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 14:40:20 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:40:20 compute-1 nova_compute[225855]:       </auth>
Jan 20 14:40:20 compute-1 nova_compute[225855]:       <target dev="vda" bus="virtio"/>
Jan 20 14:40:20 compute-1 nova_compute[225855]:     </disk>
Jan 20 14:40:20 compute-1 nova_compute[225855]:     <disk type="network" device="cdrom">
Jan 20 14:40:20 compute-1 nova_compute[225855]:       <driver type="raw" cache="none"/>
Jan 20 14:40:20 compute-1 nova_compute[225855]:       <source protocol="rbd" name="vms/a96ccadd-ac1d-4040-8bcc-bebb460ee233_disk.config">
Jan 20 14:40:20 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:40:20 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:40:20 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:40:20 compute-1 nova_compute[225855]:       </source>
Jan 20 14:40:20 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 14:40:20 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:40:20 compute-1 nova_compute[225855]:       </auth>
Jan 20 14:40:20 compute-1 nova_compute[225855]:       <target dev="sda" bus="sata"/>
Jan 20 14:40:20 compute-1 nova_compute[225855]:     </disk>
Jan 20 14:40:20 compute-1 nova_compute[225855]:     <interface type="ethernet">
Jan 20 14:40:20 compute-1 nova_compute[225855]:       <mac address="fa:16:3e:7c:c6:66"/>
Jan 20 14:40:20 compute-1 nova_compute[225855]:       <model type="virtio"/>
Jan 20 14:40:20 compute-1 nova_compute[225855]:       <driver name="vhost" rx_queue_size="512"/>
Jan 20 14:40:20 compute-1 nova_compute[225855]:       <mtu size="1442"/>
Jan 20 14:40:20 compute-1 nova_compute[225855]:       <target dev="tap19a89daa-77"/>
Jan 20 14:40:20 compute-1 nova_compute[225855]:     </interface>
Jan 20 14:40:20 compute-1 nova_compute[225855]:     <serial type="pty">
Jan 20 14:40:20 compute-1 nova_compute[225855]:       <log file="/var/lib/nova/instances/a96ccadd-ac1d-4040-8bcc-bebb460ee233/console.log" append="off"/>
Jan 20 14:40:20 compute-1 nova_compute[225855]:     </serial>
Jan 20 14:40:20 compute-1 nova_compute[225855]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 20 14:40:20 compute-1 nova_compute[225855]:     <video>
Jan 20 14:40:20 compute-1 nova_compute[225855]:       <model type="virtio"/>
Jan 20 14:40:20 compute-1 nova_compute[225855]:     </video>
Jan 20 14:40:20 compute-1 nova_compute[225855]:     <input type="tablet" bus="usb"/>
Jan 20 14:40:20 compute-1 nova_compute[225855]:     <rng model="virtio">
Jan 20 14:40:20 compute-1 nova_compute[225855]:       <backend model="random">/dev/urandom</backend>
Jan 20 14:40:20 compute-1 nova_compute[225855]:     </rng>
Jan 20 14:40:20 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root"/>
Jan 20 14:40:20 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:40:20 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:40:20 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:40:20 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:40:20 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:40:20 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:40:20 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:40:20 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:40:20 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:40:20 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:40:20 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:40:20 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:40:20 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:40:20 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:40:20 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:40:20 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:40:20 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:40:20 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:40:20 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:40:20 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:40:20 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:40:20 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:40:20 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:40:20 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:40:20 compute-1 nova_compute[225855]:     <controller type="usb" index="0"/>
Jan 20 14:40:20 compute-1 nova_compute[225855]:     <memballoon model="virtio">
Jan 20 14:40:20 compute-1 nova_compute[225855]:       <stats period="10"/>
Jan 20 14:40:20 compute-1 nova_compute[225855]:     </memballoon>
Jan 20 14:40:20 compute-1 nova_compute[225855]:   </devices>
Jan 20 14:40:20 compute-1 nova_compute[225855]: </domain>
Jan 20 14:40:20 compute-1 nova_compute[225855]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 20 14:40:20 compute-1 nova_compute[225855]: 2026-01-20 14:40:20.797 225859 DEBUG nova.compute.manager [None req-30feda58-4032-4227-b08f-128214a95310 16c05e1ac16f428bab6b36346856235e b50ce2f25e8943e28ddf8bf69c721e75 - - default default] [instance: a96ccadd-ac1d-4040-8bcc-bebb460ee233] Preparing to wait for external event network-vif-plugged-19a89daa-770c-4c3f-970c-a9a462503b06 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 20 14:40:20 compute-1 nova_compute[225855]: 2026-01-20 14:40:20.797 225859 DEBUG oslo_concurrency.lockutils [None req-30feda58-4032-4227-b08f-128214a95310 16c05e1ac16f428bab6b36346856235e b50ce2f25e8943e28ddf8bf69c721e75 - - default default] Acquiring lock "a96ccadd-ac1d-4040-8bcc-bebb460ee233-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:40:20 compute-1 nova_compute[225855]: 2026-01-20 14:40:20.797 225859 DEBUG oslo_concurrency.lockutils [None req-30feda58-4032-4227-b08f-128214a95310 16c05e1ac16f428bab6b36346856235e b50ce2f25e8943e28ddf8bf69c721e75 - - default default] Lock "a96ccadd-ac1d-4040-8bcc-bebb460ee233-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:40:20 compute-1 nova_compute[225855]: 2026-01-20 14:40:20.798 225859 DEBUG oslo_concurrency.lockutils [None req-30feda58-4032-4227-b08f-128214a95310 16c05e1ac16f428bab6b36346856235e b50ce2f25e8943e28ddf8bf69c721e75 - - default default] Lock "a96ccadd-ac1d-4040-8bcc-bebb460ee233-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:40:20 compute-1 nova_compute[225855]: 2026-01-20 14:40:20.798 225859 DEBUG nova.virt.libvirt.vif [None req-30feda58-4032-4227-b08f-128214a95310 16c05e1ac16f428bab6b36346856235e b50ce2f25e8943e28ddf8bf69c721e75 - - default default] vif_type=ovs instance=Instance(access_ip_v4=2.2.2.2,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T14:40:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='guest-instance-1.domain.com',display_name='guest-instance-1.domain.com',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='guest-instance-1-domain-com',id=68,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEbOWirsAXlTIoevt4kXzpqBpapeg8X6KpUPmzDXnXlw7wqoLKHnmHfUIYL+FmHPJoWs+SV643EEJY+tqAkcrZlCPnWit4UcMgPhE0LGoYJ6xDnxZGwNzSj5VV503kGh5A==',key_name='tempest-keypair-1553012660',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b50ce2f25e8943e28ddf8bf69c721e75',ramdisk_id='',reservation_id='r-w26et9o4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='
virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestFqdnHostnames-626473092',owner_user_name='tempest-ServersTestFqdnHostnames-626473092-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T14:40:15Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='16c05e1ac16f428bab6b36346856235e',uuid=a96ccadd-ac1d-4040-8bcc-bebb460ee233,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "19a89daa-770c-4c3f-970c-a9a462503b06", "address": "fa:16:3e:7c:c6:66", "network": {"id": "1002188b-c6a7-4b59-9326-3a1a837a00fd", "bridge": "br-int", "label": "tempest-ServersTestFqdnHostnames-1606456759-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b50ce2f25e8943e28ddf8bf69c721e75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap19a89daa-77", "ovs_interfaceid": "19a89daa-770c-4c3f-970c-a9a462503b06", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 20 14:40:20 compute-1 nova_compute[225855]: 2026-01-20 14:40:20.799 225859 DEBUG nova.network.os_vif_util [None req-30feda58-4032-4227-b08f-128214a95310 16c05e1ac16f428bab6b36346856235e b50ce2f25e8943e28ddf8bf69c721e75 - - default default] Converting VIF {"id": "19a89daa-770c-4c3f-970c-a9a462503b06", "address": "fa:16:3e:7c:c6:66", "network": {"id": "1002188b-c6a7-4b59-9326-3a1a837a00fd", "bridge": "br-int", "label": "tempest-ServersTestFqdnHostnames-1606456759-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b50ce2f25e8943e28ddf8bf69c721e75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap19a89daa-77", "ovs_interfaceid": "19a89daa-770c-4c3f-970c-a9a462503b06", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:40:20 compute-1 nova_compute[225855]: 2026-01-20 14:40:20.799 225859 DEBUG nova.network.os_vif_util [None req-30feda58-4032-4227-b08f-128214a95310 16c05e1ac16f428bab6b36346856235e b50ce2f25e8943e28ddf8bf69c721e75 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7c:c6:66,bridge_name='br-int',has_traffic_filtering=True,id=19a89daa-770c-4c3f-970c-a9a462503b06,network=Network(1002188b-c6a7-4b59-9326-3a1a837a00fd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap19a89daa-77') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:40:20 compute-1 nova_compute[225855]: 2026-01-20 14:40:20.799 225859 DEBUG os_vif [None req-30feda58-4032-4227-b08f-128214a95310 16c05e1ac16f428bab6b36346856235e b50ce2f25e8943e28ddf8bf69c721e75 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7c:c6:66,bridge_name='br-int',has_traffic_filtering=True,id=19a89daa-770c-4c3f-970c-a9a462503b06,network=Network(1002188b-c6a7-4b59-9326-3a1a837a00fd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap19a89daa-77') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 20 14:40:20 compute-1 nova_compute[225855]: 2026-01-20 14:40:20.800 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:40:20 compute-1 nova_compute[225855]: 2026-01-20 14:40:20.800 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:40:20 compute-1 nova_compute[225855]: 2026-01-20 14:40:20.801 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:40:20 compute-1 nova_compute[225855]: 2026-01-20 14:40:20.803 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:40:20 compute-1 nova_compute[225855]: 2026-01-20 14:40:20.804 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap19a89daa-77, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:40:20 compute-1 nova_compute[225855]: 2026-01-20 14:40:20.804 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap19a89daa-77, col_values=(('external_ids', {'iface-id': '19a89daa-770c-4c3f-970c-a9a462503b06', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:7c:c6:66', 'vm-uuid': 'a96ccadd-ac1d-4040-8bcc-bebb460ee233'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:40:20 compute-1 nova_compute[225855]: 2026-01-20 14:40:20.806 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:40:20 compute-1 nova_compute[225855]: 2026-01-20 14:40:20.808 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 14:40:20 compute-1 NetworkManager[49104]: <info>  [1768920020.8087] manager: (tap19a89daa-77): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/94)
Jan 20 14:40:20 compute-1 ceph-mon[81775]: pgmap v1559: 321 pgs: 321 active+clean; 278 MiB data, 716 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 3.7 MiB/s wr, 171 op/s
Jan 20 14:40:20 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/2413042588' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:40:20 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/564781149' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:40:20 compute-1 nova_compute[225855]: 2026-01-20 14:40:20.811 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:40:20 compute-1 nova_compute[225855]: 2026-01-20 14:40:20.813 225859 INFO os_vif [None req-30feda58-4032-4227-b08f-128214a95310 16c05e1ac16f428bab6b36346856235e b50ce2f25e8943e28ddf8bf69c721e75 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7c:c6:66,bridge_name='br-int',has_traffic_filtering=True,id=19a89daa-770c-4c3f-970c-a9a462503b06,network=Network(1002188b-c6a7-4b59-9326-3a1a837a00fd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap19a89daa-77')
Jan 20 14:40:20 compute-1 nova_compute[225855]: 2026-01-20 14:40:20.893 225859 DEBUG nova.virt.libvirt.driver [None req-30feda58-4032-4227-b08f-128214a95310 16c05e1ac16f428bab6b36346856235e b50ce2f25e8943e28ddf8bf69c721e75 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:40:20 compute-1 nova_compute[225855]: 2026-01-20 14:40:20.894 225859 DEBUG nova.virt.libvirt.driver [None req-30feda58-4032-4227-b08f-128214a95310 16c05e1ac16f428bab6b36346856235e b50ce2f25e8943e28ddf8bf69c721e75 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:40:20 compute-1 nova_compute[225855]: 2026-01-20 14:40:20.894 225859 DEBUG nova.virt.libvirt.driver [None req-30feda58-4032-4227-b08f-128214a95310 16c05e1ac16f428bab6b36346856235e b50ce2f25e8943e28ddf8bf69c721e75 - - default default] No VIF found with MAC fa:16:3e:7c:c6:66, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 20 14:40:20 compute-1 nova_compute[225855]: 2026-01-20 14:40:20.894 225859 INFO nova.virt.libvirt.driver [None req-30feda58-4032-4227-b08f-128214a95310 16c05e1ac16f428bab6b36346856235e b50ce2f25e8943e28ddf8bf69c721e75 - - default default] [instance: a96ccadd-ac1d-4040-8bcc-bebb460ee233] Using config drive
Jan 20 14:40:20 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:40:20 compute-1 nova_compute[225855]: 2026-01-20 14:40:20.923 225859 DEBUG nova.storage.rbd_utils [None req-30feda58-4032-4227-b08f-128214a95310 16c05e1ac16f428bab6b36346856235e b50ce2f25e8943e28ddf8bf69c721e75 - - default default] rbd image a96ccadd-ac1d-4040-8bcc-bebb460ee233_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:40:20 compute-1 nova_compute[225855]: 2026-01-20 14:40:20.976 225859 DEBUG oslo_concurrency.lockutils [None req-de3a07a7-560e-4eaf-8d3f-454117b11be7 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Acquiring lock "interface-d5c2df9d-748f-4df2-9392-b45741975f65-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:40:20 compute-1 nova_compute[225855]: 2026-01-20 14:40:20.976 225859 DEBUG oslo_concurrency.lockutils [None req-de3a07a7-560e-4eaf-8d3f-454117b11be7 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Lock "interface-d5c2df9d-748f-4df2-9392-b45741975f65-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:40:20 compute-1 nova_compute[225855]: 2026-01-20 14:40:20.977 225859 DEBUG nova.objects.instance [None req-de3a07a7-560e-4eaf-8d3f-454117b11be7 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Lazy-loading 'flavor' on Instance uuid d5c2df9d-748f-4df2-9392-b45741975f65 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:40:21 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:40:21 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:40:21 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:40:21.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:40:21 compute-1 ovn_controller[130490]: 2026-01-20T14:40:21Z|00028|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:24:df:88 10.100.0.12
Jan 20 14:40:21 compute-1 ovn_controller[130490]: 2026-01-20T14:40:21Z|00029|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:24:df:88 10.100.0.12
Jan 20 14:40:21 compute-1 nova_compute[225855]: 2026-01-20 14:40:21.338 225859 INFO nova.virt.libvirt.driver [None req-30feda58-4032-4227-b08f-128214a95310 16c05e1ac16f428bab6b36346856235e b50ce2f25e8943e28ddf8bf69c721e75 - - default default] [instance: a96ccadd-ac1d-4040-8bcc-bebb460ee233] Creating config drive at /var/lib/nova/instances/a96ccadd-ac1d-4040-8bcc-bebb460ee233/disk.config
Jan 20 14:40:21 compute-1 nova_compute[225855]: 2026-01-20 14:40:21.345 225859 DEBUG oslo_concurrency.processutils [None req-30feda58-4032-4227-b08f-128214a95310 16c05e1ac16f428bab6b36346856235e b50ce2f25e8943e28ddf8bf69c721e75 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a96ccadd-ac1d-4040-8bcc-bebb460ee233/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpnlmyqo5i execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:40:21 compute-1 nova_compute[225855]: 2026-01-20 14:40:21.421 225859 DEBUG nova.network.neutron [req-d7555cd0-1901-4197-b5d5-bed5a20edf60 req-68a01821-fc6f-463a-9132-16e9cd02bc48 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d5c2df9d-748f-4df2-9392-b45741975f65] Updated VIF entry in instance network info cache for port db46acd4-809b-4127-ad48-870ae429b4d6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 14:40:21 compute-1 nova_compute[225855]: 2026-01-20 14:40:21.421 225859 DEBUG nova.network.neutron [req-d7555cd0-1901-4197-b5d5-bed5a20edf60 req-68a01821-fc6f-463a-9132-16e9cd02bc48 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d5c2df9d-748f-4df2-9392-b45741975f65] Updating instance_info_cache with network_info: [{"id": "b48170b0-717d-48f0-8172-742a4a8596e9", "address": "fa:16:3e:3b:35:f2", "network": {"id": "fc21b99b-4e34-422c-be05-0a440009dac4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-808285772-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.174", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3f93fd4b2154dda9f38e62334904303", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb48170b0-71", "ovs_interfaceid": "b48170b0-717d-48f0-8172-742a4a8596e9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "db46acd4-809b-4127-ad48-870ae429b4d6", "address": "fa:16:3e:24:df:88", "network": {"id": "fc21b99b-4e34-422c-be05-0a440009dac4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-808285772-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3f93fd4b2154dda9f38e62334904303", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb46acd4-80", "ovs_interfaceid": "db46acd4-809b-4127-ad48-870ae429b4d6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:40:21 compute-1 nova_compute[225855]: 2026-01-20 14:40:21.432 225859 DEBUG nova.objects.instance [None req-de3a07a7-560e-4eaf-8d3f-454117b11be7 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Lazy-loading 'pci_requests' on Instance uuid d5c2df9d-748f-4df2-9392-b45741975f65 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:40:21 compute-1 nova_compute[225855]: 2026-01-20 14:40:21.448 225859 DEBUG oslo_concurrency.lockutils [req-d7555cd0-1901-4197-b5d5-bed5a20edf60 req-68a01821-fc6f-463a-9132-16e9cd02bc48 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-d5c2df9d-748f-4df2-9392-b45741975f65" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:40:21 compute-1 nova_compute[225855]: 2026-01-20 14:40:21.450 225859 DEBUG nova.network.neutron [None req-de3a07a7-560e-4eaf-8d3f-454117b11be7 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] [instance: d5c2df9d-748f-4df2-9392-b45741975f65] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 20 14:40:21 compute-1 nova_compute[225855]: 2026-01-20 14:40:21.478 225859 DEBUG oslo_concurrency.processutils [None req-30feda58-4032-4227-b08f-128214a95310 16c05e1ac16f428bab6b36346856235e b50ce2f25e8943e28ddf8bf69c721e75 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a96ccadd-ac1d-4040-8bcc-bebb460ee233/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpnlmyqo5i" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:40:21 compute-1 nova_compute[225855]: 2026-01-20 14:40:21.505 225859 DEBUG nova.storage.rbd_utils [None req-30feda58-4032-4227-b08f-128214a95310 16c05e1ac16f428bab6b36346856235e b50ce2f25e8943e28ddf8bf69c721e75 - - default default] rbd image a96ccadd-ac1d-4040-8bcc-bebb460ee233_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:40:21 compute-1 nova_compute[225855]: 2026-01-20 14:40:21.510 225859 DEBUG oslo_concurrency.processutils [None req-30feda58-4032-4227-b08f-128214a95310 16c05e1ac16f428bab6b36346856235e b50ce2f25e8943e28ddf8bf69c721e75 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/a96ccadd-ac1d-4040-8bcc-bebb460ee233/disk.config a96ccadd-ac1d-4040-8bcc-bebb460ee233_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:40:21 compute-1 nova_compute[225855]: 2026-01-20 14:40:21.533 225859 DEBUG nova.network.neutron [req-9aa08c3b-df5d-455d-a6c7-b122e663321f req-3850f364-33e1-4a03-8ef0-db34ae5d7263 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: a96ccadd-ac1d-4040-8bcc-bebb460ee233] Updated VIF entry in instance network info cache for port 19a89daa-770c-4c3f-970c-a9a462503b06. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 14:40:21 compute-1 nova_compute[225855]: 2026-01-20 14:40:21.533 225859 DEBUG nova.network.neutron [req-9aa08c3b-df5d-455d-a6c7-b122e663321f req-3850f364-33e1-4a03-8ef0-db34ae5d7263 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: a96ccadd-ac1d-4040-8bcc-bebb460ee233] Updating instance_info_cache with network_info: [{"id": "19a89daa-770c-4c3f-970c-a9a462503b06", "address": "fa:16:3e:7c:c6:66", "network": {"id": "1002188b-c6a7-4b59-9326-3a1a837a00fd", "bridge": "br-int", "label": "tempest-ServersTestFqdnHostnames-1606456759-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b50ce2f25e8943e28ddf8bf69c721e75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap19a89daa-77", "ovs_interfaceid": "19a89daa-770c-4c3f-970c-a9a462503b06", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:40:21 compute-1 nova_compute[225855]: 2026-01-20 14:40:21.552 225859 DEBUG oslo_concurrency.lockutils [req-9aa08c3b-df5d-455d-a6c7-b122e663321f req-3850f364-33e1-4a03-8ef0-db34ae5d7263 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-a96ccadd-ac1d-4040-8bcc-bebb460ee233" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:40:21 compute-1 nova_compute[225855]: 2026-01-20 14:40:21.638 225859 DEBUG nova.policy [None req-de3a07a7-560e-4eaf-8d3f-454117b11be7 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'c8a9fb458d27434495a77a94827b6097', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'e3f93fd4b2154dda9f38e62334904303', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 20 14:40:21 compute-1 nova_compute[225855]: 2026-01-20 14:40:21.861 225859 DEBUG nova.compute.manager [req-08521e48-03a6-4901-9492-78424ce68bbd req-4dac17cb-4548-4fa9-abc6-6a883c1f4e8a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d5c2df9d-748f-4df2-9392-b45741975f65] Received event network-vif-plugged-db46acd4-809b-4127-ad48-870ae429b4d6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:40:21 compute-1 nova_compute[225855]: 2026-01-20 14:40:21.862 225859 DEBUG oslo_concurrency.lockutils [req-08521e48-03a6-4901-9492-78424ce68bbd req-4dac17cb-4548-4fa9-abc6-6a883c1f4e8a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "d5c2df9d-748f-4df2-9392-b45741975f65-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:40:21 compute-1 nova_compute[225855]: 2026-01-20 14:40:21.862 225859 DEBUG oslo_concurrency.lockutils [req-08521e48-03a6-4901-9492-78424ce68bbd req-4dac17cb-4548-4fa9-abc6-6a883c1f4e8a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "d5c2df9d-748f-4df2-9392-b45741975f65-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:40:21 compute-1 nova_compute[225855]: 2026-01-20 14:40:21.862 225859 DEBUG oslo_concurrency.lockutils [req-08521e48-03a6-4901-9492-78424ce68bbd req-4dac17cb-4548-4fa9-abc6-6a883c1f4e8a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "d5c2df9d-748f-4df2-9392-b45741975f65-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:40:21 compute-1 nova_compute[225855]: 2026-01-20 14:40:21.862 225859 DEBUG nova.compute.manager [req-08521e48-03a6-4901-9492-78424ce68bbd req-4dac17cb-4548-4fa9-abc6-6a883c1f4e8a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d5c2df9d-748f-4df2-9392-b45741975f65] No waiting events found dispatching network-vif-plugged-db46acd4-809b-4127-ad48-870ae429b4d6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:40:21 compute-1 nova_compute[225855]: 2026-01-20 14:40:21.863 225859 WARNING nova.compute.manager [req-08521e48-03a6-4901-9492-78424ce68bbd req-4dac17cb-4548-4fa9-abc6-6a883c1f4e8a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d5c2df9d-748f-4df2-9392-b45741975f65] Received unexpected event network-vif-plugged-db46acd4-809b-4127-ad48-870ae429b4d6 for instance with vm_state active and task_state None.
Jan 20 14:40:21 compute-1 nova_compute[225855]: 2026-01-20 14:40:21.863 225859 DEBUG nova.compute.manager [req-08521e48-03a6-4901-9492-78424ce68bbd req-4dac17cb-4548-4fa9-abc6-6a883c1f4e8a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d5c2df9d-748f-4df2-9392-b45741975f65] Received event network-vif-plugged-db46acd4-809b-4127-ad48-870ae429b4d6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:40:21 compute-1 nova_compute[225855]: 2026-01-20 14:40:21.863 225859 DEBUG oslo_concurrency.lockutils [req-08521e48-03a6-4901-9492-78424ce68bbd req-4dac17cb-4548-4fa9-abc6-6a883c1f4e8a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "d5c2df9d-748f-4df2-9392-b45741975f65-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:40:21 compute-1 nova_compute[225855]: 2026-01-20 14:40:21.866 225859 DEBUG oslo_concurrency.lockutils [req-08521e48-03a6-4901-9492-78424ce68bbd req-4dac17cb-4548-4fa9-abc6-6a883c1f4e8a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "d5c2df9d-748f-4df2-9392-b45741975f65-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:40:21 compute-1 nova_compute[225855]: 2026-01-20 14:40:21.866 225859 DEBUG oslo_concurrency.lockutils [req-08521e48-03a6-4901-9492-78424ce68bbd req-4dac17cb-4548-4fa9-abc6-6a883c1f4e8a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "d5c2df9d-748f-4df2-9392-b45741975f65-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:40:21 compute-1 nova_compute[225855]: 2026-01-20 14:40:21.867 225859 DEBUG nova.compute.manager [req-08521e48-03a6-4901-9492-78424ce68bbd req-4dac17cb-4548-4fa9-abc6-6a883c1f4e8a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d5c2df9d-748f-4df2-9392-b45741975f65] No waiting events found dispatching network-vif-plugged-db46acd4-809b-4127-ad48-870ae429b4d6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:40:21 compute-1 nova_compute[225855]: 2026-01-20 14:40:21.867 225859 WARNING nova.compute.manager [req-08521e48-03a6-4901-9492-78424ce68bbd req-4dac17cb-4548-4fa9-abc6-6a883c1f4e8a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d5c2df9d-748f-4df2-9392-b45741975f65] Received unexpected event network-vif-plugged-db46acd4-809b-4127-ad48-870ae429b4d6 for instance with vm_state active and task_state None.
Jan 20 14:40:21 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:40:21 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:40:21 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:40:21.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:40:22 compute-1 nova_compute[225855]: 2026-01-20 14:40:22.192 225859 DEBUG nova.network.neutron [None req-de3a07a7-560e-4eaf-8d3f-454117b11be7 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] [instance: d5c2df9d-748f-4df2-9392-b45741975f65] Successfully created port: 5fd7b3ad-0cf1-4294-b552-6141c8ee85bd _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 20 14:40:22 compute-1 nova_compute[225855]: 2026-01-20 14:40:22.225 225859 DEBUG oslo_concurrency.processutils [None req-30feda58-4032-4227-b08f-128214a95310 16c05e1ac16f428bab6b36346856235e b50ce2f25e8943e28ddf8bf69c721e75 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/a96ccadd-ac1d-4040-8bcc-bebb460ee233/disk.config a96ccadd-ac1d-4040-8bcc-bebb460ee233_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.716s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:40:22 compute-1 nova_compute[225855]: 2026-01-20 14:40:22.226 225859 INFO nova.virt.libvirt.driver [None req-30feda58-4032-4227-b08f-128214a95310 16c05e1ac16f428bab6b36346856235e b50ce2f25e8943e28ddf8bf69c721e75 - - default default] [instance: a96ccadd-ac1d-4040-8bcc-bebb460ee233] Deleting local config drive /var/lib/nova/instances/a96ccadd-ac1d-4040-8bcc-bebb460ee233/disk.config because it was imported into RBD.
Jan 20 14:40:22 compute-1 kernel: tap19a89daa-77: entered promiscuous mode
Jan 20 14:40:22 compute-1 NetworkManager[49104]: <info>  [1768920022.2876] manager: (tap19a89daa-77): new Tun device (/org/freedesktop/NetworkManager/Devices/95)
Jan 20 14:40:22 compute-1 NetworkManager[49104]: <info>  [1768920022.3120] device (tap19a89daa-77): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 20 14:40:22 compute-1 ovn_controller[130490]: 2026-01-20T14:40:22Z|00203|binding|INFO|Claiming lport 19a89daa-770c-4c3f-970c-a9a462503b06 for this chassis.
Jan 20 14:40:22 compute-1 ovn_controller[130490]: 2026-01-20T14:40:22Z|00204|binding|INFO|19a89daa-770c-4c3f-970c-a9a462503b06: Claiming fa:16:3e:7c:c6:66 10.100.0.12
Jan 20 14:40:22 compute-1 nova_compute[225855]: 2026-01-20 14:40:22.312 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:40:22 compute-1 NetworkManager[49104]: <info>  [1768920022.3148] device (tap19a89daa-77): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 20 14:40:22 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:40:22.318 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7c:c6:66 10.100.0.12'], port_security=['fa:16:3e:7c:c6:66 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'a96ccadd-ac1d-4040-8bcc-bebb460ee233', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1002188b-c6a7-4b59-9326-3a1a837a00fd', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b50ce2f25e8943e28ddf8bf69c721e75', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b713641f-ba5a-4dd0-8917-baab1ca61007', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=01ad0ee9-ef23-4072-99f7-406bf8559611, chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], logical_port=19a89daa-770c-4c3f-970c-a9a462503b06) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:40:22 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:40:22.319 140354 INFO neutron.agent.ovn.metadata.agent [-] Port 19a89daa-770c-4c3f-970c-a9a462503b06 in datapath 1002188b-c6a7-4b59-9326-3a1a837a00fd bound to our chassis
Jan 20 14:40:22 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:40:22.321 140354 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1002188b-c6a7-4b59-9326-3a1a837a00fd
Jan 20 14:40:22 compute-1 ovn_controller[130490]: 2026-01-20T14:40:22Z|00205|binding|INFO|Setting lport 19a89daa-770c-4c3f-970c-a9a462503b06 ovn-installed in OVS
Jan 20 14:40:22 compute-1 ovn_controller[130490]: 2026-01-20T14:40:22Z|00206|binding|INFO|Setting lport 19a89daa-770c-4c3f-970c-a9a462503b06 up in Southbound
Jan 20 14:40:22 compute-1 nova_compute[225855]: 2026-01-20 14:40:22.331 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:40:22 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:40:22.331 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[68768ec4-5faf-45cc-8ef9-98e2e72ab28f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:40:22 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:40:22.331 140354 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap1002188b-c1 in ovnmeta-1002188b-c6a7-4b59-9326-3a1a837a00fd namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 20 14:40:22 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:40:22.333 229707 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap1002188b-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 20 14:40:22 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:40:22.334 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[cf763fe0-7daf-4ff8-9184-4227016539d6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:40:22 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:40:22.334 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[75d3cae2-1f87-4e12-aba9-6a35c4b1dc37]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:40:22 compute-1 nova_compute[225855]: 2026-01-20 14:40:22.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:40:22 compute-1 systemd-machined[194361]: New machine qemu-30-instance-00000044.
Jan 20 14:40:22 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:40:22.352 140466 DEBUG oslo.privsep.daemon [-] privsep: reply[fffa861e-2f6f-4b7d-8b03-98460518d36d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:40:22 compute-1 systemd[1]: Started Virtual Machine qemu-30-instance-00000044.
Jan 20 14:40:22 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:40:22.381 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[f20c6513-3daa-4c5f-808d-10655ee99497]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:40:22 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:40:22.412 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[cc856307-acf6-4afb-a125-b8e7bf691c48]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:40:22 compute-1 NetworkManager[49104]: <info>  [1768920022.4192] manager: (tap1002188b-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/96)
Jan 20 14:40:22 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:40:22.418 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[6bcfe5a6-97c4-4769-8873-e3a719febdd7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:40:22 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:40:22.448 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[a9f72cf4-db2f-4c15-bbce-8d8f4d74b2fc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:40:22 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:40:22.452 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[b2de6250-bf99-46a9-96eb-b9bc334db59b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:40:22 compute-1 NetworkManager[49104]: <info>  [1768920022.4747] device (tap1002188b-c0): carrier: link connected
Jan 20 14:40:22 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:40:22.480 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[d0c8fec8-9633-46e3-a49c-da00c34f27ab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:40:22 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:40:22.494 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[07dfeb92-1cf1-4002-b9bf-f6a409f6be4f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1002188b-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e8:44:3e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 58], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 506140, 'reachable_time': 26689, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 252741, 'error': None, 'target': 'ovnmeta-1002188b-c6a7-4b59-9326-3a1a837a00fd', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:40:22 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:40:22.511 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[77715749-e12e-46d7-8b4e-67cdd88dc28c]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fee8:443e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 506140, 'tstamp': 506140}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 252742, 'error': None, 'target': 'ovnmeta-1002188b-c6a7-4b59-9326-3a1a837a00fd', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:40:22 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:40:22.528 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[5fe43366-cbb2-4664-8c27-5ac23e68341e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1002188b-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e8:44:3e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 58], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 506140, 'reachable_time': 26689, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 252743, 'error': None, 'target': 'ovnmeta-1002188b-c6a7-4b59-9326-3a1a837a00fd', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:40:22 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:40:22.564 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[78cf3aa7-8ad8-4ab0-8f48-0a30c5e16824]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:40:22 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:40:22.631 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[c62c9dd0-3031-4bec-90ea-550fd2664d44]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:40:22 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:40:22.632 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1002188b-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:40:22 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:40:22.633 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:40:22 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:40:22.633 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1002188b-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:40:22 compute-1 nova_compute[225855]: 2026-01-20 14:40:22.633 225859 DEBUG nova.compute.manager [req-22bed124-98d5-413c-b58b-10aefd809095 req-6e81b693-7275-4900-8236-aec7c1c2a0e2 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: a96ccadd-ac1d-4040-8bcc-bebb460ee233] Received event network-vif-plugged-19a89daa-770c-4c3f-970c-a9a462503b06 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:40:22 compute-1 nova_compute[225855]: 2026-01-20 14:40:22.633 225859 DEBUG oslo_concurrency.lockutils [req-22bed124-98d5-413c-b58b-10aefd809095 req-6e81b693-7275-4900-8236-aec7c1c2a0e2 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "a96ccadd-ac1d-4040-8bcc-bebb460ee233-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:40:22 compute-1 nova_compute[225855]: 2026-01-20 14:40:22.633 225859 DEBUG oslo_concurrency.lockutils [req-22bed124-98d5-413c-b58b-10aefd809095 req-6e81b693-7275-4900-8236-aec7c1c2a0e2 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "a96ccadd-ac1d-4040-8bcc-bebb460ee233-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:40:22 compute-1 nova_compute[225855]: 2026-01-20 14:40:22.634 225859 DEBUG oslo_concurrency.lockutils [req-22bed124-98d5-413c-b58b-10aefd809095 req-6e81b693-7275-4900-8236-aec7c1c2a0e2 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "a96ccadd-ac1d-4040-8bcc-bebb460ee233-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:40:22 compute-1 nova_compute[225855]: 2026-01-20 14:40:22.634 225859 DEBUG nova.compute.manager [req-22bed124-98d5-413c-b58b-10aefd809095 req-6e81b693-7275-4900-8236-aec7c1c2a0e2 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: a96ccadd-ac1d-4040-8bcc-bebb460ee233] Processing event network-vif-plugged-19a89daa-770c-4c3f-970c-a9a462503b06 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 20 14:40:22 compute-1 nova_compute[225855]: 2026-01-20 14:40:22.634 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:40:22 compute-1 kernel: tap1002188b-c0: entered promiscuous mode
Jan 20 14:40:22 compute-1 NetworkManager[49104]: <info>  [1768920022.6354] manager: (tap1002188b-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/97)
Jan 20 14:40:22 compute-1 nova_compute[225855]: 2026-01-20 14:40:22.637 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:40:22 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:40:22.639 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1002188b-c0, col_values=(('external_ids', {'iface-id': '8cff17f5-b792-4e6f-8f1e-6c48322af961'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:40:22 compute-1 nova_compute[225855]: 2026-01-20 14:40:22.640 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:40:22 compute-1 ovn_controller[130490]: 2026-01-20T14:40:22Z|00207|binding|INFO|Releasing lport 8cff17f5-b792-4e6f-8f1e-6c48322af961 from this chassis (sb_readonly=0)
Jan 20 14:40:22 compute-1 nova_compute[225855]: 2026-01-20 14:40:22.656 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:40:22 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:40:22.658 140354 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/1002188b-c6a7-4b59-9326-3a1a837a00fd.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/1002188b-c6a7-4b59-9326-3a1a837a00fd.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 20 14:40:22 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:40:22.659 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[03c88e3e-8394-44ba-a481-46d61cee9f38]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:40:22 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:40:22.659 140354 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 20 14:40:22 compute-1 ovn_metadata_agent[140349]: global
Jan 20 14:40:22 compute-1 ovn_metadata_agent[140349]:     log         /dev/log local0 debug
Jan 20 14:40:22 compute-1 ovn_metadata_agent[140349]:     log-tag     haproxy-metadata-proxy-1002188b-c6a7-4b59-9326-3a1a837a00fd
Jan 20 14:40:22 compute-1 ovn_metadata_agent[140349]:     user        root
Jan 20 14:40:22 compute-1 ovn_metadata_agent[140349]:     group       root
Jan 20 14:40:22 compute-1 ovn_metadata_agent[140349]:     maxconn     1024
Jan 20 14:40:22 compute-1 ovn_metadata_agent[140349]:     pidfile     /var/lib/neutron/external/pids/1002188b-c6a7-4b59-9326-3a1a837a00fd.pid.haproxy
Jan 20 14:40:22 compute-1 ovn_metadata_agent[140349]:     daemon
Jan 20 14:40:22 compute-1 ovn_metadata_agent[140349]: 
Jan 20 14:40:22 compute-1 ovn_metadata_agent[140349]: defaults
Jan 20 14:40:22 compute-1 ovn_metadata_agent[140349]:     log global
Jan 20 14:40:22 compute-1 ovn_metadata_agent[140349]:     mode http
Jan 20 14:40:22 compute-1 ovn_metadata_agent[140349]:     option httplog
Jan 20 14:40:22 compute-1 ovn_metadata_agent[140349]:     option dontlognull
Jan 20 14:40:22 compute-1 ovn_metadata_agent[140349]:     option http-server-close
Jan 20 14:40:22 compute-1 ovn_metadata_agent[140349]:     option forwardfor
Jan 20 14:40:22 compute-1 ovn_metadata_agent[140349]:     retries                 3
Jan 20 14:40:22 compute-1 ovn_metadata_agent[140349]:     timeout http-request    30s
Jan 20 14:40:22 compute-1 ovn_metadata_agent[140349]:     timeout connect         30s
Jan 20 14:40:22 compute-1 ovn_metadata_agent[140349]:     timeout client          32s
Jan 20 14:40:22 compute-1 ovn_metadata_agent[140349]:     timeout server          32s
Jan 20 14:40:22 compute-1 ovn_metadata_agent[140349]:     timeout http-keep-alive 30s
Jan 20 14:40:22 compute-1 ovn_metadata_agent[140349]: 
Jan 20 14:40:22 compute-1 ovn_metadata_agent[140349]: 
Jan 20 14:40:22 compute-1 ovn_metadata_agent[140349]: listen listener
Jan 20 14:40:22 compute-1 ovn_metadata_agent[140349]:     bind 169.254.169.254:80
Jan 20 14:40:22 compute-1 ovn_metadata_agent[140349]:     server metadata /var/lib/neutron/metadata_proxy
Jan 20 14:40:22 compute-1 ovn_metadata_agent[140349]:     http-request add-header X-OVN-Network-ID 1002188b-c6a7-4b59-9326-3a1a837a00fd
Jan 20 14:40:22 compute-1 ovn_metadata_agent[140349]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 20 14:40:22 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:40:22.660 140354 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-1002188b-c6a7-4b59-9326-3a1a837a00fd', 'env', 'PROCESS_TAG=haproxy-1002188b-c6a7-4b59-9326-3a1a837a00fd', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/1002188b-c6a7-4b59-9326-3a1a837a00fd.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 20 14:40:22 compute-1 ceph-mon[81775]: pgmap v1560: 321 pgs: 321 active+clean; 293 MiB data, 721 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 3.7 MiB/s wr, 182 op/s
Jan 20 14:40:22 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/488662490' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:40:23 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:40:23 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:40:23 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:40:23.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:40:23 compute-1 nova_compute[225855]: 2026-01-20 14:40:23.060 225859 DEBUG nova.network.neutron [None req-de3a07a7-560e-4eaf-8d3f-454117b11be7 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] [instance: d5c2df9d-748f-4df2-9392-b45741975f65] Successfully updated port: 5fd7b3ad-0cf1-4294-b552-6141c8ee85bd _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 20 14:40:23 compute-1 nova_compute[225855]: 2026-01-20 14:40:23.074 225859 DEBUG oslo_concurrency.lockutils [None req-de3a07a7-560e-4eaf-8d3f-454117b11be7 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Acquiring lock "refresh_cache-d5c2df9d-748f-4df2-9392-b45741975f65" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:40:23 compute-1 nova_compute[225855]: 2026-01-20 14:40:23.074 225859 DEBUG oslo_concurrency.lockutils [None req-de3a07a7-560e-4eaf-8d3f-454117b11be7 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Acquired lock "refresh_cache-d5c2df9d-748f-4df2-9392-b45741975f65" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:40:23 compute-1 nova_compute[225855]: 2026-01-20 14:40:23.074 225859 DEBUG nova.network.neutron [None req-de3a07a7-560e-4eaf-8d3f-454117b11be7 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] [instance: d5c2df9d-748f-4df2-9392-b45741975f65] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 14:40:23 compute-1 podman[252800]: 2026-01-20 14:40:22.987161504 +0000 UTC m=+0.021448409 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 20 14:40:23 compute-1 podman[252800]: 2026-01-20 14:40:23.092338132 +0000 UTC m=+0.126625037 container create 38da31d8f51b431b660e3b2370ac55baf90389fb1943ecc974f1f1032615fe8b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1002188b-c6a7-4b59-9326-3a1a837a00fd, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 20 14:40:23 compute-1 systemd[1]: Started libpod-conmon-38da31d8f51b431b660e3b2370ac55baf90389fb1943ecc974f1f1032615fe8b.scope.
Jan 20 14:40:23 compute-1 nova_compute[225855]: 2026-01-20 14:40:23.159 225859 DEBUG nova.compute.manager [None req-30feda58-4032-4227-b08f-128214a95310 16c05e1ac16f428bab6b36346856235e b50ce2f25e8943e28ddf8bf69c721e75 - - default default] [instance: a96ccadd-ac1d-4040-8bcc-bebb460ee233] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 20 14:40:23 compute-1 nova_compute[225855]: 2026-01-20 14:40:23.162 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768920023.1591918, a96ccadd-ac1d-4040-8bcc-bebb460ee233 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:40:23 compute-1 nova_compute[225855]: 2026-01-20 14:40:23.163 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: a96ccadd-ac1d-4040-8bcc-bebb460ee233] VM Started (Lifecycle Event)
Jan 20 14:40:23 compute-1 nova_compute[225855]: 2026-01-20 14:40:23.167 225859 DEBUG nova.virt.libvirt.driver [None req-30feda58-4032-4227-b08f-128214a95310 16c05e1ac16f428bab6b36346856235e b50ce2f25e8943e28ddf8bf69c721e75 - - default default] [instance: a96ccadd-ac1d-4040-8bcc-bebb460ee233] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 20 14:40:23 compute-1 nova_compute[225855]: 2026-01-20 14:40:23.171 225859 INFO nova.virt.libvirt.driver [-] [instance: a96ccadd-ac1d-4040-8bcc-bebb460ee233] Instance spawned successfully.
Jan 20 14:40:23 compute-1 nova_compute[225855]: 2026-01-20 14:40:23.172 225859 DEBUG nova.virt.libvirt.driver [None req-30feda58-4032-4227-b08f-128214a95310 16c05e1ac16f428bab6b36346856235e b50ce2f25e8943e28ddf8bf69c721e75 - - default default] [instance: a96ccadd-ac1d-4040-8bcc-bebb460ee233] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 20 14:40:23 compute-1 systemd[1]: Started libcrun container.
Jan 20 14:40:23 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc585d326ff1df1a671eac1b099fad631006b99f0da45b11e11943fba074a4f1/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 20 14:40:23 compute-1 nova_compute[225855]: 2026-01-20 14:40:23.190 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: a96ccadd-ac1d-4040-8bcc-bebb460ee233] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:40:23 compute-1 nova_compute[225855]: 2026-01-20 14:40:23.196 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: a96ccadd-ac1d-4040-8bcc-bebb460ee233] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:40:23 compute-1 nova_compute[225855]: 2026-01-20 14:40:23.200 225859 DEBUG nova.virt.libvirt.driver [None req-30feda58-4032-4227-b08f-128214a95310 16c05e1ac16f428bab6b36346856235e b50ce2f25e8943e28ddf8bf69c721e75 - - default default] [instance: a96ccadd-ac1d-4040-8bcc-bebb460ee233] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:40:23 compute-1 nova_compute[225855]: 2026-01-20 14:40:23.200 225859 DEBUG nova.virt.libvirt.driver [None req-30feda58-4032-4227-b08f-128214a95310 16c05e1ac16f428bab6b36346856235e b50ce2f25e8943e28ddf8bf69c721e75 - - default default] [instance: a96ccadd-ac1d-4040-8bcc-bebb460ee233] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:40:23 compute-1 nova_compute[225855]: 2026-01-20 14:40:23.201 225859 DEBUG nova.virt.libvirt.driver [None req-30feda58-4032-4227-b08f-128214a95310 16c05e1ac16f428bab6b36346856235e b50ce2f25e8943e28ddf8bf69c721e75 - - default default] [instance: a96ccadd-ac1d-4040-8bcc-bebb460ee233] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:40:23 compute-1 nova_compute[225855]: 2026-01-20 14:40:23.201 225859 DEBUG nova.virt.libvirt.driver [None req-30feda58-4032-4227-b08f-128214a95310 16c05e1ac16f428bab6b36346856235e b50ce2f25e8943e28ddf8bf69c721e75 - - default default] [instance: a96ccadd-ac1d-4040-8bcc-bebb460ee233] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:40:23 compute-1 nova_compute[225855]: 2026-01-20 14:40:23.201 225859 DEBUG nova.virt.libvirt.driver [None req-30feda58-4032-4227-b08f-128214a95310 16c05e1ac16f428bab6b36346856235e b50ce2f25e8943e28ddf8bf69c721e75 - - default default] [instance: a96ccadd-ac1d-4040-8bcc-bebb460ee233] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:40:23 compute-1 nova_compute[225855]: 2026-01-20 14:40:23.202 225859 DEBUG nova.virt.libvirt.driver [None req-30feda58-4032-4227-b08f-128214a95310 16c05e1ac16f428bab6b36346856235e b50ce2f25e8943e28ddf8bf69c721e75 - - default default] [instance: a96ccadd-ac1d-4040-8bcc-bebb460ee233] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:40:23 compute-1 podman[252800]: 2026-01-20 14:40:23.214015378 +0000 UTC m=+0.248302373 container init 38da31d8f51b431b660e3b2370ac55baf90389fb1943ecc974f1f1032615fe8b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1002188b-c6a7-4b59-9326-3a1a837a00fd, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 20 14:40:23 compute-1 nova_compute[225855]: 2026-01-20 14:40:23.215 225859 WARNING nova.network.neutron [None req-de3a07a7-560e-4eaf-8d3f-454117b11be7 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] [instance: d5c2df9d-748f-4df2-9392-b45741975f65] fc21b99b-4e34-422c-be05-0a440009dac4 already exists in list: networks containing: ['fc21b99b-4e34-422c-be05-0a440009dac4']. ignoring it
Jan 20 14:40:23 compute-1 nova_compute[225855]: 2026-01-20 14:40:23.216 225859 WARNING nova.network.neutron [None req-de3a07a7-560e-4eaf-8d3f-454117b11be7 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] [instance: d5c2df9d-748f-4df2-9392-b45741975f65] fc21b99b-4e34-422c-be05-0a440009dac4 already exists in list: networks containing: ['fc21b99b-4e34-422c-be05-0a440009dac4']. ignoring it
Jan 20 14:40:23 compute-1 podman[252800]: 2026-01-20 14:40:23.219299748 +0000 UTC m=+0.253586643 container start 38da31d8f51b431b660e3b2370ac55baf90389fb1943ecc974f1f1032615fe8b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1002188b-c6a7-4b59-9326-3a1a837a00fd, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 20 14:40:23 compute-1 nova_compute[225855]: 2026-01-20 14:40:23.227 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: a96ccadd-ac1d-4040-8bcc-bebb460ee233] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 14:40:23 compute-1 nova_compute[225855]: 2026-01-20 14:40:23.227 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768920023.1603715, a96ccadd-ac1d-4040-8bcc-bebb460ee233 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:40:23 compute-1 nova_compute[225855]: 2026-01-20 14:40:23.228 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: a96ccadd-ac1d-4040-8bcc-bebb460ee233] VM Paused (Lifecycle Event)
Jan 20 14:40:23 compute-1 neutron-haproxy-ovnmeta-1002188b-c6a7-4b59-9326-3a1a837a00fd[252832]: [NOTICE]   (252836) : New worker (252838) forked
Jan 20 14:40:23 compute-1 neutron-haproxy-ovnmeta-1002188b-c6a7-4b59-9326-3a1a837a00fd[252832]: [NOTICE]   (252836) : Loading success.
Jan 20 14:40:23 compute-1 nova_compute[225855]: 2026-01-20 14:40:23.251 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: a96ccadd-ac1d-4040-8bcc-bebb460ee233] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:40:23 compute-1 nova_compute[225855]: 2026-01-20 14:40:23.255 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768920023.1641107, a96ccadd-ac1d-4040-8bcc-bebb460ee233 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:40:23 compute-1 nova_compute[225855]: 2026-01-20 14:40:23.255 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: a96ccadd-ac1d-4040-8bcc-bebb460ee233] VM Resumed (Lifecycle Event)
Jan 20 14:40:23 compute-1 nova_compute[225855]: 2026-01-20 14:40:23.259 225859 INFO nova.compute.manager [None req-30feda58-4032-4227-b08f-128214a95310 16c05e1ac16f428bab6b36346856235e b50ce2f25e8943e28ddf8bf69c721e75 - - default default] [instance: a96ccadd-ac1d-4040-8bcc-bebb460ee233] Took 7.64 seconds to spawn the instance on the hypervisor.
Jan 20 14:40:23 compute-1 nova_compute[225855]: 2026-01-20 14:40:23.259 225859 DEBUG nova.compute.manager [None req-30feda58-4032-4227-b08f-128214a95310 16c05e1ac16f428bab6b36346856235e b50ce2f25e8943e28ddf8bf69c721e75 - - default default] [instance: a96ccadd-ac1d-4040-8bcc-bebb460ee233] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:40:23 compute-1 nova_compute[225855]: 2026-01-20 14:40:23.269 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: a96ccadd-ac1d-4040-8bcc-bebb460ee233] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:40:23 compute-1 nova_compute[225855]: 2026-01-20 14:40:23.273 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: a96ccadd-ac1d-4040-8bcc-bebb460ee233] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:40:23 compute-1 nova_compute[225855]: 2026-01-20 14:40:23.303 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: a96ccadd-ac1d-4040-8bcc-bebb460ee233] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 14:40:23 compute-1 nova_compute[225855]: 2026-01-20 14:40:23.323 225859 INFO nova.compute.manager [None req-30feda58-4032-4227-b08f-128214a95310 16c05e1ac16f428bab6b36346856235e b50ce2f25e8943e28ddf8bf69c721e75 - - default default] [instance: a96ccadd-ac1d-4040-8bcc-bebb460ee233] Took 8.52 seconds to build instance.
Jan 20 14:40:23 compute-1 nova_compute[225855]: 2026-01-20 14:40:23.338 225859 DEBUG oslo_concurrency.lockutils [None req-30feda58-4032-4227-b08f-128214a95310 16c05e1ac16f428bab6b36346856235e b50ce2f25e8943e28ddf8bf69c721e75 - - default default] Lock "a96ccadd-ac1d-4040-8bcc-bebb460ee233" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.619s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:40:23 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:40:23 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:40:23 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:40:23.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:40:24 compute-1 ceph-mon[81775]: pgmap v1561: 321 pgs: 321 active+clean; 293 MiB data, 721 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 117 op/s
Jan 20 14:40:25 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:40:25 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:40:25 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:40:25.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:40:25 compute-1 nova_compute[225855]: 2026-01-20 14:40:25.083 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:40:25 compute-1 nova_compute[225855]: 2026-01-20 14:40:25.146 225859 DEBUG nova.compute.manager [req-0099f7ca-40af-4b82-8d82-e1f9572f203a req-ff92b3ce-8adf-4275-a7bf-0c706845dd52 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: a96ccadd-ac1d-4040-8bcc-bebb460ee233] Received event network-vif-plugged-19a89daa-770c-4c3f-970c-a9a462503b06 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:40:25 compute-1 nova_compute[225855]: 2026-01-20 14:40:25.147 225859 DEBUG oslo_concurrency.lockutils [req-0099f7ca-40af-4b82-8d82-e1f9572f203a req-ff92b3ce-8adf-4275-a7bf-0c706845dd52 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "a96ccadd-ac1d-4040-8bcc-bebb460ee233-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:40:25 compute-1 nova_compute[225855]: 2026-01-20 14:40:25.148 225859 DEBUG oslo_concurrency.lockutils [req-0099f7ca-40af-4b82-8d82-e1f9572f203a req-ff92b3ce-8adf-4275-a7bf-0c706845dd52 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "a96ccadd-ac1d-4040-8bcc-bebb460ee233-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:40:25 compute-1 nova_compute[225855]: 2026-01-20 14:40:25.149 225859 DEBUG oslo_concurrency.lockutils [req-0099f7ca-40af-4b82-8d82-e1f9572f203a req-ff92b3ce-8adf-4275-a7bf-0c706845dd52 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "a96ccadd-ac1d-4040-8bcc-bebb460ee233-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:40:25 compute-1 nova_compute[225855]: 2026-01-20 14:40:25.149 225859 DEBUG nova.compute.manager [req-0099f7ca-40af-4b82-8d82-e1f9572f203a req-ff92b3ce-8adf-4275-a7bf-0c706845dd52 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: a96ccadd-ac1d-4040-8bcc-bebb460ee233] No waiting events found dispatching network-vif-plugged-19a89daa-770c-4c3f-970c-a9a462503b06 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:40:25 compute-1 nova_compute[225855]: 2026-01-20 14:40:25.151 225859 WARNING nova.compute.manager [req-0099f7ca-40af-4b82-8d82-e1f9572f203a req-ff92b3ce-8adf-4275-a7bf-0c706845dd52 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: a96ccadd-ac1d-4040-8bcc-bebb460ee233] Received unexpected event network-vif-plugged-19a89daa-770c-4c3f-970c-a9a462503b06 for instance with vm_state active and task_state None.
Jan 20 14:40:25 compute-1 nova_compute[225855]: 2026-01-20 14:40:25.152 225859 DEBUG nova.compute.manager [req-0099f7ca-40af-4b82-8d82-e1f9572f203a req-ff92b3ce-8adf-4275-a7bf-0c706845dd52 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d5c2df9d-748f-4df2-9392-b45741975f65] Received event network-changed-5fd7b3ad-0cf1-4294-b552-6141c8ee85bd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:40:25 compute-1 nova_compute[225855]: 2026-01-20 14:40:25.153 225859 DEBUG nova.compute.manager [req-0099f7ca-40af-4b82-8d82-e1f9572f203a req-ff92b3ce-8adf-4275-a7bf-0c706845dd52 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d5c2df9d-748f-4df2-9392-b45741975f65] Refreshing instance network info cache due to event network-changed-5fd7b3ad-0cf1-4294-b552-6141c8ee85bd. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 14:40:25 compute-1 nova_compute[225855]: 2026-01-20 14:40:25.154 225859 DEBUG oslo_concurrency.lockutils [req-0099f7ca-40af-4b82-8d82-e1f9572f203a req-ff92b3ce-8adf-4275-a7bf-0c706845dd52 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-d5c2df9d-748f-4df2-9392-b45741975f65" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:40:25 compute-1 sudo[252848]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:40:25 compute-1 sudo[252848]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:40:25 compute-1 sudo[252848]: pam_unix(sudo:session): session closed for user root
Jan 20 14:40:25 compute-1 sudo[252873]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:40:25 compute-1 sudo[252873]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:40:25 compute-1 sudo[252873]: pam_unix(sudo:session): session closed for user root
Jan 20 14:40:25 compute-1 nova_compute[225855]: 2026-01-20 14:40:25.847 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:40:25 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:40:25 compute-1 ceph-mon[81775]: pgmap v1562: 321 pgs: 321 active+clean; 299 MiB data, 728 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 2.5 MiB/s wr, 157 op/s
Jan 20 14:40:25 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:40:25 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:40:25 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:40:25.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:40:26 compute-1 nova_compute[225855]: 2026-01-20 14:40:26.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:40:26 compute-1 nova_compute[225855]: 2026-01-20 14:40:26.346 225859 DEBUG nova.compute.manager [req-384dc47d-bdd2-4048-ba5c-cefdac094c99 req-03abf074-ef8e-429f-80fc-ad9e57739eb8 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: a96ccadd-ac1d-4040-8bcc-bebb460ee233] Received event network-changed-19a89daa-770c-4c3f-970c-a9a462503b06 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:40:26 compute-1 nova_compute[225855]: 2026-01-20 14:40:26.347 225859 DEBUG nova.compute.manager [req-384dc47d-bdd2-4048-ba5c-cefdac094c99 req-03abf074-ef8e-429f-80fc-ad9e57739eb8 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: a96ccadd-ac1d-4040-8bcc-bebb460ee233] Refreshing instance network info cache due to event network-changed-19a89daa-770c-4c3f-970c-a9a462503b06. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 14:40:26 compute-1 nova_compute[225855]: 2026-01-20 14:40:26.347 225859 DEBUG oslo_concurrency.lockutils [req-384dc47d-bdd2-4048-ba5c-cefdac094c99 req-03abf074-ef8e-429f-80fc-ad9e57739eb8 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-a96ccadd-ac1d-4040-8bcc-bebb460ee233" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:40:26 compute-1 nova_compute[225855]: 2026-01-20 14:40:26.347 225859 DEBUG oslo_concurrency.lockutils [req-384dc47d-bdd2-4048-ba5c-cefdac094c99 req-03abf074-ef8e-429f-80fc-ad9e57739eb8 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-a96ccadd-ac1d-4040-8bcc-bebb460ee233" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:40:26 compute-1 nova_compute[225855]: 2026-01-20 14:40:26.348 225859 DEBUG nova.network.neutron [req-384dc47d-bdd2-4048-ba5c-cefdac094c99 req-03abf074-ef8e-429f-80fc-ad9e57739eb8 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: a96ccadd-ac1d-4040-8bcc-bebb460ee233] Refreshing network info cache for port 19a89daa-770c-4c3f-970c-a9a462503b06 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 14:40:27 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:40:27 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:40:27 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:40:27.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:40:27 compute-1 nova_compute[225855]: 2026-01-20 14:40:27.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:40:27 compute-1 nova_compute[225855]: 2026-01-20 14:40:27.339 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 14:40:27 compute-1 nova_compute[225855]: 2026-01-20 14:40:27.340 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 20 14:40:27 compute-1 nova_compute[225855]: 2026-01-20 14:40:27.375 225859 DEBUG nova.network.neutron [None req-de3a07a7-560e-4eaf-8d3f-454117b11be7 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] [instance: d5c2df9d-748f-4df2-9392-b45741975f65] Updating instance_info_cache with network_info: [{"id": "b48170b0-717d-48f0-8172-742a4a8596e9", "address": "fa:16:3e:3b:35:f2", "network": {"id": "fc21b99b-4e34-422c-be05-0a440009dac4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-808285772-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.174", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3f93fd4b2154dda9f38e62334904303", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb48170b0-71", "ovs_interfaceid": "b48170b0-717d-48f0-8172-742a4a8596e9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "db46acd4-809b-4127-ad48-870ae429b4d6", "address": "fa:16:3e:24:df:88", "network": {"id": "fc21b99b-4e34-422c-be05-0a440009dac4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-808285772-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3f93fd4b2154dda9f38e62334904303", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb46acd4-80", "ovs_interfaceid": "db46acd4-809b-4127-ad48-870ae429b4d6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "5fd7b3ad-0cf1-4294-b552-6141c8ee85bd", "address": "fa:16:3e:d2:51:f5", "network": {"id": "fc21b99b-4e34-422c-be05-0a440009dac4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-808285772-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3f93fd4b2154dda9f38e62334904303", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5fd7b3ad-0c", "ovs_interfaceid": "5fd7b3ad-0cf1-4294-b552-6141c8ee85bd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:40:27 compute-1 nova_compute[225855]: 2026-01-20 14:40:27.394 225859 DEBUG oslo_concurrency.lockutils [None req-de3a07a7-560e-4eaf-8d3f-454117b11be7 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Releasing lock "refresh_cache-d5c2df9d-748f-4df2-9392-b45741975f65" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:40:27 compute-1 nova_compute[225855]: 2026-01-20 14:40:27.395 225859 DEBUG oslo_concurrency.lockutils [req-0099f7ca-40af-4b82-8d82-e1f9572f203a req-ff92b3ce-8adf-4275-a7bf-0c706845dd52 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-d5c2df9d-748f-4df2-9392-b45741975f65" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:40:27 compute-1 nova_compute[225855]: 2026-01-20 14:40:27.396 225859 DEBUG nova.network.neutron [req-0099f7ca-40af-4b82-8d82-e1f9572f203a req-ff92b3ce-8adf-4275-a7bf-0c706845dd52 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d5c2df9d-748f-4df2-9392-b45741975f65] Refreshing network info cache for port 5fd7b3ad-0cf1-4294-b552-6141c8ee85bd _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 14:40:27 compute-1 nova_compute[225855]: 2026-01-20 14:40:27.399 225859 DEBUG nova.virt.libvirt.vif [None req-de3a07a7-560e-4eaf-8d3f-454117b11be7 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T14:39:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-2088149366',display_name='tempest-AttachInterfacesTestJSON-server-2088149366',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-2088149366',id=66,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJZwLcyXdmuPk9iAZlMOAxeFM3EdHKE0x5nJT3i2GTbVf6EkhYVj3hEmoeSwYo6iZrNjT6w/g2TndK4CzLIvGDWLEyKfIPgg2vbEtoL1oIxCHYN2ytrctbkHi1netydaRQ==',key_name='tempest-keypair-230916378',keypairs=<?>,launch_index=0,launched_at=2026-01-20T14:39:53Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='e3f93fd4b2154dda9f38e62334904303',ramdisk_id='',reservation_id='r-odubqy1o',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-305746947',owner_user_name='tempest-AttachInterfacesTestJSON-305746947-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T14:39:53Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c8a9fb458d27434495a77a94827b6097',uuid=d5c2df9d-748f-4df2-9392-b45741975f65,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5fd7b3ad-0cf1-4294-b552-6141c8ee85bd", "address": "fa:16:3e:d2:51:f5", "network": {"id": "fc21b99b-4e34-422c-be05-0a440009dac4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-808285772-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3f93fd4b2154dda9f38e62334904303", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5fd7b3ad-0c", "ovs_interfaceid": "5fd7b3ad-0cf1-4294-b552-6141c8ee85bd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 20 14:40:27 compute-1 nova_compute[225855]: 2026-01-20 14:40:27.399 225859 DEBUG nova.network.os_vif_util [None req-de3a07a7-560e-4eaf-8d3f-454117b11be7 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Converting VIF {"id": "5fd7b3ad-0cf1-4294-b552-6141c8ee85bd", "address": "fa:16:3e:d2:51:f5", "network": {"id": "fc21b99b-4e34-422c-be05-0a440009dac4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-808285772-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3f93fd4b2154dda9f38e62334904303", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5fd7b3ad-0c", "ovs_interfaceid": "5fd7b3ad-0cf1-4294-b552-6141c8ee85bd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:40:27 compute-1 nova_compute[225855]: 2026-01-20 14:40:27.400 225859 DEBUG nova.network.os_vif_util [None req-de3a07a7-560e-4eaf-8d3f-454117b11be7 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d2:51:f5,bridge_name='br-int',has_traffic_filtering=True,id=5fd7b3ad-0cf1-4294-b552-6141c8ee85bd,network=Network(fc21b99b-4e34-422c-be05-0a440009dac4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5fd7b3ad-0c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:40:27 compute-1 nova_compute[225855]: 2026-01-20 14:40:27.401 225859 DEBUG os_vif [None req-de3a07a7-560e-4eaf-8d3f-454117b11be7 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d2:51:f5,bridge_name='br-int',has_traffic_filtering=True,id=5fd7b3ad-0cf1-4294-b552-6141c8ee85bd,network=Network(fc21b99b-4e34-422c-be05-0a440009dac4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5fd7b3ad-0c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 20 14:40:27 compute-1 nova_compute[225855]: 2026-01-20 14:40:27.402 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:40:27 compute-1 nova_compute[225855]: 2026-01-20 14:40:27.402 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:40:27 compute-1 nova_compute[225855]: 2026-01-20 14:40:27.403 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:40:27 compute-1 nova_compute[225855]: 2026-01-20 14:40:27.406 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:40:27 compute-1 nova_compute[225855]: 2026-01-20 14:40:27.407 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5fd7b3ad-0c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:40:27 compute-1 nova_compute[225855]: 2026-01-20 14:40:27.407 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap5fd7b3ad-0c, col_values=(('external_ids', {'iface-id': '5fd7b3ad-0cf1-4294-b552-6141c8ee85bd', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d2:51:f5', 'vm-uuid': 'd5c2df9d-748f-4df2-9392-b45741975f65'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:40:27 compute-1 nova_compute[225855]: 2026-01-20 14:40:27.408 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:40:27 compute-1 NetworkManager[49104]: <info>  [1768920027.4096] manager: (tap5fd7b3ad-0c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/98)
Jan 20 14:40:27 compute-1 nova_compute[225855]: 2026-01-20 14:40:27.413 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 14:40:27 compute-1 nova_compute[225855]: 2026-01-20 14:40:27.415 225859 INFO os_vif [None req-de3a07a7-560e-4eaf-8d3f-454117b11be7 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d2:51:f5,bridge_name='br-int',has_traffic_filtering=True,id=5fd7b3ad-0cf1-4294-b552-6141c8ee85bd,network=Network(fc21b99b-4e34-422c-be05-0a440009dac4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5fd7b3ad-0c')
Jan 20 14:40:27 compute-1 nova_compute[225855]: 2026-01-20 14:40:27.416 225859 DEBUG nova.virt.libvirt.vif [None req-de3a07a7-560e-4eaf-8d3f-454117b11be7 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T14:39:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-2088149366',display_name='tempest-AttachInterfacesTestJSON-server-2088149366',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-2088149366',id=66,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJZwLcyXdmuPk9iAZlMOAxeFM3EdHKE0x5nJT3i2GTbVf6EkhYVj3hEmoeSwYo6iZrNjT6w/g2TndK4CzLIvGDWLEyKfIPgg2vbEtoL1oIxCHYN2ytrctbkHi1netydaRQ==',key_name='tempest-keypair-230916378',keypairs=<?>,launch_index=0,launched_at=2026-01-20T14:39:53Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='e3f93fd4b2154dda9f38e62334904303',ramdisk_id='',reservation_id='r-odubqy1o',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-305746947',owner_user_name='tempest-AttachInterfacesTestJSON-305746947-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T14:39:53Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c8a9fb458d27434495a77a94827b6097',uuid=d5c2df9d-748f-4df2-9392-b45741975f65,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5fd7b3ad-0cf1-4294-b552-6141c8ee85bd", "address": "fa:16:3e:d2:51:f5", "network": {"id": "fc21b99b-4e34-422c-be05-0a440009dac4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-808285772-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3f93fd4b2154dda9f38e62334904303", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5fd7b3ad-0c", "ovs_interfaceid": "5fd7b3ad-0cf1-4294-b552-6141c8ee85bd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 20 14:40:27 compute-1 nova_compute[225855]: 2026-01-20 14:40:27.417 225859 DEBUG nova.network.os_vif_util [None req-de3a07a7-560e-4eaf-8d3f-454117b11be7 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Converting VIF {"id": "5fd7b3ad-0cf1-4294-b552-6141c8ee85bd", "address": "fa:16:3e:d2:51:f5", "network": {"id": "fc21b99b-4e34-422c-be05-0a440009dac4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-808285772-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3f93fd4b2154dda9f38e62334904303", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5fd7b3ad-0c", "ovs_interfaceid": "5fd7b3ad-0cf1-4294-b552-6141c8ee85bd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:40:27 compute-1 nova_compute[225855]: 2026-01-20 14:40:27.417 225859 DEBUG nova.network.os_vif_util [None req-de3a07a7-560e-4eaf-8d3f-454117b11be7 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d2:51:f5,bridge_name='br-int',has_traffic_filtering=True,id=5fd7b3ad-0cf1-4294-b552-6141c8ee85bd,network=Network(fc21b99b-4e34-422c-be05-0a440009dac4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5fd7b3ad-0c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:40:27 compute-1 nova_compute[225855]: 2026-01-20 14:40:27.420 225859 DEBUG nova.virt.libvirt.guest [None req-de3a07a7-560e-4eaf-8d3f-454117b11be7 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] attach device xml: <interface type="ethernet">
Jan 20 14:40:27 compute-1 nova_compute[225855]:   <mac address="fa:16:3e:d2:51:f5"/>
Jan 20 14:40:27 compute-1 nova_compute[225855]:   <model type="virtio"/>
Jan 20 14:40:27 compute-1 nova_compute[225855]:   <driver name="vhost" rx_queue_size="512"/>
Jan 20 14:40:27 compute-1 nova_compute[225855]:   <mtu size="1442"/>
Jan 20 14:40:27 compute-1 nova_compute[225855]:   <target dev="tap5fd7b3ad-0c"/>
Jan 20 14:40:27 compute-1 nova_compute[225855]: </interface>
Jan 20 14:40:27 compute-1 nova_compute[225855]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Jan 20 14:40:27 compute-1 kernel: tap5fd7b3ad-0c: entered promiscuous mode
Jan 20 14:40:27 compute-1 NetworkManager[49104]: <info>  [1768920027.4311] manager: (tap5fd7b3ad-0c): new Tun device (/org/freedesktop/NetworkManager/Devices/99)
Jan 20 14:40:27 compute-1 ovn_controller[130490]: 2026-01-20T14:40:27Z|00208|binding|INFO|Claiming lport 5fd7b3ad-0cf1-4294-b552-6141c8ee85bd for this chassis.
Jan 20 14:40:27 compute-1 ovn_controller[130490]: 2026-01-20T14:40:27Z|00209|binding|INFO|5fd7b3ad-0cf1-4294-b552-6141c8ee85bd: Claiming fa:16:3e:d2:51:f5 10.100.0.11
Jan 20 14:40:27 compute-1 nova_compute[225855]: 2026-01-20 14:40:27.433 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:40:27 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:40:27.440 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d2:51:f5 10.100.0.11'], port_security=['fa:16:3e:d2:51:f5 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'd5c2df9d-748f-4df2-9392-b45741975f65', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fc21b99b-4e34-422c-be05-0a440009dac4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e3f93fd4b2154dda9f38e62334904303', 'neutron:revision_number': '2', 'neutron:security_group_ids': '52cb9fb4-4318-4f53-9b5a-002d95792517', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7af6b6bc-3cbd-48be-9f10-23ec011e0426, chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], logical_port=5fd7b3ad-0cf1-4294-b552-6141c8ee85bd) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:40:27 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:40:27.442 140354 INFO neutron.agent.ovn.metadata.agent [-] Port 5fd7b3ad-0cf1-4294-b552-6141c8ee85bd in datapath fc21b99b-4e34-422c-be05-0a440009dac4 bound to our chassis
Jan 20 14:40:27 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:40:27.444 140354 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fc21b99b-4e34-422c-be05-0a440009dac4
Jan 20 14:40:27 compute-1 ovn_controller[130490]: 2026-01-20T14:40:27Z|00210|binding|INFO|Setting lport 5fd7b3ad-0cf1-4294-b552-6141c8ee85bd ovn-installed in OVS
Jan 20 14:40:27 compute-1 ovn_controller[130490]: 2026-01-20T14:40:27Z|00211|binding|INFO|Setting lport 5fd7b3ad-0cf1-4294-b552-6141c8ee85bd up in Southbound
Jan 20 14:40:27 compute-1 nova_compute[225855]: 2026-01-20 14:40:27.454 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:40:27 compute-1 nova_compute[225855]: 2026-01-20 14:40:27.456 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:40:27 compute-1 systemd-udevd[252907]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 14:40:27 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:40:27.465 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[63f2a438-816c-4917-954f-db4318e013e2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:40:27 compute-1 NetworkManager[49104]: <info>  [1768920027.4793] device (tap5fd7b3ad-0c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 20 14:40:27 compute-1 NetworkManager[49104]: <info>  [1768920027.4798] device (tap5fd7b3ad-0c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 20 14:40:27 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:40:27.496 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[a841517a-18da-4bb1-a06e-1f4f01d6e35e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:40:27 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:40:27.505 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[12a52736-1dbd-4237-a30d-3555ec5398b9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:40:27 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:40:27.532 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[0c193ff8-e27d-4bc4-8b5f-c2bbeac0dd09]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:40:27 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:40:27.550 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[79d8e360-40f1-42e4-a31a-c04125a43466]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfc21b99b-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b1:5b:d2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 9, 'tx_packets': 7, 'rx_bytes': 658, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 9, 'tx_packets': 7, 'rx_bytes': 658, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 55], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 503233, 'reachable_time': 15488, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 252914, 'error': None, 'target': 'ovnmeta-fc21b99b-4e34-422c-be05-0a440009dac4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:40:27 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:40:27.569 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[6bf84414-535c-494a-8733-aa65806d727e]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapfc21b99b-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 503245, 'tstamp': 503245}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 252916, 'error': None, 'target': 'ovnmeta-fc21b99b-4e34-422c-be05-0a440009dac4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapfc21b99b-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 503248, 'tstamp': 503248}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 252916, 'error': None, 'target': 'ovnmeta-fc21b99b-4e34-422c-be05-0a440009dac4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:40:27 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:40:27.571 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfc21b99b-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:40:27 compute-1 nova_compute[225855]: 2026-01-20 14:40:27.572 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:40:27 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:40:27.575 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfc21b99b-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:40:27 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:40:27.576 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:40:27 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:40:27.576 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfc21b99b-40, col_values=(('external_ids', {'iface-id': '583df905-1d9f-49c1-b209-4b7fad1599f6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:40:27 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:40:27.577 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:40:27 compute-1 nova_compute[225855]: 2026-01-20 14:40:27.653 225859 DEBUG nova.virt.libvirt.driver [None req-de3a07a7-560e-4eaf-8d3f-454117b11be7 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:40:27 compute-1 nova_compute[225855]: 2026-01-20 14:40:27.653 225859 DEBUG nova.virt.libvirt.driver [None req-de3a07a7-560e-4eaf-8d3f-454117b11be7 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:40:27 compute-1 nova_compute[225855]: 2026-01-20 14:40:27.654 225859 DEBUG nova.virt.libvirt.driver [None req-de3a07a7-560e-4eaf-8d3f-454117b11be7 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] No VIF found with MAC fa:16:3e:3b:35:f2, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 20 14:40:27 compute-1 nova_compute[225855]: 2026-01-20 14:40:27.654 225859 DEBUG nova.virt.libvirt.driver [None req-de3a07a7-560e-4eaf-8d3f-454117b11be7 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] No VIF found with MAC fa:16:3e:24:df:88, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 20 14:40:27 compute-1 nova_compute[225855]: 2026-01-20 14:40:27.654 225859 DEBUG nova.virt.libvirt.driver [None req-de3a07a7-560e-4eaf-8d3f-454117b11be7 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] No VIF found with MAC fa:16:3e:d2:51:f5, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 20 14:40:27 compute-1 nova_compute[225855]: 2026-01-20 14:40:27.687 225859 DEBUG nova.virt.libvirt.guest [None req-de3a07a7-560e-4eaf-8d3f-454117b11be7 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 14:40:27 compute-1 nova_compute[225855]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 14:40:27 compute-1 nova_compute[225855]:   <nova:name>tempest-AttachInterfacesTestJSON-server-2088149366</nova:name>
Jan 20 14:40:27 compute-1 nova_compute[225855]:   <nova:creationTime>2026-01-20 14:40:27</nova:creationTime>
Jan 20 14:40:27 compute-1 nova_compute[225855]:   <nova:flavor name="m1.nano">
Jan 20 14:40:27 compute-1 nova_compute[225855]:     <nova:memory>128</nova:memory>
Jan 20 14:40:27 compute-1 nova_compute[225855]:     <nova:disk>1</nova:disk>
Jan 20 14:40:27 compute-1 nova_compute[225855]:     <nova:swap>0</nova:swap>
Jan 20 14:40:27 compute-1 nova_compute[225855]:     <nova:ephemeral>0</nova:ephemeral>
Jan 20 14:40:27 compute-1 nova_compute[225855]:     <nova:vcpus>1</nova:vcpus>
Jan 20 14:40:27 compute-1 nova_compute[225855]:   </nova:flavor>
Jan 20 14:40:27 compute-1 nova_compute[225855]:   <nova:owner>
Jan 20 14:40:27 compute-1 nova_compute[225855]:     <nova:user uuid="c8a9fb458d27434495a77a94827b6097">tempest-AttachInterfacesTestJSON-305746947-project-member</nova:user>
Jan 20 14:40:27 compute-1 nova_compute[225855]:     <nova:project uuid="e3f93fd4b2154dda9f38e62334904303">tempest-AttachInterfacesTestJSON-305746947</nova:project>
Jan 20 14:40:27 compute-1 nova_compute[225855]:   </nova:owner>
Jan 20 14:40:27 compute-1 nova_compute[225855]:   <nova:root type="image" uuid="a32b3e07-16d8-46fd-9a7b-c242c432fcf9"/>
Jan 20 14:40:27 compute-1 nova_compute[225855]:   <nova:ports>
Jan 20 14:40:27 compute-1 nova_compute[225855]:     <nova:port uuid="b48170b0-717d-48f0-8172-742a4a8596e9">
Jan 20 14:40:27 compute-1 nova_compute[225855]:       <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Jan 20 14:40:27 compute-1 nova_compute[225855]:     </nova:port>
Jan 20 14:40:27 compute-1 nova_compute[225855]:     <nova:port uuid="db46acd4-809b-4127-ad48-870ae429b4d6">
Jan 20 14:40:27 compute-1 nova_compute[225855]:       <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Jan 20 14:40:27 compute-1 nova_compute[225855]:     </nova:port>
Jan 20 14:40:27 compute-1 nova_compute[225855]:     <nova:port uuid="5fd7b3ad-0cf1-4294-b552-6141c8ee85bd">
Jan 20 14:40:27 compute-1 nova_compute[225855]:       <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Jan 20 14:40:27 compute-1 nova_compute[225855]:     </nova:port>
Jan 20 14:40:27 compute-1 nova_compute[225855]:   </nova:ports>
Jan 20 14:40:27 compute-1 nova_compute[225855]: </nova:instance>
Jan 20 14:40:27 compute-1 nova_compute[225855]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Jan 20 14:40:27 compute-1 nova_compute[225855]: 2026-01-20 14:40:27.711 225859 DEBUG oslo_concurrency.lockutils [None req-de3a07a7-560e-4eaf-8d3f-454117b11be7 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Lock "interface-d5c2df9d-748f-4df2-9392-b45741975f65-None" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 6.734s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:40:27 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:40:27 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:40:27 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:40:27.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:40:28 compute-1 nova_compute[225855]: 2026-01-20 14:40:28.321 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "refresh_cache-d5c2df9d-748f-4df2-9392-b45741975f65" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:40:28 compute-1 ceph-mon[81775]: pgmap v1563: 321 pgs: 321 active+clean; 315 MiB data, 739 MiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 3.4 MiB/s wr, 165 op/s
Jan 20 14:40:28 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/3936017395' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:40:29 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:40:29 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:40:29 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:40:29.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:40:29 compute-1 ovn_controller[130490]: 2026-01-20T14:40:29Z|00030|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:d2:51:f5 10.100.0.11
Jan 20 14:40:29 compute-1 ovn_controller[130490]: 2026-01-20T14:40:29Z|00031|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:d2:51:f5 10.100.0.11
Jan 20 14:40:29 compute-1 nova_compute[225855]: 2026-01-20 14:40:29.471 225859 DEBUG nova.compute.manager [req-0ab3b4c9-b364-4cb4-a72c-283ae197b3b0 req-7957c781-b55e-44a2-ae5d-021a274452f1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d5c2df9d-748f-4df2-9392-b45741975f65] Received event network-vif-plugged-5fd7b3ad-0cf1-4294-b552-6141c8ee85bd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:40:29 compute-1 nova_compute[225855]: 2026-01-20 14:40:29.472 225859 DEBUG oslo_concurrency.lockutils [req-0ab3b4c9-b364-4cb4-a72c-283ae197b3b0 req-7957c781-b55e-44a2-ae5d-021a274452f1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "d5c2df9d-748f-4df2-9392-b45741975f65-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:40:29 compute-1 nova_compute[225855]: 2026-01-20 14:40:29.472 225859 DEBUG oslo_concurrency.lockutils [req-0ab3b4c9-b364-4cb4-a72c-283ae197b3b0 req-7957c781-b55e-44a2-ae5d-021a274452f1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "d5c2df9d-748f-4df2-9392-b45741975f65-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:40:29 compute-1 nova_compute[225855]: 2026-01-20 14:40:29.472 225859 DEBUG oslo_concurrency.lockutils [req-0ab3b4c9-b364-4cb4-a72c-283ae197b3b0 req-7957c781-b55e-44a2-ae5d-021a274452f1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "d5c2df9d-748f-4df2-9392-b45741975f65-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:40:29 compute-1 nova_compute[225855]: 2026-01-20 14:40:29.472 225859 DEBUG nova.compute.manager [req-0ab3b4c9-b364-4cb4-a72c-283ae197b3b0 req-7957c781-b55e-44a2-ae5d-021a274452f1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d5c2df9d-748f-4df2-9392-b45741975f65] No waiting events found dispatching network-vif-plugged-5fd7b3ad-0cf1-4294-b552-6141c8ee85bd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:40:29 compute-1 nova_compute[225855]: 2026-01-20 14:40:29.473 225859 WARNING nova.compute.manager [req-0ab3b4c9-b364-4cb4-a72c-283ae197b3b0 req-7957c781-b55e-44a2-ae5d-021a274452f1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d5c2df9d-748f-4df2-9392-b45741975f65] Received unexpected event network-vif-plugged-5fd7b3ad-0cf1-4294-b552-6141c8ee85bd for instance with vm_state active and task_state None.
Jan 20 14:40:29 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/1861985885' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:40:29 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:40:29 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:40:29 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:40:29.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:40:30 compute-1 nova_compute[225855]: 2026-01-20 14:40:30.084 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:40:30 compute-1 sudo[252918]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:40:30 compute-1 sudo[252918]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:40:30 compute-1 sudo[252918]: pam_unix(sudo:session): session closed for user root
Jan 20 14:40:30 compute-1 sudo[252950]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:40:30 compute-1 sudo[252950]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:40:30 compute-1 sudo[252950]: pam_unix(sudo:session): session closed for user root
Jan 20 14:40:30 compute-1 podman[252942]: 2026-01-20 14:40:30.174603273 +0000 UTC m=+0.055269277 container health_status 533cd8e0c3b8b7f910f15cef49e7f879ab46c12ce646a35628c15f13725eb500 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 20 14:40:30 compute-1 nova_compute[225855]: 2026-01-20 14:40:30.192 225859 DEBUG nova.network.neutron [req-0099f7ca-40af-4b82-8d82-e1f9572f203a req-ff92b3ce-8adf-4275-a7bf-0c706845dd52 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d5c2df9d-748f-4df2-9392-b45741975f65] Updated VIF entry in instance network info cache for port 5fd7b3ad-0cf1-4294-b552-6141c8ee85bd. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 14:40:30 compute-1 nova_compute[225855]: 2026-01-20 14:40:30.193 225859 DEBUG nova.network.neutron [req-0099f7ca-40af-4b82-8d82-e1f9572f203a req-ff92b3ce-8adf-4275-a7bf-0c706845dd52 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d5c2df9d-748f-4df2-9392-b45741975f65] Updating instance_info_cache with network_info: [{"id": "b48170b0-717d-48f0-8172-742a4a8596e9", "address": "fa:16:3e:3b:35:f2", "network": {"id": "fc21b99b-4e34-422c-be05-0a440009dac4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-808285772-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.174", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3f93fd4b2154dda9f38e62334904303", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb48170b0-71", "ovs_interfaceid": "b48170b0-717d-48f0-8172-742a4a8596e9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "db46acd4-809b-4127-ad48-870ae429b4d6", "address": "fa:16:3e:24:df:88", "network": {"id": "fc21b99b-4e34-422c-be05-0a440009dac4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-808285772-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3f93fd4b2154dda9f38e62334904303", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb46acd4-80", "ovs_interfaceid": "db46acd4-809b-4127-ad48-870ae429b4d6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "5fd7b3ad-0cf1-4294-b552-6141c8ee85bd", "address": "fa:16:3e:d2:51:f5", "network": {"id": "fc21b99b-4e34-422c-be05-0a440009dac4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-808285772-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3f93fd4b2154dda9f38e62334904303", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5fd7b3ad-0c", "ovs_interfaceid": "5fd7b3ad-0cf1-4294-b552-6141c8ee85bd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:40:30 compute-1 sudo[252987]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:40:30 compute-1 nova_compute[225855]: 2026-01-20 14:40:30.225 225859 DEBUG oslo_concurrency.lockutils [req-0099f7ca-40af-4b82-8d82-e1f9572f203a req-ff92b3ce-8adf-4275-a7bf-0c706845dd52 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-d5c2df9d-748f-4df2-9392-b45741975f65" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:40:30 compute-1 sudo[252987]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:40:30 compute-1 nova_compute[225855]: 2026-01-20 14:40:30.225 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquired lock "refresh_cache-d5c2df9d-748f-4df2-9392-b45741975f65" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:40:30 compute-1 nova_compute[225855]: 2026-01-20 14:40:30.226 225859 DEBUG nova.network.neutron [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] [instance: d5c2df9d-748f-4df2-9392-b45741975f65] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 20 14:40:30 compute-1 nova_compute[225855]: 2026-01-20 14:40:30.226 225859 DEBUG nova.objects.instance [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lazy-loading 'info_cache' on Instance uuid d5c2df9d-748f-4df2-9392-b45741975f65 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:40:30 compute-1 sudo[252987]: pam_unix(sudo:session): session closed for user root
Jan 20 14:40:30 compute-1 nova_compute[225855]: 2026-01-20 14:40:30.243 225859 DEBUG nova.network.neutron [req-384dc47d-bdd2-4048-ba5c-cefdac094c99 req-03abf074-ef8e-429f-80fc-ad9e57739eb8 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: a96ccadd-ac1d-4040-8bcc-bebb460ee233] Updated VIF entry in instance network info cache for port 19a89daa-770c-4c3f-970c-a9a462503b06. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 14:40:30 compute-1 nova_compute[225855]: 2026-01-20 14:40:30.244 225859 DEBUG nova.network.neutron [req-384dc47d-bdd2-4048-ba5c-cefdac094c99 req-03abf074-ef8e-429f-80fc-ad9e57739eb8 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: a96ccadd-ac1d-4040-8bcc-bebb460ee233] Updating instance_info_cache with network_info: [{"id": "19a89daa-770c-4c3f-970c-a9a462503b06", "address": "fa:16:3e:7c:c6:66", "network": {"id": "1002188b-c6a7-4b59-9326-3a1a837a00fd", "bridge": "br-int", "label": "tempest-ServersTestFqdnHostnames-1606456759-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b50ce2f25e8943e28ddf8bf69c721e75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap19a89daa-77", "ovs_interfaceid": "19a89daa-770c-4c3f-970c-a9a462503b06", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:40:30 compute-1 nova_compute[225855]: 2026-01-20 14:40:30.270 225859 DEBUG oslo_concurrency.lockutils [req-384dc47d-bdd2-4048-ba5c-cefdac094c99 req-03abf074-ef8e-429f-80fc-ad9e57739eb8 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-a96ccadd-ac1d-4040-8bcc-bebb460ee233" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:40:30 compute-1 sudo[253013]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 20 14:40:30 compute-1 sudo[253013]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:40:30 compute-1 sudo[253013]: pam_unix(sudo:session): session closed for user root
Jan 20 14:40:30 compute-1 ceph-mon[81775]: pgmap v1564: 321 pgs: 321 active+clean; 321 MiB data, 745 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 3.0 MiB/s wr, 144 op/s
Jan 20 14:40:30 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:40:31 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:40:31 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:40:31 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:40:31.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:40:31 compute-1 nova_compute[225855]: 2026-01-20 14:40:31.631 225859 DEBUG nova.compute.manager [req-c138b668-5d51-41c2-a531-a4cfdd15618a req-f680e24f-c065-48bb-b7c4-ef0fb9ff6c94 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d5c2df9d-748f-4df2-9392-b45741975f65] Received event network-vif-plugged-5fd7b3ad-0cf1-4294-b552-6141c8ee85bd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:40:31 compute-1 nova_compute[225855]: 2026-01-20 14:40:31.633 225859 DEBUG oslo_concurrency.lockutils [req-c138b668-5d51-41c2-a531-a4cfdd15618a req-f680e24f-c065-48bb-b7c4-ef0fb9ff6c94 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "d5c2df9d-748f-4df2-9392-b45741975f65-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:40:31 compute-1 nova_compute[225855]: 2026-01-20 14:40:31.633 225859 DEBUG oslo_concurrency.lockutils [req-c138b668-5d51-41c2-a531-a4cfdd15618a req-f680e24f-c065-48bb-b7c4-ef0fb9ff6c94 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "d5c2df9d-748f-4df2-9392-b45741975f65-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:40:31 compute-1 nova_compute[225855]: 2026-01-20 14:40:31.633 225859 DEBUG oslo_concurrency.lockutils [req-c138b668-5d51-41c2-a531-a4cfdd15618a req-f680e24f-c065-48bb-b7c4-ef0fb9ff6c94 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "d5c2df9d-748f-4df2-9392-b45741975f65-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:40:31 compute-1 nova_compute[225855]: 2026-01-20 14:40:31.634 225859 DEBUG nova.compute.manager [req-c138b668-5d51-41c2-a531-a4cfdd15618a req-f680e24f-c065-48bb-b7c4-ef0fb9ff6c94 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d5c2df9d-748f-4df2-9392-b45741975f65] No waiting events found dispatching network-vif-plugged-5fd7b3ad-0cf1-4294-b552-6141c8ee85bd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:40:31 compute-1 nova_compute[225855]: 2026-01-20 14:40:31.634 225859 WARNING nova.compute.manager [req-c138b668-5d51-41c2-a531-a4cfdd15618a req-f680e24f-c065-48bb-b7c4-ef0fb9ff6c94 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d5c2df9d-748f-4df2-9392-b45741975f65] Received unexpected event network-vif-plugged-5fd7b3ad-0cf1-4294-b552-6141c8ee85bd for instance with vm_state active and task_state None.
Jan 20 14:40:31 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 20 14:40:31 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:40:31 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 14:40:31 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:40:31 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 14:40:31 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 14:40:31 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:40:31 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/1270905783' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:40:31 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/2878926089' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 14:40:31 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/2878926089' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 14:40:31 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:40:31 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:40:31 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:40:31.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:40:32 compute-1 nova_compute[225855]: 2026-01-20 14:40:32.107 225859 DEBUG oslo_concurrency.lockutils [None req-b2246b1c-96ec-4ece-a914-b82c394b5339 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Acquiring lock "interface-d5c2df9d-748f-4df2-9392-b45741975f65-0b83ac2a-727a-4db9-91f2-69f939deeb69" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:40:32 compute-1 nova_compute[225855]: 2026-01-20 14:40:32.108 225859 DEBUG oslo_concurrency.lockutils [None req-b2246b1c-96ec-4ece-a914-b82c394b5339 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Lock "interface-d5c2df9d-748f-4df2-9392-b45741975f65-0b83ac2a-727a-4db9-91f2-69f939deeb69" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:40:32 compute-1 nova_compute[225855]: 2026-01-20 14:40:32.108 225859 DEBUG nova.objects.instance [None req-b2246b1c-96ec-4ece-a914-b82c394b5339 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Lazy-loading 'flavor' on Instance uuid d5c2df9d-748f-4df2-9392-b45741975f65 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:40:32 compute-1 nova_compute[225855]: 2026-01-20 14:40:32.411 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:40:32 compute-1 nova_compute[225855]: 2026-01-20 14:40:32.643 225859 DEBUG nova.objects.instance [None req-b2246b1c-96ec-4ece-a914-b82c394b5339 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Lazy-loading 'pci_requests' on Instance uuid d5c2df9d-748f-4df2-9392-b45741975f65 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:40:32 compute-1 nova_compute[225855]: 2026-01-20 14:40:32.659 225859 DEBUG nova.network.neutron [None req-b2246b1c-96ec-4ece-a914-b82c394b5339 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] [instance: d5c2df9d-748f-4df2-9392-b45741975f65] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 20 14:40:33 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:40:33 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:40:33 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:40:33.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:40:33 compute-1 ceph-mon[81775]: pgmap v1565: 321 pgs: 321 active+clean; 326 MiB data, 746 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.6 MiB/s wr, 156 op/s
Jan 20 14:40:33 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/2590034195' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:40:33 compute-1 nova_compute[225855]: 2026-01-20 14:40:33.247 225859 DEBUG nova.policy [None req-b2246b1c-96ec-4ece-a914-b82c394b5339 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'c8a9fb458d27434495a77a94827b6097', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'e3f93fd4b2154dda9f38e62334904303', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 20 14:40:33 compute-1 nova_compute[225855]: 2026-01-20 14:40:33.958 225859 DEBUG nova.network.neutron [None req-b2246b1c-96ec-4ece-a914-b82c394b5339 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] [instance: d5c2df9d-748f-4df2-9392-b45741975f65] Successfully updated port: 0b83ac2a-727a-4db9-91f2-69f939deeb69 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 20 14:40:33 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:40:33 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:40:33 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:40:33.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:40:34 compute-1 nova_compute[225855]: 2026-01-20 14:40:34.002 225859 DEBUG oslo_concurrency.lockutils [None req-b2246b1c-96ec-4ece-a914-b82c394b5339 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Acquiring lock "refresh_cache-d5c2df9d-748f-4df2-9392-b45741975f65" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:40:34 compute-1 nova_compute[225855]: 2026-01-20 14:40:34.056 225859 DEBUG nova.compute.manager [req-29ab351e-d7f1-45a4-be9d-33e98aaab4ff req-46720557-2704-4b58-bf50-c27aa2c9e14d 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d5c2df9d-748f-4df2-9392-b45741975f65] Received event network-changed-0b83ac2a-727a-4db9-91f2-69f939deeb69 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:40:34 compute-1 nova_compute[225855]: 2026-01-20 14:40:34.057 225859 DEBUG nova.compute.manager [req-29ab351e-d7f1-45a4-be9d-33e98aaab4ff req-46720557-2704-4b58-bf50-c27aa2c9e14d 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d5c2df9d-748f-4df2-9392-b45741975f65] Refreshing instance network info cache due to event network-changed-0b83ac2a-727a-4db9-91f2-69f939deeb69. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 14:40:34 compute-1 nova_compute[225855]: 2026-01-20 14:40:34.057 225859 DEBUG oslo_concurrency.lockutils [req-29ab351e-d7f1-45a4-be9d-33e98aaab4ff req-46720557-2704-4b58-bf50-c27aa2c9e14d 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-d5c2df9d-748f-4df2-9392-b45741975f65" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:40:34 compute-1 ceph-mon[81775]: pgmap v1566: 321 pgs: 321 active+clean; 326 MiB data, 747 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 139 op/s
Jan 20 14:40:34 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/3830778880' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 14:40:34 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/3830778880' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 14:40:34 compute-1 nova_compute[225855]: 2026-01-20 14:40:34.342 225859 DEBUG nova.network.neutron [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] [instance: d5c2df9d-748f-4df2-9392-b45741975f65] Updating instance_info_cache with network_info: [{"id": "b48170b0-717d-48f0-8172-742a4a8596e9", "address": "fa:16:3e:3b:35:f2", "network": {"id": "fc21b99b-4e34-422c-be05-0a440009dac4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-808285772-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.174", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3f93fd4b2154dda9f38e62334904303", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb48170b0-71", "ovs_interfaceid": "b48170b0-717d-48f0-8172-742a4a8596e9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "db46acd4-809b-4127-ad48-870ae429b4d6", "address": "fa:16:3e:24:df:88", "network": {"id": "fc21b99b-4e34-422c-be05-0a440009dac4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-808285772-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3f93fd4b2154dda9f38e62334904303", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb46acd4-80", "ovs_interfaceid": "db46acd4-809b-4127-ad48-870ae429b4d6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "5fd7b3ad-0cf1-4294-b552-6141c8ee85bd", "address": "fa:16:3e:d2:51:f5", "network": {"id": "fc21b99b-4e34-422c-be05-0a440009dac4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-808285772-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3f93fd4b2154dda9f38e62334904303", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5fd7b3ad-0c", "ovs_interfaceid": "5fd7b3ad-0cf1-4294-b552-6141c8ee85bd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:40:34 compute-1 nova_compute[225855]: 2026-01-20 14:40:34.363 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Releasing lock "refresh_cache-d5c2df9d-748f-4df2-9392-b45741975f65" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:40:34 compute-1 nova_compute[225855]: 2026-01-20 14:40:34.364 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] [instance: d5c2df9d-748f-4df2-9392-b45741975f65] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 20 14:40:34 compute-1 nova_compute[225855]: 2026-01-20 14:40:34.365 225859 DEBUG oslo_concurrency.lockutils [None req-b2246b1c-96ec-4ece-a914-b82c394b5339 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Acquired lock "refresh_cache-d5c2df9d-748f-4df2-9392-b45741975f65" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:40:34 compute-1 nova_compute[225855]: 2026-01-20 14:40:34.365 225859 DEBUG nova.network.neutron [None req-b2246b1c-96ec-4ece-a914-b82c394b5339 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] [instance: d5c2df9d-748f-4df2-9392-b45741975f65] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 14:40:34 compute-1 nova_compute[225855]: 2026-01-20 14:40:34.366 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:40:34 compute-1 nova_compute[225855]: 2026-01-20 14:40:34.367 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:40:34 compute-1 nova_compute[225855]: 2026-01-20 14:40:34.367 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:40:34 compute-1 nova_compute[225855]: 2026-01-20 14:40:34.367 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:40:34 compute-1 nova_compute[225855]: 2026-01-20 14:40:34.368 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 14:40:34 compute-1 nova_compute[225855]: 2026-01-20 14:40:34.368 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:40:34 compute-1 nova_compute[225855]: 2026-01-20 14:40:34.387 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:40:34 compute-1 nova_compute[225855]: 2026-01-20 14:40:34.387 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:40:34 compute-1 nova_compute[225855]: 2026-01-20 14:40:34.387 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:40:34 compute-1 nova_compute[225855]: 2026-01-20 14:40:34.388 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 14:40:34 compute-1 nova_compute[225855]: 2026-01-20 14:40:34.388 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:40:34 compute-1 nova_compute[225855]: 2026-01-20 14:40:34.514 225859 WARNING nova.network.neutron [None req-b2246b1c-96ec-4ece-a914-b82c394b5339 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] [instance: d5c2df9d-748f-4df2-9392-b45741975f65] fc21b99b-4e34-422c-be05-0a440009dac4 already exists in list: networks containing: ['fc21b99b-4e34-422c-be05-0a440009dac4']. ignoring it
Jan 20 14:40:34 compute-1 nova_compute[225855]: 2026-01-20 14:40:34.516 225859 WARNING nova.network.neutron [None req-b2246b1c-96ec-4ece-a914-b82c394b5339 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] [instance: d5c2df9d-748f-4df2-9392-b45741975f65] fc21b99b-4e34-422c-be05-0a440009dac4 already exists in list: networks containing: ['fc21b99b-4e34-422c-be05-0a440009dac4']. ignoring it
Jan 20 14:40:34 compute-1 nova_compute[225855]: 2026-01-20 14:40:34.516 225859 WARNING nova.network.neutron [None req-b2246b1c-96ec-4ece-a914-b82c394b5339 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] [instance: d5c2df9d-748f-4df2-9392-b45741975f65] fc21b99b-4e34-422c-be05-0a440009dac4 already exists in list: networks containing: ['fc21b99b-4e34-422c-be05-0a440009dac4']. ignoring it
Jan 20 14:40:34 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:40:34 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/164278318' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:40:34 compute-1 nova_compute[225855]: 2026-01-20 14:40:34.860 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:40:34 compute-1 nova_compute[225855]: 2026-01-20 14:40:34.936 225859 DEBUG nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] skipping disk for instance-00000042 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 14:40:34 compute-1 nova_compute[225855]: 2026-01-20 14:40:34.937 225859 DEBUG nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] skipping disk for instance-00000042 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 14:40:34 compute-1 nova_compute[225855]: 2026-01-20 14:40:34.941 225859 DEBUG nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] skipping disk for instance-00000044 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 14:40:34 compute-1 nova_compute[225855]: 2026-01-20 14:40:34.941 225859 DEBUG nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] skipping disk for instance-00000044 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 14:40:35 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:40:35 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:40:35 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:40:35.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:40:35 compute-1 nova_compute[225855]: 2026-01-20 14:40:35.086 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:40:35 compute-1 nova_compute[225855]: 2026-01-20 14:40:35.124 225859 WARNING nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 14:40:35 compute-1 nova_compute[225855]: 2026-01-20 14:40:35.126 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4149MB free_disk=20.8763427734375GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 14:40:35 compute-1 nova_compute[225855]: 2026-01-20 14:40:35.126 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:40:35 compute-1 nova_compute[225855]: 2026-01-20 14:40:35.127 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:40:35 compute-1 nova_compute[225855]: 2026-01-20 14:40:35.206 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Instance d5c2df9d-748f-4df2-9392-b45741975f65 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 20 14:40:35 compute-1 nova_compute[225855]: 2026-01-20 14:40:35.206 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Instance a96ccadd-ac1d-4040-8bcc-bebb460ee233 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 20 14:40:35 compute-1 nova_compute[225855]: 2026-01-20 14:40:35.206 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 14:40:35 compute-1 nova_compute[225855]: 2026-01-20 14:40:35.207 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 14:40:35 compute-1 nova_compute[225855]: 2026-01-20 14:40:35.350 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:40:35 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:40:35 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2698970955' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:40:35 compute-1 nova_compute[225855]: 2026-01-20 14:40:35.820 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:40:35 compute-1 nova_compute[225855]: 2026-01-20 14:40:35.826 225859 DEBUG nova.compute.provider_tree [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:40:35 compute-1 nova_compute[225855]: 2026-01-20 14:40:35.842 225859 DEBUG nova.scheduler.client.report [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:40:35 compute-1 nova_compute[225855]: 2026-01-20 14:40:35.866 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 14:40:35 compute-1 nova_compute[225855]: 2026-01-20 14:40:35.866 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.740s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:40:35 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:40:35 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:40:35 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:40:35.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:40:36 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/164278318' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:40:36 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:40:36 compute-1 ceph-mon[81775]: pgmap v1567: 321 pgs: 321 active+clean; 304 MiB data, 747 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 3.1 MiB/s wr, 175 op/s
Jan 20 14:40:36 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/2698970955' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:40:36 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/1847975642' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 14:40:36 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/1847975642' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 14:40:37 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:40:37 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:40:37 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:40:37.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:40:37 compute-1 nova_compute[225855]: 2026-01-20 14:40:37.424 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:40:37 compute-1 ovn_controller[130490]: 2026-01-20T14:40:37Z|00032|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:7c:c6:66 10.100.0.12
Jan 20 14:40:37 compute-1 ovn_controller[130490]: 2026-01-20T14:40:37Z|00033|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:7c:c6:66 10.100.0.12
Jan 20 14:40:37 compute-1 nova_compute[225855]: 2026-01-20 14:40:37.855 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:40:37 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:40:37.855 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=23, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '12:bb:42', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '06:92:24:f7:15:56'}, ipsec=False) old=SB_Global(nb_cfg=22) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:40:37 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:40:37.856 140354 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 20 14:40:37 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:40:37 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:40:37 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:40:37.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:40:38 compute-1 ceph-mon[81775]: pgmap v1568: 321 pgs: 321 active+clean; 301 MiB data, 752 MiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 2.9 MiB/s wr, 153 op/s
Jan 20 14:40:38 compute-1 sudo[253118]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:40:38 compute-1 sudo[253118]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:40:38 compute-1 sudo[253118]: pam_unix(sudo:session): session closed for user root
Jan 20 14:40:38 compute-1 sudo[253143]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 14:40:38 compute-1 sudo[253143]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:40:38 compute-1 sudo[253143]: pam_unix(sudo:session): session closed for user root
Jan 20 14:40:39 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:40:39 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:40:39 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:40:39.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:40:39 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:40:39 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:40:39 compute-1 nova_compute[225855]: 2026-01-20 14:40:39.889 225859 DEBUG nova.network.neutron [None req-b2246b1c-96ec-4ece-a914-b82c394b5339 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] [instance: d5c2df9d-748f-4df2-9392-b45741975f65] Updating instance_info_cache with network_info: [{"id": "b48170b0-717d-48f0-8172-742a4a8596e9", "address": "fa:16:3e:3b:35:f2", "network": {"id": "fc21b99b-4e34-422c-be05-0a440009dac4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-808285772-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.174", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3f93fd4b2154dda9f38e62334904303", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb48170b0-71", "ovs_interfaceid": "b48170b0-717d-48f0-8172-742a4a8596e9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "db46acd4-809b-4127-ad48-870ae429b4d6", "address": "fa:16:3e:24:df:88", "network": {"id": "fc21b99b-4e34-422c-be05-0a440009dac4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-808285772-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "e3f93fd4b2154dda9f38e62334904303", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb46acd4-80", "ovs_interfaceid": "db46acd4-809b-4127-ad48-870ae429b4d6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "5fd7b3ad-0cf1-4294-b552-6141c8ee85bd", "address": "fa:16:3e:d2:51:f5", "network": {"id": "fc21b99b-4e34-422c-be05-0a440009dac4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-808285772-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3f93fd4b2154dda9f38e62334904303", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5fd7b3ad-0c", "ovs_interfaceid": "5fd7b3ad-0cf1-4294-b552-6141c8ee85bd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "0b83ac2a-727a-4db9-91f2-69f939deeb69", "address": "fa:16:3e:e7:0e:b2", "network": {"id": "fc21b99b-4e34-422c-be05-0a440009dac4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-808285772-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, 
"floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3f93fd4b2154dda9f38e62334904303", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b83ac2a-72", "ovs_interfaceid": "0b83ac2a-727a-4db9-91f2-69f939deeb69", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:40:39 compute-1 nova_compute[225855]: 2026-01-20 14:40:39.909 225859 DEBUG oslo_concurrency.lockutils [None req-b2246b1c-96ec-4ece-a914-b82c394b5339 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Releasing lock "refresh_cache-d5c2df9d-748f-4df2-9392-b45741975f65" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:40:39 compute-1 nova_compute[225855]: 2026-01-20 14:40:39.910 225859 DEBUG oslo_concurrency.lockutils [req-29ab351e-d7f1-45a4-be9d-33e98aaab4ff req-46720557-2704-4b58-bf50-c27aa2c9e14d 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-d5c2df9d-748f-4df2-9392-b45741975f65" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:40:39 compute-1 nova_compute[225855]: 2026-01-20 14:40:39.911 225859 DEBUG nova.network.neutron [req-29ab351e-d7f1-45a4-be9d-33e98aaab4ff req-46720557-2704-4b58-bf50-c27aa2c9e14d 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d5c2df9d-748f-4df2-9392-b45741975f65] Refreshing network info cache for port 0b83ac2a-727a-4db9-91f2-69f939deeb69 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 14:40:39 compute-1 nova_compute[225855]: 2026-01-20 14:40:39.914 225859 DEBUG nova.virt.libvirt.vif [None req-b2246b1c-96ec-4ece-a914-b82c394b5339 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T14:39:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-2088149366',display_name='tempest-AttachInterfacesTestJSON-server-2088149366',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-2088149366',id=66,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJZwLcyXdmuPk9iAZlMOAxeFM3EdHKE0x5nJT3i2GTbVf6EkhYVj3hEmoeSwYo6iZrNjT6w/g2TndK4CzLIvGDWLEyKfIPgg2vbEtoL1oIxCHYN2ytrctbkHi1netydaRQ==',key_name='tempest-keypair-230916378',keypairs=<?>,launch_index=0,launched_at=2026-01-20T14:39:53Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='e3f93fd4b2154dda9f38e62334904303',ramdisk_id='',reservation_id='r-odubqy1o',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-305746947',owner_user_name='tempest-AttachInterfacesTestJSON-305746947-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T14:39:53Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c8a9fb458d27434495a77a94827b6097',uuid=d5c2df9d-748f-4df2-9392-b45741975f65,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0b83ac2a-727a-4db9-91f2-69f939deeb69", "address": "fa:16:3e:e7:0e:b2", "network": {"id": "fc21b99b-4e34-422c-be05-0a440009dac4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-808285772-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3f93fd4b2154dda9f38e62334904303", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b83ac2a-72", "ovs_interfaceid": "0b83ac2a-727a-4db9-91f2-69f939deeb69", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 20 14:40:39 compute-1 nova_compute[225855]: 2026-01-20 14:40:39.914 225859 DEBUG nova.network.os_vif_util [None req-b2246b1c-96ec-4ece-a914-b82c394b5339 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Converting VIF {"id": "0b83ac2a-727a-4db9-91f2-69f939deeb69", "address": "fa:16:3e:e7:0e:b2", "network": {"id": "fc21b99b-4e34-422c-be05-0a440009dac4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-808285772-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3f93fd4b2154dda9f38e62334904303", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b83ac2a-72", "ovs_interfaceid": "0b83ac2a-727a-4db9-91f2-69f939deeb69", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:40:39 compute-1 nova_compute[225855]: 2026-01-20 14:40:39.915 225859 DEBUG nova.network.os_vif_util [None req-b2246b1c-96ec-4ece-a914-b82c394b5339 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e7:0e:b2,bridge_name='br-int',has_traffic_filtering=True,id=0b83ac2a-727a-4db9-91f2-69f939deeb69,network=Network(fc21b99b-4e34-422c-be05-0a440009dac4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap0b83ac2a-72') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:40:39 compute-1 nova_compute[225855]: 2026-01-20 14:40:39.915 225859 DEBUG os_vif [None req-b2246b1c-96ec-4ece-a914-b82c394b5339 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e7:0e:b2,bridge_name='br-int',has_traffic_filtering=True,id=0b83ac2a-727a-4db9-91f2-69f939deeb69,network=Network(fc21b99b-4e34-422c-be05-0a440009dac4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap0b83ac2a-72') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 20 14:40:39 compute-1 nova_compute[225855]: 2026-01-20 14:40:39.916 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:40:39 compute-1 nova_compute[225855]: 2026-01-20 14:40:39.916 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:40:39 compute-1 nova_compute[225855]: 2026-01-20 14:40:39.917 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:40:39 compute-1 nova_compute[225855]: 2026-01-20 14:40:39.919 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:40:39 compute-1 nova_compute[225855]: 2026-01-20 14:40:39.919 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0b83ac2a-72, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:40:39 compute-1 nova_compute[225855]: 2026-01-20 14:40:39.919 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap0b83ac2a-72, col_values=(('external_ids', {'iface-id': '0b83ac2a-727a-4db9-91f2-69f939deeb69', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e7:0e:b2', 'vm-uuid': 'd5c2df9d-748f-4df2-9392-b45741975f65'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:40:39 compute-1 nova_compute[225855]: 2026-01-20 14:40:39.921 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:40:39 compute-1 NetworkManager[49104]: <info>  [1768920039.9221] manager: (tap0b83ac2a-72): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/100)
Jan 20 14:40:39 compute-1 nova_compute[225855]: 2026-01-20 14:40:39.923 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 14:40:39 compute-1 nova_compute[225855]: 2026-01-20 14:40:39.937 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:40:39 compute-1 nova_compute[225855]: 2026-01-20 14:40:39.941 225859 INFO os_vif [None req-b2246b1c-96ec-4ece-a914-b82c394b5339 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e7:0e:b2,bridge_name='br-int',has_traffic_filtering=True,id=0b83ac2a-727a-4db9-91f2-69f939deeb69,network=Network(fc21b99b-4e34-422c-be05-0a440009dac4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap0b83ac2a-72')
Jan 20 14:40:39 compute-1 nova_compute[225855]: 2026-01-20 14:40:39.942 225859 DEBUG nova.virt.libvirt.vif [None req-b2246b1c-96ec-4ece-a914-b82c394b5339 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T14:39:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-2088149366',display_name='tempest-AttachInterfacesTestJSON-server-2088149366',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-2088149366',id=66,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJZwLcyXdmuPk9iAZlMOAxeFM3EdHKE0x5nJT3i2GTbVf6EkhYVj3hEmoeSwYo6iZrNjT6w/g2TndK4CzLIvGDWLEyKfIPgg2vbEtoL1oIxCHYN2ytrctbkHi1netydaRQ==',key_name='tempest-keypair-230916378',keypairs=<?>,launch_index=0,launched_at=2026-01-20T14:39:53Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='e3f93fd4b2154dda9f38e62334904303',ramdisk_id='',reservation_id='r-odubqy1o',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-305746947',owner_user_name='tempest-AttachInterfacesTestJSON-305746947-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T14:39:53Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c8a9fb458d27434495a77a94827b6097',uuid=d5c2df9d-748f-4df2-9392-b45741975f65,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0b83ac2a-727a-4db9-91f2-69f939deeb69", "address": "fa:16:3e:e7:0e:b2", "network": {"id": "fc21b99b-4e34-422c-be05-0a440009dac4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-808285772-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3f93fd4b2154dda9f38e62334904303", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b83ac2a-72", "ovs_interfaceid": "0b83ac2a-727a-4db9-91f2-69f939deeb69", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 20 14:40:39 compute-1 nova_compute[225855]: 2026-01-20 14:40:39.942 225859 DEBUG nova.network.os_vif_util [None req-b2246b1c-96ec-4ece-a914-b82c394b5339 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Converting VIF {"id": "0b83ac2a-727a-4db9-91f2-69f939deeb69", "address": "fa:16:3e:e7:0e:b2", "network": {"id": "fc21b99b-4e34-422c-be05-0a440009dac4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-808285772-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3f93fd4b2154dda9f38e62334904303", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b83ac2a-72", "ovs_interfaceid": "0b83ac2a-727a-4db9-91f2-69f939deeb69", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:40:39 compute-1 nova_compute[225855]: 2026-01-20 14:40:39.943 225859 DEBUG nova.network.os_vif_util [None req-b2246b1c-96ec-4ece-a914-b82c394b5339 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e7:0e:b2,bridge_name='br-int',has_traffic_filtering=True,id=0b83ac2a-727a-4db9-91f2-69f939deeb69,network=Network(fc21b99b-4e34-422c-be05-0a440009dac4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap0b83ac2a-72') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:40:39 compute-1 nova_compute[225855]: 2026-01-20 14:40:39.945 225859 DEBUG nova.virt.libvirt.guest [None req-b2246b1c-96ec-4ece-a914-b82c394b5339 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] attach device xml: <interface type="ethernet">
Jan 20 14:40:39 compute-1 nova_compute[225855]:   <mac address="fa:16:3e:e7:0e:b2"/>
Jan 20 14:40:39 compute-1 nova_compute[225855]:   <model type="virtio"/>
Jan 20 14:40:39 compute-1 nova_compute[225855]:   <driver name="vhost" rx_queue_size="512"/>
Jan 20 14:40:39 compute-1 nova_compute[225855]:   <mtu size="1442"/>
Jan 20 14:40:39 compute-1 nova_compute[225855]:   <target dev="tap0b83ac2a-72"/>
Jan 20 14:40:39 compute-1 nova_compute[225855]: </interface>
Jan 20 14:40:39 compute-1 nova_compute[225855]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Jan 20 14:40:39 compute-1 kernel: tap0b83ac2a-72: entered promiscuous mode
Jan 20 14:40:39 compute-1 NetworkManager[49104]: <info>  [1768920039.9546] manager: (tap0b83ac2a-72): new Tun device (/org/freedesktop/NetworkManager/Devices/101)
Jan 20 14:40:39 compute-1 nova_compute[225855]: 2026-01-20 14:40:39.957 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:40:39 compute-1 ovn_controller[130490]: 2026-01-20T14:40:39Z|00212|binding|INFO|Claiming lport 0b83ac2a-727a-4db9-91f2-69f939deeb69 for this chassis.
Jan 20 14:40:39 compute-1 ovn_controller[130490]: 2026-01-20T14:40:39Z|00213|binding|INFO|0b83ac2a-727a-4db9-91f2-69f939deeb69: Claiming fa:16:3e:e7:0e:b2 10.100.0.4
Jan 20 14:40:39 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:40:39.965 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e7:0e:b2 10.100.0.4'], port_security=['fa:16:3e:e7:0e:b2 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-AttachInterfacesTestJSON-2013995474', 'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'd5c2df9d-748f-4df2-9392-b45741975f65', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fc21b99b-4e34-422c-be05-0a440009dac4', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-AttachInterfacesTestJSON-2013995474', 'neutron:project_id': 'e3f93fd4b2154dda9f38e62334904303', 'neutron:revision_number': '2', 'neutron:security_group_ids': '52cb9fb4-4318-4f53-9b5a-002d95792517', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7af6b6bc-3cbd-48be-9f10-23ec011e0426, chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], logical_port=0b83ac2a-727a-4db9-91f2-69f939deeb69) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:40:39 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:40:39.967 140354 INFO neutron.agent.ovn.metadata.agent [-] Port 0b83ac2a-727a-4db9-91f2-69f939deeb69 in datapath fc21b99b-4e34-422c-be05-0a440009dac4 bound to our chassis
Jan 20 14:40:39 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:40:39.969 140354 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fc21b99b-4e34-422c-be05-0a440009dac4
Jan 20 14:40:39 compute-1 ovn_controller[130490]: 2026-01-20T14:40:39Z|00214|binding|INFO|Setting lport 0b83ac2a-727a-4db9-91f2-69f939deeb69 ovn-installed in OVS
Jan 20 14:40:39 compute-1 ovn_controller[130490]: 2026-01-20T14:40:39Z|00215|binding|INFO|Setting lport 0b83ac2a-727a-4db9-91f2-69f939deeb69 up in Southbound
Jan 20 14:40:39 compute-1 nova_compute[225855]: 2026-01-20 14:40:39.977 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:40:39 compute-1 systemd-udevd[253176]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 14:40:39 compute-1 nova_compute[225855]: 2026-01-20 14:40:39.983 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:40:39 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:40:39.986 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[efe96add-e1c0-4e02-951b-9931c2090042]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:40:39 compute-1 NetworkManager[49104]: <info>  [1768920039.9990] device (tap0b83ac2a-72): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 20 14:40:39 compute-1 NetworkManager[49104]: <info>  [1768920039.9998] device (tap0b83ac2a-72): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 20 14:40:40 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:40:40 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:40:40 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:40:40.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:40:40 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:40:40.012 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[e928b5f4-93b9-4370-ad33-474b13163ea2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:40:40 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:40:40.015 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[d0c3408f-b408-4171-9f40-fd345a07cd6c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:40:40 compute-1 nova_compute[225855]: 2026-01-20 14:40:40.028 225859 DEBUG nova.virt.libvirt.driver [None req-b2246b1c-96ec-4ece-a914-b82c394b5339 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:40:40 compute-1 nova_compute[225855]: 2026-01-20 14:40:40.029 225859 DEBUG nova.virt.libvirt.driver [None req-b2246b1c-96ec-4ece-a914-b82c394b5339 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:40:40 compute-1 nova_compute[225855]: 2026-01-20 14:40:40.029 225859 DEBUG nova.virt.libvirt.driver [None req-b2246b1c-96ec-4ece-a914-b82c394b5339 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] No VIF found with MAC fa:16:3e:3b:35:f2, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 20 14:40:40 compute-1 nova_compute[225855]: 2026-01-20 14:40:40.029 225859 DEBUG nova.virt.libvirt.driver [None req-b2246b1c-96ec-4ece-a914-b82c394b5339 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] No VIF found with MAC fa:16:3e:24:df:88, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 20 14:40:40 compute-1 nova_compute[225855]: 2026-01-20 14:40:40.029 225859 DEBUG nova.virt.libvirt.driver [None req-b2246b1c-96ec-4ece-a914-b82c394b5339 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] No VIF found with MAC fa:16:3e:d2:51:f5, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 20 14:40:40 compute-1 nova_compute[225855]: 2026-01-20 14:40:40.029 225859 DEBUG nova.virt.libvirt.driver [None req-b2246b1c-96ec-4ece-a914-b82c394b5339 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] No VIF found with MAC fa:16:3e:e7:0e:b2, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 20 14:40:40 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:40:40.044 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[ced0e286-95d2-4f45-ba1a-457ba2ee96a4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:40:40 compute-1 nova_compute[225855]: 2026-01-20 14:40:40.054 225859 DEBUG nova.virt.libvirt.guest [None req-b2246b1c-96ec-4ece-a914-b82c394b5339 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 14:40:40 compute-1 nova_compute[225855]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 14:40:40 compute-1 nova_compute[225855]:   <nova:name>tempest-AttachInterfacesTestJSON-server-2088149366</nova:name>
Jan 20 14:40:40 compute-1 nova_compute[225855]:   <nova:creationTime>2026-01-20 14:40:40</nova:creationTime>
Jan 20 14:40:40 compute-1 nova_compute[225855]:   <nova:flavor name="m1.nano">
Jan 20 14:40:40 compute-1 nova_compute[225855]:     <nova:memory>128</nova:memory>
Jan 20 14:40:40 compute-1 nova_compute[225855]:     <nova:disk>1</nova:disk>
Jan 20 14:40:40 compute-1 nova_compute[225855]:     <nova:swap>0</nova:swap>
Jan 20 14:40:40 compute-1 nova_compute[225855]:     <nova:ephemeral>0</nova:ephemeral>
Jan 20 14:40:40 compute-1 nova_compute[225855]:     <nova:vcpus>1</nova:vcpus>
Jan 20 14:40:40 compute-1 nova_compute[225855]:   </nova:flavor>
Jan 20 14:40:40 compute-1 nova_compute[225855]:   <nova:owner>
Jan 20 14:40:40 compute-1 nova_compute[225855]:     <nova:user uuid="c8a9fb458d27434495a77a94827b6097">tempest-AttachInterfacesTestJSON-305746947-project-member</nova:user>
Jan 20 14:40:40 compute-1 nova_compute[225855]:     <nova:project uuid="e3f93fd4b2154dda9f38e62334904303">tempest-AttachInterfacesTestJSON-305746947</nova:project>
Jan 20 14:40:40 compute-1 nova_compute[225855]:   </nova:owner>
Jan 20 14:40:40 compute-1 nova_compute[225855]:   <nova:root type="image" uuid="a32b3e07-16d8-46fd-9a7b-c242c432fcf9"/>
Jan 20 14:40:40 compute-1 nova_compute[225855]:   <nova:ports>
Jan 20 14:40:40 compute-1 nova_compute[225855]:     <nova:port uuid="b48170b0-717d-48f0-8172-742a4a8596e9">
Jan 20 14:40:40 compute-1 nova_compute[225855]:       <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Jan 20 14:40:40 compute-1 nova_compute[225855]:     </nova:port>
Jan 20 14:40:40 compute-1 nova_compute[225855]:     <nova:port uuid="db46acd4-809b-4127-ad48-870ae429b4d6">
Jan 20 14:40:40 compute-1 nova_compute[225855]:       <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Jan 20 14:40:40 compute-1 nova_compute[225855]:     </nova:port>
Jan 20 14:40:40 compute-1 nova_compute[225855]:     <nova:port uuid="5fd7b3ad-0cf1-4294-b552-6141c8ee85bd">
Jan 20 14:40:40 compute-1 nova_compute[225855]:       <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Jan 20 14:40:40 compute-1 nova_compute[225855]:     </nova:port>
Jan 20 14:40:40 compute-1 nova_compute[225855]:     <nova:port uuid="0b83ac2a-727a-4db9-91f2-69f939deeb69">
Jan 20 14:40:40 compute-1 nova_compute[225855]:       <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Jan 20 14:40:40 compute-1 nova_compute[225855]:     </nova:port>
Jan 20 14:40:40 compute-1 nova_compute[225855]:   </nova:ports>
Jan 20 14:40:40 compute-1 nova_compute[225855]: </nova:instance>
Jan 20 14:40:40 compute-1 nova_compute[225855]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Jan 20 14:40:40 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:40:40.061 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[d24fb849-2302-4a35-9fd0-eaa44c8a6988]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfc21b99b-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b1:5b:d2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 9, 'rx_bytes': 784, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 9, 'rx_bytes': 784, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 55], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 503233, 'reachable_time': 33912, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 253183, 'error': None, 'target': 'ovnmeta-fc21b99b-4e34-422c-be05-0a440009dac4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:40:40 compute-1 nova_compute[225855]: 2026-01-20 14:40:40.077 225859 DEBUG oslo_concurrency.lockutils [None req-b2246b1c-96ec-4ece-a914-b82c394b5339 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Lock "interface-d5c2df9d-748f-4df2-9392-b45741975f65-0b83ac2a-727a-4db9-91f2-69f939deeb69" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 7.969s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:40:40 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:40:40.077 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[fe8aff4e-d663-4f15-a247-25274c4fad5f]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapfc21b99b-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 503245, 'tstamp': 503245}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 253184, 'error': None, 'target': 'ovnmeta-fc21b99b-4e34-422c-be05-0a440009dac4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapfc21b99b-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 503248, 'tstamp': 503248}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 253184, 'error': None, 'target': 'ovnmeta-fc21b99b-4e34-422c-be05-0a440009dac4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:40:40 compute-1 ovn_controller[130490]: 2026-01-20T14:40:40Z|00216|binding|INFO|Releasing lport 8cff17f5-b792-4e6f-8f1e-6c48322af961 from this chassis (sb_readonly=0)
Jan 20 14:40:40 compute-1 ovn_controller[130490]: 2026-01-20T14:40:40Z|00217|binding|INFO|Releasing lport 583df905-1d9f-49c1-b209-4b7fad1599f6 from this chassis (sb_readonly=0)
Jan 20 14:40:40 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:40:40.079 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfc21b99b-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:40:40 compute-1 nova_compute[225855]: 2026-01-20 14:40:40.081 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:40:40 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:40:40.082 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfc21b99b-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:40:40 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:40:40.082 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:40:40 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:40:40.083 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfc21b99b-40, col_values=(('external_ids', {'iface-id': '583df905-1d9f-49c1-b209-4b7fad1599f6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:40:40 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:40:40.083 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:40:40 compute-1 nova_compute[225855]: 2026-01-20 14:40:40.089 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:40:40 compute-1 nova_compute[225855]: 2026-01-20 14:40:40.147 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:40:40 compute-1 nova_compute[225855]: 2026-01-20 14:40:40.301 225859 DEBUG nova.compute.manager [req-48374b3d-1394-4b47-9446-d588c4bacf94 req-5766119b-4130-4a62-abb1-ca73edb8064e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d5c2df9d-748f-4df2-9392-b45741975f65] Received event network-vif-plugged-0b83ac2a-727a-4db9-91f2-69f939deeb69 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:40:40 compute-1 nova_compute[225855]: 2026-01-20 14:40:40.301 225859 DEBUG oslo_concurrency.lockutils [req-48374b3d-1394-4b47-9446-d588c4bacf94 req-5766119b-4130-4a62-abb1-ca73edb8064e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "d5c2df9d-748f-4df2-9392-b45741975f65-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:40:40 compute-1 nova_compute[225855]: 2026-01-20 14:40:40.302 225859 DEBUG oslo_concurrency.lockutils [req-48374b3d-1394-4b47-9446-d588c4bacf94 req-5766119b-4130-4a62-abb1-ca73edb8064e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "d5c2df9d-748f-4df2-9392-b45741975f65-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:40:40 compute-1 nova_compute[225855]: 2026-01-20 14:40:40.302 225859 DEBUG oslo_concurrency.lockutils [req-48374b3d-1394-4b47-9446-d588c4bacf94 req-5766119b-4130-4a62-abb1-ca73edb8064e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "d5c2df9d-748f-4df2-9392-b45741975f65-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:40:40 compute-1 nova_compute[225855]: 2026-01-20 14:40:40.302 225859 DEBUG nova.compute.manager [req-48374b3d-1394-4b47-9446-d588c4bacf94 req-5766119b-4130-4a62-abb1-ca73edb8064e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d5c2df9d-748f-4df2-9392-b45741975f65] No waiting events found dispatching network-vif-plugged-0b83ac2a-727a-4db9-91f2-69f939deeb69 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:40:40 compute-1 nova_compute[225855]: 2026-01-20 14:40:40.302 225859 WARNING nova.compute.manager [req-48374b3d-1394-4b47-9446-d588c4bacf94 req-5766119b-4130-4a62-abb1-ca73edb8064e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d5c2df9d-748f-4df2-9392-b45741975f65] Received unexpected event network-vif-plugged-0b83ac2a-727a-4db9-91f2-69f939deeb69 for instance with vm_state active and task_state None.
Jan 20 14:40:40 compute-1 ceph-mon[81775]: pgmap v1569: 321 pgs: 321 active+clean; 278 MiB data, 739 MiB used, 20 GiB / 21 GiB avail; 760 KiB/s rd, 2.6 MiB/s wr, 123 op/s
Jan 20 14:40:40 compute-1 nova_compute[225855]: 2026-01-20 14:40:40.862 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:40:40 compute-1 nova_compute[225855]: 2026-01-20 14:40:40.863 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:40:41 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:40:41 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:40:41 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:40:41.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:40:41 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:40:41 compute-1 nova_compute[225855]: 2026-01-20 14:40:41.375 225859 DEBUG nova.network.neutron [req-29ab351e-d7f1-45a4-be9d-33e98aaab4ff req-46720557-2704-4b58-bf50-c27aa2c9e14d 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d5c2df9d-748f-4df2-9392-b45741975f65] Updated VIF entry in instance network info cache for port 0b83ac2a-727a-4db9-91f2-69f939deeb69. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 14:40:41 compute-1 nova_compute[225855]: 2026-01-20 14:40:41.376 225859 DEBUG nova.network.neutron [req-29ab351e-d7f1-45a4-be9d-33e98aaab4ff req-46720557-2704-4b58-bf50-c27aa2c9e14d 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d5c2df9d-748f-4df2-9392-b45741975f65] Updating instance_info_cache with network_info: [{"id": "b48170b0-717d-48f0-8172-742a4a8596e9", "address": "fa:16:3e:3b:35:f2", "network": {"id": "fc21b99b-4e34-422c-be05-0a440009dac4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-808285772-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.174", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3f93fd4b2154dda9f38e62334904303", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb48170b0-71", "ovs_interfaceid": "b48170b0-717d-48f0-8172-742a4a8596e9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "db46acd4-809b-4127-ad48-870ae429b4d6", "address": "fa:16:3e:24:df:88", "network": {"id": "fc21b99b-4e34-422c-be05-0a440009dac4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-808285772-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3f93fd4b2154dda9f38e62334904303", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb46acd4-80", "ovs_interfaceid": "db46acd4-809b-4127-ad48-870ae429b4d6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "5fd7b3ad-0cf1-4294-b552-6141c8ee85bd", "address": "fa:16:3e:d2:51:f5", "network": {"id": "fc21b99b-4e34-422c-be05-0a440009dac4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-808285772-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3f93fd4b2154dda9f38e62334904303", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5fd7b3ad-0c", "ovs_interfaceid": "5fd7b3ad-0cf1-4294-b552-6141c8ee85bd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "0b83ac2a-727a-4db9-91f2-69f939deeb69", "address": "fa:16:3e:e7:0e:b2", "network": {"id": "fc21b99b-4e34-422c-be05-0a440009dac4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-808285772-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3f93fd4b2154dda9f38e62334904303", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b83ac2a-72", "ovs_interfaceid": "0b83ac2a-727a-4db9-91f2-69f939deeb69", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:40:41 compute-1 nova_compute[225855]: 2026-01-20 14:40:41.393 225859 DEBUG oslo_concurrency.lockutils [req-29ab351e-d7f1-45a4-be9d-33e98aaab4ff req-46720557-2704-4b58-bf50-c27aa2c9e14d 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-d5c2df9d-748f-4df2-9392-b45741975f65" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:40:41 compute-1 nova_compute[225855]: 2026-01-20 14:40:41.982 225859 DEBUG oslo_concurrency.lockutils [None req-72b0302c-c7bb-4648-801b-9d4e27b907e8 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Acquiring lock "interface-d5c2df9d-748f-4df2-9392-b45741975f65-db46acd4-809b-4127-ad48-870ae429b4d6" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:40:41 compute-1 nova_compute[225855]: 2026-01-20 14:40:41.983 225859 DEBUG oslo_concurrency.lockutils [None req-72b0302c-c7bb-4648-801b-9d4e27b907e8 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Lock "interface-d5c2df9d-748f-4df2-9392-b45741975f65-db46acd4-809b-4127-ad48-870ae429b4d6" acquired by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:40:41 compute-1 nova_compute[225855]: 2026-01-20 14:40:41.999 225859 DEBUG nova.objects.instance [None req-72b0302c-c7bb-4648-801b-9d4e27b907e8 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Lazy-loading 'flavor' on Instance uuid d5c2df9d-748f-4df2-9392-b45741975f65 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:40:42 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:40:42 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:40:42 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:40:42.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:40:42 compute-1 nova_compute[225855]: 2026-01-20 14:40:42.016 225859 DEBUG nova.virt.libvirt.vif [None req-72b0302c-c7bb-4648-801b-9d4e27b907e8 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T14:39:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-2088149366',display_name='tempest-AttachInterfacesTestJSON-server-2088149366',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-2088149366',id=66,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJZwLcyXdmuPk9iAZlMOAxeFM3EdHKE0x5nJT3i2GTbVf6EkhYVj3hEmoeSwYo6iZrNjT6w/g2TndK4CzLIvGDWLEyKfIPgg2vbEtoL1oIxCHYN2ytrctbkHi1netydaRQ==',key_name='tempest-keypair-230916378',keypairs=<?>,launch_index=0,launched_at=2026-01-20T14:39:53Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='e3f93fd4b2154dda9f38e62334904303',ramdisk_id='',reservation_id='r-odubqy1o',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-305746947',owner_user_name='tempest-AttachInterfacesTestJSON-305746947-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T14:39:53Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c8a9fb458d27434495a77a94827b6097',uuid=d5c2df9d-748f-4df2-9392-b45741975f65,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "db46acd4-809b-4127-ad48-870ae429b4d6", "address": "fa:16:3e:24:df:88", "network": {"id": "fc21b99b-4e34-422c-be05-0a440009dac4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-808285772-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3f93fd4b2154dda9f38e62334904303", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb46acd4-80", "ovs_interfaceid": "db46acd4-809b-4127-ad48-870ae429b4d6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 20 14:40:42 compute-1 nova_compute[225855]: 2026-01-20 14:40:42.017 225859 DEBUG nova.network.os_vif_util [None req-72b0302c-c7bb-4648-801b-9d4e27b907e8 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Converting VIF {"id": "db46acd4-809b-4127-ad48-870ae429b4d6", "address": "fa:16:3e:24:df:88", "network": {"id": "fc21b99b-4e34-422c-be05-0a440009dac4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-808285772-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3f93fd4b2154dda9f38e62334904303", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb46acd4-80", "ovs_interfaceid": "db46acd4-809b-4127-ad48-870ae429b4d6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:40:42 compute-1 nova_compute[225855]: 2026-01-20 14:40:42.017 225859 DEBUG nova.network.os_vif_util [None req-72b0302c-c7bb-4648-801b-9d4e27b907e8 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:24:df:88,bridge_name='br-int',has_traffic_filtering=True,id=db46acd4-809b-4127-ad48-870ae429b4d6,network=Network(fc21b99b-4e34-422c-be05-0a440009dac4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdb46acd4-80') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:40:42 compute-1 nova_compute[225855]: 2026-01-20 14:40:42.020 225859 DEBUG nova.virt.libvirt.guest [None req-72b0302c-c7bb-4648-801b-9d4e27b907e8 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:24:df:88"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapdb46acd4-80"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Jan 20 14:40:42 compute-1 nova_compute[225855]: 2026-01-20 14:40:42.022 225859 DEBUG nova.virt.libvirt.guest [None req-72b0302c-c7bb-4648-801b-9d4e27b907e8 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:24:df:88"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapdb46acd4-80"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Jan 20 14:40:42 compute-1 nova_compute[225855]: 2026-01-20 14:40:42.025 225859 DEBUG nova.virt.libvirt.driver [None req-72b0302c-c7bb-4648-801b-9d4e27b907e8 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Attempting to detach device tapdb46acd4-80 from instance d5c2df9d-748f-4df2-9392-b45741975f65 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Jan 20 14:40:42 compute-1 nova_compute[225855]: 2026-01-20 14:40:42.025 225859 DEBUG nova.virt.libvirt.guest [None req-72b0302c-c7bb-4648-801b-9d4e27b907e8 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] detach device xml: <interface type="ethernet">
Jan 20 14:40:42 compute-1 nova_compute[225855]:   <mac address="fa:16:3e:24:df:88"/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:   <model type="virtio"/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:   <driver name="vhost" rx_queue_size="512"/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:   <mtu size="1442"/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:   <target dev="tapdb46acd4-80"/>
Jan 20 14:40:42 compute-1 nova_compute[225855]: </interface>
Jan 20 14:40:42 compute-1 nova_compute[225855]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Jan 20 14:40:42 compute-1 nova_compute[225855]: 2026-01-20 14:40:42.033 225859 DEBUG nova.virt.libvirt.guest [None req-72b0302c-c7bb-4648-801b-9d4e27b907e8 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:24:df:88"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapdb46acd4-80"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Jan 20 14:40:42 compute-1 nova_compute[225855]: 2026-01-20 14:40:42.037 225859 DEBUG nova.virt.libvirt.guest [None req-72b0302c-c7bb-4648-801b-9d4e27b907e8 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:24:df:88"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapdb46acd4-80"/></interface>not found in domain: <domain type='kvm' id='29'>
Jan 20 14:40:42 compute-1 nova_compute[225855]:   <name>instance-00000042</name>
Jan 20 14:40:42 compute-1 nova_compute[225855]:   <uuid>d5c2df9d-748f-4df2-9392-b45741975f65</uuid>
Jan 20 14:40:42 compute-1 nova_compute[225855]:   <metadata>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 14:40:42 compute-1 nova_compute[225855]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:   <nova:name>tempest-AttachInterfacesTestJSON-server-2088149366</nova:name>
Jan 20 14:40:42 compute-1 nova_compute[225855]:   <nova:creationTime>2026-01-20 14:40:40</nova:creationTime>
Jan 20 14:40:42 compute-1 nova_compute[225855]:   <nova:flavor name="m1.nano">
Jan 20 14:40:42 compute-1 nova_compute[225855]:     <nova:memory>128</nova:memory>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     <nova:disk>1</nova:disk>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     <nova:swap>0</nova:swap>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     <nova:ephemeral>0</nova:ephemeral>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     <nova:vcpus>1</nova:vcpus>
Jan 20 14:40:42 compute-1 nova_compute[225855]:   </nova:flavor>
Jan 20 14:40:42 compute-1 nova_compute[225855]:   <nova:owner>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     <nova:user uuid="c8a9fb458d27434495a77a94827b6097">tempest-AttachInterfacesTestJSON-305746947-project-member</nova:user>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     <nova:project uuid="e3f93fd4b2154dda9f38e62334904303">tempest-AttachInterfacesTestJSON-305746947</nova:project>
Jan 20 14:40:42 compute-1 nova_compute[225855]:   </nova:owner>
Jan 20 14:40:42 compute-1 nova_compute[225855]:   <nova:root type="image" uuid="a32b3e07-16d8-46fd-9a7b-c242c432fcf9"/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:   <nova:ports>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     <nova:port uuid="b48170b0-717d-48f0-8172-742a4a8596e9">
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     </nova:port>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     <nova:port uuid="db46acd4-809b-4127-ad48-870ae429b4d6">
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     </nova:port>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     <nova:port uuid="5fd7b3ad-0cf1-4294-b552-6141c8ee85bd">
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     </nova:port>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     <nova:port uuid="0b83ac2a-727a-4db9-91f2-69f939deeb69">
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     </nova:port>
Jan 20 14:40:42 compute-1 nova_compute[225855]:   </nova:ports>
Jan 20 14:40:42 compute-1 nova_compute[225855]: </nova:instance>
Jan 20 14:40:42 compute-1 nova_compute[225855]:   </metadata>
Jan 20 14:40:42 compute-1 nova_compute[225855]:   <memory unit='KiB'>131072</memory>
Jan 20 14:40:42 compute-1 nova_compute[225855]:   <currentMemory unit='KiB'>131072</currentMemory>
Jan 20 14:40:42 compute-1 nova_compute[225855]:   <vcpu placement='static'>1</vcpu>
Jan 20 14:40:42 compute-1 nova_compute[225855]:   <resource>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     <partition>/machine</partition>
Jan 20 14:40:42 compute-1 nova_compute[225855]:   </resource>
Jan 20 14:40:42 compute-1 nova_compute[225855]:   <sysinfo type='smbios'>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     <system>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <entry name='manufacturer'>RDO</entry>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <entry name='product'>OpenStack Compute</entry>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <entry name='serial'>d5c2df9d-748f-4df2-9392-b45741975f65</entry>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <entry name='uuid'>d5c2df9d-748f-4df2-9392-b45741975f65</entry>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <entry name='family'>Virtual Machine</entry>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     </system>
Jan 20 14:40:42 compute-1 nova_compute[225855]:   </sysinfo>
Jan 20 14:40:42 compute-1 nova_compute[225855]:   <os>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     <boot dev='hd'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     <smbios mode='sysinfo'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:   </os>
Jan 20 14:40:42 compute-1 nova_compute[225855]:   <features>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     <acpi/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     <apic/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     <vmcoreinfo state='on'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:   </features>
Jan 20 14:40:42 compute-1 nova_compute[225855]:   <cpu mode='custom' match='exact' check='full'>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     <model fallback='forbid'>Nehalem</model>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     <feature policy='require' name='x2apic'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     <feature policy='require' name='hypervisor'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     <feature policy='require' name='vme'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:   </cpu>
Jan 20 14:40:42 compute-1 nova_compute[225855]:   <clock offset='utc'>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     <timer name='pit' tickpolicy='delay'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     <timer name='rtc' tickpolicy='catchup'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     <timer name='hpet' present='no'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:   </clock>
Jan 20 14:40:42 compute-1 nova_compute[225855]:   <on_poweroff>destroy</on_poweroff>
Jan 20 14:40:42 compute-1 nova_compute[225855]:   <on_reboot>restart</on_reboot>
Jan 20 14:40:42 compute-1 nova_compute[225855]:   <on_crash>destroy</on_crash>
Jan 20 14:40:42 compute-1 nova_compute[225855]:   <devices>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     <disk type='network' device='disk'>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <driver name='qemu' type='raw' cache='none'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <auth username='openstack'>
Jan 20 14:40:42 compute-1 nova_compute[225855]:         <secret type='ceph' uuid='e399cf45-e6b6-5393-99f1-75c601d3f188'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       </auth>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <source protocol='rbd' name='vms/d5c2df9d-748f-4df2-9392-b45741975f65_disk' index='2'>
Jan 20 14:40:42 compute-1 nova_compute[225855]:         <host name='192.168.122.100' port='6789'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:         <host name='192.168.122.102' port='6789'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:         <host name='192.168.122.101' port='6789'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       </source>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <target dev='vda' bus='virtio'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <alias name='virtio-disk0'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     </disk>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     <disk type='network' device='cdrom'>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <driver name='qemu' type='raw' cache='none'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <auth username='openstack'>
Jan 20 14:40:42 compute-1 nova_compute[225855]:         <secret type='ceph' uuid='e399cf45-e6b6-5393-99f1-75c601d3f188'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       </auth>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <source protocol='rbd' name='vms/d5c2df9d-748f-4df2-9392-b45741975f65_disk.config' index='1'>
Jan 20 14:40:42 compute-1 nova_compute[225855]:         <host name='192.168.122.100' port='6789'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:         <host name='192.168.122.102' port='6789'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:         <host name='192.168.122.101' port='6789'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       </source>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <target dev='sda' bus='sata'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <readonly/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <alias name='sata0-0-0'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     </disk>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     <controller type='pci' index='0' model='pcie-root'>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <alias name='pcie.0'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     </controller>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     <controller type='pci' index='1' model='pcie-root-port'>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <target chassis='1' port='0x10'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <alias name='pci.1'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     </controller>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     <controller type='pci' index='2' model='pcie-root-port'>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <target chassis='2' port='0x11'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <alias name='pci.2'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     </controller>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     <controller type='pci' index='3' model='pcie-root-port'>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <target chassis='3' port='0x12'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <alias name='pci.3'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     </controller>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     <controller type='pci' index='4' model='pcie-root-port'>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <target chassis='4' port='0x13'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <alias name='pci.4'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     </controller>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     <controller type='pci' index='5' model='pcie-root-port'>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <target chassis='5' port='0x14'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <alias name='pci.5'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     </controller>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     <controller type='pci' index='6' model='pcie-root-port'>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <target chassis='6' port='0x15'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <alias name='pci.6'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     </controller>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     <controller type='pci' index='7' model='pcie-root-port'>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <target chassis='7' port='0x16'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <alias name='pci.7'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     </controller>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     <controller type='pci' index='8' model='pcie-root-port'>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <target chassis='8' port='0x17'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <alias name='pci.8'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     </controller>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     <controller type='pci' index='9' model='pcie-root-port'>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <target chassis='9' port='0x18'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <alias name='pci.9'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     </controller>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     <controller type='pci' index='10' model='pcie-root-port'>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <target chassis='10' port='0x19'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <alias name='pci.10'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     </controller>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     <controller type='pci' index='11' model='pcie-root-port'>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <target chassis='11' port='0x1a'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <alias name='pci.11'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     </controller>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     <controller type='pci' index='12' model='pcie-root-port'>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <target chassis='12' port='0x1b'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <alias name='pci.12'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     </controller>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     <controller type='pci' index='13' model='pcie-root-port'>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <target chassis='13' port='0x1c'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <alias name='pci.13'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     </controller>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     <controller type='pci' index='14' model='pcie-root-port'>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <target chassis='14' port='0x1d'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <alias name='pci.14'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     </controller>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     <controller type='pci' index='15' model='pcie-root-port'>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <target chassis='15' port='0x1e'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <alias name='pci.15'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     </controller>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     <controller type='pci' index='16' model='pcie-root-port'>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <target chassis='16' port='0x1f'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <alias name='pci.16'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     </controller>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     <controller type='pci' index='17' model='pcie-root-port'>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <target chassis='17' port='0x20'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <alias name='pci.17'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     </controller>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     <controller type='pci' index='18' model='pcie-root-port'>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <target chassis='18' port='0x21'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <alias name='pci.18'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     </controller>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     <controller type='pci' index='19' model='pcie-root-port'>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <target chassis='19' port='0x22'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <alias name='pci.19'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     </controller>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     <controller type='pci' index='20' model='pcie-root-port'>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <target chassis='20' port='0x23'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <alias name='pci.20'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     </controller>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     <controller type='pci' index='21' model='pcie-root-port'>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <target chassis='21' port='0x24'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <alias name='pci.21'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     </controller>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     <controller type='pci' index='22' model='pcie-root-port'>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <target chassis='22' port='0x25'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <alias name='pci.22'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     </controller>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     <controller type='pci' index='23' model='pcie-root-port'>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <target chassis='23' port='0x26'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <alias name='pci.23'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     </controller>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     <controller type='pci' index='24' model='pcie-root-port'>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <target chassis='24' port='0x27'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <alias name='pci.24'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     </controller>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     <controller type='pci' index='25' model='pcie-root-port'>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <target chassis='25' port='0x28'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <alias name='pci.25'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     </controller>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <model name='pcie-pci-bridge'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <alias name='pci.26'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     </controller>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     <controller type='usb' index='0' model='piix3-uhci'>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <alias name='usb'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     </controller>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     <controller type='sata' index='0'>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <alias name='ide'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     </controller>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     <interface type='ethernet'>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <mac address='fa:16:3e:3b:35:f2'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <target dev='tapb48170b0-71'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <model type='virtio'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <driver name='vhost' rx_queue_size='512'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <mtu size='1442'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <alias name='net0'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     </interface>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     <interface type='ethernet'>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <mac address='fa:16:3e:24:df:88'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <target dev='tapdb46acd4-80'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <model type='virtio'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <driver name='vhost' rx_queue_size='512'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <mtu size='1442'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <alias name='net1'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     </interface>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     <interface type='ethernet'>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <mac address='fa:16:3e:d2:51:f5'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <target dev='tap5fd7b3ad-0c'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <model type='virtio'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <driver name='vhost' rx_queue_size='512'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <mtu size='1442'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <alias name='net2'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     </interface>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     <interface type='ethernet'>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <mac address='fa:16:3e:e7:0e:b2'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <target dev='tap0b83ac2a-72'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <model type='virtio'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <driver name='vhost' rx_queue_size='512'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <mtu size='1442'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <alias name='net3'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x08' slot='0x00' function='0x0'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     </interface>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     <serial type='pty'>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <source path='/dev/pts/0'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <log file='/var/lib/nova/instances/d5c2df9d-748f-4df2-9392-b45741975f65/console.log' append='off'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <target type='isa-serial' port='0'>
Jan 20 14:40:42 compute-1 nova_compute[225855]:         <model name='isa-serial'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       </target>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <alias name='serial0'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     </serial>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     <console type='pty' tty='/dev/pts/0'>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <source path='/dev/pts/0'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <log file='/var/lib/nova/instances/d5c2df9d-748f-4df2-9392-b45741975f65/console.log' append='off'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <target type='serial' port='0'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <alias name='serial0'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     </console>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     <input type='tablet' bus='usb'>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <alias name='input0'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <address type='usb' bus='0' port='1'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     </input>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     <input type='mouse' bus='ps2'>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <alias name='input1'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     </input>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     <input type='keyboard' bus='ps2'>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <alias name='input2'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     </input>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <listen type='address' address='::0'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     </graphics>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     <audio id='1' type='none'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     <video>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <model type='virtio' heads='1' primary='yes'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <alias name='video0'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     </video>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     <watchdog model='itco' action='reset'>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <alias name='watchdog0'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     </watchdog>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     <memballoon model='virtio'>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <stats period='10'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <alias name='balloon0'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     </memballoon>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     <rng model='virtio'>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <backend model='random'>/dev/urandom</backend>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <alias name='rng0'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     </rng>
Jan 20 14:40:42 compute-1 nova_compute[225855]:   </devices>
Jan 20 14:40:42 compute-1 nova_compute[225855]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     <label>system_u:system_r:svirt_t:s0:c277,c849</label>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c277,c849</imagelabel>
Jan 20 14:40:42 compute-1 nova_compute[225855]:   </seclabel>
Jan 20 14:40:42 compute-1 nova_compute[225855]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     <label>+107:+107</label>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     <imagelabel>+107:+107</imagelabel>
Jan 20 14:40:42 compute-1 nova_compute[225855]:   </seclabel>
Jan 20 14:40:42 compute-1 nova_compute[225855]: </domain>
Jan 20 14:40:42 compute-1 nova_compute[225855]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Jan 20 14:40:42 compute-1 nova_compute[225855]: 2026-01-20 14:40:42.039 225859 INFO nova.virt.libvirt.driver [None req-72b0302c-c7bb-4648-801b-9d4e27b907e8 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Successfully detached device tapdb46acd4-80 from instance d5c2df9d-748f-4df2-9392-b45741975f65 from the persistent domain config.
Jan 20 14:40:42 compute-1 nova_compute[225855]: 2026-01-20 14:40:42.039 225859 DEBUG nova.virt.libvirt.driver [None req-72b0302c-c7bb-4648-801b-9d4e27b907e8 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] (1/8): Attempting to detach device tapdb46acd4-80 with device alias net1 from instance d5c2df9d-748f-4df2-9392-b45741975f65 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Jan 20 14:40:42 compute-1 nova_compute[225855]: 2026-01-20 14:40:42.040 225859 DEBUG nova.virt.libvirt.guest [None req-72b0302c-c7bb-4648-801b-9d4e27b907e8 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] detach device xml: <interface type="ethernet">
Jan 20 14:40:42 compute-1 nova_compute[225855]:   <mac address="fa:16:3e:24:df:88"/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:   <model type="virtio"/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:   <driver name="vhost" rx_queue_size="512"/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:   <mtu size="1442"/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:   <target dev="tapdb46acd4-80"/>
Jan 20 14:40:42 compute-1 nova_compute[225855]: </interface>
Jan 20 14:40:42 compute-1 nova_compute[225855]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Jan 20 14:40:42 compute-1 kernel: tapdb46acd4-80 (unregistering): left promiscuous mode
Jan 20 14:40:42 compute-1 NetworkManager[49104]: <info>  [1768920042.1373] device (tapdb46acd4-80): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 20 14:40:42 compute-1 nova_compute[225855]: 2026-01-20 14:40:42.148 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:40:42 compute-1 ovn_controller[130490]: 2026-01-20T14:40:42Z|00218|binding|INFO|Releasing lport db46acd4-809b-4127-ad48-870ae429b4d6 from this chassis (sb_readonly=0)
Jan 20 14:40:42 compute-1 ovn_controller[130490]: 2026-01-20T14:40:42Z|00219|binding|INFO|Setting lport db46acd4-809b-4127-ad48-870ae429b4d6 down in Southbound
Jan 20 14:40:42 compute-1 ovn_controller[130490]: 2026-01-20T14:40:42Z|00220|binding|INFO|Removing iface tapdb46acd4-80 ovn-installed in OVS
Jan 20 14:40:42 compute-1 nova_compute[225855]: 2026-01-20 14:40:42.150 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:40:42 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:40:42.154 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:24:df:88 10.100.0.12'], port_security=['fa:16:3e:24:df:88 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'd5c2df9d-748f-4df2-9392-b45741975f65', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fc21b99b-4e34-422c-be05-0a440009dac4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e3f93fd4b2154dda9f38e62334904303', 'neutron:revision_number': '4', 'neutron:security_group_ids': '52cb9fb4-4318-4f53-9b5a-002d95792517', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7af6b6bc-3cbd-48be-9f10-23ec011e0426, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], logical_port=db46acd4-809b-4127-ad48-870ae429b4d6) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:40:42 compute-1 nova_compute[225855]: 2026-01-20 14:40:42.155 225859 DEBUG nova.virt.libvirt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Received event <DeviceRemovedEvent: 1768920042.1547081, d5c2df9d-748f-4df2-9392-b45741975f65 => net1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Jan 20 14:40:42 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:40:42.155 140354 INFO neutron.agent.ovn.metadata.agent [-] Port db46acd4-809b-4127-ad48-870ae429b4d6 in datapath fc21b99b-4e34-422c-be05-0a440009dac4 unbound from our chassis
Jan 20 14:40:42 compute-1 nova_compute[225855]: 2026-01-20 14:40:42.156 225859 DEBUG nova.virt.libvirt.driver [None req-72b0302c-c7bb-4648-801b-9d4e27b907e8 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Start waiting for the detach event from libvirt for device tapdb46acd4-80 with device alias net1 for instance d5c2df9d-748f-4df2-9392-b45741975f65 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Jan 20 14:40:42 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:40:42.156 140354 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fc21b99b-4e34-422c-be05-0a440009dac4
Jan 20 14:40:42 compute-1 nova_compute[225855]: 2026-01-20 14:40:42.156 225859 DEBUG nova.virt.libvirt.guest [None req-72b0302c-c7bb-4648-801b-9d4e27b907e8 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:24:df:88"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapdb46acd4-80"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Jan 20 14:40:42 compute-1 nova_compute[225855]: 2026-01-20 14:40:42.159 225859 DEBUG nova.virt.libvirt.guest [None req-72b0302c-c7bb-4648-801b-9d4e27b907e8 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:24:df:88"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapdb46acd4-80"/></interface>not found in domain: <domain type='kvm' id='29'>
Jan 20 14:40:42 compute-1 nova_compute[225855]:   <name>instance-00000042</name>
Jan 20 14:40:42 compute-1 nova_compute[225855]:   <uuid>d5c2df9d-748f-4df2-9392-b45741975f65</uuid>
Jan 20 14:40:42 compute-1 nova_compute[225855]:   <metadata>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 14:40:42 compute-1 nova_compute[225855]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:   <nova:name>tempest-AttachInterfacesTestJSON-server-2088149366</nova:name>
Jan 20 14:40:42 compute-1 nova_compute[225855]:   <nova:creationTime>2026-01-20 14:40:40</nova:creationTime>
Jan 20 14:40:42 compute-1 nova_compute[225855]:   <nova:flavor name="m1.nano">
Jan 20 14:40:42 compute-1 nova_compute[225855]:     <nova:memory>128</nova:memory>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     <nova:disk>1</nova:disk>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     <nova:swap>0</nova:swap>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     <nova:ephemeral>0</nova:ephemeral>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     <nova:vcpus>1</nova:vcpus>
Jan 20 14:40:42 compute-1 nova_compute[225855]:   </nova:flavor>
Jan 20 14:40:42 compute-1 nova_compute[225855]:   <nova:owner>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     <nova:user uuid="c8a9fb458d27434495a77a94827b6097">tempest-AttachInterfacesTestJSON-305746947-project-member</nova:user>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     <nova:project uuid="e3f93fd4b2154dda9f38e62334904303">tempest-AttachInterfacesTestJSON-305746947</nova:project>
Jan 20 14:40:42 compute-1 nova_compute[225855]:   </nova:owner>
Jan 20 14:40:42 compute-1 nova_compute[225855]:   <nova:root type="image" uuid="a32b3e07-16d8-46fd-9a7b-c242c432fcf9"/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:   <nova:ports>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     <nova:port uuid="b48170b0-717d-48f0-8172-742a4a8596e9">
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     </nova:port>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     <nova:port uuid="db46acd4-809b-4127-ad48-870ae429b4d6">
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     </nova:port>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     <nova:port uuid="5fd7b3ad-0cf1-4294-b552-6141c8ee85bd">
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     </nova:port>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     <nova:port uuid="0b83ac2a-727a-4db9-91f2-69f939deeb69">
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     </nova:port>
Jan 20 14:40:42 compute-1 nova_compute[225855]:   </nova:ports>
Jan 20 14:40:42 compute-1 nova_compute[225855]: </nova:instance>
Jan 20 14:40:42 compute-1 nova_compute[225855]:   </metadata>
Jan 20 14:40:42 compute-1 nova_compute[225855]:   <memory unit='KiB'>131072</memory>
Jan 20 14:40:42 compute-1 nova_compute[225855]:   <currentMemory unit='KiB'>131072</currentMemory>
Jan 20 14:40:42 compute-1 nova_compute[225855]:   <vcpu placement='static'>1</vcpu>
Jan 20 14:40:42 compute-1 nova_compute[225855]:   <resource>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     <partition>/machine</partition>
Jan 20 14:40:42 compute-1 nova_compute[225855]:   </resource>
Jan 20 14:40:42 compute-1 nova_compute[225855]:   <sysinfo type='smbios'>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     <system>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <entry name='manufacturer'>RDO</entry>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <entry name='product'>OpenStack Compute</entry>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <entry name='serial'>d5c2df9d-748f-4df2-9392-b45741975f65</entry>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <entry name='uuid'>d5c2df9d-748f-4df2-9392-b45741975f65</entry>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <entry name='family'>Virtual Machine</entry>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     </system>
Jan 20 14:40:42 compute-1 nova_compute[225855]:   </sysinfo>
Jan 20 14:40:42 compute-1 nova_compute[225855]:   <os>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     <boot dev='hd'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     <smbios mode='sysinfo'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:   </os>
Jan 20 14:40:42 compute-1 nova_compute[225855]:   <features>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     <acpi/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     <apic/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     <vmcoreinfo state='on'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:   </features>
Jan 20 14:40:42 compute-1 nova_compute[225855]:   <cpu mode='custom' match='exact' check='full'>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     <model fallback='forbid'>Nehalem</model>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     <feature policy='require' name='x2apic'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     <feature policy='require' name='hypervisor'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     <feature policy='require' name='vme'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:   </cpu>
Jan 20 14:40:42 compute-1 nova_compute[225855]:   <clock offset='utc'>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     <timer name='pit' tickpolicy='delay'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     <timer name='rtc' tickpolicy='catchup'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     <timer name='hpet' present='no'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:   </clock>
Jan 20 14:40:42 compute-1 nova_compute[225855]:   <on_poweroff>destroy</on_poweroff>
Jan 20 14:40:42 compute-1 nova_compute[225855]:   <on_reboot>restart</on_reboot>
Jan 20 14:40:42 compute-1 nova_compute[225855]:   <on_crash>destroy</on_crash>
Jan 20 14:40:42 compute-1 nova_compute[225855]:   <devices>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     <disk type='network' device='disk'>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <driver name='qemu' type='raw' cache='none'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <auth username='openstack'>
Jan 20 14:40:42 compute-1 nova_compute[225855]:         <secret type='ceph' uuid='e399cf45-e6b6-5393-99f1-75c601d3f188'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       </auth>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <source protocol='rbd' name='vms/d5c2df9d-748f-4df2-9392-b45741975f65_disk' index='2'>
Jan 20 14:40:42 compute-1 nova_compute[225855]:         <host name='192.168.122.100' port='6789'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:         <host name='192.168.122.102' port='6789'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:         <host name='192.168.122.101' port='6789'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       </source>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <target dev='vda' bus='virtio'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <alias name='virtio-disk0'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     </disk>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     <disk type='network' device='cdrom'>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <driver name='qemu' type='raw' cache='none'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <auth username='openstack'>
Jan 20 14:40:42 compute-1 nova_compute[225855]:         <secret type='ceph' uuid='e399cf45-e6b6-5393-99f1-75c601d3f188'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       </auth>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <source protocol='rbd' name='vms/d5c2df9d-748f-4df2-9392-b45741975f65_disk.config' index='1'>
Jan 20 14:40:42 compute-1 nova_compute[225855]:         <host name='192.168.122.100' port='6789'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:         <host name='192.168.122.102' port='6789'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:         <host name='192.168.122.101' port='6789'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       </source>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <target dev='sda' bus='sata'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <readonly/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <alias name='sata0-0-0'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     </disk>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     <controller type='pci' index='0' model='pcie-root'>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <alias name='pcie.0'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     </controller>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     <controller type='pci' index='1' model='pcie-root-port'>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <target chassis='1' port='0x10'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <alias name='pci.1'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     </controller>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     <controller type='pci' index='2' model='pcie-root-port'>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <target chassis='2' port='0x11'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <alias name='pci.2'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     </controller>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     <controller type='pci' index='3' model='pcie-root-port'>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <target chassis='3' port='0x12'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <alias name='pci.3'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     </controller>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     <controller type='pci' index='4' model='pcie-root-port'>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <target chassis='4' port='0x13'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <alias name='pci.4'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     </controller>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     <controller type='pci' index='5' model='pcie-root-port'>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <target chassis='5' port='0x14'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <alias name='pci.5'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     </controller>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     <controller type='pci' index='6' model='pcie-root-port'>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <target chassis='6' port='0x15'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <alias name='pci.6'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     </controller>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     <controller type='pci' index='7' model='pcie-root-port'>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <target chassis='7' port='0x16'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <alias name='pci.7'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     </controller>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     <controller type='pci' index='8' model='pcie-root-port'>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <target chassis='8' port='0x17'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <alias name='pci.8'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     </controller>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     <controller type='pci' index='9' model='pcie-root-port'>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <target chassis='9' port='0x18'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <alias name='pci.9'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     </controller>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     <controller type='pci' index='10' model='pcie-root-port'>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <target chassis='10' port='0x19'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <alias name='pci.10'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     </controller>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     <controller type='pci' index='11' model='pcie-root-port'>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <target chassis='11' port='0x1a'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <alias name='pci.11'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     </controller>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     <controller type='pci' index='12' model='pcie-root-port'>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <target chassis='12' port='0x1b'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <alias name='pci.12'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     </controller>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     <controller type='pci' index='13' model='pcie-root-port'>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <target chassis='13' port='0x1c'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <alias name='pci.13'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     </controller>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     <controller type='pci' index='14' model='pcie-root-port'>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <target chassis='14' port='0x1d'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <alias name='pci.14'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     </controller>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     <controller type='pci' index='15' model='pcie-root-port'>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <target chassis='15' port='0x1e'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <alias name='pci.15'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     </controller>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     <controller type='pci' index='16' model='pcie-root-port'>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <target chassis='16' port='0x1f'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <alias name='pci.16'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     </controller>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     <controller type='pci' index='17' model='pcie-root-port'>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <target chassis='17' port='0x20'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <alias name='pci.17'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     </controller>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     <controller type='pci' index='18' model='pcie-root-port'>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <target chassis='18' port='0x21'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <alias name='pci.18'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     </controller>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     <controller type='pci' index='19' model='pcie-root-port'>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <target chassis='19' port='0x22'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <alias name='pci.19'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     </controller>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     <controller type='pci' index='20' model='pcie-root-port'>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <target chassis='20' port='0x23'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <alias name='pci.20'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     </controller>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     <controller type='pci' index='21' model='pcie-root-port'>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <target chassis='21' port='0x24'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <alias name='pci.21'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     </controller>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     <controller type='pci' index='22' model='pcie-root-port'>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <target chassis='22' port='0x25'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <alias name='pci.22'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     </controller>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     <controller type='pci' index='23' model='pcie-root-port'>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <target chassis='23' port='0x26'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <alias name='pci.23'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     </controller>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     <controller type='pci' index='24' model='pcie-root-port'>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <target chassis='24' port='0x27'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <alias name='pci.24'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     </controller>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     <controller type='pci' index='25' model='pcie-root-port'>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <target chassis='25' port='0x28'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <alias name='pci.25'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     </controller>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <model name='pcie-pci-bridge'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <alias name='pci.26'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     </controller>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     <controller type='usb' index='0' model='piix3-uhci'>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <alias name='usb'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     </controller>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     <controller type='sata' index='0'>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <alias name='ide'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     </controller>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     <interface type='ethernet'>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <mac address='fa:16:3e:3b:35:f2'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <target dev='tapb48170b0-71'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <model type='virtio'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <driver name='vhost' rx_queue_size='512'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <mtu size='1442'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <alias name='net0'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     </interface>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     <interface type='ethernet'>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <mac address='fa:16:3e:d2:51:f5'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <target dev='tap5fd7b3ad-0c'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <model type='virtio'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <driver name='vhost' rx_queue_size='512'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <mtu size='1442'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <alias name='net2'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     </interface>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     <interface type='ethernet'>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <mac address='fa:16:3e:e7:0e:b2'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <target dev='tap0b83ac2a-72'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <model type='virtio'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <driver name='vhost' rx_queue_size='512'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <mtu size='1442'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <alias name='net3'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x08' slot='0x00' function='0x0'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     </interface>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     <serial type='pty'>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <source path='/dev/pts/0'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <log file='/var/lib/nova/instances/d5c2df9d-748f-4df2-9392-b45741975f65/console.log' append='off'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <target type='isa-serial' port='0'>
Jan 20 14:40:42 compute-1 nova_compute[225855]:         <model name='isa-serial'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       </target>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <alias name='serial0'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     </serial>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     <console type='pty' tty='/dev/pts/0'>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <source path='/dev/pts/0'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <log file='/var/lib/nova/instances/d5c2df9d-748f-4df2-9392-b45741975f65/console.log' append='off'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <target type='serial' port='0'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <alias name='serial0'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     </console>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     <input type='tablet' bus='usb'>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <alias name='input0'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <address type='usb' bus='0' port='1'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     </input>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     <input type='mouse' bus='ps2'>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <alias name='input1'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     </input>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     <input type='keyboard' bus='ps2'>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <alias name='input2'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     </input>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <listen type='address' address='::0'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     </graphics>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     <audio id='1' type='none'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     <video>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <model type='virtio' heads='1' primary='yes'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <alias name='video0'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     </video>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     <watchdog model='itco' action='reset'>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <alias name='watchdog0'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     </watchdog>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     <memballoon model='virtio'>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <stats period='10'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <alias name='balloon0'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     </memballoon>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     <rng model='virtio'>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <backend model='random'>/dev/urandom</backend>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <alias name='rng0'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     </rng>
Jan 20 14:40:42 compute-1 nova_compute[225855]:   </devices>
Jan 20 14:40:42 compute-1 nova_compute[225855]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     <label>system_u:system_r:svirt_t:s0:c277,c849</label>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c277,c849</imagelabel>
Jan 20 14:40:42 compute-1 nova_compute[225855]:   </seclabel>
Jan 20 14:40:42 compute-1 nova_compute[225855]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     <label>+107:+107</label>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     <imagelabel>+107:+107</imagelabel>
Jan 20 14:40:42 compute-1 nova_compute[225855]:   </seclabel>
Jan 20 14:40:42 compute-1 nova_compute[225855]: </domain>
Jan 20 14:40:42 compute-1 nova_compute[225855]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Jan 20 14:40:42 compute-1 nova_compute[225855]: 2026-01-20 14:40:42.159 225859 INFO nova.virt.libvirt.driver [None req-72b0302c-c7bb-4648-801b-9d4e27b907e8 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Successfully detached device tapdb46acd4-80 from instance d5c2df9d-748f-4df2-9392-b45741975f65 from the live domain config.
Jan 20 14:40:42 compute-1 nova_compute[225855]: 2026-01-20 14:40:42.160 225859 DEBUG nova.virt.libvirt.vif [None req-72b0302c-c7bb-4648-801b-9d4e27b907e8 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T14:39:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-2088149366',display_name='tempest-AttachInterfacesTestJSON-server-2088149366',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-2088149366',id=66,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJZwLcyXdmuPk9iAZlMOAxeFM3EdHKE0x5nJT3i2GTbVf6EkhYVj3hEmoeSwYo6iZrNjT6w/g2TndK4CzLIvGDWLEyKfIPgg2vbEtoL1oIxCHYN2ytrctbkHi1netydaRQ==',key_name='tempest-keypair-230916378',keypairs=<?>,launch_index=0,launched_at=2026-01-20T14:39:53Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='e3f93fd4b2154dda9f38e62334904303',ramdisk_id='',reservation_id='r-odubqy1o',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-305746947',owner_user_name='tempest-AttachInterfacesTestJSON-305746947-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T14:39:53Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c8a9fb458d27434495a77a94827b6097',uuid=d5c2df9d-748f-4df2-9392-b45741975f65,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "db46acd4-809b-4127-ad48-870ae429b4d6", "address": "fa:16:3e:24:df:88", "network": {"id": "fc21b99b-4e34-422c-be05-0a440009dac4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-808285772-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3f93fd4b2154dda9f38e62334904303", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb46acd4-80", "ovs_interfaceid": "db46acd4-809b-4127-ad48-870ae429b4d6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 20 14:40:42 compute-1 nova_compute[225855]: 2026-01-20 14:40:42.160 225859 DEBUG nova.network.os_vif_util [None req-72b0302c-c7bb-4648-801b-9d4e27b907e8 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Converting VIF {"id": "db46acd4-809b-4127-ad48-870ae429b4d6", "address": "fa:16:3e:24:df:88", "network": {"id": "fc21b99b-4e34-422c-be05-0a440009dac4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-808285772-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3f93fd4b2154dda9f38e62334904303", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb46acd4-80", "ovs_interfaceid": "db46acd4-809b-4127-ad48-870ae429b4d6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:40:42 compute-1 nova_compute[225855]: 2026-01-20 14:40:42.161 225859 DEBUG nova.network.os_vif_util [None req-72b0302c-c7bb-4648-801b-9d4e27b907e8 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:24:df:88,bridge_name='br-int',has_traffic_filtering=True,id=db46acd4-809b-4127-ad48-870ae429b4d6,network=Network(fc21b99b-4e34-422c-be05-0a440009dac4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdb46acd4-80') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:40:42 compute-1 nova_compute[225855]: 2026-01-20 14:40:42.161 225859 DEBUG os_vif [None req-72b0302c-c7bb-4648-801b-9d4e27b907e8 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:24:df:88,bridge_name='br-int',has_traffic_filtering=True,id=db46acd4-809b-4127-ad48-870ae429b4d6,network=Network(fc21b99b-4e34-422c-be05-0a440009dac4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdb46acd4-80') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 20 14:40:42 compute-1 nova_compute[225855]: 2026-01-20 14:40:42.163 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:40:42 compute-1 nova_compute[225855]: 2026-01-20 14:40:42.163 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdb46acd4-80, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:40:42 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:40:42.169 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[72172048-83ef-4567-b69b-90aac239c4e8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:40:42 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:40:42.194 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[06cf3e1e-638f-404c-99e3-31f116ef312b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:40:42 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:40:42.197 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[616db67d-c1c6-4609-a14c-8b63376f74b1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:40:42 compute-1 nova_compute[225855]: 2026-01-20 14:40:42.220 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:40:42 compute-1 nova_compute[225855]: 2026-01-20 14:40:42.222 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 14:40:42 compute-1 nova_compute[225855]: 2026-01-20 14:40:42.223 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:40:42 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:40:42.223 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[a61b2160-7d30-43c7-9eed-8764b4bde2f8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:40:42 compute-1 nova_compute[225855]: 2026-01-20 14:40:42.225 225859 INFO os_vif [None req-72b0302c-c7bb-4648-801b-9d4e27b907e8 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:24:df:88,bridge_name='br-int',has_traffic_filtering=True,id=db46acd4-809b-4127-ad48-870ae429b4d6,network=Network(fc21b99b-4e34-422c-be05-0a440009dac4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdb46acd4-80')
Jan 20 14:40:42 compute-1 nova_compute[225855]: 2026-01-20 14:40:42.226 225859 DEBUG nova.virt.libvirt.guest [None req-72b0302c-c7bb-4648-801b-9d4e27b907e8 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 14:40:42 compute-1 nova_compute[225855]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:   <nova:name>tempest-AttachInterfacesTestJSON-server-2088149366</nova:name>
Jan 20 14:40:42 compute-1 nova_compute[225855]:   <nova:creationTime>2026-01-20 14:40:42</nova:creationTime>
Jan 20 14:40:42 compute-1 nova_compute[225855]:   <nova:flavor name="m1.nano">
Jan 20 14:40:42 compute-1 nova_compute[225855]:     <nova:memory>128</nova:memory>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     <nova:disk>1</nova:disk>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     <nova:swap>0</nova:swap>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     <nova:ephemeral>0</nova:ephemeral>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     <nova:vcpus>1</nova:vcpus>
Jan 20 14:40:42 compute-1 nova_compute[225855]:   </nova:flavor>
Jan 20 14:40:42 compute-1 nova_compute[225855]:   <nova:owner>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     <nova:user uuid="c8a9fb458d27434495a77a94827b6097">tempest-AttachInterfacesTestJSON-305746947-project-member</nova:user>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     <nova:project uuid="e3f93fd4b2154dda9f38e62334904303">tempest-AttachInterfacesTestJSON-305746947</nova:project>
Jan 20 14:40:42 compute-1 nova_compute[225855]:   </nova:owner>
Jan 20 14:40:42 compute-1 nova_compute[225855]:   <nova:root type="image" uuid="a32b3e07-16d8-46fd-9a7b-c242c432fcf9"/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:   <nova:ports>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     <nova:port uuid="b48170b0-717d-48f0-8172-742a4a8596e9">
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     </nova:port>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     <nova:port uuid="5fd7b3ad-0cf1-4294-b552-6141c8ee85bd">
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     </nova:port>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     <nova:port uuid="0b83ac2a-727a-4db9-91f2-69f939deeb69">
Jan 20 14:40:42 compute-1 nova_compute[225855]:       <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Jan 20 14:40:42 compute-1 nova_compute[225855]:     </nova:port>
Jan 20 14:40:42 compute-1 nova_compute[225855]:   </nova:ports>
Jan 20 14:40:42 compute-1 nova_compute[225855]: </nova:instance>
Jan 20 14:40:42 compute-1 nova_compute[225855]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Jan 20 14:40:42 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:40:42.240 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[9cc58e68-296e-4057-868b-ad5e3a2a1aa6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfc21b99b-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b1:5b:d2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 11, 'rx_bytes': 784, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 11, 'rx_bytes': 784, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 55], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 503233, 'reachable_time': 33912, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 253195, 'error': None, 'target': 'ovnmeta-fc21b99b-4e34-422c-be05-0a440009dac4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:40:42 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:40:42.257 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[38b10a69-8859-4bf3-8d0c-657437599968]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapfc21b99b-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 503245, 'tstamp': 503245}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 253196, 'error': None, 'target': 'ovnmeta-fc21b99b-4e34-422c-be05-0a440009dac4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapfc21b99b-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 503248, 'tstamp': 503248}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 253196, 'error': None, 'target': 'ovnmeta-fc21b99b-4e34-422c-be05-0a440009dac4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:40:42 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:40:42.258 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfc21b99b-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:40:42 compute-1 nova_compute[225855]: 2026-01-20 14:40:42.259 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:40:42 compute-1 nova_compute[225855]: 2026-01-20 14:40:42.260 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:40:42 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:40:42.261 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfc21b99b-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:40:42 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:40:42.261 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:40:42 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:40:42.261 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfc21b99b-40, col_values=(('external_ids', {'iface-id': '583df905-1d9f-49c1-b209-4b7fad1599f6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:40:42 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:40:42.261 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:40:42 compute-1 nova_compute[225855]: 2026-01-20 14:40:42.431 225859 DEBUG nova.compute.manager [req-6f5a0e58-8463-4cf2-8953-12046cbdd9a3 req-11ad0333-3d8a-444d-8987-4e7bcc210705 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d5c2df9d-748f-4df2-9392-b45741975f65] Received event network-vif-plugged-0b83ac2a-727a-4db9-91f2-69f939deeb69 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:40:42 compute-1 nova_compute[225855]: 2026-01-20 14:40:42.431 225859 DEBUG oslo_concurrency.lockutils [req-6f5a0e58-8463-4cf2-8953-12046cbdd9a3 req-11ad0333-3d8a-444d-8987-4e7bcc210705 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "d5c2df9d-748f-4df2-9392-b45741975f65-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:40:42 compute-1 nova_compute[225855]: 2026-01-20 14:40:42.431 225859 DEBUG oslo_concurrency.lockutils [req-6f5a0e58-8463-4cf2-8953-12046cbdd9a3 req-11ad0333-3d8a-444d-8987-4e7bcc210705 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "d5c2df9d-748f-4df2-9392-b45741975f65-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:40:42 compute-1 nova_compute[225855]: 2026-01-20 14:40:42.432 225859 DEBUG oslo_concurrency.lockutils [req-6f5a0e58-8463-4cf2-8953-12046cbdd9a3 req-11ad0333-3d8a-444d-8987-4e7bcc210705 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "d5c2df9d-748f-4df2-9392-b45741975f65-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:40:42 compute-1 nova_compute[225855]: 2026-01-20 14:40:42.432 225859 DEBUG nova.compute.manager [req-6f5a0e58-8463-4cf2-8953-12046cbdd9a3 req-11ad0333-3d8a-444d-8987-4e7bcc210705 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d5c2df9d-748f-4df2-9392-b45741975f65] No waiting events found dispatching network-vif-plugged-0b83ac2a-727a-4db9-91f2-69f939deeb69 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:40:42 compute-1 nova_compute[225855]: 2026-01-20 14:40:42.432 225859 WARNING nova.compute.manager [req-6f5a0e58-8463-4cf2-8953-12046cbdd9a3 req-11ad0333-3d8a-444d-8987-4e7bcc210705 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d5c2df9d-748f-4df2-9392-b45741975f65] Received unexpected event network-vif-plugged-0b83ac2a-727a-4db9-91f2-69f939deeb69 for instance with vm_state active and task_state None.
Jan 20 14:40:42 compute-1 nova_compute[225855]: 2026-01-20 14:40:42.510 225859 DEBUG nova.compute.manager [req-e3e8713f-0164-48d9-9883-bedfb70c8bf4 req-f4548433-ef47-464c-a31c-fa2e79369e4f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d5c2df9d-748f-4df2-9392-b45741975f65] Received event network-vif-unplugged-db46acd4-809b-4127-ad48-870ae429b4d6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:40:42 compute-1 nova_compute[225855]: 2026-01-20 14:40:42.510 225859 DEBUG oslo_concurrency.lockutils [req-e3e8713f-0164-48d9-9883-bedfb70c8bf4 req-f4548433-ef47-464c-a31c-fa2e79369e4f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "d5c2df9d-748f-4df2-9392-b45741975f65-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:40:42 compute-1 nova_compute[225855]: 2026-01-20 14:40:42.510 225859 DEBUG oslo_concurrency.lockutils [req-e3e8713f-0164-48d9-9883-bedfb70c8bf4 req-f4548433-ef47-464c-a31c-fa2e79369e4f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "d5c2df9d-748f-4df2-9392-b45741975f65-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:40:42 compute-1 nova_compute[225855]: 2026-01-20 14:40:42.511 225859 DEBUG oslo_concurrency.lockutils [req-e3e8713f-0164-48d9-9883-bedfb70c8bf4 req-f4548433-ef47-464c-a31c-fa2e79369e4f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "d5c2df9d-748f-4df2-9392-b45741975f65-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:40:42 compute-1 nova_compute[225855]: 2026-01-20 14:40:42.511 225859 DEBUG nova.compute.manager [req-e3e8713f-0164-48d9-9883-bedfb70c8bf4 req-f4548433-ef47-464c-a31c-fa2e79369e4f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d5c2df9d-748f-4df2-9392-b45741975f65] No waiting events found dispatching network-vif-unplugged-db46acd4-809b-4127-ad48-870ae429b4d6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:40:42 compute-1 nova_compute[225855]: 2026-01-20 14:40:42.511 225859 WARNING nova.compute.manager [req-e3e8713f-0164-48d9-9883-bedfb70c8bf4 req-f4548433-ef47-464c-a31c-fa2e79369e4f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d5c2df9d-748f-4df2-9392-b45741975f65] Received unexpected event network-vif-unplugged-db46acd4-809b-4127-ad48-870ae429b4d6 for instance with vm_state active and task_state None.
Jan 20 14:40:42 compute-1 ovn_controller[130490]: 2026-01-20T14:40:42Z|00034|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:e7:0e:b2 10.100.0.4
Jan 20 14:40:42 compute-1 ovn_controller[130490]: 2026-01-20T14:40:42Z|00035|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:e7:0e:b2 10.100.0.4
Jan 20 14:40:42 compute-1 ceph-mon[81775]: pgmap v1570: 321 pgs: 321 active+clean; 279 MiB data, 725 MiB used, 20 GiB / 21 GiB avail; 402 KiB/s rd, 2.2 MiB/s wr, 123 op/s
Jan 20 14:40:42 compute-1 nova_compute[225855]: 2026-01-20 14:40:42.973 225859 DEBUG oslo_concurrency.lockutils [None req-72b0302c-c7bb-4648-801b-9d4e27b907e8 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Acquiring lock "refresh_cache-d5c2df9d-748f-4df2-9392-b45741975f65" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:40:42 compute-1 nova_compute[225855]: 2026-01-20 14:40:42.974 225859 DEBUG oslo_concurrency.lockutils [None req-72b0302c-c7bb-4648-801b-9d4e27b907e8 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Acquired lock "refresh_cache-d5c2df9d-748f-4df2-9392-b45741975f65" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:40:42 compute-1 nova_compute[225855]: 2026-01-20 14:40:42.974 225859 DEBUG nova.network.neutron [None req-72b0302c-c7bb-4648-801b-9d4e27b907e8 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] [instance: d5c2df9d-748f-4df2-9392-b45741975f65] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 14:40:43 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:40:43 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:40:43 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:40:43.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:40:44 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:40:44 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:40:44 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:40:44.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:40:44 compute-1 ovn_controller[130490]: 2026-01-20T14:40:44Z|00221|binding|INFO|Releasing lport 8cff17f5-b792-4e6f-8f1e-6c48322af961 from this chassis (sb_readonly=0)
Jan 20 14:40:44 compute-1 ovn_controller[130490]: 2026-01-20T14:40:44Z|00222|binding|INFO|Releasing lport 583df905-1d9f-49c1-b209-4b7fad1599f6 from this chassis (sb_readonly=0)
Jan 20 14:40:44 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:40:44.087 140354 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port 1972e093-3e75-4ae7-adfa-a5698a2d1439 with type ""
Jan 20 14:40:44 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:40:44.088 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e7:0e:b2 10.100.0.4'], port_security=['fa:16:3e:e7:0e:b2 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-AttachInterfacesTestJSON-2013995474', 'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'd5c2df9d-748f-4df2-9392-b45741975f65', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fc21b99b-4e34-422c-be05-0a440009dac4', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-AttachInterfacesTestJSON-2013995474', 'neutron:project_id': 'e3f93fd4b2154dda9f38e62334904303', 'neutron:revision_number': '4', 'neutron:security_group_ids': '52cb9fb4-4318-4f53-9b5a-002d95792517', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7af6b6bc-3cbd-48be-9f10-23ec011e0426, chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], logical_port=0b83ac2a-727a-4db9-91f2-69f939deeb69) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:40:44 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:40:44.090 140354 INFO neutron.agent.ovn.metadata.agent [-] Port 0b83ac2a-727a-4db9-91f2-69f939deeb69 in datapath fc21b99b-4e34-422c-be05-0a440009dac4 unbound from our chassis
Jan 20 14:40:44 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:40:44.091 140354 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fc21b99b-4e34-422c-be05-0a440009dac4
Jan 20 14:40:44 compute-1 ovn_controller[130490]: 2026-01-20T14:40:44Z|00223|binding|INFO|Removing iface tap0b83ac2a-72 ovn-installed in OVS
Jan 20 14:40:44 compute-1 ovn_controller[130490]: 2026-01-20T14:40:44Z|00224|binding|INFO|Removing lport 0b83ac2a-727a-4db9-91f2-69f939deeb69 ovn-installed in OVS
Jan 20 14:40:44 compute-1 nova_compute[225855]: 2026-01-20 14:40:44.097 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:40:44 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:40:44.106 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[318cf7f3-021f-4d55-aa38-5f2b9a0e4e43]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:40:44 compute-1 nova_compute[225855]: 2026-01-20 14:40:44.122 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:40:44 compute-1 nova_compute[225855]: 2026-01-20 14:40:44.134 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:40:44 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:40:44.136 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[74733991-44b3-435e-8a91-efa6f207ca0e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:40:44 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:40:44.140 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[48e1118c-8adc-4231-af6c-0107db2eab32]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:40:44 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:40:44.167 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[67420da7-9e2e-4559-8644-0bbc2a0b5285]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:40:44 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:40:44.183 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[fd6f6c82-eaa8-4980-bbdf-629c817ecc90]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfc21b99b-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b1:5b:d2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 13, 'rx_bytes': 784, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 13, 'rx_bytes': 784, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 55], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 503233, 'reachable_time': 33912, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 253203, 'error': None, 'target': 'ovnmeta-fc21b99b-4e34-422c-be05-0a440009dac4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:40:44 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:40:44.203 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[1719610c-416d-4538-b496-be32681ad7d6]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapfc21b99b-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 503245, 'tstamp': 503245}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 253204, 'error': None, 'target': 'ovnmeta-fc21b99b-4e34-422c-be05-0a440009dac4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapfc21b99b-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 503248, 'tstamp': 503248}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 253204, 'error': None, 'target': 'ovnmeta-fc21b99b-4e34-422c-be05-0a440009dac4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:40:44 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:40:44.205 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfc21b99b-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:40:44 compute-1 nova_compute[225855]: 2026-01-20 14:40:44.206 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:40:44 compute-1 nova_compute[225855]: 2026-01-20 14:40:44.208 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:40:44 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:40:44.208 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfc21b99b-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:40:44 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:40:44.208 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:40:44 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:40:44.209 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfc21b99b-40, col_values=(('external_ids', {'iface-id': '583df905-1d9f-49c1-b209-4b7fad1599f6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:40:44 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:40:44.209 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:40:44 compute-1 nova_compute[225855]: 2026-01-20 14:40:44.359 225859 INFO nova.network.neutron [None req-72b0302c-c7bb-4648-801b-9d4e27b907e8 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] [instance: d5c2df9d-748f-4df2-9392-b45741975f65] Port db46acd4-809b-4127-ad48-870ae429b4d6 from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.
Jan 20 14:40:44 compute-1 nova_compute[225855]: 2026-01-20 14:40:44.495 225859 DEBUG oslo_concurrency.lockutils [None req-632ac997-4d58-4786-bcf9-7342e361eb4b c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Acquiring lock "d5c2df9d-748f-4df2-9392-b45741975f65" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:40:44 compute-1 nova_compute[225855]: 2026-01-20 14:40:44.495 225859 DEBUG oslo_concurrency.lockutils [None req-632ac997-4d58-4786-bcf9-7342e361eb4b c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Lock "d5c2df9d-748f-4df2-9392-b45741975f65" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:40:44 compute-1 nova_compute[225855]: 2026-01-20 14:40:44.496 225859 DEBUG oslo_concurrency.lockutils [None req-632ac997-4d58-4786-bcf9-7342e361eb4b c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Acquiring lock "d5c2df9d-748f-4df2-9392-b45741975f65-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:40:44 compute-1 nova_compute[225855]: 2026-01-20 14:40:44.496 225859 DEBUG oslo_concurrency.lockutils [None req-632ac997-4d58-4786-bcf9-7342e361eb4b c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Lock "d5c2df9d-748f-4df2-9392-b45741975f65-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:40:44 compute-1 nova_compute[225855]: 2026-01-20 14:40:44.496 225859 DEBUG oslo_concurrency.lockutils [None req-632ac997-4d58-4786-bcf9-7342e361eb4b c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Lock "d5c2df9d-748f-4df2-9392-b45741975f65-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:40:44 compute-1 nova_compute[225855]: 2026-01-20 14:40:44.497 225859 INFO nova.compute.manager [None req-632ac997-4d58-4786-bcf9-7342e361eb4b c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] [instance: d5c2df9d-748f-4df2-9392-b45741975f65] Terminating instance
Jan 20 14:40:44 compute-1 nova_compute[225855]: 2026-01-20 14:40:44.498 225859 DEBUG nova.compute.manager [None req-632ac997-4d58-4786-bcf9-7342e361eb4b c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] [instance: d5c2df9d-748f-4df2-9392-b45741975f65] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 20 14:40:44 compute-1 kernel: tapb48170b0-71 (unregistering): left promiscuous mode
Jan 20 14:40:44 compute-1 NetworkManager[49104]: <info>  [1768920044.6108] device (tapb48170b0-71): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 20 14:40:44 compute-1 ovn_controller[130490]: 2026-01-20T14:40:44Z|00225|binding|INFO|Releasing lport b48170b0-717d-48f0-8172-742a4a8596e9 from this chassis (sb_readonly=0)
Jan 20 14:40:44 compute-1 nova_compute[225855]: 2026-01-20 14:40:44.618 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:40:44 compute-1 ovn_controller[130490]: 2026-01-20T14:40:44Z|00226|binding|INFO|Setting lport b48170b0-717d-48f0-8172-742a4a8596e9 down in Southbound
Jan 20 14:40:44 compute-1 ovn_controller[130490]: 2026-01-20T14:40:44Z|00227|binding|INFO|Removing iface tapb48170b0-71 ovn-installed in OVS
Jan 20 14:40:44 compute-1 nova_compute[225855]: 2026-01-20 14:40:44.621 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:40:44 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:40:44.627 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3b:35:f2 10.100.0.13'], port_security=['fa:16:3e:3b:35:f2 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'd5c2df9d-748f-4df2-9392-b45741975f65', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fc21b99b-4e34-422c-be05-0a440009dac4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e3f93fd4b2154dda9f38e62334904303', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b03ad0a9-4e4a-464d-b7d2-84d77d6554bc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com', 'neutron:port_fip': '192.168.122.174'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7af6b6bc-3cbd-48be-9f10-23ec011e0426, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], logical_port=b48170b0-717d-48f0-8172-742a4a8596e9) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:40:44 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:40:44.629 140354 INFO neutron.agent.ovn.metadata.agent [-] Port b48170b0-717d-48f0-8172-742a4a8596e9 in datapath fc21b99b-4e34-422c-be05-0a440009dac4 unbound from our chassis
Jan 20 14:40:44 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:40:44.630 140354 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fc21b99b-4e34-422c-be05-0a440009dac4
Jan 20 14:40:44 compute-1 nova_compute[225855]: 2026-01-20 14:40:44.634 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:40:44 compute-1 kernel: tap5fd7b3ad-0c (unregistering): left promiscuous mode
Jan 20 14:40:44 compute-1 nova_compute[225855]: 2026-01-20 14:40:44.663 225859 DEBUG nova.compute.manager [req-7399bf5a-a499-4702-9d57-c40b71169064 req-d8aeda9a-d9d9-43f9-a60a-73779b0052ef 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d5c2df9d-748f-4df2-9392-b45741975f65] Received event network-vif-plugged-db46acd4-809b-4127-ad48-870ae429b4d6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:40:44 compute-1 nova_compute[225855]: 2026-01-20 14:40:44.663 225859 DEBUG oslo_concurrency.lockutils [req-7399bf5a-a499-4702-9d57-c40b71169064 req-d8aeda9a-d9d9-43f9-a60a-73779b0052ef 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "d5c2df9d-748f-4df2-9392-b45741975f65-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:40:44 compute-1 nova_compute[225855]: 2026-01-20 14:40:44.664 225859 DEBUG oslo_concurrency.lockutils [req-7399bf5a-a499-4702-9d57-c40b71169064 req-d8aeda9a-d9d9-43f9-a60a-73779b0052ef 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "d5c2df9d-748f-4df2-9392-b45741975f65-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:40:44 compute-1 nova_compute[225855]: 2026-01-20 14:40:44.664 225859 DEBUG oslo_concurrency.lockutils [req-7399bf5a-a499-4702-9d57-c40b71169064 req-d8aeda9a-d9d9-43f9-a60a-73779b0052ef 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "d5c2df9d-748f-4df2-9392-b45741975f65-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:40:44 compute-1 nova_compute[225855]: 2026-01-20 14:40:44.664 225859 DEBUG nova.compute.manager [req-7399bf5a-a499-4702-9d57-c40b71169064 req-d8aeda9a-d9d9-43f9-a60a-73779b0052ef 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d5c2df9d-748f-4df2-9392-b45741975f65] No waiting events found dispatching network-vif-plugged-db46acd4-809b-4127-ad48-870ae429b4d6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:40:44 compute-1 nova_compute[225855]: 2026-01-20 14:40:44.664 225859 WARNING nova.compute.manager [req-7399bf5a-a499-4702-9d57-c40b71169064 req-d8aeda9a-d9d9-43f9-a60a-73779b0052ef 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d5c2df9d-748f-4df2-9392-b45741975f65] Received unexpected event network-vif-plugged-db46acd4-809b-4127-ad48-870ae429b4d6 for instance with vm_state active and task_state deleting.
Jan 20 14:40:44 compute-1 nova_compute[225855]: 2026-01-20 14:40:44.664 225859 DEBUG nova.compute.manager [req-7399bf5a-a499-4702-9d57-c40b71169064 req-d8aeda9a-d9d9-43f9-a60a-73779b0052ef 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d5c2df9d-748f-4df2-9392-b45741975f65] Received event network-vif-deleted-db46acd4-809b-4127-ad48-870ae429b4d6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:40:44 compute-1 nova_compute[225855]: 2026-01-20 14:40:44.665 225859 INFO nova.compute.manager [req-7399bf5a-a499-4702-9d57-c40b71169064 req-d8aeda9a-d9d9-43f9-a60a-73779b0052ef 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d5c2df9d-748f-4df2-9392-b45741975f65] Neutron deleted interface db46acd4-809b-4127-ad48-870ae429b4d6; detaching it from the instance and deleting it from the info cache
Jan 20 14:40:44 compute-1 nova_compute[225855]: 2026-01-20 14:40:44.665 225859 DEBUG nova.network.neutron [req-7399bf5a-a499-4702-9d57-c40b71169064 req-d8aeda9a-d9d9-43f9-a60a-73779b0052ef 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d5c2df9d-748f-4df2-9392-b45741975f65] Updating instance_info_cache with network_info: [{"id": "b48170b0-717d-48f0-8172-742a4a8596e9", "address": "fa:16:3e:3b:35:f2", "network": {"id": "fc21b99b-4e34-422c-be05-0a440009dac4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-808285772-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.174", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3f93fd4b2154dda9f38e62334904303", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb48170b0-71", "ovs_interfaceid": "b48170b0-717d-48f0-8172-742a4a8596e9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "5fd7b3ad-0cf1-4294-b552-6141c8ee85bd", "address": "fa:16:3e:d2:51:f5", "network": {"id": "fc21b99b-4e34-422c-be05-0a440009dac4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-808285772-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3f93fd4b2154dda9f38e62334904303", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5fd7b3ad-0c", "ovs_interfaceid": "5fd7b3ad-0cf1-4294-b552-6141c8ee85bd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "0b83ac2a-727a-4db9-91f2-69f939deeb69", "address": "fa:16:3e:e7:0e:b2", "network": {"id": "fc21b99b-4e34-422c-be05-0a440009dac4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-808285772-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3f93fd4b2154dda9f38e62334904303", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b83ac2a-72", "ovs_interfaceid": "0b83ac2a-727a-4db9-91f2-69f939deeb69", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:40:44 compute-1 NetworkManager[49104]: <info>  [1768920044.6721] device (tap5fd7b3ad-0c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 20 14:40:44 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:40:44.676 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[13750d84-a2d8-4312-832b-a46abc3623ef]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:40:44 compute-1 ovn_controller[130490]: 2026-01-20T14:40:44Z|00228|binding|INFO|Releasing lport 5fd7b3ad-0cf1-4294-b552-6141c8ee85bd from this chassis (sb_readonly=0)
Jan 20 14:40:44 compute-1 ovn_controller[130490]: 2026-01-20T14:40:44Z|00229|binding|INFO|Setting lport 5fd7b3ad-0cf1-4294-b552-6141c8ee85bd down in Southbound
Jan 20 14:40:44 compute-1 ovn_controller[130490]: 2026-01-20T14:40:44Z|00230|binding|INFO|Removing iface tap5fd7b3ad-0c ovn-installed in OVS
Jan 20 14:40:44 compute-1 nova_compute[225855]: 2026-01-20 14:40:44.681 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:40:44 compute-1 nova_compute[225855]: 2026-01-20 14:40:44.682 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:40:44 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:40:44.686 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d2:51:f5 10.100.0.11'], port_security=['fa:16:3e:d2:51:f5 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'd5c2df9d-748f-4df2-9392-b45741975f65', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fc21b99b-4e34-422c-be05-0a440009dac4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e3f93fd4b2154dda9f38e62334904303', 'neutron:revision_number': '4', 'neutron:security_group_ids': '52cb9fb4-4318-4f53-9b5a-002d95792517', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7af6b6bc-3cbd-48be-9f10-23ec011e0426, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], logical_port=5fd7b3ad-0cf1-4294-b552-6141c8ee85bd) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:40:44 compute-1 kernel: tap0b83ac2a-72 (unregistering): left promiscuous mode
Jan 20 14:40:44 compute-1 NetworkManager[49104]: <info>  [1768920044.7161] device (tap0b83ac2a-72): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 20 14:40:44 compute-1 nova_compute[225855]: 2026-01-20 14:40:44.718 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:40:44 compute-1 nova_compute[225855]: 2026-01-20 14:40:44.726 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:40:44 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:40:44.726 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[875c32c9-fabf-492d-8d08-e2a364f3f5e8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:40:44 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:40:44.730 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[cbd593e8-b040-4ef7-84be-3b2823eeaf08]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:40:44 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:40:44.757 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[1090b53d-8bed-4b93-b55a-6bb99baeb940]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:40:44 compute-1 systemd[1]: machine-qemu\x2d29\x2dinstance\x2d00000042.scope: Deactivated successfully.
Jan 20 14:40:44 compute-1 systemd[1]: machine-qemu\x2d29\x2dinstance\x2d00000042.scope: Consumed 14.696s CPU time.
Jan 20 14:40:44 compute-1 systemd-machined[194361]: Machine qemu-29-instance-00000042 terminated.
Jan 20 14:40:44 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:40:44.774 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[2fd5c159-a6e1-41d9-86ae-89d2b880aa9c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfc21b99b-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b1:5b:d2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 15, 'rx_bytes': 784, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 15, 'rx_bytes': 784, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 55], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 503233, 'reachable_time': 33912, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 253226, 'error': None, 'target': 'ovnmeta-fc21b99b-4e34-422c-be05-0a440009dac4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:40:44 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:40:44.788 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[15688527-2c39-469a-9d57-e3c603bd582d]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapfc21b99b-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 503245, 'tstamp': 503245}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 253227, 'error': None, 'target': 'ovnmeta-fc21b99b-4e34-422c-be05-0a440009dac4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapfc21b99b-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 503248, 'tstamp': 503248}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 253227, 'error': None, 'target': 'ovnmeta-fc21b99b-4e34-422c-be05-0a440009dac4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:40:44 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:40:44.789 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfc21b99b-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:40:44 compute-1 nova_compute[225855]: 2026-01-20 14:40:44.791 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:40:44 compute-1 nova_compute[225855]: 2026-01-20 14:40:44.800 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:40:44 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:40:44.801 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfc21b99b-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:40:44 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:40:44.801 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:40:44 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:40:44.802 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfc21b99b-40, col_values=(('external_ids', {'iface-id': '583df905-1d9f-49c1-b209-4b7fad1599f6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:40:44 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:40:44.802 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:40:44 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:40:44.803 140354 INFO neutron.agent.ovn.metadata.agent [-] Port 5fd7b3ad-0cf1-4294-b552-6141c8ee85bd in datapath fc21b99b-4e34-422c-be05-0a440009dac4 unbound from our chassis
Jan 20 14:40:44 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:40:44.804 140354 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network fc21b99b-4e34-422c-be05-0a440009dac4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 20 14:40:44 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:40:44.805 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[2b127179-89a5-43f5-a74a-3adf294b6282]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:40:44 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:40:44.806 140354 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-fc21b99b-4e34-422c-be05-0a440009dac4 namespace which is not needed anymore
Jan 20 14:40:44 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:40:44.858 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5ffd4ac3-9266-4927-98ad-20a17782c725, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '23'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:40:44 compute-1 nova_compute[225855]: 2026-01-20 14:40:44.858 225859 DEBUG nova.compute.manager [req-be15511e-516b-4047-a674-2f1bbd3b13fb req-c48c10b9-9ba9-480b-86b6-d29f87b7542d 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d5c2df9d-748f-4df2-9392-b45741975f65] Received event network-vif-unplugged-b48170b0-717d-48f0-8172-742a4a8596e9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:40:44 compute-1 nova_compute[225855]: 2026-01-20 14:40:44.859 225859 DEBUG oslo_concurrency.lockutils [req-be15511e-516b-4047-a674-2f1bbd3b13fb req-c48c10b9-9ba9-480b-86b6-d29f87b7542d 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "d5c2df9d-748f-4df2-9392-b45741975f65-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:40:44 compute-1 nova_compute[225855]: 2026-01-20 14:40:44.859 225859 DEBUG oslo_concurrency.lockutils [req-be15511e-516b-4047-a674-2f1bbd3b13fb req-c48c10b9-9ba9-480b-86b6-d29f87b7542d 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "d5c2df9d-748f-4df2-9392-b45741975f65-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:40:44 compute-1 nova_compute[225855]: 2026-01-20 14:40:44.859 225859 DEBUG oslo_concurrency.lockutils [req-be15511e-516b-4047-a674-2f1bbd3b13fb req-c48c10b9-9ba9-480b-86b6-d29f87b7542d 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "d5c2df9d-748f-4df2-9392-b45741975f65-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:40:44 compute-1 nova_compute[225855]: 2026-01-20 14:40:44.860 225859 DEBUG nova.compute.manager [req-be15511e-516b-4047-a674-2f1bbd3b13fb req-c48c10b9-9ba9-480b-86b6-d29f87b7542d 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d5c2df9d-748f-4df2-9392-b45741975f65] No waiting events found dispatching network-vif-unplugged-b48170b0-717d-48f0-8172-742a4a8596e9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:40:44 compute-1 nova_compute[225855]: 2026-01-20 14:40:44.860 225859 DEBUG nova.compute.manager [req-be15511e-516b-4047-a674-2f1bbd3b13fb req-c48c10b9-9ba9-480b-86b6-d29f87b7542d 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d5c2df9d-748f-4df2-9392-b45741975f65] Received event network-vif-unplugged-b48170b0-717d-48f0-8172-742a4a8596e9 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 20 14:40:44 compute-1 nova_compute[225855]: 2026-01-20 14:40:44.906 225859 DEBUG nova.objects.instance [req-7399bf5a-a499-4702-9d57-c40b71169064 req-d8aeda9a-d9d9-43f9-a60a-73779b0052ef 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lazy-loading 'system_metadata' on Instance uuid d5c2df9d-748f-4df2-9392-b45741975f65 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:40:44 compute-1 NetworkManager[49104]: <info>  [1768920044.9279] manager: (tap5fd7b3ad-0c): new Tun device (/org/freedesktop/NetworkManager/Devices/102)
Jan 20 14:40:44 compute-1 NetworkManager[49104]: <info>  [1768920044.9432] manager: (tap0b83ac2a-72): new Tun device (/org/freedesktop/NetworkManager/Devices/103)
Jan 20 14:40:44 compute-1 nova_compute[225855]: 2026-01-20 14:40:44.945 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:40:44 compute-1 nova_compute[225855]: 2026-01-20 14:40:44.952 225859 DEBUG nova.objects.instance [req-7399bf5a-a499-4702-9d57-c40b71169064 req-d8aeda9a-d9d9-43f9-a60a-73779b0052ef 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lazy-loading 'flavor' on Instance uuid d5c2df9d-748f-4df2-9392-b45741975f65 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:40:44 compute-1 nova_compute[225855]: 2026-01-20 14:40:44.955 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:40:44 compute-1 ceph-mon[81775]: pgmap v1571: 321 pgs: 321 active+clean; 279 MiB data, 725 MiB used, 20 GiB / 21 GiB avail; 385 KiB/s rd, 2.2 MiB/s wr, 112 op/s
Jan 20 14:40:44 compute-1 nova_compute[225855]: 2026-01-20 14:40:44.964 225859 INFO nova.virt.libvirt.driver [-] [instance: d5c2df9d-748f-4df2-9392-b45741975f65] Instance destroyed successfully.
Jan 20 14:40:44 compute-1 nova_compute[225855]: 2026-01-20 14:40:44.965 225859 DEBUG nova.objects.instance [None req-632ac997-4d58-4786-bcf9-7342e361eb4b c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Lazy-loading 'resources' on Instance uuid d5c2df9d-748f-4df2-9392-b45741975f65 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:40:44 compute-1 neutron-haproxy-ovnmeta-fc21b99b-4e34-422c-be05-0a440009dac4[252245]: [NOTICE]   (252249) : haproxy version is 2.8.14-c23fe91
Jan 20 14:40:44 compute-1 neutron-haproxy-ovnmeta-fc21b99b-4e34-422c-be05-0a440009dac4[252245]: [NOTICE]   (252249) : path to executable is /usr/sbin/haproxy
Jan 20 14:40:44 compute-1 neutron-haproxy-ovnmeta-fc21b99b-4e34-422c-be05-0a440009dac4[252245]: [WARNING]  (252249) : Exiting Master process...
Jan 20 14:40:44 compute-1 neutron-haproxy-ovnmeta-fc21b99b-4e34-422c-be05-0a440009dac4[252245]: [ALERT]    (252249) : Current worker (252251) exited with code 143 (Terminated)
Jan 20 14:40:44 compute-1 neutron-haproxy-ovnmeta-fc21b99b-4e34-422c-be05-0a440009dac4[252245]: [WARNING]  (252249) : All workers exited. Exiting... (0)
Jan 20 14:40:44 compute-1 systemd[1]: libpod-ad0796275d2b7bc69c4b451ca13c4323d2608039a8c7168dc31ea54a35ab1b48.scope: Deactivated successfully.
Jan 20 14:40:44 compute-1 nova_compute[225855]: 2026-01-20 14:40:44.972 225859 DEBUG nova.virt.libvirt.vif [req-7399bf5a-a499-4702-9d57-c40b71169064 req-d8aeda9a-d9d9-43f9-a60a-73779b0052ef 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T14:39:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-2088149366',display_name='tempest-AttachInterfacesTestJSON-server-2088149366',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-2088149366',id=66,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJZwLcyXdmuPk9iAZlMOAxeFM3EdHKE0x5nJT3i2GTbVf6EkhYVj3hEmoeSwYo6iZrNjT6w/g2TndK4CzLIvGDWLEyKfIPgg2vbEtoL1oIxCHYN2ytrctbkHi1netydaRQ==',key_name='tempest-keypair-230916378',keypairs=<?>,launch_index=0,launched_at=2026-01-20T14:39:53Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='e3f93fd4b2154dda9f38e62334904303',ramdisk_id='',reservation_id='r-odubqy1o',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-305746947',owner_user_name='tempest-AttachInterfacesTestJSON-305746947-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T14:40:44Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c8a9fb458d27434495a77a94827b6097',uuid=d5c2df9d-748f-4df2-9392-b45741975f65,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "db46acd4-809b-4127-ad48-870ae429b4d6", "address": "fa:16:3e:24:df:88", "network": {"id": "fc21b99b-4e34-422c-be05-0a440009dac4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-808285772-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3f93fd4b2154dda9f38e62334904303", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb46acd4-80", "ovs_interfaceid": "db46acd4-809b-4127-ad48-870ae429b4d6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 20 14:40:44 compute-1 nova_compute[225855]: 2026-01-20 14:40:44.972 225859 DEBUG nova.network.os_vif_util [req-7399bf5a-a499-4702-9d57-c40b71169064 req-d8aeda9a-d9d9-43f9-a60a-73779b0052ef 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Converting VIF {"id": "db46acd4-809b-4127-ad48-870ae429b4d6", "address": "fa:16:3e:24:df:88", "network": {"id": "fc21b99b-4e34-422c-be05-0a440009dac4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-808285772-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3f93fd4b2154dda9f38e62334904303", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb46acd4-80", "ovs_interfaceid": "db46acd4-809b-4127-ad48-870ae429b4d6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:40:44 compute-1 nova_compute[225855]: 2026-01-20 14:40:44.973 225859 DEBUG nova.network.os_vif_util [req-7399bf5a-a499-4702-9d57-c40b71169064 req-d8aeda9a-d9d9-43f9-a60a-73779b0052ef 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:24:df:88,bridge_name='br-int',has_traffic_filtering=True,id=db46acd4-809b-4127-ad48-870ae429b4d6,network=Network(fc21b99b-4e34-422c-be05-0a440009dac4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdb46acd4-80') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:40:44 compute-1 nova_compute[225855]: 2026-01-20 14:40:44.975 225859 DEBUG nova.virt.libvirt.vif [None req-632ac997-4d58-4786-bcf9-7342e361eb4b c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T14:39:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-2088149366',display_name='tempest-AttachInterfacesTestJSON-server-2088149366',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-2088149366',id=66,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJZwLcyXdmuPk9iAZlMOAxeFM3EdHKE0x5nJT3i2GTbVf6EkhYVj3hEmoeSwYo6iZrNjT6w/g2TndK4CzLIvGDWLEyKfIPgg2vbEtoL1oIxCHYN2ytrctbkHi1netydaRQ==',key_name='tempest-keypair-230916378',keypairs=<?>,launch_index=0,launched_at=2026-01-20T14:39:53Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='e3f93fd4b2154dda9f38e62334904303',ramdisk_id='',reservation_id='r-odubqy1o',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-305746947',owner_user_name='tempest-AttachInterfacesTestJSON-305746947-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T14:39:53Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c8a9fb458d27434495a77a94827b6097',uuid=d5c2df9d-748f-4df2-9392-b45741975f65,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b48170b0-717d-48f0-8172-742a4a8596e9", "address": "fa:16:3e:3b:35:f2", "network": {"id": "fc21b99b-4e34-422c-be05-0a440009dac4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-808285772-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.174", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3f93fd4b2154dda9f38e62334904303", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb48170b0-71", "ovs_interfaceid": "b48170b0-717d-48f0-8172-742a4a8596e9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 20 14:40:44 compute-1 nova_compute[225855]: 2026-01-20 14:40:44.975 225859 DEBUG nova.network.os_vif_util [None req-632ac997-4d58-4786-bcf9-7342e361eb4b c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Converting VIF {"id": "b48170b0-717d-48f0-8172-742a4a8596e9", "address": "fa:16:3e:3b:35:f2", "network": {"id": "fc21b99b-4e34-422c-be05-0a440009dac4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-808285772-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.174", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3f93fd4b2154dda9f38e62334904303", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb48170b0-71", "ovs_interfaceid": "b48170b0-717d-48f0-8172-742a4a8596e9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:40:44 compute-1 nova_compute[225855]: 2026-01-20 14:40:44.976 225859 DEBUG nova.network.os_vif_util [None req-632ac997-4d58-4786-bcf9-7342e361eb4b c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:3b:35:f2,bridge_name='br-int',has_traffic_filtering=True,id=b48170b0-717d-48f0-8172-742a4a8596e9,network=Network(fc21b99b-4e34-422c-be05-0a440009dac4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb48170b0-71') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:40:44 compute-1 podman[253246]: 2026-01-20 14:40:44.977273238 +0000 UTC m=+0.066478414 container died ad0796275d2b7bc69c4b451ca13c4323d2608039a8c7168dc31ea54a35ab1b48 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fc21b99b-4e34-422c-be05-0a440009dac4, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Jan 20 14:40:44 compute-1 nova_compute[225855]: 2026-01-20 14:40:44.977 225859 DEBUG os_vif [None req-632ac997-4d58-4786-bcf9-7342e361eb4b c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:3b:35:f2,bridge_name='br-int',has_traffic_filtering=True,id=b48170b0-717d-48f0-8172-742a4a8596e9,network=Network(fc21b99b-4e34-422c-be05-0a440009dac4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb48170b0-71') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 20 14:40:44 compute-1 nova_compute[225855]: 2026-01-20 14:40:44.979 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:40:44 compute-1 nova_compute[225855]: 2026-01-20 14:40:44.979 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb48170b0-71, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:40:44 compute-1 nova_compute[225855]: 2026-01-20 14:40:44.981 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:40:44 compute-1 nova_compute[225855]: 2026-01-20 14:40:44.982 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 14:40:44 compute-1 nova_compute[225855]: 2026-01-20 14:40:44.984 225859 DEBUG nova.virt.libvirt.guest [req-7399bf5a-a499-4702-9d57-c40b71169064 req-d8aeda9a-d9d9-43f9-a60a-73779b0052ef 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:24:df:88"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapdb46acd4-80"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Jan 20 14:40:44 compute-1 nova_compute[225855]: 2026-01-20 14:40:44.988 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:40:44 compute-1 nova_compute[225855]: 2026-01-20 14:40:44.992 225859 INFO os_vif [None req-632ac997-4d58-4786-bcf9-7342e361eb4b c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:3b:35:f2,bridge_name='br-int',has_traffic_filtering=True,id=b48170b0-717d-48f0-8172-742a4a8596e9,network=Network(fc21b99b-4e34-422c-be05-0a440009dac4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb48170b0-71')
Jan 20 14:40:44 compute-1 nova_compute[225855]: 2026-01-20 14:40:44.993 225859 DEBUG nova.virt.libvirt.vif [None req-632ac997-4d58-4786-bcf9-7342e361eb4b c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T14:39:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-2088149366',display_name='tempest-AttachInterfacesTestJSON-server-2088149366',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-2088149366',id=66,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJZwLcyXdmuPk9iAZlMOAxeFM3EdHKE0x5nJT3i2GTbVf6EkhYVj3hEmoeSwYo6iZrNjT6w/g2TndK4CzLIvGDWLEyKfIPgg2vbEtoL1oIxCHYN2ytrctbkHi1netydaRQ==',key_name='tempest-keypair-230916378',keypairs=<?>,launch_index=0,launched_at=2026-01-20T14:39:53Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='e3f93fd4b2154dda9f38e62334904303',ramdisk_id='',reservation_id='r-odubqy1o',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-305746947',owner_user_name='tempest-AttachInterfacesTestJSON-305746947-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T14:39:53Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c8a9fb458d27434495a77a94827b6097',uuid=d5c2df9d-748f-4df2-9392-b45741975f65,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "db46acd4-809b-4127-ad48-870ae429b4d6", "address": "fa:16:3e:24:df:88", "network": {"id": "fc21b99b-4e34-422c-be05-0a440009dac4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-808285772-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3f93fd4b2154dda9f38e62334904303", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb46acd4-80", "ovs_interfaceid": "db46acd4-809b-4127-ad48-870ae429b4d6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 20 14:40:44 compute-1 nova_compute[225855]: 2026-01-20 14:40:44.993 225859 DEBUG nova.network.os_vif_util [None req-632ac997-4d58-4786-bcf9-7342e361eb4b c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Converting VIF {"id": "db46acd4-809b-4127-ad48-870ae429b4d6", "address": "fa:16:3e:24:df:88", "network": {"id": "fc21b99b-4e34-422c-be05-0a440009dac4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-808285772-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3f93fd4b2154dda9f38e62334904303", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb46acd4-80", "ovs_interfaceid": "db46acd4-809b-4127-ad48-870ae429b4d6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:40:44 compute-1 nova_compute[225855]: 2026-01-20 14:40:44.994 225859 DEBUG nova.network.os_vif_util [None req-632ac997-4d58-4786-bcf9-7342e361eb4b c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:24:df:88,bridge_name='br-int',has_traffic_filtering=True,id=db46acd4-809b-4127-ad48-870ae429b4d6,network=Network(fc21b99b-4e34-422c-be05-0a440009dac4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdb46acd4-80') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:40:44 compute-1 nova_compute[225855]: 2026-01-20 14:40:44.994 225859 DEBUG os_vif [None req-632ac997-4d58-4786-bcf9-7342e361eb4b c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:24:df:88,bridge_name='br-int',has_traffic_filtering=True,id=db46acd4-809b-4127-ad48-870ae429b4d6,network=Network(fc21b99b-4e34-422c-be05-0a440009dac4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdb46acd4-80') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 20 14:40:44 compute-1 nova_compute[225855]: 2026-01-20 14:40:44.996 225859 DEBUG nova.virt.libvirt.guest [req-7399bf5a-a499-4702-9d57-c40b71169064 req-d8aeda9a-d9d9-43f9-a60a-73779b0052ef 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:24:df:88"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapdb46acd4-80"/></interface>not found in domain: <domain type='kvm'>
Jan 20 14:40:44 compute-1 nova_compute[225855]:   <name>instance-00000042</name>
Jan 20 14:40:44 compute-1 nova_compute[225855]:   <uuid>d5c2df9d-748f-4df2-9392-b45741975f65</uuid>
Jan 20 14:40:44 compute-1 nova_compute[225855]:   <metadata>
Jan 20 14:40:44 compute-1 nova_compute[225855]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 14:40:44 compute-1 nova_compute[225855]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 14:40:44 compute-1 nova_compute[225855]:       <nova:name>tempest-AttachInterfacesTestJSON-server-2088149366</nova:name>
Jan 20 14:40:44 compute-1 nova_compute[225855]:       <nova:creationTime>2026-01-20 14:39:50</nova:creationTime>
Jan 20 14:40:44 compute-1 nova_compute[225855]:       <nova:flavor name="m1.nano">
Jan 20 14:40:44 compute-1 nova_compute[225855]:         <nova:memory>128</nova:memory>
Jan 20 14:40:44 compute-1 nova_compute[225855]:         <nova:disk>1</nova:disk>
Jan 20 14:40:44 compute-1 nova_compute[225855]:         <nova:swap>0</nova:swap>
Jan 20 14:40:44 compute-1 nova_compute[225855]:         <nova:ephemeral>0</nova:ephemeral>
Jan 20 14:40:44 compute-1 nova_compute[225855]:         <nova:vcpus>1</nova:vcpus>
Jan 20 14:40:44 compute-1 nova_compute[225855]:       </nova:flavor>
Jan 20 14:40:44 compute-1 nova_compute[225855]:       <nova:owner>
Jan 20 14:40:44 compute-1 nova_compute[225855]:         <nova:user uuid="c8a9fb458d27434495a77a94827b6097">tempest-AttachInterfacesTestJSON-305746947-project-member</nova:user>
Jan 20 14:40:44 compute-1 nova_compute[225855]:         <nova:project uuid="e3f93fd4b2154dda9f38e62334904303">tempest-AttachInterfacesTestJSON-305746947</nova:project>
Jan 20 14:40:44 compute-1 nova_compute[225855]:       </nova:owner>
Jan 20 14:40:44 compute-1 nova_compute[225855]:       <nova:root type="image" uuid="a32b3e07-16d8-46fd-9a7b-c242c432fcf9"/>
Jan 20 14:40:44 compute-1 nova_compute[225855]:       <nova:ports>
Jan 20 14:40:44 compute-1 nova_compute[225855]:         <nova:port uuid="b48170b0-717d-48f0-8172-742a4a8596e9">
Jan 20 14:40:44 compute-1 nova_compute[225855]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Jan 20 14:40:44 compute-1 nova_compute[225855]:         </nova:port>
Jan 20 14:40:44 compute-1 nova_compute[225855]:       </nova:ports>
Jan 20 14:40:44 compute-1 nova_compute[225855]:     </nova:instance>
Jan 20 14:40:44 compute-1 nova_compute[225855]:   </metadata>
Jan 20 14:40:44 compute-1 nova_compute[225855]:   <memory unit='KiB'>131072</memory>
Jan 20 14:40:44 compute-1 nova_compute[225855]:   <currentMemory unit='KiB'>131072</currentMemory>
Jan 20 14:40:44 compute-1 nova_compute[225855]:   <vcpu placement='static'>1</vcpu>
Jan 20 14:40:44 compute-1 nova_compute[225855]:   <sysinfo type='smbios'>
Jan 20 14:40:44 compute-1 nova_compute[225855]:     <system>
Jan 20 14:40:44 compute-1 nova_compute[225855]:       <entry name='manufacturer'>RDO</entry>
Jan 20 14:40:44 compute-1 nova_compute[225855]:       <entry name='product'>OpenStack Compute</entry>
Jan 20 14:40:44 compute-1 nova_compute[225855]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 14:40:44 compute-1 nova_compute[225855]:       <entry name='serial'>d5c2df9d-748f-4df2-9392-b45741975f65</entry>
Jan 20 14:40:44 compute-1 nova_compute[225855]:       <entry name='uuid'>d5c2df9d-748f-4df2-9392-b45741975f65</entry>
Jan 20 14:40:44 compute-1 nova_compute[225855]:       <entry name='family'>Virtual Machine</entry>
Jan 20 14:40:44 compute-1 nova_compute[225855]:     </system>
Jan 20 14:40:44 compute-1 nova_compute[225855]:   </sysinfo>
Jan 20 14:40:44 compute-1 nova_compute[225855]:   <os>
Jan 20 14:40:44 compute-1 nova_compute[225855]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Jan 20 14:40:44 compute-1 nova_compute[225855]:     <boot dev='hd'/>
Jan 20 14:40:44 compute-1 nova_compute[225855]:     <smbios mode='sysinfo'/>
Jan 20 14:40:44 compute-1 nova_compute[225855]:   </os>
Jan 20 14:40:44 compute-1 nova_compute[225855]:   <features>
Jan 20 14:40:44 compute-1 nova_compute[225855]:     <acpi/>
Jan 20 14:40:44 compute-1 nova_compute[225855]:     <apic/>
Jan 20 14:40:44 compute-1 nova_compute[225855]:     <vmcoreinfo state='on'/>
Jan 20 14:40:44 compute-1 nova_compute[225855]:   </features>
Jan 20 14:40:44 compute-1 nova_compute[225855]:   <cpu mode='custom' match='exact' check='partial'>
Jan 20 14:40:44 compute-1 nova_compute[225855]:     <model fallback='allow'>Nehalem</model>
Jan 20 14:40:44 compute-1 nova_compute[225855]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Jan 20 14:40:44 compute-1 nova_compute[225855]:   </cpu>
Jan 20 14:40:44 compute-1 nova_compute[225855]:   <clock offset='utc'>
Jan 20 14:40:44 compute-1 nova_compute[225855]:     <timer name='pit' tickpolicy='delay'/>
Jan 20 14:40:44 compute-1 nova_compute[225855]:     <timer name='rtc' tickpolicy='catchup'/>
Jan 20 14:40:44 compute-1 nova_compute[225855]:     <timer name='hpet' present='no'/>
Jan 20 14:40:44 compute-1 nova_compute[225855]:   </clock>
Jan 20 14:40:44 compute-1 nova_compute[225855]:   <on_poweroff>destroy</on_poweroff>
Jan 20 14:40:44 compute-1 nova_compute[225855]:   <on_reboot>restart</on_reboot>
Jan 20 14:40:44 compute-1 nova_compute[225855]:   <on_crash>destroy</on_crash>
Jan 20 14:40:44 compute-1 nova_compute[225855]:   <devices>
Jan 20 14:40:44 compute-1 nova_compute[225855]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 20 14:40:44 compute-1 nova_compute[225855]:     <disk type='network' device='disk'>
Jan 20 14:40:44 compute-1 nova_compute[225855]:       <driver name='qemu' type='raw' cache='none'/>
Jan 20 14:40:44 compute-1 nova_compute[225855]:       <auth username='openstack'>
Jan 20 14:40:44 compute-1 nova_compute[225855]:         <secret type='ceph' uuid='e399cf45-e6b6-5393-99f1-75c601d3f188'/>
Jan 20 14:40:44 compute-1 nova_compute[225855]:       </auth>
Jan 20 14:40:44 compute-1 nova_compute[225855]:       <source protocol='rbd' name='vms/d5c2df9d-748f-4df2-9392-b45741975f65_disk'>
Jan 20 14:40:44 compute-1 nova_compute[225855]:         <host name='192.168.122.100' port='6789'/>
Jan 20 14:40:44 compute-1 nova_compute[225855]:         <host name='192.168.122.102' port='6789'/>
Jan 20 14:40:44 compute-1 nova_compute[225855]:         <host name='192.168.122.101' port='6789'/>
Jan 20 14:40:44 compute-1 nova_compute[225855]:       </source>
Jan 20 14:40:44 compute-1 nova_compute[225855]:       <target dev='vda' bus='virtio'/>
Jan 20 14:40:44 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Jan 20 14:40:44 compute-1 nova_compute[225855]:     </disk>
Jan 20 14:40:44 compute-1 nova_compute[225855]:     <disk type='network' device='cdrom'>
Jan 20 14:40:44 compute-1 nova_compute[225855]:       <driver name='qemu' type='raw' cache='none'/>
Jan 20 14:40:44 compute-1 nova_compute[225855]:       <auth username='openstack'>
Jan 20 14:40:44 compute-1 nova_compute[225855]:         <secret type='ceph' uuid='e399cf45-e6b6-5393-99f1-75c601d3f188'/>
Jan 20 14:40:44 compute-1 nova_compute[225855]:       </auth>
Jan 20 14:40:44 compute-1 nova_compute[225855]:       <source protocol='rbd' name='vms/d5c2df9d-748f-4df2-9392-b45741975f65_disk.config'>
Jan 20 14:40:44 compute-1 nova_compute[225855]:         <host name='192.168.122.100' port='6789'/>
Jan 20 14:40:44 compute-1 nova_compute[225855]:         <host name='192.168.122.102' port='6789'/>
Jan 20 14:40:44 compute-1 nova_compute[225855]:         <host name='192.168.122.101' port='6789'/>
Jan 20 14:40:44 compute-1 nova_compute[225855]:       </source>
Jan 20 14:40:44 compute-1 nova_compute[225855]:       <target dev='sda' bus='sata'/>
Jan 20 14:40:44 compute-1 nova_compute[225855]:       <readonly/>
Jan 20 14:40:44 compute-1 nova_compute[225855]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Jan 20 14:40:44 compute-1 nova_compute[225855]:     </disk>
Jan 20 14:40:44 compute-1 nova_compute[225855]:     <controller type='pci' index='0' model='pcie-root'/>
Jan 20 14:40:44 compute-1 nova_compute[225855]:     <controller type='pci' index='1' model='pcie-root-port'>
Jan 20 14:40:44 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 14:40:44 compute-1 nova_compute[225855]:       <target chassis='1' port='0x10'/>
Jan 20 14:40:44 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Jan 20 14:40:44 compute-1 nova_compute[225855]:     </controller>
Jan 20 14:40:44 compute-1 nova_compute[225855]:     <controller type='pci' index='2' model='pcie-root-port'>
Jan 20 14:40:44 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 14:40:44 compute-1 nova_compute[225855]:       <target chassis='2' port='0x11'/>
Jan 20 14:40:44 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Jan 20 14:40:44 compute-1 nova_compute[225855]:     </controller>
Jan 20 14:40:44 compute-1 nova_compute[225855]:     <controller type='pci' index='3' model='pcie-root-port'>
Jan 20 14:40:44 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 14:40:44 compute-1 nova_compute[225855]:       <target chassis='3' port='0x12'/>
Jan 20 14:40:44 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Jan 20 14:40:44 compute-1 nova_compute[225855]:     </controller>
Jan 20 14:40:44 compute-1 nova_compute[225855]:     <controller type='pci' index='4' model='pcie-root-port'>
Jan 20 14:40:44 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 14:40:44 compute-1 nova_compute[225855]:       <target chassis='4' port='0x13'/>
Jan 20 14:40:44 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Jan 20 14:40:44 compute-1 nova_compute[225855]:     </controller>
Jan 20 14:40:44 compute-1 nova_compute[225855]:     <controller type='pci' index='5' model='pcie-root-port'>
Jan 20 14:40:44 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 14:40:44 compute-1 nova_compute[225855]:       <target chassis='5' port='0x14'/>
Jan 20 14:40:44 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Jan 20 14:40:44 compute-1 nova_compute[225855]:     </controller>
Jan 20 14:40:44 compute-1 nova_compute[225855]:     <controller type='pci' index='6' model='pcie-root-port'>
Jan 20 14:40:44 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 14:40:44 compute-1 nova_compute[225855]:       <target chassis='6' port='0x15'/>
Jan 20 14:40:44 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Jan 20 14:40:44 compute-1 nova_compute[225855]:     </controller>
Jan 20 14:40:44 compute-1 nova_compute[225855]:     <controller type='pci' index='7' model='pcie-root-port'>
Jan 20 14:40:44 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 14:40:44 compute-1 nova_compute[225855]:       <target chassis='7' port='0x16'/>
Jan 20 14:40:44 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Jan 20 14:40:44 compute-1 nova_compute[225855]:     </controller>
Jan 20 14:40:44 compute-1 nova_compute[225855]:     <controller type='pci' index='8' model='pcie-root-port'>
Jan 20 14:40:44 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 14:40:44 compute-1 nova_compute[225855]:       <target chassis='8' port='0x17'/>
Jan 20 14:40:44 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Jan 20 14:40:44 compute-1 nova_compute[225855]:     </controller>
Jan 20 14:40:44 compute-1 nova_compute[225855]:     <controller type='pci' index='9' model='pcie-root-port'>
Jan 20 14:40:44 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 14:40:44 compute-1 nova_compute[225855]:       <target chassis='9' port='0x18'/>
Jan 20 14:40:44 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Jan 20 14:40:44 compute-1 nova_compute[225855]:     </controller>
Jan 20 14:40:44 compute-1 nova_compute[225855]:     <controller type='pci' index='10' model='pcie-root-port'>
Jan 20 14:40:44 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 14:40:44 compute-1 nova_compute[225855]:       <target chassis='10' port='0x19'/>
Jan 20 14:40:44 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Jan 20 14:40:44 compute-1 nova_compute[225855]:     </controller>
Jan 20 14:40:44 compute-1 nova_compute[225855]:     <controller type='pci' index='11' model='pcie-root-port'>
Jan 20 14:40:44 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 14:40:44 compute-1 nova_compute[225855]:       <target chassis='11' port='0x1a'/>
Jan 20 14:40:44 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Jan 20 14:40:44 compute-1 nova_compute[225855]:     </controller>
Jan 20 14:40:44 compute-1 nova_compute[225855]:     <controller type='pci' index='12' model='pcie-root-port'>
Jan 20 14:40:44 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 14:40:44 compute-1 nova_compute[225855]:       <target chassis='12' port='0x1b'/>
Jan 20 14:40:44 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Jan 20 14:40:44 compute-1 nova_compute[225855]:     </controller>
Jan 20 14:40:44 compute-1 nova_compute[225855]:     <controller type='pci' index='13' model='pcie-root-port'>
Jan 20 14:40:44 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 14:40:44 compute-1 nova_compute[225855]:       <target chassis='13' port='0x1c'/>
Jan 20 14:40:44 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Jan 20 14:40:44 compute-1 nova_compute[225855]:     </controller>
Jan 20 14:40:44 compute-1 nova_compute[225855]:     <controller type='pci' index='14' model='pcie-root-port'>
Jan 20 14:40:44 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 14:40:44 compute-1 nova_compute[225855]:       <target chassis='14' port='0x1d'/>
Jan 20 14:40:44 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Jan 20 14:40:44 compute-1 nova_compute[225855]:     </controller>
Jan 20 14:40:44 compute-1 nova_compute[225855]:     <controller type='pci' index='15' model='pcie-root-port'>
Jan 20 14:40:44 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 14:40:44 compute-1 nova_compute[225855]:       <target chassis='15' port='0x1e'/>
Jan 20 14:40:44 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Jan 20 14:40:44 compute-1 nova_compute[225855]:     </controller>
Jan 20 14:40:44 compute-1 nova_compute[225855]:     <controller type='pci' index='16' model='pcie-root-port'>
Jan 20 14:40:44 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 14:40:44 compute-1 nova_compute[225855]:       <target chassis='16' port='0x1f'/>
Jan 20 14:40:44 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Jan 20 14:40:44 compute-1 nova_compute[225855]:     </controller>
Jan 20 14:40:44 compute-1 nova_compute[225855]:     <controller type='pci' index='17' model='pcie-root-port'>
Jan 20 14:40:44 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 14:40:44 compute-1 nova_compute[225855]:       <target chassis='17' port='0x20'/>
Jan 20 14:40:44 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Jan 20 14:40:44 compute-1 nova_compute[225855]:     </controller>
Jan 20 14:40:44 compute-1 nova_compute[225855]:     <controller type='pci' index='18' model='pcie-root-port'>
Jan 20 14:40:44 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 14:40:44 compute-1 nova_compute[225855]:       <target chassis='18' port='0x21'/>
Jan 20 14:40:44 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Jan 20 14:40:44 compute-1 nova_compute[225855]:     </controller>
Jan 20 14:40:44 compute-1 nova_compute[225855]:     <controller type='pci' index='19' model='pcie-root-port'>
Jan 20 14:40:44 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 14:40:44 compute-1 nova_compute[225855]:       <target chassis='19' port='0x22'/>
Jan 20 14:40:44 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Jan 20 14:40:44 compute-1 nova_compute[225855]:     </controller>
Jan 20 14:40:44 compute-1 nova_compute[225855]:     <controller type='pci' index='20' model='pcie-root-port'>
Jan 20 14:40:44 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 14:40:44 compute-1 nova_compute[225855]:       <target chassis='20' port='0x23'/>
Jan 20 14:40:44 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Jan 20 14:40:44 compute-1 nova_compute[225855]:     </controller>
Jan 20 14:40:44 compute-1 nova_compute[225855]:     <controller type='pci' index='21' model='pcie-root-port'>
Jan 20 14:40:44 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 14:40:44 compute-1 nova_compute[225855]:       <target chassis='21' port='0x24'/>
Jan 20 14:40:44 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Jan 20 14:40:44 compute-1 nova_compute[225855]:     </controller>
Jan 20 14:40:44 compute-1 nova_compute[225855]:     <controller type='pci' index='22' model='pcie-root-port'>
Jan 20 14:40:44 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 14:40:44 compute-1 nova_compute[225855]:       <target chassis='22' port='0x25'/>
Jan 20 14:40:44 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Jan 20 14:40:44 compute-1 nova_compute[225855]:     </controller>
Jan 20 14:40:44 compute-1 nova_compute[225855]:     <controller type='pci' index='23' model='pcie-root-port'>
Jan 20 14:40:44 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 14:40:44 compute-1 nova_compute[225855]:       <target chassis='23' port='0x26'/>
Jan 20 14:40:44 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Jan 20 14:40:44 compute-1 nova_compute[225855]:     </controller>
Jan 20 14:40:44 compute-1 nova_compute[225855]:     <controller type='pci' index='24' model='pcie-root-port'>
Jan 20 14:40:44 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 14:40:44 compute-1 nova_compute[225855]:       <target chassis='24' port='0x27'/>
Jan 20 14:40:44 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Jan 20 14:40:44 compute-1 nova_compute[225855]:     </controller>
Jan 20 14:40:44 compute-1 nova_compute[225855]:     <controller type='pci' index='25' model='pcie-root-port'>
Jan 20 14:40:44 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 14:40:44 compute-1 nova_compute[225855]:       <target chassis='25' port='0x28'/>
Jan 20 14:40:44 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Jan 20 14:40:44 compute-1 nova_compute[225855]:     </controller>
Jan 20 14:40:44 compute-1 nova_compute[225855]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Jan 20 14:40:44 compute-1 nova_compute[225855]:       <model name='pcie-pci-bridge'/>
Jan 20 14:40:44 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Jan 20 14:40:44 compute-1 nova_compute[225855]:     </controller>
Jan 20 14:40:44 compute-1 nova_compute[225855]:     <controller type='usb' index='0' model='piix3-uhci'>
Jan 20 14:40:44 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Jan 20 14:40:44 compute-1 nova_compute[225855]:     </controller>
Jan 20 14:40:44 compute-1 nova_compute[225855]:     <controller type='sata' index='0'>
Jan 20 14:40:44 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Jan 20 14:40:44 compute-1 nova_compute[225855]:     </controller>
Jan 20 14:40:44 compute-1 nova_compute[225855]:     <interface type='ethernet'>
Jan 20 14:40:44 compute-1 nova_compute[225855]:       <mac address='fa:16:3e:3b:35:f2'/>
Jan 20 14:40:44 compute-1 nova_compute[225855]:       <target dev='tapb48170b0-71'/>
Jan 20 14:40:44 compute-1 nova_compute[225855]:       <model type='virtio'/>
Jan 20 14:40:44 compute-1 nova_compute[225855]:       <driver name='vhost' rx_queue_size='512'/>
Jan 20 14:40:44 compute-1 nova_compute[225855]:       <mtu size='1442'/>
Jan 20 14:40:44 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Jan 20 14:40:44 compute-1 nova_compute[225855]:     </interface>
Jan 20 14:40:44 compute-1 nova_compute[225855]:     <interface type='ethernet'>
Jan 20 14:40:44 compute-1 nova_compute[225855]:       <mac address='fa:16:3e:d2:51:f5'/>
Jan 20 14:40:44 compute-1 nova_compute[225855]:       <target dev='tap5fd7b3ad-0c'/>
Jan 20 14:40:44 compute-1 nova_compute[225855]:       <model type='virtio'/>
Jan 20 14:40:44 compute-1 nova_compute[225855]:       <driver name='vhost' rx_queue_size='512'/>
Jan 20 14:40:44 compute-1 nova_compute[225855]:       <mtu size='1442'/>
Jan 20 14:40:44 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
Jan 20 14:40:44 compute-1 nova_compute[225855]:     </interface>
Jan 20 14:40:44 compute-1 nova_compute[225855]:     <interface type='ethernet'>
Jan 20 14:40:44 compute-1 nova_compute[225855]:       <mac address='fa:16:3e:e7:0e:b2'/>
Jan 20 14:40:44 compute-1 nova_compute[225855]:       <target dev='tap0b83ac2a-72'/>
Jan 20 14:40:44 compute-1 nova_compute[225855]:       <model type='virtio'/>
Jan 20 14:40:44 compute-1 nova_compute[225855]:       <driver name='vhost' rx_queue_size='512'/>
Jan 20 14:40:44 compute-1 nova_compute[225855]:       <mtu size='1442'/>
Jan 20 14:40:44 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x08' slot='0x00' function='0x0'/>
Jan 20 14:40:44 compute-1 nova_compute[225855]:     </interface>
Jan 20 14:40:44 compute-1 nova_compute[225855]:     <serial type='pty'>
Jan 20 14:40:44 compute-1 nova_compute[225855]:       <log file='/var/lib/nova/instances/d5c2df9d-748f-4df2-9392-b45741975f65/console.log' append='off'/>
Jan 20 14:40:44 compute-1 nova_compute[225855]:       <target type='isa-serial' port='0'>
Jan 20 14:40:44 compute-1 nova_compute[225855]:         <model name='isa-serial'/>
Jan 20 14:40:44 compute-1 nova_compute[225855]:       </target>
Jan 20 14:40:44 compute-1 nova_compute[225855]:     </serial>
Jan 20 14:40:44 compute-1 nova_compute[225855]:     <console type='pty'>
Jan 20 14:40:44 compute-1 nova_compute[225855]:       <log file='/var/lib/nova/instances/d5c2df9d-748f-4df2-9392-b45741975f65/console.log' append='off'/>
Jan 20 14:40:44 compute-1 nova_compute[225855]:       <target type='serial' port='0'/>
Jan 20 14:40:44 compute-1 nova_compute[225855]:     </console>
Jan 20 14:40:44 compute-1 nova_compute[225855]:     <input type='tablet' bus='usb'>
Jan 20 14:40:44 compute-1 nova_compute[225855]:       <address type='usb' bus='0' port='1'/>
Jan 20 14:40:44 compute-1 nova_compute[225855]:     </input>
Jan 20 14:40:44 compute-1 nova_compute[225855]:     <input type='mouse' bus='ps2'/>
Jan 20 14:40:44 compute-1 nova_compute[225855]:     <input type='keyboard' bus='ps2'/>
Jan 20 14:40:44 compute-1 nova_compute[225855]:     <graphics type='vnc' port='-1' autoport='yes' listen='::0'>
Jan 20 14:40:44 compute-1 nova_compute[225855]:       <listen type='address' address='::0'/>
Jan 20 14:40:44 compute-1 nova_compute[225855]:     </graphics>
Jan 20 14:40:44 compute-1 nova_compute[225855]:     <audio id='1' type='none'/>
Jan 20 14:40:44 compute-1 nova_compute[225855]:     <video>
Jan 20 14:40:44 compute-1 nova_compute[225855]:       <model type='virtio' heads='1' primary='yes'/>
Jan 20 14:40:44 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Jan 20 14:40:44 compute-1 nova_compute[225855]:     </video>
Jan 20 14:40:44 compute-1 nova_compute[225855]:     <watchdog model='itco' action='reset'/>
Jan 20 14:40:44 compute-1 nova_compute[225855]:     <memballoon model='virtio'>
Jan 20 14:40:44 compute-1 nova_compute[225855]:       <stats period='10'/>
Jan 20 14:40:44 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Jan 20 14:40:44 compute-1 nova_compute[225855]:     </memballoon>
Jan 20 14:40:44 compute-1 nova_compute[225855]:     <rng model='virtio'>
Jan 20 14:40:44 compute-1 nova_compute[225855]:       <backend model='random'>/dev/urandom</backend>
Jan 20 14:40:44 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Jan 20 14:40:44 compute-1 nova_compute[225855]:     </rng>
Jan 20 14:40:44 compute-1 nova_compute[225855]:   </devices>
Jan 20 14:40:44 compute-1 nova_compute[225855]: </domain>
Jan 20 14:40:44 compute-1 nova_compute[225855]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Jan 20 14:40:44 compute-1 nova_compute[225855]: 2026-01-20 14:40:44.997 225859 WARNING nova.virt.libvirt.driver [req-7399bf5a-a499-4702-9d57-c40b71169064 req-d8aeda9a-d9d9-43f9-a60a-73779b0052ef 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d5c2df9d-748f-4df2-9392-b45741975f65] Detaching interface fa:16:3e:24:df:88 failed because the device is no longer found on the guest.: nova.exception.DeviceNotFound: Device 'tapdb46acd4-80' not found.
Jan 20 14:40:44 compute-1 nova_compute[225855]: 2026-01-20 14:40:44.998 225859 DEBUG nova.virt.libvirt.vif [req-7399bf5a-a499-4702-9d57-c40b71169064 req-d8aeda9a-d9d9-43f9-a60a-73779b0052ef 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T14:39:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-2088149366',display_name='tempest-AttachInterfacesTestJSON-server-2088149366',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-2088149366',id=66,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJZwLcyXdmuPk9iAZlMOAxeFM3EdHKE0x5nJT3i2GTbVf6EkhYVj3hEmoeSwYo6iZrNjT6w/g2TndK4CzLIvGDWLEyKfIPgg2vbEtoL1oIxCHYN2ytrctbkHi1netydaRQ==',key_name='tempest-keypair-230916378',keypairs=<?>,launch_index=0,launched_at=2026-01-20T14:39:53Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='e3f93fd4b2154dda9f38e62334904303',ramdisk_id='',reservation_id='r-odubqy1o',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-305746947',owner_user_name='tempest-AttachInterfacesTestJSON-305746947-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T14:40:44Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c8a9fb458d27434495a77a94827b6097',uuid=d5c2df9d-748f-4df2-9392-b45741975f65,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "db46acd4-809b-4127-ad48-870ae429b4d6", "address": "fa:16:3e:24:df:88", "network": {"id": "fc21b99b-4e34-422c-be05-0a440009dac4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-808285772-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3f93fd4b2154dda9f38e62334904303", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb46acd4-80", "ovs_interfaceid": "db46acd4-809b-4127-ad48-870ae429b4d6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 20 14:40:44 compute-1 nova_compute[225855]: 2026-01-20 14:40:44.998 225859 DEBUG nova.network.os_vif_util [req-7399bf5a-a499-4702-9d57-c40b71169064 req-d8aeda9a-d9d9-43f9-a60a-73779b0052ef 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Converting VIF {"id": "db46acd4-809b-4127-ad48-870ae429b4d6", "address": "fa:16:3e:24:df:88", "network": {"id": "fc21b99b-4e34-422c-be05-0a440009dac4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-808285772-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3f93fd4b2154dda9f38e62334904303", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb46acd4-80", "ovs_interfaceid": "db46acd4-809b-4127-ad48-870ae429b4d6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:40:44 compute-1 nova_compute[225855]: 2026-01-20 14:40:44.998 225859 DEBUG nova.network.os_vif_util [req-7399bf5a-a499-4702-9d57-c40b71169064 req-d8aeda9a-d9d9-43f9-a60a-73779b0052ef 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:24:df:88,bridge_name='br-int',has_traffic_filtering=True,id=db46acd4-809b-4127-ad48-870ae429b4d6,network=Network(fc21b99b-4e34-422c-be05-0a440009dac4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdb46acd4-80') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:40:44 compute-1 nova_compute[225855]: 2026-01-20 14:40:44.999 225859 DEBUG os_vif [req-7399bf5a-a499-4702-9d57-c40b71169064 req-d8aeda9a-d9d9-43f9-a60a-73779b0052ef 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:24:df:88,bridge_name='br-int',has_traffic_filtering=True,id=db46acd4-809b-4127-ad48-870ae429b4d6,network=Network(fc21b99b-4e34-422c-be05-0a440009dac4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdb46acd4-80') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 20 14:40:45 compute-1 nova_compute[225855]: 2026-01-20 14:40:45.000 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:40:45 compute-1 nova_compute[225855]: 2026-01-20 14:40:45.000 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdb46acd4-80, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:40:45 compute-1 nova_compute[225855]: 2026-01-20 14:40:45.000 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:40:45 compute-1 nova_compute[225855]: 2026-01-20 14:40:45.001 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:40:45 compute-1 nova_compute[225855]: 2026-01-20 14:40:45.001 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdb46acd4-80, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:40:45 compute-1 nova_compute[225855]: 2026-01-20 14:40:45.002 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:40:45 compute-1 nova_compute[225855]: 2026-01-20 14:40:45.002 225859 INFO os_vif [None req-632ac997-4d58-4786-bcf9-7342e361eb4b c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:24:df:88,bridge_name='br-int',has_traffic_filtering=True,id=db46acd4-809b-4127-ad48-870ae429b4d6,network=Network(fc21b99b-4e34-422c-be05-0a440009dac4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdb46acd4-80')
Jan 20 14:40:45 compute-1 nova_compute[225855]: 2026-01-20 14:40:45.003 225859 DEBUG nova.virt.libvirt.vif [None req-632ac997-4d58-4786-bcf9-7342e361eb4b c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T14:39:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-2088149366',display_name='tempest-AttachInterfacesTestJSON-server-2088149366',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-2088149366',id=66,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJZwLcyXdmuPk9iAZlMOAxeFM3EdHKE0x5nJT3i2GTbVf6EkhYVj3hEmoeSwYo6iZrNjT6w/g2TndK4CzLIvGDWLEyKfIPgg2vbEtoL1oIxCHYN2ytrctbkHi1netydaRQ==',key_name='tempest-keypair-230916378',keypairs=<?>,launch_index=0,launched_at=2026-01-20T14:39:53Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='e3f93fd4b2154dda9f38e62334904303',ramdisk_id='',reservation_id='r-odubqy1o',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-305746947',owner_user_name='tempest-AttachInterfacesTestJSON-305746947-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T14:39:53Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c8a9fb458d27434495a77a94827b6097',uuid=d5c2df9d-748f-4df2-9392-b45741975f65,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5fd7b3ad-0cf1-4294-b552-6141c8ee85bd", "address": "fa:16:3e:d2:51:f5", "network": {"id": "fc21b99b-4e34-422c-be05-0a440009dac4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-808285772-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3f93fd4b2154dda9f38e62334904303", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5fd7b3ad-0c", "ovs_interfaceid": "5fd7b3ad-0cf1-4294-b552-6141c8ee85bd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 20 14:40:45 compute-1 nova_compute[225855]: 2026-01-20 14:40:45.003 225859 DEBUG nova.network.os_vif_util [None req-632ac997-4d58-4786-bcf9-7342e361eb4b c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Converting VIF {"id": "5fd7b3ad-0cf1-4294-b552-6141c8ee85bd", "address": "fa:16:3e:d2:51:f5", "network": {"id": "fc21b99b-4e34-422c-be05-0a440009dac4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-808285772-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3f93fd4b2154dda9f38e62334904303", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5fd7b3ad-0c", "ovs_interfaceid": "5fd7b3ad-0cf1-4294-b552-6141c8ee85bd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:40:45 compute-1 nova_compute[225855]: 2026-01-20 14:40:45.003 225859 DEBUG nova.network.os_vif_util [None req-632ac997-4d58-4786-bcf9-7342e361eb4b c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:d2:51:f5,bridge_name='br-int',has_traffic_filtering=True,id=5fd7b3ad-0cf1-4294-b552-6141c8ee85bd,network=Network(fc21b99b-4e34-422c-be05-0a440009dac4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5fd7b3ad-0c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:40:45 compute-1 nova_compute[225855]: 2026-01-20 14:40:45.003 225859 DEBUG os_vif [None req-632ac997-4d58-4786-bcf9-7342e361eb4b c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:d2:51:f5,bridge_name='br-int',has_traffic_filtering=True,id=5fd7b3ad-0cf1-4294-b552-6141c8ee85bd,network=Network(fc21b99b-4e34-422c-be05-0a440009dac4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5fd7b3ad-0c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 20 14:40:45 compute-1 nova_compute[225855]: 2026-01-20 14:40:45.004 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:40:45 compute-1 nova_compute[225855]: 2026-01-20 14:40:45.005 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5fd7b3ad-0c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:40:45 compute-1 nova_compute[225855]: 2026-01-20 14:40:45.005 225859 INFO os_vif [req-7399bf5a-a499-4702-9d57-c40b71169064 req-d8aeda9a-d9d9-43f9-a60a-73779b0052ef 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:24:df:88,bridge_name='br-int',has_traffic_filtering=True,id=db46acd4-809b-4127-ad48-870ae429b4d6,network=Network(fc21b99b-4e34-422c-be05-0a440009dac4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdb46acd4-80')
Jan 20 14:40:45 compute-1 nova_compute[225855]: 2026-01-20 14:40:45.006 225859 DEBUG nova.virt.libvirt.guest [req-7399bf5a-a499-4702-9d57-c40b71169064 req-d8aeda9a-d9d9-43f9-a60a-73779b0052ef 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 14:40:45 compute-1 nova_compute[225855]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 14:40:45 compute-1 nova_compute[225855]:   <nova:name>tempest-AttachInterfacesTestJSON-server-2088149366</nova:name>
Jan 20 14:40:45 compute-1 nova_compute[225855]:   <nova:creationTime>2026-01-20 14:40:45</nova:creationTime>
Jan 20 14:40:45 compute-1 nova_compute[225855]:   <nova:flavor name="m1.nano">
Jan 20 14:40:45 compute-1 nova_compute[225855]:     <nova:memory>128</nova:memory>
Jan 20 14:40:45 compute-1 nova_compute[225855]:     <nova:disk>1</nova:disk>
Jan 20 14:40:45 compute-1 nova_compute[225855]:     <nova:swap>0</nova:swap>
Jan 20 14:40:45 compute-1 nova_compute[225855]:     <nova:ephemeral>0</nova:ephemeral>
Jan 20 14:40:45 compute-1 nova_compute[225855]:     <nova:vcpus>1</nova:vcpus>
Jan 20 14:40:45 compute-1 nova_compute[225855]:   </nova:flavor>
Jan 20 14:40:45 compute-1 nova_compute[225855]:   <nova:owner>
Jan 20 14:40:45 compute-1 nova_compute[225855]:     <nova:user uuid="c8a9fb458d27434495a77a94827b6097">tempest-AttachInterfacesTestJSON-305746947-project-member</nova:user>
Jan 20 14:40:45 compute-1 nova_compute[225855]:     <nova:project uuid="e3f93fd4b2154dda9f38e62334904303">tempest-AttachInterfacesTestJSON-305746947</nova:project>
Jan 20 14:40:45 compute-1 nova_compute[225855]:   </nova:owner>
Jan 20 14:40:45 compute-1 nova_compute[225855]:   <nova:root type="image" uuid="a32b3e07-16d8-46fd-9a7b-c242c432fcf9"/>
Jan 20 14:40:45 compute-1 nova_compute[225855]:   <nova:ports>
Jan 20 14:40:45 compute-1 nova_compute[225855]:     <nova:port uuid="b48170b0-717d-48f0-8172-742a4a8596e9">
Jan 20 14:40:45 compute-1 nova_compute[225855]:       <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Jan 20 14:40:45 compute-1 nova_compute[225855]:     </nova:port>
Jan 20 14:40:45 compute-1 nova_compute[225855]:     <nova:port uuid="5fd7b3ad-0cf1-4294-b552-6141c8ee85bd">
Jan 20 14:40:45 compute-1 nova_compute[225855]:       <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Jan 20 14:40:45 compute-1 nova_compute[225855]:     </nova:port>
Jan 20 14:40:45 compute-1 nova_compute[225855]:     <nova:port uuid="0b83ac2a-727a-4db9-91f2-69f939deeb69">
Jan 20 14:40:45 compute-1 nova_compute[225855]:       <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Jan 20 14:40:45 compute-1 nova_compute[225855]:     </nova:port>
Jan 20 14:40:45 compute-1 nova_compute[225855]:   </nova:ports>
Jan 20 14:40:45 compute-1 nova_compute[225855]: </nova:instance>
Jan 20 14:40:45 compute-1 nova_compute[225855]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Jan 20 14:40:45 compute-1 nova_compute[225855]: 2026-01-20 14:40:45.007 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:40:45 compute-1 nova_compute[225855]: 2026-01-20 14:40:45.010 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:40:45 compute-1 nova_compute[225855]: 2026-01-20 14:40:45.011 225859 DEBUG nova.compute.manager [req-7399bf5a-a499-4702-9d57-c40b71169064 req-d8aeda9a-d9d9-43f9-a60a-73779b0052ef 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d5c2df9d-748f-4df2-9392-b45741975f65] Received event network-vif-deleted-0b83ac2a-727a-4db9-91f2-69f939deeb69 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:40:45 compute-1 nova_compute[225855]: 2026-01-20 14:40:45.011 225859 INFO nova.compute.manager [req-7399bf5a-a499-4702-9d57-c40b71169064 req-d8aeda9a-d9d9-43f9-a60a-73779b0052ef 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d5c2df9d-748f-4df2-9392-b45741975f65] Neutron deleted interface 0b83ac2a-727a-4db9-91f2-69f939deeb69; detaching it from the instance and deleting it from the info cache
Jan 20 14:40:45 compute-1 nova_compute[225855]: 2026-01-20 14:40:45.012 225859 DEBUG nova.network.neutron [req-7399bf5a-a499-4702-9d57-c40b71169064 req-d8aeda9a-d9d9-43f9-a60a-73779b0052ef 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d5c2df9d-748f-4df2-9392-b45741975f65] Updating instance_info_cache with network_info: [{"id": "b48170b0-717d-48f0-8172-742a4a8596e9", "address": "fa:16:3e:3b:35:f2", "network": {"id": "fc21b99b-4e34-422c-be05-0a440009dac4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-808285772-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.174", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3f93fd4b2154dda9f38e62334904303", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb48170b0-71", "ovs_interfaceid": "b48170b0-717d-48f0-8172-742a4a8596e9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "5fd7b3ad-0cf1-4294-b552-6141c8ee85bd", "address": "fa:16:3e:d2:51:f5", "network": {"id": "fc21b99b-4e34-422c-be05-0a440009dac4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-808285772-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3f93fd4b2154dda9f38e62334904303", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5fd7b3ad-0c", "ovs_interfaceid": "5fd7b3ad-0cf1-4294-b552-6141c8ee85bd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:40:45 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ad0796275d2b7bc69c4b451ca13c4323d2608039a8c7168dc31ea54a35ab1b48-userdata-shm.mount: Deactivated successfully.
Jan 20 14:40:45 compute-1 nova_compute[225855]: 2026-01-20 14:40:45.013 225859 INFO os_vif [None req-632ac997-4d58-4786-bcf9-7342e361eb4b c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:d2:51:f5,bridge_name='br-int',has_traffic_filtering=True,id=5fd7b3ad-0cf1-4294-b552-6141c8ee85bd,network=Network(fc21b99b-4e34-422c-be05-0a440009dac4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5fd7b3ad-0c')
Jan 20 14:40:45 compute-1 nova_compute[225855]: 2026-01-20 14:40:45.014 225859 DEBUG nova.virt.libvirt.vif [None req-632ac997-4d58-4786-bcf9-7342e361eb4b c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T14:39:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-2088149366',display_name='tempest-AttachInterfacesTestJSON-server-2088149366',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-2088149366',id=66,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJZwLcyXdmuPk9iAZlMOAxeFM3EdHKE0x5nJT3i2GTbVf6EkhYVj3hEmoeSwYo6iZrNjT6w/g2TndK4CzLIvGDWLEyKfIPgg2vbEtoL1oIxCHYN2ytrctbkHi1netydaRQ==',key_name='tempest-keypair-230916378',keypairs=<?>,launch_index=0,launched_at=2026-01-20T14:39:53Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='e3f93fd4b2154dda9f38e62334904303',ramdisk_id='',reservation_id='r-odubqy1o',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-305746947',owner_user_name='tempest-AttachInterfacesTestJSON-305746947-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T14:39:53Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c8a9fb458d27434495a77a94827b6097',uuid=d5c2df9d-748f-4df2-9392-b45741975f65,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0b83ac2a-727a-4db9-91f2-69f939deeb69", "address": "fa:16:3e:e7:0e:b2", "network": {"id": "fc21b99b-4e34-422c-be05-0a440009dac4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-808285772-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3f93fd4b2154dda9f38e62334904303", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b83ac2a-72", "ovs_interfaceid": "0b83ac2a-727a-4db9-91f2-69f939deeb69", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 20 14:40:45 compute-1 nova_compute[225855]: 2026-01-20 14:40:45.014 225859 DEBUG nova.network.os_vif_util [None req-632ac997-4d58-4786-bcf9-7342e361eb4b c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Converting VIF {"id": "0b83ac2a-727a-4db9-91f2-69f939deeb69", "address": "fa:16:3e:e7:0e:b2", "network": {"id": "fc21b99b-4e34-422c-be05-0a440009dac4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-808285772-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3f93fd4b2154dda9f38e62334904303", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b83ac2a-72", "ovs_interfaceid": "0b83ac2a-727a-4db9-91f2-69f939deeb69", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:40:45 compute-1 nova_compute[225855]: 2026-01-20 14:40:45.015 225859 DEBUG nova.network.os_vif_util [None req-632ac997-4d58-4786-bcf9-7342e361eb4b c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e7:0e:b2,bridge_name='br-int',has_traffic_filtering=True,id=0b83ac2a-727a-4db9-91f2-69f939deeb69,network=Network(fc21b99b-4e34-422c-be05-0a440009dac4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap0b83ac2a-72') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:40:45 compute-1 nova_compute[225855]: 2026-01-20 14:40:45.015 225859 DEBUG os_vif [None req-632ac997-4d58-4786-bcf9-7342e361eb4b c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e7:0e:b2,bridge_name='br-int',has_traffic_filtering=True,id=0b83ac2a-727a-4db9-91f2-69f939deeb69,network=Network(fc21b99b-4e34-422c-be05-0a440009dac4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap0b83ac2a-72') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 20 14:40:45 compute-1 systemd[1]: var-lib-containers-storage-overlay-1daf9cb422d92ee82699c7df2fb191ba24031863f4d9fd4cf2bcb41f474f37a6-merged.mount: Deactivated successfully.
Jan 20 14:40:45 compute-1 nova_compute[225855]: 2026-01-20 14:40:45.016 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:40:45 compute-1 nova_compute[225855]: 2026-01-20 14:40:45.016 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0b83ac2a-72, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:40:45 compute-1 nova_compute[225855]: 2026-01-20 14:40:45.019 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 14:40:45 compute-1 nova_compute[225855]: 2026-01-20 14:40:45.020 225859 INFO os_vif [None req-632ac997-4d58-4786-bcf9-7342e361eb4b c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e7:0e:b2,bridge_name='br-int',has_traffic_filtering=True,id=0b83ac2a-727a-4db9-91f2-69f939deeb69,network=Network(fc21b99b-4e34-422c-be05-0a440009dac4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap0b83ac2a-72')
Jan 20 14:40:45 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:40:45 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:40:45 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:40:45.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:40:45 compute-1 nova_compute[225855]: 2026-01-20 14:40:45.040 225859 DEBUG nova.virt.libvirt.vif [req-7399bf5a-a499-4702-9d57-c40b71169064 req-d8aeda9a-d9d9-43f9-a60a-73779b0052ef 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T14:39:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-2088149366',display_name='tempest-AttachInterfacesTestJSON-server-2088149366',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-2088149366',id=66,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJZwLcyXdmuPk9iAZlMOAxeFM3EdHKE0x5nJT3i2GTbVf6EkhYVj3hEmoeSwYo6iZrNjT6w/g2TndK4CzLIvGDWLEyKfIPgg2vbEtoL1oIxCHYN2ytrctbkHi1netydaRQ==',key_name='tempest-keypair-230916378',keypairs=<?>,launch_index=0,launched_at=2026-01-20T14:39:53Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='e3f93fd4b2154dda9f38e62334904303',ramdisk_id='',reservation_id='r-odubqy1o',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-305746947',owner_user_name='tempest-AttachInterfacesTestJSON-305746947-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T14:40:44Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c8a9fb458d27434495a77a94827b6097',uuid=d5c2df9d-748f-4df2-9392-b45741975f65,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0b83ac2a-727a-4db9-91f2-69f939deeb69", "address": "fa:16:3e:e7:0e:b2", "network": {"id": "fc21b99b-4e34-422c-be05-0a440009dac4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-808285772-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3f93fd4b2154dda9f38e62334904303", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b83ac2a-72", "ovs_interfaceid": "0b83ac2a-727a-4db9-91f2-69f939deeb69", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 20 14:40:45 compute-1 nova_compute[225855]: 2026-01-20 14:40:45.041 225859 DEBUG nova.network.os_vif_util [req-7399bf5a-a499-4702-9d57-c40b71169064 req-d8aeda9a-d9d9-43f9-a60a-73779b0052ef 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Converting VIF {"id": "0b83ac2a-727a-4db9-91f2-69f939deeb69", "address": "fa:16:3e:e7:0e:b2", "network": {"id": "fc21b99b-4e34-422c-be05-0a440009dac4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-808285772-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3f93fd4b2154dda9f38e62334904303", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b83ac2a-72", "ovs_interfaceid": "0b83ac2a-727a-4db9-91f2-69f939deeb69", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:40:45 compute-1 nova_compute[225855]: 2026-01-20 14:40:45.041 225859 DEBUG nova.network.os_vif_util [req-7399bf5a-a499-4702-9d57-c40b71169064 req-d8aeda9a-d9d9-43f9-a60a-73779b0052ef 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e7:0e:b2,bridge_name='br-int',has_traffic_filtering=True,id=0b83ac2a-727a-4db9-91f2-69f939deeb69,network=Network(fc21b99b-4e34-422c-be05-0a440009dac4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap0b83ac2a-72') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:40:45 compute-1 nova_compute[225855]: 2026-01-20 14:40:45.044 225859 DEBUG nova.virt.libvirt.guest [req-7399bf5a-a499-4702-9d57-c40b71169064 req-d8aeda9a-d9d9-43f9-a60a-73779b0052ef 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:e7:0e:b2"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap0b83ac2a-72"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Jan 20 14:40:45 compute-1 nova_compute[225855]: 2026-01-20 14:40:45.047 225859 DEBUG nova.virt.libvirt.driver [req-7399bf5a-a499-4702-9d57-c40b71169064 req-d8aeda9a-d9d9-43f9-a60a-73779b0052ef 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Attempting to detach device tap0b83ac2a-72 from instance d5c2df9d-748f-4df2-9392-b45741975f65 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Jan 20 14:40:45 compute-1 nova_compute[225855]: 2026-01-20 14:40:45.047 225859 DEBUG nova.virt.libvirt.guest [req-7399bf5a-a499-4702-9d57-c40b71169064 req-d8aeda9a-d9d9-43f9-a60a-73779b0052ef 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] detach device xml: <interface type="ethernet">
Jan 20 14:40:45 compute-1 nova_compute[225855]:   <mac address="fa:16:3e:e7:0e:b2"/>
Jan 20 14:40:45 compute-1 nova_compute[225855]:   <model type="virtio"/>
Jan 20 14:40:45 compute-1 nova_compute[225855]:   <driver name="vhost" rx_queue_size="512"/>
Jan 20 14:40:45 compute-1 nova_compute[225855]:   <mtu size="1442"/>
Jan 20 14:40:45 compute-1 nova_compute[225855]:   <target dev="tap0b83ac2a-72"/>
Jan 20 14:40:45 compute-1 nova_compute[225855]: </interface>
Jan 20 14:40:45 compute-1 nova_compute[225855]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Jan 20 14:40:45 compute-1 podman[253246]: 2026-01-20 14:40:45.053823316 +0000 UTC m=+0.143028492 container cleanup ad0796275d2b7bc69c4b451ca13c4323d2608039a8c7168dc31ea54a35ab1b48 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fc21b99b-4e34-422c-be05-0a440009dac4, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 20 14:40:45 compute-1 systemd[1]: libpod-conmon-ad0796275d2b7bc69c4b451ca13c4323d2608039a8c7168dc31ea54a35ab1b48.scope: Deactivated successfully.
Jan 20 14:40:45 compute-1 nova_compute[225855]: 2026-01-20 14:40:45.063 225859 DEBUG nova.virt.libvirt.guest [req-7399bf5a-a499-4702-9d57-c40b71169064 req-d8aeda9a-d9d9-43f9-a60a-73779b0052ef 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:e7:0e:b2"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap0b83ac2a-72"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Jan 20 14:40:45 compute-1 nova_compute[225855]: 2026-01-20 14:40:45.067 225859 DEBUG nova.virt.libvirt.guest [req-7399bf5a-a499-4702-9d57-c40b71169064 req-d8aeda9a-d9d9-43f9-a60a-73779b0052ef 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:e7:0e:b2"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap0b83ac2a-72"/></interface>not found in domain: <domain type='kvm'>
Jan 20 14:40:45 compute-1 nova_compute[225855]:   <name>instance-00000042</name>
Jan 20 14:40:45 compute-1 nova_compute[225855]:   <uuid>d5c2df9d-748f-4df2-9392-b45741975f65</uuid>
Jan 20 14:40:45 compute-1 nova_compute[225855]:   <metadata>
Jan 20 14:40:45 compute-1 nova_compute[225855]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 14:40:45 compute-1 nova_compute[225855]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 14:40:45 compute-1 nova_compute[225855]:       <nova:name>tempest-AttachInterfacesTestJSON-server-2088149366</nova:name>
Jan 20 14:40:45 compute-1 nova_compute[225855]:       <nova:creationTime>2026-01-20 14:40:45</nova:creationTime>
Jan 20 14:40:45 compute-1 nova_compute[225855]:       <nova:flavor name="m1.nano">
Jan 20 14:40:45 compute-1 nova_compute[225855]:         <nova:memory>128</nova:memory>
Jan 20 14:40:45 compute-1 nova_compute[225855]:         <nova:disk>1</nova:disk>
Jan 20 14:40:45 compute-1 nova_compute[225855]:         <nova:swap>0</nova:swap>
Jan 20 14:40:45 compute-1 nova_compute[225855]:         <nova:ephemeral>0</nova:ephemeral>
Jan 20 14:40:45 compute-1 nova_compute[225855]:         <nova:vcpus>1</nova:vcpus>
Jan 20 14:40:45 compute-1 nova_compute[225855]:       </nova:flavor>
Jan 20 14:40:45 compute-1 nova_compute[225855]:       <nova:owner>
Jan 20 14:40:45 compute-1 nova_compute[225855]:         <nova:user uuid="c8a9fb458d27434495a77a94827b6097">tempest-AttachInterfacesTestJSON-305746947-project-member</nova:user>
Jan 20 14:40:45 compute-1 nova_compute[225855]:         <nova:project uuid="e3f93fd4b2154dda9f38e62334904303">tempest-AttachInterfacesTestJSON-305746947</nova:project>
Jan 20 14:40:45 compute-1 nova_compute[225855]:       </nova:owner>
Jan 20 14:40:45 compute-1 nova_compute[225855]:       <nova:root type="image" uuid="a32b3e07-16d8-46fd-9a7b-c242c432fcf9"/>
Jan 20 14:40:45 compute-1 nova_compute[225855]:       <nova:ports>
Jan 20 14:40:45 compute-1 nova_compute[225855]:         <nova:port uuid="b48170b0-717d-48f0-8172-742a4a8596e9">
Jan 20 14:40:45 compute-1 nova_compute[225855]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Jan 20 14:40:45 compute-1 nova_compute[225855]:         </nova:port>
Jan 20 14:40:45 compute-1 nova_compute[225855]:         <nova:port uuid="5fd7b3ad-0cf1-4294-b552-6141c8ee85bd">
Jan 20 14:40:45 compute-1 nova_compute[225855]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Jan 20 14:40:45 compute-1 nova_compute[225855]:         </nova:port>
Jan 20 14:40:45 compute-1 nova_compute[225855]:         <nova:port uuid="0b83ac2a-727a-4db9-91f2-69f939deeb69">
Jan 20 14:40:45 compute-1 nova_compute[225855]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Jan 20 14:40:45 compute-1 nova_compute[225855]:         </nova:port>
Jan 20 14:40:45 compute-1 nova_compute[225855]:       </nova:ports>
Jan 20 14:40:45 compute-1 nova_compute[225855]:     </nova:instance>
Jan 20 14:40:45 compute-1 nova_compute[225855]:   </metadata>
Jan 20 14:40:45 compute-1 nova_compute[225855]:   <memory unit='KiB'>131072</memory>
Jan 20 14:40:45 compute-1 nova_compute[225855]:   <currentMemory unit='KiB'>131072</currentMemory>
Jan 20 14:40:45 compute-1 nova_compute[225855]:   <vcpu placement='static'>1</vcpu>
Jan 20 14:40:45 compute-1 nova_compute[225855]:   <sysinfo type='smbios'>
Jan 20 14:40:45 compute-1 nova_compute[225855]:     <system>
Jan 20 14:40:45 compute-1 nova_compute[225855]:       <entry name='manufacturer'>RDO</entry>
Jan 20 14:40:45 compute-1 nova_compute[225855]:       <entry name='product'>OpenStack Compute</entry>
Jan 20 14:40:45 compute-1 nova_compute[225855]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 14:40:45 compute-1 nova_compute[225855]:       <entry name='serial'>d5c2df9d-748f-4df2-9392-b45741975f65</entry>
Jan 20 14:40:45 compute-1 nova_compute[225855]:       <entry name='uuid'>d5c2df9d-748f-4df2-9392-b45741975f65</entry>
Jan 20 14:40:45 compute-1 nova_compute[225855]:       <entry name='family'>Virtual Machine</entry>
Jan 20 14:40:45 compute-1 nova_compute[225855]:     </system>
Jan 20 14:40:45 compute-1 nova_compute[225855]:   </sysinfo>
Jan 20 14:40:45 compute-1 nova_compute[225855]:   <os>
Jan 20 14:40:45 compute-1 nova_compute[225855]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Jan 20 14:40:45 compute-1 nova_compute[225855]:     <boot dev='hd'/>
Jan 20 14:40:45 compute-1 nova_compute[225855]:     <smbios mode='sysinfo'/>
Jan 20 14:40:45 compute-1 nova_compute[225855]:   </os>
Jan 20 14:40:45 compute-1 nova_compute[225855]:   <features>
Jan 20 14:40:45 compute-1 nova_compute[225855]:     <acpi/>
Jan 20 14:40:45 compute-1 nova_compute[225855]:     <apic/>
Jan 20 14:40:45 compute-1 nova_compute[225855]:     <vmcoreinfo state='on'/>
Jan 20 14:40:45 compute-1 nova_compute[225855]:   </features>
Jan 20 14:40:45 compute-1 nova_compute[225855]:   <cpu mode='custom' match='exact' check='partial'>
Jan 20 14:40:45 compute-1 nova_compute[225855]:     <model fallback='allow'>Nehalem</model>
Jan 20 14:40:45 compute-1 nova_compute[225855]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Jan 20 14:40:45 compute-1 nova_compute[225855]:   </cpu>
Jan 20 14:40:45 compute-1 nova_compute[225855]:   <clock offset='utc'>
Jan 20 14:40:45 compute-1 nova_compute[225855]:     <timer name='pit' tickpolicy='delay'/>
Jan 20 14:40:45 compute-1 nova_compute[225855]:     <timer name='rtc' tickpolicy='catchup'/>
Jan 20 14:40:45 compute-1 nova_compute[225855]:     <timer name='hpet' present='no'/>
Jan 20 14:40:45 compute-1 nova_compute[225855]:   </clock>
Jan 20 14:40:45 compute-1 nova_compute[225855]:   <on_poweroff>destroy</on_poweroff>
Jan 20 14:40:45 compute-1 nova_compute[225855]:   <on_reboot>restart</on_reboot>
Jan 20 14:40:45 compute-1 nova_compute[225855]:   <on_crash>destroy</on_crash>
Jan 20 14:40:45 compute-1 nova_compute[225855]:   <devices>
Jan 20 14:40:45 compute-1 nova_compute[225855]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 20 14:40:45 compute-1 nova_compute[225855]:     <disk type='network' device='disk'>
Jan 20 14:40:45 compute-1 nova_compute[225855]:       <driver name='qemu' type='raw' cache='none'/>
Jan 20 14:40:45 compute-1 nova_compute[225855]:       <auth username='openstack'>
Jan 20 14:40:45 compute-1 nova_compute[225855]:         <secret type='ceph' uuid='e399cf45-e6b6-5393-99f1-75c601d3f188'/>
Jan 20 14:40:45 compute-1 nova_compute[225855]:       </auth>
Jan 20 14:40:45 compute-1 nova_compute[225855]:       <source protocol='rbd' name='vms/d5c2df9d-748f-4df2-9392-b45741975f65_disk'>
Jan 20 14:40:45 compute-1 nova_compute[225855]:         <host name='192.168.122.100' port='6789'/>
Jan 20 14:40:45 compute-1 nova_compute[225855]:         <host name='192.168.122.102' port='6789'/>
Jan 20 14:40:45 compute-1 nova_compute[225855]:         <host name='192.168.122.101' port='6789'/>
Jan 20 14:40:45 compute-1 nova_compute[225855]:       </source>
Jan 20 14:40:45 compute-1 nova_compute[225855]:       <target dev='vda' bus='virtio'/>
Jan 20 14:40:45 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Jan 20 14:40:45 compute-1 nova_compute[225855]:     </disk>
Jan 20 14:40:45 compute-1 nova_compute[225855]:     <disk type='network' device='cdrom'>
Jan 20 14:40:45 compute-1 nova_compute[225855]:       <driver name='qemu' type='raw' cache='none'/>
Jan 20 14:40:45 compute-1 nova_compute[225855]:       <auth username='openstack'>
Jan 20 14:40:45 compute-1 nova_compute[225855]:         <secret type='ceph' uuid='e399cf45-e6b6-5393-99f1-75c601d3f188'/>
Jan 20 14:40:45 compute-1 nova_compute[225855]:       </auth>
Jan 20 14:40:45 compute-1 nova_compute[225855]:       <source protocol='rbd' name='vms/d5c2df9d-748f-4df2-9392-b45741975f65_disk.config'>
Jan 20 14:40:45 compute-1 nova_compute[225855]:         <host name='192.168.122.100' port='6789'/>
Jan 20 14:40:45 compute-1 nova_compute[225855]:         <host name='192.168.122.102' port='6789'/>
Jan 20 14:40:45 compute-1 nova_compute[225855]:         <host name='192.168.122.101' port='6789'/>
Jan 20 14:40:45 compute-1 nova_compute[225855]:       </source>
Jan 20 14:40:45 compute-1 nova_compute[225855]:       <target dev='sda' bus='sata'/>
Jan 20 14:40:45 compute-1 nova_compute[225855]:       <readonly/>
Jan 20 14:40:45 compute-1 nova_compute[225855]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Jan 20 14:40:45 compute-1 nova_compute[225855]:     </disk>
Jan 20 14:40:45 compute-1 nova_compute[225855]:     <controller type='pci' index='0' model='pcie-root'/>
Jan 20 14:40:45 compute-1 nova_compute[225855]:     <controller type='pci' index='1' model='pcie-root-port'>
Jan 20 14:40:45 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 14:40:45 compute-1 nova_compute[225855]:       <target chassis='1' port='0x10'/>
Jan 20 14:40:45 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Jan 20 14:40:45 compute-1 nova_compute[225855]:     </controller>
Jan 20 14:40:45 compute-1 nova_compute[225855]:     <controller type='pci' index='2' model='pcie-root-port'>
Jan 20 14:40:45 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 14:40:45 compute-1 nova_compute[225855]:       <target chassis='2' port='0x11'/>
Jan 20 14:40:45 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Jan 20 14:40:45 compute-1 nova_compute[225855]:     </controller>
Jan 20 14:40:45 compute-1 nova_compute[225855]:     <controller type='pci' index='3' model='pcie-root-port'>
Jan 20 14:40:45 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 14:40:45 compute-1 nova_compute[225855]:       <target chassis='3' port='0x12'/>
Jan 20 14:40:45 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Jan 20 14:40:45 compute-1 nova_compute[225855]:     </controller>
Jan 20 14:40:45 compute-1 nova_compute[225855]:     <controller type='pci' index='4' model='pcie-root-port'>
Jan 20 14:40:45 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 14:40:45 compute-1 nova_compute[225855]:       <target chassis='4' port='0x13'/>
Jan 20 14:40:45 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Jan 20 14:40:45 compute-1 nova_compute[225855]:     </controller>
Jan 20 14:40:45 compute-1 nova_compute[225855]:     <controller type='pci' index='5' model='pcie-root-port'>
Jan 20 14:40:45 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 14:40:45 compute-1 nova_compute[225855]:       <target chassis='5' port='0x14'/>
Jan 20 14:40:45 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Jan 20 14:40:45 compute-1 nova_compute[225855]:     </controller>
Jan 20 14:40:45 compute-1 nova_compute[225855]:     <controller type='pci' index='6' model='pcie-root-port'>
Jan 20 14:40:45 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 14:40:45 compute-1 nova_compute[225855]:       <target chassis='6' port='0x15'/>
Jan 20 14:40:45 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Jan 20 14:40:45 compute-1 nova_compute[225855]:     </controller>
Jan 20 14:40:45 compute-1 nova_compute[225855]:     <controller type='pci' index='7' model='pcie-root-port'>
Jan 20 14:40:45 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 14:40:45 compute-1 nova_compute[225855]:       <target chassis='7' port='0x16'/>
Jan 20 14:40:45 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Jan 20 14:40:45 compute-1 nova_compute[225855]:     </controller>
Jan 20 14:40:45 compute-1 nova_compute[225855]:     <controller type='pci' index='8' model='pcie-root-port'>
Jan 20 14:40:45 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 14:40:45 compute-1 nova_compute[225855]:       <target chassis='8' port='0x17'/>
Jan 20 14:40:45 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Jan 20 14:40:45 compute-1 nova_compute[225855]:     </controller>
Jan 20 14:40:45 compute-1 nova_compute[225855]:     <controller type='pci' index='9' model='pcie-root-port'>
Jan 20 14:40:45 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 14:40:45 compute-1 nova_compute[225855]:       <target chassis='9' port='0x18'/>
Jan 20 14:40:45 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Jan 20 14:40:45 compute-1 nova_compute[225855]:     </controller>
Jan 20 14:40:45 compute-1 nova_compute[225855]:     <controller type='pci' index='10' model='pcie-root-port'>
Jan 20 14:40:45 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 14:40:45 compute-1 nova_compute[225855]:       <target chassis='10' port='0x19'/>
Jan 20 14:40:45 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Jan 20 14:40:45 compute-1 nova_compute[225855]:     </controller>
Jan 20 14:40:45 compute-1 nova_compute[225855]:     <controller type='pci' index='11' model='pcie-root-port'>
Jan 20 14:40:45 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 14:40:45 compute-1 nova_compute[225855]:       <target chassis='11' port='0x1a'/>
Jan 20 14:40:45 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Jan 20 14:40:45 compute-1 nova_compute[225855]:     </controller>
Jan 20 14:40:45 compute-1 nova_compute[225855]:     <controller type='pci' index='12' model='pcie-root-port'>
Jan 20 14:40:45 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 14:40:45 compute-1 nova_compute[225855]:       <target chassis='12' port='0x1b'/>
Jan 20 14:40:45 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Jan 20 14:40:45 compute-1 nova_compute[225855]:     </controller>
Jan 20 14:40:45 compute-1 nova_compute[225855]:     <controller type='pci' index='13' model='pcie-root-port'>
Jan 20 14:40:45 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 14:40:45 compute-1 nova_compute[225855]:       <target chassis='13' port='0x1c'/>
Jan 20 14:40:45 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Jan 20 14:40:45 compute-1 nova_compute[225855]:     </controller>
Jan 20 14:40:45 compute-1 nova_compute[225855]:     <controller type='pci' index='14' model='pcie-root-port'>
Jan 20 14:40:45 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 14:40:45 compute-1 nova_compute[225855]:       <target chassis='14' port='0x1d'/>
Jan 20 14:40:45 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Jan 20 14:40:45 compute-1 nova_compute[225855]:     </controller>
Jan 20 14:40:45 compute-1 nova_compute[225855]:     <controller type='pci' index='15' model='pcie-root-port'>
Jan 20 14:40:45 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 14:40:45 compute-1 nova_compute[225855]:       <target chassis='15' port='0x1e'/>
Jan 20 14:40:45 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Jan 20 14:40:45 compute-1 nova_compute[225855]:     </controller>
Jan 20 14:40:45 compute-1 nova_compute[225855]:     <controller type='pci' index='16' model='pcie-root-port'>
Jan 20 14:40:45 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 14:40:45 compute-1 nova_compute[225855]:       <target chassis='16' port='0x1f'/>
Jan 20 14:40:45 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Jan 20 14:40:45 compute-1 nova_compute[225855]:     </controller>
Jan 20 14:40:45 compute-1 nova_compute[225855]:     <controller type='pci' index='17' model='pcie-root-port'>
Jan 20 14:40:45 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 14:40:45 compute-1 nova_compute[225855]:       <target chassis='17' port='0x20'/>
Jan 20 14:40:45 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Jan 20 14:40:45 compute-1 nova_compute[225855]:     </controller>
Jan 20 14:40:45 compute-1 nova_compute[225855]:     <controller type='pci' index='18' model='pcie-root-port'>
Jan 20 14:40:45 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 14:40:45 compute-1 nova_compute[225855]:       <target chassis='18' port='0x21'/>
Jan 20 14:40:45 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Jan 20 14:40:45 compute-1 nova_compute[225855]:     </controller>
Jan 20 14:40:45 compute-1 nova_compute[225855]:     <controller type='pci' index='19' model='pcie-root-port'>
Jan 20 14:40:45 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 14:40:45 compute-1 nova_compute[225855]:       <target chassis='19' port='0x22'/>
Jan 20 14:40:45 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Jan 20 14:40:45 compute-1 nova_compute[225855]:     </controller>
Jan 20 14:40:45 compute-1 nova_compute[225855]:     <controller type='pci' index='20' model='pcie-root-port'>
Jan 20 14:40:45 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 14:40:45 compute-1 nova_compute[225855]:       <target chassis='20' port='0x23'/>
Jan 20 14:40:45 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Jan 20 14:40:45 compute-1 nova_compute[225855]:     </controller>
Jan 20 14:40:45 compute-1 nova_compute[225855]:     <controller type='pci' index='21' model='pcie-root-port'>
Jan 20 14:40:45 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 14:40:45 compute-1 nova_compute[225855]:       <target chassis='21' port='0x24'/>
Jan 20 14:40:45 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Jan 20 14:40:45 compute-1 nova_compute[225855]:     </controller>
Jan 20 14:40:45 compute-1 nova_compute[225855]:     <controller type='pci' index='22' model='pcie-root-port'>
Jan 20 14:40:45 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 14:40:45 compute-1 nova_compute[225855]:       <target chassis='22' port='0x25'/>
Jan 20 14:40:45 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Jan 20 14:40:45 compute-1 nova_compute[225855]:     </controller>
Jan 20 14:40:45 compute-1 nova_compute[225855]:     <controller type='pci' index='23' model='pcie-root-port'>
Jan 20 14:40:45 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 14:40:45 compute-1 nova_compute[225855]:       <target chassis='23' port='0x26'/>
Jan 20 14:40:45 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Jan 20 14:40:45 compute-1 nova_compute[225855]:     </controller>
Jan 20 14:40:45 compute-1 nova_compute[225855]:     <controller type='pci' index='24' model='pcie-root-port'>
Jan 20 14:40:45 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 14:40:45 compute-1 nova_compute[225855]:       <target chassis='24' port='0x27'/>
Jan 20 14:40:45 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Jan 20 14:40:45 compute-1 nova_compute[225855]:     </controller>
Jan 20 14:40:45 compute-1 nova_compute[225855]:     <controller type='pci' index='25' model='pcie-root-port'>
Jan 20 14:40:45 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 14:40:45 compute-1 nova_compute[225855]:       <target chassis='25' port='0x28'/>
Jan 20 14:40:45 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Jan 20 14:40:45 compute-1 nova_compute[225855]:     </controller>
Jan 20 14:40:45 compute-1 nova_compute[225855]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Jan 20 14:40:45 compute-1 nova_compute[225855]:       <model name='pcie-pci-bridge'/>
Jan 20 14:40:45 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Jan 20 14:40:45 compute-1 nova_compute[225855]:     </controller>
Jan 20 14:40:45 compute-1 nova_compute[225855]:     <controller type='usb' index='0' model='piix3-uhci'>
Jan 20 14:40:45 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Jan 20 14:40:45 compute-1 nova_compute[225855]:     </controller>
Jan 20 14:40:45 compute-1 nova_compute[225855]:     <controller type='sata' index='0'>
Jan 20 14:40:45 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Jan 20 14:40:45 compute-1 nova_compute[225855]:     </controller>
Jan 20 14:40:45 compute-1 nova_compute[225855]:     <interface type='ethernet'>
Jan 20 14:40:45 compute-1 nova_compute[225855]:       <mac address='fa:16:3e:3b:35:f2'/>
Jan 20 14:40:45 compute-1 nova_compute[225855]:       <target dev='tapb48170b0-71'/>
Jan 20 14:40:45 compute-1 nova_compute[225855]:       <model type='virtio'/>
Jan 20 14:40:45 compute-1 nova_compute[225855]:       <driver name='vhost' rx_queue_size='512'/>
Jan 20 14:40:45 compute-1 nova_compute[225855]:       <mtu size='1442'/>
Jan 20 14:40:45 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Jan 20 14:40:45 compute-1 nova_compute[225855]:     </interface>
Jan 20 14:40:45 compute-1 nova_compute[225855]:     <interface type='ethernet'>
Jan 20 14:40:45 compute-1 nova_compute[225855]:       <mac address='fa:16:3e:d2:51:f5'/>
Jan 20 14:40:45 compute-1 nova_compute[225855]:       <target dev='tap5fd7b3ad-0c'/>
Jan 20 14:40:45 compute-1 nova_compute[225855]:       <model type='virtio'/>
Jan 20 14:40:45 compute-1 nova_compute[225855]:       <driver name='vhost' rx_queue_size='512'/>
Jan 20 14:40:45 compute-1 nova_compute[225855]:       <mtu size='1442'/>
Jan 20 14:40:45 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
Jan 20 14:40:45 compute-1 nova_compute[225855]:     </interface>
Jan 20 14:40:45 compute-1 nova_compute[225855]:     <serial type='pty'>
Jan 20 14:40:45 compute-1 nova_compute[225855]:       <log file='/var/lib/nova/instances/d5c2df9d-748f-4df2-9392-b45741975f65/console.log' append='off'/>
Jan 20 14:40:45 compute-1 nova_compute[225855]:       <target type='isa-serial' port='0'>
Jan 20 14:40:45 compute-1 nova_compute[225855]:         <model name='isa-serial'/>
Jan 20 14:40:45 compute-1 nova_compute[225855]:       </target>
Jan 20 14:40:45 compute-1 nova_compute[225855]:     </serial>
Jan 20 14:40:45 compute-1 nova_compute[225855]:     <console type='pty'>
Jan 20 14:40:45 compute-1 nova_compute[225855]:       <log file='/var/lib/nova/instances/d5c2df9d-748f-4df2-9392-b45741975f65/console.log' append='off'/>
Jan 20 14:40:45 compute-1 nova_compute[225855]:       <target type='serial' port='0'/>
Jan 20 14:40:45 compute-1 nova_compute[225855]:     </console>
Jan 20 14:40:45 compute-1 nova_compute[225855]:     <input type='tablet' bus='usb'>
Jan 20 14:40:45 compute-1 nova_compute[225855]:       <address type='usb' bus='0' port='1'/>
Jan 20 14:40:45 compute-1 nova_compute[225855]:     </input>
Jan 20 14:40:45 compute-1 nova_compute[225855]:     <input type='mouse' bus='ps2'/>
Jan 20 14:40:45 compute-1 nova_compute[225855]:     <input type='keyboard' bus='ps2'/>
Jan 20 14:40:45 compute-1 nova_compute[225855]:     <graphics type='vnc' port='-1' autoport='yes' listen='::0'>
Jan 20 14:40:45 compute-1 nova_compute[225855]:       <listen type='address' address='::0'/>
Jan 20 14:40:45 compute-1 nova_compute[225855]:     </graphics>
Jan 20 14:40:45 compute-1 nova_compute[225855]:     <audio id='1' type='none'/>
Jan 20 14:40:45 compute-1 nova_compute[225855]:     <video>
Jan 20 14:40:45 compute-1 nova_compute[225855]:       <model type='virtio' heads='1' primary='yes'/>
Jan 20 14:40:45 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Jan 20 14:40:45 compute-1 nova_compute[225855]:     </video>
Jan 20 14:40:45 compute-1 nova_compute[225855]:     <watchdog model='itco' action='reset'/>
Jan 20 14:40:45 compute-1 nova_compute[225855]:     <memballoon model='virtio'>
Jan 20 14:40:45 compute-1 nova_compute[225855]:       <stats period='10'/>
Jan 20 14:40:45 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Jan 20 14:40:45 compute-1 nova_compute[225855]:     </memballoon>
Jan 20 14:40:45 compute-1 nova_compute[225855]:     <rng model='virtio'>
Jan 20 14:40:45 compute-1 nova_compute[225855]:       <backend model='random'>/dev/urandom</backend>
Jan 20 14:40:45 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Jan 20 14:40:45 compute-1 nova_compute[225855]:     </rng>
Jan 20 14:40:45 compute-1 nova_compute[225855]:   </devices>
Jan 20 14:40:45 compute-1 nova_compute[225855]: </domain>
Jan 20 14:40:45 compute-1 nova_compute[225855]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Jan 20 14:40:45 compute-1 nova_compute[225855]: 2026-01-20 14:40:45.067 225859 INFO nova.virt.libvirt.driver [req-7399bf5a-a499-4702-9d57-c40b71169064 req-d8aeda9a-d9d9-43f9-a60a-73779b0052ef 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Successfully detached device tap0b83ac2a-72 from instance d5c2df9d-748f-4df2-9392-b45741975f65 from the persistent domain config.
Jan 20 14:40:45 compute-1 nova_compute[225855]: 2026-01-20 14:40:45.068 225859 DEBUG nova.virt.libvirt.vif [req-7399bf5a-a499-4702-9d57-c40b71169064 req-d8aeda9a-d9d9-43f9-a60a-73779b0052ef 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T14:39:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-2088149366',display_name='tempest-AttachInterfacesTestJSON-server-2088149366',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-2088149366',id=66,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJZwLcyXdmuPk9iAZlMOAxeFM3EdHKE0x5nJT3i2GTbVf6EkhYVj3hEmoeSwYo6iZrNjT6w/g2TndK4CzLIvGDWLEyKfIPgg2vbEtoL1oIxCHYN2ytrctbkHi1netydaRQ==',key_name='tempest-keypair-230916378',keypairs=<?>,launch_index=0,launched_at=2026-01-20T14:39:53Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='e3f93fd4b2154dda9f38e62334904303',ramdisk_id='',reservation_id='r-odubqy1o',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-305746947',owner_user_name='tempest-AttachInterfacesTestJSON-305746947-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T14:40:44Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c8a9fb458d27434495a77a94827b6097',uuid=d5c2df9d-748f-4df2-9392-b45741975f65,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0b83ac2a-727a-4db9-91f2-69f939deeb69", "address": "fa:16:3e:e7:0e:b2", "network": {"id": "fc21b99b-4e34-422c-be05-0a440009dac4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-808285772-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3f93fd4b2154dda9f38e62334904303", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b83ac2a-72", "ovs_interfaceid": "0b83ac2a-727a-4db9-91f2-69f939deeb69", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 20 14:40:45 compute-1 nova_compute[225855]: 2026-01-20 14:40:45.068 225859 DEBUG nova.network.os_vif_util [req-7399bf5a-a499-4702-9d57-c40b71169064 req-d8aeda9a-d9d9-43f9-a60a-73779b0052ef 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Converting VIF {"id": "0b83ac2a-727a-4db9-91f2-69f939deeb69", "address": "fa:16:3e:e7:0e:b2", "network": {"id": "fc21b99b-4e34-422c-be05-0a440009dac4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-808285772-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3f93fd4b2154dda9f38e62334904303", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b83ac2a-72", "ovs_interfaceid": "0b83ac2a-727a-4db9-91f2-69f939deeb69", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:40:45 compute-1 nova_compute[225855]: 2026-01-20 14:40:45.069 225859 DEBUG nova.network.os_vif_util [req-7399bf5a-a499-4702-9d57-c40b71169064 req-d8aeda9a-d9d9-43f9-a60a-73779b0052ef 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e7:0e:b2,bridge_name='br-int',has_traffic_filtering=True,id=0b83ac2a-727a-4db9-91f2-69f939deeb69,network=Network(fc21b99b-4e34-422c-be05-0a440009dac4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap0b83ac2a-72') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:40:45 compute-1 nova_compute[225855]: 2026-01-20 14:40:45.069 225859 DEBUG os_vif [req-7399bf5a-a499-4702-9d57-c40b71169064 req-d8aeda9a-d9d9-43f9-a60a-73779b0052ef 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e7:0e:b2,bridge_name='br-int',has_traffic_filtering=True,id=0b83ac2a-727a-4db9-91f2-69f939deeb69,network=Network(fc21b99b-4e34-422c-be05-0a440009dac4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap0b83ac2a-72') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 20 14:40:45 compute-1 nova_compute[225855]: 2026-01-20 14:40:45.071 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:40:45 compute-1 nova_compute[225855]: 2026-01-20 14:40:45.071 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0b83ac2a-72, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:40:45 compute-1 nova_compute[225855]: 2026-01-20 14:40:45.071 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:40:45 compute-1 nova_compute[225855]: 2026-01-20 14:40:45.074 225859 INFO os_vif [req-7399bf5a-a499-4702-9d57-c40b71169064 req-d8aeda9a-d9d9-43f9-a60a-73779b0052ef 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e7:0e:b2,bridge_name='br-int',has_traffic_filtering=True,id=0b83ac2a-727a-4db9-91f2-69f939deeb69,network=Network(fc21b99b-4e34-422c-be05-0a440009dac4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap0b83ac2a-72')
Jan 20 14:40:45 compute-1 nova_compute[225855]: 2026-01-20 14:40:45.074 225859 DEBUG nova.virt.libvirt.guest [req-7399bf5a-a499-4702-9d57-c40b71169064 req-d8aeda9a-d9d9-43f9-a60a-73779b0052ef 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 14:40:45 compute-1 nova_compute[225855]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 14:40:45 compute-1 nova_compute[225855]:   <nova:name>tempest-AttachInterfacesTestJSON-server-2088149366</nova:name>
Jan 20 14:40:45 compute-1 nova_compute[225855]:   <nova:creationTime>2026-01-20 14:40:45</nova:creationTime>
Jan 20 14:40:45 compute-1 nova_compute[225855]:   <nova:flavor name="m1.nano">
Jan 20 14:40:45 compute-1 nova_compute[225855]:     <nova:memory>128</nova:memory>
Jan 20 14:40:45 compute-1 nova_compute[225855]:     <nova:disk>1</nova:disk>
Jan 20 14:40:45 compute-1 nova_compute[225855]:     <nova:swap>0</nova:swap>
Jan 20 14:40:45 compute-1 nova_compute[225855]:     <nova:ephemeral>0</nova:ephemeral>
Jan 20 14:40:45 compute-1 nova_compute[225855]:     <nova:vcpus>1</nova:vcpus>
Jan 20 14:40:45 compute-1 nova_compute[225855]:   </nova:flavor>
Jan 20 14:40:45 compute-1 nova_compute[225855]:   <nova:owner>
Jan 20 14:40:45 compute-1 nova_compute[225855]:     <nova:user uuid="c8a9fb458d27434495a77a94827b6097">tempest-AttachInterfacesTestJSON-305746947-project-member</nova:user>
Jan 20 14:40:45 compute-1 nova_compute[225855]:     <nova:project uuid="e3f93fd4b2154dda9f38e62334904303">tempest-AttachInterfacesTestJSON-305746947</nova:project>
Jan 20 14:40:45 compute-1 nova_compute[225855]:   </nova:owner>
Jan 20 14:40:45 compute-1 nova_compute[225855]:   <nova:root type="image" uuid="a32b3e07-16d8-46fd-9a7b-c242c432fcf9"/>
Jan 20 14:40:45 compute-1 nova_compute[225855]:   <nova:ports>
Jan 20 14:40:45 compute-1 nova_compute[225855]:     <nova:port uuid="b48170b0-717d-48f0-8172-742a4a8596e9">
Jan 20 14:40:45 compute-1 nova_compute[225855]:       <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Jan 20 14:40:45 compute-1 nova_compute[225855]:     </nova:port>
Jan 20 14:40:45 compute-1 nova_compute[225855]:     <nova:port uuid="5fd7b3ad-0cf1-4294-b552-6141c8ee85bd">
Jan 20 14:40:45 compute-1 nova_compute[225855]:       <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Jan 20 14:40:45 compute-1 nova_compute[225855]:     </nova:port>
Jan 20 14:40:45 compute-1 nova_compute[225855]:   </nova:ports>
Jan 20 14:40:45 compute-1 nova_compute[225855]: </nova:instance>
Jan 20 14:40:45 compute-1 nova_compute[225855]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Jan 20 14:40:45 compute-1 nova_compute[225855]: 2026-01-20 14:40:45.091 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:40:45 compute-1 podman[253331]: 2026-01-20 14:40:45.121948196 +0000 UTC m=+0.043286117 container remove ad0796275d2b7bc69c4b451ca13c4323d2608039a8c7168dc31ea54a35ab1b48 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fc21b99b-4e34-422c-be05-0a440009dac4, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 20 14:40:45 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:40:45.127 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[8ec95672-f09f-41f6-8e4d-fdf15d813388]: (4, ('Tue Jan 20 02:40:44 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-fc21b99b-4e34-422c-be05-0a440009dac4 (ad0796275d2b7bc69c4b451ca13c4323d2608039a8c7168dc31ea54a35ab1b48)\nad0796275d2b7bc69c4b451ca13c4323d2608039a8c7168dc31ea54a35ab1b48\nTue Jan 20 02:40:45 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-fc21b99b-4e34-422c-be05-0a440009dac4 (ad0796275d2b7bc69c4b451ca13c4323d2608039a8c7168dc31ea54a35ab1b48)\nad0796275d2b7bc69c4b451ca13c4323d2608039a8c7168dc31ea54a35ab1b48\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:40:45 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:40:45.129 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[ad67f2ff-3e87-447c-970e-daec036d7f8b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:40:45 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:40:45.130 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfc21b99b-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:40:45 compute-1 nova_compute[225855]: 2026-01-20 14:40:45.132 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:40:45 compute-1 kernel: tapfc21b99b-40: left promiscuous mode
Jan 20 14:40:45 compute-1 nova_compute[225855]: 2026-01-20 14:40:45.144 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:40:45 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:40:45.146 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[03a168af-53f3-46e7-8061-9c10f4a88c71]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:40:45 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:40:45.161 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[243a3683-dd8c-4876-a374-25edb5085c4d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:40:45 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:40:45.162 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[a3228c96-507a-4ae0-8372-d88d6220775b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:40:45 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:40:45.175 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[43009149-b2cf-476a-bed4-7f191148194c]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 503226, 'reachable_time': 16842, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 253346, 'error': None, 'target': 'ovnmeta-fc21b99b-4e34-422c-be05-0a440009dac4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:40:45 compute-1 systemd[1]: run-netns-ovnmeta\x2dfc21b99b\x2d4e34\x2d422c\x2dbe05\x2d0a440009dac4.mount: Deactivated successfully.
Jan 20 14:40:45 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:40:45.180 140466 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-fc21b99b-4e34-422c-be05-0a440009dac4 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 20 14:40:45 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:40:45.180 140466 DEBUG oslo.privsep.daemon [-] privsep: reply[b0c487bf-c31c-4f1a-87d8-680f0e0e1f21]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:40:45 compute-1 sudo[253348]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:40:45 compute-1 sudo[253348]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:40:45 compute-1 sudo[253348]: pam_unix(sudo:session): session closed for user root
Jan 20 14:40:45 compute-1 sudo[253374]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:40:45 compute-1 sudo[253374]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:40:45 compute-1 sudo[253374]: pam_unix(sudo:session): session closed for user root
Jan 20 14:40:45 compute-1 nova_compute[225855]: 2026-01-20 14:40:45.601 225859 INFO nova.virt.libvirt.driver [None req-632ac997-4d58-4786-bcf9-7342e361eb4b c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] [instance: d5c2df9d-748f-4df2-9392-b45741975f65] Deleting instance files /var/lib/nova/instances/d5c2df9d-748f-4df2-9392-b45741975f65_del
Jan 20 14:40:45 compute-1 nova_compute[225855]: 2026-01-20 14:40:45.602 225859 INFO nova.virt.libvirt.driver [None req-632ac997-4d58-4786-bcf9-7342e361eb4b c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] [instance: d5c2df9d-748f-4df2-9392-b45741975f65] Deletion of /var/lib/nova/instances/d5c2df9d-748f-4df2-9392-b45741975f65_del complete
Jan 20 14:40:45 compute-1 nova_compute[225855]: 2026-01-20 14:40:45.678 225859 INFO nova.compute.manager [None req-632ac997-4d58-4786-bcf9-7342e361eb4b c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] [instance: d5c2df9d-748f-4df2-9392-b45741975f65] Took 1.18 seconds to destroy the instance on the hypervisor.
Jan 20 14:40:45 compute-1 nova_compute[225855]: 2026-01-20 14:40:45.678 225859 DEBUG oslo.service.loopingcall [None req-632ac997-4d58-4786-bcf9-7342e361eb4b c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 20 14:40:45 compute-1 nova_compute[225855]: 2026-01-20 14:40:45.679 225859 DEBUG nova.compute.manager [-] [instance: d5c2df9d-748f-4df2-9392-b45741975f65] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 20 14:40:45 compute-1 nova_compute[225855]: 2026-01-20 14:40:45.679 225859 DEBUG nova.network.neutron [-] [instance: d5c2df9d-748f-4df2-9392-b45741975f65] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 20 14:40:45 compute-1 ceph-mon[81775]: pgmap v1572: 321 pgs: 321 active+clean; 280 MiB data, 725 MiB used, 20 GiB / 21 GiB avail; 359 KiB/s rd, 2.2 MiB/s wr, 108 op/s
Jan 20 14:40:46 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:40:46 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:40:46 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:40:46.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:40:46 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:40:46 compute-1 nova_compute[225855]: 2026-01-20 14:40:46.817 225859 DEBUG nova.compute.manager [req-12e19205-4bc9-469f-a9ce-97f55419cab9 req-bf857b2b-91fb-4110-a3b1-4c04c58c55b9 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d5c2df9d-748f-4df2-9392-b45741975f65] Received event network-vif-unplugged-5fd7b3ad-0cf1-4294-b552-6141c8ee85bd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:40:46 compute-1 nova_compute[225855]: 2026-01-20 14:40:46.818 225859 DEBUG oslo_concurrency.lockutils [req-12e19205-4bc9-469f-a9ce-97f55419cab9 req-bf857b2b-91fb-4110-a3b1-4c04c58c55b9 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "d5c2df9d-748f-4df2-9392-b45741975f65-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:40:46 compute-1 nova_compute[225855]: 2026-01-20 14:40:46.818 225859 DEBUG oslo_concurrency.lockutils [req-12e19205-4bc9-469f-a9ce-97f55419cab9 req-bf857b2b-91fb-4110-a3b1-4c04c58c55b9 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "d5c2df9d-748f-4df2-9392-b45741975f65-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:40:46 compute-1 nova_compute[225855]: 2026-01-20 14:40:46.818 225859 DEBUG oslo_concurrency.lockutils [req-12e19205-4bc9-469f-a9ce-97f55419cab9 req-bf857b2b-91fb-4110-a3b1-4c04c58c55b9 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "d5c2df9d-748f-4df2-9392-b45741975f65-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:40:46 compute-1 nova_compute[225855]: 2026-01-20 14:40:46.819 225859 DEBUG nova.compute.manager [req-12e19205-4bc9-469f-a9ce-97f55419cab9 req-bf857b2b-91fb-4110-a3b1-4c04c58c55b9 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d5c2df9d-748f-4df2-9392-b45741975f65] No waiting events found dispatching network-vif-unplugged-5fd7b3ad-0cf1-4294-b552-6141c8ee85bd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:40:46 compute-1 nova_compute[225855]: 2026-01-20 14:40:46.819 225859 DEBUG nova.compute.manager [req-12e19205-4bc9-469f-a9ce-97f55419cab9 req-bf857b2b-91fb-4110-a3b1-4c04c58c55b9 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d5c2df9d-748f-4df2-9392-b45741975f65] Received event network-vif-unplugged-5fd7b3ad-0cf1-4294-b552-6141c8ee85bd for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 20 14:40:46 compute-1 nova_compute[225855]: 2026-01-20 14:40:46.819 225859 DEBUG nova.compute.manager [req-12e19205-4bc9-469f-a9ce-97f55419cab9 req-bf857b2b-91fb-4110-a3b1-4c04c58c55b9 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d5c2df9d-748f-4df2-9392-b45741975f65] Received event network-vif-plugged-5fd7b3ad-0cf1-4294-b552-6141c8ee85bd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:40:46 compute-1 nova_compute[225855]: 2026-01-20 14:40:46.820 225859 DEBUG oslo_concurrency.lockutils [req-12e19205-4bc9-469f-a9ce-97f55419cab9 req-bf857b2b-91fb-4110-a3b1-4c04c58c55b9 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "d5c2df9d-748f-4df2-9392-b45741975f65-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:40:46 compute-1 nova_compute[225855]: 2026-01-20 14:40:46.820 225859 DEBUG oslo_concurrency.lockutils [req-12e19205-4bc9-469f-a9ce-97f55419cab9 req-bf857b2b-91fb-4110-a3b1-4c04c58c55b9 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "d5c2df9d-748f-4df2-9392-b45741975f65-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:40:46 compute-1 nova_compute[225855]: 2026-01-20 14:40:46.821 225859 DEBUG oslo_concurrency.lockutils [req-12e19205-4bc9-469f-a9ce-97f55419cab9 req-bf857b2b-91fb-4110-a3b1-4c04c58c55b9 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "d5c2df9d-748f-4df2-9392-b45741975f65-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:40:46 compute-1 nova_compute[225855]: 2026-01-20 14:40:46.821 225859 DEBUG nova.compute.manager [req-12e19205-4bc9-469f-a9ce-97f55419cab9 req-bf857b2b-91fb-4110-a3b1-4c04c58c55b9 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d5c2df9d-748f-4df2-9392-b45741975f65] No waiting events found dispatching network-vif-plugged-5fd7b3ad-0cf1-4294-b552-6141c8ee85bd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:40:46 compute-1 nova_compute[225855]: 2026-01-20 14:40:46.821 225859 WARNING nova.compute.manager [req-12e19205-4bc9-469f-a9ce-97f55419cab9 req-bf857b2b-91fb-4110-a3b1-4c04c58c55b9 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d5c2df9d-748f-4df2-9392-b45741975f65] Received unexpected event network-vif-plugged-5fd7b3ad-0cf1-4294-b552-6141c8ee85bd for instance with vm_state active and task_state deleting.
Jan 20 14:40:47 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:40:47 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:40:47 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:40:47.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:40:47 compute-1 nova_compute[225855]: 2026-01-20 14:40:47.055 225859 DEBUG nova.compute.manager [req-ce278299-790c-4085-84d7-681da752ac54 req-a5491507-0d5f-468b-917b-f4c7c62f5808 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d5c2df9d-748f-4df2-9392-b45741975f65] Received event network-vif-plugged-b48170b0-717d-48f0-8172-742a4a8596e9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:40:47 compute-1 nova_compute[225855]: 2026-01-20 14:40:47.055 225859 DEBUG oslo_concurrency.lockutils [req-ce278299-790c-4085-84d7-681da752ac54 req-a5491507-0d5f-468b-917b-f4c7c62f5808 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "d5c2df9d-748f-4df2-9392-b45741975f65-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:40:47 compute-1 nova_compute[225855]: 2026-01-20 14:40:47.055 225859 DEBUG oslo_concurrency.lockutils [req-ce278299-790c-4085-84d7-681da752ac54 req-a5491507-0d5f-468b-917b-f4c7c62f5808 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "d5c2df9d-748f-4df2-9392-b45741975f65-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:40:47 compute-1 nova_compute[225855]: 2026-01-20 14:40:47.056 225859 DEBUG oslo_concurrency.lockutils [req-ce278299-790c-4085-84d7-681da752ac54 req-a5491507-0d5f-468b-917b-f4c7c62f5808 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "d5c2df9d-748f-4df2-9392-b45741975f65-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:40:47 compute-1 nova_compute[225855]: 2026-01-20 14:40:47.056 225859 DEBUG nova.compute.manager [req-ce278299-790c-4085-84d7-681da752ac54 req-a5491507-0d5f-468b-917b-f4c7c62f5808 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d5c2df9d-748f-4df2-9392-b45741975f65] No waiting events found dispatching network-vif-plugged-b48170b0-717d-48f0-8172-742a4a8596e9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:40:47 compute-1 nova_compute[225855]: 2026-01-20 14:40:47.056 225859 WARNING nova.compute.manager [req-ce278299-790c-4085-84d7-681da752ac54 req-a5491507-0d5f-468b-917b-f4c7c62f5808 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d5c2df9d-748f-4df2-9392-b45741975f65] Received unexpected event network-vif-plugged-b48170b0-717d-48f0-8172-742a4a8596e9 for instance with vm_state active and task_state deleting.
Jan 20 14:40:47 compute-1 nova_compute[225855]: 2026-01-20 14:40:47.126 225859 DEBUG nova.network.neutron [None req-72b0302c-c7bb-4648-801b-9d4e27b907e8 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] [instance: d5c2df9d-748f-4df2-9392-b45741975f65] Updating instance_info_cache with network_info: [{"id": "b48170b0-717d-48f0-8172-742a4a8596e9", "address": "fa:16:3e:3b:35:f2", "network": {"id": "fc21b99b-4e34-422c-be05-0a440009dac4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-808285772-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.174", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3f93fd4b2154dda9f38e62334904303", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb48170b0-71", "ovs_interfaceid": "b48170b0-717d-48f0-8172-742a4a8596e9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "5fd7b3ad-0cf1-4294-b552-6141c8ee85bd", "address": "fa:16:3e:d2:51:f5", "network": {"id": "fc21b99b-4e34-422c-be05-0a440009dac4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-808285772-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3f93fd4b2154dda9f38e62334904303", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5fd7b3ad-0c", "ovs_interfaceid": "5fd7b3ad-0cf1-4294-b552-6141c8ee85bd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "0b83ac2a-727a-4db9-91f2-69f939deeb69", "address": "fa:16:3e:e7:0e:b2", "network": {"id": "fc21b99b-4e34-422c-be05-0a440009dac4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-808285772-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3f93fd4b2154dda9f38e62334904303", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b83ac2a-72", "ovs_interfaceid": "0b83ac2a-727a-4db9-91f2-69f939deeb69", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:40:47 compute-1 nova_compute[225855]: 2026-01-20 14:40:47.159 225859 DEBUG oslo_concurrency.lockutils [None req-72b0302c-c7bb-4648-801b-9d4e27b907e8 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Releasing lock "refresh_cache-d5c2df9d-748f-4df2-9392-b45741975f65" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:40:47 compute-1 nova_compute[225855]: 2026-01-20 14:40:47.183 225859 DEBUG oslo_concurrency.lockutils [None req-72b0302c-c7bb-4648-801b-9d4e27b907e8 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Lock "interface-d5c2df9d-748f-4df2-9392-b45741975f65-db46acd4-809b-4127-ad48-870ae429b4d6" "released" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: held 5.200s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:40:47 compute-1 nova_compute[225855]: 2026-01-20 14:40:47.881 225859 DEBUG nova.network.neutron [-] [instance: d5c2df9d-748f-4df2-9392-b45741975f65] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:40:47 compute-1 nova_compute[225855]: 2026-01-20 14:40:47.900 225859 INFO nova.compute.manager [-] [instance: d5c2df9d-748f-4df2-9392-b45741975f65] Took 2.22 seconds to deallocate network for instance.
Jan 20 14:40:47 compute-1 nova_compute[225855]: 2026-01-20 14:40:47.945 225859 DEBUG oslo_concurrency.lockutils [None req-632ac997-4d58-4786-bcf9-7342e361eb4b c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:40:47 compute-1 nova_compute[225855]: 2026-01-20 14:40:47.945 225859 DEBUG oslo_concurrency.lockutils [None req-632ac997-4d58-4786-bcf9-7342e361eb4b c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:40:48 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:40:48 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:40:48 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:40:48.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:40:48 compute-1 nova_compute[225855]: 2026-01-20 14:40:48.019 225859 DEBUG oslo_concurrency.processutils [None req-632ac997-4d58-4786-bcf9-7342e361eb4b c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:40:48 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:40:48 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2116215996' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:40:48 compute-1 nova_compute[225855]: 2026-01-20 14:40:48.453 225859 DEBUG oslo_concurrency.processutils [None req-632ac997-4d58-4786-bcf9-7342e361eb4b c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:40:48 compute-1 nova_compute[225855]: 2026-01-20 14:40:48.458 225859 DEBUG nova.compute.provider_tree [None req-632ac997-4d58-4786-bcf9-7342e361eb4b c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:40:48 compute-1 nova_compute[225855]: 2026-01-20 14:40:48.475 225859 DEBUG nova.scheduler.client.report [None req-632ac997-4d58-4786-bcf9-7342e361eb4b c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:40:48 compute-1 nova_compute[225855]: 2026-01-20 14:40:48.496 225859 DEBUG oslo_concurrency.lockutils [None req-632ac997-4d58-4786-bcf9-7342e361eb4b c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.551s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:40:48 compute-1 nova_compute[225855]: 2026-01-20 14:40:48.526 225859 INFO nova.scheduler.client.report [None req-632ac997-4d58-4786-bcf9-7342e361eb4b c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Deleted allocations for instance d5c2df9d-748f-4df2-9392-b45741975f65
Jan 20 14:40:48 compute-1 nova_compute[225855]: 2026-01-20 14:40:48.589 225859 DEBUG oslo_concurrency.lockutils [None req-632ac997-4d58-4786-bcf9-7342e361eb4b c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Lock "d5c2df9d-748f-4df2-9392-b45741975f65" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.094s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:40:48 compute-1 ceph-mon[81775]: pgmap v1573: 321 pgs: 321 active+clean; 252 MiB data, 710 MiB used, 20 GiB / 21 GiB avail; 322 KiB/s rd, 1.2 MiB/s wr, 75 op/s
Jan 20 14:40:48 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/2116215996' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:40:49 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:40:49 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:40:49 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:40:49.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:40:49 compute-1 nova_compute[225855]: 2026-01-20 14:40:49.330 225859 DEBUG nova.compute.manager [req-d292c7d6-0178-4d96-acc3-33b91add103f req-b867e9a1-97bf-4bb4-a5bf-35f088040c66 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d5c2df9d-748f-4df2-9392-b45741975f65] Received event network-vif-deleted-b48170b0-717d-48f0-8172-742a4a8596e9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:40:49 compute-1 nova_compute[225855]: 2026-01-20 14:40:49.330 225859 DEBUG nova.compute.manager [req-d292c7d6-0178-4d96-acc3-33b91add103f req-b867e9a1-97bf-4bb4-a5bf-35f088040c66 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d5c2df9d-748f-4df2-9392-b45741975f65] Received event network-vif-deleted-5fd7b3ad-0cf1-4294-b552-6141c8ee85bd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:40:50 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:40:50 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:40:50 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:40:50.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:40:50 compute-1 nova_compute[225855]: 2026-01-20 14:40:50.020 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:40:50 compute-1 nova_compute[225855]: 2026-01-20 14:40:50.095 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:40:50 compute-1 ceph-mon[81775]: pgmap v1574: 321 pgs: 321 active+clean; 221 MiB data, 697 MiB used, 20 GiB / 21 GiB avail; 182 KiB/s rd, 759 KiB/s wr, 69 op/s
Jan 20 14:40:50 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/695930938' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:40:51 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:40:51 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:40:51 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:40:51.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:40:51 compute-1 podman[253424]: 2026-01-20 14:40:51.08536675 +0000 UTC m=+0.131516826 container health_status 72be95c12e041eb2cd1a16ad70dfa1391d9f03a5f3180a614ead9459dcfcd46f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 20 14:40:51 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:40:52 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:40:52 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:40:52 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:40:52.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:40:52 compute-1 ceph-mon[81775]: pgmap v1575: 321 pgs: 321 active+clean; 151 MiB data, 663 MiB used, 20 GiB / 21 GiB avail; 82 KiB/s rd, 73 KiB/s wr, 72 op/s
Jan 20 14:40:53 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:40:53 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:40:53 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:40:53.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:40:53 compute-1 nova_compute[225855]: 2026-01-20 14:40:53.321 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:40:54 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:40:54 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:40:54 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:40:54.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:40:54 compute-1 ceph-mon[81775]: pgmap v1576: 321 pgs: 321 active+clean; 121 MiB data, 639 MiB used, 20 GiB / 21 GiB avail; 58 KiB/s rd, 66 KiB/s wr, 59 op/s
Jan 20 14:40:55 compute-1 nova_compute[225855]: 2026-01-20 14:40:55.023 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:40:55 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:40:55 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:40:55 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:40:55.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:40:55 compute-1 nova_compute[225855]: 2026-01-20 14:40:55.096 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:40:55 compute-1 ovn_controller[130490]: 2026-01-20T14:40:55Z|00231|binding|INFO|Releasing lport 8cff17f5-b792-4e6f-8f1e-6c48322af961 from this chassis (sb_readonly=0)
Jan 20 14:40:55 compute-1 nova_compute[225855]: 2026-01-20 14:40:55.180 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:40:56 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:40:56 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:40:56 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:40:56.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:40:56 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:40:56 compute-1 nova_compute[225855]: 2026-01-20 14:40:56.333 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:40:56 compute-1 ceph-mon[81775]: pgmap v1577: 321 pgs: 321 active+clean; 121 MiB data, 639 MiB used, 20 GiB / 21 GiB avail; 38 KiB/s rd, 19 KiB/s wr, 56 op/s
Jan 20 14:40:56 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/2659128134' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:40:57 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:40:57 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:40:57 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:40:57.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:40:58 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:40:58 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:40:58 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:40:58.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:40:58 compute-1 nova_compute[225855]: 2026-01-20 14:40:58.372 225859 DEBUG oslo_concurrency.lockutils [None req-61fb8fa8-a6ad-49b5-8d79-1cf27e5804e2 16c05e1ac16f428bab6b36346856235e b50ce2f25e8943e28ddf8bf69c721e75 - - default default] Acquiring lock "a96ccadd-ac1d-4040-8bcc-bebb460ee233" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:40:58 compute-1 nova_compute[225855]: 2026-01-20 14:40:58.373 225859 DEBUG oslo_concurrency.lockutils [None req-61fb8fa8-a6ad-49b5-8d79-1cf27e5804e2 16c05e1ac16f428bab6b36346856235e b50ce2f25e8943e28ddf8bf69c721e75 - - default default] Lock "a96ccadd-ac1d-4040-8bcc-bebb460ee233" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:40:58 compute-1 nova_compute[225855]: 2026-01-20 14:40:58.373 225859 DEBUG oslo_concurrency.lockutils [None req-61fb8fa8-a6ad-49b5-8d79-1cf27e5804e2 16c05e1ac16f428bab6b36346856235e b50ce2f25e8943e28ddf8bf69c721e75 - - default default] Acquiring lock "a96ccadd-ac1d-4040-8bcc-bebb460ee233-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:40:58 compute-1 nova_compute[225855]: 2026-01-20 14:40:58.373 225859 DEBUG oslo_concurrency.lockutils [None req-61fb8fa8-a6ad-49b5-8d79-1cf27e5804e2 16c05e1ac16f428bab6b36346856235e b50ce2f25e8943e28ddf8bf69c721e75 - - default default] Lock "a96ccadd-ac1d-4040-8bcc-bebb460ee233-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:40:58 compute-1 nova_compute[225855]: 2026-01-20 14:40:58.374 225859 DEBUG oslo_concurrency.lockutils [None req-61fb8fa8-a6ad-49b5-8d79-1cf27e5804e2 16c05e1ac16f428bab6b36346856235e b50ce2f25e8943e28ddf8bf69c721e75 - - default default] Lock "a96ccadd-ac1d-4040-8bcc-bebb460ee233-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:40:58 compute-1 nova_compute[225855]: 2026-01-20 14:40:58.375 225859 INFO nova.compute.manager [None req-61fb8fa8-a6ad-49b5-8d79-1cf27e5804e2 16c05e1ac16f428bab6b36346856235e b50ce2f25e8943e28ddf8bf69c721e75 - - default default] [instance: a96ccadd-ac1d-4040-8bcc-bebb460ee233] Terminating instance
Jan 20 14:40:58 compute-1 nova_compute[225855]: 2026-01-20 14:40:58.376 225859 DEBUG nova.compute.manager [None req-61fb8fa8-a6ad-49b5-8d79-1cf27e5804e2 16c05e1ac16f428bab6b36346856235e b50ce2f25e8943e28ddf8bf69c721e75 - - default default] [instance: a96ccadd-ac1d-4040-8bcc-bebb460ee233] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 20 14:40:58 compute-1 kernel: tap19a89daa-77 (unregistering): left promiscuous mode
Jan 20 14:40:58 compute-1 NetworkManager[49104]: <info>  [1768920058.4257] device (tap19a89daa-77): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 20 14:40:58 compute-1 ovn_controller[130490]: 2026-01-20T14:40:58Z|00232|binding|INFO|Releasing lport 19a89daa-770c-4c3f-970c-a9a462503b06 from this chassis (sb_readonly=0)
Jan 20 14:40:58 compute-1 ovn_controller[130490]: 2026-01-20T14:40:58Z|00233|binding|INFO|Setting lport 19a89daa-770c-4c3f-970c-a9a462503b06 down in Southbound
Jan 20 14:40:58 compute-1 ovn_controller[130490]: 2026-01-20T14:40:58Z|00234|binding|INFO|Removing iface tap19a89daa-77 ovn-installed in OVS
Jan 20 14:40:58 compute-1 nova_compute[225855]: 2026-01-20 14:40:58.434 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:40:58 compute-1 nova_compute[225855]: 2026-01-20 14:40:58.436 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:40:58 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:40:58.441 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7c:c6:66 10.100.0.12'], port_security=['fa:16:3e:7c:c6:66 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'a96ccadd-ac1d-4040-8bcc-bebb460ee233', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1002188b-c6a7-4b59-9326-3a1a837a00fd', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b50ce2f25e8943e28ddf8bf69c721e75', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b713641f-ba5a-4dd0-8917-baab1ca61007', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com', 'neutron:port_fip': '192.168.122.200'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=01ad0ee9-ef23-4072-99f7-406bf8559611, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], logical_port=19a89daa-770c-4c3f-970c-a9a462503b06) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:40:58 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:40:58.442 140354 INFO neutron.agent.ovn.metadata.agent [-] Port 19a89daa-770c-4c3f-970c-a9a462503b06 in datapath 1002188b-c6a7-4b59-9326-3a1a837a00fd unbound from our chassis
Jan 20 14:40:58 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:40:58.445 140354 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 1002188b-c6a7-4b59-9326-3a1a837a00fd, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 20 14:40:58 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:40:58.446 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[42ea4ee2-2136-4154-a590-922b5a6f3430]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:40:58 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:40:58.447 140354 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-1002188b-c6a7-4b59-9326-3a1a837a00fd namespace which is not needed anymore
Jan 20 14:40:58 compute-1 nova_compute[225855]: 2026-01-20 14:40:58.458 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:40:58 compute-1 systemd[1]: machine-qemu\x2d30\x2dinstance\x2d00000044.scope: Deactivated successfully.
Jan 20 14:40:58 compute-1 systemd[1]: machine-qemu\x2d30\x2dinstance\x2d00000044.scope: Consumed 14.189s CPU time.
Jan 20 14:40:58 compute-1 systemd-machined[194361]: Machine qemu-30-instance-00000044 terminated.
Jan 20 14:40:58 compute-1 neutron-haproxy-ovnmeta-1002188b-c6a7-4b59-9326-3a1a837a00fd[252832]: [NOTICE]   (252836) : haproxy version is 2.8.14-c23fe91
Jan 20 14:40:58 compute-1 neutron-haproxy-ovnmeta-1002188b-c6a7-4b59-9326-3a1a837a00fd[252832]: [NOTICE]   (252836) : path to executable is /usr/sbin/haproxy
Jan 20 14:40:58 compute-1 neutron-haproxy-ovnmeta-1002188b-c6a7-4b59-9326-3a1a837a00fd[252832]: [WARNING]  (252836) : Exiting Master process...
Jan 20 14:40:58 compute-1 neutron-haproxy-ovnmeta-1002188b-c6a7-4b59-9326-3a1a837a00fd[252832]: [WARNING]  (252836) : Exiting Master process...
Jan 20 14:40:58 compute-1 neutron-haproxy-ovnmeta-1002188b-c6a7-4b59-9326-3a1a837a00fd[252832]: [ALERT]    (252836) : Current worker (252838) exited with code 143 (Terminated)
Jan 20 14:40:58 compute-1 neutron-haproxy-ovnmeta-1002188b-c6a7-4b59-9326-3a1a837a00fd[252832]: [WARNING]  (252836) : All workers exited. Exiting... (0)
Jan 20 14:40:58 compute-1 systemd[1]: libpod-38da31d8f51b431b660e3b2370ac55baf90389fb1943ecc974f1f1032615fe8b.scope: Deactivated successfully.
Jan 20 14:40:58 compute-1 podman[253480]: 2026-01-20 14:40:58.570435262 +0000 UTC m=+0.045292004 container died 38da31d8f51b431b660e3b2370ac55baf90389fb1943ecc974f1f1032615fe8b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1002188b-c6a7-4b59-9326-3a1a837a00fd, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS)
Jan 20 14:40:58 compute-1 kernel: tap19a89daa-77: entered promiscuous mode
Jan 20 14:40:58 compute-1 NetworkManager[49104]: <info>  [1768920058.5968] manager: (tap19a89daa-77): new Tun device (/org/freedesktop/NetworkManager/Devices/104)
Jan 20 14:40:58 compute-1 ovn_controller[130490]: 2026-01-20T14:40:58Z|00235|binding|INFO|Claiming lport 19a89daa-770c-4c3f-970c-a9a462503b06 for this chassis.
Jan 20 14:40:58 compute-1 kernel: tap19a89daa-77 (unregistering): left promiscuous mode
Jan 20 14:40:58 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-38da31d8f51b431b660e3b2370ac55baf90389fb1943ecc974f1f1032615fe8b-userdata-shm.mount: Deactivated successfully.
Jan 20 14:40:58 compute-1 nova_compute[225855]: 2026-01-20 14:40:58.599 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:40:58 compute-1 ovn_controller[130490]: 2026-01-20T14:40:58Z|00236|binding|INFO|19a89daa-770c-4c3f-970c-a9a462503b06: Claiming fa:16:3e:7c:c6:66 10.100.0.12
Jan 20 14:40:58 compute-1 systemd[1]: var-lib-containers-storage-overlay-fc585d326ff1df1a671eac1b099fad631006b99f0da45b11e11943fba074a4f1-merged.mount: Deactivated successfully.
Jan 20 14:40:58 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:40:58.607 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7c:c6:66 10.100.0.12'], port_security=['fa:16:3e:7c:c6:66 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'a96ccadd-ac1d-4040-8bcc-bebb460ee233', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1002188b-c6a7-4b59-9326-3a1a837a00fd', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b50ce2f25e8943e28ddf8bf69c721e75', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b713641f-ba5a-4dd0-8917-baab1ca61007', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com', 'neutron:port_fip': '192.168.122.200'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=01ad0ee9-ef23-4072-99f7-406bf8559611, chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], logical_port=19a89daa-770c-4c3f-970c-a9a462503b06) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:40:58 compute-1 podman[253480]: 2026-01-20 14:40:58.614557521 +0000 UTC m=+0.089414253 container cleanup 38da31d8f51b431b660e3b2370ac55baf90389fb1943ecc974f1f1032615fe8b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1002188b-c6a7-4b59-9326-3a1a837a00fd, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 20 14:40:58 compute-1 ovn_controller[130490]: 2026-01-20T14:40:58Z|00237|binding|INFO|Setting lport 19a89daa-770c-4c3f-970c-a9a462503b06 ovn-installed in OVS
Jan 20 14:40:58 compute-1 ovn_controller[130490]: 2026-01-20T14:40:58Z|00238|binding|INFO|Setting lport 19a89daa-770c-4c3f-970c-a9a462503b06 up in Southbound
Jan 20 14:40:58 compute-1 ovn_controller[130490]: 2026-01-20T14:40:58Z|00239|binding|INFO|Releasing lport 19a89daa-770c-4c3f-970c-a9a462503b06 from this chassis (sb_readonly=1)
Jan 20 14:40:58 compute-1 nova_compute[225855]: 2026-01-20 14:40:58.622 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:40:58 compute-1 nova_compute[225855]: 2026-01-20 14:40:58.624 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:40:58 compute-1 systemd[1]: libpod-conmon-38da31d8f51b431b660e3b2370ac55baf90389fb1943ecc974f1f1032615fe8b.scope: Deactivated successfully.
Jan 20 14:40:58 compute-1 ovn_controller[130490]: 2026-01-20T14:40:58Z|00240|if_status|INFO|Dropped 2 log messages in last 563 seconds (most recently, 563 seconds ago) due to excessive rate
Jan 20 14:40:58 compute-1 ovn_controller[130490]: 2026-01-20T14:40:58Z|00241|if_status|INFO|Not setting lport 19a89daa-770c-4c3f-970c-a9a462503b06 down as sb is readonly
Jan 20 14:40:58 compute-1 ovn_controller[130490]: 2026-01-20T14:40:58Z|00242|binding|INFO|Removing iface tap19a89daa-77 ovn-installed in OVS
Jan 20 14:40:58 compute-1 nova_compute[225855]: 2026-01-20 14:40:58.627 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:40:58 compute-1 ovn_controller[130490]: 2026-01-20T14:40:58Z|00243|binding|INFO|Releasing lport 19a89daa-770c-4c3f-970c-a9a462503b06 from this chassis (sb_readonly=0)
Jan 20 14:40:58 compute-1 ovn_controller[130490]: 2026-01-20T14:40:58Z|00244|binding|INFO|Setting lport 19a89daa-770c-4c3f-970c-a9a462503b06 down in Southbound
Jan 20 14:40:58 compute-1 nova_compute[225855]: 2026-01-20 14:40:58.631 225859 INFO nova.virt.libvirt.driver [-] [instance: a96ccadd-ac1d-4040-8bcc-bebb460ee233] Instance destroyed successfully.
Jan 20 14:40:58 compute-1 nova_compute[225855]: 2026-01-20 14:40:58.632 225859 DEBUG nova.objects.instance [None req-61fb8fa8-a6ad-49b5-8d79-1cf27e5804e2 16c05e1ac16f428bab6b36346856235e b50ce2f25e8943e28ddf8bf69c721e75 - - default default] Lazy-loading 'resources' on Instance uuid a96ccadd-ac1d-4040-8bcc-bebb460ee233 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:40:58 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:40:58.637 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7c:c6:66 10.100.0.12'], port_security=['fa:16:3e:7c:c6:66 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'a96ccadd-ac1d-4040-8bcc-bebb460ee233', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1002188b-c6a7-4b59-9326-3a1a837a00fd', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b50ce2f25e8943e28ddf8bf69c721e75', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b713641f-ba5a-4dd0-8917-baab1ca61007', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com', 'neutron:port_fip': '192.168.122.200'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=01ad0ee9-ef23-4072-99f7-406bf8559611, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], logical_port=19a89daa-770c-4c3f-970c-a9a462503b06) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:40:58 compute-1 nova_compute[225855]: 2026-01-20 14:40:58.639 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:40:58 compute-1 nova_compute[225855]: 2026-01-20 14:40:58.651 225859 DEBUG nova.virt.libvirt.vif [None req-61fb8fa8-a6ad-49b5-8d79-1cf27e5804e2 16c05e1ac16f428bab6b36346856235e b50ce2f25e8943e28ddf8bf69c721e75 - - default default] vif_type=ovs instance=Instance(access_ip_v4=2.2.2.2,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T14:40:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='guest-instance-1.domain.com',display_name='guest-instance-1.domain.com',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='guest-instance-1-domain-com',id=68,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEbOWirsAXlTIoevt4kXzpqBpapeg8X6KpUPmzDXnXlw7wqoLKHnmHfUIYL+FmHPJoWs+SV643EEJY+tqAkcrZlCPnWit4UcMgPhE0LGoYJ6xDnxZGwNzSj5VV503kGh5A==',key_name='tempest-keypair-1553012660',keypairs=<?>,launch_index=0,launched_at=2026-01-20T14:40:23Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b50ce2f25e8943e28ddf8bf69c721e75',ramdisk_id='',reservation_id='r-w26et9o4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestFqdnHostnames-626473092',owner_user_name='tempest-ServersTestFqdnHostnames-626473092-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T14:40:23Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='16c05e1ac16f428bab6b36346856235e',uuid=a96ccadd-ac1d-4040-8bcc-bebb460ee233,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "19a89daa-770c-4c3f-970c-a9a462503b06", "address": "fa:16:3e:7c:c6:66", "network": {"id": "1002188b-c6a7-4b59-9326-3a1a837a00fd", "bridge": "br-int", "label": "tempest-ServersTestFqdnHostnames-1606456759-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b50ce2f25e8943e28ddf8bf69c721e75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap19a89daa-77", "ovs_interfaceid": "19a89daa-770c-4c3f-970c-a9a462503b06", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 20 14:40:58 compute-1 nova_compute[225855]: 2026-01-20 14:40:58.651 225859 DEBUG nova.network.os_vif_util [None req-61fb8fa8-a6ad-49b5-8d79-1cf27e5804e2 16c05e1ac16f428bab6b36346856235e b50ce2f25e8943e28ddf8bf69c721e75 - - default default] Converting VIF {"id": "19a89daa-770c-4c3f-970c-a9a462503b06", "address": "fa:16:3e:7c:c6:66", "network": {"id": "1002188b-c6a7-4b59-9326-3a1a837a00fd", "bridge": "br-int", "label": "tempest-ServersTestFqdnHostnames-1606456759-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b50ce2f25e8943e28ddf8bf69c721e75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap19a89daa-77", "ovs_interfaceid": "19a89daa-770c-4c3f-970c-a9a462503b06", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:40:58 compute-1 nova_compute[225855]: 2026-01-20 14:40:58.652 225859 DEBUG nova.network.os_vif_util [None req-61fb8fa8-a6ad-49b5-8d79-1cf27e5804e2 16c05e1ac16f428bab6b36346856235e b50ce2f25e8943e28ddf8bf69c721e75 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:7c:c6:66,bridge_name='br-int',has_traffic_filtering=True,id=19a89daa-770c-4c3f-970c-a9a462503b06,network=Network(1002188b-c6a7-4b59-9326-3a1a837a00fd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap19a89daa-77') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:40:58 compute-1 nova_compute[225855]: 2026-01-20 14:40:58.653 225859 DEBUG os_vif [None req-61fb8fa8-a6ad-49b5-8d79-1cf27e5804e2 16c05e1ac16f428bab6b36346856235e b50ce2f25e8943e28ddf8bf69c721e75 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:7c:c6:66,bridge_name='br-int',has_traffic_filtering=True,id=19a89daa-770c-4c3f-970c-a9a462503b06,network=Network(1002188b-c6a7-4b59-9326-3a1a837a00fd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap19a89daa-77') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 20 14:40:58 compute-1 nova_compute[225855]: 2026-01-20 14:40:58.655 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:40:58 compute-1 nova_compute[225855]: 2026-01-20 14:40:58.655 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap19a89daa-77, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:40:58 compute-1 nova_compute[225855]: 2026-01-20 14:40:58.658 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:40:58 compute-1 nova_compute[225855]: 2026-01-20 14:40:58.661 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 14:40:58 compute-1 nova_compute[225855]: 2026-01-20 14:40:58.663 225859 INFO os_vif [None req-61fb8fa8-a6ad-49b5-8d79-1cf27e5804e2 16c05e1ac16f428bab6b36346856235e b50ce2f25e8943e28ddf8bf69c721e75 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:7c:c6:66,bridge_name='br-int',has_traffic_filtering=True,id=19a89daa-770c-4c3f-970c-a9a462503b06,network=Network(1002188b-c6a7-4b59-9326-3a1a837a00fd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap19a89daa-77')
Jan 20 14:40:58 compute-1 podman[253518]: 2026-01-20 14:40:58.688428774 +0000 UTC m=+0.047292891 container remove 38da31d8f51b431b660e3b2370ac55baf90389fb1943ecc974f1f1032615fe8b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1002188b-c6a7-4b59-9326-3a1a837a00fd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Jan 20 14:40:58 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:40:58.694 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[328d8f43-1ec1-4f71-bcf1-26c25031cf0d]: (4, ('Tue Jan 20 02:40:58 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-1002188b-c6a7-4b59-9326-3a1a837a00fd (38da31d8f51b431b660e3b2370ac55baf90389fb1943ecc974f1f1032615fe8b)\n38da31d8f51b431b660e3b2370ac55baf90389fb1943ecc974f1f1032615fe8b\nTue Jan 20 02:40:58 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-1002188b-c6a7-4b59-9326-3a1a837a00fd (38da31d8f51b431b660e3b2370ac55baf90389fb1943ecc974f1f1032615fe8b)\n38da31d8f51b431b660e3b2370ac55baf90389fb1943ecc974f1f1032615fe8b\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:40:58 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:40:58.697 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[7f47ca17-3a06-4e59-b9a1-bd8d89bea4de]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:40:58 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:40:58.698 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1002188b-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:40:58 compute-1 nova_compute[225855]: 2026-01-20 14:40:58.700 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:40:58 compute-1 kernel: tap1002188b-c0: left promiscuous mode
Jan 20 14:40:58 compute-1 nova_compute[225855]: 2026-01-20 14:40:58.714 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:40:58 compute-1 nova_compute[225855]: 2026-01-20 14:40:58.716 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:40:58 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:40:58.719 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[03f9dd7c-5b72-437a-9b0c-3154ba9d8345]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:40:58 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:40:58.735 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[52014f26-3a88-49f2-8131-fec56d35c989]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:40:58 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:40:58.736 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[527a58e4-14e0-43fe-a14d-5b9c34fe93e4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:40:58 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:40:58.752 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[cb8513b0-0b13-4f3b-8cc6-0c9e4f0f642f]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 506133, 'reachable_time': 28256, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 253552, 'error': None, 'target': 'ovnmeta-1002188b-c6a7-4b59-9326-3a1a837a00fd', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:40:58 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:40:58.755 140466 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-1002188b-c6a7-4b59-9326-3a1a837a00fd deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 20 14:40:58 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:40:58.755 140466 DEBUG oslo.privsep.daemon [-] privsep: reply[ea8bf944-4b43-4e35-9f7f-4dd28580bf9b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:40:58 compute-1 systemd[1]: run-netns-ovnmeta\x2d1002188b\x2dc6a7\x2d4b59\x2d9326\x2d3a1a837a00fd.mount: Deactivated successfully.
Jan 20 14:40:58 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:40:58.756 140354 INFO neutron.agent.ovn.metadata.agent [-] Port 19a89daa-770c-4c3f-970c-a9a462503b06 in datapath 1002188b-c6a7-4b59-9326-3a1a837a00fd unbound from our chassis
Jan 20 14:40:58 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:40:58.757 140354 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 1002188b-c6a7-4b59-9326-3a1a837a00fd, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 20 14:40:58 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:40:58.758 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[ce70a652-52df-4ced-b4b8-1fdb54f3e32b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:40:58 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:40:58.759 140354 INFO neutron.agent.ovn.metadata.agent [-] Port 19a89daa-770c-4c3f-970c-a9a462503b06 in datapath 1002188b-c6a7-4b59-9326-3a1a837a00fd unbound from our chassis
Jan 20 14:40:58 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:40:58.760 140354 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 1002188b-c6a7-4b59-9326-3a1a837a00fd, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 20 14:40:58 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:40:58.760 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[3ed4767a-6f60-4cae-be1c-0ab63b2808e9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:40:59 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:40:59 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:40:59 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:40:59.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:40:59 compute-1 ceph-mon[81775]: pgmap v1578: 321 pgs: 321 active+clean; 121 MiB data, 640 MiB used, 20 GiB / 21 GiB avail; 38 KiB/s rd, 15 KiB/s wr, 55 op/s
Jan 20 14:40:59 compute-1 nova_compute[225855]: 2026-01-20 14:40:59.594 225859 INFO nova.virt.libvirt.driver [None req-61fb8fa8-a6ad-49b5-8d79-1cf27e5804e2 16c05e1ac16f428bab6b36346856235e b50ce2f25e8943e28ddf8bf69c721e75 - - default default] [instance: a96ccadd-ac1d-4040-8bcc-bebb460ee233] Deleting instance files /var/lib/nova/instances/a96ccadd-ac1d-4040-8bcc-bebb460ee233_del
Jan 20 14:40:59 compute-1 nova_compute[225855]: 2026-01-20 14:40:59.595 225859 INFO nova.virt.libvirt.driver [None req-61fb8fa8-a6ad-49b5-8d79-1cf27e5804e2 16c05e1ac16f428bab6b36346856235e b50ce2f25e8943e28ddf8bf69c721e75 - - default default] [instance: a96ccadd-ac1d-4040-8bcc-bebb460ee233] Deletion of /var/lib/nova/instances/a96ccadd-ac1d-4040-8bcc-bebb460ee233_del complete
Jan 20 14:40:59 compute-1 nova_compute[225855]: 2026-01-20 14:40:59.671 225859 INFO nova.compute.manager [None req-61fb8fa8-a6ad-49b5-8d79-1cf27e5804e2 16c05e1ac16f428bab6b36346856235e b50ce2f25e8943e28ddf8bf69c721e75 - - default default] [instance: a96ccadd-ac1d-4040-8bcc-bebb460ee233] Took 1.30 seconds to destroy the instance on the hypervisor.
Jan 20 14:40:59 compute-1 nova_compute[225855]: 2026-01-20 14:40:59.672 225859 DEBUG oslo.service.loopingcall [None req-61fb8fa8-a6ad-49b5-8d79-1cf27e5804e2 16c05e1ac16f428bab6b36346856235e b50ce2f25e8943e28ddf8bf69c721e75 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 20 14:40:59 compute-1 nova_compute[225855]: 2026-01-20 14:40:59.673 225859 DEBUG nova.compute.manager [-] [instance: a96ccadd-ac1d-4040-8bcc-bebb460ee233] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 20 14:40:59 compute-1 nova_compute[225855]: 2026-01-20 14:40:59.673 225859 DEBUG nova.network.neutron [-] [instance: a96ccadd-ac1d-4040-8bcc-bebb460ee233] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 20 14:40:59 compute-1 nova_compute[225855]: 2026-01-20 14:40:59.961 225859 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1768920044.959884, d5c2df9d-748f-4df2-9392-b45741975f65 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:40:59 compute-1 nova_compute[225855]: 2026-01-20 14:40:59.961 225859 INFO nova.compute.manager [-] [instance: d5c2df9d-748f-4df2-9392-b45741975f65] VM Stopped (Lifecycle Event)
Jan 20 14:40:59 compute-1 nova_compute[225855]: 2026-01-20 14:40:59.982 225859 DEBUG nova.compute.manager [None req-58decafb-bca1-4b05-8ad7-5b2b8002b273 - - - - - -] [instance: d5c2df9d-748f-4df2-9392-b45741975f65] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:41:00 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:41:00 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:41:00 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:41:00.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:41:00 compute-1 nova_compute[225855]: 2026-01-20 14:41:00.098 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:41:00 compute-1 ceph-mon[81775]: pgmap v1579: 321 pgs: 321 active+clean; 124 MiB data, 642 MiB used, 20 GiB / 21 GiB avail; 43 KiB/s rd, 181 KiB/s wr, 64 op/s
Jan 20 14:41:00 compute-1 podman[253555]: 2026-01-20 14:41:00.997578212 +0000 UTC m=+0.049273667 container health_status 533cd8e0c3b8b7f910f15cef49e7f879ab46c12ce646a35628c15f13725eb500 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent)
Jan 20 14:41:01 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:41:01 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:41:01 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:41:01.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:41:01 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:41:01 compute-1 nova_compute[225855]: 2026-01-20 14:41:01.352 225859 DEBUG nova.network.neutron [-] [instance: a96ccadd-ac1d-4040-8bcc-bebb460ee233] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:41:01 compute-1 nova_compute[225855]: 2026-01-20 14:41:01.372 225859 INFO nova.compute.manager [-] [instance: a96ccadd-ac1d-4040-8bcc-bebb460ee233] Took 1.70 seconds to deallocate network for instance.
Jan 20 14:41:01 compute-1 nova_compute[225855]: 2026-01-20 14:41:01.411 225859 DEBUG oslo_concurrency.lockutils [None req-61fb8fa8-a6ad-49b5-8d79-1cf27e5804e2 16c05e1ac16f428bab6b36346856235e b50ce2f25e8943e28ddf8bf69c721e75 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:41:01 compute-1 nova_compute[225855]: 2026-01-20 14:41:01.412 225859 DEBUG oslo_concurrency.lockutils [None req-61fb8fa8-a6ad-49b5-8d79-1cf27e5804e2 16c05e1ac16f428bab6b36346856235e b50ce2f25e8943e28ddf8bf69c721e75 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:41:01 compute-1 nova_compute[225855]: 2026-01-20 14:41:01.419 225859 DEBUG nova.compute.manager [req-8848f8c5-268b-40f6-8157-e55460de6431 req-631e4395-def5-4278-a6a4-121d4c2de87e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: a96ccadd-ac1d-4040-8bcc-bebb460ee233] Received event network-vif-deleted-19a89daa-770c-4c3f-970c-a9a462503b06 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:41:01 compute-1 nova_compute[225855]: 2026-01-20 14:41:01.465 225859 DEBUG oslo_concurrency.processutils [None req-61fb8fa8-a6ad-49b5-8d79-1cf27e5804e2 16c05e1ac16f428bab6b36346856235e b50ce2f25e8943e28ddf8bf69c721e75 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:41:01 compute-1 nova_compute[225855]: 2026-01-20 14:41:01.903 225859 DEBUG nova.compute.manager [req-ea29a9fe-b5eb-4759-8019-065d6deed0e3 req-de6e0c84-ad56-4ed0-a140-8f7b60b4983d 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: a96ccadd-ac1d-4040-8bcc-bebb460ee233] Received event network-vif-unplugged-19a89daa-770c-4c3f-970c-a9a462503b06 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:41:01 compute-1 nova_compute[225855]: 2026-01-20 14:41:01.903 225859 DEBUG oslo_concurrency.lockutils [req-ea29a9fe-b5eb-4759-8019-065d6deed0e3 req-de6e0c84-ad56-4ed0-a140-8f7b60b4983d 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "a96ccadd-ac1d-4040-8bcc-bebb460ee233-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:41:01 compute-1 nova_compute[225855]: 2026-01-20 14:41:01.904 225859 DEBUG oslo_concurrency.lockutils [req-ea29a9fe-b5eb-4759-8019-065d6deed0e3 req-de6e0c84-ad56-4ed0-a140-8f7b60b4983d 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "a96ccadd-ac1d-4040-8bcc-bebb460ee233-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:41:01 compute-1 nova_compute[225855]: 2026-01-20 14:41:01.904 225859 DEBUG oslo_concurrency.lockutils [req-ea29a9fe-b5eb-4759-8019-065d6deed0e3 req-de6e0c84-ad56-4ed0-a140-8f7b60b4983d 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "a96ccadd-ac1d-4040-8bcc-bebb460ee233-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:41:01 compute-1 nova_compute[225855]: 2026-01-20 14:41:01.904 225859 DEBUG nova.compute.manager [req-ea29a9fe-b5eb-4759-8019-065d6deed0e3 req-de6e0c84-ad56-4ed0-a140-8f7b60b4983d 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: a96ccadd-ac1d-4040-8bcc-bebb460ee233] No waiting events found dispatching network-vif-unplugged-19a89daa-770c-4c3f-970c-a9a462503b06 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:41:01 compute-1 nova_compute[225855]: 2026-01-20 14:41:01.904 225859 WARNING nova.compute.manager [req-ea29a9fe-b5eb-4759-8019-065d6deed0e3 req-de6e0c84-ad56-4ed0-a140-8f7b60b4983d 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: a96ccadd-ac1d-4040-8bcc-bebb460ee233] Received unexpected event network-vif-unplugged-19a89daa-770c-4c3f-970c-a9a462503b06 for instance with vm_state deleted and task_state None.
Jan 20 14:41:01 compute-1 nova_compute[225855]: 2026-01-20 14:41:01.905 225859 DEBUG nova.compute.manager [req-ea29a9fe-b5eb-4759-8019-065d6deed0e3 req-de6e0c84-ad56-4ed0-a140-8f7b60b4983d 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: a96ccadd-ac1d-4040-8bcc-bebb460ee233] Received event network-vif-plugged-19a89daa-770c-4c3f-970c-a9a462503b06 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:41:01 compute-1 nova_compute[225855]: 2026-01-20 14:41:01.905 225859 DEBUG oslo_concurrency.lockutils [req-ea29a9fe-b5eb-4759-8019-065d6deed0e3 req-de6e0c84-ad56-4ed0-a140-8f7b60b4983d 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "a96ccadd-ac1d-4040-8bcc-bebb460ee233-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:41:01 compute-1 nova_compute[225855]: 2026-01-20 14:41:01.905 225859 DEBUG oslo_concurrency.lockutils [req-ea29a9fe-b5eb-4759-8019-065d6deed0e3 req-de6e0c84-ad56-4ed0-a140-8f7b60b4983d 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "a96ccadd-ac1d-4040-8bcc-bebb460ee233-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:41:01 compute-1 nova_compute[225855]: 2026-01-20 14:41:01.905 225859 DEBUG oslo_concurrency.lockutils [req-ea29a9fe-b5eb-4759-8019-065d6deed0e3 req-de6e0c84-ad56-4ed0-a140-8f7b60b4983d 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "a96ccadd-ac1d-4040-8bcc-bebb460ee233-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:41:01 compute-1 nova_compute[225855]: 2026-01-20 14:41:01.906 225859 DEBUG nova.compute.manager [req-ea29a9fe-b5eb-4759-8019-065d6deed0e3 req-de6e0c84-ad56-4ed0-a140-8f7b60b4983d 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: a96ccadd-ac1d-4040-8bcc-bebb460ee233] No waiting events found dispatching network-vif-plugged-19a89daa-770c-4c3f-970c-a9a462503b06 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:41:01 compute-1 nova_compute[225855]: 2026-01-20 14:41:01.906 225859 WARNING nova.compute.manager [req-ea29a9fe-b5eb-4759-8019-065d6deed0e3 req-de6e0c84-ad56-4ed0-a140-8f7b60b4983d 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: a96ccadd-ac1d-4040-8bcc-bebb460ee233] Received unexpected event network-vif-plugged-19a89daa-770c-4c3f-970c-a9a462503b06 for instance with vm_state deleted and task_state None.
Jan 20 14:41:01 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:41:01 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3514088597' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:41:01 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/1609353612' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:41:02 compute-1 nova_compute[225855]: 2026-01-20 14:41:02.004 225859 DEBUG oslo_concurrency.processutils [None req-61fb8fa8-a6ad-49b5-8d79-1cf27e5804e2 16c05e1ac16f428bab6b36346856235e b50ce2f25e8943e28ddf8bf69c721e75 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.539s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:41:02 compute-1 nova_compute[225855]: 2026-01-20 14:41:02.014 225859 DEBUG nova.compute.provider_tree [None req-61fb8fa8-a6ad-49b5-8d79-1cf27e5804e2 16c05e1ac16f428bab6b36346856235e b50ce2f25e8943e28ddf8bf69c721e75 - - default default] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:41:02 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:41:02 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:41:02 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:41:02.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:41:02 compute-1 nova_compute[225855]: 2026-01-20 14:41:02.030 225859 DEBUG nova.scheduler.client.report [None req-61fb8fa8-a6ad-49b5-8d79-1cf27e5804e2 16c05e1ac16f428bab6b36346856235e b50ce2f25e8943e28ddf8bf69c721e75 - - default default] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:41:02 compute-1 nova_compute[225855]: 2026-01-20 14:41:02.056 225859 DEBUG oslo_concurrency.lockutils [None req-61fb8fa8-a6ad-49b5-8d79-1cf27e5804e2 16c05e1ac16f428bab6b36346856235e b50ce2f25e8943e28ddf8bf69c721e75 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.644s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:41:02 compute-1 nova_compute[225855]: 2026-01-20 14:41:02.086 225859 INFO nova.scheduler.client.report [None req-61fb8fa8-a6ad-49b5-8d79-1cf27e5804e2 16c05e1ac16f428bab6b36346856235e b50ce2f25e8943e28ddf8bf69c721e75 - - default default] Deleted allocations for instance a96ccadd-ac1d-4040-8bcc-bebb460ee233
Jan 20 14:41:02 compute-1 nova_compute[225855]: 2026-01-20 14:41:02.160 225859 DEBUG oslo_concurrency.lockutils [None req-61fb8fa8-a6ad-49b5-8d79-1cf27e5804e2 16c05e1ac16f428bab6b36346856235e b50ce2f25e8943e28ddf8bf69c721e75 - - default default] Lock "a96ccadd-ac1d-4040-8bcc-bebb460ee233" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.787s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:41:03 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:41:03 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:41:03 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:41:03.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:41:03 compute-1 ceph-mon[81775]: pgmap v1580: 321 pgs: 321 active+clean; 114 MiB data, 630 MiB used, 20 GiB / 21 GiB avail; 50 KiB/s rd, 948 KiB/s wr, 75 op/s
Jan 20 14:41:03 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/1601011872' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:41:03 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/3514088597' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:41:03 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/2739494036' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:41:03 compute-1 nova_compute[225855]: 2026-01-20 14:41:03.659 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:41:04 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:41:04 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:41:04 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:41:04.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:41:04 compute-1 ceph-mon[81775]: pgmap v1581: 321 pgs: 321 active+clean; 111 MiB data, 624 MiB used, 20 GiB / 21 GiB avail; 38 KiB/s rd, 1.8 MiB/s wr, 57 op/s
Jan 20 14:41:04 compute-1 nova_compute[225855]: 2026-01-20 14:41:04.935 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:41:05 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:41:05 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:41:05 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:41:05.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:41:05 compute-1 nova_compute[225855]: 2026-01-20 14:41:05.100 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:41:05 compute-1 sudo[253598]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:41:05 compute-1 sudo[253598]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:41:05 compute-1 sudo[253598]: pam_unix(sudo:session): session closed for user root
Jan 20 14:41:05 compute-1 sudo[253623]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:41:05 compute-1 sudo[253623]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:41:05 compute-1 sudo[253623]: pam_unix(sudo:session): session closed for user root
Jan 20 14:41:06 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:41:06 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:41:06 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:41:06.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:41:06 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:41:06 compute-1 ceph-mon[81775]: pgmap v1582: 321 pgs: 321 active+clean; 114 MiB data, 622 MiB used, 20 GiB / 21 GiB avail; 170 KiB/s rd, 2.7 MiB/s wr, 90 op/s
Jan 20 14:41:06 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/1173221353' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:41:07 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:41:07 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:41:07 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:41:07.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:41:08 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:41:08 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:41:08 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:41:08.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:41:08 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/2638918246' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:41:08 compute-1 nova_compute[225855]: 2026-01-20 14:41:08.645 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:41:08 compute-1 nova_compute[225855]: 2026-01-20 14:41:08.662 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:41:08 compute-1 nova_compute[225855]: 2026-01-20 14:41:08.846 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:41:09 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:41:09 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:41:09 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:41:09.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:41:09 compute-1 ceph-mon[81775]: pgmap v1583: 321 pgs: 321 active+clean; 134 MiB data, 632 MiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 3.6 MiB/s wr, 126 op/s
Jan 20 14:41:10 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:41:10 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:41:10 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:41:10.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:41:10 compute-1 nova_compute[225855]: 2026-01-20 14:41:10.103 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:41:10 compute-1 ceph-mon[81775]: pgmap v1584: 321 pgs: 321 active+clean; 134 MiB data, 632 MiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 3.6 MiB/s wr, 138 op/s
Jan 20 14:41:11 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:41:11 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:41:11 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:41:11.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:41:11 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:41:11 compute-1 nova_compute[225855]: 2026-01-20 14:41:11.991 225859 DEBUG oslo_concurrency.lockutils [None req-66e94ac8-2e6a-4837-b9f0-dea06fd86b45 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Acquiring lock "10349dde-fb60-48ba-bc7b-42180c5eb49e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:41:11 compute-1 nova_compute[225855]: 2026-01-20 14:41:11.992 225859 DEBUG oslo_concurrency.lockutils [None req-66e94ac8-2e6a-4837-b9f0-dea06fd86b45 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Lock "10349dde-fb60-48ba-bc7b-42180c5eb49e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:41:12 compute-1 nova_compute[225855]: 2026-01-20 14:41:12.009 225859 DEBUG nova.compute.manager [None req-66e94ac8-2e6a-4837-b9f0-dea06fd86b45 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] [instance: 10349dde-fb60-48ba-bc7b-42180c5eb49e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 20 14:41:12 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:41:12 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:41:12 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:41:12.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:41:12 compute-1 nova_compute[225855]: 2026-01-20 14:41:12.081 225859 DEBUG oslo_concurrency.lockutils [None req-66e94ac8-2e6a-4837-b9f0-dea06fd86b45 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:41:12 compute-1 nova_compute[225855]: 2026-01-20 14:41:12.082 225859 DEBUG oslo_concurrency.lockutils [None req-66e94ac8-2e6a-4837-b9f0-dea06fd86b45 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:41:12 compute-1 nova_compute[225855]: 2026-01-20 14:41:12.090 225859 DEBUG nova.virt.hardware [None req-66e94ac8-2e6a-4837-b9f0-dea06fd86b45 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 20 14:41:12 compute-1 nova_compute[225855]: 2026-01-20 14:41:12.090 225859 INFO nova.compute.claims [None req-66e94ac8-2e6a-4837-b9f0-dea06fd86b45 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] [instance: 10349dde-fb60-48ba-bc7b-42180c5eb49e] Claim successful on node compute-1.ctlplane.example.com
Jan 20 14:41:12 compute-1 nova_compute[225855]: 2026-01-20 14:41:12.312 225859 DEBUG oslo_concurrency.processutils [None req-66e94ac8-2e6a-4837-b9f0-dea06fd86b45 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:41:12 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:41:12 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/228072690' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:41:12 compute-1 ceph-mon[81775]: pgmap v1585: 321 pgs: 321 active+clean; 134 MiB data, 632 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 3.4 MiB/s wr, 157 op/s
Jan 20 14:41:12 compute-1 nova_compute[225855]: 2026-01-20 14:41:12.837 225859 DEBUG oslo_concurrency.processutils [None req-66e94ac8-2e6a-4837-b9f0-dea06fd86b45 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.525s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:41:12 compute-1 nova_compute[225855]: 2026-01-20 14:41:12.842 225859 DEBUG nova.compute.provider_tree [None req-66e94ac8-2e6a-4837-b9f0-dea06fd86b45 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:41:12 compute-1 nova_compute[225855]: 2026-01-20 14:41:12.857 225859 DEBUG nova.scheduler.client.report [None req-66e94ac8-2e6a-4837-b9f0-dea06fd86b45 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:41:12 compute-1 nova_compute[225855]: 2026-01-20 14:41:12.880 225859 DEBUG oslo_concurrency.lockutils [None req-66e94ac8-2e6a-4837-b9f0-dea06fd86b45 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.798s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:41:12 compute-1 nova_compute[225855]: 2026-01-20 14:41:12.881 225859 DEBUG nova.compute.manager [None req-66e94ac8-2e6a-4837-b9f0-dea06fd86b45 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] [instance: 10349dde-fb60-48ba-bc7b-42180c5eb49e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 20 14:41:12 compute-1 nova_compute[225855]: 2026-01-20 14:41:12.924 225859 DEBUG nova.compute.manager [None req-66e94ac8-2e6a-4837-b9f0-dea06fd86b45 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] [instance: 10349dde-fb60-48ba-bc7b-42180c5eb49e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 20 14:41:12 compute-1 nova_compute[225855]: 2026-01-20 14:41:12.924 225859 DEBUG nova.network.neutron [None req-66e94ac8-2e6a-4837-b9f0-dea06fd86b45 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] [instance: 10349dde-fb60-48ba-bc7b-42180c5eb49e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 20 14:41:12 compute-1 nova_compute[225855]: 2026-01-20 14:41:12.947 225859 INFO nova.virt.libvirt.driver [None req-66e94ac8-2e6a-4837-b9f0-dea06fd86b45 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] [instance: 10349dde-fb60-48ba-bc7b-42180c5eb49e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 20 14:41:12 compute-1 nova_compute[225855]: 2026-01-20 14:41:12.963 225859 DEBUG nova.compute.manager [None req-66e94ac8-2e6a-4837-b9f0-dea06fd86b45 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] [instance: 10349dde-fb60-48ba-bc7b-42180c5eb49e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 20 14:41:13 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:41:13 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:41:13 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:41:13.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:41:13 compute-1 nova_compute[225855]: 2026-01-20 14:41:13.122 225859 DEBUG nova.compute.manager [None req-66e94ac8-2e6a-4837-b9f0-dea06fd86b45 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] [instance: 10349dde-fb60-48ba-bc7b-42180c5eb49e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 20 14:41:13 compute-1 nova_compute[225855]: 2026-01-20 14:41:13.124 225859 DEBUG nova.virt.libvirt.driver [None req-66e94ac8-2e6a-4837-b9f0-dea06fd86b45 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] [instance: 10349dde-fb60-48ba-bc7b-42180c5eb49e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 20 14:41:13 compute-1 nova_compute[225855]: 2026-01-20 14:41:13.124 225859 INFO nova.virt.libvirt.driver [None req-66e94ac8-2e6a-4837-b9f0-dea06fd86b45 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] [instance: 10349dde-fb60-48ba-bc7b-42180c5eb49e] Creating image(s)
Jan 20 14:41:13 compute-1 nova_compute[225855]: 2026-01-20 14:41:13.148 225859 DEBUG nova.storage.rbd_utils [None req-66e94ac8-2e6a-4837-b9f0-dea06fd86b45 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] rbd image 10349dde-fb60-48ba-bc7b-42180c5eb49e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:41:13 compute-1 nova_compute[225855]: 2026-01-20 14:41:13.175 225859 DEBUG nova.storage.rbd_utils [None req-66e94ac8-2e6a-4837-b9f0-dea06fd86b45 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] rbd image 10349dde-fb60-48ba-bc7b-42180c5eb49e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:41:13 compute-1 nova_compute[225855]: 2026-01-20 14:41:13.200 225859 DEBUG nova.storage.rbd_utils [None req-66e94ac8-2e6a-4837-b9f0-dea06fd86b45 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] rbd image 10349dde-fb60-48ba-bc7b-42180c5eb49e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:41:13 compute-1 nova_compute[225855]: 2026-01-20 14:41:13.204 225859 DEBUG oslo_concurrency.processutils [None req-66e94ac8-2e6a-4837-b9f0-dea06fd86b45 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:41:13 compute-1 nova_compute[225855]: 2026-01-20 14:41:13.270 225859 DEBUG oslo_concurrency.processutils [None req-66e94ac8-2e6a-4837-b9f0-dea06fd86b45 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:41:13 compute-1 nova_compute[225855]: 2026-01-20 14:41:13.271 225859 DEBUG oslo_concurrency.lockutils [None req-66e94ac8-2e6a-4837-b9f0-dea06fd86b45 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Acquiring lock "82d5c1918fd7c974214c7a48c1793a7a82560462" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:41:13 compute-1 nova_compute[225855]: 2026-01-20 14:41:13.271 225859 DEBUG oslo_concurrency.lockutils [None req-66e94ac8-2e6a-4837-b9f0-dea06fd86b45 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:41:13 compute-1 nova_compute[225855]: 2026-01-20 14:41:13.272 225859 DEBUG oslo_concurrency.lockutils [None req-66e94ac8-2e6a-4837-b9f0-dea06fd86b45 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:41:13 compute-1 nova_compute[225855]: 2026-01-20 14:41:13.296 225859 DEBUG nova.storage.rbd_utils [None req-66e94ac8-2e6a-4837-b9f0-dea06fd86b45 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] rbd image 10349dde-fb60-48ba-bc7b-42180c5eb49e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:41:13 compute-1 nova_compute[225855]: 2026-01-20 14:41:13.300 225859 DEBUG oslo_concurrency.processutils [None req-66e94ac8-2e6a-4837-b9f0-dea06fd86b45 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 10349dde-fb60-48ba-bc7b-42180c5eb49e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:41:13 compute-1 nova_compute[225855]: 2026-01-20 14:41:13.384 225859 DEBUG nova.policy [None req-66e94ac8-2e6a-4837-b9f0-dea06fd86b45 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'c8a9fb458d27434495a77a94827b6097', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'e3f93fd4b2154dda9f38e62334904303', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 20 14:41:13 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 20 14:41:13 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3266538832' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 14:41:13 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 20 14:41:13 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3266538832' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 14:41:13 compute-1 nova_compute[225855]: 2026-01-20 14:41:13.627 225859 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1768920058.624851, a96ccadd-ac1d-4040-8bcc-bebb460ee233 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:41:13 compute-1 nova_compute[225855]: 2026-01-20 14:41:13.628 225859 INFO nova.compute.manager [-] [instance: a96ccadd-ac1d-4040-8bcc-bebb460ee233] VM Stopped (Lifecycle Event)
Jan 20 14:41:13 compute-1 nova_compute[225855]: 2026-01-20 14:41:13.653 225859 DEBUG nova.compute.manager [None req-d96f56bf-bd82-4598-ab03-693ba0e99081 - - - - - -] [instance: a96ccadd-ac1d-4040-8bcc-bebb460ee233] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:41:13 compute-1 nova_compute[225855]: 2026-01-20 14:41:13.664 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:41:14 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:41:14 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:41:14 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:41:14.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:41:14 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/228072690' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:41:14 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/3266538832' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 14:41:14 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/3266538832' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 14:41:14 compute-1 nova_compute[225855]: 2026-01-20 14:41:14.490 225859 DEBUG nova.network.neutron [None req-66e94ac8-2e6a-4837-b9f0-dea06fd86b45 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] [instance: 10349dde-fb60-48ba-bc7b-42180c5eb49e] Successfully created port: 607e59a4-2a6b-424a-9413-be318079781e _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 20 14:41:14 compute-1 nova_compute[225855]: 2026-01-20 14:41:14.571 225859 DEBUG oslo_concurrency.processutils [None req-66e94ac8-2e6a-4837-b9f0-dea06fd86b45 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 10349dde-fb60-48ba-bc7b-42180c5eb49e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.272s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:41:14 compute-1 nova_compute[225855]: 2026-01-20 14:41:14.643 225859 DEBUG nova.storage.rbd_utils [None req-66e94ac8-2e6a-4837-b9f0-dea06fd86b45 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] resizing rbd image 10349dde-fb60-48ba-bc7b-42180c5eb49e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 20 14:41:14 compute-1 nova_compute[225855]: 2026-01-20 14:41:14.752 225859 DEBUG nova.objects.instance [None req-66e94ac8-2e6a-4837-b9f0-dea06fd86b45 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Lazy-loading 'migration_context' on Instance uuid 10349dde-fb60-48ba-bc7b-42180c5eb49e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:41:14 compute-1 nova_compute[225855]: 2026-01-20 14:41:14.768 225859 DEBUG nova.virt.libvirt.driver [None req-66e94ac8-2e6a-4837-b9f0-dea06fd86b45 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] [instance: 10349dde-fb60-48ba-bc7b-42180c5eb49e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 20 14:41:14 compute-1 nova_compute[225855]: 2026-01-20 14:41:14.768 225859 DEBUG nova.virt.libvirt.driver [None req-66e94ac8-2e6a-4837-b9f0-dea06fd86b45 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] [instance: 10349dde-fb60-48ba-bc7b-42180c5eb49e] Ensure instance console log exists: /var/lib/nova/instances/10349dde-fb60-48ba-bc7b-42180c5eb49e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 20 14:41:14 compute-1 nova_compute[225855]: 2026-01-20 14:41:14.769 225859 DEBUG oslo_concurrency.lockutils [None req-66e94ac8-2e6a-4837-b9f0-dea06fd86b45 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:41:14 compute-1 nova_compute[225855]: 2026-01-20 14:41:14.769 225859 DEBUG oslo_concurrency.lockutils [None req-66e94ac8-2e6a-4837-b9f0-dea06fd86b45 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:41:14 compute-1 nova_compute[225855]: 2026-01-20 14:41:14.769 225859 DEBUG oslo_concurrency.lockutils [None req-66e94ac8-2e6a-4837-b9f0-dea06fd86b45 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:41:15 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:41:15 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:41:15 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:41:15.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:41:15 compute-1 nova_compute[225855]: 2026-01-20 14:41:15.104 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:41:15 compute-1 ceph-mon[81775]: pgmap v1586: 321 pgs: 321 active+clean; 134 MiB data, 632 MiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 2.6 MiB/s wr, 163 op/s
Jan 20 14:41:15 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/3966429821' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:41:15 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/2521530103' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:41:15 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/2225346245' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:41:15 compute-1 nova_compute[225855]: 2026-01-20 14:41:15.387 225859 DEBUG nova.network.neutron [None req-66e94ac8-2e6a-4837-b9f0-dea06fd86b45 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] [instance: 10349dde-fb60-48ba-bc7b-42180c5eb49e] Successfully updated port: 607e59a4-2a6b-424a-9413-be318079781e _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 20 14:41:15 compute-1 nova_compute[225855]: 2026-01-20 14:41:15.408 225859 DEBUG oslo_concurrency.lockutils [None req-66e94ac8-2e6a-4837-b9f0-dea06fd86b45 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Acquiring lock "refresh_cache-10349dde-fb60-48ba-bc7b-42180c5eb49e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:41:15 compute-1 nova_compute[225855]: 2026-01-20 14:41:15.409 225859 DEBUG oslo_concurrency.lockutils [None req-66e94ac8-2e6a-4837-b9f0-dea06fd86b45 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Acquired lock "refresh_cache-10349dde-fb60-48ba-bc7b-42180c5eb49e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:41:15 compute-1 nova_compute[225855]: 2026-01-20 14:41:15.409 225859 DEBUG nova.network.neutron [None req-66e94ac8-2e6a-4837-b9f0-dea06fd86b45 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] [instance: 10349dde-fb60-48ba-bc7b-42180c5eb49e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 14:41:15 compute-1 nova_compute[225855]: 2026-01-20 14:41:15.482 225859 DEBUG nova.compute.manager [req-754c8f91-b200-4e56-bc54-423c28a7d0e6 req-17fd8461-a98b-4b1e-ae13-5f4b6853dc27 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 10349dde-fb60-48ba-bc7b-42180c5eb49e] Received event network-changed-607e59a4-2a6b-424a-9413-be318079781e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:41:15 compute-1 nova_compute[225855]: 2026-01-20 14:41:15.483 225859 DEBUG nova.compute.manager [req-754c8f91-b200-4e56-bc54-423c28a7d0e6 req-17fd8461-a98b-4b1e-ae13-5f4b6853dc27 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 10349dde-fb60-48ba-bc7b-42180c5eb49e] Refreshing instance network info cache due to event network-changed-607e59a4-2a6b-424a-9413-be318079781e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 14:41:15 compute-1 nova_compute[225855]: 2026-01-20 14:41:15.483 225859 DEBUG oslo_concurrency.lockutils [req-754c8f91-b200-4e56-bc54-423c28a7d0e6 req-17fd8461-a98b-4b1e-ae13-5f4b6853dc27 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-10349dde-fb60-48ba-bc7b-42180c5eb49e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:41:15 compute-1 nova_compute[225855]: 2026-01-20 14:41:15.581 225859 DEBUG nova.network.neutron [None req-66e94ac8-2e6a-4837-b9f0-dea06fd86b45 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] [instance: 10349dde-fb60-48ba-bc7b-42180c5eb49e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 20 14:41:16 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:41:16 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:41:16 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:41:16.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:41:16 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:41:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:41:16.401 140354 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:41:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:41:16.402 140354 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:41:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:41:16.402 140354 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:41:16 compute-1 ceph-mon[81775]: pgmap v1587: 321 pgs: 321 active+clean; 153 MiB data, 638 MiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 2.3 MiB/s wr, 181 op/s
Jan 20 14:41:16 compute-1 nova_compute[225855]: 2026-01-20 14:41:16.859 225859 DEBUG nova.network.neutron [None req-66e94ac8-2e6a-4837-b9f0-dea06fd86b45 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] [instance: 10349dde-fb60-48ba-bc7b-42180c5eb49e] Updating instance_info_cache with network_info: [{"id": "607e59a4-2a6b-424a-9413-be318079781e", "address": "fa:16:3e:a3:13:21", "network": {"id": "fc21b99b-4e34-422c-be05-0a440009dac4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-808285772-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3f93fd4b2154dda9f38e62334904303", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap607e59a4-2a", "ovs_interfaceid": "607e59a4-2a6b-424a-9413-be318079781e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:41:16 compute-1 nova_compute[225855]: 2026-01-20 14:41:16.973 225859 DEBUG oslo_concurrency.lockutils [None req-66e94ac8-2e6a-4837-b9f0-dea06fd86b45 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Releasing lock "refresh_cache-10349dde-fb60-48ba-bc7b-42180c5eb49e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:41:16 compute-1 nova_compute[225855]: 2026-01-20 14:41:16.973 225859 DEBUG nova.compute.manager [None req-66e94ac8-2e6a-4837-b9f0-dea06fd86b45 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] [instance: 10349dde-fb60-48ba-bc7b-42180c5eb49e] Instance network_info: |[{"id": "607e59a4-2a6b-424a-9413-be318079781e", "address": "fa:16:3e:a3:13:21", "network": {"id": "fc21b99b-4e34-422c-be05-0a440009dac4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-808285772-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3f93fd4b2154dda9f38e62334904303", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap607e59a4-2a", "ovs_interfaceid": "607e59a4-2a6b-424a-9413-be318079781e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 20 14:41:16 compute-1 nova_compute[225855]: 2026-01-20 14:41:16.974 225859 DEBUG oslo_concurrency.lockutils [req-754c8f91-b200-4e56-bc54-423c28a7d0e6 req-17fd8461-a98b-4b1e-ae13-5f4b6853dc27 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-10349dde-fb60-48ba-bc7b-42180c5eb49e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:41:16 compute-1 nova_compute[225855]: 2026-01-20 14:41:16.974 225859 DEBUG nova.network.neutron [req-754c8f91-b200-4e56-bc54-423c28a7d0e6 req-17fd8461-a98b-4b1e-ae13-5f4b6853dc27 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 10349dde-fb60-48ba-bc7b-42180c5eb49e] Refreshing network info cache for port 607e59a4-2a6b-424a-9413-be318079781e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 14:41:16 compute-1 nova_compute[225855]: 2026-01-20 14:41:16.976 225859 DEBUG nova.virt.libvirt.driver [None req-66e94ac8-2e6a-4837-b9f0-dea06fd86b45 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] [instance: 10349dde-fb60-48ba-bc7b-42180c5eb49e] Start _get_guest_xml network_info=[{"id": "607e59a4-2a6b-424a-9413-be318079781e", "address": "fa:16:3e:a3:13:21", "network": {"id": "fc21b99b-4e34-422c-be05-0a440009dac4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-808285772-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3f93fd4b2154dda9f38e62334904303", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap607e59a4-2a", "ovs_interfaceid": "607e59a4-2a6b-424a-9413-be318079781e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'device_type': 'disk', 'encryption_options': None, 'size': 0, 'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'guest_format': None, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'image_id': 'a32b3e07-16d8-46fd-9a7b-c242c432fcf9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 20 14:41:16 compute-1 nova_compute[225855]: 2026-01-20 14:41:16.981 225859 WARNING nova.virt.libvirt.driver [None req-66e94ac8-2e6a-4837-b9f0-dea06fd86b45 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 14:41:16 compute-1 nova_compute[225855]: 2026-01-20 14:41:16.987 225859 DEBUG nova.virt.libvirt.host [None req-66e94ac8-2e6a-4837-b9f0-dea06fd86b45 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 20 14:41:16 compute-1 nova_compute[225855]: 2026-01-20 14:41:16.988 225859 DEBUG nova.virt.libvirt.host [None req-66e94ac8-2e6a-4837-b9f0-dea06fd86b45 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 20 14:41:16 compute-1 nova_compute[225855]: 2026-01-20 14:41:16.994 225859 DEBUG nova.virt.libvirt.host [None req-66e94ac8-2e6a-4837-b9f0-dea06fd86b45 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 20 14:41:16 compute-1 nova_compute[225855]: 2026-01-20 14:41:16.995 225859 DEBUG nova.virt.libvirt.host [None req-66e94ac8-2e6a-4837-b9f0-dea06fd86b45 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 20 14:41:16 compute-1 nova_compute[225855]: 2026-01-20 14:41:16.996 225859 DEBUG nova.virt.libvirt.driver [None req-66e94ac8-2e6a-4837-b9f0-dea06fd86b45 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 20 14:41:16 compute-1 nova_compute[225855]: 2026-01-20 14:41:16.996 225859 DEBUG nova.virt.hardware [None req-66e94ac8-2e6a-4837-b9f0-dea06fd86b45 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-20T14:21:55Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='522deaab-a741-4dbb-932d-d8b13a211c33',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 20 14:41:16 compute-1 nova_compute[225855]: 2026-01-20 14:41:16.996 225859 DEBUG nova.virt.hardware [None req-66e94ac8-2e6a-4837-b9f0-dea06fd86b45 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 20 14:41:16 compute-1 nova_compute[225855]: 2026-01-20 14:41:16.997 225859 DEBUG nova.virt.hardware [None req-66e94ac8-2e6a-4837-b9f0-dea06fd86b45 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 20 14:41:16 compute-1 nova_compute[225855]: 2026-01-20 14:41:16.997 225859 DEBUG nova.virt.hardware [None req-66e94ac8-2e6a-4837-b9f0-dea06fd86b45 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 20 14:41:16 compute-1 nova_compute[225855]: 2026-01-20 14:41:16.997 225859 DEBUG nova.virt.hardware [None req-66e94ac8-2e6a-4837-b9f0-dea06fd86b45 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 20 14:41:16 compute-1 nova_compute[225855]: 2026-01-20 14:41:16.997 225859 DEBUG nova.virt.hardware [None req-66e94ac8-2e6a-4837-b9f0-dea06fd86b45 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 20 14:41:16 compute-1 nova_compute[225855]: 2026-01-20 14:41:16.997 225859 DEBUG nova.virt.hardware [None req-66e94ac8-2e6a-4837-b9f0-dea06fd86b45 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 20 14:41:16 compute-1 nova_compute[225855]: 2026-01-20 14:41:16.998 225859 DEBUG nova.virt.hardware [None req-66e94ac8-2e6a-4837-b9f0-dea06fd86b45 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 20 14:41:16 compute-1 nova_compute[225855]: 2026-01-20 14:41:16.998 225859 DEBUG nova.virt.hardware [None req-66e94ac8-2e6a-4837-b9f0-dea06fd86b45 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 20 14:41:16 compute-1 nova_compute[225855]: 2026-01-20 14:41:16.998 225859 DEBUG nova.virt.hardware [None req-66e94ac8-2e6a-4837-b9f0-dea06fd86b45 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 20 14:41:16 compute-1 nova_compute[225855]: 2026-01-20 14:41:16.998 225859 DEBUG nova.virt.hardware [None req-66e94ac8-2e6a-4837-b9f0-dea06fd86b45 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 20 14:41:17 compute-1 nova_compute[225855]: 2026-01-20 14:41:17.000 225859 DEBUG oslo_concurrency.processutils [None req-66e94ac8-2e6a-4837-b9f0-dea06fd86b45 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:41:17 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:41:17 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:41:17 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:41:17.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:41:17 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:41:17 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3958666232' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:41:17 compute-1 nova_compute[225855]: 2026-01-20 14:41:17.517 225859 DEBUG oslo_concurrency.processutils [None req-66e94ac8-2e6a-4837-b9f0-dea06fd86b45 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.517s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:41:17 compute-1 nova_compute[225855]: 2026-01-20 14:41:17.558 225859 DEBUG nova.storage.rbd_utils [None req-66e94ac8-2e6a-4837-b9f0-dea06fd86b45 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] rbd image 10349dde-fb60-48ba-bc7b-42180c5eb49e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:41:17 compute-1 nova_compute[225855]: 2026-01-20 14:41:17.564 225859 DEBUG oslo_concurrency.processutils [None req-66e94ac8-2e6a-4837-b9f0-dea06fd86b45 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:41:17 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/3958666232' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:41:18 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:41:18 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:41:18 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:41:18.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:41:18 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:41:18 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3232206658' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:41:18 compute-1 nova_compute[225855]: 2026-01-20 14:41:18.214 225859 DEBUG oslo_concurrency.processutils [None req-66e94ac8-2e6a-4837-b9f0-dea06fd86b45 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.650s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:41:18 compute-1 nova_compute[225855]: 2026-01-20 14:41:18.216 225859 DEBUG nova.virt.libvirt.vif [None req-66e94ac8-2e6a-4837-b9f0-dea06fd86b45 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T14:41:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-689127716',display_name='tempest-tempest.common.compute-instance-689127716',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-689127716',id=71,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDk9SkW5N7MhrGaZslG18EJ7xoBof9PQa4upjUw+XxfbO5rNOjJYMJtJMRGPfgbl1pwAZZD7LHjNNMRFKVo+T+C8Rnr+HXWsPYQmvPGwjjZ++NXvRdqES1LIbRDiwaFMJQ==',key_name='tempest-keypair-1970360297',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e3f93fd4b2154dda9f38e62334904303',ramdisk_id='',reservation_id='r-2p3ovedc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesTestJSON-305746947',owner_user_name='tempest-AttachInterfacesTestJSON-305746947-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T14:41:13Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c8a9fb458d27434495a77a94827b6097',uuid=10349dde-fb60-48ba-bc7b-42180c5eb49e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "607e59a4-2a6b-424a-9413-be318079781e", "address": "fa:16:3e:a3:13:21", "network": {"id": "fc21b99b-4e34-422c-be05-0a440009dac4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-808285772-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3f93fd4b2154dda9f38e62334904303", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap607e59a4-2a", "ovs_interfaceid": "607e59a4-2a6b-424a-9413-be318079781e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 20 14:41:18 compute-1 nova_compute[225855]: 2026-01-20 14:41:18.217 225859 DEBUG nova.network.os_vif_util [None req-66e94ac8-2e6a-4837-b9f0-dea06fd86b45 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Converting VIF {"id": "607e59a4-2a6b-424a-9413-be318079781e", "address": "fa:16:3e:a3:13:21", "network": {"id": "fc21b99b-4e34-422c-be05-0a440009dac4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-808285772-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3f93fd4b2154dda9f38e62334904303", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap607e59a4-2a", "ovs_interfaceid": "607e59a4-2a6b-424a-9413-be318079781e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:41:18 compute-1 nova_compute[225855]: 2026-01-20 14:41:18.218 225859 DEBUG nova.network.os_vif_util [None req-66e94ac8-2e6a-4837-b9f0-dea06fd86b45 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a3:13:21,bridge_name='br-int',has_traffic_filtering=True,id=607e59a4-2a6b-424a-9413-be318079781e,network=Network(fc21b99b-4e34-422c-be05-0a440009dac4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap607e59a4-2a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:41:18 compute-1 nova_compute[225855]: 2026-01-20 14:41:18.220 225859 DEBUG nova.objects.instance [None req-66e94ac8-2e6a-4837-b9f0-dea06fd86b45 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Lazy-loading 'pci_devices' on Instance uuid 10349dde-fb60-48ba-bc7b-42180c5eb49e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:41:18 compute-1 nova_compute[225855]: 2026-01-20 14:41:18.255 225859 DEBUG nova.virt.libvirt.driver [None req-66e94ac8-2e6a-4837-b9f0-dea06fd86b45 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] [instance: 10349dde-fb60-48ba-bc7b-42180c5eb49e] End _get_guest_xml xml=<domain type="kvm">
Jan 20 14:41:18 compute-1 nova_compute[225855]:   <uuid>10349dde-fb60-48ba-bc7b-42180c5eb49e</uuid>
Jan 20 14:41:18 compute-1 nova_compute[225855]:   <name>instance-00000047</name>
Jan 20 14:41:18 compute-1 nova_compute[225855]:   <memory>131072</memory>
Jan 20 14:41:18 compute-1 nova_compute[225855]:   <vcpu>1</vcpu>
Jan 20 14:41:18 compute-1 nova_compute[225855]:   <metadata>
Jan 20 14:41:18 compute-1 nova_compute[225855]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 14:41:18 compute-1 nova_compute[225855]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 14:41:18 compute-1 nova_compute[225855]:       <nova:name>tempest-tempest.common.compute-instance-689127716</nova:name>
Jan 20 14:41:18 compute-1 nova_compute[225855]:       <nova:creationTime>2026-01-20 14:41:16</nova:creationTime>
Jan 20 14:41:18 compute-1 nova_compute[225855]:       <nova:flavor name="m1.nano">
Jan 20 14:41:18 compute-1 nova_compute[225855]:         <nova:memory>128</nova:memory>
Jan 20 14:41:18 compute-1 nova_compute[225855]:         <nova:disk>1</nova:disk>
Jan 20 14:41:18 compute-1 nova_compute[225855]:         <nova:swap>0</nova:swap>
Jan 20 14:41:18 compute-1 nova_compute[225855]:         <nova:ephemeral>0</nova:ephemeral>
Jan 20 14:41:18 compute-1 nova_compute[225855]:         <nova:vcpus>1</nova:vcpus>
Jan 20 14:41:18 compute-1 nova_compute[225855]:       </nova:flavor>
Jan 20 14:41:18 compute-1 nova_compute[225855]:       <nova:owner>
Jan 20 14:41:18 compute-1 nova_compute[225855]:         <nova:user uuid="c8a9fb458d27434495a77a94827b6097">tempest-AttachInterfacesTestJSON-305746947-project-member</nova:user>
Jan 20 14:41:18 compute-1 nova_compute[225855]:         <nova:project uuid="e3f93fd4b2154dda9f38e62334904303">tempest-AttachInterfacesTestJSON-305746947</nova:project>
Jan 20 14:41:18 compute-1 nova_compute[225855]:       </nova:owner>
Jan 20 14:41:18 compute-1 nova_compute[225855]:       <nova:root type="image" uuid="a32b3e07-16d8-46fd-9a7b-c242c432fcf9"/>
Jan 20 14:41:18 compute-1 nova_compute[225855]:       <nova:ports>
Jan 20 14:41:18 compute-1 nova_compute[225855]:         <nova:port uuid="607e59a4-2a6b-424a-9413-be318079781e">
Jan 20 14:41:18 compute-1 nova_compute[225855]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Jan 20 14:41:18 compute-1 nova_compute[225855]:         </nova:port>
Jan 20 14:41:18 compute-1 nova_compute[225855]:       </nova:ports>
Jan 20 14:41:18 compute-1 nova_compute[225855]:     </nova:instance>
Jan 20 14:41:18 compute-1 nova_compute[225855]:   </metadata>
Jan 20 14:41:18 compute-1 nova_compute[225855]:   <sysinfo type="smbios">
Jan 20 14:41:18 compute-1 nova_compute[225855]:     <system>
Jan 20 14:41:18 compute-1 nova_compute[225855]:       <entry name="manufacturer">RDO</entry>
Jan 20 14:41:18 compute-1 nova_compute[225855]:       <entry name="product">OpenStack Compute</entry>
Jan 20 14:41:18 compute-1 nova_compute[225855]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 14:41:18 compute-1 nova_compute[225855]:       <entry name="serial">10349dde-fb60-48ba-bc7b-42180c5eb49e</entry>
Jan 20 14:41:18 compute-1 nova_compute[225855]:       <entry name="uuid">10349dde-fb60-48ba-bc7b-42180c5eb49e</entry>
Jan 20 14:41:18 compute-1 nova_compute[225855]:       <entry name="family">Virtual Machine</entry>
Jan 20 14:41:18 compute-1 nova_compute[225855]:     </system>
Jan 20 14:41:18 compute-1 nova_compute[225855]:   </sysinfo>
Jan 20 14:41:18 compute-1 nova_compute[225855]:   <os>
Jan 20 14:41:18 compute-1 nova_compute[225855]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 20 14:41:18 compute-1 nova_compute[225855]:     <boot dev="hd"/>
Jan 20 14:41:18 compute-1 nova_compute[225855]:     <smbios mode="sysinfo"/>
Jan 20 14:41:18 compute-1 nova_compute[225855]:   </os>
Jan 20 14:41:18 compute-1 nova_compute[225855]:   <features>
Jan 20 14:41:18 compute-1 nova_compute[225855]:     <acpi/>
Jan 20 14:41:18 compute-1 nova_compute[225855]:     <apic/>
Jan 20 14:41:18 compute-1 nova_compute[225855]:     <vmcoreinfo/>
Jan 20 14:41:18 compute-1 nova_compute[225855]:   </features>
Jan 20 14:41:18 compute-1 nova_compute[225855]:   <clock offset="utc">
Jan 20 14:41:18 compute-1 nova_compute[225855]:     <timer name="pit" tickpolicy="delay"/>
Jan 20 14:41:18 compute-1 nova_compute[225855]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 20 14:41:18 compute-1 nova_compute[225855]:     <timer name="hpet" present="no"/>
Jan 20 14:41:18 compute-1 nova_compute[225855]:   </clock>
Jan 20 14:41:18 compute-1 nova_compute[225855]:   <cpu mode="custom" match="exact">
Jan 20 14:41:18 compute-1 nova_compute[225855]:     <model>Nehalem</model>
Jan 20 14:41:18 compute-1 nova_compute[225855]:     <topology sockets="1" cores="1" threads="1"/>
Jan 20 14:41:18 compute-1 nova_compute[225855]:   </cpu>
Jan 20 14:41:18 compute-1 nova_compute[225855]:   <devices>
Jan 20 14:41:18 compute-1 nova_compute[225855]:     <disk type="network" device="disk">
Jan 20 14:41:18 compute-1 nova_compute[225855]:       <driver type="raw" cache="none"/>
Jan 20 14:41:18 compute-1 nova_compute[225855]:       <source protocol="rbd" name="vms/10349dde-fb60-48ba-bc7b-42180c5eb49e_disk">
Jan 20 14:41:18 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:41:18 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:41:18 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:41:18 compute-1 nova_compute[225855]:       </source>
Jan 20 14:41:18 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 14:41:18 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:41:18 compute-1 nova_compute[225855]:       </auth>
Jan 20 14:41:18 compute-1 nova_compute[225855]:       <target dev="vda" bus="virtio"/>
Jan 20 14:41:18 compute-1 nova_compute[225855]:     </disk>
Jan 20 14:41:18 compute-1 nova_compute[225855]:     <disk type="network" device="cdrom">
Jan 20 14:41:18 compute-1 nova_compute[225855]:       <driver type="raw" cache="none"/>
Jan 20 14:41:18 compute-1 nova_compute[225855]:       <source protocol="rbd" name="vms/10349dde-fb60-48ba-bc7b-42180c5eb49e_disk.config">
Jan 20 14:41:18 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:41:18 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:41:18 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:41:18 compute-1 nova_compute[225855]:       </source>
Jan 20 14:41:18 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 14:41:18 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:41:18 compute-1 nova_compute[225855]:       </auth>
Jan 20 14:41:18 compute-1 nova_compute[225855]:       <target dev="sda" bus="sata"/>
Jan 20 14:41:18 compute-1 nova_compute[225855]:     </disk>
Jan 20 14:41:18 compute-1 nova_compute[225855]:     <interface type="ethernet">
Jan 20 14:41:18 compute-1 nova_compute[225855]:       <mac address="fa:16:3e:a3:13:21"/>
Jan 20 14:41:18 compute-1 nova_compute[225855]:       <model type="virtio"/>
Jan 20 14:41:18 compute-1 nova_compute[225855]:       <driver name="vhost" rx_queue_size="512"/>
Jan 20 14:41:18 compute-1 nova_compute[225855]:       <mtu size="1442"/>
Jan 20 14:41:18 compute-1 nova_compute[225855]:       <target dev="tap607e59a4-2a"/>
Jan 20 14:41:18 compute-1 nova_compute[225855]:     </interface>
Jan 20 14:41:18 compute-1 nova_compute[225855]:     <serial type="pty">
Jan 20 14:41:18 compute-1 nova_compute[225855]:       <log file="/var/lib/nova/instances/10349dde-fb60-48ba-bc7b-42180c5eb49e/console.log" append="off"/>
Jan 20 14:41:18 compute-1 nova_compute[225855]:     </serial>
Jan 20 14:41:18 compute-1 nova_compute[225855]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 20 14:41:18 compute-1 nova_compute[225855]:     <video>
Jan 20 14:41:18 compute-1 nova_compute[225855]:       <model type="virtio"/>
Jan 20 14:41:18 compute-1 nova_compute[225855]:     </video>
Jan 20 14:41:18 compute-1 nova_compute[225855]:     <input type="tablet" bus="usb"/>
Jan 20 14:41:18 compute-1 nova_compute[225855]:     <rng model="virtio">
Jan 20 14:41:18 compute-1 nova_compute[225855]:       <backend model="random">/dev/urandom</backend>
Jan 20 14:41:18 compute-1 nova_compute[225855]:     </rng>
Jan 20 14:41:18 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root"/>
Jan 20 14:41:18 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:41:18 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:41:18 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:41:18 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:41:18 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:41:18 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:41:18 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:41:18 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:41:18 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:41:18 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:41:18 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:41:18 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:41:18 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:41:18 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:41:18 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:41:18 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:41:18 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:41:18 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:41:18 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:41:18 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:41:18 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:41:18 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:41:18 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:41:18 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:41:18 compute-1 nova_compute[225855]:     <controller type="usb" index="0"/>
Jan 20 14:41:18 compute-1 nova_compute[225855]:     <memballoon model="virtio">
Jan 20 14:41:18 compute-1 nova_compute[225855]:       <stats period="10"/>
Jan 20 14:41:18 compute-1 nova_compute[225855]:     </memballoon>
Jan 20 14:41:18 compute-1 nova_compute[225855]:   </devices>
Jan 20 14:41:18 compute-1 nova_compute[225855]: </domain>
Jan 20 14:41:18 compute-1 nova_compute[225855]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 20 14:41:18 compute-1 nova_compute[225855]: 2026-01-20 14:41:18.257 225859 DEBUG nova.compute.manager [None req-66e94ac8-2e6a-4837-b9f0-dea06fd86b45 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] [instance: 10349dde-fb60-48ba-bc7b-42180c5eb49e] Preparing to wait for external event network-vif-plugged-607e59a4-2a6b-424a-9413-be318079781e prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 20 14:41:18 compute-1 nova_compute[225855]: 2026-01-20 14:41:18.258 225859 DEBUG oslo_concurrency.lockutils [None req-66e94ac8-2e6a-4837-b9f0-dea06fd86b45 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Acquiring lock "10349dde-fb60-48ba-bc7b-42180c5eb49e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:41:18 compute-1 nova_compute[225855]: 2026-01-20 14:41:18.258 225859 DEBUG oslo_concurrency.lockutils [None req-66e94ac8-2e6a-4837-b9f0-dea06fd86b45 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Lock "10349dde-fb60-48ba-bc7b-42180c5eb49e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:41:18 compute-1 nova_compute[225855]: 2026-01-20 14:41:18.259 225859 DEBUG oslo_concurrency.lockutils [None req-66e94ac8-2e6a-4837-b9f0-dea06fd86b45 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Lock "10349dde-fb60-48ba-bc7b-42180c5eb49e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:41:18 compute-1 nova_compute[225855]: 2026-01-20 14:41:18.259 225859 DEBUG nova.virt.libvirt.vif [None req-66e94ac8-2e6a-4837-b9f0-dea06fd86b45 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T14:41:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-689127716',display_name='tempest-tempest.common.compute-instance-689127716',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-689127716',id=71,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDk9SkW5N7MhrGaZslG18EJ7xoBof9PQa4upjUw+XxfbO5rNOjJYMJtJMRGPfgbl1pwAZZD7LHjNNMRFKVo+T+C8Rnr+HXWsPYQmvPGwjjZ++NXvRdqES1LIbRDiwaFMJQ==',key_name='tempest-keypair-1970360297',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e3f93fd4b2154dda9f38e62334904303',ramdisk_id='',reservation_id='r-2p3ovedc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesTestJSON-305746947',owner_user_name='tempest-AttachInterfacesTestJSON-305746947-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T14:41:13Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c8a9fb458d27434495a77a94827b6097',uuid=10349dde-fb60-48ba-bc7b-42180c5eb49e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "607e59a4-2a6b-424a-9413-be318079781e", "address": "fa:16:3e:a3:13:21", "network": {"id": "fc21b99b-4e34-422c-be05-0a440009dac4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-808285772-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3f93fd4b2154dda9f38e62334904303", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap607e59a4-2a", "ovs_interfaceid": "607e59a4-2a6b-424a-9413-be318079781e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 20 14:41:18 compute-1 nova_compute[225855]: 2026-01-20 14:41:18.260 225859 DEBUG nova.network.os_vif_util [None req-66e94ac8-2e6a-4837-b9f0-dea06fd86b45 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Converting VIF {"id": "607e59a4-2a6b-424a-9413-be318079781e", "address": "fa:16:3e:a3:13:21", "network": {"id": "fc21b99b-4e34-422c-be05-0a440009dac4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-808285772-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3f93fd4b2154dda9f38e62334904303", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap607e59a4-2a", "ovs_interfaceid": "607e59a4-2a6b-424a-9413-be318079781e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:41:18 compute-1 nova_compute[225855]: 2026-01-20 14:41:18.261 225859 DEBUG nova.network.os_vif_util [None req-66e94ac8-2e6a-4837-b9f0-dea06fd86b45 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a3:13:21,bridge_name='br-int',has_traffic_filtering=True,id=607e59a4-2a6b-424a-9413-be318079781e,network=Network(fc21b99b-4e34-422c-be05-0a440009dac4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap607e59a4-2a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:41:18 compute-1 nova_compute[225855]: 2026-01-20 14:41:18.261 225859 DEBUG os_vif [None req-66e94ac8-2e6a-4837-b9f0-dea06fd86b45 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a3:13:21,bridge_name='br-int',has_traffic_filtering=True,id=607e59a4-2a6b-424a-9413-be318079781e,network=Network(fc21b99b-4e34-422c-be05-0a440009dac4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap607e59a4-2a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 20 14:41:18 compute-1 nova_compute[225855]: 2026-01-20 14:41:18.262 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:41:18 compute-1 nova_compute[225855]: 2026-01-20 14:41:18.262 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:41:18 compute-1 nova_compute[225855]: 2026-01-20 14:41:18.263 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:41:18 compute-1 nova_compute[225855]: 2026-01-20 14:41:18.266 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:41:18 compute-1 nova_compute[225855]: 2026-01-20 14:41:18.267 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap607e59a4-2a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:41:18 compute-1 nova_compute[225855]: 2026-01-20 14:41:18.267 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap607e59a4-2a, col_values=(('external_ids', {'iface-id': '607e59a4-2a6b-424a-9413-be318079781e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a3:13:21', 'vm-uuid': '10349dde-fb60-48ba-bc7b-42180c5eb49e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:41:18 compute-1 nova_compute[225855]: 2026-01-20 14:41:18.269 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:41:18 compute-1 NetworkManager[49104]: <info>  [1768920078.2711] manager: (tap607e59a4-2a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/105)
Jan 20 14:41:18 compute-1 nova_compute[225855]: 2026-01-20 14:41:18.272 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 14:41:18 compute-1 nova_compute[225855]: 2026-01-20 14:41:18.280 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:41:18 compute-1 nova_compute[225855]: 2026-01-20 14:41:18.282 225859 INFO os_vif [None req-66e94ac8-2e6a-4837-b9f0-dea06fd86b45 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a3:13:21,bridge_name='br-int',has_traffic_filtering=True,id=607e59a4-2a6b-424a-9413-be318079781e,network=Network(fc21b99b-4e34-422c-be05-0a440009dac4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap607e59a4-2a')
Jan 20 14:41:18 compute-1 nova_compute[225855]: 2026-01-20 14:41:18.361 225859 DEBUG nova.virt.libvirt.driver [None req-66e94ac8-2e6a-4837-b9f0-dea06fd86b45 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:41:18 compute-1 nova_compute[225855]: 2026-01-20 14:41:18.361 225859 DEBUG nova.virt.libvirt.driver [None req-66e94ac8-2e6a-4837-b9f0-dea06fd86b45 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:41:18 compute-1 nova_compute[225855]: 2026-01-20 14:41:18.361 225859 DEBUG nova.virt.libvirt.driver [None req-66e94ac8-2e6a-4837-b9f0-dea06fd86b45 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] No VIF found with MAC fa:16:3e:a3:13:21, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 20 14:41:18 compute-1 nova_compute[225855]: 2026-01-20 14:41:18.362 225859 INFO nova.virt.libvirt.driver [None req-66e94ac8-2e6a-4837-b9f0-dea06fd86b45 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] [instance: 10349dde-fb60-48ba-bc7b-42180c5eb49e] Using config drive
Jan 20 14:41:18 compute-1 nova_compute[225855]: 2026-01-20 14:41:18.388 225859 DEBUG nova.storage.rbd_utils [None req-66e94ac8-2e6a-4837-b9f0-dea06fd86b45 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] rbd image 10349dde-fb60-48ba-bc7b-42180c5eb49e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:41:18 compute-1 nova_compute[225855]: 2026-01-20 14:41:18.847 225859 INFO nova.virt.libvirt.driver [None req-66e94ac8-2e6a-4837-b9f0-dea06fd86b45 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] [instance: 10349dde-fb60-48ba-bc7b-42180c5eb49e] Creating config drive at /var/lib/nova/instances/10349dde-fb60-48ba-bc7b-42180c5eb49e/disk.config
Jan 20 14:41:18 compute-1 nova_compute[225855]: 2026-01-20 14:41:18.853 225859 DEBUG oslo_concurrency.processutils [None req-66e94ac8-2e6a-4837-b9f0-dea06fd86b45 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/10349dde-fb60-48ba-bc7b-42180c5eb49e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp1l_ef0yz execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:41:18 compute-1 nova_compute[225855]: 2026-01-20 14:41:18.917 225859 DEBUG nova.network.neutron [req-754c8f91-b200-4e56-bc54-423c28a7d0e6 req-17fd8461-a98b-4b1e-ae13-5f4b6853dc27 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 10349dde-fb60-48ba-bc7b-42180c5eb49e] Updated VIF entry in instance network info cache for port 607e59a4-2a6b-424a-9413-be318079781e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 14:41:18 compute-1 nova_compute[225855]: 2026-01-20 14:41:18.920 225859 DEBUG nova.network.neutron [req-754c8f91-b200-4e56-bc54-423c28a7d0e6 req-17fd8461-a98b-4b1e-ae13-5f4b6853dc27 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 10349dde-fb60-48ba-bc7b-42180c5eb49e] Updating instance_info_cache with network_info: [{"id": "607e59a4-2a6b-424a-9413-be318079781e", "address": "fa:16:3e:a3:13:21", "network": {"id": "fc21b99b-4e34-422c-be05-0a440009dac4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-808285772-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3f93fd4b2154dda9f38e62334904303", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap607e59a4-2a", "ovs_interfaceid": "607e59a4-2a6b-424a-9413-be318079781e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:41:18 compute-1 nova_compute[225855]: 2026-01-20 14:41:18.936 225859 DEBUG oslo_concurrency.lockutils [req-754c8f91-b200-4e56-bc54-423c28a7d0e6 req-17fd8461-a98b-4b1e-ae13-5f4b6853dc27 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-10349dde-fb60-48ba-bc7b-42180c5eb49e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:41:18 compute-1 nova_compute[225855]: 2026-01-20 14:41:18.992 225859 DEBUG oslo_concurrency.processutils [None req-66e94ac8-2e6a-4837-b9f0-dea06fd86b45 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/10349dde-fb60-48ba-bc7b-42180c5eb49e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp1l_ef0yz" returned: 0 in 0.139s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:41:19 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:41:19 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:41:19 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:41:19.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:41:19 compute-1 ceph-mon[81775]: pgmap v1588: 321 pgs: 321 active+clean; 195 MiB data, 659 MiB used, 20 GiB / 21 GiB avail; 4.8 MiB/s rd, 3.9 MiB/s wr, 206 op/s
Jan 20 14:41:19 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/3232206658' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:41:19 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/2318344812' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:41:19 compute-1 nova_compute[225855]: 2026-01-20 14:41:19.305 225859 DEBUG nova.storage.rbd_utils [None req-66e94ac8-2e6a-4837-b9f0-dea06fd86b45 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] rbd image 10349dde-fb60-48ba-bc7b-42180c5eb49e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:41:19 compute-1 nova_compute[225855]: 2026-01-20 14:41:19.311 225859 DEBUG oslo_concurrency.processutils [None req-66e94ac8-2e6a-4837-b9f0-dea06fd86b45 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/10349dde-fb60-48ba-bc7b-42180c5eb49e/disk.config 10349dde-fb60-48ba-bc7b-42180c5eb49e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:41:19 compute-1 nova_compute[225855]: 2026-01-20 14:41:19.963 225859 DEBUG oslo_concurrency.processutils [None req-66e94ac8-2e6a-4837-b9f0-dea06fd86b45 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/10349dde-fb60-48ba-bc7b-42180c5eb49e/disk.config 10349dde-fb60-48ba-bc7b-42180c5eb49e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.652s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:41:19 compute-1 nova_compute[225855]: 2026-01-20 14:41:19.964 225859 INFO nova.virt.libvirt.driver [None req-66e94ac8-2e6a-4837-b9f0-dea06fd86b45 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] [instance: 10349dde-fb60-48ba-bc7b-42180c5eb49e] Deleting local config drive /var/lib/nova/instances/10349dde-fb60-48ba-bc7b-42180c5eb49e/disk.config because it was imported into RBD.
Jan 20 14:41:20 compute-1 kernel: tap607e59a4-2a: entered promiscuous mode
Jan 20 14:41:20 compute-1 NetworkManager[49104]: <info>  [1768920080.0250] manager: (tap607e59a4-2a): new Tun device (/org/freedesktop/NetworkManager/Devices/106)
Jan 20 14:41:20 compute-1 nova_compute[225855]: 2026-01-20 14:41:20.030 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:41:20 compute-1 ovn_controller[130490]: 2026-01-20T14:41:20Z|00245|binding|INFO|Claiming lport 607e59a4-2a6b-424a-9413-be318079781e for this chassis.
Jan 20 14:41:20 compute-1 ovn_controller[130490]: 2026-01-20T14:41:20Z|00246|binding|INFO|607e59a4-2a6b-424a-9413-be318079781e: Claiming fa:16:3e:a3:13:21 10.100.0.11
Jan 20 14:41:20 compute-1 nova_compute[225855]: 2026-01-20 14:41:20.036 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:41:20 compute-1 nova_compute[225855]: 2026-01-20 14:41:20.047 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:41:20 compute-1 NetworkManager[49104]: <info>  [1768920080.0476] manager: (patch-br-int-to-provnet-b62c391b-f7a3-4a38-a0df-72ac0383ca74): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/107)
Jan 20 14:41:20 compute-1 NetworkManager[49104]: <info>  [1768920080.0482] manager: (patch-provnet-b62c391b-f7a3-4a38-a0df-72ac0383ca74-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/108)
Jan 20 14:41:20 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:41:20.051 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a3:13:21 10.100.0.11'], port_security=['fa:16:3e:a3:13:21 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '10349dde-fb60-48ba-bc7b-42180c5eb49e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fc21b99b-4e34-422c-be05-0a440009dac4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e3f93fd4b2154dda9f38e62334904303', 'neutron:revision_number': '2', 'neutron:security_group_ids': '52b08fd6-6aa8-4470-b89c-ece04e1c959e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7af6b6bc-3cbd-48be-9f10-23ec011e0426, chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], logical_port=607e59a4-2a6b-424a-9413-be318079781e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:41:20 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:41:20 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:41:20.052 140354 INFO neutron.agent.ovn.metadata.agent [-] Port 607e59a4-2a6b-424a-9413-be318079781e in datapath fc21b99b-4e34-422c-be05-0a440009dac4 bound to our chassis
Jan 20 14:41:20 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:41:20 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:41:20.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:41:20 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:41:20.053 140354 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fc21b99b-4e34-422c-be05-0a440009dac4
Jan 20 14:41:20 compute-1 systemd-udevd[253982]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 14:41:20 compute-1 systemd-machined[194361]: New machine qemu-31-instance-00000047.
Jan 20 14:41:20 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:41:20.063 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[ffa59615-95ad-4046-86cf-aae7f2ced5de]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:41:20 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:41:20.064 140354 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapfc21b99b-41 in ovnmeta-fc21b99b-4e34-422c-be05-0a440009dac4 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 20 14:41:20 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:41:20.065 229707 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapfc21b99b-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 20 14:41:20 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:41:20.066 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[9e906596-dec2-423a-a420-ebd7fa0534f4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:41:20 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:41:20.066 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[b9ced6eb-47a3-4569-bfa2-3ed6f26ccf7c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:41:20 compute-1 NetworkManager[49104]: <info>  [1768920080.0700] device (tap607e59a4-2a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 20 14:41:20 compute-1 NetworkManager[49104]: <info>  [1768920080.0708] device (tap607e59a4-2a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 20 14:41:20 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:41:20.078 140466 DEBUG oslo.privsep.daemon [-] privsep: reply[483ce132-d68b-44c3-9cda-63a286f3bb32]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:41:20 compute-1 systemd[1]: Started Virtual Machine qemu-31-instance-00000047.
Jan 20 14:41:20 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:41:20.093 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[b92f68ed-6aa4-4b5b-b4c1-e5375f178abf]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:41:20 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:41:20.121 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[274a6c53-cbec-47fe-80b1-36bbe2530dca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:41:20 compute-1 NetworkManager[49104]: <info>  [1768920080.1316] manager: (tapfc21b99b-40): new Veth device (/org/freedesktop/NetworkManager/Devices/109)
Jan 20 14:41:20 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:41:20.132 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[40493834-b92d-4d2c-b234-cc149328c0d0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:41:20 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:41:20.162 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[ebd447ba-c0f7-4b8d-95b1-49780b5ca043]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:41:20 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:41:20.165 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[06e04867-5d63-42b7-8198-d3b0a5a9ca06]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:41:20 compute-1 NetworkManager[49104]: <info>  [1768920080.1859] device (tapfc21b99b-40): carrier: link connected
Jan 20 14:41:20 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:41:20.191 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[3410494c-def0-4e0a-b52d-27560d28cea3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:41:20 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:41:20.206 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[1ab85a6a-ab19-4b36-86e3-3ce6c1f841a1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfc21b99b-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b1:5b:d2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 66], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 511911, 'reachable_time': 34363, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 254015, 'error': None, 'target': 'ovnmeta-fc21b99b-4e34-422c-be05-0a440009dac4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:41:20 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:41:20.217 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[0eecda25-8917-4782-b694-a71b63fbf87e]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feb1:5bd2'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 511911, 'tstamp': 511911}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 254016, 'error': None, 'target': 'ovnmeta-fc21b99b-4e34-422c-be05-0a440009dac4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:41:20 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:41:20.230 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[0ed596e8-26ef-41fc-aa56-368ab3d1c7e0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfc21b99b-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b1:5b:d2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 66], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 511911, 'reachable_time': 34363, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 254017, 'error': None, 'target': 'ovnmeta-fc21b99b-4e34-422c-be05-0a440009dac4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:41:20 compute-1 nova_compute[225855]: 2026-01-20 14:41:20.247 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:41:20 compute-1 nova_compute[225855]: 2026-01-20 14:41:20.250 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:41:20 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:41:20.254 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[88dda050-147b-4a67-be37-72987a024285]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:41:20 compute-1 nova_compute[225855]: 2026-01-20 14:41:20.271 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:41:20 compute-1 ovn_controller[130490]: 2026-01-20T14:41:20Z|00247|binding|INFO|Setting lport 607e59a4-2a6b-424a-9413-be318079781e ovn-installed in OVS
Jan 20 14:41:20 compute-1 ovn_controller[130490]: 2026-01-20T14:41:20Z|00248|binding|INFO|Setting lport 607e59a4-2a6b-424a-9413-be318079781e up in Southbound
Jan 20 14:41:20 compute-1 nova_compute[225855]: 2026-01-20 14:41:20.283 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:41:20 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:41:20.301 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[bc998151-a542-4cc8-ae54-01ac2f74bc68]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:41:20 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:41:20.302 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfc21b99b-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:41:20 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:41:20.303 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:41:20 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:41:20.303 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfc21b99b-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:41:20 compute-1 nova_compute[225855]: 2026-01-20 14:41:20.305 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:41:20 compute-1 NetworkManager[49104]: <info>  [1768920080.3060] manager: (tapfc21b99b-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/110)
Jan 20 14:41:20 compute-1 kernel: tapfc21b99b-40: entered promiscuous mode
Jan 20 14:41:20 compute-1 nova_compute[225855]: 2026-01-20 14:41:20.307 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:41:20 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:41:20.309 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfc21b99b-40, col_values=(('external_ids', {'iface-id': '583df905-1d9f-49c1-b209-4b7fad1599f6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:41:20 compute-1 nova_compute[225855]: 2026-01-20 14:41:20.310 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:41:20 compute-1 ovn_controller[130490]: 2026-01-20T14:41:20Z|00249|binding|INFO|Releasing lport 583df905-1d9f-49c1-b209-4b7fad1599f6 from this chassis (sb_readonly=0)
Jan 20 14:41:20 compute-1 nova_compute[225855]: 2026-01-20 14:41:20.310 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:41:20 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:41:20.311 140354 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/fc21b99b-4e34-422c-be05-0a440009dac4.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/fc21b99b-4e34-422c-be05-0a440009dac4.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 20 14:41:20 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:41:20.312 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[06e93ae3-fed4-4eb1-9258-c5d93eee8db5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:41:20 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:41:20.312 140354 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 20 14:41:20 compute-1 ovn_metadata_agent[140349]: global
Jan 20 14:41:20 compute-1 ovn_metadata_agent[140349]:     log         /dev/log local0 debug
Jan 20 14:41:20 compute-1 ovn_metadata_agent[140349]:     log-tag     haproxy-metadata-proxy-fc21b99b-4e34-422c-be05-0a440009dac4
Jan 20 14:41:20 compute-1 ovn_metadata_agent[140349]:     user        root
Jan 20 14:41:20 compute-1 ovn_metadata_agent[140349]:     group       root
Jan 20 14:41:20 compute-1 ovn_metadata_agent[140349]:     maxconn     1024
Jan 20 14:41:20 compute-1 ovn_metadata_agent[140349]:     pidfile     /var/lib/neutron/external/pids/fc21b99b-4e34-422c-be05-0a440009dac4.pid.haproxy
Jan 20 14:41:20 compute-1 ovn_metadata_agent[140349]:     daemon
Jan 20 14:41:20 compute-1 ovn_metadata_agent[140349]: 
Jan 20 14:41:20 compute-1 ovn_metadata_agent[140349]: defaults
Jan 20 14:41:20 compute-1 ovn_metadata_agent[140349]:     log global
Jan 20 14:41:20 compute-1 ovn_metadata_agent[140349]:     mode http
Jan 20 14:41:20 compute-1 ovn_metadata_agent[140349]:     option httplog
Jan 20 14:41:20 compute-1 ovn_metadata_agent[140349]:     option dontlognull
Jan 20 14:41:20 compute-1 ovn_metadata_agent[140349]:     option http-server-close
Jan 20 14:41:20 compute-1 ovn_metadata_agent[140349]:     option forwardfor
Jan 20 14:41:20 compute-1 ovn_metadata_agent[140349]:     retries                 3
Jan 20 14:41:20 compute-1 ovn_metadata_agent[140349]:     timeout http-request    30s
Jan 20 14:41:20 compute-1 ovn_metadata_agent[140349]:     timeout connect         30s
Jan 20 14:41:20 compute-1 ovn_metadata_agent[140349]:     timeout client          32s
Jan 20 14:41:20 compute-1 ovn_metadata_agent[140349]:     timeout server          32s
Jan 20 14:41:20 compute-1 ovn_metadata_agent[140349]:     timeout http-keep-alive 30s
Jan 20 14:41:20 compute-1 ovn_metadata_agent[140349]: 
Jan 20 14:41:20 compute-1 ovn_metadata_agent[140349]: 
Jan 20 14:41:20 compute-1 ovn_metadata_agent[140349]: listen listener
Jan 20 14:41:20 compute-1 ovn_metadata_agent[140349]:     bind 169.254.169.254:80
Jan 20 14:41:20 compute-1 ovn_metadata_agent[140349]:     server metadata /var/lib/neutron/metadata_proxy
Jan 20 14:41:20 compute-1 ovn_metadata_agent[140349]:     http-request add-header X-OVN-Network-ID fc21b99b-4e34-422c-be05-0a440009dac4
Jan 20 14:41:20 compute-1 ovn_metadata_agent[140349]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 20 14:41:20 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:41:20.314 140354 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-fc21b99b-4e34-422c-be05-0a440009dac4', 'env', 'PROCESS_TAG=haproxy-fc21b99b-4e34-422c-be05-0a440009dac4', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/fc21b99b-4e34-422c-be05-0a440009dac4.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 20 14:41:20 compute-1 nova_compute[225855]: 2026-01-20 14:41:20.323 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:41:20 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/906151994' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:41:20 compute-1 ceph-mon[81775]: pgmap v1589: 321 pgs: 321 active+clean; 216 MiB data, 689 MiB used, 20 GiB / 21 GiB avail; 4.5 MiB/s rd, 4.5 MiB/s wr, 245 op/s
Jan 20 14:41:20 compute-1 nova_compute[225855]: 2026-01-20 14:41:20.682 225859 DEBUG nova.compute.manager [req-97f26a88-04a6-4d15-bd2c-927530895d71 req-e771d66a-d08e-42f1-b6b8-c785335a8e35 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 10349dde-fb60-48ba-bc7b-42180c5eb49e] Received event network-vif-plugged-607e59a4-2a6b-424a-9413-be318079781e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:41:20 compute-1 nova_compute[225855]: 2026-01-20 14:41:20.683 225859 DEBUG oslo_concurrency.lockutils [req-97f26a88-04a6-4d15-bd2c-927530895d71 req-e771d66a-d08e-42f1-b6b8-c785335a8e35 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "10349dde-fb60-48ba-bc7b-42180c5eb49e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:41:20 compute-1 nova_compute[225855]: 2026-01-20 14:41:20.683 225859 DEBUG oslo_concurrency.lockutils [req-97f26a88-04a6-4d15-bd2c-927530895d71 req-e771d66a-d08e-42f1-b6b8-c785335a8e35 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "10349dde-fb60-48ba-bc7b-42180c5eb49e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:41:20 compute-1 nova_compute[225855]: 2026-01-20 14:41:20.683 225859 DEBUG oslo_concurrency.lockutils [req-97f26a88-04a6-4d15-bd2c-927530895d71 req-e771d66a-d08e-42f1-b6b8-c785335a8e35 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "10349dde-fb60-48ba-bc7b-42180c5eb49e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:41:20 compute-1 nova_compute[225855]: 2026-01-20 14:41:20.683 225859 DEBUG nova.compute.manager [req-97f26a88-04a6-4d15-bd2c-927530895d71 req-e771d66a-d08e-42f1-b6b8-c785335a8e35 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 10349dde-fb60-48ba-bc7b-42180c5eb49e] Processing event network-vif-plugged-607e59a4-2a6b-424a-9413-be318079781e _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 20 14:41:20 compute-1 podman[254065]: 2026-01-20 14:41:20.63913122 +0000 UTC m=+0.022329573 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 20 14:41:20 compute-1 podman[254065]: 2026-01-20 14:41:20.787185833 +0000 UTC m=+0.170384186 container create 2a35028076fb1ea020c7e21b3e0194e0459a116bbf5b4723a5ca26a751673906 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fc21b99b-4e34-422c-be05-0a440009dac4, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 20 14:41:20 compute-1 systemd[1]: Started libpod-conmon-2a35028076fb1ea020c7e21b3e0194e0459a116bbf5b4723a5ca26a751673906.scope.
Jan 20 14:41:20 compute-1 systemd[1]: Started libcrun container.
Jan 20 14:41:20 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b359ef85c164591e81dbb650ad14fe7eb3bd9cd95784fcc030b2336836287bd/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 20 14:41:20 compute-1 podman[254065]: 2026-01-20 14:41:20.892356892 +0000 UTC m=+0.275555255 container init 2a35028076fb1ea020c7e21b3e0194e0459a116bbf5b4723a5ca26a751673906 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fc21b99b-4e34-422c-be05-0a440009dac4, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:41:20 compute-1 podman[254065]: 2026-01-20 14:41:20.898856586 +0000 UTC m=+0.282054919 container start 2a35028076fb1ea020c7e21b3e0194e0459a116bbf5b4723a5ca26a751673906 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fc21b99b-4e34-422c-be05-0a440009dac4, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Jan 20 14:41:20 compute-1 neutron-haproxy-ovnmeta-fc21b99b-4e34-422c-be05-0a440009dac4[254080]: [NOTICE]   (254084) : New worker (254086) forked
Jan 20 14:41:20 compute-1 neutron-haproxy-ovnmeta-fc21b99b-4e34-422c-be05-0a440009dac4[254080]: [NOTICE]   (254084) : Loading success.
Jan 20 14:41:21 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:41:21 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:41:21 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:41:21.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:41:21 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:41:21 compute-1 nova_compute[225855]: 2026-01-20 14:41:21.767 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:41:22 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:41:22 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:41:22 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:41:22.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:41:22 compute-1 podman[254105]: 2026-01-20 14:41:22.059625251 +0000 UTC m=+0.102798942 container health_status 72be95c12e041eb2cd1a16ad70dfa1391d9f03a5f3180a614ead9459dcfcd46f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Jan 20 14:41:22 compute-1 ceph-mon[81775]: pgmap v1590: 321 pgs: 321 active+clean; 232 MiB data, 707 MiB used, 20 GiB / 21 GiB avail; 4.7 MiB/s rd, 5.7 MiB/s wr, 294 op/s
Jan 20 14:41:22 compute-1 nova_compute[225855]: 2026-01-20 14:41:22.326 225859 DEBUG nova.compute.manager [None req-66e94ac8-2e6a-4837-b9f0-dea06fd86b45 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] [instance: 10349dde-fb60-48ba-bc7b-42180c5eb49e] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 20 14:41:22 compute-1 nova_compute[225855]: 2026-01-20 14:41:22.326 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768920082.3255358, 10349dde-fb60-48ba-bc7b-42180c5eb49e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:41:22 compute-1 nova_compute[225855]: 2026-01-20 14:41:22.327 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 10349dde-fb60-48ba-bc7b-42180c5eb49e] VM Started (Lifecycle Event)
Jan 20 14:41:22 compute-1 nova_compute[225855]: 2026-01-20 14:41:22.330 225859 DEBUG nova.virt.libvirt.driver [None req-66e94ac8-2e6a-4837-b9f0-dea06fd86b45 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] [instance: 10349dde-fb60-48ba-bc7b-42180c5eb49e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 20 14:41:22 compute-1 nova_compute[225855]: 2026-01-20 14:41:22.333 225859 INFO nova.virt.libvirt.driver [-] [instance: 10349dde-fb60-48ba-bc7b-42180c5eb49e] Instance spawned successfully.
Jan 20 14:41:22 compute-1 nova_compute[225855]: 2026-01-20 14:41:22.334 225859 DEBUG nova.virt.libvirt.driver [None req-66e94ac8-2e6a-4837-b9f0-dea06fd86b45 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] [instance: 10349dde-fb60-48ba-bc7b-42180c5eb49e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 20 14:41:22 compute-1 nova_compute[225855]: 2026-01-20 14:41:22.438 225859 DEBUG nova.virt.libvirt.driver [None req-66e94ac8-2e6a-4837-b9f0-dea06fd86b45 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] [instance: 10349dde-fb60-48ba-bc7b-42180c5eb49e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:41:22 compute-1 nova_compute[225855]: 2026-01-20 14:41:22.439 225859 DEBUG nova.virt.libvirt.driver [None req-66e94ac8-2e6a-4837-b9f0-dea06fd86b45 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] [instance: 10349dde-fb60-48ba-bc7b-42180c5eb49e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:41:22 compute-1 nova_compute[225855]: 2026-01-20 14:41:22.439 225859 DEBUG nova.virt.libvirt.driver [None req-66e94ac8-2e6a-4837-b9f0-dea06fd86b45 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] [instance: 10349dde-fb60-48ba-bc7b-42180c5eb49e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:41:22 compute-1 nova_compute[225855]: 2026-01-20 14:41:22.440 225859 DEBUG nova.virt.libvirt.driver [None req-66e94ac8-2e6a-4837-b9f0-dea06fd86b45 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] [instance: 10349dde-fb60-48ba-bc7b-42180c5eb49e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:41:22 compute-1 nova_compute[225855]: 2026-01-20 14:41:22.440 225859 DEBUG nova.virt.libvirt.driver [None req-66e94ac8-2e6a-4837-b9f0-dea06fd86b45 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] [instance: 10349dde-fb60-48ba-bc7b-42180c5eb49e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:41:22 compute-1 nova_compute[225855]: 2026-01-20 14:41:22.441 225859 DEBUG nova.virt.libvirt.driver [None req-66e94ac8-2e6a-4837-b9f0-dea06fd86b45 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] [instance: 10349dde-fb60-48ba-bc7b-42180c5eb49e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:41:22 compute-1 nova_compute[225855]: 2026-01-20 14:41:22.546 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 10349dde-fb60-48ba-bc7b-42180c5eb49e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:41:22 compute-1 nova_compute[225855]: 2026-01-20 14:41:22.550 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 10349dde-fb60-48ba-bc7b-42180c5eb49e] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:41:22 compute-1 nova_compute[225855]: 2026-01-20 14:41:22.603 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 10349dde-fb60-48ba-bc7b-42180c5eb49e] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 14:41:22 compute-1 nova_compute[225855]: 2026-01-20 14:41:22.604 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768920082.326506, 10349dde-fb60-48ba-bc7b-42180c5eb49e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:41:22 compute-1 nova_compute[225855]: 2026-01-20 14:41:22.604 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 10349dde-fb60-48ba-bc7b-42180c5eb49e] VM Paused (Lifecycle Event)
Jan 20 14:41:22 compute-1 nova_compute[225855]: 2026-01-20 14:41:22.613 225859 INFO nova.compute.manager [None req-66e94ac8-2e6a-4837-b9f0-dea06fd86b45 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] [instance: 10349dde-fb60-48ba-bc7b-42180c5eb49e] Took 9.49 seconds to spawn the instance on the hypervisor.
Jan 20 14:41:22 compute-1 nova_compute[225855]: 2026-01-20 14:41:22.614 225859 DEBUG nova.compute.manager [None req-66e94ac8-2e6a-4837-b9f0-dea06fd86b45 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] [instance: 10349dde-fb60-48ba-bc7b-42180c5eb49e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:41:22 compute-1 nova_compute[225855]: 2026-01-20 14:41:22.665 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 10349dde-fb60-48ba-bc7b-42180c5eb49e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:41:22 compute-1 nova_compute[225855]: 2026-01-20 14:41:22.668 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768920082.3298385, 10349dde-fb60-48ba-bc7b-42180c5eb49e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:41:22 compute-1 nova_compute[225855]: 2026-01-20 14:41:22.668 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 10349dde-fb60-48ba-bc7b-42180c5eb49e] VM Resumed (Lifecycle Event)
Jan 20 14:41:22 compute-1 nova_compute[225855]: 2026-01-20 14:41:22.700 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 10349dde-fb60-48ba-bc7b-42180c5eb49e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:41:22 compute-1 nova_compute[225855]: 2026-01-20 14:41:22.703 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 10349dde-fb60-48ba-bc7b-42180c5eb49e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:41:22 compute-1 nova_compute[225855]: 2026-01-20 14:41:22.715 225859 INFO nova.compute.manager [None req-66e94ac8-2e6a-4837-b9f0-dea06fd86b45 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] [instance: 10349dde-fb60-48ba-bc7b-42180c5eb49e] Took 10.67 seconds to build instance.
Jan 20 14:41:22 compute-1 nova_compute[225855]: 2026-01-20 14:41:22.783 225859 DEBUG nova.compute.manager [req-20cbb18e-a79c-4c08-9e97-6ba51a96e498 req-1d29f054-4d6a-4ccf-ad09-a43db0bef1a6 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 10349dde-fb60-48ba-bc7b-42180c5eb49e] Received event network-vif-plugged-607e59a4-2a6b-424a-9413-be318079781e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:41:22 compute-1 nova_compute[225855]: 2026-01-20 14:41:22.783 225859 DEBUG oslo_concurrency.lockutils [req-20cbb18e-a79c-4c08-9e97-6ba51a96e498 req-1d29f054-4d6a-4ccf-ad09-a43db0bef1a6 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "10349dde-fb60-48ba-bc7b-42180c5eb49e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:41:22 compute-1 nova_compute[225855]: 2026-01-20 14:41:22.784 225859 DEBUG oslo_concurrency.lockutils [req-20cbb18e-a79c-4c08-9e97-6ba51a96e498 req-1d29f054-4d6a-4ccf-ad09-a43db0bef1a6 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "10349dde-fb60-48ba-bc7b-42180c5eb49e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:41:22 compute-1 nova_compute[225855]: 2026-01-20 14:41:22.784 225859 DEBUG oslo_concurrency.lockutils [req-20cbb18e-a79c-4c08-9e97-6ba51a96e498 req-1d29f054-4d6a-4ccf-ad09-a43db0bef1a6 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "10349dde-fb60-48ba-bc7b-42180c5eb49e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:41:22 compute-1 nova_compute[225855]: 2026-01-20 14:41:22.784 225859 DEBUG nova.compute.manager [req-20cbb18e-a79c-4c08-9e97-6ba51a96e498 req-1d29f054-4d6a-4ccf-ad09-a43db0bef1a6 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 10349dde-fb60-48ba-bc7b-42180c5eb49e] No waiting events found dispatching network-vif-plugged-607e59a4-2a6b-424a-9413-be318079781e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:41:22 compute-1 nova_compute[225855]: 2026-01-20 14:41:22.784 225859 WARNING nova.compute.manager [req-20cbb18e-a79c-4c08-9e97-6ba51a96e498 req-1d29f054-4d6a-4ccf-ad09-a43db0bef1a6 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 10349dde-fb60-48ba-bc7b-42180c5eb49e] Received unexpected event network-vif-plugged-607e59a4-2a6b-424a-9413-be318079781e for instance with vm_state active and task_state None.
Jan 20 14:41:22 compute-1 nova_compute[225855]: 2026-01-20 14:41:22.837 225859 DEBUG oslo_concurrency.lockutils [None req-66e94ac8-2e6a-4837-b9f0-dea06fd86b45 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Lock "10349dde-fb60-48ba-bc7b-42180c5eb49e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.844s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:41:23 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:41:23 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:41:23 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:41:23.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:41:23 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/1467856985' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:41:23 compute-1 nova_compute[225855]: 2026-01-20 14:41:23.269 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:41:24 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:41:24 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:41:24 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:41:24.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:41:24 compute-1 ceph-mon[81775]: pgmap v1591: 321 pgs: 321 active+clean; 213 MiB data, 706 MiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 5.7 MiB/s wr, 277 op/s
Jan 20 14:41:24 compute-1 nova_compute[225855]: 2026-01-20 14:41:24.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:41:25 compute-1 ovn_controller[130490]: 2026-01-20T14:41:25Z|00250|binding|INFO|Releasing lport 583df905-1d9f-49c1-b209-4b7fad1599f6 from this chassis (sb_readonly=0)
Jan 20 14:41:25 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:41:25 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:41:25 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:41:25.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:41:25 compute-1 nova_compute[225855]: 2026-01-20 14:41:25.160 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:41:25 compute-1 nova_compute[225855]: 2026-01-20 14:41:25.252 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:41:25 compute-1 sudo[254151]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:41:25 compute-1 sudo[254151]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:41:25 compute-1 sudo[254151]: pam_unix(sudo:session): session closed for user root
Jan 20 14:41:25 compute-1 sudo[254176]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:41:25 compute-1 sudo[254176]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:41:25 compute-1 sudo[254176]: pam_unix(sudo:session): session closed for user root
Jan 20 14:41:25 compute-1 ceph-mon[81775]: pgmap v1592: 321 pgs: 321 active+clean; 214 MiB data, 684 MiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 5.7 MiB/s wr, 294 op/s
Jan 20 14:41:26 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:41:26 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:41:26 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:41:26.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:41:26 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:41:26 compute-1 nova_compute[225855]: 2026-01-20 14:41:26.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:41:27 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:41:27 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:41:27 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:41:27.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:41:27 compute-1 nova_compute[225855]: 2026-01-20 14:41:27.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:41:27 compute-1 nova_compute[225855]: 2026-01-20 14:41:27.340 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 14:41:27 compute-1 nova_compute[225855]: 2026-01-20 14:41:27.510 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 20 14:41:27 compute-1 nova_compute[225855]: 2026-01-20 14:41:27.510 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:41:27 compute-1 nova_compute[225855]: 2026-01-20 14:41:27.511 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 14:41:27 compute-1 nova_compute[225855]: 2026-01-20 14:41:27.641 225859 DEBUG nova.compute.manager [req-e82df75d-adc9-42ff-8051-3eb4157c27e1 req-35cdfc53-66e4-41a2-811c-12b17c10ca97 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 10349dde-fb60-48ba-bc7b-42180c5eb49e] Received event network-changed-607e59a4-2a6b-424a-9413-be318079781e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:41:27 compute-1 nova_compute[225855]: 2026-01-20 14:41:27.642 225859 DEBUG nova.compute.manager [req-e82df75d-adc9-42ff-8051-3eb4157c27e1 req-35cdfc53-66e4-41a2-811c-12b17c10ca97 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 10349dde-fb60-48ba-bc7b-42180c5eb49e] Refreshing instance network info cache due to event network-changed-607e59a4-2a6b-424a-9413-be318079781e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 14:41:27 compute-1 nova_compute[225855]: 2026-01-20 14:41:27.642 225859 DEBUG oslo_concurrency.lockutils [req-e82df75d-adc9-42ff-8051-3eb4157c27e1 req-35cdfc53-66e4-41a2-811c-12b17c10ca97 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-10349dde-fb60-48ba-bc7b-42180c5eb49e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:41:27 compute-1 nova_compute[225855]: 2026-01-20 14:41:27.643 225859 DEBUG oslo_concurrency.lockutils [req-e82df75d-adc9-42ff-8051-3eb4157c27e1 req-35cdfc53-66e4-41a2-811c-12b17c10ca97 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-10349dde-fb60-48ba-bc7b-42180c5eb49e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:41:27 compute-1 nova_compute[225855]: 2026-01-20 14:41:27.643 225859 DEBUG nova.network.neutron [req-e82df75d-adc9-42ff-8051-3eb4157c27e1 req-35cdfc53-66e4-41a2-811c-12b17c10ca97 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 10349dde-fb60-48ba-bc7b-42180c5eb49e] Refreshing network info cache for port 607e59a4-2a6b-424a-9413-be318079781e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 14:41:27 compute-1 ceph-mon[81775]: pgmap v1593: 321 pgs: 321 active+clean; 214 MiB data, 684 MiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 5.2 MiB/s wr, 291 op/s
Jan 20 14:41:28 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:41:28 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:41:28 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:41:28.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:41:28 compute-1 nova_compute[225855]: 2026-01-20 14:41:28.271 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:41:28 compute-1 nova_compute[225855]: 2026-01-20 14:41:28.340 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:41:29 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:41:29 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:41:29 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:41:29.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:41:29 compute-1 nova_compute[225855]: 2026-01-20 14:41:29.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:41:29 compute-1 ceph-mon[81775]: pgmap v1594: 321 pgs: 321 active+clean; 214 MiB data, 684 MiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 2.7 MiB/s wr, 242 op/s
Jan 20 14:41:30 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:41:30 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:41:30 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:41:30.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:41:30 compute-1 nova_compute[225855]: 2026-01-20 14:41:30.254 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:41:31 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:41:31 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:41:31 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:41:31.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:41:31 compute-1 nova_compute[225855]: 2026-01-20 14:41:31.164 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:41:31 compute-1 nova_compute[225855]: 2026-01-20 14:41:31.165 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:41:31 compute-1 nova_compute[225855]: 2026-01-20 14:41:31.165 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:41:31 compute-1 nova_compute[225855]: 2026-01-20 14:41:31.165 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 14:41:31 compute-1 nova_compute[225855]: 2026-01-20 14:41:31.166 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:41:31 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:41:31 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:41:31 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2992877323' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:41:31 compute-1 nova_compute[225855]: 2026-01-20 14:41:31.632 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:41:31 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/3340845056' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:41:31 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/2992877323' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:41:31 compute-1 podman[254226]: 2026-01-20 14:41:31.740672433 +0000 UTC m=+0.057040427 container health_status 533cd8e0c3b8b7f910f15cef49e7f879ab46c12ce646a35628c15f13725eb500 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent)
Jan 20 14:41:31 compute-1 nova_compute[225855]: 2026-01-20 14:41:31.967 225859 DEBUG nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] skipping disk for instance-00000047 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 14:41:31 compute-1 nova_compute[225855]: 2026-01-20 14:41:31.968 225859 DEBUG nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] skipping disk for instance-00000047 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 14:41:32 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:41:32 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:41:32 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:41:32.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:41:32 compute-1 nova_compute[225855]: 2026-01-20 14:41:32.153 225859 WARNING nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 14:41:32 compute-1 nova_compute[225855]: 2026-01-20 14:41:32.155 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4339MB free_disk=20.900901794433594GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 14:41:32 compute-1 nova_compute[225855]: 2026-01-20 14:41:32.155 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:41:32 compute-1 nova_compute[225855]: 2026-01-20 14:41:32.156 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:41:32 compute-1 nova_compute[225855]: 2026-01-20 14:41:32.306 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Instance 10349dde-fb60-48ba-bc7b-42180c5eb49e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 20 14:41:32 compute-1 nova_compute[225855]: 2026-01-20 14:41:32.307 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 14:41:32 compute-1 nova_compute[225855]: 2026-01-20 14:41:32.307 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 14:41:32 compute-1 nova_compute[225855]: 2026-01-20 14:41:32.359 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:41:32 compute-1 ceph-mon[81775]: pgmap v1595: 321 pgs: 321 active+clean; 214 MiB data, 684 MiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 1.3 MiB/s wr, 188 op/s
Jan 20 14:41:32 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:41:32 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3706431175' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:41:32 compute-1 nova_compute[225855]: 2026-01-20 14:41:32.815 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:41:32 compute-1 nova_compute[225855]: 2026-01-20 14:41:32.821 225859 DEBUG nova.compute.provider_tree [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:41:33 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:41:33 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:41:33 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:41:33.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:41:33 compute-1 nova_compute[225855]: 2026-01-20 14:41:33.273 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:41:33 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/54478405' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:41:33 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/3706431175' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:41:33 compute-1 nova_compute[225855]: 2026-01-20 14:41:33.788 225859 DEBUG nova.scheduler.client.report [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:41:33 compute-1 nova_compute[225855]: 2026-01-20 14:41:33.823 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 14:41:33 compute-1 nova_compute[225855]: 2026-01-20 14:41:33.824 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.668s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:41:34 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:41:34 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:41:34 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:41:34.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:41:34 compute-1 ceph-mon[81775]: pgmap v1596: 321 pgs: 321 active+clean; 214 MiB data, 684 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 70 KiB/s wr, 159 op/s
Jan 20 14:41:34 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/2759836805' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:41:34 compute-1 nova_compute[225855]: 2026-01-20 14:41:34.825 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:41:34 compute-1 nova_compute[225855]: 2026-01-20 14:41:34.825 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:41:34 compute-1 nova_compute[225855]: 2026-01-20 14:41:34.826 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:41:35 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:41:35 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:41:35 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:41:35.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:41:35 compute-1 nova_compute[225855]: 2026-01-20 14:41:35.256 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:41:35 compute-1 ovn_controller[130490]: 2026-01-20T14:41:35Z|00036|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:a3:13:21 10.100.0.11
Jan 20 14:41:35 compute-1 ovn_controller[130490]: 2026-01-20T14:41:35Z|00037|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:a3:13:21 10.100.0.11
Jan 20 14:41:35 compute-1 ceph-mon[81775]: pgmap v1597: 321 pgs: 321 active+clean; 241 MiB data, 693 MiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 1.6 MiB/s wr, 177 op/s
Jan 20 14:41:36 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:41:36 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:41:36 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:41:36.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:41:36 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:41:37 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:41:37 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:41:37 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:41:37.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:41:37 compute-1 nova_compute[225855]: 2026-01-20 14:41:37.664 225859 DEBUG nova.network.neutron [req-e82df75d-adc9-42ff-8051-3eb4157c27e1 req-35cdfc53-66e4-41a2-811c-12b17c10ca97 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 10349dde-fb60-48ba-bc7b-42180c5eb49e] Updated VIF entry in instance network info cache for port 607e59a4-2a6b-424a-9413-be318079781e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 14:41:37 compute-1 nova_compute[225855]: 2026-01-20 14:41:37.664 225859 DEBUG nova.network.neutron [req-e82df75d-adc9-42ff-8051-3eb4157c27e1 req-35cdfc53-66e4-41a2-811c-12b17c10ca97 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 10349dde-fb60-48ba-bc7b-42180c5eb49e] Updating instance_info_cache with network_info: [{"id": "607e59a4-2a6b-424a-9413-be318079781e", "address": "fa:16:3e:a3:13:21", "network": {"id": "fc21b99b-4e34-422c-be05-0a440009dac4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-808285772-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.203", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3f93fd4b2154dda9f38e62334904303", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap607e59a4-2a", "ovs_interfaceid": "607e59a4-2a6b-424a-9413-be318079781e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:41:37 compute-1 nova_compute[225855]: 2026-01-20 14:41:37.693 225859 DEBUG oslo_concurrency.lockutils [req-e82df75d-adc9-42ff-8051-3eb4157c27e1 req-35cdfc53-66e4-41a2-811c-12b17c10ca97 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-10349dde-fb60-48ba-bc7b-42180c5eb49e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:41:37 compute-1 ceph-mon[81775]: pgmap v1598: 321 pgs: 321 active+clean; 246 MiB data, 721 MiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 2.1 MiB/s wr, 156 op/s
Jan 20 14:41:38 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:41:38 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:41:38 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:41:38.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:41:38 compute-1 nova_compute[225855]: 2026-01-20 14:41:38.275 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:41:38 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/3010398542' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:41:38 compute-1 sudo[254271]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:41:38 compute-1 sudo[254271]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:41:38 compute-1 sudo[254271]: pam_unix(sudo:session): session closed for user root
Jan 20 14:41:38 compute-1 sudo[254296]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:41:38 compute-1 sudo[254296]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:41:38 compute-1 sudo[254296]: pam_unix(sudo:session): session closed for user root
Jan 20 14:41:39 compute-1 sudo[254321]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:41:39 compute-1 sudo[254321]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:41:39 compute-1 sudo[254321]: pam_unix(sudo:session): session closed for user root
Jan 20 14:41:39 compute-1 sudo[254346]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Jan 20 14:41:39 compute-1 sudo[254346]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:41:39 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:41:39 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:41:39 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:41:39.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:41:39 compute-1 ceph-mon[81775]: pgmap v1599: 321 pgs: 321 active+clean; 247 MiB data, 731 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 128 op/s
Jan 20 14:41:40 compute-1 podman[254443]: 2026-01-20 14:41:40.017400582 +0000 UTC m=+0.526744210 container exec 718ebba7a543e42aad7051248d2c7dc014068c35c89c5b87f27b82d4de39c009 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-crash-compute-1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:41:40 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:41:40 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:41:40 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:41:40.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:41:40 compute-1 podman[254443]: 2026-01-20 14:41:40.148391062 +0000 UTC m=+0.657734680 container exec_died 718ebba7a543e42aad7051248d2c7dc014068c35c89c5b87f27b82d4de39c009 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-crash-compute-1, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef)
Jan 20 14:41:40 compute-1 nova_compute[225855]: 2026-01-20 14:41:40.260 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:41:40 compute-1 podman[254597]: 2026-01-20 14:41:40.752215902 +0000 UTC m=+0.061509223 container exec 25e2c3387bc15944c21038272559da5fbf75910d8dd4add0faa995fb4e0f7788 (image=quay.io/ceph/haproxy:2.3, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-haproxy-rgw-default-compute-1-uyeocq)
Jan 20 14:41:40 compute-1 podman[254597]: 2026-01-20 14:41:40.787342597 +0000 UTC m=+0.096635868 container exec_died 25e2c3387bc15944c21038272559da5fbf75910d8dd4add0faa995fb4e0f7788 (image=quay.io/ceph/haproxy:2.3, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-haproxy-rgw-default-compute-1-uyeocq)
Jan 20 14:41:40 compute-1 podman[254664]: 2026-01-20 14:41:40.975883597 +0000 UTC m=+0.052506228 container exec e27b69e4cc956b06482c80498336e112a56122514cd7345d3d4b39a4d206f962 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-keepalived-rgw-default-compute-1-cevitz, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, architecture=x86_64, build-date=2023-02-22T09:23:20, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1793, summary=Provides keepalived on RHEL 9 for Ceph., name=keepalived, version=2.2.4, com.redhat.component=keepalived-container, io.k8s.display-name=Keepalived on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.28.2, io.openshift.tags=Ceph keepalived, vcs-type=git, description=keepalived for Ceph, io.openshift.expose-services=, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc., distribution-scope=public)
Jan 20 14:41:40 compute-1 podman[254664]: 2026-01-20 14:41:40.988208606 +0000 UTC m=+0.064831217 container exec_died e27b69e4cc956b06482c80498336e112a56122514cd7345d3d4b39a4d206f962 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-keepalived-rgw-default-compute-1-cevitz, vcs-type=git, io.openshift.expose-services=, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.openshift.tags=Ceph keepalived, release=1793, distribution-scope=public, description=keepalived for Ceph, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=2.2.4, io.buildah.version=1.28.2, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=keepalived-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, build-date=2023-02-22T09:23:20, io.k8s.display-name=Keepalived on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides keepalived on RHEL 9 for Ceph., vendor=Red Hat, Inc., name=keepalived)
Jan 20 14:41:41 compute-1 sudo[254346]: pam_unix(sudo:session): session closed for user root
Jan 20 14:41:41 compute-1 sudo[254697]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:41:41 compute-1 sudo[254697]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:41:41 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:41:41 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:41:41 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:41:41.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:41:41 compute-1 sudo[254697]: pam_unix(sudo:session): session closed for user root
Jan 20 14:41:41 compute-1 sudo[254722]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:41:41 compute-1 sudo[254722]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:41:41 compute-1 sudo[254722]: pam_unix(sudo:session): session closed for user root
Jan 20 14:41:41 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:41:41 compute-1 sudo[254747]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:41:41 compute-1 sudo[254747]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:41:41 compute-1 sudo[254747]: pam_unix(sudo:session): session closed for user root
Jan 20 14:41:41 compute-1 sudo[254772]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 20 14:41:41 compute-1 sudo[254772]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:41:41 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:41:41 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:41:41 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:41:41 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:41:41 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:41:41 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:41:41 compute-1 sudo[254772]: pam_unix(sudo:session): session closed for user root
Jan 20 14:41:41 compute-1 nova_compute[225855]: 2026-01-20 14:41:41.991 225859 DEBUG nova.compute.manager [req-91a6eb4a-2834-4ef0-8f1b-ac1af5dd43f0 req-0ec7a7ad-eecc-4259-8aec-3b4ac2c938ea 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 10349dde-fb60-48ba-bc7b-42180c5eb49e] Received event network-changed-607e59a4-2a6b-424a-9413-be318079781e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:41:41 compute-1 nova_compute[225855]: 2026-01-20 14:41:41.992 225859 DEBUG nova.compute.manager [req-91a6eb4a-2834-4ef0-8f1b-ac1af5dd43f0 req-0ec7a7ad-eecc-4259-8aec-3b4ac2c938ea 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 10349dde-fb60-48ba-bc7b-42180c5eb49e] Refreshing instance network info cache due to event network-changed-607e59a4-2a6b-424a-9413-be318079781e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 14:41:41 compute-1 nova_compute[225855]: 2026-01-20 14:41:41.993 225859 DEBUG oslo_concurrency.lockutils [req-91a6eb4a-2834-4ef0-8f1b-ac1af5dd43f0 req-0ec7a7ad-eecc-4259-8aec-3b4ac2c938ea 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-10349dde-fb60-48ba-bc7b-42180c5eb49e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:41:41 compute-1 nova_compute[225855]: 2026-01-20 14:41:41.993 225859 DEBUG oslo_concurrency.lockutils [req-91a6eb4a-2834-4ef0-8f1b-ac1af5dd43f0 req-0ec7a7ad-eecc-4259-8aec-3b4ac2c938ea 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-10349dde-fb60-48ba-bc7b-42180c5eb49e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:41:41 compute-1 nova_compute[225855]: 2026-01-20 14:41:41.994 225859 DEBUG nova.network.neutron [req-91a6eb4a-2834-4ef0-8f1b-ac1af5dd43f0 req-0ec7a7ad-eecc-4259-8aec-3b4ac2c938ea 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 10349dde-fb60-48ba-bc7b-42180c5eb49e] Refreshing network info cache for port 607e59a4-2a6b-424a-9413-be318079781e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 14:41:42 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:41:42 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:41:42 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:41:42.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:41:42 compute-1 ceph-mon[81775]: pgmap v1600: 321 pgs: 321 active+clean; 261 MiB data, 746 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 3.4 MiB/s wr, 143 op/s
Jan 20 14:41:42 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:41:42 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 14:41:42 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:41:42 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 14:41:42 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 14:41:42 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:41:43 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:41:43 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:41:43 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:41:43.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:41:43 compute-1 nova_compute[225855]: 2026-01-20 14:41:43.277 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:41:43 compute-1 ceph-mon[81775]: pgmap v1601: 321 pgs: 321 active+clean; 270 MiB data, 753 MiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 4.0 MiB/s wr, 146 op/s
Jan 20 14:41:44 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:41:44 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:41:44 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:41:44.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:41:45 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:41:45 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:41:45 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:41:45.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:41:45 compute-1 nova_compute[225855]: 2026-01-20 14:41:45.264 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:41:45 compute-1 ceph-mon[81775]: pgmap v1602: 321 pgs: 321 active+clean; 231 MiB data, 729 MiB used, 20 GiB / 21 GiB avail; 735 KiB/s rd, 4.3 MiB/s wr, 155 op/s
Jan 20 14:41:45 compute-1 sudo[254832]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:41:45 compute-1 sudo[254832]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:41:45 compute-1 sudo[254832]: pam_unix(sudo:session): session closed for user root
Jan 20 14:41:45 compute-1 sudo[254857]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:41:45 compute-1 sudo[254857]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:41:45 compute-1 sudo[254857]: pam_unix(sudo:session): session closed for user root
Jan 20 14:41:46 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:41:46 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:41:46 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:41:46.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:41:46 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:41:47 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:41:47 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:41:47 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:41:47.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:41:47 compute-1 nova_compute[225855]: 2026-01-20 14:41:47.449 225859 DEBUG nova.network.neutron [req-91a6eb4a-2834-4ef0-8f1b-ac1af5dd43f0 req-0ec7a7ad-eecc-4259-8aec-3b4ac2c938ea 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 10349dde-fb60-48ba-bc7b-42180c5eb49e] Updated VIF entry in instance network info cache for port 607e59a4-2a6b-424a-9413-be318079781e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 14:41:47 compute-1 nova_compute[225855]: 2026-01-20 14:41:47.450 225859 DEBUG nova.network.neutron [req-91a6eb4a-2834-4ef0-8f1b-ac1af5dd43f0 req-0ec7a7ad-eecc-4259-8aec-3b4ac2c938ea 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 10349dde-fb60-48ba-bc7b-42180c5eb49e] Updating instance_info_cache with network_info: [{"id": "607e59a4-2a6b-424a-9413-be318079781e", "address": "fa:16:3e:a3:13:21", "network": {"id": "fc21b99b-4e34-422c-be05-0a440009dac4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-808285772-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3f93fd4b2154dda9f38e62334904303", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap607e59a4-2a", "ovs_interfaceid": "607e59a4-2a6b-424a-9413-be318079781e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:41:47 compute-1 ceph-mon[81775]: pgmap v1603: 321 pgs: 321 active+clean; 200 MiB data, 715 MiB used, 20 GiB / 21 GiB avail; 559 KiB/s rd, 2.7 MiB/s wr, 126 op/s
Jan 20 14:41:47 compute-1 nova_compute[225855]: 2026-01-20 14:41:47.785 225859 DEBUG oslo_concurrency.lockutils [req-91a6eb4a-2834-4ef0-8f1b-ac1af5dd43f0 req-0ec7a7ad-eecc-4259-8aec-3b4ac2c938ea 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-10349dde-fb60-48ba-bc7b-42180c5eb49e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:41:48 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:41:48.001 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=24, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '12:bb:42', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '06:92:24:f7:15:56'}, ipsec=False) old=SB_Global(nb_cfg=23) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:41:48 compute-1 nova_compute[225855]: 2026-01-20 14:41:48.002 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:41:48 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:41:48.003 140354 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 20 14:41:48 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:41:48 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:41:48 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:41:48.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:41:48 compute-1 sudo[254883]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:41:48 compute-1 sudo[254883]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:41:48 compute-1 sudo[254883]: pam_unix(sudo:session): session closed for user root
Jan 20 14:41:48 compute-1 sudo[254908]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 14:41:48 compute-1 sudo[254908]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:41:48 compute-1 sudo[254908]: pam_unix(sudo:session): session closed for user root
Jan 20 14:41:48 compute-1 nova_compute[225855]: 2026-01-20 14:41:48.279 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:41:48 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:41:48 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:41:48 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/3821528778' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:41:49 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:41:49 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:41:49 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:41:49.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:41:49 compute-1 ceph-mon[81775]: pgmap v1604: 321 pgs: 321 active+clean; 200 MiB data, 715 MiB used, 20 GiB / 21 GiB avail; 445 KiB/s rd, 2.2 MiB/s wr, 98 op/s
Jan 20 14:41:50 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:41:50.005 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5ffd4ac3-9266-4927-98ad-20a17782c725, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '24'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:41:50 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:41:50 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:41:50 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:41:50.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:41:50 compute-1 nova_compute[225855]: 2026-01-20 14:41:50.266 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:41:51 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:41:51 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:41:51 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:41:51.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:41:51 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:41:51 compute-1 ceph-mon[81775]: pgmap v1605: 321 pgs: 321 active+clean; 200 MiB data, 715 MiB used, 20 GiB / 21 GiB avail; 406 KiB/s rd, 2.1 MiB/s wr, 93 op/s
Jan 20 14:41:52 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:41:52 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:41:52 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:41:52.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:41:53 compute-1 podman[254935]: 2026-01-20 14:41:53.075797756 +0000 UTC m=+0.117522049 container health_status 72be95c12e041eb2cd1a16ad70dfa1391d9f03a5f3180a614ead9459dcfcd46f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 20 14:41:53 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:41:53 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:41:53 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:41:53.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:41:53 compute-1 nova_compute[225855]: 2026-01-20 14:41:53.281 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:41:53 compute-1 ceph-mon[81775]: pgmap v1606: 321 pgs: 321 active+clean; 200 MiB data, 712 MiB used, 20 GiB / 21 GiB avail; 213 KiB/s rd, 904 KiB/s wr, 67 op/s
Jan 20 14:41:54 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:41:54 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:41:54 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:41:54.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:41:55 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:41:55 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:41:55 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:41:55.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:41:55 compute-1 nova_compute[225855]: 2026-01-20 14:41:55.268 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:41:55 compute-1 ceph-mon[81775]: pgmap v1607: 321 pgs: 321 active+clean; 200 MiB data, 712 MiB used, 20 GiB / 21 GiB avail; 49 KiB/s rd, 298 KiB/s wr, 44 op/s
Jan 20 14:41:56 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:41:56 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:41:56 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:41:56.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:41:56 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:41:57 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:41:57 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:41:57 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:41:57.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:41:57 compute-1 ceph-mon[81775]: pgmap v1608: 321 pgs: 321 active+clean; 200 MiB data, 712 MiB used, 20 GiB / 21 GiB avail; 1.7 KiB/s rd, 3.8 KiB/s wr, 4 op/s
Jan 20 14:41:58 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:41:58 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:41:58 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:41:58.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:41:58 compute-1 nova_compute[225855]: 2026-01-20 14:41:58.284 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:41:59 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:41:59 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:41:59 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:41:59.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:41:59 compute-1 ceph-mon[81775]: pgmap v1609: 321 pgs: 321 active+clean; 200 MiB data, 710 MiB used, 20 GiB / 21 GiB avail; 3.3 KiB/s wr, 0 op/s
Jan 20 14:42:00 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:42:00 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:42:00 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:42:00.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:42:00 compute-1 nova_compute[225855]: 2026-01-20 14:42:00.272 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:42:01 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:42:01 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:42:01 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:42:01.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:42:01 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:42:01 compute-1 ceph-mon[81775]: pgmap v1610: 321 pgs: 321 active+clean; 200 MiB data, 710 MiB used, 20 GiB / 21 GiB avail; 3.3 KiB/s wr, 0 op/s
Jan 20 14:42:02 compute-1 podman[254966]: 2026-01-20 14:42:02.037109411 +0000 UTC m=+0.074156451 container health_status 533cd8e0c3b8b7f910f15cef49e7f879ab46c12ce646a35628c15f13725eb500 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Jan 20 14:42:02 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:42:02 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:42:02 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:42:02.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:42:03 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:42:03 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:42:03 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:42:03.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:42:03 compute-1 nova_compute[225855]: 2026-01-20 14:42:03.286 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:42:04 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:42:04 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:42:04 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:42:04.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:42:04 compute-1 ceph-mon[81775]: pgmap v1611: 321 pgs: 321 active+clean; 200 MiB data, 710 MiB used, 20 GiB / 21 GiB avail; 2.0 KiB/s wr, 0 op/s
Jan 20 14:42:05 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:42:05 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:42:05 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:42:05.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:42:05 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/4238600245' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:42:05 compute-1 nova_compute[225855]: 2026-01-20 14:42:05.274 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:42:05 compute-1 sudo[254988]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:42:05 compute-1 sudo[254988]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:42:05 compute-1 sudo[254988]: pam_unix(sudo:session): session closed for user root
Jan 20 14:42:05 compute-1 sudo[255013]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:42:05 compute-1 sudo[255013]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:42:05 compute-1 sudo[255013]: pam_unix(sudo:session): session closed for user root
Jan 20 14:42:06 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:42:06 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:42:06 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:42:06.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:42:06 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:42:06 compute-1 ceph-mon[81775]: pgmap v1612: 321 pgs: 321 active+clean; 201 MiB data, 710 MiB used, 20 GiB / 21 GiB avail; 4.0 KiB/s rd, 5.0 KiB/s wr, 7 op/s
Jan 20 14:42:07 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:42:07 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:42:07 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:42:07.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:42:07 compute-1 ceph-mon[81775]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #70. Immutable memtables: 0.
Jan 20 14:42:07 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:42:07.729708) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 20 14:42:07 compute-1 ceph-mon[81775]: rocksdb: [db/flush_job.cc:856] [default] [JOB 41] Flushing memtable with next log file: 70
Jan 20 14:42:07 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768920127729759, "job": 41, "event": "flush_started", "num_memtables": 1, "num_entries": 2175, "num_deletes": 251, "total_data_size": 5092944, "memory_usage": 5161184, "flush_reason": "Manual Compaction"}
Jan 20 14:42:07 compute-1 ceph-mon[81775]: rocksdb: [db/flush_job.cc:885] [default] [JOB 41] Level-0 flush table #71: started
Jan 20 14:42:07 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768920127757144, "cf_name": "default", "job": 41, "event": "table_file_creation", "file_number": 71, "file_size": 3307047, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 36648, "largest_seqno": 38818, "table_properties": {"data_size": 3298227, "index_size": 5378, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 18936, "raw_average_key_size": 20, "raw_value_size": 3280465, "raw_average_value_size": 3554, "num_data_blocks": 234, "num_entries": 923, "num_filter_entries": 923, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768919948, "oldest_key_time": 1768919948, "file_creation_time": 1768920127, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1539d774-8a6f-4e48-b253-137c44586344", "db_session_id": "LFF7G2OZDOU7TKQ8MKAH", "orig_file_number": 71, "seqno_to_time_mapping": "N/A"}}
Jan 20 14:42:07 compute-1 ceph-mon[81775]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 41] Flush lasted 27490 microseconds, and 7139 cpu microseconds.
Jan 20 14:42:07 compute-1 ceph-mon[81775]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 14:42:07 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:42:07.757196) [db/flush_job.cc:967] [default] [JOB 41] Level-0 flush table #71: 3307047 bytes OK
Jan 20 14:42:07 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:42:07.757218) [db/memtable_list.cc:519] [default] Level-0 commit table #71 started
Jan 20 14:42:07 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:42:07.760654) [db/memtable_list.cc:722] [default] Level-0 commit table #71: memtable #1 done
Jan 20 14:42:07 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:42:07.760670) EVENT_LOG_v1 {"time_micros": 1768920127760665, "job": 41, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 20 14:42:07 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:42:07.760691) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 20 14:42:07 compute-1 ceph-mon[81775]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 41] Try to delete WAL files size 5083216, prev total WAL file size 5083927, number of live WAL files 2.
Jan 20 14:42:07 compute-1 ceph-mon[81775]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000067.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 14:42:07 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:42:07.762051) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033303132' seq:72057594037927935, type:22 .. '7061786F730033323634' seq:0, type:0; will stop at (end)
Jan 20 14:42:07 compute-1 ceph-mon[81775]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 42] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 20 14:42:07 compute-1 ceph-mon[81775]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 41 Base level 0, inputs: [71(3229KB)], [69(8242KB)]
Jan 20 14:42:07 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768920127762122, "job": 42, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [71], "files_L6": [69], "score": -1, "input_data_size": 11747401, "oldest_snapshot_seqno": -1}
Jan 20 14:42:07 compute-1 ceph-mon[81775]: pgmap v1613: 321 pgs: 321 active+clean; 214 MiB data, 715 MiB used, 20 GiB / 21 GiB avail; 14 KiB/s rd, 441 KiB/s wr, 21 op/s
Jan 20 14:42:07 compute-1 ceph-mon[81775]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 42] Generated table #72: 6460 keys, 9846164 bytes, temperature: kUnknown
Jan 20 14:42:07 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768920127840272, "cf_name": "default", "job": 42, "event": "table_file_creation", "file_number": 72, "file_size": 9846164, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9803177, "index_size": 25725, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16197, "raw_key_size": 164948, "raw_average_key_size": 25, "raw_value_size": 9687475, "raw_average_value_size": 1499, "num_data_blocks": 1030, "num_entries": 6460, "num_filter_entries": 6460, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768917474, "oldest_key_time": 0, "file_creation_time": 1768920127, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1539d774-8a6f-4e48-b253-137c44586344", "db_session_id": "LFF7G2OZDOU7TKQ8MKAH", "orig_file_number": 72, "seqno_to_time_mapping": "N/A"}}
Jan 20 14:42:07 compute-1 ceph-mon[81775]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 14:42:07 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:42:07.840581) [db/compaction/compaction_job.cc:1663] [default] [JOB 42] Compacted 1@0 + 1@6 files to L6 => 9846164 bytes
Jan 20 14:42:07 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:42:07.841777) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 150.2 rd, 125.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.2, 8.0 +0.0 blob) out(9.4 +0.0 blob), read-write-amplify(6.5) write-amplify(3.0) OK, records in: 6979, records dropped: 519 output_compression: NoCompression
Jan 20 14:42:07 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:42:07.841800) EVENT_LOG_v1 {"time_micros": 1768920127841789, "job": 42, "event": "compaction_finished", "compaction_time_micros": 78217, "compaction_time_cpu_micros": 28555, "output_level": 6, "num_output_files": 1, "total_output_size": 9846164, "num_input_records": 6979, "num_output_records": 6460, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 20 14:42:07 compute-1 ceph-mon[81775]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000071.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 14:42:07 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768920127842891, "job": 42, "event": "table_file_deletion", "file_number": 71}
Jan 20 14:42:07 compute-1 ceph-mon[81775]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000069.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 14:42:07 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768920127844922, "job": 42, "event": "table_file_deletion", "file_number": 69}
Jan 20 14:42:07 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:42:07.761833) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:42:07 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:42:07.844967) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:42:07 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:42:07.844971) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:42:07 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:42:07.844973) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:42:07 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:42:07.844975) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:42:07 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:42:07.844977) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:42:08 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:42:08 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:42:08 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:42:08.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:42:08 compute-1 ovn_controller[130490]: 2026-01-20T14:42:08Z|00251|binding|INFO|Releasing lport 583df905-1d9f-49c1-b209-4b7fad1599f6 from this chassis (sb_readonly=0)
Jan 20 14:42:08 compute-1 nova_compute[225855]: 2026-01-20 14:42:08.265 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:42:08 compute-1 nova_compute[225855]: 2026-01-20 14:42:08.287 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:42:08 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/2832464759' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:42:09 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:42:09 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:42:09 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:42:09.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:42:09 compute-1 ceph-mon[81775]: pgmap v1614: 321 pgs: 321 active+clean; 233 MiB data, 722 MiB used, 20 GiB / 21 GiB avail; 15 KiB/s rd, 1.0 MiB/s wr, 24 op/s
Jan 20 14:42:10 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:42:10 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:42:10 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:42:10.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:42:10 compute-1 nova_compute[225855]: 2026-01-20 14:42:10.276 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:42:11 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:42:11 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:42:11 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:42:11.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:42:11 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:42:11 compute-1 ceph-mon[81775]: pgmap v1615: 321 pgs: 321 active+clean; 268 MiB data, 731 MiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 2.3 MiB/s wr, 30 op/s
Jan 20 14:42:12 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:42:12 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:42:12 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:42:12.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:42:12 compute-1 nova_compute[225855]: 2026-01-20 14:42:12.425 225859 DEBUG oslo_concurrency.lockutils [None req-1e7cd38a-601c-4a17-b056-dc991ac616b5 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] Acquiring lock "504acd93-cd55-496e-a85f-30e811f827d4" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:42:12 compute-1 nova_compute[225855]: 2026-01-20 14:42:12.426 225859 DEBUG oslo_concurrency.lockutils [None req-1e7cd38a-601c-4a17-b056-dc991ac616b5 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] Lock "504acd93-cd55-496e-a85f-30e811f827d4" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:42:12 compute-1 nova_compute[225855]: 2026-01-20 14:42:12.466 225859 DEBUG nova.compute.manager [None req-1e7cd38a-601c-4a17-b056-dc991ac616b5 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] [instance: 504acd93-cd55-496e-a85f-30e811f827d4] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 20 14:42:12 compute-1 nova_compute[225855]: 2026-01-20 14:42:12.606 225859 DEBUG oslo_concurrency.lockutils [None req-1e7cd38a-601c-4a17-b056-dc991ac616b5 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:42:12 compute-1 nova_compute[225855]: 2026-01-20 14:42:12.607 225859 DEBUG oslo_concurrency.lockutils [None req-1e7cd38a-601c-4a17-b056-dc991ac616b5 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:42:12 compute-1 nova_compute[225855]: 2026-01-20 14:42:12.616 225859 DEBUG nova.virt.hardware [None req-1e7cd38a-601c-4a17-b056-dc991ac616b5 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 20 14:42:12 compute-1 nova_compute[225855]: 2026-01-20 14:42:12.616 225859 INFO nova.compute.claims [None req-1e7cd38a-601c-4a17-b056-dc991ac616b5 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] [instance: 504acd93-cd55-496e-a85f-30e811f827d4] Claim successful on node compute-1.ctlplane.example.com
Jan 20 14:42:12 compute-1 nova_compute[225855]: 2026-01-20 14:42:12.869 225859 DEBUG oslo_concurrency.processutils [None req-1e7cd38a-601c-4a17-b056-dc991ac616b5 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:42:13 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:42:13 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:42:13 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:42:13.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:42:13 compute-1 ceph-mon[81775]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #73. Immutable memtables: 0.
Jan 20 14:42:13 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:42:13.158929) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 20 14:42:13 compute-1 ceph-mon[81775]: rocksdb: [db/flush_job.cc:856] [default] [JOB 43] Flushing memtable with next log file: 73
Jan 20 14:42:13 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768920133159164, "job": 43, "event": "flush_started", "num_memtables": 1, "num_entries": 310, "num_deletes": 255, "total_data_size": 122037, "memory_usage": 128576, "flush_reason": "Manual Compaction"}
Jan 20 14:42:13 compute-1 ceph-mon[81775]: rocksdb: [db/flush_job.cc:885] [default] [JOB 43] Level-0 flush table #74: started
Jan 20 14:42:13 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768920133162424, "cf_name": "default", "job": 43, "event": "table_file_creation", "file_number": 74, "file_size": 79974, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 38823, "largest_seqno": 39128, "table_properties": {"data_size": 78047, "index_size": 155, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 709, "raw_key_size": 4925, "raw_average_key_size": 17, "raw_value_size": 74148, "raw_average_value_size": 264, "num_data_blocks": 7, "num_entries": 280, "num_filter_entries": 280, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768920127, "oldest_key_time": 1768920127, "file_creation_time": 1768920133, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1539d774-8a6f-4e48-b253-137c44586344", "db_session_id": "LFF7G2OZDOU7TKQ8MKAH", "orig_file_number": 74, "seqno_to_time_mapping": "N/A"}}
Jan 20 14:42:13 compute-1 ceph-mon[81775]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 43] Flush lasted 3648 microseconds, and 1252 cpu microseconds.
Jan 20 14:42:13 compute-1 ceph-mon[81775]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 14:42:13 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:42:13.162587) [db/flush_job.cc:967] [default] [JOB 43] Level-0 flush table #74: 79974 bytes OK
Jan 20 14:42:13 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:42:13.162665) [db/memtable_list.cc:519] [default] Level-0 commit table #74 started
Jan 20 14:42:13 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:42:13.164417) [db/memtable_list.cc:722] [default] Level-0 commit table #74: memtable #1 done
Jan 20 14:42:13 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:42:13.164432) EVENT_LOG_v1 {"time_micros": 1768920133164427, "job": 43, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 20 14:42:13 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:42:13.164453) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 20 14:42:13 compute-1 ceph-mon[81775]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 43] Try to delete WAL files size 119792, prev total WAL file size 119792, number of live WAL files 2.
Jan 20 14:42:13 compute-1 ceph-mon[81775]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000070.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 14:42:13 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:42:13.165420) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031303131' seq:72057594037927935, type:22 .. '6C6F676D0031323632' seq:0, type:0; will stop at (end)
Jan 20 14:42:13 compute-1 ceph-mon[81775]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 44] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 20 14:42:13 compute-1 ceph-mon[81775]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 43 Base level 0, inputs: [74(78KB)], [72(9615KB)]
Jan 20 14:42:13 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768920133165581, "job": 44, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [74], "files_L6": [72], "score": -1, "input_data_size": 9926138, "oldest_snapshot_seqno": -1}
Jan 20 14:42:13 compute-1 ceph-mon[81775]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 44] Generated table #75: 6222 keys, 9792709 bytes, temperature: kUnknown
Jan 20 14:42:13 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768920133249768, "cf_name": "default", "job": 44, "event": "table_file_creation", "file_number": 75, "file_size": 9792709, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9750793, "index_size": 25230, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15621, "raw_key_size": 160936, "raw_average_key_size": 25, "raw_value_size": 9638771, "raw_average_value_size": 1549, "num_data_blocks": 1005, "num_entries": 6222, "num_filter_entries": 6222, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768917474, "oldest_key_time": 0, "file_creation_time": 1768920133, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1539d774-8a6f-4e48-b253-137c44586344", "db_session_id": "LFF7G2OZDOU7TKQ8MKAH", "orig_file_number": 75, "seqno_to_time_mapping": "N/A"}}
Jan 20 14:42:13 compute-1 ceph-mon[81775]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 14:42:13 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:42:13.250105) [db/compaction/compaction_job.cc:1663] [default] [JOB 44] Compacted 1@0 + 1@6 files to L6 => 9792709 bytes
Jan 20 14:42:13 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:42:13.256947) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 117.8 rd, 116.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.1, 9.4 +0.0 blob) out(9.3 +0.0 blob), read-write-amplify(246.6) write-amplify(122.4) OK, records in: 6740, records dropped: 518 output_compression: NoCompression
Jan 20 14:42:13 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:42:13.256998) EVENT_LOG_v1 {"time_micros": 1768920133256980, "job": 44, "event": "compaction_finished", "compaction_time_micros": 84294, "compaction_time_cpu_micros": 27824, "output_level": 6, "num_output_files": 1, "total_output_size": 9792709, "num_input_records": 6740, "num_output_records": 6222, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 20 14:42:13 compute-1 ceph-mon[81775]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000074.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 14:42:13 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768920133257472, "job": 44, "event": "table_file_deletion", "file_number": 74}
Jan 20 14:42:13 compute-1 ceph-mon[81775]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000072.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 14:42:13 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768920133259552, "job": 44, "event": "table_file_deletion", "file_number": 72}
Jan 20 14:42:13 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:42:13.165342) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:42:13 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:42:13.259681) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:42:13 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:42:13.259689) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:42:13 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:42:13.259695) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:42:13 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:42:13.259699) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:42:13 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:42:13.259702) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:42:13 compute-1 nova_compute[225855]: 2026-01-20 14:42:13.294 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:42:13 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:42:13 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2810974689' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:42:13 compute-1 nova_compute[225855]: 2026-01-20 14:42:13.392 225859 DEBUG oslo_concurrency.processutils [None req-1e7cd38a-601c-4a17-b056-dc991ac616b5 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.523s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:42:13 compute-1 nova_compute[225855]: 2026-01-20 14:42:13.398 225859 DEBUG nova.compute.provider_tree [None req-1e7cd38a-601c-4a17-b056-dc991ac616b5 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:42:13 compute-1 nova_compute[225855]: 2026-01-20 14:42:13.432 225859 DEBUG nova.scheduler.client.report [None req-1e7cd38a-601c-4a17-b056-dc991ac616b5 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:42:13 compute-1 nova_compute[225855]: 2026-01-20 14:42:13.476 225859 DEBUG oslo_concurrency.lockutils [None req-1e7cd38a-601c-4a17-b056-dc991ac616b5 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.869s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:42:13 compute-1 nova_compute[225855]: 2026-01-20 14:42:13.477 225859 DEBUG nova.compute.manager [None req-1e7cd38a-601c-4a17-b056-dc991ac616b5 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] [instance: 504acd93-cd55-496e-a85f-30e811f827d4] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 20 14:42:13 compute-1 nova_compute[225855]: 2026-01-20 14:42:13.534 225859 DEBUG nova.compute.manager [None req-1e7cd38a-601c-4a17-b056-dc991ac616b5 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] [instance: 504acd93-cd55-496e-a85f-30e811f827d4] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 20 14:42:13 compute-1 nova_compute[225855]: 2026-01-20 14:42:13.534 225859 DEBUG nova.network.neutron [None req-1e7cd38a-601c-4a17-b056-dc991ac616b5 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] [instance: 504acd93-cd55-496e-a85f-30e811f827d4] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 20 14:42:13 compute-1 nova_compute[225855]: 2026-01-20 14:42:13.568 225859 INFO nova.virt.libvirt.driver [None req-1e7cd38a-601c-4a17-b056-dc991ac616b5 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] [instance: 504acd93-cd55-496e-a85f-30e811f827d4] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 20 14:42:13 compute-1 nova_compute[225855]: 2026-01-20 14:42:13.606 225859 DEBUG nova.compute.manager [None req-1e7cd38a-601c-4a17-b056-dc991ac616b5 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] [instance: 504acd93-cd55-496e-a85f-30e811f827d4] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 20 14:42:13 compute-1 nova_compute[225855]: 2026-01-20 14:42:13.726 225859 DEBUG nova.compute.manager [None req-1e7cd38a-601c-4a17-b056-dc991ac616b5 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] [instance: 504acd93-cd55-496e-a85f-30e811f827d4] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 20 14:42:13 compute-1 nova_compute[225855]: 2026-01-20 14:42:13.728 225859 DEBUG nova.virt.libvirt.driver [None req-1e7cd38a-601c-4a17-b056-dc991ac616b5 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] [instance: 504acd93-cd55-496e-a85f-30e811f827d4] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 20 14:42:13 compute-1 nova_compute[225855]: 2026-01-20 14:42:13.729 225859 INFO nova.virt.libvirt.driver [None req-1e7cd38a-601c-4a17-b056-dc991ac616b5 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] [instance: 504acd93-cd55-496e-a85f-30e811f827d4] Creating image(s)
Jan 20 14:42:13 compute-1 nova_compute[225855]: 2026-01-20 14:42:13.766 225859 DEBUG nova.storage.rbd_utils [None req-1e7cd38a-601c-4a17-b056-dc991ac616b5 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] rbd image 504acd93-cd55-496e-a85f-30e811f827d4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:42:13 compute-1 nova_compute[225855]: 2026-01-20 14:42:13.804 225859 DEBUG nova.storage.rbd_utils [None req-1e7cd38a-601c-4a17-b056-dc991ac616b5 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] rbd image 504acd93-cd55-496e-a85f-30e811f827d4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:42:13 compute-1 nova_compute[225855]: 2026-01-20 14:42:13.840 225859 DEBUG nova.storage.rbd_utils [None req-1e7cd38a-601c-4a17-b056-dc991ac616b5 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] rbd image 504acd93-cd55-496e-a85f-30e811f827d4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:42:13 compute-1 nova_compute[225855]: 2026-01-20 14:42:13.845 225859 DEBUG oslo_concurrency.processutils [None req-1e7cd38a-601c-4a17-b056-dc991ac616b5 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a4ed0d2b98aa460c005e878d78a49ccb6f511f7c --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:42:13 compute-1 nova_compute[225855]: 2026-01-20 14:42:13.885 225859 DEBUG nova.policy [None req-1e7cd38a-601c-4a17-b056-dc991ac616b5 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ff99fc8eda0640928c6e82981dacb266', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '4b95747114ab4043b93a260387199c91', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 20 14:42:13 compute-1 nova_compute[225855]: 2026-01-20 14:42:13.932 225859 DEBUG oslo_concurrency.processutils [None req-1e7cd38a-601c-4a17-b056-dc991ac616b5 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a4ed0d2b98aa460c005e878d78a49ccb6f511f7c --force-share --output=json" returned: 0 in 0.087s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:42:13 compute-1 nova_compute[225855]: 2026-01-20 14:42:13.933 225859 DEBUG oslo_concurrency.lockutils [None req-1e7cd38a-601c-4a17-b056-dc991ac616b5 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] Acquiring lock "a4ed0d2b98aa460c005e878d78a49ccb6f511f7c" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:42:13 compute-1 nova_compute[225855]: 2026-01-20 14:42:13.934 225859 DEBUG oslo_concurrency.lockutils [None req-1e7cd38a-601c-4a17-b056-dc991ac616b5 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] Lock "a4ed0d2b98aa460c005e878d78a49ccb6f511f7c" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:42:13 compute-1 nova_compute[225855]: 2026-01-20 14:42:13.934 225859 DEBUG oslo_concurrency.lockutils [None req-1e7cd38a-601c-4a17-b056-dc991ac616b5 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] Lock "a4ed0d2b98aa460c005e878d78a49ccb6f511f7c" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:42:13 compute-1 nova_compute[225855]: 2026-01-20 14:42:13.964 225859 DEBUG nova.storage.rbd_utils [None req-1e7cd38a-601c-4a17-b056-dc991ac616b5 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] rbd image 504acd93-cd55-496e-a85f-30e811f827d4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:42:13 compute-1 nova_compute[225855]: 2026-01-20 14:42:13.968 225859 DEBUG oslo_concurrency.processutils [None req-1e7cd38a-601c-4a17-b056-dc991ac616b5 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a4ed0d2b98aa460c005e878d78a49ccb6f511f7c 504acd93-cd55-496e-a85f-30e811f827d4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:42:14 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:42:14 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:42:14 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:42:14.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:42:14 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/2810974689' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:42:14 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/2620006287' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 14:42:14 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/2620006287' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 14:42:14 compute-1 ceph-mon[81775]: pgmap v1616: 321 pgs: 321 active+clean; 286 MiB data, 740 MiB used, 20 GiB / 21 GiB avail; 18 KiB/s rd, 3.1 MiB/s wr, 33 op/s
Jan 20 14:42:14 compute-1 nova_compute[225855]: 2026-01-20 14:42:14.270 225859 DEBUG oslo_concurrency.processutils [None req-1e7cd38a-601c-4a17-b056-dc991ac616b5 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a4ed0d2b98aa460c005e878d78a49ccb6f511f7c 504acd93-cd55-496e-a85f-30e811f827d4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.302s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:42:14 compute-1 nova_compute[225855]: 2026-01-20 14:42:14.352 225859 DEBUG nova.storage.rbd_utils [None req-1e7cd38a-601c-4a17-b056-dc991ac616b5 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] resizing rbd image 504acd93-cd55-496e-a85f-30e811f827d4_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 20 14:42:14 compute-1 nova_compute[225855]: 2026-01-20 14:42:14.464 225859 DEBUG nova.objects.instance [None req-1e7cd38a-601c-4a17-b056-dc991ac616b5 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] Lazy-loading 'migration_context' on Instance uuid 504acd93-cd55-496e-a85f-30e811f827d4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:42:14 compute-1 nova_compute[225855]: 2026-01-20 14:42:14.508 225859 DEBUG nova.virt.libvirt.driver [None req-1e7cd38a-601c-4a17-b056-dc991ac616b5 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] [instance: 504acd93-cd55-496e-a85f-30e811f827d4] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 20 14:42:14 compute-1 nova_compute[225855]: 2026-01-20 14:42:14.508 225859 DEBUG nova.virt.libvirt.driver [None req-1e7cd38a-601c-4a17-b056-dc991ac616b5 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] [instance: 504acd93-cd55-496e-a85f-30e811f827d4] Ensure instance console log exists: /var/lib/nova/instances/504acd93-cd55-496e-a85f-30e811f827d4/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 20 14:42:14 compute-1 nova_compute[225855]: 2026-01-20 14:42:14.509 225859 DEBUG oslo_concurrency.lockutils [None req-1e7cd38a-601c-4a17-b056-dc991ac616b5 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:42:14 compute-1 nova_compute[225855]: 2026-01-20 14:42:14.509 225859 DEBUG oslo_concurrency.lockutils [None req-1e7cd38a-601c-4a17-b056-dc991ac616b5 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:42:14 compute-1 nova_compute[225855]: 2026-01-20 14:42:14.509 225859 DEBUG oslo_concurrency.lockutils [None req-1e7cd38a-601c-4a17-b056-dc991ac616b5 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:42:15 compute-1 nova_compute[225855]: 2026-01-20 14:42:15.081 225859 DEBUG nova.network.neutron [None req-1e7cd38a-601c-4a17-b056-dc991ac616b5 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] [instance: 504acd93-cd55-496e-a85f-30e811f827d4] Successfully created port: 349b1d10-0b06-4025-80fd-4861bd487a43 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 20 14:42:15 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:42:15 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:42:15 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:42:15.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:42:15 compute-1 nova_compute[225855]: 2026-01-20 14:42:15.278 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:42:15 compute-1 ceph-mon[81775]: pgmap v1617: 321 pgs: 321 active+clean; 307 MiB data, 757 MiB used, 20 GiB / 21 GiB avail; 44 KiB/s rd, 4.0 MiB/s wr, 68 op/s
Jan 20 14:42:16 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:42:16 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:42:16 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:42:16.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:42:16 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:42:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:42:16.403 140354 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:42:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:42:16.403 140354 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:42:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:42:16.404 140354 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:42:17 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:42:17 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:42:17 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:42:17.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:42:17 compute-1 nova_compute[225855]: 2026-01-20 14:42:17.365 225859 DEBUG nova.network.neutron [None req-1e7cd38a-601c-4a17-b056-dc991ac616b5 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] [instance: 504acd93-cd55-496e-a85f-30e811f827d4] Successfully updated port: 349b1d10-0b06-4025-80fd-4861bd487a43 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 20 14:42:17 compute-1 nova_compute[225855]: 2026-01-20 14:42:17.410 225859 DEBUG oslo_concurrency.lockutils [None req-1e7cd38a-601c-4a17-b056-dc991ac616b5 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] Acquiring lock "refresh_cache-504acd93-cd55-496e-a85f-30e811f827d4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:42:17 compute-1 nova_compute[225855]: 2026-01-20 14:42:17.410 225859 DEBUG oslo_concurrency.lockutils [None req-1e7cd38a-601c-4a17-b056-dc991ac616b5 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] Acquired lock "refresh_cache-504acd93-cd55-496e-a85f-30e811f827d4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:42:17 compute-1 nova_compute[225855]: 2026-01-20 14:42:17.411 225859 DEBUG nova.network.neutron [None req-1e7cd38a-601c-4a17-b056-dc991ac616b5 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] [instance: 504acd93-cd55-496e-a85f-30e811f827d4] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 14:42:17 compute-1 nova_compute[225855]: 2026-01-20 14:42:17.716 225859 DEBUG nova.network.neutron [None req-1e7cd38a-601c-4a17-b056-dc991ac616b5 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] [instance: 504acd93-cd55-496e-a85f-30e811f827d4] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 20 14:42:17 compute-1 ceph-mon[81775]: pgmap v1618: 321 pgs: 321 active+clean; 324 MiB data, 769 MiB used, 20 GiB / 21 GiB avail; 41 KiB/s rd, 5.0 MiB/s wr, 63 op/s
Jan 20 14:42:18 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:42:18 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3967705155' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:42:18 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:42:18 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:42:18 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:42:18.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:42:18 compute-1 nova_compute[225855]: 2026-01-20 14:42:18.297 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:42:18 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/3967705155' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:42:19 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:42:19 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:42:19 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:42:19.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:42:19 compute-1 nova_compute[225855]: 2026-01-20 14:42:19.578 225859 DEBUG nova.compute.manager [req-c34e0831-1ec5-43e2-b737-614d371dffee req-b9091455-90df-4eab-a890-3071ac3f3f9a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 10349dde-fb60-48ba-bc7b-42180c5eb49e] Received event network-changed-607e59a4-2a6b-424a-9413-be318079781e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:42:19 compute-1 nova_compute[225855]: 2026-01-20 14:42:19.579 225859 DEBUG nova.compute.manager [req-c34e0831-1ec5-43e2-b737-614d371dffee req-b9091455-90df-4eab-a890-3071ac3f3f9a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 10349dde-fb60-48ba-bc7b-42180c5eb49e] Refreshing instance network info cache due to event network-changed-607e59a4-2a6b-424a-9413-be318079781e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 14:42:19 compute-1 nova_compute[225855]: 2026-01-20 14:42:19.579 225859 DEBUG oslo_concurrency.lockutils [req-c34e0831-1ec5-43e2-b737-614d371dffee req-b9091455-90df-4eab-a890-3071ac3f3f9a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-10349dde-fb60-48ba-bc7b-42180c5eb49e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:42:19 compute-1 nova_compute[225855]: 2026-01-20 14:42:19.580 225859 DEBUG oslo_concurrency.lockutils [req-c34e0831-1ec5-43e2-b737-614d371dffee req-b9091455-90df-4eab-a890-3071ac3f3f9a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-10349dde-fb60-48ba-bc7b-42180c5eb49e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:42:19 compute-1 nova_compute[225855]: 2026-01-20 14:42:19.580 225859 DEBUG nova.network.neutron [req-c34e0831-1ec5-43e2-b737-614d371dffee req-b9091455-90df-4eab-a890-3071ac3f3f9a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 10349dde-fb60-48ba-bc7b-42180c5eb49e] Refreshing network info cache for port 607e59a4-2a6b-424a-9413-be318079781e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 14:42:19 compute-1 ceph-mon[81775]: pgmap v1619: 321 pgs: 321 active+clean; 344 MiB data, 774 MiB used, 20 GiB / 21 GiB avail; 37 KiB/s rd, 4.9 MiB/s wr, 62 op/s
Jan 20 14:42:20 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:42:20 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:42:20 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:42:20.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:42:20 compute-1 nova_compute[225855]: 2026-01-20 14:42:20.210 225859 DEBUG oslo_concurrency.lockutils [None req-812cb0b8-b728-4175-8739-e1b0cc33e188 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Acquiring lock "interface-10349dde-fb60-48ba-bc7b-42180c5eb49e-e2648ead-7162-4661-94e1-755faa8f1fd1" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:42:20 compute-1 nova_compute[225855]: 2026-01-20 14:42:20.211 225859 DEBUG oslo_concurrency.lockutils [None req-812cb0b8-b728-4175-8739-e1b0cc33e188 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Lock "interface-10349dde-fb60-48ba-bc7b-42180c5eb49e-e2648ead-7162-4661-94e1-755faa8f1fd1" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:42:20 compute-1 nova_compute[225855]: 2026-01-20 14:42:20.212 225859 DEBUG nova.objects.instance [None req-812cb0b8-b728-4175-8739-e1b0cc33e188 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Lazy-loading 'flavor' on Instance uuid 10349dde-fb60-48ba-bc7b-42180c5eb49e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:42:20 compute-1 nova_compute[225855]: 2026-01-20 14:42:20.280 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:42:20 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/3571315860' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:42:20 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/751689202' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:42:21 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:42:21 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:42:21 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:42:21.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:42:21 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:42:21 compute-1 nova_compute[225855]: 2026-01-20 14:42:21.481 225859 DEBUG nova.network.neutron [None req-1e7cd38a-601c-4a17-b056-dc991ac616b5 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] [instance: 504acd93-cd55-496e-a85f-30e811f827d4] Updating instance_info_cache with network_info: [{"id": "349b1d10-0b06-4025-80fd-4861bd487a43", "address": "fa:16:3e:dd:f6:26", "network": {"id": "b36e9cab-12c6-4a09-9aab-ef2679d875ba", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-432532406-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b95747114ab4043b93a260387199c91", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap349b1d10-0b", "ovs_interfaceid": "349b1d10-0b06-4025-80fd-4861bd487a43", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:42:21 compute-1 nova_compute[225855]: 2026-01-20 14:42:21.533 225859 DEBUG oslo_concurrency.lockutils [None req-1e7cd38a-601c-4a17-b056-dc991ac616b5 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] Releasing lock "refresh_cache-504acd93-cd55-496e-a85f-30e811f827d4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:42:21 compute-1 nova_compute[225855]: 2026-01-20 14:42:21.533 225859 DEBUG nova.compute.manager [None req-1e7cd38a-601c-4a17-b056-dc991ac616b5 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] [instance: 504acd93-cd55-496e-a85f-30e811f827d4] Instance network_info: |[{"id": "349b1d10-0b06-4025-80fd-4861bd487a43", "address": "fa:16:3e:dd:f6:26", "network": {"id": "b36e9cab-12c6-4a09-9aab-ef2679d875ba", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-432532406-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b95747114ab4043b93a260387199c91", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap349b1d10-0b", "ovs_interfaceid": "349b1d10-0b06-4025-80fd-4861bd487a43", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 20 14:42:21 compute-1 nova_compute[225855]: 2026-01-20 14:42:21.535 225859 DEBUG nova.virt.libvirt.driver [None req-1e7cd38a-601c-4a17-b056-dc991ac616b5 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] [instance: 504acd93-cd55-496e-a85f-30e811f827d4] Start _get_guest_xml network_info=[{"id": "349b1d10-0b06-4025-80fd-4861bd487a43", "address": "fa:16:3e:dd:f6:26", "network": {"id": "b36e9cab-12c6-4a09-9aab-ef2679d875ba", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-432532406-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b95747114ab4043b93a260387199c91", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap349b1d10-0b", "ovs_interfaceid": "349b1d10-0b06-4025-80fd-4861bd487a43", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:22:02Z,direct_url=<?>,disk_format='qcow2',id=26699514-f465-4b50-98b7-36f2cfc6a308,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:04Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'device_type': 'disk', 'encryption_options': None, 'size': 0, 'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'guest_format': None, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'image_id': '26699514-f465-4b50-98b7-36f2cfc6a308'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 20 14:42:21 compute-1 nova_compute[225855]: 2026-01-20 14:42:21.539 225859 WARNING nova.virt.libvirt.driver [None req-1e7cd38a-601c-4a17-b056-dc991ac616b5 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 14:42:21 compute-1 nova_compute[225855]: 2026-01-20 14:42:21.545 225859 DEBUG nova.virt.libvirt.host [None req-1e7cd38a-601c-4a17-b056-dc991ac616b5 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 20 14:42:21 compute-1 nova_compute[225855]: 2026-01-20 14:42:21.545 225859 DEBUG nova.virt.libvirt.host [None req-1e7cd38a-601c-4a17-b056-dc991ac616b5 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 20 14:42:21 compute-1 nova_compute[225855]: 2026-01-20 14:42:21.550 225859 DEBUG nova.virt.libvirt.host [None req-1e7cd38a-601c-4a17-b056-dc991ac616b5 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 20 14:42:21 compute-1 nova_compute[225855]: 2026-01-20 14:42:21.551 225859 DEBUG nova.virt.libvirt.host [None req-1e7cd38a-601c-4a17-b056-dc991ac616b5 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 20 14:42:21 compute-1 nova_compute[225855]: 2026-01-20 14:42:21.552 225859 DEBUG nova.virt.libvirt.driver [None req-1e7cd38a-601c-4a17-b056-dc991ac616b5 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 20 14:42:21 compute-1 nova_compute[225855]: 2026-01-20 14:42:21.553 225859 DEBUG nova.virt.hardware [None req-1e7cd38a-601c-4a17-b056-dc991ac616b5 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-20T14:21:55Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='522deaab-a741-4dbb-932d-d8b13a211c33',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:22:02Z,direct_url=<?>,disk_format='qcow2',id=26699514-f465-4b50-98b7-36f2cfc6a308,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:04Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 20 14:42:21 compute-1 nova_compute[225855]: 2026-01-20 14:42:21.553 225859 DEBUG nova.virt.hardware [None req-1e7cd38a-601c-4a17-b056-dc991ac616b5 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 20 14:42:21 compute-1 nova_compute[225855]: 2026-01-20 14:42:21.553 225859 DEBUG nova.virt.hardware [None req-1e7cd38a-601c-4a17-b056-dc991ac616b5 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 20 14:42:21 compute-1 nova_compute[225855]: 2026-01-20 14:42:21.554 225859 DEBUG nova.virt.hardware [None req-1e7cd38a-601c-4a17-b056-dc991ac616b5 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 20 14:42:21 compute-1 nova_compute[225855]: 2026-01-20 14:42:21.554 225859 DEBUG nova.virt.hardware [None req-1e7cd38a-601c-4a17-b056-dc991ac616b5 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 20 14:42:21 compute-1 nova_compute[225855]: 2026-01-20 14:42:21.555 225859 DEBUG nova.virt.hardware [None req-1e7cd38a-601c-4a17-b056-dc991ac616b5 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 20 14:42:21 compute-1 nova_compute[225855]: 2026-01-20 14:42:21.555 225859 DEBUG nova.virt.hardware [None req-1e7cd38a-601c-4a17-b056-dc991ac616b5 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 20 14:42:21 compute-1 nova_compute[225855]: 2026-01-20 14:42:21.555 225859 DEBUG nova.virt.hardware [None req-1e7cd38a-601c-4a17-b056-dc991ac616b5 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 20 14:42:21 compute-1 nova_compute[225855]: 2026-01-20 14:42:21.556 225859 DEBUG nova.virt.hardware [None req-1e7cd38a-601c-4a17-b056-dc991ac616b5 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 20 14:42:21 compute-1 nova_compute[225855]: 2026-01-20 14:42:21.556 225859 DEBUG nova.virt.hardware [None req-1e7cd38a-601c-4a17-b056-dc991ac616b5 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 20 14:42:21 compute-1 nova_compute[225855]: 2026-01-20 14:42:21.556 225859 DEBUG nova.virt.hardware [None req-1e7cd38a-601c-4a17-b056-dc991ac616b5 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 20 14:42:21 compute-1 nova_compute[225855]: 2026-01-20 14:42:21.560 225859 DEBUG oslo_concurrency.processutils [None req-1e7cd38a-601c-4a17-b056-dc991ac616b5 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:42:21 compute-1 nova_compute[225855]: 2026-01-20 14:42:21.616 225859 DEBUG nova.objects.instance [None req-812cb0b8-b728-4175-8739-e1b0cc33e188 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Lazy-loading 'pci_requests' on Instance uuid 10349dde-fb60-48ba-bc7b-42180c5eb49e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:42:21 compute-1 nova_compute[225855]: 2026-01-20 14:42:21.668 225859 DEBUG nova.network.neutron [None req-812cb0b8-b728-4175-8739-e1b0cc33e188 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] [instance: 10349dde-fb60-48ba-bc7b-42180c5eb49e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 20 14:42:22 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:42:22 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3110567127' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:42:22 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:42:22 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:42:22 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:42:22.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:42:22 compute-1 ceph-mon[81775]: pgmap v1620: 321 pgs: 321 active+clean; 358 MiB data, 780 MiB used, 20 GiB / 21 GiB avail; 37 KiB/s rd, 4.8 MiB/s wr, 61 op/s
Jan 20 14:42:22 compute-1 nova_compute[225855]: 2026-01-20 14:42:22.296 225859 DEBUG nova.policy [None req-812cb0b8-b728-4175-8739-e1b0cc33e188 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'c8a9fb458d27434495a77a94827b6097', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'e3f93fd4b2154dda9f38e62334904303', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 20 14:42:22 compute-1 nova_compute[225855]: 2026-01-20 14:42:22.656 225859 DEBUG oslo_concurrency.processutils [None req-1e7cd38a-601c-4a17-b056-dc991ac616b5 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.096s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:42:22 compute-1 nova_compute[225855]: 2026-01-20 14:42:22.695 225859 DEBUG nova.storage.rbd_utils [None req-1e7cd38a-601c-4a17-b056-dc991ac616b5 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] rbd image 504acd93-cd55-496e-a85f-30e811f827d4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:42:22 compute-1 nova_compute[225855]: 2026-01-20 14:42:22.700 225859 DEBUG oslo_concurrency.processutils [None req-1e7cd38a-601c-4a17-b056-dc991ac616b5 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:42:23 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/3110567127' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:42:23 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:42:23 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:42:23 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:42:23.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:42:23 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:42:23 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2974363094' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:42:23 compute-1 nova_compute[225855]: 2026-01-20 14:42:23.195 225859 DEBUG oslo_concurrency.processutils [None req-1e7cd38a-601c-4a17-b056-dc991ac616b5 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.495s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:42:23 compute-1 nova_compute[225855]: 2026-01-20 14:42:23.199 225859 DEBUG nova.virt.libvirt.vif [None req-1e7cd38a-601c-4a17-b056-dc991ac616b5 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T14:42:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-1822690739',display_name='tempest-ListServerFiltersTestJSON-instance-1822690739',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-1822690739',id=75,image_ref='26699514-f465-4b50-98b7-36f2cfc6a308',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4b95747114ab4043b93a260387199c91',ramdisk_id='',reservation_id='r-c4vxbkd0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='26699514-f465-4b50-98b7-36f2cfc6a308',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServerFiltersTestJSON-2126845308',owner_user_name='tempest-ListServerFiltersTestJSON-2126845308-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T14:42:13Z,user_data=None,user_id='ff99fc8eda0640928c6e82981dacb266',uuid=504acd93-cd55-496e-a85f-30e811f827d4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "349b1d10-0b06-4025-80fd-4861bd487a43", "address": "fa:16:3e:dd:f6:26", "network": {"id": "b36e9cab-12c6-4a09-9aab-ef2679d875ba", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-432532406-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b95747114ab4043b93a260387199c91", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap349b1d10-0b", "ovs_interfaceid": "349b1d10-0b06-4025-80fd-4861bd487a43", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 20 14:42:23 compute-1 nova_compute[225855]: 2026-01-20 14:42:23.200 225859 DEBUG nova.network.os_vif_util [None req-1e7cd38a-601c-4a17-b056-dc991ac616b5 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] Converting VIF {"id": "349b1d10-0b06-4025-80fd-4861bd487a43", "address": "fa:16:3e:dd:f6:26", "network": {"id": "b36e9cab-12c6-4a09-9aab-ef2679d875ba", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-432532406-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b95747114ab4043b93a260387199c91", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap349b1d10-0b", "ovs_interfaceid": "349b1d10-0b06-4025-80fd-4861bd487a43", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:42:23 compute-1 nova_compute[225855]: 2026-01-20 14:42:23.202 225859 DEBUG nova.network.os_vif_util [None req-1e7cd38a-601c-4a17-b056-dc991ac616b5 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:dd:f6:26,bridge_name='br-int',has_traffic_filtering=True,id=349b1d10-0b06-4025-80fd-4861bd487a43,network=Network(b36e9cab-12c6-4a09-9aab-ef2679d875ba),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap349b1d10-0b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:42:23 compute-1 nova_compute[225855]: 2026-01-20 14:42:23.205 225859 DEBUG nova.objects.instance [None req-1e7cd38a-601c-4a17-b056-dc991ac616b5 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] Lazy-loading 'pci_devices' on Instance uuid 504acd93-cd55-496e-a85f-30e811f827d4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:42:23 compute-1 nova_compute[225855]: 2026-01-20 14:42:23.225 225859 DEBUG nova.virt.libvirt.driver [None req-1e7cd38a-601c-4a17-b056-dc991ac616b5 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] [instance: 504acd93-cd55-496e-a85f-30e811f827d4] End _get_guest_xml xml=<domain type="kvm">
Jan 20 14:42:23 compute-1 nova_compute[225855]:   <uuid>504acd93-cd55-496e-a85f-30e811f827d4</uuid>
Jan 20 14:42:23 compute-1 nova_compute[225855]:   <name>instance-0000004b</name>
Jan 20 14:42:23 compute-1 nova_compute[225855]:   <memory>131072</memory>
Jan 20 14:42:23 compute-1 nova_compute[225855]:   <vcpu>1</vcpu>
Jan 20 14:42:23 compute-1 nova_compute[225855]:   <metadata>
Jan 20 14:42:23 compute-1 nova_compute[225855]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 14:42:23 compute-1 nova_compute[225855]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 14:42:23 compute-1 nova_compute[225855]:       <nova:name>tempest-ListServerFiltersTestJSON-instance-1822690739</nova:name>
Jan 20 14:42:23 compute-1 nova_compute[225855]:       <nova:creationTime>2026-01-20 14:42:21</nova:creationTime>
Jan 20 14:42:23 compute-1 nova_compute[225855]:       <nova:flavor name="m1.nano">
Jan 20 14:42:23 compute-1 nova_compute[225855]:         <nova:memory>128</nova:memory>
Jan 20 14:42:23 compute-1 nova_compute[225855]:         <nova:disk>1</nova:disk>
Jan 20 14:42:23 compute-1 nova_compute[225855]:         <nova:swap>0</nova:swap>
Jan 20 14:42:23 compute-1 nova_compute[225855]:         <nova:ephemeral>0</nova:ephemeral>
Jan 20 14:42:23 compute-1 nova_compute[225855]:         <nova:vcpus>1</nova:vcpus>
Jan 20 14:42:23 compute-1 nova_compute[225855]:       </nova:flavor>
Jan 20 14:42:23 compute-1 nova_compute[225855]:       <nova:owner>
Jan 20 14:42:23 compute-1 nova_compute[225855]:         <nova:user uuid="ff99fc8eda0640928c6e82981dacb266">tempest-ListServerFiltersTestJSON-2126845308-project-member</nova:user>
Jan 20 14:42:23 compute-1 nova_compute[225855]:         <nova:project uuid="4b95747114ab4043b93a260387199c91">tempest-ListServerFiltersTestJSON-2126845308</nova:project>
Jan 20 14:42:23 compute-1 nova_compute[225855]:       </nova:owner>
Jan 20 14:42:23 compute-1 nova_compute[225855]:       <nova:root type="image" uuid="26699514-f465-4b50-98b7-36f2cfc6a308"/>
Jan 20 14:42:23 compute-1 nova_compute[225855]:       <nova:ports>
Jan 20 14:42:23 compute-1 nova_compute[225855]:         <nova:port uuid="349b1d10-0b06-4025-80fd-4861bd487a43">
Jan 20 14:42:23 compute-1 nova_compute[225855]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Jan 20 14:42:23 compute-1 nova_compute[225855]:         </nova:port>
Jan 20 14:42:23 compute-1 nova_compute[225855]:       </nova:ports>
Jan 20 14:42:23 compute-1 nova_compute[225855]:     </nova:instance>
Jan 20 14:42:23 compute-1 nova_compute[225855]:   </metadata>
Jan 20 14:42:23 compute-1 nova_compute[225855]:   <sysinfo type="smbios">
Jan 20 14:42:23 compute-1 nova_compute[225855]:     <system>
Jan 20 14:42:23 compute-1 nova_compute[225855]:       <entry name="manufacturer">RDO</entry>
Jan 20 14:42:23 compute-1 nova_compute[225855]:       <entry name="product">OpenStack Compute</entry>
Jan 20 14:42:23 compute-1 nova_compute[225855]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 14:42:23 compute-1 nova_compute[225855]:       <entry name="serial">504acd93-cd55-496e-a85f-30e811f827d4</entry>
Jan 20 14:42:23 compute-1 nova_compute[225855]:       <entry name="uuid">504acd93-cd55-496e-a85f-30e811f827d4</entry>
Jan 20 14:42:23 compute-1 nova_compute[225855]:       <entry name="family">Virtual Machine</entry>
Jan 20 14:42:23 compute-1 nova_compute[225855]:     </system>
Jan 20 14:42:23 compute-1 nova_compute[225855]:   </sysinfo>
Jan 20 14:42:23 compute-1 nova_compute[225855]:   <os>
Jan 20 14:42:23 compute-1 nova_compute[225855]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 20 14:42:23 compute-1 nova_compute[225855]:     <boot dev="hd"/>
Jan 20 14:42:23 compute-1 nova_compute[225855]:     <smbios mode="sysinfo"/>
Jan 20 14:42:23 compute-1 nova_compute[225855]:   </os>
Jan 20 14:42:23 compute-1 nova_compute[225855]:   <features>
Jan 20 14:42:23 compute-1 nova_compute[225855]:     <acpi/>
Jan 20 14:42:23 compute-1 nova_compute[225855]:     <apic/>
Jan 20 14:42:23 compute-1 nova_compute[225855]:     <vmcoreinfo/>
Jan 20 14:42:23 compute-1 nova_compute[225855]:   </features>
Jan 20 14:42:23 compute-1 nova_compute[225855]:   <clock offset="utc">
Jan 20 14:42:23 compute-1 nova_compute[225855]:     <timer name="pit" tickpolicy="delay"/>
Jan 20 14:42:23 compute-1 nova_compute[225855]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 20 14:42:23 compute-1 nova_compute[225855]:     <timer name="hpet" present="no"/>
Jan 20 14:42:23 compute-1 nova_compute[225855]:   </clock>
Jan 20 14:42:23 compute-1 nova_compute[225855]:   <cpu mode="custom" match="exact">
Jan 20 14:42:23 compute-1 nova_compute[225855]:     <model>Nehalem</model>
Jan 20 14:42:23 compute-1 nova_compute[225855]:     <topology sockets="1" cores="1" threads="1"/>
Jan 20 14:42:23 compute-1 nova_compute[225855]:   </cpu>
Jan 20 14:42:23 compute-1 nova_compute[225855]:   <devices>
Jan 20 14:42:23 compute-1 nova_compute[225855]:     <disk type="network" device="disk">
Jan 20 14:42:23 compute-1 nova_compute[225855]:       <driver type="raw" cache="none"/>
Jan 20 14:42:23 compute-1 nova_compute[225855]:       <source protocol="rbd" name="vms/504acd93-cd55-496e-a85f-30e811f827d4_disk">
Jan 20 14:42:23 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:42:23 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:42:23 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:42:23 compute-1 nova_compute[225855]:       </source>
Jan 20 14:42:23 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 14:42:23 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:42:23 compute-1 nova_compute[225855]:       </auth>
Jan 20 14:42:23 compute-1 nova_compute[225855]:       <target dev="vda" bus="virtio"/>
Jan 20 14:42:23 compute-1 nova_compute[225855]:     </disk>
Jan 20 14:42:23 compute-1 nova_compute[225855]:     <disk type="network" device="cdrom">
Jan 20 14:42:23 compute-1 nova_compute[225855]:       <driver type="raw" cache="none"/>
Jan 20 14:42:23 compute-1 nova_compute[225855]:       <source protocol="rbd" name="vms/504acd93-cd55-496e-a85f-30e811f827d4_disk.config">
Jan 20 14:42:23 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:42:23 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:42:23 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:42:23 compute-1 nova_compute[225855]:       </source>
Jan 20 14:42:23 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 14:42:23 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:42:23 compute-1 nova_compute[225855]:       </auth>
Jan 20 14:42:23 compute-1 nova_compute[225855]:       <target dev="sda" bus="sata"/>
Jan 20 14:42:23 compute-1 nova_compute[225855]:     </disk>
Jan 20 14:42:23 compute-1 nova_compute[225855]:     <interface type="ethernet">
Jan 20 14:42:23 compute-1 nova_compute[225855]:       <mac address="fa:16:3e:dd:f6:26"/>
Jan 20 14:42:23 compute-1 nova_compute[225855]:       <model type="virtio"/>
Jan 20 14:42:23 compute-1 nova_compute[225855]:       <driver name="vhost" rx_queue_size="512"/>
Jan 20 14:42:23 compute-1 nova_compute[225855]:       <mtu size="1442"/>
Jan 20 14:42:23 compute-1 nova_compute[225855]:       <target dev="tap349b1d10-0b"/>
Jan 20 14:42:23 compute-1 nova_compute[225855]:     </interface>
Jan 20 14:42:23 compute-1 nova_compute[225855]:     <serial type="pty">
Jan 20 14:42:23 compute-1 nova_compute[225855]:       <log file="/var/lib/nova/instances/504acd93-cd55-496e-a85f-30e811f827d4/console.log" append="off"/>
Jan 20 14:42:23 compute-1 nova_compute[225855]:     </serial>
Jan 20 14:42:23 compute-1 nova_compute[225855]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 20 14:42:23 compute-1 nova_compute[225855]:     <video>
Jan 20 14:42:23 compute-1 nova_compute[225855]:       <model type="virtio"/>
Jan 20 14:42:23 compute-1 nova_compute[225855]:     </video>
Jan 20 14:42:23 compute-1 nova_compute[225855]:     <input type="tablet" bus="usb"/>
Jan 20 14:42:23 compute-1 nova_compute[225855]:     <rng model="virtio">
Jan 20 14:42:23 compute-1 nova_compute[225855]:       <backend model="random">/dev/urandom</backend>
Jan 20 14:42:23 compute-1 nova_compute[225855]:     </rng>
Jan 20 14:42:23 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root"/>
Jan 20 14:42:23 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:42:23 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:42:23 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:42:23 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:42:23 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:42:23 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:42:23 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:42:23 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:42:23 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:42:23 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:42:23 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:42:23 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:42:23 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:42:23 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:42:23 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:42:23 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:42:23 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:42:23 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:42:23 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:42:23 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:42:23 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:42:23 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:42:23 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:42:23 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:42:23 compute-1 nova_compute[225855]:     <controller type="usb" index="0"/>
Jan 20 14:42:23 compute-1 nova_compute[225855]:     <memballoon model="virtio">
Jan 20 14:42:23 compute-1 nova_compute[225855]:       <stats period="10"/>
Jan 20 14:42:23 compute-1 nova_compute[225855]:     </memballoon>
Jan 20 14:42:23 compute-1 nova_compute[225855]:   </devices>
Jan 20 14:42:23 compute-1 nova_compute[225855]: </domain>
Jan 20 14:42:23 compute-1 nova_compute[225855]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 20 14:42:23 compute-1 nova_compute[225855]: 2026-01-20 14:42:23.226 225859 DEBUG nova.compute.manager [None req-1e7cd38a-601c-4a17-b056-dc991ac616b5 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] [instance: 504acd93-cd55-496e-a85f-30e811f827d4] Preparing to wait for external event network-vif-plugged-349b1d10-0b06-4025-80fd-4861bd487a43 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 20 14:42:23 compute-1 nova_compute[225855]: 2026-01-20 14:42:23.227 225859 DEBUG oslo_concurrency.lockutils [None req-1e7cd38a-601c-4a17-b056-dc991ac616b5 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] Acquiring lock "504acd93-cd55-496e-a85f-30e811f827d4-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:42:23 compute-1 nova_compute[225855]: 2026-01-20 14:42:23.227 225859 DEBUG oslo_concurrency.lockutils [None req-1e7cd38a-601c-4a17-b056-dc991ac616b5 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] Lock "504acd93-cd55-496e-a85f-30e811f827d4-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:42:23 compute-1 nova_compute[225855]: 2026-01-20 14:42:23.228 225859 DEBUG oslo_concurrency.lockutils [None req-1e7cd38a-601c-4a17-b056-dc991ac616b5 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] Lock "504acd93-cd55-496e-a85f-30e811f827d4-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:42:23 compute-1 nova_compute[225855]: 2026-01-20 14:42:23.228 225859 DEBUG nova.virt.libvirt.vif [None req-1e7cd38a-601c-4a17-b056-dc991ac616b5 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T14:42:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-1822690739',display_name='tempest-ListServerFiltersTestJSON-instance-1822690739',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-1822690739',id=75,image_ref='26699514-f465-4b50-98b7-36f2cfc6a308',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4b95747114ab4043b93a260387199c91',ramdisk_id='',reservation_id='r-c4vxbkd0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='26699514-f465-4b50-98b7-36f2cfc6a308',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServerFiltersTestJSON-2126845308',owner_user_name='tempest-ListServerFiltersTestJSON-2126845308-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T14:42:13Z,user_data=None,user_id='ff99fc8eda0640928c6e82981dacb266',uuid=504acd93-cd55-496e-a85f-30e811f827d4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "349b1d10-0b06-4025-80fd-4861bd487a43", "address": "fa:16:3e:dd:f6:26", "network": {"id": "b36e9cab-12c6-4a09-9aab-ef2679d875ba", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-432532406-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b95747114ab4043b93a260387199c91", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap349b1d10-0b", "ovs_interfaceid": "349b1d10-0b06-4025-80fd-4861bd487a43", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 20 14:42:23 compute-1 nova_compute[225855]: 2026-01-20 14:42:23.229 225859 DEBUG nova.network.os_vif_util [None req-1e7cd38a-601c-4a17-b056-dc991ac616b5 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] Converting VIF {"id": "349b1d10-0b06-4025-80fd-4861bd487a43", "address": "fa:16:3e:dd:f6:26", "network": {"id": "b36e9cab-12c6-4a09-9aab-ef2679d875ba", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-432532406-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b95747114ab4043b93a260387199c91", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap349b1d10-0b", "ovs_interfaceid": "349b1d10-0b06-4025-80fd-4861bd487a43", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:42:23 compute-1 nova_compute[225855]: 2026-01-20 14:42:23.230 225859 DEBUG nova.network.os_vif_util [None req-1e7cd38a-601c-4a17-b056-dc991ac616b5 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:dd:f6:26,bridge_name='br-int',has_traffic_filtering=True,id=349b1d10-0b06-4025-80fd-4861bd487a43,network=Network(b36e9cab-12c6-4a09-9aab-ef2679d875ba),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap349b1d10-0b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:42:23 compute-1 nova_compute[225855]: 2026-01-20 14:42:23.230 225859 DEBUG os_vif [None req-1e7cd38a-601c-4a17-b056-dc991ac616b5 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:dd:f6:26,bridge_name='br-int',has_traffic_filtering=True,id=349b1d10-0b06-4025-80fd-4861bd487a43,network=Network(b36e9cab-12c6-4a09-9aab-ef2679d875ba),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap349b1d10-0b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 20 14:42:23 compute-1 nova_compute[225855]: 2026-01-20 14:42:23.231 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:42:23 compute-1 nova_compute[225855]: 2026-01-20 14:42:23.231 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:42:23 compute-1 nova_compute[225855]: 2026-01-20 14:42:23.232 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:42:23 compute-1 nova_compute[225855]: 2026-01-20 14:42:23.236 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:42:23 compute-1 nova_compute[225855]: 2026-01-20 14:42:23.236 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap349b1d10-0b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:42:23 compute-1 nova_compute[225855]: 2026-01-20 14:42:23.237 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap349b1d10-0b, col_values=(('external_ids', {'iface-id': '349b1d10-0b06-4025-80fd-4861bd487a43', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:dd:f6:26', 'vm-uuid': '504acd93-cd55-496e-a85f-30e811f827d4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:42:23 compute-1 nova_compute[225855]: 2026-01-20 14:42:23.266 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:42:23 compute-1 NetworkManager[49104]: <info>  [1768920143.2686] manager: (tap349b1d10-0b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/111)
Jan 20 14:42:23 compute-1 nova_compute[225855]: 2026-01-20 14:42:23.268 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 14:42:23 compute-1 nova_compute[225855]: 2026-01-20 14:42:23.277 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:42:23 compute-1 nova_compute[225855]: 2026-01-20 14:42:23.278 225859 INFO os_vif [None req-1e7cd38a-601c-4a17-b056-dc991ac616b5 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:dd:f6:26,bridge_name='br-int',has_traffic_filtering=True,id=349b1d10-0b06-4025-80fd-4861bd487a43,network=Network(b36e9cab-12c6-4a09-9aab-ef2679d875ba),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap349b1d10-0b')
Jan 20 14:42:23 compute-1 nova_compute[225855]: 2026-01-20 14:42:23.351 225859 DEBUG nova.virt.libvirt.driver [None req-1e7cd38a-601c-4a17-b056-dc991ac616b5 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:42:23 compute-1 nova_compute[225855]: 2026-01-20 14:42:23.352 225859 DEBUG nova.virt.libvirt.driver [None req-1e7cd38a-601c-4a17-b056-dc991ac616b5 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:42:23 compute-1 nova_compute[225855]: 2026-01-20 14:42:23.352 225859 DEBUG nova.virt.libvirt.driver [None req-1e7cd38a-601c-4a17-b056-dc991ac616b5 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] No VIF found with MAC fa:16:3e:dd:f6:26, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 20 14:42:23 compute-1 nova_compute[225855]: 2026-01-20 14:42:23.353 225859 INFO nova.virt.libvirt.driver [None req-1e7cd38a-601c-4a17-b056-dc991ac616b5 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] [instance: 504acd93-cd55-496e-a85f-30e811f827d4] Using config drive
Jan 20 14:42:23 compute-1 nova_compute[225855]: 2026-01-20 14:42:23.389 225859 DEBUG nova.storage.rbd_utils [None req-1e7cd38a-601c-4a17-b056-dc991ac616b5 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] rbd image 504acd93-cd55-496e-a85f-30e811f827d4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:42:23 compute-1 nova_compute[225855]: 2026-01-20 14:42:23.394 225859 DEBUG nova.network.neutron [None req-812cb0b8-b728-4175-8739-e1b0cc33e188 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] [instance: 10349dde-fb60-48ba-bc7b-42180c5eb49e] Successfully updated port: e2648ead-7162-4661-94e1-755faa8f1fd1 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 20 14:42:23 compute-1 nova_compute[225855]: 2026-01-20 14:42:23.430 225859 DEBUG oslo_concurrency.lockutils [None req-812cb0b8-b728-4175-8739-e1b0cc33e188 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Acquiring lock "refresh_cache-10349dde-fb60-48ba-bc7b-42180c5eb49e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:42:23 compute-1 nova_compute[225855]: 2026-01-20 14:42:23.583 225859 DEBUG nova.compute.manager [req-928ae490-427b-4f76-924a-e4181bf9d70a req-c68095fc-99a6-4de7-939c-cba34d8736e1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 10349dde-fb60-48ba-bc7b-42180c5eb49e] Received event network-changed-e2648ead-7162-4661-94e1-755faa8f1fd1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:42:23 compute-1 nova_compute[225855]: 2026-01-20 14:42:23.584 225859 DEBUG nova.compute.manager [req-928ae490-427b-4f76-924a-e4181bf9d70a req-c68095fc-99a6-4de7-939c-cba34d8736e1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 10349dde-fb60-48ba-bc7b-42180c5eb49e] Refreshing instance network info cache due to event network-changed-e2648ead-7162-4661-94e1-755faa8f1fd1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 14:42:23 compute-1 nova_compute[225855]: 2026-01-20 14:42:23.584 225859 DEBUG oslo_concurrency.lockutils [req-928ae490-427b-4f76-924a-e4181bf9d70a req-c68095fc-99a6-4de7-939c-cba34d8736e1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-10349dde-fb60-48ba-bc7b-42180c5eb49e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:42:23 compute-1 nova_compute[225855]: 2026-01-20 14:42:23.686 225859 DEBUG nova.network.neutron [req-c34e0831-1ec5-43e2-b737-614d371dffee req-b9091455-90df-4eab-a890-3071ac3f3f9a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 10349dde-fb60-48ba-bc7b-42180c5eb49e] Updated VIF entry in instance network info cache for port 607e59a4-2a6b-424a-9413-be318079781e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 14:42:23 compute-1 nova_compute[225855]: 2026-01-20 14:42:23.686 225859 DEBUG nova.network.neutron [req-c34e0831-1ec5-43e2-b737-614d371dffee req-b9091455-90df-4eab-a890-3071ac3f3f9a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 10349dde-fb60-48ba-bc7b-42180c5eb49e] Updating instance_info_cache with network_info: [{"id": "607e59a4-2a6b-424a-9413-be318079781e", "address": "fa:16:3e:a3:13:21", "network": {"id": "fc21b99b-4e34-422c-be05-0a440009dac4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-808285772-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.203", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3f93fd4b2154dda9f38e62334904303", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap607e59a4-2a", "ovs_interfaceid": "607e59a4-2a6b-424a-9413-be318079781e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:42:23 compute-1 nova_compute[225855]: 2026-01-20 14:42:23.722 225859 DEBUG oslo_concurrency.lockutils [req-c34e0831-1ec5-43e2-b737-614d371dffee req-b9091455-90df-4eab-a890-3071ac3f3f9a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-10349dde-fb60-48ba-bc7b-42180c5eb49e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:42:23 compute-1 nova_compute[225855]: 2026-01-20 14:42:23.723 225859 DEBUG nova.compute.manager [req-c34e0831-1ec5-43e2-b737-614d371dffee req-b9091455-90df-4eab-a890-3071ac3f3f9a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 504acd93-cd55-496e-a85f-30e811f827d4] Received event network-changed-349b1d10-0b06-4025-80fd-4861bd487a43 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:42:23 compute-1 nova_compute[225855]: 2026-01-20 14:42:23.723 225859 DEBUG nova.compute.manager [req-c34e0831-1ec5-43e2-b737-614d371dffee req-b9091455-90df-4eab-a890-3071ac3f3f9a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 504acd93-cd55-496e-a85f-30e811f827d4] Refreshing instance network info cache due to event network-changed-349b1d10-0b06-4025-80fd-4861bd487a43. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 14:42:23 compute-1 nova_compute[225855]: 2026-01-20 14:42:23.724 225859 DEBUG oslo_concurrency.lockutils [req-c34e0831-1ec5-43e2-b737-614d371dffee req-b9091455-90df-4eab-a890-3071ac3f3f9a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-504acd93-cd55-496e-a85f-30e811f827d4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:42:23 compute-1 nova_compute[225855]: 2026-01-20 14:42:23.724 225859 DEBUG oslo_concurrency.lockutils [req-c34e0831-1ec5-43e2-b737-614d371dffee req-b9091455-90df-4eab-a890-3071ac3f3f9a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-504acd93-cd55-496e-a85f-30e811f827d4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:42:23 compute-1 nova_compute[225855]: 2026-01-20 14:42:23.724 225859 DEBUG nova.network.neutron [req-c34e0831-1ec5-43e2-b737-614d371dffee req-b9091455-90df-4eab-a890-3071ac3f3f9a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 504acd93-cd55-496e-a85f-30e811f827d4] Refreshing network info cache for port 349b1d10-0b06-4025-80fd-4861bd487a43 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 14:42:23 compute-1 nova_compute[225855]: 2026-01-20 14:42:23.726 225859 DEBUG oslo_concurrency.lockutils [None req-812cb0b8-b728-4175-8739-e1b0cc33e188 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Acquired lock "refresh_cache-10349dde-fb60-48ba-bc7b-42180c5eb49e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:42:23 compute-1 nova_compute[225855]: 2026-01-20 14:42:23.726 225859 DEBUG nova.network.neutron [None req-812cb0b8-b728-4175-8739-e1b0cc33e188 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] [instance: 10349dde-fb60-48ba-bc7b-42180c5eb49e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 14:42:24 compute-1 podman[255319]: 2026-01-20 14:42:24.0927341 +0000 UTC m=+0.127362299 container health_status 72be95c12e041eb2cd1a16ad70dfa1391d9f03a5f3180a614ead9459dcfcd46f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Jan 20 14:42:24 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:42:24 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:42:24 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:42:24.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:42:24 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/2974363094' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:42:24 compute-1 ceph-mon[81775]: pgmap v1621: 321 pgs: 321 active+clean; 381 MiB data, 794 MiB used, 20 GiB / 21 GiB avail; 52 KiB/s rd, 4.8 MiB/s wr, 79 op/s
Jan 20 14:42:24 compute-1 nova_compute[225855]: 2026-01-20 14:42:24.458 225859 INFO nova.virt.libvirt.driver [None req-1e7cd38a-601c-4a17-b056-dc991ac616b5 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] [instance: 504acd93-cd55-496e-a85f-30e811f827d4] Creating config drive at /var/lib/nova/instances/504acd93-cd55-496e-a85f-30e811f827d4/disk.config
Jan 20 14:42:24 compute-1 nova_compute[225855]: 2026-01-20 14:42:24.471 225859 DEBUG oslo_concurrency.processutils [None req-1e7cd38a-601c-4a17-b056-dc991ac616b5 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/504acd93-cd55-496e-a85f-30e811f827d4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpewld6c31 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:42:24 compute-1 nova_compute[225855]: 2026-01-20 14:42:24.561 225859 WARNING nova.network.neutron [None req-812cb0b8-b728-4175-8739-e1b0cc33e188 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] [instance: 10349dde-fb60-48ba-bc7b-42180c5eb49e] fc21b99b-4e34-422c-be05-0a440009dac4 already exists in list: networks containing: ['fc21b99b-4e34-422c-be05-0a440009dac4']. ignoring it
Jan 20 14:42:24 compute-1 nova_compute[225855]: 2026-01-20 14:42:24.615 225859 DEBUG oslo_concurrency.processutils [None req-1e7cd38a-601c-4a17-b056-dc991ac616b5 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/504acd93-cd55-496e-a85f-30e811f827d4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpewld6c31" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:42:24 compute-1 nova_compute[225855]: 2026-01-20 14:42:24.637 225859 DEBUG nova.storage.rbd_utils [None req-1e7cd38a-601c-4a17-b056-dc991ac616b5 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] rbd image 504acd93-cd55-496e-a85f-30e811f827d4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:42:24 compute-1 nova_compute[225855]: 2026-01-20 14:42:24.640 225859 DEBUG oslo_concurrency.processutils [None req-1e7cd38a-601c-4a17-b056-dc991ac616b5 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/504acd93-cd55-496e-a85f-30e811f827d4/disk.config 504acd93-cd55-496e-a85f-30e811f827d4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:42:24 compute-1 nova_compute[225855]: 2026-01-20 14:42:24.822 225859 DEBUG oslo_concurrency.processutils [None req-1e7cd38a-601c-4a17-b056-dc991ac616b5 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/504acd93-cd55-496e-a85f-30e811f827d4/disk.config 504acd93-cd55-496e-a85f-30e811f827d4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.182s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:42:24 compute-1 nova_compute[225855]: 2026-01-20 14:42:24.823 225859 INFO nova.virt.libvirt.driver [None req-1e7cd38a-601c-4a17-b056-dc991ac616b5 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] [instance: 504acd93-cd55-496e-a85f-30e811f827d4] Deleting local config drive /var/lib/nova/instances/504acd93-cd55-496e-a85f-30e811f827d4/disk.config because it was imported into RBD.
Jan 20 14:42:24 compute-1 NetworkManager[49104]: <info>  [1768920144.9144] manager: (tap349b1d10-0b): new Tun device (/org/freedesktop/NetworkManager/Devices/112)
Jan 20 14:42:24 compute-1 kernel: tap349b1d10-0b: entered promiscuous mode
Jan 20 14:42:24 compute-1 ovn_controller[130490]: 2026-01-20T14:42:24Z|00252|binding|INFO|Claiming lport 349b1d10-0b06-4025-80fd-4861bd487a43 for this chassis.
Jan 20 14:42:24 compute-1 ovn_controller[130490]: 2026-01-20T14:42:24Z|00253|binding|INFO|349b1d10-0b06-4025-80fd-4861bd487a43: Claiming fa:16:3e:dd:f6:26 10.100.0.12
Jan 20 14:42:24 compute-1 nova_compute[225855]: 2026-01-20 14:42:24.919 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:42:24 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:42:24.942 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:dd:f6:26 10.100.0.12'], port_security=['fa:16:3e:dd:f6:26 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '504acd93-cd55-496e-a85f-30e811f827d4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b36e9cab-12c6-4a09-9aab-ef2679d875ba', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4b95747114ab4043b93a260387199c91', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f18b0222-78a5-4c37-8065-772dbe5c63e1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=80e2aa5b-ecb8-4e93-992f-baaef718dd34, chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], logical_port=349b1d10-0b06-4025-80fd-4861bd487a43) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:42:24 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:42:24.944 140354 INFO neutron.agent.ovn.metadata.agent [-] Port 349b1d10-0b06-4025-80fd-4861bd487a43 in datapath b36e9cab-12c6-4a09-9aab-ef2679d875ba bound to our chassis
Jan 20 14:42:24 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:42:24.947 140354 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b36e9cab-12c6-4a09-9aab-ef2679d875ba
Jan 20 14:42:24 compute-1 systemd-udevd[255397]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 14:42:24 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:42:24.962 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[44cfa2d4-bfce-49f9-8cd0-207943d2c51d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:42:24 compute-1 systemd-machined[194361]: New machine qemu-32-instance-0000004b.
Jan 20 14:42:24 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:42:24.963 140354 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapb36e9cab-11 in ovnmeta-b36e9cab-12c6-4a09-9aab-ef2679d875ba namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 20 14:42:24 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:42:24.965 229707 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapb36e9cab-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 20 14:42:24 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:42:24.965 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[41710b3a-3124-40df-abbb-8b9317aba798]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:42:24 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:42:24.967 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[b82929e9-21b0-4eb2-bbba-2607c0392987]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:42:24 compute-1 NetworkManager[49104]: <info>  [1768920144.9709] device (tap349b1d10-0b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 20 14:42:24 compute-1 NetworkManager[49104]: <info>  [1768920144.9727] device (tap349b1d10-0b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 20 14:42:24 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:42:24.981 140466 DEBUG oslo.privsep.daemon [-] privsep: reply[fe3409dc-2daf-4b41-812c-484910523533]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:42:24 compute-1 nova_compute[225855]: 2026-01-20 14:42:24.993 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:42:24 compute-1 systemd[1]: Started Virtual Machine qemu-32-instance-0000004b.
Jan 20 14:42:24 compute-1 ovn_controller[130490]: 2026-01-20T14:42:24Z|00254|binding|INFO|Setting lport 349b1d10-0b06-4025-80fd-4861bd487a43 ovn-installed in OVS
Jan 20 14:42:24 compute-1 ovn_controller[130490]: 2026-01-20T14:42:24Z|00255|binding|INFO|Setting lport 349b1d10-0b06-4025-80fd-4861bd487a43 up in Southbound
Jan 20 14:42:24 compute-1 nova_compute[225855]: 2026-01-20 14:42:24.997 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:42:25 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:42:25.012 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[95c358e1-4236-4f1e-a76a-8da666ef1d24]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:42:25 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:42:25.047 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[953adeab-517d-41a3-8f4a-3dcd5654624f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:42:25 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:42:25.052 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[5eca8228-8af4-4843-a94a-aa224381f014]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:42:25 compute-1 NetworkManager[49104]: <info>  [1768920145.0552] manager: (tapb36e9cab-10): new Veth device (/org/freedesktop/NetworkManager/Devices/113)
Jan 20 14:42:25 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:42:25.089 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[d6ed53ac-c29d-453c-b5a9-b0f8edc0269d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:42:25 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:42:25.092 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[72e7d6d7-31d5-48d8-b051-c05d105ae2b6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:42:25 compute-1 NetworkManager[49104]: <info>  [1768920145.1139] device (tapb36e9cab-10): carrier: link connected
Jan 20 14:42:25 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:42:25.119 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[7426154a-f993-427c-b1b5-57c2c90371fe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:42:25 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:42:25.136 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[42f0fdc4-7273-48c1-9010-9444864ef779]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb36e9cab-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:08:c2:52'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 68], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 518404, 'reachable_time': 39895, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 255431, 'error': None, 'target': 'ovnmeta-b36e9cab-12c6-4a09-9aab-ef2679d875ba', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:42:25 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:42:25.150 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[68fdc8ed-6940-4f9e-96e3-f004dc7cfe2e]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe08:c252'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 518404, 'tstamp': 518404}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 255432, 'error': None, 'target': 'ovnmeta-b36e9cab-12c6-4a09-9aab-ef2679d875ba', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:42:25 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:42:25 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:42:25 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:42:25.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:42:25 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:42:25.165 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[9bc7c049-5b7f-4f3f-80ad-8aa677265c65]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb36e9cab-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:08:c2:52'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 68], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 518404, 'reachable_time': 39895, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 255433, 'error': None, 'target': 'ovnmeta-b36e9cab-12c6-4a09-9aab-ef2679d875ba', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:42:25 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:42:25.196 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[a30f05df-6ca1-41a9-8b4a-595983bfaf2a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:42:25 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:42:25.244 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[3e4166bb-5436-4523-8bc1-533ddeae11c1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:42:25 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:42:25.245 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb36e9cab-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:42:25 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:42:25.245 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:42:25 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:42:25.246 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb36e9cab-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:42:25 compute-1 kernel: tapb36e9cab-10: entered promiscuous mode
Jan 20 14:42:25 compute-1 NetworkManager[49104]: <info>  [1768920145.2480] manager: (tapb36e9cab-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/114)
Jan 20 14:42:25 compute-1 nova_compute[225855]: 2026-01-20 14:42:25.247 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:42:25 compute-1 nova_compute[225855]: 2026-01-20 14:42:25.249 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:42:25 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:42:25.251 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb36e9cab-10, col_values=(('external_ids', {'iface-id': '5dcae274-b8f4-440a-a3eb-5c1a5a044346'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:42:25 compute-1 ovn_controller[130490]: 2026-01-20T14:42:25Z|00256|binding|INFO|Releasing lport 5dcae274-b8f4-440a-a3eb-5c1a5a044346 from this chassis (sb_readonly=0)
Jan 20 14:42:25 compute-1 nova_compute[225855]: 2026-01-20 14:42:25.252 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:42:25 compute-1 nova_compute[225855]: 2026-01-20 14:42:25.253 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:42:25 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:42:25.253 140354 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/b36e9cab-12c6-4a09-9aab-ef2679d875ba.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/b36e9cab-12c6-4a09-9aab-ef2679d875ba.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 20 14:42:25 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:42:25.254 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[e4090cd3-7e84-4fad-aad4-3a961c2be2dc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:42:25 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:42:25.255 140354 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 20 14:42:25 compute-1 ovn_metadata_agent[140349]: global
Jan 20 14:42:25 compute-1 ovn_metadata_agent[140349]:     log         /dev/log local0 debug
Jan 20 14:42:25 compute-1 ovn_metadata_agent[140349]:     log-tag     haproxy-metadata-proxy-b36e9cab-12c6-4a09-9aab-ef2679d875ba
Jan 20 14:42:25 compute-1 ovn_metadata_agent[140349]:     user        root
Jan 20 14:42:25 compute-1 ovn_metadata_agent[140349]:     group       root
Jan 20 14:42:25 compute-1 ovn_metadata_agent[140349]:     maxconn     1024
Jan 20 14:42:25 compute-1 ovn_metadata_agent[140349]:     pidfile     /var/lib/neutron/external/pids/b36e9cab-12c6-4a09-9aab-ef2679d875ba.pid.haproxy
Jan 20 14:42:25 compute-1 ovn_metadata_agent[140349]:     daemon
Jan 20 14:42:25 compute-1 ovn_metadata_agent[140349]: 
Jan 20 14:42:25 compute-1 ovn_metadata_agent[140349]: defaults
Jan 20 14:42:25 compute-1 ovn_metadata_agent[140349]:     log global
Jan 20 14:42:25 compute-1 ovn_metadata_agent[140349]:     mode http
Jan 20 14:42:25 compute-1 ovn_metadata_agent[140349]:     option httplog
Jan 20 14:42:25 compute-1 ovn_metadata_agent[140349]:     option dontlognull
Jan 20 14:42:25 compute-1 ovn_metadata_agent[140349]:     option http-server-close
Jan 20 14:42:25 compute-1 ovn_metadata_agent[140349]:     option forwardfor
Jan 20 14:42:25 compute-1 ovn_metadata_agent[140349]:     retries                 3
Jan 20 14:42:25 compute-1 ovn_metadata_agent[140349]:     timeout http-request    30s
Jan 20 14:42:25 compute-1 ovn_metadata_agent[140349]:     timeout connect         30s
Jan 20 14:42:25 compute-1 ovn_metadata_agent[140349]:     timeout client          32s
Jan 20 14:42:25 compute-1 ovn_metadata_agent[140349]:     timeout server          32s
Jan 20 14:42:25 compute-1 ovn_metadata_agent[140349]:     timeout http-keep-alive 30s
Jan 20 14:42:25 compute-1 ovn_metadata_agent[140349]: 
Jan 20 14:42:25 compute-1 ovn_metadata_agent[140349]: 
Jan 20 14:42:25 compute-1 ovn_metadata_agent[140349]: listen listener
Jan 20 14:42:25 compute-1 ovn_metadata_agent[140349]:     bind 169.254.169.254:80
Jan 20 14:42:25 compute-1 ovn_metadata_agent[140349]:     server metadata /var/lib/neutron/metadata_proxy
Jan 20 14:42:25 compute-1 ovn_metadata_agent[140349]:     http-request add-header X-OVN-Network-ID b36e9cab-12c6-4a09-9aab-ef2679d875ba
Jan 20 14:42:25 compute-1 ovn_metadata_agent[140349]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 20 14:42:25 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:42:25.255 140354 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-b36e9cab-12c6-4a09-9aab-ef2679d875ba', 'env', 'PROCESS_TAG=haproxy-b36e9cab-12c6-4a09-9aab-ef2679d875ba', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/b36e9cab-12c6-4a09-9aab-ef2679d875ba.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 20 14:42:25 compute-1 nova_compute[225855]: 2026-01-20 14:42:25.264 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:42:25 compute-1 nova_compute[225855]: 2026-01-20 14:42:25.281 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:42:25 compute-1 nova_compute[225855]: 2026-01-20 14:42:25.587 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768920145.5864213, 504acd93-cd55-496e-a85f-30e811f827d4 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:42:25 compute-1 nova_compute[225855]: 2026-01-20 14:42:25.587 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 504acd93-cd55-496e-a85f-30e811f827d4] VM Started (Lifecycle Event)
Jan 20 14:42:25 compute-1 podman[255508]: 2026-01-20 14:42:25.622121924 +0000 UTC m=+0.046303762 container create a0f03681d47f02379595b6b5e8bf0975841c981497c7672805924261893db805 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b36e9cab-12c6-4a09-9aab-ef2679d875ba, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0)
Jan 20 14:42:25 compute-1 nova_compute[225855]: 2026-01-20 14:42:25.630 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 504acd93-cd55-496e-a85f-30e811f827d4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:42:25 compute-1 nova_compute[225855]: 2026-01-20 14:42:25.634 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768920145.5874617, 504acd93-cd55-496e-a85f-30e811f827d4 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:42:25 compute-1 nova_compute[225855]: 2026-01-20 14:42:25.635 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 504acd93-cd55-496e-a85f-30e811f827d4] VM Paused (Lifecycle Event)
Jan 20 14:42:25 compute-1 nova_compute[225855]: 2026-01-20 14:42:25.661 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 504acd93-cd55-496e-a85f-30e811f827d4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:42:25 compute-1 nova_compute[225855]: 2026-01-20 14:42:25.663 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 504acd93-cd55-496e-a85f-30e811f827d4] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:42:25 compute-1 systemd[1]: Started libpod-conmon-a0f03681d47f02379595b6b5e8bf0975841c981497c7672805924261893db805.scope.
Jan 20 14:42:25 compute-1 nova_compute[225855]: 2026-01-20 14:42:25.693 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 504acd93-cd55-496e-a85f-30e811f827d4] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 14:42:25 compute-1 podman[255508]: 2026-01-20 14:42:25.59868322 +0000 UTC m=+0.022865078 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 20 14:42:25 compute-1 systemd[1]: Started libcrun container.
Jan 20 14:42:25 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd3ad48162a1b4dec3cf75f42139906849c8a6a6a6b10f13149b76909d80e15f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 20 14:42:25 compute-1 podman[255508]: 2026-01-20 14:42:25.711693951 +0000 UTC m=+0.135875819 container init a0f03681d47f02379595b6b5e8bf0975841c981497c7672805924261893db805 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b36e9cab-12c6-4a09-9aab-ef2679d875ba, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 20 14:42:25 compute-1 podman[255508]: 2026-01-20 14:42:25.716903169 +0000 UTC m=+0.141084997 container start a0f03681d47f02379595b6b5e8bf0975841c981497c7672805924261893db805 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b36e9cab-12c6-4a09-9aab-ef2679d875ba, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 20 14:42:25 compute-1 neutron-haproxy-ovnmeta-b36e9cab-12c6-4a09-9aab-ef2679d875ba[255521]: [NOTICE]   (255528) : New worker (255530) forked
Jan 20 14:42:25 compute-1 neutron-haproxy-ovnmeta-b36e9cab-12c6-4a09-9aab-ef2679d875ba[255521]: [NOTICE]   (255528) : Loading success.
Jan 20 14:42:25 compute-1 ceph-mon[81775]: pgmap v1622: 321 pgs: 321 active+clean; 386 MiB data, 795 MiB used, 20 GiB / 21 GiB avail; 561 KiB/s rd, 4.1 MiB/s wr, 103 op/s
Jan 20 14:42:26 compute-1 nova_compute[225855]: 2026-01-20 14:42:26.001 225859 DEBUG nova.compute.manager [req-8ef92bfb-02d7-498f-8926-44bb7d094b03 req-78a51f7d-ef54-4f32-9f98-3b92d1220ed7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 504acd93-cd55-496e-a85f-30e811f827d4] Received event network-vif-plugged-349b1d10-0b06-4025-80fd-4861bd487a43 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:42:26 compute-1 nova_compute[225855]: 2026-01-20 14:42:26.002 225859 DEBUG oslo_concurrency.lockutils [req-8ef92bfb-02d7-498f-8926-44bb7d094b03 req-78a51f7d-ef54-4f32-9f98-3b92d1220ed7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "504acd93-cd55-496e-a85f-30e811f827d4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:42:26 compute-1 nova_compute[225855]: 2026-01-20 14:42:26.002 225859 DEBUG oslo_concurrency.lockutils [req-8ef92bfb-02d7-498f-8926-44bb7d094b03 req-78a51f7d-ef54-4f32-9f98-3b92d1220ed7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "504acd93-cd55-496e-a85f-30e811f827d4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:42:26 compute-1 nova_compute[225855]: 2026-01-20 14:42:26.003 225859 DEBUG oslo_concurrency.lockutils [req-8ef92bfb-02d7-498f-8926-44bb7d094b03 req-78a51f7d-ef54-4f32-9f98-3b92d1220ed7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "504acd93-cd55-496e-a85f-30e811f827d4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:42:26 compute-1 nova_compute[225855]: 2026-01-20 14:42:26.003 225859 DEBUG nova.compute.manager [req-8ef92bfb-02d7-498f-8926-44bb7d094b03 req-78a51f7d-ef54-4f32-9f98-3b92d1220ed7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 504acd93-cd55-496e-a85f-30e811f827d4] Processing event network-vif-plugged-349b1d10-0b06-4025-80fd-4861bd487a43 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 20 14:42:26 compute-1 nova_compute[225855]: 2026-01-20 14:42:26.004 225859 DEBUG nova.compute.manager [None req-1e7cd38a-601c-4a17-b056-dc991ac616b5 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] [instance: 504acd93-cd55-496e-a85f-30e811f827d4] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 20 14:42:26 compute-1 nova_compute[225855]: 2026-01-20 14:42:26.007 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768920146.0074406, 504acd93-cd55-496e-a85f-30e811f827d4 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:42:26 compute-1 nova_compute[225855]: 2026-01-20 14:42:26.007 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 504acd93-cd55-496e-a85f-30e811f827d4] VM Resumed (Lifecycle Event)
Jan 20 14:42:26 compute-1 nova_compute[225855]: 2026-01-20 14:42:26.009 225859 DEBUG nova.virt.libvirt.driver [None req-1e7cd38a-601c-4a17-b056-dc991ac616b5 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] [instance: 504acd93-cd55-496e-a85f-30e811f827d4] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 20 14:42:26 compute-1 nova_compute[225855]: 2026-01-20 14:42:26.013 225859 INFO nova.virt.libvirt.driver [-] [instance: 504acd93-cd55-496e-a85f-30e811f827d4] Instance spawned successfully.
Jan 20 14:42:26 compute-1 nova_compute[225855]: 2026-01-20 14:42:26.014 225859 DEBUG nova.virt.libvirt.driver [None req-1e7cd38a-601c-4a17-b056-dc991ac616b5 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] [instance: 504acd93-cd55-496e-a85f-30e811f827d4] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 20 14:42:26 compute-1 nova_compute[225855]: 2026-01-20 14:42:26.030 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 504acd93-cd55-496e-a85f-30e811f827d4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:42:26 compute-1 nova_compute[225855]: 2026-01-20 14:42:26.040 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 504acd93-cd55-496e-a85f-30e811f827d4] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:42:26 compute-1 nova_compute[225855]: 2026-01-20 14:42:26.044 225859 DEBUG nova.virt.libvirt.driver [None req-1e7cd38a-601c-4a17-b056-dc991ac616b5 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] [instance: 504acd93-cd55-496e-a85f-30e811f827d4] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:42:26 compute-1 nova_compute[225855]: 2026-01-20 14:42:26.045 225859 DEBUG nova.virt.libvirt.driver [None req-1e7cd38a-601c-4a17-b056-dc991ac616b5 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] [instance: 504acd93-cd55-496e-a85f-30e811f827d4] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:42:26 compute-1 nova_compute[225855]: 2026-01-20 14:42:26.046 225859 DEBUG nova.virt.libvirt.driver [None req-1e7cd38a-601c-4a17-b056-dc991ac616b5 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] [instance: 504acd93-cd55-496e-a85f-30e811f827d4] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:42:26 compute-1 nova_compute[225855]: 2026-01-20 14:42:26.046 225859 DEBUG nova.virt.libvirt.driver [None req-1e7cd38a-601c-4a17-b056-dc991ac616b5 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] [instance: 504acd93-cd55-496e-a85f-30e811f827d4] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:42:26 compute-1 nova_compute[225855]: 2026-01-20 14:42:26.047 225859 DEBUG nova.virt.libvirt.driver [None req-1e7cd38a-601c-4a17-b056-dc991ac616b5 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] [instance: 504acd93-cd55-496e-a85f-30e811f827d4] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:42:26 compute-1 nova_compute[225855]: 2026-01-20 14:42:26.047 225859 DEBUG nova.virt.libvirt.driver [None req-1e7cd38a-601c-4a17-b056-dc991ac616b5 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] [instance: 504acd93-cd55-496e-a85f-30e811f827d4] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:42:26 compute-1 sudo[255539]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:42:26 compute-1 nova_compute[225855]: 2026-01-20 14:42:26.084 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 504acd93-cd55-496e-a85f-30e811f827d4] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 14:42:26 compute-1 sudo[255539]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:42:26 compute-1 sudo[255539]: pam_unix(sudo:session): session closed for user root
Jan 20 14:42:26 compute-1 nova_compute[225855]: 2026-01-20 14:42:26.141 225859 INFO nova.compute.manager [None req-1e7cd38a-601c-4a17-b056-dc991ac616b5 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] [instance: 504acd93-cd55-496e-a85f-30e811f827d4] Took 12.41 seconds to spawn the instance on the hypervisor.
Jan 20 14:42:26 compute-1 nova_compute[225855]: 2026-01-20 14:42:26.141 225859 DEBUG nova.compute.manager [None req-1e7cd38a-601c-4a17-b056-dc991ac616b5 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] [instance: 504acd93-cd55-496e-a85f-30e811f827d4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:42:26 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:42:26 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:42:26 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:42:26.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:42:26 compute-1 sudo[255564]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:42:26 compute-1 sudo[255564]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:42:26 compute-1 sudo[255564]: pam_unix(sudo:session): session closed for user root
Jan 20 14:42:26 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:42:26 compute-1 nova_compute[225855]: 2026-01-20 14:42:26.246 225859 INFO nova.compute.manager [None req-1e7cd38a-601c-4a17-b056-dc991ac616b5 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] [instance: 504acd93-cd55-496e-a85f-30e811f827d4] Took 13.68 seconds to build instance.
Jan 20 14:42:26 compute-1 nova_compute[225855]: 2026-01-20 14:42:26.277 225859 DEBUG oslo_concurrency.lockutils [None req-1e7cd38a-601c-4a17-b056-dc991ac616b5 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] Lock "504acd93-cd55-496e-a85f-30e811f827d4" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.851s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:42:26 compute-1 nova_compute[225855]: 2026-01-20 14:42:26.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:42:26 compute-1 nova_compute[225855]: 2026-01-20 14:42:26.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:42:27 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:42:27 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:42:27 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:42:27.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:42:27 compute-1 nova_compute[225855]: 2026-01-20 14:42:27.340 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:42:27 compute-1 nova_compute[225855]: 2026-01-20 14:42:27.341 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 14:42:27 compute-1 nova_compute[225855]: 2026-01-20 14:42:27.341 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 20 14:42:27 compute-1 nova_compute[225855]: 2026-01-20 14:42:27.751 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "refresh_cache-10349dde-fb60-48ba-bc7b-42180c5eb49e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:42:27 compute-1 ceph-mon[81775]: pgmap v1623: 321 pgs: 321 active+clean; 386 MiB data, 795 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 3.2 MiB/s wr, 116 op/s
Jan 20 14:42:28 compute-1 nova_compute[225855]: 2026-01-20 14:42:28.021 225859 DEBUG nova.network.neutron [req-c34e0831-1ec5-43e2-b737-614d371dffee req-b9091455-90df-4eab-a890-3071ac3f3f9a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 504acd93-cd55-496e-a85f-30e811f827d4] Updated VIF entry in instance network info cache for port 349b1d10-0b06-4025-80fd-4861bd487a43. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 14:42:28 compute-1 nova_compute[225855]: 2026-01-20 14:42:28.021 225859 DEBUG nova.network.neutron [req-c34e0831-1ec5-43e2-b737-614d371dffee req-b9091455-90df-4eab-a890-3071ac3f3f9a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 504acd93-cd55-496e-a85f-30e811f827d4] Updating instance_info_cache with network_info: [{"id": "349b1d10-0b06-4025-80fd-4861bd487a43", "address": "fa:16:3e:dd:f6:26", "network": {"id": "b36e9cab-12c6-4a09-9aab-ef2679d875ba", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-432532406-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b95747114ab4043b93a260387199c91", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap349b1d10-0b", "ovs_interfaceid": "349b1d10-0b06-4025-80fd-4861bd487a43", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:42:28 compute-1 nova_compute[225855]: 2026-01-20 14:42:28.041 225859 DEBUG oslo_concurrency.lockutils [req-c34e0831-1ec5-43e2-b737-614d371dffee req-b9091455-90df-4eab-a890-3071ac3f3f9a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-504acd93-cd55-496e-a85f-30e811f827d4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:42:28 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:42:28 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:42:28 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:42:28.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:42:28 compute-1 nova_compute[225855]: 2026-01-20 14:42:28.317 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:42:29 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/311125799' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:42:29 compute-1 nova_compute[225855]: 2026-01-20 14:42:29.154 225859 DEBUG nova.compute.manager [req-f86e1fb5-998e-4941-9910-6e1e0daf8400 req-2ef13c09-587b-49be-b889-adc74128c5e1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 504acd93-cd55-496e-a85f-30e811f827d4] Received event network-vif-plugged-349b1d10-0b06-4025-80fd-4861bd487a43 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:42:29 compute-1 nova_compute[225855]: 2026-01-20 14:42:29.154 225859 DEBUG oslo_concurrency.lockutils [req-f86e1fb5-998e-4941-9910-6e1e0daf8400 req-2ef13c09-587b-49be-b889-adc74128c5e1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "504acd93-cd55-496e-a85f-30e811f827d4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:42:29 compute-1 nova_compute[225855]: 2026-01-20 14:42:29.154 225859 DEBUG oslo_concurrency.lockutils [req-f86e1fb5-998e-4941-9910-6e1e0daf8400 req-2ef13c09-587b-49be-b889-adc74128c5e1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "504acd93-cd55-496e-a85f-30e811f827d4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:42:29 compute-1 nova_compute[225855]: 2026-01-20 14:42:29.155 225859 DEBUG oslo_concurrency.lockutils [req-f86e1fb5-998e-4941-9910-6e1e0daf8400 req-2ef13c09-587b-49be-b889-adc74128c5e1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "504acd93-cd55-496e-a85f-30e811f827d4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:42:29 compute-1 nova_compute[225855]: 2026-01-20 14:42:29.155 225859 DEBUG nova.compute.manager [req-f86e1fb5-998e-4941-9910-6e1e0daf8400 req-2ef13c09-587b-49be-b889-adc74128c5e1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 504acd93-cd55-496e-a85f-30e811f827d4] No waiting events found dispatching network-vif-plugged-349b1d10-0b06-4025-80fd-4861bd487a43 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:42:29 compute-1 nova_compute[225855]: 2026-01-20 14:42:29.155 225859 WARNING nova.compute.manager [req-f86e1fb5-998e-4941-9910-6e1e0daf8400 req-2ef13c09-587b-49be-b889-adc74128c5e1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 504acd93-cd55-496e-a85f-30e811f827d4] Received unexpected event network-vif-plugged-349b1d10-0b06-4025-80fd-4861bd487a43 for instance with vm_state active and task_state None.
Jan 20 14:42:29 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:42:29 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:42:29 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:42:29.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:42:30 compute-1 nova_compute[225855]: 2026-01-20 14:42:30.004 225859 DEBUG nova.network.neutron [None req-812cb0b8-b728-4175-8739-e1b0cc33e188 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] [instance: 10349dde-fb60-48ba-bc7b-42180c5eb49e] Updating instance_info_cache with network_info: [{"id": "607e59a4-2a6b-424a-9413-be318079781e", "address": "fa:16:3e:a3:13:21", "network": {"id": "fc21b99b-4e34-422c-be05-0a440009dac4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-808285772-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.203", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3f93fd4b2154dda9f38e62334904303", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap607e59a4-2a", "ovs_interfaceid": "607e59a4-2a6b-424a-9413-be318079781e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "e2648ead-7162-4661-94e1-755faa8f1fd1", "address": "fa:16:3e:80:35:48", "network": {"id": "fc21b99b-4e34-422c-be05-0a440009dac4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-808285772-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3f93fd4b2154dda9f38e62334904303", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape2648ead-71", "ovs_interfaceid": "e2648ead-7162-4661-94e1-755faa8f1fd1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:42:30 compute-1 nova_compute[225855]: 2026-01-20 14:42:30.033 225859 DEBUG oslo_concurrency.lockutils [None req-812cb0b8-b728-4175-8739-e1b0cc33e188 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Releasing lock "refresh_cache-10349dde-fb60-48ba-bc7b-42180c5eb49e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:42:30 compute-1 nova_compute[225855]: 2026-01-20 14:42:30.035 225859 DEBUG oslo_concurrency.lockutils [req-928ae490-427b-4f76-924a-e4181bf9d70a req-c68095fc-99a6-4de7-939c-cba34d8736e1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-10349dde-fb60-48ba-bc7b-42180c5eb49e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:42:30 compute-1 nova_compute[225855]: 2026-01-20 14:42:30.036 225859 DEBUG nova.network.neutron [req-928ae490-427b-4f76-924a-e4181bf9d70a req-c68095fc-99a6-4de7-939c-cba34d8736e1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 10349dde-fb60-48ba-bc7b-42180c5eb49e] Refreshing network info cache for port e2648ead-7162-4661-94e1-755faa8f1fd1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 14:42:30 compute-1 nova_compute[225855]: 2026-01-20 14:42:30.040 225859 DEBUG nova.virt.libvirt.vif [None req-812cb0b8-b728-4175-8739-e1b0cc33e188 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T14:41:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-689127716',display_name='tempest-tempest.common.compute-instance-689127716',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-689127716',id=71,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDk9SkW5N7MhrGaZslG18EJ7xoBof9PQa4upjUw+XxfbO5rNOjJYMJtJMRGPfgbl1pwAZZD7LHjNNMRFKVo+T+C8Rnr+HXWsPYQmvPGwjjZ++NXvRdqES1LIbRDiwaFMJQ==',key_name='tempest-keypair-1970360297',keypairs=<?>,launch_index=0,launched_at=2026-01-20T14:41:22Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='e3f93fd4b2154dda9f38e62334904303',ramdisk_id='',reservation_id='r-2p3ovedc',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-305746947',owner_user_name='tempest-AttachInterfacesTestJSON-305746947-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T14:41:22Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c8a9fb458d27434495a77a94827b6097',uuid=10349dde-fb60-48ba-bc7b-42180c5eb49e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e2648ead-7162-4661-94e1-755faa8f1fd1", "address": "fa:16:3e:80:35:48", "network": {"id": "fc21b99b-4e34-422c-be05-0a440009dac4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-808285772-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3f93fd4b2154dda9f38e62334904303", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape2648ead-71", "ovs_interfaceid": "e2648ead-7162-4661-94e1-755faa8f1fd1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 20 14:42:30 compute-1 nova_compute[225855]: 2026-01-20 14:42:30.041 225859 DEBUG nova.network.os_vif_util [None req-812cb0b8-b728-4175-8739-e1b0cc33e188 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Converting VIF {"id": "e2648ead-7162-4661-94e1-755faa8f1fd1", "address": "fa:16:3e:80:35:48", "network": {"id": "fc21b99b-4e34-422c-be05-0a440009dac4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-808285772-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3f93fd4b2154dda9f38e62334904303", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape2648ead-71", "ovs_interfaceid": "e2648ead-7162-4661-94e1-755faa8f1fd1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:42:30 compute-1 nova_compute[225855]: 2026-01-20 14:42:30.042 225859 DEBUG nova.network.os_vif_util [None req-812cb0b8-b728-4175-8739-e1b0cc33e188 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:80:35:48,bridge_name='br-int',has_traffic_filtering=True,id=e2648ead-7162-4661-94e1-755faa8f1fd1,network=Network(fc21b99b-4e34-422c-be05-0a440009dac4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tape2648ead-71') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:42:30 compute-1 nova_compute[225855]: 2026-01-20 14:42:30.043 225859 DEBUG os_vif [None req-812cb0b8-b728-4175-8739-e1b0cc33e188 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:80:35:48,bridge_name='br-int',has_traffic_filtering=True,id=e2648ead-7162-4661-94e1-755faa8f1fd1,network=Network(fc21b99b-4e34-422c-be05-0a440009dac4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tape2648ead-71') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 20 14:42:30 compute-1 nova_compute[225855]: 2026-01-20 14:42:30.044 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:42:30 compute-1 nova_compute[225855]: 2026-01-20 14:42:30.045 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:42:30 compute-1 nova_compute[225855]: 2026-01-20 14:42:30.046 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:42:30 compute-1 nova_compute[225855]: 2026-01-20 14:42:30.051 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:42:30 compute-1 nova_compute[225855]: 2026-01-20 14:42:30.051 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape2648ead-71, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:42:30 compute-1 nova_compute[225855]: 2026-01-20 14:42:30.053 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tape2648ead-71, col_values=(('external_ids', {'iface-id': 'e2648ead-7162-4661-94e1-755faa8f1fd1', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:80:35:48', 'vm-uuid': '10349dde-fb60-48ba-bc7b-42180c5eb49e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:42:30 compute-1 NetworkManager[49104]: <info>  [1768920150.0583] manager: (tape2648ead-71): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/115)
Jan 20 14:42:30 compute-1 nova_compute[225855]: 2026-01-20 14:42:30.064 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:42:30 compute-1 nova_compute[225855]: 2026-01-20 14:42:30.069 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 14:42:30 compute-1 nova_compute[225855]: 2026-01-20 14:42:30.070 225859 INFO os_vif [None req-812cb0b8-b728-4175-8739-e1b0cc33e188 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:80:35:48,bridge_name='br-int',has_traffic_filtering=True,id=e2648ead-7162-4661-94e1-755faa8f1fd1,network=Network(fc21b99b-4e34-422c-be05-0a440009dac4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tape2648ead-71')
Jan 20 14:42:30 compute-1 nova_compute[225855]: 2026-01-20 14:42:30.071 225859 DEBUG nova.virt.libvirt.vif [None req-812cb0b8-b728-4175-8739-e1b0cc33e188 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T14:41:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-689127716',display_name='tempest-tempest.common.compute-instance-689127716',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-689127716',id=71,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDk9SkW5N7MhrGaZslG18EJ7xoBof9PQa4upjUw+XxfbO5rNOjJYMJtJMRGPfgbl1pwAZZD7LHjNNMRFKVo+T+C8Rnr+HXWsPYQmvPGwjjZ++NXvRdqES1LIbRDiwaFMJQ==',key_name='tempest-keypair-1970360297',keypairs=<?>,launch_index=0,launched_at=2026-01-20T14:41:22Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='e3f93fd4b2154dda9f38e62334904303',ramdisk_id='',reservation_id='r-2p3ovedc',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-305746947',owner_user_name='tempest-AttachInterfacesTestJSON-305746947-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T14:41:22Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c8a9fb458d27434495a77a94827b6097',uuid=10349dde-fb60-48ba-bc7b-42180c5eb49e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e2648ead-7162-4661-94e1-755faa8f1fd1", "address": "fa:16:3e:80:35:48", "network": {"id": "fc21b99b-4e34-422c-be05-0a440009dac4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-808285772-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3f93fd4b2154dda9f38e62334904303", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape2648ead-71", "ovs_interfaceid": "e2648ead-7162-4661-94e1-755faa8f1fd1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 20 14:42:30 compute-1 nova_compute[225855]: 2026-01-20 14:42:30.072 225859 DEBUG nova.network.os_vif_util [None req-812cb0b8-b728-4175-8739-e1b0cc33e188 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Converting VIF {"id": "e2648ead-7162-4661-94e1-755faa8f1fd1", "address": "fa:16:3e:80:35:48", "network": {"id": "fc21b99b-4e34-422c-be05-0a440009dac4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-808285772-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3f93fd4b2154dda9f38e62334904303", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape2648ead-71", "ovs_interfaceid": "e2648ead-7162-4661-94e1-755faa8f1fd1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:42:30 compute-1 nova_compute[225855]: 2026-01-20 14:42:30.073 225859 DEBUG nova.network.os_vif_util [None req-812cb0b8-b728-4175-8739-e1b0cc33e188 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:80:35:48,bridge_name='br-int',has_traffic_filtering=True,id=e2648ead-7162-4661-94e1-755faa8f1fd1,network=Network(fc21b99b-4e34-422c-be05-0a440009dac4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tape2648ead-71') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:42:30 compute-1 nova_compute[225855]: 2026-01-20 14:42:30.076 225859 DEBUG nova.virt.libvirt.guest [None req-812cb0b8-b728-4175-8739-e1b0cc33e188 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] attach device xml: <interface type="ethernet">
Jan 20 14:42:30 compute-1 nova_compute[225855]:   <mac address="fa:16:3e:80:35:48"/>
Jan 20 14:42:30 compute-1 nova_compute[225855]:   <model type="virtio"/>
Jan 20 14:42:30 compute-1 nova_compute[225855]:   <driver name="vhost" rx_queue_size="512"/>
Jan 20 14:42:30 compute-1 nova_compute[225855]:   <mtu size="1442"/>
Jan 20 14:42:30 compute-1 nova_compute[225855]:   <target dev="tape2648ead-71"/>
Jan 20 14:42:30 compute-1 nova_compute[225855]: </interface>
Jan 20 14:42:30 compute-1 nova_compute[225855]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Jan 20 14:42:30 compute-1 kernel: tape2648ead-71: entered promiscuous mode
Jan 20 14:42:30 compute-1 NetworkManager[49104]: <info>  [1768920150.0917] manager: (tape2648ead-71): new Tun device (/org/freedesktop/NetworkManager/Devices/116)
Jan 20 14:42:30 compute-1 ovn_controller[130490]: 2026-01-20T14:42:30Z|00257|binding|INFO|Claiming lport e2648ead-7162-4661-94e1-755faa8f1fd1 for this chassis.
Jan 20 14:42:30 compute-1 ovn_controller[130490]: 2026-01-20T14:42:30Z|00258|binding|INFO|e2648ead-7162-4661-94e1-755faa8f1fd1: Claiming fa:16:3e:80:35:48 10.100.0.6
Jan 20 14:42:30 compute-1 nova_compute[225855]: 2026-01-20 14:42:30.097 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:42:30 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:42:30.112 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:80:35:48 10.100.0.6'], port_security=['fa:16:3e:80:35:48 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-AttachInterfacesTestJSON-381806613', 'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '10349dde-fb60-48ba-bc7b-42180c5eb49e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fc21b99b-4e34-422c-be05-0a440009dac4', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-AttachInterfacesTestJSON-381806613', 'neutron:project_id': 'e3f93fd4b2154dda9f38e62334904303', 'neutron:revision_number': '7', 'neutron:security_group_ids': '52cb9fb4-4318-4f53-9b5a-002d95792517', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7af6b6bc-3cbd-48be-9f10-23ec011e0426, chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], logical_port=e2648ead-7162-4661-94e1-755faa8f1fd1) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:42:30 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:42:30.115 140354 INFO neutron.agent.ovn.metadata.agent [-] Port e2648ead-7162-4661-94e1-755faa8f1fd1 in datapath fc21b99b-4e34-422c-be05-0a440009dac4 bound to our chassis
Jan 20 14:42:30 compute-1 ovn_controller[130490]: 2026-01-20T14:42:30Z|00259|binding|INFO|Setting lport e2648ead-7162-4661-94e1-755faa8f1fd1 ovn-installed in OVS
Jan 20 14:42:30 compute-1 ovn_controller[130490]: 2026-01-20T14:42:30Z|00260|binding|INFO|Setting lport e2648ead-7162-4661-94e1-755faa8f1fd1 up in Southbound
Jan 20 14:42:30 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:42:30.117 140354 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fc21b99b-4e34-422c-be05-0a440009dac4
Jan 20 14:42:30 compute-1 nova_compute[225855]: 2026-01-20 14:42:30.120 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:42:30 compute-1 nova_compute[225855]: 2026-01-20 14:42:30.123 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:42:30 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/4237730022' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:42:30 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/3877248440' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:42:30 compute-1 ceph-mon[81775]: pgmap v1624: 321 pgs: 321 active+clean; 386 MiB data, 795 MiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 2.1 MiB/s wr, 148 op/s
Jan 20 14:42:30 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/2794071667' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:42:30 compute-1 systemd-udevd[255597]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 14:42:30 compute-1 NetworkManager[49104]: <info>  [1768920150.1435] device (tape2648ead-71): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 20 14:42:30 compute-1 NetworkManager[49104]: <info>  [1768920150.1440] device (tape2648ead-71): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 20 14:42:30 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:42:30.134 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[6255aac0-d890-42e8-b904-c1eafb0847ce]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:42:30 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:42:30 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:42:30 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:42:30.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:42:30 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:42:30.175 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[926dc833-8f57-4f9f-a219-dbf1f772291d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:42:30 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:42:30.180 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[22b3c6c9-a675-445b-8b75-64d333231303]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:42:30 compute-1 nova_compute[225855]: 2026-01-20 14:42:30.214 225859 DEBUG nova.virt.libvirt.driver [None req-812cb0b8-b728-4175-8739-e1b0cc33e188 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:42:30 compute-1 nova_compute[225855]: 2026-01-20 14:42:30.214 225859 DEBUG nova.virt.libvirt.driver [None req-812cb0b8-b728-4175-8739-e1b0cc33e188 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:42:30 compute-1 nova_compute[225855]: 2026-01-20 14:42:30.214 225859 DEBUG nova.virt.libvirt.driver [None req-812cb0b8-b728-4175-8739-e1b0cc33e188 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] No VIF found with MAC fa:16:3e:a3:13:21, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 20 14:42:30 compute-1 nova_compute[225855]: 2026-01-20 14:42:30.215 225859 DEBUG nova.virt.libvirt.driver [None req-812cb0b8-b728-4175-8739-e1b0cc33e188 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] No VIF found with MAC fa:16:3e:80:35:48, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 20 14:42:30 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:42:30.217 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[44046935-565c-4277-a31f-d27d3dfa574e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:42:30 compute-1 nova_compute[225855]: 2026-01-20 14:42:30.243 225859 DEBUG nova.virt.libvirt.guest [None req-812cb0b8-b728-4175-8739-e1b0cc33e188 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 14:42:30 compute-1 nova_compute[225855]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 14:42:30 compute-1 nova_compute[225855]:   <nova:name>tempest-tempest.common.compute-instance-689127716</nova:name>
Jan 20 14:42:30 compute-1 nova_compute[225855]:   <nova:creationTime>2026-01-20 14:42:30</nova:creationTime>
Jan 20 14:42:30 compute-1 nova_compute[225855]:   <nova:flavor name="m1.nano">
Jan 20 14:42:30 compute-1 nova_compute[225855]:     <nova:memory>128</nova:memory>
Jan 20 14:42:30 compute-1 nova_compute[225855]:     <nova:disk>1</nova:disk>
Jan 20 14:42:30 compute-1 nova_compute[225855]:     <nova:swap>0</nova:swap>
Jan 20 14:42:30 compute-1 nova_compute[225855]:     <nova:ephemeral>0</nova:ephemeral>
Jan 20 14:42:30 compute-1 nova_compute[225855]:     <nova:vcpus>1</nova:vcpus>
Jan 20 14:42:30 compute-1 nova_compute[225855]:   </nova:flavor>
Jan 20 14:42:30 compute-1 nova_compute[225855]:   <nova:owner>
Jan 20 14:42:30 compute-1 nova_compute[225855]:     <nova:user uuid="c8a9fb458d27434495a77a94827b6097">tempest-AttachInterfacesTestJSON-305746947-project-member</nova:user>
Jan 20 14:42:30 compute-1 nova_compute[225855]:     <nova:project uuid="e3f93fd4b2154dda9f38e62334904303">tempest-AttachInterfacesTestJSON-305746947</nova:project>
Jan 20 14:42:30 compute-1 nova_compute[225855]:   </nova:owner>
Jan 20 14:42:30 compute-1 nova_compute[225855]:   <nova:root type="image" uuid="a32b3e07-16d8-46fd-9a7b-c242c432fcf9"/>
Jan 20 14:42:30 compute-1 nova_compute[225855]:   <nova:ports>
Jan 20 14:42:30 compute-1 nova_compute[225855]:     <nova:port uuid="607e59a4-2a6b-424a-9413-be318079781e">
Jan 20 14:42:30 compute-1 nova_compute[225855]:       <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Jan 20 14:42:30 compute-1 nova_compute[225855]:     </nova:port>
Jan 20 14:42:30 compute-1 nova_compute[225855]:     <nova:port uuid="e2648ead-7162-4661-94e1-755faa8f1fd1">
Jan 20 14:42:30 compute-1 nova_compute[225855]:       <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Jan 20 14:42:30 compute-1 nova_compute[225855]:     </nova:port>
Jan 20 14:42:30 compute-1 nova_compute[225855]:   </nova:ports>
Jan 20 14:42:30 compute-1 nova_compute[225855]: </nova:instance>
Jan 20 14:42:30 compute-1 nova_compute[225855]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Jan 20 14:42:30 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:42:30.244 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[4d8cd11d-33a8-41d6-b964-e3461f97c96c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfc21b99b-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b1:5b:d2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 66], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 511911, 'reachable_time': 34363, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 255605, 'error': None, 'target': 'ovnmeta-fc21b99b-4e34-422c-be05-0a440009dac4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:42:30 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:42:30.263 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[47e5bd7c-66f6-4c8c-81e6-0bdac684dccd]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapfc21b99b-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 511920, 'tstamp': 511920}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 255606, 'error': None, 'target': 'ovnmeta-fc21b99b-4e34-422c-be05-0a440009dac4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapfc21b99b-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 511922, 'tstamp': 511922}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 255606, 'error': None, 'target': 'ovnmeta-fc21b99b-4e34-422c-be05-0a440009dac4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:42:30 compute-1 nova_compute[225855]: 2026-01-20 14:42:30.274 225859 DEBUG oslo_concurrency.lockutils [None req-812cb0b8-b728-4175-8739-e1b0cc33e188 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Lock "interface-10349dde-fb60-48ba-bc7b-42180c5eb49e-e2648ead-7162-4661-94e1-755faa8f1fd1" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 10.063s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:42:30 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:42:30.276 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfc21b99b-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:42:30 compute-1 nova_compute[225855]: 2026-01-20 14:42:30.278 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:42:30 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:42:30.280 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfc21b99b-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:42:30 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:42:30.280 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:42:30 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:42:30.280 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfc21b99b-40, col_values=(('external_ids', {'iface-id': '583df905-1d9f-49c1-b209-4b7fad1599f6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:42:30 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:42:30.281 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:42:30 compute-1 nova_compute[225855]: 2026-01-20 14:42:30.282 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:42:30 compute-1 nova_compute[225855]: 2026-01-20 14:42:30.283 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:42:30 compute-1 nova_compute[225855]: 2026-01-20 14:42:30.898 225859 DEBUG nova.compute.manager [req-5f88eef7-6460-4dc5-bd7b-46d5ecf31bb2 req-d2fb87aa-e2cb-49e6-9ace-5f878a893196 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 10349dde-fb60-48ba-bc7b-42180c5eb49e] Received event network-vif-plugged-e2648ead-7162-4661-94e1-755faa8f1fd1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:42:30 compute-1 nova_compute[225855]: 2026-01-20 14:42:30.899 225859 DEBUG oslo_concurrency.lockutils [req-5f88eef7-6460-4dc5-bd7b-46d5ecf31bb2 req-d2fb87aa-e2cb-49e6-9ace-5f878a893196 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "10349dde-fb60-48ba-bc7b-42180c5eb49e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:42:30 compute-1 nova_compute[225855]: 2026-01-20 14:42:30.899 225859 DEBUG oslo_concurrency.lockutils [req-5f88eef7-6460-4dc5-bd7b-46d5ecf31bb2 req-d2fb87aa-e2cb-49e6-9ace-5f878a893196 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "10349dde-fb60-48ba-bc7b-42180c5eb49e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:42:30 compute-1 nova_compute[225855]: 2026-01-20 14:42:30.899 225859 DEBUG oslo_concurrency.lockutils [req-5f88eef7-6460-4dc5-bd7b-46d5ecf31bb2 req-d2fb87aa-e2cb-49e6-9ace-5f878a893196 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "10349dde-fb60-48ba-bc7b-42180c5eb49e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:42:30 compute-1 nova_compute[225855]: 2026-01-20 14:42:30.899 225859 DEBUG nova.compute.manager [req-5f88eef7-6460-4dc5-bd7b-46d5ecf31bb2 req-d2fb87aa-e2cb-49e6-9ace-5f878a893196 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 10349dde-fb60-48ba-bc7b-42180c5eb49e] No waiting events found dispatching network-vif-plugged-e2648ead-7162-4661-94e1-755faa8f1fd1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:42:30 compute-1 nova_compute[225855]: 2026-01-20 14:42:30.900 225859 WARNING nova.compute.manager [req-5f88eef7-6460-4dc5-bd7b-46d5ecf31bb2 req-d2fb87aa-e2cb-49e6-9ace-5f878a893196 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 10349dde-fb60-48ba-bc7b-42180c5eb49e] Received unexpected event network-vif-plugged-e2648ead-7162-4661-94e1-755faa8f1fd1 for instance with vm_state active and task_state None.
Jan 20 14:42:31 compute-1 nova_compute[225855]: 2026-01-20 14:42:31.023 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:42:31 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:42:31 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:42:31 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:42:31.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:42:31 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:42:31 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/553078605' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:42:31 compute-1 nova_compute[225855]: 2026-01-20 14:42:31.715 225859 DEBUG oslo_concurrency.lockutils [None req-c5c1a956-f98e-4e92-8ffb-c6a46dd06190 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Acquiring lock "interface-10349dde-fb60-48ba-bc7b-42180c5eb49e-e2648ead-7162-4661-94e1-755faa8f1fd1" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:42:31 compute-1 nova_compute[225855]: 2026-01-20 14:42:31.716 225859 DEBUG oslo_concurrency.lockutils [None req-c5c1a956-f98e-4e92-8ffb-c6a46dd06190 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Lock "interface-10349dde-fb60-48ba-bc7b-42180c5eb49e-e2648ead-7162-4661-94e1-755faa8f1fd1" acquired by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:42:31 compute-1 nova_compute[225855]: 2026-01-20 14:42:31.747 225859 DEBUG nova.objects.instance [None req-c5c1a956-f98e-4e92-8ffb-c6a46dd06190 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Lazy-loading 'flavor' on Instance uuid 10349dde-fb60-48ba-bc7b-42180c5eb49e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:42:31 compute-1 nova_compute[225855]: 2026-01-20 14:42:31.820 225859 DEBUG nova.virt.libvirt.vif [None req-c5c1a956-f98e-4e92-8ffb-c6a46dd06190 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T14:41:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-689127716',display_name='tempest-tempest.common.compute-instance-689127716',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-689127716',id=71,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDk9SkW5N7MhrGaZslG18EJ7xoBof9PQa4upjUw+XxfbO5rNOjJYMJtJMRGPfgbl1pwAZZD7LHjNNMRFKVo+T+C8Rnr+HXWsPYQmvPGwjjZ++NXvRdqES1LIbRDiwaFMJQ==',key_name='tempest-keypair-1970360297',keypairs=<?>,launch_index=0,launched_at=2026-01-20T14:41:22Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='e3f93fd4b2154dda9f38e62334904303',ramdisk_id='',reservation_id='r-2p3ovedc',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-305746947',owner_user_name='tempest-AttachInterfacesTestJSON-305746947-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T14:41:22Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c8a9fb458d27434495a77a94827b6097',uuid=10349dde-fb60-48ba-bc7b-42180c5eb49e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e2648ead-7162-4661-94e1-755faa8f1fd1", "address": "fa:16:3e:80:35:48", "network": {"id": "fc21b99b-4e34-422c-be05-0a440009dac4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-808285772-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3f93fd4b2154dda9f38e62334904303", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape2648ead-71", "ovs_interfaceid": "e2648ead-7162-4661-94e1-755faa8f1fd1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 20 14:42:31 compute-1 nova_compute[225855]: 2026-01-20 14:42:31.821 225859 DEBUG nova.network.os_vif_util [None req-c5c1a956-f98e-4e92-8ffb-c6a46dd06190 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Converting VIF {"id": "e2648ead-7162-4661-94e1-755faa8f1fd1", "address": "fa:16:3e:80:35:48", "network": {"id": "fc21b99b-4e34-422c-be05-0a440009dac4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-808285772-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3f93fd4b2154dda9f38e62334904303", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape2648ead-71", "ovs_interfaceid": "e2648ead-7162-4661-94e1-755faa8f1fd1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:42:31 compute-1 nova_compute[225855]: 2026-01-20 14:42:31.821 225859 DEBUG nova.network.os_vif_util [None req-c5c1a956-f98e-4e92-8ffb-c6a46dd06190 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:80:35:48,bridge_name='br-int',has_traffic_filtering=True,id=e2648ead-7162-4661-94e1-755faa8f1fd1,network=Network(fc21b99b-4e34-422c-be05-0a440009dac4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tape2648ead-71') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:42:31 compute-1 nova_compute[225855]: 2026-01-20 14:42:31.824 225859 DEBUG nova.virt.libvirt.guest [None req-c5c1a956-f98e-4e92-8ffb-c6a46dd06190 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:80:35:48"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tape2648ead-71"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Jan 20 14:42:31 compute-1 nova_compute[225855]: 2026-01-20 14:42:31.826 225859 DEBUG nova.virt.libvirt.guest [None req-c5c1a956-f98e-4e92-8ffb-c6a46dd06190 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:80:35:48"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tape2648ead-71"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Jan 20 14:42:31 compute-1 nova_compute[225855]: 2026-01-20 14:42:31.828 225859 DEBUG nova.virt.libvirt.driver [None req-c5c1a956-f98e-4e92-8ffb-c6a46dd06190 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Attempting to detach device tape2648ead-71 from instance 10349dde-fb60-48ba-bc7b-42180c5eb49e from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Jan 20 14:42:31 compute-1 nova_compute[225855]: 2026-01-20 14:42:31.828 225859 DEBUG nova.virt.libvirt.guest [None req-c5c1a956-f98e-4e92-8ffb-c6a46dd06190 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] detach device xml: <interface type="ethernet">
Jan 20 14:42:31 compute-1 nova_compute[225855]:   <mac address="fa:16:3e:80:35:48"/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:   <model type="virtio"/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:   <driver name="vhost" rx_queue_size="512"/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:   <mtu size="1442"/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:   <target dev="tape2648ead-71"/>
Jan 20 14:42:31 compute-1 nova_compute[225855]: </interface>
Jan 20 14:42:31 compute-1 nova_compute[225855]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Jan 20 14:42:31 compute-1 nova_compute[225855]: 2026-01-20 14:42:31.833 225859 DEBUG nova.virt.libvirt.guest [None req-c5c1a956-f98e-4e92-8ffb-c6a46dd06190 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:80:35:48"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tape2648ead-71"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Jan 20 14:42:31 compute-1 nova_compute[225855]: 2026-01-20 14:42:31.836 225859 DEBUG nova.virt.libvirt.guest [None req-c5c1a956-f98e-4e92-8ffb-c6a46dd06190 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:80:35:48"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tape2648ead-71"/></interface>not found in domain: <domain type='kvm' id='31'>
Jan 20 14:42:31 compute-1 nova_compute[225855]:   <name>instance-00000047</name>
Jan 20 14:42:31 compute-1 nova_compute[225855]:   <uuid>10349dde-fb60-48ba-bc7b-42180c5eb49e</uuid>
Jan 20 14:42:31 compute-1 nova_compute[225855]:   <metadata>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 14:42:31 compute-1 nova_compute[225855]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:   <nova:name>tempest-tempest.common.compute-instance-689127716</nova:name>
Jan 20 14:42:31 compute-1 nova_compute[225855]:   <nova:creationTime>2026-01-20 14:42:30</nova:creationTime>
Jan 20 14:42:31 compute-1 nova_compute[225855]:   <nova:flavor name="m1.nano">
Jan 20 14:42:31 compute-1 nova_compute[225855]:     <nova:memory>128</nova:memory>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     <nova:disk>1</nova:disk>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     <nova:swap>0</nova:swap>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     <nova:ephemeral>0</nova:ephemeral>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     <nova:vcpus>1</nova:vcpus>
Jan 20 14:42:31 compute-1 nova_compute[225855]:   </nova:flavor>
Jan 20 14:42:31 compute-1 nova_compute[225855]:   <nova:owner>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     <nova:user uuid="c8a9fb458d27434495a77a94827b6097">tempest-AttachInterfacesTestJSON-305746947-project-member</nova:user>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     <nova:project uuid="e3f93fd4b2154dda9f38e62334904303">tempest-AttachInterfacesTestJSON-305746947</nova:project>
Jan 20 14:42:31 compute-1 nova_compute[225855]:   </nova:owner>
Jan 20 14:42:31 compute-1 nova_compute[225855]:   <nova:root type="image" uuid="a32b3e07-16d8-46fd-9a7b-c242c432fcf9"/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:   <nova:ports>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     <nova:port uuid="607e59a4-2a6b-424a-9413-be318079781e">
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     </nova:port>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     <nova:port uuid="e2648ead-7162-4661-94e1-755faa8f1fd1">
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     </nova:port>
Jan 20 14:42:31 compute-1 nova_compute[225855]:   </nova:ports>
Jan 20 14:42:31 compute-1 nova_compute[225855]: </nova:instance>
Jan 20 14:42:31 compute-1 nova_compute[225855]:   </metadata>
Jan 20 14:42:31 compute-1 nova_compute[225855]:   <memory unit='KiB'>131072</memory>
Jan 20 14:42:31 compute-1 nova_compute[225855]:   <currentMemory unit='KiB'>131072</currentMemory>
Jan 20 14:42:31 compute-1 nova_compute[225855]:   <vcpu placement='static'>1</vcpu>
Jan 20 14:42:31 compute-1 nova_compute[225855]:   <resource>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     <partition>/machine</partition>
Jan 20 14:42:31 compute-1 nova_compute[225855]:   </resource>
Jan 20 14:42:31 compute-1 nova_compute[225855]:   <sysinfo type='smbios'>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     <system>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <entry name='manufacturer'>RDO</entry>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <entry name='product'>OpenStack Compute</entry>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <entry name='serial'>10349dde-fb60-48ba-bc7b-42180c5eb49e</entry>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <entry name='uuid'>10349dde-fb60-48ba-bc7b-42180c5eb49e</entry>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <entry name='family'>Virtual Machine</entry>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     </system>
Jan 20 14:42:31 compute-1 nova_compute[225855]:   </sysinfo>
Jan 20 14:42:31 compute-1 nova_compute[225855]:   <os>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     <boot dev='hd'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     <smbios mode='sysinfo'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:   </os>
Jan 20 14:42:31 compute-1 nova_compute[225855]:   <features>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     <acpi/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     <apic/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     <vmcoreinfo state='on'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:   </features>
Jan 20 14:42:31 compute-1 nova_compute[225855]:   <cpu mode='custom' match='exact' check='full'>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     <model fallback='forbid'>Nehalem</model>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     <feature policy='require' name='x2apic'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     <feature policy='require' name='hypervisor'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     <feature policy='require' name='vme'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:   </cpu>
Jan 20 14:42:31 compute-1 nova_compute[225855]:   <clock offset='utc'>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     <timer name='pit' tickpolicy='delay'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     <timer name='rtc' tickpolicy='catchup'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     <timer name='hpet' present='no'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:   </clock>
Jan 20 14:42:31 compute-1 nova_compute[225855]:   <on_poweroff>destroy</on_poweroff>
Jan 20 14:42:31 compute-1 nova_compute[225855]:   <on_reboot>restart</on_reboot>
Jan 20 14:42:31 compute-1 nova_compute[225855]:   <on_crash>destroy</on_crash>
Jan 20 14:42:31 compute-1 nova_compute[225855]:   <devices>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     <disk type='network' device='disk'>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <driver name='qemu' type='raw' cache='none'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <auth username='openstack'>
Jan 20 14:42:31 compute-1 nova_compute[225855]:         <secret type='ceph' uuid='e399cf45-e6b6-5393-99f1-75c601d3f188'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       </auth>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <source protocol='rbd' name='vms/10349dde-fb60-48ba-bc7b-42180c5eb49e_disk' index='2'>
Jan 20 14:42:31 compute-1 nova_compute[225855]:         <host name='192.168.122.100' port='6789'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:         <host name='192.168.122.102' port='6789'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:         <host name='192.168.122.101' port='6789'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       </source>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <target dev='vda' bus='virtio'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <alias name='virtio-disk0'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     </disk>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     <disk type='network' device='cdrom'>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <driver name='qemu' type='raw' cache='none'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <auth username='openstack'>
Jan 20 14:42:31 compute-1 nova_compute[225855]:         <secret type='ceph' uuid='e399cf45-e6b6-5393-99f1-75c601d3f188'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       </auth>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <source protocol='rbd' name='vms/10349dde-fb60-48ba-bc7b-42180c5eb49e_disk.config' index='1'>
Jan 20 14:42:31 compute-1 nova_compute[225855]:         <host name='192.168.122.100' port='6789'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:         <host name='192.168.122.102' port='6789'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:         <host name='192.168.122.101' port='6789'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       </source>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <target dev='sda' bus='sata'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <readonly/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <alias name='sata0-0-0'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     </disk>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     <controller type='pci' index='0' model='pcie-root'>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <alias name='pcie.0'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     </controller>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     <controller type='pci' index='1' model='pcie-root-port'>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <target chassis='1' port='0x10'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <alias name='pci.1'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     </controller>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     <controller type='pci' index='2' model='pcie-root-port'>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <target chassis='2' port='0x11'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <alias name='pci.2'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     </controller>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     <controller type='pci' index='3' model='pcie-root-port'>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <target chassis='3' port='0x12'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <alias name='pci.3'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     </controller>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     <controller type='pci' index='4' model='pcie-root-port'>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <target chassis='4' port='0x13'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <alias name='pci.4'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     </controller>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     <controller type='pci' index='5' model='pcie-root-port'>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <target chassis='5' port='0x14'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <alias name='pci.5'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     </controller>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     <controller type='pci' index='6' model='pcie-root-port'>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <target chassis='6' port='0x15'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <alias name='pci.6'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     </controller>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     <controller type='pci' index='7' model='pcie-root-port'>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <target chassis='7' port='0x16'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <alias name='pci.7'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     </controller>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     <controller type='pci' index='8' model='pcie-root-port'>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <target chassis='8' port='0x17'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <alias name='pci.8'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     </controller>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     <controller type='pci' index='9' model='pcie-root-port'>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <target chassis='9' port='0x18'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <alias name='pci.9'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     </controller>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     <controller type='pci' index='10' model='pcie-root-port'>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <target chassis='10' port='0x19'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <alias name='pci.10'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     </controller>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     <controller type='pci' index='11' model='pcie-root-port'>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <target chassis='11' port='0x1a'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <alias name='pci.11'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     </controller>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     <controller type='pci' index='12' model='pcie-root-port'>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <target chassis='12' port='0x1b'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <alias name='pci.12'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     </controller>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     <controller type='pci' index='13' model='pcie-root-port'>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <target chassis='13' port='0x1c'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <alias name='pci.13'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     </controller>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     <controller type='pci' index='14' model='pcie-root-port'>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <target chassis='14' port='0x1d'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <alias name='pci.14'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     </controller>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     <controller type='pci' index='15' model='pcie-root-port'>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <target chassis='15' port='0x1e'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <alias name='pci.15'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     </controller>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     <controller type='pci' index='16' model='pcie-root-port'>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <target chassis='16' port='0x1f'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <alias name='pci.16'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     </controller>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     <controller type='pci' index='17' model='pcie-root-port'>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <target chassis='17' port='0x20'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <alias name='pci.17'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     </controller>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     <controller type='pci' index='18' model='pcie-root-port'>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <target chassis='18' port='0x21'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <alias name='pci.18'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     </controller>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     <controller type='pci' index='19' model='pcie-root-port'>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <target chassis='19' port='0x22'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <alias name='pci.19'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     </controller>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     <controller type='pci' index='20' model='pcie-root-port'>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <target chassis='20' port='0x23'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <alias name='pci.20'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     </controller>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     <controller type='pci' index='21' model='pcie-root-port'>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <target chassis='21' port='0x24'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <alias name='pci.21'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     </controller>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     <controller type='pci' index='22' model='pcie-root-port'>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <target chassis='22' port='0x25'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <alias name='pci.22'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     </controller>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     <controller type='pci' index='23' model='pcie-root-port'>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <target chassis='23' port='0x26'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <alias name='pci.23'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     </controller>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     <controller type='pci' index='24' model='pcie-root-port'>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <target chassis='24' port='0x27'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <alias name='pci.24'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     </controller>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     <controller type='pci' index='25' model='pcie-root-port'>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <target chassis='25' port='0x28'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <alias name='pci.25'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     </controller>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <model name='pcie-pci-bridge'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <alias name='pci.26'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     </controller>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     <controller type='usb' index='0' model='piix3-uhci'>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <alias name='usb'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     </controller>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     <controller type='sata' index='0'>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <alias name='ide'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     </controller>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     <interface type='ethernet'>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <mac address='fa:16:3e:a3:13:21'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <target dev='tap607e59a4-2a'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <model type='virtio'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <driver name='vhost' rx_queue_size='512'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <mtu size='1442'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <alias name='net0'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     </interface>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     <interface type='ethernet'>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <mac address='fa:16:3e:80:35:48'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <target dev='tape2648ead-71'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <model type='virtio'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <driver name='vhost' rx_queue_size='512'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <mtu size='1442'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <alias name='net1'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     </interface>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     <serial type='pty'>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <source path='/dev/pts/0'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <log file='/var/lib/nova/instances/10349dde-fb60-48ba-bc7b-42180c5eb49e/console.log' append='off'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <target type='isa-serial' port='0'>
Jan 20 14:42:31 compute-1 nova_compute[225855]:         <model name='isa-serial'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       </target>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <alias name='serial0'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     </serial>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     <console type='pty' tty='/dev/pts/0'>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <source path='/dev/pts/0'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <log file='/var/lib/nova/instances/10349dde-fb60-48ba-bc7b-42180c5eb49e/console.log' append='off'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <target type='serial' port='0'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <alias name='serial0'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     </console>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     <input type='tablet' bus='usb'>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <alias name='input0'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <address type='usb' bus='0' port='1'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     </input>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     <input type='mouse' bus='ps2'>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <alias name='input1'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     </input>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     <input type='keyboard' bus='ps2'>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <alias name='input2'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     </input>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <listen type='address' address='::0'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     </graphics>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     <audio id='1' type='none'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     <video>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <model type='virtio' heads='1' primary='yes'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <alias name='video0'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     </video>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     <watchdog model='itco' action='reset'>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <alias name='watchdog0'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     </watchdog>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     <memballoon model='virtio'>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <stats period='10'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <alias name='balloon0'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     </memballoon>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     <rng model='virtio'>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <backend model='random'>/dev/urandom</backend>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <alias name='rng0'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     </rng>
Jan 20 14:42:31 compute-1 nova_compute[225855]:   </devices>
Jan 20 14:42:31 compute-1 nova_compute[225855]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     <label>system_u:system_r:svirt_t:s0:c240,c925</label>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c240,c925</imagelabel>
Jan 20 14:42:31 compute-1 nova_compute[225855]:   </seclabel>
Jan 20 14:42:31 compute-1 nova_compute[225855]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     <label>+107:+107</label>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     <imagelabel>+107:+107</imagelabel>
Jan 20 14:42:31 compute-1 nova_compute[225855]:   </seclabel>
Jan 20 14:42:31 compute-1 nova_compute[225855]: </domain>
Jan 20 14:42:31 compute-1 nova_compute[225855]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Jan 20 14:42:31 compute-1 nova_compute[225855]: 2026-01-20 14:42:31.838 225859 INFO nova.virt.libvirt.driver [None req-c5c1a956-f98e-4e92-8ffb-c6a46dd06190 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Successfully detached device tape2648ead-71 from instance 10349dde-fb60-48ba-bc7b-42180c5eb49e from the persistent domain config.
Jan 20 14:42:31 compute-1 nova_compute[225855]: 2026-01-20 14:42:31.838 225859 DEBUG nova.virt.libvirt.driver [None req-c5c1a956-f98e-4e92-8ffb-c6a46dd06190 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] (1/8): Attempting to detach device tape2648ead-71 with device alias net1 from instance 10349dde-fb60-48ba-bc7b-42180c5eb49e from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Jan 20 14:42:31 compute-1 nova_compute[225855]: 2026-01-20 14:42:31.838 225859 DEBUG nova.virt.libvirt.guest [None req-c5c1a956-f98e-4e92-8ffb-c6a46dd06190 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] detach device xml: <interface type="ethernet">
Jan 20 14:42:31 compute-1 nova_compute[225855]:   <mac address="fa:16:3e:80:35:48"/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:   <model type="virtio"/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:   <driver name="vhost" rx_queue_size="512"/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:   <mtu size="1442"/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:   <target dev="tape2648ead-71"/>
Jan 20 14:42:31 compute-1 nova_compute[225855]: </interface>
Jan 20 14:42:31 compute-1 nova_compute[225855]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Jan 20 14:42:31 compute-1 kernel: tape2648ead-71 (unregistering): left promiscuous mode
Jan 20 14:42:31 compute-1 NetworkManager[49104]: <info>  [1768920151.8968] device (tape2648ead-71): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 20 14:42:31 compute-1 nova_compute[225855]: 2026-01-20 14:42:31.902 225859 DEBUG nova.virt.libvirt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Received event <DeviceRemovedEvent: 1768920151.9018826, 10349dde-fb60-48ba-bc7b-42180c5eb49e => net1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Jan 20 14:42:31 compute-1 nova_compute[225855]: 2026-01-20 14:42:31.904 225859 DEBUG nova.virt.libvirt.driver [None req-c5c1a956-f98e-4e92-8ffb-c6a46dd06190 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Start waiting for the detach event from libvirt for device tape2648ead-71 with device alias net1 for instance 10349dde-fb60-48ba-bc7b-42180c5eb49e _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Jan 20 14:42:31 compute-1 nova_compute[225855]: 2026-01-20 14:42:31.905 225859 DEBUG nova.virt.libvirt.guest [None req-c5c1a956-f98e-4e92-8ffb-c6a46dd06190 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:80:35:48"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tape2648ead-71"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Jan 20 14:42:31 compute-1 nova_compute[225855]: 2026-01-20 14:42:31.906 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:42:31 compute-1 ovn_controller[130490]: 2026-01-20T14:42:31Z|00261|binding|INFO|Releasing lport e2648ead-7162-4661-94e1-755faa8f1fd1 from this chassis (sb_readonly=0)
Jan 20 14:42:31 compute-1 ovn_controller[130490]: 2026-01-20T14:42:31Z|00262|binding|INFO|Setting lport e2648ead-7162-4661-94e1-755faa8f1fd1 down in Southbound
Jan 20 14:42:31 compute-1 ovn_controller[130490]: 2026-01-20T14:42:31Z|00263|binding|INFO|Removing iface tape2648ead-71 ovn-installed in OVS
Jan 20 14:42:31 compute-1 nova_compute[225855]: 2026-01-20 14:42:31.909 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:42:31 compute-1 nova_compute[225855]: 2026-01-20 14:42:31.910 225859 DEBUG nova.virt.libvirt.guest [None req-c5c1a956-f98e-4e92-8ffb-c6a46dd06190 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:80:35:48"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tape2648ead-71"/></interface>not found in domain: <domain type='kvm' id='31'>
Jan 20 14:42:31 compute-1 nova_compute[225855]:   <name>instance-00000047</name>
Jan 20 14:42:31 compute-1 nova_compute[225855]:   <uuid>10349dde-fb60-48ba-bc7b-42180c5eb49e</uuid>
Jan 20 14:42:31 compute-1 nova_compute[225855]:   <metadata>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 14:42:31 compute-1 nova_compute[225855]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:   <nova:name>tempest-tempest.common.compute-instance-689127716</nova:name>
Jan 20 14:42:31 compute-1 nova_compute[225855]:   <nova:creationTime>2026-01-20 14:42:30</nova:creationTime>
Jan 20 14:42:31 compute-1 nova_compute[225855]:   <nova:flavor name="m1.nano">
Jan 20 14:42:31 compute-1 nova_compute[225855]:     <nova:memory>128</nova:memory>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     <nova:disk>1</nova:disk>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     <nova:swap>0</nova:swap>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     <nova:ephemeral>0</nova:ephemeral>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     <nova:vcpus>1</nova:vcpus>
Jan 20 14:42:31 compute-1 nova_compute[225855]:   </nova:flavor>
Jan 20 14:42:31 compute-1 nova_compute[225855]:   <nova:owner>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     <nova:user uuid="c8a9fb458d27434495a77a94827b6097">tempest-AttachInterfacesTestJSON-305746947-project-member</nova:user>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     <nova:project uuid="e3f93fd4b2154dda9f38e62334904303">tempest-AttachInterfacesTestJSON-305746947</nova:project>
Jan 20 14:42:31 compute-1 nova_compute[225855]:   </nova:owner>
Jan 20 14:42:31 compute-1 nova_compute[225855]:   <nova:root type="image" uuid="a32b3e07-16d8-46fd-9a7b-c242c432fcf9"/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:   <nova:ports>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     <nova:port uuid="607e59a4-2a6b-424a-9413-be318079781e">
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     </nova:port>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     <nova:port uuid="e2648ead-7162-4661-94e1-755faa8f1fd1">
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     </nova:port>
Jan 20 14:42:31 compute-1 nova_compute[225855]:   </nova:ports>
Jan 20 14:42:31 compute-1 nova_compute[225855]: </nova:instance>
Jan 20 14:42:31 compute-1 nova_compute[225855]:   </metadata>
Jan 20 14:42:31 compute-1 nova_compute[225855]:   <memory unit='KiB'>131072</memory>
Jan 20 14:42:31 compute-1 nova_compute[225855]:   <currentMemory unit='KiB'>131072</currentMemory>
Jan 20 14:42:31 compute-1 nova_compute[225855]:   <vcpu placement='static'>1</vcpu>
Jan 20 14:42:31 compute-1 nova_compute[225855]:   <resource>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     <partition>/machine</partition>
Jan 20 14:42:31 compute-1 nova_compute[225855]:   </resource>
Jan 20 14:42:31 compute-1 nova_compute[225855]:   <sysinfo type='smbios'>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     <system>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <entry name='manufacturer'>RDO</entry>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <entry name='product'>OpenStack Compute</entry>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <entry name='serial'>10349dde-fb60-48ba-bc7b-42180c5eb49e</entry>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <entry name='uuid'>10349dde-fb60-48ba-bc7b-42180c5eb49e</entry>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <entry name='family'>Virtual Machine</entry>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     </system>
Jan 20 14:42:31 compute-1 nova_compute[225855]:   </sysinfo>
Jan 20 14:42:31 compute-1 nova_compute[225855]:   <os>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     <boot dev='hd'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     <smbios mode='sysinfo'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:   </os>
Jan 20 14:42:31 compute-1 nova_compute[225855]:   <features>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     <acpi/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     <apic/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     <vmcoreinfo state='on'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:   </features>
Jan 20 14:42:31 compute-1 nova_compute[225855]:   <cpu mode='custom' match='exact' check='full'>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     <model fallback='forbid'>Nehalem</model>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     <feature policy='require' name='x2apic'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     <feature policy='require' name='hypervisor'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     <feature policy='require' name='vme'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:   </cpu>
Jan 20 14:42:31 compute-1 nova_compute[225855]:   <clock offset='utc'>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     <timer name='pit' tickpolicy='delay'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     <timer name='rtc' tickpolicy='catchup'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     <timer name='hpet' present='no'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:   </clock>
Jan 20 14:42:31 compute-1 nova_compute[225855]:   <on_poweroff>destroy</on_poweroff>
Jan 20 14:42:31 compute-1 nova_compute[225855]:   <on_reboot>restart</on_reboot>
Jan 20 14:42:31 compute-1 nova_compute[225855]:   <on_crash>destroy</on_crash>
Jan 20 14:42:31 compute-1 nova_compute[225855]:   <devices>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     <disk type='network' device='disk'>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <driver name='qemu' type='raw' cache='none'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <auth username='openstack'>
Jan 20 14:42:31 compute-1 nova_compute[225855]:         <secret type='ceph' uuid='e399cf45-e6b6-5393-99f1-75c601d3f188'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       </auth>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <source protocol='rbd' name='vms/10349dde-fb60-48ba-bc7b-42180c5eb49e_disk' index='2'>
Jan 20 14:42:31 compute-1 nova_compute[225855]:         <host name='192.168.122.100' port='6789'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:         <host name='192.168.122.102' port='6789'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:         <host name='192.168.122.101' port='6789'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       </source>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <target dev='vda' bus='virtio'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <alias name='virtio-disk0'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     </disk>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     <disk type='network' device='cdrom'>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <driver name='qemu' type='raw' cache='none'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <auth username='openstack'>
Jan 20 14:42:31 compute-1 nova_compute[225855]:         <secret type='ceph' uuid='e399cf45-e6b6-5393-99f1-75c601d3f188'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       </auth>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <source protocol='rbd' name='vms/10349dde-fb60-48ba-bc7b-42180c5eb49e_disk.config' index='1'>
Jan 20 14:42:31 compute-1 nova_compute[225855]:         <host name='192.168.122.100' port='6789'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:         <host name='192.168.122.102' port='6789'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:         <host name='192.168.122.101' port='6789'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       </source>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <target dev='sda' bus='sata'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <readonly/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <alias name='sata0-0-0'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     </disk>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     <controller type='pci' index='0' model='pcie-root'>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <alias name='pcie.0'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     </controller>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     <controller type='pci' index='1' model='pcie-root-port'>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <target chassis='1' port='0x10'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <alias name='pci.1'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     </controller>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     <controller type='pci' index='2' model='pcie-root-port'>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <target chassis='2' port='0x11'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <alias name='pci.2'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     </controller>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     <controller type='pci' index='3' model='pcie-root-port'>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <target chassis='3' port='0x12'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <alias name='pci.3'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     </controller>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     <controller type='pci' index='4' model='pcie-root-port'>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <target chassis='4' port='0x13'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <alias name='pci.4'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     </controller>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     <controller type='pci' index='5' model='pcie-root-port'>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <target chassis='5' port='0x14'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <alias name='pci.5'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     </controller>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     <controller type='pci' index='6' model='pcie-root-port'>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <target chassis='6' port='0x15'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <alias name='pci.6'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     </controller>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     <controller type='pci' index='7' model='pcie-root-port'>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <target chassis='7' port='0x16'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <alias name='pci.7'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     </controller>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     <controller type='pci' index='8' model='pcie-root-port'>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <target chassis='8' port='0x17'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <alias name='pci.8'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     </controller>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     <controller type='pci' index='9' model='pcie-root-port'>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <target chassis='9' port='0x18'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <alias name='pci.9'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     </controller>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     <controller type='pci' index='10' model='pcie-root-port'>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <target chassis='10' port='0x19'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <alias name='pci.10'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     </controller>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     <controller type='pci' index='11' model='pcie-root-port'>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <target chassis='11' port='0x1a'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <alias name='pci.11'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     </controller>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     <controller type='pci' index='12' model='pcie-root-port'>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <target chassis='12' port='0x1b'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <alias name='pci.12'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     </controller>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     <controller type='pci' index='13' model='pcie-root-port'>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <target chassis='13' port='0x1c'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <alias name='pci.13'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     </controller>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     <controller type='pci' index='14' model='pcie-root-port'>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <target chassis='14' port='0x1d'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <alias name='pci.14'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     </controller>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     <controller type='pci' index='15' model='pcie-root-port'>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <target chassis='15' port='0x1e'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <alias name='pci.15'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     </controller>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     <controller type='pci' index='16' model='pcie-root-port'>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <target chassis='16' port='0x1f'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <alias name='pci.16'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     </controller>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     <controller type='pci' index='17' model='pcie-root-port'>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <target chassis='17' port='0x20'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <alias name='pci.17'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     </controller>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     <controller type='pci' index='18' model='pcie-root-port'>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <target chassis='18' port='0x21'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <alias name='pci.18'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     </controller>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     <controller type='pci' index='19' model='pcie-root-port'>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <target chassis='19' port='0x22'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <alias name='pci.19'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     </controller>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     <controller type='pci' index='20' model='pcie-root-port'>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <target chassis='20' port='0x23'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <alias name='pci.20'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     </controller>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     <controller type='pci' index='21' model='pcie-root-port'>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <target chassis='21' port='0x24'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <alias name='pci.21'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     </controller>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     <controller type='pci' index='22' model='pcie-root-port'>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <target chassis='22' port='0x25'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <alias name='pci.22'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     </controller>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     <controller type='pci' index='23' model='pcie-root-port'>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <target chassis='23' port='0x26'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <alias name='pci.23'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     </controller>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     <controller type='pci' index='24' model='pcie-root-port'>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <target chassis='24' port='0x27'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <alias name='pci.24'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     </controller>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     <controller type='pci' index='25' model='pcie-root-port'>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <target chassis='25' port='0x28'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <alias name='pci.25'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     </controller>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <model name='pcie-pci-bridge'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <alias name='pci.26'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     </controller>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     <controller type='usb' index='0' model='piix3-uhci'>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <alias name='usb'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     </controller>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     <controller type='sata' index='0'>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <alias name='ide'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     </controller>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     <interface type='ethernet'>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <mac address='fa:16:3e:a3:13:21'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <target dev='tap607e59a4-2a'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <model type='virtio'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <driver name='vhost' rx_queue_size='512'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <mtu size='1442'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <alias name='net0'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     </interface>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     <serial type='pty'>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <source path='/dev/pts/0'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <log file='/var/lib/nova/instances/10349dde-fb60-48ba-bc7b-42180c5eb49e/console.log' append='off'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <target type='isa-serial' port='0'>
Jan 20 14:42:31 compute-1 nova_compute[225855]:         <model name='isa-serial'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       </target>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <alias name='serial0'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     </serial>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     <console type='pty' tty='/dev/pts/0'>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <source path='/dev/pts/0'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <log file='/var/lib/nova/instances/10349dde-fb60-48ba-bc7b-42180c5eb49e/console.log' append='off'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <target type='serial' port='0'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <alias name='serial0'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     </console>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     <input type='tablet' bus='usb'>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <alias name='input0'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <address type='usb' bus='0' port='1'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     </input>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     <input type='mouse' bus='ps2'>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <alias name='input1'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     </input>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     <input type='keyboard' bus='ps2'>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <alias name='input2'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     </input>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <listen type='address' address='::0'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     </graphics>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     <audio id='1' type='none'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     <video>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <model type='virtio' heads='1' primary='yes'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <alias name='video0'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     </video>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     <watchdog model='itco' action='reset'>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <alias name='watchdog0'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     </watchdog>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     <memballoon model='virtio'>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <stats period='10'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <alias name='balloon0'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     </memballoon>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     <rng model='virtio'>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <backend model='random'>/dev/urandom</backend>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <alias name='rng0'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     </rng>
Jan 20 14:42:31 compute-1 nova_compute[225855]:   </devices>
Jan 20 14:42:31 compute-1 nova_compute[225855]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     <label>system_u:system_r:svirt_t:s0:c240,c925</label>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c240,c925</imagelabel>
Jan 20 14:42:31 compute-1 nova_compute[225855]:   </seclabel>
Jan 20 14:42:31 compute-1 nova_compute[225855]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     <label>+107:+107</label>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     <imagelabel>+107:+107</imagelabel>
Jan 20 14:42:31 compute-1 nova_compute[225855]:   </seclabel>
Jan 20 14:42:31 compute-1 nova_compute[225855]: </domain>
Jan 20 14:42:31 compute-1 nova_compute[225855]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Jan 20 14:42:31 compute-1 nova_compute[225855]: 2026-01-20 14:42:31.910 225859 INFO nova.virt.libvirt.driver [None req-c5c1a956-f98e-4e92-8ffb-c6a46dd06190 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Successfully detached device tape2648ead-71 from instance 10349dde-fb60-48ba-bc7b-42180c5eb49e from the live domain config.
Jan 20 14:42:31 compute-1 nova_compute[225855]: 2026-01-20 14:42:31.911 225859 DEBUG nova.virt.libvirt.vif [None req-c5c1a956-f98e-4e92-8ffb-c6a46dd06190 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T14:41:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-689127716',display_name='tempest-tempest.common.compute-instance-689127716',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-689127716',id=71,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDk9SkW5N7MhrGaZslG18EJ7xoBof9PQa4upjUw+XxfbO5rNOjJYMJtJMRGPfgbl1pwAZZD7LHjNNMRFKVo+T+C8Rnr+HXWsPYQmvPGwjjZ++NXvRdqES1LIbRDiwaFMJQ==',key_name='tempest-keypair-1970360297',keypairs=<?>,launch_index=0,launched_at=2026-01-20T14:41:22Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='e3f93fd4b2154dda9f38e62334904303',ramdisk_id='',reservation_id='r-2p3ovedc',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-305746947',owner_user_name='tempest-AttachInterfacesTestJSON-305746947-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T14:41:22Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c8a9fb458d27434495a77a94827b6097',uuid=10349dde-fb60-48ba-bc7b-42180c5eb49e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e2648ead-7162-4661-94e1-755faa8f1fd1", "address": "fa:16:3e:80:35:48", "network": {"id": "fc21b99b-4e34-422c-be05-0a440009dac4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-808285772-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3f93fd4b2154dda9f38e62334904303", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape2648ead-71", "ovs_interfaceid": "e2648ead-7162-4661-94e1-755faa8f1fd1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 20 14:42:31 compute-1 nova_compute[225855]: 2026-01-20 14:42:31.911 225859 DEBUG nova.network.os_vif_util [None req-c5c1a956-f98e-4e92-8ffb-c6a46dd06190 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Converting VIF {"id": "e2648ead-7162-4661-94e1-755faa8f1fd1", "address": "fa:16:3e:80:35:48", "network": {"id": "fc21b99b-4e34-422c-be05-0a440009dac4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-808285772-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3f93fd4b2154dda9f38e62334904303", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape2648ead-71", "ovs_interfaceid": "e2648ead-7162-4661-94e1-755faa8f1fd1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:42:31 compute-1 nova_compute[225855]: 2026-01-20 14:42:31.911 225859 DEBUG nova.network.os_vif_util [None req-c5c1a956-f98e-4e92-8ffb-c6a46dd06190 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:80:35:48,bridge_name='br-int',has_traffic_filtering=True,id=e2648ead-7162-4661-94e1-755faa8f1fd1,network=Network(fc21b99b-4e34-422c-be05-0a440009dac4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tape2648ead-71') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:42:31 compute-1 nova_compute[225855]: 2026-01-20 14:42:31.912 225859 DEBUG os_vif [None req-c5c1a956-f98e-4e92-8ffb-c6a46dd06190 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:80:35:48,bridge_name='br-int',has_traffic_filtering=True,id=e2648ead-7162-4661-94e1-755faa8f1fd1,network=Network(fc21b99b-4e34-422c-be05-0a440009dac4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tape2648ead-71') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 20 14:42:31 compute-1 nova_compute[225855]: 2026-01-20 14:42:31.916 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:42:31 compute-1 nova_compute[225855]: 2026-01-20 14:42:31.916 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape2648ead-71, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:42:31 compute-1 nova_compute[225855]: 2026-01-20 14:42:31.917 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:42:31 compute-1 nova_compute[225855]: 2026-01-20 14:42:31.920 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 14:42:31 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:42:31.919 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:80:35:48 10.100.0.6'], port_security=['fa:16:3e:80:35:48 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-AttachInterfacesTestJSON-381806613', 'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '10349dde-fb60-48ba-bc7b-42180c5eb49e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fc21b99b-4e34-422c-be05-0a440009dac4', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-AttachInterfacesTestJSON-381806613', 'neutron:project_id': 'e3f93fd4b2154dda9f38e62334904303', 'neutron:revision_number': '9', 'neutron:security_group_ids': '52cb9fb4-4318-4f53-9b5a-002d95792517', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7af6b6bc-3cbd-48be-9f10-23ec011e0426, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], logical_port=e2648ead-7162-4661-94e1-755faa8f1fd1) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:42:31 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:42:31.921 140354 INFO neutron.agent.ovn.metadata.agent [-] Port e2648ead-7162-4661-94e1-755faa8f1fd1 in datapath fc21b99b-4e34-422c-be05-0a440009dac4 unbound from our chassis
Jan 20 14:42:31 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:42:31.922 140354 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fc21b99b-4e34-422c-be05-0a440009dac4
Jan 20 14:42:31 compute-1 nova_compute[225855]: 2026-01-20 14:42:31.930 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:42:31 compute-1 nova_compute[225855]: 2026-01-20 14:42:31.932 225859 INFO os_vif [None req-c5c1a956-f98e-4e92-8ffb-c6a46dd06190 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:80:35:48,bridge_name='br-int',has_traffic_filtering=True,id=e2648ead-7162-4661-94e1-755faa8f1fd1,network=Network(fc21b99b-4e34-422c-be05-0a440009dac4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tape2648ead-71')
Jan 20 14:42:31 compute-1 nova_compute[225855]: 2026-01-20 14:42:31.933 225859 DEBUG nova.virt.libvirt.guest [None req-c5c1a956-f98e-4e92-8ffb-c6a46dd06190 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 14:42:31 compute-1 nova_compute[225855]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:   <nova:name>tempest-tempest.common.compute-instance-689127716</nova:name>
Jan 20 14:42:31 compute-1 nova_compute[225855]:   <nova:creationTime>2026-01-20 14:42:31</nova:creationTime>
Jan 20 14:42:31 compute-1 nova_compute[225855]:   <nova:flavor name="m1.nano">
Jan 20 14:42:31 compute-1 nova_compute[225855]:     <nova:memory>128</nova:memory>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     <nova:disk>1</nova:disk>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     <nova:swap>0</nova:swap>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     <nova:ephemeral>0</nova:ephemeral>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     <nova:vcpus>1</nova:vcpus>
Jan 20 14:42:31 compute-1 nova_compute[225855]:   </nova:flavor>
Jan 20 14:42:31 compute-1 nova_compute[225855]:   <nova:owner>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     <nova:user uuid="c8a9fb458d27434495a77a94827b6097">tempest-AttachInterfacesTestJSON-305746947-project-member</nova:user>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     <nova:project uuid="e3f93fd4b2154dda9f38e62334904303">tempest-AttachInterfacesTestJSON-305746947</nova:project>
Jan 20 14:42:31 compute-1 nova_compute[225855]:   </nova:owner>
Jan 20 14:42:31 compute-1 nova_compute[225855]:   <nova:root type="image" uuid="a32b3e07-16d8-46fd-9a7b-c242c432fcf9"/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:   <nova:ports>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     <nova:port uuid="607e59a4-2a6b-424a-9413-be318079781e">
Jan 20 14:42:31 compute-1 nova_compute[225855]:       <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Jan 20 14:42:31 compute-1 nova_compute[225855]:     </nova:port>
Jan 20 14:42:31 compute-1 nova_compute[225855]:   </nova:ports>
Jan 20 14:42:31 compute-1 nova_compute[225855]: </nova:instance>
Jan 20 14:42:31 compute-1 nova_compute[225855]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Jan 20 14:42:31 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:42:31.944 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[1238868d-31b4-4764-9555-fd2b18450735]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:42:31 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:42:31.974 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[ad259a30-3f36-402b-a920-471b12d47ae8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:42:31 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:42:31.976 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[ebe1bf07-fe76-4103-beda-07468e9659cf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:42:32 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:42:32.002 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[dff4e8fe-277a-4920-8db0-21a56709d069]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:42:32 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:42:32.017 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[51a69b6a-e7f3-46a1-a52e-b8ac8997533e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfc21b99b-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b1:5b:d2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 66], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 511911, 'reachable_time': 34363, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 255617, 'error': None, 'target': 'ovnmeta-fc21b99b-4e34-422c-be05-0a440009dac4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:42:32 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:42:32.032 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[d384e570-e3a1-4038-b6d6-5a2d1506620c]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapfc21b99b-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 511920, 'tstamp': 511920}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 255618, 'error': None, 'target': 'ovnmeta-fc21b99b-4e34-422c-be05-0a440009dac4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapfc21b99b-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 511922, 'tstamp': 511922}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 255618, 'error': None, 'target': 'ovnmeta-fc21b99b-4e34-422c-be05-0a440009dac4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:42:32 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:42:32.033 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfc21b99b-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:42:32 compute-1 nova_compute[225855]: 2026-01-20 14:42:32.035 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:42:32 compute-1 nova_compute[225855]: 2026-01-20 14:42:32.036 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:42:32 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:42:32.036 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfc21b99b-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:42:32 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:42:32.036 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:42:32 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:42:32.037 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfc21b99b-40, col_values=(('external_ids', {'iface-id': '583df905-1d9f-49c1-b209-4b7fad1599f6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:42:32 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:42:32.037 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:42:32 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:42:32 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:42:32 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:42:32.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:42:32 compute-1 ceph-mon[81775]: pgmap v1625: 321 pgs: 321 active+clean; 386 MiB data, 795 MiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 1.8 MiB/s wr, 155 op/s
Jan 20 14:42:32 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/391396376' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:42:33 compute-1 podman[255619]: 2026-01-20 14:42:33.021806385 +0000 UTC m=+0.060126354 container health_status 533cd8e0c3b8b7f910f15cef49e7f879ab46c12ce646a35628c15f13725eb500 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, tcib_managed=true)
Jan 20 14:42:33 compute-1 nova_compute[225855]: 2026-01-20 14:42:33.106 225859 DEBUG nova.compute.manager [req-0e2feea7-4ae8-409d-ac26-f1bf884f8daf req-f5ef4ac8-cbd1-4279-b4c0-f40aa6d8704e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 10349dde-fb60-48ba-bc7b-42180c5eb49e] Received event network-vif-plugged-e2648ead-7162-4661-94e1-755faa8f1fd1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:42:33 compute-1 nova_compute[225855]: 2026-01-20 14:42:33.106 225859 DEBUG oslo_concurrency.lockutils [req-0e2feea7-4ae8-409d-ac26-f1bf884f8daf req-f5ef4ac8-cbd1-4279-b4c0-f40aa6d8704e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "10349dde-fb60-48ba-bc7b-42180c5eb49e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:42:33 compute-1 nova_compute[225855]: 2026-01-20 14:42:33.106 225859 DEBUG oslo_concurrency.lockutils [req-0e2feea7-4ae8-409d-ac26-f1bf884f8daf req-f5ef4ac8-cbd1-4279-b4c0-f40aa6d8704e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "10349dde-fb60-48ba-bc7b-42180c5eb49e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:42:33 compute-1 nova_compute[225855]: 2026-01-20 14:42:33.107 225859 DEBUG oslo_concurrency.lockutils [req-0e2feea7-4ae8-409d-ac26-f1bf884f8daf req-f5ef4ac8-cbd1-4279-b4c0-f40aa6d8704e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "10349dde-fb60-48ba-bc7b-42180c5eb49e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:42:33 compute-1 nova_compute[225855]: 2026-01-20 14:42:33.107 225859 DEBUG nova.compute.manager [req-0e2feea7-4ae8-409d-ac26-f1bf884f8daf req-f5ef4ac8-cbd1-4279-b4c0-f40aa6d8704e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 10349dde-fb60-48ba-bc7b-42180c5eb49e] No waiting events found dispatching network-vif-plugged-e2648ead-7162-4661-94e1-755faa8f1fd1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:42:33 compute-1 nova_compute[225855]: 2026-01-20 14:42:33.107 225859 WARNING nova.compute.manager [req-0e2feea7-4ae8-409d-ac26-f1bf884f8daf req-f5ef4ac8-cbd1-4279-b4c0-f40aa6d8704e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 10349dde-fb60-48ba-bc7b-42180c5eb49e] Received unexpected event network-vif-plugged-e2648ead-7162-4661-94e1-755faa8f1fd1 for instance with vm_state active and task_state None.
Jan 20 14:42:33 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:42:33 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:42:33 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:42:33.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:42:33 compute-1 nova_compute[225855]: 2026-01-20 14:42:33.554 225859 DEBUG nova.network.neutron [req-928ae490-427b-4f76-924a-e4181bf9d70a req-c68095fc-99a6-4de7-939c-cba34d8736e1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 10349dde-fb60-48ba-bc7b-42180c5eb49e] Updated VIF entry in instance network info cache for port e2648ead-7162-4661-94e1-755faa8f1fd1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 14:42:33 compute-1 nova_compute[225855]: 2026-01-20 14:42:33.554 225859 DEBUG nova.network.neutron [req-928ae490-427b-4f76-924a-e4181bf9d70a req-c68095fc-99a6-4de7-939c-cba34d8736e1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 10349dde-fb60-48ba-bc7b-42180c5eb49e] Updating instance_info_cache with network_info: [{"id": "607e59a4-2a6b-424a-9413-be318079781e", "address": "fa:16:3e:a3:13:21", "network": {"id": "fc21b99b-4e34-422c-be05-0a440009dac4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-808285772-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.203", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3f93fd4b2154dda9f38e62334904303", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap607e59a4-2a", "ovs_interfaceid": "607e59a4-2a6b-424a-9413-be318079781e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "e2648ead-7162-4661-94e1-755faa8f1fd1", "address": "fa:16:3e:80:35:48", "network": {"id": "fc21b99b-4e34-422c-be05-0a440009dac4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-808285772-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3f93fd4b2154dda9f38e62334904303", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape2648ead-71", "ovs_interfaceid": "e2648ead-7162-4661-94e1-755faa8f1fd1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:42:33 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/3354975338' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:42:33 compute-1 nova_compute[225855]: 2026-01-20 14:42:33.643 225859 DEBUG oslo_concurrency.lockutils [req-928ae490-427b-4f76-924a-e4181bf9d70a req-c68095fc-99a6-4de7-939c-cba34d8736e1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-10349dde-fb60-48ba-bc7b-42180c5eb49e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:42:33 compute-1 nova_compute[225855]: 2026-01-20 14:42:33.644 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquired lock "refresh_cache-10349dde-fb60-48ba-bc7b-42180c5eb49e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:42:33 compute-1 nova_compute[225855]: 2026-01-20 14:42:33.644 225859 DEBUG nova.network.neutron [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] [instance: 10349dde-fb60-48ba-bc7b-42180c5eb49e] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 20 14:42:33 compute-1 nova_compute[225855]: 2026-01-20 14:42:33.644 225859 DEBUG nova.objects.instance [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 10349dde-fb60-48ba-bc7b-42180c5eb49e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:42:34 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:42:34 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:42:34 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:42:34.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:42:34 compute-1 nova_compute[225855]: 2026-01-20 14:42:34.604 225859 DEBUG oslo_concurrency.lockutils [None req-c5c1a956-f98e-4e92-8ffb-c6a46dd06190 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Acquiring lock "refresh_cache-10349dde-fb60-48ba-bc7b-42180c5eb49e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:42:35 compute-1 ceph-mon[81775]: pgmap v1626: 321 pgs: 321 active+clean; 386 MiB data, 795 MiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 1.3 MiB/s wr, 175 op/s
Jan 20 14:42:35 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/3903070332' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:42:35 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:42:35 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:42:35 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:42:35.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:42:35 compute-1 nova_compute[225855]: 2026-01-20 14:42:35.289 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:42:35 compute-1 nova_compute[225855]: 2026-01-20 14:42:35.315 225859 DEBUG oslo_concurrency.lockutils [None req-2e03531a-2dfa-41ad-89c5-2d64e98b64c5 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Acquiring lock "10349dde-fb60-48ba-bc7b-42180c5eb49e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:42:35 compute-1 nova_compute[225855]: 2026-01-20 14:42:35.316 225859 DEBUG oslo_concurrency.lockutils [None req-2e03531a-2dfa-41ad-89c5-2d64e98b64c5 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Lock "10349dde-fb60-48ba-bc7b-42180c5eb49e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:42:35 compute-1 nova_compute[225855]: 2026-01-20 14:42:35.316 225859 DEBUG oslo_concurrency.lockutils [None req-2e03531a-2dfa-41ad-89c5-2d64e98b64c5 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Acquiring lock "10349dde-fb60-48ba-bc7b-42180c5eb49e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:42:35 compute-1 nova_compute[225855]: 2026-01-20 14:42:35.316 225859 DEBUG oslo_concurrency.lockutils [None req-2e03531a-2dfa-41ad-89c5-2d64e98b64c5 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Lock "10349dde-fb60-48ba-bc7b-42180c5eb49e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:42:35 compute-1 nova_compute[225855]: 2026-01-20 14:42:35.317 225859 DEBUG oslo_concurrency.lockutils [None req-2e03531a-2dfa-41ad-89c5-2d64e98b64c5 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Lock "10349dde-fb60-48ba-bc7b-42180c5eb49e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:42:35 compute-1 nova_compute[225855]: 2026-01-20 14:42:35.318 225859 INFO nova.compute.manager [None req-2e03531a-2dfa-41ad-89c5-2d64e98b64c5 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] [instance: 10349dde-fb60-48ba-bc7b-42180c5eb49e] Terminating instance
Jan 20 14:42:35 compute-1 nova_compute[225855]: 2026-01-20 14:42:35.319 225859 DEBUG nova.compute.manager [None req-2e03531a-2dfa-41ad-89c5-2d64e98b64c5 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] [instance: 10349dde-fb60-48ba-bc7b-42180c5eb49e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 20 14:42:35 compute-1 kernel: tap607e59a4-2a (unregistering): left promiscuous mode
Jan 20 14:42:35 compute-1 NetworkManager[49104]: <info>  [1768920155.4351] device (tap607e59a4-2a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 20 14:42:35 compute-1 nova_compute[225855]: 2026-01-20 14:42:35.450 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:42:35 compute-1 ovn_controller[130490]: 2026-01-20T14:42:35Z|00264|binding|INFO|Releasing lport 607e59a4-2a6b-424a-9413-be318079781e from this chassis (sb_readonly=0)
Jan 20 14:42:35 compute-1 ovn_controller[130490]: 2026-01-20T14:42:35Z|00265|binding|INFO|Setting lport 607e59a4-2a6b-424a-9413-be318079781e down in Southbound
Jan 20 14:42:35 compute-1 ovn_controller[130490]: 2026-01-20T14:42:35Z|00266|binding|INFO|Removing iface tap607e59a4-2a ovn-installed in OVS
Jan 20 14:42:35 compute-1 nova_compute[225855]: 2026-01-20 14:42:35.453 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:42:35 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:42:35.467 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a3:13:21 10.100.0.11'], port_security=['fa:16:3e:a3:13:21 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '10349dde-fb60-48ba-bc7b-42180c5eb49e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fc21b99b-4e34-422c-be05-0a440009dac4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e3f93fd4b2154dda9f38e62334904303', 'neutron:revision_number': '4', 'neutron:security_group_ids': '52b08fd6-6aa8-4470-b89c-ece04e1c959e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com', 'neutron:port_fip': '192.168.122.203'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7af6b6bc-3cbd-48be-9f10-23ec011e0426, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], logical_port=607e59a4-2a6b-424a-9413-be318079781e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:42:35 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:42:35.468 140354 INFO neutron.agent.ovn.metadata.agent [-] Port 607e59a4-2a6b-424a-9413-be318079781e in datapath fc21b99b-4e34-422c-be05-0a440009dac4 unbound from our chassis
Jan 20 14:42:35 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:42:35.471 140354 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network fc21b99b-4e34-422c-be05-0a440009dac4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 20 14:42:35 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:42:35.472 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[ab1c0c1e-1af2-45a9-8e3a-c9b190c50528]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:42:35 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:42:35.472 140354 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-fc21b99b-4e34-422c-be05-0a440009dac4 namespace which is not needed anymore
Jan 20 14:42:35 compute-1 nova_compute[225855]: 2026-01-20 14:42:35.474 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:42:35 compute-1 systemd[1]: machine-qemu\x2d31\x2dinstance\x2d00000047.scope: Deactivated successfully.
Jan 20 14:42:35 compute-1 systemd[1]: machine-qemu\x2d31\x2dinstance\x2d00000047.scope: Consumed 15.809s CPU time.
Jan 20 14:42:35 compute-1 systemd-machined[194361]: Machine qemu-31-instance-00000047 terminated.
Jan 20 14:42:35 compute-1 nova_compute[225855]: 2026-01-20 14:42:35.555 225859 INFO nova.virt.libvirt.driver [-] [instance: 10349dde-fb60-48ba-bc7b-42180c5eb49e] Instance destroyed successfully.
Jan 20 14:42:35 compute-1 nova_compute[225855]: 2026-01-20 14:42:35.556 225859 DEBUG nova.objects.instance [None req-2e03531a-2dfa-41ad-89c5-2d64e98b64c5 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Lazy-loading 'resources' on Instance uuid 10349dde-fb60-48ba-bc7b-42180c5eb49e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:42:35 compute-1 nova_compute[225855]: 2026-01-20 14:42:35.577 225859 DEBUG nova.virt.libvirt.vif [None req-2e03531a-2dfa-41ad-89c5-2d64e98b64c5 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T14:41:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-689127716',display_name='tempest-tempest.common.compute-instance-689127716',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-689127716',id=71,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDk9SkW5N7MhrGaZslG18EJ7xoBof9PQa4upjUw+XxfbO5rNOjJYMJtJMRGPfgbl1pwAZZD7LHjNNMRFKVo+T+C8Rnr+HXWsPYQmvPGwjjZ++NXvRdqES1LIbRDiwaFMJQ==',key_name='tempest-keypair-1970360297',keypairs=<?>,launch_index=0,launched_at=2026-01-20T14:41:22Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='e3f93fd4b2154dda9f38e62334904303',ramdisk_id='',reservation_id='r-2p3ovedc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-305746947',owner_user_name='tempest-AttachInterfacesTestJSON-305746947-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T14:41:22Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c8a9fb458d27434495a77a94827b6097',uuid=10349dde-fb60-48ba-bc7b-42180c5eb49e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "607e59a4-2a6b-424a-9413-be318079781e", "address": "fa:16:3e:a3:13:21", "network": {"id": "fc21b99b-4e34-422c-be05-0a440009dac4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-808285772-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.203", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3f93fd4b2154dda9f38e62334904303", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap607e59a4-2a", "ovs_interfaceid": "607e59a4-2a6b-424a-9413-be318079781e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 20 14:42:35 compute-1 nova_compute[225855]: 2026-01-20 14:42:35.578 225859 DEBUG nova.network.os_vif_util [None req-2e03531a-2dfa-41ad-89c5-2d64e98b64c5 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Converting VIF {"id": "607e59a4-2a6b-424a-9413-be318079781e", "address": "fa:16:3e:a3:13:21", "network": {"id": "fc21b99b-4e34-422c-be05-0a440009dac4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-808285772-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.203", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3f93fd4b2154dda9f38e62334904303", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap607e59a4-2a", "ovs_interfaceid": "607e59a4-2a6b-424a-9413-be318079781e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:42:35 compute-1 nova_compute[225855]: 2026-01-20 14:42:35.579 225859 DEBUG nova.network.os_vif_util [None req-2e03531a-2dfa-41ad-89c5-2d64e98b64c5 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:a3:13:21,bridge_name='br-int',has_traffic_filtering=True,id=607e59a4-2a6b-424a-9413-be318079781e,network=Network(fc21b99b-4e34-422c-be05-0a440009dac4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap607e59a4-2a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:42:35 compute-1 nova_compute[225855]: 2026-01-20 14:42:35.579 225859 DEBUG os_vif [None req-2e03531a-2dfa-41ad-89c5-2d64e98b64c5 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:a3:13:21,bridge_name='br-int',has_traffic_filtering=True,id=607e59a4-2a6b-424a-9413-be318079781e,network=Network(fc21b99b-4e34-422c-be05-0a440009dac4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap607e59a4-2a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 20 14:42:35 compute-1 nova_compute[225855]: 2026-01-20 14:42:35.582 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:42:35 compute-1 nova_compute[225855]: 2026-01-20 14:42:35.583 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap607e59a4-2a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:42:35 compute-1 nova_compute[225855]: 2026-01-20 14:42:35.584 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:42:35 compute-1 nova_compute[225855]: 2026-01-20 14:42:35.585 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:42:35 compute-1 nova_compute[225855]: 2026-01-20 14:42:35.587 225859 INFO os_vif [None req-2e03531a-2dfa-41ad-89c5-2d64e98b64c5 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:a3:13:21,bridge_name='br-int',has_traffic_filtering=True,id=607e59a4-2a6b-424a-9413-be318079781e,network=Network(fc21b99b-4e34-422c-be05-0a440009dac4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap607e59a4-2a')
Jan 20 14:42:35 compute-1 nova_compute[225855]: 2026-01-20 14:42:35.588 225859 DEBUG nova.virt.libvirt.vif [None req-2e03531a-2dfa-41ad-89c5-2d64e98b64c5 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T14:41:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-689127716',display_name='tempest-tempest.common.compute-instance-689127716',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-689127716',id=71,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDk9SkW5N7MhrGaZslG18EJ7xoBof9PQa4upjUw+XxfbO5rNOjJYMJtJMRGPfgbl1pwAZZD7LHjNNMRFKVo+T+C8Rnr+HXWsPYQmvPGwjjZ++NXvRdqES1LIbRDiwaFMJQ==',key_name='tempest-keypair-1970360297',keypairs=<?>,launch_index=0,launched_at=2026-01-20T14:41:22Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='e3f93fd4b2154dda9f38e62334904303',ramdisk_id='',reservation_id='r-2p3ovedc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-305746947',owner_user_name='tempest-AttachInterfacesTestJSON-305746947-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T14:41:22Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c8a9fb458d27434495a77a94827b6097',uuid=10349dde-fb60-48ba-bc7b-42180c5eb49e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e2648ead-7162-4661-94e1-755faa8f1fd1", "address": "fa:16:3e:80:35:48", "network": {"id": "fc21b99b-4e34-422c-be05-0a440009dac4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-808285772-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3f93fd4b2154dda9f38e62334904303", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape2648ead-71", "ovs_interfaceid": "e2648ead-7162-4661-94e1-755faa8f1fd1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 20 14:42:35 compute-1 nova_compute[225855]: 2026-01-20 14:42:35.588 225859 DEBUG nova.network.os_vif_util [None req-2e03531a-2dfa-41ad-89c5-2d64e98b64c5 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Converting VIF {"id": "e2648ead-7162-4661-94e1-755faa8f1fd1", "address": "fa:16:3e:80:35:48", "network": {"id": "fc21b99b-4e34-422c-be05-0a440009dac4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-808285772-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3f93fd4b2154dda9f38e62334904303", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape2648ead-71", "ovs_interfaceid": "e2648ead-7162-4661-94e1-755faa8f1fd1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:42:35 compute-1 nova_compute[225855]: 2026-01-20 14:42:35.589 225859 DEBUG nova.network.os_vif_util [None req-2e03531a-2dfa-41ad-89c5-2d64e98b64c5 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:80:35:48,bridge_name='br-int',has_traffic_filtering=True,id=e2648ead-7162-4661-94e1-755faa8f1fd1,network=Network(fc21b99b-4e34-422c-be05-0a440009dac4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tape2648ead-71') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:42:35 compute-1 nova_compute[225855]: 2026-01-20 14:42:35.590 225859 DEBUG os_vif [None req-2e03531a-2dfa-41ad-89c5-2d64e98b64c5 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:80:35:48,bridge_name='br-int',has_traffic_filtering=True,id=e2648ead-7162-4661-94e1-755faa8f1fd1,network=Network(fc21b99b-4e34-422c-be05-0a440009dac4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tape2648ead-71') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 20 14:42:35 compute-1 nova_compute[225855]: 2026-01-20 14:42:35.591 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:42:35 compute-1 nova_compute[225855]: 2026-01-20 14:42:35.591 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape2648ead-71, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:42:35 compute-1 nova_compute[225855]: 2026-01-20 14:42:35.591 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:42:35 compute-1 nova_compute[225855]: 2026-01-20 14:42:35.594 225859 INFO os_vif [None req-2e03531a-2dfa-41ad-89c5-2d64e98b64c5 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:80:35:48,bridge_name='br-int',has_traffic_filtering=True,id=e2648ead-7162-4661-94e1-755faa8f1fd1,network=Network(fc21b99b-4e34-422c-be05-0a440009dac4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tape2648ead-71')
Jan 20 14:42:35 compute-1 neutron-haproxy-ovnmeta-fc21b99b-4e34-422c-be05-0a440009dac4[254080]: [NOTICE]   (254084) : haproxy version is 2.8.14-c23fe91
Jan 20 14:42:35 compute-1 neutron-haproxy-ovnmeta-fc21b99b-4e34-422c-be05-0a440009dac4[254080]: [NOTICE]   (254084) : path to executable is /usr/sbin/haproxy
Jan 20 14:42:35 compute-1 neutron-haproxy-ovnmeta-fc21b99b-4e34-422c-be05-0a440009dac4[254080]: [ALERT]    (254084) : Current worker (254086) exited with code 143 (Terminated)
Jan 20 14:42:35 compute-1 neutron-haproxy-ovnmeta-fc21b99b-4e34-422c-be05-0a440009dac4[254080]: [WARNING]  (254084) : All workers exited. Exiting... (0)
Jan 20 14:42:35 compute-1 systemd[1]: libpod-2a35028076fb1ea020c7e21b3e0194e0459a116bbf5b4723a5ca26a751673906.scope: Deactivated successfully.
Jan 20 14:42:35 compute-1 podman[255671]: 2026-01-20 14:42:35.630447165 +0000 UTC m=+0.055489932 container died 2a35028076fb1ea020c7e21b3e0194e0459a116bbf5b4723a5ca26a751673906 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fc21b99b-4e34-422c-be05-0a440009dac4, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2)
Jan 20 14:42:35 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2a35028076fb1ea020c7e21b3e0194e0459a116bbf5b4723a5ca26a751673906-userdata-shm.mount: Deactivated successfully.
Jan 20 14:42:35 compute-1 systemd[1]: var-lib-containers-storage-overlay-6b359ef85c164591e81dbb650ad14fe7eb3bd9cd95784fcc030b2336836287bd-merged.mount: Deactivated successfully.
Jan 20 14:42:35 compute-1 podman[255671]: 2026-01-20 14:42:35.67474211 +0000 UTC m=+0.099784887 container cleanup 2a35028076fb1ea020c7e21b3e0194e0459a116bbf5b4723a5ca26a751673906 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fc21b99b-4e34-422c-be05-0a440009dac4, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202)
Jan 20 14:42:35 compute-1 systemd[1]: libpod-conmon-2a35028076fb1ea020c7e21b3e0194e0459a116bbf5b4723a5ca26a751673906.scope: Deactivated successfully.
Jan 20 14:42:35 compute-1 podman[255721]: 2026-01-20 14:42:35.736940551 +0000 UTC m=+0.039828489 container remove 2a35028076fb1ea020c7e21b3e0194e0459a116bbf5b4723a5ca26a751673906 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fc21b99b-4e34-422c-be05-0a440009dac4, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 20 14:42:35 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:42:35.742 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[e9bf0ae6-ebbd-4ee7-ab38-ba5ea2896033]: (4, ('Tue Jan 20 02:42:35 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-fc21b99b-4e34-422c-be05-0a440009dac4 (2a35028076fb1ea020c7e21b3e0194e0459a116bbf5b4723a5ca26a751673906)\n2a35028076fb1ea020c7e21b3e0194e0459a116bbf5b4723a5ca26a751673906\nTue Jan 20 02:42:35 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-fc21b99b-4e34-422c-be05-0a440009dac4 (2a35028076fb1ea020c7e21b3e0194e0459a116bbf5b4723a5ca26a751673906)\n2a35028076fb1ea020c7e21b3e0194e0459a116bbf5b4723a5ca26a751673906\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:42:35 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:42:35.744 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[e92cd75a-5f82-443a-b23c-11039b74ae5f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:42:35 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:42:35.745 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfc21b99b-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:42:35 compute-1 nova_compute[225855]: 2026-01-20 14:42:35.747 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:42:35 compute-1 kernel: tapfc21b99b-40: left promiscuous mode
Jan 20 14:42:35 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:42:35.752 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[78523d57-c043-4e4c-9161-fa85018cfeff]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:42:35 compute-1 nova_compute[225855]: 2026-01-20 14:42:35.767 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:42:35 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:42:35.769 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[a07d00af-d815-4da2-919b-b33e77f5bc71]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:42:35 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:42:35.770 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[842d9032-911c-4366-ba44-623a68498d63]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:42:35 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:42:35.786 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[8e2c256a-27e6-4762-867d-25552b1086e9]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 511904, 'reachable_time': 44918, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 255736, 'error': None, 'target': 'ovnmeta-fc21b99b-4e34-422c-be05-0a440009dac4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:42:35 compute-1 systemd[1]: run-netns-ovnmeta\x2dfc21b99b\x2d4e34\x2d422c\x2dbe05\x2d0a440009dac4.mount: Deactivated successfully.
Jan 20 14:42:35 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:42:35.788 140466 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-fc21b99b-4e34-422c-be05-0a440009dac4 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 20 14:42:35 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:42:35.789 140466 DEBUG oslo.privsep.daemon [-] privsep: reply[7e0f0817-782a-4216-8ab5-7e79c3e238d9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:42:36 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:42:36 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:42:36 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:42:36.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:42:36 compute-1 ceph-osd[79119]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #47. Immutable memtables: 4.
Jan 20 14:42:36 compute-1 ceph-mon[81775]: pgmap v1627: 321 pgs: 321 active+clean; 386 MiB data, 795 MiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 82 KiB/s wr, 179 op/s
Jan 20 14:42:36 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:42:36 compute-1 nova_compute[225855]: 2026-01-20 14:42:36.421 225859 INFO nova.virt.libvirt.driver [None req-2e03531a-2dfa-41ad-89c5-2d64e98b64c5 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] [instance: 10349dde-fb60-48ba-bc7b-42180c5eb49e] Deleting instance files /var/lib/nova/instances/10349dde-fb60-48ba-bc7b-42180c5eb49e_del
Jan 20 14:42:36 compute-1 nova_compute[225855]: 2026-01-20 14:42:36.422 225859 INFO nova.virt.libvirt.driver [None req-2e03531a-2dfa-41ad-89c5-2d64e98b64c5 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] [instance: 10349dde-fb60-48ba-bc7b-42180c5eb49e] Deletion of /var/lib/nova/instances/10349dde-fb60-48ba-bc7b-42180c5eb49e_del complete
Jan 20 14:42:36 compute-1 nova_compute[225855]: 2026-01-20 14:42:36.519 225859 INFO nova.compute.manager [None req-2e03531a-2dfa-41ad-89c5-2d64e98b64c5 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] [instance: 10349dde-fb60-48ba-bc7b-42180c5eb49e] Took 1.20 seconds to destroy the instance on the hypervisor.
Jan 20 14:42:36 compute-1 nova_compute[225855]: 2026-01-20 14:42:36.519 225859 DEBUG oslo.service.loopingcall [None req-2e03531a-2dfa-41ad-89c5-2d64e98b64c5 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 20 14:42:36 compute-1 nova_compute[225855]: 2026-01-20 14:42:36.520 225859 DEBUG nova.compute.manager [-] [instance: 10349dde-fb60-48ba-bc7b-42180c5eb49e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 20 14:42:36 compute-1 nova_compute[225855]: 2026-01-20 14:42:36.520 225859 DEBUG nova.network.neutron [-] [instance: 10349dde-fb60-48ba-bc7b-42180c5eb49e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 20 14:42:36 compute-1 nova_compute[225855]: 2026-01-20 14:42:36.548 225859 DEBUG nova.network.neutron [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] [instance: 10349dde-fb60-48ba-bc7b-42180c5eb49e] Updating instance_info_cache with network_info: [{"id": "607e59a4-2a6b-424a-9413-be318079781e", "address": "fa:16:3e:a3:13:21", "network": {"id": "fc21b99b-4e34-422c-be05-0a440009dac4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-808285772-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.203", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3f93fd4b2154dda9f38e62334904303", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap607e59a4-2a", "ovs_interfaceid": "607e59a4-2a6b-424a-9413-be318079781e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:42:36 compute-1 nova_compute[225855]: 2026-01-20 14:42:36.765 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Releasing lock "refresh_cache-10349dde-fb60-48ba-bc7b-42180c5eb49e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:42:36 compute-1 nova_compute[225855]: 2026-01-20 14:42:36.766 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] [instance: 10349dde-fb60-48ba-bc7b-42180c5eb49e] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 20 14:42:36 compute-1 nova_compute[225855]: 2026-01-20 14:42:36.767 225859 DEBUG oslo_concurrency.lockutils [None req-c5c1a956-f98e-4e92-8ffb-c6a46dd06190 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Acquired lock "refresh_cache-10349dde-fb60-48ba-bc7b-42180c5eb49e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:42:36 compute-1 nova_compute[225855]: 2026-01-20 14:42:36.768 225859 DEBUG nova.network.neutron [None req-c5c1a956-f98e-4e92-8ffb-c6a46dd06190 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] [instance: 10349dde-fb60-48ba-bc7b-42180c5eb49e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 14:42:36 compute-1 nova_compute[225855]: 2026-01-20 14:42:36.770 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:42:36 compute-1 nova_compute[225855]: 2026-01-20 14:42:36.771 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:42:36 compute-1 nova_compute[225855]: 2026-01-20 14:42:36.771 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:42:36 compute-1 nova_compute[225855]: 2026-01-20 14:42:36.772 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:42:36 compute-1 nova_compute[225855]: 2026-01-20 14:42:36.772 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 14:42:36 compute-1 nova_compute[225855]: 2026-01-20 14:42:36.773 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:42:36 compute-1 nova_compute[225855]: 2026-01-20 14:42:36.807 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:42:36 compute-1 nova_compute[225855]: 2026-01-20 14:42:36.808 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:42:36 compute-1 nova_compute[225855]: 2026-01-20 14:42:36.808 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:42:36 compute-1 nova_compute[225855]: 2026-01-20 14:42:36.808 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 14:42:36 compute-1 nova_compute[225855]: 2026-01-20 14:42:36.809 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:42:37 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:42:37 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:42:37 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:42:37.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:42:37 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:42:37 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2052746815' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:42:37 compute-1 nova_compute[225855]: 2026-01-20 14:42:37.277 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:42:37 compute-1 nova_compute[225855]: 2026-01-20 14:42:37.357 225859 DEBUG nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] skipping disk for instance-0000004b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 14:42:37 compute-1 nova_compute[225855]: 2026-01-20 14:42:37.357 225859 DEBUG nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] skipping disk for instance-0000004b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 14:42:37 compute-1 nova_compute[225855]: 2026-01-20 14:42:37.430 225859 DEBUG nova.compute.manager [req-a3662387-f8e9-4fa3-89e9-929adcf31f49 req-b55b30d8-0d99-42dd-8a67-643e4351944b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 10349dde-fb60-48ba-bc7b-42180c5eb49e] Received event network-vif-unplugged-607e59a4-2a6b-424a-9413-be318079781e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:42:37 compute-1 nova_compute[225855]: 2026-01-20 14:42:37.430 225859 DEBUG oslo_concurrency.lockutils [req-a3662387-f8e9-4fa3-89e9-929adcf31f49 req-b55b30d8-0d99-42dd-8a67-643e4351944b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "10349dde-fb60-48ba-bc7b-42180c5eb49e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:42:37 compute-1 nova_compute[225855]: 2026-01-20 14:42:37.431 225859 DEBUG oslo_concurrency.lockutils [req-a3662387-f8e9-4fa3-89e9-929adcf31f49 req-b55b30d8-0d99-42dd-8a67-643e4351944b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "10349dde-fb60-48ba-bc7b-42180c5eb49e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:42:37 compute-1 nova_compute[225855]: 2026-01-20 14:42:37.431 225859 DEBUG oslo_concurrency.lockutils [req-a3662387-f8e9-4fa3-89e9-929adcf31f49 req-b55b30d8-0d99-42dd-8a67-643e4351944b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "10349dde-fb60-48ba-bc7b-42180c5eb49e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:42:37 compute-1 nova_compute[225855]: 2026-01-20 14:42:37.431 225859 DEBUG nova.compute.manager [req-a3662387-f8e9-4fa3-89e9-929adcf31f49 req-b55b30d8-0d99-42dd-8a67-643e4351944b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 10349dde-fb60-48ba-bc7b-42180c5eb49e] No waiting events found dispatching network-vif-unplugged-607e59a4-2a6b-424a-9413-be318079781e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:42:37 compute-1 nova_compute[225855]: 2026-01-20 14:42:37.431 225859 DEBUG nova.compute.manager [req-a3662387-f8e9-4fa3-89e9-929adcf31f49 req-b55b30d8-0d99-42dd-8a67-643e4351944b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 10349dde-fb60-48ba-bc7b-42180c5eb49e] Received event network-vif-unplugged-607e59a4-2a6b-424a-9413-be318079781e for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 20 14:42:37 compute-1 nova_compute[225855]: 2026-01-20 14:42:37.432 225859 DEBUG nova.compute.manager [req-a3662387-f8e9-4fa3-89e9-929adcf31f49 req-b55b30d8-0d99-42dd-8a67-643e4351944b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 10349dde-fb60-48ba-bc7b-42180c5eb49e] Received event network-vif-plugged-607e59a4-2a6b-424a-9413-be318079781e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:42:37 compute-1 nova_compute[225855]: 2026-01-20 14:42:37.432 225859 DEBUG oslo_concurrency.lockutils [req-a3662387-f8e9-4fa3-89e9-929adcf31f49 req-b55b30d8-0d99-42dd-8a67-643e4351944b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "10349dde-fb60-48ba-bc7b-42180c5eb49e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:42:37 compute-1 nova_compute[225855]: 2026-01-20 14:42:37.432 225859 DEBUG oslo_concurrency.lockutils [req-a3662387-f8e9-4fa3-89e9-929adcf31f49 req-b55b30d8-0d99-42dd-8a67-643e4351944b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "10349dde-fb60-48ba-bc7b-42180c5eb49e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:42:37 compute-1 nova_compute[225855]: 2026-01-20 14:42:37.433 225859 DEBUG oslo_concurrency.lockutils [req-a3662387-f8e9-4fa3-89e9-929adcf31f49 req-b55b30d8-0d99-42dd-8a67-643e4351944b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "10349dde-fb60-48ba-bc7b-42180c5eb49e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:42:37 compute-1 nova_compute[225855]: 2026-01-20 14:42:37.433 225859 DEBUG nova.compute.manager [req-a3662387-f8e9-4fa3-89e9-929adcf31f49 req-b55b30d8-0d99-42dd-8a67-643e4351944b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 10349dde-fb60-48ba-bc7b-42180c5eb49e] No waiting events found dispatching network-vif-plugged-607e59a4-2a6b-424a-9413-be318079781e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:42:37 compute-1 nova_compute[225855]: 2026-01-20 14:42:37.433 225859 WARNING nova.compute.manager [req-a3662387-f8e9-4fa3-89e9-929adcf31f49 req-b55b30d8-0d99-42dd-8a67-643e4351944b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 10349dde-fb60-48ba-bc7b-42180c5eb49e] Received unexpected event network-vif-plugged-607e59a4-2a6b-424a-9413-be318079781e for instance with vm_state active and task_state deleting.
Jan 20 14:42:37 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:42:37.515 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=25, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '12:bb:42', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '06:92:24:f7:15:56'}, ipsec=False) old=SB_Global(nb_cfg=24) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:42:37 compute-1 nova_compute[225855]: 2026-01-20 14:42:37.515 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:42:37 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:42:37.516 140354 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 20 14:42:37 compute-1 nova_compute[225855]: 2026-01-20 14:42:37.555 225859 WARNING nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 14:42:37 compute-1 nova_compute[225855]: 2026-01-20 14:42:37.557 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4310MB free_disk=20.81363296508789GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 14:42:37 compute-1 nova_compute[225855]: 2026-01-20 14:42:37.557 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:42:37 compute-1 nova_compute[225855]: 2026-01-20 14:42:37.557 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:42:37 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/2052746815' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:42:37 compute-1 nova_compute[225855]: 2026-01-20 14:42:37.738 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Instance 10349dde-fb60-48ba-bc7b-42180c5eb49e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 20 14:42:37 compute-1 nova_compute[225855]: 2026-01-20 14:42:37.738 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Instance 504acd93-cd55-496e-a85f-30e811f827d4 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 20 14:42:37 compute-1 nova_compute[225855]: 2026-01-20 14:42:37.739 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 14:42:37 compute-1 nova_compute[225855]: 2026-01-20 14:42:37.739 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 14:42:37 compute-1 nova_compute[225855]: 2026-01-20 14:42:37.842 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:42:38 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:42:38 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:42:38 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:42:38.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:42:38 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:42:38 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2781861380' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:42:38 compute-1 nova_compute[225855]: 2026-01-20 14:42:38.285 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:42:38 compute-1 nova_compute[225855]: 2026-01-20 14:42:38.290 225859 DEBUG nova.compute.provider_tree [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:42:38 compute-1 nova_compute[225855]: 2026-01-20 14:42:38.305 225859 DEBUG nova.scheduler.client.report [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:42:38 compute-1 nova_compute[225855]: 2026-01-20 14:42:38.349 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 14:42:38 compute-1 nova_compute[225855]: 2026-01-20 14:42:38.350 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.792s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:42:38 compute-1 ceph-mon[81775]: pgmap v1628: 321 pgs: 321 active+clean; 372 MiB data, 803 MiB used, 20 GiB / 21 GiB avail; 5.1 MiB/s rd, 627 KiB/s wr, 216 op/s
Jan 20 14:42:38 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/2781861380' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:42:39 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:42:39 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:42:39 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:42:39.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:42:39 compute-1 nova_compute[225855]: 2026-01-20 14:42:39.554 225859 DEBUG nova.network.neutron [None req-c5c1a956-f98e-4e92-8ffb-c6a46dd06190 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] [instance: 10349dde-fb60-48ba-bc7b-42180c5eb49e] Updating instance_info_cache with network_info: [{"id": "607e59a4-2a6b-424a-9413-be318079781e", "address": "fa:16:3e:a3:13:21", "network": {"id": "fc21b99b-4e34-422c-be05-0a440009dac4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-808285772-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.203", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3f93fd4b2154dda9f38e62334904303", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap607e59a4-2a", "ovs_interfaceid": "607e59a4-2a6b-424a-9413-be318079781e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:42:39 compute-1 nova_compute[225855]: 2026-01-20 14:42:39.584 225859 DEBUG oslo_concurrency.lockutils [None req-c5c1a956-f98e-4e92-8ffb-c6a46dd06190 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Releasing lock "refresh_cache-10349dde-fb60-48ba-bc7b-42180c5eb49e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:42:39 compute-1 ovn_controller[130490]: 2026-01-20T14:42:39Z|00038|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:dd:f6:26 10.100.0.12
Jan 20 14:42:39 compute-1 ovn_controller[130490]: 2026-01-20T14:42:39Z|00039|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:dd:f6:26 10.100.0.12
Jan 20 14:42:39 compute-1 nova_compute[225855]: 2026-01-20 14:42:39.615 225859 DEBUG oslo_concurrency.lockutils [None req-c5c1a956-f98e-4e92-8ffb-c6a46dd06190 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Lock "interface-10349dde-fb60-48ba-bc7b-42180c5eb49e-e2648ead-7162-4661-94e1-755faa8f1fd1" "released" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: held 7.899s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:42:39 compute-1 ceph-mon[81775]: pgmap v1629: 321 pgs: 321 active+clean; 362 MiB data, 800 MiB used, 20 GiB / 21 GiB avail; 4.7 MiB/s rd, 1.5 MiB/s wr, 249 op/s
Jan 20 14:42:40 compute-1 nova_compute[225855]: 2026-01-20 14:42:40.007 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:42:40 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:42:40 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:42:40 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:42:40.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:42:40 compute-1 nova_compute[225855]: 2026-01-20 14:42:40.170 225859 DEBUG nova.network.neutron [-] [instance: 10349dde-fb60-48ba-bc7b-42180c5eb49e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:42:40 compute-1 nova_compute[225855]: 2026-01-20 14:42:40.217 225859 INFO nova.compute.manager [-] [instance: 10349dde-fb60-48ba-bc7b-42180c5eb49e] Took 3.70 seconds to deallocate network for instance.
Jan 20 14:42:40 compute-1 nova_compute[225855]: 2026-01-20 14:42:40.289 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:42:40 compute-1 nova_compute[225855]: 2026-01-20 14:42:40.315 225859 DEBUG oslo_concurrency.lockutils [None req-2e03531a-2dfa-41ad-89c5-2d64e98b64c5 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:42:40 compute-1 nova_compute[225855]: 2026-01-20 14:42:40.316 225859 DEBUG oslo_concurrency.lockutils [None req-2e03531a-2dfa-41ad-89c5-2d64e98b64c5 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:42:40 compute-1 nova_compute[225855]: 2026-01-20 14:42:40.423 225859 DEBUG oslo_concurrency.processutils [None req-2e03531a-2dfa-41ad-89c5-2d64e98b64c5 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:42:40 compute-1 nova_compute[225855]: 2026-01-20 14:42:40.585 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:42:40 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:42:40 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2242138520' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:42:40 compute-1 nova_compute[225855]: 2026-01-20 14:42:40.884 225859 DEBUG oslo_concurrency.processutils [None req-2e03531a-2dfa-41ad-89c5-2d64e98b64c5 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:42:40 compute-1 nova_compute[225855]: 2026-01-20 14:42:40.889 225859 DEBUG nova.compute.provider_tree [None req-2e03531a-2dfa-41ad-89c5-2d64e98b64c5 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:42:40 compute-1 nova_compute[225855]: 2026-01-20 14:42:40.930 225859 DEBUG nova.scheduler.client.report [None req-2e03531a-2dfa-41ad-89c5-2d64e98b64c5 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:42:40 compute-1 nova_compute[225855]: 2026-01-20 14:42:40.960 225859 DEBUG oslo_concurrency.lockutils [None req-2e03531a-2dfa-41ad-89c5-2d64e98b64c5 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.643s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:42:41 compute-1 nova_compute[225855]: 2026-01-20 14:42:41.000 225859 INFO nova.scheduler.client.report [None req-2e03531a-2dfa-41ad-89c5-2d64e98b64c5 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Deleted allocations for instance 10349dde-fb60-48ba-bc7b-42180c5eb49e
Jan 20 14:42:41 compute-1 nova_compute[225855]: 2026-01-20 14:42:41.140 225859 DEBUG oslo_concurrency.lockutils [None req-2e03531a-2dfa-41ad-89c5-2d64e98b64c5 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Lock "10349dde-fb60-48ba-bc7b-42180c5eb49e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.824s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:42:41 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:42:41 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:42:41 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:42:41.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:42:41 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:42:41 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/2242138520' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:42:42 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:42:42 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:42:42 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:42:42.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:42:42 compute-1 nova_compute[225855]: 2026-01-20 14:42:42.302 225859 DEBUG nova.compute.manager [req-a4c423ee-3814-4a0b-a36d-809c06a65fdc req-092a1cbe-9970-4a76-8f50-f9bc63f8cd52 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 10349dde-fb60-48ba-bc7b-42180c5eb49e] Received event network-vif-deleted-607e59a4-2a6b-424a-9413-be318079781e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:42:42 compute-1 ceph-mon[81775]: pgmap v1630: 321 pgs: 321 active+clean; 354 MiB data, 825 MiB used, 20 GiB / 21 GiB avail; 5.1 MiB/s rd, 3.4 MiB/s wr, 311 op/s
Jan 20 14:42:43 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:42:43 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:42:43 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:42:43.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:42:43 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/1080104359' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:42:44 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:42:44 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:42:44 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:42:44.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:42:44 compute-1 nova_compute[225855]: 2026-01-20 14:42:44.345 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:42:44 compute-1 nova_compute[225855]: 2026-01-20 14:42:44.345 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:42:44 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:42:44.519 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5ffd4ac3-9266-4927-98ad-20a17782c725, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '25'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:42:44 compute-1 ceph-mon[81775]: pgmap v1631: 321 pgs: 321 active+clean; 357 MiB data, 832 MiB used, 20 GiB / 21 GiB avail; 5.0 MiB/s rd, 4.3 MiB/s wr, 328 op/s
Jan 20 14:42:44 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/3388504398' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:42:45 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:42:45 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:42:45 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:42:45.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:42:45 compute-1 nova_compute[225855]: 2026-01-20 14:42:45.291 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:42:45 compute-1 nova_compute[225855]: 2026-01-20 14:42:45.377 225859 DEBUG oslo_concurrency.lockutils [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Acquiring lock "69cc4cf9-dfe3-44cb-b811-0300b5ccd66b" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:42:45 compute-1 nova_compute[225855]: 2026-01-20 14:42:45.378 225859 DEBUG oslo_concurrency.lockutils [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Lock "69cc4cf9-dfe3-44cb-b811-0300b5ccd66b" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:42:45 compute-1 nova_compute[225855]: 2026-01-20 14:42:45.398 225859 DEBUG nova.compute.manager [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] [instance: 69cc4cf9-dfe3-44cb-b811-0300b5ccd66b] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 20 14:42:45 compute-1 nova_compute[225855]: 2026-01-20 14:42:45.468 225859 DEBUG oslo_concurrency.lockutils [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:42:45 compute-1 nova_compute[225855]: 2026-01-20 14:42:45.469 225859 DEBUG oslo_concurrency.lockutils [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:42:45 compute-1 nova_compute[225855]: 2026-01-20 14:42:45.475 225859 DEBUG nova.virt.hardware [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 20 14:42:45 compute-1 nova_compute[225855]: 2026-01-20 14:42:45.475 225859 INFO nova.compute.claims [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] [instance: 69cc4cf9-dfe3-44cb-b811-0300b5ccd66b] Claim successful on node compute-1.ctlplane.example.com
Jan 20 14:42:45 compute-1 nova_compute[225855]: 2026-01-20 14:42:45.587 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:42:45 compute-1 nova_compute[225855]: 2026-01-20 14:42:45.611 225859 DEBUG oslo_concurrency.processutils [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:42:46 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:42:46 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/265236336' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:42:46 compute-1 nova_compute[225855]: 2026-01-20 14:42:46.113 225859 DEBUG oslo_concurrency.processutils [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.502s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:42:46 compute-1 nova_compute[225855]: 2026-01-20 14:42:46.117 225859 DEBUG nova.compute.provider_tree [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:42:46 compute-1 nova_compute[225855]: 2026-01-20 14:42:46.143 225859 DEBUG nova.scheduler.client.report [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:42:46 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:42:46 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:42:46 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:42:46.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:42:46 compute-1 nova_compute[225855]: 2026-01-20 14:42:46.199 225859 DEBUG oslo_concurrency.lockutils [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.730s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:42:46 compute-1 nova_compute[225855]: 2026-01-20 14:42:46.200 225859 DEBUG nova.compute.manager [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] [instance: 69cc4cf9-dfe3-44cb-b811-0300b5ccd66b] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 20 14:42:46 compute-1 sudo[255833]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:42:46 compute-1 sudo[255833]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:42:46 compute-1 sudo[255833]: pam_unix(sudo:session): session closed for user root
Jan 20 14:42:46 compute-1 nova_compute[225855]: 2026-01-20 14:42:46.266 225859 DEBUG nova.compute.manager [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] [instance: 69cc4cf9-dfe3-44cb-b811-0300b5ccd66b] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 20 14:42:46 compute-1 nova_compute[225855]: 2026-01-20 14:42:46.266 225859 DEBUG nova.network.neutron [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] [instance: 69cc4cf9-dfe3-44cb-b811-0300b5ccd66b] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 20 14:42:46 compute-1 sudo[255858]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:42:46 compute-1 sudo[255858]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:42:46 compute-1 sudo[255858]: pam_unix(sudo:session): session closed for user root
Jan 20 14:42:46 compute-1 nova_compute[225855]: 2026-01-20 14:42:46.290 225859 INFO nova.virt.libvirt.driver [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] [instance: 69cc4cf9-dfe3-44cb-b811-0300b5ccd66b] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 20 14:42:46 compute-1 nova_compute[225855]: 2026-01-20 14:42:46.318 225859 DEBUG nova.compute.manager [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] [instance: 69cc4cf9-dfe3-44cb-b811-0300b5ccd66b] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 20 14:42:46 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:42:46 compute-1 nova_compute[225855]: 2026-01-20 14:42:46.506 225859 DEBUG nova.compute.manager [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] [instance: 69cc4cf9-dfe3-44cb-b811-0300b5ccd66b] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 20 14:42:46 compute-1 nova_compute[225855]: 2026-01-20 14:42:46.508 225859 DEBUG nova.virt.libvirt.driver [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] [instance: 69cc4cf9-dfe3-44cb-b811-0300b5ccd66b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 20 14:42:46 compute-1 nova_compute[225855]: 2026-01-20 14:42:46.508 225859 INFO nova.virt.libvirt.driver [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] [instance: 69cc4cf9-dfe3-44cb-b811-0300b5ccd66b] Creating image(s)
Jan 20 14:42:46 compute-1 nova_compute[225855]: 2026-01-20 14:42:46.531 225859 DEBUG nova.storage.rbd_utils [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] rbd image 69cc4cf9-dfe3-44cb-b811-0300b5ccd66b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:42:46 compute-1 nova_compute[225855]: 2026-01-20 14:42:46.557 225859 DEBUG nova.storage.rbd_utils [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] rbd image 69cc4cf9-dfe3-44cb-b811-0300b5ccd66b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:42:46 compute-1 nova_compute[225855]: 2026-01-20 14:42:46.583 225859 DEBUG nova.storage.rbd_utils [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] rbd image 69cc4cf9-dfe3-44cb-b811-0300b5ccd66b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:42:46 compute-1 nova_compute[225855]: 2026-01-20 14:42:46.587 225859 DEBUG oslo_concurrency.processutils [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:42:46 compute-1 nova_compute[225855]: 2026-01-20 14:42:46.635 225859 DEBUG nova.policy [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'aa2e7857e85f483eb0d162e2ee8c2e2c', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a3e022a35f604df2bbc885e498b1e206', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 20 14:42:46 compute-1 nova_compute[225855]: 2026-01-20 14:42:46.650 225859 DEBUG oslo_concurrency.processutils [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:42:46 compute-1 nova_compute[225855]: 2026-01-20 14:42:46.651 225859 DEBUG oslo_concurrency.lockutils [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Acquiring lock "82d5c1918fd7c974214c7a48c1793a7a82560462" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:42:46 compute-1 nova_compute[225855]: 2026-01-20 14:42:46.652 225859 DEBUG oslo_concurrency.lockutils [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:42:46 compute-1 nova_compute[225855]: 2026-01-20 14:42:46.652 225859 DEBUG oslo_concurrency.lockutils [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:42:46 compute-1 nova_compute[225855]: 2026-01-20 14:42:46.674 225859 DEBUG nova.storage.rbd_utils [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] rbd image 69cc4cf9-dfe3-44cb-b811-0300b5ccd66b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:42:46 compute-1 nova_compute[225855]: 2026-01-20 14:42:46.677 225859 DEBUG oslo_concurrency.processutils [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 69cc4cf9-dfe3-44cb-b811-0300b5ccd66b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:42:46 compute-1 ceph-mon[81775]: pgmap v1632: 321 pgs: 321 active+clean; 273 MiB data, 792 MiB used, 20 GiB / 21 GiB avail; 4.5 MiB/s rd, 4.3 MiB/s wr, 346 op/s
Jan 20 14:42:46 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/3378286606' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:42:46 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/265236336' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:42:47 compute-1 nova_compute[225855]: 2026-01-20 14:42:47.006 225859 DEBUG oslo_concurrency.processutils [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 69cc4cf9-dfe3-44cb-b811-0300b5ccd66b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.329s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:42:47 compute-1 nova_compute[225855]: 2026-01-20 14:42:47.073 225859 DEBUG nova.storage.rbd_utils [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] resizing rbd image 69cc4cf9-dfe3-44cb-b811-0300b5ccd66b_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 20 14:42:47 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:42:47 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:42:47 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:42:47.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:42:47 compute-1 nova_compute[225855]: 2026-01-20 14:42:47.216 225859 DEBUG nova.objects.instance [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Lazy-loading 'migration_context' on Instance uuid 69cc4cf9-dfe3-44cb-b811-0300b5ccd66b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:42:47 compute-1 nova_compute[225855]: 2026-01-20 14:42:47.231 225859 DEBUG nova.virt.libvirt.driver [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] [instance: 69cc4cf9-dfe3-44cb-b811-0300b5ccd66b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 20 14:42:47 compute-1 nova_compute[225855]: 2026-01-20 14:42:47.231 225859 DEBUG nova.virt.libvirt.driver [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] [instance: 69cc4cf9-dfe3-44cb-b811-0300b5ccd66b] Ensure instance console log exists: /var/lib/nova/instances/69cc4cf9-dfe3-44cb-b811-0300b5ccd66b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 20 14:42:47 compute-1 nova_compute[225855]: 2026-01-20 14:42:47.232 225859 DEBUG oslo_concurrency.lockutils [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:42:47 compute-1 nova_compute[225855]: 2026-01-20 14:42:47.232 225859 DEBUG oslo_concurrency.lockutils [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:42:47 compute-1 nova_compute[225855]: 2026-01-20 14:42:47.232 225859 DEBUG oslo_concurrency.lockutils [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:42:47 compute-1 ceph-mon[81775]: pgmap v1633: 321 pgs: 321 active+clean; 273 MiB data, 784 MiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 5.5 MiB/s wr, 352 op/s
Jan 20 14:42:48 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:42:48 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:42:48 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:42:48.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:42:48 compute-1 sudo[256050]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:42:48 compute-1 sudo[256050]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:42:48 compute-1 sudo[256050]: pam_unix(sudo:session): session closed for user root
Jan 20 14:42:48 compute-1 sudo[256075]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:42:48 compute-1 sudo[256075]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:42:48 compute-1 sudo[256075]: pam_unix(sudo:session): session closed for user root
Jan 20 14:42:48 compute-1 nova_compute[225855]: 2026-01-20 14:42:48.513 225859 DEBUG nova.network.neutron [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] [instance: 69cc4cf9-dfe3-44cb-b811-0300b5ccd66b] Successfully created port: d70a594c-be8a-461a-93b0-7416d3587e74 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 20 14:42:48 compute-1 sudo[256100]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:42:48 compute-1 sudo[256100]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:42:48 compute-1 sudo[256100]: pam_unix(sudo:session): session closed for user root
Jan 20 14:42:48 compute-1 sudo[256125]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 20 14:42:48 compute-1 sudo[256125]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:42:49 compute-1 sudo[256125]: pam_unix(sudo:session): session closed for user root
Jan 20 14:42:49 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:42:49 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:42:49 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:42:49.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:42:49 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:42:49 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 14:42:49 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:42:49 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 14:42:49 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 14:42:49 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:42:49 compute-1 nova_compute[225855]: 2026-01-20 14:42:49.880 225859 DEBUG nova.network.neutron [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] [instance: 69cc4cf9-dfe3-44cb-b811-0300b5ccd66b] Successfully updated port: d70a594c-be8a-461a-93b0-7416d3587e74 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 20 14:42:49 compute-1 nova_compute[225855]: 2026-01-20 14:42:49.911 225859 DEBUG oslo_concurrency.lockutils [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Acquiring lock "refresh_cache-69cc4cf9-dfe3-44cb-b811-0300b5ccd66b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:42:49 compute-1 nova_compute[225855]: 2026-01-20 14:42:49.912 225859 DEBUG oslo_concurrency.lockutils [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Acquired lock "refresh_cache-69cc4cf9-dfe3-44cb-b811-0300b5ccd66b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:42:49 compute-1 nova_compute[225855]: 2026-01-20 14:42:49.912 225859 DEBUG nova.network.neutron [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] [instance: 69cc4cf9-dfe3-44cb-b811-0300b5ccd66b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 14:42:50 compute-1 nova_compute[225855]: 2026-01-20 14:42:50.092 225859 DEBUG nova.compute.manager [req-e73b78ea-1aad-4382-b311-09a10979085d req-b13176a9-0f93-4c8e-92b8-f64fd3bff8f4 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 69cc4cf9-dfe3-44cb-b811-0300b5ccd66b] Received event network-changed-d70a594c-be8a-461a-93b0-7416d3587e74 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:42:50 compute-1 nova_compute[225855]: 2026-01-20 14:42:50.093 225859 DEBUG nova.compute.manager [req-e73b78ea-1aad-4382-b311-09a10979085d req-b13176a9-0f93-4c8e-92b8-f64fd3bff8f4 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 69cc4cf9-dfe3-44cb-b811-0300b5ccd66b] Refreshing instance network info cache due to event network-changed-d70a594c-be8a-461a-93b0-7416d3587e74. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 14:42:50 compute-1 nova_compute[225855]: 2026-01-20 14:42:50.093 225859 DEBUG oslo_concurrency.lockutils [req-e73b78ea-1aad-4382-b311-09a10979085d req-b13176a9-0f93-4c8e-92b8-f64fd3bff8f4 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-69cc4cf9-dfe3-44cb-b811-0300b5ccd66b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:42:50 compute-1 nova_compute[225855]: 2026-01-20 14:42:50.168 225859 DEBUG nova.network.neutron [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] [instance: 69cc4cf9-dfe3-44cb-b811-0300b5ccd66b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 20 14:42:50 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:42:50 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:42:50 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:42:50.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:42:50 compute-1 nova_compute[225855]: 2026-01-20 14:42:50.294 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:42:50 compute-1 nova_compute[225855]: 2026-01-20 14:42:50.552 225859 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1768920155.5507116, 10349dde-fb60-48ba-bc7b-42180c5eb49e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:42:50 compute-1 nova_compute[225855]: 2026-01-20 14:42:50.553 225859 INFO nova.compute.manager [-] [instance: 10349dde-fb60-48ba-bc7b-42180c5eb49e] VM Stopped (Lifecycle Event)
Jan 20 14:42:50 compute-1 nova_compute[225855]: 2026-01-20 14:42:50.586 225859 DEBUG nova.compute.manager [None req-34b55a62-f842-43c3-805d-228af6a25266 - - - - - -] [instance: 10349dde-fb60-48ba-bc7b-42180c5eb49e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:42:50 compute-1 nova_compute[225855]: 2026-01-20 14:42:50.588 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:42:50 compute-1 ceph-mon[81775]: pgmap v1634: 321 pgs: 321 active+clean; 289 MiB data, 796 MiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 6.2 MiB/s wr, 305 op/s
Jan 20 14:42:51 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:42:51 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:42:51 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:42:51.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:42:51 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:42:51 compute-1 nova_compute[225855]: 2026-01-20 14:42:51.664 225859 DEBUG nova.network.neutron [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] [instance: 69cc4cf9-dfe3-44cb-b811-0300b5ccd66b] Updating instance_info_cache with network_info: [{"id": "d70a594c-be8a-461a-93b0-7416d3587e74", "address": "fa:16:3e:11:3a:34", "network": {"id": "3e260ad9-fcf1-432b-b71b-b943d4249b65", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1425882684-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3e022a35f604df2bbc885e498b1e206", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd70a594c-be", "ovs_interfaceid": "d70a594c-be8a-461a-93b0-7416d3587e74", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:42:51 compute-1 nova_compute[225855]: 2026-01-20 14:42:51.743 225859 DEBUG oslo_concurrency.lockutils [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Releasing lock "refresh_cache-69cc4cf9-dfe3-44cb-b811-0300b5ccd66b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:42:51 compute-1 nova_compute[225855]: 2026-01-20 14:42:51.743 225859 DEBUG nova.compute.manager [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] [instance: 69cc4cf9-dfe3-44cb-b811-0300b5ccd66b] Instance network_info: |[{"id": "d70a594c-be8a-461a-93b0-7416d3587e74", "address": "fa:16:3e:11:3a:34", "network": {"id": "3e260ad9-fcf1-432b-b71b-b943d4249b65", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1425882684-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3e022a35f604df2bbc885e498b1e206", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd70a594c-be", "ovs_interfaceid": "d70a594c-be8a-461a-93b0-7416d3587e74", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 20 14:42:51 compute-1 nova_compute[225855]: 2026-01-20 14:42:51.744 225859 DEBUG oslo_concurrency.lockutils [req-e73b78ea-1aad-4382-b311-09a10979085d req-b13176a9-0f93-4c8e-92b8-f64fd3bff8f4 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-69cc4cf9-dfe3-44cb-b811-0300b5ccd66b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:42:51 compute-1 nova_compute[225855]: 2026-01-20 14:42:51.744 225859 DEBUG nova.network.neutron [req-e73b78ea-1aad-4382-b311-09a10979085d req-b13176a9-0f93-4c8e-92b8-f64fd3bff8f4 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 69cc4cf9-dfe3-44cb-b811-0300b5ccd66b] Refreshing network info cache for port d70a594c-be8a-461a-93b0-7416d3587e74 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 14:42:51 compute-1 nova_compute[225855]: 2026-01-20 14:42:51.750 225859 DEBUG nova.virt.libvirt.driver [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] [instance: 69cc4cf9-dfe3-44cb-b811-0300b5ccd66b] Start _get_guest_xml network_info=[{"id": "d70a594c-be8a-461a-93b0-7416d3587e74", "address": "fa:16:3e:11:3a:34", "network": {"id": "3e260ad9-fcf1-432b-b71b-b943d4249b65", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1425882684-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3e022a35f604df2bbc885e498b1e206", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd70a594c-be", "ovs_interfaceid": "d70a594c-be8a-461a-93b0-7416d3587e74", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'device_type': 'disk', 'encryption_options': None, 'size': 0, 'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'guest_format': None, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'image_id': 'a32b3e07-16d8-46fd-9a7b-c242c432fcf9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 20 14:42:51 compute-1 nova_compute[225855]: 2026-01-20 14:42:51.757 225859 WARNING nova.virt.libvirt.driver [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 14:42:51 compute-1 nova_compute[225855]: 2026-01-20 14:42:51.765 225859 DEBUG nova.virt.libvirt.host [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 20 14:42:51 compute-1 nova_compute[225855]: 2026-01-20 14:42:51.766 225859 DEBUG nova.virt.libvirt.host [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 20 14:42:51 compute-1 nova_compute[225855]: 2026-01-20 14:42:51.770 225859 DEBUG nova.virt.libvirt.host [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 20 14:42:51 compute-1 nova_compute[225855]: 2026-01-20 14:42:51.771 225859 DEBUG nova.virt.libvirt.host [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 20 14:42:51 compute-1 nova_compute[225855]: 2026-01-20 14:42:51.774 225859 DEBUG nova.virt.libvirt.driver [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 20 14:42:51 compute-1 nova_compute[225855]: 2026-01-20 14:42:51.774 225859 DEBUG nova.virt.hardware [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-20T14:21:55Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='522deaab-a741-4dbb-932d-d8b13a211c33',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 20 14:42:51 compute-1 nova_compute[225855]: 2026-01-20 14:42:51.775 225859 DEBUG nova.virt.hardware [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 20 14:42:51 compute-1 nova_compute[225855]: 2026-01-20 14:42:51.775 225859 DEBUG nova.virt.hardware [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 20 14:42:51 compute-1 nova_compute[225855]: 2026-01-20 14:42:51.775 225859 DEBUG nova.virt.hardware [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 20 14:42:51 compute-1 nova_compute[225855]: 2026-01-20 14:42:51.775 225859 DEBUG nova.virt.hardware [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 20 14:42:51 compute-1 nova_compute[225855]: 2026-01-20 14:42:51.776 225859 DEBUG nova.virt.hardware [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 20 14:42:51 compute-1 nova_compute[225855]: 2026-01-20 14:42:51.776 225859 DEBUG nova.virt.hardware [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 20 14:42:51 compute-1 nova_compute[225855]: 2026-01-20 14:42:51.776 225859 DEBUG nova.virt.hardware [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 20 14:42:51 compute-1 nova_compute[225855]: 2026-01-20 14:42:51.776 225859 DEBUG nova.virt.hardware [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 20 14:42:51 compute-1 nova_compute[225855]: 2026-01-20 14:42:51.777 225859 DEBUG nova.virt.hardware [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 20 14:42:51 compute-1 nova_compute[225855]: 2026-01-20 14:42:51.777 225859 DEBUG nova.virt.hardware [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 20 14:42:51 compute-1 nova_compute[225855]: 2026-01-20 14:42:51.780 225859 DEBUG oslo_concurrency.processutils [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:42:51 compute-1 ceph-mon[81775]: pgmap v1635: 321 pgs: 321 active+clean; 349 MiB data, 845 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 8.0 MiB/s wr, 272 op/s
Jan 20 14:42:52 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:42:52 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:42:52 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:42:52.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:42:52 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:42:52 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3459419744' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:42:52 compute-1 nova_compute[225855]: 2026-01-20 14:42:52.219 225859 DEBUG oslo_concurrency.processutils [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:42:52 compute-1 nova_compute[225855]: 2026-01-20 14:42:52.254 225859 DEBUG nova.storage.rbd_utils [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] rbd image 69cc4cf9-dfe3-44cb-b811-0300b5ccd66b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:42:52 compute-1 nova_compute[225855]: 2026-01-20 14:42:52.259 225859 DEBUG oslo_concurrency.processutils [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:42:52 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:42:52 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1767237631' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:42:52 compute-1 nova_compute[225855]: 2026-01-20 14:42:52.701 225859 DEBUG oslo_concurrency.processutils [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:42:52 compute-1 nova_compute[225855]: 2026-01-20 14:42:52.702 225859 DEBUG nova.virt.libvirt.vif [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T14:42:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1574727229',display_name='tempest-tempest.common.compute-instance-1574727229-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1574727229-1',id=77,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a3e022a35f604df2bbc885e498b1e206',ramdisk_id='',reservation_id='r-6khz0z7o',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-MultipleCreateTestJSON-164394330',owner_user_name='tempest-MultipleCreateTestJSON-164394330-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T14:42:46Z,user_data=None,user_id='aa2e7857e85f483eb0d162e2ee8c2e2c',uuid=69cc4cf9-dfe3-44cb-b811-0300b5ccd66b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d70a594c-be8a-461a-93b0-7416d3587e74", "address": "fa:16:3e:11:3a:34", "network": {"id": "3e260ad9-fcf1-432b-b71b-b943d4249b65", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1425882684-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3e022a35f604df2bbc885e498b1e206", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd70a594c-be", "ovs_interfaceid": "d70a594c-be8a-461a-93b0-7416d3587e74", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 20 14:42:52 compute-1 nova_compute[225855]: 2026-01-20 14:42:52.703 225859 DEBUG nova.network.os_vif_util [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Converting VIF {"id": "d70a594c-be8a-461a-93b0-7416d3587e74", "address": "fa:16:3e:11:3a:34", "network": {"id": "3e260ad9-fcf1-432b-b71b-b943d4249b65", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1425882684-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3e022a35f604df2bbc885e498b1e206", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd70a594c-be", "ovs_interfaceid": "d70a594c-be8a-461a-93b0-7416d3587e74", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:42:52 compute-1 nova_compute[225855]: 2026-01-20 14:42:52.704 225859 DEBUG nova.network.os_vif_util [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:11:3a:34,bridge_name='br-int',has_traffic_filtering=True,id=d70a594c-be8a-461a-93b0-7416d3587e74,network=Network(3e260ad9-fcf1-432b-b71b-b943d4249b65),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd70a594c-be') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:42:52 compute-1 nova_compute[225855]: 2026-01-20 14:42:52.705 225859 DEBUG nova.objects.instance [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Lazy-loading 'pci_devices' on Instance uuid 69cc4cf9-dfe3-44cb-b811-0300b5ccd66b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:42:52 compute-1 nova_compute[225855]: 2026-01-20 14:42:52.726 225859 DEBUG nova.virt.libvirt.driver [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] [instance: 69cc4cf9-dfe3-44cb-b811-0300b5ccd66b] End _get_guest_xml xml=<domain type="kvm">
Jan 20 14:42:52 compute-1 nova_compute[225855]:   <uuid>69cc4cf9-dfe3-44cb-b811-0300b5ccd66b</uuid>
Jan 20 14:42:52 compute-1 nova_compute[225855]:   <name>instance-0000004d</name>
Jan 20 14:42:52 compute-1 nova_compute[225855]:   <memory>131072</memory>
Jan 20 14:42:52 compute-1 nova_compute[225855]:   <vcpu>1</vcpu>
Jan 20 14:42:52 compute-1 nova_compute[225855]:   <metadata>
Jan 20 14:42:52 compute-1 nova_compute[225855]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 14:42:52 compute-1 nova_compute[225855]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 14:42:52 compute-1 nova_compute[225855]:       <nova:name>tempest-tempest.common.compute-instance-1574727229-1</nova:name>
Jan 20 14:42:52 compute-1 nova_compute[225855]:       <nova:creationTime>2026-01-20 14:42:51</nova:creationTime>
Jan 20 14:42:52 compute-1 nova_compute[225855]:       <nova:flavor name="m1.nano">
Jan 20 14:42:52 compute-1 nova_compute[225855]:         <nova:memory>128</nova:memory>
Jan 20 14:42:52 compute-1 nova_compute[225855]:         <nova:disk>1</nova:disk>
Jan 20 14:42:52 compute-1 nova_compute[225855]:         <nova:swap>0</nova:swap>
Jan 20 14:42:52 compute-1 nova_compute[225855]:         <nova:ephemeral>0</nova:ephemeral>
Jan 20 14:42:52 compute-1 nova_compute[225855]:         <nova:vcpus>1</nova:vcpus>
Jan 20 14:42:52 compute-1 nova_compute[225855]:       </nova:flavor>
Jan 20 14:42:52 compute-1 nova_compute[225855]:       <nova:owner>
Jan 20 14:42:52 compute-1 nova_compute[225855]:         <nova:user uuid="aa2e7857e85f483eb0d162e2ee8c2e2c">tempest-MultipleCreateTestJSON-164394330-project-member</nova:user>
Jan 20 14:42:52 compute-1 nova_compute[225855]:         <nova:project uuid="a3e022a35f604df2bbc885e498b1e206">tempest-MultipleCreateTestJSON-164394330</nova:project>
Jan 20 14:42:52 compute-1 nova_compute[225855]:       </nova:owner>
Jan 20 14:42:52 compute-1 nova_compute[225855]:       <nova:root type="image" uuid="a32b3e07-16d8-46fd-9a7b-c242c432fcf9"/>
Jan 20 14:42:52 compute-1 nova_compute[225855]:       <nova:ports>
Jan 20 14:42:52 compute-1 nova_compute[225855]:         <nova:port uuid="d70a594c-be8a-461a-93b0-7416d3587e74">
Jan 20 14:42:52 compute-1 nova_compute[225855]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Jan 20 14:42:52 compute-1 nova_compute[225855]:         </nova:port>
Jan 20 14:42:52 compute-1 nova_compute[225855]:       </nova:ports>
Jan 20 14:42:52 compute-1 nova_compute[225855]:     </nova:instance>
Jan 20 14:42:52 compute-1 nova_compute[225855]:   </metadata>
Jan 20 14:42:52 compute-1 nova_compute[225855]:   <sysinfo type="smbios">
Jan 20 14:42:52 compute-1 nova_compute[225855]:     <system>
Jan 20 14:42:52 compute-1 nova_compute[225855]:       <entry name="manufacturer">RDO</entry>
Jan 20 14:42:52 compute-1 nova_compute[225855]:       <entry name="product">OpenStack Compute</entry>
Jan 20 14:42:52 compute-1 nova_compute[225855]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 14:42:52 compute-1 nova_compute[225855]:       <entry name="serial">69cc4cf9-dfe3-44cb-b811-0300b5ccd66b</entry>
Jan 20 14:42:52 compute-1 nova_compute[225855]:       <entry name="uuid">69cc4cf9-dfe3-44cb-b811-0300b5ccd66b</entry>
Jan 20 14:42:52 compute-1 nova_compute[225855]:       <entry name="family">Virtual Machine</entry>
Jan 20 14:42:52 compute-1 nova_compute[225855]:     </system>
Jan 20 14:42:52 compute-1 nova_compute[225855]:   </sysinfo>
Jan 20 14:42:52 compute-1 nova_compute[225855]:   <os>
Jan 20 14:42:52 compute-1 nova_compute[225855]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 20 14:42:52 compute-1 nova_compute[225855]:     <boot dev="hd"/>
Jan 20 14:42:52 compute-1 nova_compute[225855]:     <smbios mode="sysinfo"/>
Jan 20 14:42:52 compute-1 nova_compute[225855]:   </os>
Jan 20 14:42:52 compute-1 nova_compute[225855]:   <features>
Jan 20 14:42:52 compute-1 nova_compute[225855]:     <acpi/>
Jan 20 14:42:52 compute-1 nova_compute[225855]:     <apic/>
Jan 20 14:42:52 compute-1 nova_compute[225855]:     <vmcoreinfo/>
Jan 20 14:42:52 compute-1 nova_compute[225855]:   </features>
Jan 20 14:42:52 compute-1 nova_compute[225855]:   <clock offset="utc">
Jan 20 14:42:52 compute-1 nova_compute[225855]:     <timer name="pit" tickpolicy="delay"/>
Jan 20 14:42:52 compute-1 nova_compute[225855]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 20 14:42:52 compute-1 nova_compute[225855]:     <timer name="hpet" present="no"/>
Jan 20 14:42:52 compute-1 nova_compute[225855]:   </clock>
Jan 20 14:42:52 compute-1 nova_compute[225855]:   <cpu mode="custom" match="exact">
Jan 20 14:42:52 compute-1 nova_compute[225855]:     <model>Nehalem</model>
Jan 20 14:42:52 compute-1 nova_compute[225855]:     <topology sockets="1" cores="1" threads="1"/>
Jan 20 14:42:52 compute-1 nova_compute[225855]:   </cpu>
Jan 20 14:42:52 compute-1 nova_compute[225855]:   <devices>
Jan 20 14:42:52 compute-1 nova_compute[225855]:     <disk type="network" device="disk">
Jan 20 14:42:52 compute-1 nova_compute[225855]:       <driver type="raw" cache="none"/>
Jan 20 14:42:52 compute-1 nova_compute[225855]:       <source protocol="rbd" name="vms/69cc4cf9-dfe3-44cb-b811-0300b5ccd66b_disk">
Jan 20 14:42:52 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:42:52 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:42:52 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:42:52 compute-1 nova_compute[225855]:       </source>
Jan 20 14:42:52 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 14:42:52 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:42:52 compute-1 nova_compute[225855]:       </auth>
Jan 20 14:42:52 compute-1 nova_compute[225855]:       <target dev="vda" bus="virtio"/>
Jan 20 14:42:52 compute-1 nova_compute[225855]:     </disk>
Jan 20 14:42:52 compute-1 nova_compute[225855]:     <disk type="network" device="cdrom">
Jan 20 14:42:52 compute-1 nova_compute[225855]:       <driver type="raw" cache="none"/>
Jan 20 14:42:52 compute-1 nova_compute[225855]:       <source protocol="rbd" name="vms/69cc4cf9-dfe3-44cb-b811-0300b5ccd66b_disk.config">
Jan 20 14:42:52 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:42:52 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:42:52 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:42:52 compute-1 nova_compute[225855]:       </source>
Jan 20 14:42:52 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 14:42:52 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:42:52 compute-1 nova_compute[225855]:       </auth>
Jan 20 14:42:52 compute-1 nova_compute[225855]:       <target dev="sda" bus="sata"/>
Jan 20 14:42:52 compute-1 nova_compute[225855]:     </disk>
Jan 20 14:42:52 compute-1 nova_compute[225855]:     <interface type="ethernet">
Jan 20 14:42:52 compute-1 nova_compute[225855]:       <mac address="fa:16:3e:11:3a:34"/>
Jan 20 14:42:52 compute-1 nova_compute[225855]:       <model type="virtio"/>
Jan 20 14:42:52 compute-1 nova_compute[225855]:       <driver name="vhost" rx_queue_size="512"/>
Jan 20 14:42:52 compute-1 nova_compute[225855]:       <mtu size="1442"/>
Jan 20 14:42:52 compute-1 nova_compute[225855]:       <target dev="tapd70a594c-be"/>
Jan 20 14:42:52 compute-1 nova_compute[225855]:     </interface>
Jan 20 14:42:52 compute-1 nova_compute[225855]:     <serial type="pty">
Jan 20 14:42:52 compute-1 nova_compute[225855]:       <log file="/var/lib/nova/instances/69cc4cf9-dfe3-44cb-b811-0300b5ccd66b/console.log" append="off"/>
Jan 20 14:42:52 compute-1 nova_compute[225855]:     </serial>
Jan 20 14:42:52 compute-1 nova_compute[225855]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 20 14:42:52 compute-1 nova_compute[225855]:     <video>
Jan 20 14:42:52 compute-1 nova_compute[225855]:       <model type="virtio"/>
Jan 20 14:42:52 compute-1 nova_compute[225855]:     </video>
Jan 20 14:42:52 compute-1 nova_compute[225855]:     <input type="tablet" bus="usb"/>
Jan 20 14:42:52 compute-1 nova_compute[225855]:     <rng model="virtio">
Jan 20 14:42:52 compute-1 nova_compute[225855]:       <backend model="random">/dev/urandom</backend>
Jan 20 14:42:52 compute-1 nova_compute[225855]:     </rng>
Jan 20 14:42:52 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root"/>
Jan 20 14:42:52 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:42:52 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:42:52 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:42:52 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:42:52 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:42:52 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:42:52 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:42:52 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:42:52 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:42:52 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:42:52 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:42:52 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:42:52 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:42:52 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:42:52 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:42:52 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:42:52 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:42:52 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:42:52 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:42:52 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:42:52 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:42:52 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:42:52 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:42:52 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:42:52 compute-1 nova_compute[225855]:     <controller type="usb" index="0"/>
Jan 20 14:42:52 compute-1 nova_compute[225855]:     <memballoon model="virtio">
Jan 20 14:42:52 compute-1 nova_compute[225855]:       <stats period="10"/>
Jan 20 14:42:52 compute-1 nova_compute[225855]:     </memballoon>
Jan 20 14:42:52 compute-1 nova_compute[225855]:   </devices>
Jan 20 14:42:52 compute-1 nova_compute[225855]: </domain>
Jan 20 14:42:52 compute-1 nova_compute[225855]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 20 14:42:52 compute-1 nova_compute[225855]: 2026-01-20 14:42:52.727 225859 DEBUG nova.compute.manager [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] [instance: 69cc4cf9-dfe3-44cb-b811-0300b5ccd66b] Preparing to wait for external event network-vif-plugged-d70a594c-be8a-461a-93b0-7416d3587e74 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 20 14:42:52 compute-1 nova_compute[225855]: 2026-01-20 14:42:52.727 225859 DEBUG oslo_concurrency.lockutils [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Acquiring lock "69cc4cf9-dfe3-44cb-b811-0300b5ccd66b-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:42:52 compute-1 nova_compute[225855]: 2026-01-20 14:42:52.728 225859 DEBUG oslo_concurrency.lockutils [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Lock "69cc4cf9-dfe3-44cb-b811-0300b5ccd66b-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:42:52 compute-1 nova_compute[225855]: 2026-01-20 14:42:52.728 225859 DEBUG oslo_concurrency.lockutils [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Lock "69cc4cf9-dfe3-44cb-b811-0300b5ccd66b-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:42:52 compute-1 nova_compute[225855]: 2026-01-20 14:42:52.728 225859 DEBUG nova.virt.libvirt.vif [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T14:42:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1574727229',display_name='tempest-tempest.common.compute-instance-1574727229-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1574727229-1',id=77,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a3e022a35f604df2bbc885e498b1e206',ramdisk_id='',reservation_id='r-6khz0z7o',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-MultipleCreateTestJSON-164394330',owner_user_name='tempest-MultipleCreateTestJSON-164394330-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T14:42:46Z,user_data=None,user_id='aa2e7857e85f483eb0d162e2ee8c2e2c',uuid=69cc4cf9-dfe3-44cb-b811-0300b5ccd66b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d70a594c-be8a-461a-93b0-7416d3587e74", "address": "fa:16:3e:11:3a:34", "network": {"id": "3e260ad9-fcf1-432b-b71b-b943d4249b65", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1425882684-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3e022a35f604df2bbc885e498b1e206", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd70a594c-be", "ovs_interfaceid": "d70a594c-be8a-461a-93b0-7416d3587e74", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 20 14:42:52 compute-1 nova_compute[225855]: 2026-01-20 14:42:52.729 225859 DEBUG nova.network.os_vif_util [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Converting VIF {"id": "d70a594c-be8a-461a-93b0-7416d3587e74", "address": "fa:16:3e:11:3a:34", "network": {"id": "3e260ad9-fcf1-432b-b71b-b943d4249b65", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1425882684-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3e022a35f604df2bbc885e498b1e206", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd70a594c-be", "ovs_interfaceid": "d70a594c-be8a-461a-93b0-7416d3587e74", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:42:52 compute-1 nova_compute[225855]: 2026-01-20 14:42:52.729 225859 DEBUG nova.network.os_vif_util [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:11:3a:34,bridge_name='br-int',has_traffic_filtering=True,id=d70a594c-be8a-461a-93b0-7416d3587e74,network=Network(3e260ad9-fcf1-432b-b71b-b943d4249b65),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd70a594c-be') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:42:52 compute-1 nova_compute[225855]: 2026-01-20 14:42:52.730 225859 DEBUG os_vif [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:11:3a:34,bridge_name='br-int',has_traffic_filtering=True,id=d70a594c-be8a-461a-93b0-7416d3587e74,network=Network(3e260ad9-fcf1-432b-b71b-b943d4249b65),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd70a594c-be') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 20 14:42:52 compute-1 nova_compute[225855]: 2026-01-20 14:42:52.730 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:42:52 compute-1 nova_compute[225855]: 2026-01-20 14:42:52.731 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:42:52 compute-1 nova_compute[225855]: 2026-01-20 14:42:52.731 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:42:52 compute-1 nova_compute[225855]: 2026-01-20 14:42:52.733 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:42:52 compute-1 nova_compute[225855]: 2026-01-20 14:42:52.733 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd70a594c-be, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:42:52 compute-1 nova_compute[225855]: 2026-01-20 14:42:52.734 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd70a594c-be, col_values=(('external_ids', {'iface-id': 'd70a594c-be8a-461a-93b0-7416d3587e74', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:11:3a:34', 'vm-uuid': '69cc4cf9-dfe3-44cb-b811-0300b5ccd66b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:42:52 compute-1 nova_compute[225855]: 2026-01-20 14:42:52.736 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:42:52 compute-1 NetworkManager[49104]: <info>  [1768920172.7371] manager: (tapd70a594c-be): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/117)
Jan 20 14:42:52 compute-1 nova_compute[225855]: 2026-01-20 14:42:52.739 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 14:42:52 compute-1 nova_compute[225855]: 2026-01-20 14:42:52.743 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:42:52 compute-1 nova_compute[225855]: 2026-01-20 14:42:52.743 225859 INFO os_vif [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:11:3a:34,bridge_name='br-int',has_traffic_filtering=True,id=d70a594c-be8a-461a-93b0-7416d3587e74,network=Network(3e260ad9-fcf1-432b-b71b-b943d4249b65),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd70a594c-be')
Jan 20 14:42:52 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/27707950' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:42:52 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/3459419744' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:42:52 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/3230279258' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:42:52 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/1767237631' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:42:52 compute-1 nova_compute[225855]: 2026-01-20 14:42:52.811 225859 DEBUG nova.virt.libvirt.driver [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:42:52 compute-1 nova_compute[225855]: 2026-01-20 14:42:52.811 225859 DEBUG nova.virt.libvirt.driver [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:42:52 compute-1 nova_compute[225855]: 2026-01-20 14:42:52.811 225859 DEBUG nova.virt.libvirt.driver [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] No VIF found with MAC fa:16:3e:11:3a:34, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 20 14:42:52 compute-1 nova_compute[225855]: 2026-01-20 14:42:52.812 225859 INFO nova.virt.libvirt.driver [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] [instance: 69cc4cf9-dfe3-44cb-b811-0300b5ccd66b] Using config drive
Jan 20 14:42:52 compute-1 nova_compute[225855]: 2026-01-20 14:42:52.836 225859 DEBUG nova.storage.rbd_utils [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] rbd image 69cc4cf9-dfe3-44cb-b811-0300b5ccd66b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:42:53 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:42:53 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:42:53 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:42:53.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:42:53 compute-1 nova_compute[225855]: 2026-01-20 14:42:53.374 225859 INFO nova.virt.libvirt.driver [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] [instance: 69cc4cf9-dfe3-44cb-b811-0300b5ccd66b] Creating config drive at /var/lib/nova/instances/69cc4cf9-dfe3-44cb-b811-0300b5ccd66b/disk.config
Jan 20 14:42:53 compute-1 nova_compute[225855]: 2026-01-20 14:42:53.383 225859 DEBUG oslo_concurrency.processutils [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/69cc4cf9-dfe3-44cb-b811-0300b5ccd66b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmph9cudons execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:42:53 compute-1 nova_compute[225855]: 2026-01-20 14:42:53.524 225859 DEBUG oslo_concurrency.processutils [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/69cc4cf9-dfe3-44cb-b811-0300b5ccd66b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmph9cudons" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:42:53 compute-1 nova_compute[225855]: 2026-01-20 14:42:53.553 225859 DEBUG nova.storage.rbd_utils [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] rbd image 69cc4cf9-dfe3-44cb-b811-0300b5ccd66b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:42:53 compute-1 nova_compute[225855]: 2026-01-20 14:42:53.557 225859 DEBUG oslo_concurrency.processutils [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/69cc4cf9-dfe3-44cb-b811-0300b5ccd66b/disk.config 69cc4cf9-dfe3-44cb-b811-0300b5ccd66b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:42:53 compute-1 nova_compute[225855]: 2026-01-20 14:42:53.721 225859 DEBUG oslo_concurrency.processutils [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/69cc4cf9-dfe3-44cb-b811-0300b5ccd66b/disk.config 69cc4cf9-dfe3-44cb-b811-0300b5ccd66b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.164s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:42:53 compute-1 nova_compute[225855]: 2026-01-20 14:42:53.722 225859 INFO nova.virt.libvirt.driver [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] [instance: 69cc4cf9-dfe3-44cb-b811-0300b5ccd66b] Deleting local config drive /var/lib/nova/instances/69cc4cf9-dfe3-44cb-b811-0300b5ccd66b/disk.config because it was imported into RBD.
Jan 20 14:42:53 compute-1 kernel: tapd70a594c-be: entered promiscuous mode
Jan 20 14:42:53 compute-1 NetworkManager[49104]: <info>  [1768920173.7778] manager: (tapd70a594c-be): new Tun device (/org/freedesktop/NetworkManager/Devices/118)
Jan 20 14:42:53 compute-1 ovn_controller[130490]: 2026-01-20T14:42:53Z|00267|binding|INFO|Claiming lport d70a594c-be8a-461a-93b0-7416d3587e74 for this chassis.
Jan 20 14:42:53 compute-1 ovn_controller[130490]: 2026-01-20T14:42:53Z|00268|binding|INFO|d70a594c-be8a-461a-93b0-7416d3587e74: Claiming fa:16:3e:11:3a:34 10.100.0.12
Jan 20 14:42:53 compute-1 nova_compute[225855]: 2026-01-20 14:42:53.777 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:42:53 compute-1 ovn_controller[130490]: 2026-01-20T14:42:53Z|00269|binding|INFO|Setting lport d70a594c-be8a-461a-93b0-7416d3587e74 ovn-installed in OVS
Jan 20 14:42:53 compute-1 ovn_controller[130490]: 2026-01-20T14:42:53Z|00270|binding|INFO|Setting lport d70a594c-be8a-461a-93b0-7416d3587e74 up in Southbound
Jan 20 14:42:53 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:42:53.809 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:11:3a:34 10.100.0.12'], port_security=['fa:16:3e:11:3a:34 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '69cc4cf9-dfe3-44cb-b811-0300b5ccd66b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3e260ad9-fcf1-432b-b71b-b943d4249b65', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a3e022a35f604df2bbc885e498b1e206', 'neutron:revision_number': '2', 'neutron:security_group_ids': '885819b7-5060-4b73-ad54-3f31f821195c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=89e607b1-9e39-47f0-8180-8aaef3a2a0e9, chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], logical_port=d70a594c-be8a-461a-93b0-7416d3587e74) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:42:53 compute-1 nova_compute[225855]: 2026-01-20 14:42:53.810 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:42:53 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:42:53.810 140354 INFO neutron.agent.ovn.metadata.agent [-] Port d70a594c-be8a-461a-93b0-7416d3587e74 in datapath 3e260ad9-fcf1-432b-b71b-b943d4249b65 bound to our chassis
Jan 20 14:42:53 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:42:53.811 140354 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3e260ad9-fcf1-432b-b71b-b943d4249b65
Jan 20 14:42:53 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/4025460005' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:42:53 compute-1 ceph-mon[81775]: pgmap v1636: 321 pgs: 321 active+clean; 372 MiB data, 852 MiB used, 20 GiB / 21 GiB avail; 708 KiB/s rd, 6.6 MiB/s wr, 198 op/s
Jan 20 14:42:53 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/3234445789' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:42:53 compute-1 systemd-udevd[256314]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 14:42:53 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:42:53.825 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[14b97277-42b2-4425-8fd8-7f68f58c5b0c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:42:53 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:42:53.827 140354 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap3e260ad9-f1 in ovnmeta-3e260ad9-fcf1-432b-b71b-b943d4249b65 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 20 14:42:53 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:42:53.828 229707 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap3e260ad9-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 20 14:42:53 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:42:53.828 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[5c66162b-8d24-45a9-97eb-7942c898e52e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:42:53 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:42:53.829 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[16b9b360-b5ce-45c5-ba24-3fda2203e267]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:42:53 compute-1 NetworkManager[49104]: <info>  [1768920173.8328] device (tapd70a594c-be): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 20 14:42:53 compute-1 systemd-machined[194361]: New machine qemu-33-instance-0000004d.
Jan 20 14:42:53 compute-1 NetworkManager[49104]: <info>  [1768920173.8347] device (tapd70a594c-be): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 20 14:42:53 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:42:53.839 140466 DEBUG oslo.privsep.daemon [-] privsep: reply[4cd10506-013d-4665-b01d-a328023ee07c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:42:53 compute-1 systemd[1]: Started Virtual Machine qemu-33-instance-0000004d.
Jan 20 14:42:53 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:42:53.852 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[14703686-db6a-49d7-94ae-105369abaffd]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:42:53 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:42:53.881 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[d9eb439e-8097-4cba-bba1-30b600c8ef54]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:42:53 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:42:53.887 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[9e3dec57-dc88-4df9-9144-ed85dfb19d00]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:42:53 compute-1 systemd-udevd[256321]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 14:42:53 compute-1 NetworkManager[49104]: <info>  [1768920173.8885] manager: (tap3e260ad9-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/119)
Jan 20 14:42:53 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:42:53.926 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[74aba923-4f67-4ed4-9174-e97e8baa8d45]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:42:53 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:42:53.928 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[5ad2c12b-dded-408e-a2c7-69aa17e28223]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:42:53 compute-1 NetworkManager[49104]: <info>  [1768920173.9524] device (tap3e260ad9-f0): carrier: link connected
Jan 20 14:42:53 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:42:53.964 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[cb57dc79-0948-49c4-8bd1-84446caeb547]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:42:53 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:42:53.981 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[4c4712d4-40a7-4e45-932e-6d0d0559af39]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3e260ad9-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:88:13:4a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 72], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 521287, 'reachable_time': 19914, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 256350, 'error': None, 'target': 'ovnmeta-3e260ad9-fcf1-432b-b71b-b943d4249b65', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:42:53 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:42:53.997 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[19f78edd-c3fd-42db-af6a-79a4701d146d]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe88:134a'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 521287, 'tstamp': 521287}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 256351, 'error': None, 'target': 'ovnmeta-3e260ad9-fcf1-432b-b71b-b943d4249b65', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:42:54 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:42:54.014 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[b4cfa5b3-d0c2-452f-adc6-fb40159b2146]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3e260ad9-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:88:13:4a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 72], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 521287, 'reachable_time': 19914, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 256352, 'error': None, 'target': 'ovnmeta-3e260ad9-fcf1-432b-b71b-b943d4249b65', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:42:54 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:42:54.054 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[601e4da3-a3f6-4a6e-a167-d359a70e5112]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:42:54 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:42:54.120 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[1801b3cc-a888-4cb6-8570-20745b887865]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:42:54 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:42:54.121 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3e260ad9-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:42:54 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:42:54.121 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:42:54 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:42:54.121 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3e260ad9-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:42:54 compute-1 kernel: tap3e260ad9-f0: entered promiscuous mode
Jan 20 14:42:54 compute-1 NetworkManager[49104]: <info>  [1768920174.1241] manager: (tap3e260ad9-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/120)
Jan 20 14:42:54 compute-1 nova_compute[225855]: 2026-01-20 14:42:54.123 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:42:54 compute-1 nova_compute[225855]: 2026-01-20 14:42:54.124 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:42:54 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:42:54.125 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3e260ad9-f0, col_values=(('external_ids', {'iface-id': '2b7c295d-f074-4cfb-aca0-08946126ddbc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:42:54 compute-1 nova_compute[225855]: 2026-01-20 14:42:54.126 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:42:54 compute-1 ovn_controller[130490]: 2026-01-20T14:42:54Z|00271|binding|INFO|Releasing lport 2b7c295d-f074-4cfb-aca0-08946126ddbc from this chassis (sb_readonly=0)
Jan 20 14:42:54 compute-1 nova_compute[225855]: 2026-01-20 14:42:54.140 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:42:54 compute-1 nova_compute[225855]: 2026-01-20 14:42:54.140 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:42:54 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:42:54.141 140354 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/3e260ad9-fcf1-432b-b71b-b943d4249b65.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/3e260ad9-fcf1-432b-b71b-b943d4249b65.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 20 14:42:54 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:42:54.142 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[dc692e22-5f5d-4e0a-a934-2a1d23cbf78d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:42:54 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:42:54.142 140354 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 20 14:42:54 compute-1 ovn_metadata_agent[140349]: global
Jan 20 14:42:54 compute-1 ovn_metadata_agent[140349]:     log         /dev/log local0 debug
Jan 20 14:42:54 compute-1 ovn_metadata_agent[140349]:     log-tag     haproxy-metadata-proxy-3e260ad9-fcf1-432b-b71b-b943d4249b65
Jan 20 14:42:54 compute-1 ovn_metadata_agent[140349]:     user        root
Jan 20 14:42:54 compute-1 ovn_metadata_agent[140349]:     group       root
Jan 20 14:42:54 compute-1 ovn_metadata_agent[140349]:     maxconn     1024
Jan 20 14:42:54 compute-1 ovn_metadata_agent[140349]:     pidfile     /var/lib/neutron/external/pids/3e260ad9-fcf1-432b-b71b-b943d4249b65.pid.haproxy
Jan 20 14:42:54 compute-1 ovn_metadata_agent[140349]:     daemon
Jan 20 14:42:54 compute-1 ovn_metadata_agent[140349]: 
Jan 20 14:42:54 compute-1 ovn_metadata_agent[140349]: defaults
Jan 20 14:42:54 compute-1 ovn_metadata_agent[140349]:     log global
Jan 20 14:42:54 compute-1 ovn_metadata_agent[140349]:     mode http
Jan 20 14:42:54 compute-1 ovn_metadata_agent[140349]:     option httplog
Jan 20 14:42:54 compute-1 ovn_metadata_agent[140349]:     option dontlognull
Jan 20 14:42:54 compute-1 ovn_metadata_agent[140349]:     option http-server-close
Jan 20 14:42:54 compute-1 ovn_metadata_agent[140349]:     option forwardfor
Jan 20 14:42:54 compute-1 ovn_metadata_agent[140349]:     retries                 3
Jan 20 14:42:54 compute-1 ovn_metadata_agent[140349]:     timeout http-request    30s
Jan 20 14:42:54 compute-1 ovn_metadata_agent[140349]:     timeout connect         30s
Jan 20 14:42:54 compute-1 ovn_metadata_agent[140349]:     timeout client          32s
Jan 20 14:42:54 compute-1 ovn_metadata_agent[140349]:     timeout server          32s
Jan 20 14:42:54 compute-1 ovn_metadata_agent[140349]:     timeout http-keep-alive 30s
Jan 20 14:42:54 compute-1 ovn_metadata_agent[140349]: 
Jan 20 14:42:54 compute-1 ovn_metadata_agent[140349]: 
Jan 20 14:42:54 compute-1 ovn_metadata_agent[140349]: listen listener
Jan 20 14:42:54 compute-1 ovn_metadata_agent[140349]:     bind 169.254.169.254:80
Jan 20 14:42:54 compute-1 ovn_metadata_agent[140349]:     server metadata /var/lib/neutron/metadata_proxy
Jan 20 14:42:54 compute-1 ovn_metadata_agent[140349]:     http-request add-header X-OVN-Network-ID 3e260ad9-fcf1-432b-b71b-b943d4249b65
Jan 20 14:42:54 compute-1 ovn_metadata_agent[140349]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 20 14:42:54 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:42:54.143 140354 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-3e260ad9-fcf1-432b-b71b-b943d4249b65', 'env', 'PROCESS_TAG=haproxy-3e260ad9-fcf1-432b-b71b-b943d4249b65', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/3e260ad9-fcf1-432b-b71b-b943d4249b65.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 20 14:42:54 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:42:54 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:42:54 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:42:54.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:42:54 compute-1 podman[256384]: 2026-01-20 14:42:54.480469763 +0000 UTC m=+0.029389723 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 20 14:42:54 compute-1 nova_compute[225855]: 2026-01-20 14:42:54.713 225859 DEBUG nova.compute.manager [req-68531ff8-9d58-423b-8498-14cd354b4f74 req-6000777a-97d7-4509-b5a3-0682e61fe1e0 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 69cc4cf9-dfe3-44cb-b811-0300b5ccd66b] Received event network-vif-plugged-d70a594c-be8a-461a-93b0-7416d3587e74 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:42:54 compute-1 nova_compute[225855]: 2026-01-20 14:42:54.714 225859 DEBUG oslo_concurrency.lockutils [req-68531ff8-9d58-423b-8498-14cd354b4f74 req-6000777a-97d7-4509-b5a3-0682e61fe1e0 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "69cc4cf9-dfe3-44cb-b811-0300b5ccd66b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:42:54 compute-1 nova_compute[225855]: 2026-01-20 14:42:54.714 225859 DEBUG oslo_concurrency.lockutils [req-68531ff8-9d58-423b-8498-14cd354b4f74 req-6000777a-97d7-4509-b5a3-0682e61fe1e0 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "69cc4cf9-dfe3-44cb-b811-0300b5ccd66b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:42:54 compute-1 nova_compute[225855]: 2026-01-20 14:42:54.715 225859 DEBUG oslo_concurrency.lockutils [req-68531ff8-9d58-423b-8498-14cd354b4f74 req-6000777a-97d7-4509-b5a3-0682e61fe1e0 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "69cc4cf9-dfe3-44cb-b811-0300b5ccd66b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:42:54 compute-1 nova_compute[225855]: 2026-01-20 14:42:54.715 225859 DEBUG nova.compute.manager [req-68531ff8-9d58-423b-8498-14cd354b4f74 req-6000777a-97d7-4509-b5a3-0682e61fe1e0 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 69cc4cf9-dfe3-44cb-b811-0300b5ccd66b] Processing event network-vif-plugged-d70a594c-be8a-461a-93b0-7416d3587e74 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 20 14:42:54 compute-1 nova_compute[225855]: 2026-01-20 14:42:54.922 225859 DEBUG nova.network.neutron [req-e73b78ea-1aad-4382-b311-09a10979085d req-b13176a9-0f93-4c8e-92b8-f64fd3bff8f4 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 69cc4cf9-dfe3-44cb-b811-0300b5ccd66b] Updated VIF entry in instance network info cache for port d70a594c-be8a-461a-93b0-7416d3587e74. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 14:42:54 compute-1 nova_compute[225855]: 2026-01-20 14:42:54.923 225859 DEBUG nova.network.neutron [req-e73b78ea-1aad-4382-b311-09a10979085d req-b13176a9-0f93-4c8e-92b8-f64fd3bff8f4 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 69cc4cf9-dfe3-44cb-b811-0300b5ccd66b] Updating instance_info_cache with network_info: [{"id": "d70a594c-be8a-461a-93b0-7416d3587e74", "address": "fa:16:3e:11:3a:34", "network": {"id": "3e260ad9-fcf1-432b-b71b-b943d4249b65", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1425882684-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3e022a35f604df2bbc885e498b1e206", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd70a594c-be", "ovs_interfaceid": "d70a594c-be8a-461a-93b0-7416d3587e74", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:42:55 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:42:55 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:42:55 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:42:55.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:42:55 compute-1 nova_compute[225855]: 2026-01-20 14:42:55.296 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:42:55 compute-1 nova_compute[225855]: 2026-01-20 14:42:55.332 225859 DEBUG oslo_concurrency.lockutils [req-e73b78ea-1aad-4382-b311-09a10979085d req-b13176a9-0f93-4c8e-92b8-f64fd3bff8f4 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-69cc4cf9-dfe3-44cb-b811-0300b5ccd66b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:42:55 compute-1 nova_compute[225855]: 2026-01-20 14:42:55.400 225859 DEBUG nova.compute.manager [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] [instance: 69cc4cf9-dfe3-44cb-b811-0300b5ccd66b] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 20 14:42:55 compute-1 nova_compute[225855]: 2026-01-20 14:42:55.401 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768920175.3996596, 69cc4cf9-dfe3-44cb-b811-0300b5ccd66b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:42:55 compute-1 nova_compute[225855]: 2026-01-20 14:42:55.402 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 69cc4cf9-dfe3-44cb-b811-0300b5ccd66b] VM Started (Lifecycle Event)
Jan 20 14:42:55 compute-1 nova_compute[225855]: 2026-01-20 14:42:55.407 225859 DEBUG nova.virt.libvirt.driver [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] [instance: 69cc4cf9-dfe3-44cb-b811-0300b5ccd66b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 20 14:42:55 compute-1 nova_compute[225855]: 2026-01-20 14:42:55.411 225859 INFO nova.virt.libvirt.driver [-] [instance: 69cc4cf9-dfe3-44cb-b811-0300b5ccd66b] Instance spawned successfully.
Jan 20 14:42:55 compute-1 nova_compute[225855]: 2026-01-20 14:42:55.412 225859 DEBUG nova.virt.libvirt.driver [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] [instance: 69cc4cf9-dfe3-44cb-b811-0300b5ccd66b] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 20 14:42:55 compute-1 nova_compute[225855]: 2026-01-20 14:42:55.447 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 69cc4cf9-dfe3-44cb-b811-0300b5ccd66b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:42:55 compute-1 nova_compute[225855]: 2026-01-20 14:42:55.451 225859 DEBUG nova.virt.libvirt.driver [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] [instance: 69cc4cf9-dfe3-44cb-b811-0300b5ccd66b] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:42:55 compute-1 nova_compute[225855]: 2026-01-20 14:42:55.451 225859 DEBUG nova.virt.libvirt.driver [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] [instance: 69cc4cf9-dfe3-44cb-b811-0300b5ccd66b] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:42:55 compute-1 nova_compute[225855]: 2026-01-20 14:42:55.452 225859 DEBUG nova.virt.libvirt.driver [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] [instance: 69cc4cf9-dfe3-44cb-b811-0300b5ccd66b] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:42:55 compute-1 nova_compute[225855]: 2026-01-20 14:42:55.453 225859 DEBUG nova.virt.libvirt.driver [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] [instance: 69cc4cf9-dfe3-44cb-b811-0300b5ccd66b] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:42:55 compute-1 nova_compute[225855]: 2026-01-20 14:42:55.454 225859 DEBUG nova.virt.libvirt.driver [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] [instance: 69cc4cf9-dfe3-44cb-b811-0300b5ccd66b] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:42:55 compute-1 nova_compute[225855]: 2026-01-20 14:42:55.455 225859 DEBUG nova.virt.libvirt.driver [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] [instance: 69cc4cf9-dfe3-44cb-b811-0300b5ccd66b] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:42:55 compute-1 nova_compute[225855]: 2026-01-20 14:42:55.466 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 69cc4cf9-dfe3-44cb-b811-0300b5ccd66b] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:42:55 compute-1 nova_compute[225855]: 2026-01-20 14:42:55.507 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 69cc4cf9-dfe3-44cb-b811-0300b5ccd66b] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 14:42:55 compute-1 nova_compute[225855]: 2026-01-20 14:42:55.507 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768920175.4019752, 69cc4cf9-dfe3-44cb-b811-0300b5ccd66b => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:42:55 compute-1 nova_compute[225855]: 2026-01-20 14:42:55.507 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 69cc4cf9-dfe3-44cb-b811-0300b5ccd66b] VM Paused (Lifecycle Event)
Jan 20 14:42:55 compute-1 nova_compute[225855]: 2026-01-20 14:42:55.539 225859 INFO nova.compute.manager [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] [instance: 69cc4cf9-dfe3-44cb-b811-0300b5ccd66b] Took 9.03 seconds to spawn the instance on the hypervisor.
Jan 20 14:42:55 compute-1 nova_compute[225855]: 2026-01-20 14:42:55.540 225859 DEBUG nova.compute.manager [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] [instance: 69cc4cf9-dfe3-44cb-b811-0300b5ccd66b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:42:55 compute-1 nova_compute[225855]: 2026-01-20 14:42:55.545 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 69cc4cf9-dfe3-44cb-b811-0300b5ccd66b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:42:55 compute-1 nova_compute[225855]: 2026-01-20 14:42:55.553 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768920175.4051468, 69cc4cf9-dfe3-44cb-b811-0300b5ccd66b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:42:55 compute-1 nova_compute[225855]: 2026-01-20 14:42:55.554 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 69cc4cf9-dfe3-44cb-b811-0300b5ccd66b] VM Resumed (Lifecycle Event)
Jan 20 14:42:55 compute-1 nova_compute[225855]: 2026-01-20 14:42:55.600 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 69cc4cf9-dfe3-44cb-b811-0300b5ccd66b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:42:55 compute-1 nova_compute[225855]: 2026-01-20 14:42:55.604 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 69cc4cf9-dfe3-44cb-b811-0300b5ccd66b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:42:55 compute-1 nova_compute[225855]: 2026-01-20 14:42:55.627 225859 INFO nova.compute.manager [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] [instance: 69cc4cf9-dfe3-44cb-b811-0300b5ccd66b] Took 10.17 seconds to build instance.
Jan 20 14:42:55 compute-1 nova_compute[225855]: 2026-01-20 14:42:55.642 225859 DEBUG oslo_concurrency.lockutils [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Lock "69cc4cf9-dfe3-44cb-b811-0300b5ccd66b" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.264s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:42:55 compute-1 ovn_controller[130490]: 2026-01-20T14:42:55Z|00272|binding|INFO|Releasing lport 5dcae274-b8f4-440a-a3eb-5c1a5a044346 from this chassis (sb_readonly=0)
Jan 20 14:42:55 compute-1 ovn_controller[130490]: 2026-01-20T14:42:55Z|00273|binding|INFO|Releasing lport 2b7c295d-f074-4cfb-aca0-08946126ddbc from this chassis (sb_readonly=0)
Jan 20 14:42:55 compute-1 nova_compute[225855]: 2026-01-20 14:42:55.700 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:42:55 compute-1 podman[256384]: 2026-01-20 14:42:55.71475754 +0000 UTC m=+1.263677490 container create 6f8eed69f823a3fdf11e8147104e3b8cd6fb1499003504a66e3d765dff92ee16 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3e260ad9-fcf1-432b-b71b-b943d4249b65, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 20 14:42:55 compute-1 ovn_controller[130490]: 2026-01-20T14:42:55Z|00274|binding|INFO|Releasing lport 5dcae274-b8f4-440a-a3eb-5c1a5a044346 from this chassis (sb_readonly=0)
Jan 20 14:42:55 compute-1 ovn_controller[130490]: 2026-01-20T14:42:55Z|00275|binding|INFO|Releasing lport 2b7c295d-f074-4cfb-aca0-08946126ddbc from this chassis (sb_readonly=0)
Jan 20 14:42:55 compute-1 nova_compute[225855]: 2026-01-20 14:42:55.939 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:42:56 compute-1 sudo[256467]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:42:56 compute-1 sudo[256467]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:42:56 compute-1 sudo[256467]: pam_unix(sudo:session): session closed for user root
Jan 20 14:42:56 compute-1 systemd[1]: Started libpod-conmon-6f8eed69f823a3fdf11e8147104e3b8cd6fb1499003504a66e3d765dff92ee16.scope.
Jan 20 14:42:56 compute-1 podman[256415]: 2026-01-20 14:42:56.038238651 +0000 UTC m=+1.081009086 container health_status 72be95c12e041eb2cd1a16ad70dfa1391d9f03a5f3180a614ead9459dcfcd46f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:42:56 compute-1 systemd[1]: Started libcrun container.
Jan 20 14:42:56 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e211a072461adf5d122a2cd3877c75e44066b196736bc398b06e937a9610cce8/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 20 14:42:56 compute-1 sudo[256496]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 14:42:56 compute-1 sudo[256496]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:42:56 compute-1 sudo[256496]: pam_unix(sudo:session): session closed for user root
Jan 20 14:42:56 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:42:56 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:42:56 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:42:56.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:42:56 compute-1 podman[256384]: 2026-01-20 14:42:56.217084956 +0000 UTC m=+1.766004956 container init 6f8eed69f823a3fdf11e8147104e3b8cd6fb1499003504a66e3d765dff92ee16 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3e260ad9-fcf1-432b-b71b-b943d4249b65, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Jan 20 14:42:56 compute-1 podman[256384]: 2026-01-20 14:42:56.223086716 +0000 UTC m=+1.772006676 container start 6f8eed69f823a3fdf11e8147104e3b8cd6fb1499003504a66e3d765dff92ee16 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3e260ad9-fcf1-432b-b71b-b943d4249b65, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2)
Jan 20 14:42:56 compute-1 neutron-haproxy-ovnmeta-3e260ad9-fcf1-432b-b71b-b943d4249b65[256494]: [NOTICE]   (256523) : New worker (256525) forked
Jan 20 14:42:56 compute-1 neutron-haproxy-ovnmeta-3e260ad9-fcf1-432b-b71b-b943d4249b65[256494]: [NOTICE]   (256523) : Loading success.
Jan 20 14:42:56 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:42:56 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:42:56 compute-1 ceph-mon[81775]: pgmap v1637: 321 pgs: 321 active+clean; 372 MiB data, 852 MiB used, 20 GiB / 21 GiB avail; 245 KiB/s rd, 5.7 MiB/s wr, 168 op/s
Jan 20 14:42:56 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:42:56 compute-1 nova_compute[225855]: 2026-01-20 14:42:56.968 225859 DEBUG nova.compute.manager [req-3ba1085e-8d8e-4e0f-a008-7593ee3e319d req-edb24761-d2f4-45e6-9e38-8d2b0ebf4438 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 69cc4cf9-dfe3-44cb-b811-0300b5ccd66b] Received event network-vif-plugged-d70a594c-be8a-461a-93b0-7416d3587e74 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:42:56 compute-1 nova_compute[225855]: 2026-01-20 14:42:56.969 225859 DEBUG oslo_concurrency.lockutils [req-3ba1085e-8d8e-4e0f-a008-7593ee3e319d req-edb24761-d2f4-45e6-9e38-8d2b0ebf4438 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "69cc4cf9-dfe3-44cb-b811-0300b5ccd66b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:42:56 compute-1 nova_compute[225855]: 2026-01-20 14:42:56.969 225859 DEBUG oslo_concurrency.lockutils [req-3ba1085e-8d8e-4e0f-a008-7593ee3e319d req-edb24761-d2f4-45e6-9e38-8d2b0ebf4438 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "69cc4cf9-dfe3-44cb-b811-0300b5ccd66b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:42:56 compute-1 nova_compute[225855]: 2026-01-20 14:42:56.970 225859 DEBUG oslo_concurrency.lockutils [req-3ba1085e-8d8e-4e0f-a008-7593ee3e319d req-edb24761-d2f4-45e6-9e38-8d2b0ebf4438 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "69cc4cf9-dfe3-44cb-b811-0300b5ccd66b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:42:56 compute-1 nova_compute[225855]: 2026-01-20 14:42:56.970 225859 DEBUG nova.compute.manager [req-3ba1085e-8d8e-4e0f-a008-7593ee3e319d req-edb24761-d2f4-45e6-9e38-8d2b0ebf4438 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 69cc4cf9-dfe3-44cb-b811-0300b5ccd66b] No waiting events found dispatching network-vif-plugged-d70a594c-be8a-461a-93b0-7416d3587e74 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:42:56 compute-1 nova_compute[225855]: 2026-01-20 14:42:56.971 225859 WARNING nova.compute.manager [req-3ba1085e-8d8e-4e0f-a008-7593ee3e319d req-edb24761-d2f4-45e6-9e38-8d2b0ebf4438 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 69cc4cf9-dfe3-44cb-b811-0300b5ccd66b] Received unexpected event network-vif-plugged-d70a594c-be8a-461a-93b0-7416d3587e74 for instance with vm_state active and task_state None.
Jan 20 14:42:57 compute-1 nova_compute[225855]: 2026-01-20 14:42:57.031 225859 DEBUG oslo_concurrency.lockutils [None req-d75a9290-3b10-49e0-a981-9cf010d1562f aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Acquiring lock "69cc4cf9-dfe3-44cb-b811-0300b5ccd66b" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:42:57 compute-1 nova_compute[225855]: 2026-01-20 14:42:57.032 225859 DEBUG oslo_concurrency.lockutils [None req-d75a9290-3b10-49e0-a981-9cf010d1562f aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Lock "69cc4cf9-dfe3-44cb-b811-0300b5ccd66b" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:42:57 compute-1 nova_compute[225855]: 2026-01-20 14:42:57.033 225859 DEBUG oslo_concurrency.lockutils [None req-d75a9290-3b10-49e0-a981-9cf010d1562f aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Acquiring lock "69cc4cf9-dfe3-44cb-b811-0300b5ccd66b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:42:57 compute-1 nova_compute[225855]: 2026-01-20 14:42:57.033 225859 DEBUG oslo_concurrency.lockutils [None req-d75a9290-3b10-49e0-a981-9cf010d1562f aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Lock "69cc4cf9-dfe3-44cb-b811-0300b5ccd66b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:42:57 compute-1 nova_compute[225855]: 2026-01-20 14:42:57.034 225859 DEBUG oslo_concurrency.lockutils [None req-d75a9290-3b10-49e0-a981-9cf010d1562f aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Lock "69cc4cf9-dfe3-44cb-b811-0300b5ccd66b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:42:57 compute-1 nova_compute[225855]: 2026-01-20 14:42:57.036 225859 INFO nova.compute.manager [None req-d75a9290-3b10-49e0-a981-9cf010d1562f aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] [instance: 69cc4cf9-dfe3-44cb-b811-0300b5ccd66b] Terminating instance
Jan 20 14:42:57 compute-1 nova_compute[225855]: 2026-01-20 14:42:57.037 225859 DEBUG nova.compute.manager [None req-d75a9290-3b10-49e0-a981-9cf010d1562f aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] [instance: 69cc4cf9-dfe3-44cb-b811-0300b5ccd66b] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 20 14:42:57 compute-1 kernel: tapd70a594c-be (unregistering): left promiscuous mode
Jan 20 14:42:57 compute-1 NetworkManager[49104]: <info>  [1768920177.0832] device (tapd70a594c-be): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 20 14:42:57 compute-1 ovn_controller[130490]: 2026-01-20T14:42:57Z|00276|binding|INFO|Releasing lport d70a594c-be8a-461a-93b0-7416d3587e74 from this chassis (sb_readonly=0)
Jan 20 14:42:57 compute-1 ovn_controller[130490]: 2026-01-20T14:42:57Z|00277|binding|INFO|Setting lport d70a594c-be8a-461a-93b0-7416d3587e74 down in Southbound
Jan 20 14:42:57 compute-1 ovn_controller[130490]: 2026-01-20T14:42:57Z|00278|binding|INFO|Removing iface tapd70a594c-be ovn-installed in OVS
Jan 20 14:42:57 compute-1 nova_compute[225855]: 2026-01-20 14:42:57.144 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:42:57 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:42:57.152 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:11:3a:34 10.100.0.12'], port_security=['fa:16:3e:11:3a:34 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '69cc4cf9-dfe3-44cb-b811-0300b5ccd66b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3e260ad9-fcf1-432b-b71b-b943d4249b65', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a3e022a35f604df2bbc885e498b1e206', 'neutron:revision_number': '4', 'neutron:security_group_ids': '885819b7-5060-4b73-ad54-3f31f821195c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=89e607b1-9e39-47f0-8180-8aaef3a2a0e9, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], logical_port=d70a594c-be8a-461a-93b0-7416d3587e74) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:42:57 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:42:57.154 140354 INFO neutron.agent.ovn.metadata.agent [-] Port d70a594c-be8a-461a-93b0-7416d3587e74 in datapath 3e260ad9-fcf1-432b-b71b-b943d4249b65 unbound from our chassis
Jan 20 14:42:57 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:42:57.155 140354 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3e260ad9-fcf1-432b-b71b-b943d4249b65, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 20 14:42:57 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:42:57.156 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[e5f6c8b8-9a3e-4173-ac41-b5aeac4a1093]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:42:57 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:42:57.156 140354 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-3e260ad9-fcf1-432b-b71b-b943d4249b65 namespace which is not needed anymore
Jan 20 14:42:57 compute-1 nova_compute[225855]: 2026-01-20 14:42:57.159 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:42:57 compute-1 systemd[1]: machine-qemu\x2d33\x2dinstance\x2d0000004d.scope: Deactivated successfully.
Jan 20 14:42:57 compute-1 systemd[1]: machine-qemu\x2d33\x2dinstance\x2d0000004d.scope: Consumed 2.677s CPU time.
Jan 20 14:42:57 compute-1 systemd-machined[194361]: Machine qemu-33-instance-0000004d terminated.
Jan 20 14:42:57 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:42:57 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:42:57 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:42:57.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:42:57 compute-1 neutron-haproxy-ovnmeta-3e260ad9-fcf1-432b-b71b-b943d4249b65[256494]: [NOTICE]   (256523) : haproxy version is 2.8.14-c23fe91
Jan 20 14:42:57 compute-1 neutron-haproxy-ovnmeta-3e260ad9-fcf1-432b-b71b-b943d4249b65[256494]: [NOTICE]   (256523) : path to executable is /usr/sbin/haproxy
Jan 20 14:42:57 compute-1 neutron-haproxy-ovnmeta-3e260ad9-fcf1-432b-b71b-b943d4249b65[256494]: [WARNING]  (256523) : Exiting Master process...
Jan 20 14:42:57 compute-1 neutron-haproxy-ovnmeta-3e260ad9-fcf1-432b-b71b-b943d4249b65[256494]: [ALERT]    (256523) : Current worker (256525) exited with code 143 (Terminated)
Jan 20 14:42:57 compute-1 neutron-haproxy-ovnmeta-3e260ad9-fcf1-432b-b71b-b943d4249b65[256494]: [WARNING]  (256523) : All workers exited. Exiting... (0)
Jan 20 14:42:57 compute-1 nova_compute[225855]: 2026-01-20 14:42:57.278 225859 INFO nova.virt.libvirt.driver [-] [instance: 69cc4cf9-dfe3-44cb-b811-0300b5ccd66b] Instance destroyed successfully.
Jan 20 14:42:57 compute-1 nova_compute[225855]: 2026-01-20 14:42:57.278 225859 DEBUG nova.objects.instance [None req-d75a9290-3b10-49e0-a981-9cf010d1562f aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Lazy-loading 'resources' on Instance uuid 69cc4cf9-dfe3-44cb-b811-0300b5ccd66b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:42:57 compute-1 systemd[1]: libpod-6f8eed69f823a3fdf11e8147104e3b8cd6fb1499003504a66e3d765dff92ee16.scope: Deactivated successfully.
Jan 20 14:42:57 compute-1 podman[256557]: 2026-01-20 14:42:57.292494664 +0000 UTC m=+0.053965290 container died 6f8eed69f823a3fdf11e8147104e3b8cd6fb1499003504a66e3d765dff92ee16 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3e260ad9-fcf1-432b-b71b-b943d4249b65, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 20 14:42:57 compute-1 nova_compute[225855]: 2026-01-20 14:42:57.295 225859 DEBUG nova.virt.libvirt.vif [None req-d75a9290-3b10-49e0-a981-9cf010d1562f aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T14:42:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1574727229',display_name='tempest-tempest.common.compute-instance-1574727229-1',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1574727229-1',id=77,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-20T14:42:55Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a3e022a35f604df2bbc885e498b1e206',ramdisk_id='',reservation_id='r-6khz0z7o',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-MultipleCreateTestJSON-164394330',owner_user_name='tempest-MultipleCreateTestJSON-164394330-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T14:42:55Z,user_data=None,user_id='aa2e7857e85f483eb0d162e2ee8c2e2c',uuid=69cc4cf9-dfe3-44cb-b811-0300b5ccd66b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d70a594c-be8a-461a-93b0-7416d3587e74", "address": "fa:16:3e:11:3a:34", "network": {"id": "3e260ad9-fcf1-432b-b71b-b943d4249b65", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1425882684-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3e022a35f604df2bbc885e498b1e206", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd70a594c-be", "ovs_interfaceid": "d70a594c-be8a-461a-93b0-7416d3587e74", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 20 14:42:57 compute-1 nova_compute[225855]: 2026-01-20 14:42:57.295 225859 DEBUG nova.network.os_vif_util [None req-d75a9290-3b10-49e0-a981-9cf010d1562f aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Converting VIF {"id": "d70a594c-be8a-461a-93b0-7416d3587e74", "address": "fa:16:3e:11:3a:34", "network": {"id": "3e260ad9-fcf1-432b-b71b-b943d4249b65", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1425882684-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3e022a35f604df2bbc885e498b1e206", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd70a594c-be", "ovs_interfaceid": "d70a594c-be8a-461a-93b0-7416d3587e74", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:42:57 compute-1 nova_compute[225855]: 2026-01-20 14:42:57.296 225859 DEBUG nova.network.os_vif_util [None req-d75a9290-3b10-49e0-a981-9cf010d1562f aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:11:3a:34,bridge_name='br-int',has_traffic_filtering=True,id=d70a594c-be8a-461a-93b0-7416d3587e74,network=Network(3e260ad9-fcf1-432b-b71b-b943d4249b65),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd70a594c-be') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:42:57 compute-1 nova_compute[225855]: 2026-01-20 14:42:57.296 225859 DEBUG os_vif [None req-d75a9290-3b10-49e0-a981-9cf010d1562f aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:11:3a:34,bridge_name='br-int',has_traffic_filtering=True,id=d70a594c-be8a-461a-93b0-7416d3587e74,network=Network(3e260ad9-fcf1-432b-b71b-b943d4249b65),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd70a594c-be') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 20 14:42:57 compute-1 nova_compute[225855]: 2026-01-20 14:42:57.298 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:42:57 compute-1 nova_compute[225855]: 2026-01-20 14:42:57.298 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd70a594c-be, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:42:57 compute-1 nova_compute[225855]: 2026-01-20 14:42:57.300 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:42:57 compute-1 nova_compute[225855]: 2026-01-20 14:42:57.302 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 14:42:57 compute-1 nova_compute[225855]: 2026-01-20 14:42:57.304 225859 INFO os_vif [None req-d75a9290-3b10-49e0-a981-9cf010d1562f aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:11:3a:34,bridge_name='br-int',has_traffic_filtering=True,id=d70a594c-be8a-461a-93b0-7416d3587e74,network=Network(3e260ad9-fcf1-432b-b71b-b943d4249b65),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd70a594c-be')
Jan 20 14:42:57 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6f8eed69f823a3fdf11e8147104e3b8cd6fb1499003504a66e3d765dff92ee16-userdata-shm.mount: Deactivated successfully.
Jan 20 14:42:57 compute-1 systemd[1]: var-lib-containers-storage-overlay-e211a072461adf5d122a2cd3877c75e44066b196736bc398b06e937a9610cce8-merged.mount: Deactivated successfully.
Jan 20 14:42:57 compute-1 podman[256557]: 2026-01-20 14:42:57.336174121 +0000 UTC m=+0.097644737 container cleanup 6f8eed69f823a3fdf11e8147104e3b8cd6fb1499003504a66e3d765dff92ee16 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3e260ad9-fcf1-432b-b71b-b943d4249b65, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 20 14:42:57 compute-1 systemd[1]: libpod-conmon-6f8eed69f823a3fdf11e8147104e3b8cd6fb1499003504a66e3d765dff92ee16.scope: Deactivated successfully.
Jan 20 14:42:57 compute-1 podman[256616]: 2026-01-20 14:42:57.405138284 +0000 UTC m=+0.045695165 container remove 6f8eed69f823a3fdf11e8147104e3b8cd6fb1499003504a66e3d765dff92ee16 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3e260ad9-fcf1-432b-b71b-b943d4249b65, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 20 14:42:57 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:42:57.410 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[fd9a5254-c8e1-418f-bd1c-9c0b9ad406ae]: (4, ('Tue Jan 20 02:42:57 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-3e260ad9-fcf1-432b-b71b-b943d4249b65 (6f8eed69f823a3fdf11e8147104e3b8cd6fb1499003504a66e3d765dff92ee16)\n6f8eed69f823a3fdf11e8147104e3b8cd6fb1499003504a66e3d765dff92ee16\nTue Jan 20 02:42:57 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-3e260ad9-fcf1-432b-b71b-b943d4249b65 (6f8eed69f823a3fdf11e8147104e3b8cd6fb1499003504a66e3d765dff92ee16)\n6f8eed69f823a3fdf11e8147104e3b8cd6fb1499003504a66e3d765dff92ee16\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:42:57 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:42:57.412 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[023c7a34-5e94-41c5-8068-67b4214c75cd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:42:57 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:42:57.413 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3e260ad9-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:42:57 compute-1 nova_compute[225855]: 2026-01-20 14:42:57.415 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:42:57 compute-1 kernel: tap3e260ad9-f0: left promiscuous mode
Jan 20 14:42:57 compute-1 nova_compute[225855]: 2026-01-20 14:42:57.435 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:42:57 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:42:57.437 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[e017f52d-0660-4300-84bc-9a9c4952f928]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:42:57 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:42:57.453 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[a09092cf-6893-469d-9245-ebfe217324ad]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:42:57 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:42:57.454 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[2a1a9276-debe-4fd2-8e03-37c0336579f0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:42:57 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:42:57.478 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[e1c689a2-e9ce-460b-9567-571f8f6b3b12]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 521280, 'reachable_time': 43298, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 256631, 'error': None, 'target': 'ovnmeta-3e260ad9-fcf1-432b-b71b-b943d4249b65', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:42:57 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:42:57.482 140466 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-3e260ad9-fcf1-432b-b71b-b943d4249b65 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 20 14:42:57 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:42:57.482 140466 DEBUG oslo.privsep.daemon [-] privsep: reply[ac09506a-aaa5-4357-8d6e-9595cf0de74d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:42:57 compute-1 systemd[1]: run-netns-ovnmeta\x2d3e260ad9\x2dfcf1\x2d432b\x2db71b\x2db943d4249b65.mount: Deactivated successfully.
Jan 20 14:42:57 compute-1 nova_compute[225855]: 2026-01-20 14:42:57.693 225859 INFO nova.virt.libvirt.driver [None req-d75a9290-3b10-49e0-a981-9cf010d1562f aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] [instance: 69cc4cf9-dfe3-44cb-b811-0300b5ccd66b] Deleting instance files /var/lib/nova/instances/69cc4cf9-dfe3-44cb-b811-0300b5ccd66b_del
Jan 20 14:42:57 compute-1 nova_compute[225855]: 2026-01-20 14:42:57.694 225859 INFO nova.virt.libvirt.driver [None req-d75a9290-3b10-49e0-a981-9cf010d1562f aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] [instance: 69cc4cf9-dfe3-44cb-b811-0300b5ccd66b] Deletion of /var/lib/nova/instances/69cc4cf9-dfe3-44cb-b811-0300b5ccd66b_del complete
Jan 20 14:42:57 compute-1 nova_compute[225855]: 2026-01-20 14:42:57.752 225859 INFO nova.compute.manager [None req-d75a9290-3b10-49e0-a981-9cf010d1562f aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] [instance: 69cc4cf9-dfe3-44cb-b811-0300b5ccd66b] Took 0.71 seconds to destroy the instance on the hypervisor.
Jan 20 14:42:57 compute-1 nova_compute[225855]: 2026-01-20 14:42:57.753 225859 DEBUG oslo.service.loopingcall [None req-d75a9290-3b10-49e0-a981-9cf010d1562f aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 20 14:42:57 compute-1 nova_compute[225855]: 2026-01-20 14:42:57.754 225859 DEBUG nova.compute.manager [-] [instance: 69cc4cf9-dfe3-44cb-b811-0300b5ccd66b] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 20 14:42:57 compute-1 nova_compute[225855]: 2026-01-20 14:42:57.754 225859 DEBUG nova.network.neutron [-] [instance: 69cc4cf9-dfe3-44cb-b811-0300b5ccd66b] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 20 14:42:58 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:42:58 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:42:58 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:42:58.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:42:58 compute-1 nova_compute[225855]: 2026-01-20 14:42:58.571 225859 DEBUG nova.network.neutron [-] [instance: 69cc4cf9-dfe3-44cb-b811-0300b5ccd66b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:42:58 compute-1 nova_compute[225855]: 2026-01-20 14:42:58.588 225859 INFO nova.compute.manager [-] [instance: 69cc4cf9-dfe3-44cb-b811-0300b5ccd66b] Took 0.83 seconds to deallocate network for instance.
Jan 20 14:42:58 compute-1 nova_compute[225855]: 2026-01-20 14:42:58.708 225859 DEBUG oslo_concurrency.lockutils [None req-d75a9290-3b10-49e0-a981-9cf010d1562f aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:42:58 compute-1 nova_compute[225855]: 2026-01-20 14:42:58.708 225859 DEBUG oslo_concurrency.lockutils [None req-d75a9290-3b10-49e0-a981-9cf010d1562f aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:42:58 compute-1 nova_compute[225855]: 2026-01-20 14:42:58.842 225859 DEBUG oslo_concurrency.processutils [None req-d75a9290-3b10-49e0-a981-9cf010d1562f aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:42:59 compute-1 ceph-mon[81775]: pgmap v1638: 321 pgs: 321 active+clean; 372 MiB data, 852 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 5.7 MiB/s wr, 214 op/s
Jan 20 14:42:59 compute-1 nova_compute[225855]: 2026-01-20 14:42:59.160 225859 DEBUG nova.compute.manager [req-c4ea4a7e-04f4-415c-bc6b-5c7d7ee41ceb req-7777d7b8-1c9c-4215-bf62-3f042b8dc1d8 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 69cc4cf9-dfe3-44cb-b811-0300b5ccd66b] Received event network-vif-unplugged-d70a594c-be8a-461a-93b0-7416d3587e74 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:42:59 compute-1 nova_compute[225855]: 2026-01-20 14:42:59.161 225859 DEBUG oslo_concurrency.lockutils [req-c4ea4a7e-04f4-415c-bc6b-5c7d7ee41ceb req-7777d7b8-1c9c-4215-bf62-3f042b8dc1d8 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "69cc4cf9-dfe3-44cb-b811-0300b5ccd66b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:42:59 compute-1 nova_compute[225855]: 2026-01-20 14:42:59.161 225859 DEBUG oslo_concurrency.lockutils [req-c4ea4a7e-04f4-415c-bc6b-5c7d7ee41ceb req-7777d7b8-1c9c-4215-bf62-3f042b8dc1d8 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "69cc4cf9-dfe3-44cb-b811-0300b5ccd66b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:42:59 compute-1 nova_compute[225855]: 2026-01-20 14:42:59.162 225859 DEBUG oslo_concurrency.lockutils [req-c4ea4a7e-04f4-415c-bc6b-5c7d7ee41ceb req-7777d7b8-1c9c-4215-bf62-3f042b8dc1d8 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "69cc4cf9-dfe3-44cb-b811-0300b5ccd66b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:42:59 compute-1 nova_compute[225855]: 2026-01-20 14:42:59.162 225859 DEBUG nova.compute.manager [req-c4ea4a7e-04f4-415c-bc6b-5c7d7ee41ceb req-7777d7b8-1c9c-4215-bf62-3f042b8dc1d8 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 69cc4cf9-dfe3-44cb-b811-0300b5ccd66b] No waiting events found dispatching network-vif-unplugged-d70a594c-be8a-461a-93b0-7416d3587e74 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:42:59 compute-1 nova_compute[225855]: 2026-01-20 14:42:59.162 225859 WARNING nova.compute.manager [req-c4ea4a7e-04f4-415c-bc6b-5c7d7ee41ceb req-7777d7b8-1c9c-4215-bf62-3f042b8dc1d8 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 69cc4cf9-dfe3-44cb-b811-0300b5ccd66b] Received unexpected event network-vif-unplugged-d70a594c-be8a-461a-93b0-7416d3587e74 for instance with vm_state deleted and task_state None.
Jan 20 14:42:59 compute-1 nova_compute[225855]: 2026-01-20 14:42:59.162 225859 DEBUG nova.compute.manager [req-c4ea4a7e-04f4-415c-bc6b-5c7d7ee41ceb req-7777d7b8-1c9c-4215-bf62-3f042b8dc1d8 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 69cc4cf9-dfe3-44cb-b811-0300b5ccd66b] Received event network-vif-plugged-d70a594c-be8a-461a-93b0-7416d3587e74 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:42:59 compute-1 nova_compute[225855]: 2026-01-20 14:42:59.162 225859 DEBUG oslo_concurrency.lockutils [req-c4ea4a7e-04f4-415c-bc6b-5c7d7ee41ceb req-7777d7b8-1c9c-4215-bf62-3f042b8dc1d8 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "69cc4cf9-dfe3-44cb-b811-0300b5ccd66b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:42:59 compute-1 nova_compute[225855]: 2026-01-20 14:42:59.163 225859 DEBUG oslo_concurrency.lockutils [req-c4ea4a7e-04f4-415c-bc6b-5c7d7ee41ceb req-7777d7b8-1c9c-4215-bf62-3f042b8dc1d8 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "69cc4cf9-dfe3-44cb-b811-0300b5ccd66b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:42:59 compute-1 nova_compute[225855]: 2026-01-20 14:42:59.163 225859 DEBUG oslo_concurrency.lockutils [req-c4ea4a7e-04f4-415c-bc6b-5c7d7ee41ceb req-7777d7b8-1c9c-4215-bf62-3f042b8dc1d8 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "69cc4cf9-dfe3-44cb-b811-0300b5ccd66b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:42:59 compute-1 nova_compute[225855]: 2026-01-20 14:42:59.164 225859 DEBUG nova.compute.manager [req-c4ea4a7e-04f4-415c-bc6b-5c7d7ee41ceb req-7777d7b8-1c9c-4215-bf62-3f042b8dc1d8 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 69cc4cf9-dfe3-44cb-b811-0300b5ccd66b] No waiting events found dispatching network-vif-plugged-d70a594c-be8a-461a-93b0-7416d3587e74 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:42:59 compute-1 nova_compute[225855]: 2026-01-20 14:42:59.164 225859 WARNING nova.compute.manager [req-c4ea4a7e-04f4-415c-bc6b-5c7d7ee41ceb req-7777d7b8-1c9c-4215-bf62-3f042b8dc1d8 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 69cc4cf9-dfe3-44cb-b811-0300b5ccd66b] Received unexpected event network-vif-plugged-d70a594c-be8a-461a-93b0-7416d3587e74 for instance with vm_state deleted and task_state None.
Jan 20 14:42:59 compute-1 nova_compute[225855]: 2026-01-20 14:42:59.164 225859 DEBUG nova.compute.manager [req-c4ea4a7e-04f4-415c-bc6b-5c7d7ee41ceb req-7777d7b8-1c9c-4215-bf62-3f042b8dc1d8 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 69cc4cf9-dfe3-44cb-b811-0300b5ccd66b] Received event network-vif-deleted-d70a594c-be8a-461a-93b0-7416d3587e74 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:42:59 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:42:59 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:42:59 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:42:59.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:42:59 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:42:59 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4040533622' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:42:59 compute-1 nova_compute[225855]: 2026-01-20 14:42:59.277 225859 DEBUG oslo_concurrency.processutils [None req-d75a9290-3b10-49e0-a981-9cf010d1562f aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:42:59 compute-1 nova_compute[225855]: 2026-01-20 14:42:59.282 225859 DEBUG nova.compute.provider_tree [None req-d75a9290-3b10-49e0-a981-9cf010d1562f aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:42:59 compute-1 nova_compute[225855]: 2026-01-20 14:42:59.296 225859 DEBUG nova.scheduler.client.report [None req-d75a9290-3b10-49e0-a981-9cf010d1562f aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:42:59 compute-1 nova_compute[225855]: 2026-01-20 14:42:59.326 225859 DEBUG oslo_concurrency.lockutils [None req-d75a9290-3b10-49e0-a981-9cf010d1562f aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.617s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:42:59 compute-1 nova_compute[225855]: 2026-01-20 14:42:59.391 225859 INFO nova.scheduler.client.report [None req-d75a9290-3b10-49e0-a981-9cf010d1562f aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Deleted allocations for instance 69cc4cf9-dfe3-44cb-b811-0300b5ccd66b
Jan 20 14:42:59 compute-1 nova_compute[225855]: 2026-01-20 14:42:59.442 225859 DEBUG oslo_concurrency.lockutils [None req-d75a9290-3b10-49e0-a981-9cf010d1562f aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Lock "69cc4cf9-dfe3-44cb-b811-0300b5ccd66b" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.410s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:43:00 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/4040533622' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:43:00 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/306700131' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:43:00 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/4109341430' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:43:00 compute-1 ceph-mon[81775]: pgmap v1639: 321 pgs: 321 active+clean; 362 MiB data, 844 MiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 4.5 MiB/s wr, 219 op/s
Jan 20 14:43:00 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:43:00 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:43:00 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:43:00.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:43:00 compute-1 nova_compute[225855]: 2026-01-20 14:43:00.298 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:43:01 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:43:01 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:43:01 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:43:01.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:43:01 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:43:01 compute-1 ceph-mon[81775]: pgmap v1640: 321 pgs: 321 active+clean; 314 MiB data, 826 MiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 3.2 MiB/s wr, 315 op/s
Jan 20 14:43:02 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:43:02 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:43:02 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:43:02.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:43:02 compute-1 nova_compute[225855]: 2026-01-20 14:43:02.346 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:43:03 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:43:03 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:43:03 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:43:03.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:43:03 compute-1 ceph-mon[81775]: pgmap v1641: 321 pgs: 321 active+clean; 295 MiB data, 821 MiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 1.2 MiB/s wr, 295 op/s
Jan 20 14:43:04 compute-1 podman[256659]: 2026-01-20 14:43:04.035799706 +0000 UTC m=+0.073383379 container health_status 533cd8e0c3b8b7f910f15cef49e7f879ab46c12ce646a35628c15f13725eb500 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Jan 20 14:43:04 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:43:04 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:43:04 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:43:04.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:43:04 compute-1 nova_compute[225855]: 2026-01-20 14:43:04.548 225859 DEBUG oslo_concurrency.lockutils [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Acquiring lock "cd7f8cdc-1467-4d67-b60b-dd2ee8707b09" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:43:04 compute-1 nova_compute[225855]: 2026-01-20 14:43:04.549 225859 DEBUG oslo_concurrency.lockutils [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Lock "cd7f8cdc-1467-4d67-b60b-dd2ee8707b09" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:43:04 compute-1 nova_compute[225855]: 2026-01-20 14:43:04.566 225859 DEBUG nova.compute.manager [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] [instance: cd7f8cdc-1467-4d67-b60b-dd2ee8707b09] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 20 14:43:04 compute-1 nova_compute[225855]: 2026-01-20 14:43:04.668 225859 DEBUG oslo_concurrency.lockutils [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:43:04 compute-1 nova_compute[225855]: 2026-01-20 14:43:04.669 225859 DEBUG oslo_concurrency.lockutils [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:43:04 compute-1 nova_compute[225855]: 2026-01-20 14:43:04.676 225859 DEBUG nova.virt.hardware [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 20 14:43:04 compute-1 nova_compute[225855]: 2026-01-20 14:43:04.676 225859 INFO nova.compute.claims [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] [instance: cd7f8cdc-1467-4d67-b60b-dd2ee8707b09] Claim successful on node compute-1.ctlplane.example.com
Jan 20 14:43:04 compute-1 nova_compute[225855]: 2026-01-20 14:43:04.835 225859 DEBUG oslo_concurrency.processutils [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:43:05 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:43:05 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:43:05 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:43:05.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:43:05 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:43:05 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2608020822' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:43:05 compute-1 nova_compute[225855]: 2026-01-20 14:43:05.268 225859 DEBUG oslo_concurrency.processutils [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:43:05 compute-1 nova_compute[225855]: 2026-01-20 14:43:05.273 225859 DEBUG nova.compute.provider_tree [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:43:05 compute-1 nova_compute[225855]: 2026-01-20 14:43:05.296 225859 DEBUG nova.scheduler.client.report [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:43:05 compute-1 nova_compute[225855]: 2026-01-20 14:43:05.299 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:43:05 compute-1 nova_compute[225855]: 2026-01-20 14:43:05.338 225859 DEBUG oslo_concurrency.lockutils [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.669s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:43:05 compute-1 nova_compute[225855]: 2026-01-20 14:43:05.338 225859 DEBUG nova.compute.manager [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] [instance: cd7f8cdc-1467-4d67-b60b-dd2ee8707b09] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 20 14:43:05 compute-1 nova_compute[225855]: 2026-01-20 14:43:05.412 225859 DEBUG nova.compute.manager [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] [instance: cd7f8cdc-1467-4d67-b60b-dd2ee8707b09] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 20 14:43:05 compute-1 nova_compute[225855]: 2026-01-20 14:43:05.412 225859 DEBUG nova.network.neutron [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] [instance: cd7f8cdc-1467-4d67-b60b-dd2ee8707b09] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 20 14:43:05 compute-1 nova_compute[225855]: 2026-01-20 14:43:05.434 225859 INFO nova.virt.libvirt.driver [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] [instance: cd7f8cdc-1467-4d67-b60b-dd2ee8707b09] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 20 14:43:05 compute-1 nova_compute[225855]: 2026-01-20 14:43:05.460 225859 DEBUG nova.compute.manager [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] [instance: cd7f8cdc-1467-4d67-b60b-dd2ee8707b09] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 20 14:43:05 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/2608020822' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:43:05 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/3418317096' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:43:05 compute-1 nova_compute[225855]: 2026-01-20 14:43:05.610 225859 DEBUG nova.compute.manager [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] [instance: cd7f8cdc-1467-4d67-b60b-dd2ee8707b09] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 20 14:43:05 compute-1 nova_compute[225855]: 2026-01-20 14:43:05.612 225859 DEBUG nova.virt.libvirt.driver [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] [instance: cd7f8cdc-1467-4d67-b60b-dd2ee8707b09] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 20 14:43:05 compute-1 nova_compute[225855]: 2026-01-20 14:43:05.612 225859 INFO nova.virt.libvirt.driver [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] [instance: cd7f8cdc-1467-4d67-b60b-dd2ee8707b09] Creating image(s)
Jan 20 14:43:05 compute-1 nova_compute[225855]: 2026-01-20 14:43:05.654 225859 DEBUG nova.storage.rbd_utils [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] rbd image cd7f8cdc-1467-4d67-b60b-dd2ee8707b09_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:43:05 compute-1 nova_compute[225855]: 2026-01-20 14:43:05.682 225859 DEBUG nova.storage.rbd_utils [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] rbd image cd7f8cdc-1467-4d67-b60b-dd2ee8707b09_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:43:05 compute-1 nova_compute[225855]: 2026-01-20 14:43:05.706 225859 DEBUG nova.storage.rbd_utils [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] rbd image cd7f8cdc-1467-4d67-b60b-dd2ee8707b09_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:43:05 compute-1 nova_compute[225855]: 2026-01-20 14:43:05.710 225859 DEBUG oslo_concurrency.processutils [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:43:05 compute-1 nova_compute[225855]: 2026-01-20 14:43:05.746 225859 DEBUG nova.policy [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'aa2e7857e85f483eb0d162e2ee8c2e2c', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a3e022a35f604df2bbc885e498b1e206', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 20 14:43:05 compute-1 nova_compute[225855]: 2026-01-20 14:43:05.799 225859 DEBUG oslo_concurrency.processutils [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json" returned: 0 in 0.089s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:43:05 compute-1 nova_compute[225855]: 2026-01-20 14:43:05.799 225859 DEBUG oslo_concurrency.lockutils [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Acquiring lock "82d5c1918fd7c974214c7a48c1793a7a82560462" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:43:05 compute-1 nova_compute[225855]: 2026-01-20 14:43:05.800 225859 DEBUG oslo_concurrency.lockutils [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:43:05 compute-1 nova_compute[225855]: 2026-01-20 14:43:05.800 225859 DEBUG oslo_concurrency.lockutils [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:43:05 compute-1 nova_compute[225855]: 2026-01-20 14:43:05.823 225859 DEBUG nova.storage.rbd_utils [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] rbd image cd7f8cdc-1467-4d67-b60b-dd2ee8707b09_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:43:05 compute-1 nova_compute[225855]: 2026-01-20 14:43:05.827 225859 DEBUG oslo_concurrency.processutils [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 cd7f8cdc-1467-4d67-b60b-dd2ee8707b09_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:43:06 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:43:06 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:43:06 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:43:06.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:43:06 compute-1 nova_compute[225855]: 2026-01-20 14:43:06.284 225859 DEBUG oslo_concurrency.processutils [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 cd7f8cdc-1467-4d67-b60b-dd2ee8707b09_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:43:06 compute-1 nova_compute[225855]: 2026-01-20 14:43:06.347 225859 DEBUG nova.storage.rbd_utils [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] resizing rbd image cd7f8cdc-1467-4d67-b60b-dd2ee8707b09_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 20 14:43:06 compute-1 sudo[256811]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:43:06 compute-1 sudo[256811]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:43:06 compute-1 sudo[256811]: pam_unix(sudo:session): session closed for user root
Jan 20 14:43:06 compute-1 sudo[256872]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:43:06 compute-1 sudo[256872]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:43:06 compute-1 sudo[256872]: pam_unix(sudo:session): session closed for user root
Jan 20 14:43:06 compute-1 ceph-mon[81775]: pgmap v1642: 321 pgs: 321 active+clean; 283 MiB data, 814 MiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 1.8 MiB/s wr, 312 op/s
Jan 20 14:43:06 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/2177212168' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:43:06 compute-1 nova_compute[225855]: 2026-01-20 14:43:06.438 225859 DEBUG nova.objects.instance [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Lazy-loading 'migration_context' on Instance uuid cd7f8cdc-1467-4d67-b60b-dd2ee8707b09 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:43:06 compute-1 nova_compute[225855]: 2026-01-20 14:43:06.454 225859 DEBUG nova.virt.libvirt.driver [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] [instance: cd7f8cdc-1467-4d67-b60b-dd2ee8707b09] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 20 14:43:06 compute-1 nova_compute[225855]: 2026-01-20 14:43:06.454 225859 DEBUG nova.virt.libvirt.driver [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] [instance: cd7f8cdc-1467-4d67-b60b-dd2ee8707b09] Ensure instance console log exists: /var/lib/nova/instances/cd7f8cdc-1467-4d67-b60b-dd2ee8707b09/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 20 14:43:06 compute-1 nova_compute[225855]: 2026-01-20 14:43:06.454 225859 DEBUG oslo_concurrency.lockutils [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:43:06 compute-1 nova_compute[225855]: 2026-01-20 14:43:06.455 225859 DEBUG oslo_concurrency.lockutils [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:43:06 compute-1 nova_compute[225855]: 2026-01-20 14:43:06.455 225859 DEBUG oslo_concurrency.lockutils [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:43:06 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:43:07 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:43:07 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:43:07 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:43:07.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:43:07 compute-1 nova_compute[225855]: 2026-01-20 14:43:07.383 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:43:07 compute-1 nova_compute[225855]: 2026-01-20 14:43:07.603 225859 DEBUG nova.network.neutron [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] [instance: cd7f8cdc-1467-4d67-b60b-dd2ee8707b09] Successfully created port: 221681ce-86ed-410e-8ca2-52951142fede _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 20 14:43:07 compute-1 ceph-mon[81775]: pgmap v1643: 321 pgs: 321 active+clean; 294 MiB data, 812 MiB used, 20 GiB / 21 GiB avail; 5.9 MiB/s rd, 3.4 MiB/s wr, 349 op/s
Jan 20 14:43:07 compute-1 nova_compute[225855]: 2026-01-20 14:43:07.933 225859 DEBUG oslo_concurrency.lockutils [None req-31fee689-1eb0-4822-b907-59dd8bf5bae2 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] Acquiring lock "504acd93-cd55-496e-a85f-30e811f827d4" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:43:07 compute-1 nova_compute[225855]: 2026-01-20 14:43:07.933 225859 DEBUG oslo_concurrency.lockutils [None req-31fee689-1eb0-4822-b907-59dd8bf5bae2 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] Lock "504acd93-cd55-496e-a85f-30e811f827d4" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:43:07 compute-1 nova_compute[225855]: 2026-01-20 14:43:07.934 225859 DEBUG oslo_concurrency.lockutils [None req-31fee689-1eb0-4822-b907-59dd8bf5bae2 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] Acquiring lock "504acd93-cd55-496e-a85f-30e811f827d4-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:43:07 compute-1 nova_compute[225855]: 2026-01-20 14:43:07.934 225859 DEBUG oslo_concurrency.lockutils [None req-31fee689-1eb0-4822-b907-59dd8bf5bae2 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] Lock "504acd93-cd55-496e-a85f-30e811f827d4-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:43:07 compute-1 nova_compute[225855]: 2026-01-20 14:43:07.935 225859 DEBUG oslo_concurrency.lockutils [None req-31fee689-1eb0-4822-b907-59dd8bf5bae2 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] Lock "504acd93-cd55-496e-a85f-30e811f827d4-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:43:07 compute-1 nova_compute[225855]: 2026-01-20 14:43:07.937 225859 INFO nova.compute.manager [None req-31fee689-1eb0-4822-b907-59dd8bf5bae2 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] [instance: 504acd93-cd55-496e-a85f-30e811f827d4] Terminating instance
Jan 20 14:43:07 compute-1 nova_compute[225855]: 2026-01-20 14:43:07.939 225859 DEBUG nova.compute.manager [None req-31fee689-1eb0-4822-b907-59dd8bf5bae2 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] [instance: 504acd93-cd55-496e-a85f-30e811f827d4] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 20 14:43:07 compute-1 kernel: tap349b1d10-0b (unregistering): left promiscuous mode
Jan 20 14:43:07 compute-1 NetworkManager[49104]: <info>  [1768920187.9905] device (tap349b1d10-0b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 20 14:43:08 compute-1 nova_compute[225855]: 2026-01-20 14:43:07.999 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:43:08 compute-1 ovn_controller[130490]: 2026-01-20T14:43:08Z|00279|binding|INFO|Releasing lport 349b1d10-0b06-4025-80fd-4861bd487a43 from this chassis (sb_readonly=0)
Jan 20 14:43:08 compute-1 ovn_controller[130490]: 2026-01-20T14:43:08Z|00280|binding|INFO|Setting lport 349b1d10-0b06-4025-80fd-4861bd487a43 down in Southbound
Jan 20 14:43:08 compute-1 ovn_controller[130490]: 2026-01-20T14:43:08Z|00281|binding|INFO|Removing iface tap349b1d10-0b ovn-installed in OVS
Jan 20 14:43:08 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:43:08.005 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:dd:f6:26 10.100.0.12'], port_security=['fa:16:3e:dd:f6:26 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '504acd93-cd55-496e-a85f-30e811f827d4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b36e9cab-12c6-4a09-9aab-ef2679d875ba', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4b95747114ab4043b93a260387199c91', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f18b0222-78a5-4c37-8065-772dbe5c63e1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=80e2aa5b-ecb8-4e93-992f-baaef718dd34, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], logical_port=349b1d10-0b06-4025-80fd-4861bd487a43) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:43:08 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:43:08.006 140354 INFO neutron.agent.ovn.metadata.agent [-] Port 349b1d10-0b06-4025-80fd-4861bd487a43 in datapath b36e9cab-12c6-4a09-9aab-ef2679d875ba unbound from our chassis
Jan 20 14:43:08 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:43:08.008 140354 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b36e9cab-12c6-4a09-9aab-ef2679d875ba, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 20 14:43:08 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:43:08.009 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[200f21cd-d85c-4224-9ec4-8294749854df]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:43:08 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:43:08.009 140354 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-b36e9cab-12c6-4a09-9aab-ef2679d875ba namespace which is not needed anymore
Jan 20 14:43:08 compute-1 nova_compute[225855]: 2026-01-20 14:43:08.022 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:43:08 compute-1 systemd[1]: machine-qemu\x2d32\x2dinstance\x2d0000004b.scope: Deactivated successfully.
Jan 20 14:43:08 compute-1 systemd[1]: machine-qemu\x2d32\x2dinstance\x2d0000004b.scope: Consumed 14.487s CPU time.
Jan 20 14:43:08 compute-1 systemd-machined[194361]: Machine qemu-32-instance-0000004b terminated.
Jan 20 14:43:08 compute-1 neutron-haproxy-ovnmeta-b36e9cab-12c6-4a09-9aab-ef2679d875ba[255521]: [NOTICE]   (255528) : haproxy version is 2.8.14-c23fe91
Jan 20 14:43:08 compute-1 neutron-haproxy-ovnmeta-b36e9cab-12c6-4a09-9aab-ef2679d875ba[255521]: [NOTICE]   (255528) : path to executable is /usr/sbin/haproxy
Jan 20 14:43:08 compute-1 neutron-haproxy-ovnmeta-b36e9cab-12c6-4a09-9aab-ef2679d875ba[255521]: [WARNING]  (255528) : Exiting Master process...
Jan 20 14:43:08 compute-1 nova_compute[225855]: 2026-01-20 14:43:08.167 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:43:08 compute-1 neutron-haproxy-ovnmeta-b36e9cab-12c6-4a09-9aab-ef2679d875ba[255521]: [ALERT]    (255528) : Current worker (255530) exited with code 143 (Terminated)
Jan 20 14:43:08 compute-1 neutron-haproxy-ovnmeta-b36e9cab-12c6-4a09-9aab-ef2679d875ba[255521]: [WARNING]  (255528) : All workers exited. Exiting... (0)
Jan 20 14:43:08 compute-1 nova_compute[225855]: 2026-01-20 14:43:08.173 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:43:08 compute-1 systemd[1]: libpod-a0f03681d47f02379595b6b5e8bf0975841c981497c7672805924261893db805.scope: Deactivated successfully.
Jan 20 14:43:08 compute-1 nova_compute[225855]: 2026-01-20 14:43:08.178 225859 INFO nova.virt.libvirt.driver [-] [instance: 504acd93-cd55-496e-a85f-30e811f827d4] Instance destroyed successfully.
Jan 20 14:43:08 compute-1 nova_compute[225855]: 2026-01-20 14:43:08.178 225859 DEBUG nova.objects.instance [None req-31fee689-1eb0-4822-b907-59dd8bf5bae2 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] Lazy-loading 'resources' on Instance uuid 504acd93-cd55-496e-a85f-30e811f827d4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:43:08 compute-1 podman[256943]: 2026-01-20 14:43:08.179527393 +0000 UTC m=+0.055816551 container died a0f03681d47f02379595b6b5e8bf0975841c981497c7672805924261893db805 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b36e9cab-12c6-4a09-9aab-ef2679d875ba, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 20 14:43:08 compute-1 nova_compute[225855]: 2026-01-20 14:43:08.194 225859 DEBUG nova.virt.libvirt.vif [None req-31fee689-1eb0-4822-b907-59dd8bf5bae2 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T14:42:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-1822690739',display_name='tempest-ListServerFiltersTestJSON-instance-1822690739',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-1822690739',id=75,image_ref='26699514-f465-4b50-98b7-36f2cfc6a308',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-20T14:42:26Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='4b95747114ab4043b93a260387199c91',ramdisk_id='',reservation_id='r-c4vxbkd0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='26699514-f465-4b50-98b7-36f2cfc6a308',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServerFiltersTestJSON-2126845308',owner_user_name='tempest-ListServerFiltersTestJSON-2126845308-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T14:42:26Z,user_data=None,user_id='ff99fc8eda0640928c6e82981dacb266',uuid=504acd93-cd55-496e-a85f-30e811f827d4,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "349b1d10-0b06-4025-80fd-4861bd487a43", "address": "fa:16:3e:dd:f6:26", "network": {"id": "b36e9cab-12c6-4a09-9aab-ef2679d875ba", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-432532406-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b95747114ab4043b93a260387199c91", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap349b1d10-0b", "ovs_interfaceid": "349b1d10-0b06-4025-80fd-4861bd487a43", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 20 14:43:08 compute-1 nova_compute[225855]: 2026-01-20 14:43:08.195 225859 DEBUG nova.network.os_vif_util [None req-31fee689-1eb0-4822-b907-59dd8bf5bae2 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] Converting VIF {"id": "349b1d10-0b06-4025-80fd-4861bd487a43", "address": "fa:16:3e:dd:f6:26", "network": {"id": "b36e9cab-12c6-4a09-9aab-ef2679d875ba", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-432532406-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b95747114ab4043b93a260387199c91", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap349b1d10-0b", "ovs_interfaceid": "349b1d10-0b06-4025-80fd-4861bd487a43", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:43:08 compute-1 nova_compute[225855]: 2026-01-20 14:43:08.196 225859 DEBUG nova.network.os_vif_util [None req-31fee689-1eb0-4822-b907-59dd8bf5bae2 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:dd:f6:26,bridge_name='br-int',has_traffic_filtering=True,id=349b1d10-0b06-4025-80fd-4861bd487a43,network=Network(b36e9cab-12c6-4a09-9aab-ef2679d875ba),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap349b1d10-0b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:43:08 compute-1 nova_compute[225855]: 2026-01-20 14:43:08.196 225859 DEBUG os_vif [None req-31fee689-1eb0-4822-b907-59dd8bf5bae2 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:dd:f6:26,bridge_name='br-int',has_traffic_filtering=True,id=349b1d10-0b06-4025-80fd-4861bd487a43,network=Network(b36e9cab-12c6-4a09-9aab-ef2679d875ba),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap349b1d10-0b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 20 14:43:08 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:43:08 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:43:08 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:43:08.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:43:08 compute-1 nova_compute[225855]: 2026-01-20 14:43:08.199 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:43:08 compute-1 nova_compute[225855]: 2026-01-20 14:43:08.199 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap349b1d10-0b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:43:08 compute-1 nova_compute[225855]: 2026-01-20 14:43:08.201 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:43:08 compute-1 nova_compute[225855]: 2026-01-20 14:43:08.203 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:43:08 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a0f03681d47f02379595b6b5e8bf0975841c981497c7672805924261893db805-userdata-shm.mount: Deactivated successfully.
Jan 20 14:43:08 compute-1 nova_compute[225855]: 2026-01-20 14:43:08.206 225859 INFO os_vif [None req-31fee689-1eb0-4822-b907-59dd8bf5bae2 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:dd:f6:26,bridge_name='br-int',has_traffic_filtering=True,id=349b1d10-0b06-4025-80fd-4861bd487a43,network=Network(b36e9cab-12c6-4a09-9aab-ef2679d875ba),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap349b1d10-0b')
Jan 20 14:43:08 compute-1 systemd[1]: var-lib-containers-storage-overlay-bd3ad48162a1b4dec3cf75f42139906849c8a6a6a6b10f13149b76909d80e15f-merged.mount: Deactivated successfully.
Jan 20 14:43:08 compute-1 podman[256943]: 2026-01-20 14:43:08.220674989 +0000 UTC m=+0.096964147 container cleanup a0f03681d47f02379595b6b5e8bf0975841c981497c7672805924261893db805 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b36e9cab-12c6-4a09-9aab-ef2679d875ba, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Jan 20 14:43:08 compute-1 systemd[1]: libpod-conmon-a0f03681d47f02379595b6b5e8bf0975841c981497c7672805924261893db805.scope: Deactivated successfully.
Jan 20 14:43:08 compute-1 podman[256999]: 2026-01-20 14:43:08.288016636 +0000 UTC m=+0.044546593 container remove a0f03681d47f02379595b6b5e8bf0975841c981497c7672805924261893db805 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b36e9cab-12c6-4a09-9aab-ef2679d875ba, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Jan 20 14:43:08 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:43:08.293 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[ddbe5786-6f48-4e9d-ac70-44ef7f7a3a6d]: (4, ('Tue Jan 20 02:43:08 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-b36e9cab-12c6-4a09-9aab-ef2679d875ba (a0f03681d47f02379595b6b5e8bf0975841c981497c7672805924261893db805)\na0f03681d47f02379595b6b5e8bf0975841c981497c7672805924261893db805\nTue Jan 20 02:43:08 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-b36e9cab-12c6-4a09-9aab-ef2679d875ba (a0f03681d47f02379595b6b5e8bf0975841c981497c7672805924261893db805)\na0f03681d47f02379595b6b5e8bf0975841c981497c7672805924261893db805\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:43:08 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:43:08.295 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[3902b087-f641-44ef-a25e-06478f1e8d0c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:43:08 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:43:08.296 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb36e9cab-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:43:08 compute-1 kernel: tapb36e9cab-10: left promiscuous mode
Jan 20 14:43:08 compute-1 nova_compute[225855]: 2026-01-20 14:43:08.298 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:43:08 compute-1 nova_compute[225855]: 2026-01-20 14:43:08.311 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:43:08 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:43:08.313 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[8b4db99a-a64f-46ab-8393-d41f3dff63c7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:43:08 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:43:08.331 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[8ffea9b2-2f6d-4378-827b-0113f7f0b432]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:43:08 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:43:08.333 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[08c46e92-006f-4db2-b84f-e1a10269830a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:43:08 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:43:08.348 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[bb352566-0b2f-4b93-a379-f818e2b2d1f0]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 518397, 'reachable_time': 22824, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 257017, 'error': None, 'target': 'ovnmeta-b36e9cab-12c6-4a09-9aab-ef2679d875ba', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:43:08 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:43:08.351 140466 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-b36e9cab-12c6-4a09-9aab-ef2679d875ba deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 20 14:43:08 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:43:08.351 140466 DEBUG oslo.privsep.daemon [-] privsep: reply[be58979c-eb33-44cf-bb16-ca19937b45e7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:43:08 compute-1 systemd[1]: run-netns-ovnmeta\x2db36e9cab\x2d12c6\x2d4a09\x2d9aab\x2def2679d875ba.mount: Deactivated successfully.
Jan 20 14:43:08 compute-1 nova_compute[225855]: 2026-01-20 14:43:08.382 225859 DEBUG nova.compute.manager [req-fca3f18b-19d4-4e59-b9cf-6920f7d0ff38 req-0f32bb6b-8d4f-4096-8657-f6d072b798d1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 504acd93-cd55-496e-a85f-30e811f827d4] Received event network-vif-unplugged-349b1d10-0b06-4025-80fd-4861bd487a43 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:43:08 compute-1 nova_compute[225855]: 2026-01-20 14:43:08.383 225859 DEBUG oslo_concurrency.lockutils [req-fca3f18b-19d4-4e59-b9cf-6920f7d0ff38 req-0f32bb6b-8d4f-4096-8657-f6d072b798d1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "504acd93-cd55-496e-a85f-30e811f827d4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:43:08 compute-1 nova_compute[225855]: 2026-01-20 14:43:08.383 225859 DEBUG oslo_concurrency.lockutils [req-fca3f18b-19d4-4e59-b9cf-6920f7d0ff38 req-0f32bb6b-8d4f-4096-8657-f6d072b798d1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "504acd93-cd55-496e-a85f-30e811f827d4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:43:08 compute-1 nova_compute[225855]: 2026-01-20 14:43:08.383 225859 DEBUG oslo_concurrency.lockutils [req-fca3f18b-19d4-4e59-b9cf-6920f7d0ff38 req-0f32bb6b-8d4f-4096-8657-f6d072b798d1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "504acd93-cd55-496e-a85f-30e811f827d4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:43:08 compute-1 nova_compute[225855]: 2026-01-20 14:43:08.384 225859 DEBUG nova.compute.manager [req-fca3f18b-19d4-4e59-b9cf-6920f7d0ff38 req-0f32bb6b-8d4f-4096-8657-f6d072b798d1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 504acd93-cd55-496e-a85f-30e811f827d4] No waiting events found dispatching network-vif-unplugged-349b1d10-0b06-4025-80fd-4861bd487a43 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:43:08 compute-1 nova_compute[225855]: 2026-01-20 14:43:08.384 225859 DEBUG nova.compute.manager [req-fca3f18b-19d4-4e59-b9cf-6920f7d0ff38 req-0f32bb6b-8d4f-4096-8657-f6d072b798d1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 504acd93-cd55-496e-a85f-30e811f827d4] Received event network-vif-unplugged-349b1d10-0b06-4025-80fd-4861bd487a43 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 20 14:43:08 compute-1 nova_compute[225855]: 2026-01-20 14:43:08.587 225859 INFO nova.virt.libvirt.driver [None req-31fee689-1eb0-4822-b907-59dd8bf5bae2 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] [instance: 504acd93-cd55-496e-a85f-30e811f827d4] Deleting instance files /var/lib/nova/instances/504acd93-cd55-496e-a85f-30e811f827d4_del
Jan 20 14:43:08 compute-1 nova_compute[225855]: 2026-01-20 14:43:08.589 225859 INFO nova.virt.libvirt.driver [None req-31fee689-1eb0-4822-b907-59dd8bf5bae2 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] [instance: 504acd93-cd55-496e-a85f-30e811f827d4] Deletion of /var/lib/nova/instances/504acd93-cd55-496e-a85f-30e811f827d4_del complete
Jan 20 14:43:08 compute-1 nova_compute[225855]: 2026-01-20 14:43:08.654 225859 INFO nova.compute.manager [None req-31fee689-1eb0-4822-b907-59dd8bf5bae2 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] [instance: 504acd93-cd55-496e-a85f-30e811f827d4] Took 0.71 seconds to destroy the instance on the hypervisor.
Jan 20 14:43:08 compute-1 nova_compute[225855]: 2026-01-20 14:43:08.655 225859 DEBUG oslo.service.loopingcall [None req-31fee689-1eb0-4822-b907-59dd8bf5bae2 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 20 14:43:08 compute-1 nova_compute[225855]: 2026-01-20 14:43:08.655 225859 DEBUG nova.compute.manager [-] [instance: 504acd93-cd55-496e-a85f-30e811f827d4] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 20 14:43:08 compute-1 nova_compute[225855]: 2026-01-20 14:43:08.656 225859 DEBUG nova.network.neutron [-] [instance: 504acd93-cd55-496e-a85f-30e811f827d4] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 20 14:43:09 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/1530503608' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:43:09 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:43:09 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:43:09 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:43:09.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:43:09 compute-1 nova_compute[225855]: 2026-01-20 14:43:09.548 225859 DEBUG nova.network.neutron [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] [instance: cd7f8cdc-1467-4d67-b60b-dd2ee8707b09] Successfully updated port: 221681ce-86ed-410e-8ca2-52951142fede _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 20 14:43:09 compute-1 nova_compute[225855]: 2026-01-20 14:43:09.568 225859 DEBUG oslo_concurrency.lockutils [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Acquiring lock "refresh_cache-cd7f8cdc-1467-4d67-b60b-dd2ee8707b09" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:43:09 compute-1 nova_compute[225855]: 2026-01-20 14:43:09.569 225859 DEBUG oslo_concurrency.lockutils [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Acquired lock "refresh_cache-cd7f8cdc-1467-4d67-b60b-dd2ee8707b09" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:43:09 compute-1 nova_compute[225855]: 2026-01-20 14:43:09.569 225859 DEBUG nova.network.neutron [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] [instance: cd7f8cdc-1467-4d67-b60b-dd2ee8707b09] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 14:43:09 compute-1 nova_compute[225855]: 2026-01-20 14:43:09.656 225859 DEBUG nova.network.neutron [-] [instance: 504acd93-cd55-496e-a85f-30e811f827d4] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:43:09 compute-1 nova_compute[225855]: 2026-01-20 14:43:09.685 225859 INFO nova.compute.manager [-] [instance: 504acd93-cd55-496e-a85f-30e811f827d4] Took 1.03 seconds to deallocate network for instance.
Jan 20 14:43:09 compute-1 nova_compute[225855]: 2026-01-20 14:43:09.737 225859 DEBUG oslo_concurrency.lockutils [None req-31fee689-1eb0-4822-b907-59dd8bf5bae2 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:43:09 compute-1 nova_compute[225855]: 2026-01-20 14:43:09.738 225859 DEBUG oslo_concurrency.lockutils [None req-31fee689-1eb0-4822-b907-59dd8bf5bae2 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:43:09 compute-1 nova_compute[225855]: 2026-01-20 14:43:09.799 225859 DEBUG oslo_concurrency.processutils [None req-31fee689-1eb0-4822-b907-59dd8bf5bae2 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:43:09 compute-1 nova_compute[225855]: 2026-01-20 14:43:09.920 225859 DEBUG nova.network.neutron [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] [instance: cd7f8cdc-1467-4d67-b60b-dd2ee8707b09] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 20 14:43:10 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/1857567897' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:43:10 compute-1 ceph-mon[81775]: pgmap v1644: 321 pgs: 321 active+clean; 295 MiB data, 815 MiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 4.4 MiB/s wr, 301 op/s
Jan 20 14:43:10 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:43:10 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:43:10 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:43:10.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:43:10 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:43:10 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1201707774' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:43:10 compute-1 nova_compute[225855]: 2026-01-20 14:43:10.231 225859 DEBUG oslo_concurrency.processutils [None req-31fee689-1eb0-4822-b907-59dd8bf5bae2 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:43:10 compute-1 nova_compute[225855]: 2026-01-20 14:43:10.238 225859 DEBUG nova.compute.provider_tree [None req-31fee689-1eb0-4822-b907-59dd8bf5bae2 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:43:10 compute-1 nova_compute[225855]: 2026-01-20 14:43:10.263 225859 DEBUG nova.scheduler.client.report [None req-31fee689-1eb0-4822-b907-59dd8bf5bae2 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:43:10 compute-1 nova_compute[225855]: 2026-01-20 14:43:10.296 225859 DEBUG oslo_concurrency.lockutils [None req-31fee689-1eb0-4822-b907-59dd8bf5bae2 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.558s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:43:10 compute-1 nova_compute[225855]: 2026-01-20 14:43:10.300 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:43:10 compute-1 nova_compute[225855]: 2026-01-20 14:43:10.332 225859 INFO nova.scheduler.client.report [None req-31fee689-1eb0-4822-b907-59dd8bf5bae2 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] Deleted allocations for instance 504acd93-cd55-496e-a85f-30e811f827d4
Jan 20 14:43:10 compute-1 nova_compute[225855]: 2026-01-20 14:43:10.423 225859 DEBUG oslo_concurrency.lockutils [None req-31fee689-1eb0-4822-b907-59dd8bf5bae2 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] Lock "504acd93-cd55-496e-a85f-30e811f827d4" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.490s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:43:10 compute-1 nova_compute[225855]: 2026-01-20 14:43:10.500 225859 DEBUG nova.compute.manager [req-d6d7e4f8-914b-4f70-af34-9910e0e37aaf req-840959d0-7393-4732-9287-c4c366e266a5 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 504acd93-cd55-496e-a85f-30e811f827d4] Received event network-vif-plugged-349b1d10-0b06-4025-80fd-4861bd487a43 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:43:10 compute-1 nova_compute[225855]: 2026-01-20 14:43:10.500 225859 DEBUG oslo_concurrency.lockutils [req-d6d7e4f8-914b-4f70-af34-9910e0e37aaf req-840959d0-7393-4732-9287-c4c366e266a5 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "504acd93-cd55-496e-a85f-30e811f827d4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:43:10 compute-1 nova_compute[225855]: 2026-01-20 14:43:10.501 225859 DEBUG oslo_concurrency.lockutils [req-d6d7e4f8-914b-4f70-af34-9910e0e37aaf req-840959d0-7393-4732-9287-c4c366e266a5 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "504acd93-cd55-496e-a85f-30e811f827d4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:43:10 compute-1 nova_compute[225855]: 2026-01-20 14:43:10.501 225859 DEBUG oslo_concurrency.lockutils [req-d6d7e4f8-914b-4f70-af34-9910e0e37aaf req-840959d0-7393-4732-9287-c4c366e266a5 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "504acd93-cd55-496e-a85f-30e811f827d4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:43:10 compute-1 nova_compute[225855]: 2026-01-20 14:43:10.501 225859 DEBUG nova.compute.manager [req-d6d7e4f8-914b-4f70-af34-9910e0e37aaf req-840959d0-7393-4732-9287-c4c366e266a5 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 504acd93-cd55-496e-a85f-30e811f827d4] No waiting events found dispatching network-vif-plugged-349b1d10-0b06-4025-80fd-4861bd487a43 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:43:10 compute-1 nova_compute[225855]: 2026-01-20 14:43:10.502 225859 WARNING nova.compute.manager [req-d6d7e4f8-914b-4f70-af34-9910e0e37aaf req-840959d0-7393-4732-9287-c4c366e266a5 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 504acd93-cd55-496e-a85f-30e811f827d4] Received unexpected event network-vif-plugged-349b1d10-0b06-4025-80fd-4861bd487a43 for instance with vm_state deleted and task_state None.
Jan 20 14:43:10 compute-1 nova_compute[225855]: 2026-01-20 14:43:10.502 225859 DEBUG nova.compute.manager [req-d6d7e4f8-914b-4f70-af34-9910e0e37aaf req-840959d0-7393-4732-9287-c4c366e266a5 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: cd7f8cdc-1467-4d67-b60b-dd2ee8707b09] Received event network-changed-221681ce-86ed-410e-8ca2-52951142fede external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:43:10 compute-1 nova_compute[225855]: 2026-01-20 14:43:10.502 225859 DEBUG nova.compute.manager [req-d6d7e4f8-914b-4f70-af34-9910e0e37aaf req-840959d0-7393-4732-9287-c4c366e266a5 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: cd7f8cdc-1467-4d67-b60b-dd2ee8707b09] Refreshing instance network info cache due to event network-changed-221681ce-86ed-410e-8ca2-52951142fede. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 14:43:10 compute-1 nova_compute[225855]: 2026-01-20 14:43:10.502 225859 DEBUG oslo_concurrency.lockutils [req-d6d7e4f8-914b-4f70-af34-9910e0e37aaf req-840959d0-7393-4732-9287-c4c366e266a5 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-cd7f8cdc-1467-4d67-b60b-dd2ee8707b09" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:43:11 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/1201707774' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:43:11 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:43:11 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:43:11 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:43:11.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:43:11 compute-1 nova_compute[225855]: 2026-01-20 14:43:11.548 225859 DEBUG nova.network.neutron [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] [instance: cd7f8cdc-1467-4d67-b60b-dd2ee8707b09] Updating instance_info_cache with network_info: [{"id": "221681ce-86ed-410e-8ca2-52951142fede", "address": "fa:16:3e:eb:87:b2", "network": {"id": "3e260ad9-fcf1-432b-b71b-b943d4249b65", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1425882684-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3e022a35f604df2bbc885e498b1e206", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap221681ce-86", "ovs_interfaceid": "221681ce-86ed-410e-8ca2-52951142fede", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:43:11 compute-1 nova_compute[225855]: 2026-01-20 14:43:11.573 225859 DEBUG oslo_concurrency.lockutils [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Releasing lock "refresh_cache-cd7f8cdc-1467-4d67-b60b-dd2ee8707b09" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:43:11 compute-1 nova_compute[225855]: 2026-01-20 14:43:11.574 225859 DEBUG nova.compute.manager [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] [instance: cd7f8cdc-1467-4d67-b60b-dd2ee8707b09] Instance network_info: |[{"id": "221681ce-86ed-410e-8ca2-52951142fede", "address": "fa:16:3e:eb:87:b2", "network": {"id": "3e260ad9-fcf1-432b-b71b-b943d4249b65", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1425882684-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3e022a35f604df2bbc885e498b1e206", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap221681ce-86", "ovs_interfaceid": "221681ce-86ed-410e-8ca2-52951142fede", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 20 14:43:11 compute-1 nova_compute[225855]: 2026-01-20 14:43:11.574 225859 DEBUG oslo_concurrency.lockutils [req-d6d7e4f8-914b-4f70-af34-9910e0e37aaf req-840959d0-7393-4732-9287-c4c366e266a5 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-cd7f8cdc-1467-4d67-b60b-dd2ee8707b09" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:43:11 compute-1 nova_compute[225855]: 2026-01-20 14:43:11.575 225859 DEBUG nova.network.neutron [req-d6d7e4f8-914b-4f70-af34-9910e0e37aaf req-840959d0-7393-4732-9287-c4c366e266a5 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: cd7f8cdc-1467-4d67-b60b-dd2ee8707b09] Refreshing network info cache for port 221681ce-86ed-410e-8ca2-52951142fede _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 14:43:11 compute-1 nova_compute[225855]: 2026-01-20 14:43:11.578 225859 DEBUG nova.virt.libvirt.driver [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] [instance: cd7f8cdc-1467-4d67-b60b-dd2ee8707b09] Start _get_guest_xml network_info=[{"id": "221681ce-86ed-410e-8ca2-52951142fede", "address": "fa:16:3e:eb:87:b2", "network": {"id": "3e260ad9-fcf1-432b-b71b-b943d4249b65", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1425882684-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3e022a35f604df2bbc885e498b1e206", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap221681ce-86", "ovs_interfaceid": "221681ce-86ed-410e-8ca2-52951142fede", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'device_type': 'disk', 'encryption_options': None, 'size': 0, 'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'guest_format': None, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'image_id': 'a32b3e07-16d8-46fd-9a7b-c242c432fcf9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 20 14:43:11 compute-1 nova_compute[225855]: 2026-01-20 14:43:11.583 225859 WARNING nova.virt.libvirt.driver [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 14:43:11 compute-1 nova_compute[225855]: 2026-01-20 14:43:11.590 225859 DEBUG nova.virt.libvirt.host [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 20 14:43:11 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:43:11 compute-1 nova_compute[225855]: 2026-01-20 14:43:11.591 225859 DEBUG nova.virt.libvirt.host [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 20 14:43:11 compute-1 nova_compute[225855]: 2026-01-20 14:43:11.600 225859 DEBUG nova.virt.libvirt.host [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 20 14:43:11 compute-1 nova_compute[225855]: 2026-01-20 14:43:11.600 225859 DEBUG nova.virt.libvirt.host [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 20 14:43:11 compute-1 nova_compute[225855]: 2026-01-20 14:43:11.601 225859 DEBUG nova.virt.libvirt.driver [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 20 14:43:11 compute-1 nova_compute[225855]: 2026-01-20 14:43:11.602 225859 DEBUG nova.virt.hardware [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-20T14:21:55Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='522deaab-a741-4dbb-932d-d8b13a211c33',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 20 14:43:11 compute-1 nova_compute[225855]: 2026-01-20 14:43:11.602 225859 DEBUG nova.virt.hardware [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 20 14:43:11 compute-1 nova_compute[225855]: 2026-01-20 14:43:11.603 225859 DEBUG nova.virt.hardware [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 20 14:43:11 compute-1 nova_compute[225855]: 2026-01-20 14:43:11.603 225859 DEBUG nova.virt.hardware [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 20 14:43:11 compute-1 nova_compute[225855]: 2026-01-20 14:43:11.603 225859 DEBUG nova.virt.hardware [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 20 14:43:11 compute-1 nova_compute[225855]: 2026-01-20 14:43:11.604 225859 DEBUG nova.virt.hardware [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 20 14:43:11 compute-1 nova_compute[225855]: 2026-01-20 14:43:11.604 225859 DEBUG nova.virt.hardware [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 20 14:43:11 compute-1 nova_compute[225855]: 2026-01-20 14:43:11.605 225859 DEBUG nova.virt.hardware [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 20 14:43:11 compute-1 nova_compute[225855]: 2026-01-20 14:43:11.605 225859 DEBUG nova.virt.hardware [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 20 14:43:11 compute-1 nova_compute[225855]: 2026-01-20 14:43:11.605 225859 DEBUG nova.virt.hardware [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 20 14:43:11 compute-1 nova_compute[225855]: 2026-01-20 14:43:11.606 225859 DEBUG nova.virt.hardware [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 20 14:43:11 compute-1 nova_compute[225855]: 2026-01-20 14:43:11.609 225859 DEBUG oslo_concurrency.processutils [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:43:11 compute-1 nova_compute[225855]: 2026-01-20 14:43:11.654 225859 DEBUG nova.compute.manager [req-41984bb9-b218-4f0e-b367-62ac0cc7320e req-420d6751-7652-4089-8c3a-1b03d978b978 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 504acd93-cd55-496e-a85f-30e811f827d4] Received event network-vif-deleted-349b1d10-0b06-4025-80fd-4861bd487a43 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:43:12 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:43:12 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3879070571' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:43:12 compute-1 nova_compute[225855]: 2026-01-20 14:43:12.028 225859 DEBUG oslo_concurrency.processutils [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.419s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:43:12 compute-1 nova_compute[225855]: 2026-01-20 14:43:12.062 225859 DEBUG nova.storage.rbd_utils [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] rbd image cd7f8cdc-1467-4d67-b60b-dd2ee8707b09_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:43:12 compute-1 nova_compute[225855]: 2026-01-20 14:43:12.067 225859 DEBUG oslo_concurrency.processutils [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:43:12 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:43:12 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:43:12 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:43:12.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:43:12 compute-1 nova_compute[225855]: 2026-01-20 14:43:12.276 225859 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1768920177.2757256, 69cc4cf9-dfe3-44cb-b811-0300b5ccd66b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:43:12 compute-1 nova_compute[225855]: 2026-01-20 14:43:12.278 225859 INFO nova.compute.manager [-] [instance: 69cc4cf9-dfe3-44cb-b811-0300b5ccd66b] VM Stopped (Lifecycle Event)
Jan 20 14:43:12 compute-1 nova_compute[225855]: 2026-01-20 14:43:12.311 225859 DEBUG nova.compute.manager [None req-c97783c1-d633-4d0d-9b04-906461cf5274 - - - - - -] [instance: 69cc4cf9-dfe3-44cb-b811-0300b5ccd66b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:43:12 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/507388340' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:43:12 compute-1 ceph-mon[81775]: pgmap v1645: 321 pgs: 321 active+clean; 288 MiB data, 816 MiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 5.3 MiB/s wr, 308 op/s
Jan 20 14:43:12 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/85495823' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:43:12 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/3879070571' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:43:12 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:43:12 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1554951386' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:43:13 compute-1 nova_compute[225855]: 2026-01-20 14:43:13.044 225859 DEBUG oslo_concurrency.processutils [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.977s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:43:13 compute-1 nova_compute[225855]: 2026-01-20 14:43:13.046 225859 DEBUG nova.virt.libvirt.vif [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T14:43:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-MultipleCreateTestJSON-server-704404998',display_name='tempest-MultipleCreateTestJSON-server-704404998-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-multiplecreatetestjson-server-704404998-1',id=80,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a3e022a35f604df2bbc885e498b1e206',ramdisk_id='',reservation_id='r-0enj9v31',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-MultipleCreateTestJSON-164394330',owner_user_name='tempest-MultipleCreateTestJSON-164394330-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T14:43:05Z,user_data=None,user_id='aa2e7857e85f483eb0d162e2ee8c2e2c',uuid=cd7f8cdc-1467-4d67-b60b-dd2ee8707b09,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "221681ce-86ed-410e-8ca2-52951142fede", "address": "fa:16:3e:eb:87:b2", "network": {"id": "3e260ad9-fcf1-432b-b71b-b943d4249b65", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1425882684-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3e022a35f604df2bbc885e498b1e206", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap221681ce-86", "ovs_interfaceid": "221681ce-86ed-410e-8ca2-52951142fede", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 20 14:43:13 compute-1 nova_compute[225855]: 2026-01-20 14:43:13.046 225859 DEBUG nova.network.os_vif_util [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Converting VIF {"id": "221681ce-86ed-410e-8ca2-52951142fede", "address": "fa:16:3e:eb:87:b2", "network": {"id": "3e260ad9-fcf1-432b-b71b-b943d4249b65", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1425882684-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3e022a35f604df2bbc885e498b1e206", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap221681ce-86", "ovs_interfaceid": "221681ce-86ed-410e-8ca2-52951142fede", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:43:13 compute-1 nova_compute[225855]: 2026-01-20 14:43:13.047 225859 DEBUG nova.network.os_vif_util [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:eb:87:b2,bridge_name='br-int',has_traffic_filtering=True,id=221681ce-86ed-410e-8ca2-52951142fede,network=Network(3e260ad9-fcf1-432b-b71b-b943d4249b65),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap221681ce-86') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:43:13 compute-1 nova_compute[225855]: 2026-01-20 14:43:13.049 225859 DEBUG nova.objects.instance [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Lazy-loading 'pci_devices' on Instance uuid cd7f8cdc-1467-4d67-b60b-dd2ee8707b09 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:43:13 compute-1 nova_compute[225855]: 2026-01-20 14:43:13.063 225859 DEBUG nova.virt.libvirt.driver [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] [instance: cd7f8cdc-1467-4d67-b60b-dd2ee8707b09] End _get_guest_xml xml=<domain type="kvm">
Jan 20 14:43:13 compute-1 nova_compute[225855]:   <uuid>cd7f8cdc-1467-4d67-b60b-dd2ee8707b09</uuid>
Jan 20 14:43:13 compute-1 nova_compute[225855]:   <name>instance-00000050</name>
Jan 20 14:43:13 compute-1 nova_compute[225855]:   <memory>131072</memory>
Jan 20 14:43:13 compute-1 nova_compute[225855]:   <vcpu>1</vcpu>
Jan 20 14:43:13 compute-1 nova_compute[225855]:   <metadata>
Jan 20 14:43:13 compute-1 nova_compute[225855]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 14:43:13 compute-1 nova_compute[225855]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 14:43:13 compute-1 nova_compute[225855]:       <nova:name>tempest-MultipleCreateTestJSON-server-704404998-1</nova:name>
Jan 20 14:43:13 compute-1 nova_compute[225855]:       <nova:creationTime>2026-01-20 14:43:11</nova:creationTime>
Jan 20 14:43:13 compute-1 nova_compute[225855]:       <nova:flavor name="m1.nano">
Jan 20 14:43:13 compute-1 nova_compute[225855]:         <nova:memory>128</nova:memory>
Jan 20 14:43:13 compute-1 nova_compute[225855]:         <nova:disk>1</nova:disk>
Jan 20 14:43:13 compute-1 nova_compute[225855]:         <nova:swap>0</nova:swap>
Jan 20 14:43:13 compute-1 nova_compute[225855]:         <nova:ephemeral>0</nova:ephemeral>
Jan 20 14:43:13 compute-1 nova_compute[225855]:         <nova:vcpus>1</nova:vcpus>
Jan 20 14:43:13 compute-1 nova_compute[225855]:       </nova:flavor>
Jan 20 14:43:13 compute-1 nova_compute[225855]:       <nova:owner>
Jan 20 14:43:13 compute-1 nova_compute[225855]:         <nova:user uuid="aa2e7857e85f483eb0d162e2ee8c2e2c">tempest-MultipleCreateTestJSON-164394330-project-member</nova:user>
Jan 20 14:43:13 compute-1 nova_compute[225855]:         <nova:project uuid="a3e022a35f604df2bbc885e498b1e206">tempest-MultipleCreateTestJSON-164394330</nova:project>
Jan 20 14:43:13 compute-1 nova_compute[225855]:       </nova:owner>
Jan 20 14:43:13 compute-1 nova_compute[225855]:       <nova:root type="image" uuid="a32b3e07-16d8-46fd-9a7b-c242c432fcf9"/>
Jan 20 14:43:13 compute-1 nova_compute[225855]:       <nova:ports>
Jan 20 14:43:13 compute-1 nova_compute[225855]:         <nova:port uuid="221681ce-86ed-410e-8ca2-52951142fede">
Jan 20 14:43:13 compute-1 nova_compute[225855]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Jan 20 14:43:13 compute-1 nova_compute[225855]:         </nova:port>
Jan 20 14:43:13 compute-1 nova_compute[225855]:       </nova:ports>
Jan 20 14:43:13 compute-1 nova_compute[225855]:     </nova:instance>
Jan 20 14:43:13 compute-1 nova_compute[225855]:   </metadata>
Jan 20 14:43:13 compute-1 nova_compute[225855]:   <sysinfo type="smbios">
Jan 20 14:43:13 compute-1 nova_compute[225855]:     <system>
Jan 20 14:43:13 compute-1 nova_compute[225855]:       <entry name="manufacturer">RDO</entry>
Jan 20 14:43:13 compute-1 nova_compute[225855]:       <entry name="product">OpenStack Compute</entry>
Jan 20 14:43:13 compute-1 nova_compute[225855]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 14:43:13 compute-1 nova_compute[225855]:       <entry name="serial">cd7f8cdc-1467-4d67-b60b-dd2ee8707b09</entry>
Jan 20 14:43:13 compute-1 nova_compute[225855]:       <entry name="uuid">cd7f8cdc-1467-4d67-b60b-dd2ee8707b09</entry>
Jan 20 14:43:13 compute-1 nova_compute[225855]:       <entry name="family">Virtual Machine</entry>
Jan 20 14:43:13 compute-1 nova_compute[225855]:     </system>
Jan 20 14:43:13 compute-1 nova_compute[225855]:   </sysinfo>
Jan 20 14:43:13 compute-1 nova_compute[225855]:   <os>
Jan 20 14:43:13 compute-1 nova_compute[225855]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 20 14:43:13 compute-1 nova_compute[225855]:     <boot dev="hd"/>
Jan 20 14:43:13 compute-1 nova_compute[225855]:     <smbios mode="sysinfo"/>
Jan 20 14:43:13 compute-1 nova_compute[225855]:   </os>
Jan 20 14:43:13 compute-1 nova_compute[225855]:   <features>
Jan 20 14:43:13 compute-1 nova_compute[225855]:     <acpi/>
Jan 20 14:43:13 compute-1 nova_compute[225855]:     <apic/>
Jan 20 14:43:13 compute-1 nova_compute[225855]:     <vmcoreinfo/>
Jan 20 14:43:13 compute-1 nova_compute[225855]:   </features>
Jan 20 14:43:13 compute-1 nova_compute[225855]:   <clock offset="utc">
Jan 20 14:43:13 compute-1 nova_compute[225855]:     <timer name="pit" tickpolicy="delay"/>
Jan 20 14:43:13 compute-1 nova_compute[225855]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 20 14:43:13 compute-1 nova_compute[225855]:     <timer name="hpet" present="no"/>
Jan 20 14:43:13 compute-1 nova_compute[225855]:   </clock>
Jan 20 14:43:13 compute-1 nova_compute[225855]:   <cpu mode="custom" match="exact">
Jan 20 14:43:13 compute-1 nova_compute[225855]:     <model>Nehalem</model>
Jan 20 14:43:13 compute-1 nova_compute[225855]:     <topology sockets="1" cores="1" threads="1"/>
Jan 20 14:43:13 compute-1 nova_compute[225855]:   </cpu>
Jan 20 14:43:13 compute-1 nova_compute[225855]:   <devices>
Jan 20 14:43:13 compute-1 nova_compute[225855]:     <disk type="network" device="disk">
Jan 20 14:43:13 compute-1 nova_compute[225855]:       <driver type="raw" cache="none"/>
Jan 20 14:43:13 compute-1 nova_compute[225855]:       <source protocol="rbd" name="vms/cd7f8cdc-1467-4d67-b60b-dd2ee8707b09_disk">
Jan 20 14:43:13 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:43:13 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:43:13 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:43:13 compute-1 nova_compute[225855]:       </source>
Jan 20 14:43:13 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 14:43:13 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:43:13 compute-1 nova_compute[225855]:       </auth>
Jan 20 14:43:13 compute-1 nova_compute[225855]:       <target dev="vda" bus="virtio"/>
Jan 20 14:43:13 compute-1 nova_compute[225855]:     </disk>
Jan 20 14:43:13 compute-1 nova_compute[225855]:     <disk type="network" device="cdrom">
Jan 20 14:43:13 compute-1 nova_compute[225855]:       <driver type="raw" cache="none"/>
Jan 20 14:43:13 compute-1 nova_compute[225855]:       <source protocol="rbd" name="vms/cd7f8cdc-1467-4d67-b60b-dd2ee8707b09_disk.config">
Jan 20 14:43:13 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:43:13 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:43:13 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:43:13 compute-1 nova_compute[225855]:       </source>
Jan 20 14:43:13 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 14:43:13 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:43:13 compute-1 nova_compute[225855]:       </auth>
Jan 20 14:43:13 compute-1 nova_compute[225855]:       <target dev="sda" bus="sata"/>
Jan 20 14:43:13 compute-1 nova_compute[225855]:     </disk>
Jan 20 14:43:13 compute-1 nova_compute[225855]:     <interface type="ethernet">
Jan 20 14:43:13 compute-1 nova_compute[225855]:       <mac address="fa:16:3e:eb:87:b2"/>
Jan 20 14:43:13 compute-1 nova_compute[225855]:       <model type="virtio"/>
Jan 20 14:43:13 compute-1 nova_compute[225855]:       <driver name="vhost" rx_queue_size="512"/>
Jan 20 14:43:13 compute-1 nova_compute[225855]:       <mtu size="1442"/>
Jan 20 14:43:13 compute-1 nova_compute[225855]:       <target dev="tap221681ce-86"/>
Jan 20 14:43:13 compute-1 nova_compute[225855]:     </interface>
Jan 20 14:43:13 compute-1 nova_compute[225855]:     <serial type="pty">
Jan 20 14:43:13 compute-1 nova_compute[225855]:       <log file="/var/lib/nova/instances/cd7f8cdc-1467-4d67-b60b-dd2ee8707b09/console.log" append="off"/>
Jan 20 14:43:13 compute-1 nova_compute[225855]:     </serial>
Jan 20 14:43:13 compute-1 nova_compute[225855]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 20 14:43:13 compute-1 nova_compute[225855]:     <video>
Jan 20 14:43:13 compute-1 nova_compute[225855]:       <model type="virtio"/>
Jan 20 14:43:13 compute-1 nova_compute[225855]:     </video>
Jan 20 14:43:13 compute-1 nova_compute[225855]:     <input type="tablet" bus="usb"/>
Jan 20 14:43:13 compute-1 nova_compute[225855]:     <rng model="virtio">
Jan 20 14:43:13 compute-1 nova_compute[225855]:       <backend model="random">/dev/urandom</backend>
Jan 20 14:43:13 compute-1 nova_compute[225855]:     </rng>
Jan 20 14:43:13 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root"/>
Jan 20 14:43:13 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:43:13 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:43:13 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:43:13 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:43:13 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:43:13 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:43:13 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:43:13 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:43:13 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:43:13 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:43:13 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:43:13 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:43:13 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:43:13 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:43:13 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:43:13 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:43:13 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:43:13 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:43:13 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:43:13 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:43:13 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:43:13 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:43:13 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:43:13 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:43:13 compute-1 nova_compute[225855]:     <controller type="usb" index="0"/>
Jan 20 14:43:13 compute-1 nova_compute[225855]:     <memballoon model="virtio">
Jan 20 14:43:13 compute-1 nova_compute[225855]:       <stats period="10"/>
Jan 20 14:43:13 compute-1 nova_compute[225855]:     </memballoon>
Jan 20 14:43:13 compute-1 nova_compute[225855]:   </devices>
Jan 20 14:43:13 compute-1 nova_compute[225855]: </domain>
Jan 20 14:43:13 compute-1 nova_compute[225855]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 20 14:43:13 compute-1 nova_compute[225855]: 2026-01-20 14:43:13.064 225859 DEBUG nova.compute.manager [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] [instance: cd7f8cdc-1467-4d67-b60b-dd2ee8707b09] Preparing to wait for external event network-vif-plugged-221681ce-86ed-410e-8ca2-52951142fede prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 20 14:43:13 compute-1 nova_compute[225855]: 2026-01-20 14:43:13.064 225859 DEBUG oslo_concurrency.lockutils [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Acquiring lock "cd7f8cdc-1467-4d67-b60b-dd2ee8707b09-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:43:13 compute-1 nova_compute[225855]: 2026-01-20 14:43:13.065 225859 DEBUG oslo_concurrency.lockutils [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Lock "cd7f8cdc-1467-4d67-b60b-dd2ee8707b09-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:43:13 compute-1 nova_compute[225855]: 2026-01-20 14:43:13.065 225859 DEBUG oslo_concurrency.lockutils [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Lock "cd7f8cdc-1467-4d67-b60b-dd2ee8707b09-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:43:13 compute-1 nova_compute[225855]: 2026-01-20 14:43:13.066 225859 DEBUG nova.virt.libvirt.vif [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T14:43:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-MultipleCreateTestJSON-server-704404998',display_name='tempest-MultipleCreateTestJSON-server-704404998-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-multiplecreatetestjson-server-704404998-1',id=80,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a3e022a35f604df2bbc885e498b1e206',ramdisk_id='',reservation_id='r-0enj9v31',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-MultipleCreateTestJSON-164394330',owner_user_name='tempest-MultipleCreateTestJSON-164394330-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T14:43:05Z,user_data=None,user_id='aa2e7857e85f483eb0d162e2ee8c2e2c',uuid=cd7f8cdc-1467-4d67-b60b-dd2ee8707b09,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "221681ce-86ed-410e-8ca2-52951142fede", "address": "fa:16:3e:eb:87:b2", "network": {"id": "3e260ad9-fcf1-432b-b71b-b943d4249b65", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1425882684-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3e022a35f604df2bbc885e498b1e206", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap221681ce-86", "ovs_interfaceid": "221681ce-86ed-410e-8ca2-52951142fede", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 20 14:43:13 compute-1 nova_compute[225855]: 2026-01-20 14:43:13.067 225859 DEBUG nova.network.os_vif_util [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Converting VIF {"id": "221681ce-86ed-410e-8ca2-52951142fede", "address": "fa:16:3e:eb:87:b2", "network": {"id": "3e260ad9-fcf1-432b-b71b-b943d4249b65", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1425882684-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3e022a35f604df2bbc885e498b1e206", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap221681ce-86", "ovs_interfaceid": "221681ce-86ed-410e-8ca2-52951142fede", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:43:13 compute-1 nova_compute[225855]: 2026-01-20 14:43:13.068 225859 DEBUG nova.network.os_vif_util [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:eb:87:b2,bridge_name='br-int',has_traffic_filtering=True,id=221681ce-86ed-410e-8ca2-52951142fede,network=Network(3e260ad9-fcf1-432b-b71b-b943d4249b65),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap221681ce-86') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:43:13 compute-1 nova_compute[225855]: 2026-01-20 14:43:13.068 225859 DEBUG os_vif [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:eb:87:b2,bridge_name='br-int',has_traffic_filtering=True,id=221681ce-86ed-410e-8ca2-52951142fede,network=Network(3e260ad9-fcf1-432b-b71b-b943d4249b65),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap221681ce-86') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 20 14:43:13 compute-1 nova_compute[225855]: 2026-01-20 14:43:13.069 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:43:13 compute-1 nova_compute[225855]: 2026-01-20 14:43:13.070 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:43:13 compute-1 nova_compute[225855]: 2026-01-20 14:43:13.071 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:43:13 compute-1 nova_compute[225855]: 2026-01-20 14:43:13.074 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:43:13 compute-1 nova_compute[225855]: 2026-01-20 14:43:13.075 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap221681ce-86, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:43:13 compute-1 nova_compute[225855]: 2026-01-20 14:43:13.075 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap221681ce-86, col_values=(('external_ids', {'iface-id': '221681ce-86ed-410e-8ca2-52951142fede', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:eb:87:b2', 'vm-uuid': 'cd7f8cdc-1467-4d67-b60b-dd2ee8707b09'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:43:13 compute-1 nova_compute[225855]: 2026-01-20 14:43:13.077 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:43:13 compute-1 NetworkManager[49104]: <info>  [1768920193.0781] manager: (tap221681ce-86): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/121)
Jan 20 14:43:13 compute-1 nova_compute[225855]: 2026-01-20 14:43:13.080 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 14:43:13 compute-1 nova_compute[225855]: 2026-01-20 14:43:13.083 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:43:13 compute-1 nova_compute[225855]: 2026-01-20 14:43:13.085 225859 INFO os_vif [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:eb:87:b2,bridge_name='br-int',has_traffic_filtering=True,id=221681ce-86ed-410e-8ca2-52951142fede,network=Network(3e260ad9-fcf1-432b-b71b-b943d4249b65),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap221681ce-86')
Jan 20 14:43:13 compute-1 nova_compute[225855]: 2026-01-20 14:43:13.134 225859 DEBUG nova.virt.libvirt.driver [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:43:13 compute-1 nova_compute[225855]: 2026-01-20 14:43:13.134 225859 DEBUG nova.virt.libvirt.driver [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:43:13 compute-1 nova_compute[225855]: 2026-01-20 14:43:13.135 225859 DEBUG nova.virt.libvirt.driver [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] No VIF found with MAC fa:16:3e:eb:87:b2, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 20 14:43:13 compute-1 nova_compute[225855]: 2026-01-20 14:43:13.135 225859 INFO nova.virt.libvirt.driver [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] [instance: cd7f8cdc-1467-4d67-b60b-dd2ee8707b09] Using config drive
Jan 20 14:43:13 compute-1 nova_compute[225855]: 2026-01-20 14:43:13.163 225859 DEBUG nova.storage.rbd_utils [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] rbd image cd7f8cdc-1467-4d67-b60b-dd2ee8707b09_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:43:13 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:43:13 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:43:13 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:43:13.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:43:13 compute-1 nova_compute[225855]: 2026-01-20 14:43:13.626 225859 DEBUG nova.network.neutron [req-d6d7e4f8-914b-4f70-af34-9910e0e37aaf req-840959d0-7393-4732-9287-c4c366e266a5 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: cd7f8cdc-1467-4d67-b60b-dd2ee8707b09] Updated VIF entry in instance network info cache for port 221681ce-86ed-410e-8ca2-52951142fede. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 14:43:13 compute-1 nova_compute[225855]: 2026-01-20 14:43:13.627 225859 DEBUG nova.network.neutron [req-d6d7e4f8-914b-4f70-af34-9910e0e37aaf req-840959d0-7393-4732-9287-c4c366e266a5 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: cd7f8cdc-1467-4d67-b60b-dd2ee8707b09] Updating instance_info_cache with network_info: [{"id": "221681ce-86ed-410e-8ca2-52951142fede", "address": "fa:16:3e:eb:87:b2", "network": {"id": "3e260ad9-fcf1-432b-b71b-b943d4249b65", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1425882684-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3e022a35f604df2bbc885e498b1e206", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap221681ce-86", "ovs_interfaceid": "221681ce-86ed-410e-8ca2-52951142fede", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:43:13 compute-1 nova_compute[225855]: 2026-01-20 14:43:13.659 225859 DEBUG oslo_concurrency.lockutils [req-d6d7e4f8-914b-4f70-af34-9910e0e37aaf req-840959d0-7393-4732-9287-c4c366e266a5 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-cd7f8cdc-1467-4d67-b60b-dd2ee8707b09" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:43:13 compute-1 nova_compute[225855]: 2026-01-20 14:43:13.767 225859 INFO nova.virt.libvirt.driver [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] [instance: cd7f8cdc-1467-4d67-b60b-dd2ee8707b09] Creating config drive at /var/lib/nova/instances/cd7f8cdc-1467-4d67-b60b-dd2ee8707b09/disk.config
Jan 20 14:43:13 compute-1 nova_compute[225855]: 2026-01-20 14:43:13.772 225859 DEBUG oslo_concurrency.processutils [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/cd7f8cdc-1467-4d67-b60b-dd2ee8707b09/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_2ciybvh execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:43:13 compute-1 nova_compute[225855]: 2026-01-20 14:43:13.918 225859 DEBUG oslo_concurrency.processutils [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/cd7f8cdc-1467-4d67-b60b-dd2ee8707b09/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_2ciybvh" returned: 0 in 0.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:43:14 compute-1 nova_compute[225855]: 2026-01-20 14:43:14.000 225859 DEBUG nova.storage.rbd_utils [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] rbd image cd7f8cdc-1467-4d67-b60b-dd2ee8707b09_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:43:14 compute-1 nova_compute[225855]: 2026-01-20 14:43:14.004 225859 DEBUG oslo_concurrency.processutils [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/cd7f8cdc-1467-4d67-b60b-dd2ee8707b09/disk.config cd7f8cdc-1467-4d67-b60b-dd2ee8707b09_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:43:14 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/1554951386' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:43:14 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/1120863099' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 14:43:14 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/1120863099' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 14:43:14 compute-1 ceph-mon[81775]: pgmap v1646: 321 pgs: 321 active+clean; 260 MiB data, 795 MiB used, 20 GiB / 21 GiB avail; 636 KiB/s rd, 5.3 MiB/s wr, 205 op/s
Jan 20 14:43:14 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:43:14 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:43:14 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:43:14.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:43:14 compute-1 nova_compute[225855]: 2026-01-20 14:43:14.273 225859 DEBUG oslo_concurrency.processutils [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/cd7f8cdc-1467-4d67-b60b-dd2ee8707b09/disk.config cd7f8cdc-1467-4d67-b60b-dd2ee8707b09_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.269s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:43:14 compute-1 nova_compute[225855]: 2026-01-20 14:43:14.274 225859 INFO nova.virt.libvirt.driver [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] [instance: cd7f8cdc-1467-4d67-b60b-dd2ee8707b09] Deleting local config drive /var/lib/nova/instances/cd7f8cdc-1467-4d67-b60b-dd2ee8707b09/disk.config because it was imported into RBD.
Jan 20 14:43:14 compute-1 kernel: tap221681ce-86: entered promiscuous mode
Jan 20 14:43:14 compute-1 NetworkManager[49104]: <info>  [1768920194.3202] manager: (tap221681ce-86): new Tun device (/org/freedesktop/NetworkManager/Devices/122)
Jan 20 14:43:14 compute-1 ovn_controller[130490]: 2026-01-20T14:43:14Z|00282|binding|INFO|Claiming lport 221681ce-86ed-410e-8ca2-52951142fede for this chassis.
Jan 20 14:43:14 compute-1 ovn_controller[130490]: 2026-01-20T14:43:14Z|00283|binding|INFO|221681ce-86ed-410e-8ca2-52951142fede: Claiming fa:16:3e:eb:87:b2 10.100.0.10
Jan 20 14:43:14 compute-1 nova_compute[225855]: 2026-01-20 14:43:14.320 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:43:14 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:43:14.332 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:eb:87:b2 10.100.0.10'], port_security=['fa:16:3e:eb:87:b2 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'cd7f8cdc-1467-4d67-b60b-dd2ee8707b09', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3e260ad9-fcf1-432b-b71b-b943d4249b65', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a3e022a35f604df2bbc885e498b1e206', 'neutron:revision_number': '2', 'neutron:security_group_ids': '885819b7-5060-4b73-ad54-3f31f821195c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=89e607b1-9e39-47f0-8180-8aaef3a2a0e9, chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], logical_port=221681ce-86ed-410e-8ca2-52951142fede) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:43:14 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:43:14.333 140354 INFO neutron.agent.ovn.metadata.agent [-] Port 221681ce-86ed-410e-8ca2-52951142fede in datapath 3e260ad9-fcf1-432b-b71b-b943d4249b65 bound to our chassis
Jan 20 14:43:14 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:43:14.334 140354 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3e260ad9-fcf1-432b-b71b-b943d4249b65
Jan 20 14:43:14 compute-1 ovn_controller[130490]: 2026-01-20T14:43:14Z|00284|binding|INFO|Setting lport 221681ce-86ed-410e-8ca2-52951142fede ovn-installed in OVS
Jan 20 14:43:14 compute-1 ovn_controller[130490]: 2026-01-20T14:43:14Z|00285|binding|INFO|Setting lport 221681ce-86ed-410e-8ca2-52951142fede up in Southbound
Jan 20 14:43:14 compute-1 nova_compute[225855]: 2026-01-20 14:43:14.340 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:43:14 compute-1 nova_compute[225855]: 2026-01-20 14:43:14.343 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:43:14 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:43:14.346 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[4b261033-606a-4277-bfcd-6638aa951dbb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:43:14 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:43:14.347 140354 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap3e260ad9-f1 in ovnmeta-3e260ad9-fcf1-432b-b71b-b943d4249b65 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 20 14:43:14 compute-1 systemd-udevd[257181]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 14:43:14 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:43:14.349 229707 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap3e260ad9-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 20 14:43:14 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:43:14.349 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[8d3ac578-6f61-4560-b2d6-23251ed410d1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:43:14 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:43:14.350 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[1916203e-0401-40c8-8c17-503bbc51bf49]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:43:14 compute-1 systemd-machined[194361]: New machine qemu-34-instance-00000050.
Jan 20 14:43:14 compute-1 NetworkManager[49104]: <info>  [1768920194.3601] device (tap221681ce-86): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 20 14:43:14 compute-1 NetworkManager[49104]: <info>  [1768920194.3608] device (tap221681ce-86): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 20 14:43:14 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:43:14.362 140466 DEBUG oslo.privsep.daemon [-] privsep: reply[977890a5-880e-45b2-bb9f-95d5598c8093]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:43:14 compute-1 systemd[1]: Started Virtual Machine qemu-34-instance-00000050.
Jan 20 14:43:14 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:43:14.388 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[d66a698a-ca8a-47f6-9a1f-dfef8facdba1]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:43:14 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:43:14.428 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[6c7cc9d9-1ded-4ca4-a096-1af2a3b7beaa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:43:14 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:43:14.436 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[d7abbb25-b1eb-4620-af91-475bf71ae319]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:43:14 compute-1 NetworkManager[49104]: <info>  [1768920194.4378] manager: (tap3e260ad9-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/123)
Jan 20 14:43:14 compute-1 systemd-udevd[257185]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 14:43:14 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:43:14.474 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[611fe7f2-8524-4944-9306-ba7480e06c67]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:43:14 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:43:14.477 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[95c29bc0-4f5b-4155-827e-8e7e8da9a76e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:43:14 compute-1 NetworkManager[49104]: <info>  [1768920194.5045] device (tap3e260ad9-f0): carrier: link connected
Jan 20 14:43:14 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:43:14.509 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[69b6d08c-db6d-40d5-b02b-bd1fa6fa15a2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:43:14 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:43:14.524 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[0b941170-f71f-4e82-8fb1-4f74dc9c5dec]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3e260ad9-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:88:13:4a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 76], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 523343, 'reachable_time': 40244, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 257214, 'error': None, 'target': 'ovnmeta-3e260ad9-fcf1-432b-b71b-b943d4249b65', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:43:14 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:43:14.536 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[021362b0-3b13-46b2-8bcf-3d9bd0c832dc]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe88:134a'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 523343, 'tstamp': 523343}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 257215, 'error': None, 'target': 'ovnmeta-3e260ad9-fcf1-432b-b71b-b943d4249b65', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:43:14 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:43:14.551 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[c73fe823-de3d-4106-8785-e2786622c254]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3e260ad9-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:88:13:4a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 76], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 523343, 'reachable_time': 40244, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 257216, 'error': None, 'target': 'ovnmeta-3e260ad9-fcf1-432b-b71b-b943d4249b65', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:43:14 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:43:14.582 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[08dea712-17fe-44f4-bc33-9b5cfe6b8776]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:43:14 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:43:14.640 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[286fbc30-7881-4459-83e1-601e2d48c0e2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:43:14 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:43:14.642 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3e260ad9-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:43:14 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:43:14.642 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:43:14 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:43:14.643 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3e260ad9-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:43:14 compute-1 NetworkManager[49104]: <info>  [1768920194.6452] manager: (tap3e260ad9-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/124)
Jan 20 14:43:14 compute-1 nova_compute[225855]: 2026-01-20 14:43:14.644 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:43:14 compute-1 kernel: tap3e260ad9-f0: entered promiscuous mode
Jan 20 14:43:14 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:43:14.649 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3e260ad9-f0, col_values=(('external_ids', {'iface-id': '2b7c295d-f074-4cfb-aca0-08946126ddbc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:43:14 compute-1 ovn_controller[130490]: 2026-01-20T14:43:14Z|00286|binding|INFO|Releasing lport 2b7c295d-f074-4cfb-aca0-08946126ddbc from this chassis (sb_readonly=0)
Jan 20 14:43:14 compute-1 nova_compute[225855]: 2026-01-20 14:43:14.665 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:43:14 compute-1 nova_compute[225855]: 2026-01-20 14:43:14.667 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:43:14 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:43:14.668 140354 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/3e260ad9-fcf1-432b-b71b-b943d4249b65.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/3e260ad9-fcf1-432b-b71b-b943d4249b65.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 20 14:43:14 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:43:14.669 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[74af6530-6e4c-4c1a-b301-2e9465f4d6d9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:43:14 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:43:14.670 140354 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 20 14:43:14 compute-1 ovn_metadata_agent[140349]: global
Jan 20 14:43:14 compute-1 ovn_metadata_agent[140349]:     log         /dev/log local0 debug
Jan 20 14:43:14 compute-1 ovn_metadata_agent[140349]:     log-tag     haproxy-metadata-proxy-3e260ad9-fcf1-432b-b71b-b943d4249b65
Jan 20 14:43:14 compute-1 ovn_metadata_agent[140349]:     user        root
Jan 20 14:43:14 compute-1 ovn_metadata_agent[140349]:     group       root
Jan 20 14:43:14 compute-1 ovn_metadata_agent[140349]:     maxconn     1024
Jan 20 14:43:14 compute-1 ovn_metadata_agent[140349]:     pidfile     /var/lib/neutron/external/pids/3e260ad9-fcf1-432b-b71b-b943d4249b65.pid.haproxy
Jan 20 14:43:14 compute-1 ovn_metadata_agent[140349]:     daemon
Jan 20 14:43:14 compute-1 ovn_metadata_agent[140349]: 
Jan 20 14:43:14 compute-1 ovn_metadata_agent[140349]: defaults
Jan 20 14:43:14 compute-1 ovn_metadata_agent[140349]:     log global
Jan 20 14:43:14 compute-1 ovn_metadata_agent[140349]:     mode http
Jan 20 14:43:14 compute-1 ovn_metadata_agent[140349]:     option httplog
Jan 20 14:43:14 compute-1 ovn_metadata_agent[140349]:     option dontlognull
Jan 20 14:43:14 compute-1 ovn_metadata_agent[140349]:     option http-server-close
Jan 20 14:43:14 compute-1 ovn_metadata_agent[140349]:     option forwardfor
Jan 20 14:43:14 compute-1 ovn_metadata_agent[140349]:     retries                 3
Jan 20 14:43:14 compute-1 ovn_metadata_agent[140349]:     timeout http-request    30s
Jan 20 14:43:14 compute-1 ovn_metadata_agent[140349]:     timeout connect         30s
Jan 20 14:43:14 compute-1 ovn_metadata_agent[140349]:     timeout client          32s
Jan 20 14:43:14 compute-1 ovn_metadata_agent[140349]:     timeout server          32s
Jan 20 14:43:14 compute-1 ovn_metadata_agent[140349]:     timeout http-keep-alive 30s
Jan 20 14:43:14 compute-1 ovn_metadata_agent[140349]: 
Jan 20 14:43:14 compute-1 ovn_metadata_agent[140349]: 
Jan 20 14:43:14 compute-1 ovn_metadata_agent[140349]: listen listener
Jan 20 14:43:14 compute-1 ovn_metadata_agent[140349]:     bind 169.254.169.254:80
Jan 20 14:43:14 compute-1 ovn_metadata_agent[140349]:     server metadata /var/lib/neutron/metadata_proxy
Jan 20 14:43:14 compute-1 ovn_metadata_agent[140349]:     http-request add-header X-OVN-Network-ID 3e260ad9-fcf1-432b-b71b-b943d4249b65
Jan 20 14:43:14 compute-1 ovn_metadata_agent[140349]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 20 14:43:14 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:43:14.672 140354 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-3e260ad9-fcf1-432b-b71b-b943d4249b65', 'env', 'PROCESS_TAG=haproxy-3e260ad9-fcf1-432b-b71b-b943d4249b65', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/3e260ad9-fcf1-432b-b71b-b943d4249b65.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 20 14:43:14 compute-1 nova_compute[225855]: 2026-01-20 14:43:14.704 225859 DEBUG nova.compute.manager [req-574f4a1f-765d-44fa-8b09-01c1c5be9a03 req-976d640a-57e5-4146-9da6-ecadb28276e3 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: cd7f8cdc-1467-4d67-b60b-dd2ee8707b09] Received event network-vif-plugged-221681ce-86ed-410e-8ca2-52951142fede external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:43:14 compute-1 nova_compute[225855]: 2026-01-20 14:43:14.704 225859 DEBUG oslo_concurrency.lockutils [req-574f4a1f-765d-44fa-8b09-01c1c5be9a03 req-976d640a-57e5-4146-9da6-ecadb28276e3 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "cd7f8cdc-1467-4d67-b60b-dd2ee8707b09-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:43:14 compute-1 nova_compute[225855]: 2026-01-20 14:43:14.704 225859 DEBUG oslo_concurrency.lockutils [req-574f4a1f-765d-44fa-8b09-01c1c5be9a03 req-976d640a-57e5-4146-9da6-ecadb28276e3 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "cd7f8cdc-1467-4d67-b60b-dd2ee8707b09-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:43:14 compute-1 nova_compute[225855]: 2026-01-20 14:43:14.705 225859 DEBUG oslo_concurrency.lockutils [req-574f4a1f-765d-44fa-8b09-01c1c5be9a03 req-976d640a-57e5-4146-9da6-ecadb28276e3 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "cd7f8cdc-1467-4d67-b60b-dd2ee8707b09-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:43:14 compute-1 nova_compute[225855]: 2026-01-20 14:43:14.705 225859 DEBUG nova.compute.manager [req-574f4a1f-765d-44fa-8b09-01c1c5be9a03 req-976d640a-57e5-4146-9da6-ecadb28276e3 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: cd7f8cdc-1467-4d67-b60b-dd2ee8707b09] Processing event network-vif-plugged-221681ce-86ed-410e-8ca2-52951142fede _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 20 14:43:14 compute-1 nova_compute[225855]: 2026-01-20 14:43:14.907 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768920194.9065466, cd7f8cdc-1467-4d67-b60b-dd2ee8707b09 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:43:14 compute-1 nova_compute[225855]: 2026-01-20 14:43:14.907 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: cd7f8cdc-1467-4d67-b60b-dd2ee8707b09] VM Started (Lifecycle Event)
Jan 20 14:43:14 compute-1 nova_compute[225855]: 2026-01-20 14:43:14.910 225859 DEBUG nova.compute.manager [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] [instance: cd7f8cdc-1467-4d67-b60b-dd2ee8707b09] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 20 14:43:14 compute-1 nova_compute[225855]: 2026-01-20 14:43:14.914 225859 DEBUG nova.virt.libvirt.driver [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] [instance: cd7f8cdc-1467-4d67-b60b-dd2ee8707b09] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 20 14:43:14 compute-1 nova_compute[225855]: 2026-01-20 14:43:14.917 225859 INFO nova.virt.libvirt.driver [-] [instance: cd7f8cdc-1467-4d67-b60b-dd2ee8707b09] Instance spawned successfully.
Jan 20 14:43:14 compute-1 nova_compute[225855]: 2026-01-20 14:43:14.918 225859 DEBUG nova.virt.libvirt.driver [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] [instance: cd7f8cdc-1467-4d67-b60b-dd2ee8707b09] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 20 14:43:14 compute-1 nova_compute[225855]: 2026-01-20 14:43:14.931 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: cd7f8cdc-1467-4d67-b60b-dd2ee8707b09] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:43:14 compute-1 nova_compute[225855]: 2026-01-20 14:43:14.936 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: cd7f8cdc-1467-4d67-b60b-dd2ee8707b09] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:43:14 compute-1 nova_compute[225855]: 2026-01-20 14:43:14.940 225859 DEBUG nova.virt.libvirt.driver [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] [instance: cd7f8cdc-1467-4d67-b60b-dd2ee8707b09] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:43:14 compute-1 nova_compute[225855]: 2026-01-20 14:43:14.941 225859 DEBUG nova.virt.libvirt.driver [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] [instance: cd7f8cdc-1467-4d67-b60b-dd2ee8707b09] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:43:14 compute-1 nova_compute[225855]: 2026-01-20 14:43:14.941 225859 DEBUG nova.virt.libvirt.driver [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] [instance: cd7f8cdc-1467-4d67-b60b-dd2ee8707b09] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:43:14 compute-1 nova_compute[225855]: 2026-01-20 14:43:14.942 225859 DEBUG nova.virt.libvirt.driver [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] [instance: cd7f8cdc-1467-4d67-b60b-dd2ee8707b09] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:43:14 compute-1 nova_compute[225855]: 2026-01-20 14:43:14.942 225859 DEBUG nova.virt.libvirt.driver [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] [instance: cd7f8cdc-1467-4d67-b60b-dd2ee8707b09] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:43:14 compute-1 nova_compute[225855]: 2026-01-20 14:43:14.943 225859 DEBUG nova.virt.libvirt.driver [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] [instance: cd7f8cdc-1467-4d67-b60b-dd2ee8707b09] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:43:14 compute-1 nova_compute[225855]: 2026-01-20 14:43:14.953 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: cd7f8cdc-1467-4d67-b60b-dd2ee8707b09] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 14:43:14 compute-1 nova_compute[225855]: 2026-01-20 14:43:14.954 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768920194.9067376, cd7f8cdc-1467-4d67-b60b-dd2ee8707b09 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:43:14 compute-1 nova_compute[225855]: 2026-01-20 14:43:14.954 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: cd7f8cdc-1467-4d67-b60b-dd2ee8707b09] VM Paused (Lifecycle Event)
Jan 20 14:43:14 compute-1 nova_compute[225855]: 2026-01-20 14:43:14.973 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: cd7f8cdc-1467-4d67-b60b-dd2ee8707b09] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:43:14 compute-1 nova_compute[225855]: 2026-01-20 14:43:14.977 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768920194.9127798, cd7f8cdc-1467-4d67-b60b-dd2ee8707b09 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:43:14 compute-1 nova_compute[225855]: 2026-01-20 14:43:14.977 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: cd7f8cdc-1467-4d67-b60b-dd2ee8707b09] VM Resumed (Lifecycle Event)
Jan 20 14:43:14 compute-1 nova_compute[225855]: 2026-01-20 14:43:14.995 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: cd7f8cdc-1467-4d67-b60b-dd2ee8707b09] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:43:14 compute-1 nova_compute[225855]: 2026-01-20 14:43:14.998 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: cd7f8cdc-1467-4d67-b60b-dd2ee8707b09] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:43:15 compute-1 nova_compute[225855]: 2026-01-20 14:43:15.004 225859 INFO nova.compute.manager [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] [instance: cd7f8cdc-1467-4d67-b60b-dd2ee8707b09] Took 9.39 seconds to spawn the instance on the hypervisor.
Jan 20 14:43:15 compute-1 nova_compute[225855]: 2026-01-20 14:43:15.004 225859 DEBUG nova.compute.manager [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] [instance: cd7f8cdc-1467-4d67-b60b-dd2ee8707b09] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:43:15 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:43:15.028 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=26, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '12:bb:42', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '06:92:24:f7:15:56'}, ipsec=False) old=SB_Global(nb_cfg=25) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:43:15 compute-1 nova_compute[225855]: 2026-01-20 14:43:15.028 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:43:15 compute-1 podman[257288]: 2026-01-20 14:43:15.031818511 +0000 UTC m=+0.051844519 container create c6d47773a7b5e703ee98e9aa55358d1326e7e172882283f62037bdb5729618d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3e260ad9-fcf1-432b-b71b-b943d4249b65, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3)
Jan 20 14:43:15 compute-1 nova_compute[225855]: 2026-01-20 14:43:15.033 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: cd7f8cdc-1467-4d67-b60b-dd2ee8707b09] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 14:43:15 compute-1 nova_compute[225855]: 2026-01-20 14:43:15.070 225859 INFO nova.compute.manager [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] [instance: cd7f8cdc-1467-4d67-b60b-dd2ee8707b09] Took 10.45 seconds to build instance.
Jan 20 14:43:15 compute-1 systemd[1]: Started libpod-conmon-c6d47773a7b5e703ee98e9aa55358d1326e7e172882283f62037bdb5729618d5.scope.
Jan 20 14:43:15 compute-1 nova_compute[225855]: 2026-01-20 14:43:15.085 225859 DEBUG oslo_concurrency.lockutils [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Lock "cd7f8cdc-1467-4d67-b60b-dd2ee8707b09" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.536s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:43:15 compute-1 systemd[1]: Started libcrun container.
Jan 20 14:43:15 compute-1 podman[257288]: 2026-01-20 14:43:15.003993153 +0000 UTC m=+0.024019181 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 20 14:43:15 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90a6cff7ad5f6f51949488ae9006978cd1a88ea356309dfcbb35378ee1a81275/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 20 14:43:15 compute-1 podman[257288]: 2026-01-20 14:43:15.114249646 +0000 UTC m=+0.134275654 container init c6d47773a7b5e703ee98e9aa55358d1326e7e172882283f62037bdb5729618d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3e260ad9-fcf1-432b-b71b-b943d4249b65, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 20 14:43:15 compute-1 podman[257288]: 2026-01-20 14:43:15.120629527 +0000 UTC m=+0.140655535 container start c6d47773a7b5e703ee98e9aa55358d1326e7e172882283f62037bdb5729618d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3e260ad9-fcf1-432b-b71b-b943d4249b65, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 20 14:43:15 compute-1 neutron-haproxy-ovnmeta-3e260ad9-fcf1-432b-b71b-b943d4249b65[257303]: [NOTICE]   (257307) : New worker (257309) forked
Jan 20 14:43:15 compute-1 neutron-haproxy-ovnmeta-3e260ad9-fcf1-432b-b71b-b943d4249b65[257303]: [NOTICE]   (257307) : Loading success.
Jan 20 14:43:15 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/123347246' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:43:15 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:43:15.197 140354 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 20 14:43:15 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:43:15 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:43:15 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:43:15.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:43:15 compute-1 nova_compute[225855]: 2026-01-20 14:43:15.302 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:43:16 compute-1 ceph-mon[81775]: pgmap v1647: 321 pgs: 321 active+clean; 185 MiB data, 739 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 4.6 MiB/s wr, 273 op/s
Jan 20 14:43:16 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:43:16 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:43:16 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:43:16.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:43:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:43:16.404 140354 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:43:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:43:16.404 140354 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:43:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:43:16.405 140354 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:43:16 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:43:16 compute-1 nova_compute[225855]: 2026-01-20 14:43:16.857 225859 DEBUG nova.compute.manager [req-1fcc8e9a-5e24-4c69-8176-77db6ff7bba8 req-3d31ebe3-3855-4c23-95de-edbd17dc3bcf 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: cd7f8cdc-1467-4d67-b60b-dd2ee8707b09] Received event network-vif-plugged-221681ce-86ed-410e-8ca2-52951142fede external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:43:16 compute-1 nova_compute[225855]: 2026-01-20 14:43:16.857 225859 DEBUG oslo_concurrency.lockutils [req-1fcc8e9a-5e24-4c69-8176-77db6ff7bba8 req-3d31ebe3-3855-4c23-95de-edbd17dc3bcf 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "cd7f8cdc-1467-4d67-b60b-dd2ee8707b09-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:43:16 compute-1 nova_compute[225855]: 2026-01-20 14:43:16.858 225859 DEBUG oslo_concurrency.lockutils [req-1fcc8e9a-5e24-4c69-8176-77db6ff7bba8 req-3d31ebe3-3855-4c23-95de-edbd17dc3bcf 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "cd7f8cdc-1467-4d67-b60b-dd2ee8707b09-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:43:16 compute-1 nova_compute[225855]: 2026-01-20 14:43:16.858 225859 DEBUG oslo_concurrency.lockutils [req-1fcc8e9a-5e24-4c69-8176-77db6ff7bba8 req-3d31ebe3-3855-4c23-95de-edbd17dc3bcf 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "cd7f8cdc-1467-4d67-b60b-dd2ee8707b09-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:43:16 compute-1 nova_compute[225855]: 2026-01-20 14:43:16.858 225859 DEBUG nova.compute.manager [req-1fcc8e9a-5e24-4c69-8176-77db6ff7bba8 req-3d31ebe3-3855-4c23-95de-edbd17dc3bcf 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: cd7f8cdc-1467-4d67-b60b-dd2ee8707b09] No waiting events found dispatching network-vif-plugged-221681ce-86ed-410e-8ca2-52951142fede pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:43:16 compute-1 nova_compute[225855]: 2026-01-20 14:43:16.859 225859 WARNING nova.compute.manager [req-1fcc8e9a-5e24-4c69-8176-77db6ff7bba8 req-3d31ebe3-3855-4c23-95de-edbd17dc3bcf 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: cd7f8cdc-1467-4d67-b60b-dd2ee8707b09] Received unexpected event network-vif-plugged-221681ce-86ed-410e-8ca2-52951142fede for instance with vm_state active and task_state None.
Jan 20 14:43:17 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:43:17 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:43:17 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:43:17.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:43:17 compute-1 ceph-mon[81775]: pgmap v1648: 321 pgs: 321 active+clean; 141 MiB data, 718 MiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 3.6 MiB/s wr, 327 op/s
Jan 20 14:43:18 compute-1 nova_compute[225855]: 2026-01-20 14:43:18.078 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:43:18 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:43:18.199 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5ffd4ac3-9266-4927-98ad-20a17782c725, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '26'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:43:18 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:43:18 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:43:18 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:43:18.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:43:18 compute-1 nova_compute[225855]: 2026-01-20 14:43:18.258 225859 DEBUG oslo_concurrency.lockutils [None req-e46dc073-839f-4cc0-bb74-4d40bbb9f890 aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Acquiring lock "cd7f8cdc-1467-4d67-b60b-dd2ee8707b09" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:43:18 compute-1 nova_compute[225855]: 2026-01-20 14:43:18.258 225859 DEBUG oslo_concurrency.lockutils [None req-e46dc073-839f-4cc0-bb74-4d40bbb9f890 aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Lock "cd7f8cdc-1467-4d67-b60b-dd2ee8707b09" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:43:18 compute-1 nova_compute[225855]: 2026-01-20 14:43:18.258 225859 DEBUG oslo_concurrency.lockutils [None req-e46dc073-839f-4cc0-bb74-4d40bbb9f890 aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Acquiring lock "cd7f8cdc-1467-4d67-b60b-dd2ee8707b09-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:43:18 compute-1 nova_compute[225855]: 2026-01-20 14:43:18.259 225859 DEBUG oslo_concurrency.lockutils [None req-e46dc073-839f-4cc0-bb74-4d40bbb9f890 aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Lock "cd7f8cdc-1467-4d67-b60b-dd2ee8707b09-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:43:18 compute-1 nova_compute[225855]: 2026-01-20 14:43:18.259 225859 DEBUG oslo_concurrency.lockutils [None req-e46dc073-839f-4cc0-bb74-4d40bbb9f890 aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Lock "cd7f8cdc-1467-4d67-b60b-dd2ee8707b09-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:43:18 compute-1 nova_compute[225855]: 2026-01-20 14:43:18.260 225859 INFO nova.compute.manager [None req-e46dc073-839f-4cc0-bb74-4d40bbb9f890 aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] [instance: cd7f8cdc-1467-4d67-b60b-dd2ee8707b09] Terminating instance
Jan 20 14:43:18 compute-1 nova_compute[225855]: 2026-01-20 14:43:18.261 225859 DEBUG nova.compute.manager [None req-e46dc073-839f-4cc0-bb74-4d40bbb9f890 aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] [instance: cd7f8cdc-1467-4d67-b60b-dd2ee8707b09] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 20 14:43:18 compute-1 kernel: tap221681ce-86 (unregistering): left promiscuous mode
Jan 20 14:43:18 compute-1 NetworkManager[49104]: <info>  [1768920198.3024] device (tap221681ce-86): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 20 14:43:18 compute-1 nova_compute[225855]: 2026-01-20 14:43:18.311 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:43:18 compute-1 ovn_controller[130490]: 2026-01-20T14:43:18Z|00287|binding|INFO|Releasing lport 221681ce-86ed-410e-8ca2-52951142fede from this chassis (sb_readonly=0)
Jan 20 14:43:18 compute-1 ovn_controller[130490]: 2026-01-20T14:43:18Z|00288|binding|INFO|Setting lport 221681ce-86ed-410e-8ca2-52951142fede down in Southbound
Jan 20 14:43:18 compute-1 ovn_controller[130490]: 2026-01-20T14:43:18Z|00289|binding|INFO|Removing iface tap221681ce-86 ovn-installed in OVS
Jan 20 14:43:18 compute-1 nova_compute[225855]: 2026-01-20 14:43:18.314 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:43:18 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:43:18.322 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:eb:87:b2 10.100.0.10'], port_security=['fa:16:3e:eb:87:b2 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'cd7f8cdc-1467-4d67-b60b-dd2ee8707b09', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3e260ad9-fcf1-432b-b71b-b943d4249b65', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a3e022a35f604df2bbc885e498b1e206', 'neutron:revision_number': '4', 'neutron:security_group_ids': '885819b7-5060-4b73-ad54-3f31f821195c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=89e607b1-9e39-47f0-8180-8aaef3a2a0e9, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], logical_port=221681ce-86ed-410e-8ca2-52951142fede) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:43:18 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:43:18.323 140354 INFO neutron.agent.ovn.metadata.agent [-] Port 221681ce-86ed-410e-8ca2-52951142fede in datapath 3e260ad9-fcf1-432b-b71b-b943d4249b65 unbound from our chassis
Jan 20 14:43:18 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:43:18.325 140354 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3e260ad9-fcf1-432b-b71b-b943d4249b65, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 20 14:43:18 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:43:18.326 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[1f103673-73e6-427e-b74f-20e0f7779f24]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:43:18 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:43:18.327 140354 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-3e260ad9-fcf1-432b-b71b-b943d4249b65 namespace which is not needed anymore
Jan 20 14:43:18 compute-1 nova_compute[225855]: 2026-01-20 14:43:18.341 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:43:18 compute-1 systemd[1]: machine-qemu\x2d34\x2dinstance\x2d00000050.scope: Deactivated successfully.
Jan 20 14:43:18 compute-1 systemd[1]: machine-qemu\x2d34\x2dinstance\x2d00000050.scope: Consumed 3.860s CPU time.
Jan 20 14:43:18 compute-1 systemd-machined[194361]: Machine qemu-34-instance-00000050 terminated.
Jan 20 14:43:18 compute-1 ovn_controller[130490]: 2026-01-20T14:43:18Z|00290|binding|INFO|Releasing lport 2b7c295d-f074-4cfb-aca0-08946126ddbc from this chassis (sb_readonly=0)
Jan 20 14:43:18 compute-1 neutron-haproxy-ovnmeta-3e260ad9-fcf1-432b-b71b-b943d4249b65[257303]: [NOTICE]   (257307) : haproxy version is 2.8.14-c23fe91
Jan 20 14:43:18 compute-1 neutron-haproxy-ovnmeta-3e260ad9-fcf1-432b-b71b-b943d4249b65[257303]: [NOTICE]   (257307) : path to executable is /usr/sbin/haproxy
Jan 20 14:43:18 compute-1 neutron-haproxy-ovnmeta-3e260ad9-fcf1-432b-b71b-b943d4249b65[257303]: [WARNING]  (257307) : Exiting Master process...
Jan 20 14:43:18 compute-1 neutron-haproxy-ovnmeta-3e260ad9-fcf1-432b-b71b-b943d4249b65[257303]: [ALERT]    (257307) : Current worker (257309) exited with code 143 (Terminated)
Jan 20 14:43:18 compute-1 neutron-haproxy-ovnmeta-3e260ad9-fcf1-432b-b71b-b943d4249b65[257303]: [WARNING]  (257307) : All workers exited. Exiting... (0)
Jan 20 14:43:18 compute-1 systemd[1]: libpod-c6d47773a7b5e703ee98e9aa55358d1326e7e172882283f62037bdb5729618d5.scope: Deactivated successfully.
Jan 20 14:43:18 compute-1 conmon[257303]: conmon c6d47773a7b5e703ee98 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c6d47773a7b5e703ee98e9aa55358d1326e7e172882283f62037bdb5729618d5.scope/container/memory.events
Jan 20 14:43:18 compute-1 podman[257346]: 2026-01-20 14:43:18.474601746 +0000 UTC m=+0.050490411 container died c6d47773a7b5e703ee98e9aa55358d1326e7e172882283f62037bdb5729618d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3e260ad9-fcf1-432b-b71b-b943d4249b65, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 20 14:43:18 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c6d47773a7b5e703ee98e9aa55358d1326e7e172882283f62037bdb5729618d5-userdata-shm.mount: Deactivated successfully.
Jan 20 14:43:18 compute-1 systemd[1]: var-lib-containers-storage-overlay-90a6cff7ad5f6f51949488ae9006978cd1a88ea356309dfcbb35378ee1a81275-merged.mount: Deactivated successfully.
Jan 20 14:43:18 compute-1 nova_compute[225855]: 2026-01-20 14:43:18.518 225859 INFO nova.virt.libvirt.driver [-] [instance: cd7f8cdc-1467-4d67-b60b-dd2ee8707b09] Instance destroyed successfully.
Jan 20 14:43:18 compute-1 nova_compute[225855]: 2026-01-20 14:43:18.520 225859 DEBUG nova.objects.instance [None req-e46dc073-839f-4cc0-bb74-4d40bbb9f890 aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Lazy-loading 'resources' on Instance uuid cd7f8cdc-1467-4d67-b60b-dd2ee8707b09 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:43:18 compute-1 nova_compute[225855]: 2026-01-20 14:43:18.527 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:43:18 compute-1 podman[257346]: 2026-01-20 14:43:18.528645437 +0000 UTC m=+0.104534092 container cleanup c6d47773a7b5e703ee98e9aa55358d1326e7e172882283f62037bdb5729618d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3e260ad9-fcf1-432b-b71b-b943d4249b65, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 20 14:43:18 compute-1 nova_compute[225855]: 2026-01-20 14:43:18.534 225859 DEBUG nova.virt.libvirt.vif [None req-e46dc073-839f-4cc0-bb74-4d40bbb9f890 aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T14:43:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-MultipleCreateTestJSON-server-704404998',display_name='tempest-MultipleCreateTestJSON-server-704404998-1',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-multiplecreatetestjson-server-704404998-1',id=80,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-20T14:43:15Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a3e022a35f604df2bbc885e498b1e206',ramdisk_id='',reservation_id='r-0enj9v31',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image
_min_disk='1',image_min_ram='0',owner_project_name='tempest-MultipleCreateTestJSON-164394330',owner_user_name='tempest-MultipleCreateTestJSON-164394330-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T14:43:15Z,user_data=None,user_id='aa2e7857e85f483eb0d162e2ee8c2e2c',uuid=cd7f8cdc-1467-4d67-b60b-dd2ee8707b09,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "221681ce-86ed-410e-8ca2-52951142fede", "address": "fa:16:3e:eb:87:b2", "network": {"id": "3e260ad9-fcf1-432b-b71b-b943d4249b65", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1425882684-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3e022a35f604df2bbc885e498b1e206", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap221681ce-86", "ovs_interfaceid": "221681ce-86ed-410e-8ca2-52951142fede", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 20 14:43:18 compute-1 nova_compute[225855]: 2026-01-20 14:43:18.535 225859 DEBUG nova.network.os_vif_util [None req-e46dc073-839f-4cc0-bb74-4d40bbb9f890 aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Converting VIF {"id": "221681ce-86ed-410e-8ca2-52951142fede", "address": "fa:16:3e:eb:87:b2", "network": {"id": "3e260ad9-fcf1-432b-b71b-b943d4249b65", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1425882684-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3e022a35f604df2bbc885e498b1e206", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap221681ce-86", "ovs_interfaceid": "221681ce-86ed-410e-8ca2-52951142fede", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:43:18 compute-1 nova_compute[225855]: 2026-01-20 14:43:18.536 225859 DEBUG nova.network.os_vif_util [None req-e46dc073-839f-4cc0-bb74-4d40bbb9f890 aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:eb:87:b2,bridge_name='br-int',has_traffic_filtering=True,id=221681ce-86ed-410e-8ca2-52951142fede,network=Network(3e260ad9-fcf1-432b-b71b-b943d4249b65),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap221681ce-86') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:43:18 compute-1 nova_compute[225855]: 2026-01-20 14:43:18.536 225859 DEBUG os_vif [None req-e46dc073-839f-4cc0-bb74-4d40bbb9f890 aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:eb:87:b2,bridge_name='br-int',has_traffic_filtering=True,id=221681ce-86ed-410e-8ca2-52951142fede,network=Network(3e260ad9-fcf1-432b-b71b-b943d4249b65),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap221681ce-86') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 20 14:43:18 compute-1 nova_compute[225855]: 2026-01-20 14:43:18.538 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:43:18 compute-1 nova_compute[225855]: 2026-01-20 14:43:18.539 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap221681ce-86, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:43:18 compute-1 nova_compute[225855]: 2026-01-20 14:43:18.540 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:43:18 compute-1 systemd[1]: libpod-conmon-c6d47773a7b5e703ee98e9aa55358d1326e7e172882283f62037bdb5729618d5.scope: Deactivated successfully.
Jan 20 14:43:18 compute-1 nova_compute[225855]: 2026-01-20 14:43:18.543 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 14:43:18 compute-1 nova_compute[225855]: 2026-01-20 14:43:18.545 225859 INFO os_vif [None req-e46dc073-839f-4cc0-bb74-4d40bbb9f890 aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:eb:87:b2,bridge_name='br-int',has_traffic_filtering=True,id=221681ce-86ed-410e-8ca2-52951142fede,network=Network(3e260ad9-fcf1-432b-b71b-b943d4249b65),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap221681ce-86')
Jan 20 14:43:18 compute-1 podman[257381]: 2026-01-20 14:43:18.594631786 +0000 UTC m=+0.042300879 container remove c6d47773a7b5e703ee98e9aa55358d1326e7e172882283f62037bdb5729618d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3e260ad9-fcf1-432b-b71b-b943d4249b65, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 20 14:43:18 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:43:18.600 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[301afa8f-8001-41da-9f06-b3a59ba7841b]: (4, ('Tue Jan 20 02:43:18 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-3e260ad9-fcf1-432b-b71b-b943d4249b65 (c6d47773a7b5e703ee98e9aa55358d1326e7e172882283f62037bdb5729618d5)\nc6d47773a7b5e703ee98e9aa55358d1326e7e172882283f62037bdb5729618d5\nTue Jan 20 02:43:18 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-3e260ad9-fcf1-432b-b71b-b943d4249b65 (c6d47773a7b5e703ee98e9aa55358d1326e7e172882283f62037bdb5729618d5)\nc6d47773a7b5e703ee98e9aa55358d1326e7e172882283f62037bdb5729618d5\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:43:18 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:43:18.602 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[4786ad77-10d6-4465-bddc-f9348330772c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:43:18 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:43:18.602 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3e260ad9-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:43:18 compute-1 nova_compute[225855]: 2026-01-20 14:43:18.604 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:43:18 compute-1 kernel: tap3e260ad9-f0: left promiscuous mode
Jan 20 14:43:18 compute-1 nova_compute[225855]: 2026-01-20 14:43:18.616 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:43:18 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:43:18.618 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[a13ecaed-ac1b-4038-bfbd-f0bda06de716]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:43:18 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:43:18.630 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[c2e1705f-6428-4484-96e1-a8867e7cc0e4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:43:18 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:43:18.632 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[29ccf00e-5fc9-4fca-961e-9ef6f8f78f7f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:43:18 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:43:18.646 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[518ecdde-e626-4f5b-8fe2-8fe2da5613f1]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 523334, 'reachable_time': 24440, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 257415, 'error': None, 'target': 'ovnmeta-3e260ad9-fcf1-432b-b71b-b943d4249b65', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:43:18 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:43:18.648 140466 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-3e260ad9-fcf1-432b-b71b-b943d4249b65 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 20 14:43:18 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:43:18.649 140466 DEBUG oslo.privsep.daemon [-] privsep: reply[830402b6-a4a1-48da-893f-7970c2d7ef87]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:43:18 compute-1 systemd[1]: run-netns-ovnmeta\x2d3e260ad9\x2dfcf1\x2d432b\x2db71b\x2db943d4249b65.mount: Deactivated successfully.
Jan 20 14:43:18 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/3490191056' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:43:18 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/3555294503' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:43:18 compute-1 nova_compute[225855]: 2026-01-20 14:43:18.960 225859 INFO nova.virt.libvirt.driver [None req-e46dc073-839f-4cc0-bb74-4d40bbb9f890 aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] [instance: cd7f8cdc-1467-4d67-b60b-dd2ee8707b09] Deleting instance files /var/lib/nova/instances/cd7f8cdc-1467-4d67-b60b-dd2ee8707b09_del
Jan 20 14:43:18 compute-1 nova_compute[225855]: 2026-01-20 14:43:18.961 225859 INFO nova.virt.libvirt.driver [None req-e46dc073-839f-4cc0-bb74-4d40bbb9f890 aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] [instance: cd7f8cdc-1467-4d67-b60b-dd2ee8707b09] Deletion of /var/lib/nova/instances/cd7f8cdc-1467-4d67-b60b-dd2ee8707b09_del complete
Jan 20 14:43:19 compute-1 nova_compute[225855]: 2026-01-20 14:43:19.032 225859 INFO nova.compute.manager [None req-e46dc073-839f-4cc0-bb74-4d40bbb9f890 aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] [instance: cd7f8cdc-1467-4d67-b60b-dd2ee8707b09] Took 0.77 seconds to destroy the instance on the hypervisor.
Jan 20 14:43:19 compute-1 nova_compute[225855]: 2026-01-20 14:43:19.033 225859 DEBUG oslo.service.loopingcall [None req-e46dc073-839f-4cc0-bb74-4d40bbb9f890 aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 20 14:43:19 compute-1 nova_compute[225855]: 2026-01-20 14:43:19.033 225859 DEBUG nova.compute.manager [-] [instance: cd7f8cdc-1467-4d67-b60b-dd2ee8707b09] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 20 14:43:19 compute-1 nova_compute[225855]: 2026-01-20 14:43:19.034 225859 DEBUG nova.network.neutron [-] [instance: cd7f8cdc-1467-4d67-b60b-dd2ee8707b09] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 20 14:43:19 compute-1 nova_compute[225855]: 2026-01-20 14:43:19.051 225859 DEBUG nova.compute.manager [req-cc60b2e4-1f82-4aff-9198-1641d2dd8cf1 req-d5bcb852-3b53-48bf-a499-168961abc425 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: cd7f8cdc-1467-4d67-b60b-dd2ee8707b09] Received event network-vif-unplugged-221681ce-86ed-410e-8ca2-52951142fede external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:43:19 compute-1 nova_compute[225855]: 2026-01-20 14:43:19.051 225859 DEBUG oslo_concurrency.lockutils [req-cc60b2e4-1f82-4aff-9198-1641d2dd8cf1 req-d5bcb852-3b53-48bf-a499-168961abc425 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "cd7f8cdc-1467-4d67-b60b-dd2ee8707b09-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:43:19 compute-1 nova_compute[225855]: 2026-01-20 14:43:19.051 225859 DEBUG oslo_concurrency.lockutils [req-cc60b2e4-1f82-4aff-9198-1641d2dd8cf1 req-d5bcb852-3b53-48bf-a499-168961abc425 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "cd7f8cdc-1467-4d67-b60b-dd2ee8707b09-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:43:19 compute-1 nova_compute[225855]: 2026-01-20 14:43:19.052 225859 DEBUG oslo_concurrency.lockutils [req-cc60b2e4-1f82-4aff-9198-1641d2dd8cf1 req-d5bcb852-3b53-48bf-a499-168961abc425 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "cd7f8cdc-1467-4d67-b60b-dd2ee8707b09-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:43:19 compute-1 nova_compute[225855]: 2026-01-20 14:43:19.052 225859 DEBUG nova.compute.manager [req-cc60b2e4-1f82-4aff-9198-1641d2dd8cf1 req-d5bcb852-3b53-48bf-a499-168961abc425 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: cd7f8cdc-1467-4d67-b60b-dd2ee8707b09] No waiting events found dispatching network-vif-unplugged-221681ce-86ed-410e-8ca2-52951142fede pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:43:19 compute-1 nova_compute[225855]: 2026-01-20 14:43:19.053 225859 DEBUG nova.compute.manager [req-cc60b2e4-1f82-4aff-9198-1641d2dd8cf1 req-d5bcb852-3b53-48bf-a499-168961abc425 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: cd7f8cdc-1467-4d67-b60b-dd2ee8707b09] Received event network-vif-unplugged-221681ce-86ed-410e-8ca2-52951142fede for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 20 14:43:19 compute-1 nova_compute[225855]: 2026-01-20 14:43:19.053 225859 DEBUG nova.compute.manager [req-cc60b2e4-1f82-4aff-9198-1641d2dd8cf1 req-d5bcb852-3b53-48bf-a499-168961abc425 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: cd7f8cdc-1467-4d67-b60b-dd2ee8707b09] Received event network-vif-plugged-221681ce-86ed-410e-8ca2-52951142fede external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:43:19 compute-1 nova_compute[225855]: 2026-01-20 14:43:19.053 225859 DEBUG oslo_concurrency.lockutils [req-cc60b2e4-1f82-4aff-9198-1641d2dd8cf1 req-d5bcb852-3b53-48bf-a499-168961abc425 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "cd7f8cdc-1467-4d67-b60b-dd2ee8707b09-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:43:19 compute-1 nova_compute[225855]: 2026-01-20 14:43:19.054 225859 DEBUG oslo_concurrency.lockutils [req-cc60b2e4-1f82-4aff-9198-1641d2dd8cf1 req-d5bcb852-3b53-48bf-a499-168961abc425 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "cd7f8cdc-1467-4d67-b60b-dd2ee8707b09-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:43:19 compute-1 nova_compute[225855]: 2026-01-20 14:43:19.054 225859 DEBUG oslo_concurrency.lockutils [req-cc60b2e4-1f82-4aff-9198-1641d2dd8cf1 req-d5bcb852-3b53-48bf-a499-168961abc425 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "cd7f8cdc-1467-4d67-b60b-dd2ee8707b09-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:43:19 compute-1 nova_compute[225855]: 2026-01-20 14:43:19.055 225859 DEBUG nova.compute.manager [req-cc60b2e4-1f82-4aff-9198-1641d2dd8cf1 req-d5bcb852-3b53-48bf-a499-168961abc425 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: cd7f8cdc-1467-4d67-b60b-dd2ee8707b09] No waiting events found dispatching network-vif-plugged-221681ce-86ed-410e-8ca2-52951142fede pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:43:19 compute-1 nova_compute[225855]: 2026-01-20 14:43:19.055 225859 WARNING nova.compute.manager [req-cc60b2e4-1f82-4aff-9198-1641d2dd8cf1 req-d5bcb852-3b53-48bf-a499-168961abc425 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: cd7f8cdc-1467-4d67-b60b-dd2ee8707b09] Received unexpected event network-vif-plugged-221681ce-86ed-410e-8ca2-52951142fede for instance with vm_state active and task_state deleting.
Jan 20 14:43:19 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:43:19 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:43:19 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:43:19.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:43:19 compute-1 ceph-mon[81775]: pgmap v1649: 321 pgs: 321 active+clean; 121 MiB data, 717 MiB used, 20 GiB / 21 GiB avail; 4.9 MiB/s rd, 1.9 MiB/s wr, 332 op/s
Jan 20 14:43:20 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:43:20 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:43:20 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:43:20.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:43:20 compute-1 nova_compute[225855]: 2026-01-20 14:43:20.258 225859 DEBUG nova.network.neutron [-] [instance: cd7f8cdc-1467-4d67-b60b-dd2ee8707b09] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:43:20 compute-1 nova_compute[225855]: 2026-01-20 14:43:20.298 225859 INFO nova.compute.manager [-] [instance: cd7f8cdc-1467-4d67-b60b-dd2ee8707b09] Took 1.26 seconds to deallocate network for instance.
Jan 20 14:43:20 compute-1 nova_compute[225855]: 2026-01-20 14:43:20.304 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:43:20 compute-1 nova_compute[225855]: 2026-01-20 14:43:20.378 225859 DEBUG oslo_concurrency.lockutils [None req-e46dc073-839f-4cc0-bb74-4d40bbb9f890 aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:43:20 compute-1 nova_compute[225855]: 2026-01-20 14:43:20.379 225859 DEBUG oslo_concurrency.lockutils [None req-e46dc073-839f-4cc0-bb74-4d40bbb9f890 aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:43:20 compute-1 nova_compute[225855]: 2026-01-20 14:43:20.429 225859 DEBUG oslo_concurrency.processutils [None req-e46dc073-839f-4cc0-bb74-4d40bbb9f890 aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:43:20 compute-1 nova_compute[225855]: 2026-01-20 14:43:20.574 225859 DEBUG nova.compute.manager [req-d4c73e9b-d5c8-4879-8e08-5e71b00c48db req-b691813b-509a-4744-9402-04fc9a362da8 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: cd7f8cdc-1467-4d67-b60b-dd2ee8707b09] Received event network-vif-deleted-221681ce-86ed-410e-8ca2-52951142fede external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:43:20 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:43:20 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4067799863' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:43:20 compute-1 nova_compute[225855]: 2026-01-20 14:43:20.870 225859 DEBUG oslo_concurrency.processutils [None req-e46dc073-839f-4cc0-bb74-4d40bbb9f890 aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:43:20 compute-1 nova_compute[225855]: 2026-01-20 14:43:20.876 225859 DEBUG nova.compute.provider_tree [None req-e46dc073-839f-4cc0-bb74-4d40bbb9f890 aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:43:20 compute-1 nova_compute[225855]: 2026-01-20 14:43:20.891 225859 DEBUG nova.scheduler.client.report [None req-e46dc073-839f-4cc0-bb74-4d40bbb9f890 aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:43:20 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/4067799863' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:43:20 compute-1 nova_compute[225855]: 2026-01-20 14:43:20.913 225859 DEBUG oslo_concurrency.lockutils [None req-e46dc073-839f-4cc0-bb74-4d40bbb9f890 aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.534s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:43:20 compute-1 nova_compute[225855]: 2026-01-20 14:43:20.941 225859 INFO nova.scheduler.client.report [None req-e46dc073-839f-4cc0-bb74-4d40bbb9f890 aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Deleted allocations for instance cd7f8cdc-1467-4d67-b60b-dd2ee8707b09
Jan 20 14:43:21 compute-1 nova_compute[225855]: 2026-01-20 14:43:21.026 225859 DEBUG oslo_concurrency.lockutils [None req-e46dc073-839f-4cc0-bb74-4d40bbb9f890 aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Lock "cd7f8cdc-1467-4d67-b60b-dd2ee8707b09" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.768s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:43:21 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:43:21 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:43:21 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:43:21.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:43:21 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:43:21 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/3531891016' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:43:21 compute-1 ceph-mon[81775]: pgmap v1650: 321 pgs: 321 active+clean; 113 MiB data, 706 MiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 1.5 MiB/s wr, 353 op/s
Jan 20 14:43:22 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:43:22 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:43:22 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:43:22.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:43:23 compute-1 nova_compute[225855]: 2026-01-20 14:43:23.176 225859 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1768920188.1748006, 504acd93-cd55-496e-a85f-30e811f827d4 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:43:23 compute-1 nova_compute[225855]: 2026-01-20 14:43:23.176 225859 INFO nova.compute.manager [-] [instance: 504acd93-cd55-496e-a85f-30e811f827d4] VM Stopped (Lifecycle Event)
Jan 20 14:43:23 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:43:23 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:43:23 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:43:23.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:43:23 compute-1 nova_compute[225855]: 2026-01-20 14:43:23.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:43:23 compute-1 nova_compute[225855]: 2026-01-20 14:43:23.368 225859 DEBUG nova.compute.manager [None req-fc1013a5-077e-4589-bfd4-38e59c3fc6ee - - - - - -] [instance: 504acd93-cd55-496e-a85f-30e811f827d4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:43:23 compute-1 nova_compute[225855]: 2026-01-20 14:43:23.579 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:43:23 compute-1 ceph-mon[81775]: pgmap v1651: 321 pgs: 321 active+clean; 70 MiB data, 685 MiB used, 20 GiB / 21 GiB avail; 5.5 MiB/s rd, 816 KiB/s wr, 334 op/s
Jan 20 14:43:24 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:43:24 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:43:24 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:43:24.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:43:24 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/1454149014' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:43:25 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:43:25 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:43:25 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:43:25.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:43:25 compute-1 nova_compute[225855]: 2026-01-20 14:43:25.306 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:43:25 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/516531942' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:43:25 compute-1 ceph-mon[81775]: pgmap v1652: 321 pgs: 321 active+clean; 88 MiB data, 691 MiB used, 20 GiB / 21 GiB avail; 5.4 MiB/s rd, 1.8 MiB/s wr, 339 op/s
Jan 20 14:43:26 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:43:26 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:43:26 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:43:26.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:43:26 compute-1 sudo[257443]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:43:26 compute-1 sudo[257443]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:43:26 compute-1 sudo[257443]: pam_unix(sudo:session): session closed for user root
Jan 20 14:43:26 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:43:26 compute-1 sudo[257469]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:43:26 compute-1 sudo[257469]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:43:26 compute-1 sudo[257469]: pam_unix(sudo:session): session closed for user root
Jan 20 14:43:26 compute-1 podman[257467]: 2026-01-20 14:43:26.68365389 +0000 UTC m=+0.116277225 container health_status 72be95c12e041eb2cd1a16ad70dfa1391d9f03a5f3180a614ead9459dcfcd46f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller)
Jan 20 14:43:27 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:43:27 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:43:27 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:43:27.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:43:27 compute-1 nova_compute[225855]: 2026-01-20 14:43:27.372 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:43:27 compute-1 ceph-mgr[82135]: client.0 ms_handle_reset on v2:192.168.122.100:6800/2542147622
Jan 20 14:43:27 compute-1 ceph-mon[81775]: pgmap v1653: 321 pgs: 321 active+clean; 88 MiB data, 691 MiB used, 20 GiB / 21 GiB avail; 4.3 MiB/s rd, 1.8 MiB/s wr, 250 op/s
Jan 20 14:43:28 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:43:28 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:43:28 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:43:28.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:43:28 compute-1 nova_compute[225855]: 2026-01-20 14:43:28.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:43:28 compute-1 nova_compute[225855]: 2026-01-20 14:43:28.340 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 14:43:28 compute-1 nova_compute[225855]: 2026-01-20 14:43:28.358 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 20 14:43:28 compute-1 nova_compute[225855]: 2026-01-20 14:43:28.358 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:43:28 compute-1 nova_compute[225855]: 2026-01-20 14:43:28.581 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:43:29 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:43:29 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:43:29 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:43:29.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:43:29 compute-1 ceph-mon[81775]: pgmap v1654: 321 pgs: 321 active+clean; 88 MiB data, 691 MiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 1.8 MiB/s wr, 198 op/s
Jan 20 14:43:30 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:43:30 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:43:30 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:43:30.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:43:30 compute-1 nova_compute[225855]: 2026-01-20 14:43:30.308 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:43:30 compute-1 nova_compute[225855]: 2026-01-20 14:43:30.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:43:31 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:43:31 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:43:31 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:43:31.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:43:31 compute-1 nova_compute[225855]: 2026-01-20 14:43:31.340 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:43:31 compute-1 nova_compute[225855]: 2026-01-20 14:43:31.341 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 14:43:31 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:43:31 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/2958872977' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:43:32 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:43:32 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:43:32 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:43:32.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:43:32 compute-1 nova_compute[225855]: 2026-01-20 14:43:32.340 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:43:32 compute-1 nova_compute[225855]: 2026-01-20 14:43:32.340 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:43:32 compute-1 nova_compute[225855]: 2026-01-20 14:43:32.362 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:43:32 compute-1 nova_compute[225855]: 2026-01-20 14:43:32.363 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:43:32 compute-1 nova_compute[225855]: 2026-01-20 14:43:32.363 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:43:32 compute-1 nova_compute[225855]: 2026-01-20 14:43:32.363 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 14:43:32 compute-1 nova_compute[225855]: 2026-01-20 14:43:32.364 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:43:32 compute-1 ceph-mon[81775]: pgmap v1655: 321 pgs: 321 active+clean; 88 MiB data, 691 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 1.8 MiB/s wr, 164 op/s
Jan 20 14:43:32 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/526759509' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:43:32 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:43:32 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/543489734' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:43:32 compute-1 nova_compute[225855]: 2026-01-20 14:43:32.800 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:43:32 compute-1 nova_compute[225855]: 2026-01-20 14:43:32.967 225859 WARNING nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 14:43:32 compute-1 nova_compute[225855]: 2026-01-20 14:43:32.969 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4550MB free_disk=20.967357635498047GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 14:43:32 compute-1 nova_compute[225855]: 2026-01-20 14:43:32.969 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:43:32 compute-1 nova_compute[225855]: 2026-01-20 14:43:32.970 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:43:33 compute-1 nova_compute[225855]: 2026-01-20 14:43:33.067 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 14:43:33 compute-1 nova_compute[225855]: 2026-01-20 14:43:33.067 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 14:43:33 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:43:33 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:43:33 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:43:33.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:43:33 compute-1 nova_compute[225855]: 2026-01-20 14:43:33.515 225859 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1768920198.514241, cd7f8cdc-1467-4d67-b60b-dd2ee8707b09 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:43:33 compute-1 nova_compute[225855]: 2026-01-20 14:43:33.516 225859 INFO nova.compute.manager [-] [instance: cd7f8cdc-1467-4d67-b60b-dd2ee8707b09] VM Stopped (Lifecycle Event)
Jan 20 14:43:33 compute-1 nova_compute[225855]: 2026-01-20 14:43:33.553 225859 DEBUG nova.compute.manager [None req-24947fce-01c0-43f6-8ce9-6d0a85715a0a - - - - - -] [instance: cd7f8cdc-1467-4d67-b60b-dd2ee8707b09] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:43:33 compute-1 nova_compute[225855]: 2026-01-20 14:43:33.583 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:43:33 compute-1 nova_compute[225855]: 2026-01-20 14:43:33.632 225859 DEBUG nova.scheduler.client.report [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Refreshing inventories for resource provider bbb02880-a710-4ac1-8b2c-5c09765848d1 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 20 14:43:33 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/543489734' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:43:33 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/23827069' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:43:33 compute-1 nova_compute[225855]: 2026-01-20 14:43:33.712 225859 DEBUG nova.scheduler.client.report [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Updating ProviderTree inventory for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 20 14:43:33 compute-1 nova_compute[225855]: 2026-01-20 14:43:33.713 225859 DEBUG nova.compute.provider_tree [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Updating inventory in ProviderTree for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 20 14:43:33 compute-1 nova_compute[225855]: 2026-01-20 14:43:33.931 225859 DEBUG nova.scheduler.client.report [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Refreshing aggregate associations for resource provider bbb02880-a710-4ac1-8b2c-5c09765848d1, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 20 14:43:33 compute-1 nova_compute[225855]: 2026-01-20 14:43:33.982 225859 DEBUG nova.scheduler.client.report [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Refreshing trait associations for resource provider bbb02880-a710-4ac1-8b2c-5c09765848d1, traits: COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_STORAGE_BUS_SATA,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_TRUSTED_CERTS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSSE3,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_VOLUME_EXTEND,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_RESCUE_BFV,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_USB,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE42,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SOCKET_PCI_NUMA_AFFINITY _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 20 14:43:34 compute-1 nova_compute[225855]: 2026-01-20 14:43:34.036 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:43:34 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:43:34 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:43:34 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:43:34.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:43:34 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:43:34 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/332974038' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:43:34 compute-1 nova_compute[225855]: 2026-01-20 14:43:34.863 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.827s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:43:34 compute-1 nova_compute[225855]: 2026-01-20 14:43:34.870 225859 DEBUG nova.compute.provider_tree [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:43:34 compute-1 nova_compute[225855]: 2026-01-20 14:43:34.885 225859 DEBUG nova.scheduler.client.report [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:43:34 compute-1 ceph-mon[81775]: pgmap v1656: 321 pgs: 321 active+clean; 88 MiB data, 691 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.3 MiB/s wr, 119 op/s
Jan 20 14:43:34 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/332974038' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:43:34 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/2644878239' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:43:34 compute-1 nova_compute[225855]: 2026-01-20 14:43:34.923 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 14:43:34 compute-1 nova_compute[225855]: 2026-01-20 14:43:34.923 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.953s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:43:35 compute-1 podman[257567]: 2026-01-20 14:43:35.009659695 +0000 UTC m=+0.047733002 container health_status 533cd8e0c3b8b7f910f15cef49e7f879ab46c12ce646a35628c15f13725eb500 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Jan 20 14:43:35 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:43:35 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:43:35 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:43:35.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:43:35 compute-1 nova_compute[225855]: 2026-01-20 14:43:35.310 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:43:35 compute-1 nova_compute[225855]: 2026-01-20 14:43:35.340 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:43:35 compute-1 nova_compute[225855]: 2026-01-20 14:43:35.340 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:43:35 compute-1 nova_compute[225855]: 2026-01-20 14:43:35.341 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:43:35 compute-1 nova_compute[225855]: 2026-01-20 14:43:35.341 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 20 14:43:35 compute-1 nova_compute[225855]: 2026-01-20 14:43:35.381 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 20 14:43:35 compute-1 nova_compute[225855]: 2026-01-20 14:43:35.381 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:43:35 compute-1 nova_compute[225855]: 2026-01-20 14:43:35.382 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 20 14:43:35 compute-1 ceph-mon[81775]: pgmap v1657: 321 pgs: 321 active+clean; 88 MiB data, 691 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.0 MiB/s wr, 89 op/s
Jan 20 14:43:36 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:43:36 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:43:36 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:43:36.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:43:36 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:43:37 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:43:37 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:43:37 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:43:37.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:43:37 compute-1 nova_compute[225855]: 2026-01-20 14:43:37.382 225859 DEBUG oslo_concurrency.lockutils [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] Acquiring lock "d6ec5fce-44f2-4c13-b908-c45d7a919b34" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:43:37 compute-1 nova_compute[225855]: 2026-01-20 14:43:37.383 225859 DEBUG oslo_concurrency.lockutils [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] Lock "d6ec5fce-44f2-4c13-b908-c45d7a919b34" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:43:37 compute-1 nova_compute[225855]: 2026-01-20 14:43:37.404 225859 DEBUG nova.compute.manager [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] [instance: d6ec5fce-44f2-4c13-b908-c45d7a919b34] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 20 14:43:37 compute-1 nova_compute[225855]: 2026-01-20 14:43:37.579 225859 DEBUG oslo_concurrency.lockutils [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:43:37 compute-1 nova_compute[225855]: 2026-01-20 14:43:37.579 225859 DEBUG oslo_concurrency.lockutils [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:43:37 compute-1 nova_compute[225855]: 2026-01-20 14:43:37.585 225859 DEBUG nova.virt.hardware [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 20 14:43:37 compute-1 nova_compute[225855]: 2026-01-20 14:43:37.586 225859 INFO nova.compute.claims [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] [instance: d6ec5fce-44f2-4c13-b908-c45d7a919b34] Claim successful on node compute-1.ctlplane.example.com
Jan 20 14:43:37 compute-1 ceph-mon[81775]: pgmap v1658: 321 pgs: 321 active+clean; 88 MiB data, 691 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 73 op/s
Jan 20 14:43:37 compute-1 nova_compute[225855]: 2026-01-20 14:43:37.838 225859 DEBUG oslo_concurrency.processutils [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:43:38 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:43:38 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:43:38 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:43:38.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:43:38 compute-1 nova_compute[225855]: 2026-01-20 14:43:38.297 225859 DEBUG oslo_concurrency.processutils [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:43:38 compute-1 nova_compute[225855]: 2026-01-20 14:43:38.304 225859 DEBUG nova.compute.provider_tree [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:43:38 compute-1 nova_compute[225855]: 2026-01-20 14:43:38.342 225859 DEBUG nova.scheduler.client.report [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:43:38 compute-1 nova_compute[225855]: 2026-01-20 14:43:38.394 225859 DEBUG oslo_concurrency.lockutils [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.815s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:43:38 compute-1 nova_compute[225855]: 2026-01-20 14:43:38.395 225859 DEBUG nova.compute.manager [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] [instance: d6ec5fce-44f2-4c13-b908-c45d7a919b34] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 20 14:43:38 compute-1 nova_compute[225855]: 2026-01-20 14:43:38.523 225859 DEBUG nova.compute.manager [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] [instance: d6ec5fce-44f2-4c13-b908-c45d7a919b34] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 20 14:43:38 compute-1 nova_compute[225855]: 2026-01-20 14:43:38.524 225859 DEBUG nova.network.neutron [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] [instance: d6ec5fce-44f2-4c13-b908-c45d7a919b34] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 20 14:43:38 compute-1 nova_compute[225855]: 2026-01-20 14:43:38.556 225859 INFO nova.virt.libvirt.driver [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] [instance: d6ec5fce-44f2-4c13-b908-c45d7a919b34] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 20 14:43:38 compute-1 nova_compute[225855]: 2026-01-20 14:43:38.586 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:43:38 compute-1 nova_compute[225855]: 2026-01-20 14:43:38.603 225859 DEBUG nova.compute.manager [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] [instance: d6ec5fce-44f2-4c13-b908-c45d7a919b34] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 20 14:43:38 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/4057377900' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:43:38 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/23348796' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:43:38 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/132858433' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:43:38 compute-1 nova_compute[225855]: 2026-01-20 14:43:38.800 225859 DEBUG nova.compute.manager [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] [instance: d6ec5fce-44f2-4c13-b908-c45d7a919b34] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 20 14:43:38 compute-1 nova_compute[225855]: 2026-01-20 14:43:38.802 225859 DEBUG nova.virt.libvirt.driver [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] [instance: d6ec5fce-44f2-4c13-b908-c45d7a919b34] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 20 14:43:38 compute-1 nova_compute[225855]: 2026-01-20 14:43:38.803 225859 INFO nova.virt.libvirt.driver [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] [instance: d6ec5fce-44f2-4c13-b908-c45d7a919b34] Creating image(s)
Jan 20 14:43:38 compute-1 nova_compute[225855]: 2026-01-20 14:43:38.831 225859 DEBUG nova.storage.rbd_utils [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] rbd image d6ec5fce-44f2-4c13-b908-c45d7a919b34_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:43:38 compute-1 nova_compute[225855]: 2026-01-20 14:43:38.861 225859 DEBUG nova.storage.rbd_utils [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] rbd image d6ec5fce-44f2-4c13-b908-c45d7a919b34_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:43:38 compute-1 nova_compute[225855]: 2026-01-20 14:43:38.900 225859 DEBUG nova.storage.rbd_utils [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] rbd image d6ec5fce-44f2-4c13-b908-c45d7a919b34_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:43:38 compute-1 nova_compute[225855]: 2026-01-20 14:43:38.906 225859 DEBUG oslo_concurrency.processutils [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:43:38 compute-1 nova_compute[225855]: 2026-01-20 14:43:38.987 225859 DEBUG oslo_concurrency.processutils [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:43:38 compute-1 nova_compute[225855]: 2026-01-20 14:43:38.988 225859 DEBUG oslo_concurrency.lockutils [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] Acquiring lock "82d5c1918fd7c974214c7a48c1793a7a82560462" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:43:38 compute-1 nova_compute[225855]: 2026-01-20 14:43:38.989 225859 DEBUG oslo_concurrency.lockutils [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:43:38 compute-1 nova_compute[225855]: 2026-01-20 14:43:38.989 225859 DEBUG oslo_concurrency.lockutils [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:43:39 compute-1 nova_compute[225855]: 2026-01-20 14:43:39.021 225859 DEBUG nova.storage.rbd_utils [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] rbd image d6ec5fce-44f2-4c13-b908-c45d7a919b34_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:43:39 compute-1 nova_compute[225855]: 2026-01-20 14:43:39.026 225859 DEBUG oslo_concurrency.processutils [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 d6ec5fce-44f2-4c13-b908-c45d7a919b34_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:43:39 compute-1 nova_compute[225855]: 2026-01-20 14:43:39.094 225859 DEBUG nova.policy [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '2975742546164cad937d13671d17108a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '28a523cfe06042ff96554913a78e1e3a', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 20 14:43:39 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:43:39 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:43:39 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:43:39.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:43:39 compute-1 nova_compute[225855]: 2026-01-20 14:43:39.290 225859 DEBUG oslo_concurrency.processutils [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 d6ec5fce-44f2-4c13-b908-c45d7a919b34_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.265s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:43:39 compute-1 nova_compute[225855]: 2026-01-20 14:43:39.362 225859 DEBUG nova.storage.rbd_utils [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] resizing rbd image d6ec5fce-44f2-4c13-b908-c45d7a919b34_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 20 14:43:39 compute-1 nova_compute[225855]: 2026-01-20 14:43:39.464 225859 DEBUG nova.objects.instance [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] Lazy-loading 'migration_context' on Instance uuid d6ec5fce-44f2-4c13-b908-c45d7a919b34 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:43:39 compute-1 nova_compute[225855]: 2026-01-20 14:43:39.483 225859 DEBUG nova.virt.libvirt.driver [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] [instance: d6ec5fce-44f2-4c13-b908-c45d7a919b34] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 20 14:43:39 compute-1 nova_compute[225855]: 2026-01-20 14:43:39.484 225859 DEBUG nova.virt.libvirt.driver [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] [instance: d6ec5fce-44f2-4c13-b908-c45d7a919b34] Ensure instance console log exists: /var/lib/nova/instances/d6ec5fce-44f2-4c13-b908-c45d7a919b34/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 20 14:43:39 compute-1 nova_compute[225855]: 2026-01-20 14:43:39.484 225859 DEBUG oslo_concurrency.lockutils [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:43:39 compute-1 nova_compute[225855]: 2026-01-20 14:43:39.485 225859 DEBUG oslo_concurrency.lockutils [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:43:39 compute-1 nova_compute[225855]: 2026-01-20 14:43:39.485 225859 DEBUG oslo_concurrency.lockutils [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:43:39 compute-1 ceph-mon[81775]: pgmap v1659: 321 pgs: 321 active+clean; 92 MiB data, 694 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 431 KiB/s wr, 81 op/s
Jan 20 14:43:40 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:43:40 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:43:40 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:43:40.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:43:40 compute-1 nova_compute[225855]: 2026-01-20 14:43:40.315 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:43:41 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:43:41 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:43:41 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:43:41.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:43:41 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:43:41 compute-1 ceph-mon[81775]: pgmap v1660: 321 pgs: 321 active+clean; 176 MiB data, 738 MiB used, 20 GiB / 21 GiB avail; 1.0 MiB/s rd, 4.1 MiB/s wr, 80 op/s
Jan 20 14:43:42 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:43:42 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:43:42 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:43:42.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:43:42 compute-1 nova_compute[225855]: 2026-01-20 14:43:42.875 225859 DEBUG nova.network.neutron [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] [instance: d6ec5fce-44f2-4c13-b908-c45d7a919b34] Successfully created port: 69c1a502-414e-4ca7-9aec-488bbb6170b2 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 20 14:43:43 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:43:43 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:43:43 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:43:43.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:43:43 compute-1 nova_compute[225855]: 2026-01-20 14:43:43.589 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:43:43 compute-1 ceph-mon[81775]: pgmap v1661: 321 pgs: 321 active+clean; 235 MiB data, 766 MiB used, 20 GiB / 21 GiB avail; 686 KiB/s rd, 6.4 MiB/s wr, 107 op/s
Jan 20 14:43:44 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:43:44 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:43:44 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:43:44.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:43:45 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:43:45 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:43:45 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:43:45.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:43:45 compute-1 nova_compute[225855]: 2026-01-20 14:43:45.317 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:43:45 compute-1 nova_compute[225855]: 2026-01-20 14:43:45.405 225859 DEBUG nova.network.neutron [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] [instance: d6ec5fce-44f2-4c13-b908-c45d7a919b34] Successfully updated port: 69c1a502-414e-4ca7-9aec-488bbb6170b2 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 20 14:43:45 compute-1 nova_compute[225855]: 2026-01-20 14:43:45.433 225859 DEBUG oslo_concurrency.lockutils [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] Acquiring lock "refresh_cache-d6ec5fce-44f2-4c13-b908-c45d7a919b34" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:43:45 compute-1 nova_compute[225855]: 2026-01-20 14:43:45.433 225859 DEBUG oslo_concurrency.lockutils [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] Acquired lock "refresh_cache-d6ec5fce-44f2-4c13-b908-c45d7a919b34" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:43:45 compute-1 nova_compute[225855]: 2026-01-20 14:43:45.434 225859 DEBUG nova.network.neutron [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] [instance: d6ec5fce-44f2-4c13-b908-c45d7a919b34] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 14:43:45 compute-1 nova_compute[225855]: 2026-01-20 14:43:45.881 225859 DEBUG nova.compute.manager [req-fd821a2b-dde3-404f-ae5f-005f57bcf5e6 req-c9a1bcdb-f1c7-4d63-b82e-af793ac06f1a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d6ec5fce-44f2-4c13-b908-c45d7a919b34] Received event network-changed-69c1a502-414e-4ca7-9aec-488bbb6170b2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:43:45 compute-1 nova_compute[225855]: 2026-01-20 14:43:45.882 225859 DEBUG nova.compute.manager [req-fd821a2b-dde3-404f-ae5f-005f57bcf5e6 req-c9a1bcdb-f1c7-4d63-b82e-af793ac06f1a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d6ec5fce-44f2-4c13-b908-c45d7a919b34] Refreshing instance network info cache due to event network-changed-69c1a502-414e-4ca7-9aec-488bbb6170b2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 14:43:45 compute-1 nova_compute[225855]: 2026-01-20 14:43:45.882 225859 DEBUG oslo_concurrency.lockutils [req-fd821a2b-dde3-404f-ae5f-005f57bcf5e6 req-c9a1bcdb-f1c7-4d63-b82e-af793ac06f1a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-d6ec5fce-44f2-4c13-b908-c45d7a919b34" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:43:45 compute-1 nova_compute[225855]: 2026-01-20 14:43:45.962 225859 DEBUG nova.network.neutron [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] [instance: d6ec5fce-44f2-4c13-b908-c45d7a919b34] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 20 14:43:46 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/4047181053' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:43:46 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:43:46 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:43:46 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:43:46.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:43:46 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:43:46 compute-1 sudo[257780]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:43:46 compute-1 sudo[257780]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:43:46 compute-1 sudo[257780]: pam_unix(sudo:session): session closed for user root
Jan 20 14:43:46 compute-1 sudo[257805]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:43:46 compute-1 sudo[257805]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:43:46 compute-1 sudo[257805]: pam_unix(sudo:session): session closed for user root
Jan 20 14:43:47 compute-1 ceph-mon[81775]: pgmap v1662: 321 pgs: 321 active+clean; 260 MiB data, 780 MiB used, 20 GiB / 21 GiB avail; 443 KiB/s rd, 7.4 MiB/s wr, 148 op/s
Jan 20 14:43:47 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/2985602347' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:43:47 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/3855425524' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:43:47 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/1131127473' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:43:47 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:43:47 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:43:47 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:43:47.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:43:47 compute-1 nova_compute[225855]: 2026-01-20 14:43:47.867 225859 DEBUG nova.network.neutron [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] [instance: d6ec5fce-44f2-4c13-b908-c45d7a919b34] Updating instance_info_cache with network_info: [{"id": "69c1a502-414e-4ca7-9aec-488bbb6170b2", "address": "fa:16:3e:18:6f:89", "network": {"id": "53d0b281-776f-4682-8aaf-098e1d364008", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1516883251-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28a523cfe06042ff96554913a78e1e3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap69c1a502-41", "ovs_interfaceid": "69c1a502-414e-4ca7-9aec-488bbb6170b2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:43:47 compute-1 nova_compute[225855]: 2026-01-20 14:43:47.895 225859 DEBUG oslo_concurrency.lockutils [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] Releasing lock "refresh_cache-d6ec5fce-44f2-4c13-b908-c45d7a919b34" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:43:47 compute-1 nova_compute[225855]: 2026-01-20 14:43:47.896 225859 DEBUG nova.compute.manager [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] [instance: d6ec5fce-44f2-4c13-b908-c45d7a919b34] Instance network_info: |[{"id": "69c1a502-414e-4ca7-9aec-488bbb6170b2", "address": "fa:16:3e:18:6f:89", "network": {"id": "53d0b281-776f-4682-8aaf-098e1d364008", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1516883251-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28a523cfe06042ff96554913a78e1e3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap69c1a502-41", "ovs_interfaceid": "69c1a502-414e-4ca7-9aec-488bbb6170b2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 20 14:43:47 compute-1 nova_compute[225855]: 2026-01-20 14:43:47.897 225859 DEBUG oslo_concurrency.lockutils [req-fd821a2b-dde3-404f-ae5f-005f57bcf5e6 req-c9a1bcdb-f1c7-4d63-b82e-af793ac06f1a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-d6ec5fce-44f2-4c13-b908-c45d7a919b34" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:43:47 compute-1 nova_compute[225855]: 2026-01-20 14:43:47.897 225859 DEBUG nova.network.neutron [req-fd821a2b-dde3-404f-ae5f-005f57bcf5e6 req-c9a1bcdb-f1c7-4d63-b82e-af793ac06f1a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d6ec5fce-44f2-4c13-b908-c45d7a919b34] Refreshing network info cache for port 69c1a502-414e-4ca7-9aec-488bbb6170b2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 14:43:47 compute-1 nova_compute[225855]: 2026-01-20 14:43:47.902 225859 DEBUG nova.virt.libvirt.driver [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] [instance: d6ec5fce-44f2-4c13-b908-c45d7a919b34] Start _get_guest_xml network_info=[{"id": "69c1a502-414e-4ca7-9aec-488bbb6170b2", "address": "fa:16:3e:18:6f:89", "network": {"id": "53d0b281-776f-4682-8aaf-098e1d364008", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1516883251-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28a523cfe06042ff96554913a78e1e3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap69c1a502-41", "ovs_interfaceid": "69c1a502-414e-4ca7-9aec-488bbb6170b2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'device_type': 'disk', 'encryption_options': None, 'size': 0, 'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'guest_format': None, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'image_id': 'a32b3e07-16d8-46fd-9a7b-c242c432fcf9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 20 14:43:47 compute-1 nova_compute[225855]: 2026-01-20 14:43:47.906 225859 WARNING nova.virt.libvirt.driver [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 14:43:47 compute-1 nova_compute[225855]: 2026-01-20 14:43:47.911 225859 DEBUG nova.virt.libvirt.host [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 20 14:43:47 compute-1 nova_compute[225855]: 2026-01-20 14:43:47.912 225859 DEBUG nova.virt.libvirt.host [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 20 14:43:47 compute-1 nova_compute[225855]: 2026-01-20 14:43:47.915 225859 DEBUG nova.virt.libvirt.host [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 20 14:43:47 compute-1 nova_compute[225855]: 2026-01-20 14:43:47.915 225859 DEBUG nova.virt.libvirt.host [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 20 14:43:47 compute-1 nova_compute[225855]: 2026-01-20 14:43:47.917 225859 DEBUG nova.virt.libvirt.driver [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 20 14:43:47 compute-1 nova_compute[225855]: 2026-01-20 14:43:47.918 225859 DEBUG nova.virt.hardware [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-20T14:21:55Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='522deaab-a741-4dbb-932d-d8b13a211c33',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 20 14:43:47 compute-1 nova_compute[225855]: 2026-01-20 14:43:47.919 225859 DEBUG nova.virt.hardware [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 20 14:43:47 compute-1 nova_compute[225855]: 2026-01-20 14:43:47.919 225859 DEBUG nova.virt.hardware [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 20 14:43:47 compute-1 nova_compute[225855]: 2026-01-20 14:43:47.920 225859 DEBUG nova.virt.hardware [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 20 14:43:47 compute-1 nova_compute[225855]: 2026-01-20 14:43:47.920 225859 DEBUG nova.virt.hardware [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 20 14:43:47 compute-1 nova_compute[225855]: 2026-01-20 14:43:47.921 225859 DEBUG nova.virt.hardware [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 20 14:43:47 compute-1 nova_compute[225855]: 2026-01-20 14:43:47.921 225859 DEBUG nova.virt.hardware [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 20 14:43:47 compute-1 nova_compute[225855]: 2026-01-20 14:43:47.922 225859 DEBUG nova.virt.hardware [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 20 14:43:47 compute-1 nova_compute[225855]: 2026-01-20 14:43:47.922 225859 DEBUG nova.virt.hardware [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 20 14:43:47 compute-1 nova_compute[225855]: 2026-01-20 14:43:47.923 225859 DEBUG nova.virt.hardware [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 20 14:43:47 compute-1 nova_compute[225855]: 2026-01-20 14:43:47.923 225859 DEBUG nova.virt.hardware [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 20 14:43:47 compute-1 nova_compute[225855]: 2026-01-20 14:43:47.928 225859 DEBUG oslo_concurrency.processutils [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:43:48 compute-1 ceph-mon[81775]: pgmap v1663: 321 pgs: 321 active+clean; 260 MiB data, 780 MiB used, 20 GiB / 21 GiB avail; 443 KiB/s rd, 7.4 MiB/s wr, 148 op/s
Jan 20 14:43:48 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:43:48 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:43:48 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:43:48.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:43:48 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:43:48 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2437304830' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:43:48 compute-1 nova_compute[225855]: 2026-01-20 14:43:48.440 225859 DEBUG oslo_concurrency.processutils [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.511s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:43:48 compute-1 nova_compute[225855]: 2026-01-20 14:43:48.469 225859 DEBUG nova.storage.rbd_utils [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] rbd image d6ec5fce-44f2-4c13-b908-c45d7a919b34_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:43:48 compute-1 nova_compute[225855]: 2026-01-20 14:43:48.474 225859 DEBUG oslo_concurrency.processutils [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:43:48 compute-1 nova_compute[225855]: 2026-01-20 14:43:48.591 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:43:48 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:43:48 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3926692694' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:43:48 compute-1 nova_compute[225855]: 2026-01-20 14:43:48.958 225859 DEBUG oslo_concurrency.processutils [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:43:48 compute-1 nova_compute[225855]: 2026-01-20 14:43:48.960 225859 DEBUG nova.virt.libvirt.vif [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T14:43:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-1818827013',display_name='tempest-ListServersNegativeTestJSON-server-1818827013-2',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-1818827013-2',id=84,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=1,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='28a523cfe06042ff96554913a78e1e3a',ramdisk_id='',reservation_id='r-s8qopxwt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServersNegativeTestJSON-1080060493',owner_user_name='tempest-ListServersNegativeTestJSON-1080060493-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T14:43:38Z,user_data=None,user_id='2975742546164cad937d13671d17108a',uuid=d6ec5fce-44f2-4c13-b908-c45d7a919b34,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "69c1a502-414e-4ca7-9aec-488bbb6170b2", "address": "fa:16:3e:18:6f:89", "network": {"id": "53d0b281-776f-4682-8aaf-098e1d364008", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1516883251-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28a523cfe06042ff96554913a78e1e3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap69c1a502-41", "ovs_interfaceid": "69c1a502-414e-4ca7-9aec-488bbb6170b2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 20 14:43:48 compute-1 nova_compute[225855]: 2026-01-20 14:43:48.961 225859 DEBUG nova.network.os_vif_util [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] Converting VIF {"id": "69c1a502-414e-4ca7-9aec-488bbb6170b2", "address": "fa:16:3e:18:6f:89", "network": {"id": "53d0b281-776f-4682-8aaf-098e1d364008", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1516883251-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28a523cfe06042ff96554913a78e1e3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap69c1a502-41", "ovs_interfaceid": "69c1a502-414e-4ca7-9aec-488bbb6170b2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:43:48 compute-1 nova_compute[225855]: 2026-01-20 14:43:48.962 225859 DEBUG nova.network.os_vif_util [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:18:6f:89,bridge_name='br-int',has_traffic_filtering=True,id=69c1a502-414e-4ca7-9aec-488bbb6170b2,network=Network(53d0b281-776f-4682-8aaf-098e1d364008),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap69c1a502-41') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:43:48 compute-1 nova_compute[225855]: 2026-01-20 14:43:48.963 225859 DEBUG nova.objects.instance [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] Lazy-loading 'pci_devices' on Instance uuid d6ec5fce-44f2-4c13-b908-c45d7a919b34 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:43:48 compute-1 nova_compute[225855]: 2026-01-20 14:43:48.996 225859 DEBUG nova.virt.libvirt.driver [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] [instance: d6ec5fce-44f2-4c13-b908-c45d7a919b34] End _get_guest_xml xml=<domain type="kvm">
Jan 20 14:43:48 compute-1 nova_compute[225855]:   <uuid>d6ec5fce-44f2-4c13-b908-c45d7a919b34</uuid>
Jan 20 14:43:48 compute-1 nova_compute[225855]:   <name>instance-00000054</name>
Jan 20 14:43:48 compute-1 nova_compute[225855]:   <memory>131072</memory>
Jan 20 14:43:48 compute-1 nova_compute[225855]:   <vcpu>1</vcpu>
Jan 20 14:43:48 compute-1 nova_compute[225855]:   <metadata>
Jan 20 14:43:48 compute-1 nova_compute[225855]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 14:43:48 compute-1 nova_compute[225855]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 14:43:48 compute-1 nova_compute[225855]:       <nova:name>tempest-ListServersNegativeTestJSON-server-1818827013-2</nova:name>
Jan 20 14:43:48 compute-1 nova_compute[225855]:       <nova:creationTime>2026-01-20 14:43:47</nova:creationTime>
Jan 20 14:43:48 compute-1 nova_compute[225855]:       <nova:flavor name="m1.nano">
Jan 20 14:43:48 compute-1 nova_compute[225855]:         <nova:memory>128</nova:memory>
Jan 20 14:43:48 compute-1 nova_compute[225855]:         <nova:disk>1</nova:disk>
Jan 20 14:43:48 compute-1 nova_compute[225855]:         <nova:swap>0</nova:swap>
Jan 20 14:43:48 compute-1 nova_compute[225855]:         <nova:ephemeral>0</nova:ephemeral>
Jan 20 14:43:48 compute-1 nova_compute[225855]:         <nova:vcpus>1</nova:vcpus>
Jan 20 14:43:48 compute-1 nova_compute[225855]:       </nova:flavor>
Jan 20 14:43:48 compute-1 nova_compute[225855]:       <nova:owner>
Jan 20 14:43:48 compute-1 nova_compute[225855]:         <nova:user uuid="2975742546164cad937d13671d17108a">tempest-ListServersNegativeTestJSON-1080060493-project-member</nova:user>
Jan 20 14:43:48 compute-1 nova_compute[225855]:         <nova:project uuid="28a523cfe06042ff96554913a78e1e3a">tempest-ListServersNegativeTestJSON-1080060493</nova:project>
Jan 20 14:43:48 compute-1 nova_compute[225855]:       </nova:owner>
Jan 20 14:43:48 compute-1 nova_compute[225855]:       <nova:root type="image" uuid="a32b3e07-16d8-46fd-9a7b-c242c432fcf9"/>
Jan 20 14:43:48 compute-1 nova_compute[225855]:       <nova:ports>
Jan 20 14:43:48 compute-1 nova_compute[225855]:         <nova:port uuid="69c1a502-414e-4ca7-9aec-488bbb6170b2">
Jan 20 14:43:48 compute-1 nova_compute[225855]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Jan 20 14:43:48 compute-1 nova_compute[225855]:         </nova:port>
Jan 20 14:43:48 compute-1 nova_compute[225855]:       </nova:ports>
Jan 20 14:43:48 compute-1 nova_compute[225855]:     </nova:instance>
Jan 20 14:43:48 compute-1 nova_compute[225855]:   </metadata>
Jan 20 14:43:48 compute-1 nova_compute[225855]:   <sysinfo type="smbios">
Jan 20 14:43:48 compute-1 nova_compute[225855]:     <system>
Jan 20 14:43:48 compute-1 nova_compute[225855]:       <entry name="manufacturer">RDO</entry>
Jan 20 14:43:48 compute-1 nova_compute[225855]:       <entry name="product">OpenStack Compute</entry>
Jan 20 14:43:48 compute-1 nova_compute[225855]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 14:43:48 compute-1 nova_compute[225855]:       <entry name="serial">d6ec5fce-44f2-4c13-b908-c45d7a919b34</entry>
Jan 20 14:43:48 compute-1 nova_compute[225855]:       <entry name="uuid">d6ec5fce-44f2-4c13-b908-c45d7a919b34</entry>
Jan 20 14:43:48 compute-1 nova_compute[225855]:       <entry name="family">Virtual Machine</entry>
Jan 20 14:43:48 compute-1 nova_compute[225855]:     </system>
Jan 20 14:43:48 compute-1 nova_compute[225855]:   </sysinfo>
Jan 20 14:43:48 compute-1 nova_compute[225855]:   <os>
Jan 20 14:43:48 compute-1 nova_compute[225855]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 20 14:43:48 compute-1 nova_compute[225855]:     <boot dev="hd"/>
Jan 20 14:43:48 compute-1 nova_compute[225855]:     <smbios mode="sysinfo"/>
Jan 20 14:43:48 compute-1 nova_compute[225855]:   </os>
Jan 20 14:43:48 compute-1 nova_compute[225855]:   <features>
Jan 20 14:43:48 compute-1 nova_compute[225855]:     <acpi/>
Jan 20 14:43:48 compute-1 nova_compute[225855]:     <apic/>
Jan 20 14:43:48 compute-1 nova_compute[225855]:     <vmcoreinfo/>
Jan 20 14:43:48 compute-1 nova_compute[225855]:   </features>
Jan 20 14:43:48 compute-1 nova_compute[225855]:   <clock offset="utc">
Jan 20 14:43:48 compute-1 nova_compute[225855]:     <timer name="pit" tickpolicy="delay"/>
Jan 20 14:43:48 compute-1 nova_compute[225855]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 20 14:43:48 compute-1 nova_compute[225855]:     <timer name="hpet" present="no"/>
Jan 20 14:43:48 compute-1 nova_compute[225855]:   </clock>
Jan 20 14:43:48 compute-1 nova_compute[225855]:   <cpu mode="custom" match="exact">
Jan 20 14:43:48 compute-1 nova_compute[225855]:     <model>Nehalem</model>
Jan 20 14:43:48 compute-1 nova_compute[225855]:     <topology sockets="1" cores="1" threads="1"/>
Jan 20 14:43:48 compute-1 nova_compute[225855]:   </cpu>
Jan 20 14:43:48 compute-1 nova_compute[225855]:   <devices>
Jan 20 14:43:48 compute-1 nova_compute[225855]:     <disk type="network" device="disk">
Jan 20 14:43:48 compute-1 nova_compute[225855]:       <driver type="raw" cache="none"/>
Jan 20 14:43:48 compute-1 nova_compute[225855]:       <source protocol="rbd" name="vms/d6ec5fce-44f2-4c13-b908-c45d7a919b34_disk">
Jan 20 14:43:48 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:43:48 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:43:48 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:43:48 compute-1 nova_compute[225855]:       </source>
Jan 20 14:43:48 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 14:43:48 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:43:48 compute-1 nova_compute[225855]:       </auth>
Jan 20 14:43:48 compute-1 nova_compute[225855]:       <target dev="vda" bus="virtio"/>
Jan 20 14:43:48 compute-1 nova_compute[225855]:     </disk>
Jan 20 14:43:48 compute-1 nova_compute[225855]:     <disk type="network" device="cdrom">
Jan 20 14:43:48 compute-1 nova_compute[225855]:       <driver type="raw" cache="none"/>
Jan 20 14:43:48 compute-1 nova_compute[225855]:       <source protocol="rbd" name="vms/d6ec5fce-44f2-4c13-b908-c45d7a919b34_disk.config">
Jan 20 14:43:48 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:43:48 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:43:48 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:43:48 compute-1 nova_compute[225855]:       </source>
Jan 20 14:43:48 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 14:43:48 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:43:48 compute-1 nova_compute[225855]:       </auth>
Jan 20 14:43:48 compute-1 nova_compute[225855]:       <target dev="sda" bus="sata"/>
Jan 20 14:43:48 compute-1 nova_compute[225855]:     </disk>
Jan 20 14:43:48 compute-1 nova_compute[225855]:     <interface type="ethernet">
Jan 20 14:43:48 compute-1 nova_compute[225855]:       <mac address="fa:16:3e:18:6f:89"/>
Jan 20 14:43:48 compute-1 nova_compute[225855]:       <model type="virtio"/>
Jan 20 14:43:48 compute-1 nova_compute[225855]:       <driver name="vhost" rx_queue_size="512"/>
Jan 20 14:43:48 compute-1 nova_compute[225855]:       <mtu size="1442"/>
Jan 20 14:43:48 compute-1 nova_compute[225855]:       <target dev="tap69c1a502-41"/>
Jan 20 14:43:48 compute-1 nova_compute[225855]:     </interface>
Jan 20 14:43:48 compute-1 nova_compute[225855]:     <serial type="pty">
Jan 20 14:43:48 compute-1 nova_compute[225855]:       <log file="/var/lib/nova/instances/d6ec5fce-44f2-4c13-b908-c45d7a919b34/console.log" append="off"/>
Jan 20 14:43:48 compute-1 nova_compute[225855]:     </serial>
Jan 20 14:43:48 compute-1 nova_compute[225855]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 20 14:43:48 compute-1 nova_compute[225855]:     <video>
Jan 20 14:43:48 compute-1 nova_compute[225855]:       <model type="virtio"/>
Jan 20 14:43:48 compute-1 nova_compute[225855]:     </video>
Jan 20 14:43:48 compute-1 nova_compute[225855]:     <input type="tablet" bus="usb"/>
Jan 20 14:43:48 compute-1 nova_compute[225855]:     <rng model="virtio">
Jan 20 14:43:48 compute-1 nova_compute[225855]:       <backend model="random">/dev/urandom</backend>
Jan 20 14:43:48 compute-1 nova_compute[225855]:     </rng>
Jan 20 14:43:48 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root"/>
Jan 20 14:43:48 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:43:48 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:43:48 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:43:48 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:43:48 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:43:48 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:43:48 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:43:48 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:43:48 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:43:48 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:43:48 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:43:48 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:43:48 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:43:48 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:43:48 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:43:48 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:43:48 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:43:48 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:43:48 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:43:48 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:43:48 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:43:48 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:43:48 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:43:48 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:43:48 compute-1 nova_compute[225855]:     <controller type="usb" index="0"/>
Jan 20 14:43:48 compute-1 nova_compute[225855]:     <memballoon model="virtio">
Jan 20 14:43:48 compute-1 nova_compute[225855]:       <stats period="10"/>
Jan 20 14:43:48 compute-1 nova_compute[225855]:     </memballoon>
Jan 20 14:43:48 compute-1 nova_compute[225855]:   </devices>
Jan 20 14:43:48 compute-1 nova_compute[225855]: </domain>
Jan 20 14:43:48 compute-1 nova_compute[225855]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 20 14:43:48 compute-1 nova_compute[225855]: 2026-01-20 14:43:48.997 225859 DEBUG nova.compute.manager [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] [instance: d6ec5fce-44f2-4c13-b908-c45d7a919b34] Preparing to wait for external event network-vif-plugged-69c1a502-414e-4ca7-9aec-488bbb6170b2 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 20 14:43:48 compute-1 nova_compute[225855]: 2026-01-20 14:43:48.998 225859 DEBUG oslo_concurrency.lockutils [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] Acquiring lock "d6ec5fce-44f2-4c13-b908-c45d7a919b34-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:43:48 compute-1 nova_compute[225855]: 2026-01-20 14:43:48.998 225859 DEBUG oslo_concurrency.lockutils [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] Lock "d6ec5fce-44f2-4c13-b908-c45d7a919b34-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:43:48 compute-1 nova_compute[225855]: 2026-01-20 14:43:48.998 225859 DEBUG oslo_concurrency.lockutils [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] Lock "d6ec5fce-44f2-4c13-b908-c45d7a919b34-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:43:48 compute-1 nova_compute[225855]: 2026-01-20 14:43:48.999 225859 DEBUG nova.virt.libvirt.vif [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T14:43:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-1818827013',display_name='tempest-ListServersNegativeTestJSON-server-1818827013-2',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-1818827013-2',id=84,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=1,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='28a523cfe06042ff96554913a78e1e3a',ramdisk_id='',reservation_id='r-s8qopxwt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServersNegativeTestJSON-1080060493',owner_user_name='tempest-ListServersNegativeTestJSON-1080060493-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T14:43:38Z,user_data=None,user_id='2975742546164cad937d13671d17108a',uuid=d6ec5fce-44f2-4c13-b908-c45d7a919b34,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "69c1a502-414e-4ca7-9aec-488bbb6170b2", "address": "fa:16:3e:18:6f:89", "network": {"id": "53d0b281-776f-4682-8aaf-098e1d364008", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1516883251-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28a523cfe06042ff96554913a78e1e3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap69c1a502-41", "ovs_interfaceid": "69c1a502-414e-4ca7-9aec-488bbb6170b2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 20 14:43:48 compute-1 nova_compute[225855]: 2026-01-20 14:43:48.999 225859 DEBUG nova.network.os_vif_util [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] Converting VIF {"id": "69c1a502-414e-4ca7-9aec-488bbb6170b2", "address": "fa:16:3e:18:6f:89", "network": {"id": "53d0b281-776f-4682-8aaf-098e1d364008", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1516883251-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28a523cfe06042ff96554913a78e1e3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap69c1a502-41", "ovs_interfaceid": "69c1a502-414e-4ca7-9aec-488bbb6170b2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:43:49 compute-1 nova_compute[225855]: 2026-01-20 14:43:49.000 225859 DEBUG nova.network.os_vif_util [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:18:6f:89,bridge_name='br-int',has_traffic_filtering=True,id=69c1a502-414e-4ca7-9aec-488bbb6170b2,network=Network(53d0b281-776f-4682-8aaf-098e1d364008),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap69c1a502-41') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:43:49 compute-1 nova_compute[225855]: 2026-01-20 14:43:49.000 225859 DEBUG os_vif [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:18:6f:89,bridge_name='br-int',has_traffic_filtering=True,id=69c1a502-414e-4ca7-9aec-488bbb6170b2,network=Network(53d0b281-776f-4682-8aaf-098e1d364008),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap69c1a502-41') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 20 14:43:49 compute-1 nova_compute[225855]: 2026-01-20 14:43:49.001 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:43:49 compute-1 nova_compute[225855]: 2026-01-20 14:43:49.001 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:43:49 compute-1 nova_compute[225855]: 2026-01-20 14:43:49.001 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:43:49 compute-1 nova_compute[225855]: 2026-01-20 14:43:49.005 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:43:49 compute-1 nova_compute[225855]: 2026-01-20 14:43:49.005 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap69c1a502-41, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:43:49 compute-1 nova_compute[225855]: 2026-01-20 14:43:49.006 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap69c1a502-41, col_values=(('external_ids', {'iface-id': '69c1a502-414e-4ca7-9aec-488bbb6170b2', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:18:6f:89', 'vm-uuid': 'd6ec5fce-44f2-4c13-b908-c45d7a919b34'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:43:49 compute-1 nova_compute[225855]: 2026-01-20 14:43:49.007 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:43:49 compute-1 NetworkManager[49104]: <info>  [1768920229.0086] manager: (tap69c1a502-41): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/125)
Jan 20 14:43:49 compute-1 nova_compute[225855]: 2026-01-20 14:43:49.010 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 14:43:49 compute-1 nova_compute[225855]: 2026-01-20 14:43:49.014 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:43:49 compute-1 nova_compute[225855]: 2026-01-20 14:43:49.015 225859 INFO os_vif [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:18:6f:89,bridge_name='br-int',has_traffic_filtering=True,id=69c1a502-414e-4ca7-9aec-488bbb6170b2,network=Network(53d0b281-776f-4682-8aaf-098e1d364008),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap69c1a502-41')
Jan 20 14:43:49 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/2437304830' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:43:49 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/3926692694' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:43:49 compute-1 nova_compute[225855]: 2026-01-20 14:43:49.137 225859 DEBUG nova.virt.libvirt.driver [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:43:49 compute-1 nova_compute[225855]: 2026-01-20 14:43:49.138 225859 DEBUG nova.virt.libvirt.driver [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:43:49 compute-1 nova_compute[225855]: 2026-01-20 14:43:49.138 225859 DEBUG nova.virt.libvirt.driver [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] No VIF found with MAC fa:16:3e:18:6f:89, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 20 14:43:49 compute-1 nova_compute[225855]: 2026-01-20 14:43:49.138 225859 INFO nova.virt.libvirt.driver [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] [instance: d6ec5fce-44f2-4c13-b908-c45d7a919b34] Using config drive
Jan 20 14:43:49 compute-1 nova_compute[225855]: 2026-01-20 14:43:49.161 225859 DEBUG nova.storage.rbd_utils [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] rbd image d6ec5fce-44f2-4c13-b908-c45d7a919b34_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:43:49 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:43:49 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:43:49 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:43:49.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:43:49 compute-1 nova_compute[225855]: 2026-01-20 14:43:49.881 225859 INFO nova.virt.libvirt.driver [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] [instance: d6ec5fce-44f2-4c13-b908-c45d7a919b34] Creating config drive at /var/lib/nova/instances/d6ec5fce-44f2-4c13-b908-c45d7a919b34/disk.config
Jan 20 14:43:49 compute-1 nova_compute[225855]: 2026-01-20 14:43:49.885 225859 DEBUG oslo_concurrency.processutils [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d6ec5fce-44f2-4c13-b908-c45d7a919b34/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmph3e9ufv0 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:43:50 compute-1 nova_compute[225855]: 2026-01-20 14:43:50.018 225859 DEBUG oslo_concurrency.processutils [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d6ec5fce-44f2-4c13-b908-c45d7a919b34/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmph3e9ufv0" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:43:50 compute-1 nova_compute[225855]: 2026-01-20 14:43:50.050 225859 DEBUG nova.storage.rbd_utils [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] rbd image d6ec5fce-44f2-4c13-b908-c45d7a919b34_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:43:50 compute-1 nova_compute[225855]: 2026-01-20 14:43:50.054 225859 DEBUG oslo_concurrency.processutils [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d6ec5fce-44f2-4c13-b908-c45d7a919b34/disk.config d6ec5fce-44f2-4c13-b908-c45d7a919b34_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:43:50 compute-1 ceph-mon[81775]: pgmap v1664: 321 pgs: 321 active+clean; 260 MiB data, 780 MiB used, 20 GiB / 21 GiB avail; 445 KiB/s rd, 7.5 MiB/s wr, 151 op/s
Jan 20 14:43:50 compute-1 nova_compute[225855]: 2026-01-20 14:43:50.241 225859 DEBUG oslo_concurrency.processutils [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d6ec5fce-44f2-4c13-b908-c45d7a919b34/disk.config d6ec5fce-44f2-4c13-b908-c45d7a919b34_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.188s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:43:50 compute-1 nova_compute[225855]: 2026-01-20 14:43:50.242 225859 INFO nova.virt.libvirt.driver [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] [instance: d6ec5fce-44f2-4c13-b908-c45d7a919b34] Deleting local config drive /var/lib/nova/instances/d6ec5fce-44f2-4c13-b908-c45d7a919b34/disk.config because it was imported into RBD.
Jan 20 14:43:50 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:43:50 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:43:50 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:43:50.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:43:50 compute-1 kernel: tap69c1a502-41: entered promiscuous mode
Jan 20 14:43:50 compute-1 NetworkManager[49104]: <info>  [1768920230.2968] manager: (tap69c1a502-41): new Tun device (/org/freedesktop/NetworkManager/Devices/126)
Jan 20 14:43:50 compute-1 nova_compute[225855]: 2026-01-20 14:43:50.298 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:43:50 compute-1 ovn_controller[130490]: 2026-01-20T14:43:50Z|00291|binding|INFO|Claiming lport 69c1a502-414e-4ca7-9aec-488bbb6170b2 for this chassis.
Jan 20 14:43:50 compute-1 ovn_controller[130490]: 2026-01-20T14:43:50Z|00292|binding|INFO|69c1a502-414e-4ca7-9aec-488bbb6170b2: Claiming fa:16:3e:18:6f:89 10.100.0.10
Jan 20 14:43:50 compute-1 nova_compute[225855]: 2026-01-20 14:43:50.301 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:43:50 compute-1 nova_compute[225855]: 2026-01-20 14:43:50.304 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:43:50 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:43:50.319 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:18:6f:89 10.100.0.10'], port_security=['fa:16:3e:18:6f:89 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'd6ec5fce-44f2-4c13-b908-c45d7a919b34', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-53d0b281-776f-4682-8aaf-098e1d364008', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '28a523cfe06042ff96554913a78e1e3a', 'neutron:revision_number': '2', 'neutron:security_group_ids': '1879c269-0854-40a3-8eb9-b61f97d38545', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a1afefec-2060-4dfb-acbb-1ce14c3a663c, chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], logical_port=69c1a502-414e-4ca7-9aec-488bbb6170b2) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:43:50 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:43:50.320 140354 INFO neutron.agent.ovn.metadata.agent [-] Port 69c1a502-414e-4ca7-9aec-488bbb6170b2 in datapath 53d0b281-776f-4682-8aaf-098e1d364008 bound to our chassis
Jan 20 14:43:50 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:43:50.322 140354 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 53d0b281-776f-4682-8aaf-098e1d364008
Jan 20 14:43:50 compute-1 systemd-udevd[257965]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 14:43:50 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:43:50.332 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[6fa2f397-31d3-4423-b7ca-36604ceb07ee]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:43:50 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:43:50.333 140354 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap53d0b281-71 in ovnmeta-53d0b281-776f-4682-8aaf-098e1d364008 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 20 14:43:50 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:43:50.335 229707 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap53d0b281-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 20 14:43:50 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:43:50.335 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[47b0d722-1977-499e-87e2-67934a58fe0b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:43:50 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:43:50.336 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[bc3825b9-9fb0-4f7a-bac5-9a3b1852ad87]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:43:50 compute-1 systemd-machined[194361]: New machine qemu-35-instance-00000054.
Jan 20 14:43:50 compute-1 NetworkManager[49104]: <info>  [1768920230.3387] device (tap69c1a502-41): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 20 14:43:50 compute-1 NetworkManager[49104]: <info>  [1768920230.3393] device (tap69c1a502-41): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 20 14:43:50 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:43:50.346 140466 DEBUG oslo.privsep.daemon [-] privsep: reply[67421eb7-c009-4442-992b-c5a3f717f187]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:43:50 compute-1 systemd[1]: Started Virtual Machine qemu-35-instance-00000054.
Jan 20 14:43:50 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:43:50.371 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[a0586ac9-b515-42f4-8d80-cb32f88db466]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:43:50 compute-1 nova_compute[225855]: 2026-01-20 14:43:50.374 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:43:50 compute-1 ovn_controller[130490]: 2026-01-20T14:43:50Z|00293|binding|INFO|Setting lport 69c1a502-414e-4ca7-9aec-488bbb6170b2 ovn-installed in OVS
Jan 20 14:43:50 compute-1 ovn_controller[130490]: 2026-01-20T14:43:50Z|00294|binding|INFO|Setting lport 69c1a502-414e-4ca7-9aec-488bbb6170b2 up in Southbound
Jan 20 14:43:50 compute-1 nova_compute[225855]: 2026-01-20 14:43:50.381 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:43:50 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:43:50.400 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[af4c8832-24bd-4a9a-8ef7-cffee8933624]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:43:50 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:43:50.404 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[cd924935-f41d-45dc-aebc-f94f56ef5fc8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:43:50 compute-1 NetworkManager[49104]: <info>  [1768920230.4061] manager: (tap53d0b281-70): new Veth device (/org/freedesktop/NetworkManager/Devices/127)
Jan 20 14:43:50 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:43:50.434 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[42fe7609-6a4c-4fc0-a512-99d1c417a584]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:43:50 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:43:50.437 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[05c5c49b-0c0c-48ab-a93b-e7dfd85c8c85]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:43:50 compute-1 NetworkManager[49104]: <info>  [1768920230.4573] device (tap53d0b281-70): carrier: link connected
Jan 20 14:43:50 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:43:50.462 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[63f18615-131a-41fa-bb7b-ac79fead64ac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:43:50 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:43:50.479 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[dbf33232-2a7b-4c50-bf69-aa308c312688]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap53d0b281-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5c:be:fe'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 79], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 526938, 'reachable_time': 33076, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 258000, 'error': None, 'target': 'ovnmeta-53d0b281-776f-4682-8aaf-098e1d364008', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:43:50 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:43:50.494 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[4425525b-3a32-4538-bcf3-fd6b0eab10cd]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe5c:befe'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 526938, 'tstamp': 526938}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 258001, 'error': None, 'target': 'ovnmeta-53d0b281-776f-4682-8aaf-098e1d364008', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:43:50 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:43:50.510 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[bc0f0a20-1a82-4ae8-8643-a885fc48d301]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap53d0b281-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5c:be:fe'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 79], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 526938, 'reachable_time': 33076, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 258002, 'error': None, 'target': 'ovnmeta-53d0b281-776f-4682-8aaf-098e1d364008', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:43:50 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:43:50.536 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[23cc263e-12b8-43a1-8416-2cd513ceaed9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:43:50 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:43:50.592 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[d143ca0e-1e7c-49e1-99f5-3d6556721135]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:43:50 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:43:50.593 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap53d0b281-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:43:50 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:43:50.593 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:43:50 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:43:50.593 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap53d0b281-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:43:50 compute-1 kernel: tap53d0b281-70: entered promiscuous mode
Jan 20 14:43:50 compute-1 NetworkManager[49104]: <info>  [1768920230.5961] manager: (tap53d0b281-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/128)
Jan 20 14:43:50 compute-1 nova_compute[225855]: 2026-01-20 14:43:50.595 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:43:50 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:43:50.597 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap53d0b281-70, col_values=(('external_ids', {'iface-id': '2ea34810-4753-414f-ae43-b7b379fc432c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:43:50 compute-1 ovn_controller[130490]: 2026-01-20T14:43:50Z|00295|binding|INFO|Releasing lport 2ea34810-4753-414f-ae43-b7b379fc432c from this chassis (sb_readonly=0)
Jan 20 14:43:50 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:43:50.600 140354 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/53d0b281-776f-4682-8aaf-098e1d364008.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/53d0b281-776f-4682-8aaf-098e1d364008.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 20 14:43:50 compute-1 nova_compute[225855]: 2026-01-20 14:43:50.600 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:43:50 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:43:50.600 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[94a727af-fcc8-4d4a-910c-22db41319686]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:43:50 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:43:50.601 140354 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 20 14:43:50 compute-1 ovn_metadata_agent[140349]: global
Jan 20 14:43:50 compute-1 ovn_metadata_agent[140349]:     log         /dev/log local0 debug
Jan 20 14:43:50 compute-1 ovn_metadata_agent[140349]:     log-tag     haproxy-metadata-proxy-53d0b281-776f-4682-8aaf-098e1d364008
Jan 20 14:43:50 compute-1 ovn_metadata_agent[140349]:     user        root
Jan 20 14:43:50 compute-1 ovn_metadata_agent[140349]:     group       root
Jan 20 14:43:50 compute-1 ovn_metadata_agent[140349]:     maxconn     1024
Jan 20 14:43:50 compute-1 ovn_metadata_agent[140349]:     pidfile     /var/lib/neutron/external/pids/53d0b281-776f-4682-8aaf-098e1d364008.pid.haproxy
Jan 20 14:43:50 compute-1 ovn_metadata_agent[140349]:     daemon
Jan 20 14:43:50 compute-1 ovn_metadata_agent[140349]: 
Jan 20 14:43:50 compute-1 ovn_metadata_agent[140349]: defaults
Jan 20 14:43:50 compute-1 ovn_metadata_agent[140349]:     log global
Jan 20 14:43:50 compute-1 ovn_metadata_agent[140349]:     mode http
Jan 20 14:43:50 compute-1 ovn_metadata_agent[140349]:     option httplog
Jan 20 14:43:50 compute-1 ovn_metadata_agent[140349]:     option dontlognull
Jan 20 14:43:50 compute-1 ovn_metadata_agent[140349]:     option http-server-close
Jan 20 14:43:50 compute-1 ovn_metadata_agent[140349]:     option forwardfor
Jan 20 14:43:50 compute-1 ovn_metadata_agent[140349]:     retries                 3
Jan 20 14:43:50 compute-1 ovn_metadata_agent[140349]:     timeout http-request    30s
Jan 20 14:43:50 compute-1 ovn_metadata_agent[140349]:     timeout connect         30s
Jan 20 14:43:50 compute-1 ovn_metadata_agent[140349]:     timeout client          32s
Jan 20 14:43:50 compute-1 ovn_metadata_agent[140349]:     timeout server          32s
Jan 20 14:43:50 compute-1 ovn_metadata_agent[140349]:     timeout http-keep-alive 30s
Jan 20 14:43:50 compute-1 ovn_metadata_agent[140349]: 
Jan 20 14:43:50 compute-1 ovn_metadata_agent[140349]: 
Jan 20 14:43:50 compute-1 ovn_metadata_agent[140349]: listen listener
Jan 20 14:43:50 compute-1 ovn_metadata_agent[140349]:     bind 169.254.169.254:80
Jan 20 14:43:50 compute-1 ovn_metadata_agent[140349]:     server metadata /var/lib/neutron/metadata_proxy
Jan 20 14:43:50 compute-1 ovn_metadata_agent[140349]:     http-request add-header X-OVN-Network-ID 53d0b281-776f-4682-8aaf-098e1d364008
Jan 20 14:43:50 compute-1 ovn_metadata_agent[140349]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 20 14:43:50 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:43:50.602 140354 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-53d0b281-776f-4682-8aaf-098e1d364008', 'env', 'PROCESS_TAG=haproxy-53d0b281-776f-4682-8aaf-098e1d364008', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/53d0b281-776f-4682-8aaf-098e1d364008.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 20 14:43:50 compute-1 nova_compute[225855]: 2026-01-20 14:43:50.614 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:43:50 compute-1 nova_compute[225855]: 2026-01-20 14:43:50.728 225859 DEBUG nova.compute.manager [req-32e2c550-f1f1-4bd4-99cd-c920efb2609d req-1b5073b9-ee96-466f-9e4a-7d0c87398e76 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d6ec5fce-44f2-4c13-b908-c45d7a919b34] Received event network-vif-plugged-69c1a502-414e-4ca7-9aec-488bbb6170b2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:43:50 compute-1 nova_compute[225855]: 2026-01-20 14:43:50.729 225859 DEBUG oslo_concurrency.lockutils [req-32e2c550-f1f1-4bd4-99cd-c920efb2609d req-1b5073b9-ee96-466f-9e4a-7d0c87398e76 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "d6ec5fce-44f2-4c13-b908-c45d7a919b34-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:43:50 compute-1 nova_compute[225855]: 2026-01-20 14:43:50.729 225859 DEBUG oslo_concurrency.lockutils [req-32e2c550-f1f1-4bd4-99cd-c920efb2609d req-1b5073b9-ee96-466f-9e4a-7d0c87398e76 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "d6ec5fce-44f2-4c13-b908-c45d7a919b34-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:43:50 compute-1 nova_compute[225855]: 2026-01-20 14:43:50.729 225859 DEBUG oslo_concurrency.lockutils [req-32e2c550-f1f1-4bd4-99cd-c920efb2609d req-1b5073b9-ee96-466f-9e4a-7d0c87398e76 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "d6ec5fce-44f2-4c13-b908-c45d7a919b34-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:43:50 compute-1 nova_compute[225855]: 2026-01-20 14:43:50.730 225859 DEBUG nova.compute.manager [req-32e2c550-f1f1-4bd4-99cd-c920efb2609d req-1b5073b9-ee96-466f-9e4a-7d0c87398e76 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d6ec5fce-44f2-4c13-b908-c45d7a919b34] Processing event network-vif-plugged-69c1a502-414e-4ca7-9aec-488bbb6170b2 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 20 14:43:50 compute-1 nova_compute[225855]: 2026-01-20 14:43:50.753 225859 DEBUG nova.network.neutron [req-fd821a2b-dde3-404f-ae5f-005f57bcf5e6 req-c9a1bcdb-f1c7-4d63-b82e-af793ac06f1a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d6ec5fce-44f2-4c13-b908-c45d7a919b34] Updated VIF entry in instance network info cache for port 69c1a502-414e-4ca7-9aec-488bbb6170b2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 14:43:50 compute-1 nova_compute[225855]: 2026-01-20 14:43:50.753 225859 DEBUG nova.network.neutron [req-fd821a2b-dde3-404f-ae5f-005f57bcf5e6 req-c9a1bcdb-f1c7-4d63-b82e-af793ac06f1a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d6ec5fce-44f2-4c13-b908-c45d7a919b34] Updating instance_info_cache with network_info: [{"id": "69c1a502-414e-4ca7-9aec-488bbb6170b2", "address": "fa:16:3e:18:6f:89", "network": {"id": "53d0b281-776f-4682-8aaf-098e1d364008", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1516883251-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28a523cfe06042ff96554913a78e1e3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap69c1a502-41", "ovs_interfaceid": "69c1a502-414e-4ca7-9aec-488bbb6170b2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:43:50 compute-1 nova_compute[225855]: 2026-01-20 14:43:50.783 225859 DEBUG oslo_concurrency.lockutils [req-fd821a2b-dde3-404f-ae5f-005f57bcf5e6 req-c9a1bcdb-f1c7-4d63-b82e-af793ac06f1a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-d6ec5fce-44f2-4c13-b908-c45d7a919b34" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:43:50 compute-1 podman[258073]: 2026-01-20 14:43:50.978365094 +0000 UTC m=+0.052153948 container create 5ede8f98a3dd4d68b1eb28fd75045c72977b65a6797a05a54d3264bb3badb1cc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-53d0b281-776f-4682-8aaf-098e1d364008, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 20 14:43:50 compute-1 nova_compute[225855]: 2026-01-20 14:43:50.983 225859 DEBUG nova.compute.manager [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] [instance: d6ec5fce-44f2-4c13-b908-c45d7a919b34] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 20 14:43:50 compute-1 nova_compute[225855]: 2026-01-20 14:43:50.984 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768920230.982973, d6ec5fce-44f2-4c13-b908-c45d7a919b34 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:43:50 compute-1 nova_compute[225855]: 2026-01-20 14:43:50.984 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: d6ec5fce-44f2-4c13-b908-c45d7a919b34] VM Started (Lifecycle Event)
Jan 20 14:43:50 compute-1 nova_compute[225855]: 2026-01-20 14:43:50.987 225859 DEBUG nova.virt.libvirt.driver [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] [instance: d6ec5fce-44f2-4c13-b908-c45d7a919b34] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 20 14:43:50 compute-1 nova_compute[225855]: 2026-01-20 14:43:50.991 225859 INFO nova.virt.libvirt.driver [-] [instance: d6ec5fce-44f2-4c13-b908-c45d7a919b34] Instance spawned successfully.
Jan 20 14:43:50 compute-1 nova_compute[225855]: 2026-01-20 14:43:50.991 225859 DEBUG nova.virt.libvirt.driver [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] [instance: d6ec5fce-44f2-4c13-b908-c45d7a919b34] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 20 14:43:51 compute-1 nova_compute[225855]: 2026-01-20 14:43:51.009 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: d6ec5fce-44f2-4c13-b908-c45d7a919b34] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:43:51 compute-1 nova_compute[225855]: 2026-01-20 14:43:51.015 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: d6ec5fce-44f2-4c13-b908-c45d7a919b34] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:43:51 compute-1 systemd[1]: Started libpod-conmon-5ede8f98a3dd4d68b1eb28fd75045c72977b65a6797a05a54d3264bb3badb1cc.scope.
Jan 20 14:43:51 compute-1 nova_compute[225855]: 2026-01-20 14:43:51.020 225859 DEBUG nova.virt.libvirt.driver [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] [instance: d6ec5fce-44f2-4c13-b908-c45d7a919b34] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:43:51 compute-1 nova_compute[225855]: 2026-01-20 14:43:51.020 225859 DEBUG nova.virt.libvirt.driver [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] [instance: d6ec5fce-44f2-4c13-b908-c45d7a919b34] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:43:51 compute-1 nova_compute[225855]: 2026-01-20 14:43:51.021 225859 DEBUG nova.virt.libvirt.driver [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] [instance: d6ec5fce-44f2-4c13-b908-c45d7a919b34] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:43:51 compute-1 nova_compute[225855]: 2026-01-20 14:43:51.021 225859 DEBUG nova.virt.libvirt.driver [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] [instance: d6ec5fce-44f2-4c13-b908-c45d7a919b34] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:43:51 compute-1 nova_compute[225855]: 2026-01-20 14:43:51.021 225859 DEBUG nova.virt.libvirt.driver [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] [instance: d6ec5fce-44f2-4c13-b908-c45d7a919b34] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:43:51 compute-1 nova_compute[225855]: 2026-01-20 14:43:51.022 225859 DEBUG nova.virt.libvirt.driver [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] [instance: d6ec5fce-44f2-4c13-b908-c45d7a919b34] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:43:51 compute-1 systemd[1]: Started libcrun container.
Jan 20 14:43:51 compute-1 nova_compute[225855]: 2026-01-20 14:43:51.047 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: d6ec5fce-44f2-4c13-b908-c45d7a919b34] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 14:43:51 compute-1 nova_compute[225855]: 2026-01-20 14:43:51.048 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768920230.983072, d6ec5fce-44f2-4c13-b908-c45d7a919b34 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:43:51 compute-1 nova_compute[225855]: 2026-01-20 14:43:51.048 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: d6ec5fce-44f2-4c13-b908-c45d7a919b34] VM Paused (Lifecycle Event)
Jan 20 14:43:51 compute-1 podman[258073]: 2026-01-20 14:43:50.953184601 +0000 UTC m=+0.026973465 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 20 14:43:51 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6f7c08597faeeb0c37af41a23da75f840d40edad83ad99c2c657cb81c506427/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 20 14:43:51 compute-1 podman[258073]: 2026-01-20 14:43:51.066431328 +0000 UTC m=+0.140220182 container init 5ede8f98a3dd4d68b1eb28fd75045c72977b65a6797a05a54d3264bb3badb1cc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-53d0b281-776f-4682-8aaf-098e1d364008, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 20 14:43:51 compute-1 podman[258073]: 2026-01-20 14:43:51.072678875 +0000 UTC m=+0.146467719 container start 5ede8f98a3dd4d68b1eb28fd75045c72977b65a6797a05a54d3264bb3badb1cc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-53d0b281-776f-4682-8aaf-098e1d364008, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 20 14:43:51 compute-1 nova_compute[225855]: 2026-01-20 14:43:51.076 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: d6ec5fce-44f2-4c13-b908-c45d7a919b34] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:43:51 compute-1 nova_compute[225855]: 2026-01-20 14:43:51.080 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768920230.9865382, d6ec5fce-44f2-4c13-b908-c45d7a919b34 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:43:51 compute-1 nova_compute[225855]: 2026-01-20 14:43:51.080 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: d6ec5fce-44f2-4c13-b908-c45d7a919b34] VM Resumed (Lifecycle Event)
Jan 20 14:43:51 compute-1 nova_compute[225855]: 2026-01-20 14:43:51.096 225859 INFO nova.compute.manager [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] [instance: d6ec5fce-44f2-4c13-b908-c45d7a919b34] Took 12.30 seconds to spawn the instance on the hypervisor.
Jan 20 14:43:51 compute-1 nova_compute[225855]: 2026-01-20 14:43:51.096 225859 DEBUG nova.compute.manager [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] [instance: d6ec5fce-44f2-4c13-b908-c45d7a919b34] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:43:51 compute-1 neutron-haproxy-ovnmeta-53d0b281-776f-4682-8aaf-098e1d364008[258090]: [NOTICE]   (258094) : New worker (258096) forked
Jan 20 14:43:51 compute-1 neutron-haproxy-ovnmeta-53d0b281-776f-4682-8aaf-098e1d364008[258090]: [NOTICE]   (258094) : Loading success.
Jan 20 14:43:51 compute-1 nova_compute[225855]: 2026-01-20 14:43:51.122 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: d6ec5fce-44f2-4c13-b908-c45d7a919b34] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:43:51 compute-1 nova_compute[225855]: 2026-01-20 14:43:51.126 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: d6ec5fce-44f2-4c13-b908-c45d7a919b34] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:43:51 compute-1 nova_compute[225855]: 2026-01-20 14:43:51.151 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: d6ec5fce-44f2-4c13-b908-c45d7a919b34] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 14:43:51 compute-1 nova_compute[225855]: 2026-01-20 14:43:51.167 225859 INFO nova.compute.manager [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] [instance: d6ec5fce-44f2-4c13-b908-c45d7a919b34] Took 13.66 seconds to build instance.
Jan 20 14:43:51 compute-1 nova_compute[225855]: 2026-01-20 14:43:51.189 225859 DEBUG oslo_concurrency.lockutils [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] Lock "d6ec5fce-44f2-4c13-b908-c45d7a919b34" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.806s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:43:51 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:43:51 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:43:51 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:43:51.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:43:51 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:43:51 compute-1 ceph-mon[81775]: pgmap v1665: 321 pgs: 321 active+clean; 260 MiB data, 780 MiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 7.1 MiB/s wr, 194 op/s
Jan 20 14:43:52 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:43:52 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:43:52 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:43:52.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:43:52 compute-1 nova_compute[225855]: 2026-01-20 14:43:52.891 225859 DEBUG nova.compute.manager [req-1bfb8fcc-8dc2-4bc7-abd3-b75f14c363f4 req-bdd16724-d118-45c3-bcc5-83a9c832b3d0 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d6ec5fce-44f2-4c13-b908-c45d7a919b34] Received event network-vif-plugged-69c1a502-414e-4ca7-9aec-488bbb6170b2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:43:52 compute-1 nova_compute[225855]: 2026-01-20 14:43:52.891 225859 DEBUG oslo_concurrency.lockutils [req-1bfb8fcc-8dc2-4bc7-abd3-b75f14c363f4 req-bdd16724-d118-45c3-bcc5-83a9c832b3d0 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "d6ec5fce-44f2-4c13-b908-c45d7a919b34-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:43:52 compute-1 nova_compute[225855]: 2026-01-20 14:43:52.892 225859 DEBUG oslo_concurrency.lockutils [req-1bfb8fcc-8dc2-4bc7-abd3-b75f14c363f4 req-bdd16724-d118-45c3-bcc5-83a9c832b3d0 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "d6ec5fce-44f2-4c13-b908-c45d7a919b34-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:43:52 compute-1 nova_compute[225855]: 2026-01-20 14:43:52.892 225859 DEBUG oslo_concurrency.lockutils [req-1bfb8fcc-8dc2-4bc7-abd3-b75f14c363f4 req-bdd16724-d118-45c3-bcc5-83a9c832b3d0 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "d6ec5fce-44f2-4c13-b908-c45d7a919b34-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:43:52 compute-1 nova_compute[225855]: 2026-01-20 14:43:52.892 225859 DEBUG nova.compute.manager [req-1bfb8fcc-8dc2-4bc7-abd3-b75f14c363f4 req-bdd16724-d118-45c3-bcc5-83a9c832b3d0 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d6ec5fce-44f2-4c13-b908-c45d7a919b34] No waiting events found dispatching network-vif-plugged-69c1a502-414e-4ca7-9aec-488bbb6170b2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:43:52 compute-1 nova_compute[225855]: 2026-01-20 14:43:52.892 225859 WARNING nova.compute.manager [req-1bfb8fcc-8dc2-4bc7-abd3-b75f14c363f4 req-bdd16724-d118-45c3-bcc5-83a9c832b3d0 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d6ec5fce-44f2-4c13-b908-c45d7a919b34] Received unexpected event network-vif-plugged-69c1a502-414e-4ca7-9aec-488bbb6170b2 for instance with vm_state active and task_state None.
Jan 20 14:43:53 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:43:53 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:43:53 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:43:53.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:43:53 compute-1 ceph-mon[81775]: pgmap v1666: 321 pgs: 321 active+clean; 260 MiB data, 780 MiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 3.4 MiB/s wr, 227 op/s
Jan 20 14:43:54 compute-1 nova_compute[225855]: 2026-01-20 14:43:54.008 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:43:54 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:43:54 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:43:54 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:43:54.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:43:55 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:43:55 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:43:55 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:43:55.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:43:55 compute-1 nova_compute[225855]: 2026-01-20 14:43:55.376 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:43:56 compute-1 ceph-mon[81775]: pgmap v1667: 321 pgs: 321 active+clean; 227 MiB data, 778 MiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 1.1 MiB/s wr, 283 op/s
Jan 20 14:43:56 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:43:56 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.004000113s ======
Jan 20 14:43:56 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:43:56.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.004000113s
Jan 20 14:43:56 compute-1 sudo[258108]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:43:56 compute-1 sudo[258108]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:43:56 compute-1 sudo[258108]: pam_unix(sudo:session): session closed for user root
Jan 20 14:43:56 compute-1 sudo[258133]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:43:56 compute-1 sudo[258133]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:43:56 compute-1 sudo[258133]: pam_unix(sudo:session): session closed for user root
Jan 20 14:43:56 compute-1 sudo[258158]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:43:56 compute-1 sudo[258158]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:43:56 compute-1 sudo[258158]: pam_unix(sudo:session): session closed for user root
Jan 20 14:43:56 compute-1 sudo[258183]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 20 14:43:56 compute-1 sudo[258183]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:43:56 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:43:57 compute-1 sudo[258183]: pam_unix(sudo:session): session closed for user root
Jan 20 14:43:57 compute-1 podman[258227]: 2026-01-20 14:43:57.037977192 +0000 UTC m=+0.087743256 container health_status 72be95c12e041eb2cd1a16ad70dfa1391d9f03a5f3180a614ead9459dcfcd46f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true)
Jan 20 14:43:57 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/278798223' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:43:57 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/3948072205' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:43:57 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:43:57 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:43:57 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:43:57.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:43:58 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:43:58 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 14:43:58 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:43:58 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 14:43:58 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 14:43:58 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:43:58 compute-1 ceph-mon[81775]: pgmap v1668: 321 pgs: 321 active+clean; 230 MiB data, 768 MiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 901 KiB/s wr, 248 op/s
Jan 20 14:43:58 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:43:58 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:43:58 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:43:58.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:43:59 compute-1 nova_compute[225855]: 2026-01-20 14:43:59.010 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:43:59 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:43:59 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:43:59 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:43:59.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:43:59 compute-1 ceph-mon[81775]: pgmap v1669: 321 pgs: 321 active+clean; 238 MiB data, 767 MiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 1.2 MiB/s wr, 251 op/s
Jan 20 14:44:00 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:44:00 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:44:00 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:44:00.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:44:00 compute-1 nova_compute[225855]: 2026-01-20 14:44:00.378 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:44:01 compute-1 nova_compute[225855]: 2026-01-20 14:44:01.013 225859 DEBUG oslo_concurrency.lockutils [None req-e1bb863c-b070-41d6-9c26-36531271f9be 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] Acquiring lock "d6ec5fce-44f2-4c13-b908-c45d7a919b34" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:44:01 compute-1 nova_compute[225855]: 2026-01-20 14:44:01.014 225859 DEBUG oslo_concurrency.lockutils [None req-e1bb863c-b070-41d6-9c26-36531271f9be 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] Lock "d6ec5fce-44f2-4c13-b908-c45d7a919b34" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:44:01 compute-1 nova_compute[225855]: 2026-01-20 14:44:01.016 225859 DEBUG oslo_concurrency.lockutils [None req-e1bb863c-b070-41d6-9c26-36531271f9be 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] Acquiring lock "d6ec5fce-44f2-4c13-b908-c45d7a919b34-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:44:01 compute-1 nova_compute[225855]: 2026-01-20 14:44:01.017 225859 DEBUG oslo_concurrency.lockutils [None req-e1bb863c-b070-41d6-9c26-36531271f9be 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] Lock "d6ec5fce-44f2-4c13-b908-c45d7a919b34-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:44:01 compute-1 nova_compute[225855]: 2026-01-20 14:44:01.017 225859 DEBUG oslo_concurrency.lockutils [None req-e1bb863c-b070-41d6-9c26-36531271f9be 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] Lock "d6ec5fce-44f2-4c13-b908-c45d7a919b34-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:44:01 compute-1 nova_compute[225855]: 2026-01-20 14:44:01.018 225859 INFO nova.compute.manager [None req-e1bb863c-b070-41d6-9c26-36531271f9be 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] [instance: d6ec5fce-44f2-4c13-b908-c45d7a919b34] Terminating instance
Jan 20 14:44:01 compute-1 nova_compute[225855]: 2026-01-20 14:44:01.019 225859 DEBUG nova.compute.manager [None req-e1bb863c-b070-41d6-9c26-36531271f9be 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] [instance: d6ec5fce-44f2-4c13-b908-c45d7a919b34] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 20 14:44:01 compute-1 kernel: tap69c1a502-41 (unregistering): left promiscuous mode
Jan 20 14:44:01 compute-1 NetworkManager[49104]: <info>  [1768920241.0617] device (tap69c1a502-41): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 20 14:44:01 compute-1 ovn_controller[130490]: 2026-01-20T14:44:01Z|00296|binding|INFO|Releasing lport 69c1a502-414e-4ca7-9aec-488bbb6170b2 from this chassis (sb_readonly=0)
Jan 20 14:44:01 compute-1 ovn_controller[130490]: 2026-01-20T14:44:01Z|00297|binding|INFO|Setting lport 69c1a502-414e-4ca7-9aec-488bbb6170b2 down in Southbound
Jan 20 14:44:01 compute-1 nova_compute[225855]: 2026-01-20 14:44:01.069 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:44:01 compute-1 ovn_controller[130490]: 2026-01-20T14:44:01Z|00298|binding|INFO|Removing iface tap69c1a502-41 ovn-installed in OVS
Jan 20 14:44:01 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:44:01.077 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:18:6f:89 10.100.0.10'], port_security=['fa:16:3e:18:6f:89 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'd6ec5fce-44f2-4c13-b908-c45d7a919b34', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-53d0b281-776f-4682-8aaf-098e1d364008', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '28a523cfe06042ff96554913a78e1e3a', 'neutron:revision_number': '4', 'neutron:security_group_ids': '1879c269-0854-40a3-8eb9-b61f97d38545', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a1afefec-2060-4dfb-acbb-1ce14c3a663c, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], logical_port=69c1a502-414e-4ca7-9aec-488bbb6170b2) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:44:01 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:44:01.078 140354 INFO neutron.agent.ovn.metadata.agent [-] Port 69c1a502-414e-4ca7-9aec-488bbb6170b2 in datapath 53d0b281-776f-4682-8aaf-098e1d364008 unbound from our chassis
Jan 20 14:44:01 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:44:01.079 140354 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 53d0b281-776f-4682-8aaf-098e1d364008, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 20 14:44:01 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:44:01.080 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[31fea395-25b9-40f5-b0d0-98be95dc350f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:44:01 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:44:01.081 140354 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-53d0b281-776f-4682-8aaf-098e1d364008 namespace which is not needed anymore
Jan 20 14:44:01 compute-1 nova_compute[225855]: 2026-01-20 14:44:01.098 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:44:01 compute-1 systemd[1]: machine-qemu\x2d35\x2dinstance\x2d00000054.scope: Deactivated successfully.
Jan 20 14:44:01 compute-1 systemd[1]: machine-qemu\x2d35\x2dinstance\x2d00000054.scope: Consumed 10.931s CPU time.
Jan 20 14:44:01 compute-1 systemd-machined[194361]: Machine qemu-35-instance-00000054 terminated.
Jan 20 14:44:01 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/1574054384' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:44:01 compute-1 neutron-haproxy-ovnmeta-53d0b281-776f-4682-8aaf-098e1d364008[258090]: [NOTICE]   (258094) : haproxy version is 2.8.14-c23fe91
Jan 20 14:44:01 compute-1 neutron-haproxy-ovnmeta-53d0b281-776f-4682-8aaf-098e1d364008[258090]: [NOTICE]   (258094) : path to executable is /usr/sbin/haproxy
Jan 20 14:44:01 compute-1 neutron-haproxy-ovnmeta-53d0b281-776f-4682-8aaf-098e1d364008[258090]: [WARNING]  (258094) : Exiting Master process...
Jan 20 14:44:01 compute-1 neutron-haproxy-ovnmeta-53d0b281-776f-4682-8aaf-098e1d364008[258090]: [ALERT]    (258094) : Current worker (258096) exited with code 143 (Terminated)
Jan 20 14:44:01 compute-1 neutron-haproxy-ovnmeta-53d0b281-776f-4682-8aaf-098e1d364008[258090]: [WARNING]  (258094) : All workers exited. Exiting... (0)
Jan 20 14:44:01 compute-1 systemd[1]: libpod-5ede8f98a3dd4d68b1eb28fd75045c72977b65a6797a05a54d3264bb3badb1cc.scope: Deactivated successfully.
Jan 20 14:44:01 compute-1 conmon[258090]: conmon 5ede8f98a3dd4d68b1eb <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5ede8f98a3dd4d68b1eb28fd75045c72977b65a6797a05a54d3264bb3badb1cc.scope/container/memory.events
Jan 20 14:44:01 compute-1 podman[258292]: 2026-01-20 14:44:01.211113762 +0000 UTC m=+0.040428836 container died 5ede8f98a3dd4d68b1eb28fd75045c72977b65a6797a05a54d3264bb3badb1cc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-53d0b281-776f-4682-8aaf-098e1d364008, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202)
Jan 20 14:44:01 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5ede8f98a3dd4d68b1eb28fd75045c72977b65a6797a05a54d3264bb3badb1cc-userdata-shm.mount: Deactivated successfully.
Jan 20 14:44:01 compute-1 systemd[1]: var-lib-containers-storage-overlay-a6f7c08597faeeb0c37af41a23da75f840d40edad83ad99c2c657cb81c506427-merged.mount: Deactivated successfully.
Jan 20 14:44:01 compute-1 podman[258292]: 2026-01-20 14:44:01.254407868 +0000 UTC m=+0.083722942 container cleanup 5ede8f98a3dd4d68b1eb28fd75045c72977b65a6797a05a54d3264bb3badb1cc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-53d0b281-776f-4682-8aaf-098e1d364008, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:44:01 compute-1 nova_compute[225855]: 2026-01-20 14:44:01.258 225859 INFO nova.virt.libvirt.driver [-] [instance: d6ec5fce-44f2-4c13-b908-c45d7a919b34] Instance destroyed successfully.
Jan 20 14:44:01 compute-1 nova_compute[225855]: 2026-01-20 14:44:01.258 225859 DEBUG nova.objects.instance [None req-e1bb863c-b070-41d6-9c26-36531271f9be 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] Lazy-loading 'resources' on Instance uuid d6ec5fce-44f2-4c13-b908-c45d7a919b34 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:44:01 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:44:01 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:44:01 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:44:01.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:44:01 compute-1 systemd[1]: libpod-conmon-5ede8f98a3dd4d68b1eb28fd75045c72977b65a6797a05a54d3264bb3badb1cc.scope: Deactivated successfully.
Jan 20 14:44:01 compute-1 nova_compute[225855]: 2026-01-20 14:44:01.278 225859 DEBUG nova.virt.libvirt.vif [None req-e1bb863c-b070-41d6-9c26-36531271f9be 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T14:43:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-1818827013',display_name='tempest-ListServersNegativeTestJSON-server-1818827013-2',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-1818827013-2',id=84,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=1,launched_at=2026-01-20T14:43:51Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='28a523cfe06042ff96554913a78e1e3a',ramdisk_id='',reservation_id='r-s8qopxwt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServersNegativeTestJSON-1080060493',owner_user_name='tempest-ListServersNegativeTestJSON-1080060493-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T14:43:51Z,user_data=None,user_id='2975742546164cad937d13671d17108a',uuid=d6ec5fce-44f2-4c13-b908-c45d7a919b34,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "69c1a502-414e-4ca7-9aec-488bbb6170b2", "address": "fa:16:3e:18:6f:89", "network": {"id": "53d0b281-776f-4682-8aaf-098e1d364008", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1516883251-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28a523cfe06042ff96554913a78e1e3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap69c1a502-41", "ovs_interfaceid": "69c1a502-414e-4ca7-9aec-488bbb6170b2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 20 14:44:01 compute-1 nova_compute[225855]: 2026-01-20 14:44:01.280 225859 DEBUG nova.network.os_vif_util [None req-e1bb863c-b070-41d6-9c26-36531271f9be 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] Converting VIF {"id": "69c1a502-414e-4ca7-9aec-488bbb6170b2", "address": "fa:16:3e:18:6f:89", "network": {"id": "53d0b281-776f-4682-8aaf-098e1d364008", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1516883251-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28a523cfe06042ff96554913a78e1e3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap69c1a502-41", "ovs_interfaceid": "69c1a502-414e-4ca7-9aec-488bbb6170b2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:44:01 compute-1 nova_compute[225855]: 2026-01-20 14:44:01.280 225859 DEBUG nova.network.os_vif_util [None req-e1bb863c-b070-41d6-9c26-36531271f9be 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:18:6f:89,bridge_name='br-int',has_traffic_filtering=True,id=69c1a502-414e-4ca7-9aec-488bbb6170b2,network=Network(53d0b281-776f-4682-8aaf-098e1d364008),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap69c1a502-41') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:44:01 compute-1 nova_compute[225855]: 2026-01-20 14:44:01.281 225859 DEBUG os_vif [None req-e1bb863c-b070-41d6-9c26-36531271f9be 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:18:6f:89,bridge_name='br-int',has_traffic_filtering=True,id=69c1a502-414e-4ca7-9aec-488bbb6170b2,network=Network(53d0b281-776f-4682-8aaf-098e1d364008),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap69c1a502-41') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 20 14:44:01 compute-1 nova_compute[225855]: 2026-01-20 14:44:01.284 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:44:01 compute-1 nova_compute[225855]: 2026-01-20 14:44:01.285 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap69c1a502-41, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:44:01 compute-1 nova_compute[225855]: 2026-01-20 14:44:01.289 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:44:01 compute-1 nova_compute[225855]: 2026-01-20 14:44:01.291 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 14:44:01 compute-1 nova_compute[225855]: 2026-01-20 14:44:01.293 225859 INFO os_vif [None req-e1bb863c-b070-41d6-9c26-36531271f9be 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:18:6f:89,bridge_name='br-int',has_traffic_filtering=True,id=69c1a502-414e-4ca7-9aec-488bbb6170b2,network=Network(53d0b281-776f-4682-8aaf-098e1d364008),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap69c1a502-41')
Jan 20 14:44:01 compute-1 podman[258331]: 2026-01-20 14:44:01.321043685 +0000 UTC m=+0.044545943 container remove 5ede8f98a3dd4d68b1eb28fd75045c72977b65a6797a05a54d3264bb3badb1cc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-53d0b281-776f-4682-8aaf-098e1d364008, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Jan 20 14:44:01 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:44:01.329 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[34d9f01c-c9f7-4d6f-8969-14b3f41e9168]: (4, ('Tue Jan 20 02:44:01 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-53d0b281-776f-4682-8aaf-098e1d364008 (5ede8f98a3dd4d68b1eb28fd75045c72977b65a6797a05a54d3264bb3badb1cc)\n5ede8f98a3dd4d68b1eb28fd75045c72977b65a6797a05a54d3264bb3badb1cc\nTue Jan 20 02:44:01 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-53d0b281-776f-4682-8aaf-098e1d364008 (5ede8f98a3dd4d68b1eb28fd75045c72977b65a6797a05a54d3264bb3badb1cc)\n5ede8f98a3dd4d68b1eb28fd75045c72977b65a6797a05a54d3264bb3badb1cc\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:44:01 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:44:01.330 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[735c2946-ce24-4ac3-aad8-7ce2a1a3b2f1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:44:01 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:44:01.331 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap53d0b281-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:44:01 compute-1 nova_compute[225855]: 2026-01-20 14:44:01.332 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:44:01 compute-1 kernel: tap53d0b281-70: left promiscuous mode
Jan 20 14:44:01 compute-1 nova_compute[225855]: 2026-01-20 14:44:01.334 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:44:01 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:44:01.337 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[6fc09e1d-487c-4847-9557-189bc9426673]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:44:01 compute-1 nova_compute[225855]: 2026-01-20 14:44:01.349 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:44:01 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:44:01.356 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[ff8f269b-cb76-4684-8921-9cac753634cf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:44:01 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:44:01.357 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[ad6765ce-82f9-40e2-97d7-2663c8641ce4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:44:01 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:44:01.374 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[0ff1135a-aae0-4242-b614-5d80c8284026]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 526932, 'reachable_time': 39512, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 258366, 'error': None, 'target': 'ovnmeta-53d0b281-776f-4682-8aaf-098e1d364008', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:44:01 compute-1 systemd[1]: run-netns-ovnmeta\x2d53d0b281\x2d776f\x2d4682\x2d8aaf\x2d098e1d364008.mount: Deactivated successfully.
Jan 20 14:44:01 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:44:01.379 140466 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-53d0b281-776f-4682-8aaf-098e1d364008 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 20 14:44:01 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:44:01.379 140466 DEBUG oslo.privsep.daemon [-] privsep: reply[4520dfc0-f3af-43db-857d-fbf393637f22]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:44:01 compute-1 nova_compute[225855]: 2026-01-20 14:44:01.611 225859 DEBUG oslo_concurrency.lockutils [None req-dc51a782-12d2-48be-9a64-fcce82be7137 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Acquiring lock "6586bc3e-3a94-4d22-8e8c-713a86a956fb" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:44:01 compute-1 nova_compute[225855]: 2026-01-20 14:44:01.613 225859 DEBUG oslo_concurrency.lockutils [None req-dc51a782-12d2-48be-9a64-fcce82be7137 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Lock "6586bc3e-3a94-4d22-8e8c-713a86a956fb" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:44:01 compute-1 nova_compute[225855]: 2026-01-20 14:44:01.648 225859 DEBUG nova.compute.manager [None req-dc51a782-12d2-48be-9a64-fcce82be7137 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] [instance: 6586bc3e-3a94-4d22-8e8c-713a86a956fb] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 20 14:44:01 compute-1 nova_compute[225855]: 2026-01-20 14:44:01.664 225859 INFO nova.virt.libvirt.driver [None req-e1bb863c-b070-41d6-9c26-36531271f9be 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] [instance: d6ec5fce-44f2-4c13-b908-c45d7a919b34] Deleting instance files /var/lib/nova/instances/d6ec5fce-44f2-4c13-b908-c45d7a919b34_del
Jan 20 14:44:01 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:44:01 compute-1 nova_compute[225855]: 2026-01-20 14:44:01.665 225859 INFO nova.virt.libvirt.driver [None req-e1bb863c-b070-41d6-9c26-36531271f9be 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] [instance: d6ec5fce-44f2-4c13-b908-c45d7a919b34] Deletion of /var/lib/nova/instances/d6ec5fce-44f2-4c13-b908-c45d7a919b34_del complete
Jan 20 14:44:01 compute-1 nova_compute[225855]: 2026-01-20 14:44:01.793 225859 DEBUG oslo_concurrency.lockutils [None req-dc51a782-12d2-48be-9a64-fcce82be7137 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:44:01 compute-1 nova_compute[225855]: 2026-01-20 14:44:01.793 225859 DEBUG oslo_concurrency.lockutils [None req-dc51a782-12d2-48be-9a64-fcce82be7137 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:44:01 compute-1 nova_compute[225855]: 2026-01-20 14:44:01.800 225859 INFO nova.compute.manager [None req-e1bb863c-b070-41d6-9c26-36531271f9be 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] [instance: d6ec5fce-44f2-4c13-b908-c45d7a919b34] Took 0.78 seconds to destroy the instance on the hypervisor.
Jan 20 14:44:01 compute-1 nova_compute[225855]: 2026-01-20 14:44:01.801 225859 DEBUG oslo.service.loopingcall [None req-e1bb863c-b070-41d6-9c26-36531271f9be 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 20 14:44:01 compute-1 nova_compute[225855]: 2026-01-20 14:44:01.802 225859 DEBUG nova.compute.manager [-] [instance: d6ec5fce-44f2-4c13-b908-c45d7a919b34] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 20 14:44:01 compute-1 nova_compute[225855]: 2026-01-20 14:44:01.802 225859 DEBUG nova.network.neutron [-] [instance: d6ec5fce-44f2-4c13-b908-c45d7a919b34] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 20 14:44:01 compute-1 nova_compute[225855]: 2026-01-20 14:44:01.809 225859 DEBUG nova.virt.hardware [None req-dc51a782-12d2-48be-9a64-fcce82be7137 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 20 14:44:01 compute-1 nova_compute[225855]: 2026-01-20 14:44:01.810 225859 INFO nova.compute.claims [None req-dc51a782-12d2-48be-9a64-fcce82be7137 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] [instance: 6586bc3e-3a94-4d22-8e8c-713a86a956fb] Claim successful on node compute-1.ctlplane.example.com
Jan 20 14:44:02 compute-1 nova_compute[225855]: 2026-01-20 14:44:02.019 225859 DEBUG oslo_concurrency.processutils [None req-dc51a782-12d2-48be-9a64-fcce82be7137 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:44:02 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/4262766353' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:44:02 compute-1 ceph-mon[81775]: pgmap v1670: 321 pgs: 321 active+clean; 260 MiB data, 774 MiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 1.8 MiB/s wr, 273 op/s
Jan 20 14:44:02 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:44:02 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:44:02 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:44:02.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:44:02 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:44:02 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/14759032' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:44:02 compute-1 nova_compute[225855]: 2026-01-20 14:44:02.479 225859 DEBUG oslo_concurrency.processutils [None req-dc51a782-12d2-48be-9a64-fcce82be7137 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:44:02 compute-1 nova_compute[225855]: 2026-01-20 14:44:02.485 225859 DEBUG nova.compute.provider_tree [None req-dc51a782-12d2-48be-9a64-fcce82be7137 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:44:02 compute-1 nova_compute[225855]: 2026-01-20 14:44:02.505 225859 DEBUG nova.scheduler.client.report [None req-dc51a782-12d2-48be-9a64-fcce82be7137 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:44:02 compute-1 nova_compute[225855]: 2026-01-20 14:44:02.557 225859 DEBUG oslo_concurrency.lockutils [None req-dc51a782-12d2-48be-9a64-fcce82be7137 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.764s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:44:02 compute-1 nova_compute[225855]: 2026-01-20 14:44:02.558 225859 DEBUG nova.compute.manager [None req-dc51a782-12d2-48be-9a64-fcce82be7137 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] [instance: 6586bc3e-3a94-4d22-8e8c-713a86a956fb] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 20 14:44:02 compute-1 nova_compute[225855]: 2026-01-20 14:44:02.626 225859 DEBUG nova.compute.manager [None req-dc51a782-12d2-48be-9a64-fcce82be7137 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] [instance: 6586bc3e-3a94-4d22-8e8c-713a86a956fb] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 20 14:44:02 compute-1 nova_compute[225855]: 2026-01-20 14:44:02.627 225859 DEBUG nova.network.neutron [None req-dc51a782-12d2-48be-9a64-fcce82be7137 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] [instance: 6586bc3e-3a94-4d22-8e8c-713a86a956fb] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 20 14:44:02 compute-1 nova_compute[225855]: 2026-01-20 14:44:02.653 225859 INFO nova.virt.libvirt.driver [None req-dc51a782-12d2-48be-9a64-fcce82be7137 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] [instance: 6586bc3e-3a94-4d22-8e8c-713a86a956fb] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 20 14:44:02 compute-1 nova_compute[225855]: 2026-01-20 14:44:02.677 225859 DEBUG nova.compute.manager [None req-dc51a782-12d2-48be-9a64-fcce82be7137 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] [instance: 6586bc3e-3a94-4d22-8e8c-713a86a956fb] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 20 14:44:02 compute-1 nova_compute[225855]: 2026-01-20 14:44:02.876 225859 DEBUG nova.compute.manager [None req-dc51a782-12d2-48be-9a64-fcce82be7137 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] [instance: 6586bc3e-3a94-4d22-8e8c-713a86a956fb] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 20 14:44:02 compute-1 nova_compute[225855]: 2026-01-20 14:44:02.877 225859 DEBUG nova.virt.libvirt.driver [None req-dc51a782-12d2-48be-9a64-fcce82be7137 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] [instance: 6586bc3e-3a94-4d22-8e8c-713a86a956fb] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 20 14:44:02 compute-1 nova_compute[225855]: 2026-01-20 14:44:02.877 225859 INFO nova.virt.libvirt.driver [None req-dc51a782-12d2-48be-9a64-fcce82be7137 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] [instance: 6586bc3e-3a94-4d22-8e8c-713a86a956fb] Creating image(s)
Jan 20 14:44:02 compute-1 nova_compute[225855]: 2026-01-20 14:44:02.901 225859 DEBUG nova.storage.rbd_utils [None req-dc51a782-12d2-48be-9a64-fcce82be7137 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] rbd image 6586bc3e-3a94-4d22-8e8c-713a86a956fb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:44:02 compute-1 nova_compute[225855]: 2026-01-20 14:44:02.931 225859 DEBUG nova.storage.rbd_utils [None req-dc51a782-12d2-48be-9a64-fcce82be7137 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] rbd image 6586bc3e-3a94-4d22-8e8c-713a86a956fb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:44:02 compute-1 nova_compute[225855]: 2026-01-20 14:44:02.954 225859 DEBUG nova.storage.rbd_utils [None req-dc51a782-12d2-48be-9a64-fcce82be7137 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] rbd image 6586bc3e-3a94-4d22-8e8c-713a86a956fb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:44:02 compute-1 nova_compute[225855]: 2026-01-20 14:44:02.959 225859 DEBUG oslo_concurrency.processutils [None req-dc51a782-12d2-48be-9a64-fcce82be7137 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:44:02 compute-1 nova_compute[225855]: 2026-01-20 14:44:02.983 225859 DEBUG nova.policy [None req-dc51a782-12d2-48be-9a64-fcce82be7137 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '869086208e10436c9dc96c78bee9a85d', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b683fcc0026242e28ba6d8fba638688e', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 20 14:44:03 compute-1 nova_compute[225855]: 2026-01-20 14:44:03.018 225859 DEBUG oslo_concurrency.processutils [None req-dc51a782-12d2-48be-9a64-fcce82be7137 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:44:03 compute-1 nova_compute[225855]: 2026-01-20 14:44:03.019 225859 DEBUG oslo_concurrency.lockutils [None req-dc51a782-12d2-48be-9a64-fcce82be7137 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Acquiring lock "82d5c1918fd7c974214c7a48c1793a7a82560462" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:44:03 compute-1 nova_compute[225855]: 2026-01-20 14:44:03.020 225859 DEBUG oslo_concurrency.lockutils [None req-dc51a782-12d2-48be-9a64-fcce82be7137 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:44:03 compute-1 nova_compute[225855]: 2026-01-20 14:44:03.020 225859 DEBUG oslo_concurrency.lockutils [None req-dc51a782-12d2-48be-9a64-fcce82be7137 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:44:03 compute-1 nova_compute[225855]: 2026-01-20 14:44:03.042 225859 DEBUG nova.storage.rbd_utils [None req-dc51a782-12d2-48be-9a64-fcce82be7137 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] rbd image 6586bc3e-3a94-4d22-8e8c-713a86a956fb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:44:03 compute-1 nova_compute[225855]: 2026-01-20 14:44:03.045 225859 DEBUG oslo_concurrency.processutils [None req-dc51a782-12d2-48be-9a64-fcce82be7137 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 6586bc3e-3a94-4d22-8e8c-713a86a956fb_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:44:03 compute-1 sudo[258467]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:44:03 compute-1 sudo[258467]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:44:03 compute-1 sudo[258467]: pam_unix(sudo:session): session closed for user root
Jan 20 14:44:03 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/14759032' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:44:03 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/3245543980' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:44:03 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:44:03 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:44:03 compute-1 sudo[258507]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 14:44:03 compute-1 sudo[258507]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:44:03 compute-1 sudo[258507]: pam_unix(sudo:session): session closed for user root
Jan 20 14:44:03 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:44:03 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:44:03 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:44:03.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:44:03 compute-1 nova_compute[225855]: 2026-01-20 14:44:03.352 225859 DEBUG oslo_concurrency.processutils [None req-dc51a782-12d2-48be-9a64-fcce82be7137 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 6586bc3e-3a94-4d22-8e8c-713a86a956fb_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.307s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:44:03 compute-1 nova_compute[225855]: 2026-01-20 14:44:03.432 225859 DEBUG nova.storage.rbd_utils [None req-dc51a782-12d2-48be-9a64-fcce82be7137 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] resizing rbd image 6586bc3e-3a94-4d22-8e8c-713a86a956fb_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 20 14:44:03 compute-1 nova_compute[225855]: 2026-01-20 14:44:03.536 225859 DEBUG nova.objects.instance [None req-dc51a782-12d2-48be-9a64-fcce82be7137 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Lazy-loading 'migration_context' on Instance uuid 6586bc3e-3a94-4d22-8e8c-713a86a956fb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:44:03 compute-1 nova_compute[225855]: 2026-01-20 14:44:03.546 225859 DEBUG nova.compute.manager [req-a0cbf06a-07e3-46cf-9223-2a05eaa64b0f req-8f0650d1-749d-4c83-8edf-c124e6abdbb0 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d6ec5fce-44f2-4c13-b908-c45d7a919b34] Received event network-vif-unplugged-69c1a502-414e-4ca7-9aec-488bbb6170b2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:44:03 compute-1 nova_compute[225855]: 2026-01-20 14:44:03.547 225859 DEBUG oslo_concurrency.lockutils [req-a0cbf06a-07e3-46cf-9223-2a05eaa64b0f req-8f0650d1-749d-4c83-8edf-c124e6abdbb0 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "d6ec5fce-44f2-4c13-b908-c45d7a919b34-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:44:03 compute-1 nova_compute[225855]: 2026-01-20 14:44:03.547 225859 DEBUG oslo_concurrency.lockutils [req-a0cbf06a-07e3-46cf-9223-2a05eaa64b0f req-8f0650d1-749d-4c83-8edf-c124e6abdbb0 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "d6ec5fce-44f2-4c13-b908-c45d7a919b34-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:44:03 compute-1 nova_compute[225855]: 2026-01-20 14:44:03.547 225859 DEBUG oslo_concurrency.lockutils [req-a0cbf06a-07e3-46cf-9223-2a05eaa64b0f req-8f0650d1-749d-4c83-8edf-c124e6abdbb0 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "d6ec5fce-44f2-4c13-b908-c45d7a919b34-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:44:03 compute-1 nova_compute[225855]: 2026-01-20 14:44:03.548 225859 DEBUG nova.compute.manager [req-a0cbf06a-07e3-46cf-9223-2a05eaa64b0f req-8f0650d1-749d-4c83-8edf-c124e6abdbb0 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d6ec5fce-44f2-4c13-b908-c45d7a919b34] No waiting events found dispatching network-vif-unplugged-69c1a502-414e-4ca7-9aec-488bbb6170b2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:44:03 compute-1 nova_compute[225855]: 2026-01-20 14:44:03.548 225859 DEBUG nova.compute.manager [req-a0cbf06a-07e3-46cf-9223-2a05eaa64b0f req-8f0650d1-749d-4c83-8edf-c124e6abdbb0 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d6ec5fce-44f2-4c13-b908-c45d7a919b34] Received event network-vif-unplugged-69c1a502-414e-4ca7-9aec-488bbb6170b2 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 20 14:44:03 compute-1 nova_compute[225855]: 2026-01-20 14:44:03.548 225859 DEBUG nova.compute.manager [req-a0cbf06a-07e3-46cf-9223-2a05eaa64b0f req-8f0650d1-749d-4c83-8edf-c124e6abdbb0 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d6ec5fce-44f2-4c13-b908-c45d7a919b34] Received event network-vif-plugged-69c1a502-414e-4ca7-9aec-488bbb6170b2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:44:03 compute-1 nova_compute[225855]: 2026-01-20 14:44:03.549 225859 DEBUG oslo_concurrency.lockutils [req-a0cbf06a-07e3-46cf-9223-2a05eaa64b0f req-8f0650d1-749d-4c83-8edf-c124e6abdbb0 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "d6ec5fce-44f2-4c13-b908-c45d7a919b34-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:44:03 compute-1 nova_compute[225855]: 2026-01-20 14:44:03.549 225859 DEBUG oslo_concurrency.lockutils [req-a0cbf06a-07e3-46cf-9223-2a05eaa64b0f req-8f0650d1-749d-4c83-8edf-c124e6abdbb0 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "d6ec5fce-44f2-4c13-b908-c45d7a919b34-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:44:03 compute-1 nova_compute[225855]: 2026-01-20 14:44:03.550 225859 DEBUG oslo_concurrency.lockutils [req-a0cbf06a-07e3-46cf-9223-2a05eaa64b0f req-8f0650d1-749d-4c83-8edf-c124e6abdbb0 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "d6ec5fce-44f2-4c13-b908-c45d7a919b34-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:44:03 compute-1 nova_compute[225855]: 2026-01-20 14:44:03.550 225859 DEBUG nova.compute.manager [req-a0cbf06a-07e3-46cf-9223-2a05eaa64b0f req-8f0650d1-749d-4c83-8edf-c124e6abdbb0 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d6ec5fce-44f2-4c13-b908-c45d7a919b34] No waiting events found dispatching network-vif-plugged-69c1a502-414e-4ca7-9aec-488bbb6170b2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:44:03 compute-1 nova_compute[225855]: 2026-01-20 14:44:03.550 225859 WARNING nova.compute.manager [req-a0cbf06a-07e3-46cf-9223-2a05eaa64b0f req-8f0650d1-749d-4c83-8edf-c124e6abdbb0 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d6ec5fce-44f2-4c13-b908-c45d7a919b34] Received unexpected event network-vif-plugged-69c1a502-414e-4ca7-9aec-488bbb6170b2 for instance with vm_state active and task_state deleting.
Jan 20 14:44:03 compute-1 nova_compute[225855]: 2026-01-20 14:44:03.564 225859 DEBUG nova.virt.libvirt.driver [None req-dc51a782-12d2-48be-9a64-fcce82be7137 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] [instance: 6586bc3e-3a94-4d22-8e8c-713a86a956fb] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 20 14:44:03 compute-1 nova_compute[225855]: 2026-01-20 14:44:03.564 225859 DEBUG nova.virt.libvirt.driver [None req-dc51a782-12d2-48be-9a64-fcce82be7137 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] [instance: 6586bc3e-3a94-4d22-8e8c-713a86a956fb] Ensure instance console log exists: /var/lib/nova/instances/6586bc3e-3a94-4d22-8e8c-713a86a956fb/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 20 14:44:03 compute-1 nova_compute[225855]: 2026-01-20 14:44:03.565 225859 DEBUG oslo_concurrency.lockutils [None req-dc51a782-12d2-48be-9a64-fcce82be7137 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:44:03 compute-1 nova_compute[225855]: 2026-01-20 14:44:03.565 225859 DEBUG oslo_concurrency.lockutils [None req-dc51a782-12d2-48be-9a64-fcce82be7137 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:44:03 compute-1 nova_compute[225855]: 2026-01-20 14:44:03.565 225859 DEBUG oslo_concurrency.lockutils [None req-dc51a782-12d2-48be-9a64-fcce82be7137 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:44:03 compute-1 nova_compute[225855]: 2026-01-20 14:44:03.851 225859 DEBUG nova.network.neutron [-] [instance: d6ec5fce-44f2-4c13-b908-c45d7a919b34] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:44:03 compute-1 nova_compute[225855]: 2026-01-20 14:44:03.877 225859 INFO nova.compute.manager [-] [instance: d6ec5fce-44f2-4c13-b908-c45d7a919b34] Took 2.07 seconds to deallocate network for instance.
Jan 20 14:44:03 compute-1 nova_compute[225855]: 2026-01-20 14:44:03.947 225859 DEBUG oslo_concurrency.lockutils [None req-e1bb863c-b070-41d6-9c26-36531271f9be 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:44:03 compute-1 nova_compute[225855]: 2026-01-20 14:44:03.948 225859 DEBUG oslo_concurrency.lockutils [None req-e1bb863c-b070-41d6-9c26-36531271f9be 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:44:04 compute-1 nova_compute[225855]: 2026-01-20 14:44:04.023 225859 DEBUG oslo_concurrency.processutils [None req-e1bb863c-b070-41d6-9c26-36531271f9be 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:44:04 compute-1 nova_compute[225855]: 2026-01-20 14:44:04.199 225859 DEBUG nova.network.neutron [None req-dc51a782-12d2-48be-9a64-fcce82be7137 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] [instance: 6586bc3e-3a94-4d22-8e8c-713a86a956fb] Successfully created port: 2c289e6f-295e-44c3-948a-9a6901251890 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 20 14:44:04 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/736157011' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:44:04 compute-1 ceph-mon[81775]: pgmap v1671: 321 pgs: 321 active+clean; 209 MiB data, 770 MiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 2.0 MiB/s wr, 253 op/s
Jan 20 14:44:04 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:44:04 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:44:04 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:44:04.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:44:04 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:44:04 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3605756288' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:44:04 compute-1 nova_compute[225855]: 2026-01-20 14:44:04.490 225859 DEBUG oslo_concurrency.processutils [None req-e1bb863c-b070-41d6-9c26-36531271f9be 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:44:04 compute-1 nova_compute[225855]: 2026-01-20 14:44:04.496 225859 DEBUG nova.compute.provider_tree [None req-e1bb863c-b070-41d6-9c26-36531271f9be 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:44:04 compute-1 nova_compute[225855]: 2026-01-20 14:44:04.545 225859 DEBUG nova.scheduler.client.report [None req-e1bb863c-b070-41d6-9c26-36531271f9be 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:44:04 compute-1 nova_compute[225855]: 2026-01-20 14:44:04.586 225859 DEBUG oslo_concurrency.lockutils [None req-e1bb863c-b070-41d6-9c26-36531271f9be 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.638s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:44:04 compute-1 nova_compute[225855]: 2026-01-20 14:44:04.615 225859 INFO nova.scheduler.client.report [None req-e1bb863c-b070-41d6-9c26-36531271f9be 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] Deleted allocations for instance d6ec5fce-44f2-4c13-b908-c45d7a919b34
Jan 20 14:44:04 compute-1 nova_compute[225855]: 2026-01-20 14:44:04.746 225859 DEBUG oslo_concurrency.lockutils [None req-e1bb863c-b070-41d6-9c26-36531271f9be 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] Lock "d6ec5fce-44f2-4c13-b908-c45d7a919b34" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.732s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:44:05 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/3605756288' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:44:05 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:44:05 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:44:05 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:44:05.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:44:05 compute-1 nova_compute[225855]: 2026-01-20 14:44:05.380 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:44:05 compute-1 nova_compute[225855]: 2026-01-20 14:44:05.562 225859 DEBUG nova.network.neutron [None req-dc51a782-12d2-48be-9a64-fcce82be7137 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] [instance: 6586bc3e-3a94-4d22-8e8c-713a86a956fb] Successfully updated port: 2c289e6f-295e-44c3-948a-9a6901251890 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 20 14:44:05 compute-1 nova_compute[225855]: 2026-01-20 14:44:05.580 225859 DEBUG oslo_concurrency.lockutils [None req-dc51a782-12d2-48be-9a64-fcce82be7137 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Acquiring lock "refresh_cache-6586bc3e-3a94-4d22-8e8c-713a86a956fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:44:05 compute-1 nova_compute[225855]: 2026-01-20 14:44:05.580 225859 DEBUG oslo_concurrency.lockutils [None req-dc51a782-12d2-48be-9a64-fcce82be7137 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Acquired lock "refresh_cache-6586bc3e-3a94-4d22-8e8c-713a86a956fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:44:05 compute-1 nova_compute[225855]: 2026-01-20 14:44:05.580 225859 DEBUG nova.network.neutron [None req-dc51a782-12d2-48be-9a64-fcce82be7137 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] [instance: 6586bc3e-3a94-4d22-8e8c-713a86a956fb] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 14:44:05 compute-1 nova_compute[225855]: 2026-01-20 14:44:05.693 225859 DEBUG nova.compute.manager [req-c48853ba-bfb6-4c05-9397-cd05484a477f req-0518c30a-3877-45fc-bdd6-c8376618099f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d6ec5fce-44f2-4c13-b908-c45d7a919b34] Received event network-vif-deleted-69c1a502-414e-4ca7-9aec-488bbb6170b2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:44:05 compute-1 nova_compute[225855]: 2026-01-20 14:44:05.710 225859 DEBUG nova.compute.manager [req-3e18d8ff-e118-4a17-aedd-174147fa8366 req-c71917d1-60b1-4698-8cf6-1d8e6757bf49 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 6586bc3e-3a94-4d22-8e8c-713a86a956fb] Received event network-changed-2c289e6f-295e-44c3-948a-9a6901251890 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:44:05 compute-1 nova_compute[225855]: 2026-01-20 14:44:05.710 225859 DEBUG nova.compute.manager [req-3e18d8ff-e118-4a17-aedd-174147fa8366 req-c71917d1-60b1-4698-8cf6-1d8e6757bf49 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 6586bc3e-3a94-4d22-8e8c-713a86a956fb] Refreshing instance network info cache due to event network-changed-2c289e6f-295e-44c3-948a-9a6901251890. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 14:44:05 compute-1 nova_compute[225855]: 2026-01-20 14:44:05.710 225859 DEBUG oslo_concurrency.lockutils [req-3e18d8ff-e118-4a17-aedd-174147fa8366 req-c71917d1-60b1-4698-8cf6-1d8e6757bf49 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-6586bc3e-3a94-4d22-8e8c-713a86a956fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:44:05 compute-1 nova_compute[225855]: 2026-01-20 14:44:05.811 225859 DEBUG nova.network.neutron [None req-dc51a782-12d2-48be-9a64-fcce82be7137 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] [instance: 6586bc3e-3a94-4d22-8e8c-713a86a956fb] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 20 14:44:06 compute-1 podman[258631]: 2026-01-20 14:44:06.006586896 +0000 UTC m=+0.053131295 container health_status 533cd8e0c3b8b7f910f15cef49e7f879ab46c12ce646a35628c15f13725eb500 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 20 14:44:06 compute-1 ceph-mon[81775]: pgmap v1672: 321 pgs: 321 active+clean; 123 MiB data, 731 MiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 5.2 MiB/s wr, 292 op/s
Jan 20 14:44:06 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:44:06 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:44:06 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:44:06.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:44:06 compute-1 nova_compute[225855]: 2026-01-20 14:44:06.334 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:44:06 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:44:06 compute-1 sudo[258651]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:44:06 compute-1 sudo[258651]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:44:06 compute-1 sudo[258651]: pam_unix(sudo:session): session closed for user root
Jan 20 14:44:06 compute-1 sudo[258676]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:44:06 compute-1 sudo[258676]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:44:06 compute-1 sudo[258676]: pam_unix(sudo:session): session closed for user root
Jan 20 14:44:07 compute-1 nova_compute[225855]: 2026-01-20 14:44:07.183 225859 DEBUG nova.network.neutron [None req-dc51a782-12d2-48be-9a64-fcce82be7137 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] [instance: 6586bc3e-3a94-4d22-8e8c-713a86a956fb] Updating instance_info_cache with network_info: [{"id": "2c289e6f-295e-44c3-948a-9a6901251890", "address": "fa:16:3e:2f:4c:e2", "network": {"id": "a19e9d1a-864f-41ee-bdea-188e65973ea5", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-916311998-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b683fcc0026242e28ba6d8fba638688e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c289e6f-29", "ovs_interfaceid": "2c289e6f-295e-44c3-948a-9a6901251890", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:44:07 compute-1 nova_compute[225855]: 2026-01-20 14:44:07.217 225859 DEBUG oslo_concurrency.lockutils [None req-dc51a782-12d2-48be-9a64-fcce82be7137 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Releasing lock "refresh_cache-6586bc3e-3a94-4d22-8e8c-713a86a956fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:44:07 compute-1 nova_compute[225855]: 2026-01-20 14:44:07.217 225859 DEBUG nova.compute.manager [None req-dc51a782-12d2-48be-9a64-fcce82be7137 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] [instance: 6586bc3e-3a94-4d22-8e8c-713a86a956fb] Instance network_info: |[{"id": "2c289e6f-295e-44c3-948a-9a6901251890", "address": "fa:16:3e:2f:4c:e2", "network": {"id": "a19e9d1a-864f-41ee-bdea-188e65973ea5", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-916311998-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b683fcc0026242e28ba6d8fba638688e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c289e6f-29", "ovs_interfaceid": "2c289e6f-295e-44c3-948a-9a6901251890", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 20 14:44:07 compute-1 nova_compute[225855]: 2026-01-20 14:44:07.218 225859 DEBUG oslo_concurrency.lockutils [req-3e18d8ff-e118-4a17-aedd-174147fa8366 req-c71917d1-60b1-4698-8cf6-1d8e6757bf49 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-6586bc3e-3a94-4d22-8e8c-713a86a956fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:44:07 compute-1 nova_compute[225855]: 2026-01-20 14:44:07.218 225859 DEBUG nova.network.neutron [req-3e18d8ff-e118-4a17-aedd-174147fa8366 req-c71917d1-60b1-4698-8cf6-1d8e6757bf49 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 6586bc3e-3a94-4d22-8e8c-713a86a956fb] Refreshing network info cache for port 2c289e6f-295e-44c3-948a-9a6901251890 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 14:44:07 compute-1 nova_compute[225855]: 2026-01-20 14:44:07.221 225859 DEBUG nova.virt.libvirt.driver [None req-dc51a782-12d2-48be-9a64-fcce82be7137 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] [instance: 6586bc3e-3a94-4d22-8e8c-713a86a956fb] Start _get_guest_xml network_info=[{"id": "2c289e6f-295e-44c3-948a-9a6901251890", "address": "fa:16:3e:2f:4c:e2", "network": {"id": "a19e9d1a-864f-41ee-bdea-188e65973ea5", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-916311998-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b683fcc0026242e28ba6d8fba638688e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c289e6f-29", "ovs_interfaceid": "2c289e6f-295e-44c3-948a-9a6901251890", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'device_type': 'disk', 'encryption_options': None, 'size': 0, 'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'guest_format': None, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'image_id': 'a32b3e07-16d8-46fd-9a7b-c242c432fcf9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 20 14:44:07 compute-1 nova_compute[225855]: 2026-01-20 14:44:07.226 225859 WARNING nova.virt.libvirt.driver [None req-dc51a782-12d2-48be-9a64-fcce82be7137 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 14:44:07 compute-1 nova_compute[225855]: 2026-01-20 14:44:07.231 225859 DEBUG nova.virt.libvirt.host [None req-dc51a782-12d2-48be-9a64-fcce82be7137 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 20 14:44:07 compute-1 nova_compute[225855]: 2026-01-20 14:44:07.232 225859 DEBUG nova.virt.libvirt.host [None req-dc51a782-12d2-48be-9a64-fcce82be7137 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 20 14:44:07 compute-1 nova_compute[225855]: 2026-01-20 14:44:07.236 225859 DEBUG nova.virt.libvirt.host [None req-dc51a782-12d2-48be-9a64-fcce82be7137 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 20 14:44:07 compute-1 nova_compute[225855]: 2026-01-20 14:44:07.237 225859 DEBUG nova.virt.libvirt.host [None req-dc51a782-12d2-48be-9a64-fcce82be7137 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 20 14:44:07 compute-1 nova_compute[225855]: 2026-01-20 14:44:07.238 225859 DEBUG nova.virt.libvirt.driver [None req-dc51a782-12d2-48be-9a64-fcce82be7137 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 20 14:44:07 compute-1 nova_compute[225855]: 2026-01-20 14:44:07.239 225859 DEBUG nova.virt.hardware [None req-dc51a782-12d2-48be-9a64-fcce82be7137 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-20T14:21:55Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='522deaab-a741-4dbb-932d-d8b13a211c33',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 20 14:44:07 compute-1 nova_compute[225855]: 2026-01-20 14:44:07.239 225859 DEBUG nova.virt.hardware [None req-dc51a782-12d2-48be-9a64-fcce82be7137 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 20 14:44:07 compute-1 nova_compute[225855]: 2026-01-20 14:44:07.240 225859 DEBUG nova.virt.hardware [None req-dc51a782-12d2-48be-9a64-fcce82be7137 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 20 14:44:07 compute-1 nova_compute[225855]: 2026-01-20 14:44:07.240 225859 DEBUG nova.virt.hardware [None req-dc51a782-12d2-48be-9a64-fcce82be7137 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 20 14:44:07 compute-1 nova_compute[225855]: 2026-01-20 14:44:07.240 225859 DEBUG nova.virt.hardware [None req-dc51a782-12d2-48be-9a64-fcce82be7137 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 20 14:44:07 compute-1 nova_compute[225855]: 2026-01-20 14:44:07.240 225859 DEBUG nova.virt.hardware [None req-dc51a782-12d2-48be-9a64-fcce82be7137 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 20 14:44:07 compute-1 nova_compute[225855]: 2026-01-20 14:44:07.241 225859 DEBUG nova.virt.hardware [None req-dc51a782-12d2-48be-9a64-fcce82be7137 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 20 14:44:07 compute-1 nova_compute[225855]: 2026-01-20 14:44:07.241 225859 DEBUG nova.virt.hardware [None req-dc51a782-12d2-48be-9a64-fcce82be7137 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 20 14:44:07 compute-1 nova_compute[225855]: 2026-01-20 14:44:07.241 225859 DEBUG nova.virt.hardware [None req-dc51a782-12d2-48be-9a64-fcce82be7137 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 20 14:44:07 compute-1 nova_compute[225855]: 2026-01-20 14:44:07.242 225859 DEBUG nova.virt.hardware [None req-dc51a782-12d2-48be-9a64-fcce82be7137 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 20 14:44:07 compute-1 nova_compute[225855]: 2026-01-20 14:44:07.242 225859 DEBUG nova.virt.hardware [None req-dc51a782-12d2-48be-9a64-fcce82be7137 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 20 14:44:07 compute-1 nova_compute[225855]: 2026-01-20 14:44:07.246 225859 DEBUG oslo_concurrency.processutils [None req-dc51a782-12d2-48be-9a64-fcce82be7137 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:44:07 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:44:07 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:44:07 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:44:07.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:44:07 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:44:07 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/847151595' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:44:07 compute-1 nova_compute[225855]: 2026-01-20 14:44:07.679 225859 DEBUG oslo_concurrency.processutils [None req-dc51a782-12d2-48be-9a64-fcce82be7137 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:44:07 compute-1 nova_compute[225855]: 2026-01-20 14:44:07.705 225859 DEBUG nova.storage.rbd_utils [None req-dc51a782-12d2-48be-9a64-fcce82be7137 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] rbd image 6586bc3e-3a94-4d22-8e8c-713a86a956fb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:44:07 compute-1 nova_compute[225855]: 2026-01-20 14:44:07.709 225859 DEBUG oslo_concurrency.processutils [None req-dc51a782-12d2-48be-9a64-fcce82be7137 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:44:07 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/847151595' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:44:08 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:44:08 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/674612077' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:44:08 compute-1 nova_compute[225855]: 2026-01-20 14:44:08.139 225859 DEBUG oslo_concurrency.processutils [None req-dc51a782-12d2-48be-9a64-fcce82be7137 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:44:08 compute-1 nova_compute[225855]: 2026-01-20 14:44:08.141 225859 DEBUG nova.virt.libvirt.vif [None req-dc51a782-12d2-48be-9a64-fcce82be7137 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T14:44:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherA-server-1533521351',display_name='tempest-ServerActionsTestOtherA-server-1533521351',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveractionstestothera-server-1533521351',id=87,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDOC6AOV9rhSIyyfBXYGEFvhdWE5GLRuNfs0sPvnXoLHIstQY2OpqwhI35UcFW1s96uqz0+j9sMbdcq/NuNcfgrlnXkEz6j2iO7WUECWdPrQW34JB1FTWAvtA4R6RDoaZA==',key_name='tempest-keypair-1611966828',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b683fcc0026242e28ba6d8fba638688e',ramdisk_id='',reservation_id='r-47bmn591',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherA-967087071',owner_user_name='tempest-ServerActionsTestOtherA-967087071-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T14:44:02Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='869086208e10436c9dc96c78bee9a85d',uuid=6586bc3e-3a94-4d22-8e8c-713a86a956fb,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2c289e6f-295e-44c3-948a-9a6901251890", "address": "fa:16:3e:2f:4c:e2", "network": {"id": "a19e9d1a-864f-41ee-bdea-188e65973ea5", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-916311998-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": 
"fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b683fcc0026242e28ba6d8fba638688e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c289e6f-29", "ovs_interfaceid": "2c289e6f-295e-44c3-948a-9a6901251890", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 20 14:44:08 compute-1 nova_compute[225855]: 2026-01-20 14:44:08.141 225859 DEBUG nova.network.os_vif_util [None req-dc51a782-12d2-48be-9a64-fcce82be7137 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Converting VIF {"id": "2c289e6f-295e-44c3-948a-9a6901251890", "address": "fa:16:3e:2f:4c:e2", "network": {"id": "a19e9d1a-864f-41ee-bdea-188e65973ea5", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-916311998-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b683fcc0026242e28ba6d8fba638688e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c289e6f-29", "ovs_interfaceid": "2c289e6f-295e-44c3-948a-9a6901251890", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:44:08 compute-1 nova_compute[225855]: 2026-01-20 14:44:08.142 225859 DEBUG nova.network.os_vif_util [None req-dc51a782-12d2-48be-9a64-fcce82be7137 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2f:4c:e2,bridge_name='br-int',has_traffic_filtering=True,id=2c289e6f-295e-44c3-948a-9a6901251890,network=Network(a19e9d1a-864f-41ee-bdea-188e65973ea5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2c289e6f-29') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:44:08 compute-1 nova_compute[225855]: 2026-01-20 14:44:08.144 225859 DEBUG nova.objects.instance [None req-dc51a782-12d2-48be-9a64-fcce82be7137 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Lazy-loading 'pci_devices' on Instance uuid 6586bc3e-3a94-4d22-8e8c-713a86a956fb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:44:08 compute-1 nova_compute[225855]: 2026-01-20 14:44:08.166 225859 DEBUG nova.virt.libvirt.driver [None req-dc51a782-12d2-48be-9a64-fcce82be7137 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] [instance: 6586bc3e-3a94-4d22-8e8c-713a86a956fb] End _get_guest_xml xml=<domain type="kvm">
Jan 20 14:44:08 compute-1 nova_compute[225855]:   <uuid>6586bc3e-3a94-4d22-8e8c-713a86a956fb</uuid>
Jan 20 14:44:08 compute-1 nova_compute[225855]:   <name>instance-00000057</name>
Jan 20 14:44:08 compute-1 nova_compute[225855]:   <memory>131072</memory>
Jan 20 14:44:08 compute-1 nova_compute[225855]:   <vcpu>1</vcpu>
Jan 20 14:44:08 compute-1 nova_compute[225855]:   <metadata>
Jan 20 14:44:08 compute-1 nova_compute[225855]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 14:44:08 compute-1 nova_compute[225855]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 14:44:08 compute-1 nova_compute[225855]:       <nova:name>tempest-ServerActionsTestOtherA-server-1533521351</nova:name>
Jan 20 14:44:08 compute-1 nova_compute[225855]:       <nova:creationTime>2026-01-20 14:44:07</nova:creationTime>
Jan 20 14:44:08 compute-1 nova_compute[225855]:       <nova:flavor name="m1.nano">
Jan 20 14:44:08 compute-1 nova_compute[225855]:         <nova:memory>128</nova:memory>
Jan 20 14:44:08 compute-1 nova_compute[225855]:         <nova:disk>1</nova:disk>
Jan 20 14:44:08 compute-1 nova_compute[225855]:         <nova:swap>0</nova:swap>
Jan 20 14:44:08 compute-1 nova_compute[225855]:         <nova:ephemeral>0</nova:ephemeral>
Jan 20 14:44:08 compute-1 nova_compute[225855]:         <nova:vcpus>1</nova:vcpus>
Jan 20 14:44:08 compute-1 nova_compute[225855]:       </nova:flavor>
Jan 20 14:44:08 compute-1 nova_compute[225855]:       <nova:owner>
Jan 20 14:44:08 compute-1 nova_compute[225855]:         <nova:user uuid="869086208e10436c9dc96c78bee9a85d">tempest-ServerActionsTestOtherA-967087071-project-member</nova:user>
Jan 20 14:44:08 compute-1 nova_compute[225855]:         <nova:project uuid="b683fcc0026242e28ba6d8fba638688e">tempest-ServerActionsTestOtherA-967087071</nova:project>
Jan 20 14:44:08 compute-1 nova_compute[225855]:       </nova:owner>
Jan 20 14:44:08 compute-1 nova_compute[225855]:       <nova:root type="image" uuid="a32b3e07-16d8-46fd-9a7b-c242c432fcf9"/>
Jan 20 14:44:08 compute-1 nova_compute[225855]:       <nova:ports>
Jan 20 14:44:08 compute-1 nova_compute[225855]:         <nova:port uuid="2c289e6f-295e-44c3-948a-9a6901251890">
Jan 20 14:44:08 compute-1 nova_compute[225855]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Jan 20 14:44:08 compute-1 nova_compute[225855]:         </nova:port>
Jan 20 14:44:08 compute-1 nova_compute[225855]:       </nova:ports>
Jan 20 14:44:08 compute-1 nova_compute[225855]:     </nova:instance>
Jan 20 14:44:08 compute-1 nova_compute[225855]:   </metadata>
Jan 20 14:44:08 compute-1 nova_compute[225855]:   <sysinfo type="smbios">
Jan 20 14:44:08 compute-1 nova_compute[225855]:     <system>
Jan 20 14:44:08 compute-1 nova_compute[225855]:       <entry name="manufacturer">RDO</entry>
Jan 20 14:44:08 compute-1 nova_compute[225855]:       <entry name="product">OpenStack Compute</entry>
Jan 20 14:44:08 compute-1 nova_compute[225855]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 14:44:08 compute-1 nova_compute[225855]:       <entry name="serial">6586bc3e-3a94-4d22-8e8c-713a86a956fb</entry>
Jan 20 14:44:08 compute-1 nova_compute[225855]:       <entry name="uuid">6586bc3e-3a94-4d22-8e8c-713a86a956fb</entry>
Jan 20 14:44:08 compute-1 nova_compute[225855]:       <entry name="family">Virtual Machine</entry>
Jan 20 14:44:08 compute-1 nova_compute[225855]:     </system>
Jan 20 14:44:08 compute-1 nova_compute[225855]:   </sysinfo>
Jan 20 14:44:08 compute-1 nova_compute[225855]:   <os>
Jan 20 14:44:08 compute-1 nova_compute[225855]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 20 14:44:08 compute-1 nova_compute[225855]:     <boot dev="hd"/>
Jan 20 14:44:08 compute-1 nova_compute[225855]:     <smbios mode="sysinfo"/>
Jan 20 14:44:08 compute-1 nova_compute[225855]:   </os>
Jan 20 14:44:08 compute-1 nova_compute[225855]:   <features>
Jan 20 14:44:08 compute-1 nova_compute[225855]:     <acpi/>
Jan 20 14:44:08 compute-1 nova_compute[225855]:     <apic/>
Jan 20 14:44:08 compute-1 nova_compute[225855]:     <vmcoreinfo/>
Jan 20 14:44:08 compute-1 nova_compute[225855]:   </features>
Jan 20 14:44:08 compute-1 nova_compute[225855]:   <clock offset="utc">
Jan 20 14:44:08 compute-1 nova_compute[225855]:     <timer name="pit" tickpolicy="delay"/>
Jan 20 14:44:08 compute-1 nova_compute[225855]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 20 14:44:08 compute-1 nova_compute[225855]:     <timer name="hpet" present="no"/>
Jan 20 14:44:08 compute-1 nova_compute[225855]:   </clock>
Jan 20 14:44:08 compute-1 nova_compute[225855]:   <cpu mode="custom" match="exact">
Jan 20 14:44:08 compute-1 nova_compute[225855]:     <model>Nehalem</model>
Jan 20 14:44:08 compute-1 nova_compute[225855]:     <topology sockets="1" cores="1" threads="1"/>
Jan 20 14:44:08 compute-1 nova_compute[225855]:   </cpu>
Jan 20 14:44:08 compute-1 nova_compute[225855]:   <devices>
Jan 20 14:44:08 compute-1 nova_compute[225855]:     <disk type="network" device="disk">
Jan 20 14:44:08 compute-1 nova_compute[225855]:       <driver type="raw" cache="none"/>
Jan 20 14:44:08 compute-1 nova_compute[225855]:       <source protocol="rbd" name="vms/6586bc3e-3a94-4d22-8e8c-713a86a956fb_disk">
Jan 20 14:44:08 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:44:08 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:44:08 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:44:08 compute-1 nova_compute[225855]:       </source>
Jan 20 14:44:08 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 14:44:08 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:44:08 compute-1 nova_compute[225855]:       </auth>
Jan 20 14:44:08 compute-1 nova_compute[225855]:       <target dev="vda" bus="virtio"/>
Jan 20 14:44:08 compute-1 nova_compute[225855]:     </disk>
Jan 20 14:44:08 compute-1 nova_compute[225855]:     <disk type="network" device="cdrom">
Jan 20 14:44:08 compute-1 nova_compute[225855]:       <driver type="raw" cache="none"/>
Jan 20 14:44:08 compute-1 nova_compute[225855]:       <source protocol="rbd" name="vms/6586bc3e-3a94-4d22-8e8c-713a86a956fb_disk.config">
Jan 20 14:44:08 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:44:08 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:44:08 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:44:08 compute-1 nova_compute[225855]:       </source>
Jan 20 14:44:08 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 14:44:08 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:44:08 compute-1 nova_compute[225855]:       </auth>
Jan 20 14:44:08 compute-1 nova_compute[225855]:       <target dev="sda" bus="sata"/>
Jan 20 14:44:08 compute-1 nova_compute[225855]:     </disk>
Jan 20 14:44:08 compute-1 nova_compute[225855]:     <interface type="ethernet">
Jan 20 14:44:08 compute-1 nova_compute[225855]:       <mac address="fa:16:3e:2f:4c:e2"/>
Jan 20 14:44:08 compute-1 nova_compute[225855]:       <model type="virtio"/>
Jan 20 14:44:08 compute-1 nova_compute[225855]:       <driver name="vhost" rx_queue_size="512"/>
Jan 20 14:44:08 compute-1 nova_compute[225855]:       <mtu size="1442"/>
Jan 20 14:44:08 compute-1 nova_compute[225855]:       <target dev="tap2c289e6f-29"/>
Jan 20 14:44:08 compute-1 nova_compute[225855]:     </interface>
Jan 20 14:44:08 compute-1 nova_compute[225855]:     <serial type="pty">
Jan 20 14:44:08 compute-1 nova_compute[225855]:       <log file="/var/lib/nova/instances/6586bc3e-3a94-4d22-8e8c-713a86a956fb/console.log" append="off"/>
Jan 20 14:44:08 compute-1 nova_compute[225855]:     </serial>
Jan 20 14:44:08 compute-1 nova_compute[225855]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 20 14:44:08 compute-1 nova_compute[225855]:     <video>
Jan 20 14:44:08 compute-1 nova_compute[225855]:       <model type="virtio"/>
Jan 20 14:44:08 compute-1 nova_compute[225855]:     </video>
Jan 20 14:44:08 compute-1 nova_compute[225855]:     <input type="tablet" bus="usb"/>
Jan 20 14:44:08 compute-1 nova_compute[225855]:     <rng model="virtio">
Jan 20 14:44:08 compute-1 nova_compute[225855]:       <backend model="random">/dev/urandom</backend>
Jan 20 14:44:08 compute-1 nova_compute[225855]:     </rng>
Jan 20 14:44:08 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root"/>
Jan 20 14:44:08 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:44:08 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:44:08 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:44:08 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:44:08 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:44:08 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:44:08 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:44:08 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:44:08 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:44:08 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:44:08 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:44:08 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:44:08 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:44:08 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:44:08 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:44:08 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:44:08 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:44:08 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:44:08 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:44:08 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:44:08 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:44:08 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:44:08 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:44:08 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:44:08 compute-1 nova_compute[225855]:     <controller type="usb" index="0"/>
Jan 20 14:44:08 compute-1 nova_compute[225855]:     <memballoon model="virtio">
Jan 20 14:44:08 compute-1 nova_compute[225855]:       <stats period="10"/>
Jan 20 14:44:08 compute-1 nova_compute[225855]:     </memballoon>
Jan 20 14:44:08 compute-1 nova_compute[225855]:   </devices>
Jan 20 14:44:08 compute-1 nova_compute[225855]: </domain>
Jan 20 14:44:08 compute-1 nova_compute[225855]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 20 14:44:08 compute-1 nova_compute[225855]: 2026-01-20 14:44:08.167 225859 DEBUG nova.compute.manager [None req-dc51a782-12d2-48be-9a64-fcce82be7137 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] [instance: 6586bc3e-3a94-4d22-8e8c-713a86a956fb] Preparing to wait for external event network-vif-plugged-2c289e6f-295e-44c3-948a-9a6901251890 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 20 14:44:08 compute-1 nova_compute[225855]: 2026-01-20 14:44:08.168 225859 DEBUG oslo_concurrency.lockutils [None req-dc51a782-12d2-48be-9a64-fcce82be7137 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Acquiring lock "6586bc3e-3a94-4d22-8e8c-713a86a956fb-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:44:08 compute-1 nova_compute[225855]: 2026-01-20 14:44:08.168 225859 DEBUG oslo_concurrency.lockutils [None req-dc51a782-12d2-48be-9a64-fcce82be7137 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Lock "6586bc3e-3a94-4d22-8e8c-713a86a956fb-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:44:08 compute-1 nova_compute[225855]: 2026-01-20 14:44:08.168 225859 DEBUG oslo_concurrency.lockutils [None req-dc51a782-12d2-48be-9a64-fcce82be7137 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Lock "6586bc3e-3a94-4d22-8e8c-713a86a956fb-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:44:08 compute-1 nova_compute[225855]: 2026-01-20 14:44:08.169 225859 DEBUG nova.virt.libvirt.vif [None req-dc51a782-12d2-48be-9a64-fcce82be7137 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T14:44:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherA-server-1533521351',display_name='tempest-ServerActionsTestOtherA-server-1533521351',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveractionstestothera-server-1533521351',id=87,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDOC6AOV9rhSIyyfBXYGEFvhdWE5GLRuNfs0sPvnXoLHIstQY2OpqwhI35UcFW1s96uqz0+j9sMbdcq/NuNcfgrlnXkEz6j2iO7WUECWdPrQW34JB1FTWAvtA4R6RDoaZA==',key_name='tempest-keypair-1611966828',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b683fcc0026242e28ba6d8fba638688e',ramdisk_id='',reservation_id='r-47bmn591',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherA-967087071',owner_user_name='tempest-ServerActionsTestOtherA-967087071-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T14:44:02Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='869086208e10436c9dc96c78bee9a85d',uuid=6586bc3e-3a94-4d22-8e8c-713a86a956fb,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2c289e6f-295e-44c3-948a-9a6901251890", "address": "fa:16:3e:2f:4c:e2", "network": {"id": "a19e9d1a-864f-41ee-bdea-188e65973ea5", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-916311998-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b683fcc0026242e28ba6d8fba638688e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c289e6f-29", "ovs_interfaceid": "2c289e6f-295e-44c3-948a-9a6901251890", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 20 14:44:08 compute-1 nova_compute[225855]: 2026-01-20 14:44:08.170 225859 DEBUG nova.network.os_vif_util [None req-dc51a782-12d2-48be-9a64-fcce82be7137 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Converting VIF {"id": "2c289e6f-295e-44c3-948a-9a6901251890", "address": "fa:16:3e:2f:4c:e2", "network": {"id": "a19e9d1a-864f-41ee-bdea-188e65973ea5", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-916311998-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b683fcc0026242e28ba6d8fba638688e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c289e6f-29", "ovs_interfaceid": "2c289e6f-295e-44c3-948a-9a6901251890", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:44:08 compute-1 nova_compute[225855]: 2026-01-20 14:44:08.170 225859 DEBUG nova.network.os_vif_util [None req-dc51a782-12d2-48be-9a64-fcce82be7137 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2f:4c:e2,bridge_name='br-int',has_traffic_filtering=True,id=2c289e6f-295e-44c3-948a-9a6901251890,network=Network(a19e9d1a-864f-41ee-bdea-188e65973ea5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2c289e6f-29') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:44:08 compute-1 nova_compute[225855]: 2026-01-20 14:44:08.171 225859 DEBUG os_vif [None req-dc51a782-12d2-48be-9a64-fcce82be7137 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:2f:4c:e2,bridge_name='br-int',has_traffic_filtering=True,id=2c289e6f-295e-44c3-948a-9a6901251890,network=Network(a19e9d1a-864f-41ee-bdea-188e65973ea5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2c289e6f-29') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 20 14:44:08 compute-1 nova_compute[225855]: 2026-01-20 14:44:08.171 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:44:08 compute-1 nova_compute[225855]: 2026-01-20 14:44:08.172 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:44:08 compute-1 nova_compute[225855]: 2026-01-20 14:44:08.173 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:44:08 compute-1 nova_compute[225855]: 2026-01-20 14:44:08.176 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:44:08 compute-1 nova_compute[225855]: 2026-01-20 14:44:08.176 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2c289e6f-29, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:44:08 compute-1 nova_compute[225855]: 2026-01-20 14:44:08.176 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2c289e6f-29, col_values=(('external_ids', {'iface-id': '2c289e6f-295e-44c3-948a-9a6901251890', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:2f:4c:e2', 'vm-uuid': '6586bc3e-3a94-4d22-8e8c-713a86a956fb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:44:08 compute-1 nova_compute[225855]: 2026-01-20 14:44:08.178 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:44:08 compute-1 NetworkManager[49104]: <info>  [1768920248.1797] manager: (tap2c289e6f-29): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/129)
Jan 20 14:44:08 compute-1 nova_compute[225855]: 2026-01-20 14:44:08.181 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 14:44:08 compute-1 nova_compute[225855]: 2026-01-20 14:44:08.184 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:44:08 compute-1 nova_compute[225855]: 2026-01-20 14:44:08.185 225859 INFO os_vif [None req-dc51a782-12d2-48be-9a64-fcce82be7137 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:2f:4c:e2,bridge_name='br-int',has_traffic_filtering=True,id=2c289e6f-295e-44c3-948a-9a6901251890,network=Network(a19e9d1a-864f-41ee-bdea-188e65973ea5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2c289e6f-29')
Jan 20 14:44:08 compute-1 nova_compute[225855]: 2026-01-20 14:44:08.250 225859 DEBUG nova.virt.libvirt.driver [None req-dc51a782-12d2-48be-9a64-fcce82be7137 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:44:08 compute-1 nova_compute[225855]: 2026-01-20 14:44:08.250 225859 DEBUG nova.virt.libvirt.driver [None req-dc51a782-12d2-48be-9a64-fcce82be7137 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:44:08 compute-1 nova_compute[225855]: 2026-01-20 14:44:08.250 225859 DEBUG nova.virt.libvirt.driver [None req-dc51a782-12d2-48be-9a64-fcce82be7137 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] No VIF found with MAC fa:16:3e:2f:4c:e2, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 20 14:44:08 compute-1 nova_compute[225855]: 2026-01-20 14:44:08.251 225859 INFO nova.virt.libvirt.driver [None req-dc51a782-12d2-48be-9a64-fcce82be7137 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] [instance: 6586bc3e-3a94-4d22-8e8c-713a86a956fb] Using config drive
Jan 20 14:44:08 compute-1 nova_compute[225855]: 2026-01-20 14:44:08.270 225859 DEBUG nova.storage.rbd_utils [None req-dc51a782-12d2-48be-9a64-fcce82be7137 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] rbd image 6586bc3e-3a94-4d22-8e8c-713a86a956fb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:44:08 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:44:08 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:44:08 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:44:08.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:44:08 compute-1 ceph-mon[81775]: pgmap v1673: 321 pgs: 321 active+clean; 134 MiB data, 729 MiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 5.6 MiB/s wr, 238 op/s
Jan 20 14:44:08 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/674612077' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:44:09 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:44:09 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:44:09 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:44:09.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:44:09 compute-1 nova_compute[225855]: 2026-01-20 14:44:09.491 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:44:09 compute-1 nova_compute[225855]: 2026-01-20 14:44:09.564 225859 INFO nova.virt.libvirt.driver [None req-dc51a782-12d2-48be-9a64-fcce82be7137 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] [instance: 6586bc3e-3a94-4d22-8e8c-713a86a956fb] Creating config drive at /var/lib/nova/instances/6586bc3e-3a94-4d22-8e8c-713a86a956fb/disk.config
Jan 20 14:44:09 compute-1 nova_compute[225855]: 2026-01-20 14:44:09.572 225859 DEBUG oslo_concurrency.processutils [None req-dc51a782-12d2-48be-9a64-fcce82be7137 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/6586bc3e-3a94-4d22-8e8c-713a86a956fb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpu2terrax execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:44:09 compute-1 nova_compute[225855]: 2026-01-20 14:44:09.700 225859 DEBUG oslo_concurrency.processutils [None req-dc51a782-12d2-48be-9a64-fcce82be7137 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/6586bc3e-3a94-4d22-8e8c-713a86a956fb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpu2terrax" returned: 0 in 0.129s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:44:09 compute-1 nova_compute[225855]: 2026-01-20 14:44:09.726 225859 DEBUG nova.storage.rbd_utils [None req-dc51a782-12d2-48be-9a64-fcce82be7137 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] rbd image 6586bc3e-3a94-4d22-8e8c-713a86a956fb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:44:09 compute-1 nova_compute[225855]: 2026-01-20 14:44:09.730 225859 DEBUG oslo_concurrency.processutils [None req-dc51a782-12d2-48be-9a64-fcce82be7137 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/6586bc3e-3a94-4d22-8e8c-713a86a956fb/disk.config 6586bc3e-3a94-4d22-8e8c-713a86a956fb_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:44:09 compute-1 nova_compute[225855]: 2026-01-20 14:44:09.960 225859 DEBUG nova.network.neutron [req-3e18d8ff-e118-4a17-aedd-174147fa8366 req-c71917d1-60b1-4698-8cf6-1d8e6757bf49 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 6586bc3e-3a94-4d22-8e8c-713a86a956fb] Updated VIF entry in instance network info cache for port 2c289e6f-295e-44c3-948a-9a6901251890. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 14:44:09 compute-1 nova_compute[225855]: 2026-01-20 14:44:09.961 225859 DEBUG nova.network.neutron [req-3e18d8ff-e118-4a17-aedd-174147fa8366 req-c71917d1-60b1-4698-8cf6-1d8e6757bf49 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 6586bc3e-3a94-4d22-8e8c-713a86a956fb] Updating instance_info_cache with network_info: [{"id": "2c289e6f-295e-44c3-948a-9a6901251890", "address": "fa:16:3e:2f:4c:e2", "network": {"id": "a19e9d1a-864f-41ee-bdea-188e65973ea5", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-916311998-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b683fcc0026242e28ba6d8fba638688e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c289e6f-29", "ovs_interfaceid": "2c289e6f-295e-44c3-948a-9a6901251890", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:44:09 compute-1 nova_compute[225855]: 2026-01-20 14:44:09.980 225859 DEBUG oslo_concurrency.lockutils [req-3e18d8ff-e118-4a17-aedd-174147fa8366 req-c71917d1-60b1-4698-8cf6-1d8e6757bf49 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-6586bc3e-3a94-4d22-8e8c-713a86a956fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:44:09 compute-1 ceph-mon[81775]: pgmap v1674: 321 pgs: 321 active+clean; 126 MiB data, 717 MiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 4.8 MiB/s wr, 230 op/s
Jan 20 14:44:10 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:44:10 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:44:10 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:44:10.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:44:10 compute-1 nova_compute[225855]: 2026-01-20 14:44:10.382 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:44:11 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:44:11 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:44:11 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:44:11.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:44:11 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/1216611789' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:44:11 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:44:11 compute-1 nova_compute[225855]: 2026-01-20 14:44:11.690 225859 DEBUG oslo_concurrency.processutils [None req-dc51a782-12d2-48be-9a64-fcce82be7137 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/6586bc3e-3a94-4d22-8e8c-713a86a956fb/disk.config 6586bc3e-3a94-4d22-8e8c-713a86a956fb_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.960s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:44:11 compute-1 nova_compute[225855]: 2026-01-20 14:44:11.691 225859 INFO nova.virt.libvirt.driver [None req-dc51a782-12d2-48be-9a64-fcce82be7137 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] [instance: 6586bc3e-3a94-4d22-8e8c-713a86a956fb] Deleting local config drive /var/lib/nova/instances/6586bc3e-3a94-4d22-8e8c-713a86a956fb/disk.config because it was imported into RBD.
Jan 20 14:44:11 compute-1 kernel: tap2c289e6f-29: entered promiscuous mode
Jan 20 14:44:11 compute-1 NetworkManager[49104]: <info>  [1768920251.7581] manager: (tap2c289e6f-29): new Tun device (/org/freedesktop/NetworkManager/Devices/130)
Jan 20 14:44:11 compute-1 ovn_controller[130490]: 2026-01-20T14:44:11Z|00299|binding|INFO|Claiming lport 2c289e6f-295e-44c3-948a-9a6901251890 for this chassis.
Jan 20 14:44:11 compute-1 ovn_controller[130490]: 2026-01-20T14:44:11Z|00300|binding|INFO|2c289e6f-295e-44c3-948a-9a6901251890: Claiming fa:16:3e:2f:4c:e2 10.100.0.9
Jan 20 14:44:11 compute-1 nova_compute[225855]: 2026-01-20 14:44:11.760 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:44:11 compute-1 nova_compute[225855]: 2026-01-20 14:44:11.824 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:44:11 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:44:11.825 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2f:4c:e2 10.100.0.9'], port_security=['fa:16:3e:2f:4c:e2 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '6586bc3e-3a94-4d22-8e8c-713a86a956fb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a19e9d1a-864f-41ee-bdea-188e65973ea5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b683fcc0026242e28ba6d8fba638688e', 'neutron:revision_number': '2', 'neutron:security_group_ids': '7ceb05b5-53ff-444a-b0ef-2ba8294d585b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=361f9a69-30a6-4be4-89ad-2a8f92877af2, chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], logical_port=2c289e6f-295e-44c3-948a-9a6901251890) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:44:11 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:44:11.827 140354 INFO neutron.agent.ovn.metadata.agent [-] Port 2c289e6f-295e-44c3-948a-9a6901251890 in datapath a19e9d1a-864f-41ee-bdea-188e65973ea5 bound to our chassis
Jan 20 14:44:11 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:44:11.828 140354 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a19e9d1a-864f-41ee-bdea-188e65973ea5
Jan 20 14:44:11 compute-1 systemd-udevd[258839]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 14:44:11 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:44:11.840 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[3dbd08c7-b10c-4e08-837b-8a54abf77152]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:44:11 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:44:11.841 140354 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapa19e9d1a-81 in ovnmeta-a19e9d1a-864f-41ee-bdea-188e65973ea5 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 20 14:44:11 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:44:11.843 229707 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapa19e9d1a-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 20 14:44:11 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:44:11.843 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[634b4beb-9707-4510-84c6-7a20de7c3570]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:44:11 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:44:11.844 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[60b168f9-17ca-449f-9cb7-d01f01e9fa30]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:44:11 compute-1 systemd-machined[194361]: New machine qemu-36-instance-00000057.
Jan 20 14:44:11 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:44:11.857 140466 DEBUG oslo.privsep.daemon [-] privsep: reply[c4d21090-ccee-4890-a967-1503a13944a7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:44:11 compute-1 NetworkManager[49104]: <info>  [1768920251.8593] device (tap2c289e6f-29): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 20 14:44:11 compute-1 NetworkManager[49104]: <info>  [1768920251.8601] device (tap2c289e6f-29): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 20 14:44:11 compute-1 systemd[1]: Started Virtual Machine qemu-36-instance-00000057.
Jan 20 14:44:11 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:44:11.883 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[0b761c1e-20dc-47f4-a613-bd469f6e1942]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:44:11 compute-1 nova_compute[225855]: 2026-01-20 14:44:11.894 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:44:11 compute-1 ovn_controller[130490]: 2026-01-20T14:44:11Z|00301|binding|INFO|Setting lport 2c289e6f-295e-44c3-948a-9a6901251890 ovn-installed in OVS
Jan 20 14:44:11 compute-1 ovn_controller[130490]: 2026-01-20T14:44:11Z|00302|binding|INFO|Setting lport 2c289e6f-295e-44c3-948a-9a6901251890 up in Southbound
Jan 20 14:44:11 compute-1 nova_compute[225855]: 2026-01-20 14:44:11.899 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:44:11 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:44:11.910 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[b52ce737-57b3-430d-8928-26917f27cf9b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:44:11 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:44:11.916 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[91e89558-f29f-4e14-94dd-2b036660b316]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:44:11 compute-1 NetworkManager[49104]: <info>  [1768920251.9175] manager: (tapa19e9d1a-80): new Veth device (/org/freedesktop/NetworkManager/Devices/131)
Jan 20 14:44:11 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:44:11.948 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[e524ce8b-3406-470c-894a-aea5e8462d31]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:44:11 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:44:11.951 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[0641c8e4-f6b1-4944-a4f9-721e4d2c53d6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:44:11 compute-1 NetworkManager[49104]: <info>  [1768920251.9790] device (tapa19e9d1a-80): carrier: link connected
Jan 20 14:44:11 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:44:11.984 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[c6bda9f1-4d65-41c1-83a0-4c3d63c5eb96]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:44:12 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:44:12.001 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[c1f1f05f-dba8-413a-8afd-46d6039c86c8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa19e9d1a-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:21:53:13'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 82], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 529090, 'reachable_time': 41331, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 258873, 'error': None, 'target': 'ovnmeta-a19e9d1a-864f-41ee-bdea-188e65973ea5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:44:12 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:44:12.015 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[be399fa3-d1d0-4d6b-a8f7-8a3d646bd0d6]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe21:5313'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 529090, 'tstamp': 529090}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 258874, 'error': None, 'target': 'ovnmeta-a19e9d1a-864f-41ee-bdea-188e65973ea5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:44:12 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:44:12.031 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[ff0b6fee-a76a-406a-b92c-6b2bfb703b40]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa19e9d1a-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:21:53:13'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 82], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 529090, 'reachable_time': 41331, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 152, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 152, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 258875, 'error': None, 'target': 'ovnmeta-a19e9d1a-864f-41ee-bdea-188e65973ea5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:44:12 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:44:12.059 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[d59a8cb4-9681-4ce2-9c2e-5c5541dce7a0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:44:12 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:44:12.119 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[e54e7b4f-a4cd-47a5-9b4b-93297dadacf3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:44:12 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:44:12.120 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa19e9d1a-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:44:12 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:44:12.120 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:44:12 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:44:12.121 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa19e9d1a-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:44:12 compute-1 nova_compute[225855]: 2026-01-20 14:44:12.122 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:44:12 compute-1 NetworkManager[49104]: <info>  [1768920252.1236] manager: (tapa19e9d1a-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/132)
Jan 20 14:44:12 compute-1 kernel: tapa19e9d1a-80: entered promiscuous mode
Jan 20 14:44:12 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:44:12.124 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa19e9d1a-80, col_values=(('external_ids', {'iface-id': '5527ab8d-a985-420b-9d5b-7e5d9baf7004'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:44:12 compute-1 ovn_controller[130490]: 2026-01-20T14:44:12Z|00303|binding|INFO|Releasing lport 5527ab8d-a985-420b-9d5b-7e5d9baf7004 from this chassis (sb_readonly=0)
Jan 20 14:44:12 compute-1 nova_compute[225855]: 2026-01-20 14:44:12.140 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:44:12 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:44:12.142 140354 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/a19e9d1a-864f-41ee-bdea-188e65973ea5.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/a19e9d1a-864f-41ee-bdea-188e65973ea5.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 20 14:44:12 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:44:12.142 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[3244e050-4df3-4468-9102-7e74193866bd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:44:12 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:44:12.143 140354 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 20 14:44:12 compute-1 ovn_metadata_agent[140349]: global
Jan 20 14:44:12 compute-1 ovn_metadata_agent[140349]:     log         /dev/log local0 debug
Jan 20 14:44:12 compute-1 ovn_metadata_agent[140349]:     log-tag     haproxy-metadata-proxy-a19e9d1a-864f-41ee-bdea-188e65973ea5
Jan 20 14:44:12 compute-1 ovn_metadata_agent[140349]:     user        root
Jan 20 14:44:12 compute-1 ovn_metadata_agent[140349]:     group       root
Jan 20 14:44:12 compute-1 ovn_metadata_agent[140349]:     maxconn     1024
Jan 20 14:44:12 compute-1 ovn_metadata_agent[140349]:     pidfile     /var/lib/neutron/external/pids/a19e9d1a-864f-41ee-bdea-188e65973ea5.pid.haproxy
Jan 20 14:44:12 compute-1 ovn_metadata_agent[140349]:     daemon
Jan 20 14:44:12 compute-1 ovn_metadata_agent[140349]: 
Jan 20 14:44:12 compute-1 ovn_metadata_agent[140349]: defaults
Jan 20 14:44:12 compute-1 ovn_metadata_agent[140349]:     log global
Jan 20 14:44:12 compute-1 ovn_metadata_agent[140349]:     mode http
Jan 20 14:44:12 compute-1 ovn_metadata_agent[140349]:     option httplog
Jan 20 14:44:12 compute-1 ovn_metadata_agent[140349]:     option dontlognull
Jan 20 14:44:12 compute-1 ovn_metadata_agent[140349]:     option http-server-close
Jan 20 14:44:12 compute-1 ovn_metadata_agent[140349]:     option forwardfor
Jan 20 14:44:12 compute-1 ovn_metadata_agent[140349]:     retries                 3
Jan 20 14:44:12 compute-1 ovn_metadata_agent[140349]:     timeout http-request    30s
Jan 20 14:44:12 compute-1 ovn_metadata_agent[140349]:     timeout connect         30s
Jan 20 14:44:12 compute-1 ovn_metadata_agent[140349]:     timeout client          32s
Jan 20 14:44:12 compute-1 ovn_metadata_agent[140349]:     timeout server          32s
Jan 20 14:44:12 compute-1 ovn_metadata_agent[140349]:     timeout http-keep-alive 30s
Jan 20 14:44:12 compute-1 ovn_metadata_agent[140349]: 
Jan 20 14:44:12 compute-1 ovn_metadata_agent[140349]: 
Jan 20 14:44:12 compute-1 ovn_metadata_agent[140349]: listen listener
Jan 20 14:44:12 compute-1 ovn_metadata_agent[140349]:     bind 169.254.169.254:80
Jan 20 14:44:12 compute-1 ovn_metadata_agent[140349]:     server metadata /var/lib/neutron/metadata_proxy
Jan 20 14:44:12 compute-1 ovn_metadata_agent[140349]:     http-request add-header X-OVN-Network-ID a19e9d1a-864f-41ee-bdea-188e65973ea5
Jan 20 14:44:12 compute-1 ovn_metadata_agent[140349]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 20 14:44:12 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:44:12.144 140354 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-a19e9d1a-864f-41ee-bdea-188e65973ea5', 'env', 'PROCESS_TAG=haproxy-a19e9d1a-864f-41ee-bdea-188e65973ea5', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/a19e9d1a-864f-41ee-bdea-188e65973ea5.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 20 14:44:12 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:44:12 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:44:12 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:44:12.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:44:12 compute-1 nova_compute[225855]: 2026-01-20 14:44:12.527 225859 DEBUG nova.compute.manager [req-9c4ec8cc-1b35-4a18-9bbe-da38e7223dd9 req-07ca47b1-b8d1-400c-9619-c0f6a4439590 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 6586bc3e-3a94-4d22-8e8c-713a86a956fb] Received event network-vif-plugged-2c289e6f-295e-44c3-948a-9a6901251890 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:44:12 compute-1 nova_compute[225855]: 2026-01-20 14:44:12.528 225859 DEBUG oslo_concurrency.lockutils [req-9c4ec8cc-1b35-4a18-9bbe-da38e7223dd9 req-07ca47b1-b8d1-400c-9619-c0f6a4439590 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "6586bc3e-3a94-4d22-8e8c-713a86a956fb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:44:12 compute-1 nova_compute[225855]: 2026-01-20 14:44:12.528 225859 DEBUG oslo_concurrency.lockutils [req-9c4ec8cc-1b35-4a18-9bbe-da38e7223dd9 req-07ca47b1-b8d1-400c-9619-c0f6a4439590 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "6586bc3e-3a94-4d22-8e8c-713a86a956fb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:44:12 compute-1 nova_compute[225855]: 2026-01-20 14:44:12.529 225859 DEBUG oslo_concurrency.lockutils [req-9c4ec8cc-1b35-4a18-9bbe-da38e7223dd9 req-07ca47b1-b8d1-400c-9619-c0f6a4439590 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "6586bc3e-3a94-4d22-8e8c-713a86a956fb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:44:12 compute-1 nova_compute[225855]: 2026-01-20 14:44:12.529 225859 DEBUG nova.compute.manager [req-9c4ec8cc-1b35-4a18-9bbe-da38e7223dd9 req-07ca47b1-b8d1-400c-9619-c0f6a4439590 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 6586bc3e-3a94-4d22-8e8c-713a86a956fb] Processing event network-vif-plugged-2c289e6f-295e-44c3-948a-9a6901251890 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 20 14:44:12 compute-1 podman[258925]: 2026-01-20 14:44:12.533678805 +0000 UTC m=+0.094321392 container create c79270ac4f85aee516fb775db6a3d75d0fa33264a9044c47814a555bf5216ad8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a19e9d1a-864f-41ee-bdea-188e65973ea5, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:44:12 compute-1 podman[258925]: 2026-01-20 14:44:12.466256596 +0000 UTC m=+0.026899253 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 20 14:44:12 compute-1 systemd[1]: Started libpod-conmon-c79270ac4f85aee516fb775db6a3d75d0fa33264a9044c47814a555bf5216ad8.scope.
Jan 20 14:44:12 compute-1 systemd[1]: Started libcrun container.
Jan 20 14:44:12 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6713c84d9e47876707c1459896eab206f8324b8157aad0c912932cf9613e3a8/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 20 14:44:12 compute-1 podman[258925]: 2026-01-20 14:44:12.63128043 +0000 UTC m=+0.191923057 container init c79270ac4f85aee516fb775db6a3d75d0fa33264a9044c47814a555bf5216ad8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a19e9d1a-864f-41ee-bdea-188e65973ea5, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 20 14:44:12 compute-1 nova_compute[225855]: 2026-01-20 14:44:12.634 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768920252.6341944, 6586bc3e-3a94-4d22-8e8c-713a86a956fb => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:44:12 compute-1 nova_compute[225855]: 2026-01-20 14:44:12.635 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 6586bc3e-3a94-4d22-8e8c-713a86a956fb] VM Started (Lifecycle Event)
Jan 20 14:44:12 compute-1 nova_compute[225855]: 2026-01-20 14:44:12.636 225859 DEBUG nova.compute.manager [None req-dc51a782-12d2-48be-9a64-fcce82be7137 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] [instance: 6586bc3e-3a94-4d22-8e8c-713a86a956fb] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 20 14:44:12 compute-1 podman[258925]: 2026-01-20 14:44:12.642280791 +0000 UTC m=+0.202923378 container start c79270ac4f85aee516fb775db6a3d75d0fa33264a9044c47814a555bf5216ad8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a19e9d1a-864f-41ee-bdea-188e65973ea5, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 20 14:44:12 compute-1 nova_compute[225855]: 2026-01-20 14:44:12.643 225859 DEBUG nova.virt.libvirt.driver [None req-dc51a782-12d2-48be-9a64-fcce82be7137 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] [instance: 6586bc3e-3a94-4d22-8e8c-713a86a956fb] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 20 14:44:12 compute-1 nova_compute[225855]: 2026-01-20 14:44:12.647 225859 INFO nova.virt.libvirt.driver [-] [instance: 6586bc3e-3a94-4d22-8e8c-713a86a956fb] Instance spawned successfully.
Jan 20 14:44:12 compute-1 nova_compute[225855]: 2026-01-20 14:44:12.647 225859 DEBUG nova.virt.libvirt.driver [None req-dc51a782-12d2-48be-9a64-fcce82be7137 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] [instance: 6586bc3e-3a94-4d22-8e8c-713a86a956fb] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 20 14:44:12 compute-1 nova_compute[225855]: 2026-01-20 14:44:12.656 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 6586bc3e-3a94-4d22-8e8c-713a86a956fb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:44:12 compute-1 nova_compute[225855]: 2026-01-20 14:44:12.659 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 6586bc3e-3a94-4d22-8e8c-713a86a956fb] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:44:12 compute-1 neutron-haproxy-ovnmeta-a19e9d1a-864f-41ee-bdea-188e65973ea5[258963]: [NOTICE]   (258968) : New worker (258970) forked
Jan 20 14:44:12 compute-1 neutron-haproxy-ovnmeta-a19e9d1a-864f-41ee-bdea-188e65973ea5[258963]: [NOTICE]   (258968) : Loading success.
Jan 20 14:44:12 compute-1 nova_compute[225855]: 2026-01-20 14:44:12.690 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 6586bc3e-3a94-4d22-8e8c-713a86a956fb] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 14:44:12 compute-1 nova_compute[225855]: 2026-01-20 14:44:12.690 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768920252.6343024, 6586bc3e-3a94-4d22-8e8c-713a86a956fb => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:44:12 compute-1 nova_compute[225855]: 2026-01-20 14:44:12.690 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 6586bc3e-3a94-4d22-8e8c-713a86a956fb] VM Paused (Lifecycle Event)
Jan 20 14:44:12 compute-1 nova_compute[225855]: 2026-01-20 14:44:12.694 225859 DEBUG nova.virt.libvirt.driver [None req-dc51a782-12d2-48be-9a64-fcce82be7137 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] [instance: 6586bc3e-3a94-4d22-8e8c-713a86a956fb] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:44:12 compute-1 nova_compute[225855]: 2026-01-20 14:44:12.694 225859 DEBUG nova.virt.libvirt.driver [None req-dc51a782-12d2-48be-9a64-fcce82be7137 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] [instance: 6586bc3e-3a94-4d22-8e8c-713a86a956fb] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:44:12 compute-1 nova_compute[225855]: 2026-01-20 14:44:12.695 225859 DEBUG nova.virt.libvirt.driver [None req-dc51a782-12d2-48be-9a64-fcce82be7137 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] [instance: 6586bc3e-3a94-4d22-8e8c-713a86a956fb] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:44:12 compute-1 nova_compute[225855]: 2026-01-20 14:44:12.696 225859 DEBUG nova.virt.libvirt.driver [None req-dc51a782-12d2-48be-9a64-fcce82be7137 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] [instance: 6586bc3e-3a94-4d22-8e8c-713a86a956fb] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:44:12 compute-1 nova_compute[225855]: 2026-01-20 14:44:12.696 225859 DEBUG nova.virt.libvirt.driver [None req-dc51a782-12d2-48be-9a64-fcce82be7137 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] [instance: 6586bc3e-3a94-4d22-8e8c-713a86a956fb] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:44:12 compute-1 nova_compute[225855]: 2026-01-20 14:44:12.697 225859 DEBUG nova.virt.libvirt.driver [None req-dc51a782-12d2-48be-9a64-fcce82be7137 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] [instance: 6586bc3e-3a94-4d22-8e8c-713a86a956fb] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:44:12 compute-1 nova_compute[225855]: 2026-01-20 14:44:12.741 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 6586bc3e-3a94-4d22-8e8c-713a86a956fb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:44:12 compute-1 nova_compute[225855]: 2026-01-20 14:44:12.745 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768920252.6387117, 6586bc3e-3a94-4d22-8e8c-713a86a956fb => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:44:12 compute-1 nova_compute[225855]: 2026-01-20 14:44:12.745 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 6586bc3e-3a94-4d22-8e8c-713a86a956fb] VM Resumed (Lifecycle Event)
Jan 20 14:44:12 compute-1 nova_compute[225855]: 2026-01-20 14:44:12.771 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 6586bc3e-3a94-4d22-8e8c-713a86a956fb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:44:12 compute-1 nova_compute[225855]: 2026-01-20 14:44:12.775 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 6586bc3e-3a94-4d22-8e8c-713a86a956fb] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:44:12 compute-1 nova_compute[225855]: 2026-01-20 14:44:12.804 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 6586bc3e-3a94-4d22-8e8c-713a86a956fb] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 14:44:12 compute-1 nova_compute[225855]: 2026-01-20 14:44:12.820 225859 INFO nova.compute.manager [None req-dc51a782-12d2-48be-9a64-fcce82be7137 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] [instance: 6586bc3e-3a94-4d22-8e8c-713a86a956fb] Took 9.94 seconds to spawn the instance on the hypervisor.
Jan 20 14:44:12 compute-1 nova_compute[225855]: 2026-01-20 14:44:12.820 225859 DEBUG nova.compute.manager [None req-dc51a782-12d2-48be-9a64-fcce82be7137 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] [instance: 6586bc3e-3a94-4d22-8e8c-713a86a956fb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:44:12 compute-1 ceph-mon[81775]: pgmap v1675: 321 pgs: 321 active+clean; 104 MiB data, 710 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 4.5 MiB/s wr, 269 op/s
Jan 20 14:44:12 compute-1 nova_compute[225855]: 2026-01-20 14:44:12.895 225859 INFO nova.compute.manager [None req-dc51a782-12d2-48be-9a64-fcce82be7137 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] [instance: 6586bc3e-3a94-4d22-8e8c-713a86a956fb] Took 11.15 seconds to build instance.
Jan 20 14:44:12 compute-1 nova_compute[225855]: 2026-01-20 14:44:12.914 225859 DEBUG oslo_concurrency.lockutils [None req-dc51a782-12d2-48be-9a64-fcce82be7137 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Lock "6586bc3e-3a94-4d22-8e8c-713a86a956fb" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.301s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:44:13 compute-1 nova_compute[225855]: 2026-01-20 14:44:13.179 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:44:13 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:44:13 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:44:13 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:44:13.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:44:14 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:44:14 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:44:14 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:44:14.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:44:14 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/1650611202' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 14:44:14 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/1650611202' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 14:44:14 compute-1 ceph-mon[81775]: pgmap v1676: 321 pgs: 321 active+clean; 88 MiB data, 700 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 3.8 MiB/s wr, 250 op/s
Jan 20 14:44:14 compute-1 nova_compute[225855]: 2026-01-20 14:44:14.674 225859 DEBUG nova.compute.manager [req-3f2c7451-045c-41b5-80f6-1f9035c07593 req-d951c8d7-557a-425f-828c-e4d29d10eb1f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 6586bc3e-3a94-4d22-8e8c-713a86a956fb] Received event network-vif-plugged-2c289e6f-295e-44c3-948a-9a6901251890 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:44:14 compute-1 nova_compute[225855]: 2026-01-20 14:44:14.674 225859 DEBUG oslo_concurrency.lockutils [req-3f2c7451-045c-41b5-80f6-1f9035c07593 req-d951c8d7-557a-425f-828c-e4d29d10eb1f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "6586bc3e-3a94-4d22-8e8c-713a86a956fb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:44:14 compute-1 nova_compute[225855]: 2026-01-20 14:44:14.675 225859 DEBUG oslo_concurrency.lockutils [req-3f2c7451-045c-41b5-80f6-1f9035c07593 req-d951c8d7-557a-425f-828c-e4d29d10eb1f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "6586bc3e-3a94-4d22-8e8c-713a86a956fb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:44:14 compute-1 nova_compute[225855]: 2026-01-20 14:44:14.675 225859 DEBUG oslo_concurrency.lockutils [req-3f2c7451-045c-41b5-80f6-1f9035c07593 req-d951c8d7-557a-425f-828c-e4d29d10eb1f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "6586bc3e-3a94-4d22-8e8c-713a86a956fb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:44:14 compute-1 nova_compute[225855]: 2026-01-20 14:44:14.675 225859 DEBUG nova.compute.manager [req-3f2c7451-045c-41b5-80f6-1f9035c07593 req-d951c8d7-557a-425f-828c-e4d29d10eb1f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 6586bc3e-3a94-4d22-8e8c-713a86a956fb] No waiting events found dispatching network-vif-plugged-2c289e6f-295e-44c3-948a-9a6901251890 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:44:14 compute-1 nova_compute[225855]: 2026-01-20 14:44:14.675 225859 WARNING nova.compute.manager [req-3f2c7451-045c-41b5-80f6-1f9035c07593 req-d951c8d7-557a-425f-828c-e4d29d10eb1f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 6586bc3e-3a94-4d22-8e8c-713a86a956fb] Received unexpected event network-vif-plugged-2c289e6f-295e-44c3-948a-9a6901251890 for instance with vm_state active and task_state None.
Jan 20 14:44:15 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:44:15 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:44:15 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:44:15.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:44:15 compute-1 nova_compute[225855]: 2026-01-20 14:44:15.385 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:44:16 compute-1 ceph-mon[81775]: pgmap v1677: 321 pgs: 321 active+clean; 88 MiB data, 700 MiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 3.6 MiB/s wr, 263 op/s
Jan 20 14:44:16 compute-1 nova_compute[225855]: 2026-01-20 14:44:16.257 225859 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1768920241.2555792, d6ec5fce-44f2-4c13-b908-c45d7a919b34 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:44:16 compute-1 nova_compute[225855]: 2026-01-20 14:44:16.257 225859 INFO nova.compute.manager [-] [instance: d6ec5fce-44f2-4c13-b908-c45d7a919b34] VM Stopped (Lifecycle Event)
Jan 20 14:44:16 compute-1 nova_compute[225855]: 2026-01-20 14:44:16.278 225859 DEBUG nova.compute.manager [None req-b4015f09-9d59-4aca-aaad-bcc6c5c30ff6 - - - - - -] [instance: d6ec5fce-44f2-4c13-b908-c45d7a919b34] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:44:16 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:44:16 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:44:16 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:44:16.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:44:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:44:16.405 140354 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:44:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:44:16.406 140354 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:44:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:44:16.407 140354 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:44:16 compute-1 nova_compute[225855]: 2026-01-20 14:44:16.614 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:44:16 compute-1 NetworkManager[49104]: <info>  [1768920256.6153] manager: (patch-provnet-b62c391b-f7a3-4a38-a0df-72ac0383ca74-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/133)
Jan 20 14:44:16 compute-1 NetworkManager[49104]: <info>  [1768920256.6186] manager: (patch-br-int-to-provnet-b62c391b-f7a3-4a38-a0df-72ac0383ca74): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/134)
Jan 20 14:44:16 compute-1 nova_compute[225855]: 2026-01-20 14:44:16.654 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:44:16 compute-1 ovn_controller[130490]: 2026-01-20T14:44:16Z|00304|binding|INFO|Releasing lport 5527ab8d-a985-420b-9d5b-7e5d9baf7004 from this chassis (sb_readonly=0)
Jan 20 14:44:16 compute-1 nova_compute[225855]: 2026-01-20 14:44:16.663 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:44:16 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:44:16 compute-1 nova_compute[225855]: 2026-01-20 14:44:16.950 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:44:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:44:16.951 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=27, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '12:bb:42', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '06:92:24:f7:15:56'}, ipsec=False) old=SB_Global(nb_cfg=26) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:44:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:44:16.952 140354 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 20 14:44:17 compute-1 nova_compute[225855]: 2026-01-20 14:44:17.052 225859 DEBUG nova.compute.manager [req-23231e28-fd47-4a6c-b281-2588441c8457 req-3ccdfe15-dd36-4f5f-9511-82b526feb4f3 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 6586bc3e-3a94-4d22-8e8c-713a86a956fb] Received event network-changed-2c289e6f-295e-44c3-948a-9a6901251890 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:44:17 compute-1 nova_compute[225855]: 2026-01-20 14:44:17.053 225859 DEBUG nova.compute.manager [req-23231e28-fd47-4a6c-b281-2588441c8457 req-3ccdfe15-dd36-4f5f-9511-82b526feb4f3 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 6586bc3e-3a94-4d22-8e8c-713a86a956fb] Refreshing instance network info cache due to event network-changed-2c289e6f-295e-44c3-948a-9a6901251890. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 14:44:17 compute-1 nova_compute[225855]: 2026-01-20 14:44:17.054 225859 DEBUG oslo_concurrency.lockutils [req-23231e28-fd47-4a6c-b281-2588441c8457 req-3ccdfe15-dd36-4f5f-9511-82b526feb4f3 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-6586bc3e-3a94-4d22-8e8c-713a86a956fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:44:17 compute-1 nova_compute[225855]: 2026-01-20 14:44:17.054 225859 DEBUG oslo_concurrency.lockutils [req-23231e28-fd47-4a6c-b281-2588441c8457 req-3ccdfe15-dd36-4f5f-9511-82b526feb4f3 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-6586bc3e-3a94-4d22-8e8c-713a86a956fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:44:17 compute-1 nova_compute[225855]: 2026-01-20 14:44:17.055 225859 DEBUG nova.network.neutron [req-23231e28-fd47-4a6c-b281-2588441c8457 req-3ccdfe15-dd36-4f5f-9511-82b526feb4f3 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 6586bc3e-3a94-4d22-8e8c-713a86a956fb] Refreshing network info cache for port 2c289e6f-295e-44c3-948a-9a6901251890 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 14:44:17 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/2953342997' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:44:17 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:44:17 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:44:17 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:44:17.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:44:18 compute-1 nova_compute[225855]: 2026-01-20 14:44:18.181 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:44:18 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:44:18 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:44:18 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:44:18.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:44:18 compute-1 ceph-mon[81775]: pgmap v1678: 321 pgs: 321 active+clean; 88 MiB data, 700 MiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 455 KiB/s wr, 174 op/s
Jan 20 14:44:19 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:44:19 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:44:19 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:44:19.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:44:19 compute-1 nova_compute[225855]: 2026-01-20 14:44:19.304 225859 DEBUG nova.network.neutron [req-23231e28-fd47-4a6c-b281-2588441c8457 req-3ccdfe15-dd36-4f5f-9511-82b526feb4f3 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 6586bc3e-3a94-4d22-8e8c-713a86a956fb] Updated VIF entry in instance network info cache for port 2c289e6f-295e-44c3-948a-9a6901251890. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 14:44:19 compute-1 nova_compute[225855]: 2026-01-20 14:44:19.305 225859 DEBUG nova.network.neutron [req-23231e28-fd47-4a6c-b281-2588441c8457 req-3ccdfe15-dd36-4f5f-9511-82b526feb4f3 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 6586bc3e-3a94-4d22-8e8c-713a86a956fb] Updating instance_info_cache with network_info: [{"id": "2c289e6f-295e-44c3-948a-9a6901251890", "address": "fa:16:3e:2f:4c:e2", "network": {"id": "a19e9d1a-864f-41ee-bdea-188e65973ea5", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-916311998-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b683fcc0026242e28ba6d8fba638688e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c289e6f-29", "ovs_interfaceid": "2c289e6f-295e-44c3-948a-9a6901251890", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:44:19 compute-1 nova_compute[225855]: 2026-01-20 14:44:19.339 225859 DEBUG oslo_concurrency.lockutils [req-23231e28-fd47-4a6c-b281-2588441c8457 req-3ccdfe15-dd36-4f5f-9511-82b526feb4f3 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-6586bc3e-3a94-4d22-8e8c-713a86a956fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:44:20 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:44:20 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:44:20 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:44:20.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:44:20 compute-1 nova_compute[225855]: 2026-01-20 14:44:20.387 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:44:20 compute-1 ceph-mon[81775]: pgmap v1679: 321 pgs: 321 active+clean; 124 MiB data, 716 MiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 1.3 MiB/s wr, 141 op/s
Jan 20 14:44:20 compute-1 nova_compute[225855]: 2026-01-20 14:44:20.612 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:44:20 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:44:20.955 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5ffd4ac3-9266-4927-98ad-20a17782c725, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '27'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:44:21 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:44:21 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:44:21 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:44:21.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:44:21 compute-1 nova_compute[225855]: 2026-01-20 14:44:21.324 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:44:21 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:44:21 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/805797725' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:44:22 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:44:22 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:44:22 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:44:22.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:44:22 compute-1 ceph-mon[81775]: pgmap v1680: 321 pgs: 321 active+clean; 129 MiB data, 718 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 1.6 MiB/s wr, 143 op/s
Jan 20 14:44:22 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/723143886' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:44:23 compute-1 nova_compute[225855]: 2026-01-20 14:44:23.184 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:44:23 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:44:23 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:44:23 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:44:23.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:44:23 compute-1 ceph-mon[81775]: pgmap v1681: 321 pgs: 321 active+clean; 134 MiB data, 720 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 103 op/s
Jan 20 14:44:24 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:44:24 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:44:24 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:44:24.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:44:24 compute-1 nova_compute[225855]: 2026-01-20 14:44:24.593 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:44:24 compute-1 ovn_controller[130490]: 2026-01-20T14:44:24Z|00040|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:2f:4c:e2 10.100.0.9
Jan 20 14:44:24 compute-1 ovn_controller[130490]: 2026-01-20T14:44:24Z|00041|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:2f:4c:e2 10.100.0.9
Jan 20 14:44:25 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:44:25 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:44:25 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:44:25.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:44:25 compute-1 nova_compute[225855]: 2026-01-20 14:44:25.389 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:44:26 compute-1 ceph-mon[81775]: pgmap v1682: 321 pgs: 321 active+clean; 145 MiB data, 730 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 2.7 MiB/s wr, 135 op/s
Jan 20 14:44:26 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:44:26 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:44:26 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:44:26.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:44:26 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:44:26 compute-1 nova_compute[225855]: 2026-01-20 14:44:26.989 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:44:27 compute-1 sudo[258987]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:44:27 compute-1 sudo[258987]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:44:27 compute-1 sudo[258987]: pam_unix(sudo:session): session closed for user root
Jan 20 14:44:27 compute-1 sudo[259012]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:44:27 compute-1 sudo[259012]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:44:27 compute-1 sudo[259012]: pam_unix(sudo:session): session closed for user root
Jan 20 14:44:27 compute-1 podman[259036]: 2026-01-20 14:44:27.194897774 +0000 UTC m=+0.085174334 container health_status 72be95c12e041eb2cd1a16ad70dfa1391d9f03a5f3180a614ead9459dcfcd46f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_id=ovn_controller)
Jan 20 14:44:27 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:44:27 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:44:27 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:44:27.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:44:27 compute-1 ceph-mon[81775]: pgmap v1683: 321 pgs: 321 active+clean; 149 MiB data, 733 MiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 3.0 MiB/s wr, 111 op/s
Jan 20 14:44:28 compute-1 nova_compute[225855]: 2026-01-20 14:44:28.185 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:44:28 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:44:28 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:44:28 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:44:28.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:44:28 compute-1 nova_compute[225855]: 2026-01-20 14:44:28.411 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:44:28 compute-1 nova_compute[225855]: 2026-01-20 14:44:28.412 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 14:44:28 compute-1 nova_compute[225855]: 2026-01-20 14:44:28.412 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 20 14:44:28 compute-1 nova_compute[225855]: 2026-01-20 14:44:28.641 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "refresh_cache-6586bc3e-3a94-4d22-8e8c-713a86a956fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:44:28 compute-1 nova_compute[225855]: 2026-01-20 14:44:28.641 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquired lock "refresh_cache-6586bc3e-3a94-4d22-8e8c-713a86a956fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:44:28 compute-1 nova_compute[225855]: 2026-01-20 14:44:28.642 225859 DEBUG nova.network.neutron [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] [instance: 6586bc3e-3a94-4d22-8e8c-713a86a956fb] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 20 14:44:28 compute-1 nova_compute[225855]: 2026-01-20 14:44:28.642 225859 DEBUG nova.objects.instance [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 6586bc3e-3a94-4d22-8e8c-713a86a956fb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:44:29 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:44:29 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:44:29 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:44:29.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:44:29 compute-1 ceph-mon[81775]: pgmap v1684: 321 pgs: 321 active+clean; 161 MiB data, 745 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 155 op/s
Jan 20 14:44:29 compute-1 nova_compute[225855]: 2026-01-20 14:44:29.973 225859 DEBUG nova.network.neutron [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] [instance: 6586bc3e-3a94-4d22-8e8c-713a86a956fb] Updating instance_info_cache with network_info: [{"id": "2c289e6f-295e-44c3-948a-9a6901251890", "address": "fa:16:3e:2f:4c:e2", "network": {"id": "a19e9d1a-864f-41ee-bdea-188e65973ea5", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-916311998-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b683fcc0026242e28ba6d8fba638688e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c289e6f-29", "ovs_interfaceid": "2c289e6f-295e-44c3-948a-9a6901251890", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:44:29 compute-1 nova_compute[225855]: 2026-01-20 14:44:29.987 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Releasing lock "refresh_cache-6586bc3e-3a94-4d22-8e8c-713a86a956fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:44:29 compute-1 nova_compute[225855]: 2026-01-20 14:44:29.987 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] [instance: 6586bc3e-3a94-4d22-8e8c-713a86a956fb] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 20 14:44:29 compute-1 nova_compute[225855]: 2026-01-20 14:44:29.988 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:44:29 compute-1 nova_compute[225855]: 2026-01-20 14:44:29.988 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:44:30 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:44:30 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:44:30 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:44:30.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:44:30 compute-1 nova_compute[225855]: 2026-01-20 14:44:30.391 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:44:30 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:44:30 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3024534450' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:44:30 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/3024534450' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:44:30 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/3258013465' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:44:31 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:44:31 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:44:31 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:44:31.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:44:31 compute-1 nova_compute[225855]: 2026-01-20 14:44:31.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:44:31 compute-1 nova_compute[225855]: 2026-01-20 14:44:31.340 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:44:31 compute-1 nova_compute[225855]: 2026-01-20 14:44:31.340 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 14:44:31 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:44:32 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:44:32 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:44:32 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:44:32.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:44:32 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/1478923842' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:44:32 compute-1 ceph-mon[81775]: pgmap v1685: 321 pgs: 321 active+clean; 167 MiB data, 745 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.6 MiB/s wr, 151 op/s
Jan 20 14:44:33 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/1114195465' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:44:33 compute-1 nova_compute[225855]: 2026-01-20 14:44:33.187 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:44:33 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:44:33 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3327996647' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:44:33 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:44:33 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:44:33 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:44:33.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:44:34 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/3327996647' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:44:34 compute-1 ceph-mon[81775]: pgmap v1686: 321 pgs: 321 active+clean; 172 MiB data, 752 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 2.4 MiB/s wr, 147 op/s
Jan 20 14:44:34 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/899748910' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:44:34 compute-1 nova_compute[225855]: 2026-01-20 14:44:34.340 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:44:34 compute-1 nova_compute[225855]: 2026-01-20 14:44:34.341 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:44:34 compute-1 nova_compute[225855]: 2026-01-20 14:44:34.363 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:44:34 compute-1 nova_compute[225855]: 2026-01-20 14:44:34.364 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:44:34 compute-1 nova_compute[225855]: 2026-01-20 14:44:34.364 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:44:34 compute-1 nova_compute[225855]: 2026-01-20 14:44:34.365 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 14:44:34 compute-1 nova_compute[225855]: 2026-01-20 14:44:34.365 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:44:34 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:44:34 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:44:34 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:44:34.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:44:34 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:44:34 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3129307005' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:44:34 compute-1 nova_compute[225855]: 2026-01-20 14:44:34.862 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.497s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:44:34 compute-1 nova_compute[225855]: 2026-01-20 14:44:34.989 225859 DEBUG nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] skipping disk for instance-00000057 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 14:44:34 compute-1 nova_compute[225855]: 2026-01-20 14:44:34.990 225859 DEBUG nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] skipping disk for instance-00000057 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 14:44:35 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:44:35 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:44:35 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:44:35.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:44:35 compute-1 nova_compute[225855]: 2026-01-20 14:44:35.434 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:44:35 compute-1 nova_compute[225855]: 2026-01-20 14:44:35.458 225859 WARNING nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 14:44:35 compute-1 nova_compute[225855]: 2026-01-20 14:44:35.459 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4342MB free_disk=20.915180206298828GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 14:44:35 compute-1 nova_compute[225855]: 2026-01-20 14:44:35.459 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:44:35 compute-1 nova_compute[225855]: 2026-01-20 14:44:35.460 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:44:35 compute-1 nova_compute[225855]: 2026-01-20 14:44:35.530 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Instance 6586bc3e-3a94-4d22-8e8c-713a86a956fb actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 20 14:44:35 compute-1 nova_compute[225855]: 2026-01-20 14:44:35.531 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 14:44:35 compute-1 nova_compute[225855]: 2026-01-20 14:44:35.531 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 14:44:35 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/3129307005' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:44:35 compute-1 nova_compute[225855]: 2026-01-20 14:44:35.566 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:44:35 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:44:35 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2245713301' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:44:36 compute-1 nova_compute[225855]: 2026-01-20 14:44:36.000 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:44:36 compute-1 nova_compute[225855]: 2026-01-20 14:44:36.005 225859 DEBUG nova.compute.provider_tree [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:44:36 compute-1 nova_compute[225855]: 2026-01-20 14:44:36.027 225859 DEBUG nova.scheduler.client.report [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:44:36 compute-1 nova_compute[225855]: 2026-01-20 14:44:36.049 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 14:44:36 compute-1 nova_compute[225855]: 2026-01-20 14:44:36.050 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.590s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:44:36 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:44:36 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:44:36 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:44:36.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:44:36 compute-1 podman[259115]: 2026-01-20 14:44:36.478105558 +0000 UTC m=+0.053708622 container health_status 533cd8e0c3b8b7f910f15cef49e7f879ab46c12ce646a35628c15f13725eb500 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Jan 20 14:44:36 compute-1 ceph-mon[81775]: pgmap v1687: 321 pgs: 321 active+clean; 160 MiB data, 754 MiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 3.4 MiB/s wr, 190 op/s
Jan 20 14:44:36 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/137075845' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:44:36 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/2245713301' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:44:36 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/2194028691' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:44:36 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:44:37 compute-1 nova_compute[225855]: 2026-01-20 14:44:37.044 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:44:37 compute-1 nova_compute[225855]: 2026-01-20 14:44:37.044 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:44:37 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:44:37 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:44:37 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:44:37.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:44:37 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/4240458191' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:44:38 compute-1 nova_compute[225855]: 2026-01-20 14:44:38.190 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:44:38 compute-1 nova_compute[225855]: 2026-01-20 14:44:38.334 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:44:38 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:44:38 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:44:38 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:44:38.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:44:38 compute-1 ceph-mon[81775]: pgmap v1688: 321 pgs: 321 active+clean; 173 MiB data, 760 MiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 3.4 MiB/s wr, 188 op/s
Jan 20 14:44:38 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/1059237412' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:44:38 compute-1 nova_compute[225855]: 2026-01-20 14:44:38.954 225859 DEBUG oslo_concurrency.lockutils [None req-2a13799f-b3e4-48ea-b45d-a9254b040ace 9051b1fd0e0b40c2be07afc6da803903 144d821b8f624db687f0e009c5e06d8b - - default default] Acquiring lock "d236e7eb-2b7e-4031-b851-ae2790528213" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:44:38 compute-1 nova_compute[225855]: 2026-01-20 14:44:38.954 225859 DEBUG oslo_concurrency.lockutils [None req-2a13799f-b3e4-48ea-b45d-a9254b040ace 9051b1fd0e0b40c2be07afc6da803903 144d821b8f624db687f0e009c5e06d8b - - default default] Lock "d236e7eb-2b7e-4031-b851-ae2790528213" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:44:38 compute-1 nova_compute[225855]: 2026-01-20 14:44:38.969 225859 DEBUG nova.compute.manager [None req-2a13799f-b3e4-48ea-b45d-a9254b040ace 9051b1fd0e0b40c2be07afc6da803903 144d821b8f624db687f0e009c5e06d8b - - default default] [instance: d236e7eb-2b7e-4031-b851-ae2790528213] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 20 14:44:39 compute-1 nova_compute[225855]: 2026-01-20 14:44:39.035 225859 DEBUG oslo_concurrency.lockutils [None req-2a13799f-b3e4-48ea-b45d-a9254b040ace 9051b1fd0e0b40c2be07afc6da803903 144d821b8f624db687f0e009c5e06d8b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:44:39 compute-1 nova_compute[225855]: 2026-01-20 14:44:39.035 225859 DEBUG oslo_concurrency.lockutils [None req-2a13799f-b3e4-48ea-b45d-a9254b040ace 9051b1fd0e0b40c2be07afc6da803903 144d821b8f624db687f0e009c5e06d8b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:44:39 compute-1 nova_compute[225855]: 2026-01-20 14:44:39.041 225859 DEBUG nova.virt.hardware [None req-2a13799f-b3e4-48ea-b45d-a9254b040ace 9051b1fd0e0b40c2be07afc6da803903 144d821b8f624db687f0e009c5e06d8b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 20 14:44:39 compute-1 nova_compute[225855]: 2026-01-20 14:44:39.041 225859 INFO nova.compute.claims [None req-2a13799f-b3e4-48ea-b45d-a9254b040ace 9051b1fd0e0b40c2be07afc6da803903 144d821b8f624db687f0e009c5e06d8b - - default default] [instance: d236e7eb-2b7e-4031-b851-ae2790528213] Claim successful on node compute-1.ctlplane.example.com
Jan 20 14:44:39 compute-1 nova_compute[225855]: 2026-01-20 14:44:39.226 225859 DEBUG oslo_concurrency.processutils [None req-2a13799f-b3e4-48ea-b45d-a9254b040ace 9051b1fd0e0b40c2be07afc6da803903 144d821b8f624db687f0e009c5e06d8b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:44:39 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:44:39 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:44:39 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:44:39.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:44:39 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:44:39 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4220284943' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:44:39 compute-1 nova_compute[225855]: 2026-01-20 14:44:39.669 225859 DEBUG oslo_concurrency.processutils [None req-2a13799f-b3e4-48ea-b45d-a9254b040ace 9051b1fd0e0b40c2be07afc6da803903 144d821b8f624db687f0e009c5e06d8b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:44:39 compute-1 nova_compute[225855]: 2026-01-20 14:44:39.675 225859 DEBUG nova.compute.provider_tree [None req-2a13799f-b3e4-48ea-b45d-a9254b040ace 9051b1fd0e0b40c2be07afc6da803903 144d821b8f624db687f0e009c5e06d8b - - default default] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:44:39 compute-1 nova_compute[225855]: 2026-01-20 14:44:39.699 225859 DEBUG nova.scheduler.client.report [None req-2a13799f-b3e4-48ea-b45d-a9254b040ace 9051b1fd0e0b40c2be07afc6da803903 144d821b8f624db687f0e009c5e06d8b - - default default] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:44:39 compute-1 nova_compute[225855]: 2026-01-20 14:44:39.731 225859 DEBUG oslo_concurrency.lockutils [None req-2a13799f-b3e4-48ea-b45d-a9254b040ace 9051b1fd0e0b40c2be07afc6da803903 144d821b8f624db687f0e009c5e06d8b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.695s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:44:39 compute-1 nova_compute[225855]: 2026-01-20 14:44:39.732 225859 DEBUG nova.compute.manager [None req-2a13799f-b3e4-48ea-b45d-a9254b040ace 9051b1fd0e0b40c2be07afc6da803903 144d821b8f624db687f0e009c5e06d8b - - default default] [instance: d236e7eb-2b7e-4031-b851-ae2790528213] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 20 14:44:39 compute-1 nova_compute[225855]: 2026-01-20 14:44:39.828 225859 DEBUG nova.compute.manager [None req-2a13799f-b3e4-48ea-b45d-a9254b040ace 9051b1fd0e0b40c2be07afc6da803903 144d821b8f624db687f0e009c5e06d8b - - default default] [instance: d236e7eb-2b7e-4031-b851-ae2790528213] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 20 14:44:39 compute-1 nova_compute[225855]: 2026-01-20 14:44:39.829 225859 DEBUG nova.network.neutron [None req-2a13799f-b3e4-48ea-b45d-a9254b040ace 9051b1fd0e0b40c2be07afc6da803903 144d821b8f624db687f0e009c5e06d8b - - default default] [instance: d236e7eb-2b7e-4031-b851-ae2790528213] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 20 14:44:39 compute-1 nova_compute[225855]: 2026-01-20 14:44:39.861 225859 INFO nova.virt.libvirt.driver [None req-2a13799f-b3e4-48ea-b45d-a9254b040ace 9051b1fd0e0b40c2be07afc6da803903 144d821b8f624db687f0e009c5e06d8b - - default default] [instance: d236e7eb-2b7e-4031-b851-ae2790528213] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 20 14:44:39 compute-1 nova_compute[225855]: 2026-01-20 14:44:39.885 225859 DEBUG nova.compute.manager [None req-2a13799f-b3e4-48ea-b45d-a9254b040ace 9051b1fd0e0b40c2be07afc6da803903 144d821b8f624db687f0e009c5e06d8b - - default default] [instance: d236e7eb-2b7e-4031-b851-ae2790528213] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 20 14:44:39 compute-1 nova_compute[225855]: 2026-01-20 14:44:39.939 225859 INFO nova.virt.block_device [None req-2a13799f-b3e4-48ea-b45d-a9254b040ace 9051b1fd0e0b40c2be07afc6da803903 144d821b8f624db687f0e009c5e06d8b - - default default] [instance: d236e7eb-2b7e-4031-b851-ae2790528213] Booting with volume 2441d1fb-fc23-4a6d-b88d-4d82b035b65f at /dev/vda
Jan 20 14:44:40 compute-1 nova_compute[225855]: 2026-01-20 14:44:40.086 225859 DEBUG os_brick.utils [None req-2a13799f-b3e4-48ea-b45d-a9254b040ace 9051b1fd0e0b40c2be07afc6da803903 144d821b8f624db687f0e009c5e06d8b - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.101', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-1.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Jan 20 14:44:40 compute-1 nova_compute[225855]: 2026-01-20 14:44:40.088 225859 DEBUG nova.policy [None req-2a13799f-b3e4-48ea-b45d-a9254b040ace 9051b1fd0e0b40c2be07afc6da803903 144d821b8f624db687f0e009c5e06d8b - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '9051b1fd0e0b40c2be07afc6da803903', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '144d821b8f624db687f0e009c5e06d8b', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 20 14:44:40 compute-1 nova_compute[225855]: 2026-01-20 14:44:40.087 231081 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:44:40 compute-1 nova_compute[225855]: 2026-01-20 14:44:40.099 231081 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:44:40 compute-1 nova_compute[225855]: 2026-01-20 14:44:40.100 231081 DEBUG oslo.privsep.daemon [-] privsep: reply[4e156cc0-2bba-4f6d-ad2c-c3153abdb748]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:44:40 compute-1 nova_compute[225855]: 2026-01-20 14:44:40.101 231081 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:44:40 compute-1 nova_compute[225855]: 2026-01-20 14:44:40.109 231081 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:44:40 compute-1 nova_compute[225855]: 2026-01-20 14:44:40.109 231081 DEBUG oslo.privsep.daemon [-] privsep: reply[40c9403a-398a-40c0-8f5d-04c3a3a3673e]: (4, ('InitiatorName=iqn.1994-05.com.redhat:1821ea3dc03d', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:44:40 compute-1 nova_compute[225855]: 2026-01-20 14:44:40.112 231081 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:44:40 compute-1 nova_compute[225855]: 2026-01-20 14:44:40.120 231081 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:44:40 compute-1 nova_compute[225855]: 2026-01-20 14:44:40.121 231081 DEBUG oslo.privsep.daemon [-] privsep: reply[9e33cc23-1c23-4f53-ba70-d0893d6ba742]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:44:40 compute-1 nova_compute[225855]: 2026-01-20 14:44:40.122 231081 DEBUG oslo.privsep.daemon [-] privsep: reply[92ec4c17-b21c-4b93-9ed8-aaddd5afd883]: (4, '870b1f1c-f19c-477b-b282-ee6eeba50974') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:44:40 compute-1 nova_compute[225855]: 2026-01-20 14:44:40.122 225859 DEBUG oslo_concurrency.processutils [None req-2a13799f-b3e4-48ea-b45d-a9254b040ace 9051b1fd0e0b40c2be07afc6da803903 144d821b8f624db687f0e009c5e06d8b - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:44:40 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/4220284943' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:44:40 compute-1 ceph-mon[81775]: pgmap v1689: 321 pgs: 321 active+clean; 193 MiB data, 762 MiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 3.8 MiB/s wr, 176 op/s
Jan 20 14:44:40 compute-1 nova_compute[225855]: 2026-01-20 14:44:40.151 225859 DEBUG oslo_concurrency.processutils [None req-2a13799f-b3e4-48ea-b45d-a9254b040ace 9051b1fd0e0b40c2be07afc6da803903 144d821b8f624db687f0e009c5e06d8b - - default default] CMD "nvme version" returned: 0 in 0.028s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:44:40 compute-1 nova_compute[225855]: 2026-01-20 14:44:40.155 225859 DEBUG os_brick.initiator.connectors.lightos [None req-2a13799f-b3e4-48ea-b45d-a9254b040ace 9051b1fd0e0b40c2be07afc6da803903 144d821b8f624db687f0e009c5e06d8b - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Jan 20 14:44:40 compute-1 nova_compute[225855]: 2026-01-20 14:44:40.155 225859 DEBUG os_brick.initiator.connectors.lightos [None req-2a13799f-b3e4-48ea-b45d-a9254b040ace 9051b1fd0e0b40c2be07afc6da803903 144d821b8f624db687f0e009c5e06d8b - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Jan 20 14:44:40 compute-1 nova_compute[225855]: 2026-01-20 14:44:40.156 225859 DEBUG os_brick.initiator.connectors.lightos [None req-2a13799f-b3e4-48ea-b45d-a9254b040ace 9051b1fd0e0b40c2be07afc6da803903 144d821b8f624db687f0e009c5e06d8b - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:5350774e-8b5e-4dba-80a9-92d405981c1d dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Jan 20 14:44:40 compute-1 nova_compute[225855]: 2026-01-20 14:44:40.156 225859 DEBUG os_brick.utils [None req-2a13799f-b3e4-48ea-b45d-a9254b040ace 9051b1fd0e0b40c2be07afc6da803903 144d821b8f624db687f0e009c5e06d8b - - default default] <== get_connector_properties: return (69ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.101', 'host': 'compute-1.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:1821ea3dc03d', 'do_local_attach': False, 'nvme_hostid': '5350774e-8b5e-4dba-80a9-92d405981c1d', 'system uuid': '870b1f1c-f19c-477b-b282-ee6eeba50974', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:5350774e-8b5e-4dba-80a9-92d405981c1d', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Jan 20 14:44:40 compute-1 nova_compute[225855]: 2026-01-20 14:44:40.157 225859 DEBUG nova.virt.block_device [None req-2a13799f-b3e4-48ea-b45d-a9254b040ace 9051b1fd0e0b40c2be07afc6da803903 144d821b8f624db687f0e009c5e06d8b - - default default] [instance: d236e7eb-2b7e-4031-b851-ae2790528213] Updating existing volume attachment record: 6b0d250e-a1b5-4326-abb8-6df5e007e9a0 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Jan 20 14:44:40 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:44:40 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:44:40 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:44:40.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:44:40 compute-1 nova_compute[225855]: 2026-01-20 14:44:40.436 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:44:41 compute-1 nova_compute[225855]: 2026-01-20 14:44:41.148 225859 DEBUG nova.compute.manager [None req-2a13799f-b3e4-48ea-b45d-a9254b040ace 9051b1fd0e0b40c2be07afc6da803903 144d821b8f624db687f0e009c5e06d8b - - default default] [instance: d236e7eb-2b7e-4031-b851-ae2790528213] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 20 14:44:41 compute-1 nova_compute[225855]: 2026-01-20 14:44:41.149 225859 DEBUG nova.virt.libvirt.driver [None req-2a13799f-b3e4-48ea-b45d-a9254b040ace 9051b1fd0e0b40c2be07afc6da803903 144d821b8f624db687f0e009c5e06d8b - - default default] [instance: d236e7eb-2b7e-4031-b851-ae2790528213] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 20 14:44:41 compute-1 nova_compute[225855]: 2026-01-20 14:44:41.150 225859 INFO nova.virt.libvirt.driver [None req-2a13799f-b3e4-48ea-b45d-a9254b040ace 9051b1fd0e0b40c2be07afc6da803903 144d821b8f624db687f0e009c5e06d8b - - default default] [instance: d236e7eb-2b7e-4031-b851-ae2790528213] Creating image(s)
Jan 20 14:44:41 compute-1 nova_compute[225855]: 2026-01-20 14:44:41.150 225859 DEBUG nova.virt.libvirt.driver [None req-2a13799f-b3e4-48ea-b45d-a9254b040ace 9051b1fd0e0b40c2be07afc6da803903 144d821b8f624db687f0e009c5e06d8b - - default default] [instance: d236e7eb-2b7e-4031-b851-ae2790528213] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Jan 20 14:44:41 compute-1 nova_compute[225855]: 2026-01-20 14:44:41.150 225859 DEBUG nova.virt.libvirt.driver [None req-2a13799f-b3e4-48ea-b45d-a9254b040ace 9051b1fd0e0b40c2be07afc6da803903 144d821b8f624db687f0e009c5e06d8b - - default default] [instance: d236e7eb-2b7e-4031-b851-ae2790528213] Ensure instance console log exists: /var/lib/nova/instances/d236e7eb-2b7e-4031-b851-ae2790528213/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 20 14:44:41 compute-1 nova_compute[225855]: 2026-01-20 14:44:41.151 225859 DEBUG oslo_concurrency.lockutils [None req-2a13799f-b3e4-48ea-b45d-a9254b040ace 9051b1fd0e0b40c2be07afc6da803903 144d821b8f624db687f0e009c5e06d8b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:44:41 compute-1 nova_compute[225855]: 2026-01-20 14:44:41.151 225859 DEBUG oslo_concurrency.lockutils [None req-2a13799f-b3e4-48ea-b45d-a9254b040ace 9051b1fd0e0b40c2be07afc6da803903 144d821b8f624db687f0e009c5e06d8b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:44:41 compute-1 nova_compute[225855]: 2026-01-20 14:44:41.151 225859 DEBUG oslo_concurrency.lockutils [None req-2a13799f-b3e4-48ea-b45d-a9254b040ace 9051b1fd0e0b40c2be07afc6da803903 144d821b8f624db687f0e009c5e06d8b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:44:41 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:44:41 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:44:41 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:44:41.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:44:41 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/2622766125' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:44:41 compute-1 nova_compute[225855]: 2026-01-20 14:44:41.572 225859 DEBUG nova.network.neutron [None req-2a13799f-b3e4-48ea-b45d-a9254b040ace 9051b1fd0e0b40c2be07afc6da803903 144d821b8f624db687f0e009c5e06d8b - - default default] [instance: d236e7eb-2b7e-4031-b851-ae2790528213] Successfully created port: 019ea2f5-1721-42e7-9c77-4fc1599f8101 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 20 14:44:41 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:44:42 compute-1 ceph-mon[81775]: pgmap v1690: 321 pgs: 321 active+clean; 227 MiB data, 777 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 4.1 MiB/s wr, 140 op/s
Jan 20 14:44:42 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:44:42 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:44:42 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:44:42.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:44:43 compute-1 nova_compute[225855]: 2026-01-20 14:44:43.192 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:44:43 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:44:43 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:44:43 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:44:43.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:44:44 compute-1 nova_compute[225855]: 2026-01-20 14:44:44.064 225859 DEBUG nova.network.neutron [None req-2a13799f-b3e4-48ea-b45d-a9254b040ace 9051b1fd0e0b40c2be07afc6da803903 144d821b8f624db687f0e009c5e06d8b - - default default] [instance: d236e7eb-2b7e-4031-b851-ae2790528213] Successfully updated port: 019ea2f5-1721-42e7-9c77-4fc1599f8101 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 20 14:44:44 compute-1 nova_compute[225855]: 2026-01-20 14:44:44.085 225859 DEBUG oslo_concurrency.lockutils [None req-2a13799f-b3e4-48ea-b45d-a9254b040ace 9051b1fd0e0b40c2be07afc6da803903 144d821b8f624db687f0e009c5e06d8b - - default default] Acquiring lock "refresh_cache-d236e7eb-2b7e-4031-b851-ae2790528213" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:44:44 compute-1 nova_compute[225855]: 2026-01-20 14:44:44.085 225859 DEBUG oslo_concurrency.lockutils [None req-2a13799f-b3e4-48ea-b45d-a9254b040ace 9051b1fd0e0b40c2be07afc6da803903 144d821b8f624db687f0e009c5e06d8b - - default default] Acquired lock "refresh_cache-d236e7eb-2b7e-4031-b851-ae2790528213" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:44:44 compute-1 nova_compute[225855]: 2026-01-20 14:44:44.085 225859 DEBUG nova.network.neutron [None req-2a13799f-b3e4-48ea-b45d-a9254b040ace 9051b1fd0e0b40c2be07afc6da803903 144d821b8f624db687f0e009c5e06d8b - - default default] [instance: d236e7eb-2b7e-4031-b851-ae2790528213] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 14:44:44 compute-1 nova_compute[225855]: 2026-01-20 14:44:44.343 225859 DEBUG nova.network.neutron [None req-2a13799f-b3e4-48ea-b45d-a9254b040ace 9051b1fd0e0b40c2be07afc6da803903 144d821b8f624db687f0e009c5e06d8b - - default default] [instance: d236e7eb-2b7e-4031-b851-ae2790528213] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 20 14:44:44 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:44:44 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:44:44 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:44:44.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:44:44 compute-1 ceph-mon[81775]: pgmap v1691: 321 pgs: 321 active+clean; 253 MiB data, 787 MiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 4.9 MiB/s wr, 184 op/s
Jan 20 14:44:45 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:44:45 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:44:45 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:44:45.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:44:45 compute-1 nova_compute[225855]: 2026-01-20 14:44:45.438 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:44:45 compute-1 nova_compute[225855]: 2026-01-20 14:44:45.500 225859 DEBUG nova.network.neutron [None req-2a13799f-b3e4-48ea-b45d-a9254b040ace 9051b1fd0e0b40c2be07afc6da803903 144d821b8f624db687f0e009c5e06d8b - - default default] [instance: d236e7eb-2b7e-4031-b851-ae2790528213] Updating instance_info_cache with network_info: [{"id": "019ea2f5-1721-42e7-9c77-4fc1599f8101", "address": "fa:16:3e:08:2c:78", "network": {"id": "eef55bf0-ad6a-4f88-adac-d746a869d579", "bridge": "br-int", "label": "tempest-ServersTestBootFromVolume-101625742-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "144d821b8f624db687f0e009c5e06d8b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap019ea2f5-17", "ovs_interfaceid": "019ea2f5-1721-42e7-9c77-4fc1599f8101", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:44:45 compute-1 nova_compute[225855]: 2026-01-20 14:44:45.527 225859 DEBUG oslo_concurrency.lockutils [None req-2a13799f-b3e4-48ea-b45d-a9254b040ace 9051b1fd0e0b40c2be07afc6da803903 144d821b8f624db687f0e009c5e06d8b - - default default] Releasing lock "refresh_cache-d236e7eb-2b7e-4031-b851-ae2790528213" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:44:45 compute-1 nova_compute[225855]: 2026-01-20 14:44:45.528 225859 DEBUG nova.compute.manager [None req-2a13799f-b3e4-48ea-b45d-a9254b040ace 9051b1fd0e0b40c2be07afc6da803903 144d821b8f624db687f0e009c5e06d8b - - default default] [instance: d236e7eb-2b7e-4031-b851-ae2790528213] Instance network_info: |[{"id": "019ea2f5-1721-42e7-9c77-4fc1599f8101", "address": "fa:16:3e:08:2c:78", "network": {"id": "eef55bf0-ad6a-4f88-adac-d746a869d579", "bridge": "br-int", "label": "tempest-ServersTestBootFromVolume-101625742-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "144d821b8f624db687f0e009c5e06d8b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap019ea2f5-17", "ovs_interfaceid": "019ea2f5-1721-42e7-9c77-4fc1599f8101", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 20 14:44:45 compute-1 nova_compute[225855]: 2026-01-20 14:44:45.530 225859 DEBUG nova.virt.libvirt.driver [None req-2a13799f-b3e4-48ea-b45d-a9254b040ace 9051b1fd0e0b40c2be07afc6da803903 144d821b8f624db687f0e009c5e06d8b - - default default] [instance: d236e7eb-2b7e-4031-b851-ae2790528213] Start _get_guest_xml network_info=[{"id": "019ea2f5-1721-42e7-9c77-4fc1599f8101", "address": "fa:16:3e:08:2c:78", "network": {"id": "eef55bf0-ad6a-4f88-adac-d746a869d579", "bridge": "br-int", "label": "tempest-ServersTestBootFromVolume-101625742-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "144d821b8f624db687f0e009c5e06d8b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap019ea2f5-17", "ovs_interfaceid": "019ea2f5-1721-42e7-9c77-4fc1599f8101", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': 
'/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'delete_on_termination': True, 'device_type': 'disk', 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-2441d1fb-fc23-4a6d-b88d-4d82b035b65f', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '2441d1fb-fc23-4a6d-b88d-4d82b035b65f', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': 'd236e7eb-2b7e-4031-b851-ae2790528213', 'attached_at': '', 'detached_at': '', 'volume_id': '2441d1fb-fc23-4a6d-b88d-4d82b035b65f', 'serial': '2441d1fb-fc23-4a6d-b88d-4d82b035b65f'}, 'guest_format': None, 'boot_index': 0, 'mount_device': '/dev/vda', 'attachment_id': '6b0d250e-a1b5-4326-abb8-6df5e007e9a0', 'disk_bus': 'virtio', 'volume_type': None}], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 20 14:44:45 compute-1 nova_compute[225855]: 2026-01-20 14:44:45.535 225859 WARNING nova.virt.libvirt.driver [None req-2a13799f-b3e4-48ea-b45d-a9254b040ace 9051b1fd0e0b40c2be07afc6da803903 144d821b8f624db687f0e009c5e06d8b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 14:44:45 compute-1 nova_compute[225855]: 2026-01-20 14:44:45.539 225859 DEBUG nova.virt.libvirt.host [None req-2a13799f-b3e4-48ea-b45d-a9254b040ace 9051b1fd0e0b40c2be07afc6da803903 144d821b8f624db687f0e009c5e06d8b - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 20 14:44:45 compute-1 nova_compute[225855]: 2026-01-20 14:44:45.540 225859 DEBUG nova.virt.libvirt.host [None req-2a13799f-b3e4-48ea-b45d-a9254b040ace 9051b1fd0e0b40c2be07afc6da803903 144d821b8f624db687f0e009c5e06d8b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 20 14:44:45 compute-1 nova_compute[225855]: 2026-01-20 14:44:45.543 225859 DEBUG nova.virt.libvirt.host [None req-2a13799f-b3e4-48ea-b45d-a9254b040ace 9051b1fd0e0b40c2be07afc6da803903 144d821b8f624db687f0e009c5e06d8b - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 20 14:44:45 compute-1 nova_compute[225855]: 2026-01-20 14:44:45.544 225859 DEBUG nova.virt.libvirt.host [None req-2a13799f-b3e4-48ea-b45d-a9254b040ace 9051b1fd0e0b40c2be07afc6da803903 144d821b8f624db687f0e009c5e06d8b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 20 14:44:45 compute-1 nova_compute[225855]: 2026-01-20 14:44:45.544 225859 DEBUG nova.virt.libvirt.driver [None req-2a13799f-b3e4-48ea-b45d-a9254b040ace 9051b1fd0e0b40c2be07afc6da803903 144d821b8f624db687f0e009c5e06d8b - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 20 14:44:45 compute-1 nova_compute[225855]: 2026-01-20 14:44:45.545 225859 DEBUG nova.virt.hardware [None req-2a13799f-b3e4-48ea-b45d-a9254b040ace 9051b1fd0e0b40c2be07afc6da803903 144d821b8f624db687f0e009c5e06d8b - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-20T14:21:55Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='522deaab-a741-4dbb-932d-d8b13a211c33',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 20 14:44:45 compute-1 nova_compute[225855]: 2026-01-20 14:44:45.545 225859 DEBUG nova.virt.hardware [None req-2a13799f-b3e4-48ea-b45d-a9254b040ace 9051b1fd0e0b40c2be07afc6da803903 144d821b8f624db687f0e009c5e06d8b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 20 14:44:45 compute-1 nova_compute[225855]: 2026-01-20 14:44:45.545 225859 DEBUG nova.virt.hardware [None req-2a13799f-b3e4-48ea-b45d-a9254b040ace 9051b1fd0e0b40c2be07afc6da803903 144d821b8f624db687f0e009c5e06d8b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 20 14:44:45 compute-1 nova_compute[225855]: 2026-01-20 14:44:45.545 225859 DEBUG nova.virt.hardware [None req-2a13799f-b3e4-48ea-b45d-a9254b040ace 9051b1fd0e0b40c2be07afc6da803903 144d821b8f624db687f0e009c5e06d8b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 20 14:44:45 compute-1 nova_compute[225855]: 2026-01-20 14:44:45.546 225859 DEBUG nova.virt.hardware [None req-2a13799f-b3e4-48ea-b45d-a9254b040ace 9051b1fd0e0b40c2be07afc6da803903 144d821b8f624db687f0e009c5e06d8b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 20 14:44:45 compute-1 nova_compute[225855]: 2026-01-20 14:44:45.546 225859 DEBUG nova.virt.hardware [None req-2a13799f-b3e4-48ea-b45d-a9254b040ace 9051b1fd0e0b40c2be07afc6da803903 144d821b8f624db687f0e009c5e06d8b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 20 14:44:45 compute-1 nova_compute[225855]: 2026-01-20 14:44:45.546 225859 DEBUG nova.virt.hardware [None req-2a13799f-b3e4-48ea-b45d-a9254b040ace 9051b1fd0e0b40c2be07afc6da803903 144d821b8f624db687f0e009c5e06d8b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 20 14:44:45 compute-1 nova_compute[225855]: 2026-01-20 14:44:45.546 225859 DEBUG nova.virt.hardware [None req-2a13799f-b3e4-48ea-b45d-a9254b040ace 9051b1fd0e0b40c2be07afc6da803903 144d821b8f624db687f0e009c5e06d8b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 20 14:44:45 compute-1 nova_compute[225855]: 2026-01-20 14:44:45.546 225859 DEBUG nova.virt.hardware [None req-2a13799f-b3e4-48ea-b45d-a9254b040ace 9051b1fd0e0b40c2be07afc6da803903 144d821b8f624db687f0e009c5e06d8b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 20 14:44:45 compute-1 nova_compute[225855]: 2026-01-20 14:44:45.546 225859 DEBUG nova.virt.hardware [None req-2a13799f-b3e4-48ea-b45d-a9254b040ace 9051b1fd0e0b40c2be07afc6da803903 144d821b8f624db687f0e009c5e06d8b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 20 14:44:45 compute-1 nova_compute[225855]: 2026-01-20 14:44:45.547 225859 DEBUG nova.virt.hardware [None req-2a13799f-b3e4-48ea-b45d-a9254b040ace 9051b1fd0e0b40c2be07afc6da803903 144d821b8f624db687f0e009c5e06d8b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 20 14:44:45 compute-1 nova_compute[225855]: 2026-01-20 14:44:45.576 225859 DEBUG nova.storage.rbd_utils [None req-2a13799f-b3e4-48ea-b45d-a9254b040ace 9051b1fd0e0b40c2be07afc6da803903 144d821b8f624db687f0e009c5e06d8b - - default default] rbd image d236e7eb-2b7e-4031-b851-ae2790528213_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:44:45 compute-1 nova_compute[225855]: 2026-01-20 14:44:45.582 225859 DEBUG oslo_concurrency.processutils [None req-2a13799f-b3e4-48ea-b45d-a9254b040ace 9051b1fd0e0b40c2be07afc6da803903 144d821b8f624db687f0e009c5e06d8b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:44:45 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/3502547636' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:44:45 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/3417755684' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:44:45 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/714686278' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:44:45 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/2544638103' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:44:46 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:44:46 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/504919309' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:44:46 compute-1 nova_compute[225855]: 2026-01-20 14:44:46.026 225859 DEBUG oslo_concurrency.processutils [None req-2a13799f-b3e4-48ea-b45d-a9254b040ace 9051b1fd0e0b40c2be07afc6da803903 144d821b8f624db687f0e009c5e06d8b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:44:46 compute-1 nova_compute[225855]: 2026-01-20 14:44:46.154 225859 DEBUG nova.virt.libvirt.vif [None req-2a13799f-b3e4-48ea-b45d-a9254b040ace 9051b1fd0e0b40c2be07afc6da803903 144d821b8f624db687f0e009c5e06d8b - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T14:44:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestBootFromVolume-server-2113030518',display_name='tempest-ServersTestBootFromVolume-server-2113030518',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverstestbootfromvolume-server-2113030518',id=91,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCQAnzboP+S2NNqULNJ590LALw1e9CwKJIrHyMoISyM6baLxtf4y84xsP0kgRy7bjF2fbaXhodzuoV+0+uj6MQE6N4Q+sHthmobL8XMJ7dekwWVSr0yZf1dgnshwlxyeDQ==',key_name='tempest-keypair-1996933376',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='144d821b8f624db687f0e009c5e06d8b',ramdisk_id='',reservation_id='r-55pxq2es',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-ServersTestBootFromVolume-628592216',owner_user_name='tempest-ServersTestBootFromVolume-628592216-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T14:44:39Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='9051b1fd0e0b40c2be07afc6da803903',uuid=d236e7eb-2b7e-4031-b851-ae2790528213,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "019ea2f5-1721-42e7-9c77-4fc1599f8101", "address": "fa:16:3e:08:2c:78", "network": {"id": "eef55bf0-ad6a-4f88-adac-d746a869d579", "bridge": "br-int", "label": "tempest-ServersTestBootFromVolume-101625742-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, 
"floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "144d821b8f624db687f0e009c5e06d8b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap019ea2f5-17", "ovs_interfaceid": "019ea2f5-1721-42e7-9c77-4fc1599f8101", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 20 14:44:46 compute-1 nova_compute[225855]: 2026-01-20 14:44:46.154 225859 DEBUG nova.network.os_vif_util [None req-2a13799f-b3e4-48ea-b45d-a9254b040ace 9051b1fd0e0b40c2be07afc6da803903 144d821b8f624db687f0e009c5e06d8b - - default default] Converting VIF {"id": "019ea2f5-1721-42e7-9c77-4fc1599f8101", "address": "fa:16:3e:08:2c:78", "network": {"id": "eef55bf0-ad6a-4f88-adac-d746a869d579", "bridge": "br-int", "label": "tempest-ServersTestBootFromVolume-101625742-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "144d821b8f624db687f0e009c5e06d8b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap019ea2f5-17", "ovs_interfaceid": "019ea2f5-1721-42e7-9c77-4fc1599f8101", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:44:46 compute-1 nova_compute[225855]: 2026-01-20 14:44:46.157 225859 DEBUG nova.network.os_vif_util [None req-2a13799f-b3e4-48ea-b45d-a9254b040ace 9051b1fd0e0b40c2be07afc6da803903 144d821b8f624db687f0e009c5e06d8b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:08:2c:78,bridge_name='br-int',has_traffic_filtering=True,id=019ea2f5-1721-42e7-9c77-4fc1599f8101,network=Network(eef55bf0-ad6a-4f88-adac-d746a869d579),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap019ea2f5-17') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:44:46 compute-1 nova_compute[225855]: 2026-01-20 14:44:46.158 225859 DEBUG nova.objects.instance [None req-2a13799f-b3e4-48ea-b45d-a9254b040ace 9051b1fd0e0b40c2be07afc6da803903 144d821b8f624db687f0e009c5e06d8b - - default default] Lazy-loading 'pci_devices' on Instance uuid d236e7eb-2b7e-4031-b851-ae2790528213 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:44:46 compute-1 nova_compute[225855]: 2026-01-20 14:44:46.171 225859 DEBUG nova.virt.libvirt.driver [None req-2a13799f-b3e4-48ea-b45d-a9254b040ace 9051b1fd0e0b40c2be07afc6da803903 144d821b8f624db687f0e009c5e06d8b - - default default] [instance: d236e7eb-2b7e-4031-b851-ae2790528213] End _get_guest_xml xml=<domain type="kvm">
Jan 20 14:44:46 compute-1 nova_compute[225855]:   <uuid>d236e7eb-2b7e-4031-b851-ae2790528213</uuid>
Jan 20 14:44:46 compute-1 nova_compute[225855]:   <name>instance-0000005b</name>
Jan 20 14:44:46 compute-1 nova_compute[225855]:   <memory>131072</memory>
Jan 20 14:44:46 compute-1 nova_compute[225855]:   <vcpu>1</vcpu>
Jan 20 14:44:46 compute-1 nova_compute[225855]:   <metadata>
Jan 20 14:44:46 compute-1 nova_compute[225855]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 14:44:46 compute-1 nova_compute[225855]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 14:44:46 compute-1 nova_compute[225855]:       <nova:name>tempest-ServersTestBootFromVolume-server-2113030518</nova:name>
Jan 20 14:44:46 compute-1 nova_compute[225855]:       <nova:creationTime>2026-01-20 14:44:45</nova:creationTime>
Jan 20 14:44:46 compute-1 nova_compute[225855]:       <nova:flavor name="m1.nano">
Jan 20 14:44:46 compute-1 nova_compute[225855]:         <nova:memory>128</nova:memory>
Jan 20 14:44:46 compute-1 nova_compute[225855]:         <nova:disk>1</nova:disk>
Jan 20 14:44:46 compute-1 nova_compute[225855]:         <nova:swap>0</nova:swap>
Jan 20 14:44:46 compute-1 nova_compute[225855]:         <nova:ephemeral>0</nova:ephemeral>
Jan 20 14:44:46 compute-1 nova_compute[225855]:         <nova:vcpus>1</nova:vcpus>
Jan 20 14:44:46 compute-1 nova_compute[225855]:       </nova:flavor>
Jan 20 14:44:46 compute-1 nova_compute[225855]:       <nova:owner>
Jan 20 14:44:46 compute-1 nova_compute[225855]:         <nova:user uuid="9051b1fd0e0b40c2be07afc6da803903">tempest-ServersTestBootFromVolume-628592216-project-member</nova:user>
Jan 20 14:44:46 compute-1 nova_compute[225855]:         <nova:project uuid="144d821b8f624db687f0e009c5e06d8b">tempest-ServersTestBootFromVolume-628592216</nova:project>
Jan 20 14:44:46 compute-1 nova_compute[225855]:       </nova:owner>
Jan 20 14:44:46 compute-1 nova_compute[225855]:       <nova:ports>
Jan 20 14:44:46 compute-1 nova_compute[225855]:         <nova:port uuid="019ea2f5-1721-42e7-9c77-4fc1599f8101">
Jan 20 14:44:46 compute-1 nova_compute[225855]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Jan 20 14:44:46 compute-1 nova_compute[225855]:         </nova:port>
Jan 20 14:44:46 compute-1 nova_compute[225855]:       </nova:ports>
Jan 20 14:44:46 compute-1 nova_compute[225855]:     </nova:instance>
Jan 20 14:44:46 compute-1 nova_compute[225855]:   </metadata>
Jan 20 14:44:46 compute-1 nova_compute[225855]:   <sysinfo type="smbios">
Jan 20 14:44:46 compute-1 nova_compute[225855]:     <system>
Jan 20 14:44:46 compute-1 nova_compute[225855]:       <entry name="manufacturer">RDO</entry>
Jan 20 14:44:46 compute-1 nova_compute[225855]:       <entry name="product">OpenStack Compute</entry>
Jan 20 14:44:46 compute-1 nova_compute[225855]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 14:44:46 compute-1 nova_compute[225855]:       <entry name="serial">d236e7eb-2b7e-4031-b851-ae2790528213</entry>
Jan 20 14:44:46 compute-1 nova_compute[225855]:       <entry name="uuid">d236e7eb-2b7e-4031-b851-ae2790528213</entry>
Jan 20 14:44:46 compute-1 nova_compute[225855]:       <entry name="family">Virtual Machine</entry>
Jan 20 14:44:46 compute-1 nova_compute[225855]:     </system>
Jan 20 14:44:46 compute-1 nova_compute[225855]:   </sysinfo>
Jan 20 14:44:46 compute-1 nova_compute[225855]:   <os>
Jan 20 14:44:46 compute-1 nova_compute[225855]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 20 14:44:46 compute-1 nova_compute[225855]:     <boot dev="hd"/>
Jan 20 14:44:46 compute-1 nova_compute[225855]:     <smbios mode="sysinfo"/>
Jan 20 14:44:46 compute-1 nova_compute[225855]:   </os>
Jan 20 14:44:46 compute-1 nova_compute[225855]:   <features>
Jan 20 14:44:46 compute-1 nova_compute[225855]:     <acpi/>
Jan 20 14:44:46 compute-1 nova_compute[225855]:     <apic/>
Jan 20 14:44:46 compute-1 nova_compute[225855]:     <vmcoreinfo/>
Jan 20 14:44:46 compute-1 nova_compute[225855]:   </features>
Jan 20 14:44:46 compute-1 nova_compute[225855]:   <clock offset="utc">
Jan 20 14:44:46 compute-1 nova_compute[225855]:     <timer name="pit" tickpolicy="delay"/>
Jan 20 14:44:46 compute-1 nova_compute[225855]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 20 14:44:46 compute-1 nova_compute[225855]:     <timer name="hpet" present="no"/>
Jan 20 14:44:46 compute-1 nova_compute[225855]:   </clock>
Jan 20 14:44:46 compute-1 nova_compute[225855]:   <cpu mode="custom" match="exact">
Jan 20 14:44:46 compute-1 nova_compute[225855]:     <model>Nehalem</model>
Jan 20 14:44:46 compute-1 nova_compute[225855]:     <topology sockets="1" cores="1" threads="1"/>
Jan 20 14:44:46 compute-1 nova_compute[225855]:   </cpu>
Jan 20 14:44:46 compute-1 nova_compute[225855]:   <devices>
Jan 20 14:44:46 compute-1 nova_compute[225855]:     <disk type="network" device="cdrom">
Jan 20 14:44:46 compute-1 nova_compute[225855]:       <driver type="raw" cache="none"/>
Jan 20 14:44:46 compute-1 nova_compute[225855]:       <source protocol="rbd" name="vms/d236e7eb-2b7e-4031-b851-ae2790528213_disk.config">
Jan 20 14:44:46 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:44:46 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:44:46 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:44:46 compute-1 nova_compute[225855]:       </source>
Jan 20 14:44:46 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 14:44:46 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:44:46 compute-1 nova_compute[225855]:       </auth>
Jan 20 14:44:46 compute-1 nova_compute[225855]:       <target dev="sda" bus="sata"/>
Jan 20 14:44:46 compute-1 nova_compute[225855]:     </disk>
Jan 20 14:44:46 compute-1 nova_compute[225855]:     <disk type="network" device="disk">
Jan 20 14:44:46 compute-1 nova_compute[225855]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 20 14:44:46 compute-1 nova_compute[225855]:       <source protocol="rbd" name="volumes/volume-2441d1fb-fc23-4a6d-b88d-4d82b035b65f">
Jan 20 14:44:46 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:44:46 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:44:46 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:44:46 compute-1 nova_compute[225855]:       </source>
Jan 20 14:44:46 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 14:44:46 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:44:46 compute-1 nova_compute[225855]:       </auth>
Jan 20 14:44:46 compute-1 nova_compute[225855]:       <target dev="vda" bus="virtio"/>
Jan 20 14:44:46 compute-1 nova_compute[225855]:       <serial>2441d1fb-fc23-4a6d-b88d-4d82b035b65f</serial>
Jan 20 14:44:46 compute-1 nova_compute[225855]:     </disk>
Jan 20 14:44:46 compute-1 nova_compute[225855]:     <interface type="ethernet">
Jan 20 14:44:46 compute-1 nova_compute[225855]:       <mac address="fa:16:3e:08:2c:78"/>
Jan 20 14:44:46 compute-1 nova_compute[225855]:       <model type="virtio"/>
Jan 20 14:44:46 compute-1 nova_compute[225855]:       <driver name="vhost" rx_queue_size="512"/>
Jan 20 14:44:46 compute-1 nova_compute[225855]:       <mtu size="1442"/>
Jan 20 14:44:46 compute-1 nova_compute[225855]:       <target dev="tap019ea2f5-17"/>
Jan 20 14:44:46 compute-1 nova_compute[225855]:     </interface>
Jan 20 14:44:46 compute-1 nova_compute[225855]:     <serial type="pty">
Jan 20 14:44:46 compute-1 nova_compute[225855]:       <log file="/var/lib/nova/instances/d236e7eb-2b7e-4031-b851-ae2790528213/console.log" append="off"/>
Jan 20 14:44:46 compute-1 nova_compute[225855]:     </serial>
Jan 20 14:44:46 compute-1 nova_compute[225855]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 20 14:44:46 compute-1 nova_compute[225855]:     <video>
Jan 20 14:44:46 compute-1 nova_compute[225855]:       <model type="virtio"/>
Jan 20 14:44:46 compute-1 nova_compute[225855]:     </video>
Jan 20 14:44:46 compute-1 nova_compute[225855]:     <input type="tablet" bus="usb"/>
Jan 20 14:44:46 compute-1 nova_compute[225855]:     <rng model="virtio">
Jan 20 14:44:46 compute-1 nova_compute[225855]:       <backend model="random">/dev/urandom</backend>
Jan 20 14:44:46 compute-1 nova_compute[225855]:     </rng>
Jan 20 14:44:46 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root"/>
Jan 20 14:44:46 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:44:46 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:44:46 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:44:46 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:44:46 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:44:46 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:44:46 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:44:46 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:44:46 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:44:46 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:44:46 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:44:46 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:44:46 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:44:46 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:44:46 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:44:46 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:44:46 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:44:46 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:44:46 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:44:46 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:44:46 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:44:46 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:44:46 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:44:46 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:44:46 compute-1 nova_compute[225855]:     <controller type="usb" index="0"/>
Jan 20 14:44:46 compute-1 nova_compute[225855]:     <memballoon model="virtio">
Jan 20 14:44:46 compute-1 nova_compute[225855]:       <stats period="10"/>
Jan 20 14:44:46 compute-1 nova_compute[225855]:     </memballoon>
Jan 20 14:44:46 compute-1 nova_compute[225855]:   </devices>
Jan 20 14:44:46 compute-1 nova_compute[225855]: </domain>
Jan 20 14:44:46 compute-1 nova_compute[225855]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 20 14:44:46 compute-1 nova_compute[225855]: 2026-01-20 14:44:46.171 225859 DEBUG nova.compute.manager [None req-2a13799f-b3e4-48ea-b45d-a9254b040ace 9051b1fd0e0b40c2be07afc6da803903 144d821b8f624db687f0e009c5e06d8b - - default default] [instance: d236e7eb-2b7e-4031-b851-ae2790528213] Preparing to wait for external event network-vif-plugged-019ea2f5-1721-42e7-9c77-4fc1599f8101 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 20 14:44:46 compute-1 nova_compute[225855]: 2026-01-20 14:44:46.171 225859 DEBUG oslo_concurrency.lockutils [None req-2a13799f-b3e4-48ea-b45d-a9254b040ace 9051b1fd0e0b40c2be07afc6da803903 144d821b8f624db687f0e009c5e06d8b - - default default] Acquiring lock "d236e7eb-2b7e-4031-b851-ae2790528213-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:44:46 compute-1 nova_compute[225855]: 2026-01-20 14:44:46.172 225859 DEBUG oslo_concurrency.lockutils [None req-2a13799f-b3e4-48ea-b45d-a9254b040ace 9051b1fd0e0b40c2be07afc6da803903 144d821b8f624db687f0e009c5e06d8b - - default default] Lock "d236e7eb-2b7e-4031-b851-ae2790528213-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:44:46 compute-1 nova_compute[225855]: 2026-01-20 14:44:46.172 225859 DEBUG oslo_concurrency.lockutils [None req-2a13799f-b3e4-48ea-b45d-a9254b040ace 9051b1fd0e0b40c2be07afc6da803903 144d821b8f624db687f0e009c5e06d8b - - default default] Lock "d236e7eb-2b7e-4031-b851-ae2790528213-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:44:46 compute-1 nova_compute[225855]: 2026-01-20 14:44:46.172 225859 DEBUG nova.virt.libvirt.vif [None req-2a13799f-b3e4-48ea-b45d-a9254b040ace 9051b1fd0e0b40c2be07afc6da803903 144d821b8f624db687f0e009c5e06d8b - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T14:44:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestBootFromVolume-server-2113030518',display_name='tempest-ServersTestBootFromVolume-server-2113030518',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverstestbootfromvolume-server-2113030518',id=91,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCQAnzboP+S2NNqULNJ590LALw1e9CwKJIrHyMoISyM6baLxtf4y84xsP0kgRy7bjF2fbaXhodzuoV+0+uj6MQE6N4Q+sHthmobL8XMJ7dekwWVSr0yZf1dgnshwlxyeDQ==',key_name='tempest-keypair-1996933376',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='144d821b8f624db687f0e009c5e06d8b',ramdisk_id='',reservation_id='r-55pxq2es',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-ServersTestBootFromVolume-628592216',owner_user_name='tempest-ServersTestBootFromVolume-628592216-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T14:44:39Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='9051b1fd0e0b40c2be07afc6da803903',uuid=d236e7eb-2b7e-4031-b851-ae2790528213,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "019ea2f5-1721-42e7-9c77-4fc1599f8101", "address": "fa:16:3e:08:2c:78", "network": {"id": "eef55bf0-ad6a-4f88-adac-d746a869d579", "bridge": "br-int", "label": "tempest-ServersTestBootFromVolume-101625742-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": 
{}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "144d821b8f624db687f0e009c5e06d8b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap019ea2f5-17", "ovs_interfaceid": "019ea2f5-1721-42e7-9c77-4fc1599f8101", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 20 14:44:46 compute-1 nova_compute[225855]: 2026-01-20 14:44:46.173 225859 DEBUG nova.network.os_vif_util [None req-2a13799f-b3e4-48ea-b45d-a9254b040ace 9051b1fd0e0b40c2be07afc6da803903 144d821b8f624db687f0e009c5e06d8b - - default default] Converting VIF {"id": "019ea2f5-1721-42e7-9c77-4fc1599f8101", "address": "fa:16:3e:08:2c:78", "network": {"id": "eef55bf0-ad6a-4f88-adac-d746a869d579", "bridge": "br-int", "label": "tempest-ServersTestBootFromVolume-101625742-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "144d821b8f624db687f0e009c5e06d8b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap019ea2f5-17", "ovs_interfaceid": "019ea2f5-1721-42e7-9c77-4fc1599f8101", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:44:46 compute-1 nova_compute[225855]: 2026-01-20 14:44:46.173 225859 DEBUG nova.network.os_vif_util [None req-2a13799f-b3e4-48ea-b45d-a9254b040ace 9051b1fd0e0b40c2be07afc6da803903 144d821b8f624db687f0e009c5e06d8b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:08:2c:78,bridge_name='br-int',has_traffic_filtering=True,id=019ea2f5-1721-42e7-9c77-4fc1599f8101,network=Network(eef55bf0-ad6a-4f88-adac-d746a869d579),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap019ea2f5-17') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:44:46 compute-1 nova_compute[225855]: 2026-01-20 14:44:46.173 225859 DEBUG os_vif [None req-2a13799f-b3e4-48ea-b45d-a9254b040ace 9051b1fd0e0b40c2be07afc6da803903 144d821b8f624db687f0e009c5e06d8b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:08:2c:78,bridge_name='br-int',has_traffic_filtering=True,id=019ea2f5-1721-42e7-9c77-4fc1599f8101,network=Network(eef55bf0-ad6a-4f88-adac-d746a869d579),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap019ea2f5-17') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 20 14:44:46 compute-1 nova_compute[225855]: 2026-01-20 14:44:46.174 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:44:46 compute-1 nova_compute[225855]: 2026-01-20 14:44:46.174 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:44:46 compute-1 nova_compute[225855]: 2026-01-20 14:44:46.175 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:44:46 compute-1 nova_compute[225855]: 2026-01-20 14:44:46.177 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:44:46 compute-1 nova_compute[225855]: 2026-01-20 14:44:46.177 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap019ea2f5-17, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:44:46 compute-1 nova_compute[225855]: 2026-01-20 14:44:46.178 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap019ea2f5-17, col_values=(('external_ids', {'iface-id': '019ea2f5-1721-42e7-9c77-4fc1599f8101', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:08:2c:78', 'vm-uuid': 'd236e7eb-2b7e-4031-b851-ae2790528213'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:44:46 compute-1 nova_compute[225855]: 2026-01-20 14:44:46.218 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:44:46 compute-1 NetworkManager[49104]: <info>  [1768920286.2204] manager: (tap019ea2f5-17): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/135)
Jan 20 14:44:46 compute-1 nova_compute[225855]: 2026-01-20 14:44:46.221 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 14:44:46 compute-1 nova_compute[225855]: 2026-01-20 14:44:46.225 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:44:46 compute-1 nova_compute[225855]: 2026-01-20 14:44:46.227 225859 INFO os_vif [None req-2a13799f-b3e4-48ea-b45d-a9254b040ace 9051b1fd0e0b40c2be07afc6da803903 144d821b8f624db687f0e009c5e06d8b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:08:2c:78,bridge_name='br-int',has_traffic_filtering=True,id=019ea2f5-1721-42e7-9c77-4fc1599f8101,network=Network(eef55bf0-ad6a-4f88-adac-d746a869d579),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap019ea2f5-17')
Jan 20 14:44:46 compute-1 nova_compute[225855]: 2026-01-20 14:44:46.268 225859 DEBUG nova.compute.manager [req-67efbc5e-1efe-45e7-bc12-9e7d78b41365 req-965e8d8f-f43f-40b6-9665-38cc3d54deae 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d236e7eb-2b7e-4031-b851-ae2790528213] Received event network-changed-019ea2f5-1721-42e7-9c77-4fc1599f8101 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:44:46 compute-1 nova_compute[225855]: 2026-01-20 14:44:46.269 225859 DEBUG nova.compute.manager [req-67efbc5e-1efe-45e7-bc12-9e7d78b41365 req-965e8d8f-f43f-40b6-9665-38cc3d54deae 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d236e7eb-2b7e-4031-b851-ae2790528213] Refreshing instance network info cache due to event network-changed-019ea2f5-1721-42e7-9c77-4fc1599f8101. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 14:44:46 compute-1 nova_compute[225855]: 2026-01-20 14:44:46.269 225859 DEBUG oslo_concurrency.lockutils [req-67efbc5e-1efe-45e7-bc12-9e7d78b41365 req-965e8d8f-f43f-40b6-9665-38cc3d54deae 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-d236e7eb-2b7e-4031-b851-ae2790528213" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:44:46 compute-1 nova_compute[225855]: 2026-01-20 14:44:46.269 225859 DEBUG oslo_concurrency.lockutils [req-67efbc5e-1efe-45e7-bc12-9e7d78b41365 req-965e8d8f-f43f-40b6-9665-38cc3d54deae 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-d236e7eb-2b7e-4031-b851-ae2790528213" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:44:46 compute-1 nova_compute[225855]: 2026-01-20 14:44:46.270 225859 DEBUG nova.network.neutron [req-67efbc5e-1efe-45e7-bc12-9e7d78b41365 req-965e8d8f-f43f-40b6-9665-38cc3d54deae 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d236e7eb-2b7e-4031-b851-ae2790528213] Refreshing network info cache for port 019ea2f5-1721-42e7-9c77-4fc1599f8101 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 14:44:46 compute-1 nova_compute[225855]: 2026-01-20 14:44:46.278 225859 DEBUG nova.virt.libvirt.driver [None req-2a13799f-b3e4-48ea-b45d-a9254b040ace 9051b1fd0e0b40c2be07afc6da803903 144d821b8f624db687f0e009c5e06d8b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:44:46 compute-1 nova_compute[225855]: 2026-01-20 14:44:46.278 225859 DEBUG nova.virt.libvirt.driver [None req-2a13799f-b3e4-48ea-b45d-a9254b040ace 9051b1fd0e0b40c2be07afc6da803903 144d821b8f624db687f0e009c5e06d8b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:44:46 compute-1 nova_compute[225855]: 2026-01-20 14:44:46.279 225859 DEBUG nova.virt.libvirt.driver [None req-2a13799f-b3e4-48ea-b45d-a9254b040ace 9051b1fd0e0b40c2be07afc6da803903 144d821b8f624db687f0e009c5e06d8b - - default default] No VIF found with MAC fa:16:3e:08:2c:78, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 20 14:44:46 compute-1 nova_compute[225855]: 2026-01-20 14:44:46.279 225859 INFO nova.virt.libvirt.driver [None req-2a13799f-b3e4-48ea-b45d-a9254b040ace 9051b1fd0e0b40c2be07afc6da803903 144d821b8f624db687f0e009c5e06d8b - - default default] [instance: d236e7eb-2b7e-4031-b851-ae2790528213] Using config drive
Jan 20 14:44:46 compute-1 nova_compute[225855]: 2026-01-20 14:44:46.306 225859 DEBUG nova.storage.rbd_utils [None req-2a13799f-b3e4-48ea-b45d-a9254b040ace 9051b1fd0e0b40c2be07afc6da803903 144d821b8f624db687f0e009c5e06d8b - - default default] rbd image d236e7eb-2b7e-4031-b851-ae2790528213_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:44:46 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:44:46 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:44:46 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:44:46.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:44:46 compute-1 ceph-mon[81775]: pgmap v1692: 321 pgs: 321 active+clean; 230 MiB data, 778 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 5.3 MiB/s wr, 208 op/s
Jan 20 14:44:46 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/504919309' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:44:46 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:44:47 compute-1 sudo[259229]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:44:47 compute-1 sudo[259229]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:44:47 compute-1 sudo[259229]: pam_unix(sudo:session): session closed for user root
Jan 20 14:44:47 compute-1 sudo[259254]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:44:47 compute-1 sudo[259254]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:44:47 compute-1 sudo[259254]: pam_unix(sudo:session): session closed for user root
Jan 20 14:44:47 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:44:47 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:44:47 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:44:47.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:44:47 compute-1 ceph-mon[81775]: pgmap v1693: 321 pgs: 321 active+clean; 221 MiB data, 772 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 4.2 MiB/s wr, 185 op/s
Jan 20 14:44:48 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:44:48 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:44:48 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:44:48.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:44:48 compute-1 nova_compute[225855]: 2026-01-20 14:44:48.554 225859 INFO nova.virt.libvirt.driver [None req-2a13799f-b3e4-48ea-b45d-a9254b040ace 9051b1fd0e0b40c2be07afc6da803903 144d821b8f624db687f0e009c5e06d8b - - default default] [instance: d236e7eb-2b7e-4031-b851-ae2790528213] Creating config drive at /var/lib/nova/instances/d236e7eb-2b7e-4031-b851-ae2790528213/disk.config
Jan 20 14:44:48 compute-1 nova_compute[225855]: 2026-01-20 14:44:48.560 225859 DEBUG oslo_concurrency.processutils [None req-2a13799f-b3e4-48ea-b45d-a9254b040ace 9051b1fd0e0b40c2be07afc6da803903 144d821b8f624db687f0e009c5e06d8b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d236e7eb-2b7e-4031-b851-ae2790528213/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9rdjtwtx execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:44:48 compute-1 nova_compute[225855]: 2026-01-20 14:44:48.696 225859 DEBUG oslo_concurrency.processutils [None req-2a13799f-b3e4-48ea-b45d-a9254b040ace 9051b1fd0e0b40c2be07afc6da803903 144d821b8f624db687f0e009c5e06d8b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d236e7eb-2b7e-4031-b851-ae2790528213/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9rdjtwtx" returned: 0 in 0.136s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:44:48 compute-1 nova_compute[225855]: 2026-01-20 14:44:48.728 225859 DEBUG nova.storage.rbd_utils [None req-2a13799f-b3e4-48ea-b45d-a9254b040ace 9051b1fd0e0b40c2be07afc6da803903 144d821b8f624db687f0e009c5e06d8b - - default default] rbd image d236e7eb-2b7e-4031-b851-ae2790528213_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:44:48 compute-1 nova_compute[225855]: 2026-01-20 14:44:48.732 225859 DEBUG oslo_concurrency.processutils [None req-2a13799f-b3e4-48ea-b45d-a9254b040ace 9051b1fd0e0b40c2be07afc6da803903 144d821b8f624db687f0e009c5e06d8b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d236e7eb-2b7e-4031-b851-ae2790528213/disk.config d236e7eb-2b7e-4031-b851-ae2790528213_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:44:48 compute-1 nova_compute[225855]: 2026-01-20 14:44:48.901 225859 DEBUG oslo_concurrency.processutils [None req-2a13799f-b3e4-48ea-b45d-a9254b040ace 9051b1fd0e0b40c2be07afc6da803903 144d821b8f624db687f0e009c5e06d8b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d236e7eb-2b7e-4031-b851-ae2790528213/disk.config d236e7eb-2b7e-4031-b851-ae2790528213_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.169s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:44:48 compute-1 nova_compute[225855]: 2026-01-20 14:44:48.903 225859 INFO nova.virt.libvirt.driver [None req-2a13799f-b3e4-48ea-b45d-a9254b040ace 9051b1fd0e0b40c2be07afc6da803903 144d821b8f624db687f0e009c5e06d8b - - default default] [instance: d236e7eb-2b7e-4031-b851-ae2790528213] Deleting local config drive /var/lib/nova/instances/d236e7eb-2b7e-4031-b851-ae2790528213/disk.config because it was imported into RBD.
Jan 20 14:44:48 compute-1 kernel: tap019ea2f5-17: entered promiscuous mode
Jan 20 14:44:48 compute-1 NetworkManager[49104]: <info>  [1768920288.9610] manager: (tap019ea2f5-17): new Tun device (/org/freedesktop/NetworkManager/Devices/136)
Jan 20 14:44:48 compute-1 nova_compute[225855]: 2026-01-20 14:44:48.962 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:44:48 compute-1 ovn_controller[130490]: 2026-01-20T14:44:48Z|00305|binding|INFO|Claiming lport 019ea2f5-1721-42e7-9c77-4fc1599f8101 for this chassis.
Jan 20 14:44:48 compute-1 ovn_controller[130490]: 2026-01-20T14:44:48Z|00306|binding|INFO|019ea2f5-1721-42e7-9c77-4fc1599f8101: Claiming fa:16:3e:08:2c:78 10.100.0.10
Jan 20 14:44:48 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:44:48.969 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:08:2c:78 10.100.0.10'], port_security=['fa:16:3e:08:2c:78 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'd236e7eb-2b7e-4031-b851-ae2790528213', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-eef55bf0-ad6a-4f88-adac-d746a869d579', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '144d821b8f624db687f0e009c5e06d8b', 'neutron:revision_number': '2', 'neutron:security_group_ids': '4307ffb0-d161-4f09-96d3-b205cef524f3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cd338232-5bc6-43af-b249-74124e0134b1, chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], logical_port=019ea2f5-1721-42e7-9c77-4fc1599f8101) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:44:48 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:44:48.971 140354 INFO neutron.agent.ovn.metadata.agent [-] Port 019ea2f5-1721-42e7-9c77-4fc1599f8101 in datapath eef55bf0-ad6a-4f88-adac-d746a869d579 bound to our chassis
Jan 20 14:44:48 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:44:48.972 140354 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network eef55bf0-ad6a-4f88-adac-d746a869d579
Jan 20 14:44:48 compute-1 ovn_controller[130490]: 2026-01-20T14:44:48Z|00307|binding|INFO|Setting lport 019ea2f5-1721-42e7-9c77-4fc1599f8101 ovn-installed in OVS
Jan 20 14:44:48 compute-1 ovn_controller[130490]: 2026-01-20T14:44:48Z|00308|binding|INFO|Setting lport 019ea2f5-1721-42e7-9c77-4fc1599f8101 up in Southbound
Jan 20 14:44:48 compute-1 nova_compute[225855]: 2026-01-20 14:44:48.979 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:44:48 compute-1 nova_compute[225855]: 2026-01-20 14:44:48.983 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:44:48 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:44:48.989 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[2414cedf-0c5f-4c98-b208-9d5c41073bab]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:44:48 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:44:48.990 140354 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapeef55bf0-a1 in ovnmeta-eef55bf0-ad6a-4f88-adac-d746a869d579 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 20 14:44:48 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:44:48.991 229707 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapeef55bf0-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 20 14:44:48 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:44:48.992 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[1f2ff404-e733-4ae7-9a87-1324ca8be9fd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:44:48 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:44:48.993 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[eef08c78-f4f1-4cdc-afec-2e9ba01e1f01]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:44:49 compute-1 systemd-machined[194361]: New machine qemu-37-instance-0000005b.
Jan 20 14:44:49 compute-1 systemd-udevd[259335]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 14:44:49 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:44:49.009 140466 DEBUG oslo.privsep.daemon [-] privsep: reply[0824b572-9fe9-4c57-b099-54abc208b435]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:44:49 compute-1 NetworkManager[49104]: <info>  [1768920289.0193] device (tap019ea2f5-17): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 20 14:44:49 compute-1 NetworkManager[49104]: <info>  [1768920289.0207] device (tap019ea2f5-17): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 20 14:44:49 compute-1 systemd[1]: Started Virtual Machine qemu-37-instance-0000005b.
Jan 20 14:44:49 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:44:49.022 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[9423cf99-c019-4340-80c4-b3e34269cadc]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:44:49 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:44:49.051 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[f710b10f-4326-4270-bf7f-5e6a7d08c212]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:44:49 compute-1 systemd-udevd[259338]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 14:44:49 compute-1 NetworkManager[49104]: <info>  [1768920289.0576] manager: (tapeef55bf0-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/137)
Jan 20 14:44:49 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:44:49.056 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[cc235f8c-89c9-4d75-9085-e28738848db9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:44:49 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:44:49.085 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[b242a10a-a8cb-4af9-9a4d-214726344ba1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:44:49 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:44:49.088 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[8a3250e2-2120-41c2-ba19-907246190745]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:44:49 compute-1 NetworkManager[49104]: <info>  [1768920289.1134] device (tapeef55bf0-a0): carrier: link connected
Jan 20 14:44:49 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:44:49.118 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[03abe4a0-dcc6-4ff6-83f0-23cc77f75936]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:44:49 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:44:49.137 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[a53cb708-f4e8-457a-94af-05c28a49ce7d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapeef55bf0-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c0:1e:69'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 84], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 532804, 'reachable_time': 22563, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 259366, 'error': None, 'target': 'ovnmeta-eef55bf0-ad6a-4f88-adac-d746a869d579', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:44:49 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:44:49.153 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[c36c6bba-9294-4196-849a-6c5d2555f5ee]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fec0:1e69'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 532804, 'tstamp': 532804}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 259367, 'error': None, 'target': 'ovnmeta-eef55bf0-ad6a-4f88-adac-d746a869d579', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:44:49 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:44:49.173 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[48201e73-b18e-4575-9e71-7436ad914b6d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapeef55bf0-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c0:1e:69'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 84], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 532804, 'reachable_time': 22563, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 259368, 'error': None, 'target': 'ovnmeta-eef55bf0-ad6a-4f88-adac-d746a869d579', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:44:49 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:44:49.203 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[711947f5-1326-411b-b586-7911c77df1e9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:44:49 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:44:49.267 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[b91f042a-3b64-412e-bec4-f9c968b83178]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:44:49 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:44:49.269 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapeef55bf0-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:44:49 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:44:49.269 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:44:49 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:44:49.270 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapeef55bf0-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:44:49 compute-1 kernel: tapeef55bf0-a0: entered promiscuous mode
Jan 20 14:44:49 compute-1 NetworkManager[49104]: <info>  [1768920289.2723] manager: (tapeef55bf0-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/138)
Jan 20 14:44:49 compute-1 nova_compute[225855]: 2026-01-20 14:44:49.272 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:44:49 compute-1 nova_compute[225855]: 2026-01-20 14:44:49.274 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:44:49 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:44:49.275 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapeef55bf0-a0, col_values=(('external_ids', {'iface-id': '7520a61f-574f-4683-b47f-71e915a4dabe'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:44:49 compute-1 nova_compute[225855]: 2026-01-20 14:44:49.276 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:44:49 compute-1 ovn_controller[130490]: 2026-01-20T14:44:49Z|00309|binding|INFO|Releasing lport 7520a61f-574f-4683-b47f-71e915a4dabe from this chassis (sb_readonly=0)
Jan 20 14:44:49 compute-1 nova_compute[225855]: 2026-01-20 14:44:49.279 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:44:49 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:44:49.279 140354 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/eef55bf0-ad6a-4f88-adac-d746a869d579.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/eef55bf0-ad6a-4f88-adac-d746a869d579.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 20 14:44:49 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:44:49.280 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[55390bde-881a-4201-a973-2362c455c670]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:44:49 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:44:49.281 140354 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 20 14:44:49 compute-1 ovn_metadata_agent[140349]: global
Jan 20 14:44:49 compute-1 ovn_metadata_agent[140349]:     log         /dev/log local0 debug
Jan 20 14:44:49 compute-1 ovn_metadata_agent[140349]:     log-tag     haproxy-metadata-proxy-eef55bf0-ad6a-4f88-adac-d746a869d579
Jan 20 14:44:49 compute-1 ovn_metadata_agent[140349]:     user        root
Jan 20 14:44:49 compute-1 ovn_metadata_agent[140349]:     group       root
Jan 20 14:44:49 compute-1 ovn_metadata_agent[140349]:     maxconn     1024
Jan 20 14:44:49 compute-1 ovn_metadata_agent[140349]:     pidfile     /var/lib/neutron/external/pids/eef55bf0-ad6a-4f88-adac-d746a869d579.pid.haproxy
Jan 20 14:44:49 compute-1 ovn_metadata_agent[140349]:     daemon
Jan 20 14:44:49 compute-1 ovn_metadata_agent[140349]: 
Jan 20 14:44:49 compute-1 ovn_metadata_agent[140349]: defaults
Jan 20 14:44:49 compute-1 ovn_metadata_agent[140349]:     log global
Jan 20 14:44:49 compute-1 ovn_metadata_agent[140349]:     mode http
Jan 20 14:44:49 compute-1 ovn_metadata_agent[140349]:     option httplog
Jan 20 14:44:49 compute-1 ovn_metadata_agent[140349]:     option dontlognull
Jan 20 14:44:49 compute-1 ovn_metadata_agent[140349]:     option http-server-close
Jan 20 14:44:49 compute-1 ovn_metadata_agent[140349]:     option forwardfor
Jan 20 14:44:49 compute-1 ovn_metadata_agent[140349]:     retries                 3
Jan 20 14:44:49 compute-1 ovn_metadata_agent[140349]:     timeout http-request    30s
Jan 20 14:44:49 compute-1 ovn_metadata_agent[140349]:     timeout connect         30s
Jan 20 14:44:49 compute-1 ovn_metadata_agent[140349]:     timeout client          32s
Jan 20 14:44:49 compute-1 ovn_metadata_agent[140349]:     timeout server          32s
Jan 20 14:44:49 compute-1 ovn_metadata_agent[140349]:     timeout http-keep-alive 30s
Jan 20 14:44:49 compute-1 ovn_metadata_agent[140349]: 
Jan 20 14:44:49 compute-1 ovn_metadata_agent[140349]: 
Jan 20 14:44:49 compute-1 ovn_metadata_agent[140349]: listen listener
Jan 20 14:44:49 compute-1 ovn_metadata_agent[140349]:     bind 169.254.169.254:80
Jan 20 14:44:49 compute-1 ovn_metadata_agent[140349]:     server metadata /var/lib/neutron/metadata_proxy
Jan 20 14:44:49 compute-1 ovn_metadata_agent[140349]:     http-request add-header X-OVN-Network-ID eef55bf0-ad6a-4f88-adac-d746a869d579
Jan 20 14:44:49 compute-1 ovn_metadata_agent[140349]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 20 14:44:49 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:44:49.282 140354 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-eef55bf0-ad6a-4f88-adac-d746a869d579', 'env', 'PROCESS_TAG=haproxy-eef55bf0-ad6a-4f88-adac-d746a869d579', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/eef55bf0-ad6a-4f88-adac-d746a869d579.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 20 14:44:49 compute-1 nova_compute[225855]: 2026-01-20 14:44:49.300 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:44:49 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:44:49 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:44:49 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:44:49.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:44:49 compute-1 nova_compute[225855]: 2026-01-20 14:44:49.323 225859 DEBUG nova.compute.manager [req-4d036d0e-36de-4cf3-ae70-b7da9992c98b req-61424c07-aead-4154-bb08-20d13036545c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d236e7eb-2b7e-4031-b851-ae2790528213] Received event network-vif-plugged-019ea2f5-1721-42e7-9c77-4fc1599f8101 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:44:49 compute-1 nova_compute[225855]: 2026-01-20 14:44:49.323 225859 DEBUG oslo_concurrency.lockutils [req-4d036d0e-36de-4cf3-ae70-b7da9992c98b req-61424c07-aead-4154-bb08-20d13036545c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "d236e7eb-2b7e-4031-b851-ae2790528213-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:44:49 compute-1 nova_compute[225855]: 2026-01-20 14:44:49.324 225859 DEBUG oslo_concurrency.lockutils [req-4d036d0e-36de-4cf3-ae70-b7da9992c98b req-61424c07-aead-4154-bb08-20d13036545c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "d236e7eb-2b7e-4031-b851-ae2790528213-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:44:49 compute-1 nova_compute[225855]: 2026-01-20 14:44:49.324 225859 DEBUG oslo_concurrency.lockutils [req-4d036d0e-36de-4cf3-ae70-b7da9992c98b req-61424c07-aead-4154-bb08-20d13036545c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "d236e7eb-2b7e-4031-b851-ae2790528213-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:44:49 compute-1 nova_compute[225855]: 2026-01-20 14:44:49.324 225859 DEBUG nova.compute.manager [req-4d036d0e-36de-4cf3-ae70-b7da9992c98b req-61424c07-aead-4154-bb08-20d13036545c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d236e7eb-2b7e-4031-b851-ae2790528213] Processing event network-vif-plugged-019ea2f5-1721-42e7-9c77-4fc1599f8101 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 20 14:44:49 compute-1 nova_compute[225855]: 2026-01-20 14:44:49.479 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768920289.4788468, d236e7eb-2b7e-4031-b851-ae2790528213 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:44:49 compute-1 nova_compute[225855]: 2026-01-20 14:44:49.480 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: d236e7eb-2b7e-4031-b851-ae2790528213] VM Started (Lifecycle Event)
Jan 20 14:44:49 compute-1 nova_compute[225855]: 2026-01-20 14:44:49.482 225859 DEBUG nova.compute.manager [None req-2a13799f-b3e4-48ea-b45d-a9254b040ace 9051b1fd0e0b40c2be07afc6da803903 144d821b8f624db687f0e009c5e06d8b - - default default] [instance: d236e7eb-2b7e-4031-b851-ae2790528213] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 20 14:44:49 compute-1 nova_compute[225855]: 2026-01-20 14:44:49.486 225859 DEBUG nova.virt.libvirt.driver [None req-2a13799f-b3e4-48ea-b45d-a9254b040ace 9051b1fd0e0b40c2be07afc6da803903 144d821b8f624db687f0e009c5e06d8b - - default default] [instance: d236e7eb-2b7e-4031-b851-ae2790528213] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 20 14:44:49 compute-1 nova_compute[225855]: 2026-01-20 14:44:49.491 225859 INFO nova.virt.libvirt.driver [-] [instance: d236e7eb-2b7e-4031-b851-ae2790528213] Instance spawned successfully.
Jan 20 14:44:49 compute-1 nova_compute[225855]: 2026-01-20 14:44:49.491 225859 DEBUG nova.virt.libvirt.driver [None req-2a13799f-b3e4-48ea-b45d-a9254b040ace 9051b1fd0e0b40c2be07afc6da803903 144d821b8f624db687f0e009c5e06d8b - - default default] [instance: d236e7eb-2b7e-4031-b851-ae2790528213] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 20 14:44:49 compute-1 nova_compute[225855]: 2026-01-20 14:44:49.496 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: d236e7eb-2b7e-4031-b851-ae2790528213] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:44:49 compute-1 nova_compute[225855]: 2026-01-20 14:44:49.500 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: d236e7eb-2b7e-4031-b851-ae2790528213] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:44:49 compute-1 nova_compute[225855]: 2026-01-20 14:44:49.503 225859 DEBUG nova.network.neutron [req-67efbc5e-1efe-45e7-bc12-9e7d78b41365 req-965e8d8f-f43f-40b6-9665-38cc3d54deae 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d236e7eb-2b7e-4031-b851-ae2790528213] Updated VIF entry in instance network info cache for port 019ea2f5-1721-42e7-9c77-4fc1599f8101. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 14:44:49 compute-1 nova_compute[225855]: 2026-01-20 14:44:49.503 225859 DEBUG nova.network.neutron [req-67efbc5e-1efe-45e7-bc12-9e7d78b41365 req-965e8d8f-f43f-40b6-9665-38cc3d54deae 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d236e7eb-2b7e-4031-b851-ae2790528213] Updating instance_info_cache with network_info: [{"id": "019ea2f5-1721-42e7-9c77-4fc1599f8101", "address": "fa:16:3e:08:2c:78", "network": {"id": "eef55bf0-ad6a-4f88-adac-d746a869d579", "bridge": "br-int", "label": "tempest-ServersTestBootFromVolume-101625742-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "144d821b8f624db687f0e009c5e06d8b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap019ea2f5-17", "ovs_interfaceid": "019ea2f5-1721-42e7-9c77-4fc1599f8101", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:44:49 compute-1 nova_compute[225855]: 2026-01-20 14:44:49.511 225859 DEBUG nova.virt.libvirt.driver [None req-2a13799f-b3e4-48ea-b45d-a9254b040ace 9051b1fd0e0b40c2be07afc6da803903 144d821b8f624db687f0e009c5e06d8b - - default default] [instance: d236e7eb-2b7e-4031-b851-ae2790528213] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:44:49 compute-1 nova_compute[225855]: 2026-01-20 14:44:49.511 225859 DEBUG nova.virt.libvirt.driver [None req-2a13799f-b3e4-48ea-b45d-a9254b040ace 9051b1fd0e0b40c2be07afc6da803903 144d821b8f624db687f0e009c5e06d8b - - default default] [instance: d236e7eb-2b7e-4031-b851-ae2790528213] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:44:49 compute-1 nova_compute[225855]: 2026-01-20 14:44:49.512 225859 DEBUG nova.virt.libvirt.driver [None req-2a13799f-b3e4-48ea-b45d-a9254b040ace 9051b1fd0e0b40c2be07afc6da803903 144d821b8f624db687f0e009c5e06d8b - - default default] [instance: d236e7eb-2b7e-4031-b851-ae2790528213] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:44:49 compute-1 nova_compute[225855]: 2026-01-20 14:44:49.512 225859 DEBUG nova.virt.libvirt.driver [None req-2a13799f-b3e4-48ea-b45d-a9254b040ace 9051b1fd0e0b40c2be07afc6da803903 144d821b8f624db687f0e009c5e06d8b - - default default] [instance: d236e7eb-2b7e-4031-b851-ae2790528213] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:44:49 compute-1 nova_compute[225855]: 2026-01-20 14:44:49.513 225859 DEBUG nova.virt.libvirt.driver [None req-2a13799f-b3e4-48ea-b45d-a9254b040ace 9051b1fd0e0b40c2be07afc6da803903 144d821b8f624db687f0e009c5e06d8b - - default default] [instance: d236e7eb-2b7e-4031-b851-ae2790528213] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:44:49 compute-1 nova_compute[225855]: 2026-01-20 14:44:49.513 225859 DEBUG nova.virt.libvirt.driver [None req-2a13799f-b3e4-48ea-b45d-a9254b040ace 9051b1fd0e0b40c2be07afc6da803903 144d821b8f624db687f0e009c5e06d8b - - default default] [instance: d236e7eb-2b7e-4031-b851-ae2790528213] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:44:49 compute-1 nova_compute[225855]: 2026-01-20 14:44:49.520 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: d236e7eb-2b7e-4031-b851-ae2790528213] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 14:44:49 compute-1 nova_compute[225855]: 2026-01-20 14:44:49.521 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768920289.479793, d236e7eb-2b7e-4031-b851-ae2790528213 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:44:49 compute-1 nova_compute[225855]: 2026-01-20 14:44:49.524 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: d236e7eb-2b7e-4031-b851-ae2790528213] VM Paused (Lifecycle Event)
Jan 20 14:44:49 compute-1 nova_compute[225855]: 2026-01-20 14:44:49.546 225859 DEBUG oslo_concurrency.lockutils [req-67efbc5e-1efe-45e7-bc12-9e7d78b41365 req-965e8d8f-f43f-40b6-9665-38cc3d54deae 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-d236e7eb-2b7e-4031-b851-ae2790528213" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:44:49 compute-1 nova_compute[225855]: 2026-01-20 14:44:49.549 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: d236e7eb-2b7e-4031-b851-ae2790528213] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:44:49 compute-1 nova_compute[225855]: 2026-01-20 14:44:49.552 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768920289.4853854, d236e7eb-2b7e-4031-b851-ae2790528213 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:44:49 compute-1 nova_compute[225855]: 2026-01-20 14:44:49.552 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: d236e7eb-2b7e-4031-b851-ae2790528213] VM Resumed (Lifecycle Event)
Jan 20 14:44:49 compute-1 nova_compute[225855]: 2026-01-20 14:44:49.575 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: d236e7eb-2b7e-4031-b851-ae2790528213] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:44:49 compute-1 nova_compute[225855]: 2026-01-20 14:44:49.578 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: d236e7eb-2b7e-4031-b851-ae2790528213] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:44:49 compute-1 nova_compute[225855]: 2026-01-20 14:44:49.586 225859 INFO nova.compute.manager [None req-2a13799f-b3e4-48ea-b45d-a9254b040ace 9051b1fd0e0b40c2be07afc6da803903 144d821b8f624db687f0e009c5e06d8b - - default default] [instance: d236e7eb-2b7e-4031-b851-ae2790528213] Took 8.44 seconds to spawn the instance on the hypervisor.
Jan 20 14:44:49 compute-1 nova_compute[225855]: 2026-01-20 14:44:49.587 225859 DEBUG nova.compute.manager [None req-2a13799f-b3e4-48ea-b45d-a9254b040ace 9051b1fd0e0b40c2be07afc6da803903 144d821b8f624db687f0e009c5e06d8b - - default default] [instance: d236e7eb-2b7e-4031-b851-ae2790528213] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:44:49 compute-1 nova_compute[225855]: 2026-01-20 14:44:49.595 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: d236e7eb-2b7e-4031-b851-ae2790528213] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 14:44:49 compute-1 nova_compute[225855]: 2026-01-20 14:44:49.637 225859 INFO nova.compute.manager [None req-2a13799f-b3e4-48ea-b45d-a9254b040ace 9051b1fd0e0b40c2be07afc6da803903 144d821b8f624db687f0e009c5e06d8b - - default default] [instance: d236e7eb-2b7e-4031-b851-ae2790528213] Took 10.62 seconds to build instance.
Jan 20 14:44:49 compute-1 nova_compute[225855]: 2026-01-20 14:44:49.658 225859 DEBUG oslo_concurrency.lockutils [None req-2a13799f-b3e4-48ea-b45d-a9254b040ace 9051b1fd0e0b40c2be07afc6da803903 144d821b8f624db687f0e009c5e06d8b - - default default] Lock "d236e7eb-2b7e-4031-b851-ae2790528213" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.704s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:44:49 compute-1 podman[259441]: 2026-01-20 14:44:49.662533394 +0000 UTC m=+0.051120759 container create e01522ec633af0a019b4440a7dd3965b02d373680203c840ebe0316866c9cf69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-eef55bf0-ad6a-4f88-adac-d746a869d579, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251202)
Jan 20 14:44:49 compute-1 systemd[1]: Started libpod-conmon-e01522ec633af0a019b4440a7dd3965b02d373680203c840ebe0316866c9cf69.scope.
Jan 20 14:44:49 compute-1 systemd[1]: Started libcrun container.
Jan 20 14:44:49 compute-1 podman[259441]: 2026-01-20 14:44:49.63664248 +0000 UTC m=+0.025229865 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 20 14:44:49 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6c68af6525807938ca92d3080ecb780abe59857167dc3a5929ab0345a12e53c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 20 14:44:49 compute-1 podman[259441]: 2026-01-20 14:44:49.745839123 +0000 UTC m=+0.134426498 container init e01522ec633af0a019b4440a7dd3965b02d373680203c840ebe0316866c9cf69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-eef55bf0-ad6a-4f88-adac-d746a869d579, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 20 14:44:49 compute-1 podman[259441]: 2026-01-20 14:44:49.751177304 +0000 UTC m=+0.139764669 container start e01522ec633af0a019b4440a7dd3965b02d373680203c840ebe0316866c9cf69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-eef55bf0-ad6a-4f88-adac-d746a869d579, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 20 14:44:49 compute-1 neutron-haproxy-ovnmeta-eef55bf0-ad6a-4f88-adac-d746a869d579[259456]: [NOTICE]   (259460) : New worker (259462) forked
Jan 20 14:44:49 compute-1 neutron-haproxy-ovnmeta-eef55bf0-ad6a-4f88-adac-d746a869d579[259456]: [NOTICE]   (259460) : Loading success.
Jan 20 14:44:49 compute-1 ceph-mon[81775]: pgmap v1694: 321 pgs: 321 active+clean; 253 MiB data, 791 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 4.9 MiB/s wr, 152 op/s
Jan 20 14:44:50 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:44:50 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:44:50 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:44:50.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:44:50 compute-1 nova_compute[225855]: 2026-01-20 14:44:50.441 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:44:51 compute-1 nova_compute[225855]: 2026-01-20 14:44:51.219 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:44:51 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:44:51 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:44:51 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:44:51.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:44:51 compute-1 nova_compute[225855]: 2026-01-20 14:44:51.397 225859 DEBUG nova.compute.manager [req-8e5bd89f-3d5d-4529-a8fe-e41a5cdcef66 req-a65f3f1a-7082-4368-ab75-7de7d4d0a109 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d236e7eb-2b7e-4031-b851-ae2790528213] Received event network-vif-plugged-019ea2f5-1721-42e7-9c77-4fc1599f8101 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:44:51 compute-1 nova_compute[225855]: 2026-01-20 14:44:51.398 225859 DEBUG oslo_concurrency.lockutils [req-8e5bd89f-3d5d-4529-a8fe-e41a5cdcef66 req-a65f3f1a-7082-4368-ab75-7de7d4d0a109 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "d236e7eb-2b7e-4031-b851-ae2790528213-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:44:51 compute-1 nova_compute[225855]: 2026-01-20 14:44:51.398 225859 DEBUG oslo_concurrency.lockutils [req-8e5bd89f-3d5d-4529-a8fe-e41a5cdcef66 req-a65f3f1a-7082-4368-ab75-7de7d4d0a109 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "d236e7eb-2b7e-4031-b851-ae2790528213-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:44:51 compute-1 nova_compute[225855]: 2026-01-20 14:44:51.398 225859 DEBUG oslo_concurrency.lockutils [req-8e5bd89f-3d5d-4529-a8fe-e41a5cdcef66 req-a65f3f1a-7082-4368-ab75-7de7d4d0a109 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "d236e7eb-2b7e-4031-b851-ae2790528213-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:44:51 compute-1 nova_compute[225855]: 2026-01-20 14:44:51.399 225859 DEBUG nova.compute.manager [req-8e5bd89f-3d5d-4529-a8fe-e41a5cdcef66 req-a65f3f1a-7082-4368-ab75-7de7d4d0a109 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d236e7eb-2b7e-4031-b851-ae2790528213] No waiting events found dispatching network-vif-plugged-019ea2f5-1721-42e7-9c77-4fc1599f8101 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:44:51 compute-1 nova_compute[225855]: 2026-01-20 14:44:51.399 225859 WARNING nova.compute.manager [req-8e5bd89f-3d5d-4529-a8fe-e41a5cdcef66 req-a65f3f1a-7082-4368-ab75-7de7d4d0a109 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d236e7eb-2b7e-4031-b851-ae2790528213] Received unexpected event network-vif-plugged-019ea2f5-1721-42e7-9c77-4fc1599f8101 for instance with vm_state active and task_state None.
Jan 20 14:44:51 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:44:51 compute-1 ceph-mon[81775]: pgmap v1695: 321 pgs: 321 active+clean; 260 MiB data, 792 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 4.3 MiB/s wr, 175 op/s
Jan 20 14:44:52 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:44:52 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:44:52 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:44:52.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:44:53 compute-1 ovn_controller[130490]: 2026-01-20T14:44:53Z|00310|binding|INFO|Releasing lport 7520a61f-574f-4683-b47f-71e915a4dabe from this chassis (sb_readonly=0)
Jan 20 14:44:53 compute-1 ovn_controller[130490]: 2026-01-20T14:44:53Z|00311|binding|INFO|Releasing lport 5527ab8d-a985-420b-9d5b-7e5d9baf7004 from this chassis (sb_readonly=0)
Jan 20 14:44:53 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:44:53 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:44:53 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:44:53.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:44:53 compute-1 nova_compute[225855]: 2026-01-20 14:44:53.336 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:44:53 compute-1 nova_compute[225855]: 2026-01-20 14:44:53.495 225859 DEBUG nova.compute.manager [req-73f25326-0c8b-49b2-b95e-4d925f81d3d3 req-43a2cb58-9fb1-4a95-9099-5159aaf8e54f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d236e7eb-2b7e-4031-b851-ae2790528213] Received event network-changed-019ea2f5-1721-42e7-9c77-4fc1599f8101 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:44:53 compute-1 nova_compute[225855]: 2026-01-20 14:44:53.495 225859 DEBUG nova.compute.manager [req-73f25326-0c8b-49b2-b95e-4d925f81d3d3 req-43a2cb58-9fb1-4a95-9099-5159aaf8e54f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d236e7eb-2b7e-4031-b851-ae2790528213] Refreshing instance network info cache due to event network-changed-019ea2f5-1721-42e7-9c77-4fc1599f8101. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 14:44:53 compute-1 nova_compute[225855]: 2026-01-20 14:44:53.495 225859 DEBUG oslo_concurrency.lockutils [req-73f25326-0c8b-49b2-b95e-4d925f81d3d3 req-43a2cb58-9fb1-4a95-9099-5159aaf8e54f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-d236e7eb-2b7e-4031-b851-ae2790528213" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:44:53 compute-1 nova_compute[225855]: 2026-01-20 14:44:53.495 225859 DEBUG oslo_concurrency.lockutils [req-73f25326-0c8b-49b2-b95e-4d925f81d3d3 req-43a2cb58-9fb1-4a95-9099-5159aaf8e54f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-d236e7eb-2b7e-4031-b851-ae2790528213" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:44:53 compute-1 nova_compute[225855]: 2026-01-20 14:44:53.496 225859 DEBUG nova.network.neutron [req-73f25326-0c8b-49b2-b95e-4d925f81d3d3 req-43a2cb58-9fb1-4a95-9099-5159aaf8e54f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d236e7eb-2b7e-4031-b851-ae2790528213] Refreshing network info cache for port 019ea2f5-1721-42e7-9c77-4fc1599f8101 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 14:44:53 compute-1 ceph-mon[81775]: pgmap v1696: 321 pgs: 321 active+clean; 260 MiB data, 792 MiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 3.1 MiB/s wr, 196 op/s
Jan 20 14:44:54 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:44:54 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:44:54 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:44:54.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:44:55 compute-1 nova_compute[225855]: 2026-01-20 14:44:55.240 225859 DEBUG nova.network.neutron [req-73f25326-0c8b-49b2-b95e-4d925f81d3d3 req-43a2cb58-9fb1-4a95-9099-5159aaf8e54f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d236e7eb-2b7e-4031-b851-ae2790528213] Updated VIF entry in instance network info cache for port 019ea2f5-1721-42e7-9c77-4fc1599f8101. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 14:44:55 compute-1 nova_compute[225855]: 2026-01-20 14:44:55.241 225859 DEBUG nova.network.neutron [req-73f25326-0c8b-49b2-b95e-4d925f81d3d3 req-43a2cb58-9fb1-4a95-9099-5159aaf8e54f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d236e7eb-2b7e-4031-b851-ae2790528213] Updating instance_info_cache with network_info: [{"id": "019ea2f5-1721-42e7-9c77-4fc1599f8101", "address": "fa:16:3e:08:2c:78", "network": {"id": "eef55bf0-ad6a-4f88-adac-d746a869d579", "bridge": "br-int", "label": "tempest-ServersTestBootFromVolume-101625742-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.208", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "144d821b8f624db687f0e009c5e06d8b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap019ea2f5-17", "ovs_interfaceid": "019ea2f5-1721-42e7-9c77-4fc1599f8101", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:44:55 compute-1 nova_compute[225855]: 2026-01-20 14:44:55.258 225859 DEBUG oslo_concurrency.lockutils [req-73f25326-0c8b-49b2-b95e-4d925f81d3d3 req-43a2cb58-9fb1-4a95-9099-5159aaf8e54f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-d236e7eb-2b7e-4031-b851-ae2790528213" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:44:55 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:44:55 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:44:55 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:44:55.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:44:55 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:44:55.331 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=28, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '12:bb:42', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '06:92:24:f7:15:56'}, ipsec=False) old=SB_Global(nb_cfg=27) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:44:55 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:44:55.334 140354 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 20 14:44:55 compute-1 nova_compute[225855]: 2026-01-20 14:44:55.375 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:44:55 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/366812670' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:44:55 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/733160525' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:44:55 compute-1 nova_compute[225855]: 2026-01-20 14:44:55.443 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:44:56 compute-1 nova_compute[225855]: 2026-01-20 14:44:56.222 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:44:56 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:44:56 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:44:56 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:44:56.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:44:56 compute-1 ceph-mon[81775]: pgmap v1697: 321 pgs: 321 active+clean; 260 MiB data, 792 MiB used, 20 GiB / 21 GiB avail; 4.3 MiB/s rd, 2.2 MiB/s wr, 216 op/s
Jan 20 14:44:56 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:44:57 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:44:57 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:44:57 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:44:57.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:44:57 compute-1 ceph-mon[81775]: pgmap v1698: 321 pgs: 321 active+clean; 260 MiB data, 792 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 190 op/s
Jan 20 14:44:58 compute-1 podman[259475]: 2026-01-20 14:44:58.039654676 +0000 UTC m=+0.073220365 container health_status 72be95c12e041eb2cd1a16ad70dfa1391d9f03a5f3180a614ead9459dcfcd46f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 20 14:44:58 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:44:58 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:44:58 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:44:58.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:44:59 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:44:59 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:44:59 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:44:59.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:45:00 compute-1 ceph-mon[81775]: pgmap v1699: 321 pgs: 321 active+clean; 260 MiB data, 792 MiB used, 20 GiB / 21 GiB avail; 5.0 MiB/s rd, 1.7 MiB/s wr, 205 op/s
Jan 20 14:45:00 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:45:00 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:45:00 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:45:00.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:45:00 compute-1 nova_compute[225855]: 2026-01-20 14:45:00.444 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:45:01 compute-1 nova_compute[225855]: 2026-01-20 14:45:01.225 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:45:01 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:45:01 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:45:01 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:45:01.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:45:01 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:45:02 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:45:02.336 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5ffd4ac3-9266-4927-98ad-20a17782c725, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '28'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:45:02 compute-1 ceph-mon[81775]: pgmap v1700: 321 pgs: 321 active+clean; 260 MiB data, 792 MiB used, 20 GiB / 21 GiB avail; 5.1 MiB/s rd, 164 KiB/s wr, 211 op/s
Jan 20 14:45:02 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:45:02 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:45:02 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:45:02.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:45:02 compute-1 nova_compute[225855]: 2026-01-20 14:45:02.555 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:45:03 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:45:03 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:45:03 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:45:03.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:45:03 compute-1 sudo[259503]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:45:03 compute-1 sudo[259503]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:45:03 compute-1 sudo[259503]: pam_unix(sudo:session): session closed for user root
Jan 20 14:45:03 compute-1 sudo[259528]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:45:03 compute-1 sudo[259528]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:45:03 compute-1 sudo[259528]: pam_unix(sudo:session): session closed for user root
Jan 20 14:45:03 compute-1 sudo[259553]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:45:03 compute-1 sudo[259553]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:45:03 compute-1 sudo[259553]: pam_unix(sudo:session): session closed for user root
Jan 20 14:45:03 compute-1 sudo[259579]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 20 14:45:03 compute-1 sudo[259579]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:45:04 compute-1 ceph-mon[81775]: pgmap v1701: 321 pgs: 321 active+clean; 263 MiB data, 801 MiB used, 20 GiB / 21 GiB avail; 4.8 MiB/s rd, 573 KiB/s wr, 193 op/s
Jan 20 14:45:04 compute-1 sudo[259579]: pam_unix(sudo:session): session closed for user root
Jan 20 14:45:04 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:45:04 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:45:04 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:45:04.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:45:04 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:45:04 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 14:45:04 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:45:04 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 14:45:04 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 14:45:04 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:45:04 compute-1 ovn_controller[130490]: 2026-01-20T14:45:04Z|00042|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:08:2c:78 10.100.0.10
Jan 20 14:45:04 compute-1 ovn_controller[130490]: 2026-01-20T14:45:04Z|00043|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:08:2c:78 10.100.0.10
Jan 20 14:45:05 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:45:05 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:45:05 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:45:05.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:45:05 compute-1 nova_compute[225855]: 2026-01-20 14:45:05.447 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:45:05 compute-1 ceph-mon[81775]: pgmap v1702: 321 pgs: 321 active+clean; 263 MiB data, 812 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 3.3 MiB/s wr, 225 op/s
Jan 20 14:45:05 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/1471590964' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:45:05 compute-1 nova_compute[225855]: 2026-01-20 14:45:05.996 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:45:06 compute-1 nova_compute[225855]: 2026-01-20 14:45:06.227 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:45:06 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:45:06 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:45:06 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:45:06.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:45:06 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:45:07 compute-1 podman[259635]: 2026-01-20 14:45:07.012961574 +0000 UTC m=+0.058197200 container health_status 533cd8e0c3b8b7f910f15cef49e7f879ab46c12ce646a35628c15f13725eb500 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Jan 20 14:45:07 compute-1 sudo[259654]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:45:07 compute-1 sudo[259654]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:45:07 compute-1 sudo[259654]: pam_unix(sudo:session): session closed for user root
Jan 20 14:45:07 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:45:07 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:45:07 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:45:07.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:45:07 compute-1 sudo[259679]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:45:07 compute-1 sudo[259679]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:45:07 compute-1 sudo[259679]: pam_unix(sudo:session): session closed for user root
Jan 20 14:45:07 compute-1 ceph-mon[81775]: pgmap v1703: 321 pgs: 321 active+clean; 271 MiB data, 821 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 4.3 MiB/s wr, 207 op/s
Jan 20 14:45:08 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:45:08 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:45:08 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:45:08.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:45:09 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:45:09 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:45:09 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:45:09.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:45:10 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/3641399765' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:45:10 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:45:10 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:45:10 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:45:10.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:45:10 compute-1 nova_compute[225855]: 2026-01-20 14:45:10.450 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:45:11 compute-1 nova_compute[225855]: 2026-01-20 14:45:11.230 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:45:11 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:45:11 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:45:11 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:45:11.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:45:11 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:45:11 compute-1 ceph-mon[81775]: pgmap v1704: 321 pgs: 321 active+clean; 279 MiB data, 821 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 4.3 MiB/s wr, 205 op/s
Jan 20 14:45:12 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:45:12 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:45:12 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:45:12.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:45:12 compute-1 sudo[259707]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:45:12 compute-1 sudo[259707]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:45:12 compute-1 sudo[259707]: pam_unix(sudo:session): session closed for user root
Jan 20 14:45:12 compute-1 sudo[259732]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 14:45:12 compute-1 sudo[259732]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:45:12 compute-1 sudo[259732]: pam_unix(sudo:session): session closed for user root
Jan 20 14:45:12 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:45:12 compute-1 ceph-mon[81775]: pgmap v1705: 321 pgs: 321 active+clean; 279 MiB data, 821 MiB used, 20 GiB / 21 GiB avail; 976 KiB/s rd, 4.3 MiB/s wr, 167 op/s
Jan 20 14:45:12 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/950872874' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:45:12 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:45:13 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:45:13 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:45:13 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:45:13.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:45:14 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:45:14 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:45:14 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:45:14.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:45:14 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/2711257148' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 14:45:14 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/2711257148' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 14:45:14 compute-1 ceph-mon[81775]: pgmap v1706: 321 pgs: 321 active+clean; 279 MiB data, 821 MiB used, 20 GiB / 21 GiB avail; 820 KiB/s rd, 4.3 MiB/s wr, 158 op/s
Jan 20 14:45:15 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:45:15 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:45:15 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:45:15.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:45:15 compute-1 nova_compute[225855]: 2026-01-20 14:45:15.473 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:45:16 compute-1 nova_compute[225855]: 2026-01-20 14:45:16.232 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:45:16 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/3957072564' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:45:16 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/413851278' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:45:16 compute-1 ceph-mon[81775]: pgmap v1707: 321 pgs: 321 active+clean; 347 MiB data, 851 MiB used, 20 GiB / 21 GiB avail; 706 KiB/s rd, 6.2 MiB/s wr, 180 op/s
Jan 20 14:45:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:45:16.406 140354 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:45:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:45:16.406 140354 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:45:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:45:16.407 140354 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:45:16 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:45:16 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:45:16 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:45:16.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:45:16 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:45:17 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:45:17 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:45:17 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:45:17.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:45:17 compute-1 ceph-mon[81775]: pgmap v1708: 321 pgs: 321 active+clean; 368 MiB data, 863 MiB used, 20 GiB / 21 GiB avail; 327 KiB/s rd, 4.5 MiB/s wr, 113 op/s
Jan 20 14:45:18 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:45:18 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:45:18 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:45:18.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:45:18 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/3296741815' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:45:18 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/129227990' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:45:19 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:45:19 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:45:19 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:45:19.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:45:19 compute-1 ceph-mon[81775]: pgmap v1709: 321 pgs: 321 active+clean; 372 MiB data, 864 MiB used, 20 GiB / 21 GiB avail; 42 KiB/s rd, 3.6 MiB/s wr, 59 op/s
Jan 20 14:45:20 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:45:20 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:45:20 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:45:20.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:45:20 compute-1 nova_compute[225855]: 2026-01-20 14:45:20.476 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:45:21 compute-1 nova_compute[225855]: 2026-01-20 14:45:21.234 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:45:21 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:45:21 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:45:21 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:45:21.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:45:21 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:45:21 compute-1 ceph-mon[81775]: pgmap v1710: 321 pgs: 321 active+clean; 372 MiB data, 864 MiB used, 20 GiB / 21 GiB avail; 635 KiB/s rd, 3.6 MiB/s wr, 86 op/s
Jan 20 14:45:22 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:45:22 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:45:22 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:45:22.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:45:23 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:45:23 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:45:23 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:45:23.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:45:23 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/1557210698' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:45:24 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:45:24 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:45:24 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:45:24.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:45:25 compute-1 ceph-mon[81775]: pgmap v1711: 321 pgs: 321 active+clean; 334 MiB data, 864 MiB used, 20 GiB / 21 GiB avail; 853 KiB/s rd, 3.6 MiB/s wr, 114 op/s
Jan 20 14:45:25 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/4037203044' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:45:25 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:45:25 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:45:25 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:45:25.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:45:25 compute-1 nova_compute[225855]: 2026-01-20 14:45:25.513 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:45:26 compute-1 ceph-mon[81775]: pgmap v1712: 321 pgs: 321 active+clean; 323 MiB data, 859 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 5.0 MiB/s wr, 210 op/s
Jan 20 14:45:26 compute-1 ceph-mon[81775]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #76. Immutable memtables: 0.
Jan 20 14:45:26 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:45:26.081778) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 20 14:45:26 compute-1 ceph-mon[81775]: rocksdb: [db/flush_job.cc:856] [default] [JOB 45] Flushing memtable with next log file: 76
Jan 20 14:45:26 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768920326081919, "job": 45, "event": "flush_started", "num_memtables": 1, "num_entries": 2337, "num_deletes": 251, "total_data_size": 5186721, "memory_usage": 5269544, "flush_reason": "Manual Compaction"}
Jan 20 14:45:26 compute-1 ceph-mon[81775]: rocksdb: [db/flush_job.cc:885] [default] [JOB 45] Level-0 flush table #77: started
Jan 20 14:45:26 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768920326117040, "cf_name": "default", "job": 45, "event": "table_file_creation", "file_number": 77, "file_size": 3397927, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 39133, "largest_seqno": 41465, "table_properties": {"data_size": 3388811, "index_size": 5546, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2501, "raw_key_size": 20370, "raw_average_key_size": 20, "raw_value_size": 3370006, "raw_average_value_size": 3400, "num_data_blocks": 242, "num_entries": 991, "num_filter_entries": 991, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768920134, "oldest_key_time": 1768920134, "file_creation_time": 1768920326, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1539d774-8a6f-4e48-b253-137c44586344", "db_session_id": "LFF7G2OZDOU7TKQ8MKAH", "orig_file_number": 77, "seqno_to_time_mapping": "N/A"}}
Jan 20 14:45:26 compute-1 ceph-mon[81775]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 45] Flush lasted 35318 microseconds, and 15907 cpu microseconds.
Jan 20 14:45:26 compute-1 ceph-mon[81775]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 14:45:26 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:45:26.117085) [db/flush_job.cc:967] [default] [JOB 45] Level-0 flush table #77: 3397927 bytes OK
Jan 20 14:45:26 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:45:26.117115) [db/memtable_list.cc:519] [default] Level-0 commit table #77 started
Jan 20 14:45:26 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:45:26.118506) [db/memtable_list.cc:722] [default] Level-0 commit table #77: memtable #1 done
Jan 20 14:45:26 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:45:26.118519) EVENT_LOG_v1 {"time_micros": 1768920326118515, "job": 45, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 20 14:45:26 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:45:26.118539) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 20 14:45:26 compute-1 ceph-mon[81775]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 45] Try to delete WAL files size 5176303, prev total WAL file size 5176303, number of live WAL files 2.
Jan 20 14:45:26 compute-1 ceph-mon[81775]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000073.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 14:45:26 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:45:26.119858) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033323633' seq:72057594037927935, type:22 .. '7061786F730033353135' seq:0, type:0; will stop at (end)
Jan 20 14:45:26 compute-1 ceph-mon[81775]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 46] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 20 14:45:26 compute-1 ceph-mon[81775]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 45 Base level 0, inputs: [77(3318KB)], [75(9563KB)]
Jan 20 14:45:26 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768920326119896, "job": 46, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [77], "files_L6": [75], "score": -1, "input_data_size": 13190636, "oldest_snapshot_seqno": -1}
Jan 20 14:45:26 compute-1 ceph-mon[81775]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 46] Generated table #78: 6698 keys, 11262128 bytes, temperature: kUnknown
Jan 20 14:45:26 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768920326233943, "cf_name": "default", "job": 46, "event": "table_file_creation", "file_number": 78, "file_size": 11262128, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11216324, "index_size": 27964, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16773, "raw_key_size": 171844, "raw_average_key_size": 25, "raw_value_size": 11095333, "raw_average_value_size": 1656, "num_data_blocks": 1115, "num_entries": 6698, "num_filter_entries": 6698, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768917474, "oldest_key_time": 0, "file_creation_time": 1768920326, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1539d774-8a6f-4e48-b253-137c44586344", "db_session_id": "LFF7G2OZDOU7TKQ8MKAH", "orig_file_number": 78, "seqno_to_time_mapping": "N/A"}}
Jan 20 14:45:26 compute-1 ceph-mon[81775]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 14:45:26 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:45:26.234246) [db/compaction/compaction_job.cc:1663] [default] [JOB 46] Compacted 1@0 + 1@6 files to L6 => 11262128 bytes
Jan 20 14:45:26 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:45:26.235748) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 115.6 rd, 98.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.2, 9.3 +0.0 blob) out(10.7 +0.0 blob), read-write-amplify(7.2) write-amplify(3.3) OK, records in: 7213, records dropped: 515 output_compression: NoCompression
Jan 20 14:45:26 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:45:26.235765) EVENT_LOG_v1 {"time_micros": 1768920326235757, "job": 46, "event": "compaction_finished", "compaction_time_micros": 114125, "compaction_time_cpu_micros": 25815, "output_level": 6, "num_output_files": 1, "total_output_size": 11262128, "num_input_records": 7213, "num_output_records": 6698, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 20 14:45:26 compute-1 nova_compute[225855]: 2026-01-20 14:45:26.235 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:45:26 compute-1 ceph-mon[81775]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000077.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 14:45:26 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768920326236508, "job": 46, "event": "table_file_deletion", "file_number": 77}
Jan 20 14:45:26 compute-1 ceph-mon[81775]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000075.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 14:45:26 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768920326238251, "job": 46, "event": "table_file_deletion", "file_number": 75}
Jan 20 14:45:26 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:45:26.119768) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:45:26 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:45:26.238362) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:45:26 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:45:26.238368) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:45:26 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:45:26.238370) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:45:26 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:45:26.238372) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:45:26 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:45:26.238374) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:45:26 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:45:26 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:45:26 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:45:26.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:45:26 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:45:27 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:45:27 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:45:27 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:45:27.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:45:27 compute-1 sudo[259765]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:45:27 compute-1 sudo[259765]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:45:27 compute-1 sudo[259765]: pam_unix(sudo:session): session closed for user root
Jan 20 14:45:27 compute-1 sudo[259790]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:45:27 compute-1 sudo[259790]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:45:27 compute-1 sudo[259790]: pam_unix(sudo:session): session closed for user root
Jan 20 14:45:27 compute-1 ceph-mon[81775]: pgmap v1713: 321 pgs: 321 active+clean; 339 MiB data, 839 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 2.9 MiB/s wr, 224 op/s
Jan 20 14:45:27 compute-1 nova_compute[225855]: 2026-01-20 14:45:27.822 225859 DEBUG oslo_concurrency.lockutils [None req-bcb4d055-d85b-43f3-970b-703623f4eb3b 9051b1fd0e0b40c2be07afc6da803903 144d821b8f624db687f0e009c5e06d8b - - default default] Acquiring lock "d236e7eb-2b7e-4031-b851-ae2790528213" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:45:27 compute-1 nova_compute[225855]: 2026-01-20 14:45:27.822 225859 DEBUG oslo_concurrency.lockutils [None req-bcb4d055-d85b-43f3-970b-703623f4eb3b 9051b1fd0e0b40c2be07afc6da803903 144d821b8f624db687f0e009c5e06d8b - - default default] Lock "d236e7eb-2b7e-4031-b851-ae2790528213" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:45:27 compute-1 nova_compute[225855]: 2026-01-20 14:45:27.823 225859 DEBUG oslo_concurrency.lockutils [None req-bcb4d055-d85b-43f3-970b-703623f4eb3b 9051b1fd0e0b40c2be07afc6da803903 144d821b8f624db687f0e009c5e06d8b - - default default] Acquiring lock "d236e7eb-2b7e-4031-b851-ae2790528213-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:45:27 compute-1 nova_compute[225855]: 2026-01-20 14:45:27.823 225859 DEBUG oslo_concurrency.lockutils [None req-bcb4d055-d85b-43f3-970b-703623f4eb3b 9051b1fd0e0b40c2be07afc6da803903 144d821b8f624db687f0e009c5e06d8b - - default default] Lock "d236e7eb-2b7e-4031-b851-ae2790528213-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:45:27 compute-1 nova_compute[225855]: 2026-01-20 14:45:27.823 225859 DEBUG oslo_concurrency.lockutils [None req-bcb4d055-d85b-43f3-970b-703623f4eb3b 9051b1fd0e0b40c2be07afc6da803903 144d821b8f624db687f0e009c5e06d8b - - default default] Lock "d236e7eb-2b7e-4031-b851-ae2790528213-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:45:27 compute-1 nova_compute[225855]: 2026-01-20 14:45:27.824 225859 INFO nova.compute.manager [None req-bcb4d055-d85b-43f3-970b-703623f4eb3b 9051b1fd0e0b40c2be07afc6da803903 144d821b8f624db687f0e009c5e06d8b - - default default] [instance: d236e7eb-2b7e-4031-b851-ae2790528213] Terminating instance
Jan 20 14:45:27 compute-1 nova_compute[225855]: 2026-01-20 14:45:27.826 225859 DEBUG nova.compute.manager [None req-bcb4d055-d85b-43f3-970b-703623f4eb3b 9051b1fd0e0b40c2be07afc6da803903 144d821b8f624db687f0e009c5e06d8b - - default default] [instance: d236e7eb-2b7e-4031-b851-ae2790528213] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 20 14:45:27 compute-1 kernel: tap019ea2f5-17 (unregistering): left promiscuous mode
Jan 20 14:45:27 compute-1 NetworkManager[49104]: <info>  [1768920327.8827] device (tap019ea2f5-17): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 20 14:45:27 compute-1 ovn_controller[130490]: 2026-01-20T14:45:27Z|00312|binding|INFO|Releasing lport 019ea2f5-1721-42e7-9c77-4fc1599f8101 from this chassis (sb_readonly=0)
Jan 20 14:45:27 compute-1 ovn_controller[130490]: 2026-01-20T14:45:27Z|00313|binding|INFO|Setting lport 019ea2f5-1721-42e7-9c77-4fc1599f8101 down in Southbound
Jan 20 14:45:27 compute-1 nova_compute[225855]: 2026-01-20 14:45:27.917 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:45:27 compute-1 ovn_controller[130490]: 2026-01-20T14:45:27Z|00314|binding|INFO|Removing iface tap019ea2f5-17 ovn-installed in OVS
Jan 20 14:45:27 compute-1 nova_compute[225855]: 2026-01-20 14:45:27.919 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:45:27 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:45:27.924 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:08:2c:78 10.100.0.10'], port_security=['fa:16:3e:08:2c:78 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'd236e7eb-2b7e-4031-b851-ae2790528213', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-eef55bf0-ad6a-4f88-adac-d746a869d579', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '144d821b8f624db687f0e009c5e06d8b', 'neutron:revision_number': '4', 'neutron:security_group_ids': '4307ffb0-d161-4f09-96d3-b205cef524f3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com', 'neutron:port_fip': '192.168.122.208'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cd338232-5bc6-43af-b249-74124e0134b1, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], logical_port=019ea2f5-1721-42e7-9c77-4fc1599f8101) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:45:27 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:45:27.925 140354 INFO neutron.agent.ovn.metadata.agent [-] Port 019ea2f5-1721-42e7-9c77-4fc1599f8101 in datapath eef55bf0-ad6a-4f88-adac-d746a869d579 unbound from our chassis
Jan 20 14:45:27 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:45:27.927 140354 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network eef55bf0-ad6a-4f88-adac-d746a869d579, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 20 14:45:27 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:45:27.928 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[1532fcb6-4d42-4eaf-aa2e-3bbb83776119]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:45:27 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:45:27.929 140354 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-eef55bf0-ad6a-4f88-adac-d746a869d579 namespace which is not needed anymore
Jan 20 14:45:27 compute-1 nova_compute[225855]: 2026-01-20 14:45:27.933 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:45:27 compute-1 systemd[1]: machine-qemu\x2d37\x2dinstance\x2d0000005b.scope: Deactivated successfully.
Jan 20 14:45:27 compute-1 systemd[1]: machine-qemu\x2d37\x2dinstance\x2d0000005b.scope: Consumed 14.468s CPU time.
Jan 20 14:45:27 compute-1 systemd-machined[194361]: Machine qemu-37-instance-0000005b terminated.
Jan 20 14:45:28 compute-1 kernel: tap019ea2f5-17: entered promiscuous mode
Jan 20 14:45:28 compute-1 kernel: tap019ea2f5-17 (unregistering): left promiscuous mode
Jan 20 14:45:28 compute-1 NetworkManager[49104]: <info>  [1768920328.0473] manager: (tap019ea2f5-17): new Tun device (/org/freedesktop/NetworkManager/Devices/139)
Jan 20 14:45:28 compute-1 ovn_controller[130490]: 2026-01-20T14:45:28Z|00315|binding|INFO|Claiming lport 019ea2f5-1721-42e7-9c77-4fc1599f8101 for this chassis.
Jan 20 14:45:28 compute-1 ovn_controller[130490]: 2026-01-20T14:45:28Z|00316|binding|INFO|019ea2f5-1721-42e7-9c77-4fc1599f8101: Claiming fa:16:3e:08:2c:78 10.100.0.10
Jan 20 14:45:28 compute-1 nova_compute[225855]: 2026-01-20 14:45:28.048 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:45:28 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:45:28.057 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:08:2c:78 10.100.0.10'], port_security=['fa:16:3e:08:2c:78 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'd236e7eb-2b7e-4031-b851-ae2790528213', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-eef55bf0-ad6a-4f88-adac-d746a869d579', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '144d821b8f624db687f0e009c5e06d8b', 'neutron:revision_number': '4', 'neutron:security_group_ids': '4307ffb0-d161-4f09-96d3-b205cef524f3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com', 'neutron:port_fip': '192.168.122.208'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cd338232-5bc6-43af-b249-74124e0134b1, chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], logical_port=019ea2f5-1721-42e7-9c77-4fc1599f8101) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:45:28 compute-1 neutron-haproxy-ovnmeta-eef55bf0-ad6a-4f88-adac-d746a869d579[259456]: [NOTICE]   (259460) : haproxy version is 2.8.14-c23fe91
Jan 20 14:45:28 compute-1 neutron-haproxy-ovnmeta-eef55bf0-ad6a-4f88-adac-d746a869d579[259456]: [NOTICE]   (259460) : path to executable is /usr/sbin/haproxy
Jan 20 14:45:28 compute-1 neutron-haproxy-ovnmeta-eef55bf0-ad6a-4f88-adac-d746a869d579[259456]: [WARNING]  (259460) : Exiting Master process...
Jan 20 14:45:28 compute-1 neutron-haproxy-ovnmeta-eef55bf0-ad6a-4f88-adac-d746a869d579[259456]: [ALERT]    (259460) : Current worker (259462) exited with code 143 (Terminated)
Jan 20 14:45:28 compute-1 neutron-haproxy-ovnmeta-eef55bf0-ad6a-4f88-adac-d746a869d579[259456]: [WARNING]  (259460) : All workers exited. Exiting... (0)
Jan 20 14:45:28 compute-1 systemd[1]: libpod-e01522ec633af0a019b4440a7dd3965b02d373680203c840ebe0316866c9cf69.scope: Deactivated successfully.
Jan 20 14:45:28 compute-1 nova_compute[225855]: 2026-01-20 14:45:28.065 225859 INFO nova.virt.libvirt.driver [-] [instance: d236e7eb-2b7e-4031-b851-ae2790528213] Instance destroyed successfully.
Jan 20 14:45:28 compute-1 nova_compute[225855]: 2026-01-20 14:45:28.066 225859 DEBUG nova.objects.instance [None req-bcb4d055-d85b-43f3-970b-703623f4eb3b 9051b1fd0e0b40c2be07afc6da803903 144d821b8f624db687f0e009c5e06d8b - - default default] Lazy-loading 'resources' on Instance uuid d236e7eb-2b7e-4031-b851-ae2790528213 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:45:28 compute-1 ovn_controller[130490]: 2026-01-20T14:45:28Z|00317|binding|INFO|Setting lport 019ea2f5-1721-42e7-9c77-4fc1599f8101 ovn-installed in OVS
Jan 20 14:45:28 compute-1 nova_compute[225855]: 2026-01-20 14:45:28.072 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:45:28 compute-1 ovn_controller[130490]: 2026-01-20T14:45:28Z|00318|binding|INFO|Setting lport 019ea2f5-1721-42e7-9c77-4fc1599f8101 up in Southbound
Jan 20 14:45:28 compute-1 ovn_controller[130490]: 2026-01-20T14:45:28Z|00319|binding|INFO|Releasing lport 019ea2f5-1721-42e7-9c77-4fc1599f8101 from this chassis (sb_readonly=1)
Jan 20 14:45:28 compute-1 ovn_controller[130490]: 2026-01-20T14:45:28Z|00320|if_status|INFO|Dropped 2 log messages in last 270 seconds (most recently, 270 seconds ago) due to excessive rate
Jan 20 14:45:28 compute-1 ovn_controller[130490]: 2026-01-20T14:45:28Z|00321|if_status|INFO|Not setting lport 019ea2f5-1721-42e7-9c77-4fc1599f8101 down as sb is readonly
Jan 20 14:45:28 compute-1 nova_compute[225855]: 2026-01-20 14:45:28.074 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:45:28 compute-1 ovn_controller[130490]: 2026-01-20T14:45:28Z|00322|binding|INFO|Removing iface tap019ea2f5-17 ovn-installed in OVS
Jan 20 14:45:28 compute-1 podman[259840]: 2026-01-20 14:45:28.075732464 +0000 UTC m=+0.055604056 container died e01522ec633af0a019b4440a7dd3965b02d373680203c840ebe0316866c9cf69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-eef55bf0-ad6a-4f88-adac-d746a869d579, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 20 14:45:28 compute-1 ovn_controller[130490]: 2026-01-20T14:45:28Z|00323|binding|INFO|Releasing lport 019ea2f5-1721-42e7-9c77-4fc1599f8101 from this chassis (sb_readonly=0)
Jan 20 14:45:28 compute-1 ovn_controller[130490]: 2026-01-20T14:45:28Z|00324|binding|INFO|Setting lport 019ea2f5-1721-42e7-9c77-4fc1599f8101 down in Southbound
Jan 20 14:45:28 compute-1 nova_compute[225855]: 2026-01-20 14:45:28.090 225859 DEBUG nova.virt.libvirt.vif [None req-bcb4d055-d85b-43f3-970b-703623f4eb3b 9051b1fd0e0b40c2be07afc6da803903 144d821b8f624db687f0e009c5e06d8b - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T14:44:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestBootFromVolume-server-2113030518',display_name='tempest-ServersTestBootFromVolume-server-2113030518',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverstestbootfromvolume-server-2113030518',id=91,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCQAnzboP+S2NNqULNJ590LALw1e9CwKJIrHyMoISyM6baLxtf4y84xsP0kgRy7bjF2fbaXhodzuoV+0+uj6MQE6N4Q+sHthmobL8XMJ7dekwWVSr0yZf1dgnshwlxyeDQ==',key_name='tempest-keypair-1996933376',keypairs=<?>,launch_index=0,launched_at=2026-01-20T14:44:49Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='144d821b8f624db687f0e009c5e06d8b',ramdisk_id='',reservation_id='r-55pxq2es',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-ServersTestBootFromVolume-628592216',owner_user_name='tempest-ServersTestBootFromVolume-628592216-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T14:44:49Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='9051b1fd0e0b40c2be07afc6da803903',uuid=d236e7eb-2b7e-4031-b851-ae2790528213,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "019ea2f5-1721-42e7-9c77-4fc1599f8101", "address": "fa:16:3e:08:2c:78", "network": {"id": "eef55bf0-ad6a-4f88-adac-d746a869d579", "bridge": "br-int", "label": "tempest-ServersTestBootFromVolume-101625742-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": 
"gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.208", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "144d821b8f624db687f0e009c5e06d8b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap019ea2f5-17", "ovs_interfaceid": "019ea2f5-1721-42e7-9c77-4fc1599f8101", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 20 14:45:28 compute-1 nova_compute[225855]: 2026-01-20 14:45:28.091 225859 DEBUG nova.network.os_vif_util [None req-bcb4d055-d85b-43f3-970b-703623f4eb3b 9051b1fd0e0b40c2be07afc6da803903 144d821b8f624db687f0e009c5e06d8b - - default default] Converting VIF {"id": "019ea2f5-1721-42e7-9c77-4fc1599f8101", "address": "fa:16:3e:08:2c:78", "network": {"id": "eef55bf0-ad6a-4f88-adac-d746a869d579", "bridge": "br-int", "label": "tempest-ServersTestBootFromVolume-101625742-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.208", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "144d821b8f624db687f0e009c5e06d8b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap019ea2f5-17", "ovs_interfaceid": "019ea2f5-1721-42e7-9c77-4fc1599f8101", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:45:28 compute-1 nova_compute[225855]: 2026-01-20 14:45:28.093 225859 DEBUG nova.network.os_vif_util [None req-bcb4d055-d85b-43f3-970b-703623f4eb3b 9051b1fd0e0b40c2be07afc6da803903 144d821b8f624db687f0e009c5e06d8b - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:08:2c:78,bridge_name='br-int',has_traffic_filtering=True,id=019ea2f5-1721-42e7-9c77-4fc1599f8101,network=Network(eef55bf0-ad6a-4f88-adac-d746a869d579),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap019ea2f5-17') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:45:28 compute-1 nova_compute[225855]: 2026-01-20 14:45:28.093 225859 DEBUG os_vif [None req-bcb4d055-d85b-43f3-970b-703623f4eb3b 9051b1fd0e0b40c2be07afc6da803903 144d821b8f624db687f0e009c5e06d8b - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:08:2c:78,bridge_name='br-int',has_traffic_filtering=True,id=019ea2f5-1721-42e7-9c77-4fc1599f8101,network=Network(eef55bf0-ad6a-4f88-adac-d746a869d579),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap019ea2f5-17') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 20 14:45:28 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:45:28.094 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:08:2c:78 10.100.0.10'], port_security=['fa:16:3e:08:2c:78 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'd236e7eb-2b7e-4031-b851-ae2790528213', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-eef55bf0-ad6a-4f88-adac-d746a869d579', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '144d821b8f624db687f0e009c5e06d8b', 'neutron:revision_number': '4', 'neutron:security_group_ids': '4307ffb0-d161-4f09-96d3-b205cef524f3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com', 'neutron:port_fip': '192.168.122.208'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cd338232-5bc6-43af-b249-74124e0134b1, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], logical_port=019ea2f5-1721-42e7-9c77-4fc1599f8101) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:45:28 compute-1 nova_compute[225855]: 2026-01-20 14:45:28.096 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:45:28 compute-1 nova_compute[225855]: 2026-01-20 14:45:28.098 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap019ea2f5-17, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:45:28 compute-1 nova_compute[225855]: 2026-01-20 14:45:28.099 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:45:28 compute-1 nova_compute[225855]: 2026-01-20 14:45:28.103 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 14:45:28 compute-1 systemd[1]: var-lib-containers-storage-overlay-e6c68af6525807938ca92d3080ecb780abe59857167dc3a5929ab0345a12e53c-merged.mount: Deactivated successfully.
Jan 20 14:45:28 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e01522ec633af0a019b4440a7dd3965b02d373680203c840ebe0316866c9cf69-userdata-shm.mount: Deactivated successfully.
Jan 20 14:45:28 compute-1 nova_compute[225855]: 2026-01-20 14:45:28.107 225859 INFO os_vif [None req-bcb4d055-d85b-43f3-970b-703623f4eb3b 9051b1fd0e0b40c2be07afc6da803903 144d821b8f624db687f0e009c5e06d8b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:08:2c:78,bridge_name='br-int',has_traffic_filtering=True,id=019ea2f5-1721-42e7-9c77-4fc1599f8101,network=Network(eef55bf0-ad6a-4f88-adac-d746a869d579),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap019ea2f5-17')
Jan 20 14:45:28 compute-1 podman[259840]: 2026-01-20 14:45:28.117030644 +0000 UTC m=+0.096902216 container cleanup e01522ec633af0a019b4440a7dd3965b02d373680203c840ebe0316866c9cf69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-eef55bf0-ad6a-4f88-adac-d746a869d579, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 20 14:45:28 compute-1 systemd[1]: libpod-conmon-e01522ec633af0a019b4440a7dd3965b02d373680203c840ebe0316866c9cf69.scope: Deactivated successfully.
Jan 20 14:45:28 compute-1 podman[259860]: 2026-01-20 14:45:28.17481773 +0000 UTC m=+0.079597745 container health_status 72be95c12e041eb2cd1a16ad70dfa1391d9f03a5f3180a614ead9459dcfcd46f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_id=ovn_controller, 
container_name=ovn_controller, io.buildah.version=1.41.3)
Jan 20 14:45:28 compute-1 nova_compute[225855]: 2026-01-20 14:45:28.180 225859 DEBUG nova.compute.manager [req-6f42f92e-38d2-470e-90cb-7e3d224630a0 req-34cfbf53-c60d-40e1-8f51-a27ebd18c424 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d236e7eb-2b7e-4031-b851-ae2790528213] Received event network-vif-unplugged-019ea2f5-1721-42e7-9c77-4fc1599f8101 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:45:28 compute-1 nova_compute[225855]: 2026-01-20 14:45:28.180 225859 DEBUG oslo_concurrency.lockutils [req-6f42f92e-38d2-470e-90cb-7e3d224630a0 req-34cfbf53-c60d-40e1-8f51-a27ebd18c424 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "d236e7eb-2b7e-4031-b851-ae2790528213-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:45:28 compute-1 nova_compute[225855]: 2026-01-20 14:45:28.181 225859 DEBUG oslo_concurrency.lockutils [req-6f42f92e-38d2-470e-90cb-7e3d224630a0 req-34cfbf53-c60d-40e1-8f51-a27ebd18c424 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "d236e7eb-2b7e-4031-b851-ae2790528213-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:45:28 compute-1 nova_compute[225855]: 2026-01-20 14:45:28.181 225859 DEBUG oslo_concurrency.lockutils [req-6f42f92e-38d2-470e-90cb-7e3d224630a0 req-34cfbf53-c60d-40e1-8f51-a27ebd18c424 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "d236e7eb-2b7e-4031-b851-ae2790528213-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:45:28 compute-1 nova_compute[225855]: 2026-01-20 14:45:28.181 225859 DEBUG nova.compute.manager [req-6f42f92e-38d2-470e-90cb-7e3d224630a0 req-34cfbf53-c60d-40e1-8f51-a27ebd18c424 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d236e7eb-2b7e-4031-b851-ae2790528213] No waiting events found dispatching network-vif-unplugged-019ea2f5-1721-42e7-9c77-4fc1599f8101 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:45:28 compute-1 nova_compute[225855]: 2026-01-20 14:45:28.181 225859 DEBUG nova.compute.manager [req-6f42f92e-38d2-470e-90cb-7e3d224630a0 req-34cfbf53-c60d-40e1-8f51-a27ebd18c424 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d236e7eb-2b7e-4031-b851-ae2790528213] Received event network-vif-unplugged-019ea2f5-1721-42e7-9c77-4fc1599f8101 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 20 14:45:28 compute-1 podman[259899]: 2026-01-20 14:45:28.186922503 +0000 UTC m=+0.045234562 container remove e01522ec633af0a019b4440a7dd3965b02d373680203c840ebe0316866c9cf69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-eef55bf0-ad6a-4f88-adac-d746a869d579, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0)
Jan 20 14:45:28 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:45:28.191 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[a0dd34c5-a525-4cb8-a116-e2099750caf9]: (4, ('Tue Jan 20 02:45:28 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-eef55bf0-ad6a-4f88-adac-d746a869d579 (e01522ec633af0a019b4440a7dd3965b02d373680203c840ebe0316866c9cf69)\ne01522ec633af0a019b4440a7dd3965b02d373680203c840ebe0316866c9cf69\nTue Jan 20 02:45:28 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-eef55bf0-ad6a-4f88-adac-d746a869d579 (e01522ec633af0a019b4440a7dd3965b02d373680203c840ebe0316866c9cf69)\ne01522ec633af0a019b4440a7dd3965b02d373680203c840ebe0316866c9cf69\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:45:28 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:45:28.193 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[2919de25-91b0-4dd4-8b35-522e6e909ff3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:45:28 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:45:28.194 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapeef55bf0-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:45:28 compute-1 nova_compute[225855]: 2026-01-20 14:45:28.195 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:45:28 compute-1 kernel: tapeef55bf0-a0: left promiscuous mode
Jan 20 14:45:28 compute-1 nova_compute[225855]: 2026-01-20 14:45:28.208 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:45:28 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:45:28.210 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[bab3b0cc-9e66-42b2-94ef-92fa604d1707]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:45:28 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:45:28.227 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[92f7e8f5-3162-4913-8f3c-04a0dffc86de]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:45:28 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:45:28.228 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[1c44c6c6-4503-4598-bf00-59d56a322a71]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:45:28 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:45:28.241 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[6212af40-4166-4336-a67e-f68746b4ef84]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 532797, 'reachable_time': 36404, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 259935, 'error': None, 'target': 'ovnmeta-eef55bf0-ad6a-4f88-adac-d746a869d579', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:45:28 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:45:28.244 140466 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-eef55bf0-ad6a-4f88-adac-d746a869d579 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 20 14:45:28 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:45:28.244 140466 DEBUG oslo.privsep.daemon [-] privsep: reply[6fe33c11-8142-464a-beb6-b4294b9c4ef7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:45:28 compute-1 systemd[1]: run-netns-ovnmeta\x2deef55bf0\x2dad6a\x2d4f88\x2dadac\x2dd746a869d579.mount: Deactivated successfully.
Jan 20 14:45:28 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:45:28.245 140354 INFO neutron.agent.ovn.metadata.agent [-] Port 019ea2f5-1721-42e7-9c77-4fc1599f8101 in datapath eef55bf0-ad6a-4f88-adac-d746a869d579 unbound from our chassis
Jan 20 14:45:28 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:45:28.247 140354 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network eef55bf0-ad6a-4f88-adac-d746a869d579, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 20 14:45:28 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:45:28.247 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[4556303e-a86b-4108-8362-cc08f636d4b3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:45:28 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:45:28.248 140354 INFO neutron.agent.ovn.metadata.agent [-] Port 019ea2f5-1721-42e7-9c77-4fc1599f8101 in datapath eef55bf0-ad6a-4f88-adac-d746a869d579 unbound from our chassis
Jan 20 14:45:28 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:45:28.250 140354 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network eef55bf0-ad6a-4f88-adac-d746a869d579, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 20 14:45:28 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:45:28.250 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[617c3531-d293-4e08-a112-d08b95f0469a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:45:28 compute-1 nova_compute[225855]: 2026-01-20 14:45:28.307 225859 INFO nova.virt.libvirt.driver [None req-bcb4d055-d85b-43f3-970b-703623f4eb3b 9051b1fd0e0b40c2be07afc6da803903 144d821b8f624db687f0e009c5e06d8b - - default default] [instance: d236e7eb-2b7e-4031-b851-ae2790528213] Deleting instance files /var/lib/nova/instances/d236e7eb-2b7e-4031-b851-ae2790528213_del
Jan 20 14:45:28 compute-1 nova_compute[225855]: 2026-01-20 14:45:28.307 225859 INFO nova.virt.libvirt.driver [None req-bcb4d055-d85b-43f3-970b-703623f4eb3b 9051b1fd0e0b40c2be07afc6da803903 144d821b8f624db687f0e009c5e06d8b - - default default] [instance: d236e7eb-2b7e-4031-b851-ae2790528213] Deletion of /var/lib/nova/instances/d236e7eb-2b7e-4031-b851-ae2790528213_del complete
Jan 20 14:45:28 compute-1 nova_compute[225855]: 2026-01-20 14:45:28.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:45:28 compute-1 nova_compute[225855]: 2026-01-20 14:45:28.340 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 14:45:28 compute-1 nova_compute[225855]: 2026-01-20 14:45:28.340 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 20 14:45:28 compute-1 nova_compute[225855]: 2026-01-20 14:45:28.375 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] [instance: d236e7eb-2b7e-4031-b851-ae2790528213] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875
Jan 20 14:45:28 compute-1 nova_compute[225855]: 2026-01-20 14:45:28.383 225859 INFO nova.compute.manager [None req-bcb4d055-d85b-43f3-970b-703623f4eb3b 9051b1fd0e0b40c2be07afc6da803903 144d821b8f624db687f0e009c5e06d8b - - default default] [instance: d236e7eb-2b7e-4031-b851-ae2790528213] Took 0.56 seconds to destroy the instance on the hypervisor.
Jan 20 14:45:28 compute-1 nova_compute[225855]: 2026-01-20 14:45:28.383 225859 DEBUG oslo.service.loopingcall [None req-bcb4d055-d85b-43f3-970b-703623f4eb3b 9051b1fd0e0b40c2be07afc6da803903 144d821b8f624db687f0e009c5e06d8b - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 20 14:45:28 compute-1 nova_compute[225855]: 2026-01-20 14:45:28.384 225859 DEBUG nova.compute.manager [-] [instance: d236e7eb-2b7e-4031-b851-ae2790528213] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 20 14:45:28 compute-1 nova_compute[225855]: 2026-01-20 14:45:28.384 225859 DEBUG nova.network.neutron [-] [instance: d236e7eb-2b7e-4031-b851-ae2790528213] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 20 14:45:28 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:45:28 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:45:28 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:45:28.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:45:28 compute-1 nova_compute[225855]: 2026-01-20 14:45:28.643 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "refresh_cache-6586bc3e-3a94-4d22-8e8c-713a86a956fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:45:28 compute-1 nova_compute[225855]: 2026-01-20 14:45:28.644 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquired lock "refresh_cache-6586bc3e-3a94-4d22-8e8c-713a86a956fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:45:28 compute-1 nova_compute[225855]: 2026-01-20 14:45:28.644 225859 DEBUG nova.network.neutron [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] [instance: 6586bc3e-3a94-4d22-8e8c-713a86a956fb] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 20 14:45:28 compute-1 nova_compute[225855]: 2026-01-20 14:45:28.645 225859 DEBUG nova.objects.instance [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 6586bc3e-3a94-4d22-8e8c-713a86a956fb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:45:29 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:45:29 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:45:29 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:45:29.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:45:29 compute-1 nova_compute[225855]: 2026-01-20 14:45:29.713 225859 DEBUG nova.network.neutron [-] [instance: d236e7eb-2b7e-4031-b851-ae2790528213] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:45:29 compute-1 nova_compute[225855]: 2026-01-20 14:45:29.750 225859 INFO nova.compute.manager [-] [instance: d236e7eb-2b7e-4031-b851-ae2790528213] Took 1.37 seconds to deallocate network for instance.
Jan 20 14:45:29 compute-1 ceph-mon[81775]: pgmap v1714: 321 pgs: 321 active+clean; 339 MiB data, 839 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 1.9 MiB/s wr, 217 op/s
Jan 20 14:45:30 compute-1 nova_compute[225855]: 2026-01-20 14:45:30.022 225859 INFO nova.compute.manager [None req-bcb4d055-d85b-43f3-970b-703623f4eb3b 9051b1fd0e0b40c2be07afc6da803903 144d821b8f624db687f0e009c5e06d8b - - default default] [instance: d236e7eb-2b7e-4031-b851-ae2790528213] Took 0.27 seconds to detach 1 volumes for instance.
Jan 20 14:45:30 compute-1 nova_compute[225855]: 2026-01-20 14:45:30.024 225859 DEBUG nova.compute.manager [None req-bcb4d055-d85b-43f3-970b-703623f4eb3b 9051b1fd0e0b40c2be07afc6da803903 144d821b8f624db687f0e009c5e06d8b - - default default] [instance: d236e7eb-2b7e-4031-b851-ae2790528213] Deleting volume: 2441d1fb-fc23-4a6d-b88d-4d82b035b65f _cleanup_volumes /usr/lib/python3.9/site-packages/nova/compute/manager.py:3217
Jan 20 14:45:30 compute-1 nova_compute[225855]: 2026-01-20 14:45:30.090 225859 DEBUG nova.network.neutron [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] [instance: 6586bc3e-3a94-4d22-8e8c-713a86a956fb] Updating instance_info_cache with network_info: [{"id": "2c289e6f-295e-44c3-948a-9a6901251890", "address": "fa:16:3e:2f:4c:e2", "network": {"id": "a19e9d1a-864f-41ee-bdea-188e65973ea5", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-916311998-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b683fcc0026242e28ba6d8fba638688e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c289e6f-29", "ovs_interfaceid": "2c289e6f-295e-44c3-948a-9a6901251890", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:45:30 compute-1 nova_compute[225855]: 2026-01-20 14:45:30.113 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Releasing lock "refresh_cache-6586bc3e-3a94-4d22-8e8c-713a86a956fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:45:30 compute-1 nova_compute[225855]: 2026-01-20 14:45:30.113 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] [instance: 6586bc3e-3a94-4d22-8e8c-713a86a956fb] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 20 14:45:30 compute-1 nova_compute[225855]: 2026-01-20 14:45:30.114 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:45:30 compute-1 nova_compute[225855]: 2026-01-20 14:45:30.114 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:45:30 compute-1 nova_compute[225855]: 2026-01-20 14:45:30.209 225859 DEBUG oslo_concurrency.lockutils [None req-bcb4d055-d85b-43f3-970b-703623f4eb3b 9051b1fd0e0b40c2be07afc6da803903 144d821b8f624db687f0e009c5e06d8b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:45:30 compute-1 nova_compute[225855]: 2026-01-20 14:45:30.209 225859 DEBUG oslo_concurrency.lockutils [None req-bcb4d055-d85b-43f3-970b-703623f4eb3b 9051b1fd0e0b40c2be07afc6da803903 144d821b8f624db687f0e009c5e06d8b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:45:30 compute-1 nova_compute[225855]: 2026-01-20 14:45:30.275 225859 DEBUG nova.compute.manager [req-282c4f80-0292-4e02-8ff6-d44e45a351c6 req-e48e2abf-6fdd-4742-9e27-294927ef0d31 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d236e7eb-2b7e-4031-b851-ae2790528213] Received event network-vif-plugged-019ea2f5-1721-42e7-9c77-4fc1599f8101 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:45:30 compute-1 nova_compute[225855]: 2026-01-20 14:45:30.276 225859 DEBUG oslo_concurrency.lockutils [req-282c4f80-0292-4e02-8ff6-d44e45a351c6 req-e48e2abf-6fdd-4742-9e27-294927ef0d31 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "d236e7eb-2b7e-4031-b851-ae2790528213-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:45:30 compute-1 nova_compute[225855]: 2026-01-20 14:45:30.277 225859 DEBUG oslo_concurrency.lockutils [req-282c4f80-0292-4e02-8ff6-d44e45a351c6 req-e48e2abf-6fdd-4742-9e27-294927ef0d31 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "d236e7eb-2b7e-4031-b851-ae2790528213-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:45:30 compute-1 nova_compute[225855]: 2026-01-20 14:45:30.277 225859 DEBUG oslo_concurrency.lockutils [req-282c4f80-0292-4e02-8ff6-d44e45a351c6 req-e48e2abf-6fdd-4742-9e27-294927ef0d31 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "d236e7eb-2b7e-4031-b851-ae2790528213-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:45:30 compute-1 nova_compute[225855]: 2026-01-20 14:45:30.277 225859 DEBUG nova.compute.manager [req-282c4f80-0292-4e02-8ff6-d44e45a351c6 req-e48e2abf-6fdd-4742-9e27-294927ef0d31 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d236e7eb-2b7e-4031-b851-ae2790528213] No waiting events found dispatching network-vif-plugged-019ea2f5-1721-42e7-9c77-4fc1599f8101 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:45:30 compute-1 nova_compute[225855]: 2026-01-20 14:45:30.278 225859 WARNING nova.compute.manager [req-282c4f80-0292-4e02-8ff6-d44e45a351c6 req-e48e2abf-6fdd-4742-9e27-294927ef0d31 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d236e7eb-2b7e-4031-b851-ae2790528213] Received unexpected event network-vif-plugged-019ea2f5-1721-42e7-9c77-4fc1599f8101 for instance with vm_state deleted and task_state None.
Jan 20 14:45:30 compute-1 nova_compute[225855]: 2026-01-20 14:45:30.278 225859 DEBUG nova.compute.manager [req-282c4f80-0292-4e02-8ff6-d44e45a351c6 req-e48e2abf-6fdd-4742-9e27-294927ef0d31 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d236e7eb-2b7e-4031-b851-ae2790528213] Received event network-vif-plugged-019ea2f5-1721-42e7-9c77-4fc1599f8101 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:45:30 compute-1 nova_compute[225855]: 2026-01-20 14:45:30.278 225859 DEBUG oslo_concurrency.lockutils [req-282c4f80-0292-4e02-8ff6-d44e45a351c6 req-e48e2abf-6fdd-4742-9e27-294927ef0d31 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "d236e7eb-2b7e-4031-b851-ae2790528213-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:45:30 compute-1 nova_compute[225855]: 2026-01-20 14:45:30.279 225859 DEBUG oslo_concurrency.lockutils [req-282c4f80-0292-4e02-8ff6-d44e45a351c6 req-e48e2abf-6fdd-4742-9e27-294927ef0d31 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "d236e7eb-2b7e-4031-b851-ae2790528213-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:45:30 compute-1 nova_compute[225855]: 2026-01-20 14:45:30.279 225859 DEBUG oslo_concurrency.lockutils [req-282c4f80-0292-4e02-8ff6-d44e45a351c6 req-e48e2abf-6fdd-4742-9e27-294927ef0d31 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "d236e7eb-2b7e-4031-b851-ae2790528213-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:45:30 compute-1 nova_compute[225855]: 2026-01-20 14:45:30.279 225859 DEBUG nova.compute.manager [req-282c4f80-0292-4e02-8ff6-d44e45a351c6 req-e48e2abf-6fdd-4742-9e27-294927ef0d31 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d236e7eb-2b7e-4031-b851-ae2790528213] No waiting events found dispatching network-vif-plugged-019ea2f5-1721-42e7-9c77-4fc1599f8101 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:45:30 compute-1 nova_compute[225855]: 2026-01-20 14:45:30.279 225859 WARNING nova.compute.manager [req-282c4f80-0292-4e02-8ff6-d44e45a351c6 req-e48e2abf-6fdd-4742-9e27-294927ef0d31 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d236e7eb-2b7e-4031-b851-ae2790528213] Received unexpected event network-vif-plugged-019ea2f5-1721-42e7-9c77-4fc1599f8101 for instance with vm_state deleted and task_state None.
Jan 20 14:45:30 compute-1 nova_compute[225855]: 2026-01-20 14:45:30.280 225859 DEBUG nova.compute.manager [req-282c4f80-0292-4e02-8ff6-d44e45a351c6 req-e48e2abf-6fdd-4742-9e27-294927ef0d31 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d236e7eb-2b7e-4031-b851-ae2790528213] Received event network-vif-plugged-019ea2f5-1721-42e7-9c77-4fc1599f8101 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:45:30 compute-1 nova_compute[225855]: 2026-01-20 14:45:30.280 225859 DEBUG oslo_concurrency.lockutils [req-282c4f80-0292-4e02-8ff6-d44e45a351c6 req-e48e2abf-6fdd-4742-9e27-294927ef0d31 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "d236e7eb-2b7e-4031-b851-ae2790528213-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:45:30 compute-1 nova_compute[225855]: 2026-01-20 14:45:30.280 225859 DEBUG oslo_concurrency.lockutils [req-282c4f80-0292-4e02-8ff6-d44e45a351c6 req-e48e2abf-6fdd-4742-9e27-294927ef0d31 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "d236e7eb-2b7e-4031-b851-ae2790528213-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:45:30 compute-1 nova_compute[225855]: 2026-01-20 14:45:30.281 225859 DEBUG oslo_concurrency.lockutils [req-282c4f80-0292-4e02-8ff6-d44e45a351c6 req-e48e2abf-6fdd-4742-9e27-294927ef0d31 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "d236e7eb-2b7e-4031-b851-ae2790528213-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:45:30 compute-1 nova_compute[225855]: 2026-01-20 14:45:30.281 225859 DEBUG nova.compute.manager [req-282c4f80-0292-4e02-8ff6-d44e45a351c6 req-e48e2abf-6fdd-4742-9e27-294927ef0d31 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d236e7eb-2b7e-4031-b851-ae2790528213] No waiting events found dispatching network-vif-plugged-019ea2f5-1721-42e7-9c77-4fc1599f8101 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:45:30 compute-1 nova_compute[225855]: 2026-01-20 14:45:30.281 225859 WARNING nova.compute.manager [req-282c4f80-0292-4e02-8ff6-d44e45a351c6 req-e48e2abf-6fdd-4742-9e27-294927ef0d31 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d236e7eb-2b7e-4031-b851-ae2790528213] Received unexpected event network-vif-plugged-019ea2f5-1721-42e7-9c77-4fc1599f8101 for instance with vm_state deleted and task_state None.
Jan 20 14:45:30 compute-1 nova_compute[225855]: 2026-01-20 14:45:30.281 225859 DEBUG nova.compute.manager [req-282c4f80-0292-4e02-8ff6-d44e45a351c6 req-e48e2abf-6fdd-4742-9e27-294927ef0d31 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d236e7eb-2b7e-4031-b851-ae2790528213] Received event network-vif-plugged-019ea2f5-1721-42e7-9c77-4fc1599f8101 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:45:30 compute-1 nova_compute[225855]: 2026-01-20 14:45:30.282 225859 DEBUG oslo_concurrency.lockutils [req-282c4f80-0292-4e02-8ff6-d44e45a351c6 req-e48e2abf-6fdd-4742-9e27-294927ef0d31 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "d236e7eb-2b7e-4031-b851-ae2790528213-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:45:30 compute-1 nova_compute[225855]: 2026-01-20 14:45:30.282 225859 DEBUG oslo_concurrency.lockutils [req-282c4f80-0292-4e02-8ff6-d44e45a351c6 req-e48e2abf-6fdd-4742-9e27-294927ef0d31 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "d236e7eb-2b7e-4031-b851-ae2790528213-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:45:30 compute-1 nova_compute[225855]: 2026-01-20 14:45:30.282 225859 DEBUG oslo_concurrency.lockutils [req-282c4f80-0292-4e02-8ff6-d44e45a351c6 req-e48e2abf-6fdd-4742-9e27-294927ef0d31 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "d236e7eb-2b7e-4031-b851-ae2790528213-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:45:30 compute-1 nova_compute[225855]: 2026-01-20 14:45:30.283 225859 DEBUG nova.compute.manager [req-282c4f80-0292-4e02-8ff6-d44e45a351c6 req-e48e2abf-6fdd-4742-9e27-294927ef0d31 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d236e7eb-2b7e-4031-b851-ae2790528213] No waiting events found dispatching network-vif-plugged-019ea2f5-1721-42e7-9c77-4fc1599f8101 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:45:30 compute-1 nova_compute[225855]: 2026-01-20 14:45:30.283 225859 WARNING nova.compute.manager [req-282c4f80-0292-4e02-8ff6-d44e45a351c6 req-e48e2abf-6fdd-4742-9e27-294927ef0d31 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d236e7eb-2b7e-4031-b851-ae2790528213] Received unexpected event network-vif-plugged-019ea2f5-1721-42e7-9c77-4fc1599f8101 for instance with vm_state deleted and task_state None.
Jan 20 14:45:30 compute-1 nova_compute[225855]: 2026-01-20 14:45:30.283 225859 DEBUG nova.compute.manager [req-282c4f80-0292-4e02-8ff6-d44e45a351c6 req-e48e2abf-6fdd-4742-9e27-294927ef0d31 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d236e7eb-2b7e-4031-b851-ae2790528213] Received event network-vif-deleted-019ea2f5-1721-42e7-9c77-4fc1599f8101 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:45:30 compute-1 nova_compute[225855]: 2026-01-20 14:45:30.307 225859 DEBUG oslo_concurrency.processutils [None req-bcb4d055-d85b-43f3-970b-703623f4eb3b 9051b1fd0e0b40c2be07afc6da803903 144d821b8f624db687f0e009c5e06d8b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:45:30 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:45:30 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:45:30 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:45:30.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:45:30 compute-1 nova_compute[225855]: 2026-01-20 14:45:30.568 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:45:30 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:45:30 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1005838163' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:45:30 compute-1 nova_compute[225855]: 2026-01-20 14:45:30.744 225859 DEBUG oslo_concurrency.processutils [None req-bcb4d055-d85b-43f3-970b-703623f4eb3b 9051b1fd0e0b40c2be07afc6da803903 144d821b8f624db687f0e009c5e06d8b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:45:30 compute-1 nova_compute[225855]: 2026-01-20 14:45:30.749 225859 DEBUG nova.compute.provider_tree [None req-bcb4d055-d85b-43f3-970b-703623f4eb3b 9051b1fd0e0b40c2be07afc6da803903 144d821b8f624db687f0e009c5e06d8b - - default default] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:45:30 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 20 14:45:30 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1978237855' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 14:45:30 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 20 14:45:30 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1978237855' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 14:45:30 compute-1 nova_compute[225855]: 2026-01-20 14:45:30.806 225859 DEBUG nova.scheduler.client.report [None req-bcb4d055-d85b-43f3-970b-703623f4eb3b 9051b1fd0e0b40c2be07afc6da803903 144d821b8f624db687f0e009c5e06d8b - - default default] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:45:30 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/1005838163' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:45:30 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/1978237855' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 14:45:30 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/1978237855' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 14:45:30 compute-1 nova_compute[225855]: 2026-01-20 14:45:30.828 225859 DEBUG oslo_concurrency.lockutils [None req-bcb4d055-d85b-43f3-970b-703623f4eb3b 9051b1fd0e0b40c2be07afc6da803903 144d821b8f624db687f0e009c5e06d8b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.619s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:45:30 compute-1 nova_compute[225855]: 2026-01-20 14:45:30.860 225859 INFO nova.scheduler.client.report [None req-bcb4d055-d85b-43f3-970b-703623f4eb3b 9051b1fd0e0b40c2be07afc6da803903 144d821b8f624db687f0e009c5e06d8b - - default default] Deleted allocations for instance d236e7eb-2b7e-4031-b851-ae2790528213
Jan 20 14:45:30 compute-1 nova_compute[225855]: 2026-01-20 14:45:30.937 225859 DEBUG oslo_concurrency.lockutils [None req-bcb4d055-d85b-43f3-970b-703623f4eb3b 9051b1fd0e0b40c2be07afc6da803903 144d821b8f624db687f0e009c5e06d8b - - default default] Lock "d236e7eb-2b7e-4031-b851-ae2790528213" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.115s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:45:31 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:45:31 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:45:31 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:45:31.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:45:31 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:45:31 compute-1 ceph-mon[81775]: pgmap v1715: 321 pgs: 321 active+clean; 339 MiB data, 839 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 227 op/s
Jan 20 14:45:32 compute-1 nova_compute[225855]: 2026-01-20 14:45:32.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:45:32 compute-1 nova_compute[225855]: 2026-01-20 14:45:32.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:45:32 compute-1 nova_compute[225855]: 2026-01-20 14:45:32.339 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 14:45:32 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:45:32 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:45:32 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:45:32.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:45:32 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/1381354546' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:45:32 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/4086620939' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:45:33 compute-1 nova_compute[225855]: 2026-01-20 14:45:33.100 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:45:33 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:45:33 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:45:33 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:45:33.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:45:34 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/1341277129' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:45:34 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/2097302598' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:45:34 compute-1 ceph-mon[81775]: pgmap v1716: 321 pgs: 321 active+clean; 316 MiB data, 832 MiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 2.3 MiB/s wr, 221 op/s
Jan 20 14:45:34 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:45:34 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:45:34 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:45:34.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:45:35 compute-1 nova_compute[225855]: 2026-01-20 14:45:35.335 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:45:35 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/263069660' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:45:35 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:45:35 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:45:35 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:45:35.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:45:35 compute-1 nova_compute[225855]: 2026-01-20 14:45:35.602 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:45:35 compute-1 ovn_controller[130490]: 2026-01-20T14:45:35Z|00325|binding|INFO|Releasing lport 5527ab8d-a985-420b-9d5b-7e5d9baf7004 from this chassis (sb_readonly=0)
Jan 20 14:45:36 compute-1 nova_compute[225855]: 2026-01-20 14:45:36.041 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:45:36 compute-1 nova_compute[225855]: 2026-01-20 14:45:36.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:45:36 compute-1 nova_compute[225855]: 2026-01-20 14:45:36.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:45:36 compute-1 ceph-mon[81775]: pgmap v1717: 321 pgs: 321 active+clean; 252 MiB data, 807 MiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 4.8 MiB/s wr, 306 op/s
Jan 20 14:45:36 compute-1 nova_compute[225855]: 2026-01-20 14:45:36.366 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:45:36 compute-1 nova_compute[225855]: 2026-01-20 14:45:36.366 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:45:36 compute-1 nova_compute[225855]: 2026-01-20 14:45:36.366 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:45:36 compute-1 nova_compute[225855]: 2026-01-20 14:45:36.367 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 14:45:36 compute-1 nova_compute[225855]: 2026-01-20 14:45:36.367 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:45:36 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e233 e233: 3 total, 3 up, 3 in
Jan 20 14:45:36 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:45:36 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:45:36 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:45:36.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:45:36 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e233 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:45:36 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:45:36 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2360956372' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:45:36 compute-1 nova_compute[225855]: 2026-01-20 14:45:36.841 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:45:36 compute-1 nova_compute[225855]: 2026-01-20 14:45:36.901 225859 DEBUG nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] skipping disk for instance-00000057 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 14:45:36 compute-1 nova_compute[225855]: 2026-01-20 14:45:36.902 225859 DEBUG nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] skipping disk for instance-00000057 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 14:45:37 compute-1 nova_compute[225855]: 2026-01-20 14:45:37.051 225859 WARNING nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 14:45:37 compute-1 nova_compute[225855]: 2026-01-20 14:45:37.052 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4352MB free_disk=20.8660888671875GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 14:45:37 compute-1 nova_compute[225855]: 2026-01-20 14:45:37.053 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:45:37 compute-1 nova_compute[225855]: 2026-01-20 14:45:37.053 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:45:37 compute-1 nova_compute[225855]: 2026-01-20 14:45:37.347 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Instance 6586bc3e-3a94-4d22-8e8c-713a86a956fb actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 20 14:45:37 compute-1 nova_compute[225855]: 2026-01-20 14:45:37.347 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 14:45:37 compute-1 nova_compute[225855]: 2026-01-20 14:45:37.348 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 14:45:37 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:45:37 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:45:37 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:45:37.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:45:37 compute-1 nova_compute[225855]: 2026-01-20 14:45:37.398 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:45:37 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e234 e234: 3 total, 3 up, 3 in
Jan 20 14:45:37 compute-1 ceph-mon[81775]: osdmap e233: 3 total, 3 up, 3 in
Jan 20 14:45:37 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/2360956372' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:45:38 compute-1 podman[260005]: 2026-01-20 14:45:38.001151885 +0000 UTC m=+0.049483442 container health_status 533cd8e0c3b8b7f910f15cef49e7f879ab46c12ce646a35628c15f13725eb500 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 20 14:45:38 compute-1 nova_compute[225855]: 2026-01-20 14:45:38.103 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:45:38 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:45:38 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3888261964' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:45:38 compute-1 nova_compute[225855]: 2026-01-20 14:45:38.290 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.892s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:45:38 compute-1 nova_compute[225855]: 2026-01-20 14:45:38.295 225859 DEBUG nova.compute.provider_tree [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:45:38 compute-1 nova_compute[225855]: 2026-01-20 14:45:38.316 225859 DEBUG nova.scheduler.client.report [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:45:38 compute-1 nova_compute[225855]: 2026-01-20 14:45:38.338 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 14:45:38 compute-1 nova_compute[225855]: 2026-01-20 14:45:38.339 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.286s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:45:38 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:45:38 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:45:38 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:45:38.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:45:39 compute-1 ceph-mon[81775]: osdmap e234: 3 total, 3 up, 3 in
Jan 20 14:45:39 compute-1 ceph-mon[81775]: pgmap v1720: 321 pgs: 321 active+clean; 294 MiB data, 827 MiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 7.3 MiB/s wr, 293 op/s
Jan 20 14:45:39 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/3888261964' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:45:39 compute-1 nova_compute[225855]: 2026-01-20 14:45:39.340 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:45:39 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:45:39 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:45:39 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:45:39.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:45:40 compute-1 ceph-mon[81775]: pgmap v1721: 321 pgs: 321 active+clean; 326 MiB data, 847 MiB used, 20 GiB / 21 GiB avail; 4.5 MiB/s rd, 9.7 MiB/s wr, 310 op/s
Jan 20 14:45:40 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e235 e235: 3 total, 3 up, 3 in
Jan 20 14:45:40 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:45:40 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:45:40 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:45:40.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:45:40 compute-1 nova_compute[225855]: 2026-01-20 14:45:40.602 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:45:41 compute-1 ceph-mon[81775]: osdmap e235: 3 total, 3 up, 3 in
Jan 20 14:45:41 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:45:41 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:45:41 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:45:41.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:45:41 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e235 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:45:42 compute-1 nova_compute[225855]: 2026-01-20 14:45:42.183 225859 DEBUG oslo_concurrency.lockutils [None req-370cb0b5-817d-4c64-98bf-e48aa665a7a1 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Acquiring lock "5f56b3e9-af2b-4934-8184-6257994c6b6a" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:45:42 compute-1 nova_compute[225855]: 2026-01-20 14:45:42.184 225859 DEBUG oslo_concurrency.lockutils [None req-370cb0b5-817d-4c64-98bf-e48aa665a7a1 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Lock "5f56b3e9-af2b-4934-8184-6257994c6b6a" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:45:42 compute-1 nova_compute[225855]: 2026-01-20 14:45:42.219 225859 DEBUG nova.compute.manager [None req-370cb0b5-817d-4c64-98bf-e48aa665a7a1 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] [instance: 5f56b3e9-af2b-4934-8184-6257994c6b6a] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 20 14:45:42 compute-1 ceph-mon[81775]: pgmap v1723: 321 pgs: 321 active+clean; 358 MiB data, 867 MiB used, 20 GiB / 21 GiB avail; 8.3 MiB/s rd, 10 MiB/s wr, 217 op/s
Jan 20 14:45:42 compute-1 nova_compute[225855]: 2026-01-20 14:45:42.304 225859 DEBUG oslo_concurrency.lockutils [None req-370cb0b5-817d-4c64-98bf-e48aa665a7a1 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:45:42 compute-1 nova_compute[225855]: 2026-01-20 14:45:42.304 225859 DEBUG oslo_concurrency.lockutils [None req-370cb0b5-817d-4c64-98bf-e48aa665a7a1 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:45:42 compute-1 nova_compute[225855]: 2026-01-20 14:45:42.311 225859 DEBUG nova.virt.hardware [None req-370cb0b5-817d-4c64-98bf-e48aa665a7a1 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 20 14:45:42 compute-1 nova_compute[225855]: 2026-01-20 14:45:42.312 225859 INFO nova.compute.claims [None req-370cb0b5-817d-4c64-98bf-e48aa665a7a1 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] [instance: 5f56b3e9-af2b-4934-8184-6257994c6b6a] Claim successful on node compute-1.ctlplane.example.com
Jan 20 14:45:42 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:45:42 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:45:42 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:45:42.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:45:42 compute-1 nova_compute[225855]: 2026-01-20 14:45:42.493 225859 DEBUG oslo_concurrency.processutils [None req-370cb0b5-817d-4c64-98bf-e48aa665a7a1 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:45:42 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:45:42 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1623992898' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:45:42 compute-1 nova_compute[225855]: 2026-01-20 14:45:42.906 225859 DEBUG oslo_concurrency.processutils [None req-370cb0b5-817d-4c64-98bf-e48aa665a7a1 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.413s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:45:42 compute-1 nova_compute[225855]: 2026-01-20 14:45:42.913 225859 DEBUG nova.compute.provider_tree [None req-370cb0b5-817d-4c64-98bf-e48aa665a7a1 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:45:42 compute-1 nova_compute[225855]: 2026-01-20 14:45:42.926 225859 DEBUG nova.scheduler.client.report [None req-370cb0b5-817d-4c64-98bf-e48aa665a7a1 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:45:42 compute-1 nova_compute[225855]: 2026-01-20 14:45:42.942 225859 DEBUG oslo_concurrency.lockutils [None req-370cb0b5-817d-4c64-98bf-e48aa665a7a1 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.638s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:45:42 compute-1 nova_compute[225855]: 2026-01-20 14:45:42.943 225859 DEBUG nova.compute.manager [None req-370cb0b5-817d-4c64-98bf-e48aa665a7a1 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] [instance: 5f56b3e9-af2b-4934-8184-6257994c6b6a] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 20 14:45:42 compute-1 nova_compute[225855]: 2026-01-20 14:45:42.991 225859 DEBUG nova.compute.manager [None req-370cb0b5-817d-4c64-98bf-e48aa665a7a1 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] [instance: 5f56b3e9-af2b-4934-8184-6257994c6b6a] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 20 14:45:42 compute-1 nova_compute[225855]: 2026-01-20 14:45:42.992 225859 DEBUG nova.network.neutron [None req-370cb0b5-817d-4c64-98bf-e48aa665a7a1 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] [instance: 5f56b3e9-af2b-4934-8184-6257994c6b6a] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 20 14:45:43 compute-1 nova_compute[225855]: 2026-01-20 14:45:43.024 225859 INFO nova.virt.libvirt.driver [None req-370cb0b5-817d-4c64-98bf-e48aa665a7a1 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] [instance: 5f56b3e9-af2b-4934-8184-6257994c6b6a] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 20 14:45:43 compute-1 nova_compute[225855]: 2026-01-20 14:45:43.049 225859 DEBUG nova.compute.manager [None req-370cb0b5-817d-4c64-98bf-e48aa665a7a1 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] [instance: 5f56b3e9-af2b-4934-8184-6257994c6b6a] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 20 14:45:43 compute-1 nova_compute[225855]: 2026-01-20 14:45:43.062 225859 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1768920328.0617414, d236e7eb-2b7e-4031-b851-ae2790528213 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:45:43 compute-1 nova_compute[225855]: 2026-01-20 14:45:43.063 225859 INFO nova.compute.manager [-] [instance: d236e7eb-2b7e-4031-b851-ae2790528213] VM Stopped (Lifecycle Event)
Jan 20 14:45:43 compute-1 nova_compute[225855]: 2026-01-20 14:45:43.090 225859 DEBUG nova.compute.manager [None req-fb0885cb-eb63-46a5-989c-dc6aacdaabd9 - - - - - -] [instance: d236e7eb-2b7e-4031-b851-ae2790528213] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:45:43 compute-1 nova_compute[225855]: 2026-01-20 14:45:43.106 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:45:43 compute-1 nova_compute[225855]: 2026-01-20 14:45:43.181 225859 DEBUG nova.compute.manager [None req-370cb0b5-817d-4c64-98bf-e48aa665a7a1 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] [instance: 5f56b3e9-af2b-4934-8184-6257994c6b6a] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 20 14:45:43 compute-1 nova_compute[225855]: 2026-01-20 14:45:43.182 225859 DEBUG nova.virt.libvirt.driver [None req-370cb0b5-817d-4c64-98bf-e48aa665a7a1 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] [instance: 5f56b3e9-af2b-4934-8184-6257994c6b6a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 20 14:45:43 compute-1 nova_compute[225855]: 2026-01-20 14:45:43.182 225859 INFO nova.virt.libvirt.driver [None req-370cb0b5-817d-4c64-98bf-e48aa665a7a1 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] [instance: 5f56b3e9-af2b-4934-8184-6257994c6b6a] Creating image(s)
Jan 20 14:45:43 compute-1 nova_compute[225855]: 2026-01-20 14:45:43.205 225859 DEBUG nova.storage.rbd_utils [None req-370cb0b5-817d-4c64-98bf-e48aa665a7a1 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] rbd image 5f56b3e9-af2b-4934-8184-6257994c6b6a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:45:43 compute-1 nova_compute[225855]: 2026-01-20 14:45:43.231 225859 DEBUG nova.storage.rbd_utils [None req-370cb0b5-817d-4c64-98bf-e48aa665a7a1 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] rbd image 5f56b3e9-af2b-4934-8184-6257994c6b6a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:45:43 compute-1 nova_compute[225855]: 2026-01-20 14:45:43.259 225859 DEBUG nova.storage.rbd_utils [None req-370cb0b5-817d-4c64-98bf-e48aa665a7a1 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] rbd image 5f56b3e9-af2b-4934-8184-6257994c6b6a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:45:43 compute-1 nova_compute[225855]: 2026-01-20 14:45:43.262 225859 DEBUG oslo_concurrency.processutils [None req-370cb0b5-817d-4c64-98bf-e48aa665a7a1 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:45:43 compute-1 nova_compute[225855]: 2026-01-20 14:45:43.298 225859 DEBUG nova.policy [None req-370cb0b5-817d-4c64-98bf-e48aa665a7a1 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '869086208e10436c9dc96c78bee9a85d', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b683fcc0026242e28ba6d8fba638688e', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 20 14:45:43 compute-1 nova_compute[225855]: 2026-01-20 14:45:43.355 225859 DEBUG oslo_concurrency.processutils [None req-370cb0b5-817d-4c64-98bf-e48aa665a7a1 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json" returned: 0 in 0.093s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:45:43 compute-1 nova_compute[225855]: 2026-01-20 14:45:43.356 225859 DEBUG oslo_concurrency.lockutils [None req-370cb0b5-817d-4c64-98bf-e48aa665a7a1 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Acquiring lock "82d5c1918fd7c974214c7a48c1793a7a82560462" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:45:43 compute-1 nova_compute[225855]: 2026-01-20 14:45:43.356 225859 DEBUG oslo_concurrency.lockutils [None req-370cb0b5-817d-4c64-98bf-e48aa665a7a1 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:45:43 compute-1 nova_compute[225855]: 2026-01-20 14:45:43.356 225859 DEBUG oslo_concurrency.lockutils [None req-370cb0b5-817d-4c64-98bf-e48aa665a7a1 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:45:43 compute-1 nova_compute[225855]: 2026-01-20 14:45:43.380 225859 DEBUG nova.storage.rbd_utils [None req-370cb0b5-817d-4c64-98bf-e48aa665a7a1 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] rbd image 5f56b3e9-af2b-4934-8184-6257994c6b6a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:45:43 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:45:43 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:45:43 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:45:43.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:45:43 compute-1 nova_compute[225855]: 2026-01-20 14:45:43.384 225859 DEBUG oslo_concurrency.processutils [None req-370cb0b5-817d-4c64-98bf-e48aa665a7a1 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 5f56b3e9-af2b-4934-8184-6257994c6b6a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:45:43 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/1623992898' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:45:44 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:45:44 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:45:44 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:45:44.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:45:44 compute-1 ceph-mon[81775]: pgmap v1724: 321 pgs: 321 active+clean; 358 MiB data, 868 MiB used, 20 GiB / 21 GiB avail; 6.7 MiB/s rd, 8.4 MiB/s wr, 179 op/s
Jan 20 14:45:44 compute-1 nova_compute[225855]: 2026-01-20 14:45:44.595 225859 DEBUG nova.network.neutron [None req-370cb0b5-817d-4c64-98bf-e48aa665a7a1 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] [instance: 5f56b3e9-af2b-4934-8184-6257994c6b6a] Successfully created port: ec1a7a25-a60a-40c3-98bf-710c68019b24 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 20 14:45:45 compute-1 nova_compute[225855]: 2026-01-20 14:45:45.073 225859 DEBUG oslo_concurrency.processutils [None req-370cb0b5-817d-4c64-98bf-e48aa665a7a1 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 5f56b3e9-af2b-4934-8184-6257994c6b6a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.688s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:45:45 compute-1 nova_compute[225855]: 2026-01-20 14:45:45.136 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:45:45 compute-1 nova_compute[225855]: 2026-01-20 14:45:45.189 225859 DEBUG nova.storage.rbd_utils [None req-370cb0b5-817d-4c64-98bf-e48aa665a7a1 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] resizing rbd image 5f56b3e9-af2b-4934-8184-6257994c6b6a_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 20 14:45:45 compute-1 nova_compute[225855]: 2026-01-20 14:45:45.319 225859 DEBUG nova.objects.instance [None req-370cb0b5-817d-4c64-98bf-e48aa665a7a1 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Lazy-loading 'migration_context' on Instance uuid 5f56b3e9-af2b-4934-8184-6257994c6b6a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:45:45 compute-1 nova_compute[225855]: 2026-01-20 14:45:45.336 225859 DEBUG nova.virt.libvirt.driver [None req-370cb0b5-817d-4c64-98bf-e48aa665a7a1 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] [instance: 5f56b3e9-af2b-4934-8184-6257994c6b6a] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 20 14:45:45 compute-1 nova_compute[225855]: 2026-01-20 14:45:45.337 225859 DEBUG nova.virt.libvirt.driver [None req-370cb0b5-817d-4c64-98bf-e48aa665a7a1 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] [instance: 5f56b3e9-af2b-4934-8184-6257994c6b6a] Ensure instance console log exists: /var/lib/nova/instances/5f56b3e9-af2b-4934-8184-6257994c6b6a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 20 14:45:45 compute-1 nova_compute[225855]: 2026-01-20 14:45:45.338 225859 DEBUG oslo_concurrency.lockutils [None req-370cb0b5-817d-4c64-98bf-e48aa665a7a1 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:45:45 compute-1 nova_compute[225855]: 2026-01-20 14:45:45.338 225859 DEBUG oslo_concurrency.lockutils [None req-370cb0b5-817d-4c64-98bf-e48aa665a7a1 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:45:45 compute-1 nova_compute[225855]: 2026-01-20 14:45:45.338 225859 DEBUG oslo_concurrency.lockutils [None req-370cb0b5-817d-4c64-98bf-e48aa665a7a1 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:45:45 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:45:45 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:45:45 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:45:45.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:45:45 compute-1 nova_compute[225855]: 2026-01-20 14:45:45.547 225859 DEBUG nova.network.neutron [None req-370cb0b5-817d-4c64-98bf-e48aa665a7a1 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] [instance: 5f56b3e9-af2b-4934-8184-6257994c6b6a] Successfully updated port: ec1a7a25-a60a-40c3-98bf-710c68019b24 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 20 14:45:45 compute-1 nova_compute[225855]: 2026-01-20 14:45:45.569 225859 DEBUG oslo_concurrency.lockutils [None req-370cb0b5-817d-4c64-98bf-e48aa665a7a1 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Acquiring lock "refresh_cache-5f56b3e9-af2b-4934-8184-6257994c6b6a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:45:45 compute-1 nova_compute[225855]: 2026-01-20 14:45:45.570 225859 DEBUG oslo_concurrency.lockutils [None req-370cb0b5-817d-4c64-98bf-e48aa665a7a1 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Acquired lock "refresh_cache-5f56b3e9-af2b-4934-8184-6257994c6b6a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:45:45 compute-1 nova_compute[225855]: 2026-01-20 14:45:45.570 225859 DEBUG nova.network.neutron [None req-370cb0b5-817d-4c64-98bf-e48aa665a7a1 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] [instance: 5f56b3e9-af2b-4934-8184-6257994c6b6a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 14:45:45 compute-1 nova_compute[225855]: 2026-01-20 14:45:45.604 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:45:45 compute-1 nova_compute[225855]: 2026-01-20 14:45:45.729 225859 DEBUG nova.compute.manager [req-f50ed7ab-228a-4843-980d-24e38a82dc08 req-7976d171-feb5-48ca-9297-8b660b552caa 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 5f56b3e9-af2b-4934-8184-6257994c6b6a] Received event network-changed-ec1a7a25-a60a-40c3-98bf-710c68019b24 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:45:45 compute-1 nova_compute[225855]: 2026-01-20 14:45:45.730 225859 DEBUG nova.compute.manager [req-f50ed7ab-228a-4843-980d-24e38a82dc08 req-7976d171-feb5-48ca-9297-8b660b552caa 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 5f56b3e9-af2b-4934-8184-6257994c6b6a] Refreshing instance network info cache due to event network-changed-ec1a7a25-a60a-40c3-98bf-710c68019b24. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 14:45:45 compute-1 nova_compute[225855]: 2026-01-20 14:45:45.731 225859 DEBUG oslo_concurrency.lockutils [req-f50ed7ab-228a-4843-980d-24e38a82dc08 req-7976d171-feb5-48ca-9297-8b660b552caa 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-5f56b3e9-af2b-4934-8184-6257994c6b6a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:45:45 compute-1 nova_compute[225855]: 2026-01-20 14:45:45.869 225859 DEBUG nova.network.neutron [None req-370cb0b5-817d-4c64-98bf-e48aa665a7a1 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] [instance: 5f56b3e9-af2b-4934-8184-6257994c6b6a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 20 14:45:46 compute-1 ceph-mon[81775]: pgmap v1725: 321 pgs: 321 active+clean; 380 MiB data, 881 MiB used, 20 GiB / 21 GiB avail; 4.9 MiB/s rd, 6.4 MiB/s wr, 117 op/s
Jan 20 14:45:46 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:45:46 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:45:46 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:45:46.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:45:46 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e235 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:45:47 compute-1 nova_compute[225855]: 2026-01-20 14:45:47.238 225859 DEBUG nova.network.neutron [None req-370cb0b5-817d-4c64-98bf-e48aa665a7a1 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] [instance: 5f56b3e9-af2b-4934-8184-6257994c6b6a] Updating instance_info_cache with network_info: [{"id": "ec1a7a25-a60a-40c3-98bf-710c68019b24", "address": "fa:16:3e:76:c2:c8", "network": {"id": "a19e9d1a-864f-41ee-bdea-188e65973ea5", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-916311998-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b683fcc0026242e28ba6d8fba638688e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec1a7a25-a6", "ovs_interfaceid": "ec1a7a25-a60a-40c3-98bf-710c68019b24", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:45:47 compute-1 nova_compute[225855]: 2026-01-20 14:45:47.262 225859 DEBUG oslo_concurrency.lockutils [None req-370cb0b5-817d-4c64-98bf-e48aa665a7a1 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Releasing lock "refresh_cache-5f56b3e9-af2b-4934-8184-6257994c6b6a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:45:47 compute-1 nova_compute[225855]: 2026-01-20 14:45:47.263 225859 DEBUG nova.compute.manager [None req-370cb0b5-817d-4c64-98bf-e48aa665a7a1 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] [instance: 5f56b3e9-af2b-4934-8184-6257994c6b6a] Instance network_info: |[{"id": "ec1a7a25-a60a-40c3-98bf-710c68019b24", "address": "fa:16:3e:76:c2:c8", "network": {"id": "a19e9d1a-864f-41ee-bdea-188e65973ea5", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-916311998-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b683fcc0026242e28ba6d8fba638688e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec1a7a25-a6", "ovs_interfaceid": "ec1a7a25-a60a-40c3-98bf-710c68019b24", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 20 14:45:47 compute-1 nova_compute[225855]: 2026-01-20 14:45:47.263 225859 DEBUG oslo_concurrency.lockutils [req-f50ed7ab-228a-4843-980d-24e38a82dc08 req-7976d171-feb5-48ca-9297-8b660b552caa 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-5f56b3e9-af2b-4934-8184-6257994c6b6a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:45:47 compute-1 nova_compute[225855]: 2026-01-20 14:45:47.264 225859 DEBUG nova.network.neutron [req-f50ed7ab-228a-4843-980d-24e38a82dc08 req-7976d171-feb5-48ca-9297-8b660b552caa 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 5f56b3e9-af2b-4934-8184-6257994c6b6a] Refreshing network info cache for port ec1a7a25-a60a-40c3-98bf-710c68019b24 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 14:45:47 compute-1 nova_compute[225855]: 2026-01-20 14:45:47.269 225859 DEBUG nova.virt.libvirt.driver [None req-370cb0b5-817d-4c64-98bf-e48aa665a7a1 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] [instance: 5f56b3e9-af2b-4934-8184-6257994c6b6a] Start _get_guest_xml network_info=[{"id": "ec1a7a25-a60a-40c3-98bf-710c68019b24", "address": "fa:16:3e:76:c2:c8", "network": {"id": "a19e9d1a-864f-41ee-bdea-188e65973ea5", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-916311998-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b683fcc0026242e28ba6d8fba638688e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec1a7a25-a6", "ovs_interfaceid": "ec1a7a25-a60a-40c3-98bf-710c68019b24", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'device_type': 'disk', 'encryption_options': None, 'size': 0, 'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'guest_format': None, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'image_id': 'a32b3e07-16d8-46fd-9a7b-c242c432fcf9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 20 14:45:47 compute-1 nova_compute[225855]: 2026-01-20 14:45:47.275 225859 WARNING nova.virt.libvirt.driver [None req-370cb0b5-817d-4c64-98bf-e48aa665a7a1 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 14:45:47 compute-1 nova_compute[225855]: 2026-01-20 14:45:47.283 225859 DEBUG nova.virt.libvirt.host [None req-370cb0b5-817d-4c64-98bf-e48aa665a7a1 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 20 14:45:47 compute-1 nova_compute[225855]: 2026-01-20 14:45:47.284 225859 DEBUG nova.virt.libvirt.host [None req-370cb0b5-817d-4c64-98bf-e48aa665a7a1 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 20 14:45:47 compute-1 nova_compute[225855]: 2026-01-20 14:45:47.289 225859 DEBUG nova.virt.libvirt.host [None req-370cb0b5-817d-4c64-98bf-e48aa665a7a1 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 20 14:45:47 compute-1 nova_compute[225855]: 2026-01-20 14:45:47.290 225859 DEBUG nova.virt.libvirt.host [None req-370cb0b5-817d-4c64-98bf-e48aa665a7a1 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 20 14:45:47 compute-1 nova_compute[225855]: 2026-01-20 14:45:47.291 225859 DEBUG nova.virt.libvirt.driver [None req-370cb0b5-817d-4c64-98bf-e48aa665a7a1 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 20 14:45:47 compute-1 nova_compute[225855]: 2026-01-20 14:45:47.291 225859 DEBUG nova.virt.hardware [None req-370cb0b5-817d-4c64-98bf-e48aa665a7a1 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-20T14:21:55Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='522deaab-a741-4dbb-932d-d8b13a211c33',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 20 14:45:47 compute-1 nova_compute[225855]: 2026-01-20 14:45:47.291 225859 DEBUG nova.virt.hardware [None req-370cb0b5-817d-4c64-98bf-e48aa665a7a1 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 20 14:45:47 compute-1 nova_compute[225855]: 2026-01-20 14:45:47.291 225859 DEBUG nova.virt.hardware [None req-370cb0b5-817d-4c64-98bf-e48aa665a7a1 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 20 14:45:47 compute-1 nova_compute[225855]: 2026-01-20 14:45:47.292 225859 DEBUG nova.virt.hardware [None req-370cb0b5-817d-4c64-98bf-e48aa665a7a1 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 20 14:45:47 compute-1 nova_compute[225855]: 2026-01-20 14:45:47.292 225859 DEBUG nova.virt.hardware [None req-370cb0b5-817d-4c64-98bf-e48aa665a7a1 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 20 14:45:47 compute-1 nova_compute[225855]: 2026-01-20 14:45:47.292 225859 DEBUG nova.virt.hardware [None req-370cb0b5-817d-4c64-98bf-e48aa665a7a1 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 20 14:45:47 compute-1 nova_compute[225855]: 2026-01-20 14:45:47.292 225859 DEBUG nova.virt.hardware [None req-370cb0b5-817d-4c64-98bf-e48aa665a7a1 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 20 14:45:47 compute-1 nova_compute[225855]: 2026-01-20 14:45:47.292 225859 DEBUG nova.virt.hardware [None req-370cb0b5-817d-4c64-98bf-e48aa665a7a1 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 20 14:45:47 compute-1 nova_compute[225855]: 2026-01-20 14:45:47.292 225859 DEBUG nova.virt.hardware [None req-370cb0b5-817d-4c64-98bf-e48aa665a7a1 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 20 14:45:47 compute-1 nova_compute[225855]: 2026-01-20 14:45:47.293 225859 DEBUG nova.virt.hardware [None req-370cb0b5-817d-4c64-98bf-e48aa665a7a1 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 20 14:45:47 compute-1 nova_compute[225855]: 2026-01-20 14:45:47.293 225859 DEBUG nova.virt.hardware [None req-370cb0b5-817d-4c64-98bf-e48aa665a7a1 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 20 14:45:47 compute-1 nova_compute[225855]: 2026-01-20 14:45:47.295 225859 DEBUG oslo_concurrency.processutils [None req-370cb0b5-817d-4c64-98bf-e48aa665a7a1 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:45:47 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:45:47 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:45:47 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:45:47.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:45:47 compute-1 sudo[260239]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:45:47 compute-1 sudo[260239]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:45:47 compute-1 sudo[260239]: pam_unix(sudo:session): session closed for user root
Jan 20 14:45:47 compute-1 sudo[260264]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:45:47 compute-1 sudo[260264]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:45:47 compute-1 sudo[260264]: pam_unix(sudo:session): session closed for user root
Jan 20 14:45:47 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:45:47 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3346481726' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:45:47 compute-1 nova_compute[225855]: 2026-01-20 14:45:47.772 225859 DEBUG oslo_concurrency.processutils [None req-370cb0b5-817d-4c64-98bf-e48aa665a7a1 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:45:47 compute-1 nova_compute[225855]: 2026-01-20 14:45:47.799 225859 DEBUG nova.storage.rbd_utils [None req-370cb0b5-817d-4c64-98bf-e48aa665a7a1 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] rbd image 5f56b3e9-af2b-4934-8184-6257994c6b6a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:45:47 compute-1 nova_compute[225855]: 2026-01-20 14:45:47.803 225859 DEBUG oslo_concurrency.processutils [None req-370cb0b5-817d-4c64-98bf-e48aa665a7a1 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:45:47 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/3346481726' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:45:47 compute-1 ceph-mon[81775]: pgmap v1726: 321 pgs: 321 active+clean; 399 MiB data, 885 MiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 5.7 MiB/s wr, 115 op/s
Jan 20 14:45:48 compute-1 nova_compute[225855]: 2026-01-20 14:45:48.109 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:45:48 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:45:48 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/703686633' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:45:48 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:45:48 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:45:48 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:45:48.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:45:48 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e236 e236: 3 total, 3 up, 3 in
Jan 20 14:45:48 compute-1 nova_compute[225855]: 2026-01-20 14:45:48.538 225859 DEBUG oslo_concurrency.processutils [None req-370cb0b5-817d-4c64-98bf-e48aa665a7a1 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.735s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:45:48 compute-1 nova_compute[225855]: 2026-01-20 14:45:48.540 225859 DEBUG nova.virt.libvirt.vif [None req-370cb0b5-817d-4c64-98bf-e48aa665a7a1 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T14:45:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1159529074',display_name='tempest-tempest.common.compute-instance-1159529074',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1159529074',id=95,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPQdfZ1kte1uwABu8Yt6GDBCbfLz/7DDRluLfcLvQPu0p1+5pmkgwEfXHuElB+zR6CnPfotcAwIhafKnCBnLfMP+a/6KQqDXsYrSlHbKZFIppFb1eVRM7RProDBMZxNI4Q==',key_name='tempest-keypair-1773413892',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b683fcc0026242e28ba6d8fba638688e',ramdisk_id='',reservation_id='r-0vpr4wvx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherA-967087071',owner_user_name='tempest-ServerActionsTestOtherA-967087071-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T14:45:43Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='869086208e10436c9dc96c78bee9a85d',uuid=5f56b3e9-af2b-4934-8184-6257994c6b6a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ec1a7a25-a60a-40c3-98bf-710c68019b24", "address": "fa:16:3e:76:c2:c8", "network": {"id": "a19e9d1a-864f-41ee-bdea-188e65973ea5", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-916311998-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": 
"fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b683fcc0026242e28ba6d8fba638688e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec1a7a25-a6", "ovs_interfaceid": "ec1a7a25-a60a-40c3-98bf-710c68019b24", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 20 14:45:48 compute-1 nova_compute[225855]: 2026-01-20 14:45:48.541 225859 DEBUG nova.network.os_vif_util [None req-370cb0b5-817d-4c64-98bf-e48aa665a7a1 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Converting VIF {"id": "ec1a7a25-a60a-40c3-98bf-710c68019b24", "address": "fa:16:3e:76:c2:c8", "network": {"id": "a19e9d1a-864f-41ee-bdea-188e65973ea5", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-916311998-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b683fcc0026242e28ba6d8fba638688e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec1a7a25-a6", "ovs_interfaceid": "ec1a7a25-a60a-40c3-98bf-710c68019b24", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:45:48 compute-1 nova_compute[225855]: 2026-01-20 14:45:48.541 225859 DEBUG nova.network.os_vif_util [None req-370cb0b5-817d-4c64-98bf-e48aa665a7a1 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:76:c2:c8,bridge_name='br-int',has_traffic_filtering=True,id=ec1a7a25-a60a-40c3-98bf-710c68019b24,network=Network(a19e9d1a-864f-41ee-bdea-188e65973ea5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapec1a7a25-a6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:45:48 compute-1 nova_compute[225855]: 2026-01-20 14:45:48.543 225859 DEBUG nova.objects.instance [None req-370cb0b5-817d-4c64-98bf-e48aa665a7a1 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Lazy-loading 'pci_devices' on Instance uuid 5f56b3e9-af2b-4934-8184-6257994c6b6a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:45:48 compute-1 nova_compute[225855]: 2026-01-20 14:45:48.557 225859 DEBUG nova.virt.libvirt.driver [None req-370cb0b5-817d-4c64-98bf-e48aa665a7a1 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] [instance: 5f56b3e9-af2b-4934-8184-6257994c6b6a] End _get_guest_xml xml=<domain type="kvm">
Jan 20 14:45:48 compute-1 nova_compute[225855]:   <uuid>5f56b3e9-af2b-4934-8184-6257994c6b6a</uuid>
Jan 20 14:45:48 compute-1 nova_compute[225855]:   <name>instance-0000005f</name>
Jan 20 14:45:48 compute-1 nova_compute[225855]:   <memory>131072</memory>
Jan 20 14:45:48 compute-1 nova_compute[225855]:   <vcpu>1</vcpu>
Jan 20 14:45:48 compute-1 nova_compute[225855]:   <metadata>
Jan 20 14:45:48 compute-1 nova_compute[225855]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 14:45:48 compute-1 nova_compute[225855]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 14:45:48 compute-1 nova_compute[225855]:       <nova:name>tempest-tempest.common.compute-instance-1159529074</nova:name>
Jan 20 14:45:48 compute-1 nova_compute[225855]:       <nova:creationTime>2026-01-20 14:45:47</nova:creationTime>
Jan 20 14:45:48 compute-1 nova_compute[225855]:       <nova:flavor name="m1.nano">
Jan 20 14:45:48 compute-1 nova_compute[225855]:         <nova:memory>128</nova:memory>
Jan 20 14:45:48 compute-1 nova_compute[225855]:         <nova:disk>1</nova:disk>
Jan 20 14:45:48 compute-1 nova_compute[225855]:         <nova:swap>0</nova:swap>
Jan 20 14:45:48 compute-1 nova_compute[225855]:         <nova:ephemeral>0</nova:ephemeral>
Jan 20 14:45:48 compute-1 nova_compute[225855]:         <nova:vcpus>1</nova:vcpus>
Jan 20 14:45:48 compute-1 nova_compute[225855]:       </nova:flavor>
Jan 20 14:45:48 compute-1 nova_compute[225855]:       <nova:owner>
Jan 20 14:45:48 compute-1 nova_compute[225855]:         <nova:user uuid="869086208e10436c9dc96c78bee9a85d">tempest-ServerActionsTestOtherA-967087071-project-member</nova:user>
Jan 20 14:45:48 compute-1 nova_compute[225855]:         <nova:project uuid="b683fcc0026242e28ba6d8fba638688e">tempest-ServerActionsTestOtherA-967087071</nova:project>
Jan 20 14:45:48 compute-1 nova_compute[225855]:       </nova:owner>
Jan 20 14:45:48 compute-1 nova_compute[225855]:       <nova:root type="image" uuid="a32b3e07-16d8-46fd-9a7b-c242c432fcf9"/>
Jan 20 14:45:48 compute-1 nova_compute[225855]:       <nova:ports>
Jan 20 14:45:48 compute-1 nova_compute[225855]:         <nova:port uuid="ec1a7a25-a60a-40c3-98bf-710c68019b24">
Jan 20 14:45:48 compute-1 nova_compute[225855]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Jan 20 14:45:48 compute-1 nova_compute[225855]:         </nova:port>
Jan 20 14:45:48 compute-1 nova_compute[225855]:       </nova:ports>
Jan 20 14:45:48 compute-1 nova_compute[225855]:     </nova:instance>
Jan 20 14:45:48 compute-1 nova_compute[225855]:   </metadata>
Jan 20 14:45:48 compute-1 nova_compute[225855]:   <sysinfo type="smbios">
Jan 20 14:45:48 compute-1 nova_compute[225855]:     <system>
Jan 20 14:45:48 compute-1 nova_compute[225855]:       <entry name="manufacturer">RDO</entry>
Jan 20 14:45:48 compute-1 nova_compute[225855]:       <entry name="product">OpenStack Compute</entry>
Jan 20 14:45:48 compute-1 nova_compute[225855]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 14:45:48 compute-1 nova_compute[225855]:       <entry name="serial">5f56b3e9-af2b-4934-8184-6257994c6b6a</entry>
Jan 20 14:45:48 compute-1 nova_compute[225855]:       <entry name="uuid">5f56b3e9-af2b-4934-8184-6257994c6b6a</entry>
Jan 20 14:45:48 compute-1 nova_compute[225855]:       <entry name="family">Virtual Machine</entry>
Jan 20 14:45:48 compute-1 nova_compute[225855]:     </system>
Jan 20 14:45:48 compute-1 nova_compute[225855]:   </sysinfo>
Jan 20 14:45:48 compute-1 nova_compute[225855]:   <os>
Jan 20 14:45:48 compute-1 nova_compute[225855]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 20 14:45:48 compute-1 nova_compute[225855]:     <boot dev="hd"/>
Jan 20 14:45:48 compute-1 nova_compute[225855]:     <smbios mode="sysinfo"/>
Jan 20 14:45:48 compute-1 nova_compute[225855]:   </os>
Jan 20 14:45:48 compute-1 nova_compute[225855]:   <features>
Jan 20 14:45:48 compute-1 nova_compute[225855]:     <acpi/>
Jan 20 14:45:48 compute-1 nova_compute[225855]:     <apic/>
Jan 20 14:45:48 compute-1 nova_compute[225855]:     <vmcoreinfo/>
Jan 20 14:45:48 compute-1 nova_compute[225855]:   </features>
Jan 20 14:45:48 compute-1 nova_compute[225855]:   <clock offset="utc">
Jan 20 14:45:48 compute-1 nova_compute[225855]:     <timer name="pit" tickpolicy="delay"/>
Jan 20 14:45:48 compute-1 nova_compute[225855]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 20 14:45:48 compute-1 nova_compute[225855]:     <timer name="hpet" present="no"/>
Jan 20 14:45:48 compute-1 nova_compute[225855]:   </clock>
Jan 20 14:45:48 compute-1 nova_compute[225855]:   <cpu mode="custom" match="exact">
Jan 20 14:45:48 compute-1 nova_compute[225855]:     <model>Nehalem</model>
Jan 20 14:45:48 compute-1 nova_compute[225855]:     <topology sockets="1" cores="1" threads="1"/>
Jan 20 14:45:48 compute-1 nova_compute[225855]:   </cpu>
Jan 20 14:45:48 compute-1 nova_compute[225855]:   <devices>
Jan 20 14:45:48 compute-1 nova_compute[225855]:     <disk type="network" device="disk">
Jan 20 14:45:48 compute-1 nova_compute[225855]:       <driver type="raw" cache="none"/>
Jan 20 14:45:48 compute-1 nova_compute[225855]:       <source protocol="rbd" name="vms/5f56b3e9-af2b-4934-8184-6257994c6b6a_disk">
Jan 20 14:45:48 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:45:48 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:45:48 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:45:48 compute-1 nova_compute[225855]:       </source>
Jan 20 14:45:48 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 14:45:48 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:45:48 compute-1 nova_compute[225855]:       </auth>
Jan 20 14:45:48 compute-1 nova_compute[225855]:       <target dev="vda" bus="virtio"/>
Jan 20 14:45:48 compute-1 nova_compute[225855]:     </disk>
Jan 20 14:45:48 compute-1 nova_compute[225855]:     <disk type="network" device="cdrom">
Jan 20 14:45:48 compute-1 nova_compute[225855]:       <driver type="raw" cache="none"/>
Jan 20 14:45:48 compute-1 nova_compute[225855]:       <source protocol="rbd" name="vms/5f56b3e9-af2b-4934-8184-6257994c6b6a_disk.config">
Jan 20 14:45:48 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:45:48 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:45:48 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:45:48 compute-1 nova_compute[225855]:       </source>
Jan 20 14:45:48 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 14:45:48 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:45:48 compute-1 nova_compute[225855]:       </auth>
Jan 20 14:45:48 compute-1 nova_compute[225855]:       <target dev="sda" bus="sata"/>
Jan 20 14:45:48 compute-1 nova_compute[225855]:     </disk>
Jan 20 14:45:48 compute-1 nova_compute[225855]:     <interface type="ethernet">
Jan 20 14:45:48 compute-1 nova_compute[225855]:       <mac address="fa:16:3e:76:c2:c8"/>
Jan 20 14:45:48 compute-1 nova_compute[225855]:       <model type="virtio"/>
Jan 20 14:45:48 compute-1 nova_compute[225855]:       <driver name="vhost" rx_queue_size="512"/>
Jan 20 14:45:48 compute-1 nova_compute[225855]:       <mtu size="1442"/>
Jan 20 14:45:48 compute-1 nova_compute[225855]:       <target dev="tapec1a7a25-a6"/>
Jan 20 14:45:48 compute-1 nova_compute[225855]:     </interface>
Jan 20 14:45:48 compute-1 nova_compute[225855]:     <serial type="pty">
Jan 20 14:45:48 compute-1 nova_compute[225855]:       <log file="/var/lib/nova/instances/5f56b3e9-af2b-4934-8184-6257994c6b6a/console.log" append="off"/>
Jan 20 14:45:48 compute-1 nova_compute[225855]:     </serial>
Jan 20 14:45:48 compute-1 nova_compute[225855]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 20 14:45:48 compute-1 nova_compute[225855]:     <video>
Jan 20 14:45:48 compute-1 nova_compute[225855]:       <model type="virtio"/>
Jan 20 14:45:48 compute-1 nova_compute[225855]:     </video>
Jan 20 14:45:48 compute-1 nova_compute[225855]:     <input type="tablet" bus="usb"/>
Jan 20 14:45:48 compute-1 nova_compute[225855]:     <rng model="virtio">
Jan 20 14:45:48 compute-1 nova_compute[225855]:       <backend model="random">/dev/urandom</backend>
Jan 20 14:45:48 compute-1 nova_compute[225855]:     </rng>
Jan 20 14:45:48 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root"/>
Jan 20 14:45:48 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:45:48 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:45:48 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:45:48 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:45:48 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:45:48 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:45:48 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:45:48 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:45:48 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:45:48 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:45:48 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:45:48 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:45:48 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:45:48 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:45:48 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:45:48 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:45:48 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:45:48 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:45:48 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:45:48 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:45:48 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:45:48 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:45:48 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:45:48 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:45:48 compute-1 nova_compute[225855]:     <controller type="usb" index="0"/>
Jan 20 14:45:48 compute-1 nova_compute[225855]:     <memballoon model="virtio">
Jan 20 14:45:48 compute-1 nova_compute[225855]:       <stats period="10"/>
Jan 20 14:45:48 compute-1 nova_compute[225855]:     </memballoon>
Jan 20 14:45:48 compute-1 nova_compute[225855]:   </devices>
Jan 20 14:45:48 compute-1 nova_compute[225855]: </domain>
Jan 20 14:45:48 compute-1 nova_compute[225855]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 20 14:45:48 compute-1 nova_compute[225855]: 2026-01-20 14:45:48.558 225859 DEBUG nova.compute.manager [None req-370cb0b5-817d-4c64-98bf-e48aa665a7a1 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] [instance: 5f56b3e9-af2b-4934-8184-6257994c6b6a] Preparing to wait for external event network-vif-plugged-ec1a7a25-a60a-40c3-98bf-710c68019b24 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 20 14:45:48 compute-1 nova_compute[225855]: 2026-01-20 14:45:48.559 225859 DEBUG oslo_concurrency.lockutils [None req-370cb0b5-817d-4c64-98bf-e48aa665a7a1 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Acquiring lock "5f56b3e9-af2b-4934-8184-6257994c6b6a-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:45:48 compute-1 nova_compute[225855]: 2026-01-20 14:45:48.559 225859 DEBUG oslo_concurrency.lockutils [None req-370cb0b5-817d-4c64-98bf-e48aa665a7a1 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Lock "5f56b3e9-af2b-4934-8184-6257994c6b6a-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:45:48 compute-1 nova_compute[225855]: 2026-01-20 14:45:48.559 225859 DEBUG oslo_concurrency.lockutils [None req-370cb0b5-817d-4c64-98bf-e48aa665a7a1 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Lock "5f56b3e9-af2b-4934-8184-6257994c6b6a-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:45:48 compute-1 nova_compute[225855]: 2026-01-20 14:45:48.560 225859 DEBUG nova.virt.libvirt.vif [None req-370cb0b5-817d-4c64-98bf-e48aa665a7a1 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T14:45:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1159529074',display_name='tempest-tempest.common.compute-instance-1159529074',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1159529074',id=95,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPQdfZ1kte1uwABu8Yt6GDBCbfLz/7DDRluLfcLvQPu0p1+5pmkgwEfXHuElB+zR6CnPfotcAwIhafKnCBnLfMP+a/6KQqDXsYrSlHbKZFIppFb1eVRM7RProDBMZxNI4Q==',key_name='tempest-keypair-1773413892',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b683fcc0026242e28ba6d8fba638688e',ramdisk_id='',reservation_id='r-0vpr4wvx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherA-967087071',owner_user_name='tempest-ServerActionsTestOtherA-967087071-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T14:45:43Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='869086208e10436c9dc96c78bee9a85d',uuid=5f56b3e9-af2b-4934-8184-6257994c6b6a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ec1a7a25-a60a-40c3-98bf-710c68019b24", "address": "fa:16:3e:76:c2:c8", "network": {"id": "a19e9d1a-864f-41ee-bdea-188e65973ea5", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-916311998-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b683fcc0026242e28ba6d8fba638688e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec1a7a25-a6", "ovs_interfaceid": "ec1a7a25-a60a-40c3-98bf-710c68019b24", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 20 14:45:48 compute-1 nova_compute[225855]: 2026-01-20 14:45:48.560 225859 DEBUG nova.network.os_vif_util [None req-370cb0b5-817d-4c64-98bf-e48aa665a7a1 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Converting VIF {"id": "ec1a7a25-a60a-40c3-98bf-710c68019b24", "address": "fa:16:3e:76:c2:c8", "network": {"id": "a19e9d1a-864f-41ee-bdea-188e65973ea5", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-916311998-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b683fcc0026242e28ba6d8fba638688e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec1a7a25-a6", "ovs_interfaceid": "ec1a7a25-a60a-40c3-98bf-710c68019b24", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:45:48 compute-1 nova_compute[225855]: 2026-01-20 14:45:48.561 225859 DEBUG nova.network.os_vif_util [None req-370cb0b5-817d-4c64-98bf-e48aa665a7a1 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:76:c2:c8,bridge_name='br-int',has_traffic_filtering=True,id=ec1a7a25-a60a-40c3-98bf-710c68019b24,network=Network(a19e9d1a-864f-41ee-bdea-188e65973ea5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapec1a7a25-a6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:45:48 compute-1 nova_compute[225855]: 2026-01-20 14:45:48.561 225859 DEBUG os_vif [None req-370cb0b5-817d-4c64-98bf-e48aa665a7a1 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:76:c2:c8,bridge_name='br-int',has_traffic_filtering=True,id=ec1a7a25-a60a-40c3-98bf-710c68019b24,network=Network(a19e9d1a-864f-41ee-bdea-188e65973ea5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapec1a7a25-a6') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 20 14:45:48 compute-1 nova_compute[225855]: 2026-01-20 14:45:48.562 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:45:48 compute-1 nova_compute[225855]: 2026-01-20 14:45:48.562 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:45:48 compute-1 nova_compute[225855]: 2026-01-20 14:45:48.562 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:45:48 compute-1 nova_compute[225855]: 2026-01-20 14:45:48.565 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:45:48 compute-1 nova_compute[225855]: 2026-01-20 14:45:48.566 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapec1a7a25-a6, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:45:48 compute-1 nova_compute[225855]: 2026-01-20 14:45:48.566 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapec1a7a25-a6, col_values=(('external_ids', {'iface-id': 'ec1a7a25-a60a-40c3-98bf-710c68019b24', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:76:c2:c8', 'vm-uuid': '5f56b3e9-af2b-4934-8184-6257994c6b6a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:45:48 compute-1 nova_compute[225855]: 2026-01-20 14:45:48.568 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:45:48 compute-1 NetworkManager[49104]: <info>  [1768920348.5695] manager: (tapec1a7a25-a6): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/140)
Jan 20 14:45:48 compute-1 nova_compute[225855]: 2026-01-20 14:45:48.571 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 14:45:48 compute-1 nova_compute[225855]: 2026-01-20 14:45:48.577 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:45:48 compute-1 nova_compute[225855]: 2026-01-20 14:45:48.577 225859 INFO os_vif [None req-370cb0b5-817d-4c64-98bf-e48aa665a7a1 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:76:c2:c8,bridge_name='br-int',has_traffic_filtering=True,id=ec1a7a25-a60a-40c3-98bf-710c68019b24,network=Network(a19e9d1a-864f-41ee-bdea-188e65973ea5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapec1a7a25-a6')
Jan 20 14:45:48 compute-1 nova_compute[225855]: 2026-01-20 14:45:48.679 225859 DEBUG nova.virt.libvirt.driver [None req-370cb0b5-817d-4c64-98bf-e48aa665a7a1 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:45:48 compute-1 nova_compute[225855]: 2026-01-20 14:45:48.679 225859 DEBUG nova.virt.libvirt.driver [None req-370cb0b5-817d-4c64-98bf-e48aa665a7a1 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:45:48 compute-1 nova_compute[225855]: 2026-01-20 14:45:48.679 225859 DEBUG nova.virt.libvirt.driver [None req-370cb0b5-817d-4c64-98bf-e48aa665a7a1 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] No VIF found with MAC fa:16:3e:76:c2:c8, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 20 14:45:48 compute-1 nova_compute[225855]: 2026-01-20 14:45:48.680 225859 INFO nova.virt.libvirt.driver [None req-370cb0b5-817d-4c64-98bf-e48aa665a7a1 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] [instance: 5f56b3e9-af2b-4934-8184-6257994c6b6a] Using config drive
Jan 20 14:45:48 compute-1 nova_compute[225855]: 2026-01-20 14:45:48.751 225859 DEBUG nova.storage.rbd_utils [None req-370cb0b5-817d-4c64-98bf-e48aa665a7a1 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] rbd image 5f56b3e9-af2b-4934-8184-6257994c6b6a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:45:49 compute-1 nova_compute[225855]: 2026-01-20 14:45:49.066 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:45:49 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:45:49 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:45:49 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:45:49.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:45:49 compute-1 nova_compute[225855]: 2026-01-20 14:45:49.417 225859 INFO nova.virt.libvirt.driver [None req-370cb0b5-817d-4c64-98bf-e48aa665a7a1 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] [instance: 5f56b3e9-af2b-4934-8184-6257994c6b6a] Creating config drive at /var/lib/nova/instances/5f56b3e9-af2b-4934-8184-6257994c6b6a/disk.config
Jan 20 14:45:49 compute-1 nova_compute[225855]: 2026-01-20 14:45:49.423 225859 DEBUG oslo_concurrency.processutils [None req-370cb0b5-817d-4c64-98bf-e48aa665a7a1 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/5f56b3e9-af2b-4934-8184-6257994c6b6a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp1zjenn5r execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:45:49 compute-1 nova_compute[225855]: 2026-01-20 14:45:49.457 225859 DEBUG nova.network.neutron [req-f50ed7ab-228a-4843-980d-24e38a82dc08 req-7976d171-feb5-48ca-9297-8b660b552caa 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 5f56b3e9-af2b-4934-8184-6257994c6b6a] Updated VIF entry in instance network info cache for port ec1a7a25-a60a-40c3-98bf-710c68019b24. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 14:45:49 compute-1 nova_compute[225855]: 2026-01-20 14:45:49.458 225859 DEBUG nova.network.neutron [req-f50ed7ab-228a-4843-980d-24e38a82dc08 req-7976d171-feb5-48ca-9297-8b660b552caa 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 5f56b3e9-af2b-4934-8184-6257994c6b6a] Updating instance_info_cache with network_info: [{"id": "ec1a7a25-a60a-40c3-98bf-710c68019b24", "address": "fa:16:3e:76:c2:c8", "network": {"id": "a19e9d1a-864f-41ee-bdea-188e65973ea5", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-916311998-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b683fcc0026242e28ba6d8fba638688e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec1a7a25-a6", "ovs_interfaceid": "ec1a7a25-a60a-40c3-98bf-710c68019b24", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:45:49 compute-1 nova_compute[225855]: 2026-01-20 14:45:49.472 225859 DEBUG oslo_concurrency.lockutils [req-f50ed7ab-228a-4843-980d-24e38a82dc08 req-7976d171-feb5-48ca-9297-8b660b552caa 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-5f56b3e9-af2b-4934-8184-6257994c6b6a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:45:49 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/703686633' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:45:49 compute-1 ceph-mon[81775]: osdmap e236: 3 total, 3 up, 3 in
Jan 20 14:45:49 compute-1 nova_compute[225855]: 2026-01-20 14:45:49.575 225859 DEBUG oslo_concurrency.processutils [None req-370cb0b5-817d-4c64-98bf-e48aa665a7a1 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/5f56b3e9-af2b-4934-8184-6257994c6b6a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp1zjenn5r" returned: 0 in 0.152s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:45:49 compute-1 nova_compute[225855]: 2026-01-20 14:45:49.794 225859 DEBUG nova.storage.rbd_utils [None req-370cb0b5-817d-4c64-98bf-e48aa665a7a1 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] rbd image 5f56b3e9-af2b-4934-8184-6257994c6b6a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:45:49 compute-1 nova_compute[225855]: 2026-01-20 14:45:49.800 225859 DEBUG oslo_concurrency.processutils [None req-370cb0b5-817d-4c64-98bf-e48aa665a7a1 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/5f56b3e9-af2b-4934-8184-6257994c6b6a/disk.config 5f56b3e9-af2b-4934-8184-6257994c6b6a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:45:50 compute-1 ceph-mon[81775]: pgmap v1728: 321 pgs: 321 active+clean; 376 MiB data, 872 MiB used, 20 GiB / 21 GiB avail; 30 KiB/s rd, 2.2 MiB/s wr, 49 op/s
Jan 20 14:45:50 compute-1 nova_compute[225855]: 2026-01-20 14:45:50.355 225859 DEBUG oslo_concurrency.processutils [None req-370cb0b5-817d-4c64-98bf-e48aa665a7a1 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/5f56b3e9-af2b-4934-8184-6257994c6b6a/disk.config 5f56b3e9-af2b-4934-8184-6257994c6b6a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.555s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:45:50 compute-1 nova_compute[225855]: 2026-01-20 14:45:50.357 225859 INFO nova.virt.libvirt.driver [None req-370cb0b5-817d-4c64-98bf-e48aa665a7a1 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] [instance: 5f56b3e9-af2b-4934-8184-6257994c6b6a] Deleting local config drive /var/lib/nova/instances/5f56b3e9-af2b-4934-8184-6257994c6b6a/disk.config because it was imported into RBD.
Jan 20 14:45:50 compute-1 kernel: tapec1a7a25-a6: entered promiscuous mode
Jan 20 14:45:50 compute-1 NetworkManager[49104]: <info>  [1768920350.4074] manager: (tapec1a7a25-a6): new Tun device (/org/freedesktop/NetworkManager/Devices/141)
Jan 20 14:45:50 compute-1 ovn_controller[130490]: 2026-01-20T14:45:50Z|00326|binding|INFO|Claiming lport ec1a7a25-a60a-40c3-98bf-710c68019b24 for this chassis.
Jan 20 14:45:50 compute-1 ovn_controller[130490]: 2026-01-20T14:45:50Z|00327|binding|INFO|ec1a7a25-a60a-40c3-98bf-710c68019b24: Claiming fa:16:3e:76:c2:c8 10.100.0.5
Jan 20 14:45:50 compute-1 nova_compute[225855]: 2026-01-20 14:45:50.411 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:45:50 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:45:50.418 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:76:c2:c8 10.100.0.5'], port_security=['fa:16:3e:76:c2:c8 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '5f56b3e9-af2b-4934-8184-6257994c6b6a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a19e9d1a-864f-41ee-bdea-188e65973ea5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b683fcc0026242e28ba6d8fba638688e', 'neutron:revision_number': '2', 'neutron:security_group_ids': '357f03e7-2463-4315-adee-f60d9b0f5500', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=361f9a69-30a6-4be4-89ad-2a8f92877af2, chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], logical_port=ec1a7a25-a60a-40c3-98bf-710c68019b24) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:45:50 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:45:50.420 140354 INFO neutron.agent.ovn.metadata.agent [-] Port ec1a7a25-a60a-40c3-98bf-710c68019b24 in datapath a19e9d1a-864f-41ee-bdea-188e65973ea5 bound to our chassis
Jan 20 14:45:50 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:45:50.422 140354 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a19e9d1a-864f-41ee-bdea-188e65973ea5
Jan 20 14:45:50 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:45:50.440 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[cf0a1642-f664-4186-b587-1098143f1243]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:45:50 compute-1 systemd-udevd[260406]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 14:45:50 compute-1 systemd-machined[194361]: New machine qemu-38-instance-0000005f.
Jan 20 14:45:50 compute-1 ovn_controller[130490]: 2026-01-20T14:45:50Z|00328|binding|INFO|Setting lport ec1a7a25-a60a-40c3-98bf-710c68019b24 ovn-installed in OVS
Jan 20 14:45:50 compute-1 ovn_controller[130490]: 2026-01-20T14:45:50Z|00329|binding|INFO|Setting lport ec1a7a25-a60a-40c3-98bf-710c68019b24 up in Southbound
Jan 20 14:45:50 compute-1 nova_compute[225855]: 2026-01-20 14:45:50.450 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:45:50 compute-1 nova_compute[225855]: 2026-01-20 14:45:50.452 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:45:50 compute-1 NetworkManager[49104]: <info>  [1768920350.4535] device (tapec1a7a25-a6): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 20 14:45:50 compute-1 NetworkManager[49104]: <info>  [1768920350.4540] device (tapec1a7a25-a6): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 20 14:45:50 compute-1 systemd[1]: Started Virtual Machine qemu-38-instance-0000005f.
Jan 20 14:45:50 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:45:50.474 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[d329292b-960d-4c1a-8d63-4f06b98aedd6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:45:50 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:45:50.478 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[2a91d27c-7bff-48fe-bb04-3ba82ae24fd4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:45:50 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:45:50 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:45:50 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:45:50.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:45:50 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:45:50.511 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[f32cbae3-ad30-4ee2-9b21-bb05affed1cd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:45:50 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:45:50.527 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[29b2ea06-5166-4f5a-9484-8024fd5d2140]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa19e9d1a-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:21:53:13'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 82], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 529090, 'reachable_time': 16021, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 260419, 'error': None, 'target': 'ovnmeta-a19e9d1a-864f-41ee-bdea-188e65973ea5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:45:50 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:45:50.543 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[5017b386-8e67-4366-b6be-b876088ccac3]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapa19e9d1a-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 529100, 'tstamp': 529100}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 260420, 'error': None, 'target': 'ovnmeta-a19e9d1a-864f-41ee-bdea-188e65973ea5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapa19e9d1a-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 529104, 'tstamp': 529104}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 260420, 'error': None, 'target': 'ovnmeta-a19e9d1a-864f-41ee-bdea-188e65973ea5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:45:50 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:45:50.545 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa19e9d1a-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:45:50 compute-1 nova_compute[225855]: 2026-01-20 14:45:50.547 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:45:50 compute-1 nova_compute[225855]: 2026-01-20 14:45:50.548 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:45:50 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:45:50.550 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa19e9d1a-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:45:50 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:45:50.550 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:45:50 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:45:50.550 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa19e9d1a-80, col_values=(('external_ids', {'iface-id': '5527ab8d-a985-420b-9d5b-7e5d9baf7004'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:45:50 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:45:50.551 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:45:50 compute-1 nova_compute[225855]: 2026-01-20 14:45:50.606 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:45:50 compute-1 nova_compute[225855]: 2026-01-20 14:45:50.729 225859 DEBUG nova.compute.manager [req-ec100eb5-649d-4ab4-a314-b2b439d6effc req-dbe6206b-dc54-4661-9609-aab0a76a27ef 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 5f56b3e9-af2b-4934-8184-6257994c6b6a] Received event network-vif-plugged-ec1a7a25-a60a-40c3-98bf-710c68019b24 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:45:50 compute-1 nova_compute[225855]: 2026-01-20 14:45:50.729 225859 DEBUG oslo_concurrency.lockutils [req-ec100eb5-649d-4ab4-a314-b2b439d6effc req-dbe6206b-dc54-4661-9609-aab0a76a27ef 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "5f56b3e9-af2b-4934-8184-6257994c6b6a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:45:50 compute-1 nova_compute[225855]: 2026-01-20 14:45:50.729 225859 DEBUG oslo_concurrency.lockutils [req-ec100eb5-649d-4ab4-a314-b2b439d6effc req-dbe6206b-dc54-4661-9609-aab0a76a27ef 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "5f56b3e9-af2b-4934-8184-6257994c6b6a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:45:50 compute-1 nova_compute[225855]: 2026-01-20 14:45:50.730 225859 DEBUG oslo_concurrency.lockutils [req-ec100eb5-649d-4ab4-a314-b2b439d6effc req-dbe6206b-dc54-4661-9609-aab0a76a27ef 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "5f56b3e9-af2b-4934-8184-6257994c6b6a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:45:50 compute-1 nova_compute[225855]: 2026-01-20 14:45:50.730 225859 DEBUG nova.compute.manager [req-ec100eb5-649d-4ab4-a314-b2b439d6effc req-dbe6206b-dc54-4661-9609-aab0a76a27ef 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 5f56b3e9-af2b-4934-8184-6257994c6b6a] Processing event network-vif-plugged-ec1a7a25-a60a-40c3-98bf-710c68019b24 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 20 14:45:50 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:45:50.826 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=29, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '12:bb:42', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '06:92:24:f7:15:56'}, ipsec=False) old=SB_Global(nb_cfg=28) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:45:50 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:45:50.827 140354 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 20 14:45:50 compute-1 nova_compute[225855]: 2026-01-20 14:45:50.872 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:45:51 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:45:51 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:45:51 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:45:51.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:45:51 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e236 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:45:51 compute-1 nova_compute[225855]: 2026-01-20 14:45:51.765 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768920351.7653728, 5f56b3e9-af2b-4934-8184-6257994c6b6a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:45:51 compute-1 nova_compute[225855]: 2026-01-20 14:45:51.766 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 5f56b3e9-af2b-4934-8184-6257994c6b6a] VM Started (Lifecycle Event)
Jan 20 14:45:51 compute-1 nova_compute[225855]: 2026-01-20 14:45:51.768 225859 DEBUG nova.compute.manager [None req-370cb0b5-817d-4c64-98bf-e48aa665a7a1 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] [instance: 5f56b3e9-af2b-4934-8184-6257994c6b6a] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 20 14:45:51 compute-1 nova_compute[225855]: 2026-01-20 14:45:51.771 225859 DEBUG nova.virt.libvirt.driver [None req-370cb0b5-817d-4c64-98bf-e48aa665a7a1 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] [instance: 5f56b3e9-af2b-4934-8184-6257994c6b6a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 20 14:45:51 compute-1 nova_compute[225855]: 2026-01-20 14:45:51.774 225859 INFO nova.virt.libvirt.driver [-] [instance: 5f56b3e9-af2b-4934-8184-6257994c6b6a] Instance spawned successfully.
Jan 20 14:45:51 compute-1 nova_compute[225855]: 2026-01-20 14:45:51.774 225859 DEBUG nova.virt.libvirt.driver [None req-370cb0b5-817d-4c64-98bf-e48aa665a7a1 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] [instance: 5f56b3e9-af2b-4934-8184-6257994c6b6a] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 20 14:45:51 compute-1 nova_compute[225855]: 2026-01-20 14:45:51.795 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 5f56b3e9-af2b-4934-8184-6257994c6b6a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:45:51 compute-1 nova_compute[225855]: 2026-01-20 14:45:51.798 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 5f56b3e9-af2b-4934-8184-6257994c6b6a] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:45:51 compute-1 nova_compute[225855]: 2026-01-20 14:45:51.805 225859 DEBUG nova.virt.libvirt.driver [None req-370cb0b5-817d-4c64-98bf-e48aa665a7a1 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] [instance: 5f56b3e9-af2b-4934-8184-6257994c6b6a] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:45:51 compute-1 nova_compute[225855]: 2026-01-20 14:45:51.806 225859 DEBUG nova.virt.libvirt.driver [None req-370cb0b5-817d-4c64-98bf-e48aa665a7a1 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] [instance: 5f56b3e9-af2b-4934-8184-6257994c6b6a] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:45:51 compute-1 nova_compute[225855]: 2026-01-20 14:45:51.806 225859 DEBUG nova.virt.libvirt.driver [None req-370cb0b5-817d-4c64-98bf-e48aa665a7a1 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] [instance: 5f56b3e9-af2b-4934-8184-6257994c6b6a] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:45:51 compute-1 nova_compute[225855]: 2026-01-20 14:45:51.807 225859 DEBUG nova.virt.libvirt.driver [None req-370cb0b5-817d-4c64-98bf-e48aa665a7a1 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] [instance: 5f56b3e9-af2b-4934-8184-6257994c6b6a] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:45:51 compute-1 nova_compute[225855]: 2026-01-20 14:45:51.807 225859 DEBUG nova.virt.libvirt.driver [None req-370cb0b5-817d-4c64-98bf-e48aa665a7a1 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] [instance: 5f56b3e9-af2b-4934-8184-6257994c6b6a] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:45:51 compute-1 nova_compute[225855]: 2026-01-20 14:45:51.808 225859 DEBUG nova.virt.libvirt.driver [None req-370cb0b5-817d-4c64-98bf-e48aa665a7a1 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] [instance: 5f56b3e9-af2b-4934-8184-6257994c6b6a] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:45:51 compute-1 nova_compute[225855]: 2026-01-20 14:45:51.830 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 5f56b3e9-af2b-4934-8184-6257994c6b6a] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 14:45:51 compute-1 nova_compute[225855]: 2026-01-20 14:45:51.830 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768920351.7662647, 5f56b3e9-af2b-4934-8184-6257994c6b6a => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:45:51 compute-1 nova_compute[225855]: 2026-01-20 14:45:51.831 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 5f56b3e9-af2b-4934-8184-6257994c6b6a] VM Paused (Lifecycle Event)
Jan 20 14:45:51 compute-1 nova_compute[225855]: 2026-01-20 14:45:51.859 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 5f56b3e9-af2b-4934-8184-6257994c6b6a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:45:51 compute-1 nova_compute[225855]: 2026-01-20 14:45:51.863 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768920351.7707226, 5f56b3e9-af2b-4934-8184-6257994c6b6a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:45:51 compute-1 nova_compute[225855]: 2026-01-20 14:45:51.863 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 5f56b3e9-af2b-4934-8184-6257994c6b6a] VM Resumed (Lifecycle Event)
Jan 20 14:45:51 compute-1 nova_compute[225855]: 2026-01-20 14:45:51.867 225859 INFO nova.compute.manager [None req-370cb0b5-817d-4c64-98bf-e48aa665a7a1 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] [instance: 5f56b3e9-af2b-4934-8184-6257994c6b6a] Took 8.69 seconds to spawn the instance on the hypervisor.
Jan 20 14:45:51 compute-1 nova_compute[225855]: 2026-01-20 14:45:51.867 225859 DEBUG nova.compute.manager [None req-370cb0b5-817d-4c64-98bf-e48aa665a7a1 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] [instance: 5f56b3e9-af2b-4934-8184-6257994c6b6a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:45:51 compute-1 nova_compute[225855]: 2026-01-20 14:45:51.894 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 5f56b3e9-af2b-4934-8184-6257994c6b6a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:45:51 compute-1 nova_compute[225855]: 2026-01-20 14:45:51.897 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 5f56b3e9-af2b-4934-8184-6257994c6b6a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:45:51 compute-1 nova_compute[225855]: 2026-01-20 14:45:51.932 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 5f56b3e9-af2b-4934-8184-6257994c6b6a] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 14:45:51 compute-1 nova_compute[225855]: 2026-01-20 14:45:51.946 225859 INFO nova.compute.manager [None req-370cb0b5-817d-4c64-98bf-e48aa665a7a1 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] [instance: 5f56b3e9-af2b-4934-8184-6257994c6b6a] Took 9.67 seconds to build instance.
Jan 20 14:45:51 compute-1 nova_compute[225855]: 2026-01-20 14:45:51.962 225859 DEBUG oslo_concurrency.lockutils [None req-370cb0b5-817d-4c64-98bf-e48aa665a7a1 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Lock "5f56b3e9-af2b-4934-8184-6257994c6b6a" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.778s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:45:52 compute-1 ceph-mon[81775]: pgmap v1729: 321 pgs: 321 active+clean; 338 MiB data, 857 MiB used, 20 GiB / 21 GiB avail; 53 KiB/s rd, 2.1 MiB/s wr, 77 op/s
Jan 20 14:45:52 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:45:52 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:45:52 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:45:52.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:45:52 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:45:52.830 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5ffd4ac3-9266-4927-98ad-20a17782c725, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '29'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:45:52 compute-1 nova_compute[225855]: 2026-01-20 14:45:52.857 225859 DEBUG nova.compute.manager [req-8db341f1-a817-4c47-bb49-1f468bf9deef req-1089f9b1-43a0-4700-9ff6-8659f6bab3ed 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 5f56b3e9-af2b-4934-8184-6257994c6b6a] Received event network-vif-plugged-ec1a7a25-a60a-40c3-98bf-710c68019b24 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:45:52 compute-1 nova_compute[225855]: 2026-01-20 14:45:52.858 225859 DEBUG oslo_concurrency.lockutils [req-8db341f1-a817-4c47-bb49-1f468bf9deef req-1089f9b1-43a0-4700-9ff6-8659f6bab3ed 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "5f56b3e9-af2b-4934-8184-6257994c6b6a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:45:52 compute-1 nova_compute[225855]: 2026-01-20 14:45:52.858 225859 DEBUG oslo_concurrency.lockutils [req-8db341f1-a817-4c47-bb49-1f468bf9deef req-1089f9b1-43a0-4700-9ff6-8659f6bab3ed 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "5f56b3e9-af2b-4934-8184-6257994c6b6a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:45:52 compute-1 nova_compute[225855]: 2026-01-20 14:45:52.859 225859 DEBUG oslo_concurrency.lockutils [req-8db341f1-a817-4c47-bb49-1f468bf9deef req-1089f9b1-43a0-4700-9ff6-8659f6bab3ed 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "5f56b3e9-af2b-4934-8184-6257994c6b6a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:45:52 compute-1 nova_compute[225855]: 2026-01-20 14:45:52.859 225859 DEBUG nova.compute.manager [req-8db341f1-a817-4c47-bb49-1f468bf9deef req-1089f9b1-43a0-4700-9ff6-8659f6bab3ed 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 5f56b3e9-af2b-4934-8184-6257994c6b6a] No waiting events found dispatching network-vif-plugged-ec1a7a25-a60a-40c3-98bf-710c68019b24 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:45:52 compute-1 nova_compute[225855]: 2026-01-20 14:45:52.859 225859 WARNING nova.compute.manager [req-8db341f1-a817-4c47-bb49-1f468bf9deef req-1089f9b1-43a0-4700-9ff6-8659f6bab3ed 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 5f56b3e9-af2b-4934-8184-6257994c6b6a] Received unexpected event network-vif-plugged-ec1a7a25-a60a-40c3-98bf-710c68019b24 for instance with vm_state active and task_state None.
Jan 20 14:45:53 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/435567543' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:45:53 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:45:53 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:45:53 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:45:53.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:45:53 compute-1 nova_compute[225855]: 2026-01-20 14:45:53.569 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:45:53 compute-1 nova_compute[225855]: 2026-01-20 14:45:53.891 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:45:54 compute-1 ceph-mon[81775]: pgmap v1730: 321 pgs: 321 active+clean; 326 MiB data, 850 MiB used, 20 GiB / 21 GiB avail; 59 KiB/s rd, 2.1 MiB/s wr, 84 op/s
Jan 20 14:45:54 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:45:54 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:45:54 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:45:54.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:45:54 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e237 e237: 3 total, 3 up, 3 in
Jan 20 14:45:54 compute-1 nova_compute[225855]: 2026-01-20 14:45:54.940 225859 DEBUG nova.compute.manager [req-3de9020a-2f72-497b-8787-b9c0a8706f26 req-0ecd794d-3802-42fe-9b7c-80d54016406b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 5f56b3e9-af2b-4934-8184-6257994c6b6a] Received event network-changed-ec1a7a25-a60a-40c3-98bf-710c68019b24 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:45:54 compute-1 nova_compute[225855]: 2026-01-20 14:45:54.941 225859 DEBUG nova.compute.manager [req-3de9020a-2f72-497b-8787-b9c0a8706f26 req-0ecd794d-3802-42fe-9b7c-80d54016406b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 5f56b3e9-af2b-4934-8184-6257994c6b6a] Refreshing instance network info cache due to event network-changed-ec1a7a25-a60a-40c3-98bf-710c68019b24. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 14:45:54 compute-1 nova_compute[225855]: 2026-01-20 14:45:54.941 225859 DEBUG oslo_concurrency.lockutils [req-3de9020a-2f72-497b-8787-b9c0a8706f26 req-0ecd794d-3802-42fe-9b7c-80d54016406b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-5f56b3e9-af2b-4934-8184-6257994c6b6a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:45:54 compute-1 nova_compute[225855]: 2026-01-20 14:45:54.942 225859 DEBUG oslo_concurrency.lockutils [req-3de9020a-2f72-497b-8787-b9c0a8706f26 req-0ecd794d-3802-42fe-9b7c-80d54016406b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-5f56b3e9-af2b-4934-8184-6257994c6b6a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:45:54 compute-1 nova_compute[225855]: 2026-01-20 14:45:54.942 225859 DEBUG nova.network.neutron [req-3de9020a-2f72-497b-8787-b9c0a8706f26 req-0ecd794d-3802-42fe-9b7c-80d54016406b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 5f56b3e9-af2b-4934-8184-6257994c6b6a] Refreshing network info cache for port ec1a7a25-a60a-40c3-98bf-710c68019b24 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 14:45:55 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:45:55 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:45:55 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:45:55.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:45:55 compute-1 nova_compute[225855]: 2026-01-20 14:45:55.609 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:45:55 compute-1 ceph-mon[81775]: osdmap e237: 3 total, 3 up, 3 in
Jan 20 14:45:56 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:45:56 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:45:56 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:45:56.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:45:56 compute-1 nova_compute[225855]: 2026-01-20 14:45:56.642 225859 DEBUG nova.network.neutron [req-3de9020a-2f72-497b-8787-b9c0a8706f26 req-0ecd794d-3802-42fe-9b7c-80d54016406b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 5f56b3e9-af2b-4934-8184-6257994c6b6a] Updated VIF entry in instance network info cache for port ec1a7a25-a60a-40c3-98bf-710c68019b24. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 14:45:56 compute-1 nova_compute[225855]: 2026-01-20 14:45:56.644 225859 DEBUG nova.network.neutron [req-3de9020a-2f72-497b-8787-b9c0a8706f26 req-0ecd794d-3802-42fe-9b7c-80d54016406b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 5f56b3e9-af2b-4934-8184-6257994c6b6a] Updating instance_info_cache with network_info: [{"id": "ec1a7a25-a60a-40c3-98bf-710c68019b24", "address": "fa:16:3e:76:c2:c8", "network": {"id": "a19e9d1a-864f-41ee-bdea-188e65973ea5", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-916311998-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b683fcc0026242e28ba6d8fba638688e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec1a7a25-a6", "ovs_interfaceid": "ec1a7a25-a60a-40c3-98bf-710c68019b24", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:45:56 compute-1 nova_compute[225855]: 2026-01-20 14:45:56.668 225859 DEBUG oslo_concurrency.lockutils [req-3de9020a-2f72-497b-8787-b9c0a8706f26 req-0ecd794d-3802-42fe-9b7c-80d54016406b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-5f56b3e9-af2b-4934-8184-6257994c6b6a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:45:56 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e237 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:45:57 compute-1 nova_compute[225855]: 2026-01-20 14:45:57.163 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:45:57 compute-1 ceph-mon[81775]: pgmap v1732: 321 pgs: 321 active+clean; 326 MiB data, 843 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 611 KiB/s wr, 150 op/s
Jan 20 14:45:57 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:45:57 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:45:57 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:45:57.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:45:58 compute-1 ceph-mon[81775]: pgmap v1733: 321 pgs: 321 active+clean; 308 MiB data, 838 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 518 KiB/s wr, 153 op/s
Jan 20 14:45:58 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:45:58 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:45:58 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:45:58.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:45:58 compute-1 nova_compute[225855]: 2026-01-20 14:45:58.571 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:45:59 compute-1 podman[260467]: 2026-01-20 14:45:59.041807297 +0000 UTC m=+0.086123650 container health_status 72be95c12e041eb2cd1a16ad70dfa1391d9f03a5f3180a614ead9459dcfcd46f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 20 14:45:59 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:45:59 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:45:59 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:45:59.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:46:00 compute-1 ceph-mon[81775]: pgmap v1734: 321 pgs: 321 active+clean; 279 MiB data, 819 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 25 KiB/s wr, 152 op/s
Jan 20 14:46:00 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:46:00 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:46:00 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:46:00.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:46:00 compute-1 nova_compute[225855]: 2026-01-20 14:46:00.611 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:46:01 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:46:01 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:46:01 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:46:01.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:46:01 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/2979122474' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:46:01 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e237 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:46:02 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:46:02 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:46:02 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:46:02.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:46:02 compute-1 ceph-mon[81775]: pgmap v1735: 321 pgs: 321 active+clean; 249 MiB data, 800 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 21 KiB/s wr, 123 op/s
Jan 20 14:46:02 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/113448044' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:46:03 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:46:03 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:46:03 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:46:03.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:46:03 compute-1 nova_compute[225855]: 2026-01-20 14:46:03.574 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:46:03 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e238 e238: 3 total, 3 up, 3 in
Jan 20 14:46:04 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:46:04 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:46:04 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:46:04.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:46:05 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/2123534067' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:46:05 compute-1 ceph-mon[81775]: osdmap e238: 3 total, 3 up, 3 in
Jan 20 14:46:05 compute-1 ceph-mon[81775]: pgmap v1737: 321 pgs: 321 active+clean; 247 MiB data, 797 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 21 KiB/s wr, 99 op/s
Jan 20 14:46:05 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:46:05 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:46:05 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:46:05.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:46:05 compute-1 nova_compute[225855]: 2026-01-20 14:46:05.613 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:46:06 compute-1 ceph-mon[81775]: pgmap v1738: 321 pgs: 321 active+clean; 253 MiB data, 801 MiB used, 20 GiB / 21 GiB avail; 588 KiB/s rd, 827 KiB/s wr, 58 op/s
Jan 20 14:46:06 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:46:06 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:46:06 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:46:06.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:46:06 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e238 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:46:07 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:46:07 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:46:07 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:46:07.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:46:07 compute-1 sudo[260499]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:46:07 compute-1 sudo[260499]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:46:07 compute-1 sudo[260499]: pam_unix(sudo:session): session closed for user root
Jan 20 14:46:07 compute-1 sudo[260524]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:46:07 compute-1 sudo[260524]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:46:07 compute-1 sudo[260524]: pam_unix(sudo:session): session closed for user root
Jan 20 14:46:08 compute-1 ovn_controller[130490]: 2026-01-20T14:46:08Z|00044|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:76:c2:c8 10.100.0.5
Jan 20 14:46:08 compute-1 ovn_controller[130490]: 2026-01-20T14:46:08Z|00045|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:76:c2:c8 10.100.0.5
Jan 20 14:46:08 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:46:08 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:46:08 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:46:08.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:46:08 compute-1 nova_compute[225855]: 2026-01-20 14:46:08.612 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:46:08 compute-1 ceph-mon[81775]: pgmap v1739: 321 pgs: 321 active+clean; 275 MiB data, 815 MiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 2.3 MiB/s wr, 109 op/s
Jan 20 14:46:09 compute-1 podman[260549]: 2026-01-20 14:46:09.019970832 +0000 UTC m=+0.054188755 container health_status 533cd8e0c3b8b7f910f15cef49e7f879ab46c12ce646a35628c15f13725eb500 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 20 14:46:09 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:46:09 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:46:09 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:46:09.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:46:10 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:46:10 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:46:10 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:46:10.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:46:10 compute-1 ceph-mon[81775]: pgmap v1740: 321 pgs: 321 active+clean; 287 MiB data, 825 MiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 3.0 MiB/s wr, 108 op/s
Jan 20 14:46:10 compute-1 nova_compute[225855]: 2026-01-20 14:46:10.615 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:46:11 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:46:11 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:46:11 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:46:11.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:46:11 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/2347137275' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:46:11 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e238 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:46:12 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:46:12 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:46:12 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:46:12.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:46:12 compute-1 sudo[260570]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:46:12 compute-1 sudo[260570]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:46:12 compute-1 sudo[260570]: pam_unix(sudo:session): session closed for user root
Jan 20 14:46:12 compute-1 sudo[260595]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:46:12 compute-1 sudo[260595]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:46:12 compute-1 sudo[260595]: pam_unix(sudo:session): session closed for user root
Jan 20 14:46:12 compute-1 sudo[260620]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:46:12 compute-1 sudo[260620]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:46:12 compute-1 sudo[260620]: pam_unix(sudo:session): session closed for user root
Jan 20 14:46:12 compute-1 sudo[260645]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 20 14:46:12 compute-1 sudo[260645]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:46:12 compute-1 ceph-mon[81775]: pgmap v1741: 321 pgs: 321 active+clean; 314 MiB data, 842 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 4.6 MiB/s wr, 175 op/s
Jan 20 14:46:13 compute-1 sudo[260645]: pam_unix(sudo:session): session closed for user root
Jan 20 14:46:13 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:46:13 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:46:13 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:46:13.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:46:13 compute-1 nova_compute[225855]: 2026-01-20 14:46:13.614 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:46:13 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 20 14:46:13 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/514796829' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 14:46:13 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 20 14:46:13 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/514796829' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 14:46:14 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/3071100775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:46:14 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/2880388694' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:46:14 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:46:14 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 14:46:14 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:46:14 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 14:46:14 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 14:46:14 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:46:14 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/514796829' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 14:46:14 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/514796829' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 14:46:14 compute-1 ceph-mon[81775]: pgmap v1742: 321 pgs: 321 active+clean; 318 MiB data, 842 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 4.5 MiB/s wr, 176 op/s
Jan 20 14:46:14 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:46:14 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:46:14 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:46:14.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:46:15 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:46:15 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:46:15 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:46:15.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:46:15 compute-1 nova_compute[225855]: 2026-01-20 14:46:15.617 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:46:15 compute-1 ceph-mon[81775]: pgmap v1743: 321 pgs: 321 active+clean; 360 MiB data, 855 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 5.3 MiB/s wr, 182 op/s
Jan 20 14:46:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:46:16.407 140354 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:46:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:46:16.407 140354 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:46:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:46:16.408 140354 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:46:16 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:46:16 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:46:16 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:46:16.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:46:16 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e238 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:46:17 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/2610441827' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:46:17 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/1110613297' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:46:17 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:46:17 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:46:17 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:46:17.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:46:18 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:46:18 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:46:18 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:46:18.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:46:18 compute-1 nova_compute[225855]: 2026-01-20 14:46:18.662 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:46:19 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:46:19 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:46:19 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:46:19.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:46:20 compute-1 ceph-mon[81775]: pgmap v1744: 321 pgs: 321 active+clean; 370 MiB data, 859 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 5.0 MiB/s wr, 200 op/s
Jan 20 14:46:20 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:46:20 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:46:20 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:46:20.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:46:20 compute-1 nova_compute[225855]: 2026-01-20 14:46:20.618 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:46:21 compute-1 ceph-mon[81775]: pgmap v1745: 321 pgs: 321 active+clean; 372 MiB data, 864 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 3.8 MiB/s wr, 164 op/s
Jan 20 14:46:21 compute-1 sudo[260707]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:46:21 compute-1 sudo[260707]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:46:21 compute-1 sudo[260707]: pam_unix(sudo:session): session closed for user root
Jan 20 14:46:21 compute-1 sudo[260732]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 14:46:21 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:46:21 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:46:21 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:46:21.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:46:21 compute-1 sudo[260732]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:46:21 compute-1 sudo[260732]: pam_unix(sudo:session): session closed for user root
Jan 20 14:46:21 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e238 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:46:22 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:46:22 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:46:22 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:46:22.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:46:23 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:46:23 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:46:23 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:46:23.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:46:23 compute-1 nova_compute[225855]: 2026-01-20 14:46:23.664 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:46:24 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:46:24 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:46:24 compute-1 ceph-mon[81775]: pgmap v1746: 321 pgs: 321 active+clean; 372 MiB data, 864 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 3.2 MiB/s wr, 208 op/s
Jan 20 14:46:24 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:46:24 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:46:24 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:46:24.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:46:25 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:46:25 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:46:25 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:46:25.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:46:25 compute-1 nova_compute[225855]: 2026-01-20 14:46:25.621 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:46:25 compute-1 ceph-mon[81775]: pgmap v1747: 321 pgs: 321 active+clean; 374 MiB data, 864 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 1.9 MiB/s wr, 173 op/s
Jan 20 14:46:26 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:46:26 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:46:26 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:46:26.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:46:26 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e238 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:46:27 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:46:27 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:46:27 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:46:27.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:46:27 compute-1 ceph-mon[81775]: pgmap v1748: 321 pgs: 321 active+clean; 374 MiB data, 865 MiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 1.9 MiB/s wr, 228 op/s
Jan 20 14:46:27 compute-1 sudo[260761]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:46:27 compute-1 sudo[260761]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:46:27 compute-1 sudo[260761]: pam_unix(sudo:session): session closed for user root
Jan 20 14:46:28 compute-1 sudo[260786]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:46:28 compute-1 sudo[260786]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:46:28 compute-1 sudo[260786]: pam_unix(sudo:session): session closed for user root
Jan 20 14:46:28 compute-1 nova_compute[225855]: 2026-01-20 14:46:28.069 225859 DEBUG oslo_concurrency.lockutils [None req-d2635026-981d-4a32-8767-4963aa4021cb 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Acquiring lock "5f56b3e9-af2b-4934-8184-6257994c6b6a" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:46:28 compute-1 nova_compute[225855]: 2026-01-20 14:46:28.070 225859 DEBUG oslo_concurrency.lockutils [None req-d2635026-981d-4a32-8767-4963aa4021cb 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Lock "5f56b3e9-af2b-4934-8184-6257994c6b6a" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:46:28 compute-1 nova_compute[225855]: 2026-01-20 14:46:28.086 225859 DEBUG nova.objects.instance [None req-d2635026-981d-4a32-8767-4963aa4021cb 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Lazy-loading 'flavor' on Instance uuid 5f56b3e9-af2b-4934-8184-6257994c6b6a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:46:28 compute-1 nova_compute[225855]: 2026-01-20 14:46:28.131 225859 DEBUG oslo_concurrency.lockutils [None req-d2635026-981d-4a32-8767-4963aa4021cb 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Lock "5f56b3e9-af2b-4934-8184-6257994c6b6a" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.062s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:46:28 compute-1 nova_compute[225855]: 2026-01-20 14:46:28.473 225859 DEBUG oslo_concurrency.lockutils [None req-d2635026-981d-4a32-8767-4963aa4021cb 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Acquiring lock "5f56b3e9-af2b-4934-8184-6257994c6b6a" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:46:28 compute-1 nova_compute[225855]: 2026-01-20 14:46:28.473 225859 DEBUG oslo_concurrency.lockutils [None req-d2635026-981d-4a32-8767-4963aa4021cb 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Lock "5f56b3e9-af2b-4934-8184-6257994c6b6a" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:46:28 compute-1 nova_compute[225855]: 2026-01-20 14:46:28.474 225859 INFO nova.compute.manager [None req-d2635026-981d-4a32-8767-4963aa4021cb 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] [instance: 5f56b3e9-af2b-4934-8184-6257994c6b6a] Attaching volume aab03340-0e79-412e-a963-e216832603c4 to /dev/vdb
Jan 20 14:46:28 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:46:28 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:46:28 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:46:28.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:46:28 compute-1 nova_compute[225855]: 2026-01-20 14:46:28.648 225859 DEBUG os_brick.utils [None req-d2635026-981d-4a32-8767-4963aa4021cb 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.101', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-1.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Jan 20 14:46:28 compute-1 nova_compute[225855]: 2026-01-20 14:46:28.649 231081 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:46:28 compute-1 nova_compute[225855]: 2026-01-20 14:46:28.660 231081 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:46:28 compute-1 nova_compute[225855]: 2026-01-20 14:46:28.660 231081 DEBUG oslo.privsep.daemon [-] privsep: reply[4d63a53d-89f3-4042-a043-58f5856c2b4d]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:46:28 compute-1 nova_compute[225855]: 2026-01-20 14:46:28.662 231081 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:46:28 compute-1 nova_compute[225855]: 2026-01-20 14:46:28.710 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:46:28 compute-1 nova_compute[225855]: 2026-01-20 14:46:28.710 231081 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.048s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:46:28 compute-1 nova_compute[225855]: 2026-01-20 14:46:28.711 231081 DEBUG oslo.privsep.daemon [-] privsep: reply[1df69b61-a6be-4ce5-a73c-2ceb97627750]: (4, ('InitiatorName=iqn.1994-05.com.redhat:1821ea3dc03d', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:46:28 compute-1 nova_compute[225855]: 2026-01-20 14:46:28.713 231081 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:46:28 compute-1 nova_compute[225855]: 2026-01-20 14:46:28.720 231081 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:46:28 compute-1 nova_compute[225855]: 2026-01-20 14:46:28.720 231081 DEBUG oslo.privsep.daemon [-] privsep: reply[cb653a98-6d80-4cbd-950c-76a3ab70d14f]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:46:28 compute-1 nova_compute[225855]: 2026-01-20 14:46:28.721 231081 DEBUG oslo.privsep.daemon [-] privsep: reply[6e80d0d1-12a5-4a8c-ab48-906a5f64a643]: (4, '870b1f1c-f19c-477b-b282-ee6eeba50974') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:46:28 compute-1 nova_compute[225855]: 2026-01-20 14:46:28.722 225859 DEBUG oslo_concurrency.processutils [None req-d2635026-981d-4a32-8767-4963aa4021cb 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:46:28 compute-1 nova_compute[225855]: 2026-01-20 14:46:28.745 225859 DEBUG oslo_concurrency.processutils [None req-d2635026-981d-4a32-8767-4963aa4021cb 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] CMD "nvme version" returned: 0 in 0.023s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:46:28 compute-1 nova_compute[225855]: 2026-01-20 14:46:28.748 225859 DEBUG os_brick.initiator.connectors.lightos [None req-d2635026-981d-4a32-8767-4963aa4021cb 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Jan 20 14:46:28 compute-1 nova_compute[225855]: 2026-01-20 14:46:28.748 225859 DEBUG os_brick.initiator.connectors.lightos [None req-d2635026-981d-4a32-8767-4963aa4021cb 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Jan 20 14:46:28 compute-1 nova_compute[225855]: 2026-01-20 14:46:28.748 225859 DEBUG os_brick.initiator.connectors.lightos [None req-d2635026-981d-4a32-8767-4963aa4021cb 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:5350774e-8b5e-4dba-80a9-92d405981c1d dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Jan 20 14:46:28 compute-1 nova_compute[225855]: 2026-01-20 14:46:28.749 225859 DEBUG os_brick.utils [None req-d2635026-981d-4a32-8767-4963aa4021cb 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] <== get_connector_properties: return (99ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.101', 'host': 'compute-1.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:1821ea3dc03d', 'do_local_attach': False, 'nvme_hostid': '5350774e-8b5e-4dba-80a9-92d405981c1d', 'system uuid': '870b1f1c-f19c-477b-b282-ee6eeba50974', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:5350774e-8b5e-4dba-80a9-92d405981c1d', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Jan 20 14:46:28 compute-1 nova_compute[225855]: 2026-01-20 14:46:28.749 225859 DEBUG nova.virt.block_device [None req-d2635026-981d-4a32-8767-4963aa4021cb 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] [instance: 5f56b3e9-af2b-4934-8184-6257994c6b6a] Updating existing volume attachment record: 4982b9c6-6475-4038-b1e1-764e3501e6c5 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Jan 20 14:46:29 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/3610685573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:46:29 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:46:29 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:46:29 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:46:29.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:46:29 compute-1 nova_compute[225855]: 2026-01-20 14:46:29.640 225859 DEBUG nova.objects.instance [None req-d2635026-981d-4a32-8767-4963aa4021cb 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Lazy-loading 'flavor' on Instance uuid 5f56b3e9-af2b-4934-8184-6257994c6b6a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:46:29 compute-1 nova_compute[225855]: 2026-01-20 14:46:29.661 225859 DEBUG nova.virt.libvirt.driver [None req-d2635026-981d-4a32-8767-4963aa4021cb 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] [instance: 5f56b3e9-af2b-4934-8184-6257994c6b6a] Attempting to attach volume aab03340-0e79-412e-a963-e216832603c4 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
Jan 20 14:46:29 compute-1 nova_compute[225855]: 2026-01-20 14:46:29.664 225859 DEBUG nova.virt.libvirt.guest [None req-d2635026-981d-4a32-8767-4963aa4021cb 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] attach device xml: <disk type="network" device="disk">
Jan 20 14:46:29 compute-1 nova_compute[225855]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 20 14:46:29 compute-1 nova_compute[225855]:   <source protocol="rbd" name="volumes/volume-aab03340-0e79-412e-a963-e216832603c4">
Jan 20 14:46:29 compute-1 nova_compute[225855]:     <host name="192.168.122.100" port="6789"/>
Jan 20 14:46:29 compute-1 nova_compute[225855]:     <host name="192.168.122.102" port="6789"/>
Jan 20 14:46:29 compute-1 nova_compute[225855]:     <host name="192.168.122.101" port="6789"/>
Jan 20 14:46:29 compute-1 nova_compute[225855]:   </source>
Jan 20 14:46:29 compute-1 nova_compute[225855]:   <auth username="openstack">
Jan 20 14:46:29 compute-1 nova_compute[225855]:     <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:46:29 compute-1 nova_compute[225855]:   </auth>
Jan 20 14:46:29 compute-1 nova_compute[225855]:   <target dev="vdb" bus="virtio"/>
Jan 20 14:46:29 compute-1 nova_compute[225855]:   <serial>aab03340-0e79-412e-a963-e216832603c4</serial>
Jan 20 14:46:29 compute-1 nova_compute[225855]: </disk>
Jan 20 14:46:29 compute-1 nova_compute[225855]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Jan 20 14:46:30 compute-1 nova_compute[225855]: 2026-01-20 14:46:30.025 225859 DEBUG nova.virt.libvirt.driver [None req-d2635026-981d-4a32-8767-4963aa4021cb 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:46:30 compute-1 nova_compute[225855]: 2026-01-20 14:46:30.026 225859 DEBUG nova.virt.libvirt.driver [None req-d2635026-981d-4a32-8767-4963aa4021cb 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:46:30 compute-1 nova_compute[225855]: 2026-01-20 14:46:30.027 225859 DEBUG nova.virt.libvirt.driver [None req-d2635026-981d-4a32-8767-4963aa4021cb 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:46:30 compute-1 nova_compute[225855]: 2026-01-20 14:46:30.027 225859 DEBUG nova.virt.libvirt.driver [None req-d2635026-981d-4a32-8767-4963aa4021cb 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] No VIF found with MAC fa:16:3e:76:c2:c8, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 20 14:46:30 compute-1 podman[260839]: 2026-01-20 14:46:30.038804147 +0000 UTC m=+0.081016035 container health_status 72be95c12e041eb2cd1a16ad70dfa1391d9f03a5f3180a614ead9459dcfcd46f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller)
Jan 20 14:46:30 compute-1 nova_compute[225855]: 2026-01-20 14:46:30.223 225859 DEBUG oslo_concurrency.lockutils [None req-d2635026-981d-4a32-8767-4963aa4021cb 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Lock "5f56b3e9-af2b-4934-8184-6257994c6b6a" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 1.750s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:46:30 compute-1 nova_compute[225855]: 2026-01-20 14:46:30.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:46:30 compute-1 nova_compute[225855]: 2026-01-20 14:46:30.340 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 14:46:30 compute-1 nova_compute[225855]: 2026-01-20 14:46:30.340 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 20 14:46:30 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:46:30 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:46:30 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:46:30.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:46:30 compute-1 nova_compute[225855]: 2026-01-20 14:46:30.582 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "refresh_cache-6586bc3e-3a94-4d22-8e8c-713a86a956fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:46:30 compute-1 nova_compute[225855]: 2026-01-20 14:46:30.582 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquired lock "refresh_cache-6586bc3e-3a94-4d22-8e8c-713a86a956fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:46:30 compute-1 nova_compute[225855]: 2026-01-20 14:46:30.582 225859 DEBUG nova.network.neutron [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] [instance: 6586bc3e-3a94-4d22-8e8c-713a86a956fb] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 20 14:46:30 compute-1 nova_compute[225855]: 2026-01-20 14:46:30.582 225859 DEBUG nova.objects.instance [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 6586bc3e-3a94-4d22-8e8c-713a86a956fb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:46:30 compute-1 nova_compute[225855]: 2026-01-20 14:46:30.623 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:46:30 compute-1 ceph-mon[81775]: pgmap v1749: 321 pgs: 321 active+clean; 374 MiB data, 865 MiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 502 KiB/s wr, 208 op/s
Jan 20 14:46:30 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/3828261511' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:46:30 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/2519742879' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:46:31 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:46:31 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:46:31 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:46:31.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:46:31 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e238 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:46:31 compute-1 ceph-mon[81775]: pgmap v1750: 321 pgs: 321 active+clean; 374 MiB data, 865 MiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 55 KiB/s wr, 174 op/s
Jan 20 14:46:32 compute-1 nova_compute[225855]: 2026-01-20 14:46:32.275 225859 DEBUG nova.network.neutron [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] [instance: 6586bc3e-3a94-4d22-8e8c-713a86a956fb] Updating instance_info_cache with network_info: [{"id": "2c289e6f-295e-44c3-948a-9a6901251890", "address": "fa:16:3e:2f:4c:e2", "network": {"id": "a19e9d1a-864f-41ee-bdea-188e65973ea5", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-916311998-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b683fcc0026242e28ba6d8fba638688e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c289e6f-29", "ovs_interfaceid": "2c289e6f-295e-44c3-948a-9a6901251890", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:46:32 compute-1 nova_compute[225855]: 2026-01-20 14:46:32.296 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Releasing lock "refresh_cache-6586bc3e-3a94-4d22-8e8c-713a86a956fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:46:32 compute-1 nova_compute[225855]: 2026-01-20 14:46:32.296 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] [instance: 6586bc3e-3a94-4d22-8e8c-713a86a956fb] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 20 14:46:32 compute-1 nova_compute[225855]: 2026-01-20 14:46:32.297 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:46:32 compute-1 nova_compute[225855]: 2026-01-20 14:46:32.297 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:46:32 compute-1 nova_compute[225855]: 2026-01-20 14:46:32.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:46:32 compute-1 nova_compute[225855]: 2026-01-20 14:46:32.340 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 14:46:32 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:46:32 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:46:32 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:46:32.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:46:32 compute-1 nova_compute[225855]: 2026-01-20 14:46:32.838 225859 INFO nova.compute.manager [None req-b4c774ae-1e22-4f05-b320-7417a7a22674 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] [instance: 5f56b3e9-af2b-4934-8184-6257994c6b6a] Rebuilding instance
Jan 20 14:46:33 compute-1 ceph-mon[81775]: pgmap v1751: 321 pgs: 321 active+clean; 374 MiB data, 865 MiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 51 KiB/s wr, 153 op/s
Jan 20 14:46:33 compute-1 nova_compute[225855]: 2026-01-20 14:46:33.076 225859 DEBUG nova.objects.instance [None req-b4c774ae-1e22-4f05-b320-7417a7a22674 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Lazy-loading 'trusted_certs' on Instance uuid 5f56b3e9-af2b-4934-8184-6257994c6b6a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:46:33 compute-1 nova_compute[225855]: 2026-01-20 14:46:33.092 225859 DEBUG nova.compute.manager [None req-b4c774ae-1e22-4f05-b320-7417a7a22674 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] [instance: 5f56b3e9-af2b-4934-8184-6257994c6b6a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:46:33 compute-1 nova_compute[225855]: 2026-01-20 14:46:33.165 225859 DEBUG nova.objects.instance [None req-b4c774ae-1e22-4f05-b320-7417a7a22674 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Lazy-loading 'pci_requests' on Instance uuid 5f56b3e9-af2b-4934-8184-6257994c6b6a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:46:33 compute-1 nova_compute[225855]: 2026-01-20 14:46:33.180 225859 DEBUG nova.objects.instance [None req-b4c774ae-1e22-4f05-b320-7417a7a22674 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Lazy-loading 'pci_devices' on Instance uuid 5f56b3e9-af2b-4934-8184-6257994c6b6a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:46:33 compute-1 nova_compute[225855]: 2026-01-20 14:46:33.193 225859 DEBUG nova.objects.instance [None req-b4c774ae-1e22-4f05-b320-7417a7a22674 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Lazy-loading 'resources' on Instance uuid 5f56b3e9-af2b-4934-8184-6257994c6b6a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:46:33 compute-1 nova_compute[225855]: 2026-01-20 14:46:33.205 225859 DEBUG nova.objects.instance [None req-b4c774ae-1e22-4f05-b320-7417a7a22674 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Lazy-loading 'migration_context' on Instance uuid 5f56b3e9-af2b-4934-8184-6257994c6b6a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:46:33 compute-1 nova_compute[225855]: 2026-01-20 14:46:33.216 225859 DEBUG nova.objects.instance [None req-b4c774ae-1e22-4f05-b320-7417a7a22674 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] [instance: 5f56b3e9-af2b-4934-8184-6257994c6b6a] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Jan 20 14:46:33 compute-1 nova_compute[225855]: 2026-01-20 14:46:33.220 225859 DEBUG nova.virt.libvirt.driver [None req-b4c774ae-1e22-4f05-b320-7417a7a22674 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] [instance: 5f56b3e9-af2b-4934-8184-6257994c6b6a] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Jan 20 14:46:33 compute-1 nova_compute[225855]: 2026-01-20 14:46:33.340 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:46:33 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:46:33 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:46:33 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:46:33.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:46:33 compute-1 nova_compute[225855]: 2026-01-20 14:46:33.712 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:46:34 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/3514728644' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:46:34 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:46:34 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:46:34 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:46:34.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:46:35 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:46:35 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:46:35 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:46:35.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:46:35 compute-1 kernel: tapec1a7a25-a6 (unregistering): left promiscuous mode
Jan 20 14:46:35 compute-1 NetworkManager[49104]: <info>  [1768920395.5054] device (tapec1a7a25-a6): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 20 14:46:35 compute-1 nova_compute[225855]: 2026-01-20 14:46:35.518 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:46:35 compute-1 ovn_controller[130490]: 2026-01-20T14:46:35Z|00330|binding|INFO|Releasing lport ec1a7a25-a60a-40c3-98bf-710c68019b24 from this chassis (sb_readonly=0)
Jan 20 14:46:35 compute-1 ovn_controller[130490]: 2026-01-20T14:46:35Z|00331|binding|INFO|Setting lport ec1a7a25-a60a-40c3-98bf-710c68019b24 down in Southbound
Jan 20 14:46:35 compute-1 ovn_controller[130490]: 2026-01-20T14:46:35Z|00332|binding|INFO|Removing iface tapec1a7a25-a6 ovn-installed in OVS
Jan 20 14:46:35 compute-1 nova_compute[225855]: 2026-01-20 14:46:35.521 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:46:35 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:46:35.527 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:76:c2:c8 10.100.0.5'], port_security=['fa:16:3e:76:c2:c8 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '5f56b3e9-af2b-4934-8184-6257994c6b6a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a19e9d1a-864f-41ee-bdea-188e65973ea5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b683fcc0026242e28ba6d8fba638688e', 'neutron:revision_number': '4', 'neutron:security_group_ids': '357f03e7-2463-4315-adee-f60d9b0f5500', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com', 'neutron:port_fip': '192.168.122.175'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=361f9a69-30a6-4be4-89ad-2a8f92877af2, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], logical_port=ec1a7a25-a60a-40c3-98bf-710c68019b24) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:46:35 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:46:35.529 140354 INFO neutron.agent.ovn.metadata.agent [-] Port ec1a7a25-a60a-40c3-98bf-710c68019b24 in datapath a19e9d1a-864f-41ee-bdea-188e65973ea5 unbound from our chassis
Jan 20 14:46:35 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:46:35.532 140354 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a19e9d1a-864f-41ee-bdea-188e65973ea5
Jan 20 14:46:35 compute-1 nova_compute[225855]: 2026-01-20 14:46:35.538 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:46:35 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:46:35.549 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[fb8d31ee-0b0c-4515-90f9-165c869f81cb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:46:35 compute-1 systemd[1]: machine-qemu\x2d38\x2dinstance\x2d0000005f.scope: Deactivated successfully.
Jan 20 14:46:35 compute-1 systemd[1]: machine-qemu\x2d38\x2dinstance\x2d0000005f.scope: Consumed 14.016s CPU time.
Jan 20 14:46:35 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:46:35.584 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[2164f7d9-6783-4eb4-8674-67d0706f1ce0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:46:35 compute-1 systemd-machined[194361]: Machine qemu-38-instance-0000005f terminated.
Jan 20 14:46:35 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:46:35.587 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[5d8fa9f8-8076-4a45-88d3-6645f60443c0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:46:35 compute-1 ceph-mon[81775]: pgmap v1752: 321 pgs: 321 active+clean; 388 MiB data, 876 MiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 998 KiB/s wr, 165 op/s
Jan 20 14:46:35 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/2440153925' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:46:35 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:46:35.621 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[15ded6df-d496-48ef-97c3-9f8b594fffc3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:46:35 compute-1 nova_compute[225855]: 2026-01-20 14:46:35.626 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:46:35 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:46:35.637 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[d39d3ed6-86d3-421a-ad22-d722fbcf6327]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa19e9d1a-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:21:53:13'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 82], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 529090, 'reachable_time': 16021, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 260880, 'error': None, 'target': 'ovnmeta-a19e9d1a-864f-41ee-bdea-188e65973ea5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:46:35 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:46:35.653 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[6d7fc794-924e-4efc-9ba7-147111f6d149]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapa19e9d1a-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 529100, 'tstamp': 529100}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 260881, 'error': None, 'target': 'ovnmeta-a19e9d1a-864f-41ee-bdea-188e65973ea5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapa19e9d1a-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 529104, 'tstamp': 529104}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 260881, 'error': None, 'target': 'ovnmeta-a19e9d1a-864f-41ee-bdea-188e65973ea5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:46:35 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:46:35.655 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa19e9d1a-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:46:35 compute-1 nova_compute[225855]: 2026-01-20 14:46:35.656 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:46:35 compute-1 nova_compute[225855]: 2026-01-20 14:46:35.662 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:46:35 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:46:35.663 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa19e9d1a-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:46:35 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:46:35.663 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:46:35 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:46:35.663 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa19e9d1a-80, col_values=(('external_ids', {'iface-id': '5527ab8d-a985-420b-9d5b-7e5d9baf7004'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:46:35 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:46:35.664 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:46:35 compute-1 kernel: tapec1a7a25-a6: entered promiscuous mode
Jan 20 14:46:35 compute-1 kernel: tapec1a7a25-a6 (unregistering): left promiscuous mode
Jan 20 14:46:35 compute-1 NetworkManager[49104]: <info>  [1768920395.7588] manager: (tapec1a7a25-a6): new Tun device (/org/freedesktop/NetworkManager/Devices/142)
Jan 20 14:46:35 compute-1 nova_compute[225855]: 2026-01-20 14:46:35.803 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:46:36 compute-1 nova_compute[225855]: 2026-01-20 14:46:36.234 225859 INFO nova.virt.libvirt.driver [None req-b4c774ae-1e22-4f05-b320-7417a7a22674 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] [instance: 5f56b3e9-af2b-4934-8184-6257994c6b6a] Instance shutdown successfully after 3 seconds.
Jan 20 14:46:36 compute-1 nova_compute[225855]: 2026-01-20 14:46:36.240 225859 INFO nova.virt.libvirt.driver [-] [instance: 5f56b3e9-af2b-4934-8184-6257994c6b6a] Instance destroyed successfully.
Jan 20 14:46:36 compute-1 nova_compute[225855]: 2026-01-20 14:46:36.424 225859 INFO nova.compute.manager [None req-b4c774ae-1e22-4f05-b320-7417a7a22674 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] [instance: 5f56b3e9-af2b-4934-8184-6257994c6b6a] Detaching volume aab03340-0e79-412e-a963-e216832603c4
Jan 20 14:46:36 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:46:36 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:46:36 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:46:36.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:46:36 compute-1 nova_compute[225855]: 2026-01-20 14:46:36.567 225859 INFO nova.virt.block_device [None req-b4c774ae-1e22-4f05-b320-7417a7a22674 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] [instance: 5f56b3e9-af2b-4934-8184-6257994c6b6a] Attempting to driver detach volume aab03340-0e79-412e-a963-e216832603c4 from mountpoint /dev/vdb
Jan 20 14:46:36 compute-1 nova_compute[225855]: 2026-01-20 14:46:36.574 225859 DEBUG nova.virt.libvirt.driver [None req-b4c774ae-1e22-4f05-b320-7417a7a22674 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Attempting to detach device vdb from instance 5f56b3e9-af2b-4934-8184-6257994c6b6a from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Jan 20 14:46:36 compute-1 nova_compute[225855]: 2026-01-20 14:46:36.575 225859 DEBUG nova.virt.libvirt.guest [None req-b4c774ae-1e22-4f05-b320-7417a7a22674 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] detach device xml: <disk type="network" device="disk">
Jan 20 14:46:36 compute-1 nova_compute[225855]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 20 14:46:36 compute-1 nova_compute[225855]:   <source protocol="rbd" name="volumes/volume-aab03340-0e79-412e-a963-e216832603c4">
Jan 20 14:46:36 compute-1 nova_compute[225855]:     <host name="192.168.122.100" port="6789"/>
Jan 20 14:46:36 compute-1 nova_compute[225855]:     <host name="192.168.122.102" port="6789"/>
Jan 20 14:46:36 compute-1 nova_compute[225855]:     <host name="192.168.122.101" port="6789"/>
Jan 20 14:46:36 compute-1 nova_compute[225855]:   </source>
Jan 20 14:46:36 compute-1 nova_compute[225855]:   <target dev="vdb" bus="virtio"/>
Jan 20 14:46:36 compute-1 nova_compute[225855]:   <serial>aab03340-0e79-412e-a963-e216832603c4</serial>
Jan 20 14:46:36 compute-1 nova_compute[225855]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Jan 20 14:46:36 compute-1 nova_compute[225855]: </disk>
Jan 20 14:46:36 compute-1 nova_compute[225855]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Jan 20 14:46:36 compute-1 nova_compute[225855]: 2026-01-20 14:46:36.590 225859 INFO nova.virt.libvirt.driver [None req-b4c774ae-1e22-4f05-b320-7417a7a22674 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Successfully detached device vdb from instance 5f56b3e9-af2b-4934-8184-6257994c6b6a from the persistent domain config.
Jan 20 14:46:36 compute-1 nova_compute[225855]: 2026-01-20 14:46:36.848 225859 INFO nova.virt.libvirt.driver [-] [instance: 5f56b3e9-af2b-4934-8184-6257994c6b6a] Instance destroyed successfully.
Jan 20 14:46:36 compute-1 nova_compute[225855]: 2026-01-20 14:46:36.850 225859 DEBUG nova.virt.libvirt.vif [None req-b4c774ae-1e22-4f05-b320-7417a7a22674 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T14:45:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1159529074',display_name='tempest-ServerActionsTestOtherA-server-1358184153',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1159529074',id=95,image_ref='26699514-f465-4b50-98b7-36f2cfc6a308',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPQdfZ1kte1uwABu8Yt6GDBCbfLz/7DDRluLfcLvQPu0p1+5pmkgwEfXHuElB+zR6CnPfotcAwIhafKnCBnLfMP+a/6KQqDXsYrSlHbKZFIppFb1eVRM7RProDBMZxNI4Q==',key_name='tempest-keypair-1773413892',keypairs=<?>,launch_index=0,launched_at=2026-01-20T14:45:51Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={rebuild='server'},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='b683fcc0026242e28ba6d8fba638688e',ramdisk_id='',reservation_id='r-0vpr4wvx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='26699514-f465-4b50-98b7-36f2cfc6a308',image_container_format='bare',image_disk_format='qcow2',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherA-967087071',owner_user_name='tempest-ServerActionsTestOtherA-967087071-project-member'},tags=<?>,task_state='rebuilding',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T14:46:32Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='869086208e10436c9dc96c78bee9a85d',uuid=5f56b3e9-af2b-4934-8184-6257994c6b6a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ec1a7a25-a60a-40c3-98bf-710c68019b24", "address": "fa:16:3e:76:c2:c8", "network": {"id": "a19e9d1a-864f-41ee-bdea-188e65973ea5", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-916311998-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": 
{}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b683fcc0026242e28ba6d8fba638688e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec1a7a25-a6", "ovs_interfaceid": "ec1a7a25-a60a-40c3-98bf-710c68019b24", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 20 14:46:36 compute-1 nova_compute[225855]: 2026-01-20 14:46:36.850 225859 DEBUG nova.network.os_vif_util [None req-b4c774ae-1e22-4f05-b320-7417a7a22674 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Converting VIF {"id": "ec1a7a25-a60a-40c3-98bf-710c68019b24", "address": "fa:16:3e:76:c2:c8", "network": {"id": "a19e9d1a-864f-41ee-bdea-188e65973ea5", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-916311998-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b683fcc0026242e28ba6d8fba638688e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec1a7a25-a6", "ovs_interfaceid": "ec1a7a25-a60a-40c3-98bf-710c68019b24", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:46:36 compute-1 nova_compute[225855]: 2026-01-20 14:46:36.851 225859 DEBUG nova.network.os_vif_util [None req-b4c774ae-1e22-4f05-b320-7417a7a22674 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:76:c2:c8,bridge_name='br-int',has_traffic_filtering=True,id=ec1a7a25-a60a-40c3-98bf-710c68019b24,network=Network(a19e9d1a-864f-41ee-bdea-188e65973ea5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapec1a7a25-a6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:46:36 compute-1 nova_compute[225855]: 2026-01-20 14:46:36.852 225859 DEBUG os_vif [None req-b4c774ae-1e22-4f05-b320-7417a7a22674 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:76:c2:c8,bridge_name='br-int',has_traffic_filtering=True,id=ec1a7a25-a60a-40c3-98bf-710c68019b24,network=Network(a19e9d1a-864f-41ee-bdea-188e65973ea5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapec1a7a25-a6') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 20 14:46:36 compute-1 nova_compute[225855]: 2026-01-20 14:46:36.854 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:46:36 compute-1 nova_compute[225855]: 2026-01-20 14:46:36.855 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapec1a7a25-a6, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:46:36 compute-1 nova_compute[225855]: 2026-01-20 14:46:36.910 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:46:36 compute-1 nova_compute[225855]: 2026-01-20 14:46:36.913 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 14:46:36 compute-1 nova_compute[225855]: 2026-01-20 14:46:36.916 225859 INFO os_vif [None req-b4c774ae-1e22-4f05-b320-7417a7a22674 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:76:c2:c8,bridge_name='br-int',has_traffic_filtering=True,id=ec1a7a25-a60a-40c3-98bf-710c68019b24,network=Network(a19e9d1a-864f-41ee-bdea-188e65973ea5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapec1a7a25-a6')
Jan 20 14:46:36 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/3432940160' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:46:36 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/4114297959' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:46:36 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e238 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:46:37 compute-1 nova_compute[225855]: 2026-01-20 14:46:37.260 225859 DEBUG nova.compute.manager [req-bbbf1ebc-b7a9-44c6-9394-23afd4eb75dd req-78a539c9-855c-4428-9882-c1be1da3bf04 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 5f56b3e9-af2b-4934-8184-6257994c6b6a] Received event network-vif-unplugged-ec1a7a25-a60a-40c3-98bf-710c68019b24 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:46:37 compute-1 nova_compute[225855]: 2026-01-20 14:46:37.261 225859 DEBUG oslo_concurrency.lockutils [req-bbbf1ebc-b7a9-44c6-9394-23afd4eb75dd req-78a539c9-855c-4428-9882-c1be1da3bf04 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "5f56b3e9-af2b-4934-8184-6257994c6b6a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:46:37 compute-1 nova_compute[225855]: 2026-01-20 14:46:37.262 225859 DEBUG oslo_concurrency.lockutils [req-bbbf1ebc-b7a9-44c6-9394-23afd4eb75dd req-78a539c9-855c-4428-9882-c1be1da3bf04 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "5f56b3e9-af2b-4934-8184-6257994c6b6a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:46:37 compute-1 nova_compute[225855]: 2026-01-20 14:46:37.262 225859 DEBUG oslo_concurrency.lockutils [req-bbbf1ebc-b7a9-44c6-9394-23afd4eb75dd req-78a539c9-855c-4428-9882-c1be1da3bf04 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "5f56b3e9-af2b-4934-8184-6257994c6b6a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:46:37 compute-1 nova_compute[225855]: 2026-01-20 14:46:37.263 225859 DEBUG nova.compute.manager [req-bbbf1ebc-b7a9-44c6-9394-23afd4eb75dd req-78a539c9-855c-4428-9882-c1be1da3bf04 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 5f56b3e9-af2b-4934-8184-6257994c6b6a] No waiting events found dispatching network-vif-unplugged-ec1a7a25-a60a-40c3-98bf-710c68019b24 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:46:37 compute-1 nova_compute[225855]: 2026-01-20 14:46:37.264 225859 WARNING nova.compute.manager [req-bbbf1ebc-b7a9-44c6-9394-23afd4eb75dd req-78a539c9-855c-4428-9882-c1be1da3bf04 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 5f56b3e9-af2b-4934-8184-6257994c6b6a] Received unexpected event network-vif-unplugged-ec1a7a25-a60a-40c3-98bf-710c68019b24 for instance with vm_state active and task_state rebuilding.
Jan 20 14:46:37 compute-1 nova_compute[225855]: 2026-01-20 14:46:37.264 225859 DEBUG nova.compute.manager [req-bbbf1ebc-b7a9-44c6-9394-23afd4eb75dd req-78a539c9-855c-4428-9882-c1be1da3bf04 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 5f56b3e9-af2b-4934-8184-6257994c6b6a] Received event network-vif-plugged-ec1a7a25-a60a-40c3-98bf-710c68019b24 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:46:37 compute-1 nova_compute[225855]: 2026-01-20 14:46:37.265 225859 DEBUG oslo_concurrency.lockutils [req-bbbf1ebc-b7a9-44c6-9394-23afd4eb75dd req-78a539c9-855c-4428-9882-c1be1da3bf04 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "5f56b3e9-af2b-4934-8184-6257994c6b6a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:46:37 compute-1 nova_compute[225855]: 2026-01-20 14:46:37.265 225859 DEBUG oslo_concurrency.lockutils [req-bbbf1ebc-b7a9-44c6-9394-23afd4eb75dd req-78a539c9-855c-4428-9882-c1be1da3bf04 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "5f56b3e9-af2b-4934-8184-6257994c6b6a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:46:37 compute-1 nova_compute[225855]: 2026-01-20 14:46:37.266 225859 DEBUG oslo_concurrency.lockutils [req-bbbf1ebc-b7a9-44c6-9394-23afd4eb75dd req-78a539c9-855c-4428-9882-c1be1da3bf04 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "5f56b3e9-af2b-4934-8184-6257994c6b6a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:46:37 compute-1 nova_compute[225855]: 2026-01-20 14:46:37.267 225859 DEBUG nova.compute.manager [req-bbbf1ebc-b7a9-44c6-9394-23afd4eb75dd req-78a539c9-855c-4428-9882-c1be1da3bf04 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 5f56b3e9-af2b-4934-8184-6257994c6b6a] No waiting events found dispatching network-vif-plugged-ec1a7a25-a60a-40c3-98bf-710c68019b24 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:46:37 compute-1 nova_compute[225855]: 2026-01-20 14:46:37.267 225859 WARNING nova.compute.manager [req-bbbf1ebc-b7a9-44c6-9394-23afd4eb75dd req-78a539c9-855c-4428-9882-c1be1da3bf04 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 5f56b3e9-af2b-4934-8184-6257994c6b6a] Received unexpected event network-vif-plugged-ec1a7a25-a60a-40c3-98bf-710c68019b24 for instance with vm_state active and task_state rebuilding.
Jan 20 14:46:37 compute-1 nova_compute[225855]: 2026-01-20 14:46:37.335 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:46:37 compute-1 nova_compute[225855]: 2026-01-20 14:46:37.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:46:37 compute-1 nova_compute[225855]: 2026-01-20 14:46:37.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:46:37 compute-1 nova_compute[225855]: 2026-01-20 14:46:37.340 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:46:37 compute-1 nova_compute[225855]: 2026-01-20 14:46:37.358 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:46:37 compute-1 nova_compute[225855]: 2026-01-20 14:46:37.359 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:46:37 compute-1 nova_compute[225855]: 2026-01-20 14:46:37.360 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:46:37 compute-1 nova_compute[225855]: 2026-01-20 14:46:37.360 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 14:46:37 compute-1 nova_compute[225855]: 2026-01-20 14:46:37.361 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:46:37 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:46:37 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:46:37 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:46:37.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:46:37 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:46:37 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4020167201' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:46:37 compute-1 nova_compute[225855]: 2026-01-20 14:46:37.817 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:46:37 compute-1 nova_compute[225855]: 2026-01-20 14:46:37.893 225859 DEBUG nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] skipping disk for instance-0000005f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 14:46:37 compute-1 nova_compute[225855]: 2026-01-20 14:46:37.894 225859 DEBUG nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] skipping disk for instance-0000005f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 14:46:37 compute-1 nova_compute[225855]: 2026-01-20 14:46:37.898 225859 DEBUG nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] skipping disk for instance-00000057 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 14:46:37 compute-1 nova_compute[225855]: 2026-01-20 14:46:37.898 225859 DEBUG nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] skipping disk for instance-00000057 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 14:46:38 compute-1 nova_compute[225855]: 2026-01-20 14:46:38.075 225859 WARNING nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 14:46:38 compute-1 nova_compute[225855]: 2026-01-20 14:46:38.077 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4319MB free_disk=20.76354217529297GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 14:46:38 compute-1 nova_compute[225855]: 2026-01-20 14:46:38.078 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:46:38 compute-1 nova_compute[225855]: 2026-01-20 14:46:38.078 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:46:38 compute-1 nova_compute[225855]: 2026-01-20 14:46:38.145 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Instance 6586bc3e-3a94-4d22-8e8c-713a86a956fb actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 20 14:46:38 compute-1 nova_compute[225855]: 2026-01-20 14:46:38.145 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Instance 5f56b3e9-af2b-4934-8184-6257994c6b6a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 20 14:46:38 compute-1 nova_compute[225855]: 2026-01-20 14:46:38.146 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 14:46:38 compute-1 nova_compute[225855]: 2026-01-20 14:46:38.146 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 14:46:38 compute-1 nova_compute[225855]: 2026-01-20 14:46:38.221 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:46:38 compute-1 ceph-mon[81775]: pgmap v1753: 321 pgs: 321 active+clean; 424 MiB data, 929 MiB used, 20 GiB / 21 GiB avail; 4.5 MiB/s rd, 4.0 MiB/s wr, 260 op/s
Jan 20 14:46:38 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/4020167201' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:46:38 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:46:38 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:46:38 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:46:38.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:46:38 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:46:38 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/104493319' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:46:38 compute-1 nova_compute[225855]: 2026-01-20 14:46:38.659 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:46:38 compute-1 nova_compute[225855]: 2026-01-20 14:46:38.666 225859 DEBUG nova.compute.provider_tree [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:46:38 compute-1 nova_compute[225855]: 2026-01-20 14:46:38.685 225859 DEBUG nova.scheduler.client.report [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:46:38 compute-1 nova_compute[225855]: 2026-01-20 14:46:38.711 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 14:46:38 compute-1 nova_compute[225855]: 2026-01-20 14:46:38.712 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.633s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:46:39 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:46:39 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:46:39 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:46:39.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:46:39 compute-1 ceph-mon[81775]: pgmap v1754: 321 pgs: 321 active+clean; 434 MiB data, 950 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 4.3 MiB/s wr, 219 op/s
Jan 20 14:46:39 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/104493319' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:46:40 compute-1 podman[260958]: 2026-01-20 14:46:40.008634594 +0000 UTC m=+0.060451353 container health_status 533cd8e0c3b8b7f910f15cef49e7f879ab46c12ce646a35628c15f13725eb500 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Jan 20 14:46:40 compute-1 nova_compute[225855]: 2026-01-20 14:46:40.025 225859 INFO nova.virt.libvirt.driver [None req-b4c774ae-1e22-4f05-b320-7417a7a22674 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] [instance: 5f56b3e9-af2b-4934-8184-6257994c6b6a] Deleting instance files /var/lib/nova/instances/5f56b3e9-af2b-4934-8184-6257994c6b6a_del
Jan 20 14:46:40 compute-1 nova_compute[225855]: 2026-01-20 14:46:40.026 225859 INFO nova.virt.libvirt.driver [None req-b4c774ae-1e22-4f05-b320-7417a7a22674 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] [instance: 5f56b3e9-af2b-4934-8184-6257994c6b6a] Deletion of /var/lib/nova/instances/5f56b3e9-af2b-4934-8184-6257994c6b6a_del complete
Jan 20 14:46:40 compute-1 nova_compute[225855]: 2026-01-20 14:46:40.227 225859 INFO nova.virt.block_device [None req-b4c774ae-1e22-4f05-b320-7417a7a22674 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] [instance: 5f56b3e9-af2b-4934-8184-6257994c6b6a] Booting with volume aab03340-0e79-412e-a963-e216832603c4 at /dev/vdb
Jan 20 14:46:40 compute-1 nova_compute[225855]: 2026-01-20 14:46:40.392 225859 DEBUG os_brick.utils [None req-b4c774ae-1e22-4f05-b320-7417a7a22674 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.101', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-1.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Jan 20 14:46:40 compute-1 nova_compute[225855]: 2026-01-20 14:46:40.394 231081 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:46:40 compute-1 nova_compute[225855]: 2026-01-20 14:46:40.411 231081 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.017s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:46:40 compute-1 nova_compute[225855]: 2026-01-20 14:46:40.411 231081 DEBUG oslo.privsep.daemon [-] privsep: reply[916320ea-6e43-4f02-937d-b6893d058bd9]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:46:40 compute-1 nova_compute[225855]: 2026-01-20 14:46:40.413 231081 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:46:40 compute-1 nova_compute[225855]: 2026-01-20 14:46:40.427 231081 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.013s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:46:40 compute-1 nova_compute[225855]: 2026-01-20 14:46:40.427 231081 DEBUG oslo.privsep.daemon [-] privsep: reply[7654e334-f246-4e08-848c-86dbdd302318]: (4, ('InitiatorName=iqn.1994-05.com.redhat:1821ea3dc03d', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:46:40 compute-1 nova_compute[225855]: 2026-01-20 14:46:40.429 231081 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:46:40 compute-1 nova_compute[225855]: 2026-01-20 14:46:40.442 231081 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.013s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:46:40 compute-1 nova_compute[225855]: 2026-01-20 14:46:40.443 231081 DEBUG oslo.privsep.daemon [-] privsep: reply[46a7fa98-095f-4686-a9ba-840a3a13a4a7]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:46:40 compute-1 nova_compute[225855]: 2026-01-20 14:46:40.445 231081 DEBUG oslo.privsep.daemon [-] privsep: reply[8b409b59-249d-44ef-8cbf-2b2cf9827abd]: (4, '870b1f1c-f19c-477b-b282-ee6eeba50974') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:46:40 compute-1 nova_compute[225855]: 2026-01-20 14:46:40.446 225859 DEBUG oslo_concurrency.processutils [None req-b4c774ae-1e22-4f05-b320-7417a7a22674 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:46:40 compute-1 nova_compute[225855]: 2026-01-20 14:46:40.486 225859 DEBUG oslo_concurrency.processutils [None req-b4c774ae-1e22-4f05-b320-7417a7a22674 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] CMD "nvme version" returned: 0 in 0.040s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:46:40 compute-1 nova_compute[225855]: 2026-01-20 14:46:40.491 225859 DEBUG os_brick.initiator.connectors.lightos [None req-b4c774ae-1e22-4f05-b320-7417a7a22674 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Jan 20 14:46:40 compute-1 nova_compute[225855]: 2026-01-20 14:46:40.491 225859 DEBUG os_brick.initiator.connectors.lightos [None req-b4c774ae-1e22-4f05-b320-7417a7a22674 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Jan 20 14:46:40 compute-1 nova_compute[225855]: 2026-01-20 14:46:40.492 225859 DEBUG os_brick.initiator.connectors.lightos [None req-b4c774ae-1e22-4f05-b320-7417a7a22674 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:5350774e-8b5e-4dba-80a9-92d405981c1d dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Jan 20 14:46:40 compute-1 nova_compute[225855]: 2026-01-20 14:46:40.493 225859 DEBUG os_brick.utils [None req-b4c774ae-1e22-4f05-b320-7417a7a22674 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] <== get_connector_properties: return (99ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.101', 'host': 'compute-1.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:1821ea3dc03d', 'do_local_attach': False, 'nvme_hostid': '5350774e-8b5e-4dba-80a9-92d405981c1d', 'system uuid': '870b1f1c-f19c-477b-b282-ee6eeba50974', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:5350774e-8b5e-4dba-80a9-92d405981c1d', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Jan 20 14:46:40 compute-1 nova_compute[225855]: 2026-01-20 14:46:40.495 225859 DEBUG nova.virt.block_device [None req-b4c774ae-1e22-4f05-b320-7417a7a22674 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] [instance: 5f56b3e9-af2b-4934-8184-6257994c6b6a] Updating existing volume attachment record: c9861cd5-6ebe-4ef9-bc30-869a54cc88ab _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Jan 20 14:46:40 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:46:40 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:46:40 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:46:40.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:46:40 compute-1 nova_compute[225855]: 2026-01-20 14:46:40.627 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:46:41 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:46:41 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:46:41 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:46:41.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:46:41 compute-1 nova_compute[225855]: 2026-01-20 14:46:41.772 225859 DEBUG nova.virt.libvirt.driver [None req-b4c774ae-1e22-4f05-b320-7417a7a22674 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] [instance: 5f56b3e9-af2b-4934-8184-6257994c6b6a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 20 14:46:41 compute-1 nova_compute[225855]: 2026-01-20 14:46:41.773 225859 INFO nova.virt.libvirt.driver [None req-b4c774ae-1e22-4f05-b320-7417a7a22674 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] [instance: 5f56b3e9-af2b-4934-8184-6257994c6b6a] Creating image(s)
Jan 20 14:46:41 compute-1 ceph-mon[81775]: pgmap v1755: 321 pgs: 321 active+clean; 414 MiB data, 930 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 4.3 MiB/s wr, 213 op/s
Jan 20 14:46:41 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/1928440784' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:46:41 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/2051009862' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:46:41 compute-1 nova_compute[225855]: 2026-01-20 14:46:41.819 225859 DEBUG nova.storage.rbd_utils [None req-b4c774ae-1e22-4f05-b320-7417a7a22674 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] rbd image 5f56b3e9-af2b-4934-8184-6257994c6b6a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:46:41 compute-1 nova_compute[225855]: 2026-01-20 14:46:41.864 225859 DEBUG nova.storage.rbd_utils [None req-b4c774ae-1e22-4f05-b320-7417a7a22674 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] rbd image 5f56b3e9-af2b-4934-8184-6257994c6b6a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:46:41 compute-1 nova_compute[225855]: 2026-01-20 14:46:41.903 225859 DEBUG nova.storage.rbd_utils [None req-b4c774ae-1e22-4f05-b320-7417a7a22674 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] rbd image 5f56b3e9-af2b-4934-8184-6257994c6b6a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:46:41 compute-1 nova_compute[225855]: 2026-01-20 14:46:41.908 225859 DEBUG oslo_concurrency.processutils [None req-b4c774ae-1e22-4f05-b320-7417a7a22674 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a4ed0d2b98aa460c005e878d78a49ccb6f511f7c --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:46:41 compute-1 nova_compute[225855]: 2026-01-20 14:46:41.941 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:46:41 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e238 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:46:41 compute-1 nova_compute[225855]: 2026-01-20 14:46:41.996 225859 DEBUG oslo_concurrency.processutils [None req-b4c774ae-1e22-4f05-b320-7417a7a22674 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a4ed0d2b98aa460c005e878d78a49ccb6f511f7c --force-share --output=json" returned: 0 in 0.088s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:46:41 compute-1 nova_compute[225855]: 2026-01-20 14:46:41.997 225859 DEBUG oslo_concurrency.lockutils [None req-b4c774ae-1e22-4f05-b320-7417a7a22674 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Acquiring lock "a4ed0d2b98aa460c005e878d78a49ccb6f511f7c" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:46:41 compute-1 nova_compute[225855]: 2026-01-20 14:46:41.997 225859 DEBUG oslo_concurrency.lockutils [None req-b4c774ae-1e22-4f05-b320-7417a7a22674 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Lock "a4ed0d2b98aa460c005e878d78a49ccb6f511f7c" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:46:41 compute-1 nova_compute[225855]: 2026-01-20 14:46:41.998 225859 DEBUG oslo_concurrency.lockutils [None req-b4c774ae-1e22-4f05-b320-7417a7a22674 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Lock "a4ed0d2b98aa460c005e878d78a49ccb6f511f7c" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:46:42 compute-1 nova_compute[225855]: 2026-01-20 14:46:42.021 225859 DEBUG nova.storage.rbd_utils [None req-b4c774ae-1e22-4f05-b320-7417a7a22674 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] rbd image 5f56b3e9-af2b-4934-8184-6257994c6b6a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:46:42 compute-1 nova_compute[225855]: 2026-01-20 14:46:42.024 225859 DEBUG oslo_concurrency.processutils [None req-b4c774ae-1e22-4f05-b320-7417a7a22674 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a4ed0d2b98aa460c005e878d78a49ccb6f511f7c 5f56b3e9-af2b-4934-8184-6257994c6b6a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:46:42 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:46:42 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:46:42 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:46:42.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:46:42 compute-1 nova_compute[225855]: 2026-01-20 14:46:42.902 225859 DEBUG oslo_concurrency.processutils [None req-b4c774ae-1e22-4f05-b320-7417a7a22674 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a4ed0d2b98aa460c005e878d78a49ccb6f511f7c 5f56b3e9-af2b-4934-8184-6257994c6b6a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.878s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:46:42 compute-1 nova_compute[225855]: 2026-01-20 14:46:42.971 225859 DEBUG nova.storage.rbd_utils [None req-b4c774ae-1e22-4f05-b320-7417a7a22674 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] resizing rbd image 5f56b3e9-af2b-4934-8184-6257994c6b6a_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 20 14:46:42 compute-1 ceph-mon[81775]: pgmap v1756: 321 pgs: 321 active+clean; 348 MiB data, 887 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 4.3 MiB/s wr, 240 op/s
Jan 20 14:46:43 compute-1 nova_compute[225855]: 2026-01-20 14:46:43.170 225859 DEBUG nova.virt.libvirt.driver [None req-b4c774ae-1e22-4f05-b320-7417a7a22674 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] [instance: 5f56b3e9-af2b-4934-8184-6257994c6b6a] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 20 14:46:43 compute-1 nova_compute[225855]: 2026-01-20 14:46:43.171 225859 DEBUG nova.virt.libvirt.driver [None req-b4c774ae-1e22-4f05-b320-7417a7a22674 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] [instance: 5f56b3e9-af2b-4934-8184-6257994c6b6a] Ensure instance console log exists: /var/lib/nova/instances/5f56b3e9-af2b-4934-8184-6257994c6b6a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 20 14:46:43 compute-1 nova_compute[225855]: 2026-01-20 14:46:43.172 225859 DEBUG oslo_concurrency.lockutils [None req-b4c774ae-1e22-4f05-b320-7417a7a22674 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:46:43 compute-1 nova_compute[225855]: 2026-01-20 14:46:43.172 225859 DEBUG oslo_concurrency.lockutils [None req-b4c774ae-1e22-4f05-b320-7417a7a22674 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:46:43 compute-1 nova_compute[225855]: 2026-01-20 14:46:43.173 225859 DEBUG oslo_concurrency.lockutils [None req-b4c774ae-1e22-4f05-b320-7417a7a22674 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:46:43 compute-1 nova_compute[225855]: 2026-01-20 14:46:43.176 225859 DEBUG nova.virt.libvirt.driver [None req-b4c774ae-1e22-4f05-b320-7417a7a22674 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] [instance: 5f56b3e9-af2b-4934-8184-6257994c6b6a] Start _get_guest_xml network_info=[{"id": "ec1a7a25-a60a-40c3-98bf-710c68019b24", "address": "fa:16:3e:76:c2:c8", "network": {"id": "a19e9d1a-864f-41ee-bdea-188e65973ea5", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-916311998-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b683fcc0026242e28ba6d8fba638688e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec1a7a25-a6", "ovs_interfaceid": "ec1a7a25-a60a-40c3-98bf-710c68019b24", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vdb': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:22:02Z,direct_url=<?>,disk_format='qcow2',id=26699514-f465-4b50-98b7-36f2cfc6a308,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:04Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'device_type': 'disk', 'encryption_options': None, 'size': 0, 'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'guest_format': None, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'image_id': 'a32b3e07-16d8-46fd-9a7b-c242c432fcf9'}], 'ephemerals': [], 'block_device_mapping': [{'delete_on_termination': False, 'device_type': 'disk', 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-aab03340-0e79-412e-a963-e216832603c4', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'aab03340-0e79-412e-a963-e216832603c4', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '5f56b3e9-af2b-4934-8184-6257994c6b6a', 'attached_at': '', 'detached_at': '', 'volume_id': 'aab03340-0e79-412e-a963-e216832603c4', 'serial': 'aab03340-0e79-412e-a963-e216832603c4'}, 'guest_format': None, 'boot_index': None, 'mount_device': '/dev/vdb', 'attachment_id': 'c9861cd5-6ebe-4ef9-bc30-869a54cc88ab', 'disk_bus': 'virtio', 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 20 14:46:43 compute-1 nova_compute[225855]: 2026-01-20 14:46:43.180 225859 WARNING nova.virt.libvirt.driver [None req-b4c774ae-1e22-4f05-b320-7417a7a22674 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError
Jan 20 14:46:43 compute-1 nova_compute[225855]: 2026-01-20 14:46:43.185 225859 DEBUG nova.virt.libvirt.host [None req-b4c774ae-1e22-4f05-b320-7417a7a22674 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 20 14:46:43 compute-1 nova_compute[225855]: 2026-01-20 14:46:43.185 225859 DEBUG nova.virt.libvirt.host [None req-b4c774ae-1e22-4f05-b320-7417a7a22674 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 20 14:46:43 compute-1 nova_compute[225855]: 2026-01-20 14:46:43.188 225859 DEBUG nova.virt.libvirt.host [None req-b4c774ae-1e22-4f05-b320-7417a7a22674 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 20 14:46:43 compute-1 nova_compute[225855]: 2026-01-20 14:46:43.188 225859 DEBUG nova.virt.libvirt.host [None req-b4c774ae-1e22-4f05-b320-7417a7a22674 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 20 14:46:43 compute-1 nova_compute[225855]: 2026-01-20 14:46:43.190 225859 DEBUG nova.virt.libvirt.driver [None req-b4c774ae-1e22-4f05-b320-7417a7a22674 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 20 14:46:43 compute-1 nova_compute[225855]: 2026-01-20 14:46:43.190 225859 DEBUG nova.virt.hardware [None req-b4c774ae-1e22-4f05-b320-7417a7a22674 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-20T14:21:55Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='522deaab-a741-4dbb-932d-d8b13a211c33',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:22:02Z,direct_url=<?>,disk_format='qcow2',id=26699514-f465-4b50-98b7-36f2cfc6a308,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:04Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 20 14:46:43 compute-1 nova_compute[225855]: 2026-01-20 14:46:43.190 225859 DEBUG nova.virt.hardware [None req-b4c774ae-1e22-4f05-b320-7417a7a22674 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 20 14:46:43 compute-1 nova_compute[225855]: 2026-01-20 14:46:43.191 225859 DEBUG nova.virt.hardware [None req-b4c774ae-1e22-4f05-b320-7417a7a22674 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 20 14:46:43 compute-1 nova_compute[225855]: 2026-01-20 14:46:43.191 225859 DEBUG nova.virt.hardware [None req-b4c774ae-1e22-4f05-b320-7417a7a22674 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 20 14:46:43 compute-1 nova_compute[225855]: 2026-01-20 14:46:43.191 225859 DEBUG nova.virt.hardware [None req-b4c774ae-1e22-4f05-b320-7417a7a22674 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 20 14:46:43 compute-1 nova_compute[225855]: 2026-01-20 14:46:43.192 225859 DEBUG nova.virt.hardware [None req-b4c774ae-1e22-4f05-b320-7417a7a22674 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 20 14:46:43 compute-1 nova_compute[225855]: 2026-01-20 14:46:43.192 225859 DEBUG nova.virt.hardware [None req-b4c774ae-1e22-4f05-b320-7417a7a22674 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 20 14:46:43 compute-1 nova_compute[225855]: 2026-01-20 14:46:43.192 225859 DEBUG nova.virt.hardware [None req-b4c774ae-1e22-4f05-b320-7417a7a22674 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 20 14:46:43 compute-1 nova_compute[225855]: 2026-01-20 14:46:43.192 225859 DEBUG nova.virt.hardware [None req-b4c774ae-1e22-4f05-b320-7417a7a22674 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 20 14:46:43 compute-1 nova_compute[225855]: 2026-01-20 14:46:43.193 225859 DEBUG nova.virt.hardware [None req-b4c774ae-1e22-4f05-b320-7417a7a22674 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 20 14:46:43 compute-1 nova_compute[225855]: 2026-01-20 14:46:43.193 225859 DEBUG nova.virt.hardware [None req-b4c774ae-1e22-4f05-b320-7417a7a22674 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 20 14:46:43 compute-1 nova_compute[225855]: 2026-01-20 14:46:43.193 225859 DEBUG nova.objects.instance [None req-b4c774ae-1e22-4f05-b320-7417a7a22674 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Lazy-loading 'vcpu_model' on Instance uuid 5f56b3e9-af2b-4934-8184-6257994c6b6a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:46:43 compute-1 nova_compute[225855]: 2026-01-20 14:46:43.217 225859 DEBUG oslo_concurrency.processutils [None req-b4c774ae-1e22-4f05-b320-7417a7a22674 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:46:43 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:46:43 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:46:43 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:46:43.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:46:43 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:46:43 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2699107769' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:46:43 compute-1 nova_compute[225855]: 2026-01-20 14:46:43.685 225859 DEBUG oslo_concurrency.processutils [None req-b4c774ae-1e22-4f05-b320-7417a7a22674 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:46:43 compute-1 nova_compute[225855]: 2026-01-20 14:46:43.709 225859 DEBUG nova.storage.rbd_utils [None req-b4c774ae-1e22-4f05-b320-7417a7a22674 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] rbd image 5f56b3e9-af2b-4934-8184-6257994c6b6a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:46:43 compute-1 nova_compute[225855]: 2026-01-20 14:46:43.713 225859 DEBUG oslo_concurrency.processutils [None req-b4c774ae-1e22-4f05-b320-7417a7a22674 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:46:44 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:46:44 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3551339405' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:46:44 compute-1 nova_compute[225855]: 2026-01-20 14:46:44.118 225859 DEBUG oslo_concurrency.processutils [None req-b4c774ae-1e22-4f05-b320-7417a7a22674 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.404s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:46:44 compute-1 nova_compute[225855]: 2026-01-20 14:46:44.138 225859 DEBUG nova.virt.libvirt.vif [None req-b4c774ae-1e22-4f05-b320-7417a7a22674 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-20T14:45:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1159529074',display_name='tempest-ServerActionsTestOtherA-server-1358184153',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1159529074',id=95,image_ref='26699514-f465-4b50-98b7-36f2cfc6a308',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPQdfZ1kte1uwABu8Yt6GDBCbfLz/7DDRluLfcLvQPu0p1+5pmkgwEfXHuElB+zR6CnPfotcAwIhafKnCBnLfMP+a/6KQqDXsYrSlHbKZFIppFb1eVRM7RProDBMZxNI4Q==',key_name='tempest-keypair-1773413892',keypairs=<?>,launch_index=0,launched_at=2026-01-20T14:45:51Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={rebuild='server'},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='b683fcc0026242e28ba6d8fba638688e',ramdisk_id='',reservation_id='r-0vpr4wvx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='26699514-f465-4b50-98b7-36f2cfc6a308',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherA-967087071',owner_user_name='tempest-ServerActionsTestOtherA-967087071-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T14:46:40Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='869086208e10436c9dc96c78bee9a85d',uuid=5f56b3e9-af2b-4934-8184-6257994c6b6a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ec1a7a25-a60a-40c3-98bf-710c68019b24", "address": "fa:16:3e:76:c2:c8", "network": {"id": "a19e9d1a-864f-41ee-bdea-188e65973ea5", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-916311998-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": 
[{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b683fcc0026242e28ba6d8fba638688e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec1a7a25-a6", "ovs_interfaceid": "ec1a7a25-a60a-40c3-98bf-710c68019b24", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 20 14:46:44 compute-1 nova_compute[225855]: 2026-01-20 14:46:44.139 225859 DEBUG nova.network.os_vif_util [None req-b4c774ae-1e22-4f05-b320-7417a7a22674 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Converting VIF {"id": "ec1a7a25-a60a-40c3-98bf-710c68019b24", "address": "fa:16:3e:76:c2:c8", "network": {"id": "a19e9d1a-864f-41ee-bdea-188e65973ea5", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-916311998-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b683fcc0026242e28ba6d8fba638688e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec1a7a25-a6", "ovs_interfaceid": "ec1a7a25-a60a-40c3-98bf-710c68019b24", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:46:44 compute-1 nova_compute[225855]: 2026-01-20 14:46:44.139 225859 DEBUG nova.network.os_vif_util [None req-b4c774ae-1e22-4f05-b320-7417a7a22674 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:76:c2:c8,bridge_name='br-int',has_traffic_filtering=True,id=ec1a7a25-a60a-40c3-98bf-710c68019b24,network=Network(a19e9d1a-864f-41ee-bdea-188e65973ea5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapec1a7a25-a6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:46:44 compute-1 nova_compute[225855]: 2026-01-20 14:46:44.142 225859 DEBUG nova.virt.libvirt.driver [None req-b4c774ae-1e22-4f05-b320-7417a7a22674 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] [instance: 5f56b3e9-af2b-4934-8184-6257994c6b6a] End _get_guest_xml xml=<domain type="kvm">
Jan 20 14:46:44 compute-1 nova_compute[225855]:   <uuid>5f56b3e9-af2b-4934-8184-6257994c6b6a</uuid>
Jan 20 14:46:44 compute-1 nova_compute[225855]:   <name>instance-0000005f</name>
Jan 20 14:46:44 compute-1 nova_compute[225855]:   <memory>131072</memory>
Jan 20 14:46:44 compute-1 nova_compute[225855]:   <vcpu>1</vcpu>
Jan 20 14:46:44 compute-1 nova_compute[225855]:   <metadata>
Jan 20 14:46:44 compute-1 nova_compute[225855]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 14:46:44 compute-1 nova_compute[225855]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 14:46:44 compute-1 nova_compute[225855]:       <nova:name>tempest-ServerActionsTestOtherA-server-1358184153</nova:name>
Jan 20 14:46:44 compute-1 nova_compute[225855]:       <nova:creationTime>2026-01-20 14:46:43</nova:creationTime>
Jan 20 14:46:44 compute-1 nova_compute[225855]:       <nova:flavor name="m1.nano">
Jan 20 14:46:44 compute-1 nova_compute[225855]:         <nova:memory>128</nova:memory>
Jan 20 14:46:44 compute-1 nova_compute[225855]:         <nova:disk>1</nova:disk>
Jan 20 14:46:44 compute-1 nova_compute[225855]:         <nova:swap>0</nova:swap>
Jan 20 14:46:44 compute-1 nova_compute[225855]:         <nova:ephemeral>0</nova:ephemeral>
Jan 20 14:46:44 compute-1 nova_compute[225855]:         <nova:vcpus>1</nova:vcpus>
Jan 20 14:46:44 compute-1 nova_compute[225855]:       </nova:flavor>
Jan 20 14:46:44 compute-1 nova_compute[225855]:       <nova:owner>
Jan 20 14:46:44 compute-1 nova_compute[225855]:         <nova:user uuid="869086208e10436c9dc96c78bee9a85d">tempest-ServerActionsTestOtherA-967087071-project-member</nova:user>
Jan 20 14:46:44 compute-1 nova_compute[225855]:         <nova:project uuid="b683fcc0026242e28ba6d8fba638688e">tempest-ServerActionsTestOtherA-967087071</nova:project>
Jan 20 14:46:44 compute-1 nova_compute[225855]:       </nova:owner>
Jan 20 14:46:44 compute-1 nova_compute[225855]:       <nova:root type="image" uuid="26699514-f465-4b50-98b7-36f2cfc6a308"/>
Jan 20 14:46:44 compute-1 nova_compute[225855]:       <nova:ports>
Jan 20 14:46:44 compute-1 nova_compute[225855]:         <nova:port uuid="ec1a7a25-a60a-40c3-98bf-710c68019b24">
Jan 20 14:46:44 compute-1 nova_compute[225855]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Jan 20 14:46:44 compute-1 nova_compute[225855]:         </nova:port>
Jan 20 14:46:44 compute-1 nova_compute[225855]:       </nova:ports>
Jan 20 14:46:44 compute-1 nova_compute[225855]:     </nova:instance>
Jan 20 14:46:44 compute-1 nova_compute[225855]:   </metadata>
Jan 20 14:46:44 compute-1 nova_compute[225855]:   <sysinfo type="smbios">
Jan 20 14:46:44 compute-1 nova_compute[225855]:     <system>
Jan 20 14:46:44 compute-1 nova_compute[225855]:       <entry name="manufacturer">RDO</entry>
Jan 20 14:46:44 compute-1 nova_compute[225855]:       <entry name="product">OpenStack Compute</entry>
Jan 20 14:46:44 compute-1 nova_compute[225855]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 14:46:44 compute-1 nova_compute[225855]:       <entry name="serial">5f56b3e9-af2b-4934-8184-6257994c6b6a</entry>
Jan 20 14:46:44 compute-1 nova_compute[225855]:       <entry name="uuid">5f56b3e9-af2b-4934-8184-6257994c6b6a</entry>
Jan 20 14:46:44 compute-1 nova_compute[225855]:       <entry name="family">Virtual Machine</entry>
Jan 20 14:46:44 compute-1 nova_compute[225855]:     </system>
Jan 20 14:46:44 compute-1 nova_compute[225855]:   </sysinfo>
Jan 20 14:46:44 compute-1 nova_compute[225855]:   <os>
Jan 20 14:46:44 compute-1 nova_compute[225855]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 20 14:46:44 compute-1 nova_compute[225855]:     <boot dev="hd"/>
Jan 20 14:46:44 compute-1 nova_compute[225855]:     <smbios mode="sysinfo"/>
Jan 20 14:46:44 compute-1 nova_compute[225855]:   </os>
Jan 20 14:46:44 compute-1 nova_compute[225855]:   <features>
Jan 20 14:46:44 compute-1 nova_compute[225855]:     <acpi/>
Jan 20 14:46:44 compute-1 nova_compute[225855]:     <apic/>
Jan 20 14:46:44 compute-1 nova_compute[225855]:     <vmcoreinfo/>
Jan 20 14:46:44 compute-1 nova_compute[225855]:   </features>
Jan 20 14:46:44 compute-1 nova_compute[225855]:   <clock offset="utc">
Jan 20 14:46:44 compute-1 nova_compute[225855]:     <timer name="pit" tickpolicy="delay"/>
Jan 20 14:46:44 compute-1 nova_compute[225855]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 20 14:46:44 compute-1 nova_compute[225855]:     <timer name="hpet" present="no"/>
Jan 20 14:46:44 compute-1 nova_compute[225855]:   </clock>
Jan 20 14:46:44 compute-1 nova_compute[225855]:   <cpu mode="custom" match="exact">
Jan 20 14:46:44 compute-1 nova_compute[225855]:     <model>Nehalem</model>
Jan 20 14:46:44 compute-1 nova_compute[225855]:     <topology sockets="1" cores="1" threads="1"/>
Jan 20 14:46:44 compute-1 nova_compute[225855]:   </cpu>
Jan 20 14:46:44 compute-1 nova_compute[225855]:   <devices>
Jan 20 14:46:44 compute-1 nova_compute[225855]:     <disk type="network" device="disk">
Jan 20 14:46:44 compute-1 nova_compute[225855]:       <driver type="raw" cache="none"/>
Jan 20 14:46:44 compute-1 nova_compute[225855]:       <source protocol="rbd" name="vms/5f56b3e9-af2b-4934-8184-6257994c6b6a_disk">
Jan 20 14:46:44 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:46:44 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:46:44 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:46:44 compute-1 nova_compute[225855]:       </source>
Jan 20 14:46:44 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 14:46:44 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:46:44 compute-1 nova_compute[225855]:       </auth>
Jan 20 14:46:44 compute-1 nova_compute[225855]:       <target dev="vda" bus="virtio"/>
Jan 20 14:46:44 compute-1 nova_compute[225855]:     </disk>
Jan 20 14:46:44 compute-1 nova_compute[225855]:     <disk type="network" device="cdrom">
Jan 20 14:46:44 compute-1 nova_compute[225855]:       <driver type="raw" cache="none"/>
Jan 20 14:46:44 compute-1 nova_compute[225855]:       <source protocol="rbd" name="vms/5f56b3e9-af2b-4934-8184-6257994c6b6a_disk.config">
Jan 20 14:46:44 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:46:44 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:46:44 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:46:44 compute-1 nova_compute[225855]:       </source>
Jan 20 14:46:44 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 14:46:44 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:46:44 compute-1 nova_compute[225855]:       </auth>
Jan 20 14:46:44 compute-1 nova_compute[225855]:       <target dev="sda" bus="sata"/>
Jan 20 14:46:44 compute-1 nova_compute[225855]:     </disk>
Jan 20 14:46:44 compute-1 nova_compute[225855]:     <disk type="network" device="disk">
Jan 20 14:46:44 compute-1 nova_compute[225855]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 20 14:46:44 compute-1 nova_compute[225855]:       <source protocol="rbd" name="volumes/volume-aab03340-0e79-412e-a963-e216832603c4">
Jan 20 14:46:44 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:46:44 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:46:44 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:46:44 compute-1 nova_compute[225855]:       </source>
Jan 20 14:46:44 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 14:46:44 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:46:44 compute-1 nova_compute[225855]:       </auth>
Jan 20 14:46:44 compute-1 nova_compute[225855]:       <target dev="vdb" bus="virtio"/>
Jan 20 14:46:44 compute-1 nova_compute[225855]:       <serial>aab03340-0e79-412e-a963-e216832603c4</serial>
Jan 20 14:46:44 compute-1 nova_compute[225855]:     </disk>
Jan 20 14:46:44 compute-1 nova_compute[225855]:     <interface type="ethernet">
Jan 20 14:46:44 compute-1 nova_compute[225855]:       <mac address="fa:16:3e:76:c2:c8"/>
Jan 20 14:46:44 compute-1 nova_compute[225855]:       <model type="virtio"/>
Jan 20 14:46:44 compute-1 nova_compute[225855]:       <driver name="vhost" rx_queue_size="512"/>
Jan 20 14:46:44 compute-1 nova_compute[225855]:       <mtu size="1442"/>
Jan 20 14:46:44 compute-1 nova_compute[225855]:       <target dev="tapec1a7a25-a6"/>
Jan 20 14:46:44 compute-1 nova_compute[225855]:     </interface>
Jan 20 14:46:44 compute-1 nova_compute[225855]:     <serial type="pty">
Jan 20 14:46:44 compute-1 nova_compute[225855]:       <log file="/var/lib/nova/instances/5f56b3e9-af2b-4934-8184-6257994c6b6a/console.log" append="off"/>
Jan 20 14:46:44 compute-1 nova_compute[225855]:     </serial>
Jan 20 14:46:44 compute-1 nova_compute[225855]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 20 14:46:44 compute-1 nova_compute[225855]:     <video>
Jan 20 14:46:44 compute-1 nova_compute[225855]:       <model type="virtio"/>
Jan 20 14:46:44 compute-1 nova_compute[225855]:     </video>
Jan 20 14:46:44 compute-1 nova_compute[225855]:     <input type="tablet" bus="usb"/>
Jan 20 14:46:44 compute-1 nova_compute[225855]:     <rng model="virtio">
Jan 20 14:46:44 compute-1 nova_compute[225855]:       <backend model="random">/dev/urandom</backend>
Jan 20 14:46:44 compute-1 nova_compute[225855]:     </rng>
Jan 20 14:46:44 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root"/>
Jan 20 14:46:44 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:46:44 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:46:44 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:46:44 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:46:44 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:46:44 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:46:44 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:46:44 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:46:44 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:46:44 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:46:44 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:46:44 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:46:44 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:46:44 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:46:44 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:46:44 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:46:44 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:46:44 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:46:44 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:46:44 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:46:44 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:46:44 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:46:44 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:46:44 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:46:44 compute-1 nova_compute[225855]:     <controller type="usb" index="0"/>
Jan 20 14:46:44 compute-1 nova_compute[225855]:     <memballoon model="virtio">
Jan 20 14:46:44 compute-1 nova_compute[225855]:       <stats period="10"/>
Jan 20 14:46:44 compute-1 nova_compute[225855]:     </memballoon>
Jan 20 14:46:44 compute-1 nova_compute[225855]:   </devices>
Jan 20 14:46:44 compute-1 nova_compute[225855]: </domain>
Jan 20 14:46:44 compute-1 nova_compute[225855]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 20 14:46:44 compute-1 nova_compute[225855]: 2026-01-20 14:46:44.144 225859 DEBUG nova.virt.libvirt.vif [None req-b4c774ae-1e22-4f05-b320-7417a7a22674 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-20T14:45:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1159529074',display_name='tempest-ServerActionsTestOtherA-server-1358184153',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1159529074',id=95,image_ref='26699514-f465-4b50-98b7-36f2cfc6a308',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPQdfZ1kte1uwABu8Yt6GDBCbfLz/7DDRluLfcLvQPu0p1+5pmkgwEfXHuElB+zR6CnPfotcAwIhafKnCBnLfMP+a/6KQqDXsYrSlHbKZFIppFb1eVRM7RProDBMZxNI4Q==',key_name='tempest-keypair-1773413892',keypairs=<?>,launch_index=0,launched_at=2026-01-20T14:45:51Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={rebuild='server'},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='b683fcc0026242e28ba6d8fba638688e',ramdisk_id='',reservation_id='r-0vpr4wvx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='26699514-f465-4b50-98b7-36f2cfc6a308',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherA-967087071',owner_user_name='tempest-ServerActionsTestOtherA-967087071-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T14:46:40Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='869086208e10436c9dc96c78bee9a85d',uuid=5f56b3e9-af2b-4934-8184-6257994c6b6a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ec1a7a25-a60a-40c3-98bf-710c68019b24", "address": "fa:16:3e:76:c2:c8", "network": {"id": "a19e9d1a-864f-41ee-bdea-188e65973ea5", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-916311998-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": 
[{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b683fcc0026242e28ba6d8fba638688e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec1a7a25-a6", "ovs_interfaceid": "ec1a7a25-a60a-40c3-98bf-710c68019b24", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 20 14:46:44 compute-1 nova_compute[225855]: 2026-01-20 14:46:44.145 225859 DEBUG nova.network.os_vif_util [None req-b4c774ae-1e22-4f05-b320-7417a7a22674 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Converting VIF {"id": "ec1a7a25-a60a-40c3-98bf-710c68019b24", "address": "fa:16:3e:76:c2:c8", "network": {"id": "a19e9d1a-864f-41ee-bdea-188e65973ea5", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-916311998-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b683fcc0026242e28ba6d8fba638688e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec1a7a25-a6", "ovs_interfaceid": "ec1a7a25-a60a-40c3-98bf-710c68019b24", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:46:44 compute-1 nova_compute[225855]: 2026-01-20 14:46:44.146 225859 DEBUG nova.network.os_vif_util [None req-b4c774ae-1e22-4f05-b320-7417a7a22674 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:76:c2:c8,bridge_name='br-int',has_traffic_filtering=True,id=ec1a7a25-a60a-40c3-98bf-710c68019b24,network=Network(a19e9d1a-864f-41ee-bdea-188e65973ea5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapec1a7a25-a6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:46:44 compute-1 nova_compute[225855]: 2026-01-20 14:46:44.147 225859 DEBUG os_vif [None req-b4c774ae-1e22-4f05-b320-7417a7a22674 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:76:c2:c8,bridge_name='br-int',has_traffic_filtering=True,id=ec1a7a25-a60a-40c3-98bf-710c68019b24,network=Network(a19e9d1a-864f-41ee-bdea-188e65973ea5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapec1a7a25-a6') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 20 14:46:44 compute-1 nova_compute[225855]: 2026-01-20 14:46:44.148 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:46:44 compute-1 nova_compute[225855]: 2026-01-20 14:46:44.149 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:46:44 compute-1 nova_compute[225855]: 2026-01-20 14:46:44.150 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:46:44 compute-1 nova_compute[225855]: 2026-01-20 14:46:44.154 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:46:44 compute-1 nova_compute[225855]: 2026-01-20 14:46:44.155 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapec1a7a25-a6, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:46:44 compute-1 nova_compute[225855]: 2026-01-20 14:46:44.156 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapec1a7a25-a6, col_values=(('external_ids', {'iface-id': 'ec1a7a25-a60a-40c3-98bf-710c68019b24', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:76:c2:c8', 'vm-uuid': '5f56b3e9-af2b-4934-8184-6257994c6b6a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:46:44 compute-1 NetworkManager[49104]: <info>  [1768920404.1595] manager: (tapec1a7a25-a6): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/143)
Jan 20 14:46:44 compute-1 nova_compute[225855]: 2026-01-20 14:46:44.161 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 14:46:44 compute-1 nova_compute[225855]: 2026-01-20 14:46:44.163 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:46:44 compute-1 nova_compute[225855]: 2026-01-20 14:46:44.163 225859 INFO os_vif [None req-b4c774ae-1e22-4f05-b320-7417a7a22674 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:76:c2:c8,bridge_name='br-int',has_traffic_filtering=True,id=ec1a7a25-a60a-40c3-98bf-710c68019b24,network=Network(a19e9d1a-864f-41ee-bdea-188e65973ea5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapec1a7a25-a6')
Jan 20 14:46:44 compute-1 nova_compute[225855]: 2026-01-20 14:46:44.208 225859 DEBUG nova.virt.libvirt.driver [None req-b4c774ae-1e22-4f05-b320-7417a7a22674 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:46:44 compute-1 nova_compute[225855]: 2026-01-20 14:46:44.208 225859 DEBUG nova.virt.libvirt.driver [None req-b4c774ae-1e22-4f05-b320-7417a7a22674 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:46:44 compute-1 nova_compute[225855]: 2026-01-20 14:46:44.209 225859 DEBUG nova.virt.libvirt.driver [None req-b4c774ae-1e22-4f05-b320-7417a7a22674 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:46:44 compute-1 nova_compute[225855]: 2026-01-20 14:46:44.209 225859 DEBUG nova.virt.libvirt.driver [None req-b4c774ae-1e22-4f05-b320-7417a7a22674 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] No VIF found with MAC fa:16:3e:76:c2:c8, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 20 14:46:44 compute-1 nova_compute[225855]: 2026-01-20 14:46:44.209 225859 INFO nova.virt.libvirt.driver [None req-b4c774ae-1e22-4f05-b320-7417a7a22674 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] [instance: 5f56b3e9-af2b-4934-8184-6257994c6b6a] Using config drive
Jan 20 14:46:44 compute-1 nova_compute[225855]: 2026-01-20 14:46:44.233 225859 DEBUG nova.storage.rbd_utils [None req-b4c774ae-1e22-4f05-b320-7417a7a22674 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] rbd image 5f56b3e9-af2b-4934-8184-6257994c6b6a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:46:44 compute-1 nova_compute[225855]: 2026-01-20 14:46:44.253 225859 DEBUG nova.objects.instance [None req-b4c774ae-1e22-4f05-b320-7417a7a22674 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Lazy-loading 'ec2_ids' on Instance uuid 5f56b3e9-af2b-4934-8184-6257994c6b6a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:46:44 compute-1 nova_compute[225855]: 2026-01-20 14:46:44.346 225859 DEBUG nova.objects.instance [None req-b4c774ae-1e22-4f05-b320-7417a7a22674 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Lazy-loading 'keypairs' on Instance uuid 5f56b3e9-af2b-4934-8184-6257994c6b6a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:46:44 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:46:44 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:46:44 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:46:44.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:46:44 compute-1 nova_compute[225855]: 2026-01-20 14:46:44.707 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:46:44 compute-1 nova_compute[225855]: 2026-01-20 14:46:44.852 225859 INFO nova.virt.libvirt.driver [None req-b4c774ae-1e22-4f05-b320-7417a7a22674 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] [instance: 5f56b3e9-af2b-4934-8184-6257994c6b6a] Creating config drive at /var/lib/nova/instances/5f56b3e9-af2b-4934-8184-6257994c6b6a/disk.config
Jan 20 14:46:44 compute-1 nova_compute[225855]: 2026-01-20 14:46:44.858 225859 DEBUG oslo_concurrency.processutils [None req-b4c774ae-1e22-4f05-b320-7417a7a22674 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/5f56b3e9-af2b-4934-8184-6257994c6b6a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpgshuq8rj execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:46:44 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/2699107769' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:46:44 compute-1 nova_compute[225855]: 2026-01-20 14:46:44.987 225859 DEBUG oslo_concurrency.processutils [None req-b4c774ae-1e22-4f05-b320-7417a7a22674 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/5f56b3e9-af2b-4934-8184-6257994c6b6a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpgshuq8rj" returned: 0 in 0.130s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:46:45 compute-1 nova_compute[225855]: 2026-01-20 14:46:45.017 225859 DEBUG nova.storage.rbd_utils [None req-b4c774ae-1e22-4f05-b320-7417a7a22674 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] rbd image 5f56b3e9-af2b-4934-8184-6257994c6b6a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:46:45 compute-1 nova_compute[225855]: 2026-01-20 14:46:45.021 225859 DEBUG oslo_concurrency.processutils [None req-b4c774ae-1e22-4f05-b320-7417a7a22674 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/5f56b3e9-af2b-4934-8184-6257994c6b6a/disk.config 5f56b3e9-af2b-4934-8184-6257994c6b6a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:46:45 compute-1 nova_compute[225855]: 2026-01-20 14:46:45.189 225859 DEBUG oslo_concurrency.processutils [None req-b4c774ae-1e22-4f05-b320-7417a7a22674 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/5f56b3e9-af2b-4934-8184-6257994c6b6a/disk.config 5f56b3e9-af2b-4934-8184-6257994c6b6a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.168s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:46:45 compute-1 nova_compute[225855]: 2026-01-20 14:46:45.190 225859 INFO nova.virt.libvirt.driver [None req-b4c774ae-1e22-4f05-b320-7417a7a22674 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] [instance: 5f56b3e9-af2b-4934-8184-6257994c6b6a] Deleting local config drive /var/lib/nova/instances/5f56b3e9-af2b-4934-8184-6257994c6b6a/disk.config because it was imported into RBD.
Jan 20 14:46:45 compute-1 kernel: tapec1a7a25-a6: entered promiscuous mode
Jan 20 14:46:45 compute-1 NetworkManager[49104]: <info>  [1768920405.2374] manager: (tapec1a7a25-a6): new Tun device (/org/freedesktop/NetworkManager/Devices/144)
Jan 20 14:46:45 compute-1 ovn_controller[130490]: 2026-01-20T14:46:45Z|00333|binding|INFO|Claiming lport ec1a7a25-a60a-40c3-98bf-710c68019b24 for this chassis.
Jan 20 14:46:45 compute-1 nova_compute[225855]: 2026-01-20 14:46:45.259 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:46:45 compute-1 ovn_controller[130490]: 2026-01-20T14:46:45Z|00334|binding|INFO|ec1a7a25-a60a-40c3-98bf-710c68019b24: Claiming fa:16:3e:76:c2:c8 10.100.0.5
Jan 20 14:46:45 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:46:45.269 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:76:c2:c8 10.100.0.5'], port_security=['fa:16:3e:76:c2:c8 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '5f56b3e9-af2b-4934-8184-6257994c6b6a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a19e9d1a-864f-41ee-bdea-188e65973ea5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b683fcc0026242e28ba6d8fba638688e', 'neutron:revision_number': '5', 'neutron:security_group_ids': '357f03e7-2463-4315-adee-f60d9b0f5500', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.175'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=361f9a69-30a6-4be4-89ad-2a8f92877af2, chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], logical_port=ec1a7a25-a60a-40c3-98bf-710c68019b24) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:46:45 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:46:45.270 140354 INFO neutron.agent.ovn.metadata.agent [-] Port ec1a7a25-a60a-40c3-98bf-710c68019b24 in datapath a19e9d1a-864f-41ee-bdea-188e65973ea5 bound to our chassis
Jan 20 14:46:45 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:46:45.272 140354 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a19e9d1a-864f-41ee-bdea-188e65973ea5
Jan 20 14:46:45 compute-1 ovn_controller[130490]: 2026-01-20T14:46:45Z|00335|binding|INFO|Setting lport ec1a7a25-a60a-40c3-98bf-710c68019b24 ovn-installed in OVS
Jan 20 14:46:45 compute-1 ovn_controller[130490]: 2026-01-20T14:46:45Z|00336|binding|INFO|Setting lport ec1a7a25-a60a-40c3-98bf-710c68019b24 up in Southbound
Jan 20 14:46:45 compute-1 nova_compute[225855]: 2026-01-20 14:46:45.279 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:46:45 compute-1 nova_compute[225855]: 2026-01-20 14:46:45.281 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:46:45 compute-1 nova_compute[225855]: 2026-01-20 14:46:45.282 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:46:45 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:46:45.289 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[a4732483-9c55-438e-8604-24f603c1e39e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:46:45 compute-1 systemd-machined[194361]: New machine qemu-39-instance-0000005f.
Jan 20 14:46:45 compute-1 systemd-udevd[261288]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 14:46:45 compute-1 systemd[1]: Started Virtual Machine qemu-39-instance-0000005f.
Jan 20 14:46:45 compute-1 NetworkManager[49104]: <info>  [1768920405.3079] device (tapec1a7a25-a6): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 20 14:46:45 compute-1 NetworkManager[49104]: <info>  [1768920405.3087] device (tapec1a7a25-a6): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 20 14:46:45 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:46:45.334 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[2c91af9c-def7-4984-a3ae-8225dfbc873a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:46:45 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:46:45.338 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[30a32d5f-5c6e-4f40-b778-0d9ab720031f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:46:45 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:46:45.367 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[c1adc412-aee2-473c-b83a-66d781cb2e3f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:46:45 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:46:45.381 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[9acded25-d45e-4d0b-bcc5-578887ee98ca]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa19e9d1a-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:21:53:13'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 9, 'rx_bytes': 700, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 9, 'rx_bytes': 700, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 82], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 529090, 'reachable_time': 16021, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 261300, 'error': None, 'target': 'ovnmeta-a19e9d1a-864f-41ee-bdea-188e65973ea5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:46:45 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:46:45.398 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[fd9ed6a7-13db-4d08-90d9-f849c9aa9454]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapa19e9d1a-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 529100, 'tstamp': 529100}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 261302, 'error': None, 'target': 'ovnmeta-a19e9d1a-864f-41ee-bdea-188e65973ea5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapa19e9d1a-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 529104, 'tstamp': 529104}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 261302, 'error': None, 'target': 'ovnmeta-a19e9d1a-864f-41ee-bdea-188e65973ea5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:46:45 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:46:45.400 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa19e9d1a-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:46:45 compute-1 nova_compute[225855]: 2026-01-20 14:46:45.401 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:46:45 compute-1 nova_compute[225855]: 2026-01-20 14:46:45.403 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:46:45 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:46:45.403 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa19e9d1a-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:46:45 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:46:45.403 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:46:45 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:46:45.404 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa19e9d1a-80, col_values=(('external_ids', {'iface-id': '5527ab8d-a985-420b-9d5b-7e5d9baf7004'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:46:45 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:46:45.404 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:46:45 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:46:45 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:46:45 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:46:45.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:46:45 compute-1 nova_compute[225855]: 2026-01-20 14:46:45.629 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:46:45 compute-1 nova_compute[225855]: 2026-01-20 14:46:45.856 225859 DEBUG nova.virt.libvirt.host [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Removed pending event for 5f56b3e9-af2b-4934-8184-6257994c6b6a due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Jan 20 14:46:45 compute-1 nova_compute[225855]: 2026-01-20 14:46:45.856 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768920405.855787, 5f56b3e9-af2b-4934-8184-6257994c6b6a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:46:45 compute-1 nova_compute[225855]: 2026-01-20 14:46:45.857 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 5f56b3e9-af2b-4934-8184-6257994c6b6a] VM Resumed (Lifecycle Event)
Jan 20 14:46:45 compute-1 nova_compute[225855]: 2026-01-20 14:46:45.859 225859 DEBUG nova.compute.manager [None req-b4c774ae-1e22-4f05-b320-7417a7a22674 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] [instance: 5f56b3e9-af2b-4934-8184-6257994c6b6a] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 20 14:46:45 compute-1 nova_compute[225855]: 2026-01-20 14:46:45.859 225859 DEBUG nova.virt.libvirt.driver [None req-b4c774ae-1e22-4f05-b320-7417a7a22674 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] [instance: 5f56b3e9-af2b-4934-8184-6257994c6b6a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 20 14:46:45 compute-1 nova_compute[225855]: 2026-01-20 14:46:45.862 225859 INFO nova.virt.libvirt.driver [-] [instance: 5f56b3e9-af2b-4934-8184-6257994c6b6a] Instance spawned successfully.
Jan 20 14:46:45 compute-1 nova_compute[225855]: 2026-01-20 14:46:45.863 225859 DEBUG nova.virt.libvirt.driver [None req-b4c774ae-1e22-4f05-b320-7417a7a22674 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] [instance: 5f56b3e9-af2b-4934-8184-6257994c6b6a] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 20 14:46:45 compute-1 ceph-mon[81775]: pgmap v1757: 321 pgs: 321 active+clean; 281 MiB data, 857 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 4.3 MiB/s wr, 256 op/s
Jan 20 14:46:45 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/3551339405' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:46:45 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/3950123431' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:46:45 compute-1 nova_compute[225855]: 2026-01-20 14:46:45.943 225859 DEBUG nova.compute.manager [req-64f0176b-9305-4a43-a8ed-ca45fee2da2a req-1e594b73-675e-4012-9322-6a35672bd7ce 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 5f56b3e9-af2b-4934-8184-6257994c6b6a] Received event network-vif-plugged-ec1a7a25-a60a-40c3-98bf-710c68019b24 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:46:45 compute-1 nova_compute[225855]: 2026-01-20 14:46:45.944 225859 DEBUG oslo_concurrency.lockutils [req-64f0176b-9305-4a43-a8ed-ca45fee2da2a req-1e594b73-675e-4012-9322-6a35672bd7ce 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "5f56b3e9-af2b-4934-8184-6257994c6b6a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:46:45 compute-1 nova_compute[225855]: 2026-01-20 14:46:45.944 225859 DEBUG oslo_concurrency.lockutils [req-64f0176b-9305-4a43-a8ed-ca45fee2da2a req-1e594b73-675e-4012-9322-6a35672bd7ce 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "5f56b3e9-af2b-4934-8184-6257994c6b6a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:46:45 compute-1 nova_compute[225855]: 2026-01-20 14:46:45.944 225859 DEBUG oslo_concurrency.lockutils [req-64f0176b-9305-4a43-a8ed-ca45fee2da2a req-1e594b73-675e-4012-9322-6a35672bd7ce 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "5f56b3e9-af2b-4934-8184-6257994c6b6a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:46:45 compute-1 nova_compute[225855]: 2026-01-20 14:46:45.944 225859 DEBUG nova.compute.manager [req-64f0176b-9305-4a43-a8ed-ca45fee2da2a req-1e594b73-675e-4012-9322-6a35672bd7ce 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 5f56b3e9-af2b-4934-8184-6257994c6b6a] No waiting events found dispatching network-vif-plugged-ec1a7a25-a60a-40c3-98bf-710c68019b24 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:46:45 compute-1 nova_compute[225855]: 2026-01-20 14:46:45.944 225859 WARNING nova.compute.manager [req-64f0176b-9305-4a43-a8ed-ca45fee2da2a req-1e594b73-675e-4012-9322-6a35672bd7ce 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 5f56b3e9-af2b-4934-8184-6257994c6b6a] Received unexpected event network-vif-plugged-ec1a7a25-a60a-40c3-98bf-710c68019b24 for instance with vm_state active and task_state rebuild_spawning.
Jan 20 14:46:45 compute-1 nova_compute[225855]: 2026-01-20 14:46:45.967 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 5f56b3e9-af2b-4934-8184-6257994c6b6a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:46:45 compute-1 nova_compute[225855]: 2026-01-20 14:46:45.971 225859 DEBUG nova.virt.libvirt.driver [None req-b4c774ae-1e22-4f05-b320-7417a7a22674 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] [instance: 5f56b3e9-af2b-4934-8184-6257994c6b6a] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:46:45 compute-1 nova_compute[225855]: 2026-01-20 14:46:45.971 225859 DEBUG nova.virt.libvirt.driver [None req-b4c774ae-1e22-4f05-b320-7417a7a22674 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] [instance: 5f56b3e9-af2b-4934-8184-6257994c6b6a] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:46:45 compute-1 nova_compute[225855]: 2026-01-20 14:46:45.971 225859 DEBUG nova.virt.libvirt.driver [None req-b4c774ae-1e22-4f05-b320-7417a7a22674 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] [instance: 5f56b3e9-af2b-4934-8184-6257994c6b6a] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:46:45 compute-1 nova_compute[225855]: 2026-01-20 14:46:45.972 225859 DEBUG nova.virt.libvirt.driver [None req-b4c774ae-1e22-4f05-b320-7417a7a22674 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] [instance: 5f56b3e9-af2b-4934-8184-6257994c6b6a] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:46:45 compute-1 nova_compute[225855]: 2026-01-20 14:46:45.972 225859 DEBUG nova.virt.libvirt.driver [None req-b4c774ae-1e22-4f05-b320-7417a7a22674 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] [instance: 5f56b3e9-af2b-4934-8184-6257994c6b6a] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:46:45 compute-1 nova_compute[225855]: 2026-01-20 14:46:45.972 225859 DEBUG nova.virt.libvirt.driver [None req-b4c774ae-1e22-4f05-b320-7417a7a22674 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] [instance: 5f56b3e9-af2b-4934-8184-6257994c6b6a] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:46:45 compute-1 nova_compute[225855]: 2026-01-20 14:46:45.976 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 5f56b3e9-af2b-4934-8184-6257994c6b6a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:46:46 compute-1 nova_compute[225855]: 2026-01-20 14:46:46.010 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 5f56b3e9-af2b-4934-8184-6257994c6b6a] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Jan 20 14:46:46 compute-1 nova_compute[225855]: 2026-01-20 14:46:46.011 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768920405.856396, 5f56b3e9-af2b-4934-8184-6257994c6b6a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:46:46 compute-1 nova_compute[225855]: 2026-01-20 14:46:46.011 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 5f56b3e9-af2b-4934-8184-6257994c6b6a] VM Started (Lifecycle Event)
Jan 20 14:46:46 compute-1 nova_compute[225855]: 2026-01-20 14:46:46.032 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 5f56b3e9-af2b-4934-8184-6257994c6b6a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:46:46 compute-1 nova_compute[225855]: 2026-01-20 14:46:46.035 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 5f56b3e9-af2b-4934-8184-6257994c6b6a] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:46:46 compute-1 nova_compute[225855]: 2026-01-20 14:46:46.040 225859 DEBUG nova.compute.manager [None req-b4c774ae-1e22-4f05-b320-7417a7a22674 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] [instance: 5f56b3e9-af2b-4934-8184-6257994c6b6a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:46:46 compute-1 nova_compute[225855]: 2026-01-20 14:46:46.051 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 5f56b3e9-af2b-4934-8184-6257994c6b6a] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Jan 20 14:46:46 compute-1 nova_compute[225855]: 2026-01-20 14:46:46.100 225859 DEBUG oslo_concurrency.lockutils [None req-b4c774ae-1e22-4f05-b320-7417a7a22674 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:46:46 compute-1 nova_compute[225855]: 2026-01-20 14:46:46.100 225859 DEBUG oslo_concurrency.lockutils [None req-b4c774ae-1e22-4f05-b320-7417a7a22674 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:46:46 compute-1 nova_compute[225855]: 2026-01-20 14:46:46.100 225859 DEBUG nova.objects.instance [None req-b4c774ae-1e22-4f05-b320-7417a7a22674 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] [instance: 5f56b3e9-af2b-4934-8184-6257994c6b6a] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Jan 20 14:46:46 compute-1 nova_compute[225855]: 2026-01-20 14:46:46.207 225859 DEBUG oslo_concurrency.lockutils [None req-b4c774ae-1e22-4f05-b320-7417a7a22674 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.107s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:46:46 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:46:46 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:46:46 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:46:46.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:46:46 compute-1 ceph-mon[81775]: pgmap v1758: 321 pgs: 321 active+clean; 324 MiB data, 878 MiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 5.1 MiB/s wr, 227 op/s
Jan 20 14:46:46 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e238 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:46:47 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:46:47 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:46:47 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:46:47.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:46:48 compute-1 sudo[261365]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:46:48 compute-1 sudo[261365]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:46:48 compute-1 sudo[261365]: pam_unix(sudo:session): session closed for user root
Jan 20 14:46:48 compute-1 sudo[261390]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:46:48 compute-1 sudo[261390]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:46:48 compute-1 sudo[261390]: pam_unix(sudo:session): session closed for user root
Jan 20 14:46:48 compute-1 nova_compute[225855]: 2026-01-20 14:46:48.414 225859 DEBUG nova.compute.manager [req-e3219eea-fff0-4a70-bbfc-1609e1e36460 req-4a87b953-a057-40aa-abe8-bdda71cd5687 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 5f56b3e9-af2b-4934-8184-6257994c6b6a] Received event network-vif-plugged-ec1a7a25-a60a-40c3-98bf-710c68019b24 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:46:48 compute-1 nova_compute[225855]: 2026-01-20 14:46:48.415 225859 DEBUG oslo_concurrency.lockutils [req-e3219eea-fff0-4a70-bbfc-1609e1e36460 req-4a87b953-a057-40aa-abe8-bdda71cd5687 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "5f56b3e9-af2b-4934-8184-6257994c6b6a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:46:48 compute-1 nova_compute[225855]: 2026-01-20 14:46:48.416 225859 DEBUG oslo_concurrency.lockutils [req-e3219eea-fff0-4a70-bbfc-1609e1e36460 req-4a87b953-a057-40aa-abe8-bdda71cd5687 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "5f56b3e9-af2b-4934-8184-6257994c6b6a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:46:48 compute-1 nova_compute[225855]: 2026-01-20 14:46:48.416 225859 DEBUG oslo_concurrency.lockutils [req-e3219eea-fff0-4a70-bbfc-1609e1e36460 req-4a87b953-a057-40aa-abe8-bdda71cd5687 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "5f56b3e9-af2b-4934-8184-6257994c6b6a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:46:48 compute-1 nova_compute[225855]: 2026-01-20 14:46:48.417 225859 DEBUG nova.compute.manager [req-e3219eea-fff0-4a70-bbfc-1609e1e36460 req-4a87b953-a057-40aa-abe8-bdda71cd5687 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 5f56b3e9-af2b-4934-8184-6257994c6b6a] No waiting events found dispatching network-vif-plugged-ec1a7a25-a60a-40c3-98bf-710c68019b24 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:46:48 compute-1 nova_compute[225855]: 2026-01-20 14:46:48.417 225859 WARNING nova.compute.manager [req-e3219eea-fff0-4a70-bbfc-1609e1e36460 req-4a87b953-a057-40aa-abe8-bdda71cd5687 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 5f56b3e9-af2b-4934-8184-6257994c6b6a] Received unexpected event network-vif-plugged-ec1a7a25-a60a-40c3-98bf-710c68019b24 for instance with vm_state active and task_state None.
Jan 20 14:46:48 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:46:48 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:46:48 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:46:48.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:46:49 compute-1 nova_compute[225855]: 2026-01-20 14:46:49.159 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:46:49 compute-1 ceph-mon[81775]: pgmap v1759: 321 pgs: 321 active+clean; 336 MiB data, 883 MiB used, 20 GiB / 21 GiB avail; 516 KiB/s rd, 2.5 MiB/s wr, 134 op/s
Jan 20 14:46:49 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:46:49 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:46:49 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:46:49.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:46:50 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/2897975098' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:46:50 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/3506616449' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:46:50 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/3845254384' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:46:50 compute-1 nova_compute[225855]: 2026-01-20 14:46:50.356 225859 DEBUG oslo_concurrency.lockutils [None req-6c457338-0d70-4054-b038-16e6ecdb36c4 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Acquiring lock "5f56b3e9-af2b-4934-8184-6257994c6b6a" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:46:50 compute-1 nova_compute[225855]: 2026-01-20 14:46:50.357 225859 DEBUG oslo_concurrency.lockutils [None req-6c457338-0d70-4054-b038-16e6ecdb36c4 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Lock "5f56b3e9-af2b-4934-8184-6257994c6b6a" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:46:50 compute-1 nova_compute[225855]: 2026-01-20 14:46:50.382 225859 INFO nova.compute.manager [None req-6c457338-0d70-4054-b038-16e6ecdb36c4 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] [instance: 5f56b3e9-af2b-4934-8184-6257994c6b6a] Detaching volume aab03340-0e79-412e-a963-e216832603c4
Jan 20 14:46:50 compute-1 nova_compute[225855]: 2026-01-20 14:46:50.525 225859 INFO nova.virt.block_device [None req-6c457338-0d70-4054-b038-16e6ecdb36c4 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] [instance: 5f56b3e9-af2b-4934-8184-6257994c6b6a] Attempting to driver detach volume aab03340-0e79-412e-a963-e216832603c4 from mountpoint /dev/vdb
Jan 20 14:46:50 compute-1 nova_compute[225855]: 2026-01-20 14:46:50.537 225859 DEBUG nova.virt.libvirt.driver [None req-6c457338-0d70-4054-b038-16e6ecdb36c4 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Attempting to detach device vdb from instance 5f56b3e9-af2b-4934-8184-6257994c6b6a from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Jan 20 14:46:50 compute-1 nova_compute[225855]: 2026-01-20 14:46:50.538 225859 DEBUG nova.virt.libvirt.guest [None req-6c457338-0d70-4054-b038-16e6ecdb36c4 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] detach device xml: <disk type="network" device="disk">
Jan 20 14:46:50 compute-1 nova_compute[225855]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 20 14:46:50 compute-1 nova_compute[225855]:   <source protocol="rbd" name="volumes/volume-aab03340-0e79-412e-a963-e216832603c4">
Jan 20 14:46:50 compute-1 nova_compute[225855]:     <host name="192.168.122.100" port="6789"/>
Jan 20 14:46:50 compute-1 nova_compute[225855]:     <host name="192.168.122.102" port="6789"/>
Jan 20 14:46:50 compute-1 nova_compute[225855]:     <host name="192.168.122.101" port="6789"/>
Jan 20 14:46:50 compute-1 nova_compute[225855]:   </source>
Jan 20 14:46:50 compute-1 nova_compute[225855]:   <target dev="vdb" bus="virtio"/>
Jan 20 14:46:50 compute-1 nova_compute[225855]:   <serial>aab03340-0e79-412e-a963-e216832603c4</serial>
Jan 20 14:46:50 compute-1 nova_compute[225855]:   <address type="pci" domain="0x0000" bus="0x04" slot="0x00" function="0x0"/>
Jan 20 14:46:50 compute-1 nova_compute[225855]: </disk>
Jan 20 14:46:50 compute-1 nova_compute[225855]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Jan 20 14:46:50 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:46:50 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:46:50 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:46:50.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:46:50 compute-1 nova_compute[225855]: 2026-01-20 14:46:50.632 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:46:50 compute-1 nova_compute[225855]: 2026-01-20 14:46:50.707 225859 INFO nova.virt.libvirt.driver [None req-6c457338-0d70-4054-b038-16e6ecdb36c4 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Successfully detached device vdb from instance 5f56b3e9-af2b-4934-8184-6257994c6b6a from the persistent domain config.
Jan 20 14:46:50 compute-1 nova_compute[225855]: 2026-01-20 14:46:50.707 225859 DEBUG nova.virt.libvirt.driver [None req-6c457338-0d70-4054-b038-16e6ecdb36c4 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance 5f56b3e9-af2b-4934-8184-6257994c6b6a from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Jan 20 14:46:50 compute-1 nova_compute[225855]: 2026-01-20 14:46:50.708 225859 DEBUG nova.virt.libvirt.guest [None req-6c457338-0d70-4054-b038-16e6ecdb36c4 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] detach device xml: <disk type="network" device="disk">
Jan 20 14:46:50 compute-1 nova_compute[225855]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 20 14:46:50 compute-1 nova_compute[225855]:   <source protocol="rbd" name="volumes/volume-aab03340-0e79-412e-a963-e216832603c4">
Jan 20 14:46:50 compute-1 nova_compute[225855]:     <host name="192.168.122.100" port="6789"/>
Jan 20 14:46:50 compute-1 nova_compute[225855]:     <host name="192.168.122.102" port="6789"/>
Jan 20 14:46:50 compute-1 nova_compute[225855]:     <host name="192.168.122.101" port="6789"/>
Jan 20 14:46:50 compute-1 nova_compute[225855]:   </source>
Jan 20 14:46:50 compute-1 nova_compute[225855]:   <target dev="vdb" bus="virtio"/>
Jan 20 14:46:50 compute-1 nova_compute[225855]:   <serial>aab03340-0e79-412e-a963-e216832603c4</serial>
Jan 20 14:46:50 compute-1 nova_compute[225855]:   <address type="pci" domain="0x0000" bus="0x04" slot="0x00" function="0x0"/>
Jan 20 14:46:50 compute-1 nova_compute[225855]: </disk>
Jan 20 14:46:50 compute-1 nova_compute[225855]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Jan 20 14:46:51 compute-1 ceph-mon[81775]: pgmap v1760: 321 pgs: 321 active+clean; 347 MiB data, 891 MiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 2.9 MiB/s wr, 159 op/s
Jan 20 14:46:51 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:46:51 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:46:51 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:46:51.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:46:51 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e238 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:46:52 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:46:52 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:46:52 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:46:52.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:46:52 compute-1 nova_compute[225855]: 2026-01-20 14:46:52.988 225859 DEBUG nova.virt.libvirt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Received event <DeviceRemovedEvent: 1768920412.9880133, 5f56b3e9-af2b-4934-8184-6257994c6b6a => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Jan 20 14:46:52 compute-1 nova_compute[225855]: 2026-01-20 14:46:52.991 225859 DEBUG nova.virt.libvirt.driver [None req-6c457338-0d70-4054-b038-16e6ecdb36c4 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance 5f56b3e9-af2b-4934-8184-6257994c6b6a _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Jan 20 14:46:52 compute-1 nova_compute[225855]: 2026-01-20 14:46:52.993 225859 INFO nova.virt.libvirt.driver [None req-6c457338-0d70-4054-b038-16e6ecdb36c4 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Successfully detached device vdb from instance 5f56b3e9-af2b-4934-8184-6257994c6b6a from the live domain config.
Jan 20 14:46:53 compute-1 nova_compute[225855]: 2026-01-20 14:46:53.204 225859 DEBUG nova.objects.instance [None req-6c457338-0d70-4054-b038-16e6ecdb36c4 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Lazy-loading 'flavor' on Instance uuid 5f56b3e9-af2b-4934-8184-6257994c6b6a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:46:53 compute-1 ceph-mon[81775]: pgmap v1761: 321 pgs: 321 active+clean; 374 MiB data, 900 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 3.6 MiB/s wr, 229 op/s
Jan 20 14:46:53 compute-1 nova_compute[225855]: 2026-01-20 14:46:53.253 225859 DEBUG oslo_concurrency.lockutils [None req-6c457338-0d70-4054-b038-16e6ecdb36c4 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Lock "5f56b3e9-af2b-4934-8184-6257994c6b6a" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 2.896s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:46:53 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:46:53 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:46:53 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:46:53.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:46:54 compute-1 nova_compute[225855]: 2026-01-20 14:46:54.085 225859 DEBUG oslo_concurrency.lockutils [None req-2cd59ed3-d902-4d19-b251-a0edc6e0598c 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Acquiring lock "5f56b3e9-af2b-4934-8184-6257994c6b6a" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:46:54 compute-1 nova_compute[225855]: 2026-01-20 14:46:54.086 225859 DEBUG oslo_concurrency.lockutils [None req-2cd59ed3-d902-4d19-b251-a0edc6e0598c 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Lock "5f56b3e9-af2b-4934-8184-6257994c6b6a" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:46:54 compute-1 nova_compute[225855]: 2026-01-20 14:46:54.086 225859 DEBUG oslo_concurrency.lockutils [None req-2cd59ed3-d902-4d19-b251-a0edc6e0598c 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Acquiring lock "5f56b3e9-af2b-4934-8184-6257994c6b6a-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:46:54 compute-1 nova_compute[225855]: 2026-01-20 14:46:54.086 225859 DEBUG oslo_concurrency.lockutils [None req-2cd59ed3-d902-4d19-b251-a0edc6e0598c 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Lock "5f56b3e9-af2b-4934-8184-6257994c6b6a-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:46:54 compute-1 nova_compute[225855]: 2026-01-20 14:46:54.086 225859 DEBUG oslo_concurrency.lockutils [None req-2cd59ed3-d902-4d19-b251-a0edc6e0598c 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Lock "5f56b3e9-af2b-4934-8184-6257994c6b6a-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:46:54 compute-1 nova_compute[225855]: 2026-01-20 14:46:54.088 225859 INFO nova.compute.manager [None req-2cd59ed3-d902-4d19-b251-a0edc6e0598c 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] [instance: 5f56b3e9-af2b-4934-8184-6257994c6b6a] Terminating instance
Jan 20 14:46:54 compute-1 nova_compute[225855]: 2026-01-20 14:46:54.089 225859 DEBUG nova.compute.manager [None req-2cd59ed3-d902-4d19-b251-a0edc6e0598c 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] [instance: 5f56b3e9-af2b-4934-8184-6257994c6b6a] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 20 14:46:54 compute-1 kernel: tapec1a7a25-a6 (unregistering): left promiscuous mode
Jan 20 14:46:54 compute-1 NetworkManager[49104]: <info>  [1768920414.1289] device (tapec1a7a25-a6): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 20 14:46:54 compute-1 ovn_controller[130490]: 2026-01-20T14:46:54Z|00337|binding|INFO|Releasing lport ec1a7a25-a60a-40c3-98bf-710c68019b24 from this chassis (sb_readonly=0)
Jan 20 14:46:54 compute-1 ovn_controller[130490]: 2026-01-20T14:46:54Z|00338|binding|INFO|Setting lport ec1a7a25-a60a-40c3-98bf-710c68019b24 down in Southbound
Jan 20 14:46:54 compute-1 nova_compute[225855]: 2026-01-20 14:46:54.130 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:46:54 compute-1 ovn_controller[130490]: 2026-01-20T14:46:54Z|00339|binding|INFO|Removing iface tapec1a7a25-a6 ovn-installed in OVS
Jan 20 14:46:54 compute-1 nova_compute[225855]: 2026-01-20 14:46:54.133 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:46:54 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:46:54.140 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:76:c2:c8 10.100.0.5'], port_security=['fa:16:3e:76:c2:c8 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '5f56b3e9-af2b-4934-8184-6257994c6b6a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a19e9d1a-864f-41ee-bdea-188e65973ea5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b683fcc0026242e28ba6d8fba638688e', 'neutron:revision_number': '6', 'neutron:security_group_ids': '357f03e7-2463-4315-adee-f60d9b0f5500', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.175', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=361f9a69-30a6-4be4-89ad-2a8f92877af2, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], logical_port=ec1a7a25-a60a-40c3-98bf-710c68019b24) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:46:54 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:46:54.143 140354 INFO neutron.agent.ovn.metadata.agent [-] Port ec1a7a25-a60a-40c3-98bf-710c68019b24 in datapath a19e9d1a-864f-41ee-bdea-188e65973ea5 unbound from our chassis
Jan 20 14:46:54 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:46:54.146 140354 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a19e9d1a-864f-41ee-bdea-188e65973ea5
Jan 20 14:46:54 compute-1 nova_compute[225855]: 2026-01-20 14:46:54.148 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:46:54 compute-1 nova_compute[225855]: 2026-01-20 14:46:54.160 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:46:54 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:46:54.168 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[51641b97-68e5-43d9-8508-30eb0bf8bed8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:46:54 compute-1 systemd[1]: machine-qemu\x2d39\x2dinstance\x2d0000005f.scope: Deactivated successfully.
Jan 20 14:46:54 compute-1 systemd[1]: machine-qemu\x2d39\x2dinstance\x2d0000005f.scope: Consumed 8.961s CPU time.
Jan 20 14:46:54 compute-1 systemd-machined[194361]: Machine qemu-39-instance-0000005f terminated.
Jan 20 14:46:54 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:46:54.204 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[7194507d-65fd-441c-b50a-ad6fdb7579f5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:46:54 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:46:54.210 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[16ed3782-050b-4b57-aa16-2068e161e51c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:46:54 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:46:54.251 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[57469ab4-b585-4e5f-9196-d1b488ab8f91]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:46:54 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:46:54.273 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[dc128960-156c-4dd4-b6c2-ac99bd792f00]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa19e9d1a-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:21:53:13'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 11, 'rx_bytes': 700, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 11, 'rx_bytes': 700, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 82], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 529090, 'reachable_time': 16021, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 261432, 'error': None, 'target': 'ovnmeta-a19e9d1a-864f-41ee-bdea-188e65973ea5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:46:54 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:46:54.299 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[d4dd4440-7abd-40ff-8a7b-55a0d85ca031]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapa19e9d1a-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 529100, 'tstamp': 529100}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 261433, 'error': None, 'target': 'ovnmeta-a19e9d1a-864f-41ee-bdea-188e65973ea5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapa19e9d1a-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 529104, 'tstamp': 529104}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 261433, 'error': None, 'target': 'ovnmeta-a19e9d1a-864f-41ee-bdea-188e65973ea5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:46:54 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:46:54.301 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa19e9d1a-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:46:54 compute-1 nova_compute[225855]: 2026-01-20 14:46:54.303 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:46:54 compute-1 nova_compute[225855]: 2026-01-20 14:46:54.308 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:46:54 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:46:54.309 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa19e9d1a-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:46:54 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:46:54.309 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:46:54 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:46:54.310 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa19e9d1a-80, col_values=(('external_ids', {'iface-id': '5527ab8d-a985-420b-9d5b-7e5d9baf7004'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:46:54 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:46:54.310 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:46:54 compute-1 nova_compute[225855]: 2026-01-20 14:46:54.337 225859 INFO nova.virt.libvirt.driver [-] [instance: 5f56b3e9-af2b-4934-8184-6257994c6b6a] Instance destroyed successfully.
Jan 20 14:46:54 compute-1 nova_compute[225855]: 2026-01-20 14:46:54.338 225859 DEBUG nova.objects.instance [None req-2cd59ed3-d902-4d19-b251-a0edc6e0598c 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Lazy-loading 'resources' on Instance uuid 5f56b3e9-af2b-4934-8184-6257994c6b6a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:46:54 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:46:54 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:46:54 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:46:54.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:46:54 compute-1 nova_compute[225855]: 2026-01-20 14:46:54.688 225859 DEBUG nova.virt.libvirt.vif [None req-2cd59ed3-d902-4d19-b251-a0edc6e0598c 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-20T14:45:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1159529074',display_name='tempest-ServerActionsTestOtherA-server-1358184153',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1159529074',id=95,image_ref='26699514-f465-4b50-98b7-36f2cfc6a308',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPQdfZ1kte1uwABu8Yt6GDBCbfLz/7DDRluLfcLvQPu0p1+5pmkgwEfXHuElB+zR6CnPfotcAwIhafKnCBnLfMP+a/6KQqDXsYrSlHbKZFIppFb1eVRM7RProDBMZxNI4Q==',key_name='tempest-keypair-1773413892',keypairs=<?>,launch_index=0,launched_at=2026-01-20T14:46:46Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={rebuild='server'},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b683fcc0026242e28ba6d8fba638688e',ramdisk_id='',reservation_id='r-0vpr4wvx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='26699514-f465-4b50-98b7-36f2cfc6a308',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherA-967087071',owner_user_name='tempest-ServerActionsTestOtherA-967087071-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T14:46:46Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='869086208e10436c9dc96c78bee9a85d',uuid=5f56b3e9-af2b-4934-8184-6257994c6b6a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ec1a7a25-a60a-40c3-98bf-710c68019b24", "address": "fa:16:3e:76:c2:c8", "network": {"id": "a19e9d1a-864f-41ee-bdea-188e65973ea5", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-916311998-network", "subnets": 
[{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b683fcc0026242e28ba6d8fba638688e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec1a7a25-a6", "ovs_interfaceid": "ec1a7a25-a60a-40c3-98bf-710c68019b24", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 20 14:46:54 compute-1 nova_compute[225855]: 2026-01-20 14:46:54.689 225859 DEBUG nova.network.os_vif_util [None req-2cd59ed3-d902-4d19-b251-a0edc6e0598c 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Converting VIF {"id": "ec1a7a25-a60a-40c3-98bf-710c68019b24", "address": "fa:16:3e:76:c2:c8", "network": {"id": "a19e9d1a-864f-41ee-bdea-188e65973ea5", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-916311998-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b683fcc0026242e28ba6d8fba638688e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec1a7a25-a6", "ovs_interfaceid": "ec1a7a25-a60a-40c3-98bf-710c68019b24", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:46:54 compute-1 nova_compute[225855]: 2026-01-20 14:46:54.691 225859 DEBUG nova.network.os_vif_util [None req-2cd59ed3-d902-4d19-b251-a0edc6e0598c 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:76:c2:c8,bridge_name='br-int',has_traffic_filtering=True,id=ec1a7a25-a60a-40c3-98bf-710c68019b24,network=Network(a19e9d1a-864f-41ee-bdea-188e65973ea5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapec1a7a25-a6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:46:54 compute-1 nova_compute[225855]: 2026-01-20 14:46:54.692 225859 DEBUG os_vif [None req-2cd59ed3-d902-4d19-b251-a0edc6e0598c 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:76:c2:c8,bridge_name='br-int',has_traffic_filtering=True,id=ec1a7a25-a60a-40c3-98bf-710c68019b24,network=Network(a19e9d1a-864f-41ee-bdea-188e65973ea5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapec1a7a25-a6') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 20 14:46:54 compute-1 nova_compute[225855]: 2026-01-20 14:46:54.695 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:46:54 compute-1 nova_compute[225855]: 2026-01-20 14:46:54.696 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapec1a7a25-a6, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:46:54 compute-1 nova_compute[225855]: 2026-01-20 14:46:54.698 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:46:54 compute-1 nova_compute[225855]: 2026-01-20 14:46:54.701 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 14:46:54 compute-1 nova_compute[225855]: 2026-01-20 14:46:54.704 225859 INFO os_vif [None req-2cd59ed3-d902-4d19-b251-a0edc6e0598c 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:76:c2:c8,bridge_name='br-int',has_traffic_filtering=True,id=ec1a7a25-a60a-40c3-98bf-710c68019b24,network=Network(a19e9d1a-864f-41ee-bdea-188e65973ea5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapec1a7a25-a6')
Jan 20 14:46:55 compute-1 nova_compute[225855]: 2026-01-20 14:46:55.127 225859 INFO nova.virt.libvirt.driver [None req-2cd59ed3-d902-4d19-b251-a0edc6e0598c 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] [instance: 5f56b3e9-af2b-4934-8184-6257994c6b6a] Deleting instance files /var/lib/nova/instances/5f56b3e9-af2b-4934-8184-6257994c6b6a_del
Jan 20 14:46:55 compute-1 nova_compute[225855]: 2026-01-20 14:46:55.128 225859 INFO nova.virt.libvirt.driver [None req-2cd59ed3-d902-4d19-b251-a0edc6e0598c 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] [instance: 5f56b3e9-af2b-4934-8184-6257994c6b6a] Deletion of /var/lib/nova/instances/5f56b3e9-af2b-4934-8184-6257994c6b6a_del complete
Jan 20 14:46:55 compute-1 nova_compute[225855]: 2026-01-20 14:46:55.199 225859 INFO nova.compute.manager [None req-2cd59ed3-d902-4d19-b251-a0edc6e0598c 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] [instance: 5f56b3e9-af2b-4934-8184-6257994c6b6a] Took 1.11 seconds to destroy the instance on the hypervisor.
Jan 20 14:46:55 compute-1 nova_compute[225855]: 2026-01-20 14:46:55.200 225859 DEBUG oslo.service.loopingcall [None req-2cd59ed3-d902-4d19-b251-a0edc6e0598c 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 20 14:46:55 compute-1 nova_compute[225855]: 2026-01-20 14:46:55.200 225859 DEBUG nova.compute.manager [-] [instance: 5f56b3e9-af2b-4934-8184-6257994c6b6a] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 20 14:46:55 compute-1 nova_compute[225855]: 2026-01-20 14:46:55.201 225859 DEBUG nova.network.neutron [-] [instance: 5f56b3e9-af2b-4934-8184-6257994c6b6a] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 20 14:46:55 compute-1 ceph-mon[81775]: pgmap v1762: 321 pgs: 321 active+clean; 374 MiB data, 900 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 3.6 MiB/s wr, 206 op/s
Jan 20 14:46:55 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:46:55 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:46:55 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:46:55.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:46:55 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:46:55.461 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=30, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '12:bb:42', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '06:92:24:f7:15:56'}, ipsec=False) old=SB_Global(nb_cfg=29) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:46:55 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:46:55.462 140354 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 20 14:46:55 compute-1 nova_compute[225855]: 2026-01-20 14:46:55.462 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:46:55 compute-1 nova_compute[225855]: 2026-01-20 14:46:55.555 225859 DEBUG nova.compute.manager [req-75e7d446-eaae-4984-8052-8be9d3913d08 req-dee04e90-2208-490b-b5a1-51ccd1f718f7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 5f56b3e9-af2b-4934-8184-6257994c6b6a] Received event network-vif-unplugged-ec1a7a25-a60a-40c3-98bf-710c68019b24 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:46:55 compute-1 nova_compute[225855]: 2026-01-20 14:46:55.556 225859 DEBUG oslo_concurrency.lockutils [req-75e7d446-eaae-4984-8052-8be9d3913d08 req-dee04e90-2208-490b-b5a1-51ccd1f718f7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "5f56b3e9-af2b-4934-8184-6257994c6b6a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:46:55 compute-1 nova_compute[225855]: 2026-01-20 14:46:55.556 225859 DEBUG oslo_concurrency.lockutils [req-75e7d446-eaae-4984-8052-8be9d3913d08 req-dee04e90-2208-490b-b5a1-51ccd1f718f7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "5f56b3e9-af2b-4934-8184-6257994c6b6a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:46:55 compute-1 nova_compute[225855]: 2026-01-20 14:46:55.556 225859 DEBUG oslo_concurrency.lockutils [req-75e7d446-eaae-4984-8052-8be9d3913d08 req-dee04e90-2208-490b-b5a1-51ccd1f718f7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "5f56b3e9-af2b-4934-8184-6257994c6b6a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:46:55 compute-1 nova_compute[225855]: 2026-01-20 14:46:55.557 225859 DEBUG nova.compute.manager [req-75e7d446-eaae-4984-8052-8be9d3913d08 req-dee04e90-2208-490b-b5a1-51ccd1f718f7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 5f56b3e9-af2b-4934-8184-6257994c6b6a] No waiting events found dispatching network-vif-unplugged-ec1a7a25-a60a-40c3-98bf-710c68019b24 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:46:55 compute-1 nova_compute[225855]: 2026-01-20 14:46:55.557 225859 DEBUG nova.compute.manager [req-75e7d446-eaae-4984-8052-8be9d3913d08 req-dee04e90-2208-490b-b5a1-51ccd1f718f7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 5f56b3e9-af2b-4934-8184-6257994c6b6a] Received event network-vif-unplugged-ec1a7a25-a60a-40c3-98bf-710c68019b24 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 20 14:46:55 compute-1 nova_compute[225855]: 2026-01-20 14:46:55.557 225859 DEBUG nova.compute.manager [req-75e7d446-eaae-4984-8052-8be9d3913d08 req-dee04e90-2208-490b-b5a1-51ccd1f718f7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 5f56b3e9-af2b-4934-8184-6257994c6b6a] Received event network-vif-plugged-ec1a7a25-a60a-40c3-98bf-710c68019b24 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:46:55 compute-1 nova_compute[225855]: 2026-01-20 14:46:55.557 225859 DEBUG oslo_concurrency.lockutils [req-75e7d446-eaae-4984-8052-8be9d3913d08 req-dee04e90-2208-490b-b5a1-51ccd1f718f7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "5f56b3e9-af2b-4934-8184-6257994c6b6a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:46:55 compute-1 nova_compute[225855]: 2026-01-20 14:46:55.557 225859 DEBUG oslo_concurrency.lockutils [req-75e7d446-eaae-4984-8052-8be9d3913d08 req-dee04e90-2208-490b-b5a1-51ccd1f718f7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "5f56b3e9-af2b-4934-8184-6257994c6b6a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:46:55 compute-1 nova_compute[225855]: 2026-01-20 14:46:55.558 225859 DEBUG oslo_concurrency.lockutils [req-75e7d446-eaae-4984-8052-8be9d3913d08 req-dee04e90-2208-490b-b5a1-51ccd1f718f7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "5f56b3e9-af2b-4934-8184-6257994c6b6a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:46:55 compute-1 nova_compute[225855]: 2026-01-20 14:46:55.558 225859 DEBUG nova.compute.manager [req-75e7d446-eaae-4984-8052-8be9d3913d08 req-dee04e90-2208-490b-b5a1-51ccd1f718f7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 5f56b3e9-af2b-4934-8184-6257994c6b6a] No waiting events found dispatching network-vif-plugged-ec1a7a25-a60a-40c3-98bf-710c68019b24 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:46:55 compute-1 nova_compute[225855]: 2026-01-20 14:46:55.558 225859 WARNING nova.compute.manager [req-75e7d446-eaae-4984-8052-8be9d3913d08 req-dee04e90-2208-490b-b5a1-51ccd1f718f7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 5f56b3e9-af2b-4934-8184-6257994c6b6a] Received unexpected event network-vif-plugged-ec1a7a25-a60a-40c3-98bf-710c68019b24 for instance with vm_state active and task_state deleting.
Jan 20 14:46:55 compute-1 nova_compute[225855]: 2026-01-20 14:46:55.634 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:46:56 compute-1 nova_compute[225855]: 2026-01-20 14:46:56.321 225859 DEBUG nova.network.neutron [-] [instance: 5f56b3e9-af2b-4934-8184-6257994c6b6a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:46:56 compute-1 nova_compute[225855]: 2026-01-20 14:46:56.519 225859 INFO nova.compute.manager [-] [instance: 5f56b3e9-af2b-4934-8184-6257994c6b6a] Took 1.32 seconds to deallocate network for instance.
Jan 20 14:46:56 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:46:56 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:46:56 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:46:56.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:46:56 compute-1 nova_compute[225855]: 2026-01-20 14:46:56.638 225859 DEBUG oslo_concurrency.lockutils [None req-2cd59ed3-d902-4d19-b251-a0edc6e0598c 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:46:56 compute-1 nova_compute[225855]: 2026-01-20 14:46:56.639 225859 DEBUG oslo_concurrency.lockutils [None req-2cd59ed3-d902-4d19-b251-a0edc6e0598c 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:46:56 compute-1 nova_compute[225855]: 2026-01-20 14:46:56.715 225859 DEBUG oslo_concurrency.processutils [None req-2cd59ed3-d902-4d19-b251-a0edc6e0598c 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:46:56 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e238 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:46:57 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:46:57 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/876713242' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:46:57 compute-1 nova_compute[225855]: 2026-01-20 14:46:57.168 225859 DEBUG oslo_concurrency.processutils [None req-2cd59ed3-d902-4d19-b251-a0edc6e0598c 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:46:57 compute-1 nova_compute[225855]: 2026-01-20 14:46:57.179 225859 DEBUG nova.compute.provider_tree [None req-2cd59ed3-d902-4d19-b251-a0edc6e0598c 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:46:57 compute-1 nova_compute[225855]: 2026-01-20 14:46:57.220 225859 DEBUG nova.scheduler.client.report [None req-2cd59ed3-d902-4d19-b251-a0edc6e0598c 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:46:57 compute-1 ceph-mon[81775]: pgmap v1763: 321 pgs: 321 active+clean; 356 MiB data, 918 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 3.6 MiB/s wr, 246 op/s
Jan 20 14:46:57 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/876713242' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:46:57 compute-1 nova_compute[225855]: 2026-01-20 14:46:57.291 225859 DEBUG oslo_concurrency.lockutils [None req-2cd59ed3-d902-4d19-b251-a0edc6e0598c 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.652s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:46:57 compute-1 nova_compute[225855]: 2026-01-20 14:46:57.331 225859 INFO nova.scheduler.client.report [None req-2cd59ed3-d902-4d19-b251-a0edc6e0598c 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Deleted allocations for instance 5f56b3e9-af2b-4934-8184-6257994c6b6a
Jan 20 14:46:57 compute-1 nova_compute[225855]: 2026-01-20 14:46:57.408 225859 DEBUG oslo_concurrency.lockutils [None req-2cd59ed3-d902-4d19-b251-a0edc6e0598c 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Lock "5f56b3e9-af2b-4934-8184-6257994c6b6a" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.323s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:46:57 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:46:57 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:46:57 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:46:57.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:46:57 compute-1 nova_compute[225855]: 2026-01-20 14:46:57.658 225859 DEBUG nova.compute.manager [req-07a02586-fe04-4644-b47a-f663a1823ff3 req-207c0a5a-82b0-4b18-969a-25a4cd8cff9c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 5f56b3e9-af2b-4934-8184-6257994c6b6a] Received event network-vif-deleted-ec1a7a25-a60a-40c3-98bf-710c68019b24 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:46:58 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/1857006168' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:46:58 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:46:58 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:46:58 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:46:58.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:46:59 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:46:59 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:46:59 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:46:59.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:46:59 compute-1 ceph-mon[81775]: pgmap v1764: 321 pgs: 321 active+clean; 344 MiB data, 913 MiB used, 20 GiB / 21 GiB avail; 4.3 MiB/s rd, 1.9 MiB/s wr, 239 op/s
Jan 20 14:46:59 compute-1 nova_compute[225855]: 2026-01-20 14:46:59.699 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:47:00 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:47:00 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:47:00 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:47:00.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:47:00 compute-1 nova_compute[225855]: 2026-01-20 14:47:00.637 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:47:01 compute-1 podman[261489]: 2026-01-20 14:47:01.055651545 +0000 UTC m=+0.104316496 container health_status 72be95c12e041eb2cd1a16ad70dfa1391d9f03a5f3180a614ead9459dcfcd46f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 20 14:47:01 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:47:01 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:47:01 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:47:01.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:47:01 compute-1 ceph-mon[81775]: pgmap v1765: 321 pgs: 321 active+clean; 309 MiB data, 893 MiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 1.5 MiB/s wr, 229 op/s
Jan 20 14:47:01 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e238 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:47:02 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:47:02 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:47:02 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:47:02.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:47:02 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/2640005168' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:47:03 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:47:03 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:47:03 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:47:03.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:47:03 compute-1 ceph-mon[81775]: pgmap v1766: 321 pgs: 321 active+clean; 242 MiB data, 879 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 781 KiB/s wr, 216 op/s
Jan 20 14:47:03 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/3871043654' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:47:03 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/1489706811' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:47:04 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:47:04 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:47:04 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:47:04.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:47:04 compute-1 nova_compute[225855]: 2026-01-20 14:47:04.700 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:47:05 compute-1 ceph-mon[81775]: pgmap v1767: 321 pgs: 321 active+clean; 228 MiB data, 873 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 44 KiB/s wr, 145 op/s
Jan 20 14:47:05 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/3517147788' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:47:05 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:47:05 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:47:05 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:47:05.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:47:05 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:47:05.464 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5ffd4ac3-9266-4927-98ad-20a17782c725, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '30'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:47:05 compute-1 nova_compute[225855]: 2026-01-20 14:47:05.640 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:47:05 compute-1 nova_compute[225855]: 2026-01-20 14:47:05.786 225859 DEBUG oslo_concurrency.lockutils [None req-76f1c954-bbb9-4c35-a4e2-d14ff7715e96 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Acquiring lock "7efaa6b8-d1bd-4954-83ec-adcdb8e392bf" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:47:05 compute-1 nova_compute[225855]: 2026-01-20 14:47:05.787 225859 DEBUG oslo_concurrency.lockutils [None req-76f1c954-bbb9-4c35-a4e2-d14ff7715e96 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Lock "7efaa6b8-d1bd-4954-83ec-adcdb8e392bf" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:47:05 compute-1 nova_compute[225855]: 2026-01-20 14:47:05.810 225859 DEBUG nova.compute.manager [None req-76f1c954-bbb9-4c35-a4e2-d14ff7715e96 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] [instance: 7efaa6b8-d1bd-4954-83ec-adcdb8e392bf] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 20 14:47:05 compute-1 nova_compute[225855]: 2026-01-20 14:47:05.894 225859 DEBUG oslo_concurrency.lockutils [None req-76f1c954-bbb9-4c35-a4e2-d14ff7715e96 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:47:05 compute-1 nova_compute[225855]: 2026-01-20 14:47:05.894 225859 DEBUG oslo_concurrency.lockutils [None req-76f1c954-bbb9-4c35-a4e2-d14ff7715e96 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:47:05 compute-1 nova_compute[225855]: 2026-01-20 14:47:05.902 225859 DEBUG nova.virt.hardware [None req-76f1c954-bbb9-4c35-a4e2-d14ff7715e96 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 20 14:47:05 compute-1 nova_compute[225855]: 2026-01-20 14:47:05.903 225859 INFO nova.compute.claims [None req-76f1c954-bbb9-4c35-a4e2-d14ff7715e96 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] [instance: 7efaa6b8-d1bd-4954-83ec-adcdb8e392bf] Claim successful on node compute-1.ctlplane.example.com
Jan 20 14:47:06 compute-1 nova_compute[225855]: 2026-01-20 14:47:06.006 225859 DEBUG oslo_concurrency.processutils [None req-76f1c954-bbb9-4c35-a4e2-d14ff7715e96 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:47:06 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:47:06 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3563148152' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:47:06 compute-1 nova_compute[225855]: 2026-01-20 14:47:06.437 225859 DEBUG oslo_concurrency.processutils [None req-76f1c954-bbb9-4c35-a4e2-d14ff7715e96 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:47:06 compute-1 nova_compute[225855]: 2026-01-20 14:47:06.446 225859 DEBUG nova.compute.provider_tree [None req-76f1c954-bbb9-4c35-a4e2-d14ff7715e96 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:47:06 compute-1 nova_compute[225855]: 2026-01-20 14:47:06.461 225859 DEBUG nova.scheduler.client.report [None req-76f1c954-bbb9-4c35-a4e2-d14ff7715e96 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:47:06 compute-1 nova_compute[225855]: 2026-01-20 14:47:06.483 225859 DEBUG oslo_concurrency.lockutils [None req-76f1c954-bbb9-4c35-a4e2-d14ff7715e96 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.588s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:47:06 compute-1 nova_compute[225855]: 2026-01-20 14:47:06.484 225859 DEBUG nova.compute.manager [None req-76f1c954-bbb9-4c35-a4e2-d14ff7715e96 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] [instance: 7efaa6b8-d1bd-4954-83ec-adcdb8e392bf] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 20 14:47:06 compute-1 nova_compute[225855]: 2026-01-20 14:47:06.528 225859 DEBUG nova.compute.manager [None req-76f1c954-bbb9-4c35-a4e2-d14ff7715e96 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] [instance: 7efaa6b8-d1bd-4954-83ec-adcdb8e392bf] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 20 14:47:06 compute-1 nova_compute[225855]: 2026-01-20 14:47:06.529 225859 DEBUG nova.network.neutron [None req-76f1c954-bbb9-4c35-a4e2-d14ff7715e96 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] [instance: 7efaa6b8-d1bd-4954-83ec-adcdb8e392bf] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 20 14:47:06 compute-1 nova_compute[225855]: 2026-01-20 14:47:06.555 225859 INFO nova.virt.libvirt.driver [None req-76f1c954-bbb9-4c35-a4e2-d14ff7715e96 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] [instance: 7efaa6b8-d1bd-4954-83ec-adcdb8e392bf] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 20 14:47:06 compute-1 nova_compute[225855]: 2026-01-20 14:47:06.571 225859 DEBUG nova.compute.manager [None req-76f1c954-bbb9-4c35-a4e2-d14ff7715e96 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] [instance: 7efaa6b8-d1bd-4954-83ec-adcdb8e392bf] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 20 14:47:06 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:47:06 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:47:06 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:47:06.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:47:06 compute-1 nova_compute[225855]: 2026-01-20 14:47:06.648 225859 DEBUG nova.compute.manager [None req-76f1c954-bbb9-4c35-a4e2-d14ff7715e96 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] [instance: 7efaa6b8-d1bd-4954-83ec-adcdb8e392bf] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 20 14:47:06 compute-1 nova_compute[225855]: 2026-01-20 14:47:06.649 225859 DEBUG nova.virt.libvirt.driver [None req-76f1c954-bbb9-4c35-a4e2-d14ff7715e96 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] [instance: 7efaa6b8-d1bd-4954-83ec-adcdb8e392bf] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 20 14:47:06 compute-1 nova_compute[225855]: 2026-01-20 14:47:06.650 225859 INFO nova.virt.libvirt.driver [None req-76f1c954-bbb9-4c35-a4e2-d14ff7715e96 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] [instance: 7efaa6b8-d1bd-4954-83ec-adcdb8e392bf] Creating image(s)
Jan 20 14:47:06 compute-1 nova_compute[225855]: 2026-01-20 14:47:06.674 225859 DEBUG nova.storage.rbd_utils [None req-76f1c954-bbb9-4c35-a4e2-d14ff7715e96 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] rbd image 7efaa6b8-d1bd-4954-83ec-adcdb8e392bf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:47:06 compute-1 nova_compute[225855]: 2026-01-20 14:47:06.698 225859 DEBUG nova.storage.rbd_utils [None req-76f1c954-bbb9-4c35-a4e2-d14ff7715e96 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] rbd image 7efaa6b8-d1bd-4954-83ec-adcdb8e392bf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:47:06 compute-1 nova_compute[225855]: 2026-01-20 14:47:06.720 225859 DEBUG nova.storage.rbd_utils [None req-76f1c954-bbb9-4c35-a4e2-d14ff7715e96 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] rbd image 7efaa6b8-d1bd-4954-83ec-adcdb8e392bf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:47:06 compute-1 nova_compute[225855]: 2026-01-20 14:47:06.722 225859 DEBUG oslo_concurrency.processutils [None req-76f1c954-bbb9-4c35-a4e2-d14ff7715e96 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:47:06 compute-1 nova_compute[225855]: 2026-01-20 14:47:06.782 225859 DEBUG oslo_concurrency.processutils [None req-76f1c954-bbb9-4c35-a4e2-d14ff7715e96 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:47:06 compute-1 nova_compute[225855]: 2026-01-20 14:47:06.783 225859 DEBUG oslo_concurrency.lockutils [None req-76f1c954-bbb9-4c35-a4e2-d14ff7715e96 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Acquiring lock "82d5c1918fd7c974214c7a48c1793a7a82560462" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:47:06 compute-1 nova_compute[225855]: 2026-01-20 14:47:06.784 225859 DEBUG oslo_concurrency.lockutils [None req-76f1c954-bbb9-4c35-a4e2-d14ff7715e96 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:47:06 compute-1 nova_compute[225855]: 2026-01-20 14:47:06.784 225859 DEBUG oslo_concurrency.lockutils [None req-76f1c954-bbb9-4c35-a4e2-d14ff7715e96 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:47:06 compute-1 nova_compute[225855]: 2026-01-20 14:47:06.806 225859 DEBUG nova.storage.rbd_utils [None req-76f1c954-bbb9-4c35-a4e2-d14ff7715e96 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] rbd image 7efaa6b8-d1bd-4954-83ec-adcdb8e392bf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:47:06 compute-1 nova_compute[225855]: 2026-01-20 14:47:06.809 225859 DEBUG oslo_concurrency.processutils [None req-76f1c954-bbb9-4c35-a4e2-d14ff7715e96 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 7efaa6b8-d1bd-4954-83ec-adcdb8e392bf_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:47:06 compute-1 nova_compute[225855]: 2026-01-20 14:47:06.832 225859 DEBUG nova.policy [None req-76f1c954-bbb9-4c35-a4e2-d14ff7715e96 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '869086208e10436c9dc96c78bee9a85d', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b683fcc0026242e28ba6d8fba638688e', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 20 14:47:06 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e238 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:47:07 compute-1 ceph-mon[81775]: pgmap v1768: 321 pgs: 321 active+clean; 227 MiB data, 837 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.2 MiB/s wr, 170 op/s
Jan 20 14:47:07 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/3563148152' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:47:07 compute-1 nova_compute[225855]: 2026-01-20 14:47:07.181 225859 DEBUG oslo_concurrency.processutils [None req-76f1c954-bbb9-4c35-a4e2-d14ff7715e96 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 7efaa6b8-d1bd-4954-83ec-adcdb8e392bf_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.372s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:47:07 compute-1 nova_compute[225855]: 2026-01-20 14:47:07.260 225859 DEBUG nova.storage.rbd_utils [None req-76f1c954-bbb9-4c35-a4e2-d14ff7715e96 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] resizing rbd image 7efaa6b8-d1bd-4954-83ec-adcdb8e392bf_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 20 14:47:07 compute-1 nova_compute[225855]: 2026-01-20 14:47:07.371 225859 DEBUG nova.objects.instance [None req-76f1c954-bbb9-4c35-a4e2-d14ff7715e96 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Lazy-loading 'migration_context' on Instance uuid 7efaa6b8-d1bd-4954-83ec-adcdb8e392bf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:47:07 compute-1 nova_compute[225855]: 2026-01-20 14:47:07.386 225859 DEBUG nova.virt.libvirt.driver [None req-76f1c954-bbb9-4c35-a4e2-d14ff7715e96 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] [instance: 7efaa6b8-d1bd-4954-83ec-adcdb8e392bf] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 20 14:47:07 compute-1 nova_compute[225855]: 2026-01-20 14:47:07.386 225859 DEBUG nova.virt.libvirt.driver [None req-76f1c954-bbb9-4c35-a4e2-d14ff7715e96 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] [instance: 7efaa6b8-d1bd-4954-83ec-adcdb8e392bf] Ensure instance console log exists: /var/lib/nova/instances/7efaa6b8-d1bd-4954-83ec-adcdb8e392bf/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 20 14:47:07 compute-1 nova_compute[225855]: 2026-01-20 14:47:07.387 225859 DEBUG oslo_concurrency.lockutils [None req-76f1c954-bbb9-4c35-a4e2-d14ff7715e96 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:47:07 compute-1 nova_compute[225855]: 2026-01-20 14:47:07.387 225859 DEBUG oslo_concurrency.lockutils [None req-76f1c954-bbb9-4c35-a4e2-d14ff7715e96 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:47:07 compute-1 nova_compute[225855]: 2026-01-20 14:47:07.387 225859 DEBUG oslo_concurrency.lockutils [None req-76f1c954-bbb9-4c35-a4e2-d14ff7715e96 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:47:07 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:47:07 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:47:07 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:47:07.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:47:07 compute-1 nova_compute[225855]: 2026-01-20 14:47:07.477 225859 DEBUG nova.network.neutron [None req-76f1c954-bbb9-4c35-a4e2-d14ff7715e96 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] [instance: 7efaa6b8-d1bd-4954-83ec-adcdb8e392bf] Successfully created port: b93181ae-8a01-468c-adfc-ec8894512d2e _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 20 14:47:08 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/724914289' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 14:47:08 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/724914289' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 14:47:08 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/1903933642' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:47:08 compute-1 sudo[261705]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:47:08 compute-1 sudo[261705]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:47:08 compute-1 sudo[261705]: pam_unix(sudo:session): session closed for user root
Jan 20 14:47:08 compute-1 sudo[261730]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:47:08 compute-1 sudo[261730]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:47:08 compute-1 sudo[261730]: pam_unix(sudo:session): session closed for user root
Jan 20 14:47:08 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:47:08 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:47:08 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:47:08.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:47:09 compute-1 ceph-mon[81775]: pgmap v1769: 321 pgs: 321 active+clean; 252 MiB data, 847 MiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 2.1 MiB/s wr, 157 op/s
Jan 20 14:47:09 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/2155745966' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:47:09 compute-1 nova_compute[225855]: 2026-01-20 14:47:09.159 225859 DEBUG nova.network.neutron [None req-76f1c954-bbb9-4c35-a4e2-d14ff7715e96 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] [instance: 7efaa6b8-d1bd-4954-83ec-adcdb8e392bf] Successfully updated port: b93181ae-8a01-468c-adfc-ec8894512d2e _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 20 14:47:09 compute-1 nova_compute[225855]: 2026-01-20 14:47:09.191 225859 DEBUG oslo_concurrency.lockutils [None req-76f1c954-bbb9-4c35-a4e2-d14ff7715e96 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Acquiring lock "refresh_cache-7efaa6b8-d1bd-4954-83ec-adcdb8e392bf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:47:09 compute-1 nova_compute[225855]: 2026-01-20 14:47:09.191 225859 DEBUG oslo_concurrency.lockutils [None req-76f1c954-bbb9-4c35-a4e2-d14ff7715e96 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Acquired lock "refresh_cache-7efaa6b8-d1bd-4954-83ec-adcdb8e392bf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:47:09 compute-1 nova_compute[225855]: 2026-01-20 14:47:09.191 225859 DEBUG nova.network.neutron [None req-76f1c954-bbb9-4c35-a4e2-d14ff7715e96 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] [instance: 7efaa6b8-d1bd-4954-83ec-adcdb8e392bf] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 14:47:09 compute-1 nova_compute[225855]: 2026-01-20 14:47:09.335 225859 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1768920414.3349535, 5f56b3e9-af2b-4934-8184-6257994c6b6a => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:47:09 compute-1 nova_compute[225855]: 2026-01-20 14:47:09.336 225859 INFO nova.compute.manager [-] [instance: 5f56b3e9-af2b-4934-8184-6257994c6b6a] VM Stopped (Lifecycle Event)
Jan 20 14:47:09 compute-1 nova_compute[225855]: 2026-01-20 14:47:09.340 225859 DEBUG nova.compute.manager [req-5c82e50f-795e-4840-a075-0fa5388da2d3 req-18b55abe-99cc-4f94-9328-0249bdee132e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 7efaa6b8-d1bd-4954-83ec-adcdb8e392bf] Received event network-changed-b93181ae-8a01-468c-adfc-ec8894512d2e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:47:09 compute-1 nova_compute[225855]: 2026-01-20 14:47:09.340 225859 DEBUG nova.compute.manager [req-5c82e50f-795e-4840-a075-0fa5388da2d3 req-18b55abe-99cc-4f94-9328-0249bdee132e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 7efaa6b8-d1bd-4954-83ec-adcdb8e392bf] Refreshing instance network info cache due to event network-changed-b93181ae-8a01-468c-adfc-ec8894512d2e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 14:47:09 compute-1 nova_compute[225855]: 2026-01-20 14:47:09.340 225859 DEBUG oslo_concurrency.lockutils [req-5c82e50f-795e-4840-a075-0fa5388da2d3 req-18b55abe-99cc-4f94-9328-0249bdee132e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-7efaa6b8-d1bd-4954-83ec-adcdb8e392bf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:47:09 compute-1 nova_compute[225855]: 2026-01-20 14:47:09.358 225859 DEBUG nova.compute.manager [None req-7c26a0e0-90f1-44de-8653-8218bf21d1dd - - - - - -] [instance: 5f56b3e9-af2b-4934-8184-6257994c6b6a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:47:09 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:47:09 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:47:09 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:47:09.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:47:09 compute-1 nova_compute[225855]: 2026-01-20 14:47:09.472 225859 DEBUG nova.network.neutron [None req-76f1c954-bbb9-4c35-a4e2-d14ff7715e96 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] [instance: 7efaa6b8-d1bd-4954-83ec-adcdb8e392bf] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 20 14:47:09 compute-1 nova_compute[225855]: 2026-01-20 14:47:09.702 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:47:10 compute-1 nova_compute[225855]: 2026-01-20 14:47:10.327 225859 DEBUG nova.network.neutron [None req-76f1c954-bbb9-4c35-a4e2-d14ff7715e96 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] [instance: 7efaa6b8-d1bd-4954-83ec-adcdb8e392bf] Updating instance_info_cache with network_info: [{"id": "b93181ae-8a01-468c-adfc-ec8894512d2e", "address": "fa:16:3e:cf:63:44", "network": {"id": "a19e9d1a-864f-41ee-bdea-188e65973ea5", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-916311998-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b683fcc0026242e28ba6d8fba638688e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb93181ae-8a", "ovs_interfaceid": "b93181ae-8a01-468c-adfc-ec8894512d2e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:47:10 compute-1 nova_compute[225855]: 2026-01-20 14:47:10.372 225859 DEBUG oslo_concurrency.lockutils [None req-76f1c954-bbb9-4c35-a4e2-d14ff7715e96 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Releasing lock "refresh_cache-7efaa6b8-d1bd-4954-83ec-adcdb8e392bf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:47:10 compute-1 nova_compute[225855]: 2026-01-20 14:47:10.373 225859 DEBUG nova.compute.manager [None req-76f1c954-bbb9-4c35-a4e2-d14ff7715e96 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] [instance: 7efaa6b8-d1bd-4954-83ec-adcdb8e392bf] Instance network_info: |[{"id": "b93181ae-8a01-468c-adfc-ec8894512d2e", "address": "fa:16:3e:cf:63:44", "network": {"id": "a19e9d1a-864f-41ee-bdea-188e65973ea5", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-916311998-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b683fcc0026242e28ba6d8fba638688e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb93181ae-8a", "ovs_interfaceid": "b93181ae-8a01-468c-adfc-ec8894512d2e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 20 14:47:10 compute-1 nova_compute[225855]: 2026-01-20 14:47:10.373 225859 DEBUG oslo_concurrency.lockutils [req-5c82e50f-795e-4840-a075-0fa5388da2d3 req-18b55abe-99cc-4f94-9328-0249bdee132e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-7efaa6b8-d1bd-4954-83ec-adcdb8e392bf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:47:10 compute-1 nova_compute[225855]: 2026-01-20 14:47:10.374 225859 DEBUG nova.network.neutron [req-5c82e50f-795e-4840-a075-0fa5388da2d3 req-18b55abe-99cc-4f94-9328-0249bdee132e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 7efaa6b8-d1bd-4954-83ec-adcdb8e392bf] Refreshing network info cache for port b93181ae-8a01-468c-adfc-ec8894512d2e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 14:47:10 compute-1 nova_compute[225855]: 2026-01-20 14:47:10.377 225859 DEBUG nova.virt.libvirt.driver [None req-76f1c954-bbb9-4c35-a4e2-d14ff7715e96 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] [instance: 7efaa6b8-d1bd-4954-83ec-adcdb8e392bf] Start _get_guest_xml network_info=[{"id": "b93181ae-8a01-468c-adfc-ec8894512d2e", "address": "fa:16:3e:cf:63:44", "network": {"id": "a19e9d1a-864f-41ee-bdea-188e65973ea5", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-916311998-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b683fcc0026242e28ba6d8fba638688e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb93181ae-8a", "ovs_interfaceid": "b93181ae-8a01-468c-adfc-ec8894512d2e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'device_type': 'disk', 'encryption_options': None, 'size': 0, 'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'guest_format': None, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'image_id': 'a32b3e07-16d8-46fd-9a7b-c242c432fcf9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 20 14:47:10 compute-1 nova_compute[225855]: 2026-01-20 14:47:10.381 225859 WARNING nova.virt.libvirt.driver [None req-76f1c954-bbb9-4c35-a4e2-d14ff7715e96 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 14:47:10 compute-1 nova_compute[225855]: 2026-01-20 14:47:10.387 225859 DEBUG nova.virt.libvirt.host [None req-76f1c954-bbb9-4c35-a4e2-d14ff7715e96 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 20 14:47:10 compute-1 nova_compute[225855]: 2026-01-20 14:47:10.388 225859 DEBUG nova.virt.libvirt.host [None req-76f1c954-bbb9-4c35-a4e2-d14ff7715e96 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 20 14:47:10 compute-1 nova_compute[225855]: 2026-01-20 14:47:10.394 225859 DEBUG nova.virt.libvirt.host [None req-76f1c954-bbb9-4c35-a4e2-d14ff7715e96 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 20 14:47:10 compute-1 nova_compute[225855]: 2026-01-20 14:47:10.395 225859 DEBUG nova.virt.libvirt.host [None req-76f1c954-bbb9-4c35-a4e2-d14ff7715e96 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 20 14:47:10 compute-1 nova_compute[225855]: 2026-01-20 14:47:10.396 225859 DEBUG nova.virt.libvirt.driver [None req-76f1c954-bbb9-4c35-a4e2-d14ff7715e96 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 20 14:47:10 compute-1 nova_compute[225855]: 2026-01-20 14:47:10.397 225859 DEBUG nova.virt.hardware [None req-76f1c954-bbb9-4c35-a4e2-d14ff7715e96 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-20T14:21:55Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='522deaab-a741-4dbb-932d-d8b13a211c33',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 20 14:47:10 compute-1 nova_compute[225855]: 2026-01-20 14:47:10.397 225859 DEBUG nova.virt.hardware [None req-76f1c954-bbb9-4c35-a4e2-d14ff7715e96 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 20 14:47:10 compute-1 nova_compute[225855]: 2026-01-20 14:47:10.398 225859 DEBUG nova.virt.hardware [None req-76f1c954-bbb9-4c35-a4e2-d14ff7715e96 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 20 14:47:10 compute-1 nova_compute[225855]: 2026-01-20 14:47:10.398 225859 DEBUG nova.virt.hardware [None req-76f1c954-bbb9-4c35-a4e2-d14ff7715e96 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 20 14:47:10 compute-1 nova_compute[225855]: 2026-01-20 14:47:10.398 225859 DEBUG nova.virt.hardware [None req-76f1c954-bbb9-4c35-a4e2-d14ff7715e96 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 20 14:47:10 compute-1 nova_compute[225855]: 2026-01-20 14:47:10.398 225859 DEBUG nova.virt.hardware [None req-76f1c954-bbb9-4c35-a4e2-d14ff7715e96 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 20 14:47:10 compute-1 nova_compute[225855]: 2026-01-20 14:47:10.399 225859 DEBUG nova.virt.hardware [None req-76f1c954-bbb9-4c35-a4e2-d14ff7715e96 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 20 14:47:10 compute-1 nova_compute[225855]: 2026-01-20 14:47:10.399 225859 DEBUG nova.virt.hardware [None req-76f1c954-bbb9-4c35-a4e2-d14ff7715e96 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 20 14:47:10 compute-1 nova_compute[225855]: 2026-01-20 14:47:10.399 225859 DEBUG nova.virt.hardware [None req-76f1c954-bbb9-4c35-a4e2-d14ff7715e96 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 20 14:47:10 compute-1 nova_compute[225855]: 2026-01-20 14:47:10.399 225859 DEBUG nova.virt.hardware [None req-76f1c954-bbb9-4c35-a4e2-d14ff7715e96 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 20 14:47:10 compute-1 nova_compute[225855]: 2026-01-20 14:47:10.400 225859 DEBUG nova.virt.hardware [None req-76f1c954-bbb9-4c35-a4e2-d14ff7715e96 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 20 14:47:10 compute-1 nova_compute[225855]: 2026-01-20 14:47:10.402 225859 DEBUG oslo_concurrency.processutils [None req-76f1c954-bbb9-4c35-a4e2-d14ff7715e96 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:47:10 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:47:10 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:47:10 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:47:10.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:47:10 compute-1 nova_compute[225855]: 2026-01-20 14:47:10.641 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:47:10 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:47:10 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3117658322' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:47:10 compute-1 nova_compute[225855]: 2026-01-20 14:47:10.858 225859 DEBUG oslo_concurrency.processutils [None req-76f1c954-bbb9-4c35-a4e2-d14ff7715e96 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:47:10 compute-1 nova_compute[225855]: 2026-01-20 14:47:10.880 225859 DEBUG nova.storage.rbd_utils [None req-76f1c954-bbb9-4c35-a4e2-d14ff7715e96 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] rbd image 7efaa6b8-d1bd-4954-83ec-adcdb8e392bf_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:47:10 compute-1 nova_compute[225855]: 2026-01-20 14:47:10.883 225859 DEBUG oslo_concurrency.processutils [None req-76f1c954-bbb9-4c35-a4e2-d14ff7715e96 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:47:11 compute-1 podman[261797]: 2026-01-20 14:47:11.021772479 +0000 UTC m=+0.058812627 container health_status 533cd8e0c3b8b7f910f15cef49e7f879ab46c12ce646a35628c15f13725eb500 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 20 14:47:11 compute-1 ceph-mon[81775]: pgmap v1770: 321 pgs: 321 active+clean; 283 MiB data, 859 MiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 3.4 MiB/s wr, 136 op/s
Jan 20 14:47:11 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/3117658322' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:47:11 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:47:11 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3074709641' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:47:11 compute-1 nova_compute[225855]: 2026-01-20 14:47:11.324 225859 DEBUG oslo_concurrency.processutils [None req-76f1c954-bbb9-4c35-a4e2-d14ff7715e96 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:47:11 compute-1 nova_compute[225855]: 2026-01-20 14:47:11.326 225859 DEBUG nova.virt.libvirt.vif [None req-76f1c954-bbb9-4c35-a4e2-d14ff7715e96 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T14:47:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherA-server-1146964335',display_name='tempest-ServerActionsTestOtherA-server-1146964335',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveractionstestothera-server-1146964335',id=100,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b683fcc0026242e28ba6d8fba638688e',ramdisk_id='',reservation_id='r-pmljdz8l',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherA-967087071',owner_user_name='tempest-ServerActionsT
estOtherA-967087071-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T14:47:06Z,user_data=None,user_id='869086208e10436c9dc96c78bee9a85d',uuid=7efaa6b8-d1bd-4954-83ec-adcdb8e392bf,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b93181ae-8a01-468c-adfc-ec8894512d2e", "address": "fa:16:3e:cf:63:44", "network": {"id": "a19e9d1a-864f-41ee-bdea-188e65973ea5", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-916311998-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b683fcc0026242e28ba6d8fba638688e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb93181ae-8a", "ovs_interfaceid": "b93181ae-8a01-468c-adfc-ec8894512d2e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 20 14:47:11 compute-1 nova_compute[225855]: 2026-01-20 14:47:11.326 225859 DEBUG nova.network.os_vif_util [None req-76f1c954-bbb9-4c35-a4e2-d14ff7715e96 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Converting VIF {"id": "b93181ae-8a01-468c-adfc-ec8894512d2e", "address": "fa:16:3e:cf:63:44", "network": {"id": "a19e9d1a-864f-41ee-bdea-188e65973ea5", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-916311998-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b683fcc0026242e28ba6d8fba638688e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb93181ae-8a", "ovs_interfaceid": "b93181ae-8a01-468c-adfc-ec8894512d2e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:47:11 compute-1 nova_compute[225855]: 2026-01-20 14:47:11.327 225859 DEBUG nova.network.os_vif_util [None req-76f1c954-bbb9-4c35-a4e2-d14ff7715e96 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cf:63:44,bridge_name='br-int',has_traffic_filtering=True,id=b93181ae-8a01-468c-adfc-ec8894512d2e,network=Network(a19e9d1a-864f-41ee-bdea-188e65973ea5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb93181ae-8a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:47:11 compute-1 nova_compute[225855]: 2026-01-20 14:47:11.328 225859 DEBUG nova.objects.instance [None req-76f1c954-bbb9-4c35-a4e2-d14ff7715e96 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Lazy-loading 'pci_devices' on Instance uuid 7efaa6b8-d1bd-4954-83ec-adcdb8e392bf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:47:11 compute-1 nova_compute[225855]: 2026-01-20 14:47:11.349 225859 DEBUG nova.virt.libvirt.driver [None req-76f1c954-bbb9-4c35-a4e2-d14ff7715e96 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] [instance: 7efaa6b8-d1bd-4954-83ec-adcdb8e392bf] End _get_guest_xml xml=<domain type="kvm">
Jan 20 14:47:11 compute-1 nova_compute[225855]:   <uuid>7efaa6b8-d1bd-4954-83ec-adcdb8e392bf</uuid>
Jan 20 14:47:11 compute-1 nova_compute[225855]:   <name>instance-00000064</name>
Jan 20 14:47:11 compute-1 nova_compute[225855]:   <memory>131072</memory>
Jan 20 14:47:11 compute-1 nova_compute[225855]:   <vcpu>1</vcpu>
Jan 20 14:47:11 compute-1 nova_compute[225855]:   <metadata>
Jan 20 14:47:11 compute-1 nova_compute[225855]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 14:47:11 compute-1 nova_compute[225855]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 14:47:11 compute-1 nova_compute[225855]:       <nova:name>tempest-ServerActionsTestOtherA-server-1146964335</nova:name>
Jan 20 14:47:11 compute-1 nova_compute[225855]:       <nova:creationTime>2026-01-20 14:47:10</nova:creationTime>
Jan 20 14:47:11 compute-1 nova_compute[225855]:       <nova:flavor name="m1.nano">
Jan 20 14:47:11 compute-1 nova_compute[225855]:         <nova:memory>128</nova:memory>
Jan 20 14:47:11 compute-1 nova_compute[225855]:         <nova:disk>1</nova:disk>
Jan 20 14:47:11 compute-1 nova_compute[225855]:         <nova:swap>0</nova:swap>
Jan 20 14:47:11 compute-1 nova_compute[225855]:         <nova:ephemeral>0</nova:ephemeral>
Jan 20 14:47:11 compute-1 nova_compute[225855]:         <nova:vcpus>1</nova:vcpus>
Jan 20 14:47:11 compute-1 nova_compute[225855]:       </nova:flavor>
Jan 20 14:47:11 compute-1 nova_compute[225855]:       <nova:owner>
Jan 20 14:47:11 compute-1 nova_compute[225855]:         <nova:user uuid="869086208e10436c9dc96c78bee9a85d">tempest-ServerActionsTestOtherA-967087071-project-member</nova:user>
Jan 20 14:47:11 compute-1 nova_compute[225855]:         <nova:project uuid="b683fcc0026242e28ba6d8fba638688e">tempest-ServerActionsTestOtherA-967087071</nova:project>
Jan 20 14:47:11 compute-1 nova_compute[225855]:       </nova:owner>
Jan 20 14:47:11 compute-1 nova_compute[225855]:       <nova:root type="image" uuid="a32b3e07-16d8-46fd-9a7b-c242c432fcf9"/>
Jan 20 14:47:11 compute-1 nova_compute[225855]:       <nova:ports>
Jan 20 14:47:11 compute-1 nova_compute[225855]:         <nova:port uuid="b93181ae-8a01-468c-adfc-ec8894512d2e">
Jan 20 14:47:11 compute-1 nova_compute[225855]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Jan 20 14:47:11 compute-1 nova_compute[225855]:         </nova:port>
Jan 20 14:47:11 compute-1 nova_compute[225855]:       </nova:ports>
Jan 20 14:47:11 compute-1 nova_compute[225855]:     </nova:instance>
Jan 20 14:47:11 compute-1 nova_compute[225855]:   </metadata>
Jan 20 14:47:11 compute-1 nova_compute[225855]:   <sysinfo type="smbios">
Jan 20 14:47:11 compute-1 nova_compute[225855]:     <system>
Jan 20 14:47:11 compute-1 nova_compute[225855]:       <entry name="manufacturer">RDO</entry>
Jan 20 14:47:11 compute-1 nova_compute[225855]:       <entry name="product">OpenStack Compute</entry>
Jan 20 14:47:11 compute-1 nova_compute[225855]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 14:47:11 compute-1 nova_compute[225855]:       <entry name="serial">7efaa6b8-d1bd-4954-83ec-adcdb8e392bf</entry>
Jan 20 14:47:11 compute-1 nova_compute[225855]:       <entry name="uuid">7efaa6b8-d1bd-4954-83ec-adcdb8e392bf</entry>
Jan 20 14:47:11 compute-1 nova_compute[225855]:       <entry name="family">Virtual Machine</entry>
Jan 20 14:47:11 compute-1 nova_compute[225855]:     </system>
Jan 20 14:47:11 compute-1 nova_compute[225855]:   </sysinfo>
Jan 20 14:47:11 compute-1 nova_compute[225855]:   <os>
Jan 20 14:47:11 compute-1 nova_compute[225855]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 20 14:47:11 compute-1 nova_compute[225855]:     <boot dev="hd"/>
Jan 20 14:47:11 compute-1 nova_compute[225855]:     <smbios mode="sysinfo"/>
Jan 20 14:47:11 compute-1 nova_compute[225855]:   </os>
Jan 20 14:47:11 compute-1 nova_compute[225855]:   <features>
Jan 20 14:47:11 compute-1 nova_compute[225855]:     <acpi/>
Jan 20 14:47:11 compute-1 nova_compute[225855]:     <apic/>
Jan 20 14:47:11 compute-1 nova_compute[225855]:     <vmcoreinfo/>
Jan 20 14:47:11 compute-1 nova_compute[225855]:   </features>
Jan 20 14:47:11 compute-1 nova_compute[225855]:   <clock offset="utc">
Jan 20 14:47:11 compute-1 nova_compute[225855]:     <timer name="pit" tickpolicy="delay"/>
Jan 20 14:47:11 compute-1 nova_compute[225855]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 20 14:47:11 compute-1 nova_compute[225855]:     <timer name="hpet" present="no"/>
Jan 20 14:47:11 compute-1 nova_compute[225855]:   </clock>
Jan 20 14:47:11 compute-1 nova_compute[225855]:   <cpu mode="custom" match="exact">
Jan 20 14:47:11 compute-1 nova_compute[225855]:     <model>Nehalem</model>
Jan 20 14:47:11 compute-1 nova_compute[225855]:     <topology sockets="1" cores="1" threads="1"/>
Jan 20 14:47:11 compute-1 nova_compute[225855]:   </cpu>
Jan 20 14:47:11 compute-1 nova_compute[225855]:   <devices>
Jan 20 14:47:11 compute-1 nova_compute[225855]:     <disk type="network" device="disk">
Jan 20 14:47:11 compute-1 nova_compute[225855]:       <driver type="raw" cache="none"/>
Jan 20 14:47:11 compute-1 nova_compute[225855]:       <source protocol="rbd" name="vms/7efaa6b8-d1bd-4954-83ec-adcdb8e392bf_disk">
Jan 20 14:47:11 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:47:11 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:47:11 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:47:11 compute-1 nova_compute[225855]:       </source>
Jan 20 14:47:11 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 14:47:11 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:47:11 compute-1 nova_compute[225855]:       </auth>
Jan 20 14:47:11 compute-1 nova_compute[225855]:       <target dev="vda" bus="virtio"/>
Jan 20 14:47:11 compute-1 nova_compute[225855]:     </disk>
Jan 20 14:47:11 compute-1 nova_compute[225855]:     <disk type="network" device="cdrom">
Jan 20 14:47:11 compute-1 nova_compute[225855]:       <driver type="raw" cache="none"/>
Jan 20 14:47:11 compute-1 nova_compute[225855]:       <source protocol="rbd" name="vms/7efaa6b8-d1bd-4954-83ec-adcdb8e392bf_disk.config">
Jan 20 14:47:11 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:47:11 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:47:11 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:47:11 compute-1 nova_compute[225855]:       </source>
Jan 20 14:47:11 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 14:47:11 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:47:11 compute-1 nova_compute[225855]:       </auth>
Jan 20 14:47:11 compute-1 nova_compute[225855]:       <target dev="sda" bus="sata"/>
Jan 20 14:47:11 compute-1 nova_compute[225855]:     </disk>
Jan 20 14:47:11 compute-1 nova_compute[225855]:     <interface type="ethernet">
Jan 20 14:47:11 compute-1 nova_compute[225855]:       <mac address="fa:16:3e:cf:63:44"/>
Jan 20 14:47:11 compute-1 nova_compute[225855]:       <model type="virtio"/>
Jan 20 14:47:11 compute-1 nova_compute[225855]:       <driver name="vhost" rx_queue_size="512"/>
Jan 20 14:47:11 compute-1 nova_compute[225855]:       <mtu size="1442"/>
Jan 20 14:47:11 compute-1 nova_compute[225855]:       <target dev="tapb93181ae-8a"/>
Jan 20 14:47:11 compute-1 nova_compute[225855]:     </interface>
Jan 20 14:47:11 compute-1 nova_compute[225855]:     <serial type="pty">
Jan 20 14:47:11 compute-1 nova_compute[225855]:       <log file="/var/lib/nova/instances/7efaa6b8-d1bd-4954-83ec-adcdb8e392bf/console.log" append="off"/>
Jan 20 14:47:11 compute-1 nova_compute[225855]:     </serial>
Jan 20 14:47:11 compute-1 nova_compute[225855]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 20 14:47:11 compute-1 nova_compute[225855]:     <video>
Jan 20 14:47:11 compute-1 nova_compute[225855]:       <model type="virtio"/>
Jan 20 14:47:11 compute-1 nova_compute[225855]:     </video>
Jan 20 14:47:11 compute-1 nova_compute[225855]:     <input type="tablet" bus="usb"/>
Jan 20 14:47:11 compute-1 nova_compute[225855]:     <rng model="virtio">
Jan 20 14:47:11 compute-1 nova_compute[225855]:       <backend model="random">/dev/urandom</backend>
Jan 20 14:47:11 compute-1 nova_compute[225855]:     </rng>
Jan 20 14:47:11 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root"/>
Jan 20 14:47:11 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:47:11 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:47:11 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:47:11 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:47:11 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:47:11 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:47:11 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:47:11 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:47:11 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:47:11 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:47:11 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:47:11 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:47:11 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:47:11 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:47:11 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:47:11 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:47:11 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:47:11 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:47:11 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:47:11 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:47:11 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:47:11 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:47:11 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:47:11 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:47:11 compute-1 nova_compute[225855]:     <controller type="usb" index="0"/>
Jan 20 14:47:11 compute-1 nova_compute[225855]:     <memballoon model="virtio">
Jan 20 14:47:11 compute-1 nova_compute[225855]:       <stats period="10"/>
Jan 20 14:47:11 compute-1 nova_compute[225855]:     </memballoon>
Jan 20 14:47:11 compute-1 nova_compute[225855]:   </devices>
Jan 20 14:47:11 compute-1 nova_compute[225855]: </domain>
Jan 20 14:47:11 compute-1 nova_compute[225855]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 20 14:47:11 compute-1 nova_compute[225855]: 2026-01-20 14:47:11.350 225859 DEBUG nova.compute.manager [None req-76f1c954-bbb9-4c35-a4e2-d14ff7715e96 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] [instance: 7efaa6b8-d1bd-4954-83ec-adcdb8e392bf] Preparing to wait for external event network-vif-plugged-b93181ae-8a01-468c-adfc-ec8894512d2e prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 20 14:47:11 compute-1 nova_compute[225855]: 2026-01-20 14:47:11.350 225859 DEBUG oslo_concurrency.lockutils [None req-76f1c954-bbb9-4c35-a4e2-d14ff7715e96 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Acquiring lock "7efaa6b8-d1bd-4954-83ec-adcdb8e392bf-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:47:11 compute-1 nova_compute[225855]: 2026-01-20 14:47:11.350 225859 DEBUG oslo_concurrency.lockutils [None req-76f1c954-bbb9-4c35-a4e2-d14ff7715e96 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Lock "7efaa6b8-d1bd-4954-83ec-adcdb8e392bf-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:47:11 compute-1 nova_compute[225855]: 2026-01-20 14:47:11.350 225859 DEBUG oslo_concurrency.lockutils [None req-76f1c954-bbb9-4c35-a4e2-d14ff7715e96 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Lock "7efaa6b8-d1bd-4954-83ec-adcdb8e392bf-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:47:11 compute-1 nova_compute[225855]: 2026-01-20 14:47:11.351 225859 DEBUG nova.virt.libvirt.vif [None req-76f1c954-bbb9-4c35-a4e2-d14ff7715e96 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T14:47:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherA-server-1146964335',display_name='tempest-ServerActionsTestOtherA-server-1146964335',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveractionstestothera-server-1146964335',id=100,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b683fcc0026242e28ba6d8fba638688e',ramdisk_id='',reservation_id='r-pmljdz8l',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherA-967087071',owner_user_name='tempest-ServerActionsTestOtherA-967087071-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T14:47:06Z,user_data=None,user_id='869086208e10436c9dc96c78bee9a85d',uuid=7efaa6b8-d1bd-4954-83ec-adcdb8e392bf,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b93181ae-8a01-468c-adfc-ec8894512d2e", "address": "fa:16:3e:cf:63:44", "network": {"id": "a19e9d1a-864f-41ee-bdea-188e65973ea5", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-916311998-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b683fcc0026242e28ba6d8fba638688e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb93181ae-8a", "ovs_interfaceid": "b93181ae-8a01-468c-adfc-ec8894512d2e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 20 14:47:11 compute-1 nova_compute[225855]: 2026-01-20 14:47:11.351 225859 DEBUG nova.network.os_vif_util [None req-76f1c954-bbb9-4c35-a4e2-d14ff7715e96 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Converting VIF {"id": "b93181ae-8a01-468c-adfc-ec8894512d2e", "address": "fa:16:3e:cf:63:44", "network": {"id": "a19e9d1a-864f-41ee-bdea-188e65973ea5", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-916311998-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b683fcc0026242e28ba6d8fba638688e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb93181ae-8a", "ovs_interfaceid": "b93181ae-8a01-468c-adfc-ec8894512d2e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:47:11 compute-1 nova_compute[225855]: 2026-01-20 14:47:11.352 225859 DEBUG nova.network.os_vif_util [None req-76f1c954-bbb9-4c35-a4e2-d14ff7715e96 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cf:63:44,bridge_name='br-int',has_traffic_filtering=True,id=b93181ae-8a01-468c-adfc-ec8894512d2e,network=Network(a19e9d1a-864f-41ee-bdea-188e65973ea5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb93181ae-8a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:47:11 compute-1 nova_compute[225855]: 2026-01-20 14:47:11.352 225859 DEBUG os_vif [None req-76f1c954-bbb9-4c35-a4e2-d14ff7715e96 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:cf:63:44,bridge_name='br-int',has_traffic_filtering=True,id=b93181ae-8a01-468c-adfc-ec8894512d2e,network=Network(a19e9d1a-864f-41ee-bdea-188e65973ea5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb93181ae-8a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 20 14:47:11 compute-1 nova_compute[225855]: 2026-01-20 14:47:11.352 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:47:11 compute-1 nova_compute[225855]: 2026-01-20 14:47:11.353 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:47:11 compute-1 nova_compute[225855]: 2026-01-20 14:47:11.353 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:47:11 compute-1 nova_compute[225855]: 2026-01-20 14:47:11.355 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:47:11 compute-1 nova_compute[225855]: 2026-01-20 14:47:11.355 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb93181ae-8a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:47:11 compute-1 nova_compute[225855]: 2026-01-20 14:47:11.356 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb93181ae-8a, col_values=(('external_ids', {'iface-id': 'b93181ae-8a01-468c-adfc-ec8894512d2e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:cf:63:44', 'vm-uuid': '7efaa6b8-d1bd-4954-83ec-adcdb8e392bf'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:47:11 compute-1 nova_compute[225855]: 2026-01-20 14:47:11.357 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:47:11 compute-1 NetworkManager[49104]: <info>  [1768920431.3586] manager: (tapb93181ae-8a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/145)
Jan 20 14:47:11 compute-1 nova_compute[225855]: 2026-01-20 14:47:11.360 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 14:47:11 compute-1 nova_compute[225855]: 2026-01-20 14:47:11.363 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:47:11 compute-1 nova_compute[225855]: 2026-01-20 14:47:11.364 225859 INFO os_vif [None req-76f1c954-bbb9-4c35-a4e2-d14ff7715e96 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:cf:63:44,bridge_name='br-int',has_traffic_filtering=True,id=b93181ae-8a01-468c-adfc-ec8894512d2e,network=Network(a19e9d1a-864f-41ee-bdea-188e65973ea5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb93181ae-8a')
Jan 20 14:47:11 compute-1 nova_compute[225855]: 2026-01-20 14:47:11.417 225859 DEBUG nova.virt.libvirt.driver [None req-76f1c954-bbb9-4c35-a4e2-d14ff7715e96 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:47:11 compute-1 nova_compute[225855]: 2026-01-20 14:47:11.418 225859 DEBUG nova.virt.libvirt.driver [None req-76f1c954-bbb9-4c35-a4e2-d14ff7715e96 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:47:11 compute-1 nova_compute[225855]: 2026-01-20 14:47:11.418 225859 DEBUG nova.virt.libvirt.driver [None req-76f1c954-bbb9-4c35-a4e2-d14ff7715e96 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] No VIF found with MAC fa:16:3e:cf:63:44, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 20 14:47:11 compute-1 nova_compute[225855]: 2026-01-20 14:47:11.418 225859 INFO nova.virt.libvirt.driver [None req-76f1c954-bbb9-4c35-a4e2-d14ff7715e96 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] [instance: 7efaa6b8-d1bd-4954-83ec-adcdb8e392bf] Using config drive
Jan 20 14:47:11 compute-1 nova_compute[225855]: 2026-01-20 14:47:11.439 225859 DEBUG nova.storage.rbd_utils [None req-76f1c954-bbb9-4c35-a4e2-d14ff7715e96 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] rbd image 7efaa6b8-d1bd-4954-83ec-adcdb8e392bf_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:47:11 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:47:11 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:47:11 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:47:11.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:47:11 compute-1 ovn_controller[130490]: 2026-01-20T14:47:11Z|00340|binding|INFO|Releasing lport 5527ab8d-a985-420b-9d5b-7e5d9baf7004 from this chassis (sb_readonly=0)
Jan 20 14:47:11 compute-1 nova_compute[225855]: 2026-01-20 14:47:11.589 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:47:11 compute-1 nova_compute[225855]: 2026-01-20 14:47:11.656 225859 DEBUG nova.network.neutron [req-5c82e50f-795e-4840-a075-0fa5388da2d3 req-18b55abe-99cc-4f94-9328-0249bdee132e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 7efaa6b8-d1bd-4954-83ec-adcdb8e392bf] Updated VIF entry in instance network info cache for port b93181ae-8a01-468c-adfc-ec8894512d2e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 14:47:11 compute-1 nova_compute[225855]: 2026-01-20 14:47:11.657 225859 DEBUG nova.network.neutron [req-5c82e50f-795e-4840-a075-0fa5388da2d3 req-18b55abe-99cc-4f94-9328-0249bdee132e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 7efaa6b8-d1bd-4954-83ec-adcdb8e392bf] Updating instance_info_cache with network_info: [{"id": "b93181ae-8a01-468c-adfc-ec8894512d2e", "address": "fa:16:3e:cf:63:44", "network": {"id": "a19e9d1a-864f-41ee-bdea-188e65973ea5", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-916311998-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b683fcc0026242e28ba6d8fba638688e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb93181ae-8a", "ovs_interfaceid": "b93181ae-8a01-468c-adfc-ec8894512d2e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:47:11 compute-1 nova_compute[225855]: 2026-01-20 14:47:11.680 225859 DEBUG oslo_concurrency.lockutils [req-5c82e50f-795e-4840-a075-0fa5388da2d3 req-18b55abe-99cc-4f94-9328-0249bdee132e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-7efaa6b8-d1bd-4954-83ec-adcdb8e392bf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:47:11 compute-1 nova_compute[225855]: 2026-01-20 14:47:11.885 225859 INFO nova.virt.libvirt.driver [None req-76f1c954-bbb9-4c35-a4e2-d14ff7715e96 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] [instance: 7efaa6b8-d1bd-4954-83ec-adcdb8e392bf] Creating config drive at /var/lib/nova/instances/7efaa6b8-d1bd-4954-83ec-adcdb8e392bf/disk.config
Jan 20 14:47:11 compute-1 nova_compute[225855]: 2026-01-20 14:47:11.896 225859 DEBUG oslo_concurrency.processutils [None req-76f1c954-bbb9-4c35-a4e2-d14ff7715e96 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7efaa6b8-d1bd-4954-83ec-adcdb8e392bf/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpmhi2a2b3 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:47:11 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e238 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:47:12 compute-1 nova_compute[225855]: 2026-01-20 14:47:12.043 225859 DEBUG oslo_concurrency.processutils [None req-76f1c954-bbb9-4c35-a4e2-d14ff7715e96 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7efaa6b8-d1bd-4954-83ec-adcdb8e392bf/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpmhi2a2b3" returned: 0 in 0.147s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:47:12 compute-1 nova_compute[225855]: 2026-01-20 14:47:12.088 225859 DEBUG nova.storage.rbd_utils [None req-76f1c954-bbb9-4c35-a4e2-d14ff7715e96 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] rbd image 7efaa6b8-d1bd-4954-83ec-adcdb8e392bf_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:47:12 compute-1 nova_compute[225855]: 2026-01-20 14:47:12.093 225859 DEBUG oslo_concurrency.processutils [None req-76f1c954-bbb9-4c35-a4e2-d14ff7715e96 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/7efaa6b8-d1bd-4954-83ec-adcdb8e392bf/disk.config 7efaa6b8-d1bd-4954-83ec-adcdb8e392bf_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:47:12 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:47:12 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:47:12 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:47:12.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:47:12 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/3074709641' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:47:12 compute-1 nova_compute[225855]: 2026-01-20 14:47:12.809 225859 DEBUG oslo_concurrency.processutils [None req-76f1c954-bbb9-4c35-a4e2-d14ff7715e96 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/7efaa6b8-d1bd-4954-83ec-adcdb8e392bf/disk.config 7efaa6b8-d1bd-4954-83ec-adcdb8e392bf_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.716s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:47:12 compute-1 nova_compute[225855]: 2026-01-20 14:47:12.810 225859 INFO nova.virt.libvirt.driver [None req-76f1c954-bbb9-4c35-a4e2-d14ff7715e96 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] [instance: 7efaa6b8-d1bd-4954-83ec-adcdb8e392bf] Deleting local config drive /var/lib/nova/instances/7efaa6b8-d1bd-4954-83ec-adcdb8e392bf/disk.config because it was imported into RBD.
Jan 20 14:47:12 compute-1 kernel: tapb93181ae-8a: entered promiscuous mode
Jan 20 14:47:12 compute-1 ovn_controller[130490]: 2026-01-20T14:47:12Z|00341|binding|INFO|Claiming lport b93181ae-8a01-468c-adfc-ec8894512d2e for this chassis.
Jan 20 14:47:12 compute-1 ovn_controller[130490]: 2026-01-20T14:47:12Z|00342|binding|INFO|b93181ae-8a01-468c-adfc-ec8894512d2e: Claiming fa:16:3e:cf:63:44 10.100.0.5
Jan 20 14:47:12 compute-1 NetworkManager[49104]: <info>  [1768920432.8646] manager: (tapb93181ae-8a): new Tun device (/org/freedesktop/NetworkManager/Devices/146)
Jan 20 14:47:12 compute-1 nova_compute[225855]: 2026-01-20 14:47:12.863 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:47:12 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:47:12.869 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cf:63:44 10.100.0.5'], port_security=['fa:16:3e:cf:63:44 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '7efaa6b8-d1bd-4954-83ec-adcdb8e392bf', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a19e9d1a-864f-41ee-bdea-188e65973ea5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b683fcc0026242e28ba6d8fba638688e', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'ac411cec-795a-42a6-ba83-9468a87a4a14', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=361f9a69-30a6-4be4-89ad-2a8f92877af2, chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], logical_port=b93181ae-8a01-468c-adfc-ec8894512d2e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:47:12 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:47:12.870 140354 INFO neutron.agent.ovn.metadata.agent [-] Port b93181ae-8a01-468c-adfc-ec8894512d2e in datapath a19e9d1a-864f-41ee-bdea-188e65973ea5 bound to our chassis
Jan 20 14:47:12 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:47:12.871 140354 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a19e9d1a-864f-41ee-bdea-188e65973ea5
Jan 20 14:47:12 compute-1 ovn_controller[130490]: 2026-01-20T14:47:12Z|00343|binding|INFO|Setting lport b93181ae-8a01-468c-adfc-ec8894512d2e ovn-installed in OVS
Jan 20 14:47:12 compute-1 ovn_controller[130490]: 2026-01-20T14:47:12Z|00344|binding|INFO|Setting lport b93181ae-8a01-468c-adfc-ec8894512d2e up in Southbound
Jan 20 14:47:12 compute-1 nova_compute[225855]: 2026-01-20 14:47:12.883 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:47:12 compute-1 nova_compute[225855]: 2026-01-20 14:47:12.886 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:47:12 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:47:12.887 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[8830fcf9-88dd-4f9a-8c39-77fbe6e42d8c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:47:12 compute-1 systemd-machined[194361]: New machine qemu-40-instance-00000064.
Jan 20 14:47:12 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:47:12.918 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[e47440dd-4e29-4ab3-9cc0-cd397fd25204]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:47:12 compute-1 systemd[1]: Started Virtual Machine qemu-40-instance-00000064.
Jan 20 14:47:12 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:47:12.924 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[eeeea1cd-aed1-4741-8fd8-7b3405761819]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:47:12 compute-1 systemd-udevd[261915]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 14:47:12 compute-1 NetworkManager[49104]: <info>  [1768920432.9420] device (tapb93181ae-8a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 20 14:47:12 compute-1 NetworkManager[49104]: <info>  [1768920432.9433] device (tapb93181ae-8a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 20 14:47:12 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:47:12.958 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[e59a738c-8e47-4f33-ba07-fc2e146b84aa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:47:12 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:47:12.978 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[97e333c5-017d-475d-bc8f-f9a1fb784887]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa19e9d1a-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:21:53:13'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 13, 'rx_bytes': 700, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 13, 'rx_bytes': 700, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 82], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 529090, 'reachable_time': 16021, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 261925, 'error': None, 'target': 'ovnmeta-a19e9d1a-864f-41ee-bdea-188e65973ea5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:47:13 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:47:12.998 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[7012ed3a-2097-41f6-b7d0-cce6c001c82d]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapa19e9d1a-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 529100, 'tstamp': 529100}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 261927, 'error': None, 'target': 'ovnmeta-a19e9d1a-864f-41ee-bdea-188e65973ea5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapa19e9d1a-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 529104, 'tstamp': 529104}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 261927, 'error': None, 'target': 'ovnmeta-a19e9d1a-864f-41ee-bdea-188e65973ea5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:47:13 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:47:13.001 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa19e9d1a-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:47:13 compute-1 nova_compute[225855]: 2026-01-20 14:47:13.002 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:47:13 compute-1 nova_compute[225855]: 2026-01-20 14:47:13.004 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:47:13 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:47:13.004 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa19e9d1a-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:47:13 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:47:13.005 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:47:13 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:47:13.005 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa19e9d1a-80, col_values=(('external_ids', {'iface-id': '5527ab8d-a985-420b-9d5b-7e5d9baf7004'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:47:13 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:47:13.005 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:47:13 compute-1 nova_compute[225855]: 2026-01-20 14:47:13.114 225859 DEBUG nova.compute.manager [req-e0ad3675-efdb-4cf8-ad73-05a5e0451c82 req-0e6de81c-4d63-457e-b333-f691537ae25a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 7efaa6b8-d1bd-4954-83ec-adcdb8e392bf] Received event network-vif-plugged-b93181ae-8a01-468c-adfc-ec8894512d2e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:47:13 compute-1 nova_compute[225855]: 2026-01-20 14:47:13.115 225859 DEBUG oslo_concurrency.lockutils [req-e0ad3675-efdb-4cf8-ad73-05a5e0451c82 req-0e6de81c-4d63-457e-b333-f691537ae25a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "7efaa6b8-d1bd-4954-83ec-adcdb8e392bf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:47:13 compute-1 nova_compute[225855]: 2026-01-20 14:47:13.116 225859 DEBUG oslo_concurrency.lockutils [req-e0ad3675-efdb-4cf8-ad73-05a5e0451c82 req-0e6de81c-4d63-457e-b333-f691537ae25a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "7efaa6b8-d1bd-4954-83ec-adcdb8e392bf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:47:13 compute-1 nova_compute[225855]: 2026-01-20 14:47:13.116 225859 DEBUG oslo_concurrency.lockutils [req-e0ad3675-efdb-4cf8-ad73-05a5e0451c82 req-0e6de81c-4d63-457e-b333-f691537ae25a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "7efaa6b8-d1bd-4954-83ec-adcdb8e392bf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:47:13 compute-1 nova_compute[225855]: 2026-01-20 14:47:13.116 225859 DEBUG nova.compute.manager [req-e0ad3675-efdb-4cf8-ad73-05a5e0451c82 req-0e6de81c-4d63-457e-b333-f691537ae25a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 7efaa6b8-d1bd-4954-83ec-adcdb8e392bf] Processing event network-vif-plugged-b93181ae-8a01-468c-adfc-ec8894512d2e _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 20 14:47:13 compute-1 nova_compute[225855]: 2026-01-20 14:47:13.260 225859 DEBUG nova.compute.manager [None req-76f1c954-bbb9-4c35-a4e2-d14ff7715e96 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] [instance: 7efaa6b8-d1bd-4954-83ec-adcdb8e392bf] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 20 14:47:13 compute-1 nova_compute[225855]: 2026-01-20 14:47:13.261 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768920433.2600465, 7efaa6b8-d1bd-4954-83ec-adcdb8e392bf => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:47:13 compute-1 nova_compute[225855]: 2026-01-20 14:47:13.261 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 7efaa6b8-d1bd-4954-83ec-adcdb8e392bf] VM Started (Lifecycle Event)
Jan 20 14:47:13 compute-1 nova_compute[225855]: 2026-01-20 14:47:13.265 225859 DEBUG nova.virt.libvirt.driver [None req-76f1c954-bbb9-4c35-a4e2-d14ff7715e96 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] [instance: 7efaa6b8-d1bd-4954-83ec-adcdb8e392bf] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 20 14:47:13 compute-1 nova_compute[225855]: 2026-01-20 14:47:13.268 225859 INFO nova.virt.libvirt.driver [-] [instance: 7efaa6b8-d1bd-4954-83ec-adcdb8e392bf] Instance spawned successfully.
Jan 20 14:47:13 compute-1 nova_compute[225855]: 2026-01-20 14:47:13.268 225859 DEBUG nova.virt.libvirt.driver [None req-76f1c954-bbb9-4c35-a4e2-d14ff7715e96 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] [instance: 7efaa6b8-d1bd-4954-83ec-adcdb8e392bf] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 20 14:47:13 compute-1 nova_compute[225855]: 2026-01-20 14:47:13.281 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 7efaa6b8-d1bd-4954-83ec-adcdb8e392bf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:47:13 compute-1 nova_compute[225855]: 2026-01-20 14:47:13.286 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 7efaa6b8-d1bd-4954-83ec-adcdb8e392bf] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:47:13 compute-1 nova_compute[225855]: 2026-01-20 14:47:13.290 225859 DEBUG nova.virt.libvirt.driver [None req-76f1c954-bbb9-4c35-a4e2-d14ff7715e96 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] [instance: 7efaa6b8-d1bd-4954-83ec-adcdb8e392bf] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:47:13 compute-1 nova_compute[225855]: 2026-01-20 14:47:13.290 225859 DEBUG nova.virt.libvirt.driver [None req-76f1c954-bbb9-4c35-a4e2-d14ff7715e96 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] [instance: 7efaa6b8-d1bd-4954-83ec-adcdb8e392bf] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:47:13 compute-1 nova_compute[225855]: 2026-01-20 14:47:13.291 225859 DEBUG nova.virt.libvirt.driver [None req-76f1c954-bbb9-4c35-a4e2-d14ff7715e96 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] [instance: 7efaa6b8-d1bd-4954-83ec-adcdb8e392bf] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:47:13 compute-1 nova_compute[225855]: 2026-01-20 14:47:13.291 225859 DEBUG nova.virt.libvirt.driver [None req-76f1c954-bbb9-4c35-a4e2-d14ff7715e96 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] [instance: 7efaa6b8-d1bd-4954-83ec-adcdb8e392bf] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:47:13 compute-1 nova_compute[225855]: 2026-01-20 14:47:13.292 225859 DEBUG nova.virt.libvirt.driver [None req-76f1c954-bbb9-4c35-a4e2-d14ff7715e96 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] [instance: 7efaa6b8-d1bd-4954-83ec-adcdb8e392bf] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:47:13 compute-1 nova_compute[225855]: 2026-01-20 14:47:13.292 225859 DEBUG nova.virt.libvirt.driver [None req-76f1c954-bbb9-4c35-a4e2-d14ff7715e96 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] [instance: 7efaa6b8-d1bd-4954-83ec-adcdb8e392bf] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:47:13 compute-1 nova_compute[225855]: 2026-01-20 14:47:13.322 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 7efaa6b8-d1bd-4954-83ec-adcdb8e392bf] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 14:47:13 compute-1 nova_compute[225855]: 2026-01-20 14:47:13.322 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768920433.261002, 7efaa6b8-d1bd-4954-83ec-adcdb8e392bf => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:47:13 compute-1 nova_compute[225855]: 2026-01-20 14:47:13.322 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 7efaa6b8-d1bd-4954-83ec-adcdb8e392bf] VM Paused (Lifecycle Event)
Jan 20 14:47:13 compute-1 nova_compute[225855]: 2026-01-20 14:47:13.348 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 7efaa6b8-d1bd-4954-83ec-adcdb8e392bf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:47:13 compute-1 nova_compute[225855]: 2026-01-20 14:47:13.352 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768920433.2645874, 7efaa6b8-d1bd-4954-83ec-adcdb8e392bf => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:47:13 compute-1 nova_compute[225855]: 2026-01-20 14:47:13.352 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 7efaa6b8-d1bd-4954-83ec-adcdb8e392bf] VM Resumed (Lifecycle Event)
Jan 20 14:47:13 compute-1 nova_compute[225855]: 2026-01-20 14:47:13.374 225859 INFO nova.compute.manager [None req-76f1c954-bbb9-4c35-a4e2-d14ff7715e96 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] [instance: 7efaa6b8-d1bd-4954-83ec-adcdb8e392bf] Took 6.73 seconds to spawn the instance on the hypervisor.
Jan 20 14:47:13 compute-1 nova_compute[225855]: 2026-01-20 14:47:13.374 225859 DEBUG nova.compute.manager [None req-76f1c954-bbb9-4c35-a4e2-d14ff7715e96 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] [instance: 7efaa6b8-d1bd-4954-83ec-adcdb8e392bf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:47:13 compute-1 nova_compute[225855]: 2026-01-20 14:47:13.382 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 7efaa6b8-d1bd-4954-83ec-adcdb8e392bf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:47:13 compute-1 nova_compute[225855]: 2026-01-20 14:47:13.386 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 7efaa6b8-d1bd-4954-83ec-adcdb8e392bf] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:47:13 compute-1 nova_compute[225855]: 2026-01-20 14:47:13.408 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 7efaa6b8-d1bd-4954-83ec-adcdb8e392bf] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 14:47:13 compute-1 nova_compute[225855]: 2026-01-20 14:47:13.429 225859 INFO nova.compute.manager [None req-76f1c954-bbb9-4c35-a4e2-d14ff7715e96 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] [instance: 7efaa6b8-d1bd-4954-83ec-adcdb8e392bf] Took 7.56 seconds to build instance.
Jan 20 14:47:13 compute-1 nova_compute[225855]: 2026-01-20 14:47:13.450 225859 DEBUG oslo_concurrency.lockutils [None req-76f1c954-bbb9-4c35-a4e2-d14ff7715e96 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Lock "7efaa6b8-d1bd-4954-83ec-adcdb8e392bf" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.664s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:47:13 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:47:13 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:47:13 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:47:13.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:47:13 compute-1 ceph-mon[81775]: pgmap v1771: 321 pgs: 321 active+clean; 295 MiB data, 879 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 183 op/s
Jan 20 14:47:14 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/1676694116' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 14:47:14 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/1676694116' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 14:47:14 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:47:14 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:47:14 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:47:14.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:47:14 compute-1 ovn_controller[130490]: 2026-01-20T14:47:14Z|00345|binding|INFO|Releasing lport 5527ab8d-a985-420b-9d5b-7e5d9baf7004 from this chassis (sb_readonly=0)
Jan 20 14:47:14 compute-1 nova_compute[225855]: 2026-01-20 14:47:14.993 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:47:15 compute-1 nova_compute[225855]: 2026-01-20 14:47:15.220 225859 DEBUG nova.compute.manager [req-2b017cd5-02f6-4140-8df6-9e61646e28cf req-41fe49c6-46e3-4774-95f9-4e86508d82f7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 7efaa6b8-d1bd-4954-83ec-adcdb8e392bf] Received event network-vif-plugged-b93181ae-8a01-468c-adfc-ec8894512d2e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:47:15 compute-1 nova_compute[225855]: 2026-01-20 14:47:15.221 225859 DEBUG oslo_concurrency.lockutils [req-2b017cd5-02f6-4140-8df6-9e61646e28cf req-41fe49c6-46e3-4774-95f9-4e86508d82f7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "7efaa6b8-d1bd-4954-83ec-adcdb8e392bf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:47:15 compute-1 nova_compute[225855]: 2026-01-20 14:47:15.221 225859 DEBUG oslo_concurrency.lockutils [req-2b017cd5-02f6-4140-8df6-9e61646e28cf req-41fe49c6-46e3-4774-95f9-4e86508d82f7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "7efaa6b8-d1bd-4954-83ec-adcdb8e392bf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:47:15 compute-1 nova_compute[225855]: 2026-01-20 14:47:15.222 225859 DEBUG oslo_concurrency.lockutils [req-2b017cd5-02f6-4140-8df6-9e61646e28cf req-41fe49c6-46e3-4774-95f9-4e86508d82f7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "7efaa6b8-d1bd-4954-83ec-adcdb8e392bf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:47:15 compute-1 nova_compute[225855]: 2026-01-20 14:47:15.222 225859 DEBUG nova.compute.manager [req-2b017cd5-02f6-4140-8df6-9e61646e28cf req-41fe49c6-46e3-4774-95f9-4e86508d82f7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 7efaa6b8-d1bd-4954-83ec-adcdb8e392bf] No waiting events found dispatching network-vif-plugged-b93181ae-8a01-468c-adfc-ec8894512d2e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:47:15 compute-1 nova_compute[225855]: 2026-01-20 14:47:15.222 225859 WARNING nova.compute.manager [req-2b017cd5-02f6-4140-8df6-9e61646e28cf req-41fe49c6-46e3-4774-95f9-4e86508d82f7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 7efaa6b8-d1bd-4954-83ec-adcdb8e392bf] Received unexpected event network-vif-plugged-b93181ae-8a01-468c-adfc-ec8894512d2e for instance with vm_state active and task_state None.
Jan 20 14:47:15 compute-1 nova_compute[225855]: 2026-01-20 14:47:15.385 225859 DEBUG nova.compute.manager [req-ba5fdde9-fe78-470f-a774-bf9a14ca9719 req-e1b32a4b-73a0-4f97-bd81-fcd5f26f8ca0 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 7efaa6b8-d1bd-4954-83ec-adcdb8e392bf] Received event network-changed-b93181ae-8a01-468c-adfc-ec8894512d2e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:47:15 compute-1 nova_compute[225855]: 2026-01-20 14:47:15.386 225859 DEBUG nova.compute.manager [req-ba5fdde9-fe78-470f-a774-bf9a14ca9719 req-e1b32a4b-73a0-4f97-bd81-fcd5f26f8ca0 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 7efaa6b8-d1bd-4954-83ec-adcdb8e392bf] Refreshing instance network info cache due to event network-changed-b93181ae-8a01-468c-adfc-ec8894512d2e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 14:47:15 compute-1 nova_compute[225855]: 2026-01-20 14:47:15.386 225859 DEBUG oslo_concurrency.lockutils [req-ba5fdde9-fe78-470f-a774-bf9a14ca9719 req-e1b32a4b-73a0-4f97-bd81-fcd5f26f8ca0 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-7efaa6b8-d1bd-4954-83ec-adcdb8e392bf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:47:15 compute-1 nova_compute[225855]: 2026-01-20 14:47:15.386 225859 DEBUG oslo_concurrency.lockutils [req-ba5fdde9-fe78-470f-a774-bf9a14ca9719 req-e1b32a4b-73a0-4f97-bd81-fcd5f26f8ca0 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-7efaa6b8-d1bd-4954-83ec-adcdb8e392bf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:47:15 compute-1 nova_compute[225855]: 2026-01-20 14:47:15.387 225859 DEBUG nova.network.neutron [req-ba5fdde9-fe78-470f-a774-bf9a14ca9719 req-e1b32a4b-73a0-4f97-bd81-fcd5f26f8ca0 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 7efaa6b8-d1bd-4954-83ec-adcdb8e392bf] Refreshing network info cache for port b93181ae-8a01-468c-adfc-ec8894512d2e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 14:47:15 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:47:15 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:47:15 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:47:15.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:47:15 compute-1 ceph-mon[81775]: pgmap v1772: 321 pgs: 321 active+clean; 295 MiB data, 879 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 3.6 MiB/s wr, 182 op/s
Jan 20 14:47:15 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/279901560' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:47:15 compute-1 nova_compute[225855]: 2026-01-20 14:47:15.642 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:47:16 compute-1 nova_compute[225855]: 2026-01-20 14:47:16.358 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:47:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:47:16.408 140354 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:47:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:47:16.409 140354 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:47:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:47:16.410 140354 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:47:16 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:47:16 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:47:16 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:47:16.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:47:16 compute-1 nova_compute[225855]: 2026-01-20 14:47:16.908 225859 DEBUG nova.network.neutron [req-ba5fdde9-fe78-470f-a774-bf9a14ca9719 req-e1b32a4b-73a0-4f97-bd81-fcd5f26f8ca0 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 7efaa6b8-d1bd-4954-83ec-adcdb8e392bf] Updated VIF entry in instance network info cache for port b93181ae-8a01-468c-adfc-ec8894512d2e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 14:47:16 compute-1 nova_compute[225855]: 2026-01-20 14:47:16.908 225859 DEBUG nova.network.neutron [req-ba5fdde9-fe78-470f-a774-bf9a14ca9719 req-e1b32a4b-73a0-4f97-bd81-fcd5f26f8ca0 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 7efaa6b8-d1bd-4954-83ec-adcdb8e392bf] Updating instance_info_cache with network_info: [{"id": "b93181ae-8a01-468c-adfc-ec8894512d2e", "address": "fa:16:3e:cf:63:44", "network": {"id": "a19e9d1a-864f-41ee-bdea-188e65973ea5", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-916311998-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b683fcc0026242e28ba6d8fba638688e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb93181ae-8a", "ovs_interfaceid": "b93181ae-8a01-468c-adfc-ec8894512d2e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:47:16 compute-1 nova_compute[225855]: 2026-01-20 14:47:16.943 225859 DEBUG oslo_concurrency.lockutils [req-ba5fdde9-fe78-470f-a774-bf9a14ca9719 req-e1b32a4b-73a0-4f97-bd81-fcd5f26f8ca0 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-7efaa6b8-d1bd-4954-83ec-adcdb8e392bf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:47:16 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e238 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:47:17 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:47:17 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:47:17 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:47:17.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:47:17 compute-1 ceph-mon[81775]: pgmap v1773: 321 pgs: 321 active+clean; 295 MiB data, 859 MiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 3.6 MiB/s wr, 244 op/s
Jan 20 14:47:18 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:47:18 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:47:18 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:47:18.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:47:19 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:47:19 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:47:19 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:47:19.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:47:19 compute-1 ceph-mon[81775]: pgmap v1774: 321 pgs: 321 active+clean; 295 MiB data, 859 MiB used, 20 GiB / 21 GiB avail; 4.9 MiB/s rd, 2.4 MiB/s wr, 249 op/s
Jan 20 14:47:20 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:47:20 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:47:20 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:47:20.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:47:20 compute-1 nova_compute[225855]: 2026-01-20 14:47:20.645 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:47:21 compute-1 nova_compute[225855]: 2026-01-20 14:47:21.362 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:47:21 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:47:21 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:47:21 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:47:21.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:47:21 compute-1 sudo[261976]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:47:21 compute-1 sudo[261976]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:47:21 compute-1 sudo[261976]: pam_unix(sudo:session): session closed for user root
Jan 20 14:47:21 compute-1 sudo[262001]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:47:21 compute-1 sudo[262001]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:47:21 compute-1 sudo[262001]: pam_unix(sudo:session): session closed for user root
Jan 20 14:47:21 compute-1 sudo[262026]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:47:21 compute-1 sudo[262026]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:47:21 compute-1 sudo[262026]: pam_unix(sudo:session): session closed for user root
Jan 20 14:47:21 compute-1 sudo[262051]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 20 14:47:21 compute-1 sudo[262051]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:47:21 compute-1 ceph-mon[81775]: pgmap v1775: 321 pgs: 321 active+clean; 295 MiB data, 859 MiB used, 20 GiB / 21 GiB avail; 5.2 MiB/s rd, 1.5 MiB/s wr, 247 op/s
Jan 20 14:47:22 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e238 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:47:22 compute-1 ceph-osd[79119]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 20 14:47:22 compute-1 ceph-osd[79119]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3000.1 total, 600.0 interval
                                           Cumulative writes: 31K writes, 124K keys, 31K commit groups, 1.0 writes per commit group, ingest: 0.12 GB, 0.04 MB/s
                                           Cumulative WAL: 31K writes, 10K syncs, 2.88 writes per sync, written: 0.12 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 8085 writes, 31K keys, 8085 commit groups, 1.0 writes per commit group, ingest: 34.64 MB, 0.06 MB/s
                                           Interval WAL: 8084 writes, 3158 syncs, 2.56 writes per sync, written: 0.03 GB, 0.06 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 20 14:47:22 compute-1 sudo[262051]: pam_unix(sudo:session): session closed for user root
Jan 20 14:47:22 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:47:22 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:47:22 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:47:22.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:47:23 compute-1 ceph-osd[79119]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #48. Immutable memtables: 5.
Jan 20 14:47:23 compute-1 ceph-mon[81775]: pgmap v1776: 321 pgs: 321 active+clean; 295 MiB data, 859 MiB used, 20 GiB / 21 GiB avail; 5.2 MiB/s rd, 196 KiB/s wr, 258 op/s
Jan 20 14:47:23 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:47:23 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 14:47:23 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:47:23 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 14:47:23 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 14:47:23 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:47:23 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:47:23 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:47:23 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:47:23.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:47:23 compute-1 nova_compute[225855]: 2026-01-20 14:47:23.558 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:47:24 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:47:24 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:47:24 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:47:24.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:47:25 compute-1 ceph-mon[81775]: pgmap v1777: 321 pgs: 321 active+clean; 295 MiB data, 859 MiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 23 KiB/s wr, 197 op/s
Jan 20 14:47:25 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:47:25 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:47:25 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:47:25.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:47:25 compute-1 nova_compute[225855]: 2026-01-20 14:47:25.648 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:47:26 compute-1 nova_compute[225855]: 2026-01-20 14:47:26.400 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:47:26 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:47:26 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:47:26 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:47:26.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:47:27 compute-1 ceph-mon[81775]: pgmap v1778: 321 pgs: 321 active+clean; 315 MiB data, 887 MiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 1.6 MiB/s wr, 221 op/s
Jan 20 14:47:27 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e238 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:47:27 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:47:27 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:47:27 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:47:27.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:47:28 compute-1 sudo[262110]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:47:28 compute-1 sudo[262110]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:47:28 compute-1 sudo[262110]: pam_unix(sudo:session): session closed for user root
Jan 20 14:47:28 compute-1 sudo[262135]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:47:28 compute-1 sudo[262135]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:47:28 compute-1 sudo[262135]: pam_unix(sudo:session): session closed for user root
Jan 20 14:47:28 compute-1 sudo[262160]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:47:28 compute-1 sudo[262160]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:47:28 compute-1 sudo[262160]: pam_unix(sudo:session): session closed for user root
Jan 20 14:47:28 compute-1 sudo[262185]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 14:47:28 compute-1 sudo[262185]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:47:28 compute-1 sudo[262185]: pam_unix(sudo:session): session closed for user root
Jan 20 14:47:28 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:47:28 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:47:28 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:47:28.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:47:29 compute-1 ceph-mon[81775]: pgmap v1779: 321 pgs: 321 active+clean; 366 MiB data, 930 MiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 4.6 MiB/s wr, 206 op/s
Jan 20 14:47:29 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:47:29 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:47:29 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:47:29 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:47:29 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:47:29.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:47:30 compute-1 nova_compute[225855]: 2026-01-20 14:47:30.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:47:30 compute-1 nova_compute[225855]: 2026-01-20 14:47:30.340 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:47:30 compute-1 nova_compute[225855]: 2026-01-20 14:47:30.578 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:47:30 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:47:30 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:47:30 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:47:30.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:47:30 compute-1 nova_compute[225855]: 2026-01-20 14:47:30.650 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:47:31 compute-1 nova_compute[225855]: 2026-01-20 14:47:31.340 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:47:31 compute-1 nova_compute[225855]: 2026-01-20 14:47:31.341 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 14:47:31 compute-1 ceph-mon[81775]: pgmap v1780: 321 pgs: 321 active+clean; 393 MiB data, 941 MiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 5.5 MiB/s wr, 213 op/s
Jan 20 14:47:31 compute-1 nova_compute[225855]: 2026-01-20 14:47:31.369 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 20 14:47:31 compute-1 nova_compute[225855]: 2026-01-20 14:47:31.402 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:47:31 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:47:31 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:47:31 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:47:31.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:47:32 compute-1 podman[262212]: 2026-01-20 14:47:32.056468863 +0000 UTC m=+0.089000152 container health_status 72be95c12e041eb2cd1a16ad70dfa1391d9f03a5f3180a614ead9459dcfcd46f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 20 14:47:32 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e238 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:47:32 compute-1 nova_compute[225855]: 2026-01-20 14:47:32.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:47:32 compute-1 nova_compute[225855]: 2026-01-20 14:47:32.339 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 14:47:32 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:47:32 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:47:32 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:47:32.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:47:33 compute-1 ceph-mon[81775]: pgmap v1781: 321 pgs: 321 active+clean; 403 MiB data, 954 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 6.0 MiB/s wr, 190 op/s
Jan 20 14:47:33 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/3143770173' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:47:33 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:47:33 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:47:33 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:47:33.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:47:34 compute-1 nova_compute[225855]: 2026-01-20 14:47:34.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:47:34 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/2578026580' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:47:34 compute-1 ceph-mon[81775]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #79. Immutable memtables: 0.
Jan 20 14:47:34 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:47:34.519046) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 20 14:47:34 compute-1 ceph-mon[81775]: rocksdb: [db/flush_job.cc:856] [default] [JOB 47] Flushing memtable with next log file: 79
Jan 20 14:47:34 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768920454519155, "job": 47, "event": "flush_started", "num_memtables": 1, "num_entries": 1636, "num_deletes": 254, "total_data_size": 3488221, "memory_usage": 3537944, "flush_reason": "Manual Compaction"}
Jan 20 14:47:34 compute-1 ceph-mon[81775]: rocksdb: [db/flush_job.cc:885] [default] [JOB 47] Level-0 flush table #80: started
Jan 20 14:47:34 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768920454534645, "cf_name": "default", "job": 47, "event": "table_file_creation", "file_number": 80, "file_size": 1416448, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 41471, "largest_seqno": 43101, "table_properties": {"data_size": 1411216, "index_size": 2436, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1733, "raw_key_size": 14662, "raw_average_key_size": 21, "raw_value_size": 1399361, "raw_average_value_size": 2031, "num_data_blocks": 108, "num_entries": 689, "num_filter_entries": 689, "num_deletions": 254, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768920327, "oldest_key_time": 1768920327, "file_creation_time": 1768920454, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1539d774-8a6f-4e48-b253-137c44586344", "db_session_id": "LFF7G2OZDOU7TKQ8MKAH", "orig_file_number": 80, "seqno_to_time_mapping": "N/A"}}
Jan 20 14:47:34 compute-1 ceph-mon[81775]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 47] Flush lasted 15630 microseconds, and 8246 cpu microseconds.
Jan 20 14:47:34 compute-1 ceph-mon[81775]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 14:47:34 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:47:34.534685) [db/flush_job.cc:967] [default] [JOB 47] Level-0 flush table #80: 1416448 bytes OK
Jan 20 14:47:34 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:47:34.534702) [db/memtable_list.cc:519] [default] Level-0 commit table #80 started
Jan 20 14:47:34 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:47:34.536354) [db/memtable_list.cc:722] [default] Level-0 commit table #80: memtable #1 done
Jan 20 14:47:34 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:47:34.536371) EVENT_LOG_v1 {"time_micros": 1768920454536366, "job": 47, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 20 14:47:34 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:47:34.536448) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 20 14:47:34 compute-1 ceph-mon[81775]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 47] Try to delete WAL files size 3480631, prev total WAL file size 3480631, number of live WAL files 2.
Jan 20 14:47:34 compute-1 ceph-mon[81775]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000076.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 14:47:34 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:47:34.537932) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031323533' seq:72057594037927935, type:22 .. '6D6772737461740031353036' seq:0, type:0; will stop at (end)
Jan 20 14:47:34 compute-1 ceph-mon[81775]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 48] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 20 14:47:34 compute-1 ceph-mon[81775]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 47 Base level 0, inputs: [80(1383KB)], [78(10MB)]
Jan 20 14:47:34 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768920454538016, "job": 48, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [80], "files_L6": [78], "score": -1, "input_data_size": 12678576, "oldest_snapshot_seqno": -1}
Jan 20 14:47:34 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:47:34 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:47:34 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:47:34.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:47:34 compute-1 ceph-mon[81775]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 48] Generated table #81: 6919 keys, 9676799 bytes, temperature: kUnknown
Jan 20 14:47:34 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768920454736717, "cf_name": "default", "job": 48, "event": "table_file_creation", "file_number": 81, "file_size": 9676799, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9632588, "index_size": 25795, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17349, "raw_key_size": 176842, "raw_average_key_size": 25, "raw_value_size": 9510916, "raw_average_value_size": 1374, "num_data_blocks": 1026, "num_entries": 6919, "num_filter_entries": 6919, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768917474, "oldest_key_time": 0, "file_creation_time": 1768920454, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1539d774-8a6f-4e48-b253-137c44586344", "db_session_id": "LFF7G2OZDOU7TKQ8MKAH", "orig_file_number": 81, "seqno_to_time_mapping": "N/A"}}
Jan 20 14:47:34 compute-1 ceph-mon[81775]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 14:47:34 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:47:34.736990) [db/compaction/compaction_job.cc:1663] [default] [JOB 48] Compacted 1@0 + 1@6 files to L6 => 9676799 bytes
Jan 20 14:47:34 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:47:34.773687) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 63.8 rd, 48.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.4, 10.7 +0.0 blob) out(9.2 +0.0 blob), read-write-amplify(15.8) write-amplify(6.8) OK, records in: 7387, records dropped: 468 output_compression: NoCompression
Jan 20 14:47:34 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:47:34.773722) EVENT_LOG_v1 {"time_micros": 1768920454773711, "job": 48, "event": "compaction_finished", "compaction_time_micros": 198763, "compaction_time_cpu_micros": 50897, "output_level": 6, "num_output_files": 1, "total_output_size": 9676799, "num_input_records": 7387, "num_output_records": 6919, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 20 14:47:34 compute-1 ceph-mon[81775]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000080.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 14:47:34 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768920454774194, "job": 48, "event": "table_file_deletion", "file_number": 80}
Jan 20 14:47:34 compute-1 ceph-mon[81775]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000078.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 14:47:34 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768920454776099, "job": 48, "event": "table_file_deletion", "file_number": 78}
Jan 20 14:47:34 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:47:34.537730) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:47:34 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:47:34.776216) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:47:34 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:47:34.776223) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:47:34 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:47:34.776225) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:47:34 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:47:34.776227) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:47:34 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:47:34.776229) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:47:35 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:47:35 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:47:35 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:47:35.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:47:35 compute-1 ceph-mon[81775]: pgmap v1782: 321 pgs: 321 active+clean; 407 MiB data, 966 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 6.1 MiB/s wr, 173 op/s
Jan 20 14:47:35 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/2747494950' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:47:35 compute-1 nova_compute[225855]: 2026-01-20 14:47:35.652 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:47:36 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:47:36 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2814695511' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:47:36 compute-1 nova_compute[225855]: 2026-01-20 14:47:36.404 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:47:36 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e239 e239: 3 total, 3 up, 3 in
Jan 20 14:47:36 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/1246613625' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:47:36 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/2814695511' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:47:36 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/589147395' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:47:36 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:47:36 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:47:36 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:47:36.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:47:37 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e239 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:47:37 compute-1 nova_compute[225855]: 2026-01-20 14:47:37.340 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:47:37 compute-1 nova_compute[225855]: 2026-01-20 14:47:37.341 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:47:37 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:47:37.363 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=31, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '12:bb:42', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '06:92:24:f7:15:56'}, ipsec=False) old=SB_Global(nb_cfg=30) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:47:37 compute-1 nova_compute[225855]: 2026-01-20 14:47:37.364 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:47:37 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:47:37.365 140354 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 20 14:47:37 compute-1 nova_compute[225855]: 2026-01-20 14:47:37.419 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:47:37 compute-1 nova_compute[225855]: 2026-01-20 14:47:37.419 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:47:37 compute-1 nova_compute[225855]: 2026-01-20 14:47:37.420 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:47:37 compute-1 nova_compute[225855]: 2026-01-20 14:47:37.421 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 14:47:37 compute-1 nova_compute[225855]: 2026-01-20 14:47:37.421 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:47:37 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:47:37 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:47:37 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:47:37.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:47:37 compute-1 ceph-mon[81775]: pgmap v1783: 321 pgs: 321 active+clean; 407 MiB data, 966 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 6.1 MiB/s wr, 173 op/s
Jan 20 14:47:37 compute-1 ceph-mon[81775]: osdmap e239: 3 total, 3 up, 3 in
Jan 20 14:47:37 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/1261327825' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:47:37 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/3809842418' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:47:37 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:47:37 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2919054783' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:47:37 compute-1 nova_compute[225855]: 2026-01-20 14:47:37.900 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:47:38 compute-1 nova_compute[225855]: 2026-01-20 14:47:38.009 225859 DEBUG nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] skipping disk for instance-00000057 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 14:47:38 compute-1 nova_compute[225855]: 2026-01-20 14:47:38.010 225859 DEBUG nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] skipping disk for instance-00000057 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 14:47:38 compute-1 nova_compute[225855]: 2026-01-20 14:47:38.015 225859 DEBUG nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] skipping disk for instance-00000064 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 14:47:38 compute-1 nova_compute[225855]: 2026-01-20 14:47:38.015 225859 DEBUG nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] skipping disk for instance-00000064 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 14:47:38 compute-1 nova_compute[225855]: 2026-01-20 14:47:38.180 225859 WARNING nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 14:47:38 compute-1 nova_compute[225855]: 2026-01-20 14:47:38.182 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4117MB free_disk=20.80602264404297GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 14:47:38 compute-1 nova_compute[225855]: 2026-01-20 14:47:38.182 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:47:38 compute-1 nova_compute[225855]: 2026-01-20 14:47:38.182 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:47:38 compute-1 nova_compute[225855]: 2026-01-20 14:47:38.279 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Instance 6586bc3e-3a94-4d22-8e8c-713a86a956fb actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 20 14:47:38 compute-1 nova_compute[225855]: 2026-01-20 14:47:38.280 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Instance 7efaa6b8-d1bd-4954-83ec-adcdb8e392bf actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 20 14:47:38 compute-1 nova_compute[225855]: 2026-01-20 14:47:38.280 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 14:47:38 compute-1 nova_compute[225855]: 2026-01-20 14:47:38.280 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 14:47:38 compute-1 nova_compute[225855]: 2026-01-20 14:47:38.375 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:47:38 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/2610479657' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:47:38 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/2919054783' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:47:38 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:47:38 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:47:38 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:47:38.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:47:38 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:47:38 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3930138932' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:47:38 compute-1 nova_compute[225855]: 2026-01-20 14:47:38.845 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:47:38 compute-1 nova_compute[225855]: 2026-01-20 14:47:38.852 225859 DEBUG nova.compute.provider_tree [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:47:39 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:47:39 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:47:39 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:47:39.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:47:39 compute-1 ceph-mon[81775]: pgmap v1785: 321 pgs: 321 active+clean; 411 MiB data, 966 MiB used, 20 GiB / 21 GiB avail; 342 KiB/s rd, 1.8 MiB/s wr, 82 op/s
Jan 20 14:47:39 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/1209793672' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:47:39 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/3930138932' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:47:40 compute-1 nova_compute[225855]: 2026-01-20 14:47:40.082 225859 DEBUG nova.scheduler.client.report [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:47:40 compute-1 nova_compute[225855]: 2026-01-20 14:47:40.110 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 14:47:40 compute-1 nova_compute[225855]: 2026-01-20 14:47:40.111 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.929s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:47:40 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:47:40 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:47:40 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:47:40.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:47:40 compute-1 nova_compute[225855]: 2026-01-20 14:47:40.654 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:47:41 compute-1 nova_compute[225855]: 2026-01-20 14:47:41.106 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:47:41 compute-1 nova_compute[225855]: 2026-01-20 14:47:41.106 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:47:41 compute-1 nova_compute[225855]: 2026-01-20 14:47:41.407 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:47:41 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:47:41 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:47:41 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:47:41.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:47:41 compute-1 ceph-mon[81775]: pgmap v1786: 321 pgs: 321 active+clean; 433 MiB data, 979 MiB used, 20 GiB / 21 GiB avail; 179 KiB/s rd, 1.6 MiB/s wr, 49 op/s
Jan 20 14:47:42 compute-1 podman[262292]: 2026-01-20 14:47:42.003659252 +0000 UTC m=+0.050605764 container health_status 533cd8e0c3b8b7f910f15cef49e7f879ab46c12ce646a35628c15f13725eb500 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 20 14:47:42 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e239 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:47:42 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:47:42 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:47:42 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:47:42.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:47:43 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:47:43 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:47:43 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:47:43.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:47:43 compute-1 ceph-mon[81775]: pgmap v1787: 321 pgs: 321 active+clean; 453 MiB data, 991 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 2.2 MiB/s wr, 115 op/s
Jan 20 14:47:44 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:47:44 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:47:44 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:47:44.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:47:45 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:47:45.367 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5ffd4ac3-9266-4927-98ad-20a17782c725, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '31'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:47:45 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/3517018525' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:47:45 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/1346847683' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:47:45 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:47:45 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:47:45 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:47:45.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:47:45 compute-1 nova_compute[225855]: 2026-01-20 14:47:45.656 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:47:46 compute-1 nova_compute[225855]: 2026-01-20 14:47:46.408 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:47:46 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:47:46 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:47:46 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:47:46.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:47:46 compute-1 ceph-mon[81775]: pgmap v1788: 321 pgs: 321 active+clean; 453 MiB data, 991 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.2 MiB/s wr, 143 op/s
Jan 20 14:47:47 compute-1 nova_compute[225855]: 2026-01-20 14:47:47.258 225859 DEBUG oslo_concurrency.lockutils [None req-72afa0d3-f22b-4916-8bef-62f9235254a3 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Acquiring lock "d699cf6a-9c33-400b-8d0f-4d61b8b16916" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:47:47 compute-1 nova_compute[225855]: 2026-01-20 14:47:47.259 225859 DEBUG oslo_concurrency.lockutils [None req-72afa0d3-f22b-4916-8bef-62f9235254a3 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Lock "d699cf6a-9c33-400b-8d0f-4d61b8b16916" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:47:47 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e239 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:47:47 compute-1 nova_compute[225855]: 2026-01-20 14:47:47.281 225859 DEBUG nova.compute.manager [None req-72afa0d3-f22b-4916-8bef-62f9235254a3 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] [instance: d699cf6a-9c33-400b-8d0f-4d61b8b16916] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 20 14:47:47 compute-1 nova_compute[225855]: 2026-01-20 14:47:47.348 225859 DEBUG oslo_concurrency.lockutils [None req-72afa0d3-f22b-4916-8bef-62f9235254a3 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:47:47 compute-1 nova_compute[225855]: 2026-01-20 14:47:47.349 225859 DEBUG oslo_concurrency.lockutils [None req-72afa0d3-f22b-4916-8bef-62f9235254a3 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:47:47 compute-1 nova_compute[225855]: 2026-01-20 14:47:47.354 225859 DEBUG nova.virt.hardware [None req-72afa0d3-f22b-4916-8bef-62f9235254a3 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 20 14:47:47 compute-1 nova_compute[225855]: 2026-01-20 14:47:47.354 225859 INFO nova.compute.claims [None req-72afa0d3-f22b-4916-8bef-62f9235254a3 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] [instance: d699cf6a-9c33-400b-8d0f-4d61b8b16916] Claim successful on node compute-1.ctlplane.example.com
Jan 20 14:47:47 compute-1 nova_compute[225855]: 2026-01-20 14:47:47.407 225859 DEBUG nova.compute.manager [req-fd34f402-c720-4127-83c3-6789282bb8e3 req-76a6d4c6-0dcc-4359-a4dc-0ff4cb9b16b4 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 6586bc3e-3a94-4d22-8e8c-713a86a956fb] Received event network-changed-2c289e6f-295e-44c3-948a-9a6901251890 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:47:47 compute-1 nova_compute[225855]: 2026-01-20 14:47:47.407 225859 DEBUG nova.compute.manager [req-fd34f402-c720-4127-83c3-6789282bb8e3 req-76a6d4c6-0dcc-4359-a4dc-0ff4cb9b16b4 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 6586bc3e-3a94-4d22-8e8c-713a86a956fb] Refreshing instance network info cache due to event network-changed-2c289e6f-295e-44c3-948a-9a6901251890. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 14:47:47 compute-1 nova_compute[225855]: 2026-01-20 14:47:47.407 225859 DEBUG oslo_concurrency.lockutils [req-fd34f402-c720-4127-83c3-6789282bb8e3 req-76a6d4c6-0dcc-4359-a4dc-0ff4cb9b16b4 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-6586bc3e-3a94-4d22-8e8c-713a86a956fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:47:47 compute-1 nova_compute[225855]: 2026-01-20 14:47:47.408 225859 DEBUG oslo_concurrency.lockutils [req-fd34f402-c720-4127-83c3-6789282bb8e3 req-76a6d4c6-0dcc-4359-a4dc-0ff4cb9b16b4 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-6586bc3e-3a94-4d22-8e8c-713a86a956fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:47:47 compute-1 nova_compute[225855]: 2026-01-20 14:47:47.408 225859 DEBUG nova.network.neutron [req-fd34f402-c720-4127-83c3-6789282bb8e3 req-76a6d4c6-0dcc-4359-a4dc-0ff4cb9b16b4 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 6586bc3e-3a94-4d22-8e8c-713a86a956fb] Refreshing network info cache for port 2c289e6f-295e-44c3-948a-9a6901251890 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 14:47:47 compute-1 nova_compute[225855]: 2026-01-20 14:47:47.471 225859 DEBUG oslo_concurrency.processutils [None req-72afa0d3-f22b-4916-8bef-62f9235254a3 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:47:47 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:47:47 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:47:47 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:47:47.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:47:47 compute-1 sshd-session[262240]: Connection closed by 199.45.154.115 port 26778 [preauth]
Jan 20 14:47:48 compute-1 ceph-mon[81775]: pgmap v1789: 321 pgs: 321 active+clean; 453 MiB data, 1009 MiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 2.1 MiB/s wr, 190 op/s
Jan 20 14:47:48 compute-1 nova_compute[225855]: 2026-01-20 14:47:48.320 225859 DEBUG nova.compute.manager [None req-4d25a047-bfcc-47e3-8e2e-07a0e595cea9 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] [instance: 9beb3ec3-721e-4919-9713-a92c82ad189b] Stashing vm_state: active _prep_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:5560
Jan 20 14:47:48 compute-1 nova_compute[225855]: 2026-01-20 14:47:48.404 225859 DEBUG oslo_concurrency.lockutils [None req-4d25a047-bfcc-47e3-8e2e-07a0e595cea9 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:47:48 compute-1 sudo[262331]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:47:48 compute-1 sudo[262331]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:47:48 compute-1 sudo[262331]: pam_unix(sudo:session): session closed for user root
Jan 20 14:47:48 compute-1 sudo[262356]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:47:48 compute-1 sudo[262356]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:47:48 compute-1 sudo[262356]: pam_unix(sudo:session): session closed for user root
Jan 20 14:47:48 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:47:48 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:47:48 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:47:48.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:47:48 compute-1 nova_compute[225855]: 2026-01-20 14:47:48.894 225859 DEBUG nova.network.neutron [req-fd34f402-c720-4127-83c3-6789282bb8e3 req-76a6d4c6-0dcc-4359-a4dc-0ff4cb9b16b4 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 6586bc3e-3a94-4d22-8e8c-713a86a956fb] Updated VIF entry in instance network info cache for port 2c289e6f-295e-44c3-948a-9a6901251890. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 14:47:48 compute-1 nova_compute[225855]: 2026-01-20 14:47:48.896 225859 DEBUG nova.network.neutron [req-fd34f402-c720-4127-83c3-6789282bb8e3 req-76a6d4c6-0dcc-4359-a4dc-0ff4cb9b16b4 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 6586bc3e-3a94-4d22-8e8c-713a86a956fb] Updating instance_info_cache with network_info: [{"id": "2c289e6f-295e-44c3-948a-9a6901251890", "address": "fa:16:3e:2f:4c:e2", "network": {"id": "a19e9d1a-864f-41ee-bdea-188e65973ea5", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-916311998-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b683fcc0026242e28ba6d8fba638688e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c289e6f-29", "ovs_interfaceid": "2c289e6f-295e-44c3-948a-9a6901251890", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:47:48 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e240 e240: 3 total, 3 up, 3 in
Jan 20 14:47:48 compute-1 nova_compute[225855]: 2026-01-20 14:47:48.970 225859 DEBUG oslo_concurrency.lockutils [req-fd34f402-c720-4127-83c3-6789282bb8e3 req-76a6d4c6-0dcc-4359-a4dc-0ff4cb9b16b4 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-6586bc3e-3a94-4d22-8e8c-713a86a956fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:47:49 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:47:49 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/551235169' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:47:49 compute-1 nova_compute[225855]: 2026-01-20 14:47:49.185 225859 DEBUG oslo_concurrency.processutils [None req-72afa0d3-f22b-4916-8bef-62f9235254a3 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.714s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:47:49 compute-1 nova_compute[225855]: 2026-01-20 14:47:49.190 225859 DEBUG nova.compute.provider_tree [None req-72afa0d3-f22b-4916-8bef-62f9235254a3 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:47:49 compute-1 nova_compute[225855]: 2026-01-20 14:47:49.204 225859 DEBUG nova.scheduler.client.report [None req-72afa0d3-f22b-4916-8bef-62f9235254a3 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:47:49 compute-1 nova_compute[225855]: 2026-01-20 14:47:49.231 225859 DEBUG oslo_concurrency.lockutils [None req-72afa0d3-f22b-4916-8bef-62f9235254a3 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.882s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:47:49 compute-1 nova_compute[225855]: 2026-01-20 14:47:49.232 225859 DEBUG nova.compute.manager [None req-72afa0d3-f22b-4916-8bef-62f9235254a3 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] [instance: d699cf6a-9c33-400b-8d0f-4d61b8b16916] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 20 14:47:49 compute-1 nova_compute[225855]: 2026-01-20 14:47:49.234 225859 DEBUG oslo_concurrency.lockutils [None req-4d25a047-bfcc-47e3-8e2e-07a0e595cea9 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: waited 0.830s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:47:49 compute-1 nova_compute[225855]: 2026-01-20 14:47:49.278 225859 DEBUG nova.objects.instance [None req-4d25a047-bfcc-47e3-8e2e-07a0e595cea9 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Lazy-loading 'pci_requests' on Instance uuid 9beb3ec3-721e-4919-9713-a92c82ad189b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:47:49 compute-1 nova_compute[225855]: 2026-01-20 14:47:49.302 225859 DEBUG nova.virt.hardware [None req-4d25a047-bfcc-47e3-8e2e-07a0e595cea9 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 20 14:47:49 compute-1 nova_compute[225855]: 2026-01-20 14:47:49.303 225859 INFO nova.compute.claims [None req-4d25a047-bfcc-47e3-8e2e-07a0e595cea9 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] [instance: 9beb3ec3-721e-4919-9713-a92c82ad189b] Claim successful on node compute-1.ctlplane.example.com
Jan 20 14:47:49 compute-1 nova_compute[225855]: 2026-01-20 14:47:49.303 225859 DEBUG nova.objects.instance [None req-4d25a047-bfcc-47e3-8e2e-07a0e595cea9 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Lazy-loading 'resources' on Instance uuid 9beb3ec3-721e-4919-9713-a92c82ad189b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:47:49 compute-1 nova_compute[225855]: 2026-01-20 14:47:49.314 225859 DEBUG nova.compute.manager [None req-72afa0d3-f22b-4916-8bef-62f9235254a3 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] [instance: d699cf6a-9c33-400b-8d0f-4d61b8b16916] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 20 14:47:49 compute-1 nova_compute[225855]: 2026-01-20 14:47:49.315 225859 DEBUG nova.network.neutron [None req-72afa0d3-f22b-4916-8bef-62f9235254a3 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] [instance: d699cf6a-9c33-400b-8d0f-4d61b8b16916] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 20 14:47:49 compute-1 nova_compute[225855]: 2026-01-20 14:47:49.318 225859 DEBUG nova.objects.instance [None req-4d25a047-bfcc-47e3-8e2e-07a0e595cea9 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Lazy-loading 'pci_devices' on Instance uuid 9beb3ec3-721e-4919-9713-a92c82ad189b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:47:49 compute-1 nova_compute[225855]: 2026-01-20 14:47:49.333 225859 INFO nova.virt.libvirt.driver [None req-72afa0d3-f22b-4916-8bef-62f9235254a3 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] [instance: d699cf6a-9c33-400b-8d0f-4d61b8b16916] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 20 14:47:49 compute-1 nova_compute[225855]: 2026-01-20 14:47:49.374 225859 INFO nova.compute.resource_tracker [None req-4d25a047-bfcc-47e3-8e2e-07a0e595cea9 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] [instance: 9beb3ec3-721e-4919-9713-a92c82ad189b] Updating resource usage from migration d75f7553-0bf9-4277-b1f7-34600960db53
Jan 20 14:47:49 compute-1 nova_compute[225855]: 2026-01-20 14:47:49.374 225859 DEBUG nova.compute.resource_tracker [None req-4d25a047-bfcc-47e3-8e2e-07a0e595cea9 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] [instance: 9beb3ec3-721e-4919-9713-a92c82ad189b] Starting to track incoming migration d75f7553-0bf9-4277-b1f7-34600960db53 with flavor 30c26a27-d918-46d8-a512-4ef3b4ce5955 _update_usage_from_migration /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1431
Jan 20 14:47:49 compute-1 nova_compute[225855]: 2026-01-20 14:47:49.384 225859 DEBUG nova.compute.manager [None req-72afa0d3-f22b-4916-8bef-62f9235254a3 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] [instance: d699cf6a-9c33-400b-8d0f-4d61b8b16916] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 20 14:47:49 compute-1 rsyslogd[1002]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 20 14:47:49 compute-1 rsyslogd[1002]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 20 14:47:49 compute-1 nova_compute[225855]: 2026-01-20 14:47:49.465 225859 DEBUG nova.compute.manager [None req-72afa0d3-f22b-4916-8bef-62f9235254a3 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] [instance: d699cf6a-9c33-400b-8d0f-4d61b8b16916] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 20 14:47:49 compute-1 nova_compute[225855]: 2026-01-20 14:47:49.466 225859 DEBUG nova.virt.libvirt.driver [None req-72afa0d3-f22b-4916-8bef-62f9235254a3 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] [instance: d699cf6a-9c33-400b-8d0f-4d61b8b16916] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 20 14:47:49 compute-1 nova_compute[225855]: 2026-01-20 14:47:49.466 225859 INFO nova.virt.libvirt.driver [None req-72afa0d3-f22b-4916-8bef-62f9235254a3 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] [instance: d699cf6a-9c33-400b-8d0f-4d61b8b16916] Creating image(s)
Jan 20 14:47:49 compute-1 nova_compute[225855]: 2026-01-20 14:47:49.488 225859 DEBUG nova.storage.rbd_utils [None req-72afa0d3-f22b-4916-8bef-62f9235254a3 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] rbd image d699cf6a-9c33-400b-8d0f-4d61b8b16916_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:47:49 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:47:49 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:47:49 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:47:49.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:47:49 compute-1 nova_compute[225855]: 2026-01-20 14:47:49.509 225859 DEBUG nova.storage.rbd_utils [None req-72afa0d3-f22b-4916-8bef-62f9235254a3 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] rbd image d699cf6a-9c33-400b-8d0f-4d61b8b16916_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:47:49 compute-1 nova_compute[225855]: 2026-01-20 14:47:49.530 225859 DEBUG nova.storage.rbd_utils [None req-72afa0d3-f22b-4916-8bef-62f9235254a3 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] rbd image d699cf6a-9c33-400b-8d0f-4d61b8b16916_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:47:49 compute-1 nova_compute[225855]: 2026-01-20 14:47:49.534 225859 DEBUG oslo_concurrency.processutils [None req-72afa0d3-f22b-4916-8bef-62f9235254a3 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:47:49 compute-1 nova_compute[225855]: 2026-01-20 14:47:49.594 225859 DEBUG oslo_concurrency.processutils [None req-72afa0d3-f22b-4916-8bef-62f9235254a3 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:47:49 compute-1 nova_compute[225855]: 2026-01-20 14:47:49.595 225859 DEBUG oslo_concurrency.lockutils [None req-72afa0d3-f22b-4916-8bef-62f9235254a3 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Acquiring lock "82d5c1918fd7c974214c7a48c1793a7a82560462" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:47:49 compute-1 nova_compute[225855]: 2026-01-20 14:47:49.596 225859 DEBUG oslo_concurrency.lockutils [None req-72afa0d3-f22b-4916-8bef-62f9235254a3 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:47:49 compute-1 nova_compute[225855]: 2026-01-20 14:47:49.596 225859 DEBUG oslo_concurrency.lockutils [None req-72afa0d3-f22b-4916-8bef-62f9235254a3 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:47:49 compute-1 nova_compute[225855]: 2026-01-20 14:47:49.625 225859 DEBUG nova.storage.rbd_utils [None req-72afa0d3-f22b-4916-8bef-62f9235254a3 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] rbd image d699cf6a-9c33-400b-8d0f-4d61b8b16916_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:47:49 compute-1 nova_compute[225855]: 2026-01-20 14:47:49.630 225859 DEBUG oslo_concurrency.processutils [None req-72afa0d3-f22b-4916-8bef-62f9235254a3 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 d699cf6a-9c33-400b-8d0f-4d61b8b16916_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:47:49 compute-1 nova_compute[225855]: 2026-01-20 14:47:49.662 225859 DEBUG oslo_concurrency.processutils [None req-4d25a047-bfcc-47e3-8e2e-07a0e595cea9 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:47:49 compute-1 nova_compute[225855]: 2026-01-20 14:47:49.690 225859 DEBUG nova.policy [None req-72afa0d3-f22b-4916-8bef-62f9235254a3 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '3e9278fdb9e645b7938f3edb20c4d3cf', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '1c5f03d46c0c4162a3b2f1530850bb6c', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 20 14:47:49 compute-1 ceph-mon[81775]: pgmap v1790: 321 pgs: 321 active+clean; 453 MiB data, 1013 MiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 1.9 MiB/s wr, 201 op/s
Jan 20 14:47:49 compute-1 ceph-mon[81775]: osdmap e240: 3 total, 3 up, 3 in
Jan 20 14:47:49 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/551235169' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:47:49 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/3005195862' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:47:49 compute-1 nova_compute[225855]: 2026-01-20 14:47:49.925 225859 DEBUG oslo_concurrency.processutils [None req-72afa0d3-f22b-4916-8bef-62f9235254a3 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 d699cf6a-9c33-400b-8d0f-4d61b8b16916_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.295s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:47:49 compute-1 nova_compute[225855]: 2026-01-20 14:47:49.981 225859 DEBUG nova.storage.rbd_utils [None req-72afa0d3-f22b-4916-8bef-62f9235254a3 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] resizing rbd image d699cf6a-9c33-400b-8d0f-4d61b8b16916_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 20 14:47:50 compute-1 nova_compute[225855]: 2026-01-20 14:47:50.103 225859 DEBUG nova.objects.instance [None req-72afa0d3-f22b-4916-8bef-62f9235254a3 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Lazy-loading 'migration_context' on Instance uuid d699cf6a-9c33-400b-8d0f-4d61b8b16916 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:47:50 compute-1 nova_compute[225855]: 2026-01-20 14:47:50.116 225859 DEBUG nova.virt.libvirt.driver [None req-72afa0d3-f22b-4916-8bef-62f9235254a3 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] [instance: d699cf6a-9c33-400b-8d0f-4d61b8b16916] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 20 14:47:50 compute-1 nova_compute[225855]: 2026-01-20 14:47:50.116 225859 DEBUG nova.virt.libvirt.driver [None req-72afa0d3-f22b-4916-8bef-62f9235254a3 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] [instance: d699cf6a-9c33-400b-8d0f-4d61b8b16916] Ensure instance console log exists: /var/lib/nova/instances/d699cf6a-9c33-400b-8d0f-4d61b8b16916/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 20 14:47:50 compute-1 nova_compute[225855]: 2026-01-20 14:47:50.117 225859 DEBUG oslo_concurrency.lockutils [None req-72afa0d3-f22b-4916-8bef-62f9235254a3 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:47:50 compute-1 nova_compute[225855]: 2026-01-20 14:47:50.117 225859 DEBUG oslo_concurrency.lockutils [None req-72afa0d3-f22b-4916-8bef-62f9235254a3 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:47:50 compute-1 nova_compute[225855]: 2026-01-20 14:47:50.117 225859 DEBUG oslo_concurrency.lockutils [None req-72afa0d3-f22b-4916-8bef-62f9235254a3 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:47:50 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:47:50 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/619936034' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:47:50 compute-1 nova_compute[225855]: 2026-01-20 14:47:50.136 225859 DEBUG oslo_concurrency.processutils [None req-4d25a047-bfcc-47e3-8e2e-07a0e595cea9 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:47:50 compute-1 nova_compute[225855]: 2026-01-20 14:47:50.140 225859 DEBUG nova.compute.provider_tree [None req-4d25a047-bfcc-47e3-8e2e-07a0e595cea9 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:47:50 compute-1 nova_compute[225855]: 2026-01-20 14:47:50.152 225859 DEBUG nova.scheduler.client.report [None req-4d25a047-bfcc-47e3-8e2e-07a0e595cea9 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:47:50 compute-1 nova_compute[225855]: 2026-01-20 14:47:50.168 225859 DEBUG oslo_concurrency.lockutils [None req-4d25a047-bfcc-47e3-8e2e-07a0e595cea9 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: held 0.934s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:47:50 compute-1 nova_compute[225855]: 2026-01-20 14:47:50.168 225859 INFO nova.compute.manager [None req-4d25a047-bfcc-47e3-8e2e-07a0e595cea9 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] [instance: 9beb3ec3-721e-4919-9713-a92c82ad189b] Migrating
Jan 20 14:47:50 compute-1 nova_compute[225855]: 2026-01-20 14:47:50.659 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:47:50 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:47:50 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:47:50 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:47:50.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:47:50 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/619936034' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:47:50 compute-1 nova_compute[225855]: 2026-01-20 14:47:50.772 225859 DEBUG nova.network.neutron [None req-72afa0d3-f22b-4916-8bef-62f9235254a3 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] [instance: d699cf6a-9c33-400b-8d0f-4d61b8b16916] Successfully created port: f4f25f14-bc59-4322-86b2-b48f096472a5 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 20 14:47:51 compute-1 nova_compute[225855]: 2026-01-20 14:47:51.410 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:47:51 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:47:51 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:47:51 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:47:51.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:47:51 compute-1 ceph-mon[81775]: pgmap v1792: 321 pgs: 2 active+clean+snaptrim, 5 active+clean+snaptrim_wait, 314 active+clean; 454 MiB data, 1013 MiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 1.2 MiB/s wr, 208 op/s
Jan 20 14:47:51 compute-1 nova_compute[225855]: 2026-01-20 14:47:51.872 225859 DEBUG nova.network.neutron [None req-72afa0d3-f22b-4916-8bef-62f9235254a3 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] [instance: d699cf6a-9c33-400b-8d0f-4d61b8b16916] Successfully updated port: f4f25f14-bc59-4322-86b2-b48f096472a5 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 20 14:47:51 compute-1 nova_compute[225855]: 2026-01-20 14:47:51.908 225859 DEBUG oslo_concurrency.lockutils [None req-72afa0d3-f22b-4916-8bef-62f9235254a3 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Acquiring lock "refresh_cache-d699cf6a-9c33-400b-8d0f-4d61b8b16916" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:47:51 compute-1 nova_compute[225855]: 2026-01-20 14:47:51.908 225859 DEBUG oslo_concurrency.lockutils [None req-72afa0d3-f22b-4916-8bef-62f9235254a3 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Acquired lock "refresh_cache-d699cf6a-9c33-400b-8d0f-4d61b8b16916" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:47:51 compute-1 nova_compute[225855]: 2026-01-20 14:47:51.908 225859 DEBUG nova.network.neutron [None req-72afa0d3-f22b-4916-8bef-62f9235254a3 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] [instance: d699cf6a-9c33-400b-8d0f-4d61b8b16916] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 14:47:52 compute-1 nova_compute[225855]: 2026-01-20 14:47:52.047 225859 DEBUG nova.network.neutron [None req-72afa0d3-f22b-4916-8bef-62f9235254a3 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] [instance: d699cf6a-9c33-400b-8d0f-4d61b8b16916] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 20 14:47:52 compute-1 sshd-session[262576]: Accepted publickey for nova from 192.168.122.102 port 48556 ssh2: ECDSA SHA256:XnPnjIKlkePRv+YAV8ktjwWUWX9aekF80jIRGfdhjRU
Jan 20 14:47:52 compute-1 systemd-logind[783]: New session 57 of user nova.
Jan 20 14:47:52 compute-1 systemd[1]: Created slice User Slice of UID 42436.
Jan 20 14:47:52 compute-1 systemd[1]: Starting User Runtime Directory /run/user/42436...
Jan 20 14:47:52 compute-1 systemd[1]: Finished User Runtime Directory /run/user/42436.
Jan 20 14:47:52 compute-1 systemd[1]: Starting User Manager for UID 42436...
Jan 20 14:47:52 compute-1 systemd[262580]: pam_unix(systemd-user:session): session opened for user nova(uid=42436) by nova(uid=0)
Jan 20 14:47:52 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:47:52 compute-1 systemd[262580]: Queued start job for default target Main User Target.
Jan 20 14:47:52 compute-1 systemd[262580]: Created slice User Application Slice.
Jan 20 14:47:52 compute-1 systemd[262580]: Started Mark boot as successful after the user session has run 2 minutes.
Jan 20 14:47:52 compute-1 systemd[262580]: Started Daily Cleanup of User's Temporary Directories.
Jan 20 14:47:52 compute-1 systemd[262580]: Reached target Paths.
Jan 20 14:47:52 compute-1 systemd[262580]: Reached target Timers.
Jan 20 14:47:52 compute-1 systemd[262580]: Starting D-Bus User Message Bus Socket...
Jan 20 14:47:52 compute-1 systemd[262580]: Starting Create User's Volatile Files and Directories...
Jan 20 14:47:52 compute-1 systemd[262580]: Listening on D-Bus User Message Bus Socket.
Jan 20 14:47:52 compute-1 systemd[262580]: Reached target Sockets.
Jan 20 14:47:52 compute-1 systemd[262580]: Finished Create User's Volatile Files and Directories.
Jan 20 14:47:52 compute-1 systemd[262580]: Reached target Basic System.
Jan 20 14:47:52 compute-1 systemd[262580]: Reached target Main User Target.
Jan 20 14:47:52 compute-1 systemd[262580]: Startup finished in 160ms.
Jan 20 14:47:52 compute-1 systemd[1]: Started User Manager for UID 42436.
Jan 20 14:47:52 compute-1 systemd[1]: Started Session 57 of User nova.
Jan 20 14:47:52 compute-1 sshd-session[262576]: pam_unix(sshd:session): session opened for user nova(uid=42436) by nova(uid=0)
Jan 20 14:47:52 compute-1 sshd-session[262595]: Received disconnect from 192.168.122.102 port 48556:11: disconnected by user
Jan 20 14:47:52 compute-1 sshd-session[262595]: Disconnected from user nova 192.168.122.102 port 48556
Jan 20 14:47:52 compute-1 sshd-session[262576]: pam_unix(sshd:session): session closed for user nova
Jan 20 14:47:52 compute-1 systemd[1]: session-57.scope: Deactivated successfully.
Jan 20 14:47:52 compute-1 systemd-logind[783]: Session 57 logged out. Waiting for processes to exit.
Jan 20 14:47:52 compute-1 systemd-logind[783]: Removed session 57.
Jan 20 14:47:52 compute-1 nova_compute[225855]: 2026-01-20 14:47:52.419 225859 DEBUG nova.compute.manager [req-6cfce73f-6457-47c5-be92-1044b6eabc95 req-bfa82b71-22b1-476d-89d9-d78560eed692 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d699cf6a-9c33-400b-8d0f-4d61b8b16916] Received event network-changed-f4f25f14-bc59-4322-86b2-b48f096472a5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:47:52 compute-1 nova_compute[225855]: 2026-01-20 14:47:52.419 225859 DEBUG nova.compute.manager [req-6cfce73f-6457-47c5-be92-1044b6eabc95 req-bfa82b71-22b1-476d-89d9-d78560eed692 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d699cf6a-9c33-400b-8d0f-4d61b8b16916] Refreshing instance network info cache due to event network-changed-f4f25f14-bc59-4322-86b2-b48f096472a5. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 14:47:52 compute-1 nova_compute[225855]: 2026-01-20 14:47:52.420 225859 DEBUG oslo_concurrency.lockutils [req-6cfce73f-6457-47c5-be92-1044b6eabc95 req-bfa82b71-22b1-476d-89d9-d78560eed692 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-d699cf6a-9c33-400b-8d0f-4d61b8b16916" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:47:52 compute-1 sshd-session[262597]: Accepted publickey for nova from 192.168.122.102 port 48566 ssh2: ECDSA SHA256:XnPnjIKlkePRv+YAV8ktjwWUWX9aekF80jIRGfdhjRU
Jan 20 14:47:52 compute-1 systemd-logind[783]: New session 59 of user nova.
Jan 20 14:47:52 compute-1 systemd[1]: Started Session 59 of User nova.
Jan 20 14:47:52 compute-1 sshd-session[262597]: pam_unix(sshd:session): session opened for user nova(uid=42436) by nova(uid=0)
Jan 20 14:47:52 compute-1 sshd-session[262600]: Received disconnect from 192.168.122.102 port 48566:11: disconnected by user
Jan 20 14:47:52 compute-1 sshd-session[262600]: Disconnected from user nova 192.168.122.102 port 48566
Jan 20 14:47:52 compute-1 sshd-session[262597]: pam_unix(sshd:session): session closed for user nova
Jan 20 14:47:52 compute-1 systemd[1]: session-59.scope: Deactivated successfully.
Jan 20 14:47:52 compute-1 systemd-logind[783]: Session 59 logged out. Waiting for processes to exit.
Jan 20 14:47:52 compute-1 systemd-logind[783]: Removed session 59.
Jan 20 14:47:52 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:47:52 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:47:52 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:47:52.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:47:52 compute-1 nova_compute[225855]: 2026-01-20 14:47:52.896 225859 DEBUG nova.network.neutron [None req-72afa0d3-f22b-4916-8bef-62f9235254a3 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] [instance: d699cf6a-9c33-400b-8d0f-4d61b8b16916] Updating instance_info_cache with network_info: [{"id": "f4f25f14-bc59-4322-86b2-b48f096472a5", "address": "fa:16:3e:41:b2:cf", "network": {"id": "762e1859-4db4-4d9e-b66f-d50316f80df4", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1917526237-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c5f03d46c0c4162a3b2f1530850bb6c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf4f25f14-bc", "ovs_interfaceid": "f4f25f14-bc59-4322-86b2-b48f096472a5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:47:52 compute-1 nova_compute[225855]: 2026-01-20 14:47:52.915 225859 DEBUG oslo_concurrency.lockutils [None req-72afa0d3-f22b-4916-8bef-62f9235254a3 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Releasing lock "refresh_cache-d699cf6a-9c33-400b-8d0f-4d61b8b16916" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:47:52 compute-1 nova_compute[225855]: 2026-01-20 14:47:52.915 225859 DEBUG nova.compute.manager [None req-72afa0d3-f22b-4916-8bef-62f9235254a3 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] [instance: d699cf6a-9c33-400b-8d0f-4d61b8b16916] Instance network_info: |[{"id": "f4f25f14-bc59-4322-86b2-b48f096472a5", "address": "fa:16:3e:41:b2:cf", "network": {"id": "762e1859-4db4-4d9e-b66f-d50316f80df4", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1917526237-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c5f03d46c0c4162a3b2f1530850bb6c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf4f25f14-bc", "ovs_interfaceid": "f4f25f14-bc59-4322-86b2-b48f096472a5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 20 14:47:52 compute-1 nova_compute[225855]: 2026-01-20 14:47:52.915 225859 DEBUG oslo_concurrency.lockutils [req-6cfce73f-6457-47c5-be92-1044b6eabc95 req-bfa82b71-22b1-476d-89d9-d78560eed692 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-d699cf6a-9c33-400b-8d0f-4d61b8b16916" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:47:52 compute-1 nova_compute[225855]: 2026-01-20 14:47:52.916 225859 DEBUG nova.network.neutron [req-6cfce73f-6457-47c5-be92-1044b6eabc95 req-bfa82b71-22b1-476d-89d9-d78560eed692 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d699cf6a-9c33-400b-8d0f-4d61b8b16916] Refreshing network info cache for port f4f25f14-bc59-4322-86b2-b48f096472a5 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 14:47:52 compute-1 nova_compute[225855]: 2026-01-20 14:47:52.918 225859 DEBUG nova.virt.libvirt.driver [None req-72afa0d3-f22b-4916-8bef-62f9235254a3 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] [instance: d699cf6a-9c33-400b-8d0f-4d61b8b16916] Start _get_guest_xml network_info=[{"id": "f4f25f14-bc59-4322-86b2-b48f096472a5", "address": "fa:16:3e:41:b2:cf", "network": {"id": "762e1859-4db4-4d9e-b66f-d50316f80df4", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1917526237-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c5f03d46c0c4162a3b2f1530850bb6c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf4f25f14-bc", "ovs_interfaceid": "f4f25f14-bc59-4322-86b2-b48f096472a5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'device_type': 'disk', 'encryption_options': None, 'size': 0, 'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'guest_format': None, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'image_id': 'a32b3e07-16d8-46fd-9a7b-c242c432fcf9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 20 14:47:52 compute-1 nova_compute[225855]: 2026-01-20 14:47:52.923 225859 WARNING nova.virt.libvirt.driver [None req-72afa0d3-f22b-4916-8bef-62f9235254a3 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 14:47:52 compute-1 nova_compute[225855]: 2026-01-20 14:47:52.927 225859 DEBUG nova.virt.libvirt.host [None req-72afa0d3-f22b-4916-8bef-62f9235254a3 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 20 14:47:52 compute-1 nova_compute[225855]: 2026-01-20 14:47:52.928 225859 DEBUG nova.virt.libvirt.host [None req-72afa0d3-f22b-4916-8bef-62f9235254a3 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 20 14:47:52 compute-1 nova_compute[225855]: 2026-01-20 14:47:52.937 225859 DEBUG nova.virt.libvirt.host [None req-72afa0d3-f22b-4916-8bef-62f9235254a3 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 20 14:47:52 compute-1 nova_compute[225855]: 2026-01-20 14:47:52.937 225859 DEBUG nova.virt.libvirt.host [None req-72afa0d3-f22b-4916-8bef-62f9235254a3 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 20 14:47:52 compute-1 nova_compute[225855]: 2026-01-20 14:47:52.938 225859 DEBUG nova.virt.libvirt.driver [None req-72afa0d3-f22b-4916-8bef-62f9235254a3 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 20 14:47:52 compute-1 nova_compute[225855]: 2026-01-20 14:47:52.939 225859 DEBUG nova.virt.hardware [None req-72afa0d3-f22b-4916-8bef-62f9235254a3 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-20T14:21:55Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='522deaab-a741-4dbb-932d-d8b13a211c33',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 20 14:47:52 compute-1 nova_compute[225855]: 2026-01-20 14:47:52.939 225859 DEBUG nova.virt.hardware [None req-72afa0d3-f22b-4916-8bef-62f9235254a3 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 20 14:47:52 compute-1 nova_compute[225855]: 2026-01-20 14:47:52.939 225859 DEBUG nova.virt.hardware [None req-72afa0d3-f22b-4916-8bef-62f9235254a3 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 20 14:47:52 compute-1 nova_compute[225855]: 2026-01-20 14:47:52.939 225859 DEBUG nova.virt.hardware [None req-72afa0d3-f22b-4916-8bef-62f9235254a3 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 20 14:47:52 compute-1 nova_compute[225855]: 2026-01-20 14:47:52.939 225859 DEBUG nova.virt.hardware [None req-72afa0d3-f22b-4916-8bef-62f9235254a3 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 20 14:47:52 compute-1 nova_compute[225855]: 2026-01-20 14:47:52.940 225859 DEBUG nova.virt.hardware [None req-72afa0d3-f22b-4916-8bef-62f9235254a3 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 20 14:47:52 compute-1 nova_compute[225855]: 2026-01-20 14:47:52.940 225859 DEBUG nova.virt.hardware [None req-72afa0d3-f22b-4916-8bef-62f9235254a3 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 20 14:47:52 compute-1 nova_compute[225855]: 2026-01-20 14:47:52.940 225859 DEBUG nova.virt.hardware [None req-72afa0d3-f22b-4916-8bef-62f9235254a3 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 20 14:47:52 compute-1 nova_compute[225855]: 2026-01-20 14:47:52.940 225859 DEBUG nova.virt.hardware [None req-72afa0d3-f22b-4916-8bef-62f9235254a3 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 20 14:47:52 compute-1 nova_compute[225855]: 2026-01-20 14:47:52.941 225859 DEBUG nova.virt.hardware [None req-72afa0d3-f22b-4916-8bef-62f9235254a3 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 20 14:47:52 compute-1 nova_compute[225855]: 2026-01-20 14:47:52.941 225859 DEBUG nova.virt.hardware [None req-72afa0d3-f22b-4916-8bef-62f9235254a3 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 20 14:47:52 compute-1 nova_compute[225855]: 2026-01-20 14:47:52.943 225859 DEBUG oslo_concurrency.processutils [None req-72afa0d3-f22b-4916-8bef-62f9235254a3 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:47:53 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/919004951' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:47:53 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:47:53 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:47:53 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:47:53.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:47:53 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:47:53 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1409179445' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:47:53 compute-1 nova_compute[225855]: 2026-01-20 14:47:53.729 225859 DEBUG oslo_concurrency.processutils [None req-72afa0d3-f22b-4916-8bef-62f9235254a3 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.786s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:47:53 compute-1 nova_compute[225855]: 2026-01-20 14:47:53.764 225859 DEBUG nova.storage.rbd_utils [None req-72afa0d3-f22b-4916-8bef-62f9235254a3 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] rbd image d699cf6a-9c33-400b-8d0f-4d61b8b16916_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:47:53 compute-1 nova_compute[225855]: 2026-01-20 14:47:53.769 225859 DEBUG oslo_concurrency.processutils [None req-72afa0d3-f22b-4916-8bef-62f9235254a3 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:47:54 compute-1 nova_compute[225855]: 2026-01-20 14:47:54.159 225859 DEBUG nova.network.neutron [req-6cfce73f-6457-47c5-be92-1044b6eabc95 req-bfa82b71-22b1-476d-89d9-d78560eed692 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d699cf6a-9c33-400b-8d0f-4d61b8b16916] Updated VIF entry in instance network info cache for port f4f25f14-bc59-4322-86b2-b48f096472a5. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 14:47:54 compute-1 nova_compute[225855]: 2026-01-20 14:47:54.159 225859 DEBUG nova.network.neutron [req-6cfce73f-6457-47c5-be92-1044b6eabc95 req-bfa82b71-22b1-476d-89d9-d78560eed692 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d699cf6a-9c33-400b-8d0f-4d61b8b16916] Updating instance_info_cache with network_info: [{"id": "f4f25f14-bc59-4322-86b2-b48f096472a5", "address": "fa:16:3e:41:b2:cf", "network": {"id": "762e1859-4db4-4d9e-b66f-d50316f80df4", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1917526237-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c5f03d46c0c4162a3b2f1530850bb6c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf4f25f14-bc", "ovs_interfaceid": "f4f25f14-bc59-4322-86b2-b48f096472a5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:47:54 compute-1 nova_compute[225855]: 2026-01-20 14:47:54.175 225859 DEBUG oslo_concurrency.lockutils [req-6cfce73f-6457-47c5-be92-1044b6eabc95 req-bfa82b71-22b1-476d-89d9-d78560eed692 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-d699cf6a-9c33-400b-8d0f-4d61b8b16916" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:47:54 compute-1 ceph-mon[81775]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 20 14:47:54 compute-1 ceph-mon[81775]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3000.0 total, 600.0 interval
                                           Cumulative writes: 8445 writes, 43K keys, 8445 commit groups, 1.0 writes per commit group, ingest: 0.08 GB, 0.03 MB/s
                                           Cumulative WAL: 8444 writes, 8444 syncs, 1.00 writes per sync, written: 0.08 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1598 writes, 7935 keys, 1598 commit groups, 1.0 writes per commit group, ingest: 15.64 MB, 0.03 MB/s
                                           Interval WAL: 1598 writes, 1598 syncs, 1.00 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     71.3      0.72              0.19        24    0.030       0      0       0.0       0.0
                                             L6      1/0    9.23 MB   0.0      0.2     0.1      0.2       0.2      0.0       0.0   3.9    107.3     88.7      2.27              0.68        23    0.099    128K    12K       0.0       0.0
                                            Sum      1/0    9.23 MB   0.0      0.2     0.1      0.2       0.2      0.1       0.0   4.9     81.4     84.5      2.99              0.87        47    0.064    128K    12K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   6.0     82.5     81.9      0.85              0.24        12    0.071     42K   3583       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.2     0.1      0.2       0.2      0.0       0.0   0.0    107.3     88.7      2.27              0.68        23    0.099    128K    12K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     71.5      0.72              0.19        23    0.031       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.7      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 3000.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.050, interval 0.011
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.25 GB write, 0.08 MB/s write, 0.24 GB read, 0.08 MB/s read, 3.0 seconds
                                           Interval compaction: 0.07 GB write, 0.12 MB/s write, 0.07 GB read, 0.12 MB/s read, 0.9 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564d515a71f0#2 capacity: 304.00 MB usage: 28.03 MB table_size: 0 occupancy: 18446744073709551615 collections: 6 last_copies: 0 last_secs: 0.000309 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1623,27.05 MB,8.89745%) FilterBlock(47,366.80 KB,0.117829%) IndexBlock(47,634.67 KB,0.203881%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Jan 20 14:47:54 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:47:54 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2590787595' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:47:54 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:47:54 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:47:54 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:47:54.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:47:54 compute-1 ceph-mon[81775]: pgmap v1793: 321 pgs: 2 active+clean+snaptrim, 5 active+clean+snaptrim_wait, 314 active+clean; 473 MiB data, 1021 MiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 894 KiB/s wr, 172 op/s
Jan 20 14:47:54 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/1409179445' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:47:55 compute-1 nova_compute[225855]: 2026-01-20 14:47:55.092 225859 DEBUG oslo_concurrency.processutils [None req-72afa0d3-f22b-4916-8bef-62f9235254a3 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.323s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:47:55 compute-1 nova_compute[225855]: 2026-01-20 14:47:55.094 225859 DEBUG nova.virt.libvirt.vif [None req-72afa0d3-f22b-4916-8bef-62f9235254a3 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T14:47:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-354445838',display_name='tempest-tempest.common.compute-instance-354445838',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-354445838',id=103,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1c5f03d46c0c4162a3b2f1530850bb6c',ramdisk_id='',reservation_id='r-2qck85q0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestJSON-1020442335',owner_user_name='tempest-ServerActionsTe
stJSON-1020442335-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T14:47:49Z,user_data=None,user_id='3e9278fdb9e645b7938f3edb20c4d3cf',uuid=d699cf6a-9c33-400b-8d0f-4d61b8b16916,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f4f25f14-bc59-4322-86b2-b48f096472a5", "address": "fa:16:3e:41:b2:cf", "network": {"id": "762e1859-4db4-4d9e-b66f-d50316f80df4", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1917526237-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c5f03d46c0c4162a3b2f1530850bb6c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf4f25f14-bc", "ovs_interfaceid": "f4f25f14-bc59-4322-86b2-b48f096472a5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 20 14:47:55 compute-1 nova_compute[225855]: 2026-01-20 14:47:55.094 225859 DEBUG nova.network.os_vif_util [None req-72afa0d3-f22b-4916-8bef-62f9235254a3 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Converting VIF {"id": "f4f25f14-bc59-4322-86b2-b48f096472a5", "address": "fa:16:3e:41:b2:cf", "network": {"id": "762e1859-4db4-4d9e-b66f-d50316f80df4", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1917526237-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c5f03d46c0c4162a3b2f1530850bb6c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf4f25f14-bc", "ovs_interfaceid": "f4f25f14-bc59-4322-86b2-b48f096472a5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:47:55 compute-1 nova_compute[225855]: 2026-01-20 14:47:55.095 225859 DEBUG nova.network.os_vif_util [None req-72afa0d3-f22b-4916-8bef-62f9235254a3 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:41:b2:cf,bridge_name='br-int',has_traffic_filtering=True,id=f4f25f14-bc59-4322-86b2-b48f096472a5,network=Network(762e1859-4db4-4d9e-b66f-d50316f80df4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf4f25f14-bc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:47:55 compute-1 nova_compute[225855]: 2026-01-20 14:47:55.097 225859 DEBUG nova.objects.instance [None req-72afa0d3-f22b-4916-8bef-62f9235254a3 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Lazy-loading 'pci_devices' on Instance uuid d699cf6a-9c33-400b-8d0f-4d61b8b16916 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:47:55 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:47:55 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:47:55 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:47:55.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:47:55 compute-1 nova_compute[225855]: 2026-01-20 14:47:55.661 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:47:56 compute-1 nova_compute[225855]: 2026-01-20 14:47:56.412 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:47:56 compute-1 ceph-mon[81775]: pgmap v1794: 321 pgs: 2 active+clean+snaptrim, 5 active+clean+snaptrim_wait, 314 active+clean; 456 MiB data, 1019 MiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 2.0 MiB/s wr, 187 op/s
Jan 20 14:47:56 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/2590787595' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:47:56 compute-1 nova_compute[225855]: 2026-01-20 14:47:56.494 225859 DEBUG nova.virt.libvirt.driver [None req-72afa0d3-f22b-4916-8bef-62f9235254a3 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] [instance: d699cf6a-9c33-400b-8d0f-4d61b8b16916] End _get_guest_xml xml=<domain type="kvm">
Jan 20 14:47:56 compute-1 nova_compute[225855]:   <uuid>d699cf6a-9c33-400b-8d0f-4d61b8b16916</uuid>
Jan 20 14:47:56 compute-1 nova_compute[225855]:   <name>instance-00000067</name>
Jan 20 14:47:56 compute-1 nova_compute[225855]:   <memory>131072</memory>
Jan 20 14:47:56 compute-1 nova_compute[225855]:   <vcpu>1</vcpu>
Jan 20 14:47:56 compute-1 nova_compute[225855]:   <metadata>
Jan 20 14:47:56 compute-1 nova_compute[225855]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 14:47:56 compute-1 nova_compute[225855]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 14:47:56 compute-1 nova_compute[225855]:       <nova:name>tempest-tempest.common.compute-instance-354445838</nova:name>
Jan 20 14:47:56 compute-1 nova_compute[225855]:       <nova:creationTime>2026-01-20 14:47:52</nova:creationTime>
Jan 20 14:47:56 compute-1 nova_compute[225855]:       <nova:flavor name="m1.nano">
Jan 20 14:47:56 compute-1 nova_compute[225855]:         <nova:memory>128</nova:memory>
Jan 20 14:47:56 compute-1 nova_compute[225855]:         <nova:disk>1</nova:disk>
Jan 20 14:47:56 compute-1 nova_compute[225855]:         <nova:swap>0</nova:swap>
Jan 20 14:47:56 compute-1 nova_compute[225855]:         <nova:ephemeral>0</nova:ephemeral>
Jan 20 14:47:56 compute-1 nova_compute[225855]:         <nova:vcpus>1</nova:vcpus>
Jan 20 14:47:56 compute-1 nova_compute[225855]:       </nova:flavor>
Jan 20 14:47:56 compute-1 nova_compute[225855]:       <nova:owner>
Jan 20 14:47:56 compute-1 nova_compute[225855]:         <nova:user uuid="3e9278fdb9e645b7938f3edb20c4d3cf">tempest-ServerActionsTestJSON-1020442335-project-member</nova:user>
Jan 20 14:47:56 compute-1 nova_compute[225855]:         <nova:project uuid="1c5f03d46c0c4162a3b2f1530850bb6c">tempest-ServerActionsTestJSON-1020442335</nova:project>
Jan 20 14:47:56 compute-1 nova_compute[225855]:       </nova:owner>
Jan 20 14:47:56 compute-1 nova_compute[225855]:       <nova:root type="image" uuid="a32b3e07-16d8-46fd-9a7b-c242c432fcf9"/>
Jan 20 14:47:56 compute-1 nova_compute[225855]:       <nova:ports>
Jan 20 14:47:56 compute-1 nova_compute[225855]:         <nova:port uuid="f4f25f14-bc59-4322-86b2-b48f096472a5">
Jan 20 14:47:56 compute-1 nova_compute[225855]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Jan 20 14:47:56 compute-1 nova_compute[225855]:         </nova:port>
Jan 20 14:47:56 compute-1 nova_compute[225855]:       </nova:ports>
Jan 20 14:47:56 compute-1 nova_compute[225855]:     </nova:instance>
Jan 20 14:47:56 compute-1 nova_compute[225855]:   </metadata>
Jan 20 14:47:56 compute-1 nova_compute[225855]:   <sysinfo type="smbios">
Jan 20 14:47:56 compute-1 nova_compute[225855]:     <system>
Jan 20 14:47:56 compute-1 nova_compute[225855]:       <entry name="manufacturer">RDO</entry>
Jan 20 14:47:56 compute-1 nova_compute[225855]:       <entry name="product">OpenStack Compute</entry>
Jan 20 14:47:56 compute-1 nova_compute[225855]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 14:47:56 compute-1 nova_compute[225855]:       <entry name="serial">d699cf6a-9c33-400b-8d0f-4d61b8b16916</entry>
Jan 20 14:47:56 compute-1 nova_compute[225855]:       <entry name="uuid">d699cf6a-9c33-400b-8d0f-4d61b8b16916</entry>
Jan 20 14:47:56 compute-1 nova_compute[225855]:       <entry name="family">Virtual Machine</entry>
Jan 20 14:47:56 compute-1 nova_compute[225855]:     </system>
Jan 20 14:47:56 compute-1 nova_compute[225855]:   </sysinfo>
Jan 20 14:47:56 compute-1 nova_compute[225855]:   <os>
Jan 20 14:47:56 compute-1 nova_compute[225855]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 20 14:47:56 compute-1 nova_compute[225855]:     <boot dev="hd"/>
Jan 20 14:47:56 compute-1 nova_compute[225855]:     <smbios mode="sysinfo"/>
Jan 20 14:47:56 compute-1 nova_compute[225855]:   </os>
Jan 20 14:47:56 compute-1 nova_compute[225855]:   <features>
Jan 20 14:47:56 compute-1 nova_compute[225855]:     <acpi/>
Jan 20 14:47:56 compute-1 nova_compute[225855]:     <apic/>
Jan 20 14:47:56 compute-1 nova_compute[225855]:     <vmcoreinfo/>
Jan 20 14:47:56 compute-1 nova_compute[225855]:   </features>
Jan 20 14:47:56 compute-1 nova_compute[225855]:   <clock offset="utc">
Jan 20 14:47:56 compute-1 nova_compute[225855]:     <timer name="pit" tickpolicy="delay"/>
Jan 20 14:47:56 compute-1 nova_compute[225855]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 20 14:47:56 compute-1 nova_compute[225855]:     <timer name="hpet" present="no"/>
Jan 20 14:47:56 compute-1 nova_compute[225855]:   </clock>
Jan 20 14:47:56 compute-1 nova_compute[225855]:   <cpu mode="custom" match="exact">
Jan 20 14:47:56 compute-1 nova_compute[225855]:     <model>Nehalem</model>
Jan 20 14:47:56 compute-1 nova_compute[225855]:     <topology sockets="1" cores="1" threads="1"/>
Jan 20 14:47:56 compute-1 nova_compute[225855]:   </cpu>
Jan 20 14:47:56 compute-1 nova_compute[225855]:   <devices>
Jan 20 14:47:56 compute-1 nova_compute[225855]:     <disk type="network" device="disk">
Jan 20 14:47:56 compute-1 nova_compute[225855]:       <driver type="raw" cache="none"/>
Jan 20 14:47:56 compute-1 nova_compute[225855]:       <source protocol="rbd" name="vms/d699cf6a-9c33-400b-8d0f-4d61b8b16916_disk">
Jan 20 14:47:56 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:47:56 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:47:56 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:47:56 compute-1 nova_compute[225855]:       </source>
Jan 20 14:47:56 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 14:47:56 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:47:56 compute-1 nova_compute[225855]:       </auth>
Jan 20 14:47:56 compute-1 nova_compute[225855]:       <target dev="vda" bus="virtio"/>
Jan 20 14:47:56 compute-1 nova_compute[225855]:     </disk>
Jan 20 14:47:56 compute-1 nova_compute[225855]:     <disk type="network" device="cdrom">
Jan 20 14:47:56 compute-1 nova_compute[225855]:       <driver type="raw" cache="none"/>
Jan 20 14:47:56 compute-1 nova_compute[225855]:       <source protocol="rbd" name="vms/d699cf6a-9c33-400b-8d0f-4d61b8b16916_disk.config">
Jan 20 14:47:56 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:47:56 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:47:56 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:47:56 compute-1 nova_compute[225855]:       </source>
Jan 20 14:47:56 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 14:47:56 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:47:56 compute-1 nova_compute[225855]:       </auth>
Jan 20 14:47:56 compute-1 nova_compute[225855]:       <target dev="sda" bus="sata"/>
Jan 20 14:47:56 compute-1 nova_compute[225855]:     </disk>
Jan 20 14:47:56 compute-1 nova_compute[225855]:     <interface type="ethernet">
Jan 20 14:47:56 compute-1 nova_compute[225855]:       <mac address="fa:16:3e:41:b2:cf"/>
Jan 20 14:47:56 compute-1 nova_compute[225855]:       <model type="virtio"/>
Jan 20 14:47:56 compute-1 nova_compute[225855]:       <driver name="vhost" rx_queue_size="512"/>
Jan 20 14:47:56 compute-1 nova_compute[225855]:       <mtu size="1442"/>
Jan 20 14:47:56 compute-1 nova_compute[225855]:       <target dev="tapf4f25f14-bc"/>
Jan 20 14:47:56 compute-1 nova_compute[225855]:     </interface>
Jan 20 14:47:56 compute-1 nova_compute[225855]:     <serial type="pty">
Jan 20 14:47:56 compute-1 nova_compute[225855]:       <log file="/var/lib/nova/instances/d699cf6a-9c33-400b-8d0f-4d61b8b16916/console.log" append="off"/>
Jan 20 14:47:56 compute-1 nova_compute[225855]:     </serial>
Jan 20 14:47:56 compute-1 nova_compute[225855]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 20 14:47:56 compute-1 nova_compute[225855]:     <video>
Jan 20 14:47:56 compute-1 nova_compute[225855]:       <model type="virtio"/>
Jan 20 14:47:56 compute-1 nova_compute[225855]:     </video>
Jan 20 14:47:56 compute-1 nova_compute[225855]:     <input type="tablet" bus="usb"/>
Jan 20 14:47:56 compute-1 nova_compute[225855]:     <rng model="virtio">
Jan 20 14:47:56 compute-1 nova_compute[225855]:       <backend model="random">/dev/urandom</backend>
Jan 20 14:47:56 compute-1 nova_compute[225855]:     </rng>
Jan 20 14:47:56 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root"/>
Jan 20 14:47:56 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:47:56 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:47:56 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:47:56 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:47:56 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:47:56 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:47:56 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:47:56 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:47:56 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:47:56 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:47:56 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:47:56 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:47:56 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:47:56 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:47:56 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:47:56 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:47:56 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:47:56 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:47:56 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:47:56 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:47:56 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:47:56 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:47:56 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:47:56 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:47:56 compute-1 nova_compute[225855]:     <controller type="usb" index="0"/>
Jan 20 14:47:56 compute-1 nova_compute[225855]:     <memballoon model="virtio">
Jan 20 14:47:56 compute-1 nova_compute[225855]:       <stats period="10"/>
Jan 20 14:47:56 compute-1 nova_compute[225855]:     </memballoon>
Jan 20 14:47:56 compute-1 nova_compute[225855]:   </devices>
Jan 20 14:47:56 compute-1 nova_compute[225855]: </domain>
Jan 20 14:47:56 compute-1 nova_compute[225855]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 20 14:47:56 compute-1 nova_compute[225855]: 2026-01-20 14:47:56.495 225859 DEBUG nova.compute.manager [None req-72afa0d3-f22b-4916-8bef-62f9235254a3 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] [instance: d699cf6a-9c33-400b-8d0f-4d61b8b16916] Preparing to wait for external event network-vif-plugged-f4f25f14-bc59-4322-86b2-b48f096472a5 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 20 14:47:56 compute-1 nova_compute[225855]: 2026-01-20 14:47:56.495 225859 DEBUG oslo_concurrency.lockutils [None req-72afa0d3-f22b-4916-8bef-62f9235254a3 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Acquiring lock "d699cf6a-9c33-400b-8d0f-4d61b8b16916-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:47:56 compute-1 nova_compute[225855]: 2026-01-20 14:47:56.495 225859 DEBUG oslo_concurrency.lockutils [None req-72afa0d3-f22b-4916-8bef-62f9235254a3 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Lock "d699cf6a-9c33-400b-8d0f-4d61b8b16916-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:47:56 compute-1 nova_compute[225855]: 2026-01-20 14:47:56.496 225859 DEBUG oslo_concurrency.lockutils [None req-72afa0d3-f22b-4916-8bef-62f9235254a3 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Lock "d699cf6a-9c33-400b-8d0f-4d61b8b16916-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:47:56 compute-1 nova_compute[225855]: 2026-01-20 14:47:56.496 225859 DEBUG nova.virt.libvirt.vif [None req-72afa0d3-f22b-4916-8bef-62f9235254a3 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T14:47:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-354445838',display_name='tempest-tempest.common.compute-instance-354445838',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-354445838',id=103,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1c5f03d46c0c4162a3b2f1530850bb6c',ramdisk_id='',reservation_id='r-2qck85q0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestJSON-1020442335',owner_user_name='tempest-Serve
rActionsTestJSON-1020442335-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T14:47:49Z,user_data=None,user_id='3e9278fdb9e645b7938f3edb20c4d3cf',uuid=d699cf6a-9c33-400b-8d0f-4d61b8b16916,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f4f25f14-bc59-4322-86b2-b48f096472a5", "address": "fa:16:3e:41:b2:cf", "network": {"id": "762e1859-4db4-4d9e-b66f-d50316f80df4", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1917526237-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c5f03d46c0c4162a3b2f1530850bb6c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf4f25f14-bc", "ovs_interfaceid": "f4f25f14-bc59-4322-86b2-b48f096472a5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 20 14:47:56 compute-1 nova_compute[225855]: 2026-01-20 14:47:56.496 225859 DEBUG nova.network.os_vif_util [None req-72afa0d3-f22b-4916-8bef-62f9235254a3 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Converting VIF {"id": "f4f25f14-bc59-4322-86b2-b48f096472a5", "address": "fa:16:3e:41:b2:cf", "network": {"id": "762e1859-4db4-4d9e-b66f-d50316f80df4", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1917526237-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c5f03d46c0c4162a3b2f1530850bb6c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf4f25f14-bc", "ovs_interfaceid": "f4f25f14-bc59-4322-86b2-b48f096472a5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:47:56 compute-1 nova_compute[225855]: 2026-01-20 14:47:56.497 225859 DEBUG nova.network.os_vif_util [None req-72afa0d3-f22b-4916-8bef-62f9235254a3 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:41:b2:cf,bridge_name='br-int',has_traffic_filtering=True,id=f4f25f14-bc59-4322-86b2-b48f096472a5,network=Network(762e1859-4db4-4d9e-b66f-d50316f80df4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf4f25f14-bc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:47:56 compute-1 nova_compute[225855]: 2026-01-20 14:47:56.497 225859 DEBUG os_vif [None req-72afa0d3-f22b-4916-8bef-62f9235254a3 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:41:b2:cf,bridge_name='br-int',has_traffic_filtering=True,id=f4f25f14-bc59-4322-86b2-b48f096472a5,network=Network(762e1859-4db4-4d9e-b66f-d50316f80df4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf4f25f14-bc') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 20 14:47:56 compute-1 nova_compute[225855]: 2026-01-20 14:47:56.498 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:47:56 compute-1 nova_compute[225855]: 2026-01-20 14:47:56.498 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:47:56 compute-1 nova_compute[225855]: 2026-01-20 14:47:56.498 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:47:56 compute-1 nova_compute[225855]: 2026-01-20 14:47:56.501 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:47:56 compute-1 nova_compute[225855]: 2026-01-20 14:47:56.502 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf4f25f14-bc, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:47:56 compute-1 nova_compute[225855]: 2026-01-20 14:47:56.502 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf4f25f14-bc, col_values=(('external_ids', {'iface-id': 'f4f25f14-bc59-4322-86b2-b48f096472a5', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:41:b2:cf', 'vm-uuid': 'd699cf6a-9c33-400b-8d0f-4d61b8b16916'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:47:56 compute-1 nova_compute[225855]: 2026-01-20 14:47:56.503 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:47:56 compute-1 NetworkManager[49104]: <info>  [1768920476.5045] manager: (tapf4f25f14-bc): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/147)
Jan 20 14:47:56 compute-1 nova_compute[225855]: 2026-01-20 14:47:56.505 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 14:47:56 compute-1 nova_compute[225855]: 2026-01-20 14:47:56.511 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:47:56 compute-1 nova_compute[225855]: 2026-01-20 14:47:56.511 225859 INFO os_vif [None req-72afa0d3-f22b-4916-8bef-62f9235254a3 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:41:b2:cf,bridge_name='br-int',has_traffic_filtering=True,id=f4f25f14-bc59-4322-86b2-b48f096472a5,network=Network(762e1859-4db4-4d9e-b66f-d50316f80df4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf4f25f14-bc')
Jan 20 14:47:56 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:47:56 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:47:56 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:47:56.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:47:56 compute-1 nova_compute[225855]: 2026-01-20 14:47:56.695 225859 DEBUG nova.virt.libvirt.driver [None req-72afa0d3-f22b-4916-8bef-62f9235254a3 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:47:56 compute-1 nova_compute[225855]: 2026-01-20 14:47:56.695 225859 DEBUG nova.virt.libvirt.driver [None req-72afa0d3-f22b-4916-8bef-62f9235254a3 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:47:56 compute-1 nova_compute[225855]: 2026-01-20 14:47:56.696 225859 DEBUG nova.virt.libvirt.driver [None req-72afa0d3-f22b-4916-8bef-62f9235254a3 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] No VIF found with MAC fa:16:3e:41:b2:cf, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 20 14:47:56 compute-1 nova_compute[225855]: 2026-01-20 14:47:56.696 225859 INFO nova.virt.libvirt.driver [None req-72afa0d3-f22b-4916-8bef-62f9235254a3 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] [instance: d699cf6a-9c33-400b-8d0f-4d61b8b16916] Using config drive
Jan 20 14:47:56 compute-1 nova_compute[225855]: 2026-01-20 14:47:56.725 225859 DEBUG nova.storage.rbd_utils [None req-72afa0d3-f22b-4916-8bef-62f9235254a3 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] rbd image d699cf6a-9c33-400b-8d0f-4d61b8b16916_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:47:57 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:47:57 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:47:57 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:47:57 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:47:57.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:47:57 compute-1 ceph-mon[81775]: pgmap v1795: 321 pgs: 321 active+clean; 421 MiB data, 1005 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 2.2 MiB/s wr, 243 op/s
Jan 20 14:47:58 compute-1 nova_compute[225855]: 2026-01-20 14:47:58.405 225859 INFO nova.virt.libvirt.driver [None req-72afa0d3-f22b-4916-8bef-62f9235254a3 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] [instance: d699cf6a-9c33-400b-8d0f-4d61b8b16916] Creating config drive at /var/lib/nova/instances/d699cf6a-9c33-400b-8d0f-4d61b8b16916/disk.config
Jan 20 14:47:58 compute-1 nova_compute[225855]: 2026-01-20 14:47:58.410 225859 DEBUG oslo_concurrency.processutils [None req-72afa0d3-f22b-4916-8bef-62f9235254a3 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d699cf6a-9c33-400b-8d0f-4d61b8b16916/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpjqeviksz execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:47:58 compute-1 nova_compute[225855]: 2026-01-20 14:47:58.538 225859 DEBUG oslo_concurrency.processutils [None req-72afa0d3-f22b-4916-8bef-62f9235254a3 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d699cf6a-9c33-400b-8d0f-4d61b8b16916/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpjqeviksz" returned: 0 in 0.128s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:47:58 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:47:58 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:47:58 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:47:58.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:47:58 compute-1 nova_compute[225855]: 2026-01-20 14:47:58.718 225859 DEBUG nova.storage.rbd_utils [None req-72afa0d3-f22b-4916-8bef-62f9235254a3 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] rbd image d699cf6a-9c33-400b-8d0f-4d61b8b16916_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:47:58 compute-1 nova_compute[225855]: 2026-01-20 14:47:58.722 225859 DEBUG oslo_concurrency.processutils [None req-72afa0d3-f22b-4916-8bef-62f9235254a3 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d699cf6a-9c33-400b-8d0f-4d61b8b16916/disk.config d699cf6a-9c33-400b-8d0f-4d61b8b16916_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:47:58 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/938089309' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 14:47:58 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/938089309' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 14:47:58 compute-1 nova_compute[225855]: 2026-01-20 14:47:58.956 225859 DEBUG oslo_concurrency.processutils [None req-72afa0d3-f22b-4916-8bef-62f9235254a3 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d699cf6a-9c33-400b-8d0f-4d61b8b16916/disk.config d699cf6a-9c33-400b-8d0f-4d61b8b16916_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.233s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:47:58 compute-1 nova_compute[225855]: 2026-01-20 14:47:58.957 225859 INFO nova.virt.libvirt.driver [None req-72afa0d3-f22b-4916-8bef-62f9235254a3 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] [instance: d699cf6a-9c33-400b-8d0f-4d61b8b16916] Deleting local config drive /var/lib/nova/instances/d699cf6a-9c33-400b-8d0f-4d61b8b16916/disk.config because it was imported into RBD.
Jan 20 14:47:59 compute-1 kernel: tapf4f25f14-bc: entered promiscuous mode
Jan 20 14:47:59 compute-1 NetworkManager[49104]: <info>  [1768920479.0031] manager: (tapf4f25f14-bc): new Tun device (/org/freedesktop/NetworkManager/Devices/148)
Jan 20 14:47:59 compute-1 ovn_controller[130490]: 2026-01-20T14:47:59Z|00346|binding|INFO|Claiming lport f4f25f14-bc59-4322-86b2-b48f096472a5 for this chassis.
Jan 20 14:47:59 compute-1 nova_compute[225855]: 2026-01-20 14:47:59.003 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:47:59 compute-1 ovn_controller[130490]: 2026-01-20T14:47:59Z|00347|binding|INFO|f4f25f14-bc59-4322-86b2-b48f096472a5: Claiming fa:16:3e:41:b2:cf 10.100.0.12
Jan 20 14:47:59 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:47:59.015 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:41:b2:cf 10.100.0.12'], port_security=['fa:16:3e:41:b2:cf 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'd699cf6a-9c33-400b-8d0f-4d61b8b16916', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-762e1859-4db4-4d9e-b66f-d50316f80df4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1c5f03d46c0c4162a3b2f1530850bb6c', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'e045f96f-e14f-4cbd-a987-42fc8d4d3e66', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2474a8ca-bb96-4cae-9133-23419b81a9fc, chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], logical_port=f4f25f14-bc59-4322-86b2-b48f096472a5) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:47:59 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:47:59.016 140354 INFO neutron.agent.ovn.metadata.agent [-] Port f4f25f14-bc59-4322-86b2-b48f096472a5 in datapath 762e1859-4db4-4d9e-b66f-d50316f80df4 bound to our chassis
Jan 20 14:47:59 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:47:59.018 140354 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 762e1859-4db4-4d9e-b66f-d50316f80df4
Jan 20 14:47:59 compute-1 ovn_controller[130490]: 2026-01-20T14:47:59Z|00348|binding|INFO|Setting lport f4f25f14-bc59-4322-86b2-b48f096472a5 ovn-installed in OVS
Jan 20 14:47:59 compute-1 ovn_controller[130490]: 2026-01-20T14:47:59Z|00349|binding|INFO|Setting lport f4f25f14-bc59-4322-86b2-b48f096472a5 up in Southbound
Jan 20 14:47:59 compute-1 nova_compute[225855]: 2026-01-20 14:47:59.021 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:47:59 compute-1 nova_compute[225855]: 2026-01-20 14:47:59.025 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:47:59 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:47:59.030 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[f1cb9283-9f1c-4eba-8b0e-9efc5f211283]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:47:59 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:47:59.031 140354 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap762e1859-41 in ovnmeta-762e1859-4db4-4d9e-b66f-d50316f80df4 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 20 14:47:59 compute-1 systemd-udevd[262739]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 14:47:59 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:47:59.033 229707 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap762e1859-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 20 14:47:59 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:47:59.033 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[eb4c16b1-be4d-4a55-b50b-4502e1f50843]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:47:59 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:47:59.034 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[fac6c196-5c46-4c32-bcb3-24d773246333]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:47:59 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:47:59.046 140466 DEBUG oslo.privsep.daemon [-] privsep: reply[8d07570a-0b1c-4bfe-bff0-0914939e563c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:47:59 compute-1 NetworkManager[49104]: <info>  [1768920479.0485] device (tapf4f25f14-bc): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 20 14:47:59 compute-1 systemd-machined[194361]: New machine qemu-41-instance-00000067.
Jan 20 14:47:59 compute-1 NetworkManager[49104]: <info>  [1768920479.0492] device (tapf4f25f14-bc): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 20 14:47:59 compute-1 systemd[1]: Started Virtual Machine qemu-41-instance-00000067.
Jan 20 14:47:59 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:47:59.068 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[3df8a98b-f465-49c4-924a-975184ad0b30]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:47:59 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:47:59.096 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[a4da7b29-fbd3-45ea-8c76-b4eccff4498a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:47:59 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:47:59.099 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[65909b6b-9e5e-48ac-833c-26c2c052b17e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:47:59 compute-1 systemd-udevd[262744]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 14:47:59 compute-1 NetworkManager[49104]: <info>  [1768920479.1013] manager: (tap762e1859-40): new Veth device (/org/freedesktop/NetworkManager/Devices/149)
Jan 20 14:47:59 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:47:59.125 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[c306acde-3320-4da9-8c0d-9f3cf2f7ad28]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:47:59 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:47:59.128 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[aa3d8abd-1ae7-40dc-bbbc-10d0f564de13]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:47:59 compute-1 NetworkManager[49104]: <info>  [1768920479.1504] device (tap762e1859-40): carrier: link connected
Jan 20 14:47:59 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:47:59.158 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[e041b484-3901-4804-9f0f-9823befb094d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:47:59 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:47:59.177 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[e5c14d78-615a-4179-b32f-4eb42f052acc]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap762e1859-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:69:f1:da'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 92], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 551807, 'reachable_time': 30286, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 262773, 'error': None, 'target': 'ovnmeta-762e1859-4db4-4d9e-b66f-d50316f80df4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:47:59 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:47:59.195 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[7b7dcdc9-a48b-4867-aebf-c851b2fa151c]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe69:f1da'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 551807, 'tstamp': 551807}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 262774, 'error': None, 'target': 'ovnmeta-762e1859-4db4-4d9e-b66f-d50316f80df4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:47:59 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:47:59.213 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[e52817c3-48ed-40ad-a087-921a5f9c2807]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap762e1859-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:69:f1:da'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 92], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 551807, 'reachable_time': 30286, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 262775, 'error': None, 'target': 'ovnmeta-762e1859-4db4-4d9e-b66f-d50316f80df4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:47:59 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:47:59.248 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[d2a63f90-736b-4d69-b9e8-15617bb81ff8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:47:59 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:47:59.320 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[f3126656-d170-425a-950c-b2c0c34be43e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:47:59 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:47:59.322 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap762e1859-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:47:59 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:47:59.322 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:47:59 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:47:59.322 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap762e1859-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:47:59 compute-1 kernel: tap762e1859-40: entered promiscuous mode
Jan 20 14:47:59 compute-1 nova_compute[225855]: 2026-01-20 14:47:59.324 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:47:59 compute-1 nova_compute[225855]: 2026-01-20 14:47:59.325 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:47:59 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:47:59.326 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap762e1859-40, col_values=(('external_ids', {'iface-id': '9e775c45-1646-436d-a0cb-a5b5ec356e1b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:47:59 compute-1 NetworkManager[49104]: <info>  [1768920479.3288] manager: (tap762e1859-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/150)
Jan 20 14:47:59 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:47:59.329 140354 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/762e1859-4db4-4d9e-b66f-d50316f80df4.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/762e1859-4db4-4d9e-b66f-d50316f80df4.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 20 14:47:59 compute-1 ovn_controller[130490]: 2026-01-20T14:47:59Z|00350|binding|INFO|Releasing lport 9e775c45-1646-436d-a0cb-a5b5ec356e1b from this chassis (sb_readonly=0)
Jan 20 14:47:59 compute-1 nova_compute[225855]: 2026-01-20 14:47:59.327 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:47:59 compute-1 nova_compute[225855]: 2026-01-20 14:47:59.328 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:47:59 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:47:59.329 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[587324dc-adad-457a-89a8-a9a4334ab1a5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:47:59 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:47:59.330 140354 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 20 14:47:59 compute-1 ovn_metadata_agent[140349]: global
Jan 20 14:47:59 compute-1 ovn_metadata_agent[140349]:     log         /dev/log local0 debug
Jan 20 14:47:59 compute-1 ovn_metadata_agent[140349]:     log-tag     haproxy-metadata-proxy-762e1859-4db4-4d9e-b66f-d50316f80df4
Jan 20 14:47:59 compute-1 ovn_metadata_agent[140349]:     user        root
Jan 20 14:47:59 compute-1 ovn_metadata_agent[140349]:     group       root
Jan 20 14:47:59 compute-1 ovn_metadata_agent[140349]:     maxconn     1024
Jan 20 14:47:59 compute-1 ovn_metadata_agent[140349]:     pidfile     /var/lib/neutron/external/pids/762e1859-4db4-4d9e-b66f-d50316f80df4.pid.haproxy
Jan 20 14:47:59 compute-1 ovn_metadata_agent[140349]:     daemon
Jan 20 14:47:59 compute-1 ovn_metadata_agent[140349]: 
Jan 20 14:47:59 compute-1 ovn_metadata_agent[140349]: defaults
Jan 20 14:47:59 compute-1 ovn_metadata_agent[140349]:     log global
Jan 20 14:47:59 compute-1 ovn_metadata_agent[140349]:     mode http
Jan 20 14:47:59 compute-1 ovn_metadata_agent[140349]:     option httplog
Jan 20 14:47:59 compute-1 ovn_metadata_agent[140349]:     option dontlognull
Jan 20 14:47:59 compute-1 ovn_metadata_agent[140349]:     option http-server-close
Jan 20 14:47:59 compute-1 ovn_metadata_agent[140349]:     option forwardfor
Jan 20 14:47:59 compute-1 ovn_metadata_agent[140349]:     retries                 3
Jan 20 14:47:59 compute-1 ovn_metadata_agent[140349]:     timeout http-request    30s
Jan 20 14:47:59 compute-1 ovn_metadata_agent[140349]:     timeout connect         30s
Jan 20 14:47:59 compute-1 ovn_metadata_agent[140349]:     timeout client          32s
Jan 20 14:47:59 compute-1 ovn_metadata_agent[140349]:     timeout server          32s
Jan 20 14:47:59 compute-1 ovn_metadata_agent[140349]:     timeout http-keep-alive 30s
Jan 20 14:47:59 compute-1 ovn_metadata_agent[140349]: 
Jan 20 14:47:59 compute-1 ovn_metadata_agent[140349]: 
Jan 20 14:47:59 compute-1 ovn_metadata_agent[140349]: listen listener
Jan 20 14:47:59 compute-1 ovn_metadata_agent[140349]:     bind 169.254.169.254:80
Jan 20 14:47:59 compute-1 ovn_metadata_agent[140349]:     server metadata /var/lib/neutron/metadata_proxy
Jan 20 14:47:59 compute-1 ovn_metadata_agent[140349]:     http-request add-header X-OVN-Network-ID 762e1859-4db4-4d9e-b66f-d50316f80df4
Jan 20 14:47:59 compute-1 ovn_metadata_agent[140349]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 20 14:47:59 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:47:59.330 140354 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-762e1859-4db4-4d9e-b66f-d50316f80df4', 'env', 'PROCESS_TAG=haproxy-762e1859-4db4-4d9e-b66f-d50316f80df4', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/762e1859-4db4-4d9e-b66f-d50316f80df4.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 20 14:47:59 compute-1 nova_compute[225855]: 2026-01-20 14:47:59.343 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:47:59 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:47:59 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:47:59 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:47:59.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:47:59 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e241 e241: 3 total, 3 up, 3 in
Jan 20 14:47:59 compute-1 podman[262808]: 2026-01-20 14:47:59.669590537 +0000 UTC m=+0.026011398 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 20 14:48:00 compute-1 podman[262808]: 2026-01-20 14:48:00.04578011 +0000 UTC m=+0.402200951 container create f664c1788de7c7e153bea50ec21bff6e40c26e5e54caf93495340481c02af322 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-762e1859-4db4-4d9e-b66f-d50316f80df4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 20 14:48:00 compute-1 ceph-mon[81775]: pgmap v1796: 321 pgs: 321 active+clean; 425 MiB data, 1004 MiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 3.0 MiB/s wr, 225 op/s
Jan 20 14:48:00 compute-1 ceph-mon[81775]: osdmap e241: 3 total, 3 up, 3 in
Jan 20 14:48:00 compute-1 systemd[1]: Started libpod-conmon-f664c1788de7c7e153bea50ec21bff6e40c26e5e54caf93495340481c02af322.scope.
Jan 20 14:48:00 compute-1 systemd[1]: Started libcrun container.
Jan 20 14:48:00 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abe49cb91db3dea07b646b62647ce33597562542cda25b9855d3f6435dcf0497/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 20 14:48:00 compute-1 nova_compute[225855]: 2026-01-20 14:48:00.259 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768920480.2590883, d699cf6a-9c33-400b-8d0f-4d61b8b16916 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:48:00 compute-1 nova_compute[225855]: 2026-01-20 14:48:00.260 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: d699cf6a-9c33-400b-8d0f-4d61b8b16916] VM Started (Lifecycle Event)
Jan 20 14:48:00 compute-1 nova_compute[225855]: 2026-01-20 14:48:00.285 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: d699cf6a-9c33-400b-8d0f-4d61b8b16916] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:48:00 compute-1 nova_compute[225855]: 2026-01-20 14:48:00.289 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768920480.2603076, d699cf6a-9c33-400b-8d0f-4d61b8b16916 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:48:00 compute-1 nova_compute[225855]: 2026-01-20 14:48:00.289 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: d699cf6a-9c33-400b-8d0f-4d61b8b16916] VM Paused (Lifecycle Event)
Jan 20 14:48:00 compute-1 podman[262808]: 2026-01-20 14:48:00.30244663 +0000 UTC m=+0.658867491 container init f664c1788de7c7e153bea50ec21bff6e40c26e5e54caf93495340481c02af322 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-762e1859-4db4-4d9e-b66f-d50316f80df4, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 20 14:48:00 compute-1 podman[262808]: 2026-01-20 14:48:00.308029618 +0000 UTC m=+0.664450449 container start f664c1788de7c7e153bea50ec21bff6e40c26e5e54caf93495340481c02af322 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-762e1859-4db4-4d9e-b66f-d50316f80df4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 20 14:48:00 compute-1 nova_compute[225855]: 2026-01-20 14:48:00.317 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: d699cf6a-9c33-400b-8d0f-4d61b8b16916] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:48:00 compute-1 nova_compute[225855]: 2026-01-20 14:48:00.322 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: d699cf6a-9c33-400b-8d0f-4d61b8b16916] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:48:00 compute-1 neutron-haproxy-ovnmeta-762e1859-4db4-4d9e-b66f-d50316f80df4[262861]: [NOTICE]   (262869) : New worker (262871) forked
Jan 20 14:48:00 compute-1 neutron-haproxy-ovnmeta-762e1859-4db4-4d9e-b66f-d50316f80df4[262861]: [NOTICE]   (262869) : Loading success.
Jan 20 14:48:00 compute-1 nova_compute[225855]: 2026-01-20 14:48:00.343 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: d699cf6a-9c33-400b-8d0f-4d61b8b16916] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 14:48:00 compute-1 nova_compute[225855]: 2026-01-20 14:48:00.667 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:48:00 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:48:00 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:48:00 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:48:00.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:48:00 compute-1 nova_compute[225855]: 2026-01-20 14:48:00.903 225859 DEBUG nova.compute.manager [req-014a5a36-5b42-42fc-9b45-5954c5122c8d req-ca6620d1-5aa1-44ec-84ec-7a550439409a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d699cf6a-9c33-400b-8d0f-4d61b8b16916] Received event network-vif-plugged-f4f25f14-bc59-4322-86b2-b48f096472a5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:48:00 compute-1 nova_compute[225855]: 2026-01-20 14:48:00.904 225859 DEBUG oslo_concurrency.lockutils [req-014a5a36-5b42-42fc-9b45-5954c5122c8d req-ca6620d1-5aa1-44ec-84ec-7a550439409a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "d699cf6a-9c33-400b-8d0f-4d61b8b16916-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:48:00 compute-1 nova_compute[225855]: 2026-01-20 14:48:00.904 225859 DEBUG oslo_concurrency.lockutils [req-014a5a36-5b42-42fc-9b45-5954c5122c8d req-ca6620d1-5aa1-44ec-84ec-7a550439409a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "d699cf6a-9c33-400b-8d0f-4d61b8b16916-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:48:00 compute-1 nova_compute[225855]: 2026-01-20 14:48:00.904 225859 DEBUG oslo_concurrency.lockutils [req-014a5a36-5b42-42fc-9b45-5954c5122c8d req-ca6620d1-5aa1-44ec-84ec-7a550439409a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "d699cf6a-9c33-400b-8d0f-4d61b8b16916-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:48:00 compute-1 nova_compute[225855]: 2026-01-20 14:48:00.904 225859 DEBUG nova.compute.manager [req-014a5a36-5b42-42fc-9b45-5954c5122c8d req-ca6620d1-5aa1-44ec-84ec-7a550439409a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d699cf6a-9c33-400b-8d0f-4d61b8b16916] Processing event network-vif-plugged-f4f25f14-bc59-4322-86b2-b48f096472a5 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 20 14:48:00 compute-1 nova_compute[225855]: 2026-01-20 14:48:00.905 225859 DEBUG nova.compute.manager [None req-72afa0d3-f22b-4916-8bef-62f9235254a3 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] [instance: d699cf6a-9c33-400b-8d0f-4d61b8b16916] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 20 14:48:00 compute-1 nova_compute[225855]: 2026-01-20 14:48:00.909 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768920480.9085298, d699cf6a-9c33-400b-8d0f-4d61b8b16916 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:48:00 compute-1 nova_compute[225855]: 2026-01-20 14:48:00.909 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: d699cf6a-9c33-400b-8d0f-4d61b8b16916] VM Resumed (Lifecycle Event)
Jan 20 14:48:00 compute-1 nova_compute[225855]: 2026-01-20 14:48:00.911 225859 DEBUG nova.virt.libvirt.driver [None req-72afa0d3-f22b-4916-8bef-62f9235254a3 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] [instance: d699cf6a-9c33-400b-8d0f-4d61b8b16916] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 20 14:48:00 compute-1 nova_compute[225855]: 2026-01-20 14:48:00.914 225859 INFO nova.virt.libvirt.driver [-] [instance: d699cf6a-9c33-400b-8d0f-4d61b8b16916] Instance spawned successfully.
Jan 20 14:48:00 compute-1 nova_compute[225855]: 2026-01-20 14:48:00.915 225859 DEBUG nova.virt.libvirt.driver [None req-72afa0d3-f22b-4916-8bef-62f9235254a3 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] [instance: d699cf6a-9c33-400b-8d0f-4d61b8b16916] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 20 14:48:00 compute-1 nova_compute[225855]: 2026-01-20 14:48:00.941 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: d699cf6a-9c33-400b-8d0f-4d61b8b16916] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:48:00 compute-1 nova_compute[225855]: 2026-01-20 14:48:00.946 225859 DEBUG nova.virt.libvirt.driver [None req-72afa0d3-f22b-4916-8bef-62f9235254a3 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] [instance: d699cf6a-9c33-400b-8d0f-4d61b8b16916] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:48:00 compute-1 nova_compute[225855]: 2026-01-20 14:48:00.946 225859 DEBUG nova.virt.libvirt.driver [None req-72afa0d3-f22b-4916-8bef-62f9235254a3 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] [instance: d699cf6a-9c33-400b-8d0f-4d61b8b16916] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:48:00 compute-1 nova_compute[225855]: 2026-01-20 14:48:00.947 225859 DEBUG nova.virt.libvirt.driver [None req-72afa0d3-f22b-4916-8bef-62f9235254a3 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] [instance: d699cf6a-9c33-400b-8d0f-4d61b8b16916] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:48:00 compute-1 nova_compute[225855]: 2026-01-20 14:48:00.947 225859 DEBUG nova.virt.libvirt.driver [None req-72afa0d3-f22b-4916-8bef-62f9235254a3 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] [instance: d699cf6a-9c33-400b-8d0f-4d61b8b16916] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:48:00 compute-1 nova_compute[225855]: 2026-01-20 14:48:00.947 225859 DEBUG nova.virt.libvirt.driver [None req-72afa0d3-f22b-4916-8bef-62f9235254a3 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] [instance: d699cf6a-9c33-400b-8d0f-4d61b8b16916] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:48:00 compute-1 nova_compute[225855]: 2026-01-20 14:48:00.948 225859 DEBUG nova.virt.libvirt.driver [None req-72afa0d3-f22b-4916-8bef-62f9235254a3 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] [instance: d699cf6a-9c33-400b-8d0f-4d61b8b16916] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:48:00 compute-1 nova_compute[225855]: 2026-01-20 14:48:00.953 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: d699cf6a-9c33-400b-8d0f-4d61b8b16916] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:48:01 compute-1 nova_compute[225855]: 2026-01-20 14:48:01.001 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: d699cf6a-9c33-400b-8d0f-4d61b8b16916] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 14:48:01 compute-1 nova_compute[225855]: 2026-01-20 14:48:01.019 225859 INFO nova.compute.manager [None req-72afa0d3-f22b-4916-8bef-62f9235254a3 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] [instance: d699cf6a-9c33-400b-8d0f-4d61b8b16916] Took 11.55 seconds to spawn the instance on the hypervisor.
Jan 20 14:48:01 compute-1 nova_compute[225855]: 2026-01-20 14:48:01.020 225859 DEBUG nova.compute.manager [None req-72afa0d3-f22b-4916-8bef-62f9235254a3 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] [instance: d699cf6a-9c33-400b-8d0f-4d61b8b16916] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:48:01 compute-1 nova_compute[225855]: 2026-01-20 14:48:01.089 225859 INFO nova.compute.manager [None req-72afa0d3-f22b-4916-8bef-62f9235254a3 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] [instance: d699cf6a-9c33-400b-8d0f-4d61b8b16916] Took 13.77 seconds to build instance.
Jan 20 14:48:01 compute-1 nova_compute[225855]: 2026-01-20 14:48:01.130 225859 DEBUG oslo_concurrency.lockutils [None req-72afa0d3-f22b-4916-8bef-62f9235254a3 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Lock "d699cf6a-9c33-400b-8d0f-4d61b8b16916" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.871s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:48:01 compute-1 nova_compute[225855]: 2026-01-20 14:48:01.504 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:48:01 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:48:01 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:48:01 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:48:01.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:48:01 compute-1 ceph-mon[81775]: pgmap v1798: 321 pgs: 321 active+clean; 434 MiB data, 1012 MiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 3.8 MiB/s wr, 236 op/s
Jan 20 14:48:02 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e241 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:48:02 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:48:02 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:48:02 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:48:02.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:48:02 compute-1 systemd[1]: Stopping User Manager for UID 42436...
Jan 20 14:48:02 compute-1 systemd[262580]: Activating special unit Exit the Session...
Jan 20 14:48:02 compute-1 systemd[262580]: Stopped target Main User Target.
Jan 20 14:48:02 compute-1 systemd[262580]: Stopped target Basic System.
Jan 20 14:48:02 compute-1 systemd[262580]: Stopped target Paths.
Jan 20 14:48:02 compute-1 systemd[262580]: Stopped target Sockets.
Jan 20 14:48:02 compute-1 systemd[262580]: Stopped target Timers.
Jan 20 14:48:02 compute-1 systemd[262580]: Stopped Mark boot as successful after the user session has run 2 minutes.
Jan 20 14:48:02 compute-1 systemd[262580]: Stopped Daily Cleanup of User's Temporary Directories.
Jan 20 14:48:02 compute-1 systemd[262580]: Closed D-Bus User Message Bus Socket.
Jan 20 14:48:02 compute-1 systemd[262580]: Stopped Create User's Volatile Files and Directories.
Jan 20 14:48:02 compute-1 systemd[262580]: Removed slice User Application Slice.
Jan 20 14:48:02 compute-1 systemd[262580]: Reached target Shutdown.
Jan 20 14:48:02 compute-1 systemd[262580]: Finished Exit the Session.
Jan 20 14:48:02 compute-1 systemd[262580]: Reached target Exit the Session.
Jan 20 14:48:02 compute-1 systemd[1]: user@42436.service: Deactivated successfully.
Jan 20 14:48:02 compute-1 systemd[1]: Stopped User Manager for UID 42436.
Jan 20 14:48:02 compute-1 systemd[1]: Stopping User Runtime Directory /run/user/42436...
Jan 20 14:48:02 compute-1 systemd[1]: run-user-42436.mount: Deactivated successfully.
Jan 20 14:48:02 compute-1 systemd[1]: user-runtime-dir@42436.service: Deactivated successfully.
Jan 20 14:48:02 compute-1 systemd[1]: Stopped User Runtime Directory /run/user/42436.
Jan 20 14:48:02 compute-1 systemd[1]: Removed slice User Slice of UID 42436.
Jan 20 14:48:02 compute-1 podman[262881]: 2026-01-20 14:48:02.805802094 +0000 UTC m=+0.098545983 container health_status 72be95c12e041eb2cd1a16ad70dfa1391d9f03a5f3180a614ead9459dcfcd46f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3)
Jan 20 14:48:03 compute-1 nova_compute[225855]: 2026-01-20 14:48:03.187 225859 DEBUG nova.compute.manager [req-d5714cd1-b349-4a0e-9b21-8c8b9ede36a6 req-c79947aa-008a-4104-a3c1-2be21c7910e9 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d699cf6a-9c33-400b-8d0f-4d61b8b16916] Received event network-vif-plugged-f4f25f14-bc59-4322-86b2-b48f096472a5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:48:03 compute-1 nova_compute[225855]: 2026-01-20 14:48:03.187 225859 DEBUG oslo_concurrency.lockutils [req-d5714cd1-b349-4a0e-9b21-8c8b9ede36a6 req-c79947aa-008a-4104-a3c1-2be21c7910e9 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "d699cf6a-9c33-400b-8d0f-4d61b8b16916-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:48:03 compute-1 nova_compute[225855]: 2026-01-20 14:48:03.188 225859 DEBUG oslo_concurrency.lockutils [req-d5714cd1-b349-4a0e-9b21-8c8b9ede36a6 req-c79947aa-008a-4104-a3c1-2be21c7910e9 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "d699cf6a-9c33-400b-8d0f-4d61b8b16916-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:48:03 compute-1 nova_compute[225855]: 2026-01-20 14:48:03.188 225859 DEBUG oslo_concurrency.lockutils [req-d5714cd1-b349-4a0e-9b21-8c8b9ede36a6 req-c79947aa-008a-4104-a3c1-2be21c7910e9 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "d699cf6a-9c33-400b-8d0f-4d61b8b16916-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:48:03 compute-1 nova_compute[225855]: 2026-01-20 14:48:03.188 225859 DEBUG nova.compute.manager [req-d5714cd1-b349-4a0e-9b21-8c8b9ede36a6 req-c79947aa-008a-4104-a3c1-2be21c7910e9 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d699cf6a-9c33-400b-8d0f-4d61b8b16916] No waiting events found dispatching network-vif-plugged-f4f25f14-bc59-4322-86b2-b48f096472a5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:48:03 compute-1 nova_compute[225855]: 2026-01-20 14:48:03.188 225859 WARNING nova.compute.manager [req-d5714cd1-b349-4a0e-9b21-8c8b9ede36a6 req-c79947aa-008a-4104-a3c1-2be21c7910e9 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d699cf6a-9c33-400b-8d0f-4d61b8b16916] Received unexpected event network-vif-plugged-f4f25f14-bc59-4322-86b2-b48f096472a5 for instance with vm_state active and task_state None.
Jan 20 14:48:03 compute-1 ceph-mon[81775]: pgmap v1799: 321 pgs: 321 active+clean; 446 MiB data, 1010 MiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 3.7 MiB/s wr, 197 op/s
Jan 20 14:48:03 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:48:03 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:48:03 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:48:03.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:48:04 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:48:04 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:48:04 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:48:04.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:48:04 compute-1 nova_compute[225855]: 2026-01-20 14:48:04.837 225859 INFO nova.compute.manager [None req-57a9ea68-ec8a-451a-b907-3ce30cfe2680 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] [instance: d699cf6a-9c33-400b-8d0f-4d61b8b16916] Rebuilding instance
Jan 20 14:48:04 compute-1 ovn_controller[130490]: 2026-01-20T14:48:04Z|00351|binding|INFO|Releasing lport 9e775c45-1646-436d-a0cb-a5b5ec356e1b from this chassis (sb_readonly=0)
Jan 20 14:48:04 compute-1 ovn_controller[130490]: 2026-01-20T14:48:04Z|00352|binding|INFO|Releasing lport 5527ab8d-a985-420b-9d5b-7e5d9baf7004 from this chassis (sb_readonly=0)
Jan 20 14:48:05 compute-1 nova_compute[225855]: 2026-01-20 14:48:05.033 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:48:05 compute-1 nova_compute[225855]: 2026-01-20 14:48:05.192 225859 DEBUG nova.objects.instance [None req-57a9ea68-ec8a-451a-b907-3ce30cfe2680 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Lazy-loading 'trusted_certs' on Instance uuid d699cf6a-9c33-400b-8d0f-4d61b8b16916 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:48:05 compute-1 nova_compute[225855]: 2026-01-20 14:48:05.217 225859 DEBUG nova.compute.manager [None req-57a9ea68-ec8a-451a-b907-3ce30cfe2680 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] [instance: d699cf6a-9c33-400b-8d0f-4d61b8b16916] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:48:05 compute-1 nova_compute[225855]: 2026-01-20 14:48:05.276 225859 DEBUG nova.objects.instance [None req-57a9ea68-ec8a-451a-b907-3ce30cfe2680 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Lazy-loading 'pci_requests' on Instance uuid d699cf6a-9c33-400b-8d0f-4d61b8b16916 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:48:05 compute-1 nova_compute[225855]: 2026-01-20 14:48:05.304 225859 DEBUG nova.objects.instance [None req-57a9ea68-ec8a-451a-b907-3ce30cfe2680 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Lazy-loading 'pci_devices' on Instance uuid d699cf6a-9c33-400b-8d0f-4d61b8b16916 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:48:05 compute-1 nova_compute[225855]: 2026-01-20 14:48:05.318 225859 DEBUG nova.objects.instance [None req-57a9ea68-ec8a-451a-b907-3ce30cfe2680 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Lazy-loading 'resources' on Instance uuid d699cf6a-9c33-400b-8d0f-4d61b8b16916 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:48:05 compute-1 nova_compute[225855]: 2026-01-20 14:48:05.331 225859 DEBUG nova.objects.instance [None req-57a9ea68-ec8a-451a-b907-3ce30cfe2680 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Lazy-loading 'migration_context' on Instance uuid d699cf6a-9c33-400b-8d0f-4d61b8b16916 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:48:05 compute-1 nova_compute[225855]: 2026-01-20 14:48:05.342 225859 DEBUG nova.objects.instance [None req-57a9ea68-ec8a-451a-b907-3ce30cfe2680 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] [instance: d699cf6a-9c33-400b-8d0f-4d61b8b16916] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Jan 20 14:48:05 compute-1 nova_compute[225855]: 2026-01-20 14:48:05.345 225859 DEBUG nova.virt.libvirt.driver [None req-57a9ea68-ec8a-451a-b907-3ce30cfe2680 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] [instance: d699cf6a-9c33-400b-8d0f-4d61b8b16916] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Jan 20 14:48:05 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:48:05 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:48:05 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:48:05.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:48:05 compute-1 nova_compute[225855]: 2026-01-20 14:48:05.670 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:48:05 compute-1 ceph-mon[81775]: pgmap v1800: 321 pgs: 321 active+clean; 448 MiB data, 1012 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 2.7 MiB/s wr, 243 op/s
Jan 20 14:48:06 compute-1 nova_compute[225855]: 2026-01-20 14:48:06.506 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:48:06 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:48:06 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:48:06 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:48:06.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:48:06 compute-1 nova_compute[225855]: 2026-01-20 14:48:06.859 225859 DEBUG nova.compute.manager [req-097c2d3f-04a2-477f-be4d-2ec10e58f733 req-0fac2271-8ace-4584-b943-f32d437f62b6 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 9beb3ec3-721e-4919-9713-a92c82ad189b] Received event network-vif-unplugged-efc8b363-e70d-42f6-9be8-99865e269ec9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:48:06 compute-1 nova_compute[225855]: 2026-01-20 14:48:06.859 225859 DEBUG oslo_concurrency.lockutils [req-097c2d3f-04a2-477f-be4d-2ec10e58f733 req-0fac2271-8ace-4584-b943-f32d437f62b6 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "9beb3ec3-721e-4919-9713-a92c82ad189b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:48:06 compute-1 nova_compute[225855]: 2026-01-20 14:48:06.860 225859 DEBUG oslo_concurrency.lockutils [req-097c2d3f-04a2-477f-be4d-2ec10e58f733 req-0fac2271-8ace-4584-b943-f32d437f62b6 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "9beb3ec3-721e-4919-9713-a92c82ad189b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:48:06 compute-1 nova_compute[225855]: 2026-01-20 14:48:06.860 225859 DEBUG oslo_concurrency.lockutils [req-097c2d3f-04a2-477f-be4d-2ec10e58f733 req-0fac2271-8ace-4584-b943-f32d437f62b6 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "9beb3ec3-721e-4919-9713-a92c82ad189b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:48:06 compute-1 nova_compute[225855]: 2026-01-20 14:48:06.860 225859 DEBUG nova.compute.manager [req-097c2d3f-04a2-477f-be4d-2ec10e58f733 req-0fac2271-8ace-4584-b943-f32d437f62b6 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 9beb3ec3-721e-4919-9713-a92c82ad189b] No waiting events found dispatching network-vif-unplugged-efc8b363-e70d-42f6-9be8-99865e269ec9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:48:06 compute-1 nova_compute[225855]: 2026-01-20 14:48:06.860 225859 WARNING nova.compute.manager [req-097c2d3f-04a2-477f-be4d-2ec10e58f733 req-0fac2271-8ace-4584-b943-f32d437f62b6 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 9beb3ec3-721e-4919-9713-a92c82ad189b] Received unexpected event network-vif-unplugged-efc8b363-e70d-42f6-9be8-99865e269ec9 for instance with vm_state active and task_state resize_migrating.
Jan 20 14:48:07 compute-1 ceph-mon[81775]: pgmap v1801: 321 pgs: 321 active+clean; 472 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 4.8 MiB/s wr, 234 op/s
Jan 20 14:48:07 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e241 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:48:07 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:48:07 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:48:07 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:48:07.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:48:08 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:48:08 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:48:08 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:48:08.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:48:08 compute-1 nova_compute[225855]: 2026-01-20 14:48:08.686 225859 INFO nova.network.neutron [None req-4d25a047-bfcc-47e3-8e2e-07a0e595cea9 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] [instance: 9beb3ec3-721e-4919-9713-a92c82ad189b] Updating port efc8b363-e70d-42f6-9be8-99865e269ec9 with attributes {'binding:host_id': 'compute-1.ctlplane.example.com', 'device_owner': 'compute:nova'}
Jan 20 14:48:08 compute-1 sudo[262914]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:48:08 compute-1 sudo[262914]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:48:08 compute-1 sudo[262914]: pam_unix(sudo:session): session closed for user root
Jan 20 14:48:08 compute-1 sudo[262939]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:48:08 compute-1 sudo[262939]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:48:08 compute-1 sudo[262939]: pam_unix(sudo:session): session closed for user root
Jan 20 14:48:09 compute-1 nova_compute[225855]: 2026-01-20 14:48:09.001 225859 DEBUG nova.compute.manager [req-e20e56d3-b184-482a-bf1c-4bef38a10f5b req-a76ed729-c047-4567-b166-ff20c00ca3d6 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 9beb3ec3-721e-4919-9713-a92c82ad189b] Received event network-vif-plugged-efc8b363-e70d-42f6-9be8-99865e269ec9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:48:09 compute-1 nova_compute[225855]: 2026-01-20 14:48:09.002 225859 DEBUG oslo_concurrency.lockutils [req-e20e56d3-b184-482a-bf1c-4bef38a10f5b req-a76ed729-c047-4567-b166-ff20c00ca3d6 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "9beb3ec3-721e-4919-9713-a92c82ad189b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:48:09 compute-1 nova_compute[225855]: 2026-01-20 14:48:09.002 225859 DEBUG oslo_concurrency.lockutils [req-e20e56d3-b184-482a-bf1c-4bef38a10f5b req-a76ed729-c047-4567-b166-ff20c00ca3d6 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "9beb3ec3-721e-4919-9713-a92c82ad189b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:48:09 compute-1 nova_compute[225855]: 2026-01-20 14:48:09.003 225859 DEBUG oslo_concurrency.lockutils [req-e20e56d3-b184-482a-bf1c-4bef38a10f5b req-a76ed729-c047-4567-b166-ff20c00ca3d6 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "9beb3ec3-721e-4919-9713-a92c82ad189b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:48:09 compute-1 nova_compute[225855]: 2026-01-20 14:48:09.003 225859 DEBUG nova.compute.manager [req-e20e56d3-b184-482a-bf1c-4bef38a10f5b req-a76ed729-c047-4567-b166-ff20c00ca3d6 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 9beb3ec3-721e-4919-9713-a92c82ad189b] No waiting events found dispatching network-vif-plugged-efc8b363-e70d-42f6-9be8-99865e269ec9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:48:09 compute-1 nova_compute[225855]: 2026-01-20 14:48:09.003 225859 WARNING nova.compute.manager [req-e20e56d3-b184-482a-bf1c-4bef38a10f5b req-a76ed729-c047-4567-b166-ff20c00ca3d6 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 9beb3ec3-721e-4919-9713-a92c82ad189b] Received unexpected event network-vif-plugged-efc8b363-e70d-42f6-9be8-99865e269ec9 for instance with vm_state active and task_state resize_migrated.
Jan 20 14:48:09 compute-1 ceph-mon[81775]: pgmap v1802: 321 pgs: 321 active+clean; 478 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 4.3 MiB/s wr, 224 op/s
Jan 20 14:48:09 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:48:09 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:48:09 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:48:09.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:48:09 compute-1 nova_compute[225855]: 2026-01-20 14:48:09.849 225859 DEBUG oslo_concurrency.lockutils [None req-4d25a047-bfcc-47e3-8e2e-07a0e595cea9 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Acquiring lock "refresh_cache-9beb3ec3-721e-4919-9713-a92c82ad189b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:48:09 compute-1 nova_compute[225855]: 2026-01-20 14:48:09.850 225859 DEBUG oslo_concurrency.lockutils [None req-4d25a047-bfcc-47e3-8e2e-07a0e595cea9 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Acquired lock "refresh_cache-9beb3ec3-721e-4919-9713-a92c82ad189b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:48:09 compute-1 nova_compute[225855]: 2026-01-20 14:48:09.850 225859 DEBUG nova.network.neutron [None req-4d25a047-bfcc-47e3-8e2e-07a0e595cea9 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] [instance: 9beb3ec3-721e-4919-9713-a92c82ad189b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 14:48:10 compute-1 nova_compute[225855]: 2026-01-20 14:48:10.672 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:48:10 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:48:10 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:48:10 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:48:10.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:48:11 compute-1 nova_compute[225855]: 2026-01-20 14:48:11.091 225859 DEBUG nova.compute.manager [req-ef09fc88-af27-40ef-880d-cee8107b5ee9 req-33809d9c-85ae-465c-82b7-2ba2a4cbd952 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 9beb3ec3-721e-4919-9713-a92c82ad189b] Received event network-changed-efc8b363-e70d-42f6-9be8-99865e269ec9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:48:11 compute-1 nova_compute[225855]: 2026-01-20 14:48:11.091 225859 DEBUG nova.compute.manager [req-ef09fc88-af27-40ef-880d-cee8107b5ee9 req-33809d9c-85ae-465c-82b7-2ba2a4cbd952 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 9beb3ec3-721e-4919-9713-a92c82ad189b] Refreshing instance network info cache due to event network-changed-efc8b363-e70d-42f6-9be8-99865e269ec9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 14:48:11 compute-1 nova_compute[225855]: 2026-01-20 14:48:11.092 225859 DEBUG oslo_concurrency.lockutils [req-ef09fc88-af27-40ef-880d-cee8107b5ee9 req-33809d9c-85ae-465c-82b7-2ba2a4cbd952 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-9beb3ec3-721e-4919-9713-a92c82ad189b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:48:11 compute-1 ceph-mon[81775]: pgmap v1803: 321 pgs: 321 active+clean; 480 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 3.5 MiB/s wr, 229 op/s
Jan 20 14:48:11 compute-1 nova_compute[225855]: 2026-01-20 14:48:11.508 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:48:11 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:48:11 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:48:11 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:48:11.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:48:11 compute-1 nova_compute[225855]: 2026-01-20 14:48:11.977 225859 DEBUG nova.network.neutron [None req-4d25a047-bfcc-47e3-8e2e-07a0e595cea9 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] [instance: 9beb3ec3-721e-4919-9713-a92c82ad189b] Updating instance_info_cache with network_info: [{"id": "efc8b363-e70d-42f6-9be8-99865e269ec9", "address": "fa:16:3e:36:66:1d", "network": {"id": "a19e9d1a-864f-41ee-bdea-188e65973ea5", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-916311998-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b683fcc0026242e28ba6d8fba638688e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapefc8b363-e7", "ovs_interfaceid": "efc8b363-e70d-42f6-9be8-99865e269ec9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:48:12 compute-1 nova_compute[225855]: 2026-01-20 14:48:12.012 225859 DEBUG oslo_concurrency.lockutils [None req-4d25a047-bfcc-47e3-8e2e-07a0e595cea9 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Releasing lock "refresh_cache-9beb3ec3-721e-4919-9713-a92c82ad189b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:48:12 compute-1 nova_compute[225855]: 2026-01-20 14:48:12.015 225859 DEBUG oslo_concurrency.lockutils [req-ef09fc88-af27-40ef-880d-cee8107b5ee9 req-33809d9c-85ae-465c-82b7-2ba2a4cbd952 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-9beb3ec3-721e-4919-9713-a92c82ad189b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:48:12 compute-1 nova_compute[225855]: 2026-01-20 14:48:12.015 225859 DEBUG nova.network.neutron [req-ef09fc88-af27-40ef-880d-cee8107b5ee9 req-33809d9c-85ae-465c-82b7-2ba2a4cbd952 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 9beb3ec3-721e-4919-9713-a92c82ad189b] Refreshing network info cache for port efc8b363-e70d-42f6-9be8-99865e269ec9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 14:48:12 compute-1 nova_compute[225855]: 2026-01-20 14:48:12.078 225859 DEBUG os_brick.utils [None req-4d25a047-bfcc-47e3-8e2e-07a0e595cea9 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.101', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-1.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Jan 20 14:48:12 compute-1 nova_compute[225855]: 2026-01-20 14:48:12.080 231081 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:48:12 compute-1 nova_compute[225855]: 2026-01-20 14:48:12.092 231081 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.013s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:48:12 compute-1 nova_compute[225855]: 2026-01-20 14:48:12.093 231081 DEBUG oslo.privsep.daemon [-] privsep: reply[be49b86a-def5-4b33-a828-844319f1c63e]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:48:12 compute-1 nova_compute[225855]: 2026-01-20 14:48:12.094 231081 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:48:12 compute-1 nova_compute[225855]: 2026-01-20 14:48:12.102 231081 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:48:12 compute-1 nova_compute[225855]: 2026-01-20 14:48:12.102 231081 DEBUG oslo.privsep.daemon [-] privsep: reply[3497dad6-62db-4442-bcf2-7aa93873de6a]: (4, ('InitiatorName=iqn.1994-05.com.redhat:1821ea3dc03d', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:48:12 compute-1 nova_compute[225855]: 2026-01-20 14:48:12.104 231081 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:48:12 compute-1 nova_compute[225855]: 2026-01-20 14:48:12.112 231081 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:48:12 compute-1 nova_compute[225855]: 2026-01-20 14:48:12.112 231081 DEBUG oslo.privsep.daemon [-] privsep: reply[9d21fe2f-fee5-429e-822f-b4d6c6b13d60]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:48:12 compute-1 nova_compute[225855]: 2026-01-20 14:48:12.114 231081 DEBUG oslo.privsep.daemon [-] privsep: reply[3c1a4dfa-1409-4419-a336-b71a38e101d2]: (4, '870b1f1c-f19c-477b-b282-ee6eeba50974') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:48:12 compute-1 nova_compute[225855]: 2026-01-20 14:48:12.114 225859 DEBUG oslo_concurrency.processutils [None req-4d25a047-bfcc-47e3-8e2e-07a0e595cea9 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:48:12 compute-1 nova_compute[225855]: 2026-01-20 14:48:12.140 225859 DEBUG oslo_concurrency.processutils [None req-4d25a047-bfcc-47e3-8e2e-07a0e595cea9 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] CMD "nvme version" returned: 0 in 0.026s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:48:12 compute-1 nova_compute[225855]: 2026-01-20 14:48:12.143 225859 DEBUG os_brick.initiator.connectors.lightos [None req-4d25a047-bfcc-47e3-8e2e-07a0e595cea9 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Jan 20 14:48:12 compute-1 nova_compute[225855]: 2026-01-20 14:48:12.143 225859 DEBUG os_brick.initiator.connectors.lightos [None req-4d25a047-bfcc-47e3-8e2e-07a0e595cea9 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Jan 20 14:48:12 compute-1 nova_compute[225855]: 2026-01-20 14:48:12.143 225859 DEBUG os_brick.initiator.connectors.lightos [None req-4d25a047-bfcc-47e3-8e2e-07a0e595cea9 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:5350774e-8b5e-4dba-80a9-92d405981c1d dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Jan 20 14:48:12 compute-1 nova_compute[225855]: 2026-01-20 14:48:12.144 225859 DEBUG os_brick.utils [None req-4d25a047-bfcc-47e3-8e2e-07a0e595cea9 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] <== get_connector_properties: return (64ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.101', 'host': 'compute-1.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:1821ea3dc03d', 'do_local_attach': False, 'nvme_hostid': '5350774e-8b5e-4dba-80a9-92d405981c1d', 'system uuid': '870b1f1c-f19c-477b-b282-ee6eeba50974', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:5350774e-8b5e-4dba-80a9-92d405981c1d', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Jan 20 14:48:12 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e241 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:48:12 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:48:12 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:48:12 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:48:12.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:48:13 compute-1 podman[262973]: 2026-01-20 14:48:13.020774774 +0000 UTC m=+0.062404953 container health_status 533cd8e0c3b8b7f910f15cef49e7f879ab46c12ce646a35628c15f13725eb500 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 20 14:48:13 compute-1 ceph-mon[81775]: pgmap v1804: 321 pgs: 321 active+clean; 486 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 2.9 MiB/s wr, 197 op/s
Jan 20 14:48:13 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/4075507241' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:48:13 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:48:13 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:48:13 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:48:13.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:48:13 compute-1 nova_compute[225855]: 2026-01-20 14:48:13.556 225859 DEBUG nova.virt.libvirt.driver [None req-4d25a047-bfcc-47e3-8e2e-07a0e595cea9 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] [instance: 9beb3ec3-721e-4919-9713-a92c82ad189b] Starting finish_migration finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11698
Jan 20 14:48:13 compute-1 nova_compute[225855]: 2026-01-20 14:48:13.558 225859 DEBUG nova.virt.libvirt.driver [None req-4d25a047-bfcc-47e3-8e2e-07a0e595cea9 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] [instance: 9beb3ec3-721e-4919-9713-a92c82ad189b] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719
Jan 20 14:48:13 compute-1 nova_compute[225855]: 2026-01-20 14:48:13.558 225859 INFO nova.virt.libvirt.driver [None req-4d25a047-bfcc-47e3-8e2e-07a0e595cea9 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] [instance: 9beb3ec3-721e-4919-9713-a92c82ad189b] Creating image(s)
Jan 20 14:48:13 compute-1 nova_compute[225855]: 2026-01-20 14:48:13.559 225859 DEBUG nova.virt.libvirt.driver [None req-4d25a047-bfcc-47e3-8e2e-07a0e595cea9 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] [instance: 9beb3ec3-721e-4919-9713-a92c82ad189b] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Jan 20 14:48:13 compute-1 nova_compute[225855]: 2026-01-20 14:48:13.559 225859 DEBUG nova.virt.libvirt.driver [None req-4d25a047-bfcc-47e3-8e2e-07a0e595cea9 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] [instance: 9beb3ec3-721e-4919-9713-a92c82ad189b] Ensure instance console log exists: /var/lib/nova/instances/9beb3ec3-721e-4919-9713-a92c82ad189b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 20 14:48:13 compute-1 nova_compute[225855]: 2026-01-20 14:48:13.559 225859 DEBUG oslo_concurrency.lockutils [None req-4d25a047-bfcc-47e3-8e2e-07a0e595cea9 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:48:13 compute-1 nova_compute[225855]: 2026-01-20 14:48:13.560 225859 DEBUG oslo_concurrency.lockutils [None req-4d25a047-bfcc-47e3-8e2e-07a0e595cea9 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:48:13 compute-1 nova_compute[225855]: 2026-01-20 14:48:13.560 225859 DEBUG oslo_concurrency.lockutils [None req-4d25a047-bfcc-47e3-8e2e-07a0e595cea9 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:48:13 compute-1 nova_compute[225855]: 2026-01-20 14:48:13.562 225859 DEBUG nova.virt.libvirt.driver [None req-4d25a047-bfcc-47e3-8e2e-07a0e595cea9 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] [instance: 9beb3ec3-721e-4919-9713-a92c82ad189b] Start _get_guest_xml network_info=[{"id": "efc8b363-e70d-42f6-9be8-99865e269ec9", "address": "fa:16:3e:36:66:1d", "network": {"id": "a19e9d1a-864f-41ee-bdea-188e65973ea5", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-916311998-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestOtherA-916311998-network", "vif_mac": "fa:16:3e:36:66:1d"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b683fcc0026242e28ba6d8fba638688e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapefc8b363-e7", "ovs_interfaceid": "efc8b363-e70d-42f6-9be8-99865e269ec9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'delete_on_termination': True, 'device_type': 'disk', 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-9219aafd-6c66-4f38-9927-85b54b4175ae', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '9219aafd-6c66-4f38-9927-85b54b4175ae', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'attaching', 'instance': '9beb3ec3-721e-4919-9713-a92c82ad189b', 'attached_at': '2026-01-20T14:48:13.000000', 'detached_at': '', 'volume_id': '9219aafd-6c66-4f38-9927-85b54b4175ae', 'serial': '9219aafd-6c66-4f38-9927-85b54b4175ae'}, 'guest_format': None, 'boot_index': 0, 'mount_device': '/dev/vda', 'attachment_id': 'a2de2d41-d2d4-4195-90f3-7c5d6054f339', 'disk_bus': 'virtio', 'volume_type': None}], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 20 14:48:13 compute-1 nova_compute[225855]: 2026-01-20 14:48:13.565 225859 WARNING nova.virt.libvirt.driver [None req-4d25a047-bfcc-47e3-8e2e-07a0e595cea9 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 14:48:13 compute-1 nova_compute[225855]: 2026-01-20 14:48:13.572 225859 DEBUG nova.virt.libvirt.host [None req-4d25a047-bfcc-47e3-8e2e-07a0e595cea9 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 20 14:48:13 compute-1 nova_compute[225855]: 2026-01-20 14:48:13.573 225859 DEBUG nova.virt.libvirt.host [None req-4d25a047-bfcc-47e3-8e2e-07a0e595cea9 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 20 14:48:13 compute-1 nova_compute[225855]: 2026-01-20 14:48:13.576 225859 DEBUG nova.virt.libvirt.host [None req-4d25a047-bfcc-47e3-8e2e-07a0e595cea9 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 20 14:48:13 compute-1 nova_compute[225855]: 2026-01-20 14:48:13.576 225859 DEBUG nova.virt.libvirt.host [None req-4d25a047-bfcc-47e3-8e2e-07a0e595cea9 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 20 14:48:13 compute-1 nova_compute[225855]: 2026-01-20 14:48:13.577 225859 DEBUG nova.virt.libvirt.driver [None req-4d25a047-bfcc-47e3-8e2e-07a0e595cea9 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 20 14:48:13 compute-1 nova_compute[225855]: 2026-01-20 14:48:13.578 225859 DEBUG nova.virt.hardware [None req-4d25a047-bfcc-47e3-8e2e-07a0e595cea9 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-20T14:21:55Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='30c26a27-d918-46d8-a512-4ef3b4ce5955',id=2,is_public=True,memory_mb=192,name='m1.micro',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 20 14:48:13 compute-1 nova_compute[225855]: 2026-01-20 14:48:13.578 225859 DEBUG nova.virt.hardware [None req-4d25a047-bfcc-47e3-8e2e-07a0e595cea9 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 20 14:48:13 compute-1 nova_compute[225855]: 2026-01-20 14:48:13.578 225859 DEBUG nova.virt.hardware [None req-4d25a047-bfcc-47e3-8e2e-07a0e595cea9 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 20 14:48:13 compute-1 nova_compute[225855]: 2026-01-20 14:48:13.579 225859 DEBUG nova.virt.hardware [None req-4d25a047-bfcc-47e3-8e2e-07a0e595cea9 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 20 14:48:13 compute-1 nova_compute[225855]: 2026-01-20 14:48:13.579 225859 DEBUG nova.virt.hardware [None req-4d25a047-bfcc-47e3-8e2e-07a0e595cea9 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 20 14:48:13 compute-1 nova_compute[225855]: 2026-01-20 14:48:13.579 225859 DEBUG nova.virt.hardware [None req-4d25a047-bfcc-47e3-8e2e-07a0e595cea9 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 20 14:48:13 compute-1 nova_compute[225855]: 2026-01-20 14:48:13.579 225859 DEBUG nova.virt.hardware [None req-4d25a047-bfcc-47e3-8e2e-07a0e595cea9 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 20 14:48:13 compute-1 nova_compute[225855]: 2026-01-20 14:48:13.580 225859 DEBUG nova.virt.hardware [None req-4d25a047-bfcc-47e3-8e2e-07a0e595cea9 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 20 14:48:13 compute-1 nova_compute[225855]: 2026-01-20 14:48:13.580 225859 DEBUG nova.virt.hardware [None req-4d25a047-bfcc-47e3-8e2e-07a0e595cea9 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 20 14:48:13 compute-1 nova_compute[225855]: 2026-01-20 14:48:13.580 225859 DEBUG nova.virt.hardware [None req-4d25a047-bfcc-47e3-8e2e-07a0e595cea9 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 20 14:48:13 compute-1 nova_compute[225855]: 2026-01-20 14:48:13.580 225859 DEBUG nova.virt.hardware [None req-4d25a047-bfcc-47e3-8e2e-07a0e595cea9 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 20 14:48:13 compute-1 nova_compute[225855]: 2026-01-20 14:48:13.580 225859 DEBUG nova.objects.instance [None req-4d25a047-bfcc-47e3-8e2e-07a0e595cea9 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Lazy-loading 'vcpu_model' on Instance uuid 9beb3ec3-721e-4919-9713-a92c82ad189b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:48:13 compute-1 nova_compute[225855]: 2026-01-20 14:48:13.625 225859 DEBUG oslo_concurrency.processutils [None req-4d25a047-bfcc-47e3-8e2e-07a0e595cea9 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:48:13 compute-1 nova_compute[225855]: 2026-01-20 14:48:13.910 225859 DEBUG nova.network.neutron [req-ef09fc88-af27-40ef-880d-cee8107b5ee9 req-33809d9c-85ae-465c-82b7-2ba2a4cbd952 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 9beb3ec3-721e-4919-9713-a92c82ad189b] Updated VIF entry in instance network info cache for port efc8b363-e70d-42f6-9be8-99865e269ec9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 14:48:13 compute-1 nova_compute[225855]: 2026-01-20 14:48:13.911 225859 DEBUG nova.network.neutron [req-ef09fc88-af27-40ef-880d-cee8107b5ee9 req-33809d9c-85ae-465c-82b7-2ba2a4cbd952 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 9beb3ec3-721e-4919-9713-a92c82ad189b] Updating instance_info_cache with network_info: [{"id": "efc8b363-e70d-42f6-9be8-99865e269ec9", "address": "fa:16:3e:36:66:1d", "network": {"id": "a19e9d1a-864f-41ee-bdea-188e65973ea5", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-916311998-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b683fcc0026242e28ba6d8fba638688e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapefc8b363-e7", "ovs_interfaceid": "efc8b363-e70d-42f6-9be8-99865e269ec9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:48:13 compute-1 nova_compute[225855]: 2026-01-20 14:48:13.943 225859 DEBUG oslo_concurrency.lockutils [req-ef09fc88-af27-40ef-880d-cee8107b5ee9 req-33809d9c-85ae-465c-82b7-2ba2a4cbd952 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-9beb3ec3-721e-4919-9713-a92c82ad189b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:48:14 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:48:14 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3187814475' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:48:14 compute-1 nova_compute[225855]: 2026-01-20 14:48:14.059 225859 DEBUG oslo_concurrency.processutils [None req-4d25a047-bfcc-47e3-8e2e-07a0e595cea9 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:48:14 compute-1 nova_compute[225855]: 2026-01-20 14:48:14.083 225859 DEBUG nova.virt.libvirt.vif [None req-4d25a047-bfcc-47e3-8e2e-07a0e595cea9 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T14:47:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherA-server-757916410',display_name='tempest-ServerActionsTestOtherA-server-757916410',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveractionstestothera-server-757916410',id=101,image_ref='',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDOC6AOV9rhSIyyfBXYGEFvhdWE5GLRuNfs0sPvnXoLHIstQY2OpqwhI35UcFW1s96uqz0+j9sMbdcq/NuNcfgrlnXkEz6j2iO7WUECWdPrQW34JB1FTWAvtA4R6RDoaZA==',key_name='tempest-keypair-1611966828',keypairs=<?>,launch_index=0,launched_at=2026-01-20T14:47:42Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='b683fcc0026242e28ba6d8fba638688e',ramdisk_id='',reservation_id='r-042ihw9k',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',old_vm_state='active',owner_project_name='tempest-ServerActionsTestOtherA-967087071',owner_user_name='tempest-ServerActionsTestOtherA-967087071-project-member'},tags=<?>,task_state='resize_finish',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T14:48:08Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='869086208e10436c9dc96c78bee9a85d',uuid=9beb3ec3-721e-4919-9713-a92c82ad189b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "efc8b363-e70d-42f6-9be8-99865e269ec9", "address": "fa:16:3e:36:66:1d", "network": {"id": "a19e9d1a-864f-41ee-bdea-188e65973ea5", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-916311998-network", "subnets": 
[{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestOtherA-916311998-network", "vif_mac": "fa:16:3e:36:66:1d"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b683fcc0026242e28ba6d8fba638688e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapefc8b363-e7", "ovs_interfaceid": "efc8b363-e70d-42f6-9be8-99865e269ec9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 20 14:48:14 compute-1 nova_compute[225855]: 2026-01-20 14:48:14.084 225859 DEBUG nova.network.os_vif_util [None req-4d25a047-bfcc-47e3-8e2e-07a0e595cea9 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Converting VIF {"id": "efc8b363-e70d-42f6-9be8-99865e269ec9", "address": "fa:16:3e:36:66:1d", "network": {"id": "a19e9d1a-864f-41ee-bdea-188e65973ea5", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-916311998-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestOtherA-916311998-network", "vif_mac": "fa:16:3e:36:66:1d"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b683fcc0026242e28ba6d8fba638688e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapefc8b363-e7", "ovs_interfaceid": "efc8b363-e70d-42f6-9be8-99865e269ec9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:48:14 compute-1 nova_compute[225855]: 2026-01-20 14:48:14.084 225859 DEBUG nova.network.os_vif_util [None req-4d25a047-bfcc-47e3-8e2e-07a0e595cea9 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:36:66:1d,bridge_name='br-int',has_traffic_filtering=True,id=efc8b363-e70d-42f6-9be8-99865e269ec9,network=Network(a19e9d1a-864f-41ee-bdea-188e65973ea5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapefc8b363-e7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:48:14 compute-1 nova_compute[225855]: 2026-01-20 14:48:14.087 225859 DEBUG nova.virt.libvirt.driver [None req-4d25a047-bfcc-47e3-8e2e-07a0e595cea9 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] [instance: 9beb3ec3-721e-4919-9713-a92c82ad189b] End _get_guest_xml xml=<domain type="kvm">
Jan 20 14:48:14 compute-1 nova_compute[225855]:   <uuid>9beb3ec3-721e-4919-9713-a92c82ad189b</uuid>
Jan 20 14:48:14 compute-1 nova_compute[225855]:   <name>instance-00000065</name>
Jan 20 14:48:14 compute-1 nova_compute[225855]:   <memory>196608</memory>
Jan 20 14:48:14 compute-1 nova_compute[225855]:   <vcpu>1</vcpu>
Jan 20 14:48:14 compute-1 nova_compute[225855]:   <metadata>
Jan 20 14:48:14 compute-1 nova_compute[225855]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 14:48:14 compute-1 nova_compute[225855]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 14:48:14 compute-1 nova_compute[225855]:       <nova:name>tempest-ServerActionsTestOtherA-server-757916410</nova:name>
Jan 20 14:48:14 compute-1 nova_compute[225855]:       <nova:creationTime>2026-01-20 14:48:13</nova:creationTime>
Jan 20 14:48:14 compute-1 nova_compute[225855]:       <nova:flavor name="m1.micro">
Jan 20 14:48:14 compute-1 nova_compute[225855]:         <nova:memory>192</nova:memory>
Jan 20 14:48:14 compute-1 nova_compute[225855]:         <nova:disk>1</nova:disk>
Jan 20 14:48:14 compute-1 nova_compute[225855]:         <nova:swap>0</nova:swap>
Jan 20 14:48:14 compute-1 nova_compute[225855]:         <nova:ephemeral>0</nova:ephemeral>
Jan 20 14:48:14 compute-1 nova_compute[225855]:         <nova:vcpus>1</nova:vcpus>
Jan 20 14:48:14 compute-1 nova_compute[225855]:       </nova:flavor>
Jan 20 14:48:14 compute-1 nova_compute[225855]:       <nova:owner>
Jan 20 14:48:14 compute-1 nova_compute[225855]:         <nova:user uuid="869086208e10436c9dc96c78bee9a85d">tempest-ServerActionsTestOtherA-967087071-project-member</nova:user>
Jan 20 14:48:14 compute-1 nova_compute[225855]:         <nova:project uuid="b683fcc0026242e28ba6d8fba638688e">tempest-ServerActionsTestOtherA-967087071</nova:project>
Jan 20 14:48:14 compute-1 nova_compute[225855]:       </nova:owner>
Jan 20 14:48:14 compute-1 nova_compute[225855]:       <nova:ports>
Jan 20 14:48:14 compute-1 nova_compute[225855]:         <nova:port uuid="efc8b363-e70d-42f6-9be8-99865e269ec9">
Jan 20 14:48:14 compute-1 nova_compute[225855]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Jan 20 14:48:14 compute-1 nova_compute[225855]:         </nova:port>
Jan 20 14:48:14 compute-1 nova_compute[225855]:       </nova:ports>
Jan 20 14:48:14 compute-1 nova_compute[225855]:     </nova:instance>
Jan 20 14:48:14 compute-1 nova_compute[225855]:   </metadata>
Jan 20 14:48:14 compute-1 nova_compute[225855]:   <sysinfo type="smbios">
Jan 20 14:48:14 compute-1 nova_compute[225855]:     <system>
Jan 20 14:48:14 compute-1 nova_compute[225855]:       <entry name="manufacturer">RDO</entry>
Jan 20 14:48:14 compute-1 nova_compute[225855]:       <entry name="product">OpenStack Compute</entry>
Jan 20 14:48:14 compute-1 nova_compute[225855]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 14:48:14 compute-1 nova_compute[225855]:       <entry name="serial">9beb3ec3-721e-4919-9713-a92c82ad189b</entry>
Jan 20 14:48:14 compute-1 nova_compute[225855]:       <entry name="uuid">9beb3ec3-721e-4919-9713-a92c82ad189b</entry>
Jan 20 14:48:14 compute-1 nova_compute[225855]:       <entry name="family">Virtual Machine</entry>
Jan 20 14:48:14 compute-1 nova_compute[225855]:     </system>
Jan 20 14:48:14 compute-1 nova_compute[225855]:   </sysinfo>
Jan 20 14:48:14 compute-1 nova_compute[225855]:   <os>
Jan 20 14:48:14 compute-1 nova_compute[225855]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 20 14:48:14 compute-1 nova_compute[225855]:     <boot dev="hd"/>
Jan 20 14:48:14 compute-1 nova_compute[225855]:     <smbios mode="sysinfo"/>
Jan 20 14:48:14 compute-1 nova_compute[225855]:   </os>
Jan 20 14:48:14 compute-1 nova_compute[225855]:   <features>
Jan 20 14:48:14 compute-1 nova_compute[225855]:     <acpi/>
Jan 20 14:48:14 compute-1 nova_compute[225855]:     <apic/>
Jan 20 14:48:14 compute-1 nova_compute[225855]:     <vmcoreinfo/>
Jan 20 14:48:14 compute-1 nova_compute[225855]:   </features>
Jan 20 14:48:14 compute-1 nova_compute[225855]:   <clock offset="utc">
Jan 20 14:48:14 compute-1 nova_compute[225855]:     <timer name="pit" tickpolicy="delay"/>
Jan 20 14:48:14 compute-1 nova_compute[225855]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 20 14:48:14 compute-1 nova_compute[225855]:     <timer name="hpet" present="no"/>
Jan 20 14:48:14 compute-1 nova_compute[225855]:   </clock>
Jan 20 14:48:14 compute-1 nova_compute[225855]:   <cpu mode="custom" match="exact">
Jan 20 14:48:14 compute-1 nova_compute[225855]:     <model>Nehalem</model>
Jan 20 14:48:14 compute-1 nova_compute[225855]:     <topology sockets="1" cores="1" threads="1"/>
Jan 20 14:48:14 compute-1 nova_compute[225855]:   </cpu>
Jan 20 14:48:14 compute-1 nova_compute[225855]:   <devices>
Jan 20 14:48:14 compute-1 nova_compute[225855]:     <disk type="network" device="cdrom">
Jan 20 14:48:14 compute-1 nova_compute[225855]:       <driver type="raw" cache="none"/>
Jan 20 14:48:14 compute-1 nova_compute[225855]:       <source protocol="rbd" name="vms/9beb3ec3-721e-4919-9713-a92c82ad189b_disk.config">
Jan 20 14:48:14 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:48:14 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:48:14 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:48:14 compute-1 nova_compute[225855]:       </source>
Jan 20 14:48:14 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 14:48:14 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:48:14 compute-1 nova_compute[225855]:       </auth>
Jan 20 14:48:14 compute-1 nova_compute[225855]:       <target dev="sda" bus="sata"/>
Jan 20 14:48:14 compute-1 nova_compute[225855]:     </disk>
Jan 20 14:48:14 compute-1 nova_compute[225855]:     <disk type="network" device="disk">
Jan 20 14:48:14 compute-1 nova_compute[225855]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 20 14:48:14 compute-1 nova_compute[225855]:       <source protocol="rbd" name="volumes/volume-9219aafd-6c66-4f38-9927-85b54b4175ae">
Jan 20 14:48:14 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:48:14 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:48:14 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:48:14 compute-1 nova_compute[225855]:       </source>
Jan 20 14:48:14 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 14:48:14 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:48:14 compute-1 nova_compute[225855]:       </auth>
Jan 20 14:48:14 compute-1 nova_compute[225855]:       <target dev="vda" bus="virtio"/>
Jan 20 14:48:14 compute-1 nova_compute[225855]:       <serial>9219aafd-6c66-4f38-9927-85b54b4175ae</serial>
Jan 20 14:48:14 compute-1 nova_compute[225855]:     </disk>
Jan 20 14:48:14 compute-1 nova_compute[225855]:     <interface type="ethernet">
Jan 20 14:48:14 compute-1 nova_compute[225855]:       <mac address="fa:16:3e:36:66:1d"/>
Jan 20 14:48:14 compute-1 nova_compute[225855]:       <model type="virtio"/>
Jan 20 14:48:14 compute-1 nova_compute[225855]:       <driver name="vhost" rx_queue_size="512"/>
Jan 20 14:48:14 compute-1 nova_compute[225855]:       <mtu size="1442"/>
Jan 20 14:48:14 compute-1 nova_compute[225855]:       <target dev="tapefc8b363-e7"/>
Jan 20 14:48:14 compute-1 nova_compute[225855]:     </interface>
Jan 20 14:48:14 compute-1 nova_compute[225855]:     <serial type="pty">
Jan 20 14:48:14 compute-1 nova_compute[225855]:       <log file="/var/lib/nova/instances/9beb3ec3-721e-4919-9713-a92c82ad189b/console.log" append="off"/>
Jan 20 14:48:14 compute-1 nova_compute[225855]:     </serial>
Jan 20 14:48:14 compute-1 nova_compute[225855]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 20 14:48:14 compute-1 nova_compute[225855]:     <video>
Jan 20 14:48:14 compute-1 nova_compute[225855]:       <model type="virtio"/>
Jan 20 14:48:14 compute-1 nova_compute[225855]:     </video>
Jan 20 14:48:14 compute-1 nova_compute[225855]:     <input type="tablet" bus="usb"/>
Jan 20 14:48:14 compute-1 nova_compute[225855]:     <rng model="virtio">
Jan 20 14:48:14 compute-1 nova_compute[225855]:       <backend model="random">/dev/urandom</backend>
Jan 20 14:48:14 compute-1 nova_compute[225855]:     </rng>
Jan 20 14:48:14 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root"/>
Jan 20 14:48:14 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:48:14 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:48:14 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:48:14 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:48:14 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:48:14 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:48:14 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:48:14 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:48:14 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:48:14 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:48:14 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:48:14 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:48:14 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:48:14 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:48:14 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:48:14 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:48:14 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:48:14 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:48:14 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:48:14 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:48:14 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:48:14 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:48:14 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:48:14 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:48:14 compute-1 nova_compute[225855]:     <controller type="usb" index="0"/>
Jan 20 14:48:14 compute-1 nova_compute[225855]:     <memballoon model="virtio">
Jan 20 14:48:14 compute-1 nova_compute[225855]:       <stats period="10"/>
Jan 20 14:48:14 compute-1 nova_compute[225855]:     </memballoon>
Jan 20 14:48:14 compute-1 nova_compute[225855]:   </devices>
Jan 20 14:48:14 compute-1 nova_compute[225855]: </domain>
Jan 20 14:48:14 compute-1 nova_compute[225855]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 20 14:48:14 compute-1 nova_compute[225855]: 2026-01-20 14:48:14.088 225859 DEBUG nova.virt.libvirt.vif [None req-4d25a047-bfcc-47e3-8e2e-07a0e595cea9 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T14:47:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherA-server-757916410',display_name='tempest-ServerActionsTestOtherA-server-757916410',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveractionstestothera-server-757916410',id=101,image_ref='',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDOC6AOV9rhSIyyfBXYGEFvhdWE5GLRuNfs0sPvnXoLHIstQY2OpqwhI35UcFW1s96uqz0+j9sMbdcq/NuNcfgrlnXkEz6j2iO7WUECWdPrQW34JB1FTWAvtA4R6RDoaZA==',key_name='tempest-keypair-1611966828',keypairs=<?>,launch_index=0,launched_at=2026-01-20T14:47:42Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='b683fcc0026242e28ba6d8fba638688e',ramdisk_id='',reservation_id='r-042ihw9k',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',old_vm_state='active',owner_project_name='tempest-ServerActionsTestOtherA-967087071',owner_user_name='tempest-ServerActionsTestOtherA-967087071-project-member'},tags=<?>,task_state='resize_finish',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T14:48:08Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='869086208e10436c9dc96c78bee9a85d',uuid=9beb3ec3-721e-4919-9713-a92c82ad189b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "efc8b363-e70d-42f6-9be8-99865e269ec9", "address": "fa:16:3e:36:66:1d", "network": {"id": "a19e9d1a-864f-41ee-bdea-188e65973ea5", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-916311998-network", "subnets": 
[{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestOtherA-916311998-network", "vif_mac": "fa:16:3e:36:66:1d"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b683fcc0026242e28ba6d8fba638688e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapefc8b363-e7", "ovs_interfaceid": "efc8b363-e70d-42f6-9be8-99865e269ec9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 20 14:48:14 compute-1 nova_compute[225855]: 2026-01-20 14:48:14.089 225859 DEBUG nova.network.os_vif_util [None req-4d25a047-bfcc-47e3-8e2e-07a0e595cea9 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Converting VIF {"id": "efc8b363-e70d-42f6-9be8-99865e269ec9", "address": "fa:16:3e:36:66:1d", "network": {"id": "a19e9d1a-864f-41ee-bdea-188e65973ea5", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-916311998-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestOtherA-916311998-network", "vif_mac": "fa:16:3e:36:66:1d"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b683fcc0026242e28ba6d8fba638688e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapefc8b363-e7", "ovs_interfaceid": "efc8b363-e70d-42f6-9be8-99865e269ec9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:48:14 compute-1 nova_compute[225855]: 2026-01-20 14:48:14.089 225859 DEBUG nova.network.os_vif_util [None req-4d25a047-bfcc-47e3-8e2e-07a0e595cea9 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:36:66:1d,bridge_name='br-int',has_traffic_filtering=True,id=efc8b363-e70d-42f6-9be8-99865e269ec9,network=Network(a19e9d1a-864f-41ee-bdea-188e65973ea5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapefc8b363-e7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:48:14 compute-1 nova_compute[225855]: 2026-01-20 14:48:14.089 225859 DEBUG os_vif [None req-4d25a047-bfcc-47e3-8e2e-07a0e595cea9 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:36:66:1d,bridge_name='br-int',has_traffic_filtering=True,id=efc8b363-e70d-42f6-9be8-99865e269ec9,network=Network(a19e9d1a-864f-41ee-bdea-188e65973ea5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapefc8b363-e7') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 20 14:48:14 compute-1 nova_compute[225855]: 2026-01-20 14:48:14.090 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:48:14 compute-1 nova_compute[225855]: 2026-01-20 14:48:14.090 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:48:14 compute-1 nova_compute[225855]: 2026-01-20 14:48:14.091 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:48:14 compute-1 nova_compute[225855]: 2026-01-20 14:48:14.093 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:48:14 compute-1 nova_compute[225855]: 2026-01-20 14:48:14.093 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapefc8b363-e7, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:48:14 compute-1 nova_compute[225855]: 2026-01-20 14:48:14.094 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapefc8b363-e7, col_values=(('external_ids', {'iface-id': 'efc8b363-e70d-42f6-9be8-99865e269ec9', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:36:66:1d', 'vm-uuid': '9beb3ec3-721e-4919-9713-a92c82ad189b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:48:14 compute-1 nova_compute[225855]: 2026-01-20 14:48:14.095 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:48:14 compute-1 NetworkManager[49104]: <info>  [1768920494.0964] manager: (tapefc8b363-e7): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/151)
Jan 20 14:48:14 compute-1 nova_compute[225855]: 2026-01-20 14:48:14.098 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 14:48:14 compute-1 nova_compute[225855]: 2026-01-20 14:48:14.102 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:48:14 compute-1 nova_compute[225855]: 2026-01-20 14:48:14.103 225859 INFO os_vif [None req-4d25a047-bfcc-47e3-8e2e-07a0e595cea9 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:36:66:1d,bridge_name='br-int',has_traffic_filtering=True,id=efc8b363-e70d-42f6-9be8-99865e269ec9,network=Network(a19e9d1a-864f-41ee-bdea-188e65973ea5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapefc8b363-e7')
Jan 20 14:48:14 compute-1 nova_compute[225855]: 2026-01-20 14:48:14.161 225859 DEBUG nova.virt.libvirt.driver [None req-4d25a047-bfcc-47e3-8e2e-07a0e595cea9 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:48:14 compute-1 nova_compute[225855]: 2026-01-20 14:48:14.161 225859 DEBUG nova.virt.libvirt.driver [None req-4d25a047-bfcc-47e3-8e2e-07a0e595cea9 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:48:14 compute-1 nova_compute[225855]: 2026-01-20 14:48:14.162 225859 DEBUG nova.virt.libvirt.driver [None req-4d25a047-bfcc-47e3-8e2e-07a0e595cea9 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] No VIF found with MAC fa:16:3e:36:66:1d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 20 14:48:14 compute-1 nova_compute[225855]: 2026-01-20 14:48:14.162 225859 INFO nova.virt.libvirt.driver [None req-4d25a047-bfcc-47e3-8e2e-07a0e595cea9 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] [instance: 9beb3ec3-721e-4919-9713-a92c82ad189b] Using config drive
Jan 20 14:48:14 compute-1 kernel: tapefc8b363-e7: entered promiscuous mode
Jan 20 14:48:14 compute-1 NetworkManager[49104]: <info>  [1768920494.2506] manager: (tapefc8b363-e7): new Tun device (/org/freedesktop/NetworkManager/Devices/152)
Jan 20 14:48:14 compute-1 ovn_controller[130490]: 2026-01-20T14:48:14Z|00353|binding|INFO|Claiming lport efc8b363-e70d-42f6-9be8-99865e269ec9 for this chassis.
Jan 20 14:48:14 compute-1 ovn_controller[130490]: 2026-01-20T14:48:14Z|00354|binding|INFO|efc8b363-e70d-42f6-9be8-99865e269ec9: Claiming fa:16:3e:36:66:1d 10.100.0.8
Jan 20 14:48:14 compute-1 nova_compute[225855]: 2026-01-20 14:48:14.254 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:48:14 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:48:14.261 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:36:66:1d 10.100.0.8'], port_security=['fa:16:3e:36:66:1d 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '9beb3ec3-721e-4919-9713-a92c82ad189b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a19e9d1a-864f-41ee-bdea-188e65973ea5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b683fcc0026242e28ba6d8fba638688e', 'neutron:revision_number': '6', 'neutron:security_group_ids': '7ceb05b5-53ff-444a-b0ef-2ba8294d585b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.213'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=361f9a69-30a6-4be4-89ad-2a8f92877af2, chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], logical_port=efc8b363-e70d-42f6-9be8-99865e269ec9) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:48:14 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:48:14.264 140354 INFO neutron.agent.ovn.metadata.agent [-] Port efc8b363-e70d-42f6-9be8-99865e269ec9 in datapath a19e9d1a-864f-41ee-bdea-188e65973ea5 bound to our chassis
Jan 20 14:48:14 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:48:14.266 140354 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a19e9d1a-864f-41ee-bdea-188e65973ea5
Jan 20 14:48:14 compute-1 ovn_controller[130490]: 2026-01-20T14:48:14Z|00355|binding|INFO|Setting lport efc8b363-e70d-42f6-9be8-99865e269ec9 ovn-installed in OVS
Jan 20 14:48:14 compute-1 ovn_controller[130490]: 2026-01-20T14:48:14Z|00356|binding|INFO|Setting lport efc8b363-e70d-42f6-9be8-99865e269ec9 up in Southbound
Jan 20 14:48:14 compute-1 nova_compute[225855]: 2026-01-20 14:48:14.272 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:48:14 compute-1 nova_compute[225855]: 2026-01-20 14:48:14.275 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:48:14 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:48:14.285 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[8ef20bc6-0f59-4f68-9da9-6e7bfe7103ee]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:48:14 compute-1 systemd-machined[194361]: New machine qemu-42-instance-00000065.
Jan 20 14:48:14 compute-1 systemd-udevd[263067]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 14:48:14 compute-1 systemd[1]: Started Virtual Machine qemu-42-instance-00000065.
Jan 20 14:48:14 compute-1 NetworkManager[49104]: <info>  [1768920494.3070] device (tapefc8b363-e7): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 20 14:48:14 compute-1 NetworkManager[49104]: <info>  [1768920494.3078] device (tapefc8b363-e7): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 20 14:48:14 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:48:14.324 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[5d78ae86-e771-49c0-8883-2e5aae495642]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:48:14 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:48:14.329 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[c01fadcf-3a79-4e42-8025-bba70cb6226b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:48:14 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:48:14.357 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[5c26abcb-8f6b-4348-b983-f1d34cf0a41d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:48:14 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:48:14.377 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[61fae8ab-7d9b-4f23-a0e8-e5d1998c0040]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa19e9d1a-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:21:53:13'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 15, 'rx_bytes': 700, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 15, 'rx_bytes': 700, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 82], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 529090, 'reachable_time': 16021, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 263079, 'error': None, 'target': 'ovnmeta-a19e9d1a-864f-41ee-bdea-188e65973ea5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:48:14 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:48:14.396 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[ba799d20-2cc9-4cc0-933b-029e44808f3c]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapa19e9d1a-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 529100, 'tstamp': 529100}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 263081, 'error': None, 'target': 'ovnmeta-a19e9d1a-864f-41ee-bdea-188e65973ea5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapa19e9d1a-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 529104, 'tstamp': 529104}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 263081, 'error': None, 'target': 'ovnmeta-a19e9d1a-864f-41ee-bdea-188e65973ea5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:48:14 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:48:14.398 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa19e9d1a-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:48:14 compute-1 nova_compute[225855]: 2026-01-20 14:48:14.399 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:48:14 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:48:14.401 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa19e9d1a-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:48:14 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:48:14.401 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:48:14 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:48:14.402 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa19e9d1a-80, col_values=(('external_ids', {'iface-id': '5527ab8d-a985-420b-9d5b-7e5d9baf7004'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:48:14 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:48:14.402 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:48:14 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/4143906486' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 14:48:14 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/4143906486' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 14:48:14 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/3187814475' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:48:14 compute-1 ceph-mon[81775]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #82. Immutable memtables: 0.
Jan 20 14:48:14 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:48:14.538481) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 20 14:48:14 compute-1 ceph-mon[81775]: rocksdb: [db/flush_job.cc:856] [default] [JOB 49] Flushing memtable with next log file: 82
Jan 20 14:48:14 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768920494538529, "job": 49, "event": "flush_started", "num_memtables": 1, "num_entries": 699, "num_deletes": 258, "total_data_size": 1110707, "memory_usage": 1129176, "flush_reason": "Manual Compaction"}
Jan 20 14:48:14 compute-1 ceph-mon[81775]: rocksdb: [db/flush_job.cc:885] [default] [JOB 49] Level-0 flush table #83: started
Jan 20 14:48:14 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768920494545839, "cf_name": "default", "job": 49, "event": "table_file_creation", "file_number": 83, "file_size": 732123, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 43107, "largest_seqno": 43800, "table_properties": {"data_size": 728727, "index_size": 1240, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1093, "raw_key_size": 8094, "raw_average_key_size": 19, "raw_value_size": 721713, "raw_average_value_size": 1702, "num_data_blocks": 55, "num_entries": 424, "num_filter_entries": 424, "num_deletions": 258, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768920455, "oldest_key_time": 1768920455, "file_creation_time": 1768920494, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1539d774-8a6f-4e48-b253-137c44586344", "db_session_id": "LFF7G2OZDOU7TKQ8MKAH", "orig_file_number": 83, "seqno_to_time_mapping": "N/A"}}
Jan 20 14:48:14 compute-1 ceph-mon[81775]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 49] Flush lasted 7432 microseconds, and 2913 cpu microseconds.
Jan 20 14:48:14 compute-1 ceph-mon[81775]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 14:48:14 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:48:14.545907) [db/flush_job.cc:967] [default] [JOB 49] Level-0 flush table #83: 732123 bytes OK
Jan 20 14:48:14 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:48:14.545930) [db/memtable_list.cc:519] [default] Level-0 commit table #83 started
Jan 20 14:48:14 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:48:14.547913) [db/memtable_list.cc:722] [default] Level-0 commit table #83: memtable #1 done
Jan 20 14:48:14 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:48:14.547929) EVENT_LOG_v1 {"time_micros": 1768920494547924, "job": 49, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 20 14:48:14 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:48:14.547951) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 20 14:48:14 compute-1 ceph-mon[81775]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 49] Try to delete WAL files size 1106864, prev total WAL file size 1106864, number of live WAL files 2.
Jan 20 14:48:14 compute-1 ceph-mon[81775]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000079.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 14:48:14 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:48:14.548510) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031323631' seq:72057594037927935, type:22 .. '6C6F676D0031353134' seq:0, type:0; will stop at (end)
Jan 20 14:48:14 compute-1 ceph-mon[81775]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 50] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 20 14:48:14 compute-1 ceph-mon[81775]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 49 Base level 0, inputs: [83(714KB)], [81(9449KB)]
Jan 20 14:48:14 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768920494548538, "job": 50, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [83], "files_L6": [81], "score": -1, "input_data_size": 10408922, "oldest_snapshot_seqno": -1}
Jan 20 14:48:14 compute-1 ceph-mon[81775]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 50] Generated table #84: 6814 keys, 10279653 bytes, temperature: kUnknown
Jan 20 14:48:14 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768920494635640, "cf_name": "default", "job": 50, "event": "table_file_creation", "file_number": 84, "file_size": 10279653, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10235100, "index_size": 26393, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17093, "raw_key_size": 175705, "raw_average_key_size": 25, "raw_value_size": 10114193, "raw_average_value_size": 1484, "num_data_blocks": 1049, "num_entries": 6814, "num_filter_entries": 6814, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768917474, "oldest_key_time": 0, "file_creation_time": 1768920494, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1539d774-8a6f-4e48-b253-137c44586344", "db_session_id": "LFF7G2OZDOU7TKQ8MKAH", "orig_file_number": 84, "seqno_to_time_mapping": "N/A"}}
Jan 20 14:48:14 compute-1 ceph-mon[81775]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 14:48:14 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:48:14.635947) [db/compaction/compaction_job.cc:1663] [default] [JOB 50] Compacted 1@0 + 1@6 files to L6 => 10279653 bytes
Jan 20 14:48:14 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:48:14.638556) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 119.3 rd, 117.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.7, 9.2 +0.0 blob) out(9.8 +0.0 blob), read-write-amplify(28.3) write-amplify(14.0) OK, records in: 7343, records dropped: 529 output_compression: NoCompression
Jan 20 14:48:14 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:48:14.638579) EVENT_LOG_v1 {"time_micros": 1768920494638569, "job": 50, "event": "compaction_finished", "compaction_time_micros": 87220, "compaction_time_cpu_micros": 29583, "output_level": 6, "num_output_files": 1, "total_output_size": 10279653, "num_input_records": 7343, "num_output_records": 6814, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 20 14:48:14 compute-1 ceph-mon[81775]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000083.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 14:48:14 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768920494639007, "job": 50, "event": "table_file_deletion", "file_number": 83}
Jan 20 14:48:14 compute-1 ceph-mon[81775]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000081.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 14:48:14 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768920494641053, "job": 50, "event": "table_file_deletion", "file_number": 81}
Jan 20 14:48:14 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:48:14.548399) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:48:14 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:48:14.641123) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:48:14 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:48:14.641127) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:48:14 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:48:14.641130) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:48:14 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:48:14.641131) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:48:14 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:48:14.641133) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:48:14 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:48:14 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:48:14 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:48:14.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:48:14 compute-1 nova_compute[225855]: 2026-01-20 14:48:14.710 225859 DEBUG nova.compute.manager [req-52bf3fba-92ea-4c19-a011-4c91a4e991d1 req-bddb78a8-ee80-4baf-b8cf-d6ceb0db8bb8 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 9beb3ec3-721e-4919-9713-a92c82ad189b] Received event network-vif-plugged-efc8b363-e70d-42f6-9be8-99865e269ec9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:48:14 compute-1 nova_compute[225855]: 2026-01-20 14:48:14.714 225859 DEBUG oslo_concurrency.lockutils [req-52bf3fba-92ea-4c19-a011-4c91a4e991d1 req-bddb78a8-ee80-4baf-b8cf-d6ceb0db8bb8 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "9beb3ec3-721e-4919-9713-a92c82ad189b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:48:14 compute-1 nova_compute[225855]: 2026-01-20 14:48:14.714 225859 DEBUG oslo_concurrency.lockutils [req-52bf3fba-92ea-4c19-a011-4c91a4e991d1 req-bddb78a8-ee80-4baf-b8cf-d6ceb0db8bb8 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "9beb3ec3-721e-4919-9713-a92c82ad189b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:48:14 compute-1 nova_compute[225855]: 2026-01-20 14:48:14.715 225859 DEBUG oslo_concurrency.lockutils [req-52bf3fba-92ea-4c19-a011-4c91a4e991d1 req-bddb78a8-ee80-4baf-b8cf-d6ceb0db8bb8 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "9beb3ec3-721e-4919-9713-a92c82ad189b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:48:14 compute-1 nova_compute[225855]: 2026-01-20 14:48:14.715 225859 DEBUG nova.compute.manager [req-52bf3fba-92ea-4c19-a011-4c91a4e991d1 req-bddb78a8-ee80-4baf-b8cf-d6ceb0db8bb8 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 9beb3ec3-721e-4919-9713-a92c82ad189b] No waiting events found dispatching network-vif-plugged-efc8b363-e70d-42f6-9be8-99865e269ec9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:48:14 compute-1 nova_compute[225855]: 2026-01-20 14:48:14.715 225859 WARNING nova.compute.manager [req-52bf3fba-92ea-4c19-a011-4c91a4e991d1 req-bddb78a8-ee80-4baf-b8cf-d6ceb0db8bb8 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 9beb3ec3-721e-4919-9713-a92c82ad189b] Received unexpected event network-vif-plugged-efc8b363-e70d-42f6-9be8-99865e269ec9 for instance with vm_state active and task_state resize_finish.
Jan 20 14:48:15 compute-1 nova_compute[225855]: 2026-01-20 14:48:15.096 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768920495.0961964, 9beb3ec3-721e-4919-9713-a92c82ad189b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:48:15 compute-1 nova_compute[225855]: 2026-01-20 14:48:15.098 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 9beb3ec3-721e-4919-9713-a92c82ad189b] VM Resumed (Lifecycle Event)
Jan 20 14:48:15 compute-1 nova_compute[225855]: 2026-01-20 14:48:15.101 225859 DEBUG nova.compute.manager [None req-4d25a047-bfcc-47e3-8e2e-07a0e595cea9 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] [instance: 9beb3ec3-721e-4919-9713-a92c82ad189b] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 20 14:48:15 compute-1 nova_compute[225855]: 2026-01-20 14:48:15.104 225859 INFO nova.virt.libvirt.driver [-] [instance: 9beb3ec3-721e-4919-9713-a92c82ad189b] Instance running successfully.
Jan 20 14:48:15 compute-1 virtqemud[225396]: argument unsupported: QEMU guest agent is not configured
Jan 20 14:48:15 compute-1 nova_compute[225855]: 2026-01-20 14:48:15.108 225859 DEBUG nova.virt.libvirt.guest [None req-4d25a047-bfcc-47e3-8e2e-07a0e595cea9 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] [instance: 9beb3ec3-721e-4919-9713-a92c82ad189b] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200
Jan 20 14:48:15 compute-1 nova_compute[225855]: 2026-01-20 14:48:15.108 225859 DEBUG nova.virt.libvirt.driver [None req-4d25a047-bfcc-47e3-8e2e-07a0e595cea9 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] [instance: 9beb3ec3-721e-4919-9713-a92c82ad189b] finish_migration finished successfully. finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11793
Jan 20 14:48:15 compute-1 nova_compute[225855]: 2026-01-20 14:48:15.128 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 9beb3ec3-721e-4919-9713-a92c82ad189b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:48:15 compute-1 nova_compute[225855]: 2026-01-20 14:48:15.132 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 9beb3ec3-721e-4919-9713-a92c82ad189b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:48:15 compute-1 nova_compute[225855]: 2026-01-20 14:48:15.161 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 9beb3ec3-721e-4919-9713-a92c82ad189b] During sync_power_state the instance has a pending task (resize_finish). Skip.
Jan 20 14:48:15 compute-1 nova_compute[225855]: 2026-01-20 14:48:15.162 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768920495.0972342, 9beb3ec3-721e-4919-9713-a92c82ad189b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:48:15 compute-1 nova_compute[225855]: 2026-01-20 14:48:15.162 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 9beb3ec3-721e-4919-9713-a92c82ad189b] VM Started (Lifecycle Event)
Jan 20 14:48:15 compute-1 ovn_controller[130490]: 2026-01-20T14:48:15Z|00046|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:41:b2:cf 10.100.0.12
Jan 20 14:48:15 compute-1 ovn_controller[130490]: 2026-01-20T14:48:15Z|00047|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:41:b2:cf 10.100.0.12
Jan 20 14:48:15 compute-1 nova_compute[225855]: 2026-01-20 14:48:15.186 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 9beb3ec3-721e-4919-9713-a92c82ad189b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:48:15 compute-1 nova_compute[225855]: 2026-01-20 14:48:15.190 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 9beb3ec3-721e-4919-9713-a92c82ad189b] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:48:15 compute-1 nova_compute[225855]: 2026-01-20 14:48:15.210 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 9beb3ec3-721e-4919-9713-a92c82ad189b] During sync_power_state the instance has a pending task (resize_finish). Skip.
Jan 20 14:48:15 compute-1 nova_compute[225855]: 2026-01-20 14:48:15.391 225859 DEBUG nova.virt.libvirt.driver [None req-57a9ea68-ec8a-451a-b907-3ce30cfe2680 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] [instance: d699cf6a-9c33-400b-8d0f-4d61b8b16916] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Jan 20 14:48:15 compute-1 nova_compute[225855]: 2026-01-20 14:48:15.463 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:48:15 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:48:15 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:48:15 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:48:15.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:48:15 compute-1 ceph-mon[81775]: pgmap v1805: 321 pgs: 321 active+clean; 486 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 2.3 MiB/s wr, 182 op/s
Jan 20 14:48:15 compute-1 nova_compute[225855]: 2026-01-20 14:48:15.584 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:48:15 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:48:15.585 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=32, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '12:bb:42', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '06:92:24:f7:15:56'}, ipsec=False) old=SB_Global(nb_cfg=31) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:48:15 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:48:15.586 140354 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 20 14:48:15 compute-1 nova_compute[225855]: 2026-01-20 14:48:15.674 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:48:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:48:16.409 140354 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:48:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:48:16.410 140354 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:48:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:48:16.411 140354 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:48:16 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:48:16 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:48:16 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:48:16.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:48:17 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e241 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:48:17 compute-1 nova_compute[225855]: 2026-01-20 14:48:17.463 225859 DEBUG nova.compute.manager [req-01de2e77-9b49-4448-b97a-9c6cba352038 req-93fc5194-f820-4253-a593-05bfd11c4a53 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 9beb3ec3-721e-4919-9713-a92c82ad189b] Received event network-vif-plugged-efc8b363-e70d-42f6-9be8-99865e269ec9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:48:17 compute-1 nova_compute[225855]: 2026-01-20 14:48:17.463 225859 DEBUG oslo_concurrency.lockutils [req-01de2e77-9b49-4448-b97a-9c6cba352038 req-93fc5194-f820-4253-a593-05bfd11c4a53 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "9beb3ec3-721e-4919-9713-a92c82ad189b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:48:17 compute-1 nova_compute[225855]: 2026-01-20 14:48:17.464 225859 DEBUG oslo_concurrency.lockutils [req-01de2e77-9b49-4448-b97a-9c6cba352038 req-93fc5194-f820-4253-a593-05bfd11c4a53 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "9beb3ec3-721e-4919-9713-a92c82ad189b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:48:17 compute-1 nova_compute[225855]: 2026-01-20 14:48:17.464 225859 DEBUG oslo_concurrency.lockutils [req-01de2e77-9b49-4448-b97a-9c6cba352038 req-93fc5194-f820-4253-a593-05bfd11c4a53 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "9beb3ec3-721e-4919-9713-a92c82ad189b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:48:17 compute-1 nova_compute[225855]: 2026-01-20 14:48:17.464 225859 DEBUG nova.compute.manager [req-01de2e77-9b49-4448-b97a-9c6cba352038 req-93fc5194-f820-4253-a593-05bfd11c4a53 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 9beb3ec3-721e-4919-9713-a92c82ad189b] No waiting events found dispatching network-vif-plugged-efc8b363-e70d-42f6-9be8-99865e269ec9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:48:17 compute-1 nova_compute[225855]: 2026-01-20 14:48:17.464 225859 WARNING nova.compute.manager [req-01de2e77-9b49-4448-b97a-9c6cba352038 req-93fc5194-f820-4253-a593-05bfd11c4a53 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 9beb3ec3-721e-4919-9713-a92c82ad189b] Received unexpected event network-vif-plugged-efc8b363-e70d-42f6-9be8-99865e269ec9 for instance with vm_state resized and task_state None.
Jan 20 14:48:17 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:48:17 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:48:17 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:48:17.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:48:17 compute-1 ceph-mon[81775]: pgmap v1806: 321 pgs: 321 active+clean; 508 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 3.9 MiB/s wr, 158 op/s
Jan 20 14:48:17 compute-1 kernel: tapf4f25f14-bc (unregistering): left promiscuous mode
Jan 20 14:48:17 compute-1 NetworkManager[49104]: <info>  [1768920497.6692] device (tapf4f25f14-bc): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 20 14:48:17 compute-1 ovn_controller[130490]: 2026-01-20T14:48:17Z|00357|binding|INFO|Releasing lport f4f25f14-bc59-4322-86b2-b48f096472a5 from this chassis (sb_readonly=0)
Jan 20 14:48:17 compute-1 ovn_controller[130490]: 2026-01-20T14:48:17Z|00358|binding|INFO|Setting lport f4f25f14-bc59-4322-86b2-b48f096472a5 down in Southbound
Jan 20 14:48:17 compute-1 ovn_controller[130490]: 2026-01-20T14:48:17Z|00359|binding|INFO|Removing iface tapf4f25f14-bc ovn-installed in OVS
Jan 20 14:48:17 compute-1 nova_compute[225855]: 2026-01-20 14:48:17.683 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:48:17 compute-1 nova_compute[225855]: 2026-01-20 14:48:17.686 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:48:17 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:48:17.693 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:41:b2:cf 10.100.0.12'], port_security=['fa:16:3e:41:b2:cf 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'd699cf6a-9c33-400b-8d0f-4d61b8b16916', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-762e1859-4db4-4d9e-b66f-d50316f80df4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1c5f03d46c0c4162a3b2f1530850bb6c', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'e045f96f-e14f-4cbd-a987-42fc8d4d3e66', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2474a8ca-bb96-4cae-9133-23419b81a9fc, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], logical_port=f4f25f14-bc59-4322-86b2-b48f096472a5) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:48:17 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:48:17.694 140354 INFO neutron.agent.ovn.metadata.agent [-] Port f4f25f14-bc59-4322-86b2-b48f096472a5 in datapath 762e1859-4db4-4d9e-b66f-d50316f80df4 unbound from our chassis
Jan 20 14:48:17 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:48:17.696 140354 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 762e1859-4db4-4d9e-b66f-d50316f80df4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 20 14:48:17 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:48:17.697 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[58416d86-89ad-4367-a830-0747ec0a4064]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:48:17 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:48:17.698 140354 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-762e1859-4db4-4d9e-b66f-d50316f80df4 namespace which is not needed anymore
Jan 20 14:48:17 compute-1 nova_compute[225855]: 2026-01-20 14:48:17.702 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:48:17 compute-1 systemd[1]: machine-qemu\x2d41\x2dinstance\x2d00000067.scope: Deactivated successfully.
Jan 20 14:48:17 compute-1 systemd[1]: machine-qemu\x2d41\x2dinstance\x2d00000067.scope: Consumed 14.332s CPU time.
Jan 20 14:48:17 compute-1 systemd-machined[194361]: Machine qemu-41-instance-00000067 terminated.
Jan 20 14:48:17 compute-1 neutron-haproxy-ovnmeta-762e1859-4db4-4d9e-b66f-d50316f80df4[262861]: [NOTICE]   (262869) : haproxy version is 2.8.14-c23fe91
Jan 20 14:48:17 compute-1 neutron-haproxy-ovnmeta-762e1859-4db4-4d9e-b66f-d50316f80df4[262861]: [NOTICE]   (262869) : path to executable is /usr/sbin/haproxy
Jan 20 14:48:17 compute-1 neutron-haproxy-ovnmeta-762e1859-4db4-4d9e-b66f-d50316f80df4[262861]: [WARNING]  (262869) : Exiting Master process...
Jan 20 14:48:17 compute-1 neutron-haproxy-ovnmeta-762e1859-4db4-4d9e-b66f-d50316f80df4[262861]: [WARNING]  (262869) : Exiting Master process...
Jan 20 14:48:17 compute-1 neutron-haproxy-ovnmeta-762e1859-4db4-4d9e-b66f-d50316f80df4[262861]: [ALERT]    (262869) : Current worker (262871) exited with code 143 (Terminated)
Jan 20 14:48:17 compute-1 neutron-haproxy-ovnmeta-762e1859-4db4-4d9e-b66f-d50316f80df4[262861]: [WARNING]  (262869) : All workers exited. Exiting... (0)
Jan 20 14:48:17 compute-1 systemd[1]: libpod-f664c1788de7c7e153bea50ec21bff6e40c26e5e54caf93495340481c02af322.scope: Deactivated successfully.
Jan 20 14:48:17 compute-1 podman[263149]: 2026-01-20 14:48:17.849121383 +0000 UTC m=+0.048565342 container died f664c1788de7c7e153bea50ec21bff6e40c26e5e54caf93495340481c02af322 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-762e1859-4db4-4d9e-b66f-d50316f80df4, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 20 14:48:17 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f664c1788de7c7e153bea50ec21bff6e40c26e5e54caf93495340481c02af322-userdata-shm.mount: Deactivated successfully.
Jan 20 14:48:17 compute-1 systemd[1]: var-lib-containers-storage-overlay-abe49cb91db3dea07b646b62647ce33597562542cda25b9855d3f6435dcf0497-merged.mount: Deactivated successfully.
Jan 20 14:48:17 compute-1 nova_compute[225855]: 2026-01-20 14:48:17.891 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:48:17 compute-1 nova_compute[225855]: 2026-01-20 14:48:17.897 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:48:17 compute-1 podman[263149]: 2026-01-20 14:48:17.89966452 +0000 UTC m=+0.099108479 container cleanup f664c1788de7c7e153bea50ec21bff6e40c26e5e54caf93495340481c02af322 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-762e1859-4db4-4d9e-b66f-d50316f80df4, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:48:17 compute-1 systemd[1]: libpod-conmon-f664c1788de7c7e153bea50ec21bff6e40c26e5e54caf93495340481c02af322.scope: Deactivated successfully.
Jan 20 14:48:17 compute-1 podman[263190]: 2026-01-20 14:48:17.973805623 +0000 UTC m=+0.053525952 container remove f664c1788de7c7e153bea50ec21bff6e40c26e5e54caf93495340481c02af322 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-762e1859-4db4-4d9e-b66f-d50316f80df4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202)
Jan 20 14:48:17 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:48:17.982 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[8888409a-042f-453f-962e-38f354a9df27]: (4, ('Tue Jan 20 02:48:17 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-762e1859-4db4-4d9e-b66f-d50316f80df4 (f664c1788de7c7e153bea50ec21bff6e40c26e5e54caf93495340481c02af322)\nf664c1788de7c7e153bea50ec21bff6e40c26e5e54caf93495340481c02af322\nTue Jan 20 02:48:17 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-762e1859-4db4-4d9e-b66f-d50316f80df4 (f664c1788de7c7e153bea50ec21bff6e40c26e5e54caf93495340481c02af322)\nf664c1788de7c7e153bea50ec21bff6e40c26e5e54caf93495340481c02af322\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:48:17 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:48:17.985 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[29be0599-1bd5-4e2e-add3-087564f27654]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:48:17 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:48:17.986 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap762e1859-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:48:17 compute-1 nova_compute[225855]: 2026-01-20 14:48:17.987 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:48:17 compute-1 kernel: tap762e1859-40: left promiscuous mode
Jan 20 14:48:18 compute-1 nova_compute[225855]: 2026-01-20 14:48:18.003 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:48:18 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:48:18.007 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[3941797a-32b6-458e-9037-45817e7152d7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:48:18 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:48:18.020 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[016dc9b4-e59e-4774-b8eb-091ca7623898]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:48:18 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:48:18.022 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[a44ba73b-01bb-45c3-8754-b69febab8ff8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:48:18 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:48:18.040 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[677a5b2f-730c-4e62-91d2-0878c91e7194]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 551801, 'reachable_time': 20664, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 263209, 'error': None, 'target': 'ovnmeta-762e1859-4db4-4d9e-b66f-d50316f80df4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:48:18 compute-1 systemd[1]: run-netns-ovnmeta\x2d762e1859\x2d4db4\x2d4d9e\x2db66f\x2dd50316f80df4.mount: Deactivated successfully.
Jan 20 14:48:18 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:48:18.043 140466 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-762e1859-4db4-4d9e-b66f-d50316f80df4 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 20 14:48:18 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:48:18.043 140466 DEBUG oslo.privsep.daemon [-] privsep: reply[541fd44b-c3ff-49b6-8858-0a5dfc505d09]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:48:18 compute-1 nova_compute[225855]: 2026-01-20 14:48:18.405 225859 INFO nova.virt.libvirt.driver [None req-57a9ea68-ec8a-451a-b907-3ce30cfe2680 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] [instance: d699cf6a-9c33-400b-8d0f-4d61b8b16916] Instance shutdown successfully after 13 seconds.
Jan 20 14:48:18 compute-1 nova_compute[225855]: 2026-01-20 14:48:18.410 225859 INFO nova.virt.libvirt.driver [-] [instance: d699cf6a-9c33-400b-8d0f-4d61b8b16916] Instance destroyed successfully.
Jan 20 14:48:18 compute-1 nova_compute[225855]: 2026-01-20 14:48:18.415 225859 INFO nova.virt.libvirt.driver [-] [instance: d699cf6a-9c33-400b-8d0f-4d61b8b16916] Instance destroyed successfully.
Jan 20 14:48:18 compute-1 nova_compute[225855]: 2026-01-20 14:48:18.416 225859 DEBUG nova.virt.libvirt.vif [None req-57a9ea68-ec8a-451a-b907-3ce30cfe2680 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T14:47:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-354445838',display_name='tempest-ServerActionsTestJSON-server-583331137',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-354445838',id=103,image_ref='26699514-f465-4b50-98b7-36f2cfc6a308',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-20T14:48:01Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={rebuild='server'},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='1c5f03d46c0c4162a3b2f1530850bb6c',ramdisk_id='',reservation_id='r-2qck85q0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='26699514-f465-4b50-98b7-36f2cfc6a308',image_container_format='bare',image_disk_format='qcow2',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1020442335',owner_user_name='tempest-ServerActionsTestJSON-1020442335-proj
ect-member'},tags=<?>,task_state='rebuilding',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T14:48:04Z,user_data=None,user_id='3e9278fdb9e645b7938f3edb20c4d3cf',uuid=d699cf6a-9c33-400b-8d0f-4d61b8b16916,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f4f25f14-bc59-4322-86b2-b48f096472a5", "address": "fa:16:3e:41:b2:cf", "network": {"id": "762e1859-4db4-4d9e-b66f-d50316f80df4", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1917526237-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c5f03d46c0c4162a3b2f1530850bb6c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf4f25f14-bc", "ovs_interfaceid": "f4f25f14-bc59-4322-86b2-b48f096472a5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 20 14:48:18 compute-1 nova_compute[225855]: 2026-01-20 14:48:18.416 225859 DEBUG nova.network.os_vif_util [None req-57a9ea68-ec8a-451a-b907-3ce30cfe2680 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Converting VIF {"id": "f4f25f14-bc59-4322-86b2-b48f096472a5", "address": "fa:16:3e:41:b2:cf", "network": {"id": "762e1859-4db4-4d9e-b66f-d50316f80df4", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1917526237-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c5f03d46c0c4162a3b2f1530850bb6c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf4f25f14-bc", "ovs_interfaceid": "f4f25f14-bc59-4322-86b2-b48f096472a5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:48:18 compute-1 nova_compute[225855]: 2026-01-20 14:48:18.417 225859 DEBUG nova.network.os_vif_util [None req-57a9ea68-ec8a-451a-b907-3ce30cfe2680 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:41:b2:cf,bridge_name='br-int',has_traffic_filtering=True,id=f4f25f14-bc59-4322-86b2-b48f096472a5,network=Network(762e1859-4db4-4d9e-b66f-d50316f80df4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf4f25f14-bc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:48:18 compute-1 nova_compute[225855]: 2026-01-20 14:48:18.417 225859 DEBUG os_vif [None req-57a9ea68-ec8a-451a-b907-3ce30cfe2680 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:41:b2:cf,bridge_name='br-int',has_traffic_filtering=True,id=f4f25f14-bc59-4322-86b2-b48f096472a5,network=Network(762e1859-4db4-4d9e-b66f-d50316f80df4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf4f25f14-bc') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 20 14:48:18 compute-1 nova_compute[225855]: 2026-01-20 14:48:18.419 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:48:18 compute-1 nova_compute[225855]: 2026-01-20 14:48:18.419 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf4f25f14-bc, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:48:18 compute-1 nova_compute[225855]: 2026-01-20 14:48:18.420 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:48:18 compute-1 nova_compute[225855]: 2026-01-20 14:48:18.421 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:48:18 compute-1 nova_compute[225855]: 2026-01-20 14:48:18.423 225859 INFO os_vif [None req-57a9ea68-ec8a-451a-b907-3ce30cfe2680 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:41:b2:cf,bridge_name='br-int',has_traffic_filtering=True,id=f4f25f14-bc59-4322-86b2-b48f096472a5,network=Network(762e1859-4db4-4d9e-b66f-d50316f80df4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf4f25f14-bc')
Jan 20 14:48:18 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:48:18 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:48:18 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:48:18.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:48:18 compute-1 nova_compute[225855]: 2026-01-20 14:48:18.792 225859 INFO nova.virt.libvirt.driver [None req-57a9ea68-ec8a-451a-b907-3ce30cfe2680 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] [instance: d699cf6a-9c33-400b-8d0f-4d61b8b16916] Deleting instance files /var/lib/nova/instances/d699cf6a-9c33-400b-8d0f-4d61b8b16916_del
Jan 20 14:48:18 compute-1 nova_compute[225855]: 2026-01-20 14:48:18.794 225859 INFO nova.virt.libvirt.driver [None req-57a9ea68-ec8a-451a-b907-3ce30cfe2680 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] [instance: d699cf6a-9c33-400b-8d0f-4d61b8b16916] Deletion of /var/lib/nova/instances/d699cf6a-9c33-400b-8d0f-4d61b8b16916_del complete
Jan 20 14:48:18 compute-1 nova_compute[225855]: 2026-01-20 14:48:18.994 225859 DEBUG nova.virt.libvirt.driver [None req-57a9ea68-ec8a-451a-b907-3ce30cfe2680 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] [instance: d699cf6a-9c33-400b-8d0f-4d61b8b16916] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 20 14:48:18 compute-1 nova_compute[225855]: 2026-01-20 14:48:18.994 225859 INFO nova.virt.libvirt.driver [None req-57a9ea68-ec8a-451a-b907-3ce30cfe2680 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] [instance: d699cf6a-9c33-400b-8d0f-4d61b8b16916] Creating image(s)
Jan 20 14:48:19 compute-1 nova_compute[225855]: 2026-01-20 14:48:19.014 225859 DEBUG nova.storage.rbd_utils [None req-57a9ea68-ec8a-451a-b907-3ce30cfe2680 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] rbd image d699cf6a-9c33-400b-8d0f-4d61b8b16916_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:48:19 compute-1 nova_compute[225855]: 2026-01-20 14:48:19.034 225859 DEBUG nova.storage.rbd_utils [None req-57a9ea68-ec8a-451a-b907-3ce30cfe2680 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] rbd image d699cf6a-9c33-400b-8d0f-4d61b8b16916_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:48:19 compute-1 nova_compute[225855]: 2026-01-20 14:48:19.057 225859 DEBUG nova.storage.rbd_utils [None req-57a9ea68-ec8a-451a-b907-3ce30cfe2680 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] rbd image d699cf6a-9c33-400b-8d0f-4d61b8b16916_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:48:19 compute-1 nova_compute[225855]: 2026-01-20 14:48:19.061 225859 DEBUG oslo_concurrency.processutils [None req-57a9ea68-ec8a-451a-b907-3ce30cfe2680 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a4ed0d2b98aa460c005e878d78a49ccb6f511f7c --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:48:19 compute-1 nova_compute[225855]: 2026-01-20 14:48:19.143 225859 DEBUG oslo_concurrency.processutils [None req-57a9ea68-ec8a-451a-b907-3ce30cfe2680 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a4ed0d2b98aa460c005e878d78a49ccb6f511f7c --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:48:19 compute-1 nova_compute[225855]: 2026-01-20 14:48:19.144 225859 DEBUG oslo_concurrency.lockutils [None req-57a9ea68-ec8a-451a-b907-3ce30cfe2680 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Acquiring lock "a4ed0d2b98aa460c005e878d78a49ccb6f511f7c" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:48:19 compute-1 nova_compute[225855]: 2026-01-20 14:48:19.145 225859 DEBUG oslo_concurrency.lockutils [None req-57a9ea68-ec8a-451a-b907-3ce30cfe2680 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Lock "a4ed0d2b98aa460c005e878d78a49ccb6f511f7c" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:48:19 compute-1 nova_compute[225855]: 2026-01-20 14:48:19.145 225859 DEBUG oslo_concurrency.lockutils [None req-57a9ea68-ec8a-451a-b907-3ce30cfe2680 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Lock "a4ed0d2b98aa460c005e878d78a49ccb6f511f7c" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:48:19 compute-1 nova_compute[225855]: 2026-01-20 14:48:19.174 225859 DEBUG nova.storage.rbd_utils [None req-57a9ea68-ec8a-451a-b907-3ce30cfe2680 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] rbd image d699cf6a-9c33-400b-8d0f-4d61b8b16916_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:48:19 compute-1 nova_compute[225855]: 2026-01-20 14:48:19.178 225859 DEBUG oslo_concurrency.processutils [None req-57a9ea68-ec8a-451a-b907-3ce30cfe2680 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a4ed0d2b98aa460c005e878d78a49ccb6f511f7c d699cf6a-9c33-400b-8d0f-4d61b8b16916_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:48:19 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:48:19 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:48:19 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:48:19.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:48:19 compute-1 ceph-mon[81775]: pgmap v1807: 321 pgs: 321 active+clean; 518 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 2.4 MiB/s wr, 121 op/s
Jan 20 14:48:19 compute-1 nova_compute[225855]: 2026-01-20 14:48:19.743 225859 DEBUG oslo_concurrency.processutils [None req-57a9ea68-ec8a-451a-b907-3ce30cfe2680 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a4ed0d2b98aa460c005e878d78a49ccb6f511f7c d699cf6a-9c33-400b-8d0f-4d61b8b16916_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.565s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:48:19 compute-1 nova_compute[225855]: 2026-01-20 14:48:19.806 225859 DEBUG nova.storage.rbd_utils [None req-57a9ea68-ec8a-451a-b907-3ce30cfe2680 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] resizing rbd image d699cf6a-9c33-400b-8d0f-4d61b8b16916_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 20 14:48:19 compute-1 nova_compute[225855]: 2026-01-20 14:48:19.924 225859 DEBUG nova.virt.libvirt.driver [None req-57a9ea68-ec8a-451a-b907-3ce30cfe2680 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] [instance: d699cf6a-9c33-400b-8d0f-4d61b8b16916] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 20 14:48:19 compute-1 nova_compute[225855]: 2026-01-20 14:48:19.925 225859 DEBUG nova.virt.libvirt.driver [None req-57a9ea68-ec8a-451a-b907-3ce30cfe2680 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] [instance: d699cf6a-9c33-400b-8d0f-4d61b8b16916] Ensure instance console log exists: /var/lib/nova/instances/d699cf6a-9c33-400b-8d0f-4d61b8b16916/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 20 14:48:19 compute-1 nova_compute[225855]: 2026-01-20 14:48:19.925 225859 DEBUG oslo_concurrency.lockutils [None req-57a9ea68-ec8a-451a-b907-3ce30cfe2680 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:48:19 compute-1 nova_compute[225855]: 2026-01-20 14:48:19.925 225859 DEBUG oslo_concurrency.lockutils [None req-57a9ea68-ec8a-451a-b907-3ce30cfe2680 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:48:19 compute-1 nova_compute[225855]: 2026-01-20 14:48:19.926 225859 DEBUG oslo_concurrency.lockutils [None req-57a9ea68-ec8a-451a-b907-3ce30cfe2680 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:48:19 compute-1 nova_compute[225855]: 2026-01-20 14:48:19.928 225859 DEBUG nova.virt.libvirt.driver [None req-57a9ea68-ec8a-451a-b907-3ce30cfe2680 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] [instance: d699cf6a-9c33-400b-8d0f-4d61b8b16916] Start _get_guest_xml network_info=[{"id": "f4f25f14-bc59-4322-86b2-b48f096472a5", "address": "fa:16:3e:41:b2:cf", "network": {"id": "762e1859-4db4-4d9e-b66f-d50316f80df4", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1917526237-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c5f03d46c0c4162a3b2f1530850bb6c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf4f25f14-bc", "ovs_interfaceid": "f4f25f14-bc59-4322-86b2-b48f096472a5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:22:02Z,direct_url=<?>,disk_format='qcow2',id=26699514-f465-4b50-98b7-36f2cfc6a308,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:04Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'device_type': 'disk', 'encryption_options': None, 'size': 0, 'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'guest_format': None, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'image_id': 'a32b3e07-16d8-46fd-9a7b-c242c432fcf9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 20 14:48:19 compute-1 nova_compute[225855]: 2026-01-20 14:48:19.934 225859 WARNING nova.virt.libvirt.driver [None req-57a9ea68-ec8a-451a-b907-3ce30cfe2680 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError
Jan 20 14:48:19 compute-1 nova_compute[225855]: 2026-01-20 14:48:19.939 225859 DEBUG nova.compute.manager [req-da530277-c926-4503-ad7b-cc756476fb6a req-44f41c41-64c2-4895-90af-3ff5915019a4 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d699cf6a-9c33-400b-8d0f-4d61b8b16916] Received event network-vif-unplugged-f4f25f14-bc59-4322-86b2-b48f096472a5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:48:19 compute-1 nova_compute[225855]: 2026-01-20 14:48:19.939 225859 DEBUG oslo_concurrency.lockutils [req-da530277-c926-4503-ad7b-cc756476fb6a req-44f41c41-64c2-4895-90af-3ff5915019a4 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "d699cf6a-9c33-400b-8d0f-4d61b8b16916-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:48:19 compute-1 nova_compute[225855]: 2026-01-20 14:48:19.939 225859 DEBUG oslo_concurrency.lockutils [req-da530277-c926-4503-ad7b-cc756476fb6a req-44f41c41-64c2-4895-90af-3ff5915019a4 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "d699cf6a-9c33-400b-8d0f-4d61b8b16916-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:48:19 compute-1 nova_compute[225855]: 2026-01-20 14:48:19.940 225859 DEBUG oslo_concurrency.lockutils [req-da530277-c926-4503-ad7b-cc756476fb6a req-44f41c41-64c2-4895-90af-3ff5915019a4 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "d699cf6a-9c33-400b-8d0f-4d61b8b16916-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:48:19 compute-1 nova_compute[225855]: 2026-01-20 14:48:19.940 225859 DEBUG nova.compute.manager [req-da530277-c926-4503-ad7b-cc756476fb6a req-44f41c41-64c2-4895-90af-3ff5915019a4 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d699cf6a-9c33-400b-8d0f-4d61b8b16916] No waiting events found dispatching network-vif-unplugged-f4f25f14-bc59-4322-86b2-b48f096472a5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:48:19 compute-1 nova_compute[225855]: 2026-01-20 14:48:19.940 225859 WARNING nova.compute.manager [req-da530277-c926-4503-ad7b-cc756476fb6a req-44f41c41-64c2-4895-90af-3ff5915019a4 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d699cf6a-9c33-400b-8d0f-4d61b8b16916] Received unexpected event network-vif-unplugged-f4f25f14-bc59-4322-86b2-b48f096472a5 for instance with vm_state active and task_state rebuild_spawning.
Jan 20 14:48:19 compute-1 nova_compute[225855]: 2026-01-20 14:48:19.945 225859 DEBUG nova.virt.libvirt.host [None req-57a9ea68-ec8a-451a-b907-3ce30cfe2680 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 20 14:48:19 compute-1 nova_compute[225855]: 2026-01-20 14:48:19.946 225859 DEBUG nova.virt.libvirt.host [None req-57a9ea68-ec8a-451a-b907-3ce30cfe2680 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 20 14:48:19 compute-1 nova_compute[225855]: 2026-01-20 14:48:19.948 225859 DEBUG nova.virt.libvirt.host [None req-57a9ea68-ec8a-451a-b907-3ce30cfe2680 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 20 14:48:19 compute-1 nova_compute[225855]: 2026-01-20 14:48:19.948 225859 DEBUG nova.virt.libvirt.host [None req-57a9ea68-ec8a-451a-b907-3ce30cfe2680 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 20 14:48:19 compute-1 nova_compute[225855]: 2026-01-20 14:48:19.950 225859 DEBUG nova.virt.libvirt.driver [None req-57a9ea68-ec8a-451a-b907-3ce30cfe2680 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 20 14:48:19 compute-1 nova_compute[225855]: 2026-01-20 14:48:19.950 225859 DEBUG nova.virt.hardware [None req-57a9ea68-ec8a-451a-b907-3ce30cfe2680 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-20T14:21:55Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='522deaab-a741-4dbb-932d-d8b13a211c33',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:22:02Z,direct_url=<?>,disk_format='qcow2',id=26699514-f465-4b50-98b7-36f2cfc6a308,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:04Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 20 14:48:19 compute-1 nova_compute[225855]: 2026-01-20 14:48:19.950 225859 DEBUG nova.virt.hardware [None req-57a9ea68-ec8a-451a-b907-3ce30cfe2680 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 20 14:48:19 compute-1 nova_compute[225855]: 2026-01-20 14:48:19.950 225859 DEBUG nova.virt.hardware [None req-57a9ea68-ec8a-451a-b907-3ce30cfe2680 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 20 14:48:19 compute-1 nova_compute[225855]: 2026-01-20 14:48:19.951 225859 DEBUG nova.virt.hardware [None req-57a9ea68-ec8a-451a-b907-3ce30cfe2680 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 20 14:48:19 compute-1 nova_compute[225855]: 2026-01-20 14:48:19.951 225859 DEBUG nova.virt.hardware [None req-57a9ea68-ec8a-451a-b907-3ce30cfe2680 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 20 14:48:19 compute-1 nova_compute[225855]: 2026-01-20 14:48:19.951 225859 DEBUG nova.virt.hardware [None req-57a9ea68-ec8a-451a-b907-3ce30cfe2680 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 20 14:48:19 compute-1 nova_compute[225855]: 2026-01-20 14:48:19.951 225859 DEBUG nova.virt.hardware [None req-57a9ea68-ec8a-451a-b907-3ce30cfe2680 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 20 14:48:19 compute-1 nova_compute[225855]: 2026-01-20 14:48:19.952 225859 DEBUG nova.virt.hardware [None req-57a9ea68-ec8a-451a-b907-3ce30cfe2680 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 20 14:48:19 compute-1 nova_compute[225855]: 2026-01-20 14:48:19.952 225859 DEBUG nova.virt.hardware [None req-57a9ea68-ec8a-451a-b907-3ce30cfe2680 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 20 14:48:19 compute-1 nova_compute[225855]: 2026-01-20 14:48:19.952 225859 DEBUG nova.virt.hardware [None req-57a9ea68-ec8a-451a-b907-3ce30cfe2680 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 20 14:48:19 compute-1 nova_compute[225855]: 2026-01-20 14:48:19.953 225859 DEBUG nova.virt.hardware [None req-57a9ea68-ec8a-451a-b907-3ce30cfe2680 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 20 14:48:19 compute-1 nova_compute[225855]: 2026-01-20 14:48:19.953 225859 DEBUG nova.objects.instance [None req-57a9ea68-ec8a-451a-b907-3ce30cfe2680 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Lazy-loading 'vcpu_model' on Instance uuid d699cf6a-9c33-400b-8d0f-4d61b8b16916 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:48:19 compute-1 nova_compute[225855]: 2026-01-20 14:48:19.981 225859 DEBUG oslo_concurrency.processutils [None req-57a9ea68-ec8a-451a-b907-3ce30cfe2680 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:48:20 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:48:20 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1269976554' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:48:20 compute-1 nova_compute[225855]: 2026-01-20 14:48:20.399 225859 DEBUG oslo_concurrency.processutils [None req-57a9ea68-ec8a-451a-b907-3ce30cfe2680 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.419s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:48:20 compute-1 nova_compute[225855]: 2026-01-20 14:48:20.423 225859 DEBUG nova.storage.rbd_utils [None req-57a9ea68-ec8a-451a-b907-3ce30cfe2680 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] rbd image d699cf6a-9c33-400b-8d0f-4d61b8b16916_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:48:20 compute-1 nova_compute[225855]: 2026-01-20 14:48:20.426 225859 DEBUG oslo_concurrency.processutils [None req-57a9ea68-ec8a-451a-b907-3ce30cfe2680 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:48:20 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/1269976554' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:48:20 compute-1 nova_compute[225855]: 2026-01-20 14:48:20.677 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:48:20 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:48:20 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:48:20 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:48:20.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:48:20 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:48:20 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3194524310' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:48:20 compute-1 nova_compute[225855]: 2026-01-20 14:48:20.925 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:48:20 compute-1 nova_compute[225855]: 2026-01-20 14:48:20.928 225859 DEBUG oslo_concurrency.processutils [None req-57a9ea68-ec8a-451a-b907-3ce30cfe2680 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.502s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:48:20 compute-1 nova_compute[225855]: 2026-01-20 14:48:20.930 225859 DEBUG nova.virt.libvirt.vif [None req-57a9ea68-ec8a-451a-b907-3ce30cfe2680 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-20T14:47:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-354445838',display_name='tempest-ServerActionsTestJSON-server-583331137',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-354445838',id=103,image_ref='26699514-f465-4b50-98b7-36f2cfc6a308',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-20T14:48:01Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={rebuild='server'},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='1c5f03d46c0c4162a3b2f1530850bb6c',ramdisk_id='',reservation_id='r-2qck85q0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='26699514-f465-4b50-98b7-36f2cfc6a308',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1020442335',owner_user_name='tempest-ServerActionsTestJSON-1020442335-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T14:48:18Z,user_data=None,user_id='3e9278fdb9e645b7938f3edb20c4d3cf',uuid=d699cf6a-9c33-400b-8d0f-4d61b8b16916,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f4f25f14-bc59-4322-86b2-b48f096472a5", "address": "fa:16:3e:41:b2:cf", "network": {"id": "762e1859-4db4-4d9e-b66f-d50316f80df4", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1917526237-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c5f03d46c0c4162a3b2f1530850bb6c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf4f25f14-bc", "ovs_interfaceid": "f4f25f14-bc59-4322-86b2-b48f096472a5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 20 14:48:20 compute-1 nova_compute[225855]: 2026-01-20 14:48:20.930 225859 DEBUG nova.network.os_vif_util [None req-57a9ea68-ec8a-451a-b907-3ce30cfe2680 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Converting VIF {"id": "f4f25f14-bc59-4322-86b2-b48f096472a5", "address": "fa:16:3e:41:b2:cf", "network": {"id": "762e1859-4db4-4d9e-b66f-d50316f80df4", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1917526237-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c5f03d46c0c4162a3b2f1530850bb6c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf4f25f14-bc", "ovs_interfaceid": "f4f25f14-bc59-4322-86b2-b48f096472a5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:48:20 compute-1 nova_compute[225855]: 2026-01-20 14:48:20.931 225859 DEBUG nova.network.os_vif_util [None req-57a9ea68-ec8a-451a-b907-3ce30cfe2680 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:41:b2:cf,bridge_name='br-int',has_traffic_filtering=True,id=f4f25f14-bc59-4322-86b2-b48f096472a5,network=Network(762e1859-4db4-4d9e-b66f-d50316f80df4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf4f25f14-bc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:48:20 compute-1 nova_compute[225855]: 2026-01-20 14:48:20.936 225859 DEBUG nova.virt.libvirt.driver [None req-57a9ea68-ec8a-451a-b907-3ce30cfe2680 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] [instance: d699cf6a-9c33-400b-8d0f-4d61b8b16916] End _get_guest_xml xml=<domain type="kvm">
Jan 20 14:48:20 compute-1 nova_compute[225855]:   <uuid>d699cf6a-9c33-400b-8d0f-4d61b8b16916</uuid>
Jan 20 14:48:20 compute-1 nova_compute[225855]:   <name>instance-00000067</name>
Jan 20 14:48:20 compute-1 nova_compute[225855]:   <memory>131072</memory>
Jan 20 14:48:20 compute-1 nova_compute[225855]:   <vcpu>1</vcpu>
Jan 20 14:48:20 compute-1 nova_compute[225855]:   <metadata>
Jan 20 14:48:20 compute-1 nova_compute[225855]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 14:48:20 compute-1 nova_compute[225855]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 14:48:20 compute-1 nova_compute[225855]:       <nova:name>tempest-ServerActionsTestJSON-server-583331137</nova:name>
Jan 20 14:48:20 compute-1 nova_compute[225855]:       <nova:creationTime>2026-01-20 14:48:19</nova:creationTime>
Jan 20 14:48:20 compute-1 nova_compute[225855]:       <nova:flavor name="m1.nano">
Jan 20 14:48:20 compute-1 nova_compute[225855]:         <nova:memory>128</nova:memory>
Jan 20 14:48:20 compute-1 nova_compute[225855]:         <nova:disk>1</nova:disk>
Jan 20 14:48:20 compute-1 nova_compute[225855]:         <nova:swap>0</nova:swap>
Jan 20 14:48:20 compute-1 nova_compute[225855]:         <nova:ephemeral>0</nova:ephemeral>
Jan 20 14:48:20 compute-1 nova_compute[225855]:         <nova:vcpus>1</nova:vcpus>
Jan 20 14:48:20 compute-1 nova_compute[225855]:       </nova:flavor>
Jan 20 14:48:20 compute-1 nova_compute[225855]:       <nova:owner>
Jan 20 14:48:20 compute-1 nova_compute[225855]:         <nova:user uuid="3e9278fdb9e645b7938f3edb20c4d3cf">tempest-ServerActionsTestJSON-1020442335-project-member</nova:user>
Jan 20 14:48:20 compute-1 nova_compute[225855]:         <nova:project uuid="1c5f03d46c0c4162a3b2f1530850bb6c">tempest-ServerActionsTestJSON-1020442335</nova:project>
Jan 20 14:48:20 compute-1 nova_compute[225855]:       </nova:owner>
Jan 20 14:48:20 compute-1 nova_compute[225855]:       <nova:root type="image" uuid="26699514-f465-4b50-98b7-36f2cfc6a308"/>
Jan 20 14:48:20 compute-1 nova_compute[225855]:       <nova:ports>
Jan 20 14:48:20 compute-1 nova_compute[225855]:         <nova:port uuid="f4f25f14-bc59-4322-86b2-b48f096472a5">
Jan 20 14:48:20 compute-1 nova_compute[225855]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Jan 20 14:48:20 compute-1 nova_compute[225855]:         </nova:port>
Jan 20 14:48:20 compute-1 nova_compute[225855]:       </nova:ports>
Jan 20 14:48:20 compute-1 nova_compute[225855]:     </nova:instance>
Jan 20 14:48:20 compute-1 nova_compute[225855]:   </metadata>
Jan 20 14:48:20 compute-1 nova_compute[225855]:   <sysinfo type="smbios">
Jan 20 14:48:20 compute-1 nova_compute[225855]:     <system>
Jan 20 14:48:20 compute-1 nova_compute[225855]:       <entry name="manufacturer">RDO</entry>
Jan 20 14:48:20 compute-1 nova_compute[225855]:       <entry name="product">OpenStack Compute</entry>
Jan 20 14:48:20 compute-1 nova_compute[225855]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 14:48:20 compute-1 nova_compute[225855]:       <entry name="serial">d699cf6a-9c33-400b-8d0f-4d61b8b16916</entry>
Jan 20 14:48:20 compute-1 nova_compute[225855]:       <entry name="uuid">d699cf6a-9c33-400b-8d0f-4d61b8b16916</entry>
Jan 20 14:48:20 compute-1 nova_compute[225855]:       <entry name="family">Virtual Machine</entry>
Jan 20 14:48:20 compute-1 nova_compute[225855]:     </system>
Jan 20 14:48:20 compute-1 nova_compute[225855]:   </sysinfo>
Jan 20 14:48:20 compute-1 nova_compute[225855]:   <os>
Jan 20 14:48:20 compute-1 nova_compute[225855]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 20 14:48:20 compute-1 nova_compute[225855]:     <boot dev="hd"/>
Jan 20 14:48:20 compute-1 nova_compute[225855]:     <smbios mode="sysinfo"/>
Jan 20 14:48:20 compute-1 nova_compute[225855]:   </os>
Jan 20 14:48:20 compute-1 nova_compute[225855]:   <features>
Jan 20 14:48:20 compute-1 nova_compute[225855]:     <acpi/>
Jan 20 14:48:20 compute-1 nova_compute[225855]:     <apic/>
Jan 20 14:48:20 compute-1 nova_compute[225855]:     <vmcoreinfo/>
Jan 20 14:48:20 compute-1 nova_compute[225855]:   </features>
Jan 20 14:48:20 compute-1 nova_compute[225855]:   <clock offset="utc">
Jan 20 14:48:20 compute-1 nova_compute[225855]:     <timer name="pit" tickpolicy="delay"/>
Jan 20 14:48:20 compute-1 nova_compute[225855]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 20 14:48:20 compute-1 nova_compute[225855]:     <timer name="hpet" present="no"/>
Jan 20 14:48:20 compute-1 nova_compute[225855]:   </clock>
Jan 20 14:48:20 compute-1 nova_compute[225855]:   <cpu mode="custom" match="exact">
Jan 20 14:48:20 compute-1 nova_compute[225855]:     <model>Nehalem</model>
Jan 20 14:48:20 compute-1 nova_compute[225855]:     <topology sockets="1" cores="1" threads="1"/>
Jan 20 14:48:20 compute-1 nova_compute[225855]:   </cpu>
Jan 20 14:48:20 compute-1 nova_compute[225855]:   <devices>
Jan 20 14:48:20 compute-1 nova_compute[225855]:     <disk type="network" device="disk">
Jan 20 14:48:20 compute-1 nova_compute[225855]:       <driver type="raw" cache="none"/>
Jan 20 14:48:20 compute-1 nova_compute[225855]:       <source protocol="rbd" name="vms/d699cf6a-9c33-400b-8d0f-4d61b8b16916_disk">
Jan 20 14:48:20 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:48:20 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:48:20 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:48:20 compute-1 nova_compute[225855]:       </source>
Jan 20 14:48:20 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 14:48:20 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:48:20 compute-1 nova_compute[225855]:       </auth>
Jan 20 14:48:20 compute-1 nova_compute[225855]:       <target dev="vda" bus="virtio"/>
Jan 20 14:48:20 compute-1 nova_compute[225855]:     </disk>
Jan 20 14:48:20 compute-1 nova_compute[225855]:     <disk type="network" device="cdrom">
Jan 20 14:48:20 compute-1 nova_compute[225855]:       <driver type="raw" cache="none"/>
Jan 20 14:48:20 compute-1 nova_compute[225855]:       <source protocol="rbd" name="vms/d699cf6a-9c33-400b-8d0f-4d61b8b16916_disk.config">
Jan 20 14:48:20 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:48:20 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:48:20 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:48:20 compute-1 nova_compute[225855]:       </source>
Jan 20 14:48:20 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 14:48:20 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:48:20 compute-1 nova_compute[225855]:       </auth>
Jan 20 14:48:20 compute-1 nova_compute[225855]:       <target dev="sda" bus="sata"/>
Jan 20 14:48:20 compute-1 nova_compute[225855]:     </disk>
Jan 20 14:48:20 compute-1 nova_compute[225855]:     <interface type="ethernet">
Jan 20 14:48:20 compute-1 nova_compute[225855]:       <mac address="fa:16:3e:41:b2:cf"/>
Jan 20 14:48:20 compute-1 nova_compute[225855]:       <model type="virtio"/>
Jan 20 14:48:20 compute-1 nova_compute[225855]:       <driver name="vhost" rx_queue_size="512"/>
Jan 20 14:48:20 compute-1 nova_compute[225855]:       <mtu size="1442"/>
Jan 20 14:48:20 compute-1 nova_compute[225855]:       <target dev="tapf4f25f14-bc"/>
Jan 20 14:48:20 compute-1 nova_compute[225855]:     </interface>
Jan 20 14:48:20 compute-1 nova_compute[225855]:     <serial type="pty">
Jan 20 14:48:20 compute-1 nova_compute[225855]:       <log file="/var/lib/nova/instances/d699cf6a-9c33-400b-8d0f-4d61b8b16916/console.log" append="off"/>
Jan 20 14:48:20 compute-1 nova_compute[225855]:     </serial>
Jan 20 14:48:20 compute-1 nova_compute[225855]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 20 14:48:20 compute-1 nova_compute[225855]:     <video>
Jan 20 14:48:20 compute-1 nova_compute[225855]:       <model type="virtio"/>
Jan 20 14:48:20 compute-1 nova_compute[225855]:     </video>
Jan 20 14:48:20 compute-1 nova_compute[225855]:     <input type="tablet" bus="usb"/>
Jan 20 14:48:20 compute-1 nova_compute[225855]:     <rng model="virtio">
Jan 20 14:48:20 compute-1 nova_compute[225855]:       <backend model="random">/dev/urandom</backend>
Jan 20 14:48:20 compute-1 nova_compute[225855]:     </rng>
Jan 20 14:48:20 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root"/>
Jan 20 14:48:20 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:48:20 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:48:20 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:48:20 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:48:20 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:48:20 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:48:20 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:48:20 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:48:20 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:48:20 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:48:20 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:48:20 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:48:20 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:48:20 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:48:20 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:48:20 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:48:20 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:48:20 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:48:20 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:48:20 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:48:20 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:48:20 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:48:20 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:48:20 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:48:20 compute-1 nova_compute[225855]:     <controller type="usb" index="0"/>
Jan 20 14:48:20 compute-1 nova_compute[225855]:     <memballoon model="virtio">
Jan 20 14:48:20 compute-1 nova_compute[225855]:       <stats period="10"/>
Jan 20 14:48:20 compute-1 nova_compute[225855]:     </memballoon>
Jan 20 14:48:20 compute-1 nova_compute[225855]:   </devices>
Jan 20 14:48:20 compute-1 nova_compute[225855]: </domain>
Jan 20 14:48:20 compute-1 nova_compute[225855]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 20 14:48:20 compute-1 nova_compute[225855]: 2026-01-20 14:48:20.938 225859 DEBUG nova.compute.manager [None req-57a9ea68-ec8a-451a-b907-3ce30cfe2680 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] [instance: d699cf6a-9c33-400b-8d0f-4d61b8b16916] Preparing to wait for external event network-vif-plugged-f4f25f14-bc59-4322-86b2-b48f096472a5 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 20 14:48:20 compute-1 nova_compute[225855]: 2026-01-20 14:48:20.938 225859 DEBUG oslo_concurrency.lockutils [None req-57a9ea68-ec8a-451a-b907-3ce30cfe2680 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Acquiring lock "d699cf6a-9c33-400b-8d0f-4d61b8b16916-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:48:20 compute-1 nova_compute[225855]: 2026-01-20 14:48:20.938 225859 DEBUG oslo_concurrency.lockutils [None req-57a9ea68-ec8a-451a-b907-3ce30cfe2680 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Lock "d699cf6a-9c33-400b-8d0f-4d61b8b16916-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:48:20 compute-1 nova_compute[225855]: 2026-01-20 14:48:20.939 225859 DEBUG oslo_concurrency.lockutils [None req-57a9ea68-ec8a-451a-b907-3ce30cfe2680 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Lock "d699cf6a-9c33-400b-8d0f-4d61b8b16916-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:48:20 compute-1 nova_compute[225855]: 2026-01-20 14:48:20.940 225859 DEBUG nova.virt.libvirt.vif [None req-57a9ea68-ec8a-451a-b907-3ce30cfe2680 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-20T14:47:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-354445838',display_name='tempest-ServerActionsTestJSON-server-583331137',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-354445838',id=103,image_ref='26699514-f465-4b50-98b7-36f2cfc6a308',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-20T14:48:01Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={rebuild='server'},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='1c5f03d46c0c4162a3b2f1530850bb6c',ramdisk_id='',reservation_id='r-2qck85q0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='26699514-f465-4b50-98b7-36f2cfc6a308',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1020442335',owner_user_name='tempest-ServerActionsTestJSON-1020442335-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T14:48:18Z,user_data=None,user_id='3e9278fdb9e645b7938f3edb20c4d3cf',uuid=d699cf6a-9c33-400b-8d0f-4d61b8b16916,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f4f25f14-bc59-4322-86b2-b48f096472a5", "address": "fa:16:3e:41:b2:cf", "network": {"id": "762e1859-4db4-4d9e-b66f-d50316f80df4", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1917526237-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c5f03d46c0c4162a3b2f1530850bb6c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf4f25f14-bc", "ovs_interfaceid": "f4f25f14-bc59-4322-86b2-b48f096472a5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 20 14:48:20 compute-1 nova_compute[225855]: 2026-01-20 14:48:20.940 225859 DEBUG nova.network.os_vif_util [None req-57a9ea68-ec8a-451a-b907-3ce30cfe2680 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Converting VIF {"id": "f4f25f14-bc59-4322-86b2-b48f096472a5", "address": "fa:16:3e:41:b2:cf", "network": {"id": "762e1859-4db4-4d9e-b66f-d50316f80df4", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1917526237-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c5f03d46c0c4162a3b2f1530850bb6c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf4f25f14-bc", "ovs_interfaceid": "f4f25f14-bc59-4322-86b2-b48f096472a5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:48:20 compute-1 nova_compute[225855]: 2026-01-20 14:48:20.941 225859 DEBUG nova.network.os_vif_util [None req-57a9ea68-ec8a-451a-b907-3ce30cfe2680 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:41:b2:cf,bridge_name='br-int',has_traffic_filtering=True,id=f4f25f14-bc59-4322-86b2-b48f096472a5,network=Network(762e1859-4db4-4d9e-b66f-d50316f80df4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf4f25f14-bc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:48:20 compute-1 nova_compute[225855]: 2026-01-20 14:48:20.941 225859 DEBUG os_vif [None req-57a9ea68-ec8a-451a-b907-3ce30cfe2680 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:41:b2:cf,bridge_name='br-int',has_traffic_filtering=True,id=f4f25f14-bc59-4322-86b2-b48f096472a5,network=Network(762e1859-4db4-4d9e-b66f-d50316f80df4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf4f25f14-bc') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 20 14:48:20 compute-1 nova_compute[225855]: 2026-01-20 14:48:20.942 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:48:20 compute-1 nova_compute[225855]: 2026-01-20 14:48:20.942 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:48:20 compute-1 nova_compute[225855]: 2026-01-20 14:48:20.943 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:48:20 compute-1 nova_compute[225855]: 2026-01-20 14:48:20.945 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:48:20 compute-1 nova_compute[225855]: 2026-01-20 14:48:20.946 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf4f25f14-bc, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:48:20 compute-1 nova_compute[225855]: 2026-01-20 14:48:20.946 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf4f25f14-bc, col_values=(('external_ids', {'iface-id': 'f4f25f14-bc59-4322-86b2-b48f096472a5', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:41:b2:cf', 'vm-uuid': 'd699cf6a-9c33-400b-8d0f-4d61b8b16916'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:48:20 compute-1 NetworkManager[49104]: <info>  [1768920500.9490] manager: (tapf4f25f14-bc): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/153)
Jan 20 14:48:20 compute-1 nova_compute[225855]: 2026-01-20 14:48:20.951 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 14:48:20 compute-1 nova_compute[225855]: 2026-01-20 14:48:20.955 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:48:20 compute-1 nova_compute[225855]: 2026-01-20 14:48:20.956 225859 INFO os_vif [None req-57a9ea68-ec8a-451a-b907-3ce30cfe2680 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:41:b2:cf,bridge_name='br-int',has_traffic_filtering=True,id=f4f25f14-bc59-4322-86b2-b48f096472a5,network=Network(762e1859-4db4-4d9e-b66f-d50316f80df4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf4f25f14-bc')
Jan 20 14:48:21 compute-1 nova_compute[225855]: 2026-01-20 14:48:21.027 225859 DEBUG nova.virt.libvirt.driver [None req-57a9ea68-ec8a-451a-b907-3ce30cfe2680 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:48:21 compute-1 nova_compute[225855]: 2026-01-20 14:48:21.027 225859 DEBUG nova.virt.libvirt.driver [None req-57a9ea68-ec8a-451a-b907-3ce30cfe2680 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:48:21 compute-1 nova_compute[225855]: 2026-01-20 14:48:21.028 225859 DEBUG nova.virt.libvirt.driver [None req-57a9ea68-ec8a-451a-b907-3ce30cfe2680 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] No VIF found with MAC fa:16:3e:41:b2:cf, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 20 14:48:21 compute-1 nova_compute[225855]: 2026-01-20 14:48:21.028 225859 INFO nova.virt.libvirt.driver [None req-57a9ea68-ec8a-451a-b907-3ce30cfe2680 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] [instance: d699cf6a-9c33-400b-8d0f-4d61b8b16916] Using config drive
Jan 20 14:48:21 compute-1 nova_compute[225855]: 2026-01-20 14:48:21.054 225859 DEBUG nova.storage.rbd_utils [None req-57a9ea68-ec8a-451a-b907-3ce30cfe2680 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] rbd image d699cf6a-9c33-400b-8d0f-4d61b8b16916_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:48:21 compute-1 nova_compute[225855]: 2026-01-20 14:48:21.085 225859 DEBUG nova.objects.instance [None req-57a9ea68-ec8a-451a-b907-3ce30cfe2680 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Lazy-loading 'ec2_ids' on Instance uuid d699cf6a-9c33-400b-8d0f-4d61b8b16916 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:48:21 compute-1 nova_compute[225855]: 2026-01-20 14:48:21.178 225859 DEBUG nova.objects.instance [None req-57a9ea68-ec8a-451a-b907-3ce30cfe2680 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Lazy-loading 'keypairs' on Instance uuid d699cf6a-9c33-400b-8d0f-4d61b8b16916 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:48:21 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:48:21 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:48:21 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:48:21.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:48:21 compute-1 ceph-mon[81775]: pgmap v1808: 321 pgs: 321 active+clean; 464 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 2.2 MiB/s wr, 150 op/s
Jan 20 14:48:21 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/3194524310' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:48:22 compute-1 nova_compute[225855]: 2026-01-20 14:48:22.097 225859 INFO nova.virt.libvirt.driver [None req-57a9ea68-ec8a-451a-b907-3ce30cfe2680 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] [instance: d699cf6a-9c33-400b-8d0f-4d61b8b16916] Creating config drive at /var/lib/nova/instances/d699cf6a-9c33-400b-8d0f-4d61b8b16916/disk.config
Jan 20 14:48:22 compute-1 nova_compute[225855]: 2026-01-20 14:48:22.101 225859 DEBUG oslo_concurrency.processutils [None req-57a9ea68-ec8a-451a-b907-3ce30cfe2680 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d699cf6a-9c33-400b-8d0f-4d61b8b16916/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpixw68mkq execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:48:22 compute-1 nova_compute[225855]: 2026-01-20 14:48:22.244 225859 DEBUG oslo_concurrency.processutils [None req-57a9ea68-ec8a-451a-b907-3ce30cfe2680 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d699cf6a-9c33-400b-8d0f-4d61b8b16916/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpixw68mkq" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:48:22 compute-1 nova_compute[225855]: 2026-01-20 14:48:22.273 225859 DEBUG nova.storage.rbd_utils [None req-57a9ea68-ec8a-451a-b907-3ce30cfe2680 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] rbd image d699cf6a-9c33-400b-8d0f-4d61b8b16916_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:48:22 compute-1 nova_compute[225855]: 2026-01-20 14:48:22.279 225859 DEBUG oslo_concurrency.processutils [None req-57a9ea68-ec8a-451a-b907-3ce30cfe2680 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d699cf6a-9c33-400b-8d0f-4d61b8b16916/disk.config d699cf6a-9c33-400b-8d0f-4d61b8b16916_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:48:22 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e241 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:48:22 compute-1 nova_compute[225855]: 2026-01-20 14:48:22.475 225859 DEBUG oslo_concurrency.processutils [None req-57a9ea68-ec8a-451a-b907-3ce30cfe2680 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d699cf6a-9c33-400b-8d0f-4d61b8b16916/disk.config d699cf6a-9c33-400b-8d0f-4d61b8b16916_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.196s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:48:22 compute-1 nova_compute[225855]: 2026-01-20 14:48:22.476 225859 INFO nova.virt.libvirt.driver [None req-57a9ea68-ec8a-451a-b907-3ce30cfe2680 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] [instance: d699cf6a-9c33-400b-8d0f-4d61b8b16916] Deleting local config drive /var/lib/nova/instances/d699cf6a-9c33-400b-8d0f-4d61b8b16916/disk.config because it was imported into RBD.
Jan 20 14:48:22 compute-1 kernel: tapf4f25f14-bc: entered promiscuous mode
Jan 20 14:48:22 compute-1 NetworkManager[49104]: <info>  [1768920502.5237] manager: (tapf4f25f14-bc): new Tun device (/org/freedesktop/NetworkManager/Devices/154)
Jan 20 14:48:22 compute-1 nova_compute[225855]: 2026-01-20 14:48:22.527 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:48:22 compute-1 ovn_controller[130490]: 2026-01-20T14:48:22Z|00360|binding|INFO|Claiming lport f4f25f14-bc59-4322-86b2-b48f096472a5 for this chassis.
Jan 20 14:48:22 compute-1 ovn_controller[130490]: 2026-01-20T14:48:22Z|00361|binding|INFO|f4f25f14-bc59-4322-86b2-b48f096472a5: Claiming fa:16:3e:41:b2:cf 10.100.0.12
Jan 20 14:48:22 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:48:22.541 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:41:b2:cf 10.100.0.12'], port_security=['fa:16:3e:41:b2:cf 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'd699cf6a-9c33-400b-8d0f-4d61b8b16916', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-762e1859-4db4-4d9e-b66f-d50316f80df4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1c5f03d46c0c4162a3b2f1530850bb6c', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'e045f96f-e14f-4cbd-a987-42fc8d4d3e66', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2474a8ca-bb96-4cae-9133-23419b81a9fc, chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], logical_port=f4f25f14-bc59-4322-86b2-b48f096472a5) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:48:22 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:48:22.542 140354 INFO neutron.agent.ovn.metadata.agent [-] Port f4f25f14-bc59-4322-86b2-b48f096472a5 in datapath 762e1859-4db4-4d9e-b66f-d50316f80df4 bound to our chassis
Jan 20 14:48:22 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:48:22.544 140354 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 762e1859-4db4-4d9e-b66f-d50316f80df4
Jan 20 14:48:22 compute-1 systemd-udevd[263532]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 14:48:22 compute-1 ovn_controller[130490]: 2026-01-20T14:48:22Z|00362|binding|INFO|Setting lport f4f25f14-bc59-4322-86b2-b48f096472a5 ovn-installed in OVS
Jan 20 14:48:22 compute-1 ovn_controller[130490]: 2026-01-20T14:48:22Z|00363|binding|INFO|Setting lport f4f25f14-bc59-4322-86b2-b48f096472a5 up in Southbound
Jan 20 14:48:22 compute-1 nova_compute[225855]: 2026-01-20 14:48:22.553 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:48:22 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:48:22.555 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[3eb09e0d-5933-43b8-91b6-5a39745f175d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:48:22 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:48:22.557 140354 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap762e1859-41 in ovnmeta-762e1859-4db4-4d9e-b66f-d50316f80df4 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 20 14:48:22 compute-1 systemd-machined[194361]: New machine qemu-43-instance-00000067.
Jan 20 14:48:22 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:48:22.563 229707 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap762e1859-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 20 14:48:22 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:48:22.563 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[6dddbacd-3131-4f72-8495-831676b71902]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:48:22 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:48:22.564 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[bf4bdb1c-fa3e-4b63-9dda-933903a0a56f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:48:22 compute-1 NetworkManager[49104]: <info>  [1768920502.5654] device (tapf4f25f14-bc): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 20 14:48:22 compute-1 NetworkManager[49104]: <info>  [1768920502.5665] device (tapf4f25f14-bc): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 20 14:48:22 compute-1 systemd[1]: Started Virtual Machine qemu-43-instance-00000067.
Jan 20 14:48:22 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:48:22.577 140466 DEBUG oslo.privsep.daemon [-] privsep: reply[3f890100-f78f-4aa9-af39-7303d10e9cd4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:48:22 compute-1 nova_compute[225855]: 2026-01-20 14:48:22.582 225859 DEBUG nova.compute.manager [req-4f314d23-7114-4566-a35c-a3935b22e8fa req-9627b464-fec3-428f-a19b-2d9a5f7c4690 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d699cf6a-9c33-400b-8d0f-4d61b8b16916] Received event network-vif-plugged-f4f25f14-bc59-4322-86b2-b48f096472a5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:48:22 compute-1 nova_compute[225855]: 2026-01-20 14:48:22.582 225859 DEBUG oslo_concurrency.lockutils [req-4f314d23-7114-4566-a35c-a3935b22e8fa req-9627b464-fec3-428f-a19b-2d9a5f7c4690 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "d699cf6a-9c33-400b-8d0f-4d61b8b16916-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:48:22 compute-1 nova_compute[225855]: 2026-01-20 14:48:22.583 225859 DEBUG oslo_concurrency.lockutils [req-4f314d23-7114-4566-a35c-a3935b22e8fa req-9627b464-fec3-428f-a19b-2d9a5f7c4690 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "d699cf6a-9c33-400b-8d0f-4d61b8b16916-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:48:22 compute-1 nova_compute[225855]: 2026-01-20 14:48:22.583 225859 DEBUG oslo_concurrency.lockutils [req-4f314d23-7114-4566-a35c-a3935b22e8fa req-9627b464-fec3-428f-a19b-2d9a5f7c4690 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "d699cf6a-9c33-400b-8d0f-4d61b8b16916-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:48:22 compute-1 nova_compute[225855]: 2026-01-20 14:48:22.583 225859 DEBUG nova.compute.manager [req-4f314d23-7114-4566-a35c-a3935b22e8fa req-9627b464-fec3-428f-a19b-2d9a5f7c4690 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d699cf6a-9c33-400b-8d0f-4d61b8b16916] Processing event network-vif-plugged-f4f25f14-bc59-4322-86b2-b48f096472a5 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 20 14:48:22 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:48:22.602 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[6692b38e-d57a-4704-adef-73e535d33538]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:48:22 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/1691197459' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:48:22 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/163141579' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:48:22 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:48:22.630 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[d5d6aa10-8a59-40f5-b178-313d94ba924c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:48:22 compute-1 systemd-udevd[263537]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 14:48:22 compute-1 NetworkManager[49104]: <info>  [1768920502.6406] manager: (tap762e1859-40): new Veth device (/org/freedesktop/NetworkManager/Devices/155)
Jan 20 14:48:22 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:48:22.637 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[467f107f-e409-4e9c-ba1f-c9802f1f7b96]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:48:22 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:48:22.672 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[daff0557-be31-47f9-98bb-96e0b1466b37]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:48:22 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:48:22.676 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[53a81891-3d4b-4390-aaa1-ae4db0c1d8c6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:48:22 compute-1 NetworkManager[49104]: <info>  [1768920502.6999] device (tap762e1859-40): carrier: link connected
Jan 20 14:48:22 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:48:22.704 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[60f72cb6-5d07-4f1c-a7b1-eef00f6b5477]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:48:22 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:48:22 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:48:22 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:48:22.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:48:22 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:48:22.725 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[9d6bdbaf-a95b-43ae-90e8-3dff8a370131]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap762e1859-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:69:f1:da'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 96], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 554162, 'reachable_time': 23605, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 263566, 'error': None, 'target': 'ovnmeta-762e1859-4db4-4d9e-b66f-d50316f80df4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:48:22 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:48:22.740 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[91a5f0b6-13d6-4910-a7bb-909b80a852c6]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe69:f1da'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 554162, 'tstamp': 554162}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 263567, 'error': None, 'target': 'ovnmeta-762e1859-4db4-4d9e-b66f-d50316f80df4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:48:22 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:48:22.760 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[9c779c1f-2b5c-4518-8fe6-1305a6b1b5b2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap762e1859-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:69:f1:da'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 96], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 554162, 'reachable_time': 23605, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 263568, 'error': None, 'target': 'ovnmeta-762e1859-4db4-4d9e-b66f-d50316f80df4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:48:22 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:48:22.786 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[c1c7cece-ef0e-4660-98a5-6ba270f7456f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:48:22 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:48:22.846 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[45dc11e0-1844-49aa-ba68-7c171b528e8e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:48:22 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:48:22.848 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap762e1859-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:48:22 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:48:22.848 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:48:22 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:48:22.850 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap762e1859-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:48:22 compute-1 kernel: tap762e1859-40: entered promiscuous mode
Jan 20 14:48:22 compute-1 nova_compute[225855]: 2026-01-20 14:48:22.851 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:48:22 compute-1 NetworkManager[49104]: <info>  [1768920502.8524] manager: (tap762e1859-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/156)
Jan 20 14:48:22 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:48:22.857 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap762e1859-40, col_values=(('external_ids', {'iface-id': '9e775c45-1646-436d-a0cb-a5b5ec356e1b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:48:22 compute-1 nova_compute[225855]: 2026-01-20 14:48:22.859 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:48:22 compute-1 ovn_controller[130490]: 2026-01-20T14:48:22Z|00364|binding|INFO|Releasing lport 9e775c45-1646-436d-a0cb-a5b5ec356e1b from this chassis (sb_readonly=0)
Jan 20 14:48:22 compute-1 nova_compute[225855]: 2026-01-20 14:48:22.860 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:48:22 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:48:22.862 140354 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/762e1859-4db4-4d9e-b66f-d50316f80df4.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/762e1859-4db4-4d9e-b66f-d50316f80df4.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 20 14:48:22 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:48:22.863 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[5e9426ba-1304-4f28-ad08-3761d5cf7497]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:48:22 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:48:22.864 140354 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 20 14:48:22 compute-1 ovn_metadata_agent[140349]: global
Jan 20 14:48:22 compute-1 ovn_metadata_agent[140349]:     log         /dev/log local0 debug
Jan 20 14:48:22 compute-1 ovn_metadata_agent[140349]:     log-tag     haproxy-metadata-proxy-762e1859-4db4-4d9e-b66f-d50316f80df4
Jan 20 14:48:22 compute-1 ovn_metadata_agent[140349]:     user        root
Jan 20 14:48:22 compute-1 ovn_metadata_agent[140349]:     group       root
Jan 20 14:48:22 compute-1 ovn_metadata_agent[140349]:     maxconn     1024
Jan 20 14:48:22 compute-1 ovn_metadata_agent[140349]:     pidfile     /var/lib/neutron/external/pids/762e1859-4db4-4d9e-b66f-d50316f80df4.pid.haproxy
Jan 20 14:48:22 compute-1 ovn_metadata_agent[140349]:     daemon
Jan 20 14:48:22 compute-1 ovn_metadata_agent[140349]: 
Jan 20 14:48:22 compute-1 ovn_metadata_agent[140349]: defaults
Jan 20 14:48:22 compute-1 ovn_metadata_agent[140349]:     log global
Jan 20 14:48:22 compute-1 ovn_metadata_agent[140349]:     mode http
Jan 20 14:48:22 compute-1 ovn_metadata_agent[140349]:     option httplog
Jan 20 14:48:22 compute-1 ovn_metadata_agent[140349]:     option dontlognull
Jan 20 14:48:22 compute-1 ovn_metadata_agent[140349]:     option http-server-close
Jan 20 14:48:22 compute-1 ovn_metadata_agent[140349]:     option forwardfor
Jan 20 14:48:22 compute-1 ovn_metadata_agent[140349]:     retries                 3
Jan 20 14:48:22 compute-1 ovn_metadata_agent[140349]:     timeout http-request    30s
Jan 20 14:48:22 compute-1 ovn_metadata_agent[140349]:     timeout connect         30s
Jan 20 14:48:22 compute-1 ovn_metadata_agent[140349]:     timeout client          32s
Jan 20 14:48:22 compute-1 ovn_metadata_agent[140349]:     timeout server          32s
Jan 20 14:48:22 compute-1 ovn_metadata_agent[140349]:     timeout http-keep-alive 30s
Jan 20 14:48:22 compute-1 ovn_metadata_agent[140349]: 
Jan 20 14:48:22 compute-1 ovn_metadata_agent[140349]: 
Jan 20 14:48:22 compute-1 ovn_metadata_agent[140349]: listen listener
Jan 20 14:48:22 compute-1 ovn_metadata_agent[140349]:     bind 169.254.169.254:80
Jan 20 14:48:22 compute-1 ovn_metadata_agent[140349]:     server metadata /var/lib/neutron/metadata_proxy
Jan 20 14:48:22 compute-1 ovn_metadata_agent[140349]:     http-request add-header X-OVN-Network-ID 762e1859-4db4-4d9e-b66f-d50316f80df4
Jan 20 14:48:22 compute-1 ovn_metadata_agent[140349]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 20 14:48:22 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:48:22.867 140354 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-762e1859-4db4-4d9e-b66f-d50316f80df4', 'env', 'PROCESS_TAG=haproxy-762e1859-4db4-4d9e-b66f-d50316f80df4', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/762e1859-4db4-4d9e-b66f-d50316f80df4.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 20 14:48:22 compute-1 nova_compute[225855]: 2026-01-20 14:48:22.880 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:48:23 compute-1 nova_compute[225855]: 2026-01-20 14:48:23.076 225859 DEBUG nova.virt.libvirt.host [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Removed pending event for d699cf6a-9c33-400b-8d0f-4d61b8b16916 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Jan 20 14:48:23 compute-1 nova_compute[225855]: 2026-01-20 14:48:23.077 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768920503.0762124, d699cf6a-9c33-400b-8d0f-4d61b8b16916 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:48:23 compute-1 nova_compute[225855]: 2026-01-20 14:48:23.078 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: d699cf6a-9c33-400b-8d0f-4d61b8b16916] VM Started (Lifecycle Event)
Jan 20 14:48:23 compute-1 nova_compute[225855]: 2026-01-20 14:48:23.081 225859 DEBUG nova.compute.manager [None req-57a9ea68-ec8a-451a-b907-3ce30cfe2680 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] [instance: d699cf6a-9c33-400b-8d0f-4d61b8b16916] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 20 14:48:23 compute-1 nova_compute[225855]: 2026-01-20 14:48:23.107 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: d699cf6a-9c33-400b-8d0f-4d61b8b16916] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:48:23 compute-1 nova_compute[225855]: 2026-01-20 14:48:23.111 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: d699cf6a-9c33-400b-8d0f-4d61b8b16916] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:48:23 compute-1 nova_compute[225855]: 2026-01-20 14:48:23.114 225859 DEBUG nova.virt.libvirt.driver [None req-57a9ea68-ec8a-451a-b907-3ce30cfe2680 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] [instance: d699cf6a-9c33-400b-8d0f-4d61b8b16916] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 20 14:48:23 compute-1 nova_compute[225855]: 2026-01-20 14:48:23.118 225859 INFO nova.virt.libvirt.driver [-] [instance: d699cf6a-9c33-400b-8d0f-4d61b8b16916] Instance spawned successfully.
Jan 20 14:48:23 compute-1 nova_compute[225855]: 2026-01-20 14:48:23.118 225859 DEBUG nova.virt.libvirt.driver [None req-57a9ea68-ec8a-451a-b907-3ce30cfe2680 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] [instance: d699cf6a-9c33-400b-8d0f-4d61b8b16916] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 20 14:48:23 compute-1 nova_compute[225855]: 2026-01-20 14:48:23.150 225859 DEBUG nova.virt.libvirt.driver [None req-57a9ea68-ec8a-451a-b907-3ce30cfe2680 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] [instance: d699cf6a-9c33-400b-8d0f-4d61b8b16916] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:48:23 compute-1 nova_compute[225855]: 2026-01-20 14:48:23.151 225859 DEBUG nova.virt.libvirt.driver [None req-57a9ea68-ec8a-451a-b907-3ce30cfe2680 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] [instance: d699cf6a-9c33-400b-8d0f-4d61b8b16916] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:48:23 compute-1 nova_compute[225855]: 2026-01-20 14:48:23.151 225859 DEBUG nova.virt.libvirt.driver [None req-57a9ea68-ec8a-451a-b907-3ce30cfe2680 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] [instance: d699cf6a-9c33-400b-8d0f-4d61b8b16916] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:48:23 compute-1 nova_compute[225855]: 2026-01-20 14:48:23.152 225859 DEBUG nova.virt.libvirt.driver [None req-57a9ea68-ec8a-451a-b907-3ce30cfe2680 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] [instance: d699cf6a-9c33-400b-8d0f-4d61b8b16916] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:48:23 compute-1 nova_compute[225855]: 2026-01-20 14:48:23.152 225859 DEBUG nova.virt.libvirt.driver [None req-57a9ea68-ec8a-451a-b907-3ce30cfe2680 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] [instance: d699cf6a-9c33-400b-8d0f-4d61b8b16916] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:48:23 compute-1 nova_compute[225855]: 2026-01-20 14:48:23.153 225859 DEBUG nova.virt.libvirt.driver [None req-57a9ea68-ec8a-451a-b907-3ce30cfe2680 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] [instance: d699cf6a-9c33-400b-8d0f-4d61b8b16916] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:48:23 compute-1 nova_compute[225855]: 2026-01-20 14:48:23.158 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: d699cf6a-9c33-400b-8d0f-4d61b8b16916] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Jan 20 14:48:23 compute-1 nova_compute[225855]: 2026-01-20 14:48:23.158 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768920503.080373, d699cf6a-9c33-400b-8d0f-4d61b8b16916 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:48:23 compute-1 nova_compute[225855]: 2026-01-20 14:48:23.160 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: d699cf6a-9c33-400b-8d0f-4d61b8b16916] VM Paused (Lifecycle Event)
Jan 20 14:48:23 compute-1 nova_compute[225855]: 2026-01-20 14:48:23.192 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: d699cf6a-9c33-400b-8d0f-4d61b8b16916] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:48:23 compute-1 nova_compute[225855]: 2026-01-20 14:48:23.196 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768920503.0866337, d699cf6a-9c33-400b-8d0f-4d61b8b16916 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:48:23 compute-1 nova_compute[225855]: 2026-01-20 14:48:23.196 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: d699cf6a-9c33-400b-8d0f-4d61b8b16916] VM Resumed (Lifecycle Event)
Jan 20 14:48:23 compute-1 nova_compute[225855]: 2026-01-20 14:48:23.227 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: d699cf6a-9c33-400b-8d0f-4d61b8b16916] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:48:23 compute-1 nova_compute[225855]: 2026-01-20 14:48:23.231 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: d699cf6a-9c33-400b-8d0f-4d61b8b16916] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:48:23 compute-1 nova_compute[225855]: 2026-01-20 14:48:23.264 225859 DEBUG nova.compute.manager [None req-57a9ea68-ec8a-451a-b907-3ce30cfe2680 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] [instance: d699cf6a-9c33-400b-8d0f-4d61b8b16916] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:48:23 compute-1 nova_compute[225855]: 2026-01-20 14:48:23.266 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: d699cf6a-9c33-400b-8d0f-4d61b8b16916] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Jan 20 14:48:23 compute-1 podman[263642]: 2026-01-20 14:48:23.276571476 +0000 UTC m=+0.043955902 container create ec3d4d4109b3c2649aa7a2cc822b599f8cdcf1d31e564f6ab678ff971e9f60b8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-762e1859-4db4-4d9e-b66f-d50316f80df4, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 20 14:48:23 compute-1 systemd[1]: Started libpod-conmon-ec3d4d4109b3c2649aa7a2cc822b599f8cdcf1d31e564f6ab678ff971e9f60b8.scope.
Jan 20 14:48:23 compute-1 nova_compute[225855]: 2026-01-20 14:48:23.330 225859 DEBUG oslo_concurrency.lockutils [None req-57a9ea68-ec8a-451a-b907-3ce30cfe2680 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:48:23 compute-1 nova_compute[225855]: 2026-01-20 14:48:23.335 225859 DEBUG oslo_concurrency.lockutils [None req-57a9ea68-ec8a-451a-b907-3ce30cfe2680 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.005s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:48:23 compute-1 nova_compute[225855]: 2026-01-20 14:48:23.336 225859 DEBUG nova.objects.instance [None req-57a9ea68-ec8a-451a-b907-3ce30cfe2680 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] [instance: d699cf6a-9c33-400b-8d0f-4d61b8b16916] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Jan 20 14:48:23 compute-1 podman[263642]: 2026-01-20 14:48:23.253071113 +0000 UTC m=+0.020455559 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 20 14:48:23 compute-1 systemd[1]: Started libcrun container.
Jan 20 14:48:23 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d891a9ccda42ab217a336e792c976565d831731c01eb0b300ac8c6c159413fa7/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 20 14:48:23 compute-1 podman[263642]: 2026-01-20 14:48:23.376364974 +0000 UTC m=+0.143749410 container init ec3d4d4109b3c2649aa7a2cc822b599f8cdcf1d31e564f6ab678ff971e9f60b8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-762e1859-4db4-4d9e-b66f-d50316f80df4, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 20 14:48:23 compute-1 podman[263642]: 2026-01-20 14:48:23.383292779 +0000 UTC m=+0.150677205 container start ec3d4d4109b3c2649aa7a2cc822b599f8cdcf1d31e564f6ab678ff971e9f60b8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-762e1859-4db4-4d9e-b66f-d50316f80df4, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 20 14:48:23 compute-1 neutron-haproxy-ovnmeta-762e1859-4db4-4d9e-b66f-d50316f80df4[263657]: [NOTICE]   (263661) : New worker (263663) forked
Jan 20 14:48:23 compute-1 neutron-haproxy-ovnmeta-762e1859-4db4-4d9e-b66f-d50316f80df4[263657]: [NOTICE]   (263661) : Loading success.
Jan 20 14:48:23 compute-1 nova_compute[225855]: 2026-01-20 14:48:23.428 225859 DEBUG oslo_concurrency.lockutils [None req-57a9ea68-ec8a-451a-b907-3ce30cfe2680 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.093s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:48:23 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:48:23 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:48:23 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:48:23.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:48:23 compute-1 ceph-mon[81775]: pgmap v1809: 321 pgs: 321 active+clean; 412 MiB data, 1005 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.6 MiB/s wr, 186 op/s
Jan 20 14:48:24 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:48:24.589 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5ffd4ac3-9266-4927-98ad-20a17782c725, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '32'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:48:24 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/624077283' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:48:24 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:48:24 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:48:24 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:48:24.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:48:24 compute-1 nova_compute[225855]: 2026-01-20 14:48:24.714 225859 DEBUG nova.compute.manager [req-21e407b4-2abf-4807-b912-78d5a33ce861 req-74399323-fab2-4c07-a5a8-57ca63e698d6 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d699cf6a-9c33-400b-8d0f-4d61b8b16916] Received event network-vif-plugged-f4f25f14-bc59-4322-86b2-b48f096472a5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:48:24 compute-1 nova_compute[225855]: 2026-01-20 14:48:24.715 225859 DEBUG oslo_concurrency.lockutils [req-21e407b4-2abf-4807-b912-78d5a33ce861 req-74399323-fab2-4c07-a5a8-57ca63e698d6 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "d699cf6a-9c33-400b-8d0f-4d61b8b16916-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:48:24 compute-1 nova_compute[225855]: 2026-01-20 14:48:24.715 225859 DEBUG oslo_concurrency.lockutils [req-21e407b4-2abf-4807-b912-78d5a33ce861 req-74399323-fab2-4c07-a5a8-57ca63e698d6 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "d699cf6a-9c33-400b-8d0f-4d61b8b16916-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:48:24 compute-1 nova_compute[225855]: 2026-01-20 14:48:24.716 225859 DEBUG oslo_concurrency.lockutils [req-21e407b4-2abf-4807-b912-78d5a33ce861 req-74399323-fab2-4c07-a5a8-57ca63e698d6 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "d699cf6a-9c33-400b-8d0f-4d61b8b16916-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:48:24 compute-1 nova_compute[225855]: 2026-01-20 14:48:24.716 225859 DEBUG nova.compute.manager [req-21e407b4-2abf-4807-b912-78d5a33ce861 req-74399323-fab2-4c07-a5a8-57ca63e698d6 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d699cf6a-9c33-400b-8d0f-4d61b8b16916] No waiting events found dispatching network-vif-plugged-f4f25f14-bc59-4322-86b2-b48f096472a5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:48:24 compute-1 nova_compute[225855]: 2026-01-20 14:48:24.716 225859 WARNING nova.compute.manager [req-21e407b4-2abf-4807-b912-78d5a33ce861 req-74399323-fab2-4c07-a5a8-57ca63e698d6 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d699cf6a-9c33-400b-8d0f-4d61b8b16916] Received unexpected event network-vif-plugged-f4f25f14-bc59-4322-86b2-b48f096472a5 for instance with vm_state active and task_state None.
Jan 20 14:48:24 compute-1 nova_compute[225855]: 2026-01-20 14:48:24.717 225859 DEBUG nova.compute.manager [req-21e407b4-2abf-4807-b912-78d5a33ce861 req-74399323-fab2-4c07-a5a8-57ca63e698d6 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d699cf6a-9c33-400b-8d0f-4d61b8b16916] Received event network-vif-plugged-f4f25f14-bc59-4322-86b2-b48f096472a5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:48:24 compute-1 nova_compute[225855]: 2026-01-20 14:48:24.717 225859 DEBUG oslo_concurrency.lockutils [req-21e407b4-2abf-4807-b912-78d5a33ce861 req-74399323-fab2-4c07-a5a8-57ca63e698d6 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "d699cf6a-9c33-400b-8d0f-4d61b8b16916-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:48:24 compute-1 nova_compute[225855]: 2026-01-20 14:48:24.717 225859 DEBUG oslo_concurrency.lockutils [req-21e407b4-2abf-4807-b912-78d5a33ce861 req-74399323-fab2-4c07-a5a8-57ca63e698d6 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "d699cf6a-9c33-400b-8d0f-4d61b8b16916-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:48:24 compute-1 nova_compute[225855]: 2026-01-20 14:48:24.718 225859 DEBUG oslo_concurrency.lockutils [req-21e407b4-2abf-4807-b912-78d5a33ce861 req-74399323-fab2-4c07-a5a8-57ca63e698d6 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "d699cf6a-9c33-400b-8d0f-4d61b8b16916-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:48:24 compute-1 nova_compute[225855]: 2026-01-20 14:48:24.718 225859 DEBUG nova.compute.manager [req-21e407b4-2abf-4807-b912-78d5a33ce861 req-74399323-fab2-4c07-a5a8-57ca63e698d6 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d699cf6a-9c33-400b-8d0f-4d61b8b16916] No waiting events found dispatching network-vif-plugged-f4f25f14-bc59-4322-86b2-b48f096472a5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:48:24 compute-1 nova_compute[225855]: 2026-01-20 14:48:24.718 225859 WARNING nova.compute.manager [req-21e407b4-2abf-4807-b912-78d5a33ce861 req-74399323-fab2-4c07-a5a8-57ca63e698d6 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d699cf6a-9c33-400b-8d0f-4d61b8b16916] Received unexpected event network-vif-plugged-f4f25f14-bc59-4322-86b2-b48f096472a5 for instance with vm_state active and task_state None.
Jan 20 14:48:25 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:48:25 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:48:25 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:48:25.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:48:25 compute-1 ceph-mon[81775]: pgmap v1810: 321 pgs: 321 active+clean; 414 MiB data, 999 MiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 3.9 MiB/s wr, 228 op/s
Jan 20 14:48:25 compute-1 nova_compute[225855]: 2026-01-20 14:48:25.679 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:48:25 compute-1 nova_compute[225855]: 2026-01-20 14:48:25.948 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:48:26 compute-1 nova_compute[225855]: 2026-01-20 14:48:26.545 225859 DEBUG oslo_concurrency.lockutils [None req-90f3966b-c5d6-464f-b832-87927f85c1a1 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Acquiring lock "d699cf6a-9c33-400b-8d0f-4d61b8b16916" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:48:26 compute-1 nova_compute[225855]: 2026-01-20 14:48:26.545 225859 DEBUG oslo_concurrency.lockutils [None req-90f3966b-c5d6-464f-b832-87927f85c1a1 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Lock "d699cf6a-9c33-400b-8d0f-4d61b8b16916" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:48:26 compute-1 nova_compute[225855]: 2026-01-20 14:48:26.546 225859 DEBUG oslo_concurrency.lockutils [None req-90f3966b-c5d6-464f-b832-87927f85c1a1 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Acquiring lock "d699cf6a-9c33-400b-8d0f-4d61b8b16916-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:48:26 compute-1 nova_compute[225855]: 2026-01-20 14:48:26.546 225859 DEBUG oslo_concurrency.lockutils [None req-90f3966b-c5d6-464f-b832-87927f85c1a1 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Lock "d699cf6a-9c33-400b-8d0f-4d61b8b16916-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:48:26 compute-1 nova_compute[225855]: 2026-01-20 14:48:26.546 225859 DEBUG oslo_concurrency.lockutils [None req-90f3966b-c5d6-464f-b832-87927f85c1a1 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Lock "d699cf6a-9c33-400b-8d0f-4d61b8b16916-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:48:26 compute-1 nova_compute[225855]: 2026-01-20 14:48:26.547 225859 INFO nova.compute.manager [None req-90f3966b-c5d6-464f-b832-87927f85c1a1 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] [instance: d699cf6a-9c33-400b-8d0f-4d61b8b16916] Terminating instance
Jan 20 14:48:26 compute-1 nova_compute[225855]: 2026-01-20 14:48:26.548 225859 DEBUG nova.compute.manager [None req-90f3966b-c5d6-464f-b832-87927f85c1a1 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] [instance: d699cf6a-9c33-400b-8d0f-4d61b8b16916] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 20 14:48:26 compute-1 kernel: tapf4f25f14-bc (unregistering): left promiscuous mode
Jan 20 14:48:26 compute-1 NetworkManager[49104]: <info>  [1768920506.5909] device (tapf4f25f14-bc): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 20 14:48:26 compute-1 nova_compute[225855]: 2026-01-20 14:48:26.598 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:48:26 compute-1 ovn_controller[130490]: 2026-01-20T14:48:26Z|00365|binding|INFO|Releasing lport f4f25f14-bc59-4322-86b2-b48f096472a5 from this chassis (sb_readonly=0)
Jan 20 14:48:26 compute-1 ovn_controller[130490]: 2026-01-20T14:48:26Z|00366|binding|INFO|Setting lport f4f25f14-bc59-4322-86b2-b48f096472a5 down in Southbound
Jan 20 14:48:26 compute-1 ovn_controller[130490]: 2026-01-20T14:48:26Z|00367|binding|INFO|Removing iface tapf4f25f14-bc ovn-installed in OVS
Jan 20 14:48:26 compute-1 nova_compute[225855]: 2026-01-20 14:48:26.601 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:48:26 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:48:26.604 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:41:b2:cf 10.100.0.12'], port_security=['fa:16:3e:41:b2:cf 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'd699cf6a-9c33-400b-8d0f-4d61b8b16916', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-762e1859-4db4-4d9e-b66f-d50316f80df4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1c5f03d46c0c4162a3b2f1530850bb6c', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'e045f96f-e14f-4cbd-a987-42fc8d4d3e66', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2474a8ca-bb96-4cae-9133-23419b81a9fc, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], logical_port=f4f25f14-bc59-4322-86b2-b48f096472a5) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:48:26 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:48:26.605 140354 INFO neutron.agent.ovn.metadata.agent [-] Port f4f25f14-bc59-4322-86b2-b48f096472a5 in datapath 762e1859-4db4-4d9e-b66f-d50316f80df4 unbound from our chassis
Jan 20 14:48:26 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:48:26.607 140354 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 762e1859-4db4-4d9e-b66f-d50316f80df4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 20 14:48:26 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:48:26.609 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[bf1e52f0-ce42-428a-90b4-5454cb311919]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:48:26 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:48:26.609 140354 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-762e1859-4db4-4d9e-b66f-d50316f80df4 namespace which is not needed anymore
Jan 20 14:48:26 compute-1 nova_compute[225855]: 2026-01-20 14:48:26.617 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:48:26 compute-1 systemd[1]: machine-qemu\x2d43\x2dinstance\x2d00000067.scope: Deactivated successfully.
Jan 20 14:48:26 compute-1 systemd[1]: machine-qemu\x2d43\x2dinstance\x2d00000067.scope: Consumed 4.055s CPU time.
Jan 20 14:48:26 compute-1 systemd-machined[194361]: Machine qemu-43-instance-00000067 terminated.
Jan 20 14:48:26 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:48:26 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:48:26 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:48:26.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:48:26 compute-1 neutron-haproxy-ovnmeta-762e1859-4db4-4d9e-b66f-d50316f80df4[263657]: [NOTICE]   (263661) : haproxy version is 2.8.14-c23fe91
Jan 20 14:48:26 compute-1 neutron-haproxy-ovnmeta-762e1859-4db4-4d9e-b66f-d50316f80df4[263657]: [NOTICE]   (263661) : path to executable is /usr/sbin/haproxy
Jan 20 14:48:26 compute-1 neutron-haproxy-ovnmeta-762e1859-4db4-4d9e-b66f-d50316f80df4[263657]: [WARNING]  (263661) : Exiting Master process...
Jan 20 14:48:26 compute-1 neutron-haproxy-ovnmeta-762e1859-4db4-4d9e-b66f-d50316f80df4[263657]: [ALERT]    (263661) : Current worker (263663) exited with code 143 (Terminated)
Jan 20 14:48:26 compute-1 neutron-haproxy-ovnmeta-762e1859-4db4-4d9e-b66f-d50316f80df4[263657]: [WARNING]  (263661) : All workers exited. Exiting... (0)
Jan 20 14:48:26 compute-1 systemd[1]: libpod-ec3d4d4109b3c2649aa7a2cc822b599f8cdcf1d31e564f6ab678ff971e9f60b8.scope: Deactivated successfully.
Jan 20 14:48:26 compute-1 podman[263697]: 2026-01-20 14:48:26.729441185 +0000 UTC m=+0.040479364 container died ec3d4d4109b3c2649aa7a2cc822b599f8cdcf1d31e564f6ab678ff971e9f60b8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-762e1859-4db4-4d9e-b66f-d50316f80df4, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 20 14:48:26 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ec3d4d4109b3c2649aa7a2cc822b599f8cdcf1d31e564f6ab678ff971e9f60b8-userdata-shm.mount: Deactivated successfully.
Jan 20 14:48:26 compute-1 systemd[1]: var-lib-containers-storage-overlay-d891a9ccda42ab217a336e792c976565d831731c01eb0b300ac8c6c159413fa7-merged.mount: Deactivated successfully.
Jan 20 14:48:26 compute-1 podman[263697]: 2026-01-20 14:48:26.769524026 +0000 UTC m=+0.080562195 container cleanup ec3d4d4109b3c2649aa7a2cc822b599f8cdcf1d31e564f6ab678ff971e9f60b8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-762e1859-4db4-4d9e-b66f-d50316f80df4, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 20 14:48:26 compute-1 nova_compute[225855]: 2026-01-20 14:48:26.780 225859 INFO nova.virt.libvirt.driver [-] [instance: d699cf6a-9c33-400b-8d0f-4d61b8b16916] Instance destroyed successfully.
Jan 20 14:48:26 compute-1 nova_compute[225855]: 2026-01-20 14:48:26.781 225859 DEBUG nova.objects.instance [None req-90f3966b-c5d6-464f-b832-87927f85c1a1 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Lazy-loading 'resources' on Instance uuid d699cf6a-9c33-400b-8d0f-4d61b8b16916 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:48:26 compute-1 systemd[1]: libpod-conmon-ec3d4d4109b3c2649aa7a2cc822b599f8cdcf1d31e564f6ab678ff971e9f60b8.scope: Deactivated successfully.
Jan 20 14:48:26 compute-1 nova_compute[225855]: 2026-01-20 14:48:26.812 225859 DEBUG nova.virt.libvirt.vif [None req-90f3966b-c5d6-464f-b832-87927f85c1a1 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-20T14:47:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-354445838',display_name='tempest-ServerActionsTestJSON-server-583331137',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-354445838',id=103,image_ref='26699514-f465-4b50-98b7-36f2cfc6a308',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-20T14:48:23Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={rebuild='server'},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='1c5f03d46c0c4162a3b2f1530850bb6c',ramdisk_id='',reservation_id='r-2qck85q0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='26699514-f465-4b50-98b7-36f2cfc6a308',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio'
,image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1020442335',owner_user_name='tempest-ServerActionsTestJSON-1020442335-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T14:48:23Z,user_data=None,user_id='3e9278fdb9e645b7938f3edb20c4d3cf',uuid=d699cf6a-9c33-400b-8d0f-4d61b8b16916,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f4f25f14-bc59-4322-86b2-b48f096472a5", "address": "fa:16:3e:41:b2:cf", "network": {"id": "762e1859-4db4-4d9e-b66f-d50316f80df4", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1917526237-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c5f03d46c0c4162a3b2f1530850bb6c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf4f25f14-bc", "ovs_interfaceid": "f4f25f14-bc59-4322-86b2-b48f096472a5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 20 14:48:26 compute-1 nova_compute[225855]: 2026-01-20 14:48:26.812 225859 DEBUG nova.network.os_vif_util [None req-90f3966b-c5d6-464f-b832-87927f85c1a1 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Converting VIF {"id": "f4f25f14-bc59-4322-86b2-b48f096472a5", "address": "fa:16:3e:41:b2:cf", "network": {"id": "762e1859-4db4-4d9e-b66f-d50316f80df4", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1917526237-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c5f03d46c0c4162a3b2f1530850bb6c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf4f25f14-bc", "ovs_interfaceid": "f4f25f14-bc59-4322-86b2-b48f096472a5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:48:26 compute-1 nova_compute[225855]: 2026-01-20 14:48:26.813 225859 DEBUG nova.network.os_vif_util [None req-90f3966b-c5d6-464f-b832-87927f85c1a1 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:41:b2:cf,bridge_name='br-int',has_traffic_filtering=True,id=f4f25f14-bc59-4322-86b2-b48f096472a5,network=Network(762e1859-4db4-4d9e-b66f-d50316f80df4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf4f25f14-bc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:48:26 compute-1 nova_compute[225855]: 2026-01-20 14:48:26.813 225859 DEBUG os_vif [None req-90f3966b-c5d6-464f-b832-87927f85c1a1 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:41:b2:cf,bridge_name='br-int',has_traffic_filtering=True,id=f4f25f14-bc59-4322-86b2-b48f096472a5,network=Network(762e1859-4db4-4d9e-b66f-d50316f80df4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf4f25f14-bc') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 20 14:48:26 compute-1 nova_compute[225855]: 2026-01-20 14:48:26.816 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:48:26 compute-1 nova_compute[225855]: 2026-01-20 14:48:26.816 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf4f25f14-bc, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:48:26 compute-1 nova_compute[225855]: 2026-01-20 14:48:26.818 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:48:26 compute-1 nova_compute[225855]: 2026-01-20 14:48:26.821 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 14:48:26 compute-1 nova_compute[225855]: 2026-01-20 14:48:26.823 225859 INFO os_vif [None req-90f3966b-c5d6-464f-b832-87927f85c1a1 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:41:b2:cf,bridge_name='br-int',has_traffic_filtering=True,id=f4f25f14-bc59-4322-86b2-b48f096472a5,network=Network(762e1859-4db4-4d9e-b66f-d50316f80df4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf4f25f14-bc')
Jan 20 14:48:26 compute-1 podman[263737]: 2026-01-20 14:48:26.847305962 +0000 UTC m=+0.052385030 container remove ec3d4d4109b3c2649aa7a2cc822b599f8cdcf1d31e564f6ab678ff971e9f60b8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-762e1859-4db4-4d9e-b66f-d50316f80df4, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 20 14:48:26 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:48:26.853 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[c998381c-5e43-4113-aeb1-2803f6ed3d49]: (4, ('Tue Jan 20 02:48:26 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-762e1859-4db4-4d9e-b66f-d50316f80df4 (ec3d4d4109b3c2649aa7a2cc822b599f8cdcf1d31e564f6ab678ff971e9f60b8)\nec3d4d4109b3c2649aa7a2cc822b599f8cdcf1d31e564f6ab678ff971e9f60b8\nTue Jan 20 02:48:26 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-762e1859-4db4-4d9e-b66f-d50316f80df4 (ec3d4d4109b3c2649aa7a2cc822b599f8cdcf1d31e564f6ab678ff971e9f60b8)\nec3d4d4109b3c2649aa7a2cc822b599f8cdcf1d31e564f6ab678ff971e9f60b8\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:48:26 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:48:26.855 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[1d83cf61-6116-4209-89e3-5e35cff9f185]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:48:26 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:48:26.857 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap762e1859-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:48:26 compute-1 nova_compute[225855]: 2026-01-20 14:48:26.859 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:48:26 compute-1 kernel: tap762e1859-40: left promiscuous mode
Jan 20 14:48:26 compute-1 nova_compute[225855]: 2026-01-20 14:48:26.875 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:48:26 compute-1 nova_compute[225855]: 2026-01-20 14:48:26.875 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:48:26 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:48:26.887 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[41f7cec2-df87-48ab-acbe-2fa62a3df980]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:48:26 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:48:26.900 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[184b8f32-d054-46d0-bfbe-7ecd126c56dc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:48:26 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:48:26.901 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[7693357f-b52c-4346-bf8b-431266a6988e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:48:26 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:48:26.920 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[7b587cec-c283-4971-91dc-3cc50fec07a8]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 554155, 'reachable_time': 23416, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 263768, 'error': None, 'target': 'ovnmeta-762e1859-4db4-4d9e-b66f-d50316f80df4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:48:26 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:48:26.923 140466 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-762e1859-4db4-4d9e-b66f-d50316f80df4 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 20 14:48:26 compute-1 systemd[1]: run-netns-ovnmeta\x2d762e1859\x2d4db4\x2d4d9e\x2db66f\x2dd50316f80df4.mount: Deactivated successfully.
Jan 20 14:48:26 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:48:26.923 140466 DEBUG oslo.privsep.daemon [-] privsep: reply[a01d017a-22bb-4d52-b89f-a95c3e9aeca1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:48:27 compute-1 nova_compute[225855]: 2026-01-20 14:48:27.104 225859 DEBUG nova.compute.manager [req-4cb968c4-f4bb-4a37-99b9-dca7fec25782 req-b4874409-7297-4269-95a6-eb91358681fd 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d699cf6a-9c33-400b-8d0f-4d61b8b16916] Received event network-vif-unplugged-f4f25f14-bc59-4322-86b2-b48f096472a5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:48:27 compute-1 nova_compute[225855]: 2026-01-20 14:48:27.104 225859 DEBUG oslo_concurrency.lockutils [req-4cb968c4-f4bb-4a37-99b9-dca7fec25782 req-b4874409-7297-4269-95a6-eb91358681fd 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "d699cf6a-9c33-400b-8d0f-4d61b8b16916-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:48:27 compute-1 nova_compute[225855]: 2026-01-20 14:48:27.104 225859 DEBUG oslo_concurrency.lockutils [req-4cb968c4-f4bb-4a37-99b9-dca7fec25782 req-b4874409-7297-4269-95a6-eb91358681fd 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "d699cf6a-9c33-400b-8d0f-4d61b8b16916-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:48:27 compute-1 nova_compute[225855]: 2026-01-20 14:48:27.105 225859 DEBUG oslo_concurrency.lockutils [req-4cb968c4-f4bb-4a37-99b9-dca7fec25782 req-b4874409-7297-4269-95a6-eb91358681fd 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "d699cf6a-9c33-400b-8d0f-4d61b8b16916-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:48:27 compute-1 nova_compute[225855]: 2026-01-20 14:48:27.105 225859 DEBUG nova.compute.manager [req-4cb968c4-f4bb-4a37-99b9-dca7fec25782 req-b4874409-7297-4269-95a6-eb91358681fd 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d699cf6a-9c33-400b-8d0f-4d61b8b16916] No waiting events found dispatching network-vif-unplugged-f4f25f14-bc59-4322-86b2-b48f096472a5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:48:27 compute-1 nova_compute[225855]: 2026-01-20 14:48:27.105 225859 DEBUG nova.compute.manager [req-4cb968c4-f4bb-4a37-99b9-dca7fec25782 req-b4874409-7297-4269-95a6-eb91358681fd 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d699cf6a-9c33-400b-8d0f-4d61b8b16916] Received event network-vif-unplugged-f4f25f14-bc59-4322-86b2-b48f096472a5 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 20 14:48:27 compute-1 nova_compute[225855]: 2026-01-20 14:48:27.175 225859 INFO nova.virt.libvirt.driver [None req-90f3966b-c5d6-464f-b832-87927f85c1a1 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] [instance: d699cf6a-9c33-400b-8d0f-4d61b8b16916] Deleting instance files /var/lib/nova/instances/d699cf6a-9c33-400b-8d0f-4d61b8b16916_del
Jan 20 14:48:27 compute-1 nova_compute[225855]: 2026-01-20 14:48:27.175 225859 INFO nova.virt.libvirt.driver [None req-90f3966b-c5d6-464f-b832-87927f85c1a1 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] [instance: d699cf6a-9c33-400b-8d0f-4d61b8b16916] Deletion of /var/lib/nova/instances/d699cf6a-9c33-400b-8d0f-4d61b8b16916_del complete
Jan 20 14:48:27 compute-1 nova_compute[225855]: 2026-01-20 14:48:27.286 225859 INFO nova.compute.manager [None req-447e4e19-2a81-4c40-b517-6a53dcbf479f 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] [instance: 9beb3ec3-721e-4919-9713-a92c82ad189b] Get console output
Jan 20 14:48:27 compute-1 nova_compute[225855]: 2026-01-20 14:48:27.287 225859 INFO nova.compute.manager [None req-90f3966b-c5d6-464f-b832-87927f85c1a1 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] [instance: d699cf6a-9c33-400b-8d0f-4d61b8b16916] Took 0.74 seconds to destroy the instance on the hypervisor.
Jan 20 14:48:27 compute-1 nova_compute[225855]: 2026-01-20 14:48:27.287 225859 DEBUG oslo.service.loopingcall [None req-90f3966b-c5d6-464f-b832-87927f85c1a1 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 20 14:48:27 compute-1 nova_compute[225855]: 2026-01-20 14:48:27.287 225859 DEBUG nova.compute.manager [-] [instance: d699cf6a-9c33-400b-8d0f-4d61b8b16916] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 20 14:48:27 compute-1 nova_compute[225855]: 2026-01-20 14:48:27.288 225859 DEBUG nova.network.neutron [-] [instance: d699cf6a-9c33-400b-8d0f-4d61b8b16916] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 20 14:48:27 compute-1 nova_compute[225855]: 2026-01-20 14:48:27.297 225859 INFO oslo.privsep.daemon [None req-447e4e19-2a81-4c40-b517-6a53dcbf479f 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'nova.privsep.sys_admin_pctxt', '--privsep_sock_path', '/tmp/tmpilubip9q/privsep.sock']
Jan 20 14:48:27 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e241 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:48:27 compute-1 ovn_controller[130490]: 2026-01-20T14:48:27Z|00048|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:36:66:1d 10.100.0.8
Jan 20 14:48:27 compute-1 ovn_controller[130490]: 2026-01-20T14:48:27Z|00049|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:36:66:1d 10.100.0.8
Jan 20 14:48:27 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:48:27 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:48:27 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:48:27.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:48:27 compute-1 ceph-mon[81775]: pgmap v1811: 321 pgs: 321 active+clean; 453 MiB data, 1014 MiB used, 20 GiB / 21 GiB avail; 4.8 MiB/s rd, 5.7 MiB/s wr, 303 op/s
Jan 20 14:48:28 compute-1 nova_compute[225855]: 2026-01-20 14:48:28.046 225859 INFO oslo.privsep.daemon [None req-447e4e19-2a81-4c40-b517-6a53dcbf479f 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Spawned new privsep daemon via rootwrap
Jan 20 14:48:28 compute-1 nova_compute[225855]: 2026-01-20 14:48:27.927 263775 INFO oslo.privsep.daemon [-] privsep daemon starting
Jan 20 14:48:28 compute-1 nova_compute[225855]: 2026-01-20 14:48:27.931 263775 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Jan 20 14:48:28 compute-1 nova_compute[225855]: 2026-01-20 14:48:27.933 263775 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Jan 20 14:48:28 compute-1 nova_compute[225855]: 2026-01-20 14:48:27.933 263775 INFO oslo.privsep.daemon [-] privsep daemon running as pid 263775
Jan 20 14:48:28 compute-1 nova_compute[225855]: 2026-01-20 14:48:28.137 263775 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Jan 20 14:48:28 compute-1 nova_compute[225855]: 2026-01-20 14:48:28.201 225859 DEBUG nova.network.neutron [-] [instance: d699cf6a-9c33-400b-8d0f-4d61b8b16916] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:48:28 compute-1 nova_compute[225855]: 2026-01-20 14:48:28.229 225859 INFO nova.compute.manager [-] [instance: d699cf6a-9c33-400b-8d0f-4d61b8b16916] Took 0.94 seconds to deallocate network for instance.
Jan 20 14:48:28 compute-1 nova_compute[225855]: 2026-01-20 14:48:28.303 225859 DEBUG oslo_concurrency.lockutils [None req-90f3966b-c5d6-464f-b832-87927f85c1a1 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:48:28 compute-1 nova_compute[225855]: 2026-01-20 14:48:28.303 225859 DEBUG oslo_concurrency.lockutils [None req-90f3966b-c5d6-464f-b832-87927f85c1a1 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:48:28 compute-1 nova_compute[225855]: 2026-01-20 14:48:28.313 225859 DEBUG nova.compute.manager [req-cf920533-f375-47fd-8e0a-7ef907abd58e req-3ec179e0-535d-4d00-a87f-8de22b99899f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d699cf6a-9c33-400b-8d0f-4d61b8b16916] Received event network-vif-deleted-f4f25f14-bc59-4322-86b2-b48f096472a5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:48:28 compute-1 nova_compute[225855]: 2026-01-20 14:48:28.406 225859 DEBUG oslo_concurrency.processutils [None req-90f3966b-c5d6-464f-b832-87927f85c1a1 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:48:28 compute-1 nova_compute[225855]: 2026-01-20 14:48:28.695 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:48:28 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:48:28 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:48:28 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:48:28.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:48:28 compute-1 sudo[263797]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:48:28 compute-1 sudo[263797]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:48:28 compute-1 sudo[263797]: pam_unix(sudo:session): session closed for user root
Jan 20 14:48:28 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:48:28 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/679145408' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:48:28 compute-1 nova_compute[225855]: 2026-01-20 14:48:28.836 225859 DEBUG oslo_concurrency.processutils [None req-90f3966b-c5d6-464f-b832-87927f85c1a1 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:48:28 compute-1 nova_compute[225855]: 2026-01-20 14:48:28.845 225859 DEBUG nova.compute.provider_tree [None req-90f3966b-c5d6-464f-b832-87927f85c1a1 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:48:28 compute-1 sudo[263824]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:48:28 compute-1 sudo[263824]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:48:28 compute-1 sudo[263824]: pam_unix(sudo:session): session closed for user root
Jan 20 14:48:28 compute-1 nova_compute[225855]: 2026-01-20 14:48:28.875 225859 DEBUG nova.scheduler.client.report [None req-90f3966b-c5d6-464f-b832-87927f85c1a1 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:48:28 compute-1 sudo[263832]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:48:28 compute-1 sudo[263832]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:48:28 compute-1 sudo[263832]: pam_unix(sudo:session): session closed for user root
Jan 20 14:48:28 compute-1 nova_compute[225855]: 2026-01-20 14:48:28.898 225859 DEBUG oslo_concurrency.lockutils [None req-90f3966b-c5d6-464f-b832-87927f85c1a1 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.594s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:48:28 compute-1 sudo[263873]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:48:28 compute-1 sudo[263873]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:48:28 compute-1 sudo[263873]: pam_unix(sudo:session): session closed for user root
Jan 20 14:48:28 compute-1 sudo[263887]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:48:28 compute-1 sudo[263887]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:48:28 compute-1 sudo[263887]: pam_unix(sudo:session): session closed for user root
Jan 20 14:48:28 compute-1 nova_compute[225855]: 2026-01-20 14:48:28.949 225859 INFO nova.scheduler.client.report [None req-90f3966b-c5d6-464f-b832-87927f85c1a1 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Deleted allocations for instance d699cf6a-9c33-400b-8d0f-4d61b8b16916
Jan 20 14:48:28 compute-1 sudo[263923]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 20 14:48:28 compute-1 sudo[263923]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:48:29 compute-1 nova_compute[225855]: 2026-01-20 14:48:29.043 225859 DEBUG oslo_concurrency.lockutils [None req-90f3966b-c5d6-464f-b832-87927f85c1a1 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Lock "d699cf6a-9c33-400b-8d0f-4d61b8b16916" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.497s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:48:29 compute-1 nova_compute[225855]: 2026-01-20 14:48:29.310 225859 DEBUG nova.compute.manager [req-685fdab1-6547-45ee-bcf6-d20d8c05769c req-d72213b2-47a5-41c8-b984-4440d79c2ea1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d699cf6a-9c33-400b-8d0f-4d61b8b16916] Received event network-vif-plugged-f4f25f14-bc59-4322-86b2-b48f096472a5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:48:29 compute-1 nova_compute[225855]: 2026-01-20 14:48:29.311 225859 DEBUG oslo_concurrency.lockutils [req-685fdab1-6547-45ee-bcf6-d20d8c05769c req-d72213b2-47a5-41c8-b984-4440d79c2ea1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "d699cf6a-9c33-400b-8d0f-4d61b8b16916-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:48:29 compute-1 nova_compute[225855]: 2026-01-20 14:48:29.311 225859 DEBUG oslo_concurrency.lockutils [req-685fdab1-6547-45ee-bcf6-d20d8c05769c req-d72213b2-47a5-41c8-b984-4440d79c2ea1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "d699cf6a-9c33-400b-8d0f-4d61b8b16916-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:48:29 compute-1 nova_compute[225855]: 2026-01-20 14:48:29.311 225859 DEBUG oslo_concurrency.lockutils [req-685fdab1-6547-45ee-bcf6-d20d8c05769c req-d72213b2-47a5-41c8-b984-4440d79c2ea1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "d699cf6a-9c33-400b-8d0f-4d61b8b16916-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:48:29 compute-1 nova_compute[225855]: 2026-01-20 14:48:29.311 225859 DEBUG nova.compute.manager [req-685fdab1-6547-45ee-bcf6-d20d8c05769c req-d72213b2-47a5-41c8-b984-4440d79c2ea1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d699cf6a-9c33-400b-8d0f-4d61b8b16916] No waiting events found dispatching network-vif-plugged-f4f25f14-bc59-4322-86b2-b48f096472a5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:48:29 compute-1 nova_compute[225855]: 2026-01-20 14:48:29.311 225859 WARNING nova.compute.manager [req-685fdab1-6547-45ee-bcf6-d20d8c05769c req-d72213b2-47a5-41c8-b984-4440d79c2ea1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d699cf6a-9c33-400b-8d0f-4d61b8b16916] Received unexpected event network-vif-plugged-f4f25f14-bc59-4322-86b2-b48f096472a5 for instance with vm_state deleted and task_state None.
Jan 20 14:48:29 compute-1 sudo[263923]: pam_unix(sudo:session): session closed for user root
Jan 20 14:48:29 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:48:29 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:48:29 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:48:29.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:48:29 compute-1 ceph-mon[81775]: pgmap v1812: 321 pgs: 321 active+clean; 437 MiB data, 1016 MiB used, 20 GiB / 21 GiB avail; 5.4 MiB/s rd, 4.1 MiB/s wr, 297 op/s
Jan 20 14:48:29 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/679145408' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:48:29 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:48:29 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:48:30 compute-1 nova_compute[225855]: 2026-01-20 14:48:30.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:48:30 compute-1 nova_compute[225855]: 2026-01-20 14:48:30.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:48:30 compute-1 nova_compute[225855]: 2026-01-20 14:48:30.682 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:48:30 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:48:30 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:48:30 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:48:30.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:48:30 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:48:30 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 14:48:30 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:48:30 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 14:48:30 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 14:48:30 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:48:31 compute-1 nova_compute[225855]: 2026-01-20 14:48:31.354 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:48:31 compute-1 nova_compute[225855]: 2026-01-20 14:48:31.354 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 14:48:31 compute-1 nova_compute[225855]: 2026-01-20 14:48:31.355 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 20 14:48:31 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:48:31 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:48:31 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:48:31.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:48:31 compute-1 nova_compute[225855]: 2026-01-20 14:48:31.659 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "refresh_cache-6586bc3e-3a94-4d22-8e8c-713a86a956fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:48:31 compute-1 nova_compute[225855]: 2026-01-20 14:48:31.659 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquired lock "refresh_cache-6586bc3e-3a94-4d22-8e8c-713a86a956fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:48:31 compute-1 nova_compute[225855]: 2026-01-20 14:48:31.660 225859 DEBUG nova.network.neutron [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] [instance: 6586bc3e-3a94-4d22-8e8c-713a86a956fb] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 20 14:48:31 compute-1 nova_compute[225855]: 2026-01-20 14:48:31.660 225859 DEBUG nova.objects.instance [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 6586bc3e-3a94-4d22-8e8c-713a86a956fb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:48:31 compute-1 ceph-mon[81775]: pgmap v1813: 321 pgs: 321 active+clean; 422 MiB data, 1006 MiB used, 20 GiB / 21 GiB avail; 5.4 MiB/s rd, 3.6 MiB/s wr, 290 op/s
Jan 20 14:48:31 compute-1 nova_compute[225855]: 2026-01-20 14:48:31.818 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:48:32 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e241 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:48:32 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:48:32 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:48:32 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:48:32.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:48:32 compute-1 ceph-mon[81775]: pgmap v1814: 321 pgs: 321 active+clean; 388 MiB data, 1000 MiB used, 20 GiB / 21 GiB avail; 5.9 MiB/s rd, 3.6 MiB/s wr, 332 op/s
Jan 20 14:48:32 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/971129878' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:48:33 compute-1 podman[263982]: 2026-01-20 14:48:33.036057608 +0000 UTC m=+0.076021437 container health_status 72be95c12e041eb2cd1a16ad70dfa1391d9f03a5f3180a614ead9459dcfcd46f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 20 14:48:33 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:48:33 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:48:33 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:48:33.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:48:33 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/4144053837' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:48:34 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:48:34 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:48:34 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:48:34.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:48:34 compute-1 nova_compute[225855]: 2026-01-20 14:48:34.827 225859 DEBUG nova.network.neutron [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] [instance: 6586bc3e-3a94-4d22-8e8c-713a86a956fb] Updating instance_info_cache with network_info: [{"id": "2c289e6f-295e-44c3-948a-9a6901251890", "address": "fa:16:3e:2f:4c:e2", "network": {"id": "a19e9d1a-864f-41ee-bdea-188e65973ea5", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-916311998-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b683fcc0026242e28ba6d8fba638688e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c289e6f-29", "ovs_interfaceid": "2c289e6f-295e-44c3-948a-9a6901251890", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:48:34 compute-1 nova_compute[225855]: 2026-01-20 14:48:34.854 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Releasing lock "refresh_cache-6586bc3e-3a94-4d22-8e8c-713a86a956fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:48:34 compute-1 nova_compute[225855]: 2026-01-20 14:48:34.855 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] [instance: 6586bc3e-3a94-4d22-8e8c-713a86a956fb] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 20 14:48:34 compute-1 nova_compute[225855]: 2026-01-20 14:48:34.855 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:48:34 compute-1 nova_compute[225855]: 2026-01-20 14:48:34.856 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:48:34 compute-1 nova_compute[225855]: 2026-01-20 14:48:34.856 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 14:48:34 compute-1 ceph-mon[81775]: pgmap v1815: 321 pgs: 321 active+clean; 366 MiB data, 985 MiB used, 20 GiB / 21 GiB avail; 6.1 MiB/s rd, 3.1 MiB/s wr, 319 op/s
Jan 20 14:48:35 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:48:35 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:48:35 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:48:35.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:48:35 compute-1 nova_compute[225855]: 2026-01-20 14:48:35.684 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:48:35 compute-1 nova_compute[225855]: 2026-01-20 14:48:35.881 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:48:36 compute-1 nova_compute[225855]: 2026-01-20 14:48:36.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:48:36 compute-1 nova_compute[225855]: 2026-01-20 14:48:36.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:48:36 compute-1 nova_compute[225855]: 2026-01-20 14:48:36.340 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 20 14:48:36 compute-1 nova_compute[225855]: 2026-01-20 14:48:36.372 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 20 14:48:36 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:48:36 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:48:36 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:48:36.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:48:36 compute-1 sudo[264011]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:48:36 compute-1 sudo[264011]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:48:36 compute-1 sudo[264011]: pam_unix(sudo:session): session closed for user root
Jan 20 14:48:36 compute-1 nova_compute[225855]: 2026-01-20 14:48:36.820 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:48:36 compute-1 nova_compute[225855]: 2026-01-20 14:48:36.852 225859 DEBUG nova.compute.manager [None req-4796ccce-2601-44d6-9482-2234d4c7191f 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] [instance: 75736b87-b14e-45b7-b43b-5129cf7d3279] Stashing vm_state: active _prep_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:5560
Jan 20 14:48:36 compute-1 sudo[264036]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 14:48:36 compute-1 sudo[264036]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:48:36 compute-1 sudo[264036]: pam_unix(sudo:session): session closed for user root
Jan 20 14:48:36 compute-1 nova_compute[225855]: 2026-01-20 14:48:36.925 225859 DEBUG oslo_concurrency.lockutils [None req-ef76f3be-b647-42f5-a3c1-090076f98780 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Acquiring lock "9beb3ec3-721e-4919-9713-a92c82ad189b" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:48:36 compute-1 nova_compute[225855]: 2026-01-20 14:48:36.926 225859 DEBUG oslo_concurrency.lockutils [None req-ef76f3be-b647-42f5-a3c1-090076f98780 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Lock "9beb3ec3-721e-4919-9713-a92c82ad189b" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:48:36 compute-1 nova_compute[225855]: 2026-01-20 14:48:36.926 225859 DEBUG oslo_concurrency.lockutils [None req-ef76f3be-b647-42f5-a3c1-090076f98780 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Acquiring lock "9beb3ec3-721e-4919-9713-a92c82ad189b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:48:36 compute-1 nova_compute[225855]: 2026-01-20 14:48:36.926 225859 DEBUG oslo_concurrency.lockutils [None req-ef76f3be-b647-42f5-a3c1-090076f98780 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Lock "9beb3ec3-721e-4919-9713-a92c82ad189b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:48:36 compute-1 nova_compute[225855]: 2026-01-20 14:48:36.927 225859 DEBUG oslo_concurrency.lockutils [None req-ef76f3be-b647-42f5-a3c1-090076f98780 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Lock "9beb3ec3-721e-4919-9713-a92c82ad189b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:48:36 compute-1 nova_compute[225855]: 2026-01-20 14:48:36.927 225859 INFO nova.compute.manager [None req-ef76f3be-b647-42f5-a3c1-090076f98780 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] [instance: 9beb3ec3-721e-4919-9713-a92c82ad189b] Terminating instance
Jan 20 14:48:36 compute-1 nova_compute[225855]: 2026-01-20 14:48:36.928 225859 DEBUG nova.compute.manager [None req-ef76f3be-b647-42f5-a3c1-090076f98780 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] [instance: 9beb3ec3-721e-4919-9713-a92c82ad189b] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 20 14:48:37 compute-1 kernel: tapefc8b363-e7 (unregistering): left promiscuous mode
Jan 20 14:48:37 compute-1 NetworkManager[49104]: <info>  [1768920517.0215] device (tapefc8b363-e7): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 20 14:48:37 compute-1 ovn_controller[130490]: 2026-01-20T14:48:37Z|00368|binding|INFO|Releasing lport efc8b363-e70d-42f6-9be8-99865e269ec9 from this chassis (sb_readonly=0)
Jan 20 14:48:37 compute-1 ovn_controller[130490]: 2026-01-20T14:48:37Z|00369|binding|INFO|Setting lport efc8b363-e70d-42f6-9be8-99865e269ec9 down in Southbound
Jan 20 14:48:37 compute-1 nova_compute[225855]: 2026-01-20 14:48:37.032 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:48:37 compute-1 ovn_controller[130490]: 2026-01-20T14:48:37Z|00370|binding|INFO|Removing iface tapefc8b363-e7 ovn-installed in OVS
Jan 20 14:48:37 compute-1 nova_compute[225855]: 2026-01-20 14:48:37.035 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:48:37 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:48:37.044 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:36:66:1d 10.100.0.8'], port_security=['fa:16:3e:36:66:1d 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '9beb3ec3-721e-4919-9713-a92c82ad189b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a19e9d1a-864f-41ee-bdea-188e65973ea5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b683fcc0026242e28ba6d8fba638688e', 'neutron:revision_number': '8', 'neutron:security_group_ids': '7ceb05b5-53ff-444a-b0ef-2ba8294d585b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.213', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=361f9a69-30a6-4be4-89ad-2a8f92877af2, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], logical_port=efc8b363-e70d-42f6-9be8-99865e269ec9) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:48:37 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:48:37.046 140354 INFO neutron.agent.ovn.metadata.agent [-] Port efc8b363-e70d-42f6-9be8-99865e269ec9 in datapath a19e9d1a-864f-41ee-bdea-188e65973ea5 unbound from our chassis
Jan 20 14:48:37 compute-1 nova_compute[225855]: 2026-01-20 14:48:37.048 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:48:37 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:48:37.049 140354 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a19e9d1a-864f-41ee-bdea-188e65973ea5
Jan 20 14:48:37 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:48:37.066 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[08886395-767d-44cb-949f-c0da714aba05]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:48:37 compute-1 systemd[1]: machine-qemu\x2d42\x2dinstance\x2d00000065.scope: Deactivated successfully.
Jan 20 14:48:37 compute-1 systemd[1]: machine-qemu\x2d42\x2dinstance\x2d00000065.scope: Consumed 13.781s CPU time.
Jan 20 14:48:37 compute-1 systemd-machined[194361]: Machine qemu-42-instance-00000065 terminated.
Jan 20 14:48:37 compute-1 ceph-mon[81775]: pgmap v1816: 321 pgs: 321 active+clean; 371 MiB data, 970 MiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 1.9 MiB/s wr, 279 op/s
Jan 20 14:48:37 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/851343660' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:48:37 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:48:37 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/4103562931' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:48:37 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:48:37 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:48:37.096 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[fe03281a-0b82-479e-b1bf-4206dc14a1b4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:48:37 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:48:37.100 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[d85983c1-6229-446f-af39-5fbc92b9cb85]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:48:37 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:48:37.125 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[b5a6eaff-b802-44f5-89a9-5fc03a4d92ad]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:48:37 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:48:37.144 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[9879718c-b53a-4063-b076-71aa27737dc4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa19e9d1a-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:21:53:13'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 17, 'rx_bytes': 784, 'tx_bytes': 858, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 17, 'rx_bytes': 784, 'tx_bytes': 858, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 82], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 529090, 'reachable_time': 16021, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 264072, 'error': None, 'target': 'ovnmeta-a19e9d1a-864f-41ee-bdea-188e65973ea5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:48:37 compute-1 nova_compute[225855]: 2026-01-20 14:48:37.149 225859 DEBUG oslo_concurrency.lockutils [None req-4796ccce-2601-44d6-9482-2234d4c7191f 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:48:37 compute-1 nova_compute[225855]: 2026-01-20 14:48:37.149 225859 DEBUG oslo_concurrency.lockutils [None req-4796ccce-2601-44d6-9482-2234d4c7191f 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:48:37 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:48:37.161 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[9d1a3948-823b-49d9-bc27-4ac163e47b0c]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapa19e9d1a-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 529100, 'tstamp': 529100}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 264077, 'error': None, 'target': 'ovnmeta-a19e9d1a-864f-41ee-bdea-188e65973ea5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapa19e9d1a-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 529104, 'tstamp': 529104}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 264077, 'error': None, 'target': 'ovnmeta-a19e9d1a-864f-41ee-bdea-188e65973ea5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:48:37 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:48:37.162 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa19e9d1a-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:48:37 compute-1 nova_compute[225855]: 2026-01-20 14:48:37.163 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:48:37 compute-1 nova_compute[225855]: 2026-01-20 14:48:37.167 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:48:37 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:48:37.168 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa19e9d1a-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:48:37 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:48:37.168 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:48:37 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:48:37.169 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa19e9d1a-80, col_values=(('external_ids', {'iface-id': '5527ab8d-a985-420b-9d5b-7e5d9baf7004'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:48:37 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:48:37.169 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:48:37 compute-1 nova_compute[225855]: 2026-01-20 14:48:37.172 225859 INFO nova.virt.libvirt.driver [-] [instance: 9beb3ec3-721e-4919-9713-a92c82ad189b] Instance destroyed successfully.
Jan 20 14:48:37 compute-1 nova_compute[225855]: 2026-01-20 14:48:37.173 225859 DEBUG nova.objects.instance [None req-ef76f3be-b647-42f5-a3c1-090076f98780 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Lazy-loading 'resources' on Instance uuid 9beb3ec3-721e-4919-9713-a92c82ad189b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:48:37 compute-1 nova_compute[225855]: 2026-01-20 14:48:37.186 225859 DEBUG nova.objects.instance [None req-4796ccce-2601-44d6-9482-2234d4c7191f 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Lazy-loading 'pci_requests' on Instance uuid 75736b87-b14e-45b7-b43b-5129cf7d3279 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:48:37 compute-1 nova_compute[225855]: 2026-01-20 14:48:37.200 225859 DEBUG nova.virt.libvirt.vif [None req-ef76f3be-b647-42f5-a3c1-090076f98780 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T14:47:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherA-server-757916410',display_name='tempest-ServerActionsTestOtherA-server-757916410',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveractionstestothera-server-757916410',id=101,image_ref='',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDOC6AOV9rhSIyyfBXYGEFvhdWE5GLRuNfs0sPvnXoLHIstQY2OpqwhI35UcFW1s96uqz0+j9sMbdcq/NuNcfgrlnXkEz6j2iO7WUECWdPrQW34JB1FTWAvtA4R6RDoaZA==',key_name='tempest-keypair-1611966828',keypairs=<?>,launch_index=0,launched_at=2026-01-20T14:48:15Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b683fcc0026242e28ba6d8fba638688e',ramdisk_id='',reservation_id='r-042ihw9k',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-ServerActionsTestOtherA-967087071',owner_user_name='tempest-ServerActionsTestOtherA-967087071-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T14:48:24Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='869086208e10436c9dc96c78bee9a85d',uuid=9beb3ec3-721e-4919-9713-a92c82ad189b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "efc8b363-e70d-42f6-9be8-99865e269ec9", "address": "fa:16:3e:36:66:1d", "network": {"id": "a19e9d1a-864f-41ee-bdea-188e65973ea5", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-916311998-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", 
"version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b683fcc0026242e28ba6d8fba638688e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapefc8b363-e7", "ovs_interfaceid": "efc8b363-e70d-42f6-9be8-99865e269ec9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 20 14:48:37 compute-1 nova_compute[225855]: 2026-01-20 14:48:37.201 225859 DEBUG nova.network.os_vif_util [None req-ef76f3be-b647-42f5-a3c1-090076f98780 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Converting VIF {"id": "efc8b363-e70d-42f6-9be8-99865e269ec9", "address": "fa:16:3e:36:66:1d", "network": {"id": "a19e9d1a-864f-41ee-bdea-188e65973ea5", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-916311998-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b683fcc0026242e28ba6d8fba638688e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapefc8b363-e7", "ovs_interfaceid": "efc8b363-e70d-42f6-9be8-99865e269ec9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:48:37 compute-1 nova_compute[225855]: 2026-01-20 14:48:37.201 225859 DEBUG nova.network.os_vif_util [None req-ef76f3be-b647-42f5-a3c1-090076f98780 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:36:66:1d,bridge_name='br-int',has_traffic_filtering=True,id=efc8b363-e70d-42f6-9be8-99865e269ec9,network=Network(a19e9d1a-864f-41ee-bdea-188e65973ea5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapefc8b363-e7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:48:37 compute-1 nova_compute[225855]: 2026-01-20 14:48:37.201 225859 DEBUG os_vif [None req-ef76f3be-b647-42f5-a3c1-090076f98780 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:36:66:1d,bridge_name='br-int',has_traffic_filtering=True,id=efc8b363-e70d-42f6-9be8-99865e269ec9,network=Network(a19e9d1a-864f-41ee-bdea-188e65973ea5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapefc8b363-e7') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 20 14:48:37 compute-1 nova_compute[225855]: 2026-01-20 14:48:37.203 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:48:37 compute-1 nova_compute[225855]: 2026-01-20 14:48:37.203 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapefc8b363-e7, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:48:37 compute-1 nova_compute[225855]: 2026-01-20 14:48:37.204 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:48:37 compute-1 nova_compute[225855]: 2026-01-20 14:48:37.207 225859 DEBUG nova.virt.hardware [None req-4796ccce-2601-44d6-9482-2234d4c7191f 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 20 14:48:37 compute-1 nova_compute[225855]: 2026-01-20 14:48:37.207 225859 INFO nova.compute.claims [None req-4796ccce-2601-44d6-9482-2234d4c7191f 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] [instance: 75736b87-b14e-45b7-b43b-5129cf7d3279] Claim successful on node compute-1.ctlplane.example.com
Jan 20 14:48:37 compute-1 nova_compute[225855]: 2026-01-20 14:48:37.208 225859 DEBUG nova.objects.instance [None req-4796ccce-2601-44d6-9482-2234d4c7191f 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Lazy-loading 'resources' on Instance uuid 75736b87-b14e-45b7-b43b-5129cf7d3279 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:48:37 compute-1 nova_compute[225855]: 2026-01-20 14:48:37.208 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:48:37 compute-1 nova_compute[225855]: 2026-01-20 14:48:37.210 225859 INFO os_vif [None req-ef76f3be-b647-42f5-a3c1-090076f98780 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:36:66:1d,bridge_name='br-int',has_traffic_filtering=True,id=efc8b363-e70d-42f6-9be8-99865e269ec9,network=Network(a19e9d1a-864f-41ee-bdea-188e65973ea5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapefc8b363-e7')
Jan 20 14:48:37 compute-1 nova_compute[225855]: 2026-01-20 14:48:37.228 225859 DEBUG nova.objects.instance [None req-4796ccce-2601-44d6-9482-2234d4c7191f 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Lazy-loading 'pci_devices' on Instance uuid 75736b87-b14e-45b7-b43b-5129cf7d3279 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:48:37 compute-1 nova_compute[225855]: 2026-01-20 14:48:37.289 225859 INFO nova.compute.resource_tracker [None req-4796ccce-2601-44d6-9482-2234d4c7191f 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] [instance: 75736b87-b14e-45b7-b43b-5129cf7d3279] Updating resource usage from migration 37b266a8-8f13-40bc-ab16-470d7fe422ef
Jan 20 14:48:37 compute-1 nova_compute[225855]: 2026-01-20 14:48:37.290 225859 DEBUG nova.compute.resource_tracker [None req-4796ccce-2601-44d6-9482-2234d4c7191f 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] [instance: 75736b87-b14e-45b7-b43b-5129cf7d3279] Starting to track incoming migration 37b266a8-8f13-40bc-ab16-470d7fe422ef with flavor 30c26a27-d918-46d8-a512-4ef3b4ce5955 _update_usage_from_migration /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1431
Jan 20 14:48:37 compute-1 nova_compute[225855]: 2026-01-20 14:48:37.343 225859 DEBUG nova.compute.manager [req-597a8e63-0e57-4475-bd02-05abbca4aadd req-01335e5b-a54a-4f80-8eec-0476c9866a60 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 9beb3ec3-721e-4919-9713-a92c82ad189b] Received event network-vif-unplugged-efc8b363-e70d-42f6-9be8-99865e269ec9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:48:37 compute-1 nova_compute[225855]: 2026-01-20 14:48:37.343 225859 DEBUG oslo_concurrency.lockutils [req-597a8e63-0e57-4475-bd02-05abbca4aadd req-01335e5b-a54a-4f80-8eec-0476c9866a60 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "9beb3ec3-721e-4919-9713-a92c82ad189b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:48:37 compute-1 nova_compute[225855]: 2026-01-20 14:48:37.343 225859 DEBUG oslo_concurrency.lockutils [req-597a8e63-0e57-4475-bd02-05abbca4aadd req-01335e5b-a54a-4f80-8eec-0476c9866a60 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "9beb3ec3-721e-4919-9713-a92c82ad189b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:48:37 compute-1 nova_compute[225855]: 2026-01-20 14:48:37.343 225859 DEBUG oslo_concurrency.lockutils [req-597a8e63-0e57-4475-bd02-05abbca4aadd req-01335e5b-a54a-4f80-8eec-0476c9866a60 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "9beb3ec3-721e-4919-9713-a92c82ad189b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:48:37 compute-1 nova_compute[225855]: 2026-01-20 14:48:37.344 225859 DEBUG nova.compute.manager [req-597a8e63-0e57-4475-bd02-05abbca4aadd req-01335e5b-a54a-4f80-8eec-0476c9866a60 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 9beb3ec3-721e-4919-9713-a92c82ad189b] No waiting events found dispatching network-vif-unplugged-efc8b363-e70d-42f6-9be8-99865e269ec9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:48:37 compute-1 nova_compute[225855]: 2026-01-20 14:48:37.344 225859 DEBUG nova.compute.manager [req-597a8e63-0e57-4475-bd02-05abbca4aadd req-01335e5b-a54a-4f80-8eec-0476c9866a60 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 9beb3ec3-721e-4919-9713-a92c82ad189b] Received event network-vif-unplugged-efc8b363-e70d-42f6-9be8-99865e269ec9 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 20 14:48:37 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e241 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:48:37 compute-1 nova_compute[225855]: 2026-01-20 14:48:37.371 225859 DEBUG nova.scheduler.client.report [None req-4796ccce-2601-44d6-9482-2234d4c7191f 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Refreshing inventories for resource provider bbb02880-a710-4ac1-8b2c-5c09765848d1 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 20 14:48:37 compute-1 nova_compute[225855]: 2026-01-20 14:48:37.373 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:48:37 compute-1 nova_compute[225855]: 2026-01-20 14:48:37.394 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:48:37 compute-1 nova_compute[225855]: 2026-01-20 14:48:37.396 225859 DEBUG nova.scheduler.client.report [None req-4796ccce-2601-44d6-9482-2234d4c7191f 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Updating ProviderTree inventory for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 20 14:48:37 compute-1 nova_compute[225855]: 2026-01-20 14:48:37.396 225859 DEBUG nova.compute.provider_tree [None req-4796ccce-2601-44d6-9482-2234d4c7191f 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Updating inventory in ProviderTree for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 20 14:48:37 compute-1 nova_compute[225855]: 2026-01-20 14:48:37.431 225859 DEBUG nova.scheduler.client.report [None req-4796ccce-2601-44d6-9482-2234d4c7191f 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Refreshing aggregate associations for resource provider bbb02880-a710-4ac1-8b2c-5c09765848d1, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 20 14:48:37 compute-1 nova_compute[225855]: 2026-01-20 14:48:37.454 225859 DEBUG nova.scheduler.client.report [None req-4796ccce-2601-44d6-9482-2234d4c7191f 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Refreshing trait associations for resource provider bbb02880-a710-4ac1-8b2c-5c09765848d1, traits: COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_STORAGE_BUS_SATA,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_TRUSTED_CERTS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSSE3,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_VOLUME_EXTEND,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_RESCUE_BFV,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_USB,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE42,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SOCKET_PCI_NUMA_AFFINITY _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 20 14:48:37 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:48:37 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:48:37 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:48:37.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:48:37 compute-1 nova_compute[225855]: 2026-01-20 14:48:37.608 225859 DEBUG oslo_concurrency.processutils [None req-4796ccce-2601-44d6-9482-2234d4c7191f 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:48:37 compute-1 nova_compute[225855]: 2026-01-20 14:48:37.655 225859 INFO nova.virt.libvirt.driver [None req-ef76f3be-b647-42f5-a3c1-090076f98780 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] [instance: 9beb3ec3-721e-4919-9713-a92c82ad189b] Deleting instance files /var/lib/nova/instances/9beb3ec3-721e-4919-9713-a92c82ad189b_del
Jan 20 14:48:37 compute-1 nova_compute[225855]: 2026-01-20 14:48:37.656 225859 INFO nova.virt.libvirt.driver [None req-ef76f3be-b647-42f5-a3c1-090076f98780 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] [instance: 9beb3ec3-721e-4919-9713-a92c82ad189b] Deletion of /var/lib/nova/instances/9beb3ec3-721e-4919-9713-a92c82ad189b_del complete
Jan 20 14:48:37 compute-1 nova_compute[225855]: 2026-01-20 14:48:37.703 225859 INFO nova.compute.manager [None req-ef76f3be-b647-42f5-a3c1-090076f98780 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] [instance: 9beb3ec3-721e-4919-9713-a92c82ad189b] Took 0.77 seconds to destroy the instance on the hypervisor.
Jan 20 14:48:37 compute-1 nova_compute[225855]: 2026-01-20 14:48:37.704 225859 DEBUG oslo.service.loopingcall [None req-ef76f3be-b647-42f5-a3c1-090076f98780 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 20 14:48:37 compute-1 nova_compute[225855]: 2026-01-20 14:48:37.704 225859 DEBUG nova.compute.manager [-] [instance: 9beb3ec3-721e-4919-9713-a92c82ad189b] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 20 14:48:37 compute-1 nova_compute[225855]: 2026-01-20 14:48:37.705 225859 DEBUG nova.network.neutron [-] [instance: 9beb3ec3-721e-4919-9713-a92c82ad189b] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 20 14:48:38 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:48:38 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4259718524' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:48:38 compute-1 nova_compute[225855]: 2026-01-20 14:48:38.048 225859 DEBUG oslo_concurrency.processutils [None req-4796ccce-2601-44d6-9482-2234d4c7191f 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:48:38 compute-1 nova_compute[225855]: 2026-01-20 14:48:38.053 225859 DEBUG nova.compute.provider_tree [None req-4796ccce-2601-44d6-9482-2234d4c7191f 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:48:38 compute-1 nova_compute[225855]: 2026-01-20 14:48:38.072 225859 DEBUG nova.scheduler.client.report [None req-4796ccce-2601-44d6-9482-2234d4c7191f 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:48:38 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/1867248921' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:48:38 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/821706828' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:48:38 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/4259718524' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:48:38 compute-1 nova_compute[225855]: 2026-01-20 14:48:38.099 225859 DEBUG oslo_concurrency.lockutils [None req-4796ccce-2601-44d6-9482-2234d4c7191f 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: held 0.950s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:48:38 compute-1 nova_compute[225855]: 2026-01-20 14:48:38.100 225859 INFO nova.compute.manager [None req-4796ccce-2601-44d6-9482-2234d4c7191f 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] [instance: 75736b87-b14e-45b7-b43b-5129cf7d3279] Migrating
Jan 20 14:48:38 compute-1 nova_compute[225855]: 2026-01-20 14:48:38.105 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.710s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:48:38 compute-1 nova_compute[225855]: 2026-01-20 14:48:38.105 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:48:38 compute-1 nova_compute[225855]: 2026-01-20 14:48:38.105 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 14:48:38 compute-1 nova_compute[225855]: 2026-01-20 14:48:38.106 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:48:38 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:48:38 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3717095713' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:48:38 compute-1 nova_compute[225855]: 2026-01-20 14:48:38.548 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:48:38 compute-1 nova_compute[225855]: 2026-01-20 14:48:38.671 225859 DEBUG nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] skipping disk for instance-00000057 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 14:48:38 compute-1 nova_compute[225855]: 2026-01-20 14:48:38.672 225859 DEBUG nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] skipping disk for instance-00000057 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 14:48:38 compute-1 nova_compute[225855]: 2026-01-20 14:48:38.678 225859 DEBUG nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] skipping disk for instance-00000064 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 14:48:38 compute-1 nova_compute[225855]: 2026-01-20 14:48:38.679 225859 DEBUG nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] skipping disk for instance-00000064 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 14:48:38 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:48:38 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:48:38 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:48:38.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:48:38 compute-1 nova_compute[225855]: 2026-01-20 14:48:38.846 225859 WARNING nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 14:48:38 compute-1 nova_compute[225855]: 2026-01-20 14:48:38.847 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4057MB free_disk=20.843402862548828GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 14:48:38 compute-1 nova_compute[225855]: 2026-01-20 14:48:38.848 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:48:38 compute-1 nova_compute[225855]: 2026-01-20 14:48:38.848 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:48:38 compute-1 nova_compute[225855]: 2026-01-20 14:48:38.921 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Migration for instance 75736b87-b14e-45b7-b43b-5129cf7d3279 refers to another host's instance! _pair_instances_to_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:903
Jan 20 14:48:38 compute-1 nova_compute[225855]: 2026-01-20 14:48:38.948 225859 INFO nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] [instance: 75736b87-b14e-45b7-b43b-5129cf7d3279] Updating resource usage from migration 37b266a8-8f13-40bc-ab16-470d7fe422ef
Jan 20 14:48:38 compute-1 nova_compute[225855]: 2026-01-20 14:48:38.949 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] [instance: 75736b87-b14e-45b7-b43b-5129cf7d3279] Starting to track incoming migration 37b266a8-8f13-40bc-ab16-470d7fe422ef with flavor 30c26a27-d918-46d8-a512-4ef3b4ce5955 _update_usage_from_migration /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1431
Jan 20 14:48:38 compute-1 nova_compute[225855]: 2026-01-20 14:48:38.987 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Instance 6586bc3e-3a94-4d22-8e8c-713a86a956fb actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 20 14:48:38 compute-1 nova_compute[225855]: 2026-01-20 14:48:38.987 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Instance 7efaa6b8-d1bd-4954-83ec-adcdb8e392bf actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 20 14:48:38 compute-1 nova_compute[225855]: 2026-01-20 14:48:38.988 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Instance 9beb3ec3-721e-4919-9713-a92c82ad189b actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 192, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 20 14:48:39 compute-1 nova_compute[225855]: 2026-01-20 14:48:39.012 225859 WARNING nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Instance 75736b87-b14e-45b7-b43b-5129cf7d3279 has been moved to another host compute-2.ctlplane.example.com(compute-2.ctlplane.example.com). There are allocations remaining against the source host that might need to be removed: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 192, 'VCPU': 1}}.
Jan 20 14:48:39 compute-1 nova_compute[225855]: 2026-01-20 14:48:39.012 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 14:48:39 compute-1 nova_compute[225855]: 2026-01-20 14:48:39.013 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=1152MB phys_disk=20GB used_disk=3GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 14:48:39 compute-1 ceph-mon[81775]: pgmap v1817: 321 pgs: 321 active+clean; 386 MiB data, 977 MiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 670 KiB/s wr, 217 op/s
Jan 20 14:48:39 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/3717095713' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:48:39 compute-1 nova_compute[225855]: 2026-01-20 14:48:39.212 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:48:39 compute-1 nova_compute[225855]: 2026-01-20 14:48:39.491 225859 DEBUG nova.compute.manager [req-45654040-a3ab-4549-b68a-4aca467f45c4 req-b0e17b8a-4715-4e4b-a593-f613fdd388d3 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 9beb3ec3-721e-4919-9713-a92c82ad189b] Received event network-vif-plugged-efc8b363-e70d-42f6-9be8-99865e269ec9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:48:39 compute-1 nova_compute[225855]: 2026-01-20 14:48:39.492 225859 DEBUG oslo_concurrency.lockutils [req-45654040-a3ab-4549-b68a-4aca467f45c4 req-b0e17b8a-4715-4e4b-a593-f613fdd388d3 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "9beb3ec3-721e-4919-9713-a92c82ad189b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:48:39 compute-1 nova_compute[225855]: 2026-01-20 14:48:39.492 225859 DEBUG oslo_concurrency.lockutils [req-45654040-a3ab-4549-b68a-4aca467f45c4 req-b0e17b8a-4715-4e4b-a593-f613fdd388d3 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "9beb3ec3-721e-4919-9713-a92c82ad189b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:48:39 compute-1 nova_compute[225855]: 2026-01-20 14:48:39.492 225859 DEBUG oslo_concurrency.lockutils [req-45654040-a3ab-4549-b68a-4aca467f45c4 req-b0e17b8a-4715-4e4b-a593-f613fdd388d3 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "9beb3ec3-721e-4919-9713-a92c82ad189b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:48:39 compute-1 nova_compute[225855]: 2026-01-20 14:48:39.493 225859 DEBUG nova.compute.manager [req-45654040-a3ab-4549-b68a-4aca467f45c4 req-b0e17b8a-4715-4e4b-a593-f613fdd388d3 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 9beb3ec3-721e-4919-9713-a92c82ad189b] No waiting events found dispatching network-vif-plugged-efc8b363-e70d-42f6-9be8-99865e269ec9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:48:39 compute-1 nova_compute[225855]: 2026-01-20 14:48:39.493 225859 WARNING nova.compute.manager [req-45654040-a3ab-4549-b68a-4aca467f45c4 req-b0e17b8a-4715-4e4b-a593-f613fdd388d3 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 9beb3ec3-721e-4919-9713-a92c82ad189b] Received unexpected event network-vif-plugged-efc8b363-e70d-42f6-9be8-99865e269ec9 for instance with vm_state active and task_state deleting.
Jan 20 14:48:39 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:48:39 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:48:39 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:48:39.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:48:39 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:48:39 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/946020620' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:48:39 compute-1 nova_compute[225855]: 2026-01-20 14:48:39.625 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.414s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:48:39 compute-1 nova_compute[225855]: 2026-01-20 14:48:39.630 225859 DEBUG nova.compute.provider_tree [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:48:39 compute-1 nova_compute[225855]: 2026-01-20 14:48:39.652 225859 DEBUG nova.scheduler.client.report [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:48:39 compute-1 nova_compute[225855]: 2026-01-20 14:48:39.690 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 14:48:39 compute-1 nova_compute[225855]: 2026-01-20 14:48:39.691 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.843s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:48:39 compute-1 nova_compute[225855]: 2026-01-20 14:48:39.703 225859 DEBUG nova.network.neutron [-] [instance: 9beb3ec3-721e-4919-9713-a92c82ad189b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:48:39 compute-1 nova_compute[225855]: 2026-01-20 14:48:39.749 225859 INFO nova.compute.manager [-] [instance: 9beb3ec3-721e-4919-9713-a92c82ad189b] Took 2.04 seconds to deallocate network for instance.
Jan 20 14:48:40 compute-1 nova_compute[225855]: 2026-01-20 14:48:40.007 225859 DEBUG nova.compute.manager [req-1d5cab74-782c-4f3e-a367-363132bb2350 req-9498e3af-b963-476a-aac7-f1e4bb3eada9 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 9beb3ec3-721e-4919-9713-a92c82ad189b] Received event network-vif-deleted-efc8b363-e70d-42f6-9be8-99865e269ec9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:48:40 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/946020620' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:48:40 compute-1 nova_compute[225855]: 2026-01-20 14:48:40.106 225859 INFO nova.compute.manager [None req-ef76f3be-b647-42f5-a3c1-090076f98780 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] [instance: 9beb3ec3-721e-4919-9713-a92c82ad189b] Took 0.36 seconds to detach 1 volumes for instance.
Jan 20 14:48:40 compute-1 nova_compute[225855]: 2026-01-20 14:48:40.108 225859 DEBUG nova.compute.manager [None req-ef76f3be-b647-42f5-a3c1-090076f98780 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] [instance: 9beb3ec3-721e-4919-9713-a92c82ad189b] Deleting volume: 9219aafd-6c66-4f38-9927-85b54b4175ae _cleanup_volumes /usr/lib/python3.9/site-packages/nova/compute/manager.py:3217
Jan 20 14:48:40 compute-1 nova_compute[225855]: 2026-01-20 14:48:40.399 225859 DEBUG oslo_concurrency.lockutils [None req-ef76f3be-b647-42f5-a3c1-090076f98780 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:48:40 compute-1 nova_compute[225855]: 2026-01-20 14:48:40.400 225859 DEBUG oslo_concurrency.lockutils [None req-ef76f3be-b647-42f5-a3c1-090076f98780 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:48:40 compute-1 nova_compute[225855]: 2026-01-20 14:48:40.523 225859 DEBUG oslo_concurrency.processutils [None req-ef76f3be-b647-42f5-a3c1-090076f98780 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:48:40 compute-1 nova_compute[225855]: 2026-01-20 14:48:40.657 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:48:40 compute-1 nova_compute[225855]: 2026-01-20 14:48:40.658 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:48:40 compute-1 nova_compute[225855]: 2026-01-20 14:48:40.658 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:48:40 compute-1 nova_compute[225855]: 2026-01-20 14:48:40.689 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:48:40 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:48:40 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:48:40 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:48:40.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:48:40 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:48:40 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4024521406' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:48:40 compute-1 nova_compute[225855]: 2026-01-20 14:48:40.945 225859 DEBUG oslo_concurrency.processutils [None req-ef76f3be-b647-42f5-a3c1-090076f98780 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.422s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:48:40 compute-1 nova_compute[225855]: 2026-01-20 14:48:40.950 225859 DEBUG nova.compute.provider_tree [None req-ef76f3be-b647-42f5-a3c1-090076f98780 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:48:40 compute-1 nova_compute[225855]: 2026-01-20 14:48:40.974 225859 DEBUG nova.scheduler.client.report [None req-ef76f3be-b647-42f5-a3c1-090076f98780 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:48:41 compute-1 nova_compute[225855]: 2026-01-20 14:48:41.001 225859 DEBUG oslo_concurrency.lockutils [None req-ef76f3be-b647-42f5-a3c1-090076f98780 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.601s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:48:41 compute-1 nova_compute[225855]: 2026-01-20 14:48:41.050 225859 INFO nova.scheduler.client.report [None req-ef76f3be-b647-42f5-a3c1-090076f98780 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Deleted allocations for instance 9beb3ec3-721e-4919-9713-a92c82ad189b
Jan 20 14:48:41 compute-1 nova_compute[225855]: 2026-01-20 14:48:41.119 225859 DEBUG oslo_concurrency.lockutils [None req-ef76f3be-b647-42f5-a3c1-090076f98780 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Lock "9beb3ec3-721e-4919-9713-a92c82ad189b" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.193s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:48:41 compute-1 ceph-mon[81775]: pgmap v1818: 321 pgs: 321 active+clean; 392 MiB data, 981 MiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 1006 KiB/s wr, 184 op/s
Jan 20 14:48:41 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/2633351073' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:48:41 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/4024521406' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:48:41 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/2563049413' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:48:41 compute-1 sshd-session[264196]: Accepted publickey for nova from 192.168.122.102 port 55726 ssh2: ECDSA SHA256:XnPnjIKlkePRv+YAV8ktjwWUWX9aekF80jIRGfdhjRU
Jan 20 14:48:41 compute-1 systemd[1]: Created slice User Slice of UID 42436.
Jan 20 14:48:41 compute-1 systemd[1]: Starting User Runtime Directory /run/user/42436...
Jan 20 14:48:41 compute-1 systemd-logind[783]: New session 60 of user nova.
Jan 20 14:48:41 compute-1 systemd[1]: Finished User Runtime Directory /run/user/42436.
Jan 20 14:48:41 compute-1 systemd[1]: Starting User Manager for UID 42436...
Jan 20 14:48:41 compute-1 systemd[264200]: pam_unix(systemd-user:session): session opened for user nova(uid=42436) by nova(uid=0)
Jan 20 14:48:41 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:48:41 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:48:41 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:48:41.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:48:41 compute-1 systemd[264200]: Queued start job for default target Main User Target.
Jan 20 14:48:41 compute-1 systemd[264200]: Created slice User Application Slice.
Jan 20 14:48:41 compute-1 systemd[264200]: Started Mark boot as successful after the user session has run 2 minutes.
Jan 20 14:48:41 compute-1 systemd[264200]: Started Daily Cleanup of User's Temporary Directories.
Jan 20 14:48:41 compute-1 systemd[264200]: Reached target Paths.
Jan 20 14:48:41 compute-1 systemd[264200]: Reached target Timers.
Jan 20 14:48:41 compute-1 systemd[264200]: Starting D-Bus User Message Bus Socket...
Jan 20 14:48:41 compute-1 systemd[264200]: Starting Create User's Volatile Files and Directories...
Jan 20 14:48:41 compute-1 systemd[264200]: Listening on D-Bus User Message Bus Socket.
Jan 20 14:48:41 compute-1 systemd[264200]: Reached target Sockets.
Jan 20 14:48:41 compute-1 systemd[264200]: Finished Create User's Volatile Files and Directories.
Jan 20 14:48:41 compute-1 systemd[264200]: Reached target Basic System.
Jan 20 14:48:41 compute-1 systemd[264200]: Reached target Main User Target.
Jan 20 14:48:41 compute-1 systemd[264200]: Startup finished in 168ms.
Jan 20 14:48:41 compute-1 systemd[1]: Started User Manager for UID 42436.
Jan 20 14:48:41 compute-1 systemd[1]: Started Session 60 of User nova.
Jan 20 14:48:41 compute-1 sshd-session[264196]: pam_unix(sshd:session): session opened for user nova(uid=42436) by nova(uid=0)
Jan 20 14:48:41 compute-1 sshd-session[264217]: Received disconnect from 192.168.122.102 port 55726:11: disconnected by user
Jan 20 14:48:41 compute-1 sshd-session[264217]: Disconnected from user nova 192.168.122.102 port 55726
Jan 20 14:48:41 compute-1 sshd-session[264196]: pam_unix(sshd:session): session closed for user nova
Jan 20 14:48:41 compute-1 systemd[1]: session-60.scope: Deactivated successfully.
Jan 20 14:48:41 compute-1 systemd-logind[783]: Session 60 logged out. Waiting for processes to exit.
Jan 20 14:48:41 compute-1 systemd-logind[783]: Removed session 60.
Jan 20 14:48:41 compute-1 nova_compute[225855]: 2026-01-20 14:48:41.779 225859 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1768920506.7775161, d699cf6a-9c33-400b-8d0f-4d61b8b16916 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:48:41 compute-1 nova_compute[225855]: 2026-01-20 14:48:41.779 225859 INFO nova.compute.manager [-] [instance: d699cf6a-9c33-400b-8d0f-4d61b8b16916] VM Stopped (Lifecycle Event)
Jan 20 14:48:41 compute-1 nova_compute[225855]: 2026-01-20 14:48:41.824 225859 DEBUG nova.compute.manager [None req-2471905c-4f04-4a5c-9447-dfb915c4b9fc - - - - - -] [instance: d699cf6a-9c33-400b-8d0f-4d61b8b16916] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:48:41 compute-1 sshd-session[264219]: Accepted publickey for nova from 192.168.122.102 port 55734 ssh2: ECDSA SHA256:XnPnjIKlkePRv+YAV8ktjwWUWX9aekF80jIRGfdhjRU
Jan 20 14:48:41 compute-1 systemd-logind[783]: New session 62 of user nova.
Jan 20 14:48:41 compute-1 systemd[1]: Started Session 62 of User nova.
Jan 20 14:48:41 compute-1 sshd-session[264219]: pam_unix(sshd:session): session opened for user nova(uid=42436) by nova(uid=0)
Jan 20 14:48:41 compute-1 sshd-session[264222]: Received disconnect from 192.168.122.102 port 55734:11: disconnected by user
Jan 20 14:48:41 compute-1 sshd-session[264222]: Disconnected from user nova 192.168.122.102 port 55734
Jan 20 14:48:41 compute-1 sshd-session[264219]: pam_unix(sshd:session): session closed for user nova
Jan 20 14:48:41 compute-1 systemd[1]: session-62.scope: Deactivated successfully.
Jan 20 14:48:41 compute-1 systemd-logind[783]: Session 62 logged out. Waiting for processes to exit.
Jan 20 14:48:41 compute-1 systemd-logind[783]: Removed session 62.
Jan 20 14:48:42 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/140876588' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 14:48:42 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/140876588' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 14:48:42 compute-1 ceph-mon[81775]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #85. Immutable memtables: 0.
Jan 20 14:48:42 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:48:42.159546) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 20 14:48:42 compute-1 ceph-mon[81775]: rocksdb: [db/flush_job.cc:856] [default] [JOB 51] Flushing memtable with next log file: 85
Jan 20 14:48:42 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768920522159611, "job": 51, "event": "flush_started", "num_memtables": 1, "num_entries": 611, "num_deletes": 251, "total_data_size": 885578, "memory_usage": 897424, "flush_reason": "Manual Compaction"}
Jan 20 14:48:42 compute-1 ceph-mon[81775]: rocksdb: [db/flush_job.cc:885] [default] [JOB 51] Level-0 flush table #86: started
Jan 20 14:48:42 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768920522165781, "cf_name": "default", "job": 51, "event": "table_file_creation", "file_number": 86, "file_size": 572733, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 43805, "largest_seqno": 44411, "table_properties": {"data_size": 569651, "index_size": 990, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1029, "raw_key_size": 7616, "raw_average_key_size": 19, "raw_value_size": 563367, "raw_average_value_size": 1444, "num_data_blocks": 43, "num_entries": 390, "num_filter_entries": 390, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768920495, "oldest_key_time": 1768920495, "file_creation_time": 1768920522, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1539d774-8a6f-4e48-b253-137c44586344", "db_session_id": "LFF7G2OZDOU7TKQ8MKAH", "orig_file_number": 86, "seqno_to_time_mapping": "N/A"}}
Jan 20 14:48:42 compute-1 ceph-mon[81775]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 51] Flush lasted 6276 microseconds, and 2499 cpu microseconds.
Jan 20 14:48:42 compute-1 ceph-mon[81775]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 14:48:42 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:48:42.165820) [db/flush_job.cc:967] [default] [JOB 51] Level-0 flush table #86: 572733 bytes OK
Jan 20 14:48:42 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:48:42.165839) [db/memtable_list.cc:519] [default] Level-0 commit table #86 started
Jan 20 14:48:42 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:48:42.167771) [db/memtable_list.cc:722] [default] Level-0 commit table #86: memtable #1 done
Jan 20 14:48:42 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:48:42.167792) EVENT_LOG_v1 {"time_micros": 1768920522167787, "job": 51, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 20 14:48:42 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:48:42.167811) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 20 14:48:42 compute-1 ceph-mon[81775]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 51] Try to delete WAL files size 882106, prev total WAL file size 882106, number of live WAL files 2.
Jan 20 14:48:42 compute-1 ceph-mon[81775]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000082.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 14:48:42 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:48:42.168321) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033353134' seq:72057594037927935, type:22 .. '7061786F730033373636' seq:0, type:0; will stop at (end)
Jan 20 14:48:42 compute-1 ceph-mon[81775]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 52] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 20 14:48:42 compute-1 ceph-mon[81775]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 51 Base level 0, inputs: [86(559KB)], [84(10038KB)]
Jan 20 14:48:42 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768920522168352, "job": 52, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [86], "files_L6": [84], "score": -1, "input_data_size": 10852386, "oldest_snapshot_seqno": -1}
Jan 20 14:48:42 compute-1 nova_compute[225855]: 2026-01-20 14:48:42.205 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:48:42 compute-1 ceph-mon[81775]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 52] Generated table #87: 6690 keys, 8960919 bytes, temperature: kUnknown
Jan 20 14:48:42 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768920522221176, "cf_name": "default", "job": 52, "event": "table_file_creation", "file_number": 87, "file_size": 8960919, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8918265, "index_size": 24814, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16773, "raw_key_size": 173883, "raw_average_key_size": 25, "raw_value_size": 8800557, "raw_average_value_size": 1315, "num_data_blocks": 976, "num_entries": 6690, "num_filter_entries": 6690, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768917474, "oldest_key_time": 0, "file_creation_time": 1768920522, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1539d774-8a6f-4e48-b253-137c44586344", "db_session_id": "LFF7G2OZDOU7TKQ8MKAH", "orig_file_number": 87, "seqno_to_time_mapping": "N/A"}}
Jan 20 14:48:42 compute-1 ceph-mon[81775]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 14:48:42 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:48:42.221402) [db/compaction/compaction_job.cc:1663] [default] [JOB 52] Compacted 1@0 + 1@6 files to L6 => 8960919 bytes
Jan 20 14:48:42 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:48:42.250721) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 205.1 rd, 169.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.5, 9.8 +0.0 blob) out(8.5 +0.0 blob), read-write-amplify(34.6) write-amplify(15.6) OK, records in: 7204, records dropped: 514 output_compression: NoCompression
Jan 20 14:48:42 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:48:42.250764) EVENT_LOG_v1 {"time_micros": 1768920522250748, "job": 52, "event": "compaction_finished", "compaction_time_micros": 52902, "compaction_time_cpu_micros": 19660, "output_level": 6, "num_output_files": 1, "total_output_size": 8960919, "num_input_records": 7204, "num_output_records": 6690, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 20 14:48:42 compute-1 ceph-mon[81775]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000086.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 14:48:42 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768920522251133, "job": 52, "event": "table_file_deletion", "file_number": 86}
Jan 20 14:48:42 compute-1 ceph-mon[81775]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000084.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 14:48:42 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768920522252710, "job": 52, "event": "table_file_deletion", "file_number": 84}
Jan 20 14:48:42 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:48:42.168259) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:48:42 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:48:42.252740) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:48:42 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:48:42.252744) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:48:42 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:48:42.252748) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:48:42 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:48:42.252750) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:48:42 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:48:42.252752) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:48:42 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e241 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:48:42 compute-1 nova_compute[225855]: 2026-01-20 14:48:42.634 225859 DEBUG oslo_concurrency.lockutils [None req-dd3dec03-bb58-43fc-bfa9-07351fe03dc3 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Acquiring lock "7efaa6b8-d1bd-4954-83ec-adcdb8e392bf" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:48:42 compute-1 nova_compute[225855]: 2026-01-20 14:48:42.635 225859 DEBUG oslo_concurrency.lockutils [None req-dd3dec03-bb58-43fc-bfa9-07351fe03dc3 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Lock "7efaa6b8-d1bd-4954-83ec-adcdb8e392bf" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:48:42 compute-1 nova_compute[225855]: 2026-01-20 14:48:42.635 225859 DEBUG oslo_concurrency.lockutils [None req-dd3dec03-bb58-43fc-bfa9-07351fe03dc3 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Acquiring lock "7efaa6b8-d1bd-4954-83ec-adcdb8e392bf-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:48:42 compute-1 nova_compute[225855]: 2026-01-20 14:48:42.635 225859 DEBUG oslo_concurrency.lockutils [None req-dd3dec03-bb58-43fc-bfa9-07351fe03dc3 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Lock "7efaa6b8-d1bd-4954-83ec-adcdb8e392bf-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:48:42 compute-1 nova_compute[225855]: 2026-01-20 14:48:42.635 225859 DEBUG oslo_concurrency.lockutils [None req-dd3dec03-bb58-43fc-bfa9-07351fe03dc3 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Lock "7efaa6b8-d1bd-4954-83ec-adcdb8e392bf-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:48:42 compute-1 nova_compute[225855]: 2026-01-20 14:48:42.637 225859 INFO nova.compute.manager [None req-dd3dec03-bb58-43fc-bfa9-07351fe03dc3 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] [instance: 7efaa6b8-d1bd-4954-83ec-adcdb8e392bf] Terminating instance
Jan 20 14:48:42 compute-1 nova_compute[225855]: 2026-01-20 14:48:42.638 225859 DEBUG nova.compute.manager [None req-dd3dec03-bb58-43fc-bfa9-07351fe03dc3 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] [instance: 7efaa6b8-d1bd-4954-83ec-adcdb8e392bf] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 20 14:48:42 compute-1 kernel: tapb93181ae-8a (unregistering): left promiscuous mode
Jan 20 14:48:42 compute-1 NetworkManager[49104]: <info>  [1768920522.6982] device (tapb93181ae-8a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 20 14:48:42 compute-1 nova_compute[225855]: 2026-01-20 14:48:42.712 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:48:42 compute-1 ovn_controller[130490]: 2026-01-20T14:48:42Z|00371|binding|INFO|Releasing lport b93181ae-8a01-468c-adfc-ec8894512d2e from this chassis (sb_readonly=0)
Jan 20 14:48:42 compute-1 ovn_controller[130490]: 2026-01-20T14:48:42Z|00372|binding|INFO|Setting lport b93181ae-8a01-468c-adfc-ec8894512d2e down in Southbound
Jan 20 14:48:42 compute-1 ovn_controller[130490]: 2026-01-20T14:48:42Z|00373|binding|INFO|Removing iface tapb93181ae-8a ovn-installed in OVS
Jan 20 14:48:42 compute-1 nova_compute[225855]: 2026-01-20 14:48:42.715 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:48:42 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:48:42.720 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cf:63:44 10.100.0.5'], port_security=['fa:16:3e:cf:63:44 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '7efaa6b8-d1bd-4954-83ec-adcdb8e392bf', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a19e9d1a-864f-41ee-bdea-188e65973ea5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b683fcc0026242e28ba6d8fba638688e', 'neutron:revision_number': '5', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=361f9a69-30a6-4be4-89ad-2a8f92877af2, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], logical_port=b93181ae-8a01-468c-adfc-ec8894512d2e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:48:42 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:48:42.722 140354 INFO neutron.agent.ovn.metadata.agent [-] Port b93181ae-8a01-468c-adfc-ec8894512d2e in datapath a19e9d1a-864f-41ee-bdea-188e65973ea5 unbound from our chassis
Jan 20 14:48:42 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:48:42.726 140354 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a19e9d1a-864f-41ee-bdea-188e65973ea5
Jan 20 14:48:42 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:48:42 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:48:42 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:48:42.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:48:42 compute-1 nova_compute[225855]: 2026-01-20 14:48:42.740 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:48:42 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:48:42.747 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[7895391a-2856-4666-9ca9-8671651e37f8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:48:42 compute-1 systemd[1]: machine-qemu\x2d40\x2dinstance\x2d00000064.scope: Deactivated successfully.
Jan 20 14:48:42 compute-1 systemd[1]: machine-qemu\x2d40\x2dinstance\x2d00000064.scope: Consumed 16.119s CPU time.
Jan 20 14:48:42 compute-1 systemd-machined[194361]: Machine qemu-40-instance-00000064 terminated.
Jan 20 14:48:42 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:48:42.776 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[eaa3f0aa-dc68-44c0-9be4-a0c19846e6cc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:48:42 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:48:42.781 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[6357f34f-283b-4e3a-b678-57c12f757e4e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:48:42 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:48:42.809 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[35fbeb86-0504-447b-bb13-47dd36b80b0e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:48:42 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:48:42.827 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[1d1f1aff-c0bb-407a-86b0-b0b014d31e5c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa19e9d1a-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:21:53:13'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 19, 'rx_bytes': 784, 'tx_bytes': 942, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 19, 'rx_bytes': 784, 'tx_bytes': 942, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 82], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 529090, 'reachable_time': 16021, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 264236, 'error': None, 'target': 'ovnmeta-a19e9d1a-864f-41ee-bdea-188e65973ea5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:48:42 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:48:42.843 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[9c5541d1-d1ca-4a29-8e26-03529b409b90]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapa19e9d1a-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 529100, 'tstamp': 529100}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 264237, 'error': None, 'target': 'ovnmeta-a19e9d1a-864f-41ee-bdea-188e65973ea5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapa19e9d1a-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 529104, 'tstamp': 529104}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 264237, 'error': None, 'target': 'ovnmeta-a19e9d1a-864f-41ee-bdea-188e65973ea5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:48:42 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:48:42.845 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa19e9d1a-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:48:42 compute-1 nova_compute[225855]: 2026-01-20 14:48:42.847 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:48:42 compute-1 nova_compute[225855]: 2026-01-20 14:48:42.851 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:48:42 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:48:42.851 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa19e9d1a-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:48:42 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:48:42.852 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:48:42 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:48:42.852 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa19e9d1a-80, col_values=(('external_ids', {'iface-id': '5527ab8d-a985-420b-9d5b-7e5d9baf7004'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:48:42 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:48:42.852 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:48:42 compute-1 nova_compute[225855]: 2026-01-20 14:48:42.875 225859 INFO nova.virt.libvirt.driver [-] [instance: 7efaa6b8-d1bd-4954-83ec-adcdb8e392bf] Instance destroyed successfully.
Jan 20 14:48:42 compute-1 nova_compute[225855]: 2026-01-20 14:48:42.875 225859 DEBUG nova.objects.instance [None req-dd3dec03-bb58-43fc-bfa9-07351fe03dc3 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Lazy-loading 'resources' on Instance uuid 7efaa6b8-d1bd-4954-83ec-adcdb8e392bf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:48:42 compute-1 nova_compute[225855]: 2026-01-20 14:48:42.897 225859 DEBUG nova.virt.libvirt.vif [None req-dd3dec03-bb58-43fc-bfa9-07351fe03dc3 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T14:47:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherA-server-1146964335',display_name='tempest-ServerActionsTestOtherA-server-1146964335',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveractionstestothera-server-1146964335',id=100,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-20T14:47:13Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b683fcc0026242e28ba6d8fba638688e',ramdisk_id='',reservation_id='r-pmljdz8l',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',im
age_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherA-967087071',owner_user_name='tempest-ServerActionsTestOtherA-967087071-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T14:47:13Z,user_data=None,user_id='869086208e10436c9dc96c78bee9a85d',uuid=7efaa6b8-d1bd-4954-83ec-adcdb8e392bf,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b93181ae-8a01-468c-adfc-ec8894512d2e", "address": "fa:16:3e:cf:63:44", "network": {"id": "a19e9d1a-864f-41ee-bdea-188e65973ea5", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-916311998-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b683fcc0026242e28ba6d8fba638688e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb93181ae-8a", "ovs_interfaceid": "b93181ae-8a01-468c-adfc-ec8894512d2e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 20 14:48:42 compute-1 nova_compute[225855]: 2026-01-20 14:48:42.897 225859 DEBUG nova.network.os_vif_util [None req-dd3dec03-bb58-43fc-bfa9-07351fe03dc3 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Converting VIF {"id": "b93181ae-8a01-468c-adfc-ec8894512d2e", "address": "fa:16:3e:cf:63:44", "network": {"id": "a19e9d1a-864f-41ee-bdea-188e65973ea5", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-916311998-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b683fcc0026242e28ba6d8fba638688e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb93181ae-8a", "ovs_interfaceid": "b93181ae-8a01-468c-adfc-ec8894512d2e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:48:42 compute-1 nova_compute[225855]: 2026-01-20 14:48:42.898 225859 DEBUG nova.network.os_vif_util [None req-dd3dec03-bb58-43fc-bfa9-07351fe03dc3 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:cf:63:44,bridge_name='br-int',has_traffic_filtering=True,id=b93181ae-8a01-468c-adfc-ec8894512d2e,network=Network(a19e9d1a-864f-41ee-bdea-188e65973ea5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb93181ae-8a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:48:42 compute-1 nova_compute[225855]: 2026-01-20 14:48:42.898 225859 DEBUG os_vif [None req-dd3dec03-bb58-43fc-bfa9-07351fe03dc3 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:cf:63:44,bridge_name='br-int',has_traffic_filtering=True,id=b93181ae-8a01-468c-adfc-ec8894512d2e,network=Network(a19e9d1a-864f-41ee-bdea-188e65973ea5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb93181ae-8a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 20 14:48:42 compute-1 nova_compute[225855]: 2026-01-20 14:48:42.900 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:48:42 compute-1 nova_compute[225855]: 2026-01-20 14:48:42.900 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb93181ae-8a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:48:42 compute-1 nova_compute[225855]: 2026-01-20 14:48:42.902 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:48:42 compute-1 nova_compute[225855]: 2026-01-20 14:48:42.903 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:48:42 compute-1 nova_compute[225855]: 2026-01-20 14:48:42.906 225859 INFO os_vif [None req-dd3dec03-bb58-43fc-bfa9-07351fe03dc3 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:cf:63:44,bridge_name='br-int',has_traffic_filtering=True,id=b93181ae-8a01-468c-adfc-ec8894512d2e,network=Network(a19e9d1a-864f-41ee-bdea-188e65973ea5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb93181ae-8a')
Jan 20 14:48:42 compute-1 nova_compute[225855]: 2026-01-20 14:48:42.975 225859 DEBUG nova.compute.manager [req-cde4c0fd-4fe7-4fb1-9cf5-7ed3c872c46d req-fe7fa7a2-bef1-4986-b2e3-1901ee724735 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 7efaa6b8-d1bd-4954-83ec-adcdb8e392bf] Received event network-vif-unplugged-b93181ae-8a01-468c-adfc-ec8894512d2e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:48:42 compute-1 nova_compute[225855]: 2026-01-20 14:48:42.975 225859 DEBUG oslo_concurrency.lockutils [req-cde4c0fd-4fe7-4fb1-9cf5-7ed3c872c46d req-fe7fa7a2-bef1-4986-b2e3-1901ee724735 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "7efaa6b8-d1bd-4954-83ec-adcdb8e392bf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:48:42 compute-1 nova_compute[225855]: 2026-01-20 14:48:42.975 225859 DEBUG oslo_concurrency.lockutils [req-cde4c0fd-4fe7-4fb1-9cf5-7ed3c872c46d req-fe7fa7a2-bef1-4986-b2e3-1901ee724735 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "7efaa6b8-d1bd-4954-83ec-adcdb8e392bf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:48:42 compute-1 nova_compute[225855]: 2026-01-20 14:48:42.976 225859 DEBUG oslo_concurrency.lockutils [req-cde4c0fd-4fe7-4fb1-9cf5-7ed3c872c46d req-fe7fa7a2-bef1-4986-b2e3-1901ee724735 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "7efaa6b8-d1bd-4954-83ec-adcdb8e392bf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:48:42 compute-1 nova_compute[225855]: 2026-01-20 14:48:42.976 225859 DEBUG nova.compute.manager [req-cde4c0fd-4fe7-4fb1-9cf5-7ed3c872c46d req-fe7fa7a2-bef1-4986-b2e3-1901ee724735 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 7efaa6b8-d1bd-4954-83ec-adcdb8e392bf] No waiting events found dispatching network-vif-unplugged-b93181ae-8a01-468c-adfc-ec8894512d2e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:48:42 compute-1 nova_compute[225855]: 2026-01-20 14:48:42.976 225859 DEBUG nova.compute.manager [req-cde4c0fd-4fe7-4fb1-9cf5-7ed3c872c46d req-fe7fa7a2-bef1-4986-b2e3-1901ee724735 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 7efaa6b8-d1bd-4954-83ec-adcdb8e392bf] Received event network-vif-unplugged-b93181ae-8a01-468c-adfc-ec8894512d2e for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 20 14:48:43 compute-1 ceph-mon[81775]: pgmap v1819: 321 pgs: 321 active+clean; 409 MiB data, 992 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 154 op/s
Jan 20 14:48:43 compute-1 nova_compute[225855]: 2026-01-20 14:48:43.274 225859 INFO nova.virt.libvirt.driver [None req-dd3dec03-bb58-43fc-bfa9-07351fe03dc3 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] [instance: 7efaa6b8-d1bd-4954-83ec-adcdb8e392bf] Deleting instance files /var/lib/nova/instances/7efaa6b8-d1bd-4954-83ec-adcdb8e392bf_del
Jan 20 14:48:43 compute-1 nova_compute[225855]: 2026-01-20 14:48:43.275 225859 INFO nova.virt.libvirt.driver [None req-dd3dec03-bb58-43fc-bfa9-07351fe03dc3 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] [instance: 7efaa6b8-d1bd-4954-83ec-adcdb8e392bf] Deletion of /var/lib/nova/instances/7efaa6b8-d1bd-4954-83ec-adcdb8e392bf_del complete
Jan 20 14:48:43 compute-1 nova_compute[225855]: 2026-01-20 14:48:43.353 225859 INFO nova.compute.manager [None req-dd3dec03-bb58-43fc-bfa9-07351fe03dc3 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] [instance: 7efaa6b8-d1bd-4954-83ec-adcdb8e392bf] Took 0.71 seconds to destroy the instance on the hypervisor.
Jan 20 14:48:43 compute-1 nova_compute[225855]: 2026-01-20 14:48:43.353 225859 DEBUG oslo.service.loopingcall [None req-dd3dec03-bb58-43fc-bfa9-07351fe03dc3 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 20 14:48:43 compute-1 nova_compute[225855]: 2026-01-20 14:48:43.353 225859 DEBUG nova.compute.manager [-] [instance: 7efaa6b8-d1bd-4954-83ec-adcdb8e392bf] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 20 14:48:43 compute-1 nova_compute[225855]: 2026-01-20 14:48:43.353 225859 DEBUG nova.network.neutron [-] [instance: 7efaa6b8-d1bd-4954-83ec-adcdb8e392bf] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 20 14:48:43 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:48:43 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:48:43 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:48:43.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:48:44 compute-1 podman[264270]: 2026-01-20 14:48:44.030973978 +0000 UTC m=+0.064931215 container health_status 533cd8e0c3b8b7f910f15cef49e7f879ab46c12ce646a35628c15f13725eb500 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 20 14:48:44 compute-1 nova_compute[225855]: 2026-01-20 14:48:44.335 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:48:44 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:48:44 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:48:44 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:48:44.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:48:45 compute-1 nova_compute[225855]: 2026-01-20 14:48:45.043 225859 DEBUG nova.compute.manager [req-2e1deb02-3da8-4fe0-a4bc-1d10c3f9ec01 req-61da54de-835e-4ae7-8081-4457a5260c2a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 75736b87-b14e-45b7-b43b-5129cf7d3279] Received event network-vif-unplugged-d3a9a684-c9a7-4abc-a085-9dcd17bfc2e6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:48:45 compute-1 nova_compute[225855]: 2026-01-20 14:48:45.043 225859 DEBUG oslo_concurrency.lockutils [req-2e1deb02-3da8-4fe0-a4bc-1d10c3f9ec01 req-61da54de-835e-4ae7-8081-4457a5260c2a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "75736b87-b14e-45b7-b43b-5129cf7d3279-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:48:45 compute-1 nova_compute[225855]: 2026-01-20 14:48:45.043 225859 DEBUG oslo_concurrency.lockutils [req-2e1deb02-3da8-4fe0-a4bc-1d10c3f9ec01 req-61da54de-835e-4ae7-8081-4457a5260c2a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "75736b87-b14e-45b7-b43b-5129cf7d3279-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:48:45 compute-1 nova_compute[225855]: 2026-01-20 14:48:45.043 225859 DEBUG oslo_concurrency.lockutils [req-2e1deb02-3da8-4fe0-a4bc-1d10c3f9ec01 req-61da54de-835e-4ae7-8081-4457a5260c2a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "75736b87-b14e-45b7-b43b-5129cf7d3279-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:48:45 compute-1 nova_compute[225855]: 2026-01-20 14:48:45.044 225859 DEBUG nova.compute.manager [req-2e1deb02-3da8-4fe0-a4bc-1d10c3f9ec01 req-61da54de-835e-4ae7-8081-4457a5260c2a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 75736b87-b14e-45b7-b43b-5129cf7d3279] No waiting events found dispatching network-vif-unplugged-d3a9a684-c9a7-4abc-a085-9dcd17bfc2e6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:48:45 compute-1 nova_compute[225855]: 2026-01-20 14:48:45.044 225859 WARNING nova.compute.manager [req-2e1deb02-3da8-4fe0-a4bc-1d10c3f9ec01 req-61da54de-835e-4ae7-8081-4457a5260c2a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 75736b87-b14e-45b7-b43b-5129cf7d3279] Received unexpected event network-vif-unplugged-d3a9a684-c9a7-4abc-a085-9dcd17bfc2e6 for instance with vm_state active and task_state resize_migrating.
Jan 20 14:48:45 compute-1 nova_compute[225855]: 2026-01-20 14:48:45.069 225859 DEBUG nova.compute.manager [req-6197acc9-1c76-4bc6-8961-3bf11bcf4ba1 req-a13cf42a-2b94-42bc-a876-3aa115131942 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 7efaa6b8-d1bd-4954-83ec-adcdb8e392bf] Received event network-vif-plugged-b93181ae-8a01-468c-adfc-ec8894512d2e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:48:45 compute-1 nova_compute[225855]: 2026-01-20 14:48:45.070 225859 DEBUG oslo_concurrency.lockutils [req-6197acc9-1c76-4bc6-8961-3bf11bcf4ba1 req-a13cf42a-2b94-42bc-a876-3aa115131942 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "7efaa6b8-d1bd-4954-83ec-adcdb8e392bf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:48:45 compute-1 nova_compute[225855]: 2026-01-20 14:48:45.070 225859 DEBUG oslo_concurrency.lockutils [req-6197acc9-1c76-4bc6-8961-3bf11bcf4ba1 req-a13cf42a-2b94-42bc-a876-3aa115131942 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "7efaa6b8-d1bd-4954-83ec-adcdb8e392bf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:48:45 compute-1 nova_compute[225855]: 2026-01-20 14:48:45.070 225859 DEBUG oslo_concurrency.lockutils [req-6197acc9-1c76-4bc6-8961-3bf11bcf4ba1 req-a13cf42a-2b94-42bc-a876-3aa115131942 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "7efaa6b8-d1bd-4954-83ec-adcdb8e392bf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:48:45 compute-1 nova_compute[225855]: 2026-01-20 14:48:45.070 225859 DEBUG nova.compute.manager [req-6197acc9-1c76-4bc6-8961-3bf11bcf4ba1 req-a13cf42a-2b94-42bc-a876-3aa115131942 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 7efaa6b8-d1bd-4954-83ec-adcdb8e392bf] No waiting events found dispatching network-vif-plugged-b93181ae-8a01-468c-adfc-ec8894512d2e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:48:45 compute-1 nova_compute[225855]: 2026-01-20 14:48:45.070 225859 WARNING nova.compute.manager [req-6197acc9-1c76-4bc6-8961-3bf11bcf4ba1 req-a13cf42a-2b94-42bc-a876-3aa115131942 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 7efaa6b8-d1bd-4954-83ec-adcdb8e392bf] Received unexpected event network-vif-plugged-b93181ae-8a01-468c-adfc-ec8894512d2e for instance with vm_state active and task_state deleting.
Jan 20 14:48:45 compute-1 ceph-mon[81775]: pgmap v1820: 321 pgs: 321 active+clean; 382 MiB data, 979 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 1.8 MiB/s wr, 96 op/s
Jan 20 14:48:45 compute-1 nova_compute[225855]: 2026-01-20 14:48:45.465 225859 DEBUG nova.network.neutron [-] [instance: 7efaa6b8-d1bd-4954-83ec-adcdb8e392bf] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:48:45 compute-1 nova_compute[225855]: 2026-01-20 14:48:45.498 225859 INFO nova.compute.manager [-] [instance: 7efaa6b8-d1bd-4954-83ec-adcdb8e392bf] Took 2.14 seconds to deallocate network for instance.
Jan 20 14:48:45 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:48:45 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:48:45 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:48:45.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:48:45 compute-1 nova_compute[225855]: 2026-01-20 14:48:45.570 225859 DEBUG oslo_concurrency.lockutils [None req-dd3dec03-bb58-43fc-bfa9-07351fe03dc3 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:48:45 compute-1 nova_compute[225855]: 2026-01-20 14:48:45.570 225859 DEBUG oslo_concurrency.lockutils [None req-dd3dec03-bb58-43fc-bfa9-07351fe03dc3 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:48:45 compute-1 nova_compute[225855]: 2026-01-20 14:48:45.691 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:48:45 compute-1 nova_compute[225855]: 2026-01-20 14:48:45.703 225859 DEBUG oslo_concurrency.processutils [None req-dd3dec03-bb58-43fc-bfa9-07351fe03dc3 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:48:45 compute-1 nova_compute[225855]: 2026-01-20 14:48:45.929 225859 INFO nova.network.neutron [None req-4796ccce-2601-44d6-9482-2234d4c7191f 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] [instance: 75736b87-b14e-45b7-b43b-5129cf7d3279] Updating port d3a9a684-c9a7-4abc-a085-9dcd17bfc2e6 with attributes {'binding:host_id': 'compute-1.ctlplane.example.com', 'device_owner': 'compute:nova'}
Jan 20 14:48:46 compute-1 nova_compute[225855]: 2026-01-20 14:48:46.182 225859 DEBUG oslo_concurrency.processutils [None req-dd3dec03-bb58-43fc-bfa9-07351fe03dc3 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:48:46 compute-1 nova_compute[225855]: 2026-01-20 14:48:46.189 225859 DEBUG nova.compute.provider_tree [None req-dd3dec03-bb58-43fc-bfa9-07351fe03dc3 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:48:46 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/2739244399' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:48:46 compute-1 nova_compute[225855]: 2026-01-20 14:48:46.213 225859 DEBUG nova.scheduler.client.report [None req-dd3dec03-bb58-43fc-bfa9-07351fe03dc3 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:48:46 compute-1 nova_compute[225855]: 2026-01-20 14:48:46.242 225859 DEBUG oslo_concurrency.lockutils [None req-dd3dec03-bb58-43fc-bfa9-07351fe03dc3 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.671s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:48:46 compute-1 nova_compute[225855]: 2026-01-20 14:48:46.298 225859 INFO nova.scheduler.client.report [None req-dd3dec03-bb58-43fc-bfa9-07351fe03dc3 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Deleted allocations for instance 7efaa6b8-d1bd-4954-83ec-adcdb8e392bf
Jan 20 14:48:46 compute-1 nova_compute[225855]: 2026-01-20 14:48:46.476 225859 DEBUG oslo_concurrency.lockutils [None req-dd3dec03-bb58-43fc-bfa9-07351fe03dc3 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Lock "7efaa6b8-d1bd-4954-83ec-adcdb8e392bf" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.841s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:48:46 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:48:46 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:48:46 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:48:46.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:48:47 compute-1 nova_compute[225855]: 2026-01-20 14:48:47.151 225859 DEBUG oslo_concurrency.lockutils [None req-4796ccce-2601-44d6-9482-2234d4c7191f 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Acquiring lock "refresh_cache-75736b87-b14e-45b7-b43b-5129cf7d3279" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:48:47 compute-1 nova_compute[225855]: 2026-01-20 14:48:47.152 225859 DEBUG oslo_concurrency.lockutils [None req-4796ccce-2601-44d6-9482-2234d4c7191f 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Acquired lock "refresh_cache-75736b87-b14e-45b7-b43b-5129cf7d3279" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:48:47 compute-1 nova_compute[225855]: 2026-01-20 14:48:47.152 225859 DEBUG nova.network.neutron [None req-4796ccce-2601-44d6-9482-2234d4c7191f 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] [instance: 75736b87-b14e-45b7-b43b-5129cf7d3279] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 14:48:47 compute-1 ceph-mon[81775]: pgmap v1821: 321 pgs: 321 active+clean; 274 MiB data, 926 MiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 1.8 MiB/s wr, 126 op/s
Jan 20 14:48:47 compute-1 nova_compute[225855]: 2026-01-20 14:48:47.291 225859 DEBUG nova.compute.manager [req-cda4b8e7-6845-4233-8241-8c88c481ce34 req-0d70c841-8e77-47f9-98b3-5d28834685d6 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 7efaa6b8-d1bd-4954-83ec-adcdb8e392bf] Received event network-vif-deleted-b93181ae-8a01-468c-adfc-ec8894512d2e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:48:47 compute-1 nova_compute[225855]: 2026-01-20 14:48:47.325 225859 DEBUG nova.compute.manager [req-3a41699a-0367-4973-ac53-161ff8f9a117 req-226c3ec0-7a42-441c-bda7-ac430516d7d7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 75736b87-b14e-45b7-b43b-5129cf7d3279] Received event network-vif-plugged-d3a9a684-c9a7-4abc-a085-9dcd17bfc2e6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:48:47 compute-1 nova_compute[225855]: 2026-01-20 14:48:47.326 225859 DEBUG oslo_concurrency.lockutils [req-3a41699a-0367-4973-ac53-161ff8f9a117 req-226c3ec0-7a42-441c-bda7-ac430516d7d7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "75736b87-b14e-45b7-b43b-5129cf7d3279-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:48:47 compute-1 nova_compute[225855]: 2026-01-20 14:48:47.326 225859 DEBUG oslo_concurrency.lockutils [req-3a41699a-0367-4973-ac53-161ff8f9a117 req-226c3ec0-7a42-441c-bda7-ac430516d7d7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "75736b87-b14e-45b7-b43b-5129cf7d3279-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:48:47 compute-1 nova_compute[225855]: 2026-01-20 14:48:47.327 225859 DEBUG oslo_concurrency.lockutils [req-3a41699a-0367-4973-ac53-161ff8f9a117 req-226c3ec0-7a42-441c-bda7-ac430516d7d7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "75736b87-b14e-45b7-b43b-5129cf7d3279-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:48:47 compute-1 nova_compute[225855]: 2026-01-20 14:48:47.327 225859 DEBUG nova.compute.manager [req-3a41699a-0367-4973-ac53-161ff8f9a117 req-226c3ec0-7a42-441c-bda7-ac430516d7d7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 75736b87-b14e-45b7-b43b-5129cf7d3279] No waiting events found dispatching network-vif-plugged-d3a9a684-c9a7-4abc-a085-9dcd17bfc2e6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:48:47 compute-1 nova_compute[225855]: 2026-01-20 14:48:47.327 225859 WARNING nova.compute.manager [req-3a41699a-0367-4973-ac53-161ff8f9a117 req-226c3ec0-7a42-441c-bda7-ac430516d7d7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 75736b87-b14e-45b7-b43b-5129cf7d3279] Received unexpected event network-vif-plugged-d3a9a684-c9a7-4abc-a085-9dcd17bfc2e6 for instance with vm_state active and task_state resize_migrated.
Jan 20 14:48:47 compute-1 nova_compute[225855]: 2026-01-20 14:48:47.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:48:47 compute-1 nova_compute[225855]: 2026-01-20 14:48:47.340 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 20 14:48:47 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e241 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:48:47 compute-1 nova_compute[225855]: 2026-01-20 14:48:47.398 225859 DEBUG nova.compute.manager [req-c7e8339e-e267-4fef-95d8-eea084a60576 req-e982c00d-bc4f-4669-a1f6-4f7f2fd55d8b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 75736b87-b14e-45b7-b43b-5129cf7d3279] Received event network-changed-d3a9a684-c9a7-4abc-a085-9dcd17bfc2e6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:48:47 compute-1 nova_compute[225855]: 2026-01-20 14:48:47.399 225859 DEBUG nova.compute.manager [req-c7e8339e-e267-4fef-95d8-eea084a60576 req-e982c00d-bc4f-4669-a1f6-4f7f2fd55d8b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 75736b87-b14e-45b7-b43b-5129cf7d3279] Refreshing instance network info cache due to event network-changed-d3a9a684-c9a7-4abc-a085-9dcd17bfc2e6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 14:48:47 compute-1 nova_compute[225855]: 2026-01-20 14:48:47.399 225859 DEBUG oslo_concurrency.lockutils [req-c7e8339e-e267-4fef-95d8-eea084a60576 req-e982c00d-bc4f-4669-a1f6-4f7f2fd55d8b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-75736b87-b14e-45b7-b43b-5129cf7d3279" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:48:47 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:48:47 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:48:47 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:48:47.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:48:47 compute-1 nova_compute[225855]: 2026-01-20 14:48:47.901 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:48:48 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:48:48 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:48:48 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:48:48.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:48:48 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 20 14:48:48 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1043995877' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 14:48:48 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 20 14:48:48 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1043995877' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 14:48:49 compute-1 nova_compute[225855]: 2026-01-20 14:48:49.003 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:48:49 compute-1 sudo[264313]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:48:49 compute-1 sudo[264313]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:48:49 compute-1 sudo[264313]: pam_unix(sudo:session): session closed for user root
Jan 20 14:48:49 compute-1 sudo[264338]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:48:49 compute-1 sudo[264338]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:48:49 compute-1 sudo[264338]: pam_unix(sudo:session): session closed for user root
Jan 20 14:48:49 compute-1 ceph-mon[81775]: pgmap v1822: 321 pgs: 321 active+clean; 258 MiB data, 909 MiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 2.2 MiB/s wr, 160 op/s
Jan 20 14:48:49 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/1043995877' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 14:48:49 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/1043995877' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 14:48:49 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:48:49 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:48:49 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:48:49.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:48:49 compute-1 nova_compute[225855]: 2026-01-20 14:48:49.646 225859 DEBUG oslo_concurrency.lockutils [None req-76dd6255-29d1-4c9b-ac8c-fcebbd3c6878 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Acquiring lock "6586bc3e-3a94-4d22-8e8c-713a86a956fb" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:48:49 compute-1 nova_compute[225855]: 2026-01-20 14:48:49.647 225859 DEBUG oslo_concurrency.lockutils [None req-76dd6255-29d1-4c9b-ac8c-fcebbd3c6878 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Lock "6586bc3e-3a94-4d22-8e8c-713a86a956fb" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:48:49 compute-1 nova_compute[225855]: 2026-01-20 14:48:49.647 225859 DEBUG oslo_concurrency.lockutils [None req-76dd6255-29d1-4c9b-ac8c-fcebbd3c6878 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Acquiring lock "6586bc3e-3a94-4d22-8e8c-713a86a956fb-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:48:49 compute-1 nova_compute[225855]: 2026-01-20 14:48:49.648 225859 DEBUG oslo_concurrency.lockutils [None req-76dd6255-29d1-4c9b-ac8c-fcebbd3c6878 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Lock "6586bc3e-3a94-4d22-8e8c-713a86a956fb-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:48:49 compute-1 nova_compute[225855]: 2026-01-20 14:48:49.648 225859 DEBUG oslo_concurrency.lockutils [None req-76dd6255-29d1-4c9b-ac8c-fcebbd3c6878 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Lock "6586bc3e-3a94-4d22-8e8c-713a86a956fb-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:48:49 compute-1 nova_compute[225855]: 2026-01-20 14:48:49.649 225859 INFO nova.compute.manager [None req-76dd6255-29d1-4c9b-ac8c-fcebbd3c6878 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] [instance: 6586bc3e-3a94-4d22-8e8c-713a86a956fb] Terminating instance
Jan 20 14:48:49 compute-1 nova_compute[225855]: 2026-01-20 14:48:49.650 225859 DEBUG nova.compute.manager [None req-76dd6255-29d1-4c9b-ac8c-fcebbd3c6878 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] [instance: 6586bc3e-3a94-4d22-8e8c-713a86a956fb] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 20 14:48:49 compute-1 kernel: tap2c289e6f-29 (unregistering): left promiscuous mode
Jan 20 14:48:49 compute-1 NetworkManager[49104]: <info>  [1768920529.7127] device (tap2c289e6f-29): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 20 14:48:49 compute-1 ovn_controller[130490]: 2026-01-20T14:48:49Z|00374|binding|INFO|Releasing lport 2c289e6f-295e-44c3-948a-9a6901251890 from this chassis (sb_readonly=0)
Jan 20 14:48:49 compute-1 ovn_controller[130490]: 2026-01-20T14:48:49Z|00375|binding|INFO|Setting lport 2c289e6f-295e-44c3-948a-9a6901251890 down in Southbound
Jan 20 14:48:49 compute-1 ovn_controller[130490]: 2026-01-20T14:48:49Z|00376|binding|INFO|Removing iface tap2c289e6f-29 ovn-installed in OVS
Jan 20 14:48:49 compute-1 nova_compute[225855]: 2026-01-20 14:48:49.721 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:48:49 compute-1 nova_compute[225855]: 2026-01-20 14:48:49.748 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:48:49 compute-1 systemd[1]: machine-qemu\x2d36\x2dinstance\x2d00000057.scope: Deactivated successfully.
Jan 20 14:48:49 compute-1 systemd[1]: machine-qemu\x2d36\x2dinstance\x2d00000057.scope: Consumed 24.485s CPU time.
Jan 20 14:48:49 compute-1 systemd-machined[194361]: Machine qemu-36-instance-00000057 terminated.
Jan 20 14:48:49 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:48:49.769 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2f:4c:e2 10.100.0.9'], port_security=['fa:16:3e:2f:4c:e2 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '6586bc3e-3a94-4d22-8e8c-713a86a956fb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a19e9d1a-864f-41ee-bdea-188e65973ea5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b683fcc0026242e28ba6d8fba638688e', 'neutron:revision_number': '4', 'neutron:security_group_ids': '7ceb05b5-53ff-444a-b0ef-2ba8294d585b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=361f9a69-30a6-4be4-89ad-2a8f92877af2, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], logical_port=2c289e6f-295e-44c3-948a-9a6901251890) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:48:49 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:48:49.771 140354 INFO neutron.agent.ovn.metadata.agent [-] Port 2c289e6f-295e-44c3-948a-9a6901251890 in datapath a19e9d1a-864f-41ee-bdea-188e65973ea5 unbound from our chassis
Jan 20 14:48:49 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:48:49.772 140354 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a19e9d1a-864f-41ee-bdea-188e65973ea5, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 20 14:48:49 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:48:49.773 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[8eecead7-49ee-4b1f-a71f-f64143abdd8f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:48:49 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:48:49.774 140354 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-a19e9d1a-864f-41ee-bdea-188e65973ea5 namespace which is not needed anymore
Jan 20 14:48:49 compute-1 nova_compute[225855]: 2026-01-20 14:48:49.875 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:48:49 compute-1 nova_compute[225855]: 2026-01-20 14:48:49.880 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:48:49 compute-1 nova_compute[225855]: 2026-01-20 14:48:49.892 225859 INFO nova.virt.libvirt.driver [-] [instance: 6586bc3e-3a94-4d22-8e8c-713a86a956fb] Instance destroyed successfully.
Jan 20 14:48:49 compute-1 nova_compute[225855]: 2026-01-20 14:48:49.893 225859 DEBUG nova.objects.instance [None req-76dd6255-29d1-4c9b-ac8c-fcebbd3c6878 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Lazy-loading 'resources' on Instance uuid 6586bc3e-3a94-4d22-8e8c-713a86a956fb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:48:49 compute-1 neutron-haproxy-ovnmeta-a19e9d1a-864f-41ee-bdea-188e65973ea5[258963]: [NOTICE]   (258968) : haproxy version is 2.8.14-c23fe91
Jan 20 14:48:49 compute-1 neutron-haproxy-ovnmeta-a19e9d1a-864f-41ee-bdea-188e65973ea5[258963]: [NOTICE]   (258968) : path to executable is /usr/sbin/haproxy
Jan 20 14:48:49 compute-1 neutron-haproxy-ovnmeta-a19e9d1a-864f-41ee-bdea-188e65973ea5[258963]: [WARNING]  (258968) : Exiting Master process...
Jan 20 14:48:49 compute-1 nova_compute[225855]: 2026-01-20 14:48:49.916 225859 DEBUG nova.virt.libvirt.vif [None req-76dd6255-29d1-4c9b-ac8c-fcebbd3c6878 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T14:44:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherA-server-1533521351',display_name='tempest-ServerActionsTestOtherA-server-1533521351',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveractionstestothera-server-1533521351',id=87,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDOC6AOV9rhSIyyfBXYGEFvhdWE5GLRuNfs0sPvnXoLHIstQY2OpqwhI35UcFW1s96uqz0+j9sMbdcq/NuNcfgrlnXkEz6j2iO7WUECWdPrQW34JB1FTWAvtA4R6RDoaZA==',key_name='tempest-keypair-1611966828',keypairs=<?>,launch_index=0,launched_at=2026-01-20T14:44:12Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b683fcc0026242e28ba6d8fba638688e',ramdisk_id='',reservation_id='r-47bmn591',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherA-967087071',owner_user_name='tempest-ServerActionsTestOtherA-967087071-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T14:44:12Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='869086208e10436c9dc96c78bee9a85d',uuid=6586bc3e-3a94-4d22-8e8c-713a86a956fb,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2c289e6f-295e-44c3-948a-9a6901251890", "address": "fa:16:3e:2f:4c:e2", "network": {"id": "a19e9d1a-864f-41ee-bdea-188e65973ea5", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-916311998-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b683fcc0026242e28ba6d8fba638688e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c289e6f-29", "ovs_interfaceid": "2c289e6f-295e-44c3-948a-9a6901251890", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 20 14:48:49 compute-1 nova_compute[225855]: 2026-01-20 14:48:49.917 225859 DEBUG nova.network.os_vif_util [None req-76dd6255-29d1-4c9b-ac8c-fcebbd3c6878 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Converting VIF {"id": "2c289e6f-295e-44c3-948a-9a6901251890", "address": "fa:16:3e:2f:4c:e2", "network": {"id": "a19e9d1a-864f-41ee-bdea-188e65973ea5", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-916311998-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b683fcc0026242e28ba6d8fba638688e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c289e6f-29", "ovs_interfaceid": "2c289e6f-295e-44c3-948a-9a6901251890", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:48:49 compute-1 neutron-haproxy-ovnmeta-a19e9d1a-864f-41ee-bdea-188e65973ea5[258963]: [ALERT]    (258968) : Current worker (258970) exited with code 143 (Terminated)
Jan 20 14:48:49 compute-1 neutron-haproxy-ovnmeta-a19e9d1a-864f-41ee-bdea-188e65973ea5[258963]: [WARNING]  (258968) : All workers exited. Exiting... (0)
Jan 20 14:48:49 compute-1 systemd[1]: libpod-c79270ac4f85aee516fb775db6a3d75d0fa33264a9044c47814a555bf5216ad8.scope: Deactivated successfully.
Jan 20 14:48:49 compute-1 nova_compute[225855]: 2026-01-20 14:48:49.917 225859 DEBUG nova.network.os_vif_util [None req-76dd6255-29d1-4c9b-ac8c-fcebbd3c6878 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:2f:4c:e2,bridge_name='br-int',has_traffic_filtering=True,id=2c289e6f-295e-44c3-948a-9a6901251890,network=Network(a19e9d1a-864f-41ee-bdea-188e65973ea5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2c289e6f-29') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:48:49 compute-1 nova_compute[225855]: 2026-01-20 14:48:49.918 225859 DEBUG os_vif [None req-76dd6255-29d1-4c9b-ac8c-fcebbd3c6878 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:2f:4c:e2,bridge_name='br-int',has_traffic_filtering=True,id=2c289e6f-295e-44c3-948a-9a6901251890,network=Network(a19e9d1a-864f-41ee-bdea-188e65973ea5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2c289e6f-29') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 20 14:48:49 compute-1 nova_compute[225855]: 2026-01-20 14:48:49.920 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:48:49 compute-1 nova_compute[225855]: 2026-01-20 14:48:49.920 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2c289e6f-29, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:48:49 compute-1 nova_compute[225855]: 2026-01-20 14:48:49.922 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:48:49 compute-1 nova_compute[225855]: 2026-01-20 14:48:49.924 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:48:49 compute-1 podman[264388]: 2026-01-20 14:48:49.925199748 +0000 UTC m=+0.051361620 container died c79270ac4f85aee516fb775db6a3d75d0fa33264a9044c47814a555bf5216ad8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a19e9d1a-864f-41ee-bdea-188e65973ea5, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 20 14:48:49 compute-1 nova_compute[225855]: 2026-01-20 14:48:49.926 225859 INFO os_vif [None req-76dd6255-29d1-4c9b-ac8c-fcebbd3c6878 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:2f:4c:e2,bridge_name='br-int',has_traffic_filtering=True,id=2c289e6f-295e-44c3-948a-9a6901251890,network=Network(a19e9d1a-864f-41ee-bdea-188e65973ea5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2c289e6f-29')
Jan 20 14:48:49 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c79270ac4f85aee516fb775db6a3d75d0fa33264a9044c47814a555bf5216ad8-userdata-shm.mount: Deactivated successfully.
Jan 20 14:48:49 compute-1 systemd[1]: var-lib-containers-storage-overlay-b6713c84d9e47876707c1459896eab206f8324b8157aad0c912932cf9613e3a8-merged.mount: Deactivated successfully.
Jan 20 14:48:49 compute-1 podman[264388]: 2026-01-20 14:48:49.984194144 +0000 UTC m=+0.110356016 container cleanup c79270ac4f85aee516fb775db6a3d75d0fa33264a9044c47814a555bf5216ad8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a19e9d1a-864f-41ee-bdea-188e65973ea5, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Jan 20 14:48:49 compute-1 systemd[1]: libpod-conmon-c79270ac4f85aee516fb775db6a3d75d0fa33264a9044c47814a555bf5216ad8.scope: Deactivated successfully.
Jan 20 14:48:50 compute-1 podman[264444]: 2026-01-20 14:48:50.047281625 +0000 UTC m=+0.041997797 container remove c79270ac4f85aee516fb775db6a3d75d0fa33264a9044c47814a555bf5216ad8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a19e9d1a-864f-41ee-bdea-188e65973ea5, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 20 14:48:50 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:48:50.052 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[be14569f-29cb-4ac0-8dc4-586302af2c5e]: (4, ('Tue Jan 20 02:48:49 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-a19e9d1a-864f-41ee-bdea-188e65973ea5 (c79270ac4f85aee516fb775db6a3d75d0fa33264a9044c47814a555bf5216ad8)\nc79270ac4f85aee516fb775db6a3d75d0fa33264a9044c47814a555bf5216ad8\nTue Jan 20 02:48:49 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-a19e9d1a-864f-41ee-bdea-188e65973ea5 (c79270ac4f85aee516fb775db6a3d75d0fa33264a9044c47814a555bf5216ad8)\nc79270ac4f85aee516fb775db6a3d75d0fa33264a9044c47814a555bf5216ad8\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:48:50 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:48:50.054 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[c8dcfb51-f2cb-42b2-9162-1807220be49a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:48:50 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:48:50.055 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa19e9d1a-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:48:50 compute-1 nova_compute[225855]: 2026-01-20 14:48:50.057 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:48:50 compute-1 kernel: tapa19e9d1a-80: left promiscuous mode
Jan 20 14:48:50 compute-1 nova_compute[225855]: 2026-01-20 14:48:50.072 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:48:50 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:48:50.074 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[1f8d162a-54fa-4431-acb6-232fb80df2ad]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:48:50 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:48:50.090 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[9c24a6bf-dd8d-4a2f-b734-5f3f5ded4394]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:48:50 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:48:50.091 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[6f7c45b1-0979-4e2d-95d6-0f2b83ae1ae4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:48:50 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:48:50.105 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[253bfee5-e4de-47da-92ca-26ad48c91a56]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 529083, 'reachable_time': 37181, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 264459, 'error': None, 'target': 'ovnmeta-a19e9d1a-864f-41ee-bdea-188e65973ea5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:48:50 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:48:50.107 140466 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-a19e9d1a-864f-41ee-bdea-188e65973ea5 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 20 14:48:50 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:48:50.107 140466 DEBUG oslo.privsep.daemon [-] privsep: reply[1140f12b-3011-46e4-ac39-17585bfbc955]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:48:50 compute-1 systemd[1]: run-netns-ovnmeta\x2da19e9d1a\x2d864f\x2d41ee\x2dbdea\x2d188e65973ea5.mount: Deactivated successfully.
Jan 20 14:48:50 compute-1 nova_compute[225855]: 2026-01-20 14:48:50.149 225859 DEBUG nova.compute.manager [req-69bd00ab-0273-4c70-8bf8-2334e1571b10 req-b1223372-3acc-4082-8aea-32a9527d38f1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 6586bc3e-3a94-4d22-8e8c-713a86a956fb] Received event network-vif-unplugged-2c289e6f-295e-44c3-948a-9a6901251890 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:48:50 compute-1 nova_compute[225855]: 2026-01-20 14:48:50.149 225859 DEBUG oslo_concurrency.lockutils [req-69bd00ab-0273-4c70-8bf8-2334e1571b10 req-b1223372-3acc-4082-8aea-32a9527d38f1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "6586bc3e-3a94-4d22-8e8c-713a86a956fb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:48:50 compute-1 nova_compute[225855]: 2026-01-20 14:48:50.149 225859 DEBUG oslo_concurrency.lockutils [req-69bd00ab-0273-4c70-8bf8-2334e1571b10 req-b1223372-3acc-4082-8aea-32a9527d38f1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "6586bc3e-3a94-4d22-8e8c-713a86a956fb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:48:50 compute-1 nova_compute[225855]: 2026-01-20 14:48:50.149 225859 DEBUG oslo_concurrency.lockutils [req-69bd00ab-0273-4c70-8bf8-2334e1571b10 req-b1223372-3acc-4082-8aea-32a9527d38f1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "6586bc3e-3a94-4d22-8e8c-713a86a956fb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:48:50 compute-1 nova_compute[225855]: 2026-01-20 14:48:50.150 225859 DEBUG nova.compute.manager [req-69bd00ab-0273-4c70-8bf8-2334e1571b10 req-b1223372-3acc-4082-8aea-32a9527d38f1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 6586bc3e-3a94-4d22-8e8c-713a86a956fb] No waiting events found dispatching network-vif-unplugged-2c289e6f-295e-44c3-948a-9a6901251890 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:48:50 compute-1 nova_compute[225855]: 2026-01-20 14:48:50.150 225859 DEBUG nova.compute.manager [req-69bd00ab-0273-4c70-8bf8-2334e1571b10 req-b1223372-3acc-4082-8aea-32a9527d38f1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 6586bc3e-3a94-4d22-8e8c-713a86a956fb] Received event network-vif-unplugged-2c289e6f-295e-44c3-948a-9a6901251890 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 20 14:48:50 compute-1 nova_compute[225855]: 2026-01-20 14:48:50.187 225859 DEBUG nova.network.neutron [None req-4796ccce-2601-44d6-9482-2234d4c7191f 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] [instance: 75736b87-b14e-45b7-b43b-5129cf7d3279] Updating instance_info_cache with network_info: [{"id": "d3a9a684-c9a7-4abc-a085-9dcd17bfc2e6", "address": "fa:16:3e:22:f9:d2", "network": {"id": "762e1859-4db4-4d9e-b66f-d50316f80df4", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1917526237-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.180", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c5f03d46c0c4162a3b2f1530850bb6c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd3a9a684-c9", "ovs_interfaceid": "d3a9a684-c9a7-4abc-a085-9dcd17bfc2e6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:48:50 compute-1 nova_compute[225855]: 2026-01-20 14:48:50.225 225859 DEBUG oslo_concurrency.lockutils [None req-4796ccce-2601-44d6-9482-2234d4c7191f 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Releasing lock "refresh_cache-75736b87-b14e-45b7-b43b-5129cf7d3279" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:48:50 compute-1 nova_compute[225855]: 2026-01-20 14:48:50.228 225859 DEBUG oslo_concurrency.lockutils [req-c7e8339e-e267-4fef-95d8-eea084a60576 req-e982c00d-bc4f-4669-a1f6-4f7f2fd55d8b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-75736b87-b14e-45b7-b43b-5129cf7d3279" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:48:50 compute-1 nova_compute[225855]: 2026-01-20 14:48:50.228 225859 DEBUG nova.network.neutron [req-c7e8339e-e267-4fef-95d8-eea084a60576 req-e982c00d-bc4f-4669-a1f6-4f7f2fd55d8b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 75736b87-b14e-45b7-b43b-5129cf7d3279] Refreshing network info cache for port d3a9a684-c9a7-4abc-a085-9dcd17bfc2e6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 14:48:50 compute-1 nova_compute[225855]: 2026-01-20 14:48:50.393 225859 INFO nova.virt.libvirt.driver [None req-76dd6255-29d1-4c9b-ac8c-fcebbd3c6878 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] [instance: 6586bc3e-3a94-4d22-8e8c-713a86a956fb] Deleting instance files /var/lib/nova/instances/6586bc3e-3a94-4d22-8e8c-713a86a956fb_del
Jan 20 14:48:50 compute-1 nova_compute[225855]: 2026-01-20 14:48:50.394 225859 INFO nova.virt.libvirt.driver [None req-76dd6255-29d1-4c9b-ac8c-fcebbd3c6878 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] [instance: 6586bc3e-3a94-4d22-8e8c-713a86a956fb] Deletion of /var/lib/nova/instances/6586bc3e-3a94-4d22-8e8c-713a86a956fb_del complete
Jan 20 14:48:50 compute-1 nova_compute[225855]: 2026-01-20 14:48:50.419 225859 DEBUG nova.virt.libvirt.driver [None req-4796ccce-2601-44d6-9482-2234d4c7191f 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] [instance: 75736b87-b14e-45b7-b43b-5129cf7d3279] Starting finish_migration finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11698
Jan 20 14:48:50 compute-1 nova_compute[225855]: 2026-01-20 14:48:50.422 225859 DEBUG nova.virt.libvirt.driver [None req-4796ccce-2601-44d6-9482-2234d4c7191f 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] [instance: 75736b87-b14e-45b7-b43b-5129cf7d3279] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719
Jan 20 14:48:50 compute-1 nova_compute[225855]: 2026-01-20 14:48:50.422 225859 INFO nova.virt.libvirt.driver [None req-4796ccce-2601-44d6-9482-2234d4c7191f 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] [instance: 75736b87-b14e-45b7-b43b-5129cf7d3279] Creating image(s)
Jan 20 14:48:50 compute-1 nova_compute[225855]: 2026-01-20 14:48:50.461 225859 DEBUG nova.storage.rbd_utils [None req-4796ccce-2601-44d6-9482-2234d4c7191f 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] creating snapshot(nova-resize) on rbd image(75736b87-b14e-45b7-b43b-5129cf7d3279_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 20 14:48:50 compute-1 nova_compute[225855]: 2026-01-20 14:48:50.497 225859 INFO nova.compute.manager [None req-76dd6255-29d1-4c9b-ac8c-fcebbd3c6878 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] [instance: 6586bc3e-3a94-4d22-8e8c-713a86a956fb] Took 0.85 seconds to destroy the instance on the hypervisor.
Jan 20 14:48:50 compute-1 nova_compute[225855]: 2026-01-20 14:48:50.498 225859 DEBUG oslo.service.loopingcall [None req-76dd6255-29d1-4c9b-ac8c-fcebbd3c6878 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 20 14:48:50 compute-1 nova_compute[225855]: 2026-01-20 14:48:50.498 225859 DEBUG nova.compute.manager [-] [instance: 6586bc3e-3a94-4d22-8e8c-713a86a956fb] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 20 14:48:50 compute-1 nova_compute[225855]: 2026-01-20 14:48:50.498 225859 DEBUG nova.network.neutron [-] [instance: 6586bc3e-3a94-4d22-8e8c-713a86a956fb] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 20 14:48:50 compute-1 nova_compute[225855]: 2026-01-20 14:48:50.693 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:48:50 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:48:50 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:48:50 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:48:50.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:48:51 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e242 e242: 3 total, 3 up, 3 in
Jan 20 14:48:51 compute-1 nova_compute[225855]: 2026-01-20 14:48:51.535 225859 DEBUG nova.network.neutron [-] [instance: 6586bc3e-3a94-4d22-8e8c-713a86a956fb] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:48:51 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:48:51 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:48:51 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:48:51.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:48:51 compute-1 nova_compute[225855]: 2026-01-20 14:48:51.576 225859 INFO nova.compute.manager [-] [instance: 6586bc3e-3a94-4d22-8e8c-713a86a956fb] Took 1.08 seconds to deallocate network for instance.
Jan 20 14:48:51 compute-1 nova_compute[225855]: 2026-01-20 14:48:51.657 225859 DEBUG nova.compute.manager [req-1385451d-2826-4c94-a2ef-20f16e39ab4d req-da878352-bd74-4018-9da6-51a9356ea8b6 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 6586bc3e-3a94-4d22-8e8c-713a86a956fb] Received event network-vif-deleted-2c289e6f-295e-44c3-948a-9a6901251890 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:48:51 compute-1 nova_compute[225855]: 2026-01-20 14:48:51.710 225859 DEBUG oslo_concurrency.lockutils [None req-76dd6255-29d1-4c9b-ac8c-fcebbd3c6878 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:48:51 compute-1 nova_compute[225855]: 2026-01-20 14:48:51.711 225859 DEBUG oslo_concurrency.lockutils [None req-76dd6255-29d1-4c9b-ac8c-fcebbd3c6878 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:48:51 compute-1 ceph-mon[81775]: pgmap v1823: 321 pgs: 321 active+clean; 273 MiB data, 909 MiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 2.1 MiB/s wr, 175 op/s
Jan 20 14:48:51 compute-1 nova_compute[225855]: 2026-01-20 14:48:51.835 225859 DEBUG oslo_concurrency.processutils [None req-76dd6255-29d1-4c9b-ac8c-fcebbd3c6878 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:48:51 compute-1 nova_compute[225855]: 2026-01-20 14:48:51.908 225859 DEBUG nova.objects.instance [None req-4796ccce-2601-44d6-9482-2234d4c7191f 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Lazy-loading 'trusted_certs' on Instance uuid 75736b87-b14e-45b7-b43b-5129cf7d3279 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:48:52 compute-1 nova_compute[225855]: 2026-01-20 14:48:52.053 225859 DEBUG nova.virt.libvirt.driver [None req-4796ccce-2601-44d6-9482-2234d4c7191f 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] [instance: 75736b87-b14e-45b7-b43b-5129cf7d3279] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Jan 20 14:48:52 compute-1 nova_compute[225855]: 2026-01-20 14:48:52.054 225859 DEBUG nova.virt.libvirt.driver [None req-4796ccce-2601-44d6-9482-2234d4c7191f 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] [instance: 75736b87-b14e-45b7-b43b-5129cf7d3279] Ensure instance console log exists: /var/lib/nova/instances/75736b87-b14e-45b7-b43b-5129cf7d3279/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 20 14:48:52 compute-1 nova_compute[225855]: 2026-01-20 14:48:52.054 225859 DEBUG oslo_concurrency.lockutils [None req-4796ccce-2601-44d6-9482-2234d4c7191f 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:48:52 compute-1 nova_compute[225855]: 2026-01-20 14:48:52.055 225859 DEBUG oslo_concurrency.lockutils [None req-4796ccce-2601-44d6-9482-2234d4c7191f 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:48:52 compute-1 nova_compute[225855]: 2026-01-20 14:48:52.055 225859 DEBUG oslo_concurrency.lockutils [None req-4796ccce-2601-44d6-9482-2234d4c7191f 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:48:52 compute-1 nova_compute[225855]: 2026-01-20 14:48:52.057 225859 DEBUG nova.virt.libvirt.driver [None req-4796ccce-2601-44d6-9482-2234d4c7191f 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] [instance: 75736b87-b14e-45b7-b43b-5129cf7d3279] Start _get_guest_xml network_info=[{"id": "d3a9a684-c9a7-4abc-a085-9dcd17bfc2e6", "address": "fa:16:3e:22:f9:d2", "network": {"id": "762e1859-4db4-4d9e-b66f-d50316f80df4", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1917526237-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.180", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestJSON-1917526237-network", "vif_mac": "fa:16:3e:22:f9:d2"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c5f03d46c0c4162a3b2f1530850bb6c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd3a9a684-c9", "ovs_interfaceid": "d3a9a684-c9a7-4abc-a085-9dcd17bfc2e6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'device_type': 'disk', 'encryption_options': None, 'size': 0, 'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'guest_format': None, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'image_id': 'a32b3e07-16d8-46fd-9a7b-c242c432fcf9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 20 14:48:52 compute-1 nova_compute[225855]: 2026-01-20 14:48:52.062 225859 WARNING nova.virt.libvirt.driver [None req-4796ccce-2601-44d6-9482-2234d4c7191f 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 14:48:52 compute-1 nova_compute[225855]: 2026-01-20 14:48:52.069 225859 DEBUG nova.virt.libvirt.host [None req-4796ccce-2601-44d6-9482-2234d4c7191f 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 20 14:48:52 compute-1 nova_compute[225855]: 2026-01-20 14:48:52.069 225859 DEBUG nova.virt.libvirt.host [None req-4796ccce-2601-44d6-9482-2234d4c7191f 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 20 14:48:52 compute-1 nova_compute[225855]: 2026-01-20 14:48:52.072 225859 DEBUG nova.virt.libvirt.host [None req-4796ccce-2601-44d6-9482-2234d4c7191f 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 20 14:48:52 compute-1 nova_compute[225855]: 2026-01-20 14:48:52.073 225859 DEBUG nova.virt.libvirt.host [None req-4796ccce-2601-44d6-9482-2234d4c7191f 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 20 14:48:52 compute-1 nova_compute[225855]: 2026-01-20 14:48:52.074 225859 DEBUG nova.virt.libvirt.driver [None req-4796ccce-2601-44d6-9482-2234d4c7191f 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 20 14:48:52 compute-1 nova_compute[225855]: 2026-01-20 14:48:52.074 225859 DEBUG nova.virt.hardware [None req-4796ccce-2601-44d6-9482-2234d4c7191f 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-20T14:21:55Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='30c26a27-d918-46d8-a512-4ef3b4ce5955',id=2,is_public=True,memory_mb=192,name='m1.micro',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 20 14:48:52 compute-1 nova_compute[225855]: 2026-01-20 14:48:52.074 225859 DEBUG nova.virt.hardware [None req-4796ccce-2601-44d6-9482-2234d4c7191f 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 20 14:48:52 compute-1 nova_compute[225855]: 2026-01-20 14:48:52.075 225859 DEBUG nova.virt.hardware [None req-4796ccce-2601-44d6-9482-2234d4c7191f 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 20 14:48:52 compute-1 nova_compute[225855]: 2026-01-20 14:48:52.075 225859 DEBUG nova.virt.hardware [None req-4796ccce-2601-44d6-9482-2234d4c7191f 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 20 14:48:52 compute-1 nova_compute[225855]: 2026-01-20 14:48:52.075 225859 DEBUG nova.virt.hardware [None req-4796ccce-2601-44d6-9482-2234d4c7191f 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 20 14:48:52 compute-1 nova_compute[225855]: 2026-01-20 14:48:52.075 225859 DEBUG nova.virt.hardware [None req-4796ccce-2601-44d6-9482-2234d4c7191f 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 20 14:48:52 compute-1 nova_compute[225855]: 2026-01-20 14:48:52.075 225859 DEBUG nova.virt.hardware [None req-4796ccce-2601-44d6-9482-2234d4c7191f 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 20 14:48:52 compute-1 nova_compute[225855]: 2026-01-20 14:48:52.076 225859 DEBUG nova.virt.hardware [None req-4796ccce-2601-44d6-9482-2234d4c7191f 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 20 14:48:52 compute-1 nova_compute[225855]: 2026-01-20 14:48:52.076 225859 DEBUG nova.virt.hardware [None req-4796ccce-2601-44d6-9482-2234d4c7191f 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 20 14:48:52 compute-1 nova_compute[225855]: 2026-01-20 14:48:52.076 225859 DEBUG nova.virt.hardware [None req-4796ccce-2601-44d6-9482-2234d4c7191f 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 20 14:48:52 compute-1 nova_compute[225855]: 2026-01-20 14:48:52.076 225859 DEBUG nova.virt.hardware [None req-4796ccce-2601-44d6-9482-2234d4c7191f 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 20 14:48:52 compute-1 nova_compute[225855]: 2026-01-20 14:48:52.076 225859 DEBUG nova.objects.instance [None req-4796ccce-2601-44d6-9482-2234d4c7191f 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Lazy-loading 'vcpu_model' on Instance uuid 75736b87-b14e-45b7-b43b-5129cf7d3279 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:48:52 compute-1 nova_compute[225855]: 2026-01-20 14:48:52.109 225859 DEBUG oslo_concurrency.processutils [None req-4796ccce-2601-44d6-9482-2234d4c7191f 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:48:52 compute-1 nova_compute[225855]: 2026-01-20 14:48:52.171 225859 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1768920517.170466, 9beb3ec3-721e-4919-9713-a92c82ad189b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:48:52 compute-1 nova_compute[225855]: 2026-01-20 14:48:52.172 225859 INFO nova.compute.manager [-] [instance: 9beb3ec3-721e-4919-9713-a92c82ad189b] VM Stopped (Lifecycle Event)
Jan 20 14:48:52 compute-1 systemd[1]: Stopping User Manager for UID 42436...
Jan 20 14:48:52 compute-1 systemd[264200]: Activating special unit Exit the Session...
Jan 20 14:48:52 compute-1 systemd[264200]: Stopped target Main User Target.
Jan 20 14:48:52 compute-1 systemd[264200]: Stopped target Basic System.
Jan 20 14:48:52 compute-1 systemd[264200]: Stopped target Paths.
Jan 20 14:48:52 compute-1 systemd[264200]: Stopped target Sockets.
Jan 20 14:48:52 compute-1 systemd[264200]: Stopped target Timers.
Jan 20 14:48:52 compute-1 systemd[264200]: Stopped Mark boot as successful after the user session has run 2 minutes.
Jan 20 14:48:52 compute-1 systemd[264200]: Stopped Daily Cleanup of User's Temporary Directories.
Jan 20 14:48:52 compute-1 systemd[264200]: Closed D-Bus User Message Bus Socket.
Jan 20 14:48:52 compute-1 systemd[264200]: Stopped Create User's Volatile Files and Directories.
Jan 20 14:48:52 compute-1 systemd[264200]: Removed slice User Application Slice.
Jan 20 14:48:52 compute-1 systemd[264200]: Reached target Shutdown.
Jan 20 14:48:52 compute-1 systemd[264200]: Finished Exit the Session.
Jan 20 14:48:52 compute-1 systemd[264200]: Reached target Exit the Session.
Jan 20 14:48:52 compute-1 systemd[1]: user@42436.service: Deactivated successfully.
Jan 20 14:48:52 compute-1 systemd[1]: Stopped User Manager for UID 42436.
Jan 20 14:48:52 compute-1 nova_compute[225855]: 2026-01-20 14:48:52.197 225859 DEBUG nova.compute.manager [None req-ba82be76-e88e-45e8-88fd-add7eef3e220 - - - - - -] [instance: 9beb3ec3-721e-4919-9713-a92c82ad189b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:48:52 compute-1 systemd[1]: Stopping User Runtime Directory /run/user/42436...
Jan 20 14:48:52 compute-1 systemd[1]: run-user-42436.mount: Deactivated successfully.
Jan 20 14:48:52 compute-1 systemd[1]: user-runtime-dir@42436.service: Deactivated successfully.
Jan 20 14:48:52 compute-1 systemd[1]: Stopped User Runtime Directory /run/user/42436.
Jan 20 14:48:52 compute-1 systemd[1]: Removed slice User Slice of UID 42436.
Jan 20 14:48:52 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:48:52 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1726552496' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:48:52 compute-1 nova_compute[225855]: 2026-01-20 14:48:52.308 225859 DEBUG oslo_concurrency.processutils [None req-76dd6255-29d1-4c9b-ac8c-fcebbd3c6878 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:48:52 compute-1 nova_compute[225855]: 2026-01-20 14:48:52.314 225859 DEBUG nova.compute.provider_tree [None req-76dd6255-29d1-4c9b-ac8c-fcebbd3c6878 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:48:52 compute-1 nova_compute[225855]: 2026-01-20 14:48:52.327 225859 DEBUG nova.scheduler.client.report [None req-76dd6255-29d1-4c9b-ac8c-fcebbd3c6878 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:48:52 compute-1 nova_compute[225855]: 2026-01-20 14:48:52.338 225859 DEBUG nova.compute.manager [req-bb9a0090-f25b-4de4-99f0-c9ce9ec45b1e req-3357901e-40a1-4923-a73a-8561f5e40d45 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 6586bc3e-3a94-4d22-8e8c-713a86a956fb] Received event network-vif-plugged-2c289e6f-295e-44c3-948a-9a6901251890 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:48:52 compute-1 nova_compute[225855]: 2026-01-20 14:48:52.339 225859 DEBUG oslo_concurrency.lockutils [req-bb9a0090-f25b-4de4-99f0-c9ce9ec45b1e req-3357901e-40a1-4923-a73a-8561f5e40d45 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "6586bc3e-3a94-4d22-8e8c-713a86a956fb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:48:52 compute-1 nova_compute[225855]: 2026-01-20 14:48:52.339 225859 DEBUG oslo_concurrency.lockutils [req-bb9a0090-f25b-4de4-99f0-c9ce9ec45b1e req-3357901e-40a1-4923-a73a-8561f5e40d45 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "6586bc3e-3a94-4d22-8e8c-713a86a956fb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:48:52 compute-1 nova_compute[225855]: 2026-01-20 14:48:52.339 225859 DEBUG oslo_concurrency.lockutils [req-bb9a0090-f25b-4de4-99f0-c9ce9ec45b1e req-3357901e-40a1-4923-a73a-8561f5e40d45 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "6586bc3e-3a94-4d22-8e8c-713a86a956fb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:48:52 compute-1 nova_compute[225855]: 2026-01-20 14:48:52.339 225859 DEBUG nova.compute.manager [req-bb9a0090-f25b-4de4-99f0-c9ce9ec45b1e req-3357901e-40a1-4923-a73a-8561f5e40d45 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 6586bc3e-3a94-4d22-8e8c-713a86a956fb] No waiting events found dispatching network-vif-plugged-2c289e6f-295e-44c3-948a-9a6901251890 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:48:52 compute-1 nova_compute[225855]: 2026-01-20 14:48:52.340 225859 WARNING nova.compute.manager [req-bb9a0090-f25b-4de4-99f0-c9ce9ec45b1e req-3357901e-40a1-4923-a73a-8561f5e40d45 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 6586bc3e-3a94-4d22-8e8c-713a86a956fb] Received unexpected event network-vif-plugged-2c289e6f-295e-44c3-948a-9a6901251890 for instance with vm_state deleted and task_state None.
Jan 20 14:48:52 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e242 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:48:52 compute-1 nova_compute[225855]: 2026-01-20 14:48:52.353 225859 DEBUG oslo_concurrency.lockutils [None req-76dd6255-29d1-4c9b-ac8c-fcebbd3c6878 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.642s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:48:52 compute-1 nova_compute[225855]: 2026-01-20 14:48:52.427 225859 INFO nova.scheduler.client.report [None req-76dd6255-29d1-4c9b-ac8c-fcebbd3c6878 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Deleted allocations for instance 6586bc3e-3a94-4d22-8e8c-713a86a956fb
Jan 20 14:48:52 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:48:52 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1290154729' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:48:52 compute-1 nova_compute[225855]: 2026-01-20 14:48:52.590 225859 DEBUG oslo_concurrency.processutils [None req-4796ccce-2601-44d6-9482-2234d4c7191f 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:48:52 compute-1 nova_compute[225855]: 2026-01-20 14:48:52.626 225859 DEBUG oslo_concurrency.lockutils [None req-76dd6255-29d1-4c9b-ac8c-fcebbd3c6878 869086208e10436c9dc96c78bee9a85d b683fcc0026242e28ba6d8fba638688e - - default default] Lock "6586bc3e-3a94-4d22-8e8c-713a86a956fb" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.979s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:48:52 compute-1 nova_compute[225855]: 2026-01-20 14:48:52.631 225859 DEBUG oslo_concurrency.processutils [None req-4796ccce-2601-44d6-9482-2234d4c7191f 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:48:52 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:48:52 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:48:52 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:48:52.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:48:52 compute-1 ceph-mon[81775]: osdmap e242: 3 total, 3 up, 3 in
Jan 20 14:48:52 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/1726552496' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:48:52 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/1290154729' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:48:53 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:48:53 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/344938960' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:48:53 compute-1 nova_compute[225855]: 2026-01-20 14:48:53.098 225859 DEBUG oslo_concurrency.processutils [None req-4796ccce-2601-44d6-9482-2234d4c7191f 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:48:53 compute-1 nova_compute[225855]: 2026-01-20 14:48:53.101 225859 DEBUG nova.virt.libvirt.vif [None req-4796ccce-2601-44d6-9482-2234d4c7191f 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T14:45:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1202945337',display_name='tempest-ServerActionsTestJSON-server-1202945337',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1202945337',id=94,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLgEzx5mLsSqRL7L9WKOzM+WdeJ40U103wY9H3VMZ41G/sN5tQtQSt9lXWKTyc6pt00bfKD0E9GPugNMpy+dzSSpK23o3CgadkzAfAvjQCeCPOSp3fX13FGApomGd1HRCQ==',key_name='tempest-keypair-1602241722',keypairs=<?>,launch_index=0,launched_at=2026-01-20T14:45:21Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='1c5f03d46c0c4162a3b2f1530850bb6c',ramdisk_id='',reservation_id='r-luaqa362',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServerActionsTestJSON-1020442335',owner_user_name='tempest-ServerActionsTestJSON-1020442335-project-member'},tags=<?>,task_state='resize_finish',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T14:48:45Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='3e9278fdb9e645b7938f3edb20c4d3cf',uuid=75736b87-b14e-45b7-b43b-5129cf7d3279,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d3a9a684-c9a7-4abc-a085-9dcd17bfc2e6", "address": "fa:16:3e:22:f9:d2", "network": {"id": "762e1859-4db4-4d9e-b66f-d50316f80df4", "bridge": "br-int", "label": 
"tempest-ServerActionsTestJSON-1917526237-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.180", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestJSON-1917526237-network", "vif_mac": "fa:16:3e:22:f9:d2"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c5f03d46c0c4162a3b2f1530850bb6c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd3a9a684-c9", "ovs_interfaceid": "d3a9a684-c9a7-4abc-a085-9dcd17bfc2e6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 20 14:48:53 compute-1 nova_compute[225855]: 2026-01-20 14:48:53.102 225859 DEBUG nova.network.os_vif_util [None req-4796ccce-2601-44d6-9482-2234d4c7191f 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Converting VIF {"id": "d3a9a684-c9a7-4abc-a085-9dcd17bfc2e6", "address": "fa:16:3e:22:f9:d2", "network": {"id": "762e1859-4db4-4d9e-b66f-d50316f80df4", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1917526237-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.180", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestJSON-1917526237-network", "vif_mac": "fa:16:3e:22:f9:d2"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c5f03d46c0c4162a3b2f1530850bb6c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd3a9a684-c9", "ovs_interfaceid": "d3a9a684-c9a7-4abc-a085-9dcd17bfc2e6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:48:53 compute-1 nova_compute[225855]: 2026-01-20 14:48:53.103 225859 DEBUG nova.network.os_vif_util [None req-4796ccce-2601-44d6-9482-2234d4c7191f 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:22:f9:d2,bridge_name='br-int',has_traffic_filtering=True,id=d3a9a684-c9a7-4abc-a085-9dcd17bfc2e6,network=Network(762e1859-4db4-4d9e-b66f-d50316f80df4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd3a9a684-c9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:48:53 compute-1 nova_compute[225855]: 2026-01-20 14:48:53.106 225859 DEBUG nova.virt.libvirt.driver [None req-4796ccce-2601-44d6-9482-2234d4c7191f 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] [instance: 75736b87-b14e-45b7-b43b-5129cf7d3279] End _get_guest_xml xml=<domain type="kvm">
Jan 20 14:48:53 compute-1 nova_compute[225855]:   <uuid>75736b87-b14e-45b7-b43b-5129cf7d3279</uuid>
Jan 20 14:48:53 compute-1 nova_compute[225855]:   <name>instance-0000005e</name>
Jan 20 14:48:53 compute-1 nova_compute[225855]:   <memory>196608</memory>
Jan 20 14:48:53 compute-1 nova_compute[225855]:   <vcpu>1</vcpu>
Jan 20 14:48:53 compute-1 nova_compute[225855]:   <metadata>
Jan 20 14:48:53 compute-1 nova_compute[225855]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 14:48:53 compute-1 nova_compute[225855]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 14:48:53 compute-1 nova_compute[225855]:       <nova:name>tempest-ServerActionsTestJSON-server-1202945337</nova:name>
Jan 20 14:48:53 compute-1 nova_compute[225855]:       <nova:creationTime>2026-01-20 14:48:52</nova:creationTime>
Jan 20 14:48:53 compute-1 nova_compute[225855]:       <nova:flavor name="m1.micro">
Jan 20 14:48:53 compute-1 nova_compute[225855]:         <nova:memory>192</nova:memory>
Jan 20 14:48:53 compute-1 nova_compute[225855]:         <nova:disk>1</nova:disk>
Jan 20 14:48:53 compute-1 nova_compute[225855]:         <nova:swap>0</nova:swap>
Jan 20 14:48:53 compute-1 nova_compute[225855]:         <nova:ephemeral>0</nova:ephemeral>
Jan 20 14:48:53 compute-1 nova_compute[225855]:         <nova:vcpus>1</nova:vcpus>
Jan 20 14:48:53 compute-1 nova_compute[225855]:       </nova:flavor>
Jan 20 14:48:53 compute-1 nova_compute[225855]:       <nova:owner>
Jan 20 14:48:53 compute-1 nova_compute[225855]:         <nova:user uuid="3e9278fdb9e645b7938f3edb20c4d3cf">tempest-ServerActionsTestJSON-1020442335-project-member</nova:user>
Jan 20 14:48:53 compute-1 nova_compute[225855]:         <nova:project uuid="1c5f03d46c0c4162a3b2f1530850bb6c">tempest-ServerActionsTestJSON-1020442335</nova:project>
Jan 20 14:48:53 compute-1 nova_compute[225855]:       </nova:owner>
Jan 20 14:48:53 compute-1 nova_compute[225855]:       <nova:root type="image" uuid="a32b3e07-16d8-46fd-9a7b-c242c432fcf9"/>
Jan 20 14:48:53 compute-1 nova_compute[225855]:       <nova:ports>
Jan 20 14:48:53 compute-1 nova_compute[225855]:         <nova:port uuid="d3a9a684-c9a7-4abc-a085-9dcd17bfc2e6">
Jan 20 14:48:53 compute-1 nova_compute[225855]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Jan 20 14:48:53 compute-1 nova_compute[225855]:         </nova:port>
Jan 20 14:48:53 compute-1 nova_compute[225855]:       </nova:ports>
Jan 20 14:48:53 compute-1 nova_compute[225855]:     </nova:instance>
Jan 20 14:48:53 compute-1 nova_compute[225855]:   </metadata>
Jan 20 14:48:53 compute-1 nova_compute[225855]:   <sysinfo type="smbios">
Jan 20 14:48:53 compute-1 nova_compute[225855]:     <system>
Jan 20 14:48:53 compute-1 nova_compute[225855]:       <entry name="manufacturer">RDO</entry>
Jan 20 14:48:53 compute-1 nova_compute[225855]:       <entry name="product">OpenStack Compute</entry>
Jan 20 14:48:53 compute-1 nova_compute[225855]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 14:48:53 compute-1 nova_compute[225855]:       <entry name="serial">75736b87-b14e-45b7-b43b-5129cf7d3279</entry>
Jan 20 14:48:53 compute-1 nova_compute[225855]:       <entry name="uuid">75736b87-b14e-45b7-b43b-5129cf7d3279</entry>
Jan 20 14:48:53 compute-1 nova_compute[225855]:       <entry name="family">Virtual Machine</entry>
Jan 20 14:48:53 compute-1 nova_compute[225855]:     </system>
Jan 20 14:48:53 compute-1 nova_compute[225855]:   </sysinfo>
Jan 20 14:48:53 compute-1 nova_compute[225855]:   <os>
Jan 20 14:48:53 compute-1 nova_compute[225855]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 20 14:48:53 compute-1 nova_compute[225855]:     <boot dev="hd"/>
Jan 20 14:48:53 compute-1 nova_compute[225855]:     <smbios mode="sysinfo"/>
Jan 20 14:48:53 compute-1 nova_compute[225855]:   </os>
Jan 20 14:48:53 compute-1 nova_compute[225855]:   <features>
Jan 20 14:48:53 compute-1 nova_compute[225855]:     <acpi/>
Jan 20 14:48:53 compute-1 nova_compute[225855]:     <apic/>
Jan 20 14:48:53 compute-1 nova_compute[225855]:     <vmcoreinfo/>
Jan 20 14:48:53 compute-1 nova_compute[225855]:   </features>
Jan 20 14:48:53 compute-1 nova_compute[225855]:   <clock offset="utc">
Jan 20 14:48:53 compute-1 nova_compute[225855]:     <timer name="pit" tickpolicy="delay"/>
Jan 20 14:48:53 compute-1 nova_compute[225855]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 20 14:48:53 compute-1 nova_compute[225855]:     <timer name="hpet" present="no"/>
Jan 20 14:48:53 compute-1 nova_compute[225855]:   </clock>
Jan 20 14:48:53 compute-1 nova_compute[225855]:   <cpu mode="custom" match="exact">
Jan 20 14:48:53 compute-1 nova_compute[225855]:     <model>Nehalem</model>
Jan 20 14:48:53 compute-1 nova_compute[225855]:     <topology sockets="1" cores="1" threads="1"/>
Jan 20 14:48:53 compute-1 nova_compute[225855]:   </cpu>
Jan 20 14:48:53 compute-1 nova_compute[225855]:   <devices>
Jan 20 14:48:53 compute-1 nova_compute[225855]:     <disk type="network" device="disk">
Jan 20 14:48:53 compute-1 nova_compute[225855]:       <driver type="raw" cache="none"/>
Jan 20 14:48:53 compute-1 nova_compute[225855]:       <source protocol="rbd" name="vms/75736b87-b14e-45b7-b43b-5129cf7d3279_disk">
Jan 20 14:48:53 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:48:53 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:48:53 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:48:53 compute-1 nova_compute[225855]:       </source>
Jan 20 14:48:53 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 14:48:53 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:48:53 compute-1 nova_compute[225855]:       </auth>
Jan 20 14:48:53 compute-1 nova_compute[225855]:       <target dev="vda" bus="virtio"/>
Jan 20 14:48:53 compute-1 nova_compute[225855]:     </disk>
Jan 20 14:48:53 compute-1 nova_compute[225855]:     <disk type="network" device="cdrom">
Jan 20 14:48:53 compute-1 nova_compute[225855]:       <driver type="raw" cache="none"/>
Jan 20 14:48:53 compute-1 nova_compute[225855]:       <source protocol="rbd" name="vms/75736b87-b14e-45b7-b43b-5129cf7d3279_disk.config">
Jan 20 14:48:53 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:48:53 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:48:53 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:48:53 compute-1 nova_compute[225855]:       </source>
Jan 20 14:48:53 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 14:48:53 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:48:53 compute-1 nova_compute[225855]:       </auth>
Jan 20 14:48:53 compute-1 nova_compute[225855]:       <target dev="sda" bus="sata"/>
Jan 20 14:48:53 compute-1 nova_compute[225855]:     </disk>
Jan 20 14:48:53 compute-1 nova_compute[225855]:     <interface type="ethernet">
Jan 20 14:48:53 compute-1 nova_compute[225855]:       <mac address="fa:16:3e:22:f9:d2"/>
Jan 20 14:48:53 compute-1 nova_compute[225855]:       <model type="virtio"/>
Jan 20 14:48:53 compute-1 nova_compute[225855]:       <driver name="vhost" rx_queue_size="512"/>
Jan 20 14:48:53 compute-1 nova_compute[225855]:       <mtu size="1442"/>
Jan 20 14:48:53 compute-1 nova_compute[225855]:       <target dev="tapd3a9a684-c9"/>
Jan 20 14:48:53 compute-1 nova_compute[225855]:     </interface>
Jan 20 14:48:53 compute-1 nova_compute[225855]:     <serial type="pty">
Jan 20 14:48:53 compute-1 nova_compute[225855]:       <log file="/var/lib/nova/instances/75736b87-b14e-45b7-b43b-5129cf7d3279/console.log" append="off"/>
Jan 20 14:48:53 compute-1 nova_compute[225855]:     </serial>
Jan 20 14:48:53 compute-1 nova_compute[225855]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 20 14:48:53 compute-1 nova_compute[225855]:     <video>
Jan 20 14:48:53 compute-1 nova_compute[225855]:       <model type="virtio"/>
Jan 20 14:48:53 compute-1 nova_compute[225855]:     </video>
Jan 20 14:48:53 compute-1 nova_compute[225855]:     <input type="tablet" bus="usb"/>
Jan 20 14:48:53 compute-1 nova_compute[225855]:     <rng model="virtio">
Jan 20 14:48:53 compute-1 nova_compute[225855]:       <backend model="random">/dev/urandom</backend>
Jan 20 14:48:53 compute-1 nova_compute[225855]:     </rng>
Jan 20 14:48:53 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root"/>
Jan 20 14:48:53 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:48:53 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:48:53 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:48:53 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:48:53 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:48:53 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:48:53 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:48:53 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:48:53 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:48:53 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:48:53 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:48:53 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:48:53 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:48:53 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:48:53 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:48:53 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:48:53 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:48:53 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:48:53 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:48:53 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:48:53 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:48:53 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:48:53 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:48:53 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:48:53 compute-1 nova_compute[225855]:     <controller type="usb" index="0"/>
Jan 20 14:48:53 compute-1 nova_compute[225855]:     <memballoon model="virtio">
Jan 20 14:48:53 compute-1 nova_compute[225855]:       <stats period="10"/>
Jan 20 14:48:53 compute-1 nova_compute[225855]:     </memballoon>
Jan 20 14:48:53 compute-1 nova_compute[225855]:   </devices>
Jan 20 14:48:53 compute-1 nova_compute[225855]: </domain>
Jan 20 14:48:53 compute-1 nova_compute[225855]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 20 14:48:53 compute-1 nova_compute[225855]: 2026-01-20 14:48:53.108 225859 DEBUG nova.virt.libvirt.vif [None req-4796ccce-2601-44d6-9482-2234d4c7191f 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T14:45:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1202945337',display_name='tempest-ServerActionsTestJSON-server-1202945337',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1202945337',id=94,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLgEzx5mLsSqRL7L9WKOzM+WdeJ40U103wY9H3VMZ41G/sN5tQtQSt9lXWKTyc6pt00bfKD0E9GPugNMpy+dzSSpK23o3CgadkzAfAvjQCeCPOSp3fX13FGApomGd1HRCQ==',key_name='tempest-keypair-1602241722',keypairs=<?>,launch_index=0,launched_at=2026-01-20T14:45:21Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='1c5f03d46c0c4162a3b2f1530850bb6c',ramdisk_id='',reservation_id='r-luaqa362',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServerActionsTestJSON-1020442335',owner_user_name='tempest-ServerActionsTestJSON-1020442335-project-member'},tags=<?>,task_state='resize_finish',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T14:48:45Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='3e9278fdb9e645b7938f3edb20c4d3cf',uuid=75736b87-b14e-45b7-b43b-5129cf7d3279,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d3a9a684-c9a7-4abc-a085-9dcd17bfc2e6", "address": "fa:16:3e:22:f9:d2", "network": {"id": "762e1859-4db4-4d9e-b66f-d50316f80df4", "bridge": "br-int", "label": 
"tempest-ServerActionsTestJSON-1917526237-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.180", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestJSON-1917526237-network", "vif_mac": "fa:16:3e:22:f9:d2"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c5f03d46c0c4162a3b2f1530850bb6c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd3a9a684-c9", "ovs_interfaceid": "d3a9a684-c9a7-4abc-a085-9dcd17bfc2e6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 20 14:48:53 compute-1 nova_compute[225855]: 2026-01-20 14:48:53.109 225859 DEBUG nova.network.os_vif_util [None req-4796ccce-2601-44d6-9482-2234d4c7191f 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Converting VIF {"id": "d3a9a684-c9a7-4abc-a085-9dcd17bfc2e6", "address": "fa:16:3e:22:f9:d2", "network": {"id": "762e1859-4db4-4d9e-b66f-d50316f80df4", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1917526237-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.180", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestJSON-1917526237-network", "vif_mac": "fa:16:3e:22:f9:d2"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c5f03d46c0c4162a3b2f1530850bb6c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd3a9a684-c9", "ovs_interfaceid": "d3a9a684-c9a7-4abc-a085-9dcd17bfc2e6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:48:53 compute-1 nova_compute[225855]: 2026-01-20 14:48:53.110 225859 DEBUG nova.network.os_vif_util [None req-4796ccce-2601-44d6-9482-2234d4c7191f 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:22:f9:d2,bridge_name='br-int',has_traffic_filtering=True,id=d3a9a684-c9a7-4abc-a085-9dcd17bfc2e6,network=Network(762e1859-4db4-4d9e-b66f-d50316f80df4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd3a9a684-c9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:48:53 compute-1 nova_compute[225855]: 2026-01-20 14:48:53.110 225859 DEBUG os_vif [None req-4796ccce-2601-44d6-9482-2234d4c7191f 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:22:f9:d2,bridge_name='br-int',has_traffic_filtering=True,id=d3a9a684-c9a7-4abc-a085-9dcd17bfc2e6,network=Network(762e1859-4db4-4d9e-b66f-d50316f80df4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd3a9a684-c9') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 20 14:48:53 compute-1 nova_compute[225855]: 2026-01-20 14:48:53.111 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:48:53 compute-1 nova_compute[225855]: 2026-01-20 14:48:53.112 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:48:53 compute-1 nova_compute[225855]: 2026-01-20 14:48:53.112 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:48:53 compute-1 nova_compute[225855]: 2026-01-20 14:48:53.115 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:48:53 compute-1 nova_compute[225855]: 2026-01-20 14:48:53.115 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd3a9a684-c9, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:48:53 compute-1 nova_compute[225855]: 2026-01-20 14:48:53.115 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd3a9a684-c9, col_values=(('external_ids', {'iface-id': 'd3a9a684-c9a7-4abc-a085-9dcd17bfc2e6', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:22:f9:d2', 'vm-uuid': '75736b87-b14e-45b7-b43b-5129cf7d3279'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:48:53 compute-1 NetworkManager[49104]: <info>  [1768920533.1178] manager: (tapd3a9a684-c9): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/157)
Jan 20 14:48:53 compute-1 nova_compute[225855]: 2026-01-20 14:48:53.120 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 14:48:53 compute-1 nova_compute[225855]: 2026-01-20 14:48:53.180 225859 DEBUG nova.network.neutron [req-c7e8339e-e267-4fef-95d8-eea084a60576 req-e982c00d-bc4f-4669-a1f6-4f7f2fd55d8b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 75736b87-b14e-45b7-b43b-5129cf7d3279] Updated VIF entry in instance network info cache for port d3a9a684-c9a7-4abc-a085-9dcd17bfc2e6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 14:48:53 compute-1 nova_compute[225855]: 2026-01-20 14:48:53.180 225859 DEBUG nova.network.neutron [req-c7e8339e-e267-4fef-95d8-eea084a60576 req-e982c00d-bc4f-4669-a1f6-4f7f2fd55d8b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 75736b87-b14e-45b7-b43b-5129cf7d3279] Updating instance_info_cache with network_info: [{"id": "d3a9a684-c9a7-4abc-a085-9dcd17bfc2e6", "address": "fa:16:3e:22:f9:d2", "network": {"id": "762e1859-4db4-4d9e-b66f-d50316f80df4", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1917526237-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.180", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c5f03d46c0c4162a3b2f1530850bb6c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd3a9a684-c9", "ovs_interfaceid": "d3a9a684-c9a7-4abc-a085-9dcd17bfc2e6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:48:53 compute-1 nova_compute[225855]: 2026-01-20 14:48:53.182 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:48:53 compute-1 nova_compute[225855]: 2026-01-20 14:48:53.183 225859 INFO os_vif [None req-4796ccce-2601-44d6-9482-2234d4c7191f 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:22:f9:d2,bridge_name='br-int',has_traffic_filtering=True,id=d3a9a684-c9a7-4abc-a085-9dcd17bfc2e6,network=Network(762e1859-4db4-4d9e-b66f-d50316f80df4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd3a9a684-c9')
Jan 20 14:48:53 compute-1 nova_compute[225855]: 2026-01-20 14:48:53.196 225859 DEBUG oslo_concurrency.lockutils [req-c7e8339e-e267-4fef-95d8-eea084a60576 req-e982c00d-bc4f-4669-a1f6-4f7f2fd55d8b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-75736b87-b14e-45b7-b43b-5129cf7d3279" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:48:53 compute-1 nova_compute[225855]: 2026-01-20 14:48:53.248 225859 DEBUG nova.virt.libvirt.driver [None req-4796ccce-2601-44d6-9482-2234d4c7191f 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:48:53 compute-1 nova_compute[225855]: 2026-01-20 14:48:53.249 225859 DEBUG nova.virt.libvirt.driver [None req-4796ccce-2601-44d6-9482-2234d4c7191f 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:48:53 compute-1 nova_compute[225855]: 2026-01-20 14:48:53.249 225859 DEBUG nova.virt.libvirt.driver [None req-4796ccce-2601-44d6-9482-2234d4c7191f 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] No VIF found with MAC fa:16:3e:22:f9:d2, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 20 14:48:53 compute-1 nova_compute[225855]: 2026-01-20 14:48:53.249 225859 INFO nova.virt.libvirt.driver [None req-4796ccce-2601-44d6-9482-2234d4c7191f 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] [instance: 75736b87-b14e-45b7-b43b-5129cf7d3279] Using config drive
Jan 20 14:48:53 compute-1 kernel: tapd3a9a684-c9: entered promiscuous mode
Jan 20 14:48:53 compute-1 NetworkManager[49104]: <info>  [1768920533.3252] manager: (tapd3a9a684-c9): new Tun device (/org/freedesktop/NetworkManager/Devices/158)
Jan 20 14:48:53 compute-1 nova_compute[225855]: 2026-01-20 14:48:53.326 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:48:53 compute-1 ovn_controller[130490]: 2026-01-20T14:48:53Z|00377|binding|INFO|Claiming lport d3a9a684-c9a7-4abc-a085-9dcd17bfc2e6 for this chassis.
Jan 20 14:48:53 compute-1 ovn_controller[130490]: 2026-01-20T14:48:53Z|00378|binding|INFO|d3a9a684-c9a7-4abc-a085-9dcd17bfc2e6: Claiming fa:16:3e:22:f9:d2 10.100.0.4
Jan 20 14:48:53 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:48:53.336 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:22:f9:d2 10.100.0.4'], port_security=['fa:16:3e:22:f9:d2 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '75736b87-b14e-45b7-b43b-5129cf7d3279', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-762e1859-4db4-4d9e-b66f-d50316f80df4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1c5f03d46c0c4162a3b2f1530850bb6c', 'neutron:revision_number': '12', 'neutron:security_group_ids': '80535eda-fa59-4edc-8e3d-9bfea6517730', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.180'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2474a8ca-bb96-4cae-9133-23419b81a9fc, chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], logical_port=d3a9a684-c9a7-4abc-a085-9dcd17bfc2e6) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:48:53 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:48:53.338 140354 INFO neutron.agent.ovn.metadata.agent [-] Port d3a9a684-c9a7-4abc-a085-9dcd17bfc2e6 in datapath 762e1859-4db4-4d9e-b66f-d50316f80df4 bound to our chassis
Jan 20 14:48:53 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:48:53.339 140354 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 762e1859-4db4-4d9e-b66f-d50316f80df4
Jan 20 14:48:53 compute-1 ovn_controller[130490]: 2026-01-20T14:48:53Z|00379|binding|INFO|Setting lport d3a9a684-c9a7-4abc-a085-9dcd17bfc2e6 ovn-installed in OVS
Jan 20 14:48:53 compute-1 ovn_controller[130490]: 2026-01-20T14:48:53Z|00380|binding|INFO|Setting lport d3a9a684-c9a7-4abc-a085-9dcd17bfc2e6 up in Southbound
Jan 20 14:48:53 compute-1 nova_compute[225855]: 2026-01-20 14:48:53.345 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:48:53 compute-1 nova_compute[225855]: 2026-01-20 14:48:53.347 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:48:53 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:48:53.350 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[85587314-e3e5-4fe6-b980-2f992728d917]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:48:53 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:48:53.350 140354 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap762e1859-41 in ovnmeta-762e1859-4db4-4d9e-b66f-d50316f80df4 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 20 14:48:53 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:48:53.352 229707 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap762e1859-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 20 14:48:53 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:48:53.352 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[7d9820f3-a4e5-43aa-a5cd-18d186d3046b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:48:53 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:48:53.353 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[9e1e8f0a-d786-4756-8f01-3eca05564d2f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:48:53 compute-1 systemd-udevd[264653]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 14:48:53 compute-1 systemd-machined[194361]: New machine qemu-44-instance-0000005e.
Jan 20 14:48:53 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:48:53.365 140466 DEBUG oslo.privsep.daemon [-] privsep: reply[f086f5a8-bca5-4ae6-af14-7f8ff78468f5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:48:53 compute-1 NetworkManager[49104]: <info>  [1768920533.3714] device (tapd3a9a684-c9): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 20 14:48:53 compute-1 NetworkManager[49104]: <info>  [1768920533.3720] device (tapd3a9a684-c9): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 20 14:48:53 compute-1 systemd[1]: Started Virtual Machine qemu-44-instance-0000005e.
Jan 20 14:48:53 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:48:53.390 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[29a55f88-66b2-470f-8a76-4d5760654fd2]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:48:53 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:48:53.414 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[b03a9a9d-d809-4bc2-a269-2bffe02325a8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:48:53 compute-1 NetworkManager[49104]: <info>  [1768920533.4218] manager: (tap762e1859-40): new Veth device (/org/freedesktop/NetworkManager/Devices/159)
Jan 20 14:48:53 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:48:53.421 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[fc6bc1bd-18cf-4c85-9540-1cb694842b78]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:48:53 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:48:53.453 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[f2b0b88f-5b25-4d02-b7bb-c6b7e5642631]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:48:53 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:48:53.456 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[ed8df730-fb36-431e-a23b-4f5acc9602dd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:48:53 compute-1 NetworkManager[49104]: <info>  [1768920533.4812] device (tap762e1859-40): carrier: link connected
Jan 20 14:48:53 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:48:53.486 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[efe25fcc-4276-469e-ae13-688f61a23cc4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:48:53 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:48:53.502 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[bc4ed959-b2cb-4dff-a372-f1de6e8fb613]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap762e1859-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:69:f1:da'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 102], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 557240, 'reachable_time': 19112, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 264685, 'error': None, 'target': 'ovnmeta-762e1859-4db4-4d9e-b66f-d50316f80df4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:48:53 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:48:53.518 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[99ce43c4-dc47-47ee-a100-96db6a694876]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe69:f1da'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 557240, 'tstamp': 557240}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 264686, 'error': None, 'target': 'ovnmeta-762e1859-4db4-4d9e-b66f-d50316f80df4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:48:53 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:48:53.533 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[286dd4e1-bd5c-4f8b-ad20-ad07d0e0c8fc]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap762e1859-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:69:f1:da'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 102], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 557240, 'reachable_time': 19112, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 264687, 'error': None, 'target': 'ovnmeta-762e1859-4db4-4d9e-b66f-d50316f80df4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:48:53 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:48:53.563 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[fdbb038a-7466-4d41-acb2-5c39186d686a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:48:53 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:48:53 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:48:53 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:48:53.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:48:53 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:48:53.628 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[5c950255-6f08-4197-85ad-2fabeb08736c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:48:53 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:48:53.629 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap762e1859-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:48:53 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:48:53.630 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:48:53 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:48:53.630 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap762e1859-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:48:53 compute-1 NetworkManager[49104]: <info>  [1768920533.6327] manager: (tap762e1859-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/160)
Jan 20 14:48:53 compute-1 kernel: tap762e1859-40: entered promiscuous mode
Jan 20 14:48:53 compute-1 nova_compute[225855]: 2026-01-20 14:48:53.631 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:48:53 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:48:53.635 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap762e1859-40, col_values=(('external_ids', {'iface-id': '9e775c45-1646-436d-a0cb-a5b5ec356e1b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:48:53 compute-1 ovn_controller[130490]: 2026-01-20T14:48:53Z|00381|binding|INFO|Releasing lport 9e775c45-1646-436d-a0cb-a5b5ec356e1b from this chassis (sb_readonly=0)
Jan 20 14:48:53 compute-1 nova_compute[225855]: 2026-01-20 14:48:53.636 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:48:53 compute-1 nova_compute[225855]: 2026-01-20 14:48:53.656 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:48:53 compute-1 nova_compute[225855]: 2026-01-20 14:48:53.658 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:48:53 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:48:53.658 140354 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/762e1859-4db4-4d9e-b66f-d50316f80df4.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/762e1859-4db4-4d9e-b66f-d50316f80df4.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 20 14:48:53 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:48:53.659 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[5b2b9250-70ba-4b87-a0b3-77b763fc0fe3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:48:53 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:48:53.660 140354 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 20 14:48:53 compute-1 ovn_metadata_agent[140349]: global
Jan 20 14:48:53 compute-1 ovn_metadata_agent[140349]:     log         /dev/log local0 debug
Jan 20 14:48:53 compute-1 ovn_metadata_agent[140349]:     log-tag     haproxy-metadata-proxy-762e1859-4db4-4d9e-b66f-d50316f80df4
Jan 20 14:48:53 compute-1 ovn_metadata_agent[140349]:     user        root
Jan 20 14:48:53 compute-1 ovn_metadata_agent[140349]:     group       root
Jan 20 14:48:53 compute-1 ovn_metadata_agent[140349]:     maxconn     1024
Jan 20 14:48:53 compute-1 ovn_metadata_agent[140349]:     pidfile     /var/lib/neutron/external/pids/762e1859-4db4-4d9e-b66f-d50316f80df4.pid.haproxy
Jan 20 14:48:53 compute-1 ovn_metadata_agent[140349]:     daemon
Jan 20 14:48:53 compute-1 ovn_metadata_agent[140349]: 
Jan 20 14:48:53 compute-1 ovn_metadata_agent[140349]: defaults
Jan 20 14:48:53 compute-1 ovn_metadata_agent[140349]:     log global
Jan 20 14:48:53 compute-1 ovn_metadata_agent[140349]:     mode http
Jan 20 14:48:53 compute-1 ovn_metadata_agent[140349]:     option httplog
Jan 20 14:48:53 compute-1 ovn_metadata_agent[140349]:     option dontlognull
Jan 20 14:48:53 compute-1 ovn_metadata_agent[140349]:     option http-server-close
Jan 20 14:48:53 compute-1 ovn_metadata_agent[140349]:     option forwardfor
Jan 20 14:48:53 compute-1 ovn_metadata_agent[140349]:     retries                 3
Jan 20 14:48:53 compute-1 ovn_metadata_agent[140349]:     timeout http-request    30s
Jan 20 14:48:53 compute-1 ovn_metadata_agent[140349]:     timeout connect         30s
Jan 20 14:48:53 compute-1 ovn_metadata_agent[140349]:     timeout client          32s
Jan 20 14:48:53 compute-1 ovn_metadata_agent[140349]:     timeout server          32s
Jan 20 14:48:53 compute-1 ovn_metadata_agent[140349]:     timeout http-keep-alive 30s
Jan 20 14:48:53 compute-1 ovn_metadata_agent[140349]: 
Jan 20 14:48:53 compute-1 ovn_metadata_agent[140349]: 
Jan 20 14:48:53 compute-1 ovn_metadata_agent[140349]: listen listener
Jan 20 14:48:53 compute-1 ovn_metadata_agent[140349]:     bind 169.254.169.254:80
Jan 20 14:48:53 compute-1 ovn_metadata_agent[140349]:     server metadata /var/lib/neutron/metadata_proxy
Jan 20 14:48:53 compute-1 ovn_metadata_agent[140349]:     http-request add-header X-OVN-Network-ID 762e1859-4db4-4d9e-b66f-d50316f80df4
Jan 20 14:48:53 compute-1 ovn_metadata_agent[140349]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 20 14:48:53 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:48:53.660 140354 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-762e1859-4db4-4d9e-b66f-d50316f80df4', 'env', 'PROCESS_TAG=haproxy-762e1859-4db4-4d9e-b66f-d50316f80df4', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/762e1859-4db4-4d9e-b66f-d50316f80df4.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 20 14:48:53 compute-1 ceph-mon[81775]: pgmap v1825: 321 pgs: 321 active+clean; 295 MiB data, 920 MiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 2.1 MiB/s wr, 231 op/s
Jan 20 14:48:53 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/344938960' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:48:53 compute-1 nova_compute[225855]: 2026-01-20 14:48:53.868 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768920533.8684597, 75736b87-b14e-45b7-b43b-5129cf7d3279 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:48:53 compute-1 nova_compute[225855]: 2026-01-20 14:48:53.870 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 75736b87-b14e-45b7-b43b-5129cf7d3279] VM Resumed (Lifecycle Event)
Jan 20 14:48:53 compute-1 nova_compute[225855]: 2026-01-20 14:48:53.871 225859 DEBUG nova.compute.manager [None req-4796ccce-2601-44d6-9482-2234d4c7191f 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] [instance: 75736b87-b14e-45b7-b43b-5129cf7d3279] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 20 14:48:53 compute-1 nova_compute[225855]: 2026-01-20 14:48:53.875 225859 INFO nova.virt.libvirt.driver [-] [instance: 75736b87-b14e-45b7-b43b-5129cf7d3279] Instance running successfully.
Jan 20 14:48:53 compute-1 virtqemud[225396]: argument unsupported: QEMU guest agent is not configured
Jan 20 14:48:53 compute-1 nova_compute[225855]: 2026-01-20 14:48:53.877 225859 DEBUG nova.virt.libvirt.guest [None req-4796ccce-2601-44d6-9482-2234d4c7191f 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] [instance: 75736b87-b14e-45b7-b43b-5129cf7d3279] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200
Jan 20 14:48:53 compute-1 nova_compute[225855]: 2026-01-20 14:48:53.877 225859 DEBUG nova.virt.libvirt.driver [None req-4796ccce-2601-44d6-9482-2234d4c7191f 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] [instance: 75736b87-b14e-45b7-b43b-5129cf7d3279] finish_migration finished successfully. finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11793
Jan 20 14:48:53 compute-1 nova_compute[225855]: 2026-01-20 14:48:53.901 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 75736b87-b14e-45b7-b43b-5129cf7d3279] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:48:53 compute-1 nova_compute[225855]: 2026-01-20 14:48:53.910 225859 DEBUG nova.compute.manager [req-2bff57d0-bcb0-4eb7-8fce-7079b335a84b req-b0d9ae89-0168-4e69-9785-4e65cbc3b1f8 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 75736b87-b14e-45b7-b43b-5129cf7d3279] Received event network-vif-plugged-d3a9a684-c9a7-4abc-a085-9dcd17bfc2e6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:48:53 compute-1 nova_compute[225855]: 2026-01-20 14:48:53.910 225859 DEBUG oslo_concurrency.lockutils [req-2bff57d0-bcb0-4eb7-8fce-7079b335a84b req-b0d9ae89-0168-4e69-9785-4e65cbc3b1f8 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "75736b87-b14e-45b7-b43b-5129cf7d3279-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:48:53 compute-1 nova_compute[225855]: 2026-01-20 14:48:53.911 225859 DEBUG oslo_concurrency.lockutils [req-2bff57d0-bcb0-4eb7-8fce-7079b335a84b req-b0d9ae89-0168-4e69-9785-4e65cbc3b1f8 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "75736b87-b14e-45b7-b43b-5129cf7d3279-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:48:53 compute-1 nova_compute[225855]: 2026-01-20 14:48:53.911 225859 DEBUG oslo_concurrency.lockutils [req-2bff57d0-bcb0-4eb7-8fce-7079b335a84b req-b0d9ae89-0168-4e69-9785-4e65cbc3b1f8 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "75736b87-b14e-45b7-b43b-5129cf7d3279-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:48:53 compute-1 nova_compute[225855]: 2026-01-20 14:48:53.911 225859 DEBUG nova.compute.manager [req-2bff57d0-bcb0-4eb7-8fce-7079b335a84b req-b0d9ae89-0168-4e69-9785-4e65cbc3b1f8 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 75736b87-b14e-45b7-b43b-5129cf7d3279] No waiting events found dispatching network-vif-plugged-d3a9a684-c9a7-4abc-a085-9dcd17bfc2e6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:48:53 compute-1 nova_compute[225855]: 2026-01-20 14:48:53.911 225859 WARNING nova.compute.manager [req-2bff57d0-bcb0-4eb7-8fce-7079b335a84b req-b0d9ae89-0168-4e69-9785-4e65cbc3b1f8 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 75736b87-b14e-45b7-b43b-5129cf7d3279] Received unexpected event network-vif-plugged-d3a9a684-c9a7-4abc-a085-9dcd17bfc2e6 for instance with vm_state active and task_state resize_finish.
Jan 20 14:48:53 compute-1 nova_compute[225855]: 2026-01-20 14:48:53.914 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 75736b87-b14e-45b7-b43b-5129cf7d3279] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:48:53 compute-1 nova_compute[225855]: 2026-01-20 14:48:53.981 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 75736b87-b14e-45b7-b43b-5129cf7d3279] During sync_power_state the instance has a pending task (resize_finish). Skip.
Jan 20 14:48:53 compute-1 nova_compute[225855]: 2026-01-20 14:48:53.982 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768920533.869505, 75736b87-b14e-45b7-b43b-5129cf7d3279 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:48:53 compute-1 nova_compute[225855]: 2026-01-20 14:48:53.982 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 75736b87-b14e-45b7-b43b-5129cf7d3279] VM Started (Lifecycle Event)
Jan 20 14:48:54 compute-1 podman[264763]: 2026-01-20 14:48:54.028620042 +0000 UTC m=+0.049923350 container create ee1710db7c595c0c81475ce7f12613d2dc3b797c65c02c5878ca7132755baf03 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-762e1859-4db4-4d9e-b66f-d50316f80df4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0)
Jan 20 14:48:54 compute-1 nova_compute[225855]: 2026-01-20 14:48:54.031 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 75736b87-b14e-45b7-b43b-5129cf7d3279] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:48:54 compute-1 nova_compute[225855]: 2026-01-20 14:48:54.038 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 75736b87-b14e-45b7-b43b-5129cf7d3279] Synchronizing instance power state after lifecycle event "Started"; current vm_state: resized, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:48:54 compute-1 systemd[1]: Started libpod-conmon-ee1710db7c595c0c81475ce7f12613d2dc3b797c65c02c5878ca7132755baf03.scope.
Jan 20 14:48:54 compute-1 systemd[1]: Started libcrun container.
Jan 20 14:48:54 compute-1 podman[264763]: 2026-01-20 14:48:54.00198228 +0000 UTC m=+0.023285608 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 20 14:48:54 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55ffadcb61cfffe61ed679266c9eaf04220f431cb7218fbabad4d52c6fc4d512/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 20 14:48:54 compute-1 podman[264763]: 2026-01-20 14:48:54.11637858 +0000 UTC m=+0.137681908 container init ee1710db7c595c0c81475ce7f12613d2dc3b797c65c02c5878ca7132755baf03 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-762e1859-4db4-4d9e-b66f-d50316f80df4, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 20 14:48:54 compute-1 podman[264763]: 2026-01-20 14:48:54.121487704 +0000 UTC m=+0.142791012 container start ee1710db7c595c0c81475ce7f12613d2dc3b797c65c02c5878ca7132755baf03 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-762e1859-4db4-4d9e-b66f-d50316f80df4, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Jan 20 14:48:54 compute-1 neutron-haproxy-ovnmeta-762e1859-4db4-4d9e-b66f-d50316f80df4[264778]: [NOTICE]   (264782) : New worker (264784) forked
Jan 20 14:48:54 compute-1 neutron-haproxy-ovnmeta-762e1859-4db4-4d9e-b66f-d50316f80df4[264778]: [NOTICE]   (264782) : Loading success.
Jan 20 14:48:54 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:48:54 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:48:54 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:48:54.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:48:54 compute-1 ceph-mon[81775]: pgmap v1826: 321 pgs: 321 active+clean; 280 MiB data, 911 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 2.1 MiB/s wr, 262 op/s
Jan 20 14:48:55 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:48:55 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:48:55 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:48:55.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:48:55 compute-1 nova_compute[225855]: 2026-01-20 14:48:55.696 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:48:56 compute-1 nova_compute[225855]: 2026-01-20 14:48:56.305 225859 DEBUG nova.compute.manager [req-1defb927-5b7a-454c-bb4b-437cc9b76850 req-fc38b063-5446-4aac-9ebb-fafe81e2b080 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 75736b87-b14e-45b7-b43b-5129cf7d3279] Received event network-vif-plugged-d3a9a684-c9a7-4abc-a085-9dcd17bfc2e6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:48:56 compute-1 nova_compute[225855]: 2026-01-20 14:48:56.306 225859 DEBUG oslo_concurrency.lockutils [req-1defb927-5b7a-454c-bb4b-437cc9b76850 req-fc38b063-5446-4aac-9ebb-fafe81e2b080 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "75736b87-b14e-45b7-b43b-5129cf7d3279-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:48:56 compute-1 nova_compute[225855]: 2026-01-20 14:48:56.306 225859 DEBUG oslo_concurrency.lockutils [req-1defb927-5b7a-454c-bb4b-437cc9b76850 req-fc38b063-5446-4aac-9ebb-fafe81e2b080 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "75736b87-b14e-45b7-b43b-5129cf7d3279-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:48:56 compute-1 nova_compute[225855]: 2026-01-20 14:48:56.306 225859 DEBUG oslo_concurrency.lockutils [req-1defb927-5b7a-454c-bb4b-437cc9b76850 req-fc38b063-5446-4aac-9ebb-fafe81e2b080 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "75736b87-b14e-45b7-b43b-5129cf7d3279-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:48:56 compute-1 nova_compute[225855]: 2026-01-20 14:48:56.306 225859 DEBUG nova.compute.manager [req-1defb927-5b7a-454c-bb4b-437cc9b76850 req-fc38b063-5446-4aac-9ebb-fafe81e2b080 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 75736b87-b14e-45b7-b43b-5129cf7d3279] No waiting events found dispatching network-vif-plugged-d3a9a684-c9a7-4abc-a085-9dcd17bfc2e6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:48:56 compute-1 nova_compute[225855]: 2026-01-20 14:48:56.306 225859 WARNING nova.compute.manager [req-1defb927-5b7a-454c-bb4b-437cc9b76850 req-fc38b063-5446-4aac-9ebb-fafe81e2b080 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 75736b87-b14e-45b7-b43b-5129cf7d3279] Received unexpected event network-vif-plugged-d3a9a684-c9a7-4abc-a085-9dcd17bfc2e6 for instance with vm_state resized and task_state None.
Jan 20 14:48:56 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:48:56 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:48:56 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:48:56.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:48:56 compute-1 ceph-mon[81775]: pgmap v1827: 321 pgs: 321 active+clean; 215 MiB data, 877 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 369 op/s
Jan 20 14:48:57 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e242 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:48:57 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:48:57 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:48:57 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:48:57.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:48:57 compute-1 nova_compute[225855]: 2026-01-20 14:48:57.873 225859 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1768920522.8722076, 7efaa6b8-d1bd-4954-83ec-adcdb8e392bf => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:48:57 compute-1 nova_compute[225855]: 2026-01-20 14:48:57.874 225859 INFO nova.compute.manager [-] [instance: 7efaa6b8-d1bd-4954-83ec-adcdb8e392bf] VM Stopped (Lifecycle Event)
Jan 20 14:48:57 compute-1 nova_compute[225855]: 2026-01-20 14:48:57.906 225859 DEBUG nova.compute.manager [None req-c036993b-9bf6-4549-91a7-0cbadab63652 - - - - - -] [instance: 7efaa6b8-d1bd-4954-83ec-adcdb8e392bf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:48:58 compute-1 nova_compute[225855]: 2026-01-20 14:48:58.118 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:48:58 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:48:58 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:48:58 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:48:58.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:48:59 compute-1 ceph-mon[81775]: pgmap v1828: 321 pgs: 321 active+clean; 222 MiB data, 886 MiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 2.4 MiB/s wr, 421 op/s
Jan 20 14:48:59 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:48:59 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:48:59 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:48:59.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:49:00 compute-1 nova_compute[225855]: 2026-01-20 14:49:00.729 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:49:00 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:49:00 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:49:00 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:49:00.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:49:01 compute-1 ceph-mon[81775]: pgmap v1829: 321 pgs: 321 active+clean; 230 MiB data, 886 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 2.0 MiB/s wr, 402 op/s
Jan 20 14:49:01 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e243 e243: 3 total, 3 up, 3 in
Jan 20 14:49:01 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:49:01.237 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=33, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '12:bb:42', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '06:92:24:f7:15:56'}, ipsec=False) old=SB_Global(nb_cfg=32) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:49:01 compute-1 nova_compute[225855]: 2026-01-20 14:49:01.237 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:49:01 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:49:01.238 140354 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 20 14:49:01 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:49:01 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:49:01 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:49:01.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:49:01 compute-1 ovn_controller[130490]: 2026-01-20T14:49:01Z|00382|binding|INFO|Releasing lport 9e775c45-1646-436d-a0cb-a5b5ec356e1b from this chassis (sb_readonly=0)
Jan 20 14:49:01 compute-1 nova_compute[225855]: 2026-01-20 14:49:01.794 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:49:02 compute-1 ceph-mon[81775]: osdmap e243: 3 total, 3 up, 3 in
Jan 20 14:49:02 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/4043656728' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:49:02 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e243 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:49:02 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:49:02 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:49:02 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:49:02.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:49:03 compute-1 nova_compute[225855]: 2026-01-20 14:49:03.121 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:49:03 compute-1 ceph-mon[81775]: pgmap v1831: 321 pgs: 321 active+clean; 248 MiB data, 902 MiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 2.6 MiB/s wr, 401 op/s
Jan 20 14:49:03 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:49:03 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:49:03 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:49:03.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:49:04 compute-1 podman[264798]: 2026-01-20 14:49:04.127261559 +0000 UTC m=+0.153636258 container health_status 72be95c12e041eb2cd1a16ad70dfa1391d9f03a5f3180a614ead9459dcfcd46f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 20 14:49:04 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:49:04 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:49:04 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:49:04.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:49:04 compute-1 nova_compute[225855]: 2026-01-20 14:49:04.889 225859 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1768920529.8875377, 6586bc3e-3a94-4d22-8e8c-713a86a956fb => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:49:04 compute-1 nova_compute[225855]: 2026-01-20 14:49:04.889 225859 INFO nova.compute.manager [-] [instance: 6586bc3e-3a94-4d22-8e8c-713a86a956fb] VM Stopped (Lifecycle Event)
Jan 20 14:49:04 compute-1 nova_compute[225855]: 2026-01-20 14:49:04.917 225859 DEBUG nova.compute.manager [None req-79affb54-6c9e-4259-9844-0b9906e85fc1 - - - - - -] [instance: 6586bc3e-3a94-4d22-8e8c-713a86a956fb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:49:05 compute-1 ceph-mon[81775]: pgmap v1832: 321 pgs: 321 active+clean; 248 MiB data, 902 MiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 2.6 MiB/s wr, 356 op/s
Jan 20 14:49:05 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:49:05 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:49:05 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:49:05.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:49:05 compute-1 nova_compute[225855]: 2026-01-20 14:49:05.730 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:49:06 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:49:06 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:49:06 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:49:06.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:49:07 compute-1 ovn_controller[130490]: 2026-01-20T14:49:07Z|00050|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:22:f9:d2 10.100.0.4
Jan 20 14:49:07 compute-1 nova_compute[225855]: 2026-01-20 14:49:07.331 225859 DEBUG oslo_concurrency.lockutils [None req-d8dff4d9-0f65-4fc2-a251-6a9f41e97b0f 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Acquiring lock "75736b87-b14e-45b7-b43b-5129cf7d3279" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:49:07 compute-1 nova_compute[225855]: 2026-01-20 14:49:07.332 225859 DEBUG oslo_concurrency.lockutils [None req-d8dff4d9-0f65-4fc2-a251-6a9f41e97b0f 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Lock "75736b87-b14e-45b7-b43b-5129cf7d3279" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:49:07 compute-1 nova_compute[225855]: 2026-01-20 14:49:07.332 225859 DEBUG oslo_concurrency.lockutils [None req-d8dff4d9-0f65-4fc2-a251-6a9f41e97b0f 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Acquiring lock "75736b87-b14e-45b7-b43b-5129cf7d3279-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:49:07 compute-1 nova_compute[225855]: 2026-01-20 14:49:07.333 225859 DEBUG oslo_concurrency.lockutils [None req-d8dff4d9-0f65-4fc2-a251-6a9f41e97b0f 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Lock "75736b87-b14e-45b7-b43b-5129cf7d3279-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:49:07 compute-1 nova_compute[225855]: 2026-01-20 14:49:07.333 225859 DEBUG oslo_concurrency.lockutils [None req-d8dff4d9-0f65-4fc2-a251-6a9f41e97b0f 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Lock "75736b87-b14e-45b7-b43b-5129cf7d3279-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:49:07 compute-1 ceph-mon[81775]: pgmap v1833: 321 pgs: 321 active+clean; 248 MiB data, 902 MiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 2.6 MiB/s wr, 167 op/s
Jan 20 14:49:07 compute-1 nova_compute[225855]: 2026-01-20 14:49:07.334 225859 INFO nova.compute.manager [None req-d8dff4d9-0f65-4fc2-a251-6a9f41e97b0f 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] [instance: 75736b87-b14e-45b7-b43b-5129cf7d3279] Terminating instance
Jan 20 14:49:07 compute-1 nova_compute[225855]: 2026-01-20 14:49:07.335 225859 DEBUG nova.compute.manager [None req-d8dff4d9-0f65-4fc2-a251-6a9f41e97b0f 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] [instance: 75736b87-b14e-45b7-b43b-5129cf7d3279] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 20 14:49:07 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e243 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:49:07 compute-1 kernel: tapd3a9a684-c9 (unregistering): left promiscuous mode
Jan 20 14:49:07 compute-1 NetworkManager[49104]: <info>  [1768920547.3914] device (tapd3a9a684-c9): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 20 14:49:07 compute-1 ovn_controller[130490]: 2026-01-20T14:49:07Z|00383|binding|INFO|Releasing lport d3a9a684-c9a7-4abc-a085-9dcd17bfc2e6 from this chassis (sb_readonly=0)
Jan 20 14:49:07 compute-1 ovn_controller[130490]: 2026-01-20T14:49:07Z|00384|binding|INFO|Setting lport d3a9a684-c9a7-4abc-a085-9dcd17bfc2e6 down in Southbound
Jan 20 14:49:07 compute-1 nova_compute[225855]: 2026-01-20 14:49:07.416 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:49:07 compute-1 ovn_controller[130490]: 2026-01-20T14:49:07Z|00385|binding|INFO|Removing iface tapd3a9a684-c9 ovn-installed in OVS
Jan 20 14:49:07 compute-1 nova_compute[225855]: 2026-01-20 14:49:07.418 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:49:07 compute-1 nova_compute[225855]: 2026-01-20 14:49:07.433 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:49:07 compute-1 systemd[1]: machine-qemu\x2d44\x2dinstance\x2d0000005e.scope: Deactivated successfully.
Jan 20 14:49:07 compute-1 systemd[1]: machine-qemu\x2d44\x2dinstance\x2d0000005e.scope: Consumed 12.920s CPU time.
Jan 20 14:49:07 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:49:07.464 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:22:f9:d2 10.100.0.4'], port_security=['fa:16:3e:22:f9:d2 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '75736b87-b14e-45b7-b43b-5129cf7d3279', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-762e1859-4db4-4d9e-b66f-d50316f80df4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1c5f03d46c0c4162a3b2f1530850bb6c', 'neutron:revision_number': '14', 'neutron:security_group_ids': '80535eda-fa59-4edc-8e3d-9bfea6517730', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.180', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2474a8ca-bb96-4cae-9133-23419b81a9fc, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], logical_port=d3a9a684-c9a7-4abc-a085-9dcd17bfc2e6) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:49:07 compute-1 systemd-machined[194361]: Machine qemu-44-instance-0000005e terminated.
Jan 20 14:49:07 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:49:07.466 140354 INFO neutron.agent.ovn.metadata.agent [-] Port d3a9a684-c9a7-4abc-a085-9dcd17bfc2e6 in datapath 762e1859-4db4-4d9e-b66f-d50316f80df4 unbound from our chassis
Jan 20 14:49:07 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:49:07.468 140354 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 762e1859-4db4-4d9e-b66f-d50316f80df4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 20 14:49:07 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:49:07.469 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[8c43fcaa-ad2c-424f-806d-7e9819bd6304]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:49:07 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:49:07.470 140354 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-762e1859-4db4-4d9e-b66f-d50316f80df4 namespace which is not needed anymore
Jan 20 14:49:07 compute-1 nova_compute[225855]: 2026-01-20 14:49:07.578 225859 INFO nova.virt.libvirt.driver [-] [instance: 75736b87-b14e-45b7-b43b-5129cf7d3279] Instance destroyed successfully.
Jan 20 14:49:07 compute-1 nova_compute[225855]: 2026-01-20 14:49:07.579 225859 DEBUG nova.objects.instance [None req-d8dff4d9-0f65-4fc2-a251-6a9f41e97b0f 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Lazy-loading 'resources' on Instance uuid 75736b87-b14e-45b7-b43b-5129cf7d3279 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:49:07 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:49:07 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:49:07 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:49:07.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:49:07 compute-1 neutron-haproxy-ovnmeta-762e1859-4db4-4d9e-b66f-d50316f80df4[264778]: [NOTICE]   (264782) : haproxy version is 2.8.14-c23fe91
Jan 20 14:49:07 compute-1 neutron-haproxy-ovnmeta-762e1859-4db4-4d9e-b66f-d50316f80df4[264778]: [NOTICE]   (264782) : path to executable is /usr/sbin/haproxy
Jan 20 14:49:07 compute-1 neutron-haproxy-ovnmeta-762e1859-4db4-4d9e-b66f-d50316f80df4[264778]: [WARNING]  (264782) : Exiting Master process...
Jan 20 14:49:07 compute-1 neutron-haproxy-ovnmeta-762e1859-4db4-4d9e-b66f-d50316f80df4[264778]: [ALERT]    (264782) : Current worker (264784) exited with code 143 (Terminated)
Jan 20 14:49:07 compute-1 neutron-haproxy-ovnmeta-762e1859-4db4-4d9e-b66f-d50316f80df4[264778]: [WARNING]  (264782) : All workers exited. Exiting... (0)
Jan 20 14:49:07 compute-1 nova_compute[225855]: 2026-01-20 14:49:07.612 225859 DEBUG nova.virt.libvirt.vif [None req-d8dff4d9-0f65-4fc2-a251-6a9f41e97b0f 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T14:45:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1202945337',display_name='tempest-ServerActionsTestJSON-server-1202945337',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1202945337',id=94,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLgEzx5mLsSqRL7L9WKOzM+WdeJ40U103wY9H3VMZ41G/sN5tQtQSt9lXWKTyc6pt00bfKD0E9GPugNMpy+dzSSpK23o3CgadkzAfAvjQCeCPOSp3fX13FGApomGd1HRCQ==',key_name='tempest-keypair-1602241722',keypairs=<?>,launch_index=0,launched_at=2026-01-20T14:48:53Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='1c5f03d46c0c4162a3b2f1530850bb6c',ramdisk_id='',reservation_id='r-luaqa362',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1020442335',owner_user_name='tempest-ServerActionsTestJSON-1020442335-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T14:49:02Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='3e9278fdb9e645b7938f3edb20c4d3cf',uuid=75736b87-b14e-45b7-b43b-5129cf7d3279,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d3a9a684-c9a7-4abc-a085-9dcd17bfc2e6", "address": "fa:16:3e:22:f9:d2", "network": {"id": "762e1859-4db4-4d9e-b66f-d50316f80df4", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1917526237-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.180", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c5f03d46c0c4162a3b2f1530850bb6c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd3a9a684-c9", "ovs_interfaceid": "d3a9a684-c9a7-4abc-a085-9dcd17bfc2e6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 20 14:49:07 compute-1 nova_compute[225855]: 2026-01-20 14:49:07.612 225859 DEBUG nova.network.os_vif_util [None req-d8dff4d9-0f65-4fc2-a251-6a9f41e97b0f 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Converting VIF {"id": "d3a9a684-c9a7-4abc-a085-9dcd17bfc2e6", "address": "fa:16:3e:22:f9:d2", "network": {"id": "762e1859-4db4-4d9e-b66f-d50316f80df4", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1917526237-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.180", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c5f03d46c0c4162a3b2f1530850bb6c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd3a9a684-c9", "ovs_interfaceid": "d3a9a684-c9a7-4abc-a085-9dcd17bfc2e6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:49:07 compute-1 nova_compute[225855]: 2026-01-20 14:49:07.613 225859 DEBUG nova.network.os_vif_util [None req-d8dff4d9-0f65-4fc2-a251-6a9f41e97b0f 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:22:f9:d2,bridge_name='br-int',has_traffic_filtering=True,id=d3a9a684-c9a7-4abc-a085-9dcd17bfc2e6,network=Network(762e1859-4db4-4d9e-b66f-d50316f80df4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd3a9a684-c9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:49:07 compute-1 nova_compute[225855]: 2026-01-20 14:49:07.613 225859 DEBUG os_vif [None req-d8dff4d9-0f65-4fc2-a251-6a9f41e97b0f 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:22:f9:d2,bridge_name='br-int',has_traffic_filtering=True,id=d3a9a684-c9a7-4abc-a085-9dcd17bfc2e6,network=Network(762e1859-4db4-4d9e-b66f-d50316f80df4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd3a9a684-c9') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 20 14:49:07 compute-1 nova_compute[225855]: 2026-01-20 14:49:07.615 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:49:07 compute-1 nova_compute[225855]: 2026-01-20 14:49:07.615 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd3a9a684-c9, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:49:07 compute-1 nova_compute[225855]: 2026-01-20 14:49:07.616 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:49:07 compute-1 systemd[1]: libpod-ee1710db7c595c0c81475ce7f12613d2dc3b797c65c02c5878ca7132755baf03.scope: Deactivated successfully.
Jan 20 14:49:07 compute-1 nova_compute[225855]: 2026-01-20 14:49:07.618 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:49:07 compute-1 podman[264851]: 2026-01-20 14:49:07.62024937 +0000 UTC m=+0.058222515 container died ee1710db7c595c0c81475ce7f12613d2dc3b797c65c02c5878ca7132755baf03 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-762e1859-4db4-4d9e-b66f-d50316f80df4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 20 14:49:07 compute-1 nova_compute[225855]: 2026-01-20 14:49:07.621 225859 INFO os_vif [None req-d8dff4d9-0f65-4fc2-a251-6a9f41e97b0f 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:22:f9:d2,bridge_name='br-int',has_traffic_filtering=True,id=d3a9a684-c9a7-4abc-a085-9dcd17bfc2e6,network=Network(762e1859-4db4-4d9e-b66f-d50316f80df4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd3a9a684-c9')
Jan 20 14:49:07 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ee1710db7c595c0c81475ce7f12613d2dc3b797c65c02c5878ca7132755baf03-userdata-shm.mount: Deactivated successfully.
Jan 20 14:49:07 compute-1 systemd[1]: var-lib-containers-storage-overlay-55ffadcb61cfffe61ed679266c9eaf04220f431cb7218fbabad4d52c6fc4d512-merged.mount: Deactivated successfully.
Jan 20 14:49:07 compute-1 podman[264851]: 2026-01-20 14:49:07.664136299 +0000 UTC m=+0.102109444 container cleanup ee1710db7c595c0c81475ce7f12613d2dc3b797c65c02c5878ca7132755baf03 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-762e1859-4db4-4d9e-b66f-d50316f80df4, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 20 14:49:07 compute-1 systemd[1]: libpod-conmon-ee1710db7c595c0c81475ce7f12613d2dc3b797c65c02c5878ca7132755baf03.scope: Deactivated successfully.
Jan 20 14:49:07 compute-1 podman[264910]: 2026-01-20 14:49:07.72546094 +0000 UTC m=+0.042112350 container remove ee1710db7c595c0c81475ce7f12613d2dc3b797c65c02c5878ca7132755baf03 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-762e1859-4db4-4d9e-b66f-d50316f80df4, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 20 14:49:07 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:49:07.730 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[646f28a9-84b1-446a-80e4-b71d9f055138]: (4, ('Tue Jan 20 02:49:07 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-762e1859-4db4-4d9e-b66f-d50316f80df4 (ee1710db7c595c0c81475ce7f12613d2dc3b797c65c02c5878ca7132755baf03)\nee1710db7c595c0c81475ce7f12613d2dc3b797c65c02c5878ca7132755baf03\nTue Jan 20 02:49:07 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-762e1859-4db4-4d9e-b66f-d50316f80df4 (ee1710db7c595c0c81475ce7f12613d2dc3b797c65c02c5878ca7132755baf03)\nee1710db7c595c0c81475ce7f12613d2dc3b797c65c02c5878ca7132755baf03\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:49:07 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:49:07.732 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[465a40b6-ef20-4fd4-828c-0e2170fe4144]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:49:07 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:49:07.733 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap762e1859-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:49:07 compute-1 kernel: tap762e1859-40: left promiscuous mode
Jan 20 14:49:07 compute-1 nova_compute[225855]: 2026-01-20 14:49:07.735 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:49:07 compute-1 nova_compute[225855]: 2026-01-20 14:49:07.749 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:49:07 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:49:07.754 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[422e665f-6d13-4fcc-86ad-c65c832cd366]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:49:07 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:49:07.766 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[920026fc-3902-4b41-bc0f-8193a0b1c677]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:49:07 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:49:07.767 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[3b6e8402-8ba4-4acb-a4d0-093f87490a1b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:49:07 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:49:07.786 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[1f2aff16-742d-4e21-978b-ddbe977d1164]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 557233, 'reachable_time': 30637, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 264925, 'error': None, 'target': 'ovnmeta-762e1859-4db4-4d9e-b66f-d50316f80df4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:49:07 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:49:07.789 140466 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-762e1859-4db4-4d9e-b66f-d50316f80df4 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 20 14:49:07 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:49:07.789 140466 DEBUG oslo.privsep.daemon [-] privsep: reply[072e108f-485a-49e9-bf8f-dfe5512946e8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:49:07 compute-1 systemd[1]: run-netns-ovnmeta\x2d762e1859\x2d4db4\x2d4d9e\x2db66f\x2dd50316f80df4.mount: Deactivated successfully.
Jan 20 14:49:08 compute-1 nova_compute[225855]: 2026-01-20 14:49:08.088 225859 DEBUG nova.compute.manager [req-1f5aa4b2-4c37-479c-87c2-ec511ad8fe27 req-2d662357-cf98-4ed9-a5ae-55130c883826 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 75736b87-b14e-45b7-b43b-5129cf7d3279] Received event network-vif-unplugged-d3a9a684-c9a7-4abc-a085-9dcd17bfc2e6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:49:08 compute-1 nova_compute[225855]: 2026-01-20 14:49:08.089 225859 DEBUG oslo_concurrency.lockutils [req-1f5aa4b2-4c37-479c-87c2-ec511ad8fe27 req-2d662357-cf98-4ed9-a5ae-55130c883826 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "75736b87-b14e-45b7-b43b-5129cf7d3279-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:49:08 compute-1 nova_compute[225855]: 2026-01-20 14:49:08.089 225859 DEBUG oslo_concurrency.lockutils [req-1f5aa4b2-4c37-479c-87c2-ec511ad8fe27 req-2d662357-cf98-4ed9-a5ae-55130c883826 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "75736b87-b14e-45b7-b43b-5129cf7d3279-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:49:08 compute-1 nova_compute[225855]: 2026-01-20 14:49:08.089 225859 DEBUG oslo_concurrency.lockutils [req-1f5aa4b2-4c37-479c-87c2-ec511ad8fe27 req-2d662357-cf98-4ed9-a5ae-55130c883826 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "75736b87-b14e-45b7-b43b-5129cf7d3279-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:49:08 compute-1 nova_compute[225855]: 2026-01-20 14:49:08.089 225859 DEBUG nova.compute.manager [req-1f5aa4b2-4c37-479c-87c2-ec511ad8fe27 req-2d662357-cf98-4ed9-a5ae-55130c883826 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 75736b87-b14e-45b7-b43b-5129cf7d3279] No waiting events found dispatching network-vif-unplugged-d3a9a684-c9a7-4abc-a085-9dcd17bfc2e6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:49:08 compute-1 nova_compute[225855]: 2026-01-20 14:49:08.089 225859 DEBUG nova.compute.manager [req-1f5aa4b2-4c37-479c-87c2-ec511ad8fe27 req-2d662357-cf98-4ed9-a5ae-55130c883826 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 75736b87-b14e-45b7-b43b-5129cf7d3279] Received event network-vif-unplugged-d3a9a684-c9a7-4abc-a085-9dcd17bfc2e6 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 20 14:49:08 compute-1 nova_compute[225855]: 2026-01-20 14:49:08.221 225859 INFO nova.virt.libvirt.driver [None req-d8dff4d9-0f65-4fc2-a251-6a9f41e97b0f 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] [instance: 75736b87-b14e-45b7-b43b-5129cf7d3279] Deleting instance files /var/lib/nova/instances/75736b87-b14e-45b7-b43b-5129cf7d3279_del
Jan 20 14:49:08 compute-1 nova_compute[225855]: 2026-01-20 14:49:08.221 225859 INFO nova.virt.libvirt.driver [None req-d8dff4d9-0f65-4fc2-a251-6a9f41e97b0f 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] [instance: 75736b87-b14e-45b7-b43b-5129cf7d3279] Deletion of /var/lib/nova/instances/75736b87-b14e-45b7-b43b-5129cf7d3279_del complete
Jan 20 14:49:08 compute-1 nova_compute[225855]: 2026-01-20 14:49:08.295 225859 INFO nova.compute.manager [None req-d8dff4d9-0f65-4fc2-a251-6a9f41e97b0f 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] [instance: 75736b87-b14e-45b7-b43b-5129cf7d3279] Took 0.96 seconds to destroy the instance on the hypervisor.
Jan 20 14:49:08 compute-1 nova_compute[225855]: 2026-01-20 14:49:08.295 225859 DEBUG oslo.service.loopingcall [None req-d8dff4d9-0f65-4fc2-a251-6a9f41e97b0f 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 20 14:49:08 compute-1 nova_compute[225855]: 2026-01-20 14:49:08.295 225859 DEBUG nova.compute.manager [-] [instance: 75736b87-b14e-45b7-b43b-5129cf7d3279] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 20 14:49:08 compute-1 nova_compute[225855]: 2026-01-20 14:49:08.295 225859 DEBUG nova.network.neutron [-] [instance: 75736b87-b14e-45b7-b43b-5129cf7d3279] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 20 14:49:08 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:49:08 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:49:08 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:49:08.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:49:09 compute-1 sudo[264927]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:49:09 compute-1 sudo[264927]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:49:09 compute-1 sudo[264927]: pam_unix(sudo:session): session closed for user root
Jan 20 14:49:09 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:49:09.240 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5ffd4ac3-9266-4927-98ad-20a17782c725, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '33'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:49:09 compute-1 sudo[264952]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:49:09 compute-1 sudo[264952]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:49:09 compute-1 sudo[264952]: pam_unix(sudo:session): session closed for user root
Jan 20 14:49:09 compute-1 ceph-mon[81775]: pgmap v1834: 321 pgs: 321 active+clean; 248 MiB data, 902 MiB used, 20 GiB / 21 GiB avail; 745 KiB/s rd, 1.7 MiB/s wr, 94 op/s
Jan 20 14:49:09 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e244 e244: 3 total, 3 up, 3 in
Jan 20 14:49:09 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:49:09 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:49:09 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:49:09.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:49:10 compute-1 nova_compute[225855]: 2026-01-20 14:49:10.291 225859 DEBUG nova.compute.manager [req-18149ca3-5934-47fc-b6b4-80be69e6df90 req-be87d01f-bfcb-446e-bb0e-ab37c82e1d03 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 75736b87-b14e-45b7-b43b-5129cf7d3279] Received event network-vif-plugged-d3a9a684-c9a7-4abc-a085-9dcd17bfc2e6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:49:10 compute-1 nova_compute[225855]: 2026-01-20 14:49:10.292 225859 DEBUG oslo_concurrency.lockutils [req-18149ca3-5934-47fc-b6b4-80be69e6df90 req-be87d01f-bfcb-446e-bb0e-ab37c82e1d03 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "75736b87-b14e-45b7-b43b-5129cf7d3279-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:49:10 compute-1 nova_compute[225855]: 2026-01-20 14:49:10.292 225859 DEBUG oslo_concurrency.lockutils [req-18149ca3-5934-47fc-b6b4-80be69e6df90 req-be87d01f-bfcb-446e-bb0e-ab37c82e1d03 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "75736b87-b14e-45b7-b43b-5129cf7d3279-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:49:10 compute-1 nova_compute[225855]: 2026-01-20 14:49:10.292 225859 DEBUG oslo_concurrency.lockutils [req-18149ca3-5934-47fc-b6b4-80be69e6df90 req-be87d01f-bfcb-446e-bb0e-ab37c82e1d03 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "75736b87-b14e-45b7-b43b-5129cf7d3279-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:49:10 compute-1 nova_compute[225855]: 2026-01-20 14:49:10.292 225859 DEBUG nova.compute.manager [req-18149ca3-5934-47fc-b6b4-80be69e6df90 req-be87d01f-bfcb-446e-bb0e-ab37c82e1d03 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 75736b87-b14e-45b7-b43b-5129cf7d3279] No waiting events found dispatching network-vif-plugged-d3a9a684-c9a7-4abc-a085-9dcd17bfc2e6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:49:10 compute-1 nova_compute[225855]: 2026-01-20 14:49:10.293 225859 WARNING nova.compute.manager [req-18149ca3-5934-47fc-b6b4-80be69e6df90 req-be87d01f-bfcb-446e-bb0e-ab37c82e1d03 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 75736b87-b14e-45b7-b43b-5129cf7d3279] Received unexpected event network-vif-plugged-d3a9a684-c9a7-4abc-a085-9dcd17bfc2e6 for instance with vm_state active and task_state deleting.
Jan 20 14:49:10 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/1889894140' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:49:10 compute-1 ceph-mon[81775]: osdmap e244: 3 total, 3 up, 3 in
Jan 20 14:49:10 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/236200116' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:49:10 compute-1 nova_compute[225855]: 2026-01-20 14:49:10.630 225859 DEBUG nova.network.neutron [-] [instance: 75736b87-b14e-45b7-b43b-5129cf7d3279] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:49:10 compute-1 nova_compute[225855]: 2026-01-20 14:49:10.653 225859 INFO nova.compute.manager [-] [instance: 75736b87-b14e-45b7-b43b-5129cf7d3279] Took 2.36 seconds to deallocate network for instance.
Jan 20 14:49:10 compute-1 nova_compute[225855]: 2026-01-20 14:49:10.759 225859 DEBUG oslo_concurrency.lockutils [None req-d8dff4d9-0f65-4fc2-a251-6a9f41e97b0f 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:49:10 compute-1 nova_compute[225855]: 2026-01-20 14:49:10.760 225859 DEBUG oslo_concurrency.lockutils [None req-d8dff4d9-0f65-4fc2-a251-6a9f41e97b0f 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:49:10 compute-1 nova_compute[225855]: 2026-01-20 14:49:10.767 225859 DEBUG oslo_concurrency.lockutils [None req-d8dff4d9-0f65-4fc2-a251-6a9f41e97b0f 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.008s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:49:10 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:49:10 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:49:10 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:49:10.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:49:10 compute-1 nova_compute[225855]: 2026-01-20 14:49:10.776 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:49:10 compute-1 nova_compute[225855]: 2026-01-20 14:49:10.865 225859 INFO nova.scheduler.client.report [None req-d8dff4d9-0f65-4fc2-a251-6a9f41e97b0f 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Deleted allocations for instance 75736b87-b14e-45b7-b43b-5129cf7d3279
Jan 20 14:49:10 compute-1 nova_compute[225855]: 2026-01-20 14:49:10.946 225859 DEBUG oslo_concurrency.lockutils [None req-d8dff4d9-0f65-4fc2-a251-6a9f41e97b0f 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Lock "75736b87-b14e-45b7-b43b-5129cf7d3279" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.614s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:49:11 compute-1 nova_compute[225855]: 2026-01-20 14:49:11.051 225859 DEBUG nova.compute.manager [req-f8b41801-f187-4b08-b871-3094109b926d req-f98e7df1-1fb5-4220-89b9-1ff17541ab00 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 75736b87-b14e-45b7-b43b-5129cf7d3279] Received event network-vif-deleted-d3a9a684-c9a7-4abc-a085-9dcd17bfc2e6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:49:11 compute-1 ceph-mon[81775]: pgmap v1836: 321 pgs: 321 active+clean; 237 MiB data, 898 MiB used, 20 GiB / 21 GiB avail; 587 KiB/s rd, 503 KiB/s wr, 73 op/s
Jan 20 14:49:11 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:49:11 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:49:11 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:49:11.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:49:12 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e244 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:49:12 compute-1 nova_compute[225855]: 2026-01-20 14:49:12.618 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:49:12 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:49:12 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:49:12 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:49:12.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:49:13 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:49:13 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:49:13 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:49:13.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:49:13 compute-1 ceph-mon[81775]: pgmap v1837: 321 pgs: 321 active+clean; 188 MiB data, 869 MiB used, 20 GiB / 21 GiB avail; 692 KiB/s rd, 2.0 MiB/s wr, 137 op/s
Jan 20 14:49:14 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:49:14 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:49:14 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:49:14.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:49:14 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/95823729' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 14:49:14 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/95823729' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 14:49:15 compute-1 podman[264981]: 2026-01-20 14:49:15.021916779 +0000 UTC m=+0.061875198 container health_status 533cd8e0c3b8b7f910f15cef49e7f879ab46c12ce646a35628c15f13725eb500 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 20 14:49:15 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:49:15 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:49:15 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:49:15.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:49:15 compute-1 nova_compute[225855]: 2026-01-20 14:49:15.778 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:49:16 compute-1 ceph-mon[81775]: pgmap v1838: 321 pgs: 321 active+clean; 134 MiB data, 837 MiB used, 20 GiB / 21 GiB avail; 709 KiB/s rd, 2.2 MiB/s wr, 165 op/s
Jan 20 14:49:16 compute-1 nova_compute[225855]: 2026-01-20 14:49:16.273 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:49:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:49:16.409 140354 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:49:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:49:16.410 140354 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:49:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:49:16.410 140354 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:49:16 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:49:16 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:49:16 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:49:16.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:49:16 compute-1 nova_compute[225855]: 2026-01-20 14:49:16.950 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:49:17 compute-1 ceph-mon[81775]: pgmap v1839: 321 pgs: 321 active+clean; 107 MiB data, 836 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 2.2 MiB/s wr, 258 op/s
Jan 20 14:49:17 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e244 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:49:17 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:49:17 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:49:17 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:49:17.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:49:17 compute-1 nova_compute[225855]: 2026-01-20 14:49:17.618 225859 DEBUG oslo_concurrency.lockutils [None req-a8cdcb09-d473-426d-b0ef-0d4cc8c09fe2 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Acquiring lock "fdeb13eb-edb4-4bff-aeef-2671ba9d4618" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:49:17 compute-1 nova_compute[225855]: 2026-01-20 14:49:17.619 225859 DEBUG oslo_concurrency.lockutils [None req-a8cdcb09-d473-426d-b0ef-0d4cc8c09fe2 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Lock "fdeb13eb-edb4-4bff-aeef-2671ba9d4618" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:49:17 compute-1 nova_compute[225855]: 2026-01-20 14:49:17.620 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:49:17 compute-1 nova_compute[225855]: 2026-01-20 14:49:17.639 225859 DEBUG nova.compute.manager [None req-a8cdcb09-d473-426d-b0ef-0d4cc8c09fe2 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] [instance: fdeb13eb-edb4-4bff-aeef-2671ba9d4618] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 20 14:49:17 compute-1 nova_compute[225855]: 2026-01-20 14:49:17.798 225859 DEBUG oslo_concurrency.lockutils [None req-a8cdcb09-d473-426d-b0ef-0d4cc8c09fe2 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:49:17 compute-1 nova_compute[225855]: 2026-01-20 14:49:17.799 225859 DEBUG oslo_concurrency.lockutils [None req-a8cdcb09-d473-426d-b0ef-0d4cc8c09fe2 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:49:17 compute-1 nova_compute[225855]: 2026-01-20 14:49:17.805 225859 DEBUG nova.virt.hardware [None req-a8cdcb09-d473-426d-b0ef-0d4cc8c09fe2 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 20 14:49:17 compute-1 nova_compute[225855]: 2026-01-20 14:49:17.805 225859 INFO nova.compute.claims [None req-a8cdcb09-d473-426d-b0ef-0d4cc8c09fe2 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] [instance: fdeb13eb-edb4-4bff-aeef-2671ba9d4618] Claim successful on node compute-1.ctlplane.example.com
Jan 20 14:49:17 compute-1 nova_compute[225855]: 2026-01-20 14:49:17.943 225859 DEBUG oslo_concurrency.processutils [None req-a8cdcb09-d473-426d-b0ef-0d4cc8c09fe2 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:49:18 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/3734497401' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:49:18 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:49:18 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3509042914' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:49:18 compute-1 nova_compute[225855]: 2026-01-20 14:49:18.527 225859 DEBUG oslo_concurrency.processutils [None req-a8cdcb09-d473-426d-b0ef-0d4cc8c09fe2 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.584s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:49:18 compute-1 nova_compute[225855]: 2026-01-20 14:49:18.533 225859 DEBUG nova.compute.provider_tree [None req-a8cdcb09-d473-426d-b0ef-0d4cc8c09fe2 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:49:18 compute-1 nova_compute[225855]: 2026-01-20 14:49:18.552 225859 DEBUG nova.scheduler.client.report [None req-a8cdcb09-d473-426d-b0ef-0d4cc8c09fe2 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:49:18 compute-1 nova_compute[225855]: 2026-01-20 14:49:18.575 225859 DEBUG oslo_concurrency.lockutils [None req-a8cdcb09-d473-426d-b0ef-0d4cc8c09fe2 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.776s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:49:18 compute-1 nova_compute[225855]: 2026-01-20 14:49:18.576 225859 DEBUG nova.compute.manager [None req-a8cdcb09-d473-426d-b0ef-0d4cc8c09fe2 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] [instance: fdeb13eb-edb4-4bff-aeef-2671ba9d4618] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 20 14:49:18 compute-1 nova_compute[225855]: 2026-01-20 14:49:18.627 225859 DEBUG nova.compute.manager [None req-a8cdcb09-d473-426d-b0ef-0d4cc8c09fe2 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] [instance: fdeb13eb-edb4-4bff-aeef-2671ba9d4618] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 20 14:49:18 compute-1 nova_compute[225855]: 2026-01-20 14:49:18.628 225859 DEBUG nova.network.neutron [None req-a8cdcb09-d473-426d-b0ef-0d4cc8c09fe2 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] [instance: fdeb13eb-edb4-4bff-aeef-2671ba9d4618] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 20 14:49:18 compute-1 nova_compute[225855]: 2026-01-20 14:49:18.656 225859 INFO nova.virt.libvirt.driver [None req-a8cdcb09-d473-426d-b0ef-0d4cc8c09fe2 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] [instance: fdeb13eb-edb4-4bff-aeef-2671ba9d4618] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 20 14:49:18 compute-1 nova_compute[225855]: 2026-01-20 14:49:18.682 225859 DEBUG nova.compute.manager [None req-a8cdcb09-d473-426d-b0ef-0d4cc8c09fe2 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] [instance: fdeb13eb-edb4-4bff-aeef-2671ba9d4618] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 20 14:49:18 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:49:18 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:49:18 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:49:18.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:49:18 compute-1 nova_compute[225855]: 2026-01-20 14:49:18.914 225859 DEBUG nova.compute.manager [None req-a8cdcb09-d473-426d-b0ef-0d4cc8c09fe2 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] [instance: fdeb13eb-edb4-4bff-aeef-2671ba9d4618] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 20 14:49:18 compute-1 nova_compute[225855]: 2026-01-20 14:49:18.915 225859 DEBUG nova.virt.libvirt.driver [None req-a8cdcb09-d473-426d-b0ef-0d4cc8c09fe2 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] [instance: fdeb13eb-edb4-4bff-aeef-2671ba9d4618] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 20 14:49:18 compute-1 nova_compute[225855]: 2026-01-20 14:49:18.916 225859 INFO nova.virt.libvirt.driver [None req-a8cdcb09-d473-426d-b0ef-0d4cc8c09fe2 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] [instance: fdeb13eb-edb4-4bff-aeef-2671ba9d4618] Creating image(s)
Jan 20 14:49:18 compute-1 nova_compute[225855]: 2026-01-20 14:49:18.940 225859 DEBUG nova.storage.rbd_utils [None req-a8cdcb09-d473-426d-b0ef-0d4cc8c09fe2 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] rbd image fdeb13eb-edb4-4bff-aeef-2671ba9d4618_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:49:18 compute-1 nova_compute[225855]: 2026-01-20 14:49:18.963 225859 DEBUG nova.storage.rbd_utils [None req-a8cdcb09-d473-426d-b0ef-0d4cc8c09fe2 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] rbd image fdeb13eb-edb4-4bff-aeef-2671ba9d4618_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:49:18 compute-1 nova_compute[225855]: 2026-01-20 14:49:18.986 225859 DEBUG nova.storage.rbd_utils [None req-a8cdcb09-d473-426d-b0ef-0d4cc8c09fe2 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] rbd image fdeb13eb-edb4-4bff-aeef-2671ba9d4618_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:49:18 compute-1 nova_compute[225855]: 2026-01-20 14:49:18.989 225859 DEBUG oslo_concurrency.processutils [None req-a8cdcb09-d473-426d-b0ef-0d4cc8c09fe2 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:49:19 compute-1 nova_compute[225855]: 2026-01-20 14:49:19.049 225859 DEBUG oslo_concurrency.processutils [None req-a8cdcb09-d473-426d-b0ef-0d4cc8c09fe2 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:49:19 compute-1 nova_compute[225855]: 2026-01-20 14:49:19.050 225859 DEBUG oslo_concurrency.lockutils [None req-a8cdcb09-d473-426d-b0ef-0d4cc8c09fe2 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Acquiring lock "82d5c1918fd7c974214c7a48c1793a7a82560462" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:49:19 compute-1 nova_compute[225855]: 2026-01-20 14:49:19.051 225859 DEBUG oslo_concurrency.lockutils [None req-a8cdcb09-d473-426d-b0ef-0d4cc8c09fe2 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:49:19 compute-1 nova_compute[225855]: 2026-01-20 14:49:19.051 225859 DEBUG oslo_concurrency.lockutils [None req-a8cdcb09-d473-426d-b0ef-0d4cc8c09fe2 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:49:19 compute-1 nova_compute[225855]: 2026-01-20 14:49:19.077 225859 DEBUG nova.storage.rbd_utils [None req-a8cdcb09-d473-426d-b0ef-0d4cc8c09fe2 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] rbd image fdeb13eb-edb4-4bff-aeef-2671ba9d4618_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:49:19 compute-1 nova_compute[225855]: 2026-01-20 14:49:19.081 225859 DEBUG oslo_concurrency.processutils [None req-a8cdcb09-d473-426d-b0ef-0d4cc8c09fe2 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 fdeb13eb-edb4-4bff-aeef-2671ba9d4618_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:49:19 compute-1 ceph-mon[81775]: pgmap v1840: 321 pgs: 321 active+clean; 101 MiB data, 829 MiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 2.2 MiB/s wr, 238 op/s
Jan 20 14:49:19 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/3509042914' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:49:19 compute-1 nova_compute[225855]: 2026-01-20 14:49:19.341 225859 DEBUG oslo_concurrency.processutils [None req-a8cdcb09-d473-426d-b0ef-0d4cc8c09fe2 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 fdeb13eb-edb4-4bff-aeef-2671ba9d4618_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.260s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:49:19 compute-1 nova_compute[225855]: 2026-01-20 14:49:19.400 225859 DEBUG nova.storage.rbd_utils [None req-a8cdcb09-d473-426d-b0ef-0d4cc8c09fe2 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] resizing rbd image fdeb13eb-edb4-4bff-aeef-2671ba9d4618_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 20 14:49:19 compute-1 nova_compute[225855]: 2026-01-20 14:49:19.522 225859 DEBUG nova.policy [None req-a8cdcb09-d473-426d-b0ef-0d4cc8c09fe2 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '3e9278fdb9e645b7938f3edb20c4d3cf', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '1c5f03d46c0c4162a3b2f1530850bb6c', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 20 14:49:19 compute-1 nova_compute[225855]: 2026-01-20 14:49:19.530 225859 DEBUG nova.objects.instance [None req-a8cdcb09-d473-426d-b0ef-0d4cc8c09fe2 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Lazy-loading 'migration_context' on Instance uuid fdeb13eb-edb4-4bff-aeef-2671ba9d4618 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:49:19 compute-1 nova_compute[225855]: 2026-01-20 14:49:19.552 225859 DEBUG nova.virt.libvirt.driver [None req-a8cdcb09-d473-426d-b0ef-0d4cc8c09fe2 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] [instance: fdeb13eb-edb4-4bff-aeef-2671ba9d4618] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 20 14:49:19 compute-1 nova_compute[225855]: 2026-01-20 14:49:19.553 225859 DEBUG nova.virt.libvirt.driver [None req-a8cdcb09-d473-426d-b0ef-0d4cc8c09fe2 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] [instance: fdeb13eb-edb4-4bff-aeef-2671ba9d4618] Ensure instance console log exists: /var/lib/nova/instances/fdeb13eb-edb4-4bff-aeef-2671ba9d4618/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 20 14:49:19 compute-1 nova_compute[225855]: 2026-01-20 14:49:19.554 225859 DEBUG oslo_concurrency.lockutils [None req-a8cdcb09-d473-426d-b0ef-0d4cc8c09fe2 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:49:19 compute-1 nova_compute[225855]: 2026-01-20 14:49:19.554 225859 DEBUG oslo_concurrency.lockutils [None req-a8cdcb09-d473-426d-b0ef-0d4cc8c09fe2 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:49:19 compute-1 nova_compute[225855]: 2026-01-20 14:49:19.555 225859 DEBUG oslo_concurrency.lockutils [None req-a8cdcb09-d473-426d-b0ef-0d4cc8c09fe2 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:49:19 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:49:19 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:49:19 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:49:19.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:49:20 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/2531960273' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:49:20 compute-1 nova_compute[225855]: 2026-01-20 14:49:20.780 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:49:20 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:49:20 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:49:20 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:49:20.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:49:21 compute-1 ceph-mon[81775]: pgmap v1841: 321 pgs: 321 active+clean; 88 MiB data, 825 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 1.7 MiB/s wr, 208 op/s
Jan 20 14:49:21 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:49:21 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:49:21 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:49:21.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:49:22 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e244 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:49:22 compute-1 nova_compute[225855]: 2026-01-20 14:49:22.578 225859 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1768920547.5760756, 75736b87-b14e-45b7-b43b-5129cf7d3279 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:49:22 compute-1 nova_compute[225855]: 2026-01-20 14:49:22.578 225859 INFO nova.compute.manager [-] [instance: 75736b87-b14e-45b7-b43b-5129cf7d3279] VM Stopped (Lifecycle Event)
Jan 20 14:49:22 compute-1 nova_compute[225855]: 2026-01-20 14:49:22.600 225859 DEBUG nova.network.neutron [None req-a8cdcb09-d473-426d-b0ef-0d4cc8c09fe2 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] [instance: fdeb13eb-edb4-4bff-aeef-2671ba9d4618] Successfully created port: 6855cb4f-4178-4447-af36-126ade033206 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 20 14:49:22 compute-1 nova_compute[225855]: 2026-01-20 14:49:22.605 225859 DEBUG nova.compute.manager [None req-d7baab77-16c4-4e7e-8eb8-44506a890a56 - - - - - -] [instance: 75736b87-b14e-45b7-b43b-5129cf7d3279] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:49:22 compute-1 nova_compute[225855]: 2026-01-20 14:49:22.622 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:49:22 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:49:22 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:49:22 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:49:22.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:49:23 compute-1 ceph-mon[81775]: pgmap v1842: 321 pgs: 321 active+clean; 109 MiB data, 819 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.3 MiB/s wr, 191 op/s
Jan 20 14:49:23 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:49:23 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:49:23 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:49:23.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:49:24 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:49:24 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:49:24 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:49:24.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:49:24 compute-1 nova_compute[225855]: 2026-01-20 14:49:24.830 225859 DEBUG nova.network.neutron [None req-a8cdcb09-d473-426d-b0ef-0d4cc8c09fe2 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] [instance: fdeb13eb-edb4-4bff-aeef-2671ba9d4618] Successfully updated port: 6855cb4f-4178-4447-af36-126ade033206 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 20 14:49:24 compute-1 nova_compute[225855]: 2026-01-20 14:49:24.850 225859 DEBUG oslo_concurrency.lockutils [None req-a8cdcb09-d473-426d-b0ef-0d4cc8c09fe2 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Acquiring lock "refresh_cache-fdeb13eb-edb4-4bff-aeef-2671ba9d4618" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:49:24 compute-1 nova_compute[225855]: 2026-01-20 14:49:24.850 225859 DEBUG oslo_concurrency.lockutils [None req-a8cdcb09-d473-426d-b0ef-0d4cc8c09fe2 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Acquired lock "refresh_cache-fdeb13eb-edb4-4bff-aeef-2671ba9d4618" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:49:24 compute-1 nova_compute[225855]: 2026-01-20 14:49:24.850 225859 DEBUG nova.network.neutron [None req-a8cdcb09-d473-426d-b0ef-0d4cc8c09fe2 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] [instance: fdeb13eb-edb4-4bff-aeef-2671ba9d4618] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 14:49:24 compute-1 nova_compute[225855]: 2026-01-20 14:49:24.983 225859 DEBUG nova.compute.manager [req-5d4600d2-324d-4c3d-9ccb-031d08986f0d req-96153e43-6aba-458d-845c-a4772785b531 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: fdeb13eb-edb4-4bff-aeef-2671ba9d4618] Received event network-changed-6855cb4f-4178-4447-af36-126ade033206 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:49:24 compute-1 nova_compute[225855]: 2026-01-20 14:49:24.984 225859 DEBUG nova.compute.manager [req-5d4600d2-324d-4c3d-9ccb-031d08986f0d req-96153e43-6aba-458d-845c-a4772785b531 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: fdeb13eb-edb4-4bff-aeef-2671ba9d4618] Refreshing instance network info cache due to event network-changed-6855cb4f-4178-4447-af36-126ade033206. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 14:49:24 compute-1 nova_compute[225855]: 2026-01-20 14:49:24.984 225859 DEBUG oslo_concurrency.lockutils [req-5d4600d2-324d-4c3d-9ccb-031d08986f0d req-96153e43-6aba-458d-845c-a4772785b531 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-fdeb13eb-edb4-4bff-aeef-2671ba9d4618" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:49:25 compute-1 nova_compute[225855]: 2026-01-20 14:49:25.161 225859 DEBUG nova.network.neutron [None req-a8cdcb09-d473-426d-b0ef-0d4cc8c09fe2 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] [instance: fdeb13eb-edb4-4bff-aeef-2671ba9d4618] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 20 14:49:25 compute-1 nova_compute[225855]: 2026-01-20 14:49:25.357 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:49:25 compute-1 ceph-mon[81775]: pgmap v1843: 321 pgs: 321 active+clean; 138 MiB data, 830 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.9 MiB/s wr, 156 op/s
Jan 20 14:49:25 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:49:25 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:49:25 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:49:25.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:49:25 compute-1 nova_compute[225855]: 2026-01-20 14:49:25.782 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:49:26 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/640913900' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:49:26 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/209954475' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:49:26 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:49:26 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/617682879' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:49:26 compute-1 nova_compute[225855]: 2026-01-20 14:49:26.746 225859 DEBUG nova.network.neutron [None req-a8cdcb09-d473-426d-b0ef-0d4cc8c09fe2 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] [instance: fdeb13eb-edb4-4bff-aeef-2671ba9d4618] Updating instance_info_cache with network_info: [{"id": "6855cb4f-4178-4447-af36-126ade033206", "address": "fa:16:3e:4f:3f:20", "network": {"id": "762e1859-4db4-4d9e-b66f-d50316f80df4", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1917526237-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c5f03d46c0c4162a3b2f1530850bb6c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6855cb4f-41", "ovs_interfaceid": "6855cb4f-4178-4447-af36-126ade033206", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:49:26 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:49:26 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:49:26 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:49:26.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:49:26 compute-1 nova_compute[225855]: 2026-01-20 14:49:26.800 225859 DEBUG oslo_concurrency.lockutils [None req-a8cdcb09-d473-426d-b0ef-0d4cc8c09fe2 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Releasing lock "refresh_cache-fdeb13eb-edb4-4bff-aeef-2671ba9d4618" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:49:26 compute-1 nova_compute[225855]: 2026-01-20 14:49:26.801 225859 DEBUG nova.compute.manager [None req-a8cdcb09-d473-426d-b0ef-0d4cc8c09fe2 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] [instance: fdeb13eb-edb4-4bff-aeef-2671ba9d4618] Instance network_info: |[{"id": "6855cb4f-4178-4447-af36-126ade033206", "address": "fa:16:3e:4f:3f:20", "network": {"id": "762e1859-4db4-4d9e-b66f-d50316f80df4", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1917526237-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c5f03d46c0c4162a3b2f1530850bb6c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6855cb4f-41", "ovs_interfaceid": "6855cb4f-4178-4447-af36-126ade033206", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 20 14:49:26 compute-1 nova_compute[225855]: 2026-01-20 14:49:26.802 225859 DEBUG oslo_concurrency.lockutils [req-5d4600d2-324d-4c3d-9ccb-031d08986f0d req-96153e43-6aba-458d-845c-a4772785b531 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-fdeb13eb-edb4-4bff-aeef-2671ba9d4618" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:49:26 compute-1 nova_compute[225855]: 2026-01-20 14:49:26.802 225859 DEBUG nova.network.neutron [req-5d4600d2-324d-4c3d-9ccb-031d08986f0d req-96153e43-6aba-458d-845c-a4772785b531 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: fdeb13eb-edb4-4bff-aeef-2671ba9d4618] Refreshing network info cache for port 6855cb4f-4178-4447-af36-126ade033206 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 14:49:26 compute-1 nova_compute[225855]: 2026-01-20 14:49:26.807 225859 DEBUG nova.virt.libvirt.driver [None req-a8cdcb09-d473-426d-b0ef-0d4cc8c09fe2 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] [instance: fdeb13eb-edb4-4bff-aeef-2671ba9d4618] Start _get_guest_xml network_info=[{"id": "6855cb4f-4178-4447-af36-126ade033206", "address": "fa:16:3e:4f:3f:20", "network": {"id": "762e1859-4db4-4d9e-b66f-d50316f80df4", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1917526237-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c5f03d46c0c4162a3b2f1530850bb6c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6855cb4f-41", "ovs_interfaceid": "6855cb4f-4178-4447-af36-126ade033206", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'device_type': 'disk', 'encryption_options': None, 'size': 0, 'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'guest_format': None, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'image_id': 'a32b3e07-16d8-46fd-9a7b-c242c432fcf9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 20 14:49:26 compute-1 nova_compute[225855]: 2026-01-20 14:49:26.813 225859 WARNING nova.virt.libvirt.driver [None req-a8cdcb09-d473-426d-b0ef-0d4cc8c09fe2 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 14:49:26 compute-1 nova_compute[225855]: 2026-01-20 14:49:26.820 225859 DEBUG nova.virt.libvirt.host [None req-a8cdcb09-d473-426d-b0ef-0d4cc8c09fe2 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 20 14:49:26 compute-1 nova_compute[225855]: 2026-01-20 14:49:26.820 225859 DEBUG nova.virt.libvirt.host [None req-a8cdcb09-d473-426d-b0ef-0d4cc8c09fe2 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 20 14:49:26 compute-1 nova_compute[225855]: 2026-01-20 14:49:26.824 225859 DEBUG nova.virt.libvirt.host [None req-a8cdcb09-d473-426d-b0ef-0d4cc8c09fe2 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 20 14:49:26 compute-1 nova_compute[225855]: 2026-01-20 14:49:26.824 225859 DEBUG nova.virt.libvirt.host [None req-a8cdcb09-d473-426d-b0ef-0d4cc8c09fe2 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 20 14:49:26 compute-1 nova_compute[225855]: 2026-01-20 14:49:26.825 225859 DEBUG nova.virt.libvirt.driver [None req-a8cdcb09-d473-426d-b0ef-0d4cc8c09fe2 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 20 14:49:26 compute-1 nova_compute[225855]: 2026-01-20 14:49:26.825 225859 DEBUG nova.virt.hardware [None req-a8cdcb09-d473-426d-b0ef-0d4cc8c09fe2 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-20T14:21:55Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='522deaab-a741-4dbb-932d-d8b13a211c33',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 20 14:49:26 compute-1 nova_compute[225855]: 2026-01-20 14:49:26.825 225859 DEBUG nova.virt.hardware [None req-a8cdcb09-d473-426d-b0ef-0d4cc8c09fe2 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 20 14:49:26 compute-1 nova_compute[225855]: 2026-01-20 14:49:26.826 225859 DEBUG nova.virt.hardware [None req-a8cdcb09-d473-426d-b0ef-0d4cc8c09fe2 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 20 14:49:26 compute-1 nova_compute[225855]: 2026-01-20 14:49:26.826 225859 DEBUG nova.virt.hardware [None req-a8cdcb09-d473-426d-b0ef-0d4cc8c09fe2 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 20 14:49:26 compute-1 nova_compute[225855]: 2026-01-20 14:49:26.826 225859 DEBUG nova.virt.hardware [None req-a8cdcb09-d473-426d-b0ef-0d4cc8c09fe2 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 20 14:49:26 compute-1 nova_compute[225855]: 2026-01-20 14:49:26.826 225859 DEBUG nova.virt.hardware [None req-a8cdcb09-d473-426d-b0ef-0d4cc8c09fe2 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 20 14:49:26 compute-1 nova_compute[225855]: 2026-01-20 14:49:26.826 225859 DEBUG nova.virt.hardware [None req-a8cdcb09-d473-426d-b0ef-0d4cc8c09fe2 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 20 14:49:26 compute-1 nova_compute[225855]: 2026-01-20 14:49:26.826 225859 DEBUG nova.virt.hardware [None req-a8cdcb09-d473-426d-b0ef-0d4cc8c09fe2 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 20 14:49:26 compute-1 nova_compute[225855]: 2026-01-20 14:49:26.827 225859 DEBUG nova.virt.hardware [None req-a8cdcb09-d473-426d-b0ef-0d4cc8c09fe2 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 20 14:49:26 compute-1 nova_compute[225855]: 2026-01-20 14:49:26.827 225859 DEBUG nova.virt.hardware [None req-a8cdcb09-d473-426d-b0ef-0d4cc8c09fe2 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 20 14:49:26 compute-1 nova_compute[225855]: 2026-01-20 14:49:26.827 225859 DEBUG nova.virt.hardware [None req-a8cdcb09-d473-426d-b0ef-0d4cc8c09fe2 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 20 14:49:26 compute-1 nova_compute[225855]: 2026-01-20 14:49:26.829 225859 DEBUG oslo_concurrency.processutils [None req-a8cdcb09-d473-426d-b0ef-0d4cc8c09fe2 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:49:27 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:49:27 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4113171858' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:49:27 compute-1 nova_compute[225855]: 2026-01-20 14:49:27.288 225859 DEBUG oslo_concurrency.processutils [None req-a8cdcb09-d473-426d-b0ef-0d4cc8c09fe2 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:49:27 compute-1 nova_compute[225855]: 2026-01-20 14:49:27.311 225859 DEBUG nova.storage.rbd_utils [None req-a8cdcb09-d473-426d-b0ef-0d4cc8c09fe2 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] rbd image fdeb13eb-edb4-4bff-aeef-2671ba9d4618_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:49:27 compute-1 nova_compute[225855]: 2026-01-20 14:49:27.315 225859 DEBUG oslo_concurrency.processutils [None req-a8cdcb09-d473-426d-b0ef-0d4cc8c09fe2 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:49:27 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e244 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:49:27 compute-1 ceph-mon[81775]: pgmap v1844: 321 pgs: 321 active+clean; 180 MiB data, 851 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 152 op/s
Jan 20 14:49:27 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/617682879' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:49:27 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/4113171858' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:49:27 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:49:27 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:49:27 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:49:27.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:49:27 compute-1 nova_compute[225855]: 2026-01-20 14:49:27.625 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:49:27 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:49:27 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2640238559' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:49:27 compute-1 nova_compute[225855]: 2026-01-20 14:49:27.775 225859 DEBUG oslo_concurrency.processutils [None req-a8cdcb09-d473-426d-b0ef-0d4cc8c09fe2 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:49:27 compute-1 nova_compute[225855]: 2026-01-20 14:49:27.776 225859 DEBUG nova.virt.libvirt.vif [None req-a8cdcb09-d473-426d-b0ef-0d4cc8c09fe2 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T14:49:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-2012792656',display_name='tempest-ServerActionsTestJSON-server-2012792656',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-2012792656',id=105,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLgEzx5mLsSqRL7L9WKOzM+WdeJ40U103wY9H3VMZ41G/sN5tQtQSt9lXWKTyc6pt00bfKD0E9GPugNMpy+dzSSpK23o3CgadkzAfAvjQCeCPOSp3fX13FGApomGd1HRCQ==',key_name='tempest-keypair-1602241722',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1c5f03d46c0c4162a3b2f1530850bb6c',ramdisk_id='',reservation_id='r-c8be97ji',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestJSON-1020442335',owner_user_name='tempest-ServerActionsTestJSON-1020442335-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T14:49:18Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='3e9278fdb9e645b7938f3edb20c4d3cf',uuid=fdeb13eb-edb4-4bff-aeef-2671ba9d4618,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6855cb4f-4178-4447-af36-126ade033206", "address": "fa:16:3e:4f:3f:20", "network": {"id": "762e1859-4db4-4d9e-b66f-d50316f80df4", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1917526237-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c5f03d46c0c4162a3b2f1530850bb6c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6855cb4f-41", "ovs_interfaceid": "6855cb4f-4178-4447-af36-126ade033206", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 20 14:49:27 compute-1 nova_compute[225855]: 2026-01-20 14:49:27.777 225859 DEBUG nova.network.os_vif_util [None req-a8cdcb09-d473-426d-b0ef-0d4cc8c09fe2 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Converting VIF {"id": "6855cb4f-4178-4447-af36-126ade033206", "address": "fa:16:3e:4f:3f:20", "network": {"id": "762e1859-4db4-4d9e-b66f-d50316f80df4", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1917526237-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c5f03d46c0c4162a3b2f1530850bb6c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6855cb4f-41", "ovs_interfaceid": "6855cb4f-4178-4447-af36-126ade033206", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:49:27 compute-1 nova_compute[225855]: 2026-01-20 14:49:27.778 225859 DEBUG nova.network.os_vif_util [None req-a8cdcb09-d473-426d-b0ef-0d4cc8c09fe2 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4f:3f:20,bridge_name='br-int',has_traffic_filtering=True,id=6855cb4f-4178-4447-af36-126ade033206,network=Network(762e1859-4db4-4d9e-b66f-d50316f80df4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6855cb4f-41') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:49:27 compute-1 nova_compute[225855]: 2026-01-20 14:49:27.779 225859 DEBUG nova.objects.instance [None req-a8cdcb09-d473-426d-b0ef-0d4cc8c09fe2 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Lazy-loading 'pci_devices' on Instance uuid fdeb13eb-edb4-4bff-aeef-2671ba9d4618 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:49:27 compute-1 nova_compute[225855]: 2026-01-20 14:49:27.802 225859 DEBUG nova.virt.libvirt.driver [None req-a8cdcb09-d473-426d-b0ef-0d4cc8c09fe2 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] [instance: fdeb13eb-edb4-4bff-aeef-2671ba9d4618] End _get_guest_xml xml=<domain type="kvm">
Jan 20 14:49:27 compute-1 nova_compute[225855]:   <uuid>fdeb13eb-edb4-4bff-aeef-2671ba9d4618</uuid>
Jan 20 14:49:27 compute-1 nova_compute[225855]:   <name>instance-00000069</name>
Jan 20 14:49:27 compute-1 nova_compute[225855]:   <memory>131072</memory>
Jan 20 14:49:27 compute-1 nova_compute[225855]:   <vcpu>1</vcpu>
Jan 20 14:49:27 compute-1 nova_compute[225855]:   <metadata>
Jan 20 14:49:27 compute-1 nova_compute[225855]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 14:49:27 compute-1 nova_compute[225855]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 14:49:27 compute-1 nova_compute[225855]:       <nova:name>tempest-ServerActionsTestJSON-server-2012792656</nova:name>
Jan 20 14:49:27 compute-1 nova_compute[225855]:       <nova:creationTime>2026-01-20 14:49:26</nova:creationTime>
Jan 20 14:49:27 compute-1 nova_compute[225855]:       <nova:flavor name="m1.nano">
Jan 20 14:49:27 compute-1 nova_compute[225855]:         <nova:memory>128</nova:memory>
Jan 20 14:49:27 compute-1 nova_compute[225855]:         <nova:disk>1</nova:disk>
Jan 20 14:49:27 compute-1 nova_compute[225855]:         <nova:swap>0</nova:swap>
Jan 20 14:49:27 compute-1 nova_compute[225855]:         <nova:ephemeral>0</nova:ephemeral>
Jan 20 14:49:27 compute-1 nova_compute[225855]:         <nova:vcpus>1</nova:vcpus>
Jan 20 14:49:27 compute-1 nova_compute[225855]:       </nova:flavor>
Jan 20 14:49:27 compute-1 nova_compute[225855]:       <nova:owner>
Jan 20 14:49:27 compute-1 nova_compute[225855]:         <nova:user uuid="3e9278fdb9e645b7938f3edb20c4d3cf">tempest-ServerActionsTestJSON-1020442335-project-member</nova:user>
Jan 20 14:49:27 compute-1 nova_compute[225855]:         <nova:project uuid="1c5f03d46c0c4162a3b2f1530850bb6c">tempest-ServerActionsTestJSON-1020442335</nova:project>
Jan 20 14:49:27 compute-1 nova_compute[225855]:       </nova:owner>
Jan 20 14:49:27 compute-1 nova_compute[225855]:       <nova:root type="image" uuid="a32b3e07-16d8-46fd-9a7b-c242c432fcf9"/>
Jan 20 14:49:27 compute-1 nova_compute[225855]:       <nova:ports>
Jan 20 14:49:27 compute-1 nova_compute[225855]:         <nova:port uuid="6855cb4f-4178-4447-af36-126ade033206">
Jan 20 14:49:27 compute-1 nova_compute[225855]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Jan 20 14:49:27 compute-1 nova_compute[225855]:         </nova:port>
Jan 20 14:49:27 compute-1 nova_compute[225855]:       </nova:ports>
Jan 20 14:49:27 compute-1 nova_compute[225855]:     </nova:instance>
Jan 20 14:49:27 compute-1 nova_compute[225855]:   </metadata>
Jan 20 14:49:27 compute-1 nova_compute[225855]:   <sysinfo type="smbios">
Jan 20 14:49:27 compute-1 nova_compute[225855]:     <system>
Jan 20 14:49:27 compute-1 nova_compute[225855]:       <entry name="manufacturer">RDO</entry>
Jan 20 14:49:27 compute-1 nova_compute[225855]:       <entry name="product">OpenStack Compute</entry>
Jan 20 14:49:27 compute-1 nova_compute[225855]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 14:49:27 compute-1 nova_compute[225855]:       <entry name="serial">fdeb13eb-edb4-4bff-aeef-2671ba9d4618</entry>
Jan 20 14:49:27 compute-1 nova_compute[225855]:       <entry name="uuid">fdeb13eb-edb4-4bff-aeef-2671ba9d4618</entry>
Jan 20 14:49:27 compute-1 nova_compute[225855]:       <entry name="family">Virtual Machine</entry>
Jan 20 14:49:27 compute-1 nova_compute[225855]:     </system>
Jan 20 14:49:27 compute-1 nova_compute[225855]:   </sysinfo>
Jan 20 14:49:27 compute-1 nova_compute[225855]:   <os>
Jan 20 14:49:27 compute-1 nova_compute[225855]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 20 14:49:27 compute-1 nova_compute[225855]:     <boot dev="hd"/>
Jan 20 14:49:27 compute-1 nova_compute[225855]:     <smbios mode="sysinfo"/>
Jan 20 14:49:27 compute-1 nova_compute[225855]:   </os>
Jan 20 14:49:27 compute-1 nova_compute[225855]:   <features>
Jan 20 14:49:27 compute-1 nova_compute[225855]:     <acpi/>
Jan 20 14:49:27 compute-1 nova_compute[225855]:     <apic/>
Jan 20 14:49:27 compute-1 nova_compute[225855]:     <vmcoreinfo/>
Jan 20 14:49:27 compute-1 nova_compute[225855]:   </features>
Jan 20 14:49:27 compute-1 nova_compute[225855]:   <clock offset="utc">
Jan 20 14:49:27 compute-1 nova_compute[225855]:     <timer name="pit" tickpolicy="delay"/>
Jan 20 14:49:27 compute-1 nova_compute[225855]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 20 14:49:27 compute-1 nova_compute[225855]:     <timer name="hpet" present="no"/>
Jan 20 14:49:27 compute-1 nova_compute[225855]:   </clock>
Jan 20 14:49:27 compute-1 nova_compute[225855]:   <cpu mode="custom" match="exact">
Jan 20 14:49:27 compute-1 nova_compute[225855]:     <model>Nehalem</model>
Jan 20 14:49:27 compute-1 nova_compute[225855]:     <topology sockets="1" cores="1" threads="1"/>
Jan 20 14:49:27 compute-1 nova_compute[225855]:   </cpu>
Jan 20 14:49:27 compute-1 nova_compute[225855]:   <devices>
Jan 20 14:49:27 compute-1 nova_compute[225855]:     <disk type="network" device="disk">
Jan 20 14:49:27 compute-1 nova_compute[225855]:       <driver type="raw" cache="none"/>
Jan 20 14:49:27 compute-1 nova_compute[225855]:       <source protocol="rbd" name="vms/fdeb13eb-edb4-4bff-aeef-2671ba9d4618_disk">
Jan 20 14:49:27 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:49:27 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:49:27 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:49:27 compute-1 nova_compute[225855]:       </source>
Jan 20 14:49:27 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 14:49:27 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:49:27 compute-1 nova_compute[225855]:       </auth>
Jan 20 14:49:27 compute-1 nova_compute[225855]:       <target dev="vda" bus="virtio"/>
Jan 20 14:49:27 compute-1 nova_compute[225855]:     </disk>
Jan 20 14:49:27 compute-1 nova_compute[225855]:     <disk type="network" device="cdrom">
Jan 20 14:49:27 compute-1 nova_compute[225855]:       <driver type="raw" cache="none"/>
Jan 20 14:49:27 compute-1 nova_compute[225855]:       <source protocol="rbd" name="vms/fdeb13eb-edb4-4bff-aeef-2671ba9d4618_disk.config">
Jan 20 14:49:27 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:49:27 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:49:27 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:49:27 compute-1 nova_compute[225855]:       </source>
Jan 20 14:49:27 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 14:49:27 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:49:27 compute-1 nova_compute[225855]:       </auth>
Jan 20 14:49:27 compute-1 nova_compute[225855]:       <target dev="sda" bus="sata"/>
Jan 20 14:49:27 compute-1 nova_compute[225855]:     </disk>
Jan 20 14:49:27 compute-1 nova_compute[225855]:     <interface type="ethernet">
Jan 20 14:49:27 compute-1 nova_compute[225855]:       <mac address="fa:16:3e:4f:3f:20"/>
Jan 20 14:49:27 compute-1 nova_compute[225855]:       <model type="virtio"/>
Jan 20 14:49:27 compute-1 nova_compute[225855]:       <driver name="vhost" rx_queue_size="512"/>
Jan 20 14:49:27 compute-1 nova_compute[225855]:       <mtu size="1442"/>
Jan 20 14:49:27 compute-1 nova_compute[225855]:       <target dev="tap6855cb4f-41"/>
Jan 20 14:49:27 compute-1 nova_compute[225855]:     </interface>
Jan 20 14:49:27 compute-1 nova_compute[225855]:     <serial type="pty">
Jan 20 14:49:27 compute-1 nova_compute[225855]:       <log file="/var/lib/nova/instances/fdeb13eb-edb4-4bff-aeef-2671ba9d4618/console.log" append="off"/>
Jan 20 14:49:27 compute-1 nova_compute[225855]:     </serial>
Jan 20 14:49:27 compute-1 nova_compute[225855]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 20 14:49:27 compute-1 nova_compute[225855]:     <video>
Jan 20 14:49:27 compute-1 nova_compute[225855]:       <model type="virtio"/>
Jan 20 14:49:27 compute-1 nova_compute[225855]:     </video>
Jan 20 14:49:27 compute-1 nova_compute[225855]:     <input type="tablet" bus="usb"/>
Jan 20 14:49:27 compute-1 nova_compute[225855]:     <rng model="virtio">
Jan 20 14:49:27 compute-1 nova_compute[225855]:       <backend model="random">/dev/urandom</backend>
Jan 20 14:49:27 compute-1 nova_compute[225855]:     </rng>
Jan 20 14:49:27 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root"/>
Jan 20 14:49:27 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:49:27 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:49:27 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:49:27 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:49:27 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:49:27 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:49:27 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:49:27 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:49:27 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:49:27 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:49:27 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:49:27 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:49:27 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:49:27 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:49:27 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:49:27 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:49:27 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:49:27 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:49:27 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:49:27 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:49:27 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:49:27 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:49:27 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:49:27 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:49:27 compute-1 nova_compute[225855]:     <controller type="usb" index="0"/>
Jan 20 14:49:27 compute-1 nova_compute[225855]:     <memballoon model="virtio">
Jan 20 14:49:27 compute-1 nova_compute[225855]:       <stats period="10"/>
Jan 20 14:49:27 compute-1 nova_compute[225855]:     </memballoon>
Jan 20 14:49:27 compute-1 nova_compute[225855]:   </devices>
Jan 20 14:49:27 compute-1 nova_compute[225855]: </domain>
Jan 20 14:49:27 compute-1 nova_compute[225855]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 20 14:49:27 compute-1 nova_compute[225855]: 2026-01-20 14:49:27.803 225859 DEBUG nova.compute.manager [None req-a8cdcb09-d473-426d-b0ef-0d4cc8c09fe2 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] [instance: fdeb13eb-edb4-4bff-aeef-2671ba9d4618] Preparing to wait for external event network-vif-plugged-6855cb4f-4178-4447-af36-126ade033206 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 20 14:49:27 compute-1 nova_compute[225855]: 2026-01-20 14:49:27.804 225859 DEBUG oslo_concurrency.lockutils [None req-a8cdcb09-d473-426d-b0ef-0d4cc8c09fe2 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Acquiring lock "fdeb13eb-edb4-4bff-aeef-2671ba9d4618-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:49:27 compute-1 nova_compute[225855]: 2026-01-20 14:49:27.804 225859 DEBUG oslo_concurrency.lockutils [None req-a8cdcb09-d473-426d-b0ef-0d4cc8c09fe2 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Lock "fdeb13eb-edb4-4bff-aeef-2671ba9d4618-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:49:27 compute-1 nova_compute[225855]: 2026-01-20 14:49:27.804 225859 DEBUG oslo_concurrency.lockutils [None req-a8cdcb09-d473-426d-b0ef-0d4cc8c09fe2 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Lock "fdeb13eb-edb4-4bff-aeef-2671ba9d4618-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:49:27 compute-1 nova_compute[225855]: 2026-01-20 14:49:27.805 225859 DEBUG nova.virt.libvirt.vif [None req-a8cdcb09-d473-426d-b0ef-0d4cc8c09fe2 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T14:49:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-2012792656',display_name='tempest-ServerActionsTestJSON-server-2012792656',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-2012792656',id=105,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLgEzx5mLsSqRL7L9WKOzM+WdeJ40U103wY9H3VMZ41G/sN5tQtQSt9lXWKTyc6pt00bfKD0E9GPugNMpy+dzSSpK23o3CgadkzAfAvjQCeCPOSp3fX13FGApomGd1HRCQ==',key_name='tempest-keypair-1602241722',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1c5f03d46c0c4162a3b2f1530850bb6c',ramdisk_id='',reservation_id='r-c8be97ji',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestJSON-1020442335',owner_user_name='tempest-ServerActionsTestJSON-1020442335-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T14:49:18Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='3e9278fdb9e645b7938f3edb20c4d3cf',uuid=fdeb13eb-edb4-4bff-aeef-2671ba9d4618,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6855cb4f-4178-4447-af36-126ade033206", "address": "fa:16:3e:4f:3f:20", "network": {"id": "762e1859-4db4-4d9e-b66f-d50316f80df4", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1917526237-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c5f03d46c0c4162a3b2f1530850bb6c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6855cb4f-41", "ovs_interfaceid": "6855cb4f-4178-4447-af36-126ade033206", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 20 14:49:27 compute-1 nova_compute[225855]: 2026-01-20 14:49:27.805 225859 DEBUG nova.network.os_vif_util [None req-a8cdcb09-d473-426d-b0ef-0d4cc8c09fe2 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Converting VIF {"id": "6855cb4f-4178-4447-af36-126ade033206", "address": "fa:16:3e:4f:3f:20", "network": {"id": "762e1859-4db4-4d9e-b66f-d50316f80df4", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1917526237-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c5f03d46c0c4162a3b2f1530850bb6c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6855cb4f-41", "ovs_interfaceid": "6855cb4f-4178-4447-af36-126ade033206", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:49:27 compute-1 nova_compute[225855]: 2026-01-20 14:49:27.806 225859 DEBUG nova.network.os_vif_util [None req-a8cdcb09-d473-426d-b0ef-0d4cc8c09fe2 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4f:3f:20,bridge_name='br-int',has_traffic_filtering=True,id=6855cb4f-4178-4447-af36-126ade033206,network=Network(762e1859-4db4-4d9e-b66f-d50316f80df4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6855cb4f-41') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:49:27 compute-1 nova_compute[225855]: 2026-01-20 14:49:27.807 225859 DEBUG os_vif [None req-a8cdcb09-d473-426d-b0ef-0d4cc8c09fe2 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:4f:3f:20,bridge_name='br-int',has_traffic_filtering=True,id=6855cb4f-4178-4447-af36-126ade033206,network=Network(762e1859-4db4-4d9e-b66f-d50316f80df4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6855cb4f-41') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 20 14:49:27 compute-1 nova_compute[225855]: 2026-01-20 14:49:27.807 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:49:27 compute-1 nova_compute[225855]: 2026-01-20 14:49:27.807 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:49:27 compute-1 nova_compute[225855]: 2026-01-20 14:49:27.808 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:49:27 compute-1 nova_compute[225855]: 2026-01-20 14:49:27.810 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:49:27 compute-1 nova_compute[225855]: 2026-01-20 14:49:27.811 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6855cb4f-41, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:49:27 compute-1 nova_compute[225855]: 2026-01-20 14:49:27.811 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap6855cb4f-41, col_values=(('external_ids', {'iface-id': '6855cb4f-4178-4447-af36-126ade033206', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:4f:3f:20', 'vm-uuid': 'fdeb13eb-edb4-4bff-aeef-2671ba9d4618'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:49:27 compute-1 NetworkManager[49104]: <info>  [1768920567.8384] manager: (tap6855cb4f-41): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/161)
Jan 20 14:49:27 compute-1 nova_compute[225855]: 2026-01-20 14:49:27.837 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:49:27 compute-1 nova_compute[225855]: 2026-01-20 14:49:27.840 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 14:49:27 compute-1 nova_compute[225855]: 2026-01-20 14:49:27.842 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:49:27 compute-1 nova_compute[225855]: 2026-01-20 14:49:27.843 225859 INFO os_vif [None req-a8cdcb09-d473-426d-b0ef-0d4cc8c09fe2 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:4f:3f:20,bridge_name='br-int',has_traffic_filtering=True,id=6855cb4f-4178-4447-af36-126ade033206,network=Network(762e1859-4db4-4d9e-b66f-d50316f80df4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6855cb4f-41')
Jan 20 14:49:27 compute-1 nova_compute[225855]: 2026-01-20 14:49:27.918 225859 DEBUG nova.virt.libvirt.driver [None req-a8cdcb09-d473-426d-b0ef-0d4cc8c09fe2 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:49:27 compute-1 nova_compute[225855]: 2026-01-20 14:49:27.919 225859 DEBUG nova.virt.libvirt.driver [None req-a8cdcb09-d473-426d-b0ef-0d4cc8c09fe2 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:49:27 compute-1 nova_compute[225855]: 2026-01-20 14:49:27.919 225859 DEBUG nova.virt.libvirt.driver [None req-a8cdcb09-d473-426d-b0ef-0d4cc8c09fe2 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] No VIF found with MAC fa:16:3e:4f:3f:20, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 20 14:49:27 compute-1 nova_compute[225855]: 2026-01-20 14:49:27.919 225859 INFO nova.virt.libvirt.driver [None req-a8cdcb09-d473-426d-b0ef-0d4cc8c09fe2 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] [instance: fdeb13eb-edb4-4bff-aeef-2671ba9d4618] Using config drive
Jan 20 14:49:27 compute-1 nova_compute[225855]: 2026-01-20 14:49:27.945 225859 DEBUG nova.storage.rbd_utils [None req-a8cdcb09-d473-426d-b0ef-0d4cc8c09fe2 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] rbd image fdeb13eb-edb4-4bff-aeef-2671ba9d4618_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:49:28 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:49:28 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2854563677' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:49:28 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/2640238559' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:49:28 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/2854563677' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:49:28 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:49:28 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:49:28 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:49:28.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:49:28 compute-1 nova_compute[225855]: 2026-01-20 14:49:28.864 225859 INFO nova.virt.libvirt.driver [None req-a8cdcb09-d473-426d-b0ef-0d4cc8c09fe2 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] [instance: fdeb13eb-edb4-4bff-aeef-2671ba9d4618] Creating config drive at /var/lib/nova/instances/fdeb13eb-edb4-4bff-aeef-2671ba9d4618/disk.config
Jan 20 14:49:28 compute-1 nova_compute[225855]: 2026-01-20 14:49:28.872 225859 DEBUG oslo_concurrency.processutils [None req-a8cdcb09-d473-426d-b0ef-0d4cc8c09fe2 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/fdeb13eb-edb4-4bff-aeef-2671ba9d4618/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpdx1xjkoz execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:49:29 compute-1 nova_compute[225855]: 2026-01-20 14:49:29.009 225859 DEBUG oslo_concurrency.processutils [None req-a8cdcb09-d473-426d-b0ef-0d4cc8c09fe2 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/fdeb13eb-edb4-4bff-aeef-2671ba9d4618/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpdx1xjkoz" returned: 0 in 0.138s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:49:29 compute-1 nova_compute[225855]: 2026-01-20 14:49:29.044 225859 DEBUG nova.storage.rbd_utils [None req-a8cdcb09-d473-426d-b0ef-0d4cc8c09fe2 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] rbd image fdeb13eb-edb4-4bff-aeef-2671ba9d4618_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:49:29 compute-1 nova_compute[225855]: 2026-01-20 14:49:29.049 225859 DEBUG oslo_concurrency.processutils [None req-a8cdcb09-d473-426d-b0ef-0d4cc8c09fe2 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/fdeb13eb-edb4-4bff-aeef-2671ba9d4618/disk.config fdeb13eb-edb4-4bff-aeef-2671ba9d4618_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:49:29 compute-1 nova_compute[225855]: 2026-01-20 14:49:29.198 225859 DEBUG oslo_concurrency.processutils [None req-a8cdcb09-d473-426d-b0ef-0d4cc8c09fe2 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/fdeb13eb-edb4-4bff-aeef-2671ba9d4618/disk.config fdeb13eb-edb4-4bff-aeef-2671ba9d4618_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.149s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:49:29 compute-1 nova_compute[225855]: 2026-01-20 14:49:29.199 225859 INFO nova.virt.libvirt.driver [None req-a8cdcb09-d473-426d-b0ef-0d4cc8c09fe2 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] [instance: fdeb13eb-edb4-4bff-aeef-2671ba9d4618] Deleting local config drive /var/lib/nova/instances/fdeb13eb-edb4-4bff-aeef-2671ba9d4618/disk.config because it was imported into RBD.
Jan 20 14:49:29 compute-1 virtqemud[225396]: End of file while reading data: Input/output error
Jan 20 14:49:29 compute-1 nova_compute[225855]: 2026-01-20 14:49:29.222 225859 DEBUG nova.network.neutron [req-5d4600d2-324d-4c3d-9ccb-031d08986f0d req-96153e43-6aba-458d-845c-a4772785b531 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: fdeb13eb-edb4-4bff-aeef-2671ba9d4618] Updated VIF entry in instance network info cache for port 6855cb4f-4178-4447-af36-126ade033206. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 14:49:29 compute-1 nova_compute[225855]: 2026-01-20 14:49:29.223 225859 DEBUG nova.network.neutron [req-5d4600d2-324d-4c3d-9ccb-031d08986f0d req-96153e43-6aba-458d-845c-a4772785b531 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: fdeb13eb-edb4-4bff-aeef-2671ba9d4618] Updating instance_info_cache with network_info: [{"id": "6855cb4f-4178-4447-af36-126ade033206", "address": "fa:16:3e:4f:3f:20", "network": {"id": "762e1859-4db4-4d9e-b66f-d50316f80df4", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1917526237-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c5f03d46c0c4162a3b2f1530850bb6c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6855cb4f-41", "ovs_interfaceid": "6855cb4f-4178-4447-af36-126ade033206", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:49:29 compute-1 kernel: tap6855cb4f-41: entered promiscuous mode
Jan 20 14:49:29 compute-1 NetworkManager[49104]: <info>  [1768920569.2537] manager: (tap6855cb4f-41): new Tun device (/org/freedesktop/NetworkManager/Devices/162)
Jan 20 14:49:29 compute-1 ovn_controller[130490]: 2026-01-20T14:49:29Z|00386|binding|INFO|Claiming lport 6855cb4f-4178-4447-af36-126ade033206 for this chassis.
Jan 20 14:49:29 compute-1 ovn_controller[130490]: 2026-01-20T14:49:29Z|00387|binding|INFO|6855cb4f-4178-4447-af36-126ade033206: Claiming fa:16:3e:4f:3f:20 10.100.0.12
Jan 20 14:49:29 compute-1 nova_compute[225855]: 2026-01-20 14:49:29.255 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:49:29 compute-1 nova_compute[225855]: 2026-01-20 14:49:29.258 225859 DEBUG oslo_concurrency.lockutils [req-5d4600d2-324d-4c3d-9ccb-031d08986f0d req-96153e43-6aba-458d-845c-a4772785b531 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-fdeb13eb-edb4-4bff-aeef-2671ba9d4618" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:49:29 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:49:29.270 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4f:3f:20 10.100.0.12'], port_security=['fa:16:3e:4f:3f:20 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'fdeb13eb-edb4-4bff-aeef-2671ba9d4618', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-762e1859-4db4-4d9e-b66f-d50316f80df4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1c5f03d46c0c4162a3b2f1530850bb6c', 'neutron:revision_number': '2', 'neutron:security_group_ids': '80535eda-fa59-4edc-8e3d-9bfea6517730', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2474a8ca-bb96-4cae-9133-23419b81a9fc, chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], logical_port=6855cb4f-4178-4447-af36-126ade033206) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:49:29 compute-1 ovn_controller[130490]: 2026-01-20T14:49:29Z|00388|binding|INFO|Setting lport 6855cb4f-4178-4447-af36-126ade033206 ovn-installed in OVS
Jan 20 14:49:29 compute-1 ovn_controller[130490]: 2026-01-20T14:49:29Z|00389|binding|INFO|Setting lport 6855cb4f-4178-4447-af36-126ade033206 up in Southbound
Jan 20 14:49:29 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:49:29.273 140354 INFO neutron.agent.ovn.metadata.agent [-] Port 6855cb4f-4178-4447-af36-126ade033206 in datapath 762e1859-4db4-4d9e-b66f-d50316f80df4 bound to our chassis
Jan 20 14:49:29 compute-1 nova_compute[225855]: 2026-01-20 14:49:29.272 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:49:29 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:49:29.276 140354 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 762e1859-4db4-4d9e-b66f-d50316f80df4
Jan 20 14:49:29 compute-1 nova_compute[225855]: 2026-01-20 14:49:29.275 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:49:29 compute-1 systemd-machined[194361]: New machine qemu-45-instance-00000069.
Jan 20 14:49:29 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:49:29.288 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[2ed7e455-c12d-43dd-8271-f1407e17098d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:49:29 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:49:29.289 140354 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap762e1859-41 in ovnmeta-762e1859-4db4-4d9e-b66f-d50316f80df4 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 20 14:49:29 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:49:29.290 229707 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap762e1859-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 20 14:49:29 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:49:29.290 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[e35025cd-c1f4-4bc6-8a9b-49466b605191]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:49:29 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:49:29.291 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[75c7c797-4a56-4b4a-b6a7-295d3c563fc4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:49:29 compute-1 systemd[1]: Started Virtual Machine qemu-45-instance-00000069.
Jan 20 14:49:29 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:49:29.302 140466 DEBUG oslo.privsep.daemon [-] privsep: reply[9fedebcd-f6f4-449b-9271-bd9688a85772]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:49:29 compute-1 systemd-udevd[265337]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 14:49:29 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:49:29.313 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[bbaf61d0-dab7-4abc-a08e-8cbdb95022b0]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:49:29 compute-1 NetworkManager[49104]: <info>  [1768920569.3217] device (tap6855cb4f-41): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 20 14:49:29 compute-1 NetworkManager[49104]: <info>  [1768920569.3226] device (tap6855cb4f-41): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 20 14:49:29 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:49:29.344 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[7a461368-535a-43de-bf15-4adc6392e5a5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:49:29 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:49:29.349 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[5dbdc42b-9299-4e08-8e21-377bd05143a1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:49:29 compute-1 systemd-udevd[265344]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 14:49:29 compute-1 NetworkManager[49104]: <info>  [1768920569.3500] manager: (tap762e1859-40): new Veth device (/org/freedesktop/NetworkManager/Devices/163)
Jan 20 14:49:29 compute-1 sudo[265339]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:49:29 compute-1 sudo[265339]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:49:29 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:49:29.379 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[d0ee5e8c-780a-43ce-b0c0-68f6cee6a405]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:49:29 compute-1 sudo[265339]: pam_unix(sudo:session): session closed for user root
Jan 20 14:49:29 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:49:29.382 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[aa81fde7-dfd5-487d-bb0f-bddb68e52246]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:49:29 compute-1 NetworkManager[49104]: <info>  [1768920569.4049] device (tap762e1859-40): carrier: link connected
Jan 20 14:49:29 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:49:29.410 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[9f443492-5488-47b3-b744-82963b45966e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:49:29 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:49:29.426 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[53b190dc-5d7d-43e7-9edf-1ff50e923a8f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap762e1859-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:69:f1:da'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 105], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 560833, 'reachable_time': 26364, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 265413, 'error': None, 'target': 'ovnmeta-762e1859-4db4-4d9e-b66f-d50316f80df4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:49:29 compute-1 sudo[265391]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:49:29 compute-1 sudo[265391]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:49:29 compute-1 sudo[265391]: pam_unix(sudo:session): session closed for user root
Jan 20 14:49:29 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:49:29.439 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[a646cc12-b68a-488c-bdd9-f408f7e7d528]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe69:f1da'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 560833, 'tstamp': 560833}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 265417, 'error': None, 'target': 'ovnmeta-762e1859-4db4-4d9e-b66f-d50316f80df4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:49:29 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:49:29.454 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[2bbf9f93-562b-41ef-bd75-2b04fec56149]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap762e1859-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:69:f1:da'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 105], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 560833, 'reachable_time': 26364, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 265419, 'error': None, 'target': 'ovnmeta-762e1859-4db4-4d9e-b66f-d50316f80df4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:49:29 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:49:29.482 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[ecc773ba-94a3-4b04-b155-d33809b14cf3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:49:29 compute-1 ceph-mon[81775]: pgmap v1845: 321 pgs: 321 active+clean; 180 MiB data, 851 MiB used, 20 GiB / 21 GiB avail; 109 KiB/s rd, 3.5 MiB/s wr, 70 op/s
Jan 20 14:49:29 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:49:29.551 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[55534203-d7d6-493a-b9c2-756a91ced44f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:49:29 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:49:29.554 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap762e1859-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:49:29 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:49:29.555 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:49:29 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:49:29.557 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap762e1859-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:49:29 compute-1 NetworkManager[49104]: <info>  [1768920569.5604] manager: (tap762e1859-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/164)
Jan 20 14:49:29 compute-1 kernel: tap762e1859-40: entered promiscuous mode
Jan 20 14:49:29 compute-1 nova_compute[225855]: 2026-01-20 14:49:29.562 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:49:29 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:49:29.568 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap762e1859-40, col_values=(('external_ids', {'iface-id': '9e775c45-1646-436d-a0cb-a5b5ec356e1b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:49:29 compute-1 ovn_controller[130490]: 2026-01-20T14:49:29Z|00390|binding|INFO|Releasing lport 9e775c45-1646-436d-a0cb-a5b5ec356e1b from this chassis (sb_readonly=0)
Jan 20 14:49:29 compute-1 nova_compute[225855]: 2026-01-20 14:49:29.569 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:49:29 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:49:29.572 140354 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/762e1859-4db4-4d9e-b66f-d50316f80df4.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/762e1859-4db4-4d9e-b66f-d50316f80df4.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 20 14:49:29 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:49:29.575 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[cd41b288-5364-49f3-b2f4-7b0d824b55c1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:49:29 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:49:29.576 140354 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 20 14:49:29 compute-1 ovn_metadata_agent[140349]: global
Jan 20 14:49:29 compute-1 ovn_metadata_agent[140349]:     log         /dev/log local0 debug
Jan 20 14:49:29 compute-1 ovn_metadata_agent[140349]:     log-tag     haproxy-metadata-proxy-762e1859-4db4-4d9e-b66f-d50316f80df4
Jan 20 14:49:29 compute-1 ovn_metadata_agent[140349]:     user        root
Jan 20 14:49:29 compute-1 ovn_metadata_agent[140349]:     group       root
Jan 20 14:49:29 compute-1 ovn_metadata_agent[140349]:     maxconn     1024
Jan 20 14:49:29 compute-1 ovn_metadata_agent[140349]:     pidfile     /var/lib/neutron/external/pids/762e1859-4db4-4d9e-b66f-d50316f80df4.pid.haproxy
Jan 20 14:49:29 compute-1 ovn_metadata_agent[140349]:     daemon
Jan 20 14:49:29 compute-1 ovn_metadata_agent[140349]: 
Jan 20 14:49:29 compute-1 ovn_metadata_agent[140349]: defaults
Jan 20 14:49:29 compute-1 ovn_metadata_agent[140349]:     log global
Jan 20 14:49:29 compute-1 ovn_metadata_agent[140349]:     mode http
Jan 20 14:49:29 compute-1 ovn_metadata_agent[140349]:     option httplog
Jan 20 14:49:29 compute-1 ovn_metadata_agent[140349]:     option dontlognull
Jan 20 14:49:29 compute-1 ovn_metadata_agent[140349]:     option http-server-close
Jan 20 14:49:29 compute-1 ovn_metadata_agent[140349]:     option forwardfor
Jan 20 14:49:29 compute-1 ovn_metadata_agent[140349]:     retries                 3
Jan 20 14:49:29 compute-1 ovn_metadata_agent[140349]:     timeout http-request    30s
Jan 20 14:49:29 compute-1 ovn_metadata_agent[140349]:     timeout connect         30s
Jan 20 14:49:29 compute-1 ovn_metadata_agent[140349]:     timeout client          32s
Jan 20 14:49:29 compute-1 ovn_metadata_agent[140349]:     timeout server          32s
Jan 20 14:49:29 compute-1 ovn_metadata_agent[140349]:     timeout http-keep-alive 30s
Jan 20 14:49:29 compute-1 ovn_metadata_agent[140349]: 
Jan 20 14:49:29 compute-1 ovn_metadata_agent[140349]: 
Jan 20 14:49:29 compute-1 ovn_metadata_agent[140349]: listen listener
Jan 20 14:49:29 compute-1 ovn_metadata_agent[140349]:     bind 169.254.169.254:80
Jan 20 14:49:29 compute-1 ovn_metadata_agent[140349]:     server metadata /var/lib/neutron/metadata_proxy
Jan 20 14:49:29 compute-1 ovn_metadata_agent[140349]:     http-request add-header X-OVN-Network-ID 762e1859-4db4-4d9e-b66f-d50316f80df4
Jan 20 14:49:29 compute-1 ovn_metadata_agent[140349]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 20 14:49:29 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:49:29.576 140354 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-762e1859-4db4-4d9e-b66f-d50316f80df4', 'env', 'PROCESS_TAG=haproxy-762e1859-4db4-4d9e-b66f-d50316f80df4', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/762e1859-4db4-4d9e-b66f-d50316f80df4.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 20 14:49:29 compute-1 nova_compute[225855]: 2026-01-20 14:49:29.586 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:49:29 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:49:29 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:49:29 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:49:29.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:49:29 compute-1 nova_compute[225855]: 2026-01-20 14:49:29.663 225859 DEBUG nova.compute.manager [req-6d57ca86-6392-482b-a026-819d87a8f42d req-03c56d4e-19af-4621-b239-d47eb44b5ab2 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: fdeb13eb-edb4-4bff-aeef-2671ba9d4618] Received event network-vif-plugged-6855cb4f-4178-4447-af36-126ade033206 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:49:29 compute-1 nova_compute[225855]: 2026-01-20 14:49:29.664 225859 DEBUG oslo_concurrency.lockutils [req-6d57ca86-6392-482b-a026-819d87a8f42d req-03c56d4e-19af-4621-b239-d47eb44b5ab2 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "fdeb13eb-edb4-4bff-aeef-2671ba9d4618-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:49:29 compute-1 nova_compute[225855]: 2026-01-20 14:49:29.664 225859 DEBUG oslo_concurrency.lockutils [req-6d57ca86-6392-482b-a026-819d87a8f42d req-03c56d4e-19af-4621-b239-d47eb44b5ab2 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "fdeb13eb-edb4-4bff-aeef-2671ba9d4618-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:49:29 compute-1 nova_compute[225855]: 2026-01-20 14:49:29.664 225859 DEBUG oslo_concurrency.lockutils [req-6d57ca86-6392-482b-a026-819d87a8f42d req-03c56d4e-19af-4621-b239-d47eb44b5ab2 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "fdeb13eb-edb4-4bff-aeef-2671ba9d4618-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:49:29 compute-1 nova_compute[225855]: 2026-01-20 14:49:29.665 225859 DEBUG nova.compute.manager [req-6d57ca86-6392-482b-a026-819d87a8f42d req-03c56d4e-19af-4621-b239-d47eb44b5ab2 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: fdeb13eb-edb4-4bff-aeef-2671ba9d4618] Processing event network-vif-plugged-6855cb4f-4178-4447-af36-126ade033206 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 20 14:49:29 compute-1 nova_compute[225855]: 2026-01-20 14:49:29.923 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768920569.9228275, fdeb13eb-edb4-4bff-aeef-2671ba9d4618 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:49:29 compute-1 podman[265493]: 2026-01-20 14:49:29.923928858 +0000 UTC m=+0.048790299 container create 6832a1e9e9fbd88072418ee5f45181e7252b473c2cddc2205a332351b82ef902 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-762e1859-4db4-4d9e-b66f-d50316f80df4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Jan 20 14:49:29 compute-1 nova_compute[225855]: 2026-01-20 14:49:29.924 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: fdeb13eb-edb4-4bff-aeef-2671ba9d4618] VM Started (Lifecycle Event)
Jan 20 14:49:29 compute-1 nova_compute[225855]: 2026-01-20 14:49:29.929 225859 DEBUG nova.compute.manager [None req-a8cdcb09-d473-426d-b0ef-0d4cc8c09fe2 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] [instance: fdeb13eb-edb4-4bff-aeef-2671ba9d4618] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 20 14:49:29 compute-1 nova_compute[225855]: 2026-01-20 14:49:29.934 225859 DEBUG nova.virt.libvirt.driver [None req-a8cdcb09-d473-426d-b0ef-0d4cc8c09fe2 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] [instance: fdeb13eb-edb4-4bff-aeef-2671ba9d4618] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 20 14:49:29 compute-1 nova_compute[225855]: 2026-01-20 14:49:29.937 225859 INFO nova.virt.libvirt.driver [-] [instance: fdeb13eb-edb4-4bff-aeef-2671ba9d4618] Instance spawned successfully.
Jan 20 14:49:29 compute-1 nova_compute[225855]: 2026-01-20 14:49:29.937 225859 DEBUG nova.virt.libvirt.driver [None req-a8cdcb09-d473-426d-b0ef-0d4cc8c09fe2 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] [instance: fdeb13eb-edb4-4bff-aeef-2671ba9d4618] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 20 14:49:29 compute-1 nova_compute[225855]: 2026-01-20 14:49:29.965 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: fdeb13eb-edb4-4bff-aeef-2671ba9d4618] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:49:29 compute-1 nova_compute[225855]: 2026-01-20 14:49:29.968 225859 DEBUG nova.virt.libvirt.driver [None req-a8cdcb09-d473-426d-b0ef-0d4cc8c09fe2 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] [instance: fdeb13eb-edb4-4bff-aeef-2671ba9d4618] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:49:29 compute-1 nova_compute[225855]: 2026-01-20 14:49:29.969 225859 DEBUG nova.virt.libvirt.driver [None req-a8cdcb09-d473-426d-b0ef-0d4cc8c09fe2 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] [instance: fdeb13eb-edb4-4bff-aeef-2671ba9d4618] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:49:29 compute-1 nova_compute[225855]: 2026-01-20 14:49:29.969 225859 DEBUG nova.virt.libvirt.driver [None req-a8cdcb09-d473-426d-b0ef-0d4cc8c09fe2 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] [instance: fdeb13eb-edb4-4bff-aeef-2671ba9d4618] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:49:29 compute-1 nova_compute[225855]: 2026-01-20 14:49:29.970 225859 DEBUG nova.virt.libvirt.driver [None req-a8cdcb09-d473-426d-b0ef-0d4cc8c09fe2 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] [instance: fdeb13eb-edb4-4bff-aeef-2671ba9d4618] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:49:29 compute-1 nova_compute[225855]: 2026-01-20 14:49:29.970 225859 DEBUG nova.virt.libvirt.driver [None req-a8cdcb09-d473-426d-b0ef-0d4cc8c09fe2 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] [instance: fdeb13eb-edb4-4bff-aeef-2671ba9d4618] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:49:29 compute-1 nova_compute[225855]: 2026-01-20 14:49:29.970 225859 DEBUG nova.virt.libvirt.driver [None req-a8cdcb09-d473-426d-b0ef-0d4cc8c09fe2 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] [instance: fdeb13eb-edb4-4bff-aeef-2671ba9d4618] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:49:29 compute-1 systemd[1]: Started libpod-conmon-6832a1e9e9fbd88072418ee5f45181e7252b473c2cddc2205a332351b82ef902.scope.
Jan 20 14:49:29 compute-1 nova_compute[225855]: 2026-01-20 14:49:29.974 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: fdeb13eb-edb4-4bff-aeef-2671ba9d4618] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:49:29 compute-1 podman[265493]: 2026-01-20 14:49:29.897037649 +0000 UTC m=+0.021899110 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 20 14:49:30 compute-1 systemd[1]: Started libcrun container.
Jan 20 14:49:30 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b9ea598c29d8f95feac6dbbb7186bfee63e25d53008923443ae675d2d1a3a93/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 20 14:49:30 compute-1 nova_compute[225855]: 2026-01-20 14:49:30.015 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: fdeb13eb-edb4-4bff-aeef-2671ba9d4618] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 14:49:30 compute-1 nova_compute[225855]: 2026-01-20 14:49:30.015 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768920569.923164, fdeb13eb-edb4-4bff-aeef-2671ba9d4618 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:49:30 compute-1 nova_compute[225855]: 2026-01-20 14:49:30.016 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: fdeb13eb-edb4-4bff-aeef-2671ba9d4618] VM Paused (Lifecycle Event)
Jan 20 14:49:30 compute-1 podman[265493]: 2026-01-20 14:49:30.024413185 +0000 UTC m=+0.149274646 container init 6832a1e9e9fbd88072418ee5f45181e7252b473c2cddc2205a332351b82ef902 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-762e1859-4db4-4d9e-b66f-d50316f80df4, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202)
Jan 20 14:49:30 compute-1 podman[265493]: 2026-01-20 14:49:30.031085333 +0000 UTC m=+0.155946774 container start 6832a1e9e9fbd88072418ee5f45181e7252b473c2cddc2205a332351b82ef902 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-762e1859-4db4-4d9e-b66f-d50316f80df4, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:49:30 compute-1 nova_compute[225855]: 2026-01-20 14:49:30.042 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: fdeb13eb-edb4-4bff-aeef-2671ba9d4618] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:49:30 compute-1 nova_compute[225855]: 2026-01-20 14:49:30.045 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768920569.9343972, fdeb13eb-edb4-4bff-aeef-2671ba9d4618 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:49:30 compute-1 nova_compute[225855]: 2026-01-20 14:49:30.046 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: fdeb13eb-edb4-4bff-aeef-2671ba9d4618] VM Resumed (Lifecycle Event)
Jan 20 14:49:30 compute-1 nova_compute[225855]: 2026-01-20 14:49:30.052 225859 INFO nova.compute.manager [None req-a8cdcb09-d473-426d-b0ef-0d4cc8c09fe2 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] [instance: fdeb13eb-edb4-4bff-aeef-2671ba9d4618] Took 11.14 seconds to spawn the instance on the hypervisor.
Jan 20 14:49:30 compute-1 nova_compute[225855]: 2026-01-20 14:49:30.052 225859 DEBUG nova.compute.manager [None req-a8cdcb09-d473-426d-b0ef-0d4cc8c09fe2 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] [instance: fdeb13eb-edb4-4bff-aeef-2671ba9d4618] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:49:30 compute-1 neutron-haproxy-ovnmeta-762e1859-4db4-4d9e-b66f-d50316f80df4[265510]: [NOTICE]   (265514) : New worker (265516) forked
Jan 20 14:49:30 compute-1 neutron-haproxy-ovnmeta-762e1859-4db4-4d9e-b66f-d50316f80df4[265510]: [NOTICE]   (265514) : Loading success.
Jan 20 14:49:30 compute-1 nova_compute[225855]: 2026-01-20 14:49:30.065 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: fdeb13eb-edb4-4bff-aeef-2671ba9d4618] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:49:30 compute-1 nova_compute[225855]: 2026-01-20 14:49:30.068 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: fdeb13eb-edb4-4bff-aeef-2671ba9d4618] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:49:30 compute-1 nova_compute[225855]: 2026-01-20 14:49:30.109 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: fdeb13eb-edb4-4bff-aeef-2671ba9d4618] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 14:49:30 compute-1 nova_compute[225855]: 2026-01-20 14:49:30.148 225859 INFO nova.compute.manager [None req-a8cdcb09-d473-426d-b0ef-0d4cc8c09fe2 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] [instance: fdeb13eb-edb4-4bff-aeef-2671ba9d4618] Took 12.40 seconds to build instance.
Jan 20 14:49:30 compute-1 nova_compute[225855]: 2026-01-20 14:49:30.168 225859 DEBUG oslo_concurrency.lockutils [None req-a8cdcb09-d473-426d-b0ef-0d4cc8c09fe2 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Lock "fdeb13eb-edb4-4bff-aeef-2671ba9d4618" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.549s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:49:30 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/4003458176' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:49:30 compute-1 nova_compute[225855]: 2026-01-20 14:49:30.783 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:49:30 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:49:30 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:49:30 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:49:30.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:49:31 compute-1 nova_compute[225855]: 2026-01-20 14:49:31.372 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:49:31 compute-1 nova_compute[225855]: 2026-01-20 14:49:31.372 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:49:31 compute-1 ceph-mon[81775]: pgmap v1846: 321 pgs: 321 active+clean; 180 MiB data, 851 MiB used, 20 GiB / 21 GiB avail; 141 KiB/s rd, 3.5 MiB/s wr, 73 op/s
Jan 20 14:49:31 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/4060347495' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:49:31 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:49:31 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:49:31 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:49:31.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:49:31 compute-1 nova_compute[225855]: 2026-01-20 14:49:31.908 225859 DEBUG nova.compute.manager [req-c6a8c477-5ed9-401f-b340-4c4b10d37be4 req-1432fb4c-2214-47c0-9208-b7ea48468fb8 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: fdeb13eb-edb4-4bff-aeef-2671ba9d4618] Received event network-vif-plugged-6855cb4f-4178-4447-af36-126ade033206 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:49:31 compute-1 nova_compute[225855]: 2026-01-20 14:49:31.908 225859 DEBUG oslo_concurrency.lockutils [req-c6a8c477-5ed9-401f-b340-4c4b10d37be4 req-1432fb4c-2214-47c0-9208-b7ea48468fb8 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "fdeb13eb-edb4-4bff-aeef-2671ba9d4618-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:49:31 compute-1 nova_compute[225855]: 2026-01-20 14:49:31.909 225859 DEBUG oslo_concurrency.lockutils [req-c6a8c477-5ed9-401f-b340-4c4b10d37be4 req-1432fb4c-2214-47c0-9208-b7ea48468fb8 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "fdeb13eb-edb4-4bff-aeef-2671ba9d4618-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:49:31 compute-1 nova_compute[225855]: 2026-01-20 14:49:31.909 225859 DEBUG oslo_concurrency.lockutils [req-c6a8c477-5ed9-401f-b340-4c4b10d37be4 req-1432fb4c-2214-47c0-9208-b7ea48468fb8 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "fdeb13eb-edb4-4bff-aeef-2671ba9d4618-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:49:31 compute-1 nova_compute[225855]: 2026-01-20 14:49:31.910 225859 DEBUG nova.compute.manager [req-c6a8c477-5ed9-401f-b340-4c4b10d37be4 req-1432fb4c-2214-47c0-9208-b7ea48468fb8 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: fdeb13eb-edb4-4bff-aeef-2671ba9d4618] No waiting events found dispatching network-vif-plugged-6855cb4f-4178-4447-af36-126ade033206 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:49:31 compute-1 nova_compute[225855]: 2026-01-20 14:49:31.910 225859 WARNING nova.compute.manager [req-c6a8c477-5ed9-401f-b340-4c4b10d37be4 req-1432fb4c-2214-47c0-9208-b7ea48468fb8 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: fdeb13eb-edb4-4bff-aeef-2671ba9d4618] Received unexpected event network-vif-plugged-6855cb4f-4178-4447-af36-126ade033206 for instance with vm_state active and task_state None.
Jan 20 14:49:32 compute-1 nova_compute[225855]: 2026-01-20 14:49:32.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:49:32 compute-1 nova_compute[225855]: 2026-01-20 14:49:32.339 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 14:49:32 compute-1 nova_compute[225855]: 2026-01-20 14:49:32.404 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 20 14:49:32 compute-1 nova_compute[225855]: 2026-01-20 14:49:32.405 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:49:32 compute-1 nova_compute[225855]: 2026-01-20 14:49:32.405 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 14:49:32 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e244 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:49:32 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:49:32 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:49:32 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:49:32.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:49:32 compute-1 nova_compute[225855]: 2026-01-20 14:49:32.839 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:49:33 compute-1 ceph-mon[81775]: pgmap v1847: 321 pgs: 321 active+clean; 180 MiB data, 852 MiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 3.6 MiB/s wr, 121 op/s
Jan 20 14:49:33 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:49:33 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:49:33 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:49:33.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:49:34 compute-1 nova_compute[225855]: 2026-01-20 14:49:34.262 225859 DEBUG nova.compute.manager [req-a3c9f776-91dd-4a21-bdfc-563f84cdfd1c req-c3611bb3-4e3a-4067-97b8-ff5a0e079494 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: fdeb13eb-edb4-4bff-aeef-2671ba9d4618] Received event network-changed-6855cb4f-4178-4447-af36-126ade033206 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:49:34 compute-1 nova_compute[225855]: 2026-01-20 14:49:34.262 225859 DEBUG nova.compute.manager [req-a3c9f776-91dd-4a21-bdfc-563f84cdfd1c req-c3611bb3-4e3a-4067-97b8-ff5a0e079494 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: fdeb13eb-edb4-4bff-aeef-2671ba9d4618] Refreshing instance network info cache due to event network-changed-6855cb4f-4178-4447-af36-126ade033206. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 14:49:34 compute-1 nova_compute[225855]: 2026-01-20 14:49:34.262 225859 DEBUG oslo_concurrency.lockutils [req-a3c9f776-91dd-4a21-bdfc-563f84cdfd1c req-c3611bb3-4e3a-4067-97b8-ff5a0e079494 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-fdeb13eb-edb4-4bff-aeef-2671ba9d4618" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:49:34 compute-1 nova_compute[225855]: 2026-01-20 14:49:34.263 225859 DEBUG oslo_concurrency.lockutils [req-a3c9f776-91dd-4a21-bdfc-563f84cdfd1c req-c3611bb3-4e3a-4067-97b8-ff5a0e079494 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-fdeb13eb-edb4-4bff-aeef-2671ba9d4618" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:49:34 compute-1 nova_compute[225855]: 2026-01-20 14:49:34.263 225859 DEBUG nova.network.neutron [req-a3c9f776-91dd-4a21-bdfc-563f84cdfd1c req-c3611bb3-4e3a-4067-97b8-ff5a0e079494 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: fdeb13eb-edb4-4bff-aeef-2671ba9d4618] Refreshing network info cache for port 6855cb4f-4178-4447-af36-126ade033206 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 14:49:34 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:49:34 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:49:34 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:49:34.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:49:34 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/44349596' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:49:35 compute-1 podman[265527]: 2026-01-20 14:49:35.05305651 +0000 UTC m=+0.084525558 container health_status 72be95c12e041eb2cd1a16ad70dfa1391d9f03a5f3180a614ead9459dcfcd46f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 20 14:49:35 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:49:35 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:49:35 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:49:35.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:49:35 compute-1 nova_compute[225855]: 2026-01-20 14:49:35.785 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:49:35 compute-1 ceph-mon[81775]: pgmap v1848: 321 pgs: 321 active+clean; 180 MiB data, 852 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 2.7 MiB/s wr, 151 op/s
Jan 20 14:49:36 compute-1 nova_compute[225855]: 2026-01-20 14:49:36.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:49:36 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:49:36 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:49:36 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:49:36.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:49:36 compute-1 nova_compute[225855]: 2026-01-20 14:49:36.888 225859 DEBUG nova.network.neutron [req-a3c9f776-91dd-4a21-bdfc-563f84cdfd1c req-c3611bb3-4e3a-4067-97b8-ff5a0e079494 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: fdeb13eb-edb4-4bff-aeef-2671ba9d4618] Updated VIF entry in instance network info cache for port 6855cb4f-4178-4447-af36-126ade033206. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 14:49:36 compute-1 nova_compute[225855]: 2026-01-20 14:49:36.889 225859 DEBUG nova.network.neutron [req-a3c9f776-91dd-4a21-bdfc-563f84cdfd1c req-c3611bb3-4e3a-4067-97b8-ff5a0e079494 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: fdeb13eb-edb4-4bff-aeef-2671ba9d4618] Updating instance_info_cache with network_info: [{"id": "6855cb4f-4178-4447-af36-126ade033206", "address": "fa:16:3e:4f:3f:20", "network": {"id": "762e1859-4db4-4d9e-b66f-d50316f80df4", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1917526237-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.180", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c5f03d46c0c4162a3b2f1530850bb6c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6855cb4f-41", "ovs_interfaceid": "6855cb4f-4178-4447-af36-126ade033206", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:49:36 compute-1 nova_compute[225855]: 2026-01-20 14:49:36.926 225859 DEBUG oslo_concurrency.lockutils [req-a3c9f776-91dd-4a21-bdfc-563f84cdfd1c req-c3611bb3-4e3a-4067-97b8-ff5a0e079494 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-fdeb13eb-edb4-4bff-aeef-2671ba9d4618" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:49:36 compute-1 ceph-mon[81775]: pgmap v1849: 321 pgs: 321 active+clean; 211 MiB data, 852 MiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 2.8 MiB/s wr, 182 op/s
Jan 20 14:49:37 compute-1 sudo[265555]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:49:37 compute-1 sudo[265555]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:49:37 compute-1 sudo[265555]: pam_unix(sudo:session): session closed for user root
Jan 20 14:49:37 compute-1 sudo[265580]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:49:37 compute-1 sudo[265580]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:49:37 compute-1 sudo[265580]: pam_unix(sudo:session): session closed for user root
Jan 20 14:49:37 compute-1 sudo[265605]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:49:37 compute-1 sudo[265605]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:49:37 compute-1 sudo[265605]: pam_unix(sudo:session): session closed for user root
Jan 20 14:49:37 compute-1 sudo[265630]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Jan 20 14:49:37 compute-1 sudo[265630]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:49:37 compute-1 nova_compute[225855]: 2026-01-20 14:49:37.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:49:37 compute-1 nova_compute[225855]: 2026-01-20 14:49:37.419 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:49:37 compute-1 nova_compute[225855]: 2026-01-20 14:49:37.419 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:49:37 compute-1 nova_compute[225855]: 2026-01-20 14:49:37.420 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:49:37 compute-1 nova_compute[225855]: 2026-01-20 14:49:37.420 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 14:49:37 compute-1 nova_compute[225855]: 2026-01-20 14:49:37.420 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:49:37 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e244 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:49:37 compute-1 sudo[265630]: pam_unix(sudo:session): session closed for user root
Jan 20 14:49:37 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:49:37 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:49:37 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:49:37.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:49:37 compute-1 sudo[265696]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:49:37 compute-1 sudo[265696]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:49:37 compute-1 sudo[265696]: pam_unix(sudo:session): session closed for user root
Jan 20 14:49:37 compute-1 sudo[265722]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:49:37 compute-1 sudo[265722]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:49:37 compute-1 sudo[265722]: pam_unix(sudo:session): session closed for user root
Jan 20 14:49:37 compute-1 sudo[265747]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:49:37 compute-1 sudo[265747]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:49:37 compute-1 sudo[265747]: pam_unix(sudo:session): session closed for user root
Jan 20 14:49:37 compute-1 nova_compute[225855]: 2026-01-20 14:49:37.842 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:49:37 compute-1 sudo[265772]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 20 14:49:37 compute-1 sudo[265772]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:49:37 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:49:37 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2025183021' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:49:37 compute-1 nova_compute[225855]: 2026-01-20 14:49:37.952 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.532s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:49:37 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/1764439710' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:49:37 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:49:37 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:49:37 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:49:37 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:49:37 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 20 14:49:37 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/2025183021' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:49:37 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/2922371658' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:49:38 compute-1 nova_compute[225855]: 2026-01-20 14:49:38.039 225859 DEBUG nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] skipping disk for instance-00000069 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 14:49:38 compute-1 nova_compute[225855]: 2026-01-20 14:49:38.040 225859 DEBUG nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] skipping disk for instance-00000069 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 14:49:38 compute-1 nova_compute[225855]: 2026-01-20 14:49:38.205 225859 WARNING nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 14:49:38 compute-1 nova_compute[225855]: 2026-01-20 14:49:38.207 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4292MB free_disk=20.935195922851562GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 14:49:38 compute-1 nova_compute[225855]: 2026-01-20 14:49:38.207 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:49:38 compute-1 nova_compute[225855]: 2026-01-20 14:49:38.208 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:49:38 compute-1 nova_compute[225855]: 2026-01-20 14:49:38.362 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Instance fdeb13eb-edb4-4bff-aeef-2671ba9d4618 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 20 14:49:38 compute-1 nova_compute[225855]: 2026-01-20 14:49:38.363 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 14:49:38 compute-1 nova_compute[225855]: 2026-01-20 14:49:38.363 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 14:49:38 compute-1 sudo[265772]: pam_unix(sudo:session): session closed for user root
Jan 20 14:49:38 compute-1 nova_compute[225855]: 2026-01-20 14:49:38.414 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:49:38 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:49:38 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:49:38 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:49:38.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:49:38 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:49:38 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1374144073' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:49:38 compute-1 nova_compute[225855]: 2026-01-20 14:49:38.854 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:49:38 compute-1 nova_compute[225855]: 2026-01-20 14:49:38.859 225859 DEBUG nova.compute.provider_tree [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:49:38 compute-1 nova_compute[225855]: 2026-01-20 14:49:38.897 225859 DEBUG nova.scheduler.client.report [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:49:38 compute-1 nova_compute[225855]: 2026-01-20 14:49:38.936 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 14:49:38 compute-1 nova_compute[225855]: 2026-01-20 14:49:38.937 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.729s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:49:38 compute-1 ceph-mon[81775]: pgmap v1850: 321 pgs: 321 active+clean; 220 MiB data, 856 MiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 1.3 MiB/s wr, 169 op/s
Jan 20 14:49:39 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 20 14:49:39 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:49:39 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 14:49:39 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:49:39 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 14:49:39 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 14:49:39 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:49:39 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/3553675854' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:49:39 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/1374144073' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:49:39 compute-1 nova_compute[225855]: 2026-01-20 14:49:39.376 225859 DEBUG nova.compute.manager [None req-daed7e12-e146-4686-9444-14f2e75c6ad9 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: 52477e64-7989-4aa2-88e1-31600bfae2ef] Stashing vm_state: active _prep_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:5560
Jan 20 14:49:39 compute-1 nova_compute[225855]: 2026-01-20 14:49:39.575 225859 DEBUG oslo_concurrency.lockutils [None req-daed7e12-e146-4686-9444-14f2e75c6ad9 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:49:39 compute-1 nova_compute[225855]: 2026-01-20 14:49:39.576 225859 DEBUG oslo_concurrency.lockutils [None req-daed7e12-e146-4686-9444-14f2e75c6ad9 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:49:39 compute-1 nova_compute[225855]: 2026-01-20 14:49:39.616 225859 DEBUG nova.objects.instance [None req-daed7e12-e146-4686-9444-14f2e75c6ad9 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Lazy-loading 'pci_requests' on Instance uuid 52477e64-7989-4aa2-88e1-31600bfae2ef obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:49:39 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:49:39 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:49:39 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:49:39.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:49:39 compute-1 nova_compute[225855]: 2026-01-20 14:49:39.635 225859 DEBUG nova.virt.hardware [None req-daed7e12-e146-4686-9444-14f2e75c6ad9 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 20 14:49:39 compute-1 nova_compute[225855]: 2026-01-20 14:49:39.636 225859 INFO nova.compute.claims [None req-daed7e12-e146-4686-9444-14f2e75c6ad9 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: 52477e64-7989-4aa2-88e1-31600bfae2ef] Claim successful on node compute-1.ctlplane.example.com
Jan 20 14:49:39 compute-1 nova_compute[225855]: 2026-01-20 14:49:39.636 225859 DEBUG nova.objects.instance [None req-daed7e12-e146-4686-9444-14f2e75c6ad9 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Lazy-loading 'resources' on Instance uuid 52477e64-7989-4aa2-88e1-31600bfae2ef obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:49:39 compute-1 nova_compute[225855]: 2026-01-20 14:49:39.691 225859 DEBUG nova.objects.instance [None req-daed7e12-e146-4686-9444-14f2e75c6ad9 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Lazy-loading 'pci_devices' on Instance uuid 52477e64-7989-4aa2-88e1-31600bfae2ef obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:49:39 compute-1 nova_compute[225855]: 2026-01-20 14:49:39.809 225859 INFO nova.compute.resource_tracker [None req-daed7e12-e146-4686-9444-14f2e75c6ad9 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: 52477e64-7989-4aa2-88e1-31600bfae2ef] Updating resource usage from migration 4a873a64-1379-4cac-913e-e81f3f300ec7
Jan 20 14:49:39 compute-1 nova_compute[225855]: 2026-01-20 14:49:39.810 225859 DEBUG nova.compute.resource_tracker [None req-daed7e12-e146-4686-9444-14f2e75c6ad9 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: 52477e64-7989-4aa2-88e1-31600bfae2ef] Starting to track incoming migration 4a873a64-1379-4cac-913e-e81f3f300ec7 with flavor 30c26a27-d918-46d8-a512-4ef3b4ce5955 _update_usage_from_migration /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1431
Jan 20 14:49:39 compute-1 nova_compute[225855]: 2026-01-20 14:49:39.922 225859 DEBUG oslo_concurrency.processutils [None req-daed7e12-e146-4686-9444-14f2e75c6ad9 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:49:40 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/4163135150' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:49:40 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:49:40 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/728206553' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:49:40 compute-1 nova_compute[225855]: 2026-01-20 14:49:40.387 225859 DEBUG oslo_concurrency.processutils [None req-daed7e12-e146-4686-9444-14f2e75c6ad9 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:49:40 compute-1 nova_compute[225855]: 2026-01-20 14:49:40.396 225859 DEBUG nova.compute.provider_tree [None req-daed7e12-e146-4686-9444-14f2e75c6ad9 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:49:40 compute-1 nova_compute[225855]: 2026-01-20 14:49:40.418 225859 DEBUG nova.scheduler.client.report [None req-daed7e12-e146-4686-9444-14f2e75c6ad9 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:49:40 compute-1 nova_compute[225855]: 2026-01-20 14:49:40.451 225859 DEBUG oslo_concurrency.lockutils [None req-daed7e12-e146-4686-9444-14f2e75c6ad9 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: held 0.875s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:49:40 compute-1 nova_compute[225855]: 2026-01-20 14:49:40.452 225859 INFO nova.compute.manager [None req-daed7e12-e146-4686-9444-14f2e75c6ad9 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: 52477e64-7989-4aa2-88e1-31600bfae2ef] Migrating
Jan 20 14:49:40 compute-1 nova_compute[225855]: 2026-01-20 14:49:40.787 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:49:40 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:49:40 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:49:40 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:49:40.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:49:41 compute-1 ceph-mon[81775]: pgmap v1851: 321 pgs: 321 active+clean; 227 MiB data, 862 MiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 1.8 MiB/s wr, 169 op/s
Jan 20 14:49:41 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/728206553' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:49:41 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:49:41 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:49:41 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:49:41.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:49:41 compute-1 nova_compute[225855]: 2026-01-20 14:49:41.933 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:49:41 compute-1 nova_compute[225855]: 2026-01-20 14:49:41.933 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:49:41 compute-1 nova_compute[225855]: 2026-01-20 14:49:41.934 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:49:42 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e244 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:49:42 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:49:42 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:49:42 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:49:42.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:49:42 compute-1 nova_compute[225855]: 2026-01-20 14:49:42.847 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:49:43 compute-1 ceph-mon[81775]: pgmap v1852: 321 pgs: 321 active+clean; 227 MiB data, 873 MiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 1.8 MiB/s wr, 166 op/s
Jan 20 14:49:43 compute-1 ovn_controller[130490]: 2026-01-20T14:49:43Z|00051|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:4f:3f:20 10.100.0.12
Jan 20 14:49:43 compute-1 ovn_controller[130490]: 2026-01-20T14:49:43Z|00052|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:4f:3f:20 10.100.0.12
Jan 20 14:49:43 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:49:43 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:49:43 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:49:43.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:49:43 compute-1 sshd-session[265878]: Accepted publickey for nova from 192.168.122.100 port 60278 ssh2: ECDSA SHA256:XnPnjIKlkePRv+YAV8ktjwWUWX9aekF80jIRGfdhjRU
Jan 20 14:49:43 compute-1 systemd[1]: Created slice User Slice of UID 42436.
Jan 20 14:49:43 compute-1 systemd[1]: Starting User Runtime Directory /run/user/42436...
Jan 20 14:49:43 compute-1 systemd-logind[783]: New session 63 of user nova.
Jan 20 14:49:43 compute-1 systemd[1]: Finished User Runtime Directory /run/user/42436.
Jan 20 14:49:43 compute-1 systemd[1]: Starting User Manager for UID 42436...
Jan 20 14:49:43 compute-1 systemd[265882]: pam_unix(systemd-user:session): session opened for user nova(uid=42436) by nova(uid=0)
Jan 20 14:49:43 compute-1 systemd[265882]: Queued start job for default target Main User Target.
Jan 20 14:49:43 compute-1 systemd[265882]: Created slice User Application Slice.
Jan 20 14:49:43 compute-1 systemd[265882]: Started Mark boot as successful after the user session has run 2 minutes.
Jan 20 14:49:43 compute-1 systemd[265882]: Started Daily Cleanup of User's Temporary Directories.
Jan 20 14:49:43 compute-1 systemd[265882]: Reached target Paths.
Jan 20 14:49:43 compute-1 systemd[265882]: Reached target Timers.
Jan 20 14:49:43 compute-1 systemd[265882]: Starting D-Bus User Message Bus Socket...
Jan 20 14:49:43 compute-1 systemd[265882]: Starting Create User's Volatile Files and Directories...
Jan 20 14:49:43 compute-1 systemd[265882]: Finished Create User's Volatile Files and Directories.
Jan 20 14:49:43 compute-1 systemd[265882]: Listening on D-Bus User Message Bus Socket.
Jan 20 14:49:43 compute-1 systemd[265882]: Reached target Sockets.
Jan 20 14:49:43 compute-1 systemd[265882]: Reached target Basic System.
Jan 20 14:49:43 compute-1 systemd[265882]: Reached target Main User Target.
Jan 20 14:49:43 compute-1 systemd[265882]: Startup finished in 176ms.
Jan 20 14:49:43 compute-1 systemd[1]: Started User Manager for UID 42436.
Jan 20 14:49:43 compute-1 systemd[1]: Started Session 63 of User nova.
Jan 20 14:49:43 compute-1 sshd-session[265878]: pam_unix(sshd:session): session opened for user nova(uid=42436) by nova(uid=0)
Jan 20 14:49:44 compute-1 sshd-session[265897]: Received disconnect from 192.168.122.100 port 60278:11: disconnected by user
Jan 20 14:49:44 compute-1 sshd-session[265897]: Disconnected from user nova 192.168.122.100 port 60278
Jan 20 14:49:44 compute-1 sshd-session[265878]: pam_unix(sshd:session): session closed for user nova
Jan 20 14:49:44 compute-1 systemd[1]: session-63.scope: Deactivated successfully.
Jan 20 14:49:44 compute-1 systemd-logind[783]: Session 63 logged out. Waiting for processes to exit.
Jan 20 14:49:44 compute-1 systemd-logind[783]: Removed session 63.
Jan 20 14:49:44 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/450672286' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:49:44 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/4171830442' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:49:44 compute-1 sshd-session[265899]: Accepted publickey for nova from 192.168.122.100 port 60294 ssh2: ECDSA SHA256:XnPnjIKlkePRv+YAV8ktjwWUWX9aekF80jIRGfdhjRU
Jan 20 14:49:44 compute-1 systemd-logind[783]: New session 65 of user nova.
Jan 20 14:49:44 compute-1 systemd[1]: Started Session 65 of User nova.
Jan 20 14:49:44 compute-1 sshd-session[265899]: pam_unix(sshd:session): session opened for user nova(uid=42436) by nova(uid=0)
Jan 20 14:49:44 compute-1 sshd-session[265902]: Received disconnect from 192.168.122.100 port 60294:11: disconnected by user
Jan 20 14:49:44 compute-1 sshd-session[265902]: Disconnected from user nova 192.168.122.100 port 60294
Jan 20 14:49:44 compute-1 sshd-session[265899]: pam_unix(sshd:session): session closed for user nova
Jan 20 14:49:44 compute-1 systemd[1]: session-65.scope: Deactivated successfully.
Jan 20 14:49:44 compute-1 systemd-logind[783]: Session 65 logged out. Waiting for processes to exit.
Jan 20 14:49:44 compute-1 systemd-logind[783]: Removed session 65.
Jan 20 14:49:44 compute-1 sudo[265904]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:49:44 compute-1 sudo[265904]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:49:44 compute-1 sudo[265904]: pam_unix(sudo:session): session closed for user root
Jan 20 14:49:44 compute-1 sudo[265929]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 14:49:44 compute-1 sudo[265929]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:49:44 compute-1 sudo[265929]: pam_unix(sudo:session): session closed for user root
Jan 20 14:49:44 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:49:44 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:49:44 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:49:44.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:49:45 compute-1 ceph-mon[81775]: pgmap v1853: 321 pgs: 321 active+clean; 239 MiB data, 882 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 2.8 MiB/s wr, 143 op/s
Jan 20 14:49:45 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:49:45 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:49:45 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:49:45 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:49:45 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:49:45.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:49:45 compute-1 nova_compute[225855]: 2026-01-20 14:49:45.789 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:49:46 compute-1 podman[265955]: 2026-01-20 14:49:46.021609585 +0000 UTC m=+0.061468676 container health_status 533cd8e0c3b8b7f910f15cef49e7f879ab46c12ce646a35628c15f13725eb500 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 20 14:49:46 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:49:46 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:49:46 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:49:46.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:49:47 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e244 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:49:47 compute-1 ceph-mon[81775]: pgmap v1854: 321 pgs: 321 active+clean; 290 MiB data, 919 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 6.0 MiB/s wr, 190 op/s
Jan 20 14:49:47 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:49:47 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:49:47 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:49:47.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:49:47 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:49:47.818 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=34, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '12:bb:42', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '06:92:24:f7:15:56'}, ipsec=False) old=SB_Global(nb_cfg=33) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:49:47 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:49:47.819 140354 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 20 14:49:47 compute-1 nova_compute[225855]: 2026-01-20 14:49:47.819 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:49:47 compute-1 nova_compute[225855]: 2026-01-20 14:49:47.849 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:49:48 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:49:48 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:49:48 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:49:48.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:49:48 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:49:48.822 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5ffd4ac3-9266-4927-98ad-20a17782c725, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '34'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:49:49 compute-1 nova_compute[225855]: 2026-01-20 14:49:49.090 225859 DEBUG nova.compute.manager [req-ab956100-da64-4328-aa1c-51e8ba0af36c req-e0293cd6-6f0d-403c-84eb-63a433002bb7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 52477e64-7989-4aa2-88e1-31600bfae2ef] Received event network-vif-unplugged-8286e975-4b57-4b5a-9018-82187a854a2d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:49:49 compute-1 nova_compute[225855]: 2026-01-20 14:49:49.090 225859 DEBUG oslo_concurrency.lockutils [req-ab956100-da64-4328-aa1c-51e8ba0af36c req-e0293cd6-6f0d-403c-84eb-63a433002bb7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "52477e64-7989-4aa2-88e1-31600bfae2ef-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:49:49 compute-1 nova_compute[225855]: 2026-01-20 14:49:49.090 225859 DEBUG oslo_concurrency.lockutils [req-ab956100-da64-4328-aa1c-51e8ba0af36c req-e0293cd6-6f0d-403c-84eb-63a433002bb7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "52477e64-7989-4aa2-88e1-31600bfae2ef-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:49:49 compute-1 nova_compute[225855]: 2026-01-20 14:49:49.091 225859 DEBUG oslo_concurrency.lockutils [req-ab956100-da64-4328-aa1c-51e8ba0af36c req-e0293cd6-6f0d-403c-84eb-63a433002bb7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "52477e64-7989-4aa2-88e1-31600bfae2ef-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:49:49 compute-1 nova_compute[225855]: 2026-01-20 14:49:49.091 225859 DEBUG nova.compute.manager [req-ab956100-da64-4328-aa1c-51e8ba0af36c req-e0293cd6-6f0d-403c-84eb-63a433002bb7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 52477e64-7989-4aa2-88e1-31600bfae2ef] No waiting events found dispatching network-vif-unplugged-8286e975-4b57-4b5a-9018-82187a854a2d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:49:49 compute-1 nova_compute[225855]: 2026-01-20 14:49:49.091 225859 WARNING nova.compute.manager [req-ab956100-da64-4328-aa1c-51e8ba0af36c req-e0293cd6-6f0d-403c-84eb-63a433002bb7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 52477e64-7989-4aa2-88e1-31600bfae2ef] Received unexpected event network-vif-unplugged-8286e975-4b57-4b5a-9018-82187a854a2d for instance with vm_state active and task_state resize_migrated.
Jan 20 14:49:49 compute-1 sudo[265974]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:49:49 compute-1 sudo[265974]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:49:49 compute-1 sudo[265974]: pam_unix(sudo:session): session closed for user root
Jan 20 14:49:49 compute-1 ceph-mon[81775]: pgmap v1855: 321 pgs: 321 active+clean; 293 MiB data, 928 MiB used, 20 GiB / 21 GiB avail; 640 KiB/s rd, 5.1 MiB/s wr, 153 op/s
Jan 20 14:49:49 compute-1 nova_compute[225855]: 2026-01-20 14:49:49.574 225859 INFO nova.network.neutron [None req-daed7e12-e146-4686-9444-14f2e75c6ad9 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: 52477e64-7989-4aa2-88e1-31600bfae2ef] Updating port 8286e975-4b57-4b5a-9018-82187a854a2d with attributes {'binding:host_id': 'compute-1.ctlplane.example.com', 'device_owner': 'compute:nova'}
Jan 20 14:49:49 compute-1 sudo[265999]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:49:49 compute-1 sudo[265999]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:49:49 compute-1 sudo[265999]: pam_unix(sudo:session): session closed for user root
Jan 20 14:49:49 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:49:49 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:49:49 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:49:49.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:49:50 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:49:50 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:49:50 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:49:50.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:49:50 compute-1 nova_compute[225855]: 2026-01-20 14:49:50.824 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:49:51 compute-1 nova_compute[225855]: 2026-01-20 14:49:51.502 225859 DEBUG nova.compute.manager [req-3e6b9d77-46e1-46f6-8611-3612121dda9d req-a1d4fdc6-a417-458e-a594-1368b2e84d06 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 52477e64-7989-4aa2-88e1-31600bfae2ef] Received event network-vif-plugged-8286e975-4b57-4b5a-9018-82187a854a2d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:49:51 compute-1 nova_compute[225855]: 2026-01-20 14:49:51.502 225859 DEBUG oslo_concurrency.lockutils [req-3e6b9d77-46e1-46f6-8611-3612121dda9d req-a1d4fdc6-a417-458e-a594-1368b2e84d06 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "52477e64-7989-4aa2-88e1-31600bfae2ef-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:49:51 compute-1 nova_compute[225855]: 2026-01-20 14:49:51.503 225859 DEBUG oslo_concurrency.lockutils [req-3e6b9d77-46e1-46f6-8611-3612121dda9d req-a1d4fdc6-a417-458e-a594-1368b2e84d06 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "52477e64-7989-4aa2-88e1-31600bfae2ef-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:49:51 compute-1 nova_compute[225855]: 2026-01-20 14:49:51.503 225859 DEBUG oslo_concurrency.lockutils [req-3e6b9d77-46e1-46f6-8611-3612121dda9d req-a1d4fdc6-a417-458e-a594-1368b2e84d06 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "52477e64-7989-4aa2-88e1-31600bfae2ef-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:49:51 compute-1 nova_compute[225855]: 2026-01-20 14:49:51.503 225859 DEBUG nova.compute.manager [req-3e6b9d77-46e1-46f6-8611-3612121dda9d req-a1d4fdc6-a417-458e-a594-1368b2e84d06 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 52477e64-7989-4aa2-88e1-31600bfae2ef] No waiting events found dispatching network-vif-plugged-8286e975-4b57-4b5a-9018-82187a854a2d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:49:51 compute-1 nova_compute[225855]: 2026-01-20 14:49:51.503 225859 WARNING nova.compute.manager [req-3e6b9d77-46e1-46f6-8611-3612121dda9d req-a1d4fdc6-a417-458e-a594-1368b2e84d06 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 52477e64-7989-4aa2-88e1-31600bfae2ef] Received unexpected event network-vif-plugged-8286e975-4b57-4b5a-9018-82187a854a2d for instance with vm_state active and task_state resize_migrated.
Jan 20 14:49:51 compute-1 ceph-mon[81775]: pgmap v1856: 321 pgs: 321 active+clean; 293 MiB data, 928 MiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 4.8 MiB/s wr, 157 op/s
Jan 20 14:49:51 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:49:51 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:49:51 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:49:51.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:49:51 compute-1 nova_compute[225855]: 2026-01-20 14:49:51.840 225859 DEBUG oslo_concurrency.lockutils [None req-daed7e12-e146-4686-9444-14f2e75c6ad9 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Acquiring lock "refresh_cache-52477e64-7989-4aa2-88e1-31600bfae2ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:49:51 compute-1 nova_compute[225855]: 2026-01-20 14:49:51.840 225859 DEBUG oslo_concurrency.lockutils [None req-daed7e12-e146-4686-9444-14f2e75c6ad9 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Acquired lock "refresh_cache-52477e64-7989-4aa2-88e1-31600bfae2ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:49:51 compute-1 nova_compute[225855]: 2026-01-20 14:49:51.841 225859 DEBUG nova.network.neutron [None req-daed7e12-e146-4686-9444-14f2e75c6ad9 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: 52477e64-7989-4aa2-88e1-31600bfae2ef] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 14:49:52 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e244 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:49:52 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:49:52 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:49:52 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:49:52.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:49:52 compute-1 nova_compute[225855]: 2026-01-20 14:49:52.852 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:49:53 compute-1 nova_compute[225855]: 2026-01-20 14:49:53.178 225859 DEBUG nova.compute.manager [None req-c7d66eda-2386-42cf-82ee-49bf287fa76d 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] [instance: fdeb13eb-edb4-4bff-aeef-2671ba9d4618] Stashing vm_state: active _prep_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:5560
Jan 20 14:49:53 compute-1 nova_compute[225855]: 2026-01-20 14:49:53.314 225859 DEBUG oslo_concurrency.lockutils [None req-c7d66eda-2386-42cf-82ee-49bf287fa76d 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:49:53 compute-1 nova_compute[225855]: 2026-01-20 14:49:53.315 225859 DEBUG oslo_concurrency.lockutils [None req-c7d66eda-2386-42cf-82ee-49bf287fa76d 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:49:53 compute-1 nova_compute[225855]: 2026-01-20 14:49:53.337 225859 DEBUG nova.objects.instance [None req-c7d66eda-2386-42cf-82ee-49bf287fa76d 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Lazy-loading 'pci_requests' on Instance uuid fdeb13eb-edb4-4bff-aeef-2671ba9d4618 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:49:53 compute-1 nova_compute[225855]: 2026-01-20 14:49:53.374 225859 DEBUG nova.virt.hardware [None req-c7d66eda-2386-42cf-82ee-49bf287fa76d 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 20 14:49:53 compute-1 nova_compute[225855]: 2026-01-20 14:49:53.374 225859 INFO nova.compute.claims [None req-c7d66eda-2386-42cf-82ee-49bf287fa76d 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] [instance: fdeb13eb-edb4-4bff-aeef-2671ba9d4618] Claim successful on node compute-1.ctlplane.example.com
Jan 20 14:49:53 compute-1 nova_compute[225855]: 2026-01-20 14:49:53.375 225859 DEBUG nova.objects.instance [None req-c7d66eda-2386-42cf-82ee-49bf287fa76d 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Lazy-loading 'resources' on Instance uuid fdeb13eb-edb4-4bff-aeef-2671ba9d4618 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:49:53 compute-1 nova_compute[225855]: 2026-01-20 14:49:53.388 225859 DEBUG nova.objects.instance [None req-c7d66eda-2386-42cf-82ee-49bf287fa76d 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Lazy-loading 'pci_devices' on Instance uuid fdeb13eb-edb4-4bff-aeef-2671ba9d4618 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:49:53 compute-1 nova_compute[225855]: 2026-01-20 14:49:53.458 225859 INFO nova.compute.resource_tracker [None req-c7d66eda-2386-42cf-82ee-49bf287fa76d 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] [instance: fdeb13eb-edb4-4bff-aeef-2671ba9d4618] Updating resource usage from migration d2604dec-40c2-43fa-9566-b7cf6ab6e7a7
Jan 20 14:49:53 compute-1 nova_compute[225855]: 2026-01-20 14:49:53.557 225859 DEBUG oslo_concurrency.processutils [None req-c7d66eda-2386-42cf-82ee-49bf287fa76d 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:49:53 compute-1 ceph-mon[81775]: pgmap v1857: 321 pgs: 321 active+clean; 293 MiB data, 928 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 4.3 MiB/s wr, 193 op/s
Jan 20 14:49:53 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:49:53 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:49:53 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:49:53.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:49:53 compute-1 nova_compute[225855]: 2026-01-20 14:49:53.656 225859 DEBUG nova.compute.manager [req-0bffe202-7fe9-492b-bcf2-9e81adca93b4 req-9be68d86-36e4-4aba-afff-4deea996d0d0 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 52477e64-7989-4aa2-88e1-31600bfae2ef] Received event network-changed-8286e975-4b57-4b5a-9018-82187a854a2d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:49:53 compute-1 nova_compute[225855]: 2026-01-20 14:49:53.656 225859 DEBUG nova.compute.manager [req-0bffe202-7fe9-492b-bcf2-9e81adca93b4 req-9be68d86-36e4-4aba-afff-4deea996d0d0 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 52477e64-7989-4aa2-88e1-31600bfae2ef] Refreshing instance network info cache due to event network-changed-8286e975-4b57-4b5a-9018-82187a854a2d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 14:49:53 compute-1 nova_compute[225855]: 2026-01-20 14:49:53.657 225859 DEBUG oslo_concurrency.lockutils [req-0bffe202-7fe9-492b-bcf2-9e81adca93b4 req-9be68d86-36e4-4aba-afff-4deea996d0d0 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-52477e64-7989-4aa2-88e1-31600bfae2ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:49:53 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:49:53 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2195194502' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:49:54 compute-1 nova_compute[225855]: 2026-01-20 14:49:54.003 225859 DEBUG oslo_concurrency.processutils [None req-c7d66eda-2386-42cf-82ee-49bf287fa76d 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:49:54 compute-1 nova_compute[225855]: 2026-01-20 14:49:54.009 225859 DEBUG nova.compute.provider_tree [None req-c7d66eda-2386-42cf-82ee-49bf287fa76d 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:49:54 compute-1 nova_compute[225855]: 2026-01-20 14:49:54.035 225859 DEBUG nova.scheduler.client.report [None req-c7d66eda-2386-42cf-82ee-49bf287fa76d 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:49:54 compute-1 nova_compute[225855]: 2026-01-20 14:49:54.057 225859 DEBUG oslo_concurrency.lockutils [None req-c7d66eda-2386-42cf-82ee-49bf287fa76d 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: held 0.742s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:49:54 compute-1 nova_compute[225855]: 2026-01-20 14:49:54.058 225859 INFO nova.compute.manager [None req-c7d66eda-2386-42cf-82ee-49bf287fa76d 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] [instance: fdeb13eb-edb4-4bff-aeef-2671ba9d4618] Migrating
Jan 20 14:49:54 compute-1 nova_compute[225855]: 2026-01-20 14:49:54.079 225859 DEBUG nova.network.neutron [None req-daed7e12-e146-4686-9444-14f2e75c6ad9 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: 52477e64-7989-4aa2-88e1-31600bfae2ef] Updating instance_info_cache with network_info: [{"id": "8286e975-4b57-4b5a-9018-82187a854a2d", "address": "fa:16:3e:19:a9:8c", "network": {"id": "3379e2b3-ffb2-4391-969b-c9dc51bfbe25", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1112843240-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "acb30fbc0e3749e390d7f867060b5a2a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8286e975-4b", "ovs_interfaceid": "8286e975-4b57-4b5a-9018-82187a854a2d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:49:54 compute-1 nova_compute[225855]: 2026-01-20 14:49:54.114 225859 DEBUG oslo_concurrency.lockutils [None req-daed7e12-e146-4686-9444-14f2e75c6ad9 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Releasing lock "refresh_cache-52477e64-7989-4aa2-88e1-31600bfae2ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:49:54 compute-1 nova_compute[225855]: 2026-01-20 14:49:54.119 225859 DEBUG oslo_concurrency.lockutils [req-0bffe202-7fe9-492b-bcf2-9e81adca93b4 req-9be68d86-36e4-4aba-afff-4deea996d0d0 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-52477e64-7989-4aa2-88e1-31600bfae2ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:49:54 compute-1 nova_compute[225855]: 2026-01-20 14:49:54.119 225859 DEBUG nova.network.neutron [req-0bffe202-7fe9-492b-bcf2-9e81adca93b4 req-9be68d86-36e4-4aba-afff-4deea996d0d0 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 52477e64-7989-4aa2-88e1-31600bfae2ef] Refreshing network info cache for port 8286e975-4b57-4b5a-9018-82187a854a2d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 14:49:54 compute-1 nova_compute[225855]: 2026-01-20 14:49:54.121 225859 DEBUG oslo_concurrency.lockutils [None req-c7d66eda-2386-42cf-82ee-49bf287fa76d 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Acquiring lock "refresh_cache-fdeb13eb-edb4-4bff-aeef-2671ba9d4618" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:49:54 compute-1 nova_compute[225855]: 2026-01-20 14:49:54.121 225859 DEBUG oslo_concurrency.lockutils [None req-c7d66eda-2386-42cf-82ee-49bf287fa76d 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Acquired lock "refresh_cache-fdeb13eb-edb4-4bff-aeef-2671ba9d4618" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:49:54 compute-1 nova_compute[225855]: 2026-01-20 14:49:54.121 225859 DEBUG nova.network.neutron [None req-c7d66eda-2386-42cf-82ee-49bf287fa76d 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] [instance: fdeb13eb-edb4-4bff-aeef-2671ba9d4618] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 14:49:54 compute-1 nova_compute[225855]: 2026-01-20 14:49:54.221 225859 DEBUG nova.virt.libvirt.driver [None req-daed7e12-e146-4686-9444-14f2e75c6ad9 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: 52477e64-7989-4aa2-88e1-31600bfae2ef] Starting finish_migration finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11698
Jan 20 14:49:54 compute-1 nova_compute[225855]: 2026-01-20 14:49:54.223 225859 DEBUG nova.virt.libvirt.driver [None req-daed7e12-e146-4686-9444-14f2e75c6ad9 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: 52477e64-7989-4aa2-88e1-31600bfae2ef] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719
Jan 20 14:49:54 compute-1 nova_compute[225855]: 2026-01-20 14:49:54.223 225859 INFO nova.virt.libvirt.driver [None req-daed7e12-e146-4686-9444-14f2e75c6ad9 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: 52477e64-7989-4aa2-88e1-31600bfae2ef] Creating image(s)
Jan 20 14:49:54 compute-1 nova_compute[225855]: 2026-01-20 14:49:54.257 225859 DEBUG nova.storage.rbd_utils [None req-daed7e12-e146-4686-9444-14f2e75c6ad9 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] creating snapshot(nova-resize) on rbd image(52477e64-7989-4aa2-88e1-31600bfae2ef_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 20 14:49:54 compute-1 systemd[1]: Stopping User Manager for UID 42436...
Jan 20 14:49:54 compute-1 systemd[265882]: Activating special unit Exit the Session...
Jan 20 14:49:54 compute-1 systemd[265882]: Stopped target Main User Target.
Jan 20 14:49:54 compute-1 systemd[265882]: Stopped target Basic System.
Jan 20 14:49:54 compute-1 systemd[265882]: Stopped target Paths.
Jan 20 14:49:54 compute-1 systemd[265882]: Stopped target Sockets.
Jan 20 14:49:54 compute-1 systemd[265882]: Stopped target Timers.
Jan 20 14:49:54 compute-1 systemd[265882]: Stopped Mark boot as successful after the user session has run 2 minutes.
Jan 20 14:49:54 compute-1 systemd[265882]: Stopped Daily Cleanup of User's Temporary Directories.
Jan 20 14:49:54 compute-1 systemd[265882]: Closed D-Bus User Message Bus Socket.
Jan 20 14:49:54 compute-1 systemd[265882]: Stopped Create User's Volatile Files and Directories.
Jan 20 14:49:54 compute-1 systemd[265882]: Removed slice User Application Slice.
Jan 20 14:49:54 compute-1 systemd[265882]: Reached target Shutdown.
Jan 20 14:49:54 compute-1 systemd[265882]: Finished Exit the Session.
Jan 20 14:49:54 compute-1 systemd[265882]: Reached target Exit the Session.
Jan 20 14:49:54 compute-1 systemd[1]: user@42436.service: Deactivated successfully.
Jan 20 14:49:54 compute-1 systemd[1]: Stopped User Manager for UID 42436.
Jan 20 14:49:54 compute-1 systemd[1]: Stopping User Runtime Directory /run/user/42436...
Jan 20 14:49:54 compute-1 systemd[1]: run-user-42436.mount: Deactivated successfully.
Jan 20 14:49:54 compute-1 systemd[1]: user-runtime-dir@42436.service: Deactivated successfully.
Jan 20 14:49:54 compute-1 systemd[1]: Stopped User Runtime Directory /run/user/42436.
Jan 20 14:49:54 compute-1 systemd[1]: Removed slice User Slice of UID 42436.
Jan 20 14:49:54 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e245 e245: 3 total, 3 up, 3 in
Jan 20 14:49:54 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/2195194502' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:49:54 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/2454238869' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:49:54 compute-1 nova_compute[225855]: 2026-01-20 14:49:54.670 225859 DEBUG nova.objects.instance [None req-daed7e12-e146-4686-9444-14f2e75c6ad9 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Lazy-loading 'trusted_certs' on Instance uuid 52477e64-7989-4aa2-88e1-31600bfae2ef obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:49:54 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:49:54 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:49:54 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:49:54.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:49:55 compute-1 nova_compute[225855]: 2026-01-20 14:49:55.239 225859 DEBUG nova.virt.libvirt.driver [None req-daed7e12-e146-4686-9444-14f2e75c6ad9 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: 52477e64-7989-4aa2-88e1-31600bfae2ef] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Jan 20 14:49:55 compute-1 nova_compute[225855]: 2026-01-20 14:49:55.239 225859 DEBUG nova.virt.libvirt.driver [None req-daed7e12-e146-4686-9444-14f2e75c6ad9 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: 52477e64-7989-4aa2-88e1-31600bfae2ef] Ensure instance console log exists: /var/lib/nova/instances/52477e64-7989-4aa2-88e1-31600bfae2ef/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 20 14:49:55 compute-1 nova_compute[225855]: 2026-01-20 14:49:55.240 225859 DEBUG oslo_concurrency.lockutils [None req-daed7e12-e146-4686-9444-14f2e75c6ad9 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:49:55 compute-1 nova_compute[225855]: 2026-01-20 14:49:55.241 225859 DEBUG oslo_concurrency.lockutils [None req-daed7e12-e146-4686-9444-14f2e75c6ad9 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:49:55 compute-1 nova_compute[225855]: 2026-01-20 14:49:55.241 225859 DEBUG oslo_concurrency.lockutils [None req-daed7e12-e146-4686-9444-14f2e75c6ad9 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:49:55 compute-1 nova_compute[225855]: 2026-01-20 14:49:55.244 225859 DEBUG nova.virt.libvirt.driver [None req-daed7e12-e146-4686-9444-14f2e75c6ad9 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: 52477e64-7989-4aa2-88e1-31600bfae2ef] Start _get_guest_xml network_info=[{"id": "8286e975-4b57-4b5a-9018-82187a854a2d", "address": "fa:16:3e:19:a9:8c", "network": {"id": "3379e2b3-ffb2-4391-969b-c9dc51bfbe25", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1112843240-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerDiskConfigTestJSON-1112843240-network", "vif_mac": "fa:16:3e:19:a9:8c"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "acb30fbc0e3749e390d7f867060b5a2a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8286e975-4b", "ovs_interfaceid": "8286e975-4b57-4b5a-9018-82187a854a2d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'device_type': 'disk', 'encryption_options': None, 'size': 0, 'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'guest_format': None, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'image_id': 'a32b3e07-16d8-46fd-9a7b-c242c432fcf9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 20 14:49:55 compute-1 nova_compute[225855]: 2026-01-20 14:49:55.250 225859 WARNING nova.virt.libvirt.driver [None req-daed7e12-e146-4686-9444-14f2e75c6ad9 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 14:49:55 compute-1 nova_compute[225855]: 2026-01-20 14:49:55.267 225859 DEBUG nova.virt.libvirt.host [None req-daed7e12-e146-4686-9444-14f2e75c6ad9 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 20 14:49:55 compute-1 nova_compute[225855]: 2026-01-20 14:49:55.269 225859 DEBUG nova.virt.libvirt.host [None req-daed7e12-e146-4686-9444-14f2e75c6ad9 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 20 14:49:55 compute-1 nova_compute[225855]: 2026-01-20 14:49:55.276 225859 DEBUG nova.virt.libvirt.host [None req-daed7e12-e146-4686-9444-14f2e75c6ad9 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 20 14:49:55 compute-1 nova_compute[225855]: 2026-01-20 14:49:55.277 225859 DEBUG nova.virt.libvirt.host [None req-daed7e12-e146-4686-9444-14f2e75c6ad9 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 20 14:49:55 compute-1 nova_compute[225855]: 2026-01-20 14:49:55.279 225859 DEBUG nova.virt.libvirt.driver [None req-daed7e12-e146-4686-9444-14f2e75c6ad9 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 20 14:49:55 compute-1 nova_compute[225855]: 2026-01-20 14:49:55.280 225859 DEBUG nova.virt.hardware [None req-daed7e12-e146-4686-9444-14f2e75c6ad9 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-20T14:21:55Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='30c26a27-d918-46d8-a512-4ef3b4ce5955',id=2,is_public=True,memory_mb=192,name='m1.micro',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 20 14:49:55 compute-1 nova_compute[225855]: 2026-01-20 14:49:55.281 225859 DEBUG nova.virt.hardware [None req-daed7e12-e146-4686-9444-14f2e75c6ad9 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 20 14:49:55 compute-1 nova_compute[225855]: 2026-01-20 14:49:55.281 225859 DEBUG nova.virt.hardware [None req-daed7e12-e146-4686-9444-14f2e75c6ad9 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 20 14:49:55 compute-1 nova_compute[225855]: 2026-01-20 14:49:55.282 225859 DEBUG nova.virt.hardware [None req-daed7e12-e146-4686-9444-14f2e75c6ad9 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 20 14:49:55 compute-1 nova_compute[225855]: 2026-01-20 14:49:55.282 225859 DEBUG nova.virt.hardware [None req-daed7e12-e146-4686-9444-14f2e75c6ad9 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 20 14:49:55 compute-1 nova_compute[225855]: 2026-01-20 14:49:55.283 225859 DEBUG nova.virt.hardware [None req-daed7e12-e146-4686-9444-14f2e75c6ad9 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 20 14:49:55 compute-1 nova_compute[225855]: 2026-01-20 14:49:55.283 225859 DEBUG nova.virt.hardware [None req-daed7e12-e146-4686-9444-14f2e75c6ad9 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 20 14:49:55 compute-1 nova_compute[225855]: 2026-01-20 14:49:55.284 225859 DEBUG nova.virt.hardware [None req-daed7e12-e146-4686-9444-14f2e75c6ad9 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 20 14:49:55 compute-1 nova_compute[225855]: 2026-01-20 14:49:55.284 225859 DEBUG nova.virt.hardware [None req-daed7e12-e146-4686-9444-14f2e75c6ad9 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 20 14:49:55 compute-1 nova_compute[225855]: 2026-01-20 14:49:55.284 225859 DEBUG nova.virt.hardware [None req-daed7e12-e146-4686-9444-14f2e75c6ad9 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 20 14:49:55 compute-1 nova_compute[225855]: 2026-01-20 14:49:55.285 225859 DEBUG nova.virt.hardware [None req-daed7e12-e146-4686-9444-14f2e75c6ad9 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 20 14:49:55 compute-1 nova_compute[225855]: 2026-01-20 14:49:55.286 225859 DEBUG nova.objects.instance [None req-daed7e12-e146-4686-9444-14f2e75c6ad9 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Lazy-loading 'vcpu_model' on Instance uuid 52477e64-7989-4aa2-88e1-31600bfae2ef obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:49:55 compute-1 nova_compute[225855]: 2026-01-20 14:49:55.333 225859 DEBUG oslo_concurrency.processutils [None req-daed7e12-e146-4686-9444-14f2e75c6ad9 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:49:55 compute-1 ceph-mon[81775]: pgmap v1858: 321 pgs: 321 active+clean; 293 MiB data, 928 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 4.3 MiB/s wr, 203 op/s
Jan 20 14:49:55 compute-1 ceph-mon[81775]: osdmap e245: 3 total, 3 up, 3 in
Jan 20 14:49:55 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:49:55 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:49:55 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:49:55.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:49:55 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:49:55 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/87628794' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:49:55 compute-1 nova_compute[225855]: 2026-01-20 14:49:55.789 225859 DEBUG oslo_concurrency.processutils [None req-daed7e12-e146-4686-9444-14f2e75c6ad9 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:49:55 compute-1 nova_compute[225855]: 2026-01-20 14:49:55.831 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:49:55 compute-1 nova_compute[225855]: 2026-01-20 14:49:55.841 225859 DEBUG oslo_concurrency.processutils [None req-daed7e12-e146-4686-9444-14f2e75c6ad9 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:49:56 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:49:56 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/848721760' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:49:56 compute-1 nova_compute[225855]: 2026-01-20 14:49:56.269 225859 DEBUG oslo_concurrency.processutils [None req-daed7e12-e146-4686-9444-14f2e75c6ad9 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:49:56 compute-1 nova_compute[225855]: 2026-01-20 14:49:56.271 225859 DEBUG nova.virt.libvirt.vif [None req-daed7e12-e146-4686-9444-14f2e75c6ad9 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T14:49:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1663251192',display_name='tempest-ServerDiskConfigTestJSON-server-1663251192',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1663251192',id=106,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-20T14:49:28Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='acb30fbc0e3749e390d7f867060b5a2a',ramdisk_id='',reservation_id='r-nykd0j3m',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',
image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServerDiskConfigTestJSON-1806346246',owner_user_name='tempest-ServerDiskConfigTestJSON-1806346246-project-member'},tags=<?>,task_state='resize_finish',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T14:49:48Z,user_data=None,user_id='a1bd93d04cc4468abe1d5c61f5144191',uuid=52477e64-7989-4aa2-88e1-31600bfae2ef,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8286e975-4b57-4b5a-9018-82187a854a2d", "address": "fa:16:3e:19:a9:8c", "network": {"id": "3379e2b3-ffb2-4391-969b-c9dc51bfbe25", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1112843240-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerDiskConfigTestJSON-1112843240-network", "vif_mac": "fa:16:3e:19:a9:8c"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "acb30fbc0e3749e390d7f867060b5a2a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8286e975-4b", "ovs_interfaceid": "8286e975-4b57-4b5a-9018-82187a854a2d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 20 14:49:56 compute-1 nova_compute[225855]: 2026-01-20 14:49:56.272 225859 DEBUG nova.network.os_vif_util [None req-daed7e12-e146-4686-9444-14f2e75c6ad9 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Converting VIF {"id": "8286e975-4b57-4b5a-9018-82187a854a2d", "address": "fa:16:3e:19:a9:8c", "network": {"id": "3379e2b3-ffb2-4391-969b-c9dc51bfbe25", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1112843240-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerDiskConfigTestJSON-1112843240-network", "vif_mac": "fa:16:3e:19:a9:8c"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "acb30fbc0e3749e390d7f867060b5a2a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8286e975-4b", "ovs_interfaceid": "8286e975-4b57-4b5a-9018-82187a854a2d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:49:56 compute-1 nova_compute[225855]: 2026-01-20 14:49:56.273 225859 DEBUG nova.network.os_vif_util [None req-daed7e12-e146-4686-9444-14f2e75c6ad9 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:19:a9:8c,bridge_name='br-int',has_traffic_filtering=True,id=8286e975-4b57-4b5a-9018-82187a854a2d,network=Network(3379e2b3-ffb2-4391-969b-c9dc51bfbe25),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8286e975-4b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:49:56 compute-1 nova_compute[225855]: 2026-01-20 14:49:56.277 225859 DEBUG nova.virt.libvirt.driver [None req-daed7e12-e146-4686-9444-14f2e75c6ad9 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: 52477e64-7989-4aa2-88e1-31600bfae2ef] End _get_guest_xml xml=<domain type="kvm">
Jan 20 14:49:56 compute-1 nova_compute[225855]:   <uuid>52477e64-7989-4aa2-88e1-31600bfae2ef</uuid>
Jan 20 14:49:56 compute-1 nova_compute[225855]:   <name>instance-0000006a</name>
Jan 20 14:49:56 compute-1 nova_compute[225855]:   <memory>196608</memory>
Jan 20 14:49:56 compute-1 nova_compute[225855]:   <vcpu>1</vcpu>
Jan 20 14:49:56 compute-1 nova_compute[225855]:   <metadata>
Jan 20 14:49:56 compute-1 nova_compute[225855]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 14:49:56 compute-1 nova_compute[225855]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 14:49:56 compute-1 nova_compute[225855]:       <nova:name>tempest-ServerDiskConfigTestJSON-server-1663251192</nova:name>
Jan 20 14:49:56 compute-1 nova_compute[225855]:       <nova:creationTime>2026-01-20 14:49:55</nova:creationTime>
Jan 20 14:49:56 compute-1 nova_compute[225855]:       <nova:flavor name="m1.micro">
Jan 20 14:49:56 compute-1 nova_compute[225855]:         <nova:memory>192</nova:memory>
Jan 20 14:49:56 compute-1 nova_compute[225855]:         <nova:disk>1</nova:disk>
Jan 20 14:49:56 compute-1 nova_compute[225855]:         <nova:swap>0</nova:swap>
Jan 20 14:49:56 compute-1 nova_compute[225855]:         <nova:ephemeral>0</nova:ephemeral>
Jan 20 14:49:56 compute-1 nova_compute[225855]:         <nova:vcpus>1</nova:vcpus>
Jan 20 14:49:56 compute-1 nova_compute[225855]:       </nova:flavor>
Jan 20 14:49:56 compute-1 nova_compute[225855]:       <nova:owner>
Jan 20 14:49:56 compute-1 nova_compute[225855]:         <nova:user uuid="a1bd93d04cc4468abe1d5c61f5144191">tempest-ServerDiskConfigTestJSON-1806346246-project-member</nova:user>
Jan 20 14:49:56 compute-1 nova_compute[225855]:         <nova:project uuid="acb30fbc0e3749e390d7f867060b5a2a">tempest-ServerDiskConfigTestJSON-1806346246</nova:project>
Jan 20 14:49:56 compute-1 nova_compute[225855]:       </nova:owner>
Jan 20 14:49:56 compute-1 nova_compute[225855]:       <nova:root type="image" uuid="a32b3e07-16d8-46fd-9a7b-c242c432fcf9"/>
Jan 20 14:49:56 compute-1 nova_compute[225855]:       <nova:ports>
Jan 20 14:49:56 compute-1 nova_compute[225855]:         <nova:port uuid="8286e975-4b57-4b5a-9018-82187a854a2d">
Jan 20 14:49:56 compute-1 nova_compute[225855]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Jan 20 14:49:56 compute-1 nova_compute[225855]:         </nova:port>
Jan 20 14:49:56 compute-1 nova_compute[225855]:       </nova:ports>
Jan 20 14:49:56 compute-1 nova_compute[225855]:     </nova:instance>
Jan 20 14:49:56 compute-1 nova_compute[225855]:   </metadata>
Jan 20 14:49:56 compute-1 nova_compute[225855]:   <sysinfo type="smbios">
Jan 20 14:49:56 compute-1 nova_compute[225855]:     <system>
Jan 20 14:49:56 compute-1 nova_compute[225855]:       <entry name="manufacturer">RDO</entry>
Jan 20 14:49:56 compute-1 nova_compute[225855]:       <entry name="product">OpenStack Compute</entry>
Jan 20 14:49:56 compute-1 nova_compute[225855]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 14:49:56 compute-1 nova_compute[225855]:       <entry name="serial">52477e64-7989-4aa2-88e1-31600bfae2ef</entry>
Jan 20 14:49:56 compute-1 nova_compute[225855]:       <entry name="uuid">52477e64-7989-4aa2-88e1-31600bfae2ef</entry>
Jan 20 14:49:56 compute-1 nova_compute[225855]:       <entry name="family">Virtual Machine</entry>
Jan 20 14:49:56 compute-1 nova_compute[225855]:     </system>
Jan 20 14:49:56 compute-1 nova_compute[225855]:   </sysinfo>
Jan 20 14:49:56 compute-1 nova_compute[225855]:   <os>
Jan 20 14:49:56 compute-1 nova_compute[225855]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 20 14:49:56 compute-1 nova_compute[225855]:     <boot dev="hd"/>
Jan 20 14:49:56 compute-1 nova_compute[225855]:     <smbios mode="sysinfo"/>
Jan 20 14:49:56 compute-1 nova_compute[225855]:   </os>
Jan 20 14:49:56 compute-1 nova_compute[225855]:   <features>
Jan 20 14:49:56 compute-1 nova_compute[225855]:     <acpi/>
Jan 20 14:49:56 compute-1 nova_compute[225855]:     <apic/>
Jan 20 14:49:56 compute-1 nova_compute[225855]:     <vmcoreinfo/>
Jan 20 14:49:56 compute-1 nova_compute[225855]:   </features>
Jan 20 14:49:56 compute-1 nova_compute[225855]:   <clock offset="utc">
Jan 20 14:49:56 compute-1 nova_compute[225855]:     <timer name="pit" tickpolicy="delay"/>
Jan 20 14:49:56 compute-1 nova_compute[225855]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 20 14:49:56 compute-1 nova_compute[225855]:     <timer name="hpet" present="no"/>
Jan 20 14:49:56 compute-1 nova_compute[225855]:   </clock>
Jan 20 14:49:56 compute-1 nova_compute[225855]:   <cpu mode="custom" match="exact">
Jan 20 14:49:56 compute-1 nova_compute[225855]:     <model>Nehalem</model>
Jan 20 14:49:56 compute-1 nova_compute[225855]:     <topology sockets="1" cores="1" threads="1"/>
Jan 20 14:49:56 compute-1 nova_compute[225855]:   </cpu>
Jan 20 14:49:56 compute-1 nova_compute[225855]:   <devices>
Jan 20 14:49:56 compute-1 nova_compute[225855]:     <disk type="network" device="disk">
Jan 20 14:49:56 compute-1 nova_compute[225855]:       <driver type="raw" cache="none"/>
Jan 20 14:49:56 compute-1 nova_compute[225855]:       <source protocol="rbd" name="vms/52477e64-7989-4aa2-88e1-31600bfae2ef_disk">
Jan 20 14:49:56 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:49:56 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:49:56 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:49:56 compute-1 nova_compute[225855]:       </source>
Jan 20 14:49:56 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 14:49:56 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:49:56 compute-1 nova_compute[225855]:       </auth>
Jan 20 14:49:56 compute-1 nova_compute[225855]:       <target dev="vda" bus="virtio"/>
Jan 20 14:49:56 compute-1 nova_compute[225855]:     </disk>
Jan 20 14:49:56 compute-1 nova_compute[225855]:     <disk type="network" device="cdrom">
Jan 20 14:49:56 compute-1 nova_compute[225855]:       <driver type="raw" cache="none"/>
Jan 20 14:49:56 compute-1 nova_compute[225855]:       <source protocol="rbd" name="vms/52477e64-7989-4aa2-88e1-31600bfae2ef_disk.config">
Jan 20 14:49:56 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:49:56 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:49:56 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:49:56 compute-1 nova_compute[225855]:       </source>
Jan 20 14:49:56 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 14:49:56 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:49:56 compute-1 nova_compute[225855]:       </auth>
Jan 20 14:49:56 compute-1 nova_compute[225855]:       <target dev="sda" bus="sata"/>
Jan 20 14:49:56 compute-1 nova_compute[225855]:     </disk>
Jan 20 14:49:56 compute-1 nova_compute[225855]:     <interface type="ethernet">
Jan 20 14:49:56 compute-1 nova_compute[225855]:       <mac address="fa:16:3e:19:a9:8c"/>
Jan 20 14:49:56 compute-1 nova_compute[225855]:       <model type="virtio"/>
Jan 20 14:49:56 compute-1 nova_compute[225855]:       <driver name="vhost" rx_queue_size="512"/>
Jan 20 14:49:56 compute-1 nova_compute[225855]:       <mtu size="1442"/>
Jan 20 14:49:56 compute-1 nova_compute[225855]:       <target dev="tap8286e975-4b"/>
Jan 20 14:49:56 compute-1 nova_compute[225855]:     </interface>
Jan 20 14:49:56 compute-1 nova_compute[225855]:     <serial type="pty">
Jan 20 14:49:56 compute-1 nova_compute[225855]:       <log file="/var/lib/nova/instances/52477e64-7989-4aa2-88e1-31600bfae2ef/console.log" append="off"/>
Jan 20 14:49:56 compute-1 nova_compute[225855]:     </serial>
Jan 20 14:49:56 compute-1 nova_compute[225855]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 20 14:49:56 compute-1 nova_compute[225855]:     <video>
Jan 20 14:49:56 compute-1 nova_compute[225855]:       <model type="virtio"/>
Jan 20 14:49:56 compute-1 nova_compute[225855]:     </video>
Jan 20 14:49:56 compute-1 nova_compute[225855]:     <input type="tablet" bus="usb"/>
Jan 20 14:49:56 compute-1 nova_compute[225855]:     <rng model="virtio">
Jan 20 14:49:56 compute-1 nova_compute[225855]:       <backend model="random">/dev/urandom</backend>
Jan 20 14:49:56 compute-1 nova_compute[225855]:     </rng>
Jan 20 14:49:56 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root"/>
Jan 20 14:49:56 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:49:56 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:49:56 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:49:56 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:49:56 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:49:56 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:49:56 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:49:56 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:49:56 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:49:56 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:49:56 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:49:56 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:49:56 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:49:56 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:49:56 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:49:56 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:49:56 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:49:56 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:49:56 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:49:56 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:49:56 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:49:56 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:49:56 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:49:56 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:49:56 compute-1 nova_compute[225855]:     <controller type="usb" index="0"/>
Jan 20 14:49:56 compute-1 nova_compute[225855]:     <memballoon model="virtio">
Jan 20 14:49:56 compute-1 nova_compute[225855]:       <stats period="10"/>
Jan 20 14:49:56 compute-1 nova_compute[225855]:     </memballoon>
Jan 20 14:49:56 compute-1 nova_compute[225855]:   </devices>
Jan 20 14:49:56 compute-1 nova_compute[225855]: </domain>
Jan 20 14:49:56 compute-1 nova_compute[225855]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 20 14:49:56 compute-1 nova_compute[225855]: 2026-01-20 14:49:56.279 225859 DEBUG nova.virt.libvirt.vif [None req-daed7e12-e146-4686-9444-14f2e75c6ad9 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T14:49:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1663251192',display_name='tempest-ServerDiskConfigTestJSON-server-1663251192',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1663251192',id=106,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-20T14:49:28Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='acb30fbc0e3749e390d7f867060b5a2a',ramdisk_id='',reservation_id='r-nykd0j3m',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',
image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServerDiskConfigTestJSON-1806346246',owner_user_name='tempest-ServerDiskConfigTestJSON-1806346246-project-member'},tags=<?>,task_state='resize_finish',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T14:49:48Z,user_data=None,user_id='a1bd93d04cc4468abe1d5c61f5144191',uuid=52477e64-7989-4aa2-88e1-31600bfae2ef,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8286e975-4b57-4b5a-9018-82187a854a2d", "address": "fa:16:3e:19:a9:8c", "network": {"id": "3379e2b3-ffb2-4391-969b-c9dc51bfbe25", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1112843240-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerDiskConfigTestJSON-1112843240-network", "vif_mac": "fa:16:3e:19:a9:8c"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "acb30fbc0e3749e390d7f867060b5a2a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8286e975-4b", "ovs_interfaceid": "8286e975-4b57-4b5a-9018-82187a854a2d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 20 14:49:56 compute-1 nova_compute[225855]: 2026-01-20 14:49:56.280 225859 DEBUG nova.network.os_vif_util [None req-daed7e12-e146-4686-9444-14f2e75c6ad9 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Converting VIF {"id": "8286e975-4b57-4b5a-9018-82187a854a2d", "address": "fa:16:3e:19:a9:8c", "network": {"id": "3379e2b3-ffb2-4391-969b-c9dc51bfbe25", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1112843240-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerDiskConfigTestJSON-1112843240-network", "vif_mac": "fa:16:3e:19:a9:8c"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "acb30fbc0e3749e390d7f867060b5a2a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8286e975-4b", "ovs_interfaceid": "8286e975-4b57-4b5a-9018-82187a854a2d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:49:56 compute-1 nova_compute[225855]: 2026-01-20 14:49:56.281 225859 DEBUG nova.network.os_vif_util [None req-daed7e12-e146-4686-9444-14f2e75c6ad9 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:19:a9:8c,bridge_name='br-int',has_traffic_filtering=True,id=8286e975-4b57-4b5a-9018-82187a854a2d,network=Network(3379e2b3-ffb2-4391-969b-c9dc51bfbe25),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8286e975-4b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:49:56 compute-1 nova_compute[225855]: 2026-01-20 14:49:56.281 225859 DEBUG os_vif [None req-daed7e12-e146-4686-9444-14f2e75c6ad9 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:19:a9:8c,bridge_name='br-int',has_traffic_filtering=True,id=8286e975-4b57-4b5a-9018-82187a854a2d,network=Network(3379e2b3-ffb2-4391-969b-c9dc51bfbe25),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8286e975-4b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 20 14:49:56 compute-1 nova_compute[225855]: 2026-01-20 14:49:56.282 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:49:56 compute-1 nova_compute[225855]: 2026-01-20 14:49:56.283 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:49:56 compute-1 nova_compute[225855]: 2026-01-20 14:49:56.284 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:49:56 compute-1 nova_compute[225855]: 2026-01-20 14:49:56.287 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:49:56 compute-1 nova_compute[225855]: 2026-01-20 14:49:56.288 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8286e975-4b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:49:56 compute-1 nova_compute[225855]: 2026-01-20 14:49:56.289 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap8286e975-4b, col_values=(('external_ids', {'iface-id': '8286e975-4b57-4b5a-9018-82187a854a2d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:19:a9:8c', 'vm-uuid': '52477e64-7989-4aa2-88e1-31600bfae2ef'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:49:56 compute-1 nova_compute[225855]: 2026-01-20 14:49:56.292 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:49:56 compute-1 NetworkManager[49104]: <info>  [1768920596.2931] manager: (tap8286e975-4b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/165)
Jan 20 14:49:56 compute-1 nova_compute[225855]: 2026-01-20 14:49:56.295 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 14:49:56 compute-1 nova_compute[225855]: 2026-01-20 14:49:56.300 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:49:56 compute-1 nova_compute[225855]: 2026-01-20 14:49:56.301 225859 INFO os_vif [None req-daed7e12-e146-4686-9444-14f2e75c6ad9 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:19:a9:8c,bridge_name='br-int',has_traffic_filtering=True,id=8286e975-4b57-4b5a-9018-82187a854a2d,network=Network(3379e2b3-ffb2-4391-969b-c9dc51bfbe25),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8286e975-4b')
Jan 20 14:49:56 compute-1 nova_compute[225855]: 2026-01-20 14:49:56.384 225859 DEBUG nova.virt.libvirt.driver [None req-daed7e12-e146-4686-9444-14f2e75c6ad9 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:49:56 compute-1 nova_compute[225855]: 2026-01-20 14:49:56.385 225859 DEBUG nova.virt.libvirt.driver [None req-daed7e12-e146-4686-9444-14f2e75c6ad9 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:49:56 compute-1 nova_compute[225855]: 2026-01-20 14:49:56.385 225859 DEBUG nova.virt.libvirt.driver [None req-daed7e12-e146-4686-9444-14f2e75c6ad9 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] No VIF found with MAC fa:16:3e:19:a9:8c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 20 14:49:56 compute-1 nova_compute[225855]: 2026-01-20 14:49:56.386 225859 INFO nova.virt.libvirt.driver [None req-daed7e12-e146-4686-9444-14f2e75c6ad9 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: 52477e64-7989-4aa2-88e1-31600bfae2ef] Using config drive
Jan 20 14:49:56 compute-1 NetworkManager[49104]: <info>  [1768920596.4839] manager: (tap8286e975-4b): new Tun device (/org/freedesktop/NetworkManager/Devices/166)
Jan 20 14:49:56 compute-1 kernel: tap8286e975-4b: entered promiscuous mode
Jan 20 14:49:56 compute-1 ovn_controller[130490]: 2026-01-20T14:49:56Z|00391|binding|INFO|Claiming lport 8286e975-4b57-4b5a-9018-82187a854a2d for this chassis.
Jan 20 14:49:56 compute-1 ovn_controller[130490]: 2026-01-20T14:49:56Z|00392|binding|INFO|8286e975-4b57-4b5a-9018-82187a854a2d: Claiming fa:16:3e:19:a9:8c 10.100.0.6
Jan 20 14:49:56 compute-1 nova_compute[225855]: 2026-01-20 14:49:56.487 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:49:56 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:49:56.499 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:19:a9:8c 10.100.0.6'], port_security=['fa:16:3e:19:a9:8c 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '52477e64-7989-4aa2-88e1-31600bfae2ef', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3379e2b3-ffb2-4391-969b-c9dc51bfbe25', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'acb30fbc0e3749e390d7f867060b5a2a', 'neutron:revision_number': '6', 'neutron:security_group_ids': '19fab802-7db8-4c89-8f8e-8dcfc14d4627', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a0e287ba-f88b-46f5-bb7f-3cc2a74be88e, chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], logical_port=8286e975-4b57-4b5a-9018-82187a854a2d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:49:56 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:49:56.501 140354 INFO neutron.agent.ovn.metadata.agent [-] Port 8286e975-4b57-4b5a-9018-82187a854a2d in datapath 3379e2b3-ffb2-4391-969b-c9dc51bfbe25 bound to our chassis
Jan 20 14:49:56 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:49:56.504 140354 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3379e2b3-ffb2-4391-969b-c9dc51bfbe25
Jan 20 14:49:56 compute-1 systemd-udevd[266216]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 14:49:56 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:49:56.514 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[cc3d84cb-b751-441d-a8c7-6c69dc86d3a5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:49:56 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:49:56.515 140354 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap3379e2b3-f1 in ovnmeta-3379e2b3-ffb2-4391-969b-c9dc51bfbe25 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 20 14:49:56 compute-1 ovn_controller[130490]: 2026-01-20T14:49:56Z|00393|binding|INFO|Setting lport 8286e975-4b57-4b5a-9018-82187a854a2d up in Southbound
Jan 20 14:49:56 compute-1 ovn_controller[130490]: 2026-01-20T14:49:56Z|00394|binding|INFO|Setting lport 8286e975-4b57-4b5a-9018-82187a854a2d ovn-installed in OVS
Jan 20 14:49:56 compute-1 nova_compute[225855]: 2026-01-20 14:49:56.517 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:49:56 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:49:56.517 229707 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap3379e2b3-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 20 14:49:56 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:49:56.517 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[ef6cdb7f-2a51-4cd2-b2ec-106770226966]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:49:56 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:49:56.518 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[ba72bb1a-decb-43fe-b701-ec107305bf96]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:49:56 compute-1 NetworkManager[49104]: <info>  [1768920596.5280] device (tap8286e975-4b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 20 14:49:56 compute-1 nova_compute[225855]: 2026-01-20 14:49:56.527 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:49:56 compute-1 NetworkManager[49104]: <info>  [1768920596.5294] device (tap8286e975-4b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 20 14:49:56 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:49:56.537 140466 DEBUG oslo.privsep.daemon [-] privsep: reply[2f5e28c6-2519-4752-9e42-539efdd9050c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:49:56 compute-1 systemd-machined[194361]: New machine qemu-46-instance-0000006a.
Jan 20 14:49:56 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:49:56.550 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[1d94e960-234f-4d18-8005-d284317ff21a]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:49:56 compute-1 systemd[1]: Started Virtual Machine qemu-46-instance-0000006a.
Jan 20 14:49:56 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:49:56.586 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[070b952a-3170-4917-821e-86bb1ea9f85e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:49:56 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:49:56.591 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[af12d2a2-0dee-4a39-af44-9baa65fbd6f3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:49:56 compute-1 NetworkManager[49104]: <info>  [1768920596.5925] manager: (tap3379e2b3-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/167)
Jan 20 14:49:56 compute-1 systemd-udevd[266220]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 14:49:56 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:49:56.628 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[cc914986-3503-4c88-be99-a3528a0a4c62]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:49:56 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:49:56.632 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[02206a43-5a15-4cf5-9b02-df728fa36a05]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:49:56 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/87628794' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:49:56 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/848721760' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:49:56 compute-1 NetworkManager[49104]: <info>  [1768920596.6587] device (tap3379e2b3-f0): carrier: link connected
Jan 20 14:49:56 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:49:56.668 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[499a62eb-d370-48bc-aea6-f17636994e59]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:49:56 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:49:56.688 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[9fac98a5-5892-4c8f-a9cf-51079fde6f49]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3379e2b3-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f1:86:fe'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 107], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 563558, 'reachable_time': 23408, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 266250, 'error': None, 'target': 'ovnmeta-3379e2b3-ffb2-4391-969b-c9dc51bfbe25', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:49:56 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:49:56.704 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[0a7e190b-5a06-4f33-9df6-706c4513cbf9]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fef1:86fe'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 563558, 'tstamp': 563558}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 266251, 'error': None, 'target': 'ovnmeta-3379e2b3-ffb2-4391-969b-c9dc51bfbe25', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:49:56 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:49:56.720 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[f01ee486-c155-49ca-bf10-b57c2f3ef069]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3379e2b3-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f1:86:fe'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 107], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 563558, 'reachable_time': 23408, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 266252, 'error': None, 'target': 'ovnmeta-3379e2b3-ffb2-4391-969b-c9dc51bfbe25', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:49:56 compute-1 nova_compute[225855]: 2026-01-20 14:49:56.734 225859 DEBUG nova.network.neutron [req-0bffe202-7fe9-492b-bcf2-9e81adca93b4 req-9be68d86-36e4-4aba-afff-4deea996d0d0 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 52477e64-7989-4aa2-88e1-31600bfae2ef] Updated VIF entry in instance network info cache for port 8286e975-4b57-4b5a-9018-82187a854a2d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 14:49:56 compute-1 nova_compute[225855]: 2026-01-20 14:49:56.735 225859 DEBUG nova.network.neutron [req-0bffe202-7fe9-492b-bcf2-9e81adca93b4 req-9be68d86-36e4-4aba-afff-4deea996d0d0 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 52477e64-7989-4aa2-88e1-31600bfae2ef] Updating instance_info_cache with network_info: [{"id": "8286e975-4b57-4b5a-9018-82187a854a2d", "address": "fa:16:3e:19:a9:8c", "network": {"id": "3379e2b3-ffb2-4391-969b-c9dc51bfbe25", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1112843240-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "acb30fbc0e3749e390d7f867060b5a2a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8286e975-4b", "ovs_interfaceid": "8286e975-4b57-4b5a-9018-82187a854a2d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:49:56 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:49:56.757 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[ec4839a9-9e07-4198-9dab-38de1956bf95]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:49:56 compute-1 nova_compute[225855]: 2026-01-20 14:49:56.765 225859 DEBUG oslo_concurrency.lockutils [req-0bffe202-7fe9-492b-bcf2-9e81adca93b4 req-9be68d86-36e4-4aba-afff-4deea996d0d0 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-52477e64-7989-4aa2-88e1-31600bfae2ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:49:56 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:49:56 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:49:56 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:49:56.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:49:56 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:49:56.851 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[0334e3b4-dcdd-4c11-8818-c07d7b194fd5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:49:56 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:49:56.852 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3379e2b3-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:49:56 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:49:56.853 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:49:56 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:49:56.853 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3379e2b3-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:49:56 compute-1 nova_compute[225855]: 2026-01-20 14:49:56.897 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:49:56 compute-1 NetworkManager[49104]: <info>  [1768920596.8988] manager: (tap3379e2b3-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/168)
Jan 20 14:49:56 compute-1 kernel: tap3379e2b3-f0: entered promiscuous mode
Jan 20 14:49:56 compute-1 nova_compute[225855]: 2026-01-20 14:49:56.901 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:49:56 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:49:56.904 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3379e2b3-f0, col_values=(('external_ids', {'iface-id': 'b32ddf23-a8dd-4e6d-a410-ccb24b214d35'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:49:56 compute-1 nova_compute[225855]: 2026-01-20 14:49:56.905 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:49:56 compute-1 nova_compute[225855]: 2026-01-20 14:49:56.906 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:49:56 compute-1 ovn_controller[130490]: 2026-01-20T14:49:56Z|00395|binding|INFO|Releasing lport b32ddf23-a8dd-4e6d-a410-ccb24b214d35 from this chassis (sb_readonly=0)
Jan 20 14:49:56 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:49:56.906 140354 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/3379e2b3-ffb2-4391-969b-c9dc51bfbe25.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/3379e2b3-ffb2-4391-969b-c9dc51bfbe25.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 20 14:49:56 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:49:56.909 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[2c74f1c9-1190-4f18-ac2f-a804b83e4b91]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:49:56 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:49:56.909 140354 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 20 14:49:56 compute-1 ovn_metadata_agent[140349]: global
Jan 20 14:49:56 compute-1 ovn_metadata_agent[140349]:     log         /dev/log local0 debug
Jan 20 14:49:56 compute-1 ovn_metadata_agent[140349]:     log-tag     haproxy-metadata-proxy-3379e2b3-ffb2-4391-969b-c9dc51bfbe25
Jan 20 14:49:56 compute-1 ovn_metadata_agent[140349]:     user        root
Jan 20 14:49:56 compute-1 ovn_metadata_agent[140349]:     group       root
Jan 20 14:49:56 compute-1 ovn_metadata_agent[140349]:     maxconn     1024
Jan 20 14:49:56 compute-1 ovn_metadata_agent[140349]:     pidfile     /var/lib/neutron/external/pids/3379e2b3-ffb2-4391-969b-c9dc51bfbe25.pid.haproxy
Jan 20 14:49:56 compute-1 ovn_metadata_agent[140349]:     daemon
Jan 20 14:49:56 compute-1 ovn_metadata_agent[140349]: 
Jan 20 14:49:56 compute-1 ovn_metadata_agent[140349]: defaults
Jan 20 14:49:56 compute-1 ovn_metadata_agent[140349]:     log global
Jan 20 14:49:56 compute-1 ovn_metadata_agent[140349]:     mode http
Jan 20 14:49:56 compute-1 ovn_metadata_agent[140349]:     option httplog
Jan 20 14:49:56 compute-1 ovn_metadata_agent[140349]:     option dontlognull
Jan 20 14:49:56 compute-1 ovn_metadata_agent[140349]:     option http-server-close
Jan 20 14:49:56 compute-1 ovn_metadata_agent[140349]:     option forwardfor
Jan 20 14:49:56 compute-1 ovn_metadata_agent[140349]:     retries                 3
Jan 20 14:49:56 compute-1 ovn_metadata_agent[140349]:     timeout http-request    30s
Jan 20 14:49:56 compute-1 ovn_metadata_agent[140349]:     timeout connect         30s
Jan 20 14:49:56 compute-1 ovn_metadata_agent[140349]:     timeout client          32s
Jan 20 14:49:56 compute-1 ovn_metadata_agent[140349]:     timeout server          32s
Jan 20 14:49:56 compute-1 ovn_metadata_agent[140349]:     timeout http-keep-alive 30s
Jan 20 14:49:56 compute-1 ovn_metadata_agent[140349]: 
Jan 20 14:49:56 compute-1 ovn_metadata_agent[140349]: 
Jan 20 14:49:56 compute-1 ovn_metadata_agent[140349]: listen listener
Jan 20 14:49:56 compute-1 ovn_metadata_agent[140349]:     bind 169.254.169.254:80
Jan 20 14:49:56 compute-1 ovn_metadata_agent[140349]:     server metadata /var/lib/neutron/metadata_proxy
Jan 20 14:49:56 compute-1 ovn_metadata_agent[140349]:     http-request add-header X-OVN-Network-ID 3379e2b3-ffb2-4391-969b-c9dc51bfbe25
Jan 20 14:49:56 compute-1 ovn_metadata_agent[140349]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 20 14:49:56 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:49:56.910 140354 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-3379e2b3-ffb2-4391-969b-c9dc51bfbe25', 'env', 'PROCESS_TAG=haproxy-3379e2b3-ffb2-4391-969b-c9dc51bfbe25', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/3379e2b3-ffb2-4391-969b-c9dc51bfbe25.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 20 14:49:56 compute-1 nova_compute[225855]: 2026-01-20 14:49:56.919 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:49:57 compute-1 nova_compute[225855]: 2026-01-20 14:49:57.017 225859 DEBUG nova.compute.manager [None req-daed7e12-e146-4686-9444-14f2e75c6ad9 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: 52477e64-7989-4aa2-88e1-31600bfae2ef] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 20 14:49:57 compute-1 nova_compute[225855]: 2026-01-20 14:49:57.018 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768920597.0175753, 52477e64-7989-4aa2-88e1-31600bfae2ef => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:49:57 compute-1 nova_compute[225855]: 2026-01-20 14:49:57.018 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 52477e64-7989-4aa2-88e1-31600bfae2ef] VM Resumed (Lifecycle Event)
Jan 20 14:49:57 compute-1 nova_compute[225855]: 2026-01-20 14:49:57.026 225859 INFO nova.virt.libvirt.driver [-] [instance: 52477e64-7989-4aa2-88e1-31600bfae2ef] Instance running successfully.
Jan 20 14:49:57 compute-1 virtqemud[225396]: argument unsupported: QEMU guest agent is not configured
Jan 20 14:49:57 compute-1 nova_compute[225855]: 2026-01-20 14:49:57.028 225859 DEBUG nova.virt.libvirt.guest [None req-daed7e12-e146-4686-9444-14f2e75c6ad9 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: 52477e64-7989-4aa2-88e1-31600bfae2ef] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200
Jan 20 14:49:57 compute-1 nova_compute[225855]: 2026-01-20 14:49:57.029 225859 DEBUG nova.virt.libvirt.driver [None req-daed7e12-e146-4686-9444-14f2e75c6ad9 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: 52477e64-7989-4aa2-88e1-31600bfae2ef] finish_migration finished successfully. finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11793
Jan 20 14:49:57 compute-1 nova_compute[225855]: 2026-01-20 14:49:57.053 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 52477e64-7989-4aa2-88e1-31600bfae2ef] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:49:57 compute-1 nova_compute[225855]: 2026-01-20 14:49:57.056 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 52477e64-7989-4aa2-88e1-31600bfae2ef] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:49:57 compute-1 nova_compute[225855]: 2026-01-20 14:49:57.122 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 52477e64-7989-4aa2-88e1-31600bfae2ef] During sync_power_state the instance has a pending task (resize_finish). Skip.
Jan 20 14:49:57 compute-1 nova_compute[225855]: 2026-01-20 14:49:57.123 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768920597.0176432, 52477e64-7989-4aa2-88e1-31600bfae2ef => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:49:57 compute-1 nova_compute[225855]: 2026-01-20 14:49:57.123 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 52477e64-7989-4aa2-88e1-31600bfae2ef] VM Started (Lifecycle Event)
Jan 20 14:49:57 compute-1 nova_compute[225855]: 2026-01-20 14:49:57.161 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 52477e64-7989-4aa2-88e1-31600bfae2ef] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:49:57 compute-1 nova_compute[225855]: 2026-01-20 14:49:57.164 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 52477e64-7989-4aa2-88e1-31600bfae2ef] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:49:57 compute-1 nova_compute[225855]: 2026-01-20 14:49:57.246 225859 DEBUG nova.network.neutron [None req-c7d66eda-2386-42cf-82ee-49bf287fa76d 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] [instance: fdeb13eb-edb4-4bff-aeef-2671ba9d4618] Updating instance_info_cache with network_info: [{"id": "6855cb4f-4178-4447-af36-126ade033206", "address": "fa:16:3e:4f:3f:20", "network": {"id": "762e1859-4db4-4d9e-b66f-d50316f80df4", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1917526237-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.180", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c5f03d46c0c4162a3b2f1530850bb6c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6855cb4f-41", "ovs_interfaceid": "6855cb4f-4178-4447-af36-126ade033206", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:49:57 compute-1 nova_compute[225855]: 2026-01-20 14:49:57.287 225859 DEBUG oslo_concurrency.lockutils [None req-c7d66eda-2386-42cf-82ee-49bf287fa76d 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Releasing lock "refresh_cache-fdeb13eb-edb4-4bff-aeef-2671ba9d4618" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:49:57 compute-1 podman[266326]: 2026-01-20 14:49:57.306022529 +0000 UTC m=+0.093779898 container create 7e472ad75c7c5324371d8342e82f1f2120ae128110e20722e399ec6b1ff46d49 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3379e2b3-ffb2-4391-969b-c9dc51bfbe25, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 20 14:49:57 compute-1 podman[266326]: 2026-01-20 14:49:57.233636616 +0000 UTC m=+0.021394005 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 20 14:49:57 compute-1 nova_compute[225855]: 2026-01-20 14:49:57.376 225859 DEBUG nova.virt.libvirt.driver [None req-c7d66eda-2386-42cf-82ee-49bf287fa76d 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] [instance: fdeb13eb-edb4-4bff-aeef-2671ba9d4618] Starting migrate_disk_and_power_off migrate_disk_and_power_off /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11511
Jan 20 14:49:57 compute-1 nova_compute[225855]: 2026-01-20 14:49:57.379 225859 DEBUG nova.virt.libvirt.driver [None req-c7d66eda-2386-42cf-82ee-49bf287fa76d 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] [instance: fdeb13eb-edb4-4bff-aeef-2671ba9d4618] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Jan 20 14:49:57 compute-1 systemd[1]: Started libpod-conmon-7e472ad75c7c5324371d8342e82f1f2120ae128110e20722e399ec6b1ff46d49.scope.
Jan 20 14:49:57 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:49:57 compute-1 systemd[1]: Started libcrun container.
Jan 20 14:49:57 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48e67e17409de7385c0e5c04d479fefc117c5bbc0e8751e2852e2bf32fe6c3ee/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 20 14:49:57 compute-1 podman[266326]: 2026-01-20 14:49:57.505317416 +0000 UTC m=+0.293074795 container init 7e472ad75c7c5324371d8342e82f1f2120ae128110e20722e399ec6b1ff46d49 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3379e2b3-ffb2-4391-969b-c9dc51bfbe25, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 20 14:49:57 compute-1 podman[266326]: 2026-01-20 14:49:57.510884223 +0000 UTC m=+0.298641632 container start 7e472ad75c7c5324371d8342e82f1f2120ae128110e20722e399ec6b1ff46d49 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3379e2b3-ffb2-4391-969b-c9dc51bfbe25, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Jan 20 14:49:57 compute-1 neutron-haproxy-ovnmeta-3379e2b3-ffb2-4391-969b-c9dc51bfbe25[266341]: [NOTICE]   (266345) : New worker (266347) forked
Jan 20 14:49:57 compute-1 neutron-haproxy-ovnmeta-3379e2b3-ffb2-4391-969b-c9dc51bfbe25[266341]: [NOTICE]   (266345) : Loading success.
Jan 20 14:49:57 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:49:57 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:49:57 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:49:57.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:49:57 compute-1 ceph-mon[81775]: pgmap v1860: 321 pgs: 321 active+clean; 255 MiB data, 910 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 120 KiB/s wr, 124 op/s
Jan 20 14:49:58 compute-1 nova_compute[225855]: 2026-01-20 14:49:58.086 225859 DEBUG nova.compute.manager [req-baf2153e-8186-4fec-aa51-62f15ccad980 req-55bc6dfb-46b7-4520-8731-ddad6e0e4c46 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 52477e64-7989-4aa2-88e1-31600bfae2ef] Received event network-vif-plugged-8286e975-4b57-4b5a-9018-82187a854a2d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:49:58 compute-1 nova_compute[225855]: 2026-01-20 14:49:58.086 225859 DEBUG oslo_concurrency.lockutils [req-baf2153e-8186-4fec-aa51-62f15ccad980 req-55bc6dfb-46b7-4520-8731-ddad6e0e4c46 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "52477e64-7989-4aa2-88e1-31600bfae2ef-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:49:58 compute-1 nova_compute[225855]: 2026-01-20 14:49:58.086 225859 DEBUG oslo_concurrency.lockutils [req-baf2153e-8186-4fec-aa51-62f15ccad980 req-55bc6dfb-46b7-4520-8731-ddad6e0e4c46 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "52477e64-7989-4aa2-88e1-31600bfae2ef-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:49:58 compute-1 nova_compute[225855]: 2026-01-20 14:49:58.086 225859 DEBUG oslo_concurrency.lockutils [req-baf2153e-8186-4fec-aa51-62f15ccad980 req-55bc6dfb-46b7-4520-8731-ddad6e0e4c46 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "52477e64-7989-4aa2-88e1-31600bfae2ef-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:49:58 compute-1 nova_compute[225855]: 2026-01-20 14:49:58.087 225859 DEBUG nova.compute.manager [req-baf2153e-8186-4fec-aa51-62f15ccad980 req-55bc6dfb-46b7-4520-8731-ddad6e0e4c46 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 52477e64-7989-4aa2-88e1-31600bfae2ef] No waiting events found dispatching network-vif-plugged-8286e975-4b57-4b5a-9018-82187a854a2d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:49:58 compute-1 nova_compute[225855]: 2026-01-20 14:49:58.087 225859 WARNING nova.compute.manager [req-baf2153e-8186-4fec-aa51-62f15ccad980 req-55bc6dfb-46b7-4520-8731-ddad6e0e4c46 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 52477e64-7989-4aa2-88e1-31600bfae2ef] Received unexpected event network-vif-plugged-8286e975-4b57-4b5a-9018-82187a854a2d for instance with vm_state resized and task_state None.
Jan 20 14:49:58 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:49:58 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:49:58 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:49:58.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:49:59 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:49:59 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:49:59 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:49:59.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:49:59 compute-1 ceph-mon[81775]: pgmap v1861: 321 pgs: 321 active+clean; 246 MiB data, 906 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 21 KiB/s wr, 121 op/s
Jan 20 14:49:59 compute-1 kernel: tap6855cb4f-41 (unregistering): left promiscuous mode
Jan 20 14:49:59 compute-1 NetworkManager[49104]: <info>  [1768920599.9794] device (tap6855cb4f-41): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 20 14:49:59 compute-1 nova_compute[225855]: 2026-01-20 14:49:59.983 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:49:59 compute-1 ovn_controller[130490]: 2026-01-20T14:49:59Z|00396|binding|INFO|Releasing lport 6855cb4f-4178-4447-af36-126ade033206 from this chassis (sb_readonly=0)
Jan 20 14:49:59 compute-1 ovn_controller[130490]: 2026-01-20T14:49:59Z|00397|binding|INFO|Setting lport 6855cb4f-4178-4447-af36-126ade033206 down in Southbound
Jan 20 14:49:59 compute-1 ovn_controller[130490]: 2026-01-20T14:49:59Z|00398|binding|INFO|Removing iface tap6855cb4f-41 ovn-installed in OVS
Jan 20 14:49:59 compute-1 nova_compute[225855]: 2026-01-20 14:49:59.986 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:49:59 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:49:59.994 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4f:3f:20 10.100.0.12'], port_security=['fa:16:3e:4f:3f:20 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'fdeb13eb-edb4-4bff-aeef-2671ba9d4618', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-762e1859-4db4-4d9e-b66f-d50316f80df4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1c5f03d46c0c4162a3b2f1530850bb6c', 'neutron:revision_number': '4', 'neutron:security_group_ids': '80535eda-fa59-4edc-8e3d-9bfea6517730', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com', 'neutron:port_fip': '192.168.122.180'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2474a8ca-bb96-4cae-9133-23419b81a9fc, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], logical_port=6855cb4f-4178-4447-af36-126ade033206) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:49:59 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:49:59.996 140354 INFO neutron.agent.ovn.metadata.agent [-] Port 6855cb4f-4178-4447-af36-126ade033206 in datapath 762e1859-4db4-4d9e-b66f-d50316f80df4 unbound from our chassis
Jan 20 14:49:59 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:49:59.999 140354 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 762e1859-4db4-4d9e-b66f-d50316f80df4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 20 14:49:59 compute-1 nova_compute[225855]: 2026-01-20 14:49:59.999 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:50:00 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:50:00.000 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[e78048c7-6504-46ca-a339-7f550c5365ed]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:50:00 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:50:00.001 140354 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-762e1859-4db4-4d9e-b66f-d50316f80df4 namespace which is not needed anymore
Jan 20 14:50:00 compute-1 systemd[1]: machine-qemu\x2d45\x2dinstance\x2d00000069.scope: Deactivated successfully.
Jan 20 14:50:00 compute-1 systemd[1]: machine-qemu\x2d45\x2dinstance\x2d00000069.scope: Consumed 14.237s CPU time.
Jan 20 14:50:00 compute-1 systemd-machined[194361]: Machine qemu-45-instance-00000069 terminated.
Jan 20 14:50:00 compute-1 neutron-haproxy-ovnmeta-762e1859-4db4-4d9e-b66f-d50316f80df4[265510]: [NOTICE]   (265514) : haproxy version is 2.8.14-c23fe91
Jan 20 14:50:00 compute-1 neutron-haproxy-ovnmeta-762e1859-4db4-4d9e-b66f-d50316f80df4[265510]: [NOTICE]   (265514) : path to executable is /usr/sbin/haproxy
Jan 20 14:50:00 compute-1 neutron-haproxy-ovnmeta-762e1859-4db4-4d9e-b66f-d50316f80df4[265510]: [WARNING]  (265514) : Exiting Master process...
Jan 20 14:50:00 compute-1 neutron-haproxy-ovnmeta-762e1859-4db4-4d9e-b66f-d50316f80df4[265510]: [ALERT]    (265514) : Current worker (265516) exited with code 143 (Terminated)
Jan 20 14:50:00 compute-1 neutron-haproxy-ovnmeta-762e1859-4db4-4d9e-b66f-d50316f80df4[265510]: [WARNING]  (265514) : All workers exited. Exiting... (0)
Jan 20 14:50:00 compute-1 systemd[1]: libpod-6832a1e9e9fbd88072418ee5f45181e7252b473c2cddc2205a332351b82ef902.scope: Deactivated successfully.
Jan 20 14:50:00 compute-1 podman[266383]: 2026-01-20 14:50:00.124003024 +0000 UTC m=+0.043561191 container died 6832a1e9e9fbd88072418ee5f45181e7252b473c2cddc2205a332351b82ef902 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-762e1859-4db4-4d9e-b66f-d50316f80df4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 20 14:50:00 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6832a1e9e9fbd88072418ee5f45181e7252b473c2cddc2205a332351b82ef902-userdata-shm.mount: Deactivated successfully.
Jan 20 14:50:00 compute-1 systemd[1]: var-lib-containers-storage-overlay-5b9ea598c29d8f95feac6dbbb7186bfee63e25d53008923443ae675d2d1a3a93-merged.mount: Deactivated successfully.
Jan 20 14:50:00 compute-1 podman[266383]: 2026-01-20 14:50:00.162338036 +0000 UTC m=+0.081896193 container cleanup 6832a1e9e9fbd88072418ee5f45181e7252b473c2cddc2205a332351b82ef902 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-762e1859-4db4-4d9e-b66f-d50316f80df4, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:50:00 compute-1 systemd[1]: libpod-conmon-6832a1e9e9fbd88072418ee5f45181e7252b473c2cddc2205a332351b82ef902.scope: Deactivated successfully.
Jan 20 14:50:00 compute-1 nova_compute[225855]: 2026-01-20 14:50:00.200 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:50:00 compute-1 nova_compute[225855]: 2026-01-20 14:50:00.205 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:50:00 compute-1 podman[266413]: 2026-01-20 14:50:00.238294591 +0000 UTC m=+0.051762023 container remove 6832a1e9e9fbd88072418ee5f45181e7252b473c2cddc2205a332351b82ef902 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-762e1859-4db4-4d9e-b66f-d50316f80df4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:50:00 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:50:00.245 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[168ca7fa-99a6-4848-b5c9-0f88157a1906]: (4, ('Tue Jan 20 02:50:00 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-762e1859-4db4-4d9e-b66f-d50316f80df4 (6832a1e9e9fbd88072418ee5f45181e7252b473c2cddc2205a332351b82ef902)\n6832a1e9e9fbd88072418ee5f45181e7252b473c2cddc2205a332351b82ef902\nTue Jan 20 02:50:00 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-762e1859-4db4-4d9e-b66f-d50316f80df4 (6832a1e9e9fbd88072418ee5f45181e7252b473c2cddc2205a332351b82ef902)\n6832a1e9e9fbd88072418ee5f45181e7252b473c2cddc2205a332351b82ef902\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:50:00 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:50:00.246 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[7dc2f632-c19c-459c-beb7-fb96e0309acb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:50:00 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:50:00.247 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap762e1859-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:50:00 compute-1 kernel: tap762e1859-40: left promiscuous mode
Jan 20 14:50:00 compute-1 nova_compute[225855]: 2026-01-20 14:50:00.248 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:50:00 compute-1 nova_compute[225855]: 2026-01-20 14:50:00.261 225859 DEBUG nova.compute.manager [req-6439b544-5584-467c-92b7-5a2058bf7b71 req-ec47bd55-e9a1-4561-b20d-116bb2520b35 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 52477e64-7989-4aa2-88e1-31600bfae2ef] Received event network-vif-plugged-8286e975-4b57-4b5a-9018-82187a854a2d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:50:00 compute-1 nova_compute[225855]: 2026-01-20 14:50:00.262 225859 DEBUG oslo_concurrency.lockutils [req-6439b544-5584-467c-92b7-5a2058bf7b71 req-ec47bd55-e9a1-4561-b20d-116bb2520b35 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "52477e64-7989-4aa2-88e1-31600bfae2ef-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:50:00 compute-1 nova_compute[225855]: 2026-01-20 14:50:00.262 225859 DEBUG oslo_concurrency.lockutils [req-6439b544-5584-467c-92b7-5a2058bf7b71 req-ec47bd55-e9a1-4561-b20d-116bb2520b35 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "52477e64-7989-4aa2-88e1-31600bfae2ef-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:50:00 compute-1 nova_compute[225855]: 2026-01-20 14:50:00.263 225859 DEBUG oslo_concurrency.lockutils [req-6439b544-5584-467c-92b7-5a2058bf7b71 req-ec47bd55-e9a1-4561-b20d-116bb2520b35 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "52477e64-7989-4aa2-88e1-31600bfae2ef-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:50:00 compute-1 nova_compute[225855]: 2026-01-20 14:50:00.263 225859 DEBUG nova.compute.manager [req-6439b544-5584-467c-92b7-5a2058bf7b71 req-ec47bd55-e9a1-4561-b20d-116bb2520b35 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 52477e64-7989-4aa2-88e1-31600bfae2ef] No waiting events found dispatching network-vif-plugged-8286e975-4b57-4b5a-9018-82187a854a2d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:50:00 compute-1 nova_compute[225855]: 2026-01-20 14:50:00.263 225859 WARNING nova.compute.manager [req-6439b544-5584-467c-92b7-5a2058bf7b71 req-ec47bd55-e9a1-4561-b20d-116bb2520b35 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 52477e64-7989-4aa2-88e1-31600bfae2ef] Received unexpected event network-vif-plugged-8286e975-4b57-4b5a-9018-82187a854a2d for instance with vm_state resized and task_state None.
Jan 20 14:50:00 compute-1 nova_compute[225855]: 2026-01-20 14:50:00.265 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:50:00 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:50:00.267 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[6742d4ea-0e20-4e3e-bb41-8088dd4be3d8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:50:00 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:50:00.279 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[8a9a19f2-c827-409f-ba46-a420981afd02]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:50:00 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:50:00.283 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[2b12193d-e66e-4491-90cf-218adc63a748]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:50:00 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:50:00.297 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[8d753c9a-17a7-49f4-b6a1-b52a9b4292f2]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 560826, 'reachable_time': 41751, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 266441, 'error': None, 'target': 'ovnmeta-762e1859-4db4-4d9e-b66f-d50316f80df4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:50:00 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:50:00.299 140466 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-762e1859-4db4-4d9e-b66f-d50316f80df4 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 20 14:50:00 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:50:00.299 140466 DEBUG oslo.privsep.daemon [-] privsep: reply[5d5eb5ae-d8c8-40e1-b5d6-07e679b5d4f3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:50:00 compute-1 systemd[1]: run-netns-ovnmeta\x2d762e1859\x2d4db4\x2d4d9e\x2db66f\x2dd50316f80df4.mount: Deactivated successfully.
Jan 20 14:50:00 compute-1 nova_compute[225855]: 2026-01-20 14:50:00.396 225859 INFO nova.virt.libvirt.driver [None req-c7d66eda-2386-42cf-82ee-49bf287fa76d 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] [instance: fdeb13eb-edb4-4bff-aeef-2671ba9d4618] Instance shutdown successfully after 3 seconds.
Jan 20 14:50:00 compute-1 nova_compute[225855]: 2026-01-20 14:50:00.402 225859 INFO nova.virt.libvirt.driver [-] [instance: fdeb13eb-edb4-4bff-aeef-2671ba9d4618] Instance destroyed successfully.
Jan 20 14:50:00 compute-1 nova_compute[225855]: 2026-01-20 14:50:00.403 225859 DEBUG nova.virt.libvirt.vif [None req-c7d66eda-2386-42cf-82ee-49bf287fa76d 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T14:49:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-2012792656',display_name='tempest-ServerActionsTestJSON-server-2012792656',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-2012792656',id=105,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLgEzx5mLsSqRL7L9WKOzM+WdeJ40U103wY9H3VMZ41G/sN5tQtQSt9lXWKTyc6pt00bfKD0E9GPugNMpy+dzSSpK23o3CgadkzAfAvjQCeCPOSp3fX13FGApomGd1HRCQ==',key_name='tempest-keypair-1602241722',keypairs=<?>,launch_index=0,launched_at=2026-01-20T14:49:30Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='1c5f03d46c0c4162a3b2f1530850bb6c',ramdisk_id='',reservation_id='r-c8be97ji',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServerActionsTestJSON-1020442335',owner_user_name='tempest-ServerActionsTestJSON-1020442335-project-member'},tags=<?>,task_state='resize_migrating',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T14:49:53Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='3e9278fdb9e645b7938f3edb20c4d3cf',uuid=fdeb13eb-edb4-4bff-aeef-2671ba9d4618,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "6855cb4f-4178-4447-af36-126ade033206", "address": "fa:16:3e:4f:3f:20", "network": {"id": "762e1859-4db4-4d9e-b66f-d50316f80df4", "bridge": "br-int", "label": 
"tempest-ServerActionsTestJSON-1917526237-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.180", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestJSON-1917526237-network", "vif_mac": "fa:16:3e:4f:3f:20"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c5f03d46c0c4162a3b2f1530850bb6c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6855cb4f-41", "ovs_interfaceid": "6855cb4f-4178-4447-af36-126ade033206", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 20 14:50:00 compute-1 nova_compute[225855]: 2026-01-20 14:50:00.403 225859 DEBUG nova.network.os_vif_util [None req-c7d66eda-2386-42cf-82ee-49bf287fa76d 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Converting VIF {"id": "6855cb4f-4178-4447-af36-126ade033206", "address": "fa:16:3e:4f:3f:20", "network": {"id": "762e1859-4db4-4d9e-b66f-d50316f80df4", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1917526237-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.180", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestJSON-1917526237-network", "vif_mac": "fa:16:3e:4f:3f:20"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c5f03d46c0c4162a3b2f1530850bb6c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6855cb4f-41", "ovs_interfaceid": "6855cb4f-4178-4447-af36-126ade033206", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:50:00 compute-1 nova_compute[225855]: 2026-01-20 14:50:00.403 225859 DEBUG nova.network.os_vif_util [None req-c7d66eda-2386-42cf-82ee-49bf287fa76d 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:4f:3f:20,bridge_name='br-int',has_traffic_filtering=True,id=6855cb4f-4178-4447-af36-126ade033206,network=Network(762e1859-4db4-4d9e-b66f-d50316f80df4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6855cb4f-41') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:50:00 compute-1 nova_compute[225855]: 2026-01-20 14:50:00.404 225859 DEBUG os_vif [None req-c7d66eda-2386-42cf-82ee-49bf287fa76d 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:4f:3f:20,bridge_name='br-int',has_traffic_filtering=True,id=6855cb4f-4178-4447-af36-126ade033206,network=Network(762e1859-4db4-4d9e-b66f-d50316f80df4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6855cb4f-41') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 20 14:50:00 compute-1 nova_compute[225855]: 2026-01-20 14:50:00.407 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:50:00 compute-1 nova_compute[225855]: 2026-01-20 14:50:00.407 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6855cb4f-41, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:50:00 compute-1 nova_compute[225855]: 2026-01-20 14:50:00.410 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:50:00 compute-1 nova_compute[225855]: 2026-01-20 14:50:00.412 225859 INFO os_vif [None req-c7d66eda-2386-42cf-82ee-49bf287fa76d 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:4f:3f:20,bridge_name='br-int',has_traffic_filtering=True,id=6855cb4f-4178-4447-af36-126ade033206,network=Network(762e1859-4db4-4d9e-b66f-d50316f80df4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6855cb4f-41')
Jan 20 14:50:00 compute-1 nova_compute[225855]: 2026-01-20 14:50:00.416 225859 DEBUG nova.virt.libvirt.driver [None req-c7d66eda-2386-42cf-82ee-49bf287fa76d 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] skipping disk for instance-00000069 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 14:50:00 compute-1 nova_compute[225855]: 2026-01-20 14:50:00.416 225859 DEBUG nova.virt.libvirt.driver [None req-c7d66eda-2386-42cf-82ee-49bf287fa76d 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] skipping disk for instance-00000069 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 14:50:00 compute-1 ceph-mon[81775]: overall HEALTH_OK
Jan 20 14:50:00 compute-1 nova_compute[225855]: 2026-01-20 14:50:00.828 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:50:00 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:50:00 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:50:00 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:50:00.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:50:00 compute-1 nova_compute[225855]: 2026-01-20 14:50:00.866 225859 DEBUG nova.compute.manager [req-462e149f-6411-4175-8385-0b50623bc042 req-d545fb15-00ec-476c-9f22-3fd434e0c6a9 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: fdeb13eb-edb4-4bff-aeef-2671ba9d4618] Received event network-vif-unplugged-6855cb4f-4178-4447-af36-126ade033206 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:50:00 compute-1 nova_compute[225855]: 2026-01-20 14:50:00.867 225859 DEBUG oslo_concurrency.lockutils [req-462e149f-6411-4175-8385-0b50623bc042 req-d545fb15-00ec-476c-9f22-3fd434e0c6a9 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "fdeb13eb-edb4-4bff-aeef-2671ba9d4618-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:50:00 compute-1 nova_compute[225855]: 2026-01-20 14:50:00.867 225859 DEBUG oslo_concurrency.lockutils [req-462e149f-6411-4175-8385-0b50623bc042 req-d545fb15-00ec-476c-9f22-3fd434e0c6a9 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "fdeb13eb-edb4-4bff-aeef-2671ba9d4618-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:50:00 compute-1 nova_compute[225855]: 2026-01-20 14:50:00.867 225859 DEBUG oslo_concurrency.lockutils [req-462e149f-6411-4175-8385-0b50623bc042 req-d545fb15-00ec-476c-9f22-3fd434e0c6a9 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "fdeb13eb-edb4-4bff-aeef-2671ba9d4618-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:50:00 compute-1 nova_compute[225855]: 2026-01-20 14:50:00.867 225859 DEBUG nova.compute.manager [req-462e149f-6411-4175-8385-0b50623bc042 req-d545fb15-00ec-476c-9f22-3fd434e0c6a9 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: fdeb13eb-edb4-4bff-aeef-2671ba9d4618] No waiting events found dispatching network-vif-unplugged-6855cb4f-4178-4447-af36-126ade033206 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:50:00 compute-1 nova_compute[225855]: 2026-01-20 14:50:00.868 225859 WARNING nova.compute.manager [req-462e149f-6411-4175-8385-0b50623bc042 req-d545fb15-00ec-476c-9f22-3fd434e0c6a9 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: fdeb13eb-edb4-4bff-aeef-2671ba9d4618] Received unexpected event network-vif-unplugged-6855cb4f-4178-4447-af36-126ade033206 for instance with vm_state active and task_state resize_migrating.
Jan 20 14:50:00 compute-1 nova_compute[225855]: 2026-01-20 14:50:00.945 225859 DEBUG nova.network.neutron [None req-c7d66eda-2386-42cf-82ee-49bf287fa76d 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] [instance: fdeb13eb-edb4-4bff-aeef-2671ba9d4618] Port 6855cb4f-4178-4447-af36-126ade033206 binding to destination host compute-1.ctlplane.example.com is already ACTIVE migrate_instance_start /usr/lib/python3.9/site-packages/nova/network/neutron.py:3171
Jan 20 14:50:01 compute-1 nova_compute[225855]: 2026-01-20 14:50:01.102 225859 DEBUG oslo_concurrency.lockutils [None req-c7d66eda-2386-42cf-82ee-49bf287fa76d 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Acquiring lock "fdeb13eb-edb4-4bff-aeef-2671ba9d4618-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:50:01 compute-1 nova_compute[225855]: 2026-01-20 14:50:01.102 225859 DEBUG oslo_concurrency.lockutils [None req-c7d66eda-2386-42cf-82ee-49bf287fa76d 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Lock "fdeb13eb-edb4-4bff-aeef-2671ba9d4618-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:50:01 compute-1 nova_compute[225855]: 2026-01-20 14:50:01.103 225859 DEBUG oslo_concurrency.lockutils [None req-c7d66eda-2386-42cf-82ee-49bf287fa76d 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Lock "fdeb13eb-edb4-4bff-aeef-2671ba9d4618-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:50:01 compute-1 nova_compute[225855]: 2026-01-20 14:50:01.420 225859 DEBUG oslo_concurrency.lockutils [None req-c7d66eda-2386-42cf-82ee-49bf287fa76d 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Acquiring lock "refresh_cache-fdeb13eb-edb4-4bff-aeef-2671ba9d4618" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:50:01 compute-1 nova_compute[225855]: 2026-01-20 14:50:01.421 225859 DEBUG oslo_concurrency.lockutils [None req-c7d66eda-2386-42cf-82ee-49bf287fa76d 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Acquired lock "refresh_cache-fdeb13eb-edb4-4bff-aeef-2671ba9d4618" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:50:01 compute-1 nova_compute[225855]: 2026-01-20 14:50:01.421 225859 DEBUG nova.network.neutron [None req-c7d66eda-2386-42cf-82ee-49bf287fa76d 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] [instance: fdeb13eb-edb4-4bff-aeef-2671ba9d4618] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 14:50:01 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:50:01 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:50:01 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:50:01.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:50:01 compute-1 ceph-mon[81775]: pgmap v1862: 321 pgs: 321 active+clean; 246 MiB data, 906 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 20 KiB/s wr, 121 op/s
Jan 20 14:50:02 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:50:02 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:50:02 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:50:02 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:50:02.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:50:02 compute-1 ceph-mon[81775]: pgmap v1863: 321 pgs: 321 active+clean; 246 MiB data, 906 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 21 KiB/s wr, 142 op/s
Jan 20 14:50:03 compute-1 nova_compute[225855]: 2026-01-20 14:50:03.101 225859 DEBUG nova.compute.manager [req-dd4dc49c-e7a9-466d-9669-09c2f672c44d req-1a5ddf2a-9eff-4ed7-bc3b-572d19fba915 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: fdeb13eb-edb4-4bff-aeef-2671ba9d4618] Received event network-vif-plugged-6855cb4f-4178-4447-af36-126ade033206 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:50:03 compute-1 nova_compute[225855]: 2026-01-20 14:50:03.101 225859 DEBUG oslo_concurrency.lockutils [req-dd4dc49c-e7a9-466d-9669-09c2f672c44d req-1a5ddf2a-9eff-4ed7-bc3b-572d19fba915 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "fdeb13eb-edb4-4bff-aeef-2671ba9d4618-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:50:03 compute-1 nova_compute[225855]: 2026-01-20 14:50:03.101 225859 DEBUG oslo_concurrency.lockutils [req-dd4dc49c-e7a9-466d-9669-09c2f672c44d req-1a5ddf2a-9eff-4ed7-bc3b-572d19fba915 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "fdeb13eb-edb4-4bff-aeef-2671ba9d4618-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:50:03 compute-1 nova_compute[225855]: 2026-01-20 14:50:03.102 225859 DEBUG oslo_concurrency.lockutils [req-dd4dc49c-e7a9-466d-9669-09c2f672c44d req-1a5ddf2a-9eff-4ed7-bc3b-572d19fba915 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "fdeb13eb-edb4-4bff-aeef-2671ba9d4618-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:50:03 compute-1 nova_compute[225855]: 2026-01-20 14:50:03.102 225859 DEBUG nova.compute.manager [req-dd4dc49c-e7a9-466d-9669-09c2f672c44d req-1a5ddf2a-9eff-4ed7-bc3b-572d19fba915 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: fdeb13eb-edb4-4bff-aeef-2671ba9d4618] No waiting events found dispatching network-vif-plugged-6855cb4f-4178-4447-af36-126ade033206 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:50:03 compute-1 nova_compute[225855]: 2026-01-20 14:50:03.102 225859 WARNING nova.compute.manager [req-dd4dc49c-e7a9-466d-9669-09c2f672c44d req-1a5ddf2a-9eff-4ed7-bc3b-572d19fba915 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: fdeb13eb-edb4-4bff-aeef-2671ba9d4618] Received unexpected event network-vif-plugged-6855cb4f-4178-4447-af36-126ade033206 for instance with vm_state active and task_state resize_migrated.
Jan 20 14:50:03 compute-1 nova_compute[225855]: 2026-01-20 14:50:03.471 225859 DEBUG nova.network.neutron [None req-c7d66eda-2386-42cf-82ee-49bf287fa76d 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] [instance: fdeb13eb-edb4-4bff-aeef-2671ba9d4618] Updating instance_info_cache with network_info: [{"id": "6855cb4f-4178-4447-af36-126ade033206", "address": "fa:16:3e:4f:3f:20", "network": {"id": "762e1859-4db4-4d9e-b66f-d50316f80df4", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1917526237-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.180", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c5f03d46c0c4162a3b2f1530850bb6c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6855cb4f-41", "ovs_interfaceid": "6855cb4f-4178-4447-af36-126ade033206", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:50:03 compute-1 nova_compute[225855]: 2026-01-20 14:50:03.503 225859 DEBUG oslo_concurrency.lockutils [None req-c7d66eda-2386-42cf-82ee-49bf287fa76d 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Releasing lock "refresh_cache-fdeb13eb-edb4-4bff-aeef-2671ba9d4618" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:50:03 compute-1 nova_compute[225855]: 2026-01-20 14:50:03.624 225859 DEBUG nova.virt.libvirt.driver [None req-c7d66eda-2386-42cf-82ee-49bf287fa76d 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] [instance: fdeb13eb-edb4-4bff-aeef-2671ba9d4618] Starting finish_migration finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11698
Jan 20 14:50:03 compute-1 nova_compute[225855]: 2026-01-20 14:50:03.626 225859 DEBUG nova.virt.libvirt.driver [None req-c7d66eda-2386-42cf-82ee-49bf287fa76d 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] [instance: fdeb13eb-edb4-4bff-aeef-2671ba9d4618] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719
Jan 20 14:50:03 compute-1 nova_compute[225855]: 2026-01-20 14:50:03.626 225859 INFO nova.virt.libvirt.driver [None req-c7d66eda-2386-42cf-82ee-49bf287fa76d 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] [instance: fdeb13eb-edb4-4bff-aeef-2671ba9d4618] Creating image(s)
Jan 20 14:50:03 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:50:03 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:50:03 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:50:03.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:50:03 compute-1 nova_compute[225855]: 2026-01-20 14:50:03.662 225859 DEBUG nova.storage.rbd_utils [None req-c7d66eda-2386-42cf-82ee-49bf287fa76d 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] creating snapshot(nova-resize) on rbd image(fdeb13eb-edb4-4bff-aeef-2671ba9d4618_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 20 14:50:03 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e246 e246: 3 total, 3 up, 3 in
Jan 20 14:50:03 compute-1 nova_compute[225855]: 2026-01-20 14:50:03.952 225859 DEBUG nova.objects.instance [None req-c7d66eda-2386-42cf-82ee-49bf287fa76d 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Lazy-loading 'trusted_certs' on Instance uuid fdeb13eb-edb4-4bff-aeef-2671ba9d4618 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:50:04 compute-1 nova_compute[225855]: 2026-01-20 14:50:04.066 225859 DEBUG nova.virt.libvirt.driver [None req-c7d66eda-2386-42cf-82ee-49bf287fa76d 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] [instance: fdeb13eb-edb4-4bff-aeef-2671ba9d4618] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Jan 20 14:50:04 compute-1 nova_compute[225855]: 2026-01-20 14:50:04.066 225859 DEBUG nova.virt.libvirt.driver [None req-c7d66eda-2386-42cf-82ee-49bf287fa76d 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] [instance: fdeb13eb-edb4-4bff-aeef-2671ba9d4618] Ensure instance console log exists: /var/lib/nova/instances/fdeb13eb-edb4-4bff-aeef-2671ba9d4618/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 20 14:50:04 compute-1 nova_compute[225855]: 2026-01-20 14:50:04.067 225859 DEBUG oslo_concurrency.lockutils [None req-c7d66eda-2386-42cf-82ee-49bf287fa76d 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:50:04 compute-1 nova_compute[225855]: 2026-01-20 14:50:04.067 225859 DEBUG oslo_concurrency.lockutils [None req-c7d66eda-2386-42cf-82ee-49bf287fa76d 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:50:04 compute-1 nova_compute[225855]: 2026-01-20 14:50:04.067 225859 DEBUG oslo_concurrency.lockutils [None req-c7d66eda-2386-42cf-82ee-49bf287fa76d 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:50:04 compute-1 nova_compute[225855]: 2026-01-20 14:50:04.070 225859 DEBUG nova.virt.libvirt.driver [None req-c7d66eda-2386-42cf-82ee-49bf287fa76d 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] [instance: fdeb13eb-edb4-4bff-aeef-2671ba9d4618] Start _get_guest_xml network_info=[{"id": "6855cb4f-4178-4447-af36-126ade033206", "address": "fa:16:3e:4f:3f:20", "network": {"id": "762e1859-4db4-4d9e-b66f-d50316f80df4", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1917526237-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.180", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestJSON-1917526237-network", "vif_mac": "fa:16:3e:4f:3f:20"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c5f03d46c0c4162a3b2f1530850bb6c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6855cb4f-41", "ovs_interfaceid": "6855cb4f-4178-4447-af36-126ade033206", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'device_type': 'disk', 'encryption_options': None, 'size': 0, 'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'guest_format': None, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'image_id': 'a32b3e07-16d8-46fd-9a7b-c242c432fcf9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 20 14:50:04 compute-1 nova_compute[225855]: 2026-01-20 14:50:04.074 225859 WARNING nova.virt.libvirt.driver [None req-c7d66eda-2386-42cf-82ee-49bf287fa76d 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 14:50:04 compute-1 nova_compute[225855]: 2026-01-20 14:50:04.086 225859 DEBUG nova.virt.libvirt.host [None req-c7d66eda-2386-42cf-82ee-49bf287fa76d 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 20 14:50:04 compute-1 nova_compute[225855]: 2026-01-20 14:50:04.087 225859 DEBUG nova.virt.libvirt.host [None req-c7d66eda-2386-42cf-82ee-49bf287fa76d 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 20 14:50:04 compute-1 nova_compute[225855]: 2026-01-20 14:50:04.094 225859 DEBUG nova.virt.libvirt.host [None req-c7d66eda-2386-42cf-82ee-49bf287fa76d 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 20 14:50:04 compute-1 nova_compute[225855]: 2026-01-20 14:50:04.094 225859 DEBUG nova.virt.libvirt.host [None req-c7d66eda-2386-42cf-82ee-49bf287fa76d 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 20 14:50:04 compute-1 nova_compute[225855]: 2026-01-20 14:50:04.095 225859 DEBUG nova.virt.libvirt.driver [None req-c7d66eda-2386-42cf-82ee-49bf287fa76d 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 20 14:50:04 compute-1 nova_compute[225855]: 2026-01-20 14:50:04.096 225859 DEBUG nova.virt.hardware [None req-c7d66eda-2386-42cf-82ee-49bf287fa76d 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-20T14:21:55Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='30c26a27-d918-46d8-a512-4ef3b4ce5955',id=2,is_public=True,memory_mb=192,name='m1.micro',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 20 14:50:04 compute-1 nova_compute[225855]: 2026-01-20 14:50:04.096 225859 DEBUG nova.virt.hardware [None req-c7d66eda-2386-42cf-82ee-49bf287fa76d 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 20 14:50:04 compute-1 nova_compute[225855]: 2026-01-20 14:50:04.096 225859 DEBUG nova.virt.hardware [None req-c7d66eda-2386-42cf-82ee-49bf287fa76d 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 20 14:50:04 compute-1 nova_compute[225855]: 2026-01-20 14:50:04.097 225859 DEBUG nova.virt.hardware [None req-c7d66eda-2386-42cf-82ee-49bf287fa76d 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 20 14:50:04 compute-1 nova_compute[225855]: 2026-01-20 14:50:04.097 225859 DEBUG nova.virt.hardware [None req-c7d66eda-2386-42cf-82ee-49bf287fa76d 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 20 14:50:04 compute-1 nova_compute[225855]: 2026-01-20 14:50:04.097 225859 DEBUG nova.virt.hardware [None req-c7d66eda-2386-42cf-82ee-49bf287fa76d 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 20 14:50:04 compute-1 nova_compute[225855]: 2026-01-20 14:50:04.097 225859 DEBUG nova.virt.hardware [None req-c7d66eda-2386-42cf-82ee-49bf287fa76d 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 20 14:50:04 compute-1 nova_compute[225855]: 2026-01-20 14:50:04.098 225859 DEBUG nova.virt.hardware [None req-c7d66eda-2386-42cf-82ee-49bf287fa76d 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 20 14:50:04 compute-1 nova_compute[225855]: 2026-01-20 14:50:04.098 225859 DEBUG nova.virt.hardware [None req-c7d66eda-2386-42cf-82ee-49bf287fa76d 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 20 14:50:04 compute-1 nova_compute[225855]: 2026-01-20 14:50:04.098 225859 DEBUG nova.virt.hardware [None req-c7d66eda-2386-42cf-82ee-49bf287fa76d 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 20 14:50:04 compute-1 nova_compute[225855]: 2026-01-20 14:50:04.098 225859 DEBUG nova.virt.hardware [None req-c7d66eda-2386-42cf-82ee-49bf287fa76d 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 20 14:50:04 compute-1 nova_compute[225855]: 2026-01-20 14:50:04.099 225859 DEBUG nova.objects.instance [None req-c7d66eda-2386-42cf-82ee-49bf287fa76d 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Lazy-loading 'vcpu_model' on Instance uuid fdeb13eb-edb4-4bff-aeef-2671ba9d4618 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:50:04 compute-1 nova_compute[225855]: 2026-01-20 14:50:04.119 225859 DEBUG oslo_concurrency.processutils [None req-c7d66eda-2386-42cf-82ee-49bf287fa76d 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:50:04 compute-1 ovn_controller[130490]: 2026-01-20T14:50:04Z|00399|binding|INFO|Releasing lport b32ddf23-a8dd-4e6d-a410-ccb24b214d35 from this chassis (sb_readonly=0)
Jan 20 14:50:04 compute-1 nova_compute[225855]: 2026-01-20 14:50:04.462 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:50:04 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:50:04 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2075039030' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:50:04 compute-1 nova_compute[225855]: 2026-01-20 14:50:04.567 225859 DEBUG oslo_concurrency.processutils [None req-c7d66eda-2386-42cf-82ee-49bf287fa76d 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:50:04 compute-1 nova_compute[225855]: 2026-01-20 14:50:04.608 225859 DEBUG oslo_concurrency.processutils [None req-c7d66eda-2386-42cf-82ee-49bf287fa76d 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:50:04 compute-1 ovn_controller[130490]: 2026-01-20T14:50:04Z|00400|binding|INFO|Releasing lport b32ddf23-a8dd-4e6d-a410-ccb24b214d35 from this chassis (sb_readonly=0)
Jan 20 14:50:04 compute-1 nova_compute[225855]: 2026-01-20 14:50:04.636 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:50:04 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:50:04 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:50:04 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:50:04.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:50:04 compute-1 ceph-mon[81775]: pgmap v1864: 321 pgs: 321 active+clean; 246 MiB data, 906 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 25 KiB/s wr, 140 op/s
Jan 20 14:50:04 compute-1 ceph-mon[81775]: osdmap e246: 3 total, 3 up, 3 in
Jan 20 14:50:04 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/2075039030' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:50:04 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e247 e247: 3 total, 3 up, 3 in
Jan 20 14:50:05 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:50:05 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1676165450' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:50:05 compute-1 nova_compute[225855]: 2026-01-20 14:50:05.050 225859 DEBUG oslo_concurrency.processutils [None req-c7d66eda-2386-42cf-82ee-49bf287fa76d 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:50:05 compute-1 nova_compute[225855]: 2026-01-20 14:50:05.053 225859 DEBUG nova.virt.libvirt.vif [None req-c7d66eda-2386-42cf-82ee-49bf287fa76d 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T14:49:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-2012792656',display_name='tempest-ServerActionsTestJSON-server-2012792656',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-2012792656',id=105,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLgEzx5mLsSqRL7L9WKOzM+WdeJ40U103wY9H3VMZ41G/sN5tQtQSt9lXWKTyc6pt00bfKD0E9GPugNMpy+dzSSpK23o3CgadkzAfAvjQCeCPOSp3fX13FGApomGd1HRCQ==',key_name='tempest-keypair-1602241722',keypairs=<?>,launch_index=0,launched_at=2026-01-20T14:49:30Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='1c5f03d46c0c4162a3b2f1530850bb6c',ramdisk_id='',reservation_id='r-c8be97ji',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServerActionsTestJSON-1020442335',owner_user_name='tempest-ServerActionsTestJSON-1020442335-project-member'},tags=<?>,task_state='resize_finish',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T14:50:01Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='3e9278fdb9e645b7938f3edb20c4d3cf',uuid=fdeb13eb-edb4-4bff-aeef-2671ba9d4618,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "6855cb4f-4178-4447-af36-126ade033206", "address": "fa:16:3e:4f:3f:20", "network": {"id": "762e1859-4db4-4d9e-b66f-d50316f80df4", "bridge": "br-int", "label": 
"tempest-ServerActionsTestJSON-1917526237-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.180", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestJSON-1917526237-network", "vif_mac": "fa:16:3e:4f:3f:20"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c5f03d46c0c4162a3b2f1530850bb6c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6855cb4f-41", "ovs_interfaceid": "6855cb4f-4178-4447-af36-126ade033206", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 20 14:50:05 compute-1 nova_compute[225855]: 2026-01-20 14:50:05.054 225859 DEBUG nova.network.os_vif_util [None req-c7d66eda-2386-42cf-82ee-49bf287fa76d 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Converting VIF {"id": "6855cb4f-4178-4447-af36-126ade033206", "address": "fa:16:3e:4f:3f:20", "network": {"id": "762e1859-4db4-4d9e-b66f-d50316f80df4", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1917526237-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.180", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestJSON-1917526237-network", "vif_mac": "fa:16:3e:4f:3f:20"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c5f03d46c0c4162a3b2f1530850bb6c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6855cb4f-41", "ovs_interfaceid": "6855cb4f-4178-4447-af36-126ade033206", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:50:05 compute-1 nova_compute[225855]: 2026-01-20 14:50:05.055 225859 DEBUG nova.network.os_vif_util [None req-c7d66eda-2386-42cf-82ee-49bf287fa76d 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4f:3f:20,bridge_name='br-int',has_traffic_filtering=True,id=6855cb4f-4178-4447-af36-126ade033206,network=Network(762e1859-4db4-4d9e-b66f-d50316f80df4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6855cb4f-41') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:50:05 compute-1 nova_compute[225855]: 2026-01-20 14:50:05.060 225859 DEBUG nova.virt.libvirt.driver [None req-c7d66eda-2386-42cf-82ee-49bf287fa76d 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] [instance: fdeb13eb-edb4-4bff-aeef-2671ba9d4618] End _get_guest_xml xml=<domain type="kvm">
Jan 20 14:50:05 compute-1 nova_compute[225855]:   <uuid>fdeb13eb-edb4-4bff-aeef-2671ba9d4618</uuid>
Jan 20 14:50:05 compute-1 nova_compute[225855]:   <name>instance-00000069</name>
Jan 20 14:50:05 compute-1 nova_compute[225855]:   <memory>196608</memory>
Jan 20 14:50:05 compute-1 nova_compute[225855]:   <vcpu>1</vcpu>
Jan 20 14:50:05 compute-1 nova_compute[225855]:   <metadata>
Jan 20 14:50:05 compute-1 nova_compute[225855]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 14:50:05 compute-1 nova_compute[225855]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 14:50:05 compute-1 nova_compute[225855]:       <nova:name>tempest-ServerActionsTestJSON-server-2012792656</nova:name>
Jan 20 14:50:05 compute-1 nova_compute[225855]:       <nova:creationTime>2026-01-20 14:50:04</nova:creationTime>
Jan 20 14:50:05 compute-1 nova_compute[225855]:       <nova:flavor name="m1.micro">
Jan 20 14:50:05 compute-1 nova_compute[225855]:         <nova:memory>192</nova:memory>
Jan 20 14:50:05 compute-1 nova_compute[225855]:         <nova:disk>1</nova:disk>
Jan 20 14:50:05 compute-1 nova_compute[225855]:         <nova:swap>0</nova:swap>
Jan 20 14:50:05 compute-1 nova_compute[225855]:         <nova:ephemeral>0</nova:ephemeral>
Jan 20 14:50:05 compute-1 nova_compute[225855]:         <nova:vcpus>1</nova:vcpus>
Jan 20 14:50:05 compute-1 nova_compute[225855]:       </nova:flavor>
Jan 20 14:50:05 compute-1 nova_compute[225855]:       <nova:owner>
Jan 20 14:50:05 compute-1 nova_compute[225855]:         <nova:user uuid="3e9278fdb9e645b7938f3edb20c4d3cf">tempest-ServerActionsTestJSON-1020442335-project-member</nova:user>
Jan 20 14:50:05 compute-1 nova_compute[225855]:         <nova:project uuid="1c5f03d46c0c4162a3b2f1530850bb6c">tempest-ServerActionsTestJSON-1020442335</nova:project>
Jan 20 14:50:05 compute-1 nova_compute[225855]:       </nova:owner>
Jan 20 14:50:05 compute-1 nova_compute[225855]:       <nova:root type="image" uuid="a32b3e07-16d8-46fd-9a7b-c242c432fcf9"/>
Jan 20 14:50:05 compute-1 nova_compute[225855]:       <nova:ports>
Jan 20 14:50:05 compute-1 nova_compute[225855]:         <nova:port uuid="6855cb4f-4178-4447-af36-126ade033206">
Jan 20 14:50:05 compute-1 nova_compute[225855]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Jan 20 14:50:05 compute-1 nova_compute[225855]:         </nova:port>
Jan 20 14:50:05 compute-1 nova_compute[225855]:       </nova:ports>
Jan 20 14:50:05 compute-1 nova_compute[225855]:     </nova:instance>
Jan 20 14:50:05 compute-1 nova_compute[225855]:   </metadata>
Jan 20 14:50:05 compute-1 nova_compute[225855]:   <sysinfo type="smbios">
Jan 20 14:50:05 compute-1 nova_compute[225855]:     <system>
Jan 20 14:50:05 compute-1 nova_compute[225855]:       <entry name="manufacturer">RDO</entry>
Jan 20 14:50:05 compute-1 nova_compute[225855]:       <entry name="product">OpenStack Compute</entry>
Jan 20 14:50:05 compute-1 nova_compute[225855]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 14:50:05 compute-1 nova_compute[225855]:       <entry name="serial">fdeb13eb-edb4-4bff-aeef-2671ba9d4618</entry>
Jan 20 14:50:05 compute-1 nova_compute[225855]:       <entry name="uuid">fdeb13eb-edb4-4bff-aeef-2671ba9d4618</entry>
Jan 20 14:50:05 compute-1 nova_compute[225855]:       <entry name="family">Virtual Machine</entry>
Jan 20 14:50:05 compute-1 nova_compute[225855]:     </system>
Jan 20 14:50:05 compute-1 nova_compute[225855]:   </sysinfo>
Jan 20 14:50:05 compute-1 nova_compute[225855]:   <os>
Jan 20 14:50:05 compute-1 nova_compute[225855]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 20 14:50:05 compute-1 nova_compute[225855]:     <boot dev="hd"/>
Jan 20 14:50:05 compute-1 nova_compute[225855]:     <smbios mode="sysinfo"/>
Jan 20 14:50:05 compute-1 nova_compute[225855]:   </os>
Jan 20 14:50:05 compute-1 nova_compute[225855]:   <features>
Jan 20 14:50:05 compute-1 nova_compute[225855]:     <acpi/>
Jan 20 14:50:05 compute-1 nova_compute[225855]:     <apic/>
Jan 20 14:50:05 compute-1 nova_compute[225855]:     <vmcoreinfo/>
Jan 20 14:50:05 compute-1 nova_compute[225855]:   </features>
Jan 20 14:50:05 compute-1 nova_compute[225855]:   <clock offset="utc">
Jan 20 14:50:05 compute-1 nova_compute[225855]:     <timer name="pit" tickpolicy="delay"/>
Jan 20 14:50:05 compute-1 nova_compute[225855]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 20 14:50:05 compute-1 nova_compute[225855]:     <timer name="hpet" present="no"/>
Jan 20 14:50:05 compute-1 nova_compute[225855]:   </clock>
Jan 20 14:50:05 compute-1 nova_compute[225855]:   <cpu mode="custom" match="exact">
Jan 20 14:50:05 compute-1 nova_compute[225855]:     <model>Nehalem</model>
Jan 20 14:50:05 compute-1 nova_compute[225855]:     <topology sockets="1" cores="1" threads="1"/>
Jan 20 14:50:05 compute-1 nova_compute[225855]:   </cpu>
Jan 20 14:50:05 compute-1 nova_compute[225855]:   <devices>
Jan 20 14:50:05 compute-1 nova_compute[225855]:     <disk type="network" device="disk">
Jan 20 14:50:05 compute-1 nova_compute[225855]:       <driver type="raw" cache="none"/>
Jan 20 14:50:05 compute-1 nova_compute[225855]:       <source protocol="rbd" name="vms/fdeb13eb-edb4-4bff-aeef-2671ba9d4618_disk">
Jan 20 14:50:05 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:50:05 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:50:05 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:50:05 compute-1 nova_compute[225855]:       </source>
Jan 20 14:50:05 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 14:50:05 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:50:05 compute-1 nova_compute[225855]:       </auth>
Jan 20 14:50:05 compute-1 nova_compute[225855]:       <target dev="vda" bus="virtio"/>
Jan 20 14:50:05 compute-1 nova_compute[225855]:     </disk>
Jan 20 14:50:05 compute-1 nova_compute[225855]:     <disk type="network" device="cdrom">
Jan 20 14:50:05 compute-1 nova_compute[225855]:       <driver type="raw" cache="none"/>
Jan 20 14:50:05 compute-1 nova_compute[225855]:       <source protocol="rbd" name="vms/fdeb13eb-edb4-4bff-aeef-2671ba9d4618_disk.config">
Jan 20 14:50:05 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:50:05 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:50:05 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:50:05 compute-1 nova_compute[225855]:       </source>
Jan 20 14:50:05 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 14:50:05 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:50:05 compute-1 nova_compute[225855]:       </auth>
Jan 20 14:50:05 compute-1 nova_compute[225855]:       <target dev="sda" bus="sata"/>
Jan 20 14:50:05 compute-1 nova_compute[225855]:     </disk>
Jan 20 14:50:05 compute-1 nova_compute[225855]:     <interface type="ethernet">
Jan 20 14:50:05 compute-1 nova_compute[225855]:       <mac address="fa:16:3e:4f:3f:20"/>
Jan 20 14:50:05 compute-1 nova_compute[225855]:       <model type="virtio"/>
Jan 20 14:50:05 compute-1 nova_compute[225855]:       <driver name="vhost" rx_queue_size="512"/>
Jan 20 14:50:05 compute-1 nova_compute[225855]:       <mtu size="1442"/>
Jan 20 14:50:05 compute-1 nova_compute[225855]:       <target dev="tap6855cb4f-41"/>
Jan 20 14:50:05 compute-1 nova_compute[225855]:     </interface>
Jan 20 14:50:05 compute-1 nova_compute[225855]:     <serial type="pty">
Jan 20 14:50:05 compute-1 nova_compute[225855]:       <log file="/var/lib/nova/instances/fdeb13eb-edb4-4bff-aeef-2671ba9d4618/console.log" append="off"/>
Jan 20 14:50:05 compute-1 nova_compute[225855]:     </serial>
Jan 20 14:50:05 compute-1 nova_compute[225855]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 20 14:50:05 compute-1 nova_compute[225855]:     <video>
Jan 20 14:50:05 compute-1 nova_compute[225855]:       <model type="virtio"/>
Jan 20 14:50:05 compute-1 nova_compute[225855]:     </video>
Jan 20 14:50:05 compute-1 nova_compute[225855]:     <input type="tablet" bus="usb"/>
Jan 20 14:50:05 compute-1 nova_compute[225855]:     <rng model="virtio">
Jan 20 14:50:05 compute-1 nova_compute[225855]:       <backend model="random">/dev/urandom</backend>
Jan 20 14:50:05 compute-1 nova_compute[225855]:     </rng>
Jan 20 14:50:05 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root"/>
Jan 20 14:50:05 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:50:05 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:50:05 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:50:05 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:50:05 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:50:05 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:50:05 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:50:05 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:50:05 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:50:05 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:50:05 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:50:05 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:50:05 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:50:05 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:50:05 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:50:05 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:50:05 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:50:05 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:50:05 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:50:05 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:50:05 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:50:05 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:50:05 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:50:05 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:50:05 compute-1 nova_compute[225855]:     <controller type="usb" index="0"/>
Jan 20 14:50:05 compute-1 nova_compute[225855]:     <memballoon model="virtio">
Jan 20 14:50:05 compute-1 nova_compute[225855]:       <stats period="10"/>
Jan 20 14:50:05 compute-1 nova_compute[225855]:     </memballoon>
Jan 20 14:50:05 compute-1 nova_compute[225855]:   </devices>
Jan 20 14:50:05 compute-1 nova_compute[225855]: </domain>
Jan 20 14:50:05 compute-1 nova_compute[225855]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 20 14:50:05 compute-1 nova_compute[225855]: 2026-01-20 14:50:05.062 225859 DEBUG nova.virt.libvirt.vif [None req-c7d66eda-2386-42cf-82ee-49bf287fa76d 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T14:49:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-2012792656',display_name='tempest-ServerActionsTestJSON-server-2012792656',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-2012792656',id=105,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLgEzx5mLsSqRL7L9WKOzM+WdeJ40U103wY9H3VMZ41G/sN5tQtQSt9lXWKTyc6pt00bfKD0E9GPugNMpy+dzSSpK23o3CgadkzAfAvjQCeCPOSp3fX13FGApomGd1HRCQ==',key_name='tempest-keypair-1602241722',keypairs=<?>,launch_index=0,launched_at=2026-01-20T14:49:30Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='1c5f03d46c0c4162a3b2f1530850bb6c',ramdisk_id='',reservation_id='r-c8be97ji',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServerActionsTestJSON-1020442335',owner_user_name='tempest-ServerActionsTestJSON-1020442335-project-member'},tags=<?>,task_state='resize_finish',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T14:50:01Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='3e9278fdb9e645b7938f3edb20c4d3cf',uuid=fdeb13eb-edb4-4bff-aeef-2671ba9d4618,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "6855cb4f-4178-4447-af36-126ade033206", "address": "fa:16:3e:4f:3f:20", "network": {"id": "762e1859-4db4-4d9e-b66f-d50316f80df4", "bridge": "br-int", "label": 
"tempest-ServerActionsTestJSON-1917526237-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.180", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestJSON-1917526237-network", "vif_mac": "fa:16:3e:4f:3f:20"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c5f03d46c0c4162a3b2f1530850bb6c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6855cb4f-41", "ovs_interfaceid": "6855cb4f-4178-4447-af36-126ade033206", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 20 14:50:05 compute-1 nova_compute[225855]: 2026-01-20 14:50:05.062 225859 DEBUG nova.network.os_vif_util [None req-c7d66eda-2386-42cf-82ee-49bf287fa76d 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Converting VIF {"id": "6855cb4f-4178-4447-af36-126ade033206", "address": "fa:16:3e:4f:3f:20", "network": {"id": "762e1859-4db4-4d9e-b66f-d50316f80df4", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1917526237-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.180", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestJSON-1917526237-network", "vif_mac": "fa:16:3e:4f:3f:20"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c5f03d46c0c4162a3b2f1530850bb6c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6855cb4f-41", "ovs_interfaceid": "6855cb4f-4178-4447-af36-126ade033206", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:50:05 compute-1 nova_compute[225855]: 2026-01-20 14:50:05.063 225859 DEBUG nova.network.os_vif_util [None req-c7d66eda-2386-42cf-82ee-49bf287fa76d 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4f:3f:20,bridge_name='br-int',has_traffic_filtering=True,id=6855cb4f-4178-4447-af36-126ade033206,network=Network(762e1859-4db4-4d9e-b66f-d50316f80df4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6855cb4f-41') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:50:05 compute-1 nova_compute[225855]: 2026-01-20 14:50:05.064 225859 DEBUG os_vif [None req-c7d66eda-2386-42cf-82ee-49bf287fa76d 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:4f:3f:20,bridge_name='br-int',has_traffic_filtering=True,id=6855cb4f-4178-4447-af36-126ade033206,network=Network(762e1859-4db4-4d9e-b66f-d50316f80df4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6855cb4f-41') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 20 14:50:05 compute-1 nova_compute[225855]: 2026-01-20 14:50:05.065 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:50:05 compute-1 nova_compute[225855]: 2026-01-20 14:50:05.066 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:50:05 compute-1 nova_compute[225855]: 2026-01-20 14:50:05.066 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:50:05 compute-1 nova_compute[225855]: 2026-01-20 14:50:05.069 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:50:05 compute-1 nova_compute[225855]: 2026-01-20 14:50:05.070 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6855cb4f-41, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:50:05 compute-1 nova_compute[225855]: 2026-01-20 14:50:05.071 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap6855cb4f-41, col_values=(('external_ids', {'iface-id': '6855cb4f-4178-4447-af36-126ade033206', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:4f:3f:20', 'vm-uuid': 'fdeb13eb-edb4-4bff-aeef-2671ba9d4618'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:50:05 compute-1 nova_compute[225855]: 2026-01-20 14:50:05.072 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:50:05 compute-1 NetworkManager[49104]: <info>  [1768920605.0737] manager: (tap6855cb4f-41): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/169)
Jan 20 14:50:05 compute-1 nova_compute[225855]: 2026-01-20 14:50:05.075 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 14:50:05 compute-1 nova_compute[225855]: 2026-01-20 14:50:05.077 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:50:05 compute-1 nova_compute[225855]: 2026-01-20 14:50:05.078 225859 INFO os_vif [None req-c7d66eda-2386-42cf-82ee-49bf287fa76d 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:4f:3f:20,bridge_name='br-int',has_traffic_filtering=True,id=6855cb4f-4178-4447-af36-126ade033206,network=Network(762e1859-4db4-4d9e-b66f-d50316f80df4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6855cb4f-41')
Jan 20 14:50:05 compute-1 nova_compute[225855]: 2026-01-20 14:50:05.153 225859 DEBUG nova.virt.libvirt.driver [None req-c7d66eda-2386-42cf-82ee-49bf287fa76d 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:50:05 compute-1 nova_compute[225855]: 2026-01-20 14:50:05.153 225859 DEBUG nova.virt.libvirt.driver [None req-c7d66eda-2386-42cf-82ee-49bf287fa76d 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:50:05 compute-1 nova_compute[225855]: 2026-01-20 14:50:05.154 225859 DEBUG nova.virt.libvirt.driver [None req-c7d66eda-2386-42cf-82ee-49bf287fa76d 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] No VIF found with MAC fa:16:3e:4f:3f:20, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 20 14:50:05 compute-1 nova_compute[225855]: 2026-01-20 14:50:05.155 225859 INFO nova.virt.libvirt.driver [None req-c7d66eda-2386-42cf-82ee-49bf287fa76d 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] [instance: fdeb13eb-edb4-4bff-aeef-2671ba9d4618] Using config drive
Jan 20 14:50:05 compute-1 podman[266581]: 2026-01-20 14:50:05.224082394 +0000 UTC m=+0.106675142 container health_status 72be95c12e041eb2cd1a16ad70dfa1391d9f03a5f3180a614ead9459dcfcd46f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, 
org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 20 14:50:05 compute-1 kernel: tap6855cb4f-41: entered promiscuous mode
Jan 20 14:50:05 compute-1 NetworkManager[49104]: <info>  [1768920605.2460] manager: (tap6855cb4f-41): new Tun device (/org/freedesktop/NetworkManager/Devices/170)
Jan 20 14:50:05 compute-1 ovn_controller[130490]: 2026-01-20T14:50:05Z|00401|binding|INFO|Claiming lport 6855cb4f-4178-4447-af36-126ade033206 for this chassis.
Jan 20 14:50:05 compute-1 ovn_controller[130490]: 2026-01-20T14:50:05Z|00402|binding|INFO|6855cb4f-4178-4447-af36-126ade033206: Claiming fa:16:3e:4f:3f:20 10.100.0.12
Jan 20 14:50:05 compute-1 nova_compute[225855]: 2026-01-20 14:50:05.258 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:50:05 compute-1 systemd-machined[194361]: New machine qemu-47-instance-00000069.
Jan 20 14:50:05 compute-1 nova_compute[225855]: 2026-01-20 14:50:05.280 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:50:05 compute-1 NetworkManager[49104]: <info>  [1768920605.2822] manager: (patch-provnet-b62c391b-f7a3-4a38-a0df-72ac0383ca74-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/171)
Jan 20 14:50:05 compute-1 NetworkManager[49104]: <info>  [1768920605.2828] manager: (patch-br-int-to-provnet-b62c391b-f7a3-4a38-a0df-72ac0383ca74): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/172)
Jan 20 14:50:05 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:50:05.284 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4f:3f:20 10.100.0.12'], port_security=['fa:16:3e:4f:3f:20 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'fdeb13eb-edb4-4bff-aeef-2671ba9d4618', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-762e1859-4db4-4d9e-b66f-d50316f80df4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1c5f03d46c0c4162a3b2f1530850bb6c', 'neutron:revision_number': '5', 'neutron:security_group_ids': '80535eda-fa59-4edc-8e3d-9bfea6517730', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.180'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2474a8ca-bb96-4cae-9133-23419b81a9fc, chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], logical_port=6855cb4f-4178-4447-af36-126ade033206) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:50:05 compute-1 systemd[1]: Started Virtual Machine qemu-47-instance-00000069.
Jan 20 14:50:05 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:50:05.286 140354 INFO neutron.agent.ovn.metadata.agent [-] Port 6855cb4f-4178-4447-af36-126ade033206 in datapath 762e1859-4db4-4d9e-b66f-d50316f80df4 bound to our chassis
Jan 20 14:50:05 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:50:05.289 140354 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 762e1859-4db4-4d9e-b66f-d50316f80df4
Jan 20 14:50:05 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:50:05.301 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[5547a843-6a11-4d5c-887f-118cf30c2862]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:50:05 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:50:05.302 140354 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap762e1859-41 in ovnmeta-762e1859-4db4-4d9e-b66f-d50316f80df4 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 20 14:50:05 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:50:05.305 229707 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap762e1859-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 20 14:50:05 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:50:05.305 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[8c62f464-bb68-4960-a889-d5f3f698d7e2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:50:05 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:50:05.307 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[4bd87729-5bd7-4a98-b77d-01dcdb898f95]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:50:05 compute-1 systemd-udevd[266640]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 14:50:05 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:50:05.321 140466 DEBUG oslo.privsep.daemon [-] privsep: reply[cc3757ad-39bb-43c3-b242-9aff85c213aa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:50:05 compute-1 NetworkManager[49104]: <info>  [1768920605.3384] device (tap6855cb4f-41): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 20 14:50:05 compute-1 NetworkManager[49104]: <info>  [1768920605.3392] device (tap6855cb4f-41): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 20 14:50:05 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:50:05.349 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[e8730987-bfb2-4242-b41c-8ecd5634e032]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:50:05 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:50:05.379 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[f7066776-453e-476b-8b49-0cc56244aff1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:50:05 compute-1 NetworkManager[49104]: <info>  [1768920605.3871] manager: (tap762e1859-40): new Veth device (/org/freedesktop/NetworkManager/Devices/173)
Jan 20 14:50:05 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:50:05.386 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[a94f5ee3-d0ce-4a5e-830b-b2b3c9ec2ade]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:50:05 compute-1 systemd-udevd[266643]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 14:50:05 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:50:05.421 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[745e2ee9-63fa-4f80-af2a-c08ec3b856b4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:50:05 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:50:05.425 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[71d4b5d5-f408-43bc-800d-3336f6d78002]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:50:05 compute-1 NetworkManager[49104]: <info>  [1768920605.4451] device (tap762e1859-40): carrier: link connected
Jan 20 14:50:05 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:50:05.449 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[ba3fd097-4ce9-492c-be8b-6bcdd8e82b77]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:50:05 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:50:05.468 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[28ceffc7-51f2-4e53-86da-7e0e1641d585]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap762e1859-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:69:f1:da'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 110], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 564437, 'reachable_time': 22954, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 266671, 'error': None, 'target': 'ovnmeta-762e1859-4db4-4d9e-b66f-d50316f80df4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:50:05 compute-1 nova_compute[225855]: 2026-01-20 14:50:05.476 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:50:05 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:50:05.485 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[90d2fe77-e85d-4687-a1f0-7c5143a9e82e]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe69:f1da'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 564437, 'tstamp': 564437}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 266672, 'error': None, 'target': 'ovnmeta-762e1859-4db4-4d9e-b66f-d50316f80df4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:50:05 compute-1 ovn_controller[130490]: 2026-01-20T14:50:05Z|00403|binding|INFO|Releasing lport b32ddf23-a8dd-4e6d-a410-ccb24b214d35 from this chassis (sb_readonly=0)
Jan 20 14:50:05 compute-1 ovn_controller[130490]: 2026-01-20T14:50:05Z|00404|binding|INFO|Setting lport 6855cb4f-4178-4447-af36-126ade033206 up in Southbound
Jan 20 14:50:05 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:50:05.502 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[31fd9935-ab99-4330-87ff-b82e605089bb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap762e1859-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:69:f1:da'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 110], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 564437, 'reachable_time': 22954, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 266673, 'error': None, 'target': 'ovnmeta-762e1859-4db4-4d9e-b66f-d50316f80df4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:50:05 compute-1 nova_compute[225855]: 2026-01-20 14:50:05.524 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:50:05 compute-1 ovn_controller[130490]: 2026-01-20T14:50:05Z|00405|binding|INFO|Setting lport 6855cb4f-4178-4447-af36-126ade033206 ovn-installed in OVS
Jan 20 14:50:05 compute-1 nova_compute[225855]: 2026-01-20 14:50:05.526 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:50:05 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:50:05.551 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[3db07b07-2d89-463b-9ce7-65fb4d5b8431]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:50:05 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:50:05.609 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[1fdefb70-23bb-4ab4-a012-0094733b43da]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:50:05 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:50:05.610 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap762e1859-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:50:05 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:50:05.610 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:50:05 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:50:05.610 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap762e1859-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:50:05 compute-1 nova_compute[225855]: 2026-01-20 14:50:05.612 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:50:05 compute-1 kernel: tap762e1859-40: entered promiscuous mode
Jan 20 14:50:05 compute-1 NetworkManager[49104]: <info>  [1768920605.6132] manager: (tap762e1859-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/174)
Jan 20 14:50:05 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:50:05.615 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap762e1859-40, col_values=(('external_ids', {'iface-id': '9e775c45-1646-436d-a0cb-a5b5ec356e1b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:50:05 compute-1 nova_compute[225855]: 2026-01-20 14:50:05.616 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:50:05 compute-1 ovn_controller[130490]: 2026-01-20T14:50:05Z|00406|binding|INFO|Releasing lport 9e775c45-1646-436d-a0cb-a5b5ec356e1b from this chassis (sb_readonly=0)
Jan 20 14:50:05 compute-1 nova_compute[225855]: 2026-01-20 14:50:05.618 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:50:05 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:50:05.618 140354 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/762e1859-4db4-4d9e-b66f-d50316f80df4.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/762e1859-4db4-4d9e-b66f-d50316f80df4.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 20 14:50:05 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:50:05.620 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[29cd99b0-abbd-4222-a88d-95beff064149]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:50:05 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:50:05.621 140354 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 20 14:50:05 compute-1 ovn_metadata_agent[140349]: global
Jan 20 14:50:05 compute-1 ovn_metadata_agent[140349]:     log         /dev/log local0 debug
Jan 20 14:50:05 compute-1 ovn_metadata_agent[140349]:     log-tag     haproxy-metadata-proxy-762e1859-4db4-4d9e-b66f-d50316f80df4
Jan 20 14:50:05 compute-1 ovn_metadata_agent[140349]:     user        root
Jan 20 14:50:05 compute-1 ovn_metadata_agent[140349]:     group       root
Jan 20 14:50:05 compute-1 ovn_metadata_agent[140349]:     maxconn     1024
Jan 20 14:50:05 compute-1 ovn_metadata_agent[140349]:     pidfile     /var/lib/neutron/external/pids/762e1859-4db4-4d9e-b66f-d50316f80df4.pid.haproxy
Jan 20 14:50:05 compute-1 ovn_metadata_agent[140349]:     daemon
Jan 20 14:50:05 compute-1 ovn_metadata_agent[140349]: 
Jan 20 14:50:05 compute-1 ovn_metadata_agent[140349]: defaults
Jan 20 14:50:05 compute-1 ovn_metadata_agent[140349]:     log global
Jan 20 14:50:05 compute-1 ovn_metadata_agent[140349]:     mode http
Jan 20 14:50:05 compute-1 ovn_metadata_agent[140349]:     option httplog
Jan 20 14:50:05 compute-1 ovn_metadata_agent[140349]:     option dontlognull
Jan 20 14:50:05 compute-1 ovn_metadata_agent[140349]:     option http-server-close
Jan 20 14:50:05 compute-1 ovn_metadata_agent[140349]:     option forwardfor
Jan 20 14:50:05 compute-1 ovn_metadata_agent[140349]:     retries                 3
Jan 20 14:50:05 compute-1 ovn_metadata_agent[140349]:     timeout http-request    30s
Jan 20 14:50:05 compute-1 ovn_metadata_agent[140349]:     timeout connect         30s
Jan 20 14:50:05 compute-1 ovn_metadata_agent[140349]:     timeout client          32s
Jan 20 14:50:05 compute-1 ovn_metadata_agent[140349]:     timeout server          32s
Jan 20 14:50:05 compute-1 ovn_metadata_agent[140349]:     timeout http-keep-alive 30s
Jan 20 14:50:05 compute-1 ovn_metadata_agent[140349]: 
Jan 20 14:50:05 compute-1 ovn_metadata_agent[140349]: 
Jan 20 14:50:05 compute-1 ovn_metadata_agent[140349]: listen listener
Jan 20 14:50:05 compute-1 ovn_metadata_agent[140349]:     bind 169.254.169.254:80
Jan 20 14:50:05 compute-1 ovn_metadata_agent[140349]:     server metadata /var/lib/neutron/metadata_proxy
Jan 20 14:50:05 compute-1 ovn_metadata_agent[140349]:     http-request add-header X-OVN-Network-ID 762e1859-4db4-4d9e-b66f-d50316f80df4
Jan 20 14:50:05 compute-1 ovn_metadata_agent[140349]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 20 14:50:05 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:50:05.622 140354 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-762e1859-4db4-4d9e-b66f-d50316f80df4', 'env', 'PROCESS_TAG=haproxy-762e1859-4db4-4d9e-b66f-d50316f80df4', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/762e1859-4db4-4d9e-b66f-d50316f80df4.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 20 14:50:05 compute-1 nova_compute[225855]: 2026-01-20 14:50:05.640 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:50:05 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:50:05 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:50:05 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:50:05.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:50:05 compute-1 nova_compute[225855]: 2026-01-20 14:50:05.830 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:50:05 compute-1 nova_compute[225855]: 2026-01-20 14:50:05.906 225859 DEBUG nova.virt.libvirt.host [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Removed pending event for fdeb13eb-edb4-4bff-aeef-2671ba9d4618 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Jan 20 14:50:05 compute-1 nova_compute[225855]: 2026-01-20 14:50:05.907 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768920605.9062374, fdeb13eb-edb4-4bff-aeef-2671ba9d4618 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:50:05 compute-1 nova_compute[225855]: 2026-01-20 14:50:05.907 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: fdeb13eb-edb4-4bff-aeef-2671ba9d4618] VM Resumed (Lifecycle Event)
Jan 20 14:50:05 compute-1 nova_compute[225855]: 2026-01-20 14:50:05.910 225859 DEBUG nova.compute.manager [None req-c7d66eda-2386-42cf-82ee-49bf287fa76d 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] [instance: fdeb13eb-edb4-4bff-aeef-2671ba9d4618] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 20 14:50:05 compute-1 nova_compute[225855]: 2026-01-20 14:50:05.914 225859 INFO nova.virt.libvirt.driver [-] [instance: fdeb13eb-edb4-4bff-aeef-2671ba9d4618] Instance running successfully.
Jan 20 14:50:05 compute-1 virtqemud[225396]: argument unsupported: QEMU guest agent is not configured
Jan 20 14:50:05 compute-1 nova_compute[225855]: 2026-01-20 14:50:05.916 225859 DEBUG nova.virt.libvirt.guest [None req-c7d66eda-2386-42cf-82ee-49bf287fa76d 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] [instance: fdeb13eb-edb4-4bff-aeef-2671ba9d4618] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200
Jan 20 14:50:05 compute-1 nova_compute[225855]: 2026-01-20 14:50:05.916 225859 DEBUG nova.virt.libvirt.driver [None req-c7d66eda-2386-42cf-82ee-49bf287fa76d 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] [instance: fdeb13eb-edb4-4bff-aeef-2671ba9d4618] finish_migration finished successfully. finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11793
Jan 20 14:50:05 compute-1 ceph-mon[81775]: osdmap e247: 3 total, 3 up, 3 in
Jan 20 14:50:05 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/1676165450' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:50:05 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/1811767033' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:50:06 compute-1 podman[266745]: 2026-01-20 14:50:06.026060245 +0000 UTC m=+0.054529430 container create 5631707afb8ee465811db7586f33c72cad56b7fd83200a0452445ac0954ef0e3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-762e1859-4db4-4d9e-b66f-d50316f80df4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202)
Jan 20 14:50:06 compute-1 systemd[1]: Started libpod-conmon-5631707afb8ee465811db7586f33c72cad56b7fd83200a0452445ac0954ef0e3.scope.
Jan 20 14:50:06 compute-1 systemd[1]: Started libcrun container.
Jan 20 14:50:06 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c88cd4daf2bc9c6dc43439ff4fce93da549f6cbad7034349a6852b667007fb0f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 20 14:50:06 compute-1 podman[266745]: 2026-01-20 14:50:05.998299512 +0000 UTC m=+0.026768727 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 20 14:50:06 compute-1 podman[266745]: 2026-01-20 14:50:06.106468015 +0000 UTC m=+0.134937200 container init 5631707afb8ee465811db7586f33c72cad56b7fd83200a0452445ac0954ef0e3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-762e1859-4db4-4d9e-b66f-d50316f80df4, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 20 14:50:06 compute-1 podman[266745]: 2026-01-20 14:50:06.111868828 +0000 UTC m=+0.140338013 container start 5631707afb8ee465811db7586f33c72cad56b7fd83200a0452445ac0954ef0e3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-762e1859-4db4-4d9e-b66f-d50316f80df4, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 20 14:50:06 compute-1 nova_compute[225855]: 2026-01-20 14:50:06.127 225859 DEBUG nova.compute.manager [req-1ad58b4c-22bd-43c8-9fd1-b5ae04bb3e08 req-92099aa4-02c1-4401-a220-3082698d89ef 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: fdeb13eb-edb4-4bff-aeef-2671ba9d4618] Received event network-vif-plugged-6855cb4f-4178-4447-af36-126ade033206 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:50:06 compute-1 nova_compute[225855]: 2026-01-20 14:50:06.128 225859 DEBUG oslo_concurrency.lockutils [req-1ad58b4c-22bd-43c8-9fd1-b5ae04bb3e08 req-92099aa4-02c1-4401-a220-3082698d89ef 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "fdeb13eb-edb4-4bff-aeef-2671ba9d4618-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:50:06 compute-1 nova_compute[225855]: 2026-01-20 14:50:06.128 225859 DEBUG oslo_concurrency.lockutils [req-1ad58b4c-22bd-43c8-9fd1-b5ae04bb3e08 req-92099aa4-02c1-4401-a220-3082698d89ef 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "fdeb13eb-edb4-4bff-aeef-2671ba9d4618-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:50:06 compute-1 nova_compute[225855]: 2026-01-20 14:50:06.128 225859 DEBUG oslo_concurrency.lockutils [req-1ad58b4c-22bd-43c8-9fd1-b5ae04bb3e08 req-92099aa4-02c1-4401-a220-3082698d89ef 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "fdeb13eb-edb4-4bff-aeef-2671ba9d4618-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:50:06 compute-1 nova_compute[225855]: 2026-01-20 14:50:06.128 225859 DEBUG nova.compute.manager [req-1ad58b4c-22bd-43c8-9fd1-b5ae04bb3e08 req-92099aa4-02c1-4401-a220-3082698d89ef 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: fdeb13eb-edb4-4bff-aeef-2671ba9d4618] No waiting events found dispatching network-vif-plugged-6855cb4f-4178-4447-af36-126ade033206 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:50:06 compute-1 nova_compute[225855]: 2026-01-20 14:50:06.129 225859 WARNING nova.compute.manager [req-1ad58b4c-22bd-43c8-9fd1-b5ae04bb3e08 req-92099aa4-02c1-4401-a220-3082698d89ef 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: fdeb13eb-edb4-4bff-aeef-2671ba9d4618] Received unexpected event network-vif-plugged-6855cb4f-4178-4447-af36-126ade033206 for instance with vm_state active and task_state resize_finish.
Jan 20 14:50:06 compute-1 neutron-haproxy-ovnmeta-762e1859-4db4-4d9e-b66f-d50316f80df4[266761]: [NOTICE]   (266765) : New worker (266767) forked
Jan 20 14:50:06 compute-1 neutron-haproxy-ovnmeta-762e1859-4db4-4d9e-b66f-d50316f80df4[266761]: [NOTICE]   (266765) : Loading success.
Jan 20 14:50:06 compute-1 nova_compute[225855]: 2026-01-20 14:50:06.135 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: fdeb13eb-edb4-4bff-aeef-2671ba9d4618] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:50:06 compute-1 nova_compute[225855]: 2026-01-20 14:50:06.140 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: fdeb13eb-edb4-4bff-aeef-2671ba9d4618] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:50:06 compute-1 nova_compute[225855]: 2026-01-20 14:50:06.183 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: fdeb13eb-edb4-4bff-aeef-2671ba9d4618] During sync_power_state the instance has a pending task (resize_finish). Skip.
Jan 20 14:50:06 compute-1 nova_compute[225855]: 2026-01-20 14:50:06.183 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768920605.909545, fdeb13eb-edb4-4bff-aeef-2671ba9d4618 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:50:06 compute-1 nova_compute[225855]: 2026-01-20 14:50:06.183 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: fdeb13eb-edb4-4bff-aeef-2671ba9d4618] VM Started (Lifecycle Event)
Jan 20 14:50:06 compute-1 nova_compute[225855]: 2026-01-20 14:50:06.238 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: fdeb13eb-edb4-4bff-aeef-2671ba9d4618] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:50:06 compute-1 nova_compute[225855]: 2026-01-20 14:50:06.240 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: fdeb13eb-edb4-4bff-aeef-2671ba9d4618] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:50:06 compute-1 nova_compute[225855]: 2026-01-20 14:50:06.286 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: fdeb13eb-edb4-4bff-aeef-2671ba9d4618] During sync_power_state the instance has a pending task (resize_finish). Skip.
Jan 20 14:50:06 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:50:06 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:50:06 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:50:06.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:50:07 compute-1 ceph-mon[81775]: pgmap v1867: 321 pgs: 2 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 311 active+clean; 246 MiB data, 906 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 26 KiB/s wr, 137 op/s
Jan 20 14:50:07 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e247 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:50:07 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:50:07 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:50:07 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:50:07.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:50:08 compute-1 nova_compute[225855]: 2026-01-20 14:50:08.352 225859 DEBUG nova.compute.manager [req-71321ab6-dee9-4566-bb70-e6c2b135fad2 req-59900e74-bcd3-4594-947d-3a6029833b7b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: fdeb13eb-edb4-4bff-aeef-2671ba9d4618] Received event network-vif-plugged-6855cb4f-4178-4447-af36-126ade033206 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:50:08 compute-1 nova_compute[225855]: 2026-01-20 14:50:08.353 225859 DEBUG oslo_concurrency.lockutils [req-71321ab6-dee9-4566-bb70-e6c2b135fad2 req-59900e74-bcd3-4594-947d-3a6029833b7b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "fdeb13eb-edb4-4bff-aeef-2671ba9d4618-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:50:08 compute-1 nova_compute[225855]: 2026-01-20 14:50:08.353 225859 DEBUG oslo_concurrency.lockutils [req-71321ab6-dee9-4566-bb70-e6c2b135fad2 req-59900e74-bcd3-4594-947d-3a6029833b7b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "fdeb13eb-edb4-4bff-aeef-2671ba9d4618-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:50:08 compute-1 nova_compute[225855]: 2026-01-20 14:50:08.353 225859 DEBUG oslo_concurrency.lockutils [req-71321ab6-dee9-4566-bb70-e6c2b135fad2 req-59900e74-bcd3-4594-947d-3a6029833b7b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "fdeb13eb-edb4-4bff-aeef-2671ba9d4618-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:50:08 compute-1 nova_compute[225855]: 2026-01-20 14:50:08.354 225859 DEBUG nova.compute.manager [req-71321ab6-dee9-4566-bb70-e6c2b135fad2 req-59900e74-bcd3-4594-947d-3a6029833b7b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: fdeb13eb-edb4-4bff-aeef-2671ba9d4618] No waiting events found dispatching network-vif-plugged-6855cb4f-4178-4447-af36-126ade033206 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:50:08 compute-1 nova_compute[225855]: 2026-01-20 14:50:08.354 225859 WARNING nova.compute.manager [req-71321ab6-dee9-4566-bb70-e6c2b135fad2 req-59900e74-bcd3-4594-947d-3a6029833b7b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: fdeb13eb-edb4-4bff-aeef-2671ba9d4618] Received unexpected event network-vif-plugged-6855cb4f-4178-4447-af36-126ade033206 for instance with vm_state resized and task_state None.
Jan 20 14:50:08 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:50:08 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:50:08 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:50:08.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:50:09 compute-1 ceph-mon[81775]: pgmap v1868: 321 pgs: 2 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 311 active+clean; 246 MiB data, 906 MiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 25 KiB/s wr, 181 op/s
Jan 20 14:50:09 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:50:09 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:50:09 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:50:09.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:50:09 compute-1 sudo[266778]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:50:09 compute-1 sudo[266778]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:50:09 compute-1 sudo[266778]: pam_unix(sudo:session): session closed for user root
Jan 20 14:50:09 compute-1 sudo[266803]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:50:09 compute-1 sudo[266803]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:50:09 compute-1 sudo[266803]: pam_unix(sudo:session): session closed for user root
Jan 20 14:50:09 compute-1 nova_compute[225855]: 2026-01-20 14:50:09.833 225859 DEBUG nova.network.neutron [None req-a32fc634-b215-4655-b844-13e732281d75 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] [instance: fdeb13eb-edb4-4bff-aeef-2671ba9d4618] Port 6855cb4f-4178-4447-af36-126ade033206 binding to destination host compute-1.ctlplane.example.com is already ACTIVE migrate_instance_start /usr/lib/python3.9/site-packages/nova/network/neutron.py:3171
Jan 20 14:50:09 compute-1 nova_compute[225855]: 2026-01-20 14:50:09.835 225859 DEBUG oslo_concurrency.lockutils [None req-a32fc634-b215-4655-b844-13e732281d75 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Acquiring lock "refresh_cache-fdeb13eb-edb4-4bff-aeef-2671ba9d4618" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:50:09 compute-1 nova_compute[225855]: 2026-01-20 14:50:09.835 225859 DEBUG oslo_concurrency.lockutils [None req-a32fc634-b215-4655-b844-13e732281d75 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Acquired lock "refresh_cache-fdeb13eb-edb4-4bff-aeef-2671ba9d4618" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:50:09 compute-1 nova_compute[225855]: 2026-01-20 14:50:09.835 225859 DEBUG nova.network.neutron [None req-a32fc634-b215-4655-b844-13e732281d75 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] [instance: fdeb13eb-edb4-4bff-aeef-2671ba9d4618] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 14:50:10 compute-1 nova_compute[225855]: 2026-01-20 14:50:10.110 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:50:10 compute-1 ovn_controller[130490]: 2026-01-20T14:50:10Z|00053|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:19:a9:8c 10.100.0.6
Jan 20 14:50:10 compute-1 ovn_controller[130490]: 2026-01-20T14:50:10Z|00054|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:19:a9:8c 10.100.0.6
Jan 20 14:50:10 compute-1 nova_compute[225855]: 2026-01-20 14:50:10.835 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:50:10 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:50:10 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:50:10 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:50:10.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:50:11 compute-1 ceph-mon[81775]: pgmap v1869: 321 pgs: 2 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 311 active+clean; 246 MiB data, 906 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 23 KiB/s wr, 123 op/s
Jan 20 14:50:11 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/3567694707' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:50:11 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:50:11 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:50:11 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:50:11.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:50:12 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e247 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:50:12 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:50:12 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:50:12 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:50:12.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:50:13 compute-1 ceph-mon[81775]: pgmap v1870: 321 pgs: 321 active+clean; 246 MiB data, 907 MiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 17 KiB/s wr, 193 op/s
Jan 20 14:50:13 compute-1 nova_compute[225855]: 2026-01-20 14:50:13.303 225859 DEBUG nova.network.neutron [None req-a32fc634-b215-4655-b844-13e732281d75 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] [instance: fdeb13eb-edb4-4bff-aeef-2671ba9d4618] Updating instance_info_cache with network_info: [{"id": "6855cb4f-4178-4447-af36-126ade033206", "address": "fa:16:3e:4f:3f:20", "network": {"id": "762e1859-4db4-4d9e-b66f-d50316f80df4", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1917526237-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.180", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c5f03d46c0c4162a3b2f1530850bb6c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6855cb4f-41", "ovs_interfaceid": "6855cb4f-4178-4447-af36-126ade033206", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:50:13 compute-1 nova_compute[225855]: 2026-01-20 14:50:13.334 225859 DEBUG oslo_concurrency.lockutils [None req-a32fc634-b215-4655-b844-13e732281d75 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Releasing lock "refresh_cache-fdeb13eb-edb4-4bff-aeef-2671ba9d4618" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:50:13 compute-1 kernel: tap6855cb4f-41 (unregistering): left promiscuous mode
Jan 20 14:50:13 compute-1 NetworkManager[49104]: <info>  [1768920613.4020] device (tap6855cb4f-41): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 20 14:50:13 compute-1 ovn_controller[130490]: 2026-01-20T14:50:13Z|00407|binding|INFO|Releasing lport 6855cb4f-4178-4447-af36-126ade033206 from this chassis (sb_readonly=0)
Jan 20 14:50:13 compute-1 ovn_controller[130490]: 2026-01-20T14:50:13Z|00408|binding|INFO|Setting lport 6855cb4f-4178-4447-af36-126ade033206 down in Southbound
Jan 20 14:50:13 compute-1 nova_compute[225855]: 2026-01-20 14:50:13.415 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:50:13 compute-1 ovn_controller[130490]: 2026-01-20T14:50:13Z|00409|binding|INFO|Removing iface tap6855cb4f-41 ovn-installed in OVS
Jan 20 14:50:13 compute-1 nova_compute[225855]: 2026-01-20 14:50:13.417 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:50:13 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:50:13.423 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4f:3f:20 10.100.0.12'], port_security=['fa:16:3e:4f:3f:20 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'fdeb13eb-edb4-4bff-aeef-2671ba9d4618', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-762e1859-4db4-4d9e-b66f-d50316f80df4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1c5f03d46c0c4162a3b2f1530850bb6c', 'neutron:revision_number': '6', 'neutron:security_group_ids': '80535eda-fa59-4edc-8e3d-9bfea6517730', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.180', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2474a8ca-bb96-4cae-9133-23419b81a9fc, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], logical_port=6855cb4f-4178-4447-af36-126ade033206) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:50:13 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:50:13.426 140354 INFO neutron.agent.ovn.metadata.agent [-] Port 6855cb4f-4178-4447-af36-126ade033206 in datapath 762e1859-4db4-4d9e-b66f-d50316f80df4 unbound from our chassis
Jan 20 14:50:13 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:50:13.430 140354 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 762e1859-4db4-4d9e-b66f-d50316f80df4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 20 14:50:13 compute-1 nova_compute[225855]: 2026-01-20 14:50:13.431 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:50:13 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:50:13.431 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[1921d084-8e37-4406-8e25-b8ee8424b630]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:50:13 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:50:13.432 140354 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-762e1859-4db4-4d9e-b66f-d50316f80df4 namespace which is not needed anymore
Jan 20 14:50:13 compute-1 systemd[1]: machine-qemu\x2d47\x2dinstance\x2d00000069.scope: Deactivated successfully.
Jan 20 14:50:13 compute-1 systemd[1]: machine-qemu\x2d47\x2dinstance\x2d00000069.scope: Consumed 8.272s CPU time.
Jan 20 14:50:13 compute-1 systemd-machined[194361]: Machine qemu-47-instance-00000069 terminated.
Jan 20 14:50:13 compute-1 neutron-haproxy-ovnmeta-762e1859-4db4-4d9e-b66f-d50316f80df4[266761]: [NOTICE]   (266765) : haproxy version is 2.8.14-c23fe91
Jan 20 14:50:13 compute-1 neutron-haproxy-ovnmeta-762e1859-4db4-4d9e-b66f-d50316f80df4[266761]: [NOTICE]   (266765) : path to executable is /usr/sbin/haproxy
Jan 20 14:50:13 compute-1 neutron-haproxy-ovnmeta-762e1859-4db4-4d9e-b66f-d50316f80df4[266761]: [WARNING]  (266765) : Exiting Master process...
Jan 20 14:50:13 compute-1 neutron-haproxy-ovnmeta-762e1859-4db4-4d9e-b66f-d50316f80df4[266761]: [ALERT]    (266765) : Current worker (266767) exited with code 143 (Terminated)
Jan 20 14:50:13 compute-1 neutron-haproxy-ovnmeta-762e1859-4db4-4d9e-b66f-d50316f80df4[266761]: [WARNING]  (266765) : All workers exited. Exiting... (0)
Jan 20 14:50:13 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 20 14:50:13 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/276800458' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 14:50:13 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 20 14:50:13 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/276800458' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 14:50:13 compute-1 systemd[1]: libpod-5631707afb8ee465811db7586f33c72cad56b7fd83200a0452445ac0954ef0e3.scope: Deactivated successfully.
Jan 20 14:50:13 compute-1 podman[266854]: 2026-01-20 14:50:13.593890854 +0000 UTC m=+0.065901301 container died 5631707afb8ee465811db7586f33c72cad56b7fd83200a0452445ac0954ef0e3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-762e1859-4db4-4d9e-b66f-d50316f80df4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 20 14:50:13 compute-1 nova_compute[225855]: 2026-01-20 14:50:13.597 225859 INFO nova.virt.libvirt.driver [-] [instance: fdeb13eb-edb4-4bff-aeef-2671ba9d4618] Instance destroyed successfully.
Jan 20 14:50:13 compute-1 nova_compute[225855]: 2026-01-20 14:50:13.598 225859 DEBUG nova.objects.instance [None req-a32fc634-b215-4655-b844-13e732281d75 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Lazy-loading 'resources' on Instance uuid fdeb13eb-edb4-4bff-aeef-2671ba9d4618 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:50:13 compute-1 nova_compute[225855]: 2026-01-20 14:50:13.613 225859 DEBUG nova.virt.libvirt.vif [None req-a32fc634-b215-4655-b844-13e732281d75 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T14:49:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-2012792656',display_name='tempest-ServerActionsTestJSON-server-2012792656',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-2012792656',id=105,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLgEzx5mLsSqRL7L9WKOzM+WdeJ40U103wY9H3VMZ41G/sN5tQtQSt9lXWKTyc6pt00bfKD0E9GPugNMpy+dzSSpK23o3CgadkzAfAvjQCeCPOSp3fX13FGApomGd1HRCQ==',key_name='tempest-keypair-1602241722',keypairs=<?>,launch_index=0,launched_at=2026-01-20T14:50:06Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=<?>,new_flavor=Flavor(2),node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='1c5f03d46c0c4162a3b2f1530850bb6c',ramdisk_id='',reservation_id='r-c8be97ji',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServerActionsTestJSON-1020442335',owner_user_name='tempest-ServerActionsTestJSON-1020442335-project-member'},tags=<?>,task_state='resize_reverting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T14:50:06Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='3e9278fdb9e645b7938f3edb20c4d3cf',uuid=fdeb13eb-edb4-4bff-aeef-2671ba9d4618,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='resized') vif={"id": "6855cb4f-4178-4447-af36-126ade033206", "address": "fa:16:3e:4f:3f:20", "network": {"id": "762e1859-4db4-4d9e-b66f-d50316f80df4", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1917526237-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.180", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c5f03d46c0c4162a3b2f1530850bb6c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6855cb4f-41", "ovs_interfaceid": "6855cb4f-4178-4447-af36-126ade033206", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 20 14:50:13 compute-1 nova_compute[225855]: 2026-01-20 14:50:13.615 225859 DEBUG nova.network.os_vif_util [None req-a32fc634-b215-4655-b844-13e732281d75 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Converting VIF {"id": "6855cb4f-4178-4447-af36-126ade033206", "address": "fa:16:3e:4f:3f:20", "network": {"id": "762e1859-4db4-4d9e-b66f-d50316f80df4", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1917526237-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.180", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c5f03d46c0c4162a3b2f1530850bb6c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6855cb4f-41", "ovs_interfaceid": "6855cb4f-4178-4447-af36-126ade033206", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:50:13 compute-1 nova_compute[225855]: 2026-01-20 14:50:13.616 225859 DEBUG nova.network.os_vif_util [None req-a32fc634-b215-4655-b844-13e732281d75 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:4f:3f:20,bridge_name='br-int',has_traffic_filtering=True,id=6855cb4f-4178-4447-af36-126ade033206,network=Network(762e1859-4db4-4d9e-b66f-d50316f80df4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6855cb4f-41') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:50:13 compute-1 nova_compute[225855]: 2026-01-20 14:50:13.616 225859 DEBUG os_vif [None req-a32fc634-b215-4655-b844-13e732281d75 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:4f:3f:20,bridge_name='br-int',has_traffic_filtering=True,id=6855cb4f-4178-4447-af36-126ade033206,network=Network(762e1859-4db4-4d9e-b66f-d50316f80df4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6855cb4f-41') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 20 14:50:13 compute-1 nova_compute[225855]: 2026-01-20 14:50:13.618 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:50:13 compute-1 nova_compute[225855]: 2026-01-20 14:50:13.618 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6855cb4f-41, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:50:13 compute-1 nova_compute[225855]: 2026-01-20 14:50:13.620 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:50:13 compute-1 nova_compute[225855]: 2026-01-20 14:50:13.621 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:50:13 compute-1 nova_compute[225855]: 2026-01-20 14:50:13.624 225859 INFO os_vif [None req-a32fc634-b215-4655-b844-13e732281d75 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:4f:3f:20,bridge_name='br-int',has_traffic_filtering=True,id=6855cb4f-4178-4447-af36-126ade033206,network=Network(762e1859-4db4-4d9e-b66f-d50316f80df4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6855cb4f-41')
Jan 20 14:50:13 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5631707afb8ee465811db7586f33c72cad56b7fd83200a0452445ac0954ef0e3-userdata-shm.mount: Deactivated successfully.
Jan 20 14:50:13 compute-1 systemd[1]: var-lib-containers-storage-overlay-c88cd4daf2bc9c6dc43439ff4fce93da549f6cbad7034349a6852b667007fb0f-merged.mount: Deactivated successfully.
Jan 20 14:50:13 compute-1 nova_compute[225855]: 2026-01-20 14:50:13.631 225859 DEBUG oslo_concurrency.lockutils [None req-a32fc634-b215-4655-b844-13e732281d75 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_dest" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:50:13 compute-1 nova_compute[225855]: 2026-01-20 14:50:13.632 225859 DEBUG oslo_concurrency.lockutils [None req-a32fc634-b215-4655-b844-13e732281d75 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_dest" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:50:13 compute-1 podman[266854]: 2026-01-20 14:50:13.63730409 +0000 UTC m=+0.109314537 container cleanup 5631707afb8ee465811db7586f33c72cad56b7fd83200a0452445ac0954ef0e3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-762e1859-4db4-4d9e-b66f-d50316f80df4, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team)
Jan 20 14:50:13 compute-1 systemd[1]: libpod-conmon-5631707afb8ee465811db7586f33c72cad56b7fd83200a0452445ac0954ef0e3.scope: Deactivated successfully.
Jan 20 14:50:13 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:50:13 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:50:13 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:50:13.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:50:13 compute-1 nova_compute[225855]: 2026-01-20 14:50:13.653 225859 DEBUG nova.objects.instance [None req-a32fc634-b215-4655-b844-13e732281d75 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Lazy-loading 'migration_context' on Instance uuid fdeb13eb-edb4-4bff-aeef-2671ba9d4618 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:50:13 compute-1 podman[266893]: 2026-01-20 14:50:13.700985028 +0000 UTC m=+0.040974058 container remove 5631707afb8ee465811db7586f33c72cad56b7fd83200a0452445ac0954ef0e3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-762e1859-4db4-4d9e-b66f-d50316f80df4, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 20 14:50:13 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:50:13.706 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[fe89cc8a-99b0-4a2a-9d91-9e09a7fc6533]: (4, ('Tue Jan 20 02:50:13 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-762e1859-4db4-4d9e-b66f-d50316f80df4 (5631707afb8ee465811db7586f33c72cad56b7fd83200a0452445ac0954ef0e3)\n5631707afb8ee465811db7586f33c72cad56b7fd83200a0452445ac0954ef0e3\nTue Jan 20 02:50:13 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-762e1859-4db4-4d9e-b66f-d50316f80df4 (5631707afb8ee465811db7586f33c72cad56b7fd83200a0452445ac0954ef0e3)\n5631707afb8ee465811db7586f33c72cad56b7fd83200a0452445ac0954ef0e3\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:50:13 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:50:13.707 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[e0b497b8-4e1d-456c-8b7e-18c4004103de]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:50:13 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:50:13.708 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap762e1859-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:50:13 compute-1 kernel: tap762e1859-40: left promiscuous mode
Jan 20 14:50:13 compute-1 nova_compute[225855]: 2026-01-20 14:50:13.710 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:50:13 compute-1 nova_compute[225855]: 2026-01-20 14:50:13.724 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:50:13 compute-1 nova_compute[225855]: 2026-01-20 14:50:13.725 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:50:13 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:50:13.727 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[ebc013c2-2196-482d-a060-cc7b57f62bff]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:50:13 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:50:13.741 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[9795bc0f-e7a3-462e-8a8f-b994bbc6f820]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:50:13 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:50:13.742 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[206aa3f4-42c7-427b-afe0-a2ac90cf94eb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:50:13 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:50:13.757 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[63567e4e-3e9a-40db-9ba3-0089b3806d1d]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 564430, 'reachable_time': 38669, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 266908, 'error': None, 'target': 'ovnmeta-762e1859-4db4-4d9e-b66f-d50316f80df4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:50:13 compute-1 systemd[1]: run-netns-ovnmeta\x2d762e1859\x2d4db4\x2d4d9e\x2db66f\x2dd50316f80df4.mount: Deactivated successfully.
Jan 20 14:50:13 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:50:13.760 140466 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-762e1859-4db4-4d9e-b66f-d50316f80df4 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 20 14:50:13 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:50:13.760 140466 DEBUG oslo.privsep.daemon [-] privsep: reply[adea26ad-2c4b-4f9f-88c0-62d7ae8b79bc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:50:13 compute-1 nova_compute[225855]: 2026-01-20 14:50:13.822 225859 DEBUG oslo_concurrency.processutils [None req-a32fc634-b215-4655-b844-13e732281d75 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:50:14 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/276800458' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 14:50:14 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/276800458' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 14:50:14 compute-1 nova_compute[225855]: 2026-01-20 14:50:14.266 225859 DEBUG oslo_concurrency.processutils [None req-a32fc634-b215-4655-b844-13e732281d75 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:50:14 compute-1 nova_compute[225855]: 2026-01-20 14:50:14.273 225859 DEBUG nova.compute.provider_tree [None req-a32fc634-b215-4655-b844-13e732281d75 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:50:14 compute-1 nova_compute[225855]: 2026-01-20 14:50:14.327 225859 DEBUG nova.scheduler.client.report [None req-a32fc634-b215-4655-b844-13e732281d75 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:50:14 compute-1 nova_compute[225855]: 2026-01-20 14:50:14.401 225859 DEBUG oslo_concurrency.lockutils [None req-a32fc634-b215-4655-b844-13e732281d75 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_dest" :: held 0.769s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:50:14 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e248 e248: 3 total, 3 up, 3 in
Jan 20 14:50:14 compute-1 nova_compute[225855]: 2026-01-20 14:50:14.585 225859 INFO nova.compute.manager [None req-a32fc634-b215-4655-b844-13e732281d75 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] [instance: fdeb13eb-edb4-4bff-aeef-2671ba9d4618] Swapping old allocation on dict_keys(['bbb02880-a710-4ac1-8b2c-5c09765848d1']) held by migration d2604dec-40c2-43fa-9566-b7cf6ab6e7a7 for instance
Jan 20 14:50:14 compute-1 nova_compute[225855]: 2026-01-20 14:50:14.662 225859 DEBUG nova.scheduler.client.report [None req-a32fc634-b215-4655-b844-13e732281d75 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Overwriting current allocation {'allocations': {'bbb02880-a710-4ac1-8b2c-5c09765848d1': {'resources': {'DISK_GB': 1, 'MEMORY_MB': 192, 'VCPU': 1}, 'generation': 58}}, 'project_id': '1c5f03d46c0c4162a3b2f1530850bb6c', 'user_id': '3e9278fdb9e645b7938f3edb20c4d3cf', 'consumer_generation': 1} on consumer fdeb13eb-edb4-4bff-aeef-2671ba9d4618 move_allocations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:2018
Jan 20 14:50:14 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:50:14 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:50:14 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:50:14.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:50:14 compute-1 nova_compute[225855]: 2026-01-20 14:50:14.976 225859 DEBUG oslo_concurrency.lockutils [None req-f42363f2-02ba-471b-bd6f-2f81d785ebce a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Acquiring lock "52477e64-7989-4aa2-88e1-31600bfae2ef" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:50:14 compute-1 nova_compute[225855]: 2026-01-20 14:50:14.977 225859 DEBUG oslo_concurrency.lockutils [None req-f42363f2-02ba-471b-bd6f-2f81d785ebce a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Lock "52477e64-7989-4aa2-88e1-31600bfae2ef" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:50:14 compute-1 nova_compute[225855]: 2026-01-20 14:50:14.978 225859 DEBUG oslo_concurrency.lockutils [None req-f42363f2-02ba-471b-bd6f-2f81d785ebce a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Acquiring lock "52477e64-7989-4aa2-88e1-31600bfae2ef-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:50:14 compute-1 nova_compute[225855]: 2026-01-20 14:50:14.979 225859 DEBUG oslo_concurrency.lockutils [None req-f42363f2-02ba-471b-bd6f-2f81d785ebce a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Lock "52477e64-7989-4aa2-88e1-31600bfae2ef-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:50:14 compute-1 nova_compute[225855]: 2026-01-20 14:50:14.979 225859 DEBUG oslo_concurrency.lockutils [None req-f42363f2-02ba-471b-bd6f-2f81d785ebce a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Lock "52477e64-7989-4aa2-88e1-31600bfae2ef-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:50:14 compute-1 nova_compute[225855]: 2026-01-20 14:50:14.981 225859 INFO nova.compute.manager [None req-f42363f2-02ba-471b-bd6f-2f81d785ebce a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: 52477e64-7989-4aa2-88e1-31600bfae2ef] Terminating instance
Jan 20 14:50:14 compute-1 nova_compute[225855]: 2026-01-20 14:50:14.983 225859 DEBUG nova.compute.manager [None req-f42363f2-02ba-471b-bd6f-2f81d785ebce a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: 52477e64-7989-4aa2-88e1-31600bfae2ef] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 20 14:50:15 compute-1 kernel: tap8286e975-4b (unregistering): left promiscuous mode
Jan 20 14:50:15 compute-1 NetworkManager[49104]: <info>  [1768920615.0507] device (tap8286e975-4b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 20 14:50:15 compute-1 ovn_controller[130490]: 2026-01-20T14:50:15Z|00410|binding|INFO|Releasing lport 8286e975-4b57-4b5a-9018-82187a854a2d from this chassis (sb_readonly=0)
Jan 20 14:50:15 compute-1 ovn_controller[130490]: 2026-01-20T14:50:15Z|00411|binding|INFO|Setting lport 8286e975-4b57-4b5a-9018-82187a854a2d down in Southbound
Jan 20 14:50:15 compute-1 ovn_controller[130490]: 2026-01-20T14:50:15Z|00412|binding|INFO|Removing iface tap8286e975-4b ovn-installed in OVS
Jan 20 14:50:15 compute-1 nova_compute[225855]: 2026-01-20 14:50:15.059 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:50:15 compute-1 nova_compute[225855]: 2026-01-20 14:50:15.061 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:50:15 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:50:15.068 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:19:a9:8c 10.100.0.6'], port_security=['fa:16:3e:19:a9:8c 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '52477e64-7989-4aa2-88e1-31600bfae2ef', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3379e2b3-ffb2-4391-969b-c9dc51bfbe25', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'acb30fbc0e3749e390d7f867060b5a2a', 'neutron:revision_number': '8', 'neutron:security_group_ids': '19fab802-7db8-4c89-8f8e-8dcfc14d4627', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a0e287ba-f88b-46f5-bb7f-3cc2a74be88e, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], logical_port=8286e975-4b57-4b5a-9018-82187a854a2d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:50:15 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:50:15.069 140354 INFO neutron.agent.ovn.metadata.agent [-] Port 8286e975-4b57-4b5a-9018-82187a854a2d in datapath 3379e2b3-ffb2-4391-969b-c9dc51bfbe25 unbound from our chassis
Jan 20 14:50:15 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:50:15.070 140354 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3379e2b3-ffb2-4391-969b-c9dc51bfbe25, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 20 14:50:15 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:50:15.071 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[f66de3dd-252e-4893-a181-5c0963858ab1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:50:15 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:50:15.071 140354 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-3379e2b3-ffb2-4391-969b-c9dc51bfbe25 namespace which is not needed anymore
Jan 20 14:50:15 compute-1 nova_compute[225855]: 2026-01-20 14:50:15.079 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:50:15 compute-1 nova_compute[225855]: 2026-01-20 14:50:15.106 225859 DEBUG oslo_concurrency.lockutils [None req-a32fc634-b215-4655-b844-13e732281d75 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Acquiring lock "refresh_cache-fdeb13eb-edb4-4bff-aeef-2671ba9d4618" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:50:15 compute-1 nova_compute[225855]: 2026-01-20 14:50:15.106 225859 DEBUG oslo_concurrency.lockutils [None req-a32fc634-b215-4655-b844-13e732281d75 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Acquired lock "refresh_cache-fdeb13eb-edb4-4bff-aeef-2671ba9d4618" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:50:15 compute-1 nova_compute[225855]: 2026-01-20 14:50:15.107 225859 DEBUG nova.network.neutron [None req-a32fc634-b215-4655-b844-13e732281d75 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] [instance: fdeb13eb-edb4-4bff-aeef-2671ba9d4618] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 14:50:15 compute-1 systemd[1]: machine-qemu\x2d46\x2dinstance\x2d0000006a.scope: Deactivated successfully.
Jan 20 14:50:15 compute-1 systemd[1]: machine-qemu\x2d46\x2dinstance\x2d0000006a.scope: Consumed 12.746s CPU time.
Jan 20 14:50:15 compute-1 systemd-machined[194361]: Machine qemu-46-instance-0000006a terminated.
Jan 20 14:50:15 compute-1 ceph-mon[81775]: pgmap v1871: 321 pgs: 321 active+clean; 246 MiB data, 907 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 14 KiB/s wr, 165 op/s
Jan 20 14:50:15 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/2969810977' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:50:15 compute-1 ceph-mon[81775]: osdmap e248: 3 total, 3 up, 3 in
Jan 20 14:50:15 compute-1 neutron-haproxy-ovnmeta-3379e2b3-ffb2-4391-969b-c9dc51bfbe25[266341]: [NOTICE]   (266345) : haproxy version is 2.8.14-c23fe91
Jan 20 14:50:15 compute-1 neutron-haproxy-ovnmeta-3379e2b3-ffb2-4391-969b-c9dc51bfbe25[266341]: [NOTICE]   (266345) : path to executable is /usr/sbin/haproxy
Jan 20 14:50:15 compute-1 neutron-haproxy-ovnmeta-3379e2b3-ffb2-4391-969b-c9dc51bfbe25[266341]: [WARNING]  (266345) : Exiting Master process...
Jan 20 14:50:15 compute-1 neutron-haproxy-ovnmeta-3379e2b3-ffb2-4391-969b-c9dc51bfbe25[266341]: [ALERT]    (266345) : Current worker (266347) exited with code 143 (Terminated)
Jan 20 14:50:15 compute-1 neutron-haproxy-ovnmeta-3379e2b3-ffb2-4391-969b-c9dc51bfbe25[266341]: [WARNING]  (266345) : All workers exited. Exiting... (0)
Jan 20 14:50:15 compute-1 NetworkManager[49104]: <info>  [1768920615.2026] manager: (tap8286e975-4b): new Tun device (/org/freedesktop/NetworkManager/Devices/175)
Jan 20 14:50:15 compute-1 systemd[1]: libpod-7e472ad75c7c5324371d8342e82f1f2120ae128110e20722e399ec6b1ff46d49.scope: Deactivated successfully.
Jan 20 14:50:15 compute-1 podman[266953]: 2026-01-20 14:50:15.210526204 +0000 UTC m=+0.051355640 container died 7e472ad75c7c5324371d8342e82f1f2120ae128110e20722e399ec6b1ff46d49 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3379e2b3-ffb2-4391-969b-c9dc51bfbe25, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:50:15 compute-1 nova_compute[225855]: 2026-01-20 14:50:15.219 225859 INFO nova.virt.libvirt.driver [-] [instance: 52477e64-7989-4aa2-88e1-31600bfae2ef] Instance destroyed successfully.
Jan 20 14:50:15 compute-1 nova_compute[225855]: 2026-01-20 14:50:15.219 225859 DEBUG nova.objects.instance [None req-f42363f2-02ba-471b-bd6f-2f81d785ebce a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Lazy-loading 'resources' on Instance uuid 52477e64-7989-4aa2-88e1-31600bfae2ef obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:50:15 compute-1 nova_compute[225855]: 2026-01-20 14:50:15.239 225859 DEBUG nova.virt.libvirt.vif [None req-f42363f2-02ba-471b-bd6f-2f81d785ebce a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T14:49:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1663251192',display_name='tempest-ServerDiskConfigTestJSON-server-1663251192',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1663251192',id=106,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-20T14:49:57Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='acb30fbc0e3749e390d7f867060b5a2a',ramdisk_id='',reservation_id='r-nykd0j3m',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio'
,image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-1806346246',owner_user_name='tempest-ServerDiskConfigTestJSON-1806346246-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T14:50:05Z,user_data=None,user_id='a1bd93d04cc4468abe1d5c61f5144191',uuid=52477e64-7989-4aa2-88e1-31600bfae2ef,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8286e975-4b57-4b5a-9018-82187a854a2d", "address": "fa:16:3e:19:a9:8c", "network": {"id": "3379e2b3-ffb2-4391-969b-c9dc51bfbe25", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1112843240-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "acb30fbc0e3749e390d7f867060b5a2a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8286e975-4b", "ovs_interfaceid": "8286e975-4b57-4b5a-9018-82187a854a2d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 20 14:50:15 compute-1 nova_compute[225855]: 2026-01-20 14:50:15.239 225859 DEBUG nova.network.os_vif_util [None req-f42363f2-02ba-471b-bd6f-2f81d785ebce a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Converting VIF {"id": "8286e975-4b57-4b5a-9018-82187a854a2d", "address": "fa:16:3e:19:a9:8c", "network": {"id": "3379e2b3-ffb2-4391-969b-c9dc51bfbe25", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1112843240-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "acb30fbc0e3749e390d7f867060b5a2a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8286e975-4b", "ovs_interfaceid": "8286e975-4b57-4b5a-9018-82187a854a2d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:50:15 compute-1 nova_compute[225855]: 2026-01-20 14:50:15.241 225859 DEBUG nova.network.os_vif_util [None req-f42363f2-02ba-471b-bd6f-2f81d785ebce a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:19:a9:8c,bridge_name='br-int',has_traffic_filtering=True,id=8286e975-4b57-4b5a-9018-82187a854a2d,network=Network(3379e2b3-ffb2-4391-969b-c9dc51bfbe25),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8286e975-4b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:50:15 compute-1 nova_compute[225855]: 2026-01-20 14:50:15.241 225859 DEBUG os_vif [None req-f42363f2-02ba-471b-bd6f-2f81d785ebce a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:19:a9:8c,bridge_name='br-int',has_traffic_filtering=True,id=8286e975-4b57-4b5a-9018-82187a854a2d,network=Network(3379e2b3-ffb2-4391-969b-c9dc51bfbe25),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8286e975-4b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 20 14:50:15 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7e472ad75c7c5324371d8342e82f1f2120ae128110e20722e399ec6b1ff46d49-userdata-shm.mount: Deactivated successfully.
Jan 20 14:50:15 compute-1 systemd[1]: var-lib-containers-storage-overlay-48e67e17409de7385c0e5c04d479fefc117c5bbc0e8751e2852e2bf32fe6c3ee-merged.mount: Deactivated successfully.
Jan 20 14:50:15 compute-1 nova_compute[225855]: 2026-01-20 14:50:15.247 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:50:15 compute-1 nova_compute[225855]: 2026-01-20 14:50:15.247 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8286e975-4b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:50:15 compute-1 nova_compute[225855]: 2026-01-20 14:50:15.249 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:50:15 compute-1 nova_compute[225855]: 2026-01-20 14:50:15.252 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 14:50:15 compute-1 nova_compute[225855]: 2026-01-20 14:50:15.253 225859 INFO os_vif [None req-f42363f2-02ba-471b-bd6f-2f81d785ebce a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:19:a9:8c,bridge_name='br-int',has_traffic_filtering=True,id=8286e975-4b57-4b5a-9018-82187a854a2d,network=Network(3379e2b3-ffb2-4391-969b-c9dc51bfbe25),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8286e975-4b')
Jan 20 14:50:15 compute-1 podman[266953]: 2026-01-20 14:50:15.254691081 +0000 UTC m=+0.095520517 container cleanup 7e472ad75c7c5324371d8342e82f1f2120ae128110e20722e399ec6b1ff46d49 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3379e2b3-ffb2-4391-969b-c9dc51bfbe25, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 20 14:50:15 compute-1 systemd[1]: libpod-conmon-7e472ad75c7c5324371d8342e82f1f2120ae128110e20722e399ec6b1ff46d49.scope: Deactivated successfully.
Jan 20 14:50:15 compute-1 podman[267008]: 2026-01-20 14:50:15.322174026 +0000 UTC m=+0.042735657 container remove 7e472ad75c7c5324371d8342e82f1f2120ae128110e20722e399ec6b1ff46d49 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3379e2b3-ffb2-4391-969b-c9dc51bfbe25, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3)
Jan 20 14:50:15 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:50:15.329 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[f669b8f1-3a68-4cde-ae4c-5e9677d7f256]: (4, ('Tue Jan 20 02:50:15 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-3379e2b3-ffb2-4391-969b-c9dc51bfbe25 (7e472ad75c7c5324371d8342e82f1f2120ae128110e20722e399ec6b1ff46d49)\n7e472ad75c7c5324371d8342e82f1f2120ae128110e20722e399ec6b1ff46d49\nTue Jan 20 02:50:15 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-3379e2b3-ffb2-4391-969b-c9dc51bfbe25 (7e472ad75c7c5324371d8342e82f1f2120ae128110e20722e399ec6b1ff46d49)\n7e472ad75c7c5324371d8342e82f1f2120ae128110e20722e399ec6b1ff46d49\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:50:15 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:50:15.331 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[c5adb3b8-bd5b-49a0-8b1b-1d146f2b5a0f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:50:15 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:50:15.331 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3379e2b3-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:50:15 compute-1 nova_compute[225855]: 2026-01-20 14:50:15.333 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:50:15 compute-1 kernel: tap3379e2b3-f0: left promiscuous mode
Jan 20 14:50:15 compute-1 nova_compute[225855]: 2026-01-20 14:50:15.354 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:50:15 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:50:15.356 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[aa21d3b8-e12e-46e6-b92a-b3082ab8f330]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:50:15 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:50:15.379 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[69f7a621-8489-4178-8a86-3f70246cfc66]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:50:15 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:50:15.380 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[0d929afa-0ad9-4487-8465-30a5f2d44daf]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:50:15 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:50:15.398 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[aac0478d-257f-4100-a91a-510d828e0a8f]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 563550, 'reachable_time': 38403, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 267027, 'error': None, 'target': 'ovnmeta-3379e2b3-ffb2-4391-969b-c9dc51bfbe25', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:50:15 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:50:15.400 140466 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-3379e2b3-ffb2-4391-969b-c9dc51bfbe25 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 20 14:50:15 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:50:15.400 140466 DEBUG oslo.privsep.daemon [-] privsep: reply[72c5e031-b8ec-4870-a875-66251c30c6ff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:50:15 compute-1 systemd[1]: run-netns-ovnmeta\x2d3379e2b3\x2dffb2\x2d4391\x2d969b\x2dc9dc51bfbe25.mount: Deactivated successfully.
Jan 20 14:50:15 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:50:15 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:50:15 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:50:15.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:50:15 compute-1 nova_compute[225855]: 2026-01-20 14:50:15.667 225859 INFO nova.virt.libvirt.driver [None req-f42363f2-02ba-471b-bd6f-2f81d785ebce a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: 52477e64-7989-4aa2-88e1-31600bfae2ef] Deleting instance files /var/lib/nova/instances/52477e64-7989-4aa2-88e1-31600bfae2ef_del
Jan 20 14:50:15 compute-1 nova_compute[225855]: 2026-01-20 14:50:15.667 225859 INFO nova.virt.libvirt.driver [None req-f42363f2-02ba-471b-bd6f-2f81d785ebce a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: 52477e64-7989-4aa2-88e1-31600bfae2ef] Deletion of /var/lib/nova/instances/52477e64-7989-4aa2-88e1-31600bfae2ef_del complete
Jan 20 14:50:15 compute-1 nova_compute[225855]: 2026-01-20 14:50:15.739 225859 INFO nova.compute.manager [None req-f42363f2-02ba-471b-bd6f-2f81d785ebce a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: 52477e64-7989-4aa2-88e1-31600bfae2ef] Took 0.75 seconds to destroy the instance on the hypervisor.
Jan 20 14:50:15 compute-1 nova_compute[225855]: 2026-01-20 14:50:15.739 225859 DEBUG oslo.service.loopingcall [None req-f42363f2-02ba-471b-bd6f-2f81d785ebce a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 20 14:50:15 compute-1 nova_compute[225855]: 2026-01-20 14:50:15.740 225859 DEBUG nova.compute.manager [-] [instance: 52477e64-7989-4aa2-88e1-31600bfae2ef] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 20 14:50:15 compute-1 nova_compute[225855]: 2026-01-20 14:50:15.740 225859 DEBUG nova.network.neutron [-] [instance: 52477e64-7989-4aa2-88e1-31600bfae2ef] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 20 14:50:15 compute-1 nova_compute[225855]: 2026-01-20 14:50:15.835 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:50:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:50:16.410 140354 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:50:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:50:16.411 140354 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:50:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:50:16.411 140354 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:50:16 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:50:16 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:50:16 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:50:16.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:50:16 compute-1 nova_compute[225855]: 2026-01-20 14:50:16.880 225859 DEBUG nova.compute.manager [req-5461f9f8-25cf-4b91-aaa7-639b4f1ccca6 req-66c129e6-9d1d-419f-953d-b0be27ade3b9 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 52477e64-7989-4aa2-88e1-31600bfae2ef] Received event network-vif-unplugged-8286e975-4b57-4b5a-9018-82187a854a2d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:50:16 compute-1 nova_compute[225855]: 2026-01-20 14:50:16.881 225859 DEBUG oslo_concurrency.lockutils [req-5461f9f8-25cf-4b91-aaa7-639b4f1ccca6 req-66c129e6-9d1d-419f-953d-b0be27ade3b9 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "52477e64-7989-4aa2-88e1-31600bfae2ef-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:50:16 compute-1 nova_compute[225855]: 2026-01-20 14:50:16.881 225859 DEBUG oslo_concurrency.lockutils [req-5461f9f8-25cf-4b91-aaa7-639b4f1ccca6 req-66c129e6-9d1d-419f-953d-b0be27ade3b9 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "52477e64-7989-4aa2-88e1-31600bfae2ef-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:50:16 compute-1 nova_compute[225855]: 2026-01-20 14:50:16.881 225859 DEBUG oslo_concurrency.lockutils [req-5461f9f8-25cf-4b91-aaa7-639b4f1ccca6 req-66c129e6-9d1d-419f-953d-b0be27ade3b9 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "52477e64-7989-4aa2-88e1-31600bfae2ef-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:50:16 compute-1 nova_compute[225855]: 2026-01-20 14:50:16.881 225859 DEBUG nova.compute.manager [req-5461f9f8-25cf-4b91-aaa7-639b4f1ccca6 req-66c129e6-9d1d-419f-953d-b0be27ade3b9 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 52477e64-7989-4aa2-88e1-31600bfae2ef] No waiting events found dispatching network-vif-unplugged-8286e975-4b57-4b5a-9018-82187a854a2d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:50:16 compute-1 nova_compute[225855]: 2026-01-20 14:50:16.881 225859 DEBUG nova.compute.manager [req-5461f9f8-25cf-4b91-aaa7-639b4f1ccca6 req-66c129e6-9d1d-419f-953d-b0be27ade3b9 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 52477e64-7989-4aa2-88e1-31600bfae2ef] Received event network-vif-unplugged-8286e975-4b57-4b5a-9018-82187a854a2d for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 20 14:50:16 compute-1 nova_compute[225855]: 2026-01-20 14:50:16.882 225859 DEBUG nova.compute.manager [req-5461f9f8-25cf-4b91-aaa7-639b4f1ccca6 req-66c129e6-9d1d-419f-953d-b0be27ade3b9 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 52477e64-7989-4aa2-88e1-31600bfae2ef] Received event network-vif-plugged-8286e975-4b57-4b5a-9018-82187a854a2d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:50:16 compute-1 nova_compute[225855]: 2026-01-20 14:50:16.882 225859 DEBUG oslo_concurrency.lockutils [req-5461f9f8-25cf-4b91-aaa7-639b4f1ccca6 req-66c129e6-9d1d-419f-953d-b0be27ade3b9 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "52477e64-7989-4aa2-88e1-31600bfae2ef-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:50:16 compute-1 nova_compute[225855]: 2026-01-20 14:50:16.882 225859 DEBUG oslo_concurrency.lockutils [req-5461f9f8-25cf-4b91-aaa7-639b4f1ccca6 req-66c129e6-9d1d-419f-953d-b0be27ade3b9 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "52477e64-7989-4aa2-88e1-31600bfae2ef-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:50:16 compute-1 nova_compute[225855]: 2026-01-20 14:50:16.882 225859 DEBUG oslo_concurrency.lockutils [req-5461f9f8-25cf-4b91-aaa7-639b4f1ccca6 req-66c129e6-9d1d-419f-953d-b0be27ade3b9 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "52477e64-7989-4aa2-88e1-31600bfae2ef-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:50:16 compute-1 nova_compute[225855]: 2026-01-20 14:50:16.882 225859 DEBUG nova.compute.manager [req-5461f9f8-25cf-4b91-aaa7-639b4f1ccca6 req-66c129e6-9d1d-419f-953d-b0be27ade3b9 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 52477e64-7989-4aa2-88e1-31600bfae2ef] No waiting events found dispatching network-vif-plugged-8286e975-4b57-4b5a-9018-82187a854a2d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:50:16 compute-1 nova_compute[225855]: 2026-01-20 14:50:16.882 225859 WARNING nova.compute.manager [req-5461f9f8-25cf-4b91-aaa7-639b4f1ccca6 req-66c129e6-9d1d-419f-953d-b0be27ade3b9 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 52477e64-7989-4aa2-88e1-31600bfae2ef] Received unexpected event network-vif-plugged-8286e975-4b57-4b5a-9018-82187a854a2d for instance with vm_state active and task_state deleting.
Jan 20 14:50:17 compute-1 nova_compute[225855]: 2026-01-20 14:50:17.009 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:50:17 compute-1 podman[267031]: 2026-01-20 14:50:17.019748371 +0000 UTC m=+0.058522323 container health_status 533cd8e0c3b8b7f910f15cef49e7f879ab46c12ce646a35628c15f13725eb500 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 20 14:50:17 compute-1 ceph-mon[81775]: pgmap v1873: 321 pgs: 321 active+clean; 246 MiB data, 907 MiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 32 KiB/s wr, 172 op/s
Jan 20 14:50:17 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e248 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:50:17 compute-1 nova_compute[225855]: 2026-01-20 14:50:17.611 225859 DEBUG nova.compute.manager [req-238b57a2-08fa-428f-926e-00dc3e4027cf req-e43ff5b0-466e-4dd1-86ab-2af9205b9631 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: fdeb13eb-edb4-4bff-aeef-2671ba9d4618] Received event network-vif-unplugged-6855cb4f-4178-4447-af36-126ade033206 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:50:17 compute-1 nova_compute[225855]: 2026-01-20 14:50:17.611 225859 DEBUG oslo_concurrency.lockutils [req-238b57a2-08fa-428f-926e-00dc3e4027cf req-e43ff5b0-466e-4dd1-86ab-2af9205b9631 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "fdeb13eb-edb4-4bff-aeef-2671ba9d4618-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:50:17 compute-1 nova_compute[225855]: 2026-01-20 14:50:17.612 225859 DEBUG oslo_concurrency.lockutils [req-238b57a2-08fa-428f-926e-00dc3e4027cf req-e43ff5b0-466e-4dd1-86ab-2af9205b9631 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "fdeb13eb-edb4-4bff-aeef-2671ba9d4618-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:50:17 compute-1 nova_compute[225855]: 2026-01-20 14:50:17.612 225859 DEBUG oslo_concurrency.lockutils [req-238b57a2-08fa-428f-926e-00dc3e4027cf req-e43ff5b0-466e-4dd1-86ab-2af9205b9631 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "fdeb13eb-edb4-4bff-aeef-2671ba9d4618-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:50:17 compute-1 nova_compute[225855]: 2026-01-20 14:50:17.613 225859 DEBUG nova.compute.manager [req-238b57a2-08fa-428f-926e-00dc3e4027cf req-e43ff5b0-466e-4dd1-86ab-2af9205b9631 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: fdeb13eb-edb4-4bff-aeef-2671ba9d4618] No waiting events found dispatching network-vif-unplugged-6855cb4f-4178-4447-af36-126ade033206 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:50:17 compute-1 nova_compute[225855]: 2026-01-20 14:50:17.613 225859 WARNING nova.compute.manager [req-238b57a2-08fa-428f-926e-00dc3e4027cf req-e43ff5b0-466e-4dd1-86ab-2af9205b9631 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: fdeb13eb-edb4-4bff-aeef-2671ba9d4618] Received unexpected event network-vif-unplugged-6855cb4f-4178-4447-af36-126ade033206 for instance with vm_state resized and task_state resize_reverting.
Jan 20 14:50:17 compute-1 nova_compute[225855]: 2026-01-20 14:50:17.614 225859 DEBUG nova.compute.manager [req-238b57a2-08fa-428f-926e-00dc3e4027cf req-e43ff5b0-466e-4dd1-86ab-2af9205b9631 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: fdeb13eb-edb4-4bff-aeef-2671ba9d4618] Received event network-vif-plugged-6855cb4f-4178-4447-af36-126ade033206 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:50:17 compute-1 nova_compute[225855]: 2026-01-20 14:50:17.614 225859 DEBUG oslo_concurrency.lockutils [req-238b57a2-08fa-428f-926e-00dc3e4027cf req-e43ff5b0-466e-4dd1-86ab-2af9205b9631 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "fdeb13eb-edb4-4bff-aeef-2671ba9d4618-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:50:17 compute-1 nova_compute[225855]: 2026-01-20 14:50:17.615 225859 DEBUG oslo_concurrency.lockutils [req-238b57a2-08fa-428f-926e-00dc3e4027cf req-e43ff5b0-466e-4dd1-86ab-2af9205b9631 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "fdeb13eb-edb4-4bff-aeef-2671ba9d4618-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:50:17 compute-1 nova_compute[225855]: 2026-01-20 14:50:17.615 225859 DEBUG oslo_concurrency.lockutils [req-238b57a2-08fa-428f-926e-00dc3e4027cf req-e43ff5b0-466e-4dd1-86ab-2af9205b9631 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "fdeb13eb-edb4-4bff-aeef-2671ba9d4618-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:50:17 compute-1 nova_compute[225855]: 2026-01-20 14:50:17.615 225859 DEBUG nova.compute.manager [req-238b57a2-08fa-428f-926e-00dc3e4027cf req-e43ff5b0-466e-4dd1-86ab-2af9205b9631 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: fdeb13eb-edb4-4bff-aeef-2671ba9d4618] No waiting events found dispatching network-vif-plugged-6855cb4f-4178-4447-af36-126ade033206 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:50:17 compute-1 nova_compute[225855]: 2026-01-20 14:50:17.616 225859 WARNING nova.compute.manager [req-238b57a2-08fa-428f-926e-00dc3e4027cf req-e43ff5b0-466e-4dd1-86ab-2af9205b9631 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: fdeb13eb-edb4-4bff-aeef-2671ba9d4618] Received unexpected event network-vif-plugged-6855cb4f-4178-4447-af36-126ade033206 for instance with vm_state resized and task_state resize_reverting.
Jan 20 14:50:17 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:50:17 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:50:17 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:50:17.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:50:18 compute-1 nova_compute[225855]: 2026-01-20 14:50:18.195 225859 DEBUG nova.network.neutron [-] [instance: 52477e64-7989-4aa2-88e1-31600bfae2ef] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:50:18 compute-1 nova_compute[225855]: 2026-01-20 14:50:18.213 225859 INFO nova.compute.manager [-] [instance: 52477e64-7989-4aa2-88e1-31600bfae2ef] Took 2.47 seconds to deallocate network for instance.
Jan 20 14:50:18 compute-1 nova_compute[225855]: 2026-01-20 14:50:18.275 225859 DEBUG oslo_concurrency.lockutils [None req-f42363f2-02ba-471b-bd6f-2f81d785ebce a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:50:18 compute-1 nova_compute[225855]: 2026-01-20 14:50:18.275 225859 DEBUG oslo_concurrency.lockutils [None req-f42363f2-02ba-471b-bd6f-2f81d785ebce a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:50:18 compute-1 nova_compute[225855]: 2026-01-20 14:50:18.286 225859 DEBUG oslo_concurrency.lockutils [None req-f42363f2-02ba-471b-bd6f-2f81d785ebce a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.011s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:50:18 compute-1 nova_compute[225855]: 2026-01-20 14:50:18.329 225859 INFO nova.scheduler.client.report [None req-f42363f2-02ba-471b-bd6f-2f81d785ebce a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Deleted allocations for instance 52477e64-7989-4aa2-88e1-31600bfae2ef
Jan 20 14:50:18 compute-1 nova_compute[225855]: 2026-01-20 14:50:18.371 225859 DEBUG nova.network.neutron [None req-a32fc634-b215-4655-b844-13e732281d75 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] [instance: fdeb13eb-edb4-4bff-aeef-2671ba9d4618] Updating instance_info_cache with network_info: [{"id": "6855cb4f-4178-4447-af36-126ade033206", "address": "fa:16:3e:4f:3f:20", "network": {"id": "762e1859-4db4-4d9e-b66f-d50316f80df4", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1917526237-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.180", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c5f03d46c0c4162a3b2f1530850bb6c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6855cb4f-41", "ovs_interfaceid": "6855cb4f-4178-4447-af36-126ade033206", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:50:18 compute-1 nova_compute[225855]: 2026-01-20 14:50:18.404 225859 DEBUG oslo_concurrency.lockutils [None req-a32fc634-b215-4655-b844-13e732281d75 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Releasing lock "refresh_cache-fdeb13eb-edb4-4bff-aeef-2671ba9d4618" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:50:18 compute-1 nova_compute[225855]: 2026-01-20 14:50:18.404 225859 DEBUG nova.virt.libvirt.driver [None req-a32fc634-b215-4655-b844-13e732281d75 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] [instance: fdeb13eb-edb4-4bff-aeef-2671ba9d4618] Starting finish_revert_migration finish_revert_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11843
Jan 20 14:50:18 compute-1 nova_compute[225855]: 2026-01-20 14:50:18.440 225859 DEBUG oslo_concurrency.lockutils [None req-f42363f2-02ba-471b-bd6f-2f81d785ebce a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Lock "52477e64-7989-4aa2-88e1-31600bfae2ef" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.463s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:50:18 compute-1 nova_compute[225855]: 2026-01-20 14:50:18.485 225859 DEBUG nova.storage.rbd_utils [None req-a32fc634-b215-4655-b844-13e732281d75 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] rolling back rbd image(fdeb13eb-edb4-4bff-aeef-2671ba9d4618_disk) to snapshot(nova-resize) rollback_to_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:505
Jan 20 14:50:18 compute-1 nova_compute[225855]: 2026-01-20 14:50:18.599 225859 DEBUG nova.storage.rbd_utils [None req-a32fc634-b215-4655-b844-13e732281d75 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] removing snapshot(nova-resize) on rbd image(fdeb13eb-edb4-4bff-aeef-2671ba9d4618_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Jan 20 14:50:18 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:50:18 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:50:18 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:50:18.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:50:18 compute-1 nova_compute[225855]: 2026-01-20 14:50:18.997 225859 DEBUG nova.compute.manager [req-7db177bc-2502-44a0-a2b6-161baac839a2 req-8a43a978-0815-43af-86f9-0a18f0bcf7be 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 52477e64-7989-4aa2-88e1-31600bfae2ef] Received event network-vif-deleted-8286e975-4b57-4b5a-9018-82187a854a2d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:50:19 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e249 e249: 3 total, 3 up, 3 in
Jan 20 14:50:19 compute-1 ceph-mon[81775]: pgmap v1874: 321 pgs: 321 active+clean; 231 MiB data, 899 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 44 KiB/s wr, 133 op/s
Jan 20 14:50:19 compute-1 nova_compute[225855]: 2026-01-20 14:50:19.241 225859 DEBUG nova.virt.libvirt.driver [None req-a32fc634-b215-4655-b844-13e732281d75 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] [instance: fdeb13eb-edb4-4bff-aeef-2671ba9d4618] Start _get_guest_xml network_info=[{"id": "6855cb4f-4178-4447-af36-126ade033206", "address": "fa:16:3e:4f:3f:20", "network": {"id": "762e1859-4db4-4d9e-b66f-d50316f80df4", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1917526237-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.180", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c5f03d46c0c4162a3b2f1530850bb6c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6855cb4f-41", "ovs_interfaceid": "6855cb4f-4178-4447-af36-126ade033206", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'device_type': 'disk', 'encryption_options': None, 'size': 0, 'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'guest_format': None, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'image_id': 'a32b3e07-16d8-46fd-9a7b-c242c432fcf9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 20 14:50:19 compute-1 nova_compute[225855]: 2026-01-20 14:50:19.245 225859 WARNING nova.virt.libvirt.driver [None req-a32fc634-b215-4655-b844-13e732281d75 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 14:50:19 compute-1 nova_compute[225855]: 2026-01-20 14:50:19.251 225859 DEBUG nova.virt.libvirt.host [None req-a32fc634-b215-4655-b844-13e732281d75 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 20 14:50:19 compute-1 nova_compute[225855]: 2026-01-20 14:50:19.252 225859 DEBUG nova.virt.libvirt.host [None req-a32fc634-b215-4655-b844-13e732281d75 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 20 14:50:19 compute-1 nova_compute[225855]: 2026-01-20 14:50:19.256 225859 DEBUG nova.virt.libvirt.host [None req-a32fc634-b215-4655-b844-13e732281d75 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 20 14:50:19 compute-1 nova_compute[225855]: 2026-01-20 14:50:19.257 225859 DEBUG nova.virt.libvirt.host [None req-a32fc634-b215-4655-b844-13e732281d75 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 20 14:50:19 compute-1 nova_compute[225855]: 2026-01-20 14:50:19.258 225859 DEBUG nova.virt.libvirt.driver [None req-a32fc634-b215-4655-b844-13e732281d75 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 20 14:50:19 compute-1 nova_compute[225855]: 2026-01-20 14:50:19.258 225859 DEBUG nova.virt.hardware [None req-a32fc634-b215-4655-b844-13e732281d75 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-20T14:21:55Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='522deaab-a741-4dbb-932d-d8b13a211c33',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 20 14:50:19 compute-1 nova_compute[225855]: 2026-01-20 14:50:19.258 225859 DEBUG nova.virt.hardware [None req-a32fc634-b215-4655-b844-13e732281d75 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 20 14:50:19 compute-1 nova_compute[225855]: 2026-01-20 14:50:19.259 225859 DEBUG nova.virt.hardware [None req-a32fc634-b215-4655-b844-13e732281d75 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 20 14:50:19 compute-1 nova_compute[225855]: 2026-01-20 14:50:19.259 225859 DEBUG nova.virt.hardware [None req-a32fc634-b215-4655-b844-13e732281d75 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 20 14:50:19 compute-1 nova_compute[225855]: 2026-01-20 14:50:19.259 225859 DEBUG nova.virt.hardware [None req-a32fc634-b215-4655-b844-13e732281d75 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 20 14:50:19 compute-1 nova_compute[225855]: 2026-01-20 14:50:19.259 225859 DEBUG nova.virt.hardware [None req-a32fc634-b215-4655-b844-13e732281d75 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 20 14:50:19 compute-1 nova_compute[225855]: 2026-01-20 14:50:19.259 225859 DEBUG nova.virt.hardware [None req-a32fc634-b215-4655-b844-13e732281d75 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 20 14:50:19 compute-1 nova_compute[225855]: 2026-01-20 14:50:19.260 225859 DEBUG nova.virt.hardware [None req-a32fc634-b215-4655-b844-13e732281d75 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 20 14:50:19 compute-1 nova_compute[225855]: 2026-01-20 14:50:19.260 225859 DEBUG nova.virt.hardware [None req-a32fc634-b215-4655-b844-13e732281d75 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 20 14:50:19 compute-1 nova_compute[225855]: 2026-01-20 14:50:19.260 225859 DEBUG nova.virt.hardware [None req-a32fc634-b215-4655-b844-13e732281d75 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 20 14:50:19 compute-1 nova_compute[225855]: 2026-01-20 14:50:19.260 225859 DEBUG nova.virt.hardware [None req-a32fc634-b215-4655-b844-13e732281d75 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 20 14:50:19 compute-1 nova_compute[225855]: 2026-01-20 14:50:19.261 225859 DEBUG nova.objects.instance [None req-a32fc634-b215-4655-b844-13e732281d75 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Lazy-loading 'vcpu_model' on Instance uuid fdeb13eb-edb4-4bff-aeef-2671ba9d4618 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:50:19 compute-1 nova_compute[225855]: 2026-01-20 14:50:19.290 225859 DEBUG oslo_concurrency.processutils [None req-a32fc634-b215-4655-b844-13e732281d75 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:50:19 compute-1 nova_compute[225855]: 2026-01-20 14:50:19.615 225859 DEBUG oslo_concurrency.lockutils [None req-caeffa0b-6abc-4875-a420-9b31a2b0b774 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Acquiring lock "d5167284-086d-4b37-98b0-3853baabf418" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:50:19 compute-1 nova_compute[225855]: 2026-01-20 14:50:19.616 225859 DEBUG oslo_concurrency.lockutils [None req-caeffa0b-6abc-4875-a420-9b31a2b0b774 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Lock "d5167284-086d-4b37-98b0-3853baabf418" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:50:19 compute-1 nova_compute[225855]: 2026-01-20 14:50:19.634 225859 DEBUG nova.compute.manager [None req-caeffa0b-6abc-4875-a420-9b31a2b0b774 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: d5167284-086d-4b37-98b0-3853baabf418] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 20 14:50:19 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:50:19 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:50:19 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:50:19.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:50:19 compute-1 nova_compute[225855]: 2026-01-20 14:50:19.719 225859 DEBUG oslo_concurrency.lockutils [None req-caeffa0b-6abc-4875-a420-9b31a2b0b774 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:50:19 compute-1 nova_compute[225855]: 2026-01-20 14:50:19.720 225859 DEBUG oslo_concurrency.lockutils [None req-caeffa0b-6abc-4875-a420-9b31a2b0b774 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:50:19 compute-1 nova_compute[225855]: 2026-01-20 14:50:19.726 225859 DEBUG nova.virt.hardware [None req-caeffa0b-6abc-4875-a420-9b31a2b0b774 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 20 14:50:19 compute-1 nova_compute[225855]: 2026-01-20 14:50:19.727 225859 INFO nova.compute.claims [None req-caeffa0b-6abc-4875-a420-9b31a2b0b774 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: d5167284-086d-4b37-98b0-3853baabf418] Claim successful on node compute-1.ctlplane.example.com
Jan 20 14:50:19 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:50:19 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/172413703' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:50:19 compute-1 nova_compute[225855]: 2026-01-20 14:50:19.832 225859 DEBUG oslo_concurrency.processutils [None req-a32fc634-b215-4655-b844-13e732281d75 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.542s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:50:19 compute-1 nova_compute[225855]: 2026-01-20 14:50:19.866 225859 DEBUG oslo_concurrency.processutils [None req-a32fc634-b215-4655-b844-13e732281d75 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:50:19 compute-1 nova_compute[225855]: 2026-01-20 14:50:19.889 225859 DEBUG oslo_concurrency.processutils [None req-caeffa0b-6abc-4875-a420-9b31a2b0b774 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:50:20 compute-1 nova_compute[225855]: 2026-01-20 14:50:20.251 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:50:20 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:50:20 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4223586813' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:50:20 compute-1 ceph-mon[81775]: osdmap e249: 3 total, 3 up, 3 in
Jan 20 14:50:20 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/172413703' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:50:20 compute-1 nova_compute[225855]: 2026-01-20 14:50:20.382 225859 DEBUG oslo_concurrency.processutils [None req-a32fc634-b215-4655-b844-13e732281d75 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.516s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:50:20 compute-1 nova_compute[225855]: 2026-01-20 14:50:20.385 225859 DEBUG nova.virt.libvirt.vif [None req-a32fc634-b215-4655-b844-13e732281d75 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T14:49:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-2012792656',display_name='tempest-ServerActionsTestJSON-server-2012792656',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-2012792656',id=105,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLgEzx5mLsSqRL7L9WKOzM+WdeJ40U103wY9H3VMZ41G/sN5tQtQSt9lXWKTyc6pt00bfKD0E9GPugNMpy+dzSSpK23o3CgadkzAfAvjQCeCPOSp3fX13FGApomGd1HRCQ==',key_name='tempest-keypair-1602241722',keypairs=<?>,launch_index=0,launched_at=2026-01-20T14:50:06Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='1c5f03d46c0c4162a3b2f1530850bb6c',ramdisk_id='',reservation_id='r-c8be97ji',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServerActionsTestJSON-1020442335',owner_user_name='tempest-ServerActionsTestJSON-1020442335-project-member'},tags=<?>,task_state='resize_reverting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T14:50:08Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='3e9278fdb9e645b7938f3edb20c4d3cf',uuid=fdeb13eb-edb4-4bff-aeef-2671ba9d4618,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='resized') vif={"id": "6855cb4f-4178-4447-af36-126ade033206", "address": "fa:16:3e:4f:3f:20", "network": {"id": "762e1859-4db4-4d9e-b66f-d50316f80df4", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1917526237-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.180", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c5f03d46c0c4162a3b2f1530850bb6c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6855cb4f-41", "ovs_interfaceid": "6855cb4f-4178-4447-af36-126ade033206", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 20 14:50:20 compute-1 nova_compute[225855]: 2026-01-20 14:50:20.385 225859 DEBUG nova.network.os_vif_util [None req-a32fc634-b215-4655-b844-13e732281d75 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Converting VIF {"id": "6855cb4f-4178-4447-af36-126ade033206", "address": "fa:16:3e:4f:3f:20", "network": {"id": "762e1859-4db4-4d9e-b66f-d50316f80df4", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1917526237-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.180", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c5f03d46c0c4162a3b2f1530850bb6c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6855cb4f-41", "ovs_interfaceid": "6855cb4f-4178-4447-af36-126ade033206", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:50:20 compute-1 nova_compute[225855]: 2026-01-20 14:50:20.386 225859 DEBUG nova.network.os_vif_util [None req-a32fc634-b215-4655-b844-13e732281d75 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:4f:3f:20,bridge_name='br-int',has_traffic_filtering=True,id=6855cb4f-4178-4447-af36-126ade033206,network=Network(762e1859-4db4-4d9e-b66f-d50316f80df4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6855cb4f-41') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:50:20 compute-1 nova_compute[225855]: 2026-01-20 14:50:20.389 225859 DEBUG nova.virt.libvirt.driver [None req-a32fc634-b215-4655-b844-13e732281d75 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] [instance: fdeb13eb-edb4-4bff-aeef-2671ba9d4618] End _get_guest_xml xml=<domain type="kvm">
Jan 20 14:50:20 compute-1 nova_compute[225855]:   <uuid>fdeb13eb-edb4-4bff-aeef-2671ba9d4618</uuid>
Jan 20 14:50:20 compute-1 nova_compute[225855]:   <name>instance-00000069</name>
Jan 20 14:50:20 compute-1 nova_compute[225855]:   <memory>131072</memory>
Jan 20 14:50:20 compute-1 nova_compute[225855]:   <vcpu>1</vcpu>
Jan 20 14:50:20 compute-1 nova_compute[225855]:   <metadata>
Jan 20 14:50:20 compute-1 nova_compute[225855]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 14:50:20 compute-1 nova_compute[225855]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 14:50:20 compute-1 nova_compute[225855]:       <nova:name>tempest-ServerActionsTestJSON-server-2012792656</nova:name>
Jan 20 14:50:20 compute-1 nova_compute[225855]:       <nova:creationTime>2026-01-20 14:50:19</nova:creationTime>
Jan 20 14:50:20 compute-1 nova_compute[225855]:       <nova:flavor name="m1.nano">
Jan 20 14:50:20 compute-1 nova_compute[225855]:         <nova:memory>128</nova:memory>
Jan 20 14:50:20 compute-1 nova_compute[225855]:         <nova:disk>1</nova:disk>
Jan 20 14:50:20 compute-1 nova_compute[225855]:         <nova:swap>0</nova:swap>
Jan 20 14:50:20 compute-1 nova_compute[225855]:         <nova:ephemeral>0</nova:ephemeral>
Jan 20 14:50:20 compute-1 nova_compute[225855]:         <nova:vcpus>1</nova:vcpus>
Jan 20 14:50:20 compute-1 nova_compute[225855]:       </nova:flavor>
Jan 20 14:50:20 compute-1 nova_compute[225855]:       <nova:owner>
Jan 20 14:50:20 compute-1 nova_compute[225855]:         <nova:user uuid="3e9278fdb9e645b7938f3edb20c4d3cf">tempest-ServerActionsTestJSON-1020442335-project-member</nova:user>
Jan 20 14:50:20 compute-1 nova_compute[225855]:         <nova:project uuid="1c5f03d46c0c4162a3b2f1530850bb6c">tempest-ServerActionsTestJSON-1020442335</nova:project>
Jan 20 14:50:20 compute-1 nova_compute[225855]:       </nova:owner>
Jan 20 14:50:20 compute-1 nova_compute[225855]:       <nova:root type="image" uuid="a32b3e07-16d8-46fd-9a7b-c242c432fcf9"/>
Jan 20 14:50:20 compute-1 nova_compute[225855]:       <nova:ports>
Jan 20 14:50:20 compute-1 nova_compute[225855]:         <nova:port uuid="6855cb4f-4178-4447-af36-126ade033206">
Jan 20 14:50:20 compute-1 nova_compute[225855]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Jan 20 14:50:20 compute-1 nova_compute[225855]:         </nova:port>
Jan 20 14:50:20 compute-1 nova_compute[225855]:       </nova:ports>
Jan 20 14:50:20 compute-1 nova_compute[225855]:     </nova:instance>
Jan 20 14:50:20 compute-1 nova_compute[225855]:   </metadata>
Jan 20 14:50:20 compute-1 nova_compute[225855]:   <sysinfo type="smbios">
Jan 20 14:50:20 compute-1 nova_compute[225855]:     <system>
Jan 20 14:50:20 compute-1 nova_compute[225855]:       <entry name="manufacturer">RDO</entry>
Jan 20 14:50:20 compute-1 nova_compute[225855]:       <entry name="product">OpenStack Compute</entry>
Jan 20 14:50:20 compute-1 nova_compute[225855]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 14:50:20 compute-1 nova_compute[225855]:       <entry name="serial">fdeb13eb-edb4-4bff-aeef-2671ba9d4618</entry>
Jan 20 14:50:20 compute-1 nova_compute[225855]:       <entry name="uuid">fdeb13eb-edb4-4bff-aeef-2671ba9d4618</entry>
Jan 20 14:50:20 compute-1 nova_compute[225855]:       <entry name="family">Virtual Machine</entry>
Jan 20 14:50:20 compute-1 nova_compute[225855]:     </system>
Jan 20 14:50:20 compute-1 nova_compute[225855]:   </sysinfo>
Jan 20 14:50:20 compute-1 nova_compute[225855]:   <os>
Jan 20 14:50:20 compute-1 nova_compute[225855]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 20 14:50:20 compute-1 nova_compute[225855]:     <boot dev="hd"/>
Jan 20 14:50:20 compute-1 nova_compute[225855]:     <smbios mode="sysinfo"/>
Jan 20 14:50:20 compute-1 nova_compute[225855]:   </os>
Jan 20 14:50:20 compute-1 nova_compute[225855]:   <features>
Jan 20 14:50:20 compute-1 nova_compute[225855]:     <acpi/>
Jan 20 14:50:20 compute-1 nova_compute[225855]:     <apic/>
Jan 20 14:50:20 compute-1 nova_compute[225855]:     <vmcoreinfo/>
Jan 20 14:50:20 compute-1 nova_compute[225855]:   </features>
Jan 20 14:50:20 compute-1 nova_compute[225855]:   <clock offset="utc">
Jan 20 14:50:20 compute-1 nova_compute[225855]:     <timer name="pit" tickpolicy="delay"/>
Jan 20 14:50:20 compute-1 nova_compute[225855]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 20 14:50:20 compute-1 nova_compute[225855]:     <timer name="hpet" present="no"/>
Jan 20 14:50:20 compute-1 nova_compute[225855]:   </clock>
Jan 20 14:50:20 compute-1 nova_compute[225855]:   <cpu mode="custom" match="exact">
Jan 20 14:50:20 compute-1 nova_compute[225855]:     <model>Nehalem</model>
Jan 20 14:50:20 compute-1 nova_compute[225855]:     <topology sockets="1" cores="1" threads="1"/>
Jan 20 14:50:20 compute-1 nova_compute[225855]:   </cpu>
Jan 20 14:50:20 compute-1 nova_compute[225855]:   <devices>
Jan 20 14:50:20 compute-1 nova_compute[225855]:     <disk type="network" device="disk">
Jan 20 14:50:20 compute-1 nova_compute[225855]:       <driver type="raw" cache="none"/>
Jan 20 14:50:20 compute-1 nova_compute[225855]:       <source protocol="rbd" name="vms/fdeb13eb-edb4-4bff-aeef-2671ba9d4618_disk">
Jan 20 14:50:20 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:50:20 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:50:20 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:50:20 compute-1 nova_compute[225855]:       </source>
Jan 20 14:50:20 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 14:50:20 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:50:20 compute-1 nova_compute[225855]:       </auth>
Jan 20 14:50:20 compute-1 nova_compute[225855]:       <target dev="vda" bus="virtio"/>
Jan 20 14:50:20 compute-1 nova_compute[225855]:     </disk>
Jan 20 14:50:20 compute-1 nova_compute[225855]:     <disk type="network" device="cdrom">
Jan 20 14:50:20 compute-1 nova_compute[225855]:       <driver type="raw" cache="none"/>
Jan 20 14:50:20 compute-1 nova_compute[225855]:       <source protocol="rbd" name="vms/fdeb13eb-edb4-4bff-aeef-2671ba9d4618_disk.config">
Jan 20 14:50:20 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:50:20 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:50:20 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:50:20 compute-1 nova_compute[225855]:       </source>
Jan 20 14:50:20 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 14:50:20 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:50:20 compute-1 nova_compute[225855]:       </auth>
Jan 20 14:50:20 compute-1 nova_compute[225855]:       <target dev="sda" bus="sata"/>
Jan 20 14:50:20 compute-1 nova_compute[225855]:     </disk>
Jan 20 14:50:20 compute-1 nova_compute[225855]:     <interface type="ethernet">
Jan 20 14:50:20 compute-1 nova_compute[225855]:       <mac address="fa:16:3e:4f:3f:20"/>
Jan 20 14:50:20 compute-1 nova_compute[225855]:       <model type="virtio"/>
Jan 20 14:50:20 compute-1 nova_compute[225855]:       <driver name="vhost" rx_queue_size="512"/>
Jan 20 14:50:20 compute-1 nova_compute[225855]:       <mtu size="1442"/>
Jan 20 14:50:20 compute-1 nova_compute[225855]:       <target dev="tap6855cb4f-41"/>
Jan 20 14:50:20 compute-1 nova_compute[225855]:     </interface>
Jan 20 14:50:20 compute-1 nova_compute[225855]:     <serial type="pty">
Jan 20 14:50:20 compute-1 nova_compute[225855]:       <log file="/var/lib/nova/instances/fdeb13eb-edb4-4bff-aeef-2671ba9d4618/console.log" append="off"/>
Jan 20 14:50:20 compute-1 nova_compute[225855]:     </serial>
Jan 20 14:50:20 compute-1 nova_compute[225855]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 20 14:50:20 compute-1 nova_compute[225855]:     <video>
Jan 20 14:50:20 compute-1 nova_compute[225855]:       <model type="virtio"/>
Jan 20 14:50:20 compute-1 nova_compute[225855]:     </video>
Jan 20 14:50:20 compute-1 nova_compute[225855]:     <input type="tablet" bus="usb"/>
Jan 20 14:50:20 compute-1 nova_compute[225855]:     <input type="keyboard" bus="usb"/>
Jan 20 14:50:20 compute-1 nova_compute[225855]:     <rng model="virtio">
Jan 20 14:50:20 compute-1 nova_compute[225855]:       <backend model="random">/dev/urandom</backend>
Jan 20 14:50:20 compute-1 nova_compute[225855]:     </rng>
Jan 20 14:50:20 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root"/>
Jan 20 14:50:20 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:50:20 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:50:20 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:50:20 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:50:20 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:50:20 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:50:20 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:50:20 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:50:20 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:50:20 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:50:20 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:50:20 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:50:20 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:50:20 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:50:20 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:50:20 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:50:20 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:50:20 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:50:20 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:50:20 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:50:20 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:50:20 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:50:20 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:50:20 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:50:20 compute-1 nova_compute[225855]:     <controller type="usb" index="0"/>
Jan 20 14:50:20 compute-1 nova_compute[225855]:     <memballoon model="virtio">
Jan 20 14:50:20 compute-1 nova_compute[225855]:       <stats period="10"/>
Jan 20 14:50:20 compute-1 nova_compute[225855]:     </memballoon>
Jan 20 14:50:20 compute-1 nova_compute[225855]:   </devices>
Jan 20 14:50:20 compute-1 nova_compute[225855]: </domain>
Jan 20 14:50:20 compute-1 nova_compute[225855]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 20 14:50:20 compute-1 nova_compute[225855]: 2026-01-20 14:50:20.391 225859 DEBUG nova.virt.libvirt.vif [None req-a32fc634-b215-4655-b844-13e732281d75 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T14:49:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-2012792656',display_name='tempest-ServerActionsTestJSON-server-2012792656',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-2012792656',id=105,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLgEzx5mLsSqRL7L9WKOzM+WdeJ40U103wY9H3VMZ41G/sN5tQtQSt9lXWKTyc6pt00bfKD0E9GPugNMpy+dzSSpK23o3CgadkzAfAvjQCeCPOSp3fX13FGApomGd1HRCQ==',key_name='tempest-keypair-1602241722',keypairs=<?>,launch_index=0,launched_at=2026-01-20T14:50:06Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='1c5f03d46c0c4162a3b2f1530850bb6c',ramdisk_id='',reservation_id='r-c8be97ji',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServerActionsTestJSON-1020442335',owner_user_name='tempest-ServerActionsTestJSON-1020442335-project-member'},tags=<?>,task_state='resize_reverting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T14:50:08Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='3e9278fdb9e645b7938f3edb20c4d3cf',uuid=fdeb13eb-edb4-4bff-aeef-2671ba9d4618,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='resized') vif={"id": "6855cb4f-4178-4447-af36-126ade033206", "address": "fa:16:3e:4f:3f:20", "network": {"id": "762e1859-4db4-4d9e-b66f-d50316f80df4", "bridge": "br-int", "label": 
"tempest-ServerActionsTestJSON-1917526237-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.180", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c5f03d46c0c4162a3b2f1530850bb6c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6855cb4f-41", "ovs_interfaceid": "6855cb4f-4178-4447-af36-126ade033206", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 20 14:50:20 compute-1 nova_compute[225855]: 2026-01-20 14:50:20.392 225859 DEBUG nova.network.os_vif_util [None req-a32fc634-b215-4655-b844-13e732281d75 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Converting VIF {"id": "6855cb4f-4178-4447-af36-126ade033206", "address": "fa:16:3e:4f:3f:20", "network": {"id": "762e1859-4db4-4d9e-b66f-d50316f80df4", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1917526237-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.180", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c5f03d46c0c4162a3b2f1530850bb6c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6855cb4f-41", "ovs_interfaceid": "6855cb4f-4178-4447-af36-126ade033206", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:50:20 compute-1 nova_compute[225855]: 2026-01-20 14:50:20.393 225859 DEBUG nova.network.os_vif_util [None req-a32fc634-b215-4655-b844-13e732281d75 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:4f:3f:20,bridge_name='br-int',has_traffic_filtering=True,id=6855cb4f-4178-4447-af36-126ade033206,network=Network(762e1859-4db4-4d9e-b66f-d50316f80df4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6855cb4f-41') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:50:20 compute-1 nova_compute[225855]: 2026-01-20 14:50:20.393 225859 DEBUG os_vif [None req-a32fc634-b215-4655-b844-13e732281d75 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:4f:3f:20,bridge_name='br-int',has_traffic_filtering=True,id=6855cb4f-4178-4447-af36-126ade033206,network=Network(762e1859-4db4-4d9e-b66f-d50316f80df4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6855cb4f-41') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 20 14:50:20 compute-1 nova_compute[225855]: 2026-01-20 14:50:20.393 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:50:20 compute-1 nova_compute[225855]: 2026-01-20 14:50:20.394 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:50:20 compute-1 nova_compute[225855]: 2026-01-20 14:50:20.394 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:50:20 compute-1 nova_compute[225855]: 2026-01-20 14:50:20.397 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:50:20 compute-1 nova_compute[225855]: 2026-01-20 14:50:20.398 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6855cb4f-41, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:50:20 compute-1 nova_compute[225855]: 2026-01-20 14:50:20.399 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap6855cb4f-41, col_values=(('external_ids', {'iface-id': '6855cb4f-4178-4447-af36-126ade033206', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:4f:3f:20', 'vm-uuid': 'fdeb13eb-edb4-4bff-aeef-2671ba9d4618'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:50:20 compute-1 NetworkManager[49104]: <info>  [1768920620.4027] manager: (tap6855cb4f-41): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/176)
Jan 20 14:50:20 compute-1 nova_compute[225855]: 2026-01-20 14:50:20.402 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:50:20 compute-1 nova_compute[225855]: 2026-01-20 14:50:20.405 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 14:50:20 compute-1 nova_compute[225855]: 2026-01-20 14:50:20.409 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:50:20 compute-1 nova_compute[225855]: 2026-01-20 14:50:20.410 225859 INFO os_vif [None req-a32fc634-b215-4655-b844-13e732281d75 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:4f:3f:20,bridge_name='br-int',has_traffic_filtering=True,id=6855cb4f-4178-4447-af36-126ade033206,network=Network(762e1859-4db4-4d9e-b66f-d50316f80df4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6855cb4f-41')
Jan 20 14:50:20 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:50:20 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3610407819' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:50:20 compute-1 nova_compute[225855]: 2026-01-20 14:50:20.543 225859 DEBUG oslo_concurrency.processutils [None req-caeffa0b-6abc-4875-a420-9b31a2b0b774 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.653s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:50:20 compute-1 nova_compute[225855]: 2026-01-20 14:50:20.547 225859 DEBUG nova.compute.provider_tree [None req-caeffa0b-6abc-4875-a420-9b31a2b0b774 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:50:20 compute-1 nova_compute[225855]: 2026-01-20 14:50:20.577 225859 DEBUG nova.scheduler.client.report [None req-caeffa0b-6abc-4875-a420-9b31a2b0b774 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:50:20 compute-1 kernel: tap6855cb4f-41: entered promiscuous mode
Jan 20 14:50:20 compute-1 NetworkManager[49104]: <info>  [1768920620.6133] manager: (tap6855cb4f-41): new Tun device (/org/freedesktop/NetworkManager/Devices/177)
Jan 20 14:50:20 compute-1 ovn_controller[130490]: 2026-01-20T14:50:20Z|00413|binding|INFO|Claiming lport 6855cb4f-4178-4447-af36-126ade033206 for this chassis.
Jan 20 14:50:20 compute-1 ovn_controller[130490]: 2026-01-20T14:50:20Z|00414|binding|INFO|6855cb4f-4178-4447-af36-126ade033206: Claiming fa:16:3e:4f:3f:20 10.100.0.12
Jan 20 14:50:20 compute-1 nova_compute[225855]: 2026-01-20 14:50:20.614 225859 DEBUG oslo_concurrency.lockutils [None req-caeffa0b-6abc-4875-a420-9b31a2b0b774 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.894s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:50:20 compute-1 nova_compute[225855]: 2026-01-20 14:50:20.615 225859 DEBUG nova.compute.manager [None req-caeffa0b-6abc-4875-a420-9b31a2b0b774 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: d5167284-086d-4b37-98b0-3853baabf418] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 20 14:50:20 compute-1 nova_compute[225855]: 2026-01-20 14:50:20.620 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:50:20 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:50:20.624 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4f:3f:20 10.100.0.12'], port_security=['fa:16:3e:4f:3f:20 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'fdeb13eb-edb4-4bff-aeef-2671ba9d4618', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-762e1859-4db4-4d9e-b66f-d50316f80df4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1c5f03d46c0c4162a3b2f1530850bb6c', 'neutron:revision_number': '7', 'neutron:security_group_ids': '80535eda-fa59-4edc-8e3d-9bfea6517730', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.180'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2474a8ca-bb96-4cae-9133-23419b81a9fc, chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], logical_port=6855cb4f-4178-4447-af36-126ade033206) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:50:20 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:50:20.625 140354 INFO neutron.agent.ovn.metadata.agent [-] Port 6855cb4f-4178-4447-af36-126ade033206 in datapath 762e1859-4db4-4d9e-b66f-d50316f80df4 bound to our chassis
Jan 20 14:50:20 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:50:20.627 140354 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 762e1859-4db4-4d9e-b66f-d50316f80df4
Jan 20 14:50:20 compute-1 nova_compute[225855]: 2026-01-20 14:50:20.629 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:50:20 compute-1 nova_compute[225855]: 2026-01-20 14:50:20.636 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:50:20 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:50:20.640 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[15a6def7-b306-43e4-9e04-218f92c12d57]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:50:20 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:50:20.641 140354 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap762e1859-41 in ovnmeta-762e1859-4db4-4d9e-b66f-d50316f80df4 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 20 14:50:20 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:50:20.643 229707 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap762e1859-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 20 14:50:20 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:50:20.643 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[e488f571-9b74-4488-86e2-cbf6c8e21ddb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:50:20 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:50:20.644 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[c4036340-9956-404a-8bbb-41667030d3a6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:50:20 compute-1 systemd-udevd[267207]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 14:50:20 compute-1 systemd-machined[194361]: New machine qemu-48-instance-00000069.
Jan 20 14:50:20 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:50:20.659 140466 DEBUG oslo.privsep.daemon [-] privsep: reply[8296d0e7-f9bd-41ee-8830-ad589aef3345]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:50:20 compute-1 nova_compute[225855]: 2026-01-20 14:50:20.659 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:50:20 compute-1 ovn_controller[130490]: 2026-01-20T14:50:20Z|00415|binding|INFO|Setting lport 6855cb4f-4178-4447-af36-126ade033206 ovn-installed in OVS
Jan 20 14:50:20 compute-1 ovn_controller[130490]: 2026-01-20T14:50:20Z|00416|binding|INFO|Setting lport 6855cb4f-4178-4447-af36-126ade033206 up in Southbound
Jan 20 14:50:20 compute-1 NetworkManager[49104]: <info>  [1768920620.6631] device (tap6855cb4f-41): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 20 14:50:20 compute-1 nova_compute[225855]: 2026-01-20 14:50:20.664 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:50:20 compute-1 NetworkManager[49104]: <info>  [1768920620.6651] device (tap6855cb4f-41): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 20 14:50:20 compute-1 systemd[1]: Started Virtual Machine qemu-48-instance-00000069.
Jan 20 14:50:20 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:50:20.672 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[99531cbf-2f43-460c-a16d-32f8f3378938]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:50:20 compute-1 nova_compute[225855]: 2026-01-20 14:50:20.681 225859 DEBUG nova.compute.manager [None req-caeffa0b-6abc-4875-a420-9b31a2b0b774 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: d5167284-086d-4b37-98b0-3853baabf418] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 20 14:50:20 compute-1 nova_compute[225855]: 2026-01-20 14:50:20.681 225859 DEBUG nova.network.neutron [None req-caeffa0b-6abc-4875-a420-9b31a2b0b774 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: d5167284-086d-4b37-98b0-3853baabf418] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 20 14:50:20 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:50:20.700 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[f9ae7cf6-bbf2-4899-b586-54a2fa6261d6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:50:20 compute-1 NetworkManager[49104]: <info>  [1768920620.7069] manager: (tap762e1859-40): new Veth device (/org/freedesktop/NetworkManager/Devices/178)
Jan 20 14:50:20 compute-1 systemd-udevd[267210]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 14:50:20 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:50:20.706 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[34d68dbc-1475-4c93-9d8a-0b712b2ee5d1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:50:20 compute-1 nova_compute[225855]: 2026-01-20 14:50:20.708 225859 INFO nova.virt.libvirt.driver [None req-caeffa0b-6abc-4875-a420-9b31a2b0b774 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: d5167284-086d-4b37-98b0-3853baabf418] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 20 14:50:20 compute-1 nova_compute[225855]: 2026-01-20 14:50:20.727 225859 DEBUG nova.compute.manager [None req-caeffa0b-6abc-4875-a420-9b31a2b0b774 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: d5167284-086d-4b37-98b0-3853baabf418] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 20 14:50:20 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:50:20.738 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[f2276c6c-896c-4218-8bef-fa033df5b1f1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:50:20 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:50:20.740 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[59ae2534-9a53-43d1-8589-d436561d7ed5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:50:20 compute-1 NetworkManager[49104]: <info>  [1768920620.7602] device (tap762e1859-40): carrier: link connected
Jan 20 14:50:20 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:50:20.764 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[bd2f3479-1221-470e-8481-5d103741918c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:50:20 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:50:20.780 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[09270f6b-16e5-4293-876b-10bcb0c03d2d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap762e1859-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:69:f1:da'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 114], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 565968, 'reachable_time': 26783, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 267239, 'error': None, 'target': 'ovnmeta-762e1859-4db4-4d9e-b66f-d50316f80df4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:50:20 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:50:20.793 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[1a9ccdf7-b171-4ea9-8d1c-defe4d197602]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe69:f1da'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 565968, 'tstamp': 565968}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 267240, 'error': None, 'target': 'ovnmeta-762e1859-4db4-4d9e-b66f-d50316f80df4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:50:20 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:50:20.807 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[39363bd3-95e7-49fe-aa70-ee19da8a26e1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap762e1859-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:69:f1:da'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 114], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 565968, 'reachable_time': 26783, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 267241, 'error': None, 'target': 'ovnmeta-762e1859-4db4-4d9e-b66f-d50316f80df4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:50:20 compute-1 nova_compute[225855]: 2026-01-20 14:50:20.830 225859 DEBUG nova.compute.manager [None req-caeffa0b-6abc-4875-a420-9b31a2b0b774 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: d5167284-086d-4b37-98b0-3853baabf418] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 20 14:50:20 compute-1 nova_compute[225855]: 2026-01-20 14:50:20.831 225859 DEBUG nova.virt.libvirt.driver [None req-caeffa0b-6abc-4875-a420-9b31a2b0b774 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: d5167284-086d-4b37-98b0-3853baabf418] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 20 14:50:20 compute-1 nova_compute[225855]: 2026-01-20 14:50:20.832 225859 INFO nova.virt.libvirt.driver [None req-caeffa0b-6abc-4875-a420-9b31a2b0b774 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: d5167284-086d-4b37-98b0-3853baabf418] Creating image(s)
Jan 20 14:50:20 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:50:20.831 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[bf9162b7-f05b-4535-98d6-2c3b13c4a267]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:50:20 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:50:20 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:50:20 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:50:20.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:50:20 compute-1 nova_compute[225855]: 2026-01-20 14:50:20.867 225859 DEBUG nova.storage.rbd_utils [None req-caeffa0b-6abc-4875-a420-9b31a2b0b774 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] rbd image d5167284-086d-4b37-98b0-3853baabf418_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:50:20 compute-1 nova_compute[225855]: 2026-01-20 14:50:20.893 225859 DEBUG nova.storage.rbd_utils [None req-caeffa0b-6abc-4875-a420-9b31a2b0b774 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] rbd image d5167284-086d-4b37-98b0-3853baabf418_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:50:20 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:50:20.898 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[8b1132eb-b211-45c1-8e83-0d663f0b482b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:50:20 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:50:20.900 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap762e1859-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:50:20 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:50:20.900 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:50:20 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:50:20.901 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap762e1859-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:50:20 compute-1 NetworkManager[49104]: <info>  [1768920620.9036] manager: (tap762e1859-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/179)
Jan 20 14:50:20 compute-1 kernel: tap762e1859-40: entered promiscuous mode
Jan 20 14:50:20 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:50:20.906 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap762e1859-40, col_values=(('external_ids', {'iface-id': '9e775c45-1646-436d-a0cb-a5b5ec356e1b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:50:20 compute-1 ovn_controller[130490]: 2026-01-20T14:50:20Z|00417|binding|INFO|Releasing lport 9e775c45-1646-436d-a0cb-a5b5ec356e1b from this chassis (sb_readonly=0)
Jan 20 14:50:20 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:50:20.908 140354 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/762e1859-4db4-4d9e-b66f-d50316f80df4.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/762e1859-4db4-4d9e-b66f-d50316f80df4.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 20 14:50:20 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:50:20.909 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[1c9268ce-0707-4770-87d8-9d690690090f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:50:20 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:50:20.910 140354 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 20 14:50:20 compute-1 ovn_metadata_agent[140349]: global
Jan 20 14:50:20 compute-1 ovn_metadata_agent[140349]:     log         /dev/log local0 debug
Jan 20 14:50:20 compute-1 ovn_metadata_agent[140349]:     log-tag     haproxy-metadata-proxy-762e1859-4db4-4d9e-b66f-d50316f80df4
Jan 20 14:50:20 compute-1 ovn_metadata_agent[140349]:     user        root
Jan 20 14:50:20 compute-1 ovn_metadata_agent[140349]:     group       root
Jan 20 14:50:20 compute-1 ovn_metadata_agent[140349]:     maxconn     1024
Jan 20 14:50:20 compute-1 ovn_metadata_agent[140349]:     pidfile     /var/lib/neutron/external/pids/762e1859-4db4-4d9e-b66f-d50316f80df4.pid.haproxy
Jan 20 14:50:20 compute-1 ovn_metadata_agent[140349]:     daemon
Jan 20 14:50:20 compute-1 ovn_metadata_agent[140349]: 
Jan 20 14:50:20 compute-1 ovn_metadata_agent[140349]: defaults
Jan 20 14:50:20 compute-1 ovn_metadata_agent[140349]:     log global
Jan 20 14:50:20 compute-1 ovn_metadata_agent[140349]:     mode http
Jan 20 14:50:20 compute-1 ovn_metadata_agent[140349]:     option httplog
Jan 20 14:50:20 compute-1 ovn_metadata_agent[140349]:     option dontlognull
Jan 20 14:50:20 compute-1 ovn_metadata_agent[140349]:     option http-server-close
Jan 20 14:50:20 compute-1 ovn_metadata_agent[140349]:     option forwardfor
Jan 20 14:50:20 compute-1 ovn_metadata_agent[140349]:     retries                 3
Jan 20 14:50:20 compute-1 ovn_metadata_agent[140349]:     timeout http-request    30s
Jan 20 14:50:20 compute-1 ovn_metadata_agent[140349]:     timeout connect         30s
Jan 20 14:50:20 compute-1 ovn_metadata_agent[140349]:     timeout client          32s
Jan 20 14:50:20 compute-1 ovn_metadata_agent[140349]:     timeout server          32s
Jan 20 14:50:20 compute-1 ovn_metadata_agent[140349]:     timeout http-keep-alive 30s
Jan 20 14:50:20 compute-1 ovn_metadata_agent[140349]: 
Jan 20 14:50:20 compute-1 ovn_metadata_agent[140349]: 
Jan 20 14:50:20 compute-1 ovn_metadata_agent[140349]: listen listener
Jan 20 14:50:20 compute-1 ovn_metadata_agent[140349]:     bind 169.254.169.254:80
Jan 20 14:50:20 compute-1 ovn_metadata_agent[140349]:     server metadata /var/lib/neutron/metadata_proxy
Jan 20 14:50:20 compute-1 ovn_metadata_agent[140349]:     http-request add-header X-OVN-Network-ID 762e1859-4db4-4d9e-b66f-d50316f80df4
Jan 20 14:50:20 compute-1 ovn_metadata_agent[140349]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 20 14:50:20 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:50:20.911 140354 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-762e1859-4db4-4d9e-b66f-d50316f80df4', 'env', 'PROCESS_TAG=haproxy-762e1859-4db4-4d9e-b66f-d50316f80df4', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/762e1859-4db4-4d9e-b66f-d50316f80df4.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 20 14:50:20 compute-1 nova_compute[225855]: 2026-01-20 14:50:20.927 225859 DEBUG nova.storage.rbd_utils [None req-caeffa0b-6abc-4875-a420-9b31a2b0b774 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] rbd image d5167284-086d-4b37-98b0-3853baabf418_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:50:20 compute-1 nova_compute[225855]: 2026-01-20 14:50:20.932 225859 DEBUG oslo_concurrency.processutils [None req-caeffa0b-6abc-4875-a420-9b31a2b0b774 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:50:20 compute-1 nova_compute[225855]: 2026-01-20 14:50:20.964 225859 DEBUG nova.policy [None req-caeffa0b-6abc-4875-a420-9b31a2b0b774 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a1bd93d04cc4468abe1d5c61f5144191', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'acb30fbc0e3749e390d7f867060b5a2a', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 20 14:50:20 compute-1 nova_compute[225855]: 2026-01-20 14:50:20.969 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:50:20 compute-1 nova_compute[225855]: 2026-01-20 14:50:20.998 225859 DEBUG oslo_concurrency.processutils [None req-caeffa0b-6abc-4875-a420-9b31a2b0b774 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:50:20 compute-1 nova_compute[225855]: 2026-01-20 14:50:20.998 225859 DEBUG oslo_concurrency.lockutils [None req-caeffa0b-6abc-4875-a420-9b31a2b0b774 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Acquiring lock "82d5c1918fd7c974214c7a48c1793a7a82560462" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:50:20 compute-1 nova_compute[225855]: 2026-01-20 14:50:20.999 225859 DEBUG oslo_concurrency.lockutils [None req-caeffa0b-6abc-4875-a420-9b31a2b0b774 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:50:21 compute-1 nova_compute[225855]: 2026-01-20 14:50:20.999 225859 DEBUG oslo_concurrency.lockutils [None req-caeffa0b-6abc-4875-a420-9b31a2b0b774 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:50:21 compute-1 nova_compute[225855]: 2026-01-20 14:50:21.024 225859 DEBUG nova.storage.rbd_utils [None req-caeffa0b-6abc-4875-a420-9b31a2b0b774 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] rbd image d5167284-086d-4b37-98b0-3853baabf418_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:50:21 compute-1 nova_compute[225855]: 2026-01-20 14:50:21.028 225859 DEBUG oslo_concurrency.processutils [None req-caeffa0b-6abc-4875-a420-9b31a2b0b774 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 d5167284-086d-4b37-98b0-3853baabf418_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:50:21 compute-1 nova_compute[225855]: 2026-01-20 14:50:21.087 225859 DEBUG nova.virt.libvirt.host [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Removed pending event for fdeb13eb-edb4-4bff-aeef-2671ba9d4618 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Jan 20 14:50:21 compute-1 nova_compute[225855]: 2026-01-20 14:50:21.088 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768920621.0855782, fdeb13eb-edb4-4bff-aeef-2671ba9d4618 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:50:21 compute-1 nova_compute[225855]: 2026-01-20 14:50:21.088 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: fdeb13eb-edb4-4bff-aeef-2671ba9d4618] VM Resumed (Lifecycle Event)
Jan 20 14:50:21 compute-1 nova_compute[225855]: 2026-01-20 14:50:21.090 225859 DEBUG nova.compute.manager [None req-a32fc634-b215-4655-b844-13e732281d75 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] [instance: fdeb13eb-edb4-4bff-aeef-2671ba9d4618] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 20 14:50:21 compute-1 nova_compute[225855]: 2026-01-20 14:50:21.100 225859 INFO nova.virt.libvirt.driver [-] [instance: fdeb13eb-edb4-4bff-aeef-2671ba9d4618] Instance running successfully.
Jan 20 14:50:21 compute-1 nova_compute[225855]: 2026-01-20 14:50:21.101 225859 DEBUG nova.virt.libvirt.driver [None req-a32fc634-b215-4655-b844-13e732281d75 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] [instance: fdeb13eb-edb4-4bff-aeef-2671ba9d4618] finish_revert_migration finished successfully. finish_revert_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11887
Jan 20 14:50:21 compute-1 nova_compute[225855]: 2026-01-20 14:50:21.116 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: fdeb13eb-edb4-4bff-aeef-2671ba9d4618] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:50:21 compute-1 nova_compute[225855]: 2026-01-20 14:50:21.121 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: fdeb13eb-edb4-4bff-aeef-2671ba9d4618] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: resized, current task_state: resize_reverting, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:50:21 compute-1 nova_compute[225855]: 2026-01-20 14:50:21.148 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: fdeb13eb-edb4-4bff-aeef-2671ba9d4618] During sync_power_state the instance has a pending task (resize_reverting). Skip.
Jan 20 14:50:21 compute-1 nova_compute[225855]: 2026-01-20 14:50:21.149 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768920621.0872164, fdeb13eb-edb4-4bff-aeef-2671ba9d4618 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:50:21 compute-1 nova_compute[225855]: 2026-01-20 14:50:21.149 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: fdeb13eb-edb4-4bff-aeef-2671ba9d4618] VM Started (Lifecycle Event)
Jan 20 14:50:21 compute-1 nova_compute[225855]: 2026-01-20 14:50:21.193 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: fdeb13eb-edb4-4bff-aeef-2671ba9d4618] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:50:21 compute-1 nova_compute[225855]: 2026-01-20 14:50:21.198 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: fdeb13eb-edb4-4bff-aeef-2671ba9d4618] Synchronizing instance power state after lifecycle event "Started"; current vm_state: resized, current task_state: resize_reverting, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:50:21 compute-1 nova_compute[225855]: 2026-01-20 14:50:21.218 225859 INFO nova.compute.manager [None req-a32fc634-b215-4655-b844-13e732281d75 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] [instance: fdeb13eb-edb4-4bff-aeef-2671ba9d4618] Updating instance to original state: 'active'
Jan 20 14:50:21 compute-1 nova_compute[225855]: 2026-01-20 14:50:21.229 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: fdeb13eb-edb4-4bff-aeef-2671ba9d4618] During sync_power_state the instance has a pending task (resize_reverting). Skip.
Jan 20 14:50:21 compute-1 nova_compute[225855]: 2026-01-20 14:50:21.292 225859 DEBUG oslo_concurrency.processutils [None req-caeffa0b-6abc-4875-a420-9b31a2b0b774 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 d5167284-086d-4b37-98b0-3853baabf418_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.263s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:50:21 compute-1 podman[267407]: 2026-01-20 14:50:21.297388414 +0000 UTC m=+0.050066974 container create 77c1e4defe56387ddf8a0a85cd09f24ebb67ce03630c8150f6659c6c85bb6552 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-762e1859-4db4-4d9e-b66f-d50316f80df4, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Jan 20 14:50:21 compute-1 ceph-mon[81775]: pgmap v1876: 321 pgs: 321 active+clean; 210 MiB data, 892 MiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 40 KiB/s wr, 99 op/s
Jan 20 14:50:21 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/4223586813' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:50:21 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/3610407819' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:50:21 compute-1 systemd[1]: Started libpod-conmon-77c1e4defe56387ddf8a0a85cd09f24ebb67ce03630c8150f6659c6c85bb6552.scope.
Jan 20 14:50:21 compute-1 podman[267407]: 2026-01-20 14:50:21.270456314 +0000 UTC m=+0.023134904 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 20 14:50:21 compute-1 systemd[1]: Started libcrun container.
Jan 20 14:50:21 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e4bf7e696f3dd4d151e1f99ee68b890d05b859756f606d057e896cbb8a0594b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 20 14:50:21 compute-1 podman[267407]: 2026-01-20 14:50:21.388055584 +0000 UTC m=+0.140734144 container init 77c1e4defe56387ddf8a0a85cd09f24ebb67ce03630c8150f6659c6c85bb6552 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-762e1859-4db4-4d9e-b66f-d50316f80df4, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3)
Jan 20 14:50:21 compute-1 podman[267407]: 2026-01-20 14:50:21.394841576 +0000 UTC m=+0.147520136 container start 77c1e4defe56387ddf8a0a85cd09f24ebb67ce03630c8150f6659c6c85bb6552 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-762e1859-4db4-4d9e-b66f-d50316f80df4, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Jan 20 14:50:21 compute-1 nova_compute[225855]: 2026-01-20 14:50:21.398 225859 DEBUG nova.storage.rbd_utils [None req-caeffa0b-6abc-4875-a420-9b31a2b0b774 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] resizing rbd image d5167284-086d-4b37-98b0-3853baabf418_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 20 14:50:21 compute-1 neutron-haproxy-ovnmeta-762e1859-4db4-4d9e-b66f-d50316f80df4[267440]: [NOTICE]   (267462) : New worker (267478) forked
Jan 20 14:50:21 compute-1 neutron-haproxy-ovnmeta-762e1859-4db4-4d9e-b66f-d50316f80df4[267440]: [NOTICE]   (267462) : Loading success.
Jan 20 14:50:21 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:50:21 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:50:21 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:50:21.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:50:22 compute-1 nova_compute[225855]: 2026-01-20 14:50:22.116 225859 DEBUG nova.objects.instance [None req-caeffa0b-6abc-4875-a420-9b31a2b0b774 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Lazy-loading 'migration_context' on Instance uuid d5167284-086d-4b37-98b0-3853baabf418 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:50:22 compute-1 nova_compute[225855]: 2026-01-20 14:50:22.303 225859 DEBUG nova.virt.libvirt.driver [None req-caeffa0b-6abc-4875-a420-9b31a2b0b774 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: d5167284-086d-4b37-98b0-3853baabf418] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 20 14:50:22 compute-1 nova_compute[225855]: 2026-01-20 14:50:22.303 225859 DEBUG nova.virt.libvirt.driver [None req-caeffa0b-6abc-4875-a420-9b31a2b0b774 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: d5167284-086d-4b37-98b0-3853baabf418] Ensure instance console log exists: /var/lib/nova/instances/d5167284-086d-4b37-98b0-3853baabf418/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 20 14:50:22 compute-1 nova_compute[225855]: 2026-01-20 14:50:22.304 225859 DEBUG oslo_concurrency.lockutils [None req-caeffa0b-6abc-4875-a420-9b31a2b0b774 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:50:22 compute-1 nova_compute[225855]: 2026-01-20 14:50:22.304 225859 DEBUG oslo_concurrency.lockutils [None req-caeffa0b-6abc-4875-a420-9b31a2b0b774 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:50:22 compute-1 nova_compute[225855]: 2026-01-20 14:50:22.304 225859 DEBUG oslo_concurrency.lockutils [None req-caeffa0b-6abc-4875-a420-9b31a2b0b774 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:50:22 compute-1 ceph-mon[81775]: pgmap v1877: 321 pgs: 321 active+clean; 167 MiB data, 866 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 39 KiB/s wr, 151 op/s
Jan 20 14:50:22 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e249 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:50:22 compute-1 nova_compute[225855]: 2026-01-20 14:50:22.525 225859 DEBUG nova.network.neutron [None req-caeffa0b-6abc-4875-a420-9b31a2b0b774 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: d5167284-086d-4b37-98b0-3853baabf418] Successfully created port: 86cabae0-8599-4330-b71c-91eb2e6b76d8 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 20 14:50:22 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:50:22 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:50:22 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:50:22.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:50:23 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:50:23 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:50:23 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:50:23.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:50:23 compute-1 nova_compute[225855]: 2026-01-20 14:50:23.949 225859 DEBUG nova.network.neutron [None req-caeffa0b-6abc-4875-a420-9b31a2b0b774 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: d5167284-086d-4b37-98b0-3853baabf418] Successfully updated port: 86cabae0-8599-4330-b71c-91eb2e6b76d8 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 20 14:50:23 compute-1 nova_compute[225855]: 2026-01-20 14:50:23.969 225859 DEBUG oslo_concurrency.lockutils [None req-caeffa0b-6abc-4875-a420-9b31a2b0b774 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Acquiring lock "refresh_cache-d5167284-086d-4b37-98b0-3853baabf418" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:50:23 compute-1 nova_compute[225855]: 2026-01-20 14:50:23.970 225859 DEBUG oslo_concurrency.lockutils [None req-caeffa0b-6abc-4875-a420-9b31a2b0b774 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Acquired lock "refresh_cache-d5167284-086d-4b37-98b0-3853baabf418" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:50:23 compute-1 nova_compute[225855]: 2026-01-20 14:50:23.970 225859 DEBUG nova.network.neutron [None req-caeffa0b-6abc-4875-a420-9b31a2b0b774 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: d5167284-086d-4b37-98b0-3853baabf418] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 14:50:24 compute-1 nova_compute[225855]: 2026-01-20 14:50:24.026 225859 DEBUG nova.compute.manager [req-5c2ab044-1c73-4429-b3b0-ee4004432102 req-1ea80a80-520b-4504-815f-0d1408c99c2e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d5167284-086d-4b37-98b0-3853baabf418] Received event network-changed-86cabae0-8599-4330-b71c-91eb2e6b76d8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:50:24 compute-1 nova_compute[225855]: 2026-01-20 14:50:24.027 225859 DEBUG nova.compute.manager [req-5c2ab044-1c73-4429-b3b0-ee4004432102 req-1ea80a80-520b-4504-815f-0d1408c99c2e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d5167284-086d-4b37-98b0-3853baabf418] Refreshing instance network info cache due to event network-changed-86cabae0-8599-4330-b71c-91eb2e6b76d8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 14:50:24 compute-1 nova_compute[225855]: 2026-01-20 14:50:24.027 225859 DEBUG oslo_concurrency.lockutils [req-5c2ab044-1c73-4429-b3b0-ee4004432102 req-1ea80a80-520b-4504-815f-0d1408c99c2e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-d5167284-086d-4b37-98b0-3853baabf418" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:50:24 compute-1 nova_compute[225855]: 2026-01-20 14:50:24.160 225859 DEBUG nova.network.neutron [None req-caeffa0b-6abc-4875-a420-9b31a2b0b774 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: d5167284-086d-4b37-98b0-3853baabf418] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 20 14:50:24 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:50:24 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:50:24 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:50:24.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:50:24 compute-1 ceph-mon[81775]: pgmap v1878: 321 pgs: 321 active+clean; 184 MiB data, 856 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 955 KiB/s wr, 168 op/s
Jan 20 14:50:24 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/735507930' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:50:25 compute-1 nova_compute[225855]: 2026-01-20 14:50:25.403 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:50:25 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:50:25 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:50:25 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:50:25.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:50:25 compute-1 nova_compute[225855]: 2026-01-20 14:50:25.840 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:50:25 compute-1 nova_compute[225855]: 2026-01-20 14:50:25.908 225859 DEBUG nova.network.neutron [None req-caeffa0b-6abc-4875-a420-9b31a2b0b774 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: d5167284-086d-4b37-98b0-3853baabf418] Updating instance_info_cache with network_info: [{"id": "86cabae0-8599-4330-b71c-91eb2e6b76d8", "address": "fa:16:3e:30:05:34", "network": {"id": "3379e2b3-ffb2-4391-969b-c9dc51bfbe25", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1112843240-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "acb30fbc0e3749e390d7f867060b5a2a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap86cabae0-85", "ovs_interfaceid": "86cabae0-8599-4330-b71c-91eb2e6b76d8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:50:25 compute-1 nova_compute[225855]: 2026-01-20 14:50:25.936 225859 DEBUG oslo_concurrency.lockutils [None req-caeffa0b-6abc-4875-a420-9b31a2b0b774 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Releasing lock "refresh_cache-d5167284-086d-4b37-98b0-3853baabf418" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:50:25 compute-1 nova_compute[225855]: 2026-01-20 14:50:25.936 225859 DEBUG nova.compute.manager [None req-caeffa0b-6abc-4875-a420-9b31a2b0b774 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: d5167284-086d-4b37-98b0-3853baabf418] Instance network_info: |[{"id": "86cabae0-8599-4330-b71c-91eb2e6b76d8", "address": "fa:16:3e:30:05:34", "network": {"id": "3379e2b3-ffb2-4391-969b-c9dc51bfbe25", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1112843240-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "acb30fbc0e3749e390d7f867060b5a2a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap86cabae0-85", "ovs_interfaceid": "86cabae0-8599-4330-b71c-91eb2e6b76d8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 20 14:50:25 compute-1 nova_compute[225855]: 2026-01-20 14:50:25.937 225859 DEBUG oslo_concurrency.lockutils [req-5c2ab044-1c73-4429-b3b0-ee4004432102 req-1ea80a80-520b-4504-815f-0d1408c99c2e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-d5167284-086d-4b37-98b0-3853baabf418" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:50:25 compute-1 nova_compute[225855]: 2026-01-20 14:50:25.937 225859 DEBUG nova.network.neutron [req-5c2ab044-1c73-4429-b3b0-ee4004432102 req-1ea80a80-520b-4504-815f-0d1408c99c2e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d5167284-086d-4b37-98b0-3853baabf418] Refreshing network info cache for port 86cabae0-8599-4330-b71c-91eb2e6b76d8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 14:50:25 compute-1 nova_compute[225855]: 2026-01-20 14:50:25.943 225859 DEBUG nova.virt.libvirt.driver [None req-caeffa0b-6abc-4875-a420-9b31a2b0b774 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: d5167284-086d-4b37-98b0-3853baabf418] Start _get_guest_xml network_info=[{"id": "86cabae0-8599-4330-b71c-91eb2e6b76d8", "address": "fa:16:3e:30:05:34", "network": {"id": "3379e2b3-ffb2-4391-969b-c9dc51bfbe25", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1112843240-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "acb30fbc0e3749e390d7f867060b5a2a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap86cabae0-85", "ovs_interfaceid": "86cabae0-8599-4330-b71c-91eb2e6b76d8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'device_type': 'disk', 'encryption_options': None, 'size': 0, 'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'guest_format': None, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'image_id': 'a32b3e07-16d8-46fd-9a7b-c242c432fcf9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 20 14:50:25 compute-1 nova_compute[225855]: 2026-01-20 14:50:25.948 225859 WARNING nova.virt.libvirt.driver [None req-caeffa0b-6abc-4875-a420-9b31a2b0b774 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 14:50:25 compute-1 nova_compute[225855]: 2026-01-20 14:50:25.954 225859 DEBUG nova.virt.libvirt.host [None req-caeffa0b-6abc-4875-a420-9b31a2b0b774 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 20 14:50:25 compute-1 nova_compute[225855]: 2026-01-20 14:50:25.955 225859 DEBUG nova.virt.libvirt.host [None req-caeffa0b-6abc-4875-a420-9b31a2b0b774 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 20 14:50:25 compute-1 nova_compute[225855]: 2026-01-20 14:50:25.959 225859 DEBUG nova.virt.libvirt.host [None req-caeffa0b-6abc-4875-a420-9b31a2b0b774 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 20 14:50:25 compute-1 nova_compute[225855]: 2026-01-20 14:50:25.960 225859 DEBUG nova.virt.libvirt.host [None req-caeffa0b-6abc-4875-a420-9b31a2b0b774 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 20 14:50:25 compute-1 nova_compute[225855]: 2026-01-20 14:50:25.961 225859 DEBUG nova.virt.libvirt.driver [None req-caeffa0b-6abc-4875-a420-9b31a2b0b774 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 20 14:50:25 compute-1 nova_compute[225855]: 2026-01-20 14:50:25.962 225859 DEBUG nova.virt.hardware [None req-caeffa0b-6abc-4875-a420-9b31a2b0b774 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-20T14:21:55Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='522deaab-a741-4dbb-932d-d8b13a211c33',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 20 14:50:25 compute-1 nova_compute[225855]: 2026-01-20 14:50:25.962 225859 DEBUG nova.virt.hardware [None req-caeffa0b-6abc-4875-a420-9b31a2b0b774 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 20 14:50:25 compute-1 nova_compute[225855]: 2026-01-20 14:50:25.963 225859 DEBUG nova.virt.hardware [None req-caeffa0b-6abc-4875-a420-9b31a2b0b774 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 20 14:50:25 compute-1 nova_compute[225855]: 2026-01-20 14:50:25.963 225859 DEBUG nova.virt.hardware [None req-caeffa0b-6abc-4875-a420-9b31a2b0b774 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 20 14:50:25 compute-1 nova_compute[225855]: 2026-01-20 14:50:25.964 225859 DEBUG nova.virt.hardware [None req-caeffa0b-6abc-4875-a420-9b31a2b0b774 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 20 14:50:25 compute-1 nova_compute[225855]: 2026-01-20 14:50:25.964 225859 DEBUG nova.virt.hardware [None req-caeffa0b-6abc-4875-a420-9b31a2b0b774 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 20 14:50:25 compute-1 nova_compute[225855]: 2026-01-20 14:50:25.964 225859 DEBUG nova.virt.hardware [None req-caeffa0b-6abc-4875-a420-9b31a2b0b774 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 20 14:50:25 compute-1 nova_compute[225855]: 2026-01-20 14:50:25.965 225859 DEBUG nova.virt.hardware [None req-caeffa0b-6abc-4875-a420-9b31a2b0b774 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 20 14:50:25 compute-1 nova_compute[225855]: 2026-01-20 14:50:25.965 225859 DEBUG nova.virt.hardware [None req-caeffa0b-6abc-4875-a420-9b31a2b0b774 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 20 14:50:25 compute-1 nova_compute[225855]: 2026-01-20 14:50:25.966 225859 DEBUG nova.virt.hardware [None req-caeffa0b-6abc-4875-a420-9b31a2b0b774 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 20 14:50:25 compute-1 nova_compute[225855]: 2026-01-20 14:50:25.967 225859 DEBUG nova.virt.hardware [None req-caeffa0b-6abc-4875-a420-9b31a2b0b774 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 20 14:50:25 compute-1 nova_compute[225855]: 2026-01-20 14:50:25.971 225859 DEBUG oslo_concurrency.processutils [None req-caeffa0b-6abc-4875-a420-9b31a2b0b774 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:50:26 compute-1 nova_compute[225855]: 2026-01-20 14:50:26.111 225859 DEBUG oslo_concurrency.lockutils [None req-2c5030c6-41ba-4de4-a518-9b488a30855c 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Acquiring lock "fdeb13eb-edb4-4bff-aeef-2671ba9d4618" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:50:26 compute-1 nova_compute[225855]: 2026-01-20 14:50:26.112 225859 DEBUG oslo_concurrency.lockutils [None req-2c5030c6-41ba-4de4-a518-9b488a30855c 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Lock "fdeb13eb-edb4-4bff-aeef-2671ba9d4618" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:50:26 compute-1 nova_compute[225855]: 2026-01-20 14:50:26.113 225859 DEBUG oslo_concurrency.lockutils [None req-2c5030c6-41ba-4de4-a518-9b488a30855c 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Acquiring lock "fdeb13eb-edb4-4bff-aeef-2671ba9d4618-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:50:26 compute-1 nova_compute[225855]: 2026-01-20 14:50:26.113 225859 DEBUG oslo_concurrency.lockutils [None req-2c5030c6-41ba-4de4-a518-9b488a30855c 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Lock "fdeb13eb-edb4-4bff-aeef-2671ba9d4618-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:50:26 compute-1 nova_compute[225855]: 2026-01-20 14:50:26.113 225859 DEBUG oslo_concurrency.lockutils [None req-2c5030c6-41ba-4de4-a518-9b488a30855c 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Lock "fdeb13eb-edb4-4bff-aeef-2671ba9d4618-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:50:26 compute-1 nova_compute[225855]: 2026-01-20 14:50:26.115 225859 INFO nova.compute.manager [None req-2c5030c6-41ba-4de4-a518-9b488a30855c 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] [instance: fdeb13eb-edb4-4bff-aeef-2671ba9d4618] Terminating instance
Jan 20 14:50:26 compute-1 nova_compute[225855]: 2026-01-20 14:50:26.117 225859 DEBUG nova.compute.manager [None req-2c5030c6-41ba-4de4-a518-9b488a30855c 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] [instance: fdeb13eb-edb4-4bff-aeef-2671ba9d4618] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 20 14:50:26 compute-1 nova_compute[225855]: 2026-01-20 14:50:26.148 225859 DEBUG nova.compute.manager [req-189f70e9-8765-4736-98cf-bae84dd0851d req-cd13bed3-b2fb-4b74-b7ca-a84746d67410 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: fdeb13eb-edb4-4bff-aeef-2671ba9d4618] Received event network-vif-plugged-6855cb4f-4178-4447-af36-126ade033206 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:50:26 compute-1 nova_compute[225855]: 2026-01-20 14:50:26.149 225859 DEBUG oslo_concurrency.lockutils [req-189f70e9-8765-4736-98cf-bae84dd0851d req-cd13bed3-b2fb-4b74-b7ca-a84746d67410 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "fdeb13eb-edb4-4bff-aeef-2671ba9d4618-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:50:26 compute-1 nova_compute[225855]: 2026-01-20 14:50:26.149 225859 DEBUG oslo_concurrency.lockutils [req-189f70e9-8765-4736-98cf-bae84dd0851d req-cd13bed3-b2fb-4b74-b7ca-a84746d67410 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "fdeb13eb-edb4-4bff-aeef-2671ba9d4618-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:50:26 compute-1 nova_compute[225855]: 2026-01-20 14:50:26.150 225859 DEBUG oslo_concurrency.lockutils [req-189f70e9-8765-4736-98cf-bae84dd0851d req-cd13bed3-b2fb-4b74-b7ca-a84746d67410 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "fdeb13eb-edb4-4bff-aeef-2671ba9d4618-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:50:26 compute-1 nova_compute[225855]: 2026-01-20 14:50:26.150 225859 DEBUG nova.compute.manager [req-189f70e9-8765-4736-98cf-bae84dd0851d req-cd13bed3-b2fb-4b74-b7ca-a84746d67410 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: fdeb13eb-edb4-4bff-aeef-2671ba9d4618] No waiting events found dispatching network-vif-plugged-6855cb4f-4178-4447-af36-126ade033206 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:50:26 compute-1 nova_compute[225855]: 2026-01-20 14:50:26.151 225859 WARNING nova.compute.manager [req-189f70e9-8765-4736-98cf-bae84dd0851d req-cd13bed3-b2fb-4b74-b7ca-a84746d67410 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: fdeb13eb-edb4-4bff-aeef-2671ba9d4618] Received unexpected event network-vif-plugged-6855cb4f-4178-4447-af36-126ade033206 for instance with vm_state active and task_state deleting.
Jan 20 14:50:26 compute-1 nova_compute[225855]: 2026-01-20 14:50:26.151 225859 DEBUG nova.compute.manager [req-189f70e9-8765-4736-98cf-bae84dd0851d req-cd13bed3-b2fb-4b74-b7ca-a84746d67410 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: fdeb13eb-edb4-4bff-aeef-2671ba9d4618] Received event network-vif-plugged-6855cb4f-4178-4447-af36-126ade033206 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:50:26 compute-1 nova_compute[225855]: 2026-01-20 14:50:26.151 225859 DEBUG oslo_concurrency.lockutils [req-189f70e9-8765-4736-98cf-bae84dd0851d req-cd13bed3-b2fb-4b74-b7ca-a84746d67410 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "fdeb13eb-edb4-4bff-aeef-2671ba9d4618-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:50:26 compute-1 nova_compute[225855]: 2026-01-20 14:50:26.152 225859 DEBUG oslo_concurrency.lockutils [req-189f70e9-8765-4736-98cf-bae84dd0851d req-cd13bed3-b2fb-4b74-b7ca-a84746d67410 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "fdeb13eb-edb4-4bff-aeef-2671ba9d4618-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:50:26 compute-1 nova_compute[225855]: 2026-01-20 14:50:26.152 225859 DEBUG oslo_concurrency.lockutils [req-189f70e9-8765-4736-98cf-bae84dd0851d req-cd13bed3-b2fb-4b74-b7ca-a84746d67410 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "fdeb13eb-edb4-4bff-aeef-2671ba9d4618-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:50:26 compute-1 nova_compute[225855]: 2026-01-20 14:50:26.153 225859 DEBUG nova.compute.manager [req-189f70e9-8765-4736-98cf-bae84dd0851d req-cd13bed3-b2fb-4b74-b7ca-a84746d67410 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: fdeb13eb-edb4-4bff-aeef-2671ba9d4618] No waiting events found dispatching network-vif-plugged-6855cb4f-4178-4447-af36-126ade033206 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:50:26 compute-1 kernel: tap6855cb4f-41 (unregistering): left promiscuous mode
Jan 20 14:50:26 compute-1 nova_compute[225855]: 2026-01-20 14:50:26.153 225859 WARNING nova.compute.manager [req-189f70e9-8765-4736-98cf-bae84dd0851d req-cd13bed3-b2fb-4b74-b7ca-a84746d67410 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: fdeb13eb-edb4-4bff-aeef-2671ba9d4618] Received unexpected event network-vif-plugged-6855cb4f-4178-4447-af36-126ade033206 for instance with vm_state active and task_state deleting.
Jan 20 14:50:26 compute-1 NetworkManager[49104]: <info>  [1768920626.1584] device (tap6855cb4f-41): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 20 14:50:26 compute-1 ovn_controller[130490]: 2026-01-20T14:50:26Z|00418|binding|INFO|Releasing lport 6855cb4f-4178-4447-af36-126ade033206 from this chassis (sb_readonly=0)
Jan 20 14:50:26 compute-1 ovn_controller[130490]: 2026-01-20T14:50:26Z|00419|binding|INFO|Setting lport 6855cb4f-4178-4447-af36-126ade033206 down in Southbound
Jan 20 14:50:26 compute-1 nova_compute[225855]: 2026-01-20 14:50:26.168 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:50:26 compute-1 ovn_controller[130490]: 2026-01-20T14:50:26Z|00420|binding|INFO|Removing iface tap6855cb4f-41 ovn-installed in OVS
Jan 20 14:50:26 compute-1 nova_compute[225855]: 2026-01-20 14:50:26.173 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:50:26 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:50:26.183 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4f:3f:20 10.100.0.12'], port_security=['fa:16:3e:4f:3f:20 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'fdeb13eb-edb4-4bff-aeef-2671ba9d4618', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-762e1859-4db4-4d9e-b66f-d50316f80df4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1c5f03d46c0c4162a3b2f1530850bb6c', 'neutron:revision_number': '8', 'neutron:security_group_ids': '80535eda-fa59-4edc-8e3d-9bfea6517730', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.180', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2474a8ca-bb96-4cae-9133-23419b81a9fc, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], logical_port=6855cb4f-4178-4447-af36-126ade033206) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:50:26 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:50:26.184 140354 INFO neutron.agent.ovn.metadata.agent [-] Port 6855cb4f-4178-4447-af36-126ade033206 in datapath 762e1859-4db4-4d9e-b66f-d50316f80df4 unbound from our chassis
Jan 20 14:50:26 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:50:26.186 140354 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 762e1859-4db4-4d9e-b66f-d50316f80df4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 20 14:50:26 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:50:26.186 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[1046d355-d0d0-4971-918a-36badc8972f0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:50:26 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:50:26.187 140354 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-762e1859-4db4-4d9e-b66f-d50316f80df4 namespace which is not needed anymore
Jan 20 14:50:26 compute-1 nova_compute[225855]: 2026-01-20 14:50:26.195 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:50:26 compute-1 systemd[1]: machine-qemu\x2d48\x2dinstance\x2d00000069.scope: Deactivated successfully.
Jan 20 14:50:26 compute-1 systemd[1]: machine-qemu\x2d48\x2dinstance\x2d00000069.scope: Consumed 5.707s CPU time.
Jan 20 14:50:26 compute-1 systemd-machined[194361]: Machine qemu-48-instance-00000069 terminated.
Jan 20 14:50:26 compute-1 neutron-haproxy-ovnmeta-762e1859-4db4-4d9e-b66f-d50316f80df4[267440]: [NOTICE]   (267462) : haproxy version is 2.8.14-c23fe91
Jan 20 14:50:26 compute-1 neutron-haproxy-ovnmeta-762e1859-4db4-4d9e-b66f-d50316f80df4[267440]: [NOTICE]   (267462) : path to executable is /usr/sbin/haproxy
Jan 20 14:50:26 compute-1 neutron-haproxy-ovnmeta-762e1859-4db4-4d9e-b66f-d50316f80df4[267440]: [WARNING]  (267462) : Exiting Master process...
Jan 20 14:50:26 compute-1 neutron-haproxy-ovnmeta-762e1859-4db4-4d9e-b66f-d50316f80df4[267440]: [ALERT]    (267462) : Current worker (267478) exited with code 143 (Terminated)
Jan 20 14:50:26 compute-1 neutron-haproxy-ovnmeta-762e1859-4db4-4d9e-b66f-d50316f80df4[267440]: [WARNING]  (267462) : All workers exited. Exiting... (0)
Jan 20 14:50:26 compute-1 systemd[1]: libpod-77c1e4defe56387ddf8a0a85cd09f24ebb67ce03630c8150f6659c6c85bb6552.scope: Deactivated successfully.
Jan 20 14:50:26 compute-1 podman[267556]: 2026-01-20 14:50:26.315336167 +0000 UTC m=+0.044087886 container died 77c1e4defe56387ddf8a0a85cd09f24ebb67ce03630c8150f6659c6c85bb6552 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-762e1859-4db4-4d9e-b66f-d50316f80df4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 20 14:50:26 compute-1 nova_compute[225855]: 2026-01-20 14:50:26.333 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:50:26 compute-1 nova_compute[225855]: 2026-01-20 14:50:26.340 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:50:26 compute-1 nova_compute[225855]: 2026-01-20 14:50:26.345 225859 INFO nova.virt.libvirt.driver [-] [instance: fdeb13eb-edb4-4bff-aeef-2671ba9d4618] Instance destroyed successfully.
Jan 20 14:50:26 compute-1 nova_compute[225855]: 2026-01-20 14:50:26.345 225859 DEBUG nova.objects.instance [None req-2c5030c6-41ba-4de4-a518-9b488a30855c 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Lazy-loading 'resources' on Instance uuid fdeb13eb-edb4-4bff-aeef-2671ba9d4618 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:50:26 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-77c1e4defe56387ddf8a0a85cd09f24ebb67ce03630c8150f6659c6c85bb6552-userdata-shm.mount: Deactivated successfully.
Jan 20 14:50:26 compute-1 systemd[1]: var-lib-containers-storage-overlay-3e4bf7e696f3dd4d151e1f99ee68b890d05b859756f606d057e896cbb8a0594b-merged.mount: Deactivated successfully.
Jan 20 14:50:26 compute-1 podman[267556]: 2026-01-20 14:50:26.357213519 +0000 UTC m=+0.085965238 container cleanup 77c1e4defe56387ddf8a0a85cd09f24ebb67ce03630c8150f6659c6c85bb6552 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-762e1859-4db4-4d9e-b66f-d50316f80df4, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:50:26 compute-1 systemd[1]: libpod-conmon-77c1e4defe56387ddf8a0a85cd09f24ebb67ce03630c8150f6659c6c85bb6552.scope: Deactivated successfully.
Jan 20 14:50:26 compute-1 nova_compute[225855]: 2026-01-20 14:50:26.367 225859 DEBUG nova.virt.libvirt.vif [None req-2c5030c6-41ba-4de4-a518-9b488a30855c 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T14:49:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-2012792656',display_name='tempest-ServerActionsTestJSON-server-2012792656',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-2012792656',id=105,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLgEzx5mLsSqRL7L9WKOzM+WdeJ40U103wY9H3VMZ41G/sN5tQtQSt9lXWKTyc6pt00bfKD0E9GPugNMpy+dzSSpK23o3CgadkzAfAvjQCeCPOSp3fX13FGApomGd1HRCQ==',key_name='tempest-keypair-1602241722',keypairs=<?>,launch_index=0,launched_at=2026-01-20T14:50:21Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='1c5f03d46c0c4162a3b2f1530850bb6c',ramdisk_id='',reservation_id='r-c8be97ji',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1020442335',owner_user_name='tempest-ServerActionsTestJSON-1020442335-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T14:50:21Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='3e9278fdb9e645b7938f3edb20c4d3cf',uuid=fdeb13eb-edb4-4bff-aeef-2671ba9d4618,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "6855cb4f-4178-4447-af36-126ade033206", "address": "fa:16:3e:4f:3f:20", "network": {"id": "762e1859-4db4-4d9e-b66f-d50316f80df4", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1917526237-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.180", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c5f03d46c0c4162a3b2f1530850bb6c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6855cb4f-41", "ovs_interfaceid": "6855cb4f-4178-4447-af36-126ade033206", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 20 14:50:26 compute-1 nova_compute[225855]: 2026-01-20 14:50:26.367 225859 DEBUG nova.network.os_vif_util [None req-2c5030c6-41ba-4de4-a518-9b488a30855c 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Converting VIF {"id": "6855cb4f-4178-4447-af36-126ade033206", "address": "fa:16:3e:4f:3f:20", "network": {"id": "762e1859-4db4-4d9e-b66f-d50316f80df4", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1917526237-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.180", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c5f03d46c0c4162a3b2f1530850bb6c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6855cb4f-41", "ovs_interfaceid": "6855cb4f-4178-4447-af36-126ade033206", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:50:26 compute-1 nova_compute[225855]: 2026-01-20 14:50:26.368 225859 DEBUG nova.network.os_vif_util [None req-2c5030c6-41ba-4de4-a518-9b488a30855c 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:4f:3f:20,bridge_name='br-int',has_traffic_filtering=True,id=6855cb4f-4178-4447-af36-126ade033206,network=Network(762e1859-4db4-4d9e-b66f-d50316f80df4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6855cb4f-41') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:50:26 compute-1 nova_compute[225855]: 2026-01-20 14:50:26.369 225859 DEBUG os_vif [None req-2c5030c6-41ba-4de4-a518-9b488a30855c 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:4f:3f:20,bridge_name='br-int',has_traffic_filtering=True,id=6855cb4f-4178-4447-af36-126ade033206,network=Network(762e1859-4db4-4d9e-b66f-d50316f80df4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6855cb4f-41') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 20 14:50:26 compute-1 nova_compute[225855]: 2026-01-20 14:50:26.371 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:50:26 compute-1 nova_compute[225855]: 2026-01-20 14:50:26.371 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6855cb4f-41, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:50:26 compute-1 nova_compute[225855]: 2026-01-20 14:50:26.373 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:50:26 compute-1 nova_compute[225855]: 2026-01-20 14:50:26.375 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 14:50:26 compute-1 nova_compute[225855]: 2026-01-20 14:50:26.377 225859 INFO os_vif [None req-2c5030c6-41ba-4de4-a518-9b488a30855c 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:4f:3f:20,bridge_name='br-int',has_traffic_filtering=True,id=6855cb4f-4178-4447-af36-126ade033206,network=Network(762e1859-4db4-4d9e-b66f-d50316f80df4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6855cb4f-41')
Jan 20 14:50:26 compute-1 podman[267593]: 2026-01-20 14:50:26.420689581 +0000 UTC m=+0.041786600 container remove 77c1e4defe56387ddf8a0a85cd09f24ebb67ce03630c8150f6659c6c85bb6552 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-762e1859-4db4-4d9e-b66f-d50316f80df4, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251202)
Jan 20 14:50:26 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:50:26.426 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[8a0ad1e5-f3fd-4439-813b-fafa80cec591]: (4, ('Tue Jan 20 02:50:26 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-762e1859-4db4-4d9e-b66f-d50316f80df4 (77c1e4defe56387ddf8a0a85cd09f24ebb67ce03630c8150f6659c6c85bb6552)\n77c1e4defe56387ddf8a0a85cd09f24ebb67ce03630c8150f6659c6c85bb6552\nTue Jan 20 02:50:26 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-762e1859-4db4-4d9e-b66f-d50316f80df4 (77c1e4defe56387ddf8a0a85cd09f24ebb67ce03630c8150f6659c6c85bb6552)\n77c1e4defe56387ddf8a0a85cd09f24ebb67ce03630c8150f6659c6c85bb6552\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:50:26 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:50:26.428 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[bc3bfc06-33ba-454b-bb5b-626206412c58]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:50:26 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:50:26.429 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap762e1859-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:50:26 compute-1 nova_compute[225855]: 2026-01-20 14:50:26.430 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:50:26 compute-1 kernel: tap762e1859-40: left promiscuous mode
Jan 20 14:50:26 compute-1 nova_compute[225855]: 2026-01-20 14:50:26.444 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:50:26 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:50:26 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2577488927' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:50:26 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:50:26.446 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[fcbd5b33-97b9-4847-9165-d2dcc687a541]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:50:26 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:50:26.458 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[84c9d80d-cd96-49ef-ad86-edb08e5ae557]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:50:26 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:50:26.459 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[148c4d4c-cb0e-447d-ba3f-4af735b0cde1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:50:26 compute-1 nova_compute[225855]: 2026-01-20 14:50:26.470 225859 DEBUG oslo_concurrency.processutils [None req-caeffa0b-6abc-4875-a420-9b31a2b0b774 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.499s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:50:26 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:50:26.477 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[7654c02b-c2c7-40ef-b5dc-07baeffceeb8]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 565962, 'reachable_time': 39487, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 267626, 'error': None, 'target': 'ovnmeta-762e1859-4db4-4d9e-b66f-d50316f80df4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:50:26 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:50:26.480 140466 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-762e1859-4db4-4d9e-b66f-d50316f80df4 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 20 14:50:26 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:50:26.480 140466 DEBUG oslo.privsep.daemon [-] privsep: reply[b0da330c-4325-4077-8935-f455e3a2561b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:50:26 compute-1 systemd[1]: run-netns-ovnmeta\x2d762e1859\x2d4db4\x2d4d9e\x2db66f\x2dd50316f80df4.mount: Deactivated successfully.
Jan 20 14:50:26 compute-1 nova_compute[225855]: 2026-01-20 14:50:26.496 225859 DEBUG nova.storage.rbd_utils [None req-caeffa0b-6abc-4875-a420-9b31a2b0b774 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] rbd image d5167284-086d-4b37-98b0-3853baabf418_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:50:26 compute-1 nova_compute[225855]: 2026-01-20 14:50:26.499 225859 DEBUG oslo_concurrency.processutils [None req-caeffa0b-6abc-4875-a420-9b31a2b0b774 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:50:26 compute-1 nova_compute[225855]: 2026-01-20 14:50:26.751 225859 INFO nova.virt.libvirt.driver [None req-2c5030c6-41ba-4de4-a518-9b488a30855c 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] [instance: fdeb13eb-edb4-4bff-aeef-2671ba9d4618] Deleting instance files /var/lib/nova/instances/fdeb13eb-edb4-4bff-aeef-2671ba9d4618_del
Jan 20 14:50:26 compute-1 nova_compute[225855]: 2026-01-20 14:50:26.753 225859 INFO nova.virt.libvirt.driver [None req-2c5030c6-41ba-4de4-a518-9b488a30855c 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] [instance: fdeb13eb-edb4-4bff-aeef-2671ba9d4618] Deletion of /var/lib/nova/instances/fdeb13eb-edb4-4bff-aeef-2671ba9d4618_del complete
Jan 20 14:50:26 compute-1 nova_compute[225855]: 2026-01-20 14:50:26.823 225859 INFO nova.compute.manager [None req-2c5030c6-41ba-4de4-a518-9b488a30855c 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] [instance: fdeb13eb-edb4-4bff-aeef-2671ba9d4618] Took 0.71 seconds to destroy the instance on the hypervisor.
Jan 20 14:50:26 compute-1 nova_compute[225855]: 2026-01-20 14:50:26.823 225859 DEBUG oslo.service.loopingcall [None req-2c5030c6-41ba-4de4-a518-9b488a30855c 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 20 14:50:26 compute-1 nova_compute[225855]: 2026-01-20 14:50:26.824 225859 DEBUG nova.compute.manager [-] [instance: fdeb13eb-edb4-4bff-aeef-2671ba9d4618] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 20 14:50:26 compute-1 nova_compute[225855]: 2026-01-20 14:50:26.824 225859 DEBUG nova.network.neutron [-] [instance: fdeb13eb-edb4-4bff-aeef-2671ba9d4618] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 20 14:50:26 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:50:26 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:50:26 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:50:26.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:50:26 compute-1 nova_compute[225855]: 2026-01-20 14:50:26.912 225859 DEBUG oslo_concurrency.processutils [None req-caeffa0b-6abc-4875-a420-9b31a2b0b774 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.413s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:50:26 compute-1 nova_compute[225855]: 2026-01-20 14:50:26.915 225859 DEBUG nova.virt.libvirt.vif [None req-caeffa0b-6abc-4875-a420-9b31a2b0b774 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T14:50:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1906505493',display_name='tempest-ServerDiskConfigTestJSON-server-1906505493',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1906505493',id=109,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='acb30fbc0e3749e390d7f867060b5a2a',ramdisk_id='',reservation_id='r-5s3il9ii',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerDiskConfigTestJSON-1806346246',owner_user_name='tempest-ServerDiskConfigTestJSON-1806346246-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T14:50:20Z,user_data=None,user_id='a1bd93d04cc4468abe1d5c61f5144191',uuid=d5167284-086d-4b37-98b0-3853baabf418,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "86cabae0-8599-4330-b71c-91eb2e6b76d8", "address": "fa:16:3e:30:05:34", "network": {"id": "3379e2b3-ffb2-4391-969b-c9dc51bfbe25", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1112843240-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "acb30fbc0e3749e390d7f867060b5a2a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap86cabae0-85", "ovs_interfaceid": "86cabae0-8599-4330-b71c-91eb2e6b76d8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 20 14:50:26 compute-1 nova_compute[225855]: 2026-01-20 14:50:26.915 225859 DEBUG nova.network.os_vif_util [None req-caeffa0b-6abc-4875-a420-9b31a2b0b774 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Converting VIF {"id": "86cabae0-8599-4330-b71c-91eb2e6b76d8", "address": "fa:16:3e:30:05:34", "network": {"id": "3379e2b3-ffb2-4391-969b-c9dc51bfbe25", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1112843240-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "acb30fbc0e3749e390d7f867060b5a2a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap86cabae0-85", "ovs_interfaceid": "86cabae0-8599-4330-b71c-91eb2e6b76d8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:50:26 compute-1 nova_compute[225855]: 2026-01-20 14:50:26.916 225859 DEBUG nova.network.os_vif_util [None req-caeffa0b-6abc-4875-a420-9b31a2b0b774 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:30:05:34,bridge_name='br-int',has_traffic_filtering=True,id=86cabae0-8599-4330-b71c-91eb2e6b76d8,network=Network(3379e2b3-ffb2-4391-969b-c9dc51bfbe25),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap86cabae0-85') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:50:26 compute-1 nova_compute[225855]: 2026-01-20 14:50:26.919 225859 DEBUG nova.objects.instance [None req-caeffa0b-6abc-4875-a420-9b31a2b0b774 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Lazy-loading 'pci_devices' on Instance uuid d5167284-086d-4b37-98b0-3853baabf418 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:50:26 compute-1 nova_compute[225855]: 2026-01-20 14:50:26.934 225859 DEBUG nova.virt.libvirt.driver [None req-caeffa0b-6abc-4875-a420-9b31a2b0b774 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: d5167284-086d-4b37-98b0-3853baabf418] End _get_guest_xml xml=<domain type="kvm">
Jan 20 14:50:26 compute-1 nova_compute[225855]:   <uuid>d5167284-086d-4b37-98b0-3853baabf418</uuid>
Jan 20 14:50:26 compute-1 nova_compute[225855]:   <name>instance-0000006d</name>
Jan 20 14:50:26 compute-1 nova_compute[225855]:   <memory>131072</memory>
Jan 20 14:50:26 compute-1 nova_compute[225855]:   <vcpu>1</vcpu>
Jan 20 14:50:26 compute-1 nova_compute[225855]:   <metadata>
Jan 20 14:50:26 compute-1 nova_compute[225855]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 14:50:26 compute-1 nova_compute[225855]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 14:50:26 compute-1 nova_compute[225855]:       <nova:name>tempest-ServerDiskConfigTestJSON-server-1906505493</nova:name>
Jan 20 14:50:26 compute-1 nova_compute[225855]:       <nova:creationTime>2026-01-20 14:50:25</nova:creationTime>
Jan 20 14:50:26 compute-1 nova_compute[225855]:       <nova:flavor name="m1.nano">
Jan 20 14:50:26 compute-1 nova_compute[225855]:         <nova:memory>128</nova:memory>
Jan 20 14:50:26 compute-1 nova_compute[225855]:         <nova:disk>1</nova:disk>
Jan 20 14:50:26 compute-1 nova_compute[225855]:         <nova:swap>0</nova:swap>
Jan 20 14:50:26 compute-1 nova_compute[225855]:         <nova:ephemeral>0</nova:ephemeral>
Jan 20 14:50:26 compute-1 nova_compute[225855]:         <nova:vcpus>1</nova:vcpus>
Jan 20 14:50:26 compute-1 nova_compute[225855]:       </nova:flavor>
Jan 20 14:50:26 compute-1 nova_compute[225855]:       <nova:owner>
Jan 20 14:50:26 compute-1 nova_compute[225855]:         <nova:user uuid="a1bd93d04cc4468abe1d5c61f5144191">tempest-ServerDiskConfigTestJSON-1806346246-project-member</nova:user>
Jan 20 14:50:26 compute-1 nova_compute[225855]:         <nova:project uuid="acb30fbc0e3749e390d7f867060b5a2a">tempest-ServerDiskConfigTestJSON-1806346246</nova:project>
Jan 20 14:50:26 compute-1 nova_compute[225855]:       </nova:owner>
Jan 20 14:50:26 compute-1 nova_compute[225855]:       <nova:root type="image" uuid="a32b3e07-16d8-46fd-9a7b-c242c432fcf9"/>
Jan 20 14:50:26 compute-1 nova_compute[225855]:       <nova:ports>
Jan 20 14:50:26 compute-1 nova_compute[225855]:         <nova:port uuid="86cabae0-8599-4330-b71c-91eb2e6b76d8">
Jan 20 14:50:26 compute-1 nova_compute[225855]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Jan 20 14:50:26 compute-1 nova_compute[225855]:         </nova:port>
Jan 20 14:50:26 compute-1 nova_compute[225855]:       </nova:ports>
Jan 20 14:50:26 compute-1 nova_compute[225855]:     </nova:instance>
Jan 20 14:50:26 compute-1 nova_compute[225855]:   </metadata>
Jan 20 14:50:26 compute-1 nova_compute[225855]:   <sysinfo type="smbios">
Jan 20 14:50:26 compute-1 nova_compute[225855]:     <system>
Jan 20 14:50:26 compute-1 nova_compute[225855]:       <entry name="manufacturer">RDO</entry>
Jan 20 14:50:26 compute-1 nova_compute[225855]:       <entry name="product">OpenStack Compute</entry>
Jan 20 14:50:26 compute-1 nova_compute[225855]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 14:50:26 compute-1 nova_compute[225855]:       <entry name="serial">d5167284-086d-4b37-98b0-3853baabf418</entry>
Jan 20 14:50:26 compute-1 nova_compute[225855]:       <entry name="uuid">d5167284-086d-4b37-98b0-3853baabf418</entry>
Jan 20 14:50:26 compute-1 nova_compute[225855]:       <entry name="family">Virtual Machine</entry>
Jan 20 14:50:26 compute-1 nova_compute[225855]:     </system>
Jan 20 14:50:26 compute-1 nova_compute[225855]:   </sysinfo>
Jan 20 14:50:26 compute-1 nova_compute[225855]:   <os>
Jan 20 14:50:26 compute-1 nova_compute[225855]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 20 14:50:26 compute-1 nova_compute[225855]:     <boot dev="hd"/>
Jan 20 14:50:26 compute-1 nova_compute[225855]:     <smbios mode="sysinfo"/>
Jan 20 14:50:26 compute-1 nova_compute[225855]:   </os>
Jan 20 14:50:26 compute-1 nova_compute[225855]:   <features>
Jan 20 14:50:26 compute-1 nova_compute[225855]:     <acpi/>
Jan 20 14:50:26 compute-1 nova_compute[225855]:     <apic/>
Jan 20 14:50:26 compute-1 nova_compute[225855]:     <vmcoreinfo/>
Jan 20 14:50:26 compute-1 nova_compute[225855]:   </features>
Jan 20 14:50:26 compute-1 nova_compute[225855]:   <clock offset="utc">
Jan 20 14:50:26 compute-1 nova_compute[225855]:     <timer name="pit" tickpolicy="delay"/>
Jan 20 14:50:26 compute-1 nova_compute[225855]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 20 14:50:26 compute-1 nova_compute[225855]:     <timer name="hpet" present="no"/>
Jan 20 14:50:26 compute-1 nova_compute[225855]:   </clock>
Jan 20 14:50:26 compute-1 nova_compute[225855]:   <cpu mode="custom" match="exact">
Jan 20 14:50:26 compute-1 nova_compute[225855]:     <model>Nehalem</model>
Jan 20 14:50:26 compute-1 nova_compute[225855]:     <topology sockets="1" cores="1" threads="1"/>
Jan 20 14:50:26 compute-1 nova_compute[225855]:   </cpu>
Jan 20 14:50:26 compute-1 nova_compute[225855]:   <devices>
Jan 20 14:50:26 compute-1 nova_compute[225855]:     <disk type="network" device="disk">
Jan 20 14:50:26 compute-1 nova_compute[225855]:       <driver type="raw" cache="none"/>
Jan 20 14:50:26 compute-1 nova_compute[225855]:       <source protocol="rbd" name="vms/d5167284-086d-4b37-98b0-3853baabf418_disk">
Jan 20 14:50:26 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:50:26 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:50:26 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:50:26 compute-1 nova_compute[225855]:       </source>
Jan 20 14:50:26 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 14:50:26 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:50:26 compute-1 nova_compute[225855]:       </auth>
Jan 20 14:50:26 compute-1 nova_compute[225855]:       <target dev="vda" bus="virtio"/>
Jan 20 14:50:26 compute-1 nova_compute[225855]:     </disk>
Jan 20 14:50:26 compute-1 nova_compute[225855]:     <disk type="network" device="cdrom">
Jan 20 14:50:26 compute-1 nova_compute[225855]:       <driver type="raw" cache="none"/>
Jan 20 14:50:26 compute-1 nova_compute[225855]:       <source protocol="rbd" name="vms/d5167284-086d-4b37-98b0-3853baabf418_disk.config">
Jan 20 14:50:26 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:50:26 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:50:26 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:50:26 compute-1 nova_compute[225855]:       </source>
Jan 20 14:50:26 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 14:50:26 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:50:26 compute-1 nova_compute[225855]:       </auth>
Jan 20 14:50:26 compute-1 nova_compute[225855]:       <target dev="sda" bus="sata"/>
Jan 20 14:50:26 compute-1 nova_compute[225855]:     </disk>
Jan 20 14:50:26 compute-1 nova_compute[225855]:     <interface type="ethernet">
Jan 20 14:50:26 compute-1 nova_compute[225855]:       <mac address="fa:16:3e:30:05:34"/>
Jan 20 14:50:26 compute-1 nova_compute[225855]:       <model type="virtio"/>
Jan 20 14:50:26 compute-1 nova_compute[225855]:       <driver name="vhost" rx_queue_size="512"/>
Jan 20 14:50:26 compute-1 nova_compute[225855]:       <mtu size="1442"/>
Jan 20 14:50:26 compute-1 nova_compute[225855]:       <target dev="tap86cabae0-85"/>
Jan 20 14:50:26 compute-1 nova_compute[225855]:     </interface>
Jan 20 14:50:26 compute-1 nova_compute[225855]:     <serial type="pty">
Jan 20 14:50:26 compute-1 nova_compute[225855]:       <log file="/var/lib/nova/instances/d5167284-086d-4b37-98b0-3853baabf418/console.log" append="off"/>
Jan 20 14:50:26 compute-1 nova_compute[225855]:     </serial>
Jan 20 14:50:26 compute-1 nova_compute[225855]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 20 14:50:26 compute-1 nova_compute[225855]:     <video>
Jan 20 14:50:26 compute-1 nova_compute[225855]:       <model type="virtio"/>
Jan 20 14:50:26 compute-1 nova_compute[225855]:     </video>
Jan 20 14:50:26 compute-1 nova_compute[225855]:     <input type="tablet" bus="usb"/>
Jan 20 14:50:26 compute-1 nova_compute[225855]:     <rng model="virtio">
Jan 20 14:50:26 compute-1 nova_compute[225855]:       <backend model="random">/dev/urandom</backend>
Jan 20 14:50:26 compute-1 nova_compute[225855]:     </rng>
Jan 20 14:50:26 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root"/>
Jan 20 14:50:26 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:50:26 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:50:26 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:50:26 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:50:26 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:50:26 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:50:26 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:50:26 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:50:26 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:50:26 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:50:26 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:50:26 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:50:26 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:50:26 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:50:26 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:50:26 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:50:26 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:50:26 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:50:26 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:50:26 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:50:26 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:50:26 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:50:26 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:50:26 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:50:26 compute-1 nova_compute[225855]:     <controller type="usb" index="0"/>
Jan 20 14:50:26 compute-1 nova_compute[225855]:     <memballoon model="virtio">
Jan 20 14:50:26 compute-1 nova_compute[225855]:       <stats period="10"/>
Jan 20 14:50:26 compute-1 nova_compute[225855]:     </memballoon>
Jan 20 14:50:26 compute-1 nova_compute[225855]:   </devices>
Jan 20 14:50:26 compute-1 nova_compute[225855]: </domain>
Jan 20 14:50:26 compute-1 nova_compute[225855]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 20 14:50:26 compute-1 nova_compute[225855]: 2026-01-20 14:50:26.939 225859 DEBUG nova.compute.manager [None req-caeffa0b-6abc-4875-a420-9b31a2b0b774 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: d5167284-086d-4b37-98b0-3853baabf418] Preparing to wait for external event network-vif-plugged-86cabae0-8599-4330-b71c-91eb2e6b76d8 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 20 14:50:26 compute-1 nova_compute[225855]: 2026-01-20 14:50:26.940 225859 DEBUG oslo_concurrency.lockutils [None req-caeffa0b-6abc-4875-a420-9b31a2b0b774 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Acquiring lock "d5167284-086d-4b37-98b0-3853baabf418-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:50:26 compute-1 nova_compute[225855]: 2026-01-20 14:50:26.941 225859 DEBUG oslo_concurrency.lockutils [None req-caeffa0b-6abc-4875-a420-9b31a2b0b774 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Lock "d5167284-086d-4b37-98b0-3853baabf418-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:50:26 compute-1 nova_compute[225855]: 2026-01-20 14:50:26.941 225859 DEBUG oslo_concurrency.lockutils [None req-caeffa0b-6abc-4875-a420-9b31a2b0b774 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Lock "d5167284-086d-4b37-98b0-3853baabf418-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:50:26 compute-1 nova_compute[225855]: 2026-01-20 14:50:26.942 225859 DEBUG nova.virt.libvirt.vif [None req-caeffa0b-6abc-4875-a420-9b31a2b0b774 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T14:50:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1906505493',display_name='tempest-ServerDiskConfigTestJSON-server-1906505493',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1906505493',id=109,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='acb30fbc0e3749e390d7f867060b5a2a',ramdisk_id='',reservation_id='r-5s3il9ii',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerDiskConfigTestJSON-1806346246',owner_user_name='tempest-ServerDiskConfigTestJSON-1806346246-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T14:50:20Z,user_data=None,user_id='a1bd93d04cc4468abe1d5c61f5144191',uuid=d5167284-086d-4b37-98b0-3853baabf418,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "86cabae0-8599-4330-b71c-91eb2e6b76d8", "address": "fa:16:3e:30:05:34", "network": {"id": "3379e2b3-ffb2-4391-969b-c9dc51bfbe25", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1112843240-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "acb30fbc0e3749e390d7f867060b5a2a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap86cabae0-85", "ovs_interfaceid": "86cabae0-8599-4330-b71c-91eb2e6b76d8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 20 14:50:26 compute-1 nova_compute[225855]: 2026-01-20 14:50:26.943 225859 DEBUG nova.network.os_vif_util [None req-caeffa0b-6abc-4875-a420-9b31a2b0b774 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Converting VIF {"id": "86cabae0-8599-4330-b71c-91eb2e6b76d8", "address": "fa:16:3e:30:05:34", "network": {"id": "3379e2b3-ffb2-4391-969b-c9dc51bfbe25", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1112843240-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "acb30fbc0e3749e390d7f867060b5a2a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap86cabae0-85", "ovs_interfaceid": "86cabae0-8599-4330-b71c-91eb2e6b76d8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:50:26 compute-1 nova_compute[225855]: 2026-01-20 14:50:26.944 225859 DEBUG nova.network.os_vif_util [None req-caeffa0b-6abc-4875-a420-9b31a2b0b774 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:30:05:34,bridge_name='br-int',has_traffic_filtering=True,id=86cabae0-8599-4330-b71c-91eb2e6b76d8,network=Network(3379e2b3-ffb2-4391-969b-c9dc51bfbe25),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap86cabae0-85') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:50:26 compute-1 nova_compute[225855]: 2026-01-20 14:50:26.945 225859 DEBUG os_vif [None req-caeffa0b-6abc-4875-a420-9b31a2b0b774 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:30:05:34,bridge_name='br-int',has_traffic_filtering=True,id=86cabae0-8599-4330-b71c-91eb2e6b76d8,network=Network(3379e2b3-ffb2-4391-969b-c9dc51bfbe25),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap86cabae0-85') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 20 14:50:26 compute-1 nova_compute[225855]: 2026-01-20 14:50:26.946 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:50:26 compute-1 nova_compute[225855]: 2026-01-20 14:50:26.947 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:50:26 compute-1 nova_compute[225855]: 2026-01-20 14:50:26.948 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:50:26 compute-1 nova_compute[225855]: 2026-01-20 14:50:26.952 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:50:26 compute-1 nova_compute[225855]: 2026-01-20 14:50:26.953 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap86cabae0-85, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:50:26 compute-1 nova_compute[225855]: 2026-01-20 14:50:26.954 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap86cabae0-85, col_values=(('external_ids', {'iface-id': '86cabae0-8599-4330-b71c-91eb2e6b76d8', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:30:05:34', 'vm-uuid': 'd5167284-086d-4b37-98b0-3853baabf418'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:50:26 compute-1 ceph-mon[81775]: pgmap v1879: 321 pgs: 321 active+clean; 213 MiB data, 869 MiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 2.1 MiB/s wr, 231 op/s
Jan 20 14:50:26 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/2577488927' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:50:26 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/3209917238' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:50:27 compute-1 nova_compute[225855]: 2026-01-20 14:50:27.006 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:50:27 compute-1 NetworkManager[49104]: <info>  [1768920627.0070] manager: (tap86cabae0-85): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/180)
Jan 20 14:50:27 compute-1 nova_compute[225855]: 2026-01-20 14:50:27.010 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 14:50:27 compute-1 nova_compute[225855]: 2026-01-20 14:50:27.014 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:50:27 compute-1 nova_compute[225855]: 2026-01-20 14:50:27.016 225859 INFO os_vif [None req-caeffa0b-6abc-4875-a420-9b31a2b0b774 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:30:05:34,bridge_name='br-int',has_traffic_filtering=True,id=86cabae0-8599-4330-b71c-91eb2e6b76d8,network=Network(3379e2b3-ffb2-4391-969b-c9dc51bfbe25),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap86cabae0-85')
Jan 20 14:50:27 compute-1 nova_compute[225855]: 2026-01-20 14:50:27.211 225859 DEBUG nova.virt.libvirt.driver [None req-caeffa0b-6abc-4875-a420-9b31a2b0b774 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:50:27 compute-1 nova_compute[225855]: 2026-01-20 14:50:27.212 225859 DEBUG nova.virt.libvirt.driver [None req-caeffa0b-6abc-4875-a420-9b31a2b0b774 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:50:27 compute-1 nova_compute[225855]: 2026-01-20 14:50:27.212 225859 DEBUG nova.virt.libvirt.driver [None req-caeffa0b-6abc-4875-a420-9b31a2b0b774 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] No VIF found with MAC fa:16:3e:30:05:34, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 20 14:50:27 compute-1 nova_compute[225855]: 2026-01-20 14:50:27.213 225859 INFO nova.virt.libvirt.driver [None req-caeffa0b-6abc-4875-a420-9b31a2b0b774 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: d5167284-086d-4b37-98b0-3853baabf418] Using config drive
Jan 20 14:50:27 compute-1 nova_compute[225855]: 2026-01-20 14:50:27.250 225859 DEBUG nova.storage.rbd_utils [None req-caeffa0b-6abc-4875-a420-9b31a2b0b774 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] rbd image d5167284-086d-4b37-98b0-3853baabf418_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:50:27 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e249 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:50:27 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:50:27 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:50:27 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:50:27.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:50:27 compute-1 nova_compute[225855]: 2026-01-20 14:50:27.884 225859 INFO nova.virt.libvirt.driver [None req-caeffa0b-6abc-4875-a420-9b31a2b0b774 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: d5167284-086d-4b37-98b0-3853baabf418] Creating config drive at /var/lib/nova/instances/d5167284-086d-4b37-98b0-3853baabf418/disk.config
Jan 20 14:50:27 compute-1 nova_compute[225855]: 2026-01-20 14:50:27.894 225859 DEBUG oslo_concurrency.processutils [None req-caeffa0b-6abc-4875-a420-9b31a2b0b774 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d5167284-086d-4b37-98b0-3853baabf418/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpapn263wm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:50:27 compute-1 nova_compute[225855]: 2026-01-20 14:50:27.928 225859 DEBUG nova.network.neutron [req-5c2ab044-1c73-4429-b3b0-ee4004432102 req-1ea80a80-520b-4504-815f-0d1408c99c2e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d5167284-086d-4b37-98b0-3853baabf418] Updated VIF entry in instance network info cache for port 86cabae0-8599-4330-b71c-91eb2e6b76d8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 14:50:27 compute-1 nova_compute[225855]: 2026-01-20 14:50:27.930 225859 DEBUG nova.network.neutron [req-5c2ab044-1c73-4429-b3b0-ee4004432102 req-1ea80a80-520b-4504-815f-0d1408c99c2e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d5167284-086d-4b37-98b0-3853baabf418] Updating instance_info_cache with network_info: [{"id": "86cabae0-8599-4330-b71c-91eb2e6b76d8", "address": "fa:16:3e:30:05:34", "network": {"id": "3379e2b3-ffb2-4391-969b-c9dc51bfbe25", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1112843240-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "acb30fbc0e3749e390d7f867060b5a2a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap86cabae0-85", "ovs_interfaceid": "86cabae0-8599-4330-b71c-91eb2e6b76d8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:50:27 compute-1 nova_compute[225855]: 2026-01-20 14:50:27.953 225859 DEBUG oslo_concurrency.lockutils [req-5c2ab044-1c73-4429-b3b0-ee4004432102 req-1ea80a80-520b-4504-815f-0d1408c99c2e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-d5167284-086d-4b37-98b0-3853baabf418" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:50:28 compute-1 nova_compute[225855]: 2026-01-20 14:50:28.005 225859 DEBUG nova.network.neutron [-] [instance: fdeb13eb-edb4-4bff-aeef-2671ba9d4618] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:50:28 compute-1 nova_compute[225855]: 2026-01-20 14:50:28.033 225859 INFO nova.compute.manager [-] [instance: fdeb13eb-edb4-4bff-aeef-2671ba9d4618] Took 1.21 seconds to deallocate network for instance.
Jan 20 14:50:28 compute-1 nova_compute[225855]: 2026-01-20 14:50:28.033 225859 DEBUG oslo_concurrency.processutils [None req-caeffa0b-6abc-4875-a420-9b31a2b0b774 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d5167284-086d-4b37-98b0-3853baabf418/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpapn263wm" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:50:28 compute-1 nova_compute[225855]: 2026-01-20 14:50:28.122 225859 DEBUG nova.storage.rbd_utils [None req-caeffa0b-6abc-4875-a420-9b31a2b0b774 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] rbd image d5167284-086d-4b37-98b0-3853baabf418_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:50:28 compute-1 nova_compute[225855]: 2026-01-20 14:50:28.128 225859 DEBUG oslo_concurrency.processutils [None req-caeffa0b-6abc-4875-a420-9b31a2b0b774 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d5167284-086d-4b37-98b0-3853baabf418/disk.config d5167284-086d-4b37-98b0-3853baabf418_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:50:28 compute-1 nova_compute[225855]: 2026-01-20 14:50:28.155 225859 DEBUG nova.compute.manager [req-8fbe0e71-504a-41aa-bdd4-3671a5504300 req-17469b22-9505-4f3c-b0ab-2202d3c8ac5b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: fdeb13eb-edb4-4bff-aeef-2671ba9d4618] Received event network-vif-deleted-6855cb4f-4178-4447-af36-126ade033206 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:50:28 compute-1 nova_compute[225855]: 2026-01-20 14:50:28.214 225859 DEBUG oslo_concurrency.lockutils [None req-2c5030c6-41ba-4de4-a518-9b488a30855c 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:50:28 compute-1 nova_compute[225855]: 2026-01-20 14:50:28.215 225859 DEBUG oslo_concurrency.lockutils [None req-2c5030c6-41ba-4de4-a518-9b488a30855c 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:50:28 compute-1 nova_compute[225855]: 2026-01-20 14:50:28.219 225859 DEBUG oslo_concurrency.lockutils [None req-2c5030c6-41ba-4de4-a518-9b488a30855c 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.004s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:50:28 compute-1 nova_compute[225855]: 2026-01-20 14:50:28.257 225859 DEBUG nova.compute.manager [req-ee2a81e4-df19-4a56-bde7-2b39e6040aca req-d5b89990-8a5f-4a46-a2ce-b327d61d4035 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: fdeb13eb-edb4-4bff-aeef-2671ba9d4618] Received event network-vif-unplugged-6855cb4f-4178-4447-af36-126ade033206 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:50:28 compute-1 nova_compute[225855]: 2026-01-20 14:50:28.258 225859 DEBUG oslo_concurrency.lockutils [req-ee2a81e4-df19-4a56-bde7-2b39e6040aca req-d5b89990-8a5f-4a46-a2ce-b327d61d4035 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "fdeb13eb-edb4-4bff-aeef-2671ba9d4618-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:50:28 compute-1 nova_compute[225855]: 2026-01-20 14:50:28.258 225859 DEBUG oslo_concurrency.lockutils [req-ee2a81e4-df19-4a56-bde7-2b39e6040aca req-d5b89990-8a5f-4a46-a2ce-b327d61d4035 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "fdeb13eb-edb4-4bff-aeef-2671ba9d4618-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:50:28 compute-1 nova_compute[225855]: 2026-01-20 14:50:28.258 225859 DEBUG oslo_concurrency.lockutils [req-ee2a81e4-df19-4a56-bde7-2b39e6040aca req-d5b89990-8a5f-4a46-a2ce-b327d61d4035 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "fdeb13eb-edb4-4bff-aeef-2671ba9d4618-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:50:28 compute-1 nova_compute[225855]: 2026-01-20 14:50:28.259 225859 DEBUG nova.compute.manager [req-ee2a81e4-df19-4a56-bde7-2b39e6040aca req-d5b89990-8a5f-4a46-a2ce-b327d61d4035 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: fdeb13eb-edb4-4bff-aeef-2671ba9d4618] No waiting events found dispatching network-vif-unplugged-6855cb4f-4178-4447-af36-126ade033206 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:50:28 compute-1 nova_compute[225855]: 2026-01-20 14:50:28.259 225859 WARNING nova.compute.manager [req-ee2a81e4-df19-4a56-bde7-2b39e6040aca req-d5b89990-8a5f-4a46-a2ce-b327d61d4035 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: fdeb13eb-edb4-4bff-aeef-2671ba9d4618] Received unexpected event network-vif-unplugged-6855cb4f-4178-4447-af36-126ade033206 for instance with vm_state deleted and task_state None.
Jan 20 14:50:28 compute-1 nova_compute[225855]: 2026-01-20 14:50:28.259 225859 DEBUG nova.compute.manager [req-ee2a81e4-df19-4a56-bde7-2b39e6040aca req-d5b89990-8a5f-4a46-a2ce-b327d61d4035 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: fdeb13eb-edb4-4bff-aeef-2671ba9d4618] Received event network-vif-plugged-6855cb4f-4178-4447-af36-126ade033206 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:50:28 compute-1 nova_compute[225855]: 2026-01-20 14:50:28.260 225859 DEBUG oslo_concurrency.lockutils [req-ee2a81e4-df19-4a56-bde7-2b39e6040aca req-d5b89990-8a5f-4a46-a2ce-b327d61d4035 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "fdeb13eb-edb4-4bff-aeef-2671ba9d4618-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:50:28 compute-1 nova_compute[225855]: 2026-01-20 14:50:28.260 225859 DEBUG oslo_concurrency.lockutils [req-ee2a81e4-df19-4a56-bde7-2b39e6040aca req-d5b89990-8a5f-4a46-a2ce-b327d61d4035 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "fdeb13eb-edb4-4bff-aeef-2671ba9d4618-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:50:28 compute-1 nova_compute[225855]: 2026-01-20 14:50:28.260 225859 DEBUG oslo_concurrency.lockutils [req-ee2a81e4-df19-4a56-bde7-2b39e6040aca req-d5b89990-8a5f-4a46-a2ce-b327d61d4035 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "fdeb13eb-edb4-4bff-aeef-2671ba9d4618-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:50:28 compute-1 nova_compute[225855]: 2026-01-20 14:50:28.261 225859 DEBUG nova.compute.manager [req-ee2a81e4-df19-4a56-bde7-2b39e6040aca req-d5b89990-8a5f-4a46-a2ce-b327d61d4035 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: fdeb13eb-edb4-4bff-aeef-2671ba9d4618] No waiting events found dispatching network-vif-plugged-6855cb4f-4178-4447-af36-126ade033206 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:50:28 compute-1 nova_compute[225855]: 2026-01-20 14:50:28.261 225859 WARNING nova.compute.manager [req-ee2a81e4-df19-4a56-bde7-2b39e6040aca req-d5b89990-8a5f-4a46-a2ce-b327d61d4035 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: fdeb13eb-edb4-4bff-aeef-2671ba9d4618] Received unexpected event network-vif-plugged-6855cb4f-4178-4447-af36-126ade033206 for instance with vm_state deleted and task_state None.
Jan 20 14:50:28 compute-1 nova_compute[225855]: 2026-01-20 14:50:28.263 225859 INFO nova.scheduler.client.report [None req-2c5030c6-41ba-4de4-a518-9b488a30855c 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Deleted allocations for instance fdeb13eb-edb4-4bff-aeef-2671ba9d4618
Jan 20 14:50:28 compute-1 nova_compute[225855]: 2026-01-20 14:50:28.283 225859 DEBUG oslo_concurrency.processutils [None req-caeffa0b-6abc-4875-a420-9b31a2b0b774 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d5167284-086d-4b37-98b0-3853baabf418/disk.config d5167284-086d-4b37-98b0-3853baabf418_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.155s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:50:28 compute-1 nova_compute[225855]: 2026-01-20 14:50:28.283 225859 INFO nova.virt.libvirt.driver [None req-caeffa0b-6abc-4875-a420-9b31a2b0b774 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: d5167284-086d-4b37-98b0-3853baabf418] Deleting local config drive /var/lib/nova/instances/d5167284-086d-4b37-98b0-3853baabf418/disk.config because it was imported into RBD.
Jan 20 14:50:28 compute-1 kernel: tap86cabae0-85: entered promiscuous mode
Jan 20 14:50:28 compute-1 NetworkManager[49104]: <info>  [1768920628.3301] manager: (tap86cabae0-85): new Tun device (/org/freedesktop/NetworkManager/Devices/181)
Jan 20 14:50:28 compute-1 systemd-udevd[267538]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 14:50:28 compute-1 ovn_controller[130490]: 2026-01-20T14:50:28Z|00421|binding|INFO|Claiming lport 86cabae0-8599-4330-b71c-91eb2e6b76d8 for this chassis.
Jan 20 14:50:28 compute-1 ovn_controller[130490]: 2026-01-20T14:50:28Z|00422|binding|INFO|86cabae0-8599-4330-b71c-91eb2e6b76d8: Claiming fa:16:3e:30:05:34 10.100.0.3
Jan 20 14:50:28 compute-1 nova_compute[225855]: 2026-01-20 14:50:28.330 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:50:28 compute-1 nova_compute[225855]: 2026-01-20 14:50:28.335 225859 DEBUG oslo_concurrency.lockutils [None req-2c5030c6-41ba-4de4-a518-9b488a30855c 3e9278fdb9e645b7938f3edb20c4d3cf 1c5f03d46c0c4162a3b2f1530850bb6c - - default default] Lock "fdeb13eb-edb4-4bff-aeef-2671ba9d4618" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.222s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:50:28 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:50:28.337 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:30:05:34 10.100.0.3'], port_security=['fa:16:3e:30:05:34 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'd5167284-086d-4b37-98b0-3853baabf418', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3379e2b3-ffb2-4391-969b-c9dc51bfbe25', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'acb30fbc0e3749e390d7f867060b5a2a', 'neutron:revision_number': '2', 'neutron:security_group_ids': '19fab802-7db8-4c89-8f8e-8dcfc14d4627', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a0e287ba-f88b-46f5-bb7f-3cc2a74be88e, chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], logical_port=86cabae0-8599-4330-b71c-91eb2e6b76d8) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:50:28 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:50:28.338 140354 INFO neutron.agent.ovn.metadata.agent [-] Port 86cabae0-8599-4330-b71c-91eb2e6b76d8 in datapath 3379e2b3-ffb2-4391-969b-c9dc51bfbe25 bound to our chassis
Jan 20 14:50:28 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:50:28.340 140354 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3379e2b3-ffb2-4391-969b-c9dc51bfbe25
Jan 20 14:50:28 compute-1 NetworkManager[49104]: <info>  [1768920628.3434] device (tap86cabae0-85): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 20 14:50:28 compute-1 NetworkManager[49104]: <info>  [1768920628.3442] device (tap86cabae0-85): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 20 14:50:28 compute-1 ovn_controller[130490]: 2026-01-20T14:50:28Z|00423|binding|INFO|Setting lport 86cabae0-8599-4330-b71c-91eb2e6b76d8 ovn-installed in OVS
Jan 20 14:50:28 compute-1 ovn_controller[130490]: 2026-01-20T14:50:28Z|00424|binding|INFO|Setting lport 86cabae0-8599-4330-b71c-91eb2e6b76d8 up in Southbound
Jan 20 14:50:28 compute-1 nova_compute[225855]: 2026-01-20 14:50:28.350 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:50:28 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:50:28.350 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[a64921f7-51f6-406b-97a6-993a9cd6fdb2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:50:28 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:50:28.351 140354 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap3379e2b3-f1 in ovnmeta-3379e2b3-ffb2-4391-969b-c9dc51bfbe25 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 20 14:50:28 compute-1 nova_compute[225855]: 2026-01-20 14:50:28.353 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:50:28 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:50:28.353 229707 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap3379e2b3-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 20 14:50:28 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:50:28.354 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[48d08744-8721-4a89-89be-e2a7d5e869c4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:50:28 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:50:28.355 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[e647e381-1bfe-4428-945b-f7e300aeb7ed]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:50:28 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:50:28.365 140466 DEBUG oslo.privsep.daemon [-] privsep: reply[bfbd88b4-25c2-48fa-a320-7ff501d5e959]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:50:28 compute-1 systemd-machined[194361]: New machine qemu-49-instance-0000006d.
Jan 20 14:50:28 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:50:28.376 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[67ef57a4-9016-43e4-981e-dfb4b1814373]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:50:28 compute-1 systemd[1]: Started Virtual Machine qemu-49-instance-0000006d.
Jan 20 14:50:28 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:50:28.401 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[62237bf6-d323-4151-8c84-4496ce098f03]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:50:28 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:50:28.405 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[634033a4-f27c-44be-857c-2584cbeb9abc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:50:28 compute-1 NetworkManager[49104]: <info>  [1768920628.4069] manager: (tap3379e2b3-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/182)
Jan 20 14:50:28 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:50:28.437 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[c9a4b857-6e6f-4fb1-8dde-a711efd096f0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:50:28 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:50:28.440 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[764a0b4e-44d9-4b71-9286-cd23644a9b82]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:50:28 compute-1 NetworkManager[49104]: <info>  [1768920628.4597] device (tap3379e2b3-f0): carrier: link connected
Jan 20 14:50:28 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:50:28.464 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[528ce9ca-6ef1-4b4e-8e26-3932db02f901]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:50:28 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:50:28.479 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[ac50602c-7e7d-4ef0-a18f-d46634a2059b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3379e2b3-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f1:86:fe'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 117], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 566738, 'reachable_time': 15831, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 267773, 'error': None, 'target': 'ovnmeta-3379e2b3-ffb2-4391-969b-c9dc51bfbe25', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:50:28 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:50:28.491 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[57683e64-b9d0-4bfc-bc5a-d200616d6646]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fef1:86fe'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 566738, 'tstamp': 566738}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 267774, 'error': None, 'target': 'ovnmeta-3379e2b3-ffb2-4391-969b-c9dc51bfbe25', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:50:28 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:50:28.504 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[6fbfce60-221c-4cd9-9ef6-1dc4e7fc4115]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3379e2b3-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f1:86:fe'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 117], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 566738, 'reachable_time': 15831, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 267775, 'error': None, 'target': 'ovnmeta-3379e2b3-ffb2-4391-969b-c9dc51bfbe25', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:50:28 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:50:28.528 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[8dfc5a90-f8f0-4d53-b92a-cab1b35fa632]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:50:28 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:50:28.577 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[c8afac1b-2cba-455a-8afc-e20186bc8cf0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:50:28 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:50:28.578 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3379e2b3-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:50:28 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:50:28.579 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:50:28 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:50:28.579 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3379e2b3-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:50:28 compute-1 nova_compute[225855]: 2026-01-20 14:50:28.581 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:50:28 compute-1 NetworkManager[49104]: <info>  [1768920628.5817] manager: (tap3379e2b3-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/183)
Jan 20 14:50:28 compute-1 kernel: tap3379e2b3-f0: entered promiscuous mode
Jan 20 14:50:28 compute-1 nova_compute[225855]: 2026-01-20 14:50:28.583 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:50:28 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:50:28.585 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3379e2b3-f0, col_values=(('external_ids', {'iface-id': 'b32ddf23-a8dd-4e6d-a410-ccb24b214d35'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:50:28 compute-1 nova_compute[225855]: 2026-01-20 14:50:28.586 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:50:28 compute-1 ovn_controller[130490]: 2026-01-20T14:50:28Z|00425|binding|INFO|Releasing lport b32ddf23-a8dd-4e6d-a410-ccb24b214d35 from this chassis (sb_readonly=0)
Jan 20 14:50:28 compute-1 nova_compute[225855]: 2026-01-20 14:50:28.600 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:50:28 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:50:28.602 140354 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/3379e2b3-ffb2-4391-969b-c9dc51bfbe25.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/3379e2b3-ffb2-4391-969b-c9dc51bfbe25.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 20 14:50:28 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:50:28.603 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[66a243b6-0601-44cd-8389-15d8fefa0a38]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:50:28 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:50:28.603 140354 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 20 14:50:28 compute-1 ovn_metadata_agent[140349]: global
Jan 20 14:50:28 compute-1 ovn_metadata_agent[140349]:     log         /dev/log local0 debug
Jan 20 14:50:28 compute-1 ovn_metadata_agent[140349]:     log-tag     haproxy-metadata-proxy-3379e2b3-ffb2-4391-969b-c9dc51bfbe25
Jan 20 14:50:28 compute-1 ovn_metadata_agent[140349]:     user        root
Jan 20 14:50:28 compute-1 ovn_metadata_agent[140349]:     group       root
Jan 20 14:50:28 compute-1 ovn_metadata_agent[140349]:     maxconn     1024
Jan 20 14:50:28 compute-1 ovn_metadata_agent[140349]:     pidfile     /var/lib/neutron/external/pids/3379e2b3-ffb2-4391-969b-c9dc51bfbe25.pid.haproxy
Jan 20 14:50:28 compute-1 ovn_metadata_agent[140349]:     daemon
Jan 20 14:50:28 compute-1 ovn_metadata_agent[140349]: 
Jan 20 14:50:28 compute-1 ovn_metadata_agent[140349]: defaults
Jan 20 14:50:28 compute-1 ovn_metadata_agent[140349]:     log global
Jan 20 14:50:28 compute-1 ovn_metadata_agent[140349]:     mode http
Jan 20 14:50:28 compute-1 ovn_metadata_agent[140349]:     option httplog
Jan 20 14:50:28 compute-1 ovn_metadata_agent[140349]:     option dontlognull
Jan 20 14:50:28 compute-1 ovn_metadata_agent[140349]:     option http-server-close
Jan 20 14:50:28 compute-1 ovn_metadata_agent[140349]:     option forwardfor
Jan 20 14:50:28 compute-1 ovn_metadata_agent[140349]:     retries                 3
Jan 20 14:50:28 compute-1 ovn_metadata_agent[140349]:     timeout http-request    30s
Jan 20 14:50:28 compute-1 ovn_metadata_agent[140349]:     timeout connect         30s
Jan 20 14:50:28 compute-1 ovn_metadata_agent[140349]:     timeout client          32s
Jan 20 14:50:28 compute-1 ovn_metadata_agent[140349]:     timeout server          32s
Jan 20 14:50:28 compute-1 ovn_metadata_agent[140349]:     timeout http-keep-alive 30s
Jan 20 14:50:28 compute-1 ovn_metadata_agent[140349]: 
Jan 20 14:50:28 compute-1 ovn_metadata_agent[140349]: 
Jan 20 14:50:28 compute-1 ovn_metadata_agent[140349]: listen listener
Jan 20 14:50:28 compute-1 ovn_metadata_agent[140349]:     bind 169.254.169.254:80
Jan 20 14:50:28 compute-1 ovn_metadata_agent[140349]:     server metadata /var/lib/neutron/metadata_proxy
Jan 20 14:50:28 compute-1 ovn_metadata_agent[140349]:     http-request add-header X-OVN-Network-ID 3379e2b3-ffb2-4391-969b-c9dc51bfbe25
Jan 20 14:50:28 compute-1 ovn_metadata_agent[140349]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 20 14:50:28 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:50:28.604 140354 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-3379e2b3-ffb2-4391-969b-c9dc51bfbe25', 'env', 'PROCESS_TAG=haproxy-3379e2b3-ffb2-4391-969b-c9dc51bfbe25', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/3379e2b3-ffb2-4391-969b-c9dc51bfbe25.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 20 14:50:28 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:50:28 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:50:28 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:50:28.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:50:28 compute-1 podman[267823]: 2026-01-20 14:50:28.954881653 +0000 UTC m=+0.054831018 container create 731bfd77941a55fc6ba26854288cff85a728673e94493d76f8d51354912a4949 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3379e2b3-ffb2-4391-969b-c9dc51bfbe25, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3)
Jan 20 14:50:28 compute-1 ceph-mon[81775]: pgmap v1880: 321 pgs: 321 active+clean; 209 MiB data, 879 MiB used, 20 GiB / 21 GiB avail; 4.3 MiB/s rd, 3.1 MiB/s wr, 259 op/s
Jan 20 14:50:29 compute-1 systemd[1]: Started libpod-conmon-731bfd77941a55fc6ba26854288cff85a728673e94493d76f8d51354912a4949.scope.
Jan 20 14:50:29 compute-1 podman[267823]: 2026-01-20 14:50:28.929213729 +0000 UTC m=+0.029163124 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 20 14:50:29 compute-1 systemd[1]: Started libcrun container.
Jan 20 14:50:29 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9eeb75a2b56f63e1e54b32f2517ebb6272ef57e2b63ce8662cb464dfa749ef5b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 20 14:50:29 compute-1 nova_compute[225855]: 2026-01-20 14:50:29.058 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768920629.0572743, d5167284-086d-4b37-98b0-3853baabf418 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:50:29 compute-1 nova_compute[225855]: 2026-01-20 14:50:29.059 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: d5167284-086d-4b37-98b0-3853baabf418] VM Started (Lifecycle Event)
Jan 20 14:50:29 compute-1 podman[267823]: 2026-01-20 14:50:29.064538299 +0000 UTC m=+0.164487674 container init 731bfd77941a55fc6ba26854288cff85a728673e94493d76f8d51354912a4949 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3379e2b3-ffb2-4391-969b-c9dc51bfbe25, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Jan 20 14:50:29 compute-1 podman[267823]: 2026-01-20 14:50:29.07096017 +0000 UTC m=+0.170909525 container start 731bfd77941a55fc6ba26854288cff85a728673e94493d76f8d51354912a4949 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3379e2b3-ffb2-4391-969b-c9dc51bfbe25, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 20 14:50:29 compute-1 nova_compute[225855]: 2026-01-20 14:50:29.081 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: d5167284-086d-4b37-98b0-3853baabf418] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:50:29 compute-1 nova_compute[225855]: 2026-01-20 14:50:29.088 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768920629.0588503, d5167284-086d-4b37-98b0-3853baabf418 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:50:29 compute-1 nova_compute[225855]: 2026-01-20 14:50:29.090 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: d5167284-086d-4b37-98b0-3853baabf418] VM Paused (Lifecycle Event)
Jan 20 14:50:29 compute-1 neutron-haproxy-ovnmeta-3379e2b3-ffb2-4391-969b-c9dc51bfbe25[267863]: [NOTICE]   (267868) : New worker (267870) forked
Jan 20 14:50:29 compute-1 neutron-haproxy-ovnmeta-3379e2b3-ffb2-4391-969b-c9dc51bfbe25[267863]: [NOTICE]   (267868) : Loading success.
Jan 20 14:50:29 compute-1 nova_compute[225855]: 2026-01-20 14:50:29.112 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: d5167284-086d-4b37-98b0-3853baabf418] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:50:29 compute-1 nova_compute[225855]: 2026-01-20 14:50:29.116 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: d5167284-086d-4b37-98b0-3853baabf418] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:50:29 compute-1 nova_compute[225855]: 2026-01-20 14:50:29.261 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: d5167284-086d-4b37-98b0-3853baabf418] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 14:50:29 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e250 e250: 3 total, 3 up, 3 in
Jan 20 14:50:29 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:50:29 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:50:29 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:50:29.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:50:29 compute-1 sudo[267880]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:50:29 compute-1 sudo[267880]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:50:29 compute-1 sudo[267880]: pam_unix(sudo:session): session closed for user root
Jan 20 14:50:29 compute-1 sudo[267905]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:50:29 compute-1 sudo[267905]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:50:29 compute-1 sudo[267905]: pam_unix(sudo:session): session closed for user root
Jan 20 14:50:30 compute-1 nova_compute[225855]: 2026-01-20 14:50:30.217 225859 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1768920615.2164834, 52477e64-7989-4aa2-88e1-31600bfae2ef => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:50:30 compute-1 nova_compute[225855]: 2026-01-20 14:50:30.218 225859 INFO nova.compute.manager [-] [instance: 52477e64-7989-4aa2-88e1-31600bfae2ef] VM Stopped (Lifecycle Event)
Jan 20 14:50:30 compute-1 nova_compute[225855]: 2026-01-20 14:50:30.321 225859 DEBUG nova.compute.manager [req-41911b97-d75c-484f-a56a-e8e35ce25ebc req-74038d00-378c-4b00-9565-e96a39fac7de 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d5167284-086d-4b37-98b0-3853baabf418] Received event network-vif-plugged-86cabae0-8599-4330-b71c-91eb2e6b76d8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:50:30 compute-1 nova_compute[225855]: 2026-01-20 14:50:30.321 225859 DEBUG oslo_concurrency.lockutils [req-41911b97-d75c-484f-a56a-e8e35ce25ebc req-74038d00-378c-4b00-9565-e96a39fac7de 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "d5167284-086d-4b37-98b0-3853baabf418-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:50:30 compute-1 nova_compute[225855]: 2026-01-20 14:50:30.322 225859 DEBUG oslo_concurrency.lockutils [req-41911b97-d75c-484f-a56a-e8e35ce25ebc req-74038d00-378c-4b00-9565-e96a39fac7de 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "d5167284-086d-4b37-98b0-3853baabf418-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:50:30 compute-1 nova_compute[225855]: 2026-01-20 14:50:30.322 225859 DEBUG oslo_concurrency.lockutils [req-41911b97-d75c-484f-a56a-e8e35ce25ebc req-74038d00-378c-4b00-9565-e96a39fac7de 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "d5167284-086d-4b37-98b0-3853baabf418-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:50:30 compute-1 nova_compute[225855]: 2026-01-20 14:50:30.323 225859 DEBUG nova.compute.manager [req-41911b97-d75c-484f-a56a-e8e35ce25ebc req-74038d00-378c-4b00-9565-e96a39fac7de 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d5167284-086d-4b37-98b0-3853baabf418] Processing event network-vif-plugged-86cabae0-8599-4330-b71c-91eb2e6b76d8 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 20 14:50:30 compute-1 nova_compute[225855]: 2026-01-20 14:50:30.323 225859 DEBUG nova.compute.manager [req-41911b97-d75c-484f-a56a-e8e35ce25ebc req-74038d00-378c-4b00-9565-e96a39fac7de 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d5167284-086d-4b37-98b0-3853baabf418] Received event network-vif-plugged-86cabae0-8599-4330-b71c-91eb2e6b76d8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:50:30 compute-1 nova_compute[225855]: 2026-01-20 14:50:30.323 225859 DEBUG oslo_concurrency.lockutils [req-41911b97-d75c-484f-a56a-e8e35ce25ebc req-74038d00-378c-4b00-9565-e96a39fac7de 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "d5167284-086d-4b37-98b0-3853baabf418-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:50:30 compute-1 nova_compute[225855]: 2026-01-20 14:50:30.324 225859 DEBUG oslo_concurrency.lockutils [req-41911b97-d75c-484f-a56a-e8e35ce25ebc req-74038d00-378c-4b00-9565-e96a39fac7de 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "d5167284-086d-4b37-98b0-3853baabf418-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:50:30 compute-1 nova_compute[225855]: 2026-01-20 14:50:30.324 225859 DEBUG oslo_concurrency.lockutils [req-41911b97-d75c-484f-a56a-e8e35ce25ebc req-74038d00-378c-4b00-9565-e96a39fac7de 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "d5167284-086d-4b37-98b0-3853baabf418-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:50:30 compute-1 nova_compute[225855]: 2026-01-20 14:50:30.325 225859 DEBUG nova.compute.manager [req-41911b97-d75c-484f-a56a-e8e35ce25ebc req-74038d00-378c-4b00-9565-e96a39fac7de 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d5167284-086d-4b37-98b0-3853baabf418] No waiting events found dispatching network-vif-plugged-86cabae0-8599-4330-b71c-91eb2e6b76d8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:50:30 compute-1 nova_compute[225855]: 2026-01-20 14:50:30.325 225859 WARNING nova.compute.manager [req-41911b97-d75c-484f-a56a-e8e35ce25ebc req-74038d00-378c-4b00-9565-e96a39fac7de 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d5167284-086d-4b37-98b0-3853baabf418] Received unexpected event network-vif-plugged-86cabae0-8599-4330-b71c-91eb2e6b76d8 for instance with vm_state building and task_state spawning.
Jan 20 14:50:30 compute-1 nova_compute[225855]: 2026-01-20 14:50:30.326 225859 DEBUG nova.compute.manager [None req-caeffa0b-6abc-4875-a420-9b31a2b0b774 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: d5167284-086d-4b37-98b0-3853baabf418] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 20 14:50:30 compute-1 nova_compute[225855]: 2026-01-20 14:50:30.332 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768920630.3318157, d5167284-086d-4b37-98b0-3853baabf418 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:50:30 compute-1 nova_compute[225855]: 2026-01-20 14:50:30.332 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: d5167284-086d-4b37-98b0-3853baabf418] VM Resumed (Lifecycle Event)
Jan 20 14:50:30 compute-1 nova_compute[225855]: 2026-01-20 14:50:30.338 225859 DEBUG nova.virt.libvirt.driver [None req-caeffa0b-6abc-4875-a420-9b31a2b0b774 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: d5167284-086d-4b37-98b0-3853baabf418] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 20 14:50:30 compute-1 nova_compute[225855]: 2026-01-20 14:50:30.343 225859 INFO nova.virt.libvirt.driver [-] [instance: d5167284-086d-4b37-98b0-3853baabf418] Instance spawned successfully.
Jan 20 14:50:30 compute-1 nova_compute[225855]: 2026-01-20 14:50:30.344 225859 DEBUG nova.virt.libvirt.driver [None req-caeffa0b-6abc-4875-a420-9b31a2b0b774 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: d5167284-086d-4b37-98b0-3853baabf418] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 20 14:50:30 compute-1 nova_compute[225855]: 2026-01-20 14:50:30.545 225859 DEBUG nova.compute.manager [None req-0c1dbbb2-011b-483d-aff5-9986902b6e38 - - - - - -] [instance: 52477e64-7989-4aa2-88e1-31600bfae2ef] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:50:30 compute-1 nova_compute[225855]: 2026-01-20 14:50:30.551 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: d5167284-086d-4b37-98b0-3853baabf418] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:50:30 compute-1 nova_compute[225855]: 2026-01-20 14:50:30.556 225859 DEBUG nova.virt.libvirt.driver [None req-caeffa0b-6abc-4875-a420-9b31a2b0b774 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: d5167284-086d-4b37-98b0-3853baabf418] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:50:30 compute-1 nova_compute[225855]: 2026-01-20 14:50:30.556 225859 DEBUG nova.virt.libvirt.driver [None req-caeffa0b-6abc-4875-a420-9b31a2b0b774 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: d5167284-086d-4b37-98b0-3853baabf418] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:50:30 compute-1 nova_compute[225855]: 2026-01-20 14:50:30.557 225859 DEBUG nova.virt.libvirt.driver [None req-caeffa0b-6abc-4875-a420-9b31a2b0b774 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: d5167284-086d-4b37-98b0-3853baabf418] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:50:30 compute-1 nova_compute[225855]: 2026-01-20 14:50:30.557 225859 DEBUG nova.virt.libvirt.driver [None req-caeffa0b-6abc-4875-a420-9b31a2b0b774 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: d5167284-086d-4b37-98b0-3853baabf418] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:50:30 compute-1 nova_compute[225855]: 2026-01-20 14:50:30.558 225859 DEBUG nova.virt.libvirt.driver [None req-caeffa0b-6abc-4875-a420-9b31a2b0b774 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: d5167284-086d-4b37-98b0-3853baabf418] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:50:30 compute-1 nova_compute[225855]: 2026-01-20 14:50:30.558 225859 DEBUG nova.virt.libvirt.driver [None req-caeffa0b-6abc-4875-a420-9b31a2b0b774 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: d5167284-086d-4b37-98b0-3853baabf418] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:50:30 compute-1 nova_compute[225855]: 2026-01-20 14:50:30.563 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: d5167284-086d-4b37-98b0-3853baabf418] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:50:30 compute-1 ceph-mon[81775]: osdmap e250: 3 total, 3 up, 3 in
Jan 20 14:50:30 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/3768006211' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:50:30 compute-1 ceph-mon[81775]: pgmap v1882: 321 pgs: 321 active+clean; 203 MiB data, 876 MiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 3.9 MiB/s wr, 249 op/s
Jan 20 14:50:30 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/3027020217' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:50:30 compute-1 nova_compute[225855]: 2026-01-20 14:50:30.637 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: d5167284-086d-4b37-98b0-3853baabf418] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 14:50:30 compute-1 nova_compute[225855]: 2026-01-20 14:50:30.658 225859 INFO nova.compute.manager [None req-caeffa0b-6abc-4875-a420-9b31a2b0b774 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: d5167284-086d-4b37-98b0-3853baabf418] Took 9.83 seconds to spawn the instance on the hypervisor.
Jan 20 14:50:30 compute-1 nova_compute[225855]: 2026-01-20 14:50:30.658 225859 DEBUG nova.compute.manager [None req-caeffa0b-6abc-4875-a420-9b31a2b0b774 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: d5167284-086d-4b37-98b0-3853baabf418] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:50:30 compute-1 nova_compute[225855]: 2026-01-20 14:50:30.748 225859 INFO nova.compute.manager [None req-caeffa0b-6abc-4875-a420-9b31a2b0b774 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: d5167284-086d-4b37-98b0-3853baabf418] Took 11.06 seconds to build instance.
Jan 20 14:50:30 compute-1 nova_compute[225855]: 2026-01-20 14:50:30.780 225859 DEBUG oslo_concurrency.lockutils [None req-caeffa0b-6abc-4875-a420-9b31a2b0b774 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Lock "d5167284-086d-4b37-98b0-3853baabf418" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.164s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:50:30 compute-1 nova_compute[225855]: 2026-01-20 14:50:30.843 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:50:30 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:50:30 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:50:30 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:50:30.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:50:31 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:50:31 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:50:31 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:50:31.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:50:32 compute-1 nova_compute[225855]: 2026-01-20 14:50:32.006 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:50:32 compute-1 nova_compute[225855]: 2026-01-20 14:50:32.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:50:32 compute-1 nova_compute[225855]: 2026-01-20 14:50:32.341 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:50:32 compute-1 nova_compute[225855]: 2026-01-20 14:50:32.341 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 14:50:32 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:50:32 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:50:32 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:50:32 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:50:32.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:50:32 compute-1 ceph-mon[81775]: pgmap v1883: 321 pgs: 321 active+clean; 186 MiB data, 886 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 5.3 MiB/s wr, 240 op/s
Jan 20 14:50:32 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/2955992127' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:50:33 compute-1 nova_compute[225855]: 2026-01-20 14:50:33.340 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:50:33 compute-1 nova_compute[225855]: 2026-01-20 14:50:33.340 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 14:50:33 compute-1 nova_compute[225855]: 2026-01-20 14:50:33.340 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 20 14:50:33 compute-1 nova_compute[225855]: 2026-01-20 14:50:33.359 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "refresh_cache-d5167284-086d-4b37-98b0-3853baabf418" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:50:33 compute-1 nova_compute[225855]: 2026-01-20 14:50:33.359 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquired lock "refresh_cache-d5167284-086d-4b37-98b0-3853baabf418" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:50:33 compute-1 nova_compute[225855]: 2026-01-20 14:50:33.359 225859 DEBUG nova.network.neutron [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] [instance: d5167284-086d-4b37-98b0-3853baabf418] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 20 14:50:33 compute-1 nova_compute[225855]: 2026-01-20 14:50:33.360 225859 DEBUG nova.objects.instance [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lazy-loading 'info_cache' on Instance uuid d5167284-086d-4b37-98b0-3853baabf418 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:50:33 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:50:33 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:50:33 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:50:33.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:50:34 compute-1 ceph-mon[81775]: pgmap v1884: 321 pgs: 321 active+clean; 195 MiB data, 878 MiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 5.1 MiB/s wr, 246 op/s
Jan 20 14:50:34 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:50:34 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:50:34 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:50:34.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:50:35 compute-1 nova_compute[225855]: 2026-01-20 14:50:34.999 225859 DEBUG nova.network.neutron [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] [instance: d5167284-086d-4b37-98b0-3853baabf418] Updating instance_info_cache with network_info: [{"id": "86cabae0-8599-4330-b71c-91eb2e6b76d8", "address": "fa:16:3e:30:05:34", "network": {"id": "3379e2b3-ffb2-4391-969b-c9dc51bfbe25", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1112843240-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "acb30fbc0e3749e390d7f867060b5a2a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap86cabae0-85", "ovs_interfaceid": "86cabae0-8599-4330-b71c-91eb2e6b76d8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:50:35 compute-1 nova_compute[225855]: 2026-01-20 14:50:35.019 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Releasing lock "refresh_cache-d5167284-086d-4b37-98b0-3853baabf418" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:50:35 compute-1 nova_compute[225855]: 2026-01-20 14:50:35.020 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] [instance: d5167284-086d-4b37-98b0-3853baabf418] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 20 14:50:35 compute-1 nova_compute[225855]: 2026-01-20 14:50:35.025 225859 DEBUG nova.compute.manager [None req-2880dd92-9035-4a66-9e1e-40a92f2e0283 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: d5167284-086d-4b37-98b0-3853baabf418] Stashing vm_state: active _prep_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:5560
Jan 20 14:50:35 compute-1 nova_compute[225855]: 2026-01-20 14:50:35.030 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:50:35 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:50:35.100 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=35, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '12:bb:42', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '06:92:24:f7:15:56'}, ipsec=False) old=SB_Global(nb_cfg=34) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:50:35 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:50:35.101 140354 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 20 14:50:35 compute-1 nova_compute[225855]: 2026-01-20 14:50:35.102 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:50:35 compute-1 nova_compute[225855]: 2026-01-20 14:50:35.131 225859 DEBUG oslo_concurrency.lockutils [None req-2880dd92-9035-4a66-9e1e-40a92f2e0283 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:50:35 compute-1 nova_compute[225855]: 2026-01-20 14:50:35.132 225859 DEBUG oslo_concurrency.lockutils [None req-2880dd92-9035-4a66-9e1e-40a92f2e0283 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:50:35 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e251 e251: 3 total, 3 up, 3 in
Jan 20 14:50:35 compute-1 nova_compute[225855]: 2026-01-20 14:50:35.157 225859 DEBUG nova.objects.instance [None req-2880dd92-9035-4a66-9e1e-40a92f2e0283 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Lazy-loading 'pci_requests' on Instance uuid d5167284-086d-4b37-98b0-3853baabf418 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:50:35 compute-1 nova_compute[225855]: 2026-01-20 14:50:35.183 225859 DEBUG nova.virt.hardware [None req-2880dd92-9035-4a66-9e1e-40a92f2e0283 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 20 14:50:35 compute-1 nova_compute[225855]: 2026-01-20 14:50:35.183 225859 INFO nova.compute.claims [None req-2880dd92-9035-4a66-9e1e-40a92f2e0283 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: d5167284-086d-4b37-98b0-3853baabf418] Claim successful on node compute-1.ctlplane.example.com
Jan 20 14:50:35 compute-1 nova_compute[225855]: 2026-01-20 14:50:35.183 225859 DEBUG nova.objects.instance [None req-2880dd92-9035-4a66-9e1e-40a92f2e0283 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Lazy-loading 'resources' on Instance uuid d5167284-086d-4b37-98b0-3853baabf418 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:50:35 compute-1 nova_compute[225855]: 2026-01-20 14:50:35.205 225859 DEBUG nova.objects.instance [None req-2880dd92-9035-4a66-9e1e-40a92f2e0283 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Lazy-loading 'pci_devices' on Instance uuid d5167284-086d-4b37-98b0-3853baabf418 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:50:35 compute-1 nova_compute[225855]: 2026-01-20 14:50:35.261 225859 INFO nova.compute.resource_tracker [None req-2880dd92-9035-4a66-9e1e-40a92f2e0283 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: d5167284-086d-4b37-98b0-3853baabf418] Updating resource usage from migration 135cd3df-9888-44b9-a278-94d967ab2b1d
Jan 20 14:50:35 compute-1 nova_compute[225855]: 2026-01-20 14:50:35.359 225859 DEBUG oslo_concurrency.processutils [None req-2880dd92-9035-4a66-9e1e-40a92f2e0283 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:50:35 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:50:35 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:50:35 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:50:35.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:50:35 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:50:35 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/170477161' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:50:35 compute-1 nova_compute[225855]: 2026-01-20 14:50:35.806 225859 DEBUG oslo_concurrency.processutils [None req-2880dd92-9035-4a66-9e1e-40a92f2e0283 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:50:35 compute-1 nova_compute[225855]: 2026-01-20 14:50:35.812 225859 DEBUG nova.compute.provider_tree [None req-2880dd92-9035-4a66-9e1e-40a92f2e0283 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:50:35 compute-1 nova_compute[225855]: 2026-01-20 14:50:35.835 225859 DEBUG nova.scheduler.client.report [None req-2880dd92-9035-4a66-9e1e-40a92f2e0283 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:50:35 compute-1 nova_compute[225855]: 2026-01-20 14:50:35.845 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:50:35 compute-1 nova_compute[225855]: 2026-01-20 14:50:35.860 225859 DEBUG oslo_concurrency.lockutils [None req-2880dd92-9035-4a66-9e1e-40a92f2e0283 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: held 0.728s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:50:35 compute-1 nova_compute[225855]: 2026-01-20 14:50:35.861 225859 INFO nova.compute.manager [None req-2880dd92-9035-4a66-9e1e-40a92f2e0283 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: d5167284-086d-4b37-98b0-3853baabf418] Migrating
Jan 20 14:50:36 compute-1 nova_compute[225855]: 2026-01-20 14:50:36.071 225859 DEBUG oslo_concurrency.lockutils [None req-2880dd92-9035-4a66-9e1e-40a92f2e0283 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Acquiring lock "refresh_cache-d5167284-086d-4b37-98b0-3853baabf418" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:50:36 compute-1 nova_compute[225855]: 2026-01-20 14:50:36.073 225859 DEBUG oslo_concurrency.lockutils [None req-2880dd92-9035-4a66-9e1e-40a92f2e0283 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Acquired lock "refresh_cache-d5167284-086d-4b37-98b0-3853baabf418" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:50:36 compute-1 nova_compute[225855]: 2026-01-20 14:50:36.073 225859 DEBUG nova.network.neutron [None req-2880dd92-9035-4a66-9e1e-40a92f2e0283 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: d5167284-086d-4b37-98b0-3853baabf418] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 14:50:36 compute-1 podman[267955]: 2026-01-20 14:50:36.090855639 +0000 UTC m=+0.129474176 container health_status 72be95c12e041eb2cd1a16ad70dfa1391d9f03a5f3180a614ead9459dcfcd46f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3)
Jan 20 14:50:36 compute-1 ceph-mon[81775]: osdmap e251: 3 total, 3 up, 3 in
Jan 20 14:50:36 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/170477161' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:50:36 compute-1 ceph-mon[81775]: pgmap v1886: 321 pgs: 321 active+clean; 242 MiB data, 900 MiB used, 20 GiB / 21 GiB avail; 5.0 MiB/s rd, 6.3 MiB/s wr, 355 op/s
Jan 20 14:50:36 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e252 e252: 3 total, 3 up, 3 in
Jan 20 14:50:36 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:50:36 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:50:36 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:50:36.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:50:37 compute-1 nova_compute[225855]: 2026-01-20 14:50:37.008 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:50:37 compute-1 ceph-mon[81775]: osdmap e252: 3 total, 3 up, 3 in
Jan 20 14:50:37 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e253 e253: 3 total, 3 up, 3 in
Jan 20 14:50:37 compute-1 nova_compute[225855]: 2026-01-20 14:50:37.340 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:50:37 compute-1 nova_compute[225855]: 2026-01-20 14:50:37.341 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:50:37 compute-1 nova_compute[225855]: 2026-01-20 14:50:37.363 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:50:37 compute-1 nova_compute[225855]: 2026-01-20 14:50:37.363 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:50:37 compute-1 nova_compute[225855]: 2026-01-20 14:50:37.364 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:50:37 compute-1 nova_compute[225855]: 2026-01-20 14:50:37.364 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 14:50:37 compute-1 nova_compute[225855]: 2026-01-20 14:50:37.365 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:50:37 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:50:37 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:50:37 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:50:37 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:50:37.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:50:37 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:50:37 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/547360188' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:50:37 compute-1 nova_compute[225855]: 2026-01-20 14:50:37.824 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:50:38 compute-1 nova_compute[225855]: 2026-01-20 14:50:38.197 225859 DEBUG nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] skipping disk for instance-0000006d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 14:50:38 compute-1 nova_compute[225855]: 2026-01-20 14:50:38.198 225859 DEBUG nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] skipping disk for instance-0000006d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 14:50:38 compute-1 ceph-mon[81775]: osdmap e253: 3 total, 3 up, 3 in
Jan 20 14:50:38 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/3655918105' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:50:38 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/547360188' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:50:38 compute-1 ceph-mon[81775]: pgmap v1889: 321 pgs: 321 active+clean; 278 MiB data, 920 MiB used, 20 GiB / 21 GiB avail; 9.8 MiB/s rd, 8.1 MiB/s wr, 457 op/s
Jan 20 14:50:38 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/2564209469' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:50:38 compute-1 nova_compute[225855]: 2026-01-20 14:50:38.352 225859 WARNING nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 14:50:38 compute-1 nova_compute[225855]: 2026-01-20 14:50:38.353 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4273MB free_disk=20.93305206298828GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 14:50:38 compute-1 nova_compute[225855]: 2026-01-20 14:50:38.353 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:50:38 compute-1 nova_compute[225855]: 2026-01-20 14:50:38.354 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:50:38 compute-1 nova_compute[225855]: 2026-01-20 14:50:38.413 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Applying migration context for instance d5167284-086d-4b37-98b0-3853baabf418 as it has an incoming, in-progress migration 135cd3df-9888-44b9-a278-94d967ab2b1d. Migration status is pre-migrating _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:950
Jan 20 14:50:38 compute-1 nova_compute[225855]: 2026-01-20 14:50:38.413 225859 INFO nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] [instance: d5167284-086d-4b37-98b0-3853baabf418] Updating resource usage from migration 135cd3df-9888-44b9-a278-94d967ab2b1d
Jan 20 14:50:38 compute-1 nova_compute[225855]: 2026-01-20 14:50:38.430 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Migration 135cd3df-9888-44b9-a278-94d967ab2b1d is active on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1640
Jan 20 14:50:38 compute-1 nova_compute[225855]: 2026-01-20 14:50:38.430 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Instance d5167284-086d-4b37-98b0-3853baabf418 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 192, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 20 14:50:38 compute-1 nova_compute[225855]: 2026-01-20 14:50:38.431 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 14:50:38 compute-1 nova_compute[225855]: 2026-01-20 14:50:38.431 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=832MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 14:50:38 compute-1 nova_compute[225855]: 2026-01-20 14:50:38.627 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:50:38 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:50:38 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:50:38 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:50:38.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:50:39 compute-1 nova_compute[225855]: 2026-01-20 14:50:39.008 225859 DEBUG nova.network.neutron [None req-2880dd92-9035-4a66-9e1e-40a92f2e0283 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: d5167284-086d-4b37-98b0-3853baabf418] Updating instance_info_cache with network_info: [{"id": "86cabae0-8599-4330-b71c-91eb2e6b76d8", "address": "fa:16:3e:30:05:34", "network": {"id": "3379e2b3-ffb2-4391-969b-c9dc51bfbe25", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1112843240-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "acb30fbc0e3749e390d7f867060b5a2a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap86cabae0-85", "ovs_interfaceid": "86cabae0-8599-4330-b71c-91eb2e6b76d8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:50:39 compute-1 nova_compute[225855]: 2026-01-20 14:50:39.030 225859 DEBUG oslo_concurrency.lockutils [None req-2880dd92-9035-4a66-9e1e-40a92f2e0283 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Releasing lock "refresh_cache-d5167284-086d-4b37-98b0-3853baabf418" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:50:39 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:50:39 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2975627258' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:50:39 compute-1 nova_compute[225855]: 2026-01-20 14:50:39.050 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.423s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:50:39 compute-1 nova_compute[225855]: 2026-01-20 14:50:39.061 225859 DEBUG nova.compute.provider_tree [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:50:39 compute-1 nova_compute[225855]: 2026-01-20 14:50:39.092 225859 DEBUG nova.scheduler.client.report [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:50:39 compute-1 nova_compute[225855]: 2026-01-20 14:50:39.128 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 14:50:39 compute-1 nova_compute[225855]: 2026-01-20 14:50:39.129 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.775s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:50:39 compute-1 nova_compute[225855]: 2026-01-20 14:50:39.154 225859 DEBUG nova.virt.libvirt.driver [None req-2880dd92-9035-4a66-9e1e-40a92f2e0283 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: d5167284-086d-4b37-98b0-3853baabf418] Starting migrate_disk_and_power_off migrate_disk_and_power_off /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11511
Jan 20 14:50:39 compute-1 nova_compute[225855]: 2026-01-20 14:50:39.158 225859 DEBUG nova.virt.libvirt.driver [None req-2880dd92-9035-4a66-9e1e-40a92f2e0283 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: d5167284-086d-4b37-98b0-3853baabf418] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Jan 20 14:50:39 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/2975627258' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:50:39 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:50:39 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:50:39 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:50:39.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:50:40 compute-1 ceph-mon[81775]: pgmap v1890: 321 pgs: 321 active+clean; 298 MiB data, 925 MiB used, 20 GiB / 21 GiB avail; 9.3 MiB/s rd, 8.0 MiB/s wr, 416 op/s
Jan 20 14:50:40 compute-1 nova_compute[225855]: 2026-01-20 14:50:40.848 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:50:40 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:50:40 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:50:40 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:50:40.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:50:41 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/3077486001' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:50:41 compute-1 nova_compute[225855]: 2026-01-20 14:50:41.343 225859 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1768920626.342226, fdeb13eb-edb4-4bff-aeef-2671ba9d4618 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:50:41 compute-1 nova_compute[225855]: 2026-01-20 14:50:41.344 225859 INFO nova.compute.manager [-] [instance: fdeb13eb-edb4-4bff-aeef-2671ba9d4618] VM Stopped (Lifecycle Event)
Jan 20 14:50:41 compute-1 nova_compute[225855]: 2026-01-20 14:50:41.365 225859 DEBUG nova.compute.manager [None req-05b2f404-44cc-40a8-80c4-e9e94641f940 - - - - - -] [instance: fdeb13eb-edb4-4bff-aeef-2671ba9d4618] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:50:41 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:50:41 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:50:41 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:50:41.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:50:42 compute-1 nova_compute[225855]: 2026-01-20 14:50:42.011 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:50:42 compute-1 nova_compute[225855]: 2026-01-20 14:50:42.127 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:50:42 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/4154529792' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:50:42 compute-1 ceph-mon[81775]: pgmap v1891: 321 pgs: 321 active+clean; 307 MiB data, 930 MiB used, 20 GiB / 21 GiB avail; 4.8 MiB/s rd, 4.4 MiB/s wr, 179 op/s
Jan 20 14:50:42 compute-1 nova_compute[225855]: 2026-01-20 14:50:42.335 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:50:42 compute-1 nova_compute[225855]: 2026-01-20 14:50:42.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:50:42 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:50:42 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:50:42 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:50:42 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:50:42.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:50:43 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:50:43 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:50:43 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:50:43.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:50:44 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:50:44.103 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5ffd4ac3-9266-4927-98ad-20a17782c725, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '35'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:50:44 compute-1 ovn_controller[130490]: 2026-01-20T14:50:44Z|00055|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:30:05:34 10.100.0.3
Jan 20 14:50:44 compute-1 ovn_controller[130490]: 2026-01-20T14:50:44Z|00056|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:30:05:34 10.100.0.3
Jan 20 14:50:44 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e254 e254: 3 total, 3 up, 3 in
Jan 20 14:50:44 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:50:44 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.002000057s ======
Jan 20 14:50:44 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:50:44.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000057s
Jan 20 14:50:44 compute-1 ceph-mon[81775]: pgmap v1892: 321 pgs: 321 active+clean; 307 MiB data, 930 MiB used, 20 GiB / 21 GiB avail; 5.2 MiB/s rd, 4.0 MiB/s wr, 208 op/s
Jan 20 14:50:44 compute-1 ceph-mon[81775]: osdmap e254: 3 total, 3 up, 3 in
Jan 20 14:50:44 compute-1 sudo[268030]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:50:44 compute-1 sudo[268030]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:50:44 compute-1 sudo[268030]: pam_unix(sudo:session): session closed for user root
Jan 20 14:50:45 compute-1 sudo[268055]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:50:45 compute-1 sudo[268055]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:50:45 compute-1 sudo[268055]: pam_unix(sudo:session): session closed for user root
Jan 20 14:50:45 compute-1 sudo[268080]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:50:45 compute-1 sudo[268080]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:50:45 compute-1 sudo[268080]: pam_unix(sudo:session): session closed for user root
Jan 20 14:50:45 compute-1 sudo[268105]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 20 14:50:45 compute-1 sudo[268105]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:50:45 compute-1 sudo[268105]: pam_unix(sudo:session): session closed for user root
Jan 20 14:50:45 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:50:45 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:50:45 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:50:45.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:50:45 compute-1 nova_compute[225855]: 2026-01-20 14:50:45.850 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:50:45 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 20 14:50:45 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:50:45 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 14:50:45 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:50:45 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 14:50:45 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 14:50:45 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:50:45 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/203403028' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:50:46 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:50:46 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:50:46 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:50:46.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:50:46 compute-1 ceph-mon[81775]: pgmap v1894: 321 pgs: 321 active+clean; 344 MiB data, 974 MiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 4.6 MiB/s wr, 258 op/s
Jan 20 14:50:46 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/2235680583' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:50:47 compute-1 nova_compute[225855]: 2026-01-20 14:50:47.014 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:50:47 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e254 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:50:47 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:50:47 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:50:47 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:50:47.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:50:48 compute-1 podman[268163]: 2026-01-20 14:50:48.041236322 +0000 UTC m=+0.083844968 container health_status 533cd8e0c3b8b7f910f15cef49e7f879ab46c12ce646a35628c15f13725eb500 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Jan 20 14:50:48 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:50:48 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:50:48 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:50:48.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:50:49 compute-1 nova_compute[225855]: 2026-01-20 14:50:49.201 225859 DEBUG nova.virt.libvirt.driver [None req-2880dd92-9035-4a66-9e1e-40a92f2e0283 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: d5167284-086d-4b37-98b0-3853baabf418] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Jan 20 14:50:49 compute-1 ceph-mon[81775]: pgmap v1895: 321 pgs: 321 active+clean; 366 MiB data, 993 MiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 5.7 MiB/s wr, 289 op/s
Jan 20 14:50:49 compute-1 nova_compute[225855]: 2026-01-20 14:50:49.335 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:50:49 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:50:49 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:50:49 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:50:49.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:50:50 compute-1 sudo[268185]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:50:50 compute-1 sudo[268185]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:50:50 compute-1 sudo[268185]: pam_unix(sudo:session): session closed for user root
Jan 20 14:50:50 compute-1 sudo[268210]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:50:50 compute-1 sudo[268210]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:50:50 compute-1 sudo[268210]: pam_unix(sudo:session): session closed for user root
Jan 20 14:50:50 compute-1 ceph-mon[81775]: pgmap v1896: 321 pgs: 321 active+clean; 368 MiB data, 997 MiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 5.5 MiB/s wr, 261 op/s
Jan 20 14:50:50 compute-1 nova_compute[225855]: 2026-01-20 14:50:50.853 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:50:50 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:50:50 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:50:50 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:50:50.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:50:51 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:50:51 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:50:51 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:50:51.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:50:51 compute-1 sudo[268236]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:50:51 compute-1 sudo[268236]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:50:51 compute-1 sudo[268236]: pam_unix(sudo:session): session closed for user root
Jan 20 14:50:51 compute-1 sudo[268261]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 14:50:51 compute-1 sudo[268261]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:50:52 compute-1 sudo[268261]: pam_unix(sudo:session): session closed for user root
Jan 20 14:50:52 compute-1 nova_compute[225855]: 2026-01-20 14:50:52.016 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:50:52 compute-1 kernel: tap86cabae0-85 (unregistering): left promiscuous mode
Jan 20 14:50:52 compute-1 NetworkManager[49104]: <info>  [1768920652.0478] device (tap86cabae0-85): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 20 14:50:52 compute-1 nova_compute[225855]: 2026-01-20 14:50:52.059 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:50:52 compute-1 ovn_controller[130490]: 2026-01-20T14:50:52Z|00426|binding|INFO|Releasing lport 86cabae0-8599-4330-b71c-91eb2e6b76d8 from this chassis (sb_readonly=0)
Jan 20 14:50:52 compute-1 ovn_controller[130490]: 2026-01-20T14:50:52Z|00427|binding|INFO|Setting lport 86cabae0-8599-4330-b71c-91eb2e6b76d8 down in Southbound
Jan 20 14:50:52 compute-1 ovn_controller[130490]: 2026-01-20T14:50:52Z|00428|binding|INFO|Removing iface tap86cabae0-85 ovn-installed in OVS
Jan 20 14:50:52 compute-1 nova_compute[225855]: 2026-01-20 14:50:52.104 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:50:52 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:50:52.104 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:30:05:34 10.100.0.3'], port_security=['fa:16:3e:30:05:34 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'd5167284-086d-4b37-98b0-3853baabf418', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3379e2b3-ffb2-4391-969b-c9dc51bfbe25', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'acb30fbc0e3749e390d7f867060b5a2a', 'neutron:revision_number': '4', 'neutron:security_group_ids': '19fab802-7db8-4c89-8f8e-8dcfc14d4627', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a0e287ba-f88b-46f5-bb7f-3cc2a74be88e, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], logical_port=86cabae0-8599-4330-b71c-91eb2e6b76d8) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:50:52 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:50:52.106 140354 INFO neutron.agent.ovn.metadata.agent [-] Port 86cabae0-8599-4330-b71c-91eb2e6b76d8 in datapath 3379e2b3-ffb2-4391-969b-c9dc51bfbe25 unbound from our chassis
Jan 20 14:50:52 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:50:52.108 140354 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3379e2b3-ffb2-4391-969b-c9dc51bfbe25, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 20 14:50:52 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:50:52.109 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[21bb2ab8-484a-42c5-a83a-5b2ce5b3e359]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:50:52 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:50:52.110 140354 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-3379e2b3-ffb2-4391-969b-c9dc51bfbe25 namespace which is not needed anymore
Jan 20 14:50:52 compute-1 systemd[1]: machine-qemu\x2d49\x2dinstance\x2d0000006d.scope: Deactivated successfully.
Jan 20 14:50:52 compute-1 systemd[1]: machine-qemu\x2d49\x2dinstance\x2d0000006d.scope: Consumed 13.958s CPU time.
Jan 20 14:50:52 compute-1 systemd-machined[194361]: Machine qemu-49-instance-0000006d terminated.
Jan 20 14:50:52 compute-1 neutron-haproxy-ovnmeta-3379e2b3-ffb2-4391-969b-c9dc51bfbe25[267863]: [NOTICE]   (267868) : haproxy version is 2.8.14-c23fe91
Jan 20 14:50:52 compute-1 neutron-haproxy-ovnmeta-3379e2b3-ffb2-4391-969b-c9dc51bfbe25[267863]: [NOTICE]   (267868) : path to executable is /usr/sbin/haproxy
Jan 20 14:50:52 compute-1 neutron-haproxy-ovnmeta-3379e2b3-ffb2-4391-969b-c9dc51bfbe25[267863]: [WARNING]  (267868) : Exiting Master process...
Jan 20 14:50:52 compute-1 neutron-haproxy-ovnmeta-3379e2b3-ffb2-4391-969b-c9dc51bfbe25[267863]: [WARNING]  (267868) : Exiting Master process...
Jan 20 14:50:52 compute-1 neutron-haproxy-ovnmeta-3379e2b3-ffb2-4391-969b-c9dc51bfbe25[267863]: [ALERT]    (267868) : Current worker (267870) exited with code 143 (Terminated)
Jan 20 14:50:52 compute-1 neutron-haproxy-ovnmeta-3379e2b3-ffb2-4391-969b-c9dc51bfbe25[267863]: [WARNING]  (267868) : All workers exited. Exiting... (0)
Jan 20 14:50:52 compute-1 systemd[1]: libpod-731bfd77941a55fc6ba26854288cff85a728673e94493d76f8d51354912a4949.scope: Deactivated successfully.
Jan 20 14:50:52 compute-1 podman[268311]: 2026-01-20 14:50:52.235631914 +0000 UTC m=+0.045575008 container died 731bfd77941a55fc6ba26854288cff85a728673e94493d76f8d51354912a4949 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3379e2b3-ffb2-4391-969b-c9dc51bfbe25, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true)
Jan 20 14:50:52 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-731bfd77941a55fc6ba26854288cff85a728673e94493d76f8d51354912a4949-userdata-shm.mount: Deactivated successfully.
Jan 20 14:50:52 compute-1 systemd[1]: var-lib-containers-storage-overlay-9eeb75a2b56f63e1e54b32f2517ebb6272ef57e2b63ce8662cb464dfa749ef5b-merged.mount: Deactivated successfully.
Jan 20 14:50:52 compute-1 podman[268311]: 2026-01-20 14:50:52.279164293 +0000 UTC m=+0.089107387 container cleanup 731bfd77941a55fc6ba26854288cff85a728673e94493d76f8d51354912a4949 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3379e2b3-ffb2-4391-969b-c9dc51bfbe25, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:50:52 compute-1 systemd[1]: libpod-conmon-731bfd77941a55fc6ba26854288cff85a728673e94493d76f8d51354912a4949.scope: Deactivated successfully.
Jan 20 14:50:52 compute-1 nova_compute[225855]: 2026-01-20 14:50:52.338 225859 INFO nova.virt.libvirt.driver [None req-2880dd92-9035-4a66-9e1e-40a92f2e0283 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: d5167284-086d-4b37-98b0-3853baabf418] Instance shutdown successfully after 13 seconds.
Jan 20 14:50:52 compute-1 nova_compute[225855]: 2026-01-20 14:50:52.345 225859 INFO nova.virt.libvirt.driver [-] [instance: d5167284-086d-4b37-98b0-3853baabf418] Instance destroyed successfully.
Jan 20 14:50:52 compute-1 podman[268343]: 2026-01-20 14:50:52.345925458 +0000 UTC m=+0.045783674 container remove 731bfd77941a55fc6ba26854288cff85a728673e94493d76f8d51354912a4949 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3379e2b3-ffb2-4391-969b-c9dc51bfbe25, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:50:52 compute-1 nova_compute[225855]: 2026-01-20 14:50:52.347 225859 DEBUG nova.virt.libvirt.vif [None req-2880dd92-9035-4a66-9e1e-40a92f2e0283 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T14:50:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1906505493',display_name='tempest-ServerDiskConfigTestJSON-server-1906505493',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1906505493',id=109,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-20T14:50:30Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='acb30fbc0e3749e390d7f867060b5a2a',ramdisk_id='',reservation_id='r-5s3il9ii',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServerDiskConfigTestJSON-1806346246',owner_user_name='tempest-ServerDiskConfigTestJSON-1806346246-project-member'},tags=<?>,task_state='resize_migrating',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T14:50:35Z,user_data=None,user_id='a1bd93d04cc4468abe1d5c61f5144191',uuid=d5167284-086d-4b37-98b0-3853baabf418,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "86cabae0-8599-4330-b71c-91eb2e6b76d8", "address": "fa:16:3e:30:05:34", "network": {"id": "3379e2b3-ffb2-4391-969b-c9dc51bfbe25", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1112843240-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerDiskConfigTestJSON-1112843240-network", "vif_mac": "fa:16:3e:30:05:34"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "acb30fbc0e3749e390d7f867060b5a2a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap86cabae0-85", "ovs_interfaceid": "86cabae0-8599-4330-b71c-91eb2e6b76d8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 20 14:50:52 compute-1 nova_compute[225855]: 2026-01-20 14:50:52.347 225859 DEBUG nova.network.os_vif_util [None req-2880dd92-9035-4a66-9e1e-40a92f2e0283 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Converting VIF {"id": "86cabae0-8599-4330-b71c-91eb2e6b76d8", "address": "fa:16:3e:30:05:34", "network": {"id": "3379e2b3-ffb2-4391-969b-c9dc51bfbe25", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1112843240-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerDiskConfigTestJSON-1112843240-network", "vif_mac": "fa:16:3e:30:05:34"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "acb30fbc0e3749e390d7f867060b5a2a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap86cabae0-85", "ovs_interfaceid": "86cabae0-8599-4330-b71c-91eb2e6b76d8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:50:52 compute-1 nova_compute[225855]: 2026-01-20 14:50:52.348 225859 DEBUG nova.network.os_vif_util [None req-2880dd92-9035-4a66-9e1e-40a92f2e0283 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:30:05:34,bridge_name='br-int',has_traffic_filtering=True,id=86cabae0-8599-4330-b71c-91eb2e6b76d8,network=Network(3379e2b3-ffb2-4391-969b-c9dc51bfbe25),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap86cabae0-85') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:50:52 compute-1 nova_compute[225855]: 2026-01-20 14:50:52.349 225859 DEBUG os_vif [None req-2880dd92-9035-4a66-9e1e-40a92f2e0283 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:30:05:34,bridge_name='br-int',has_traffic_filtering=True,id=86cabae0-8599-4330-b71c-91eb2e6b76d8,network=Network(3379e2b3-ffb2-4391-969b-c9dc51bfbe25),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap86cabae0-85') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 20 14:50:52 compute-1 nova_compute[225855]: 2026-01-20 14:50:52.352 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:50:52 compute-1 nova_compute[225855]: 2026-01-20 14:50:52.353 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap86cabae0-85, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:50:52 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:50:52.354 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[7baec480-5fe1-4191-a76f-7f4dfc374430]: (4, ('Tue Jan 20 02:50:52 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-3379e2b3-ffb2-4391-969b-c9dc51bfbe25 (731bfd77941a55fc6ba26854288cff85a728673e94493d76f8d51354912a4949)\n731bfd77941a55fc6ba26854288cff85a728673e94493d76f8d51354912a4949\nTue Jan 20 02:50:52 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-3379e2b3-ffb2-4391-969b-c9dc51bfbe25 (731bfd77941a55fc6ba26854288cff85a728673e94493d76f8d51354912a4949)\n731bfd77941a55fc6ba26854288cff85a728673e94493d76f8d51354912a4949\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:50:52 compute-1 nova_compute[225855]: 2026-01-20 14:50:52.355 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:50:52 compute-1 nova_compute[225855]: 2026-01-20 14:50:52.356 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:50:52 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:50:52.357 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[9b23a01e-398f-47bc-88ae-69dc4025aed6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:50:52 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:50:52.358 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3379e2b3-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:50:52 compute-1 nova_compute[225855]: 2026-01-20 14:50:52.359 225859 INFO os_vif [None req-2880dd92-9035-4a66-9e1e-40a92f2e0283 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:30:05:34,bridge_name='br-int',has_traffic_filtering=True,id=86cabae0-8599-4330-b71c-91eb2e6b76d8,network=Network(3379e2b3-ffb2-4391-969b-c9dc51bfbe25),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap86cabae0-85')
Jan 20 14:50:52 compute-1 nova_compute[225855]: 2026-01-20 14:50:52.359 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:50:52 compute-1 kernel: tap3379e2b3-f0: left promiscuous mode
Jan 20 14:50:52 compute-1 nova_compute[225855]: 2026-01-20 14:50:52.365 225859 DEBUG nova.virt.libvirt.driver [None req-2880dd92-9035-4a66-9e1e-40a92f2e0283 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] skipping disk for instance-0000006d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 14:50:52 compute-1 nova_compute[225855]: 2026-01-20 14:50:52.366 225859 DEBUG nova.virt.libvirt.driver [None req-2880dd92-9035-4a66-9e1e-40a92f2e0283 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] skipping disk for instance-0000006d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 14:50:52 compute-1 nova_compute[225855]: 2026-01-20 14:50:52.389 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:50:52 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:50:52.392 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[db0246f8-ad38-40ae-8d67-01b611bf2880]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:50:52 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:50:52.415 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[949fdb46-500a-4249-ad9d-f45356a2d76c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:50:52 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:50:52.416 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[ff6a267d-3869-46cf-b110-a281c26d4b20]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:50:52 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:50:52.433 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[14e27d9b-6d6e-4f78-bc80-7a8d14d55688]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 566732, 'reachable_time': 19078, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 268367, 'error': None, 'target': 'ovnmeta-3379e2b3-ffb2-4391-969b-c9dc51bfbe25', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:50:52 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:50:52.436 140466 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-3379e2b3-ffb2-4391-969b-c9dc51bfbe25 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 20 14:50:52 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:50:52.436 140466 DEBUG oslo.privsep.daemon [-] privsep: reply[d076fe63-98ce-4881-aa5c-2cda8e69f687]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:50:52 compute-1 systemd[1]: run-netns-ovnmeta\x2d3379e2b3\x2dffb2\x2d4391\x2d969b\x2dc9dc51bfbe25.mount: Deactivated successfully.
Jan 20 14:50:52 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e254 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:50:52 compute-1 nova_compute[225855]: 2026-01-20 14:50:52.559 225859 DEBUG nova.network.neutron [None req-2880dd92-9035-4a66-9e1e-40a92f2e0283 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: d5167284-086d-4b37-98b0-3853baabf418] Port 86cabae0-8599-4330-b71c-91eb2e6b76d8 binding to destination host compute-1.ctlplane.example.com is already ACTIVE migrate_instance_start /usr/lib/python3.9/site-packages/nova/network/neutron.py:3171
Jan 20 14:50:52 compute-1 nova_compute[225855]: 2026-01-20 14:50:52.567 225859 DEBUG nova.compute.manager [req-4c947333-3739-4fd8-a571-f7155aed0aca req-0e01e685-b93a-428d-ab11-d5a364545294 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d5167284-086d-4b37-98b0-3853baabf418] Received event network-vif-unplugged-86cabae0-8599-4330-b71c-91eb2e6b76d8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:50:52 compute-1 nova_compute[225855]: 2026-01-20 14:50:52.567 225859 DEBUG oslo_concurrency.lockutils [req-4c947333-3739-4fd8-a571-f7155aed0aca req-0e01e685-b93a-428d-ab11-d5a364545294 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "d5167284-086d-4b37-98b0-3853baabf418-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:50:52 compute-1 nova_compute[225855]: 2026-01-20 14:50:52.568 225859 DEBUG oslo_concurrency.lockutils [req-4c947333-3739-4fd8-a571-f7155aed0aca req-0e01e685-b93a-428d-ab11-d5a364545294 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "d5167284-086d-4b37-98b0-3853baabf418-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:50:52 compute-1 nova_compute[225855]: 2026-01-20 14:50:52.568 225859 DEBUG oslo_concurrency.lockutils [req-4c947333-3739-4fd8-a571-f7155aed0aca req-0e01e685-b93a-428d-ab11-d5a364545294 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "d5167284-086d-4b37-98b0-3853baabf418-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:50:52 compute-1 nova_compute[225855]: 2026-01-20 14:50:52.568 225859 DEBUG nova.compute.manager [req-4c947333-3739-4fd8-a571-f7155aed0aca req-0e01e685-b93a-428d-ab11-d5a364545294 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d5167284-086d-4b37-98b0-3853baabf418] No waiting events found dispatching network-vif-unplugged-86cabae0-8599-4330-b71c-91eb2e6b76d8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:50:52 compute-1 nova_compute[225855]: 2026-01-20 14:50:52.569 225859 WARNING nova.compute.manager [req-4c947333-3739-4fd8-a571-f7155aed0aca req-0e01e685-b93a-428d-ab11-d5a364545294 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d5167284-086d-4b37-98b0-3853baabf418] Received unexpected event network-vif-unplugged-86cabae0-8599-4330-b71c-91eb2e6b76d8 for instance with vm_state active and task_state resize_migrating.
Jan 20 14:50:52 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:50:52 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:50:52 compute-1 ceph-mon[81775]: pgmap v1897: 321 pgs: 321 active+clean; 372 MiB data, 998 MiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 5.1 MiB/s wr, 240 op/s
Jan 20 14:50:52 compute-1 nova_compute[225855]: 2026-01-20 14:50:52.734 225859 DEBUG oslo_concurrency.lockutils [None req-2880dd92-9035-4a66-9e1e-40a92f2e0283 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Acquiring lock "d5167284-086d-4b37-98b0-3853baabf418-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:50:52 compute-1 nova_compute[225855]: 2026-01-20 14:50:52.735 225859 DEBUG oslo_concurrency.lockutils [None req-2880dd92-9035-4a66-9e1e-40a92f2e0283 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Lock "d5167284-086d-4b37-98b0-3853baabf418-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:50:52 compute-1 nova_compute[225855]: 2026-01-20 14:50:52.736 225859 DEBUG oslo_concurrency.lockutils [None req-2880dd92-9035-4a66-9e1e-40a92f2e0283 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Lock "d5167284-086d-4b37-98b0-3853baabf418-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:50:52 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:50:52 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:50:52 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:50:52.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:50:52 compute-1 nova_compute[225855]: 2026-01-20 14:50:52.912 225859 DEBUG oslo_concurrency.lockutils [None req-2880dd92-9035-4a66-9e1e-40a92f2e0283 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Acquiring lock "refresh_cache-d5167284-086d-4b37-98b0-3853baabf418" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:50:52 compute-1 nova_compute[225855]: 2026-01-20 14:50:52.913 225859 DEBUG oslo_concurrency.lockutils [None req-2880dd92-9035-4a66-9e1e-40a92f2e0283 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Acquired lock "refresh_cache-d5167284-086d-4b37-98b0-3853baabf418" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:50:52 compute-1 nova_compute[225855]: 2026-01-20 14:50:52.913 225859 DEBUG nova.network.neutron [None req-2880dd92-9035-4a66-9e1e-40a92f2e0283 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: d5167284-086d-4b37-98b0-3853baabf418] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 14:50:53 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:50:53 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:50:53 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:50:53.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:50:54 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:50:54 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:50:54 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:50:54.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:50:54 compute-1 ceph-mon[81775]: pgmap v1898: 321 pgs: 321 active+clean; 378 MiB data, 998 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 5.7 MiB/s wr, 201 op/s
Jan 20 14:50:55 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:50:55 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:50:55 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:50:55.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:50:55 compute-1 nova_compute[225855]: 2026-01-20 14:50:55.713 225859 DEBUG nova.compute.manager [req-b68ff91a-7207-483b-9d85-d9928c04c25e req-fc83dc6f-b2b1-4598-b05b-74579a970938 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d5167284-086d-4b37-98b0-3853baabf418] Received event network-vif-plugged-86cabae0-8599-4330-b71c-91eb2e6b76d8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:50:55 compute-1 nova_compute[225855]: 2026-01-20 14:50:55.715 225859 DEBUG oslo_concurrency.lockutils [req-b68ff91a-7207-483b-9d85-d9928c04c25e req-fc83dc6f-b2b1-4598-b05b-74579a970938 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "d5167284-086d-4b37-98b0-3853baabf418-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:50:55 compute-1 nova_compute[225855]: 2026-01-20 14:50:55.715 225859 DEBUG oslo_concurrency.lockutils [req-b68ff91a-7207-483b-9d85-d9928c04c25e req-fc83dc6f-b2b1-4598-b05b-74579a970938 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "d5167284-086d-4b37-98b0-3853baabf418-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:50:55 compute-1 nova_compute[225855]: 2026-01-20 14:50:55.716 225859 DEBUG oslo_concurrency.lockutils [req-b68ff91a-7207-483b-9d85-d9928c04c25e req-fc83dc6f-b2b1-4598-b05b-74579a970938 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "d5167284-086d-4b37-98b0-3853baabf418-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:50:55 compute-1 nova_compute[225855]: 2026-01-20 14:50:55.716 225859 DEBUG nova.compute.manager [req-b68ff91a-7207-483b-9d85-d9928c04c25e req-fc83dc6f-b2b1-4598-b05b-74579a970938 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d5167284-086d-4b37-98b0-3853baabf418] No waiting events found dispatching network-vif-plugged-86cabae0-8599-4330-b71c-91eb2e6b76d8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:50:55 compute-1 nova_compute[225855]: 2026-01-20 14:50:55.717 225859 WARNING nova.compute.manager [req-b68ff91a-7207-483b-9d85-d9928c04c25e req-fc83dc6f-b2b1-4598-b05b-74579a970938 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d5167284-086d-4b37-98b0-3853baabf418] Received unexpected event network-vif-plugged-86cabae0-8599-4330-b71c-91eb2e6b76d8 for instance with vm_state active and task_state resize_migrated.
Jan 20 14:50:55 compute-1 nova_compute[225855]: 2026-01-20 14:50:55.857 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:50:56 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:50:56 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:50:56 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:50:56.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:50:56 compute-1 ceph-mon[81775]: pgmap v1899: 321 pgs: 321 active+clean; 402 MiB data, 1016 MiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 6.1 MiB/s wr, 211 op/s
Jan 20 14:50:57 compute-1 nova_compute[225855]: 2026-01-20 14:50:57.356 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:50:57 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e254 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:50:57 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:50:57 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:50:57 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:50:57.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:50:58 compute-1 ceph-mon[81775]: pgmap v1900: 321 pgs: 321 active+clean; 405 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 801 KiB/s rd, 4.0 MiB/s wr, 136 op/s
Jan 20 14:50:58 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:50:58 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:50:58 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:50:58.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:50:59 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:50:59 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:50:59 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:50:59.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:50:59 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/2081531429' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:50:59 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/2264256095' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:50:59 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/182888219' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:51:00 compute-1 nova_compute[225855]: 2026-01-20 14:51:00.859 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:51:00 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:51:00 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:51:00 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:51:00.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:51:00 compute-1 nova_compute[225855]: 2026-01-20 14:51:00.987 225859 DEBUG nova.network.neutron [None req-2880dd92-9035-4a66-9e1e-40a92f2e0283 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: d5167284-086d-4b37-98b0-3853baabf418] Updating instance_info_cache with network_info: [{"id": "86cabae0-8599-4330-b71c-91eb2e6b76d8", "address": "fa:16:3e:30:05:34", "network": {"id": "3379e2b3-ffb2-4391-969b-c9dc51bfbe25", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1112843240-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "acb30fbc0e3749e390d7f867060b5a2a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap86cabae0-85", "ovs_interfaceid": "86cabae0-8599-4330-b71c-91eb2e6b76d8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:51:01 compute-1 nova_compute[225855]: 2026-01-20 14:51:01.009 225859 DEBUG oslo_concurrency.lockutils [None req-2880dd92-9035-4a66-9e1e-40a92f2e0283 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Releasing lock "refresh_cache-d5167284-086d-4b37-98b0-3853baabf418" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:51:01 compute-1 ceph-mon[81775]: pgmap v1901: 321 pgs: 321 active+clean; 405 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 347 KiB/s rd, 2.5 MiB/s wr, 89 op/s
Jan 20 14:51:01 compute-1 nova_compute[225855]: 2026-01-20 14:51:01.174 225859 DEBUG nova.virt.libvirt.driver [None req-2880dd92-9035-4a66-9e1e-40a92f2e0283 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: d5167284-086d-4b37-98b0-3853baabf418] Starting finish_migration finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11698
Jan 20 14:51:01 compute-1 nova_compute[225855]: 2026-01-20 14:51:01.176 225859 DEBUG nova.virt.libvirt.driver [None req-2880dd92-9035-4a66-9e1e-40a92f2e0283 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: d5167284-086d-4b37-98b0-3853baabf418] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719
Jan 20 14:51:01 compute-1 nova_compute[225855]: 2026-01-20 14:51:01.177 225859 INFO nova.virt.libvirt.driver [None req-2880dd92-9035-4a66-9e1e-40a92f2e0283 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: d5167284-086d-4b37-98b0-3853baabf418] Creating image(s)
Jan 20 14:51:01 compute-1 nova_compute[225855]: 2026-01-20 14:51:01.221 225859 DEBUG nova.storage.rbd_utils [None req-2880dd92-9035-4a66-9e1e-40a92f2e0283 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] creating snapshot(nova-resize) on rbd image(d5167284-086d-4b37-98b0-3853baabf418_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 20 14:51:01 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:51:01 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:51:01 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:51:01.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:51:02 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e255 e255: 3 total, 3 up, 3 in
Jan 20 14:51:02 compute-1 ceph-mon[81775]: pgmap v1902: 321 pgs: 321 active+clean; 405 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 359 KiB/s rd, 2.2 MiB/s wr, 101 op/s
Jan 20 14:51:02 compute-1 nova_compute[225855]: 2026-01-20 14:51:02.112 225859 DEBUG nova.objects.instance [None req-2880dd92-9035-4a66-9e1e-40a92f2e0283 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Lazy-loading 'trusted_certs' on Instance uuid d5167284-086d-4b37-98b0-3853baabf418 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:51:02 compute-1 nova_compute[225855]: 2026-01-20 14:51:02.223 225859 DEBUG nova.virt.libvirt.driver [None req-2880dd92-9035-4a66-9e1e-40a92f2e0283 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: d5167284-086d-4b37-98b0-3853baabf418] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Jan 20 14:51:02 compute-1 nova_compute[225855]: 2026-01-20 14:51:02.224 225859 DEBUG nova.virt.libvirt.driver [None req-2880dd92-9035-4a66-9e1e-40a92f2e0283 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: d5167284-086d-4b37-98b0-3853baabf418] Ensure instance console log exists: /var/lib/nova/instances/d5167284-086d-4b37-98b0-3853baabf418/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 20 14:51:02 compute-1 nova_compute[225855]: 2026-01-20 14:51:02.224 225859 DEBUG oslo_concurrency.lockutils [None req-2880dd92-9035-4a66-9e1e-40a92f2e0283 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:51:02 compute-1 nova_compute[225855]: 2026-01-20 14:51:02.225 225859 DEBUG oslo_concurrency.lockutils [None req-2880dd92-9035-4a66-9e1e-40a92f2e0283 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:51:02 compute-1 nova_compute[225855]: 2026-01-20 14:51:02.225 225859 DEBUG oslo_concurrency.lockutils [None req-2880dd92-9035-4a66-9e1e-40a92f2e0283 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:51:02 compute-1 nova_compute[225855]: 2026-01-20 14:51:02.227 225859 DEBUG nova.virt.libvirt.driver [None req-2880dd92-9035-4a66-9e1e-40a92f2e0283 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: d5167284-086d-4b37-98b0-3853baabf418] Start _get_guest_xml network_info=[{"id": "86cabae0-8599-4330-b71c-91eb2e6b76d8", "address": "fa:16:3e:30:05:34", "network": {"id": "3379e2b3-ffb2-4391-969b-c9dc51bfbe25", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1112843240-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerDiskConfigTestJSON-1112843240-network", "vif_mac": "fa:16:3e:30:05:34"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "acb30fbc0e3749e390d7f867060b5a2a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap86cabae0-85", "ovs_interfaceid": "86cabae0-8599-4330-b71c-91eb2e6b76d8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'device_type': 'disk', 'encryption_options': None, 'size': 0, 'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'guest_format': None, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'image_id': 'a32b3e07-16d8-46fd-9a7b-c242c432fcf9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 20 14:51:02 compute-1 nova_compute[225855]: 2026-01-20 14:51:02.231 225859 WARNING nova.virt.libvirt.driver [None req-2880dd92-9035-4a66-9e1e-40a92f2e0283 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 14:51:02 compute-1 nova_compute[225855]: 2026-01-20 14:51:02.236 225859 DEBUG nova.virt.libvirt.host [None req-2880dd92-9035-4a66-9e1e-40a92f2e0283 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 20 14:51:02 compute-1 nova_compute[225855]: 2026-01-20 14:51:02.237 225859 DEBUG nova.virt.libvirt.host [None req-2880dd92-9035-4a66-9e1e-40a92f2e0283 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 20 14:51:02 compute-1 nova_compute[225855]: 2026-01-20 14:51:02.240 225859 DEBUG nova.virt.libvirt.host [None req-2880dd92-9035-4a66-9e1e-40a92f2e0283 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 20 14:51:02 compute-1 nova_compute[225855]: 2026-01-20 14:51:02.240 225859 DEBUG nova.virt.libvirt.host [None req-2880dd92-9035-4a66-9e1e-40a92f2e0283 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 20 14:51:02 compute-1 nova_compute[225855]: 2026-01-20 14:51:02.241 225859 DEBUG nova.virt.libvirt.driver [None req-2880dd92-9035-4a66-9e1e-40a92f2e0283 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 20 14:51:02 compute-1 nova_compute[225855]: 2026-01-20 14:51:02.241 225859 DEBUG nova.virt.hardware [None req-2880dd92-9035-4a66-9e1e-40a92f2e0283 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-20T14:21:55Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='30c26a27-d918-46d8-a512-4ef3b4ce5955',id=2,is_public=True,memory_mb=192,name='m1.micro',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 20 14:51:02 compute-1 nova_compute[225855]: 2026-01-20 14:51:02.242 225859 DEBUG nova.virt.hardware [None req-2880dd92-9035-4a66-9e1e-40a92f2e0283 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 20 14:51:02 compute-1 nova_compute[225855]: 2026-01-20 14:51:02.242 225859 DEBUG nova.virt.hardware [None req-2880dd92-9035-4a66-9e1e-40a92f2e0283 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 20 14:51:02 compute-1 nova_compute[225855]: 2026-01-20 14:51:02.242 225859 DEBUG nova.virt.hardware [None req-2880dd92-9035-4a66-9e1e-40a92f2e0283 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 20 14:51:02 compute-1 nova_compute[225855]: 2026-01-20 14:51:02.243 225859 DEBUG nova.virt.hardware [None req-2880dd92-9035-4a66-9e1e-40a92f2e0283 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 20 14:51:02 compute-1 nova_compute[225855]: 2026-01-20 14:51:02.243 225859 DEBUG nova.virt.hardware [None req-2880dd92-9035-4a66-9e1e-40a92f2e0283 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 20 14:51:02 compute-1 nova_compute[225855]: 2026-01-20 14:51:02.243 225859 DEBUG nova.virt.hardware [None req-2880dd92-9035-4a66-9e1e-40a92f2e0283 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 20 14:51:02 compute-1 nova_compute[225855]: 2026-01-20 14:51:02.243 225859 DEBUG nova.virt.hardware [None req-2880dd92-9035-4a66-9e1e-40a92f2e0283 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 20 14:51:02 compute-1 nova_compute[225855]: 2026-01-20 14:51:02.244 225859 DEBUG nova.virt.hardware [None req-2880dd92-9035-4a66-9e1e-40a92f2e0283 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 20 14:51:02 compute-1 nova_compute[225855]: 2026-01-20 14:51:02.244 225859 DEBUG nova.virt.hardware [None req-2880dd92-9035-4a66-9e1e-40a92f2e0283 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 20 14:51:02 compute-1 nova_compute[225855]: 2026-01-20 14:51:02.244 225859 DEBUG nova.virt.hardware [None req-2880dd92-9035-4a66-9e1e-40a92f2e0283 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 20 14:51:02 compute-1 nova_compute[225855]: 2026-01-20 14:51:02.244 225859 DEBUG nova.objects.instance [None req-2880dd92-9035-4a66-9e1e-40a92f2e0283 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Lazy-loading 'vcpu_model' on Instance uuid d5167284-086d-4b37-98b0-3853baabf418 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:51:02 compute-1 nova_compute[225855]: 2026-01-20 14:51:02.258 225859 DEBUG oslo_concurrency.processutils [None req-2880dd92-9035-4a66-9e1e-40a92f2e0283 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:51:02 compute-1 nova_compute[225855]: 2026-01-20 14:51:02.398 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:51:02 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e255 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:51:02 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:51:02 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1368014388' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:51:02 compute-1 nova_compute[225855]: 2026-01-20 14:51:02.722 225859 DEBUG oslo_concurrency.processutils [None req-2880dd92-9035-4a66-9e1e-40a92f2e0283 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:51:02 compute-1 nova_compute[225855]: 2026-01-20 14:51:02.756 225859 DEBUG oslo_concurrency.processutils [None req-2880dd92-9035-4a66-9e1e-40a92f2e0283 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:51:02 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:51:02 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:51:02 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:51:02.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:51:03 compute-1 ceph-mon[81775]: osdmap e255: 3 total, 3 up, 3 in
Jan 20 14:51:03 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/1368014388' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:51:03 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:51:03 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2263803824' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:51:03 compute-1 nova_compute[225855]: 2026-01-20 14:51:03.199 225859 DEBUG oslo_concurrency.processutils [None req-2880dd92-9035-4a66-9e1e-40a92f2e0283 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:51:03 compute-1 nova_compute[225855]: 2026-01-20 14:51:03.203 225859 DEBUG nova.virt.libvirt.vif [None req-2880dd92-9035-4a66-9e1e-40a92f2e0283 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T14:50:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1906505493',display_name='tempest-ServerDiskConfigTestJSON-server-1906505493',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1906505493',id=109,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-20T14:50:30Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='acb30fbc0e3749e390d7f867060b5a2a',ramdisk_id='',reservation_id='r-5s3il9ii',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',i
mage_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServerDiskConfigTestJSON-1806346246',owner_user_name='tempest-ServerDiskConfigTestJSON-1806346246-project-member'},tags=<?>,task_state='resize_finish',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T14:50:52Z,user_data=None,user_id='a1bd93d04cc4468abe1d5c61f5144191',uuid=d5167284-086d-4b37-98b0-3853baabf418,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "86cabae0-8599-4330-b71c-91eb2e6b76d8", "address": "fa:16:3e:30:05:34", "network": {"id": "3379e2b3-ffb2-4391-969b-c9dc51bfbe25", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1112843240-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerDiskConfigTestJSON-1112843240-network", "vif_mac": "fa:16:3e:30:05:34"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "acb30fbc0e3749e390d7f867060b5a2a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap86cabae0-85", "ovs_interfaceid": "86cabae0-8599-4330-b71c-91eb2e6b76d8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 20 14:51:03 compute-1 nova_compute[225855]: 2026-01-20 14:51:03.203 225859 DEBUG nova.network.os_vif_util [None req-2880dd92-9035-4a66-9e1e-40a92f2e0283 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Converting VIF {"id": "86cabae0-8599-4330-b71c-91eb2e6b76d8", "address": "fa:16:3e:30:05:34", "network": {"id": "3379e2b3-ffb2-4391-969b-c9dc51bfbe25", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1112843240-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerDiskConfigTestJSON-1112843240-network", "vif_mac": "fa:16:3e:30:05:34"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "acb30fbc0e3749e390d7f867060b5a2a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap86cabae0-85", "ovs_interfaceid": "86cabae0-8599-4330-b71c-91eb2e6b76d8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:51:03 compute-1 nova_compute[225855]: 2026-01-20 14:51:03.204 225859 DEBUG nova.network.os_vif_util [None req-2880dd92-9035-4a66-9e1e-40a92f2e0283 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:30:05:34,bridge_name='br-int',has_traffic_filtering=True,id=86cabae0-8599-4330-b71c-91eb2e6b76d8,network=Network(3379e2b3-ffb2-4391-969b-c9dc51bfbe25),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap86cabae0-85') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:51:03 compute-1 nova_compute[225855]: 2026-01-20 14:51:03.207 225859 DEBUG nova.virt.libvirt.driver [None req-2880dd92-9035-4a66-9e1e-40a92f2e0283 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: d5167284-086d-4b37-98b0-3853baabf418] End _get_guest_xml xml=<domain type="kvm">
Jan 20 14:51:03 compute-1 nova_compute[225855]:   <uuid>d5167284-086d-4b37-98b0-3853baabf418</uuid>
Jan 20 14:51:03 compute-1 nova_compute[225855]:   <name>instance-0000006d</name>
Jan 20 14:51:03 compute-1 nova_compute[225855]:   <memory>196608</memory>
Jan 20 14:51:03 compute-1 nova_compute[225855]:   <vcpu>1</vcpu>
Jan 20 14:51:03 compute-1 nova_compute[225855]:   <metadata>
Jan 20 14:51:03 compute-1 nova_compute[225855]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 14:51:03 compute-1 nova_compute[225855]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 14:51:03 compute-1 nova_compute[225855]:       <nova:name>tempest-ServerDiskConfigTestJSON-server-1906505493</nova:name>
Jan 20 14:51:03 compute-1 nova_compute[225855]:       <nova:creationTime>2026-01-20 14:51:02</nova:creationTime>
Jan 20 14:51:03 compute-1 nova_compute[225855]:       <nova:flavor name="m1.micro">
Jan 20 14:51:03 compute-1 nova_compute[225855]:         <nova:memory>192</nova:memory>
Jan 20 14:51:03 compute-1 nova_compute[225855]:         <nova:disk>1</nova:disk>
Jan 20 14:51:03 compute-1 nova_compute[225855]:         <nova:swap>0</nova:swap>
Jan 20 14:51:03 compute-1 nova_compute[225855]:         <nova:ephemeral>0</nova:ephemeral>
Jan 20 14:51:03 compute-1 nova_compute[225855]:         <nova:vcpus>1</nova:vcpus>
Jan 20 14:51:03 compute-1 nova_compute[225855]:       </nova:flavor>
Jan 20 14:51:03 compute-1 nova_compute[225855]:       <nova:owner>
Jan 20 14:51:03 compute-1 nova_compute[225855]:         <nova:user uuid="a1bd93d04cc4468abe1d5c61f5144191">tempest-ServerDiskConfigTestJSON-1806346246-project-member</nova:user>
Jan 20 14:51:03 compute-1 nova_compute[225855]:         <nova:project uuid="acb30fbc0e3749e390d7f867060b5a2a">tempest-ServerDiskConfigTestJSON-1806346246</nova:project>
Jan 20 14:51:03 compute-1 nova_compute[225855]:       </nova:owner>
Jan 20 14:51:03 compute-1 nova_compute[225855]:       <nova:root type="image" uuid="a32b3e07-16d8-46fd-9a7b-c242c432fcf9"/>
Jan 20 14:51:03 compute-1 nova_compute[225855]:       <nova:ports>
Jan 20 14:51:03 compute-1 nova_compute[225855]:         <nova:port uuid="86cabae0-8599-4330-b71c-91eb2e6b76d8">
Jan 20 14:51:03 compute-1 nova_compute[225855]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Jan 20 14:51:03 compute-1 nova_compute[225855]:         </nova:port>
Jan 20 14:51:03 compute-1 nova_compute[225855]:       </nova:ports>
Jan 20 14:51:03 compute-1 nova_compute[225855]:     </nova:instance>
Jan 20 14:51:03 compute-1 nova_compute[225855]:   </metadata>
Jan 20 14:51:03 compute-1 nova_compute[225855]:   <sysinfo type="smbios">
Jan 20 14:51:03 compute-1 nova_compute[225855]:     <system>
Jan 20 14:51:03 compute-1 nova_compute[225855]:       <entry name="manufacturer">RDO</entry>
Jan 20 14:51:03 compute-1 nova_compute[225855]:       <entry name="product">OpenStack Compute</entry>
Jan 20 14:51:03 compute-1 nova_compute[225855]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 14:51:03 compute-1 nova_compute[225855]:       <entry name="serial">d5167284-086d-4b37-98b0-3853baabf418</entry>
Jan 20 14:51:03 compute-1 nova_compute[225855]:       <entry name="uuid">d5167284-086d-4b37-98b0-3853baabf418</entry>
Jan 20 14:51:03 compute-1 nova_compute[225855]:       <entry name="family">Virtual Machine</entry>
Jan 20 14:51:03 compute-1 nova_compute[225855]:     </system>
Jan 20 14:51:03 compute-1 nova_compute[225855]:   </sysinfo>
Jan 20 14:51:03 compute-1 nova_compute[225855]:   <os>
Jan 20 14:51:03 compute-1 nova_compute[225855]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 20 14:51:03 compute-1 nova_compute[225855]:     <boot dev="hd"/>
Jan 20 14:51:03 compute-1 nova_compute[225855]:     <smbios mode="sysinfo"/>
Jan 20 14:51:03 compute-1 nova_compute[225855]:   </os>
Jan 20 14:51:03 compute-1 nova_compute[225855]:   <features>
Jan 20 14:51:03 compute-1 nova_compute[225855]:     <acpi/>
Jan 20 14:51:03 compute-1 nova_compute[225855]:     <apic/>
Jan 20 14:51:03 compute-1 nova_compute[225855]:     <vmcoreinfo/>
Jan 20 14:51:03 compute-1 nova_compute[225855]:   </features>
Jan 20 14:51:03 compute-1 nova_compute[225855]:   <clock offset="utc">
Jan 20 14:51:03 compute-1 nova_compute[225855]:     <timer name="pit" tickpolicy="delay"/>
Jan 20 14:51:03 compute-1 nova_compute[225855]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 20 14:51:03 compute-1 nova_compute[225855]:     <timer name="hpet" present="no"/>
Jan 20 14:51:03 compute-1 nova_compute[225855]:   </clock>
Jan 20 14:51:03 compute-1 nova_compute[225855]:   <cpu mode="custom" match="exact">
Jan 20 14:51:03 compute-1 nova_compute[225855]:     <model>Nehalem</model>
Jan 20 14:51:03 compute-1 nova_compute[225855]:     <topology sockets="1" cores="1" threads="1"/>
Jan 20 14:51:03 compute-1 nova_compute[225855]:   </cpu>
Jan 20 14:51:03 compute-1 nova_compute[225855]:   <devices>
Jan 20 14:51:03 compute-1 nova_compute[225855]:     <disk type="network" device="disk">
Jan 20 14:51:03 compute-1 nova_compute[225855]:       <driver type="raw" cache="none"/>
Jan 20 14:51:03 compute-1 nova_compute[225855]:       <source protocol="rbd" name="vms/d5167284-086d-4b37-98b0-3853baabf418_disk">
Jan 20 14:51:03 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:51:03 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:51:03 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:51:03 compute-1 nova_compute[225855]:       </source>
Jan 20 14:51:03 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 14:51:03 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:51:03 compute-1 nova_compute[225855]:       </auth>
Jan 20 14:51:03 compute-1 nova_compute[225855]:       <target dev="vda" bus="virtio"/>
Jan 20 14:51:03 compute-1 nova_compute[225855]:     </disk>
Jan 20 14:51:03 compute-1 nova_compute[225855]:     <disk type="network" device="cdrom">
Jan 20 14:51:03 compute-1 nova_compute[225855]:       <driver type="raw" cache="none"/>
Jan 20 14:51:03 compute-1 nova_compute[225855]:       <source protocol="rbd" name="vms/d5167284-086d-4b37-98b0-3853baabf418_disk.config">
Jan 20 14:51:03 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:51:03 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:51:03 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:51:03 compute-1 nova_compute[225855]:       </source>
Jan 20 14:51:03 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 14:51:03 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:51:03 compute-1 nova_compute[225855]:       </auth>
Jan 20 14:51:03 compute-1 nova_compute[225855]:       <target dev="sda" bus="sata"/>
Jan 20 14:51:03 compute-1 nova_compute[225855]:     </disk>
Jan 20 14:51:03 compute-1 nova_compute[225855]:     <interface type="ethernet">
Jan 20 14:51:03 compute-1 nova_compute[225855]:       <mac address="fa:16:3e:30:05:34"/>
Jan 20 14:51:03 compute-1 nova_compute[225855]:       <model type="virtio"/>
Jan 20 14:51:03 compute-1 nova_compute[225855]:       <driver name="vhost" rx_queue_size="512"/>
Jan 20 14:51:03 compute-1 nova_compute[225855]:       <mtu size="1442"/>
Jan 20 14:51:03 compute-1 nova_compute[225855]:       <target dev="tap86cabae0-85"/>
Jan 20 14:51:03 compute-1 nova_compute[225855]:     </interface>
Jan 20 14:51:03 compute-1 nova_compute[225855]:     <serial type="pty">
Jan 20 14:51:03 compute-1 nova_compute[225855]:       <log file="/var/lib/nova/instances/d5167284-086d-4b37-98b0-3853baabf418/console.log" append="off"/>
Jan 20 14:51:03 compute-1 nova_compute[225855]:     </serial>
Jan 20 14:51:03 compute-1 nova_compute[225855]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 20 14:51:03 compute-1 nova_compute[225855]:     <video>
Jan 20 14:51:03 compute-1 nova_compute[225855]:       <model type="virtio"/>
Jan 20 14:51:03 compute-1 nova_compute[225855]:     </video>
Jan 20 14:51:03 compute-1 nova_compute[225855]:     <input type="tablet" bus="usb"/>
Jan 20 14:51:03 compute-1 nova_compute[225855]:     <rng model="virtio">
Jan 20 14:51:03 compute-1 nova_compute[225855]:       <backend model="random">/dev/urandom</backend>
Jan 20 14:51:03 compute-1 nova_compute[225855]:     </rng>
Jan 20 14:51:03 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root"/>
Jan 20 14:51:03 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:51:03 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:51:03 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:51:03 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:51:03 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:51:03 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:51:03 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:51:03 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:51:03 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:51:03 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:51:03 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:51:03 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:51:03 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:51:03 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:51:03 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:51:03 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:51:03 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:51:03 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:51:03 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:51:03 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:51:03 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:51:03 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:51:03 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:51:03 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:51:03 compute-1 nova_compute[225855]:     <controller type="usb" index="0"/>
Jan 20 14:51:03 compute-1 nova_compute[225855]:     <memballoon model="virtio">
Jan 20 14:51:03 compute-1 nova_compute[225855]:       <stats period="10"/>
Jan 20 14:51:03 compute-1 nova_compute[225855]:     </memballoon>
Jan 20 14:51:03 compute-1 nova_compute[225855]:   </devices>
Jan 20 14:51:03 compute-1 nova_compute[225855]: </domain>
Jan 20 14:51:03 compute-1 nova_compute[225855]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 20 14:51:03 compute-1 nova_compute[225855]: 2026-01-20 14:51:03.208 225859 DEBUG nova.virt.libvirt.vif [None req-2880dd92-9035-4a66-9e1e-40a92f2e0283 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T14:50:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1906505493',display_name='tempest-ServerDiskConfigTestJSON-server-1906505493',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1906505493',id=109,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-20T14:50:30Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='acb30fbc0e3749e390d7f867060b5a2a',ramdisk_id='',reservation_id='r-5s3il9ii',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',i
mage_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServerDiskConfigTestJSON-1806346246',owner_user_name='tempest-ServerDiskConfigTestJSON-1806346246-project-member'},tags=<?>,task_state='resize_finish',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T14:50:52Z,user_data=None,user_id='a1bd93d04cc4468abe1d5c61f5144191',uuid=d5167284-086d-4b37-98b0-3853baabf418,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "86cabae0-8599-4330-b71c-91eb2e6b76d8", "address": "fa:16:3e:30:05:34", "network": {"id": "3379e2b3-ffb2-4391-969b-c9dc51bfbe25", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1112843240-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerDiskConfigTestJSON-1112843240-network", "vif_mac": "fa:16:3e:30:05:34"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "acb30fbc0e3749e390d7f867060b5a2a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap86cabae0-85", "ovs_interfaceid": "86cabae0-8599-4330-b71c-91eb2e6b76d8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 20 14:51:03 compute-1 nova_compute[225855]: 2026-01-20 14:51:03.209 225859 DEBUG nova.network.os_vif_util [None req-2880dd92-9035-4a66-9e1e-40a92f2e0283 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Converting VIF {"id": "86cabae0-8599-4330-b71c-91eb2e6b76d8", "address": "fa:16:3e:30:05:34", "network": {"id": "3379e2b3-ffb2-4391-969b-c9dc51bfbe25", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1112843240-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerDiskConfigTestJSON-1112843240-network", "vif_mac": "fa:16:3e:30:05:34"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "acb30fbc0e3749e390d7f867060b5a2a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap86cabae0-85", "ovs_interfaceid": "86cabae0-8599-4330-b71c-91eb2e6b76d8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:51:03 compute-1 nova_compute[225855]: 2026-01-20 14:51:03.209 225859 DEBUG nova.network.os_vif_util [None req-2880dd92-9035-4a66-9e1e-40a92f2e0283 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:30:05:34,bridge_name='br-int',has_traffic_filtering=True,id=86cabae0-8599-4330-b71c-91eb2e6b76d8,network=Network(3379e2b3-ffb2-4391-969b-c9dc51bfbe25),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap86cabae0-85') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:51:03 compute-1 nova_compute[225855]: 2026-01-20 14:51:03.210 225859 DEBUG os_vif [None req-2880dd92-9035-4a66-9e1e-40a92f2e0283 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:30:05:34,bridge_name='br-int',has_traffic_filtering=True,id=86cabae0-8599-4330-b71c-91eb2e6b76d8,network=Network(3379e2b3-ffb2-4391-969b-c9dc51bfbe25),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap86cabae0-85') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 20 14:51:03 compute-1 nova_compute[225855]: 2026-01-20 14:51:03.210 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:51:03 compute-1 nova_compute[225855]: 2026-01-20 14:51:03.211 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:51:03 compute-1 nova_compute[225855]: 2026-01-20 14:51:03.211 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:51:03 compute-1 nova_compute[225855]: 2026-01-20 14:51:03.214 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:51:03 compute-1 nova_compute[225855]: 2026-01-20 14:51:03.214 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap86cabae0-85, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:51:03 compute-1 nova_compute[225855]: 2026-01-20 14:51:03.214 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap86cabae0-85, col_values=(('external_ids', {'iface-id': '86cabae0-8599-4330-b71c-91eb2e6b76d8', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:30:05:34', 'vm-uuid': 'd5167284-086d-4b37-98b0-3853baabf418'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:51:03 compute-1 nova_compute[225855]: 2026-01-20 14:51:03.216 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:51:03 compute-1 NetworkManager[49104]: <info>  [1768920663.2170] manager: (tap86cabae0-85): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/184)
Jan 20 14:51:03 compute-1 nova_compute[225855]: 2026-01-20 14:51:03.218 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 14:51:03 compute-1 nova_compute[225855]: 2026-01-20 14:51:03.221 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:51:03 compute-1 nova_compute[225855]: 2026-01-20 14:51:03.222 225859 INFO os_vif [None req-2880dd92-9035-4a66-9e1e-40a92f2e0283 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:30:05:34,bridge_name='br-int',has_traffic_filtering=True,id=86cabae0-8599-4330-b71c-91eb2e6b76d8,network=Network(3379e2b3-ffb2-4391-969b-c9dc51bfbe25),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap86cabae0-85')
Jan 20 14:51:03 compute-1 nova_compute[225855]: 2026-01-20 14:51:03.283 225859 DEBUG nova.virt.libvirt.driver [None req-2880dd92-9035-4a66-9e1e-40a92f2e0283 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:51:03 compute-1 nova_compute[225855]: 2026-01-20 14:51:03.285 225859 DEBUG nova.virt.libvirt.driver [None req-2880dd92-9035-4a66-9e1e-40a92f2e0283 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:51:03 compute-1 nova_compute[225855]: 2026-01-20 14:51:03.286 225859 DEBUG nova.virt.libvirt.driver [None req-2880dd92-9035-4a66-9e1e-40a92f2e0283 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] No VIF found with MAC fa:16:3e:30:05:34, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 20 14:51:03 compute-1 nova_compute[225855]: 2026-01-20 14:51:03.288 225859 INFO nova.virt.libvirt.driver [None req-2880dd92-9035-4a66-9e1e-40a92f2e0283 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: d5167284-086d-4b37-98b0-3853baabf418] Using config drive
Jan 20 14:51:03 compute-1 kernel: tap86cabae0-85: entered promiscuous mode
Jan 20 14:51:03 compute-1 NetworkManager[49104]: <info>  [1768920663.3676] manager: (tap86cabae0-85): new Tun device (/org/freedesktop/NetworkManager/Devices/185)
Jan 20 14:51:03 compute-1 ovn_controller[130490]: 2026-01-20T14:51:03Z|00429|binding|INFO|Claiming lport 86cabae0-8599-4330-b71c-91eb2e6b76d8 for this chassis.
Jan 20 14:51:03 compute-1 ovn_controller[130490]: 2026-01-20T14:51:03Z|00430|binding|INFO|86cabae0-8599-4330-b71c-91eb2e6b76d8: Claiming fa:16:3e:30:05:34 10.100.0.3
Jan 20 14:51:03 compute-1 nova_compute[225855]: 2026-01-20 14:51:03.369 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:51:03 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:51:03.374 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:30:05:34 10.100.0.3'], port_security=['fa:16:3e:30:05:34 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'd5167284-086d-4b37-98b0-3853baabf418', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3379e2b3-ffb2-4391-969b-c9dc51bfbe25', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'acb30fbc0e3749e390d7f867060b5a2a', 'neutron:revision_number': '5', 'neutron:security_group_ids': '19fab802-7db8-4c89-8f8e-8dcfc14d4627', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a0e287ba-f88b-46f5-bb7f-3cc2a74be88e, chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], logical_port=86cabae0-8599-4330-b71c-91eb2e6b76d8) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:51:03 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:51:03.375 140354 INFO neutron.agent.ovn.metadata.agent [-] Port 86cabae0-8599-4330-b71c-91eb2e6b76d8 in datapath 3379e2b3-ffb2-4391-969b-c9dc51bfbe25 bound to our chassis
Jan 20 14:51:03 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:51:03.376 140354 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3379e2b3-ffb2-4391-969b-c9dc51bfbe25
Jan 20 14:51:03 compute-1 ovn_controller[130490]: 2026-01-20T14:51:03Z|00431|binding|INFO|Setting lport 86cabae0-8599-4330-b71c-91eb2e6b76d8 ovn-installed in OVS
Jan 20 14:51:03 compute-1 ovn_controller[130490]: 2026-01-20T14:51:03Z|00432|binding|INFO|Setting lport 86cabae0-8599-4330-b71c-91eb2e6b76d8 up in Southbound
Jan 20 14:51:03 compute-1 nova_compute[225855]: 2026-01-20 14:51:03.386 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:51:03 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:51:03.388 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[fc9985c9-e96b-445a-92a4-f66563a39db9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:51:03 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:51:03.389 140354 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap3379e2b3-f1 in ovnmeta-3379e2b3-ffb2-4391-969b-c9dc51bfbe25 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 20 14:51:03 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:51:03.390 229707 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap3379e2b3-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 20 14:51:03 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:51:03.390 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[51c5fcea-84ca-40bc-b3f9-6f7f0d45b486]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:51:03 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:51:03.391 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[3efd6fee-5af3-4746-800b-c36064697b99]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:51:03 compute-1 systemd-udevd[268541]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 14:51:03 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:51:03.402 140466 DEBUG oslo.privsep.daemon [-] privsep: reply[3e15361f-7552-499f-bbe1-a5344302f285]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:51:03 compute-1 systemd-machined[194361]: New machine qemu-50-instance-0000006d.
Jan 20 14:51:03 compute-1 NetworkManager[49104]: <info>  [1768920663.4099] device (tap86cabae0-85): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 20 14:51:03 compute-1 NetworkManager[49104]: <info>  [1768920663.4108] device (tap86cabae0-85): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 20 14:51:03 compute-1 systemd[1]: Started Virtual Machine qemu-50-instance-0000006d.
Jan 20 14:51:03 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:51:03.424 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[9fd8b98b-53da-4aca-9038-16d9910084d4]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:51:03 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:51:03.457 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[3cbc86fd-f43a-4c7e-b393-0759459510c5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:51:03 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:51:03.461 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[e5f4c982-4143-4a2f-8128-4bc11c29b8a4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:51:03 compute-1 NetworkManager[49104]: <info>  [1768920663.4632] manager: (tap3379e2b3-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/186)
Jan 20 14:51:03 compute-1 systemd-udevd[268545]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 14:51:03 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:51:03.491 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[d89c056b-1a51-448a-b0ae-47cab0376593]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:51:03 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:51:03.494 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[c0273420-b0e1-4859-867b-a1a25f431dc9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:51:03 compute-1 NetworkManager[49104]: <info>  [1768920663.5153] device (tap3379e2b3-f0): carrier: link connected
Jan 20 14:51:03 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:51:03.521 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[96df139a-a8bf-412d-8b87-94b33b37e1c8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:51:03 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:51:03.535 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[255e83a4-bd58-4998-953a-123518e5e720]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3379e2b3-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f1:86:fe'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 120], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 570244, 'reachable_time': 28706, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 268573, 'error': None, 'target': 'ovnmeta-3379e2b3-ffb2-4391-969b-c9dc51bfbe25', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:51:03 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:51:03.548 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[461c1fb8-213d-47a4-9e4b-f03dc0dfcdf4]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fef1:86fe'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 570244, 'tstamp': 570244}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 268574, 'error': None, 'target': 'ovnmeta-3379e2b3-ffb2-4391-969b-c9dc51bfbe25', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:51:03 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:51:03.562 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[67e65a24-4540-47fb-90ab-11035345ea65]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3379e2b3-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f1:86:fe'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 120], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 570244, 'reachable_time': 28706, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 268575, 'error': None, 'target': 'ovnmeta-3379e2b3-ffb2-4391-969b-c9dc51bfbe25', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:51:03 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:51:03.590 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[13e5976e-7b9e-4971-bfd6-019277bf0eba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:51:03 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:51:03.652 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[dfc85f52-87e1-4981-8fad-12c206acf211]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:51:03 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:51:03.654 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3379e2b3-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:51:03 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:51:03.654 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:51:03 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:51:03.654 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3379e2b3-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:51:03 compute-1 kernel: tap3379e2b3-f0: entered promiscuous mode
Jan 20 14:51:03 compute-1 nova_compute[225855]: 2026-01-20 14:51:03.656 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:51:03 compute-1 NetworkManager[49104]: <info>  [1768920663.6580] manager: (tap3379e2b3-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/187)
Jan 20 14:51:03 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:51:03.659 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3379e2b3-f0, col_values=(('external_ids', {'iface-id': 'b32ddf23-a8dd-4e6d-a410-ccb24b214d35'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:51:03 compute-1 nova_compute[225855]: 2026-01-20 14:51:03.660 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:51:03 compute-1 ovn_controller[130490]: 2026-01-20T14:51:03Z|00433|binding|INFO|Releasing lport b32ddf23-a8dd-4e6d-a410-ccb24b214d35 from this chassis (sb_readonly=0)
Jan 20 14:51:03 compute-1 nova_compute[225855]: 2026-01-20 14:51:03.675 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:51:03 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:51:03.676 140354 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/3379e2b3-ffb2-4391-969b-c9dc51bfbe25.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/3379e2b3-ffb2-4391-969b-c9dc51bfbe25.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 20 14:51:03 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:51:03.677 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[e125e7cb-ee19-46cc-bc7d-b36dc9573c70]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:51:03 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:51:03.678 140354 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 20 14:51:03 compute-1 ovn_metadata_agent[140349]: global
Jan 20 14:51:03 compute-1 ovn_metadata_agent[140349]:     log         /dev/log local0 debug
Jan 20 14:51:03 compute-1 ovn_metadata_agent[140349]:     log-tag     haproxy-metadata-proxy-3379e2b3-ffb2-4391-969b-c9dc51bfbe25
Jan 20 14:51:03 compute-1 ovn_metadata_agent[140349]:     user        root
Jan 20 14:51:03 compute-1 ovn_metadata_agent[140349]:     group       root
Jan 20 14:51:03 compute-1 ovn_metadata_agent[140349]:     maxconn     1024
Jan 20 14:51:03 compute-1 ovn_metadata_agent[140349]:     pidfile     /var/lib/neutron/external/pids/3379e2b3-ffb2-4391-969b-c9dc51bfbe25.pid.haproxy
Jan 20 14:51:03 compute-1 ovn_metadata_agent[140349]:     daemon
Jan 20 14:51:03 compute-1 ovn_metadata_agent[140349]: 
Jan 20 14:51:03 compute-1 ovn_metadata_agent[140349]: defaults
Jan 20 14:51:03 compute-1 ovn_metadata_agent[140349]:     log global
Jan 20 14:51:03 compute-1 ovn_metadata_agent[140349]:     mode http
Jan 20 14:51:03 compute-1 ovn_metadata_agent[140349]:     option httplog
Jan 20 14:51:03 compute-1 ovn_metadata_agent[140349]:     option dontlognull
Jan 20 14:51:03 compute-1 ovn_metadata_agent[140349]:     option http-server-close
Jan 20 14:51:03 compute-1 ovn_metadata_agent[140349]:     option forwardfor
Jan 20 14:51:03 compute-1 ovn_metadata_agent[140349]:     retries                 3
Jan 20 14:51:03 compute-1 ovn_metadata_agent[140349]:     timeout http-request    30s
Jan 20 14:51:03 compute-1 ovn_metadata_agent[140349]:     timeout connect         30s
Jan 20 14:51:03 compute-1 ovn_metadata_agent[140349]:     timeout client          32s
Jan 20 14:51:03 compute-1 ovn_metadata_agent[140349]:     timeout server          32s
Jan 20 14:51:03 compute-1 ovn_metadata_agent[140349]:     timeout http-keep-alive 30s
Jan 20 14:51:03 compute-1 ovn_metadata_agent[140349]: 
Jan 20 14:51:03 compute-1 ovn_metadata_agent[140349]: 
Jan 20 14:51:03 compute-1 ovn_metadata_agent[140349]: listen listener
Jan 20 14:51:03 compute-1 ovn_metadata_agent[140349]:     bind 169.254.169.254:80
Jan 20 14:51:03 compute-1 ovn_metadata_agent[140349]:     server metadata /var/lib/neutron/metadata_proxy
Jan 20 14:51:03 compute-1 ovn_metadata_agent[140349]:     http-request add-header X-OVN-Network-ID 3379e2b3-ffb2-4391-969b-c9dc51bfbe25
Jan 20 14:51:03 compute-1 ovn_metadata_agent[140349]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 20 14:51:03 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:51:03.679 140354 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-3379e2b3-ffb2-4391-969b-c9dc51bfbe25', 'env', 'PROCESS_TAG=haproxy-3379e2b3-ffb2-4391-969b-c9dc51bfbe25', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/3379e2b3-ffb2-4391-969b-c9dc51bfbe25.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 20 14:51:03 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:51:03 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:51:03 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:51:03.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:51:03 compute-1 nova_compute[225855]: 2026-01-20 14:51:03.823 225859 DEBUG nova.virt.libvirt.host [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Removed pending event for d5167284-086d-4b37-98b0-3853baabf418 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Jan 20 14:51:03 compute-1 nova_compute[225855]: 2026-01-20 14:51:03.823 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768920663.8228931, d5167284-086d-4b37-98b0-3853baabf418 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:51:03 compute-1 nova_compute[225855]: 2026-01-20 14:51:03.823 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: d5167284-086d-4b37-98b0-3853baabf418] VM Resumed (Lifecycle Event)
Jan 20 14:51:03 compute-1 nova_compute[225855]: 2026-01-20 14:51:03.826 225859 DEBUG nova.compute.manager [None req-2880dd92-9035-4a66-9e1e-40a92f2e0283 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: d5167284-086d-4b37-98b0-3853baabf418] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 20 14:51:03 compute-1 nova_compute[225855]: 2026-01-20 14:51:03.829 225859 INFO nova.virt.libvirt.driver [-] [instance: d5167284-086d-4b37-98b0-3853baabf418] Instance running successfully.
Jan 20 14:51:03 compute-1 virtqemud[225396]: argument unsupported: QEMU guest agent is not configured
Jan 20 14:51:03 compute-1 nova_compute[225855]: 2026-01-20 14:51:03.831 225859 DEBUG nova.virt.libvirt.guest [None req-2880dd92-9035-4a66-9e1e-40a92f2e0283 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: d5167284-086d-4b37-98b0-3853baabf418] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200
Jan 20 14:51:03 compute-1 nova_compute[225855]: 2026-01-20 14:51:03.831 225859 DEBUG nova.virt.libvirt.driver [None req-2880dd92-9035-4a66-9e1e-40a92f2e0283 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: d5167284-086d-4b37-98b0-3853baabf418] finish_migration finished successfully. finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11793
Jan 20 14:51:03 compute-1 nova_compute[225855]: 2026-01-20 14:51:03.851 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: d5167284-086d-4b37-98b0-3853baabf418] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:51:03 compute-1 nova_compute[225855]: 2026-01-20 14:51:03.853 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: d5167284-086d-4b37-98b0-3853baabf418] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:51:03 compute-1 nova_compute[225855]: 2026-01-20 14:51:03.922 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: d5167284-086d-4b37-98b0-3853baabf418] During sync_power_state the instance has a pending task (resize_finish). Skip.
Jan 20 14:51:03 compute-1 nova_compute[225855]: 2026-01-20 14:51:03.923 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768920663.8260856, d5167284-086d-4b37-98b0-3853baabf418 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:51:03 compute-1 nova_compute[225855]: 2026-01-20 14:51:03.923 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: d5167284-086d-4b37-98b0-3853baabf418] VM Started (Lifecycle Event)
Jan 20 14:51:03 compute-1 nova_compute[225855]: 2026-01-20 14:51:03.951 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: d5167284-086d-4b37-98b0-3853baabf418] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:51:03 compute-1 nova_compute[225855]: 2026-01-20 14:51:03.954 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: d5167284-086d-4b37-98b0-3853baabf418] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:51:04 compute-1 podman[268650]: 2026-01-20 14:51:04.025892657 +0000 UTC m=+0.024780560 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 20 14:51:04 compute-1 podman[268650]: 2026-01-20 14:51:04.14428829 +0000 UTC m=+0.143176173 container create a5869eca616e5cacd5c07a52572d7cd125c31406909fb3cffd818b208b05825e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3379e2b3-ffb2-4391-969b-c9dc51bfbe25, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 20 14:51:04 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/2263803824' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:51:04 compute-1 ceph-mon[81775]: pgmap v1904: 321 pgs: 321 active+clean; 405 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 421 KiB/s rd, 1.7 MiB/s wr, 108 op/s
Jan 20 14:51:04 compute-1 systemd[1]: Started libpod-conmon-a5869eca616e5cacd5c07a52572d7cd125c31406909fb3cffd818b208b05825e.scope.
Jan 20 14:51:04 compute-1 systemd[1]: Started libcrun container.
Jan 20 14:51:04 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b33755f1bd34c133af6276ef5124932271c289130919c6a5f4cceaadd0c9ef0/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 20 14:51:04 compute-1 podman[268650]: 2026-01-20 14:51:04.43050272 +0000 UTC m=+0.429390593 container init a5869eca616e5cacd5c07a52572d7cd125c31406909fb3cffd818b208b05825e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3379e2b3-ffb2-4391-969b-c9dc51bfbe25, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 20 14:51:04 compute-1 podman[268650]: 2026-01-20 14:51:04.436474999 +0000 UTC m=+0.435362862 container start a5869eca616e5cacd5c07a52572d7cd125c31406909fb3cffd818b208b05825e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3379e2b3-ffb2-4391-969b-c9dc51bfbe25, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 20 14:51:04 compute-1 neutron-haproxy-ovnmeta-3379e2b3-ffb2-4391-969b-c9dc51bfbe25[268665]: [NOTICE]   (268670) : New worker (268672) forked
Jan 20 14:51:04 compute-1 neutron-haproxy-ovnmeta-3379e2b3-ffb2-4391-969b-c9dc51bfbe25[268665]: [NOTICE]   (268670) : Loading success.
Jan 20 14:51:04 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:51:04 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:51:04 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:51:04.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:51:05 compute-1 nova_compute[225855]: 2026-01-20 14:51:05.610 225859 DEBUG nova.compute.manager [req-c32b7f8d-9983-4b88-a00a-911e052a9a02 req-697eba0c-977a-4371-9d04-c0d632ca86ac 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d5167284-086d-4b37-98b0-3853baabf418] Received event network-vif-plugged-86cabae0-8599-4330-b71c-91eb2e6b76d8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:51:05 compute-1 nova_compute[225855]: 2026-01-20 14:51:05.610 225859 DEBUG oslo_concurrency.lockutils [req-c32b7f8d-9983-4b88-a00a-911e052a9a02 req-697eba0c-977a-4371-9d04-c0d632ca86ac 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "d5167284-086d-4b37-98b0-3853baabf418-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:51:05 compute-1 nova_compute[225855]: 2026-01-20 14:51:05.610 225859 DEBUG oslo_concurrency.lockutils [req-c32b7f8d-9983-4b88-a00a-911e052a9a02 req-697eba0c-977a-4371-9d04-c0d632ca86ac 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "d5167284-086d-4b37-98b0-3853baabf418-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:51:05 compute-1 nova_compute[225855]: 2026-01-20 14:51:05.610 225859 DEBUG oslo_concurrency.lockutils [req-c32b7f8d-9983-4b88-a00a-911e052a9a02 req-697eba0c-977a-4371-9d04-c0d632ca86ac 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "d5167284-086d-4b37-98b0-3853baabf418-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:51:05 compute-1 nova_compute[225855]: 2026-01-20 14:51:05.611 225859 DEBUG nova.compute.manager [req-c32b7f8d-9983-4b88-a00a-911e052a9a02 req-697eba0c-977a-4371-9d04-c0d632ca86ac 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d5167284-086d-4b37-98b0-3853baabf418] No waiting events found dispatching network-vif-plugged-86cabae0-8599-4330-b71c-91eb2e6b76d8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:51:05 compute-1 nova_compute[225855]: 2026-01-20 14:51:05.611 225859 WARNING nova.compute.manager [req-c32b7f8d-9983-4b88-a00a-911e052a9a02 req-697eba0c-977a-4371-9d04-c0d632ca86ac 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d5167284-086d-4b37-98b0-3853baabf418] Received unexpected event network-vif-plugged-86cabae0-8599-4330-b71c-91eb2e6b76d8 for instance with vm_state resized and task_state None.
Jan 20 14:51:05 compute-1 nova_compute[225855]: 2026-01-20 14:51:05.611 225859 DEBUG nova.compute.manager [req-c32b7f8d-9983-4b88-a00a-911e052a9a02 req-697eba0c-977a-4371-9d04-c0d632ca86ac 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d5167284-086d-4b37-98b0-3853baabf418] Received event network-vif-plugged-86cabae0-8599-4330-b71c-91eb2e6b76d8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:51:05 compute-1 nova_compute[225855]: 2026-01-20 14:51:05.611 225859 DEBUG oslo_concurrency.lockutils [req-c32b7f8d-9983-4b88-a00a-911e052a9a02 req-697eba0c-977a-4371-9d04-c0d632ca86ac 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "d5167284-086d-4b37-98b0-3853baabf418-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:51:05 compute-1 nova_compute[225855]: 2026-01-20 14:51:05.611 225859 DEBUG oslo_concurrency.lockutils [req-c32b7f8d-9983-4b88-a00a-911e052a9a02 req-697eba0c-977a-4371-9d04-c0d632ca86ac 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "d5167284-086d-4b37-98b0-3853baabf418-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:51:05 compute-1 nova_compute[225855]: 2026-01-20 14:51:05.612 225859 DEBUG oslo_concurrency.lockutils [req-c32b7f8d-9983-4b88-a00a-911e052a9a02 req-697eba0c-977a-4371-9d04-c0d632ca86ac 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "d5167284-086d-4b37-98b0-3853baabf418-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:51:05 compute-1 nova_compute[225855]: 2026-01-20 14:51:05.612 225859 DEBUG nova.compute.manager [req-c32b7f8d-9983-4b88-a00a-911e052a9a02 req-697eba0c-977a-4371-9d04-c0d632ca86ac 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d5167284-086d-4b37-98b0-3853baabf418] No waiting events found dispatching network-vif-plugged-86cabae0-8599-4330-b71c-91eb2e6b76d8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:51:05 compute-1 nova_compute[225855]: 2026-01-20 14:51:05.612 225859 WARNING nova.compute.manager [req-c32b7f8d-9983-4b88-a00a-911e052a9a02 req-697eba0c-977a-4371-9d04-c0d632ca86ac 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d5167284-086d-4b37-98b0-3853baabf418] Received unexpected event network-vif-plugged-86cabae0-8599-4330-b71c-91eb2e6b76d8 for instance with vm_state resized and task_state None.
Jan 20 14:51:05 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:51:05 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:51:05 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:51:05.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:51:05 compute-1 nova_compute[225855]: 2026-01-20 14:51:05.859 225859 DEBUG oslo_concurrency.lockutils [None req-a37ea1ed-39f6-433d-9760-570995ae8099 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Acquiring lock "d5167284-086d-4b37-98b0-3853baabf418" by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:51:05 compute-1 nova_compute[225855]: 2026-01-20 14:51:05.860 225859 DEBUG oslo_concurrency.lockutils [None req-a37ea1ed-39f6-433d-9760-570995ae8099 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Lock "d5167284-086d-4b37-98b0-3853baabf418" acquired by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:51:05 compute-1 nova_compute[225855]: 2026-01-20 14:51:05.860 225859 DEBUG nova.compute.manager [None req-a37ea1ed-39f6-433d-9760-570995ae8099 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: d5167284-086d-4b37-98b0-3853baabf418] Going to confirm migration 16 do_confirm_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:4679
Jan 20 14:51:05 compute-1 nova_compute[225855]: 2026-01-20 14:51:05.865 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:51:06 compute-1 nova_compute[225855]: 2026-01-20 14:51:06.048 225859 DEBUG oslo_concurrency.lockutils [None req-a37ea1ed-39f6-433d-9760-570995ae8099 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Acquiring lock "refresh_cache-d5167284-086d-4b37-98b0-3853baabf418" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:51:06 compute-1 nova_compute[225855]: 2026-01-20 14:51:06.048 225859 DEBUG oslo_concurrency.lockutils [None req-a37ea1ed-39f6-433d-9760-570995ae8099 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Acquired lock "refresh_cache-d5167284-086d-4b37-98b0-3853baabf418" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:51:06 compute-1 nova_compute[225855]: 2026-01-20 14:51:06.048 225859 DEBUG nova.network.neutron [None req-a37ea1ed-39f6-433d-9760-570995ae8099 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: d5167284-086d-4b37-98b0-3853baabf418] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 14:51:06 compute-1 nova_compute[225855]: 2026-01-20 14:51:06.048 225859 DEBUG nova.objects.instance [None req-a37ea1ed-39f6-433d-9760-570995ae8099 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Lazy-loading 'info_cache' on Instance uuid d5167284-086d-4b37-98b0-3853baabf418 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:51:06 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:51:06 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:51:06 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:51:06.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:51:06 compute-1 ceph-mon[81775]: pgmap v1905: 321 pgs: 321 active+clean; 405 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 310 KiB/s wr, 154 op/s
Jan 20 14:51:07 compute-1 podman[268682]: 2026-01-20 14:51:07.071697674 +0000 UTC m=+0.122156590 container health_status 72be95c12e041eb2cd1a16ad70dfa1391d9f03a5f3180a614ead9459dcfcd46f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, org.label-schema.vendor=CentOS, 
org.label-schema.license=GPLv2, container_name=ovn_controller)
Jan 20 14:51:07 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e255 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:51:07 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:51:07 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:51:07 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:51:07.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:51:07 compute-1 nova_compute[225855]: 2026-01-20 14:51:07.845 225859 DEBUG nova.network.neutron [None req-a37ea1ed-39f6-433d-9760-570995ae8099 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: d5167284-086d-4b37-98b0-3853baabf418] Updating instance_info_cache with network_info: [{"id": "86cabae0-8599-4330-b71c-91eb2e6b76d8", "address": "fa:16:3e:30:05:34", "network": {"id": "3379e2b3-ffb2-4391-969b-c9dc51bfbe25", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1112843240-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "acb30fbc0e3749e390d7f867060b5a2a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap86cabae0-85", "ovs_interfaceid": "86cabae0-8599-4330-b71c-91eb2e6b76d8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:51:07 compute-1 nova_compute[225855]: 2026-01-20 14:51:07.871 225859 DEBUG oslo_concurrency.lockutils [None req-a37ea1ed-39f6-433d-9760-570995ae8099 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Releasing lock "refresh_cache-d5167284-086d-4b37-98b0-3853baabf418" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:51:07 compute-1 nova_compute[225855]: 2026-01-20 14:51:07.871 225859 DEBUG nova.objects.instance [None req-a37ea1ed-39f6-433d-9760-570995ae8099 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Lazy-loading 'migration_context' on Instance uuid d5167284-086d-4b37-98b0-3853baabf418 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:51:07 compute-1 nova_compute[225855]: 2026-01-20 14:51:07.945 225859 DEBUG nova.storage.rbd_utils [None req-a37ea1ed-39f6-433d-9760-570995ae8099 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] removing snapshot(nova-resize) on rbd image(d5167284-086d-4b37-98b0-3853baabf418_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Jan 20 14:51:08 compute-1 nova_compute[225855]: 2026-01-20 14:51:08.216 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:51:08 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:51:08 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:51:08 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:51:08.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:51:08 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e256 e256: 3 total, 3 up, 3 in
Jan 20 14:51:08 compute-1 ceph-mon[81775]: pgmap v1906: 321 pgs: 321 active+clean; 405 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 56 KiB/s wr, 221 op/s
Jan 20 14:51:08 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/2039283088' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:51:09 compute-1 nova_compute[225855]: 2026-01-20 14:51:09.029 225859 DEBUG oslo_concurrency.lockutils [None req-a37ea1ed-39f6-433d-9760-570995ae8099 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:51:09 compute-1 nova_compute[225855]: 2026-01-20 14:51:09.030 225859 DEBUG oslo_concurrency.lockutils [None req-a37ea1ed-39f6-433d-9760-570995ae8099 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:51:09 compute-1 nova_compute[225855]: 2026-01-20 14:51:09.165 225859 DEBUG oslo_concurrency.processutils [None req-a37ea1ed-39f6-433d-9760-570995ae8099 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:51:09 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:51:09 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2382876353' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:51:09 compute-1 nova_compute[225855]: 2026-01-20 14:51:09.641 225859 DEBUG oslo_concurrency.processutils [None req-a37ea1ed-39f6-433d-9760-570995ae8099 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:51:09 compute-1 nova_compute[225855]: 2026-01-20 14:51:09.648 225859 DEBUG nova.compute.provider_tree [None req-a37ea1ed-39f6-433d-9760-570995ae8099 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:51:09 compute-1 nova_compute[225855]: 2026-01-20 14:51:09.665 225859 DEBUG nova.scheduler.client.report [None req-a37ea1ed-39f6-433d-9760-570995ae8099 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:51:09 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:51:09 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:51:09 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:51:09.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:51:09 compute-1 nova_compute[225855]: 2026-01-20 14:51:09.718 225859 DEBUG oslo_concurrency.lockutils [None req-a37ea1ed-39f6-433d-9760-570995ae8099 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" :: held 0.688s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:51:09 compute-1 nova_compute[225855]: 2026-01-20 14:51:09.837 225859 INFO nova.scheduler.client.report [None req-a37ea1ed-39f6-433d-9760-570995ae8099 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Deleted allocation for migration 135cd3df-9888-44b9-a278-94d967ab2b1d
Jan 20 14:51:09 compute-1 nova_compute[225855]: 2026-01-20 14:51:09.906 225859 DEBUG oslo_concurrency.lockutils [None req-a37ea1ed-39f6-433d-9760-570995ae8099 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Lock "d5167284-086d-4b37-98b0-3853baabf418" "released" by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" :: held 4.046s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:51:10 compute-1 ceph-mon[81775]: osdmap e256: 3 total, 3 up, 3 in
Jan 20 14:51:10 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/2382876353' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:51:10 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/2607352360' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:51:10 compute-1 sudo[268769]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:51:10 compute-1 sudo[268769]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:51:10 compute-1 sudo[268769]: pam_unix(sudo:session): session closed for user root
Jan 20 14:51:10 compute-1 sudo[268794]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:51:10 compute-1 sudo[268794]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:51:10 compute-1 sudo[268794]: pam_unix(sudo:session): session closed for user root
Jan 20 14:51:10 compute-1 nova_compute[225855]: 2026-01-20 14:51:10.867 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:51:10 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:51:10 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:51:10 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:51:10.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:51:11 compute-1 ceph-mon[81775]: pgmap v1908: 321 pgs: 321 active+clean; 405 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 5.9 MiB/s rd, 64 KiB/s wr, 298 op/s
Jan 20 14:51:11 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/2098571758' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:51:11 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:51:11 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:51:11 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:51:11.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:51:12 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:51:12 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:51:12 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:51:12 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:51:12.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:51:13 compute-1 ceph-mon[81775]: pgmap v1909: 321 pgs: 321 active+clean; 405 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 5.3 MiB/s rd, 53 KiB/s wr, 283 op/s
Jan 20 14:51:13 compute-1 nova_compute[225855]: 2026-01-20 14:51:13.118 225859 DEBUG oslo_concurrency.lockutils [None req-ae481e4d-076c-4b49-a909-514aa77b1f1f a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Acquiring lock "d5167284-086d-4b37-98b0-3853baabf418" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:51:13 compute-1 nova_compute[225855]: 2026-01-20 14:51:13.119 225859 DEBUG oslo_concurrency.lockutils [None req-ae481e4d-076c-4b49-a909-514aa77b1f1f a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Lock "d5167284-086d-4b37-98b0-3853baabf418" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:51:13 compute-1 nova_compute[225855]: 2026-01-20 14:51:13.119 225859 DEBUG oslo_concurrency.lockutils [None req-ae481e4d-076c-4b49-a909-514aa77b1f1f a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Acquiring lock "d5167284-086d-4b37-98b0-3853baabf418-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:51:13 compute-1 nova_compute[225855]: 2026-01-20 14:51:13.119 225859 DEBUG oslo_concurrency.lockutils [None req-ae481e4d-076c-4b49-a909-514aa77b1f1f a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Lock "d5167284-086d-4b37-98b0-3853baabf418-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:51:13 compute-1 nova_compute[225855]: 2026-01-20 14:51:13.119 225859 DEBUG oslo_concurrency.lockutils [None req-ae481e4d-076c-4b49-a909-514aa77b1f1f a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Lock "d5167284-086d-4b37-98b0-3853baabf418-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:51:13 compute-1 nova_compute[225855]: 2026-01-20 14:51:13.120 225859 INFO nova.compute.manager [None req-ae481e4d-076c-4b49-a909-514aa77b1f1f a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: d5167284-086d-4b37-98b0-3853baabf418] Terminating instance
Jan 20 14:51:13 compute-1 nova_compute[225855]: 2026-01-20 14:51:13.121 225859 DEBUG nova.compute.manager [None req-ae481e4d-076c-4b49-a909-514aa77b1f1f a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: d5167284-086d-4b37-98b0-3853baabf418] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 20 14:51:13 compute-1 kernel: tap86cabae0-85 (unregistering): left promiscuous mode
Jan 20 14:51:13 compute-1 NetworkManager[49104]: <info>  [1768920673.1648] device (tap86cabae0-85): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 20 14:51:13 compute-1 ovn_controller[130490]: 2026-01-20T14:51:13Z|00434|binding|INFO|Releasing lport 86cabae0-8599-4330-b71c-91eb2e6b76d8 from this chassis (sb_readonly=0)
Jan 20 14:51:13 compute-1 ovn_controller[130490]: 2026-01-20T14:51:13Z|00435|binding|INFO|Setting lport 86cabae0-8599-4330-b71c-91eb2e6b76d8 down in Southbound
Jan 20 14:51:13 compute-1 ovn_controller[130490]: 2026-01-20T14:51:13Z|00436|binding|INFO|Removing iface tap86cabae0-85 ovn-installed in OVS
Jan 20 14:51:13 compute-1 nova_compute[225855]: 2026-01-20 14:51:13.176 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:51:13 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:51:13.182 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:30:05:34 10.100.0.3'], port_security=['fa:16:3e:30:05:34 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'd5167284-086d-4b37-98b0-3853baabf418', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3379e2b3-ffb2-4391-969b-c9dc51bfbe25', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'acb30fbc0e3749e390d7f867060b5a2a', 'neutron:revision_number': '6', 'neutron:security_group_ids': '19fab802-7db8-4c89-8f8e-8dcfc14d4627', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a0e287ba-f88b-46f5-bb7f-3cc2a74be88e, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], logical_port=86cabae0-8599-4330-b71c-91eb2e6b76d8) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:51:13 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:51:13.183 140354 INFO neutron.agent.ovn.metadata.agent [-] Port 86cabae0-8599-4330-b71c-91eb2e6b76d8 in datapath 3379e2b3-ffb2-4391-969b-c9dc51bfbe25 unbound from our chassis
Jan 20 14:51:13 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:51:13.184 140354 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3379e2b3-ffb2-4391-969b-c9dc51bfbe25, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 20 14:51:13 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:51:13.185 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[7efb5647-a214-4716-9897-2e7b8252e6c8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:51:13 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:51:13.186 140354 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-3379e2b3-ffb2-4391-969b-c9dc51bfbe25 namespace which is not needed anymore
Jan 20 14:51:13 compute-1 nova_compute[225855]: 2026-01-20 14:51:13.196 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:51:13 compute-1 nova_compute[225855]: 2026-01-20 14:51:13.217 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:51:13 compute-1 systemd[1]: machine-qemu\x2d50\x2dinstance\x2d0000006d.scope: Deactivated successfully.
Jan 20 14:51:13 compute-1 systemd[1]: machine-qemu\x2d50\x2dinstance\x2d0000006d.scope: Consumed 9.934s CPU time.
Jan 20 14:51:13 compute-1 systemd-machined[194361]: Machine qemu-50-instance-0000006d terminated.
Jan 20 14:51:13 compute-1 neutron-haproxy-ovnmeta-3379e2b3-ffb2-4391-969b-c9dc51bfbe25[268665]: [NOTICE]   (268670) : haproxy version is 2.8.14-c23fe91
Jan 20 14:51:13 compute-1 neutron-haproxy-ovnmeta-3379e2b3-ffb2-4391-969b-c9dc51bfbe25[268665]: [NOTICE]   (268670) : path to executable is /usr/sbin/haproxy
Jan 20 14:51:13 compute-1 neutron-haproxy-ovnmeta-3379e2b3-ffb2-4391-969b-c9dc51bfbe25[268665]: [WARNING]  (268670) : Exiting Master process...
Jan 20 14:51:13 compute-1 neutron-haproxy-ovnmeta-3379e2b3-ffb2-4391-969b-c9dc51bfbe25[268665]: [WARNING]  (268670) : Exiting Master process...
Jan 20 14:51:13 compute-1 neutron-haproxy-ovnmeta-3379e2b3-ffb2-4391-969b-c9dc51bfbe25[268665]: [ALERT]    (268670) : Current worker (268672) exited with code 143 (Terminated)
Jan 20 14:51:13 compute-1 neutron-haproxy-ovnmeta-3379e2b3-ffb2-4391-969b-c9dc51bfbe25[268665]: [WARNING]  (268670) : All workers exited. Exiting... (0)
Jan 20 14:51:13 compute-1 systemd[1]: libpod-a5869eca616e5cacd5c07a52572d7cd125c31406909fb3cffd818b208b05825e.scope: Deactivated successfully.
Jan 20 14:51:13 compute-1 podman[268843]: 2026-01-20 14:51:13.304541574 +0000 UTC m=+0.041512463 container died a5869eca616e5cacd5c07a52572d7cd125c31406909fb3cffd818b208b05825e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3379e2b3-ffb2-4391-969b-c9dc51bfbe25, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202)
Jan 20 14:51:13 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a5869eca616e5cacd5c07a52572d7cd125c31406909fb3cffd818b208b05825e-userdata-shm.mount: Deactivated successfully.
Jan 20 14:51:13 compute-1 systemd[1]: var-lib-containers-storage-overlay-9b33755f1bd34c133af6276ef5124932271c289130919c6a5f4cceaadd0c9ef0-merged.mount: Deactivated successfully.
Jan 20 14:51:13 compute-1 podman[268843]: 2026-01-20 14:51:13.338510033 +0000 UTC m=+0.075480922 container cleanup a5869eca616e5cacd5c07a52572d7cd125c31406909fb3cffd818b208b05825e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3379e2b3-ffb2-4391-969b-c9dc51bfbe25, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Jan 20 14:51:13 compute-1 nova_compute[225855]: 2026-01-20 14:51:13.341 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:51:13 compute-1 systemd[1]: libpod-conmon-a5869eca616e5cacd5c07a52572d7cd125c31406909fb3cffd818b208b05825e.scope: Deactivated successfully.
Jan 20 14:51:13 compute-1 nova_compute[225855]: 2026-01-20 14:51:13.346 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:51:13 compute-1 nova_compute[225855]: 2026-01-20 14:51:13.352 225859 INFO nova.virt.libvirt.driver [-] [instance: d5167284-086d-4b37-98b0-3853baabf418] Instance destroyed successfully.
Jan 20 14:51:13 compute-1 nova_compute[225855]: 2026-01-20 14:51:13.353 225859 DEBUG nova.objects.instance [None req-ae481e4d-076c-4b49-a909-514aa77b1f1f a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Lazy-loading 'resources' on Instance uuid d5167284-086d-4b37-98b0-3853baabf418 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:51:13 compute-1 nova_compute[225855]: 2026-01-20 14:51:13.366 225859 DEBUG nova.virt.libvirt.vif [None req-ae481e4d-076c-4b49-a909-514aa77b1f1f a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T14:50:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1906505493',display_name='tempest-ServerDiskConfigTestJSON-server-1906505493',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1906505493',id=109,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-20T14:51:03Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='acb30fbc0e3749e390d7f867060b5a2a',ramdisk_id='',reservation_id='r-5s3il9ii',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-1806346246',owner_user_name='tempest-ServerDiskConfigTestJSON-1806346246-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T14:51:09Z,user_data=None,user_id='a1bd93d04cc4468abe1d5c61f5144191',uuid=d5167284-086d-4b37-98b0-3853baabf418,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "86cabae0-8599-4330-b71c-91eb2e6b76d8", "address": "fa:16:3e:30:05:34", "network": {"id": "3379e2b3-ffb2-4391-969b-c9dc51bfbe25", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1112843240-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "acb30fbc0e3749e390d7f867060b5a2a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap86cabae0-85", "ovs_interfaceid": "86cabae0-8599-4330-b71c-91eb2e6b76d8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 20 14:51:13 compute-1 nova_compute[225855]: 2026-01-20 14:51:13.366 225859 DEBUG nova.network.os_vif_util [None req-ae481e4d-076c-4b49-a909-514aa77b1f1f a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Converting VIF {"id": "86cabae0-8599-4330-b71c-91eb2e6b76d8", "address": "fa:16:3e:30:05:34", "network": {"id": "3379e2b3-ffb2-4391-969b-c9dc51bfbe25", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1112843240-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "acb30fbc0e3749e390d7f867060b5a2a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap86cabae0-85", "ovs_interfaceid": "86cabae0-8599-4330-b71c-91eb2e6b76d8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:51:13 compute-1 nova_compute[225855]: 2026-01-20 14:51:13.367 225859 DEBUG nova.network.os_vif_util [None req-ae481e4d-076c-4b49-a909-514aa77b1f1f a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:30:05:34,bridge_name='br-int',has_traffic_filtering=True,id=86cabae0-8599-4330-b71c-91eb2e6b76d8,network=Network(3379e2b3-ffb2-4391-969b-c9dc51bfbe25),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap86cabae0-85') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:51:13 compute-1 nova_compute[225855]: 2026-01-20 14:51:13.367 225859 DEBUG os_vif [None req-ae481e4d-076c-4b49-a909-514aa77b1f1f a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:30:05:34,bridge_name='br-int',has_traffic_filtering=True,id=86cabae0-8599-4330-b71c-91eb2e6b76d8,network=Network(3379e2b3-ffb2-4391-969b-c9dc51bfbe25),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap86cabae0-85') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 20 14:51:13 compute-1 nova_compute[225855]: 2026-01-20 14:51:13.368 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:51:13 compute-1 nova_compute[225855]: 2026-01-20 14:51:13.369 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap86cabae0-85, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:51:13 compute-1 nova_compute[225855]: 2026-01-20 14:51:13.370 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:51:13 compute-1 nova_compute[225855]: 2026-01-20 14:51:13.371 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:51:13 compute-1 nova_compute[225855]: 2026-01-20 14:51:13.373 225859 INFO os_vif [None req-ae481e4d-076c-4b49-a909-514aa77b1f1f a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:30:05:34,bridge_name='br-int',has_traffic_filtering=True,id=86cabae0-8599-4330-b71c-91eb2e6b76d8,network=Network(3379e2b3-ffb2-4391-969b-c9dc51bfbe25),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap86cabae0-85')
Jan 20 14:51:13 compute-1 podman[268878]: 2026-01-20 14:51:13.401028938 +0000 UTC m=+0.041655727 container remove a5869eca616e5cacd5c07a52572d7cd125c31406909fb3cffd818b208b05825e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3379e2b3-ffb2-4391-969b-c9dc51bfbe25, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3)
Jan 20 14:51:13 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:51:13.406 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[01b53d45-c31f-4039-9015-5f76f3a586d4]: (4, ('Tue Jan 20 02:51:13 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-3379e2b3-ffb2-4391-969b-c9dc51bfbe25 (a5869eca616e5cacd5c07a52572d7cd125c31406909fb3cffd818b208b05825e)\na5869eca616e5cacd5c07a52572d7cd125c31406909fb3cffd818b208b05825e\nTue Jan 20 02:51:13 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-3379e2b3-ffb2-4391-969b-c9dc51bfbe25 (a5869eca616e5cacd5c07a52572d7cd125c31406909fb3cffd818b208b05825e)\na5869eca616e5cacd5c07a52572d7cd125c31406909fb3cffd818b208b05825e\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:51:13 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:51:13.407 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[df590090-4500-4b4f-8e53-f49e5d51c238]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:51:13 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:51:13.408 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3379e2b3-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:51:13 compute-1 nova_compute[225855]: 2026-01-20 14:51:13.410 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:51:13 compute-1 kernel: tap3379e2b3-f0: left promiscuous mode
Jan 20 14:51:13 compute-1 nova_compute[225855]: 2026-01-20 14:51:13.426 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:51:13 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:51:13.430 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[20d1292c-c6fd-401a-b041-d4e2f4c19ef7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:51:13 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:51:13.458 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[c1c80827-b216-4851-817c-f933b7c3a1ae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:51:13 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:51:13.460 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[c5b53c2c-daa6-412d-b638-ecbedd6500b9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:51:13 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:51:13.475 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[5858ab84-f210-4b56-afa8-6d2bd6d5fa93]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 570237, 'reachable_time': 35426, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 268915, 'error': None, 'target': 'ovnmeta-3379e2b3-ffb2-4391-969b-c9dc51bfbe25', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:51:13 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:51:13.478 140466 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-3379e2b3-ffb2-4391-969b-c9dc51bfbe25 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 20 14:51:13 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:51:13.478 140466 DEBUG oslo.privsep.daemon [-] privsep: reply[4fa06931-635c-4ec8-bc87-025b64eb1371]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:51:13 compute-1 systemd[1]: run-netns-ovnmeta\x2d3379e2b3\x2dffb2\x2d4391\x2d969b\x2dc9dc51bfbe25.mount: Deactivated successfully.
Jan 20 14:51:13 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 20 14:51:13 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4280913832' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 14:51:13 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 20 14:51:13 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4280913832' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 14:51:13 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:51:13 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:51:13 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:51:13.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:51:13 compute-1 nova_compute[225855]: 2026-01-20 14:51:13.938 225859 INFO nova.virt.libvirt.driver [None req-ae481e4d-076c-4b49-a909-514aa77b1f1f a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: d5167284-086d-4b37-98b0-3853baabf418] Deleting instance files /var/lib/nova/instances/d5167284-086d-4b37-98b0-3853baabf418_del
Jan 20 14:51:13 compute-1 nova_compute[225855]: 2026-01-20 14:51:13.939 225859 INFO nova.virt.libvirt.driver [None req-ae481e4d-076c-4b49-a909-514aa77b1f1f a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: d5167284-086d-4b37-98b0-3853baabf418] Deletion of /var/lib/nova/instances/d5167284-086d-4b37-98b0-3853baabf418_del complete
Jan 20 14:51:14 compute-1 nova_compute[225855]: 2026-01-20 14:51:14.003 225859 INFO nova.compute.manager [None req-ae481e4d-076c-4b49-a909-514aa77b1f1f a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: d5167284-086d-4b37-98b0-3853baabf418] Took 0.88 seconds to destroy the instance on the hypervisor.
Jan 20 14:51:14 compute-1 nova_compute[225855]: 2026-01-20 14:51:14.003 225859 DEBUG oslo.service.loopingcall [None req-ae481e4d-076c-4b49-a909-514aa77b1f1f a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 20 14:51:14 compute-1 nova_compute[225855]: 2026-01-20 14:51:14.004 225859 DEBUG nova.compute.manager [-] [instance: d5167284-086d-4b37-98b0-3853baabf418] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 20 14:51:14 compute-1 nova_compute[225855]: 2026-01-20 14:51:14.004 225859 DEBUG nova.network.neutron [-] [instance: d5167284-086d-4b37-98b0-3853baabf418] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 20 14:51:14 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/4280913832' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 14:51:14 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/4280913832' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 14:51:14 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:51:14 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:51:14 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:51:14.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:51:14 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:51:14.944 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=36, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '12:bb:42', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '06:92:24:f7:15:56'}, ipsec=False) old=SB_Global(nb_cfg=35) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:51:14 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:51:14.945 140354 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 20 14:51:14 compute-1 nova_compute[225855]: 2026-01-20 14:51:14.945 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:51:15 compute-1 ceph-mon[81775]: pgmap v1910: 321 pgs: 321 active+clean; 405 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 6.7 MiB/s rd, 17 KiB/s wr, 332 op/s
Jan 20 14:51:15 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/1484244040' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:51:15 compute-1 nova_compute[225855]: 2026-01-20 14:51:15.269 225859 DEBUG nova.network.neutron [-] [instance: d5167284-086d-4b37-98b0-3853baabf418] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:51:15 compute-1 nova_compute[225855]: 2026-01-20 14:51:15.333 225859 INFO nova.compute.manager [-] [instance: d5167284-086d-4b37-98b0-3853baabf418] Took 1.33 seconds to deallocate network for instance.
Jan 20 14:51:15 compute-1 nova_compute[225855]: 2026-01-20 14:51:15.561 225859 DEBUG nova.compute.manager [req-b834e40f-4cf2-4316-ab73-cd0de4c34931 req-49f7b545-3f1f-4a9a-88ad-109a52132d73 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d5167284-086d-4b37-98b0-3853baabf418] Received event network-vif-unplugged-86cabae0-8599-4330-b71c-91eb2e6b76d8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:51:15 compute-1 nova_compute[225855]: 2026-01-20 14:51:15.561 225859 DEBUG oslo_concurrency.lockutils [req-b834e40f-4cf2-4316-ab73-cd0de4c34931 req-49f7b545-3f1f-4a9a-88ad-109a52132d73 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "d5167284-086d-4b37-98b0-3853baabf418-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:51:15 compute-1 nova_compute[225855]: 2026-01-20 14:51:15.561 225859 DEBUG oslo_concurrency.lockutils [req-b834e40f-4cf2-4316-ab73-cd0de4c34931 req-49f7b545-3f1f-4a9a-88ad-109a52132d73 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "d5167284-086d-4b37-98b0-3853baabf418-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:51:15 compute-1 nova_compute[225855]: 2026-01-20 14:51:15.562 225859 DEBUG oslo_concurrency.lockutils [req-b834e40f-4cf2-4316-ab73-cd0de4c34931 req-49f7b545-3f1f-4a9a-88ad-109a52132d73 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "d5167284-086d-4b37-98b0-3853baabf418-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:51:15 compute-1 nova_compute[225855]: 2026-01-20 14:51:15.562 225859 DEBUG nova.compute.manager [req-b834e40f-4cf2-4316-ab73-cd0de4c34931 req-49f7b545-3f1f-4a9a-88ad-109a52132d73 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d5167284-086d-4b37-98b0-3853baabf418] No waiting events found dispatching network-vif-unplugged-86cabae0-8599-4330-b71c-91eb2e6b76d8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:51:15 compute-1 nova_compute[225855]: 2026-01-20 14:51:15.562 225859 DEBUG nova.compute.manager [req-b834e40f-4cf2-4316-ab73-cd0de4c34931 req-49f7b545-3f1f-4a9a-88ad-109a52132d73 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d5167284-086d-4b37-98b0-3853baabf418] Received event network-vif-unplugged-86cabae0-8599-4330-b71c-91eb2e6b76d8 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 20 14:51:15 compute-1 nova_compute[225855]: 2026-01-20 14:51:15.562 225859 DEBUG nova.compute.manager [req-b834e40f-4cf2-4316-ab73-cd0de4c34931 req-49f7b545-3f1f-4a9a-88ad-109a52132d73 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d5167284-086d-4b37-98b0-3853baabf418] Received event network-vif-plugged-86cabae0-8599-4330-b71c-91eb2e6b76d8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:51:15 compute-1 nova_compute[225855]: 2026-01-20 14:51:15.563 225859 DEBUG oslo_concurrency.lockutils [req-b834e40f-4cf2-4316-ab73-cd0de4c34931 req-49f7b545-3f1f-4a9a-88ad-109a52132d73 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "d5167284-086d-4b37-98b0-3853baabf418-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:51:15 compute-1 nova_compute[225855]: 2026-01-20 14:51:15.563 225859 DEBUG oslo_concurrency.lockutils [req-b834e40f-4cf2-4316-ab73-cd0de4c34931 req-49f7b545-3f1f-4a9a-88ad-109a52132d73 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "d5167284-086d-4b37-98b0-3853baabf418-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:51:15 compute-1 nova_compute[225855]: 2026-01-20 14:51:15.563 225859 DEBUG oslo_concurrency.lockutils [req-b834e40f-4cf2-4316-ab73-cd0de4c34931 req-49f7b545-3f1f-4a9a-88ad-109a52132d73 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "d5167284-086d-4b37-98b0-3853baabf418-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:51:15 compute-1 nova_compute[225855]: 2026-01-20 14:51:15.564 225859 DEBUG nova.compute.manager [req-b834e40f-4cf2-4316-ab73-cd0de4c34931 req-49f7b545-3f1f-4a9a-88ad-109a52132d73 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d5167284-086d-4b37-98b0-3853baabf418] No waiting events found dispatching network-vif-plugged-86cabae0-8599-4330-b71c-91eb2e6b76d8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:51:15 compute-1 nova_compute[225855]: 2026-01-20 14:51:15.564 225859 WARNING nova.compute.manager [req-b834e40f-4cf2-4316-ab73-cd0de4c34931 req-49f7b545-3f1f-4a9a-88ad-109a52132d73 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d5167284-086d-4b37-98b0-3853baabf418] Received unexpected event network-vif-plugged-86cabae0-8599-4330-b71c-91eb2e6b76d8 for instance with vm_state active and task_state deleting.
Jan 20 14:51:15 compute-1 nova_compute[225855]: 2026-01-20 14:51:15.564 225859 DEBUG nova.compute.manager [req-b834e40f-4cf2-4316-ab73-cd0de4c34931 req-49f7b545-3f1f-4a9a-88ad-109a52132d73 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d5167284-086d-4b37-98b0-3853baabf418] Received event network-vif-deleted-86cabae0-8599-4330-b71c-91eb2e6b76d8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:51:15 compute-1 nova_compute[225855]: 2026-01-20 14:51:15.565 225859 DEBUG oslo_concurrency.lockutils [None req-ae481e4d-076c-4b49-a909-514aa77b1f1f a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:51:15 compute-1 nova_compute[225855]: 2026-01-20 14:51:15.566 225859 DEBUG oslo_concurrency.lockutils [None req-ae481e4d-076c-4b49-a909-514aa77b1f1f a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:51:15 compute-1 nova_compute[225855]: 2026-01-20 14:51:15.571 225859 DEBUG oslo_concurrency.lockutils [None req-ae481e4d-076c-4b49-a909-514aa77b1f1f a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.005s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:51:15 compute-1 nova_compute[225855]: 2026-01-20 14:51:15.635 225859 INFO nova.scheduler.client.report [None req-ae481e4d-076c-4b49-a909-514aa77b1f1f a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Deleted allocations for instance d5167284-086d-4b37-98b0-3853baabf418
Jan 20 14:51:15 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:51:15 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:51:15 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:51:15.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:51:15 compute-1 nova_compute[225855]: 2026-01-20 14:51:15.738 225859 DEBUG oslo_concurrency.lockutils [None req-ae481e4d-076c-4b49-a909-514aa77b1f1f a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Lock "d5167284-086d-4b37-98b0-3853baabf418" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.619s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:51:15 compute-1 nova_compute[225855]: 2026-01-20 14:51:15.916 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:51:16 compute-1 ceph-mon[81775]: pgmap v1911: 321 pgs: 321 active+clean; 349 MiB data, 1008 MiB used, 20 GiB / 21 GiB avail; 7.0 MiB/s rd, 2.1 KiB/s wr, 310 op/s
Jan 20 14:51:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:51:16.411 140354 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:51:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:51:16.412 140354 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:51:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:51:16.412 140354 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:51:16 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:51:16 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:51:16 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:51:16.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:51:17 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:51:17 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:51:17 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:51:17 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:51:17.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:51:17 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/100085165' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:51:18 compute-1 nova_compute[225855]: 2026-01-20 14:51:18.415 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:51:18 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:51:18 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:51:18 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:51:18.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:51:18 compute-1 ceph-mon[81775]: pgmap v1912: 321 pgs: 321 active+clean; 347 MiB data, 999 MiB used, 20 GiB / 21 GiB avail; 5.4 MiB/s rd, 870 KiB/s wr, 265 op/s
Jan 20 14:51:18 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/35703789' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 14:51:18 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/35703789' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 14:51:19 compute-1 podman[268921]: 2026-01-20 14:51:19.004618174 +0000 UTC m=+0.054832759 container health_status 533cd8e0c3b8b7f910f15cef49e7f879ab46c12ce646a35628c15f13725eb500 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 20 14:51:19 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e257 e257: 3 total, 3 up, 3 in
Jan 20 14:51:19 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:51:19 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:51:19 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:51:19.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:51:20 compute-1 ceph-mon[81775]: osdmap e257: 3 total, 3 up, 3 in
Jan 20 14:51:20 compute-1 ceph-mon[81775]: pgmap v1914: 321 pgs: 321 active+clean; 356 MiB data, 994 MiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 1.3 MiB/s wr, 241 op/s
Jan 20 14:51:20 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/2416694841' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 14:51:20 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/2416694841' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 14:51:20 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/2798121711' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:51:20 compute-1 nova_compute[225855]: 2026-01-20 14:51:20.920 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:51:20 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:51:20 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:51:20 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:51:20.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:51:21 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/3218029730' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:51:21 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:51:21 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:51:21 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:51:21.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:51:22 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:51:22 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/3118254261' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 14:51:22 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/3118254261' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 14:51:22 compute-1 ceph-mon[81775]: pgmap v1915: 321 pgs: 321 active+clean; 388 MiB data, 1010 MiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 3.0 MiB/s wr, 234 op/s
Jan 20 14:51:22 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/1623500243' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:51:22 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/1223132071' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:51:22 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:51:22 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:51:22 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:51:22.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:51:23 compute-1 nova_compute[225855]: 2026-01-20 14:51:23.418 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:51:23 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:51:23 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:51:23 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:51:23.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:51:24 compute-1 ceph-mon[81775]: pgmap v1916: 321 pgs: 321 active+clean; 393 MiB data, 1011 MiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 4.2 MiB/s wr, 232 op/s
Jan 20 14:51:24 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:51:24.947 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5ffd4ac3-9266-4927-98ad-20a17782c725, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '36'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:51:24 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:51:24 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:51:24 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:51:24.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:51:25 compute-1 nova_compute[225855]: 2026-01-20 14:51:25.243 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:51:25 compute-1 nova_compute[225855]: 2026-01-20 14:51:25.561 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:51:25 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:51:25 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:51:25 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:51:25.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:51:25 compute-1 nova_compute[225855]: 2026-01-20 14:51:25.922 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:51:26 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e258 e258: 3 total, 3 up, 3 in
Jan 20 14:51:27 compute-1 ceph-mon[81775]: pgmap v1917: 321 pgs: 321 active+clean; 339 MiB data, 984 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 4.3 MiB/s wr, 242 op/s
Jan 20 14:51:27 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:51:27 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:51:27 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:51:27.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:51:27 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e258 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:51:27 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:51:27 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:51:27 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:51:27.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:51:28 compute-1 ceph-mon[81775]: osdmap e258: 3 total, 3 up, 3 in
Jan 20 14:51:28 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/890546304' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:51:28 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e259 e259: 3 total, 3 up, 3 in
Jan 20 14:51:28 compute-1 nova_compute[225855]: 2026-01-20 14:51:28.351 225859 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1768920673.3507633, d5167284-086d-4b37-98b0-3853baabf418 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:51:28 compute-1 nova_compute[225855]: 2026-01-20 14:51:28.351 225859 INFO nova.compute.manager [-] [instance: d5167284-086d-4b37-98b0-3853baabf418] VM Stopped (Lifecycle Event)
Jan 20 14:51:28 compute-1 nova_compute[225855]: 2026-01-20 14:51:28.380 225859 DEBUG nova.compute.manager [None req-95393680-3740-411d-b52c-e2f8bf3af950 - - - - - -] [instance: d5167284-086d-4b37-98b0-3853baabf418] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:51:28 compute-1 nova_compute[225855]: 2026-01-20 14:51:28.478 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:51:29 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:51:29 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:51:29 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:51:29.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:51:29 compute-1 ceph-mon[81775]: pgmap v1919: 321 pgs: 321 active+clean; 312 MiB data, 965 MiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 3.6 MiB/s wr, 352 op/s
Jan 20 14:51:29 compute-1 ceph-mon[81775]: osdmap e259: 3 total, 3 up, 3 in
Jan 20 14:51:29 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e260 e260: 3 total, 3 up, 3 in
Jan 20 14:51:29 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:51:29 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:51:29 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:51:29.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:51:30 compute-1 ceph-mon[81775]: osdmap e260: 3 total, 3 up, 3 in
Jan 20 14:51:30 compute-1 sudo[268947]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:51:30 compute-1 sudo[268947]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:51:30 compute-1 sudo[268947]: pam_unix(sudo:session): session closed for user root
Jan 20 14:51:30 compute-1 sudo[268972]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:51:30 compute-1 sudo[268972]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:51:30 compute-1 sudo[268972]: pam_unix(sudo:session): session closed for user root
Jan 20 14:51:30 compute-1 nova_compute[225855]: 2026-01-20 14:51:30.923 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:51:31 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:51:31 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:51:31 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:51:31.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:51:31 compute-1 ceph-mon[81775]: pgmap v1922: 321 pgs: 321 active+clean; 317 MiB data, 959 MiB used, 20 GiB / 21 GiB avail; 7.7 MiB/s rd, 1.6 MiB/s wr, 395 op/s
Jan 20 14:51:31 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:51:31 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:51:31 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:51:31.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:51:32 compute-1 ceph-mon[81775]: pgmap v1923: 321 pgs: 321 active+clean; 291 MiB data, 942 MiB used, 20 GiB / 21 GiB avail; 8.6 MiB/s rd, 2.1 MiB/s wr, 353 op/s
Jan 20 14:51:32 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:51:33 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:51:33 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:51:33 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:51:33.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:51:33 compute-1 nova_compute[225855]: 2026-01-20 14:51:33.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:51:33 compute-1 nova_compute[225855]: 2026-01-20 14:51:33.340 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 14:51:33 compute-1 nova_compute[225855]: 2026-01-20 14:51:33.340 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 20 14:51:33 compute-1 nova_compute[225855]: 2026-01-20 14:51:33.358 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 20 14:51:33 compute-1 nova_compute[225855]: 2026-01-20 14:51:33.359 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:51:33 compute-1 nova_compute[225855]: 2026-01-20 14:51:33.359 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:51:33 compute-1 nova_compute[225855]: 2026-01-20 14:51:33.480 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:51:33 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/1599957301' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:51:33 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:51:33 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:51:33 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:51:33.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:51:34 compute-1 nova_compute[225855]: 2026-01-20 14:51:34.340 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:51:34 compute-1 nova_compute[225855]: 2026-01-20 14:51:34.340 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 14:51:34 compute-1 ceph-mon[81775]: pgmap v1924: 321 pgs: 321 active+clean; 284 MiB data, 939 MiB used, 20 GiB / 21 GiB avail; 8.7 MiB/s rd, 3.1 MiB/s wr, 374 op/s
Jan 20 14:51:35 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:51:35 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:51:35 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:51:35.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:51:35 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:51:35 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:51:35 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:51:35.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:51:35 compute-1 nova_compute[225855]: 2026-01-20 14:51:35.925 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:51:37 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:51:37 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:51:37 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:51:37.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:51:37 compute-1 ceph-mon[81775]: pgmap v1925: 321 pgs: 321 active+clean; 261 MiB data, 930 MiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 2.7 MiB/s wr, 231 op/s
Jan 20 14:51:37 compute-1 nova_compute[225855]: 2026-01-20 14:51:37.341 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:51:37 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:51:37 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:51:37 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:51:37 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:51:37.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:51:38 compute-1 podman[269002]: 2026-01-20 14:51:38.036211969 +0000 UTC m=+0.080756601 container health_status 72be95c12e041eb2cd1a16ad70dfa1391d9f03a5f3180a614ead9459dcfcd46f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller)
Jan 20 14:51:38 compute-1 nova_compute[225855]: 2026-01-20 14:51:38.481 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:51:39 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:51:39 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:51:39 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:51:39.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:51:39 compute-1 ceph-mon[81775]: pgmap v1926: 321 pgs: 321 active+clean; 269 MiB data, 956 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 3.0 MiB/s wr, 215 op/s
Jan 20 14:51:39 compute-1 nova_compute[225855]: 2026-01-20 14:51:39.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:51:39 compute-1 nova_compute[225855]: 2026-01-20 14:51:39.368 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:51:39 compute-1 nova_compute[225855]: 2026-01-20 14:51:39.368 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:51:39 compute-1 nova_compute[225855]: 2026-01-20 14:51:39.368 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:51:39 compute-1 nova_compute[225855]: 2026-01-20 14:51:39.368 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 14:51:39 compute-1 nova_compute[225855]: 2026-01-20 14:51:39.369 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:51:39 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e261 e261: 3 total, 3 up, 3 in
Jan 20 14:51:39 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:51:39 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:51:39 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:51:39.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:51:39 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:51:39 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1849351552' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:51:39 compute-1 nova_compute[225855]: 2026-01-20 14:51:39.803 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:51:39 compute-1 nova_compute[225855]: 2026-01-20 14:51:39.957 225859 WARNING nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 14:51:39 compute-1 nova_compute[225855]: 2026-01-20 14:51:39.958 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4447MB free_disk=20.91362762451172GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 14:51:39 compute-1 nova_compute[225855]: 2026-01-20 14:51:39.959 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:51:39 compute-1 nova_compute[225855]: 2026-01-20 14:51:39.959 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:51:40 compute-1 nova_compute[225855]: 2026-01-20 14:51:40.039 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 14:51:40 compute-1 nova_compute[225855]: 2026-01-20 14:51:40.039 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 14:51:40 compute-1 nova_compute[225855]: 2026-01-20 14:51:40.065 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:51:40 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:51:40 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1423700390' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:51:40 compute-1 nova_compute[225855]: 2026-01-20 14:51:40.512 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:51:40 compute-1 nova_compute[225855]: 2026-01-20 14:51:40.517 225859 DEBUG nova.compute.provider_tree [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:51:40 compute-1 nova_compute[225855]: 2026-01-20 14:51:40.542 225859 DEBUG nova.scheduler.client.report [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:51:40 compute-1 nova_compute[225855]: 2026-01-20 14:51:40.567 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 14:51:40 compute-1 nova_compute[225855]: 2026-01-20 14:51:40.568 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.609s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:51:40 compute-1 ceph-mon[81775]: osdmap e261: 3 total, 3 up, 3 in
Jan 20 14:51:40 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/1849351552' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:51:40 compute-1 ceph-mon[81775]: pgmap v1928: 321 pgs: 321 active+clean; 275 MiB data, 962 MiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 2.7 MiB/s wr, 198 op/s
Jan 20 14:51:40 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/1423700390' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:51:40 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/2350871441' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:51:40 compute-1 nova_compute[225855]: 2026-01-20 14:51:40.927 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:51:41 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:51:41 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:51:41 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:51:41.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:51:41 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/2869754477' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:51:41 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/3808001398' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:51:41 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:51:41 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:51:41 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:51:41.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:51:42 compute-1 nova_compute[225855]: 2026-01-20 14:51:42.122 225859 DEBUG oslo_concurrency.lockutils [None req-43830f2f-2208-45ef-bcaf-0627d14e2cd6 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] Acquiring lock "932fd680-9aa0-49b4-9915-fa55104aaad7" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:51:42 compute-1 nova_compute[225855]: 2026-01-20 14:51:42.123 225859 DEBUG oslo_concurrency.lockutils [None req-43830f2f-2208-45ef-bcaf-0627d14e2cd6 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] Lock "932fd680-9aa0-49b4-9915-fa55104aaad7" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:51:42 compute-1 nova_compute[225855]: 2026-01-20 14:51:42.144 225859 DEBUG nova.compute.manager [None req-43830f2f-2208-45ef-bcaf-0627d14e2cd6 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] [instance: 932fd680-9aa0-49b4-9915-fa55104aaad7] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 20 14:51:42 compute-1 nova_compute[225855]: 2026-01-20 14:51:42.265 225859 DEBUG oslo_concurrency.lockutils [None req-43830f2f-2208-45ef-bcaf-0627d14e2cd6 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:51:42 compute-1 nova_compute[225855]: 2026-01-20 14:51:42.266 225859 DEBUG oslo_concurrency.lockutils [None req-43830f2f-2208-45ef-bcaf-0627d14e2cd6 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:51:42 compute-1 nova_compute[225855]: 2026-01-20 14:51:42.277 225859 DEBUG nova.virt.hardware [None req-43830f2f-2208-45ef-bcaf-0627d14e2cd6 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 20 14:51:42 compute-1 nova_compute[225855]: 2026-01-20 14:51:42.277 225859 INFO nova.compute.claims [None req-43830f2f-2208-45ef-bcaf-0627d14e2cd6 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] [instance: 932fd680-9aa0-49b4-9915-fa55104aaad7] Claim successful on node compute-1.ctlplane.example.com
Jan 20 14:51:42 compute-1 nova_compute[225855]: 2026-01-20 14:51:42.403 225859 DEBUG oslo_concurrency.processutils [None req-43830f2f-2208-45ef-bcaf-0627d14e2cd6 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:51:42 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e261 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:51:42 compute-1 nova_compute[225855]: 2026-01-20 14:51:42.570 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:51:42 compute-1 ceph-mon[81775]: pgmap v1929: 321 pgs: 321 active+clean; 291 MiB data, 972 MiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 3.4 MiB/s wr, 151 op/s
Jan 20 14:51:42 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/166196684' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:51:42 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/29521595' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:51:42 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/2877049591' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:51:42 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:51:42 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3514914750' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:51:42 compute-1 nova_compute[225855]: 2026-01-20 14:51:42.829 225859 DEBUG oslo_concurrency.processutils [None req-43830f2f-2208-45ef-bcaf-0627d14e2cd6 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:51:42 compute-1 nova_compute[225855]: 2026-01-20 14:51:42.837 225859 DEBUG nova.compute.provider_tree [None req-43830f2f-2208-45ef-bcaf-0627d14e2cd6 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:51:42 compute-1 nova_compute[225855]: 2026-01-20 14:51:42.867 225859 DEBUG nova.scheduler.client.report [None req-43830f2f-2208-45ef-bcaf-0627d14e2cd6 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:51:42 compute-1 nova_compute[225855]: 2026-01-20 14:51:42.901 225859 DEBUG oslo_concurrency.lockutils [None req-43830f2f-2208-45ef-bcaf-0627d14e2cd6 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.635s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:51:42 compute-1 nova_compute[225855]: 2026-01-20 14:51:42.902 225859 DEBUG nova.compute.manager [None req-43830f2f-2208-45ef-bcaf-0627d14e2cd6 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] [instance: 932fd680-9aa0-49b4-9915-fa55104aaad7] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 20 14:51:42 compute-1 nova_compute[225855]: 2026-01-20 14:51:42.965 225859 DEBUG nova.compute.manager [None req-43830f2f-2208-45ef-bcaf-0627d14e2cd6 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] [instance: 932fd680-9aa0-49b4-9915-fa55104aaad7] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948
Jan 20 14:51:42 compute-1 nova_compute[225855]: 2026-01-20 14:51:42.982 225859 INFO nova.virt.libvirt.driver [None req-43830f2f-2208-45ef-bcaf-0627d14e2cd6 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] [instance: 932fd680-9aa0-49b4-9915-fa55104aaad7] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 20 14:51:43 compute-1 nova_compute[225855]: 2026-01-20 14:51:43.003 225859 DEBUG nova.compute.manager [None req-43830f2f-2208-45ef-bcaf-0627d14e2cd6 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] [instance: 932fd680-9aa0-49b4-9915-fa55104aaad7] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 20 14:51:43 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:51:43 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:51:43 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:51:43.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:51:43 compute-1 nova_compute[225855]: 2026-01-20 14:51:43.105 225859 DEBUG nova.compute.manager [None req-43830f2f-2208-45ef-bcaf-0627d14e2cd6 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] [instance: 932fd680-9aa0-49b4-9915-fa55104aaad7] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 20 14:51:43 compute-1 nova_compute[225855]: 2026-01-20 14:51:43.106 225859 DEBUG nova.virt.libvirt.driver [None req-43830f2f-2208-45ef-bcaf-0627d14e2cd6 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] [instance: 932fd680-9aa0-49b4-9915-fa55104aaad7] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 20 14:51:43 compute-1 nova_compute[225855]: 2026-01-20 14:51:43.107 225859 INFO nova.virt.libvirt.driver [None req-43830f2f-2208-45ef-bcaf-0627d14e2cd6 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] [instance: 932fd680-9aa0-49b4-9915-fa55104aaad7] Creating image(s)
Jan 20 14:51:43 compute-1 nova_compute[225855]: 2026-01-20 14:51:43.138 225859 DEBUG nova.storage.rbd_utils [None req-43830f2f-2208-45ef-bcaf-0627d14e2cd6 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] rbd image 932fd680-9aa0-49b4-9915-fa55104aaad7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:51:43 compute-1 nova_compute[225855]: 2026-01-20 14:51:43.169 225859 DEBUG nova.storage.rbd_utils [None req-43830f2f-2208-45ef-bcaf-0627d14e2cd6 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] rbd image 932fd680-9aa0-49b4-9915-fa55104aaad7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:51:43 compute-1 nova_compute[225855]: 2026-01-20 14:51:43.199 225859 DEBUG nova.storage.rbd_utils [None req-43830f2f-2208-45ef-bcaf-0627d14e2cd6 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] rbd image 932fd680-9aa0-49b4-9915-fa55104aaad7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:51:43 compute-1 nova_compute[225855]: 2026-01-20 14:51:43.204 225859 DEBUG oslo_concurrency.processutils [None req-43830f2f-2208-45ef-bcaf-0627d14e2cd6 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:51:43 compute-1 nova_compute[225855]: 2026-01-20 14:51:43.287 225859 DEBUG oslo_concurrency.processutils [None req-43830f2f-2208-45ef-bcaf-0627d14e2cd6 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:51:43 compute-1 nova_compute[225855]: 2026-01-20 14:51:43.288 225859 DEBUG oslo_concurrency.lockutils [None req-43830f2f-2208-45ef-bcaf-0627d14e2cd6 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] Acquiring lock "82d5c1918fd7c974214c7a48c1793a7a82560462" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:51:43 compute-1 nova_compute[225855]: 2026-01-20 14:51:43.290 225859 DEBUG oslo_concurrency.lockutils [None req-43830f2f-2208-45ef-bcaf-0627d14e2cd6 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:51:43 compute-1 nova_compute[225855]: 2026-01-20 14:51:43.291 225859 DEBUG oslo_concurrency.lockutils [None req-43830f2f-2208-45ef-bcaf-0627d14e2cd6 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:51:43 compute-1 nova_compute[225855]: 2026-01-20 14:51:43.317 225859 DEBUG nova.storage.rbd_utils [None req-43830f2f-2208-45ef-bcaf-0627d14e2cd6 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] rbd image 932fd680-9aa0-49b4-9915-fa55104aaad7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:51:43 compute-1 nova_compute[225855]: 2026-01-20 14:51:43.321 225859 DEBUG oslo_concurrency.processutils [None req-43830f2f-2208-45ef-bcaf-0627d14e2cd6 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 932fd680-9aa0-49b4-9915-fa55104aaad7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:51:43 compute-1 nova_compute[225855]: 2026-01-20 14:51:43.357 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:51:43 compute-1 nova_compute[225855]: 2026-01-20 14:51:43.547 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:51:43 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/3514914750' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:51:43 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/2407416010' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:51:43 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:51:43 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:51:43 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:51:43.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:51:44 compute-1 nova_compute[225855]: 2026-01-20 14:51:44.082 225859 DEBUG oslo_concurrency.processutils [None req-43830f2f-2208-45ef-bcaf-0627d14e2cd6 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 932fd680-9aa0-49b4-9915-fa55104aaad7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.761s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:51:44 compute-1 nova_compute[225855]: 2026-01-20 14:51:44.152 225859 DEBUG nova.storage.rbd_utils [None req-43830f2f-2208-45ef-bcaf-0627d14e2cd6 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] resizing rbd image 932fd680-9aa0-49b4-9915-fa55104aaad7_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 20 14:51:44 compute-1 nova_compute[225855]: 2026-01-20 14:51:44.245 225859 DEBUG nova.objects.instance [None req-43830f2f-2208-45ef-bcaf-0627d14e2cd6 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] Lazy-loading 'migration_context' on Instance uuid 932fd680-9aa0-49b4-9915-fa55104aaad7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:51:44 compute-1 nova_compute[225855]: 2026-01-20 14:51:44.276 225859 DEBUG nova.virt.libvirt.driver [None req-43830f2f-2208-45ef-bcaf-0627d14e2cd6 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] [instance: 932fd680-9aa0-49b4-9915-fa55104aaad7] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 20 14:51:44 compute-1 nova_compute[225855]: 2026-01-20 14:51:44.276 225859 DEBUG nova.virt.libvirt.driver [None req-43830f2f-2208-45ef-bcaf-0627d14e2cd6 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] [instance: 932fd680-9aa0-49b4-9915-fa55104aaad7] Ensure instance console log exists: /var/lib/nova/instances/932fd680-9aa0-49b4-9915-fa55104aaad7/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 20 14:51:44 compute-1 nova_compute[225855]: 2026-01-20 14:51:44.277 225859 DEBUG oslo_concurrency.lockutils [None req-43830f2f-2208-45ef-bcaf-0627d14e2cd6 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:51:44 compute-1 nova_compute[225855]: 2026-01-20 14:51:44.277 225859 DEBUG oslo_concurrency.lockutils [None req-43830f2f-2208-45ef-bcaf-0627d14e2cd6 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:51:44 compute-1 nova_compute[225855]: 2026-01-20 14:51:44.277 225859 DEBUG oslo_concurrency.lockutils [None req-43830f2f-2208-45ef-bcaf-0627d14e2cd6 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:51:44 compute-1 nova_compute[225855]: 2026-01-20 14:51:44.279 225859 DEBUG nova.virt.libvirt.driver [None req-43830f2f-2208-45ef-bcaf-0627d14e2cd6 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] [instance: 932fd680-9aa0-49b4-9915-fa55104aaad7] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'device_type': 'disk', 'encryption_options': None, 'size': 0, 'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'guest_format': None, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'image_id': 'a32b3e07-16d8-46fd-9a7b-c242c432fcf9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 20 14:51:44 compute-1 nova_compute[225855]: 2026-01-20 14:51:44.282 225859 WARNING nova.virt.libvirt.driver [None req-43830f2f-2208-45ef-bcaf-0627d14e2cd6 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 14:51:44 compute-1 nova_compute[225855]: 2026-01-20 14:51:44.297 225859 DEBUG nova.virt.libvirt.host [None req-43830f2f-2208-45ef-bcaf-0627d14e2cd6 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 20 14:51:44 compute-1 nova_compute[225855]: 2026-01-20 14:51:44.298 225859 DEBUG nova.virt.libvirt.host [None req-43830f2f-2208-45ef-bcaf-0627d14e2cd6 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 20 14:51:44 compute-1 nova_compute[225855]: 2026-01-20 14:51:44.301 225859 DEBUG nova.virt.libvirt.host [None req-43830f2f-2208-45ef-bcaf-0627d14e2cd6 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 20 14:51:44 compute-1 nova_compute[225855]: 2026-01-20 14:51:44.301 225859 DEBUG nova.virt.libvirt.host [None req-43830f2f-2208-45ef-bcaf-0627d14e2cd6 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 20 14:51:44 compute-1 nova_compute[225855]: 2026-01-20 14:51:44.303 225859 DEBUG nova.virt.libvirt.driver [None req-43830f2f-2208-45ef-bcaf-0627d14e2cd6 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 20 14:51:44 compute-1 nova_compute[225855]: 2026-01-20 14:51:44.303 225859 DEBUG nova.virt.hardware [None req-43830f2f-2208-45ef-bcaf-0627d14e2cd6 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-20T14:21:55Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='522deaab-a741-4dbb-932d-d8b13a211c33',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 20 14:51:44 compute-1 nova_compute[225855]: 2026-01-20 14:51:44.303 225859 DEBUG nova.virt.hardware [None req-43830f2f-2208-45ef-bcaf-0627d14e2cd6 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 20 14:51:44 compute-1 nova_compute[225855]: 2026-01-20 14:51:44.304 225859 DEBUG nova.virt.hardware [None req-43830f2f-2208-45ef-bcaf-0627d14e2cd6 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 20 14:51:44 compute-1 nova_compute[225855]: 2026-01-20 14:51:44.304 225859 DEBUG nova.virt.hardware [None req-43830f2f-2208-45ef-bcaf-0627d14e2cd6 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 20 14:51:44 compute-1 nova_compute[225855]: 2026-01-20 14:51:44.304 225859 DEBUG nova.virt.hardware [None req-43830f2f-2208-45ef-bcaf-0627d14e2cd6 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 20 14:51:44 compute-1 nova_compute[225855]: 2026-01-20 14:51:44.304 225859 DEBUG nova.virt.hardware [None req-43830f2f-2208-45ef-bcaf-0627d14e2cd6 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 20 14:51:44 compute-1 nova_compute[225855]: 2026-01-20 14:51:44.304 225859 DEBUG nova.virt.hardware [None req-43830f2f-2208-45ef-bcaf-0627d14e2cd6 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 20 14:51:44 compute-1 nova_compute[225855]: 2026-01-20 14:51:44.305 225859 DEBUG nova.virt.hardware [None req-43830f2f-2208-45ef-bcaf-0627d14e2cd6 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 20 14:51:44 compute-1 nova_compute[225855]: 2026-01-20 14:51:44.305 225859 DEBUG nova.virt.hardware [None req-43830f2f-2208-45ef-bcaf-0627d14e2cd6 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 20 14:51:44 compute-1 nova_compute[225855]: 2026-01-20 14:51:44.305 225859 DEBUG nova.virt.hardware [None req-43830f2f-2208-45ef-bcaf-0627d14e2cd6 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 20 14:51:44 compute-1 nova_compute[225855]: 2026-01-20 14:51:44.305 225859 DEBUG nova.virt.hardware [None req-43830f2f-2208-45ef-bcaf-0627d14e2cd6 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 20 14:51:44 compute-1 nova_compute[225855]: 2026-01-20 14:51:44.308 225859 DEBUG oslo_concurrency.processutils [None req-43830f2f-2208-45ef-bcaf-0627d14e2cd6 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:51:44 compute-1 nova_compute[225855]: 2026-01-20 14:51:44.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:51:44 compute-1 ceph-mon[81775]: pgmap v1930: 321 pgs: 321 active+clean; 294 MiB data, 973 MiB used, 20 GiB / 21 GiB avail; 507 KiB/s rd, 2.6 MiB/s wr, 133 op/s
Jan 20 14:51:44 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/2029288579' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:51:44 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:51:44 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/297072441' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:51:44 compute-1 nova_compute[225855]: 2026-01-20 14:51:44.758 225859 DEBUG oslo_concurrency.processutils [None req-43830f2f-2208-45ef-bcaf-0627d14e2cd6 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:51:44 compute-1 nova_compute[225855]: 2026-01-20 14:51:44.785 225859 DEBUG nova.storage.rbd_utils [None req-43830f2f-2208-45ef-bcaf-0627d14e2cd6 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] rbd image 932fd680-9aa0-49b4-9915-fa55104aaad7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:51:44 compute-1 nova_compute[225855]: 2026-01-20 14:51:44.789 225859 DEBUG oslo_concurrency.processutils [None req-43830f2f-2208-45ef-bcaf-0627d14e2cd6 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:51:45 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:51:45 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:51:45 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:51:45.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:51:45 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:51:45 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/664632545' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:51:45 compute-1 nova_compute[225855]: 2026-01-20 14:51:45.270 225859 DEBUG oslo_concurrency.processutils [None req-43830f2f-2208-45ef-bcaf-0627d14e2cd6 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:51:45 compute-1 nova_compute[225855]: 2026-01-20 14:51:45.272 225859 DEBUG nova.objects.instance [None req-43830f2f-2208-45ef-bcaf-0627d14e2cd6 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] Lazy-loading 'pci_devices' on Instance uuid 932fd680-9aa0-49b4-9915-fa55104aaad7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:51:45 compute-1 nova_compute[225855]: 2026-01-20 14:51:45.333 225859 DEBUG nova.virt.libvirt.driver [None req-43830f2f-2208-45ef-bcaf-0627d14e2cd6 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] [instance: 932fd680-9aa0-49b4-9915-fa55104aaad7] End _get_guest_xml xml=<domain type="kvm">
Jan 20 14:51:45 compute-1 nova_compute[225855]:   <uuid>932fd680-9aa0-49b4-9915-fa55104aaad7</uuid>
Jan 20 14:51:45 compute-1 nova_compute[225855]:   <name>instance-00000072</name>
Jan 20 14:51:45 compute-1 nova_compute[225855]:   <memory>131072</memory>
Jan 20 14:51:45 compute-1 nova_compute[225855]:   <vcpu>1</vcpu>
Jan 20 14:51:45 compute-1 nova_compute[225855]:   <metadata>
Jan 20 14:51:45 compute-1 nova_compute[225855]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 14:51:45 compute-1 nova_compute[225855]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 14:51:45 compute-1 nova_compute[225855]:       <nova:name>tempest-ServerShowV247Test-server-597934545</nova:name>
Jan 20 14:51:45 compute-1 nova_compute[225855]:       <nova:creationTime>2026-01-20 14:51:44</nova:creationTime>
Jan 20 14:51:45 compute-1 nova_compute[225855]:       <nova:flavor name="m1.nano">
Jan 20 14:51:45 compute-1 nova_compute[225855]:         <nova:memory>128</nova:memory>
Jan 20 14:51:45 compute-1 nova_compute[225855]:         <nova:disk>1</nova:disk>
Jan 20 14:51:45 compute-1 nova_compute[225855]:         <nova:swap>0</nova:swap>
Jan 20 14:51:45 compute-1 nova_compute[225855]:         <nova:ephemeral>0</nova:ephemeral>
Jan 20 14:51:45 compute-1 nova_compute[225855]:         <nova:vcpus>1</nova:vcpus>
Jan 20 14:51:45 compute-1 nova_compute[225855]:       </nova:flavor>
Jan 20 14:51:45 compute-1 nova_compute[225855]:       <nova:owner>
Jan 20 14:51:45 compute-1 nova_compute[225855]:         <nova:user uuid="cdcdce94e7354b3bafb34285408888b9">tempest-ServerShowV247Test-1508434892-project-member</nova:user>
Jan 20 14:51:45 compute-1 nova_compute[225855]:         <nova:project uuid="ecfc3366b9194864a3f15ce0114b5ee3">tempest-ServerShowV247Test-1508434892</nova:project>
Jan 20 14:51:45 compute-1 nova_compute[225855]:       </nova:owner>
Jan 20 14:51:45 compute-1 nova_compute[225855]:       <nova:root type="image" uuid="a32b3e07-16d8-46fd-9a7b-c242c432fcf9"/>
Jan 20 14:51:45 compute-1 nova_compute[225855]:       <nova:ports/>
Jan 20 14:51:45 compute-1 nova_compute[225855]:     </nova:instance>
Jan 20 14:51:45 compute-1 nova_compute[225855]:   </metadata>
Jan 20 14:51:45 compute-1 nova_compute[225855]:   <sysinfo type="smbios">
Jan 20 14:51:45 compute-1 nova_compute[225855]:     <system>
Jan 20 14:51:45 compute-1 nova_compute[225855]:       <entry name="manufacturer">RDO</entry>
Jan 20 14:51:45 compute-1 nova_compute[225855]:       <entry name="product">OpenStack Compute</entry>
Jan 20 14:51:45 compute-1 nova_compute[225855]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 14:51:45 compute-1 nova_compute[225855]:       <entry name="serial">932fd680-9aa0-49b4-9915-fa55104aaad7</entry>
Jan 20 14:51:45 compute-1 nova_compute[225855]:       <entry name="uuid">932fd680-9aa0-49b4-9915-fa55104aaad7</entry>
Jan 20 14:51:45 compute-1 nova_compute[225855]:       <entry name="family">Virtual Machine</entry>
Jan 20 14:51:45 compute-1 nova_compute[225855]:     </system>
Jan 20 14:51:45 compute-1 nova_compute[225855]:   </sysinfo>
Jan 20 14:51:45 compute-1 nova_compute[225855]:   <os>
Jan 20 14:51:45 compute-1 nova_compute[225855]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 20 14:51:45 compute-1 nova_compute[225855]:     <boot dev="hd"/>
Jan 20 14:51:45 compute-1 nova_compute[225855]:     <smbios mode="sysinfo"/>
Jan 20 14:51:45 compute-1 nova_compute[225855]:   </os>
Jan 20 14:51:45 compute-1 nova_compute[225855]:   <features>
Jan 20 14:51:45 compute-1 nova_compute[225855]:     <acpi/>
Jan 20 14:51:45 compute-1 nova_compute[225855]:     <apic/>
Jan 20 14:51:45 compute-1 nova_compute[225855]:     <vmcoreinfo/>
Jan 20 14:51:45 compute-1 nova_compute[225855]:   </features>
Jan 20 14:51:45 compute-1 nova_compute[225855]:   <clock offset="utc">
Jan 20 14:51:45 compute-1 nova_compute[225855]:     <timer name="pit" tickpolicy="delay"/>
Jan 20 14:51:45 compute-1 nova_compute[225855]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 20 14:51:45 compute-1 nova_compute[225855]:     <timer name="hpet" present="no"/>
Jan 20 14:51:45 compute-1 nova_compute[225855]:   </clock>
Jan 20 14:51:45 compute-1 nova_compute[225855]:   <cpu mode="custom" match="exact">
Jan 20 14:51:45 compute-1 nova_compute[225855]:     <model>Nehalem</model>
Jan 20 14:51:45 compute-1 nova_compute[225855]:     <topology sockets="1" cores="1" threads="1"/>
Jan 20 14:51:45 compute-1 nova_compute[225855]:   </cpu>
Jan 20 14:51:45 compute-1 nova_compute[225855]:   <devices>
Jan 20 14:51:45 compute-1 nova_compute[225855]:     <disk type="network" device="disk">
Jan 20 14:51:45 compute-1 nova_compute[225855]:       <driver type="raw" cache="none"/>
Jan 20 14:51:45 compute-1 nova_compute[225855]:       <source protocol="rbd" name="vms/932fd680-9aa0-49b4-9915-fa55104aaad7_disk">
Jan 20 14:51:45 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:51:45 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:51:45 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:51:45 compute-1 nova_compute[225855]:       </source>
Jan 20 14:51:45 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 14:51:45 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:51:45 compute-1 nova_compute[225855]:       </auth>
Jan 20 14:51:45 compute-1 nova_compute[225855]:       <target dev="vda" bus="virtio"/>
Jan 20 14:51:45 compute-1 nova_compute[225855]:     </disk>
Jan 20 14:51:45 compute-1 nova_compute[225855]:     <disk type="network" device="cdrom">
Jan 20 14:51:45 compute-1 nova_compute[225855]:       <driver type="raw" cache="none"/>
Jan 20 14:51:45 compute-1 nova_compute[225855]:       <source protocol="rbd" name="vms/932fd680-9aa0-49b4-9915-fa55104aaad7_disk.config">
Jan 20 14:51:45 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:51:45 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:51:45 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:51:45 compute-1 nova_compute[225855]:       </source>
Jan 20 14:51:45 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 14:51:45 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:51:45 compute-1 nova_compute[225855]:       </auth>
Jan 20 14:51:45 compute-1 nova_compute[225855]:       <target dev="sda" bus="sata"/>
Jan 20 14:51:45 compute-1 nova_compute[225855]:     </disk>
Jan 20 14:51:45 compute-1 nova_compute[225855]:     <serial type="pty">
Jan 20 14:51:45 compute-1 nova_compute[225855]:       <log file="/var/lib/nova/instances/932fd680-9aa0-49b4-9915-fa55104aaad7/console.log" append="off"/>
Jan 20 14:51:45 compute-1 nova_compute[225855]:     </serial>
Jan 20 14:51:45 compute-1 nova_compute[225855]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 20 14:51:45 compute-1 nova_compute[225855]:     <video>
Jan 20 14:51:45 compute-1 nova_compute[225855]:       <model type="virtio"/>
Jan 20 14:51:45 compute-1 nova_compute[225855]:     </video>
Jan 20 14:51:45 compute-1 nova_compute[225855]:     <input type="tablet" bus="usb"/>
Jan 20 14:51:45 compute-1 nova_compute[225855]:     <rng model="virtio">
Jan 20 14:51:45 compute-1 nova_compute[225855]:       <backend model="random">/dev/urandom</backend>
Jan 20 14:51:45 compute-1 nova_compute[225855]:     </rng>
Jan 20 14:51:45 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root"/>
Jan 20 14:51:45 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:51:45 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:51:45 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:51:45 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:51:45 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:51:45 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:51:45 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:51:45 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:51:45 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:51:45 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:51:45 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:51:45 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:51:45 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:51:45 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:51:45 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:51:45 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:51:45 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:51:45 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:51:45 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:51:45 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:51:45 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:51:45 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:51:45 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:51:45 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:51:45 compute-1 nova_compute[225855]:     <controller type="usb" index="0"/>
Jan 20 14:51:45 compute-1 nova_compute[225855]:     <memballoon model="virtio">
Jan 20 14:51:45 compute-1 nova_compute[225855]:       <stats period="10"/>
Jan 20 14:51:45 compute-1 nova_compute[225855]:     </memballoon>
Jan 20 14:51:45 compute-1 nova_compute[225855]:   </devices>
Jan 20 14:51:45 compute-1 nova_compute[225855]: </domain>
Jan 20 14:51:45 compute-1 nova_compute[225855]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 20 14:51:45 compute-1 nova_compute[225855]: 2026-01-20 14:51:45.400 225859 DEBUG nova.virt.libvirt.driver [None req-43830f2f-2208-45ef-bcaf-0627d14e2cd6 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:51:45 compute-1 nova_compute[225855]: 2026-01-20 14:51:45.400 225859 DEBUG nova.virt.libvirt.driver [None req-43830f2f-2208-45ef-bcaf-0627d14e2cd6 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:51:45 compute-1 nova_compute[225855]: 2026-01-20 14:51:45.401 225859 INFO nova.virt.libvirt.driver [None req-43830f2f-2208-45ef-bcaf-0627d14e2cd6 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] [instance: 932fd680-9aa0-49b4-9915-fa55104aaad7] Using config drive
Jan 20 14:51:45 compute-1 nova_compute[225855]: 2026-01-20 14:51:45.425 225859 DEBUG nova.storage.rbd_utils [None req-43830f2f-2208-45ef-bcaf-0627d14e2cd6 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] rbd image 932fd680-9aa0-49b4-9915-fa55104aaad7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:51:45 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/297072441' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:51:45 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/664632545' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:51:45 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:51:45 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:51:45 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:51:45.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:51:45 compute-1 nova_compute[225855]: 2026-01-20 14:51:45.910 225859 INFO nova.virt.libvirt.driver [None req-43830f2f-2208-45ef-bcaf-0627d14e2cd6 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] [instance: 932fd680-9aa0-49b4-9915-fa55104aaad7] Creating config drive at /var/lib/nova/instances/932fd680-9aa0-49b4-9915-fa55104aaad7/disk.config
Jan 20 14:51:45 compute-1 nova_compute[225855]: 2026-01-20 14:51:45.915 225859 DEBUG oslo_concurrency.processutils [None req-43830f2f-2208-45ef-bcaf-0627d14e2cd6 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/932fd680-9aa0-49b4-9915-fa55104aaad7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpc9kczawb execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:51:45 compute-1 nova_compute[225855]: 2026-01-20 14:51:45.941 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:51:46 compute-1 nova_compute[225855]: 2026-01-20 14:51:46.051 225859 DEBUG oslo_concurrency.processutils [None req-43830f2f-2208-45ef-bcaf-0627d14e2cd6 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/932fd680-9aa0-49b4-9915-fa55104aaad7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpc9kczawb" returned: 0 in 0.136s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:51:46 compute-1 nova_compute[225855]: 2026-01-20 14:51:46.079 225859 DEBUG nova.storage.rbd_utils [None req-43830f2f-2208-45ef-bcaf-0627d14e2cd6 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] rbd image 932fd680-9aa0-49b4-9915-fa55104aaad7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:51:46 compute-1 nova_compute[225855]: 2026-01-20 14:51:46.082 225859 DEBUG oslo_concurrency.processutils [None req-43830f2f-2208-45ef-bcaf-0627d14e2cd6 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/932fd680-9aa0-49b4-9915-fa55104aaad7/disk.config 932fd680-9aa0-49b4-9915-fa55104aaad7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:51:46 compute-1 nova_compute[225855]: 2026-01-20 14:51:46.241 225859 DEBUG oslo_concurrency.processutils [None req-43830f2f-2208-45ef-bcaf-0627d14e2cd6 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/932fd680-9aa0-49b4-9915-fa55104aaad7/disk.config 932fd680-9aa0-49b4-9915-fa55104aaad7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.159s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:51:46 compute-1 nova_compute[225855]: 2026-01-20 14:51:46.242 225859 INFO nova.virt.libvirt.driver [None req-43830f2f-2208-45ef-bcaf-0627d14e2cd6 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] [instance: 932fd680-9aa0-49b4-9915-fa55104aaad7] Deleting local config drive /var/lib/nova/instances/932fd680-9aa0-49b4-9915-fa55104aaad7/disk.config because it was imported into RBD.
Jan 20 14:51:46 compute-1 systemd-machined[194361]: New machine qemu-51-instance-00000072.
Jan 20 14:51:46 compute-1 systemd[1]: Started Virtual Machine qemu-51-instance-00000072.
Jan 20 14:51:46 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/3622414733' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:51:46 compute-1 ceph-mon[81775]: pgmap v1931: 321 pgs: 321 active+clean; 331 MiB data, 989 MiB used, 20 GiB / 21 GiB avail; 494 KiB/s rd, 4.2 MiB/s wr, 124 op/s
Jan 20 14:51:46 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/2312341428' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:51:47 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:51:47 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:51:47 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:51:47.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:51:47 compute-1 nova_compute[225855]: 2026-01-20 14:51:47.073 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768920707.0728478, 932fd680-9aa0-49b4-9915-fa55104aaad7 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:51:47 compute-1 nova_compute[225855]: 2026-01-20 14:51:47.074 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 932fd680-9aa0-49b4-9915-fa55104aaad7] VM Resumed (Lifecycle Event)
Jan 20 14:51:47 compute-1 nova_compute[225855]: 2026-01-20 14:51:47.077 225859 DEBUG nova.compute.manager [None req-43830f2f-2208-45ef-bcaf-0627d14e2cd6 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] [instance: 932fd680-9aa0-49b4-9915-fa55104aaad7] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 20 14:51:47 compute-1 nova_compute[225855]: 2026-01-20 14:51:47.077 225859 DEBUG nova.virt.libvirt.driver [None req-43830f2f-2208-45ef-bcaf-0627d14e2cd6 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] [instance: 932fd680-9aa0-49b4-9915-fa55104aaad7] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 20 14:51:47 compute-1 nova_compute[225855]: 2026-01-20 14:51:47.080 225859 INFO nova.virt.libvirt.driver [-] [instance: 932fd680-9aa0-49b4-9915-fa55104aaad7] Instance spawned successfully.
Jan 20 14:51:47 compute-1 nova_compute[225855]: 2026-01-20 14:51:47.080 225859 DEBUG nova.virt.libvirt.driver [None req-43830f2f-2208-45ef-bcaf-0627d14e2cd6 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] [instance: 932fd680-9aa0-49b4-9915-fa55104aaad7] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 20 14:51:47 compute-1 nova_compute[225855]: 2026-01-20 14:51:47.102 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 932fd680-9aa0-49b4-9915-fa55104aaad7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:51:47 compute-1 nova_compute[225855]: 2026-01-20 14:51:47.107 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 932fd680-9aa0-49b4-9915-fa55104aaad7] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:51:47 compute-1 nova_compute[225855]: 2026-01-20 14:51:47.111 225859 DEBUG nova.virt.libvirt.driver [None req-43830f2f-2208-45ef-bcaf-0627d14e2cd6 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] [instance: 932fd680-9aa0-49b4-9915-fa55104aaad7] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:51:47 compute-1 nova_compute[225855]: 2026-01-20 14:51:47.111 225859 DEBUG nova.virt.libvirt.driver [None req-43830f2f-2208-45ef-bcaf-0627d14e2cd6 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] [instance: 932fd680-9aa0-49b4-9915-fa55104aaad7] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:51:47 compute-1 nova_compute[225855]: 2026-01-20 14:51:47.112 225859 DEBUG nova.virt.libvirt.driver [None req-43830f2f-2208-45ef-bcaf-0627d14e2cd6 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] [instance: 932fd680-9aa0-49b4-9915-fa55104aaad7] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:51:47 compute-1 nova_compute[225855]: 2026-01-20 14:51:47.113 225859 DEBUG nova.virt.libvirt.driver [None req-43830f2f-2208-45ef-bcaf-0627d14e2cd6 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] [instance: 932fd680-9aa0-49b4-9915-fa55104aaad7] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:51:47 compute-1 nova_compute[225855]: 2026-01-20 14:51:47.113 225859 DEBUG nova.virt.libvirt.driver [None req-43830f2f-2208-45ef-bcaf-0627d14e2cd6 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] [instance: 932fd680-9aa0-49b4-9915-fa55104aaad7] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:51:47 compute-1 nova_compute[225855]: 2026-01-20 14:51:47.114 225859 DEBUG nova.virt.libvirt.driver [None req-43830f2f-2208-45ef-bcaf-0627d14e2cd6 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] [instance: 932fd680-9aa0-49b4-9915-fa55104aaad7] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:51:47 compute-1 nova_compute[225855]: 2026-01-20 14:51:47.166 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 932fd680-9aa0-49b4-9915-fa55104aaad7] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 14:51:47 compute-1 nova_compute[225855]: 2026-01-20 14:51:47.167 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768920707.073806, 932fd680-9aa0-49b4-9915-fa55104aaad7 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:51:47 compute-1 nova_compute[225855]: 2026-01-20 14:51:47.167 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 932fd680-9aa0-49b4-9915-fa55104aaad7] VM Started (Lifecycle Event)
Jan 20 14:51:47 compute-1 nova_compute[225855]: 2026-01-20 14:51:47.242 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 932fd680-9aa0-49b4-9915-fa55104aaad7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:51:47 compute-1 nova_compute[225855]: 2026-01-20 14:51:47.244 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 932fd680-9aa0-49b4-9915-fa55104aaad7] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:51:47 compute-1 nova_compute[225855]: 2026-01-20 14:51:47.258 225859 INFO nova.compute.manager [None req-43830f2f-2208-45ef-bcaf-0627d14e2cd6 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] [instance: 932fd680-9aa0-49b4-9915-fa55104aaad7] Took 4.15 seconds to spawn the instance on the hypervisor.
Jan 20 14:51:47 compute-1 nova_compute[225855]: 2026-01-20 14:51:47.259 225859 DEBUG nova.compute.manager [None req-43830f2f-2208-45ef-bcaf-0627d14e2cd6 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] [instance: 932fd680-9aa0-49b4-9915-fa55104aaad7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:51:47 compute-1 nova_compute[225855]: 2026-01-20 14:51:47.286 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 932fd680-9aa0-49b4-9915-fa55104aaad7] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 14:51:47 compute-1 nova_compute[225855]: 2026-01-20 14:51:47.336 225859 INFO nova.compute.manager [None req-43830f2f-2208-45ef-bcaf-0627d14e2cd6 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] [instance: 932fd680-9aa0-49b4-9915-fa55104aaad7] Took 5.14 seconds to build instance.
Jan 20 14:51:47 compute-1 nova_compute[225855]: 2026-01-20 14:51:47.353 225859 DEBUG oslo_concurrency.lockutils [None req-43830f2f-2208-45ef-bcaf-0627d14e2cd6 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] Lock "932fd680-9aa0-49b4-9915-fa55104aaad7" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.231s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:51:47 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e261 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:51:47 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:51:47 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:51:47 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:51:47.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:51:48 compute-1 nova_compute[225855]: 2026-01-20 14:51:48.550 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:51:49 compute-1 ceph-mon[81775]: pgmap v1932: 321 pgs: 321 active+clean; 356 MiB data, 1002 MiB used, 20 GiB / 21 GiB avail; 352 KiB/s rd, 4.7 MiB/s wr, 143 op/s
Jan 20 14:51:49 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:51:49 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:51:49 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:51:49.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:51:49 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:51:49 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:51:49 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:51:49.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:51:50 compute-1 podman[269444]: 2026-01-20 14:51:50.002633844 +0000 UTC m=+0.048703796 container health_status 533cd8e0c3b8b7f910f15cef49e7f879ab46c12ce646a35628c15f13725eb500 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251202, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Jan 20 14:51:50 compute-1 ceph-mon[81775]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #88. Immutable memtables: 0.
Jan 20 14:51:50 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:51:50.023809) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 20 14:51:50 compute-1 ceph-mon[81775]: rocksdb: [db/flush_job.cc:856] [default] [JOB 53] Flushing memtable with next log file: 88
Jan 20 14:51:50 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768920710023896, "job": 53, "event": "flush_started", "num_memtables": 1, "num_entries": 2497, "num_deletes": 258, "total_data_size": 5544581, "memory_usage": 5603312, "flush_reason": "Manual Compaction"}
Jan 20 14:51:50 compute-1 ceph-mon[81775]: rocksdb: [db/flush_job.cc:885] [default] [JOB 53] Level-0 flush table #89: started
Jan 20 14:51:50 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768920710061396, "cf_name": "default", "job": 53, "event": "table_file_creation", "file_number": 89, "file_size": 3643008, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 44416, "largest_seqno": 46908, "table_properties": {"data_size": 3632724, "index_size": 6522, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2693, "raw_key_size": 22189, "raw_average_key_size": 21, "raw_value_size": 3611937, "raw_average_value_size": 3430, "num_data_blocks": 281, "num_entries": 1053, "num_filter_entries": 1053, "num_deletions": 258, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768920523, "oldest_key_time": 1768920523, "file_creation_time": 1768920710, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1539d774-8a6f-4e48-b253-137c44586344", "db_session_id": "LFF7G2OZDOU7TKQ8MKAH", "orig_file_number": 89, "seqno_to_time_mapping": "N/A"}}
Jan 20 14:51:50 compute-1 ceph-mon[81775]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 53] Flush lasted 37636 microseconds, and 8233 cpu microseconds.
Jan 20 14:51:50 compute-1 ceph-mon[81775]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 14:51:50 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:51:50.061441) [db/flush_job.cc:967] [default] [JOB 53] Level-0 flush table #89: 3643008 bytes OK
Jan 20 14:51:50 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:51:50.061465) [db/memtable_list.cc:519] [default] Level-0 commit table #89 started
Jan 20 14:51:50 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:51:50.063487) [db/memtable_list.cc:722] [default] Level-0 commit table #89: memtable #1 done
Jan 20 14:51:50 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:51:50.063502) EVENT_LOG_v1 {"time_micros": 1768920710063498, "job": 53, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 20 14:51:50 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:51:50.063521) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 20 14:51:50 compute-1 ceph-mon[81775]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 53] Try to delete WAL files size 5533444, prev total WAL file size 5533444, number of live WAL files 2.
Jan 20 14:51:50 compute-1 ceph-mon[81775]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000085.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 14:51:50 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:51:50.064643) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033373635' seq:72057594037927935, type:22 .. '7061786F730034303137' seq:0, type:0; will stop at (end)
Jan 20 14:51:50 compute-1 ceph-mon[81775]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 54] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 20 14:51:50 compute-1 ceph-mon[81775]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 53 Base level 0, inputs: [89(3557KB)], [87(8750KB)]
Jan 20 14:51:50 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768920710064673, "job": 54, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [89], "files_L6": [87], "score": -1, "input_data_size": 12603927, "oldest_snapshot_seqno": -1}
Jan 20 14:51:50 compute-1 ceph-mon[81775]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 54] Generated table #90: 7212 keys, 10795138 bytes, temperature: kUnknown
Jan 20 14:51:50 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768920710185933, "cf_name": "default", "job": 54, "event": "table_file_creation", "file_number": 90, "file_size": 10795138, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10747224, "index_size": 28794, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 18053, "raw_key_size": 185910, "raw_average_key_size": 25, "raw_value_size": 10618578, "raw_average_value_size": 1472, "num_data_blocks": 1139, "num_entries": 7212, "num_filter_entries": 7212, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768917474, "oldest_key_time": 0, "file_creation_time": 1768920710, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1539d774-8a6f-4e48-b253-137c44586344", "db_session_id": "LFF7G2OZDOU7TKQ8MKAH", "orig_file_number": 90, "seqno_to_time_mapping": "N/A"}}
Jan 20 14:51:50 compute-1 ceph-mon[81775]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 14:51:50 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:51:50.186189) [db/compaction/compaction_job.cc:1663] [default] [JOB 54] Compacted 1@0 + 1@6 files to L6 => 10795138 bytes
Jan 20 14:51:50 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:51:50.187931) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 103.9 rd, 89.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.5, 8.5 +0.0 blob) out(10.3 +0.0 blob), read-write-amplify(6.4) write-amplify(3.0) OK, records in: 7743, records dropped: 531 output_compression: NoCompression
Jan 20 14:51:50 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:51:50.187977) EVENT_LOG_v1 {"time_micros": 1768920710187960, "job": 54, "event": "compaction_finished", "compaction_time_micros": 121357, "compaction_time_cpu_micros": 30256, "output_level": 6, "num_output_files": 1, "total_output_size": 10795138, "num_input_records": 7743, "num_output_records": 7212, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 20 14:51:50 compute-1 ceph-mon[81775]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000089.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 14:51:50 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768920710188791, "job": 54, "event": "table_file_deletion", "file_number": 89}
Jan 20 14:51:50 compute-1 ceph-mon[81775]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000087.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 14:51:50 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768920710190237, "job": 54, "event": "table_file_deletion", "file_number": 87}
Jan 20 14:51:50 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:51:50.064578) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:51:50 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:51:50.190300) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:51:50 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:51:50.190305) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:51:50 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:51:50.190307) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:51:50 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:51:50.190308) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:51:50 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:51:50.190309) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:51:50 compute-1 sudo[269462]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:51:50 compute-1 sudo[269462]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:51:50 compute-1 sudo[269462]: pam_unix(sudo:session): session closed for user root
Jan 20 14:51:50 compute-1 sudo[269487]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:51:50 compute-1 sudo[269487]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:51:50 compute-1 sudo[269487]: pam_unix(sudo:session): session closed for user root
Jan 20 14:51:50 compute-1 nova_compute[225855]: 2026-01-20 14:51:50.931 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:51:51 compute-1 ceph-mon[81775]: pgmap v1933: 321 pgs: 321 active+clean; 375 MiB data, 1009 MiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 4.6 MiB/s wr, 172 op/s
Jan 20 14:51:51 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:51:51 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:51:51 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:51:51.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:51:51 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:51:51 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:51:51 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:51:51.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:51:52 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/2764376824' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:51:52 compute-1 sudo[269513]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:51:52 compute-1 sudo[269513]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:51:52 compute-1 sudo[269513]: pam_unix(sudo:session): session closed for user root
Jan 20 14:51:52 compute-1 sudo[269538]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:51:52 compute-1 sudo[269538]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:51:52 compute-1 sudo[269538]: pam_unix(sudo:session): session closed for user root
Jan 20 14:51:52 compute-1 sudo[269563]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:51:52 compute-1 sudo[269563]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:51:52 compute-1 sudo[269563]: pam_unix(sudo:session): session closed for user root
Jan 20 14:51:52 compute-1 sudo[269588]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Jan 20 14:51:52 compute-1 sudo[269588]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:51:52 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e261 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:51:52 compute-1 podman[269685]: 2026-01-20 14:51:52.828080819 +0000 UTC m=+0.063682979 container exec 718ebba7a543e42aad7051248d2c7dc014068c35c89c5b87f27b82d4de39c009 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-crash-compute-1, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Jan 20 14:51:52 compute-1 podman[269685]: 2026-01-20 14:51:52.923233155 +0000 UTC m=+0.158835295 container exec_died 718ebba7a543e42aad7051248d2c7dc014068c35c89c5b87f27b82d4de39c009 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-crash-compute-1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:51:53 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:51:53 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:51:53 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:51:53.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:51:53 compute-1 ceph-mon[81775]: pgmap v1934: 321 pgs: 321 active+clean; 387 MiB data, 1016 MiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 4.5 MiB/s wr, 234 op/s
Jan 20 14:51:53 compute-1 podman[269842]: 2026-01-20 14:51:53.472564704 +0000 UTC m=+0.059091709 container exec 25e2c3387bc15944c21038272559da5fbf75910d8dd4add0faa995fb4e0f7788 (image=quay.io/ceph/haproxy:2.3, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-haproxy-rgw-default-compute-1-uyeocq)
Jan 20 14:51:53 compute-1 podman[269842]: 2026-01-20 14:51:53.483151283 +0000 UTC m=+0.069678288 container exec_died 25e2c3387bc15944c21038272559da5fbf75910d8dd4add0faa995fb4e0f7788 (image=quay.io/ceph/haproxy:2.3, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-haproxy-rgw-default-compute-1-uyeocq)
Jan 20 14:51:53 compute-1 nova_compute[225855]: 2026-01-20 14:51:53.552 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:51:53 compute-1 podman[269909]: 2026-01-20 14:51:53.67468956 +0000 UTC m=+0.048770158 container exec e27b69e4cc956b06482c80498336e112a56122514cd7345d3d4b39a4d206f962 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-keepalived-rgw-default-compute-1-cevitz, io.openshift.expose-services=, distribution-scope=public, io.openshift.tags=Ceph keepalived, vendor=Red Hat, Inc., io.k8s.display-name=Keepalived on RHEL 9, version=2.2.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, build-date=2023-02-22T09:23:20, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.buildah.version=1.28.2, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64, summary=Provides keepalived on RHEL 9 for Ceph., com.redhat.component=keepalived-container, name=keepalived, description=keepalived for Ceph, release=1793, vcs-type=git)
Jan 20 14:51:53 compute-1 podman[269909]: 2026-01-20 14:51:53.689165269 +0000 UTC m=+0.063245867 container exec_died e27b69e4cc956b06482c80498336e112a56122514cd7345d3d4b39a4d206f962 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-keepalived-rgw-default-compute-1-cevitz, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, summary=Provides keepalived on RHEL 9 for Ceph., name=keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, description=keepalived for Ceph, io.openshift.expose-services=, io.openshift.tags=Ceph keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-type=git, version=2.2.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1793, com.redhat.component=keepalived-container, io.k8s.display-name=Keepalived on RHEL 9, build-date=2023-02-22T09:23:20, distribution-scope=public, io.buildah.version=1.28.2)
Jan 20 14:51:53 compute-1 sudo[269588]: pam_unix(sudo:session): session closed for user root
Jan 20 14:51:53 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:51:53 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:51:53 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:51:53.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:51:53 compute-1 sudo[269942]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:51:53 compute-1 sudo[269942]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:51:53 compute-1 sudo[269942]: pam_unix(sudo:session): session closed for user root
Jan 20 14:51:53 compute-1 sudo[269967]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:51:53 compute-1 sudo[269967]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:51:53 compute-1 sudo[269967]: pam_unix(sudo:session): session closed for user root
Jan 20 14:51:53 compute-1 sudo[269992]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:51:53 compute-1 sudo[269992]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:51:53 compute-1 sudo[269992]: pam_unix(sudo:session): session closed for user root
Jan 20 14:51:53 compute-1 sudo[270017]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 20 14:51:53 compute-1 sudo[270017]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:51:54 compute-1 nova_compute[225855]: 2026-01-20 14:51:54.119 225859 DEBUG oslo_concurrency.lockutils [None req-dfa82bcd-d307-4023-be3c-6e308041cbda d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Acquiring lock "23ea4537-f03f-46de-881f-b979e232a3b9" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:51:54 compute-1 nova_compute[225855]: 2026-01-20 14:51:54.120 225859 DEBUG oslo_concurrency.lockutils [None req-dfa82bcd-d307-4023-be3c-6e308041cbda d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Lock "23ea4537-f03f-46de-881f-b979e232a3b9" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:51:54 compute-1 nova_compute[225855]: 2026-01-20 14:51:54.143 225859 DEBUG nova.compute.manager [None req-dfa82bcd-d307-4023-be3c-6e308041cbda d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] [instance: 23ea4537-f03f-46de-881f-b979e232a3b9] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 20 14:51:54 compute-1 nova_compute[225855]: 2026-01-20 14:51:54.221 225859 DEBUG oslo_concurrency.lockutils [None req-dfa82bcd-d307-4023-be3c-6e308041cbda d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:51:54 compute-1 nova_compute[225855]: 2026-01-20 14:51:54.222 225859 DEBUG oslo_concurrency.lockutils [None req-dfa82bcd-d307-4023-be3c-6e308041cbda d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:51:54 compute-1 nova_compute[225855]: 2026-01-20 14:51:54.227 225859 DEBUG nova.virt.hardware [None req-dfa82bcd-d307-4023-be3c-6e308041cbda d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 20 14:51:54 compute-1 nova_compute[225855]: 2026-01-20 14:51:54.227 225859 INFO nova.compute.claims [None req-dfa82bcd-d307-4023-be3c-6e308041cbda d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] [instance: 23ea4537-f03f-46de-881f-b979e232a3b9] Claim successful on node compute-1.ctlplane.example.com
Jan 20 14:51:54 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:51:54 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:51:54 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:51:54 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:51:54 compute-1 ceph-mon[81775]: pgmap v1935: 321 pgs: 321 active+clean; 387 MiB data, 1020 MiB used, 20 GiB / 21 GiB avail; 5.0 MiB/s rd, 3.6 MiB/s wr, 299 op/s
Jan 20 14:51:54 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:51:54 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:51:54 compute-1 nova_compute[225855]: 2026-01-20 14:51:54.382 225859 DEBUG oslo_concurrency.processutils [None req-dfa82bcd-d307-4023-be3c-6e308041cbda d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:51:54 compute-1 sudo[270017]: pam_unix(sudo:session): session closed for user root
Jan 20 14:51:54 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:51:54 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/995231318' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:51:54 compute-1 nova_compute[225855]: 2026-01-20 14:51:54.844 225859 DEBUG oslo_concurrency.processutils [None req-dfa82bcd-d307-4023-be3c-6e308041cbda d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:51:54 compute-1 nova_compute[225855]: 2026-01-20 14:51:54.853 225859 DEBUG nova.compute.provider_tree [None req-dfa82bcd-d307-4023-be3c-6e308041cbda d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:51:54 compute-1 nova_compute[225855]: 2026-01-20 14:51:54.872 225859 DEBUG nova.scheduler.client.report [None req-dfa82bcd-d307-4023-be3c-6e308041cbda d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:51:54 compute-1 nova_compute[225855]: 2026-01-20 14:51:54.902 225859 DEBUG oslo_concurrency.lockutils [None req-dfa82bcd-d307-4023-be3c-6e308041cbda d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.680s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:51:54 compute-1 nova_compute[225855]: 2026-01-20 14:51:54.903 225859 DEBUG nova.compute.manager [None req-dfa82bcd-d307-4023-be3c-6e308041cbda d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] [instance: 23ea4537-f03f-46de-881f-b979e232a3b9] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 20 14:51:54 compute-1 nova_compute[225855]: 2026-01-20 14:51:54.951 225859 DEBUG nova.compute.manager [None req-dfa82bcd-d307-4023-be3c-6e308041cbda d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] [instance: 23ea4537-f03f-46de-881f-b979e232a3b9] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 20 14:51:54 compute-1 nova_compute[225855]: 2026-01-20 14:51:54.952 225859 DEBUG nova.network.neutron [None req-dfa82bcd-d307-4023-be3c-6e308041cbda d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] [instance: 23ea4537-f03f-46de-881f-b979e232a3b9] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 20 14:51:54 compute-1 nova_compute[225855]: 2026-01-20 14:51:54.970 225859 INFO nova.virt.libvirt.driver [None req-dfa82bcd-d307-4023-be3c-6e308041cbda d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] [instance: 23ea4537-f03f-46de-881f-b979e232a3b9] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 20 14:51:54 compute-1 nova_compute[225855]: 2026-01-20 14:51:54.986 225859 DEBUG nova.compute.manager [None req-dfa82bcd-d307-4023-be3c-6e308041cbda d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] [instance: 23ea4537-f03f-46de-881f-b979e232a3b9] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 20 14:51:55 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:51:55 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:51:55 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:51:55.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:51:55 compute-1 nova_compute[225855]: 2026-01-20 14:51:55.079 225859 DEBUG nova.compute.manager [None req-dfa82bcd-d307-4023-be3c-6e308041cbda d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] [instance: 23ea4537-f03f-46de-881f-b979e232a3b9] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 20 14:51:55 compute-1 nova_compute[225855]: 2026-01-20 14:51:55.081 225859 DEBUG nova.virt.libvirt.driver [None req-dfa82bcd-d307-4023-be3c-6e308041cbda d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] [instance: 23ea4537-f03f-46de-881f-b979e232a3b9] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 20 14:51:55 compute-1 nova_compute[225855]: 2026-01-20 14:51:55.082 225859 INFO nova.virt.libvirt.driver [None req-dfa82bcd-d307-4023-be3c-6e308041cbda d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] [instance: 23ea4537-f03f-46de-881f-b979e232a3b9] Creating image(s)
Jan 20 14:51:55 compute-1 nova_compute[225855]: 2026-01-20 14:51:55.121 225859 DEBUG nova.storage.rbd_utils [None req-dfa82bcd-d307-4023-be3c-6e308041cbda d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] rbd image 23ea4537-f03f-46de-881f-b979e232a3b9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:51:55 compute-1 nova_compute[225855]: 2026-01-20 14:51:55.160 225859 DEBUG nova.storage.rbd_utils [None req-dfa82bcd-d307-4023-be3c-6e308041cbda d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] rbd image 23ea4537-f03f-46de-881f-b979e232a3b9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:51:55 compute-1 nova_compute[225855]: 2026-01-20 14:51:55.190 225859 DEBUG nova.storage.rbd_utils [None req-dfa82bcd-d307-4023-be3c-6e308041cbda d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] rbd image 23ea4537-f03f-46de-881f-b979e232a3b9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:51:55 compute-1 nova_compute[225855]: 2026-01-20 14:51:55.194 225859 DEBUG oslo_concurrency.processutils [None req-dfa82bcd-d307-4023-be3c-6e308041cbda d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:51:55 compute-1 nova_compute[225855]: 2026-01-20 14:51:55.221 225859 DEBUG nova.policy [None req-dfa82bcd-d307-4023-be3c-6e308041cbda d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'd85d286ce6224326a0f4a15a06afbfea', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '0a29915e0dd2403fbd7b7e847696b00a', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 20 14:51:55 compute-1 nova_compute[225855]: 2026-01-20 14:51:55.257 225859 DEBUG oslo_concurrency.processutils [None req-dfa82bcd-d307-4023-be3c-6e308041cbda d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:51:55 compute-1 nova_compute[225855]: 2026-01-20 14:51:55.258 225859 DEBUG oslo_concurrency.lockutils [None req-dfa82bcd-d307-4023-be3c-6e308041cbda d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Acquiring lock "82d5c1918fd7c974214c7a48c1793a7a82560462" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:51:55 compute-1 nova_compute[225855]: 2026-01-20 14:51:55.259 225859 DEBUG oslo_concurrency.lockutils [None req-dfa82bcd-d307-4023-be3c-6e308041cbda d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:51:55 compute-1 nova_compute[225855]: 2026-01-20 14:51:55.259 225859 DEBUG oslo_concurrency.lockutils [None req-dfa82bcd-d307-4023-be3c-6e308041cbda d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:51:55 compute-1 nova_compute[225855]: 2026-01-20 14:51:55.289 225859 DEBUG nova.storage.rbd_utils [None req-dfa82bcd-d307-4023-be3c-6e308041cbda d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] rbd image 23ea4537-f03f-46de-881f-b979e232a3b9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:51:55 compute-1 nova_compute[225855]: 2026-01-20 14:51:55.292 225859 DEBUG oslo_concurrency.processutils [None req-dfa82bcd-d307-4023-be3c-6e308041cbda d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 23ea4537-f03f-46de-881f-b979e232a3b9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:51:55 compute-1 nova_compute[225855]: 2026-01-20 14:51:55.569 225859 DEBUG oslo_concurrency.processutils [None req-dfa82bcd-d307-4023-be3c-6e308041cbda d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 23ea4537-f03f-46de-881f-b979e232a3b9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.277s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:51:55 compute-1 nova_compute[225855]: 2026-01-20 14:51:55.641 225859 DEBUG nova.storage.rbd_utils [None req-dfa82bcd-d307-4023-be3c-6e308041cbda d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] resizing rbd image 23ea4537-f03f-46de-881f-b979e232a3b9_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 20 14:51:55 compute-1 nova_compute[225855]: 2026-01-20 14:51:55.748 225859 DEBUG nova.objects.instance [None req-dfa82bcd-d307-4023-be3c-6e308041cbda d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Lazy-loading 'migration_context' on Instance uuid 23ea4537-f03f-46de-881f-b979e232a3b9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:51:55 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:51:55 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:51:55 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:51:55.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:51:55 compute-1 nova_compute[225855]: 2026-01-20 14:51:55.767 225859 DEBUG nova.virt.libvirt.driver [None req-dfa82bcd-d307-4023-be3c-6e308041cbda d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] [instance: 23ea4537-f03f-46de-881f-b979e232a3b9] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 20 14:51:55 compute-1 nova_compute[225855]: 2026-01-20 14:51:55.768 225859 DEBUG nova.virt.libvirt.driver [None req-dfa82bcd-d307-4023-be3c-6e308041cbda d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] [instance: 23ea4537-f03f-46de-881f-b979e232a3b9] Ensure instance console log exists: /var/lib/nova/instances/23ea4537-f03f-46de-881f-b979e232a3b9/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 20 14:51:55 compute-1 nova_compute[225855]: 2026-01-20 14:51:55.769 225859 DEBUG oslo_concurrency.lockutils [None req-dfa82bcd-d307-4023-be3c-6e308041cbda d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:51:55 compute-1 nova_compute[225855]: 2026-01-20 14:51:55.769 225859 DEBUG oslo_concurrency.lockutils [None req-dfa82bcd-d307-4023-be3c-6e308041cbda d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:51:55 compute-1 nova_compute[225855]: 2026-01-20 14:51:55.770 225859 DEBUG oslo_concurrency.lockutils [None req-dfa82bcd-d307-4023-be3c-6e308041cbda d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:51:55 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/995231318' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:51:55 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:51:55 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 14:51:55 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:51:55 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 14:51:55 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 14:51:55 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:51:55 compute-1 nova_compute[225855]: 2026-01-20 14:51:55.932 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:51:56 compute-1 nova_compute[225855]: 2026-01-20 14:51:56.061 225859 DEBUG nova.network.neutron [None req-dfa82bcd-d307-4023-be3c-6e308041cbda d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] [instance: 23ea4537-f03f-46de-881f-b979e232a3b9] Successfully created port: 234381ea-07b1-41fe-b3c1-be97ce6a3b64 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 20 14:51:56 compute-1 ceph-mon[81775]: pgmap v1936: 321 pgs: 321 active+clean; 421 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 4.9 MiB/s wr, 329 op/s
Jan 20 14:51:56 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/1396617212' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:51:56 compute-1 nova_compute[225855]: 2026-01-20 14:51:56.948 225859 DEBUG nova.network.neutron [None req-dfa82bcd-d307-4023-be3c-6e308041cbda d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] [instance: 23ea4537-f03f-46de-881f-b979e232a3b9] Successfully updated port: 234381ea-07b1-41fe-b3c1-be97ce6a3b64 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 20 14:51:56 compute-1 nova_compute[225855]: 2026-01-20 14:51:56.961 225859 DEBUG oslo_concurrency.lockutils [None req-dfa82bcd-d307-4023-be3c-6e308041cbda d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Acquiring lock "refresh_cache-23ea4537-f03f-46de-881f-b979e232a3b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:51:56 compute-1 nova_compute[225855]: 2026-01-20 14:51:56.961 225859 DEBUG oslo_concurrency.lockutils [None req-dfa82bcd-d307-4023-be3c-6e308041cbda d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Acquired lock "refresh_cache-23ea4537-f03f-46de-881f-b979e232a3b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:51:56 compute-1 nova_compute[225855]: 2026-01-20 14:51:56.962 225859 DEBUG nova.network.neutron [None req-dfa82bcd-d307-4023-be3c-6e308041cbda d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] [instance: 23ea4537-f03f-46de-881f-b979e232a3b9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 14:51:57 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:51:57 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:51:57 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:51:57.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:51:57 compute-1 nova_compute[225855]: 2026-01-20 14:51:57.072 225859 DEBUG nova.compute.manager [req-7d5244c6-5f1c-44a0-8712-98cf17b16a1e req-411d5867-42e1-44dc-a564-3d5df9652432 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 23ea4537-f03f-46de-881f-b979e232a3b9] Received event network-changed-234381ea-07b1-41fe-b3c1-be97ce6a3b64 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:51:57 compute-1 nova_compute[225855]: 2026-01-20 14:51:57.072 225859 DEBUG nova.compute.manager [req-7d5244c6-5f1c-44a0-8712-98cf17b16a1e req-411d5867-42e1-44dc-a564-3d5df9652432 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 23ea4537-f03f-46de-881f-b979e232a3b9] Refreshing instance network info cache due to event network-changed-234381ea-07b1-41fe-b3c1-be97ce6a3b64. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 14:51:57 compute-1 nova_compute[225855]: 2026-01-20 14:51:57.073 225859 DEBUG oslo_concurrency.lockutils [req-7d5244c6-5f1c-44a0-8712-98cf17b16a1e req-411d5867-42e1-44dc-a564-3d5df9652432 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-23ea4537-f03f-46de-881f-b979e232a3b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:51:57 compute-1 nova_compute[225855]: 2026-01-20 14:51:57.164 225859 DEBUG nova.network.neutron [None req-dfa82bcd-d307-4023-be3c-6e308041cbda d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] [instance: 23ea4537-f03f-46de-881f-b979e232a3b9] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 20 14:51:57 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e261 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:51:57 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:51:57 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:51:57 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:51:57.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:51:57 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/3771881679' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:51:58 compute-1 nova_compute[225855]: 2026-01-20 14:51:58.301 225859 DEBUG nova.network.neutron [None req-dfa82bcd-d307-4023-be3c-6e308041cbda d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] [instance: 23ea4537-f03f-46de-881f-b979e232a3b9] Updating instance_info_cache with network_info: [{"id": "234381ea-07b1-41fe-b3c1-be97ce6a3b64", "address": "fa:16:3e:b5:55:3c", "network": {"id": "79184781-1f23-4584-87de-08e262242488", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-165460946-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a29915e0dd2403fbd7b7e847696b00a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap234381ea-07", "ovs_interfaceid": "234381ea-07b1-41fe-b3c1-be97ce6a3b64", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:51:58 compute-1 nova_compute[225855]: 2026-01-20 14:51:58.329 225859 DEBUG oslo_concurrency.lockutils [None req-dfa82bcd-d307-4023-be3c-6e308041cbda d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Releasing lock "refresh_cache-23ea4537-f03f-46de-881f-b979e232a3b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:51:58 compute-1 nova_compute[225855]: 2026-01-20 14:51:58.329 225859 DEBUG nova.compute.manager [None req-dfa82bcd-d307-4023-be3c-6e308041cbda d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] [instance: 23ea4537-f03f-46de-881f-b979e232a3b9] Instance network_info: |[{"id": "234381ea-07b1-41fe-b3c1-be97ce6a3b64", "address": "fa:16:3e:b5:55:3c", "network": {"id": "79184781-1f23-4584-87de-08e262242488", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-165460946-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a29915e0dd2403fbd7b7e847696b00a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap234381ea-07", "ovs_interfaceid": "234381ea-07b1-41fe-b3c1-be97ce6a3b64", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 20 14:51:58 compute-1 nova_compute[225855]: 2026-01-20 14:51:58.329 225859 DEBUG oslo_concurrency.lockutils [req-7d5244c6-5f1c-44a0-8712-98cf17b16a1e req-411d5867-42e1-44dc-a564-3d5df9652432 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-23ea4537-f03f-46de-881f-b979e232a3b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:51:58 compute-1 nova_compute[225855]: 2026-01-20 14:51:58.330 225859 DEBUG nova.network.neutron [req-7d5244c6-5f1c-44a0-8712-98cf17b16a1e req-411d5867-42e1-44dc-a564-3d5df9652432 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 23ea4537-f03f-46de-881f-b979e232a3b9] Refreshing network info cache for port 234381ea-07b1-41fe-b3c1-be97ce6a3b64 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 14:51:58 compute-1 nova_compute[225855]: 2026-01-20 14:51:58.332 225859 DEBUG nova.virt.libvirt.driver [None req-dfa82bcd-d307-4023-be3c-6e308041cbda d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] [instance: 23ea4537-f03f-46de-881f-b979e232a3b9] Start _get_guest_xml network_info=[{"id": "234381ea-07b1-41fe-b3c1-be97ce6a3b64", "address": "fa:16:3e:b5:55:3c", "network": {"id": "79184781-1f23-4584-87de-08e262242488", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-165460946-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a29915e0dd2403fbd7b7e847696b00a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap234381ea-07", "ovs_interfaceid": "234381ea-07b1-41fe-b3c1-be97ce6a3b64", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'device_type': 'disk', 'encryption_options': None, 'size': 0, 'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'guest_format': None, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'image_id': 'a32b3e07-16d8-46fd-9a7b-c242c432fcf9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 20 14:51:58 compute-1 nova_compute[225855]: 2026-01-20 14:51:58.336 225859 WARNING nova.virt.libvirt.driver [None req-dfa82bcd-d307-4023-be3c-6e308041cbda d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 14:51:58 compute-1 nova_compute[225855]: 2026-01-20 14:51:58.341 225859 DEBUG nova.virt.libvirt.host [None req-dfa82bcd-d307-4023-be3c-6e308041cbda d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 20 14:51:58 compute-1 nova_compute[225855]: 2026-01-20 14:51:58.343 225859 DEBUG nova.virt.libvirt.host [None req-dfa82bcd-d307-4023-be3c-6e308041cbda d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 20 14:51:58 compute-1 nova_compute[225855]: 2026-01-20 14:51:58.351 225859 DEBUG nova.virt.libvirt.host [None req-dfa82bcd-d307-4023-be3c-6e308041cbda d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 20 14:51:58 compute-1 nova_compute[225855]: 2026-01-20 14:51:58.352 225859 DEBUG nova.virt.libvirt.host [None req-dfa82bcd-d307-4023-be3c-6e308041cbda d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 20 14:51:58 compute-1 nova_compute[225855]: 2026-01-20 14:51:58.354 225859 DEBUG nova.virt.libvirt.driver [None req-dfa82bcd-d307-4023-be3c-6e308041cbda d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 20 14:51:58 compute-1 nova_compute[225855]: 2026-01-20 14:51:58.355 225859 DEBUG nova.virt.hardware [None req-dfa82bcd-d307-4023-be3c-6e308041cbda d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-20T14:21:55Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='522deaab-a741-4dbb-932d-d8b13a211c33',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 20 14:51:58 compute-1 nova_compute[225855]: 2026-01-20 14:51:58.355 225859 DEBUG nova.virt.hardware [None req-dfa82bcd-d307-4023-be3c-6e308041cbda d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 20 14:51:58 compute-1 nova_compute[225855]: 2026-01-20 14:51:58.356 225859 DEBUG nova.virt.hardware [None req-dfa82bcd-d307-4023-be3c-6e308041cbda d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 20 14:51:58 compute-1 nova_compute[225855]: 2026-01-20 14:51:58.356 225859 DEBUG nova.virt.hardware [None req-dfa82bcd-d307-4023-be3c-6e308041cbda d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 20 14:51:58 compute-1 nova_compute[225855]: 2026-01-20 14:51:58.357 225859 DEBUG nova.virt.hardware [None req-dfa82bcd-d307-4023-be3c-6e308041cbda d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 20 14:51:58 compute-1 nova_compute[225855]: 2026-01-20 14:51:58.357 225859 DEBUG nova.virt.hardware [None req-dfa82bcd-d307-4023-be3c-6e308041cbda d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 20 14:51:58 compute-1 nova_compute[225855]: 2026-01-20 14:51:58.358 225859 DEBUG nova.virt.hardware [None req-dfa82bcd-d307-4023-be3c-6e308041cbda d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 20 14:51:58 compute-1 nova_compute[225855]: 2026-01-20 14:51:58.358 225859 DEBUG nova.virt.hardware [None req-dfa82bcd-d307-4023-be3c-6e308041cbda d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 20 14:51:58 compute-1 nova_compute[225855]: 2026-01-20 14:51:58.358 225859 DEBUG nova.virt.hardware [None req-dfa82bcd-d307-4023-be3c-6e308041cbda d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 20 14:51:58 compute-1 nova_compute[225855]: 2026-01-20 14:51:58.359 225859 DEBUG nova.virt.hardware [None req-dfa82bcd-d307-4023-be3c-6e308041cbda d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 20 14:51:58 compute-1 nova_compute[225855]: 2026-01-20 14:51:58.359 225859 DEBUG nova.virt.hardware [None req-dfa82bcd-d307-4023-be3c-6e308041cbda d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 20 14:51:58 compute-1 nova_compute[225855]: 2026-01-20 14:51:58.364 225859 DEBUG oslo_concurrency.processutils [None req-dfa82bcd-d307-4023-be3c-6e308041cbda d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:51:58 compute-1 nova_compute[225855]: 2026-01-20 14:51:58.556 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:51:58 compute-1 ceph-mon[81775]: pgmap v1937: 321 pgs: 321 active+clean; 449 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 4.2 MiB/s wr, 333 op/s
Jan 20 14:51:58 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/3260160675' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:51:58 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:51:58 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1996025638' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:51:58 compute-1 nova_compute[225855]: 2026-01-20 14:51:58.848 225859 DEBUG oslo_concurrency.processutils [None req-dfa82bcd-d307-4023-be3c-6e308041cbda d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:51:58 compute-1 nova_compute[225855]: 2026-01-20 14:51:58.874 225859 DEBUG nova.storage.rbd_utils [None req-dfa82bcd-d307-4023-be3c-6e308041cbda d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] rbd image 23ea4537-f03f-46de-881f-b979e232a3b9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:51:58 compute-1 nova_compute[225855]: 2026-01-20 14:51:58.880 225859 DEBUG oslo_concurrency.processutils [None req-dfa82bcd-d307-4023-be3c-6e308041cbda d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:51:59 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:51:59 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:51:59 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:51:59.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:51:59 compute-1 nova_compute[225855]: 2026-01-20 14:51:59.369 225859 DEBUG oslo_concurrency.processutils [None req-dfa82bcd-d307-4023-be3c-6e308041cbda d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:51:59 compute-1 nova_compute[225855]: 2026-01-20 14:51:59.371 225859 DEBUG nova.virt.libvirt.vif [None req-dfa82bcd-d307-4023-be3c-6e308041cbda d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T14:51:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerStableDeviceRescueTest-server-828759404',display_name='tempest-ServerStableDeviceRescueTest-server-828759404',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverstabledevicerescuetest-server-828759404',id=117,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0a29915e0dd2403fbd7b7e847696b00a',ramdisk_id='',reservation_id='r-ztpzn050',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerStableDeviceRescueTest-129078052',owner_user_name='tempest-ServerStableDeviceRescueTest-129078052-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T14:51:55Z,user_data=None,user_id='d85d286ce6224326a0f4a15a06afbfea',uuid=23ea4537-f03f-46de-881f-b979e232a3b9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "234381ea-07b1-41fe-b3c1-be97ce6a3b64", "address": "fa:16:3e:b5:55:3c", "network": {"id": "79184781-1f23-4584-87de-08e262242488", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-165460946-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a29915e0dd2403fbd7b7e847696b00a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap234381ea-07", "ovs_interfaceid": "234381ea-07b1-41fe-b3c1-be97ce6a3b64", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 20 14:51:59 compute-1 nova_compute[225855]: 2026-01-20 14:51:59.371 225859 DEBUG nova.network.os_vif_util [None req-dfa82bcd-d307-4023-be3c-6e308041cbda d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Converting VIF {"id": "234381ea-07b1-41fe-b3c1-be97ce6a3b64", "address": "fa:16:3e:b5:55:3c", "network": {"id": "79184781-1f23-4584-87de-08e262242488", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-165460946-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a29915e0dd2403fbd7b7e847696b00a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap234381ea-07", "ovs_interfaceid": "234381ea-07b1-41fe-b3c1-be97ce6a3b64", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:51:59 compute-1 nova_compute[225855]: 2026-01-20 14:51:59.372 225859 DEBUG nova.network.os_vif_util [None req-dfa82bcd-d307-4023-be3c-6e308041cbda d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b5:55:3c,bridge_name='br-int',has_traffic_filtering=True,id=234381ea-07b1-41fe-b3c1-be97ce6a3b64,network=Network(79184781-1f23-4584-87de-08e262242488),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap234381ea-07') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:51:59 compute-1 nova_compute[225855]: 2026-01-20 14:51:59.373 225859 DEBUG nova.objects.instance [None req-dfa82bcd-d307-4023-be3c-6e308041cbda d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Lazy-loading 'pci_devices' on Instance uuid 23ea4537-f03f-46de-881f-b979e232a3b9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:51:59 compute-1 nova_compute[225855]: 2026-01-20 14:51:59.391 225859 DEBUG nova.virt.libvirt.driver [None req-dfa82bcd-d307-4023-be3c-6e308041cbda d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] [instance: 23ea4537-f03f-46de-881f-b979e232a3b9] End _get_guest_xml xml=<domain type="kvm">
Jan 20 14:51:59 compute-1 nova_compute[225855]:   <uuid>23ea4537-f03f-46de-881f-b979e232a3b9</uuid>
Jan 20 14:51:59 compute-1 nova_compute[225855]:   <name>instance-00000075</name>
Jan 20 14:51:59 compute-1 nova_compute[225855]:   <memory>131072</memory>
Jan 20 14:51:59 compute-1 nova_compute[225855]:   <vcpu>1</vcpu>
Jan 20 14:51:59 compute-1 nova_compute[225855]:   <metadata>
Jan 20 14:51:59 compute-1 nova_compute[225855]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 14:51:59 compute-1 nova_compute[225855]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 14:51:59 compute-1 nova_compute[225855]:       <nova:name>tempest-ServerStableDeviceRescueTest-server-828759404</nova:name>
Jan 20 14:51:59 compute-1 nova_compute[225855]:       <nova:creationTime>2026-01-20 14:51:58</nova:creationTime>
Jan 20 14:51:59 compute-1 nova_compute[225855]:       <nova:flavor name="m1.nano">
Jan 20 14:51:59 compute-1 nova_compute[225855]:         <nova:memory>128</nova:memory>
Jan 20 14:51:59 compute-1 nova_compute[225855]:         <nova:disk>1</nova:disk>
Jan 20 14:51:59 compute-1 nova_compute[225855]:         <nova:swap>0</nova:swap>
Jan 20 14:51:59 compute-1 nova_compute[225855]:         <nova:ephemeral>0</nova:ephemeral>
Jan 20 14:51:59 compute-1 nova_compute[225855]:         <nova:vcpus>1</nova:vcpus>
Jan 20 14:51:59 compute-1 nova_compute[225855]:       </nova:flavor>
Jan 20 14:51:59 compute-1 nova_compute[225855]:       <nova:owner>
Jan 20 14:51:59 compute-1 nova_compute[225855]:         <nova:user uuid="d85d286ce6224326a0f4a15a06afbfea">tempest-ServerStableDeviceRescueTest-129078052-project-member</nova:user>
Jan 20 14:51:59 compute-1 nova_compute[225855]:         <nova:project uuid="0a29915e0dd2403fbd7b7e847696b00a">tempest-ServerStableDeviceRescueTest-129078052</nova:project>
Jan 20 14:51:59 compute-1 nova_compute[225855]:       </nova:owner>
Jan 20 14:51:59 compute-1 nova_compute[225855]:       <nova:root type="image" uuid="a32b3e07-16d8-46fd-9a7b-c242c432fcf9"/>
Jan 20 14:51:59 compute-1 nova_compute[225855]:       <nova:ports>
Jan 20 14:51:59 compute-1 nova_compute[225855]:         <nova:port uuid="234381ea-07b1-41fe-b3c1-be97ce6a3b64">
Jan 20 14:51:59 compute-1 nova_compute[225855]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Jan 20 14:51:59 compute-1 nova_compute[225855]:         </nova:port>
Jan 20 14:51:59 compute-1 nova_compute[225855]:       </nova:ports>
Jan 20 14:51:59 compute-1 nova_compute[225855]:     </nova:instance>
Jan 20 14:51:59 compute-1 nova_compute[225855]:   </metadata>
Jan 20 14:51:59 compute-1 nova_compute[225855]:   <sysinfo type="smbios">
Jan 20 14:51:59 compute-1 nova_compute[225855]:     <system>
Jan 20 14:51:59 compute-1 nova_compute[225855]:       <entry name="manufacturer">RDO</entry>
Jan 20 14:51:59 compute-1 nova_compute[225855]:       <entry name="product">OpenStack Compute</entry>
Jan 20 14:51:59 compute-1 nova_compute[225855]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 14:51:59 compute-1 nova_compute[225855]:       <entry name="serial">23ea4537-f03f-46de-881f-b979e232a3b9</entry>
Jan 20 14:51:59 compute-1 nova_compute[225855]:       <entry name="uuid">23ea4537-f03f-46de-881f-b979e232a3b9</entry>
Jan 20 14:51:59 compute-1 nova_compute[225855]:       <entry name="family">Virtual Machine</entry>
Jan 20 14:51:59 compute-1 nova_compute[225855]:     </system>
Jan 20 14:51:59 compute-1 nova_compute[225855]:   </sysinfo>
Jan 20 14:51:59 compute-1 nova_compute[225855]:   <os>
Jan 20 14:51:59 compute-1 nova_compute[225855]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 20 14:51:59 compute-1 nova_compute[225855]:     <boot dev="hd"/>
Jan 20 14:51:59 compute-1 nova_compute[225855]:     <smbios mode="sysinfo"/>
Jan 20 14:51:59 compute-1 nova_compute[225855]:   </os>
Jan 20 14:51:59 compute-1 nova_compute[225855]:   <features>
Jan 20 14:51:59 compute-1 nova_compute[225855]:     <acpi/>
Jan 20 14:51:59 compute-1 nova_compute[225855]:     <apic/>
Jan 20 14:51:59 compute-1 nova_compute[225855]:     <vmcoreinfo/>
Jan 20 14:51:59 compute-1 nova_compute[225855]:   </features>
Jan 20 14:51:59 compute-1 nova_compute[225855]:   <clock offset="utc">
Jan 20 14:51:59 compute-1 nova_compute[225855]:     <timer name="pit" tickpolicy="delay"/>
Jan 20 14:51:59 compute-1 nova_compute[225855]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 20 14:51:59 compute-1 nova_compute[225855]:     <timer name="hpet" present="no"/>
Jan 20 14:51:59 compute-1 nova_compute[225855]:   </clock>
Jan 20 14:51:59 compute-1 nova_compute[225855]:   <cpu mode="custom" match="exact">
Jan 20 14:51:59 compute-1 nova_compute[225855]:     <model>Nehalem</model>
Jan 20 14:51:59 compute-1 nova_compute[225855]:     <topology sockets="1" cores="1" threads="1"/>
Jan 20 14:51:59 compute-1 nova_compute[225855]:   </cpu>
Jan 20 14:51:59 compute-1 nova_compute[225855]:   <devices>
Jan 20 14:51:59 compute-1 nova_compute[225855]:     <disk type="network" device="disk">
Jan 20 14:51:59 compute-1 nova_compute[225855]:       <driver type="raw" cache="none"/>
Jan 20 14:51:59 compute-1 nova_compute[225855]:       <source protocol="rbd" name="vms/23ea4537-f03f-46de-881f-b979e232a3b9_disk">
Jan 20 14:51:59 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:51:59 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:51:59 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:51:59 compute-1 nova_compute[225855]:       </source>
Jan 20 14:51:59 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 14:51:59 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:51:59 compute-1 nova_compute[225855]:       </auth>
Jan 20 14:51:59 compute-1 nova_compute[225855]:       <target dev="vda" bus="virtio"/>
Jan 20 14:51:59 compute-1 nova_compute[225855]:     </disk>
Jan 20 14:51:59 compute-1 nova_compute[225855]:     <disk type="network" device="cdrom">
Jan 20 14:51:59 compute-1 nova_compute[225855]:       <driver type="raw" cache="none"/>
Jan 20 14:51:59 compute-1 nova_compute[225855]:       <source protocol="rbd" name="vms/23ea4537-f03f-46de-881f-b979e232a3b9_disk.config">
Jan 20 14:51:59 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:51:59 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:51:59 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:51:59 compute-1 nova_compute[225855]:       </source>
Jan 20 14:51:59 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 14:51:59 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:51:59 compute-1 nova_compute[225855]:       </auth>
Jan 20 14:51:59 compute-1 nova_compute[225855]:       <target dev="sda" bus="sata"/>
Jan 20 14:51:59 compute-1 nova_compute[225855]:     </disk>
Jan 20 14:51:59 compute-1 nova_compute[225855]:     <interface type="ethernet">
Jan 20 14:51:59 compute-1 nova_compute[225855]:       <mac address="fa:16:3e:b5:55:3c"/>
Jan 20 14:51:59 compute-1 nova_compute[225855]:       <model type="virtio"/>
Jan 20 14:51:59 compute-1 nova_compute[225855]:       <driver name="vhost" rx_queue_size="512"/>
Jan 20 14:51:59 compute-1 nova_compute[225855]:       <mtu size="1442"/>
Jan 20 14:51:59 compute-1 nova_compute[225855]:       <target dev="tap234381ea-07"/>
Jan 20 14:51:59 compute-1 nova_compute[225855]:     </interface>
Jan 20 14:51:59 compute-1 nova_compute[225855]:     <serial type="pty">
Jan 20 14:51:59 compute-1 nova_compute[225855]:       <log file="/var/lib/nova/instances/23ea4537-f03f-46de-881f-b979e232a3b9/console.log" append="off"/>
Jan 20 14:51:59 compute-1 nova_compute[225855]:     </serial>
Jan 20 14:51:59 compute-1 nova_compute[225855]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 20 14:51:59 compute-1 nova_compute[225855]:     <video>
Jan 20 14:51:59 compute-1 nova_compute[225855]:       <model type="virtio"/>
Jan 20 14:51:59 compute-1 nova_compute[225855]:     </video>
Jan 20 14:51:59 compute-1 nova_compute[225855]:     <input type="tablet" bus="usb"/>
Jan 20 14:51:59 compute-1 nova_compute[225855]:     <rng model="virtio">
Jan 20 14:51:59 compute-1 nova_compute[225855]:       <backend model="random">/dev/urandom</backend>
Jan 20 14:51:59 compute-1 nova_compute[225855]:     </rng>
Jan 20 14:51:59 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root"/>
Jan 20 14:51:59 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:51:59 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:51:59 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:51:59 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:51:59 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:51:59 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:51:59 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:51:59 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:51:59 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:51:59 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:51:59 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:51:59 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:51:59 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:51:59 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:51:59 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:51:59 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:51:59 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:51:59 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:51:59 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:51:59 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:51:59 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:51:59 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:51:59 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:51:59 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:51:59 compute-1 nova_compute[225855]:     <controller type="usb" index="0"/>
Jan 20 14:51:59 compute-1 nova_compute[225855]:     <memballoon model="virtio">
Jan 20 14:51:59 compute-1 nova_compute[225855]:       <stats period="10"/>
Jan 20 14:51:59 compute-1 nova_compute[225855]:     </memballoon>
Jan 20 14:51:59 compute-1 nova_compute[225855]:   </devices>
Jan 20 14:51:59 compute-1 nova_compute[225855]: </domain>
Jan 20 14:51:59 compute-1 nova_compute[225855]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 20 14:51:59 compute-1 nova_compute[225855]: 2026-01-20 14:51:59.392 225859 DEBUG nova.compute.manager [None req-dfa82bcd-d307-4023-be3c-6e308041cbda d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] [instance: 23ea4537-f03f-46de-881f-b979e232a3b9] Preparing to wait for external event network-vif-plugged-234381ea-07b1-41fe-b3c1-be97ce6a3b64 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 20 14:51:59 compute-1 nova_compute[225855]: 2026-01-20 14:51:59.393 225859 DEBUG oslo_concurrency.lockutils [None req-dfa82bcd-d307-4023-be3c-6e308041cbda d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Acquiring lock "23ea4537-f03f-46de-881f-b979e232a3b9-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:51:59 compute-1 nova_compute[225855]: 2026-01-20 14:51:59.393 225859 DEBUG oslo_concurrency.lockutils [None req-dfa82bcd-d307-4023-be3c-6e308041cbda d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Lock "23ea4537-f03f-46de-881f-b979e232a3b9-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:51:59 compute-1 nova_compute[225855]: 2026-01-20 14:51:59.393 225859 DEBUG oslo_concurrency.lockutils [None req-dfa82bcd-d307-4023-be3c-6e308041cbda d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Lock "23ea4537-f03f-46de-881f-b979e232a3b9-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:51:59 compute-1 nova_compute[225855]: 2026-01-20 14:51:59.394 225859 DEBUG nova.virt.libvirt.vif [None req-dfa82bcd-d307-4023-be3c-6e308041cbda d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T14:51:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerStableDeviceRescueTest-server-828759404',display_name='tempest-ServerStableDeviceRescueTest-server-828759404',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverstabledevicerescuetest-server-828759404',id=117,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0a29915e0dd2403fbd7b7e847696b00a',ramdisk_id='',reservation_id='r-ztpzn050',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerStableDeviceRescueTest-129078052',owner_user_name='tempest-ServerStableDeviceRescueTest-129078052-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T14:51:55Z,user_data=None,user_id='d85d286ce6224326a0f4a15a06afbfea',uuid=23ea4537-f03f-46de-881f-b979e232a3b9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "234381ea-07b1-41fe-b3c1-be97ce6a3b64", "address": "fa:16:3e:b5:55:3c", "network": {"id": "79184781-1f23-4584-87de-08e262242488", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-165460946-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a29915e0dd2403fbd7b7e847696b00a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap234381ea-07", "ovs_interfaceid": "234381ea-07b1-41fe-b3c1-be97ce6a3b64", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 20 14:51:59 compute-1 nova_compute[225855]: 2026-01-20 14:51:59.394 225859 DEBUG nova.network.os_vif_util [None req-dfa82bcd-d307-4023-be3c-6e308041cbda d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Converting VIF {"id": "234381ea-07b1-41fe-b3c1-be97ce6a3b64", "address": "fa:16:3e:b5:55:3c", "network": {"id": "79184781-1f23-4584-87de-08e262242488", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-165460946-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a29915e0dd2403fbd7b7e847696b00a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap234381ea-07", "ovs_interfaceid": "234381ea-07b1-41fe-b3c1-be97ce6a3b64", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:51:59 compute-1 nova_compute[225855]: 2026-01-20 14:51:59.395 225859 DEBUG nova.network.os_vif_util [None req-dfa82bcd-d307-4023-be3c-6e308041cbda d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b5:55:3c,bridge_name='br-int',has_traffic_filtering=True,id=234381ea-07b1-41fe-b3c1-be97ce6a3b64,network=Network(79184781-1f23-4584-87de-08e262242488),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap234381ea-07') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:51:59 compute-1 nova_compute[225855]: 2026-01-20 14:51:59.395 225859 DEBUG os_vif [None req-dfa82bcd-d307-4023-be3c-6e308041cbda d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b5:55:3c,bridge_name='br-int',has_traffic_filtering=True,id=234381ea-07b1-41fe-b3c1-be97ce6a3b64,network=Network(79184781-1f23-4584-87de-08e262242488),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap234381ea-07') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 20 14:51:59 compute-1 nova_compute[225855]: 2026-01-20 14:51:59.396 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:51:59 compute-1 nova_compute[225855]: 2026-01-20 14:51:59.397 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:51:59 compute-1 nova_compute[225855]: 2026-01-20 14:51:59.397 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:51:59 compute-1 nova_compute[225855]: 2026-01-20 14:51:59.403 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:51:59 compute-1 nova_compute[225855]: 2026-01-20 14:51:59.404 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap234381ea-07, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:51:59 compute-1 nova_compute[225855]: 2026-01-20 14:51:59.404 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap234381ea-07, col_values=(('external_ids', {'iface-id': '234381ea-07b1-41fe-b3c1-be97ce6a3b64', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b5:55:3c', 'vm-uuid': '23ea4537-f03f-46de-881f-b979e232a3b9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:51:59 compute-1 nova_compute[225855]: 2026-01-20 14:51:59.407 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:51:59 compute-1 NetworkManager[49104]: <info>  [1768920719.4083] manager: (tap234381ea-07): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/188)
Jan 20 14:51:59 compute-1 nova_compute[225855]: 2026-01-20 14:51:59.410 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 14:51:59 compute-1 nova_compute[225855]: 2026-01-20 14:51:59.416 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:51:59 compute-1 nova_compute[225855]: 2026-01-20 14:51:59.416 225859 INFO os_vif [None req-dfa82bcd-d307-4023-be3c-6e308041cbda d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b5:55:3c,bridge_name='br-int',has_traffic_filtering=True,id=234381ea-07b1-41fe-b3c1-be97ce6a3b64,network=Network(79184781-1f23-4584-87de-08e262242488),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap234381ea-07')
Jan 20 14:51:59 compute-1 nova_compute[225855]: 2026-01-20 14:51:59.473 225859 DEBUG nova.virt.libvirt.driver [None req-dfa82bcd-d307-4023-be3c-6e308041cbda d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:51:59 compute-1 nova_compute[225855]: 2026-01-20 14:51:59.474 225859 DEBUG nova.virt.libvirt.driver [None req-dfa82bcd-d307-4023-be3c-6e308041cbda d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:51:59 compute-1 nova_compute[225855]: 2026-01-20 14:51:59.474 225859 DEBUG nova.virt.libvirt.driver [None req-dfa82bcd-d307-4023-be3c-6e308041cbda d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] No VIF found with MAC fa:16:3e:b5:55:3c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 20 14:51:59 compute-1 nova_compute[225855]: 2026-01-20 14:51:59.474 225859 INFO nova.virt.libvirt.driver [None req-dfa82bcd-d307-4023-be3c-6e308041cbda d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] [instance: 23ea4537-f03f-46de-881f-b979e232a3b9] Using config drive
Jan 20 14:51:59 compute-1 nova_compute[225855]: 2026-01-20 14:51:59.500 225859 DEBUG nova.storage.rbd_utils [None req-dfa82bcd-d307-4023-be3c-6e308041cbda d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] rbd image 23ea4537-f03f-46de-881f-b979e232a3b9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:51:59 compute-1 nova_compute[225855]: 2026-01-20 14:51:59.581 225859 DEBUG nova.network.neutron [req-7d5244c6-5f1c-44a0-8712-98cf17b16a1e req-411d5867-42e1-44dc-a564-3d5df9652432 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 23ea4537-f03f-46de-881f-b979e232a3b9] Updated VIF entry in instance network info cache for port 234381ea-07b1-41fe-b3c1-be97ce6a3b64. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 14:51:59 compute-1 nova_compute[225855]: 2026-01-20 14:51:59.582 225859 DEBUG nova.network.neutron [req-7d5244c6-5f1c-44a0-8712-98cf17b16a1e req-411d5867-42e1-44dc-a564-3d5df9652432 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 23ea4537-f03f-46de-881f-b979e232a3b9] Updating instance_info_cache with network_info: [{"id": "234381ea-07b1-41fe-b3c1-be97ce6a3b64", "address": "fa:16:3e:b5:55:3c", "network": {"id": "79184781-1f23-4584-87de-08e262242488", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-165460946-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a29915e0dd2403fbd7b7e847696b00a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap234381ea-07", "ovs_interfaceid": "234381ea-07b1-41fe-b3c1-be97ce6a3b64", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:51:59 compute-1 nova_compute[225855]: 2026-01-20 14:51:59.605 225859 DEBUG oslo_concurrency.lockutils [req-7d5244c6-5f1c-44a0-8712-98cf17b16a1e req-411d5867-42e1-44dc-a564-3d5df9652432 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-23ea4537-f03f-46de-881f-b979e232a3b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:51:59 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:51:59 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:51:59 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:51:59.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:51:59 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/1996025638' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:51:59 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/256008764' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:51:59 compute-1 nova_compute[225855]: 2026-01-20 14:51:59.912 225859 INFO nova.virt.libvirt.driver [None req-dfa82bcd-d307-4023-be3c-6e308041cbda d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] [instance: 23ea4537-f03f-46de-881f-b979e232a3b9] Creating config drive at /var/lib/nova/instances/23ea4537-f03f-46de-881f-b979e232a3b9/disk.config
Jan 20 14:51:59 compute-1 nova_compute[225855]: 2026-01-20 14:51:59.916 225859 DEBUG oslo_concurrency.processutils [None req-dfa82bcd-d307-4023-be3c-6e308041cbda d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/23ea4537-f03f-46de-881f-b979e232a3b9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpjxbz3kh0 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:52:00 compute-1 nova_compute[225855]: 2026-01-20 14:52:00.049 225859 DEBUG oslo_concurrency.processutils [None req-dfa82bcd-d307-4023-be3c-6e308041cbda d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/23ea4537-f03f-46de-881f-b979e232a3b9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpjxbz3kh0" returned: 0 in 0.132s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:52:00 compute-1 nova_compute[225855]: 2026-01-20 14:52:00.076 225859 DEBUG nova.storage.rbd_utils [None req-dfa82bcd-d307-4023-be3c-6e308041cbda d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] rbd image 23ea4537-f03f-46de-881f-b979e232a3b9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:52:00 compute-1 nova_compute[225855]: 2026-01-20 14:52:00.079 225859 DEBUG oslo_concurrency.processutils [None req-dfa82bcd-d307-4023-be3c-6e308041cbda d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/23ea4537-f03f-46de-881f-b979e232a3b9/disk.config 23ea4537-f03f-46de-881f-b979e232a3b9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:52:00 compute-1 nova_compute[225855]: 2026-01-20 14:52:00.261 225859 DEBUG oslo_concurrency.processutils [None req-dfa82bcd-d307-4023-be3c-6e308041cbda d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/23ea4537-f03f-46de-881f-b979e232a3b9/disk.config 23ea4537-f03f-46de-881f-b979e232a3b9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.182s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:52:00 compute-1 nova_compute[225855]: 2026-01-20 14:52:00.262 225859 INFO nova.virt.libvirt.driver [None req-dfa82bcd-d307-4023-be3c-6e308041cbda d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] [instance: 23ea4537-f03f-46de-881f-b979e232a3b9] Deleting local config drive /var/lib/nova/instances/23ea4537-f03f-46de-881f-b979e232a3b9/disk.config because it was imported into RBD.
Jan 20 14:52:00 compute-1 kernel: tap234381ea-07: entered promiscuous mode
Jan 20 14:52:00 compute-1 NetworkManager[49104]: <info>  [1768920720.3008] manager: (tap234381ea-07): new Tun device (/org/freedesktop/NetworkManager/Devices/189)
Jan 20 14:52:00 compute-1 systemd-udevd[270395]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 14:52:00 compute-1 nova_compute[225855]: 2026-01-20 14:52:00.361 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:52:00 compute-1 ovn_controller[130490]: 2026-01-20T14:52:00Z|00437|binding|INFO|Claiming lport 234381ea-07b1-41fe-b3c1-be97ce6a3b64 for this chassis.
Jan 20 14:52:00 compute-1 ovn_controller[130490]: 2026-01-20T14:52:00Z|00438|binding|INFO|234381ea-07b1-41fe-b3c1-be97ce6a3b64: Claiming fa:16:3e:b5:55:3c 10.100.0.14
Jan 20 14:52:00 compute-1 nova_compute[225855]: 2026-01-20 14:52:00.367 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:52:00 compute-1 NetworkManager[49104]: <info>  [1768920720.3727] device (tap234381ea-07): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 20 14:52:00 compute-1 NetworkManager[49104]: <info>  [1768920720.3746] device (tap234381ea-07): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 20 14:52:00 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:52:00.374 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b5:55:3c 10.100.0.14'], port_security=['fa:16:3e:b5:55:3c 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '23ea4537-f03f-46de-881f-b979e232a3b9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-79184781-1f23-4584-87de-08e262242488', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0a29915e0dd2403fbd7b7e847696b00a', 'neutron:revision_number': '2', 'neutron:security_group_ids': '30ec24b7-15ba-4aeb-9785-539071729f77', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6b73ab05-b29f-401a-84a5-ea1a96103f33, chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], logical_port=234381ea-07b1-41fe-b3c1-be97ce6a3b64) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:52:00 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:52:00.375 140354 INFO neutron.agent.ovn.metadata.agent [-] Port 234381ea-07b1-41fe-b3c1-be97ce6a3b64 in datapath 79184781-1f23-4584-87de-08e262242488 bound to our chassis
Jan 20 14:52:00 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:52:00.377 140354 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 79184781-1f23-4584-87de-08e262242488
Jan 20 14:52:00 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:52:00.387 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[34ac9ce1-e8be-40a7-a3a3-7aa9628213b7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:52:00 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:52:00.388 140354 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap79184781-11 in ovnmeta-79184781-1f23-4584-87de-08e262242488 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 20 14:52:00 compute-1 systemd-machined[194361]: New machine qemu-52-instance-00000075.
Jan 20 14:52:00 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:52:00.390 229707 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap79184781-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 20 14:52:00 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:52:00.390 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[f9c049d9-6562-4414-80cf-d6cd64eafaa3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:52:00 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:52:00.391 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[90874d05-c1b9-43cc-8a63-855a5c897673]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:52:00 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:52:00.401 140466 DEBUG oslo.privsep.daemon [-] privsep: reply[2d25b206-a839-45a6-bff0-e6af170d56a5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:52:00 compute-1 systemd[1]: Started Virtual Machine qemu-52-instance-00000075.
Jan 20 14:52:00 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:52:00.429 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[e50bc71c-f0b9-4897-928c-981849774e5d]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:52:00 compute-1 ovn_controller[130490]: 2026-01-20T14:52:00Z|00439|binding|INFO|Setting lport 234381ea-07b1-41fe-b3c1-be97ce6a3b64 ovn-installed in OVS
Jan 20 14:52:00 compute-1 ovn_controller[130490]: 2026-01-20T14:52:00Z|00440|binding|INFO|Setting lport 234381ea-07b1-41fe-b3c1-be97ce6a3b64 up in Southbound
Jan 20 14:52:00 compute-1 nova_compute[225855]: 2026-01-20 14:52:00.444 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:52:00 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:52:00.463 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[1f8a7b9d-0bc0-4135-a35e-3b88583a7d97]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:52:00 compute-1 NetworkManager[49104]: <info>  [1768920720.4688] manager: (tap79184781-10): new Veth device (/org/freedesktop/NetworkManager/Devices/190)
Jan 20 14:52:00 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:52:00.468 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[755ecd3e-d89a-4652-9793-ec88224d6ac0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:52:00 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:52:00.498 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[f71c5555-c884-4dd6-ae3c-8881d817a96e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:52:00 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:52:00.505 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[b46c5ffb-4b9f-4f0a-b8e7-6edb3160305d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:52:00 compute-1 NetworkManager[49104]: <info>  [1768920720.5268] device (tap79184781-10): carrier: link connected
Jan 20 14:52:00 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:52:00.531 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[df82f266-e0e9-4eed-b071-a47743285f60]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:52:00 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:52:00.549 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[b6be8f08-ed82-472d-b92f-ee18a70c20c8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap79184781-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:38:7c:2a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 123], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 575945, 'reachable_time': 32082, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 270431, 'error': None, 'target': 'ovnmeta-79184781-1f23-4584-87de-08e262242488', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:52:00 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:52:00.567 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[a841b0f4-8c19-4c52-8b90-d5c07fc32e32]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe38:7c2a'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 575945, 'tstamp': 575945}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 270432, 'error': None, 'target': 'ovnmeta-79184781-1f23-4584-87de-08e262242488', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:52:00 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:52:00.586 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[b6447f6c-5be7-44c7-840d-ff608edca17a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap79184781-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:38:7c:2a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 123], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 575945, 'reachable_time': 32082, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 270433, 'error': None, 'target': 'ovnmeta-79184781-1f23-4584-87de-08e262242488', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:52:00 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:52:00.620 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[907d3690-11de-4e4a-940a-9c349c394bb7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:52:00 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:52:00.684 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[74f13395-bc5a-4464-9b47-75f6930bfbbf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:52:00 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:52:00.685 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap79184781-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:52:00 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:52:00.686 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:52:00 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:52:00.686 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap79184781-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:52:00 compute-1 nova_compute[225855]: 2026-01-20 14:52:00.687 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:52:00 compute-1 NetworkManager[49104]: <info>  [1768920720.6883] manager: (tap79184781-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/191)
Jan 20 14:52:00 compute-1 kernel: tap79184781-10: entered promiscuous mode
Jan 20 14:52:00 compute-1 nova_compute[225855]: 2026-01-20 14:52:00.690 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:52:00 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:52:00.693 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap79184781-10, col_values=(('external_ids', {'iface-id': 'b033e9e6-9781-4424-a20f-7b48a14e2c80'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:52:00 compute-1 nova_compute[225855]: 2026-01-20 14:52:00.694 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:52:00 compute-1 ovn_controller[130490]: 2026-01-20T14:52:00Z|00441|binding|INFO|Releasing lport b033e9e6-9781-4424-a20f-7b48a14e2c80 from this chassis (sb_readonly=0)
Jan 20 14:52:00 compute-1 nova_compute[225855]: 2026-01-20 14:52:00.698 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:52:00 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:52:00.699 140354 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/79184781-1f23-4584-87de-08e262242488.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/79184781-1f23-4584-87de-08e262242488.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 20 14:52:00 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:52:00.700 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[c4964c62-ef64-45d9-bc1a-b9404a7a8c9a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:52:00 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:52:00.701 140354 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 20 14:52:00 compute-1 ovn_metadata_agent[140349]: global
Jan 20 14:52:00 compute-1 ovn_metadata_agent[140349]:     log         /dev/log local0 debug
Jan 20 14:52:00 compute-1 ovn_metadata_agent[140349]:     log-tag     haproxy-metadata-proxy-79184781-1f23-4584-87de-08e262242488
Jan 20 14:52:00 compute-1 ovn_metadata_agent[140349]:     user        root
Jan 20 14:52:00 compute-1 ovn_metadata_agent[140349]:     group       root
Jan 20 14:52:00 compute-1 ovn_metadata_agent[140349]:     maxconn     1024
Jan 20 14:52:00 compute-1 ovn_metadata_agent[140349]:     pidfile     /var/lib/neutron/external/pids/79184781-1f23-4584-87de-08e262242488.pid.haproxy
Jan 20 14:52:00 compute-1 ovn_metadata_agent[140349]:     daemon
Jan 20 14:52:00 compute-1 ovn_metadata_agent[140349]: 
Jan 20 14:52:00 compute-1 ovn_metadata_agent[140349]: defaults
Jan 20 14:52:00 compute-1 ovn_metadata_agent[140349]:     log global
Jan 20 14:52:00 compute-1 ovn_metadata_agent[140349]:     mode http
Jan 20 14:52:00 compute-1 ovn_metadata_agent[140349]:     option httplog
Jan 20 14:52:00 compute-1 ovn_metadata_agent[140349]:     option dontlognull
Jan 20 14:52:00 compute-1 ovn_metadata_agent[140349]:     option http-server-close
Jan 20 14:52:00 compute-1 ovn_metadata_agent[140349]:     option forwardfor
Jan 20 14:52:00 compute-1 ovn_metadata_agent[140349]:     retries                 3
Jan 20 14:52:00 compute-1 ovn_metadata_agent[140349]:     timeout http-request    30s
Jan 20 14:52:00 compute-1 ovn_metadata_agent[140349]:     timeout connect         30s
Jan 20 14:52:00 compute-1 ovn_metadata_agent[140349]:     timeout client          32s
Jan 20 14:52:00 compute-1 ovn_metadata_agent[140349]:     timeout server          32s
Jan 20 14:52:00 compute-1 ovn_metadata_agent[140349]:     timeout http-keep-alive 30s
Jan 20 14:52:00 compute-1 ovn_metadata_agent[140349]: 
Jan 20 14:52:00 compute-1 ovn_metadata_agent[140349]: 
Jan 20 14:52:00 compute-1 ovn_metadata_agent[140349]: listen listener
Jan 20 14:52:00 compute-1 ovn_metadata_agent[140349]:     bind 169.254.169.254:80
Jan 20 14:52:00 compute-1 ovn_metadata_agent[140349]:     server metadata /var/lib/neutron/metadata_proxy
Jan 20 14:52:00 compute-1 ovn_metadata_agent[140349]:     http-request add-header X-OVN-Network-ID 79184781-1f23-4584-87de-08e262242488
Jan 20 14:52:00 compute-1 ovn_metadata_agent[140349]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 20 14:52:00 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:52:00.701 140354 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-79184781-1f23-4584-87de-08e262242488', 'env', 'PROCESS_TAG=haproxy-79184781-1f23-4584-87de-08e262242488', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/79184781-1f23-4584-87de-08e262242488.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 20 14:52:00 compute-1 nova_compute[225855]: 2026-01-20 14:52:00.729 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:52:00 compute-1 ceph-mon[81775]: pgmap v1938: 321 pgs: 321 active+clean; 470 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 4.3 MiB/s wr, 313 op/s
Jan 20 14:52:00 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:52:00 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:52:00 compute-1 nova_compute[225855]: 2026-01-20 14:52:00.934 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:52:00 compute-1 sudo[270443]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:52:00 compute-1 sudo[270443]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:52:00 compute-1 sudo[270443]: pam_unix(sudo:session): session closed for user root
Jan 20 14:52:01 compute-1 sudo[270473]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 14:52:01 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:52:01 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:52:01 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:52:01.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:52:01 compute-1 sudo[270473]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:52:01 compute-1 sudo[270473]: pam_unix(sudo:session): session closed for user root
Jan 20 14:52:01 compute-1 podman[270512]: 2026-01-20 14:52:01.121636866 +0000 UTC m=+0.052413831 container create 60e161a7f3185d9e79dabaf678e5dc136ea2a7d8acb23d5437f931e610606341 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-79184781-1f23-4584-87de-08e262242488, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_managed=true)
Jan 20 14:52:01 compute-1 systemd[1]: Started libpod-conmon-60e161a7f3185d9e79dabaf678e5dc136ea2a7d8acb23d5437f931e610606341.scope.
Jan 20 14:52:01 compute-1 systemd[1]: Started libcrun container.
Jan 20 14:52:01 compute-1 podman[270512]: 2026-01-20 14:52:01.090080235 +0000 UTC m=+0.020857220 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 20 14:52:01 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4d48cd3dd5c0d4dc71eeb194f1e389fb38ec773128a339e2e0b3111348c8895/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 20 14:52:01 compute-1 ceph-osd[79119]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #49. Immutable memtables: 6.
Jan 20 14:52:01 compute-1 podman[270512]: 2026-01-20 14:52:01.201733987 +0000 UTC m=+0.132510982 container init 60e161a7f3185d9e79dabaf678e5dc136ea2a7d8acb23d5437f931e610606341 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-79184781-1f23-4584-87de-08e262242488, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2)
Jan 20 14:52:01 compute-1 podman[270512]: 2026-01-20 14:52:01.207662805 +0000 UTC m=+0.138439770 container start 60e161a7f3185d9e79dabaf678e5dc136ea2a7d8acb23d5437f931e610606341 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-79184781-1f23-4584-87de-08e262242488, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:52:01 compute-1 neutron-haproxy-ovnmeta-79184781-1f23-4584-87de-08e262242488[270527]: [NOTICE]   (270531) : New worker (270533) forked
Jan 20 14:52:01 compute-1 neutron-haproxy-ovnmeta-79184781-1f23-4584-87de-08e262242488[270527]: [NOTICE]   (270531) : Loading success.
Jan 20 14:52:01 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:52:01 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:52:01 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:52:01.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:52:02 compute-1 nova_compute[225855]: 2026-01-20 14:52:02.031 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768920722.0307524, 23ea4537-f03f-46de-881f-b979e232a3b9 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:52:02 compute-1 nova_compute[225855]: 2026-01-20 14:52:02.032 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 23ea4537-f03f-46de-881f-b979e232a3b9] VM Started (Lifecycle Event)
Jan 20 14:52:02 compute-1 nova_compute[225855]: 2026-01-20 14:52:02.063 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 23ea4537-f03f-46de-881f-b979e232a3b9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:52:02 compute-1 nova_compute[225855]: 2026-01-20 14:52:02.069 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768920722.0310616, 23ea4537-f03f-46de-881f-b979e232a3b9 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:52:02 compute-1 nova_compute[225855]: 2026-01-20 14:52:02.069 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 23ea4537-f03f-46de-881f-b979e232a3b9] VM Paused (Lifecycle Event)
Jan 20 14:52:02 compute-1 nova_compute[225855]: 2026-01-20 14:52:02.087 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 23ea4537-f03f-46de-881f-b979e232a3b9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:52:02 compute-1 nova_compute[225855]: 2026-01-20 14:52:02.093 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 23ea4537-f03f-46de-881f-b979e232a3b9] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:52:02 compute-1 nova_compute[225855]: 2026-01-20 14:52:02.114 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 23ea4537-f03f-46de-881f-b979e232a3b9] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 14:52:02 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e261 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:52:02 compute-1 nova_compute[225855]: 2026-01-20 14:52:02.745 225859 DEBUG nova.compute.manager [req-b3bf3c1c-1fed-4f57-ba57-b10398f2bec4 req-b67be893-9ebb-4432-9147-ae90394bf5af 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 23ea4537-f03f-46de-881f-b979e232a3b9] Received event network-vif-plugged-234381ea-07b1-41fe-b3c1-be97ce6a3b64 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:52:02 compute-1 nova_compute[225855]: 2026-01-20 14:52:02.746 225859 DEBUG oslo_concurrency.lockutils [req-b3bf3c1c-1fed-4f57-ba57-b10398f2bec4 req-b67be893-9ebb-4432-9147-ae90394bf5af 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "23ea4537-f03f-46de-881f-b979e232a3b9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:52:02 compute-1 nova_compute[225855]: 2026-01-20 14:52:02.746 225859 DEBUG oslo_concurrency.lockutils [req-b3bf3c1c-1fed-4f57-ba57-b10398f2bec4 req-b67be893-9ebb-4432-9147-ae90394bf5af 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "23ea4537-f03f-46de-881f-b979e232a3b9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:52:02 compute-1 nova_compute[225855]: 2026-01-20 14:52:02.747 225859 DEBUG oslo_concurrency.lockutils [req-b3bf3c1c-1fed-4f57-ba57-b10398f2bec4 req-b67be893-9ebb-4432-9147-ae90394bf5af 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "23ea4537-f03f-46de-881f-b979e232a3b9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:52:02 compute-1 nova_compute[225855]: 2026-01-20 14:52:02.747 225859 DEBUG nova.compute.manager [req-b3bf3c1c-1fed-4f57-ba57-b10398f2bec4 req-b67be893-9ebb-4432-9147-ae90394bf5af 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 23ea4537-f03f-46de-881f-b979e232a3b9] Processing event network-vif-plugged-234381ea-07b1-41fe-b3c1-be97ce6a3b64 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 20 14:52:02 compute-1 nova_compute[225855]: 2026-01-20 14:52:02.747 225859 DEBUG nova.compute.manager [req-b3bf3c1c-1fed-4f57-ba57-b10398f2bec4 req-b67be893-9ebb-4432-9147-ae90394bf5af 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 23ea4537-f03f-46de-881f-b979e232a3b9] Received event network-vif-plugged-234381ea-07b1-41fe-b3c1-be97ce6a3b64 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:52:02 compute-1 nova_compute[225855]: 2026-01-20 14:52:02.747 225859 DEBUG oslo_concurrency.lockutils [req-b3bf3c1c-1fed-4f57-ba57-b10398f2bec4 req-b67be893-9ebb-4432-9147-ae90394bf5af 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "23ea4537-f03f-46de-881f-b979e232a3b9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:52:02 compute-1 nova_compute[225855]: 2026-01-20 14:52:02.748 225859 DEBUG oslo_concurrency.lockutils [req-b3bf3c1c-1fed-4f57-ba57-b10398f2bec4 req-b67be893-9ebb-4432-9147-ae90394bf5af 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "23ea4537-f03f-46de-881f-b979e232a3b9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:52:02 compute-1 nova_compute[225855]: 2026-01-20 14:52:02.748 225859 DEBUG oslo_concurrency.lockutils [req-b3bf3c1c-1fed-4f57-ba57-b10398f2bec4 req-b67be893-9ebb-4432-9147-ae90394bf5af 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "23ea4537-f03f-46de-881f-b979e232a3b9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:52:02 compute-1 nova_compute[225855]: 2026-01-20 14:52:02.748 225859 DEBUG nova.compute.manager [req-b3bf3c1c-1fed-4f57-ba57-b10398f2bec4 req-b67be893-9ebb-4432-9147-ae90394bf5af 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 23ea4537-f03f-46de-881f-b979e232a3b9] No waiting events found dispatching network-vif-plugged-234381ea-07b1-41fe-b3c1-be97ce6a3b64 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:52:02 compute-1 nova_compute[225855]: 2026-01-20 14:52:02.748 225859 WARNING nova.compute.manager [req-b3bf3c1c-1fed-4f57-ba57-b10398f2bec4 req-b67be893-9ebb-4432-9147-ae90394bf5af 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 23ea4537-f03f-46de-881f-b979e232a3b9] Received unexpected event network-vif-plugged-234381ea-07b1-41fe-b3c1-be97ce6a3b64 for instance with vm_state building and task_state spawning.
Jan 20 14:52:02 compute-1 nova_compute[225855]: 2026-01-20 14:52:02.749 225859 DEBUG nova.compute.manager [None req-dfa82bcd-d307-4023-be3c-6e308041cbda d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] [instance: 23ea4537-f03f-46de-881f-b979e232a3b9] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 20 14:52:02 compute-1 nova_compute[225855]: 2026-01-20 14:52:02.752 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768920722.7522752, 23ea4537-f03f-46de-881f-b979e232a3b9 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:52:02 compute-1 nova_compute[225855]: 2026-01-20 14:52:02.752 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 23ea4537-f03f-46de-881f-b979e232a3b9] VM Resumed (Lifecycle Event)
Jan 20 14:52:02 compute-1 nova_compute[225855]: 2026-01-20 14:52:02.754 225859 DEBUG nova.virt.libvirt.driver [None req-dfa82bcd-d307-4023-be3c-6e308041cbda d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] [instance: 23ea4537-f03f-46de-881f-b979e232a3b9] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 20 14:52:02 compute-1 nova_compute[225855]: 2026-01-20 14:52:02.757 225859 INFO nova.virt.libvirt.driver [-] [instance: 23ea4537-f03f-46de-881f-b979e232a3b9] Instance spawned successfully.
Jan 20 14:52:02 compute-1 nova_compute[225855]: 2026-01-20 14:52:02.758 225859 DEBUG nova.virt.libvirt.driver [None req-dfa82bcd-d307-4023-be3c-6e308041cbda d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] [instance: 23ea4537-f03f-46de-881f-b979e232a3b9] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 20 14:52:02 compute-1 nova_compute[225855]: 2026-01-20 14:52:02.775 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 23ea4537-f03f-46de-881f-b979e232a3b9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:52:02 compute-1 nova_compute[225855]: 2026-01-20 14:52:02.781 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 23ea4537-f03f-46de-881f-b979e232a3b9] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:52:02 compute-1 nova_compute[225855]: 2026-01-20 14:52:02.785 225859 DEBUG nova.virt.libvirt.driver [None req-dfa82bcd-d307-4023-be3c-6e308041cbda d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] [instance: 23ea4537-f03f-46de-881f-b979e232a3b9] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:52:02 compute-1 nova_compute[225855]: 2026-01-20 14:52:02.785 225859 DEBUG nova.virt.libvirt.driver [None req-dfa82bcd-d307-4023-be3c-6e308041cbda d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] [instance: 23ea4537-f03f-46de-881f-b979e232a3b9] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:52:02 compute-1 nova_compute[225855]: 2026-01-20 14:52:02.785 225859 DEBUG nova.virt.libvirt.driver [None req-dfa82bcd-d307-4023-be3c-6e308041cbda d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] [instance: 23ea4537-f03f-46de-881f-b979e232a3b9] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:52:02 compute-1 nova_compute[225855]: 2026-01-20 14:52:02.786 225859 DEBUG nova.virt.libvirt.driver [None req-dfa82bcd-d307-4023-be3c-6e308041cbda d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] [instance: 23ea4537-f03f-46de-881f-b979e232a3b9] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:52:02 compute-1 nova_compute[225855]: 2026-01-20 14:52:02.786 225859 DEBUG nova.virt.libvirt.driver [None req-dfa82bcd-d307-4023-be3c-6e308041cbda d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] [instance: 23ea4537-f03f-46de-881f-b979e232a3b9] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:52:02 compute-1 nova_compute[225855]: 2026-01-20 14:52:02.786 225859 DEBUG nova.virt.libvirt.driver [None req-dfa82bcd-d307-4023-be3c-6e308041cbda d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] [instance: 23ea4537-f03f-46de-881f-b979e232a3b9] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:52:02 compute-1 nova_compute[225855]: 2026-01-20 14:52:02.823 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 23ea4537-f03f-46de-881f-b979e232a3b9] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 14:52:02 compute-1 nova_compute[225855]: 2026-01-20 14:52:02.860 225859 INFO nova.compute.manager [None req-dfa82bcd-d307-4023-be3c-6e308041cbda d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] [instance: 23ea4537-f03f-46de-881f-b979e232a3b9] Took 7.78 seconds to spawn the instance on the hypervisor.
Jan 20 14:52:02 compute-1 nova_compute[225855]: 2026-01-20 14:52:02.860 225859 DEBUG nova.compute.manager [None req-dfa82bcd-d307-4023-be3c-6e308041cbda d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] [instance: 23ea4537-f03f-46de-881f-b979e232a3b9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:52:02 compute-1 nova_compute[225855]: 2026-01-20 14:52:02.955 225859 INFO nova.compute.manager [None req-dfa82bcd-d307-4023-be3c-6e308041cbda d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] [instance: 23ea4537-f03f-46de-881f-b979e232a3b9] Took 8.77 seconds to build instance.
Jan 20 14:52:02 compute-1 ceph-mon[81775]: pgmap v1939: 321 pgs: 321 active+clean; 538 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 5.0 MiB/s rd, 7.7 MiB/s wr, 347 op/s
Jan 20 14:52:02 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/640400790' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:52:02 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/1432111964' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:52:03 compute-1 nova_compute[225855]: 2026-01-20 14:52:03.019 225859 DEBUG oslo_concurrency.lockutils [None req-dfa82bcd-d307-4023-be3c-6e308041cbda d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Lock "23ea4537-f03f-46de-881f-b979e232a3b9" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.899s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:52:03 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:52:03 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:52:03 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:52:03.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:52:03 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:52:03 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:52:03 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:52:03.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:52:03 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/3728354836' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:52:04 compute-1 nova_compute[225855]: 2026-01-20 14:52:04.408 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:52:05 compute-1 ceph-mon[81775]: pgmap v1940: 321 pgs: 321 active+clean; 560 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 8.8 MiB/s wr, 345 op/s
Jan 20 14:52:05 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:52:05 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:52:05 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:52:05.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:52:05 compute-1 nova_compute[225855]: 2026-01-20 14:52:05.637 225859 DEBUG nova.compute.manager [None req-98302ea4-660c-495b-a273-3923f15a0b09 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] [instance: 23ea4537-f03f-46de-881f-b979e232a3b9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:52:05 compute-1 nova_compute[225855]: 2026-01-20 14:52:05.689 225859 INFO nova.compute.manager [None req-98302ea4-660c-495b-a273-3923f15a0b09 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] [instance: 23ea4537-f03f-46de-881f-b979e232a3b9] instance snapshotting
Jan 20 14:52:05 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:52:05 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:52:05 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:52:05.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:52:05 compute-1 nova_compute[225855]: 2026-01-20 14:52:05.939 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:52:05 compute-1 nova_compute[225855]: 2026-01-20 14:52:05.954 225859 INFO nova.virt.libvirt.driver [None req-98302ea4-660c-495b-a273-3923f15a0b09 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] [instance: 23ea4537-f03f-46de-881f-b979e232a3b9] Beginning live snapshot process
Jan 20 14:52:06 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:52:06 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2460387741' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:52:06 compute-1 nova_compute[225855]: 2026-01-20 14:52:06.234 225859 DEBUG nova.virt.libvirt.imagebackend [None req-98302ea4-660c-495b-a273-3923f15a0b09 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] No parent info for a32b3e07-16d8-46fd-9a7b-c242c432fcf9; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Jan 20 14:52:06 compute-1 nova_compute[225855]: 2026-01-20 14:52:06.511 225859 DEBUG nova.storage.rbd_utils [None req-98302ea4-660c-495b-a273-3923f15a0b09 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] creating snapshot(783df5f942d44e81a853b6c48ec72869) on rbd image(23ea4537-f03f-46de-881f-b979e232a3b9_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 20 14:52:07 compute-1 ceph-mon[81775]: pgmap v1941: 321 pgs: 321 active+clean; 516 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 5.5 MiB/s rd, 9.6 MiB/s wr, 467 op/s
Jan 20 14:52:07 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/2460387741' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:52:07 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/1459808343' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:52:07 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e262 e262: 3 total, 3 up, 3 in
Jan 20 14:52:07 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:52:07 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:52:07 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:52:07.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:52:07 compute-1 nova_compute[225855]: 2026-01-20 14:52:07.092 225859 DEBUG nova.storage.rbd_utils [None req-98302ea4-660c-495b-a273-3923f15a0b09 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] cloning vms/23ea4537-f03f-46de-881f-b979e232a3b9_disk@783df5f942d44e81a853b6c48ec72869 to images/a1d1cbcb-c6a5-49c9-8868-06c3872a40d2 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Jan 20 14:52:07 compute-1 nova_compute[225855]: 2026-01-20 14:52:07.191 225859 DEBUG nova.storage.rbd_utils [None req-98302ea4-660c-495b-a273-3923f15a0b09 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] flattening images/a1d1cbcb-c6a5-49c9-8868-06c3872a40d2 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Jan 20 14:52:07 compute-1 nova_compute[225855]: 2026-01-20 14:52:07.440 225859 DEBUG nova.storage.rbd_utils [None req-98302ea4-660c-495b-a273-3923f15a0b09 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] removing snapshot(783df5f942d44e81a853b6c48ec72869) on rbd image(23ea4537-f03f-46de-881f-b979e232a3b9_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Jan 20 14:52:07 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:52:07 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:52:07 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:52:07 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:52:07.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:52:08 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e263 e263: 3 total, 3 up, 3 in
Jan 20 14:52:08 compute-1 ceph-mon[81775]: osdmap e262: 3 total, 3 up, 3 in
Jan 20 14:52:08 compute-1 nova_compute[225855]: 2026-01-20 14:52:08.292 225859 DEBUG nova.storage.rbd_utils [None req-98302ea4-660c-495b-a273-3923f15a0b09 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] creating snapshot(snap) on rbd image(a1d1cbcb-c6a5-49c9-8868-06c3872a40d2) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 20 14:52:09 compute-1 podman[270729]: 2026-01-20 14:52:09.030826832 +0000 UTC m=+0.073400754 container health_status 72be95c12e041eb2cd1a16ad70dfa1391d9f03a5f3180a614ead9459dcfcd46f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Jan 20 14:52:09 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:52:09 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:52:09 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:52:09.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:52:09 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e264 e264: 3 total, 3 up, 3 in
Jan 20 14:52:09 compute-1 ceph-mon[81775]: pgmap v1943: 321 pgs: 321 active+clean; 509 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 6.3 MiB/s rd, 10 MiB/s wr, 538 op/s
Jan 20 14:52:09 compute-1 ceph-mon[81775]: osdmap e263: 3 total, 3 up, 3 in
Jan 20 14:52:09 compute-1 nova_compute[225855]: 2026-01-20 14:52:09.409 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:52:09 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:52:09 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:52:09 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:52:09.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:52:10 compute-1 ceph-mon[81775]: osdmap e264: 3 total, 3 up, 3 in
Jan 20 14:52:10 compute-1 ceph-mon[81775]: pgmap v1946: 321 pgs: 321 active+clean; 505 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 11 MiB/s rd, 4.7 MiB/s wr, 627 op/s
Jan 20 14:52:10 compute-1 sudo[270757]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:52:10 compute-1 sudo[270757]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:52:10 compute-1 sudo[270757]: pam_unix(sudo:session): session closed for user root
Jan 20 14:52:10 compute-1 sudo[270782]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:52:10 compute-1 sudo[270782]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:52:10 compute-1 sudo[270782]: pam_unix(sudo:session): session closed for user root
Jan 20 14:52:10 compute-1 nova_compute[225855]: 2026-01-20 14:52:10.941 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:52:11 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:52:11 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:52:11 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:52:11.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:52:11 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:52:11 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:52:11 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:52:11.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:52:12 compute-1 nova_compute[225855]: 2026-01-20 14:52:12.277 225859 INFO nova.virt.libvirt.driver [None req-98302ea4-660c-495b-a273-3923f15a0b09 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] [instance: 23ea4537-f03f-46de-881f-b979e232a3b9] Snapshot image upload complete
Jan 20 14:52:12 compute-1 nova_compute[225855]: 2026-01-20 14:52:12.278 225859 INFO nova.compute.manager [None req-98302ea4-660c-495b-a273-3923f15a0b09 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] [instance: 23ea4537-f03f-46de-881f-b979e232a3b9] Took 6.59 seconds to snapshot the instance on the hypervisor.
Jan 20 14:52:12 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e264 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:52:12 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:52:12.810 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=37, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '12:bb:42', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '06:92:24:f7:15:56'}, ipsec=False) old=SB_Global(nb_cfg=36) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:52:12 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:52:12.811 140354 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 20 14:52:12 compute-1 nova_compute[225855]: 2026-01-20 14:52:12.812 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:52:13 compute-1 ceph-mon[81775]: pgmap v1947: 321 pgs: 321 active+clean; 552 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 11 MiB/s rd, 6.1 MiB/s wr, 419 op/s
Jan 20 14:52:13 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:52:13 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:52:13 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:52:13.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:52:13 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:52:13 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:52:13 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:52:13.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:52:14 compute-1 nova_compute[225855]: 2026-01-20 14:52:14.411 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:52:14 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e265 e265: 3 total, 3 up, 3 in
Jan 20 14:52:15 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:52:15 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:52:15 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:52:15.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:52:15 compute-1 ceph-mon[81775]: pgmap v1948: 321 pgs: 321 active+clean; 561 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 11 MiB/s rd, 6.3 MiB/s wr, 461 op/s
Jan 20 14:52:15 compute-1 ceph-mon[81775]: osdmap e265: 3 total, 3 up, 3 in
Jan 20 14:52:15 compute-1 ovn_controller[130490]: 2026-01-20T14:52:15Z|00057|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:b5:55:3c 10.100.0.14
Jan 20 14:52:15 compute-1 ovn_controller[130490]: 2026-01-20T14:52:15Z|00058|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:b5:55:3c 10.100.0.14
Jan 20 14:52:15 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:52:15 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:52:15 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:52:15.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:52:15 compute-1 nova_compute[225855]: 2026-01-20 14:52:15.941 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:52:16 compute-1 ceph-mon[81775]: pgmap v1950: 321 pgs: 321 active+clean; 550 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 8.4 MiB/s rd, 6.5 MiB/s wr, 369 op/s
Jan 20 14:52:16 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/592155962' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:52:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:52:16.411 140354 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:52:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:52:16.412 140354 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:52:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:52:16.412 140354 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:52:17 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:52:17 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:52:17 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:52:17.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:52:17 compute-1 nova_compute[225855]: 2026-01-20 14:52:17.175 225859 INFO nova.compute.manager [None req-863cf6e0-1eda-4be3-ad66-8b6c9d7954d5 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] [instance: 23ea4537-f03f-46de-881f-b979e232a3b9] Rescuing
Jan 20 14:52:17 compute-1 nova_compute[225855]: 2026-01-20 14:52:17.175 225859 DEBUG oslo_concurrency.lockutils [None req-863cf6e0-1eda-4be3-ad66-8b6c9d7954d5 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Acquiring lock "refresh_cache-23ea4537-f03f-46de-881f-b979e232a3b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:52:17 compute-1 nova_compute[225855]: 2026-01-20 14:52:17.175 225859 DEBUG oslo_concurrency.lockutils [None req-863cf6e0-1eda-4be3-ad66-8b6c9d7954d5 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Acquired lock "refresh_cache-23ea4537-f03f-46de-881f-b979e232a3b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:52:17 compute-1 nova_compute[225855]: 2026-01-20 14:52:17.176 225859 DEBUG nova.network.neutron [None req-863cf6e0-1eda-4be3-ad66-8b6c9d7954d5 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] [instance: 23ea4537-f03f-46de-881f-b979e232a3b9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 14:52:17 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:52:17 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:52:17 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:52:17 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:52:17.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:52:18 compute-1 ceph-mon[81775]: pgmap v1951: 321 pgs: 321 active+clean; 538 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 5.9 MiB/s rd, 5.4 MiB/s wr, 312 op/s
Jan 20 14:52:19 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:52:19 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:52:19 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:52:19.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:52:19 compute-1 nova_compute[225855]: 2026-01-20 14:52:19.413 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:52:19 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:52:19 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:52:19 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:52:19.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:52:20 compute-1 nova_compute[225855]: 2026-01-20 14:52:20.325 225859 DEBUG oslo_concurrency.lockutils [None req-f87f7e56-fd46-4e2f-ae00-24f38def3109 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] Acquiring lock "932fd680-9aa0-49b4-9915-fa55104aaad7" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:52:20 compute-1 nova_compute[225855]: 2026-01-20 14:52:20.326 225859 DEBUG oslo_concurrency.lockutils [None req-f87f7e56-fd46-4e2f-ae00-24f38def3109 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] Lock "932fd680-9aa0-49b4-9915-fa55104aaad7" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:52:20 compute-1 nova_compute[225855]: 2026-01-20 14:52:20.326 225859 DEBUG oslo_concurrency.lockutils [None req-f87f7e56-fd46-4e2f-ae00-24f38def3109 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] Acquiring lock "932fd680-9aa0-49b4-9915-fa55104aaad7-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:52:20 compute-1 nova_compute[225855]: 2026-01-20 14:52:20.326 225859 DEBUG oslo_concurrency.lockutils [None req-f87f7e56-fd46-4e2f-ae00-24f38def3109 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] Lock "932fd680-9aa0-49b4-9915-fa55104aaad7-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:52:20 compute-1 nova_compute[225855]: 2026-01-20 14:52:20.327 225859 DEBUG oslo_concurrency.lockutils [None req-f87f7e56-fd46-4e2f-ae00-24f38def3109 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] Lock "932fd680-9aa0-49b4-9915-fa55104aaad7-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:52:20 compute-1 nova_compute[225855]: 2026-01-20 14:52:20.328 225859 INFO nova.compute.manager [None req-f87f7e56-fd46-4e2f-ae00-24f38def3109 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] [instance: 932fd680-9aa0-49b4-9915-fa55104aaad7] Terminating instance
Jan 20 14:52:20 compute-1 nova_compute[225855]: 2026-01-20 14:52:20.329 225859 DEBUG oslo_concurrency.lockutils [None req-f87f7e56-fd46-4e2f-ae00-24f38def3109 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] Acquiring lock "refresh_cache-932fd680-9aa0-49b4-9915-fa55104aaad7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:52:20 compute-1 nova_compute[225855]: 2026-01-20 14:52:20.329 225859 DEBUG oslo_concurrency.lockutils [None req-f87f7e56-fd46-4e2f-ae00-24f38def3109 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] Acquired lock "refresh_cache-932fd680-9aa0-49b4-9915-fa55104aaad7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:52:20 compute-1 nova_compute[225855]: 2026-01-20 14:52:20.329 225859 DEBUG nova.network.neutron [None req-f87f7e56-fd46-4e2f-ae00-24f38def3109 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] [instance: 932fd680-9aa0-49b4-9915-fa55104aaad7] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 14:52:20 compute-1 nova_compute[225855]: 2026-01-20 14:52:20.800 225859 DEBUG nova.network.neutron [None req-f87f7e56-fd46-4e2f-ae00-24f38def3109 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] [instance: 932fd680-9aa0-49b4-9915-fa55104aaad7] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 20 14:52:20 compute-1 nova_compute[225855]: 2026-01-20 14:52:20.943 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:52:21 compute-1 ceph-mon[81775]: pgmap v1952: 321 pgs: 321 active+clean; 543 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 5.4 MiB/s rd, 5.2 MiB/s wr, 299 op/s
Jan 20 14:52:21 compute-1 podman[270813]: 2026-01-20 14:52:21.008007108 +0000 UTC m=+0.050027874 container health_status 533cd8e0c3b8b7f910f15cef49e7f879ab46c12ce646a35628c15f13725eb500 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 20 14:52:21 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:52:21 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:52:21 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:52:21.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:52:21 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:52:21 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:52:21 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:52:21.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:52:22 compute-1 ceph-mon[81775]: pgmap v1953: 321 pgs: 321 active+clean; 567 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 5.0 MiB/s wr, 210 op/s
Jan 20 14:52:22 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:52:22 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:52:22.813 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5ffd4ac3-9266-4927-98ad-20a17782c725, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '37'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:52:23 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:52:23 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:52:23 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:52:23.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:52:23 compute-1 nova_compute[225855]: 2026-01-20 14:52:23.104 225859 DEBUG nova.network.neutron [None req-f87f7e56-fd46-4e2f-ae00-24f38def3109 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] [instance: 932fd680-9aa0-49b4-9915-fa55104aaad7] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:52:23 compute-1 nova_compute[225855]: 2026-01-20 14:52:23.128 225859 DEBUG oslo_concurrency.lockutils [None req-f87f7e56-fd46-4e2f-ae00-24f38def3109 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] Releasing lock "refresh_cache-932fd680-9aa0-49b4-9915-fa55104aaad7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:52:23 compute-1 nova_compute[225855]: 2026-01-20 14:52:23.128 225859 DEBUG nova.compute.manager [None req-f87f7e56-fd46-4e2f-ae00-24f38def3109 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] [instance: 932fd680-9aa0-49b4-9915-fa55104aaad7] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 20 14:52:23 compute-1 systemd[1]: machine-qemu\x2d51\x2dinstance\x2d00000072.scope: Deactivated successfully.
Jan 20 14:52:23 compute-1 systemd[1]: machine-qemu\x2d51\x2dinstance\x2d00000072.scope: Consumed 14.404s CPU time.
Jan 20 14:52:23 compute-1 systemd-machined[194361]: Machine qemu-51-instance-00000072 terminated.
Jan 20 14:52:23 compute-1 nova_compute[225855]: 2026-01-20 14:52:23.217 225859 DEBUG nova.network.neutron [None req-863cf6e0-1eda-4be3-ad66-8b6c9d7954d5 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] [instance: 23ea4537-f03f-46de-881f-b979e232a3b9] Updating instance_info_cache with network_info: [{"id": "234381ea-07b1-41fe-b3c1-be97ce6a3b64", "address": "fa:16:3e:b5:55:3c", "network": {"id": "79184781-1f23-4584-87de-08e262242488", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-165460946-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a29915e0dd2403fbd7b7e847696b00a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap234381ea-07", "ovs_interfaceid": "234381ea-07b1-41fe-b3c1-be97ce6a3b64", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:52:23 compute-1 nova_compute[225855]: 2026-01-20 14:52:23.243 225859 DEBUG oslo_concurrency.lockutils [None req-863cf6e0-1eda-4be3-ad66-8b6c9d7954d5 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Releasing lock "refresh_cache-23ea4537-f03f-46de-881f-b979e232a3b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:52:23 compute-1 nova_compute[225855]: 2026-01-20 14:52:23.352 225859 INFO nova.virt.libvirt.driver [-] [instance: 932fd680-9aa0-49b4-9915-fa55104aaad7] Instance destroyed successfully.
Jan 20 14:52:23 compute-1 nova_compute[225855]: 2026-01-20 14:52:23.353 225859 DEBUG nova.objects.instance [None req-f87f7e56-fd46-4e2f-ae00-24f38def3109 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] Lazy-loading 'resources' on Instance uuid 932fd680-9aa0-49b4-9915-fa55104aaad7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:52:23 compute-1 nova_compute[225855]: 2026-01-20 14:52:23.777 225859 INFO nova.virt.libvirt.driver [None req-f87f7e56-fd46-4e2f-ae00-24f38def3109 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] [instance: 932fd680-9aa0-49b4-9915-fa55104aaad7] Deleting instance files /var/lib/nova/instances/932fd680-9aa0-49b4-9915-fa55104aaad7_del
Jan 20 14:52:23 compute-1 nova_compute[225855]: 2026-01-20 14:52:23.779 225859 INFO nova.virt.libvirt.driver [None req-f87f7e56-fd46-4e2f-ae00-24f38def3109 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] [instance: 932fd680-9aa0-49b4-9915-fa55104aaad7] Deletion of /var/lib/nova/instances/932fd680-9aa0-49b4-9915-fa55104aaad7_del complete
Jan 20 14:52:23 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:52:23 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:52:23 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:52:23.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:52:24 compute-1 nova_compute[225855]: 2026-01-20 14:52:24.415 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:52:24 compute-1 ceph-mon[81775]: pgmap v1954: 321 pgs: 321 active+clean; 578 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 797 KiB/s rd, 5.0 MiB/s wr, 178 op/s
Jan 20 14:52:25 compute-1 nova_compute[225855]: 2026-01-20 14:52:25.069 225859 INFO nova.compute.manager [None req-f87f7e56-fd46-4e2f-ae00-24f38def3109 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] [instance: 932fd680-9aa0-49b4-9915-fa55104aaad7] Took 1.94 seconds to destroy the instance on the hypervisor.
Jan 20 14:52:25 compute-1 nova_compute[225855]: 2026-01-20 14:52:25.069 225859 DEBUG oslo.service.loopingcall [None req-f87f7e56-fd46-4e2f-ae00-24f38def3109 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 20 14:52:25 compute-1 nova_compute[225855]: 2026-01-20 14:52:25.069 225859 DEBUG nova.compute.manager [-] [instance: 932fd680-9aa0-49b4-9915-fa55104aaad7] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 20 14:52:25 compute-1 nova_compute[225855]: 2026-01-20 14:52:25.070 225859 DEBUG nova.network.neutron [-] [instance: 932fd680-9aa0-49b4-9915-fa55104aaad7] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 20 14:52:25 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:52:25 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:52:25 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:52:25.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:52:25 compute-1 nova_compute[225855]: 2026-01-20 14:52:25.457 225859 DEBUG nova.virt.libvirt.driver [None req-863cf6e0-1eda-4be3-ad66-8b6c9d7954d5 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] [instance: 23ea4537-f03f-46de-881f-b979e232a3b9] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Jan 20 14:52:25 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:52:25 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:52:25 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:52:25.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:52:25 compute-1 nova_compute[225855]: 2026-01-20 14:52:25.945 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:52:26 compute-1 ceph-mon[81775]: pgmap v1955: 321 pgs: 321 active+clean; 521 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 716 KiB/s rd, 4.5 MiB/s wr, 172 op/s
Jan 20 14:52:27 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:52:27 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:52:27 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:52:27.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:52:27 compute-1 nova_compute[225855]: 2026-01-20 14:52:27.281 225859 DEBUG nova.network.neutron [-] [instance: 932fd680-9aa0-49b4-9915-fa55104aaad7] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 20 14:52:27 compute-1 nova_compute[225855]: 2026-01-20 14:52:27.368 225859 DEBUG nova.network.neutron [-] [instance: 932fd680-9aa0-49b4-9915-fa55104aaad7] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:52:27 compute-1 nova_compute[225855]: 2026-01-20 14:52:27.395 225859 INFO nova.compute.manager [-] [instance: 932fd680-9aa0-49b4-9915-fa55104aaad7] Took 2.32 seconds to deallocate network for instance.
Jan 20 14:52:27 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:52:27 compute-1 nova_compute[225855]: 2026-01-20 14:52:27.501 225859 DEBUG oslo_concurrency.lockutils [None req-f87f7e56-fd46-4e2f-ae00-24f38def3109 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:52:27 compute-1 nova_compute[225855]: 2026-01-20 14:52:27.502 225859 DEBUG oslo_concurrency.lockutils [None req-f87f7e56-fd46-4e2f-ae00-24f38def3109 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:52:27 compute-1 nova_compute[225855]: 2026-01-20 14:52:27.682 225859 DEBUG oslo_concurrency.processutils [None req-f87f7e56-fd46-4e2f-ae00-24f38def3109 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:52:27 compute-1 kernel: tap234381ea-07 (unregistering): left promiscuous mode
Jan 20 14:52:27 compute-1 NetworkManager[49104]: <info>  [1768920747.7382] device (tap234381ea-07): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 20 14:52:27 compute-1 ovn_controller[130490]: 2026-01-20T14:52:27Z|00442|binding|INFO|Releasing lport 234381ea-07b1-41fe-b3c1-be97ce6a3b64 from this chassis (sb_readonly=0)
Jan 20 14:52:27 compute-1 ovn_controller[130490]: 2026-01-20T14:52:27Z|00443|binding|INFO|Setting lport 234381ea-07b1-41fe-b3c1-be97ce6a3b64 down in Southbound
Jan 20 14:52:27 compute-1 ovn_controller[130490]: 2026-01-20T14:52:27Z|00444|binding|INFO|Removing iface tap234381ea-07 ovn-installed in OVS
Jan 20 14:52:27 compute-1 nova_compute[225855]: 2026-01-20 14:52:27.747 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:52:27 compute-1 nova_compute[225855]: 2026-01-20 14:52:27.763 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:52:27 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:52:27.767 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b5:55:3c 10.100.0.14'], port_security=['fa:16:3e:b5:55:3c 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '23ea4537-f03f-46de-881f-b979e232a3b9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-79184781-1f23-4584-87de-08e262242488', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0a29915e0dd2403fbd7b7e847696b00a', 'neutron:revision_number': '4', 'neutron:security_group_ids': '30ec24b7-15ba-4aeb-9785-539071729f77', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6b73ab05-b29f-401a-84a5-ea1a96103f33, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], logical_port=234381ea-07b1-41fe-b3c1-be97ce6a3b64) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:52:27 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:52:27.768 140354 INFO neutron.agent.ovn.metadata.agent [-] Port 234381ea-07b1-41fe-b3c1-be97ce6a3b64 in datapath 79184781-1f23-4584-87de-08e262242488 unbound from our chassis
Jan 20 14:52:27 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:52:27.770 140354 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 79184781-1f23-4584-87de-08e262242488, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 20 14:52:27 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:52:27.774 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[f0b36381-fdc2-4163-a2fe-88c01f0eef0d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:52:27 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:52:27.774 140354 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-79184781-1f23-4584-87de-08e262242488 namespace which is not needed anymore
Jan 20 14:52:27 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:52:27 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:52:27 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:52:27.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:52:27 compute-1 systemd[1]: machine-qemu\x2d52\x2dinstance\x2d00000075.scope: Deactivated successfully.
Jan 20 14:52:27 compute-1 systemd[1]: machine-qemu\x2d52\x2dinstance\x2d00000075.scope: Consumed 15.258s CPU time.
Jan 20 14:52:27 compute-1 systemd-machined[194361]: Machine qemu-52-instance-00000075 terminated.
Jan 20 14:52:27 compute-1 neutron-haproxy-ovnmeta-79184781-1f23-4584-87de-08e262242488[270527]: [NOTICE]   (270531) : haproxy version is 2.8.14-c23fe91
Jan 20 14:52:27 compute-1 neutron-haproxy-ovnmeta-79184781-1f23-4584-87de-08e262242488[270527]: [NOTICE]   (270531) : path to executable is /usr/sbin/haproxy
Jan 20 14:52:27 compute-1 neutron-haproxy-ovnmeta-79184781-1f23-4584-87de-08e262242488[270527]: [WARNING]  (270531) : Exiting Master process...
Jan 20 14:52:27 compute-1 neutron-haproxy-ovnmeta-79184781-1f23-4584-87de-08e262242488[270527]: [ALERT]    (270531) : Current worker (270533) exited with code 143 (Terminated)
Jan 20 14:52:27 compute-1 neutron-haproxy-ovnmeta-79184781-1f23-4584-87de-08e262242488[270527]: [WARNING]  (270531) : All workers exited. Exiting... (0)
Jan 20 14:52:27 compute-1 systemd[1]: libpod-60e161a7f3185d9e79dabaf678e5dc136ea2a7d8acb23d5437f931e610606341.scope: Deactivated successfully.
Jan 20 14:52:27 compute-1 podman[270895]: 2026-01-20 14:52:27.903434554 +0000 UTC m=+0.046016110 container died 60e161a7f3185d9e79dabaf678e5dc136ea2a7d8acb23d5437f931e610606341 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-79184781-1f23-4584-87de-08e262242488, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 20 14:52:27 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-60e161a7f3185d9e79dabaf678e5dc136ea2a7d8acb23d5437f931e610606341-userdata-shm.mount: Deactivated successfully.
Jan 20 14:52:27 compute-1 systemd[1]: var-lib-containers-storage-overlay-f4d48cd3dd5c0d4dc71eeb194f1e389fb38ec773128a339e2e0b3111348c8895-merged.mount: Deactivated successfully.
Jan 20 14:52:27 compute-1 podman[270895]: 2026-01-20 14:52:27.953451626 +0000 UTC m=+0.096033182 container cleanup 60e161a7f3185d9e79dabaf678e5dc136ea2a7d8acb23d5437f931e610606341 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-79184781-1f23-4584-87de-08e262242488, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Jan 20 14:52:27 compute-1 systemd[1]: libpod-conmon-60e161a7f3185d9e79dabaf678e5dc136ea2a7d8acb23d5437f931e610606341.scope: Deactivated successfully.
Jan 20 14:52:27 compute-1 nova_compute[225855]: 2026-01-20 14:52:27.971 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:52:27 compute-1 nova_compute[225855]: 2026-01-20 14:52:27.977 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:52:28 compute-1 podman[270933]: 2026-01-20 14:52:28.024649376 +0000 UTC m=+0.048961643 container remove 60e161a7f3185d9e79dabaf678e5dc136ea2a7d8acb23d5437f931e610606341 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-79184781-1f23-4584-87de-08e262242488, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:52:28 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:52:28.030 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[b38290e0-0985-437a-b355-c65d6374e8a6]: (4, ('Tue Jan 20 02:52:27 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-79184781-1f23-4584-87de-08e262242488 (60e161a7f3185d9e79dabaf678e5dc136ea2a7d8acb23d5437f931e610606341)\n60e161a7f3185d9e79dabaf678e5dc136ea2a7d8acb23d5437f931e610606341\nTue Jan 20 02:52:27 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-79184781-1f23-4584-87de-08e262242488 (60e161a7f3185d9e79dabaf678e5dc136ea2a7d8acb23d5437f931e610606341)\n60e161a7f3185d9e79dabaf678e5dc136ea2a7d8acb23d5437f931e610606341\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:52:28 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:52:28.032 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[1dfa018f-b3fe-479b-9da2-768e68d59303]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:52:28 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:52:28.032 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap79184781-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:52:28 compute-1 nova_compute[225855]: 2026-01-20 14:52:28.034 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:52:28 compute-1 kernel: tap79184781-10: left promiscuous mode
Jan 20 14:52:28 compute-1 nova_compute[225855]: 2026-01-20 14:52:28.051 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:52:28 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:52:28.054 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[2e2ef3c2-0a29-437c-ad7a-a9bbf08b3306]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:52:28 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:52:28.080 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[ab088bd7-e269-4b1a-8e50-76d42b0a0b1d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:52:28 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:52:28.081 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[5a3fa512-2402-4a93-8d11-47a96e49ec9e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:52:28 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:52:28.099 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[f071c2d6-1622-4378-b342-1be660670dc3]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 575938, 'reachable_time': 37445, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 270961, 'error': None, 'target': 'ovnmeta-79184781-1f23-4584-87de-08e262242488', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:52:28 compute-1 systemd[1]: run-netns-ovnmeta\x2d79184781\x2d1f23\x2d4584\x2d87de\x2d08e262242488.mount: Deactivated successfully.
Jan 20 14:52:28 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:52:28.102 140466 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-79184781-1f23-4584-87de-08e262242488 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 20 14:52:28 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:52:28.102 140466 DEBUG oslo.privsep.daemon [-] privsep: reply[04119278-7ad5-4eca-8825-7b7f4cc7172d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:52:28 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:52:28 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3273996106' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:52:28 compute-1 nova_compute[225855]: 2026-01-20 14:52:28.158 225859 DEBUG oslo_concurrency.processutils [None req-f87f7e56-fd46-4e2f-ae00-24f38def3109 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:52:28 compute-1 nova_compute[225855]: 2026-01-20 14:52:28.163 225859 DEBUG nova.compute.provider_tree [None req-f87f7e56-fd46-4e2f-ae00-24f38def3109 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:52:28 compute-1 nova_compute[225855]: 2026-01-20 14:52:28.476 225859 INFO nova.virt.libvirt.driver [None req-863cf6e0-1eda-4be3-ad66-8b6c9d7954d5 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] [instance: 23ea4537-f03f-46de-881f-b979e232a3b9] Instance shutdown successfully after 3 seconds.
Jan 20 14:52:28 compute-1 nova_compute[225855]: 2026-01-20 14:52:28.483 225859 INFO nova.virt.libvirt.driver [-] [instance: 23ea4537-f03f-46de-881f-b979e232a3b9] Instance destroyed successfully.
Jan 20 14:52:28 compute-1 nova_compute[225855]: 2026-01-20 14:52:28.484 225859 DEBUG nova.objects.instance [None req-863cf6e0-1eda-4be3-ad66-8b6c9d7954d5 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Lazy-loading 'numa_topology' on Instance uuid 23ea4537-f03f-46de-881f-b979e232a3b9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:52:28 compute-1 ceph-mon[81775]: pgmap v1956: 321 pgs: 321 active+clean; 501 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 642 KiB/s rd, 3.0 MiB/s wr, 154 op/s
Jan 20 14:52:28 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/3273996106' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:52:29 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:52:29 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:52:29 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:52:29.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:52:29 compute-1 nova_compute[225855]: 2026-01-20 14:52:29.417 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:52:29 compute-1 nova_compute[225855]: 2026-01-20 14:52:29.718 225859 DEBUG nova.scheduler.client.report [None req-f87f7e56-fd46-4e2f-ae00-24f38def3109 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:52:29 compute-1 nova_compute[225855]: 2026-01-20 14:52:29.726 225859 INFO nova.virt.libvirt.driver [None req-863cf6e0-1eda-4be3-ad66-8b6c9d7954d5 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] [instance: 23ea4537-f03f-46de-881f-b979e232a3b9] Attempting a stable device rescue
Jan 20 14:52:29 compute-1 nova_compute[225855]: 2026-01-20 14:52:29.744 225859 DEBUG oslo_concurrency.lockutils [None req-f87f7e56-fd46-4e2f-ae00-24f38def3109 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 2.241s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:52:29 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:52:29 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:52:29 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:52:29.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:52:29 compute-1 nova_compute[225855]: 2026-01-20 14:52:29.793 225859 INFO nova.scheduler.client.report [None req-f87f7e56-fd46-4e2f-ae00-24f38def3109 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] Deleted allocations for instance 932fd680-9aa0-49b4-9915-fa55104aaad7
Jan 20 14:52:29 compute-1 nova_compute[225855]: 2026-01-20 14:52:29.884 225859 DEBUG oslo_concurrency.lockutils [None req-f87f7e56-fd46-4e2f-ae00-24f38def3109 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] Lock "932fd680-9aa0-49b4-9915-fa55104aaad7" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 9.558s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:52:30 compute-1 nova_compute[225855]: 2026-01-20 14:52:30.097 225859 DEBUG nova.compute.manager [req-0b3af9a4-1892-417e-be6d-926cde5a5e9d req-453b1863-f94d-4eb7-847f-67fb8523eed7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 23ea4537-f03f-46de-881f-b979e232a3b9] Received event network-vif-unplugged-234381ea-07b1-41fe-b3c1-be97ce6a3b64 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:52:30 compute-1 nova_compute[225855]: 2026-01-20 14:52:30.098 225859 DEBUG oslo_concurrency.lockutils [req-0b3af9a4-1892-417e-be6d-926cde5a5e9d req-453b1863-f94d-4eb7-847f-67fb8523eed7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "23ea4537-f03f-46de-881f-b979e232a3b9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:52:30 compute-1 nova_compute[225855]: 2026-01-20 14:52:30.098 225859 DEBUG oslo_concurrency.lockutils [req-0b3af9a4-1892-417e-be6d-926cde5a5e9d req-453b1863-f94d-4eb7-847f-67fb8523eed7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "23ea4537-f03f-46de-881f-b979e232a3b9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:52:30 compute-1 nova_compute[225855]: 2026-01-20 14:52:30.098 225859 DEBUG oslo_concurrency.lockutils [req-0b3af9a4-1892-417e-be6d-926cde5a5e9d req-453b1863-f94d-4eb7-847f-67fb8523eed7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "23ea4537-f03f-46de-881f-b979e232a3b9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:52:30 compute-1 nova_compute[225855]: 2026-01-20 14:52:30.099 225859 DEBUG nova.compute.manager [req-0b3af9a4-1892-417e-be6d-926cde5a5e9d req-453b1863-f94d-4eb7-847f-67fb8523eed7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 23ea4537-f03f-46de-881f-b979e232a3b9] No waiting events found dispatching network-vif-unplugged-234381ea-07b1-41fe-b3c1-be97ce6a3b64 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:52:30 compute-1 nova_compute[225855]: 2026-01-20 14:52:30.099 225859 WARNING nova.compute.manager [req-0b3af9a4-1892-417e-be6d-926cde5a5e9d req-453b1863-f94d-4eb7-847f-67fb8523eed7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 23ea4537-f03f-46de-881f-b979e232a3b9] Received unexpected event network-vif-unplugged-234381ea-07b1-41fe-b3c1-be97ce6a3b64 for instance with vm_state active and task_state rescuing.
Jan 20 14:52:30 compute-1 nova_compute[225855]: 2026-01-20 14:52:30.243 225859 DEBUG nova.virt.libvirt.driver [None req-863cf6e0-1eda-4be3-ad66-8b6c9d7954d5 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] rescue generated disk_info: {'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}, 'disk.rescue': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}}} rescue /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4314
Jan 20 14:52:30 compute-1 nova_compute[225855]: 2026-01-20 14:52:30.248 225859 DEBUG nova.virt.libvirt.driver [None req-863cf6e0-1eda-4be3-ad66-8b6c9d7954d5 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] [instance: 23ea4537-f03f-46de-881f-b979e232a3b9] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719
Jan 20 14:52:30 compute-1 nova_compute[225855]: 2026-01-20 14:52:30.248 225859 INFO nova.virt.libvirt.driver [None req-863cf6e0-1eda-4be3-ad66-8b6c9d7954d5 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] [instance: 23ea4537-f03f-46de-881f-b979e232a3b9] Creating image(s)
Jan 20 14:52:30 compute-1 nova_compute[225855]: 2026-01-20 14:52:30.275 225859 DEBUG nova.storage.rbd_utils [None req-863cf6e0-1eda-4be3-ad66-8b6c9d7954d5 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] rbd image 23ea4537-f03f-46de-881f-b979e232a3b9_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:52:30 compute-1 nova_compute[225855]: 2026-01-20 14:52:30.278 225859 DEBUG nova.objects.instance [None req-863cf6e0-1eda-4be3-ad66-8b6c9d7954d5 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Lazy-loading 'trusted_certs' on Instance uuid 23ea4537-f03f-46de-881f-b979e232a3b9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:52:30 compute-1 sudo[270983]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:52:30 compute-1 sudo[270983]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:52:30 compute-1 sudo[270983]: pam_unix(sudo:session): session closed for user root
Jan 20 14:52:30 compute-1 nova_compute[225855]: 2026-01-20 14:52:30.947 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:52:30 compute-1 sudo[271008]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:52:30 compute-1 sudo[271008]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:52:30 compute-1 sudo[271008]: pam_unix(sudo:session): session closed for user root
Jan 20 14:52:31 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:52:31 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:52:31 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:52:31.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:52:31 compute-1 ceph-mon[81775]: pgmap v1957: 321 pgs: 321 active+clean; 501 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 488 KiB/s rd, 2.5 MiB/s wr, 119 op/s
Jan 20 14:52:31 compute-1 nova_compute[225855]: 2026-01-20 14:52:31.266 225859 DEBUG nova.storage.rbd_utils [None req-863cf6e0-1eda-4be3-ad66-8b6c9d7954d5 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] rbd image 23ea4537-f03f-46de-881f-b979e232a3b9_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:52:31 compute-1 nova_compute[225855]: 2026-01-20 14:52:31.295 225859 DEBUG nova.storage.rbd_utils [None req-863cf6e0-1eda-4be3-ad66-8b6c9d7954d5 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] rbd image 23ea4537-f03f-46de-881f-b979e232a3b9_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:52:31 compute-1 nova_compute[225855]: 2026-01-20 14:52:31.298 225859 DEBUG oslo_concurrency.lockutils [None req-863cf6e0-1eda-4be3-ad66-8b6c9d7954d5 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Acquiring lock "a92690a6b5c5730c73a0f5ee421c950863bba099" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:52:31 compute-1 nova_compute[225855]: 2026-01-20 14:52:31.299 225859 DEBUG oslo_concurrency.lockutils [None req-863cf6e0-1eda-4be3-ad66-8b6c9d7954d5 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Lock "a92690a6b5c5730c73a0f5ee421c950863bba099" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:52:31 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:52:31 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:52:31 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:52:31.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:52:32 compute-1 nova_compute[225855]: 2026-01-20 14:52:32.086 225859 DEBUG nova.virt.libvirt.imagebackend [None req-863cf6e0-1eda-4be3-ad66-8b6c9d7954d5 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Image locations are: [{'url': 'rbd://e399cf45-e6b6-5393-99f1-75c601d3f188/images/a1d1cbcb-c6a5-49c9-8868-06c3872a40d2/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://e399cf45-e6b6-5393-99f1-75c601d3f188/images/a1d1cbcb-c6a5-49c9-8868-06c3872a40d2/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Jan 20 14:52:32 compute-1 nova_compute[225855]: 2026-01-20 14:52:32.152 225859 DEBUG nova.virt.libvirt.imagebackend [None req-863cf6e0-1eda-4be3-ad66-8b6c9d7954d5 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Selected location: {'url': 'rbd://e399cf45-e6b6-5393-99f1-75c601d3f188/images/a1d1cbcb-c6a5-49c9-8868-06c3872a40d2/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094
Jan 20 14:52:32 compute-1 nova_compute[225855]: 2026-01-20 14:52:32.153 225859 DEBUG nova.storage.rbd_utils [None req-863cf6e0-1eda-4be3-ad66-8b6c9d7954d5 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] cloning images/a1d1cbcb-c6a5-49c9-8868-06c3872a40d2@snap to None/23ea4537-f03f-46de-881f-b979e232a3b9_disk.rescue clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Jan 20 14:52:32 compute-1 nova_compute[225855]: 2026-01-20 14:52:32.234 225859 DEBUG nova.compute.manager [req-6a91aa19-29fb-49c0-bb7a-224473e4046b req-5f0216ab-efc3-436e-9828-0734421de22e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 23ea4537-f03f-46de-881f-b979e232a3b9] Received event network-vif-plugged-234381ea-07b1-41fe-b3c1-be97ce6a3b64 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:52:32 compute-1 nova_compute[225855]: 2026-01-20 14:52:32.235 225859 DEBUG oslo_concurrency.lockutils [req-6a91aa19-29fb-49c0-bb7a-224473e4046b req-5f0216ab-efc3-436e-9828-0734421de22e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "23ea4537-f03f-46de-881f-b979e232a3b9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:52:32 compute-1 nova_compute[225855]: 2026-01-20 14:52:32.235 225859 DEBUG oslo_concurrency.lockutils [req-6a91aa19-29fb-49c0-bb7a-224473e4046b req-5f0216ab-efc3-436e-9828-0734421de22e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "23ea4537-f03f-46de-881f-b979e232a3b9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:52:32 compute-1 nova_compute[225855]: 2026-01-20 14:52:32.235 225859 DEBUG oslo_concurrency.lockutils [req-6a91aa19-29fb-49c0-bb7a-224473e4046b req-5f0216ab-efc3-436e-9828-0734421de22e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "23ea4537-f03f-46de-881f-b979e232a3b9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:52:32 compute-1 nova_compute[225855]: 2026-01-20 14:52:32.236 225859 DEBUG nova.compute.manager [req-6a91aa19-29fb-49c0-bb7a-224473e4046b req-5f0216ab-efc3-436e-9828-0734421de22e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 23ea4537-f03f-46de-881f-b979e232a3b9] No waiting events found dispatching network-vif-plugged-234381ea-07b1-41fe-b3c1-be97ce6a3b64 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:52:32 compute-1 nova_compute[225855]: 2026-01-20 14:52:32.236 225859 WARNING nova.compute.manager [req-6a91aa19-29fb-49c0-bb7a-224473e4046b req-5f0216ab-efc3-436e-9828-0734421de22e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 23ea4537-f03f-46de-881f-b979e232a3b9] Received unexpected event network-vif-plugged-234381ea-07b1-41fe-b3c1-be97ce6a3b64 for instance with vm_state active and task_state rescuing.
Jan 20 14:52:32 compute-1 nova_compute[225855]: 2026-01-20 14:52:32.247 225859 DEBUG oslo_concurrency.lockutils [None req-863cf6e0-1eda-4be3-ad66-8b6c9d7954d5 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Lock "a92690a6b5c5730c73a0f5ee421c950863bba099" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.948s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:52:32 compute-1 nova_compute[225855]: 2026-01-20 14:52:32.291 225859 DEBUG nova.objects.instance [None req-863cf6e0-1eda-4be3-ad66-8b6c9d7954d5 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Lazy-loading 'migration_context' on Instance uuid 23ea4537-f03f-46de-881f-b979e232a3b9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:52:32 compute-1 nova_compute[225855]: 2026-01-20 14:52:32.306 225859 DEBUG nova.virt.libvirt.driver [None req-863cf6e0-1eda-4be3-ad66-8b6c9d7954d5 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] [instance: 23ea4537-f03f-46de-881f-b979e232a3b9] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 20 14:52:32 compute-1 nova_compute[225855]: 2026-01-20 14:52:32.308 225859 DEBUG nova.virt.libvirt.driver [None req-863cf6e0-1eda-4be3-ad66-8b6c9d7954d5 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] [instance: 23ea4537-f03f-46de-881f-b979e232a3b9] Start _get_guest_xml network_info=[{"id": "234381ea-07b1-41fe-b3c1-be97ce6a3b64", "address": "fa:16:3e:b5:55:3c", "network": {"id": "79184781-1f23-4584-87de-08e262242488", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-165460946-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerStableDeviceRescueTest-165460946-network", "vif_mac": "fa:16:3e:b5:55:3c"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a29915e0dd2403fbd7b7e847696b00a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap234381ea-07", "ovs_interfaceid": "234381ea-07b1-41fe-b3c1-be97ce6a3b64", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}, 'disk.rescue': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>) rescue={'image_id': 'a1d1cbcb-c6a5-49c9-8868-06c3872a40d2', 'kernel_id': '', 'ramdisk_id': ''} block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'device_type': 'disk', 'encryption_options': None, 'size': 0, 'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'guest_format': None, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'image_id': 'a32b3e07-16d8-46fd-9a7b-c242c432fcf9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 20 14:52:32 compute-1 nova_compute[225855]: 2026-01-20 14:52:32.308 225859 DEBUG nova.objects.instance [None req-863cf6e0-1eda-4be3-ad66-8b6c9d7954d5 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Lazy-loading 'resources' on Instance uuid 23ea4537-f03f-46de-881f-b979e232a3b9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:52:32 compute-1 nova_compute[225855]: 2026-01-20 14:52:32.326 225859 WARNING nova.virt.libvirt.driver [None req-863cf6e0-1eda-4be3-ad66-8b6c9d7954d5 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 14:52:32 compute-1 nova_compute[225855]: 2026-01-20 14:52:32.330 225859 DEBUG nova.virt.libvirt.host [None req-863cf6e0-1eda-4be3-ad66-8b6c9d7954d5 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 20 14:52:32 compute-1 nova_compute[225855]: 2026-01-20 14:52:32.331 225859 DEBUG nova.virt.libvirt.host [None req-863cf6e0-1eda-4be3-ad66-8b6c9d7954d5 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 20 14:52:32 compute-1 nova_compute[225855]: 2026-01-20 14:52:32.333 225859 DEBUG nova.virt.libvirt.host [None req-863cf6e0-1eda-4be3-ad66-8b6c9d7954d5 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 20 14:52:32 compute-1 nova_compute[225855]: 2026-01-20 14:52:32.334 225859 DEBUG nova.virt.libvirt.host [None req-863cf6e0-1eda-4be3-ad66-8b6c9d7954d5 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 20 14:52:32 compute-1 nova_compute[225855]: 2026-01-20 14:52:32.334 225859 DEBUG nova.virt.libvirt.driver [None req-863cf6e0-1eda-4be3-ad66-8b6c9d7954d5 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 20 14:52:32 compute-1 nova_compute[225855]: 2026-01-20 14:52:32.335 225859 DEBUG nova.virt.hardware [None req-863cf6e0-1eda-4be3-ad66-8b6c9d7954d5 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-20T14:21:55Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='522deaab-a741-4dbb-932d-d8b13a211c33',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 20 14:52:32 compute-1 nova_compute[225855]: 2026-01-20 14:52:32.335 225859 DEBUG nova.virt.hardware [None req-863cf6e0-1eda-4be3-ad66-8b6c9d7954d5 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 20 14:52:32 compute-1 nova_compute[225855]: 2026-01-20 14:52:32.335 225859 DEBUG nova.virt.hardware [None req-863cf6e0-1eda-4be3-ad66-8b6c9d7954d5 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 20 14:52:32 compute-1 nova_compute[225855]: 2026-01-20 14:52:32.335 225859 DEBUG nova.virt.hardware [None req-863cf6e0-1eda-4be3-ad66-8b6c9d7954d5 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 20 14:52:32 compute-1 nova_compute[225855]: 2026-01-20 14:52:32.336 225859 DEBUG nova.virt.hardware [None req-863cf6e0-1eda-4be3-ad66-8b6c9d7954d5 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 20 14:52:32 compute-1 nova_compute[225855]: 2026-01-20 14:52:32.336 225859 DEBUG nova.virt.hardware [None req-863cf6e0-1eda-4be3-ad66-8b6c9d7954d5 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 20 14:52:32 compute-1 nova_compute[225855]: 2026-01-20 14:52:32.336 225859 DEBUG nova.virt.hardware [None req-863cf6e0-1eda-4be3-ad66-8b6c9d7954d5 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 20 14:52:32 compute-1 nova_compute[225855]: 2026-01-20 14:52:32.336 225859 DEBUG nova.virt.hardware [None req-863cf6e0-1eda-4be3-ad66-8b6c9d7954d5 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 20 14:52:32 compute-1 nova_compute[225855]: 2026-01-20 14:52:32.337 225859 DEBUG nova.virt.hardware [None req-863cf6e0-1eda-4be3-ad66-8b6c9d7954d5 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 20 14:52:32 compute-1 nova_compute[225855]: 2026-01-20 14:52:32.337 225859 DEBUG nova.virt.hardware [None req-863cf6e0-1eda-4be3-ad66-8b6c9d7954d5 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 20 14:52:32 compute-1 nova_compute[225855]: 2026-01-20 14:52:32.337 225859 DEBUG nova.virt.hardware [None req-863cf6e0-1eda-4be3-ad66-8b6c9d7954d5 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 20 14:52:32 compute-1 nova_compute[225855]: 2026-01-20 14:52:32.337 225859 DEBUG nova.objects.instance [None req-863cf6e0-1eda-4be3-ad66-8b6c9d7954d5 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Lazy-loading 'vcpu_model' on Instance uuid 23ea4537-f03f-46de-881f-b979e232a3b9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:52:32 compute-1 nova_compute[225855]: 2026-01-20 14:52:32.357 225859 DEBUG oslo_concurrency.processutils [None req-863cf6e0-1eda-4be3-ad66-8b6c9d7954d5 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:52:32 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:52:32 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:52:32 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/861012501' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:52:32 compute-1 nova_compute[225855]: 2026-01-20 14:52:32.806 225859 DEBUG oslo_concurrency.processutils [None req-863cf6e0-1eda-4be3-ad66-8b6c9d7954d5 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:52:32 compute-1 nova_compute[225855]: 2026-01-20 14:52:32.854 225859 DEBUG oslo_concurrency.processutils [None req-863cf6e0-1eda-4be3-ad66-8b6c9d7954d5 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:52:33 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:52:33 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:52:33 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:52:33.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:52:33 compute-1 ceph-mon[81775]: pgmap v1958: 321 pgs: 321 active+clean; 501 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 358 KiB/s rd, 2.1 MiB/s wr, 101 op/s
Jan 20 14:52:33 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/861012501' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:52:33 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:52:33 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3561049918' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:52:33 compute-1 nova_compute[225855]: 2026-01-20 14:52:33.294 225859 DEBUG oslo_concurrency.processutils [None req-863cf6e0-1eda-4be3-ad66-8b6c9d7954d5 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:52:33 compute-1 nova_compute[225855]: 2026-01-20 14:52:33.297 225859 DEBUG oslo_concurrency.processutils [None req-863cf6e0-1eda-4be3-ad66-8b6c9d7954d5 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:52:33 compute-1 nova_compute[225855]: 2026-01-20 14:52:33.340 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:52:33 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:52:33 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2828053923' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:52:33 compute-1 nova_compute[225855]: 2026-01-20 14:52:33.716 225859 DEBUG oslo_concurrency.processutils [None req-863cf6e0-1eda-4be3-ad66-8b6c9d7954d5 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.420s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:52:33 compute-1 nova_compute[225855]: 2026-01-20 14:52:33.719 225859 DEBUG nova.virt.libvirt.vif [None req-863cf6e0-1eda-4be3-ad66-8b6c9d7954d5 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T14:51:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerStableDeviceRescueTest-server-828759404',display_name='tempest-ServerStableDeviceRescueTest-server-828759404',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverstabledevicerescuetest-server-828759404',id=117,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-20T14:52:02Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='0a29915e0dd2403fbd7b7e847696b00a',ramdisk_id='',reservation_id='r-ztpzn050',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerStableDeviceRescueTest-129078052',owner_user_name='tempest-ServerStableDeviceRescueTest-129078052-project-member'},tags=<?>,task_state='rescuing',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T14:52:12Z,user_data=None,user_id='d85d286ce6224326a0f4a15a06afbfea',uuid=23ea4537-f03f-46de-881f-b979e232a3b9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "234381ea-07b1-41fe-b3c1-be97ce6a3b64", "address": "fa:16:3e:b5:55:3c", "network": {"id": "79184781-1f23-4584-87de-08e262242488", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-165460946-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerStableDeviceRescueTest-165460946-network", "vif_mac": "fa:16:3e:b5:55:3c"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a29915e0dd2403fbd7b7e847696b00a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap234381ea-07", "ovs_interfaceid": "234381ea-07b1-41fe-b3c1-be97ce6a3b64", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 20 14:52:33 compute-1 nova_compute[225855]: 2026-01-20 14:52:33.719 225859 DEBUG nova.network.os_vif_util [None req-863cf6e0-1eda-4be3-ad66-8b6c9d7954d5 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Converting VIF {"id": "234381ea-07b1-41fe-b3c1-be97ce6a3b64", "address": "fa:16:3e:b5:55:3c", "network": {"id": "79184781-1f23-4584-87de-08e262242488", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-165460946-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerStableDeviceRescueTest-165460946-network", "vif_mac": "fa:16:3e:b5:55:3c"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a29915e0dd2403fbd7b7e847696b00a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap234381ea-07", "ovs_interfaceid": "234381ea-07b1-41fe-b3c1-be97ce6a3b64", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:52:33 compute-1 nova_compute[225855]: 2026-01-20 14:52:33.720 225859 DEBUG nova.network.os_vif_util [None req-863cf6e0-1eda-4be3-ad66-8b6c9d7954d5 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:b5:55:3c,bridge_name='br-int',has_traffic_filtering=True,id=234381ea-07b1-41fe-b3c1-be97ce6a3b64,network=Network(79184781-1f23-4584-87de-08e262242488),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap234381ea-07') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:52:33 compute-1 nova_compute[225855]: 2026-01-20 14:52:33.722 225859 DEBUG nova.objects.instance [None req-863cf6e0-1eda-4be3-ad66-8b6c9d7954d5 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Lazy-loading 'pci_devices' on Instance uuid 23ea4537-f03f-46de-881f-b979e232a3b9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:52:33 compute-1 nova_compute[225855]: 2026-01-20 14:52:33.758 225859 DEBUG nova.virt.libvirt.driver [None req-863cf6e0-1eda-4be3-ad66-8b6c9d7954d5 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] [instance: 23ea4537-f03f-46de-881f-b979e232a3b9] End _get_guest_xml xml=<domain type="kvm">
Jan 20 14:52:33 compute-1 nova_compute[225855]:   <uuid>23ea4537-f03f-46de-881f-b979e232a3b9</uuid>
Jan 20 14:52:33 compute-1 nova_compute[225855]:   <name>instance-00000075</name>
Jan 20 14:52:33 compute-1 nova_compute[225855]:   <memory>131072</memory>
Jan 20 14:52:33 compute-1 nova_compute[225855]:   <vcpu>1</vcpu>
Jan 20 14:52:33 compute-1 nova_compute[225855]:   <metadata>
Jan 20 14:52:33 compute-1 nova_compute[225855]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 14:52:33 compute-1 nova_compute[225855]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 14:52:33 compute-1 nova_compute[225855]:       <nova:name>tempest-ServerStableDeviceRescueTest-server-828759404</nova:name>
Jan 20 14:52:33 compute-1 nova_compute[225855]:       <nova:creationTime>2026-01-20 14:52:32</nova:creationTime>
Jan 20 14:52:33 compute-1 nova_compute[225855]:       <nova:flavor name="m1.nano">
Jan 20 14:52:33 compute-1 nova_compute[225855]:         <nova:memory>128</nova:memory>
Jan 20 14:52:33 compute-1 nova_compute[225855]:         <nova:disk>1</nova:disk>
Jan 20 14:52:33 compute-1 nova_compute[225855]:         <nova:swap>0</nova:swap>
Jan 20 14:52:33 compute-1 nova_compute[225855]:         <nova:ephemeral>0</nova:ephemeral>
Jan 20 14:52:33 compute-1 nova_compute[225855]:         <nova:vcpus>1</nova:vcpus>
Jan 20 14:52:33 compute-1 nova_compute[225855]:       </nova:flavor>
Jan 20 14:52:33 compute-1 nova_compute[225855]:       <nova:owner>
Jan 20 14:52:33 compute-1 nova_compute[225855]:         <nova:user uuid="d85d286ce6224326a0f4a15a06afbfea">tempest-ServerStableDeviceRescueTest-129078052-project-member</nova:user>
Jan 20 14:52:33 compute-1 nova_compute[225855]:         <nova:project uuid="0a29915e0dd2403fbd7b7e847696b00a">tempest-ServerStableDeviceRescueTest-129078052</nova:project>
Jan 20 14:52:33 compute-1 nova_compute[225855]:       </nova:owner>
Jan 20 14:52:33 compute-1 nova_compute[225855]:       <nova:root type="image" uuid="a32b3e07-16d8-46fd-9a7b-c242c432fcf9"/>
Jan 20 14:52:33 compute-1 nova_compute[225855]:       <nova:ports>
Jan 20 14:52:33 compute-1 nova_compute[225855]:         <nova:port uuid="234381ea-07b1-41fe-b3c1-be97ce6a3b64">
Jan 20 14:52:33 compute-1 nova_compute[225855]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Jan 20 14:52:33 compute-1 nova_compute[225855]:         </nova:port>
Jan 20 14:52:33 compute-1 nova_compute[225855]:       </nova:ports>
Jan 20 14:52:33 compute-1 nova_compute[225855]:     </nova:instance>
Jan 20 14:52:33 compute-1 nova_compute[225855]:   </metadata>
Jan 20 14:52:33 compute-1 nova_compute[225855]:   <sysinfo type="smbios">
Jan 20 14:52:33 compute-1 nova_compute[225855]:     <system>
Jan 20 14:52:33 compute-1 nova_compute[225855]:       <entry name="manufacturer">RDO</entry>
Jan 20 14:52:33 compute-1 nova_compute[225855]:       <entry name="product">OpenStack Compute</entry>
Jan 20 14:52:33 compute-1 nova_compute[225855]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 14:52:33 compute-1 nova_compute[225855]:       <entry name="serial">23ea4537-f03f-46de-881f-b979e232a3b9</entry>
Jan 20 14:52:33 compute-1 nova_compute[225855]:       <entry name="uuid">23ea4537-f03f-46de-881f-b979e232a3b9</entry>
Jan 20 14:52:33 compute-1 nova_compute[225855]:       <entry name="family">Virtual Machine</entry>
Jan 20 14:52:33 compute-1 nova_compute[225855]:     </system>
Jan 20 14:52:33 compute-1 nova_compute[225855]:   </sysinfo>
Jan 20 14:52:33 compute-1 nova_compute[225855]:   <os>
Jan 20 14:52:33 compute-1 nova_compute[225855]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 20 14:52:33 compute-1 nova_compute[225855]:     <smbios mode="sysinfo"/>
Jan 20 14:52:33 compute-1 nova_compute[225855]:   </os>
Jan 20 14:52:33 compute-1 nova_compute[225855]:   <features>
Jan 20 14:52:33 compute-1 nova_compute[225855]:     <acpi/>
Jan 20 14:52:33 compute-1 nova_compute[225855]:     <apic/>
Jan 20 14:52:33 compute-1 nova_compute[225855]:     <vmcoreinfo/>
Jan 20 14:52:33 compute-1 nova_compute[225855]:   </features>
Jan 20 14:52:33 compute-1 nova_compute[225855]:   <clock offset="utc">
Jan 20 14:52:33 compute-1 nova_compute[225855]:     <timer name="pit" tickpolicy="delay"/>
Jan 20 14:52:33 compute-1 nova_compute[225855]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 20 14:52:33 compute-1 nova_compute[225855]:     <timer name="hpet" present="no"/>
Jan 20 14:52:33 compute-1 nova_compute[225855]:   </clock>
Jan 20 14:52:33 compute-1 nova_compute[225855]:   <cpu mode="custom" match="exact">
Jan 20 14:52:33 compute-1 nova_compute[225855]:     <model>Nehalem</model>
Jan 20 14:52:33 compute-1 nova_compute[225855]:     <topology sockets="1" cores="1" threads="1"/>
Jan 20 14:52:33 compute-1 nova_compute[225855]:   </cpu>
Jan 20 14:52:33 compute-1 nova_compute[225855]:   <devices>
Jan 20 14:52:33 compute-1 nova_compute[225855]:     <disk type="network" device="disk">
Jan 20 14:52:33 compute-1 nova_compute[225855]:       <driver type="raw" cache="none"/>
Jan 20 14:52:33 compute-1 nova_compute[225855]:       <source protocol="rbd" name="vms/23ea4537-f03f-46de-881f-b979e232a3b9_disk">
Jan 20 14:52:33 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:52:33 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:52:33 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:52:33 compute-1 nova_compute[225855]:       </source>
Jan 20 14:52:33 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 14:52:33 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:52:33 compute-1 nova_compute[225855]:       </auth>
Jan 20 14:52:33 compute-1 nova_compute[225855]:       <target dev="vda" bus="virtio"/>
Jan 20 14:52:33 compute-1 nova_compute[225855]:     </disk>
Jan 20 14:52:33 compute-1 nova_compute[225855]:     <disk type="network" device="cdrom">
Jan 20 14:52:33 compute-1 nova_compute[225855]:       <driver type="raw" cache="none"/>
Jan 20 14:52:33 compute-1 nova_compute[225855]:       <source protocol="rbd" name="vms/23ea4537-f03f-46de-881f-b979e232a3b9_disk.config">
Jan 20 14:52:33 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:52:33 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:52:33 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:52:33 compute-1 nova_compute[225855]:       </source>
Jan 20 14:52:33 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 14:52:33 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:52:33 compute-1 nova_compute[225855]:       </auth>
Jan 20 14:52:33 compute-1 nova_compute[225855]:       <target dev="sda" bus="sata"/>
Jan 20 14:52:33 compute-1 nova_compute[225855]:     </disk>
Jan 20 14:52:33 compute-1 nova_compute[225855]:     <disk type="network" device="disk">
Jan 20 14:52:33 compute-1 nova_compute[225855]:       <driver type="raw" cache="none"/>
Jan 20 14:52:33 compute-1 nova_compute[225855]:       <source protocol="rbd" name="vms/23ea4537-f03f-46de-881f-b979e232a3b9_disk.rescue">
Jan 20 14:52:33 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:52:33 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:52:33 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:52:33 compute-1 nova_compute[225855]:       </source>
Jan 20 14:52:33 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 14:52:33 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:52:33 compute-1 nova_compute[225855]:       </auth>
Jan 20 14:52:33 compute-1 nova_compute[225855]:       <target dev="vdb" bus="virtio"/>
Jan 20 14:52:33 compute-1 nova_compute[225855]:       <boot order="1"/>
Jan 20 14:52:33 compute-1 nova_compute[225855]:     </disk>
Jan 20 14:52:33 compute-1 nova_compute[225855]:     <interface type="ethernet">
Jan 20 14:52:33 compute-1 nova_compute[225855]:       <mac address="fa:16:3e:b5:55:3c"/>
Jan 20 14:52:33 compute-1 nova_compute[225855]:       <model type="virtio"/>
Jan 20 14:52:33 compute-1 nova_compute[225855]:       <driver name="vhost" rx_queue_size="512"/>
Jan 20 14:52:33 compute-1 nova_compute[225855]:       <mtu size="1442"/>
Jan 20 14:52:33 compute-1 nova_compute[225855]:       <target dev="tap234381ea-07"/>
Jan 20 14:52:33 compute-1 nova_compute[225855]:     </interface>
Jan 20 14:52:33 compute-1 nova_compute[225855]:     <serial type="pty">
Jan 20 14:52:33 compute-1 nova_compute[225855]:       <log file="/var/lib/nova/instances/23ea4537-f03f-46de-881f-b979e232a3b9/console.log" append="off"/>
Jan 20 14:52:33 compute-1 nova_compute[225855]:     </serial>
Jan 20 14:52:33 compute-1 nova_compute[225855]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 20 14:52:33 compute-1 nova_compute[225855]:     <video>
Jan 20 14:52:33 compute-1 nova_compute[225855]:       <model type="virtio"/>
Jan 20 14:52:33 compute-1 nova_compute[225855]:     </video>
Jan 20 14:52:33 compute-1 nova_compute[225855]:     <input type="tablet" bus="usb"/>
Jan 20 14:52:33 compute-1 nova_compute[225855]:     <rng model="virtio">
Jan 20 14:52:33 compute-1 nova_compute[225855]:       <backend model="random">/dev/urandom</backend>
Jan 20 14:52:33 compute-1 nova_compute[225855]:     </rng>
Jan 20 14:52:33 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root"/>
Jan 20 14:52:33 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:52:33 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:52:33 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:52:33 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:52:33 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:52:33 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:52:33 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:52:33 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:52:33 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:52:33 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:52:33 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:52:33 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:52:33 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:52:33 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:52:33 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:52:33 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:52:33 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:52:33 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:52:33 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:52:33 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:52:33 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:52:33 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:52:33 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:52:33 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:52:33 compute-1 nova_compute[225855]:     <controller type="usb" index="0"/>
Jan 20 14:52:33 compute-1 nova_compute[225855]:     <memballoon model="virtio">
Jan 20 14:52:33 compute-1 nova_compute[225855]:       <stats period="10"/>
Jan 20 14:52:33 compute-1 nova_compute[225855]:     </memballoon>
Jan 20 14:52:33 compute-1 nova_compute[225855]:   </devices>
Jan 20 14:52:33 compute-1 nova_compute[225855]: </domain>
Jan 20 14:52:33 compute-1 nova_compute[225855]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 20 14:52:33 compute-1 nova_compute[225855]: 2026-01-20 14:52:33.772 225859 INFO nova.virt.libvirt.driver [-] [instance: 23ea4537-f03f-46de-881f-b979e232a3b9] Instance destroyed successfully.
Jan 20 14:52:33 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:52:33 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:52:33 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:52:33.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:52:33 compute-1 nova_compute[225855]: 2026-01-20 14:52:33.848 225859 DEBUG nova.virt.libvirt.driver [None req-863cf6e0-1eda-4be3-ad66-8b6c9d7954d5 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:52:33 compute-1 nova_compute[225855]: 2026-01-20 14:52:33.849 225859 DEBUG nova.virt.libvirt.driver [None req-863cf6e0-1eda-4be3-ad66-8b6c9d7954d5 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:52:33 compute-1 nova_compute[225855]: 2026-01-20 14:52:33.849 225859 DEBUG nova.virt.libvirt.driver [None req-863cf6e0-1eda-4be3-ad66-8b6c9d7954d5 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:52:33 compute-1 nova_compute[225855]: 2026-01-20 14:52:33.850 225859 DEBUG nova.virt.libvirt.driver [None req-863cf6e0-1eda-4be3-ad66-8b6c9d7954d5 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] No VIF found with MAC fa:16:3e:b5:55:3c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 20 14:52:33 compute-1 nova_compute[225855]: 2026-01-20 14:52:33.851 225859 INFO nova.virt.libvirt.driver [None req-863cf6e0-1eda-4be3-ad66-8b6c9d7954d5 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] [instance: 23ea4537-f03f-46de-881f-b979e232a3b9] Using config drive
Jan 20 14:52:33 compute-1 nova_compute[225855]: 2026-01-20 14:52:33.893 225859 DEBUG nova.storage.rbd_utils [None req-863cf6e0-1eda-4be3-ad66-8b6c9d7954d5 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] rbd image 23ea4537-f03f-46de-881f-b979e232a3b9_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:52:33 compute-1 nova_compute[225855]: 2026-01-20 14:52:33.983 225859 DEBUG nova.objects.instance [None req-863cf6e0-1eda-4be3-ad66-8b6c9d7954d5 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Lazy-loading 'ec2_ids' on Instance uuid 23ea4537-f03f-46de-881f-b979e232a3b9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:52:34 compute-1 nova_compute[225855]: 2026-01-20 14:52:34.020 225859 DEBUG nova.objects.instance [None req-863cf6e0-1eda-4be3-ad66-8b6c9d7954d5 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Lazy-loading 'keypairs' on Instance uuid 23ea4537-f03f-46de-881f-b979e232a3b9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:52:34 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/3561049918' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:52:34 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/2828053923' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:52:34 compute-1 nova_compute[225855]: 2026-01-20 14:52:34.340 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:52:34 compute-1 nova_compute[225855]: 2026-01-20 14:52:34.420 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:52:35 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:52:35 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:52:35 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:52:35.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:52:35 compute-1 nova_compute[225855]: 2026-01-20 14:52:35.187 225859 INFO nova.virt.libvirt.driver [None req-863cf6e0-1eda-4be3-ad66-8b6c9d7954d5 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] [instance: 23ea4537-f03f-46de-881f-b979e232a3b9] Creating config drive at /var/lib/nova/instances/23ea4537-f03f-46de-881f-b979e232a3b9/disk.config.rescue
Jan 20 14:52:35 compute-1 nova_compute[225855]: 2026-01-20 14:52:35.195 225859 DEBUG oslo_concurrency.processutils [None req-863cf6e0-1eda-4be3-ad66-8b6c9d7954d5 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/23ea4537-f03f-46de-881f-b979e232a3b9/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpkwe5xshx execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:52:35 compute-1 nova_compute[225855]: 2026-01-20 14:52:35.594 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:52:35 compute-1 nova_compute[225855]: 2026-01-20 14:52:35.595 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 14:52:35 compute-1 nova_compute[225855]: 2026-01-20 14:52:35.595 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 20 14:52:35 compute-1 ceph-mon[81775]: pgmap v1959: 321 pgs: 321 active+clean; 501 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 209 KiB/s rd, 714 KiB/s wr, 70 op/s
Jan 20 14:52:35 compute-1 nova_compute[225855]: 2026-01-20 14:52:35.598 225859 DEBUG oslo_concurrency.processutils [None req-863cf6e0-1eda-4be3-ad66-8b6c9d7954d5 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/23ea4537-f03f-46de-881f-b979e232a3b9/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpkwe5xshx" returned: 0 in 0.403s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:52:35 compute-1 nova_compute[225855]: 2026-01-20 14:52:35.637 225859 DEBUG nova.storage.rbd_utils [None req-863cf6e0-1eda-4be3-ad66-8b6c9d7954d5 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] rbd image 23ea4537-f03f-46de-881f-b979e232a3b9_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:52:35 compute-1 nova_compute[225855]: 2026-01-20 14:52:35.640 225859 DEBUG oslo_concurrency.processutils [None req-863cf6e0-1eda-4be3-ad66-8b6c9d7954d5 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/23ea4537-f03f-46de-881f-b979e232a3b9/disk.config.rescue 23ea4537-f03f-46de-881f-b979e232a3b9_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:52:35 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:52:35 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:52:35 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:52:35.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:52:35 compute-1 nova_compute[225855]: 2026-01-20 14:52:35.810 225859 DEBUG oslo_concurrency.processutils [None req-863cf6e0-1eda-4be3-ad66-8b6c9d7954d5 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/23ea4537-f03f-46de-881f-b979e232a3b9/disk.config.rescue 23ea4537-f03f-46de-881f-b979e232a3b9_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.170s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:52:35 compute-1 nova_compute[225855]: 2026-01-20 14:52:35.812 225859 INFO nova.virt.libvirt.driver [None req-863cf6e0-1eda-4be3-ad66-8b6c9d7954d5 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] [instance: 23ea4537-f03f-46de-881f-b979e232a3b9] Deleting local config drive /var/lib/nova/instances/23ea4537-f03f-46de-881f-b979e232a3b9/disk.config.rescue because it was imported into RBD.
Jan 20 14:52:35 compute-1 nova_compute[225855]: 2026-01-20 14:52:35.817 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "refresh_cache-23ea4537-f03f-46de-881f-b979e232a3b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:52:35 compute-1 nova_compute[225855]: 2026-01-20 14:52:35.818 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquired lock "refresh_cache-23ea4537-f03f-46de-881f-b979e232a3b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:52:35 compute-1 nova_compute[225855]: 2026-01-20 14:52:35.818 225859 DEBUG nova.network.neutron [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] [instance: 23ea4537-f03f-46de-881f-b979e232a3b9] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 20 14:52:35 compute-1 nova_compute[225855]: 2026-01-20 14:52:35.818 225859 DEBUG nova.objects.instance [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 23ea4537-f03f-46de-881f-b979e232a3b9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:52:35 compute-1 kernel: tap234381ea-07: entered promiscuous mode
Jan 20 14:52:35 compute-1 NetworkManager[49104]: <info>  [1768920755.8822] manager: (tap234381ea-07): new Tun device (/org/freedesktop/NetworkManager/Devices/192)
Jan 20 14:52:35 compute-1 nova_compute[225855]: 2026-01-20 14:52:35.882 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:52:35 compute-1 ovn_controller[130490]: 2026-01-20T14:52:35Z|00445|binding|INFO|Claiming lport 234381ea-07b1-41fe-b3c1-be97ce6a3b64 for this chassis.
Jan 20 14:52:35 compute-1 ovn_controller[130490]: 2026-01-20T14:52:35Z|00446|binding|INFO|234381ea-07b1-41fe-b3c1-be97ce6a3b64: Claiming fa:16:3e:b5:55:3c 10.100.0.14
Jan 20 14:52:35 compute-1 ovn_controller[130490]: 2026-01-20T14:52:35Z|00447|binding|INFO|Setting lport 234381ea-07b1-41fe-b3c1-be97ce6a3b64 ovn-installed in OVS
Jan 20 14:52:35 compute-1 nova_compute[225855]: 2026-01-20 14:52:35.901 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:52:35 compute-1 nova_compute[225855]: 2026-01-20 14:52:35.904 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:52:35 compute-1 systemd-udevd[271313]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 14:52:35 compute-1 systemd-machined[194361]: New machine qemu-53-instance-00000075.
Jan 20 14:52:35 compute-1 NetworkManager[49104]: <info>  [1768920755.9217] device (tap234381ea-07): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 20 14:52:35 compute-1 NetworkManager[49104]: <info>  [1768920755.9223] device (tap234381ea-07): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 20 14:52:35 compute-1 systemd[1]: Started Virtual Machine qemu-53-instance-00000075.
Jan 20 14:52:35 compute-1 nova_compute[225855]: 2026-01-20 14:52:35.948 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:52:35 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:52:35.971 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b5:55:3c 10.100.0.14'], port_security=['fa:16:3e:b5:55:3c 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '23ea4537-f03f-46de-881f-b979e232a3b9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-79184781-1f23-4584-87de-08e262242488', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0a29915e0dd2403fbd7b7e847696b00a', 'neutron:revision_number': '5', 'neutron:security_group_ids': '30ec24b7-15ba-4aeb-9785-539071729f77', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6b73ab05-b29f-401a-84a5-ea1a96103f33, chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], logical_port=234381ea-07b1-41fe-b3c1-be97ce6a3b64) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:52:35 compute-1 ovn_controller[130490]: 2026-01-20T14:52:35Z|00448|binding|INFO|Setting lport 234381ea-07b1-41fe-b3c1-be97ce6a3b64 up in Southbound
Jan 20 14:52:35 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:52:35.972 140354 INFO neutron.agent.ovn.metadata.agent [-] Port 234381ea-07b1-41fe-b3c1-be97ce6a3b64 in datapath 79184781-1f23-4584-87de-08e262242488 bound to our chassis
Jan 20 14:52:35 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:52:35.973 140354 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 79184781-1f23-4584-87de-08e262242488
Jan 20 14:52:35 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:52:35.984 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[26eac545-e443-4b63-a110-111b9f0991f5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:52:35 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:52:35.985 140354 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap79184781-11 in ovnmeta-79184781-1f23-4584-87de-08e262242488 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 20 14:52:35 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:52:35.987 229707 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap79184781-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 20 14:52:35 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:52:35.987 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[a013a5f2-0907-4cd1-85fb-08562a7b2051]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:52:35 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:52:35.988 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[c67c91b7-4d4c-4c10-b6d8-3f49adacd3cd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:52:36 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:52:36.007 140466 DEBUG oslo.privsep.daemon [-] privsep: reply[5ff99268-c7e8-46a3-81da-3161d9a871b2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:52:36 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:52:36.021 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[c26a7436-1922-421c-9ab3-e27006e584a3]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:52:36 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:52:36.049 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[bfd2fce2-22d3-48b7-9feb-901a41f2d20d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:52:36 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:52:36.053 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[c83666cc-462c-4fad-92b7-8171804b7c2e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:52:36 compute-1 NetworkManager[49104]: <info>  [1768920756.0543] manager: (tap79184781-10): new Veth device (/org/freedesktop/NetworkManager/Devices/193)
Jan 20 14:52:36 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:52:36.095 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[6eab4527-51c1-46e3-8287-5f396f6dca1e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:52:36 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:52:36.098 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[614466c6-d02c-4b1a-b4f1-15775b0c026e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:52:36 compute-1 NetworkManager[49104]: <info>  [1768920756.1298] device (tap79184781-10): carrier: link connected
Jan 20 14:52:36 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:52:36.135 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[cce8b75d-004f-4faf-861c-d38a3c1af98c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:52:36 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:52:36.158 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[c113a54c-8cfd-4115-aa83-647275da955d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap79184781-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:38:7c:2a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 126], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 579505, 'reachable_time': 30090, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 271348, 'error': None, 'target': 'ovnmeta-79184781-1f23-4584-87de-08e262242488', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:52:36 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:52:36.176 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[655cec9d-6b64-43b4-966f-691cc71f4099]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe38:7c2a'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 579505, 'tstamp': 579505}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 271349, 'error': None, 'target': 'ovnmeta-79184781-1f23-4584-87de-08e262242488', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:52:36 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:52:36.194 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[fa0ef578-8502-4c4f-befc-a3b92a2fd52d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap79184781-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:38:7c:2a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 126], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 579505, 'reachable_time': 30090, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 271350, 'error': None, 'target': 'ovnmeta-79184781-1f23-4584-87de-08e262242488', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:52:36 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:52:36.228 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[4b4470f9-9174-4d99-8d00-0269aa13123c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:52:36 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:52:36.305 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[e094836a-e2b7-4474-acf4-1c41a78f75e6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:52:36 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:52:36.307 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap79184781-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:52:36 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:52:36.307 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:52:36 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:52:36.308 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap79184781-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:52:36 compute-1 nova_compute[225855]: 2026-01-20 14:52:36.310 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:52:36 compute-1 NetworkManager[49104]: <info>  [1768920756.3104] manager: (tap79184781-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/194)
Jan 20 14:52:36 compute-1 kernel: tap79184781-10: entered promiscuous mode
Jan 20 14:52:36 compute-1 nova_compute[225855]: 2026-01-20 14:52:36.314 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:52:36 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:52:36.314 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap79184781-10, col_values=(('external_ids', {'iface-id': 'b033e9e6-9781-4424-a20f-7b48a14e2c80'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:52:36 compute-1 nova_compute[225855]: 2026-01-20 14:52:36.315 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:52:36 compute-1 ovn_controller[130490]: 2026-01-20T14:52:36Z|00449|binding|INFO|Releasing lport b033e9e6-9781-4424-a20f-7b48a14e2c80 from this chassis (sb_readonly=0)
Jan 20 14:52:36 compute-1 nova_compute[225855]: 2026-01-20 14:52:36.348 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:52:36 compute-1 nova_compute[225855]: 2026-01-20 14:52:36.349 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:52:36 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:52:36.350 140354 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/79184781-1f23-4584-87de-08e262242488.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/79184781-1f23-4584-87de-08e262242488.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 20 14:52:36 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:52:36.351 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[1adae803-555d-4d16-8b4c-07ec44a7f03c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:52:36 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:52:36.352 140354 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 20 14:52:36 compute-1 ovn_metadata_agent[140349]: global
Jan 20 14:52:36 compute-1 ovn_metadata_agent[140349]:     log         /dev/log local0 debug
Jan 20 14:52:36 compute-1 ovn_metadata_agent[140349]:     log-tag     haproxy-metadata-proxy-79184781-1f23-4584-87de-08e262242488
Jan 20 14:52:36 compute-1 ovn_metadata_agent[140349]:     user        root
Jan 20 14:52:36 compute-1 ovn_metadata_agent[140349]:     group       root
Jan 20 14:52:36 compute-1 ovn_metadata_agent[140349]:     maxconn     1024
Jan 20 14:52:36 compute-1 ovn_metadata_agent[140349]:     pidfile     /var/lib/neutron/external/pids/79184781-1f23-4584-87de-08e262242488.pid.haproxy
Jan 20 14:52:36 compute-1 ovn_metadata_agent[140349]:     daemon
Jan 20 14:52:36 compute-1 ovn_metadata_agent[140349]: 
Jan 20 14:52:36 compute-1 ovn_metadata_agent[140349]: defaults
Jan 20 14:52:36 compute-1 ovn_metadata_agent[140349]:     log global
Jan 20 14:52:36 compute-1 ovn_metadata_agent[140349]:     mode http
Jan 20 14:52:36 compute-1 ovn_metadata_agent[140349]:     option httplog
Jan 20 14:52:36 compute-1 ovn_metadata_agent[140349]:     option dontlognull
Jan 20 14:52:36 compute-1 ovn_metadata_agent[140349]:     option http-server-close
Jan 20 14:52:36 compute-1 ovn_metadata_agent[140349]:     option forwardfor
Jan 20 14:52:36 compute-1 ovn_metadata_agent[140349]:     retries                 3
Jan 20 14:52:36 compute-1 ovn_metadata_agent[140349]:     timeout http-request    30s
Jan 20 14:52:36 compute-1 ovn_metadata_agent[140349]:     timeout connect         30s
Jan 20 14:52:36 compute-1 ovn_metadata_agent[140349]:     timeout client          32s
Jan 20 14:52:36 compute-1 ovn_metadata_agent[140349]:     timeout server          32s
Jan 20 14:52:36 compute-1 ovn_metadata_agent[140349]:     timeout http-keep-alive 30s
Jan 20 14:52:36 compute-1 ovn_metadata_agent[140349]: 
Jan 20 14:52:36 compute-1 ovn_metadata_agent[140349]: 
Jan 20 14:52:36 compute-1 ovn_metadata_agent[140349]: listen listener
Jan 20 14:52:36 compute-1 ovn_metadata_agent[140349]:     bind 169.254.169.254:80
Jan 20 14:52:36 compute-1 ovn_metadata_agent[140349]:     server metadata /var/lib/neutron/metadata_proxy
Jan 20 14:52:36 compute-1 ovn_metadata_agent[140349]:     http-request add-header X-OVN-Network-ID 79184781-1f23-4584-87de-08e262242488
Jan 20 14:52:36 compute-1 ovn_metadata_agent[140349]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 20 14:52:36 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:52:36.353 140354 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-79184781-1f23-4584-87de-08e262242488', 'env', 'PROCESS_TAG=haproxy-79184781-1f23-4584-87de-08e262242488', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/79184781-1f23-4584-87de-08e262242488.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 20 14:52:36 compute-1 nova_compute[225855]: 2026-01-20 14:52:36.540 225859 DEBUG nova.compute.manager [req-a5be19e0-0fad-443f-96a6-efb5a4aa0cef req-64bc2af3-95da-40e9-ad1d-dcdc8d49031e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 23ea4537-f03f-46de-881f-b979e232a3b9] Received event network-vif-plugged-234381ea-07b1-41fe-b3c1-be97ce6a3b64 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:52:36 compute-1 nova_compute[225855]: 2026-01-20 14:52:36.541 225859 DEBUG oslo_concurrency.lockutils [req-a5be19e0-0fad-443f-96a6-efb5a4aa0cef req-64bc2af3-95da-40e9-ad1d-dcdc8d49031e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "23ea4537-f03f-46de-881f-b979e232a3b9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:52:36 compute-1 nova_compute[225855]: 2026-01-20 14:52:36.541 225859 DEBUG oslo_concurrency.lockutils [req-a5be19e0-0fad-443f-96a6-efb5a4aa0cef req-64bc2af3-95da-40e9-ad1d-dcdc8d49031e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "23ea4537-f03f-46de-881f-b979e232a3b9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:52:36 compute-1 nova_compute[225855]: 2026-01-20 14:52:36.541 225859 DEBUG oslo_concurrency.lockutils [req-a5be19e0-0fad-443f-96a6-efb5a4aa0cef req-64bc2af3-95da-40e9-ad1d-dcdc8d49031e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "23ea4537-f03f-46de-881f-b979e232a3b9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:52:36 compute-1 nova_compute[225855]: 2026-01-20 14:52:36.542 225859 DEBUG nova.compute.manager [req-a5be19e0-0fad-443f-96a6-efb5a4aa0cef req-64bc2af3-95da-40e9-ad1d-dcdc8d49031e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 23ea4537-f03f-46de-881f-b979e232a3b9] No waiting events found dispatching network-vif-plugged-234381ea-07b1-41fe-b3c1-be97ce6a3b64 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:52:36 compute-1 nova_compute[225855]: 2026-01-20 14:52:36.542 225859 WARNING nova.compute.manager [req-a5be19e0-0fad-443f-96a6-efb5a4aa0cef req-64bc2af3-95da-40e9-ad1d-dcdc8d49031e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 23ea4537-f03f-46de-881f-b979e232a3b9] Received unexpected event network-vif-plugged-234381ea-07b1-41fe-b3c1-be97ce6a3b64 for instance with vm_state active and task_state rescuing.
Jan 20 14:52:36 compute-1 ceph-mon[81775]: pgmap v1960: 321 pgs: 321 active+clean; 501 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 46 KiB/s rd, 130 KiB/s wr, 51 op/s
Jan 20 14:52:36 compute-1 nova_compute[225855]: 2026-01-20 14:52:36.664 225859 DEBUG nova.virt.libvirt.host [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Removed pending event for 23ea4537-f03f-46de-881f-b979e232a3b9 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Jan 20 14:52:36 compute-1 nova_compute[225855]: 2026-01-20 14:52:36.665 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768920756.663731, 23ea4537-f03f-46de-881f-b979e232a3b9 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:52:36 compute-1 nova_compute[225855]: 2026-01-20 14:52:36.665 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 23ea4537-f03f-46de-881f-b979e232a3b9] VM Resumed (Lifecycle Event)
Jan 20 14:52:36 compute-1 nova_compute[225855]: 2026-01-20 14:52:36.669 225859 DEBUG nova.compute.manager [None req-863cf6e0-1eda-4be3-ad66-8b6c9d7954d5 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] [instance: 23ea4537-f03f-46de-881f-b979e232a3b9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:52:36 compute-1 nova_compute[225855]: 2026-01-20 14:52:36.693 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 23ea4537-f03f-46de-881f-b979e232a3b9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:52:36 compute-1 nova_compute[225855]: 2026-01-20 14:52:36.696 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 23ea4537-f03f-46de-881f-b979e232a3b9] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:52:36 compute-1 podman[271439]: 2026-01-20 14:52:36.717527275 +0000 UTC m=+0.060328955 container create b9869b2334cb22ebb41a8e59d6adfb7d39a9957a4d3680921081b2143268cfef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-79184781-1f23-4584-87de-08e262242488, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 20 14:52:36 compute-1 systemd[1]: Started libpod-conmon-b9869b2334cb22ebb41a8e59d6adfb7d39a9957a4d3680921081b2143268cfef.scope.
Jan 20 14:52:36 compute-1 podman[271439]: 2026-01-20 14:52:36.689182224 +0000 UTC m=+0.031983914 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 20 14:52:36 compute-1 systemd[1]: Started libcrun container.
Jan 20 14:52:36 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ecd8403a0105356438fdeb735c64fc58258b386b8c6a85324ab7dbc102be6b3/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 20 14:52:36 compute-1 podman[271439]: 2026-01-20 14:52:36.808924895 +0000 UTC m=+0.151726585 container init b9869b2334cb22ebb41a8e59d6adfb7d39a9957a4d3680921081b2143268cfef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-79184781-1f23-4584-87de-08e262242488, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202)
Jan 20 14:52:36 compute-1 podman[271439]: 2026-01-20 14:52:36.819055631 +0000 UTC m=+0.161857301 container start b9869b2334cb22ebb41a8e59d6adfb7d39a9957a4d3680921081b2143268cfef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-79184781-1f23-4584-87de-08e262242488, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 20 14:52:36 compute-1 neutron-haproxy-ovnmeta-79184781-1f23-4584-87de-08e262242488[271452]: [NOTICE]   (271456) : New worker (271458) forked
Jan 20 14:52:36 compute-1 neutron-haproxy-ovnmeta-79184781-1f23-4584-87de-08e262242488[271452]: [NOTICE]   (271456) : Loading success.
Jan 20 14:52:36 compute-1 nova_compute[225855]: 2026-01-20 14:52:36.901 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 23ea4537-f03f-46de-881f-b979e232a3b9] During sync_power_state the instance has a pending task (rescuing). Skip.
Jan 20 14:52:36 compute-1 nova_compute[225855]: 2026-01-20 14:52:36.902 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768920756.664075, 23ea4537-f03f-46de-881f-b979e232a3b9 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:52:36 compute-1 nova_compute[225855]: 2026-01-20 14:52:36.902 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 23ea4537-f03f-46de-881f-b979e232a3b9] VM Started (Lifecycle Event)
Jan 20 14:52:36 compute-1 nova_compute[225855]: 2026-01-20 14:52:36.945 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 23ea4537-f03f-46de-881f-b979e232a3b9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:52:36 compute-1 nova_compute[225855]: 2026-01-20 14:52:36.950 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 23ea4537-f03f-46de-881f-b979e232a3b9] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:52:37 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:52:37 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:52:37 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:52:37.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:52:37 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:52:37 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e266 e266: 3 total, 3 up, 3 in
Jan 20 14:52:37 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:52:37 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:52:37 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:52:37.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:52:38 compute-1 nova_compute[225855]: 2026-01-20 14:52:38.351 225859 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1768920743.3496437, 932fd680-9aa0-49b4-9915-fa55104aaad7 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:52:38 compute-1 nova_compute[225855]: 2026-01-20 14:52:38.352 225859 INFO nova.compute.manager [-] [instance: 932fd680-9aa0-49b4-9915-fa55104aaad7] VM Stopped (Lifecycle Event)
Jan 20 14:52:38 compute-1 nova_compute[225855]: 2026-01-20 14:52:38.682 225859 DEBUG nova.compute.manager [None req-95f49cd8-9434-4d4c-904b-f8bc42496fcc - - - - - -] [instance: 932fd680-9aa0-49b4-9915-fa55104aaad7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:52:38 compute-1 ceph-mon[81775]: osdmap e266: 3 total, 3 up, 3 in
Jan 20 14:52:38 compute-1 ceph-mon[81775]: pgmap v1962: 321 pgs: 321 active+clean; 501 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 23 KiB/s rd, 33 KiB/s wr, 31 op/s
Jan 20 14:52:38 compute-1 nova_compute[225855]: 2026-01-20 14:52:38.713 225859 DEBUG nova.compute.manager [req-f80bafd7-5204-4400-b3e5-a924f48c3943 req-3f494ac1-9172-4ae8-9ee4-7bedd9f7afb8 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 23ea4537-f03f-46de-881f-b979e232a3b9] Received event network-vif-plugged-234381ea-07b1-41fe-b3c1-be97ce6a3b64 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:52:38 compute-1 nova_compute[225855]: 2026-01-20 14:52:38.714 225859 DEBUG oslo_concurrency.lockutils [req-f80bafd7-5204-4400-b3e5-a924f48c3943 req-3f494ac1-9172-4ae8-9ee4-7bedd9f7afb8 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "23ea4537-f03f-46de-881f-b979e232a3b9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:52:38 compute-1 nova_compute[225855]: 2026-01-20 14:52:38.715 225859 DEBUG oslo_concurrency.lockutils [req-f80bafd7-5204-4400-b3e5-a924f48c3943 req-3f494ac1-9172-4ae8-9ee4-7bedd9f7afb8 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "23ea4537-f03f-46de-881f-b979e232a3b9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:52:38 compute-1 nova_compute[225855]: 2026-01-20 14:52:38.715 225859 DEBUG oslo_concurrency.lockutils [req-f80bafd7-5204-4400-b3e5-a924f48c3943 req-3f494ac1-9172-4ae8-9ee4-7bedd9f7afb8 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "23ea4537-f03f-46de-881f-b979e232a3b9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:52:38 compute-1 nova_compute[225855]: 2026-01-20 14:52:38.716 225859 DEBUG nova.compute.manager [req-f80bafd7-5204-4400-b3e5-a924f48c3943 req-3f494ac1-9172-4ae8-9ee4-7bedd9f7afb8 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 23ea4537-f03f-46de-881f-b979e232a3b9] No waiting events found dispatching network-vif-plugged-234381ea-07b1-41fe-b3c1-be97ce6a3b64 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:52:38 compute-1 nova_compute[225855]: 2026-01-20 14:52:38.716 225859 WARNING nova.compute.manager [req-f80bafd7-5204-4400-b3e5-a924f48c3943 req-3f494ac1-9172-4ae8-9ee4-7bedd9f7afb8 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 23ea4537-f03f-46de-881f-b979e232a3b9] Received unexpected event network-vif-plugged-234381ea-07b1-41fe-b3c1-be97ce6a3b64 for instance with vm_state rescued and task_state unrescuing.
Jan 20 14:52:38 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e267 e267: 3 total, 3 up, 3 in
Jan 20 14:52:38 compute-1 nova_compute[225855]: 2026-01-20 14:52:38.749 225859 INFO nova.compute.manager [None req-042e7a60-6768-4229-8936-c6493f807f55 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] [instance: 23ea4537-f03f-46de-881f-b979e232a3b9] Unrescuing
Jan 20 14:52:38 compute-1 nova_compute[225855]: 2026-01-20 14:52:38.750 225859 DEBUG oslo_concurrency.lockutils [None req-042e7a60-6768-4229-8936-c6493f807f55 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Acquiring lock "refresh_cache-23ea4537-f03f-46de-881f-b979e232a3b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:52:39 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:52:39 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:52:39 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:52:39.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:52:39 compute-1 nova_compute[225855]: 2026-01-20 14:52:39.127 225859 DEBUG nova.network.neutron [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] [instance: 23ea4537-f03f-46de-881f-b979e232a3b9] Updating instance_info_cache with network_info: [{"id": "234381ea-07b1-41fe-b3c1-be97ce6a3b64", "address": "fa:16:3e:b5:55:3c", "network": {"id": "79184781-1f23-4584-87de-08e262242488", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-165460946-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a29915e0dd2403fbd7b7e847696b00a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap234381ea-07", "ovs_interfaceid": "234381ea-07b1-41fe-b3c1-be97ce6a3b64", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:52:39 compute-1 nova_compute[225855]: 2026-01-20 14:52:39.422 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:52:39 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e268 e268: 3 total, 3 up, 3 in
Jan 20 14:52:39 compute-1 ceph-mon[81775]: osdmap e267: 3 total, 3 up, 3 in
Jan 20 14:52:39 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:52:39 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:52:39 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:52:39.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:52:40 compute-1 podman[271469]: 2026-01-20 14:52:40.074062393 +0000 UTC m=+0.099661784 container health_status 72be95c12e041eb2cd1a16ad70dfa1391d9f03a5f3180a614ead9459dcfcd46f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251202, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 20 14:52:40 compute-1 nova_compute[225855]: 2026-01-20 14:52:40.597 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Releasing lock "refresh_cache-23ea4537-f03f-46de-881f-b979e232a3b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:52:40 compute-1 nova_compute[225855]: 2026-01-20 14:52:40.597 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] [instance: 23ea4537-f03f-46de-881f-b979e232a3b9] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 20 14:52:40 compute-1 nova_compute[225855]: 2026-01-20 14:52:40.598 225859 DEBUG oslo_concurrency.lockutils [None req-042e7a60-6768-4229-8936-c6493f807f55 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Acquired lock "refresh_cache-23ea4537-f03f-46de-881f-b979e232a3b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:52:40 compute-1 nova_compute[225855]: 2026-01-20 14:52:40.598 225859 DEBUG nova.network.neutron [None req-042e7a60-6768-4229-8936-c6493f807f55 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] [instance: 23ea4537-f03f-46de-881f-b979e232a3b9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 14:52:40 compute-1 nova_compute[225855]: 2026-01-20 14:52:40.599 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:52:40 compute-1 nova_compute[225855]: 2026-01-20 14:52:40.600 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:52:40 compute-1 nova_compute[225855]: 2026-01-20 14:52:40.600 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 14:52:40 compute-1 ceph-mon[81775]: osdmap e268: 3 total, 3 up, 3 in
Jan 20 14:52:40 compute-1 ceph-mon[81775]: pgmap v1965: 321 pgs: 321 active+clean; 526 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.7 MiB/s wr, 110 op/s
Jan 20 14:52:40 compute-1 nova_compute[225855]: 2026-01-20 14:52:40.951 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:52:41 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:52:41 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:52:41 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:52:41.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:52:41 compute-1 nova_compute[225855]: 2026-01-20 14:52:41.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:52:41 compute-1 nova_compute[225855]: 2026-01-20 14:52:41.402 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:52:41 compute-1 nova_compute[225855]: 2026-01-20 14:52:41.403 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:52:41 compute-1 nova_compute[225855]: 2026-01-20 14:52:41.404 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:52:41 compute-1 nova_compute[225855]: 2026-01-20 14:52:41.404 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 14:52:41 compute-1 nova_compute[225855]: 2026-01-20 14:52:41.405 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:52:41 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:52:41 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:52:41 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:52:41.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:52:41 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:52:41 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/383931634' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:52:41 compute-1 nova_compute[225855]: 2026-01-20 14:52:41.859 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:52:41 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/383931634' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:52:41 compute-1 nova_compute[225855]: 2026-01-20 14:52:41.939 225859 DEBUG nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] skipping disk for instance-00000075 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 14:52:41 compute-1 nova_compute[225855]: 2026-01-20 14:52:41.939 225859 DEBUG nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] skipping disk for instance-00000075 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 14:52:41 compute-1 nova_compute[225855]: 2026-01-20 14:52:41.940 225859 DEBUG nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] skipping disk for instance-00000075 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 14:52:42 compute-1 nova_compute[225855]: 2026-01-20 14:52:42.095 225859 WARNING nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 14:52:42 compute-1 nova_compute[225855]: 2026-01-20 14:52:42.097 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4219MB free_disk=20.806049346923828GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 14:52:42 compute-1 nova_compute[225855]: 2026-01-20 14:52:42.097 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:52:42 compute-1 nova_compute[225855]: 2026-01-20 14:52:42.098 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:52:42 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e268 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:52:42 compute-1 nova_compute[225855]: 2026-01-20 14:52:42.698 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Instance 23ea4537-f03f-46de-881f-b979e232a3b9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 20 14:52:42 compute-1 nova_compute[225855]: 2026-01-20 14:52:42.698 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 14:52:42 compute-1 nova_compute[225855]: 2026-01-20 14:52:42.698 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 14:52:42 compute-1 nova_compute[225855]: 2026-01-20 14:52:42.753 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:52:42 compute-1 ceph-mon[81775]: pgmap v1966: 321 pgs: 321 active+clean; 548 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 8.4 MiB/s rd, 3.7 MiB/s wr, 209 op/s
Jan 20 14:52:42 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/2883729981' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:52:42 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/3155031433' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:52:43 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:52:43 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:52:43 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:52:43.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:52:43 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:52:43 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2181614927' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:52:43 compute-1 nova_compute[225855]: 2026-01-20 14:52:43.248 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.495s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:52:43 compute-1 nova_compute[225855]: 2026-01-20 14:52:43.256 225859 DEBUG nova.compute.provider_tree [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:52:43 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:52:43 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:52:43 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:52:43.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:52:43 compute-1 nova_compute[225855]: 2026-01-20 14:52:43.851 225859 DEBUG nova.scheduler.client.report [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:52:43 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/2181614927' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:52:44 compute-1 nova_compute[225855]: 2026-01-20 14:52:44.210 225859 DEBUG nova.network.neutron [None req-042e7a60-6768-4229-8936-c6493f807f55 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] [instance: 23ea4537-f03f-46de-881f-b979e232a3b9] Updating instance_info_cache with network_info: [{"id": "234381ea-07b1-41fe-b3c1-be97ce6a3b64", "address": "fa:16:3e:b5:55:3c", "network": {"id": "79184781-1f23-4584-87de-08e262242488", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-165460946-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a29915e0dd2403fbd7b7e847696b00a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap234381ea-07", "ovs_interfaceid": "234381ea-07b1-41fe-b3c1-be97ce6a3b64", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:52:44 compute-1 nova_compute[225855]: 2026-01-20 14:52:44.248 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 14:52:44 compute-1 nova_compute[225855]: 2026-01-20 14:52:44.248 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.151s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:52:44 compute-1 nova_compute[225855]: 2026-01-20 14:52:44.348 225859 DEBUG oslo_concurrency.lockutils [None req-042e7a60-6768-4229-8936-c6493f807f55 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Releasing lock "refresh_cache-23ea4537-f03f-46de-881f-b979e232a3b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:52:44 compute-1 nova_compute[225855]: 2026-01-20 14:52:44.350 225859 DEBUG nova.objects.instance [None req-042e7a60-6768-4229-8936-c6493f807f55 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Lazy-loading 'flavor' on Instance uuid 23ea4537-f03f-46de-881f-b979e232a3b9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:52:44 compute-1 nova_compute[225855]: 2026-01-20 14:52:44.472 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:52:44 compute-1 kernel: tap234381ea-07 (unregistering): left promiscuous mode
Jan 20 14:52:44 compute-1 NetworkManager[49104]: <info>  [1768920764.5318] device (tap234381ea-07): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 20 14:52:44 compute-1 ovn_controller[130490]: 2026-01-20T14:52:44Z|00450|binding|INFO|Releasing lport 234381ea-07b1-41fe-b3c1-be97ce6a3b64 from this chassis (sb_readonly=0)
Jan 20 14:52:44 compute-1 ovn_controller[130490]: 2026-01-20T14:52:44Z|00451|binding|INFO|Setting lport 234381ea-07b1-41fe-b3c1-be97ce6a3b64 down in Southbound
Jan 20 14:52:44 compute-1 ovn_controller[130490]: 2026-01-20T14:52:44Z|00452|binding|INFO|Removing iface tap234381ea-07 ovn-installed in OVS
Jan 20 14:52:44 compute-1 nova_compute[225855]: 2026-01-20 14:52:44.537 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:52:44 compute-1 nova_compute[225855]: 2026-01-20 14:52:44.540 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:52:44 compute-1 nova_compute[225855]: 2026-01-20 14:52:44.570 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:52:44 compute-1 systemd[1]: machine-qemu\x2d53\x2dinstance\x2d00000075.scope: Deactivated successfully.
Jan 20 14:52:44 compute-1 systemd[1]: machine-qemu\x2d53\x2dinstance\x2d00000075.scope: Consumed 8.917s CPU time.
Jan 20 14:52:44 compute-1 systemd-machined[194361]: Machine qemu-53-instance-00000075 terminated.
Jan 20 14:52:44 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e269 e269: 3 total, 3 up, 3 in
Jan 20 14:52:44 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:52:44.657 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b5:55:3c 10.100.0.14'], port_security=['fa:16:3e:b5:55:3c 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '23ea4537-f03f-46de-881f-b979e232a3b9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-79184781-1f23-4584-87de-08e262242488', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0a29915e0dd2403fbd7b7e847696b00a', 'neutron:revision_number': '6', 'neutron:security_group_ids': '30ec24b7-15ba-4aeb-9785-539071729f77', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6b73ab05-b29f-401a-84a5-ea1a96103f33, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], logical_port=234381ea-07b1-41fe-b3c1-be97ce6a3b64) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:52:44 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:52:44.659 140354 INFO neutron.agent.ovn.metadata.agent [-] Port 234381ea-07b1-41fe-b3c1-be97ce6a3b64 in datapath 79184781-1f23-4584-87de-08e262242488 unbound from our chassis
Jan 20 14:52:44 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:52:44.660 140354 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 79184781-1f23-4584-87de-08e262242488, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 20 14:52:44 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:52:44.661 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[8c5e3e18-ea31-4021-ab25-ef3395d451da]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:52:44 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:52:44.662 140354 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-79184781-1f23-4584-87de-08e262242488 namespace which is not needed anymore
Jan 20 14:52:44 compute-1 nova_compute[225855]: 2026-01-20 14:52:44.714 225859 INFO nova.virt.libvirt.driver [-] [instance: 23ea4537-f03f-46de-881f-b979e232a3b9] Instance destroyed successfully.
Jan 20 14:52:44 compute-1 nova_compute[225855]: 2026-01-20 14:52:44.714 225859 DEBUG nova.objects.instance [None req-042e7a60-6768-4229-8936-c6493f807f55 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Lazy-loading 'numa_topology' on Instance uuid 23ea4537-f03f-46de-881f-b979e232a3b9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:52:44 compute-1 neutron-haproxy-ovnmeta-79184781-1f23-4584-87de-08e262242488[271452]: [NOTICE]   (271456) : haproxy version is 2.8.14-c23fe91
Jan 20 14:52:44 compute-1 neutron-haproxy-ovnmeta-79184781-1f23-4584-87de-08e262242488[271452]: [NOTICE]   (271456) : path to executable is /usr/sbin/haproxy
Jan 20 14:52:44 compute-1 neutron-haproxy-ovnmeta-79184781-1f23-4584-87de-08e262242488[271452]: [WARNING]  (271456) : Exiting Master process...
Jan 20 14:52:44 compute-1 neutron-haproxy-ovnmeta-79184781-1f23-4584-87de-08e262242488[271452]: [ALERT]    (271456) : Current worker (271458) exited with code 143 (Terminated)
Jan 20 14:52:44 compute-1 neutron-haproxy-ovnmeta-79184781-1f23-4584-87de-08e262242488[271452]: [WARNING]  (271456) : All workers exited. Exiting... (0)
Jan 20 14:52:44 compute-1 systemd[1]: libpod-b9869b2334cb22ebb41a8e59d6adfb7d39a9957a4d3680921081b2143268cfef.scope: Deactivated successfully.
Jan 20 14:52:44 compute-1 podman[271578]: 2026-01-20 14:52:44.797113681 +0000 UTC m=+0.043756046 container died b9869b2334cb22ebb41a8e59d6adfb7d39a9957a4d3680921081b2143268cfef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-79184781-1f23-4584-87de-08e262242488, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 20 14:52:44 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b9869b2334cb22ebb41a8e59d6adfb7d39a9957a4d3680921081b2143268cfef-userdata-shm.mount: Deactivated successfully.
Jan 20 14:52:44 compute-1 systemd[1]: var-lib-containers-storage-overlay-7ecd8403a0105356438fdeb735c64fc58258b386b8c6a85324ab7dbc102be6b3-merged.mount: Deactivated successfully.
Jan 20 14:52:44 compute-1 podman[271578]: 2026-01-20 14:52:44.830289907 +0000 UTC m=+0.076932272 container cleanup b9869b2334cb22ebb41a8e59d6adfb7d39a9957a4d3680921081b2143268cfef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-79184781-1f23-4584-87de-08e262242488, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:52:44 compute-1 systemd[1]: libpod-conmon-b9869b2334cb22ebb41a8e59d6adfb7d39a9957a4d3680921081b2143268cfef.scope: Deactivated successfully.
Jan 20 14:52:44 compute-1 kernel: tap234381ea-07: entered promiscuous mode
Jan 20 14:52:44 compute-1 NetworkManager[49104]: <info>  [1768920764.8750] manager: (tap234381ea-07): new Tun device (/org/freedesktop/NetworkManager/Devices/195)
Jan 20 14:52:44 compute-1 ovn_controller[130490]: 2026-01-20T14:52:44Z|00453|binding|INFO|Claiming lport 234381ea-07b1-41fe-b3c1-be97ce6a3b64 for this chassis.
Jan 20 14:52:44 compute-1 ovn_controller[130490]: 2026-01-20T14:52:44Z|00454|binding|INFO|234381ea-07b1-41fe-b3c1-be97ce6a3b64: Claiming fa:16:3e:b5:55:3c 10.100.0.14
Jan 20 14:52:44 compute-1 systemd-udevd[271547]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 14:52:44 compute-1 nova_compute[225855]: 2026-01-20 14:52:44.878 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:52:44 compute-1 NetworkManager[49104]: <info>  [1768920764.8879] device (tap234381ea-07): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 20 14:52:44 compute-1 NetworkManager[49104]: <info>  [1768920764.8883] device (tap234381ea-07): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 20 14:52:44 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:52:44.891 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b5:55:3c 10.100.0.14'], port_security=['fa:16:3e:b5:55:3c 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '23ea4537-f03f-46de-881f-b979e232a3b9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-79184781-1f23-4584-87de-08e262242488', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0a29915e0dd2403fbd7b7e847696b00a', 'neutron:revision_number': '6', 'neutron:security_group_ids': '30ec24b7-15ba-4aeb-9785-539071729f77', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6b73ab05-b29f-401a-84a5-ea1a96103f33, chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], logical_port=234381ea-07b1-41fe-b3c1-be97ce6a3b64) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:52:44 compute-1 ovn_controller[130490]: 2026-01-20T14:52:44Z|00455|binding|INFO|Setting lport 234381ea-07b1-41fe-b3c1-be97ce6a3b64 ovn-installed in OVS
Jan 20 14:52:44 compute-1 ovn_controller[130490]: 2026-01-20T14:52:44Z|00456|binding|INFO|Setting lport 234381ea-07b1-41fe-b3c1-be97ce6a3b64 up in Southbound
Jan 20 14:52:44 compute-1 nova_compute[225855]: 2026-01-20 14:52:44.893 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:52:44 compute-1 nova_compute[225855]: 2026-01-20 14:52:44.896 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:52:44 compute-1 podman[271611]: 2026-01-20 14:52:44.904890353 +0000 UTC m=+0.052590225 container remove b9869b2334cb22ebb41a8e59d6adfb7d39a9957a4d3680921081b2143268cfef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-79184781-1f23-4584-87de-08e262242488, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS)
Jan 20 14:52:44 compute-1 systemd-machined[194361]: New machine qemu-54-instance-00000075.
Jan 20 14:52:44 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:52:44.910 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[1ac255d3-77fe-4128-9c53-93af5b3fcb66]: (4, ('Tue Jan 20 02:52:44 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-79184781-1f23-4584-87de-08e262242488 (b9869b2334cb22ebb41a8e59d6adfb7d39a9957a4d3680921081b2143268cfef)\nb9869b2334cb22ebb41a8e59d6adfb7d39a9957a4d3680921081b2143268cfef\nTue Jan 20 02:52:44 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-79184781-1f23-4584-87de-08e262242488 (b9869b2334cb22ebb41a8e59d6adfb7d39a9957a4d3680921081b2143268cfef)\nb9869b2334cb22ebb41a8e59d6adfb7d39a9957a4d3680921081b2143268cfef\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:52:44 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:52:44.912 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[16ecedec-4979-48f5-8d08-4b9774cd0b2d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:52:44 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:52:44.913 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap79184781-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:52:44 compute-1 nova_compute[225855]: 2026-01-20 14:52:44.914 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:52:44 compute-1 kernel: tap79184781-10: left promiscuous mode
Jan 20 14:52:44 compute-1 systemd[1]: Started Virtual Machine qemu-54-instance-00000075.
Jan 20 14:52:44 compute-1 nova_compute[225855]: 2026-01-20 14:52:44.929 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:52:44 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:52:44.931 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[5cac4136-fabf-4b93-8bc3-3cd90596423d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:52:44 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:52:44.946 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[5fd31fe1-0039-4317-8787-adc08aae5f8d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:52:44 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:52:44.947 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[e82fd6dd-3a57-4313-8732-ee956b93479d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:52:44 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:52:44.963 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[98a05aab-0ad0-4ba3-b6b5-42a00b2ae4db]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 579497, 'reachable_time': 34960, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 271642, 'error': None, 'target': 'ovnmeta-79184781-1f23-4584-87de-08e262242488', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:52:44 compute-1 systemd[1]: run-netns-ovnmeta\x2d79184781\x2d1f23\x2d4584\x2d87de\x2d08e262242488.mount: Deactivated successfully.
Jan 20 14:52:44 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:52:44.968 140466 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-79184781-1f23-4584-87de-08e262242488 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 20 14:52:44 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:52:44.968 140466 DEBUG oslo.privsep.daemon [-] privsep: reply[ea3f0333-69d2-4179-a5d1-80859c9acd18]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:52:44 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:52:44.969 140354 INFO neutron.agent.ovn.metadata.agent [-] Port 234381ea-07b1-41fe-b3c1-be97ce6a3b64 in datapath 79184781-1f23-4584-87de-08e262242488 unbound from our chassis
Jan 20 14:52:44 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:52:44.970 140354 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 79184781-1f23-4584-87de-08e262242488
Jan 20 14:52:44 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:52:44.981 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[d2ced326-9b9c-438b-b0c4-cc2c7413ef02]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:52:44 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:52:44.982 140354 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap79184781-11 in ovnmeta-79184781-1f23-4584-87de-08e262242488 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 20 14:52:44 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:52:44.984 229707 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap79184781-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 20 14:52:44 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:52:44.984 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[087020f5-724d-44e8-a327-e4eb0ca7da24]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:52:44 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:52:44.985 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[36329abc-127a-4215-ba02-599e2ae86b6c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:52:44 compute-1 ceph-mon[81775]: pgmap v1967: 321 pgs: 321 active+clean; 580 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 11 MiB/s rd, 7.5 MiB/s wr, 293 op/s
Jan 20 14:52:44 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/1761720748' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:52:44 compute-1 ceph-mon[81775]: osdmap e269: 3 total, 3 up, 3 in
Jan 20 14:52:44 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:52:44.998 140466 DEBUG oslo.privsep.daemon [-] privsep: reply[7af3aa7b-054a-445f-bdc8-37eca34673c4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:52:45 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:52:45.021 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[7fd6c477-aa13-4588-b0bd-9ae9cabd34f1]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:52:45 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:52:45.047 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[7c33c1b3-8840-4d61-9db7-90bc97b8a097]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:52:45 compute-1 NetworkManager[49104]: <info>  [1768920765.0591] manager: (tap79184781-10): new Veth device (/org/freedesktop/NetworkManager/Devices/196)
Jan 20 14:52:45 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:52:45.057 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[3ef6aa22-ccef-419e-a0ea-59461d5c3c9d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:52:45 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:52:45.102 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[f150d79d-8dc2-407c-93bc-0de6c0095adb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:52:45 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:52:45.104 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[545c8857-c176-457f-a76a-b12c01e0312f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:52:45 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:52:45 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:52:45 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:52:45.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:52:45 compute-1 NetworkManager[49104]: <info>  [1768920765.1280] device (tap79184781-10): carrier: link connected
Jan 20 14:52:45 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:52:45.131 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[f9dd0193-67f2-49d9-ac45-388f11b86169]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:52:45 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:52:45.147 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[6fe19de1-e704-43fe-ace7-aa652aef860b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap79184781-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:38:7c:2a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 129], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 580405, 'reachable_time': 23111, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 271667, 'error': None, 'target': 'ovnmeta-79184781-1f23-4584-87de-08e262242488', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:52:45 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:52:45.159 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[6b26d786-636c-4574-8006-ec2dbf127a79]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe38:7c2a'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 580405, 'tstamp': 580405}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 271668, 'error': None, 'target': 'ovnmeta-79184781-1f23-4584-87de-08e262242488', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:52:45 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:52:45.174 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[c6017c9e-90c1-49c0-a46c-5dcac2966874]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap79184781-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:38:7c:2a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 129], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 580405, 'reachable_time': 23111, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 271669, 'error': None, 'target': 'ovnmeta-79184781-1f23-4584-87de-08e262242488', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:52:45 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:52:45.201 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[78ed86b0-74b1-400e-b4bd-5f01aaee34d8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:52:45 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:52:45.249 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[ccf70d61-5590-42b7-8805-612fc3324458]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:52:45 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:52:45.250 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap79184781-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:52:45 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:52:45.251 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:52:45 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:52:45.251 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap79184781-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:52:45 compute-1 nova_compute[225855]: 2026-01-20 14:52:45.253 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:52:45 compute-1 NetworkManager[49104]: <info>  [1768920765.2541] manager: (tap79184781-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/197)
Jan 20 14:52:45 compute-1 kernel: tap79184781-10: entered promiscuous mode
Jan 20 14:52:45 compute-1 nova_compute[225855]: 2026-01-20 14:52:45.256 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:52:45 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:52:45.260 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap79184781-10, col_values=(('external_ids', {'iface-id': 'b033e9e6-9781-4424-a20f-7b48a14e2c80'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:52:45 compute-1 nova_compute[225855]: 2026-01-20 14:52:45.261 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:52:45 compute-1 ovn_controller[130490]: 2026-01-20T14:52:45Z|00457|binding|INFO|Releasing lport b033e9e6-9781-4424-a20f-7b48a14e2c80 from this chassis (sb_readonly=0)
Jan 20 14:52:45 compute-1 nova_compute[225855]: 2026-01-20 14:52:45.262 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:52:45 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:52:45.266 140354 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/79184781-1f23-4584-87de-08e262242488.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/79184781-1f23-4584-87de-08e262242488.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 20 14:52:45 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:52:45.270 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[5d6564ad-06e5-4445-b76d-7b6c6ab73380]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:52:45 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:52:45.271 140354 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 20 14:52:45 compute-1 ovn_metadata_agent[140349]: global
Jan 20 14:52:45 compute-1 ovn_metadata_agent[140349]:     log         /dev/log local0 debug
Jan 20 14:52:45 compute-1 ovn_metadata_agent[140349]:     log-tag     haproxy-metadata-proxy-79184781-1f23-4584-87de-08e262242488
Jan 20 14:52:45 compute-1 ovn_metadata_agent[140349]:     user        root
Jan 20 14:52:45 compute-1 ovn_metadata_agent[140349]:     group       root
Jan 20 14:52:45 compute-1 ovn_metadata_agent[140349]:     maxconn     1024
Jan 20 14:52:45 compute-1 ovn_metadata_agent[140349]:     pidfile     /var/lib/neutron/external/pids/79184781-1f23-4584-87de-08e262242488.pid.haproxy
Jan 20 14:52:45 compute-1 ovn_metadata_agent[140349]:     daemon
Jan 20 14:52:45 compute-1 ovn_metadata_agent[140349]: 
Jan 20 14:52:45 compute-1 ovn_metadata_agent[140349]: defaults
Jan 20 14:52:45 compute-1 ovn_metadata_agent[140349]:     log global
Jan 20 14:52:45 compute-1 ovn_metadata_agent[140349]:     mode http
Jan 20 14:52:45 compute-1 ovn_metadata_agent[140349]:     option httplog
Jan 20 14:52:45 compute-1 ovn_metadata_agent[140349]:     option dontlognull
Jan 20 14:52:45 compute-1 ovn_metadata_agent[140349]:     option http-server-close
Jan 20 14:52:45 compute-1 ovn_metadata_agent[140349]:     option forwardfor
Jan 20 14:52:45 compute-1 ovn_metadata_agent[140349]:     retries                 3
Jan 20 14:52:45 compute-1 ovn_metadata_agent[140349]:     timeout http-request    30s
Jan 20 14:52:45 compute-1 ovn_metadata_agent[140349]:     timeout connect         30s
Jan 20 14:52:45 compute-1 ovn_metadata_agent[140349]:     timeout client          32s
Jan 20 14:52:45 compute-1 ovn_metadata_agent[140349]:     timeout server          32s
Jan 20 14:52:45 compute-1 ovn_metadata_agent[140349]:     timeout http-keep-alive 30s
Jan 20 14:52:45 compute-1 ovn_metadata_agent[140349]: 
Jan 20 14:52:45 compute-1 ovn_metadata_agent[140349]: 
Jan 20 14:52:45 compute-1 ovn_metadata_agent[140349]: listen listener
Jan 20 14:52:45 compute-1 ovn_metadata_agent[140349]:     bind 169.254.169.254:80
Jan 20 14:52:45 compute-1 ovn_metadata_agent[140349]:     server metadata /var/lib/neutron/metadata_proxy
Jan 20 14:52:45 compute-1 ovn_metadata_agent[140349]:     http-request add-header X-OVN-Network-ID 79184781-1f23-4584-87de-08e262242488
Jan 20 14:52:45 compute-1 ovn_metadata_agent[140349]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 20 14:52:45 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:52:45.272 140354 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-79184781-1f23-4584-87de-08e262242488', 'env', 'PROCESS_TAG=haproxy-79184781-1f23-4584-87de-08e262242488', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/79184781-1f23-4584-87de-08e262242488.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 20 14:52:45 compute-1 nova_compute[225855]: 2026-01-20 14:52:45.278 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:52:45 compute-1 nova_compute[225855]: 2026-01-20 14:52:45.340 225859 DEBUG nova.compute.manager [req-c21db4e3-f0da-4001-a396-94bfa961cef0 req-aaa19aa8-e747-468f-b852-1e15d1a5ace3 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 23ea4537-f03f-46de-881f-b979e232a3b9] Received event network-vif-unplugged-234381ea-07b1-41fe-b3c1-be97ce6a3b64 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:52:45 compute-1 nova_compute[225855]: 2026-01-20 14:52:45.341 225859 DEBUG oslo_concurrency.lockutils [req-c21db4e3-f0da-4001-a396-94bfa961cef0 req-aaa19aa8-e747-468f-b852-1e15d1a5ace3 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "23ea4537-f03f-46de-881f-b979e232a3b9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:52:45 compute-1 nova_compute[225855]: 2026-01-20 14:52:45.341 225859 DEBUG oslo_concurrency.lockutils [req-c21db4e3-f0da-4001-a396-94bfa961cef0 req-aaa19aa8-e747-468f-b852-1e15d1a5ace3 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "23ea4537-f03f-46de-881f-b979e232a3b9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:52:45 compute-1 nova_compute[225855]: 2026-01-20 14:52:45.342 225859 DEBUG oslo_concurrency.lockutils [req-c21db4e3-f0da-4001-a396-94bfa961cef0 req-aaa19aa8-e747-468f-b852-1e15d1a5ace3 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "23ea4537-f03f-46de-881f-b979e232a3b9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:52:45 compute-1 nova_compute[225855]: 2026-01-20 14:52:45.343 225859 DEBUG nova.compute.manager [req-c21db4e3-f0da-4001-a396-94bfa961cef0 req-aaa19aa8-e747-468f-b852-1e15d1a5ace3 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 23ea4537-f03f-46de-881f-b979e232a3b9] No waiting events found dispatching network-vif-unplugged-234381ea-07b1-41fe-b3c1-be97ce6a3b64 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:52:45 compute-1 nova_compute[225855]: 2026-01-20 14:52:45.344 225859 WARNING nova.compute.manager [req-c21db4e3-f0da-4001-a396-94bfa961cef0 req-aaa19aa8-e747-468f-b852-1e15d1a5ace3 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 23ea4537-f03f-46de-881f-b979e232a3b9] Received unexpected event network-vif-unplugged-234381ea-07b1-41fe-b3c1-be97ce6a3b64 for instance with vm_state rescued and task_state unrescuing.
Jan 20 14:52:45 compute-1 nova_compute[225855]: 2026-01-20 14:52:45.574 225859 DEBUG nova.virt.libvirt.host [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Removed pending event for 23ea4537-f03f-46de-881f-b979e232a3b9 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Jan 20 14:52:45 compute-1 nova_compute[225855]: 2026-01-20 14:52:45.576 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768920765.5737062, 23ea4537-f03f-46de-881f-b979e232a3b9 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:52:45 compute-1 nova_compute[225855]: 2026-01-20 14:52:45.576 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 23ea4537-f03f-46de-881f-b979e232a3b9] VM Resumed (Lifecycle Event)
Jan 20 14:52:45 compute-1 podman[271762]: 2026-01-20 14:52:45.704271011 +0000 UTC m=+0.054608263 container create 8659aeb836f14239190951253b93a3be94364fbe5edb0e5964485303d9ef695f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-79184781-1f23-4584-87de-08e262242488, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, io.buildah.version=1.41.3)
Jan 20 14:52:45 compute-1 nova_compute[225855]: 2026-01-20 14:52:45.714 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 23ea4537-f03f-46de-881f-b979e232a3b9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:52:45 compute-1 nova_compute[225855]: 2026-01-20 14:52:45.717 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 23ea4537-f03f-46de-881f-b979e232a3b9] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: rescued, current task_state: unrescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:52:45 compute-1 systemd[1]: Started libpod-conmon-8659aeb836f14239190951253b93a3be94364fbe5edb0e5964485303d9ef695f.scope.
Jan 20 14:52:45 compute-1 nova_compute[225855]: 2026-01-20 14:52:45.743 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 23ea4537-f03f-46de-881f-b979e232a3b9] During sync_power_state the instance has a pending task (unrescuing). Skip.
Jan 20 14:52:45 compute-1 nova_compute[225855]: 2026-01-20 14:52:45.744 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768920765.5749042, 23ea4537-f03f-46de-881f-b979e232a3b9 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:52:45 compute-1 nova_compute[225855]: 2026-01-20 14:52:45.744 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 23ea4537-f03f-46de-881f-b979e232a3b9] VM Started (Lifecycle Event)
Jan 20 14:52:45 compute-1 systemd[1]: Started libcrun container.
Jan 20 14:52:45 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31b783ff1c4e23d189e5eb541d582f8638c64e811963a516b7ed73c8d8eafb4a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 20 14:52:45 compute-1 podman[271762]: 2026-01-20 14:52:45.679637036 +0000 UTC m=+0.029974298 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 20 14:52:45 compute-1 podman[271762]: 2026-01-20 14:52:45.782991234 +0000 UTC m=+0.133328516 container init 8659aeb836f14239190951253b93a3be94364fbe5edb0e5964485303d9ef695f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-79184781-1f23-4584-87de-08e262242488, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 20 14:52:45 compute-1 podman[271762]: 2026-01-20 14:52:45.788660254 +0000 UTC m=+0.138997506 container start 8659aeb836f14239190951253b93a3be94364fbe5edb0e5964485303d9ef695f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-79184781-1f23-4584-87de-08e262242488, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 20 14:52:45 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:52:45 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:52:45 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:52:45.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:52:45 compute-1 neutron-haproxy-ovnmeta-79184781-1f23-4584-87de-08e262242488[271777]: [NOTICE]   (271782) : New worker (271784) forked
Jan 20 14:52:45 compute-1 neutron-haproxy-ovnmeta-79184781-1f23-4584-87de-08e262242488[271777]: [NOTICE]   (271782) : Loading success.
Jan 20 14:52:45 compute-1 nova_compute[225855]: 2026-01-20 14:52:45.886 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 23ea4537-f03f-46de-881f-b979e232a3b9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:52:45 compute-1 nova_compute[225855]: 2026-01-20 14:52:45.891 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 23ea4537-f03f-46de-881f-b979e232a3b9] Synchronizing instance power state after lifecycle event "Started"; current vm_state: rescued, current task_state: unrescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:52:45 compute-1 nova_compute[225855]: 2026-01-20 14:52:45.952 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:52:45 compute-1 nova_compute[225855]: 2026-01-20 14:52:45.972 225859 DEBUG nova.compute.manager [None req-042e7a60-6768-4229-8936-c6493f807f55 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] [instance: 23ea4537-f03f-46de-881f-b979e232a3b9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:52:45 compute-1 nova_compute[225855]: 2026-01-20 14:52:45.974 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 23ea4537-f03f-46de-881f-b979e232a3b9] During sync_power_state the instance has a pending task (unrescuing). Skip.
Jan 20 14:52:46 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/2979801871' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:52:46 compute-1 nova_compute[225855]: 2026-01-20 14:52:46.250 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:52:46 compute-1 nova_compute[225855]: 2026-01-20 14:52:46.251 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:52:46 compute-1 nova_compute[225855]: 2026-01-20 14:52:46.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:52:47 compute-1 ceph-mon[81775]: pgmap v1969: 321 pgs: 321 active+clean; 580 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 9.7 MiB/s rd, 6.5 MiB/s wr, 253 op/s
Jan 20 14:52:47 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:52:47 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:52:47 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:52:47.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:52:47 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e269 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:52:47 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:52:47 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:52:47 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:52:47.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:52:47 compute-1 nova_compute[225855]: 2026-01-20 14:52:47.877 225859 DEBUG nova.compute.manager [req-26785ab1-815a-413e-92c5-3cf6d3d4c841 req-28c7d92a-22ef-49d9-be30-901d640262d2 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 23ea4537-f03f-46de-881f-b979e232a3b9] Received event network-vif-plugged-234381ea-07b1-41fe-b3c1-be97ce6a3b64 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:52:47 compute-1 nova_compute[225855]: 2026-01-20 14:52:47.878 225859 DEBUG oslo_concurrency.lockutils [req-26785ab1-815a-413e-92c5-3cf6d3d4c841 req-28c7d92a-22ef-49d9-be30-901d640262d2 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "23ea4537-f03f-46de-881f-b979e232a3b9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:52:47 compute-1 nova_compute[225855]: 2026-01-20 14:52:47.878 225859 DEBUG oslo_concurrency.lockutils [req-26785ab1-815a-413e-92c5-3cf6d3d4c841 req-28c7d92a-22ef-49d9-be30-901d640262d2 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "23ea4537-f03f-46de-881f-b979e232a3b9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:52:47 compute-1 nova_compute[225855]: 2026-01-20 14:52:47.878 225859 DEBUG oslo_concurrency.lockutils [req-26785ab1-815a-413e-92c5-3cf6d3d4c841 req-28c7d92a-22ef-49d9-be30-901d640262d2 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "23ea4537-f03f-46de-881f-b979e232a3b9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:52:47 compute-1 nova_compute[225855]: 2026-01-20 14:52:47.878 225859 DEBUG nova.compute.manager [req-26785ab1-815a-413e-92c5-3cf6d3d4c841 req-28c7d92a-22ef-49d9-be30-901d640262d2 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 23ea4537-f03f-46de-881f-b979e232a3b9] No waiting events found dispatching network-vif-plugged-234381ea-07b1-41fe-b3c1-be97ce6a3b64 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:52:47 compute-1 nova_compute[225855]: 2026-01-20 14:52:47.879 225859 WARNING nova.compute.manager [req-26785ab1-815a-413e-92c5-3cf6d3d4c841 req-28c7d92a-22ef-49d9-be30-901d640262d2 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 23ea4537-f03f-46de-881f-b979e232a3b9] Received unexpected event network-vif-plugged-234381ea-07b1-41fe-b3c1-be97ce6a3b64 for instance with vm_state active and task_state None.
Jan 20 14:52:47 compute-1 nova_compute[225855]: 2026-01-20 14:52:47.879 225859 DEBUG nova.compute.manager [req-26785ab1-815a-413e-92c5-3cf6d3d4c841 req-28c7d92a-22ef-49d9-be30-901d640262d2 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 23ea4537-f03f-46de-881f-b979e232a3b9] Received event network-vif-plugged-234381ea-07b1-41fe-b3c1-be97ce6a3b64 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:52:47 compute-1 nova_compute[225855]: 2026-01-20 14:52:47.879 225859 DEBUG oslo_concurrency.lockutils [req-26785ab1-815a-413e-92c5-3cf6d3d4c841 req-28c7d92a-22ef-49d9-be30-901d640262d2 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "23ea4537-f03f-46de-881f-b979e232a3b9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:52:47 compute-1 nova_compute[225855]: 2026-01-20 14:52:47.879 225859 DEBUG oslo_concurrency.lockutils [req-26785ab1-815a-413e-92c5-3cf6d3d4c841 req-28c7d92a-22ef-49d9-be30-901d640262d2 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "23ea4537-f03f-46de-881f-b979e232a3b9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:52:47 compute-1 nova_compute[225855]: 2026-01-20 14:52:47.879 225859 DEBUG oslo_concurrency.lockutils [req-26785ab1-815a-413e-92c5-3cf6d3d4c841 req-28c7d92a-22ef-49d9-be30-901d640262d2 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "23ea4537-f03f-46de-881f-b979e232a3b9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:52:47 compute-1 nova_compute[225855]: 2026-01-20 14:52:47.880 225859 DEBUG nova.compute.manager [req-26785ab1-815a-413e-92c5-3cf6d3d4c841 req-28c7d92a-22ef-49d9-be30-901d640262d2 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 23ea4537-f03f-46de-881f-b979e232a3b9] No waiting events found dispatching network-vif-plugged-234381ea-07b1-41fe-b3c1-be97ce6a3b64 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:52:47 compute-1 nova_compute[225855]: 2026-01-20 14:52:47.880 225859 WARNING nova.compute.manager [req-26785ab1-815a-413e-92c5-3cf6d3d4c841 req-28c7d92a-22ef-49d9-be30-901d640262d2 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 23ea4537-f03f-46de-881f-b979e232a3b9] Received unexpected event network-vif-plugged-234381ea-07b1-41fe-b3c1-be97ce6a3b64 for instance with vm_state active and task_state None.
Jan 20 14:52:47 compute-1 nova_compute[225855]: 2026-01-20 14:52:47.880 225859 DEBUG nova.compute.manager [req-26785ab1-815a-413e-92c5-3cf6d3d4c841 req-28c7d92a-22ef-49d9-be30-901d640262d2 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 23ea4537-f03f-46de-881f-b979e232a3b9] Received event network-vif-plugged-234381ea-07b1-41fe-b3c1-be97ce6a3b64 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:52:47 compute-1 nova_compute[225855]: 2026-01-20 14:52:47.880 225859 DEBUG oslo_concurrency.lockutils [req-26785ab1-815a-413e-92c5-3cf6d3d4c841 req-28c7d92a-22ef-49d9-be30-901d640262d2 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "23ea4537-f03f-46de-881f-b979e232a3b9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:52:47 compute-1 nova_compute[225855]: 2026-01-20 14:52:47.880 225859 DEBUG oslo_concurrency.lockutils [req-26785ab1-815a-413e-92c5-3cf6d3d4c841 req-28c7d92a-22ef-49d9-be30-901d640262d2 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "23ea4537-f03f-46de-881f-b979e232a3b9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:52:47 compute-1 nova_compute[225855]: 2026-01-20 14:52:47.881 225859 DEBUG oslo_concurrency.lockutils [req-26785ab1-815a-413e-92c5-3cf6d3d4c841 req-28c7d92a-22ef-49d9-be30-901d640262d2 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "23ea4537-f03f-46de-881f-b979e232a3b9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:52:47 compute-1 nova_compute[225855]: 2026-01-20 14:52:47.881 225859 DEBUG nova.compute.manager [req-26785ab1-815a-413e-92c5-3cf6d3d4c841 req-28c7d92a-22ef-49d9-be30-901d640262d2 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 23ea4537-f03f-46de-881f-b979e232a3b9] No waiting events found dispatching network-vif-plugged-234381ea-07b1-41fe-b3c1-be97ce6a3b64 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:52:47 compute-1 nova_compute[225855]: 2026-01-20 14:52:47.881 225859 WARNING nova.compute.manager [req-26785ab1-815a-413e-92c5-3cf6d3d4c841 req-28c7d92a-22ef-49d9-be30-901d640262d2 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 23ea4537-f03f-46de-881f-b979e232a3b9] Received unexpected event network-vif-plugged-234381ea-07b1-41fe-b3c1-be97ce6a3b64 for instance with vm_state active and task_state None.
Jan 20 14:52:49 compute-1 ceph-mon[81775]: pgmap v1970: 321 pgs: 321 active+clean; 580 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 7.5 MiB/s rd, 4.5 MiB/s wr, 211 op/s
Jan 20 14:52:49 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:52:49 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:52:49 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:52:49.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:52:49 compute-1 nova_compute[225855]: 2026-01-20 14:52:49.475 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:52:49 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:52:49 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:52:49 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:52:49.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:52:50 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e270 e270: 3 total, 3 up, 3 in
Jan 20 14:52:50 compute-1 nova_compute[225855]: 2026-01-20 14:52:50.959 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:52:51 compute-1 ceph-mon[81775]: pgmap v1971: 321 pgs: 321 active+clean; 580 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 7.1 MiB/s rd, 3.7 MiB/s wr, 214 op/s
Jan 20 14:52:51 compute-1 ceph-mon[81775]: osdmap e270: 3 total, 3 up, 3 in
Jan 20 14:52:51 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e271 e271: 3 total, 3 up, 3 in
Jan 20 14:52:51 compute-1 sudo[271795]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:52:51 compute-1 sudo[271795]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:52:51 compute-1 sudo[271795]: pam_unix(sudo:session): session closed for user root
Jan 20 14:52:51 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:52:51 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:52:51 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:52:51.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:52:51 compute-1 sudo[271826]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:52:51 compute-1 sudo[271826]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:52:51 compute-1 sudo[271826]: pam_unix(sudo:session): session closed for user root
Jan 20 14:52:51 compute-1 podman[271819]: 2026-01-20 14:52:51.171882437 +0000 UTC m=+0.061006583 container health_status 533cd8e0c3b8b7f910f15cef49e7f879ab46c12ce646a35628c15f13725eb500 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_managed=true)
Jan 20 14:52:51 compute-1 nova_compute[225855]: 2026-01-20 14:52:51.335 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:52:51 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:52:51 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:52:51 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:52:51.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:52:52 compute-1 ceph-mon[81775]: osdmap e271: 3 total, 3 up, 3 in
Jan 20 14:52:52 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e272 e272: 3 total, 3 up, 3 in
Jan 20 14:52:52 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e272 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:52:53 compute-1 ceph-mon[81775]: pgmap v1974: 321 pgs: 321 active+clean; 580 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 2.5 KiB/s wr, 185 op/s
Jan 20 14:52:53 compute-1 ceph-mon[81775]: osdmap e272: 3 total, 3 up, 3 in
Jan 20 14:52:53 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:52:53 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:52:53 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:52:53.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:52:53 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:52:53 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:52:53 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:52:53.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:52:54 compute-1 nova_compute[225855]: 2026-01-20 14:52:54.477 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:52:55 compute-1 ceph-mon[81775]: pgmap v1976: 321 pgs: 321 active+clean; 628 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 5.4 MiB/s rd, 4.7 MiB/s wr, 205 op/s
Jan 20 14:52:55 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:52:55 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:52:55 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:52:55.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:52:55 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:52:55 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:52:55 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:52:55.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:52:55 compute-1 nova_compute[225855]: 2026-01-20 14:52:55.963 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:52:57 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:52:57 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:52:57 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:52:57.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:52:57 compute-1 ceph-mon[81775]: pgmap v1977: 321 pgs: 321 active+clean; 660 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 9.5 MiB/s rd, 7.8 MiB/s wr, 258 op/s
Jan 20 14:52:57 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e272 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:52:57 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:52:57 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:52:57 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:52:57.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:52:57 compute-1 ovn_controller[130490]: 2026-01-20T14:52:57Z|00059|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:b5:55:3c 10.100.0.14
Jan 20 14:52:58 compute-1 ceph-mon[81775]: pgmap v1978: 321 pgs: 321 active+clean; 660 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 7.3 MiB/s rd, 5.9 MiB/s wr, 205 op/s
Jan 20 14:52:59 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:52:59 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:52:59 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:52:59.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:52:59 compute-1 nova_compute[225855]: 2026-01-20 14:52:59.510 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:52:59 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e273 e273: 3 total, 3 up, 3 in
Jan 20 14:52:59 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:52:59 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:52:59 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:52:59.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:53:00 compute-1 nova_compute[225855]: 2026-01-20 14:53:00.556 225859 DEBUG oslo_concurrency.lockutils [None req-b1b59a96-b618-4ebc-b642-4b40ab846a49 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Acquiring lock "baada610-f563-4c97-89a9-56eba792c352" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:53:00 compute-1 nova_compute[225855]: 2026-01-20 14:53:00.557 225859 DEBUG oslo_concurrency.lockutils [None req-b1b59a96-b618-4ebc-b642-4b40ab846a49 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Lock "baada610-f563-4c97-89a9-56eba792c352" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:53:00 compute-1 nova_compute[225855]: 2026-01-20 14:53:00.588 225859 DEBUG nova.compute.manager [None req-b1b59a96-b618-4ebc-b642-4b40ab846a49 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] [instance: baada610-f563-4c97-89a9-56eba792c352] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 20 14:53:00 compute-1 ceph-mon[81775]: osdmap e273: 3 total, 3 up, 3 in
Jan 20 14:53:00 compute-1 ceph-mon[81775]: pgmap v1980: 321 pgs: 321 active+clean; 660 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 6.1 MiB/s rd, 5.8 MiB/s wr, 145 op/s
Jan 20 14:53:00 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e274 e274: 3 total, 3 up, 3 in
Jan 20 14:53:00 compute-1 nova_compute[225855]: 2026-01-20 14:53:00.801 225859 DEBUG oslo_concurrency.lockutils [None req-b1b59a96-b618-4ebc-b642-4b40ab846a49 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:53:00 compute-1 nova_compute[225855]: 2026-01-20 14:53:00.801 225859 DEBUG oslo_concurrency.lockutils [None req-b1b59a96-b618-4ebc-b642-4b40ab846a49 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:53:00 compute-1 nova_compute[225855]: 2026-01-20 14:53:00.809 225859 DEBUG nova.virt.hardware [None req-b1b59a96-b618-4ebc-b642-4b40ab846a49 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 20 14:53:00 compute-1 nova_compute[225855]: 2026-01-20 14:53:00.809 225859 INFO nova.compute.claims [None req-b1b59a96-b618-4ebc-b642-4b40ab846a49 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] [instance: baada610-f563-4c97-89a9-56eba792c352] Claim successful on node compute-1.ctlplane.example.com
Jan 20 14:53:00 compute-1 nova_compute[225855]: 2026-01-20 14:53:00.965 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:53:01 compute-1 nova_compute[225855]: 2026-01-20 14:53:01.119 225859 DEBUG oslo_concurrency.processutils [None req-b1b59a96-b618-4ebc-b642-4b40ab846a49 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:53:01 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:53:01 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:53:01 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:53:01.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:53:01 compute-1 sudo[271871]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:53:01 compute-1 sudo[271871]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:53:01 compute-1 sudo[271871]: pam_unix(sudo:session): session closed for user root
Jan 20 14:53:01 compute-1 sudo[271915]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:53:01 compute-1 sudo[271915]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:53:01 compute-1 sudo[271915]: pam_unix(sudo:session): session closed for user root
Jan 20 14:53:01 compute-1 sudo[271940]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:53:01 compute-1 sudo[271940]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:53:01 compute-1 sudo[271940]: pam_unix(sudo:session): session closed for user root
Jan 20 14:53:01 compute-1 sudo[271965]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 20 14:53:01 compute-1 sudo[271965]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:53:01 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:53:01 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4058498692' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:53:01 compute-1 nova_compute[225855]: 2026-01-20 14:53:01.557 225859 DEBUG oslo_concurrency.processutils [None req-b1b59a96-b618-4ebc-b642-4b40ab846a49 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:53:01 compute-1 nova_compute[225855]: 2026-01-20 14:53:01.565 225859 DEBUG nova.compute.provider_tree [None req-b1b59a96-b618-4ebc-b642-4b40ab846a49 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:53:01 compute-1 ceph-mon[81775]: osdmap e274: 3 total, 3 up, 3 in
Jan 20 14:53:01 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/4058498692' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:53:01 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e275 e275: 3 total, 3 up, 3 in
Jan 20 14:53:01 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:53:01 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:53:01 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:53:01.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:53:01 compute-1 sudo[271965]: pam_unix(sudo:session): session closed for user root
Jan 20 14:53:02 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e275 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:53:02 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e276 e276: 3 total, 3 up, 3 in
Jan 20 14:53:02 compute-1 ceph-mon[81775]: osdmap e275: 3 total, 3 up, 3 in
Jan 20 14:53:02 compute-1 ceph-mon[81775]: pgmap v1983: 321 pgs: 321 active+clean; 660 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 743 KiB/s rd, 21 KiB/s wr, 72 op/s
Jan 20 14:53:02 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:53:02 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 14:53:02 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:53:02 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 14:53:02 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 14:53:02 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:53:02 compute-1 nova_compute[225855]: 2026-01-20 14:53:02.766 225859 DEBUG nova.scheduler.client.report [None req-b1b59a96-b618-4ebc-b642-4b40ab846a49 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:53:02 compute-1 nova_compute[225855]: 2026-01-20 14:53:02.893 225859 DEBUG oslo_concurrency.lockutils [None req-b1b59a96-b618-4ebc-b642-4b40ab846a49 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 2.092s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:53:02 compute-1 nova_compute[225855]: 2026-01-20 14:53:02.894 225859 DEBUG nova.compute.manager [None req-b1b59a96-b618-4ebc-b642-4b40ab846a49 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] [instance: baada610-f563-4c97-89a9-56eba792c352] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 20 14:53:03 compute-1 nova_compute[225855]: 2026-01-20 14:53:03.049 225859 DEBUG nova.compute.manager [None req-b1b59a96-b618-4ebc-b642-4b40ab846a49 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] [instance: baada610-f563-4c97-89a9-56eba792c352] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 20 14:53:03 compute-1 nova_compute[225855]: 2026-01-20 14:53:03.049 225859 DEBUG nova.network.neutron [None req-b1b59a96-b618-4ebc-b642-4b40ab846a49 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] [instance: baada610-f563-4c97-89a9-56eba792c352] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 20 14:53:03 compute-1 nova_compute[225855]: 2026-01-20 14:53:03.107 225859 INFO nova.virt.libvirt.driver [None req-b1b59a96-b618-4ebc-b642-4b40ab846a49 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] [instance: baada610-f563-4c97-89a9-56eba792c352] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 20 14:53:03 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:53:03 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:53:03 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:53:03.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:53:03 compute-1 nova_compute[225855]: 2026-01-20 14:53:03.141 225859 DEBUG nova.compute.manager [None req-b1b59a96-b618-4ebc-b642-4b40ab846a49 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] [instance: baada610-f563-4c97-89a9-56eba792c352] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 20 14:53:03 compute-1 nova_compute[225855]: 2026-01-20 14:53:03.336 225859 DEBUG nova.compute.manager [None req-b1b59a96-b618-4ebc-b642-4b40ab846a49 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] [instance: baada610-f563-4c97-89a9-56eba792c352] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 20 14:53:03 compute-1 nova_compute[225855]: 2026-01-20 14:53:03.338 225859 DEBUG nova.virt.libvirt.driver [None req-b1b59a96-b618-4ebc-b642-4b40ab846a49 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] [instance: baada610-f563-4c97-89a9-56eba792c352] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 20 14:53:03 compute-1 nova_compute[225855]: 2026-01-20 14:53:03.339 225859 INFO nova.virt.libvirt.driver [None req-b1b59a96-b618-4ebc-b642-4b40ab846a49 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] [instance: baada610-f563-4c97-89a9-56eba792c352] Creating image(s)
Jan 20 14:53:03 compute-1 nova_compute[225855]: 2026-01-20 14:53:03.372 225859 DEBUG nova.storage.rbd_utils [None req-b1b59a96-b618-4ebc-b642-4b40ab846a49 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] rbd image baada610-f563-4c97-89a9-56eba792c352_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:53:03 compute-1 nova_compute[225855]: 2026-01-20 14:53:03.400 225859 DEBUG nova.storage.rbd_utils [None req-b1b59a96-b618-4ebc-b642-4b40ab846a49 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] rbd image baada610-f563-4c97-89a9-56eba792c352_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:53:03 compute-1 nova_compute[225855]: 2026-01-20 14:53:03.427 225859 DEBUG nova.storage.rbd_utils [None req-b1b59a96-b618-4ebc-b642-4b40ab846a49 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] rbd image baada610-f563-4c97-89a9-56eba792c352_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:53:03 compute-1 nova_compute[225855]: 2026-01-20 14:53:03.432 225859 DEBUG oslo_concurrency.processutils [None req-b1b59a96-b618-4ebc-b642-4b40ab846a49 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:53:03 compute-1 nova_compute[225855]: 2026-01-20 14:53:03.467 225859 DEBUG nova.policy [None req-b1b59a96-b618-4ebc-b642-4b40ab846a49 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'd85d286ce6224326a0f4a15a06afbfea', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '0a29915e0dd2403fbd7b7e847696b00a', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 20 14:53:03 compute-1 nova_compute[225855]: 2026-01-20 14:53:03.525 225859 DEBUG oslo_concurrency.processutils [None req-b1b59a96-b618-4ebc-b642-4b40ab846a49 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json" returned: 0 in 0.093s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:53:03 compute-1 nova_compute[225855]: 2026-01-20 14:53:03.526 225859 DEBUG oslo_concurrency.lockutils [None req-b1b59a96-b618-4ebc-b642-4b40ab846a49 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Acquiring lock "82d5c1918fd7c974214c7a48c1793a7a82560462" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:53:03 compute-1 nova_compute[225855]: 2026-01-20 14:53:03.526 225859 DEBUG oslo_concurrency.lockutils [None req-b1b59a96-b618-4ebc-b642-4b40ab846a49 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:53:03 compute-1 nova_compute[225855]: 2026-01-20 14:53:03.527 225859 DEBUG oslo_concurrency.lockutils [None req-b1b59a96-b618-4ebc-b642-4b40ab846a49 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:53:03 compute-1 nova_compute[225855]: 2026-01-20 14:53:03.558 225859 DEBUG nova.storage.rbd_utils [None req-b1b59a96-b618-4ebc-b642-4b40ab846a49 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] rbd image baada610-f563-4c97-89a9-56eba792c352_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:53:03 compute-1 nova_compute[225855]: 2026-01-20 14:53:03.562 225859 DEBUG oslo_concurrency.processutils [None req-b1b59a96-b618-4ebc-b642-4b40ab846a49 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 baada610-f563-4c97-89a9-56eba792c352_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:53:03 compute-1 ceph-mon[81775]: osdmap e276: 3 total, 3 up, 3 in
Jan 20 14:53:03 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:53:03 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:53:03 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:53:03.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:53:03 compute-1 nova_compute[225855]: 2026-01-20 14:53:03.862 225859 DEBUG oslo_concurrency.processutils [None req-b1b59a96-b618-4ebc-b642-4b40ab846a49 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 baada610-f563-4c97-89a9-56eba792c352_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.299s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:53:03 compute-1 nova_compute[225855]: 2026-01-20 14:53:03.936 225859 DEBUG nova.storage.rbd_utils [None req-b1b59a96-b618-4ebc-b642-4b40ab846a49 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] resizing rbd image baada610-f563-4c97-89a9-56eba792c352_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 20 14:53:04 compute-1 nova_compute[225855]: 2026-01-20 14:53:04.055 225859 DEBUG nova.objects.instance [None req-b1b59a96-b618-4ebc-b642-4b40ab846a49 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Lazy-loading 'migration_context' on Instance uuid baada610-f563-4c97-89a9-56eba792c352 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:53:04 compute-1 nova_compute[225855]: 2026-01-20 14:53:04.071 225859 DEBUG nova.virt.libvirt.driver [None req-b1b59a96-b618-4ebc-b642-4b40ab846a49 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] [instance: baada610-f563-4c97-89a9-56eba792c352] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 20 14:53:04 compute-1 nova_compute[225855]: 2026-01-20 14:53:04.071 225859 DEBUG nova.virt.libvirt.driver [None req-b1b59a96-b618-4ebc-b642-4b40ab846a49 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] [instance: baada610-f563-4c97-89a9-56eba792c352] Ensure instance console log exists: /var/lib/nova/instances/baada610-f563-4c97-89a9-56eba792c352/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 20 14:53:04 compute-1 nova_compute[225855]: 2026-01-20 14:53:04.071 225859 DEBUG oslo_concurrency.lockutils [None req-b1b59a96-b618-4ebc-b642-4b40ab846a49 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:53:04 compute-1 nova_compute[225855]: 2026-01-20 14:53:04.072 225859 DEBUG oslo_concurrency.lockutils [None req-b1b59a96-b618-4ebc-b642-4b40ab846a49 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:53:04 compute-1 nova_compute[225855]: 2026-01-20 14:53:04.072 225859 DEBUG oslo_concurrency.lockutils [None req-b1b59a96-b618-4ebc-b642-4b40ab846a49 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:53:04 compute-1 nova_compute[225855]: 2026-01-20 14:53:04.513 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:53:04 compute-1 ceph-mon[81775]: pgmap v1985: 321 pgs: 321 active+clean; 695 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 8.5 MiB/s rd, 5.3 MiB/s wr, 118 op/s
Jan 20 14:53:04 compute-1 nova_compute[225855]: 2026-01-20 14:53:04.891 225859 DEBUG nova.network.neutron [None req-b1b59a96-b618-4ebc-b642-4b40ab846a49 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] [instance: baada610-f563-4c97-89a9-56eba792c352] Successfully created port: a3156414-5a96-462d-974e-a57c9cd8e9c8 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 20 14:53:05 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:53:05 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:53:05 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:53:05.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:53:05 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:53:05 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:53:05 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:53:05.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:53:05 compute-1 nova_compute[225855]: 2026-01-20 14:53:05.958 225859 DEBUG nova.network.neutron [None req-b1b59a96-b618-4ebc-b642-4b40ab846a49 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] [instance: baada610-f563-4c97-89a9-56eba792c352] Successfully updated port: a3156414-5a96-462d-974e-a57c9cd8e9c8 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 20 14:53:05 compute-1 nova_compute[225855]: 2026-01-20 14:53:05.968 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:53:05 compute-1 nova_compute[225855]: 2026-01-20 14:53:05.984 225859 DEBUG oslo_concurrency.lockutils [None req-b1b59a96-b618-4ebc-b642-4b40ab846a49 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Acquiring lock "refresh_cache-baada610-f563-4c97-89a9-56eba792c352" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:53:05 compute-1 nova_compute[225855]: 2026-01-20 14:53:05.984 225859 DEBUG oslo_concurrency.lockutils [None req-b1b59a96-b618-4ebc-b642-4b40ab846a49 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Acquired lock "refresh_cache-baada610-f563-4c97-89a9-56eba792c352" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:53:05 compute-1 nova_compute[225855]: 2026-01-20 14:53:05.985 225859 DEBUG nova.network.neutron [None req-b1b59a96-b618-4ebc-b642-4b40ab846a49 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] [instance: baada610-f563-4c97-89a9-56eba792c352] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 14:53:06 compute-1 nova_compute[225855]: 2026-01-20 14:53:06.195 225859 DEBUG nova.compute.manager [req-0b7503d7-3150-4015-9248-34691eb4bb30 req-43095be4-e873-4bf7-aad7-00d41ef180df 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: baada610-f563-4c97-89a9-56eba792c352] Received event network-changed-a3156414-5a96-462d-974e-a57c9cd8e9c8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:53:06 compute-1 nova_compute[225855]: 2026-01-20 14:53:06.195 225859 DEBUG nova.compute.manager [req-0b7503d7-3150-4015-9248-34691eb4bb30 req-43095be4-e873-4bf7-aad7-00d41ef180df 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: baada610-f563-4c97-89a9-56eba792c352] Refreshing instance network info cache due to event network-changed-a3156414-5a96-462d-974e-a57c9cd8e9c8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 14:53:06 compute-1 nova_compute[225855]: 2026-01-20 14:53:06.195 225859 DEBUG oslo_concurrency.lockutils [req-0b7503d7-3150-4015-9248-34691eb4bb30 req-43095be4-e873-4bf7-aad7-00d41ef180df 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-baada610-f563-4c97-89a9-56eba792c352" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:53:06 compute-1 nova_compute[225855]: 2026-01-20 14:53:06.837 225859 DEBUG nova.network.neutron [None req-b1b59a96-b618-4ebc-b642-4b40ab846a49 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] [instance: baada610-f563-4c97-89a9-56eba792c352] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 20 14:53:07 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e277 e277: 3 total, 3 up, 3 in
Jan 20 14:53:07 compute-1 ceph-mon[81775]: pgmap v1986: 321 pgs: 321 active+clean; 780 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 12 MiB/s rd, 11 MiB/s wr, 254 op/s
Jan 20 14:53:07 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:53:07 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:53:07 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:53:07.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:53:07 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e277 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:53:07 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:53:07 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:53:07 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:53:07.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:53:08 compute-1 ceph-mon[81775]: osdmap e277: 3 total, 3 up, 3 in
Jan 20 14:53:08 compute-1 nova_compute[225855]: 2026-01-20 14:53:08.112 225859 DEBUG nova.network.neutron [None req-b1b59a96-b618-4ebc-b642-4b40ab846a49 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] [instance: baada610-f563-4c97-89a9-56eba792c352] Updating instance_info_cache with network_info: [{"id": "a3156414-5a96-462d-974e-a57c9cd8e9c8", "address": "fa:16:3e:9e:93:82", "network": {"id": "79184781-1f23-4584-87de-08e262242488", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-165460946-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a29915e0dd2403fbd7b7e847696b00a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3156414-5a", "ovs_interfaceid": "a3156414-5a96-462d-974e-a57c9cd8e9c8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:53:08 compute-1 nova_compute[225855]: 2026-01-20 14:53:08.174 225859 DEBUG oslo_concurrency.lockutils [None req-b1b59a96-b618-4ebc-b642-4b40ab846a49 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Releasing lock "refresh_cache-baada610-f563-4c97-89a9-56eba792c352" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:53:08 compute-1 nova_compute[225855]: 2026-01-20 14:53:08.174 225859 DEBUG nova.compute.manager [None req-b1b59a96-b618-4ebc-b642-4b40ab846a49 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] [instance: baada610-f563-4c97-89a9-56eba792c352] Instance network_info: |[{"id": "a3156414-5a96-462d-974e-a57c9cd8e9c8", "address": "fa:16:3e:9e:93:82", "network": {"id": "79184781-1f23-4584-87de-08e262242488", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-165460946-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a29915e0dd2403fbd7b7e847696b00a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3156414-5a", "ovs_interfaceid": "a3156414-5a96-462d-974e-a57c9cd8e9c8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 20 14:53:08 compute-1 nova_compute[225855]: 2026-01-20 14:53:08.174 225859 DEBUG oslo_concurrency.lockutils [req-0b7503d7-3150-4015-9248-34691eb4bb30 req-43095be4-e873-4bf7-aad7-00d41ef180df 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-baada610-f563-4c97-89a9-56eba792c352" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:53:08 compute-1 nova_compute[225855]: 2026-01-20 14:53:08.175 225859 DEBUG nova.network.neutron [req-0b7503d7-3150-4015-9248-34691eb4bb30 req-43095be4-e873-4bf7-aad7-00d41ef180df 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: baada610-f563-4c97-89a9-56eba792c352] Refreshing network info cache for port a3156414-5a96-462d-974e-a57c9cd8e9c8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 14:53:08 compute-1 nova_compute[225855]: 2026-01-20 14:53:08.177 225859 DEBUG nova.virt.libvirt.driver [None req-b1b59a96-b618-4ebc-b642-4b40ab846a49 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] [instance: baada610-f563-4c97-89a9-56eba792c352] Start _get_guest_xml network_info=[{"id": "a3156414-5a96-462d-974e-a57c9cd8e9c8", "address": "fa:16:3e:9e:93:82", "network": {"id": "79184781-1f23-4584-87de-08e262242488", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-165460946-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a29915e0dd2403fbd7b7e847696b00a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3156414-5a", "ovs_interfaceid": "a3156414-5a96-462d-974e-a57c9cd8e9c8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'device_type': 'disk', 'encryption_options': None, 'size': 0, 'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'guest_format': None, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'image_id': 'a32b3e07-16d8-46fd-9a7b-c242c432fcf9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 20 14:53:08 compute-1 nova_compute[225855]: 2026-01-20 14:53:08.182 225859 WARNING nova.virt.libvirt.driver [None req-b1b59a96-b618-4ebc-b642-4b40ab846a49 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 14:53:08 compute-1 nova_compute[225855]: 2026-01-20 14:53:08.204 225859 DEBUG nova.virt.libvirt.host [None req-b1b59a96-b618-4ebc-b642-4b40ab846a49 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 20 14:53:08 compute-1 nova_compute[225855]: 2026-01-20 14:53:08.205 225859 DEBUG nova.virt.libvirt.host [None req-b1b59a96-b618-4ebc-b642-4b40ab846a49 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 20 14:53:08 compute-1 nova_compute[225855]: 2026-01-20 14:53:08.211 225859 DEBUG nova.virt.libvirt.host [None req-b1b59a96-b618-4ebc-b642-4b40ab846a49 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 20 14:53:08 compute-1 nova_compute[225855]: 2026-01-20 14:53:08.212 225859 DEBUG nova.virt.libvirt.host [None req-b1b59a96-b618-4ebc-b642-4b40ab846a49 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 20 14:53:08 compute-1 nova_compute[225855]: 2026-01-20 14:53:08.213 225859 DEBUG nova.virt.libvirt.driver [None req-b1b59a96-b618-4ebc-b642-4b40ab846a49 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 20 14:53:08 compute-1 nova_compute[225855]: 2026-01-20 14:53:08.213 225859 DEBUG nova.virt.hardware [None req-b1b59a96-b618-4ebc-b642-4b40ab846a49 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-20T14:21:55Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='522deaab-a741-4dbb-932d-d8b13a211c33',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 20 14:53:08 compute-1 nova_compute[225855]: 2026-01-20 14:53:08.214 225859 DEBUG nova.virt.hardware [None req-b1b59a96-b618-4ebc-b642-4b40ab846a49 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 20 14:53:08 compute-1 nova_compute[225855]: 2026-01-20 14:53:08.214 225859 DEBUG nova.virt.hardware [None req-b1b59a96-b618-4ebc-b642-4b40ab846a49 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 20 14:53:08 compute-1 nova_compute[225855]: 2026-01-20 14:53:08.214 225859 DEBUG nova.virt.hardware [None req-b1b59a96-b618-4ebc-b642-4b40ab846a49 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 20 14:53:08 compute-1 nova_compute[225855]: 2026-01-20 14:53:08.214 225859 DEBUG nova.virt.hardware [None req-b1b59a96-b618-4ebc-b642-4b40ab846a49 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 20 14:53:08 compute-1 nova_compute[225855]: 2026-01-20 14:53:08.215 225859 DEBUG nova.virt.hardware [None req-b1b59a96-b618-4ebc-b642-4b40ab846a49 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 20 14:53:08 compute-1 nova_compute[225855]: 2026-01-20 14:53:08.215 225859 DEBUG nova.virt.hardware [None req-b1b59a96-b618-4ebc-b642-4b40ab846a49 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 20 14:53:08 compute-1 nova_compute[225855]: 2026-01-20 14:53:08.215 225859 DEBUG nova.virt.hardware [None req-b1b59a96-b618-4ebc-b642-4b40ab846a49 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 20 14:53:08 compute-1 nova_compute[225855]: 2026-01-20 14:53:08.215 225859 DEBUG nova.virt.hardware [None req-b1b59a96-b618-4ebc-b642-4b40ab846a49 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 20 14:53:08 compute-1 nova_compute[225855]: 2026-01-20 14:53:08.216 225859 DEBUG nova.virt.hardware [None req-b1b59a96-b618-4ebc-b642-4b40ab846a49 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 20 14:53:08 compute-1 nova_compute[225855]: 2026-01-20 14:53:08.216 225859 DEBUG nova.virt.hardware [None req-b1b59a96-b618-4ebc-b642-4b40ab846a49 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 20 14:53:08 compute-1 nova_compute[225855]: 2026-01-20 14:53:08.218 225859 DEBUG oslo_concurrency.processutils [None req-b1b59a96-b618-4ebc-b642-4b40ab846a49 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:53:08 compute-1 sudo[272215]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:53:08 compute-1 sudo[272215]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:53:08 compute-1 sudo[272215]: pam_unix(sudo:session): session closed for user root
Jan 20 14:53:08 compute-1 sudo[272240]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 14:53:08 compute-1 sudo[272240]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:53:08 compute-1 sudo[272240]: pam_unix(sudo:session): session closed for user root
Jan 20 14:53:08 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:53:08 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2491785817' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:53:08 compute-1 nova_compute[225855]: 2026-01-20 14:53:08.698 225859 DEBUG oslo_concurrency.processutils [None req-b1b59a96-b618-4ebc-b642-4b40ab846a49 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:53:08 compute-1 nova_compute[225855]: 2026-01-20 14:53:08.725 225859 DEBUG nova.storage.rbd_utils [None req-b1b59a96-b618-4ebc-b642-4b40ab846a49 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] rbd image baada610-f563-4c97-89a9-56eba792c352_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:53:08 compute-1 nova_compute[225855]: 2026-01-20 14:53:08.728 225859 DEBUG oslo_concurrency.processutils [None req-b1b59a96-b618-4ebc-b642-4b40ab846a49 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:53:09 compute-1 ceph-mon[81775]: pgmap v1988: 321 pgs: 2 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 309 active+clean; 787 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 11 MiB/s rd, 11 MiB/s wr, 281 op/s
Jan 20 14:53:09 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:53:09 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:53:09 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/2491785817' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:53:09 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:53:09 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:53:09 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:53:09.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:53:09 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:53:09 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3601927553' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:53:09 compute-1 nova_compute[225855]: 2026-01-20 14:53:09.184 225859 DEBUG oslo_concurrency.processutils [None req-b1b59a96-b618-4ebc-b642-4b40ab846a49 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:53:09 compute-1 nova_compute[225855]: 2026-01-20 14:53:09.185 225859 DEBUG nova.virt.libvirt.vif [None req-b1b59a96-b618-4ebc-b642-4b40ab846a49 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T14:52:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerStableDeviceRescueTest-server-185388239',display_name='tempest-ServerStableDeviceRescueTest-server-185388239',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverstabledevicerescuetest-server-185388239',id=119,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIfbHk5SMnBEAUVZEhoLPfdB2qCay341zK720hYW5qflxdgcEr+fHp9C3kAgJFmqON8wn8DkPxW0WmihyCLPTK7Iiiy5VDiRJ7U/0O7hlyzm17ZWhCVdPfXSugKxmeVL3w==',key_name='tempest-keypair-647310408',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0a29915e0dd2403fbd7b7e847696b00a',ramdisk_id='',reservation_id='r-d9074tfl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerStableDeviceRescueTest-129078052',owner_user_name='tempest-ServerStableDeviceRescueTest-129078052-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T14:53:03Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='d85d286ce6224326a0f4a15a06afbfea',uuid=baada610-f563-4c97-89a9-56eba792c352,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a3156414-5a96-462d-974e-a57c9cd8e9c8", "address": "fa:16:3e:9e:93:82", "network": {"id": "79184781-1f23-4584-87de-08e262242488", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-165460946-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a29915e0dd2403fbd7b7e847696b00a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3156414-5a", "ovs_interfaceid": "a3156414-5a96-462d-974e-a57c9cd8e9c8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 20 14:53:09 compute-1 nova_compute[225855]: 2026-01-20 14:53:09.186 225859 DEBUG nova.network.os_vif_util [None req-b1b59a96-b618-4ebc-b642-4b40ab846a49 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Converting VIF {"id": "a3156414-5a96-462d-974e-a57c9cd8e9c8", "address": "fa:16:3e:9e:93:82", "network": {"id": "79184781-1f23-4584-87de-08e262242488", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-165460946-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a29915e0dd2403fbd7b7e847696b00a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3156414-5a", "ovs_interfaceid": "a3156414-5a96-462d-974e-a57c9cd8e9c8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:53:09 compute-1 nova_compute[225855]: 2026-01-20 14:53:09.186 225859 DEBUG nova.network.os_vif_util [None req-b1b59a96-b618-4ebc-b642-4b40ab846a49 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9e:93:82,bridge_name='br-int',has_traffic_filtering=True,id=a3156414-5a96-462d-974e-a57c9cd8e9c8,network=Network(79184781-1f23-4584-87de-08e262242488),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa3156414-5a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:53:09 compute-1 nova_compute[225855]: 2026-01-20 14:53:09.188 225859 DEBUG nova.objects.instance [None req-b1b59a96-b618-4ebc-b642-4b40ab846a49 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Lazy-loading 'pci_devices' on Instance uuid baada610-f563-4c97-89a9-56eba792c352 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:53:09 compute-1 nova_compute[225855]: 2026-01-20 14:53:09.415 225859 DEBUG nova.virt.libvirt.driver [None req-b1b59a96-b618-4ebc-b642-4b40ab846a49 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] [instance: baada610-f563-4c97-89a9-56eba792c352] End _get_guest_xml xml=<domain type="kvm">
Jan 20 14:53:09 compute-1 nova_compute[225855]:   <uuid>baada610-f563-4c97-89a9-56eba792c352</uuid>
Jan 20 14:53:09 compute-1 nova_compute[225855]:   <name>instance-00000077</name>
Jan 20 14:53:09 compute-1 nova_compute[225855]:   <memory>131072</memory>
Jan 20 14:53:09 compute-1 nova_compute[225855]:   <vcpu>1</vcpu>
Jan 20 14:53:09 compute-1 nova_compute[225855]:   <metadata>
Jan 20 14:53:09 compute-1 nova_compute[225855]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 14:53:09 compute-1 nova_compute[225855]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 14:53:09 compute-1 nova_compute[225855]:       <nova:name>tempest-ServerStableDeviceRescueTest-server-185388239</nova:name>
Jan 20 14:53:09 compute-1 nova_compute[225855]:       <nova:creationTime>2026-01-20 14:53:08</nova:creationTime>
Jan 20 14:53:09 compute-1 nova_compute[225855]:       <nova:flavor name="m1.nano">
Jan 20 14:53:09 compute-1 nova_compute[225855]:         <nova:memory>128</nova:memory>
Jan 20 14:53:09 compute-1 nova_compute[225855]:         <nova:disk>1</nova:disk>
Jan 20 14:53:09 compute-1 nova_compute[225855]:         <nova:swap>0</nova:swap>
Jan 20 14:53:09 compute-1 nova_compute[225855]:         <nova:ephemeral>0</nova:ephemeral>
Jan 20 14:53:09 compute-1 nova_compute[225855]:         <nova:vcpus>1</nova:vcpus>
Jan 20 14:53:09 compute-1 nova_compute[225855]:       </nova:flavor>
Jan 20 14:53:09 compute-1 nova_compute[225855]:       <nova:owner>
Jan 20 14:53:09 compute-1 nova_compute[225855]:         <nova:user uuid="d85d286ce6224326a0f4a15a06afbfea">tempest-ServerStableDeviceRescueTest-129078052-project-member</nova:user>
Jan 20 14:53:09 compute-1 nova_compute[225855]:         <nova:project uuid="0a29915e0dd2403fbd7b7e847696b00a">tempest-ServerStableDeviceRescueTest-129078052</nova:project>
Jan 20 14:53:09 compute-1 nova_compute[225855]:       </nova:owner>
Jan 20 14:53:09 compute-1 nova_compute[225855]:       <nova:root type="image" uuid="a32b3e07-16d8-46fd-9a7b-c242c432fcf9"/>
Jan 20 14:53:09 compute-1 nova_compute[225855]:       <nova:ports>
Jan 20 14:53:09 compute-1 nova_compute[225855]:         <nova:port uuid="a3156414-5a96-462d-974e-a57c9cd8e9c8">
Jan 20 14:53:09 compute-1 nova_compute[225855]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Jan 20 14:53:09 compute-1 nova_compute[225855]:         </nova:port>
Jan 20 14:53:09 compute-1 nova_compute[225855]:       </nova:ports>
Jan 20 14:53:09 compute-1 nova_compute[225855]:     </nova:instance>
Jan 20 14:53:09 compute-1 nova_compute[225855]:   </metadata>
Jan 20 14:53:09 compute-1 nova_compute[225855]:   <sysinfo type="smbios">
Jan 20 14:53:09 compute-1 nova_compute[225855]:     <system>
Jan 20 14:53:09 compute-1 nova_compute[225855]:       <entry name="manufacturer">RDO</entry>
Jan 20 14:53:09 compute-1 nova_compute[225855]:       <entry name="product">OpenStack Compute</entry>
Jan 20 14:53:09 compute-1 nova_compute[225855]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 14:53:09 compute-1 nova_compute[225855]:       <entry name="serial">baada610-f563-4c97-89a9-56eba792c352</entry>
Jan 20 14:53:09 compute-1 nova_compute[225855]:       <entry name="uuid">baada610-f563-4c97-89a9-56eba792c352</entry>
Jan 20 14:53:09 compute-1 nova_compute[225855]:       <entry name="family">Virtual Machine</entry>
Jan 20 14:53:09 compute-1 nova_compute[225855]:     </system>
Jan 20 14:53:09 compute-1 nova_compute[225855]:   </sysinfo>
Jan 20 14:53:09 compute-1 nova_compute[225855]:   <os>
Jan 20 14:53:09 compute-1 nova_compute[225855]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 20 14:53:09 compute-1 nova_compute[225855]:     <boot dev="hd"/>
Jan 20 14:53:09 compute-1 nova_compute[225855]:     <smbios mode="sysinfo"/>
Jan 20 14:53:09 compute-1 nova_compute[225855]:   </os>
Jan 20 14:53:09 compute-1 nova_compute[225855]:   <features>
Jan 20 14:53:09 compute-1 nova_compute[225855]:     <acpi/>
Jan 20 14:53:09 compute-1 nova_compute[225855]:     <apic/>
Jan 20 14:53:09 compute-1 nova_compute[225855]:     <vmcoreinfo/>
Jan 20 14:53:09 compute-1 nova_compute[225855]:   </features>
Jan 20 14:53:09 compute-1 nova_compute[225855]:   <clock offset="utc">
Jan 20 14:53:09 compute-1 nova_compute[225855]:     <timer name="pit" tickpolicy="delay"/>
Jan 20 14:53:09 compute-1 nova_compute[225855]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 20 14:53:09 compute-1 nova_compute[225855]:     <timer name="hpet" present="no"/>
Jan 20 14:53:09 compute-1 nova_compute[225855]:   </clock>
Jan 20 14:53:09 compute-1 nova_compute[225855]:   <cpu mode="custom" match="exact">
Jan 20 14:53:09 compute-1 nova_compute[225855]:     <model>Nehalem</model>
Jan 20 14:53:09 compute-1 nova_compute[225855]:     <topology sockets="1" cores="1" threads="1"/>
Jan 20 14:53:09 compute-1 nova_compute[225855]:   </cpu>
Jan 20 14:53:09 compute-1 nova_compute[225855]:   <devices>
Jan 20 14:53:09 compute-1 nova_compute[225855]:     <disk type="network" device="disk">
Jan 20 14:53:09 compute-1 nova_compute[225855]:       <driver type="raw" cache="none"/>
Jan 20 14:53:09 compute-1 nova_compute[225855]:       <source protocol="rbd" name="vms/baada610-f563-4c97-89a9-56eba792c352_disk">
Jan 20 14:53:09 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:53:09 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:53:09 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:53:09 compute-1 nova_compute[225855]:       </source>
Jan 20 14:53:09 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 14:53:09 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:53:09 compute-1 nova_compute[225855]:       </auth>
Jan 20 14:53:09 compute-1 nova_compute[225855]:       <target dev="vda" bus="virtio"/>
Jan 20 14:53:09 compute-1 nova_compute[225855]:     </disk>
Jan 20 14:53:09 compute-1 nova_compute[225855]:     <disk type="network" device="cdrom">
Jan 20 14:53:09 compute-1 nova_compute[225855]:       <driver type="raw" cache="none"/>
Jan 20 14:53:09 compute-1 nova_compute[225855]:       <source protocol="rbd" name="vms/baada610-f563-4c97-89a9-56eba792c352_disk.config">
Jan 20 14:53:09 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:53:09 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:53:09 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:53:09 compute-1 nova_compute[225855]:       </source>
Jan 20 14:53:09 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 14:53:09 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:53:09 compute-1 nova_compute[225855]:       </auth>
Jan 20 14:53:09 compute-1 nova_compute[225855]:       <target dev="sda" bus="sata"/>
Jan 20 14:53:09 compute-1 nova_compute[225855]:     </disk>
Jan 20 14:53:09 compute-1 nova_compute[225855]:     <interface type="ethernet">
Jan 20 14:53:09 compute-1 nova_compute[225855]:       <mac address="fa:16:3e:9e:93:82"/>
Jan 20 14:53:09 compute-1 nova_compute[225855]:       <model type="virtio"/>
Jan 20 14:53:09 compute-1 nova_compute[225855]:       <driver name="vhost" rx_queue_size="512"/>
Jan 20 14:53:09 compute-1 nova_compute[225855]:       <mtu size="1442"/>
Jan 20 14:53:09 compute-1 nova_compute[225855]:       <target dev="tapa3156414-5a"/>
Jan 20 14:53:09 compute-1 nova_compute[225855]:     </interface>
Jan 20 14:53:09 compute-1 nova_compute[225855]:     <serial type="pty">
Jan 20 14:53:09 compute-1 nova_compute[225855]:       <log file="/var/lib/nova/instances/baada610-f563-4c97-89a9-56eba792c352/console.log" append="off"/>
Jan 20 14:53:09 compute-1 nova_compute[225855]:     </serial>
Jan 20 14:53:09 compute-1 nova_compute[225855]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 20 14:53:09 compute-1 nova_compute[225855]:     <video>
Jan 20 14:53:09 compute-1 nova_compute[225855]:       <model type="virtio"/>
Jan 20 14:53:09 compute-1 nova_compute[225855]:     </video>
Jan 20 14:53:09 compute-1 nova_compute[225855]:     <input type="tablet" bus="usb"/>
Jan 20 14:53:09 compute-1 nova_compute[225855]:     <rng model="virtio">
Jan 20 14:53:09 compute-1 nova_compute[225855]:       <backend model="random">/dev/urandom</backend>
Jan 20 14:53:09 compute-1 nova_compute[225855]:     </rng>
Jan 20 14:53:09 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root"/>
Jan 20 14:53:09 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:53:09 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:53:09 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:53:09 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:53:09 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:53:09 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:53:09 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:53:09 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:53:09 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:53:09 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:53:09 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:53:09 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:53:09 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:53:09 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:53:09 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:53:09 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:53:09 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:53:09 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:53:09 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:53:09 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:53:09 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:53:09 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:53:09 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:53:09 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:53:09 compute-1 nova_compute[225855]:     <controller type="usb" index="0"/>
Jan 20 14:53:09 compute-1 nova_compute[225855]:     <memballoon model="virtio">
Jan 20 14:53:09 compute-1 nova_compute[225855]:       <stats period="10"/>
Jan 20 14:53:09 compute-1 nova_compute[225855]:     </memballoon>
Jan 20 14:53:09 compute-1 nova_compute[225855]:   </devices>
Jan 20 14:53:09 compute-1 nova_compute[225855]: </domain>
Jan 20 14:53:09 compute-1 nova_compute[225855]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 20 14:53:09 compute-1 nova_compute[225855]: 2026-01-20 14:53:09.416 225859 DEBUG nova.compute.manager [None req-b1b59a96-b618-4ebc-b642-4b40ab846a49 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] [instance: baada610-f563-4c97-89a9-56eba792c352] Preparing to wait for external event network-vif-plugged-a3156414-5a96-462d-974e-a57c9cd8e9c8 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 20 14:53:09 compute-1 nova_compute[225855]: 2026-01-20 14:53:09.417 225859 DEBUG oslo_concurrency.lockutils [None req-b1b59a96-b618-4ebc-b642-4b40ab846a49 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Acquiring lock "baada610-f563-4c97-89a9-56eba792c352-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:53:09 compute-1 nova_compute[225855]: 2026-01-20 14:53:09.417 225859 DEBUG oslo_concurrency.lockutils [None req-b1b59a96-b618-4ebc-b642-4b40ab846a49 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Lock "baada610-f563-4c97-89a9-56eba792c352-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:53:09 compute-1 nova_compute[225855]: 2026-01-20 14:53:09.417 225859 DEBUG oslo_concurrency.lockutils [None req-b1b59a96-b618-4ebc-b642-4b40ab846a49 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Lock "baada610-f563-4c97-89a9-56eba792c352-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:53:09 compute-1 nova_compute[225855]: 2026-01-20 14:53:09.418 225859 DEBUG nova.virt.libvirt.vif [None req-b1b59a96-b618-4ebc-b642-4b40ab846a49 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T14:52:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerStableDeviceRescueTest-server-185388239',display_name='tempest-ServerStableDeviceRescueTest-server-185388239',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverstabledevicerescuetest-server-185388239',id=119,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIfbHk5SMnBEAUVZEhoLPfdB2qCay341zK720hYW5qflxdgcEr+fHp9C3kAgJFmqON8wn8DkPxW0WmihyCLPTK7Iiiy5VDiRJ7U/0O7hlyzm17ZWhCVdPfXSugKxmeVL3w==',key_name='tempest-keypair-647310408',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0a29915e0dd2403fbd7b7e847696b00a',ramdisk_id='',reservation_id='r-d9074tfl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerStableDeviceRescueTest-129078052',owner_user_name='tempest-ServerStableDeviceRescueTest-129078052-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T14:53:03Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='d85d286ce6224326a0f4a15a06afbfea',uuid=baada610-f563-4c97-89a9-56eba792c352,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a3156414-5a96-462d-974e-a57c9cd8e9c8", "address": "fa:16:3e:9e:93:82", "network": {"id": "79184781-1f23-4584-87de-08e262242488", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-165460946-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": 
[{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a29915e0dd2403fbd7b7e847696b00a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3156414-5a", "ovs_interfaceid": "a3156414-5a96-462d-974e-a57c9cd8e9c8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 20 14:53:09 compute-1 nova_compute[225855]: 2026-01-20 14:53:09.419 225859 DEBUG nova.network.os_vif_util [None req-b1b59a96-b618-4ebc-b642-4b40ab846a49 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Converting VIF {"id": "a3156414-5a96-462d-974e-a57c9cd8e9c8", "address": "fa:16:3e:9e:93:82", "network": {"id": "79184781-1f23-4584-87de-08e262242488", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-165460946-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a29915e0dd2403fbd7b7e847696b00a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3156414-5a", "ovs_interfaceid": "a3156414-5a96-462d-974e-a57c9cd8e9c8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:53:09 compute-1 nova_compute[225855]: 2026-01-20 14:53:09.419 225859 DEBUG nova.network.os_vif_util [None req-b1b59a96-b618-4ebc-b642-4b40ab846a49 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9e:93:82,bridge_name='br-int',has_traffic_filtering=True,id=a3156414-5a96-462d-974e-a57c9cd8e9c8,network=Network(79184781-1f23-4584-87de-08e262242488),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa3156414-5a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:53:09 compute-1 nova_compute[225855]: 2026-01-20 14:53:09.420 225859 DEBUG os_vif [None req-b1b59a96-b618-4ebc-b642-4b40ab846a49 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9e:93:82,bridge_name='br-int',has_traffic_filtering=True,id=a3156414-5a96-462d-974e-a57c9cd8e9c8,network=Network(79184781-1f23-4584-87de-08e262242488),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa3156414-5a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 20 14:53:09 compute-1 nova_compute[225855]: 2026-01-20 14:53:09.420 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:53:09 compute-1 nova_compute[225855]: 2026-01-20 14:53:09.421 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:53:09 compute-1 nova_compute[225855]: 2026-01-20 14:53:09.421 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:53:09 compute-1 nova_compute[225855]: 2026-01-20 14:53:09.426 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:53:09 compute-1 nova_compute[225855]: 2026-01-20 14:53:09.426 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa3156414-5a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:53:09 compute-1 nova_compute[225855]: 2026-01-20 14:53:09.427 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa3156414-5a, col_values=(('external_ids', {'iface-id': 'a3156414-5a96-462d-974e-a57c9cd8e9c8', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:9e:93:82', 'vm-uuid': 'baada610-f563-4c97-89a9-56eba792c352'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:53:09 compute-1 nova_compute[225855]: 2026-01-20 14:53:09.428 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:53:09 compute-1 NetworkManager[49104]: <info>  [1768920789.4295] manager: (tapa3156414-5a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/198)
Jan 20 14:53:09 compute-1 nova_compute[225855]: 2026-01-20 14:53:09.433 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 14:53:09 compute-1 nova_compute[225855]: 2026-01-20 14:53:09.437 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:53:09 compute-1 nova_compute[225855]: 2026-01-20 14:53:09.438 225859 INFO os_vif [None req-b1b59a96-b618-4ebc-b642-4b40ab846a49 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9e:93:82,bridge_name='br-int',has_traffic_filtering=True,id=a3156414-5a96-462d-974e-a57c9cd8e9c8,network=Network(79184781-1f23-4584-87de-08e262242488),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa3156414-5a')
Jan 20 14:53:09 compute-1 nova_compute[225855]: 2026-01-20 14:53:09.500 225859 DEBUG nova.virt.libvirt.driver [None req-b1b59a96-b618-4ebc-b642-4b40ab846a49 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:53:09 compute-1 nova_compute[225855]: 2026-01-20 14:53:09.500 225859 DEBUG nova.virt.libvirt.driver [None req-b1b59a96-b618-4ebc-b642-4b40ab846a49 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:53:09 compute-1 nova_compute[225855]: 2026-01-20 14:53:09.500 225859 DEBUG nova.virt.libvirt.driver [None req-b1b59a96-b618-4ebc-b642-4b40ab846a49 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] No VIF found with MAC fa:16:3e:9e:93:82, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 20 14:53:09 compute-1 nova_compute[225855]: 2026-01-20 14:53:09.501 225859 INFO nova.virt.libvirt.driver [None req-b1b59a96-b618-4ebc-b642-4b40ab846a49 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] [instance: baada610-f563-4c97-89a9-56eba792c352] Using config drive
Jan 20 14:53:09 compute-1 nova_compute[225855]: 2026-01-20 14:53:09.532 225859 DEBUG nova.storage.rbd_utils [None req-b1b59a96-b618-4ebc-b642-4b40ab846a49 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] rbd image baada610-f563-4c97-89a9-56eba792c352_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:53:09 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e278 e278: 3 total, 3 up, 3 in
Jan 20 14:53:09 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:53:09 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:53:09 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:53:09.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:53:10 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/3601927553' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:53:10 compute-1 ceph-mon[81775]: osdmap e278: 3 total, 3 up, 3 in
Jan 20 14:53:10 compute-1 nova_compute[225855]: 2026-01-20 14:53:10.174 225859 INFO nova.virt.libvirt.driver [None req-b1b59a96-b618-4ebc-b642-4b40ab846a49 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] [instance: baada610-f563-4c97-89a9-56eba792c352] Creating config drive at /var/lib/nova/instances/baada610-f563-4c97-89a9-56eba792c352/disk.config
Jan 20 14:53:10 compute-1 nova_compute[225855]: 2026-01-20 14:53:10.181 225859 DEBUG oslo_concurrency.processutils [None req-b1b59a96-b618-4ebc-b642-4b40ab846a49 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/baada610-f563-4c97-89a9-56eba792c352/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpzd2gtkh0 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:53:10 compute-1 nova_compute[225855]: 2026-01-20 14:53:10.319 225859 DEBUG oslo_concurrency.processutils [None req-b1b59a96-b618-4ebc-b642-4b40ab846a49 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/baada610-f563-4c97-89a9-56eba792c352/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpzd2gtkh0" returned: 0 in 0.138s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:53:10 compute-1 nova_compute[225855]: 2026-01-20 14:53:10.347 225859 DEBUG nova.storage.rbd_utils [None req-b1b59a96-b618-4ebc-b642-4b40ab846a49 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] rbd image baada610-f563-4c97-89a9-56eba792c352_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:53:10 compute-1 nova_compute[225855]: 2026-01-20 14:53:10.350 225859 DEBUG oslo_concurrency.processutils [None req-b1b59a96-b618-4ebc-b642-4b40ab846a49 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/baada610-f563-4c97-89a9-56eba792c352/disk.config baada610-f563-4c97-89a9-56eba792c352_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:53:10 compute-1 nova_compute[225855]: 2026-01-20 14:53:10.533 225859 DEBUG oslo_concurrency.processutils [None req-b1b59a96-b618-4ebc-b642-4b40ab846a49 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/baada610-f563-4c97-89a9-56eba792c352/disk.config baada610-f563-4c97-89a9-56eba792c352_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.183s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:53:10 compute-1 nova_compute[225855]: 2026-01-20 14:53:10.535 225859 INFO nova.virt.libvirt.driver [None req-b1b59a96-b618-4ebc-b642-4b40ab846a49 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] [instance: baada610-f563-4c97-89a9-56eba792c352] Deleting local config drive /var/lib/nova/instances/baada610-f563-4c97-89a9-56eba792c352/disk.config because it was imported into RBD.
Jan 20 14:53:10 compute-1 NetworkManager[49104]: <info>  [1768920790.5776] manager: (tapa3156414-5a): new Tun device (/org/freedesktop/NetworkManager/Devices/199)
Jan 20 14:53:10 compute-1 kernel: tapa3156414-5a: entered promiscuous mode
Jan 20 14:53:10 compute-1 ovn_controller[130490]: 2026-01-20T14:53:10Z|00458|binding|INFO|Claiming lport a3156414-5a96-462d-974e-a57c9cd8e9c8 for this chassis.
Jan 20 14:53:10 compute-1 ovn_controller[130490]: 2026-01-20T14:53:10Z|00459|binding|INFO|a3156414-5a96-462d-974e-a57c9cd8e9c8: Claiming fa:16:3e:9e:93:82 10.100.0.3
Jan 20 14:53:10 compute-1 nova_compute[225855]: 2026-01-20 14:53:10.579 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:53:10 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:53:10.600 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9e:93:82 10.100.0.3'], port_security=['fa:16:3e:9e:93:82 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'baada610-f563-4c97-89a9-56eba792c352', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-79184781-1f23-4584-87de-08e262242488', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0a29915e0dd2403fbd7b7e847696b00a', 'neutron:revision_number': '2', 'neutron:security_group_ids': '34326e47-c07e-48d1-9283-c1c5634fdc52', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6b73ab05-b29f-401a-84a5-ea1a96103f33, chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], logical_port=a3156414-5a96-462d-974e-a57c9cd8e9c8) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:53:10 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:53:10.602 140354 INFO neutron.agent.ovn.metadata.agent [-] Port a3156414-5a96-462d-974e-a57c9cd8e9c8 in datapath 79184781-1f23-4584-87de-08e262242488 bound to our chassis
Jan 20 14:53:10 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:53:10.604 140354 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 79184781-1f23-4584-87de-08e262242488
Jan 20 14:53:10 compute-1 systemd-machined[194361]: New machine qemu-55-instance-00000077.
Jan 20 14:53:10 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:53:10.619 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[c167eb3f-6dd9-4bb7-9155-fba8f9f9a059]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:53:10 compute-1 ovn_controller[130490]: 2026-01-20T14:53:10Z|00460|binding|INFO|Setting lport a3156414-5a96-462d-974e-a57c9cd8e9c8 up in Southbound
Jan 20 14:53:10 compute-1 ovn_controller[130490]: 2026-01-20T14:53:10Z|00461|binding|INFO|Setting lport a3156414-5a96-462d-974e-a57c9cd8e9c8 ovn-installed in OVS
Jan 20 14:53:10 compute-1 nova_compute[225855]: 2026-01-20 14:53:10.621 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:53:10 compute-1 nova_compute[225855]: 2026-01-20 14:53:10.625 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:53:10 compute-1 systemd[1]: Started Virtual Machine qemu-55-instance-00000077.
Jan 20 14:53:10 compute-1 systemd-udevd[272395]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 14:53:10 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:53:10.651 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[511be42f-671e-45d3-b01f-3ec34d1d31da]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:53:10 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e279 e279: 3 total, 3 up, 3 in
Jan 20 14:53:10 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:53:10.654 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[5972c449-3dd9-4e4f-9e99-59479d97e9de]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:53:10 compute-1 NetworkManager[49104]: <info>  [1768920790.6618] device (tapa3156414-5a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 20 14:53:10 compute-1 NetworkManager[49104]: <info>  [1768920790.6633] device (tapa3156414-5a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 20 14:53:10 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:53:10.684 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[8a1ac4bf-a621-49de-a58d-8431931e0ea1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:53:10 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:53:10.700 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[9ed81a70-9d8d-4d7e-8b96-98b87b4bf2cd]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap79184781-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:38:7c:2a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 6, 'rx_bytes': 616, 'tx_bytes': 440, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 6, 'rx_bytes': 616, 'tx_bytes': 440, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 129], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 580405, 'reachable_time': 23111, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 272419, 'error': None, 'target': 'ovnmeta-79184781-1f23-4584-87de-08e262242488', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:53:10 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:53:10.713 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[33e94a7e-0ec3-4a7b-8df5-1984b2024af3]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap79184781-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 580414, 'tstamp': 580414}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 272423, 'error': None, 'target': 'ovnmeta-79184781-1f23-4584-87de-08e262242488', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap79184781-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 580417, 'tstamp': 580417}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 272423, 'error': None, 'target': 'ovnmeta-79184781-1f23-4584-87de-08e262242488', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:53:10 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:53:10.715 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap79184781-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:53:10 compute-1 nova_compute[225855]: 2026-01-20 14:53:10.716 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:53:10 compute-1 nova_compute[225855]: 2026-01-20 14:53:10.717 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:53:10 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:53:10.717 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap79184781-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:53:10 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:53:10.717 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:53:10 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:53:10.718 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap79184781-10, col_values=(('external_ids', {'iface-id': 'b033e9e6-9781-4424-a20f-7b48a14e2c80'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:53:10 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:53:10.718 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:53:10 compute-1 podman[272379]: 2026-01-20 14:53:10.726962298 +0000 UTC m=+0.118605509 container health_status 72be95c12e041eb2cd1a16ad70dfa1391d9f03a5f3180a614ead9459dcfcd46f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 20 14:53:10 compute-1 nova_compute[225855]: 2026-01-20 14:53:10.970 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:53:11 compute-1 nova_compute[225855]: 2026-01-20 14:53:11.002 225859 DEBUG nova.network.neutron [req-0b7503d7-3150-4015-9248-34691eb4bb30 req-43095be4-e873-4bf7-aad7-00d41ef180df 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: baada610-f563-4c97-89a9-56eba792c352] Updated VIF entry in instance network info cache for port a3156414-5a96-462d-974e-a57c9cd8e9c8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 14:53:11 compute-1 nova_compute[225855]: 2026-01-20 14:53:11.002 225859 DEBUG nova.network.neutron [req-0b7503d7-3150-4015-9248-34691eb4bb30 req-43095be4-e873-4bf7-aad7-00d41ef180df 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: baada610-f563-4c97-89a9-56eba792c352] Updating instance_info_cache with network_info: [{"id": "a3156414-5a96-462d-974e-a57c9cd8e9c8", "address": "fa:16:3e:9e:93:82", "network": {"id": "79184781-1f23-4584-87de-08e262242488", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-165460946-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a29915e0dd2403fbd7b7e847696b00a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3156414-5a", "ovs_interfaceid": "a3156414-5a96-462d-974e-a57c9cd8e9c8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:53:11 compute-1 nova_compute[225855]: 2026-01-20 14:53:11.006 225859 DEBUG nova.compute.manager [req-a4198ce5-ec65-448c-9b74-c76d44c044d7 req-b2d1a642-60cd-46f7-8b31-2b6b4a013218 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: baada610-f563-4c97-89a9-56eba792c352] Received event network-vif-plugged-a3156414-5a96-462d-974e-a57c9cd8e9c8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:53:11 compute-1 nova_compute[225855]: 2026-01-20 14:53:11.006 225859 DEBUG oslo_concurrency.lockutils [req-a4198ce5-ec65-448c-9b74-c76d44c044d7 req-b2d1a642-60cd-46f7-8b31-2b6b4a013218 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "baada610-f563-4c97-89a9-56eba792c352-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:53:11 compute-1 nova_compute[225855]: 2026-01-20 14:53:11.007 225859 DEBUG oslo_concurrency.lockutils [req-a4198ce5-ec65-448c-9b74-c76d44c044d7 req-b2d1a642-60cd-46f7-8b31-2b6b4a013218 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "baada610-f563-4c97-89a9-56eba792c352-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:53:11 compute-1 nova_compute[225855]: 2026-01-20 14:53:11.007 225859 DEBUG oslo_concurrency.lockutils [req-a4198ce5-ec65-448c-9b74-c76d44c044d7 req-b2d1a642-60cd-46f7-8b31-2b6b4a013218 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "baada610-f563-4c97-89a9-56eba792c352-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:53:11 compute-1 nova_compute[225855]: 2026-01-20 14:53:11.007 225859 DEBUG nova.compute.manager [req-a4198ce5-ec65-448c-9b74-c76d44c044d7 req-b2d1a642-60cd-46f7-8b31-2b6b4a013218 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: baada610-f563-4c97-89a9-56eba792c352] Processing event network-vif-plugged-a3156414-5a96-462d-974e-a57c9cd8e9c8 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 20 14:53:11 compute-1 nova_compute[225855]: 2026-01-20 14:53:11.038 225859 DEBUG oslo_concurrency.lockutils [req-0b7503d7-3150-4015-9248-34691eb4bb30 req-43095be4-e873-4bf7-aad7-00d41ef180df 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-baada610-f563-4c97-89a9-56eba792c352" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:53:11 compute-1 nova_compute[225855]: 2026-01-20 14:53:11.068 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768920791.0682774, baada610-f563-4c97-89a9-56eba792c352 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:53:11 compute-1 nova_compute[225855]: 2026-01-20 14:53:11.069 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: baada610-f563-4c97-89a9-56eba792c352] VM Started (Lifecycle Event)
Jan 20 14:53:11 compute-1 nova_compute[225855]: 2026-01-20 14:53:11.073 225859 DEBUG nova.compute.manager [None req-b1b59a96-b618-4ebc-b642-4b40ab846a49 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] [instance: baada610-f563-4c97-89a9-56eba792c352] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 20 14:53:11 compute-1 ceph-mon[81775]: pgmap v1990: 321 pgs: 2 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 309 active+clean; 775 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 4.9 MiB/s rd, 6.9 MiB/s wr, 237 op/s
Jan 20 14:53:11 compute-1 ceph-mon[81775]: osdmap e279: 3 total, 3 up, 3 in
Jan 20 14:53:11 compute-1 nova_compute[225855]: 2026-01-20 14:53:11.077 225859 DEBUG nova.virt.libvirt.driver [None req-b1b59a96-b618-4ebc-b642-4b40ab846a49 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] [instance: baada610-f563-4c97-89a9-56eba792c352] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 20 14:53:11 compute-1 nova_compute[225855]: 2026-01-20 14:53:11.080 225859 INFO nova.virt.libvirt.driver [-] [instance: baada610-f563-4c97-89a9-56eba792c352] Instance spawned successfully.
Jan 20 14:53:11 compute-1 nova_compute[225855]: 2026-01-20 14:53:11.080 225859 DEBUG nova.virt.libvirt.driver [None req-b1b59a96-b618-4ebc-b642-4b40ab846a49 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] [instance: baada610-f563-4c97-89a9-56eba792c352] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 20 14:53:11 compute-1 nova_compute[225855]: 2026-01-20 14:53:11.143 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: baada610-f563-4c97-89a9-56eba792c352] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:53:11 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:53:11 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:53:11 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:53:11.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:53:11 compute-1 nova_compute[225855]: 2026-01-20 14:53:11.148 225859 DEBUG nova.virt.libvirt.driver [None req-b1b59a96-b618-4ebc-b642-4b40ab846a49 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] [instance: baada610-f563-4c97-89a9-56eba792c352] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:53:11 compute-1 nova_compute[225855]: 2026-01-20 14:53:11.148 225859 DEBUG nova.virt.libvirt.driver [None req-b1b59a96-b618-4ebc-b642-4b40ab846a49 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] [instance: baada610-f563-4c97-89a9-56eba792c352] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:53:11 compute-1 nova_compute[225855]: 2026-01-20 14:53:11.149 225859 DEBUG nova.virt.libvirt.driver [None req-b1b59a96-b618-4ebc-b642-4b40ab846a49 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] [instance: baada610-f563-4c97-89a9-56eba792c352] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:53:11 compute-1 nova_compute[225855]: 2026-01-20 14:53:11.149 225859 DEBUG nova.virt.libvirt.driver [None req-b1b59a96-b618-4ebc-b642-4b40ab846a49 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] [instance: baada610-f563-4c97-89a9-56eba792c352] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:53:11 compute-1 nova_compute[225855]: 2026-01-20 14:53:11.150 225859 DEBUG nova.virt.libvirt.driver [None req-b1b59a96-b618-4ebc-b642-4b40ab846a49 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] [instance: baada610-f563-4c97-89a9-56eba792c352] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:53:11 compute-1 nova_compute[225855]: 2026-01-20 14:53:11.150 225859 DEBUG nova.virt.libvirt.driver [None req-b1b59a96-b618-4ebc-b642-4b40ab846a49 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] [instance: baada610-f563-4c97-89a9-56eba792c352] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:53:11 compute-1 nova_compute[225855]: 2026-01-20 14:53:11.154 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: baada610-f563-4c97-89a9-56eba792c352] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:53:11 compute-1 sudo[272466]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:53:11 compute-1 sudo[272466]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:53:11 compute-1 sudo[272466]: pam_unix(sudo:session): session closed for user root
Jan 20 14:53:11 compute-1 nova_compute[225855]: 2026-01-20 14:53:11.294 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: baada610-f563-4c97-89a9-56eba792c352] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 14:53:11 compute-1 nova_compute[225855]: 2026-01-20 14:53:11.294 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768920791.0697346, baada610-f563-4c97-89a9-56eba792c352 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:53:11 compute-1 nova_compute[225855]: 2026-01-20 14:53:11.294 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: baada610-f563-4c97-89a9-56eba792c352] VM Paused (Lifecycle Event)
Jan 20 14:53:11 compute-1 sudo[272491]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:53:11 compute-1 sudo[272491]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:53:11 compute-1 sudo[272491]: pam_unix(sudo:session): session closed for user root
Jan 20 14:53:11 compute-1 nova_compute[225855]: 2026-01-20 14:53:11.325 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: baada610-f563-4c97-89a9-56eba792c352] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:53:11 compute-1 nova_compute[225855]: 2026-01-20 14:53:11.329 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768920791.0762372, baada610-f563-4c97-89a9-56eba792c352 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:53:11 compute-1 nova_compute[225855]: 2026-01-20 14:53:11.330 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: baada610-f563-4c97-89a9-56eba792c352] VM Resumed (Lifecycle Event)
Jan 20 14:53:11 compute-1 nova_compute[225855]: 2026-01-20 14:53:11.361 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: baada610-f563-4c97-89a9-56eba792c352] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:53:11 compute-1 nova_compute[225855]: 2026-01-20 14:53:11.365 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: baada610-f563-4c97-89a9-56eba792c352] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:53:11 compute-1 nova_compute[225855]: 2026-01-20 14:53:11.470 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: baada610-f563-4c97-89a9-56eba792c352] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 14:53:11 compute-1 nova_compute[225855]: 2026-01-20 14:53:11.558 225859 INFO nova.compute.manager [None req-b1b59a96-b618-4ebc-b642-4b40ab846a49 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] [instance: baada610-f563-4c97-89a9-56eba792c352] Took 8.22 seconds to spawn the instance on the hypervisor.
Jan 20 14:53:11 compute-1 nova_compute[225855]: 2026-01-20 14:53:11.558 225859 DEBUG nova.compute.manager [None req-b1b59a96-b618-4ebc-b642-4b40ab846a49 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] [instance: baada610-f563-4c97-89a9-56eba792c352] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:53:11 compute-1 nova_compute[225855]: 2026-01-20 14:53:11.702 225859 INFO nova.compute.manager [None req-b1b59a96-b618-4ebc-b642-4b40ab846a49 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] [instance: baada610-f563-4c97-89a9-56eba792c352] Took 10.96 seconds to build instance.
Jan 20 14:53:11 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:53:11 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:53:11 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:53:11.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:53:12 compute-1 nova_compute[225855]: 2026-01-20 14:53:12.009 225859 DEBUG oslo_concurrency.lockutils [None req-b1b59a96-b618-4ebc-b642-4b40ab846a49 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Lock "baada610-f563-4c97-89a9-56eba792c352" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.452s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:53:12 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/1108883090' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:53:12 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e279 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:53:13 compute-1 ceph-mon[81775]: pgmap v1992: 321 pgs: 2 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 309 active+clean; 769 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 102 KiB/s rd, 2.8 MiB/s wr, 153 op/s
Jan 20 14:53:13 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e280 e280: 3 total, 3 up, 3 in
Jan 20 14:53:13 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:53:13 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:53:13 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:53:13.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:53:13 compute-1 nova_compute[225855]: 2026-01-20 14:53:13.156 225859 DEBUG nova.compute.manager [req-54389736-f714-4176-ab5e-35f2958b4369 req-114b4779-60d6-44d8-b611-7283c38e8c12 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: baada610-f563-4c97-89a9-56eba792c352] Received event network-vif-plugged-a3156414-5a96-462d-974e-a57c9cd8e9c8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:53:13 compute-1 nova_compute[225855]: 2026-01-20 14:53:13.156 225859 DEBUG oslo_concurrency.lockutils [req-54389736-f714-4176-ab5e-35f2958b4369 req-114b4779-60d6-44d8-b611-7283c38e8c12 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "baada610-f563-4c97-89a9-56eba792c352-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:53:13 compute-1 nova_compute[225855]: 2026-01-20 14:53:13.157 225859 DEBUG oslo_concurrency.lockutils [req-54389736-f714-4176-ab5e-35f2958b4369 req-114b4779-60d6-44d8-b611-7283c38e8c12 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "baada610-f563-4c97-89a9-56eba792c352-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:53:13 compute-1 nova_compute[225855]: 2026-01-20 14:53:13.157 225859 DEBUG oslo_concurrency.lockutils [req-54389736-f714-4176-ab5e-35f2958b4369 req-114b4779-60d6-44d8-b611-7283c38e8c12 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "baada610-f563-4c97-89a9-56eba792c352-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:53:13 compute-1 nova_compute[225855]: 2026-01-20 14:53:13.157 225859 DEBUG nova.compute.manager [req-54389736-f714-4176-ab5e-35f2958b4369 req-114b4779-60d6-44d8-b611-7283c38e8c12 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: baada610-f563-4c97-89a9-56eba792c352] No waiting events found dispatching network-vif-plugged-a3156414-5a96-462d-974e-a57c9cd8e9c8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:53:13 compute-1 nova_compute[225855]: 2026-01-20 14:53:13.157 225859 WARNING nova.compute.manager [req-54389736-f714-4176-ab5e-35f2958b4369 req-114b4779-60d6-44d8-b611-7283c38e8c12 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: baada610-f563-4c97-89a9-56eba792c352] Received unexpected event network-vif-plugged-a3156414-5a96-462d-974e-a57c9cd8e9c8 for instance with vm_state active and task_state None.
Jan 20 14:53:13 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:53:13.728 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=38, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '12:bb:42', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '06:92:24:f7:15:56'}, ipsec=False) old=SB_Global(nb_cfg=37) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:53:13 compute-1 nova_compute[225855]: 2026-01-20 14:53:13.729 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:53:13 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:53:13.730 140354 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 20 14:53:13 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:53:13 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:53:13 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:53:13.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:53:14 compute-1 ceph-mon[81775]: osdmap e280: 3 total, 3 up, 3 in
Jan 20 14:53:14 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/2714408348' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 14:53:14 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/2714408348' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 14:53:14 compute-1 nova_compute[225855]: 2026-01-20 14:53:14.469 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:53:14 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e281 e281: 3 total, 3 up, 3 in
Jan 20 14:53:14 compute-1 nova_compute[225855]: 2026-01-20 14:53:14.679 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:53:14 compute-1 NetworkManager[49104]: <info>  [1768920794.6799] manager: (patch-provnet-b62c391b-f7a3-4a38-a0df-72ac0383ca74-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/200)
Jan 20 14:53:14 compute-1 NetworkManager[49104]: <info>  [1768920794.6808] manager: (patch-br-int-to-provnet-b62c391b-f7a3-4a38-a0df-72ac0383ca74): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/201)
Jan 20 14:53:14 compute-1 nova_compute[225855]: 2026-01-20 14:53:14.861 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:53:14 compute-1 ovn_controller[130490]: 2026-01-20T14:53:14Z|00462|binding|INFO|Releasing lport b033e9e6-9781-4424-a20f-7b48a14e2c80 from this chassis (sb_readonly=0)
Jan 20 14:53:14 compute-1 nova_compute[225855]: 2026-01-20 14:53:14.879 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:53:15 compute-1 ceph-mon[81775]: pgmap v1994: 321 pgs: 321 active+clean; 719 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 113 KiB/s rd, 3.6 MiB/s wr, 130 op/s
Jan 20 14:53:15 compute-1 ceph-mon[81775]: osdmap e281: 3 total, 3 up, 3 in
Jan 20 14:53:15 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:53:15 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:53:15 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:53:15.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:53:15 compute-1 nova_compute[225855]: 2026-01-20 14:53:15.189 225859 DEBUG nova.compute.manager [req-a5a7be6b-8a88-4801-b46b-7998864a745a req-e3d1a0ca-02ab-4def-a695-1748cb1e0e51 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: baada610-f563-4c97-89a9-56eba792c352] Received event network-changed-a3156414-5a96-462d-974e-a57c9cd8e9c8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:53:15 compute-1 nova_compute[225855]: 2026-01-20 14:53:15.190 225859 DEBUG nova.compute.manager [req-a5a7be6b-8a88-4801-b46b-7998864a745a req-e3d1a0ca-02ab-4def-a695-1748cb1e0e51 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: baada610-f563-4c97-89a9-56eba792c352] Refreshing instance network info cache due to event network-changed-a3156414-5a96-462d-974e-a57c9cd8e9c8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 14:53:15 compute-1 nova_compute[225855]: 2026-01-20 14:53:15.190 225859 DEBUG oslo_concurrency.lockutils [req-a5a7be6b-8a88-4801-b46b-7998864a745a req-e3d1a0ca-02ab-4def-a695-1748cb1e0e51 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-baada610-f563-4c97-89a9-56eba792c352" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:53:15 compute-1 nova_compute[225855]: 2026-01-20 14:53:15.190 225859 DEBUG oslo_concurrency.lockutils [req-a5a7be6b-8a88-4801-b46b-7998864a745a req-e3d1a0ca-02ab-4def-a695-1748cb1e0e51 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-baada610-f563-4c97-89a9-56eba792c352" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:53:15 compute-1 nova_compute[225855]: 2026-01-20 14:53:15.190 225859 DEBUG nova.network.neutron [req-a5a7be6b-8a88-4801-b46b-7998864a745a req-e3d1a0ca-02ab-4def-a695-1748cb1e0e51 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: baada610-f563-4c97-89a9-56eba792c352] Refreshing network info cache for port a3156414-5a96-462d-974e-a57c9cd8e9c8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 14:53:15 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:53:15 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:53:15 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:53:15.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:53:15 compute-1 nova_compute[225855]: 2026-01-20 14:53:15.973 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:53:16 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/821572000' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:53:16 compute-1 ceph-mon[81775]: pgmap v1996: 321 pgs: 321 active+clean; 670 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 4.6 MiB/s wr, 287 op/s
Jan 20 14:53:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:53:16.413 140354 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:53:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:53:16.413 140354 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:53:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:53:16.414 140354 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:53:17 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:53:17 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:53:17 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:53:17.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:53:17 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/720863058' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:53:17 compute-1 nova_compute[225855]: 2026-01-20 14:53:17.404 225859 DEBUG nova.network.neutron [req-a5a7be6b-8a88-4801-b46b-7998864a745a req-e3d1a0ca-02ab-4def-a695-1748cb1e0e51 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: baada610-f563-4c97-89a9-56eba792c352] Updated VIF entry in instance network info cache for port a3156414-5a96-462d-974e-a57c9cd8e9c8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 14:53:17 compute-1 nova_compute[225855]: 2026-01-20 14:53:17.406 225859 DEBUG nova.network.neutron [req-a5a7be6b-8a88-4801-b46b-7998864a745a req-e3d1a0ca-02ab-4def-a695-1748cb1e0e51 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: baada610-f563-4c97-89a9-56eba792c352] Updating instance_info_cache with network_info: [{"id": "a3156414-5a96-462d-974e-a57c9cd8e9c8", "address": "fa:16:3e:9e:93:82", "network": {"id": "79184781-1f23-4584-87de-08e262242488", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-165460946-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a29915e0dd2403fbd7b7e847696b00a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3156414-5a", "ovs_interfaceid": "a3156414-5a96-462d-974e-a57c9cd8e9c8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:53:17 compute-1 nova_compute[225855]: 2026-01-20 14:53:17.430 225859 DEBUG oslo_concurrency.lockutils [req-a5a7be6b-8a88-4801-b46b-7998864a745a req-e3d1a0ca-02ab-4def-a695-1748cb1e0e51 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-baada610-f563-4c97-89a9-56eba792c352" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:53:17 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e281 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:53:17 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:53:17 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:53:17 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:53:17.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:53:18 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/894774041' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:53:18 compute-1 ceph-mon[81775]: pgmap v1997: 321 pgs: 321 active+clean; 642 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 3.8 MiB/s wr, 255 op/s
Jan 20 14:53:18 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:53:18.732 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5ffd4ac3-9266-4927-98ad-20a17782c725, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '38'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:53:19 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:53:19 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:53:19 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:53:19.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:53:19 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/1212389849' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:53:19 compute-1 nova_compute[225855]: 2026-01-20 14:53:19.470 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:53:19 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e282 e282: 3 total, 3 up, 3 in
Jan 20 14:53:19 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:53:19 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:53:19 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:53:19.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:53:20 compute-1 ceph-mon[81775]: osdmap e282: 3 total, 3 up, 3 in
Jan 20 14:53:20 compute-1 ceph-mon[81775]: pgmap v1999: 321 pgs: 321 active+clean; 642 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 3.1 MiB/s wr, 234 op/s
Jan 20 14:53:20 compute-1 nova_compute[225855]: 2026-01-20 14:53:20.975 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:53:21 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:53:21 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:53:21 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:53:21.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:53:21 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:53:21 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:53:21 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:53:21.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:53:22 compute-1 podman[272524]: 2026-01-20 14:53:22.003155467 +0000 UTC m=+0.047687787 container health_status 533cd8e0c3b8b7f910f15cef49e7f879ab46c12ce646a35628c15f13725eb500 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:53:22 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:53:23 compute-1 ceph-mon[81775]: pgmap v2000: 321 pgs: 321 active+clean; 642 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 2.7 MiB/s wr, 207 op/s
Jan 20 14:53:23 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:53:23 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:53:23 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:53:23.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:53:23 compute-1 ovn_controller[130490]: 2026-01-20T14:53:23Z|00060|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:9e:93:82 10.100.0.3
Jan 20 14:53:23 compute-1 ovn_controller[130490]: 2026-01-20T14:53:23Z|00061|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:9e:93:82 10.100.0.3
Jan 20 14:53:23 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:53:23 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:53:23 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:53:23.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:53:24 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/1554013804' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:53:24 compute-1 nova_compute[225855]: 2026-01-20 14:53:24.473 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:53:25 compute-1 ceph-mon[81775]: pgmap v2001: 321 pgs: 321 active+clean; 642 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.5 MiB/s wr, 136 op/s
Jan 20 14:53:25 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:53:25 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:53:25 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:53:25.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:53:25 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:53:25 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:53:25 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:53:25.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:53:25 compute-1 nova_compute[225855]: 2026-01-20 14:53:25.977 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:53:26 compute-1 ceph-mon[81775]: pgmap v2002: 321 pgs: 321 active+clean; 665 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 3.1 MiB/s wr, 131 op/s
Jan 20 14:53:27 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:53:27 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:53:27 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:53:27.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:53:27 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/1500097760' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:53:27 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:53:27 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:53:27 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:53:27 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:53:27.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:53:28 compute-1 ceph-mon[81775]: pgmap v2003: 321 pgs: 321 active+clean; 699 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 4.2 MiB/s wr, 168 op/s
Jan 20 14:53:29 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:53:29 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:53:29 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:53:29.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:53:29 compute-1 nova_compute[225855]: 2026-01-20 14:53:29.474 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:53:29 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:53:29 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:53:29 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:53:29.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:53:30 compute-1 nova_compute[225855]: 2026-01-20 14:53:30.981 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:53:31 compute-1 ceph-mon[81775]: pgmap v2004: 321 pgs: 321 active+clean; 707 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 4.2 MiB/s wr, 198 op/s
Jan 20 14:53:31 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:53:31 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:53:31 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:53:31.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:53:31 compute-1 nova_compute[225855]: 2026-01-20 14:53:31.274 225859 DEBUG nova.compute.manager [None req-57c441db-e1fb-4444-9764-df5914c151b4 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] [instance: baada610-f563-4c97-89a9-56eba792c352] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:53:31 compute-1 nova_compute[225855]: 2026-01-20 14:53:31.324 225859 INFO nova.compute.manager [None req-57c441db-e1fb-4444-9764-df5914c151b4 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] [instance: baada610-f563-4c97-89a9-56eba792c352] instance snapshotting
Jan 20 14:53:31 compute-1 sudo[272548]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:53:31 compute-1 sudo[272548]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:53:31 compute-1 sudo[272548]: pam_unix(sudo:session): session closed for user root
Jan 20 14:53:31 compute-1 sudo[272573]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:53:31 compute-1 sudo[272573]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:53:31 compute-1 sudo[272573]: pam_unix(sudo:session): session closed for user root
Jan 20 14:53:31 compute-1 nova_compute[225855]: 2026-01-20 14:53:31.642 225859 INFO nova.virt.libvirt.driver [None req-57c441db-e1fb-4444-9764-df5914c151b4 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] [instance: baada610-f563-4c97-89a9-56eba792c352] Beginning live snapshot process
Jan 20 14:53:31 compute-1 nova_compute[225855]: 2026-01-20 14:53:31.782 225859 DEBUG nova.virt.libvirt.imagebackend [None req-57c441db-e1fb-4444-9764-df5914c151b4 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] No parent info for a32b3e07-16d8-46fd-9a7b-c242c432fcf9; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Jan 20 14:53:31 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:53:31 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:53:31 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:53:31.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:53:31 compute-1 nova_compute[225855]: 2026-01-20 14:53:31.994 225859 DEBUG nova.storage.rbd_utils [None req-57c441db-e1fb-4444-9764-df5914c151b4 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] creating snapshot(ff1d44216f744f44807da7676144e1fc) on rbd image(baada610-f563-4c97-89a9-56eba792c352_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 20 14:53:32 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:53:33 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e283 e283: 3 total, 3 up, 3 in
Jan 20 14:53:33 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:53:33 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:53:33 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:53:33.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:53:33 compute-1 ceph-mon[81775]: pgmap v2005: 321 pgs: 321 active+clean; 722 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 3.9 MiB/s wr, 222 op/s
Jan 20 14:53:33 compute-1 nova_compute[225855]: 2026-01-20 14:53:33.214 225859 DEBUG nova.storage.rbd_utils [None req-57c441db-e1fb-4444-9764-df5914c151b4 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] cloning vms/baada610-f563-4c97-89a9-56eba792c352_disk@ff1d44216f744f44807da7676144e1fc to images/132a812e-f4a2-4a8b-813d-1df62e09798a clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Jan 20 14:53:33 compute-1 nova_compute[225855]: 2026-01-20 14:53:33.332 225859 DEBUG nova.storage.rbd_utils [None req-57c441db-e1fb-4444-9764-df5914c151b4 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] flattening images/132a812e-f4a2-4a8b-813d-1df62e09798a flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Jan 20 14:53:33 compute-1 nova_compute[225855]: 2026-01-20 14:53:33.738 225859 DEBUG nova.storage.rbd_utils [None req-57c441db-e1fb-4444-9764-df5914c151b4 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] removing snapshot(ff1d44216f744f44807da7676144e1fc) on rbd image(baada610-f563-4c97-89a9-56eba792c352_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Jan 20 14:53:33 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:53:33 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:53:33 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:53:33.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:53:34 compute-1 ceph-mon[81775]: osdmap e283: 3 total, 3 up, 3 in
Jan 20 14:53:34 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/3836508932' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:53:34 compute-1 ceph-mon[81775]: pgmap v2007: 321 pgs: 321 active+clean; 722 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 5.0 MiB/s rd, 4.7 MiB/s wr, 282 op/s
Jan 20 14:53:34 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/654798905' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:53:34 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e284 e284: 3 total, 3 up, 3 in
Jan 20 14:53:34 compute-1 nova_compute[225855]: 2026-01-20 14:53:34.236 225859 DEBUG nova.storage.rbd_utils [None req-57c441db-e1fb-4444-9764-df5914c151b4 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] creating snapshot(snap) on rbd image(132a812e-f4a2-4a8b-813d-1df62e09798a) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 20 14:53:34 compute-1 nova_compute[225855]: 2026-01-20 14:53:34.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:53:34 compute-1 nova_compute[225855]: 2026-01-20 14:53:34.476 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:53:35 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:53:35 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:53:35 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:53:35.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:53:35 compute-1 ceph-mon[81775]: osdmap e284: 3 total, 3 up, 3 in
Jan 20 14:53:35 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e285 e285: 3 total, 3 up, 3 in
Jan 20 14:53:35 compute-1 nova_compute[225855]: 2026-01-20 14:53:35.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:53:35 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:53:35 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:53:35 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:53:35.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:53:35 compute-1 nova_compute[225855]: 2026-01-20 14:53:35.983 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:53:36 compute-1 ceph-mon[81775]: osdmap e285: 3 total, 3 up, 3 in
Jan 20 14:53:36 compute-1 ceph-mon[81775]: pgmap v2010: 321 pgs: 321 active+clean; 790 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 9.8 MiB/s rd, 7.8 MiB/s wr, 262 op/s
Jan 20 14:53:36 compute-1 nova_compute[225855]: 2026-01-20 14:53:36.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:53:36 compute-1 nova_compute[225855]: 2026-01-20 14:53:36.340 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 14:53:36 compute-1 nova_compute[225855]: 2026-01-20 14:53:36.340 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:53:36 compute-1 nova_compute[225855]: 2026-01-20 14:53:36.652 225859 INFO nova.virt.libvirt.driver [None req-57c441db-e1fb-4444-9764-df5914c151b4 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] [instance: baada610-f563-4c97-89a9-56eba792c352] Snapshot image upload complete
Jan 20 14:53:36 compute-1 nova_compute[225855]: 2026-01-20 14:53:36.652 225859 INFO nova.compute.manager [None req-57c441db-e1fb-4444-9764-df5914c151b4 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] [instance: baada610-f563-4c97-89a9-56eba792c352] Took 5.33 seconds to snapshot the instance on the hypervisor.
Jan 20 14:53:37 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:53:37 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:53:37 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:53:37.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:53:37 compute-1 nova_compute[225855]: 2026-01-20 14:53:37.375 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:53:37 compute-1 nova_compute[225855]: 2026-01-20 14:53:37.375 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 14:53:37 compute-1 nova_compute[225855]: 2026-01-20 14:53:37.376 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 20 14:53:37 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e285 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:53:37 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:53:37 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:53:37 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:53:37.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:53:38 compute-1 nova_compute[225855]: 2026-01-20 14:53:38.009 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "refresh_cache-23ea4537-f03f-46de-881f-b979e232a3b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:53:38 compute-1 nova_compute[225855]: 2026-01-20 14:53:38.010 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquired lock "refresh_cache-23ea4537-f03f-46de-881f-b979e232a3b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:53:38 compute-1 nova_compute[225855]: 2026-01-20 14:53:38.010 225859 DEBUG nova.network.neutron [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] [instance: 23ea4537-f03f-46de-881f-b979e232a3b9] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 20 14:53:38 compute-1 nova_compute[225855]: 2026-01-20 14:53:38.010 225859 DEBUG nova.objects.instance [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 23ea4537-f03f-46de-881f-b979e232a3b9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:53:39 compute-1 ceph-mon[81775]: pgmap v2011: 321 pgs: 321 active+clean; 815 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 12 MiB/s rd, 11 MiB/s wr, 376 op/s
Jan 20 14:53:39 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:53:39 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:53:39 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:53:39.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:53:39 compute-1 nova_compute[225855]: 2026-01-20 14:53:39.478 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:53:39 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:53:39 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:53:39 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:53:39.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:53:40 compute-1 ceph-mon[81775]: pgmap v2012: 321 pgs: 321 active+clean; 826 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 9.5 MiB/s rd, 10 MiB/s wr, 319 op/s
Jan 20 14:53:40 compute-1 nova_compute[225855]: 2026-01-20 14:53:40.793 225859 DEBUG nova.network.neutron [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] [instance: 23ea4537-f03f-46de-881f-b979e232a3b9] Updating instance_info_cache with network_info: [{"id": "234381ea-07b1-41fe-b3c1-be97ce6a3b64", "address": "fa:16:3e:b5:55:3c", "network": {"id": "79184781-1f23-4584-87de-08e262242488", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-165460946-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a29915e0dd2403fbd7b7e847696b00a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap234381ea-07", "ovs_interfaceid": "234381ea-07b1-41fe-b3c1-be97ce6a3b64", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:53:40 compute-1 nova_compute[225855]: 2026-01-20 14:53:40.813 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Releasing lock "refresh_cache-23ea4537-f03f-46de-881f-b979e232a3b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:53:40 compute-1 nova_compute[225855]: 2026-01-20 14:53:40.814 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] [instance: 23ea4537-f03f-46de-881f-b979e232a3b9] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 20 14:53:40 compute-1 nova_compute[225855]: 2026-01-20 14:53:40.815 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:53:40 compute-1 nova_compute[225855]: 2026-01-20 14:53:40.815 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:53:40 compute-1 nova_compute[225855]: 2026-01-20 14:53:40.815 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 20 14:53:40 compute-1 nova_compute[225855]: 2026-01-20 14:53:40.831 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 20 14:53:40 compute-1 nova_compute[225855]: 2026-01-20 14:53:40.985 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:53:41 compute-1 podman[272744]: 2026-01-20 14:53:41.038727019 +0000 UTC m=+0.078555938 container health_status 72be95c12e041eb2cd1a16ad70dfa1391d9f03a5f3180a614ead9459dcfcd46f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:53:41 compute-1 nova_compute[225855]: 2026-01-20 14:53:41.127 225859 DEBUG oslo_concurrency.lockutils [None req-42a91944-3838-4633-a431-acd3dada7db3 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Acquiring lock "baada610-f563-4c97-89a9-56eba792c352" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:53:41 compute-1 nova_compute[225855]: 2026-01-20 14:53:41.128 225859 DEBUG oslo_concurrency.lockutils [None req-42a91944-3838-4633-a431-acd3dada7db3 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Lock "baada610-f563-4c97-89a9-56eba792c352" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:53:41 compute-1 nova_compute[225855]: 2026-01-20 14:53:41.145 225859 DEBUG nova.objects.instance [None req-42a91944-3838-4633-a431-acd3dada7db3 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Lazy-loading 'flavor' on Instance uuid baada610-f563-4c97-89a9-56eba792c352 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:53:41 compute-1 nova_compute[225855]: 2026-01-20 14:53:41.185 225859 DEBUG oslo_concurrency.lockutils [None req-42a91944-3838-4633-a431-acd3dada7db3 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Lock "baada610-f563-4c97-89a9-56eba792c352" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.058s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:53:41 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:53:41 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:53:41 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:53:41.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:53:41 compute-1 nova_compute[225855]: 2026-01-20 14:53:41.437 225859 DEBUG oslo_concurrency.lockutils [None req-42a91944-3838-4633-a431-acd3dada7db3 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Acquiring lock "baada610-f563-4c97-89a9-56eba792c352" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:53:41 compute-1 nova_compute[225855]: 2026-01-20 14:53:41.438 225859 DEBUG oslo_concurrency.lockutils [None req-42a91944-3838-4633-a431-acd3dada7db3 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Lock "baada610-f563-4c97-89a9-56eba792c352" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:53:41 compute-1 nova_compute[225855]: 2026-01-20 14:53:41.438 225859 INFO nova.compute.manager [None req-42a91944-3838-4633-a431-acd3dada7db3 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] [instance: baada610-f563-4c97-89a9-56eba792c352] Attaching volume 5f6a803f-d232-4e97-9965-ece0139e0fda to /dev/vdb
Jan 20 14:53:41 compute-1 nova_compute[225855]: 2026-01-20 14:53:41.609 225859 DEBUG os_brick.utils [None req-42a91944-3838-4633-a431-acd3dada7db3 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.101', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-1.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Jan 20 14:53:41 compute-1 nova_compute[225855]: 2026-01-20 14:53:41.610 231081 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:53:41 compute-1 nova_compute[225855]: 2026-01-20 14:53:41.632 231081 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.022s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:53:41 compute-1 nova_compute[225855]: 2026-01-20 14:53:41.632 231081 DEBUG oslo.privsep.daemon [-] privsep: reply[7d42ef2c-c0c6-42b6-8e4f-1a7778427568]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:53:41 compute-1 nova_compute[225855]: 2026-01-20 14:53:41.633 231081 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:53:41 compute-1 nova_compute[225855]: 2026-01-20 14:53:41.640 231081 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:53:41 compute-1 nova_compute[225855]: 2026-01-20 14:53:41.640 231081 DEBUG oslo.privsep.daemon [-] privsep: reply[d7f4d919-689f-44f9-820b-c366bcf5312d]: (4, ('InitiatorName=iqn.1994-05.com.redhat:1821ea3dc03d', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:53:41 compute-1 nova_compute[225855]: 2026-01-20 14:53:41.642 231081 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:53:41 compute-1 nova_compute[225855]: 2026-01-20 14:53:41.650 231081 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:53:41 compute-1 nova_compute[225855]: 2026-01-20 14:53:41.650 231081 DEBUG oslo.privsep.daemon [-] privsep: reply[42ba5721-f637-4239-ab0c-7931812b2f78]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:53:41 compute-1 nova_compute[225855]: 2026-01-20 14:53:41.651 231081 DEBUG oslo.privsep.daemon [-] privsep: reply[568cafb2-f4bd-4a0c-bcdc-92764a36f1c7]: (4, '870b1f1c-f19c-477b-b282-ee6eeba50974') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:53:41 compute-1 nova_compute[225855]: 2026-01-20 14:53:41.652 225859 DEBUG oslo_concurrency.processutils [None req-42a91944-3838-4633-a431-acd3dada7db3 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:53:41 compute-1 nova_compute[225855]: 2026-01-20 14:53:41.675 225859 DEBUG oslo_concurrency.processutils [None req-42a91944-3838-4633-a431-acd3dada7db3 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] CMD "nvme version" returned: 0 in 0.023s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:53:41 compute-1 nova_compute[225855]: 2026-01-20 14:53:41.677 225859 DEBUG os_brick.initiator.connectors.lightos [None req-42a91944-3838-4633-a431-acd3dada7db3 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Jan 20 14:53:41 compute-1 nova_compute[225855]: 2026-01-20 14:53:41.677 225859 DEBUG os_brick.initiator.connectors.lightos [None req-42a91944-3838-4633-a431-acd3dada7db3 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Jan 20 14:53:41 compute-1 nova_compute[225855]: 2026-01-20 14:53:41.677 225859 DEBUG os_brick.initiator.connectors.lightos [None req-42a91944-3838-4633-a431-acd3dada7db3 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:5350774e-8b5e-4dba-80a9-92d405981c1d dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Jan 20 14:53:41 compute-1 nova_compute[225855]: 2026-01-20 14:53:41.678 225859 DEBUG os_brick.utils [None req-42a91944-3838-4633-a431-acd3dada7db3 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] <== get_connector_properties: return (67ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.101', 'host': 'compute-1.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:1821ea3dc03d', 'do_local_attach': False, 'nvme_hostid': '5350774e-8b5e-4dba-80a9-92d405981c1d', 'system uuid': '870b1f1c-f19c-477b-b282-ee6eeba50974', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:5350774e-8b5e-4dba-80a9-92d405981c1d', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Jan 20 14:53:41 compute-1 nova_compute[225855]: 2026-01-20 14:53:41.678 225859 DEBUG nova.virt.block_device [None req-42a91944-3838-4633-a431-acd3dada7db3 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] [instance: baada610-f563-4c97-89a9-56eba792c352] Updating existing volume attachment record: d9b595f1-88ea-494e-bc6e-d959c2d6b8eb _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Jan 20 14:53:41 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:53:41 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:53:41 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:53:41.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:53:42 compute-1 nova_compute[225855]: 2026-01-20 14:53:42.439 225859 DEBUG nova.objects.instance [None req-42a91944-3838-4633-a431-acd3dada7db3 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Lazy-loading 'flavor' on Instance uuid baada610-f563-4c97-89a9-56eba792c352 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:53:42 compute-1 nova_compute[225855]: 2026-01-20 14:53:42.461 225859 DEBUG nova.virt.libvirt.driver [None req-42a91944-3838-4633-a431-acd3dada7db3 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] [instance: baada610-f563-4c97-89a9-56eba792c352] Attempting to attach volume 5f6a803f-d232-4e97-9965-ece0139e0fda with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
Jan 20 14:53:42 compute-1 nova_compute[225855]: 2026-01-20 14:53:42.463 225859 DEBUG nova.virt.libvirt.guest [None req-42a91944-3838-4633-a431-acd3dada7db3 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] attach device xml: <disk type="network" device="disk">
Jan 20 14:53:42 compute-1 nova_compute[225855]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 20 14:53:42 compute-1 nova_compute[225855]:   <source protocol="rbd" name="volumes/volume-5f6a803f-d232-4e97-9965-ece0139e0fda">
Jan 20 14:53:42 compute-1 nova_compute[225855]:     <host name="192.168.122.100" port="6789"/>
Jan 20 14:53:42 compute-1 nova_compute[225855]:     <host name="192.168.122.102" port="6789"/>
Jan 20 14:53:42 compute-1 nova_compute[225855]:     <host name="192.168.122.101" port="6789"/>
Jan 20 14:53:42 compute-1 nova_compute[225855]:   </source>
Jan 20 14:53:42 compute-1 nova_compute[225855]:   <auth username="openstack">
Jan 20 14:53:42 compute-1 nova_compute[225855]:     <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:53:42 compute-1 nova_compute[225855]:   </auth>
Jan 20 14:53:42 compute-1 nova_compute[225855]:   <target dev="vdb" bus="virtio"/>
Jan 20 14:53:42 compute-1 nova_compute[225855]:   <serial>5f6a803f-d232-4e97-9965-ece0139e0fda</serial>
Jan 20 14:53:42 compute-1 nova_compute[225855]: </disk>
Jan 20 14:53:42 compute-1 nova_compute[225855]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Jan 20 14:53:42 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e285 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:53:42 compute-1 nova_compute[225855]: 2026-01-20 14:53:42.588 225859 DEBUG nova.virt.libvirt.driver [None req-42a91944-3838-4633-a431-acd3dada7db3 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:53:42 compute-1 nova_compute[225855]: 2026-01-20 14:53:42.588 225859 DEBUG nova.virt.libvirt.driver [None req-42a91944-3838-4633-a431-acd3dada7db3 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:53:42 compute-1 nova_compute[225855]: 2026-01-20 14:53:42.589 225859 DEBUG nova.virt.libvirt.driver [None req-42a91944-3838-4633-a431-acd3dada7db3 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:53:42 compute-1 nova_compute[225855]: 2026-01-20 14:53:42.589 225859 DEBUG nova.virt.libvirt.driver [None req-42a91944-3838-4633-a431-acd3dada7db3 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] No VIF found with MAC fa:16:3e:9e:93:82, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 20 14:53:42 compute-1 nova_compute[225855]: 2026-01-20 14:53:42.781 225859 DEBUG oslo_concurrency.lockutils [None req-42a91944-3838-4633-a431-acd3dada7db3 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Lock "baada610-f563-4c97-89a9-56eba792c352" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 1.343s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:53:43 compute-1 ceph-mon[81775]: pgmap v2013: 321 pgs: 321 active+clean; 834 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 8.8 MiB/s rd, 9.2 MiB/s wr, 324 op/s
Jan 20 14:53:43 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/3957894210' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:53:43 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/1624741923' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:53:43 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:53:43 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:53:43 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:53:43.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:53:43 compute-1 nova_compute[225855]: 2026-01-20 14:53:43.356 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:53:43 compute-1 nova_compute[225855]: 2026-01-20 14:53:43.356 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:53:43 compute-1 nova_compute[225855]: 2026-01-20 14:53:43.373 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:53:43 compute-1 nova_compute[225855]: 2026-01-20 14:53:43.374 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:53:43 compute-1 nova_compute[225855]: 2026-01-20 14:53:43.374 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:53:43 compute-1 nova_compute[225855]: 2026-01-20 14:53:43.374 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 14:53:43 compute-1 nova_compute[225855]: 2026-01-20 14:53:43.374 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:53:43 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:53:43 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2478177324' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:53:43 compute-1 nova_compute[225855]: 2026-01-20 14:53:43.804 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:53:43 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:53:43 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:53:43 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:53:43.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:53:43 compute-1 nova_compute[225855]: 2026-01-20 14:53:43.876 225859 DEBUG nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] skipping disk for instance-00000077 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 14:53:43 compute-1 nova_compute[225855]: 2026-01-20 14:53:43.876 225859 DEBUG nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] skipping disk for instance-00000077 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 14:53:43 compute-1 nova_compute[225855]: 2026-01-20 14:53:43.877 225859 DEBUG nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] skipping disk for instance-00000077 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 14:53:43 compute-1 nova_compute[225855]: 2026-01-20 14:53:43.879 225859 DEBUG nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] skipping disk for instance-00000075 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 14:53:43 compute-1 nova_compute[225855]: 2026-01-20 14:53:43.879 225859 DEBUG nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] skipping disk for instance-00000075 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 14:53:44 compute-1 nova_compute[225855]: 2026-01-20 14:53:44.040 225859 WARNING nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 14:53:44 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/2236801101' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:53:44 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/201861485' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:53:44 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/2478177324' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:53:44 compute-1 nova_compute[225855]: 2026-01-20 14:53:44.041 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=3957MB free_disk=20.693958282470703GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 14:53:44 compute-1 nova_compute[225855]: 2026-01-20 14:53:44.041 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:53:44 compute-1 nova_compute[225855]: 2026-01-20 14:53:44.042 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:53:44 compute-1 nova_compute[225855]: 2026-01-20 14:53:44.419 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Instance 23ea4537-f03f-46de-881f-b979e232a3b9 actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 20 14:53:44 compute-1 nova_compute[225855]: 2026-01-20 14:53:44.420 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Instance baada610-f563-4c97-89a9-56eba792c352 actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 20 14:53:44 compute-1 nova_compute[225855]: 2026-01-20 14:53:44.420 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 14:53:44 compute-1 nova_compute[225855]: 2026-01-20 14:53:44.420 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 14:53:44 compute-1 nova_compute[225855]: 2026-01-20 14:53:44.480 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:53:44 compute-1 nova_compute[225855]: 2026-01-20 14:53:44.494 225859 DEBUG nova.scheduler.client.report [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Refreshing inventories for resource provider bbb02880-a710-4ac1-8b2c-5c09765848d1 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 20 14:53:44 compute-1 nova_compute[225855]: 2026-01-20 14:53:44.511 225859 DEBUG nova.scheduler.client.report [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Updating ProviderTree inventory for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 20 14:53:44 compute-1 nova_compute[225855]: 2026-01-20 14:53:44.511 225859 DEBUG nova.compute.provider_tree [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Updating inventory in ProviderTree for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 20 14:53:44 compute-1 nova_compute[225855]: 2026-01-20 14:53:44.554 225859 DEBUG nova.scheduler.client.report [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Refreshing aggregate associations for resource provider bbb02880-a710-4ac1-8b2c-5c09765848d1, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 20 14:53:44 compute-1 nova_compute[225855]: 2026-01-20 14:53:44.587 225859 DEBUG nova.scheduler.client.report [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Refreshing trait associations for resource provider bbb02880-a710-4ac1-8b2c-5c09765848d1, traits: COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_STORAGE_BUS_SATA,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_TRUSTED_CERTS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSSE3,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_VOLUME_EXTEND,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_RESCUE_BFV,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_USB,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE42,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SOCKET_PCI_NUMA_AFFINITY _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 20 14:53:44 compute-1 nova_compute[225855]: 2026-01-20 14:53:44.652 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:53:44 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e286 e286: 3 total, 3 up, 3 in
Jan 20 14:53:44 compute-1 nova_compute[225855]: 2026-01-20 14:53:44.794 225859 INFO nova.compute.manager [None req-154f5cfe-1d4e-4261-b272-cf70d3757d76 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] [instance: baada610-f563-4c97-89a9-56eba792c352] Rescuing
Jan 20 14:53:44 compute-1 nova_compute[225855]: 2026-01-20 14:53:44.794 225859 DEBUG oslo_concurrency.lockutils [None req-154f5cfe-1d4e-4261-b272-cf70d3757d76 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Acquiring lock "refresh_cache-baada610-f563-4c97-89a9-56eba792c352" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:53:44 compute-1 nova_compute[225855]: 2026-01-20 14:53:44.794 225859 DEBUG oslo_concurrency.lockutils [None req-154f5cfe-1d4e-4261-b272-cf70d3757d76 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Acquired lock "refresh_cache-baada610-f563-4c97-89a9-56eba792c352" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:53:44 compute-1 nova_compute[225855]: 2026-01-20 14:53:44.795 225859 DEBUG nova.network.neutron [None req-154f5cfe-1d4e-4261-b272-cf70d3757d76 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] [instance: baada610-f563-4c97-89a9-56eba792c352] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 14:53:45 compute-1 ceph-mon[81775]: pgmap v2014: 321 pgs: 321 active+clean; 846 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 7.0 MiB/s rd, 5.9 MiB/s wr, 287 op/s
Jan 20 14:53:45 compute-1 ceph-mon[81775]: osdmap e286: 3 total, 3 up, 3 in
Jan 20 14:53:45 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:53:45 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1022852382' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:53:45 compute-1 nova_compute[225855]: 2026-01-20 14:53:45.101 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:53:45 compute-1 nova_compute[225855]: 2026-01-20 14:53:45.106 225859 DEBUG nova.compute.provider_tree [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:53:45 compute-1 nova_compute[225855]: 2026-01-20 14:53:45.122 225859 DEBUG nova.scheduler.client.report [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:53:45 compute-1 nova_compute[225855]: 2026-01-20 14:53:45.144 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 14:53:45 compute-1 nova_compute[225855]: 2026-01-20 14:53:45.144 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.102s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:53:45 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:53:45 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:53:45 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:53:45.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:53:45 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:53:45 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:53:45 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:53:45.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:53:45 compute-1 nova_compute[225855]: 2026-01-20 14:53:45.987 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:53:46 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/2437868149' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:53:46 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/1022852382' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:53:46 compute-1 nova_compute[225855]: 2026-01-20 14:53:46.225 225859 DEBUG nova.network.neutron [None req-154f5cfe-1d4e-4261-b272-cf70d3757d76 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] [instance: baada610-f563-4c97-89a9-56eba792c352] Updating instance_info_cache with network_info: [{"id": "a3156414-5a96-462d-974e-a57c9cd8e9c8", "address": "fa:16:3e:9e:93:82", "network": {"id": "79184781-1f23-4584-87de-08e262242488", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-165460946-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a29915e0dd2403fbd7b7e847696b00a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3156414-5a", "ovs_interfaceid": "a3156414-5a96-462d-974e-a57c9cd8e9c8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:53:46 compute-1 nova_compute[225855]: 2026-01-20 14:53:46.248 225859 DEBUG oslo_concurrency.lockutils [None req-154f5cfe-1d4e-4261-b272-cf70d3757d76 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Releasing lock "refresh_cache-baada610-f563-4c97-89a9-56eba792c352" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:53:46 compute-1 nova_compute[225855]: 2026-01-20 14:53:46.678 225859 DEBUG nova.virt.libvirt.driver [None req-154f5cfe-1d4e-4261-b272-cf70d3757d76 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] [instance: baada610-f563-4c97-89a9-56eba792c352] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Jan 20 14:53:47 compute-1 ceph-mon[81775]: pgmap v2016: 321 pgs: 321 active+clean; 867 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 5.6 MiB/s wr, 286 op/s
Jan 20 14:53:47 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:53:47 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:53:47 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:53:47.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:53:47 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e286 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:53:47 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:53:47 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:53:47 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:53:47.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:53:48 compute-1 kernel: tapa3156414-5a (unregistering): left promiscuous mode
Jan 20 14:53:48 compute-1 NetworkManager[49104]: <info>  [1768920828.9583] device (tapa3156414-5a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 20 14:53:48 compute-1 ovn_controller[130490]: 2026-01-20T14:53:48Z|00463|binding|INFO|Releasing lport a3156414-5a96-462d-974e-a57c9cd8e9c8 from this chassis (sb_readonly=0)
Jan 20 14:53:48 compute-1 nova_compute[225855]: 2026-01-20 14:53:48.967 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:53:48 compute-1 ovn_controller[130490]: 2026-01-20T14:53:48Z|00464|binding|INFO|Setting lport a3156414-5a96-462d-974e-a57c9cd8e9c8 down in Southbound
Jan 20 14:53:48 compute-1 ovn_controller[130490]: 2026-01-20T14:53:48Z|00465|binding|INFO|Removing iface tapa3156414-5a ovn-installed in OVS
Jan 20 14:53:48 compute-1 nova_compute[225855]: 2026-01-20 14:53:48.970 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:53:48 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:53:48.974 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9e:93:82 10.100.0.3'], port_security=['fa:16:3e:9e:93:82 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'baada610-f563-4c97-89a9-56eba792c352', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-79184781-1f23-4584-87de-08e262242488', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0a29915e0dd2403fbd7b7e847696b00a', 'neutron:revision_number': '4', 'neutron:security_group_ids': '34326e47-c07e-48d1-9283-c1c5634fdc52', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com', 'neutron:port_fip': '192.168.122.202'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6b73ab05-b29f-401a-84a5-ea1a96103f33, chassis=[], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], logical_port=a3156414-5a96-462d-974e-a57c9cd8e9c8) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:53:48 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:53:48.976 140354 INFO neutron.agent.ovn.metadata.agent [-] Port a3156414-5a96-462d-974e-a57c9cd8e9c8 in datapath 79184781-1f23-4584-87de-08e262242488 unbound from our chassis
Jan 20 14:53:48 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:53:48.978 140354 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 79184781-1f23-4584-87de-08e262242488
Jan 20 14:53:48 compute-1 nova_compute[225855]: 2026-01-20 14:53:48.984 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:53:48 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:53:48.996 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[4b2390b2-7f6b-4ef0-a74f-3bf53ef1a6b2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:53:49 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:53:49.020 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[af593159-1fa6-4165-9bb9-a74796117e2e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:53:49 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:53:49.023 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[5b69457d-7c51-4c0f-a14a-76358079c366]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:53:49 compute-1 systemd[1]: machine-qemu\x2d55\x2dinstance\x2d00000077.scope: Deactivated successfully.
Jan 20 14:53:49 compute-1 systemd[1]: machine-qemu\x2d55\x2dinstance\x2d00000077.scope: Consumed 14.199s CPU time.
Jan 20 14:53:49 compute-1 systemd-machined[194361]: Machine qemu-55-instance-00000077 terminated.
Jan 20 14:53:49 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:53:49.049 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[b3f1207d-fea6-41a1-903d-1c1d431a75f7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:53:49 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:53:49.063 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[8cdb1e0f-1a7b-49a9-9f4f-44c2adeb9566]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap79184781-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:38:7c:2a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 8, 'rx_bytes': 700, 'tx_bytes': 524, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 8, 'rx_bytes': 700, 'tx_bytes': 524, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 129], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 580405, 'reachable_time': 23111, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 272857, 'error': None, 'target': 'ovnmeta-79184781-1f23-4584-87de-08e262242488', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:53:49 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:53:49.077 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[24a131fd-593e-4912-84f3-fa8d24155991]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap79184781-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 580414, 'tstamp': 580414}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 272858, 'error': None, 'target': 'ovnmeta-79184781-1f23-4584-87de-08e262242488', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap79184781-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 580417, 'tstamp': 580417}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 272858, 'error': None, 'target': 'ovnmeta-79184781-1f23-4584-87de-08e262242488', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:53:49 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:53:49.078 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap79184781-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:53:49 compute-1 nova_compute[225855]: 2026-01-20 14:53:49.079 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:53:49 compute-1 nova_compute[225855]: 2026-01-20 14:53:49.084 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:53:49 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:53:49.084 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap79184781-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:53:49 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:53:49.084 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:53:49 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:53:49.085 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap79184781-10, col_values=(('external_ids', {'iface-id': 'b033e9e6-9781-4424-a20f-7b48a14e2c80'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:53:49 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:53:49.085 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:53:49 compute-1 ceph-mon[81775]: pgmap v2017: 321 pgs: 321 active+clean; 867 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 3.5 MiB/s wr, 155 op/s
Jan 20 14:53:49 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e287 e287: 3 total, 3 up, 3 in
Jan 20 14:53:49 compute-1 nova_compute[225855]: 2026-01-20 14:53:49.123 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:53:49 compute-1 nova_compute[225855]: 2026-01-20 14:53:49.124 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:53:49 compute-1 nova_compute[225855]: 2026-01-20 14:53:49.183 225859 DEBUG nova.compute.manager [req-5a7132ee-511d-4c70-bad0-c5da7ae62a33 req-5906cafd-9810-45c5-b126-bc8598783283 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: baada610-f563-4c97-89a9-56eba792c352] Received event network-vif-unplugged-a3156414-5a96-462d-974e-a57c9cd8e9c8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:53:49 compute-1 nova_compute[225855]: 2026-01-20 14:53:49.184 225859 DEBUG oslo_concurrency.lockutils [req-5a7132ee-511d-4c70-bad0-c5da7ae62a33 req-5906cafd-9810-45c5-b126-bc8598783283 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "baada610-f563-4c97-89a9-56eba792c352-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:53:49 compute-1 nova_compute[225855]: 2026-01-20 14:53:49.184 225859 DEBUG oslo_concurrency.lockutils [req-5a7132ee-511d-4c70-bad0-c5da7ae62a33 req-5906cafd-9810-45c5-b126-bc8598783283 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "baada610-f563-4c97-89a9-56eba792c352-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:53:49 compute-1 nova_compute[225855]: 2026-01-20 14:53:49.184 225859 DEBUG oslo_concurrency.lockutils [req-5a7132ee-511d-4c70-bad0-c5da7ae62a33 req-5906cafd-9810-45c5-b126-bc8598783283 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "baada610-f563-4c97-89a9-56eba792c352-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:53:49 compute-1 nova_compute[225855]: 2026-01-20 14:53:49.184 225859 DEBUG nova.compute.manager [req-5a7132ee-511d-4c70-bad0-c5da7ae62a33 req-5906cafd-9810-45c5-b126-bc8598783283 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: baada610-f563-4c97-89a9-56eba792c352] No waiting events found dispatching network-vif-unplugged-a3156414-5a96-462d-974e-a57c9cd8e9c8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:53:49 compute-1 nova_compute[225855]: 2026-01-20 14:53:49.185 225859 WARNING nova.compute.manager [req-5a7132ee-511d-4c70-bad0-c5da7ae62a33 req-5906cafd-9810-45c5-b126-bc8598783283 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: baada610-f563-4c97-89a9-56eba792c352] Received unexpected event network-vif-unplugged-a3156414-5a96-462d-974e-a57c9cd8e9c8 for instance with vm_state active and task_state rescuing.
Jan 20 14:53:49 compute-1 nova_compute[225855]: 2026-01-20 14:53:49.188 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:53:49 compute-1 nova_compute[225855]: 2026-01-20 14:53:49.194 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:53:49 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:53:49 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:53:49 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:53:49.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:53:49 compute-1 nova_compute[225855]: 2026-01-20 14:53:49.483 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:53:49 compute-1 nova_compute[225855]: 2026-01-20 14:53:49.693 225859 INFO nova.virt.libvirt.driver [None req-154f5cfe-1d4e-4261-b272-cf70d3757d76 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] [instance: baada610-f563-4c97-89a9-56eba792c352] Instance shutdown successfully after 3 seconds.
Jan 20 14:53:49 compute-1 nova_compute[225855]: 2026-01-20 14:53:49.698 225859 INFO nova.virt.libvirt.driver [-] [instance: baada610-f563-4c97-89a9-56eba792c352] Instance destroyed successfully.
Jan 20 14:53:49 compute-1 nova_compute[225855]: 2026-01-20 14:53:49.698 225859 DEBUG nova.objects.instance [None req-154f5cfe-1d4e-4261-b272-cf70d3757d76 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Lazy-loading 'numa_topology' on Instance uuid baada610-f563-4c97-89a9-56eba792c352 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:53:49 compute-1 nova_compute[225855]: 2026-01-20 14:53:49.713 225859 INFO nova.virt.libvirt.driver [None req-154f5cfe-1d4e-4261-b272-cf70d3757d76 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] [instance: baada610-f563-4c97-89a9-56eba792c352] Attempting a stable device rescue
Jan 20 14:53:49 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:53:49 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:53:49 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:53:49.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:53:49 compute-1 nova_compute[225855]: 2026-01-20 14:53:49.938 225859 DEBUG nova.virt.libvirt.driver [None req-154f5cfe-1d4e-4261-b272-cf70d3757d76 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] rescue generated disk_info: {'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vdb': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}, 'disk.rescue': {'bus': 'virtio', 'dev': 'vdc', 'type': 'disk'}}} rescue /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4314
Jan 20 14:53:49 compute-1 nova_compute[225855]: 2026-01-20 14:53:49.943 225859 DEBUG nova.virt.libvirt.driver [None req-154f5cfe-1d4e-4261-b272-cf70d3757d76 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] [instance: baada610-f563-4c97-89a9-56eba792c352] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719
Jan 20 14:53:49 compute-1 nova_compute[225855]: 2026-01-20 14:53:49.944 225859 INFO nova.virt.libvirt.driver [None req-154f5cfe-1d4e-4261-b272-cf70d3757d76 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] [instance: baada610-f563-4c97-89a9-56eba792c352] Creating image(s)
Jan 20 14:53:49 compute-1 nova_compute[225855]: 2026-01-20 14:53:49.970 225859 DEBUG nova.storage.rbd_utils [None req-154f5cfe-1d4e-4261-b272-cf70d3757d76 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] rbd image baada610-f563-4c97-89a9-56eba792c352_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:53:49 compute-1 nova_compute[225855]: 2026-01-20 14:53:49.974 225859 DEBUG nova.objects.instance [None req-154f5cfe-1d4e-4261-b272-cf70d3757d76 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Lazy-loading 'trusted_certs' on Instance uuid baada610-f563-4c97-89a9-56eba792c352 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:53:50 compute-1 nova_compute[225855]: 2026-01-20 14:53:50.010 225859 DEBUG nova.storage.rbd_utils [None req-154f5cfe-1d4e-4261-b272-cf70d3757d76 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] rbd image baada610-f563-4c97-89a9-56eba792c352_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:53:50 compute-1 nova_compute[225855]: 2026-01-20 14:53:50.038 225859 DEBUG nova.storage.rbd_utils [None req-154f5cfe-1d4e-4261-b272-cf70d3757d76 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] rbd image baada610-f563-4c97-89a9-56eba792c352_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:53:50 compute-1 nova_compute[225855]: 2026-01-20 14:53:50.043 225859 DEBUG oslo_concurrency.lockutils [None req-154f5cfe-1d4e-4261-b272-cf70d3757d76 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Acquiring lock "bbd525c47a2c08c6db0d918bbd4125fe578740ee" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:53:50 compute-1 nova_compute[225855]: 2026-01-20 14:53:50.044 225859 DEBUG oslo_concurrency.lockutils [None req-154f5cfe-1d4e-4261-b272-cf70d3757d76 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Lock "bbd525c47a2c08c6db0d918bbd4125fe578740ee" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:53:50 compute-1 ceph-mon[81775]: osdmap e287: 3 total, 3 up, 3 in
Jan 20 14:53:50 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e288 e288: 3 total, 3 up, 3 in
Jan 20 14:53:50 compute-1 nova_compute[225855]: 2026-01-20 14:53:50.264 225859 DEBUG nova.virt.libvirt.imagebackend [None req-154f5cfe-1d4e-4261-b272-cf70d3757d76 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Image locations are: [{'url': 'rbd://e399cf45-e6b6-5393-99f1-75c601d3f188/images/132a812e-f4a2-4a8b-813d-1df62e09798a/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://e399cf45-e6b6-5393-99f1-75c601d3f188/images/132a812e-f4a2-4a8b-813d-1df62e09798a/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Jan 20 14:53:50 compute-1 nova_compute[225855]: 2026-01-20 14:53:50.328 225859 DEBUG nova.virt.libvirt.imagebackend [None req-154f5cfe-1d4e-4261-b272-cf70d3757d76 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Selected location: {'url': 'rbd://e399cf45-e6b6-5393-99f1-75c601d3f188/images/132a812e-f4a2-4a8b-813d-1df62e09798a/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094
Jan 20 14:53:50 compute-1 nova_compute[225855]: 2026-01-20 14:53:50.329 225859 DEBUG nova.storage.rbd_utils [None req-154f5cfe-1d4e-4261-b272-cf70d3757d76 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] cloning images/132a812e-f4a2-4a8b-813d-1df62e09798a@snap to None/baada610-f563-4c97-89a9-56eba792c352_disk.rescue clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Jan 20 14:53:50 compute-1 nova_compute[225855]: 2026-01-20 14:53:50.454 225859 DEBUG oslo_concurrency.lockutils [None req-154f5cfe-1d4e-4261-b272-cf70d3757d76 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Lock "bbd525c47a2c08c6db0d918bbd4125fe578740ee" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.410s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:53:50 compute-1 nova_compute[225855]: 2026-01-20 14:53:50.505 225859 DEBUG nova.objects.instance [None req-154f5cfe-1d4e-4261-b272-cf70d3757d76 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Lazy-loading 'migration_context' on Instance uuid baada610-f563-4c97-89a9-56eba792c352 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:53:50 compute-1 nova_compute[225855]: 2026-01-20 14:53:50.521 225859 DEBUG nova.virt.libvirt.driver [None req-154f5cfe-1d4e-4261-b272-cf70d3757d76 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] [instance: baada610-f563-4c97-89a9-56eba792c352] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 20 14:53:50 compute-1 nova_compute[225855]: 2026-01-20 14:53:50.524 225859 DEBUG nova.virt.libvirt.driver [None req-154f5cfe-1d4e-4261-b272-cf70d3757d76 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] [instance: baada610-f563-4c97-89a9-56eba792c352] Start _get_guest_xml network_info=[{"id": "a3156414-5a96-462d-974e-a57c9cd8e9c8", "address": "fa:16:3e:9e:93:82", "network": {"id": "79184781-1f23-4584-87de-08e262242488", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-165460946-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerStableDeviceRescueTest-165460946-network", "vif_mac": "fa:16:3e:9e:93:82"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a29915e0dd2403fbd7b7e847696b00a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3156414-5a", "ovs_interfaceid": "a3156414-5a96-462d-974e-a57c9cd8e9c8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vdb': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}, 'disk.rescue': {'bus': 'virtio', 'dev': 'vdc', 'type': 'disk'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>) rescue={'image_id': '132a812e-f4a2-4a8b-813d-1df62e09798a', 'kernel_id': '', 'ramdisk_id': ''} block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'device_type': 'disk', 'encryption_options': None, 'size': 0, 'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'guest_format': None, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'image_id': 'a32b3e07-16d8-46fd-9a7b-c242c432fcf9'}], 'ephemerals': [], 'block_device_mapping': [{'delete_on_termination': False, 'device_type': 'disk', 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-5f6a803f-d232-4e97-9965-ece0139e0fda', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '5f6a803f-d232-4e97-9965-ece0139e0fda', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': 'baada610-f563-4c97-89a9-56eba792c352', 'attached_at': '', 'detached_at': '', 'volume_id': '5f6a803f-d232-4e97-9965-ece0139e0fda', 'serial': '5f6a803f-d232-4e97-9965-ece0139e0fda'}, 'guest_format': None, 'boot_index': None, 'mount_device': '/dev/vdb', 'attachment_id': 'd9b595f1-88ea-494e-bc6e-d959c2d6b8eb', 'disk_bus': 'virtio', 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 20 14:53:50 compute-1 nova_compute[225855]: 2026-01-20 14:53:50.524 225859 DEBUG nova.objects.instance [None req-154f5cfe-1d4e-4261-b272-cf70d3757d76 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Lazy-loading 'resources' on Instance uuid baada610-f563-4c97-89a9-56eba792c352 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:53:50 compute-1 nova_compute[225855]: 2026-01-20 14:53:50.547 225859 WARNING nova.virt.libvirt.driver [None req-154f5cfe-1d4e-4261-b272-cf70d3757d76 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 14:53:50 compute-1 nova_compute[225855]: 2026-01-20 14:53:50.553 225859 DEBUG nova.virt.libvirt.host [None req-154f5cfe-1d4e-4261-b272-cf70d3757d76 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 20 14:53:50 compute-1 nova_compute[225855]: 2026-01-20 14:53:50.554 225859 DEBUG nova.virt.libvirt.host [None req-154f5cfe-1d4e-4261-b272-cf70d3757d76 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 20 14:53:50 compute-1 nova_compute[225855]: 2026-01-20 14:53:50.558 225859 DEBUG nova.virt.libvirt.host [None req-154f5cfe-1d4e-4261-b272-cf70d3757d76 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 20 14:53:50 compute-1 nova_compute[225855]: 2026-01-20 14:53:50.559 225859 DEBUG nova.virt.libvirt.host [None req-154f5cfe-1d4e-4261-b272-cf70d3757d76 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 20 14:53:50 compute-1 nova_compute[225855]: 2026-01-20 14:53:50.560 225859 DEBUG nova.virt.libvirt.driver [None req-154f5cfe-1d4e-4261-b272-cf70d3757d76 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 20 14:53:50 compute-1 nova_compute[225855]: 2026-01-20 14:53:50.560 225859 DEBUG nova.virt.hardware [None req-154f5cfe-1d4e-4261-b272-cf70d3757d76 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-20T14:21:55Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='522deaab-a741-4dbb-932d-d8b13a211c33',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 20 14:53:50 compute-1 nova_compute[225855]: 2026-01-20 14:53:50.561 225859 DEBUG nova.virt.hardware [None req-154f5cfe-1d4e-4261-b272-cf70d3757d76 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 20 14:53:50 compute-1 nova_compute[225855]: 2026-01-20 14:53:50.561 225859 DEBUG nova.virt.hardware [None req-154f5cfe-1d4e-4261-b272-cf70d3757d76 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 20 14:53:50 compute-1 nova_compute[225855]: 2026-01-20 14:53:50.561 225859 DEBUG nova.virt.hardware [None req-154f5cfe-1d4e-4261-b272-cf70d3757d76 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 20 14:53:50 compute-1 nova_compute[225855]: 2026-01-20 14:53:50.561 225859 DEBUG nova.virt.hardware [None req-154f5cfe-1d4e-4261-b272-cf70d3757d76 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 20 14:53:50 compute-1 nova_compute[225855]: 2026-01-20 14:53:50.562 225859 DEBUG nova.virt.hardware [None req-154f5cfe-1d4e-4261-b272-cf70d3757d76 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 20 14:53:50 compute-1 nova_compute[225855]: 2026-01-20 14:53:50.562 225859 DEBUG nova.virt.hardware [None req-154f5cfe-1d4e-4261-b272-cf70d3757d76 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 20 14:53:50 compute-1 nova_compute[225855]: 2026-01-20 14:53:50.562 225859 DEBUG nova.virt.hardware [None req-154f5cfe-1d4e-4261-b272-cf70d3757d76 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 20 14:53:50 compute-1 nova_compute[225855]: 2026-01-20 14:53:50.562 225859 DEBUG nova.virt.hardware [None req-154f5cfe-1d4e-4261-b272-cf70d3757d76 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 20 14:53:50 compute-1 nova_compute[225855]: 2026-01-20 14:53:50.563 225859 DEBUG nova.virt.hardware [None req-154f5cfe-1d4e-4261-b272-cf70d3757d76 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 20 14:53:50 compute-1 nova_compute[225855]: 2026-01-20 14:53:50.563 225859 DEBUG nova.virt.hardware [None req-154f5cfe-1d4e-4261-b272-cf70d3757d76 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 20 14:53:50 compute-1 nova_compute[225855]: 2026-01-20 14:53:50.563 225859 DEBUG nova.objects.instance [None req-154f5cfe-1d4e-4261-b272-cf70d3757d76 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Lazy-loading 'vcpu_model' on Instance uuid baada610-f563-4c97-89a9-56eba792c352 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:53:50 compute-1 nova_compute[225855]: 2026-01-20 14:53:50.580 225859 DEBUG oslo_concurrency.processutils [None req-154f5cfe-1d4e-4261-b272-cf70d3757d76 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:53:50 compute-1 nova_compute[225855]: 2026-01-20 14:53:50.989 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:53:51 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:53:51 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2500595433' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:53:51 compute-1 nova_compute[225855]: 2026-01-20 14:53:51.112 225859 DEBUG oslo_concurrency.processutils [None req-154f5cfe-1d4e-4261-b272-cf70d3757d76 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.532s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:53:51 compute-1 ceph-mon[81775]: pgmap v2019: 321 pgs: 321 active+clean; 871 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 943 KiB/s rd, 3.7 MiB/s wr, 137 op/s
Jan 20 14:53:51 compute-1 ceph-mon[81775]: osdmap e288: 3 total, 3 up, 3 in
Jan 20 14:53:51 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/2500595433' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:53:51 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e289 e289: 3 total, 3 up, 3 in
Jan 20 14:53:51 compute-1 nova_compute[225855]: 2026-01-20 14:53:51.153 225859 DEBUG oslo_concurrency.processutils [None req-154f5cfe-1d4e-4261-b272-cf70d3757d76 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:53:51 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:53:51 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:53:51 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:53:51.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:53:51 compute-1 nova_compute[225855]: 2026-01-20 14:53:51.350 225859 DEBUG nova.compute.manager [req-3ee7a85b-bfae-4355-b170-83735e42285b req-12f5da0a-088d-4e20-a73c-f375ee777225 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: baada610-f563-4c97-89a9-56eba792c352] Received event network-vif-plugged-a3156414-5a96-462d-974e-a57c9cd8e9c8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:53:51 compute-1 nova_compute[225855]: 2026-01-20 14:53:51.351 225859 DEBUG oslo_concurrency.lockutils [req-3ee7a85b-bfae-4355-b170-83735e42285b req-12f5da0a-088d-4e20-a73c-f375ee777225 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "baada610-f563-4c97-89a9-56eba792c352-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:53:51 compute-1 nova_compute[225855]: 2026-01-20 14:53:51.351 225859 DEBUG oslo_concurrency.lockutils [req-3ee7a85b-bfae-4355-b170-83735e42285b req-12f5da0a-088d-4e20-a73c-f375ee777225 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "baada610-f563-4c97-89a9-56eba792c352-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:53:51 compute-1 nova_compute[225855]: 2026-01-20 14:53:51.351 225859 DEBUG oslo_concurrency.lockutils [req-3ee7a85b-bfae-4355-b170-83735e42285b req-12f5da0a-088d-4e20-a73c-f375ee777225 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "baada610-f563-4c97-89a9-56eba792c352-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:53:51 compute-1 nova_compute[225855]: 2026-01-20 14:53:51.352 225859 DEBUG nova.compute.manager [req-3ee7a85b-bfae-4355-b170-83735e42285b req-12f5da0a-088d-4e20-a73c-f375ee777225 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: baada610-f563-4c97-89a9-56eba792c352] No waiting events found dispatching network-vif-plugged-a3156414-5a96-462d-974e-a57c9cd8e9c8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:53:51 compute-1 nova_compute[225855]: 2026-01-20 14:53:51.352 225859 WARNING nova.compute.manager [req-3ee7a85b-bfae-4355-b170-83735e42285b req-12f5da0a-088d-4e20-a73c-f375ee777225 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: baada610-f563-4c97-89a9-56eba792c352] Received unexpected event network-vif-plugged-a3156414-5a96-462d-974e-a57c9cd8e9c8 for instance with vm_state active and task_state rescuing.
Jan 20 14:53:51 compute-1 sudo[273072]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:53:51 compute-1 sudo[273072]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:53:51 compute-1 sudo[273072]: pam_unix(sudo:session): session closed for user root
Jan 20 14:53:51 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:53:51 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3735164888' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:53:51 compute-1 nova_compute[225855]: 2026-01-20 14:53:51.617 225859 DEBUG oslo_concurrency.processutils [None req-154f5cfe-1d4e-4261-b272-cf70d3757d76 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:53:51 compute-1 sudo[273097]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:53:51 compute-1 sudo[273097]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:53:51 compute-1 sudo[273097]: pam_unix(sudo:session): session closed for user root
Jan 20 14:53:51 compute-1 nova_compute[225855]: 2026-01-20 14:53:51.646 225859 DEBUG oslo_concurrency.processutils [None req-154f5cfe-1d4e-4261-b272-cf70d3757d76 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:53:51 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:53:51 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:53:51 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:53:51.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:53:52 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:53:52 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2119220250' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:53:52 compute-1 nova_compute[225855]: 2026-01-20 14:53:52.105 225859 DEBUG oslo_concurrency.processutils [None req-154f5cfe-1d4e-4261-b272-cf70d3757d76 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:53:52 compute-1 nova_compute[225855]: 2026-01-20 14:53:52.108 225859 DEBUG nova.virt.libvirt.vif [None req-154f5cfe-1d4e-4261-b272-cf70d3757d76 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T14:52:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerStableDeviceRescueTest-server-185388239',display_name='tempest-ServerStableDeviceRescueTest-server-185388239',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverstabledevicerescuetest-server-185388239',id=119,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIfbHk5SMnBEAUVZEhoLPfdB2qCay341zK720hYW5qflxdgcEr+fHp9C3kAgJFmqON8wn8DkPxW0WmihyCLPTK7Iiiy5VDiRJ7U/0O7hlyzm17ZWhCVdPfXSugKxmeVL3w==',key_name='tempest-keypair-647310408',keypairs=<?>,launch_index=0,launched_at=2026-01-20T14:53:11Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='0a29915e0dd2403fbd7b7e847696b00a',ramdisk_id='',reservation_id='r-d9074tfl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerStableDeviceRescueTest-129078052',owner_user_name='tempest-ServerStableDeviceRescueTest-129078052-project-member'},tags=<?>,task_state='rescuing',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T14:53:36Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='d85d286ce6224326a0f4a15a06afbfea',uuid=baada610-f563-4c97-89a9-56eba792c352,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a3156414-5a96-462d-974e-a57c9cd8e9c8", "address": "fa:16:3e:9e:93:82", "network": {"id": "79184781-1f23-4584-87de-08e262242488", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-165460946-network", "subnets": [{"cidr": 
"10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerStableDeviceRescueTest-165460946-network", "vif_mac": "fa:16:3e:9e:93:82"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a29915e0dd2403fbd7b7e847696b00a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3156414-5a", "ovs_interfaceid": "a3156414-5a96-462d-974e-a57c9cd8e9c8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 20 14:53:52 compute-1 nova_compute[225855]: 2026-01-20 14:53:52.109 225859 DEBUG nova.network.os_vif_util [None req-154f5cfe-1d4e-4261-b272-cf70d3757d76 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Converting VIF {"id": "a3156414-5a96-462d-974e-a57c9cd8e9c8", "address": "fa:16:3e:9e:93:82", "network": {"id": "79184781-1f23-4584-87de-08e262242488", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-165460946-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerStableDeviceRescueTest-165460946-network", "vif_mac": "fa:16:3e:9e:93:82"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a29915e0dd2403fbd7b7e847696b00a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3156414-5a", "ovs_interfaceid": "a3156414-5a96-462d-974e-a57c9cd8e9c8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:53:52 compute-1 nova_compute[225855]: 2026-01-20 14:53:52.110 225859 DEBUG nova.network.os_vif_util [None req-154f5cfe-1d4e-4261-b272-cf70d3757d76 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:9e:93:82,bridge_name='br-int',has_traffic_filtering=True,id=a3156414-5a96-462d-974e-a57c9cd8e9c8,network=Network(79184781-1f23-4584-87de-08e262242488),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa3156414-5a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:53:52 compute-1 nova_compute[225855]: 2026-01-20 14:53:52.111 225859 DEBUG nova.objects.instance [None req-154f5cfe-1d4e-4261-b272-cf70d3757d76 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Lazy-loading 'pci_devices' on Instance uuid baada610-f563-4c97-89a9-56eba792c352 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:53:52 compute-1 nova_compute[225855]: 2026-01-20 14:53:52.126 225859 DEBUG nova.virt.libvirt.driver [None req-154f5cfe-1d4e-4261-b272-cf70d3757d76 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] [instance: baada610-f563-4c97-89a9-56eba792c352] End _get_guest_xml xml=<domain type="kvm">
Jan 20 14:53:52 compute-1 nova_compute[225855]:   <uuid>baada610-f563-4c97-89a9-56eba792c352</uuid>
Jan 20 14:53:52 compute-1 nova_compute[225855]:   <name>instance-00000077</name>
Jan 20 14:53:52 compute-1 nova_compute[225855]:   <memory>131072</memory>
Jan 20 14:53:52 compute-1 nova_compute[225855]:   <vcpu>1</vcpu>
Jan 20 14:53:52 compute-1 nova_compute[225855]:   <metadata>
Jan 20 14:53:52 compute-1 nova_compute[225855]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 14:53:52 compute-1 nova_compute[225855]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 14:53:52 compute-1 nova_compute[225855]:       <nova:name>tempest-ServerStableDeviceRescueTest-server-185388239</nova:name>
Jan 20 14:53:52 compute-1 nova_compute[225855]:       <nova:creationTime>2026-01-20 14:53:50</nova:creationTime>
Jan 20 14:53:52 compute-1 nova_compute[225855]:       <nova:flavor name="m1.nano">
Jan 20 14:53:52 compute-1 nova_compute[225855]:         <nova:memory>128</nova:memory>
Jan 20 14:53:52 compute-1 nova_compute[225855]:         <nova:disk>1</nova:disk>
Jan 20 14:53:52 compute-1 nova_compute[225855]:         <nova:swap>0</nova:swap>
Jan 20 14:53:52 compute-1 nova_compute[225855]:         <nova:ephemeral>0</nova:ephemeral>
Jan 20 14:53:52 compute-1 nova_compute[225855]:         <nova:vcpus>1</nova:vcpus>
Jan 20 14:53:52 compute-1 nova_compute[225855]:       </nova:flavor>
Jan 20 14:53:52 compute-1 nova_compute[225855]:       <nova:owner>
Jan 20 14:53:52 compute-1 nova_compute[225855]:         <nova:user uuid="d85d286ce6224326a0f4a15a06afbfea">tempest-ServerStableDeviceRescueTest-129078052-project-member</nova:user>
Jan 20 14:53:52 compute-1 nova_compute[225855]:         <nova:project uuid="0a29915e0dd2403fbd7b7e847696b00a">tempest-ServerStableDeviceRescueTest-129078052</nova:project>
Jan 20 14:53:52 compute-1 nova_compute[225855]:       </nova:owner>
Jan 20 14:53:52 compute-1 nova_compute[225855]:       <nova:root type="image" uuid="a32b3e07-16d8-46fd-9a7b-c242c432fcf9"/>
Jan 20 14:53:52 compute-1 nova_compute[225855]:       <nova:ports>
Jan 20 14:53:52 compute-1 nova_compute[225855]:         <nova:port uuid="a3156414-5a96-462d-974e-a57c9cd8e9c8">
Jan 20 14:53:52 compute-1 nova_compute[225855]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Jan 20 14:53:52 compute-1 nova_compute[225855]:         </nova:port>
Jan 20 14:53:52 compute-1 nova_compute[225855]:       </nova:ports>
Jan 20 14:53:52 compute-1 nova_compute[225855]:     </nova:instance>
Jan 20 14:53:52 compute-1 nova_compute[225855]:   </metadata>
Jan 20 14:53:52 compute-1 nova_compute[225855]:   <sysinfo type="smbios">
Jan 20 14:53:52 compute-1 nova_compute[225855]:     <system>
Jan 20 14:53:52 compute-1 nova_compute[225855]:       <entry name="manufacturer">RDO</entry>
Jan 20 14:53:52 compute-1 nova_compute[225855]:       <entry name="product">OpenStack Compute</entry>
Jan 20 14:53:52 compute-1 nova_compute[225855]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 14:53:52 compute-1 nova_compute[225855]:       <entry name="serial">baada610-f563-4c97-89a9-56eba792c352</entry>
Jan 20 14:53:52 compute-1 nova_compute[225855]:       <entry name="uuid">baada610-f563-4c97-89a9-56eba792c352</entry>
Jan 20 14:53:52 compute-1 nova_compute[225855]:       <entry name="family">Virtual Machine</entry>
Jan 20 14:53:52 compute-1 nova_compute[225855]:     </system>
Jan 20 14:53:52 compute-1 nova_compute[225855]:   </sysinfo>
Jan 20 14:53:52 compute-1 nova_compute[225855]:   <os>
Jan 20 14:53:52 compute-1 nova_compute[225855]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 20 14:53:52 compute-1 nova_compute[225855]:     <smbios mode="sysinfo"/>
Jan 20 14:53:52 compute-1 nova_compute[225855]:   </os>
Jan 20 14:53:52 compute-1 nova_compute[225855]:   <features>
Jan 20 14:53:52 compute-1 nova_compute[225855]:     <acpi/>
Jan 20 14:53:52 compute-1 nova_compute[225855]:     <apic/>
Jan 20 14:53:52 compute-1 nova_compute[225855]:     <vmcoreinfo/>
Jan 20 14:53:52 compute-1 nova_compute[225855]:   </features>
Jan 20 14:53:52 compute-1 nova_compute[225855]:   <clock offset="utc">
Jan 20 14:53:52 compute-1 nova_compute[225855]:     <timer name="pit" tickpolicy="delay"/>
Jan 20 14:53:52 compute-1 nova_compute[225855]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 20 14:53:52 compute-1 nova_compute[225855]:     <timer name="hpet" present="no"/>
Jan 20 14:53:52 compute-1 nova_compute[225855]:   </clock>
Jan 20 14:53:52 compute-1 nova_compute[225855]:   <cpu mode="custom" match="exact">
Jan 20 14:53:52 compute-1 nova_compute[225855]:     <model>Nehalem</model>
Jan 20 14:53:52 compute-1 nova_compute[225855]:     <topology sockets="1" cores="1" threads="1"/>
Jan 20 14:53:52 compute-1 nova_compute[225855]:   </cpu>
Jan 20 14:53:52 compute-1 nova_compute[225855]:   <devices>
Jan 20 14:53:52 compute-1 nova_compute[225855]:     <disk type="network" device="disk">
Jan 20 14:53:52 compute-1 nova_compute[225855]:       <driver type="raw" cache="none"/>
Jan 20 14:53:52 compute-1 nova_compute[225855]:       <source protocol="rbd" name="vms/baada610-f563-4c97-89a9-56eba792c352_disk">
Jan 20 14:53:52 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:53:52 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:53:52 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:53:52 compute-1 nova_compute[225855]:       </source>
Jan 20 14:53:52 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 14:53:52 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:53:52 compute-1 nova_compute[225855]:       </auth>
Jan 20 14:53:52 compute-1 nova_compute[225855]:       <target dev="vda" bus="virtio"/>
Jan 20 14:53:52 compute-1 nova_compute[225855]:     </disk>
Jan 20 14:53:52 compute-1 nova_compute[225855]:     <disk type="network" device="cdrom">
Jan 20 14:53:52 compute-1 nova_compute[225855]:       <driver type="raw" cache="none"/>
Jan 20 14:53:52 compute-1 nova_compute[225855]:       <source protocol="rbd" name="vms/baada610-f563-4c97-89a9-56eba792c352_disk.config">
Jan 20 14:53:52 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:53:52 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:53:52 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:53:52 compute-1 nova_compute[225855]:       </source>
Jan 20 14:53:52 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 14:53:52 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:53:52 compute-1 nova_compute[225855]:       </auth>
Jan 20 14:53:52 compute-1 nova_compute[225855]:       <target dev="sda" bus="sata"/>
Jan 20 14:53:52 compute-1 nova_compute[225855]:     </disk>
Jan 20 14:53:52 compute-1 nova_compute[225855]:     <disk type="network" device="disk">
Jan 20 14:53:52 compute-1 nova_compute[225855]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 20 14:53:52 compute-1 nova_compute[225855]:       <source protocol="rbd" name="volumes/volume-5f6a803f-d232-4e97-9965-ece0139e0fda">
Jan 20 14:53:52 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:53:52 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:53:52 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:53:52 compute-1 nova_compute[225855]:       </source>
Jan 20 14:53:52 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 14:53:52 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:53:52 compute-1 nova_compute[225855]:       </auth>
Jan 20 14:53:52 compute-1 nova_compute[225855]:       <target dev="vdb" bus="virtio"/>
Jan 20 14:53:52 compute-1 nova_compute[225855]:       <serial>5f6a803f-d232-4e97-9965-ece0139e0fda</serial>
Jan 20 14:53:52 compute-1 nova_compute[225855]:     </disk>
Jan 20 14:53:52 compute-1 nova_compute[225855]:     <disk type="network" device="disk">
Jan 20 14:53:52 compute-1 nova_compute[225855]:       <driver type="raw" cache="none"/>
Jan 20 14:53:52 compute-1 nova_compute[225855]:       <source protocol="rbd" name="vms/baada610-f563-4c97-89a9-56eba792c352_disk.rescue">
Jan 20 14:53:52 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:53:52 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:53:52 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:53:52 compute-1 nova_compute[225855]:       </source>
Jan 20 14:53:52 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 14:53:52 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:53:52 compute-1 nova_compute[225855]:       </auth>
Jan 20 14:53:52 compute-1 nova_compute[225855]:       <target dev="vdc" bus="virtio"/>
Jan 20 14:53:52 compute-1 nova_compute[225855]:       <boot order="1"/>
Jan 20 14:53:52 compute-1 nova_compute[225855]:     </disk>
Jan 20 14:53:52 compute-1 nova_compute[225855]:     <interface type="ethernet">
Jan 20 14:53:52 compute-1 nova_compute[225855]:       <mac address="fa:16:3e:9e:93:82"/>
Jan 20 14:53:52 compute-1 nova_compute[225855]:       <model type="virtio"/>
Jan 20 14:53:52 compute-1 nova_compute[225855]:       <driver name="vhost" rx_queue_size="512"/>
Jan 20 14:53:52 compute-1 nova_compute[225855]:       <mtu size="1442"/>
Jan 20 14:53:52 compute-1 nova_compute[225855]:       <target dev="tapa3156414-5a"/>
Jan 20 14:53:52 compute-1 nova_compute[225855]:     </interface>
Jan 20 14:53:52 compute-1 nova_compute[225855]:     <serial type="pty">
Jan 20 14:53:52 compute-1 nova_compute[225855]:       <log file="/var/lib/nova/instances/baada610-f563-4c97-89a9-56eba792c352/console.log" append="off"/>
Jan 20 14:53:52 compute-1 nova_compute[225855]:     </serial>
Jan 20 14:53:52 compute-1 nova_compute[225855]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 20 14:53:52 compute-1 nova_compute[225855]:     <video>
Jan 20 14:53:52 compute-1 nova_compute[225855]:       <model type="virtio"/>
Jan 20 14:53:52 compute-1 nova_compute[225855]:     </video>
Jan 20 14:53:52 compute-1 nova_compute[225855]:     <input type="tablet" bus="usb"/>
Jan 20 14:53:52 compute-1 nova_compute[225855]:     <rng model="virtio">
Jan 20 14:53:52 compute-1 nova_compute[225855]:       <backend model="random">/dev/urandom</backend>
Jan 20 14:53:52 compute-1 nova_compute[225855]:     </rng>
Jan 20 14:53:52 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root"/>
Jan 20 14:53:52 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:53:52 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:53:52 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:53:52 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:53:52 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:53:52 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:53:52 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:53:52 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:53:52 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:53:52 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:53:52 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:53:52 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:53:52 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:53:52 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:53:52 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:53:52 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:53:52 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:53:52 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:53:52 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:53:52 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:53:52 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:53:52 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:53:52 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:53:52 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:53:52 compute-1 nova_compute[225855]:     <controller type="usb" index="0"/>
Jan 20 14:53:52 compute-1 nova_compute[225855]:     <memballoon model="virtio">
Jan 20 14:53:52 compute-1 nova_compute[225855]:       <stats period="10"/>
Jan 20 14:53:52 compute-1 nova_compute[225855]:     </memballoon>
Jan 20 14:53:52 compute-1 nova_compute[225855]:   </devices>
Jan 20 14:53:52 compute-1 nova_compute[225855]: </domain>
Jan 20 14:53:52 compute-1 nova_compute[225855]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 20 14:53:52 compute-1 nova_compute[225855]: 2026-01-20 14:53:52.135 225859 INFO nova.virt.libvirt.driver [-] [instance: baada610-f563-4c97-89a9-56eba792c352] Instance destroyed successfully.
Jan 20 14:53:52 compute-1 ceph-mon[81775]: osdmap e289: 3 total, 3 up, 3 in
Jan 20 14:53:52 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/3735164888' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:53:52 compute-1 ceph-mon[81775]: pgmap v2022: 321 pgs: 321 active+clean; 899 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 4.2 MiB/s wr, 93 op/s
Jan 20 14:53:52 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/2119220250' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:53:52 compute-1 nova_compute[225855]: 2026-01-20 14:53:52.217 225859 DEBUG nova.virt.libvirt.driver [None req-154f5cfe-1d4e-4261-b272-cf70d3757d76 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:53:52 compute-1 nova_compute[225855]: 2026-01-20 14:53:52.217 225859 DEBUG nova.virt.libvirt.driver [None req-154f5cfe-1d4e-4261-b272-cf70d3757d76 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:53:52 compute-1 nova_compute[225855]: 2026-01-20 14:53:52.218 225859 DEBUG nova.virt.libvirt.driver [None req-154f5cfe-1d4e-4261-b272-cf70d3757d76 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] No BDM found with device name vdc, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:53:52 compute-1 nova_compute[225855]: 2026-01-20 14:53:52.218 225859 DEBUG nova.virt.libvirt.driver [None req-154f5cfe-1d4e-4261-b272-cf70d3757d76 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:53:52 compute-1 nova_compute[225855]: 2026-01-20 14:53:52.218 225859 DEBUG nova.virt.libvirt.driver [None req-154f5cfe-1d4e-4261-b272-cf70d3757d76 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] No VIF found with MAC fa:16:3e:9e:93:82, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 20 14:53:52 compute-1 nova_compute[225855]: 2026-01-20 14:53:52.218 225859 INFO nova.virt.libvirt.driver [None req-154f5cfe-1d4e-4261-b272-cf70d3757d76 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] [instance: baada610-f563-4c97-89a9-56eba792c352] Using config drive
Jan 20 14:53:52 compute-1 nova_compute[225855]: 2026-01-20 14:53:52.252 225859 DEBUG nova.storage.rbd_utils [None req-154f5cfe-1d4e-4261-b272-cf70d3757d76 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] rbd image baada610-f563-4c97-89a9-56eba792c352_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:53:52 compute-1 podman[273147]: 2026-01-20 14:53:52.279756306 +0000 UTC m=+0.103557784 container health_status 533cd8e0c3b8b7f910f15cef49e7f879ab46c12ce646a35628c15f13725eb500 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Jan 20 14:53:52 compute-1 nova_compute[225855]: 2026-01-20 14:53:52.281 225859 DEBUG nova.objects.instance [None req-154f5cfe-1d4e-4261-b272-cf70d3757d76 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Lazy-loading 'ec2_ids' on Instance uuid baada610-f563-4c97-89a9-56eba792c352 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:53:52 compute-1 nova_compute[225855]: 2026-01-20 14:53:52.310 225859 DEBUG nova.objects.instance [None req-154f5cfe-1d4e-4261-b272-cf70d3757d76 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Lazy-loading 'keypairs' on Instance uuid baada610-f563-4c97-89a9-56eba792c352 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:53:52 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e289 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:53:53 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:53:53 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:53:53 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:53:53.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:53:53 compute-1 nova_compute[225855]: 2026-01-20 14:53:53.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:53:53 compute-1 nova_compute[225855]: 2026-01-20 14:53:53.340 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 20 14:53:53 compute-1 nova_compute[225855]: 2026-01-20 14:53:53.397 225859 INFO nova.virt.libvirt.driver [None req-154f5cfe-1d4e-4261-b272-cf70d3757d76 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] [instance: baada610-f563-4c97-89a9-56eba792c352] Creating config drive at /var/lib/nova/instances/baada610-f563-4c97-89a9-56eba792c352/disk.config.rescue
Jan 20 14:53:53 compute-1 nova_compute[225855]: 2026-01-20 14:53:53.403 225859 DEBUG oslo_concurrency.processutils [None req-154f5cfe-1d4e-4261-b272-cf70d3757d76 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/baada610-f563-4c97-89a9-56eba792c352/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp1540c4je execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:53:53 compute-1 nova_compute[225855]: 2026-01-20 14:53:53.523 225859 DEBUG nova.compute.manager [None req-469b4f3f-ba11-47de-b608-d3c23b30ada2 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: 7f5cfffe-c1dc-4b00-844e-0fb35b340f44] Stashing vm_state: stopped _prep_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:5560
Jan 20 14:53:53 compute-1 nova_compute[225855]: 2026-01-20 14:53:53.543 225859 DEBUG oslo_concurrency.processutils [None req-154f5cfe-1d4e-4261-b272-cf70d3757d76 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/baada610-f563-4c97-89a9-56eba792c352/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp1540c4je" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:53:53 compute-1 nova_compute[225855]: 2026-01-20 14:53:53.574 225859 DEBUG nova.storage.rbd_utils [None req-154f5cfe-1d4e-4261-b272-cf70d3757d76 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] rbd image baada610-f563-4c97-89a9-56eba792c352_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:53:53 compute-1 nova_compute[225855]: 2026-01-20 14:53:53.579 225859 DEBUG oslo_concurrency.processutils [None req-154f5cfe-1d4e-4261-b272-cf70d3757d76 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/baada610-f563-4c97-89a9-56eba792c352/disk.config.rescue baada610-f563-4c97-89a9-56eba792c352_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:53:53 compute-1 nova_compute[225855]: 2026-01-20 14:53:53.648 225859 DEBUG oslo_concurrency.lockutils [None req-469b4f3f-ba11-47de-b608-d3c23b30ada2 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:53:53 compute-1 nova_compute[225855]: 2026-01-20 14:53:53.649 225859 DEBUG oslo_concurrency.lockutils [None req-469b4f3f-ba11-47de-b608-d3c23b30ada2 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:53:53 compute-1 nova_compute[225855]: 2026-01-20 14:53:53.696 225859 DEBUG nova.objects.instance [None req-469b4f3f-ba11-47de-b608-d3c23b30ada2 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Lazy-loading 'pci_requests' on Instance uuid 7f5cfffe-c1dc-4b00-844e-0fb35b340f44 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:53:53 compute-1 nova_compute[225855]: 2026-01-20 14:53:53.714 225859 DEBUG nova.virt.hardware [None req-469b4f3f-ba11-47de-b608-d3c23b30ada2 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 20 14:53:53 compute-1 nova_compute[225855]: 2026-01-20 14:53:53.714 225859 INFO nova.compute.claims [None req-469b4f3f-ba11-47de-b608-d3c23b30ada2 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: 7f5cfffe-c1dc-4b00-844e-0fb35b340f44] Claim successful on node compute-1.ctlplane.example.com
Jan 20 14:53:53 compute-1 nova_compute[225855]: 2026-01-20 14:53:53.714 225859 DEBUG nova.objects.instance [None req-469b4f3f-ba11-47de-b608-d3c23b30ada2 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Lazy-loading 'resources' on Instance uuid 7f5cfffe-c1dc-4b00-844e-0fb35b340f44 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:53:53 compute-1 nova_compute[225855]: 2026-01-20 14:53:53.725 225859 DEBUG nova.objects.instance [None req-469b4f3f-ba11-47de-b608-d3c23b30ada2 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Lazy-loading 'pci_devices' on Instance uuid 7f5cfffe-c1dc-4b00-844e-0fb35b340f44 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:53:53 compute-1 nova_compute[225855]: 2026-01-20 14:53:53.746 225859 DEBUG oslo_concurrency.processutils [None req-154f5cfe-1d4e-4261-b272-cf70d3757d76 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/baada610-f563-4c97-89a9-56eba792c352/disk.config.rescue baada610-f563-4c97-89a9-56eba792c352_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.167s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:53:53 compute-1 nova_compute[225855]: 2026-01-20 14:53:53.747 225859 INFO nova.virt.libvirt.driver [None req-154f5cfe-1d4e-4261-b272-cf70d3757d76 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] [instance: baada610-f563-4c97-89a9-56eba792c352] Deleting local config drive /var/lib/nova/instances/baada610-f563-4c97-89a9-56eba792c352/disk.config.rescue because it was imported into RBD.
Jan 20 14:53:53 compute-1 nova_compute[225855]: 2026-01-20 14:53:53.770 225859 INFO nova.compute.resource_tracker [None req-469b4f3f-ba11-47de-b608-d3c23b30ada2 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: 7f5cfffe-c1dc-4b00-844e-0fb35b340f44] Updating resource usage from migration 0c07b1b8-dd18-4f00-acbf-59c21d8f4a60
Jan 20 14:53:53 compute-1 nova_compute[225855]: 2026-01-20 14:53:53.771 225859 DEBUG nova.compute.resource_tracker [None req-469b4f3f-ba11-47de-b608-d3c23b30ada2 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: 7f5cfffe-c1dc-4b00-844e-0fb35b340f44] Starting to track incoming migration 0c07b1b8-dd18-4f00-acbf-59c21d8f4a60 with flavor 30c26a27-d918-46d8-a512-4ef3b4ce5955 _update_usage_from_migration /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1431
Jan 20 14:53:53 compute-1 kernel: tapa3156414-5a: entered promiscuous mode
Jan 20 14:53:53 compute-1 nova_compute[225855]: 2026-01-20 14:53:53.804 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:53:53 compute-1 NetworkManager[49104]: <info>  [1768920833.8070] manager: (tapa3156414-5a): new Tun device (/org/freedesktop/NetworkManager/Devices/202)
Jan 20 14:53:53 compute-1 ovn_controller[130490]: 2026-01-20T14:53:53Z|00466|binding|INFO|Claiming lport a3156414-5a96-462d-974e-a57c9cd8e9c8 for this chassis.
Jan 20 14:53:53 compute-1 ovn_controller[130490]: 2026-01-20T14:53:53Z|00467|binding|INFO|a3156414-5a96-462d-974e-a57c9cd8e9c8: Claiming fa:16:3e:9e:93:82 10.100.0.3
Jan 20 14:53:53 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:53:53.812 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9e:93:82 10.100.0.3'], port_security=['fa:16:3e:9e:93:82 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'baada610-f563-4c97-89a9-56eba792c352', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-79184781-1f23-4584-87de-08e262242488', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0a29915e0dd2403fbd7b7e847696b00a', 'neutron:revision_number': '5', 'neutron:security_group_ids': '34326e47-c07e-48d1-9283-c1c5634fdc52', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.202'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6b73ab05-b29f-401a-84a5-ea1a96103f33, chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], logical_port=a3156414-5a96-462d-974e-a57c9cd8e9c8) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:53:53 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:53:53.813 140354 INFO neutron.agent.ovn.metadata.agent [-] Port a3156414-5a96-462d-974e-a57c9cd8e9c8 in datapath 79184781-1f23-4584-87de-08e262242488 bound to our chassis
Jan 20 14:53:53 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:53:53.815 140354 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 79184781-1f23-4584-87de-08e262242488
Jan 20 14:53:53 compute-1 nova_compute[225855]: 2026-01-20 14:53:53.824 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:53:53 compute-1 ovn_controller[130490]: 2026-01-20T14:53:53Z|00468|binding|INFO|Setting lport a3156414-5a96-462d-974e-a57c9cd8e9c8 ovn-installed in OVS
Jan 20 14:53:53 compute-1 ovn_controller[130490]: 2026-01-20T14:53:53Z|00469|binding|INFO|Setting lport a3156414-5a96-462d-974e-a57c9cd8e9c8 up in Southbound
Jan 20 14:53:53 compute-1 nova_compute[225855]: 2026-01-20 14:53:53.827 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:53:53 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:53:53.840 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[22142fac-ddc7-4561-9209-9e4f3dd318c7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:53:53 compute-1 systemd-machined[194361]: New machine qemu-56-instance-00000077.
Jan 20 14:53:53 compute-1 systemd-udevd[273243]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 14:53:53 compute-1 NetworkManager[49104]: <info>  [1768920833.8722] device (tapa3156414-5a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 20 14:53:53 compute-1 systemd[1]: Started Virtual Machine qemu-56-instance-00000077.
Jan 20 14:53:53 compute-1 NetworkManager[49104]: <info>  [1768920833.8730] device (tapa3156414-5a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 20 14:53:53 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:53:53 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:53:53 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:53:53.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:53:53 compute-1 nova_compute[225855]: 2026-01-20 14:53:53.880 225859 DEBUG oslo_concurrency.processutils [None req-469b4f3f-ba11-47de-b608-d3c23b30ada2 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:53:53 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:53:53.881 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[0b65bb6e-a859-4966-a314-fa3b55f49577]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:53:53 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:53:53.884 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[d88a4d44-1457-4e89-a3ae-441335f49495]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:53:53 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:53:53.918 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[6c55ac96-97d1-4480-9230-70acdc5e5e73]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:53:53 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:53:53.942 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[d846da68-88c4-4302-998b-89a6de413fb0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap79184781-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:38:7c:2a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 10, 'rx_bytes': 700, 'tx_bytes': 608, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 10, 'rx_bytes': 700, 'tx_bytes': 608, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 129], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 580405, 'reachable_time': 23111, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 273255, 'error': None, 'target': 'ovnmeta-79184781-1f23-4584-87de-08e262242488', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:53:53 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:53:53.963 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[c9079d19-b34a-462e-ae91-f574e35b361b]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap79184781-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 580414, 'tstamp': 580414}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 273257, 'error': None, 'target': 'ovnmeta-79184781-1f23-4584-87de-08e262242488', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap79184781-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 580417, 'tstamp': 580417}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 273257, 'error': None, 'target': 'ovnmeta-79184781-1f23-4584-87de-08e262242488', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:53:53 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:53:53.965 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap79184781-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:53:53 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:53:53.969 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap79184781-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:53:53 compute-1 nova_compute[225855]: 2026-01-20 14:53:53.969 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:53:53 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:53:53.970 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:53:53 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:53:53.970 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap79184781-10, col_values=(('external_ids', {'iface-id': 'b033e9e6-9781-4424-a20f-7b48a14e2c80'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:53:53 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:53:53.971 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:53:54 compute-1 nova_compute[225855]: 2026-01-20 14:53:54.059 225859 DEBUG nova.compute.manager [req-c3076cef-b50f-4260-bd78-2aabd5bf4d86 req-feb13a46-a55b-44a9-8a15-b687a30645f6 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: baada610-f563-4c97-89a9-56eba792c352] Received event network-vif-plugged-a3156414-5a96-462d-974e-a57c9cd8e9c8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:53:54 compute-1 nova_compute[225855]: 2026-01-20 14:53:54.059 225859 DEBUG oslo_concurrency.lockutils [req-c3076cef-b50f-4260-bd78-2aabd5bf4d86 req-feb13a46-a55b-44a9-8a15-b687a30645f6 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "baada610-f563-4c97-89a9-56eba792c352-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:53:54 compute-1 nova_compute[225855]: 2026-01-20 14:53:54.060 225859 DEBUG oslo_concurrency.lockutils [req-c3076cef-b50f-4260-bd78-2aabd5bf4d86 req-feb13a46-a55b-44a9-8a15-b687a30645f6 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "baada610-f563-4c97-89a9-56eba792c352-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:53:54 compute-1 nova_compute[225855]: 2026-01-20 14:53:54.060 225859 DEBUG oslo_concurrency.lockutils [req-c3076cef-b50f-4260-bd78-2aabd5bf4d86 req-feb13a46-a55b-44a9-8a15-b687a30645f6 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "baada610-f563-4c97-89a9-56eba792c352-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:53:54 compute-1 nova_compute[225855]: 2026-01-20 14:53:54.061 225859 DEBUG nova.compute.manager [req-c3076cef-b50f-4260-bd78-2aabd5bf4d86 req-feb13a46-a55b-44a9-8a15-b687a30645f6 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: baada610-f563-4c97-89a9-56eba792c352] No waiting events found dispatching network-vif-plugged-a3156414-5a96-462d-974e-a57c9cd8e9c8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:53:54 compute-1 nova_compute[225855]: 2026-01-20 14:53:54.061 225859 WARNING nova.compute.manager [req-c3076cef-b50f-4260-bd78-2aabd5bf4d86 req-feb13a46-a55b-44a9-8a15-b687a30645f6 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: baada610-f563-4c97-89a9-56eba792c352] Received unexpected event network-vif-plugged-a3156414-5a96-462d-974e-a57c9cd8e9c8 for instance with vm_state active and task_state rescuing.
Jan 20 14:53:54 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:53:54 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/613345111' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:53:54 compute-1 nova_compute[225855]: 2026-01-20 14:53:54.335 225859 DEBUG oslo_concurrency.processutils [None req-469b4f3f-ba11-47de-b608-d3c23b30ada2 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:53:54 compute-1 nova_compute[225855]: 2026-01-20 14:53:54.344 225859 DEBUG nova.compute.provider_tree [None req-469b4f3f-ba11-47de-b608-d3c23b30ada2 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:53:54 compute-1 nova_compute[225855]: 2026-01-20 14:53:54.353 225859 DEBUG nova.virt.libvirt.host [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Removed pending event for baada610-f563-4c97-89a9-56eba792c352 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Jan 20 14:53:54 compute-1 nova_compute[225855]: 2026-01-20 14:53:54.354 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768920834.3524303, baada610-f563-4c97-89a9-56eba792c352 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:53:54 compute-1 nova_compute[225855]: 2026-01-20 14:53:54.354 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: baada610-f563-4c97-89a9-56eba792c352] VM Resumed (Lifecycle Event)
Jan 20 14:53:54 compute-1 nova_compute[225855]: 2026-01-20 14:53:54.359 225859 DEBUG nova.compute.manager [None req-154f5cfe-1d4e-4261-b272-cf70d3757d76 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] [instance: baada610-f563-4c97-89a9-56eba792c352] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:53:54 compute-1 nova_compute[225855]: 2026-01-20 14:53:54.376 225859 DEBUG nova.scheduler.client.report [None req-469b4f3f-ba11-47de-b608-d3c23b30ada2 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:53:54 compute-1 nova_compute[225855]: 2026-01-20 14:53:54.381 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: baada610-f563-4c97-89a9-56eba792c352] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:53:54 compute-1 nova_compute[225855]: 2026-01-20 14:53:54.386 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: baada610-f563-4c97-89a9-56eba792c352] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:53:54 compute-1 nova_compute[225855]: 2026-01-20 14:53:54.402 225859 DEBUG oslo_concurrency.lockutils [None req-469b4f3f-ba11-47de-b608-d3c23b30ada2 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: held 0.753s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:53:54 compute-1 nova_compute[225855]: 2026-01-20 14:53:54.402 225859 INFO nova.compute.manager [None req-469b4f3f-ba11-47de-b608-d3c23b30ada2 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: 7f5cfffe-c1dc-4b00-844e-0fb35b340f44] Migrating
Jan 20 14:53:54 compute-1 nova_compute[225855]: 2026-01-20 14:53:54.411 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: baada610-f563-4c97-89a9-56eba792c352] During sync_power_state the instance has a pending task (rescuing). Skip.
Jan 20 14:53:54 compute-1 nova_compute[225855]: 2026-01-20 14:53:54.412 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768920834.3528485, baada610-f563-4c97-89a9-56eba792c352 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:53:54 compute-1 nova_compute[225855]: 2026-01-20 14:53:54.412 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: baada610-f563-4c97-89a9-56eba792c352] VM Started (Lifecycle Event)
Jan 20 14:53:54 compute-1 nova_compute[225855]: 2026-01-20 14:53:54.446 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: baada610-f563-4c97-89a9-56eba792c352] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:53:54 compute-1 nova_compute[225855]: 2026-01-20 14:53:54.452 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: baada610-f563-4c97-89a9-56eba792c352] Synchronizing instance power state after lifecycle event "Started"; current vm_state: rescued, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:53:54 compute-1 nova_compute[225855]: 2026-01-20 14:53:54.484 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:53:55 compute-1 ceph-mon[81775]: pgmap v2023: 321 pgs: 321 active+clean; 957 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 6.0 MiB/s rd, 11 MiB/s wr, 278 op/s
Jan 20 14:53:55 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/613345111' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:53:55 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:53:55 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:53:55 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:53:55.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:53:55 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:53:55 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:53:55 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:53:55.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:53:55 compute-1 nova_compute[225855]: 2026-01-20 14:53:55.993 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:53:56 compute-1 nova_compute[225855]: 2026-01-20 14:53:56.179 225859 DEBUG nova.compute.manager [req-7bad2a47-4454-44e1-bea1-bf7c226506f4 req-ea90f2b2-e58f-49bc-91b3-87fec2c247a5 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: baada610-f563-4c97-89a9-56eba792c352] Received event network-vif-plugged-a3156414-5a96-462d-974e-a57c9cd8e9c8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:53:56 compute-1 nova_compute[225855]: 2026-01-20 14:53:56.180 225859 DEBUG oslo_concurrency.lockutils [req-7bad2a47-4454-44e1-bea1-bf7c226506f4 req-ea90f2b2-e58f-49bc-91b3-87fec2c247a5 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "baada610-f563-4c97-89a9-56eba792c352-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:53:56 compute-1 nova_compute[225855]: 2026-01-20 14:53:56.181 225859 DEBUG oslo_concurrency.lockutils [req-7bad2a47-4454-44e1-bea1-bf7c226506f4 req-ea90f2b2-e58f-49bc-91b3-87fec2c247a5 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "baada610-f563-4c97-89a9-56eba792c352-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:53:56 compute-1 nova_compute[225855]: 2026-01-20 14:53:56.181 225859 DEBUG oslo_concurrency.lockutils [req-7bad2a47-4454-44e1-bea1-bf7c226506f4 req-ea90f2b2-e58f-49bc-91b3-87fec2c247a5 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "baada610-f563-4c97-89a9-56eba792c352-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:53:56 compute-1 nova_compute[225855]: 2026-01-20 14:53:56.182 225859 DEBUG nova.compute.manager [req-7bad2a47-4454-44e1-bea1-bf7c226506f4 req-ea90f2b2-e58f-49bc-91b3-87fec2c247a5 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: baada610-f563-4c97-89a9-56eba792c352] No waiting events found dispatching network-vif-plugged-a3156414-5a96-462d-974e-a57c9cd8e9c8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:53:56 compute-1 nova_compute[225855]: 2026-01-20 14:53:56.183 225859 WARNING nova.compute.manager [req-7bad2a47-4454-44e1-bea1-bf7c226506f4 req-ea90f2b2-e58f-49bc-91b3-87fec2c247a5 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: baada610-f563-4c97-89a9-56eba792c352] Received unexpected event network-vif-plugged-a3156414-5a96-462d-974e-a57c9cd8e9c8 for instance with vm_state rescued and task_state unrescuing.
Jan 20 14:53:56 compute-1 nova_compute[225855]: 2026-01-20 14:53:56.191 225859 INFO nova.compute.manager [None req-17d1ade6-ce2c-4ceb-86ef-fdd82f9c9feb d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] [instance: baada610-f563-4c97-89a9-56eba792c352] Unrescuing
Jan 20 14:53:56 compute-1 nova_compute[225855]: 2026-01-20 14:53:56.192 225859 DEBUG oslo_concurrency.lockutils [None req-17d1ade6-ce2c-4ceb-86ef-fdd82f9c9feb d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Acquiring lock "refresh_cache-baada610-f563-4c97-89a9-56eba792c352" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:53:56 compute-1 nova_compute[225855]: 2026-01-20 14:53:56.192 225859 DEBUG oslo_concurrency.lockutils [None req-17d1ade6-ce2c-4ceb-86ef-fdd82f9c9feb d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Acquired lock "refresh_cache-baada610-f563-4c97-89a9-56eba792c352" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:53:56 compute-1 nova_compute[225855]: 2026-01-20 14:53:56.192 225859 DEBUG nova.network.neutron [None req-17d1ade6-ce2c-4ceb-86ef-fdd82f9c9feb d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] [instance: baada610-f563-4c97-89a9-56eba792c352] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 14:53:56 compute-1 sshd-session[273358]: Accepted publickey for nova from 192.168.122.100 port 58034 ssh2: ECDSA SHA256:XnPnjIKlkePRv+YAV8ktjwWUWX9aekF80jIRGfdhjRU
Jan 20 14:53:56 compute-1 systemd-logind[783]: New session 66 of user nova.
Jan 20 14:53:56 compute-1 systemd[1]: Created slice User Slice of UID 42436.
Jan 20 14:53:56 compute-1 systemd[1]: Starting User Runtime Directory /run/user/42436...
Jan 20 14:53:56 compute-1 systemd[1]: Finished User Runtime Directory /run/user/42436.
Jan 20 14:53:56 compute-1 systemd[1]: Starting User Manager for UID 42436...
Jan 20 14:53:56 compute-1 systemd[273362]: pam_unix(systemd-user:session): session opened for user nova(uid=42436) by nova(uid=0)
Jan 20 14:53:56 compute-1 systemd[273362]: Queued start job for default target Main User Target.
Jan 20 14:53:56 compute-1 systemd[273362]: Created slice User Application Slice.
Jan 20 14:53:56 compute-1 systemd[273362]: Started Mark boot as successful after the user session has run 2 minutes.
Jan 20 14:53:56 compute-1 systemd[273362]: Started Daily Cleanup of User's Temporary Directories.
Jan 20 14:53:56 compute-1 systemd[273362]: Reached target Paths.
Jan 20 14:53:56 compute-1 systemd[273362]: Reached target Timers.
Jan 20 14:53:56 compute-1 systemd[273362]: Starting D-Bus User Message Bus Socket...
Jan 20 14:53:56 compute-1 systemd[273362]: Starting Create User's Volatile Files and Directories...
Jan 20 14:53:56 compute-1 systemd[273362]: Finished Create User's Volatile Files and Directories.
Jan 20 14:53:56 compute-1 systemd[273362]: Listening on D-Bus User Message Bus Socket.
Jan 20 14:53:56 compute-1 systemd[273362]: Reached target Sockets.
Jan 20 14:53:56 compute-1 systemd[273362]: Reached target Basic System.
Jan 20 14:53:56 compute-1 systemd[273362]: Reached target Main User Target.
Jan 20 14:53:56 compute-1 systemd[273362]: Startup finished in 169ms.
Jan 20 14:53:56 compute-1 systemd[1]: Started User Manager for UID 42436.
Jan 20 14:53:56 compute-1 systemd[1]: Started Session 66 of User nova.
Jan 20 14:53:56 compute-1 sshd-session[273358]: pam_unix(sshd:session): session opened for user nova(uid=42436) by nova(uid=0)
Jan 20 14:53:56 compute-1 sshd-session[273378]: Received disconnect from 192.168.122.100 port 58034:11: disconnected by user
Jan 20 14:53:56 compute-1 sshd-session[273378]: Disconnected from user nova 192.168.122.100 port 58034
Jan 20 14:53:56 compute-1 sshd-session[273358]: pam_unix(sshd:session): session closed for user nova
Jan 20 14:53:56 compute-1 systemd[1]: session-66.scope: Deactivated successfully.
Jan 20 14:53:56 compute-1 systemd-logind[783]: Session 66 logged out. Waiting for processes to exit.
Jan 20 14:53:56 compute-1 systemd-logind[783]: Removed session 66.
Jan 20 14:53:56 compute-1 sshd-session[273380]: Accepted publickey for nova from 192.168.122.100 port 58048 ssh2: ECDSA SHA256:XnPnjIKlkePRv+YAV8ktjwWUWX9aekF80jIRGfdhjRU
Jan 20 14:53:56 compute-1 systemd-logind[783]: New session 68 of user nova.
Jan 20 14:53:56 compute-1 systemd[1]: Started Session 68 of User nova.
Jan 20 14:53:56 compute-1 sshd-session[273380]: pam_unix(sshd:session): session opened for user nova(uid=42436) by nova(uid=0)
Jan 20 14:53:56 compute-1 sshd-session[273383]: Received disconnect from 192.168.122.100 port 58048:11: disconnected by user
Jan 20 14:53:56 compute-1 sshd-session[273383]: Disconnected from user nova 192.168.122.100 port 58048
Jan 20 14:53:56 compute-1 sshd-session[273380]: pam_unix(sshd:session): session closed for user nova
Jan 20 14:53:56 compute-1 systemd[1]: session-68.scope: Deactivated successfully.
Jan 20 14:53:56 compute-1 systemd-logind[783]: Session 68 logged out. Waiting for processes to exit.
Jan 20 14:53:56 compute-1 systemd-logind[783]: Removed session 68.
Jan 20 14:53:57 compute-1 ceph-mon[81775]: pgmap v2024: 321 pgs: 321 active+clean; 979 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 7.5 MiB/s rd, 9.8 MiB/s wr, 322 op/s
Jan 20 14:53:57 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:53:57 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:53:57 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:53:57.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:53:57 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e289 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:53:57 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:53:57 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:53:57 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:53:57.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:53:58 compute-1 nova_compute[225855]: 2026-01-20 14:53:58.029 225859 INFO nova.network.neutron [None req-469b4f3f-ba11-47de-b608-d3c23b30ada2 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: 7f5cfffe-c1dc-4b00-844e-0fb35b340f44] Updating port 87b0cab5-af2f-4440-8f58-840860a23f68 with attributes {'binding:host_id': 'compute-1.ctlplane.example.com', 'device_owner': 'compute:nova'}
Jan 20 14:53:58 compute-1 nova_compute[225855]: 2026-01-20 14:53:58.046 225859 DEBUG nova.network.neutron [None req-17d1ade6-ce2c-4ceb-86ef-fdd82f9c9feb d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] [instance: baada610-f563-4c97-89a9-56eba792c352] Updating instance_info_cache with network_info: [{"id": "a3156414-5a96-462d-974e-a57c9cd8e9c8", "address": "fa:16:3e:9e:93:82", "network": {"id": "79184781-1f23-4584-87de-08e262242488", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-165460946-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a29915e0dd2403fbd7b7e847696b00a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3156414-5a", "ovs_interfaceid": "a3156414-5a96-462d-974e-a57c9cd8e9c8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:53:58 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/433259130' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:53:58 compute-1 nova_compute[225855]: 2026-01-20 14:53:58.066 225859 DEBUG oslo_concurrency.lockutils [None req-17d1ade6-ce2c-4ceb-86ef-fdd82f9c9feb d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Releasing lock "refresh_cache-baada610-f563-4c97-89a9-56eba792c352" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:53:58 compute-1 nova_compute[225855]: 2026-01-20 14:53:58.068 225859 DEBUG nova.objects.instance [None req-17d1ade6-ce2c-4ceb-86ef-fdd82f9c9feb d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Lazy-loading 'flavor' on Instance uuid baada610-f563-4c97-89a9-56eba792c352 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:53:58 compute-1 kernel: tapa3156414-5a (unregistering): left promiscuous mode
Jan 20 14:53:58 compute-1 NetworkManager[49104]: <info>  [1768920838.1586] device (tapa3156414-5a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 20 14:53:58 compute-1 nova_compute[225855]: 2026-01-20 14:53:58.168 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:53:58 compute-1 ovn_controller[130490]: 2026-01-20T14:53:58Z|00470|binding|INFO|Releasing lport a3156414-5a96-462d-974e-a57c9cd8e9c8 from this chassis (sb_readonly=0)
Jan 20 14:53:58 compute-1 ovn_controller[130490]: 2026-01-20T14:53:58Z|00471|binding|INFO|Setting lport a3156414-5a96-462d-974e-a57c9cd8e9c8 down in Southbound
Jan 20 14:53:58 compute-1 ovn_controller[130490]: 2026-01-20T14:53:58Z|00472|binding|INFO|Removing iface tapa3156414-5a ovn-installed in OVS
Jan 20 14:53:58 compute-1 nova_compute[225855]: 2026-01-20 14:53:58.173 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:53:58 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:53:58.179 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9e:93:82 10.100.0.3'], port_security=['fa:16:3e:9e:93:82 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'baada610-f563-4c97-89a9-56eba792c352', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-79184781-1f23-4584-87de-08e262242488', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0a29915e0dd2403fbd7b7e847696b00a', 'neutron:revision_number': '6', 'neutron:security_group_ids': '34326e47-c07e-48d1-9283-c1c5634fdc52', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.202', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6b73ab05-b29f-401a-84a5-ea1a96103f33, chassis=[], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], logical_port=a3156414-5a96-462d-974e-a57c9cd8e9c8) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:53:58 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:53:58.180 140354 INFO neutron.agent.ovn.metadata.agent [-] Port a3156414-5a96-462d-974e-a57c9cd8e9c8 in datapath 79184781-1f23-4584-87de-08e262242488 unbound from our chassis
Jan 20 14:53:58 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:53:58.182 140354 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 79184781-1f23-4584-87de-08e262242488
Jan 20 14:53:58 compute-1 nova_compute[225855]: 2026-01-20 14:53:58.188 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:53:58 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:53:58.204 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[86d6caac-b776-43bb-b2ac-cb65018bd1dc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:53:58 compute-1 systemd[1]: machine-qemu\x2d56\x2dinstance\x2d00000077.scope: Deactivated successfully.
Jan 20 14:53:58 compute-1 systemd[1]: machine-qemu\x2d56\x2dinstance\x2d00000077.scope: Consumed 4.546s CPU time.
Jan 20 14:53:58 compute-1 systemd-machined[194361]: Machine qemu-56-instance-00000077 terminated.
Jan 20 14:53:58 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:53:58.242 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[75ee5a2d-464e-4db0-b05f-708405d9d9cc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:53:58 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:53:58.245 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[628b6aa4-c31c-40d5-8097-a70e59371362]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:53:58 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:53:58.281 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[804ce5d9-4e7d-456d-8ea0-bbd1b6b56777]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:53:58 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:53:58.297 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[67bad1d1-ab03-40af-8456-5d235cc69f88]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap79184781-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:38:7c:2a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 12, 'rx_bytes': 700, 'tx_bytes': 692, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 12, 'rx_bytes': 700, 'tx_bytes': 692, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 129], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 580405, 'reachable_time': 23111, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 273398, 'error': None, 'target': 'ovnmeta-79184781-1f23-4584-87de-08e262242488', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:53:58 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:53:58.312 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[6198531e-c6b4-4eb4-afec-041b397182b8]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap79184781-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 580414, 'tstamp': 580414}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 273399, 'error': None, 'target': 'ovnmeta-79184781-1f23-4584-87de-08e262242488', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap79184781-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 580417, 'tstamp': 580417}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 273399, 'error': None, 'target': 'ovnmeta-79184781-1f23-4584-87de-08e262242488', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:53:58 compute-1 kernel: tapa3156414-5a: entered promiscuous mode
Jan 20 14:53:58 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:53:58.314 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap79184781-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:53:58 compute-1 kernel: tapa3156414-5a (unregistering): left promiscuous mode
Jan 20 14:53:58 compute-1 nova_compute[225855]: 2026-01-20 14:53:58.315 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:53:58 compute-1 ovn_controller[130490]: 2026-01-20T14:53:58Z|00473|binding|INFO|Claiming lport a3156414-5a96-462d-974e-a57c9cd8e9c8 for this chassis.
Jan 20 14:53:58 compute-1 ovn_controller[130490]: 2026-01-20T14:53:58Z|00474|binding|INFO|a3156414-5a96-462d-974e-a57c9cd8e9c8: Claiming fa:16:3e:9e:93:82 10.100.0.3
Jan 20 14:53:58 compute-1 nova_compute[225855]: 2026-01-20 14:53:58.319 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:53:58 compute-1 nova_compute[225855]: 2026-01-20 14:53:58.333 225859 INFO nova.virt.libvirt.driver [-] [instance: baada610-f563-4c97-89a9-56eba792c352] Instance destroyed successfully.
Jan 20 14:53:58 compute-1 nova_compute[225855]: 2026-01-20 14:53:58.333 225859 DEBUG nova.objects.instance [None req-17d1ade6-ce2c-4ceb-86ef-fdd82f9c9feb d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Lazy-loading 'numa_topology' on Instance uuid baada610-f563-4c97-89a9-56eba792c352 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:53:58 compute-1 ovn_controller[130490]: 2026-01-20T14:53:58Z|00475|binding|INFO|Setting lport a3156414-5a96-462d-974e-a57c9cd8e9c8 ovn-installed in OVS
Jan 20 14:53:58 compute-1 ovn_controller[130490]: 2026-01-20T14:53:58Z|00476|if_status|INFO|Dropped 2 log messages in last 510 seconds (most recently, 510 seconds ago) due to excessive rate
Jan 20 14:53:58 compute-1 ovn_controller[130490]: 2026-01-20T14:53:58Z|00477|if_status|INFO|Not setting lport a3156414-5a96-462d-974e-a57c9cd8e9c8 down as sb is readonly
Jan 20 14:53:58 compute-1 nova_compute[225855]: 2026-01-20 14:53:58.339 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:53:58 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:53:58.342 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap79184781-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:53:58 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:53:58.343 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:53:58 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:53:58.343 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap79184781-10, col_values=(('external_ids', {'iface-id': 'b033e9e6-9781-4424-a20f-7b48a14e2c80'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:53:58 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:53:58.344 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:53:58 compute-1 ovn_controller[130490]: 2026-01-20T14:53:58Z|00478|binding|INFO|Releasing lport a3156414-5a96-462d-974e-a57c9cd8e9c8 from this chassis (sb_readonly=0)
Jan 20 14:53:58 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:53:58.578 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9e:93:82 10.100.0.3'], port_security=['fa:16:3e:9e:93:82 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'baada610-f563-4c97-89a9-56eba792c352', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-79184781-1f23-4584-87de-08e262242488', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0a29915e0dd2403fbd7b7e847696b00a', 'neutron:revision_number': '6', 'neutron:security_group_ids': '34326e47-c07e-48d1-9283-c1c5634fdc52', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.202', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6b73ab05-b29f-401a-84a5-ea1a96103f33, chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], logical_port=a3156414-5a96-462d-974e-a57c9cd8e9c8) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:53:58 compute-1 nova_compute[225855]: 2026-01-20 14:53:58.579 225859 DEBUG nova.compute.manager [req-5be064b3-afcb-4cf0-93a1-124f121e3640 req-5f591b7a-4bff-44db-a009-e1682b3492d5 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: baada610-f563-4c97-89a9-56eba792c352] Received event network-vif-unplugged-a3156414-5a96-462d-974e-a57c9cd8e9c8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:53:58 compute-1 nova_compute[225855]: 2026-01-20 14:53:58.579 225859 DEBUG oslo_concurrency.lockutils [req-5be064b3-afcb-4cf0-93a1-124f121e3640 req-5f591b7a-4bff-44db-a009-e1682b3492d5 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "baada610-f563-4c97-89a9-56eba792c352-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:53:58 compute-1 nova_compute[225855]: 2026-01-20 14:53:58.580 225859 DEBUG oslo_concurrency.lockutils [req-5be064b3-afcb-4cf0-93a1-124f121e3640 req-5f591b7a-4bff-44db-a009-e1682b3492d5 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "baada610-f563-4c97-89a9-56eba792c352-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:53:58 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:53:58.580 140354 INFO neutron.agent.ovn.metadata.agent [-] Port a3156414-5a96-462d-974e-a57c9cd8e9c8 in datapath 79184781-1f23-4584-87de-08e262242488 bound to our chassis
Jan 20 14:53:58 compute-1 nova_compute[225855]: 2026-01-20 14:53:58.580 225859 DEBUG oslo_concurrency.lockutils [req-5be064b3-afcb-4cf0-93a1-124f121e3640 req-5f591b7a-4bff-44db-a009-e1682b3492d5 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "baada610-f563-4c97-89a9-56eba792c352-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:53:58 compute-1 nova_compute[225855]: 2026-01-20 14:53:58.580 225859 DEBUG nova.compute.manager [req-5be064b3-afcb-4cf0-93a1-124f121e3640 req-5f591b7a-4bff-44db-a009-e1682b3492d5 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: baada610-f563-4c97-89a9-56eba792c352] No waiting events found dispatching network-vif-unplugged-a3156414-5a96-462d-974e-a57c9cd8e9c8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:53:58 compute-1 nova_compute[225855]: 2026-01-20 14:53:58.580 225859 WARNING nova.compute.manager [req-5be064b3-afcb-4cf0-93a1-124f121e3640 req-5f591b7a-4bff-44db-a009-e1682b3492d5 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: baada610-f563-4c97-89a9-56eba792c352] Received unexpected event network-vif-unplugged-a3156414-5a96-462d-974e-a57c9cd8e9c8 for instance with vm_state rescued and task_state unrescuing.
Jan 20 14:53:58 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:53:58.582 140354 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 79184781-1f23-4584-87de-08e262242488
Jan 20 14:53:58 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:53:58.586 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9e:93:82 10.100.0.3'], port_security=['fa:16:3e:9e:93:82 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'baada610-f563-4c97-89a9-56eba792c352', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-79184781-1f23-4584-87de-08e262242488', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0a29915e0dd2403fbd7b7e847696b00a', 'neutron:revision_number': '6', 'neutron:security_group_ids': '34326e47-c07e-48d1-9283-c1c5634fdc52', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.202', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6b73ab05-b29f-401a-84a5-ea1a96103f33, chassis=[], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], logical_port=a3156414-5a96-462d-974e-a57c9cd8e9c8) old=Port_Binding(chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:53:58 compute-1 nova_compute[225855]: 2026-01-20 14:53:58.592 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:53:58 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:53:58.596 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[db283db5-3229-486e-9b71-1e218f0610f0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:53:58 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:53:58.630 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[0ae8356a-572e-4416-b62d-433fcd46720c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:53:58 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:53:58.634 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[8cbc59e1-1246-464e-860c-3303918d9c3b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:53:58 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:53:58.669 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[b57f2895-1224-499e-871b-9a0456a0d772]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:53:58 compute-1 kernel: tapa3156414-5a: entered promiscuous mode
Jan 20 14:53:58 compute-1 systemd-udevd[273389]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 14:53:58 compute-1 NetworkManager[49104]: <info>  [1768920838.6971] manager: (tapa3156414-5a): new Tun device (/org/freedesktop/NetworkManager/Devices/203)
Jan 20 14:53:58 compute-1 ovn_controller[130490]: 2026-01-20T14:53:58Z|00479|binding|INFO|Claiming lport a3156414-5a96-462d-974e-a57c9cd8e9c8 for this chassis.
Jan 20 14:53:58 compute-1 ovn_controller[130490]: 2026-01-20T14:53:58Z|00480|binding|INFO|a3156414-5a96-462d-974e-a57c9cd8e9c8: Claiming fa:16:3e:9e:93:82 10.100.0.3
Jan 20 14:53:58 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:53:58.697 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[eb5bc2f5-385c-4042-b0b0-0f586b71fadd]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap79184781-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:38:7c:2a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 14, 'rx_bytes': 700, 'tx_bytes': 776, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 14, 'rx_bytes': 700, 'tx_bytes': 776, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 129], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 580405, 'reachable_time': 23111, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 273420, 'error': None, 'target': 'ovnmeta-79184781-1f23-4584-87de-08e262242488', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:53:58 compute-1 nova_compute[225855]: 2026-01-20 14:53:58.697 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:53:58 compute-1 ovn_controller[130490]: 2026-01-20T14:53:58Z|00481|binding|INFO|Removing lport a3156414-5a96-462d-974e-a57c9cd8e9c8 ovn-installed in OVS
Jan 20 14:53:58 compute-1 nova_compute[225855]: 2026-01-20 14:53:58.699 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:53:58 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:53:58.703 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9e:93:82 10.100.0.3'], port_security=['fa:16:3e:9e:93:82 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'baada610-f563-4c97-89a9-56eba792c352', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-79184781-1f23-4584-87de-08e262242488', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0a29915e0dd2403fbd7b7e847696b00a', 'neutron:revision_number': '6', 'neutron:security_group_ids': '34326e47-c07e-48d1-9283-c1c5634fdc52', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.202', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6b73ab05-b29f-401a-84a5-ea1a96103f33, chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], logical_port=a3156414-5a96-462d-974e-a57c9cd8e9c8) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:53:58 compute-1 NetworkManager[49104]: <info>  [1768920838.7062] device (tapa3156414-5a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 20 14:53:58 compute-1 NetworkManager[49104]: <info>  [1768920838.7074] device (tapa3156414-5a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 20 14:53:58 compute-1 ovn_controller[130490]: 2026-01-20T14:53:58Z|00482|binding|INFO|Setting lport a3156414-5a96-462d-974e-a57c9cd8e9c8 ovn-installed in OVS
Jan 20 14:53:58 compute-1 ovn_controller[130490]: 2026-01-20T14:53:58Z|00483|binding|INFO|Setting lport a3156414-5a96-462d-974e-a57c9cd8e9c8 up in Southbound
Jan 20 14:53:58 compute-1 nova_compute[225855]: 2026-01-20 14:53:58.717 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:53:58 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:53:58.717 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[2b524f4e-452e-45c2-8712-8bd7a428fef7]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap79184781-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 580414, 'tstamp': 580414}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 273426, 'error': None, 'target': 'ovnmeta-79184781-1f23-4584-87de-08e262242488', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap79184781-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 580417, 'tstamp': 580417}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 273426, 'error': None, 'target': 'ovnmeta-79184781-1f23-4584-87de-08e262242488', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:53:58 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:53:58.718 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap79184781-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:53:58 compute-1 nova_compute[225855]: 2026-01-20 14:53:58.720 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:53:58 compute-1 nova_compute[225855]: 2026-01-20 14:53:58.721 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:53:58 compute-1 nova_compute[225855]: 2026-01-20 14:53:58.723 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:53:58 compute-1 systemd-machined[194361]: New machine qemu-57-instance-00000077.
Jan 20 14:53:58 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:53:58.726 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap79184781-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:53:58 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:53:58.727 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:53:58 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:53:58.728 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap79184781-10, col_values=(('external_ids', {'iface-id': 'b033e9e6-9781-4424-a20f-7b48a14e2c80'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:53:58 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:53:58.728 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:53:58 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:53:58.730 140354 INFO neutron.agent.ovn.metadata.agent [-] Port a3156414-5a96-462d-974e-a57c9cd8e9c8 in datapath 79184781-1f23-4584-87de-08e262242488 unbound from our chassis
Jan 20 14:53:58 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:53:58.732 140354 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 79184781-1f23-4584-87de-08e262242488
Jan 20 14:53:58 compute-1 systemd[1]: Started Virtual Machine qemu-57-instance-00000077.
Jan 20 14:53:58 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:53:58.748 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[84943a53-8a07-4859-8d50-3238a7b92e2a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:53:58 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:53:58.773 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[05143511-b8e1-430f-a1bf-26d14dc06de6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:53:58 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:53:58.778 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[8fd727da-3f6b-49eb-959f-9d3042fbc1b2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:53:58 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:53:58.806 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[5789022f-7f89-4632-a04d-be77ae80686c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:53:58 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:53:58.822 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[3799457a-3fa2-48f7-b752-385470106bf1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap79184781-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:38:7c:2a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 16, 'rx_bytes': 700, 'tx_bytes': 860, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 16, 'rx_bytes': 700, 'tx_bytes': 860, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 129], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 580405, 'reachable_time': 23111, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 273440, 'error': None, 'target': 'ovnmeta-79184781-1f23-4584-87de-08e262242488', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:53:58 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:53:58.837 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[76acbc39-5b53-4443-8001-46c507adf1e9]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap79184781-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 580414, 'tstamp': 580414}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 273442, 'error': None, 'target': 'ovnmeta-79184781-1f23-4584-87de-08e262242488', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap79184781-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 580417, 'tstamp': 580417}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 273442, 'error': None, 'target': 'ovnmeta-79184781-1f23-4584-87de-08e262242488', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:53:58 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:53:58.840 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap79184781-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:53:58 compute-1 nova_compute[225855]: 2026-01-20 14:53:58.841 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:53:58 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:53:58.843 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap79184781-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:53:58 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:53:58.843 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:53:58 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:53:58.844 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap79184781-10, col_values=(('external_ids', {'iface-id': 'b033e9e6-9781-4424-a20f-7b48a14e2c80'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:53:58 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:53:58.844 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:53:58 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:53:58.845 140354 INFO neutron.agent.ovn.metadata.agent [-] Port a3156414-5a96-462d-974e-a57c9cd8e9c8 in datapath 79184781-1f23-4584-87de-08e262242488 unbound from our chassis
Jan 20 14:53:58 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:53:58.847 140354 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 79184781-1f23-4584-87de-08e262242488
Jan 20 14:53:58 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:53:58.862 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[3d0fdb79-68e1-424f-ad07-682123c0ec1b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:53:58 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:53:58.889 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[6ddba788-afe1-40a6-8938-b89b5c3eb3e5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:53:58 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:53:58.893 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[d6b3df50-83f0-40bc-b9d6-d8762cce2b40]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:53:58 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:53:58.925 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[5e1694ac-0df4-4643-b3e4-c9d761d9ee40]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:53:58 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:53:58.945 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[14b4a2b3-2fa1-4caa-8c04-a876518f4b3e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap79184781-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:38:7c:2a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 18, 'rx_bytes': 700, 'tx_bytes': 944, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 18, 'rx_bytes': 700, 'tx_bytes': 944, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 129], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 580405, 'reachable_time': 23111, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 273455, 'error': None, 'target': 'ovnmeta-79184781-1f23-4584-87de-08e262242488', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:53:58 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:53:58.966 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[db0e5fca-a37a-498d-a8b7-39dce1176100]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap79184781-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 580414, 'tstamp': 580414}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 273465, 'error': None, 'target': 'ovnmeta-79184781-1f23-4584-87de-08e262242488', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap79184781-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 580417, 'tstamp': 580417}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 273465, 'error': None, 'target': 'ovnmeta-79184781-1f23-4584-87de-08e262242488', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:53:58 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:53:58.968 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap79184781-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:53:59 compute-1 nova_compute[225855]: 2026-01-20 14:53:59.122 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:53:59 compute-1 nova_compute[225855]: 2026-01-20 14:53:59.123 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:53:59 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:53:59.125 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap79184781-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:53:59 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:53:59.125 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:53:59 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:53:59.126 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap79184781-10, col_values=(('external_ids', {'iface-id': 'b033e9e6-9781-4424-a20f-7b48a14e2c80'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:53:59 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:53:59.126 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:53:59 compute-1 ceph-mon[81775]: pgmap v2025: 321 pgs: 321 active+clean; 956 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 8.3 MiB/s rd, 8.4 MiB/s wr, 344 op/s
Jan 20 14:53:59 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:53:59 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:53:59 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:53:59.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:53:59 compute-1 nova_compute[225855]: 2026-01-20 14:53:59.312 225859 DEBUG nova.virt.libvirt.host [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Removed pending event for baada610-f563-4c97-89a9-56eba792c352 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Jan 20 14:53:59 compute-1 nova_compute[225855]: 2026-01-20 14:53:59.313 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768920839.3113894, baada610-f563-4c97-89a9-56eba792c352 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:53:59 compute-1 nova_compute[225855]: 2026-01-20 14:53:59.313 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: baada610-f563-4c97-89a9-56eba792c352] VM Resumed (Lifecycle Event)
Jan 20 14:53:59 compute-1 nova_compute[225855]: 2026-01-20 14:53:59.336 225859 DEBUG oslo_concurrency.lockutils [None req-469b4f3f-ba11-47de-b608-d3c23b30ada2 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Acquiring lock "refresh_cache-7f5cfffe-c1dc-4b00-844e-0fb35b340f44" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:53:59 compute-1 nova_compute[225855]: 2026-01-20 14:53:59.337 225859 DEBUG oslo_concurrency.lockutils [None req-469b4f3f-ba11-47de-b608-d3c23b30ada2 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Acquired lock "refresh_cache-7f5cfffe-c1dc-4b00-844e-0fb35b340f44" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:53:59 compute-1 nova_compute[225855]: 2026-01-20 14:53:59.337 225859 DEBUG nova.network.neutron [None req-469b4f3f-ba11-47de-b608-d3c23b30ada2 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: 7f5cfffe-c1dc-4b00-844e-0fb35b340f44] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 14:53:59 compute-1 nova_compute[225855]: 2026-01-20 14:53:59.348 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: baada610-f563-4c97-89a9-56eba792c352] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:53:59 compute-1 nova_compute[225855]: 2026-01-20 14:53:59.352 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: baada610-f563-4c97-89a9-56eba792c352] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: rescued, current task_state: unrescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:53:59 compute-1 nova_compute[225855]: 2026-01-20 14:53:59.369 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: baada610-f563-4c97-89a9-56eba792c352] During sync_power_state the instance has a pending task (unrescuing). Skip.
Jan 20 14:53:59 compute-1 nova_compute[225855]: 2026-01-20 14:53:59.370 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768920839.3117814, baada610-f563-4c97-89a9-56eba792c352 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:53:59 compute-1 nova_compute[225855]: 2026-01-20 14:53:59.370 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: baada610-f563-4c97-89a9-56eba792c352] VM Started (Lifecycle Event)
Jan 20 14:53:59 compute-1 nova_compute[225855]: 2026-01-20 14:53:59.399 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: baada610-f563-4c97-89a9-56eba792c352] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:53:59 compute-1 nova_compute[225855]: 2026-01-20 14:53:59.402 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: baada610-f563-4c97-89a9-56eba792c352] Synchronizing instance power state after lifecycle event "Started"; current vm_state: rescued, current task_state: unrescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:53:59 compute-1 nova_compute[225855]: 2026-01-20 14:53:59.422 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: baada610-f563-4c97-89a9-56eba792c352] During sync_power_state the instance has a pending task (unrescuing). Skip.
Jan 20 14:53:59 compute-1 nova_compute[225855]: 2026-01-20 14:53:59.437 225859 DEBUG nova.compute.manager [req-3cf1c8fb-7ccd-4908-8ac4-22779402973f req-537fae91-771c-49c0-89be-305c6a3f71f1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 7f5cfffe-c1dc-4b00-844e-0fb35b340f44] Received event network-changed-87b0cab5-af2f-4440-8f58-840860a23f68 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:53:59 compute-1 nova_compute[225855]: 2026-01-20 14:53:59.438 225859 DEBUG nova.compute.manager [req-3cf1c8fb-7ccd-4908-8ac4-22779402973f req-537fae91-771c-49c0-89be-305c6a3f71f1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 7f5cfffe-c1dc-4b00-844e-0fb35b340f44] Refreshing instance network info cache due to event network-changed-87b0cab5-af2f-4440-8f58-840860a23f68. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 14:53:59 compute-1 nova_compute[225855]: 2026-01-20 14:53:59.438 225859 DEBUG oslo_concurrency.lockutils [req-3cf1c8fb-7ccd-4908-8ac4-22779402973f req-537fae91-771c-49c0-89be-305c6a3f71f1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-7f5cfffe-c1dc-4b00-844e-0fb35b340f44" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:53:59 compute-1 nova_compute[225855]: 2026-01-20 14:53:59.486 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:53:59 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e290 e290: 3 total, 3 up, 3 in
Jan 20 14:53:59 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:53:59 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:53:59 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:53:59.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:53:59 compute-1 nova_compute[225855]: 2026-01-20 14:53:59.936 225859 DEBUG nova.compute.manager [req-1e42ad7b-2695-4540-a7a9-7a729e69d132 req-15cc6379-4caf-4b2a-9ff1-99f169566279 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: baada610-f563-4c97-89a9-56eba792c352] Received event network-changed-a3156414-5a96-462d-974e-a57c9cd8e9c8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:53:59 compute-1 nova_compute[225855]: 2026-01-20 14:53:59.937 225859 DEBUG nova.compute.manager [req-1e42ad7b-2695-4540-a7a9-7a729e69d132 req-15cc6379-4caf-4b2a-9ff1-99f169566279 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: baada610-f563-4c97-89a9-56eba792c352] Refreshing instance network info cache due to event network-changed-a3156414-5a96-462d-974e-a57c9cd8e9c8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 14:53:59 compute-1 nova_compute[225855]: 2026-01-20 14:53:59.937 225859 DEBUG oslo_concurrency.lockutils [req-1e42ad7b-2695-4540-a7a9-7a729e69d132 req-15cc6379-4caf-4b2a-9ff1-99f169566279 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-baada610-f563-4c97-89a9-56eba792c352" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:53:59 compute-1 nova_compute[225855]: 2026-01-20 14:53:59.938 225859 DEBUG oslo_concurrency.lockutils [req-1e42ad7b-2695-4540-a7a9-7a729e69d132 req-15cc6379-4caf-4b2a-9ff1-99f169566279 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-baada610-f563-4c97-89a9-56eba792c352" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:53:59 compute-1 nova_compute[225855]: 2026-01-20 14:53:59.938 225859 DEBUG nova.network.neutron [req-1e42ad7b-2695-4540-a7a9-7a729e69d132 req-15cc6379-4caf-4b2a-9ff1-99f169566279 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: baada610-f563-4c97-89a9-56eba792c352] Refreshing network info cache for port a3156414-5a96-462d-974e-a57c9cd8e9c8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 14:54:00 compute-1 nova_compute[225855]: 2026-01-20 14:54:00.110 225859 DEBUG nova.compute.manager [None req-17d1ade6-ce2c-4ceb-86ef-fdd82f9c9feb d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] [instance: baada610-f563-4c97-89a9-56eba792c352] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:54:00 compute-1 ceph-mon[81775]: osdmap e290: 3 total, 3 up, 3 in
Jan 20 14:54:00 compute-1 ceph-mon[81775]: pgmap v2027: 321 pgs: 321 active+clean; 932 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 6.9 MiB/s rd, 5.4 MiB/s wr, 329 op/s
Jan 20 14:54:00 compute-1 nova_compute[225855]: 2026-01-20 14:54:00.691 225859 DEBUG nova.compute.manager [req-fdedb619-36f7-409b-abb1-269c8359c78f req-77621fa7-e140-476a-820a-32294f004028 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: baada610-f563-4c97-89a9-56eba792c352] Received event network-vif-plugged-a3156414-5a96-462d-974e-a57c9cd8e9c8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:54:00 compute-1 nova_compute[225855]: 2026-01-20 14:54:00.692 225859 DEBUG oslo_concurrency.lockutils [req-fdedb619-36f7-409b-abb1-269c8359c78f req-77621fa7-e140-476a-820a-32294f004028 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "baada610-f563-4c97-89a9-56eba792c352-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:54:00 compute-1 nova_compute[225855]: 2026-01-20 14:54:00.692 225859 DEBUG oslo_concurrency.lockutils [req-fdedb619-36f7-409b-abb1-269c8359c78f req-77621fa7-e140-476a-820a-32294f004028 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "baada610-f563-4c97-89a9-56eba792c352-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:54:00 compute-1 nova_compute[225855]: 2026-01-20 14:54:00.693 225859 DEBUG oslo_concurrency.lockutils [req-fdedb619-36f7-409b-abb1-269c8359c78f req-77621fa7-e140-476a-820a-32294f004028 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "baada610-f563-4c97-89a9-56eba792c352-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:54:00 compute-1 nova_compute[225855]: 2026-01-20 14:54:00.693 225859 DEBUG nova.compute.manager [req-fdedb619-36f7-409b-abb1-269c8359c78f req-77621fa7-e140-476a-820a-32294f004028 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: baada610-f563-4c97-89a9-56eba792c352] No waiting events found dispatching network-vif-plugged-a3156414-5a96-462d-974e-a57c9cd8e9c8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:54:00 compute-1 nova_compute[225855]: 2026-01-20 14:54:00.693 225859 WARNING nova.compute.manager [req-fdedb619-36f7-409b-abb1-269c8359c78f req-77621fa7-e140-476a-820a-32294f004028 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: baada610-f563-4c97-89a9-56eba792c352] Received unexpected event network-vif-plugged-a3156414-5a96-462d-974e-a57c9cd8e9c8 for instance with vm_state active and task_state None.
Jan 20 14:54:00 compute-1 nova_compute[225855]: 2026-01-20 14:54:00.693 225859 DEBUG nova.compute.manager [req-fdedb619-36f7-409b-abb1-269c8359c78f req-77621fa7-e140-476a-820a-32294f004028 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: baada610-f563-4c97-89a9-56eba792c352] Received event network-vif-plugged-a3156414-5a96-462d-974e-a57c9cd8e9c8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:54:00 compute-1 nova_compute[225855]: 2026-01-20 14:54:00.694 225859 DEBUG oslo_concurrency.lockutils [req-fdedb619-36f7-409b-abb1-269c8359c78f req-77621fa7-e140-476a-820a-32294f004028 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "baada610-f563-4c97-89a9-56eba792c352-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:54:00 compute-1 nova_compute[225855]: 2026-01-20 14:54:00.694 225859 DEBUG oslo_concurrency.lockutils [req-fdedb619-36f7-409b-abb1-269c8359c78f req-77621fa7-e140-476a-820a-32294f004028 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "baada610-f563-4c97-89a9-56eba792c352-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:54:00 compute-1 nova_compute[225855]: 2026-01-20 14:54:00.694 225859 DEBUG oslo_concurrency.lockutils [req-fdedb619-36f7-409b-abb1-269c8359c78f req-77621fa7-e140-476a-820a-32294f004028 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "baada610-f563-4c97-89a9-56eba792c352-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:54:00 compute-1 nova_compute[225855]: 2026-01-20 14:54:00.695 225859 DEBUG nova.compute.manager [req-fdedb619-36f7-409b-abb1-269c8359c78f req-77621fa7-e140-476a-820a-32294f004028 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: baada610-f563-4c97-89a9-56eba792c352] No waiting events found dispatching network-vif-plugged-a3156414-5a96-462d-974e-a57c9cd8e9c8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:54:00 compute-1 nova_compute[225855]: 2026-01-20 14:54:00.695 225859 WARNING nova.compute.manager [req-fdedb619-36f7-409b-abb1-269c8359c78f req-77621fa7-e140-476a-820a-32294f004028 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: baada610-f563-4c97-89a9-56eba792c352] Received unexpected event network-vif-plugged-a3156414-5a96-462d-974e-a57c9cd8e9c8 for instance with vm_state active and task_state None.
Jan 20 14:54:00 compute-1 nova_compute[225855]: 2026-01-20 14:54:00.695 225859 DEBUG nova.compute.manager [req-fdedb619-36f7-409b-abb1-269c8359c78f req-77621fa7-e140-476a-820a-32294f004028 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: baada610-f563-4c97-89a9-56eba792c352] Received event network-vif-plugged-a3156414-5a96-462d-974e-a57c9cd8e9c8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:54:00 compute-1 nova_compute[225855]: 2026-01-20 14:54:00.695 225859 DEBUG oslo_concurrency.lockutils [req-fdedb619-36f7-409b-abb1-269c8359c78f req-77621fa7-e140-476a-820a-32294f004028 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "baada610-f563-4c97-89a9-56eba792c352-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:54:00 compute-1 nova_compute[225855]: 2026-01-20 14:54:00.696 225859 DEBUG oslo_concurrency.lockutils [req-fdedb619-36f7-409b-abb1-269c8359c78f req-77621fa7-e140-476a-820a-32294f004028 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "baada610-f563-4c97-89a9-56eba792c352-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:54:00 compute-1 nova_compute[225855]: 2026-01-20 14:54:00.696 225859 DEBUG oslo_concurrency.lockutils [req-fdedb619-36f7-409b-abb1-269c8359c78f req-77621fa7-e140-476a-820a-32294f004028 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "baada610-f563-4c97-89a9-56eba792c352-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:54:00 compute-1 nova_compute[225855]: 2026-01-20 14:54:00.696 225859 DEBUG nova.compute.manager [req-fdedb619-36f7-409b-abb1-269c8359c78f req-77621fa7-e140-476a-820a-32294f004028 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: baada610-f563-4c97-89a9-56eba792c352] No waiting events found dispatching network-vif-plugged-a3156414-5a96-462d-974e-a57c9cd8e9c8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:54:00 compute-1 nova_compute[225855]: 2026-01-20 14:54:00.697 225859 WARNING nova.compute.manager [req-fdedb619-36f7-409b-abb1-269c8359c78f req-77621fa7-e140-476a-820a-32294f004028 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: baada610-f563-4c97-89a9-56eba792c352] Received unexpected event network-vif-plugged-a3156414-5a96-462d-974e-a57c9cd8e9c8 for instance with vm_state active and task_state None.
Jan 20 14:54:00 compute-1 nova_compute[225855]: 2026-01-20 14:54:00.995 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:54:01 compute-1 nova_compute[225855]: 2026-01-20 14:54:01.072 225859 DEBUG nova.network.neutron [None req-469b4f3f-ba11-47de-b608-d3c23b30ada2 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: 7f5cfffe-c1dc-4b00-844e-0fb35b340f44] Updating instance_info_cache with network_info: [{"id": "87b0cab5-af2f-4440-8f58-840860a23f68", "address": "fa:16:3e:2b:79:1b", "network": {"id": "41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1445030024-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.184", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b3b1b7f5b4f84b5abbc401eb577c85c0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap87b0cab5-af", "ovs_interfaceid": "87b0cab5-af2f-4440-8f58-840860a23f68", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:54:01 compute-1 nova_compute[225855]: 2026-01-20 14:54:01.099 225859 DEBUG oslo_concurrency.lockutils [None req-469b4f3f-ba11-47de-b608-d3c23b30ada2 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Releasing lock "refresh_cache-7f5cfffe-c1dc-4b00-844e-0fb35b340f44" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:54:01 compute-1 nova_compute[225855]: 2026-01-20 14:54:01.104 225859 DEBUG oslo_concurrency.lockutils [req-3cf1c8fb-7ccd-4908-8ac4-22779402973f req-537fae91-771c-49c0-89be-305c6a3f71f1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-7f5cfffe-c1dc-4b00-844e-0fb35b340f44" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:54:01 compute-1 nova_compute[225855]: 2026-01-20 14:54:01.105 225859 DEBUG nova.network.neutron [req-3cf1c8fb-7ccd-4908-8ac4-22779402973f req-537fae91-771c-49c0-89be-305c6a3f71f1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 7f5cfffe-c1dc-4b00-844e-0fb35b340f44] Refreshing network info cache for port 87b0cab5-af2f-4440-8f58-840860a23f68 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 14:54:01 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:54:01 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:54:01 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:54:01.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:54:01 compute-1 nova_compute[225855]: 2026-01-20 14:54:01.249 225859 DEBUG nova.virt.libvirt.driver [None req-469b4f3f-ba11-47de-b608-d3c23b30ada2 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: 7f5cfffe-c1dc-4b00-844e-0fb35b340f44] Starting finish_migration finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11698
Jan 20 14:54:01 compute-1 ovn_controller[130490]: 2026-01-20T14:54:01Z|00484|memory_trim|INFO|Detected inactivity (last active 30002 ms ago): trimming memory
Jan 20 14:54:01 compute-1 nova_compute[225855]: 2026-01-20 14:54:01.253 225859 DEBUG nova.virt.libvirt.driver [None req-469b4f3f-ba11-47de-b608-d3c23b30ada2 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: 7f5cfffe-c1dc-4b00-844e-0fb35b340f44] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719
Jan 20 14:54:01 compute-1 nova_compute[225855]: 2026-01-20 14:54:01.254 225859 INFO nova.virt.libvirt.driver [None req-469b4f3f-ba11-47de-b608-d3c23b30ada2 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: 7f5cfffe-c1dc-4b00-844e-0fb35b340f44] Creating image(s)
Jan 20 14:54:01 compute-1 nova_compute[225855]: 2026-01-20 14:54:01.322 225859 DEBUG nova.storage.rbd_utils [None req-469b4f3f-ba11-47de-b608-d3c23b30ada2 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] creating snapshot(nova-resize) on rbd image(7f5cfffe-c1dc-4b00-844e-0fb35b340f44_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 20 14:54:01 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e291 e291: 3 total, 3 up, 3 in
Jan 20 14:54:01 compute-1 nova_compute[225855]: 2026-01-20 14:54:01.748 225859 DEBUG nova.objects.instance [None req-469b4f3f-ba11-47de-b608-d3c23b30ada2 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 7f5cfffe-c1dc-4b00-844e-0fb35b340f44 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:54:01 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:54:01 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:54:01 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:54:01.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:54:01 compute-1 nova_compute[225855]: 2026-01-20 14:54:01.881 225859 DEBUG nova.virt.libvirt.driver [None req-469b4f3f-ba11-47de-b608-d3c23b30ada2 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: 7f5cfffe-c1dc-4b00-844e-0fb35b340f44] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Jan 20 14:54:01 compute-1 nova_compute[225855]: 2026-01-20 14:54:01.882 225859 DEBUG nova.virt.libvirt.driver [None req-469b4f3f-ba11-47de-b608-d3c23b30ada2 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: 7f5cfffe-c1dc-4b00-844e-0fb35b340f44] Ensure instance console log exists: /var/lib/nova/instances/7f5cfffe-c1dc-4b00-844e-0fb35b340f44/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 20 14:54:01 compute-1 nova_compute[225855]: 2026-01-20 14:54:01.882 225859 DEBUG oslo_concurrency.lockutils [None req-469b4f3f-ba11-47de-b608-d3c23b30ada2 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:54:01 compute-1 nova_compute[225855]: 2026-01-20 14:54:01.882 225859 DEBUG oslo_concurrency.lockutils [None req-469b4f3f-ba11-47de-b608-d3c23b30ada2 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:54:01 compute-1 nova_compute[225855]: 2026-01-20 14:54:01.883 225859 DEBUG oslo_concurrency.lockutils [None req-469b4f3f-ba11-47de-b608-d3c23b30ada2 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:54:01 compute-1 nova_compute[225855]: 2026-01-20 14:54:01.887 225859 DEBUG nova.virt.libvirt.driver [None req-469b4f3f-ba11-47de-b608-d3c23b30ada2 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: 7f5cfffe-c1dc-4b00-844e-0fb35b340f44] Start _get_guest_xml network_info=[{"id": "87b0cab5-af2f-4440-8f58-840860a23f68", "address": "fa:16:3e:2b:79:1b", "network": {"id": "41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1445030024-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.184", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestOtherB-1445030024-network", "vif_mac": "fa:16:3e:2b:79:1b"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b3b1b7f5b4f84b5abbc401eb577c85c0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap87b0cab5-af", "ovs_interfaceid": "87b0cab5-af2f-4440-8f58-840860a23f68", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'device_type': 'disk', 'encryption_options': None, 'size': 0, 'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'guest_format': None, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'image_id': 'a32b3e07-16d8-46fd-9a7b-c242c432fcf9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 20 14:54:01 compute-1 nova_compute[225855]: 2026-01-20 14:54:01.890 225859 WARNING nova.virt.libvirt.driver [None req-469b4f3f-ba11-47de-b608-d3c23b30ada2 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 14:54:01 compute-1 nova_compute[225855]: 2026-01-20 14:54:01.895 225859 DEBUG nova.virt.libvirt.host [None req-469b4f3f-ba11-47de-b608-d3c23b30ada2 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 20 14:54:01 compute-1 nova_compute[225855]: 2026-01-20 14:54:01.896 225859 DEBUG nova.virt.libvirt.host [None req-469b4f3f-ba11-47de-b608-d3c23b30ada2 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 20 14:54:01 compute-1 nova_compute[225855]: 2026-01-20 14:54:01.900 225859 DEBUG nova.virt.libvirt.host [None req-469b4f3f-ba11-47de-b608-d3c23b30ada2 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 20 14:54:01 compute-1 nova_compute[225855]: 2026-01-20 14:54:01.901 225859 DEBUG nova.virt.libvirt.host [None req-469b4f3f-ba11-47de-b608-d3c23b30ada2 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 20 14:54:01 compute-1 nova_compute[225855]: 2026-01-20 14:54:01.903 225859 DEBUG nova.virt.libvirt.driver [None req-469b4f3f-ba11-47de-b608-d3c23b30ada2 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 20 14:54:01 compute-1 nova_compute[225855]: 2026-01-20 14:54:01.904 225859 DEBUG nova.virt.hardware [None req-469b4f3f-ba11-47de-b608-d3c23b30ada2 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-20T14:21:55Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='30c26a27-d918-46d8-a512-4ef3b4ce5955',id=2,is_public=True,memory_mb=192,name='m1.micro',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 20 14:54:01 compute-1 nova_compute[225855]: 2026-01-20 14:54:01.904 225859 DEBUG nova.virt.hardware [None req-469b4f3f-ba11-47de-b608-d3c23b30ada2 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 20 14:54:01 compute-1 nova_compute[225855]: 2026-01-20 14:54:01.905 225859 DEBUG nova.virt.hardware [None req-469b4f3f-ba11-47de-b608-d3c23b30ada2 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 20 14:54:01 compute-1 nova_compute[225855]: 2026-01-20 14:54:01.905 225859 DEBUG nova.virt.hardware [None req-469b4f3f-ba11-47de-b608-d3c23b30ada2 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 20 14:54:01 compute-1 nova_compute[225855]: 2026-01-20 14:54:01.905 225859 DEBUG nova.virt.hardware [None req-469b4f3f-ba11-47de-b608-d3c23b30ada2 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 20 14:54:01 compute-1 nova_compute[225855]: 2026-01-20 14:54:01.905 225859 DEBUG nova.virt.hardware [None req-469b4f3f-ba11-47de-b608-d3c23b30ada2 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 20 14:54:01 compute-1 nova_compute[225855]: 2026-01-20 14:54:01.905 225859 DEBUG nova.virt.hardware [None req-469b4f3f-ba11-47de-b608-d3c23b30ada2 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 20 14:54:01 compute-1 nova_compute[225855]: 2026-01-20 14:54:01.906 225859 DEBUG nova.virt.hardware [None req-469b4f3f-ba11-47de-b608-d3c23b30ada2 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 20 14:54:01 compute-1 nova_compute[225855]: 2026-01-20 14:54:01.906 225859 DEBUG nova.virt.hardware [None req-469b4f3f-ba11-47de-b608-d3c23b30ada2 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 20 14:54:01 compute-1 nova_compute[225855]: 2026-01-20 14:54:01.907 225859 DEBUG nova.virt.hardware [None req-469b4f3f-ba11-47de-b608-d3c23b30ada2 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 20 14:54:01 compute-1 nova_compute[225855]: 2026-01-20 14:54:01.907 225859 DEBUG nova.virt.hardware [None req-469b4f3f-ba11-47de-b608-d3c23b30ada2 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 20 14:54:01 compute-1 nova_compute[225855]: 2026-01-20 14:54:01.908 225859 DEBUG nova.objects.instance [None req-469b4f3f-ba11-47de-b608-d3c23b30ada2 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 7f5cfffe-c1dc-4b00-844e-0fb35b340f44 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:54:01 compute-1 nova_compute[225855]: 2026-01-20 14:54:01.924 225859 DEBUG oslo_concurrency.processutils [None req-469b4f3f-ba11-47de-b608-d3c23b30ada2 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:54:02 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:54:02 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3505285473' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:54:02 compute-1 nova_compute[225855]: 2026-01-20 14:54:02.094 225859 DEBUG nova.network.neutron [req-1e42ad7b-2695-4540-a7a9-7a729e69d132 req-15cc6379-4caf-4b2a-9ff1-99f169566279 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: baada610-f563-4c97-89a9-56eba792c352] Updated VIF entry in instance network info cache for port a3156414-5a96-462d-974e-a57c9cd8e9c8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 14:54:02 compute-1 nova_compute[225855]: 2026-01-20 14:54:02.094 225859 DEBUG nova.network.neutron [req-1e42ad7b-2695-4540-a7a9-7a729e69d132 req-15cc6379-4caf-4b2a-9ff1-99f169566279 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: baada610-f563-4c97-89a9-56eba792c352] Updating instance_info_cache with network_info: [{"id": "a3156414-5a96-462d-974e-a57c9cd8e9c8", "address": "fa:16:3e:9e:93:82", "network": {"id": "79184781-1f23-4584-87de-08e262242488", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-165460946-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a29915e0dd2403fbd7b7e847696b00a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3156414-5a", "ovs_interfaceid": "a3156414-5a96-462d-974e-a57c9cd8e9c8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:54:02 compute-1 nova_compute[225855]: 2026-01-20 14:54:02.119 225859 DEBUG oslo_concurrency.lockutils [req-1e42ad7b-2695-4540-a7a9-7a729e69d132 req-15cc6379-4caf-4b2a-9ff1-99f169566279 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-baada610-f563-4c97-89a9-56eba792c352" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:54:02 compute-1 nova_compute[225855]: 2026-01-20 14:54:02.120 225859 DEBUG nova.compute.manager [req-1e42ad7b-2695-4540-a7a9-7a729e69d132 req-15cc6379-4caf-4b2a-9ff1-99f169566279 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: baada610-f563-4c97-89a9-56eba792c352] Received event network-changed-a3156414-5a96-462d-974e-a57c9cd8e9c8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:54:02 compute-1 nova_compute[225855]: 2026-01-20 14:54:02.120 225859 DEBUG nova.compute.manager [req-1e42ad7b-2695-4540-a7a9-7a729e69d132 req-15cc6379-4caf-4b2a-9ff1-99f169566279 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: baada610-f563-4c97-89a9-56eba792c352] Refreshing instance network info cache due to event network-changed-a3156414-5a96-462d-974e-a57c9cd8e9c8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 14:54:02 compute-1 nova_compute[225855]: 2026-01-20 14:54:02.121 225859 DEBUG oslo_concurrency.lockutils [req-1e42ad7b-2695-4540-a7a9-7a729e69d132 req-15cc6379-4caf-4b2a-9ff1-99f169566279 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-baada610-f563-4c97-89a9-56eba792c352" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:54:02 compute-1 nova_compute[225855]: 2026-01-20 14:54:02.121 225859 DEBUG oslo_concurrency.lockutils [req-1e42ad7b-2695-4540-a7a9-7a729e69d132 req-15cc6379-4caf-4b2a-9ff1-99f169566279 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-baada610-f563-4c97-89a9-56eba792c352" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:54:02 compute-1 nova_compute[225855]: 2026-01-20 14:54:02.121 225859 DEBUG nova.network.neutron [req-1e42ad7b-2695-4540-a7a9-7a729e69d132 req-15cc6379-4caf-4b2a-9ff1-99f169566279 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: baada610-f563-4c97-89a9-56eba792c352] Refreshing network info cache for port a3156414-5a96-462d-974e-a57c9cd8e9c8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 14:54:02 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:54:02 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2647610533' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:54:02 compute-1 nova_compute[225855]: 2026-01-20 14:54:02.363 225859 DEBUG oslo_concurrency.processutils [None req-469b4f3f-ba11-47de-b608-d3c23b30ada2 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:54:02 compute-1 nova_compute[225855]: 2026-01-20 14:54:02.403 225859 DEBUG oslo_concurrency.processutils [None req-469b4f3f-ba11-47de-b608-d3c23b30ada2 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:54:02 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e291 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:54:02 compute-1 ceph-mon[81775]: osdmap e291: 3 total, 3 up, 3 in
Jan 20 14:54:02 compute-1 ceph-mon[81775]: pgmap v2029: 321 pgs: 321 active+clean; 888 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 6.1 MiB/s rd, 1.2 MiB/s wr, 300 op/s
Jan 20 14:54:02 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/3505285473' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:54:02 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/2647610533' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:54:02 compute-1 nova_compute[225855]: 2026-01-20 14:54:02.739 225859 DEBUG nova.network.neutron [req-3cf1c8fb-7ccd-4908-8ac4-22779402973f req-537fae91-771c-49c0-89be-305c6a3f71f1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 7f5cfffe-c1dc-4b00-844e-0fb35b340f44] Updated VIF entry in instance network info cache for port 87b0cab5-af2f-4440-8f58-840860a23f68. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 14:54:02 compute-1 nova_compute[225855]: 2026-01-20 14:54:02.740 225859 DEBUG nova.network.neutron [req-3cf1c8fb-7ccd-4908-8ac4-22779402973f req-537fae91-771c-49c0-89be-305c6a3f71f1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 7f5cfffe-c1dc-4b00-844e-0fb35b340f44] Updating instance_info_cache with network_info: [{"id": "87b0cab5-af2f-4440-8f58-840860a23f68", "address": "fa:16:3e:2b:79:1b", "network": {"id": "41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1445030024-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.184", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b3b1b7f5b4f84b5abbc401eb577c85c0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap87b0cab5-af", "ovs_interfaceid": "87b0cab5-af2f-4440-8f58-840860a23f68", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:54:02 compute-1 nova_compute[225855]: 2026-01-20 14:54:02.757 225859 DEBUG oslo_concurrency.lockutils [req-3cf1c8fb-7ccd-4908-8ac4-22779402973f req-537fae91-771c-49c0-89be-305c6a3f71f1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-7f5cfffe-c1dc-4b00-844e-0fb35b340f44" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:54:02 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:54:02 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1618823478' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:54:02 compute-1 nova_compute[225855]: 2026-01-20 14:54:02.929 225859 DEBUG oslo_concurrency.processutils [None req-469b4f3f-ba11-47de-b608-d3c23b30ada2 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.526s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:54:02 compute-1 nova_compute[225855]: 2026-01-20 14:54:02.931 225859 DEBUG nova.virt.libvirt.vif [None req-469b4f3f-ba11-47de-b608-d3c23b30ada2 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T14:51:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-1654627482',display_name='tempest-ServerActionsTestOtherB-server-1654627482',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-1654627482',id=118,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBO99uJ9+FwgjxRb/9u+f3Mj9/VKSDM+OKd66Ygsg8lEO+7bGpDEQrC5BIaSV+Na5YF+3DqUwLNmAYSN9IkTSGbRPw5y8813A+KsiNHebrpnZ7oReyT+5/zNQYafCHVAfGA==',key_name='tempest-keypair-302882914',keypairs=<?>,launch_index=0,launched_at=2026-01-20T14:52:05Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='b3b1b7f5b4f84b5abbc401eb577c85c0',ramdisk_id='',reservation_id='r-2ulk0sfq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='stopped',owner_project_name='tempest-ServerActionsTestOtherB-1136521362',owner_user_name='tempest-ServerActionsTestOtherB-1136521362-project-member'},tags=<?>,task_state='resize_finish',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T14:53:57Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='215db37373dc4ae5a75cbd6866f471da',uuid=7f5cfffe-c1dc-4b00-844e-0fb35b340f44,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "87b0cab5-af2f-4440-8f58-840860a23f68", "address": "fa:16:3e:2b:79:1b", "network": {"id": "41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce", "bridge": "br-int", "label": 
"tempest-ServerActionsTestOtherB-1445030024-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.184", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestOtherB-1445030024-network", "vif_mac": "fa:16:3e:2b:79:1b"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b3b1b7f5b4f84b5abbc401eb577c85c0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap87b0cab5-af", "ovs_interfaceid": "87b0cab5-af2f-4440-8f58-840860a23f68", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 20 14:54:02 compute-1 nova_compute[225855]: 2026-01-20 14:54:02.931 225859 DEBUG nova.network.os_vif_util [None req-469b4f3f-ba11-47de-b608-d3c23b30ada2 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Converting VIF {"id": "87b0cab5-af2f-4440-8f58-840860a23f68", "address": "fa:16:3e:2b:79:1b", "network": {"id": "41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1445030024-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.184", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestOtherB-1445030024-network", "vif_mac": "fa:16:3e:2b:79:1b"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b3b1b7f5b4f84b5abbc401eb577c85c0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap87b0cab5-af", "ovs_interfaceid": "87b0cab5-af2f-4440-8f58-840860a23f68", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:54:02 compute-1 nova_compute[225855]: 2026-01-20 14:54:02.932 225859 DEBUG nova.network.os_vif_util [None req-469b4f3f-ba11-47de-b608-d3c23b30ada2 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2b:79:1b,bridge_name='br-int',has_traffic_filtering=True,id=87b0cab5-af2f-4440-8f58-840860a23f68,network=Network(41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap87b0cab5-af') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:54:02 compute-1 nova_compute[225855]: 2026-01-20 14:54:02.936 225859 DEBUG nova.virt.libvirt.driver [None req-469b4f3f-ba11-47de-b608-d3c23b30ada2 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: 7f5cfffe-c1dc-4b00-844e-0fb35b340f44] End _get_guest_xml xml=<domain type="kvm">
Jan 20 14:54:02 compute-1 nova_compute[225855]:   <uuid>7f5cfffe-c1dc-4b00-844e-0fb35b340f44</uuid>
Jan 20 14:54:02 compute-1 nova_compute[225855]:   <name>instance-00000076</name>
Jan 20 14:54:02 compute-1 nova_compute[225855]:   <memory>196608</memory>
Jan 20 14:54:02 compute-1 nova_compute[225855]:   <vcpu>1</vcpu>
Jan 20 14:54:02 compute-1 nova_compute[225855]:   <metadata>
Jan 20 14:54:02 compute-1 nova_compute[225855]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 14:54:02 compute-1 nova_compute[225855]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 14:54:02 compute-1 nova_compute[225855]:       <nova:name>tempest-ServerActionsTestOtherB-server-1654627482</nova:name>
Jan 20 14:54:02 compute-1 nova_compute[225855]:       <nova:creationTime>2026-01-20 14:54:01</nova:creationTime>
Jan 20 14:54:02 compute-1 nova_compute[225855]:       <nova:flavor name="m1.micro">
Jan 20 14:54:02 compute-1 nova_compute[225855]:         <nova:memory>192</nova:memory>
Jan 20 14:54:02 compute-1 nova_compute[225855]:         <nova:disk>1</nova:disk>
Jan 20 14:54:02 compute-1 nova_compute[225855]:         <nova:swap>0</nova:swap>
Jan 20 14:54:02 compute-1 nova_compute[225855]:         <nova:ephemeral>0</nova:ephemeral>
Jan 20 14:54:02 compute-1 nova_compute[225855]:         <nova:vcpus>1</nova:vcpus>
Jan 20 14:54:02 compute-1 nova_compute[225855]:       </nova:flavor>
Jan 20 14:54:02 compute-1 nova_compute[225855]:       <nova:owner>
Jan 20 14:54:02 compute-1 nova_compute[225855]:         <nova:user uuid="215db37373dc4ae5a75cbd6866f471da">tempest-ServerActionsTestOtherB-1136521362-project-member</nova:user>
Jan 20 14:54:02 compute-1 nova_compute[225855]:         <nova:project uuid="b3b1b7f5b4f84b5abbc401eb577c85c0">tempest-ServerActionsTestOtherB-1136521362</nova:project>
Jan 20 14:54:02 compute-1 nova_compute[225855]:       </nova:owner>
Jan 20 14:54:02 compute-1 nova_compute[225855]:       <nova:root type="image" uuid="a32b3e07-16d8-46fd-9a7b-c242c432fcf9"/>
Jan 20 14:54:02 compute-1 nova_compute[225855]:       <nova:ports>
Jan 20 14:54:02 compute-1 nova_compute[225855]:         <nova:port uuid="87b0cab5-af2f-4440-8f58-840860a23f68">
Jan 20 14:54:02 compute-1 nova_compute[225855]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Jan 20 14:54:02 compute-1 nova_compute[225855]:         </nova:port>
Jan 20 14:54:02 compute-1 nova_compute[225855]:       </nova:ports>
Jan 20 14:54:02 compute-1 nova_compute[225855]:     </nova:instance>
Jan 20 14:54:02 compute-1 nova_compute[225855]:   </metadata>
Jan 20 14:54:02 compute-1 nova_compute[225855]:   <sysinfo type="smbios">
Jan 20 14:54:02 compute-1 nova_compute[225855]:     <system>
Jan 20 14:54:02 compute-1 nova_compute[225855]:       <entry name="manufacturer">RDO</entry>
Jan 20 14:54:02 compute-1 nova_compute[225855]:       <entry name="product">OpenStack Compute</entry>
Jan 20 14:54:02 compute-1 nova_compute[225855]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 14:54:02 compute-1 nova_compute[225855]:       <entry name="serial">7f5cfffe-c1dc-4b00-844e-0fb35b340f44</entry>
Jan 20 14:54:02 compute-1 nova_compute[225855]:       <entry name="uuid">7f5cfffe-c1dc-4b00-844e-0fb35b340f44</entry>
Jan 20 14:54:02 compute-1 nova_compute[225855]:       <entry name="family">Virtual Machine</entry>
Jan 20 14:54:02 compute-1 nova_compute[225855]:     </system>
Jan 20 14:54:02 compute-1 nova_compute[225855]:   </sysinfo>
Jan 20 14:54:02 compute-1 nova_compute[225855]:   <os>
Jan 20 14:54:02 compute-1 nova_compute[225855]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 20 14:54:02 compute-1 nova_compute[225855]:     <boot dev="hd"/>
Jan 20 14:54:02 compute-1 nova_compute[225855]:     <smbios mode="sysinfo"/>
Jan 20 14:54:02 compute-1 nova_compute[225855]:   </os>
Jan 20 14:54:02 compute-1 nova_compute[225855]:   <features>
Jan 20 14:54:02 compute-1 nova_compute[225855]:     <acpi/>
Jan 20 14:54:02 compute-1 nova_compute[225855]:     <apic/>
Jan 20 14:54:02 compute-1 nova_compute[225855]:     <vmcoreinfo/>
Jan 20 14:54:02 compute-1 nova_compute[225855]:   </features>
Jan 20 14:54:02 compute-1 nova_compute[225855]:   <clock offset="utc">
Jan 20 14:54:02 compute-1 nova_compute[225855]:     <timer name="pit" tickpolicy="delay"/>
Jan 20 14:54:02 compute-1 nova_compute[225855]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 20 14:54:02 compute-1 nova_compute[225855]:     <timer name="hpet" present="no"/>
Jan 20 14:54:02 compute-1 nova_compute[225855]:   </clock>
Jan 20 14:54:02 compute-1 nova_compute[225855]:   <cpu mode="custom" match="exact">
Jan 20 14:54:02 compute-1 nova_compute[225855]:     <model>Nehalem</model>
Jan 20 14:54:02 compute-1 nova_compute[225855]:     <topology sockets="1" cores="1" threads="1"/>
Jan 20 14:54:02 compute-1 nova_compute[225855]:   </cpu>
Jan 20 14:54:02 compute-1 nova_compute[225855]:   <devices>
Jan 20 14:54:02 compute-1 nova_compute[225855]:     <disk type="network" device="disk">
Jan 20 14:54:02 compute-1 nova_compute[225855]:       <driver type="raw" cache="none"/>
Jan 20 14:54:02 compute-1 nova_compute[225855]:       <source protocol="rbd" name="vms/7f5cfffe-c1dc-4b00-844e-0fb35b340f44_disk">
Jan 20 14:54:02 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:54:02 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:54:02 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:54:02 compute-1 nova_compute[225855]:       </source>
Jan 20 14:54:02 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 14:54:02 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:54:02 compute-1 nova_compute[225855]:       </auth>
Jan 20 14:54:02 compute-1 nova_compute[225855]:       <target dev="vda" bus="virtio"/>
Jan 20 14:54:02 compute-1 nova_compute[225855]:     </disk>
Jan 20 14:54:02 compute-1 nova_compute[225855]:     <disk type="network" device="cdrom">
Jan 20 14:54:02 compute-1 nova_compute[225855]:       <driver type="raw" cache="none"/>
Jan 20 14:54:02 compute-1 nova_compute[225855]:       <source protocol="rbd" name="vms/7f5cfffe-c1dc-4b00-844e-0fb35b340f44_disk.config">
Jan 20 14:54:02 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:54:02 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:54:02 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:54:02 compute-1 nova_compute[225855]:       </source>
Jan 20 14:54:02 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 14:54:02 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:54:02 compute-1 nova_compute[225855]:       </auth>
Jan 20 14:54:02 compute-1 nova_compute[225855]:       <target dev="sda" bus="sata"/>
Jan 20 14:54:02 compute-1 nova_compute[225855]:     </disk>
Jan 20 14:54:02 compute-1 nova_compute[225855]:     <interface type="ethernet">
Jan 20 14:54:02 compute-1 nova_compute[225855]:       <mac address="fa:16:3e:2b:79:1b"/>
Jan 20 14:54:02 compute-1 nova_compute[225855]:       <model type="virtio"/>
Jan 20 14:54:02 compute-1 nova_compute[225855]:       <driver name="vhost" rx_queue_size="512"/>
Jan 20 14:54:02 compute-1 nova_compute[225855]:       <mtu size="1442"/>
Jan 20 14:54:02 compute-1 nova_compute[225855]:       <target dev="tap87b0cab5-af"/>
Jan 20 14:54:02 compute-1 nova_compute[225855]:     </interface>
Jan 20 14:54:02 compute-1 nova_compute[225855]:     <serial type="pty">
Jan 20 14:54:02 compute-1 nova_compute[225855]:       <log file="/var/lib/nova/instances/7f5cfffe-c1dc-4b00-844e-0fb35b340f44/console.log" append="off"/>
Jan 20 14:54:02 compute-1 nova_compute[225855]:     </serial>
Jan 20 14:54:02 compute-1 nova_compute[225855]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 20 14:54:02 compute-1 nova_compute[225855]:     <video>
Jan 20 14:54:02 compute-1 nova_compute[225855]:       <model type="virtio"/>
Jan 20 14:54:02 compute-1 nova_compute[225855]:     </video>
Jan 20 14:54:02 compute-1 nova_compute[225855]:     <input type="tablet" bus="usb"/>
Jan 20 14:54:02 compute-1 nova_compute[225855]:     <rng model="virtio">
Jan 20 14:54:02 compute-1 nova_compute[225855]:       <backend model="random">/dev/urandom</backend>
Jan 20 14:54:02 compute-1 nova_compute[225855]:     </rng>
Jan 20 14:54:02 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root"/>
Jan 20 14:54:02 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:54:02 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:54:02 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:54:02 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:54:02 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:54:02 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:54:02 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:54:02 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:54:02 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:54:02 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:54:02 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:54:02 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:54:02 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:54:02 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:54:02 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:54:02 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:54:02 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:54:02 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:54:02 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:54:02 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:54:02 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:54:02 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:54:02 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:54:02 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:54:02 compute-1 nova_compute[225855]:     <controller type="usb" index="0"/>
Jan 20 14:54:02 compute-1 nova_compute[225855]:     <memballoon model="virtio">
Jan 20 14:54:02 compute-1 nova_compute[225855]:       <stats period="10"/>
Jan 20 14:54:02 compute-1 nova_compute[225855]:     </memballoon>
Jan 20 14:54:02 compute-1 nova_compute[225855]:   </devices>
Jan 20 14:54:02 compute-1 nova_compute[225855]: </domain>
Jan 20 14:54:02 compute-1 nova_compute[225855]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 20 14:54:02 compute-1 nova_compute[225855]: 2026-01-20 14:54:02.943 225859 DEBUG nova.virt.libvirt.vif [None req-469b4f3f-ba11-47de-b608-d3c23b30ada2 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T14:51:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-1654627482',display_name='tempest-ServerActionsTestOtherB-server-1654627482',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-1654627482',id=118,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBO99uJ9+FwgjxRb/9u+f3Mj9/VKSDM+OKd66Ygsg8lEO+7bGpDEQrC5BIaSV+Na5YF+3DqUwLNmAYSN9IkTSGbRPw5y8813A+KsiNHebrpnZ7oReyT+5/zNQYafCHVAfGA==',key_name='tempest-keypair-302882914',keypairs=<?>,launch_index=0,launched_at=2026-01-20T14:52:05Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='b3b1b7f5b4f84b5abbc401eb577c85c0',ramdisk_id='',reservation_id='r-2ulk0sfq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='stopped',owner_project_name='tempest-ServerActionsTestOtherB-1136521362',owner_user_name='tempest-ServerActionsTestOtherB-1136521362-project-member'},tags=<?>,task_state='resize_finish',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T14:53:57Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='215db37373dc4ae5a75cbd6866f471da',uuid=7f5cfffe-c1dc-4b00-844e-0fb35b340f44,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "87b0cab5-af2f-4440-8f58-840860a23f68", "address": "fa:16:3e:2b:79:1b", "network": {"id": "41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce", "bridge": "br-int", "label": 
"tempest-ServerActionsTestOtherB-1445030024-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.184", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestOtherB-1445030024-network", "vif_mac": "fa:16:3e:2b:79:1b"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b3b1b7f5b4f84b5abbc401eb577c85c0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap87b0cab5-af", "ovs_interfaceid": "87b0cab5-af2f-4440-8f58-840860a23f68", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 20 14:54:02 compute-1 nova_compute[225855]: 2026-01-20 14:54:02.943 225859 DEBUG nova.network.os_vif_util [None req-469b4f3f-ba11-47de-b608-d3c23b30ada2 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Converting VIF {"id": "87b0cab5-af2f-4440-8f58-840860a23f68", "address": "fa:16:3e:2b:79:1b", "network": {"id": "41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1445030024-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.184", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestOtherB-1445030024-network", "vif_mac": "fa:16:3e:2b:79:1b"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b3b1b7f5b4f84b5abbc401eb577c85c0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap87b0cab5-af", "ovs_interfaceid": "87b0cab5-af2f-4440-8f58-840860a23f68", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:54:02 compute-1 nova_compute[225855]: 2026-01-20 14:54:02.944 225859 DEBUG nova.network.os_vif_util [None req-469b4f3f-ba11-47de-b608-d3c23b30ada2 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2b:79:1b,bridge_name='br-int',has_traffic_filtering=True,id=87b0cab5-af2f-4440-8f58-840860a23f68,network=Network(41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap87b0cab5-af') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:54:02 compute-1 nova_compute[225855]: 2026-01-20 14:54:02.944 225859 DEBUG os_vif [None req-469b4f3f-ba11-47de-b608-d3c23b30ada2 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:2b:79:1b,bridge_name='br-int',has_traffic_filtering=True,id=87b0cab5-af2f-4440-8f58-840860a23f68,network=Network(41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap87b0cab5-af') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 20 14:54:02 compute-1 nova_compute[225855]: 2026-01-20 14:54:02.945 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:54:02 compute-1 nova_compute[225855]: 2026-01-20 14:54:02.946 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:54:02 compute-1 nova_compute[225855]: 2026-01-20 14:54:02.946 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:54:02 compute-1 nova_compute[225855]: 2026-01-20 14:54:02.949 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:54:02 compute-1 nova_compute[225855]: 2026-01-20 14:54:02.949 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap87b0cab5-af, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:54:02 compute-1 nova_compute[225855]: 2026-01-20 14:54:02.949 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap87b0cab5-af, col_values=(('external_ids', {'iface-id': '87b0cab5-af2f-4440-8f58-840860a23f68', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:2b:79:1b', 'vm-uuid': '7f5cfffe-c1dc-4b00-844e-0fb35b340f44'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:54:02 compute-1 nova_compute[225855]: 2026-01-20 14:54:02.951 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:54:02 compute-1 NetworkManager[49104]: <info>  [1768920842.9530] manager: (tap87b0cab5-af): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/204)
Jan 20 14:54:02 compute-1 nova_compute[225855]: 2026-01-20 14:54:02.953 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 14:54:02 compute-1 nova_compute[225855]: 2026-01-20 14:54:02.958 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:54:02 compute-1 nova_compute[225855]: 2026-01-20 14:54:02.959 225859 INFO os_vif [None req-469b4f3f-ba11-47de-b608-d3c23b30ada2 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:2b:79:1b,bridge_name='br-int',has_traffic_filtering=True,id=87b0cab5-af2f-4440-8f58-840860a23f68,network=Network(41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap87b0cab5-af')
Jan 20 14:54:03 compute-1 nova_compute[225855]: 2026-01-20 14:54:03.030 225859 DEBUG nova.virt.libvirt.driver [None req-469b4f3f-ba11-47de-b608-d3c23b30ada2 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:54:03 compute-1 nova_compute[225855]: 2026-01-20 14:54:03.030 225859 DEBUG nova.virt.libvirt.driver [None req-469b4f3f-ba11-47de-b608-d3c23b30ada2 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:54:03 compute-1 nova_compute[225855]: 2026-01-20 14:54:03.030 225859 DEBUG nova.virt.libvirt.driver [None req-469b4f3f-ba11-47de-b608-d3c23b30ada2 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] No VIF found with MAC fa:16:3e:2b:79:1b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 20 14:54:03 compute-1 nova_compute[225855]: 2026-01-20 14:54:03.031 225859 INFO nova.virt.libvirt.driver [None req-469b4f3f-ba11-47de-b608-d3c23b30ada2 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: 7f5cfffe-c1dc-4b00-844e-0fb35b340f44] Using config drive
Jan 20 14:54:03 compute-1 nova_compute[225855]: 2026-01-20 14:54:03.059 225859 DEBUG nova.compute.manager [None req-469b4f3f-ba11-47de-b608-d3c23b30ada2 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: 7f5cfffe-c1dc-4b00-844e-0fb35b340f44] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 20 14:54:03 compute-1 nova_compute[225855]: 2026-01-20 14:54:03.060 225859 DEBUG nova.virt.libvirt.driver [None req-469b4f3f-ba11-47de-b608-d3c23b30ada2 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: 7f5cfffe-c1dc-4b00-844e-0fb35b340f44] finish_migration finished successfully. finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11793
Jan 20 14:54:03 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:54:03 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:54:03 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:54:03.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:54:03 compute-1 nova_compute[225855]: 2026-01-20 14:54:03.639 225859 DEBUG nova.network.neutron [req-1e42ad7b-2695-4540-a7a9-7a729e69d132 req-15cc6379-4caf-4b2a-9ff1-99f169566279 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: baada610-f563-4c97-89a9-56eba792c352] Updated VIF entry in instance network info cache for port a3156414-5a96-462d-974e-a57c9cd8e9c8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 14:54:03 compute-1 nova_compute[225855]: 2026-01-20 14:54:03.640 225859 DEBUG nova.network.neutron [req-1e42ad7b-2695-4540-a7a9-7a729e69d132 req-15cc6379-4caf-4b2a-9ff1-99f169566279 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: baada610-f563-4c97-89a9-56eba792c352] Updating instance_info_cache with network_info: [{"id": "a3156414-5a96-462d-974e-a57c9cd8e9c8", "address": "fa:16:3e:9e:93:82", "network": {"id": "79184781-1f23-4584-87de-08e262242488", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-165460946-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a29915e0dd2403fbd7b7e847696b00a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3156414-5a", "ovs_interfaceid": "a3156414-5a96-462d-974e-a57c9cd8e9c8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:54:03 compute-1 nova_compute[225855]: 2026-01-20 14:54:03.657 225859 DEBUG oslo_concurrency.lockutils [req-1e42ad7b-2695-4540-a7a9-7a729e69d132 req-15cc6379-4caf-4b2a-9ff1-99f169566279 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-baada610-f563-4c97-89a9-56eba792c352" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:54:03 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/1618823478' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:54:03 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/3954144284' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:54:03 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:54:03 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:54:03 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:54:03.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:54:04 compute-1 ceph-mon[81775]: pgmap v2030: 321 pgs: 321 active+clean; 881 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 6.0 MiB/s rd, 1.5 MiB/s wr, 262 op/s
Jan 20 14:54:04 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/1756112089' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:54:05 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:54:05 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:54:05 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:54:05.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:54:05 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/3370297134' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:54:05 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:54:05 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:54:05 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:54:05.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:54:06 compute-1 nova_compute[225855]: 2026-01-20 14:54:05.997 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:54:06 compute-1 ceph-mon[81775]: pgmap v2031: 321 pgs: 321 active+clean; 867 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 6.5 MiB/s rd, 2.7 MiB/s wr, 324 op/s
Jan 20 14:54:07 compute-1 systemd[1]: Stopping User Manager for UID 42436...
Jan 20 14:54:07 compute-1 systemd[273362]: Activating special unit Exit the Session...
Jan 20 14:54:07 compute-1 systemd[273362]: Stopped target Main User Target.
Jan 20 14:54:07 compute-1 systemd[273362]: Stopped target Basic System.
Jan 20 14:54:07 compute-1 systemd[273362]: Stopped target Paths.
Jan 20 14:54:07 compute-1 systemd[273362]: Stopped target Sockets.
Jan 20 14:54:07 compute-1 systemd[273362]: Stopped target Timers.
Jan 20 14:54:07 compute-1 systemd[273362]: Stopped Mark boot as successful after the user session has run 2 minutes.
Jan 20 14:54:07 compute-1 systemd[273362]: Stopped Daily Cleanup of User's Temporary Directories.
Jan 20 14:54:07 compute-1 systemd[273362]: Closed D-Bus User Message Bus Socket.
Jan 20 14:54:07 compute-1 systemd[273362]: Stopped Create User's Volatile Files and Directories.
Jan 20 14:54:07 compute-1 systemd[273362]: Removed slice User Application Slice.
Jan 20 14:54:07 compute-1 systemd[273362]: Reached target Shutdown.
Jan 20 14:54:07 compute-1 systemd[273362]: Finished Exit the Session.
Jan 20 14:54:07 compute-1 systemd[273362]: Reached target Exit the Session.
Jan 20 14:54:07 compute-1 systemd[1]: user@42436.service: Deactivated successfully.
Jan 20 14:54:07 compute-1 systemd[1]: Stopped User Manager for UID 42436.
Jan 20 14:54:07 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:54:07 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:54:07 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:54:07.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:54:07 compute-1 systemd[1]: Stopping User Runtime Directory /run/user/42436...
Jan 20 14:54:07 compute-1 systemd[1]: run-user-42436.mount: Deactivated successfully.
Jan 20 14:54:07 compute-1 systemd[1]: user-runtime-dir@42436.service: Deactivated successfully.
Jan 20 14:54:07 compute-1 systemd[1]: Stopped User Runtime Directory /run/user/42436.
Jan 20 14:54:07 compute-1 systemd[1]: Removed slice User Slice of UID 42436.
Jan 20 14:54:07 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e291 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:54:07 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e292 e292: 3 total, 3 up, 3 in
Jan 20 14:54:07 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:54:07 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:54:07 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:54:07.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:54:07 compute-1 nova_compute[225855]: 2026-01-20 14:54:07.952 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:54:08 compute-1 sudo[273690]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:54:08 compute-1 sudo[273690]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:54:08 compute-1 sudo[273690]: pam_unix(sudo:session): session closed for user root
Jan 20 14:54:08 compute-1 sudo[273715]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:54:08 compute-1 sudo[273715]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:54:08 compute-1 sudo[273715]: pam_unix(sudo:session): session closed for user root
Jan 20 14:54:08 compute-1 sudo[273740]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:54:08 compute-1 sudo[273740]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:54:08 compute-1 sudo[273740]: pam_unix(sudo:session): session closed for user root
Jan 20 14:54:08 compute-1 sudo[273765]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 20 14:54:08 compute-1 sudo[273765]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:54:08 compute-1 ceph-mon[81775]: osdmap e292: 3 total, 3 up, 3 in
Jan 20 14:54:08 compute-1 ceph-mon[81775]: pgmap v2033: 321 pgs: 321 active+clean; 867 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 6.3 MiB/s rd, 2.7 MiB/s wr, 313 op/s
Jan 20 14:54:08 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/685966402' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:54:09 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:54:09 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:54:09 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:54:09.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:54:09 compute-1 sudo[273765]: pam_unix(sudo:session): session closed for user root
Jan 20 14:54:09 compute-1 ceph-mon[81775]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #91. Immutable memtables: 0.
Jan 20 14:54:09 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:54:09.740662) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 20 14:54:09 compute-1 ceph-mon[81775]: rocksdb: [db/flush_job.cc:856] [default] [JOB 55] Flushing memtable with next log file: 91
Jan 20 14:54:09 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768920849740772, "job": 55, "event": "flush_started", "num_memtables": 1, "num_entries": 2156, "num_deletes": 265, "total_data_size": 4584616, "memory_usage": 4651520, "flush_reason": "Manual Compaction"}
Jan 20 14:54:09 compute-1 ceph-mon[81775]: rocksdb: [db/flush_job.cc:885] [default] [JOB 55] Level-0 flush table #92: started
Jan 20 14:54:09 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768920849773221, "cf_name": "default", "job": 55, "event": "table_file_creation", "file_number": 92, "file_size": 2997055, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 46914, "largest_seqno": 49064, "table_properties": {"data_size": 2987998, "index_size": 5615, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 19297, "raw_average_key_size": 20, "raw_value_size": 2969678, "raw_average_value_size": 3203, "num_data_blocks": 241, "num_entries": 927, "num_filter_entries": 927, "num_deletions": 265, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768920710, "oldest_key_time": 1768920710, "file_creation_time": 1768920849, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1539d774-8a6f-4e48-b253-137c44586344", "db_session_id": "LFF7G2OZDOU7TKQ8MKAH", "orig_file_number": 92, "seqno_to_time_mapping": "N/A"}}
Jan 20 14:54:09 compute-1 ceph-mon[81775]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 55] Flush lasted 32621 microseconds, and 12490 cpu microseconds.
Jan 20 14:54:09 compute-1 ceph-mon[81775]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 14:54:09 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:54:09.773284) [db/flush_job.cc:967] [default] [JOB 55] Level-0 flush table #92: 2997055 bytes OK
Jan 20 14:54:09 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:54:09.773309) [db/memtable_list.cc:519] [default] Level-0 commit table #92 started
Jan 20 14:54:09 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:54:09.775399) [db/memtable_list.cc:722] [default] Level-0 commit table #92: memtable #1 done
Jan 20 14:54:09 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:54:09.775420) EVENT_LOG_v1 {"time_micros": 1768920849775413, "job": 55, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 20 14:54:09 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:54:09.775446) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 20 14:54:09 compute-1 ceph-mon[81775]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 55] Try to delete WAL files size 4574827, prev total WAL file size 4574827, number of live WAL files 2.
Jan 20 14:54:09 compute-1 ceph-mon[81775]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000088.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 14:54:09 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:54:09.777333) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031353133' seq:72057594037927935, type:22 .. '6C6F676D0031373635' seq:0, type:0; will stop at (end)
Jan 20 14:54:09 compute-1 ceph-mon[81775]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 56] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 20 14:54:09 compute-1 ceph-mon[81775]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 55 Base level 0, inputs: [92(2926KB)], [90(10MB)]
Jan 20 14:54:09 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768920849777389, "job": 56, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [92], "files_L6": [90], "score": -1, "input_data_size": 13792193, "oldest_snapshot_seqno": -1}
Jan 20 14:54:09 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:54:09 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:54:09 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:54:09.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:54:09 compute-1 ceph-mon[81775]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 56] Generated table #93: 7596 keys, 13633097 bytes, temperature: kUnknown
Jan 20 14:54:09 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768920849926788, "cf_name": "default", "job": 56, "event": "table_file_creation", "file_number": 93, "file_size": 13633097, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13579214, "index_size": 33803, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 19013, "raw_key_size": 195234, "raw_average_key_size": 25, "raw_value_size": 13440509, "raw_average_value_size": 1769, "num_data_blocks": 1347, "num_entries": 7596, "num_filter_entries": 7596, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768917474, "oldest_key_time": 0, "file_creation_time": 1768920849, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1539d774-8a6f-4e48-b253-137c44586344", "db_session_id": "LFF7G2OZDOU7TKQ8MKAH", "orig_file_number": 93, "seqno_to_time_mapping": "N/A"}}
Jan 20 14:54:09 compute-1 ceph-mon[81775]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 14:54:09 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:54:09.927192) [db/compaction/compaction_job.cc:1663] [default] [JOB 56] Compacted 1@0 + 1@6 files to L6 => 13633097 bytes
Jan 20 14:54:09 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:54:09.929291) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 92.2 rd, 91.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.9, 10.3 +0.0 blob) out(13.0 +0.0 blob), read-write-amplify(9.2) write-amplify(4.5) OK, records in: 8139, records dropped: 543 output_compression: NoCompression
Jan 20 14:54:09 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:54:09.929314) EVENT_LOG_v1 {"time_micros": 1768920849929302, "job": 56, "event": "compaction_finished", "compaction_time_micros": 149610, "compaction_time_cpu_micros": 50104, "output_level": 6, "num_output_files": 1, "total_output_size": 13633097, "num_input_records": 8139, "num_output_records": 7596, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 20 14:54:09 compute-1 ceph-mon[81775]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000092.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 14:54:09 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768920849930076, "job": 56, "event": "table_file_deletion", "file_number": 92}
Jan 20 14:54:09 compute-1 ceph-mon[81775]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000090.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 14:54:09 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768920849932797, "job": 56, "event": "table_file_deletion", "file_number": 90}
Jan 20 14:54:09 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:54:09.777165) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:54:09 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:54:09.932851) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:54:09 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:54:09.932883) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:54:09 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:54:09.932885) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:54:09 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:54:09.932887) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:54:09 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:54:09.932889) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:54:10 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:54:10 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 14:54:10 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:54:10 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 14:54:10 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 14:54:10 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:54:10 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/912464231' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:54:11 compute-1 nova_compute[225855]: 2026-01-20 14:54:11.000 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:54:11 compute-1 ceph-mon[81775]: pgmap v2034: 321 pgs: 321 active+clean; 867 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 7.2 MiB/s rd, 2.4 MiB/s wr, 302 op/s
Jan 20 14:54:11 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/2126179071' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:54:11 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:54:11 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:54:11 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:54:11.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:54:11 compute-1 sudo[273828]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:54:11 compute-1 sudo[273828]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:54:11 compute-1 sudo[273828]: pam_unix(sudo:session): session closed for user root
Jan 20 14:54:11 compute-1 sudo[273854]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:54:11 compute-1 sudo[273854]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:54:11 compute-1 sudo[273854]: pam_unix(sudo:session): session closed for user root
Jan 20 14:54:11 compute-1 podman[273852]: 2026-01-20 14:54:11.837077532 +0000 UTC m=+0.098053789 container health_status 72be95c12e041eb2cd1a16ad70dfa1391d9f03a5f3180a614ead9459dcfcd46f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 20 14:54:11 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:54:11 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:54:11 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:54:11.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:54:12 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:54:12 compute-1 nova_compute[225855]: 2026-01-20 14:54:12.634 225859 DEBUG nova.objects.instance [None req-1a6d3efd-f2d4-427c-8626-c07b97fcb723 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Lazy-loading 'flavor' on Instance uuid 7f5cfffe-c1dc-4b00-844e-0fb35b340f44 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:54:12 compute-1 nova_compute[225855]: 2026-01-20 14:54:12.660 225859 DEBUG oslo_concurrency.lockutils [None req-1a6d3efd-f2d4-427c-8626-c07b97fcb723 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Acquiring lock "refresh_cache-7f5cfffe-c1dc-4b00-844e-0fb35b340f44" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:54:12 compute-1 nova_compute[225855]: 2026-01-20 14:54:12.660 225859 DEBUG oslo_concurrency.lockutils [None req-1a6d3efd-f2d4-427c-8626-c07b97fcb723 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Acquired lock "refresh_cache-7f5cfffe-c1dc-4b00-844e-0fb35b340f44" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:54:12 compute-1 nova_compute[225855]: 2026-01-20 14:54:12.661 225859 DEBUG nova.network.neutron [None req-1a6d3efd-f2d4-427c-8626-c07b97fcb723 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: 7f5cfffe-c1dc-4b00-844e-0fb35b340f44] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 14:54:12 compute-1 nova_compute[225855]: 2026-01-20 14:54:12.661 225859 DEBUG nova.objects.instance [None req-1a6d3efd-f2d4-427c-8626-c07b97fcb723 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Lazy-loading 'info_cache' on Instance uuid 7f5cfffe-c1dc-4b00-844e-0fb35b340f44 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:54:12 compute-1 ovn_controller[130490]: 2026-01-20T14:54:12Z|00062|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:9e:93:82 10.100.0.3
Jan 20 14:54:12 compute-1 nova_compute[225855]: 2026-01-20 14:54:12.956 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:54:13 compute-1 ceph-mon[81775]: pgmap v2035: 321 pgs: 321 active+clean; 896 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 7.2 MiB/s rd, 3.6 MiB/s wr, 296 op/s
Jan 20 14:54:13 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:54:13 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:54:13 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:54:13.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:54:13 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:54:13 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:54:13 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:54:13.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:54:14 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/2752549181' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 14:54:14 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/2752549181' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 14:54:14 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e293 e293: 3 total, 3 up, 3 in
Jan 20 14:54:14 compute-1 nova_compute[225855]: 2026-01-20 14:54:14.685 225859 DEBUG nova.network.neutron [None req-1a6d3efd-f2d4-427c-8626-c07b97fcb723 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: 7f5cfffe-c1dc-4b00-844e-0fb35b340f44] Updating instance_info_cache with network_info: [{"id": "87b0cab5-af2f-4440-8f58-840860a23f68", "address": "fa:16:3e:2b:79:1b", "network": {"id": "41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1445030024-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.184", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b3b1b7f5b4f84b5abbc401eb577c85c0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap87b0cab5-af", "ovs_interfaceid": "87b0cab5-af2f-4440-8f58-840860a23f68", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:54:14 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e294 e294: 3 total, 3 up, 3 in
Jan 20 14:54:14 compute-1 nova_compute[225855]: 2026-01-20 14:54:14.741 225859 DEBUG oslo_concurrency.lockutils [None req-1a6d3efd-f2d4-427c-8626-c07b97fcb723 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Releasing lock "refresh_cache-7f5cfffe-c1dc-4b00-844e-0fb35b340f44" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:54:14 compute-1 nova_compute[225855]: 2026-01-20 14:54:14.790 225859 INFO nova.virt.libvirt.driver [-] [instance: 7f5cfffe-c1dc-4b00-844e-0fb35b340f44] Instance destroyed successfully.
Jan 20 14:54:14 compute-1 nova_compute[225855]: 2026-01-20 14:54:14.791 225859 DEBUG nova.objects.instance [None req-1a6d3efd-f2d4-427c-8626-c07b97fcb723 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Lazy-loading 'numa_topology' on Instance uuid 7f5cfffe-c1dc-4b00-844e-0fb35b340f44 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:54:14 compute-1 nova_compute[225855]: 2026-01-20 14:54:14.808 225859 DEBUG nova.objects.instance [None req-1a6d3efd-f2d4-427c-8626-c07b97fcb723 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Lazy-loading 'resources' on Instance uuid 7f5cfffe-c1dc-4b00-844e-0fb35b340f44 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:54:14 compute-1 nova_compute[225855]: 2026-01-20 14:54:14.831 225859 DEBUG nova.virt.libvirt.vif [None req-1a6d3efd-f2d4-427c-8626-c07b97fcb723 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T14:51:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-1654627482',display_name='tempest-ServerActionsTestOtherB-server-1654627482',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-1654627482',id=118,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBO99uJ9+FwgjxRb/9u+f3Mj9/VKSDM+OKd66Ygsg8lEO+7bGpDEQrC5BIaSV+Na5YF+3DqUwLNmAYSN9IkTSGbRPw5y8813A+KsiNHebrpnZ7oReyT+5/zNQYafCHVAfGA==',key_name='tempest-keypair-302882914',keypairs=<?>,launch_index=0,launched_at=2026-01-20T14:54:03Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='b3b1b7f5b4f84b5abbc401eb577c85c0',ramdisk_id='',reservation_id='r-2ulk0sfq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherB-1136521362',owner_user_name='tempest-ServerActionsTestOtherB-1136521362-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T14:54:08Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='215db37373dc4ae5a75cbd6866f471da',uuid=7f5cfffe-c1dc-4b00-844e-0fb35b340f44,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "87b0cab5-af2f-4440-8f58-840860a23f68", "address": "fa:16:3e:2b:79:1b", "network": {"id": "41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1445030024-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.184", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b3b1b7f5b4f84b5abbc401eb577c85c0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap87b0cab5-af", "ovs_interfaceid": "87b0cab5-af2f-4440-8f58-840860a23f68", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 20 14:54:14 compute-1 nova_compute[225855]: 2026-01-20 14:54:14.832 225859 DEBUG nova.network.os_vif_util [None req-1a6d3efd-f2d4-427c-8626-c07b97fcb723 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Converting VIF {"id": "87b0cab5-af2f-4440-8f58-840860a23f68", "address": "fa:16:3e:2b:79:1b", "network": {"id": "41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1445030024-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.184", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b3b1b7f5b4f84b5abbc401eb577c85c0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap87b0cab5-af", "ovs_interfaceid": "87b0cab5-af2f-4440-8f58-840860a23f68", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:54:14 compute-1 nova_compute[225855]: 2026-01-20 14:54:14.833 225859 DEBUG nova.network.os_vif_util [None req-1a6d3efd-f2d4-427c-8626-c07b97fcb723 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2b:79:1b,bridge_name='br-int',has_traffic_filtering=True,id=87b0cab5-af2f-4440-8f58-840860a23f68,network=Network(41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap87b0cab5-af') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:54:14 compute-1 nova_compute[225855]: 2026-01-20 14:54:14.833 225859 DEBUG os_vif [None req-1a6d3efd-f2d4-427c-8626-c07b97fcb723 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:2b:79:1b,bridge_name='br-int',has_traffic_filtering=True,id=87b0cab5-af2f-4440-8f58-840860a23f68,network=Network(41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap87b0cab5-af') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 20 14:54:14 compute-1 nova_compute[225855]: 2026-01-20 14:54:14.835 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:54:14 compute-1 nova_compute[225855]: 2026-01-20 14:54:14.836 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap87b0cab5-af, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:54:14 compute-1 nova_compute[225855]: 2026-01-20 14:54:14.837 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:54:14 compute-1 nova_compute[225855]: 2026-01-20 14:54:14.839 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 14:54:14 compute-1 nova_compute[225855]: 2026-01-20 14:54:14.840 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:54:14 compute-1 nova_compute[225855]: 2026-01-20 14:54:14.843 225859 INFO os_vif [None req-1a6d3efd-f2d4-427c-8626-c07b97fcb723 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:2b:79:1b,bridge_name='br-int',has_traffic_filtering=True,id=87b0cab5-af2f-4440-8f58-840860a23f68,network=Network(41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap87b0cab5-af')
Jan 20 14:54:14 compute-1 nova_compute[225855]: 2026-01-20 14:54:14.849 225859 DEBUG nova.virt.libvirt.driver [None req-1a6d3efd-f2d4-427c-8626-c07b97fcb723 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: 7f5cfffe-c1dc-4b00-844e-0fb35b340f44] Start _get_guest_xml network_info=[{"id": "87b0cab5-af2f-4440-8f58-840860a23f68", "address": "fa:16:3e:2b:79:1b", "network": {"id": "41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1445030024-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.184", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b3b1b7f5b4f84b5abbc401eb577c85c0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap87b0cab5-af", "ovs_interfaceid": "87b0cab5-af2f-4440-8f58-840860a23f68", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'device_type': 'disk', 'encryption_options': None, 'size': 0, 'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'guest_format': None, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'image_id': 'a32b3e07-16d8-46fd-9a7b-c242c432fcf9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 20 14:54:14 compute-1 nova_compute[225855]: 2026-01-20 14:54:14.854 225859 WARNING nova.virt.libvirt.driver [None req-1a6d3efd-f2d4-427c-8626-c07b97fcb723 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 14:54:14 compute-1 nova_compute[225855]: 2026-01-20 14:54:14.860 225859 DEBUG nova.virt.libvirt.host [None req-1a6d3efd-f2d4-427c-8626-c07b97fcb723 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 20 14:54:14 compute-1 nova_compute[225855]: 2026-01-20 14:54:14.860 225859 DEBUG nova.virt.libvirt.host [None req-1a6d3efd-f2d4-427c-8626-c07b97fcb723 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 20 14:54:14 compute-1 nova_compute[225855]: 2026-01-20 14:54:14.863 225859 DEBUG nova.virt.libvirt.host [None req-1a6d3efd-f2d4-427c-8626-c07b97fcb723 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 20 14:54:14 compute-1 nova_compute[225855]: 2026-01-20 14:54:14.864 225859 DEBUG nova.virt.libvirt.host [None req-1a6d3efd-f2d4-427c-8626-c07b97fcb723 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 20 14:54:14 compute-1 nova_compute[225855]: 2026-01-20 14:54:14.864 225859 DEBUG nova.virt.libvirt.driver [None req-1a6d3efd-f2d4-427c-8626-c07b97fcb723 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 20 14:54:14 compute-1 nova_compute[225855]: 2026-01-20 14:54:14.865 225859 DEBUG nova.virt.hardware [None req-1a6d3efd-f2d4-427c-8626-c07b97fcb723 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-20T14:21:55Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='30c26a27-d918-46d8-a512-4ef3b4ce5955',id=2,is_public=True,memory_mb=192,name='m1.micro',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 20 14:54:14 compute-1 nova_compute[225855]: 2026-01-20 14:54:14.865 225859 DEBUG nova.virt.hardware [None req-1a6d3efd-f2d4-427c-8626-c07b97fcb723 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 20 14:54:14 compute-1 nova_compute[225855]: 2026-01-20 14:54:14.865 225859 DEBUG nova.virt.hardware [None req-1a6d3efd-f2d4-427c-8626-c07b97fcb723 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 20 14:54:14 compute-1 nova_compute[225855]: 2026-01-20 14:54:14.865 225859 DEBUG nova.virt.hardware [None req-1a6d3efd-f2d4-427c-8626-c07b97fcb723 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 20 14:54:14 compute-1 nova_compute[225855]: 2026-01-20 14:54:14.866 225859 DEBUG nova.virt.hardware [None req-1a6d3efd-f2d4-427c-8626-c07b97fcb723 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 20 14:54:14 compute-1 nova_compute[225855]: 2026-01-20 14:54:14.866 225859 DEBUG nova.virt.hardware [None req-1a6d3efd-f2d4-427c-8626-c07b97fcb723 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 20 14:54:14 compute-1 nova_compute[225855]: 2026-01-20 14:54:14.866 225859 DEBUG nova.virt.hardware [None req-1a6d3efd-f2d4-427c-8626-c07b97fcb723 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 20 14:54:14 compute-1 nova_compute[225855]: 2026-01-20 14:54:14.866 225859 DEBUG nova.virt.hardware [None req-1a6d3efd-f2d4-427c-8626-c07b97fcb723 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 20 14:54:14 compute-1 nova_compute[225855]: 2026-01-20 14:54:14.867 225859 DEBUG nova.virt.hardware [None req-1a6d3efd-f2d4-427c-8626-c07b97fcb723 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 20 14:54:14 compute-1 nova_compute[225855]: 2026-01-20 14:54:14.867 225859 DEBUG nova.virt.hardware [None req-1a6d3efd-f2d4-427c-8626-c07b97fcb723 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 20 14:54:14 compute-1 nova_compute[225855]: 2026-01-20 14:54:14.867 225859 DEBUG nova.virt.hardware [None req-1a6d3efd-f2d4-427c-8626-c07b97fcb723 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 20 14:54:14 compute-1 nova_compute[225855]: 2026-01-20 14:54:14.867 225859 DEBUG nova.objects.instance [None req-1a6d3efd-f2d4-427c-8626-c07b97fcb723 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 7f5cfffe-c1dc-4b00-844e-0fb35b340f44 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:54:14 compute-1 nova_compute[225855]: 2026-01-20 14:54:14.881 225859 DEBUG oslo_concurrency.processutils [None req-1a6d3efd-f2d4-427c-8626-c07b97fcb723 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:54:15 compute-1 ceph-mon[81775]: pgmap v2036: 321 pgs: 321 active+clean; 933 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 8.8 MiB/s rd, 4.7 MiB/s wr, 310 op/s
Jan 20 14:54:15 compute-1 ceph-mon[81775]: osdmap e293: 3 total, 3 up, 3 in
Jan 20 14:54:15 compute-1 ceph-mon[81775]: osdmap e294: 3 total, 3 up, 3 in
Jan 20 14:54:15 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:54:15 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:54:15 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:54:15.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:54:15 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:54:15 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/474916819' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:54:15 compute-1 nova_compute[225855]: 2026-01-20 14:54:15.352 225859 DEBUG oslo_concurrency.processutils [None req-1a6d3efd-f2d4-427c-8626-c07b97fcb723 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:54:15 compute-1 nova_compute[225855]: 2026-01-20 14:54:15.398 225859 DEBUG oslo_concurrency.processutils [None req-1a6d3efd-f2d4-427c-8626-c07b97fcb723 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:54:15 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:54:15 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4265517374' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:54:15 compute-1 nova_compute[225855]: 2026-01-20 14:54:15.836 225859 DEBUG oslo_concurrency.processutils [None req-1a6d3efd-f2d4-427c-8626-c07b97fcb723 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:54:15 compute-1 nova_compute[225855]: 2026-01-20 14:54:15.840 225859 DEBUG nova.virt.libvirt.vif [None req-1a6d3efd-f2d4-427c-8626-c07b97fcb723 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T14:51:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-1654627482',display_name='tempest-ServerActionsTestOtherB-server-1654627482',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-1654627482',id=118,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBO99uJ9+FwgjxRb/9u+f3Mj9/VKSDM+OKd66Ygsg8lEO+7bGpDEQrC5BIaSV+Na5YF+3DqUwLNmAYSN9IkTSGbRPw5y8813A+KsiNHebrpnZ7oReyT+5/zNQYafCHVAfGA==',key_name='tempest-keypair-302882914',keypairs=<?>,launch_index=0,launched_at=2026-01-20T14:54:03Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='b3b1b7f5b4f84b5abbc401eb577c85c0',ramdisk_id='',reservation_id='r-2ulk0sfq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherB-1136521362',owner_user_name='tempest-ServerActionsTestOtherB-1136521362-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T14:54:08Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='215db37373dc4ae5a75cbd6866f471da',uuid=7f5cfffe-c1dc-4b00-844e-0fb35b340f44,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "87b0cab5-af2f-4440-8f58-840860a23f68", "address": "fa:16:3e:2b:79:1b", "network": {"id": "41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1445030024-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.184", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b3b1b7f5b4f84b5abbc401eb577c85c0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap87b0cab5-af", "ovs_interfaceid": "87b0cab5-af2f-4440-8f58-840860a23f68", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 20 14:54:15 compute-1 nova_compute[225855]: 2026-01-20 14:54:15.841 225859 DEBUG nova.network.os_vif_util [None req-1a6d3efd-f2d4-427c-8626-c07b97fcb723 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Converting VIF {"id": "87b0cab5-af2f-4440-8f58-840860a23f68", "address": "fa:16:3e:2b:79:1b", "network": {"id": "41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1445030024-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.184", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b3b1b7f5b4f84b5abbc401eb577c85c0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap87b0cab5-af", "ovs_interfaceid": "87b0cab5-af2f-4440-8f58-840860a23f68", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:54:15 compute-1 nova_compute[225855]: 2026-01-20 14:54:15.842 225859 DEBUG nova.network.os_vif_util [None req-1a6d3efd-f2d4-427c-8626-c07b97fcb723 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2b:79:1b,bridge_name='br-int',has_traffic_filtering=True,id=87b0cab5-af2f-4440-8f58-840860a23f68,network=Network(41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap87b0cab5-af') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:54:15 compute-1 nova_compute[225855]: 2026-01-20 14:54:15.845 225859 DEBUG nova.objects.instance [None req-1a6d3efd-f2d4-427c-8626-c07b97fcb723 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Lazy-loading 'pci_devices' on Instance uuid 7f5cfffe-c1dc-4b00-844e-0fb35b340f44 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:54:15 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:54:15 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:54:15 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:54:15.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:54:15 compute-1 nova_compute[225855]: 2026-01-20 14:54:15.933 225859 DEBUG nova.virt.libvirt.driver [None req-1a6d3efd-f2d4-427c-8626-c07b97fcb723 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: 7f5cfffe-c1dc-4b00-844e-0fb35b340f44] End _get_guest_xml xml=<domain type="kvm">
Jan 20 14:54:15 compute-1 nova_compute[225855]:   <uuid>7f5cfffe-c1dc-4b00-844e-0fb35b340f44</uuid>
Jan 20 14:54:15 compute-1 nova_compute[225855]:   <name>instance-00000076</name>
Jan 20 14:54:15 compute-1 nova_compute[225855]:   <memory>196608</memory>
Jan 20 14:54:15 compute-1 nova_compute[225855]:   <vcpu>1</vcpu>
Jan 20 14:54:15 compute-1 nova_compute[225855]:   <metadata>
Jan 20 14:54:15 compute-1 nova_compute[225855]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 14:54:15 compute-1 nova_compute[225855]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 14:54:15 compute-1 nova_compute[225855]:       <nova:name>tempest-ServerActionsTestOtherB-server-1654627482</nova:name>
Jan 20 14:54:15 compute-1 nova_compute[225855]:       <nova:creationTime>2026-01-20 14:54:14</nova:creationTime>
Jan 20 14:54:15 compute-1 nova_compute[225855]:       <nova:flavor name="m1.micro">
Jan 20 14:54:15 compute-1 nova_compute[225855]:         <nova:memory>192</nova:memory>
Jan 20 14:54:15 compute-1 nova_compute[225855]:         <nova:disk>1</nova:disk>
Jan 20 14:54:15 compute-1 nova_compute[225855]:         <nova:swap>0</nova:swap>
Jan 20 14:54:15 compute-1 nova_compute[225855]:         <nova:ephemeral>0</nova:ephemeral>
Jan 20 14:54:15 compute-1 nova_compute[225855]:         <nova:vcpus>1</nova:vcpus>
Jan 20 14:54:15 compute-1 nova_compute[225855]:       </nova:flavor>
Jan 20 14:54:15 compute-1 nova_compute[225855]:       <nova:owner>
Jan 20 14:54:15 compute-1 nova_compute[225855]:         <nova:user uuid="215db37373dc4ae5a75cbd6866f471da">tempest-ServerActionsTestOtherB-1136521362-project-member</nova:user>
Jan 20 14:54:15 compute-1 nova_compute[225855]:         <nova:project uuid="b3b1b7f5b4f84b5abbc401eb577c85c0">tempest-ServerActionsTestOtherB-1136521362</nova:project>
Jan 20 14:54:15 compute-1 nova_compute[225855]:       </nova:owner>
Jan 20 14:54:15 compute-1 nova_compute[225855]:       <nova:root type="image" uuid="a32b3e07-16d8-46fd-9a7b-c242c432fcf9"/>
Jan 20 14:54:15 compute-1 nova_compute[225855]:       <nova:ports>
Jan 20 14:54:15 compute-1 nova_compute[225855]:         <nova:port uuid="87b0cab5-af2f-4440-8f58-840860a23f68">
Jan 20 14:54:15 compute-1 nova_compute[225855]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Jan 20 14:54:15 compute-1 nova_compute[225855]:         </nova:port>
Jan 20 14:54:15 compute-1 nova_compute[225855]:       </nova:ports>
Jan 20 14:54:15 compute-1 nova_compute[225855]:     </nova:instance>
Jan 20 14:54:15 compute-1 nova_compute[225855]:   </metadata>
Jan 20 14:54:15 compute-1 nova_compute[225855]:   <sysinfo type="smbios">
Jan 20 14:54:15 compute-1 nova_compute[225855]:     <system>
Jan 20 14:54:15 compute-1 nova_compute[225855]:       <entry name="manufacturer">RDO</entry>
Jan 20 14:54:15 compute-1 nova_compute[225855]:       <entry name="product">OpenStack Compute</entry>
Jan 20 14:54:15 compute-1 nova_compute[225855]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 14:54:15 compute-1 nova_compute[225855]:       <entry name="serial">7f5cfffe-c1dc-4b00-844e-0fb35b340f44</entry>
Jan 20 14:54:15 compute-1 nova_compute[225855]:       <entry name="uuid">7f5cfffe-c1dc-4b00-844e-0fb35b340f44</entry>
Jan 20 14:54:15 compute-1 nova_compute[225855]:       <entry name="family">Virtual Machine</entry>
Jan 20 14:54:15 compute-1 nova_compute[225855]:     </system>
Jan 20 14:54:15 compute-1 nova_compute[225855]:   </sysinfo>
Jan 20 14:54:15 compute-1 nova_compute[225855]:   <os>
Jan 20 14:54:15 compute-1 nova_compute[225855]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 20 14:54:15 compute-1 nova_compute[225855]:     <boot dev="hd"/>
Jan 20 14:54:15 compute-1 nova_compute[225855]:     <smbios mode="sysinfo"/>
Jan 20 14:54:15 compute-1 nova_compute[225855]:   </os>
Jan 20 14:54:15 compute-1 nova_compute[225855]:   <features>
Jan 20 14:54:15 compute-1 nova_compute[225855]:     <acpi/>
Jan 20 14:54:15 compute-1 nova_compute[225855]:     <apic/>
Jan 20 14:54:15 compute-1 nova_compute[225855]:     <vmcoreinfo/>
Jan 20 14:54:15 compute-1 nova_compute[225855]:   </features>
Jan 20 14:54:15 compute-1 nova_compute[225855]:   <clock offset="utc">
Jan 20 14:54:15 compute-1 nova_compute[225855]:     <timer name="pit" tickpolicy="delay"/>
Jan 20 14:54:15 compute-1 nova_compute[225855]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 20 14:54:15 compute-1 nova_compute[225855]:     <timer name="hpet" present="no"/>
Jan 20 14:54:15 compute-1 nova_compute[225855]:   </clock>
Jan 20 14:54:15 compute-1 nova_compute[225855]:   <cpu mode="custom" match="exact">
Jan 20 14:54:15 compute-1 nova_compute[225855]:     <model>Nehalem</model>
Jan 20 14:54:15 compute-1 nova_compute[225855]:     <topology sockets="1" cores="1" threads="1"/>
Jan 20 14:54:15 compute-1 nova_compute[225855]:   </cpu>
Jan 20 14:54:15 compute-1 nova_compute[225855]:   <devices>
Jan 20 14:54:15 compute-1 nova_compute[225855]:     <disk type="network" device="disk">
Jan 20 14:54:15 compute-1 nova_compute[225855]:       <driver type="raw" cache="none"/>
Jan 20 14:54:15 compute-1 nova_compute[225855]:       <source protocol="rbd" name="vms/7f5cfffe-c1dc-4b00-844e-0fb35b340f44_disk">
Jan 20 14:54:15 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:54:15 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:54:15 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:54:15 compute-1 nova_compute[225855]:       </source>
Jan 20 14:54:15 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 14:54:15 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:54:15 compute-1 nova_compute[225855]:       </auth>
Jan 20 14:54:15 compute-1 nova_compute[225855]:       <target dev="vda" bus="virtio"/>
Jan 20 14:54:15 compute-1 nova_compute[225855]:     </disk>
Jan 20 14:54:15 compute-1 nova_compute[225855]:     <disk type="network" device="cdrom">
Jan 20 14:54:15 compute-1 nova_compute[225855]:       <driver type="raw" cache="none"/>
Jan 20 14:54:15 compute-1 nova_compute[225855]:       <source protocol="rbd" name="vms/7f5cfffe-c1dc-4b00-844e-0fb35b340f44_disk.config">
Jan 20 14:54:15 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:54:15 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:54:15 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:54:15 compute-1 nova_compute[225855]:       </source>
Jan 20 14:54:15 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 14:54:15 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:54:15 compute-1 nova_compute[225855]:       </auth>
Jan 20 14:54:15 compute-1 nova_compute[225855]:       <target dev="sda" bus="sata"/>
Jan 20 14:54:15 compute-1 nova_compute[225855]:     </disk>
Jan 20 14:54:15 compute-1 nova_compute[225855]:     <interface type="ethernet">
Jan 20 14:54:15 compute-1 nova_compute[225855]:       <mac address="fa:16:3e:2b:79:1b"/>
Jan 20 14:54:15 compute-1 nova_compute[225855]:       <model type="virtio"/>
Jan 20 14:54:15 compute-1 nova_compute[225855]:       <driver name="vhost" rx_queue_size="512"/>
Jan 20 14:54:15 compute-1 nova_compute[225855]:       <mtu size="1442"/>
Jan 20 14:54:15 compute-1 nova_compute[225855]:       <target dev="tap87b0cab5-af"/>
Jan 20 14:54:15 compute-1 nova_compute[225855]:     </interface>
Jan 20 14:54:15 compute-1 nova_compute[225855]:     <serial type="pty">
Jan 20 14:54:15 compute-1 nova_compute[225855]:       <log file="/var/lib/nova/instances/7f5cfffe-c1dc-4b00-844e-0fb35b340f44/console.log" append="off"/>
Jan 20 14:54:15 compute-1 nova_compute[225855]:     </serial>
Jan 20 14:54:15 compute-1 nova_compute[225855]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 20 14:54:15 compute-1 nova_compute[225855]:     <video>
Jan 20 14:54:15 compute-1 nova_compute[225855]:       <model type="virtio"/>
Jan 20 14:54:15 compute-1 nova_compute[225855]:     </video>
Jan 20 14:54:15 compute-1 nova_compute[225855]:     <input type="tablet" bus="usb"/>
Jan 20 14:54:15 compute-1 nova_compute[225855]:     <input type="keyboard" bus="usb"/>
Jan 20 14:54:15 compute-1 nova_compute[225855]:     <rng model="virtio">
Jan 20 14:54:15 compute-1 nova_compute[225855]:       <backend model="random">/dev/urandom</backend>
Jan 20 14:54:15 compute-1 nova_compute[225855]:     </rng>
Jan 20 14:54:15 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root"/>
Jan 20 14:54:15 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:54:15 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:54:15 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:54:15 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:54:15 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:54:15 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:54:15 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:54:15 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:54:15 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:54:15 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:54:15 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:54:15 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:54:15 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:54:15 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:54:15 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:54:15 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:54:15 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:54:15 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:54:15 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:54:15 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:54:15 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:54:15 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:54:15 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:54:15 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:54:15 compute-1 nova_compute[225855]:     <controller type="usb" index="0"/>
Jan 20 14:54:15 compute-1 nova_compute[225855]:     <memballoon model="virtio">
Jan 20 14:54:15 compute-1 nova_compute[225855]:       <stats period="10"/>
Jan 20 14:54:15 compute-1 nova_compute[225855]:     </memballoon>
Jan 20 14:54:15 compute-1 nova_compute[225855]:   </devices>
Jan 20 14:54:15 compute-1 nova_compute[225855]: </domain>
Jan 20 14:54:15 compute-1 nova_compute[225855]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 20 14:54:15 compute-1 nova_compute[225855]: 2026-01-20 14:54:15.935 225859 DEBUG nova.virt.libvirt.driver [None req-1a6d3efd-f2d4-427c-8626-c07b97fcb723 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] skipping disk for instance-00000076 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 14:54:15 compute-1 nova_compute[225855]: 2026-01-20 14:54:15.935 225859 DEBUG nova.virt.libvirt.driver [None req-1a6d3efd-f2d4-427c-8626-c07b97fcb723 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] skipping disk for instance-00000076 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 14:54:15 compute-1 nova_compute[225855]: 2026-01-20 14:54:15.936 225859 DEBUG nova.virt.libvirt.vif [None req-1a6d3efd-f2d4-427c-8626-c07b97fcb723 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T14:51:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-1654627482',display_name='tempest-ServerActionsTestOtherB-server-1654627482',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-1654627482',id=118,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBO99uJ9+FwgjxRb/9u+f3Mj9/VKSDM+OKd66Ygsg8lEO+7bGpDEQrC5BIaSV+Na5YF+3DqUwLNmAYSN9IkTSGbRPw5y8813A+KsiNHebrpnZ7oReyT+5/zNQYafCHVAfGA==',key_name='tempest-keypair-302882914',keypairs=<?>,launch_index=0,launched_at=2026-01-20T14:54:03Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=4,progress=0,project_id='b3b1b7f5b4f84b5abbc401eb577c85c0',ramdisk_id='',reservation_id='r-2ulk0sfq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherB-1136521362',owner_user_name='tempest-ServerActionsTestOtherB-1136521362-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T14:54:08Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='215db37373dc4ae5a75cbd6866f471da',uuid=7f5cfffe-c1dc-4b00-844e-0fb35b340f44,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "87b0cab5-af2f-4440-8f58-840860a23f68", "address": "fa:16:3e:2b:79:1b", "network": {"id": "41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1445030024-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.184", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b3b1b7f5b4f84b5abbc401eb577c85c0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap87b0cab5-af", "ovs_interfaceid": "87b0cab5-af2f-4440-8f58-840860a23f68", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 20 14:54:15 compute-1 nova_compute[225855]: 2026-01-20 14:54:15.937 225859 DEBUG nova.network.os_vif_util [None req-1a6d3efd-f2d4-427c-8626-c07b97fcb723 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Converting VIF {"id": "87b0cab5-af2f-4440-8f58-840860a23f68", "address": "fa:16:3e:2b:79:1b", "network": {"id": "41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1445030024-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.184", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b3b1b7f5b4f84b5abbc401eb577c85c0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap87b0cab5-af", "ovs_interfaceid": "87b0cab5-af2f-4440-8f58-840860a23f68", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:54:15 compute-1 nova_compute[225855]: 2026-01-20 14:54:15.938 225859 DEBUG nova.network.os_vif_util [None req-1a6d3efd-f2d4-427c-8626-c07b97fcb723 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2b:79:1b,bridge_name='br-int',has_traffic_filtering=True,id=87b0cab5-af2f-4440-8f58-840860a23f68,network=Network(41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap87b0cab5-af') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:54:15 compute-1 nova_compute[225855]: 2026-01-20 14:54:15.939 225859 DEBUG os_vif [None req-1a6d3efd-f2d4-427c-8626-c07b97fcb723 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:2b:79:1b,bridge_name='br-int',has_traffic_filtering=True,id=87b0cab5-af2f-4440-8f58-840860a23f68,network=Network(41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap87b0cab5-af') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 20 14:54:15 compute-1 nova_compute[225855]: 2026-01-20 14:54:15.939 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:54:15 compute-1 nova_compute[225855]: 2026-01-20 14:54:15.940 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:54:15 compute-1 nova_compute[225855]: 2026-01-20 14:54:15.940 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:54:15 compute-1 nova_compute[225855]: 2026-01-20 14:54:15.944 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:54:15 compute-1 nova_compute[225855]: 2026-01-20 14:54:15.945 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap87b0cab5-af, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:54:15 compute-1 nova_compute[225855]: 2026-01-20 14:54:15.946 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap87b0cab5-af, col_values=(('external_ids', {'iface-id': '87b0cab5-af2f-4440-8f58-840860a23f68', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:2b:79:1b', 'vm-uuid': '7f5cfffe-c1dc-4b00-844e-0fb35b340f44'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:54:15 compute-1 nova_compute[225855]: 2026-01-20 14:54:15.947 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:54:15 compute-1 NetworkManager[49104]: <info>  [1768920855.9485] manager: (tap87b0cab5-af): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/205)
Jan 20 14:54:15 compute-1 nova_compute[225855]: 2026-01-20 14:54:15.949 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 14:54:15 compute-1 nova_compute[225855]: 2026-01-20 14:54:15.956 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:54:15 compute-1 nova_compute[225855]: 2026-01-20 14:54:15.957 225859 INFO os_vif [None req-1a6d3efd-f2d4-427c-8626-c07b97fcb723 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:2b:79:1b,bridge_name='br-int',has_traffic_filtering=True,id=87b0cab5-af2f-4440-8f58-840860a23f68,network=Network(41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap87b0cab5-af')
Jan 20 14:54:16 compute-1 nova_compute[225855]: 2026-01-20 14:54:16.001 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:54:16 compute-1 kernel: tap87b0cab5-af: entered promiscuous mode
Jan 20 14:54:16 compute-1 NetworkManager[49104]: <info>  [1768920856.0191] manager: (tap87b0cab5-af): new Tun device (/org/freedesktop/NetworkManager/Devices/206)
Jan 20 14:54:16 compute-1 ovn_controller[130490]: 2026-01-20T14:54:16Z|00485|binding|INFO|Claiming lport 87b0cab5-af2f-4440-8f58-840860a23f68 for this chassis.
Jan 20 14:54:16 compute-1 ovn_controller[130490]: 2026-01-20T14:54:16Z|00486|binding|INFO|87b0cab5-af2f-4440-8f58-840860a23f68: Claiming fa:16:3e:2b:79:1b 10.100.0.9
Jan 20 14:54:16 compute-1 nova_compute[225855]: 2026-01-20 14:54:16.020 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:54:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:54:16.027 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2b:79:1b 10.100.0.9'], port_security=['fa:16:3e:2b:79:1b 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '7f5cfffe-c1dc-4b00-844e-0fb35b340f44', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b3b1b7f5b4f84b5abbc401eb577c85c0', 'neutron:revision_number': '7', 'neutron:security_group_ids': '8b11f3fb-2601-4eca-a1b6-838549d7750c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.184'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3273589e-5585-406c-9611-87f758b0e521, chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], logical_port=87b0cab5-af2f-4440-8f58-840860a23f68) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:54:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:54:16.028 140354 INFO neutron.agent.ovn.metadata.agent [-] Port 87b0cab5-af2f-4440-8f58-840860a23f68 in datapath 41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce bound to our chassis
Jan 20 14:54:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:54:16.030 140354 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce
Jan 20 14:54:16 compute-1 ovn_controller[130490]: 2026-01-20T14:54:16Z|00487|binding|INFO|Setting lport 87b0cab5-af2f-4440-8f58-840860a23f68 ovn-installed in OVS
Jan 20 14:54:16 compute-1 ovn_controller[130490]: 2026-01-20T14:54:16Z|00488|binding|INFO|Setting lport 87b0cab5-af2f-4440-8f58-840860a23f68 up in Southbound
Jan 20 14:54:16 compute-1 nova_compute[225855]: 2026-01-20 14:54:16.041 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:54:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:54:16.042 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[958721f0-c9ba-4353-9501-417fe2b3eeb5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:54:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:54:16.043 140354 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap41a1a3fe-f1 in ovnmeta-41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 20 14:54:16 compute-1 nova_compute[225855]: 2026-01-20 14:54:16.044 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:54:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:54:16.045 229707 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap41a1a3fe-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 20 14:54:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:54:16.045 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[46350ddf-8847-457f-8183-496d1c49d5e7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:54:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:54:16.046 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[006b67ab-eda9-408f-aec6-ca1a9cd515ce]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:54:16 compute-1 systemd-udevd[273982]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 14:54:16 compute-1 systemd-machined[194361]: New machine qemu-58-instance-00000076.
Jan 20 14:54:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:54:16.057 140466 DEBUG oslo.privsep.daemon [-] privsep: reply[a6178877-9d60-4ee1-a876-ef67b643f231]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:54:16 compute-1 systemd[1]: Started Virtual Machine qemu-58-instance-00000076.
Jan 20 14:54:16 compute-1 NetworkManager[49104]: <info>  [1768920856.0680] device (tap87b0cab5-af): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 20 14:54:16 compute-1 NetworkManager[49104]: <info>  [1768920856.0689] device (tap87b0cab5-af): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 20 14:54:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:54:16.071 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[01f63726-547e-436d-b1f7-07ca1c00a171]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:54:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:54:16.099 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[6ab78bd2-5f3c-4fbb-bbf2-246a441a6ec3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:54:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:54:16.105 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[340a5090-9530-4919-8e81-d60f2166e7c8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:54:16 compute-1 NetworkManager[49104]: <info>  [1768920856.1065] manager: (tap41a1a3fe-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/207)
Jan 20 14:54:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:54:16.132 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[ed067912-d3cc-4610-8e2d-5185435f0fcb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:54:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:54:16.137 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[18a8f79f-1a2e-42a4-a2db-8c9523041839]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:54:16 compute-1 NetworkManager[49104]: <info>  [1768920856.1576] device (tap41a1a3fe-f0): carrier: link connected
Jan 20 14:54:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:54:16.162 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[eca41d98-0f66-45a2-868c-a6dc16b7064c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:54:16 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/474916819' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:54:16 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/4265517374' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:54:16 compute-1 ceph-mon[81775]: pgmap v2039: 321 pgs: 321 active+clean; 931 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 9.1 MiB/s rd, 5.8 MiB/s wr, 298 op/s
Jan 20 14:54:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:54:16.179 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[a24041b6-fd92-4c74-afb6-5eb979e7d59e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap41a1a3fe-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:3c:1f:b5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 136], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 589508, 'reachable_time': 17134, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 274014, 'error': None, 'target': 'ovnmeta-41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:54:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:54:16.195 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[fd08d5ae-f710-4051-be79-286107c2cd86]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe3c:1fb5'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 589508, 'tstamp': 589508}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 274015, 'error': None, 'target': 'ovnmeta-41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:54:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:54:16.221 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[9ddbbbec-7e4c-4fe7-933c-98fbc16728a4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap41a1a3fe-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:3c:1f:b5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 136], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 589508, 'reachable_time': 17134, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 274016, 'error': None, 'target': 'ovnmeta-41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:54:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:54:16.264 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[94e78398-dd6f-471a-b3c8-78b2c95ac3b6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:54:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:54:16.352 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[b0ae3c5e-98a7-40dc-91b6-bf37bad70919]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:54:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:54:16.354 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap41a1a3fe-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:54:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:54:16.354 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:54:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:54:16.355 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap41a1a3fe-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:54:16 compute-1 nova_compute[225855]: 2026-01-20 14:54:16.357 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:54:16 compute-1 kernel: tap41a1a3fe-f0: entered promiscuous mode
Jan 20 14:54:16 compute-1 NetworkManager[49104]: <info>  [1768920856.3578] manager: (tap41a1a3fe-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/208)
Jan 20 14:54:16 compute-1 nova_compute[225855]: 2026-01-20 14:54:16.359 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:54:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:54:16.361 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap41a1a3fe-f0, col_values=(('external_ids', {'iface-id': '3fa2df7b-42b2-4a3b-a33b-ab37b5d6aef3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:54:16 compute-1 nova_compute[225855]: 2026-01-20 14:54:16.363 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:54:16 compute-1 ovn_controller[130490]: 2026-01-20T14:54:16Z|00489|binding|INFO|Releasing lport 3fa2df7b-42b2-4a3b-a33b-ab37b5d6aef3 from this chassis (sb_readonly=0)
Jan 20 14:54:16 compute-1 nova_compute[225855]: 2026-01-20 14:54:16.385 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:54:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:54:16.386 140354 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 20 14:54:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:54:16.387 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[27612061-3bed-4d2b-a0cc-a86e440cbdcc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:54:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:54:16.389 140354 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 20 14:54:16 compute-1 ovn_metadata_agent[140349]: global
Jan 20 14:54:16 compute-1 ovn_metadata_agent[140349]:     log         /dev/log local0 debug
Jan 20 14:54:16 compute-1 ovn_metadata_agent[140349]:     log-tag     haproxy-metadata-proxy-41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce
Jan 20 14:54:16 compute-1 ovn_metadata_agent[140349]:     user        root
Jan 20 14:54:16 compute-1 ovn_metadata_agent[140349]:     group       root
Jan 20 14:54:16 compute-1 ovn_metadata_agent[140349]:     maxconn     1024
Jan 20 14:54:16 compute-1 ovn_metadata_agent[140349]:     pidfile     /var/lib/neutron/external/pids/41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce.pid.haproxy
Jan 20 14:54:16 compute-1 ovn_metadata_agent[140349]:     daemon
Jan 20 14:54:16 compute-1 ovn_metadata_agent[140349]: 
Jan 20 14:54:16 compute-1 ovn_metadata_agent[140349]: defaults
Jan 20 14:54:16 compute-1 ovn_metadata_agent[140349]:     log global
Jan 20 14:54:16 compute-1 ovn_metadata_agent[140349]:     mode http
Jan 20 14:54:16 compute-1 ovn_metadata_agent[140349]:     option httplog
Jan 20 14:54:16 compute-1 ovn_metadata_agent[140349]:     option dontlognull
Jan 20 14:54:16 compute-1 ovn_metadata_agent[140349]:     option http-server-close
Jan 20 14:54:16 compute-1 ovn_metadata_agent[140349]:     option forwardfor
Jan 20 14:54:16 compute-1 ovn_metadata_agent[140349]:     retries                 3
Jan 20 14:54:16 compute-1 ovn_metadata_agent[140349]:     timeout http-request    30s
Jan 20 14:54:16 compute-1 ovn_metadata_agent[140349]:     timeout connect         30s
Jan 20 14:54:16 compute-1 ovn_metadata_agent[140349]:     timeout client          32s
Jan 20 14:54:16 compute-1 ovn_metadata_agent[140349]:     timeout server          32s
Jan 20 14:54:16 compute-1 ovn_metadata_agent[140349]:     timeout http-keep-alive 30s
Jan 20 14:54:16 compute-1 ovn_metadata_agent[140349]: 
Jan 20 14:54:16 compute-1 ovn_metadata_agent[140349]: 
Jan 20 14:54:16 compute-1 ovn_metadata_agent[140349]: listen listener
Jan 20 14:54:16 compute-1 ovn_metadata_agent[140349]:     bind 169.254.169.254:80
Jan 20 14:54:16 compute-1 ovn_metadata_agent[140349]:     server metadata /var/lib/neutron/metadata_proxy
Jan 20 14:54:16 compute-1 ovn_metadata_agent[140349]:     http-request add-header X-OVN-Network-ID 41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce
Jan 20 14:54:16 compute-1 ovn_metadata_agent[140349]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 20 14:54:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:54:16.391 140354 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce', 'env', 'PROCESS_TAG=haproxy-41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 20 14:54:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:54:16.414 140354 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:54:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:54:16.415 140354 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:54:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:54:16.415 140354 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:54:16 compute-1 sudo[274041]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:54:16 compute-1 sudo[274041]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:54:16 compute-1 sudo[274041]: pam_unix(sudo:session): session closed for user root
Jan 20 14:54:16 compute-1 sudo[274088]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 14:54:16 compute-1 sudo[274088]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:54:16 compute-1 sudo[274088]: pam_unix(sudo:session): session closed for user root
Jan 20 14:54:16 compute-1 nova_compute[225855]: 2026-01-20 14:54:16.524 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768920856.524169, 7f5cfffe-c1dc-4b00-844e-0fb35b340f44 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:54:16 compute-1 nova_compute[225855]: 2026-01-20 14:54:16.525 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 7f5cfffe-c1dc-4b00-844e-0fb35b340f44] VM Resumed (Lifecycle Event)
Jan 20 14:54:16 compute-1 nova_compute[225855]: 2026-01-20 14:54:16.527 225859 DEBUG nova.compute.manager [None req-1a6d3efd-f2d4-427c-8626-c07b97fcb723 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: 7f5cfffe-c1dc-4b00-844e-0fb35b340f44] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 20 14:54:16 compute-1 nova_compute[225855]: 2026-01-20 14:54:16.530 225859 INFO nova.virt.libvirt.driver [-] [instance: 7f5cfffe-c1dc-4b00-844e-0fb35b340f44] Instance rebooted successfully.
Jan 20 14:54:16 compute-1 nova_compute[225855]: 2026-01-20 14:54:16.531 225859 DEBUG nova.compute.manager [None req-1a6d3efd-f2d4-427c-8626-c07b97fcb723 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: 7f5cfffe-c1dc-4b00-844e-0fb35b340f44] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:54:16 compute-1 nova_compute[225855]: 2026-01-20 14:54:16.564 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 7f5cfffe-c1dc-4b00-844e-0fb35b340f44] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:54:16 compute-1 nova_compute[225855]: 2026-01-20 14:54:16.567 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 7f5cfffe-c1dc-4b00-844e-0fb35b340f44] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: stopped, current task_state: powering-on, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:54:16 compute-1 nova_compute[225855]: 2026-01-20 14:54:16.596 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768920856.524804, 7f5cfffe-c1dc-4b00-844e-0fb35b340f44 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:54:16 compute-1 nova_compute[225855]: 2026-01-20 14:54:16.597 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 7f5cfffe-c1dc-4b00-844e-0fb35b340f44] VM Started (Lifecycle Event)
Jan 20 14:54:16 compute-1 nova_compute[225855]: 2026-01-20 14:54:16.617 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 7f5cfffe-c1dc-4b00-844e-0fb35b340f44] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:54:16 compute-1 nova_compute[225855]: 2026-01-20 14:54:16.621 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 7f5cfffe-c1dc-4b00-844e-0fb35b340f44] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:54:16 compute-1 podman[274140]: 2026-01-20 14:54:16.794799333 +0000 UTC m=+0.074084442 container create ae11b6978c1a5aa6f308a011c4a65dc64e867e3720c9cf1fb815e78bfe604ca6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 20 14:54:16 compute-1 systemd[1]: Started libpod-conmon-ae11b6978c1a5aa6f308a011c4a65dc64e867e3720c9cf1fb815e78bfe604ca6.scope.
Jan 20 14:54:16 compute-1 podman[274140]: 2026-01-20 14:54:16.749553886 +0000 UTC m=+0.028839025 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 20 14:54:16 compute-1 systemd[1]: Started libcrun container.
Jan 20 14:54:16 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eaa4d503d18710d2bbff4fcb5c47904c975a3c0af22d0db4d6302ccf63c41716/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 20 14:54:16 compute-1 podman[274140]: 2026-01-20 14:54:16.902460193 +0000 UTC m=+0.181745322 container init ae11b6978c1a5aa6f308a011c4a65dc64e867e3720c9cf1fb815e78bfe604ca6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Jan 20 14:54:16 compute-1 podman[274140]: 2026-01-20 14:54:16.908266027 +0000 UTC m=+0.187551136 container start ae11b6978c1a5aa6f308a011c4a65dc64e867e3720c9cf1fb815e78bfe604ca6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Jan 20 14:54:16 compute-1 neutron-haproxy-ovnmeta-41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce[274155]: [NOTICE]   (274159) : New worker (274161) forked
Jan 20 14:54:16 compute-1 neutron-haproxy-ovnmeta-41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce[274155]: [NOTICE]   (274159) : Loading success.
Jan 20 14:54:16 compute-1 nova_compute[225855]: 2026-01-20 14:54:16.988 225859 DEBUG nova.compute.manager [req-c2f495b2-12e5-4414-8bea-67d7b69a81dd req-e282b3e3-488a-4d54-96ff-639172249b94 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 7f5cfffe-c1dc-4b00-844e-0fb35b340f44] Received event network-vif-plugged-87b0cab5-af2f-4440-8f58-840860a23f68 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:54:16 compute-1 nova_compute[225855]: 2026-01-20 14:54:16.993 225859 DEBUG oslo_concurrency.lockutils [req-c2f495b2-12e5-4414-8bea-67d7b69a81dd req-e282b3e3-488a-4d54-96ff-639172249b94 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "7f5cfffe-c1dc-4b00-844e-0fb35b340f44-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:54:16 compute-1 nova_compute[225855]: 2026-01-20 14:54:16.994 225859 DEBUG oslo_concurrency.lockutils [req-c2f495b2-12e5-4414-8bea-67d7b69a81dd req-e282b3e3-488a-4d54-96ff-639172249b94 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "7f5cfffe-c1dc-4b00-844e-0fb35b340f44-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:54:16 compute-1 nova_compute[225855]: 2026-01-20 14:54:16.994 225859 DEBUG oslo_concurrency.lockutils [req-c2f495b2-12e5-4414-8bea-67d7b69a81dd req-e282b3e3-488a-4d54-96ff-639172249b94 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "7f5cfffe-c1dc-4b00-844e-0fb35b340f44-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:54:16 compute-1 nova_compute[225855]: 2026-01-20 14:54:16.995 225859 DEBUG nova.compute.manager [req-c2f495b2-12e5-4414-8bea-67d7b69a81dd req-e282b3e3-488a-4d54-96ff-639172249b94 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 7f5cfffe-c1dc-4b00-844e-0fb35b340f44] No waiting events found dispatching network-vif-plugged-87b0cab5-af2f-4440-8f58-840860a23f68 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:54:16 compute-1 nova_compute[225855]: 2026-01-20 14:54:16.995 225859 WARNING nova.compute.manager [req-c2f495b2-12e5-4414-8bea-67d7b69a81dd req-e282b3e3-488a-4d54-96ff-639172249b94 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 7f5cfffe-c1dc-4b00-844e-0fb35b340f44] Received unexpected event network-vif-plugged-87b0cab5-af2f-4440-8f58-840860a23f68 for instance with vm_state active and task_state None.
Jan 20 14:54:17 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:54:17 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:54:17 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:54:17 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:54:17 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:54:17.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:54:17 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e294 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:54:17 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:54:17 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:54:17 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:54:17.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:54:18 compute-1 ceph-mon[81775]: pgmap v2040: 321 pgs: 321 active+clean; 889 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 9.1 MiB/s rd, 6.5 MiB/s wr, 317 op/s
Jan 20 14:54:19 compute-1 nova_compute[225855]: 2026-01-20 14:54:19.103 225859 DEBUG nova.compute.manager [req-f6f8a8fe-2ec5-4e65-8b30-f233f36f55ea req-14c10c01-2471-48fc-8ee5-82c33b5f21b1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 7f5cfffe-c1dc-4b00-844e-0fb35b340f44] Received event network-vif-plugged-87b0cab5-af2f-4440-8f58-840860a23f68 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:54:19 compute-1 nova_compute[225855]: 2026-01-20 14:54:19.104 225859 DEBUG oslo_concurrency.lockutils [req-f6f8a8fe-2ec5-4e65-8b30-f233f36f55ea req-14c10c01-2471-48fc-8ee5-82c33b5f21b1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "7f5cfffe-c1dc-4b00-844e-0fb35b340f44-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:54:19 compute-1 nova_compute[225855]: 2026-01-20 14:54:19.104 225859 DEBUG oslo_concurrency.lockutils [req-f6f8a8fe-2ec5-4e65-8b30-f233f36f55ea req-14c10c01-2471-48fc-8ee5-82c33b5f21b1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "7f5cfffe-c1dc-4b00-844e-0fb35b340f44-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:54:19 compute-1 nova_compute[225855]: 2026-01-20 14:54:19.104 225859 DEBUG oslo_concurrency.lockutils [req-f6f8a8fe-2ec5-4e65-8b30-f233f36f55ea req-14c10c01-2471-48fc-8ee5-82c33b5f21b1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "7f5cfffe-c1dc-4b00-844e-0fb35b340f44-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:54:19 compute-1 nova_compute[225855]: 2026-01-20 14:54:19.104 225859 DEBUG nova.compute.manager [req-f6f8a8fe-2ec5-4e65-8b30-f233f36f55ea req-14c10c01-2471-48fc-8ee5-82c33b5f21b1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 7f5cfffe-c1dc-4b00-844e-0fb35b340f44] No waiting events found dispatching network-vif-plugged-87b0cab5-af2f-4440-8f58-840860a23f68 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:54:19 compute-1 nova_compute[225855]: 2026-01-20 14:54:19.104 225859 WARNING nova.compute.manager [req-f6f8a8fe-2ec5-4e65-8b30-f233f36f55ea req-14c10c01-2471-48fc-8ee5-82c33b5f21b1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 7f5cfffe-c1dc-4b00-844e-0fb35b340f44] Received unexpected event network-vif-plugged-87b0cab5-af2f-4440-8f58-840860a23f68 for instance with vm_state active and task_state None.
Jan 20 14:54:19 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:54:19 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:54:19 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:54:19.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:54:19 compute-1 nova_compute[225855]: 2026-01-20 14:54:19.375 225859 DEBUG oslo_concurrency.lockutils [None req-a3b2f09e-4532-4621-bb52-2dd21ee746cc 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Acquiring lock "7f5cfffe-c1dc-4b00-844e-0fb35b340f44" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:54:19 compute-1 nova_compute[225855]: 2026-01-20 14:54:19.376 225859 DEBUG oslo_concurrency.lockutils [None req-a3b2f09e-4532-4621-bb52-2dd21ee746cc 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Lock "7f5cfffe-c1dc-4b00-844e-0fb35b340f44" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:54:19 compute-1 nova_compute[225855]: 2026-01-20 14:54:19.376 225859 DEBUG oslo_concurrency.lockutils [None req-a3b2f09e-4532-4621-bb52-2dd21ee746cc 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Acquiring lock "7f5cfffe-c1dc-4b00-844e-0fb35b340f44-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:54:19 compute-1 nova_compute[225855]: 2026-01-20 14:54:19.376 225859 DEBUG oslo_concurrency.lockutils [None req-a3b2f09e-4532-4621-bb52-2dd21ee746cc 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Lock "7f5cfffe-c1dc-4b00-844e-0fb35b340f44-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:54:19 compute-1 nova_compute[225855]: 2026-01-20 14:54:19.377 225859 DEBUG oslo_concurrency.lockutils [None req-a3b2f09e-4532-4621-bb52-2dd21ee746cc 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Lock "7f5cfffe-c1dc-4b00-844e-0fb35b340f44-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:54:19 compute-1 nova_compute[225855]: 2026-01-20 14:54:19.378 225859 INFO nova.compute.manager [None req-a3b2f09e-4532-4621-bb52-2dd21ee746cc 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: 7f5cfffe-c1dc-4b00-844e-0fb35b340f44] Terminating instance
Jan 20 14:54:19 compute-1 nova_compute[225855]: 2026-01-20 14:54:19.378 225859 DEBUG nova.compute.manager [None req-a3b2f09e-4532-4621-bb52-2dd21ee746cc 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: 7f5cfffe-c1dc-4b00-844e-0fb35b340f44] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 20 14:54:19 compute-1 kernel: tap87b0cab5-af (unregistering): left promiscuous mode
Jan 20 14:54:19 compute-1 NetworkManager[49104]: <info>  [1768920859.4373] device (tap87b0cab5-af): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 20 14:54:19 compute-1 ovn_controller[130490]: 2026-01-20T14:54:19Z|00490|binding|INFO|Releasing lport 87b0cab5-af2f-4440-8f58-840860a23f68 from this chassis (sb_readonly=0)
Jan 20 14:54:19 compute-1 ovn_controller[130490]: 2026-01-20T14:54:19Z|00491|binding|INFO|Setting lport 87b0cab5-af2f-4440-8f58-840860a23f68 down in Southbound
Jan 20 14:54:19 compute-1 nova_compute[225855]: 2026-01-20 14:54:19.446 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:54:19 compute-1 ovn_controller[130490]: 2026-01-20T14:54:19Z|00492|binding|INFO|Removing iface tap87b0cab5-af ovn-installed in OVS
Jan 20 14:54:19 compute-1 nova_compute[225855]: 2026-01-20 14:54:19.448 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:54:19 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:54:19.452 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2b:79:1b 10.100.0.9'], port_security=['fa:16:3e:2b:79:1b 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '7f5cfffe-c1dc-4b00-844e-0fb35b340f44', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b3b1b7f5b4f84b5abbc401eb577c85c0', 'neutron:revision_number': '8', 'neutron:security_group_ids': '8b11f3fb-2601-4eca-a1b6-838549d7750c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.184', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3273589e-5585-406c-9611-87f758b0e521, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], logical_port=87b0cab5-af2f-4440-8f58-840860a23f68) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:54:19 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:54:19.453 140354 INFO neutron.agent.ovn.metadata.agent [-] Port 87b0cab5-af2f-4440-8f58-840860a23f68 in datapath 41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce unbound from our chassis
Jan 20 14:54:19 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:54:19.455 140354 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 20 14:54:19 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:54:19.456 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[a4da95fc-b9c5-49b1-a375-7ec6f447ed13]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:54:19 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:54:19.456 140354 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce namespace which is not needed anymore
Jan 20 14:54:19 compute-1 nova_compute[225855]: 2026-01-20 14:54:19.462 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:54:19 compute-1 systemd[1]: machine-qemu\x2d58\x2dinstance\x2d00000076.scope: Deactivated successfully.
Jan 20 14:54:19 compute-1 systemd[1]: machine-qemu\x2d58\x2dinstance\x2d00000076.scope: Consumed 3.461s CPU time.
Jan 20 14:54:19 compute-1 systemd-machined[194361]: Machine qemu-58-instance-00000076 terminated.
Jan 20 14:54:19 compute-1 neutron-haproxy-ovnmeta-41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce[274155]: [NOTICE]   (274159) : haproxy version is 2.8.14-c23fe91
Jan 20 14:54:19 compute-1 neutron-haproxy-ovnmeta-41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce[274155]: [NOTICE]   (274159) : path to executable is /usr/sbin/haproxy
Jan 20 14:54:19 compute-1 neutron-haproxy-ovnmeta-41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce[274155]: [WARNING]  (274159) : Exiting Master process...
Jan 20 14:54:19 compute-1 neutron-haproxy-ovnmeta-41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce[274155]: [WARNING]  (274159) : Exiting Master process...
Jan 20 14:54:19 compute-1 neutron-haproxy-ovnmeta-41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce[274155]: [ALERT]    (274159) : Current worker (274161) exited with code 143 (Terminated)
Jan 20 14:54:19 compute-1 neutron-haproxy-ovnmeta-41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce[274155]: [WARNING]  (274159) : All workers exited. Exiting... (0)
Jan 20 14:54:19 compute-1 systemd[1]: libpod-ae11b6978c1a5aa6f308a011c4a65dc64e867e3720c9cf1fb815e78bfe604ca6.scope: Deactivated successfully.
Jan 20 14:54:19 compute-1 nova_compute[225855]: 2026-01-20 14:54:19.612 225859 INFO nova.virt.libvirt.driver [-] [instance: 7f5cfffe-c1dc-4b00-844e-0fb35b340f44] Instance destroyed successfully.
Jan 20 14:54:19 compute-1 nova_compute[225855]: 2026-01-20 14:54:19.612 225859 DEBUG nova.objects.instance [None req-a3b2f09e-4532-4621-bb52-2dd21ee746cc 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Lazy-loading 'resources' on Instance uuid 7f5cfffe-c1dc-4b00-844e-0fb35b340f44 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:54:19 compute-1 podman[274194]: 2026-01-20 14:54:19.613066596 +0000 UTC m=+0.049763746 container died ae11b6978c1a5aa6f308a011c4a65dc64e867e3720c9cf1fb815e78bfe604ca6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, tcib_managed=true)
Jan 20 14:54:19 compute-1 nova_compute[225855]: 2026-01-20 14:54:19.630 225859 DEBUG nova.virt.libvirt.vif [None req-a3b2f09e-4532-4621-bb52-2dd21ee746cc 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T14:51:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-1654627482',display_name='tempest-ServerActionsTestOtherB-server-1654627482',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-1654627482',id=118,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBO99uJ9+FwgjxRb/9u+f3Mj9/VKSDM+OKd66Ygsg8lEO+7bGpDEQrC5BIaSV+Na5YF+3DqUwLNmAYSN9IkTSGbRPw5y8813A+KsiNHebrpnZ7oReyT+5/zNQYafCHVAfGA==',key_name='tempest-keypair-302882914',keypairs=<?>,launch_index=0,launched_at=2026-01-20T14:54:03Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b3b1b7f5b4f84b5abbc401eb577c85c0',ramdisk_id='',reservation_id='r-2ulk0sfq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherB-1136521362',owner_user_name='tempest-ServerActionsTestOtherB-1136521362-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T14:54:16Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='215db37373dc4ae5a75cbd6866f471da',uuid=7f5cfffe-c1dc-4b00-844e-0fb35b340f44,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "87b0cab5-af2f-4440-8f58-840860a23f68", "address": "fa:16:3e:2b:79:1b", "network": {"id": "41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1445030024-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.184", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b3b1b7f5b4f84b5abbc401eb577c85c0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap87b0cab5-af", "ovs_interfaceid": "87b0cab5-af2f-4440-8f58-840860a23f68", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 20 14:54:19 compute-1 nova_compute[225855]: 2026-01-20 14:54:19.630 225859 DEBUG nova.network.os_vif_util [None req-a3b2f09e-4532-4621-bb52-2dd21ee746cc 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Converting VIF {"id": "87b0cab5-af2f-4440-8f58-840860a23f68", "address": "fa:16:3e:2b:79:1b", "network": {"id": "41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1445030024-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.184", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b3b1b7f5b4f84b5abbc401eb577c85c0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap87b0cab5-af", "ovs_interfaceid": "87b0cab5-af2f-4440-8f58-840860a23f68", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:54:19 compute-1 nova_compute[225855]: 2026-01-20 14:54:19.631 225859 DEBUG nova.network.os_vif_util [None req-a3b2f09e-4532-4621-bb52-2dd21ee746cc 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2b:79:1b,bridge_name='br-int',has_traffic_filtering=True,id=87b0cab5-af2f-4440-8f58-840860a23f68,network=Network(41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap87b0cab5-af') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:54:19 compute-1 nova_compute[225855]: 2026-01-20 14:54:19.631 225859 DEBUG os_vif [None req-a3b2f09e-4532-4621-bb52-2dd21ee746cc 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:2b:79:1b,bridge_name='br-int',has_traffic_filtering=True,id=87b0cab5-af2f-4440-8f58-840860a23f68,network=Network(41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap87b0cab5-af') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 20 14:54:19 compute-1 nova_compute[225855]: 2026-01-20 14:54:19.633 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:54:19 compute-1 nova_compute[225855]: 2026-01-20 14:54:19.634 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap87b0cab5-af, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:54:19 compute-1 nova_compute[225855]: 2026-01-20 14:54:19.635 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:54:19 compute-1 nova_compute[225855]: 2026-01-20 14:54:19.637 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 14:54:19 compute-1 nova_compute[225855]: 2026-01-20 14:54:19.640 225859 INFO os_vif [None req-a3b2f09e-4532-4621-bb52-2dd21ee746cc 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:2b:79:1b,bridge_name='br-int',has_traffic_filtering=True,id=87b0cab5-af2f-4440-8f58-840860a23f68,network=Network(41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap87b0cab5-af')
Jan 20 14:54:19 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ae11b6978c1a5aa6f308a011c4a65dc64e867e3720c9cf1fb815e78bfe604ca6-userdata-shm.mount: Deactivated successfully.
Jan 20 14:54:19 compute-1 systemd[1]: var-lib-containers-storage-overlay-eaa4d503d18710d2bbff4fcb5c47904c975a3c0af22d0db4d6302ccf63c41716-merged.mount: Deactivated successfully.
Jan 20 14:54:19 compute-1 podman[274194]: 2026-01-20 14:54:19.663449308 +0000 UTC m=+0.100146458 container cleanup ae11b6978c1a5aa6f308a011c4a65dc64e867e3720c9cf1fb815e78bfe604ca6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 20 14:54:19 compute-1 systemd[1]: libpod-conmon-ae11b6978c1a5aa6f308a011c4a65dc64e867e3720c9cf1fb815e78bfe604ca6.scope: Deactivated successfully.
Jan 20 14:54:19 compute-1 podman[274251]: 2026-01-20 14:54:19.72264817 +0000 UTC m=+0.039268320 container remove ae11b6978c1a5aa6f308a011c4a65dc64e867e3720c9cf1fb815e78bfe604ca6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Jan 20 14:54:19 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:54:19.729 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[f44c2b58-cd80-460f-baa6-e63ea2fb9544]: (4, ('Tue Jan 20 02:54:19 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce (ae11b6978c1a5aa6f308a011c4a65dc64e867e3720c9cf1fb815e78bfe604ca6)\nae11b6978c1a5aa6f308a011c4a65dc64e867e3720c9cf1fb815e78bfe604ca6\nTue Jan 20 02:54:19 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce (ae11b6978c1a5aa6f308a011c4a65dc64e867e3720c9cf1fb815e78bfe604ca6)\nae11b6978c1a5aa6f308a011c4a65dc64e867e3720c9cf1fb815e78bfe604ca6\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:54:19 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:54:19.730 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[34279aa6-2064-4b57-baf0-57e5617ed82c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:54:19 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:54:19.731 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap41a1a3fe-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:54:19 compute-1 kernel: tap41a1a3fe-f0: left promiscuous mode
Jan 20 14:54:19 compute-1 nova_compute[225855]: 2026-01-20 14:54:19.734 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:54:19 compute-1 nova_compute[225855]: 2026-01-20 14:54:19.749 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:54:19 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:54:19.752 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[47b49516-7391-4e3f-aa82-715c86eeb103]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:54:19 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:54:19.771 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[b3bb2dd5-2149-4a1b-88fe-a8a5edaf99fd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:54:19 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:54:19.772 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[5d9395c1-749c-4dcc-b84b-b901761a6d0a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:54:19 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:54:19.787 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[dc40f105-0fe0-4189-a3f4-b91ac052f1c0]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 589502, 'reachable_time': 29432, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 274270, 'error': None, 'target': 'ovnmeta-41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:54:19 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:54:19.790 140466 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 20 14:54:19 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:54:19.790 140466 DEBUG oslo.privsep.daemon [-] privsep: reply[fe3ca517-fbf9-4d7f-bfe5-023be5259935]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:54:19 compute-1 systemd[1]: run-netns-ovnmeta\x2d41a1a3fe\x2df6f8\x2d4375\x2d9b0f\x2da4d4bb269cce.mount: Deactivated successfully.
Jan 20 14:54:19 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:54:19 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:54:19 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:54:19.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:54:19 compute-1 nova_compute[225855]: 2026-01-20 14:54:19.992 225859 INFO nova.virt.libvirt.driver [None req-a3b2f09e-4532-4621-bb52-2dd21ee746cc 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: 7f5cfffe-c1dc-4b00-844e-0fb35b340f44] Deleting instance files /var/lib/nova/instances/7f5cfffe-c1dc-4b00-844e-0fb35b340f44_del
Jan 20 14:54:19 compute-1 nova_compute[225855]: 2026-01-20 14:54:19.992 225859 INFO nova.virt.libvirt.driver [None req-a3b2f09e-4532-4621-bb52-2dd21ee746cc 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: 7f5cfffe-c1dc-4b00-844e-0fb35b340f44] Deletion of /var/lib/nova/instances/7f5cfffe-c1dc-4b00-844e-0fb35b340f44_del complete
Jan 20 14:54:20 compute-1 nova_compute[225855]: 2026-01-20 14:54:20.070 225859 INFO nova.compute.manager [None req-a3b2f09e-4532-4621-bb52-2dd21ee746cc 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: 7f5cfffe-c1dc-4b00-844e-0fb35b340f44] Took 0.69 seconds to destroy the instance on the hypervisor.
Jan 20 14:54:20 compute-1 nova_compute[225855]: 2026-01-20 14:54:20.071 225859 DEBUG oslo.service.loopingcall [None req-a3b2f09e-4532-4621-bb52-2dd21ee746cc 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 20 14:54:20 compute-1 nova_compute[225855]: 2026-01-20 14:54:20.071 225859 DEBUG nova.compute.manager [-] [instance: 7f5cfffe-c1dc-4b00-844e-0fb35b340f44] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 20 14:54:20 compute-1 nova_compute[225855]: 2026-01-20 14:54:20.071 225859 DEBUG nova.network.neutron [-] [instance: 7f5cfffe-c1dc-4b00-844e-0fb35b340f44] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 20 14:54:21 compute-1 nova_compute[225855]: 2026-01-20 14:54:21.004 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:54:21 compute-1 ceph-mon[81775]: pgmap v2041: 321 pgs: 321 active+clean; 887 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 8.3 MiB/s rd, 5.7 MiB/s wr, 352 op/s
Jan 20 14:54:21 compute-1 nova_compute[225855]: 2026-01-20 14:54:21.227 225859 DEBUG nova.compute.manager [req-b8f65186-16bf-4916-b8dc-beedcc7cf471 req-4088d856-e670-4a1f-a597-18bfc264fe5b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 7f5cfffe-c1dc-4b00-844e-0fb35b340f44] Received event network-vif-unplugged-87b0cab5-af2f-4440-8f58-840860a23f68 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:54:21 compute-1 nova_compute[225855]: 2026-01-20 14:54:21.227 225859 DEBUG oslo_concurrency.lockutils [req-b8f65186-16bf-4916-b8dc-beedcc7cf471 req-4088d856-e670-4a1f-a597-18bfc264fe5b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "7f5cfffe-c1dc-4b00-844e-0fb35b340f44-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:54:21 compute-1 nova_compute[225855]: 2026-01-20 14:54:21.228 225859 DEBUG oslo_concurrency.lockutils [req-b8f65186-16bf-4916-b8dc-beedcc7cf471 req-4088d856-e670-4a1f-a597-18bfc264fe5b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "7f5cfffe-c1dc-4b00-844e-0fb35b340f44-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:54:21 compute-1 nova_compute[225855]: 2026-01-20 14:54:21.228 225859 DEBUG oslo_concurrency.lockutils [req-b8f65186-16bf-4916-b8dc-beedcc7cf471 req-4088d856-e670-4a1f-a597-18bfc264fe5b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "7f5cfffe-c1dc-4b00-844e-0fb35b340f44-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:54:21 compute-1 nova_compute[225855]: 2026-01-20 14:54:21.228 225859 DEBUG nova.compute.manager [req-b8f65186-16bf-4916-b8dc-beedcc7cf471 req-4088d856-e670-4a1f-a597-18bfc264fe5b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 7f5cfffe-c1dc-4b00-844e-0fb35b340f44] No waiting events found dispatching network-vif-unplugged-87b0cab5-af2f-4440-8f58-840860a23f68 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:54:21 compute-1 nova_compute[225855]: 2026-01-20 14:54:21.228 225859 DEBUG nova.compute.manager [req-b8f65186-16bf-4916-b8dc-beedcc7cf471 req-4088d856-e670-4a1f-a597-18bfc264fe5b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 7f5cfffe-c1dc-4b00-844e-0fb35b340f44] Received event network-vif-unplugged-87b0cab5-af2f-4440-8f58-840860a23f68 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 20 14:54:21 compute-1 nova_compute[225855]: 2026-01-20 14:54:21.229 225859 DEBUG nova.compute.manager [req-b8f65186-16bf-4916-b8dc-beedcc7cf471 req-4088d856-e670-4a1f-a597-18bfc264fe5b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 7f5cfffe-c1dc-4b00-844e-0fb35b340f44] Received event network-vif-plugged-87b0cab5-af2f-4440-8f58-840860a23f68 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:54:21 compute-1 nova_compute[225855]: 2026-01-20 14:54:21.229 225859 DEBUG oslo_concurrency.lockutils [req-b8f65186-16bf-4916-b8dc-beedcc7cf471 req-4088d856-e670-4a1f-a597-18bfc264fe5b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "7f5cfffe-c1dc-4b00-844e-0fb35b340f44-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:54:21 compute-1 nova_compute[225855]: 2026-01-20 14:54:21.229 225859 DEBUG oslo_concurrency.lockutils [req-b8f65186-16bf-4916-b8dc-beedcc7cf471 req-4088d856-e670-4a1f-a597-18bfc264fe5b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "7f5cfffe-c1dc-4b00-844e-0fb35b340f44-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:54:21 compute-1 nova_compute[225855]: 2026-01-20 14:54:21.229 225859 DEBUG oslo_concurrency.lockutils [req-b8f65186-16bf-4916-b8dc-beedcc7cf471 req-4088d856-e670-4a1f-a597-18bfc264fe5b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "7f5cfffe-c1dc-4b00-844e-0fb35b340f44-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:54:21 compute-1 nova_compute[225855]: 2026-01-20 14:54:21.229 225859 DEBUG nova.compute.manager [req-b8f65186-16bf-4916-b8dc-beedcc7cf471 req-4088d856-e670-4a1f-a597-18bfc264fe5b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 7f5cfffe-c1dc-4b00-844e-0fb35b340f44] No waiting events found dispatching network-vif-plugged-87b0cab5-af2f-4440-8f58-840860a23f68 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:54:21 compute-1 nova_compute[225855]: 2026-01-20 14:54:21.230 225859 WARNING nova.compute.manager [req-b8f65186-16bf-4916-b8dc-beedcc7cf471 req-4088d856-e670-4a1f-a597-18bfc264fe5b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 7f5cfffe-c1dc-4b00-844e-0fb35b340f44] Received unexpected event network-vif-plugged-87b0cab5-af2f-4440-8f58-840860a23f68 for instance with vm_state active and task_state deleting.
Jan 20 14:54:21 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:54:21 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:54:21 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:54:21.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:54:21 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:54:21 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:54:21 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:54:21.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:54:22 compute-1 nova_compute[225855]: 2026-01-20 14:54:22.391 225859 DEBUG nova.network.neutron [-] [instance: 7f5cfffe-c1dc-4b00-844e-0fb35b340f44] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:54:22 compute-1 nova_compute[225855]: 2026-01-20 14:54:22.410 225859 INFO nova.compute.manager [-] [instance: 7f5cfffe-c1dc-4b00-844e-0fb35b340f44] Took 2.34 seconds to deallocate network for instance.
Jan 20 14:54:22 compute-1 nova_compute[225855]: 2026-01-20 14:54:22.462 225859 DEBUG oslo_concurrency.lockutils [None req-a3b2f09e-4532-4621-bb52-2dd21ee746cc 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:54:22 compute-1 nova_compute[225855]: 2026-01-20 14:54:22.463 225859 DEBUG oslo_concurrency.lockutils [None req-a3b2f09e-4532-4621-bb52-2dd21ee746cc 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:54:22 compute-1 nova_compute[225855]: 2026-01-20 14:54:22.468 225859 DEBUG oslo_concurrency.lockutils [None req-a3b2f09e-4532-4621-bb52-2dd21ee746cc 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.006s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:54:22 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e294 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:54:22 compute-1 nova_compute[225855]: 2026-01-20 14:54:22.502 225859 INFO nova.scheduler.client.report [None req-a3b2f09e-4532-4621-bb52-2dd21ee746cc 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Deleted allocations for instance 7f5cfffe-c1dc-4b00-844e-0fb35b340f44
Jan 20 14:54:22 compute-1 nova_compute[225855]: 2026-01-20 14:54:22.593 225859 DEBUG oslo_concurrency.lockutils [None req-a3b2f09e-4532-4621-bb52-2dd21ee746cc 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Lock "7f5cfffe-c1dc-4b00-844e-0fb35b340f44" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.217s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:54:23 compute-1 podman[274273]: 2026-01-20 14:54:23.029826614 +0000 UTC m=+0.080287887 container health_status 533cd8e0c3b8b7f910f15cef49e7f879ab46c12ce646a35628c15f13725eb500 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:54:23 compute-1 ceph-mon[81775]: pgmap v2042: 321 pgs: 321 active+clean; 847 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 6.8 MiB/s rd, 4.4 MiB/s wr, 407 op/s
Jan 20 14:54:23 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:54:23 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:54:23 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:54:23.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:54:23 compute-1 nova_compute[225855]: 2026-01-20 14:54:23.313 225859 DEBUG nova.compute.manager [req-93243fb2-0718-4658-a305-dc5f4bfb7e48 req-0ac5592a-5584-48c5-8da2-56cd28b0106a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 7f5cfffe-c1dc-4b00-844e-0fb35b340f44] Received event network-vif-deleted-87b0cab5-af2f-4440-8f58-840860a23f68 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:54:23 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:54:23 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:54:23 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:54:23.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:54:24 compute-1 nova_compute[225855]: 2026-01-20 14:54:24.636 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:54:24 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e295 e295: 3 total, 3 up, 3 in
Jan 20 14:54:25 compute-1 ceph-mon[81775]: pgmap v2043: 321 pgs: 321 active+clean; 828 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 5.3 MiB/s rd, 2.6 MiB/s wr, 333 op/s
Jan 20 14:54:25 compute-1 ceph-mon[81775]: osdmap e295: 3 total, 3 up, 3 in
Jan 20 14:54:25 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:54:25 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:54:25 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:54:25.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:54:25 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:54:25 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:54:25 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:54:25.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:54:26 compute-1 nova_compute[225855]: 2026-01-20 14:54:26.007 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:54:27 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:54:27 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:54:27 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:54:27.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:54:27 compute-1 ceph-mon[81775]: pgmap v2045: 321 pgs: 321 active+clean; 822 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.5 MiB/s rd, 2.6 MiB/s wr, 290 op/s
Jan 20 14:54:27 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e295 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:54:27 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:54:27 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:54:27 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:54:27.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:54:28 compute-1 ceph-mon[81775]: pgmap v2046: 321 pgs: 321 active+clean; 822 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 2.1 MiB/s wr, 240 op/s
Jan 20 14:54:29 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:54:29 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:54:29 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:54:29.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:54:29 compute-1 nova_compute[225855]: 2026-01-20 14:54:29.638 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:54:29 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:54:29 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:54:29 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:54:29.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:54:30 compute-1 ceph-mon[81775]: pgmap v2047: 321 pgs: 321 active+clean; 822 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 1.1 MiB/s wr, 172 op/s
Jan 20 14:54:30 compute-1 nova_compute[225855]: 2026-01-20 14:54:30.744 225859 DEBUG oslo_concurrency.lockutils [None req-99cc8d1c-f01c-478b-a905-fe262b6711a1 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Acquiring lock "baada610-f563-4c97-89a9-56eba792c352" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:54:30 compute-1 nova_compute[225855]: 2026-01-20 14:54:30.745 225859 DEBUG oslo_concurrency.lockutils [None req-99cc8d1c-f01c-478b-a905-fe262b6711a1 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Lock "baada610-f563-4c97-89a9-56eba792c352" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:54:30 compute-1 nova_compute[225855]: 2026-01-20 14:54:30.793 225859 INFO nova.compute.manager [None req-99cc8d1c-f01c-478b-a905-fe262b6711a1 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] [instance: baada610-f563-4c97-89a9-56eba792c352] Detaching volume 5f6a803f-d232-4e97-9965-ece0139e0fda
Jan 20 14:54:30 compute-1 nova_compute[225855]: 2026-01-20 14:54:30.931 225859 INFO nova.virt.block_device [None req-99cc8d1c-f01c-478b-a905-fe262b6711a1 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] [instance: baada610-f563-4c97-89a9-56eba792c352] Attempting to driver detach volume 5f6a803f-d232-4e97-9965-ece0139e0fda from mountpoint /dev/vdb
Jan 20 14:54:30 compute-1 nova_compute[225855]: 2026-01-20 14:54:30.944 225859 DEBUG nova.virt.libvirt.driver [None req-99cc8d1c-f01c-478b-a905-fe262b6711a1 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Attempting to detach device vdb from instance baada610-f563-4c97-89a9-56eba792c352 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Jan 20 14:54:30 compute-1 nova_compute[225855]: 2026-01-20 14:54:30.945 225859 DEBUG nova.virt.libvirt.guest [None req-99cc8d1c-f01c-478b-a905-fe262b6711a1 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] detach device xml: <disk type="network" device="disk">
Jan 20 14:54:30 compute-1 nova_compute[225855]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 20 14:54:30 compute-1 nova_compute[225855]:   <source protocol="rbd" name="volumes/volume-5f6a803f-d232-4e97-9965-ece0139e0fda">
Jan 20 14:54:30 compute-1 nova_compute[225855]:     <host name="192.168.122.100" port="6789"/>
Jan 20 14:54:30 compute-1 nova_compute[225855]:     <host name="192.168.122.102" port="6789"/>
Jan 20 14:54:30 compute-1 nova_compute[225855]:     <host name="192.168.122.101" port="6789"/>
Jan 20 14:54:30 compute-1 nova_compute[225855]:   </source>
Jan 20 14:54:30 compute-1 nova_compute[225855]:   <target dev="vdb" bus="virtio"/>
Jan 20 14:54:30 compute-1 nova_compute[225855]:   <serial>5f6a803f-d232-4e97-9965-ece0139e0fda</serial>
Jan 20 14:54:30 compute-1 nova_compute[225855]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Jan 20 14:54:30 compute-1 nova_compute[225855]: </disk>
Jan 20 14:54:30 compute-1 nova_compute[225855]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Jan 20 14:54:30 compute-1 nova_compute[225855]: 2026-01-20 14:54:30.976 225859 INFO nova.virt.libvirt.driver [None req-99cc8d1c-f01c-478b-a905-fe262b6711a1 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Successfully detached device vdb from instance baada610-f563-4c97-89a9-56eba792c352 from the persistent domain config.
Jan 20 14:54:30 compute-1 nova_compute[225855]: 2026-01-20 14:54:30.977 225859 DEBUG nova.virt.libvirt.driver [None req-99cc8d1c-f01c-478b-a905-fe262b6711a1 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance baada610-f563-4c97-89a9-56eba792c352 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Jan 20 14:54:30 compute-1 nova_compute[225855]: 2026-01-20 14:54:30.978 225859 DEBUG nova.virt.libvirt.guest [None req-99cc8d1c-f01c-478b-a905-fe262b6711a1 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] detach device xml: <disk type="network" device="disk">
Jan 20 14:54:30 compute-1 nova_compute[225855]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 20 14:54:30 compute-1 nova_compute[225855]:   <source protocol="rbd" name="volumes/volume-5f6a803f-d232-4e97-9965-ece0139e0fda">
Jan 20 14:54:30 compute-1 nova_compute[225855]:     <host name="192.168.122.100" port="6789"/>
Jan 20 14:54:30 compute-1 nova_compute[225855]:     <host name="192.168.122.102" port="6789"/>
Jan 20 14:54:30 compute-1 nova_compute[225855]:     <host name="192.168.122.101" port="6789"/>
Jan 20 14:54:30 compute-1 nova_compute[225855]:   </source>
Jan 20 14:54:30 compute-1 nova_compute[225855]:   <target dev="vdb" bus="virtio"/>
Jan 20 14:54:30 compute-1 nova_compute[225855]:   <serial>5f6a803f-d232-4e97-9965-ece0139e0fda</serial>
Jan 20 14:54:30 compute-1 nova_compute[225855]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Jan 20 14:54:30 compute-1 nova_compute[225855]: </disk>
Jan 20 14:54:30 compute-1 nova_compute[225855]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Jan 20 14:54:31 compute-1 nova_compute[225855]: 2026-01-20 14:54:31.009 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:54:31 compute-1 nova_compute[225855]: 2026-01-20 14:54:31.102 225859 DEBUG nova.virt.libvirt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Received event <DeviceRemovedEvent: 1768920871.1023562, baada610-f563-4c97-89a9-56eba792c352 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Jan 20 14:54:31 compute-1 nova_compute[225855]: 2026-01-20 14:54:31.105 225859 DEBUG nova.virt.libvirt.driver [None req-99cc8d1c-f01c-478b-a905-fe262b6711a1 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance baada610-f563-4c97-89a9-56eba792c352 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Jan 20 14:54:31 compute-1 nova_compute[225855]: 2026-01-20 14:54:31.108 225859 INFO nova.virt.libvirt.driver [None req-99cc8d1c-f01c-478b-a905-fe262b6711a1 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Successfully detached device vdb from instance baada610-f563-4c97-89a9-56eba792c352 from the live domain config.
Jan 20 14:54:31 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:54:31 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:54:31 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:54:31.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:54:31 compute-1 nova_compute[225855]: 2026-01-20 14:54:31.361 225859 DEBUG nova.objects.instance [None req-99cc8d1c-f01c-478b-a905-fe262b6711a1 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Lazy-loading 'flavor' on Instance uuid baada610-f563-4c97-89a9-56eba792c352 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:54:31 compute-1 nova_compute[225855]: 2026-01-20 14:54:31.398 225859 DEBUG oslo_concurrency.lockutils [None req-99cc8d1c-f01c-478b-a905-fe262b6711a1 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Lock "baada610-f563-4c97-89a9-56eba792c352" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.654s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:54:31 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/2413086570' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:54:31 compute-1 sudo[274300]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:54:31 compute-1 sudo[274300]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:54:31 compute-1 sudo[274300]: pam_unix(sudo:session): session closed for user root
Jan 20 14:54:31 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:54:31 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:54:31 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:54:31.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:54:31 compute-1 sudo[274325]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:54:31 compute-1 sudo[274325]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:54:31 compute-1 sudo[274325]: pam_unix(sudo:session): session closed for user root
Jan 20 14:54:32 compute-1 nova_compute[225855]: 2026-01-20 14:54:32.088 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:54:32 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:54:32.090 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=39, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '12:bb:42', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '06:92:24:f7:15:56'}, ipsec=False) old=SB_Global(nb_cfg=38) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:54:32 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:54:32.091 140354 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 20 14:54:32 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e295 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:54:32 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e296 e296: 3 total, 3 up, 3 in
Jan 20 14:54:32 compute-1 ceph-mon[81775]: pgmap v2048: 321 pgs: 321 active+clean; 822 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.0 MiB/s rd, 54 KiB/s wr, 88 op/s
Jan 20 14:54:33 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:54:33 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:54:33 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:54:33.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:54:33 compute-1 nova_compute[225855]: 2026-01-20 14:54:33.420 225859 DEBUG oslo_concurrency.lockutils [None req-3f3df2db-d1c2-4077-aa71-3352e0002504 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Acquiring lock "baada610-f563-4c97-89a9-56eba792c352" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:54:33 compute-1 nova_compute[225855]: 2026-01-20 14:54:33.421 225859 DEBUG oslo_concurrency.lockutils [None req-3f3df2db-d1c2-4077-aa71-3352e0002504 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Lock "baada610-f563-4c97-89a9-56eba792c352" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:54:33 compute-1 nova_compute[225855]: 2026-01-20 14:54:33.421 225859 DEBUG oslo_concurrency.lockutils [None req-3f3df2db-d1c2-4077-aa71-3352e0002504 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Acquiring lock "baada610-f563-4c97-89a9-56eba792c352-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:54:33 compute-1 nova_compute[225855]: 2026-01-20 14:54:33.422 225859 DEBUG oslo_concurrency.lockutils [None req-3f3df2db-d1c2-4077-aa71-3352e0002504 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Lock "baada610-f563-4c97-89a9-56eba792c352-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:54:33 compute-1 nova_compute[225855]: 2026-01-20 14:54:33.422 225859 DEBUG oslo_concurrency.lockutils [None req-3f3df2db-d1c2-4077-aa71-3352e0002504 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Lock "baada610-f563-4c97-89a9-56eba792c352-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:54:33 compute-1 nova_compute[225855]: 2026-01-20 14:54:33.423 225859 INFO nova.compute.manager [None req-3f3df2db-d1c2-4077-aa71-3352e0002504 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] [instance: baada610-f563-4c97-89a9-56eba792c352] Terminating instance
Jan 20 14:54:33 compute-1 nova_compute[225855]: 2026-01-20 14:54:33.424 225859 DEBUG nova.compute.manager [None req-3f3df2db-d1c2-4077-aa71-3352e0002504 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] [instance: baada610-f563-4c97-89a9-56eba792c352] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 20 14:54:33 compute-1 kernel: tapa3156414-5a (unregistering): left promiscuous mode
Jan 20 14:54:33 compute-1 NetworkManager[49104]: <info>  [1768920873.4843] device (tapa3156414-5a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 20 14:54:33 compute-1 ovn_controller[130490]: 2026-01-20T14:54:33Z|00493|binding|INFO|Releasing lport a3156414-5a96-462d-974e-a57c9cd8e9c8 from this chassis (sb_readonly=0)
Jan 20 14:54:33 compute-1 ovn_controller[130490]: 2026-01-20T14:54:33Z|00494|binding|INFO|Setting lport a3156414-5a96-462d-974e-a57c9cd8e9c8 down in Southbound
Jan 20 14:54:33 compute-1 nova_compute[225855]: 2026-01-20 14:54:33.491 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:54:33 compute-1 ovn_controller[130490]: 2026-01-20T14:54:33Z|00495|binding|INFO|Removing iface tapa3156414-5a ovn-installed in OVS
Jan 20 14:54:33 compute-1 nova_compute[225855]: 2026-01-20 14:54:33.493 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:54:33 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:54:33.497 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9e:93:82 10.100.0.3'], port_security=['fa:16:3e:9e:93:82 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'baada610-f563-4c97-89a9-56eba792c352', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-79184781-1f23-4584-87de-08e262242488', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0a29915e0dd2403fbd7b7e847696b00a', 'neutron:revision_number': '8', 'neutron:security_group_ids': '34326e47-c07e-48d1-9283-c1c5634fdc52', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.202', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6b73ab05-b29f-401a-84a5-ea1a96103f33, chassis=[], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], logical_port=a3156414-5a96-462d-974e-a57c9cd8e9c8) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:54:33 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:54:33.498 140354 INFO neutron.agent.ovn.metadata.agent [-] Port a3156414-5a96-462d-974e-a57c9cd8e9c8 in datapath 79184781-1f23-4584-87de-08e262242488 unbound from our chassis
Jan 20 14:54:33 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:54:33.500 140354 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 79184781-1f23-4584-87de-08e262242488
Jan 20 14:54:33 compute-1 nova_compute[225855]: 2026-01-20 14:54:33.507 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:54:33 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:54:33.516 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[2dd75bfa-13db-455b-a901-afdf6b7db496]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:54:33 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:54:33.540 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[71cba883-287a-49da-85a3-7404fc610648]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:54:33 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:54:33.543 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[8fe4eb73-9871-4f3a-881b-d6ef4e2abfb9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:54:33 compute-1 systemd[1]: machine-qemu\x2d57\x2dinstance\x2d00000077.scope: Deactivated successfully.
Jan 20 14:54:33 compute-1 systemd[1]: machine-qemu\x2d57\x2dinstance\x2d00000077.scope: Consumed 14.552s CPU time.
Jan 20 14:54:33 compute-1 systemd-machined[194361]: Machine qemu-57-instance-00000077 terminated.
Jan 20 14:54:33 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:54:33.575 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[811da5f0-f7f9-4e0a-9288-49906ac68906]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:54:33 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:54:33.590 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[c49d17df-f81d-4a67-b98a-51dfe19cc1bb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap79184781-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:38:7c:2a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 20, 'rx_bytes': 784, 'tx_bytes': 1028, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 20, 'rx_bytes': 784, 'tx_bytes': 1028, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 129], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 580405, 'reachable_time': 23111, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 274361, 'error': None, 'target': 'ovnmeta-79184781-1f23-4584-87de-08e262242488', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:54:33 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:54:33.604 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[497b7695-a3e2-4be7-8be8-81ea51549c47]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap79184781-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 580414, 'tstamp': 580414}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 274362, 'error': None, 'target': 'ovnmeta-79184781-1f23-4584-87de-08e262242488', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap79184781-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 580417, 'tstamp': 580417}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 274362, 'error': None, 'target': 'ovnmeta-79184781-1f23-4584-87de-08e262242488', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:54:33 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:54:33.606 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap79184781-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:54:33 compute-1 nova_compute[225855]: 2026-01-20 14:54:33.607 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:54:33 compute-1 nova_compute[225855]: 2026-01-20 14:54:33.611 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:54:33 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:54:33.612 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap79184781-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:54:33 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:54:33.613 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:54:33 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:54:33.613 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap79184781-10, col_values=(('external_ids', {'iface-id': 'b033e9e6-9781-4424-a20f-7b48a14e2c80'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:54:33 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:54:33.613 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:54:33 compute-1 nova_compute[225855]: 2026-01-20 14:54:33.642 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:54:33 compute-1 nova_compute[225855]: 2026-01-20 14:54:33.647 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:54:33 compute-1 nova_compute[225855]: 2026-01-20 14:54:33.660 225859 INFO nova.virt.libvirt.driver [-] [instance: baada610-f563-4c97-89a9-56eba792c352] Instance destroyed successfully.
Jan 20 14:54:33 compute-1 nova_compute[225855]: 2026-01-20 14:54:33.661 225859 DEBUG nova.objects.instance [None req-3f3df2db-d1c2-4077-aa71-3352e0002504 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Lazy-loading 'resources' on Instance uuid baada610-f563-4c97-89a9-56eba792c352 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:54:33 compute-1 nova_compute[225855]: 2026-01-20 14:54:33.687 225859 DEBUG nova.virt.libvirt.vif [None req-3f3df2db-d1c2-4077-aa71-3352e0002504 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T14:52:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerStableDeviceRescueTest-server-185388239',display_name='tempest-ServerStableDeviceRescueTest-server-185388239',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverstabledevicerescuetest-server-185388239',id=119,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIfbHk5SMnBEAUVZEhoLPfdB2qCay341zK720hYW5qflxdgcEr+fHp9C3kAgJFmqON8wn8DkPxW0WmihyCLPTK7Iiiy5VDiRJ7U/0O7hlyzm17ZWhCVdPfXSugKxmeVL3w==',key_name='tempest-keypair-647310408',keypairs=<?>,launch_index=0,launched_at=2026-01-20T14:53:54Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='0a29915e0dd2403fbd7b7e847696b00a',ramdisk_id='',reservation_id='r-d9074tfl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerStableDeviceRescueTest-129078052',owner_user_name='tempest-ServerStableDeviceRescueTest-129078052-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T14:54:00Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='d85d286ce6224326a0f4a15a06afbfea',uuid=baada610-f563-4c97-89a9-56eba792c352,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a3156414-5a96-462d-974e-a57c9cd8e9c8", "address": "fa:16:3e:9e:93:82", "network": {"id": "79184781-1f23-4584-87de-08e262242488", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-165460946-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a29915e0dd2403fbd7b7e847696b00a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3156414-5a", "ovs_interfaceid": "a3156414-5a96-462d-974e-a57c9cd8e9c8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 20 14:54:33 compute-1 nova_compute[225855]: 2026-01-20 14:54:33.688 225859 DEBUG nova.network.os_vif_util [None req-3f3df2db-d1c2-4077-aa71-3352e0002504 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Converting VIF {"id": "a3156414-5a96-462d-974e-a57c9cd8e9c8", "address": "fa:16:3e:9e:93:82", "network": {"id": "79184781-1f23-4584-87de-08e262242488", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-165460946-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a29915e0dd2403fbd7b7e847696b00a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3156414-5a", "ovs_interfaceid": "a3156414-5a96-462d-974e-a57c9cd8e9c8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:54:33 compute-1 nova_compute[225855]: 2026-01-20 14:54:33.688 225859 DEBUG nova.network.os_vif_util [None req-3f3df2db-d1c2-4077-aa71-3352e0002504 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:9e:93:82,bridge_name='br-int',has_traffic_filtering=True,id=a3156414-5a96-462d-974e-a57c9cd8e9c8,network=Network(79184781-1f23-4584-87de-08e262242488),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa3156414-5a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:54:33 compute-1 nova_compute[225855]: 2026-01-20 14:54:33.688 225859 DEBUG os_vif [None req-3f3df2db-d1c2-4077-aa71-3352e0002504 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:9e:93:82,bridge_name='br-int',has_traffic_filtering=True,id=a3156414-5a96-462d-974e-a57c9cd8e9c8,network=Network(79184781-1f23-4584-87de-08e262242488),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa3156414-5a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 20 14:54:33 compute-1 nova_compute[225855]: 2026-01-20 14:54:33.689 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:54:33 compute-1 nova_compute[225855]: 2026-01-20 14:54:33.690 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa3156414-5a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:54:33 compute-1 nova_compute[225855]: 2026-01-20 14:54:33.691 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:54:33 compute-1 nova_compute[225855]: 2026-01-20 14:54:33.693 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:54:33 compute-1 nova_compute[225855]: 2026-01-20 14:54:33.695 225859 INFO os_vif [None req-3f3df2db-d1c2-4077-aa71-3352e0002504 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:9e:93:82,bridge_name='br-int',has_traffic_filtering=True,id=a3156414-5a96-462d-974e-a57c9cd8e9c8,network=Network(79184781-1f23-4584-87de-08e262242488),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa3156414-5a')
Jan 20 14:54:33 compute-1 ceph-mon[81775]: osdmap e296: 3 total, 3 up, 3 in
Jan 20 14:54:33 compute-1 nova_compute[225855]: 2026-01-20 14:54:33.724 225859 DEBUG nova.compute.manager [req-fd293b95-32ff-41d2-9d47-2f769ff01830 req-11317e23-ea25-48bb-adde-c4efbcfbc12d 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: baada610-f563-4c97-89a9-56eba792c352] Received event network-vif-unplugged-a3156414-5a96-462d-974e-a57c9cd8e9c8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:54:33 compute-1 nova_compute[225855]: 2026-01-20 14:54:33.725 225859 DEBUG oslo_concurrency.lockutils [req-fd293b95-32ff-41d2-9d47-2f769ff01830 req-11317e23-ea25-48bb-adde-c4efbcfbc12d 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "baada610-f563-4c97-89a9-56eba792c352-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:54:33 compute-1 nova_compute[225855]: 2026-01-20 14:54:33.725 225859 DEBUG oslo_concurrency.lockutils [req-fd293b95-32ff-41d2-9d47-2f769ff01830 req-11317e23-ea25-48bb-adde-c4efbcfbc12d 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "baada610-f563-4c97-89a9-56eba792c352-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:54:33 compute-1 nova_compute[225855]: 2026-01-20 14:54:33.725 225859 DEBUG oslo_concurrency.lockutils [req-fd293b95-32ff-41d2-9d47-2f769ff01830 req-11317e23-ea25-48bb-adde-c4efbcfbc12d 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "baada610-f563-4c97-89a9-56eba792c352-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:54:33 compute-1 nova_compute[225855]: 2026-01-20 14:54:33.726 225859 DEBUG nova.compute.manager [req-fd293b95-32ff-41d2-9d47-2f769ff01830 req-11317e23-ea25-48bb-adde-c4efbcfbc12d 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: baada610-f563-4c97-89a9-56eba792c352] No waiting events found dispatching network-vif-unplugged-a3156414-5a96-462d-974e-a57c9cd8e9c8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:54:33 compute-1 nova_compute[225855]: 2026-01-20 14:54:33.726 225859 DEBUG nova.compute.manager [req-fd293b95-32ff-41d2-9d47-2f769ff01830 req-11317e23-ea25-48bb-adde-c4efbcfbc12d 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: baada610-f563-4c97-89a9-56eba792c352] Received event network-vif-unplugged-a3156414-5a96-462d-974e-a57c9cd8e9c8 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 20 14:54:33 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:54:33 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:54:33 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:54:33.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:54:34 compute-1 nova_compute[225855]: 2026-01-20 14:54:34.611 225859 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1768920859.6104903, 7f5cfffe-c1dc-4b00-844e-0fb35b340f44 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:54:34 compute-1 nova_compute[225855]: 2026-01-20 14:54:34.611 225859 INFO nova.compute.manager [-] [instance: 7f5cfffe-c1dc-4b00-844e-0fb35b340f44] VM Stopped (Lifecycle Event)
Jan 20 14:54:34 compute-1 nova_compute[225855]: 2026-01-20 14:54:34.641 225859 DEBUG nova.compute.manager [None req-cadf0971-3b18-4893-be0e-a7e7d3917b2e - - - - - -] [instance: 7f5cfffe-c1dc-4b00-844e-0fb35b340f44] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:54:34 compute-1 nova_compute[225855]: 2026-01-20 14:54:34.842 225859 INFO nova.virt.libvirt.driver [None req-3f3df2db-d1c2-4077-aa71-3352e0002504 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] [instance: baada610-f563-4c97-89a9-56eba792c352] Deleting instance files /var/lib/nova/instances/baada610-f563-4c97-89a9-56eba792c352_del
Jan 20 14:54:34 compute-1 nova_compute[225855]: 2026-01-20 14:54:34.843 225859 INFO nova.virt.libvirt.driver [None req-3f3df2db-d1c2-4077-aa71-3352e0002504 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] [instance: baada610-f563-4c97-89a9-56eba792c352] Deletion of /var/lib/nova/instances/baada610-f563-4c97-89a9-56eba792c352_del complete
Jan 20 14:54:34 compute-1 nova_compute[225855]: 2026-01-20 14:54:34.907 225859 INFO nova.compute.manager [None req-3f3df2db-d1c2-4077-aa71-3352e0002504 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] [instance: baada610-f563-4c97-89a9-56eba792c352] Took 1.48 seconds to destroy the instance on the hypervisor.
Jan 20 14:54:34 compute-1 nova_compute[225855]: 2026-01-20 14:54:34.907 225859 DEBUG oslo.service.loopingcall [None req-3f3df2db-d1c2-4077-aa71-3352e0002504 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 20 14:54:34 compute-1 nova_compute[225855]: 2026-01-20 14:54:34.908 225859 DEBUG nova.compute.manager [-] [instance: baada610-f563-4c97-89a9-56eba792c352] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 20 14:54:34 compute-1 nova_compute[225855]: 2026-01-20 14:54:34.908 225859 DEBUG nova.network.neutron [-] [instance: baada610-f563-4c97-89a9-56eba792c352] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 20 14:54:35 compute-1 ceph-mon[81775]: pgmap v2050: 321 pgs: 321 active+clean; 826 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 686 KiB/s rd, 498 KiB/s wr, 63 op/s
Jan 20 14:54:35 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:54:35 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:54:35 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:54:35.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:54:35 compute-1 nova_compute[225855]: 2026-01-20 14:54:35.358 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:54:35 compute-1 nova_compute[225855]: 2026-01-20 14:54:35.833 225859 DEBUG nova.compute.manager [req-251f362a-c134-423d-9f27-84c52c55eee7 req-39d92fff-67ca-4b3d-9f99-e94bd4c812ca 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: baada610-f563-4c97-89a9-56eba792c352] Received event network-vif-plugged-a3156414-5a96-462d-974e-a57c9cd8e9c8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:54:35 compute-1 nova_compute[225855]: 2026-01-20 14:54:35.833 225859 DEBUG oslo_concurrency.lockutils [req-251f362a-c134-423d-9f27-84c52c55eee7 req-39d92fff-67ca-4b3d-9f99-e94bd4c812ca 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "baada610-f563-4c97-89a9-56eba792c352-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:54:35 compute-1 nova_compute[225855]: 2026-01-20 14:54:35.835 225859 DEBUG oslo_concurrency.lockutils [req-251f362a-c134-423d-9f27-84c52c55eee7 req-39d92fff-67ca-4b3d-9f99-e94bd4c812ca 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "baada610-f563-4c97-89a9-56eba792c352-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:54:35 compute-1 nova_compute[225855]: 2026-01-20 14:54:35.835 225859 DEBUG oslo_concurrency.lockutils [req-251f362a-c134-423d-9f27-84c52c55eee7 req-39d92fff-67ca-4b3d-9f99-e94bd4c812ca 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "baada610-f563-4c97-89a9-56eba792c352-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:54:35 compute-1 nova_compute[225855]: 2026-01-20 14:54:35.836 225859 DEBUG nova.compute.manager [req-251f362a-c134-423d-9f27-84c52c55eee7 req-39d92fff-67ca-4b3d-9f99-e94bd4c812ca 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: baada610-f563-4c97-89a9-56eba792c352] No waiting events found dispatching network-vif-plugged-a3156414-5a96-462d-974e-a57c9cd8e9c8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:54:35 compute-1 nova_compute[225855]: 2026-01-20 14:54:35.836 225859 WARNING nova.compute.manager [req-251f362a-c134-423d-9f27-84c52c55eee7 req-39d92fff-67ca-4b3d-9f99-e94bd4c812ca 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: baada610-f563-4c97-89a9-56eba792c352] Received unexpected event network-vif-plugged-a3156414-5a96-462d-974e-a57c9cd8e9c8 for instance with vm_state active and task_state deleting.
Jan 20 14:54:35 compute-1 nova_compute[225855]: 2026-01-20 14:54:35.838 225859 DEBUG nova.network.neutron [-] [instance: baada610-f563-4c97-89a9-56eba792c352] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:54:35 compute-1 nova_compute[225855]: 2026-01-20 14:54:35.861 225859 INFO nova.compute.manager [-] [instance: baada610-f563-4c97-89a9-56eba792c352] Took 0.95 seconds to deallocate network for instance.
Jan 20 14:54:35 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:54:35 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:54:35 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:54:35.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:54:35 compute-1 nova_compute[225855]: 2026-01-20 14:54:35.920 225859 DEBUG oslo_concurrency.lockutils [None req-3f3df2db-d1c2-4077-aa71-3352e0002504 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:54:35 compute-1 nova_compute[225855]: 2026-01-20 14:54:35.920 225859 DEBUG oslo_concurrency.lockutils [None req-3f3df2db-d1c2-4077-aa71-3352e0002504 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:54:35 compute-1 nova_compute[225855]: 2026-01-20 14:54:35.923 225859 DEBUG nova.compute.manager [req-6d5fd803-0264-45d4-a928-544ee3377e22 req-e065ff20-b221-4bf6-afba-954dd9e53c95 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: baada610-f563-4c97-89a9-56eba792c352] Received event network-vif-deleted-a3156414-5a96-462d-974e-a57c9cd8e9c8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:54:35 compute-1 nova_compute[225855]: 2026-01-20 14:54:35.999 225859 DEBUG oslo_concurrency.processutils [None req-3f3df2db-d1c2-4077-aa71-3352e0002504 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:54:36 compute-1 nova_compute[225855]: 2026-01-20 14:54:36.032 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:54:36 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/3198119657' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:54:36 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/3718312487' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:54:36 compute-1 ceph-mon[81775]: pgmap v2051: 321 pgs: 321 active+clean; 748 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 538 KiB/s rd, 2.2 MiB/s wr, 107 op/s
Jan 20 14:54:36 compute-1 nova_compute[225855]: 2026-01-20 14:54:36.340 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:54:36 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:54:36 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2926708069' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:54:36 compute-1 nova_compute[225855]: 2026-01-20 14:54:36.454 225859 DEBUG oslo_concurrency.processutils [None req-3f3df2db-d1c2-4077-aa71-3352e0002504 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:54:36 compute-1 nova_compute[225855]: 2026-01-20 14:54:36.464 225859 DEBUG nova.compute.provider_tree [None req-3f3df2db-d1c2-4077-aa71-3352e0002504 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:54:36 compute-1 nova_compute[225855]: 2026-01-20 14:54:36.484 225859 DEBUG nova.scheduler.client.report [None req-3f3df2db-d1c2-4077-aa71-3352e0002504 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:54:36 compute-1 nova_compute[225855]: 2026-01-20 14:54:36.517 225859 DEBUG oslo_concurrency.lockutils [None req-3f3df2db-d1c2-4077-aa71-3352e0002504 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.596s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:54:36 compute-1 nova_compute[225855]: 2026-01-20 14:54:36.562 225859 INFO nova.scheduler.client.report [None req-3f3df2db-d1c2-4077-aa71-3352e0002504 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Deleted allocations for instance baada610-f563-4c97-89a9-56eba792c352
Jan 20 14:54:36 compute-1 nova_compute[225855]: 2026-01-20 14:54:36.632 225859 DEBUG oslo_concurrency.lockutils [None req-3f3df2db-d1c2-4077-aa71-3352e0002504 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Lock "baada610-f563-4c97-89a9-56eba792c352" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.211s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:54:37 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:54:37 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:54:37 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:54:37.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:54:37 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/2926708069' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:54:37 compute-1 nova_compute[225855]: 2026-01-20 14:54:37.340 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:54:37 compute-1 nova_compute[225855]: 2026-01-20 14:54:37.341 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 14:54:37 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e296 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:54:37 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:54:37 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:54:37 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:54:37.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:54:38 compute-1 ceph-mon[81775]: pgmap v2052: 321 pgs: 321 active+clean; 673 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 296 KiB/s rd, 2.2 MiB/s wr, 137 op/s
Jan 20 14:54:38 compute-1 nova_compute[225855]: 2026-01-20 14:54:38.341 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:54:38 compute-1 nova_compute[225855]: 2026-01-20 14:54:38.341 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 14:54:38 compute-1 nova_compute[225855]: 2026-01-20 14:54:38.366 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 20 14:54:38 compute-1 nova_compute[225855]: 2026-01-20 14:54:38.691 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:54:39 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:54:39 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:54:39 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:54:39.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:54:39 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/3424035681' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:54:39 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/61354593' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 14:54:39 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/61354593' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 14:54:39 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e297 e297: 3 total, 3 up, 3 in
Jan 20 14:54:39 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:54:39 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:54:39 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:54:39.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:54:40 compute-1 nova_compute[225855]: 2026-01-20 14:54:40.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:54:40 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e298 e298: 3 total, 3 up, 3 in
Jan 20 14:54:40 compute-1 ceph-mon[81775]: osdmap e297: 3 total, 3 up, 3 in
Jan 20 14:54:40 compute-1 ceph-mon[81775]: pgmap v2054: 321 pgs: 321 active+clean; 660 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 847 KiB/s rd, 2.7 MiB/s wr, 202 op/s
Jan 20 14:54:40 compute-1 ceph-mon[81775]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #94. Immutable memtables: 0.
Jan 20 14:54:40 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:54:40.733262) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 20 14:54:40 compute-1 ceph-mon[81775]: rocksdb: [db/flush_job.cc:856] [default] [JOB 57] Flushing memtable with next log file: 94
Jan 20 14:54:40 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768920880733295, "job": 57, "event": "flush_started", "num_memtables": 1, "num_entries": 673, "num_deletes": 255, "total_data_size": 1044116, "memory_usage": 1058016, "flush_reason": "Manual Compaction"}
Jan 20 14:54:40 compute-1 ceph-mon[81775]: rocksdb: [db/flush_job.cc:885] [default] [JOB 57] Level-0 flush table #95: started
Jan 20 14:54:40 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768920880742964, "cf_name": "default", "job": 57, "event": "table_file_creation", "file_number": 95, "file_size": 687669, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 49069, "largest_seqno": 49737, "table_properties": {"data_size": 684234, "index_size": 1279, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1093, "raw_key_size": 8344, "raw_average_key_size": 20, "raw_value_size": 677213, "raw_average_value_size": 1631, "num_data_blocks": 55, "num_entries": 415, "num_filter_entries": 415, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768920850, "oldest_key_time": 1768920850, "file_creation_time": 1768920880, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1539d774-8a6f-4e48-b253-137c44586344", "db_session_id": "LFF7G2OZDOU7TKQ8MKAH", "orig_file_number": 95, "seqno_to_time_mapping": "N/A"}}
Jan 20 14:54:40 compute-1 ceph-mon[81775]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 57] Flush lasted 9736 microseconds, and 2819 cpu microseconds.
Jan 20 14:54:40 compute-1 ceph-mon[81775]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 14:54:40 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:54:40.742998) [db/flush_job.cc:967] [default] [JOB 57] Level-0 flush table #95: 687669 bytes OK
Jan 20 14:54:40 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:54:40.743016) [db/memtable_list.cc:519] [default] Level-0 commit table #95 started
Jan 20 14:54:40 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:54:40.747612) [db/memtable_list.cc:722] [default] Level-0 commit table #95: memtable #1 done
Jan 20 14:54:40 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:54:40.747635) EVENT_LOG_v1 {"time_micros": 1768920880747628, "job": 57, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 20 14:54:40 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:54:40.747656) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 20 14:54:40 compute-1 ceph-mon[81775]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 57] Try to delete WAL files size 1040362, prev total WAL file size 1040362, number of live WAL files 2.
Jan 20 14:54:40 compute-1 ceph-mon[81775]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000091.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 14:54:40 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:54:40.748361) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034303136' seq:72057594037927935, type:22 .. '7061786F730034323638' seq:0, type:0; will stop at (end)
Jan 20 14:54:40 compute-1 ceph-mon[81775]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 58] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 20 14:54:40 compute-1 ceph-mon[81775]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 57 Base level 0, inputs: [95(671KB)], [93(13MB)]
Jan 20 14:54:40 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768920880748424, "job": 58, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [95], "files_L6": [93], "score": -1, "input_data_size": 14320766, "oldest_snapshot_seqno": -1}
Jan 20 14:54:40 compute-1 ceph-mon[81775]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 58] Generated table #96: 7487 keys, 12472866 bytes, temperature: kUnknown
Jan 20 14:54:40 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768920880920780, "cf_name": "default", "job": 58, "event": "table_file_creation", "file_number": 96, "file_size": 12472866, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12420540, "index_size": 32502, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 18757, "raw_key_size": 193754, "raw_average_key_size": 25, "raw_value_size": 12284471, "raw_average_value_size": 1640, "num_data_blocks": 1286, "num_entries": 7487, "num_filter_entries": 7487, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768917474, "oldest_key_time": 0, "file_creation_time": 1768920880, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1539d774-8a6f-4e48-b253-137c44586344", "db_session_id": "LFF7G2OZDOU7TKQ8MKAH", "orig_file_number": 96, "seqno_to_time_mapping": "N/A"}}
Jan 20 14:54:40 compute-1 ceph-mon[81775]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 14:54:40 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:54:40.921061) [db/compaction/compaction_job.cc:1663] [default] [JOB 58] Compacted 1@0 + 1@6 files to L6 => 12472866 bytes
Jan 20 14:54:40 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:54:40.923095) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 83.0 rd, 72.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.7, 13.0 +0.0 blob) out(11.9 +0.0 blob), read-write-amplify(39.0) write-amplify(18.1) OK, records in: 8011, records dropped: 524 output_compression: NoCompression
Jan 20 14:54:40 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:54:40.923135) EVENT_LOG_v1 {"time_micros": 1768920880923120, "job": 58, "event": "compaction_finished", "compaction_time_micros": 172476, "compaction_time_cpu_micros": 29016, "output_level": 6, "num_output_files": 1, "total_output_size": 12472866, "num_input_records": 8011, "num_output_records": 7487, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 20 14:54:40 compute-1 ceph-mon[81775]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000095.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 14:54:40 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768920880923546, "job": 58, "event": "table_file_deletion", "file_number": 95}
Jan 20 14:54:40 compute-1 ceph-mon[81775]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000093.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 14:54:40 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768920880925813, "job": 58, "event": "table_file_deletion", "file_number": 93}
Jan 20 14:54:40 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:54:40.748183) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:54:40 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:54:40.925886) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:54:40 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:54:40.925893) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:54:40 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:54:40.925896) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:54:40 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:54:40.925899) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:54:40 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:54:40.925902) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:54:41 compute-1 nova_compute[225855]: 2026-01-20 14:54:41.035 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:54:41 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:54:41 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:54:41 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:54:41.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:54:41 compute-1 nova_compute[225855]: 2026-01-20 14:54:41.552 225859 DEBUG oslo_concurrency.lockutils [None req-b4971f7c-3726-4f85-af64-620f9ac7eebf d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Acquiring lock "23ea4537-f03f-46de-881f-b979e232a3b9" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:54:41 compute-1 nova_compute[225855]: 2026-01-20 14:54:41.553 225859 DEBUG oslo_concurrency.lockutils [None req-b4971f7c-3726-4f85-af64-620f9ac7eebf d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Lock "23ea4537-f03f-46de-881f-b979e232a3b9" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:54:41 compute-1 nova_compute[225855]: 2026-01-20 14:54:41.553 225859 DEBUG oslo_concurrency.lockutils [None req-b4971f7c-3726-4f85-af64-620f9ac7eebf d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Acquiring lock "23ea4537-f03f-46de-881f-b979e232a3b9-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:54:41 compute-1 nova_compute[225855]: 2026-01-20 14:54:41.553 225859 DEBUG oslo_concurrency.lockutils [None req-b4971f7c-3726-4f85-af64-620f9ac7eebf d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Lock "23ea4537-f03f-46de-881f-b979e232a3b9-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:54:41 compute-1 nova_compute[225855]: 2026-01-20 14:54:41.553 225859 DEBUG oslo_concurrency.lockutils [None req-b4971f7c-3726-4f85-af64-620f9ac7eebf d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Lock "23ea4537-f03f-46de-881f-b979e232a3b9-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:54:41 compute-1 nova_compute[225855]: 2026-01-20 14:54:41.554 225859 INFO nova.compute.manager [None req-b4971f7c-3726-4f85-af64-620f9ac7eebf d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] [instance: 23ea4537-f03f-46de-881f-b979e232a3b9] Terminating instance
Jan 20 14:54:41 compute-1 nova_compute[225855]: 2026-01-20 14:54:41.555 225859 DEBUG nova.compute.manager [None req-b4971f7c-3726-4f85-af64-620f9ac7eebf d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] [instance: 23ea4537-f03f-46de-881f-b979e232a3b9] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 20 14:54:41 compute-1 ceph-mon[81775]: osdmap e298: 3 total, 3 up, 3 in
Jan 20 14:54:41 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/3706603828' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:54:41 compute-1 kernel: tap234381ea-07 (unregistering): left promiscuous mode
Jan 20 14:54:41 compute-1 NetworkManager[49104]: <info>  [1768920881.8917] device (tap234381ea-07): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 20 14:54:41 compute-1 ovn_controller[130490]: 2026-01-20T14:54:41Z|00496|binding|INFO|Releasing lport 234381ea-07b1-41fe-b3c1-be97ce6a3b64 from this chassis (sb_readonly=0)
Jan 20 14:54:41 compute-1 ovn_controller[130490]: 2026-01-20T14:54:41Z|00497|binding|INFO|Setting lport 234381ea-07b1-41fe-b3c1-be97ce6a3b64 down in Southbound
Jan 20 14:54:41 compute-1 nova_compute[225855]: 2026-01-20 14:54:41.904 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:54:41 compute-1 ovn_controller[130490]: 2026-01-20T14:54:41Z|00498|binding|INFO|Removing iface tap234381ea-07 ovn-installed in OVS
Jan 20 14:54:41 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:54:41.910 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b5:55:3c 10.100.0.14'], port_security=['fa:16:3e:b5:55:3c 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '23ea4537-f03f-46de-881f-b979e232a3b9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-79184781-1f23-4584-87de-08e262242488', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0a29915e0dd2403fbd7b7e847696b00a', 'neutron:revision_number': '8', 'neutron:security_group_ids': '30ec24b7-15ba-4aeb-9785-539071729f77', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6b73ab05-b29f-401a-84a5-ea1a96103f33, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], logical_port=234381ea-07b1-41fe-b3c1-be97ce6a3b64) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:54:41 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:54:41.911 140354 INFO neutron.agent.ovn.metadata.agent [-] Port 234381ea-07b1-41fe-b3c1-be97ce6a3b64 in datapath 79184781-1f23-4584-87de-08e262242488 unbound from our chassis
Jan 20 14:54:41 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:54:41.912 140354 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 79184781-1f23-4584-87de-08e262242488, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 20 14:54:41 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:54:41.913 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[926ab9d8-3bf8-41e6-bbff-8af2cbb80c04]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:54:41 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:54:41.914 140354 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-79184781-1f23-4584-87de-08e262242488 namespace which is not needed anymore
Jan 20 14:54:41 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:54:41 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:54:41 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:54:41.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:54:41 compute-1 nova_compute[225855]: 2026-01-20 14:54:41.934 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:54:41 compute-1 systemd[1]: machine-qemu\x2d54\x2dinstance\x2d00000075.scope: Deactivated successfully.
Jan 20 14:54:41 compute-1 systemd[1]: machine-qemu\x2d54\x2dinstance\x2d00000075.scope: Consumed 17.674s CPU time.
Jan 20 14:54:41 compute-1 systemd-machined[194361]: Machine qemu-54-instance-00000075 terminated.
Jan 20 14:54:41 compute-1 nova_compute[225855]: 2026-01-20 14:54:41.992 225859 INFO nova.virt.libvirt.driver [-] [instance: 23ea4537-f03f-46de-881f-b979e232a3b9] Instance destroyed successfully.
Jan 20 14:54:41 compute-1 nova_compute[225855]: 2026-01-20 14:54:41.993 225859 DEBUG nova.objects.instance [None req-b4971f7c-3726-4f85-af64-620f9ac7eebf d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Lazy-loading 'resources' on Instance uuid 23ea4537-f03f-46de-881f-b979e232a3b9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:54:42 compute-1 nova_compute[225855]: 2026-01-20 14:54:42.010 225859 DEBUG nova.virt.libvirt.vif [None req-b4971f7c-3726-4f85-af64-620f9ac7eebf d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T14:51:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerStableDeviceRescueTest-server-828759404',display_name='tempest-ServerStableDeviceRescueTest-server-828759404',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverstabledevicerescuetest-server-828759404',id=117,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-20T14:52:36Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='0a29915e0dd2403fbd7b7e847696b00a',ramdisk_id='',reservation_id='r-ztpzn050',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model
='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerStableDeviceRescueTest-129078052',owner_user_name='tempest-ServerStableDeviceRescueTest-129078052-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T14:52:46Z,user_data=None,user_id='d85d286ce6224326a0f4a15a06afbfea',uuid=23ea4537-f03f-46de-881f-b979e232a3b9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "234381ea-07b1-41fe-b3c1-be97ce6a3b64", "address": "fa:16:3e:b5:55:3c", "network": {"id": "79184781-1f23-4584-87de-08e262242488", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-165460946-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a29915e0dd2403fbd7b7e847696b00a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap234381ea-07", "ovs_interfaceid": "234381ea-07b1-41fe-b3c1-be97ce6a3b64", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 20 14:54:42 compute-1 nova_compute[225855]: 2026-01-20 14:54:42.010 225859 DEBUG nova.network.os_vif_util [None req-b4971f7c-3726-4f85-af64-620f9ac7eebf d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Converting VIF {"id": "234381ea-07b1-41fe-b3c1-be97ce6a3b64", "address": "fa:16:3e:b5:55:3c", "network": {"id": "79184781-1f23-4584-87de-08e262242488", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-165460946-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a29915e0dd2403fbd7b7e847696b00a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap234381ea-07", "ovs_interfaceid": "234381ea-07b1-41fe-b3c1-be97ce6a3b64", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:54:42 compute-1 nova_compute[225855]: 2026-01-20 14:54:42.011 225859 DEBUG nova.network.os_vif_util [None req-b4971f7c-3726-4f85-af64-620f9ac7eebf d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:b5:55:3c,bridge_name='br-int',has_traffic_filtering=True,id=234381ea-07b1-41fe-b3c1-be97ce6a3b64,network=Network(79184781-1f23-4584-87de-08e262242488),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap234381ea-07') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:54:42 compute-1 nova_compute[225855]: 2026-01-20 14:54:42.012 225859 DEBUG os_vif [None req-b4971f7c-3726-4f85-af64-620f9ac7eebf d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:b5:55:3c,bridge_name='br-int',has_traffic_filtering=True,id=234381ea-07b1-41fe-b3c1-be97ce6a3b64,network=Network(79184781-1f23-4584-87de-08e262242488),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap234381ea-07') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 20 14:54:42 compute-1 nova_compute[225855]: 2026-01-20 14:54:42.013 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:54:42 compute-1 nova_compute[225855]: 2026-01-20 14:54:42.013 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap234381ea-07, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:54:42 compute-1 nova_compute[225855]: 2026-01-20 14:54:42.016 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 14:54:42 compute-1 nova_compute[225855]: 2026-01-20 14:54:42.018 225859 INFO os_vif [None req-b4971f7c-3726-4f85-af64-620f9ac7eebf d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:b5:55:3c,bridge_name='br-int',has_traffic_filtering=True,id=234381ea-07b1-41fe-b3c1-be97ce6a3b64,network=Network(79184781-1f23-4584-87de-08e262242488),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap234381ea-07')
Jan 20 14:54:42 compute-1 podman[274420]: 2026-01-20 14:54:42.021621054 +0000 UTC m=+0.103108481 container health_status 72be95c12e041eb2cd1a16ad70dfa1391d9f03a5f3180a614ead9459dcfcd46f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 20 14:54:42 compute-1 neutron-haproxy-ovnmeta-79184781-1f23-4584-87de-08e262242488[271777]: [NOTICE]   (271782) : haproxy version is 2.8.14-c23fe91
Jan 20 14:54:42 compute-1 neutron-haproxy-ovnmeta-79184781-1f23-4584-87de-08e262242488[271777]: [NOTICE]   (271782) : path to executable is /usr/sbin/haproxy
Jan 20 14:54:42 compute-1 neutron-haproxy-ovnmeta-79184781-1f23-4584-87de-08e262242488[271777]: [WARNING]  (271782) : Exiting Master process...
Jan 20 14:54:42 compute-1 neutron-haproxy-ovnmeta-79184781-1f23-4584-87de-08e262242488[271777]: [ALERT]    (271782) : Current worker (271784) exited with code 143 (Terminated)
Jan 20 14:54:42 compute-1 neutron-haproxy-ovnmeta-79184781-1f23-4584-87de-08e262242488[271777]: [WARNING]  (271782) : All workers exited. Exiting... (0)
Jan 20 14:54:42 compute-1 systemd[1]: libpod-8659aeb836f14239190951253b93a3be94364fbe5edb0e5964485303d9ef695f.scope: Deactivated successfully.
Jan 20 14:54:42 compute-1 podman[274480]: 2026-01-20 14:54:42.065075971 +0000 UTC m=+0.047386789 container died 8659aeb836f14239190951253b93a3be94364fbe5edb0e5964485303d9ef695f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-79184781-1f23-4584-87de-08e262242488, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 20 14:54:42 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8659aeb836f14239190951253b93a3be94364fbe5edb0e5964485303d9ef695f-userdata-shm.mount: Deactivated successfully.
Jan 20 14:54:42 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:54:42.093 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5ffd4ac3-9266-4927-98ad-20a17782c725, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '39'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:54:42 compute-1 systemd[1]: var-lib-containers-storage-overlay-31b783ff1c4e23d189e5eb541d582f8638c64e811963a516b7ed73c8d8eafb4a-merged.mount: Deactivated successfully.
Jan 20 14:54:42 compute-1 podman[274480]: 2026-01-20 14:54:42.116439931 +0000 UTC m=+0.098750749 container cleanup 8659aeb836f14239190951253b93a3be94364fbe5edb0e5964485303d9ef695f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-79184781-1f23-4584-87de-08e262242488, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Jan 20 14:54:42 compute-1 systemd[1]: libpod-conmon-8659aeb836f14239190951253b93a3be94364fbe5edb0e5964485303d9ef695f.scope: Deactivated successfully.
Jan 20 14:54:42 compute-1 podman[274529]: 2026-01-20 14:54:42.194490175 +0000 UTC m=+0.056544607 container remove 8659aeb836f14239190951253b93a3be94364fbe5edb0e5964485303d9ef695f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-79184781-1f23-4584-87de-08e262242488, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 20 14:54:42 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:54:42.201 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[c3b626fc-d490-4496-9656-e6a64239e5e4]: (4, ('Tue Jan 20 02:54:42 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-79184781-1f23-4584-87de-08e262242488 (8659aeb836f14239190951253b93a3be94364fbe5edb0e5964485303d9ef695f)\n8659aeb836f14239190951253b93a3be94364fbe5edb0e5964485303d9ef695f\nTue Jan 20 02:54:42 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-79184781-1f23-4584-87de-08e262242488 (8659aeb836f14239190951253b93a3be94364fbe5edb0e5964485303d9ef695f)\n8659aeb836f14239190951253b93a3be94364fbe5edb0e5964485303d9ef695f\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:54:42 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:54:42.203 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[87da468b-1f92-4fd1-a45c-110e8170ff98]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:54:42 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:54:42.204 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap79184781-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:54:42 compute-1 nova_compute[225855]: 2026-01-20 14:54:42.206 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:54:42 compute-1 kernel: tap79184781-10: left promiscuous mode
Jan 20 14:54:42 compute-1 nova_compute[225855]: 2026-01-20 14:54:42.223 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:54:42 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:54:42.226 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[d46a20ab-ded7-4dfc-aa32-90e87eefd2e1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:54:42 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:54:42.240 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[eefc6a25-33b6-42b2-8b74-fd46957483d7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:54:42 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:54:42.241 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[9dbb4a93-f1e5-4c7d-9c8d-7ebf1b08a49a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:54:42 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:54:42.255 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[fd0f49cb-44c0-41f9-9eef-3499e27884d6]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 580397, 'reachable_time': 26164, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 274545, 'error': None, 'target': 'ovnmeta-79184781-1f23-4584-87de-08e262242488', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:54:42 compute-1 systemd[1]: run-netns-ovnmeta\x2d79184781\x2d1f23\x2d4584\x2d87de\x2d08e262242488.mount: Deactivated successfully.
Jan 20 14:54:42 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:54:42.258 140466 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-79184781-1f23-4584-87de-08e262242488 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 20 14:54:42 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:54:42.259 140466 DEBUG oslo.privsep.daemon [-] privsep: reply[ff579ff8-de19-4932-aeff-c4d67b7d6ae5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:54:42 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 20 14:54:42 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3510984213' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 14:54:42 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 20 14:54:42 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3510984213' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 14:54:42 compute-1 nova_compute[225855]: 2026-01-20 14:54:42.412 225859 INFO nova.virt.libvirt.driver [None req-b4971f7c-3726-4f85-af64-620f9ac7eebf d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] [instance: 23ea4537-f03f-46de-881f-b979e232a3b9] Deleting instance files /var/lib/nova/instances/23ea4537-f03f-46de-881f-b979e232a3b9_del
Jan 20 14:54:42 compute-1 nova_compute[225855]: 2026-01-20 14:54:42.413 225859 INFO nova.virt.libvirt.driver [None req-b4971f7c-3726-4f85-af64-620f9ac7eebf d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] [instance: 23ea4537-f03f-46de-881f-b979e232a3b9] Deletion of /var/lib/nova/instances/23ea4537-f03f-46de-881f-b979e232a3b9_del complete
Jan 20 14:54:42 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:54:42 compute-1 nova_compute[225855]: 2026-01-20 14:54:42.786 225859 DEBUG nova.compute.manager [req-c7ca7af7-4363-4e44-8137-859a60cb36a7 req-608ed09f-e9a1-4846-8b0d-5351fd567649 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 23ea4537-f03f-46de-881f-b979e232a3b9] Received event network-vif-unplugged-234381ea-07b1-41fe-b3c1-be97ce6a3b64 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:54:42 compute-1 nova_compute[225855]: 2026-01-20 14:54:42.786 225859 DEBUG oslo_concurrency.lockutils [req-c7ca7af7-4363-4e44-8137-859a60cb36a7 req-608ed09f-e9a1-4846-8b0d-5351fd567649 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "23ea4537-f03f-46de-881f-b979e232a3b9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:54:42 compute-1 nova_compute[225855]: 2026-01-20 14:54:42.787 225859 DEBUG oslo_concurrency.lockutils [req-c7ca7af7-4363-4e44-8137-859a60cb36a7 req-608ed09f-e9a1-4846-8b0d-5351fd567649 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "23ea4537-f03f-46de-881f-b979e232a3b9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:54:42 compute-1 nova_compute[225855]: 2026-01-20 14:54:42.787 225859 DEBUG oslo_concurrency.lockutils [req-c7ca7af7-4363-4e44-8137-859a60cb36a7 req-608ed09f-e9a1-4846-8b0d-5351fd567649 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "23ea4537-f03f-46de-881f-b979e232a3b9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:54:42 compute-1 nova_compute[225855]: 2026-01-20 14:54:42.787 225859 DEBUG nova.compute.manager [req-c7ca7af7-4363-4e44-8137-859a60cb36a7 req-608ed09f-e9a1-4846-8b0d-5351fd567649 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 23ea4537-f03f-46de-881f-b979e232a3b9] No waiting events found dispatching network-vif-unplugged-234381ea-07b1-41fe-b3c1-be97ce6a3b64 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:54:42 compute-1 nova_compute[225855]: 2026-01-20 14:54:42.787 225859 DEBUG nova.compute.manager [req-c7ca7af7-4363-4e44-8137-859a60cb36a7 req-608ed09f-e9a1-4846-8b0d-5351fd567649 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 23ea4537-f03f-46de-881f-b979e232a3b9] Received event network-vif-unplugged-234381ea-07b1-41fe-b3c1-be97ce6a3b64 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 20 14:54:42 compute-1 nova_compute[225855]: 2026-01-20 14:54:42.824 225859 INFO nova.compute.manager [None req-b4971f7c-3726-4f85-af64-620f9ac7eebf d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] [instance: 23ea4537-f03f-46de-881f-b979e232a3b9] Took 1.27 seconds to destroy the instance on the hypervisor.
Jan 20 14:54:42 compute-1 nova_compute[225855]: 2026-01-20 14:54:42.824 225859 DEBUG oslo.service.loopingcall [None req-b4971f7c-3726-4f85-af64-620f9ac7eebf d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 20 14:54:42 compute-1 nova_compute[225855]: 2026-01-20 14:54:42.825 225859 DEBUG nova.compute.manager [-] [instance: 23ea4537-f03f-46de-881f-b979e232a3b9] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 20 14:54:42 compute-1 nova_compute[225855]: 2026-01-20 14:54:42.825 225859 DEBUG nova.network.neutron [-] [instance: 23ea4537-f03f-46de-881f-b979e232a3b9] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 20 14:54:42 compute-1 ceph-mon[81775]: pgmap v2056: 321 pgs: 321 active+clean; 629 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.2 MiB/s wr, 260 op/s
Jan 20 14:54:42 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/3510984213' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 14:54:42 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/3510984213' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 14:54:43 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:54:43 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:54:43 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:54:43.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:54:43 compute-1 nova_compute[225855]: 2026-01-20 14:54:43.340 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:54:43 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/2834153926' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:54:43 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:54:43 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:54:43 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:54:43.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:54:44 compute-1 nova_compute[225855]: 2026-01-20 14:54:44.116 225859 DEBUG nova.network.neutron [-] [instance: 23ea4537-f03f-46de-881f-b979e232a3b9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:54:44 compute-1 nova_compute[225855]: 2026-01-20 14:54:44.147 225859 INFO nova.compute.manager [-] [instance: 23ea4537-f03f-46de-881f-b979e232a3b9] Took 1.32 seconds to deallocate network for instance.
Jan 20 14:54:44 compute-1 nova_compute[225855]: 2026-01-20 14:54:44.222 225859 DEBUG oslo_concurrency.lockutils [None req-b4971f7c-3726-4f85-af64-620f9ac7eebf d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:54:44 compute-1 nova_compute[225855]: 2026-01-20 14:54:44.223 225859 DEBUG oslo_concurrency.lockutils [None req-b4971f7c-3726-4f85-af64-620f9ac7eebf d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:54:44 compute-1 nova_compute[225855]: 2026-01-20 14:54:44.325 225859 DEBUG oslo_concurrency.processutils [None req-b4971f7c-3726-4f85-af64-620f9ac7eebf d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:54:44 compute-1 nova_compute[225855]: 2026-01-20 14:54:44.348 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:54:44 compute-1 nova_compute[225855]: 2026-01-20 14:54:44.373 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:54:44 compute-1 nova_compute[225855]: 2026-01-20 14:54:44.376 225859 DEBUG nova.compute.manager [req-b1259cbe-5b70-4c4b-877c-95383d195375 req-406ac6f4-8918-4594-a38b-a333ac629e5c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 23ea4537-f03f-46de-881f-b979e232a3b9] Received event network-vif-deleted-234381ea-07b1-41fe-b3c1-be97ce6a3b64 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:54:44 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:54:44 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/510416413' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:54:44 compute-1 nova_compute[225855]: 2026-01-20 14:54:44.755 225859 DEBUG oslo_concurrency.processutils [None req-b4971f7c-3726-4f85-af64-620f9ac7eebf d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:54:44 compute-1 nova_compute[225855]: 2026-01-20 14:54:44.763 225859 DEBUG nova.compute.provider_tree [None req-b4971f7c-3726-4f85-af64-620f9ac7eebf d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:54:44 compute-1 nova_compute[225855]: 2026-01-20 14:54:44.783 225859 DEBUG nova.scheduler.client.report [None req-b4971f7c-3726-4f85-af64-620f9ac7eebf d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:54:44 compute-1 nova_compute[225855]: 2026-01-20 14:54:44.805 225859 DEBUG oslo_concurrency.lockutils [None req-b4971f7c-3726-4f85-af64-620f9ac7eebf d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.582s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:54:44 compute-1 nova_compute[225855]: 2026-01-20 14:54:44.807 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.434s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:54:44 compute-1 nova_compute[225855]: 2026-01-20 14:54:44.808 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:54:44 compute-1 nova_compute[225855]: 2026-01-20 14:54:44.808 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 14:54:44 compute-1 nova_compute[225855]: 2026-01-20 14:54:44.808 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:54:44 compute-1 nova_compute[225855]: 2026-01-20 14:54:44.855 225859 INFO nova.scheduler.client.report [None req-b4971f7c-3726-4f85-af64-620f9ac7eebf d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Deleted allocations for instance 23ea4537-f03f-46de-881f-b979e232a3b9
Jan 20 14:54:44 compute-1 nova_compute[225855]: 2026-01-20 14:54:44.886 225859 DEBUG nova.compute.manager [req-ce3ee162-8fa6-459d-92a4-9b5dc933c7b9 req-66295c60-cafc-4918-9ddd-b2cae0cf971a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 23ea4537-f03f-46de-881f-b979e232a3b9] Received event network-vif-plugged-234381ea-07b1-41fe-b3c1-be97ce6a3b64 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:54:44 compute-1 nova_compute[225855]: 2026-01-20 14:54:44.886 225859 DEBUG oslo_concurrency.lockutils [req-ce3ee162-8fa6-459d-92a4-9b5dc933c7b9 req-66295c60-cafc-4918-9ddd-b2cae0cf971a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "23ea4537-f03f-46de-881f-b979e232a3b9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:54:44 compute-1 nova_compute[225855]: 2026-01-20 14:54:44.887 225859 DEBUG oslo_concurrency.lockutils [req-ce3ee162-8fa6-459d-92a4-9b5dc933c7b9 req-66295c60-cafc-4918-9ddd-b2cae0cf971a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "23ea4537-f03f-46de-881f-b979e232a3b9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:54:44 compute-1 nova_compute[225855]: 2026-01-20 14:54:44.887 225859 DEBUG oslo_concurrency.lockutils [req-ce3ee162-8fa6-459d-92a4-9b5dc933c7b9 req-66295c60-cafc-4918-9ddd-b2cae0cf971a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "23ea4537-f03f-46de-881f-b979e232a3b9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:54:44 compute-1 nova_compute[225855]: 2026-01-20 14:54:44.887 225859 DEBUG nova.compute.manager [req-ce3ee162-8fa6-459d-92a4-9b5dc933c7b9 req-66295c60-cafc-4918-9ddd-b2cae0cf971a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 23ea4537-f03f-46de-881f-b979e232a3b9] No waiting events found dispatching network-vif-plugged-234381ea-07b1-41fe-b3c1-be97ce6a3b64 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:54:44 compute-1 nova_compute[225855]: 2026-01-20 14:54:44.887 225859 WARNING nova.compute.manager [req-ce3ee162-8fa6-459d-92a4-9b5dc933c7b9 req-66295c60-cafc-4918-9ddd-b2cae0cf971a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 23ea4537-f03f-46de-881f-b979e232a3b9] Received unexpected event network-vif-plugged-234381ea-07b1-41fe-b3c1-be97ce6a3b64 for instance with vm_state deleted and task_state None.
Jan 20 14:54:44 compute-1 ceph-mon[81775]: pgmap v2057: 321 pgs: 321 active+clean; 590 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 31 KiB/s wr, 271 op/s
Jan 20 14:54:44 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/2381813736' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:54:44 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/4065210901' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:54:44 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/510416413' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:54:44 compute-1 nova_compute[225855]: 2026-01-20 14:54:44.925 225859 DEBUG oslo_concurrency.lockutils [None req-b4971f7c-3726-4f85-af64-620f9ac7eebf d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Lock "23ea4537-f03f-46de-881f-b979e232a3b9" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.372s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:54:45 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:54:45 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2105245927' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:54:45 compute-1 nova_compute[225855]: 2026-01-20 14:54:45.225 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.416s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:54:45 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:54:45 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:54:45 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:54:45.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:54:45 compute-1 nova_compute[225855]: 2026-01-20 14:54:45.388 225859 WARNING nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 14:54:45 compute-1 nova_compute[225855]: 2026-01-20 14:54:45.390 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4334MB free_disk=20.78514862060547GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 14:54:45 compute-1 nova_compute[225855]: 2026-01-20 14:54:45.391 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:54:45 compute-1 nova_compute[225855]: 2026-01-20 14:54:45.391 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:54:45 compute-1 nova_compute[225855]: 2026-01-20 14:54:45.460 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 14:54:45 compute-1 nova_compute[225855]: 2026-01-20 14:54:45.461 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 14:54:45 compute-1 nova_compute[225855]: 2026-01-20 14:54:45.475 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:54:45 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:54:45 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/802906552' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:54:45 compute-1 nova_compute[225855]: 2026-01-20 14:54:45.906 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:54:45 compute-1 nova_compute[225855]: 2026-01-20 14:54:45.912 225859 DEBUG nova.compute.provider_tree [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:54:45 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:54:45 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:54:45 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:54:45.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:54:46 compute-1 nova_compute[225855]: 2026-01-20 14:54:46.037 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:54:46 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/2620888822' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:54:46 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/2105245927' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:54:46 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/802906552' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:54:46 compute-1 nova_compute[225855]: 2026-01-20 14:54:46.164 225859 DEBUG nova.scheduler.client.report [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:54:46 compute-1 nova_compute[225855]: 2026-01-20 14:54:46.309 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 14:54:46 compute-1 nova_compute[225855]: 2026-01-20 14:54:46.310 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.919s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:54:47 compute-1 nova_compute[225855]: 2026-01-20 14:54:47.015 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:54:47 compute-1 ceph-mon[81775]: pgmap v2058: 321 pgs: 321 active+clean; 446 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 28 KiB/s wr, 256 op/s
Jan 20 14:54:47 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:54:47 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:54:47 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:54:47.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:54:47 compute-1 nova_compute[225855]: 2026-01-20 14:54:47.382 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:54:47 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:54:47 compute-1 nova_compute[225855]: 2026-01-20 14:54:47.613 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:54:47 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e299 e299: 3 total, 3 up, 3 in
Jan 20 14:54:47 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:54:47 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:54:47 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:54:47.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:54:48 compute-1 nova_compute[225855]: 2026-01-20 14:54:48.229 225859 DEBUG oslo_concurrency.lockutils [None req-0ec32ccc-7ae6-48ef-bfe0-8dad6c8c8aef 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] Acquiring lock "3bec73f6-5255-44c0-8a10-a64c7e86c0c2" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:54:48 compute-1 nova_compute[225855]: 2026-01-20 14:54:48.229 225859 DEBUG oslo_concurrency.lockutils [None req-0ec32ccc-7ae6-48ef-bfe0-8dad6c8c8aef 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] Lock "3bec73f6-5255-44c0-8a10-a64c7e86c0c2" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:54:48 compute-1 nova_compute[225855]: 2026-01-20 14:54:48.244 225859 DEBUG nova.compute.manager [None req-0ec32ccc-7ae6-48ef-bfe0-8dad6c8c8aef 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] [instance: 3bec73f6-5255-44c0-8a10-a64c7e86c0c2] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 20 14:54:48 compute-1 nova_compute[225855]: 2026-01-20 14:54:48.353 225859 DEBUG oslo_concurrency.lockutils [None req-0ec32ccc-7ae6-48ef-bfe0-8dad6c8c8aef 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:54:48 compute-1 nova_compute[225855]: 2026-01-20 14:54:48.353 225859 DEBUG oslo_concurrency.lockutils [None req-0ec32ccc-7ae6-48ef-bfe0-8dad6c8c8aef 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:54:48 compute-1 nova_compute[225855]: 2026-01-20 14:54:48.360 225859 DEBUG nova.virt.hardware [None req-0ec32ccc-7ae6-48ef-bfe0-8dad6c8c8aef 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 20 14:54:48 compute-1 nova_compute[225855]: 2026-01-20 14:54:48.361 225859 INFO nova.compute.claims [None req-0ec32ccc-7ae6-48ef-bfe0-8dad6c8c8aef 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] [instance: 3bec73f6-5255-44c0-8a10-a64c7e86c0c2] Claim successful on node compute-1.ctlplane.example.com
Jan 20 14:54:48 compute-1 nova_compute[225855]: 2026-01-20 14:54:48.482 225859 DEBUG oslo_concurrency.processutils [None req-0ec32ccc-7ae6-48ef-bfe0-8dad6c8c8aef 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:54:48 compute-1 nova_compute[225855]: 2026-01-20 14:54:48.659 225859 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1768920873.6579804, baada610-f563-4c97-89a9-56eba792c352 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:54:48 compute-1 nova_compute[225855]: 2026-01-20 14:54:48.660 225859 INFO nova.compute.manager [-] [instance: baada610-f563-4c97-89a9-56eba792c352] VM Stopped (Lifecycle Event)
Jan 20 14:54:48 compute-1 nova_compute[225855]: 2026-01-20 14:54:48.684 225859 DEBUG nova.compute.manager [None req-4781a04f-4c5e-4083-8f9b-d9b314b836d4 - - - - - -] [instance: baada610-f563-4c97-89a9-56eba792c352] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:54:48 compute-1 ceph-mon[81775]: osdmap e299: 3 total, 3 up, 3 in
Jan 20 14:54:48 compute-1 ceph-mon[81775]: pgmap v2060: 321 pgs: 321 active+clean; 422 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 28 KiB/s wr, 221 op/s
Jan 20 14:54:48 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:54:48 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4106358672' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:54:49 compute-1 nova_compute[225855]: 2026-01-20 14:54:49.001 225859 DEBUG oslo_concurrency.processutils [None req-0ec32ccc-7ae6-48ef-bfe0-8dad6c8c8aef 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.520s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:54:49 compute-1 nova_compute[225855]: 2026-01-20 14:54:49.009 225859 DEBUG nova.compute.provider_tree [None req-0ec32ccc-7ae6-48ef-bfe0-8dad6c8c8aef 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:54:49 compute-1 nova_compute[225855]: 2026-01-20 14:54:49.028 225859 DEBUG nova.scheduler.client.report [None req-0ec32ccc-7ae6-48ef-bfe0-8dad6c8c8aef 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:54:49 compute-1 nova_compute[225855]: 2026-01-20 14:54:49.050 225859 DEBUG oslo_concurrency.lockutils [None req-0ec32ccc-7ae6-48ef-bfe0-8dad6c8c8aef 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.697s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:54:49 compute-1 nova_compute[225855]: 2026-01-20 14:54:49.051 225859 DEBUG nova.compute.manager [None req-0ec32ccc-7ae6-48ef-bfe0-8dad6c8c8aef 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] [instance: 3bec73f6-5255-44c0-8a10-a64c7e86c0c2] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 20 14:54:49 compute-1 nova_compute[225855]: 2026-01-20 14:54:49.102 225859 DEBUG nova.compute.manager [None req-0ec32ccc-7ae6-48ef-bfe0-8dad6c8c8aef 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] [instance: 3bec73f6-5255-44c0-8a10-a64c7e86c0c2] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 20 14:54:49 compute-1 nova_compute[225855]: 2026-01-20 14:54:49.103 225859 DEBUG nova.network.neutron [None req-0ec32ccc-7ae6-48ef-bfe0-8dad6c8c8aef 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] [instance: 3bec73f6-5255-44c0-8a10-a64c7e86c0c2] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 20 14:54:49 compute-1 nova_compute[225855]: 2026-01-20 14:54:49.145 225859 INFO nova.virt.libvirt.driver [None req-0ec32ccc-7ae6-48ef-bfe0-8dad6c8c8aef 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] [instance: 3bec73f6-5255-44c0-8a10-a64c7e86c0c2] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 20 14:54:49 compute-1 nova_compute[225855]: 2026-01-20 14:54:49.173 225859 DEBUG nova.compute.manager [None req-0ec32ccc-7ae6-48ef-bfe0-8dad6c8c8aef 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] [instance: 3bec73f6-5255-44c0-8a10-a64c7e86c0c2] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 20 14:54:49 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:54:49 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:54:49 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:54:49.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:54:49 compute-1 nova_compute[225855]: 2026-01-20 14:54:49.313 225859 DEBUG nova.compute.manager [None req-0ec32ccc-7ae6-48ef-bfe0-8dad6c8c8aef 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] [instance: 3bec73f6-5255-44c0-8a10-a64c7e86c0c2] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 20 14:54:49 compute-1 nova_compute[225855]: 2026-01-20 14:54:49.315 225859 DEBUG nova.virt.libvirt.driver [None req-0ec32ccc-7ae6-48ef-bfe0-8dad6c8c8aef 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] [instance: 3bec73f6-5255-44c0-8a10-a64c7e86c0c2] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 20 14:54:49 compute-1 nova_compute[225855]: 2026-01-20 14:54:49.315 225859 INFO nova.virt.libvirt.driver [None req-0ec32ccc-7ae6-48ef-bfe0-8dad6c8c8aef 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] [instance: 3bec73f6-5255-44c0-8a10-a64c7e86c0c2] Creating image(s)
Jan 20 14:54:49 compute-1 nova_compute[225855]: 2026-01-20 14:54:49.346 225859 DEBUG nova.storage.rbd_utils [None req-0ec32ccc-7ae6-48ef-bfe0-8dad6c8c8aef 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] rbd image 3bec73f6-5255-44c0-8a10-a64c7e86c0c2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:54:49 compute-1 nova_compute[225855]: 2026-01-20 14:54:49.376 225859 DEBUG nova.storage.rbd_utils [None req-0ec32ccc-7ae6-48ef-bfe0-8dad6c8c8aef 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] rbd image 3bec73f6-5255-44c0-8a10-a64c7e86c0c2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:54:49 compute-1 nova_compute[225855]: 2026-01-20 14:54:49.405 225859 DEBUG nova.storage.rbd_utils [None req-0ec32ccc-7ae6-48ef-bfe0-8dad6c8c8aef 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] rbd image 3bec73f6-5255-44c0-8a10-a64c7e86c0c2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:54:49 compute-1 nova_compute[225855]: 2026-01-20 14:54:49.409 225859 DEBUG oslo_concurrency.processutils [None req-0ec32ccc-7ae6-48ef-bfe0-8dad6c8c8aef 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:54:49 compute-1 nova_compute[225855]: 2026-01-20 14:54:49.495 225859 DEBUG nova.policy [None req-0ec32ccc-7ae6-48ef-bfe0-8dad6c8c8aef 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '34eb73f628994c11801d447148d5f142', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b1e83af992c94112a965575784639d77', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 20 14:54:49 compute-1 nova_compute[225855]: 2026-01-20 14:54:49.499 225859 DEBUG oslo_concurrency.processutils [None req-0ec32ccc-7ae6-48ef-bfe0-8dad6c8c8aef 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json" returned: 0 in 0.090s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:54:49 compute-1 nova_compute[225855]: 2026-01-20 14:54:49.499 225859 DEBUG oslo_concurrency.lockutils [None req-0ec32ccc-7ae6-48ef-bfe0-8dad6c8c8aef 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] Acquiring lock "82d5c1918fd7c974214c7a48c1793a7a82560462" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:54:49 compute-1 nova_compute[225855]: 2026-01-20 14:54:49.500 225859 DEBUG oslo_concurrency.lockutils [None req-0ec32ccc-7ae6-48ef-bfe0-8dad6c8c8aef 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:54:49 compute-1 nova_compute[225855]: 2026-01-20 14:54:49.500 225859 DEBUG oslo_concurrency.lockutils [None req-0ec32ccc-7ae6-48ef-bfe0-8dad6c8c8aef 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:54:49 compute-1 nova_compute[225855]: 2026-01-20 14:54:49.527 225859 DEBUG nova.storage.rbd_utils [None req-0ec32ccc-7ae6-48ef-bfe0-8dad6c8c8aef 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] rbd image 3bec73f6-5255-44c0-8a10-a64c7e86c0c2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:54:49 compute-1 nova_compute[225855]: 2026-01-20 14:54:49.531 225859 DEBUG oslo_concurrency.processutils [None req-0ec32ccc-7ae6-48ef-bfe0-8dad6c8c8aef 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 3bec73f6-5255-44c0-8a10-a64c7e86c0c2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:54:49 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e300 e300: 3 total, 3 up, 3 in
Jan 20 14:54:49 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/4106358672' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:54:49 compute-1 ceph-mon[81775]: osdmap e300: 3 total, 3 up, 3 in
Jan 20 14:54:49 compute-1 nova_compute[225855]: 2026-01-20 14:54:49.816 225859 DEBUG oslo_concurrency.processutils [None req-0ec32ccc-7ae6-48ef-bfe0-8dad6c8c8aef 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 3bec73f6-5255-44c0-8a10-a64c7e86c0c2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.286s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:54:49 compute-1 nova_compute[225855]: 2026-01-20 14:54:49.922 225859 DEBUG nova.storage.rbd_utils [None req-0ec32ccc-7ae6-48ef-bfe0-8dad6c8c8aef 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] resizing rbd image 3bec73f6-5255-44c0-8a10-a64c7e86c0c2_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 20 14:54:49 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:54:49 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:54:49 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:54:49.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:54:50 compute-1 nova_compute[225855]: 2026-01-20 14:54:50.067 225859 DEBUG nova.objects.instance [None req-0ec32ccc-7ae6-48ef-bfe0-8dad6c8c8aef 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] Lazy-loading 'migration_context' on Instance uuid 3bec73f6-5255-44c0-8a10-a64c7e86c0c2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:54:50 compute-1 nova_compute[225855]: 2026-01-20 14:54:50.082 225859 DEBUG nova.virt.libvirt.driver [None req-0ec32ccc-7ae6-48ef-bfe0-8dad6c8c8aef 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] [instance: 3bec73f6-5255-44c0-8a10-a64c7e86c0c2] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 20 14:54:50 compute-1 nova_compute[225855]: 2026-01-20 14:54:50.083 225859 DEBUG nova.virt.libvirt.driver [None req-0ec32ccc-7ae6-48ef-bfe0-8dad6c8c8aef 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] [instance: 3bec73f6-5255-44c0-8a10-a64c7e86c0c2] Ensure instance console log exists: /var/lib/nova/instances/3bec73f6-5255-44c0-8a10-a64c7e86c0c2/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 20 14:54:50 compute-1 nova_compute[225855]: 2026-01-20 14:54:50.083 225859 DEBUG oslo_concurrency.lockutils [None req-0ec32ccc-7ae6-48ef-bfe0-8dad6c8c8aef 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:54:50 compute-1 nova_compute[225855]: 2026-01-20 14:54:50.084 225859 DEBUG oslo_concurrency.lockutils [None req-0ec32ccc-7ae6-48ef-bfe0-8dad6c8c8aef 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:54:50 compute-1 nova_compute[225855]: 2026-01-20 14:54:50.084 225859 DEBUG oslo_concurrency.lockutils [None req-0ec32ccc-7ae6-48ef-bfe0-8dad6c8c8aef 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:54:50 compute-1 nova_compute[225855]: 2026-01-20 14:54:50.303 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:54:50 compute-1 nova_compute[225855]: 2026-01-20 14:54:50.303 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:54:50 compute-1 nova_compute[225855]: 2026-01-20 14:54:50.497 225859 DEBUG nova.network.neutron [None req-0ec32ccc-7ae6-48ef-bfe0-8dad6c8c8aef 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] [instance: 3bec73f6-5255-44c0-8a10-a64c7e86c0c2] Successfully created port: e084df8c-a73e-4535-bcf7-de8adbafa9ae _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 20 14:54:50 compute-1 ceph-mon[81775]: pgmap v2062: 321 pgs: 321 active+clean; 404 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1020 KiB/s rd, 6.2 KiB/s wr, 162 op/s
Jan 20 14:54:51 compute-1 nova_compute[225855]: 2026-01-20 14:54:51.038 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:54:51 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:54:51 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:54:51 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:54:51.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:54:51 compute-1 nova_compute[225855]: 2026-01-20 14:54:51.474 225859 DEBUG nova.network.neutron [None req-0ec32ccc-7ae6-48ef-bfe0-8dad6c8c8aef 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] [instance: 3bec73f6-5255-44c0-8a10-a64c7e86c0c2] Successfully updated port: e084df8c-a73e-4535-bcf7-de8adbafa9ae _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 20 14:54:51 compute-1 nova_compute[225855]: 2026-01-20 14:54:51.487 225859 DEBUG oslo_concurrency.lockutils [None req-0ec32ccc-7ae6-48ef-bfe0-8dad6c8c8aef 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] Acquiring lock "refresh_cache-3bec73f6-5255-44c0-8a10-a64c7e86c0c2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:54:51 compute-1 nova_compute[225855]: 2026-01-20 14:54:51.488 225859 DEBUG oslo_concurrency.lockutils [None req-0ec32ccc-7ae6-48ef-bfe0-8dad6c8c8aef 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] Acquired lock "refresh_cache-3bec73f6-5255-44c0-8a10-a64c7e86c0c2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:54:51 compute-1 nova_compute[225855]: 2026-01-20 14:54:51.488 225859 DEBUG nova.network.neutron [None req-0ec32ccc-7ae6-48ef-bfe0-8dad6c8c8aef 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] [instance: 3bec73f6-5255-44c0-8a10-a64c7e86c0c2] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 14:54:51 compute-1 nova_compute[225855]: 2026-01-20 14:54:51.610 225859 DEBUG nova.compute.manager [req-a0ae1716-053e-47ca-b495-eb9b88b1145b req-d0ad84e5-2db1-416b-804b-8cb2900ef5af 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 3bec73f6-5255-44c0-8a10-a64c7e86c0c2] Received event network-changed-e084df8c-a73e-4535-bcf7-de8adbafa9ae external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:54:51 compute-1 nova_compute[225855]: 2026-01-20 14:54:51.611 225859 DEBUG nova.compute.manager [req-a0ae1716-053e-47ca-b495-eb9b88b1145b req-d0ad84e5-2db1-416b-804b-8cb2900ef5af 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 3bec73f6-5255-44c0-8a10-a64c7e86c0c2] Refreshing instance network info cache due to event network-changed-e084df8c-a73e-4535-bcf7-de8adbafa9ae. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 14:54:51 compute-1 nova_compute[225855]: 2026-01-20 14:54:51.611 225859 DEBUG oslo_concurrency.lockutils [req-a0ae1716-053e-47ca-b495-eb9b88b1145b req-d0ad84e5-2db1-416b-804b-8cb2900ef5af 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-3bec73f6-5255-44c0-8a10-a64c7e86c0c2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:54:51 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/1206076325' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:54:51 compute-1 nova_compute[225855]: 2026-01-20 14:54:51.799 225859 DEBUG nova.network.neutron [None req-0ec32ccc-7ae6-48ef-bfe0-8dad6c8c8aef 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] [instance: 3bec73f6-5255-44c0-8a10-a64c7e86c0c2] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 20 14:54:51 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:54:51 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:54:51 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:54:51.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:54:52 compute-1 nova_compute[225855]: 2026-01-20 14:54:52.017 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:54:52 compute-1 sudo[274807]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:54:52 compute-1 sudo[274807]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:54:52 compute-1 sudo[274807]: pam_unix(sudo:session): session closed for user root
Jan 20 14:54:52 compute-1 sudo[274832]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:54:52 compute-1 sudo[274832]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:54:52 compute-1 sudo[274832]: pam_unix(sudo:session): session closed for user root
Jan 20 14:54:52 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:54:52 compute-1 nova_compute[225855]: 2026-01-20 14:54:52.697 225859 DEBUG nova.network.neutron [None req-0ec32ccc-7ae6-48ef-bfe0-8dad6c8c8aef 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] [instance: 3bec73f6-5255-44c0-8a10-a64c7e86c0c2] Updating instance_info_cache with network_info: [{"id": "e084df8c-a73e-4535-bcf7-de8adbafa9ae", "address": "fa:16:3e:89:8e:0f", "network": {"id": "e9589011-b728-4b79-9945-aa6c52dd0fc2", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-1143668360-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b1e83af992c94112a965575784639d77", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape084df8c-a7", "ovs_interfaceid": "e084df8c-a73e-4535-bcf7-de8adbafa9ae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:54:52 compute-1 nova_compute[225855]: 2026-01-20 14:54:52.749 225859 DEBUG oslo_concurrency.lockutils [None req-0ec32ccc-7ae6-48ef-bfe0-8dad6c8c8aef 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] Releasing lock "refresh_cache-3bec73f6-5255-44c0-8a10-a64c7e86c0c2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:54:52 compute-1 nova_compute[225855]: 2026-01-20 14:54:52.750 225859 DEBUG nova.compute.manager [None req-0ec32ccc-7ae6-48ef-bfe0-8dad6c8c8aef 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] [instance: 3bec73f6-5255-44c0-8a10-a64c7e86c0c2] Instance network_info: |[{"id": "e084df8c-a73e-4535-bcf7-de8adbafa9ae", "address": "fa:16:3e:89:8e:0f", "network": {"id": "e9589011-b728-4b79-9945-aa6c52dd0fc2", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-1143668360-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b1e83af992c94112a965575784639d77", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape084df8c-a7", "ovs_interfaceid": "e084df8c-a73e-4535-bcf7-de8adbafa9ae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 20 14:54:52 compute-1 nova_compute[225855]: 2026-01-20 14:54:52.750 225859 DEBUG oslo_concurrency.lockutils [req-a0ae1716-053e-47ca-b495-eb9b88b1145b req-d0ad84e5-2db1-416b-804b-8cb2900ef5af 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-3bec73f6-5255-44c0-8a10-a64c7e86c0c2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:54:52 compute-1 nova_compute[225855]: 2026-01-20 14:54:52.750 225859 DEBUG nova.network.neutron [req-a0ae1716-053e-47ca-b495-eb9b88b1145b req-d0ad84e5-2db1-416b-804b-8cb2900ef5af 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 3bec73f6-5255-44c0-8a10-a64c7e86c0c2] Refreshing network info cache for port e084df8c-a73e-4535-bcf7-de8adbafa9ae _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 14:54:52 compute-1 nova_compute[225855]: 2026-01-20 14:54:52.753 225859 DEBUG nova.virt.libvirt.driver [None req-0ec32ccc-7ae6-48ef-bfe0-8dad6c8c8aef 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] [instance: 3bec73f6-5255-44c0-8a10-a64c7e86c0c2] Start _get_guest_xml network_info=[{"id": "e084df8c-a73e-4535-bcf7-de8adbafa9ae", "address": "fa:16:3e:89:8e:0f", "network": {"id": "e9589011-b728-4b79-9945-aa6c52dd0fc2", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-1143668360-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b1e83af992c94112a965575784639d77", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape084df8c-a7", "ovs_interfaceid": "e084df8c-a73e-4535-bcf7-de8adbafa9ae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'device_type': 'disk', 'encryption_options': None, 'size': 0, 'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'guest_format': None, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'image_id': 'a32b3e07-16d8-46fd-9a7b-c242c432fcf9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 20 14:54:52 compute-1 nova_compute[225855]: 2026-01-20 14:54:52.757 225859 WARNING nova.virt.libvirt.driver [None req-0ec32ccc-7ae6-48ef-bfe0-8dad6c8c8aef 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 14:54:52 compute-1 nova_compute[225855]: 2026-01-20 14:54:52.760 225859 DEBUG nova.virt.libvirt.host [None req-0ec32ccc-7ae6-48ef-bfe0-8dad6c8c8aef 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 20 14:54:52 compute-1 nova_compute[225855]: 2026-01-20 14:54:52.761 225859 DEBUG nova.virt.libvirt.host [None req-0ec32ccc-7ae6-48ef-bfe0-8dad6c8c8aef 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 20 14:54:52 compute-1 nova_compute[225855]: 2026-01-20 14:54:52.764 225859 DEBUG nova.virt.libvirt.host [None req-0ec32ccc-7ae6-48ef-bfe0-8dad6c8c8aef 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 20 14:54:52 compute-1 nova_compute[225855]: 2026-01-20 14:54:52.764 225859 DEBUG nova.virt.libvirt.host [None req-0ec32ccc-7ae6-48ef-bfe0-8dad6c8c8aef 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 20 14:54:52 compute-1 nova_compute[225855]: 2026-01-20 14:54:52.765 225859 DEBUG nova.virt.libvirt.driver [None req-0ec32ccc-7ae6-48ef-bfe0-8dad6c8c8aef 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 20 14:54:52 compute-1 nova_compute[225855]: 2026-01-20 14:54:52.765 225859 DEBUG nova.virt.hardware [None req-0ec32ccc-7ae6-48ef-bfe0-8dad6c8c8aef 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-20T14:21:55Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='522deaab-a741-4dbb-932d-d8b13a211c33',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 20 14:54:52 compute-1 nova_compute[225855]: 2026-01-20 14:54:52.766 225859 DEBUG nova.virt.hardware [None req-0ec32ccc-7ae6-48ef-bfe0-8dad6c8c8aef 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 20 14:54:52 compute-1 nova_compute[225855]: 2026-01-20 14:54:52.766 225859 DEBUG nova.virt.hardware [None req-0ec32ccc-7ae6-48ef-bfe0-8dad6c8c8aef 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 20 14:54:52 compute-1 nova_compute[225855]: 2026-01-20 14:54:52.766 225859 DEBUG nova.virt.hardware [None req-0ec32ccc-7ae6-48ef-bfe0-8dad6c8c8aef 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 20 14:54:52 compute-1 nova_compute[225855]: 2026-01-20 14:54:52.766 225859 DEBUG nova.virt.hardware [None req-0ec32ccc-7ae6-48ef-bfe0-8dad6c8c8aef 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 20 14:54:52 compute-1 nova_compute[225855]: 2026-01-20 14:54:52.767 225859 DEBUG nova.virt.hardware [None req-0ec32ccc-7ae6-48ef-bfe0-8dad6c8c8aef 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 20 14:54:52 compute-1 nova_compute[225855]: 2026-01-20 14:54:52.767 225859 DEBUG nova.virt.hardware [None req-0ec32ccc-7ae6-48ef-bfe0-8dad6c8c8aef 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 20 14:54:52 compute-1 nova_compute[225855]: 2026-01-20 14:54:52.767 225859 DEBUG nova.virt.hardware [None req-0ec32ccc-7ae6-48ef-bfe0-8dad6c8c8aef 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 20 14:54:52 compute-1 nova_compute[225855]: 2026-01-20 14:54:52.767 225859 DEBUG nova.virt.hardware [None req-0ec32ccc-7ae6-48ef-bfe0-8dad6c8c8aef 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 20 14:54:52 compute-1 nova_compute[225855]: 2026-01-20 14:54:52.768 225859 DEBUG nova.virt.hardware [None req-0ec32ccc-7ae6-48ef-bfe0-8dad6c8c8aef 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 20 14:54:52 compute-1 nova_compute[225855]: 2026-01-20 14:54:52.768 225859 DEBUG nova.virt.hardware [None req-0ec32ccc-7ae6-48ef-bfe0-8dad6c8c8aef 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 20 14:54:52 compute-1 nova_compute[225855]: 2026-01-20 14:54:52.770 225859 DEBUG oslo_concurrency.processutils [None req-0ec32ccc-7ae6-48ef-bfe0-8dad6c8c8aef 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:54:52 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e301 e301: 3 total, 3 up, 3 in
Jan 20 14:54:52 compute-1 ceph-mon[81775]: pgmap v2063: 321 pgs: 321 active+clean; 378 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 502 KiB/s rd, 2.2 MiB/s wr, 137 op/s
Jan 20 14:54:53 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:54:53 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/805911879' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:54:53 compute-1 nova_compute[225855]: 2026-01-20 14:54:53.209 225859 DEBUG oslo_concurrency.processutils [None req-0ec32ccc-7ae6-48ef-bfe0-8dad6c8c8aef 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:54:53 compute-1 nova_compute[225855]: 2026-01-20 14:54:53.241 225859 DEBUG nova.storage.rbd_utils [None req-0ec32ccc-7ae6-48ef-bfe0-8dad6c8c8aef 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] rbd image 3bec73f6-5255-44c0-8a10-a64c7e86c0c2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:54:53 compute-1 nova_compute[225855]: 2026-01-20 14:54:53.245 225859 DEBUG oslo_concurrency.processutils [None req-0ec32ccc-7ae6-48ef-bfe0-8dad6c8c8aef 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:54:53 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:54:53 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:54:53 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:54:53.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:54:53 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:54:53 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/951443556' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:54:53 compute-1 nova_compute[225855]: 2026-01-20 14:54:53.658 225859 DEBUG oslo_concurrency.processutils [None req-0ec32ccc-7ae6-48ef-bfe0-8dad6c8c8aef 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.413s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:54:53 compute-1 nova_compute[225855]: 2026-01-20 14:54:53.660 225859 DEBUG nova.virt.libvirt.vif [None req-0ec32ccc-7ae6-48ef-bfe0-8dad6c8c8aef 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T14:54:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-AttachVolumeShelveTestJSON-server-913712707',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-attachvolumeshelvetestjson-server-913712707',id=124,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEiICz3SRHAbg35RWQTkKKcPH7nuzl556rWhnnJMKWCQeHRtCTrp3rh0Cew3QLmsFdOqe88XbxeaMKtgT6L6nfvjZZnoyEjqVogiPNh8/V6NYBD5v71aQZWpX0o+tqsUvg==',key_name='tempest-keypair-1388890452',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b1e83af992c94112a965575784639d77',ramdisk_id='',reservation_id='r-n20fo39g',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachVolumeShelveTestJSON-896995479',owner_user_name='tempest-AttachVolumeShelveTestJSON-896995479-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T14:54:49Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='34eb73f628994c11801d447148d5f142',uuid=3bec73f6-5255-44c0-8a10-a64c7e86c0c2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e084df8c-a73e-4535-bcf7-de8adbafa9ae", "address": "fa:16:3e:89:8e:0f", "network": {"id": "e9589011-b728-4b79-9945-aa6c52dd0fc2", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-1143668360-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b1e83af992c94112a965575784639d77", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape084df8c-a7", "ovs_interfaceid": "e084df8c-a73e-4535-bcf7-de8adbafa9ae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 20 14:54:53 compute-1 nova_compute[225855]: 2026-01-20 14:54:53.661 225859 DEBUG nova.network.os_vif_util [None req-0ec32ccc-7ae6-48ef-bfe0-8dad6c8c8aef 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] Converting VIF {"id": "e084df8c-a73e-4535-bcf7-de8adbafa9ae", "address": "fa:16:3e:89:8e:0f", "network": {"id": "e9589011-b728-4b79-9945-aa6c52dd0fc2", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-1143668360-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b1e83af992c94112a965575784639d77", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape084df8c-a7", "ovs_interfaceid": "e084df8c-a73e-4535-bcf7-de8adbafa9ae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:54:53 compute-1 nova_compute[225855]: 2026-01-20 14:54:53.662 225859 DEBUG nova.network.os_vif_util [None req-0ec32ccc-7ae6-48ef-bfe0-8dad6c8c8aef 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:89:8e:0f,bridge_name='br-int',has_traffic_filtering=True,id=e084df8c-a73e-4535-bcf7-de8adbafa9ae,network=Network(e9589011-b728-4b79-9945-aa6c52dd0fc2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape084df8c-a7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:54:53 compute-1 nova_compute[225855]: 2026-01-20 14:54:53.663 225859 DEBUG nova.objects.instance [None req-0ec32ccc-7ae6-48ef-bfe0-8dad6c8c8aef 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] Lazy-loading 'pci_devices' on Instance uuid 3bec73f6-5255-44c0-8a10-a64c7e86c0c2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:54:53 compute-1 nova_compute[225855]: 2026-01-20 14:54:53.685 225859 DEBUG nova.virt.libvirt.driver [None req-0ec32ccc-7ae6-48ef-bfe0-8dad6c8c8aef 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] [instance: 3bec73f6-5255-44c0-8a10-a64c7e86c0c2] End _get_guest_xml xml=<domain type="kvm">
Jan 20 14:54:53 compute-1 nova_compute[225855]:   <uuid>3bec73f6-5255-44c0-8a10-a64c7e86c0c2</uuid>
Jan 20 14:54:53 compute-1 nova_compute[225855]:   <name>instance-0000007c</name>
Jan 20 14:54:53 compute-1 nova_compute[225855]:   <memory>131072</memory>
Jan 20 14:54:53 compute-1 nova_compute[225855]:   <vcpu>1</vcpu>
Jan 20 14:54:53 compute-1 nova_compute[225855]:   <metadata>
Jan 20 14:54:53 compute-1 nova_compute[225855]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 14:54:53 compute-1 nova_compute[225855]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 14:54:53 compute-1 nova_compute[225855]:       <nova:name>tempest-AttachVolumeShelveTestJSON-server-913712707</nova:name>
Jan 20 14:54:53 compute-1 nova_compute[225855]:       <nova:creationTime>2026-01-20 14:54:52</nova:creationTime>
Jan 20 14:54:53 compute-1 nova_compute[225855]:       <nova:flavor name="m1.nano">
Jan 20 14:54:53 compute-1 nova_compute[225855]:         <nova:memory>128</nova:memory>
Jan 20 14:54:53 compute-1 nova_compute[225855]:         <nova:disk>1</nova:disk>
Jan 20 14:54:53 compute-1 nova_compute[225855]:         <nova:swap>0</nova:swap>
Jan 20 14:54:53 compute-1 nova_compute[225855]:         <nova:ephemeral>0</nova:ephemeral>
Jan 20 14:54:53 compute-1 nova_compute[225855]:         <nova:vcpus>1</nova:vcpus>
Jan 20 14:54:53 compute-1 nova_compute[225855]:       </nova:flavor>
Jan 20 14:54:53 compute-1 nova_compute[225855]:       <nova:owner>
Jan 20 14:54:53 compute-1 nova_compute[225855]:         <nova:user uuid="34eb73f628994c11801d447148d5f142">tempest-AttachVolumeShelveTestJSON-896995479-project-member</nova:user>
Jan 20 14:54:53 compute-1 nova_compute[225855]:         <nova:project uuid="b1e83af992c94112a965575784639d77">tempest-AttachVolumeShelveTestJSON-896995479</nova:project>
Jan 20 14:54:53 compute-1 nova_compute[225855]:       </nova:owner>
Jan 20 14:54:53 compute-1 nova_compute[225855]:       <nova:root type="image" uuid="a32b3e07-16d8-46fd-9a7b-c242c432fcf9"/>
Jan 20 14:54:53 compute-1 nova_compute[225855]:       <nova:ports>
Jan 20 14:54:53 compute-1 nova_compute[225855]:         <nova:port uuid="e084df8c-a73e-4535-bcf7-de8adbafa9ae">
Jan 20 14:54:53 compute-1 nova_compute[225855]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Jan 20 14:54:53 compute-1 nova_compute[225855]:         </nova:port>
Jan 20 14:54:53 compute-1 nova_compute[225855]:       </nova:ports>
Jan 20 14:54:53 compute-1 nova_compute[225855]:     </nova:instance>
Jan 20 14:54:53 compute-1 nova_compute[225855]:   </metadata>
Jan 20 14:54:53 compute-1 nova_compute[225855]:   <sysinfo type="smbios">
Jan 20 14:54:53 compute-1 nova_compute[225855]:     <system>
Jan 20 14:54:53 compute-1 nova_compute[225855]:       <entry name="manufacturer">RDO</entry>
Jan 20 14:54:53 compute-1 nova_compute[225855]:       <entry name="product">OpenStack Compute</entry>
Jan 20 14:54:53 compute-1 nova_compute[225855]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 14:54:53 compute-1 nova_compute[225855]:       <entry name="serial">3bec73f6-5255-44c0-8a10-a64c7e86c0c2</entry>
Jan 20 14:54:53 compute-1 nova_compute[225855]:       <entry name="uuid">3bec73f6-5255-44c0-8a10-a64c7e86c0c2</entry>
Jan 20 14:54:53 compute-1 nova_compute[225855]:       <entry name="family">Virtual Machine</entry>
Jan 20 14:54:53 compute-1 nova_compute[225855]:     </system>
Jan 20 14:54:53 compute-1 nova_compute[225855]:   </sysinfo>
Jan 20 14:54:53 compute-1 nova_compute[225855]:   <os>
Jan 20 14:54:53 compute-1 nova_compute[225855]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 20 14:54:53 compute-1 nova_compute[225855]:     <boot dev="hd"/>
Jan 20 14:54:53 compute-1 nova_compute[225855]:     <smbios mode="sysinfo"/>
Jan 20 14:54:53 compute-1 nova_compute[225855]:   </os>
Jan 20 14:54:53 compute-1 nova_compute[225855]:   <features>
Jan 20 14:54:53 compute-1 nova_compute[225855]:     <acpi/>
Jan 20 14:54:53 compute-1 nova_compute[225855]:     <apic/>
Jan 20 14:54:53 compute-1 nova_compute[225855]:     <vmcoreinfo/>
Jan 20 14:54:53 compute-1 nova_compute[225855]:   </features>
Jan 20 14:54:53 compute-1 nova_compute[225855]:   <clock offset="utc">
Jan 20 14:54:53 compute-1 nova_compute[225855]:     <timer name="pit" tickpolicy="delay"/>
Jan 20 14:54:53 compute-1 nova_compute[225855]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 20 14:54:53 compute-1 nova_compute[225855]:     <timer name="hpet" present="no"/>
Jan 20 14:54:53 compute-1 nova_compute[225855]:   </clock>
Jan 20 14:54:53 compute-1 nova_compute[225855]:   <cpu mode="custom" match="exact">
Jan 20 14:54:53 compute-1 nova_compute[225855]:     <model>Nehalem</model>
Jan 20 14:54:53 compute-1 nova_compute[225855]:     <topology sockets="1" cores="1" threads="1"/>
Jan 20 14:54:53 compute-1 nova_compute[225855]:   </cpu>
Jan 20 14:54:53 compute-1 nova_compute[225855]:   <devices>
Jan 20 14:54:53 compute-1 nova_compute[225855]:     <disk type="network" device="disk">
Jan 20 14:54:53 compute-1 nova_compute[225855]:       <driver type="raw" cache="none"/>
Jan 20 14:54:53 compute-1 nova_compute[225855]:       <source protocol="rbd" name="vms/3bec73f6-5255-44c0-8a10-a64c7e86c0c2_disk">
Jan 20 14:54:53 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:54:53 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:54:53 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:54:53 compute-1 nova_compute[225855]:       </source>
Jan 20 14:54:53 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 14:54:53 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:54:53 compute-1 nova_compute[225855]:       </auth>
Jan 20 14:54:53 compute-1 nova_compute[225855]:       <target dev="vda" bus="virtio"/>
Jan 20 14:54:53 compute-1 nova_compute[225855]:     </disk>
Jan 20 14:54:53 compute-1 nova_compute[225855]:     <disk type="network" device="cdrom">
Jan 20 14:54:53 compute-1 nova_compute[225855]:       <driver type="raw" cache="none"/>
Jan 20 14:54:53 compute-1 nova_compute[225855]:       <source protocol="rbd" name="vms/3bec73f6-5255-44c0-8a10-a64c7e86c0c2_disk.config">
Jan 20 14:54:53 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:54:53 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:54:53 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:54:53 compute-1 nova_compute[225855]:       </source>
Jan 20 14:54:53 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 14:54:53 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:54:53 compute-1 nova_compute[225855]:       </auth>
Jan 20 14:54:53 compute-1 nova_compute[225855]:       <target dev="sda" bus="sata"/>
Jan 20 14:54:53 compute-1 nova_compute[225855]:     </disk>
Jan 20 14:54:53 compute-1 nova_compute[225855]:     <interface type="ethernet">
Jan 20 14:54:53 compute-1 nova_compute[225855]:       <mac address="fa:16:3e:89:8e:0f"/>
Jan 20 14:54:53 compute-1 nova_compute[225855]:       <model type="virtio"/>
Jan 20 14:54:53 compute-1 nova_compute[225855]:       <driver name="vhost" rx_queue_size="512"/>
Jan 20 14:54:53 compute-1 nova_compute[225855]:       <mtu size="1442"/>
Jan 20 14:54:53 compute-1 nova_compute[225855]:       <target dev="tape084df8c-a7"/>
Jan 20 14:54:53 compute-1 nova_compute[225855]:     </interface>
Jan 20 14:54:53 compute-1 nova_compute[225855]:     <serial type="pty">
Jan 20 14:54:53 compute-1 nova_compute[225855]:       <log file="/var/lib/nova/instances/3bec73f6-5255-44c0-8a10-a64c7e86c0c2/console.log" append="off"/>
Jan 20 14:54:53 compute-1 nova_compute[225855]:     </serial>
Jan 20 14:54:53 compute-1 nova_compute[225855]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 20 14:54:53 compute-1 nova_compute[225855]:     <video>
Jan 20 14:54:53 compute-1 nova_compute[225855]:       <model type="virtio"/>
Jan 20 14:54:53 compute-1 nova_compute[225855]:     </video>
Jan 20 14:54:53 compute-1 nova_compute[225855]:     <input type="tablet" bus="usb"/>
Jan 20 14:54:53 compute-1 nova_compute[225855]:     <rng model="virtio">
Jan 20 14:54:53 compute-1 nova_compute[225855]:       <backend model="random">/dev/urandom</backend>
Jan 20 14:54:53 compute-1 nova_compute[225855]:     </rng>
Jan 20 14:54:53 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root"/>
Jan 20 14:54:53 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:54:53 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:54:53 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:54:53 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:54:53 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:54:53 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:54:53 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:54:53 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:54:53 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:54:53 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:54:53 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:54:53 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:54:53 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:54:53 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:54:53 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:54:53 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:54:53 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:54:53 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:54:53 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:54:53 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:54:53 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:54:53 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:54:53 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:54:53 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:54:53 compute-1 nova_compute[225855]:     <controller type="usb" index="0"/>
Jan 20 14:54:53 compute-1 nova_compute[225855]:     <memballoon model="virtio">
Jan 20 14:54:53 compute-1 nova_compute[225855]:       <stats period="10"/>
Jan 20 14:54:53 compute-1 nova_compute[225855]:     </memballoon>
Jan 20 14:54:53 compute-1 nova_compute[225855]:   </devices>
Jan 20 14:54:53 compute-1 nova_compute[225855]: </domain>
Jan 20 14:54:53 compute-1 nova_compute[225855]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 20 14:54:53 compute-1 nova_compute[225855]: 2026-01-20 14:54:53.686 225859 DEBUG nova.compute.manager [None req-0ec32ccc-7ae6-48ef-bfe0-8dad6c8c8aef 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] [instance: 3bec73f6-5255-44c0-8a10-a64c7e86c0c2] Preparing to wait for external event network-vif-plugged-e084df8c-a73e-4535-bcf7-de8adbafa9ae prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 20 14:54:53 compute-1 nova_compute[225855]: 2026-01-20 14:54:53.687 225859 DEBUG oslo_concurrency.lockutils [None req-0ec32ccc-7ae6-48ef-bfe0-8dad6c8c8aef 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] Acquiring lock "3bec73f6-5255-44c0-8a10-a64c7e86c0c2-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:54:53 compute-1 nova_compute[225855]: 2026-01-20 14:54:53.687 225859 DEBUG oslo_concurrency.lockutils [None req-0ec32ccc-7ae6-48ef-bfe0-8dad6c8c8aef 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] Lock "3bec73f6-5255-44c0-8a10-a64c7e86c0c2-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:54:53 compute-1 nova_compute[225855]: 2026-01-20 14:54:53.687 225859 DEBUG oslo_concurrency.lockutils [None req-0ec32ccc-7ae6-48ef-bfe0-8dad6c8c8aef 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] Lock "3bec73f6-5255-44c0-8a10-a64c7e86c0c2-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:54:53 compute-1 nova_compute[225855]: 2026-01-20 14:54:53.688 225859 DEBUG nova.virt.libvirt.vif [None req-0ec32ccc-7ae6-48ef-bfe0-8dad6c8c8aef 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T14:54:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-AttachVolumeShelveTestJSON-server-913712707',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-attachvolumeshelvetestjson-server-913712707',id=124,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEiICz3SRHAbg35RWQTkKKcPH7nuzl556rWhnnJMKWCQeHRtCTrp3rh0Cew3QLmsFdOqe88XbxeaMKtgT6L6nfvjZZnoyEjqVogiPNh8/V6NYBD5v71aQZWpX0o+tqsUvg==',key_name='tempest-keypair-1388890452',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b1e83af992c94112a965575784639d77',ramdisk_id='',reservation_id='r-n20fo39g',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachVolumeShelveTestJSON-896995479',owner_user_name='tempest-AttachVolumeShelveTestJSON-896995479-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T14:54:49Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='34eb73f628994c11801d447148d5f142',uuid=3bec73f6-5255-44c0-8a10-a64c7e86c0c2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e084df8c-a73e-4535-bcf7-de8adbafa9ae", "address": "fa:16:3e:89:8e:0f", "network": {"id": "e9589011-b728-4b79-9945-aa6c52dd0fc2", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-1143668360-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b1e83af992c94112a965575784639d77", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape084df8c-a7", "ovs_interfaceid": "e084df8c-a73e-4535-bcf7-de8adbafa9ae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 20 14:54:53 compute-1 nova_compute[225855]: 2026-01-20 14:54:53.688 225859 DEBUG nova.network.os_vif_util [None req-0ec32ccc-7ae6-48ef-bfe0-8dad6c8c8aef 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] Converting VIF {"id": "e084df8c-a73e-4535-bcf7-de8adbafa9ae", "address": "fa:16:3e:89:8e:0f", "network": {"id": "e9589011-b728-4b79-9945-aa6c52dd0fc2", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-1143668360-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b1e83af992c94112a965575784639d77", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape084df8c-a7", "ovs_interfaceid": "e084df8c-a73e-4535-bcf7-de8adbafa9ae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:54:53 compute-1 nova_compute[225855]: 2026-01-20 14:54:53.689 225859 DEBUG nova.network.os_vif_util [None req-0ec32ccc-7ae6-48ef-bfe0-8dad6c8c8aef 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:89:8e:0f,bridge_name='br-int',has_traffic_filtering=True,id=e084df8c-a73e-4535-bcf7-de8adbafa9ae,network=Network(e9589011-b728-4b79-9945-aa6c52dd0fc2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape084df8c-a7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:54:53 compute-1 nova_compute[225855]: 2026-01-20 14:54:53.689 225859 DEBUG os_vif [None req-0ec32ccc-7ae6-48ef-bfe0-8dad6c8c8aef 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:89:8e:0f,bridge_name='br-int',has_traffic_filtering=True,id=e084df8c-a73e-4535-bcf7-de8adbafa9ae,network=Network(e9589011-b728-4b79-9945-aa6c52dd0fc2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape084df8c-a7') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 20 14:54:53 compute-1 nova_compute[225855]: 2026-01-20 14:54:53.690 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:54:53 compute-1 nova_compute[225855]: 2026-01-20 14:54:53.690 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:54:53 compute-1 nova_compute[225855]: 2026-01-20 14:54:53.691 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:54:53 compute-1 nova_compute[225855]: 2026-01-20 14:54:53.693 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:54:53 compute-1 nova_compute[225855]: 2026-01-20 14:54:53.693 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape084df8c-a7, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:54:53 compute-1 nova_compute[225855]: 2026-01-20 14:54:53.694 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tape084df8c-a7, col_values=(('external_ids', {'iface-id': 'e084df8c-a73e-4535-bcf7-de8adbafa9ae', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:89:8e:0f', 'vm-uuid': '3bec73f6-5255-44c0-8a10-a64c7e86c0c2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:54:53 compute-1 nova_compute[225855]: 2026-01-20 14:54:53.695 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:54:53 compute-1 NetworkManager[49104]: <info>  [1768920893.6963] manager: (tape084df8c-a7): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/209)
Jan 20 14:54:53 compute-1 nova_compute[225855]: 2026-01-20 14:54:53.697 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 14:54:53 compute-1 nova_compute[225855]: 2026-01-20 14:54:53.701 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:54:53 compute-1 nova_compute[225855]: 2026-01-20 14:54:53.702 225859 INFO os_vif [None req-0ec32ccc-7ae6-48ef-bfe0-8dad6c8c8aef 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:89:8e:0f,bridge_name='br-int',has_traffic_filtering=True,id=e084df8c-a73e-4535-bcf7-de8adbafa9ae,network=Network(e9589011-b728-4b79-9945-aa6c52dd0fc2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape084df8c-a7')
Jan 20 14:54:53 compute-1 nova_compute[225855]: 2026-01-20 14:54:53.743 225859 DEBUG nova.virt.libvirt.driver [None req-0ec32ccc-7ae6-48ef-bfe0-8dad6c8c8aef 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:54:53 compute-1 nova_compute[225855]: 2026-01-20 14:54:53.743 225859 DEBUG nova.virt.libvirt.driver [None req-0ec32ccc-7ae6-48ef-bfe0-8dad6c8c8aef 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:54:53 compute-1 nova_compute[225855]: 2026-01-20 14:54:53.743 225859 DEBUG nova.virt.libvirt.driver [None req-0ec32ccc-7ae6-48ef-bfe0-8dad6c8c8aef 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] No VIF found with MAC fa:16:3e:89:8e:0f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 20 14:54:53 compute-1 nova_compute[225855]: 2026-01-20 14:54:53.744 225859 INFO nova.virt.libvirt.driver [None req-0ec32ccc-7ae6-48ef-bfe0-8dad6c8c8aef 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] [instance: 3bec73f6-5255-44c0-8a10-a64c7e86c0c2] Using config drive
Jan 20 14:54:53 compute-1 nova_compute[225855]: 2026-01-20 14:54:53.768 225859 DEBUG nova.storage.rbd_utils [None req-0ec32ccc-7ae6-48ef-bfe0-8dad6c8c8aef 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] rbd image 3bec73f6-5255-44c0-8a10-a64c7e86c0c2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:54:53 compute-1 ceph-mon[81775]: osdmap e301: 3 total, 3 up, 3 in
Jan 20 14:54:53 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/805911879' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:54:53 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/951443556' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:54:53 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:54:53 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:54:53 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:54:53.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:54:54 compute-1 podman[274940]: 2026-01-20 14:54:54.018555752 +0000 UTC m=+0.067291551 container health_status 533cd8e0c3b8b7f910f15cef49e7f879ab46c12ce646a35628c15f13725eb500 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Jan 20 14:54:54 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/276321979' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:54:54 compute-1 ceph-mon[81775]: pgmap v2065: 321 pgs: 321 active+clean; 359 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 112 KiB/s rd, 5.2 MiB/s wr, 137 op/s
Jan 20 14:54:55 compute-1 nova_compute[225855]: 2026-01-20 14:54:55.213 225859 INFO nova.virt.libvirt.driver [None req-0ec32ccc-7ae6-48ef-bfe0-8dad6c8c8aef 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] [instance: 3bec73f6-5255-44c0-8a10-a64c7e86c0c2] Creating config drive at /var/lib/nova/instances/3bec73f6-5255-44c0-8a10-a64c7e86c0c2/disk.config
Jan 20 14:54:55 compute-1 nova_compute[225855]: 2026-01-20 14:54:55.218 225859 DEBUG oslo_concurrency.processutils [None req-0ec32ccc-7ae6-48ef-bfe0-8dad6c8c8aef 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/3bec73f6-5255-44c0-8a10-a64c7e86c0c2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpsek0poxk execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:54:55 compute-1 nova_compute[225855]: 2026-01-20 14:54:55.242 225859 DEBUG nova.network.neutron [req-a0ae1716-053e-47ca-b495-eb9b88b1145b req-d0ad84e5-2db1-416b-804b-8cb2900ef5af 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 3bec73f6-5255-44c0-8a10-a64c7e86c0c2] Updated VIF entry in instance network info cache for port e084df8c-a73e-4535-bcf7-de8adbafa9ae. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 14:54:55 compute-1 nova_compute[225855]: 2026-01-20 14:54:55.243 225859 DEBUG nova.network.neutron [req-a0ae1716-053e-47ca-b495-eb9b88b1145b req-d0ad84e5-2db1-416b-804b-8cb2900ef5af 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 3bec73f6-5255-44c0-8a10-a64c7e86c0c2] Updating instance_info_cache with network_info: [{"id": "e084df8c-a73e-4535-bcf7-de8adbafa9ae", "address": "fa:16:3e:89:8e:0f", "network": {"id": "e9589011-b728-4b79-9945-aa6c52dd0fc2", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-1143668360-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b1e83af992c94112a965575784639d77", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape084df8c-a7", "ovs_interfaceid": "e084df8c-a73e-4535-bcf7-de8adbafa9ae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:54:55 compute-1 nova_compute[225855]: 2026-01-20 14:54:55.274 225859 DEBUG oslo_concurrency.lockutils [req-a0ae1716-053e-47ca-b495-eb9b88b1145b req-d0ad84e5-2db1-416b-804b-8cb2900ef5af 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-3bec73f6-5255-44c0-8a10-a64c7e86c0c2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:54:55 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:54:55 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:54:55 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:54:55.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:54:55 compute-1 nova_compute[225855]: 2026-01-20 14:54:55.334 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:54:55 compute-1 nova_compute[225855]: 2026-01-20 14:54:55.349 225859 DEBUG oslo_concurrency.processutils [None req-0ec32ccc-7ae6-48ef-bfe0-8dad6c8c8aef 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/3bec73f6-5255-44c0-8a10-a64c7e86c0c2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpsek0poxk" returned: 0 in 0.131s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:54:55 compute-1 nova_compute[225855]: 2026-01-20 14:54:55.382 225859 DEBUG nova.storage.rbd_utils [None req-0ec32ccc-7ae6-48ef-bfe0-8dad6c8c8aef 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] rbd image 3bec73f6-5255-44c0-8a10-a64c7e86c0c2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:54:55 compute-1 nova_compute[225855]: 2026-01-20 14:54:55.385 225859 DEBUG oslo_concurrency.processutils [None req-0ec32ccc-7ae6-48ef-bfe0-8dad6c8c8aef 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/3bec73f6-5255-44c0-8a10-a64c7e86c0c2/disk.config 3bec73f6-5255-44c0-8a10-a64c7e86c0c2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:54:55 compute-1 nova_compute[225855]: 2026-01-20 14:54:55.533 225859 DEBUG oslo_concurrency.processutils [None req-0ec32ccc-7ae6-48ef-bfe0-8dad6c8c8aef 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/3bec73f6-5255-44c0-8a10-a64c7e86c0c2/disk.config 3bec73f6-5255-44c0-8a10-a64c7e86c0c2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.148s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:54:55 compute-1 nova_compute[225855]: 2026-01-20 14:54:55.534 225859 INFO nova.virt.libvirt.driver [None req-0ec32ccc-7ae6-48ef-bfe0-8dad6c8c8aef 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] [instance: 3bec73f6-5255-44c0-8a10-a64c7e86c0c2] Deleting local config drive /var/lib/nova/instances/3bec73f6-5255-44c0-8a10-a64c7e86c0c2/disk.config because it was imported into RBD.
Jan 20 14:54:55 compute-1 virtqemud[225396]: End of file while reading data: Input/output error
Jan 20 14:54:55 compute-1 kernel: tape084df8c-a7: entered promiscuous mode
Jan 20 14:54:55 compute-1 NetworkManager[49104]: <info>  [1768920895.5853] manager: (tape084df8c-a7): new Tun device (/org/freedesktop/NetworkManager/Devices/210)
Jan 20 14:54:55 compute-1 ovn_controller[130490]: 2026-01-20T14:54:55Z|00499|binding|INFO|Claiming lport e084df8c-a73e-4535-bcf7-de8adbafa9ae for this chassis.
Jan 20 14:54:55 compute-1 ovn_controller[130490]: 2026-01-20T14:54:55Z|00500|binding|INFO|e084df8c-a73e-4535-bcf7-de8adbafa9ae: Claiming fa:16:3e:89:8e:0f 10.100.0.14
Jan 20 14:54:55 compute-1 nova_compute[225855]: 2026-01-20 14:54:55.586 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:54:55 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:54:55.596 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:89:8e:0f 10.100.0.14'], port_security=['fa:16:3e:89:8e:0f 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '3bec73f6-5255-44c0-8a10-a64c7e86c0c2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e9589011-b728-4b79-9945-aa6c52dd0fc2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b1e83af992c94112a965575784639d77', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'e88a960e-8540-4c69-934d-b6e1b91beb98', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1dc6df7d-3e57-4779-8232-af1ccf413403, chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], logical_port=e084df8c-a73e-4535-bcf7-de8adbafa9ae) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:54:55 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:54:55.597 140354 INFO neutron.agent.ovn.metadata.agent [-] Port e084df8c-a73e-4535-bcf7-de8adbafa9ae in datapath e9589011-b728-4b79-9945-aa6c52dd0fc2 bound to our chassis
Jan 20 14:54:55 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:54:55.598 140354 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e9589011-b728-4b79-9945-aa6c52dd0fc2
Jan 20 14:54:55 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:54:55.608 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[9afb37f3-e8e8-4297-afbb-a26cb238fd60]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:54:55 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:54:55.609 140354 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tape9589011-b1 in ovnmeta-e9589011-b728-4b79-9945-aa6c52dd0fc2 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 20 14:54:55 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:54:55.611 229707 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tape9589011-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 20 14:54:55 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:54:55.611 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[37f023f9-b507-4187-8d19-b595205483e1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:54:55 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:54:55.612 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[c0a4a3d9-673c-46de-a494-4451daa149b9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:54:55 compute-1 systemd-udevd[275016]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 14:54:55 compute-1 systemd-machined[194361]: New machine qemu-59-instance-0000007c.
Jan 20 14:54:55 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:54:55.625 140466 DEBUG oslo.privsep.daemon [-] privsep: reply[65de640c-280e-473e-9bb2-b2f346a7877f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:54:55 compute-1 NetworkManager[49104]: <info>  [1768920895.6356] device (tape084df8c-a7): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 20 14:54:55 compute-1 NetworkManager[49104]: <info>  [1768920895.6368] device (tape084df8c-a7): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 20 14:54:55 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:54:55.649 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[cbf1464b-79dc-4385-a33d-3504357c40e9]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:54:55 compute-1 systemd[1]: Started Virtual Machine qemu-59-instance-0000007c.
Jan 20 14:54:55 compute-1 nova_compute[225855]: 2026-01-20 14:54:55.664 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:54:55 compute-1 nova_compute[225855]: 2026-01-20 14:54:55.671 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:54:55 compute-1 ovn_controller[130490]: 2026-01-20T14:54:55Z|00501|binding|INFO|Setting lport e084df8c-a73e-4535-bcf7-de8adbafa9ae ovn-installed in OVS
Jan 20 14:54:55 compute-1 ovn_controller[130490]: 2026-01-20T14:54:55Z|00502|binding|INFO|Setting lport e084df8c-a73e-4535-bcf7-de8adbafa9ae up in Southbound
Jan 20 14:54:55 compute-1 nova_compute[225855]: 2026-01-20 14:54:55.677 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:54:55 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:54:55.678 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[016674b5-8647-4c0f-8a60-c1b82fa6017c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:54:55 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:54:55.682 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[8dcdd49a-c257-4b59-be90-a7b61f7567ee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:54:55 compute-1 systemd-udevd[275019]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 14:54:55 compute-1 NetworkManager[49104]: <info>  [1768920895.6839] manager: (tape9589011-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/211)
Jan 20 14:54:55 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:54:55.710 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[20523d66-fd07-42ae-958c-924bc782af68]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:54:55 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:54:55.713 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[e06ee447-1b1b-4da3-9a1d-920365902c83]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:54:55 compute-1 NetworkManager[49104]: <info>  [1768920895.7319] device (tape9589011-b0): carrier: link connected
Jan 20 14:54:55 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:54:55.738 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[e7683c38-385a-4e60-a3a8-6a893a6ea556]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:54:55 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:54:55.754 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[1282c7db-9524-4d83-8c68-19f0f1cb537e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape9589011-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a0:5a:14'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 141], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 593465, 'reachable_time': 18563, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 275047, 'error': None, 'target': 'ovnmeta-e9589011-b728-4b79-9945-aa6c52dd0fc2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:54:55 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:54:55.766 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[069d19b3-a3b1-48ed-9827-6fde3e6af81d]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fea0:5a14'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 593465, 'tstamp': 593465}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 275048, 'error': None, 'target': 'ovnmeta-e9589011-b728-4b79-9945-aa6c52dd0fc2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:54:55 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:54:55.785 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[290b2ff3-aed3-440c-b0bd-344beb009f91]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape9589011-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a0:5a:14'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 141], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 593465, 'reachable_time': 18563, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 275049, 'error': None, 'target': 'ovnmeta-e9589011-b728-4b79-9945-aa6c52dd0fc2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:54:55 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:54:55.808 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[2ad23ec9-5792-486b-a7d4-594fc2fdb527]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:54:55 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/2457285435' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:54:55 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:54:55.875 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[3961ac24-8cf7-453f-8e7d-dae7b6c6a59c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:54:55 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:54:55.876 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape9589011-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:54:55 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:54:55.877 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:54:55 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:54:55.877 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape9589011-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:54:55 compute-1 nova_compute[225855]: 2026-01-20 14:54:55.879 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:54:55 compute-1 NetworkManager[49104]: <info>  [1768920895.8796] manager: (tape9589011-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/212)
Jan 20 14:54:55 compute-1 kernel: tape9589011-b0: entered promiscuous mode
Jan 20 14:54:55 compute-1 nova_compute[225855]: 2026-01-20 14:54:55.881 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:54:55 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:54:55.882 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape9589011-b0, col_values=(('external_ids', {'iface-id': '9ca9d06a-9365-4769-a2c4-7322625683ac'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:54:55 compute-1 nova_compute[225855]: 2026-01-20 14:54:55.883 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:54:55 compute-1 ovn_controller[130490]: 2026-01-20T14:54:55Z|00503|binding|INFO|Releasing lport 9ca9d06a-9365-4769-a2c4-7322625683ac from this chassis (sb_readonly=0)
Jan 20 14:54:55 compute-1 nova_compute[225855]: 2026-01-20 14:54:55.898 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:54:55 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:54:55.899 140354 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/e9589011-b728-4b79-9945-aa6c52dd0fc2.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/e9589011-b728-4b79-9945-aa6c52dd0fc2.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 20 14:54:55 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:54:55.900 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[eca5d3e5-dd44-4821-ad41-4456380c285b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:54:55 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:54:55.901 140354 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 20 14:54:55 compute-1 ovn_metadata_agent[140349]: global
Jan 20 14:54:55 compute-1 ovn_metadata_agent[140349]:     log         /dev/log local0 debug
Jan 20 14:54:55 compute-1 ovn_metadata_agent[140349]:     log-tag     haproxy-metadata-proxy-e9589011-b728-4b79-9945-aa6c52dd0fc2
Jan 20 14:54:55 compute-1 ovn_metadata_agent[140349]:     user        root
Jan 20 14:54:55 compute-1 ovn_metadata_agent[140349]:     group       root
Jan 20 14:54:55 compute-1 ovn_metadata_agent[140349]:     maxconn     1024
Jan 20 14:54:55 compute-1 ovn_metadata_agent[140349]:     pidfile     /var/lib/neutron/external/pids/e9589011-b728-4b79-9945-aa6c52dd0fc2.pid.haproxy
Jan 20 14:54:55 compute-1 ovn_metadata_agent[140349]:     daemon
Jan 20 14:54:55 compute-1 ovn_metadata_agent[140349]: 
Jan 20 14:54:55 compute-1 ovn_metadata_agent[140349]: defaults
Jan 20 14:54:55 compute-1 ovn_metadata_agent[140349]:     log global
Jan 20 14:54:55 compute-1 ovn_metadata_agent[140349]:     mode http
Jan 20 14:54:55 compute-1 ovn_metadata_agent[140349]:     option httplog
Jan 20 14:54:55 compute-1 ovn_metadata_agent[140349]:     option dontlognull
Jan 20 14:54:55 compute-1 ovn_metadata_agent[140349]:     option http-server-close
Jan 20 14:54:55 compute-1 ovn_metadata_agent[140349]:     option forwardfor
Jan 20 14:54:55 compute-1 ovn_metadata_agent[140349]:     retries                 3
Jan 20 14:54:55 compute-1 ovn_metadata_agent[140349]:     timeout http-request    30s
Jan 20 14:54:55 compute-1 ovn_metadata_agent[140349]:     timeout connect         30s
Jan 20 14:54:55 compute-1 ovn_metadata_agent[140349]:     timeout client          32s
Jan 20 14:54:55 compute-1 ovn_metadata_agent[140349]:     timeout server          32s
Jan 20 14:54:55 compute-1 ovn_metadata_agent[140349]:     timeout http-keep-alive 30s
Jan 20 14:54:55 compute-1 ovn_metadata_agent[140349]: 
Jan 20 14:54:55 compute-1 ovn_metadata_agent[140349]: 
Jan 20 14:54:55 compute-1 ovn_metadata_agent[140349]: listen listener
Jan 20 14:54:55 compute-1 ovn_metadata_agent[140349]:     bind 169.254.169.254:80
Jan 20 14:54:55 compute-1 ovn_metadata_agent[140349]:     server metadata /var/lib/neutron/metadata_proxy
Jan 20 14:54:55 compute-1 ovn_metadata_agent[140349]:     http-request add-header X-OVN-Network-ID e9589011-b728-4b79-9945-aa6c52dd0fc2
Jan 20 14:54:55 compute-1 ovn_metadata_agent[140349]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 20 14:54:55 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:54:55.902 140354 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-e9589011-b728-4b79-9945-aa6c52dd0fc2', 'env', 'PROCESS_TAG=haproxy-e9589011-b728-4b79-9945-aa6c52dd0fc2', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/e9589011-b728-4b79-9945-aa6c52dd0fc2.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 20 14:54:55 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:54:55 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:54:55 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:54:55.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:54:56 compute-1 nova_compute[225855]: 2026-01-20 14:54:56.019 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768920896.0183027, 3bec73f6-5255-44c0-8a10-a64c7e86c0c2 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:54:56 compute-1 nova_compute[225855]: 2026-01-20 14:54:56.019 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 3bec73f6-5255-44c0-8a10-a64c7e86c0c2] VM Started (Lifecycle Event)
Jan 20 14:54:56 compute-1 nova_compute[225855]: 2026-01-20 14:54:56.040 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:54:56 compute-1 nova_compute[225855]: 2026-01-20 14:54:56.043 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 3bec73f6-5255-44c0-8a10-a64c7e86c0c2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:54:56 compute-1 nova_compute[225855]: 2026-01-20 14:54:56.047 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768920896.019713, 3bec73f6-5255-44c0-8a10-a64c7e86c0c2 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:54:56 compute-1 nova_compute[225855]: 2026-01-20 14:54:56.047 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 3bec73f6-5255-44c0-8a10-a64c7e86c0c2] VM Paused (Lifecycle Event)
Jan 20 14:54:56 compute-1 nova_compute[225855]: 2026-01-20 14:54:56.078 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 3bec73f6-5255-44c0-8a10-a64c7e86c0c2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:54:56 compute-1 nova_compute[225855]: 2026-01-20 14:54:56.081 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 3bec73f6-5255-44c0-8a10-a64c7e86c0c2] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:54:56 compute-1 nova_compute[225855]: 2026-01-20 14:54:56.104 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 3bec73f6-5255-44c0-8a10-a64c7e86c0c2] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 14:54:56 compute-1 podman[275123]: 2026-01-20 14:54:56.357982826 +0000 UTC m=+0.090894627 container create 35c1b3425f1e0506111fa57ff509641c7dc2b69df752675b10a514f5d11a05d6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e9589011-b728-4b79-9945-aa6c52dd0fc2, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 20 14:54:56 compute-1 podman[275123]: 2026-01-20 14:54:56.299709881 +0000 UTC m=+0.032621702 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 20 14:54:56 compute-1 systemd[1]: Started libpod-conmon-35c1b3425f1e0506111fa57ff509641c7dc2b69df752675b10a514f5d11a05d6.scope.
Jan 20 14:54:56 compute-1 nova_compute[225855]: 2026-01-20 14:54:56.454 225859 DEBUG nova.compute.manager [req-1173ce9e-d191-4d3a-8ae8-65115d4a950d req-f838c079-2d26-45ad-bafd-2b051183597f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 3bec73f6-5255-44c0-8a10-a64c7e86c0c2] Received event network-vif-plugged-e084df8c-a73e-4535-bcf7-de8adbafa9ae external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:54:56 compute-1 nova_compute[225855]: 2026-01-20 14:54:56.455 225859 DEBUG oslo_concurrency.lockutils [req-1173ce9e-d191-4d3a-8ae8-65115d4a950d req-f838c079-2d26-45ad-bafd-2b051183597f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "3bec73f6-5255-44c0-8a10-a64c7e86c0c2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:54:56 compute-1 nova_compute[225855]: 2026-01-20 14:54:56.455 225859 DEBUG oslo_concurrency.lockutils [req-1173ce9e-d191-4d3a-8ae8-65115d4a950d req-f838c079-2d26-45ad-bafd-2b051183597f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "3bec73f6-5255-44c0-8a10-a64c7e86c0c2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:54:56 compute-1 nova_compute[225855]: 2026-01-20 14:54:56.455 225859 DEBUG oslo_concurrency.lockutils [req-1173ce9e-d191-4d3a-8ae8-65115d4a950d req-f838c079-2d26-45ad-bafd-2b051183597f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "3bec73f6-5255-44c0-8a10-a64c7e86c0c2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:54:56 compute-1 nova_compute[225855]: 2026-01-20 14:54:56.455 225859 DEBUG nova.compute.manager [req-1173ce9e-d191-4d3a-8ae8-65115d4a950d req-f838c079-2d26-45ad-bafd-2b051183597f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 3bec73f6-5255-44c0-8a10-a64c7e86c0c2] Processing event network-vif-plugged-e084df8c-a73e-4535-bcf7-de8adbafa9ae _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 20 14:54:56 compute-1 nova_compute[225855]: 2026-01-20 14:54:56.456 225859 DEBUG nova.compute.manager [None req-0ec32ccc-7ae6-48ef-bfe0-8dad6c8c8aef 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] [instance: 3bec73f6-5255-44c0-8a10-a64c7e86c0c2] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 20 14:54:56 compute-1 systemd[1]: Started libcrun container.
Jan 20 14:54:56 compute-1 nova_compute[225855]: 2026-01-20 14:54:56.459 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768920896.4596813, 3bec73f6-5255-44c0-8a10-a64c7e86c0c2 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:54:56 compute-1 nova_compute[225855]: 2026-01-20 14:54:56.460 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 3bec73f6-5255-44c0-8a10-a64c7e86c0c2] VM Resumed (Lifecycle Event)
Jan 20 14:54:56 compute-1 nova_compute[225855]: 2026-01-20 14:54:56.462 225859 DEBUG nova.virt.libvirt.driver [None req-0ec32ccc-7ae6-48ef-bfe0-8dad6c8c8aef 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] [instance: 3bec73f6-5255-44c0-8a10-a64c7e86c0c2] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 20 14:54:56 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bba5b462764fe5156f5cb57f1fc6d7aae20c07ad8940e8ed4d30f072b3f5c46d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 20 14:54:56 compute-1 nova_compute[225855]: 2026-01-20 14:54:56.473 225859 INFO nova.virt.libvirt.driver [-] [instance: 3bec73f6-5255-44c0-8a10-a64c7e86c0c2] Instance spawned successfully.
Jan 20 14:54:56 compute-1 nova_compute[225855]: 2026-01-20 14:54:56.473 225859 DEBUG nova.virt.libvirt.driver [None req-0ec32ccc-7ae6-48ef-bfe0-8dad6c8c8aef 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] [instance: 3bec73f6-5255-44c0-8a10-a64c7e86c0c2] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 20 14:54:56 compute-1 nova_compute[225855]: 2026-01-20 14:54:56.478 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 3bec73f6-5255-44c0-8a10-a64c7e86c0c2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:54:56 compute-1 nova_compute[225855]: 2026-01-20 14:54:56.481 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 3bec73f6-5255-44c0-8a10-a64c7e86c0c2] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:54:56 compute-1 nova_compute[225855]: 2026-01-20 14:54:56.506 225859 DEBUG nova.virt.libvirt.driver [None req-0ec32ccc-7ae6-48ef-bfe0-8dad6c8c8aef 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] [instance: 3bec73f6-5255-44c0-8a10-a64c7e86c0c2] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:54:56 compute-1 nova_compute[225855]: 2026-01-20 14:54:56.507 225859 DEBUG nova.virt.libvirt.driver [None req-0ec32ccc-7ae6-48ef-bfe0-8dad6c8c8aef 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] [instance: 3bec73f6-5255-44c0-8a10-a64c7e86c0c2] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:54:56 compute-1 nova_compute[225855]: 2026-01-20 14:54:56.508 225859 DEBUG nova.virt.libvirt.driver [None req-0ec32ccc-7ae6-48ef-bfe0-8dad6c8c8aef 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] [instance: 3bec73f6-5255-44c0-8a10-a64c7e86c0c2] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:54:56 compute-1 nova_compute[225855]: 2026-01-20 14:54:56.508 225859 DEBUG nova.virt.libvirt.driver [None req-0ec32ccc-7ae6-48ef-bfe0-8dad6c8c8aef 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] [instance: 3bec73f6-5255-44c0-8a10-a64c7e86c0c2] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:54:56 compute-1 nova_compute[225855]: 2026-01-20 14:54:56.508 225859 DEBUG nova.virt.libvirt.driver [None req-0ec32ccc-7ae6-48ef-bfe0-8dad6c8c8aef 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] [instance: 3bec73f6-5255-44c0-8a10-a64c7e86c0c2] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:54:56 compute-1 nova_compute[225855]: 2026-01-20 14:54:56.509 225859 DEBUG nova.virt.libvirt.driver [None req-0ec32ccc-7ae6-48ef-bfe0-8dad6c8c8aef 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] [instance: 3bec73f6-5255-44c0-8a10-a64c7e86c0c2] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:54:56 compute-1 podman[275123]: 2026-01-20 14:54:56.517941572 +0000 UTC m=+0.250853403 container init 35c1b3425f1e0506111fa57ff509641c7dc2b69df752675b10a514f5d11a05d6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e9589011-b728-4b79-9945-aa6c52dd0fc2, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 20 14:54:56 compute-1 podman[275123]: 2026-01-20 14:54:56.524121706 +0000 UTC m=+0.257033517 container start 35c1b3425f1e0506111fa57ff509641c7dc2b69df752675b10a514f5d11a05d6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e9589011-b728-4b79-9945-aa6c52dd0fc2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 20 14:54:56 compute-1 neutron-haproxy-ovnmeta-e9589011-b728-4b79-9945-aa6c52dd0fc2[275139]: [NOTICE]   (275143) : New worker (275145) forked
Jan 20 14:54:56 compute-1 neutron-haproxy-ovnmeta-e9589011-b728-4b79-9945-aa6c52dd0fc2[275139]: [NOTICE]   (275143) : Loading success.
Jan 20 14:54:56 compute-1 nova_compute[225855]: 2026-01-20 14:54:56.567 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 3bec73f6-5255-44c0-8a10-a64c7e86c0c2] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 14:54:56 compute-1 nova_compute[225855]: 2026-01-20 14:54:56.625 225859 INFO nova.compute.manager [None req-0ec32ccc-7ae6-48ef-bfe0-8dad6c8c8aef 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] [instance: 3bec73f6-5255-44c0-8a10-a64c7e86c0c2] Took 7.31 seconds to spawn the instance on the hypervisor.
Jan 20 14:54:56 compute-1 nova_compute[225855]: 2026-01-20 14:54:56.625 225859 DEBUG nova.compute.manager [None req-0ec32ccc-7ae6-48ef-bfe0-8dad6c8c8aef 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] [instance: 3bec73f6-5255-44c0-8a10-a64c7e86c0c2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:54:56 compute-1 nova_compute[225855]: 2026-01-20 14:54:56.718 225859 INFO nova.compute.manager [None req-0ec32ccc-7ae6-48ef-bfe0-8dad6c8c8aef 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] [instance: 3bec73f6-5255-44c0-8a10-a64c7e86c0c2] Took 8.38 seconds to build instance.
Jan 20 14:54:56 compute-1 nova_compute[225855]: 2026-01-20 14:54:56.736 225859 DEBUG oslo_concurrency.lockutils [None req-0ec32ccc-7ae6-48ef-bfe0-8dad6c8c8aef 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] Lock "3bec73f6-5255-44c0-8a10-a64c7e86c0c2" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.506s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:54:56 compute-1 ceph-mon[81775]: pgmap v2066: 321 pgs: 321 active+clean; 323 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 595 KiB/s rd, 6.3 MiB/s wr, 260 op/s
Jan 20 14:54:56 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/3520046388' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:54:56 compute-1 nova_compute[225855]: 2026-01-20 14:54:56.988 225859 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1768920881.9873397, 23ea4537-f03f-46de-881f-b979e232a3b9 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:54:56 compute-1 nova_compute[225855]: 2026-01-20 14:54:56.989 225859 INFO nova.compute.manager [-] [instance: 23ea4537-f03f-46de-881f-b979e232a3b9] VM Stopped (Lifecycle Event)
Jan 20 14:54:57 compute-1 nova_compute[225855]: 2026-01-20 14:54:57.013 225859 DEBUG nova.compute.manager [None req-b69ccc37-485c-4beb-9f26-21ec25ce8965 - - - - - -] [instance: 23ea4537-f03f-46de-881f-b979e232a3b9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:54:57 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:54:57 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:54:57 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:54:57.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:54:57 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e301 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:54:57 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/531414823' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:54:57 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:54:57 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:54:57 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:54:57.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:54:58 compute-1 nova_compute[225855]: 2026-01-20 14:54:58.698 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:54:58 compute-1 ceph-mon[81775]: pgmap v2067: 321 pgs: 321 active+clean; 289 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 819 KiB/s rd, 7.6 MiB/s wr, 293 op/s
Jan 20 14:54:59 compute-1 nova_compute[225855]: 2026-01-20 14:54:59.150 225859 DEBUG nova.compute.manager [req-3a3c36b3-a35e-4e47-8aea-d75a8b644eff req-dc2f0173-9909-4d54-9a08-d5fc88f7c93d 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 3bec73f6-5255-44c0-8a10-a64c7e86c0c2] Received event network-vif-plugged-e084df8c-a73e-4535-bcf7-de8adbafa9ae external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:54:59 compute-1 nova_compute[225855]: 2026-01-20 14:54:59.150 225859 DEBUG oslo_concurrency.lockutils [req-3a3c36b3-a35e-4e47-8aea-d75a8b644eff req-dc2f0173-9909-4d54-9a08-d5fc88f7c93d 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "3bec73f6-5255-44c0-8a10-a64c7e86c0c2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:54:59 compute-1 nova_compute[225855]: 2026-01-20 14:54:59.150 225859 DEBUG oslo_concurrency.lockutils [req-3a3c36b3-a35e-4e47-8aea-d75a8b644eff req-dc2f0173-9909-4d54-9a08-d5fc88f7c93d 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "3bec73f6-5255-44c0-8a10-a64c7e86c0c2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:54:59 compute-1 nova_compute[225855]: 2026-01-20 14:54:59.150 225859 DEBUG oslo_concurrency.lockutils [req-3a3c36b3-a35e-4e47-8aea-d75a8b644eff req-dc2f0173-9909-4d54-9a08-d5fc88f7c93d 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "3bec73f6-5255-44c0-8a10-a64c7e86c0c2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:54:59 compute-1 nova_compute[225855]: 2026-01-20 14:54:59.151 225859 DEBUG nova.compute.manager [req-3a3c36b3-a35e-4e47-8aea-d75a8b644eff req-dc2f0173-9909-4d54-9a08-d5fc88f7c93d 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 3bec73f6-5255-44c0-8a10-a64c7e86c0c2] No waiting events found dispatching network-vif-plugged-e084df8c-a73e-4535-bcf7-de8adbafa9ae pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:54:59 compute-1 nova_compute[225855]: 2026-01-20 14:54:59.151 225859 WARNING nova.compute.manager [req-3a3c36b3-a35e-4e47-8aea-d75a8b644eff req-dc2f0173-9909-4d54-9a08-d5fc88f7c93d 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 3bec73f6-5255-44c0-8a10-a64c7e86c0c2] Received unexpected event network-vif-plugged-e084df8c-a73e-4535-bcf7-de8adbafa9ae for instance with vm_state active and task_state None.
Jan 20 14:54:59 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:54:59 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:54:59 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:54:59.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:54:59 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e302 e302: 3 total, 3 up, 3 in
Jan 20 14:54:59 compute-1 nova_compute[225855]: 2026-01-20 14:54:59.811 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:54:59 compute-1 NetworkManager[49104]: <info>  [1768920899.8138] manager: (patch-provnet-b62c391b-f7a3-4a38-a0df-72ac0383ca74-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/213)
Jan 20 14:54:59 compute-1 NetworkManager[49104]: <info>  [1768920899.8148] manager: (patch-br-int-to-provnet-b62c391b-f7a3-4a38-a0df-72ac0383ca74): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/214)
Jan 20 14:54:59 compute-1 nova_compute[225855]: 2026-01-20 14:54:59.933 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:54:59 compute-1 ovn_controller[130490]: 2026-01-20T14:54:59Z|00504|binding|INFO|Releasing lport 9ca9d06a-9365-4769-a2c4-7322625683ac from this chassis (sb_readonly=0)
Jan 20 14:54:59 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:54:59 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:54:59 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:54:59.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:54:59 compute-1 nova_compute[225855]: 2026-01-20 14:54:59.946 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:55:01 compute-1 nova_compute[225855]: 2026-01-20 14:55:01.042 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:55:01 compute-1 ceph-mon[81775]: osdmap e302: 3 total, 3 up, 3 in
Jan 20 14:55:01 compute-1 ceph-mon[81775]: pgmap v2069: 321 pgs: 321 active+clean; 293 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 6.3 MiB/s wr, 330 op/s
Jan 20 14:55:01 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:55:01 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:55:01 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:55:01.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:55:01 compute-1 nova_compute[225855]: 2026-01-20 14:55:01.597 225859 DEBUG nova.compute.manager [req-8903823b-89fc-407d-8919-cfe9a67c1d38 req-15fc6e41-9dc4-45e1-92ae-799cb94a969f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 3bec73f6-5255-44c0-8a10-a64c7e86c0c2] Received event network-changed-e084df8c-a73e-4535-bcf7-de8adbafa9ae external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:55:01 compute-1 nova_compute[225855]: 2026-01-20 14:55:01.598 225859 DEBUG nova.compute.manager [req-8903823b-89fc-407d-8919-cfe9a67c1d38 req-15fc6e41-9dc4-45e1-92ae-799cb94a969f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 3bec73f6-5255-44c0-8a10-a64c7e86c0c2] Refreshing instance network info cache due to event network-changed-e084df8c-a73e-4535-bcf7-de8adbafa9ae. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 14:55:01 compute-1 nova_compute[225855]: 2026-01-20 14:55:01.598 225859 DEBUG oslo_concurrency.lockutils [req-8903823b-89fc-407d-8919-cfe9a67c1d38 req-15fc6e41-9dc4-45e1-92ae-799cb94a969f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-3bec73f6-5255-44c0-8a10-a64c7e86c0c2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:55:01 compute-1 nova_compute[225855]: 2026-01-20 14:55:01.598 225859 DEBUG oslo_concurrency.lockutils [req-8903823b-89fc-407d-8919-cfe9a67c1d38 req-15fc6e41-9dc4-45e1-92ae-799cb94a969f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-3bec73f6-5255-44c0-8a10-a64c7e86c0c2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:55:01 compute-1 nova_compute[225855]: 2026-01-20 14:55:01.598 225859 DEBUG nova.network.neutron [req-8903823b-89fc-407d-8919-cfe9a67c1d38 req-15fc6e41-9dc4-45e1-92ae-799cb94a969f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 3bec73f6-5255-44c0-8a10-a64c7e86c0c2] Refreshing network info cache for port e084df8c-a73e-4535-bcf7-de8adbafa9ae _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 14:55:01 compute-1 ovn_controller[130490]: 2026-01-20T14:55:01Z|00505|binding|INFO|Releasing lport 9ca9d06a-9365-4769-a2c4-7322625683ac from this chassis (sb_readonly=0)
Jan 20 14:55:01 compute-1 nova_compute[225855]: 2026-01-20 14:55:01.788 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:55:01 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:55:01 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:55:01 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:55:01.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:55:02 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/1360851754' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:55:02 compute-1 ceph-mon[81775]: pgmap v2070: 321 pgs: 321 active+clean; 293 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 3.9 MiB/s wr, 294 op/s
Jan 20 14:55:02 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:55:03 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:55:03 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:55:03 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:55:03.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:55:03 compute-1 nova_compute[225855]: 2026-01-20 14:55:03.701 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:55:03 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:55:03 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:55:03 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:55:03.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:55:05 compute-1 nova_compute[225855]: 2026-01-20 14:55:05.175 225859 DEBUG nova.network.neutron [req-8903823b-89fc-407d-8919-cfe9a67c1d38 req-15fc6e41-9dc4-45e1-92ae-799cb94a969f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 3bec73f6-5255-44c0-8a10-a64c7e86c0c2] Updated VIF entry in instance network info cache for port e084df8c-a73e-4535-bcf7-de8adbafa9ae. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 14:55:05 compute-1 nova_compute[225855]: 2026-01-20 14:55:05.176 225859 DEBUG nova.network.neutron [req-8903823b-89fc-407d-8919-cfe9a67c1d38 req-15fc6e41-9dc4-45e1-92ae-799cb94a969f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 3bec73f6-5255-44c0-8a10-a64c7e86c0c2] Updating instance_info_cache with network_info: [{"id": "e084df8c-a73e-4535-bcf7-de8adbafa9ae", "address": "fa:16:3e:89:8e:0f", "network": {"id": "e9589011-b728-4b79-9945-aa6c52dd0fc2", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-1143668360-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b1e83af992c94112a965575784639d77", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape084df8c-a7", "ovs_interfaceid": "e084df8c-a73e-4535-bcf7-de8adbafa9ae", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:55:05 compute-1 nova_compute[225855]: 2026-01-20 14:55:05.229 225859 DEBUG oslo_concurrency.lockutils [req-8903823b-89fc-407d-8919-cfe9a67c1d38 req-15fc6e41-9dc4-45e1-92ae-799cb94a969f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-3bec73f6-5255-44c0-8a10-a64c7e86c0c2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:55:05 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:55:05 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:55:05 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:55:05.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:55:05 compute-1 ceph-mon[81775]: pgmap v2071: 321 pgs: 321 active+clean; 271 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 5.0 MiB/s rd, 3.6 MiB/s wr, 357 op/s
Jan 20 14:55:05 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:55:05 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:55:05 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:55:05.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:55:06 compute-1 nova_compute[225855]: 2026-01-20 14:55:06.044 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:55:07 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:55:07 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:55:07 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:55:07.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:55:07 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:55:07 compute-1 ceph-mon[81775]: pgmap v2072: 321 pgs: 321 active+clean; 246 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 1.9 MiB/s wr, 255 op/s
Jan 20 14:55:07 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:55:07 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:55:07 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:55:07.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:55:08 compute-1 nova_compute[225855]: 2026-01-20 14:55:08.704 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:55:08 compute-1 ceph-mon[81775]: pgmap v2073: 321 pgs: 321 active+clean; 246 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 581 KiB/s wr, 216 op/s
Jan 20 14:55:09 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:55:09 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:55:09 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:55:09.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:55:09 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:55:09 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:55:09 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:55:09.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:55:10 compute-1 ovn_controller[130490]: 2026-01-20T14:55:10Z|00063|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:89:8e:0f 10.100.0.14
Jan 20 14:55:10 compute-1 ovn_controller[130490]: 2026-01-20T14:55:10Z|00064|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:89:8e:0f 10.100.0.14
Jan 20 14:55:10 compute-1 ceph-mon[81775]: pgmap v2074: 321 pgs: 321 active+clean; 252 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 671 KiB/s wr, 151 op/s
Jan 20 14:55:11 compute-1 nova_compute[225855]: 2026-01-20 14:55:11.046 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:55:11 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:55:11 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:55:11 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:55:11.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:55:11 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:55:11 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:55:11 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:55:11.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:55:12 compute-1 sudo[275164]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:55:12 compute-1 sudo[275164]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:55:12 compute-1 sudo[275164]: pam_unix(sudo:session): session closed for user root
Jan 20 14:55:12 compute-1 sudo[275195]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:55:12 compute-1 sudo[275195]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:55:12 compute-1 sudo[275195]: pam_unix(sudo:session): session closed for user root
Jan 20 14:55:12 compute-1 podman[275188]: 2026-01-20 14:55:12.377886796 +0000 UTC m=+0.101162107 container health_status 72be95c12e041eb2cd1a16ad70dfa1391d9f03a5f3180a614ead9459dcfcd46f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 20 14:55:12 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:55:12 compute-1 ceph-mon[81775]: pgmap v2075: 321 pgs: 321 active+clean; 260 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 1.1 MiB/s wr, 151 op/s
Jan 20 14:55:13 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:55:13 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:55:13 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:55:13.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:55:13 compute-1 nova_compute[225855]: 2026-01-20 14:55:13.707 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:55:13 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:55:13 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:55:13 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:55:13.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:55:14 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/2253935145' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 14:55:14 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/2253935145' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 14:55:15 compute-1 ceph-mon[81775]: pgmap v2076: 321 pgs: 321 active+clean; 274 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 142 op/s
Jan 20 14:55:15 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:55:15 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:55:15 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:55:15.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:55:15 compute-1 nova_compute[225855]: 2026-01-20 14:55:15.457 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:55:15 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:55:15 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:55:15 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:55:15.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:55:16 compute-1 nova_compute[225855]: 2026-01-20 14:55:16.048 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:55:16 compute-1 ceph-mon[81775]: pgmap v2077: 321 pgs: 321 active+clean; 279 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 340 KiB/s rd, 2.1 MiB/s wr, 81 op/s
Jan 20 14:55:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:55:16.414 140354 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:55:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:55:16.415 140354 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:55:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:55:16.416 140354 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:55:16 compute-1 sudo[275242]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:55:16 compute-1 sudo[275242]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:55:16 compute-1 sudo[275242]: pam_unix(sudo:session): session closed for user root
Jan 20 14:55:16 compute-1 sudo[275267]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:55:16 compute-1 sudo[275267]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:55:16 compute-1 sudo[275267]: pam_unix(sudo:session): session closed for user root
Jan 20 14:55:16 compute-1 sudo[275292]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:55:16 compute-1 sudo[275292]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:55:16 compute-1 sudo[275292]: pam_unix(sudo:session): session closed for user root
Jan 20 14:55:16 compute-1 sudo[275318]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 20 14:55:16 compute-1 sudo[275318]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:55:17 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/784021205' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:55:17 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:55:17 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:55:17 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:55:17.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:55:17 compute-1 sudo[275318]: pam_unix(sudo:session): session closed for user root
Jan 20 14:55:17 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:55:17 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:55:17 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:55:17 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:55:17.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:55:18 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:55:18 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 14:55:18 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:55:18 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 14:55:18 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 14:55:18 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:55:18 compute-1 ceph-mon[81775]: pgmap v2078: 321 pgs: 321 active+clean; 279 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 330 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Jan 20 14:55:18 compute-1 nova_compute[225855]: 2026-01-20 14:55:18.710 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:55:18 compute-1 nova_compute[225855]: 2026-01-20 14:55:18.712 225859 DEBUG oslo_concurrency.lockutils [None req-d43ac6da-3dd9-44d7-8c73-72899d49ddd7 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] Acquiring lock "3bec73f6-5255-44c0-8a10-a64c7e86c0c2" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:55:18 compute-1 nova_compute[225855]: 2026-01-20 14:55:18.712 225859 DEBUG oslo_concurrency.lockutils [None req-d43ac6da-3dd9-44d7-8c73-72899d49ddd7 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] Lock "3bec73f6-5255-44c0-8a10-a64c7e86c0c2" acquired by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:55:18 compute-1 nova_compute[225855]: 2026-01-20 14:55:18.713 225859 INFO nova.compute.manager [None req-d43ac6da-3dd9-44d7-8c73-72899d49ddd7 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] [instance: 3bec73f6-5255-44c0-8a10-a64c7e86c0c2] Shelving
Jan 20 14:55:18 compute-1 nova_compute[225855]: 2026-01-20 14:55:18.738 225859 DEBUG nova.virt.libvirt.driver [None req-d43ac6da-3dd9-44d7-8c73-72899d49ddd7 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] [instance: 3bec73f6-5255-44c0-8a10-a64c7e86c0c2] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Jan 20 14:55:19 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:55:19 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:55:19 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:55:19.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:55:19 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:55:19 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:55:19 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:55:19.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:55:20 compute-1 nova_compute[225855]: 2026-01-20 14:55:20.232 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:55:20 compute-1 ceph-mon[81775]: pgmap v2079: 321 pgs: 321 active+clean; 279 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 330 KiB/s rd, 2.1 MiB/s wr, 67 op/s
Jan 20 14:55:21 compute-1 nova_compute[225855]: 2026-01-20 14:55:21.050 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:55:21 compute-1 nova_compute[225855]: 2026-01-20 14:55:21.112 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:55:21 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:55:21 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:55:21 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:55:21.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:55:21 compute-1 kernel: tape084df8c-a7 (unregistering): left promiscuous mode
Jan 20 14:55:21 compute-1 NetworkManager[49104]: <info>  [1768920921.7351] device (tape084df8c-a7): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 20 14:55:21 compute-1 ovn_controller[130490]: 2026-01-20T14:55:21Z|00506|binding|INFO|Releasing lport e084df8c-a73e-4535-bcf7-de8adbafa9ae from this chassis (sb_readonly=0)
Jan 20 14:55:21 compute-1 ovn_controller[130490]: 2026-01-20T14:55:21Z|00507|binding|INFO|Setting lport e084df8c-a73e-4535-bcf7-de8adbafa9ae down in Southbound
Jan 20 14:55:21 compute-1 nova_compute[225855]: 2026-01-20 14:55:21.745 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:55:21 compute-1 ovn_controller[130490]: 2026-01-20T14:55:21Z|00508|binding|INFO|Removing iface tape084df8c-a7 ovn-installed in OVS
Jan 20 14:55:21 compute-1 nova_compute[225855]: 2026-01-20 14:55:21.747 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:55:21 compute-1 nova_compute[225855]: 2026-01-20 14:55:21.752 225859 INFO nova.virt.libvirt.driver [None req-d43ac6da-3dd9-44d7-8c73-72899d49ddd7 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] [instance: 3bec73f6-5255-44c0-8a10-a64c7e86c0c2] Instance shutdown successfully after 3 seconds.
Jan 20 14:55:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:55:21.758 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:89:8e:0f 10.100.0.14'], port_security=['fa:16:3e:89:8e:0f 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '3bec73f6-5255-44c0-8a10-a64c7e86c0c2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e9589011-b728-4b79-9945-aa6c52dd0fc2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b1e83af992c94112a965575784639d77', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'e88a960e-8540-4c69-934d-b6e1b91beb98', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com', 'neutron:port_fip': '192.168.122.233'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1dc6df7d-3e57-4779-8232-af1ccf413403, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], logical_port=e084df8c-a73e-4535-bcf7-de8adbafa9ae) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:55:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:55:21.759 140354 INFO neutron.agent.ovn.metadata.agent [-] Port e084df8c-a73e-4535-bcf7-de8adbafa9ae in datapath e9589011-b728-4b79-9945-aa6c52dd0fc2 unbound from our chassis
Jan 20 14:55:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:55:21.760 140354 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e9589011-b728-4b79-9945-aa6c52dd0fc2, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 20 14:55:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:55:21.762 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[20ff6c6c-2724-4580-a4d2-372999bfe7a5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:55:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:55:21.762 140354 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-e9589011-b728-4b79-9945-aa6c52dd0fc2 namespace which is not needed anymore
Jan 20 14:55:21 compute-1 nova_compute[225855]: 2026-01-20 14:55:21.766 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:55:21 compute-1 systemd[1]: machine-qemu\x2d59\x2dinstance\x2d0000007c.scope: Deactivated successfully.
Jan 20 14:55:21 compute-1 systemd[1]: machine-qemu\x2d59\x2dinstance\x2d0000007c.scope: Consumed 14.362s CPU time.
Jan 20 14:55:21 compute-1 systemd-machined[194361]: Machine qemu-59-instance-0000007c terminated.
Jan 20 14:55:21 compute-1 neutron-haproxy-ovnmeta-e9589011-b728-4b79-9945-aa6c52dd0fc2[275139]: [NOTICE]   (275143) : haproxy version is 2.8.14-c23fe91
Jan 20 14:55:21 compute-1 neutron-haproxy-ovnmeta-e9589011-b728-4b79-9945-aa6c52dd0fc2[275139]: [NOTICE]   (275143) : path to executable is /usr/sbin/haproxy
Jan 20 14:55:21 compute-1 neutron-haproxy-ovnmeta-e9589011-b728-4b79-9945-aa6c52dd0fc2[275139]: [WARNING]  (275143) : Exiting Master process...
Jan 20 14:55:21 compute-1 neutron-haproxy-ovnmeta-e9589011-b728-4b79-9945-aa6c52dd0fc2[275139]: [ALERT]    (275143) : Current worker (275145) exited with code 143 (Terminated)
Jan 20 14:55:21 compute-1 neutron-haproxy-ovnmeta-e9589011-b728-4b79-9945-aa6c52dd0fc2[275139]: [WARNING]  (275143) : All workers exited. Exiting... (0)
Jan 20 14:55:21 compute-1 systemd[1]: libpod-35c1b3425f1e0506111fa57ff509641c7dc2b69df752675b10a514f5d11a05d6.scope: Deactivated successfully.
Jan 20 14:55:21 compute-1 podman[275401]: 2026-01-20 14:55:21.902421473 +0000 UTC m=+0.056095885 container died 35c1b3425f1e0506111fa57ff509641c7dc2b69df752675b10a514f5d11a05d6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e9589011-b728-4b79-9945-aa6c52dd0fc2, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Jan 20 14:55:21 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-35c1b3425f1e0506111fa57ff509641c7dc2b69df752675b10a514f5d11a05d6-userdata-shm.mount: Deactivated successfully.
Jan 20 14:55:21 compute-1 systemd[1]: var-lib-containers-storage-overlay-bba5b462764fe5156f5cb57f1fc6d7aae20c07ad8940e8ed4d30f072b3f5c46d-merged.mount: Deactivated successfully.
Jan 20 14:55:21 compute-1 podman[275401]: 2026-01-20 14:55:21.942579586 +0000 UTC m=+0.096253998 container cleanup 35c1b3425f1e0506111fa57ff509641c7dc2b69df752675b10a514f5d11a05d6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e9589011-b728-4b79-9945-aa6c52dd0fc2, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Jan 20 14:55:21 compute-1 systemd[1]: libpod-conmon-35c1b3425f1e0506111fa57ff509641c7dc2b69df752675b10a514f5d11a05d6.scope: Deactivated successfully.
Jan 20 14:55:21 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:55:21 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:55:21 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:55:21.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:55:21 compute-1 nova_compute[225855]: 2026-01-20 14:55:21.990 225859 INFO nova.virt.libvirt.driver [-] [instance: 3bec73f6-5255-44c0-8a10-a64c7e86c0c2] Instance destroyed successfully.
Jan 20 14:55:21 compute-1 nova_compute[225855]: 2026-01-20 14:55:21.990 225859 DEBUG nova.objects.instance [None req-d43ac6da-3dd9-44d7-8c73-72899d49ddd7 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] Lazy-loading 'numa_topology' on Instance uuid 3bec73f6-5255-44c0-8a10-a64c7e86c0c2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:55:22 compute-1 podman[275429]: 2026-01-20 14:55:22.010488283 +0000 UTC m=+0.045592188 container remove 35c1b3425f1e0506111fa57ff509641c7dc2b69df752675b10a514f5d11a05d6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e9589011-b728-4b79-9945-aa6c52dd0fc2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 20 14:55:22 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:55:22.016 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[b18c7b36-9622-4ac2-8df2-783acdae388b]: (4, ('Tue Jan 20 02:55:21 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-e9589011-b728-4b79-9945-aa6c52dd0fc2 (35c1b3425f1e0506111fa57ff509641c7dc2b69df752675b10a514f5d11a05d6)\n35c1b3425f1e0506111fa57ff509641c7dc2b69df752675b10a514f5d11a05d6\nTue Jan 20 02:55:21 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-e9589011-b728-4b79-9945-aa6c52dd0fc2 (35c1b3425f1e0506111fa57ff509641c7dc2b69df752675b10a514f5d11a05d6)\n35c1b3425f1e0506111fa57ff509641c7dc2b69df752675b10a514f5d11a05d6\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:55:22 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:55:22.018 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[41cb4918-0383-4728-959a-4832eda36722]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:55:22 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:55:22.019 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape9589011-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:55:22 compute-1 nova_compute[225855]: 2026-01-20 14:55:22.021 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:55:22 compute-1 kernel: tape9589011-b0: left promiscuous mode
Jan 20 14:55:22 compute-1 nova_compute[225855]: 2026-01-20 14:55:22.079 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:55:22 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:55:22.082 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[489d2aa3-0937-42df-ad9e-1d0e76eac0ad]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:55:22 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:55:22.100 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[e7d400a3-818f-4a56-9a99-6f2e5e0e0846]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:55:22 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:55:22.102 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[53e5d719-d2f3-4096-ad03-5dc3bf03ab80]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:55:22 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:55:22.118 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[f74c4cd9-4167-40f7-a9d8-1554c7fed0f2]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 593460, 'reachable_time': 16588, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 275458, 'error': None, 'target': 'ovnmeta-e9589011-b728-4b79-9945-aa6c52dd0fc2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:55:22 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:55:22.120 140466 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-e9589011-b728-4b79-9945-aa6c52dd0fc2 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 20 14:55:22 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:55:22.121 140466 DEBUG oslo.privsep.daemon [-] privsep: reply[98c1bcec-525b-4cf6-b4df-9152004c68bd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:55:22 compute-1 systemd[1]: run-netns-ovnmeta\x2de9589011\x2db728\x2d4b79\x2d9945\x2daa6c52dd0fc2.mount: Deactivated successfully.
Jan 20 14:55:22 compute-1 nova_compute[225855]: 2026-01-20 14:55:22.348 225859 INFO nova.virt.libvirt.driver [None req-d43ac6da-3dd9-44d7-8c73-72899d49ddd7 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] [instance: 3bec73f6-5255-44c0-8a10-a64c7e86c0c2] Beginning cold snapshot process
Jan 20 14:55:22 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:55:22 compute-1 nova_compute[225855]: 2026-01-20 14:55:22.511 225859 DEBUG nova.virt.libvirt.imagebackend [None req-d43ac6da-3dd9-44d7-8c73-72899d49ddd7 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] No parent info for a32b3e07-16d8-46fd-9a7b-c242c432fcf9; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Jan 20 14:55:22 compute-1 nova_compute[225855]: 2026-01-20 14:55:22.574 225859 DEBUG nova.compute.manager [req-2347be6e-aacf-4810-a315-784977e43de6 req-f24e8006-a367-4c71-b5ec-1fe829586403 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 3bec73f6-5255-44c0-8a10-a64c7e86c0c2] Received event network-vif-unplugged-e084df8c-a73e-4535-bcf7-de8adbafa9ae external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:55:22 compute-1 nova_compute[225855]: 2026-01-20 14:55:22.574 225859 DEBUG oslo_concurrency.lockutils [req-2347be6e-aacf-4810-a315-784977e43de6 req-f24e8006-a367-4c71-b5ec-1fe829586403 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "3bec73f6-5255-44c0-8a10-a64c7e86c0c2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:55:22 compute-1 nova_compute[225855]: 2026-01-20 14:55:22.574 225859 DEBUG oslo_concurrency.lockutils [req-2347be6e-aacf-4810-a315-784977e43de6 req-f24e8006-a367-4c71-b5ec-1fe829586403 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "3bec73f6-5255-44c0-8a10-a64c7e86c0c2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:55:22 compute-1 nova_compute[225855]: 2026-01-20 14:55:22.575 225859 DEBUG oslo_concurrency.lockutils [req-2347be6e-aacf-4810-a315-784977e43de6 req-f24e8006-a367-4c71-b5ec-1fe829586403 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "3bec73f6-5255-44c0-8a10-a64c7e86c0c2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:55:22 compute-1 nova_compute[225855]: 2026-01-20 14:55:22.575 225859 DEBUG nova.compute.manager [req-2347be6e-aacf-4810-a315-784977e43de6 req-f24e8006-a367-4c71-b5ec-1fe829586403 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 3bec73f6-5255-44c0-8a10-a64c7e86c0c2] No waiting events found dispatching network-vif-unplugged-e084df8c-a73e-4535-bcf7-de8adbafa9ae pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:55:22 compute-1 nova_compute[225855]: 2026-01-20 14:55:22.575 225859 WARNING nova.compute.manager [req-2347be6e-aacf-4810-a315-784977e43de6 req-f24e8006-a367-4c71-b5ec-1fe829586403 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 3bec73f6-5255-44c0-8a10-a64c7e86c0c2] Received unexpected event network-vif-unplugged-e084df8c-a73e-4535-bcf7-de8adbafa9ae for instance with vm_state active and task_state shelving_image_uploading.
Jan 20 14:55:22 compute-1 nova_compute[225855]: 2026-01-20 14:55:22.761 225859 DEBUG nova.storage.rbd_utils [None req-d43ac6da-3dd9-44d7-8c73-72899d49ddd7 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] creating snapshot(20d7c18e7d794b51839e3145b4ba1f66) on rbd image(3bec73f6-5255-44c0-8a10-a64c7e86c0c2_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 20 14:55:23 compute-1 ceph-mon[81775]: pgmap v2080: 321 pgs: 321 active+clean; 279 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 305 KiB/s rd, 1.6 MiB/s wr, 64 op/s
Jan 20 14:55:23 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e303 e303: 3 total, 3 up, 3 in
Jan 20 14:55:23 compute-1 nova_compute[225855]: 2026-01-20 14:55:23.186 225859 DEBUG nova.storage.rbd_utils [None req-d43ac6da-3dd9-44d7-8c73-72899d49ddd7 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] cloning vms/3bec73f6-5255-44c0-8a10-a64c7e86c0c2_disk@20d7c18e7d794b51839e3145b4ba1f66 to images/0f1d91a7-05af-4ed6-87af-1e03976e25f0 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Jan 20 14:55:23 compute-1 nova_compute[225855]: 2026-01-20 14:55:23.312 225859 DEBUG nova.storage.rbd_utils [None req-d43ac6da-3dd9-44d7-8c73-72899d49ddd7 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] flattening images/0f1d91a7-05af-4ed6-87af-1e03976e25f0 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Jan 20 14:55:23 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:55:23 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:55:23 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:55:23.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:55:23 compute-1 nova_compute[225855]: 2026-01-20 14:55:23.704 225859 DEBUG nova.storage.rbd_utils [None req-d43ac6da-3dd9-44d7-8c73-72899d49ddd7 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] removing snapshot(20d7c18e7d794b51839e3145b4ba1f66) on rbd image(3bec73f6-5255-44c0-8a10-a64c7e86c0c2_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Jan 20 14:55:23 compute-1 nova_compute[225855]: 2026-01-20 14:55:23.712 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:55:23 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:55:23 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:55:23 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:55:23.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:55:24 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e304 e304: 3 total, 3 up, 3 in
Jan 20 14:55:24 compute-1 ceph-mon[81775]: osdmap e303: 3 total, 3 up, 3 in
Jan 20 14:55:24 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/3896483640' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:55:24 compute-1 nova_compute[225855]: 2026-01-20 14:55:24.149 225859 DEBUG nova.storage.rbd_utils [None req-d43ac6da-3dd9-44d7-8c73-72899d49ddd7 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] creating snapshot(snap) on rbd image(0f1d91a7-05af-4ed6-87af-1e03976e25f0) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 20 14:55:24 compute-1 sudo[275601]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:55:24 compute-1 sudo[275601]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:55:24 compute-1 sudo[275601]: pam_unix(sudo:session): session closed for user root
Jan 20 14:55:24 compute-1 sudo[275632]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 14:55:24 compute-1 sudo[275632]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:55:24 compute-1 sudo[275632]: pam_unix(sudo:session): session closed for user root
Jan 20 14:55:24 compute-1 podman[275625]: 2026-01-20 14:55:24.500730595 +0000 UTC m=+0.077738616 container health_status 533cd8e0c3b8b7f910f15cef49e7f879ab46c12ce646a35628c15f13725eb500 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Jan 20 14:55:24 compute-1 nova_compute[225855]: 2026-01-20 14:55:24.751 225859 DEBUG nova.compute.manager [req-ddd81a93-6c68-4a1b-9fe0-c08001dad499 req-c97b90d9-daed-4102-8e4d-4623867a03ab 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 3bec73f6-5255-44c0-8a10-a64c7e86c0c2] Received event network-vif-plugged-e084df8c-a73e-4535-bcf7-de8adbafa9ae external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:55:24 compute-1 nova_compute[225855]: 2026-01-20 14:55:24.751 225859 DEBUG oslo_concurrency.lockutils [req-ddd81a93-6c68-4a1b-9fe0-c08001dad499 req-c97b90d9-daed-4102-8e4d-4623867a03ab 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "3bec73f6-5255-44c0-8a10-a64c7e86c0c2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:55:24 compute-1 nova_compute[225855]: 2026-01-20 14:55:24.752 225859 DEBUG oslo_concurrency.lockutils [req-ddd81a93-6c68-4a1b-9fe0-c08001dad499 req-c97b90d9-daed-4102-8e4d-4623867a03ab 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "3bec73f6-5255-44c0-8a10-a64c7e86c0c2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:55:24 compute-1 nova_compute[225855]: 2026-01-20 14:55:24.752 225859 DEBUG oslo_concurrency.lockutils [req-ddd81a93-6c68-4a1b-9fe0-c08001dad499 req-c97b90d9-daed-4102-8e4d-4623867a03ab 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "3bec73f6-5255-44c0-8a10-a64c7e86c0c2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:55:24 compute-1 nova_compute[225855]: 2026-01-20 14:55:24.752 225859 DEBUG nova.compute.manager [req-ddd81a93-6c68-4a1b-9fe0-c08001dad499 req-c97b90d9-daed-4102-8e4d-4623867a03ab 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 3bec73f6-5255-44c0-8a10-a64c7e86c0c2] No waiting events found dispatching network-vif-plugged-e084df8c-a73e-4535-bcf7-de8adbafa9ae pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:55:24 compute-1 nova_compute[225855]: 2026-01-20 14:55:24.753 225859 WARNING nova.compute.manager [req-ddd81a93-6c68-4a1b-9fe0-c08001dad499 req-c97b90d9-daed-4102-8e4d-4623867a03ab 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 3bec73f6-5255-44c0-8a10-a64c7e86c0c2] Received unexpected event network-vif-plugged-e084df8c-a73e-4535-bcf7-de8adbafa9ae for instance with vm_state active and task_state shelving_image_uploading.
Jan 20 14:55:25 compute-1 ceph-mon[81775]: pgmap v2082: 321 pgs: 321 active+clean; 279 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 20 KiB/s rd, 132 KiB/s wr, 22 op/s
Jan 20 14:55:25 compute-1 ceph-mon[81775]: osdmap e304: 3 total, 3 up, 3 in
Jan 20 14:55:25 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:55:25 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:55:25 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:55:25 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:55:25 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:55:25.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:55:25 compute-1 nova_compute[225855]: 2026-01-20 14:55:25.475 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:55:25 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e305 e305: 3 total, 3 up, 3 in
Jan 20 14:55:25 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:55:25 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:55:25 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:55:25.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:55:26 compute-1 nova_compute[225855]: 2026-01-20 14:55:26.051 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:55:26 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/2689986105' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:55:26 compute-1 ceph-mon[81775]: osdmap e305: 3 total, 3 up, 3 in
Jan 20 14:55:26 compute-1 ceph-mon[81775]: pgmap v2085: 321 pgs: 321 active+clean; 325 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 4.7 MiB/s wr, 102 op/s
Jan 20 14:55:27 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:55:27 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:55:27 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:55:27.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:55:27 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e305 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:55:27 compute-1 nova_compute[225855]: 2026-01-20 14:55:27.719 225859 INFO nova.virt.libvirt.driver [None req-d43ac6da-3dd9-44d7-8c73-72899d49ddd7 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] [instance: 3bec73f6-5255-44c0-8a10-a64c7e86c0c2] Snapshot image upload complete
Jan 20 14:55:27 compute-1 nova_compute[225855]: 2026-01-20 14:55:27.720 225859 DEBUG nova.compute.manager [None req-d43ac6da-3dd9-44d7-8c73-72899d49ddd7 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] [instance: 3bec73f6-5255-44c0-8a10-a64c7e86c0c2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:55:27 compute-1 nova_compute[225855]: 2026-01-20 14:55:27.792 225859 INFO nova.compute.manager [None req-d43ac6da-3dd9-44d7-8c73-72899d49ddd7 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] [instance: 3bec73f6-5255-44c0-8a10-a64c7e86c0c2] Shelve offloading
Jan 20 14:55:27 compute-1 nova_compute[225855]: 2026-01-20 14:55:27.798 225859 INFO nova.virt.libvirt.driver [-] [instance: 3bec73f6-5255-44c0-8a10-a64c7e86c0c2] Instance destroyed successfully.
Jan 20 14:55:27 compute-1 nova_compute[225855]: 2026-01-20 14:55:27.798 225859 DEBUG nova.compute.manager [None req-d43ac6da-3dd9-44d7-8c73-72899d49ddd7 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] [instance: 3bec73f6-5255-44c0-8a10-a64c7e86c0c2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:55:27 compute-1 nova_compute[225855]: 2026-01-20 14:55:27.800 225859 DEBUG oslo_concurrency.lockutils [None req-d43ac6da-3dd9-44d7-8c73-72899d49ddd7 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] Acquiring lock "refresh_cache-3bec73f6-5255-44c0-8a10-a64c7e86c0c2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:55:27 compute-1 nova_compute[225855]: 2026-01-20 14:55:27.800 225859 DEBUG oslo_concurrency.lockutils [None req-d43ac6da-3dd9-44d7-8c73-72899d49ddd7 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] Acquired lock "refresh_cache-3bec73f6-5255-44c0-8a10-a64c7e86c0c2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:55:27 compute-1 nova_compute[225855]: 2026-01-20 14:55:27.800 225859 DEBUG nova.network.neutron [None req-d43ac6da-3dd9-44d7-8c73-72899d49ddd7 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] [instance: 3bec73f6-5255-44c0-8a10-a64c7e86c0c2] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 14:55:27 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:55:27 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:55:27 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:55:27.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:55:28 compute-1 nova_compute[225855]: 2026-01-20 14:55:28.753 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:55:29 compute-1 ceph-mon[81775]: pgmap v2086: 321 pgs: 321 active+clean; 375 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 7.8 MiB/s rd, 8.6 MiB/s wr, 160 op/s
Jan 20 14:55:29 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:55:29 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:55:29 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:55:29.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:55:29 compute-1 nova_compute[225855]: 2026-01-20 14:55:29.673 225859 DEBUG nova.network.neutron [None req-d43ac6da-3dd9-44d7-8c73-72899d49ddd7 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] [instance: 3bec73f6-5255-44c0-8a10-a64c7e86c0c2] Updating instance_info_cache with network_info: [{"id": "e084df8c-a73e-4535-bcf7-de8adbafa9ae", "address": "fa:16:3e:89:8e:0f", "network": {"id": "e9589011-b728-4b79-9945-aa6c52dd0fc2", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-1143668360-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b1e83af992c94112a965575784639d77", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape084df8c-a7", "ovs_interfaceid": "e084df8c-a73e-4535-bcf7-de8adbafa9ae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:55:29 compute-1 nova_compute[225855]: 2026-01-20 14:55:29.702 225859 DEBUG oslo_concurrency.lockutils [None req-d43ac6da-3dd9-44d7-8c73-72899d49ddd7 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] Releasing lock "refresh_cache-3bec73f6-5255-44c0-8a10-a64c7e86c0c2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:55:29 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:55:29 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:55:29 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:55:29.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:55:31 compute-1 nova_compute[225855]: 2026-01-20 14:55:31.090 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:55:31 compute-1 ceph-mon[81775]: pgmap v2087: 321 pgs: 321 active+clean; 388 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 6.8 MiB/s rd, 8.2 MiB/s wr, 160 op/s
Jan 20 14:55:31 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/683276483' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:55:31 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:55:31 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:55:31 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:55:31.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:55:31 compute-1 nova_compute[225855]: 2026-01-20 14:55:31.802 225859 INFO nova.virt.libvirt.driver [-] [instance: 3bec73f6-5255-44c0-8a10-a64c7e86c0c2] Instance destroyed successfully.
Jan 20 14:55:31 compute-1 nova_compute[225855]: 2026-01-20 14:55:31.803 225859 DEBUG nova.objects.instance [None req-d43ac6da-3dd9-44d7-8c73-72899d49ddd7 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] Lazy-loading 'resources' on Instance uuid 3bec73f6-5255-44c0-8a10-a64c7e86c0c2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:55:31 compute-1 nova_compute[225855]: 2026-01-20 14:55:31.826 225859 DEBUG nova.virt.libvirt.vif [None req-d43ac6da-3dd9-44d7-8c73-72899d49ddd7 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T14:54:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-AttachVolumeShelveTestJSON-server-913712707',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-attachvolumeshelvetestjson-server-913712707',id=124,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEiICz3SRHAbg35RWQTkKKcPH7nuzl556rWhnnJMKWCQeHRtCTrp3rh0Cew3QLmsFdOqe88XbxeaMKtgT6L6nfvjZZnoyEjqVogiPNh8/V6NYBD5v71aQZWpX0o+tqsUvg==',key_name='tempest-keypair-1388890452',keypairs=<?>,launch_index=0,launched_at=2026-01-20T14:54:56Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='b1e83af992c94112a965575784639d77',ramdisk_id='',reservation_id='r-n20fo39g',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachVolumeShelveTestJSON-896995479',owner_user_name='tempest-AttachVolumeShelveTestJSON-896995479-project-member',shelved_at='2026-01-20T14:55:27.720433',shelved_host='compute-1.ctlplane.example.com',shelved_image_id='0f1d91a7-05af-4ed6-87af-1e03976e25f0'},tags=<?>,task_state='shelving_offloading',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T14:55:22Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='34eb73f628994c11801d447148d5f142',uuid=3bec73f6-5255-44c0-8a10-a64c7e86c0c2,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='shelved') vif={"id": "e084df8c-a73e-4535-bcf7-de8adbafa9ae", "address": "fa:16:3e:89:8e:0f", "network": {"id": 
"e9589011-b728-4b79-9945-aa6c52dd0fc2", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-1143668360-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b1e83af992c94112a965575784639d77", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape084df8c-a7", "ovs_interfaceid": "e084df8c-a73e-4535-bcf7-de8adbafa9ae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 20 14:55:31 compute-1 nova_compute[225855]: 2026-01-20 14:55:31.826 225859 DEBUG nova.network.os_vif_util [None req-d43ac6da-3dd9-44d7-8c73-72899d49ddd7 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] Converting VIF {"id": "e084df8c-a73e-4535-bcf7-de8adbafa9ae", "address": "fa:16:3e:89:8e:0f", "network": {"id": "e9589011-b728-4b79-9945-aa6c52dd0fc2", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-1143668360-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b1e83af992c94112a965575784639d77", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape084df8c-a7", "ovs_interfaceid": "e084df8c-a73e-4535-bcf7-de8adbafa9ae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:55:31 compute-1 nova_compute[225855]: 2026-01-20 14:55:31.827 225859 DEBUG nova.network.os_vif_util [None req-d43ac6da-3dd9-44d7-8c73-72899d49ddd7 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:89:8e:0f,bridge_name='br-int',has_traffic_filtering=True,id=e084df8c-a73e-4535-bcf7-de8adbafa9ae,network=Network(e9589011-b728-4b79-9945-aa6c52dd0fc2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape084df8c-a7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:55:31 compute-1 nova_compute[225855]: 2026-01-20 14:55:31.827 225859 DEBUG os_vif [None req-d43ac6da-3dd9-44d7-8c73-72899d49ddd7 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:89:8e:0f,bridge_name='br-int',has_traffic_filtering=True,id=e084df8c-a73e-4535-bcf7-de8adbafa9ae,network=Network(e9589011-b728-4b79-9945-aa6c52dd0fc2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape084df8c-a7') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 20 14:55:31 compute-1 nova_compute[225855]: 2026-01-20 14:55:31.829 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:55:31 compute-1 nova_compute[225855]: 2026-01-20 14:55:31.829 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape084df8c-a7, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:55:31 compute-1 nova_compute[225855]: 2026-01-20 14:55:31.830 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:55:31 compute-1 nova_compute[225855]: 2026-01-20 14:55:31.831 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:55:31 compute-1 nova_compute[225855]: 2026-01-20 14:55:31.833 225859 INFO os_vif [None req-d43ac6da-3dd9-44d7-8c73-72899d49ddd7 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:89:8e:0f,bridge_name='br-int',has_traffic_filtering=True,id=e084df8c-a73e-4535-bcf7-de8adbafa9ae,network=Network(e9589011-b728-4b79-9945-aa6c52dd0fc2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape084df8c-a7')
Jan 20 14:55:31 compute-1 nova_compute[225855]: 2026-01-20 14:55:31.911 225859 DEBUG nova.compute.manager [req-35707b01-1d7d-48c6-8f04-277acfc50f18 req-693a5909-cf78-4bcf-98cf-2d919aba1f89 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 3bec73f6-5255-44c0-8a10-a64c7e86c0c2] Received event network-changed-e084df8c-a73e-4535-bcf7-de8adbafa9ae external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:55:31 compute-1 nova_compute[225855]: 2026-01-20 14:55:31.911 225859 DEBUG nova.compute.manager [req-35707b01-1d7d-48c6-8f04-277acfc50f18 req-693a5909-cf78-4bcf-98cf-2d919aba1f89 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 3bec73f6-5255-44c0-8a10-a64c7e86c0c2] Refreshing instance network info cache due to event network-changed-e084df8c-a73e-4535-bcf7-de8adbafa9ae. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 14:55:31 compute-1 nova_compute[225855]: 2026-01-20 14:55:31.912 225859 DEBUG oslo_concurrency.lockutils [req-35707b01-1d7d-48c6-8f04-277acfc50f18 req-693a5909-cf78-4bcf-98cf-2d919aba1f89 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-3bec73f6-5255-44c0-8a10-a64c7e86c0c2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:55:31 compute-1 nova_compute[225855]: 2026-01-20 14:55:31.912 225859 DEBUG oslo_concurrency.lockutils [req-35707b01-1d7d-48c6-8f04-277acfc50f18 req-693a5909-cf78-4bcf-98cf-2d919aba1f89 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-3bec73f6-5255-44c0-8a10-a64c7e86c0c2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:55:31 compute-1 nova_compute[225855]: 2026-01-20 14:55:31.912 225859 DEBUG nova.network.neutron [req-35707b01-1d7d-48c6-8f04-277acfc50f18 req-693a5909-cf78-4bcf-98cf-2d919aba1f89 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 3bec73f6-5255-44c0-8a10-a64c7e86c0c2] Refreshing network info cache for port e084df8c-a73e-4535-bcf7-de8adbafa9ae _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 14:55:31 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:55:31 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:55:31 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:55:31.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:55:32 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/2268157372' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:55:32 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/3071191747' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:55:32 compute-1 nova_compute[225855]: 2026-01-20 14:55:32.320 225859 INFO nova.virt.libvirt.driver [None req-d43ac6da-3dd9-44d7-8c73-72899d49ddd7 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] [instance: 3bec73f6-5255-44c0-8a10-a64c7e86c0c2] Deleting instance files /var/lib/nova/instances/3bec73f6-5255-44c0-8a10-a64c7e86c0c2_del
Jan 20 14:55:32 compute-1 nova_compute[225855]: 2026-01-20 14:55:32.321 225859 INFO nova.virt.libvirt.driver [None req-d43ac6da-3dd9-44d7-8c73-72899d49ddd7 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] [instance: 3bec73f6-5255-44c0-8a10-a64c7e86c0c2] Deletion of /var/lib/nova/instances/3bec73f6-5255-44c0-8a10-a64c7e86c0c2_del complete
Jan 20 14:55:32 compute-1 sudo[275693]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:55:32 compute-1 sudo[275693]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:55:32 compute-1 sudo[275693]: pam_unix(sudo:session): session closed for user root
Jan 20 14:55:32 compute-1 sudo[275718]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:55:32 compute-1 sudo[275718]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:55:32 compute-1 sudo[275718]: pam_unix(sudo:session): session closed for user root
Jan 20 14:55:32 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e305 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:55:32 compute-1 nova_compute[225855]: 2026-01-20 14:55:32.802 225859 INFO nova.scheduler.client.report [None req-d43ac6da-3dd9-44d7-8c73-72899d49ddd7 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] Deleted allocations for instance 3bec73f6-5255-44c0-8a10-a64c7e86c0c2
Jan 20 14:55:32 compute-1 nova_compute[225855]: 2026-01-20 14:55:32.870 225859 DEBUG oslo_concurrency.lockutils [None req-d43ac6da-3dd9-44d7-8c73-72899d49ddd7 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:55:32 compute-1 nova_compute[225855]: 2026-01-20 14:55:32.871 225859 DEBUG oslo_concurrency.lockutils [None req-d43ac6da-3dd9-44d7-8c73-72899d49ddd7 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:55:32 compute-1 nova_compute[225855]: 2026-01-20 14:55:32.913 225859 DEBUG oslo_concurrency.processutils [None req-d43ac6da-3dd9-44d7-8c73-72899d49ddd7 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:55:33 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:55:33 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3124150087' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:55:33 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:55:33 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:55:33 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:55:33.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:55:33 compute-1 nova_compute[225855]: 2026-01-20 14:55:33.356 225859 DEBUG oslo_concurrency.processutils [None req-d43ac6da-3dd9-44d7-8c73-72899d49ddd7 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:55:33 compute-1 nova_compute[225855]: 2026-01-20 14:55:33.362 225859 DEBUG nova.compute.provider_tree [None req-d43ac6da-3dd9-44d7-8c73-72899d49ddd7 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:55:33 compute-1 nova_compute[225855]: 2026-01-20 14:55:33.379 225859 DEBUG nova.scheduler.client.report [None req-d43ac6da-3dd9-44d7-8c73-72899d49ddd7 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:55:33 compute-1 nova_compute[225855]: 2026-01-20 14:55:33.407 225859 DEBUG oslo_concurrency.lockutils [None req-d43ac6da-3dd9-44d7-8c73-72899d49ddd7 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.536s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:55:33 compute-1 nova_compute[225855]: 2026-01-20 14:55:33.486 225859 DEBUG oslo_concurrency.lockutils [None req-d43ac6da-3dd9-44d7-8c73-72899d49ddd7 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] Lock "3bec73f6-5255-44c0-8a10-a64c7e86c0c2" "released" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: held 14.774s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:55:33 compute-1 nova_compute[225855]: 2026-01-20 14:55:33.744 225859 DEBUG nova.network.neutron [req-35707b01-1d7d-48c6-8f04-277acfc50f18 req-693a5909-cf78-4bcf-98cf-2d919aba1f89 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 3bec73f6-5255-44c0-8a10-a64c7e86c0c2] Updated VIF entry in instance network info cache for port e084df8c-a73e-4535-bcf7-de8adbafa9ae. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 14:55:33 compute-1 nova_compute[225855]: 2026-01-20 14:55:33.745 225859 DEBUG nova.network.neutron [req-35707b01-1d7d-48c6-8f04-277acfc50f18 req-693a5909-cf78-4bcf-98cf-2d919aba1f89 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 3bec73f6-5255-44c0-8a10-a64c7e86c0c2] Updating instance_info_cache with network_info: [{"id": "e084df8c-a73e-4535-bcf7-de8adbafa9ae", "address": "fa:16:3e:89:8e:0f", "network": {"id": "e9589011-b728-4b79-9945-aa6c52dd0fc2", "bridge": null, "label": "tempest-AttachVolumeShelveTestJSON-1143668360-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b1e83af992c94112a965575784639d77", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "unbound", "details": {}, "devname": "tape084df8c-a7", "ovs_interfaceid": null, "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:55:33 compute-1 nova_compute[225855]: 2026-01-20 14:55:33.813 225859 DEBUG oslo_concurrency.lockutils [req-35707b01-1d7d-48c6-8f04-277acfc50f18 req-693a5909-cf78-4bcf-98cf-2d919aba1f89 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-3bec73f6-5255-44c0-8a10-a64c7e86c0c2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:55:33 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:55:33 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:55:33 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:55:33.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:55:34 compute-1 ceph-mon[81775]: pgmap v2088: 321 pgs: 321 active+clean; 405 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 5.9 MiB/s rd, 8.5 MiB/s wr, 175 op/s
Jan 20 14:55:34 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/1693936922' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:55:34 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e306 e306: 3 total, 3 up, 3 in
Jan 20 14:55:34 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/3124150087' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:55:34 compute-1 ceph-mon[81775]: pgmap v2089: 321 pgs: 321 active+clean; 373 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 4.5 MiB/s wr, 126 op/s
Jan 20 14:55:34 compute-1 ceph-mon[81775]: osdmap e306: 3 total, 3 up, 3 in
Jan 20 14:55:35 compute-1 nova_compute[225855]: 2026-01-20 14:55:35.340 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:55:35 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:55:35 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:55:35 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:55:35.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:55:35 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/2353520320' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:55:35 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/3121600348' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:55:35 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:55:35 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:55:35 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:55:35.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:55:36 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:55:36.025 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=40, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '12:bb:42', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '06:92:24:f7:15:56'}, ipsec=False) old=SB_Global(nb_cfg=39) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:55:36 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:55:36.026 140354 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 20 14:55:36 compute-1 nova_compute[225855]: 2026-01-20 14:55:36.061 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:55:36 compute-1 nova_compute[225855]: 2026-01-20 14:55:36.092 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:55:36 compute-1 nova_compute[225855]: 2026-01-20 14:55:36.340 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:55:36 compute-1 nova_compute[225855]: 2026-01-20 14:55:36.831 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:55:36 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/2064067837' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:55:36 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/3426357909' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:55:36 compute-1 ceph-mon[81775]: pgmap v2091: 321 pgs: 321 active+clean; 365 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 5.9 MiB/s wr, 175 op/s
Jan 20 14:55:36 compute-1 nova_compute[225855]: 2026-01-20 14:55:36.987 225859 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1768920921.9858956, 3bec73f6-5255-44c0-8a10-a64c7e86c0c2 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:55:36 compute-1 nova_compute[225855]: 2026-01-20 14:55:36.988 225859 INFO nova.compute.manager [-] [instance: 3bec73f6-5255-44c0-8a10-a64c7e86c0c2] VM Stopped (Lifecycle Event)
Jan 20 14:55:37 compute-1 nova_compute[225855]: 2026-01-20 14:55:37.068 225859 DEBUG nova.compute.manager [None req-6ae1fa24-5580-4fb6-bdb3-45fc2a59bc57 - - - - - -] [instance: 3bec73f6-5255-44c0-8a10-a64c7e86c0c2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:55:37 compute-1 nova_compute[225855]: 2026-01-20 14:55:37.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:55:37 compute-1 nova_compute[225855]: 2026-01-20 14:55:37.339 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 14:55:37 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:55:37 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:55:37 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:55:37.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:55:37 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:55:37 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/2887191558' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:55:37 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:55:37 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:55:37 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:55:37.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:55:38 compute-1 nova_compute[225855]: 2026-01-20 14:55:38.340 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:55:38 compute-1 nova_compute[225855]: 2026-01-20 14:55:38.340 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 14:55:38 compute-1 nova_compute[225855]: 2026-01-20 14:55:38.340 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 20 14:55:38 compute-1 nova_compute[225855]: 2026-01-20 14:55:38.428 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 20 14:55:38 compute-1 nova_compute[225855]: 2026-01-20 14:55:38.439 225859 DEBUG oslo_concurrency.lockutils [None req-2e8348fc-730c-43ea-954a-8f2243213414 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] Acquiring lock "3bec73f6-5255-44c0-8a10-a64c7e86c0c2" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:55:38 compute-1 nova_compute[225855]: 2026-01-20 14:55:38.440 225859 DEBUG oslo_concurrency.lockutils [None req-2e8348fc-730c-43ea-954a-8f2243213414 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] Lock "3bec73f6-5255-44c0-8a10-a64c7e86c0c2" acquired by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:55:38 compute-1 nova_compute[225855]: 2026-01-20 14:55:38.440 225859 INFO nova.compute.manager [None req-2e8348fc-730c-43ea-954a-8f2243213414 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] [instance: 3bec73f6-5255-44c0-8a10-a64c7e86c0c2] Unshelving
Jan 20 14:55:38 compute-1 nova_compute[225855]: 2026-01-20 14:55:38.634 225859 DEBUG oslo_concurrency.lockutils [None req-2e8348fc-730c-43ea-954a-8f2243213414 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:55:38 compute-1 nova_compute[225855]: 2026-01-20 14:55:38.634 225859 DEBUG oslo_concurrency.lockutils [None req-2e8348fc-730c-43ea-954a-8f2243213414 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:55:38 compute-1 nova_compute[225855]: 2026-01-20 14:55:38.638 225859 DEBUG nova.objects.instance [None req-2e8348fc-730c-43ea-954a-8f2243213414 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] Lazy-loading 'pci_requests' on Instance uuid 3bec73f6-5255-44c0-8a10-a64c7e86c0c2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:55:38 compute-1 nova_compute[225855]: 2026-01-20 14:55:38.652 225859 DEBUG nova.objects.instance [None req-2e8348fc-730c-43ea-954a-8f2243213414 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] Lazy-loading 'numa_topology' on Instance uuid 3bec73f6-5255-44c0-8a10-a64c7e86c0c2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:55:38 compute-1 nova_compute[225855]: 2026-01-20 14:55:38.662 225859 DEBUG nova.virt.hardware [None req-2e8348fc-730c-43ea-954a-8f2243213414 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 20 14:55:38 compute-1 nova_compute[225855]: 2026-01-20 14:55:38.662 225859 INFO nova.compute.claims [None req-2e8348fc-730c-43ea-954a-8f2243213414 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] [instance: 3bec73f6-5255-44c0-8a10-a64c7e86c0c2] Claim successful on node compute-1.ctlplane.example.com
Jan 20 14:55:38 compute-1 nova_compute[225855]: 2026-01-20 14:55:38.876 225859 DEBUG oslo_concurrency.processutils [None req-2e8348fc-730c-43ea-954a-8f2243213414 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:55:38 compute-1 ceph-mon[81775]: pgmap v2092: 321 pgs: 321 active+clean; 355 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 973 KiB/s rd, 3.8 MiB/s wr, 187 op/s
Jan 20 14:55:39 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:55:39 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:55:39 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:55:39.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:55:39 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:55:39 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/641791931' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:55:39 compute-1 nova_compute[225855]: 2026-01-20 14:55:39.379 225859 DEBUG oslo_concurrency.processutils [None req-2e8348fc-730c-43ea-954a-8f2243213414 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.503s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:55:39 compute-1 nova_compute[225855]: 2026-01-20 14:55:39.387 225859 DEBUG nova.compute.provider_tree [None req-2e8348fc-730c-43ea-954a-8f2243213414 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:55:39 compute-1 nova_compute[225855]: 2026-01-20 14:55:39.410 225859 DEBUG nova.scheduler.client.report [None req-2e8348fc-730c-43ea-954a-8f2243213414 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:55:39 compute-1 nova_compute[225855]: 2026-01-20 14:55:39.456 225859 DEBUG oslo_concurrency.lockutils [None req-2e8348fc-730c-43ea-954a-8f2243213414 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.822s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:55:39 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/641791931' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:55:39 compute-1 nova_compute[225855]: 2026-01-20 14:55:39.959 225859 INFO nova.network.neutron [None req-2e8348fc-730c-43ea-954a-8f2243213414 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] [instance: 3bec73f6-5255-44c0-8a10-a64c7e86c0c2] Updating port e084df8c-a73e-4535-bcf7-de8adbafa9ae with attributes {'binding:host_id': 'compute-1.ctlplane.example.com', 'device_owner': 'compute:nova'}
Jan 20 14:55:39 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:55:39 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:55:39 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:55:39.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:55:40 compute-1 ceph-mon[81775]: pgmap v2093: 321 pgs: 321 active+clean; 343 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.3 MiB/s wr, 219 op/s
Jan 20 14:55:41 compute-1 nova_compute[225855]: 2026-01-20 14:55:41.093 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:55:41 compute-1 nova_compute[225855]: 2026-01-20 14:55:41.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:55:41 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:55:41 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:55:41 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:55:41.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:55:41 compute-1 nova_compute[225855]: 2026-01-20 14:55:41.831 225859 DEBUG oslo_concurrency.lockutils [None req-2e8348fc-730c-43ea-954a-8f2243213414 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] Acquiring lock "refresh_cache-3bec73f6-5255-44c0-8a10-a64c7e86c0c2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:55:41 compute-1 nova_compute[225855]: 2026-01-20 14:55:41.832 225859 DEBUG oslo_concurrency.lockutils [None req-2e8348fc-730c-43ea-954a-8f2243213414 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] Acquired lock "refresh_cache-3bec73f6-5255-44c0-8a10-a64c7e86c0c2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:55:41 compute-1 nova_compute[225855]: 2026-01-20 14:55:41.832 225859 DEBUG nova.network.neutron [None req-2e8348fc-730c-43ea-954a-8f2243213414 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] [instance: 3bec73f6-5255-44c0-8a10-a64c7e86c0c2] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 14:55:41 compute-1 nova_compute[225855]: 2026-01-20 14:55:41.835 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:55:41 compute-1 nova_compute[225855]: 2026-01-20 14:55:41.979 225859 DEBUG nova.compute.manager [req-23f35e16-0c16-45cd-a5bf-fee625d47a4d req-a01a4132-b35a-47fa-93c4-6bfd9c1e5f56 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 3bec73f6-5255-44c0-8a10-a64c7e86c0c2] Received event network-changed-e084df8c-a73e-4535-bcf7-de8adbafa9ae external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:55:41 compute-1 nova_compute[225855]: 2026-01-20 14:55:41.979 225859 DEBUG nova.compute.manager [req-23f35e16-0c16-45cd-a5bf-fee625d47a4d req-a01a4132-b35a-47fa-93c4-6bfd9c1e5f56 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 3bec73f6-5255-44c0-8a10-a64c7e86c0c2] Refreshing instance network info cache due to event network-changed-e084df8c-a73e-4535-bcf7-de8adbafa9ae. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 14:55:41 compute-1 nova_compute[225855]: 2026-01-20 14:55:41.980 225859 DEBUG oslo_concurrency.lockutils [req-23f35e16-0c16-45cd-a5bf-fee625d47a4d req-a01a4132-b35a-47fa-93c4-6bfd9c1e5f56 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-3bec73f6-5255-44c0-8a10-a64c7e86c0c2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:55:41 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:55:41 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:55:41 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:55:41.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:55:42 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:55:43 compute-1 podman[275792]: 2026-01-20 14:55:43.051623889 +0000 UTC m=+0.083387615 container health_status 72be95c12e041eb2cd1a16ad70dfa1391d9f03a5f3180a614ead9459dcfcd46f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 20 14:55:43 compute-1 ceph-mon[81775]: pgmap v2094: 321 pgs: 321 active+clean; 326 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 2.2 MiB/s wr, 258 op/s
Jan 20 14:55:43 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:55:43 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:55:43 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:55:43.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:55:43 compute-1 nova_compute[225855]: 2026-01-20 14:55:43.796 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:55:43 compute-1 nova_compute[225855]: 2026-01-20 14:55:43.895 225859 DEBUG nova.network.neutron [None req-2e8348fc-730c-43ea-954a-8f2243213414 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] [instance: 3bec73f6-5255-44c0-8a10-a64c7e86c0c2] Updating instance_info_cache with network_info: [{"id": "e084df8c-a73e-4535-bcf7-de8adbafa9ae", "address": "fa:16:3e:89:8e:0f", "network": {"id": "e9589011-b728-4b79-9945-aa6c52dd0fc2", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-1143668360-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b1e83af992c94112a965575784639d77", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape084df8c-a7", "ovs_interfaceid": "e084df8c-a73e-4535-bcf7-de8adbafa9ae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:55:43 compute-1 nova_compute[225855]: 2026-01-20 14:55:43.920 225859 DEBUG oslo_concurrency.lockutils [None req-2e8348fc-730c-43ea-954a-8f2243213414 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] Releasing lock "refresh_cache-3bec73f6-5255-44c0-8a10-a64c7e86c0c2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:55:43 compute-1 nova_compute[225855]: 2026-01-20 14:55:43.922 225859 DEBUG nova.virt.libvirt.driver [None req-2e8348fc-730c-43ea-954a-8f2243213414 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] [instance: 3bec73f6-5255-44c0-8a10-a64c7e86c0c2] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 20 14:55:43 compute-1 nova_compute[225855]: 2026-01-20 14:55:43.922 225859 INFO nova.virt.libvirt.driver [None req-2e8348fc-730c-43ea-954a-8f2243213414 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] [instance: 3bec73f6-5255-44c0-8a10-a64c7e86c0c2] Creating image(s)
Jan 20 14:55:43 compute-1 nova_compute[225855]: 2026-01-20 14:55:43.949 225859 DEBUG nova.storage.rbd_utils [None req-2e8348fc-730c-43ea-954a-8f2243213414 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] rbd image 3bec73f6-5255-44c0-8a10-a64c7e86c0c2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:55:43 compute-1 nova_compute[225855]: 2026-01-20 14:55:43.953 225859 DEBUG nova.objects.instance [None req-2e8348fc-730c-43ea-954a-8f2243213414 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 3bec73f6-5255-44c0-8a10-a64c7e86c0c2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:55:43 compute-1 nova_compute[225855]: 2026-01-20 14:55:43.954 225859 DEBUG oslo_concurrency.lockutils [req-23f35e16-0c16-45cd-a5bf-fee625d47a4d req-a01a4132-b35a-47fa-93c4-6bfd9c1e5f56 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-3bec73f6-5255-44c0-8a10-a64c7e86c0c2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:55:43 compute-1 nova_compute[225855]: 2026-01-20 14:55:43.955 225859 DEBUG nova.network.neutron [req-23f35e16-0c16-45cd-a5bf-fee625d47a4d req-a01a4132-b35a-47fa-93c4-6bfd9c1e5f56 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 3bec73f6-5255-44c0-8a10-a64c7e86c0c2] Refreshing network info cache for port e084df8c-a73e-4535-bcf7-de8adbafa9ae _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 14:55:43 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:55:43 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:55:43 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:55:43.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:55:43 compute-1 nova_compute[225855]: 2026-01-20 14:55:43.993 225859 DEBUG nova.storage.rbd_utils [None req-2e8348fc-730c-43ea-954a-8f2243213414 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] rbd image 3bec73f6-5255-44c0-8a10-a64c7e86c0c2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:55:44 compute-1 nova_compute[225855]: 2026-01-20 14:55:44.018 225859 DEBUG nova.storage.rbd_utils [None req-2e8348fc-730c-43ea-954a-8f2243213414 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] rbd image 3bec73f6-5255-44c0-8a10-a64c7e86c0c2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:55:44 compute-1 nova_compute[225855]: 2026-01-20 14:55:44.022 225859 DEBUG oslo_concurrency.lockutils [None req-2e8348fc-730c-43ea-954a-8f2243213414 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] Acquiring lock "2f18fd61310b7a6e1fac51a6ca49bea435dc548b" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:55:44 compute-1 nova_compute[225855]: 2026-01-20 14:55:44.023 225859 DEBUG oslo_concurrency.lockutils [None req-2e8348fc-730c-43ea-954a-8f2243213414 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] Lock "2f18fd61310b7a6e1fac51a6ca49bea435dc548b" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:55:44 compute-1 nova_compute[225855]: 2026-01-20 14:55:44.038 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:55:44 compute-1 ceph-mon[81775]: pgmap v2095: 321 pgs: 321 active+clean; 326 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 2.0 MiB/s wr, 273 op/s
Jan 20 14:55:44 compute-1 nova_compute[225855]: 2026-01-20 14:55:44.450 225859 DEBUG nova.virt.libvirt.imagebackend [None req-2e8348fc-730c-43ea-954a-8f2243213414 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] Image locations are: [{'url': 'rbd://e399cf45-e6b6-5393-99f1-75c601d3f188/images/0f1d91a7-05af-4ed6-87af-1e03976e25f0/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://e399cf45-e6b6-5393-99f1-75c601d3f188/images/0f1d91a7-05af-4ed6-87af-1e03976e25f0/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Jan 20 14:55:44 compute-1 nova_compute[225855]: 2026-01-20 14:55:44.516 225859 DEBUG nova.virt.libvirt.imagebackend [None req-2e8348fc-730c-43ea-954a-8f2243213414 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] Selected location: {'url': 'rbd://e399cf45-e6b6-5393-99f1-75c601d3f188/images/0f1d91a7-05af-4ed6-87af-1e03976e25f0/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094
Jan 20 14:55:44 compute-1 nova_compute[225855]: 2026-01-20 14:55:44.517 225859 DEBUG nova.storage.rbd_utils [None req-2e8348fc-730c-43ea-954a-8f2243213414 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] cloning images/0f1d91a7-05af-4ed6-87af-1e03976e25f0@snap to None/3bec73f6-5255-44c0-8a10-a64c7e86c0c2_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Jan 20 14:55:44 compute-1 nova_compute[225855]: 2026-01-20 14:55:44.631 225859 DEBUG oslo_concurrency.lockutils [None req-2e8348fc-730c-43ea-954a-8f2243213414 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] Lock "2f18fd61310b7a6e1fac51a6ca49bea435dc548b" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.608s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:55:44 compute-1 nova_compute[225855]: 2026-01-20 14:55:44.791 225859 DEBUG nova.objects.instance [None req-2e8348fc-730c-43ea-954a-8f2243213414 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] Lazy-loading 'migration_context' on Instance uuid 3bec73f6-5255-44c0-8a10-a64c7e86c0c2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:55:44 compute-1 nova_compute[225855]: 2026-01-20 14:55:44.856 225859 DEBUG nova.storage.rbd_utils [None req-2e8348fc-730c-43ea-954a-8f2243213414 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] flattening vms/3bec73f6-5255-44c0-8a10-a64c7e86c0c2_disk flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Jan 20 14:55:45 compute-1 nova_compute[225855]: 2026-01-20 14:55:45.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:55:45 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/3485196193' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:55:45 compute-1 nova_compute[225855]: 2026-01-20 14:55:45.353 225859 DEBUG nova.virt.libvirt.driver [None req-2e8348fc-730c-43ea-954a-8f2243213414 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] [instance: 3bec73f6-5255-44c0-8a10-a64c7e86c0c2] Image rbd:vms/3bec73f6-5255-44c0-8a10-a64c7e86c0c2_disk:id=openstack:conf=/etc/ceph/ceph.conf flattened successfully while unshelving instance. _try_fetch_image_cache /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11007
Jan 20 14:55:45 compute-1 nova_compute[225855]: 2026-01-20 14:55:45.354 225859 DEBUG nova.virt.libvirt.driver [None req-2e8348fc-730c-43ea-954a-8f2243213414 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] [instance: 3bec73f6-5255-44c0-8a10-a64c7e86c0c2] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 20 14:55:45 compute-1 nova_compute[225855]: 2026-01-20 14:55:45.354 225859 DEBUG nova.virt.libvirt.driver [None req-2e8348fc-730c-43ea-954a-8f2243213414 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] [instance: 3bec73f6-5255-44c0-8a10-a64c7e86c0c2] Ensure instance console log exists: /var/lib/nova/instances/3bec73f6-5255-44c0-8a10-a64c7e86c0c2/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 20 14:55:45 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:55:45 compute-1 nova_compute[225855]: 2026-01-20 14:55:45.355 225859 DEBUG oslo_concurrency.lockutils [None req-2e8348fc-730c-43ea-954a-8f2243213414 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:55:45 compute-1 nova_compute[225855]: 2026-01-20 14:55:45.355 225859 DEBUG oslo_concurrency.lockutils [None req-2e8348fc-730c-43ea-954a-8f2243213414 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:55:45 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:55:45 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:55:45.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:55:45 compute-1 nova_compute[225855]: 2026-01-20 14:55:45.355 225859 DEBUG oslo_concurrency.lockutils [None req-2e8348fc-730c-43ea-954a-8f2243213414 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:55:45 compute-1 nova_compute[225855]: 2026-01-20 14:55:45.357 225859 DEBUG nova.virt.libvirt.driver [None req-2e8348fc-730c-43ea-954a-8f2243213414 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] [instance: 3bec73f6-5255-44c0-8a10-a64c7e86c0c2] Start _get_guest_xml network_info=[{"id": "e084df8c-a73e-4535-bcf7-de8adbafa9ae", "address": "fa:16:3e:89:8e:0f", "network": {"id": "e9589011-b728-4b79-9945-aa6c52dd0fc2", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-1143668360-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b1e83af992c94112a965575784639d77", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape084df8c-a7", "ovs_interfaceid": "e084df8c-a73e-4535-bcf7-de8adbafa9ae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='',container_format='bare',created_at=2026-01-20T14:55:18Z,direct_url=<?>,disk_format='raw',id=0f1d91a7-05af-4ed6-87af-1e03976e25f0,min_disk=1,min_ram=0,name='tempest-AttachVolumeShelveTestJSON-server-913712707-shelved',owner='b1e83af992c94112a965575784639d77',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2026-01-20T14:55:27Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'device_type': 'disk', 'encryption_options': None, 'size': 0, 'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'guest_format': None, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'image_id': 'a32b3e07-16d8-46fd-9a7b-c242c432fcf9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 20 14:55:45 compute-1 nova_compute[225855]: 2026-01-20 14:55:45.362 225859 WARNING nova.virt.libvirt.driver [None req-2e8348fc-730c-43ea-954a-8f2243213414 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 14:55:45 compute-1 nova_compute[225855]: 2026-01-20 14:55:45.382 225859 DEBUG nova.virt.libvirt.host [None req-2e8348fc-730c-43ea-954a-8f2243213414 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 20 14:55:45 compute-1 nova_compute[225855]: 2026-01-20 14:55:45.383 225859 DEBUG nova.virt.libvirt.host [None req-2e8348fc-730c-43ea-954a-8f2243213414 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 20 14:55:45 compute-1 nova_compute[225855]: 2026-01-20 14:55:45.386 225859 DEBUG nova.virt.libvirt.host [None req-2e8348fc-730c-43ea-954a-8f2243213414 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 20 14:55:45 compute-1 nova_compute[225855]: 2026-01-20 14:55:45.387 225859 DEBUG nova.virt.libvirt.host [None req-2e8348fc-730c-43ea-954a-8f2243213414 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 20 14:55:45 compute-1 nova_compute[225855]: 2026-01-20 14:55:45.388 225859 DEBUG nova.virt.libvirt.driver [None req-2e8348fc-730c-43ea-954a-8f2243213414 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 20 14:55:45 compute-1 nova_compute[225855]: 2026-01-20 14:55:45.388 225859 DEBUG nova.virt.hardware [None req-2e8348fc-730c-43ea-954a-8f2243213414 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-20T14:21:55Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='522deaab-a741-4dbb-932d-d8b13a211c33',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='',container_format='bare',created_at=2026-01-20T14:55:18Z,direct_url=<?>,disk_format='raw',id=0f1d91a7-05af-4ed6-87af-1e03976e25f0,min_disk=1,min_ram=0,name='tempest-AttachVolumeShelveTestJSON-server-913712707-shelved',owner='b1e83af992c94112a965575784639d77',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2026-01-20T14:55:27Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 20 14:55:45 compute-1 nova_compute[225855]: 2026-01-20 14:55:45.389 225859 DEBUG nova.virt.hardware [None req-2e8348fc-730c-43ea-954a-8f2243213414 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 20 14:55:45 compute-1 nova_compute[225855]: 2026-01-20 14:55:45.389 225859 DEBUG nova.virt.hardware [None req-2e8348fc-730c-43ea-954a-8f2243213414 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 20 14:55:45 compute-1 nova_compute[225855]: 2026-01-20 14:55:45.389 225859 DEBUG nova.virt.hardware [None req-2e8348fc-730c-43ea-954a-8f2243213414 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 20 14:55:45 compute-1 nova_compute[225855]: 2026-01-20 14:55:45.390 225859 DEBUG nova.virt.hardware [None req-2e8348fc-730c-43ea-954a-8f2243213414 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 20 14:55:45 compute-1 nova_compute[225855]: 2026-01-20 14:55:45.390 225859 DEBUG nova.virt.hardware [None req-2e8348fc-730c-43ea-954a-8f2243213414 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 20 14:55:45 compute-1 nova_compute[225855]: 2026-01-20 14:55:45.390 225859 DEBUG nova.virt.hardware [None req-2e8348fc-730c-43ea-954a-8f2243213414 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 20 14:55:45 compute-1 nova_compute[225855]: 2026-01-20 14:55:45.390 225859 DEBUG nova.virt.hardware [None req-2e8348fc-730c-43ea-954a-8f2243213414 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 20 14:55:45 compute-1 nova_compute[225855]: 2026-01-20 14:55:45.391 225859 DEBUG nova.virt.hardware [None req-2e8348fc-730c-43ea-954a-8f2243213414 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 20 14:55:45 compute-1 nova_compute[225855]: 2026-01-20 14:55:45.391 225859 DEBUG nova.virt.hardware [None req-2e8348fc-730c-43ea-954a-8f2243213414 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 20 14:55:45 compute-1 nova_compute[225855]: 2026-01-20 14:55:45.391 225859 DEBUG nova.virt.hardware [None req-2e8348fc-730c-43ea-954a-8f2243213414 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 20 14:55:45 compute-1 nova_compute[225855]: 2026-01-20 14:55:45.391 225859 DEBUG nova.objects.instance [None req-2e8348fc-730c-43ea-954a-8f2243213414 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 3bec73f6-5255-44c0-8a10-a64c7e86c0c2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:55:45 compute-1 nova_compute[225855]: 2026-01-20 14:55:45.408 225859 DEBUG oslo_concurrency.processutils [None req-2e8348fc-730c-43ea-954a-8f2243213414 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:55:45 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:55:45 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/387418848' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:55:45 compute-1 nova_compute[225855]: 2026-01-20 14:55:45.863 225859 DEBUG oslo_concurrency.processutils [None req-2e8348fc-730c-43ea-954a-8f2243213414 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:55:45 compute-1 nova_compute[225855]: 2026-01-20 14:55:45.893 225859 DEBUG nova.storage.rbd_utils [None req-2e8348fc-730c-43ea-954a-8f2243213414 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] rbd image 3bec73f6-5255-44c0-8a10-a64c7e86c0c2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:55:45 compute-1 nova_compute[225855]: 2026-01-20 14:55:45.897 225859 DEBUG oslo_concurrency.processutils [None req-2e8348fc-730c-43ea-954a-8f2243213414 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:55:45 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:55:45 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:55:45 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:55:45.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:55:46 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:55:46.027 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5ffd4ac3-9266-4927-98ad-20a17782c725, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '40'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:55:46 compute-1 nova_compute[225855]: 2026-01-20 14:55:46.095 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:55:46 compute-1 nova_compute[225855]: 2026-01-20 14:55:46.122 225859 DEBUG nova.network.neutron [req-23f35e16-0c16-45cd-a5bf-fee625d47a4d req-a01a4132-b35a-47fa-93c4-6bfd9c1e5f56 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 3bec73f6-5255-44c0-8a10-a64c7e86c0c2] Updated VIF entry in instance network info cache for port e084df8c-a73e-4535-bcf7-de8adbafa9ae. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 14:55:46 compute-1 nova_compute[225855]: 2026-01-20 14:55:46.123 225859 DEBUG nova.network.neutron [req-23f35e16-0c16-45cd-a5bf-fee625d47a4d req-a01a4132-b35a-47fa-93c4-6bfd9c1e5f56 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 3bec73f6-5255-44c0-8a10-a64c7e86c0c2] Updating instance_info_cache with network_info: [{"id": "e084df8c-a73e-4535-bcf7-de8adbafa9ae", "address": "fa:16:3e:89:8e:0f", "network": {"id": "e9589011-b728-4b79-9945-aa6c52dd0fc2", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-1143668360-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b1e83af992c94112a965575784639d77", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape084df8c-a7", "ovs_interfaceid": "e084df8c-a73e-4535-bcf7-de8adbafa9ae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:55:46 compute-1 nova_compute[225855]: 2026-01-20 14:55:46.145 225859 DEBUG oslo_concurrency.lockutils [req-23f35e16-0c16-45cd-a5bf-fee625d47a4d req-a01a4132-b35a-47fa-93c4-6bfd9c1e5f56 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-3bec73f6-5255-44c0-8a10-a64c7e86c0c2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:55:46 compute-1 nova_compute[225855]: 2026-01-20 14:55:46.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:55:46 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:55:46 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1562014082' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:55:46 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/2332547316' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:55:46 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/387418848' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:55:46 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/2964940297' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:55:46 compute-1 ceph-mon[81775]: pgmap v2096: 321 pgs: 321 active+clean; 359 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 5.5 MiB/s rd, 3.3 MiB/s wr, 340 op/s
Jan 20 14:55:46 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/4122561463' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:55:46 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/1562014082' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:55:46 compute-1 nova_compute[225855]: 2026-01-20 14:55:46.368 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:55:46 compute-1 nova_compute[225855]: 2026-01-20 14:55:46.368 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:55:46 compute-1 nova_compute[225855]: 2026-01-20 14:55:46.368 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:55:46 compute-1 nova_compute[225855]: 2026-01-20 14:55:46.369 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 14:55:46 compute-1 nova_compute[225855]: 2026-01-20 14:55:46.369 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:55:46 compute-1 nova_compute[225855]: 2026-01-20 14:55:46.398 225859 DEBUG oslo_concurrency.processutils [None req-2e8348fc-730c-43ea-954a-8f2243213414 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.501s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:55:46 compute-1 nova_compute[225855]: 2026-01-20 14:55:46.401 225859 DEBUG nova.virt.libvirt.vif [None req-2e8348fc-730c-43ea-954a-8f2243213414 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-20T14:54:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-AttachVolumeShelveTestJSON-server-913712707',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-attachvolumeshelvetestjson-server-913712707',id=124,image_ref='0f1d91a7-05af-4ed6-87af-1e03976e25f0',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name='tempest-keypair-1388890452',keypairs=<?>,launch_index=0,launched_at=2026-01-20T14:54:56Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='b1e83af992c94112a965575784639d77',ramdisk_id='',reservation_id='r-n20fo39g',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachVolumeShelveTestJSON-896995479',owner_user_name='tempest-AttachVolumeShelveTestJSON-896995479-project-member',shelved_at='2026-01-20T14:55:27.720433',shelved_host='compute-1.ctlplane.example.com',shelved_image_id='0f1d91a7-05af-4ed6-87af-1e03976e25f0'},tags=<?>,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T14:55:38Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='34eb73f628994c11801d447148d5f142',uuid=3bec73f6-5255-44c0-8a10-a64c7e86c0c2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='shelved_offloaded') vif={"id": "e084df8c-a73e-4535-bcf7-de8adbafa9ae", "address": "fa:16:3e:89:8e:0f", "network": {"id": "e9589011-b728-4b79-9945-aa6c52dd0fc2", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-1143668360-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b1e83af992c94112a965575784639d77", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape084df8c-a7", "ovs_interfaceid": "e084df8c-a73e-4535-bcf7-de8adbafa9ae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 20 14:55:46 compute-1 nova_compute[225855]: 2026-01-20 14:55:46.401 225859 DEBUG nova.network.os_vif_util [None req-2e8348fc-730c-43ea-954a-8f2243213414 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] Converting VIF {"id": "e084df8c-a73e-4535-bcf7-de8adbafa9ae", "address": "fa:16:3e:89:8e:0f", "network": {"id": "e9589011-b728-4b79-9945-aa6c52dd0fc2", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-1143668360-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b1e83af992c94112a965575784639d77", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape084df8c-a7", "ovs_interfaceid": "e084df8c-a73e-4535-bcf7-de8adbafa9ae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:55:46 compute-1 nova_compute[225855]: 2026-01-20 14:55:46.403 225859 DEBUG nova.network.os_vif_util [None req-2e8348fc-730c-43ea-954a-8f2243213414 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:89:8e:0f,bridge_name='br-int',has_traffic_filtering=True,id=e084df8c-a73e-4535-bcf7-de8adbafa9ae,network=Network(e9589011-b728-4b79-9945-aa6c52dd0fc2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape084df8c-a7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:55:46 compute-1 nova_compute[225855]: 2026-01-20 14:55:46.404 225859 DEBUG nova.objects.instance [None req-2e8348fc-730c-43ea-954a-8f2243213414 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] Lazy-loading 'pci_devices' on Instance uuid 3bec73f6-5255-44c0-8a10-a64c7e86c0c2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:55:46 compute-1 nova_compute[225855]: 2026-01-20 14:55:46.428 225859 DEBUG nova.virt.libvirt.driver [None req-2e8348fc-730c-43ea-954a-8f2243213414 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] [instance: 3bec73f6-5255-44c0-8a10-a64c7e86c0c2] End _get_guest_xml xml=<domain type="kvm">
Jan 20 14:55:46 compute-1 nova_compute[225855]:   <uuid>3bec73f6-5255-44c0-8a10-a64c7e86c0c2</uuid>
Jan 20 14:55:46 compute-1 nova_compute[225855]:   <name>instance-0000007c</name>
Jan 20 14:55:46 compute-1 nova_compute[225855]:   <memory>131072</memory>
Jan 20 14:55:46 compute-1 nova_compute[225855]:   <vcpu>1</vcpu>
Jan 20 14:55:46 compute-1 nova_compute[225855]:   <metadata>
Jan 20 14:55:46 compute-1 nova_compute[225855]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 14:55:46 compute-1 nova_compute[225855]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 14:55:46 compute-1 nova_compute[225855]:       <nova:name>tempest-AttachVolumeShelveTestJSON-server-913712707</nova:name>
Jan 20 14:55:46 compute-1 nova_compute[225855]:       <nova:creationTime>2026-01-20 14:55:45</nova:creationTime>
Jan 20 14:55:46 compute-1 nova_compute[225855]:       <nova:flavor name="m1.nano">
Jan 20 14:55:46 compute-1 nova_compute[225855]:         <nova:memory>128</nova:memory>
Jan 20 14:55:46 compute-1 nova_compute[225855]:         <nova:disk>1</nova:disk>
Jan 20 14:55:46 compute-1 nova_compute[225855]:         <nova:swap>0</nova:swap>
Jan 20 14:55:46 compute-1 nova_compute[225855]:         <nova:ephemeral>0</nova:ephemeral>
Jan 20 14:55:46 compute-1 nova_compute[225855]:         <nova:vcpus>1</nova:vcpus>
Jan 20 14:55:46 compute-1 nova_compute[225855]:       </nova:flavor>
Jan 20 14:55:46 compute-1 nova_compute[225855]:       <nova:owner>
Jan 20 14:55:46 compute-1 nova_compute[225855]:         <nova:user uuid="34eb73f628994c11801d447148d5f142">tempest-AttachVolumeShelveTestJSON-896995479-project-member</nova:user>
Jan 20 14:55:46 compute-1 nova_compute[225855]:         <nova:project uuid="b1e83af992c94112a965575784639d77">tempest-AttachVolumeShelveTestJSON-896995479</nova:project>
Jan 20 14:55:46 compute-1 nova_compute[225855]:       </nova:owner>
Jan 20 14:55:46 compute-1 nova_compute[225855]:       <nova:root type="image" uuid="0f1d91a7-05af-4ed6-87af-1e03976e25f0"/>
Jan 20 14:55:46 compute-1 nova_compute[225855]:       <nova:ports>
Jan 20 14:55:46 compute-1 nova_compute[225855]:         <nova:port uuid="e084df8c-a73e-4535-bcf7-de8adbafa9ae">
Jan 20 14:55:46 compute-1 nova_compute[225855]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Jan 20 14:55:46 compute-1 nova_compute[225855]:         </nova:port>
Jan 20 14:55:46 compute-1 nova_compute[225855]:       </nova:ports>
Jan 20 14:55:46 compute-1 nova_compute[225855]:     </nova:instance>
Jan 20 14:55:46 compute-1 nova_compute[225855]:   </metadata>
Jan 20 14:55:46 compute-1 nova_compute[225855]:   <sysinfo type="smbios">
Jan 20 14:55:46 compute-1 nova_compute[225855]:     <system>
Jan 20 14:55:46 compute-1 nova_compute[225855]:       <entry name="manufacturer">RDO</entry>
Jan 20 14:55:46 compute-1 nova_compute[225855]:       <entry name="product">OpenStack Compute</entry>
Jan 20 14:55:46 compute-1 nova_compute[225855]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 14:55:46 compute-1 nova_compute[225855]:       <entry name="serial">3bec73f6-5255-44c0-8a10-a64c7e86c0c2</entry>
Jan 20 14:55:46 compute-1 nova_compute[225855]:       <entry name="uuid">3bec73f6-5255-44c0-8a10-a64c7e86c0c2</entry>
Jan 20 14:55:46 compute-1 nova_compute[225855]:       <entry name="family">Virtual Machine</entry>
Jan 20 14:55:46 compute-1 nova_compute[225855]:     </system>
Jan 20 14:55:46 compute-1 nova_compute[225855]:   </sysinfo>
Jan 20 14:55:46 compute-1 nova_compute[225855]:   <os>
Jan 20 14:55:46 compute-1 nova_compute[225855]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 20 14:55:46 compute-1 nova_compute[225855]:     <boot dev="hd"/>
Jan 20 14:55:46 compute-1 nova_compute[225855]:     <smbios mode="sysinfo"/>
Jan 20 14:55:46 compute-1 nova_compute[225855]:   </os>
Jan 20 14:55:46 compute-1 nova_compute[225855]:   <features>
Jan 20 14:55:46 compute-1 nova_compute[225855]:     <acpi/>
Jan 20 14:55:46 compute-1 nova_compute[225855]:     <apic/>
Jan 20 14:55:46 compute-1 nova_compute[225855]:     <vmcoreinfo/>
Jan 20 14:55:46 compute-1 nova_compute[225855]:   </features>
Jan 20 14:55:46 compute-1 nova_compute[225855]:   <clock offset="utc">
Jan 20 14:55:46 compute-1 nova_compute[225855]:     <timer name="pit" tickpolicy="delay"/>
Jan 20 14:55:46 compute-1 nova_compute[225855]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 20 14:55:46 compute-1 nova_compute[225855]:     <timer name="hpet" present="no"/>
Jan 20 14:55:46 compute-1 nova_compute[225855]:   </clock>
Jan 20 14:55:46 compute-1 nova_compute[225855]:   <cpu mode="custom" match="exact">
Jan 20 14:55:46 compute-1 nova_compute[225855]:     <model>Nehalem</model>
Jan 20 14:55:46 compute-1 nova_compute[225855]:     <topology sockets="1" cores="1" threads="1"/>
Jan 20 14:55:46 compute-1 nova_compute[225855]:   </cpu>
Jan 20 14:55:46 compute-1 nova_compute[225855]:   <devices>
Jan 20 14:55:46 compute-1 nova_compute[225855]:     <disk type="network" device="disk">
Jan 20 14:55:46 compute-1 nova_compute[225855]:       <driver type="raw" cache="none"/>
Jan 20 14:55:46 compute-1 nova_compute[225855]:       <source protocol="rbd" name="vms/3bec73f6-5255-44c0-8a10-a64c7e86c0c2_disk">
Jan 20 14:55:46 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:55:46 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:55:46 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:55:46 compute-1 nova_compute[225855]:       </source>
Jan 20 14:55:46 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 14:55:46 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:55:46 compute-1 nova_compute[225855]:       </auth>
Jan 20 14:55:46 compute-1 nova_compute[225855]:       <target dev="vda" bus="virtio"/>
Jan 20 14:55:46 compute-1 nova_compute[225855]:     </disk>
Jan 20 14:55:46 compute-1 nova_compute[225855]:     <disk type="network" device="cdrom">
Jan 20 14:55:46 compute-1 nova_compute[225855]:       <driver type="raw" cache="none"/>
Jan 20 14:55:46 compute-1 nova_compute[225855]:       <source protocol="rbd" name="vms/3bec73f6-5255-44c0-8a10-a64c7e86c0c2_disk.config">
Jan 20 14:55:46 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:55:46 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:55:46 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:55:46 compute-1 nova_compute[225855]:       </source>
Jan 20 14:55:46 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 14:55:46 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:55:46 compute-1 nova_compute[225855]:       </auth>
Jan 20 14:55:46 compute-1 nova_compute[225855]:       <target dev="sda" bus="sata"/>
Jan 20 14:55:46 compute-1 nova_compute[225855]:     </disk>
Jan 20 14:55:46 compute-1 nova_compute[225855]:     <interface type="ethernet">
Jan 20 14:55:46 compute-1 nova_compute[225855]:       <mac address="fa:16:3e:89:8e:0f"/>
Jan 20 14:55:46 compute-1 nova_compute[225855]:       <model type="virtio"/>
Jan 20 14:55:46 compute-1 nova_compute[225855]:       <driver name="vhost" rx_queue_size="512"/>
Jan 20 14:55:46 compute-1 nova_compute[225855]:       <mtu size="1442"/>
Jan 20 14:55:46 compute-1 nova_compute[225855]:       <target dev="tape084df8c-a7"/>
Jan 20 14:55:46 compute-1 nova_compute[225855]:     </interface>
Jan 20 14:55:46 compute-1 nova_compute[225855]:     <serial type="pty">
Jan 20 14:55:46 compute-1 nova_compute[225855]:       <log file="/var/lib/nova/instances/3bec73f6-5255-44c0-8a10-a64c7e86c0c2/console.log" append="off"/>
Jan 20 14:55:46 compute-1 nova_compute[225855]:     </serial>
Jan 20 14:55:46 compute-1 nova_compute[225855]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 20 14:55:46 compute-1 nova_compute[225855]:     <video>
Jan 20 14:55:46 compute-1 nova_compute[225855]:       <model type="virtio"/>
Jan 20 14:55:46 compute-1 nova_compute[225855]:     </video>
Jan 20 14:55:46 compute-1 nova_compute[225855]:     <input type="tablet" bus="usb"/>
Jan 20 14:55:46 compute-1 nova_compute[225855]:     <input type="keyboard" bus="usb"/>
Jan 20 14:55:46 compute-1 nova_compute[225855]:     <rng model="virtio">
Jan 20 14:55:46 compute-1 nova_compute[225855]:       <backend model="random">/dev/urandom</backend>
Jan 20 14:55:46 compute-1 nova_compute[225855]:     </rng>
Jan 20 14:55:46 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root"/>
Jan 20 14:55:46 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:55:46 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:55:46 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:55:46 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:55:46 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:55:46 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:55:46 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:55:46 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:55:46 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:55:46 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:55:46 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:55:46 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:55:46 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:55:46 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:55:46 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:55:46 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:55:46 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:55:46 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:55:46 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:55:46 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:55:46 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:55:46 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:55:46 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:55:46 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:55:46 compute-1 nova_compute[225855]:     <controller type="usb" index="0"/>
Jan 20 14:55:46 compute-1 nova_compute[225855]:     <memballoon model="virtio">
Jan 20 14:55:46 compute-1 nova_compute[225855]:       <stats period="10"/>
Jan 20 14:55:46 compute-1 nova_compute[225855]:     </memballoon>
Jan 20 14:55:46 compute-1 nova_compute[225855]:   </devices>
Jan 20 14:55:46 compute-1 nova_compute[225855]: </domain>
Jan 20 14:55:46 compute-1 nova_compute[225855]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 20 14:55:46 compute-1 nova_compute[225855]: 2026-01-20 14:55:46.430 225859 DEBUG nova.compute.manager [None req-2e8348fc-730c-43ea-954a-8f2243213414 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] [instance: 3bec73f6-5255-44c0-8a10-a64c7e86c0c2] Preparing to wait for external event network-vif-plugged-e084df8c-a73e-4535-bcf7-de8adbafa9ae prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 20 14:55:46 compute-1 nova_compute[225855]: 2026-01-20 14:55:46.431 225859 DEBUG oslo_concurrency.lockutils [None req-2e8348fc-730c-43ea-954a-8f2243213414 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] Acquiring lock "3bec73f6-5255-44c0-8a10-a64c7e86c0c2-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:55:46 compute-1 nova_compute[225855]: 2026-01-20 14:55:46.431 225859 DEBUG oslo_concurrency.lockutils [None req-2e8348fc-730c-43ea-954a-8f2243213414 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] Lock "3bec73f6-5255-44c0-8a10-a64c7e86c0c2-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:55:46 compute-1 nova_compute[225855]: 2026-01-20 14:55:46.431 225859 DEBUG oslo_concurrency.lockutils [None req-2e8348fc-730c-43ea-954a-8f2243213414 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] Lock "3bec73f6-5255-44c0-8a10-a64c7e86c0c2-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:55:46 compute-1 nova_compute[225855]: 2026-01-20 14:55:46.432 225859 DEBUG nova.virt.libvirt.vif [None req-2e8348fc-730c-43ea-954a-8f2243213414 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-20T14:54:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-AttachVolumeShelveTestJSON-server-913712707',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-attachvolumeshelvetestjson-server-913712707',id=124,image_ref='0f1d91a7-05af-4ed6-87af-1e03976e25f0',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name='tempest-keypair-1388890452',keypairs=<?>,launch_index=0,launched_at=2026-01-20T14:54:56Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='b1e83af992c94112a965575784639d77',ramdisk_id='',reservation_id='r-n20fo39g',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachVolumeShelveTestJSON-896995479',owner_user_name='tempest-AttachVolumeShelveTestJSON-896995479-project-member',shelved_at='2026-01-20T14:55:27.720433',shelved_host='compute-1.ctlplane.example.com',shelved_image_id='0f1d91a7-05af-4ed6-87af-1e03976e25f0'},tags=<?>,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T14:55:38Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='34eb73f628994c11801d447148d5f142',uuid=3bec73f6-5255-44c0-8a10-a64c7e86c0c2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='shelved_offloaded') vif={"id": "e084df8c-a73e-4535-bcf7-de8adbafa9ae", "address": "fa:16:3e:89:8e:0f", "network": {"id": "e9589011-b728-4b79-9945-aa6c52dd0fc2", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-1143668360-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b1e83af992c94112a965575784639d77", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape084df8c-a7", "ovs_interfaceid": "e084df8c-a73e-4535-bcf7-de8adbafa9ae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 20 14:55:46 compute-1 nova_compute[225855]: 2026-01-20 14:55:46.432 225859 DEBUG nova.network.os_vif_util [None req-2e8348fc-730c-43ea-954a-8f2243213414 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] Converting VIF {"id": "e084df8c-a73e-4535-bcf7-de8adbafa9ae", "address": "fa:16:3e:89:8e:0f", "network": {"id": "e9589011-b728-4b79-9945-aa6c52dd0fc2", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-1143668360-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b1e83af992c94112a965575784639d77", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape084df8c-a7", "ovs_interfaceid": "e084df8c-a73e-4535-bcf7-de8adbafa9ae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:55:46 compute-1 nova_compute[225855]: 2026-01-20 14:55:46.433 225859 DEBUG nova.network.os_vif_util [None req-2e8348fc-730c-43ea-954a-8f2243213414 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:89:8e:0f,bridge_name='br-int',has_traffic_filtering=True,id=e084df8c-a73e-4535-bcf7-de8adbafa9ae,network=Network(e9589011-b728-4b79-9945-aa6c52dd0fc2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape084df8c-a7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:55:46 compute-1 nova_compute[225855]: 2026-01-20 14:55:46.433 225859 DEBUG os_vif [None req-2e8348fc-730c-43ea-954a-8f2243213414 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:89:8e:0f,bridge_name='br-int',has_traffic_filtering=True,id=e084df8c-a73e-4535-bcf7-de8adbafa9ae,network=Network(e9589011-b728-4b79-9945-aa6c52dd0fc2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape084df8c-a7') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 20 14:55:46 compute-1 nova_compute[225855]: 2026-01-20 14:55:46.434 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:55:46 compute-1 nova_compute[225855]: 2026-01-20 14:55:46.434 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:55:46 compute-1 nova_compute[225855]: 2026-01-20 14:55:46.435 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:55:46 compute-1 nova_compute[225855]: 2026-01-20 14:55:46.437 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:55:46 compute-1 nova_compute[225855]: 2026-01-20 14:55:46.438 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape084df8c-a7, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:55:46 compute-1 nova_compute[225855]: 2026-01-20 14:55:46.439 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tape084df8c-a7, col_values=(('external_ids', {'iface-id': 'e084df8c-a73e-4535-bcf7-de8adbafa9ae', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:89:8e:0f', 'vm-uuid': '3bec73f6-5255-44c0-8a10-a64c7e86c0c2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:55:46 compute-1 NetworkManager[49104]: <info>  [1768920946.4411] manager: (tape084df8c-a7): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/215)
Jan 20 14:55:46 compute-1 nova_compute[225855]: 2026-01-20 14:55:46.442 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 14:55:46 compute-1 nova_compute[225855]: 2026-01-20 14:55:46.448 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:55:46 compute-1 nova_compute[225855]: 2026-01-20 14:55:46.449 225859 INFO os_vif [None req-2e8348fc-730c-43ea-954a-8f2243213414 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:89:8e:0f,bridge_name='br-int',has_traffic_filtering=True,id=e084df8c-a73e-4535-bcf7-de8adbafa9ae,network=Network(e9589011-b728-4b79-9945-aa6c52dd0fc2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape084df8c-a7')
Jan 20 14:55:46 compute-1 nova_compute[225855]: 2026-01-20 14:55:46.526 225859 DEBUG nova.virt.libvirt.driver [None req-2e8348fc-730c-43ea-954a-8f2243213414 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:55:46 compute-1 nova_compute[225855]: 2026-01-20 14:55:46.526 225859 DEBUG nova.virt.libvirt.driver [None req-2e8348fc-730c-43ea-954a-8f2243213414 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:55:46 compute-1 nova_compute[225855]: 2026-01-20 14:55:46.527 225859 DEBUG nova.virt.libvirt.driver [None req-2e8348fc-730c-43ea-954a-8f2243213414 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] No VIF found with MAC fa:16:3e:89:8e:0f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 20 14:55:46 compute-1 nova_compute[225855]: 2026-01-20 14:55:46.528 225859 INFO nova.virt.libvirt.driver [None req-2e8348fc-730c-43ea-954a-8f2243213414 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] [instance: 3bec73f6-5255-44c0-8a10-a64c7e86c0c2] Using config drive
Jan 20 14:55:46 compute-1 nova_compute[225855]: 2026-01-20 14:55:46.556 225859 DEBUG nova.storage.rbd_utils [None req-2e8348fc-730c-43ea-954a-8f2243213414 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] rbd image 3bec73f6-5255-44c0-8a10-a64c7e86c0c2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:55:46 compute-1 nova_compute[225855]: 2026-01-20 14:55:46.578 225859 DEBUG nova.objects.instance [None req-2e8348fc-730c-43ea-954a-8f2243213414 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 3bec73f6-5255-44c0-8a10-a64c7e86c0c2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:55:46 compute-1 nova_compute[225855]: 2026-01-20 14:55:46.634 225859 DEBUG nova.objects.instance [None req-2e8348fc-730c-43ea-954a-8f2243213414 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] Lazy-loading 'keypairs' on Instance uuid 3bec73f6-5255-44c0-8a10-a64c7e86c0c2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:55:46 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:55:46 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/541677167' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:55:46 compute-1 nova_compute[225855]: 2026-01-20 14:55:46.808 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:55:46 compute-1 nova_compute[225855]: 2026-01-20 14:55:46.855 225859 DEBUG nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] skipping disk for instance-0000007c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 14:55:46 compute-1 nova_compute[225855]: 2026-01-20 14:55:46.855 225859 DEBUG nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] skipping disk for instance-0000007c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 14:55:46 compute-1 nova_compute[225855]: 2026-01-20 14:55:46.969 225859 WARNING nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 14:55:46 compute-1 nova_compute[225855]: 2026-01-20 14:55:46.970 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4348MB free_disk=20.853878021240234GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 14:55:46 compute-1 nova_compute[225855]: 2026-01-20 14:55:46.971 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:55:46 compute-1 nova_compute[225855]: 2026-01-20 14:55:46.971 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:55:47 compute-1 nova_compute[225855]: 2026-01-20 14:55:47.013 225859 INFO nova.virt.libvirt.driver [None req-2e8348fc-730c-43ea-954a-8f2243213414 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] [instance: 3bec73f6-5255-44c0-8a10-a64c7e86c0c2] Creating config drive at /var/lib/nova/instances/3bec73f6-5255-44c0-8a10-a64c7e86c0c2/disk.config
Jan 20 14:55:47 compute-1 nova_compute[225855]: 2026-01-20 14:55:47.018 225859 DEBUG oslo_concurrency.processutils [None req-2e8348fc-730c-43ea-954a-8f2243213414 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/3bec73f6-5255-44c0-8a10-a64c7e86c0c2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpcwgnu9oe execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:55:47 compute-1 nova_compute[225855]: 2026-01-20 14:55:47.071 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Instance 3bec73f6-5255-44c0-8a10-a64c7e86c0c2 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 20 14:55:47 compute-1 nova_compute[225855]: 2026-01-20 14:55:47.071 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 14:55:47 compute-1 nova_compute[225855]: 2026-01-20 14:55:47.072 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 14:55:47 compute-1 nova_compute[225855]: 2026-01-20 14:55:47.108 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:55:47 compute-1 nova_compute[225855]: 2026-01-20 14:55:47.149 225859 DEBUG oslo_concurrency.processutils [None req-2e8348fc-730c-43ea-954a-8f2243213414 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/3bec73f6-5255-44c0-8a10-a64c7e86c0c2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpcwgnu9oe" returned: 0 in 0.130s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:55:47 compute-1 nova_compute[225855]: 2026-01-20 14:55:47.178 225859 DEBUG nova.storage.rbd_utils [None req-2e8348fc-730c-43ea-954a-8f2243213414 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] rbd image 3bec73f6-5255-44c0-8a10-a64c7e86c0c2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:55:47 compute-1 nova_compute[225855]: 2026-01-20 14:55:47.182 225859 DEBUG oslo_concurrency.processutils [None req-2e8348fc-730c-43ea-954a-8f2243213414 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/3bec73f6-5255-44c0-8a10-a64c7e86c0c2/disk.config 3bec73f6-5255-44c0-8a10-a64c7e86c0c2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:55:47 compute-1 nova_compute[225855]: 2026-01-20 14:55:47.329 225859 DEBUG oslo_concurrency.processutils [None req-2e8348fc-730c-43ea-954a-8f2243213414 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/3bec73f6-5255-44c0-8a10-a64c7e86c0c2/disk.config 3bec73f6-5255-44c0-8a10-a64c7e86c0c2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.147s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:55:47 compute-1 nova_compute[225855]: 2026-01-20 14:55:47.329 225859 INFO nova.virt.libvirt.driver [None req-2e8348fc-730c-43ea-954a-8f2243213414 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] [instance: 3bec73f6-5255-44c0-8a10-a64c7e86c0c2] Deleting local config drive /var/lib/nova/instances/3bec73f6-5255-44c0-8a10-a64c7e86c0c2/disk.config because it was imported into RBD.
Jan 20 14:55:47 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:55:47 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:55:47 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:55:47.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:55:47 compute-1 kernel: tape084df8c-a7: entered promiscuous mode
Jan 20 14:55:47 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/541677167' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:55:47 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/2961583989' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:55:47 compute-1 nova_compute[225855]: 2026-01-20 14:55:47.376 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:55:47 compute-1 NetworkManager[49104]: <info>  [1768920947.3774] manager: (tape084df8c-a7): new Tun device (/org/freedesktop/NetworkManager/Devices/216)
Jan 20 14:55:47 compute-1 nova_compute[225855]: 2026-01-20 14:55:47.380 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:55:47 compute-1 ovn_controller[130490]: 2026-01-20T14:55:47Z|00509|binding|INFO|Claiming lport e084df8c-a73e-4535-bcf7-de8adbafa9ae for this chassis.
Jan 20 14:55:47 compute-1 ovn_controller[130490]: 2026-01-20T14:55:47Z|00510|binding|INFO|e084df8c-a73e-4535-bcf7-de8adbafa9ae: Claiming fa:16:3e:89:8e:0f 10.100.0.14
Jan 20 14:55:47 compute-1 nova_compute[225855]: 2026-01-20 14:55:47.386 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:55:47 compute-1 nova_compute[225855]: 2026-01-20 14:55:47.391 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:55:47 compute-1 nova_compute[225855]: 2026-01-20 14:55:47.397 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:55:47 compute-1 NetworkManager[49104]: <info>  [1768920947.4001] manager: (patch-provnet-b62c391b-f7a3-4a38-a0df-72ac0383ca74-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/217)
Jan 20 14:55:47 compute-1 NetworkManager[49104]: <info>  [1768920947.4010] manager: (patch-br-int-to-provnet-b62c391b-f7a3-4a38-a0df-72ac0383ca74): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/218)
Jan 20 14:55:47 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:55:47.404 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:89:8e:0f 10.100.0.14'], port_security=['fa:16:3e:89:8e:0f 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '3bec73f6-5255-44c0-8a10-a64c7e86c0c2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e9589011-b728-4b79-9945-aa6c52dd0fc2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b1e83af992c94112a965575784639d77', 'neutron:revision_number': '7', 'neutron:security_group_ids': 'e88a960e-8540-4c69-934d-b6e1b91beb98', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.233'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1dc6df7d-3e57-4779-8232-af1ccf413403, chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], logical_port=e084df8c-a73e-4535-bcf7-de8adbafa9ae) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:55:47 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:55:47.405 140354 INFO neutron.agent.ovn.metadata.agent [-] Port e084df8c-a73e-4535-bcf7-de8adbafa9ae in datapath e9589011-b728-4b79-9945-aa6c52dd0fc2 bound to our chassis
Jan 20 14:55:47 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:55:47.406 140354 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e9589011-b728-4b79-9945-aa6c52dd0fc2
Jan 20 14:55:47 compute-1 systemd-machined[194361]: New machine qemu-60-instance-0000007c.
Jan 20 14:55:47 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:55:47.419 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[6685728e-3146-4d43-a620-06302305ab80]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:55:47 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:55:47.419 140354 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tape9589011-b1 in ovnmeta-e9589011-b728-4b79-9945-aa6c52dd0fc2 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 20 14:55:47 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:55:47.421 229707 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tape9589011-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 20 14:55:47 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:55:47.421 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[69b19a9e-ed24-499e-829b-4be4d10b4c87]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:55:47 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:55:47.422 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[8c283884-b1d3-4d4f-8869-02e55c599bbb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:55:47 compute-1 systemd[1]: Started Virtual Machine qemu-60-instance-0000007c.
Jan 20 14:55:47 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:55:47.435 140466 DEBUG oslo.privsep.daemon [-] privsep: reply[00055eed-e34a-42b1-8f5f-9fc4dfc14818]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:55:47 compute-1 systemd-udevd[276214]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 14:55:47 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:55:47.451 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[0b4b0f9d-a493-4ea5-8574-7fe238833dfe]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:55:47 compute-1 NetworkManager[49104]: <info>  [1768920947.4617] device (tape084df8c-a7): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 20 14:55:47 compute-1 NetworkManager[49104]: <info>  [1768920947.4624] device (tape084df8c-a7): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 20 14:55:47 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:55:47.481 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[5365d447-da3c-4b4b-a454-e07aa52b4b61]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:55:47 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:55:47 compute-1 systemd-udevd[276217]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 14:55:47 compute-1 NetworkManager[49104]: <info>  [1768920947.5005] manager: (tape9589011-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/219)
Jan 20 14:55:47 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:55:47.501 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[1454503a-7911-4820-a01b-3a6ab9a315e0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:55:47 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:55:47.528 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[810c2b8d-f481-43a6-85f1-da7241a420f2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:55:47 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:55:47.531 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[bf40ccb2-b6aa-4323-8482-85c723a2a7bf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:55:47 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:55:47 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2048446538' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:55:47 compute-1 NetworkManager[49104]: <info>  [1768920947.5514] device (tape9589011-b0): carrier: link connected
Jan 20 14:55:47 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:55:47.556 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[dc322a48-5634-4198-9924-af7662e796ed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:55:47 compute-1 nova_compute[225855]: 2026-01-20 14:55:47.563 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:55:47 compute-1 nova_compute[225855]: 2026-01-20 14:55:47.568 225859 DEBUG nova.compute.provider_tree [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:55:47 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:55:47.571 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[bce37ea1-377e-4086-9d49-e26ced334828]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape9589011-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a0:5a:14'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 144], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 598647, 'reachable_time': 22390, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 276246, 'error': None, 'target': 'ovnmeta-e9589011-b728-4b79-9945-aa6c52dd0fc2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:55:47 compute-1 nova_compute[225855]: 2026-01-20 14:55:47.583 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:55:47 compute-1 nova_compute[225855]: 2026-01-20 14:55:47.589 225859 DEBUG nova.scheduler.client.report [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:55:47 compute-1 nova_compute[225855]: 2026-01-20 14:55:47.603 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:55:47 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:55:47.603 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[17d815b1-d11f-447e-a88d-9bd630308f27]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fea0:5a14'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 598647, 'tstamp': 598647}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 276247, 'error': None, 'target': 'ovnmeta-e9589011-b728-4b79-9945-aa6c52dd0fc2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:55:47 compute-1 nova_compute[225855]: 2026-01-20 14:55:47.613 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 14:55:47 compute-1 nova_compute[225855]: 2026-01-20 14:55:47.614 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.643s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:55:47 compute-1 ovn_controller[130490]: 2026-01-20T14:55:47Z|00511|binding|INFO|Setting lport e084df8c-a73e-4535-bcf7-de8adbafa9ae ovn-installed in OVS
Jan 20 14:55:47 compute-1 ovn_controller[130490]: 2026-01-20T14:55:47Z|00512|binding|INFO|Setting lport e084df8c-a73e-4535-bcf7-de8adbafa9ae up in Southbound
Jan 20 14:55:47 compute-1 nova_compute[225855]: 2026-01-20 14:55:47.616 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:55:47 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:55:47.619 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[a10e8714-c6b5-4158-b594-ddd9d30fb028]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape9589011-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a0:5a:14'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 144], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 598647, 'reachable_time': 22390, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 276249, 'error': None, 'target': 'ovnmeta-e9589011-b728-4b79-9945-aa6c52dd0fc2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:55:47 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:55:47.649 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[7f1aba8f-7393-4650-80a1-9c4111667fec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:55:47 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:55:47.703 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[b5fc96a2-b30b-4838-b4c8-f6c009fbf882]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:55:47 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:55:47.705 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape9589011-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:55:47 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:55:47.705 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:55:47 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:55:47.705 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape9589011-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:55:47 compute-1 nova_compute[225855]: 2026-01-20 14:55:47.707 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:55:47 compute-1 kernel: tape9589011-b0: entered promiscuous mode
Jan 20 14:55:47 compute-1 NetworkManager[49104]: <info>  [1768920947.7081] manager: (tape9589011-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/220)
Jan 20 14:55:47 compute-1 nova_compute[225855]: 2026-01-20 14:55:47.709 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:55:47 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:55:47.710 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape9589011-b0, col_values=(('external_ids', {'iface-id': '9ca9d06a-9365-4769-a2c4-7322625683ac'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:55:47 compute-1 nova_compute[225855]: 2026-01-20 14:55:47.710 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:55:47 compute-1 ovn_controller[130490]: 2026-01-20T14:55:47Z|00513|binding|INFO|Releasing lport 9ca9d06a-9365-4769-a2c4-7322625683ac from this chassis (sb_readonly=0)
Jan 20 14:55:47 compute-1 nova_compute[225855]: 2026-01-20 14:55:47.726 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:55:47 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:55:47.727 140354 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/e9589011-b728-4b79-9945-aa6c52dd0fc2.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/e9589011-b728-4b79-9945-aa6c52dd0fc2.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 20 14:55:47 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:55:47.728 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[fbd566c6-0707-40d3-b537-1b7d8eba3600]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:55:47 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:55:47.729 140354 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 20 14:55:47 compute-1 ovn_metadata_agent[140349]: global
Jan 20 14:55:47 compute-1 ovn_metadata_agent[140349]:     log         /dev/log local0 debug
Jan 20 14:55:47 compute-1 ovn_metadata_agent[140349]:     log-tag     haproxy-metadata-proxy-e9589011-b728-4b79-9945-aa6c52dd0fc2
Jan 20 14:55:47 compute-1 ovn_metadata_agent[140349]:     user        root
Jan 20 14:55:47 compute-1 ovn_metadata_agent[140349]:     group       root
Jan 20 14:55:47 compute-1 ovn_metadata_agent[140349]:     maxconn     1024
Jan 20 14:55:47 compute-1 ovn_metadata_agent[140349]:     pidfile     /var/lib/neutron/external/pids/e9589011-b728-4b79-9945-aa6c52dd0fc2.pid.haproxy
Jan 20 14:55:47 compute-1 ovn_metadata_agent[140349]:     daemon
Jan 20 14:55:47 compute-1 ovn_metadata_agent[140349]: 
Jan 20 14:55:47 compute-1 ovn_metadata_agent[140349]: defaults
Jan 20 14:55:47 compute-1 ovn_metadata_agent[140349]:     log global
Jan 20 14:55:47 compute-1 ovn_metadata_agent[140349]:     mode http
Jan 20 14:55:47 compute-1 ovn_metadata_agent[140349]:     option httplog
Jan 20 14:55:47 compute-1 ovn_metadata_agent[140349]:     option dontlognull
Jan 20 14:55:47 compute-1 ovn_metadata_agent[140349]:     option http-server-close
Jan 20 14:55:47 compute-1 ovn_metadata_agent[140349]:     option forwardfor
Jan 20 14:55:47 compute-1 ovn_metadata_agent[140349]:     retries                 3
Jan 20 14:55:47 compute-1 ovn_metadata_agent[140349]:     timeout http-request    30s
Jan 20 14:55:47 compute-1 ovn_metadata_agent[140349]:     timeout connect         30s
Jan 20 14:55:47 compute-1 ovn_metadata_agent[140349]:     timeout client          32s
Jan 20 14:55:47 compute-1 ovn_metadata_agent[140349]:     timeout server          32s
Jan 20 14:55:47 compute-1 ovn_metadata_agent[140349]:     timeout http-keep-alive 30s
Jan 20 14:55:47 compute-1 ovn_metadata_agent[140349]: 
Jan 20 14:55:47 compute-1 ovn_metadata_agent[140349]: 
Jan 20 14:55:47 compute-1 ovn_metadata_agent[140349]: listen listener
Jan 20 14:55:47 compute-1 ovn_metadata_agent[140349]:     bind 169.254.169.254:80
Jan 20 14:55:47 compute-1 ovn_metadata_agent[140349]:     server metadata /var/lib/neutron/metadata_proxy
Jan 20 14:55:47 compute-1 ovn_metadata_agent[140349]:     http-request add-header X-OVN-Network-ID e9589011-b728-4b79-9945-aa6c52dd0fc2
Jan 20 14:55:47 compute-1 ovn_metadata_agent[140349]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 20 14:55:47 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:55:47.730 140354 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-e9589011-b728-4b79-9945-aa6c52dd0fc2', 'env', 'PROCESS_TAG=haproxy-e9589011-b728-4b79-9945-aa6c52dd0fc2', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/e9589011-b728-4b79-9945-aa6c52dd0fc2.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 20 14:55:47 compute-1 nova_compute[225855]: 2026-01-20 14:55:47.943 225859 DEBUG nova.compute.manager [req-74e89d76-f67f-4246-b948-64d9ecf34e5f req-9889af3c-bd24-4645-9463-73f65ea9adcd 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 3bec73f6-5255-44c0-8a10-a64c7e86c0c2] Received event network-vif-plugged-e084df8c-a73e-4535-bcf7-de8adbafa9ae external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:55:47 compute-1 nova_compute[225855]: 2026-01-20 14:55:47.944 225859 DEBUG oslo_concurrency.lockutils [req-74e89d76-f67f-4246-b948-64d9ecf34e5f req-9889af3c-bd24-4645-9463-73f65ea9adcd 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "3bec73f6-5255-44c0-8a10-a64c7e86c0c2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:55:47 compute-1 nova_compute[225855]: 2026-01-20 14:55:47.944 225859 DEBUG oslo_concurrency.lockutils [req-74e89d76-f67f-4246-b948-64d9ecf34e5f req-9889af3c-bd24-4645-9463-73f65ea9adcd 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "3bec73f6-5255-44c0-8a10-a64c7e86c0c2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:55:47 compute-1 nova_compute[225855]: 2026-01-20 14:55:47.944 225859 DEBUG oslo_concurrency.lockutils [req-74e89d76-f67f-4246-b948-64d9ecf34e5f req-9889af3c-bd24-4645-9463-73f65ea9adcd 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "3bec73f6-5255-44c0-8a10-a64c7e86c0c2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:55:47 compute-1 nova_compute[225855]: 2026-01-20 14:55:47.945 225859 DEBUG nova.compute.manager [req-74e89d76-f67f-4246-b948-64d9ecf34e5f req-9889af3c-bd24-4645-9463-73f65ea9adcd 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 3bec73f6-5255-44c0-8a10-a64c7e86c0c2] Processing event network-vif-plugged-e084df8c-a73e-4535-bcf7-de8adbafa9ae _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 20 14:55:47 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:55:47 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:55:47 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:55:47.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:55:48 compute-1 podman[276281]: 2026-01-20 14:55:48.110725674 +0000 UTC m=+0.054290214 container create a8130d817215800fca2757ee75ba5302132a8e9fb828d8b29771e62e00d0d605 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e9589011-b728-4b79-9945-aa6c52dd0fc2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true)
Jan 20 14:55:48 compute-1 systemd[1]: Started libpod-conmon-a8130d817215800fca2757ee75ba5302132a8e9fb828d8b29771e62e00d0d605.scope.
Jan 20 14:55:48 compute-1 podman[276281]: 2026-01-20 14:55:48.079142192 +0000 UTC m=+0.022706762 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 20 14:55:48 compute-1 systemd[1]: Started libcrun container.
Jan 20 14:55:48 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b58477d32c8948a9385d790e9b6a62f6a46c01c4ace31c1af6ef64bffa12e02/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 20 14:55:48 compute-1 podman[276281]: 2026-01-20 14:55:48.206030794 +0000 UTC m=+0.149595354 container init a8130d817215800fca2757ee75ba5302132a8e9fb828d8b29771e62e00d0d605 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e9589011-b728-4b79-9945-aa6c52dd0fc2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2)
Jan 20 14:55:48 compute-1 podman[276281]: 2026-01-20 14:55:48.212307351 +0000 UTC m=+0.155871891 container start a8130d817215800fca2757ee75ba5302132a8e9fb828d8b29771e62e00d0d605 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e9589011-b728-4b79-9945-aa6c52dd0fc2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 20 14:55:48 compute-1 neutron-haproxy-ovnmeta-e9589011-b728-4b79-9945-aa6c52dd0fc2[276297]: [NOTICE]   (276301) : New worker (276303) forked
Jan 20 14:55:48 compute-1 neutron-haproxy-ovnmeta-e9589011-b728-4b79-9945-aa6c52dd0fc2[276297]: [NOTICE]   (276301) : Loading success.
Jan 20 14:55:48 compute-1 nova_compute[225855]: 2026-01-20 14:55:48.409 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768920948.4093874, 3bec73f6-5255-44c0-8a10-a64c7e86c0c2 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:55:48 compute-1 nova_compute[225855]: 2026-01-20 14:55:48.410 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 3bec73f6-5255-44c0-8a10-a64c7e86c0c2] VM Started (Lifecycle Event)
Jan 20 14:55:48 compute-1 nova_compute[225855]: 2026-01-20 14:55:48.412 225859 DEBUG nova.compute.manager [None req-2e8348fc-730c-43ea-954a-8f2243213414 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] [instance: 3bec73f6-5255-44c0-8a10-a64c7e86c0c2] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 20 14:55:48 compute-1 nova_compute[225855]: 2026-01-20 14:55:48.415 225859 DEBUG nova.virt.libvirt.driver [None req-2e8348fc-730c-43ea-954a-8f2243213414 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] [instance: 3bec73f6-5255-44c0-8a10-a64c7e86c0c2] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 20 14:55:48 compute-1 nova_compute[225855]: 2026-01-20 14:55:48.417 225859 INFO nova.virt.libvirt.driver [-] [instance: 3bec73f6-5255-44c0-8a10-a64c7e86c0c2] Instance spawned successfully.
Jan 20 14:55:48 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/2048446538' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:55:48 compute-1 ceph-mon[81775]: pgmap v2097: 321 pgs: 321 active+clean; 378 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 6.4 MiB/s rd, 2.7 MiB/s wr, 281 op/s
Jan 20 14:55:48 compute-1 nova_compute[225855]: 2026-01-20 14:55:48.447 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 3bec73f6-5255-44c0-8a10-a64c7e86c0c2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:55:48 compute-1 nova_compute[225855]: 2026-01-20 14:55:48.451 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 3bec73f6-5255-44c0-8a10-a64c7e86c0c2] Synchronizing instance power state after lifecycle event "Started"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:55:48 compute-1 nova_compute[225855]: 2026-01-20 14:55:48.480 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 3bec73f6-5255-44c0-8a10-a64c7e86c0c2] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 14:55:48 compute-1 nova_compute[225855]: 2026-01-20 14:55:48.482 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768920948.4095788, 3bec73f6-5255-44c0-8a10-a64c7e86c0c2 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:55:48 compute-1 nova_compute[225855]: 2026-01-20 14:55:48.482 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 3bec73f6-5255-44c0-8a10-a64c7e86c0c2] VM Paused (Lifecycle Event)
Jan 20 14:55:48 compute-1 nova_compute[225855]: 2026-01-20 14:55:48.501 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 3bec73f6-5255-44c0-8a10-a64c7e86c0c2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:55:48 compute-1 nova_compute[225855]: 2026-01-20 14:55:48.504 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768920948.4142823, 3bec73f6-5255-44c0-8a10-a64c7e86c0c2 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:55:48 compute-1 nova_compute[225855]: 2026-01-20 14:55:48.505 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 3bec73f6-5255-44c0-8a10-a64c7e86c0c2] VM Resumed (Lifecycle Event)
Jan 20 14:55:48 compute-1 nova_compute[225855]: 2026-01-20 14:55:48.533 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 3bec73f6-5255-44c0-8a10-a64c7e86c0c2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:55:48 compute-1 nova_compute[225855]: 2026-01-20 14:55:48.536 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 3bec73f6-5255-44c0-8a10-a64c7e86c0c2] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:55:48 compute-1 nova_compute[225855]: 2026-01-20 14:55:48.573 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 3bec73f6-5255-44c0-8a10-a64c7e86c0c2] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 14:55:49 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:55:49 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:55:49 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:55:49.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:55:49 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/3137524621' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:55:49 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/858933901' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:55:49 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e307 e307: 3 total, 3 up, 3 in
Jan 20 14:55:49 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:55:49 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:55:49 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:55:49.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:55:50 compute-1 nova_compute[225855]: 2026-01-20 14:55:50.018 225859 DEBUG nova.compute.manager [req-a7c4c67e-75c5-4d75-8ade-4895cd5e75a9 req-b1ad84d0-df78-4c73-b790-b392f0a359a1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 3bec73f6-5255-44c0-8a10-a64c7e86c0c2] Received event network-vif-plugged-e084df8c-a73e-4535-bcf7-de8adbafa9ae external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:55:50 compute-1 nova_compute[225855]: 2026-01-20 14:55:50.018 225859 DEBUG oslo_concurrency.lockutils [req-a7c4c67e-75c5-4d75-8ade-4895cd5e75a9 req-b1ad84d0-df78-4c73-b790-b392f0a359a1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "3bec73f6-5255-44c0-8a10-a64c7e86c0c2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:55:50 compute-1 nova_compute[225855]: 2026-01-20 14:55:50.019 225859 DEBUG oslo_concurrency.lockutils [req-a7c4c67e-75c5-4d75-8ade-4895cd5e75a9 req-b1ad84d0-df78-4c73-b790-b392f0a359a1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "3bec73f6-5255-44c0-8a10-a64c7e86c0c2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:55:50 compute-1 nova_compute[225855]: 2026-01-20 14:55:50.019 225859 DEBUG oslo_concurrency.lockutils [req-a7c4c67e-75c5-4d75-8ade-4895cd5e75a9 req-b1ad84d0-df78-4c73-b790-b392f0a359a1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "3bec73f6-5255-44c0-8a10-a64c7e86c0c2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:55:50 compute-1 nova_compute[225855]: 2026-01-20 14:55:50.019 225859 DEBUG nova.compute.manager [req-a7c4c67e-75c5-4d75-8ade-4895cd5e75a9 req-b1ad84d0-df78-4c73-b790-b392f0a359a1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 3bec73f6-5255-44c0-8a10-a64c7e86c0c2] No waiting events found dispatching network-vif-plugged-e084df8c-a73e-4535-bcf7-de8adbafa9ae pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:55:50 compute-1 nova_compute[225855]: 2026-01-20 14:55:50.019 225859 WARNING nova.compute.manager [req-a7c4c67e-75c5-4d75-8ade-4895cd5e75a9 req-b1ad84d0-df78-4c73-b790-b392f0a359a1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 3bec73f6-5255-44c0-8a10-a64c7e86c0c2] Received unexpected event network-vif-plugged-e084df8c-a73e-4535-bcf7-de8adbafa9ae for instance with vm_state shelved_offloaded and task_state spawning.
Jan 20 14:55:50 compute-1 ceph-mon[81775]: osdmap e307: 3 total, 3 up, 3 in
Jan 20 14:55:50 compute-1 ceph-mon[81775]: pgmap v2099: 321 pgs: 321 active+clean; 397 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 8.8 MiB/s rd, 4.4 MiB/s wr, 248 op/s
Jan 20 14:55:50 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/2089473108' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:55:50 compute-1 nova_compute[225855]: 2026-01-20 14:55:50.609 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:55:50 compute-1 nova_compute[225855]: 2026-01-20 14:55:50.610 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:55:50 compute-1 nova_compute[225855]: 2026-01-20 14:55:50.785 225859 DEBUG nova.compute.manager [None req-2e8348fc-730c-43ea-954a-8f2243213414 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] [instance: 3bec73f6-5255-44c0-8a10-a64c7e86c0c2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:55:50 compute-1 nova_compute[225855]: 2026-01-20 14:55:50.866 225859 DEBUG oslo_concurrency.lockutils [None req-2e8348fc-730c-43ea-954a-8f2243213414 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] Lock "3bec73f6-5255-44c0-8a10-a64c7e86c0c2" "released" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: held 12.426s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:55:51 compute-1 nova_compute[225855]: 2026-01-20 14:55:51.142 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:55:51 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:55:51 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:55:51 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:55:51.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:55:51 compute-1 nova_compute[225855]: 2026-01-20 14:55:51.441 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:55:51 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/40220004' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:55:51 compute-1 ceph-osd[79119]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #50. Immutable memtables: 7.
Jan 20 14:55:51 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:55:51 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:55:51 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:55:51.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:55:52 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:55:52 compute-1 sudo[276356]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:55:52 compute-1 sudo[276356]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:55:52 compute-1 sudo[276356]: pam_unix(sudo:session): session closed for user root
Jan 20 14:55:52 compute-1 ceph-mon[81775]: pgmap v2100: 321 pgs: 2 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 309 active+clean; 405 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 8.4 MiB/s rd, 4.7 MiB/s wr, 248 op/s
Jan 20 14:55:52 compute-1 sudo[276381]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:55:52 compute-1 sudo[276381]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:55:52 compute-1 sudo[276381]: pam_unix(sudo:session): session closed for user root
Jan 20 14:55:53 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:55:53 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:55:53 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:55:53.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:55:53 compute-1 nova_compute[225855]: 2026-01-20 14:55:53.399 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:55:53 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:55:53 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:55:53 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:55:53.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:55:54 compute-1 ceph-mon[81775]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #97. Immutable memtables: 0.
Jan 20 14:55:54 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:55:54.936443) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 20 14:55:54 compute-1 ceph-mon[81775]: rocksdb: [db/flush_job.cc:856] [default] [JOB 59] Flushing memtable with next log file: 97
Jan 20 14:55:54 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768920954936476, "job": 59, "event": "flush_started", "num_memtables": 1, "num_entries": 1180, "num_deletes": 254, "total_data_size": 2204638, "memory_usage": 2254672, "flush_reason": "Manual Compaction"}
Jan 20 14:55:54 compute-1 ceph-mon[81775]: rocksdb: [db/flush_job.cc:885] [default] [JOB 59] Level-0 flush table #98: started
Jan 20 14:55:54 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768920954955710, "cf_name": "default", "job": 59, "event": "table_file_creation", "file_number": 98, "file_size": 977776, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 49742, "largest_seqno": 50917, "table_properties": {"data_size": 973325, "index_size": 1911, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1477, "raw_key_size": 12214, "raw_average_key_size": 21, "raw_value_size": 963598, "raw_average_value_size": 1702, "num_data_blocks": 83, "num_entries": 566, "num_filter_entries": 566, "num_deletions": 254, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768920881, "oldest_key_time": 1768920881, "file_creation_time": 1768920954, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1539d774-8a6f-4e48-b253-137c44586344", "db_session_id": "LFF7G2OZDOU7TKQ8MKAH", "orig_file_number": 98, "seqno_to_time_mapping": "N/A"}}
Jan 20 14:55:54 compute-1 ceph-mon[81775]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 59] Flush lasted 19311 microseconds, and 3354 cpu microseconds.
Jan 20 14:55:54 compute-1 ceph-mon[81775]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 14:55:54 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:55:54.955753) [db/flush_job.cc:967] [default] [JOB 59] Level-0 flush table #98: 977776 bytes OK
Jan 20 14:55:54 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:55:54.955771) [db/memtable_list.cc:519] [default] Level-0 commit table #98 started
Jan 20 14:55:54 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:55:54.957188) [db/memtable_list.cc:722] [default] Level-0 commit table #98: memtable #1 done
Jan 20 14:55:54 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:55:54.957202) EVENT_LOG_v1 {"time_micros": 1768920954957198, "job": 59, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 20 14:55:54 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:55:54.957222) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 20 14:55:54 compute-1 ceph-mon[81775]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 59] Try to delete WAL files size 2198836, prev total WAL file size 2198836, number of live WAL files 2.
Jan 20 14:55:54 compute-1 ceph-mon[81775]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000094.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 14:55:54 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:55:54.957952) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031353035' seq:72057594037927935, type:22 .. '6D6772737461740031373536' seq:0, type:0; will stop at (end)
Jan 20 14:55:54 compute-1 ceph-mon[81775]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 60] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 20 14:55:54 compute-1 ceph-mon[81775]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 59 Base level 0, inputs: [98(954KB)], [96(11MB)]
Jan 20 14:55:54 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768920954957983, "job": 60, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [98], "files_L6": [96], "score": -1, "input_data_size": 13450642, "oldest_snapshot_seqno": -1}
Jan 20 14:55:55 compute-1 podman[276407]: 2026-01-20 14:55:55.036942399 +0000 UTC m=+0.080643098 container health_status 533cd8e0c3b8b7f910f15cef49e7f879ab46c12ce646a35628c15f13725eb500 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Jan 20 14:55:55 compute-1 ceph-mon[81775]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 60] Generated table #99: 7560 keys, 10086049 bytes, temperature: kUnknown
Jan 20 14:55:55 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768920955057259, "cf_name": "default", "job": 60, "event": "table_file_creation", "file_number": 99, "file_size": 10086049, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10036810, "index_size": 29223, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 18949, "raw_key_size": 195723, "raw_average_key_size": 25, "raw_value_size": 9903078, "raw_average_value_size": 1309, "num_data_blocks": 1149, "num_entries": 7560, "num_filter_entries": 7560, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768917474, "oldest_key_time": 0, "file_creation_time": 1768920954, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1539d774-8a6f-4e48-b253-137c44586344", "db_session_id": "LFF7G2OZDOU7TKQ8MKAH", "orig_file_number": 99, "seqno_to_time_mapping": "N/A"}}
Jan 20 14:55:55 compute-1 ceph-mon[81775]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 14:55:55 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:55:55.057548) [db/compaction/compaction_job.cc:1663] [default] [JOB 60] Compacted 1@0 + 1@6 files to L6 => 10086049 bytes
Jan 20 14:55:55 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:55:55.058881) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 135.3 rd, 101.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.9, 11.9 +0.0 blob) out(9.6 +0.0 blob), read-write-amplify(24.1) write-amplify(10.3) OK, records in: 8053, records dropped: 493 output_compression: NoCompression
Jan 20 14:55:55 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:55:55.058915) EVENT_LOG_v1 {"time_micros": 1768920955058891, "job": 60, "event": "compaction_finished", "compaction_time_micros": 99408, "compaction_time_cpu_micros": 25977, "output_level": 6, "num_output_files": 1, "total_output_size": 10086049, "num_input_records": 8053, "num_output_records": 7560, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 20 14:55:55 compute-1 ceph-mon[81775]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000098.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 14:55:55 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768920955059144, "job": 60, "event": "table_file_deletion", "file_number": 98}
Jan 20 14:55:55 compute-1 ceph-mon[81775]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000096.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 14:55:55 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768920955060794, "job": 60, "event": "table_file_deletion", "file_number": 96}
Jan 20 14:55:55 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:55:54.957854) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:55:55 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:55:55.060881) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:55:55 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:55:55.060885) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:55:55 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:55:55.060887) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:55:55 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:55:55.060888) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:55:55 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:55:55.060890) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:55:55 compute-1 ceph-mon[81775]: pgmap v2101: 321 pgs: 2 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 309 active+clean; 398 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 7.9 MiB/s rd, 6.2 MiB/s wr, 291 op/s
Jan 20 14:55:55 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/2206619567' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:55:55 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/251820846' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:55:55 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:55:55 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:55:55 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:55:55.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:55:55 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:55:55 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:55:55 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:55:55.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:55:56 compute-1 nova_compute[225855]: 2026-01-20 14:55:56.144 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:55:56 compute-1 ceph-mon[81775]: pgmap v2102: 321 pgs: 321 active+clean; 399 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 9.0 MiB/s rd, 7.3 MiB/s wr, 316 op/s
Jan 20 14:55:56 compute-1 nova_compute[225855]: 2026-01-20 14:55:56.443 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:55:57 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:55:57 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:55:57 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:55:57.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:55:57 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:55:57 compute-1 nova_compute[225855]: 2026-01-20 14:55:57.806 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:55:57 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/3540996139' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:55:57 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:55:57 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:55:57 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:55:57.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:55:58 compute-1 ceph-mon[81775]: pgmap v2103: 321 pgs: 321 active+clean; 392 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 7.9 MiB/s rd, 6.4 MiB/s wr, 368 op/s
Jan 20 14:55:59 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:55:59 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:55:59 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:55:59.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:55:59 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e308 e308: 3 total, 3 up, 3 in
Jan 20 14:56:00 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:56:00 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:56:00 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:56:00.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:56:00 compute-1 nova_compute[225855]: 2026-01-20 14:56:00.633 225859 DEBUG oslo_concurrency.lockutils [None req-f2371872-7b78-4e90-8e3f-234488572008 f6f144f1d330427e82e84c891e9a8a89 4be5b75b5dcb4eeea9759f7c4a779ffa - - default default] Acquiring lock "f6f09d34-bc44-451f-98e2-1b0701aeab3a" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:56:00 compute-1 nova_compute[225855]: 2026-01-20 14:56:00.634 225859 DEBUG oslo_concurrency.lockutils [None req-f2371872-7b78-4e90-8e3f-234488572008 f6f144f1d330427e82e84c891e9a8a89 4be5b75b5dcb4eeea9759f7c4a779ffa - - default default] Lock "f6f09d34-bc44-451f-98e2-1b0701aeab3a" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:56:00 compute-1 nova_compute[225855]: 2026-01-20 14:56:00.650 225859 DEBUG nova.compute.manager [None req-f2371872-7b78-4e90-8e3f-234488572008 f6f144f1d330427e82e84c891e9a8a89 4be5b75b5dcb4eeea9759f7c4a779ffa - - default default] [instance: f6f09d34-bc44-451f-98e2-1b0701aeab3a] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 20 14:56:00 compute-1 nova_compute[225855]: 2026-01-20 14:56:00.726 225859 DEBUG oslo_concurrency.lockutils [None req-f2371872-7b78-4e90-8e3f-234488572008 f6f144f1d330427e82e84c891e9a8a89 4be5b75b5dcb4eeea9759f7c4a779ffa - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:56:00 compute-1 nova_compute[225855]: 2026-01-20 14:56:00.726 225859 DEBUG oslo_concurrency.lockutils [None req-f2371872-7b78-4e90-8e3f-234488572008 f6f144f1d330427e82e84c891e9a8a89 4be5b75b5dcb4eeea9759f7c4a779ffa - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:56:00 compute-1 nova_compute[225855]: 2026-01-20 14:56:00.737 225859 DEBUG nova.virt.hardware [None req-f2371872-7b78-4e90-8e3f-234488572008 f6f144f1d330427e82e84c891e9a8a89 4be5b75b5dcb4eeea9759f7c4a779ffa - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 20 14:56:00 compute-1 nova_compute[225855]: 2026-01-20 14:56:00.737 225859 INFO nova.compute.claims [None req-f2371872-7b78-4e90-8e3f-234488572008 f6f144f1d330427e82e84c891e9a8a89 4be5b75b5dcb4eeea9759f7c4a779ffa - - default default] [instance: f6f09d34-bc44-451f-98e2-1b0701aeab3a] Claim successful on node compute-1.ctlplane.example.com
Jan 20 14:56:00 compute-1 nova_compute[225855]: 2026-01-20 14:56:00.836 225859 DEBUG oslo_concurrency.processutils [None req-f2371872-7b78-4e90-8e3f-234488572008 f6f144f1d330427e82e84c891e9a8a89 4be5b75b5dcb4eeea9759f7c4a779ffa - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:56:00 compute-1 ceph-mon[81775]: osdmap e308: 3 total, 3 up, 3 in
Jan 20 14:56:00 compute-1 ceph-mon[81775]: pgmap v2105: 321 pgs: 321 active+clean; 364 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 6.1 MiB/s rd, 5.0 MiB/s wr, 393 op/s
Jan 20 14:56:00 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/242987236' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:56:01 compute-1 nova_compute[225855]: 2026-01-20 14:56:01.149 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:56:01 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:56:01 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/533248142' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:56:01 compute-1 nova_compute[225855]: 2026-01-20 14:56:01.310 225859 DEBUG oslo_concurrency.processutils [None req-f2371872-7b78-4e90-8e3f-234488572008 f6f144f1d330427e82e84c891e9a8a89 4be5b75b5dcb4eeea9759f7c4a779ffa - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:56:01 compute-1 nova_compute[225855]: 2026-01-20 14:56:01.316 225859 DEBUG nova.compute.provider_tree [None req-f2371872-7b78-4e90-8e3f-234488572008 f6f144f1d330427e82e84c891e9a8a89 4be5b75b5dcb4eeea9759f7c4a779ffa - - default default] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:56:01 compute-1 nova_compute[225855]: 2026-01-20 14:56:01.331 225859 DEBUG nova.scheduler.client.report [None req-f2371872-7b78-4e90-8e3f-234488572008 f6f144f1d330427e82e84c891e9a8a89 4be5b75b5dcb4eeea9759f7c4a779ffa - - default default] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:56:01 compute-1 nova_compute[225855]: 2026-01-20 14:56:01.353 225859 DEBUG oslo_concurrency.lockutils [None req-f2371872-7b78-4e90-8e3f-234488572008 f6f144f1d330427e82e84c891e9a8a89 4be5b75b5dcb4eeea9759f7c4a779ffa - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.627s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:56:01 compute-1 nova_compute[225855]: 2026-01-20 14:56:01.354 225859 DEBUG nova.compute.manager [None req-f2371872-7b78-4e90-8e3f-234488572008 f6f144f1d330427e82e84c891e9a8a89 4be5b75b5dcb4eeea9759f7c4a779ffa - - default default] [instance: f6f09d34-bc44-451f-98e2-1b0701aeab3a] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 20 14:56:01 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:56:01 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:56:01 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:56:01.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:56:01 compute-1 nova_compute[225855]: 2026-01-20 14:56:01.391 225859 DEBUG nova.compute.manager [None req-f2371872-7b78-4e90-8e3f-234488572008 f6f144f1d330427e82e84c891e9a8a89 4be5b75b5dcb4eeea9759f7c4a779ffa - - default default] [instance: f6f09d34-bc44-451f-98e2-1b0701aeab3a] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 20 14:56:01 compute-1 nova_compute[225855]: 2026-01-20 14:56:01.392 225859 DEBUG nova.network.neutron [None req-f2371872-7b78-4e90-8e3f-234488572008 f6f144f1d330427e82e84c891e9a8a89 4be5b75b5dcb4eeea9759f7c4a779ffa - - default default] [instance: f6f09d34-bc44-451f-98e2-1b0701aeab3a] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 20 14:56:01 compute-1 nova_compute[225855]: 2026-01-20 14:56:01.407 225859 INFO nova.virt.libvirt.driver [None req-f2371872-7b78-4e90-8e3f-234488572008 f6f144f1d330427e82e84c891e9a8a89 4be5b75b5dcb4eeea9759f7c4a779ffa - - default default] [instance: f6f09d34-bc44-451f-98e2-1b0701aeab3a] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 20 14:56:01 compute-1 nova_compute[225855]: 2026-01-20 14:56:01.423 225859 DEBUG nova.compute.manager [None req-f2371872-7b78-4e90-8e3f-234488572008 f6f144f1d330427e82e84c891e9a8a89 4be5b75b5dcb4eeea9759f7c4a779ffa - - default default] [instance: f6f09d34-bc44-451f-98e2-1b0701aeab3a] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 20 14:56:01 compute-1 nova_compute[225855]: 2026-01-20 14:56:01.445 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:56:01 compute-1 nova_compute[225855]: 2026-01-20 14:56:01.513 225859 DEBUG nova.compute.manager [None req-f2371872-7b78-4e90-8e3f-234488572008 f6f144f1d330427e82e84c891e9a8a89 4be5b75b5dcb4eeea9759f7c4a779ffa - - default default] [instance: f6f09d34-bc44-451f-98e2-1b0701aeab3a] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 20 14:56:01 compute-1 nova_compute[225855]: 2026-01-20 14:56:01.515 225859 DEBUG nova.virt.libvirt.driver [None req-f2371872-7b78-4e90-8e3f-234488572008 f6f144f1d330427e82e84c891e9a8a89 4be5b75b5dcb4eeea9759f7c4a779ffa - - default default] [instance: f6f09d34-bc44-451f-98e2-1b0701aeab3a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 20 14:56:01 compute-1 nova_compute[225855]: 2026-01-20 14:56:01.515 225859 INFO nova.virt.libvirt.driver [None req-f2371872-7b78-4e90-8e3f-234488572008 f6f144f1d330427e82e84c891e9a8a89 4be5b75b5dcb4eeea9759f7c4a779ffa - - default default] [instance: f6f09d34-bc44-451f-98e2-1b0701aeab3a] Creating image(s)
Jan 20 14:56:01 compute-1 nova_compute[225855]: 2026-01-20 14:56:01.540 225859 DEBUG nova.storage.rbd_utils [None req-f2371872-7b78-4e90-8e3f-234488572008 f6f144f1d330427e82e84c891e9a8a89 4be5b75b5dcb4eeea9759f7c4a779ffa - - default default] rbd image f6f09d34-bc44-451f-98e2-1b0701aeab3a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:56:01 compute-1 nova_compute[225855]: 2026-01-20 14:56:01.566 225859 DEBUG nova.storage.rbd_utils [None req-f2371872-7b78-4e90-8e3f-234488572008 f6f144f1d330427e82e84c891e9a8a89 4be5b75b5dcb4eeea9759f7c4a779ffa - - default default] rbd image f6f09d34-bc44-451f-98e2-1b0701aeab3a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:56:01 compute-1 nova_compute[225855]: 2026-01-20 14:56:01.591 225859 DEBUG nova.storage.rbd_utils [None req-f2371872-7b78-4e90-8e3f-234488572008 f6f144f1d330427e82e84c891e9a8a89 4be5b75b5dcb4eeea9759f7c4a779ffa - - default default] rbd image f6f09d34-bc44-451f-98e2-1b0701aeab3a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:56:01 compute-1 nova_compute[225855]: 2026-01-20 14:56:01.595 225859 DEBUG oslo_concurrency.processutils [None req-f2371872-7b78-4e90-8e3f-234488572008 f6f144f1d330427e82e84c891e9a8a89 4be5b75b5dcb4eeea9759f7c4a779ffa - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:56:01 compute-1 nova_compute[225855]: 2026-01-20 14:56:01.659 225859 DEBUG oslo_concurrency.processutils [None req-f2371872-7b78-4e90-8e3f-234488572008 f6f144f1d330427e82e84c891e9a8a89 4be5b75b5dcb4eeea9759f7c4a779ffa - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:56:01 compute-1 nova_compute[225855]: 2026-01-20 14:56:01.660 225859 DEBUG oslo_concurrency.lockutils [None req-f2371872-7b78-4e90-8e3f-234488572008 f6f144f1d330427e82e84c891e9a8a89 4be5b75b5dcb4eeea9759f7c4a779ffa - - default default] Acquiring lock "82d5c1918fd7c974214c7a48c1793a7a82560462" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:56:01 compute-1 nova_compute[225855]: 2026-01-20 14:56:01.660 225859 DEBUG oslo_concurrency.lockutils [None req-f2371872-7b78-4e90-8e3f-234488572008 f6f144f1d330427e82e84c891e9a8a89 4be5b75b5dcb4eeea9759f7c4a779ffa - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:56:01 compute-1 nova_compute[225855]: 2026-01-20 14:56:01.661 225859 DEBUG oslo_concurrency.lockutils [None req-f2371872-7b78-4e90-8e3f-234488572008 f6f144f1d330427e82e84c891e9a8a89 4be5b75b5dcb4eeea9759f7c4a779ffa - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:56:01 compute-1 nova_compute[225855]: 2026-01-20 14:56:01.689 225859 DEBUG nova.storage.rbd_utils [None req-f2371872-7b78-4e90-8e3f-234488572008 f6f144f1d330427e82e84c891e9a8a89 4be5b75b5dcb4eeea9759f7c4a779ffa - - default default] rbd image f6f09d34-bc44-451f-98e2-1b0701aeab3a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:56:01 compute-1 nova_compute[225855]: 2026-01-20 14:56:01.693 225859 DEBUG oslo_concurrency.processutils [None req-f2371872-7b78-4e90-8e3f-234488572008 f6f144f1d330427e82e84c891e9a8a89 4be5b75b5dcb4eeea9759f7c4a779ffa - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 f6f09d34-bc44-451f-98e2-1b0701aeab3a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:56:01 compute-1 nova_compute[225855]: 2026-01-20 14:56:01.930 225859 DEBUG nova.policy [None req-f2371872-7b78-4e90-8e3f-234488572008 f6f144f1d330427e82e84c891e9a8a89 4be5b75b5dcb4eeea9759f7c4a779ffa - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'f6f144f1d330427e82e84c891e9a8a89', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '4be5b75b5dcb4eeea9759f7c4a779ffa', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 20 14:56:01 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/533248142' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:56:01 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/2444160729' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:56:01 compute-1 nova_compute[225855]: 2026-01-20 14:56:01.997 225859 DEBUG oslo_concurrency.processutils [None req-f2371872-7b78-4e90-8e3f-234488572008 f6f144f1d330427e82e84c891e9a8a89 4be5b75b5dcb4eeea9759f7c4a779ffa - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 f6f09d34-bc44-451f-98e2-1b0701aeab3a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.304s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:56:02 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:56:02 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:56:02 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:56:02.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:56:02 compute-1 ovn_controller[130490]: 2026-01-20T14:56:02Z|00065|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:89:8e:0f 10.100.0.14
Jan 20 14:56:02 compute-1 nova_compute[225855]: 2026-01-20 14:56:02.075 225859 DEBUG nova.storage.rbd_utils [None req-f2371872-7b78-4e90-8e3f-234488572008 f6f144f1d330427e82e84c891e9a8a89 4be5b75b5dcb4eeea9759f7c4a779ffa - - default default] resizing rbd image f6f09d34-bc44-451f-98e2-1b0701aeab3a_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 20 14:56:02 compute-1 nova_compute[225855]: 2026-01-20 14:56:02.175 225859 DEBUG nova.objects.instance [None req-f2371872-7b78-4e90-8e3f-234488572008 f6f144f1d330427e82e84c891e9a8a89 4be5b75b5dcb4eeea9759f7c4a779ffa - - default default] Lazy-loading 'migration_context' on Instance uuid f6f09d34-bc44-451f-98e2-1b0701aeab3a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:56:02 compute-1 nova_compute[225855]: 2026-01-20 14:56:02.193 225859 DEBUG nova.virt.libvirt.driver [None req-f2371872-7b78-4e90-8e3f-234488572008 f6f144f1d330427e82e84c891e9a8a89 4be5b75b5dcb4eeea9759f7c4a779ffa - - default default] [instance: f6f09d34-bc44-451f-98e2-1b0701aeab3a] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 20 14:56:02 compute-1 nova_compute[225855]: 2026-01-20 14:56:02.194 225859 DEBUG nova.virt.libvirt.driver [None req-f2371872-7b78-4e90-8e3f-234488572008 f6f144f1d330427e82e84c891e9a8a89 4be5b75b5dcb4eeea9759f7c4a779ffa - - default default] [instance: f6f09d34-bc44-451f-98e2-1b0701aeab3a] Ensure instance console log exists: /var/lib/nova/instances/f6f09d34-bc44-451f-98e2-1b0701aeab3a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 20 14:56:02 compute-1 nova_compute[225855]: 2026-01-20 14:56:02.195 225859 DEBUG oslo_concurrency.lockutils [None req-f2371872-7b78-4e90-8e3f-234488572008 f6f144f1d330427e82e84c891e9a8a89 4be5b75b5dcb4eeea9759f7c4a779ffa - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:56:02 compute-1 nova_compute[225855]: 2026-01-20 14:56:02.196 225859 DEBUG oslo_concurrency.lockutils [None req-f2371872-7b78-4e90-8e3f-234488572008 f6f144f1d330427e82e84c891e9a8a89 4be5b75b5dcb4eeea9759f7c4a779ffa - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:56:02 compute-1 nova_compute[225855]: 2026-01-20 14:56:02.196 225859 DEBUG oslo_concurrency.lockutils [None req-f2371872-7b78-4e90-8e3f-234488572008 f6f144f1d330427e82e84c891e9a8a89 4be5b75b5dcb4eeea9759f7c4a779ffa - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:56:02 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:56:02 compute-1 ceph-mon[81775]: pgmap v2106: 321 pgs: 321 active+clean; 307 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 6.3 MiB/s rd, 4.7 MiB/s wr, 387 op/s
Jan 20 14:56:03 compute-1 nova_compute[225855]: 2026-01-20 14:56:03.321 225859 DEBUG nova.network.neutron [None req-f2371872-7b78-4e90-8e3f-234488572008 f6f144f1d330427e82e84c891e9a8a89 4be5b75b5dcb4eeea9759f7c4a779ffa - - default default] [instance: f6f09d34-bc44-451f-98e2-1b0701aeab3a] Successfully created port: 73ed9acf-a178-4d9c-98a3-25f22489d41d _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 20 14:56:03 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:56:03 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:56:03 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:56:03.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:56:04 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:56:04 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:56:04 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:56:04.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:56:05 compute-1 nova_compute[225855]: 2026-01-20 14:56:05.076 225859 DEBUG nova.network.neutron [None req-f2371872-7b78-4e90-8e3f-234488572008 f6f144f1d330427e82e84c891e9a8a89 4be5b75b5dcb4eeea9759f7c4a779ffa - - default default] [instance: f6f09d34-bc44-451f-98e2-1b0701aeab3a] Successfully updated port: 73ed9acf-a178-4d9c-98a3-25f22489d41d _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 20 14:56:05 compute-1 nova_compute[225855]: 2026-01-20 14:56:05.093 225859 DEBUG oslo_concurrency.lockutils [None req-f2371872-7b78-4e90-8e3f-234488572008 f6f144f1d330427e82e84c891e9a8a89 4be5b75b5dcb4eeea9759f7c4a779ffa - - default default] Acquiring lock "refresh_cache-f6f09d34-bc44-451f-98e2-1b0701aeab3a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:56:05 compute-1 nova_compute[225855]: 2026-01-20 14:56:05.094 225859 DEBUG oslo_concurrency.lockutils [None req-f2371872-7b78-4e90-8e3f-234488572008 f6f144f1d330427e82e84c891e9a8a89 4be5b75b5dcb4eeea9759f7c4a779ffa - - default default] Acquired lock "refresh_cache-f6f09d34-bc44-451f-98e2-1b0701aeab3a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:56:05 compute-1 nova_compute[225855]: 2026-01-20 14:56:05.094 225859 DEBUG nova.network.neutron [None req-f2371872-7b78-4e90-8e3f-234488572008 f6f144f1d330427e82e84c891e9a8a89 4be5b75b5dcb4eeea9759f7c4a779ffa - - default default] [instance: f6f09d34-bc44-451f-98e2-1b0701aeab3a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 14:56:05 compute-1 ceph-mon[81775]: pgmap v2107: 321 pgs: 321 active+clean; 303 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 6.1 MiB/s rd, 3.5 MiB/s wr, 364 op/s
Jan 20 14:56:05 compute-1 nova_compute[225855]: 2026-01-20 14:56:05.157 225859 DEBUG nova.compute.manager [req-3ae2be60-b4e7-40f3-918e-bd3a0c0c2349 req-a467a7d9-f09e-4b60-8a85-939b9f66012e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f6f09d34-bc44-451f-98e2-1b0701aeab3a] Received event network-changed-73ed9acf-a178-4d9c-98a3-25f22489d41d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:56:05 compute-1 nova_compute[225855]: 2026-01-20 14:56:05.158 225859 DEBUG nova.compute.manager [req-3ae2be60-b4e7-40f3-918e-bd3a0c0c2349 req-a467a7d9-f09e-4b60-8a85-939b9f66012e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f6f09d34-bc44-451f-98e2-1b0701aeab3a] Refreshing instance network info cache due to event network-changed-73ed9acf-a178-4d9c-98a3-25f22489d41d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 14:56:05 compute-1 nova_compute[225855]: 2026-01-20 14:56:05.159 225859 DEBUG oslo_concurrency.lockutils [req-3ae2be60-b4e7-40f3-918e-bd3a0c0c2349 req-a467a7d9-f09e-4b60-8a85-939b9f66012e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-f6f09d34-bc44-451f-98e2-1b0701aeab3a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:56:05 compute-1 nova_compute[225855]: 2026-01-20 14:56:05.215 225859 DEBUG nova.network.neutron [None req-f2371872-7b78-4e90-8e3f-234488572008 f6f144f1d330427e82e84c891e9a8a89 4be5b75b5dcb4eeea9759f7c4a779ffa - - default default] [instance: f6f09d34-bc44-451f-98e2-1b0701aeab3a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 20 14:56:05 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:56:05 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:56:05 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:56:05.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:56:06 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:56:06 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:56:06 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:56:06.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:56:06 compute-1 nova_compute[225855]: 2026-01-20 14:56:06.151 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:56:06 compute-1 nova_compute[225855]: 2026-01-20 14:56:06.448 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:56:06 compute-1 nova_compute[225855]: 2026-01-20 14:56:06.471 225859 DEBUG nova.network.neutron [None req-f2371872-7b78-4e90-8e3f-234488572008 f6f144f1d330427e82e84c891e9a8a89 4be5b75b5dcb4eeea9759f7c4a779ffa - - default default] [instance: f6f09d34-bc44-451f-98e2-1b0701aeab3a] Updating instance_info_cache with network_info: [{"id": "73ed9acf-a178-4d9c-98a3-25f22489d41d", "address": "fa:16:3e:a5:7a:7c", "network": {"id": "d3dc1854-2a38-414a-a424-2ff753e5a7da", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-645777946-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4be5b75b5dcb4eeea9759f7c4a779ffa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap73ed9acf-a1", "ovs_interfaceid": "73ed9acf-a178-4d9c-98a3-25f22489d41d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:56:06 compute-1 nova_compute[225855]: 2026-01-20 14:56:06.494 225859 DEBUG oslo_concurrency.lockutils [None req-f2371872-7b78-4e90-8e3f-234488572008 f6f144f1d330427e82e84c891e9a8a89 4be5b75b5dcb4eeea9759f7c4a779ffa - - default default] Releasing lock "refresh_cache-f6f09d34-bc44-451f-98e2-1b0701aeab3a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:56:06 compute-1 nova_compute[225855]: 2026-01-20 14:56:06.494 225859 DEBUG nova.compute.manager [None req-f2371872-7b78-4e90-8e3f-234488572008 f6f144f1d330427e82e84c891e9a8a89 4be5b75b5dcb4eeea9759f7c4a779ffa - - default default] [instance: f6f09d34-bc44-451f-98e2-1b0701aeab3a] Instance network_info: |[{"id": "73ed9acf-a178-4d9c-98a3-25f22489d41d", "address": "fa:16:3e:a5:7a:7c", "network": {"id": "d3dc1854-2a38-414a-a424-2ff753e5a7da", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-645777946-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4be5b75b5dcb4eeea9759f7c4a779ffa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap73ed9acf-a1", "ovs_interfaceid": "73ed9acf-a178-4d9c-98a3-25f22489d41d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 20 14:56:06 compute-1 nova_compute[225855]: 2026-01-20 14:56:06.494 225859 DEBUG oslo_concurrency.lockutils [req-3ae2be60-b4e7-40f3-918e-bd3a0c0c2349 req-a467a7d9-f09e-4b60-8a85-939b9f66012e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-f6f09d34-bc44-451f-98e2-1b0701aeab3a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:56:06 compute-1 nova_compute[225855]: 2026-01-20 14:56:06.494 225859 DEBUG nova.network.neutron [req-3ae2be60-b4e7-40f3-918e-bd3a0c0c2349 req-a467a7d9-f09e-4b60-8a85-939b9f66012e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f6f09d34-bc44-451f-98e2-1b0701aeab3a] Refreshing network info cache for port 73ed9acf-a178-4d9c-98a3-25f22489d41d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 14:56:06 compute-1 nova_compute[225855]: 2026-01-20 14:56:06.496 225859 DEBUG nova.virt.libvirt.driver [None req-f2371872-7b78-4e90-8e3f-234488572008 f6f144f1d330427e82e84c891e9a8a89 4be5b75b5dcb4eeea9759f7c4a779ffa - - default default] [instance: f6f09d34-bc44-451f-98e2-1b0701aeab3a] Start _get_guest_xml network_info=[{"id": "73ed9acf-a178-4d9c-98a3-25f22489d41d", "address": "fa:16:3e:a5:7a:7c", "network": {"id": "d3dc1854-2a38-414a-a424-2ff753e5a7da", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-645777946-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4be5b75b5dcb4eeea9759f7c4a779ffa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap73ed9acf-a1", "ovs_interfaceid": "73ed9acf-a178-4d9c-98a3-25f22489d41d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'device_type': 'disk', 'encryption_options': None, 'size': 0, 'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'guest_format': None, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'image_id': 'a32b3e07-16d8-46fd-9a7b-c242c432fcf9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 20 14:56:06 compute-1 nova_compute[225855]: 2026-01-20 14:56:06.502 225859 WARNING nova.virt.libvirt.driver [None req-f2371872-7b78-4e90-8e3f-234488572008 f6f144f1d330427e82e84c891e9a8a89 4be5b75b5dcb4eeea9759f7c4a779ffa - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 14:56:06 compute-1 nova_compute[225855]: 2026-01-20 14:56:06.505 225859 DEBUG nova.virt.libvirt.host [None req-f2371872-7b78-4e90-8e3f-234488572008 f6f144f1d330427e82e84c891e9a8a89 4be5b75b5dcb4eeea9759f7c4a779ffa - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 20 14:56:06 compute-1 nova_compute[225855]: 2026-01-20 14:56:06.506 225859 DEBUG nova.virt.libvirt.host [None req-f2371872-7b78-4e90-8e3f-234488572008 f6f144f1d330427e82e84c891e9a8a89 4be5b75b5dcb4eeea9759f7c4a779ffa - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 20 14:56:06 compute-1 nova_compute[225855]: 2026-01-20 14:56:06.511 225859 DEBUG nova.virt.libvirt.host [None req-f2371872-7b78-4e90-8e3f-234488572008 f6f144f1d330427e82e84c891e9a8a89 4be5b75b5dcb4eeea9759f7c4a779ffa - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 20 14:56:06 compute-1 nova_compute[225855]: 2026-01-20 14:56:06.511 225859 DEBUG nova.virt.libvirt.host [None req-f2371872-7b78-4e90-8e3f-234488572008 f6f144f1d330427e82e84c891e9a8a89 4be5b75b5dcb4eeea9759f7c4a779ffa - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 20 14:56:06 compute-1 nova_compute[225855]: 2026-01-20 14:56:06.512 225859 DEBUG nova.virt.libvirt.driver [None req-f2371872-7b78-4e90-8e3f-234488572008 f6f144f1d330427e82e84c891e9a8a89 4be5b75b5dcb4eeea9759f7c4a779ffa - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 20 14:56:06 compute-1 nova_compute[225855]: 2026-01-20 14:56:06.512 225859 DEBUG nova.virt.hardware [None req-f2371872-7b78-4e90-8e3f-234488572008 f6f144f1d330427e82e84c891e9a8a89 4be5b75b5dcb4eeea9759f7c4a779ffa - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-20T14:21:55Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='522deaab-a741-4dbb-932d-d8b13a211c33',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 20 14:56:06 compute-1 nova_compute[225855]: 2026-01-20 14:56:06.513 225859 DEBUG nova.virt.hardware [None req-f2371872-7b78-4e90-8e3f-234488572008 f6f144f1d330427e82e84c891e9a8a89 4be5b75b5dcb4eeea9759f7c4a779ffa - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 20 14:56:06 compute-1 nova_compute[225855]: 2026-01-20 14:56:06.513 225859 DEBUG nova.virt.hardware [None req-f2371872-7b78-4e90-8e3f-234488572008 f6f144f1d330427e82e84c891e9a8a89 4be5b75b5dcb4eeea9759f7c4a779ffa - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 20 14:56:06 compute-1 nova_compute[225855]: 2026-01-20 14:56:06.513 225859 DEBUG nova.virt.hardware [None req-f2371872-7b78-4e90-8e3f-234488572008 f6f144f1d330427e82e84c891e9a8a89 4be5b75b5dcb4eeea9759f7c4a779ffa - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 20 14:56:06 compute-1 nova_compute[225855]: 2026-01-20 14:56:06.513 225859 DEBUG nova.virt.hardware [None req-f2371872-7b78-4e90-8e3f-234488572008 f6f144f1d330427e82e84c891e9a8a89 4be5b75b5dcb4eeea9759f7c4a779ffa - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 20 14:56:06 compute-1 nova_compute[225855]: 2026-01-20 14:56:06.514 225859 DEBUG nova.virt.hardware [None req-f2371872-7b78-4e90-8e3f-234488572008 f6f144f1d330427e82e84c891e9a8a89 4be5b75b5dcb4eeea9759f7c4a779ffa - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 20 14:56:06 compute-1 nova_compute[225855]: 2026-01-20 14:56:06.514 225859 DEBUG nova.virt.hardware [None req-f2371872-7b78-4e90-8e3f-234488572008 f6f144f1d330427e82e84c891e9a8a89 4be5b75b5dcb4eeea9759f7c4a779ffa - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 20 14:56:06 compute-1 nova_compute[225855]: 2026-01-20 14:56:06.514 225859 DEBUG nova.virt.hardware [None req-f2371872-7b78-4e90-8e3f-234488572008 f6f144f1d330427e82e84c891e9a8a89 4be5b75b5dcb4eeea9759f7c4a779ffa - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 20 14:56:06 compute-1 nova_compute[225855]: 2026-01-20 14:56:06.514 225859 DEBUG nova.virt.hardware [None req-f2371872-7b78-4e90-8e3f-234488572008 f6f144f1d330427e82e84c891e9a8a89 4be5b75b5dcb4eeea9759f7c4a779ffa - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 20 14:56:06 compute-1 nova_compute[225855]: 2026-01-20 14:56:06.514 225859 DEBUG nova.virt.hardware [None req-f2371872-7b78-4e90-8e3f-234488572008 f6f144f1d330427e82e84c891e9a8a89 4be5b75b5dcb4eeea9759f7c4a779ffa - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 20 14:56:06 compute-1 nova_compute[225855]: 2026-01-20 14:56:06.515 225859 DEBUG nova.virt.hardware [None req-f2371872-7b78-4e90-8e3f-234488572008 f6f144f1d330427e82e84c891e9a8a89 4be5b75b5dcb4eeea9759f7c4a779ffa - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 20 14:56:06 compute-1 nova_compute[225855]: 2026-01-20 14:56:06.518 225859 DEBUG oslo_concurrency.processutils [None req-f2371872-7b78-4e90-8e3f-234488572008 f6f144f1d330427e82e84c891e9a8a89 4be5b75b5dcb4eeea9759f7c4a779ffa - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:56:06 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:56:06 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3100921922' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:56:06 compute-1 nova_compute[225855]: 2026-01-20 14:56:06.971 225859 DEBUG oslo_concurrency.processutils [None req-f2371872-7b78-4e90-8e3f-234488572008 f6f144f1d330427e82e84c891e9a8a89 4be5b75b5dcb4eeea9759f7c4a779ffa - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:56:07 compute-1 nova_compute[225855]: 2026-01-20 14:56:07.004 225859 DEBUG nova.storage.rbd_utils [None req-f2371872-7b78-4e90-8e3f-234488572008 f6f144f1d330427e82e84c891e9a8a89 4be5b75b5dcb4eeea9759f7c4a779ffa - - default default] rbd image f6f09d34-bc44-451f-98e2-1b0701aeab3a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:56:07 compute-1 nova_compute[225855]: 2026-01-20 14:56:07.008 225859 DEBUG oslo_concurrency.processutils [None req-f2371872-7b78-4e90-8e3f-234488572008 f6f144f1d330427e82e84c891e9a8a89 4be5b75b5dcb4eeea9759f7c4a779ffa - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:56:07 compute-1 ceph-mon[81775]: pgmap v2108: 321 pgs: 321 active+clean; 363 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 3.9 MiB/s wr, 290 op/s
Jan 20 14:56:07 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/1903058217' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:56:07 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/4107789460' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:56:07 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/3100921922' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:56:07 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/3304163422' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:56:07 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:56:07 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:56:07 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:56:07.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:56:07 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:56:07 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2754763687' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:56:07 compute-1 nova_compute[225855]: 2026-01-20 14:56:07.440 225859 DEBUG oslo_concurrency.processutils [None req-f2371872-7b78-4e90-8e3f-234488572008 f6f144f1d330427e82e84c891e9a8a89 4be5b75b5dcb4eeea9759f7c4a779ffa - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:56:07 compute-1 nova_compute[225855]: 2026-01-20 14:56:07.442 225859 DEBUG nova.virt.libvirt.vif [None req-f2371872-7b78-4e90-8e3f-234488572008 f6f144f1d330427e82e84c891e9a8a89 4be5b75b5dcb4eeea9759f7c4a779ffa - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T14:55:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerMetadataTestJSON-server-505687879',display_name='tempest-ServerMetadataTestJSON-server-505687879',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-servermetadatatestjson-server-505687879',id=129,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4be5b75b5dcb4eeea9759f7c4a779ffa',ramdisk_id='',reservation_id='r-cbmgnji8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerMetadataTestJSON-599451381',owner_user_name='tempest-ServerMetadataTestJSON-599451381-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T14:56:01Z,user_data=None,user_id='f6f144f1d330427e82e84c891e9a8a89',uuid=f6f09d34-bc44-451f-98e2-1b0701aeab3a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "73ed9acf-a178-4d9c-98a3-25f22489d41d", "address": "fa:16:3e:a5:7a:7c", "network": {"id": "d3dc1854-2a38-414a-a424-2ff753e5a7da", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-645777946-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4be5b75b5dcb4eeea9759f7c4a779ffa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap73ed9acf-a1", "ovs_interfaceid": "73ed9acf-a178-4d9c-98a3-25f22489d41d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 20 14:56:07 compute-1 nova_compute[225855]: 2026-01-20 14:56:07.443 225859 DEBUG nova.network.os_vif_util [None req-f2371872-7b78-4e90-8e3f-234488572008 f6f144f1d330427e82e84c891e9a8a89 4be5b75b5dcb4eeea9759f7c4a779ffa - - default default] Converting VIF {"id": "73ed9acf-a178-4d9c-98a3-25f22489d41d", "address": "fa:16:3e:a5:7a:7c", "network": {"id": "d3dc1854-2a38-414a-a424-2ff753e5a7da", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-645777946-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4be5b75b5dcb4eeea9759f7c4a779ffa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap73ed9acf-a1", "ovs_interfaceid": "73ed9acf-a178-4d9c-98a3-25f22489d41d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:56:07 compute-1 nova_compute[225855]: 2026-01-20 14:56:07.443 225859 DEBUG nova.network.os_vif_util [None req-f2371872-7b78-4e90-8e3f-234488572008 f6f144f1d330427e82e84c891e9a8a89 4be5b75b5dcb4eeea9759f7c4a779ffa - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a5:7a:7c,bridge_name='br-int',has_traffic_filtering=True,id=73ed9acf-a178-4d9c-98a3-25f22489d41d,network=Network(d3dc1854-2a38-414a-a424-2ff753e5a7da),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap73ed9acf-a1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:56:07 compute-1 nova_compute[225855]: 2026-01-20 14:56:07.444 225859 DEBUG nova.objects.instance [None req-f2371872-7b78-4e90-8e3f-234488572008 f6f144f1d330427e82e84c891e9a8a89 4be5b75b5dcb4eeea9759f7c4a779ffa - - default default] Lazy-loading 'pci_devices' on Instance uuid f6f09d34-bc44-451f-98e2-1b0701aeab3a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:56:07 compute-1 nova_compute[225855]: 2026-01-20 14:56:07.463 225859 DEBUG nova.virt.libvirt.driver [None req-f2371872-7b78-4e90-8e3f-234488572008 f6f144f1d330427e82e84c891e9a8a89 4be5b75b5dcb4eeea9759f7c4a779ffa - - default default] [instance: f6f09d34-bc44-451f-98e2-1b0701aeab3a] End _get_guest_xml xml=<domain type="kvm">
Jan 20 14:56:07 compute-1 nova_compute[225855]:   <uuid>f6f09d34-bc44-451f-98e2-1b0701aeab3a</uuid>
Jan 20 14:56:07 compute-1 nova_compute[225855]:   <name>instance-00000081</name>
Jan 20 14:56:07 compute-1 nova_compute[225855]:   <memory>131072</memory>
Jan 20 14:56:07 compute-1 nova_compute[225855]:   <vcpu>1</vcpu>
Jan 20 14:56:07 compute-1 nova_compute[225855]:   <metadata>
Jan 20 14:56:07 compute-1 nova_compute[225855]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 14:56:07 compute-1 nova_compute[225855]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 14:56:07 compute-1 nova_compute[225855]:       <nova:name>tempest-ServerMetadataTestJSON-server-505687879</nova:name>
Jan 20 14:56:07 compute-1 nova_compute[225855]:       <nova:creationTime>2026-01-20 14:56:06</nova:creationTime>
Jan 20 14:56:07 compute-1 nova_compute[225855]:       <nova:flavor name="m1.nano">
Jan 20 14:56:07 compute-1 nova_compute[225855]:         <nova:memory>128</nova:memory>
Jan 20 14:56:07 compute-1 nova_compute[225855]:         <nova:disk>1</nova:disk>
Jan 20 14:56:07 compute-1 nova_compute[225855]:         <nova:swap>0</nova:swap>
Jan 20 14:56:07 compute-1 nova_compute[225855]:         <nova:ephemeral>0</nova:ephemeral>
Jan 20 14:56:07 compute-1 nova_compute[225855]:         <nova:vcpus>1</nova:vcpus>
Jan 20 14:56:07 compute-1 nova_compute[225855]:       </nova:flavor>
Jan 20 14:56:07 compute-1 nova_compute[225855]:       <nova:owner>
Jan 20 14:56:07 compute-1 nova_compute[225855]:         <nova:user uuid="f6f144f1d330427e82e84c891e9a8a89">tempest-ServerMetadataTestJSON-599451381-project-member</nova:user>
Jan 20 14:56:07 compute-1 nova_compute[225855]:         <nova:project uuid="4be5b75b5dcb4eeea9759f7c4a779ffa">tempest-ServerMetadataTestJSON-599451381</nova:project>
Jan 20 14:56:07 compute-1 nova_compute[225855]:       </nova:owner>
Jan 20 14:56:07 compute-1 nova_compute[225855]:       <nova:root type="image" uuid="a32b3e07-16d8-46fd-9a7b-c242c432fcf9"/>
Jan 20 14:56:07 compute-1 nova_compute[225855]:       <nova:ports>
Jan 20 14:56:07 compute-1 nova_compute[225855]:         <nova:port uuid="73ed9acf-a178-4d9c-98a3-25f22489d41d">
Jan 20 14:56:07 compute-1 nova_compute[225855]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Jan 20 14:56:07 compute-1 nova_compute[225855]:         </nova:port>
Jan 20 14:56:07 compute-1 nova_compute[225855]:       </nova:ports>
Jan 20 14:56:07 compute-1 nova_compute[225855]:     </nova:instance>
Jan 20 14:56:07 compute-1 nova_compute[225855]:   </metadata>
Jan 20 14:56:07 compute-1 nova_compute[225855]:   <sysinfo type="smbios">
Jan 20 14:56:07 compute-1 nova_compute[225855]:     <system>
Jan 20 14:56:07 compute-1 nova_compute[225855]:       <entry name="manufacturer">RDO</entry>
Jan 20 14:56:07 compute-1 nova_compute[225855]:       <entry name="product">OpenStack Compute</entry>
Jan 20 14:56:07 compute-1 nova_compute[225855]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 14:56:07 compute-1 nova_compute[225855]:       <entry name="serial">f6f09d34-bc44-451f-98e2-1b0701aeab3a</entry>
Jan 20 14:56:07 compute-1 nova_compute[225855]:       <entry name="uuid">f6f09d34-bc44-451f-98e2-1b0701aeab3a</entry>
Jan 20 14:56:07 compute-1 nova_compute[225855]:       <entry name="family">Virtual Machine</entry>
Jan 20 14:56:07 compute-1 nova_compute[225855]:     </system>
Jan 20 14:56:07 compute-1 nova_compute[225855]:   </sysinfo>
Jan 20 14:56:07 compute-1 nova_compute[225855]:   <os>
Jan 20 14:56:07 compute-1 nova_compute[225855]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 20 14:56:07 compute-1 nova_compute[225855]:     <boot dev="hd"/>
Jan 20 14:56:07 compute-1 nova_compute[225855]:     <smbios mode="sysinfo"/>
Jan 20 14:56:07 compute-1 nova_compute[225855]:   </os>
Jan 20 14:56:07 compute-1 nova_compute[225855]:   <features>
Jan 20 14:56:07 compute-1 nova_compute[225855]:     <acpi/>
Jan 20 14:56:07 compute-1 nova_compute[225855]:     <apic/>
Jan 20 14:56:07 compute-1 nova_compute[225855]:     <vmcoreinfo/>
Jan 20 14:56:07 compute-1 nova_compute[225855]:   </features>
Jan 20 14:56:07 compute-1 nova_compute[225855]:   <clock offset="utc">
Jan 20 14:56:07 compute-1 nova_compute[225855]:     <timer name="pit" tickpolicy="delay"/>
Jan 20 14:56:07 compute-1 nova_compute[225855]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 20 14:56:07 compute-1 nova_compute[225855]:     <timer name="hpet" present="no"/>
Jan 20 14:56:07 compute-1 nova_compute[225855]:   </clock>
Jan 20 14:56:07 compute-1 nova_compute[225855]:   <cpu mode="custom" match="exact">
Jan 20 14:56:07 compute-1 nova_compute[225855]:     <model>Nehalem</model>
Jan 20 14:56:07 compute-1 nova_compute[225855]:     <topology sockets="1" cores="1" threads="1"/>
Jan 20 14:56:07 compute-1 nova_compute[225855]:   </cpu>
Jan 20 14:56:07 compute-1 nova_compute[225855]:   <devices>
Jan 20 14:56:07 compute-1 nova_compute[225855]:     <disk type="network" device="disk">
Jan 20 14:56:07 compute-1 nova_compute[225855]:       <driver type="raw" cache="none"/>
Jan 20 14:56:07 compute-1 nova_compute[225855]:       <source protocol="rbd" name="vms/f6f09d34-bc44-451f-98e2-1b0701aeab3a_disk">
Jan 20 14:56:07 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:56:07 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:56:07 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:56:07 compute-1 nova_compute[225855]:       </source>
Jan 20 14:56:07 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 14:56:07 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:56:07 compute-1 nova_compute[225855]:       </auth>
Jan 20 14:56:07 compute-1 nova_compute[225855]:       <target dev="vda" bus="virtio"/>
Jan 20 14:56:07 compute-1 nova_compute[225855]:     </disk>
Jan 20 14:56:07 compute-1 nova_compute[225855]:     <disk type="network" device="cdrom">
Jan 20 14:56:07 compute-1 nova_compute[225855]:       <driver type="raw" cache="none"/>
Jan 20 14:56:07 compute-1 nova_compute[225855]:       <source protocol="rbd" name="vms/f6f09d34-bc44-451f-98e2-1b0701aeab3a_disk.config">
Jan 20 14:56:07 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:56:07 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:56:07 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:56:07 compute-1 nova_compute[225855]:       </source>
Jan 20 14:56:07 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 14:56:07 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:56:07 compute-1 nova_compute[225855]:       </auth>
Jan 20 14:56:07 compute-1 nova_compute[225855]:       <target dev="sda" bus="sata"/>
Jan 20 14:56:07 compute-1 nova_compute[225855]:     </disk>
Jan 20 14:56:07 compute-1 nova_compute[225855]:     <interface type="ethernet">
Jan 20 14:56:07 compute-1 nova_compute[225855]:       <mac address="fa:16:3e:a5:7a:7c"/>
Jan 20 14:56:07 compute-1 nova_compute[225855]:       <model type="virtio"/>
Jan 20 14:56:07 compute-1 nova_compute[225855]:       <driver name="vhost" rx_queue_size="512"/>
Jan 20 14:56:07 compute-1 nova_compute[225855]:       <mtu size="1442"/>
Jan 20 14:56:07 compute-1 nova_compute[225855]:       <target dev="tap73ed9acf-a1"/>
Jan 20 14:56:07 compute-1 nova_compute[225855]:     </interface>
Jan 20 14:56:07 compute-1 nova_compute[225855]:     <serial type="pty">
Jan 20 14:56:07 compute-1 nova_compute[225855]:       <log file="/var/lib/nova/instances/f6f09d34-bc44-451f-98e2-1b0701aeab3a/console.log" append="off"/>
Jan 20 14:56:07 compute-1 nova_compute[225855]:     </serial>
Jan 20 14:56:07 compute-1 nova_compute[225855]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 20 14:56:07 compute-1 nova_compute[225855]:     <video>
Jan 20 14:56:07 compute-1 nova_compute[225855]:       <model type="virtio"/>
Jan 20 14:56:07 compute-1 nova_compute[225855]:     </video>
Jan 20 14:56:07 compute-1 nova_compute[225855]:     <input type="tablet" bus="usb"/>
Jan 20 14:56:07 compute-1 nova_compute[225855]:     <rng model="virtio">
Jan 20 14:56:07 compute-1 nova_compute[225855]:       <backend model="random">/dev/urandom</backend>
Jan 20 14:56:07 compute-1 nova_compute[225855]:     </rng>
Jan 20 14:56:07 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root"/>
Jan 20 14:56:07 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:56:07 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:56:07 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:56:07 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:56:07 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:56:07 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:56:07 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:56:07 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:56:07 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:56:07 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:56:07 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:56:07 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:56:07 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:56:07 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:56:07 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:56:07 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:56:07 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:56:07 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:56:07 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:56:07 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:56:07 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:56:07 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:56:07 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:56:07 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:56:07 compute-1 nova_compute[225855]:     <controller type="usb" index="0"/>
Jan 20 14:56:07 compute-1 nova_compute[225855]:     <memballoon model="virtio">
Jan 20 14:56:07 compute-1 nova_compute[225855]:       <stats period="10"/>
Jan 20 14:56:07 compute-1 nova_compute[225855]:     </memballoon>
Jan 20 14:56:07 compute-1 nova_compute[225855]:   </devices>
Jan 20 14:56:07 compute-1 nova_compute[225855]: </domain>
Jan 20 14:56:07 compute-1 nova_compute[225855]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 20 14:56:07 compute-1 nova_compute[225855]: 2026-01-20 14:56:07.464 225859 DEBUG nova.compute.manager [None req-f2371872-7b78-4e90-8e3f-234488572008 f6f144f1d330427e82e84c891e9a8a89 4be5b75b5dcb4eeea9759f7c4a779ffa - - default default] [instance: f6f09d34-bc44-451f-98e2-1b0701aeab3a] Preparing to wait for external event network-vif-plugged-73ed9acf-a178-4d9c-98a3-25f22489d41d prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 20 14:56:07 compute-1 nova_compute[225855]: 2026-01-20 14:56:07.465 225859 DEBUG oslo_concurrency.lockutils [None req-f2371872-7b78-4e90-8e3f-234488572008 f6f144f1d330427e82e84c891e9a8a89 4be5b75b5dcb4eeea9759f7c4a779ffa - - default default] Acquiring lock "f6f09d34-bc44-451f-98e2-1b0701aeab3a-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:56:07 compute-1 nova_compute[225855]: 2026-01-20 14:56:07.465 225859 DEBUG oslo_concurrency.lockutils [None req-f2371872-7b78-4e90-8e3f-234488572008 f6f144f1d330427e82e84c891e9a8a89 4be5b75b5dcb4eeea9759f7c4a779ffa - - default default] Lock "f6f09d34-bc44-451f-98e2-1b0701aeab3a-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:56:07 compute-1 nova_compute[225855]: 2026-01-20 14:56:07.465 225859 DEBUG oslo_concurrency.lockutils [None req-f2371872-7b78-4e90-8e3f-234488572008 f6f144f1d330427e82e84c891e9a8a89 4be5b75b5dcb4eeea9759f7c4a779ffa - - default default] Lock "f6f09d34-bc44-451f-98e2-1b0701aeab3a-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:56:07 compute-1 nova_compute[225855]: 2026-01-20 14:56:07.466 225859 DEBUG nova.virt.libvirt.vif [None req-f2371872-7b78-4e90-8e3f-234488572008 f6f144f1d330427e82e84c891e9a8a89 4be5b75b5dcb4eeea9759f7c4a779ffa - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T14:55:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerMetadataTestJSON-server-505687879',display_name='tempest-ServerMetadataTestJSON-server-505687879',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-servermetadatatestjson-server-505687879',id=129,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4be5b75b5dcb4eeea9759f7c4a779ffa',ramdisk_id='',reservation_id='r-cbmgnji8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerMetadataTestJSON-599451381',owner_user_name='tempest-ServerMetadataTestJSON-599451381-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T14:56:01Z,user_data=None,user_id='f6f144f1d330427e82e84c891e9a8a89',uuid=f6f09d34-bc44-451f-98e2-1b0701aeab3a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "73ed9acf-a178-4d9c-98a3-25f22489d41d", "address": "fa:16:3e:a5:7a:7c", "network": {"id": "d3dc1854-2a38-414a-a424-2ff753e5a7da", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-645777946-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4be5b75b5dcb4eeea9759f7c4a779ffa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap73ed9acf-a1", "ovs_interfaceid": "73ed9acf-a178-4d9c-98a3-25f22489d41d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 20 14:56:07 compute-1 nova_compute[225855]: 2026-01-20 14:56:07.466 225859 DEBUG nova.network.os_vif_util [None req-f2371872-7b78-4e90-8e3f-234488572008 f6f144f1d330427e82e84c891e9a8a89 4be5b75b5dcb4eeea9759f7c4a779ffa - - default default] Converting VIF {"id": "73ed9acf-a178-4d9c-98a3-25f22489d41d", "address": "fa:16:3e:a5:7a:7c", "network": {"id": "d3dc1854-2a38-414a-a424-2ff753e5a7da", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-645777946-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4be5b75b5dcb4eeea9759f7c4a779ffa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap73ed9acf-a1", "ovs_interfaceid": "73ed9acf-a178-4d9c-98a3-25f22489d41d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:56:07 compute-1 nova_compute[225855]: 2026-01-20 14:56:07.467 225859 DEBUG nova.network.os_vif_util [None req-f2371872-7b78-4e90-8e3f-234488572008 f6f144f1d330427e82e84c891e9a8a89 4be5b75b5dcb4eeea9759f7c4a779ffa - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a5:7a:7c,bridge_name='br-int',has_traffic_filtering=True,id=73ed9acf-a178-4d9c-98a3-25f22489d41d,network=Network(d3dc1854-2a38-414a-a424-2ff753e5a7da),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap73ed9acf-a1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:56:07 compute-1 nova_compute[225855]: 2026-01-20 14:56:07.467 225859 DEBUG os_vif [None req-f2371872-7b78-4e90-8e3f-234488572008 f6f144f1d330427e82e84c891e9a8a89 4be5b75b5dcb4eeea9759f7c4a779ffa - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a5:7a:7c,bridge_name='br-int',has_traffic_filtering=True,id=73ed9acf-a178-4d9c-98a3-25f22489d41d,network=Network(d3dc1854-2a38-414a-a424-2ff753e5a7da),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap73ed9acf-a1') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 20 14:56:07 compute-1 nova_compute[225855]: 2026-01-20 14:56:07.468 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:56:07 compute-1 nova_compute[225855]: 2026-01-20 14:56:07.468 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:56:07 compute-1 nova_compute[225855]: 2026-01-20 14:56:07.469 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:56:07 compute-1 nova_compute[225855]: 2026-01-20 14:56:07.471 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:56:07 compute-1 nova_compute[225855]: 2026-01-20 14:56:07.471 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap73ed9acf-a1, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:56:07 compute-1 nova_compute[225855]: 2026-01-20 14:56:07.472 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap73ed9acf-a1, col_values=(('external_ids', {'iface-id': '73ed9acf-a178-4d9c-98a3-25f22489d41d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a5:7a:7c', 'vm-uuid': 'f6f09d34-bc44-451f-98e2-1b0701aeab3a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:56:07 compute-1 nova_compute[225855]: 2026-01-20 14:56:07.473 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:56:07 compute-1 NetworkManager[49104]: <info>  [1768920967.4747] manager: (tap73ed9acf-a1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/221)
Jan 20 14:56:07 compute-1 nova_compute[225855]: 2026-01-20 14:56:07.476 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 14:56:07 compute-1 nova_compute[225855]: 2026-01-20 14:56:07.480 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:56:07 compute-1 nova_compute[225855]: 2026-01-20 14:56:07.482 225859 INFO os_vif [None req-f2371872-7b78-4e90-8e3f-234488572008 f6f144f1d330427e82e84c891e9a8a89 4be5b75b5dcb4eeea9759f7c4a779ffa - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a5:7a:7c,bridge_name='br-int',has_traffic_filtering=True,id=73ed9acf-a178-4d9c-98a3-25f22489d41d,network=Network(d3dc1854-2a38-414a-a424-2ff753e5a7da),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap73ed9acf-a1')
Jan 20 14:56:07 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:56:07 compute-1 nova_compute[225855]: 2026-01-20 14:56:07.588 225859 DEBUG nova.virt.libvirt.driver [None req-f2371872-7b78-4e90-8e3f-234488572008 f6f144f1d330427e82e84c891e9a8a89 4be5b75b5dcb4eeea9759f7c4a779ffa - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:56:07 compute-1 nova_compute[225855]: 2026-01-20 14:56:07.589 225859 DEBUG nova.virt.libvirt.driver [None req-f2371872-7b78-4e90-8e3f-234488572008 f6f144f1d330427e82e84c891e9a8a89 4be5b75b5dcb4eeea9759f7c4a779ffa - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:56:07 compute-1 nova_compute[225855]: 2026-01-20 14:56:07.589 225859 DEBUG nova.virt.libvirt.driver [None req-f2371872-7b78-4e90-8e3f-234488572008 f6f144f1d330427e82e84c891e9a8a89 4be5b75b5dcb4eeea9759f7c4a779ffa - - default default] No VIF found with MAC fa:16:3e:a5:7a:7c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 20 14:56:07 compute-1 nova_compute[225855]: 2026-01-20 14:56:07.589 225859 INFO nova.virt.libvirt.driver [None req-f2371872-7b78-4e90-8e3f-234488572008 f6f144f1d330427e82e84c891e9a8a89 4be5b75b5dcb4eeea9759f7c4a779ffa - - default default] [instance: f6f09d34-bc44-451f-98e2-1b0701aeab3a] Using config drive
Jan 20 14:56:07 compute-1 nova_compute[225855]: 2026-01-20 14:56:07.617 225859 DEBUG nova.storage.rbd_utils [None req-f2371872-7b78-4e90-8e3f-234488572008 f6f144f1d330427e82e84c891e9a8a89 4be5b75b5dcb4eeea9759f7c4a779ffa - - default default] rbd image f6f09d34-bc44-451f-98e2-1b0701aeab3a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:56:08 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:56:08 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:56:08 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:56:08.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:56:08 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/2754763687' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:56:08 compute-1 nova_compute[225855]: 2026-01-20 14:56:08.451 225859 INFO nova.virt.libvirt.driver [None req-f2371872-7b78-4e90-8e3f-234488572008 f6f144f1d330427e82e84c891e9a8a89 4be5b75b5dcb4eeea9759f7c4a779ffa - - default default] [instance: f6f09d34-bc44-451f-98e2-1b0701aeab3a] Creating config drive at /var/lib/nova/instances/f6f09d34-bc44-451f-98e2-1b0701aeab3a/disk.config
Jan 20 14:56:08 compute-1 nova_compute[225855]: 2026-01-20 14:56:08.456 225859 DEBUG oslo_concurrency.processutils [None req-f2371872-7b78-4e90-8e3f-234488572008 f6f144f1d330427e82e84c891e9a8a89 4be5b75b5dcb4eeea9759f7c4a779ffa - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f6f09d34-bc44-451f-98e2-1b0701aeab3a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_kdmdtvt execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:56:08 compute-1 nova_compute[225855]: 2026-01-20 14:56:08.586 225859 DEBUG oslo_concurrency.processutils [None req-f2371872-7b78-4e90-8e3f-234488572008 f6f144f1d330427e82e84c891e9a8a89 4be5b75b5dcb4eeea9759f7c4a779ffa - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f6f09d34-bc44-451f-98e2-1b0701aeab3a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_kdmdtvt" returned: 0 in 0.131s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:56:08 compute-1 nova_compute[225855]: 2026-01-20 14:56:08.613 225859 DEBUG nova.storage.rbd_utils [None req-f2371872-7b78-4e90-8e3f-234488572008 f6f144f1d330427e82e84c891e9a8a89 4be5b75b5dcb4eeea9759f7c4a779ffa - - default default] rbd image f6f09d34-bc44-451f-98e2-1b0701aeab3a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:56:08 compute-1 nova_compute[225855]: 2026-01-20 14:56:08.617 225859 DEBUG oslo_concurrency.processutils [None req-f2371872-7b78-4e90-8e3f-234488572008 f6f144f1d330427e82e84c891e9a8a89 4be5b75b5dcb4eeea9759f7c4a779ffa - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/f6f09d34-bc44-451f-98e2-1b0701aeab3a/disk.config f6f09d34-bc44-451f-98e2-1b0701aeab3a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:56:08 compute-1 nova_compute[225855]: 2026-01-20 14:56:08.644 225859 DEBUG nova.network.neutron [req-3ae2be60-b4e7-40f3-918e-bd3a0c0c2349 req-a467a7d9-f09e-4b60-8a85-939b9f66012e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f6f09d34-bc44-451f-98e2-1b0701aeab3a] Updated VIF entry in instance network info cache for port 73ed9acf-a178-4d9c-98a3-25f22489d41d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 14:56:08 compute-1 nova_compute[225855]: 2026-01-20 14:56:08.645 225859 DEBUG nova.network.neutron [req-3ae2be60-b4e7-40f3-918e-bd3a0c0c2349 req-a467a7d9-f09e-4b60-8a85-939b9f66012e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f6f09d34-bc44-451f-98e2-1b0701aeab3a] Updating instance_info_cache with network_info: [{"id": "73ed9acf-a178-4d9c-98a3-25f22489d41d", "address": "fa:16:3e:a5:7a:7c", "network": {"id": "d3dc1854-2a38-414a-a424-2ff753e5a7da", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-645777946-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4be5b75b5dcb4eeea9759f7c4a779ffa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap73ed9acf-a1", "ovs_interfaceid": "73ed9acf-a178-4d9c-98a3-25f22489d41d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:56:08 compute-1 nova_compute[225855]: 2026-01-20 14:56:08.670 225859 DEBUG oslo_concurrency.lockutils [req-3ae2be60-b4e7-40f3-918e-bd3a0c0c2349 req-a467a7d9-f09e-4b60-8a85-939b9f66012e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-f6f09d34-bc44-451f-98e2-1b0701aeab3a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:56:08 compute-1 nova_compute[225855]: 2026-01-20 14:56:08.770 225859 DEBUG oslo_concurrency.processutils [None req-f2371872-7b78-4e90-8e3f-234488572008 f6f144f1d330427e82e84c891e9a8a89 4be5b75b5dcb4eeea9759f7c4a779ffa - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/f6f09d34-bc44-451f-98e2-1b0701aeab3a/disk.config f6f09d34-bc44-451f-98e2-1b0701aeab3a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.154s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:56:08 compute-1 nova_compute[225855]: 2026-01-20 14:56:08.771 225859 INFO nova.virt.libvirt.driver [None req-f2371872-7b78-4e90-8e3f-234488572008 f6f144f1d330427e82e84c891e9a8a89 4be5b75b5dcb4eeea9759f7c4a779ffa - - default default] [instance: f6f09d34-bc44-451f-98e2-1b0701aeab3a] Deleting local config drive /var/lib/nova/instances/f6f09d34-bc44-451f-98e2-1b0701aeab3a/disk.config because it was imported into RBD.
Jan 20 14:56:08 compute-1 kernel: tap73ed9acf-a1: entered promiscuous mode
Jan 20 14:56:08 compute-1 NetworkManager[49104]: <info>  [1768920968.8154] manager: (tap73ed9acf-a1): new Tun device (/org/freedesktop/NetworkManager/Devices/222)
Jan 20 14:56:08 compute-1 nova_compute[225855]: 2026-01-20 14:56:08.816 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:56:08 compute-1 ovn_controller[130490]: 2026-01-20T14:56:08Z|00514|binding|INFO|Claiming lport 73ed9acf-a178-4d9c-98a3-25f22489d41d for this chassis.
Jan 20 14:56:08 compute-1 ovn_controller[130490]: 2026-01-20T14:56:08Z|00515|binding|INFO|73ed9acf-a178-4d9c-98a3-25f22489d41d: Claiming fa:16:3e:a5:7a:7c 10.100.0.7
Jan 20 14:56:08 compute-1 ovn_controller[130490]: 2026-01-20T14:56:08Z|00516|binding|INFO|Setting lport 73ed9acf-a178-4d9c-98a3-25f22489d41d ovn-installed in OVS
Jan 20 14:56:08 compute-1 nova_compute[225855]: 2026-01-20 14:56:08.835 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:56:08 compute-1 nova_compute[225855]: 2026-01-20 14:56:08.838 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:56:08 compute-1 systemd-udevd[276752]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 14:56:08 compute-1 NetworkManager[49104]: <info>  [1768920968.8652] device (tap73ed9acf-a1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 20 14:56:08 compute-1 NetworkManager[49104]: <info>  [1768920968.8657] device (tap73ed9acf-a1): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 20 14:56:08 compute-1 systemd-machined[194361]: New machine qemu-61-instance-00000081.
Jan 20 14:56:08 compute-1 ovn_controller[130490]: 2026-01-20T14:56:08Z|00517|binding|INFO|Setting lport 73ed9acf-a178-4d9c-98a3-25f22489d41d up in Southbound
Jan 20 14:56:08 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:56:08.894 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a5:7a:7c 10.100.0.7'], port_security=['fa:16:3e:a5:7a:7c 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'f6f09d34-bc44-451f-98e2-1b0701aeab3a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d3dc1854-2a38-414a-a424-2ff753e5a7da', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4be5b75b5dcb4eeea9759f7c4a779ffa', 'neutron:revision_number': '2', 'neutron:security_group_ids': '6171706d-94c4-4c43-b4b2-ef4cbdfdf97c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bf74ec16-3ea2-4ca6-9e5e-52ec9c203b9d, chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], logical_port=73ed9acf-a178-4d9c-98a3-25f22489d41d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:56:08 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:56:08.895 140354 INFO neutron.agent.ovn.metadata.agent [-] Port 73ed9acf-a178-4d9c-98a3-25f22489d41d in datapath d3dc1854-2a38-414a-a424-2ff753e5a7da bound to our chassis
Jan 20 14:56:08 compute-1 systemd[1]: Started Virtual Machine qemu-61-instance-00000081.
Jan 20 14:56:08 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:56:08.897 140354 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d3dc1854-2a38-414a-a424-2ff753e5a7da
Jan 20 14:56:08 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:56:08.909 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[13065628-197d-4fbf-ab05-58e78679a3d9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:56:08 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:56:08.910 140354 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapd3dc1854-21 in ovnmeta-d3dc1854-2a38-414a-a424-2ff753e5a7da namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 20 14:56:08 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:56:08.912 229707 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapd3dc1854-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 20 14:56:08 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:56:08.912 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[3a92bb83-3a2e-45ed-891c-1bec5fd372df]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:56:08 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:56:08.913 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[afc57135-ac02-47b4-b989-82c75fca11e9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:56:08 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:56:08.931 140466 DEBUG oslo.privsep.daemon [-] privsep: reply[40734ea2-2120-4f3c-96a6-aa37d31af157]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:56:08 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:56:08.960 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[190a11ea-e13b-40d0-9b13-3a9fb70168b9]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:56:08 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:56:08.995 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[6c0b39ba-b548-4e7b-bc10-656a4992469f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:56:09 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:56:08.999 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[ae0ed3c9-43e5-479b-9c21-ded3f33fcde0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:56:09 compute-1 NetworkManager[49104]: <info>  [1768920969.0006] manager: (tapd3dc1854-20): new Veth device (/org/freedesktop/NetworkManager/Devices/223)
Jan 20 14:56:09 compute-1 systemd-udevd[276754]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 14:56:09 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:56:09.026 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[7e5ca48a-ddd7-465d-9c7c-6589625ea912]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:56:09 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:56:09.028 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[e9067d3e-c751-4662-9fd9-c68e30dd61f1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:56:09 compute-1 NetworkManager[49104]: <info>  [1768920969.0519] device (tapd3dc1854-20): carrier: link connected
Jan 20 14:56:09 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:56:09.057 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[7a93951d-6d46-4928-8736-6f5f5375f42a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:56:09 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:56:09.074 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[8212c3b1-6556-4210-9c3e-de04accb3234]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd3dc1854-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b9:66:de'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 146], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 600797, 'reachable_time': 22961, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 276788, 'error': None, 'target': 'ovnmeta-d3dc1854-2a38-414a-a424-2ff753e5a7da', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:56:09 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:56:09.088 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[e9cb04cb-5ab1-4cc1-a324-9c7594b5faa8]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feb9:66de'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 600797, 'tstamp': 600797}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 276789, 'error': None, 'target': 'ovnmeta-d3dc1854-2a38-414a-a424-2ff753e5a7da', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:56:09 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:56:09.105 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[87768b0f-d99f-4979-a5ec-ee629c4625db]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd3dc1854-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b9:66:de'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 146], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 600797, 'reachable_time': 22961, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 276790, 'error': None, 'target': 'ovnmeta-d3dc1854-2a38-414a-a424-2ff753e5a7da', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:56:09 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:56:09.132 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[41da9b4e-c0f5-49c4-951b-3d01791b964f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:56:09 compute-1 ceph-mon[81775]: pgmap v2109: 321 pgs: 321 active+clean; 376 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 4.4 MiB/s wr, 248 op/s
Jan 20 14:56:09 compute-1 nova_compute[225855]: 2026-01-20 14:56:09.186 225859 DEBUG oslo_concurrency.lockutils [None req-52e4d9fa-cf15-4d5d-9e86-eff0b02cb5ba 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] Acquiring lock "3bec73f6-5255-44c0-8a10-a64c7e86c0c2" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:56:09 compute-1 nova_compute[225855]: 2026-01-20 14:56:09.186 225859 DEBUG oslo_concurrency.lockutils [None req-52e4d9fa-cf15-4d5d-9e86-eff0b02cb5ba 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] Lock "3bec73f6-5255-44c0-8a10-a64c7e86c0c2" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:56:09 compute-1 nova_compute[225855]: 2026-01-20 14:56:09.186 225859 DEBUG oslo_concurrency.lockutils [None req-52e4d9fa-cf15-4d5d-9e86-eff0b02cb5ba 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] Acquiring lock "3bec73f6-5255-44c0-8a10-a64c7e86c0c2-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:56:09 compute-1 nova_compute[225855]: 2026-01-20 14:56:09.187 225859 DEBUG oslo_concurrency.lockutils [None req-52e4d9fa-cf15-4d5d-9e86-eff0b02cb5ba 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] Lock "3bec73f6-5255-44c0-8a10-a64c7e86c0c2-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:56:09 compute-1 nova_compute[225855]: 2026-01-20 14:56:09.187 225859 DEBUG oslo_concurrency.lockutils [None req-52e4d9fa-cf15-4d5d-9e86-eff0b02cb5ba 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] Lock "3bec73f6-5255-44c0-8a10-a64c7e86c0c2-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:56:09 compute-1 nova_compute[225855]: 2026-01-20 14:56:09.188 225859 INFO nova.compute.manager [None req-52e4d9fa-cf15-4d5d-9e86-eff0b02cb5ba 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] [instance: 3bec73f6-5255-44c0-8a10-a64c7e86c0c2] Terminating instance
Jan 20 14:56:09 compute-1 nova_compute[225855]: 2026-01-20 14:56:09.190 225859 DEBUG nova.compute.manager [None req-52e4d9fa-cf15-4d5d-9e86-eff0b02cb5ba 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] [instance: 3bec73f6-5255-44c0-8a10-a64c7e86c0c2] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 20 14:56:09 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:56:09.206 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[b3d4a7ae-661f-4558-b3f7-52189b16e34d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:56:09 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:56:09.207 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd3dc1854-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:56:09 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:56:09.207 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:56:09 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:56:09.207 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd3dc1854-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:56:09 compute-1 NetworkManager[49104]: <info>  [1768920969.2106] manager: (tapd3dc1854-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/224)
Jan 20 14:56:09 compute-1 kernel: tapd3dc1854-20: entered promiscuous mode
Jan 20 14:56:09 compute-1 nova_compute[225855]: 2026-01-20 14:56:09.212 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:56:09 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:56:09.212 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd3dc1854-20, col_values=(('external_ids', {'iface-id': '10b0432e-3a35-4d0d-ae91-89caad81d90f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:56:09 compute-1 ovn_controller[130490]: 2026-01-20T14:56:09Z|00518|binding|INFO|Releasing lport 10b0432e-3a35-4d0d-ae91-89caad81d90f from this chassis (sb_readonly=0)
Jan 20 14:56:09 compute-1 nova_compute[225855]: 2026-01-20 14:56:09.230 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:56:09 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:56:09.230 140354 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/d3dc1854-2a38-414a-a424-2ff753e5a7da.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/d3dc1854-2a38-414a-a424-2ff753e5a7da.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 20 14:56:09 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:56:09.231 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[9ed5511a-d115-46ba-8e93-0edd23b3d94a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:56:09 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:56:09.232 140354 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 20 14:56:09 compute-1 ovn_metadata_agent[140349]: global
Jan 20 14:56:09 compute-1 ovn_metadata_agent[140349]:     log         /dev/log local0 debug
Jan 20 14:56:09 compute-1 ovn_metadata_agent[140349]:     log-tag     haproxy-metadata-proxy-d3dc1854-2a38-414a-a424-2ff753e5a7da
Jan 20 14:56:09 compute-1 ovn_metadata_agent[140349]:     user        root
Jan 20 14:56:09 compute-1 ovn_metadata_agent[140349]:     group       root
Jan 20 14:56:09 compute-1 ovn_metadata_agent[140349]:     maxconn     1024
Jan 20 14:56:09 compute-1 ovn_metadata_agent[140349]:     pidfile     /var/lib/neutron/external/pids/d3dc1854-2a38-414a-a424-2ff753e5a7da.pid.haproxy
Jan 20 14:56:09 compute-1 ovn_metadata_agent[140349]:     daemon
Jan 20 14:56:09 compute-1 ovn_metadata_agent[140349]: 
Jan 20 14:56:09 compute-1 ovn_metadata_agent[140349]: defaults
Jan 20 14:56:09 compute-1 ovn_metadata_agent[140349]:     log global
Jan 20 14:56:09 compute-1 ovn_metadata_agent[140349]:     mode http
Jan 20 14:56:09 compute-1 ovn_metadata_agent[140349]:     option httplog
Jan 20 14:56:09 compute-1 ovn_metadata_agent[140349]:     option dontlognull
Jan 20 14:56:09 compute-1 ovn_metadata_agent[140349]:     option http-server-close
Jan 20 14:56:09 compute-1 ovn_metadata_agent[140349]:     option forwardfor
Jan 20 14:56:09 compute-1 ovn_metadata_agent[140349]:     retries                 3
Jan 20 14:56:09 compute-1 ovn_metadata_agent[140349]:     timeout http-request    30s
Jan 20 14:56:09 compute-1 ovn_metadata_agent[140349]:     timeout connect         30s
Jan 20 14:56:09 compute-1 ovn_metadata_agent[140349]:     timeout client          32s
Jan 20 14:56:09 compute-1 ovn_metadata_agent[140349]:     timeout server          32s
Jan 20 14:56:09 compute-1 ovn_metadata_agent[140349]:     timeout http-keep-alive 30s
Jan 20 14:56:09 compute-1 ovn_metadata_agent[140349]: 
Jan 20 14:56:09 compute-1 ovn_metadata_agent[140349]: 
Jan 20 14:56:09 compute-1 ovn_metadata_agent[140349]: listen listener
Jan 20 14:56:09 compute-1 ovn_metadata_agent[140349]:     bind 169.254.169.254:80
Jan 20 14:56:09 compute-1 ovn_metadata_agent[140349]:     server metadata /var/lib/neutron/metadata_proxy
Jan 20 14:56:09 compute-1 ovn_metadata_agent[140349]:     http-request add-header X-OVN-Network-ID d3dc1854-2a38-414a-a424-2ff753e5a7da
Jan 20 14:56:09 compute-1 ovn_metadata_agent[140349]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 20 14:56:09 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:56:09.232 140354 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-d3dc1854-2a38-414a-a424-2ff753e5a7da', 'env', 'PROCESS_TAG=haproxy-d3dc1854-2a38-414a-a424-2ff753e5a7da', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/d3dc1854-2a38-414a-a424-2ff753e5a7da.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 20 14:56:09 compute-1 kernel: tape084df8c-a7 (unregistering): left promiscuous mode
Jan 20 14:56:09 compute-1 NetworkManager[49104]: <info>  [1768920969.2621] device (tape084df8c-a7): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 20 14:56:09 compute-1 ovn_controller[130490]: 2026-01-20T14:56:09Z|00519|binding|INFO|Releasing lport e084df8c-a73e-4535-bcf7-de8adbafa9ae from this chassis (sb_readonly=0)
Jan 20 14:56:09 compute-1 nova_compute[225855]: 2026-01-20 14:56:09.266 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:56:09 compute-1 ovn_controller[130490]: 2026-01-20T14:56:09Z|00520|binding|INFO|Setting lport e084df8c-a73e-4535-bcf7-de8adbafa9ae down in Southbound
Jan 20 14:56:09 compute-1 ovn_controller[130490]: 2026-01-20T14:56:09Z|00521|binding|INFO|Removing iface tape084df8c-a7 ovn-installed in OVS
Jan 20 14:56:09 compute-1 nova_compute[225855]: 2026-01-20 14:56:09.270 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:56:09 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:56:09.276 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:89:8e:0f 10.100.0.14'], port_security=['fa:16:3e:89:8e:0f 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '3bec73f6-5255-44c0-8a10-a64c7e86c0c2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e9589011-b728-4b79-9945-aa6c52dd0fc2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b1e83af992c94112a965575784639d77', 'neutron:revision_number': '9', 'neutron:security_group_ids': 'e88a960e-8540-4c69-934d-b6e1b91beb98', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.233', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1dc6df7d-3e57-4779-8232-af1ccf413403, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], logical_port=e084df8c-a73e-4535-bcf7-de8adbafa9ae) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:56:09 compute-1 nova_compute[225855]: 2026-01-20 14:56:09.294 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:56:09 compute-1 systemd[1]: machine-qemu\x2d60\x2dinstance\x2d0000007c.scope: Deactivated successfully.
Jan 20 14:56:09 compute-1 systemd[1]: machine-qemu\x2d60\x2dinstance\x2d0000007c.scope: Consumed 14.051s CPU time.
Jan 20 14:56:09 compute-1 systemd-machined[194361]: Machine qemu-60-instance-0000007c terminated.
Jan 20 14:56:09 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:56:09 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:56:09 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:56:09.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:56:09 compute-1 NetworkManager[49104]: <info>  [1768920969.4104] manager: (tape084df8c-a7): new Tun device (/org/freedesktop/NetworkManager/Devices/225)
Jan 20 14:56:09 compute-1 nova_compute[225855]: 2026-01-20 14:56:09.446 225859 INFO nova.virt.libvirt.driver [-] [instance: 3bec73f6-5255-44c0-8a10-a64c7e86c0c2] Instance destroyed successfully.
Jan 20 14:56:09 compute-1 nova_compute[225855]: 2026-01-20 14:56:09.447 225859 DEBUG nova.objects.instance [None req-52e4d9fa-cf15-4d5d-9e86-eff0b02cb5ba 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] Lazy-loading 'resources' on Instance uuid 3bec73f6-5255-44c0-8a10-a64c7e86c0c2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:56:09 compute-1 nova_compute[225855]: 2026-01-20 14:56:09.461 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768920969.4608223, f6f09d34-bc44-451f-98e2-1b0701aeab3a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:56:09 compute-1 nova_compute[225855]: 2026-01-20 14:56:09.461 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: f6f09d34-bc44-451f-98e2-1b0701aeab3a] VM Started (Lifecycle Event)
Jan 20 14:56:09 compute-1 nova_compute[225855]: 2026-01-20 14:56:09.467 225859 DEBUG nova.virt.libvirt.vif [None req-52e4d9fa-cf15-4d5d-9e86-eff0b02cb5ba 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-20T14:54:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-AttachVolumeShelveTestJSON-server-913712707',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-attachvolumeshelvetestjson-server-913712707',id=124,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEiICz3SRHAbg35RWQTkKKcPH7nuzl556rWhnnJMKWCQeHRtCTrp3rh0Cew3QLmsFdOqe88XbxeaMKtgT6L6nfvjZZnoyEjqVogiPNh8/V6NYBD5v71aQZWpX0o+tqsUvg==',key_name='tempest-keypair-1388890452',keypairs=<?>,launch_index=0,launched_at=2026-01-20T14:55:50Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b1e83af992c94112a965575784639d77',ramdisk_id='',reservation_id='r-n20fo39g',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachVolumeShelveTestJSON-896995479',owner_user_name='tempest-AttachVolumeShelveTestJSON-896995479-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T14:55:50Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='34eb73f628994c11801d447148d5f142',uuid=3bec73f6-5255-44c0-8a10-a64c7e86c0c2,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e084df8c-a73e-4535-bcf7-de8adbafa9ae", "address": "fa:16:3e:89:8e:0f", "network": {"id": "e9589011-b728-4b79-9945-aa6c52dd0fc2", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-1143668360-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b1e83af992c94112a965575784639d77", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape084df8c-a7", "ovs_interfaceid": "e084df8c-a73e-4535-bcf7-de8adbafa9ae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 20 14:56:09 compute-1 nova_compute[225855]: 2026-01-20 14:56:09.467 225859 DEBUG nova.network.os_vif_util [None req-52e4d9fa-cf15-4d5d-9e86-eff0b02cb5ba 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] Converting VIF {"id": "e084df8c-a73e-4535-bcf7-de8adbafa9ae", "address": "fa:16:3e:89:8e:0f", "network": {"id": "e9589011-b728-4b79-9945-aa6c52dd0fc2", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-1143668360-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b1e83af992c94112a965575784639d77", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape084df8c-a7", "ovs_interfaceid": "e084df8c-a73e-4535-bcf7-de8adbafa9ae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:56:09 compute-1 nova_compute[225855]: 2026-01-20 14:56:09.468 225859 DEBUG nova.network.os_vif_util [None req-52e4d9fa-cf15-4d5d-9e86-eff0b02cb5ba 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:89:8e:0f,bridge_name='br-int',has_traffic_filtering=True,id=e084df8c-a73e-4535-bcf7-de8adbafa9ae,network=Network(e9589011-b728-4b79-9945-aa6c52dd0fc2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape084df8c-a7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:56:09 compute-1 nova_compute[225855]: 2026-01-20 14:56:09.468 225859 DEBUG os_vif [None req-52e4d9fa-cf15-4d5d-9e86-eff0b02cb5ba 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:89:8e:0f,bridge_name='br-int',has_traffic_filtering=True,id=e084df8c-a73e-4535-bcf7-de8adbafa9ae,network=Network(e9589011-b728-4b79-9945-aa6c52dd0fc2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape084df8c-a7') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 20 14:56:09 compute-1 nova_compute[225855]: 2026-01-20 14:56:09.470 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:56:09 compute-1 nova_compute[225855]: 2026-01-20 14:56:09.470 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape084df8c-a7, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:56:09 compute-1 nova_compute[225855]: 2026-01-20 14:56:09.474 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 14:56:09 compute-1 nova_compute[225855]: 2026-01-20 14:56:09.476 225859 INFO os_vif [None req-52e4d9fa-cf15-4d5d-9e86-eff0b02cb5ba 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:89:8e:0f,bridge_name='br-int',has_traffic_filtering=True,id=e084df8c-a73e-4535-bcf7-de8adbafa9ae,network=Network(e9589011-b728-4b79-9945-aa6c52dd0fc2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape084df8c-a7')
Jan 20 14:56:09 compute-1 nova_compute[225855]: 2026-01-20 14:56:09.527 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: f6f09d34-bc44-451f-98e2-1b0701aeab3a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:56:09 compute-1 nova_compute[225855]: 2026-01-20 14:56:09.532 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768920969.4615161, f6f09d34-bc44-451f-98e2-1b0701aeab3a => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:56:09 compute-1 nova_compute[225855]: 2026-01-20 14:56:09.532 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: f6f09d34-bc44-451f-98e2-1b0701aeab3a] VM Paused (Lifecycle Event)
Jan 20 14:56:09 compute-1 nova_compute[225855]: 2026-01-20 14:56:09.570 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: f6f09d34-bc44-451f-98e2-1b0701aeab3a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:56:09 compute-1 nova_compute[225855]: 2026-01-20 14:56:09.574 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: f6f09d34-bc44-451f-98e2-1b0701aeab3a] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:56:09 compute-1 podman[276897]: 2026-01-20 14:56:09.610572349 +0000 UTC m=+0.049225921 container create 17702d2dea4cfe79c035f6a1cbf056da9364208b58b99e46738aa45a65e6f66e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d3dc1854-2a38-414a-a424-2ff753e5a7da, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team)
Jan 20 14:56:09 compute-1 systemd[1]: Started libpod-conmon-17702d2dea4cfe79c035f6a1cbf056da9364208b58b99e46738aa45a65e6f66e.scope.
Jan 20 14:56:09 compute-1 nova_compute[225855]: 2026-01-20 14:56:09.663 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: f6f09d34-bc44-451f-98e2-1b0701aeab3a] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 14:56:09 compute-1 systemd[1]: Started libcrun container.
Jan 20 14:56:09 compute-1 podman[276897]: 2026-01-20 14:56:09.583895756 +0000 UTC m=+0.022549358 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 20 14:56:09 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79772ae235c75facde7fa71bd7324d0c967aed4152a0bb0f69655c6f1f473ade/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 20 14:56:09 compute-1 podman[276897]: 2026-01-20 14:56:09.699239752 +0000 UTC m=+0.137893374 container init 17702d2dea4cfe79c035f6a1cbf056da9364208b58b99e46738aa45a65e6f66e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d3dc1854-2a38-414a-a424-2ff753e5a7da, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202)
Jan 20 14:56:09 compute-1 podman[276897]: 2026-01-20 14:56:09.704939673 +0000 UTC m=+0.143593245 container start 17702d2dea4cfe79c035f6a1cbf056da9364208b58b99e46738aa45a65e6f66e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d3dc1854-2a38-414a-a424-2ff753e5a7da, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202)
Jan 20 14:56:09 compute-1 neutron-haproxy-ovnmeta-d3dc1854-2a38-414a-a424-2ff753e5a7da[276915]: [NOTICE]   (276919) : New worker (276921) forked
Jan 20 14:56:09 compute-1 neutron-haproxy-ovnmeta-d3dc1854-2a38-414a-a424-2ff753e5a7da[276915]: [NOTICE]   (276919) : Loading success.
Jan 20 14:56:09 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:56:09.764 140354 INFO neutron.agent.ovn.metadata.agent [-] Port e084df8c-a73e-4535-bcf7-de8adbafa9ae in datapath e9589011-b728-4b79-9945-aa6c52dd0fc2 unbound from our chassis
Jan 20 14:56:09 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:56:09.766 140354 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e9589011-b728-4b79-9945-aa6c52dd0fc2, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 20 14:56:09 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:56:09.767 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[19d52ec5-81d5-450b-be03-a1c324dd5ba3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:56:09 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:56:09.767 140354 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-e9589011-b728-4b79-9945-aa6c52dd0fc2 namespace which is not needed anymore
Jan 20 14:56:09 compute-1 nova_compute[225855]: 2026-01-20 14:56:09.830 225859 INFO nova.virt.libvirt.driver [None req-52e4d9fa-cf15-4d5d-9e86-eff0b02cb5ba 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] [instance: 3bec73f6-5255-44c0-8a10-a64c7e86c0c2] Deleting instance files /var/lib/nova/instances/3bec73f6-5255-44c0-8a10-a64c7e86c0c2_del
Jan 20 14:56:09 compute-1 nova_compute[225855]: 2026-01-20 14:56:09.831 225859 INFO nova.virt.libvirt.driver [None req-52e4d9fa-cf15-4d5d-9e86-eff0b02cb5ba 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] [instance: 3bec73f6-5255-44c0-8a10-a64c7e86c0c2] Deletion of /var/lib/nova/instances/3bec73f6-5255-44c0-8a10-a64c7e86c0c2_del complete
Jan 20 14:56:09 compute-1 nova_compute[225855]: 2026-01-20 14:56:09.873 225859 INFO nova.compute.manager [None req-52e4d9fa-cf15-4d5d-9e86-eff0b02cb5ba 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] [instance: 3bec73f6-5255-44c0-8a10-a64c7e86c0c2] Took 0.68 seconds to destroy the instance on the hypervisor.
Jan 20 14:56:09 compute-1 nova_compute[225855]: 2026-01-20 14:56:09.875 225859 DEBUG oslo.service.loopingcall [None req-52e4d9fa-cf15-4d5d-9e86-eff0b02cb5ba 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 20 14:56:09 compute-1 nova_compute[225855]: 2026-01-20 14:56:09.875 225859 DEBUG nova.compute.manager [-] [instance: 3bec73f6-5255-44c0-8a10-a64c7e86c0c2] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 20 14:56:09 compute-1 nova_compute[225855]: 2026-01-20 14:56:09.876 225859 DEBUG nova.network.neutron [-] [instance: 3bec73f6-5255-44c0-8a10-a64c7e86c0c2] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 20 14:56:09 compute-1 neutron-haproxy-ovnmeta-e9589011-b728-4b79-9945-aa6c52dd0fc2[276297]: [NOTICE]   (276301) : haproxy version is 2.8.14-c23fe91
Jan 20 14:56:09 compute-1 neutron-haproxy-ovnmeta-e9589011-b728-4b79-9945-aa6c52dd0fc2[276297]: [NOTICE]   (276301) : path to executable is /usr/sbin/haproxy
Jan 20 14:56:09 compute-1 neutron-haproxy-ovnmeta-e9589011-b728-4b79-9945-aa6c52dd0fc2[276297]: [WARNING]  (276301) : Exiting Master process...
Jan 20 14:56:09 compute-1 neutron-haproxy-ovnmeta-e9589011-b728-4b79-9945-aa6c52dd0fc2[276297]: [ALERT]    (276301) : Current worker (276303) exited with code 143 (Terminated)
Jan 20 14:56:09 compute-1 neutron-haproxy-ovnmeta-e9589011-b728-4b79-9945-aa6c52dd0fc2[276297]: [WARNING]  (276301) : All workers exited. Exiting... (0)
Jan 20 14:56:09 compute-1 systemd[1]: libpod-a8130d817215800fca2757ee75ba5302132a8e9fb828d8b29771e62e00d0d605.scope: Deactivated successfully.
Jan 20 14:56:09 compute-1 podman[276947]: 2026-01-20 14:56:09.899083624 +0000 UTC m=+0.048026087 container died a8130d817215800fca2757ee75ba5302132a8e9fb828d8b29771e62e00d0d605 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e9589011-b728-4b79-9945-aa6c52dd0fc2, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team)
Jan 20 14:56:09 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a8130d817215800fca2757ee75ba5302132a8e9fb828d8b29771e62e00d0d605-userdata-shm.mount: Deactivated successfully.
Jan 20 14:56:09 compute-1 systemd[1]: var-lib-containers-storage-overlay-4b58477d32c8948a9385d790e9b6a62f6a46c01c4ace31c1af6ef64bffa12e02-merged.mount: Deactivated successfully.
Jan 20 14:56:09 compute-1 podman[276947]: 2026-01-20 14:56:09.934483243 +0000 UTC m=+0.083425706 container cleanup a8130d817215800fca2757ee75ba5302132a8e9fb828d8b29771e62e00d0d605 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e9589011-b728-4b79-9945-aa6c52dd0fc2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 20 14:56:09 compute-1 systemd[1]: libpod-conmon-a8130d817215800fca2757ee75ba5302132a8e9fb828d8b29771e62e00d0d605.scope: Deactivated successfully.
Jan 20 14:56:10 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:56:10 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:56:10 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:56:10.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:56:10 compute-1 podman[276978]: 2026-01-20 14:56:10.014560654 +0000 UTC m=+0.054110399 container remove a8130d817215800fca2757ee75ba5302132a8e9fb828d8b29771e62e00d0d605 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e9589011-b728-4b79-9945-aa6c52dd0fc2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:56:10 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:56:10.021 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[ef0facab-b37c-4a76-8e39-5d5044c87692]: (4, ('Tue Jan 20 02:56:09 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-e9589011-b728-4b79-9945-aa6c52dd0fc2 (a8130d817215800fca2757ee75ba5302132a8e9fb828d8b29771e62e00d0d605)\na8130d817215800fca2757ee75ba5302132a8e9fb828d8b29771e62e00d0d605\nTue Jan 20 02:56:09 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-e9589011-b728-4b79-9945-aa6c52dd0fc2 (a8130d817215800fca2757ee75ba5302132a8e9fb828d8b29771e62e00d0d605)\na8130d817215800fca2757ee75ba5302132a8e9fb828d8b29771e62e00d0d605\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:56:10 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:56:10.023 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[f5bf120e-93d9-4036-8a66-a2cb54e85db2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:56:10 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:56:10.024 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape9589011-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:56:10 compute-1 kernel: tape9589011-b0: left promiscuous mode
Jan 20 14:56:10 compute-1 nova_compute[225855]: 2026-01-20 14:56:10.075 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:56:10 compute-1 nova_compute[225855]: 2026-01-20 14:56:10.094 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:56:10 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:56:10.098 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[aaa2f416-f4b7-4fd8-ab98-5022dc232fb9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:56:10 compute-1 nova_compute[225855]: 2026-01-20 14:56:10.108 225859 DEBUG nova.compute.manager [req-d2a7ed7b-f96d-40f6-9f59-e55da8679369 req-8128ee16-d2f8-437b-8d28-d8e337842f77 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 3bec73f6-5255-44c0-8a10-a64c7e86c0c2] Received event network-vif-unplugged-e084df8c-a73e-4535-bcf7-de8adbafa9ae external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:56:10 compute-1 nova_compute[225855]: 2026-01-20 14:56:10.108 225859 DEBUG oslo_concurrency.lockutils [req-d2a7ed7b-f96d-40f6-9f59-e55da8679369 req-8128ee16-d2f8-437b-8d28-d8e337842f77 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "3bec73f6-5255-44c0-8a10-a64c7e86c0c2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:56:10 compute-1 nova_compute[225855]: 2026-01-20 14:56:10.108 225859 DEBUG oslo_concurrency.lockutils [req-d2a7ed7b-f96d-40f6-9f59-e55da8679369 req-8128ee16-d2f8-437b-8d28-d8e337842f77 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "3bec73f6-5255-44c0-8a10-a64c7e86c0c2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:56:10 compute-1 nova_compute[225855]: 2026-01-20 14:56:10.108 225859 DEBUG oslo_concurrency.lockutils [req-d2a7ed7b-f96d-40f6-9f59-e55da8679369 req-8128ee16-d2f8-437b-8d28-d8e337842f77 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "3bec73f6-5255-44c0-8a10-a64c7e86c0c2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:56:10 compute-1 nova_compute[225855]: 2026-01-20 14:56:10.109 225859 DEBUG nova.compute.manager [req-d2a7ed7b-f96d-40f6-9f59-e55da8679369 req-8128ee16-d2f8-437b-8d28-d8e337842f77 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 3bec73f6-5255-44c0-8a10-a64c7e86c0c2] No waiting events found dispatching network-vif-unplugged-e084df8c-a73e-4535-bcf7-de8adbafa9ae pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:56:10 compute-1 nova_compute[225855]: 2026-01-20 14:56:10.109 225859 DEBUG nova.compute.manager [req-d2a7ed7b-f96d-40f6-9f59-e55da8679369 req-8128ee16-d2f8-437b-8d28-d8e337842f77 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 3bec73f6-5255-44c0-8a10-a64c7e86c0c2] Received event network-vif-unplugged-e084df8c-a73e-4535-bcf7-de8adbafa9ae for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 20 14:56:10 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:56:10.114 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[c94c045e-6059-4fa6-81b0-00dd47e01c8d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:56:10 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:56:10.115 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[14d120b6-f798-4086-9bb9-cf31555e88ae]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:56:10 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:56:10.135 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[d336ae9a-c3f3-48ad-813e-9afdb2c47c72]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 598640, 'reachable_time': 25093, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 276991, 'error': None, 'target': 'ovnmeta-e9589011-b728-4b79-9945-aa6c52dd0fc2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:56:10 compute-1 systemd[1]: run-netns-ovnmeta\x2de9589011\x2db728\x2d4b79\x2d9945\x2daa6c52dd0fc2.mount: Deactivated successfully.
Jan 20 14:56:10 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:56:10.138 140466 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-e9589011-b728-4b79-9945-aa6c52dd0fc2 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 20 14:56:10 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:56:10.138 140466 DEBUG oslo.privsep.daemon [-] privsep: reply[4a8347ea-c614-4d81-8a6e-8916e6cc3c42]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:56:10 compute-1 ceph-mon[81775]: pgmap v2110: 321 pgs: 321 active+clean; 392 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 4.8 MiB/s wr, 232 op/s
Jan 20 14:56:11 compute-1 nova_compute[225855]: 2026-01-20 14:56:11.192 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:56:11 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:56:11 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:56:11 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:56:11.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:56:12 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:56:12 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:56:12 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:56:12.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:56:12 compute-1 nova_compute[225855]: 2026-01-20 14:56:12.241 225859 DEBUG nova.compute.manager [req-862ecea2-8ef5-4b05-8c98-6c78939b20b9 req-feeb1419-f667-44d3-8b2e-581a72a64428 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 3bec73f6-5255-44c0-8a10-a64c7e86c0c2] Received event network-vif-plugged-e084df8c-a73e-4535-bcf7-de8adbafa9ae external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:56:12 compute-1 nova_compute[225855]: 2026-01-20 14:56:12.242 225859 DEBUG oslo_concurrency.lockutils [req-862ecea2-8ef5-4b05-8c98-6c78939b20b9 req-feeb1419-f667-44d3-8b2e-581a72a64428 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "3bec73f6-5255-44c0-8a10-a64c7e86c0c2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:56:12 compute-1 nova_compute[225855]: 2026-01-20 14:56:12.242 225859 DEBUG oslo_concurrency.lockutils [req-862ecea2-8ef5-4b05-8c98-6c78939b20b9 req-feeb1419-f667-44d3-8b2e-581a72a64428 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "3bec73f6-5255-44c0-8a10-a64c7e86c0c2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:56:12 compute-1 nova_compute[225855]: 2026-01-20 14:56:12.242 225859 DEBUG oslo_concurrency.lockutils [req-862ecea2-8ef5-4b05-8c98-6c78939b20b9 req-feeb1419-f667-44d3-8b2e-581a72a64428 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "3bec73f6-5255-44c0-8a10-a64c7e86c0c2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:56:12 compute-1 nova_compute[225855]: 2026-01-20 14:56:12.242 225859 DEBUG nova.compute.manager [req-862ecea2-8ef5-4b05-8c98-6c78939b20b9 req-feeb1419-f667-44d3-8b2e-581a72a64428 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 3bec73f6-5255-44c0-8a10-a64c7e86c0c2] No waiting events found dispatching network-vif-plugged-e084df8c-a73e-4535-bcf7-de8adbafa9ae pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:56:12 compute-1 nova_compute[225855]: 2026-01-20 14:56:12.243 225859 WARNING nova.compute.manager [req-862ecea2-8ef5-4b05-8c98-6c78939b20b9 req-feeb1419-f667-44d3-8b2e-581a72a64428 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 3bec73f6-5255-44c0-8a10-a64c7e86c0c2] Received unexpected event network-vif-plugged-e084df8c-a73e-4535-bcf7-de8adbafa9ae for instance with vm_state active and task_state deleting.
Jan 20 14:56:12 compute-1 nova_compute[225855]: 2026-01-20 14:56:12.386 225859 DEBUG nova.network.neutron [-] [instance: 3bec73f6-5255-44c0-8a10-a64c7e86c0c2] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:56:12 compute-1 nova_compute[225855]: 2026-01-20 14:56:12.403 225859 INFO nova.compute.manager [-] [instance: 3bec73f6-5255-44c0-8a10-a64c7e86c0c2] Took 2.53 seconds to deallocate network for instance.
Jan 20 14:56:12 compute-1 nova_compute[225855]: 2026-01-20 14:56:12.453 225859 DEBUG nova.compute.manager [req-35fb815a-9fba-461a-8f8a-d244a36878e2 req-a835dafa-1400-44d5-98fb-517427180ca2 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f6f09d34-bc44-451f-98e2-1b0701aeab3a] Received event network-vif-plugged-73ed9acf-a178-4d9c-98a3-25f22489d41d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:56:12 compute-1 nova_compute[225855]: 2026-01-20 14:56:12.454 225859 DEBUG oslo_concurrency.lockutils [req-35fb815a-9fba-461a-8f8a-d244a36878e2 req-a835dafa-1400-44d5-98fb-517427180ca2 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "f6f09d34-bc44-451f-98e2-1b0701aeab3a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:56:12 compute-1 nova_compute[225855]: 2026-01-20 14:56:12.454 225859 DEBUG oslo_concurrency.lockutils [req-35fb815a-9fba-461a-8f8a-d244a36878e2 req-a835dafa-1400-44d5-98fb-517427180ca2 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "f6f09d34-bc44-451f-98e2-1b0701aeab3a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:56:12 compute-1 nova_compute[225855]: 2026-01-20 14:56:12.455 225859 DEBUG oslo_concurrency.lockutils [req-35fb815a-9fba-461a-8f8a-d244a36878e2 req-a835dafa-1400-44d5-98fb-517427180ca2 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "f6f09d34-bc44-451f-98e2-1b0701aeab3a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:56:12 compute-1 nova_compute[225855]: 2026-01-20 14:56:12.455 225859 DEBUG nova.compute.manager [req-35fb815a-9fba-461a-8f8a-d244a36878e2 req-a835dafa-1400-44d5-98fb-517427180ca2 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f6f09d34-bc44-451f-98e2-1b0701aeab3a] Processing event network-vif-plugged-73ed9acf-a178-4d9c-98a3-25f22489d41d _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 20 14:56:12 compute-1 nova_compute[225855]: 2026-01-20 14:56:12.455 225859 DEBUG nova.compute.manager [req-35fb815a-9fba-461a-8f8a-d244a36878e2 req-a835dafa-1400-44d5-98fb-517427180ca2 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f6f09d34-bc44-451f-98e2-1b0701aeab3a] Received event network-vif-plugged-73ed9acf-a178-4d9c-98a3-25f22489d41d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:56:12 compute-1 nova_compute[225855]: 2026-01-20 14:56:12.456 225859 DEBUG oslo_concurrency.lockutils [req-35fb815a-9fba-461a-8f8a-d244a36878e2 req-a835dafa-1400-44d5-98fb-517427180ca2 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "f6f09d34-bc44-451f-98e2-1b0701aeab3a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:56:12 compute-1 nova_compute[225855]: 2026-01-20 14:56:12.456 225859 DEBUG oslo_concurrency.lockutils [req-35fb815a-9fba-461a-8f8a-d244a36878e2 req-a835dafa-1400-44d5-98fb-517427180ca2 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "f6f09d34-bc44-451f-98e2-1b0701aeab3a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:56:12 compute-1 nova_compute[225855]: 2026-01-20 14:56:12.456 225859 DEBUG oslo_concurrency.lockutils [req-35fb815a-9fba-461a-8f8a-d244a36878e2 req-a835dafa-1400-44d5-98fb-517427180ca2 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "f6f09d34-bc44-451f-98e2-1b0701aeab3a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:56:12 compute-1 nova_compute[225855]: 2026-01-20 14:56:12.457 225859 DEBUG nova.compute.manager [req-35fb815a-9fba-461a-8f8a-d244a36878e2 req-a835dafa-1400-44d5-98fb-517427180ca2 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f6f09d34-bc44-451f-98e2-1b0701aeab3a] No waiting events found dispatching network-vif-plugged-73ed9acf-a178-4d9c-98a3-25f22489d41d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:56:12 compute-1 nova_compute[225855]: 2026-01-20 14:56:12.457 225859 WARNING nova.compute.manager [req-35fb815a-9fba-461a-8f8a-d244a36878e2 req-a835dafa-1400-44d5-98fb-517427180ca2 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f6f09d34-bc44-451f-98e2-1b0701aeab3a] Received unexpected event network-vif-plugged-73ed9acf-a178-4d9c-98a3-25f22489d41d for instance with vm_state building and task_state spawning.
Jan 20 14:56:12 compute-1 nova_compute[225855]: 2026-01-20 14:56:12.460 225859 DEBUG nova.compute.manager [None req-f2371872-7b78-4e90-8e3f-234488572008 f6f144f1d330427e82e84c891e9a8a89 4be5b75b5dcb4eeea9759f7c4a779ffa - - default default] [instance: f6f09d34-bc44-451f-98e2-1b0701aeab3a] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 20 14:56:12 compute-1 nova_compute[225855]: 2026-01-20 14:56:12.461 225859 DEBUG oslo_concurrency.lockutils [None req-52e4d9fa-cf15-4d5d-9e86-eff0b02cb5ba 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:56:12 compute-1 nova_compute[225855]: 2026-01-20 14:56:12.462 225859 DEBUG oslo_concurrency.lockutils [None req-52e4d9fa-cf15-4d5d-9e86-eff0b02cb5ba 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:56:12 compute-1 nova_compute[225855]: 2026-01-20 14:56:12.468 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768920972.4680414, f6f09d34-bc44-451f-98e2-1b0701aeab3a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:56:12 compute-1 nova_compute[225855]: 2026-01-20 14:56:12.468 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: f6f09d34-bc44-451f-98e2-1b0701aeab3a] VM Resumed (Lifecycle Event)
Jan 20 14:56:12 compute-1 nova_compute[225855]: 2026-01-20 14:56:12.478 225859 DEBUG nova.virt.libvirt.driver [None req-f2371872-7b78-4e90-8e3f-234488572008 f6f144f1d330427e82e84c891e9a8a89 4be5b75b5dcb4eeea9759f7c4a779ffa - - default default] [instance: f6f09d34-bc44-451f-98e2-1b0701aeab3a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 20 14:56:12 compute-1 nova_compute[225855]: 2026-01-20 14:56:12.487 225859 INFO nova.virt.libvirt.driver [-] [instance: f6f09d34-bc44-451f-98e2-1b0701aeab3a] Instance spawned successfully.
Jan 20 14:56:12 compute-1 nova_compute[225855]: 2026-01-20 14:56:12.487 225859 DEBUG nova.virt.libvirt.driver [None req-f2371872-7b78-4e90-8e3f-234488572008 f6f144f1d330427e82e84c891e9a8a89 4be5b75b5dcb4eeea9759f7c4a779ffa - - default default] [instance: f6f09d34-bc44-451f-98e2-1b0701aeab3a] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 20 14:56:12 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:56:12 compute-1 nova_compute[225855]: 2026-01-20 14:56:12.497 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: f6f09d34-bc44-451f-98e2-1b0701aeab3a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:56:12 compute-1 nova_compute[225855]: 2026-01-20 14:56:12.501 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: f6f09d34-bc44-451f-98e2-1b0701aeab3a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:56:12 compute-1 nova_compute[225855]: 2026-01-20 14:56:12.514 225859 DEBUG nova.virt.libvirt.driver [None req-f2371872-7b78-4e90-8e3f-234488572008 f6f144f1d330427e82e84c891e9a8a89 4be5b75b5dcb4eeea9759f7c4a779ffa - - default default] [instance: f6f09d34-bc44-451f-98e2-1b0701aeab3a] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:56:12 compute-1 nova_compute[225855]: 2026-01-20 14:56:12.514 225859 DEBUG nova.virt.libvirt.driver [None req-f2371872-7b78-4e90-8e3f-234488572008 f6f144f1d330427e82e84c891e9a8a89 4be5b75b5dcb4eeea9759f7c4a779ffa - - default default] [instance: f6f09d34-bc44-451f-98e2-1b0701aeab3a] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:56:12 compute-1 nova_compute[225855]: 2026-01-20 14:56:12.515 225859 DEBUG nova.virt.libvirt.driver [None req-f2371872-7b78-4e90-8e3f-234488572008 f6f144f1d330427e82e84c891e9a8a89 4be5b75b5dcb4eeea9759f7c4a779ffa - - default default] [instance: f6f09d34-bc44-451f-98e2-1b0701aeab3a] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:56:12 compute-1 nova_compute[225855]: 2026-01-20 14:56:12.516 225859 DEBUG nova.virt.libvirt.driver [None req-f2371872-7b78-4e90-8e3f-234488572008 f6f144f1d330427e82e84c891e9a8a89 4be5b75b5dcb4eeea9759f7c4a779ffa - - default default] [instance: f6f09d34-bc44-451f-98e2-1b0701aeab3a] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:56:12 compute-1 nova_compute[225855]: 2026-01-20 14:56:12.517 225859 DEBUG nova.virt.libvirt.driver [None req-f2371872-7b78-4e90-8e3f-234488572008 f6f144f1d330427e82e84c891e9a8a89 4be5b75b5dcb4eeea9759f7c4a779ffa - - default default] [instance: f6f09d34-bc44-451f-98e2-1b0701aeab3a] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:56:12 compute-1 nova_compute[225855]: 2026-01-20 14:56:12.517 225859 DEBUG nova.virt.libvirt.driver [None req-f2371872-7b78-4e90-8e3f-234488572008 f6f144f1d330427e82e84c891e9a8a89 4be5b75b5dcb4eeea9759f7c4a779ffa - - default default] [instance: f6f09d34-bc44-451f-98e2-1b0701aeab3a] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:56:12 compute-1 nova_compute[225855]: 2026-01-20 14:56:12.525 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: f6f09d34-bc44-451f-98e2-1b0701aeab3a] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 14:56:12 compute-1 nova_compute[225855]: 2026-01-20 14:56:12.604 225859 DEBUG oslo_concurrency.processutils [None req-52e4d9fa-cf15-4d5d-9e86-eff0b02cb5ba 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:56:12 compute-1 nova_compute[225855]: 2026-01-20 14:56:12.634 225859 INFO nova.compute.manager [None req-f2371872-7b78-4e90-8e3f-234488572008 f6f144f1d330427e82e84c891e9a8a89 4be5b75b5dcb4eeea9759f7c4a779ffa - - default default] [instance: f6f09d34-bc44-451f-98e2-1b0701aeab3a] Took 11.12 seconds to spawn the instance on the hypervisor.
Jan 20 14:56:12 compute-1 nova_compute[225855]: 2026-01-20 14:56:12.635 225859 DEBUG nova.compute.manager [None req-f2371872-7b78-4e90-8e3f-234488572008 f6f144f1d330427e82e84c891e9a8a89 4be5b75b5dcb4eeea9759f7c4a779ffa - - default default] [instance: f6f09d34-bc44-451f-98e2-1b0701aeab3a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:56:12 compute-1 sudo[276994]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:56:12 compute-1 sudo[276994]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:56:12 compute-1 sudo[276994]: pam_unix(sudo:session): session closed for user root
Jan 20 14:56:12 compute-1 sudo[277038]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:56:12 compute-1 sudo[277038]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:56:12 compute-1 sudo[277038]: pam_unix(sudo:session): session closed for user root
Jan 20 14:56:13 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:56:13 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2494222947' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:56:13 compute-1 nova_compute[225855]: 2026-01-20 14:56:13.022 225859 INFO nova.compute.manager [None req-f2371872-7b78-4e90-8e3f-234488572008 f6f144f1d330427e82e84c891e9a8a89 4be5b75b5dcb4eeea9759f7c4a779ffa - - default default] [instance: f6f09d34-bc44-451f-98e2-1b0701aeab3a] Took 12.32 seconds to build instance.
Jan 20 14:56:13 compute-1 nova_compute[225855]: 2026-01-20 14:56:13.039 225859 DEBUG oslo_concurrency.lockutils [None req-f2371872-7b78-4e90-8e3f-234488572008 f6f144f1d330427e82e84c891e9a8a89 4be5b75b5dcb4eeea9759f7c4a779ffa - - default default] Lock "f6f09d34-bc44-451f-98e2-1b0701aeab3a" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.405s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:56:13 compute-1 nova_compute[225855]: 2026-01-20 14:56:13.040 225859 DEBUG oslo_concurrency.processutils [None req-52e4d9fa-cf15-4d5d-9e86-eff0b02cb5ba 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:56:13 compute-1 nova_compute[225855]: 2026-01-20 14:56:13.050 225859 DEBUG nova.compute.provider_tree [None req-52e4d9fa-cf15-4d5d-9e86-eff0b02cb5ba 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:56:13 compute-1 nova_compute[225855]: 2026-01-20 14:56:13.069 225859 DEBUG nova.scheduler.client.report [None req-52e4d9fa-cf15-4d5d-9e86-eff0b02cb5ba 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:56:13 compute-1 nova_compute[225855]: 2026-01-20 14:56:13.090 225859 DEBUG oslo_concurrency.lockutils [None req-52e4d9fa-cf15-4d5d-9e86-eff0b02cb5ba 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.628s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:56:13 compute-1 ceph-mon[81775]: pgmap v2111: 321 pgs: 321 active+clean; 368 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 4.8 MiB/s wr, 229 op/s
Jan 20 14:56:13 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/2494222947' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:56:13 compute-1 nova_compute[225855]: 2026-01-20 14:56:13.129 225859 INFO nova.scheduler.client.report [None req-52e4d9fa-cf15-4d5d-9e86-eff0b02cb5ba 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] Deleted allocations for instance 3bec73f6-5255-44c0-8a10-a64c7e86c0c2
Jan 20 14:56:13 compute-1 nova_compute[225855]: 2026-01-20 14:56:13.364 225859 DEBUG oslo_concurrency.lockutils [None req-52e4d9fa-cf15-4d5d-9e86-eff0b02cb5ba 34eb73f628994c11801d447148d5f142 b1e83af992c94112a965575784639d77 - - default default] Lock "3bec73f6-5255-44c0-8a10-a64c7e86c0c2" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.178s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:56:13 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:56:13 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:56:13 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:56:13.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:56:13 compute-1 nova_compute[225855]: 2026-01-20 14:56:13.662 225859 DEBUG nova.compute.manager [req-9a9499e4-e538-4c97-bf2c-20a7dc379c98 req-19e2c7a1-b434-4c13-a2d8-2c4149d98f94 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 3bec73f6-5255-44c0-8a10-a64c7e86c0c2] Received event network-vif-deleted-e084df8c-a73e-4535-bcf7-de8adbafa9ae external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:56:14 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:56:14 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:56:14 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:56:14.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:56:14 compute-1 podman[277066]: 2026-01-20 14:56:14.065833875 +0000 UTC m=+0.103325208 container health_status 72be95c12e041eb2cd1a16ad70dfa1391d9f03a5f3180a614ead9459dcfcd46f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 20 14:56:14 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/1356210779' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 14:56:14 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/1356210779' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 14:56:14 compute-1 nova_compute[225855]: 2026-01-20 14:56:14.471 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:56:15 compute-1 ceph-mon[81775]: pgmap v2112: 321 pgs: 321 active+clean; 358 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 5.4 MiB/s wr, 209 op/s
Jan 20 14:56:15 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/2532883791' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:56:15 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/945588242' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:56:15 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:56:15 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:56:15 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:56:15.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:56:16 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:56:16 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:56:16 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:56:16.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:56:16 compute-1 nova_compute[225855]: 2026-01-20 14:56:16.232 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:56:16 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/2874809718' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 14:56:16 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/2874809718' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 14:56:16 compute-1 ceph-mon[81775]: pgmap v2113: 321 pgs: 321 active+clean; 339 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 5.1 MiB/s wr, 266 op/s
Jan 20 14:56:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:56:16.415 140354 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:56:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:56:16.415 140354 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:56:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:56:16.416 140354 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:56:17 compute-1 nova_compute[225855]: 2026-01-20 14:56:17.057 225859 DEBUG oslo_concurrency.lockutils [None req-de8ab48f-f90f-41dc-bd39-b1222b6e5a67 f6f144f1d330427e82e84c891e9a8a89 4be5b75b5dcb4eeea9759f7c4a779ffa - - default default] Acquiring lock "f6f09d34-bc44-451f-98e2-1b0701aeab3a" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:56:17 compute-1 nova_compute[225855]: 2026-01-20 14:56:17.058 225859 DEBUG oslo_concurrency.lockutils [None req-de8ab48f-f90f-41dc-bd39-b1222b6e5a67 f6f144f1d330427e82e84c891e9a8a89 4be5b75b5dcb4eeea9759f7c4a779ffa - - default default] Lock "f6f09d34-bc44-451f-98e2-1b0701aeab3a" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:56:17 compute-1 nova_compute[225855]: 2026-01-20 14:56:17.058 225859 DEBUG oslo_concurrency.lockutils [None req-de8ab48f-f90f-41dc-bd39-b1222b6e5a67 f6f144f1d330427e82e84c891e9a8a89 4be5b75b5dcb4eeea9759f7c4a779ffa - - default default] Acquiring lock "f6f09d34-bc44-451f-98e2-1b0701aeab3a-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:56:17 compute-1 nova_compute[225855]: 2026-01-20 14:56:17.059 225859 DEBUG oslo_concurrency.lockutils [None req-de8ab48f-f90f-41dc-bd39-b1222b6e5a67 f6f144f1d330427e82e84c891e9a8a89 4be5b75b5dcb4eeea9759f7c4a779ffa - - default default] Lock "f6f09d34-bc44-451f-98e2-1b0701aeab3a-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:56:17 compute-1 nova_compute[225855]: 2026-01-20 14:56:17.059 225859 DEBUG oslo_concurrency.lockutils [None req-de8ab48f-f90f-41dc-bd39-b1222b6e5a67 f6f144f1d330427e82e84c891e9a8a89 4be5b75b5dcb4eeea9759f7c4a779ffa - - default default] Lock "f6f09d34-bc44-451f-98e2-1b0701aeab3a-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:56:17 compute-1 nova_compute[225855]: 2026-01-20 14:56:17.060 225859 INFO nova.compute.manager [None req-de8ab48f-f90f-41dc-bd39-b1222b6e5a67 f6f144f1d330427e82e84c891e9a8a89 4be5b75b5dcb4eeea9759f7c4a779ffa - - default default] [instance: f6f09d34-bc44-451f-98e2-1b0701aeab3a] Terminating instance
Jan 20 14:56:17 compute-1 nova_compute[225855]: 2026-01-20 14:56:17.062 225859 DEBUG nova.compute.manager [None req-de8ab48f-f90f-41dc-bd39-b1222b6e5a67 f6f144f1d330427e82e84c891e9a8a89 4be5b75b5dcb4eeea9759f7c4a779ffa - - default default] [instance: f6f09d34-bc44-451f-98e2-1b0701aeab3a] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 20 14:56:17 compute-1 kernel: tap73ed9acf-a1 (unregistering): left promiscuous mode
Jan 20 14:56:17 compute-1 NetworkManager[49104]: <info>  [1768920977.1182] device (tap73ed9acf-a1): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 20 14:56:17 compute-1 nova_compute[225855]: 2026-01-20 14:56:17.125 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:56:17 compute-1 ovn_controller[130490]: 2026-01-20T14:56:17Z|00522|binding|INFO|Releasing lport 73ed9acf-a178-4d9c-98a3-25f22489d41d from this chassis (sb_readonly=0)
Jan 20 14:56:17 compute-1 ovn_controller[130490]: 2026-01-20T14:56:17Z|00523|binding|INFO|Setting lport 73ed9acf-a178-4d9c-98a3-25f22489d41d down in Southbound
Jan 20 14:56:17 compute-1 ovn_controller[130490]: 2026-01-20T14:56:17Z|00524|binding|INFO|Removing iface tap73ed9acf-a1 ovn-installed in OVS
Jan 20 14:56:17 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:56:17.133 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a5:7a:7c 10.100.0.7'], port_security=['fa:16:3e:a5:7a:7c 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'f6f09d34-bc44-451f-98e2-1b0701aeab3a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d3dc1854-2a38-414a-a424-2ff753e5a7da', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4be5b75b5dcb4eeea9759f7c4a779ffa', 'neutron:revision_number': '4', 'neutron:security_group_ids': '6171706d-94c4-4c43-b4b2-ef4cbdfdf97c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bf74ec16-3ea2-4ca6-9e5e-52ec9c203b9d, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], logical_port=73ed9acf-a178-4d9c-98a3-25f22489d41d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:56:17 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:56:17.134 140354 INFO neutron.agent.ovn.metadata.agent [-] Port 73ed9acf-a178-4d9c-98a3-25f22489d41d in datapath d3dc1854-2a38-414a-a424-2ff753e5a7da unbound from our chassis
Jan 20 14:56:17 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:56:17.135 140354 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d3dc1854-2a38-414a-a424-2ff753e5a7da, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 20 14:56:17 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:56:17.136 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[60813d76-7797-4175-9f83-a8e621f205ce]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:56:17 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:56:17.137 140354 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-d3dc1854-2a38-414a-a424-2ff753e5a7da namespace which is not needed anymore
Jan 20 14:56:17 compute-1 nova_compute[225855]: 2026-01-20 14:56:17.145 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:56:17 compute-1 systemd[1]: machine-qemu\x2d61\x2dinstance\x2d00000081.scope: Deactivated successfully.
Jan 20 14:56:17 compute-1 systemd[1]: machine-qemu\x2d61\x2dinstance\x2d00000081.scope: Consumed 5.309s CPU time.
Jan 20 14:56:17 compute-1 systemd-machined[194361]: Machine qemu-61-instance-00000081 terminated.
Jan 20 14:56:17 compute-1 neutron-haproxy-ovnmeta-d3dc1854-2a38-414a-a424-2ff753e5a7da[276915]: [NOTICE]   (276919) : haproxy version is 2.8.14-c23fe91
Jan 20 14:56:17 compute-1 neutron-haproxy-ovnmeta-d3dc1854-2a38-414a-a424-2ff753e5a7da[276915]: [NOTICE]   (276919) : path to executable is /usr/sbin/haproxy
Jan 20 14:56:17 compute-1 neutron-haproxy-ovnmeta-d3dc1854-2a38-414a-a424-2ff753e5a7da[276915]: [WARNING]  (276919) : Exiting Master process...
Jan 20 14:56:17 compute-1 neutron-haproxy-ovnmeta-d3dc1854-2a38-414a-a424-2ff753e5a7da[276915]: [ALERT]    (276919) : Current worker (276921) exited with code 143 (Terminated)
Jan 20 14:56:17 compute-1 neutron-haproxy-ovnmeta-d3dc1854-2a38-414a-a424-2ff753e5a7da[276915]: [WARNING]  (276919) : All workers exited. Exiting... (0)
Jan 20 14:56:17 compute-1 systemd[1]: libpod-17702d2dea4cfe79c035f6a1cbf056da9364208b58b99e46738aa45a65e6f66e.scope: Deactivated successfully.
Jan 20 14:56:17 compute-1 podman[277118]: 2026-01-20 14:56:17.286823707 +0000 UTC m=+0.059829239 container died 17702d2dea4cfe79c035f6a1cbf056da9364208b58b99e46738aa45a65e6f66e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d3dc1854-2a38-414a-a424-2ff753e5a7da, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 20 14:56:17 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/3749428790' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 14:56:17 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/3749428790' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 14:56:17 compute-1 nova_compute[225855]: 2026-01-20 14:56:17.302 225859 INFO nova.virt.libvirt.driver [-] [instance: f6f09d34-bc44-451f-98e2-1b0701aeab3a] Instance destroyed successfully.
Jan 20 14:56:17 compute-1 nova_compute[225855]: 2026-01-20 14:56:17.303 225859 DEBUG nova.objects.instance [None req-de8ab48f-f90f-41dc-bd39-b1222b6e5a67 f6f144f1d330427e82e84c891e9a8a89 4be5b75b5dcb4eeea9759f7c4a779ffa - - default default] Lazy-loading 'resources' on Instance uuid f6f09d34-bc44-451f-98e2-1b0701aeab3a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:56:17 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-17702d2dea4cfe79c035f6a1cbf056da9364208b58b99e46738aa45a65e6f66e-userdata-shm.mount: Deactivated successfully.
Jan 20 14:56:17 compute-1 systemd[1]: var-lib-containers-storage-overlay-79772ae235c75facde7fa71bd7324d0c967aed4152a0bb0f69655c6f1f473ade-merged.mount: Deactivated successfully.
Jan 20 14:56:17 compute-1 podman[277118]: 2026-01-20 14:56:17.335565623 +0000 UTC m=+0.108571135 container cleanup 17702d2dea4cfe79c035f6a1cbf056da9364208b58b99e46738aa45a65e6f66e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d3dc1854-2a38-414a-a424-2ff753e5a7da, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 20 14:56:17 compute-1 systemd[1]: libpod-conmon-17702d2dea4cfe79c035f6a1cbf056da9364208b58b99e46738aa45a65e6f66e.scope: Deactivated successfully.
Jan 20 14:56:17 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:56:17 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:56:17 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:56:17.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:56:17 compute-1 nova_compute[225855]: 2026-01-20 14:56:17.399 225859 DEBUG nova.virt.libvirt.vif [None req-de8ab48f-f90f-41dc-bd39-b1222b6e5a67 f6f144f1d330427e82e84c891e9a8a89 4be5b75b5dcb4eeea9759f7c4a779ffa - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T14:55:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerMetadataTestJSON-server-505687879',display_name='tempest-ServerMetadataTestJSON-server-505687879',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-servermetadatatestjson-server-505687879',id=129,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-20T14:56:12Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={key1='alt1',key2='value2',key3='value3'},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='4be5b75b5dcb4eeea9759f7c4a779ffa',ramdisk_id='',reservation_id='r-cbmgnji8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerMetadataTestJSON-599451381',owner_user_name='tempest-ServerMetadataTestJSON-599451381-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T14:56:16Z,user_data=None,user_id='f6f144f1d330427e82e84c891e9a8a89',uuid=f6f09d34-bc44-451f-98e2-1b0701aeab3a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "73ed9acf-a178-4d9c-98a3-25f22489d41d", "address": "fa:16:3e:a5:7a:7c", "network": {"id": "d3dc1854-2a38-414a-a424-2ff753e5a7da", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-645777946-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4be5b75b5dcb4eeea9759f7c4a779ffa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap73ed9acf-a1", "ovs_interfaceid": "73ed9acf-a178-4d9c-98a3-25f22489d41d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 20 14:56:17 compute-1 nova_compute[225855]: 2026-01-20 14:56:17.400 225859 DEBUG nova.network.os_vif_util [None req-de8ab48f-f90f-41dc-bd39-b1222b6e5a67 f6f144f1d330427e82e84c891e9a8a89 4be5b75b5dcb4eeea9759f7c4a779ffa - - default default] Converting VIF {"id": "73ed9acf-a178-4d9c-98a3-25f22489d41d", "address": "fa:16:3e:a5:7a:7c", "network": {"id": "d3dc1854-2a38-414a-a424-2ff753e5a7da", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-645777946-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4be5b75b5dcb4eeea9759f7c4a779ffa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap73ed9acf-a1", "ovs_interfaceid": "73ed9acf-a178-4d9c-98a3-25f22489d41d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:56:17 compute-1 nova_compute[225855]: 2026-01-20 14:56:17.401 225859 DEBUG nova.network.os_vif_util [None req-de8ab48f-f90f-41dc-bd39-b1222b6e5a67 f6f144f1d330427e82e84c891e9a8a89 4be5b75b5dcb4eeea9759f7c4a779ffa - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a5:7a:7c,bridge_name='br-int',has_traffic_filtering=True,id=73ed9acf-a178-4d9c-98a3-25f22489d41d,network=Network(d3dc1854-2a38-414a-a424-2ff753e5a7da),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap73ed9acf-a1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:56:17 compute-1 nova_compute[225855]: 2026-01-20 14:56:17.401 225859 DEBUG os_vif [None req-de8ab48f-f90f-41dc-bd39-b1222b6e5a67 f6f144f1d330427e82e84c891e9a8a89 4be5b75b5dcb4eeea9759f7c4a779ffa - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a5:7a:7c,bridge_name='br-int',has_traffic_filtering=True,id=73ed9acf-a178-4d9c-98a3-25f22489d41d,network=Network(d3dc1854-2a38-414a-a424-2ff753e5a7da),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap73ed9acf-a1') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 20 14:56:17 compute-1 nova_compute[225855]: 2026-01-20 14:56:17.403 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:56:17 compute-1 nova_compute[225855]: 2026-01-20 14:56:17.403 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap73ed9acf-a1, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:56:17 compute-1 podman[277160]: 2026-01-20 14:56:17.406426313 +0000 UTC m=+0.047948684 container remove 17702d2dea4cfe79c035f6a1cbf056da9364208b58b99e46738aa45a65e6f66e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d3dc1854-2a38-414a-a424-2ff753e5a7da, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Jan 20 14:56:17 compute-1 nova_compute[225855]: 2026-01-20 14:56:17.431 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:56:17 compute-1 nova_compute[225855]: 2026-01-20 14:56:17.433 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:56:17 compute-1 nova_compute[225855]: 2026-01-20 14:56:17.435 225859 INFO os_vif [None req-de8ab48f-f90f-41dc-bd39-b1222b6e5a67 f6f144f1d330427e82e84c891e9a8a89 4be5b75b5dcb4eeea9759f7c4a779ffa - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a5:7a:7c,bridge_name='br-int',has_traffic_filtering=True,id=73ed9acf-a178-4d9c-98a3-25f22489d41d,network=Network(d3dc1854-2a38-414a-a424-2ff753e5a7da),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap73ed9acf-a1')
Jan 20 14:56:17 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:56:17.436 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[f8392c6b-3d4a-487a-8258-bce73d9cded1]: (4, ('Tue Jan 20 02:56:17 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-d3dc1854-2a38-414a-a424-2ff753e5a7da (17702d2dea4cfe79c035f6a1cbf056da9364208b58b99e46738aa45a65e6f66e)\n17702d2dea4cfe79c035f6a1cbf056da9364208b58b99e46738aa45a65e6f66e\nTue Jan 20 02:56:17 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-d3dc1854-2a38-414a-a424-2ff753e5a7da (17702d2dea4cfe79c035f6a1cbf056da9364208b58b99e46738aa45a65e6f66e)\n17702d2dea4cfe79c035f6a1cbf056da9364208b58b99e46738aa45a65e6f66e\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:56:17 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:56:17.439 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[7fb9a3ce-6c3f-4f90-ba5e-cf32d29ce676]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:56:17 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:56:17.440 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd3dc1854-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:56:17 compute-1 kernel: tapd3dc1854-20: left promiscuous mode
Jan 20 14:56:17 compute-1 nova_compute[225855]: 2026-01-20 14:56:17.458 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:56:17 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:56:17.463 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[b6e93d2a-c9bd-411c-8999-1ef3bb980515]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:56:17 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:56:17.484 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[daa65bbf-c95f-4f41-be2f-3bc25352120b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:56:17 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:56:17.485 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[eeb6c80d-1a96-4012-95d0-a57342d037e8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:56:17 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:56:17 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:56:17.503 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[bf4ff84a-41b0-4c4f-9a7f-81c9fb7b027a]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 600791, 'reachable_time': 17265, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 277193, 'error': None, 'target': 'ovnmeta-d3dc1854-2a38-414a-a424-2ff753e5a7da', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:56:17 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:56:17.505 140466 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-d3dc1854-2a38-414a-a424-2ff753e5a7da deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 20 14:56:17 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:56:17.506 140466 DEBUG oslo.privsep.daemon [-] privsep: reply[7ed8882d-aa4a-49db-8583-aa83060fef30]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:56:17 compute-1 systemd[1]: run-netns-ovnmeta\x2dd3dc1854\x2d2a38\x2d414a\x2da424\x2d2ff753e5a7da.mount: Deactivated successfully.
Jan 20 14:56:17 compute-1 nova_compute[225855]: 2026-01-20 14:56:17.827 225859 INFO nova.virt.libvirt.driver [None req-de8ab48f-f90f-41dc-bd39-b1222b6e5a67 f6f144f1d330427e82e84c891e9a8a89 4be5b75b5dcb4eeea9759f7c4a779ffa - - default default] [instance: f6f09d34-bc44-451f-98e2-1b0701aeab3a] Deleting instance files /var/lib/nova/instances/f6f09d34-bc44-451f-98e2-1b0701aeab3a_del
Jan 20 14:56:17 compute-1 nova_compute[225855]: 2026-01-20 14:56:17.829 225859 INFO nova.virt.libvirt.driver [None req-de8ab48f-f90f-41dc-bd39-b1222b6e5a67 f6f144f1d330427e82e84c891e9a8a89 4be5b75b5dcb4eeea9759f7c4a779ffa - - default default] [instance: f6f09d34-bc44-451f-98e2-1b0701aeab3a] Deletion of /var/lib/nova/instances/f6f09d34-bc44-451f-98e2-1b0701aeab3a_del complete
Jan 20 14:56:18 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:56:18 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:56:18 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:56:18.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:56:18 compute-1 nova_compute[225855]: 2026-01-20 14:56:18.030 225859 INFO nova.compute.manager [None req-de8ab48f-f90f-41dc-bd39-b1222b6e5a67 f6f144f1d330427e82e84c891e9a8a89 4be5b75b5dcb4eeea9759f7c4a779ffa - - default default] [instance: f6f09d34-bc44-451f-98e2-1b0701aeab3a] Took 0.97 seconds to destroy the instance on the hypervisor.
Jan 20 14:56:18 compute-1 nova_compute[225855]: 2026-01-20 14:56:18.031 225859 DEBUG oslo.service.loopingcall [None req-de8ab48f-f90f-41dc-bd39-b1222b6e5a67 f6f144f1d330427e82e84c891e9a8a89 4be5b75b5dcb4eeea9759f7c4a779ffa - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 20 14:56:18 compute-1 nova_compute[225855]: 2026-01-20 14:56:18.031 225859 DEBUG nova.compute.manager [-] [instance: f6f09d34-bc44-451f-98e2-1b0701aeab3a] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 20 14:56:18 compute-1 nova_compute[225855]: 2026-01-20 14:56:18.032 225859 DEBUG nova.network.neutron [-] [instance: f6f09d34-bc44-451f-98e2-1b0701aeab3a] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 20 14:56:18 compute-1 ceph-mon[81775]: pgmap v2114: 321 pgs: 321 active+clean; 339 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 2.2 MiB/s wr, 224 op/s
Jan 20 14:56:18 compute-1 nova_compute[225855]: 2026-01-20 14:56:18.664 225859 DEBUG nova.compute.manager [req-0f4035ba-b83f-4459-867a-94982d716b05 req-7b02a496-dfb6-40b8-bb95-e62a3cd6b7c3 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f6f09d34-bc44-451f-98e2-1b0701aeab3a] Received event network-vif-unplugged-73ed9acf-a178-4d9c-98a3-25f22489d41d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:56:18 compute-1 nova_compute[225855]: 2026-01-20 14:56:18.664 225859 DEBUG oslo_concurrency.lockutils [req-0f4035ba-b83f-4459-867a-94982d716b05 req-7b02a496-dfb6-40b8-bb95-e62a3cd6b7c3 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "f6f09d34-bc44-451f-98e2-1b0701aeab3a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:56:18 compute-1 nova_compute[225855]: 2026-01-20 14:56:18.665 225859 DEBUG oslo_concurrency.lockutils [req-0f4035ba-b83f-4459-867a-94982d716b05 req-7b02a496-dfb6-40b8-bb95-e62a3cd6b7c3 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "f6f09d34-bc44-451f-98e2-1b0701aeab3a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:56:18 compute-1 nova_compute[225855]: 2026-01-20 14:56:18.665 225859 DEBUG oslo_concurrency.lockutils [req-0f4035ba-b83f-4459-867a-94982d716b05 req-7b02a496-dfb6-40b8-bb95-e62a3cd6b7c3 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "f6f09d34-bc44-451f-98e2-1b0701aeab3a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:56:18 compute-1 nova_compute[225855]: 2026-01-20 14:56:18.665 225859 DEBUG nova.compute.manager [req-0f4035ba-b83f-4459-867a-94982d716b05 req-7b02a496-dfb6-40b8-bb95-e62a3cd6b7c3 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f6f09d34-bc44-451f-98e2-1b0701aeab3a] No waiting events found dispatching network-vif-unplugged-73ed9acf-a178-4d9c-98a3-25f22489d41d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:56:18 compute-1 nova_compute[225855]: 2026-01-20 14:56:18.665 225859 DEBUG nova.compute.manager [req-0f4035ba-b83f-4459-867a-94982d716b05 req-7b02a496-dfb6-40b8-bb95-e62a3cd6b7c3 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f6f09d34-bc44-451f-98e2-1b0701aeab3a] Received event network-vif-unplugged-73ed9acf-a178-4d9c-98a3-25f22489d41d for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 20 14:56:18 compute-1 nova_compute[225855]: 2026-01-20 14:56:18.665 225859 DEBUG nova.compute.manager [req-0f4035ba-b83f-4459-867a-94982d716b05 req-7b02a496-dfb6-40b8-bb95-e62a3cd6b7c3 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f6f09d34-bc44-451f-98e2-1b0701aeab3a] Received event network-vif-plugged-73ed9acf-a178-4d9c-98a3-25f22489d41d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:56:18 compute-1 nova_compute[225855]: 2026-01-20 14:56:18.666 225859 DEBUG oslo_concurrency.lockutils [req-0f4035ba-b83f-4459-867a-94982d716b05 req-7b02a496-dfb6-40b8-bb95-e62a3cd6b7c3 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "f6f09d34-bc44-451f-98e2-1b0701aeab3a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:56:18 compute-1 nova_compute[225855]: 2026-01-20 14:56:18.666 225859 DEBUG oslo_concurrency.lockutils [req-0f4035ba-b83f-4459-867a-94982d716b05 req-7b02a496-dfb6-40b8-bb95-e62a3cd6b7c3 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "f6f09d34-bc44-451f-98e2-1b0701aeab3a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:56:18 compute-1 nova_compute[225855]: 2026-01-20 14:56:18.666 225859 DEBUG oslo_concurrency.lockutils [req-0f4035ba-b83f-4459-867a-94982d716b05 req-7b02a496-dfb6-40b8-bb95-e62a3cd6b7c3 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "f6f09d34-bc44-451f-98e2-1b0701aeab3a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:56:18 compute-1 nova_compute[225855]: 2026-01-20 14:56:18.666 225859 DEBUG nova.compute.manager [req-0f4035ba-b83f-4459-867a-94982d716b05 req-7b02a496-dfb6-40b8-bb95-e62a3cd6b7c3 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f6f09d34-bc44-451f-98e2-1b0701aeab3a] No waiting events found dispatching network-vif-plugged-73ed9acf-a178-4d9c-98a3-25f22489d41d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:56:18 compute-1 nova_compute[225855]: 2026-01-20 14:56:18.666 225859 WARNING nova.compute.manager [req-0f4035ba-b83f-4459-867a-94982d716b05 req-7b02a496-dfb6-40b8-bb95-e62a3cd6b7c3 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f6f09d34-bc44-451f-98e2-1b0701aeab3a] Received unexpected event network-vif-plugged-73ed9acf-a178-4d9c-98a3-25f22489d41d for instance with vm_state active and task_state deleting.
Jan 20 14:56:19 compute-1 nova_compute[225855]: 2026-01-20 14:56:19.109 225859 DEBUG nova.network.neutron [-] [instance: f6f09d34-bc44-451f-98e2-1b0701aeab3a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:56:19 compute-1 nova_compute[225855]: 2026-01-20 14:56:19.221 225859 DEBUG nova.compute.manager [req-81172894-9c40-473f-9415-1e65f32394b5 req-9b0e9c94-fa57-4734-94cd-5f50387e025f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f6f09d34-bc44-451f-98e2-1b0701aeab3a] Received event network-vif-deleted-73ed9acf-a178-4d9c-98a3-25f22489d41d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:56:19 compute-1 nova_compute[225855]: 2026-01-20 14:56:19.222 225859 INFO nova.compute.manager [req-81172894-9c40-473f-9415-1e65f32394b5 req-9b0e9c94-fa57-4734-94cd-5f50387e025f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f6f09d34-bc44-451f-98e2-1b0701aeab3a] Neutron deleted interface 73ed9acf-a178-4d9c-98a3-25f22489d41d; detaching it from the instance and deleting it from the info cache
Jan 20 14:56:19 compute-1 nova_compute[225855]: 2026-01-20 14:56:19.222 225859 DEBUG nova.network.neutron [req-81172894-9c40-473f-9415-1e65f32394b5 req-9b0e9c94-fa57-4734-94cd-5f50387e025f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f6f09d34-bc44-451f-98e2-1b0701aeab3a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:56:19 compute-1 nova_compute[225855]: 2026-01-20 14:56:19.244 225859 INFO nova.compute.manager [-] [instance: f6f09d34-bc44-451f-98e2-1b0701aeab3a] Took 1.21 seconds to deallocate network for instance.
Jan 20 14:56:19 compute-1 nova_compute[225855]: 2026-01-20 14:56:19.257 225859 DEBUG nova.compute.manager [req-81172894-9c40-473f-9415-1e65f32394b5 req-9b0e9c94-fa57-4734-94cd-5f50387e025f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f6f09d34-bc44-451f-98e2-1b0701aeab3a] Detach interface failed, port_id=73ed9acf-a178-4d9c-98a3-25f22489d41d, reason: Instance f6f09d34-bc44-451f-98e2-1b0701aeab3a could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Jan 20 14:56:19 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:56:19 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:56:19 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:56:19.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:56:19 compute-1 nova_compute[225855]: 2026-01-20 14:56:19.412 225859 DEBUG oslo_concurrency.lockutils [None req-de8ab48f-f90f-41dc-bd39-b1222b6e5a67 f6f144f1d330427e82e84c891e9a8a89 4be5b75b5dcb4eeea9759f7c4a779ffa - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:56:19 compute-1 nova_compute[225855]: 2026-01-20 14:56:19.413 225859 DEBUG oslo_concurrency.lockutils [None req-de8ab48f-f90f-41dc-bd39-b1222b6e5a67 f6f144f1d330427e82e84c891e9a8a89 4be5b75b5dcb4eeea9759f7c4a779ffa - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:56:19 compute-1 nova_compute[225855]: 2026-01-20 14:56:19.472 225859 DEBUG oslo_concurrency.processutils [None req-de8ab48f-f90f-41dc-bd39-b1222b6e5a67 f6f144f1d330427e82e84c891e9a8a89 4be5b75b5dcb4eeea9759f7c4a779ffa - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:56:19 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:56:19 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3206897350' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:56:19 compute-1 nova_compute[225855]: 2026-01-20 14:56:19.895 225859 DEBUG oslo_concurrency.processutils [None req-de8ab48f-f90f-41dc-bd39-b1222b6e5a67 f6f144f1d330427e82e84c891e9a8a89 4be5b75b5dcb4eeea9759f7c4a779ffa - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.423s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:56:19 compute-1 nova_compute[225855]: 2026-01-20 14:56:19.900 225859 DEBUG nova.compute.provider_tree [None req-de8ab48f-f90f-41dc-bd39-b1222b6e5a67 f6f144f1d330427e82e84c891e9a8a89 4be5b75b5dcb4eeea9759f7c4a779ffa - - default default] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:56:19 compute-1 nova_compute[225855]: 2026-01-20 14:56:19.918 225859 DEBUG nova.scheduler.client.report [None req-de8ab48f-f90f-41dc-bd39-b1222b6e5a67 f6f144f1d330427e82e84c891e9a8a89 4be5b75b5dcb4eeea9759f7c4a779ffa - - default default] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:56:19 compute-1 nova_compute[225855]: 2026-01-20 14:56:19.980 225859 DEBUG oslo_concurrency.lockutils [None req-de8ab48f-f90f-41dc-bd39-b1222b6e5a67 f6f144f1d330427e82e84c891e9a8a89 4be5b75b5dcb4eeea9759f7c4a779ffa - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.567s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:56:19 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/3206897350' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:56:20 compute-1 nova_compute[225855]: 2026-01-20 14:56:20.012 225859 INFO nova.scheduler.client.report [None req-de8ab48f-f90f-41dc-bd39-b1222b6e5a67 f6f144f1d330427e82e84c891e9a8a89 4be5b75b5dcb4eeea9759f7c4a779ffa - - default default] Deleted allocations for instance f6f09d34-bc44-451f-98e2-1b0701aeab3a
Jan 20 14:56:20 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:56:20 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:56:20 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:56:20.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:56:20 compute-1 nova_compute[225855]: 2026-01-20 14:56:20.082 225859 DEBUG oslo_concurrency.lockutils [None req-de8ab48f-f90f-41dc-bd39-b1222b6e5a67 f6f144f1d330427e82e84c891e9a8a89 4be5b75b5dcb4eeea9759f7c4a779ffa - - default default] Lock "f6f09d34-bc44-451f-98e2-1b0701aeab3a" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.025s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:56:20 compute-1 nova_compute[225855]: 2026-01-20 14:56:20.816 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:56:21 compute-1 ceph-mon[81775]: pgmap v2115: 321 pgs: 321 active+clean; 332 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 4.5 MiB/s rd, 1.7 MiB/s wr, 275 op/s
Jan 20 14:56:21 compute-1 nova_compute[225855]: 2026-01-20 14:56:21.076 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:56:21 compute-1 nova_compute[225855]: 2026-01-20 14:56:21.231 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:56:21 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:56:21 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:56:21 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:56:21.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:56:22 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:56:22 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:56:22 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:56:22.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:56:22 compute-1 nova_compute[225855]: 2026-01-20 14:56:22.432 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:56:22 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:56:22 compute-1 ceph-mon[81775]: pgmap v2116: 321 pgs: 321 active+clean; 304 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 5.3 MiB/s rd, 1.4 MiB/s wr, 291 op/s
Jan 20 14:56:23 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:56:23 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:56:23 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:56:23.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:56:23 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/539267138' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:56:24 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:56:24 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:56:24 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:56:24.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:56:24 compute-1 nova_compute[225855]: 2026-01-20 14:56:24.431 225859 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1768920969.4307125, 3bec73f6-5255-44c0-8a10-a64c7e86c0c2 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:56:24 compute-1 nova_compute[225855]: 2026-01-20 14:56:24.432 225859 INFO nova.compute.manager [-] [instance: 3bec73f6-5255-44c0-8a10-a64c7e86c0c2] VM Stopped (Lifecycle Event)
Jan 20 14:56:24 compute-1 nova_compute[225855]: 2026-01-20 14:56:24.454 225859 DEBUG nova.compute.manager [None req-98c1837b-e595-4558-9b3f-be532e154f51 - - - - - -] [instance: 3bec73f6-5255-44c0-8a10-a64c7e86c0c2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:56:24 compute-1 ceph-mon[81775]: pgmap v2117: 321 pgs: 321 active+clean; 276 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 1.3 MiB/s wr, 311 op/s
Jan 20 14:56:24 compute-1 sudo[277222]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:56:24 compute-1 sudo[277222]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:56:24 compute-1 sudo[277222]: pam_unix(sudo:session): session closed for user root
Jan 20 14:56:24 compute-1 sudo[277247]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:56:24 compute-1 sudo[277247]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:56:24 compute-1 sudo[277247]: pam_unix(sudo:session): session closed for user root
Jan 20 14:56:24 compute-1 sudo[277272]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:56:24 compute-1 sudo[277272]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:56:24 compute-1 sudo[277272]: pam_unix(sudo:session): session closed for user root
Jan 20 14:56:24 compute-1 sudo[277297]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 20 14:56:24 compute-1 sudo[277297]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:56:25 compute-1 sudo[277297]: pam_unix(sudo:session): session closed for user root
Jan 20 14:56:25 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:56:25 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:56:25 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:56:25.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:56:25 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:56:25 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 14:56:25 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:56:25 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 14:56:25 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 14:56:25 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:56:26 compute-1 podman[277355]: 2026-01-20 14:56:26.012148394 +0000 UTC m=+0.054754087 container health_status 533cd8e0c3b8b7f910f15cef49e7f879ab46c12ce646a35628c15f13725eb500 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent)
Jan 20 14:56:26 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:56:26 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:56:26 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:56:26.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:56:26 compute-1 nova_compute[225855]: 2026-01-20 14:56:26.234 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:56:26 compute-1 ceph-mon[81775]: pgmap v2118: 321 pgs: 321 active+clean; 278 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 5.5 MiB/s rd, 2.1 MiB/s wr, 328 op/s
Jan 20 14:56:27 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:56:27 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:56:27 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:56:27.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:56:27 compute-1 nova_compute[225855]: 2026-01-20 14:56:27.434 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:56:27 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:56:27 compute-1 nova_compute[225855]: 2026-01-20 14:56:27.500 225859 DEBUG oslo_concurrency.lockutils [None req-dace9bde-26dd-4f65-90a1-a36f1180201a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Acquiring lock "e32ecf59-145a-4ae9-a91e-288419407cd0" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:56:27 compute-1 nova_compute[225855]: 2026-01-20 14:56:27.501 225859 DEBUG oslo_concurrency.lockutils [None req-dace9bde-26dd-4f65-90a1-a36f1180201a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Lock "e32ecf59-145a-4ae9-a91e-288419407cd0" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:56:27 compute-1 nova_compute[225855]: 2026-01-20 14:56:27.531 225859 DEBUG nova.compute.manager [None req-dace9bde-26dd-4f65-90a1-a36f1180201a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: e32ecf59-145a-4ae9-a91e-288419407cd0] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 20 14:56:27 compute-1 nova_compute[225855]: 2026-01-20 14:56:27.640 225859 DEBUG oslo_concurrency.lockutils [None req-dace9bde-26dd-4f65-90a1-a36f1180201a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:56:27 compute-1 nova_compute[225855]: 2026-01-20 14:56:27.640 225859 DEBUG oslo_concurrency.lockutils [None req-dace9bde-26dd-4f65-90a1-a36f1180201a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:56:27 compute-1 nova_compute[225855]: 2026-01-20 14:56:27.646 225859 DEBUG nova.virt.hardware [None req-dace9bde-26dd-4f65-90a1-a36f1180201a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 20 14:56:27 compute-1 nova_compute[225855]: 2026-01-20 14:56:27.646 225859 INFO nova.compute.claims [None req-dace9bde-26dd-4f65-90a1-a36f1180201a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: e32ecf59-145a-4ae9-a91e-288419407cd0] Claim successful on node compute-1.ctlplane.example.com
Jan 20 14:56:27 compute-1 nova_compute[225855]: 2026-01-20 14:56:27.762 225859 DEBUG oslo_concurrency.processutils [None req-dace9bde-26dd-4f65-90a1-a36f1180201a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:56:28 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:56:28 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:56:28 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:56:28.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:56:28 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:56:28 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1100107449' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:56:28 compute-1 nova_compute[225855]: 2026-01-20 14:56:28.173 225859 DEBUG oslo_concurrency.processutils [None req-dace9bde-26dd-4f65-90a1-a36f1180201a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.411s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:56:28 compute-1 nova_compute[225855]: 2026-01-20 14:56:28.179 225859 DEBUG nova.compute.provider_tree [None req-dace9bde-26dd-4f65-90a1-a36f1180201a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:56:28 compute-1 nova_compute[225855]: 2026-01-20 14:56:28.196 225859 DEBUG nova.scheduler.client.report [None req-dace9bde-26dd-4f65-90a1-a36f1180201a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:56:28 compute-1 nova_compute[225855]: 2026-01-20 14:56:28.228 225859 DEBUG oslo_concurrency.lockutils [None req-dace9bde-26dd-4f65-90a1-a36f1180201a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.587s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:56:28 compute-1 nova_compute[225855]: 2026-01-20 14:56:28.229 225859 DEBUG nova.compute.manager [None req-dace9bde-26dd-4f65-90a1-a36f1180201a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: e32ecf59-145a-4ae9-a91e-288419407cd0] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 20 14:56:28 compute-1 nova_compute[225855]: 2026-01-20 14:56:28.286 225859 DEBUG nova.compute.manager [None req-dace9bde-26dd-4f65-90a1-a36f1180201a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: e32ecf59-145a-4ae9-a91e-288419407cd0] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 20 14:56:28 compute-1 nova_compute[225855]: 2026-01-20 14:56:28.287 225859 DEBUG nova.network.neutron [None req-dace9bde-26dd-4f65-90a1-a36f1180201a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: e32ecf59-145a-4ae9-a91e-288419407cd0] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 20 14:56:28 compute-1 nova_compute[225855]: 2026-01-20 14:56:28.311 225859 INFO nova.virt.libvirt.driver [None req-dace9bde-26dd-4f65-90a1-a36f1180201a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: e32ecf59-145a-4ae9-a91e-288419407cd0] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 20 14:56:28 compute-1 nova_compute[225855]: 2026-01-20 14:56:28.328 225859 DEBUG nova.compute.manager [None req-dace9bde-26dd-4f65-90a1-a36f1180201a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: e32ecf59-145a-4ae9-a91e-288419407cd0] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 20 14:56:28 compute-1 nova_compute[225855]: 2026-01-20 14:56:28.418 225859 DEBUG nova.compute.manager [None req-dace9bde-26dd-4f65-90a1-a36f1180201a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: e32ecf59-145a-4ae9-a91e-288419407cd0] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 20 14:56:28 compute-1 nova_compute[225855]: 2026-01-20 14:56:28.419 225859 DEBUG nova.virt.libvirt.driver [None req-dace9bde-26dd-4f65-90a1-a36f1180201a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: e32ecf59-145a-4ae9-a91e-288419407cd0] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 20 14:56:28 compute-1 nova_compute[225855]: 2026-01-20 14:56:28.420 225859 INFO nova.virt.libvirt.driver [None req-dace9bde-26dd-4f65-90a1-a36f1180201a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: e32ecf59-145a-4ae9-a91e-288419407cd0] Creating image(s)
Jan 20 14:56:28 compute-1 nova_compute[225855]: 2026-01-20 14:56:28.446 225859 DEBUG nova.storage.rbd_utils [None req-dace9bde-26dd-4f65-90a1-a36f1180201a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] rbd image e32ecf59-145a-4ae9-a91e-288419407cd0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:56:28 compute-1 nova_compute[225855]: 2026-01-20 14:56:28.473 225859 DEBUG nova.storage.rbd_utils [None req-dace9bde-26dd-4f65-90a1-a36f1180201a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] rbd image e32ecf59-145a-4ae9-a91e-288419407cd0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:56:28 compute-1 nova_compute[225855]: 2026-01-20 14:56:28.499 225859 DEBUG nova.storage.rbd_utils [None req-dace9bde-26dd-4f65-90a1-a36f1180201a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] rbd image e32ecf59-145a-4ae9-a91e-288419407cd0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:56:28 compute-1 nova_compute[225855]: 2026-01-20 14:56:28.502 225859 DEBUG oslo_concurrency.processutils [None req-dace9bde-26dd-4f65-90a1-a36f1180201a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:56:28 compute-1 nova_compute[225855]: 2026-01-20 14:56:28.562 225859 DEBUG oslo_concurrency.processutils [None req-dace9bde-26dd-4f65-90a1-a36f1180201a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:56:28 compute-1 nova_compute[225855]: 2026-01-20 14:56:28.563 225859 DEBUG oslo_concurrency.lockutils [None req-dace9bde-26dd-4f65-90a1-a36f1180201a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Acquiring lock "82d5c1918fd7c974214c7a48c1793a7a82560462" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:56:28 compute-1 nova_compute[225855]: 2026-01-20 14:56:28.564 225859 DEBUG oslo_concurrency.lockutils [None req-dace9bde-26dd-4f65-90a1-a36f1180201a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:56:28 compute-1 nova_compute[225855]: 2026-01-20 14:56:28.564 225859 DEBUG oslo_concurrency.lockutils [None req-dace9bde-26dd-4f65-90a1-a36f1180201a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:56:28 compute-1 nova_compute[225855]: 2026-01-20 14:56:28.589 225859 DEBUG nova.storage.rbd_utils [None req-dace9bde-26dd-4f65-90a1-a36f1180201a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] rbd image e32ecf59-145a-4ae9-a91e-288419407cd0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:56:28 compute-1 nova_compute[225855]: 2026-01-20 14:56:28.593 225859 DEBUG oslo_concurrency.processutils [None req-dace9bde-26dd-4f65-90a1-a36f1180201a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 e32ecf59-145a-4ae9-a91e-288419407cd0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:56:28 compute-1 nova_compute[225855]: 2026-01-20 14:56:28.913 225859 DEBUG oslo_concurrency.processutils [None req-dace9bde-26dd-4f65-90a1-a36f1180201a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 e32ecf59-145a-4ae9-a91e-288419407cd0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.319s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:56:28 compute-1 nova_compute[225855]: 2026-01-20 14:56:28.995 225859 DEBUG nova.storage.rbd_utils [None req-dace9bde-26dd-4f65-90a1-a36f1180201a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] resizing rbd image e32ecf59-145a-4ae9-a91e-288419407cd0_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 20 14:56:29 compute-1 nova_compute[225855]: 2026-01-20 14:56:29.097 225859 DEBUG nova.objects.instance [None req-dace9bde-26dd-4f65-90a1-a36f1180201a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Lazy-loading 'migration_context' on Instance uuid e32ecf59-145a-4ae9-a91e-288419407cd0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:56:29 compute-1 ceph-mon[81775]: pgmap v2119: 321 pgs: 321 active+clean; 279 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 2.1 MiB/s wr, 231 op/s
Jan 20 14:56:29 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/1100107449' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:56:29 compute-1 nova_compute[225855]: 2026-01-20 14:56:29.126 225859 DEBUG nova.virt.libvirt.driver [None req-dace9bde-26dd-4f65-90a1-a36f1180201a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: e32ecf59-145a-4ae9-a91e-288419407cd0] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 20 14:56:29 compute-1 nova_compute[225855]: 2026-01-20 14:56:29.127 225859 DEBUG nova.virt.libvirt.driver [None req-dace9bde-26dd-4f65-90a1-a36f1180201a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: e32ecf59-145a-4ae9-a91e-288419407cd0] Ensure instance console log exists: /var/lib/nova/instances/e32ecf59-145a-4ae9-a91e-288419407cd0/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 20 14:56:29 compute-1 nova_compute[225855]: 2026-01-20 14:56:29.127 225859 DEBUG oslo_concurrency.lockutils [None req-dace9bde-26dd-4f65-90a1-a36f1180201a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:56:29 compute-1 nova_compute[225855]: 2026-01-20 14:56:29.127 225859 DEBUG oslo_concurrency.lockutils [None req-dace9bde-26dd-4f65-90a1-a36f1180201a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:56:29 compute-1 nova_compute[225855]: 2026-01-20 14:56:29.128 225859 DEBUG oslo_concurrency.lockutils [None req-dace9bde-26dd-4f65-90a1-a36f1180201a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:56:29 compute-1 nova_compute[225855]: 2026-01-20 14:56:29.298 225859 DEBUG nova.policy [None req-dace9bde-26dd-4f65-90a1-a36f1180201a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '395a5c503218411284bc94c45263d1fb', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ca6cd0afe0ab41e3ab36d21a4129f734', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 20 14:56:29 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:56:29 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:56:29 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:56:29.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:56:30 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:56:30 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:56:30 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:56:30.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:56:31 compute-1 ceph-mon[81775]: pgmap v2120: 321 pgs: 321 active+clean; 289 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.3 MiB/s wr, 214 op/s
Jan 20 14:56:31 compute-1 nova_compute[225855]: 2026-01-20 14:56:31.236 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:56:31 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:56:31 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:56:31 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:56:31.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:56:31 compute-1 sudo[277563]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:56:31 compute-1 nova_compute[225855]: 2026-01-20 14:56:31.565 225859 DEBUG nova.network.neutron [None req-dace9bde-26dd-4f65-90a1-a36f1180201a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: e32ecf59-145a-4ae9-a91e-288419407cd0] Successfully created port: 5909a21f-c1fb-4265-a7de-a6b0e6136194 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 20 14:56:31 compute-1 sudo[277563]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:56:31 compute-1 sudo[277563]: pam_unix(sudo:session): session closed for user root
Jan 20 14:56:31 compute-1 sudo[277588]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 14:56:31 compute-1 sudo[277588]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:56:31 compute-1 sudo[277588]: pam_unix(sudo:session): session closed for user root
Jan 20 14:56:32 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:56:32 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:56:32 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:56:32.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:56:32 compute-1 nova_compute[225855]: 2026-01-20 14:56:32.301 225859 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1768920977.2993426, f6f09d34-bc44-451f-98e2-1b0701aeab3a => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:56:32 compute-1 nova_compute[225855]: 2026-01-20 14:56:32.301 225859 INFO nova.compute.manager [-] [instance: f6f09d34-bc44-451f-98e2-1b0701aeab3a] VM Stopped (Lifecycle Event)
Jan 20 14:56:32 compute-1 nova_compute[225855]: 2026-01-20 14:56:32.324 225859 DEBUG nova.compute.manager [None req-52c34ae0-8357-49ec-b3f8-749e5b63a289 - - - - - -] [instance: f6f09d34-bc44-451f-98e2-1b0701aeab3a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:56:32 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:56:32 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:56:32 compute-1 ceph-mon[81775]: pgmap v2121: 321 pgs: 321 active+clean; 310 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 3.1 MiB/s wr, 173 op/s
Jan 20 14:56:32 compute-1 nova_compute[225855]: 2026-01-20 14:56:32.415 225859 DEBUG nova.network.neutron [None req-dace9bde-26dd-4f65-90a1-a36f1180201a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: e32ecf59-145a-4ae9-a91e-288419407cd0] Successfully updated port: 5909a21f-c1fb-4265-a7de-a6b0e6136194 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 20 14:56:32 compute-1 nova_compute[225855]: 2026-01-20 14:56:32.434 225859 DEBUG oslo_concurrency.lockutils [None req-dace9bde-26dd-4f65-90a1-a36f1180201a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Acquiring lock "refresh_cache-e32ecf59-145a-4ae9-a91e-288419407cd0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:56:32 compute-1 nova_compute[225855]: 2026-01-20 14:56:32.434 225859 DEBUG oslo_concurrency.lockutils [None req-dace9bde-26dd-4f65-90a1-a36f1180201a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Acquired lock "refresh_cache-e32ecf59-145a-4ae9-a91e-288419407cd0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:56:32 compute-1 nova_compute[225855]: 2026-01-20 14:56:32.435 225859 DEBUG nova.network.neutron [None req-dace9bde-26dd-4f65-90a1-a36f1180201a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: e32ecf59-145a-4ae9-a91e-288419407cd0] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 14:56:32 compute-1 nova_compute[225855]: 2026-01-20 14:56:32.436 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:56:32 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:56:32 compute-1 nova_compute[225855]: 2026-01-20 14:56:32.548 225859 DEBUG nova.compute.manager [req-663c50a3-d523-449b-a9bb-b5dfb198dff5 req-dcb08139-88cf-441d-a9f5-3baa9f6754bb 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: e32ecf59-145a-4ae9-a91e-288419407cd0] Received event network-changed-5909a21f-c1fb-4265-a7de-a6b0e6136194 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:56:32 compute-1 nova_compute[225855]: 2026-01-20 14:56:32.549 225859 DEBUG nova.compute.manager [req-663c50a3-d523-449b-a9bb-b5dfb198dff5 req-dcb08139-88cf-441d-a9f5-3baa9f6754bb 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: e32ecf59-145a-4ae9-a91e-288419407cd0] Refreshing instance network info cache due to event network-changed-5909a21f-c1fb-4265-a7de-a6b0e6136194. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 14:56:32 compute-1 nova_compute[225855]: 2026-01-20 14:56:32.549 225859 DEBUG oslo_concurrency.lockutils [req-663c50a3-d523-449b-a9bb-b5dfb198dff5 req-dcb08139-88cf-441d-a9f5-3baa9f6754bb 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-e32ecf59-145a-4ae9-a91e-288419407cd0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:56:32 compute-1 sudo[277614]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:56:32 compute-1 sudo[277614]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:56:32 compute-1 sudo[277614]: pam_unix(sudo:session): session closed for user root
Jan 20 14:56:32 compute-1 nova_compute[225855]: 2026-01-20 14:56:32.851 225859 DEBUG nova.network.neutron [None req-dace9bde-26dd-4f65-90a1-a36f1180201a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: e32ecf59-145a-4ae9-a91e-288419407cd0] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 20 14:56:32 compute-1 sudo[277639]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:56:32 compute-1 sudo[277639]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:56:32 compute-1 sudo[277639]: pam_unix(sudo:session): session closed for user root
Jan 20 14:56:33 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/315137130' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:56:33 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:56:33 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:56:33 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:56:33.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:56:33 compute-1 nova_compute[225855]: 2026-01-20 14:56:33.720 225859 DEBUG nova.network.neutron [None req-dace9bde-26dd-4f65-90a1-a36f1180201a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: e32ecf59-145a-4ae9-a91e-288419407cd0] Updating instance_info_cache with network_info: [{"id": "5909a21f-c1fb-4265-a7de-a6b0e6136194", "address": "fa:16:3e:ac:7a:cf", "network": {"id": "f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c", "bridge": "br-int", "label": "tempest-ServersTestJSON-1745321011-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca6cd0afe0ab41e3ab36d21a4129f734", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5909a21f-c1", "ovs_interfaceid": "5909a21f-c1fb-4265-a7de-a6b0e6136194", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:56:33 compute-1 nova_compute[225855]: 2026-01-20 14:56:33.749 225859 DEBUG oslo_concurrency.lockutils [None req-dace9bde-26dd-4f65-90a1-a36f1180201a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Releasing lock "refresh_cache-e32ecf59-145a-4ae9-a91e-288419407cd0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:56:33 compute-1 nova_compute[225855]: 2026-01-20 14:56:33.749 225859 DEBUG nova.compute.manager [None req-dace9bde-26dd-4f65-90a1-a36f1180201a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: e32ecf59-145a-4ae9-a91e-288419407cd0] Instance network_info: |[{"id": "5909a21f-c1fb-4265-a7de-a6b0e6136194", "address": "fa:16:3e:ac:7a:cf", "network": {"id": "f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c", "bridge": "br-int", "label": "tempest-ServersTestJSON-1745321011-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca6cd0afe0ab41e3ab36d21a4129f734", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5909a21f-c1", "ovs_interfaceid": "5909a21f-c1fb-4265-a7de-a6b0e6136194", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 20 14:56:33 compute-1 nova_compute[225855]: 2026-01-20 14:56:33.750 225859 DEBUG oslo_concurrency.lockutils [req-663c50a3-d523-449b-a9bb-b5dfb198dff5 req-dcb08139-88cf-441d-a9f5-3baa9f6754bb 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-e32ecf59-145a-4ae9-a91e-288419407cd0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:56:33 compute-1 nova_compute[225855]: 2026-01-20 14:56:33.750 225859 DEBUG nova.network.neutron [req-663c50a3-d523-449b-a9bb-b5dfb198dff5 req-dcb08139-88cf-441d-a9f5-3baa9f6754bb 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: e32ecf59-145a-4ae9-a91e-288419407cd0] Refreshing network info cache for port 5909a21f-c1fb-4265-a7de-a6b0e6136194 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 14:56:33 compute-1 nova_compute[225855]: 2026-01-20 14:56:33.753 225859 DEBUG nova.virt.libvirt.driver [None req-dace9bde-26dd-4f65-90a1-a36f1180201a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: e32ecf59-145a-4ae9-a91e-288419407cd0] Start _get_guest_xml network_info=[{"id": "5909a21f-c1fb-4265-a7de-a6b0e6136194", "address": "fa:16:3e:ac:7a:cf", "network": {"id": "f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c", "bridge": "br-int", "label": "tempest-ServersTestJSON-1745321011-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca6cd0afe0ab41e3ab36d21a4129f734", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5909a21f-c1", "ovs_interfaceid": "5909a21f-c1fb-4265-a7de-a6b0e6136194", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'device_type': 'disk', 'encryption_options': None, 'size': 0, 'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'guest_format': None, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'image_id': 'a32b3e07-16d8-46fd-9a7b-c242c432fcf9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 20 14:56:33 compute-1 nova_compute[225855]: 2026-01-20 14:56:33.758 225859 WARNING nova.virt.libvirt.driver [None req-dace9bde-26dd-4f65-90a1-a36f1180201a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 14:56:33 compute-1 nova_compute[225855]: 2026-01-20 14:56:33.762 225859 DEBUG nova.virt.libvirt.host [None req-dace9bde-26dd-4f65-90a1-a36f1180201a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 20 14:56:33 compute-1 nova_compute[225855]: 2026-01-20 14:56:33.762 225859 DEBUG nova.virt.libvirt.host [None req-dace9bde-26dd-4f65-90a1-a36f1180201a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 20 14:56:33 compute-1 nova_compute[225855]: 2026-01-20 14:56:33.770 225859 DEBUG nova.virt.libvirt.host [None req-dace9bde-26dd-4f65-90a1-a36f1180201a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 20 14:56:33 compute-1 nova_compute[225855]: 2026-01-20 14:56:33.771 225859 DEBUG nova.virt.libvirt.host [None req-dace9bde-26dd-4f65-90a1-a36f1180201a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 20 14:56:33 compute-1 nova_compute[225855]: 2026-01-20 14:56:33.772 225859 DEBUG nova.virt.libvirt.driver [None req-dace9bde-26dd-4f65-90a1-a36f1180201a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 20 14:56:33 compute-1 nova_compute[225855]: 2026-01-20 14:56:33.772 225859 DEBUG nova.virt.hardware [None req-dace9bde-26dd-4f65-90a1-a36f1180201a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-20T14:21:55Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='522deaab-a741-4dbb-932d-d8b13a211c33',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 20 14:56:33 compute-1 nova_compute[225855]: 2026-01-20 14:56:33.772 225859 DEBUG nova.virt.hardware [None req-dace9bde-26dd-4f65-90a1-a36f1180201a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 20 14:56:33 compute-1 nova_compute[225855]: 2026-01-20 14:56:33.773 225859 DEBUG nova.virt.hardware [None req-dace9bde-26dd-4f65-90a1-a36f1180201a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 20 14:56:33 compute-1 nova_compute[225855]: 2026-01-20 14:56:33.773 225859 DEBUG nova.virt.hardware [None req-dace9bde-26dd-4f65-90a1-a36f1180201a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 20 14:56:33 compute-1 nova_compute[225855]: 2026-01-20 14:56:33.773 225859 DEBUG nova.virt.hardware [None req-dace9bde-26dd-4f65-90a1-a36f1180201a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 20 14:56:33 compute-1 nova_compute[225855]: 2026-01-20 14:56:33.773 225859 DEBUG nova.virt.hardware [None req-dace9bde-26dd-4f65-90a1-a36f1180201a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 20 14:56:33 compute-1 nova_compute[225855]: 2026-01-20 14:56:33.774 225859 DEBUG nova.virt.hardware [None req-dace9bde-26dd-4f65-90a1-a36f1180201a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 20 14:56:33 compute-1 nova_compute[225855]: 2026-01-20 14:56:33.774 225859 DEBUG nova.virt.hardware [None req-dace9bde-26dd-4f65-90a1-a36f1180201a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 20 14:56:33 compute-1 nova_compute[225855]: 2026-01-20 14:56:33.774 225859 DEBUG nova.virt.hardware [None req-dace9bde-26dd-4f65-90a1-a36f1180201a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 20 14:56:33 compute-1 nova_compute[225855]: 2026-01-20 14:56:33.774 225859 DEBUG nova.virt.hardware [None req-dace9bde-26dd-4f65-90a1-a36f1180201a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 20 14:56:33 compute-1 nova_compute[225855]: 2026-01-20 14:56:33.775 225859 DEBUG nova.virt.hardware [None req-dace9bde-26dd-4f65-90a1-a36f1180201a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 20 14:56:33 compute-1 nova_compute[225855]: 2026-01-20 14:56:33.777 225859 DEBUG oslo_concurrency.processutils [None req-dace9bde-26dd-4f65-90a1-a36f1180201a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:56:34 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:56:34 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:56:34 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:56:34.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:56:34 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:56:34 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2907952302' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:56:34 compute-1 nova_compute[225855]: 2026-01-20 14:56:34.251 225859 DEBUG oslo_concurrency.processutils [None req-dace9bde-26dd-4f65-90a1-a36f1180201a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:56:34 compute-1 nova_compute[225855]: 2026-01-20 14:56:34.279 225859 DEBUG nova.storage.rbd_utils [None req-dace9bde-26dd-4f65-90a1-a36f1180201a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] rbd image e32ecf59-145a-4ae9-a91e-288419407cd0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:56:34 compute-1 nova_compute[225855]: 2026-01-20 14:56:34.284 225859 DEBUG oslo_concurrency.processutils [None req-dace9bde-26dd-4f65-90a1-a36f1180201a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:56:35 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:56:35 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:56:35 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:56:35.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:56:35 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/863845729' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 14:56:35 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/863845729' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 14:56:35 compute-1 ceph-mon[81775]: pgmap v2122: 321 pgs: 321 active+clean; 331 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 912 KiB/s rd, 4.1 MiB/s wr, 139 op/s
Jan 20 14:56:35 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/2907952302' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:56:35 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:56:35 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2104464304' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:56:35 compute-1 nova_compute[225855]: 2026-01-20 14:56:35.811 225859 DEBUG oslo_concurrency.processutils [None req-dace9bde-26dd-4f65-90a1-a36f1180201a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.527s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:56:35 compute-1 nova_compute[225855]: 2026-01-20 14:56:35.815 225859 DEBUG nova.virt.libvirt.vif [None req-dace9bde-26dd-4f65-90a1-a36f1180201a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T14:56:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1537565903',display_name='tempest-ServersTestJSON-server-1537565903',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverstestjson-server-1537565903',id=132,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ca6cd0afe0ab41e3ab36d21a4129f734',ramdisk_id='',reservation_id='r-iv93ouga',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-405461620',owner_user_name='tempest-ServersTestJSON-405461620-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T14:56:28Z,user_data=None,user_id='395a5c503218411284bc94c45263d1fb',uuid=e32ecf59-145a-4ae9-a91e-288419407cd0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5909a21f-c1fb-4265-a7de-a6b0e6136194", "address": "fa:16:3e:ac:7a:cf", "network": {"id": "f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c", "bridge": "br-int", "label": "tempest-ServersTestJSON-1745321011-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca6cd0afe0ab41e3ab36d21a4129f734", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5909a21f-c1", "ovs_interfaceid": "5909a21f-c1fb-4265-a7de-a6b0e6136194", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 20 14:56:35 compute-1 nova_compute[225855]: 2026-01-20 14:56:35.815 225859 DEBUG nova.network.os_vif_util [None req-dace9bde-26dd-4f65-90a1-a36f1180201a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Converting VIF {"id": "5909a21f-c1fb-4265-a7de-a6b0e6136194", "address": "fa:16:3e:ac:7a:cf", "network": {"id": "f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c", "bridge": "br-int", "label": "tempest-ServersTestJSON-1745321011-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca6cd0afe0ab41e3ab36d21a4129f734", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5909a21f-c1", "ovs_interfaceid": "5909a21f-c1fb-4265-a7de-a6b0e6136194", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:56:35 compute-1 nova_compute[225855]: 2026-01-20 14:56:35.817 225859 DEBUG nova.network.os_vif_util [None req-dace9bde-26dd-4f65-90a1-a36f1180201a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ac:7a:cf,bridge_name='br-int',has_traffic_filtering=True,id=5909a21f-c1fb-4265-a7de-a6b0e6136194,network=Network(f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5909a21f-c1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:56:35 compute-1 nova_compute[225855]: 2026-01-20 14:56:35.819 225859 DEBUG nova.objects.instance [None req-dace9bde-26dd-4f65-90a1-a36f1180201a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Lazy-loading 'pci_devices' on Instance uuid e32ecf59-145a-4ae9-a91e-288419407cd0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:56:35 compute-1 nova_compute[225855]: 2026-01-20 14:56:35.841 225859 DEBUG nova.virt.libvirt.driver [None req-dace9bde-26dd-4f65-90a1-a36f1180201a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: e32ecf59-145a-4ae9-a91e-288419407cd0] End _get_guest_xml xml=<domain type="kvm">
Jan 20 14:56:35 compute-1 nova_compute[225855]:   <uuid>e32ecf59-145a-4ae9-a91e-288419407cd0</uuid>
Jan 20 14:56:35 compute-1 nova_compute[225855]:   <name>instance-00000084</name>
Jan 20 14:56:35 compute-1 nova_compute[225855]:   <memory>131072</memory>
Jan 20 14:56:35 compute-1 nova_compute[225855]:   <vcpu>1</vcpu>
Jan 20 14:56:35 compute-1 nova_compute[225855]:   <metadata>
Jan 20 14:56:35 compute-1 nova_compute[225855]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 14:56:35 compute-1 nova_compute[225855]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 14:56:35 compute-1 nova_compute[225855]:       <nova:name>tempest-ServersTestJSON-server-1537565903</nova:name>
Jan 20 14:56:35 compute-1 nova_compute[225855]:       <nova:creationTime>2026-01-20 14:56:33</nova:creationTime>
Jan 20 14:56:35 compute-1 nova_compute[225855]:       <nova:flavor name="m1.nano">
Jan 20 14:56:35 compute-1 nova_compute[225855]:         <nova:memory>128</nova:memory>
Jan 20 14:56:35 compute-1 nova_compute[225855]:         <nova:disk>1</nova:disk>
Jan 20 14:56:35 compute-1 nova_compute[225855]:         <nova:swap>0</nova:swap>
Jan 20 14:56:35 compute-1 nova_compute[225855]:         <nova:ephemeral>0</nova:ephemeral>
Jan 20 14:56:35 compute-1 nova_compute[225855]:         <nova:vcpus>1</nova:vcpus>
Jan 20 14:56:35 compute-1 nova_compute[225855]:       </nova:flavor>
Jan 20 14:56:35 compute-1 nova_compute[225855]:       <nova:owner>
Jan 20 14:56:35 compute-1 nova_compute[225855]:         <nova:user uuid="395a5c503218411284bc94c45263d1fb">tempest-ServersTestJSON-405461620-project-member</nova:user>
Jan 20 14:56:35 compute-1 nova_compute[225855]:         <nova:project uuid="ca6cd0afe0ab41e3ab36d21a4129f734">tempest-ServersTestJSON-405461620</nova:project>
Jan 20 14:56:35 compute-1 nova_compute[225855]:       </nova:owner>
Jan 20 14:56:35 compute-1 nova_compute[225855]:       <nova:root type="image" uuid="a32b3e07-16d8-46fd-9a7b-c242c432fcf9"/>
Jan 20 14:56:35 compute-1 nova_compute[225855]:       <nova:ports>
Jan 20 14:56:35 compute-1 nova_compute[225855]:         <nova:port uuid="5909a21f-c1fb-4265-a7de-a6b0e6136194">
Jan 20 14:56:35 compute-1 nova_compute[225855]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Jan 20 14:56:35 compute-1 nova_compute[225855]:         </nova:port>
Jan 20 14:56:35 compute-1 nova_compute[225855]:       </nova:ports>
Jan 20 14:56:35 compute-1 nova_compute[225855]:     </nova:instance>
Jan 20 14:56:35 compute-1 nova_compute[225855]:   </metadata>
Jan 20 14:56:35 compute-1 nova_compute[225855]:   <sysinfo type="smbios">
Jan 20 14:56:35 compute-1 nova_compute[225855]:     <system>
Jan 20 14:56:35 compute-1 nova_compute[225855]:       <entry name="manufacturer">RDO</entry>
Jan 20 14:56:35 compute-1 nova_compute[225855]:       <entry name="product">OpenStack Compute</entry>
Jan 20 14:56:35 compute-1 nova_compute[225855]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 14:56:35 compute-1 nova_compute[225855]:       <entry name="serial">e32ecf59-145a-4ae9-a91e-288419407cd0</entry>
Jan 20 14:56:35 compute-1 nova_compute[225855]:       <entry name="uuid">e32ecf59-145a-4ae9-a91e-288419407cd0</entry>
Jan 20 14:56:35 compute-1 nova_compute[225855]:       <entry name="family">Virtual Machine</entry>
Jan 20 14:56:35 compute-1 nova_compute[225855]:     </system>
Jan 20 14:56:35 compute-1 nova_compute[225855]:   </sysinfo>
Jan 20 14:56:35 compute-1 nova_compute[225855]:   <os>
Jan 20 14:56:35 compute-1 nova_compute[225855]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 20 14:56:35 compute-1 nova_compute[225855]:     <boot dev="hd"/>
Jan 20 14:56:35 compute-1 nova_compute[225855]:     <smbios mode="sysinfo"/>
Jan 20 14:56:35 compute-1 nova_compute[225855]:   </os>
Jan 20 14:56:35 compute-1 nova_compute[225855]:   <features>
Jan 20 14:56:35 compute-1 nova_compute[225855]:     <acpi/>
Jan 20 14:56:35 compute-1 nova_compute[225855]:     <apic/>
Jan 20 14:56:35 compute-1 nova_compute[225855]:     <vmcoreinfo/>
Jan 20 14:56:35 compute-1 nova_compute[225855]:   </features>
Jan 20 14:56:35 compute-1 nova_compute[225855]:   <clock offset="utc">
Jan 20 14:56:35 compute-1 nova_compute[225855]:     <timer name="pit" tickpolicy="delay"/>
Jan 20 14:56:35 compute-1 nova_compute[225855]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 20 14:56:35 compute-1 nova_compute[225855]:     <timer name="hpet" present="no"/>
Jan 20 14:56:35 compute-1 nova_compute[225855]:   </clock>
Jan 20 14:56:35 compute-1 nova_compute[225855]:   <cpu mode="custom" match="exact">
Jan 20 14:56:35 compute-1 nova_compute[225855]:     <model>Nehalem</model>
Jan 20 14:56:35 compute-1 nova_compute[225855]:     <topology sockets="1" cores="1" threads="1"/>
Jan 20 14:56:35 compute-1 nova_compute[225855]:   </cpu>
Jan 20 14:56:35 compute-1 nova_compute[225855]:   <devices>
Jan 20 14:56:35 compute-1 nova_compute[225855]:     <disk type="network" device="disk">
Jan 20 14:56:35 compute-1 nova_compute[225855]:       <driver type="raw" cache="none"/>
Jan 20 14:56:35 compute-1 nova_compute[225855]:       <source protocol="rbd" name="vms/e32ecf59-145a-4ae9-a91e-288419407cd0_disk">
Jan 20 14:56:35 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:56:35 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:56:35 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:56:35 compute-1 nova_compute[225855]:       </source>
Jan 20 14:56:35 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 14:56:35 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:56:35 compute-1 nova_compute[225855]:       </auth>
Jan 20 14:56:35 compute-1 nova_compute[225855]:       <target dev="vda" bus="virtio"/>
Jan 20 14:56:35 compute-1 nova_compute[225855]:     </disk>
Jan 20 14:56:35 compute-1 nova_compute[225855]:     <disk type="network" device="cdrom">
Jan 20 14:56:35 compute-1 nova_compute[225855]:       <driver type="raw" cache="none"/>
Jan 20 14:56:35 compute-1 nova_compute[225855]:       <source protocol="rbd" name="vms/e32ecf59-145a-4ae9-a91e-288419407cd0_disk.config">
Jan 20 14:56:35 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:56:35 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:56:35 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:56:35 compute-1 nova_compute[225855]:       </source>
Jan 20 14:56:35 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 14:56:35 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:56:35 compute-1 nova_compute[225855]:       </auth>
Jan 20 14:56:35 compute-1 nova_compute[225855]:       <target dev="sda" bus="sata"/>
Jan 20 14:56:35 compute-1 nova_compute[225855]:     </disk>
Jan 20 14:56:35 compute-1 nova_compute[225855]:     <interface type="ethernet">
Jan 20 14:56:35 compute-1 nova_compute[225855]:       <mac address="fa:16:3e:ac:7a:cf"/>
Jan 20 14:56:35 compute-1 nova_compute[225855]:       <model type="virtio"/>
Jan 20 14:56:35 compute-1 nova_compute[225855]:       <driver name="vhost" rx_queue_size="512"/>
Jan 20 14:56:35 compute-1 nova_compute[225855]:       <mtu size="1442"/>
Jan 20 14:56:35 compute-1 nova_compute[225855]:       <target dev="tap5909a21f-c1"/>
Jan 20 14:56:35 compute-1 nova_compute[225855]:     </interface>
Jan 20 14:56:35 compute-1 nova_compute[225855]:     <serial type="pty">
Jan 20 14:56:35 compute-1 nova_compute[225855]:       <log file="/var/lib/nova/instances/e32ecf59-145a-4ae9-a91e-288419407cd0/console.log" append="off"/>
Jan 20 14:56:35 compute-1 nova_compute[225855]:     </serial>
Jan 20 14:56:35 compute-1 nova_compute[225855]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 20 14:56:35 compute-1 nova_compute[225855]:     <video>
Jan 20 14:56:35 compute-1 nova_compute[225855]:       <model type="virtio"/>
Jan 20 14:56:35 compute-1 nova_compute[225855]:     </video>
Jan 20 14:56:35 compute-1 nova_compute[225855]:     <input type="tablet" bus="usb"/>
Jan 20 14:56:35 compute-1 nova_compute[225855]:     <rng model="virtio">
Jan 20 14:56:35 compute-1 nova_compute[225855]:       <backend model="random">/dev/urandom</backend>
Jan 20 14:56:35 compute-1 nova_compute[225855]:     </rng>
Jan 20 14:56:35 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root"/>
Jan 20 14:56:35 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:56:35 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:56:35 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:56:35 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:56:35 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:56:35 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:56:35 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:56:35 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:56:35 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:56:35 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:56:35 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:56:35 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:56:35 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:56:35 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:56:35 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:56:35 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:56:35 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:56:35 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:56:35 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:56:35 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:56:35 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:56:35 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:56:35 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:56:35 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:56:35 compute-1 nova_compute[225855]:     <controller type="usb" index="0"/>
Jan 20 14:56:35 compute-1 nova_compute[225855]:     <memballoon model="virtio">
Jan 20 14:56:35 compute-1 nova_compute[225855]:       <stats period="10"/>
Jan 20 14:56:35 compute-1 nova_compute[225855]:     </memballoon>
Jan 20 14:56:35 compute-1 nova_compute[225855]:   </devices>
Jan 20 14:56:35 compute-1 nova_compute[225855]: </domain>
Jan 20 14:56:35 compute-1 nova_compute[225855]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 20 14:56:35 compute-1 nova_compute[225855]: 2026-01-20 14:56:35.843 225859 DEBUG nova.compute.manager [None req-dace9bde-26dd-4f65-90a1-a36f1180201a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: e32ecf59-145a-4ae9-a91e-288419407cd0] Preparing to wait for external event network-vif-plugged-5909a21f-c1fb-4265-a7de-a6b0e6136194 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 20 14:56:35 compute-1 nova_compute[225855]: 2026-01-20 14:56:35.844 225859 DEBUG oslo_concurrency.lockutils [None req-dace9bde-26dd-4f65-90a1-a36f1180201a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Acquiring lock "e32ecf59-145a-4ae9-a91e-288419407cd0-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:56:35 compute-1 nova_compute[225855]: 2026-01-20 14:56:35.845 225859 DEBUG oslo_concurrency.lockutils [None req-dace9bde-26dd-4f65-90a1-a36f1180201a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Lock "e32ecf59-145a-4ae9-a91e-288419407cd0-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:56:35 compute-1 nova_compute[225855]: 2026-01-20 14:56:35.845 225859 DEBUG oslo_concurrency.lockutils [None req-dace9bde-26dd-4f65-90a1-a36f1180201a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Lock "e32ecf59-145a-4ae9-a91e-288419407cd0-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:56:35 compute-1 nova_compute[225855]: 2026-01-20 14:56:35.847 225859 DEBUG nova.virt.libvirt.vif [None req-dace9bde-26dd-4f65-90a1-a36f1180201a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T14:56:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1537565903',display_name='tempest-ServersTestJSON-server-1537565903',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverstestjson-server-1537565903',id=132,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ca6cd0afe0ab41e3ab36d21a4129f734',ramdisk_id='',reservation_id='r-iv93ouga',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-405461620',owner_user_name='tempest-ServersTestJSON-405461620-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T14:56:28Z,user_data=None,user_id='395a5c503218411284bc94c45263d1fb',uuid=e32ecf59-145a-4ae9-a91e-288419407cd0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5909a21f-c1fb-4265-a7de-a6b0e6136194", "address": "fa:16:3e:ac:7a:cf", "network": {"id": "f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c", "bridge": "br-int", "label": "tempest-ServersTestJSON-1745321011-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca6cd0afe0ab41e3ab36d21a4129f734", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5909a21f-c1", "ovs_interfaceid": "5909a21f-c1fb-4265-a7de-a6b0e6136194", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 20 14:56:35 compute-1 nova_compute[225855]: 2026-01-20 14:56:35.847 225859 DEBUG nova.network.os_vif_util [None req-dace9bde-26dd-4f65-90a1-a36f1180201a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Converting VIF {"id": "5909a21f-c1fb-4265-a7de-a6b0e6136194", "address": "fa:16:3e:ac:7a:cf", "network": {"id": "f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c", "bridge": "br-int", "label": "tempest-ServersTestJSON-1745321011-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca6cd0afe0ab41e3ab36d21a4129f734", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5909a21f-c1", "ovs_interfaceid": "5909a21f-c1fb-4265-a7de-a6b0e6136194", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:56:35 compute-1 nova_compute[225855]: 2026-01-20 14:56:35.848 225859 DEBUG nova.network.os_vif_util [None req-dace9bde-26dd-4f65-90a1-a36f1180201a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ac:7a:cf,bridge_name='br-int',has_traffic_filtering=True,id=5909a21f-c1fb-4265-a7de-a6b0e6136194,network=Network(f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5909a21f-c1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:56:35 compute-1 nova_compute[225855]: 2026-01-20 14:56:35.849 225859 DEBUG os_vif [None req-dace9bde-26dd-4f65-90a1-a36f1180201a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ac:7a:cf,bridge_name='br-int',has_traffic_filtering=True,id=5909a21f-c1fb-4265-a7de-a6b0e6136194,network=Network(f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5909a21f-c1') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 20 14:56:35 compute-1 nova_compute[225855]: 2026-01-20 14:56:35.850 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:56:35 compute-1 nova_compute[225855]: 2026-01-20 14:56:35.851 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:56:35 compute-1 nova_compute[225855]: 2026-01-20 14:56:35.852 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:56:35 compute-1 nova_compute[225855]: 2026-01-20 14:56:35.856 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:56:35 compute-1 nova_compute[225855]: 2026-01-20 14:56:35.856 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5909a21f-c1, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:56:35 compute-1 nova_compute[225855]: 2026-01-20 14:56:35.857 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap5909a21f-c1, col_values=(('external_ids', {'iface-id': '5909a21f-c1fb-4265-a7de-a6b0e6136194', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ac:7a:cf', 'vm-uuid': 'e32ecf59-145a-4ae9-a91e-288419407cd0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:56:35 compute-1 NetworkManager[49104]: <info>  [1768920995.8592] manager: (tap5909a21f-c1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/226)
Jan 20 14:56:35 compute-1 nova_compute[225855]: 2026-01-20 14:56:35.861 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 14:56:35 compute-1 nova_compute[225855]: 2026-01-20 14:56:35.866 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:56:35 compute-1 nova_compute[225855]: 2026-01-20 14:56:35.866 225859 INFO os_vif [None req-dace9bde-26dd-4f65-90a1-a36f1180201a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ac:7a:cf,bridge_name='br-int',has_traffic_filtering=True,id=5909a21f-c1fb-4265-a7de-a6b0e6136194,network=Network(f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5909a21f-c1')
Jan 20 14:56:35 compute-1 nova_compute[225855]: 2026-01-20 14:56:35.930 225859 DEBUG nova.virt.libvirt.driver [None req-dace9bde-26dd-4f65-90a1-a36f1180201a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:56:35 compute-1 nova_compute[225855]: 2026-01-20 14:56:35.931 225859 DEBUG nova.virt.libvirt.driver [None req-dace9bde-26dd-4f65-90a1-a36f1180201a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:56:35 compute-1 nova_compute[225855]: 2026-01-20 14:56:35.931 225859 DEBUG nova.virt.libvirt.driver [None req-dace9bde-26dd-4f65-90a1-a36f1180201a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] No VIF found with MAC fa:16:3e:ac:7a:cf, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 20 14:56:35 compute-1 nova_compute[225855]: 2026-01-20 14:56:35.932 225859 INFO nova.virt.libvirt.driver [None req-dace9bde-26dd-4f65-90a1-a36f1180201a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: e32ecf59-145a-4ae9-a91e-288419407cd0] Using config drive
Jan 20 14:56:35 compute-1 nova_compute[225855]: 2026-01-20 14:56:35.953 225859 DEBUG nova.storage.rbd_utils [None req-dace9bde-26dd-4f65-90a1-a36f1180201a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] rbd image e32ecf59-145a-4ae9-a91e-288419407cd0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:56:36 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:56:36 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:56:36 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:56:36.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:56:36 compute-1 nova_compute[225855]: 2026-01-20 14:56:36.238 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:56:36 compute-1 nova_compute[225855]: 2026-01-20 14:56:36.248 225859 DEBUG nova.network.neutron [req-663c50a3-d523-449b-a9bb-b5dfb198dff5 req-dcb08139-88cf-441d-a9f5-3baa9f6754bb 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: e32ecf59-145a-4ae9-a91e-288419407cd0] Updated VIF entry in instance network info cache for port 5909a21f-c1fb-4265-a7de-a6b0e6136194. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 14:56:36 compute-1 nova_compute[225855]: 2026-01-20 14:56:36.248 225859 DEBUG nova.network.neutron [req-663c50a3-d523-449b-a9bb-b5dfb198dff5 req-dcb08139-88cf-441d-a9f5-3baa9f6754bb 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: e32ecf59-145a-4ae9-a91e-288419407cd0] Updating instance_info_cache with network_info: [{"id": "5909a21f-c1fb-4265-a7de-a6b0e6136194", "address": "fa:16:3e:ac:7a:cf", "network": {"id": "f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c", "bridge": "br-int", "label": "tempest-ServersTestJSON-1745321011-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca6cd0afe0ab41e3ab36d21a4129f734", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5909a21f-c1", "ovs_interfaceid": "5909a21f-c1fb-4265-a7de-a6b0e6136194", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:56:36 compute-1 nova_compute[225855]: 2026-01-20 14:56:36.264 225859 DEBUG oslo_concurrency.lockutils [req-663c50a3-d523-449b-a9bb-b5dfb198dff5 req-dcb08139-88cf-441d-a9f5-3baa9f6754bb 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-e32ecf59-145a-4ae9-a91e-288419407cd0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:56:36 compute-1 nova_compute[225855]: 2026-01-20 14:56:36.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:56:36 compute-1 nova_compute[225855]: 2026-01-20 14:56:36.527 225859 INFO nova.virt.libvirt.driver [None req-dace9bde-26dd-4f65-90a1-a36f1180201a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: e32ecf59-145a-4ae9-a91e-288419407cd0] Creating config drive at /var/lib/nova/instances/e32ecf59-145a-4ae9-a91e-288419407cd0/disk.config
Jan 20 14:56:36 compute-1 nova_compute[225855]: 2026-01-20 14:56:36.532 225859 DEBUG oslo_concurrency.processutils [None req-dace9bde-26dd-4f65-90a1-a36f1180201a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e32ecf59-145a-4ae9-a91e-288419407cd0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpjv1s2hi7 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:56:36 compute-1 nova_compute[225855]: 2026-01-20 14:56:36.661 225859 DEBUG oslo_concurrency.processutils [None req-dace9bde-26dd-4f65-90a1-a36f1180201a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e32ecf59-145a-4ae9-a91e-288419407cd0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpjv1s2hi7" returned: 0 in 0.129s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:56:36 compute-1 nova_compute[225855]: 2026-01-20 14:56:36.693 225859 DEBUG nova.storage.rbd_utils [None req-dace9bde-26dd-4f65-90a1-a36f1180201a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] rbd image e32ecf59-145a-4ae9-a91e-288419407cd0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:56:36 compute-1 nova_compute[225855]: 2026-01-20 14:56:36.697 225859 DEBUG oslo_concurrency.processutils [None req-dace9bde-26dd-4f65-90a1-a36f1180201a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e32ecf59-145a-4ae9-a91e-288419407cd0/disk.config e32ecf59-145a-4ae9-a91e-288419407cd0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:56:36 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/2104464304' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:56:36 compute-1 ceph-mon[81775]: pgmap v2123: 321 pgs: 321 active+clean; 361 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 276 KiB/s rd, 4.2 MiB/s wr, 109 op/s
Jan 20 14:56:36 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/1348621708' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 14:56:36 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/1348621708' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 14:56:36 compute-1 nova_compute[225855]: 2026-01-20 14:56:36.884 225859 DEBUG oslo_concurrency.processutils [None req-dace9bde-26dd-4f65-90a1-a36f1180201a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e32ecf59-145a-4ae9-a91e-288419407cd0/disk.config e32ecf59-145a-4ae9-a91e-288419407cd0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.187s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:56:36 compute-1 nova_compute[225855]: 2026-01-20 14:56:36.885 225859 INFO nova.virt.libvirt.driver [None req-dace9bde-26dd-4f65-90a1-a36f1180201a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: e32ecf59-145a-4ae9-a91e-288419407cd0] Deleting local config drive /var/lib/nova/instances/e32ecf59-145a-4ae9-a91e-288419407cd0/disk.config because it was imported into RBD.
Jan 20 14:56:36 compute-1 kernel: tap5909a21f-c1: entered promiscuous mode
Jan 20 14:56:36 compute-1 NetworkManager[49104]: <info>  [1768920996.9416] manager: (tap5909a21f-c1): new Tun device (/org/freedesktop/NetworkManager/Devices/227)
Jan 20 14:56:36 compute-1 nova_compute[225855]: 2026-01-20 14:56:36.942 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:56:36 compute-1 ovn_controller[130490]: 2026-01-20T14:56:36Z|00525|binding|INFO|Claiming lport 5909a21f-c1fb-4265-a7de-a6b0e6136194 for this chassis.
Jan 20 14:56:36 compute-1 ovn_controller[130490]: 2026-01-20T14:56:36Z|00526|binding|INFO|5909a21f-c1fb-4265-a7de-a6b0e6136194: Claiming fa:16:3e:ac:7a:cf 10.100.0.13
Jan 20 14:56:36 compute-1 nova_compute[225855]: 2026-01-20 14:56:36.948 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:56:36 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:56:36.957 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ac:7a:cf 10.100.0.13'], port_security=['fa:16:3e:ac:7a:cf 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'e32ecf59-145a-4ae9-a91e-288419407cd0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ca6cd0afe0ab41e3ab36d21a4129f734', 'neutron:revision_number': '2', 'neutron:security_group_ids': '819ea4ae-b994-44d1-9da3-8b0ca609fb2a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ee620e3e-ef7e-4826-b394-b8a89442b353, chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], logical_port=5909a21f-c1fb-4265-a7de-a6b0e6136194) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:56:36 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:56:36.958 140354 INFO neutron.agent.ovn.metadata.agent [-] Port 5909a21f-c1fb-4265-a7de-a6b0e6136194 in datapath f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c bound to our chassis
Jan 20 14:56:36 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:56:36.960 140354 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c
Jan 20 14:56:36 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:56:36.970 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[335daa75-0b3f-4ff8-a760-3b065b95333b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:56:36 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:56:36.971 140354 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapf4c8474b-01 in ovnmeta-f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 20 14:56:36 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:56:36.973 229707 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapf4c8474b-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 20 14:56:36 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:56:36.973 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[3e64f09f-14f7-4773-806b-86edc654ff43]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:56:36 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:56:36.974 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[bd86d5d3-3de2-4e69-a5eb-22491f1e0957]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:56:36 compute-1 systemd-udevd[277803]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 14:56:36 compute-1 systemd-machined[194361]: New machine qemu-62-instance-00000084.
Jan 20 14:56:36 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:56:36.989 140466 DEBUG oslo.privsep.daemon [-] privsep: reply[8c4e8523-48ac-42aa-aff0-39f0adaadf4c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:56:36 compute-1 NetworkManager[49104]: <info>  [1768920996.9940] device (tap5909a21f-c1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 20 14:56:36 compute-1 NetworkManager[49104]: <info>  [1768920996.9952] device (tap5909a21f-c1): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 20 14:56:37 compute-1 systemd[1]: Started Virtual Machine qemu-62-instance-00000084.
Jan 20 14:56:37 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:56:37.014 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[ca03829a-4709-41ad-bd9f-1425f0197666]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:56:37 compute-1 nova_compute[225855]: 2026-01-20 14:56:37.015 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:56:37 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:56:37.017 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=41, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '12:bb:42', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '06:92:24:f7:15:56'}, ipsec=False) old=SB_Global(nb_cfg=40) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:56:37 compute-1 nova_compute[225855]: 2026-01-20 14:56:37.024 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:56:37 compute-1 ovn_controller[130490]: 2026-01-20T14:56:37Z|00527|binding|INFO|Setting lport 5909a21f-c1fb-4265-a7de-a6b0e6136194 ovn-installed in OVS
Jan 20 14:56:37 compute-1 ovn_controller[130490]: 2026-01-20T14:56:37Z|00528|binding|INFO|Setting lport 5909a21f-c1fb-4265-a7de-a6b0e6136194 up in Southbound
Jan 20 14:56:37 compute-1 nova_compute[225855]: 2026-01-20 14:56:37.030 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:56:37 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:56:37.042 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[ec49f21f-052e-4725-ad5b-791bf1ce8799]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:56:37 compute-1 NetworkManager[49104]: <info>  [1768920997.0480] manager: (tapf4c8474b-00): new Veth device (/org/freedesktop/NetworkManager/Devices/228)
Jan 20 14:56:37 compute-1 systemd-udevd[277806]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 14:56:37 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:56:37.047 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[c29af9aa-33ae-430d-ad03-2b48f63b1fd8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:56:37 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:56:37.077 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[f3f3e5db-0a0b-45ed-99df-6f1f3dd7ac11]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:56:37 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:56:37.079 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[df7965af-d284-421d-b240-bee400b800fd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:56:37 compute-1 NetworkManager[49104]: <info>  [1768920997.1051] device (tapf4c8474b-00): carrier: link connected
Jan 20 14:56:37 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:56:37.111 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[e737b030-5159-4c3a-a0f9-e1d46851c44c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:56:37 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:56:37.125 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[7844324f-2e8d-4d3a-9c15-9fbc88f9226e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf4c8474b-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:14:a2:5f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 150], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 603603, 'reachable_time': 28670, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 277835, 'error': None, 'target': 'ovnmeta-f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:56:37 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:56:37.142 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[d4ac3c1e-6a3e-4bbd-979a-b82de0e33bf2]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe14:a25f'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 603603, 'tstamp': 603603}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 277836, 'error': None, 'target': 'ovnmeta-f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:56:37 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:56:37.156 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[e7147c71-b40c-41dc-915d-712f1b604476]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf4c8474b-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:14:a2:5f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 150], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 603603, 'reachable_time': 28670, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 277837, 'error': None, 'target': 'ovnmeta-f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:56:37 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:56:37.183 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[ec664a11-75b3-423a-bf2a-1bc4f332e3bd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:56:37 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:56:37.252 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[13c69e15-ad36-4875-b772-65313f766cdf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:56:37 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:56:37.254 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf4c8474b-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:56:37 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:56:37.254 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:56:37 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:56:37.254 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf4c8474b-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:56:37 compute-1 nova_compute[225855]: 2026-01-20 14:56:37.258 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:56:37 compute-1 NetworkManager[49104]: <info>  [1768920997.2594] manager: (tapf4c8474b-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/229)
Jan 20 14:56:37 compute-1 kernel: tapf4c8474b-00: entered promiscuous mode
Jan 20 14:56:37 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:56:37.261 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf4c8474b-00, col_values=(('external_ids', {'iface-id': '8c6fd3ab-70a8-4e63-99de-f2e15ac0207f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:56:37 compute-1 ovn_controller[130490]: 2026-01-20T14:56:37Z|00529|binding|INFO|Releasing lport 8c6fd3ab-70a8-4e63-99de-f2e15ac0207f from this chassis (sb_readonly=0)
Jan 20 14:56:37 compute-1 nova_compute[225855]: 2026-01-20 14:56:37.267 225859 DEBUG nova.compute.manager [req-6b8bd7ce-1fa2-4c23-ad25-133b77685826 req-fe0ec297-5c52-480b-baf1-3dbc6be9bb2f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: e32ecf59-145a-4ae9-a91e-288419407cd0] Received event network-vif-plugged-5909a21f-c1fb-4265-a7de-a6b0e6136194 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:56:37 compute-1 nova_compute[225855]: 2026-01-20 14:56:37.268 225859 DEBUG oslo_concurrency.lockutils [req-6b8bd7ce-1fa2-4c23-ad25-133b77685826 req-fe0ec297-5c52-480b-baf1-3dbc6be9bb2f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "e32ecf59-145a-4ae9-a91e-288419407cd0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:56:37 compute-1 nova_compute[225855]: 2026-01-20 14:56:37.269 225859 DEBUG oslo_concurrency.lockutils [req-6b8bd7ce-1fa2-4c23-ad25-133b77685826 req-fe0ec297-5c52-480b-baf1-3dbc6be9bb2f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "e32ecf59-145a-4ae9-a91e-288419407cd0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:56:37 compute-1 nova_compute[225855]: 2026-01-20 14:56:37.269 225859 DEBUG oslo_concurrency.lockutils [req-6b8bd7ce-1fa2-4c23-ad25-133b77685826 req-fe0ec297-5c52-480b-baf1-3dbc6be9bb2f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "e32ecf59-145a-4ae9-a91e-288419407cd0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:56:37 compute-1 nova_compute[225855]: 2026-01-20 14:56:37.270 225859 DEBUG nova.compute.manager [req-6b8bd7ce-1fa2-4c23-ad25-133b77685826 req-fe0ec297-5c52-480b-baf1-3dbc6be9bb2f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: e32ecf59-145a-4ae9-a91e-288419407cd0] Processing event network-vif-plugged-5909a21f-c1fb-4265-a7de-a6b0e6136194 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 20 14:56:37 compute-1 nova_compute[225855]: 2026-01-20 14:56:37.271 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:56:37 compute-1 nova_compute[225855]: 2026-01-20 14:56:37.297 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:56:37 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:56:37.297 140354 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 20 14:56:37 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:56:37.298 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[3f2f228a-ecc3-418c-a5e4-0adbd69ee42a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:56:37 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:56:37.299 140354 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 20 14:56:37 compute-1 ovn_metadata_agent[140349]: global
Jan 20 14:56:37 compute-1 ovn_metadata_agent[140349]:     log         /dev/log local0 debug
Jan 20 14:56:37 compute-1 ovn_metadata_agent[140349]:     log-tag     haproxy-metadata-proxy-f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c
Jan 20 14:56:37 compute-1 ovn_metadata_agent[140349]:     user        root
Jan 20 14:56:37 compute-1 ovn_metadata_agent[140349]:     group       root
Jan 20 14:56:37 compute-1 ovn_metadata_agent[140349]:     maxconn     1024
Jan 20 14:56:37 compute-1 ovn_metadata_agent[140349]:     pidfile     /var/lib/neutron/external/pids/f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c.pid.haproxy
Jan 20 14:56:37 compute-1 ovn_metadata_agent[140349]:     daemon
Jan 20 14:56:37 compute-1 ovn_metadata_agent[140349]: 
Jan 20 14:56:37 compute-1 ovn_metadata_agent[140349]: defaults
Jan 20 14:56:37 compute-1 ovn_metadata_agent[140349]:     log global
Jan 20 14:56:37 compute-1 ovn_metadata_agent[140349]:     mode http
Jan 20 14:56:37 compute-1 ovn_metadata_agent[140349]:     option httplog
Jan 20 14:56:37 compute-1 ovn_metadata_agent[140349]:     option dontlognull
Jan 20 14:56:37 compute-1 ovn_metadata_agent[140349]:     option http-server-close
Jan 20 14:56:37 compute-1 ovn_metadata_agent[140349]:     option forwardfor
Jan 20 14:56:37 compute-1 ovn_metadata_agent[140349]:     retries                 3
Jan 20 14:56:37 compute-1 ovn_metadata_agent[140349]:     timeout http-request    30s
Jan 20 14:56:37 compute-1 ovn_metadata_agent[140349]:     timeout connect         30s
Jan 20 14:56:37 compute-1 ovn_metadata_agent[140349]:     timeout client          32s
Jan 20 14:56:37 compute-1 ovn_metadata_agent[140349]:     timeout server          32s
Jan 20 14:56:37 compute-1 ovn_metadata_agent[140349]:     timeout http-keep-alive 30s
Jan 20 14:56:37 compute-1 ovn_metadata_agent[140349]: 
Jan 20 14:56:37 compute-1 ovn_metadata_agent[140349]: 
Jan 20 14:56:37 compute-1 ovn_metadata_agent[140349]: listen listener
Jan 20 14:56:37 compute-1 ovn_metadata_agent[140349]:     bind 169.254.169.254:80
Jan 20 14:56:37 compute-1 ovn_metadata_agent[140349]:     server metadata /var/lib/neutron/metadata_proxy
Jan 20 14:56:37 compute-1 ovn_metadata_agent[140349]:     http-request add-header X-OVN-Network-ID f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c
Jan 20 14:56:37 compute-1 ovn_metadata_agent[140349]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 20 14:56:37 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:56:37.300 140354 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c', 'env', 'PROCESS_TAG=haproxy-f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 20 14:56:37 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:56:37 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:56:37 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:56:37.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:56:37 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:56:37 compute-1 nova_compute[225855]: 2026-01-20 14:56:37.693 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768920997.6930447, e32ecf59-145a-4ae9-a91e-288419407cd0 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:56:37 compute-1 podman[277911]: 2026-01-20 14:56:37.694673125 +0000 UTC m=+0.057923616 container create 998e434681eead6e0c698e580159f48be23d2464951b6f3b45d2e6309690a250 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 20 14:56:37 compute-1 nova_compute[225855]: 2026-01-20 14:56:37.694 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: e32ecf59-145a-4ae9-a91e-288419407cd0] VM Started (Lifecycle Event)
Jan 20 14:56:37 compute-1 nova_compute[225855]: 2026-01-20 14:56:37.698 225859 DEBUG nova.compute.manager [None req-dace9bde-26dd-4f65-90a1-a36f1180201a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: e32ecf59-145a-4ae9-a91e-288419407cd0] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 20 14:56:37 compute-1 nova_compute[225855]: 2026-01-20 14:56:37.704 225859 DEBUG nova.virt.libvirt.driver [None req-dace9bde-26dd-4f65-90a1-a36f1180201a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: e32ecf59-145a-4ae9-a91e-288419407cd0] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 20 14:56:37 compute-1 nova_compute[225855]: 2026-01-20 14:56:37.708 225859 INFO nova.virt.libvirt.driver [-] [instance: e32ecf59-145a-4ae9-a91e-288419407cd0] Instance spawned successfully.
Jan 20 14:56:37 compute-1 nova_compute[225855]: 2026-01-20 14:56:37.708 225859 DEBUG nova.virt.libvirt.driver [None req-dace9bde-26dd-4f65-90a1-a36f1180201a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: e32ecf59-145a-4ae9-a91e-288419407cd0] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 20 14:56:37 compute-1 nova_compute[225855]: 2026-01-20 14:56:37.729 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: e32ecf59-145a-4ae9-a91e-288419407cd0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:56:37 compute-1 systemd[1]: Started libpod-conmon-998e434681eead6e0c698e580159f48be23d2464951b6f3b45d2e6309690a250.scope.
Jan 20 14:56:37 compute-1 nova_compute[225855]: 2026-01-20 14:56:37.736 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: e32ecf59-145a-4ae9-a91e-288419407cd0] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:56:37 compute-1 nova_compute[225855]: 2026-01-20 14:56:37.739 225859 DEBUG nova.virt.libvirt.driver [None req-dace9bde-26dd-4f65-90a1-a36f1180201a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: e32ecf59-145a-4ae9-a91e-288419407cd0] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:56:37 compute-1 nova_compute[225855]: 2026-01-20 14:56:37.740 225859 DEBUG nova.virt.libvirt.driver [None req-dace9bde-26dd-4f65-90a1-a36f1180201a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: e32ecf59-145a-4ae9-a91e-288419407cd0] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:56:37 compute-1 nova_compute[225855]: 2026-01-20 14:56:37.740 225859 DEBUG nova.virt.libvirt.driver [None req-dace9bde-26dd-4f65-90a1-a36f1180201a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: e32ecf59-145a-4ae9-a91e-288419407cd0] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:56:37 compute-1 nova_compute[225855]: 2026-01-20 14:56:37.741 225859 DEBUG nova.virt.libvirt.driver [None req-dace9bde-26dd-4f65-90a1-a36f1180201a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: e32ecf59-145a-4ae9-a91e-288419407cd0] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:56:37 compute-1 nova_compute[225855]: 2026-01-20 14:56:37.741 225859 DEBUG nova.virt.libvirt.driver [None req-dace9bde-26dd-4f65-90a1-a36f1180201a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: e32ecf59-145a-4ae9-a91e-288419407cd0] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:56:37 compute-1 nova_compute[225855]: 2026-01-20 14:56:37.742 225859 DEBUG nova.virt.libvirt.driver [None req-dace9bde-26dd-4f65-90a1-a36f1180201a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: e32ecf59-145a-4ae9-a91e-288419407cd0] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:56:37 compute-1 systemd[1]: Started libcrun container.
Jan 20 14:56:37 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6beda3a917b01666983f6717d8a15faa248a8e035acb78285162928c0b4a3550/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 20 14:56:37 compute-1 podman[277911]: 2026-01-20 14:56:37.66438391 +0000 UTC m=+0.027634441 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 20 14:56:37 compute-1 podman[277911]: 2026-01-20 14:56:37.769972431 +0000 UTC m=+0.133222942 container init 998e434681eead6e0c698e580159f48be23d2464951b6f3b45d2e6309690a250 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0)
Jan 20 14:56:37 compute-1 podman[277911]: 2026-01-20 14:56:37.775449115 +0000 UTC m=+0.138699606 container start 998e434681eead6e0c698e580159f48be23d2464951b6f3b45d2e6309690a250 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Jan 20 14:56:37 compute-1 nova_compute[225855]: 2026-01-20 14:56:37.778 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: e32ecf59-145a-4ae9-a91e-288419407cd0] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 14:56:37 compute-1 nova_compute[225855]: 2026-01-20 14:56:37.779 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768920997.694136, e32ecf59-145a-4ae9-a91e-288419407cd0 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:56:37 compute-1 nova_compute[225855]: 2026-01-20 14:56:37.779 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: e32ecf59-145a-4ae9-a91e-288419407cd0] VM Paused (Lifecycle Event)
Jan 20 14:56:37 compute-1 neutron-haproxy-ovnmeta-f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c[277928]: [NOTICE]   (277932) : New worker (277934) forked
Jan 20 14:56:37 compute-1 neutron-haproxy-ovnmeta-f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c[277928]: [NOTICE]   (277932) : Loading success.
Jan 20 14:56:37 compute-1 nova_compute[225855]: 2026-01-20 14:56:37.814 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: e32ecf59-145a-4ae9-a91e-288419407cd0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:56:37 compute-1 nova_compute[225855]: 2026-01-20 14:56:37.817 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768920997.7039716, e32ecf59-145a-4ae9-a91e-288419407cd0 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:56:37 compute-1 nova_compute[225855]: 2026-01-20 14:56:37.817 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: e32ecf59-145a-4ae9-a91e-288419407cd0] VM Resumed (Lifecycle Event)
Jan 20 14:56:37 compute-1 nova_compute[225855]: 2026-01-20 14:56:37.826 225859 INFO nova.compute.manager [None req-dace9bde-26dd-4f65-90a1-a36f1180201a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: e32ecf59-145a-4ae9-a91e-288419407cd0] Took 9.41 seconds to spawn the instance on the hypervisor.
Jan 20 14:56:37 compute-1 nova_compute[225855]: 2026-01-20 14:56:37.827 225859 DEBUG nova.compute.manager [None req-dace9bde-26dd-4f65-90a1-a36f1180201a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: e32ecf59-145a-4ae9-a91e-288419407cd0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:56:37 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:56:37.838 140354 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 20 14:56:37 compute-1 nova_compute[225855]: 2026-01-20 14:56:37.839 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: e32ecf59-145a-4ae9-a91e-288419407cd0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:56:37 compute-1 nova_compute[225855]: 2026-01-20 14:56:37.841 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: e32ecf59-145a-4ae9-a91e-288419407cd0] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:56:37 compute-1 nova_compute[225855]: 2026-01-20 14:56:37.871 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: e32ecf59-145a-4ae9-a91e-288419407cd0] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 14:56:37 compute-1 nova_compute[225855]: 2026-01-20 14:56:37.894 225859 INFO nova.compute.manager [None req-dace9bde-26dd-4f65-90a1-a36f1180201a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: e32ecf59-145a-4ae9-a91e-288419407cd0] Took 10.28 seconds to build instance.
Jan 20 14:56:37 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/2021551626' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:56:37 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/2247402655' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:56:37 compute-1 nova_compute[225855]: 2026-01-20 14:56:37.915 225859 DEBUG oslo_concurrency.lockutils [None req-dace9bde-26dd-4f65-90a1-a36f1180201a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Lock "e32ecf59-145a-4ae9-a91e-288419407cd0" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.414s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:56:38 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:56:38 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:56:38 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:56:38.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:56:38 compute-1 nova_compute[225855]: 2026-01-20 14:56:38.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:56:38 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/3611847006' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 14:56:38 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/3611847006' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 14:56:38 compute-1 ceph-mon[81775]: pgmap v2124: 321 pgs: 321 active+clean; 372 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 77 KiB/s rd, 3.6 MiB/s wr, 80 op/s
Jan 20 14:56:39 compute-1 nova_compute[225855]: 2026-01-20 14:56:39.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:56:39 compute-1 nova_compute[225855]: 2026-01-20 14:56:39.339 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 14:56:39 compute-1 nova_compute[225855]: 2026-01-20 14:56:39.394 225859 DEBUG nova.compute.manager [req-efb092f2-2234-4ee4-afa5-811d6b24b09f req-82e8dd03-24b1-4753-afcd-35f8df64e394 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: e32ecf59-145a-4ae9-a91e-288419407cd0] Received event network-vif-plugged-5909a21f-c1fb-4265-a7de-a6b0e6136194 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:56:39 compute-1 nova_compute[225855]: 2026-01-20 14:56:39.395 225859 DEBUG oslo_concurrency.lockutils [req-efb092f2-2234-4ee4-afa5-811d6b24b09f req-82e8dd03-24b1-4753-afcd-35f8df64e394 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "e32ecf59-145a-4ae9-a91e-288419407cd0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:56:39 compute-1 nova_compute[225855]: 2026-01-20 14:56:39.395 225859 DEBUG oslo_concurrency.lockutils [req-efb092f2-2234-4ee4-afa5-811d6b24b09f req-82e8dd03-24b1-4753-afcd-35f8df64e394 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "e32ecf59-145a-4ae9-a91e-288419407cd0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:56:39 compute-1 nova_compute[225855]: 2026-01-20 14:56:39.395 225859 DEBUG oslo_concurrency.lockutils [req-efb092f2-2234-4ee4-afa5-811d6b24b09f req-82e8dd03-24b1-4753-afcd-35f8df64e394 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "e32ecf59-145a-4ae9-a91e-288419407cd0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:56:39 compute-1 nova_compute[225855]: 2026-01-20 14:56:39.395 225859 DEBUG nova.compute.manager [req-efb092f2-2234-4ee4-afa5-811d6b24b09f req-82e8dd03-24b1-4753-afcd-35f8df64e394 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: e32ecf59-145a-4ae9-a91e-288419407cd0] No waiting events found dispatching network-vif-plugged-5909a21f-c1fb-4265-a7de-a6b0e6136194 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:56:39 compute-1 nova_compute[225855]: 2026-01-20 14:56:39.395 225859 WARNING nova.compute.manager [req-efb092f2-2234-4ee4-afa5-811d6b24b09f req-82e8dd03-24b1-4753-afcd-35f8df64e394 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: e32ecf59-145a-4ae9-a91e-288419407cd0] Received unexpected event network-vif-plugged-5909a21f-c1fb-4265-a7de-a6b0e6136194 for instance with vm_state active and task_state None.
Jan 20 14:56:39 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:56:39 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:56:39 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:56:39.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:56:40 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:56:40 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:56:40 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:56:40.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:56:40 compute-1 nova_compute[225855]: 2026-01-20 14:56:40.340 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:56:40 compute-1 nova_compute[225855]: 2026-01-20 14:56:40.340 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 14:56:40 compute-1 nova_compute[225855]: 2026-01-20 14:56:40.340 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 20 14:56:40 compute-1 rsyslogd[1002]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 20 14:56:40 compute-1 nova_compute[225855]: 2026-01-20 14:56:40.502 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "refresh_cache-e32ecf59-145a-4ae9-a91e-288419407cd0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:56:40 compute-1 nova_compute[225855]: 2026-01-20 14:56:40.503 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquired lock "refresh_cache-e32ecf59-145a-4ae9-a91e-288419407cd0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:56:40 compute-1 nova_compute[225855]: 2026-01-20 14:56:40.503 225859 DEBUG nova.network.neutron [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] [instance: e32ecf59-145a-4ae9-a91e-288419407cd0] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 20 14:56:40 compute-1 nova_compute[225855]: 2026-01-20 14:56:40.503 225859 DEBUG nova.objects.instance [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lazy-loading 'info_cache' on Instance uuid e32ecf59-145a-4ae9-a91e-288419407cd0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:56:40 compute-1 nova_compute[225855]: 2026-01-20 14:56:40.859 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:56:41 compute-1 ceph-mon[81775]: pgmap v2125: 321 pgs: 321 active+clean; 372 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 194 KiB/s rd, 3.6 MiB/s wr, 97 op/s
Jan 20 14:56:41 compute-1 nova_compute[225855]: 2026-01-20 14:56:41.241 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:56:41 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:56:41 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:56:41 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:56:41.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:56:41 compute-1 nova_compute[225855]: 2026-01-20 14:56:41.452 225859 DEBUG oslo_concurrency.lockutils [None req-598146b4-a7b4-497e-8f70-a27771b8bdcf 168ca7898b964a44b76c90912fa89a66 4d4e37f4fd7f4dbbb25648ec639e0e43 - - default default] Acquiring lock "538fe1f0-b666-4b97-b2ef-317adae0a47a" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:56:41 compute-1 nova_compute[225855]: 2026-01-20 14:56:41.452 225859 DEBUG oslo_concurrency.lockutils [None req-598146b4-a7b4-497e-8f70-a27771b8bdcf 168ca7898b964a44b76c90912fa89a66 4d4e37f4fd7f4dbbb25648ec639e0e43 - - default default] Lock "538fe1f0-b666-4b97-b2ef-317adae0a47a" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:56:41 compute-1 nova_compute[225855]: 2026-01-20 14:56:41.473 225859 DEBUG oslo_concurrency.lockutils [None req-116c4c26-7bd7-4fed-8899-981d8620cba1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Acquiring lock "3426109c-5671-4cc7-89b6-fea13983f921" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:56:41 compute-1 nova_compute[225855]: 2026-01-20 14:56:41.474 225859 DEBUG oslo_concurrency.lockutils [None req-116c4c26-7bd7-4fed-8899-981d8620cba1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Lock "3426109c-5671-4cc7-89b6-fea13983f921" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:56:41 compute-1 nova_compute[225855]: 2026-01-20 14:56:41.476 225859 DEBUG nova.compute.manager [None req-598146b4-a7b4-497e-8f70-a27771b8bdcf 168ca7898b964a44b76c90912fa89a66 4d4e37f4fd7f4dbbb25648ec639e0e43 - - default default] [instance: 538fe1f0-b666-4b97-b2ef-317adae0a47a] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 20 14:56:41 compute-1 nova_compute[225855]: 2026-01-20 14:56:41.506 225859 DEBUG nova.compute.manager [None req-116c4c26-7bd7-4fed-8899-981d8620cba1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: 3426109c-5671-4cc7-89b6-fea13983f921] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 20 14:56:41 compute-1 nova_compute[225855]: 2026-01-20 14:56:41.578 225859 DEBUG oslo_concurrency.lockutils [None req-598146b4-a7b4-497e-8f70-a27771b8bdcf 168ca7898b964a44b76c90912fa89a66 4d4e37f4fd7f4dbbb25648ec639e0e43 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:56:41 compute-1 nova_compute[225855]: 2026-01-20 14:56:41.578 225859 DEBUG oslo_concurrency.lockutils [None req-598146b4-a7b4-497e-8f70-a27771b8bdcf 168ca7898b964a44b76c90912fa89a66 4d4e37f4fd7f4dbbb25648ec639e0e43 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:56:41 compute-1 nova_compute[225855]: 2026-01-20 14:56:41.588 225859 DEBUG nova.virt.hardware [None req-598146b4-a7b4-497e-8f70-a27771b8bdcf 168ca7898b964a44b76c90912fa89a66 4d4e37f4fd7f4dbbb25648ec639e0e43 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 20 14:56:41 compute-1 nova_compute[225855]: 2026-01-20 14:56:41.589 225859 INFO nova.compute.claims [None req-598146b4-a7b4-497e-8f70-a27771b8bdcf 168ca7898b964a44b76c90912fa89a66 4d4e37f4fd7f4dbbb25648ec639e0e43 - - default default] [instance: 538fe1f0-b666-4b97-b2ef-317adae0a47a] Claim successful on node compute-1.ctlplane.example.com
Jan 20 14:56:41 compute-1 nova_compute[225855]: 2026-01-20 14:56:41.605 225859 DEBUG oslo_concurrency.lockutils [None req-116c4c26-7bd7-4fed-8899-981d8620cba1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:56:41 compute-1 nova_compute[225855]: 2026-01-20 14:56:41.738 225859 DEBUG oslo_concurrency.processutils [None req-598146b4-a7b4-497e-8f70-a27771b8bdcf 168ca7898b964a44b76c90912fa89a66 4d4e37f4fd7f4dbbb25648ec639e0e43 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:56:42 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:56:42 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:56:42 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:56:42.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:56:42 compute-1 nova_compute[225855]: 2026-01-20 14:56:42.061 225859 DEBUG nova.network.neutron [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] [instance: e32ecf59-145a-4ae9-a91e-288419407cd0] Updating instance_info_cache with network_info: [{"id": "5909a21f-c1fb-4265-a7de-a6b0e6136194", "address": "fa:16:3e:ac:7a:cf", "network": {"id": "f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c", "bridge": "br-int", "label": "tempest-ServersTestJSON-1745321011-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca6cd0afe0ab41e3ab36d21a4129f734", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5909a21f-c1", "ovs_interfaceid": "5909a21f-c1fb-4265-a7de-a6b0e6136194", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:56:42 compute-1 nova_compute[225855]: 2026-01-20 14:56:42.089 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Releasing lock "refresh_cache-e32ecf59-145a-4ae9-a91e-288419407cd0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:56:42 compute-1 nova_compute[225855]: 2026-01-20 14:56:42.089 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] [instance: e32ecf59-145a-4ae9-a91e-288419407cd0] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 20 14:56:42 compute-1 nova_compute[225855]: 2026-01-20 14:56:42.090 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:56:42 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:56:42 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1147169689' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:56:42 compute-1 nova_compute[225855]: 2026-01-20 14:56:42.180 225859 DEBUG oslo_concurrency.processutils [None req-598146b4-a7b4-497e-8f70-a27771b8bdcf 168ca7898b964a44b76c90912fa89a66 4d4e37f4fd7f4dbbb25648ec639e0e43 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:56:42 compute-1 nova_compute[225855]: 2026-01-20 14:56:42.185 225859 DEBUG nova.compute.provider_tree [None req-598146b4-a7b4-497e-8f70-a27771b8bdcf 168ca7898b964a44b76c90912fa89a66 4d4e37f4fd7f4dbbb25648ec639e0e43 - - default default] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:56:42 compute-1 nova_compute[225855]: 2026-01-20 14:56:42.199 225859 DEBUG nova.scheduler.client.report [None req-598146b4-a7b4-497e-8f70-a27771b8bdcf 168ca7898b964a44b76c90912fa89a66 4d4e37f4fd7f4dbbb25648ec639e0e43 - - default default] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:56:42 compute-1 nova_compute[225855]: 2026-01-20 14:56:42.221 225859 DEBUG oslo_concurrency.lockutils [None req-598146b4-a7b4-497e-8f70-a27771b8bdcf 168ca7898b964a44b76c90912fa89a66 4d4e37f4fd7f4dbbb25648ec639e0e43 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.643s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:56:42 compute-1 nova_compute[225855]: 2026-01-20 14:56:42.222 225859 DEBUG nova.compute.manager [None req-598146b4-a7b4-497e-8f70-a27771b8bdcf 168ca7898b964a44b76c90912fa89a66 4d4e37f4fd7f4dbbb25648ec639e0e43 - - default default] [instance: 538fe1f0-b666-4b97-b2ef-317adae0a47a] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 20 14:56:42 compute-1 nova_compute[225855]: 2026-01-20 14:56:42.224 225859 DEBUG oslo_concurrency.lockutils [None req-116c4c26-7bd7-4fed-8899-981d8620cba1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.619s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:56:42 compute-1 nova_compute[225855]: 2026-01-20 14:56:42.230 225859 DEBUG nova.virt.hardware [None req-116c4c26-7bd7-4fed-8899-981d8620cba1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 20 14:56:42 compute-1 nova_compute[225855]: 2026-01-20 14:56:42.230 225859 INFO nova.compute.claims [None req-116c4c26-7bd7-4fed-8899-981d8620cba1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: 3426109c-5671-4cc7-89b6-fea13983f921] Claim successful on node compute-1.ctlplane.example.com
Jan 20 14:56:42 compute-1 nova_compute[225855]: 2026-01-20 14:56:42.292 225859 DEBUG nova.compute.manager [None req-598146b4-a7b4-497e-8f70-a27771b8bdcf 168ca7898b964a44b76c90912fa89a66 4d4e37f4fd7f4dbbb25648ec639e0e43 - - default default] [instance: 538fe1f0-b666-4b97-b2ef-317adae0a47a] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 20 14:56:42 compute-1 nova_compute[225855]: 2026-01-20 14:56:42.293 225859 DEBUG nova.network.neutron [None req-598146b4-a7b4-497e-8f70-a27771b8bdcf 168ca7898b964a44b76c90912fa89a66 4d4e37f4fd7f4dbbb25648ec639e0e43 - - default default] [instance: 538fe1f0-b666-4b97-b2ef-317adae0a47a] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 20 14:56:42 compute-1 nova_compute[225855]: 2026-01-20 14:56:42.315 225859 INFO nova.virt.libvirt.driver [None req-598146b4-a7b4-497e-8f70-a27771b8bdcf 168ca7898b964a44b76c90912fa89a66 4d4e37f4fd7f4dbbb25648ec639e0e43 - - default default] [instance: 538fe1f0-b666-4b97-b2ef-317adae0a47a] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 20 14:56:42 compute-1 nova_compute[225855]: 2026-01-20 14:56:42.340 225859 DEBUG nova.compute.manager [None req-598146b4-a7b4-497e-8f70-a27771b8bdcf 168ca7898b964a44b76c90912fa89a66 4d4e37f4fd7f4dbbb25648ec639e0e43 - - default default] [instance: 538fe1f0-b666-4b97-b2ef-317adae0a47a] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 20 14:56:42 compute-1 nova_compute[225855]: 2026-01-20 14:56:42.381 225859 DEBUG oslo_concurrency.processutils [None req-116c4c26-7bd7-4fed-8899-981d8620cba1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:56:42 compute-1 nova_compute[225855]: 2026-01-20 14:56:42.459 225859 DEBUG nova.compute.manager [None req-598146b4-a7b4-497e-8f70-a27771b8bdcf 168ca7898b964a44b76c90912fa89a66 4d4e37f4fd7f4dbbb25648ec639e0e43 - - default default] [instance: 538fe1f0-b666-4b97-b2ef-317adae0a47a] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 20 14:56:42 compute-1 nova_compute[225855]: 2026-01-20 14:56:42.461 225859 DEBUG nova.virt.libvirt.driver [None req-598146b4-a7b4-497e-8f70-a27771b8bdcf 168ca7898b964a44b76c90912fa89a66 4d4e37f4fd7f4dbbb25648ec639e0e43 - - default default] [instance: 538fe1f0-b666-4b97-b2ef-317adae0a47a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 20 14:56:42 compute-1 nova_compute[225855]: 2026-01-20 14:56:42.461 225859 INFO nova.virt.libvirt.driver [None req-598146b4-a7b4-497e-8f70-a27771b8bdcf 168ca7898b964a44b76c90912fa89a66 4d4e37f4fd7f4dbbb25648ec639e0e43 - - default default] [instance: 538fe1f0-b666-4b97-b2ef-317adae0a47a] Creating image(s)
Jan 20 14:56:42 compute-1 nova_compute[225855]: 2026-01-20 14:56:42.485 225859 DEBUG nova.storage.rbd_utils [None req-598146b4-a7b4-497e-8f70-a27771b8bdcf 168ca7898b964a44b76c90912fa89a66 4d4e37f4fd7f4dbbb25648ec639e0e43 - - default default] rbd image 538fe1f0-b666-4b97-b2ef-317adae0a47a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:56:42 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:56:42 compute-1 nova_compute[225855]: 2026-01-20 14:56:42.512 225859 DEBUG nova.storage.rbd_utils [None req-598146b4-a7b4-497e-8f70-a27771b8bdcf 168ca7898b964a44b76c90912fa89a66 4d4e37f4fd7f4dbbb25648ec639e0e43 - - default default] rbd image 538fe1f0-b666-4b97-b2ef-317adae0a47a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:56:42 compute-1 nova_compute[225855]: 2026-01-20 14:56:42.546 225859 DEBUG nova.storage.rbd_utils [None req-598146b4-a7b4-497e-8f70-a27771b8bdcf 168ca7898b964a44b76c90912fa89a66 4d4e37f4fd7f4dbbb25648ec639e0e43 - - default default] rbd image 538fe1f0-b666-4b97-b2ef-317adae0a47a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:56:42 compute-1 nova_compute[225855]: 2026-01-20 14:56:42.550 225859 DEBUG oslo_concurrency.processutils [None req-598146b4-a7b4-497e-8f70-a27771b8bdcf 168ca7898b964a44b76c90912fa89a66 4d4e37f4fd7f4dbbb25648ec639e0e43 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:56:42 compute-1 nova_compute[225855]: 2026-01-20 14:56:42.574 225859 DEBUG nova.policy [None req-598146b4-a7b4-497e-8f70-a27771b8bdcf 168ca7898b964a44b76c90912fa89a66 4d4e37f4fd7f4dbbb25648ec639e0e43 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '168ca7898b964a44b76c90912fa89a66', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '4d4e37f4fd7f4dbbb25648ec639e0e43', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 20 14:56:42 compute-1 nova_compute[225855]: 2026-01-20 14:56:42.610 225859 DEBUG oslo_concurrency.processutils [None req-598146b4-a7b4-497e-8f70-a27771b8bdcf 168ca7898b964a44b76c90912fa89a66 4d4e37f4fd7f4dbbb25648ec639e0e43 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:56:42 compute-1 nova_compute[225855]: 2026-01-20 14:56:42.611 225859 DEBUG oslo_concurrency.lockutils [None req-598146b4-a7b4-497e-8f70-a27771b8bdcf 168ca7898b964a44b76c90912fa89a66 4d4e37f4fd7f4dbbb25648ec639e0e43 - - default default] Acquiring lock "82d5c1918fd7c974214c7a48c1793a7a82560462" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:56:42 compute-1 nova_compute[225855]: 2026-01-20 14:56:42.611 225859 DEBUG oslo_concurrency.lockutils [None req-598146b4-a7b4-497e-8f70-a27771b8bdcf 168ca7898b964a44b76c90912fa89a66 4d4e37f4fd7f4dbbb25648ec639e0e43 - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:56:42 compute-1 nova_compute[225855]: 2026-01-20 14:56:42.612 225859 DEBUG oslo_concurrency.lockutils [None req-598146b4-a7b4-497e-8f70-a27771b8bdcf 168ca7898b964a44b76c90912fa89a66 4d4e37f4fd7f4dbbb25648ec639e0e43 - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:56:42 compute-1 nova_compute[225855]: 2026-01-20 14:56:42.634 225859 DEBUG nova.storage.rbd_utils [None req-598146b4-a7b4-497e-8f70-a27771b8bdcf 168ca7898b964a44b76c90912fa89a66 4d4e37f4fd7f4dbbb25648ec639e0e43 - - default default] rbd image 538fe1f0-b666-4b97-b2ef-317adae0a47a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:56:42 compute-1 nova_compute[225855]: 2026-01-20 14:56:42.637 225859 DEBUG oslo_concurrency.processutils [None req-598146b4-a7b4-497e-8f70-a27771b8bdcf 168ca7898b964a44b76c90912fa89a66 4d4e37f4fd7f4dbbb25648ec639e0e43 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 538fe1f0-b666-4b97-b2ef-317adae0a47a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:56:42 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:56:42 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/394105736' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:56:42 compute-1 ceph-mon[81775]: pgmap v2126: 321 pgs: 321 active+clean; 372 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 3.4 MiB/s wr, 141 op/s
Jan 20 14:56:42 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/1147169689' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:56:42 compute-1 nova_compute[225855]: 2026-01-20 14:56:42.856 225859 DEBUG oslo_concurrency.processutils [None req-116c4c26-7bd7-4fed-8899-981d8620cba1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:56:42 compute-1 nova_compute[225855]: 2026-01-20 14:56:42.862 225859 DEBUG nova.compute.provider_tree [None req-116c4c26-7bd7-4fed-8899-981d8620cba1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:56:42 compute-1 nova_compute[225855]: 2026-01-20 14:56:42.884 225859 DEBUG nova.scheduler.client.report [None req-116c4c26-7bd7-4fed-8899-981d8620cba1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:56:42 compute-1 nova_compute[225855]: 2026-01-20 14:56:42.903 225859 DEBUG oslo_concurrency.lockutils [None req-116c4c26-7bd7-4fed-8899-981d8620cba1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.679s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:56:42 compute-1 nova_compute[225855]: 2026-01-20 14:56:42.904 225859 DEBUG nova.compute.manager [None req-116c4c26-7bd7-4fed-8899-981d8620cba1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: 3426109c-5671-4cc7-89b6-fea13983f921] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 20 14:56:42 compute-1 nova_compute[225855]: 2026-01-20 14:56:42.971 225859 DEBUG nova.compute.manager [None req-116c4c26-7bd7-4fed-8899-981d8620cba1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: 3426109c-5671-4cc7-89b6-fea13983f921] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 20 14:56:42 compute-1 nova_compute[225855]: 2026-01-20 14:56:42.971 225859 DEBUG nova.network.neutron [None req-116c4c26-7bd7-4fed-8899-981d8620cba1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: 3426109c-5671-4cc7-89b6-fea13983f921] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 20 14:56:42 compute-1 nova_compute[225855]: 2026-01-20 14:56:42.990 225859 INFO nova.virt.libvirt.driver [None req-116c4c26-7bd7-4fed-8899-981d8620cba1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: 3426109c-5671-4cc7-89b6-fea13983f921] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 20 14:56:43 compute-1 nova_compute[225855]: 2026-01-20 14:56:43.007 225859 DEBUG nova.compute.manager [None req-116c4c26-7bd7-4fed-8899-981d8620cba1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: 3426109c-5671-4cc7-89b6-fea13983f921] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 20 14:56:43 compute-1 nova_compute[225855]: 2026-01-20 14:56:43.090 225859 DEBUG nova.compute.manager [None req-116c4c26-7bd7-4fed-8899-981d8620cba1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: 3426109c-5671-4cc7-89b6-fea13983f921] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 20 14:56:43 compute-1 nova_compute[225855]: 2026-01-20 14:56:43.092 225859 DEBUG nova.virt.libvirt.driver [None req-116c4c26-7bd7-4fed-8899-981d8620cba1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: 3426109c-5671-4cc7-89b6-fea13983f921] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 20 14:56:43 compute-1 nova_compute[225855]: 2026-01-20 14:56:43.092 225859 INFO nova.virt.libvirt.driver [None req-116c4c26-7bd7-4fed-8899-981d8620cba1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: 3426109c-5671-4cc7-89b6-fea13983f921] Creating image(s)
Jan 20 14:56:43 compute-1 nova_compute[225855]: 2026-01-20 14:56:43.115 225859 DEBUG nova.storage.rbd_utils [None req-116c4c26-7bd7-4fed-8899-981d8620cba1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] rbd image 3426109c-5671-4cc7-89b6-fea13983f921_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:56:43 compute-1 nova_compute[225855]: 2026-01-20 14:56:43.140 225859 DEBUG nova.storage.rbd_utils [None req-116c4c26-7bd7-4fed-8899-981d8620cba1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] rbd image 3426109c-5671-4cc7-89b6-fea13983f921_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:56:43 compute-1 nova_compute[225855]: 2026-01-20 14:56:43.167 225859 DEBUG nova.storage.rbd_utils [None req-116c4c26-7bd7-4fed-8899-981d8620cba1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] rbd image 3426109c-5671-4cc7-89b6-fea13983f921_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:56:43 compute-1 nova_compute[225855]: 2026-01-20 14:56:43.170 225859 DEBUG oslo_concurrency.processutils [None req-116c4c26-7bd7-4fed-8899-981d8620cba1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:56:43 compute-1 nova_compute[225855]: 2026-01-20 14:56:43.228 225859 DEBUG oslo_concurrency.processutils [None req-116c4c26-7bd7-4fed-8899-981d8620cba1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:56:43 compute-1 nova_compute[225855]: 2026-01-20 14:56:43.229 225859 DEBUG oslo_concurrency.lockutils [None req-116c4c26-7bd7-4fed-8899-981d8620cba1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Acquiring lock "82d5c1918fd7c974214c7a48c1793a7a82560462" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:56:43 compute-1 nova_compute[225855]: 2026-01-20 14:56:43.229 225859 DEBUG oslo_concurrency.lockutils [None req-116c4c26-7bd7-4fed-8899-981d8620cba1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:56:43 compute-1 nova_compute[225855]: 2026-01-20 14:56:43.230 225859 DEBUG oslo_concurrency.lockutils [None req-116c4c26-7bd7-4fed-8899-981d8620cba1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:56:43 compute-1 nova_compute[225855]: 2026-01-20 14:56:43.261 225859 DEBUG nova.storage.rbd_utils [None req-116c4c26-7bd7-4fed-8899-981d8620cba1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] rbd image 3426109c-5671-4cc7-89b6-fea13983f921_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:56:43 compute-1 nova_compute[225855]: 2026-01-20 14:56:43.265 225859 DEBUG oslo_concurrency.processutils [None req-116c4c26-7bd7-4fed-8899-981d8620cba1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 3426109c-5671-4cc7-89b6-fea13983f921_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:56:43 compute-1 nova_compute[225855]: 2026-01-20 14:56:43.288 225859 DEBUG nova.policy [None req-116c4c26-7bd7-4fed-8899-981d8620cba1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '395a5c503218411284bc94c45263d1fb', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ca6cd0afe0ab41e3ab36d21a4129f734', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 20 14:56:43 compute-1 nova_compute[225855]: 2026-01-20 14:56:43.295 225859 DEBUG nova.network.neutron [None req-598146b4-a7b4-497e-8f70-a27771b8bdcf 168ca7898b964a44b76c90912fa89a66 4d4e37f4fd7f4dbbb25648ec639e0e43 - - default default] [instance: 538fe1f0-b666-4b97-b2ef-317adae0a47a] Successfully created port: 6550efe7-7235-437c-b9f3-728b676371ee _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 20 14:56:43 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:56:43 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:56:43 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:56:43.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:56:43 compute-1 nova_compute[225855]: 2026-01-20 14:56:43.801 225859 DEBUG oslo_concurrency.processutils [None req-116c4c26-7bd7-4fed-8899-981d8620cba1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 3426109c-5671-4cc7-89b6-fea13983f921_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.535s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:56:43 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/394105736' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:56:43 compute-1 nova_compute[225855]: 2026-01-20 14:56:43.918 225859 DEBUG nova.storage.rbd_utils [None req-116c4c26-7bd7-4fed-8899-981d8620cba1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] resizing rbd image 3426109c-5671-4cc7-89b6-fea13983f921_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 20 14:56:44 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:56:44 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:56:44 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:56:44.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:56:44 compute-1 nova_compute[225855]: 2026-01-20 14:56:44.286 225859 DEBUG oslo_concurrency.processutils [None req-598146b4-a7b4-497e-8f70-a27771b8bdcf 168ca7898b964a44b76c90912fa89a66 4d4e37f4fd7f4dbbb25648ec639e0e43 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 538fe1f0-b666-4b97-b2ef-317adae0a47a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.649s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:56:44 compute-1 nova_compute[225855]: 2026-01-20 14:56:44.431 225859 DEBUG nova.storage.rbd_utils [None req-598146b4-a7b4-497e-8f70-a27771b8bdcf 168ca7898b964a44b76c90912fa89a66 4d4e37f4fd7f4dbbb25648ec639e0e43 - - default default] resizing rbd image 538fe1f0-b666-4b97-b2ef-317adae0a47a_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 20 14:56:44 compute-1 nova_compute[225855]: 2026-01-20 14:56:44.479 225859 DEBUG nova.objects.instance [None req-116c4c26-7bd7-4fed-8899-981d8620cba1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Lazy-loading 'migration_context' on Instance uuid 3426109c-5671-4cc7-89b6-fea13983f921 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:56:44 compute-1 nova_compute[225855]: 2026-01-20 14:56:44.493 225859 DEBUG nova.virt.libvirt.driver [None req-116c4c26-7bd7-4fed-8899-981d8620cba1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: 3426109c-5671-4cc7-89b6-fea13983f921] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 20 14:56:44 compute-1 nova_compute[225855]: 2026-01-20 14:56:44.494 225859 DEBUG nova.virt.libvirt.driver [None req-116c4c26-7bd7-4fed-8899-981d8620cba1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: 3426109c-5671-4cc7-89b6-fea13983f921] Ensure instance console log exists: /var/lib/nova/instances/3426109c-5671-4cc7-89b6-fea13983f921/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 20 14:56:44 compute-1 nova_compute[225855]: 2026-01-20 14:56:44.494 225859 DEBUG oslo_concurrency.lockutils [None req-116c4c26-7bd7-4fed-8899-981d8620cba1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:56:44 compute-1 nova_compute[225855]: 2026-01-20 14:56:44.495 225859 DEBUG oslo_concurrency.lockutils [None req-116c4c26-7bd7-4fed-8899-981d8620cba1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:56:44 compute-1 nova_compute[225855]: 2026-01-20 14:56:44.495 225859 DEBUG oslo_concurrency.lockutils [None req-116c4c26-7bd7-4fed-8899-981d8620cba1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:56:44 compute-1 nova_compute[225855]: 2026-01-20 14:56:44.746 225859 DEBUG nova.objects.instance [None req-598146b4-a7b4-497e-8f70-a27771b8bdcf 168ca7898b964a44b76c90912fa89a66 4d4e37f4fd7f4dbbb25648ec639e0e43 - - default default] Lazy-loading 'migration_context' on Instance uuid 538fe1f0-b666-4b97-b2ef-317adae0a47a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:56:44 compute-1 nova_compute[225855]: 2026-01-20 14:56:44.770 225859 DEBUG nova.virt.libvirt.driver [None req-598146b4-a7b4-497e-8f70-a27771b8bdcf 168ca7898b964a44b76c90912fa89a66 4d4e37f4fd7f4dbbb25648ec639e0e43 - - default default] [instance: 538fe1f0-b666-4b97-b2ef-317adae0a47a] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 20 14:56:44 compute-1 nova_compute[225855]: 2026-01-20 14:56:44.770 225859 DEBUG nova.virt.libvirt.driver [None req-598146b4-a7b4-497e-8f70-a27771b8bdcf 168ca7898b964a44b76c90912fa89a66 4d4e37f4fd7f4dbbb25648ec639e0e43 - - default default] [instance: 538fe1f0-b666-4b97-b2ef-317adae0a47a] Ensure instance console log exists: /var/lib/nova/instances/538fe1f0-b666-4b97-b2ef-317adae0a47a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 20 14:56:44 compute-1 nova_compute[225855]: 2026-01-20 14:56:44.771 225859 DEBUG oslo_concurrency.lockutils [None req-598146b4-a7b4-497e-8f70-a27771b8bdcf 168ca7898b964a44b76c90912fa89a66 4d4e37f4fd7f4dbbb25648ec639e0e43 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:56:44 compute-1 nova_compute[225855]: 2026-01-20 14:56:44.771 225859 DEBUG oslo_concurrency.lockutils [None req-598146b4-a7b4-497e-8f70-a27771b8bdcf 168ca7898b964a44b76c90912fa89a66 4d4e37f4fd7f4dbbb25648ec639e0e43 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:56:44 compute-1 nova_compute[225855]: 2026-01-20 14:56:44.771 225859 DEBUG oslo_concurrency.lockutils [None req-598146b4-a7b4-497e-8f70-a27771b8bdcf 168ca7898b964a44b76c90912fa89a66 4d4e37f4fd7f4dbbb25648ec639e0e43 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:56:44 compute-1 ceph-mon[81775]: pgmap v2127: 321 pgs: 321 active+clean; 397 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 4.2 MiB/s wr, 159 op/s
Jan 20 14:56:45 compute-1 podman[278323]: 2026-01-20 14:56:45.061839777 +0000 UTC m=+0.100295190 container health_status 72be95c12e041eb2cd1a16ad70dfa1391d9f03a5f3180a614ead9459dcfcd46f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 20 14:56:45 compute-1 nova_compute[225855]: 2026-01-20 14:56:45.211 225859 DEBUG nova.network.neutron [None req-116c4c26-7bd7-4fed-8899-981d8620cba1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: 3426109c-5671-4cc7-89b6-fea13983f921] Successfully created port: d93a212a-0f1f-4f7e-9e7e-ee3fd5a542e8 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 20 14:56:45 compute-1 nova_compute[225855]: 2026-01-20 14:56:45.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:56:45 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:56:45 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:56:45 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:56:45.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:56:45 compute-1 nova_compute[225855]: 2026-01-20 14:56:45.627 225859 DEBUG nova.network.neutron [None req-598146b4-a7b4-497e-8f70-a27771b8bdcf 168ca7898b964a44b76c90912fa89a66 4d4e37f4fd7f4dbbb25648ec639e0e43 - - default default] [instance: 538fe1f0-b666-4b97-b2ef-317adae0a47a] Successfully updated port: 6550efe7-7235-437c-b9f3-728b676371ee _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 20 14:56:45 compute-1 nova_compute[225855]: 2026-01-20 14:56:45.650 225859 DEBUG oslo_concurrency.lockutils [None req-598146b4-a7b4-497e-8f70-a27771b8bdcf 168ca7898b964a44b76c90912fa89a66 4d4e37f4fd7f4dbbb25648ec639e0e43 - - default default] Acquiring lock "refresh_cache-538fe1f0-b666-4b97-b2ef-317adae0a47a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:56:45 compute-1 nova_compute[225855]: 2026-01-20 14:56:45.651 225859 DEBUG oslo_concurrency.lockutils [None req-598146b4-a7b4-497e-8f70-a27771b8bdcf 168ca7898b964a44b76c90912fa89a66 4d4e37f4fd7f4dbbb25648ec639e0e43 - - default default] Acquired lock "refresh_cache-538fe1f0-b666-4b97-b2ef-317adae0a47a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:56:45 compute-1 nova_compute[225855]: 2026-01-20 14:56:45.652 225859 DEBUG nova.network.neutron [None req-598146b4-a7b4-497e-8f70-a27771b8bdcf 168ca7898b964a44b76c90912fa89a66 4d4e37f4fd7f4dbbb25648ec639e0e43 - - default default] [instance: 538fe1f0-b666-4b97-b2ef-317adae0a47a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 14:56:45 compute-1 nova_compute[225855]: 2026-01-20 14:56:45.861 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:56:45 compute-1 nova_compute[225855]: 2026-01-20 14:56:45.911 225859 DEBUG nova.network.neutron [None req-598146b4-a7b4-497e-8f70-a27771b8bdcf 168ca7898b964a44b76c90912fa89a66 4d4e37f4fd7f4dbbb25648ec639e0e43 - - default default] [instance: 538fe1f0-b666-4b97-b2ef-317adae0a47a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 20 14:56:46 compute-1 nova_compute[225855]: 2026-01-20 14:56:45.999 225859 DEBUG nova.network.neutron [None req-116c4c26-7bd7-4fed-8899-981d8620cba1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: 3426109c-5671-4cc7-89b6-fea13983f921] Successfully updated port: d93a212a-0f1f-4f7e-9e7e-ee3fd5a542e8 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 20 14:56:46 compute-1 nova_compute[225855]: 2026-01-20 14:56:46.016 225859 DEBUG oslo_concurrency.lockutils [None req-116c4c26-7bd7-4fed-8899-981d8620cba1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Acquiring lock "refresh_cache-3426109c-5671-4cc7-89b6-fea13983f921" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:56:46 compute-1 nova_compute[225855]: 2026-01-20 14:56:46.017 225859 DEBUG oslo_concurrency.lockutils [None req-116c4c26-7bd7-4fed-8899-981d8620cba1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Acquired lock "refresh_cache-3426109c-5671-4cc7-89b6-fea13983f921" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:56:46 compute-1 nova_compute[225855]: 2026-01-20 14:56:46.018 225859 DEBUG nova.network.neutron [None req-116c4c26-7bd7-4fed-8899-981d8620cba1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: 3426109c-5671-4cc7-89b6-fea13983f921] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 14:56:46 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:56:46 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:56:46 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:56:46.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:56:46 compute-1 nova_compute[225855]: 2026-01-20 14:56:46.173 225859 DEBUG nova.compute.manager [req-1a5af0a2-50d9-4cd7-9675-e509d73b28f2 req-ecde0102-bb46-4490-a675-b039f3f48622 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 3426109c-5671-4cc7-89b6-fea13983f921] Received event network-changed-d93a212a-0f1f-4f7e-9e7e-ee3fd5a542e8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:56:46 compute-1 nova_compute[225855]: 2026-01-20 14:56:46.174 225859 DEBUG nova.compute.manager [req-1a5af0a2-50d9-4cd7-9675-e509d73b28f2 req-ecde0102-bb46-4490-a675-b039f3f48622 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 3426109c-5671-4cc7-89b6-fea13983f921] Refreshing instance network info cache due to event network-changed-d93a212a-0f1f-4f7e-9e7e-ee3fd5a542e8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 14:56:46 compute-1 nova_compute[225855]: 2026-01-20 14:56:46.174 225859 DEBUG oslo_concurrency.lockutils [req-1a5af0a2-50d9-4cd7-9675-e509d73b28f2 req-ecde0102-bb46-4490-a675-b039f3f48622 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-3426109c-5671-4cc7-89b6-fea13983f921" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:56:46 compute-1 nova_compute[225855]: 2026-01-20 14:56:46.242 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:56:46 compute-1 nova_compute[225855]: 2026-01-20 14:56:46.261 225859 DEBUG nova.network.neutron [None req-116c4c26-7bd7-4fed-8899-981d8620cba1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: 3426109c-5671-4cc7-89b6-fea13983f921] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 20 14:56:46 compute-1 nova_compute[225855]: 2026-01-20 14:56:46.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:56:46 compute-1 nova_compute[225855]: 2026-01-20 14:56:46.387 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:56:46 compute-1 nova_compute[225855]: 2026-01-20 14:56:46.388 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:56:46 compute-1 nova_compute[225855]: 2026-01-20 14:56:46.388 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:56:46 compute-1 nova_compute[225855]: 2026-01-20 14:56:46.389 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 14:56:46 compute-1 nova_compute[225855]: 2026-01-20 14:56:46.390 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:56:46 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:56:46 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3207207858' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:56:46 compute-1 nova_compute[225855]: 2026-01-20 14:56:46.863 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:56:46 compute-1 nova_compute[225855]: 2026-01-20 14:56:46.928 225859 DEBUG nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] skipping disk for instance-00000084 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 14:56:46 compute-1 nova_compute[225855]: 2026-01-20 14:56:46.929 225859 DEBUG nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] skipping disk for instance-00000084 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 14:56:46 compute-1 nova_compute[225855]: 2026-01-20 14:56:46.980 225859 DEBUG nova.compute.manager [req-6ac12c71-92f6-4b79-9191-cc43d7ba390a req-c7771993-1790-4c03-ba46-12fe19c55066 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 538fe1f0-b666-4b97-b2ef-317adae0a47a] Received event network-changed-6550efe7-7235-437c-b9f3-728b676371ee external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:56:46 compute-1 nova_compute[225855]: 2026-01-20 14:56:46.981 225859 DEBUG nova.compute.manager [req-6ac12c71-92f6-4b79-9191-cc43d7ba390a req-c7771993-1790-4c03-ba46-12fe19c55066 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 538fe1f0-b666-4b97-b2ef-317adae0a47a] Refreshing instance network info cache due to event network-changed-6550efe7-7235-437c-b9f3-728b676371ee. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 14:56:46 compute-1 nova_compute[225855]: 2026-01-20 14:56:46.981 225859 DEBUG oslo_concurrency.lockutils [req-6ac12c71-92f6-4b79-9191-cc43d7ba390a req-c7771993-1790-4c03-ba46-12fe19c55066 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-538fe1f0-b666-4b97-b2ef-317adae0a47a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:56:47 compute-1 nova_compute[225855]: 2026-01-20 14:56:47.058 225859 DEBUG nova.network.neutron [None req-598146b4-a7b4-497e-8f70-a27771b8bdcf 168ca7898b964a44b76c90912fa89a66 4d4e37f4fd7f4dbbb25648ec639e0e43 - - default default] [instance: 538fe1f0-b666-4b97-b2ef-317adae0a47a] Updating instance_info_cache with network_info: [{"id": "6550efe7-7235-437c-b9f3-728b676371ee", "address": "fa:16:3e:e3:4f:ce", "network": {"id": "87caaa2e-d899-4eed-8b6a-8d19125c693b", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-347508957-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "4d4e37f4fd7f4dbbb25648ec639e0e43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6550efe7-72", "ovs_interfaceid": "6550efe7-7235-437c-b9f3-728b676371ee", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:56:47 compute-1 nova_compute[225855]: 2026-01-20 14:56:47.073 225859 WARNING nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 14:56:47 compute-1 nova_compute[225855]: 2026-01-20 14:56:47.075 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4201MB free_disk=20.781208038330078GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 14:56:47 compute-1 nova_compute[225855]: 2026-01-20 14:56:47.075 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:56:47 compute-1 nova_compute[225855]: 2026-01-20 14:56:47.075 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:56:47 compute-1 nova_compute[225855]: 2026-01-20 14:56:47.090 225859 DEBUG oslo_concurrency.lockutils [None req-598146b4-a7b4-497e-8f70-a27771b8bdcf 168ca7898b964a44b76c90912fa89a66 4d4e37f4fd7f4dbbb25648ec639e0e43 - - default default] Releasing lock "refresh_cache-538fe1f0-b666-4b97-b2ef-317adae0a47a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:56:47 compute-1 nova_compute[225855]: 2026-01-20 14:56:47.090 225859 DEBUG nova.compute.manager [None req-598146b4-a7b4-497e-8f70-a27771b8bdcf 168ca7898b964a44b76c90912fa89a66 4d4e37f4fd7f4dbbb25648ec639e0e43 - - default default] [instance: 538fe1f0-b666-4b97-b2ef-317adae0a47a] Instance network_info: |[{"id": "6550efe7-7235-437c-b9f3-728b676371ee", "address": "fa:16:3e:e3:4f:ce", "network": {"id": "87caaa2e-d899-4eed-8b6a-8d19125c693b", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-347508957-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "4d4e37f4fd7f4dbbb25648ec639e0e43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6550efe7-72", "ovs_interfaceid": "6550efe7-7235-437c-b9f3-728b676371ee", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 20 14:56:47 compute-1 nova_compute[225855]: 2026-01-20 14:56:47.091 225859 DEBUG oslo_concurrency.lockutils [req-6ac12c71-92f6-4b79-9191-cc43d7ba390a req-c7771993-1790-4c03-ba46-12fe19c55066 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-538fe1f0-b666-4b97-b2ef-317adae0a47a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:56:47 compute-1 nova_compute[225855]: 2026-01-20 14:56:47.091 225859 DEBUG nova.network.neutron [req-6ac12c71-92f6-4b79-9191-cc43d7ba390a req-c7771993-1790-4c03-ba46-12fe19c55066 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 538fe1f0-b666-4b97-b2ef-317adae0a47a] Refreshing network info cache for port 6550efe7-7235-437c-b9f3-728b676371ee _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 14:56:47 compute-1 nova_compute[225855]: 2026-01-20 14:56:47.093 225859 DEBUG nova.virt.libvirt.driver [None req-598146b4-a7b4-497e-8f70-a27771b8bdcf 168ca7898b964a44b76c90912fa89a66 4d4e37f4fd7f4dbbb25648ec639e0e43 - - default default] [instance: 538fe1f0-b666-4b97-b2ef-317adae0a47a] Start _get_guest_xml network_info=[{"id": "6550efe7-7235-437c-b9f3-728b676371ee", "address": "fa:16:3e:e3:4f:ce", "network": {"id": "87caaa2e-d899-4eed-8b6a-8d19125c693b", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-347508957-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "4d4e37f4fd7f4dbbb25648ec639e0e43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6550efe7-72", "ovs_interfaceid": "6550efe7-7235-437c-b9f3-728b676371ee", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'device_type': 'disk', 'encryption_options': None, 'size': 0, 'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'guest_format': None, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'image_id': 'a32b3e07-16d8-46fd-9a7b-c242c432fcf9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 20 14:56:47 compute-1 nova_compute[225855]: 2026-01-20 14:56:47.099 225859 WARNING nova.virt.libvirt.driver [None req-598146b4-a7b4-497e-8f70-a27771b8bdcf 168ca7898b964a44b76c90912fa89a66 4d4e37f4fd7f4dbbb25648ec639e0e43 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 14:56:47 compute-1 nova_compute[225855]: 2026-01-20 14:56:47.103 225859 DEBUG nova.virt.libvirt.host [None req-598146b4-a7b4-497e-8f70-a27771b8bdcf 168ca7898b964a44b76c90912fa89a66 4d4e37f4fd7f4dbbb25648ec639e0e43 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 20 14:56:47 compute-1 nova_compute[225855]: 2026-01-20 14:56:47.104 225859 DEBUG nova.virt.libvirt.host [None req-598146b4-a7b4-497e-8f70-a27771b8bdcf 168ca7898b964a44b76c90912fa89a66 4d4e37f4fd7f4dbbb25648ec639e0e43 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 20 14:56:47 compute-1 nova_compute[225855]: 2026-01-20 14:56:47.109 225859 DEBUG nova.virt.libvirt.host [None req-598146b4-a7b4-497e-8f70-a27771b8bdcf 168ca7898b964a44b76c90912fa89a66 4d4e37f4fd7f4dbbb25648ec639e0e43 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 20 14:56:47 compute-1 nova_compute[225855]: 2026-01-20 14:56:47.110 225859 DEBUG nova.virt.libvirt.host [None req-598146b4-a7b4-497e-8f70-a27771b8bdcf 168ca7898b964a44b76c90912fa89a66 4d4e37f4fd7f4dbbb25648ec639e0e43 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 20 14:56:47 compute-1 nova_compute[225855]: 2026-01-20 14:56:47.111 225859 DEBUG nova.virt.libvirt.driver [None req-598146b4-a7b4-497e-8f70-a27771b8bdcf 168ca7898b964a44b76c90912fa89a66 4d4e37f4fd7f4dbbb25648ec639e0e43 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 20 14:56:47 compute-1 nova_compute[225855]: 2026-01-20 14:56:47.111 225859 DEBUG nova.virt.hardware [None req-598146b4-a7b4-497e-8f70-a27771b8bdcf 168ca7898b964a44b76c90912fa89a66 4d4e37f4fd7f4dbbb25648ec639e0e43 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-20T14:21:55Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='522deaab-a741-4dbb-932d-d8b13a211c33',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 20 14:56:47 compute-1 nova_compute[225855]: 2026-01-20 14:56:47.111 225859 DEBUG nova.virt.hardware [None req-598146b4-a7b4-497e-8f70-a27771b8bdcf 168ca7898b964a44b76c90912fa89a66 4d4e37f4fd7f4dbbb25648ec639e0e43 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 20 14:56:47 compute-1 nova_compute[225855]: 2026-01-20 14:56:47.112 225859 DEBUG nova.virt.hardware [None req-598146b4-a7b4-497e-8f70-a27771b8bdcf 168ca7898b964a44b76c90912fa89a66 4d4e37f4fd7f4dbbb25648ec639e0e43 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 20 14:56:47 compute-1 nova_compute[225855]: 2026-01-20 14:56:47.112 225859 DEBUG nova.virt.hardware [None req-598146b4-a7b4-497e-8f70-a27771b8bdcf 168ca7898b964a44b76c90912fa89a66 4d4e37f4fd7f4dbbb25648ec639e0e43 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 20 14:56:47 compute-1 nova_compute[225855]: 2026-01-20 14:56:47.112 225859 DEBUG nova.virt.hardware [None req-598146b4-a7b4-497e-8f70-a27771b8bdcf 168ca7898b964a44b76c90912fa89a66 4d4e37f4fd7f4dbbb25648ec639e0e43 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 20 14:56:47 compute-1 nova_compute[225855]: 2026-01-20 14:56:47.113 225859 DEBUG nova.virt.hardware [None req-598146b4-a7b4-497e-8f70-a27771b8bdcf 168ca7898b964a44b76c90912fa89a66 4d4e37f4fd7f4dbbb25648ec639e0e43 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 20 14:56:47 compute-1 nova_compute[225855]: 2026-01-20 14:56:47.113 225859 DEBUG nova.virt.hardware [None req-598146b4-a7b4-497e-8f70-a27771b8bdcf 168ca7898b964a44b76c90912fa89a66 4d4e37f4fd7f4dbbb25648ec639e0e43 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 20 14:56:47 compute-1 nova_compute[225855]: 2026-01-20 14:56:47.113 225859 DEBUG nova.virt.hardware [None req-598146b4-a7b4-497e-8f70-a27771b8bdcf 168ca7898b964a44b76c90912fa89a66 4d4e37f4fd7f4dbbb25648ec639e0e43 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 20 14:56:47 compute-1 nova_compute[225855]: 2026-01-20 14:56:47.113 225859 DEBUG nova.virt.hardware [None req-598146b4-a7b4-497e-8f70-a27771b8bdcf 168ca7898b964a44b76c90912fa89a66 4d4e37f4fd7f4dbbb25648ec639e0e43 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 20 14:56:47 compute-1 nova_compute[225855]: 2026-01-20 14:56:47.114 225859 DEBUG nova.virt.hardware [None req-598146b4-a7b4-497e-8f70-a27771b8bdcf 168ca7898b964a44b76c90912fa89a66 4d4e37f4fd7f4dbbb25648ec639e0e43 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 20 14:56:47 compute-1 nova_compute[225855]: 2026-01-20 14:56:47.114 225859 DEBUG nova.virt.hardware [None req-598146b4-a7b4-497e-8f70-a27771b8bdcf 168ca7898b964a44b76c90912fa89a66 4d4e37f4fd7f4dbbb25648ec639e0e43 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 20 14:56:47 compute-1 nova_compute[225855]: 2026-01-20 14:56:47.117 225859 DEBUG oslo_concurrency.processutils [None req-598146b4-a7b4-497e-8f70-a27771b8bdcf 168ca7898b964a44b76c90912fa89a66 4d4e37f4fd7f4dbbb25648ec639e0e43 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:56:47 compute-1 ceph-mon[81775]: pgmap v2128: 321 pgs: 321 active+clean; 430 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 4.1 MiB/s wr, 205 op/s
Jan 20 14:56:47 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/4239732291' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:56:47 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/1572009590' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:56:47 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/3207207858' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:56:47 compute-1 nova_compute[225855]: 2026-01-20 14:56:47.197 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Instance e32ecf59-145a-4ae9-a91e-288419407cd0 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 20 14:56:47 compute-1 nova_compute[225855]: 2026-01-20 14:56:47.197 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Instance 538fe1f0-b666-4b97-b2ef-317adae0a47a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 20 14:56:47 compute-1 nova_compute[225855]: 2026-01-20 14:56:47.198 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Instance 3426109c-5671-4cc7-89b6-fea13983f921 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 20 14:56:47 compute-1 nova_compute[225855]: 2026-01-20 14:56:47.198 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 14:56:47 compute-1 nova_compute[225855]: 2026-01-20 14:56:47.198 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=20GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 14:56:47 compute-1 nova_compute[225855]: 2026-01-20 14:56:47.301 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:56:47 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:56:47 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:56:47 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:56:47.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:56:47 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:56:47 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:56:47 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1351465299' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:56:47 compute-1 nova_compute[225855]: 2026-01-20 14:56:47.581 225859 DEBUG oslo_concurrency.processutils [None req-598146b4-a7b4-497e-8f70-a27771b8bdcf 168ca7898b964a44b76c90912fa89a66 4d4e37f4fd7f4dbbb25648ec639e0e43 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:56:47 compute-1 nova_compute[225855]: 2026-01-20 14:56:47.616 225859 DEBUG nova.storage.rbd_utils [None req-598146b4-a7b4-497e-8f70-a27771b8bdcf 168ca7898b964a44b76c90912fa89a66 4d4e37f4fd7f4dbbb25648ec639e0e43 - - default default] rbd image 538fe1f0-b666-4b97-b2ef-317adae0a47a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:56:47 compute-1 nova_compute[225855]: 2026-01-20 14:56:47.621 225859 DEBUG oslo_concurrency.processutils [None req-598146b4-a7b4-497e-8f70-a27771b8bdcf 168ca7898b964a44b76c90912fa89a66 4d4e37f4fd7f4dbbb25648ec639e0e43 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:56:47 compute-1 nova_compute[225855]: 2026-01-20 14:56:47.685 225859 DEBUG nova.network.neutron [None req-116c4c26-7bd7-4fed-8899-981d8620cba1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: 3426109c-5671-4cc7-89b6-fea13983f921] Updating instance_info_cache with network_info: [{"id": "d93a212a-0f1f-4f7e-9e7e-ee3fd5a542e8", "address": "fa:16:3e:4f:e1:78", "network": {"id": "f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c", "bridge": "br-int", "label": "tempest-ServersTestJSON-1745321011-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca6cd0afe0ab41e3ab36d21a4129f734", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd93a212a-0f", "ovs_interfaceid": "d93a212a-0f1f-4f7e-9e7e-ee3fd5a542e8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:56:47 compute-1 nova_compute[225855]: 2026-01-20 14:56:47.711 225859 DEBUG oslo_concurrency.lockutils [None req-116c4c26-7bd7-4fed-8899-981d8620cba1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Releasing lock "refresh_cache-3426109c-5671-4cc7-89b6-fea13983f921" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:56:47 compute-1 nova_compute[225855]: 2026-01-20 14:56:47.713 225859 DEBUG nova.compute.manager [None req-116c4c26-7bd7-4fed-8899-981d8620cba1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: 3426109c-5671-4cc7-89b6-fea13983f921] Instance network_info: |[{"id": "d93a212a-0f1f-4f7e-9e7e-ee3fd5a542e8", "address": "fa:16:3e:4f:e1:78", "network": {"id": "f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c", "bridge": "br-int", "label": "tempest-ServersTestJSON-1745321011-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca6cd0afe0ab41e3ab36d21a4129f734", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd93a212a-0f", "ovs_interfaceid": "d93a212a-0f1f-4f7e-9e7e-ee3fd5a542e8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 20 14:56:47 compute-1 nova_compute[225855]: 2026-01-20 14:56:47.714 225859 DEBUG oslo_concurrency.lockutils [req-1a5af0a2-50d9-4cd7-9675-e509d73b28f2 req-ecde0102-bb46-4490-a675-b039f3f48622 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-3426109c-5671-4cc7-89b6-fea13983f921" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:56:47 compute-1 nova_compute[225855]: 2026-01-20 14:56:47.714 225859 DEBUG nova.network.neutron [req-1a5af0a2-50d9-4cd7-9675-e509d73b28f2 req-ecde0102-bb46-4490-a675-b039f3f48622 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 3426109c-5671-4cc7-89b6-fea13983f921] Refreshing network info cache for port d93a212a-0f1f-4f7e-9e7e-ee3fd5a542e8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 14:56:47 compute-1 nova_compute[225855]: 2026-01-20 14:56:47.717 225859 DEBUG nova.virt.libvirt.driver [None req-116c4c26-7bd7-4fed-8899-981d8620cba1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: 3426109c-5671-4cc7-89b6-fea13983f921] Start _get_guest_xml network_info=[{"id": "d93a212a-0f1f-4f7e-9e7e-ee3fd5a542e8", "address": "fa:16:3e:4f:e1:78", "network": {"id": "f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c", "bridge": "br-int", "label": "tempest-ServersTestJSON-1745321011-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca6cd0afe0ab41e3ab36d21a4129f734", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd93a212a-0f", "ovs_interfaceid": "d93a212a-0f1f-4f7e-9e7e-ee3fd5a542e8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'device_type': 'disk', 'encryption_options': None, 'size': 0, 'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'guest_format': None, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'image_id': 'a32b3e07-16d8-46fd-9a7b-c242c432fcf9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 20 14:56:47 compute-1 nova_compute[225855]: 2026-01-20 14:56:47.721 225859 WARNING nova.virt.libvirt.driver [None req-116c4c26-7bd7-4fed-8899-981d8620cba1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 14:56:47 compute-1 nova_compute[225855]: 2026-01-20 14:56:47.727 225859 DEBUG nova.virt.libvirt.host [None req-116c4c26-7bd7-4fed-8899-981d8620cba1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 20 14:56:47 compute-1 nova_compute[225855]: 2026-01-20 14:56:47.728 225859 DEBUG nova.virt.libvirt.host [None req-116c4c26-7bd7-4fed-8899-981d8620cba1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 20 14:56:47 compute-1 nova_compute[225855]: 2026-01-20 14:56:47.730 225859 DEBUG nova.virt.libvirt.host [None req-116c4c26-7bd7-4fed-8899-981d8620cba1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 20 14:56:47 compute-1 nova_compute[225855]: 2026-01-20 14:56:47.731 225859 DEBUG nova.virt.libvirt.host [None req-116c4c26-7bd7-4fed-8899-981d8620cba1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 20 14:56:47 compute-1 nova_compute[225855]: 2026-01-20 14:56:47.732 225859 DEBUG nova.virt.libvirt.driver [None req-116c4c26-7bd7-4fed-8899-981d8620cba1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 20 14:56:47 compute-1 nova_compute[225855]: 2026-01-20 14:56:47.732 225859 DEBUG nova.virt.hardware [None req-116c4c26-7bd7-4fed-8899-981d8620cba1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-20T14:21:55Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='522deaab-a741-4dbb-932d-d8b13a211c33',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 20 14:56:47 compute-1 nova_compute[225855]: 2026-01-20 14:56:47.733 225859 DEBUG nova.virt.hardware [None req-116c4c26-7bd7-4fed-8899-981d8620cba1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 20 14:56:47 compute-1 nova_compute[225855]: 2026-01-20 14:56:47.733 225859 DEBUG nova.virt.hardware [None req-116c4c26-7bd7-4fed-8899-981d8620cba1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 20 14:56:47 compute-1 nova_compute[225855]: 2026-01-20 14:56:47.734 225859 DEBUG nova.virt.hardware [None req-116c4c26-7bd7-4fed-8899-981d8620cba1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 20 14:56:47 compute-1 nova_compute[225855]: 2026-01-20 14:56:47.734 225859 DEBUG nova.virt.hardware [None req-116c4c26-7bd7-4fed-8899-981d8620cba1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 20 14:56:47 compute-1 nova_compute[225855]: 2026-01-20 14:56:47.734 225859 DEBUG nova.virt.hardware [None req-116c4c26-7bd7-4fed-8899-981d8620cba1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 20 14:56:47 compute-1 nova_compute[225855]: 2026-01-20 14:56:47.734 225859 DEBUG nova.virt.hardware [None req-116c4c26-7bd7-4fed-8899-981d8620cba1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 20 14:56:47 compute-1 nova_compute[225855]: 2026-01-20 14:56:47.735 225859 DEBUG nova.virt.hardware [None req-116c4c26-7bd7-4fed-8899-981d8620cba1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 20 14:56:47 compute-1 nova_compute[225855]: 2026-01-20 14:56:47.735 225859 DEBUG nova.virt.hardware [None req-116c4c26-7bd7-4fed-8899-981d8620cba1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 20 14:56:47 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:56:47 compute-1 nova_compute[225855]: 2026-01-20 14:56:47.735 225859 DEBUG nova.virt.hardware [None req-116c4c26-7bd7-4fed-8899-981d8620cba1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 20 14:56:47 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4132156085' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:56:47 compute-1 nova_compute[225855]: 2026-01-20 14:56:47.736 225859 DEBUG nova.virt.hardware [None req-116c4c26-7bd7-4fed-8899-981d8620cba1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 20 14:56:47 compute-1 nova_compute[225855]: 2026-01-20 14:56:47.740 225859 DEBUG oslo_concurrency.processutils [None req-116c4c26-7bd7-4fed-8899-981d8620cba1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:56:47 compute-1 nova_compute[225855]: 2026-01-20 14:56:47.772 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:56:47 compute-1 nova_compute[225855]: 2026-01-20 14:56:47.778 225859 DEBUG nova.compute.provider_tree [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:56:47 compute-1 nova_compute[225855]: 2026-01-20 14:56:47.795 225859 DEBUG nova.scheduler.client.report [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:56:47 compute-1 nova_compute[225855]: 2026-01-20 14:56:47.813 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 14:56:47 compute-1 nova_compute[225855]: 2026-01-20 14:56:47.814 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.738s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:56:47 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:56:47.841 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5ffd4ac3-9266-4927-98ad-20a17782c725, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '41'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:56:48 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:56:48 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:56:48 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:56:48.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:56:48 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:56:48 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1142049157' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:56:48 compute-1 nova_compute[225855]: 2026-01-20 14:56:48.092 225859 DEBUG oslo_concurrency.processutils [None req-598146b4-a7b4-497e-8f70-a27771b8bdcf 168ca7898b964a44b76c90912fa89a66 4d4e37f4fd7f4dbbb25648ec639e0e43 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:56:48 compute-1 nova_compute[225855]: 2026-01-20 14:56:48.094 225859 DEBUG nova.virt.libvirt.vif [None req-598146b4-a7b4-497e-8f70-a27771b8bdcf 168ca7898b964a44b76c90912fa89a66 4d4e37f4fd7f4dbbb25648ec639e0e43 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T14:56:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueTestJSONUnderV235-server-430397789',display_name='tempest-ServerRescueTestJSONUnderV235-server-430397789',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverrescuetestjsonunderv235-server-430397789',id=134,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4d4e37f4fd7f4dbbb25648ec639e0e43',ramdisk_id='',reservation_id='r-xjy771y2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueTestJSONUnderV235-201664875',owner_user_name='tempest-ServerRescueTestJSONUnderV235-201664875-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T14:56:42Z,user_data=None,user_id='168ca7898b964a44b76c90912fa89a66',uuid=538fe1f0-b666-4b97-b2ef-317adae0a47a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6550efe7-7235-437c-b9f3-728b676371ee", "address": "fa:16:3e:e3:4f:ce", "network": {"id": "87caaa2e-d899-4eed-8b6a-8d19125c693b", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-347508957-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "4d4e37f4fd7f4dbbb25648ec639e0e43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6550efe7-72", "ovs_interfaceid": "6550efe7-7235-437c-b9f3-728b676371ee", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 20 14:56:48 compute-1 nova_compute[225855]: 2026-01-20 14:56:48.094 225859 DEBUG nova.network.os_vif_util [None req-598146b4-a7b4-497e-8f70-a27771b8bdcf 168ca7898b964a44b76c90912fa89a66 4d4e37f4fd7f4dbbb25648ec639e0e43 - - default default] Converting VIF {"id": "6550efe7-7235-437c-b9f3-728b676371ee", "address": "fa:16:3e:e3:4f:ce", "network": {"id": "87caaa2e-d899-4eed-8b6a-8d19125c693b", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-347508957-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "4d4e37f4fd7f4dbbb25648ec639e0e43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6550efe7-72", "ovs_interfaceid": "6550efe7-7235-437c-b9f3-728b676371ee", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:56:48 compute-1 nova_compute[225855]: 2026-01-20 14:56:48.095 225859 DEBUG nova.network.os_vif_util [None req-598146b4-a7b4-497e-8f70-a27771b8bdcf 168ca7898b964a44b76c90912fa89a66 4d4e37f4fd7f4dbbb25648ec639e0e43 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e3:4f:ce,bridge_name='br-int',has_traffic_filtering=True,id=6550efe7-7235-437c-b9f3-728b676371ee,network=Network(87caaa2e-d899-4eed-8b6a-8d19125c693b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6550efe7-72') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:56:48 compute-1 nova_compute[225855]: 2026-01-20 14:56:48.096 225859 DEBUG nova.objects.instance [None req-598146b4-a7b4-497e-8f70-a27771b8bdcf 168ca7898b964a44b76c90912fa89a66 4d4e37f4fd7f4dbbb25648ec639e0e43 - - default default] Lazy-loading 'pci_devices' on Instance uuid 538fe1f0-b666-4b97-b2ef-317adae0a47a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:56:48 compute-1 nova_compute[225855]: 2026-01-20 14:56:48.113 225859 DEBUG nova.virt.libvirt.driver [None req-598146b4-a7b4-497e-8f70-a27771b8bdcf 168ca7898b964a44b76c90912fa89a66 4d4e37f4fd7f4dbbb25648ec639e0e43 - - default default] [instance: 538fe1f0-b666-4b97-b2ef-317adae0a47a] End _get_guest_xml xml=<domain type="kvm">
Jan 20 14:56:48 compute-1 nova_compute[225855]:   <uuid>538fe1f0-b666-4b97-b2ef-317adae0a47a</uuid>
Jan 20 14:56:48 compute-1 nova_compute[225855]:   <name>instance-00000086</name>
Jan 20 14:56:48 compute-1 nova_compute[225855]:   <memory>131072</memory>
Jan 20 14:56:48 compute-1 nova_compute[225855]:   <vcpu>1</vcpu>
Jan 20 14:56:48 compute-1 nova_compute[225855]:   <metadata>
Jan 20 14:56:48 compute-1 nova_compute[225855]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 14:56:48 compute-1 nova_compute[225855]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 14:56:48 compute-1 nova_compute[225855]:       <nova:name>tempest-ServerRescueTestJSONUnderV235-server-430397789</nova:name>
Jan 20 14:56:48 compute-1 nova_compute[225855]:       <nova:creationTime>2026-01-20 14:56:47</nova:creationTime>
Jan 20 14:56:48 compute-1 nova_compute[225855]:       <nova:flavor name="m1.nano">
Jan 20 14:56:48 compute-1 nova_compute[225855]:         <nova:memory>128</nova:memory>
Jan 20 14:56:48 compute-1 nova_compute[225855]:         <nova:disk>1</nova:disk>
Jan 20 14:56:48 compute-1 nova_compute[225855]:         <nova:swap>0</nova:swap>
Jan 20 14:56:48 compute-1 nova_compute[225855]:         <nova:ephemeral>0</nova:ephemeral>
Jan 20 14:56:48 compute-1 nova_compute[225855]:         <nova:vcpus>1</nova:vcpus>
Jan 20 14:56:48 compute-1 nova_compute[225855]:       </nova:flavor>
Jan 20 14:56:48 compute-1 nova_compute[225855]:       <nova:owner>
Jan 20 14:56:48 compute-1 nova_compute[225855]:         <nova:user uuid="168ca7898b964a44b76c90912fa89a66">tempest-ServerRescueTestJSONUnderV235-201664875-project-member</nova:user>
Jan 20 14:56:48 compute-1 nova_compute[225855]:         <nova:project uuid="4d4e37f4fd7f4dbbb25648ec639e0e43">tempest-ServerRescueTestJSONUnderV235-201664875</nova:project>
Jan 20 14:56:48 compute-1 nova_compute[225855]:       </nova:owner>
Jan 20 14:56:48 compute-1 nova_compute[225855]:       <nova:root type="image" uuid="a32b3e07-16d8-46fd-9a7b-c242c432fcf9"/>
Jan 20 14:56:48 compute-1 nova_compute[225855]:       <nova:ports>
Jan 20 14:56:48 compute-1 nova_compute[225855]:         <nova:port uuid="6550efe7-7235-437c-b9f3-728b676371ee">
Jan 20 14:56:48 compute-1 nova_compute[225855]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Jan 20 14:56:48 compute-1 nova_compute[225855]:         </nova:port>
Jan 20 14:56:48 compute-1 nova_compute[225855]:       </nova:ports>
Jan 20 14:56:48 compute-1 nova_compute[225855]:     </nova:instance>
Jan 20 14:56:48 compute-1 nova_compute[225855]:   </metadata>
Jan 20 14:56:48 compute-1 nova_compute[225855]:   <sysinfo type="smbios">
Jan 20 14:56:48 compute-1 nova_compute[225855]:     <system>
Jan 20 14:56:48 compute-1 nova_compute[225855]:       <entry name="manufacturer">RDO</entry>
Jan 20 14:56:48 compute-1 nova_compute[225855]:       <entry name="product">OpenStack Compute</entry>
Jan 20 14:56:48 compute-1 nova_compute[225855]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 14:56:48 compute-1 nova_compute[225855]:       <entry name="serial">538fe1f0-b666-4b97-b2ef-317adae0a47a</entry>
Jan 20 14:56:48 compute-1 nova_compute[225855]:       <entry name="uuid">538fe1f0-b666-4b97-b2ef-317adae0a47a</entry>
Jan 20 14:56:48 compute-1 nova_compute[225855]:       <entry name="family">Virtual Machine</entry>
Jan 20 14:56:48 compute-1 nova_compute[225855]:     </system>
Jan 20 14:56:48 compute-1 nova_compute[225855]:   </sysinfo>
Jan 20 14:56:48 compute-1 nova_compute[225855]:   <os>
Jan 20 14:56:48 compute-1 nova_compute[225855]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 20 14:56:48 compute-1 nova_compute[225855]:     <boot dev="hd"/>
Jan 20 14:56:48 compute-1 nova_compute[225855]:     <smbios mode="sysinfo"/>
Jan 20 14:56:48 compute-1 nova_compute[225855]:   </os>
Jan 20 14:56:48 compute-1 nova_compute[225855]:   <features>
Jan 20 14:56:48 compute-1 nova_compute[225855]:     <acpi/>
Jan 20 14:56:48 compute-1 nova_compute[225855]:     <apic/>
Jan 20 14:56:48 compute-1 nova_compute[225855]:     <vmcoreinfo/>
Jan 20 14:56:48 compute-1 nova_compute[225855]:   </features>
Jan 20 14:56:48 compute-1 nova_compute[225855]:   <clock offset="utc">
Jan 20 14:56:48 compute-1 nova_compute[225855]:     <timer name="pit" tickpolicy="delay"/>
Jan 20 14:56:48 compute-1 nova_compute[225855]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 20 14:56:48 compute-1 nova_compute[225855]:     <timer name="hpet" present="no"/>
Jan 20 14:56:48 compute-1 nova_compute[225855]:   </clock>
Jan 20 14:56:48 compute-1 nova_compute[225855]:   <cpu mode="custom" match="exact">
Jan 20 14:56:48 compute-1 nova_compute[225855]:     <model>Nehalem</model>
Jan 20 14:56:48 compute-1 nova_compute[225855]:     <topology sockets="1" cores="1" threads="1"/>
Jan 20 14:56:48 compute-1 nova_compute[225855]:   </cpu>
Jan 20 14:56:48 compute-1 nova_compute[225855]:   <devices>
Jan 20 14:56:48 compute-1 nova_compute[225855]:     <disk type="network" device="disk">
Jan 20 14:56:48 compute-1 nova_compute[225855]:       <driver type="raw" cache="none"/>
Jan 20 14:56:48 compute-1 nova_compute[225855]:       <source protocol="rbd" name="vms/538fe1f0-b666-4b97-b2ef-317adae0a47a_disk">
Jan 20 14:56:48 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:56:48 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:56:48 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:56:48 compute-1 nova_compute[225855]:       </source>
Jan 20 14:56:48 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 14:56:48 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:56:48 compute-1 nova_compute[225855]:       </auth>
Jan 20 14:56:48 compute-1 nova_compute[225855]:       <target dev="vda" bus="virtio"/>
Jan 20 14:56:48 compute-1 nova_compute[225855]:     </disk>
Jan 20 14:56:48 compute-1 nova_compute[225855]:     <disk type="network" device="cdrom">
Jan 20 14:56:48 compute-1 nova_compute[225855]:       <driver type="raw" cache="none"/>
Jan 20 14:56:48 compute-1 nova_compute[225855]:       <source protocol="rbd" name="vms/538fe1f0-b666-4b97-b2ef-317adae0a47a_disk.config">
Jan 20 14:56:48 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:56:48 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:56:48 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:56:48 compute-1 nova_compute[225855]:       </source>
Jan 20 14:56:48 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 14:56:48 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:56:48 compute-1 nova_compute[225855]:       </auth>
Jan 20 14:56:48 compute-1 nova_compute[225855]:       <target dev="sda" bus="sata"/>
Jan 20 14:56:48 compute-1 nova_compute[225855]:     </disk>
Jan 20 14:56:48 compute-1 nova_compute[225855]:     <interface type="ethernet">
Jan 20 14:56:48 compute-1 nova_compute[225855]:       <mac address="fa:16:3e:e3:4f:ce"/>
Jan 20 14:56:48 compute-1 nova_compute[225855]:       <model type="virtio"/>
Jan 20 14:56:48 compute-1 nova_compute[225855]:       <driver name="vhost" rx_queue_size="512"/>
Jan 20 14:56:48 compute-1 nova_compute[225855]:       <mtu size="1442"/>
Jan 20 14:56:48 compute-1 nova_compute[225855]:       <target dev="tap6550efe7-72"/>
Jan 20 14:56:48 compute-1 nova_compute[225855]:     </interface>
Jan 20 14:56:48 compute-1 nova_compute[225855]:     <serial type="pty">
Jan 20 14:56:48 compute-1 nova_compute[225855]:       <log file="/var/lib/nova/instances/538fe1f0-b666-4b97-b2ef-317adae0a47a/console.log" append="off"/>
Jan 20 14:56:48 compute-1 nova_compute[225855]:     </serial>
Jan 20 14:56:48 compute-1 nova_compute[225855]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 20 14:56:48 compute-1 nova_compute[225855]:     <video>
Jan 20 14:56:48 compute-1 nova_compute[225855]:       <model type="virtio"/>
Jan 20 14:56:48 compute-1 nova_compute[225855]:     </video>
Jan 20 14:56:48 compute-1 nova_compute[225855]:     <input type="tablet" bus="usb"/>
Jan 20 14:56:48 compute-1 nova_compute[225855]:     <rng model="virtio">
Jan 20 14:56:48 compute-1 nova_compute[225855]:       <backend model="random">/dev/urandom</backend>
Jan 20 14:56:48 compute-1 nova_compute[225855]:     </rng>
Jan 20 14:56:48 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root"/>
Jan 20 14:56:48 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:56:48 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:56:48 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:56:48 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:56:48 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:56:48 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:56:48 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:56:48 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:56:48 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:56:48 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:56:48 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:56:48 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:56:48 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:56:48 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:56:48 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:56:48 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:56:48 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:56:48 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:56:48 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:56:48 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:56:48 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:56:48 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:56:48 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:56:48 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:56:48 compute-1 nova_compute[225855]:     <controller type="usb" index="0"/>
Jan 20 14:56:48 compute-1 nova_compute[225855]:     <memballoon model="virtio">
Jan 20 14:56:48 compute-1 nova_compute[225855]:       <stats period="10"/>
Jan 20 14:56:48 compute-1 nova_compute[225855]:     </memballoon>
Jan 20 14:56:48 compute-1 nova_compute[225855]:   </devices>
Jan 20 14:56:48 compute-1 nova_compute[225855]: </domain>
Jan 20 14:56:48 compute-1 nova_compute[225855]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 20 14:56:48 compute-1 nova_compute[225855]: 2026-01-20 14:56:48.115 225859 DEBUG nova.compute.manager [None req-598146b4-a7b4-497e-8f70-a27771b8bdcf 168ca7898b964a44b76c90912fa89a66 4d4e37f4fd7f4dbbb25648ec639e0e43 - - default default] [instance: 538fe1f0-b666-4b97-b2ef-317adae0a47a] Preparing to wait for external event network-vif-plugged-6550efe7-7235-437c-b9f3-728b676371ee prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 20 14:56:48 compute-1 nova_compute[225855]: 2026-01-20 14:56:48.115 225859 DEBUG oslo_concurrency.lockutils [None req-598146b4-a7b4-497e-8f70-a27771b8bdcf 168ca7898b964a44b76c90912fa89a66 4d4e37f4fd7f4dbbb25648ec639e0e43 - - default default] Acquiring lock "538fe1f0-b666-4b97-b2ef-317adae0a47a-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:56:48 compute-1 nova_compute[225855]: 2026-01-20 14:56:48.115 225859 DEBUG oslo_concurrency.lockutils [None req-598146b4-a7b4-497e-8f70-a27771b8bdcf 168ca7898b964a44b76c90912fa89a66 4d4e37f4fd7f4dbbb25648ec639e0e43 - - default default] Lock "538fe1f0-b666-4b97-b2ef-317adae0a47a-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:56:48 compute-1 nova_compute[225855]: 2026-01-20 14:56:48.115 225859 DEBUG oslo_concurrency.lockutils [None req-598146b4-a7b4-497e-8f70-a27771b8bdcf 168ca7898b964a44b76c90912fa89a66 4d4e37f4fd7f4dbbb25648ec639e0e43 - - default default] Lock "538fe1f0-b666-4b97-b2ef-317adae0a47a-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:56:48 compute-1 nova_compute[225855]: 2026-01-20 14:56:48.116 225859 DEBUG nova.virt.libvirt.vif [None req-598146b4-a7b4-497e-8f70-a27771b8bdcf 168ca7898b964a44b76c90912fa89a66 4d4e37f4fd7f4dbbb25648ec639e0e43 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T14:56:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueTestJSONUnderV235-server-430397789',display_name='tempest-ServerRescueTestJSONUnderV235-server-430397789',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverrescuetestjsonunderv235-server-430397789',id=134,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4d4e37f4fd7f4dbbb25648ec639e0e43',ramdisk_id='',reservation_id='r-xjy771y2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueTestJSONUnderV235-201664875',owner_user_name='tempest-ServerRescueTestJSONUnderV235-201664875-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T14:56:42Z,user_data=None,user_id='168ca7898b964a44b76c90912fa89a66',uuid=538fe1f0-b666-4b97-b2ef-317adae0a47a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6550efe7-7235-437c-b9f3-728b676371ee", "address": "fa:16:3e:e3:4f:ce", "network": {"id": "87caaa2e-d899-4eed-8b6a-8d19125c693b", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-347508957-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "4d4e37f4fd7f4dbbb25648ec639e0e43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6550efe7-72", "ovs_interfaceid": "6550efe7-7235-437c-b9f3-728b676371ee", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 20 14:56:48 compute-1 nova_compute[225855]: 2026-01-20 14:56:48.116 225859 DEBUG nova.network.os_vif_util [None req-598146b4-a7b4-497e-8f70-a27771b8bdcf 168ca7898b964a44b76c90912fa89a66 4d4e37f4fd7f4dbbb25648ec639e0e43 - - default default] Converting VIF {"id": "6550efe7-7235-437c-b9f3-728b676371ee", "address": "fa:16:3e:e3:4f:ce", "network": {"id": "87caaa2e-d899-4eed-8b6a-8d19125c693b", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-347508957-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "4d4e37f4fd7f4dbbb25648ec639e0e43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6550efe7-72", "ovs_interfaceid": "6550efe7-7235-437c-b9f3-728b676371ee", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:56:48 compute-1 nova_compute[225855]: 2026-01-20 14:56:48.117 225859 DEBUG nova.network.os_vif_util [None req-598146b4-a7b4-497e-8f70-a27771b8bdcf 168ca7898b964a44b76c90912fa89a66 4d4e37f4fd7f4dbbb25648ec639e0e43 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e3:4f:ce,bridge_name='br-int',has_traffic_filtering=True,id=6550efe7-7235-437c-b9f3-728b676371ee,network=Network(87caaa2e-d899-4eed-8b6a-8d19125c693b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6550efe7-72') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:56:48 compute-1 nova_compute[225855]: 2026-01-20 14:56:48.117 225859 DEBUG os_vif [None req-598146b4-a7b4-497e-8f70-a27771b8bdcf 168ca7898b964a44b76c90912fa89a66 4d4e37f4fd7f4dbbb25648ec639e0e43 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e3:4f:ce,bridge_name='br-int',has_traffic_filtering=True,id=6550efe7-7235-437c-b9f3-728b676371ee,network=Network(87caaa2e-d899-4eed-8b6a-8d19125c693b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6550efe7-72') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 20 14:56:48 compute-1 nova_compute[225855]: 2026-01-20 14:56:48.118 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:56:48 compute-1 nova_compute[225855]: 2026-01-20 14:56:48.118 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:56:48 compute-1 nova_compute[225855]: 2026-01-20 14:56:48.118 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:56:48 compute-1 nova_compute[225855]: 2026-01-20 14:56:48.121 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:56:48 compute-1 nova_compute[225855]: 2026-01-20 14:56:48.121 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6550efe7-72, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:56:48 compute-1 nova_compute[225855]: 2026-01-20 14:56:48.122 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap6550efe7-72, col_values=(('external_ids', {'iface-id': '6550efe7-7235-437c-b9f3-728b676371ee', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e3:4f:ce', 'vm-uuid': '538fe1f0-b666-4b97-b2ef-317adae0a47a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:56:48 compute-1 nova_compute[225855]: 2026-01-20 14:56:48.123 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:56:48 compute-1 NetworkManager[49104]: <info>  [1768921008.1240] manager: (tap6550efe7-72): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/230)
Jan 20 14:56:48 compute-1 nova_compute[225855]: 2026-01-20 14:56:48.127 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 14:56:48 compute-1 nova_compute[225855]: 2026-01-20 14:56:48.131 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:56:48 compute-1 nova_compute[225855]: 2026-01-20 14:56:48.132 225859 INFO os_vif [None req-598146b4-a7b4-497e-8f70-a27771b8bdcf 168ca7898b964a44b76c90912fa89a66 4d4e37f4fd7f4dbbb25648ec639e0e43 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e3:4f:ce,bridge_name='br-int',has_traffic_filtering=True,id=6550efe7-7235-437c-b9f3-728b676371ee,network=Network(87caaa2e-d899-4eed-8b6a-8d19125c693b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6550efe7-72')
Jan 20 14:56:48 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e309 e309: 3 total, 3 up, 3 in
Jan 20 14:56:48 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/3627162203' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:56:48 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/1351465299' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:56:48 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/1427266879' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:56:48 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/4132156085' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:56:48 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/1142049157' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:56:48 compute-1 nova_compute[225855]: 2026-01-20 14:56:48.189 225859 DEBUG nova.virt.libvirt.driver [None req-598146b4-a7b4-497e-8f70-a27771b8bdcf 168ca7898b964a44b76c90912fa89a66 4d4e37f4fd7f4dbbb25648ec639e0e43 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:56:48 compute-1 nova_compute[225855]: 2026-01-20 14:56:48.189 225859 DEBUG nova.virt.libvirt.driver [None req-598146b4-a7b4-497e-8f70-a27771b8bdcf 168ca7898b964a44b76c90912fa89a66 4d4e37f4fd7f4dbbb25648ec639e0e43 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:56:48 compute-1 nova_compute[225855]: 2026-01-20 14:56:48.190 225859 DEBUG nova.virt.libvirt.driver [None req-598146b4-a7b4-497e-8f70-a27771b8bdcf 168ca7898b964a44b76c90912fa89a66 4d4e37f4fd7f4dbbb25648ec639e0e43 - - default default] No VIF found with MAC fa:16:3e:e3:4f:ce, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 20 14:56:48 compute-1 nova_compute[225855]: 2026-01-20 14:56:48.190 225859 INFO nova.virt.libvirt.driver [None req-598146b4-a7b4-497e-8f70-a27771b8bdcf 168ca7898b964a44b76c90912fa89a66 4d4e37f4fd7f4dbbb25648ec639e0e43 - - default default] [instance: 538fe1f0-b666-4b97-b2ef-317adae0a47a] Using config drive
Jan 20 14:56:48 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:56:48 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4275779399' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:56:48 compute-1 nova_compute[225855]: 2026-01-20 14:56:48.215 225859 DEBUG nova.storage.rbd_utils [None req-598146b4-a7b4-497e-8f70-a27771b8bdcf 168ca7898b964a44b76c90912fa89a66 4d4e37f4fd7f4dbbb25648ec639e0e43 - - default default] rbd image 538fe1f0-b666-4b97-b2ef-317adae0a47a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:56:48 compute-1 nova_compute[225855]: 2026-01-20 14:56:48.225 225859 DEBUG oslo_concurrency.processutils [None req-116c4c26-7bd7-4fed-8899-981d8620cba1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:56:48 compute-1 nova_compute[225855]: 2026-01-20 14:56:48.251 225859 DEBUG nova.storage.rbd_utils [None req-116c4c26-7bd7-4fed-8899-981d8620cba1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] rbd image 3426109c-5671-4cc7-89b6-fea13983f921_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:56:48 compute-1 nova_compute[225855]: 2026-01-20 14:56:48.256 225859 DEBUG oslo_concurrency.processutils [None req-116c4c26-7bd7-4fed-8899-981d8620cba1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:56:48 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:56:48 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4266035393' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:56:48 compute-1 nova_compute[225855]: 2026-01-20 14:56:48.680 225859 INFO nova.virt.libvirt.driver [None req-598146b4-a7b4-497e-8f70-a27771b8bdcf 168ca7898b964a44b76c90912fa89a66 4d4e37f4fd7f4dbbb25648ec639e0e43 - - default default] [instance: 538fe1f0-b666-4b97-b2ef-317adae0a47a] Creating config drive at /var/lib/nova/instances/538fe1f0-b666-4b97-b2ef-317adae0a47a/disk.config
Jan 20 14:56:48 compute-1 nova_compute[225855]: 2026-01-20 14:56:48.685 225859 DEBUG oslo_concurrency.processutils [None req-598146b4-a7b4-497e-8f70-a27771b8bdcf 168ca7898b964a44b76c90912fa89a66 4d4e37f4fd7f4dbbb25648ec639e0e43 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/538fe1f0-b666-4b97-b2ef-317adae0a47a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpdsj22l2k execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:56:48 compute-1 nova_compute[225855]: 2026-01-20 14:56:48.704 225859 DEBUG oslo_concurrency.processutils [None req-116c4c26-7bd7-4fed-8899-981d8620cba1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:56:48 compute-1 nova_compute[225855]: 2026-01-20 14:56:48.706 225859 DEBUG nova.virt.libvirt.vif [None req-116c4c26-7bd7-4fed-8899-981d8620cba1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T14:56:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1537565903',display_name='tempest-ServersTestJSON-server-1537565903',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverstestjson-server-1537565903',id=135,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ca6cd0afe0ab41e3ab36d21a4129f734',ramdisk_id='',reservation_id='r-9isslkfb',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-405461620',owner_user_name='tempest-ServersTestJSON-405461620-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T14:56:43Z,user_data=None,user_id='395a5c503218411284bc94c45263d1fb',uuid=3426109c-5671-4cc7-89b6-fea13983f921,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d93a212a-0f1f-4f7e-9e7e-ee3fd5a542e8", "address": "fa:16:3e:4f:e1:78", "network": {"id": "f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c", "bridge": "br-int", "label": "tempest-ServersTestJSON-1745321011-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca6cd0afe0ab41e3ab36d21a4129f734", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd93a212a-0f", "ovs_interfaceid": "d93a212a-0f1f-4f7e-9e7e-ee3fd5a542e8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 20 14:56:48 compute-1 nova_compute[225855]: 2026-01-20 14:56:48.706 225859 DEBUG nova.network.os_vif_util [None req-116c4c26-7bd7-4fed-8899-981d8620cba1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Converting VIF {"id": "d93a212a-0f1f-4f7e-9e7e-ee3fd5a542e8", "address": "fa:16:3e:4f:e1:78", "network": {"id": "f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c", "bridge": "br-int", "label": "tempest-ServersTestJSON-1745321011-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca6cd0afe0ab41e3ab36d21a4129f734", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd93a212a-0f", "ovs_interfaceid": "d93a212a-0f1f-4f7e-9e7e-ee3fd5a542e8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:56:48 compute-1 nova_compute[225855]: 2026-01-20 14:56:48.707 225859 DEBUG nova.network.os_vif_util [None req-116c4c26-7bd7-4fed-8899-981d8620cba1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4f:e1:78,bridge_name='br-int',has_traffic_filtering=True,id=d93a212a-0f1f-4f7e-9e7e-ee3fd5a542e8,network=Network(f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd93a212a-0f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:56:48 compute-1 nova_compute[225855]: 2026-01-20 14:56:48.708 225859 DEBUG nova.objects.instance [None req-116c4c26-7bd7-4fed-8899-981d8620cba1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Lazy-loading 'pci_devices' on Instance uuid 3426109c-5671-4cc7-89b6-fea13983f921 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:56:48 compute-1 nova_compute[225855]: 2026-01-20 14:56:48.724 225859 DEBUG nova.virt.libvirt.driver [None req-116c4c26-7bd7-4fed-8899-981d8620cba1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: 3426109c-5671-4cc7-89b6-fea13983f921] End _get_guest_xml xml=<domain type="kvm">
Jan 20 14:56:48 compute-1 nova_compute[225855]:   <uuid>3426109c-5671-4cc7-89b6-fea13983f921</uuid>
Jan 20 14:56:48 compute-1 nova_compute[225855]:   <name>instance-00000087</name>
Jan 20 14:56:48 compute-1 nova_compute[225855]:   <memory>131072</memory>
Jan 20 14:56:48 compute-1 nova_compute[225855]:   <vcpu>1</vcpu>
Jan 20 14:56:48 compute-1 nova_compute[225855]:   <metadata>
Jan 20 14:56:48 compute-1 nova_compute[225855]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 14:56:48 compute-1 nova_compute[225855]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 14:56:48 compute-1 nova_compute[225855]:       <nova:name>tempest-ServersTestJSON-server-1537565903</nova:name>
Jan 20 14:56:48 compute-1 nova_compute[225855]:       <nova:creationTime>2026-01-20 14:56:47</nova:creationTime>
Jan 20 14:56:48 compute-1 nova_compute[225855]:       <nova:flavor name="m1.nano">
Jan 20 14:56:48 compute-1 nova_compute[225855]:         <nova:memory>128</nova:memory>
Jan 20 14:56:48 compute-1 nova_compute[225855]:         <nova:disk>1</nova:disk>
Jan 20 14:56:48 compute-1 nova_compute[225855]:         <nova:swap>0</nova:swap>
Jan 20 14:56:48 compute-1 nova_compute[225855]:         <nova:ephemeral>0</nova:ephemeral>
Jan 20 14:56:48 compute-1 nova_compute[225855]:         <nova:vcpus>1</nova:vcpus>
Jan 20 14:56:48 compute-1 nova_compute[225855]:       </nova:flavor>
Jan 20 14:56:48 compute-1 nova_compute[225855]:       <nova:owner>
Jan 20 14:56:48 compute-1 nova_compute[225855]:         <nova:user uuid="395a5c503218411284bc94c45263d1fb">tempest-ServersTestJSON-405461620-project-member</nova:user>
Jan 20 14:56:48 compute-1 nova_compute[225855]:         <nova:project uuid="ca6cd0afe0ab41e3ab36d21a4129f734">tempest-ServersTestJSON-405461620</nova:project>
Jan 20 14:56:48 compute-1 nova_compute[225855]:       </nova:owner>
Jan 20 14:56:48 compute-1 nova_compute[225855]:       <nova:root type="image" uuid="a32b3e07-16d8-46fd-9a7b-c242c432fcf9"/>
Jan 20 14:56:48 compute-1 nova_compute[225855]:       <nova:ports>
Jan 20 14:56:48 compute-1 nova_compute[225855]:         <nova:port uuid="d93a212a-0f1f-4f7e-9e7e-ee3fd5a542e8">
Jan 20 14:56:48 compute-1 nova_compute[225855]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Jan 20 14:56:48 compute-1 nova_compute[225855]:         </nova:port>
Jan 20 14:56:48 compute-1 nova_compute[225855]:       </nova:ports>
Jan 20 14:56:48 compute-1 nova_compute[225855]:     </nova:instance>
Jan 20 14:56:48 compute-1 nova_compute[225855]:   </metadata>
Jan 20 14:56:48 compute-1 nova_compute[225855]:   <sysinfo type="smbios">
Jan 20 14:56:48 compute-1 nova_compute[225855]:     <system>
Jan 20 14:56:48 compute-1 nova_compute[225855]:       <entry name="manufacturer">RDO</entry>
Jan 20 14:56:48 compute-1 nova_compute[225855]:       <entry name="product">OpenStack Compute</entry>
Jan 20 14:56:48 compute-1 nova_compute[225855]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 14:56:48 compute-1 nova_compute[225855]:       <entry name="serial">3426109c-5671-4cc7-89b6-fea13983f921</entry>
Jan 20 14:56:48 compute-1 nova_compute[225855]:       <entry name="uuid">3426109c-5671-4cc7-89b6-fea13983f921</entry>
Jan 20 14:56:48 compute-1 nova_compute[225855]:       <entry name="family">Virtual Machine</entry>
Jan 20 14:56:48 compute-1 nova_compute[225855]:     </system>
Jan 20 14:56:48 compute-1 nova_compute[225855]:   </sysinfo>
Jan 20 14:56:48 compute-1 nova_compute[225855]:   <os>
Jan 20 14:56:48 compute-1 nova_compute[225855]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 20 14:56:48 compute-1 nova_compute[225855]:     <boot dev="hd"/>
Jan 20 14:56:48 compute-1 nova_compute[225855]:     <smbios mode="sysinfo"/>
Jan 20 14:56:48 compute-1 nova_compute[225855]:   </os>
Jan 20 14:56:48 compute-1 nova_compute[225855]:   <features>
Jan 20 14:56:48 compute-1 nova_compute[225855]:     <acpi/>
Jan 20 14:56:48 compute-1 nova_compute[225855]:     <apic/>
Jan 20 14:56:48 compute-1 nova_compute[225855]:     <vmcoreinfo/>
Jan 20 14:56:48 compute-1 nova_compute[225855]:   </features>
Jan 20 14:56:48 compute-1 nova_compute[225855]:   <clock offset="utc">
Jan 20 14:56:48 compute-1 nova_compute[225855]:     <timer name="pit" tickpolicy="delay"/>
Jan 20 14:56:48 compute-1 nova_compute[225855]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 20 14:56:48 compute-1 nova_compute[225855]:     <timer name="hpet" present="no"/>
Jan 20 14:56:48 compute-1 nova_compute[225855]:   </clock>
Jan 20 14:56:48 compute-1 nova_compute[225855]:   <cpu mode="custom" match="exact">
Jan 20 14:56:48 compute-1 nova_compute[225855]:     <model>Nehalem</model>
Jan 20 14:56:48 compute-1 nova_compute[225855]:     <topology sockets="1" cores="1" threads="1"/>
Jan 20 14:56:48 compute-1 nova_compute[225855]:   </cpu>
Jan 20 14:56:48 compute-1 nova_compute[225855]:   <devices>
Jan 20 14:56:48 compute-1 nova_compute[225855]:     <disk type="network" device="disk">
Jan 20 14:56:48 compute-1 nova_compute[225855]:       <driver type="raw" cache="none"/>
Jan 20 14:56:48 compute-1 nova_compute[225855]:       <source protocol="rbd" name="vms/3426109c-5671-4cc7-89b6-fea13983f921_disk">
Jan 20 14:56:48 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:56:48 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:56:48 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:56:48 compute-1 nova_compute[225855]:       </source>
Jan 20 14:56:48 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 14:56:48 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:56:48 compute-1 nova_compute[225855]:       </auth>
Jan 20 14:56:48 compute-1 nova_compute[225855]:       <target dev="vda" bus="virtio"/>
Jan 20 14:56:48 compute-1 nova_compute[225855]:     </disk>
Jan 20 14:56:48 compute-1 nova_compute[225855]:     <disk type="network" device="cdrom">
Jan 20 14:56:48 compute-1 nova_compute[225855]:       <driver type="raw" cache="none"/>
Jan 20 14:56:48 compute-1 nova_compute[225855]:       <source protocol="rbd" name="vms/3426109c-5671-4cc7-89b6-fea13983f921_disk.config">
Jan 20 14:56:48 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:56:48 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:56:48 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:56:48 compute-1 nova_compute[225855]:       </source>
Jan 20 14:56:48 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 14:56:48 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:56:48 compute-1 nova_compute[225855]:       </auth>
Jan 20 14:56:48 compute-1 nova_compute[225855]:       <target dev="sda" bus="sata"/>
Jan 20 14:56:48 compute-1 nova_compute[225855]:     </disk>
Jan 20 14:56:48 compute-1 nova_compute[225855]:     <interface type="ethernet">
Jan 20 14:56:48 compute-1 nova_compute[225855]:       <mac address="fa:16:3e:4f:e1:78"/>
Jan 20 14:56:48 compute-1 nova_compute[225855]:       <model type="virtio"/>
Jan 20 14:56:48 compute-1 nova_compute[225855]:       <driver name="vhost" rx_queue_size="512"/>
Jan 20 14:56:48 compute-1 nova_compute[225855]:       <mtu size="1442"/>
Jan 20 14:56:48 compute-1 nova_compute[225855]:       <target dev="tapd93a212a-0f"/>
Jan 20 14:56:48 compute-1 nova_compute[225855]:     </interface>
Jan 20 14:56:48 compute-1 nova_compute[225855]:     <serial type="pty">
Jan 20 14:56:48 compute-1 nova_compute[225855]:       <log file="/var/lib/nova/instances/3426109c-5671-4cc7-89b6-fea13983f921/console.log" append="off"/>
Jan 20 14:56:48 compute-1 nova_compute[225855]:     </serial>
Jan 20 14:56:48 compute-1 nova_compute[225855]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 20 14:56:48 compute-1 nova_compute[225855]:     <video>
Jan 20 14:56:48 compute-1 nova_compute[225855]:       <model type="virtio"/>
Jan 20 14:56:48 compute-1 nova_compute[225855]:     </video>
Jan 20 14:56:48 compute-1 nova_compute[225855]:     <input type="tablet" bus="usb"/>
Jan 20 14:56:48 compute-1 nova_compute[225855]:     <rng model="virtio">
Jan 20 14:56:48 compute-1 nova_compute[225855]:       <backend model="random">/dev/urandom</backend>
Jan 20 14:56:48 compute-1 nova_compute[225855]:     </rng>
Jan 20 14:56:48 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root"/>
Jan 20 14:56:48 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:56:48 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:56:48 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:56:48 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:56:48 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:56:48 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:56:48 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:56:48 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:56:48 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:56:48 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:56:48 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:56:48 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:56:48 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:56:48 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:56:48 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:56:48 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:56:48 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:56:48 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:56:48 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:56:48 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:56:48 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:56:48 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:56:48 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:56:48 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:56:48 compute-1 nova_compute[225855]:     <controller type="usb" index="0"/>
Jan 20 14:56:48 compute-1 nova_compute[225855]:     <memballoon model="virtio">
Jan 20 14:56:48 compute-1 nova_compute[225855]:       <stats period="10"/>
Jan 20 14:56:48 compute-1 nova_compute[225855]:     </memballoon>
Jan 20 14:56:48 compute-1 nova_compute[225855]:   </devices>
Jan 20 14:56:48 compute-1 nova_compute[225855]: </domain>
Jan 20 14:56:48 compute-1 nova_compute[225855]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 20 14:56:48 compute-1 nova_compute[225855]: 2026-01-20 14:56:48.725 225859 DEBUG nova.compute.manager [None req-116c4c26-7bd7-4fed-8899-981d8620cba1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: 3426109c-5671-4cc7-89b6-fea13983f921] Preparing to wait for external event network-vif-plugged-d93a212a-0f1f-4f7e-9e7e-ee3fd5a542e8 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 20 14:56:48 compute-1 nova_compute[225855]: 2026-01-20 14:56:48.725 225859 DEBUG oslo_concurrency.lockutils [None req-116c4c26-7bd7-4fed-8899-981d8620cba1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Acquiring lock "3426109c-5671-4cc7-89b6-fea13983f921-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:56:48 compute-1 nova_compute[225855]: 2026-01-20 14:56:48.725 225859 DEBUG oslo_concurrency.lockutils [None req-116c4c26-7bd7-4fed-8899-981d8620cba1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Lock "3426109c-5671-4cc7-89b6-fea13983f921-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:56:48 compute-1 nova_compute[225855]: 2026-01-20 14:56:48.725 225859 DEBUG oslo_concurrency.lockutils [None req-116c4c26-7bd7-4fed-8899-981d8620cba1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Lock "3426109c-5671-4cc7-89b6-fea13983f921-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:56:48 compute-1 nova_compute[225855]: 2026-01-20 14:56:48.726 225859 DEBUG nova.virt.libvirt.vif [None req-116c4c26-7bd7-4fed-8899-981d8620cba1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T14:56:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1537565903',display_name='tempest-ServersTestJSON-server-1537565903',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverstestjson-server-1537565903',id=135,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ca6cd0afe0ab41e3ab36d21a4129f734',ramdisk_id='',reservation_id='r-9isslkfb',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-405461620',owner_user_name='tempest-ServersTestJSON-405461620-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T14:56:43Z,user_data=None,user_id='395a5c503218411284bc94c45263d1fb',uuid=3426109c-5671-4cc7-89b6-fea13983f921,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d93a212a-0f1f-4f7e-9e7e-ee3fd5a542e8", "address": "fa:16:3e:4f:e1:78", "network": {"id": "f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c", "bridge": "br-int", "label": "tempest-ServersTestJSON-1745321011-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca6cd0afe0ab41e3ab36d21a4129f734", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd93a212a-0f", "ovs_interfaceid": "d93a212a-0f1f-4f7e-9e7e-ee3fd5a542e8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 20 14:56:48 compute-1 nova_compute[225855]: 2026-01-20 14:56:48.726 225859 DEBUG nova.network.os_vif_util [None req-116c4c26-7bd7-4fed-8899-981d8620cba1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Converting VIF {"id": "d93a212a-0f1f-4f7e-9e7e-ee3fd5a542e8", "address": "fa:16:3e:4f:e1:78", "network": {"id": "f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c", "bridge": "br-int", "label": "tempest-ServersTestJSON-1745321011-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca6cd0afe0ab41e3ab36d21a4129f734", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd93a212a-0f", "ovs_interfaceid": "d93a212a-0f1f-4f7e-9e7e-ee3fd5a542e8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:56:48 compute-1 nova_compute[225855]: 2026-01-20 14:56:48.727 225859 DEBUG nova.network.os_vif_util [None req-116c4c26-7bd7-4fed-8899-981d8620cba1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4f:e1:78,bridge_name='br-int',has_traffic_filtering=True,id=d93a212a-0f1f-4f7e-9e7e-ee3fd5a542e8,network=Network(f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd93a212a-0f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:56:48 compute-1 nova_compute[225855]: 2026-01-20 14:56:48.727 225859 DEBUG os_vif [None req-116c4c26-7bd7-4fed-8899-981d8620cba1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:4f:e1:78,bridge_name='br-int',has_traffic_filtering=True,id=d93a212a-0f1f-4f7e-9e7e-ee3fd5a542e8,network=Network(f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd93a212a-0f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 20 14:56:48 compute-1 nova_compute[225855]: 2026-01-20 14:56:48.728 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:56:48 compute-1 nova_compute[225855]: 2026-01-20 14:56:48.728 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:56:48 compute-1 nova_compute[225855]: 2026-01-20 14:56:48.728 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:56:48 compute-1 nova_compute[225855]: 2026-01-20 14:56:48.731 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:56:48 compute-1 nova_compute[225855]: 2026-01-20 14:56:48.731 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd93a212a-0f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:56:48 compute-1 nova_compute[225855]: 2026-01-20 14:56:48.731 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd93a212a-0f, col_values=(('external_ids', {'iface-id': 'd93a212a-0f1f-4f7e-9e7e-ee3fd5a542e8', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:4f:e1:78', 'vm-uuid': '3426109c-5671-4cc7-89b6-fea13983f921'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:56:48 compute-1 NetworkManager[49104]: <info>  [1768921008.7695] manager: (tapd93a212a-0f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/231)
Jan 20 14:56:48 compute-1 nova_compute[225855]: 2026-01-20 14:56:48.768 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:56:48 compute-1 nova_compute[225855]: 2026-01-20 14:56:48.771 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 14:56:48 compute-1 nova_compute[225855]: 2026-01-20 14:56:48.775 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:56:48 compute-1 nova_compute[225855]: 2026-01-20 14:56:48.776 225859 INFO os_vif [None req-116c4c26-7bd7-4fed-8899-981d8620cba1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:4f:e1:78,bridge_name='br-int',has_traffic_filtering=True,id=d93a212a-0f1f-4f7e-9e7e-ee3fd5a542e8,network=Network(f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd93a212a-0f')
Jan 20 14:56:48 compute-1 nova_compute[225855]: 2026-01-20 14:56:48.812 225859 DEBUG oslo_concurrency.processutils [None req-598146b4-a7b4-497e-8f70-a27771b8bdcf 168ca7898b964a44b76c90912fa89a66 4d4e37f4fd7f4dbbb25648ec639e0e43 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/538fe1f0-b666-4b97-b2ef-317adae0a47a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpdsj22l2k" returned: 0 in 0.127s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:56:48 compute-1 nova_compute[225855]: 2026-01-20 14:56:48.835 225859 DEBUG nova.storage.rbd_utils [None req-598146b4-a7b4-497e-8f70-a27771b8bdcf 168ca7898b964a44b76c90912fa89a66 4d4e37f4fd7f4dbbb25648ec639e0e43 - - default default] rbd image 538fe1f0-b666-4b97-b2ef-317adae0a47a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:56:48 compute-1 nova_compute[225855]: 2026-01-20 14:56:48.837 225859 DEBUG oslo_concurrency.processutils [None req-598146b4-a7b4-497e-8f70-a27771b8bdcf 168ca7898b964a44b76c90912fa89a66 4d4e37f4fd7f4dbbb25648ec639e0e43 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/538fe1f0-b666-4b97-b2ef-317adae0a47a/disk.config 538fe1f0-b666-4b97-b2ef-317adae0a47a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:56:48 compute-1 nova_compute[225855]: 2026-01-20 14:56:48.868 225859 DEBUG nova.virt.libvirt.driver [None req-116c4c26-7bd7-4fed-8899-981d8620cba1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:56:48 compute-1 nova_compute[225855]: 2026-01-20 14:56:48.869 225859 DEBUG nova.virt.libvirt.driver [None req-116c4c26-7bd7-4fed-8899-981d8620cba1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:56:48 compute-1 nova_compute[225855]: 2026-01-20 14:56:48.869 225859 DEBUG nova.virt.libvirt.driver [None req-116c4c26-7bd7-4fed-8899-981d8620cba1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] No VIF found with MAC fa:16:3e:4f:e1:78, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 20 14:56:48 compute-1 nova_compute[225855]: 2026-01-20 14:56:48.869 225859 INFO nova.virt.libvirt.driver [None req-116c4c26-7bd7-4fed-8899-981d8620cba1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: 3426109c-5671-4cc7-89b6-fea13983f921] Using config drive
Jan 20 14:56:48 compute-1 nova_compute[225855]: 2026-01-20 14:56:48.898 225859 DEBUG nova.storage.rbd_utils [None req-116c4c26-7bd7-4fed-8899-981d8620cba1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] rbd image 3426109c-5671-4cc7-89b6-fea13983f921_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:56:49 compute-1 ceph-mon[81775]: pgmap v2129: 321 pgs: 321 active+clean; 465 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 4.3 MiB/s wr, 232 op/s
Jan 20 14:56:49 compute-1 ceph-mon[81775]: osdmap e309: 3 total, 3 up, 3 in
Jan 20 14:56:49 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/4275779399' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:56:49 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/4266035393' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:56:49 compute-1 nova_compute[225855]: 2026-01-20 14:56:49.293 225859 DEBUG oslo_concurrency.processutils [None req-598146b4-a7b4-497e-8f70-a27771b8bdcf 168ca7898b964a44b76c90912fa89a66 4d4e37f4fd7f4dbbb25648ec639e0e43 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/538fe1f0-b666-4b97-b2ef-317adae0a47a/disk.config 538fe1f0-b666-4b97-b2ef-317adae0a47a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:56:49 compute-1 nova_compute[225855]: 2026-01-20 14:56:49.294 225859 INFO nova.virt.libvirt.driver [None req-598146b4-a7b4-497e-8f70-a27771b8bdcf 168ca7898b964a44b76c90912fa89a66 4d4e37f4fd7f4dbbb25648ec639e0e43 - - default default] [instance: 538fe1f0-b666-4b97-b2ef-317adae0a47a] Deleting local config drive /var/lib/nova/instances/538fe1f0-b666-4b97-b2ef-317adae0a47a/disk.config because it was imported into RBD.
Jan 20 14:56:49 compute-1 kernel: tap6550efe7-72: entered promiscuous mode
Jan 20 14:56:49 compute-1 NetworkManager[49104]: <info>  [1768921009.3421] manager: (tap6550efe7-72): new Tun device (/org/freedesktop/NetworkManager/Devices/232)
Jan 20 14:56:49 compute-1 ovn_controller[130490]: 2026-01-20T14:56:49Z|00530|binding|INFO|Claiming lport 6550efe7-7235-437c-b9f3-728b676371ee for this chassis.
Jan 20 14:56:49 compute-1 ovn_controller[130490]: 2026-01-20T14:56:49Z|00531|binding|INFO|6550efe7-7235-437c-b9f3-728b676371ee: Claiming fa:16:3e:e3:4f:ce 10.100.0.3
Jan 20 14:56:49 compute-1 nova_compute[225855]: 2026-01-20 14:56:49.346 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:56:49 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:56:49.358 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e3:4f:ce 10.100.0.3'], port_security=['fa:16:3e:e3:4f:ce 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '538fe1f0-b666-4b97-b2ef-317adae0a47a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-87caaa2e-d899-4eed-8b6a-8d19125c693b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4d4e37f4fd7f4dbbb25648ec639e0e43', 'neutron:revision_number': '2', 'neutron:security_group_ids': '16f3c0d4-753e-4c8b-b00a-7073cbcfa6dc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=be68bfdf-b1f2-46c8-82b2-2c275774a706, chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], logical_port=6550efe7-7235-437c-b9f3-728b676371ee) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:56:49 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:56:49.360 140354 INFO neutron.agent.ovn.metadata.agent [-] Port 6550efe7-7235-437c-b9f3-728b676371ee in datapath 87caaa2e-d899-4eed-8b6a-8d19125c693b bound to our chassis
Jan 20 14:56:49 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:56:49.361 140354 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 87caaa2e-d899-4eed-8b6a-8d19125c693b or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Jan 20 14:56:49 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:56:49.362 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[a6496691-77eb-456c-ac4b-08a2c3aea3b7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:56:49 compute-1 systemd-machined[194361]: New machine qemu-63-instance-00000086.
Jan 20 14:56:49 compute-1 systemd[1]: Started Virtual Machine qemu-63-instance-00000086.
Jan 20 14:56:49 compute-1 systemd-udevd[278617]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 14:56:49 compute-1 NetworkManager[49104]: <info>  [1768921009.4235] device (tap6550efe7-72): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 20 14:56:49 compute-1 NetworkManager[49104]: <info>  [1768921009.4244] device (tap6550efe7-72): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 20 14:56:49 compute-1 nova_compute[225855]: 2026-01-20 14:56:49.430 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:56:49 compute-1 ovn_controller[130490]: 2026-01-20T14:56:49Z|00532|binding|INFO|Setting lport 6550efe7-7235-437c-b9f3-728b676371ee ovn-installed in OVS
Jan 20 14:56:49 compute-1 ovn_controller[130490]: 2026-01-20T14:56:49Z|00533|binding|INFO|Setting lport 6550efe7-7235-437c-b9f3-728b676371ee up in Southbound
Jan 20 14:56:49 compute-1 nova_compute[225855]: 2026-01-20 14:56:49.434 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:56:49 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:56:49 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:56:49 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:56:49.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:56:49 compute-1 nova_compute[225855]: 2026-01-20 14:56:49.498 225859 INFO nova.virt.libvirt.driver [None req-116c4c26-7bd7-4fed-8899-981d8620cba1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: 3426109c-5671-4cc7-89b6-fea13983f921] Creating config drive at /var/lib/nova/instances/3426109c-5671-4cc7-89b6-fea13983f921/disk.config
Jan 20 14:56:49 compute-1 nova_compute[225855]: 2026-01-20 14:56:49.510 225859 DEBUG oslo_concurrency.processutils [None req-116c4c26-7bd7-4fed-8899-981d8620cba1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/3426109c-5671-4cc7-89b6-fea13983f921/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpg4kqx6r1 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:56:49 compute-1 nova_compute[225855]: 2026-01-20 14:56:49.593 225859 DEBUG nova.network.neutron [req-6ac12c71-92f6-4b79-9191-cc43d7ba390a req-c7771993-1790-4c03-ba46-12fe19c55066 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 538fe1f0-b666-4b97-b2ef-317adae0a47a] Updated VIF entry in instance network info cache for port 6550efe7-7235-437c-b9f3-728b676371ee. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 14:56:49 compute-1 nova_compute[225855]: 2026-01-20 14:56:49.594 225859 DEBUG nova.network.neutron [req-6ac12c71-92f6-4b79-9191-cc43d7ba390a req-c7771993-1790-4c03-ba46-12fe19c55066 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 538fe1f0-b666-4b97-b2ef-317adae0a47a] Updating instance_info_cache with network_info: [{"id": "6550efe7-7235-437c-b9f3-728b676371ee", "address": "fa:16:3e:e3:4f:ce", "network": {"id": "87caaa2e-d899-4eed-8b6a-8d19125c693b", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-347508957-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "4d4e37f4fd7f4dbbb25648ec639e0e43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6550efe7-72", "ovs_interfaceid": "6550efe7-7235-437c-b9f3-728b676371ee", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:56:49 compute-1 nova_compute[225855]: 2026-01-20 14:56:49.612 225859 DEBUG oslo_concurrency.lockutils [req-6ac12c71-92f6-4b79-9191-cc43d7ba390a req-c7771993-1790-4c03-ba46-12fe19c55066 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-538fe1f0-b666-4b97-b2ef-317adae0a47a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:56:49 compute-1 nova_compute[225855]: 2026-01-20 14:56:49.653 225859 DEBUG oslo_concurrency.processutils [None req-116c4c26-7bd7-4fed-8899-981d8620cba1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/3426109c-5671-4cc7-89b6-fea13983f921/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpg4kqx6r1" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:56:49 compute-1 nova_compute[225855]: 2026-01-20 14:56:49.694 225859 DEBUG nova.storage.rbd_utils [None req-116c4c26-7bd7-4fed-8899-981d8620cba1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] rbd image 3426109c-5671-4cc7-89b6-fea13983f921_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:56:49 compute-1 nova_compute[225855]: 2026-01-20 14:56:49.698 225859 DEBUG oslo_concurrency.processutils [None req-116c4c26-7bd7-4fed-8899-981d8620cba1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/3426109c-5671-4cc7-89b6-fea13983f921/disk.config 3426109c-5671-4cc7-89b6-fea13983f921_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:56:49 compute-1 nova_compute[225855]: 2026-01-20 14:56:49.912 225859 DEBUG nova.compute.manager [req-bbd0046f-0a5e-491d-b067-f825cbababaf req-f6d9b735-c480-48e3-a1a4-205c2f48b0ce 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 538fe1f0-b666-4b97-b2ef-317adae0a47a] Received event network-vif-plugged-6550efe7-7235-437c-b9f3-728b676371ee external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:56:49 compute-1 nova_compute[225855]: 2026-01-20 14:56:49.913 225859 DEBUG oslo_concurrency.lockutils [req-bbd0046f-0a5e-491d-b067-f825cbababaf req-f6d9b735-c480-48e3-a1a4-205c2f48b0ce 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "538fe1f0-b666-4b97-b2ef-317adae0a47a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:56:49 compute-1 nova_compute[225855]: 2026-01-20 14:56:49.914 225859 DEBUG oslo_concurrency.lockutils [req-bbd0046f-0a5e-491d-b067-f825cbababaf req-f6d9b735-c480-48e3-a1a4-205c2f48b0ce 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "538fe1f0-b666-4b97-b2ef-317adae0a47a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:56:49 compute-1 nova_compute[225855]: 2026-01-20 14:56:49.914 225859 DEBUG oslo_concurrency.lockutils [req-bbd0046f-0a5e-491d-b067-f825cbababaf req-f6d9b735-c480-48e3-a1a4-205c2f48b0ce 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "538fe1f0-b666-4b97-b2ef-317adae0a47a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:56:49 compute-1 nova_compute[225855]: 2026-01-20 14:56:49.915 225859 DEBUG nova.compute.manager [req-bbd0046f-0a5e-491d-b067-f825cbababaf req-f6d9b735-c480-48e3-a1a4-205c2f48b0ce 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 538fe1f0-b666-4b97-b2ef-317adae0a47a] Processing event network-vif-plugged-6550efe7-7235-437c-b9f3-728b676371ee _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 20 14:56:50 compute-1 nova_compute[225855]: 2026-01-20 14:56:50.002 225859 DEBUG oslo_concurrency.processutils [None req-116c4c26-7bd7-4fed-8899-981d8620cba1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/3426109c-5671-4cc7-89b6-fea13983f921/disk.config 3426109c-5671-4cc7-89b6-fea13983f921_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.305s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:56:50 compute-1 nova_compute[225855]: 2026-01-20 14:56:50.003 225859 INFO nova.virt.libvirt.driver [None req-116c4c26-7bd7-4fed-8899-981d8620cba1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: 3426109c-5671-4cc7-89b6-fea13983f921] Deleting local config drive /var/lib/nova/instances/3426109c-5671-4cc7-89b6-fea13983f921/disk.config because it was imported into RBD.
Jan 20 14:56:50 compute-1 nova_compute[225855]: 2026-01-20 14:56:50.020 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768921010.0199256, 538fe1f0-b666-4b97-b2ef-317adae0a47a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:56:50 compute-1 nova_compute[225855]: 2026-01-20 14:56:50.021 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 538fe1f0-b666-4b97-b2ef-317adae0a47a] VM Started (Lifecycle Event)
Jan 20 14:56:50 compute-1 nova_compute[225855]: 2026-01-20 14:56:50.023 225859 DEBUG nova.compute.manager [None req-598146b4-a7b4-497e-8f70-a27771b8bdcf 168ca7898b964a44b76c90912fa89a66 4d4e37f4fd7f4dbbb25648ec639e0e43 - - default default] [instance: 538fe1f0-b666-4b97-b2ef-317adae0a47a] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 20 14:56:50 compute-1 nova_compute[225855]: 2026-01-20 14:56:50.031 225859 DEBUG nova.virt.libvirt.driver [None req-598146b4-a7b4-497e-8f70-a27771b8bdcf 168ca7898b964a44b76c90912fa89a66 4d4e37f4fd7f4dbbb25648ec639e0e43 - - default default] [instance: 538fe1f0-b666-4b97-b2ef-317adae0a47a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 20 14:56:50 compute-1 nova_compute[225855]: 2026-01-20 14:56:50.040 225859 INFO nova.virt.libvirt.driver [-] [instance: 538fe1f0-b666-4b97-b2ef-317adae0a47a] Instance spawned successfully.
Jan 20 14:56:50 compute-1 nova_compute[225855]: 2026-01-20 14:56:50.041 225859 DEBUG nova.virt.libvirt.driver [None req-598146b4-a7b4-497e-8f70-a27771b8bdcf 168ca7898b964a44b76c90912fa89a66 4d4e37f4fd7f4dbbb25648ec639e0e43 - - default default] [instance: 538fe1f0-b666-4b97-b2ef-317adae0a47a] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 20 14:56:50 compute-1 kernel: tapd93a212a-0f: entered promiscuous mode
Jan 20 14:56:50 compute-1 NetworkManager[49104]: <info>  [1768921010.0599] manager: (tapd93a212a-0f): new Tun device (/org/freedesktop/NetworkManager/Devices/233)
Jan 20 14:56:50 compute-1 nova_compute[225855]: 2026-01-20 14:56:50.060 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:56:50 compute-1 ovn_controller[130490]: 2026-01-20T14:56:50Z|00534|binding|INFO|Claiming lport d93a212a-0f1f-4f7e-9e7e-ee3fd5a542e8 for this chassis.
Jan 20 14:56:50 compute-1 ovn_controller[130490]: 2026-01-20T14:56:50Z|00535|binding|INFO|d93a212a-0f1f-4f7e-9e7e-ee3fd5a542e8: Claiming fa:16:3e:4f:e1:78 10.100.0.11
Jan 20 14:56:50 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:56:50 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:56:50 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:56:50.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:56:50 compute-1 NetworkManager[49104]: <info>  [1768921010.0703] device (tapd93a212a-0f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 20 14:56:50 compute-1 NetworkManager[49104]: <info>  [1768921010.0709] device (tapd93a212a-0f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 20 14:56:50 compute-1 ovn_controller[130490]: 2026-01-20T14:56:50Z|00536|binding|INFO|Setting lport d93a212a-0f1f-4f7e-9e7e-ee3fd5a542e8 ovn-installed in OVS
Jan 20 14:56:50 compute-1 ovn_controller[130490]: 2026-01-20T14:56:50Z|00537|binding|INFO|Setting lport d93a212a-0f1f-4f7e-9e7e-ee3fd5a542e8 up in Southbound
Jan 20 14:56:50 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:56:50.078 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4f:e1:78 10.100.0.11'], port_security=['fa:16:3e:4f:e1:78 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '3426109c-5671-4cc7-89b6-fea13983f921', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ca6cd0afe0ab41e3ab36d21a4129f734', 'neutron:revision_number': '2', 'neutron:security_group_ids': '819ea4ae-b994-44d1-9da3-8b0ca609fb2a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ee620e3e-ef7e-4826-b394-b8a89442b353, chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], logical_port=d93a212a-0f1f-4f7e-9e7e-ee3fd5a542e8) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:56:50 compute-1 nova_compute[225855]: 2026-01-20 14:56:50.078 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:56:50 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:56:50.079 140354 INFO neutron.agent.ovn.metadata.agent [-] Port d93a212a-0f1f-4f7e-9e7e-ee3fd5a542e8 in datapath f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c bound to our chassis
Jan 20 14:56:50 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:56:50.081 140354 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c
Jan 20 14:56:50 compute-1 nova_compute[225855]: 2026-01-20 14:56:50.082 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:56:50 compute-1 systemd-machined[194361]: New machine qemu-64-instance-00000087.
Jan 20 14:56:50 compute-1 nova_compute[225855]: 2026-01-20 14:56:50.093 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 538fe1f0-b666-4b97-b2ef-317adae0a47a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:56:50 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:56:50.095 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[deecf02a-a3ed-43e4-ac2d-f0b68d85d550]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:56:50 compute-1 nova_compute[225855]: 2026-01-20 14:56:50.099 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 538fe1f0-b666-4b97-b2ef-317adae0a47a] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:56:50 compute-1 nova_compute[225855]: 2026-01-20 14:56:50.104 225859 DEBUG nova.virt.libvirt.driver [None req-598146b4-a7b4-497e-8f70-a27771b8bdcf 168ca7898b964a44b76c90912fa89a66 4d4e37f4fd7f4dbbb25648ec639e0e43 - - default default] [instance: 538fe1f0-b666-4b97-b2ef-317adae0a47a] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:56:50 compute-1 nova_compute[225855]: 2026-01-20 14:56:50.104 225859 DEBUG nova.virt.libvirt.driver [None req-598146b4-a7b4-497e-8f70-a27771b8bdcf 168ca7898b964a44b76c90912fa89a66 4d4e37f4fd7f4dbbb25648ec639e0e43 - - default default] [instance: 538fe1f0-b666-4b97-b2ef-317adae0a47a] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:56:50 compute-1 nova_compute[225855]: 2026-01-20 14:56:50.105 225859 DEBUG nova.virt.libvirt.driver [None req-598146b4-a7b4-497e-8f70-a27771b8bdcf 168ca7898b964a44b76c90912fa89a66 4d4e37f4fd7f4dbbb25648ec639e0e43 - - default default] [instance: 538fe1f0-b666-4b97-b2ef-317adae0a47a] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:56:50 compute-1 nova_compute[225855]: 2026-01-20 14:56:50.105 225859 DEBUG nova.virt.libvirt.driver [None req-598146b4-a7b4-497e-8f70-a27771b8bdcf 168ca7898b964a44b76c90912fa89a66 4d4e37f4fd7f4dbbb25648ec639e0e43 - - default default] [instance: 538fe1f0-b666-4b97-b2ef-317adae0a47a] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:56:50 compute-1 nova_compute[225855]: 2026-01-20 14:56:50.105 225859 DEBUG nova.virt.libvirt.driver [None req-598146b4-a7b4-497e-8f70-a27771b8bdcf 168ca7898b964a44b76c90912fa89a66 4d4e37f4fd7f4dbbb25648ec639e0e43 - - default default] [instance: 538fe1f0-b666-4b97-b2ef-317adae0a47a] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:56:50 compute-1 nova_compute[225855]: 2026-01-20 14:56:50.106 225859 DEBUG nova.virt.libvirt.driver [None req-598146b4-a7b4-497e-8f70-a27771b8bdcf 168ca7898b964a44b76c90912fa89a66 4d4e37f4fd7f4dbbb25648ec639e0e43 - - default default] [instance: 538fe1f0-b666-4b97-b2ef-317adae0a47a] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:56:50 compute-1 systemd[1]: Started Virtual Machine qemu-64-instance-00000087.
Jan 20 14:56:50 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:56:50.123 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[f8df02a3-11ad-4c9f-a8ef-1d628e9c36c6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:56:50 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:56:50.125 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[661a4755-200c-401e-98cf-f561ccccdc18]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:56:50 compute-1 nova_compute[225855]: 2026-01-20 14:56:50.141 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 538fe1f0-b666-4b97-b2ef-317adae0a47a] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 14:56:50 compute-1 nova_compute[225855]: 2026-01-20 14:56:50.141 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768921010.022909, 538fe1f0-b666-4b97-b2ef-317adae0a47a => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:56:50 compute-1 nova_compute[225855]: 2026-01-20 14:56:50.141 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 538fe1f0-b666-4b97-b2ef-317adae0a47a] VM Paused (Lifecycle Event)
Jan 20 14:56:50 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:56:50.152 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[a2a99abc-5442-461d-bcfa-ebea22768e5a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:56:50 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:56:50.169 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[2112cb1f-cfc0-4c5a-a465-817c9623b6aa]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf4c8474b-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:14:a2:5f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 150], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 603603, 'reachable_time': 28670, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 278736, 'error': None, 'target': 'ovnmeta-f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:56:50 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:56:50.185 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[a9bc48f2-81e5-43e0-b5a2-70417bdb893b]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapf4c8474b-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 603613, 'tstamp': 603613}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 278737, 'error': None, 'target': 'ovnmeta-f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapf4c8474b-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 603617, 'tstamp': 603617}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 278737, 'error': None, 'target': 'ovnmeta-f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:56:50 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:56:50.186 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf4c8474b-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:56:50 compute-1 nova_compute[225855]: 2026-01-20 14:56:50.188 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:56:50 compute-1 nova_compute[225855]: 2026-01-20 14:56:50.189 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:56:50 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:56:50.189 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf4c8474b-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:56:50 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:56:50.189 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:56:50 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:56:50.190 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf4c8474b-00, col_values=(('external_ids', {'iface-id': '8c6fd3ab-70a8-4e63-99de-f2e15ac0207f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:56:50 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:56:50.190 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:56:50 compute-1 nova_compute[225855]: 2026-01-20 14:56:50.203 225859 DEBUG nova.network.neutron [req-1a5af0a2-50d9-4cd7-9675-e509d73b28f2 req-ecde0102-bb46-4490-a675-b039f3f48622 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 3426109c-5671-4cc7-89b6-fea13983f921] Updated VIF entry in instance network info cache for port d93a212a-0f1f-4f7e-9e7e-ee3fd5a542e8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 14:56:50 compute-1 nova_compute[225855]: 2026-01-20 14:56:50.204 225859 DEBUG nova.network.neutron [req-1a5af0a2-50d9-4cd7-9675-e509d73b28f2 req-ecde0102-bb46-4490-a675-b039f3f48622 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 3426109c-5671-4cc7-89b6-fea13983f921] Updating instance_info_cache with network_info: [{"id": "d93a212a-0f1f-4f7e-9e7e-ee3fd5a542e8", "address": "fa:16:3e:4f:e1:78", "network": {"id": "f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c", "bridge": "br-int", "label": "tempest-ServersTestJSON-1745321011-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca6cd0afe0ab41e3ab36d21a4129f734", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd93a212a-0f", "ovs_interfaceid": "d93a212a-0f1f-4f7e-9e7e-ee3fd5a542e8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:56:50 compute-1 nova_compute[225855]: 2026-01-20 14:56:50.242 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 538fe1f0-b666-4b97-b2ef-317adae0a47a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:56:50 compute-1 nova_compute[225855]: 2026-01-20 14:56:50.246 225859 DEBUG oslo_concurrency.lockutils [req-1a5af0a2-50d9-4cd7-9675-e509d73b28f2 req-ecde0102-bb46-4490-a675-b039f3f48622 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-3426109c-5671-4cc7-89b6-fea13983f921" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:56:50 compute-1 nova_compute[225855]: 2026-01-20 14:56:50.248 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768921010.026502, 538fe1f0-b666-4b97-b2ef-317adae0a47a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:56:50 compute-1 nova_compute[225855]: 2026-01-20 14:56:50.248 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 538fe1f0-b666-4b97-b2ef-317adae0a47a] VM Resumed (Lifecycle Event)
Jan 20 14:56:50 compute-1 nova_compute[225855]: 2026-01-20 14:56:50.260 225859 INFO nova.compute.manager [None req-598146b4-a7b4-497e-8f70-a27771b8bdcf 168ca7898b964a44b76c90912fa89a66 4d4e37f4fd7f4dbbb25648ec639e0e43 - - default default] [instance: 538fe1f0-b666-4b97-b2ef-317adae0a47a] Took 7.80 seconds to spawn the instance on the hypervisor.
Jan 20 14:56:50 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e310 e310: 3 total, 3 up, 3 in
Jan 20 14:56:50 compute-1 nova_compute[225855]: 2026-01-20 14:56:50.260 225859 DEBUG nova.compute.manager [None req-598146b4-a7b4-497e-8f70-a27771b8bdcf 168ca7898b964a44b76c90912fa89a66 4d4e37f4fd7f4dbbb25648ec639e0e43 - - default default] [instance: 538fe1f0-b666-4b97-b2ef-317adae0a47a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:56:50 compute-1 nova_compute[225855]: 2026-01-20 14:56:50.271 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 538fe1f0-b666-4b97-b2ef-317adae0a47a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:56:50 compute-1 nova_compute[225855]: 2026-01-20 14:56:50.276 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 538fe1f0-b666-4b97-b2ef-317adae0a47a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:56:50 compute-1 nova_compute[225855]: 2026-01-20 14:56:50.314 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 538fe1f0-b666-4b97-b2ef-317adae0a47a] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 14:56:50 compute-1 nova_compute[225855]: 2026-01-20 14:56:50.339 225859 INFO nova.compute.manager [None req-598146b4-a7b4-497e-8f70-a27771b8bdcf 168ca7898b964a44b76c90912fa89a66 4d4e37f4fd7f4dbbb25648ec639e0e43 - - default default] [instance: 538fe1f0-b666-4b97-b2ef-317adae0a47a] Took 8.80 seconds to build instance.
Jan 20 14:56:50 compute-1 nova_compute[225855]: 2026-01-20 14:56:50.361 225859 DEBUG oslo_concurrency.lockutils [None req-598146b4-a7b4-497e-8f70-a27771b8bdcf 168ca7898b964a44b76c90912fa89a66 4d4e37f4fd7f4dbbb25648ec639e0e43 - - default default] Lock "538fe1f0-b666-4b97-b2ef-317adae0a47a" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.909s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:56:50 compute-1 nova_compute[225855]: 2026-01-20 14:56:50.635 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768921010.6349812, 3426109c-5671-4cc7-89b6-fea13983f921 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:56:50 compute-1 nova_compute[225855]: 2026-01-20 14:56:50.635 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 3426109c-5671-4cc7-89b6-fea13983f921] VM Started (Lifecycle Event)
Jan 20 14:56:50 compute-1 nova_compute[225855]: 2026-01-20 14:56:50.660 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 3426109c-5671-4cc7-89b6-fea13983f921] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:56:50 compute-1 nova_compute[225855]: 2026-01-20 14:56:50.665 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768921010.6386104, 3426109c-5671-4cc7-89b6-fea13983f921 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:56:50 compute-1 nova_compute[225855]: 2026-01-20 14:56:50.665 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 3426109c-5671-4cc7-89b6-fea13983f921] VM Paused (Lifecycle Event)
Jan 20 14:56:50 compute-1 nova_compute[225855]: 2026-01-20 14:56:50.685 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 3426109c-5671-4cc7-89b6-fea13983f921] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:56:50 compute-1 nova_compute[225855]: 2026-01-20 14:56:50.688 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 3426109c-5671-4cc7-89b6-fea13983f921] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:56:50 compute-1 nova_compute[225855]: 2026-01-20 14:56:50.705 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 3426109c-5671-4cc7-89b6-fea13983f921] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 14:56:50 compute-1 nova_compute[225855]: 2026-01-20 14:56:50.814 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:56:50 compute-1 nova_compute[225855]: 2026-01-20 14:56:50.815 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:56:51 compute-1 ovn_controller[130490]: 2026-01-20T14:56:51Z|00066|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:ac:7a:cf 10.100.0.13
Jan 20 14:56:51 compute-1 ovn_controller[130490]: 2026-01-20T14:56:51Z|00067|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:ac:7a:cf 10.100.0.13
Jan 20 14:56:51 compute-1 nova_compute[225855]: 2026-01-20 14:56:51.245 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:56:51 compute-1 nova_compute[225855]: 2026-01-20 14:56:51.434 225859 INFO nova.compute.manager [None req-7ceaa51e-5ae5-4572-8a7a-d5b83daa8d3a 168ca7898b964a44b76c90912fa89a66 4d4e37f4fd7f4dbbb25648ec639e0e43 - - default default] [instance: 538fe1f0-b666-4b97-b2ef-317adae0a47a] Rescuing
Jan 20 14:56:51 compute-1 nova_compute[225855]: 2026-01-20 14:56:51.435 225859 DEBUG oslo_concurrency.lockutils [None req-7ceaa51e-5ae5-4572-8a7a-d5b83daa8d3a 168ca7898b964a44b76c90912fa89a66 4d4e37f4fd7f4dbbb25648ec639e0e43 - - default default] Acquiring lock "refresh_cache-538fe1f0-b666-4b97-b2ef-317adae0a47a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:56:51 compute-1 nova_compute[225855]: 2026-01-20 14:56:51.435 225859 DEBUG oslo_concurrency.lockutils [None req-7ceaa51e-5ae5-4572-8a7a-d5b83daa8d3a 168ca7898b964a44b76c90912fa89a66 4d4e37f4fd7f4dbbb25648ec639e0e43 - - default default] Acquired lock "refresh_cache-538fe1f0-b666-4b97-b2ef-317adae0a47a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:56:51 compute-1 nova_compute[225855]: 2026-01-20 14:56:51.435 225859 DEBUG nova.network.neutron [None req-7ceaa51e-5ae5-4572-8a7a-d5b83daa8d3a 168ca7898b964a44b76c90912fa89a66 4d4e37f4fd7f4dbbb25648ec639e0e43 - - default default] [instance: 538fe1f0-b666-4b97-b2ef-317adae0a47a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 14:56:51 compute-1 ceph-mon[81775]: pgmap v2131: 321 pgs: 321 active+clean; 469 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 4.5 MiB/s rd, 4.3 MiB/s wr, 244 op/s
Jan 20 14:56:51 compute-1 ceph-mon[81775]: osdmap e310: 3 total, 3 up, 3 in
Jan 20 14:56:51 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:56:51 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:56:51 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:56:51.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:56:51 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e311 e311: 3 total, 3 up, 3 in
Jan 20 14:56:51 compute-1 nova_compute[225855]: 2026-01-20 14:56:51.975 225859 DEBUG nova.compute.manager [req-23a1b379-cbf9-4d17-ad19-aeda0ddc83f2 req-50fb98a0-611f-4dc4-9188-371466c7b34f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 538fe1f0-b666-4b97-b2ef-317adae0a47a] Received event network-vif-plugged-6550efe7-7235-437c-b9f3-728b676371ee external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:56:51 compute-1 nova_compute[225855]: 2026-01-20 14:56:51.976 225859 DEBUG oslo_concurrency.lockutils [req-23a1b379-cbf9-4d17-ad19-aeda0ddc83f2 req-50fb98a0-611f-4dc4-9188-371466c7b34f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "538fe1f0-b666-4b97-b2ef-317adae0a47a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:56:51 compute-1 nova_compute[225855]: 2026-01-20 14:56:51.976 225859 DEBUG oslo_concurrency.lockutils [req-23a1b379-cbf9-4d17-ad19-aeda0ddc83f2 req-50fb98a0-611f-4dc4-9188-371466c7b34f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "538fe1f0-b666-4b97-b2ef-317adae0a47a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:56:51 compute-1 nova_compute[225855]: 2026-01-20 14:56:51.976 225859 DEBUG oslo_concurrency.lockutils [req-23a1b379-cbf9-4d17-ad19-aeda0ddc83f2 req-50fb98a0-611f-4dc4-9188-371466c7b34f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "538fe1f0-b666-4b97-b2ef-317adae0a47a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:56:51 compute-1 nova_compute[225855]: 2026-01-20 14:56:51.976 225859 DEBUG nova.compute.manager [req-23a1b379-cbf9-4d17-ad19-aeda0ddc83f2 req-50fb98a0-611f-4dc4-9188-371466c7b34f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 538fe1f0-b666-4b97-b2ef-317adae0a47a] No waiting events found dispatching network-vif-plugged-6550efe7-7235-437c-b9f3-728b676371ee pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:56:51 compute-1 nova_compute[225855]: 2026-01-20 14:56:51.976 225859 WARNING nova.compute.manager [req-23a1b379-cbf9-4d17-ad19-aeda0ddc83f2 req-50fb98a0-611f-4dc4-9188-371466c7b34f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 538fe1f0-b666-4b97-b2ef-317adae0a47a] Received unexpected event network-vif-plugged-6550efe7-7235-437c-b9f3-728b676371ee for instance with vm_state active and task_state rescuing.
Jan 20 14:56:51 compute-1 nova_compute[225855]: 2026-01-20 14:56:51.977 225859 DEBUG nova.compute.manager [req-23a1b379-cbf9-4d17-ad19-aeda0ddc83f2 req-50fb98a0-611f-4dc4-9188-371466c7b34f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 3426109c-5671-4cc7-89b6-fea13983f921] Received event network-vif-plugged-d93a212a-0f1f-4f7e-9e7e-ee3fd5a542e8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:56:51 compute-1 nova_compute[225855]: 2026-01-20 14:56:51.977 225859 DEBUG oslo_concurrency.lockutils [req-23a1b379-cbf9-4d17-ad19-aeda0ddc83f2 req-50fb98a0-611f-4dc4-9188-371466c7b34f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "3426109c-5671-4cc7-89b6-fea13983f921-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:56:51 compute-1 nova_compute[225855]: 2026-01-20 14:56:51.977 225859 DEBUG oslo_concurrency.lockutils [req-23a1b379-cbf9-4d17-ad19-aeda0ddc83f2 req-50fb98a0-611f-4dc4-9188-371466c7b34f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "3426109c-5671-4cc7-89b6-fea13983f921-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:56:51 compute-1 nova_compute[225855]: 2026-01-20 14:56:51.977 225859 DEBUG oslo_concurrency.lockutils [req-23a1b379-cbf9-4d17-ad19-aeda0ddc83f2 req-50fb98a0-611f-4dc4-9188-371466c7b34f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "3426109c-5671-4cc7-89b6-fea13983f921-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:56:51 compute-1 nova_compute[225855]: 2026-01-20 14:56:51.977 225859 DEBUG nova.compute.manager [req-23a1b379-cbf9-4d17-ad19-aeda0ddc83f2 req-50fb98a0-611f-4dc4-9188-371466c7b34f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 3426109c-5671-4cc7-89b6-fea13983f921] Processing event network-vif-plugged-d93a212a-0f1f-4f7e-9e7e-ee3fd5a542e8 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 20 14:56:51 compute-1 nova_compute[225855]: 2026-01-20 14:56:51.978 225859 DEBUG nova.compute.manager [req-23a1b379-cbf9-4d17-ad19-aeda0ddc83f2 req-50fb98a0-611f-4dc4-9188-371466c7b34f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 3426109c-5671-4cc7-89b6-fea13983f921] Received event network-vif-plugged-d93a212a-0f1f-4f7e-9e7e-ee3fd5a542e8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:56:51 compute-1 nova_compute[225855]: 2026-01-20 14:56:51.978 225859 DEBUG oslo_concurrency.lockutils [req-23a1b379-cbf9-4d17-ad19-aeda0ddc83f2 req-50fb98a0-611f-4dc4-9188-371466c7b34f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "3426109c-5671-4cc7-89b6-fea13983f921-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:56:51 compute-1 nova_compute[225855]: 2026-01-20 14:56:51.978 225859 DEBUG oslo_concurrency.lockutils [req-23a1b379-cbf9-4d17-ad19-aeda0ddc83f2 req-50fb98a0-611f-4dc4-9188-371466c7b34f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "3426109c-5671-4cc7-89b6-fea13983f921-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:56:51 compute-1 nova_compute[225855]: 2026-01-20 14:56:51.978 225859 DEBUG oslo_concurrency.lockutils [req-23a1b379-cbf9-4d17-ad19-aeda0ddc83f2 req-50fb98a0-611f-4dc4-9188-371466c7b34f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "3426109c-5671-4cc7-89b6-fea13983f921-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:56:51 compute-1 nova_compute[225855]: 2026-01-20 14:56:51.978 225859 DEBUG nova.compute.manager [req-23a1b379-cbf9-4d17-ad19-aeda0ddc83f2 req-50fb98a0-611f-4dc4-9188-371466c7b34f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 3426109c-5671-4cc7-89b6-fea13983f921] No waiting events found dispatching network-vif-plugged-d93a212a-0f1f-4f7e-9e7e-ee3fd5a542e8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:56:51 compute-1 nova_compute[225855]: 2026-01-20 14:56:51.978 225859 WARNING nova.compute.manager [req-23a1b379-cbf9-4d17-ad19-aeda0ddc83f2 req-50fb98a0-611f-4dc4-9188-371466c7b34f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 3426109c-5671-4cc7-89b6-fea13983f921] Received unexpected event network-vif-plugged-d93a212a-0f1f-4f7e-9e7e-ee3fd5a542e8 for instance with vm_state building and task_state spawning.
Jan 20 14:56:51 compute-1 nova_compute[225855]: 2026-01-20 14:56:51.979 225859 DEBUG nova.compute.manager [None req-116c4c26-7bd7-4fed-8899-981d8620cba1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: 3426109c-5671-4cc7-89b6-fea13983f921] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 20 14:56:51 compute-1 nova_compute[225855]: 2026-01-20 14:56:51.982 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768921011.982382, 3426109c-5671-4cc7-89b6-fea13983f921 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:56:51 compute-1 nova_compute[225855]: 2026-01-20 14:56:51.982 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 3426109c-5671-4cc7-89b6-fea13983f921] VM Resumed (Lifecycle Event)
Jan 20 14:56:51 compute-1 nova_compute[225855]: 2026-01-20 14:56:51.984 225859 DEBUG nova.virt.libvirt.driver [None req-116c4c26-7bd7-4fed-8899-981d8620cba1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: 3426109c-5671-4cc7-89b6-fea13983f921] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 20 14:56:51 compute-1 nova_compute[225855]: 2026-01-20 14:56:51.986 225859 INFO nova.virt.libvirt.driver [-] [instance: 3426109c-5671-4cc7-89b6-fea13983f921] Instance spawned successfully.
Jan 20 14:56:51 compute-1 nova_compute[225855]: 2026-01-20 14:56:51.986 225859 DEBUG nova.virt.libvirt.driver [None req-116c4c26-7bd7-4fed-8899-981d8620cba1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: 3426109c-5671-4cc7-89b6-fea13983f921] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 20 14:56:52 compute-1 nova_compute[225855]: 2026-01-20 14:56:52.000 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 3426109c-5671-4cc7-89b6-fea13983f921] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:56:52 compute-1 nova_compute[225855]: 2026-01-20 14:56:52.005 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 3426109c-5671-4cc7-89b6-fea13983f921] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:56:52 compute-1 nova_compute[225855]: 2026-01-20 14:56:52.009 225859 DEBUG nova.virt.libvirt.driver [None req-116c4c26-7bd7-4fed-8899-981d8620cba1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: 3426109c-5671-4cc7-89b6-fea13983f921] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:56:52 compute-1 nova_compute[225855]: 2026-01-20 14:56:52.010 225859 DEBUG nova.virt.libvirt.driver [None req-116c4c26-7bd7-4fed-8899-981d8620cba1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: 3426109c-5671-4cc7-89b6-fea13983f921] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:56:52 compute-1 nova_compute[225855]: 2026-01-20 14:56:52.010 225859 DEBUG nova.virt.libvirt.driver [None req-116c4c26-7bd7-4fed-8899-981d8620cba1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: 3426109c-5671-4cc7-89b6-fea13983f921] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:56:52 compute-1 nova_compute[225855]: 2026-01-20 14:56:52.011 225859 DEBUG nova.virt.libvirt.driver [None req-116c4c26-7bd7-4fed-8899-981d8620cba1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: 3426109c-5671-4cc7-89b6-fea13983f921] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:56:52 compute-1 nova_compute[225855]: 2026-01-20 14:56:52.011 225859 DEBUG nova.virt.libvirt.driver [None req-116c4c26-7bd7-4fed-8899-981d8620cba1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: 3426109c-5671-4cc7-89b6-fea13983f921] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:56:52 compute-1 nova_compute[225855]: 2026-01-20 14:56:52.011 225859 DEBUG nova.virt.libvirt.driver [None req-116c4c26-7bd7-4fed-8899-981d8620cba1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: 3426109c-5671-4cc7-89b6-fea13983f921] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:56:52 compute-1 nova_compute[225855]: 2026-01-20 14:56:52.036 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 3426109c-5671-4cc7-89b6-fea13983f921] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 14:56:52 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:56:52 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:56:52 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:56:52.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:56:52 compute-1 nova_compute[225855]: 2026-01-20 14:56:52.070 225859 INFO nova.compute.manager [None req-116c4c26-7bd7-4fed-8899-981d8620cba1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: 3426109c-5671-4cc7-89b6-fea13983f921] Took 8.98 seconds to spawn the instance on the hypervisor.
Jan 20 14:56:52 compute-1 nova_compute[225855]: 2026-01-20 14:56:52.071 225859 DEBUG nova.compute.manager [None req-116c4c26-7bd7-4fed-8899-981d8620cba1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: 3426109c-5671-4cc7-89b6-fea13983f921] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:56:52 compute-1 nova_compute[225855]: 2026-01-20 14:56:52.136 225859 INFO nova.compute.manager [None req-116c4c26-7bd7-4fed-8899-981d8620cba1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: 3426109c-5671-4cc7-89b6-fea13983f921] Took 10.56 seconds to build instance.
Jan 20 14:56:52 compute-1 nova_compute[225855]: 2026-01-20 14:56:52.157 225859 DEBUG oslo_concurrency.lockutils [None req-116c4c26-7bd7-4fed-8899-981d8620cba1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Lock "3426109c-5671-4cc7-89b6-fea13983f921" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.683s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:56:52 compute-1 ceph-mon[81775]: osdmap e311: 3 total, 3 up, 3 in
Jan 20 14:56:52 compute-1 ceph-mon[81775]: pgmap v2134: 321 pgs: 321 active+clean; 492 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 5.5 MiB/s wr, 192 op/s
Jan 20 14:56:52 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e311 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:56:52 compute-1 sudo[278782]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:56:53 compute-1 sudo[278782]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:56:53 compute-1 sudo[278782]: pam_unix(sudo:session): session closed for user root
Jan 20 14:56:53 compute-1 sudo[278807]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:56:53 compute-1 sudo[278807]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:56:53 compute-1 sudo[278807]: pam_unix(sudo:session): session closed for user root
Jan 20 14:56:53 compute-1 nova_compute[225855]: 2026-01-20 14:56:53.377 225859 DEBUG nova.network.neutron [None req-7ceaa51e-5ae5-4572-8a7a-d5b83daa8d3a 168ca7898b964a44b76c90912fa89a66 4d4e37f4fd7f4dbbb25648ec639e0e43 - - default default] [instance: 538fe1f0-b666-4b97-b2ef-317adae0a47a] Updating instance_info_cache with network_info: [{"id": "6550efe7-7235-437c-b9f3-728b676371ee", "address": "fa:16:3e:e3:4f:ce", "network": {"id": "87caaa2e-d899-4eed-8b6a-8d19125c693b", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-347508957-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "4d4e37f4fd7f4dbbb25648ec639e0e43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6550efe7-72", "ovs_interfaceid": "6550efe7-7235-437c-b9f3-728b676371ee", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:56:53 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:56:53 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:56:53 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:56:53.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:56:53 compute-1 nova_compute[225855]: 2026-01-20 14:56:53.551 225859 DEBUG oslo_concurrency.lockutils [None req-7ceaa51e-5ae5-4572-8a7a-d5b83daa8d3a 168ca7898b964a44b76c90912fa89a66 4d4e37f4fd7f4dbbb25648ec639e0e43 - - default default] Releasing lock "refresh_cache-538fe1f0-b666-4b97-b2ef-317adae0a47a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:56:53 compute-1 nova_compute[225855]: 2026-01-20 14:56:53.770 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:56:53 compute-1 nova_compute[225855]: 2026-01-20 14:56:53.815 225859 DEBUG nova.virt.libvirt.driver [None req-7ceaa51e-5ae5-4572-8a7a-d5b83daa8d3a 168ca7898b964a44b76c90912fa89a66 4d4e37f4fd7f4dbbb25648ec639e0e43 - - default default] [instance: 538fe1f0-b666-4b97-b2ef-317adae0a47a] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Jan 20 14:56:54 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:56:54 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:56:54 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:56:54.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:56:54 compute-1 nova_compute[225855]: 2026-01-20 14:56:54.480 225859 DEBUG oslo_concurrency.lockutils [None req-b6008c26-9999-4183-b136-a3c591315ebe 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Acquiring lock "3426109c-5671-4cc7-89b6-fea13983f921" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:56:54 compute-1 nova_compute[225855]: 2026-01-20 14:56:54.481 225859 DEBUG oslo_concurrency.lockutils [None req-b6008c26-9999-4183-b136-a3c591315ebe 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Lock "3426109c-5671-4cc7-89b6-fea13983f921" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:56:54 compute-1 nova_compute[225855]: 2026-01-20 14:56:54.481 225859 DEBUG oslo_concurrency.lockutils [None req-b6008c26-9999-4183-b136-a3c591315ebe 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Acquiring lock "3426109c-5671-4cc7-89b6-fea13983f921-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:56:54 compute-1 nova_compute[225855]: 2026-01-20 14:56:54.481 225859 DEBUG oslo_concurrency.lockutils [None req-b6008c26-9999-4183-b136-a3c591315ebe 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Lock "3426109c-5671-4cc7-89b6-fea13983f921-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:56:54 compute-1 nova_compute[225855]: 2026-01-20 14:56:54.481 225859 DEBUG oslo_concurrency.lockutils [None req-b6008c26-9999-4183-b136-a3c591315ebe 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Lock "3426109c-5671-4cc7-89b6-fea13983f921-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:56:54 compute-1 nova_compute[225855]: 2026-01-20 14:56:54.482 225859 INFO nova.compute.manager [None req-b6008c26-9999-4183-b136-a3c591315ebe 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: 3426109c-5671-4cc7-89b6-fea13983f921] Terminating instance
Jan 20 14:56:54 compute-1 nova_compute[225855]: 2026-01-20 14:56:54.483 225859 DEBUG nova.compute.manager [None req-b6008c26-9999-4183-b136-a3c591315ebe 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: 3426109c-5671-4cc7-89b6-fea13983f921] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 20 14:56:54 compute-1 kernel: tapd93a212a-0f (unregistering): left promiscuous mode
Jan 20 14:56:54 compute-1 NetworkManager[49104]: <info>  [1768921014.5254] device (tapd93a212a-0f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 20 14:56:54 compute-1 ovn_controller[130490]: 2026-01-20T14:56:54Z|00538|binding|INFO|Releasing lport d93a212a-0f1f-4f7e-9e7e-ee3fd5a542e8 from this chassis (sb_readonly=0)
Jan 20 14:56:54 compute-1 ovn_controller[130490]: 2026-01-20T14:56:54Z|00539|binding|INFO|Setting lport d93a212a-0f1f-4f7e-9e7e-ee3fd5a542e8 down in Southbound
Jan 20 14:56:54 compute-1 ovn_controller[130490]: 2026-01-20T14:56:54Z|00540|binding|INFO|Removing iface tapd93a212a-0f ovn-installed in OVS
Jan 20 14:56:54 compute-1 nova_compute[225855]: 2026-01-20 14:56:54.580 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:56:54 compute-1 nova_compute[225855]: 2026-01-20 14:56:54.595 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:56:54 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:56:54.596 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4f:e1:78 10.100.0.11'], port_security=['fa:16:3e:4f:e1:78 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '3426109c-5671-4cc7-89b6-fea13983f921', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ca6cd0afe0ab41e3ab36d21a4129f734', 'neutron:revision_number': '4', 'neutron:security_group_ids': '819ea4ae-b994-44d1-9da3-8b0ca609fb2a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ee620e3e-ef7e-4826-b394-b8a89442b353, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], logical_port=d93a212a-0f1f-4f7e-9e7e-ee3fd5a542e8) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:56:54 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:56:54.598 140354 INFO neutron.agent.ovn.metadata.agent [-] Port d93a212a-0f1f-4f7e-9e7e-ee3fd5a542e8 in datapath f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c unbound from our chassis
Jan 20 14:56:54 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:56:54.599 140354 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c
Jan 20 14:56:54 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:56:54.615 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[34f5edcf-42a6-4659-adad-b67b4cb0dae7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:56:54 compute-1 systemd[1]: machine-qemu\x2d64\x2dinstance\x2d00000087.scope: Deactivated successfully.
Jan 20 14:56:54 compute-1 systemd[1]: machine-qemu\x2d64\x2dinstance\x2d00000087.scope: Consumed 3.085s CPU time.
Jan 20 14:56:54 compute-1 systemd-machined[194361]: Machine qemu-64-instance-00000087 terminated.
Jan 20 14:56:54 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:56:54.653 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[4b379646-98d7-49b0-b538-fa8d9036fa2d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:56:54 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:56:54.658 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[86bd8d5a-29d6-411d-b120-d3edfc95c16a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:56:54 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:56:54.697 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[703cc8b4-4daf-4128-8717-c0400dbd6bb6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:56:54 compute-1 nova_compute[225855]: 2026-01-20 14:56:54.703 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:56:54 compute-1 nova_compute[225855]: 2026-01-20 14:56:54.710 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:56:54 compute-1 nova_compute[225855]: 2026-01-20 14:56:54.718 225859 INFO nova.virt.libvirt.driver [-] [instance: 3426109c-5671-4cc7-89b6-fea13983f921] Instance destroyed successfully.
Jan 20 14:56:54 compute-1 nova_compute[225855]: 2026-01-20 14:56:54.718 225859 DEBUG nova.objects.instance [None req-b6008c26-9999-4183-b136-a3c591315ebe 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Lazy-loading 'resources' on Instance uuid 3426109c-5671-4cc7-89b6-fea13983f921 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:56:54 compute-1 nova_compute[225855]: 2026-01-20 14:56:54.731 225859 DEBUG nova.virt.libvirt.vif [None req-b6008c26-9999-4183-b136-a3c591315ebe 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T14:56:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1537565903',display_name='tempest-ServersTestJSON-server-1537565903',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverstestjson-server-1537565903',id=135,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-20T14:56:52Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ca6cd0afe0ab41e3ab36d21a4129f734',ramdisk_id='',reservation_id='r-9isslkfb',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestJSON-405461620',owner_user_name='tempest-ServersTestJSON-405461620-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T14:56:52Z,user_data=None,user_id='395a5c503218411284bc94c45263d1fb',uuid=3426109c-5671-4cc7-89b6-fea13983f921,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d93a212a-0f1f-4f7e-9e7e-ee3fd5a542e8", "address": "fa:16:3e:4f:e1:78", "network": {"id": "f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c", "bridge": "br-int", "label": "tempest-ServersTestJSON-1745321011-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca6cd0afe0ab41e3ab36d21a4129f734", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd93a212a-0f", "ovs_interfaceid": "d93a212a-0f1f-4f7e-9e7e-ee3fd5a542e8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 20 14:56:54 compute-1 nova_compute[225855]: 2026-01-20 14:56:54.732 225859 DEBUG nova.network.os_vif_util [None req-b6008c26-9999-4183-b136-a3c591315ebe 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Converting VIF {"id": "d93a212a-0f1f-4f7e-9e7e-ee3fd5a542e8", "address": "fa:16:3e:4f:e1:78", "network": {"id": "f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c", "bridge": "br-int", "label": "tempest-ServersTestJSON-1745321011-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca6cd0afe0ab41e3ab36d21a4129f734", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd93a212a-0f", "ovs_interfaceid": "d93a212a-0f1f-4f7e-9e7e-ee3fd5a542e8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:56:54 compute-1 nova_compute[225855]: 2026-01-20 14:56:54.732 225859 DEBUG nova.network.os_vif_util [None req-b6008c26-9999-4183-b136-a3c591315ebe 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4f:e1:78,bridge_name='br-int',has_traffic_filtering=True,id=d93a212a-0f1f-4f7e-9e7e-ee3fd5a542e8,network=Network(f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd93a212a-0f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:56:54 compute-1 nova_compute[225855]: 2026-01-20 14:56:54.733 225859 DEBUG os_vif [None req-b6008c26-9999-4183-b136-a3c591315ebe 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:4f:e1:78,bridge_name='br-int',has_traffic_filtering=True,id=d93a212a-0f1f-4f7e-9e7e-ee3fd5a542e8,network=Network(f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd93a212a-0f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 20 14:56:54 compute-1 nova_compute[225855]: 2026-01-20 14:56:54.734 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:56:54 compute-1 nova_compute[225855]: 2026-01-20 14:56:54.734 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd93a212a-0f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:56:54 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:56:54.734 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[2020a3db-379d-44e9-86a1-4e6fc2b8d28a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf4c8474b-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:14:a2:5f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 7, 'rx_bytes': 532, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 7, 'rx_bytes': 532, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 150], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 603603, 'reachable_time': 28670, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 278849, 'error': None, 'target': 'ovnmeta-f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:56:54 compute-1 nova_compute[225855]: 2026-01-20 14:56:54.736 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:56:54 compute-1 nova_compute[225855]: 2026-01-20 14:56:54.737 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:56:54 compute-1 nova_compute[225855]: 2026-01-20 14:56:54.740 225859 INFO os_vif [None req-b6008c26-9999-4183-b136-a3c591315ebe 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:4f:e1:78,bridge_name='br-int',has_traffic_filtering=True,id=d93a212a-0f1f-4f7e-9e7e-ee3fd5a542e8,network=Network(f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd93a212a-0f')
Jan 20 14:56:54 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:56:54.753 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[f28c9e0c-e527-44a4-a3fc-7c067645bc62]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapf4c8474b-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 603613, 'tstamp': 603613}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 278855, 'error': None, 'target': 'ovnmeta-f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapf4c8474b-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 603617, 'tstamp': 603617}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 278855, 'error': None, 'target': 'ovnmeta-f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:56:54 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:56:54.755 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf4c8474b-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:56:54 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:56:54.757 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf4c8474b-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:56:54 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:56:54.758 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:56:54 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:56:54.758 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf4c8474b-00, col_values=(('external_ids', {'iface-id': '8c6fd3ab-70a8-4e63-99de-f2e15ac0207f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:56:54 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:56:54.758 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:56:54 compute-1 nova_compute[225855]: 2026-01-20 14:56:54.759 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:56:54 compute-1 nova_compute[225855]: 2026-01-20 14:56:54.857 225859 DEBUG nova.compute.manager [req-6fe54304-c493-4d66-8e9d-49bcb9e14bf8 req-15db1934-c644-4261-9eae-007b1f0282c0 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 3426109c-5671-4cc7-89b6-fea13983f921] Received event network-vif-unplugged-d93a212a-0f1f-4f7e-9e7e-ee3fd5a542e8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:56:54 compute-1 nova_compute[225855]: 2026-01-20 14:56:54.857 225859 DEBUG oslo_concurrency.lockutils [req-6fe54304-c493-4d66-8e9d-49bcb9e14bf8 req-15db1934-c644-4261-9eae-007b1f0282c0 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "3426109c-5671-4cc7-89b6-fea13983f921-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:56:54 compute-1 nova_compute[225855]: 2026-01-20 14:56:54.857 225859 DEBUG oslo_concurrency.lockutils [req-6fe54304-c493-4d66-8e9d-49bcb9e14bf8 req-15db1934-c644-4261-9eae-007b1f0282c0 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "3426109c-5671-4cc7-89b6-fea13983f921-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:56:54 compute-1 nova_compute[225855]: 2026-01-20 14:56:54.857 225859 DEBUG oslo_concurrency.lockutils [req-6fe54304-c493-4d66-8e9d-49bcb9e14bf8 req-15db1934-c644-4261-9eae-007b1f0282c0 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "3426109c-5671-4cc7-89b6-fea13983f921-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:56:54 compute-1 nova_compute[225855]: 2026-01-20 14:56:54.858 225859 DEBUG nova.compute.manager [req-6fe54304-c493-4d66-8e9d-49bcb9e14bf8 req-15db1934-c644-4261-9eae-007b1f0282c0 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 3426109c-5671-4cc7-89b6-fea13983f921] No waiting events found dispatching network-vif-unplugged-d93a212a-0f1f-4f7e-9e7e-ee3fd5a542e8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:56:54 compute-1 nova_compute[225855]: 2026-01-20 14:56:54.858 225859 DEBUG nova.compute.manager [req-6fe54304-c493-4d66-8e9d-49bcb9e14bf8 req-15db1934-c644-4261-9eae-007b1f0282c0 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 3426109c-5671-4cc7-89b6-fea13983f921] Received event network-vif-unplugged-d93a212a-0f1f-4f7e-9e7e-ee3fd5a542e8 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 20 14:56:55 compute-1 nova_compute[225855]: 2026-01-20 14:56:55.109 225859 INFO nova.virt.libvirt.driver [None req-b6008c26-9999-4183-b136-a3c591315ebe 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: 3426109c-5671-4cc7-89b6-fea13983f921] Deleting instance files /var/lib/nova/instances/3426109c-5671-4cc7-89b6-fea13983f921_del
Jan 20 14:56:55 compute-1 nova_compute[225855]: 2026-01-20 14:56:55.110 225859 INFO nova.virt.libvirt.driver [None req-b6008c26-9999-4183-b136-a3c591315ebe 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: 3426109c-5671-4cc7-89b6-fea13983f921] Deletion of /var/lib/nova/instances/3426109c-5671-4cc7-89b6-fea13983f921_del complete
Jan 20 14:56:55 compute-1 ceph-mon[81775]: pgmap v2135: 321 pgs: 321 active+clean; 542 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 7.7 MiB/s rd, 7.8 MiB/s wr, 362 op/s
Jan 20 14:56:55 compute-1 nova_compute[225855]: 2026-01-20 14:56:55.201 225859 INFO nova.compute.manager [None req-b6008c26-9999-4183-b136-a3c591315ebe 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: 3426109c-5671-4cc7-89b6-fea13983f921] Took 0.72 seconds to destroy the instance on the hypervisor.
Jan 20 14:56:55 compute-1 nova_compute[225855]: 2026-01-20 14:56:55.202 225859 DEBUG oslo.service.loopingcall [None req-b6008c26-9999-4183-b136-a3c591315ebe 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 20 14:56:55 compute-1 nova_compute[225855]: 2026-01-20 14:56:55.202 225859 DEBUG nova.compute.manager [-] [instance: 3426109c-5671-4cc7-89b6-fea13983f921] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 20 14:56:55 compute-1 nova_compute[225855]: 2026-01-20 14:56:55.202 225859 DEBUG nova.network.neutron [-] [instance: 3426109c-5671-4cc7-89b6-fea13983f921] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 20 14:56:55 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:56:55 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:56:55 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:56:55.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:56:56 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:56:56 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:56:56 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:56:56.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:56:56 compute-1 nova_compute[225855]: 2026-01-20 14:56:56.247 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:56:56 compute-1 nova_compute[225855]: 2026-01-20 14:56:56.435 225859 DEBUG nova.network.neutron [-] [instance: 3426109c-5671-4cc7-89b6-fea13983f921] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:56:56 compute-1 nova_compute[225855]: 2026-01-20 14:56:56.456 225859 INFO nova.compute.manager [-] [instance: 3426109c-5671-4cc7-89b6-fea13983f921] Took 1.25 seconds to deallocate network for instance.
Jan 20 14:56:56 compute-1 nova_compute[225855]: 2026-01-20 14:56:56.533 225859 DEBUG nova.compute.manager [req-628444ff-a249-4e4f-b408-f04196cc158a req-4a38def0-1013-4c32-8f4f-308ccf74815b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 3426109c-5671-4cc7-89b6-fea13983f921] Received event network-vif-deleted-d93a212a-0f1f-4f7e-9e7e-ee3fd5a542e8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:56:56 compute-1 nova_compute[225855]: 2026-01-20 14:56:56.630 225859 DEBUG oslo_concurrency.lockutils [None req-b6008c26-9999-4183-b136-a3c591315ebe 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:56:56 compute-1 nova_compute[225855]: 2026-01-20 14:56:56.630 225859 DEBUG oslo_concurrency.lockutils [None req-b6008c26-9999-4183-b136-a3c591315ebe 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:56:56 compute-1 nova_compute[225855]: 2026-01-20 14:56:56.709 225859 DEBUG oslo_concurrency.processutils [None req-b6008c26-9999-4183-b136-a3c591315ebe 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:56:57 compute-1 podman[278896]: 2026-01-20 14:56:57.014682637 +0000 UTC m=+0.056524336 container health_status 533cd8e0c3b8b7f910f15cef49e7f879ab46c12ce646a35628c15f13725eb500 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 20 14:56:57 compute-1 nova_compute[225855]: 2026-01-20 14:56:57.083 225859 DEBUG nova.compute.manager [req-0b8f39b7-395f-4d9f-8e90-b631855d6923 req-17352424-488c-4a07-837c-76cfdf4ddd1e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 3426109c-5671-4cc7-89b6-fea13983f921] Received event network-vif-plugged-d93a212a-0f1f-4f7e-9e7e-ee3fd5a542e8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:56:57 compute-1 nova_compute[225855]: 2026-01-20 14:56:57.083 225859 DEBUG oslo_concurrency.lockutils [req-0b8f39b7-395f-4d9f-8e90-b631855d6923 req-17352424-488c-4a07-837c-76cfdf4ddd1e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "3426109c-5671-4cc7-89b6-fea13983f921-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:56:57 compute-1 nova_compute[225855]: 2026-01-20 14:56:57.084 225859 DEBUG oslo_concurrency.lockutils [req-0b8f39b7-395f-4d9f-8e90-b631855d6923 req-17352424-488c-4a07-837c-76cfdf4ddd1e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "3426109c-5671-4cc7-89b6-fea13983f921-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:56:57 compute-1 nova_compute[225855]: 2026-01-20 14:56:57.084 225859 DEBUG oslo_concurrency.lockutils [req-0b8f39b7-395f-4d9f-8e90-b631855d6923 req-17352424-488c-4a07-837c-76cfdf4ddd1e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "3426109c-5671-4cc7-89b6-fea13983f921-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:56:57 compute-1 nova_compute[225855]: 2026-01-20 14:56:57.084 225859 DEBUG nova.compute.manager [req-0b8f39b7-395f-4d9f-8e90-b631855d6923 req-17352424-488c-4a07-837c-76cfdf4ddd1e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 3426109c-5671-4cc7-89b6-fea13983f921] No waiting events found dispatching network-vif-plugged-d93a212a-0f1f-4f7e-9e7e-ee3fd5a542e8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:56:57 compute-1 nova_compute[225855]: 2026-01-20 14:56:57.085 225859 WARNING nova.compute.manager [req-0b8f39b7-395f-4d9f-8e90-b631855d6923 req-17352424-488c-4a07-837c-76cfdf4ddd1e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 3426109c-5671-4cc7-89b6-fea13983f921] Received unexpected event network-vif-plugged-d93a212a-0f1f-4f7e-9e7e-ee3fd5a542e8 for instance with vm_state deleted and task_state None.
Jan 20 14:56:57 compute-1 ceph-mon[81775]: pgmap v2136: 321 pgs: 321 active+clean; 532 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 7.8 MiB/s rd, 6.0 MiB/s wr, 386 op/s
Jan 20 14:56:57 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:56:57 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2810887296' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:56:57 compute-1 nova_compute[225855]: 2026-01-20 14:56:57.198 225859 DEBUG oslo_concurrency.processutils [None req-b6008c26-9999-4183-b136-a3c591315ebe 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:56:57 compute-1 nova_compute[225855]: 2026-01-20 14:56:57.204 225859 DEBUG nova.compute.provider_tree [None req-b6008c26-9999-4183-b136-a3c591315ebe 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:56:57 compute-1 nova_compute[225855]: 2026-01-20 14:56:57.386 225859 DEBUG nova.scheduler.client.report [None req-b6008c26-9999-4183-b136-a3c591315ebe 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:56:57 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:56:57 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:56:57 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:56:57.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:56:57 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e311 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:56:57 compute-1 nova_compute[225855]: 2026-01-20 14:56:57.571 225859 DEBUG oslo_concurrency.lockutils [None req-b6008c26-9999-4183-b136-a3c591315ebe 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.941s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:56:57 compute-1 nova_compute[225855]: 2026-01-20 14:56:57.771 225859 INFO nova.scheduler.client.report [None req-b6008c26-9999-4183-b136-a3c591315ebe 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Deleted allocations for instance 3426109c-5671-4cc7-89b6-fea13983f921
Jan 20 14:56:57 compute-1 nova_compute[225855]: 2026-01-20 14:56:57.945 225859 DEBUG oslo_concurrency.lockutils [None req-b6008c26-9999-4183-b136-a3c591315ebe 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Lock "3426109c-5671-4cc7-89b6-fea13983f921" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.464s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:56:58 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:56:58 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:56:58 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:56:58.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:56:58 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/2810887296' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:56:59 compute-1 ceph-mon[81775]: pgmap v2137: 321 pgs: 321 active+clean; 506 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 8.8 MiB/s rd, 5.9 MiB/s wr, 448 op/s
Jan 20 14:56:59 compute-1 nova_compute[225855]: 2026-01-20 14:56:59.444 225859 DEBUG oslo_concurrency.lockutils [None req-71e0153c-f8fc-4e1a-86c2-5791b270d513 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Acquiring lock "e32ecf59-145a-4ae9-a91e-288419407cd0" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:56:59 compute-1 nova_compute[225855]: 2026-01-20 14:56:59.444 225859 DEBUG oslo_concurrency.lockutils [None req-71e0153c-f8fc-4e1a-86c2-5791b270d513 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Lock "e32ecf59-145a-4ae9-a91e-288419407cd0" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:56:59 compute-1 nova_compute[225855]: 2026-01-20 14:56:59.445 225859 DEBUG oslo_concurrency.lockutils [None req-71e0153c-f8fc-4e1a-86c2-5791b270d513 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Acquiring lock "e32ecf59-145a-4ae9-a91e-288419407cd0-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:56:59 compute-1 nova_compute[225855]: 2026-01-20 14:56:59.445 225859 DEBUG oslo_concurrency.lockutils [None req-71e0153c-f8fc-4e1a-86c2-5791b270d513 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Lock "e32ecf59-145a-4ae9-a91e-288419407cd0-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:56:59 compute-1 nova_compute[225855]: 2026-01-20 14:56:59.445 225859 DEBUG oslo_concurrency.lockutils [None req-71e0153c-f8fc-4e1a-86c2-5791b270d513 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Lock "e32ecf59-145a-4ae9-a91e-288419407cd0-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:56:59 compute-1 nova_compute[225855]: 2026-01-20 14:56:59.447 225859 INFO nova.compute.manager [None req-71e0153c-f8fc-4e1a-86c2-5791b270d513 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: e32ecf59-145a-4ae9-a91e-288419407cd0] Terminating instance
Jan 20 14:56:59 compute-1 nova_compute[225855]: 2026-01-20 14:56:59.448 225859 DEBUG nova.compute.manager [None req-71e0153c-f8fc-4e1a-86c2-5791b270d513 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: e32ecf59-145a-4ae9-a91e-288419407cd0] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 20 14:56:59 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:56:59 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:56:59 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:56:59.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:56:59 compute-1 kernel: tap5909a21f-c1 (unregistering): left promiscuous mode
Jan 20 14:56:59 compute-1 NetworkManager[49104]: <info>  [1768921019.4918] device (tap5909a21f-c1): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 20 14:56:59 compute-1 nova_compute[225855]: 2026-01-20 14:56:59.501 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:56:59 compute-1 ovn_controller[130490]: 2026-01-20T14:56:59Z|00541|binding|INFO|Releasing lport 5909a21f-c1fb-4265-a7de-a6b0e6136194 from this chassis (sb_readonly=0)
Jan 20 14:56:59 compute-1 ovn_controller[130490]: 2026-01-20T14:56:59Z|00542|binding|INFO|Setting lport 5909a21f-c1fb-4265-a7de-a6b0e6136194 down in Southbound
Jan 20 14:56:59 compute-1 ovn_controller[130490]: 2026-01-20T14:56:59Z|00543|binding|INFO|Removing iface tap5909a21f-c1 ovn-installed in OVS
Jan 20 14:56:59 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:56:59.509 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ac:7a:cf 10.100.0.13'], port_security=['fa:16:3e:ac:7a:cf 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'e32ecf59-145a-4ae9-a91e-288419407cd0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ca6cd0afe0ab41e3ab36d21a4129f734', 'neutron:revision_number': '4', 'neutron:security_group_ids': '819ea4ae-b994-44d1-9da3-8b0ca609fb2a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ee620e3e-ef7e-4826-b394-b8a89442b353, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], logical_port=5909a21f-c1fb-4265-a7de-a6b0e6136194) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:56:59 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:56:59.511 140354 INFO neutron.agent.ovn.metadata.agent [-] Port 5909a21f-c1fb-4265-a7de-a6b0e6136194 in datapath f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c unbound from our chassis
Jan 20 14:56:59 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:56:59.512 140354 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 20 14:56:59 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:56:59.513 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[16f70f90-991b-49e2-9401-4ff68d8dffdb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:56:59 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:56:59.513 140354 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c namespace which is not needed anymore
Jan 20 14:56:59 compute-1 nova_compute[225855]: 2026-01-20 14:56:59.521 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:56:59 compute-1 systemd[1]: machine-qemu\x2d62\x2dinstance\x2d00000084.scope: Deactivated successfully.
Jan 20 14:56:59 compute-1 systemd[1]: machine-qemu\x2d62\x2dinstance\x2d00000084.scope: Consumed 13.577s CPU time.
Jan 20 14:56:59 compute-1 systemd-machined[194361]: Machine qemu-62-instance-00000084 terminated.
Jan 20 14:56:59 compute-1 neutron-haproxy-ovnmeta-f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c[277928]: [NOTICE]   (277932) : haproxy version is 2.8.14-c23fe91
Jan 20 14:56:59 compute-1 neutron-haproxy-ovnmeta-f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c[277928]: [NOTICE]   (277932) : path to executable is /usr/sbin/haproxy
Jan 20 14:56:59 compute-1 neutron-haproxy-ovnmeta-f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c[277928]: [ALERT]    (277932) : Current worker (277934) exited with code 143 (Terminated)
Jan 20 14:56:59 compute-1 neutron-haproxy-ovnmeta-f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c[277928]: [WARNING]  (277932) : All workers exited. Exiting... (0)
Jan 20 14:56:59 compute-1 systemd[1]: libpod-998e434681eead6e0c698e580159f48be23d2464951b6f3b45d2e6309690a250.scope: Deactivated successfully.
Jan 20 14:56:59 compute-1 podman[278943]: 2026-01-20 14:56:59.650726445 +0000 UTC m=+0.047712717 container died 998e434681eead6e0c698e580159f48be23d2464951b6f3b45d2e6309690a250 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:56:59 compute-1 nova_compute[225855]: 2026-01-20 14:56:59.668 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:56:59 compute-1 nova_compute[225855]: 2026-01-20 14:56:59.676 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:56:59 compute-1 nova_compute[225855]: 2026-01-20 14:56:59.682 225859 INFO nova.virt.libvirt.driver [-] [instance: e32ecf59-145a-4ae9-a91e-288419407cd0] Instance destroyed successfully.
Jan 20 14:56:59 compute-1 nova_compute[225855]: 2026-01-20 14:56:59.683 225859 DEBUG nova.objects.instance [None req-71e0153c-f8fc-4e1a-86c2-5791b270d513 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Lazy-loading 'resources' on Instance uuid e32ecf59-145a-4ae9-a91e-288419407cd0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:56:59 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-998e434681eead6e0c698e580159f48be23d2464951b6f3b45d2e6309690a250-userdata-shm.mount: Deactivated successfully.
Jan 20 14:56:59 compute-1 systemd[1]: var-lib-containers-storage-overlay-6beda3a917b01666983f6717d8a15faa248a8e035acb78285162928c0b4a3550-merged.mount: Deactivated successfully.
Jan 20 14:56:59 compute-1 podman[278943]: 2026-01-20 14:56:59.70688969 +0000 UTC m=+0.103875962 container cleanup 998e434681eead6e0c698e580159f48be23d2464951b6f3b45d2e6309690a250 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 20 14:56:59 compute-1 systemd[1]: libpod-conmon-998e434681eead6e0c698e580159f48be23d2464951b6f3b45d2e6309690a250.scope: Deactivated successfully.
Jan 20 14:56:59 compute-1 nova_compute[225855]: 2026-01-20 14:56:59.721 225859 DEBUG nova.virt.libvirt.vif [None req-71e0153c-f8fc-4e1a-86c2-5791b270d513 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T14:56:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1537565903',display_name='tempest-ServersTestJSON-server-1537565903',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverstestjson-server-1537565903',id=132,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-20T14:56:37Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ca6cd0afe0ab41e3ab36d21a4129f734',ramdisk_id='',reservation_id='r-iv93ouga',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestJSON-405461620',owner_user_name='tempest-ServersTestJSON-405461620-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T14:56:37Z,user_data=None,user_id='395a5c503218411284bc94c45263d1fb',uuid=e32ecf59-145a-4ae9-a91e-288419407cd0,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5909a21f-c1fb-4265-a7de-a6b0e6136194", "address": "fa:16:3e:ac:7a:cf", "network": {"id": "f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c", "bridge": "br-int", "label": "tempest-ServersTestJSON-1745321011-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca6cd0afe0ab41e3ab36d21a4129f734", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5909a21f-c1", "ovs_interfaceid": "5909a21f-c1fb-4265-a7de-a6b0e6136194", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 20 14:56:59 compute-1 nova_compute[225855]: 2026-01-20 14:56:59.722 225859 DEBUG nova.network.os_vif_util [None req-71e0153c-f8fc-4e1a-86c2-5791b270d513 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Converting VIF {"id": "5909a21f-c1fb-4265-a7de-a6b0e6136194", "address": "fa:16:3e:ac:7a:cf", "network": {"id": "f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c", "bridge": "br-int", "label": "tempest-ServersTestJSON-1745321011-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca6cd0afe0ab41e3ab36d21a4129f734", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5909a21f-c1", "ovs_interfaceid": "5909a21f-c1fb-4265-a7de-a6b0e6136194", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:56:59 compute-1 nova_compute[225855]: 2026-01-20 14:56:59.723 225859 DEBUG nova.network.os_vif_util [None req-71e0153c-f8fc-4e1a-86c2-5791b270d513 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:ac:7a:cf,bridge_name='br-int',has_traffic_filtering=True,id=5909a21f-c1fb-4265-a7de-a6b0e6136194,network=Network(f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5909a21f-c1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:56:59 compute-1 nova_compute[225855]: 2026-01-20 14:56:59.724 225859 DEBUG os_vif [None req-71e0153c-f8fc-4e1a-86c2-5791b270d513 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:ac:7a:cf,bridge_name='br-int',has_traffic_filtering=True,id=5909a21f-c1fb-4265-a7de-a6b0e6136194,network=Network(f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5909a21f-c1') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 20 14:56:59 compute-1 nova_compute[225855]: 2026-01-20 14:56:59.726 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:56:59 compute-1 nova_compute[225855]: 2026-01-20 14:56:59.727 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5909a21f-c1, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:56:59 compute-1 nova_compute[225855]: 2026-01-20 14:56:59.733 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 14:56:59 compute-1 nova_compute[225855]: 2026-01-20 14:56:59.735 225859 INFO os_vif [None req-71e0153c-f8fc-4e1a-86c2-5791b270d513 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:ac:7a:cf,bridge_name='br-int',has_traffic_filtering=True,id=5909a21f-c1fb-4265-a7de-a6b0e6136194,network=Network(f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5909a21f-c1')
Jan 20 14:56:59 compute-1 podman[278984]: 2026-01-20 14:56:59.784671144 +0000 UTC m=+0.051515124 container remove 998e434681eead6e0c698e580159f48be23d2464951b6f3b45d2e6309690a250 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 20 14:56:59 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:56:59.790 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[c74e6b3e-ea07-4d83-8b4f-aa1fa5f7f1c2]: (4, ('Tue Jan 20 02:56:59 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c (998e434681eead6e0c698e580159f48be23d2464951b6f3b45d2e6309690a250)\n998e434681eead6e0c698e580159f48be23d2464951b6f3b45d2e6309690a250\nTue Jan 20 02:56:59 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c (998e434681eead6e0c698e580159f48be23d2464951b6f3b45d2e6309690a250)\n998e434681eead6e0c698e580159f48be23d2464951b6f3b45d2e6309690a250\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:56:59 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:56:59.792 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[58f4c533-1526-485c-b8a4-195562bf05d9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:56:59 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:56:59.793 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf4c8474b-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:56:59 compute-1 kernel: tapf4c8474b-00: left promiscuous mode
Jan 20 14:56:59 compute-1 nova_compute[225855]: 2026-01-20 14:56:59.799 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:56:59 compute-1 nova_compute[225855]: 2026-01-20 14:56:59.813 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:56:59 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:56:59.816 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[7008a930-fc31-418a-8737-78529d1f56f5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:56:59 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:56:59.836 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[378a5839-aec1-49f3-b302-7f25ee6dfe6a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:56:59 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:56:59.839 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[4c8a8987-8605-4711-830a-6c542b3ce0c5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:56:59 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:56:59.858 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[f88e5442-f2ee-4e0d-bbad-ee4bda8b0ece]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 603596, 'reachable_time': 19125, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 279018, 'error': None, 'target': 'ovnmeta-f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:56:59 compute-1 systemd[1]: run-netns-ovnmeta\x2df4c8474b\x2d0ca3\x2d4cb0\x2db6dd\x2de6aa302def5c.mount: Deactivated successfully.
Jan 20 14:56:59 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:56:59.863 140466 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 20 14:56:59 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:56:59.864 140466 DEBUG oslo.privsep.daemon [-] privsep: reply[587979e2-d1dd-4695-84e8-b4c9da892edd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:56:59 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e312 e312: 3 total, 3 up, 3 in
Jan 20 14:57:00 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:57:00 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:57:00 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:57:00.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:57:00 compute-1 nova_compute[225855]: 2026-01-20 14:57:00.131 225859 DEBUG nova.compute.manager [req-c528debc-36e8-4353-a2e3-ea5f450725c1 req-b389f7c4-186a-4efe-847c-d7fd9a937572 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: e32ecf59-145a-4ae9-a91e-288419407cd0] Received event network-vif-unplugged-5909a21f-c1fb-4265-a7de-a6b0e6136194 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:57:00 compute-1 nova_compute[225855]: 2026-01-20 14:57:00.132 225859 DEBUG oslo_concurrency.lockutils [req-c528debc-36e8-4353-a2e3-ea5f450725c1 req-b389f7c4-186a-4efe-847c-d7fd9a937572 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "e32ecf59-145a-4ae9-a91e-288419407cd0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:57:00 compute-1 nova_compute[225855]: 2026-01-20 14:57:00.132 225859 DEBUG oslo_concurrency.lockutils [req-c528debc-36e8-4353-a2e3-ea5f450725c1 req-b389f7c4-186a-4efe-847c-d7fd9a937572 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "e32ecf59-145a-4ae9-a91e-288419407cd0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:57:00 compute-1 nova_compute[225855]: 2026-01-20 14:57:00.132 225859 DEBUG oslo_concurrency.lockutils [req-c528debc-36e8-4353-a2e3-ea5f450725c1 req-b389f7c4-186a-4efe-847c-d7fd9a937572 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "e32ecf59-145a-4ae9-a91e-288419407cd0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:57:00 compute-1 nova_compute[225855]: 2026-01-20 14:57:00.133 225859 DEBUG nova.compute.manager [req-c528debc-36e8-4353-a2e3-ea5f450725c1 req-b389f7c4-186a-4efe-847c-d7fd9a937572 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: e32ecf59-145a-4ae9-a91e-288419407cd0] No waiting events found dispatching network-vif-unplugged-5909a21f-c1fb-4265-a7de-a6b0e6136194 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:57:00 compute-1 nova_compute[225855]: 2026-01-20 14:57:00.133 225859 DEBUG nova.compute.manager [req-c528debc-36e8-4353-a2e3-ea5f450725c1 req-b389f7c4-186a-4efe-847c-d7fd9a937572 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: e32ecf59-145a-4ae9-a91e-288419407cd0] Received event network-vif-unplugged-5909a21f-c1fb-4265-a7de-a6b0e6136194 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 20 14:57:00 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/689296263' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:57:00 compute-1 ceph-mon[81775]: osdmap e312: 3 total, 3 up, 3 in
Jan 20 14:57:00 compute-1 nova_compute[225855]: 2026-01-20 14:57:00.207 225859 INFO nova.virt.libvirt.driver [None req-71e0153c-f8fc-4e1a-86c2-5791b270d513 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: e32ecf59-145a-4ae9-a91e-288419407cd0] Deleting instance files /var/lib/nova/instances/e32ecf59-145a-4ae9-a91e-288419407cd0_del
Jan 20 14:57:00 compute-1 nova_compute[225855]: 2026-01-20 14:57:00.208 225859 INFO nova.virt.libvirt.driver [None req-71e0153c-f8fc-4e1a-86c2-5791b270d513 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: e32ecf59-145a-4ae9-a91e-288419407cd0] Deletion of /var/lib/nova/instances/e32ecf59-145a-4ae9-a91e-288419407cd0_del complete
Jan 20 14:57:00 compute-1 nova_compute[225855]: 2026-01-20 14:57:00.273 225859 INFO nova.compute.manager [None req-71e0153c-f8fc-4e1a-86c2-5791b270d513 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: e32ecf59-145a-4ae9-a91e-288419407cd0] Took 0.82 seconds to destroy the instance on the hypervisor.
Jan 20 14:57:00 compute-1 nova_compute[225855]: 2026-01-20 14:57:00.274 225859 DEBUG oslo.service.loopingcall [None req-71e0153c-f8fc-4e1a-86c2-5791b270d513 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 20 14:57:00 compute-1 nova_compute[225855]: 2026-01-20 14:57:00.274 225859 DEBUG nova.compute.manager [-] [instance: e32ecf59-145a-4ae9-a91e-288419407cd0] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 20 14:57:00 compute-1 nova_compute[225855]: 2026-01-20 14:57:00.275 225859 DEBUG nova.network.neutron [-] [instance: e32ecf59-145a-4ae9-a91e-288419407cd0] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 20 14:57:00 compute-1 nova_compute[225855]: 2026-01-20 14:57:00.334 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:57:01 compute-1 ceph-mon[81775]: pgmap v2139: 321 pgs: 321 active+clean; 492 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 7.0 MiB/s rd, 3.2 MiB/s wr, 373 op/s
Jan 20 14:57:01 compute-1 nova_compute[225855]: 2026-01-20 14:57:01.249 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:57:01 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:57:01 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:57:01 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:57:01.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:57:02 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:57:02 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:57:02 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:57:02.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:57:02 compute-1 nova_compute[225855]: 2026-01-20 14:57:02.346 225859 DEBUG nova.compute.manager [req-5847965d-8864-4f9b-9262-46aac7f2c6bf req-3fe8b7f3-1b01-4be8-b1f4-0531d247f254 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: e32ecf59-145a-4ae9-a91e-288419407cd0] Received event network-vif-plugged-5909a21f-c1fb-4265-a7de-a6b0e6136194 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:57:02 compute-1 nova_compute[225855]: 2026-01-20 14:57:02.347 225859 DEBUG oslo_concurrency.lockutils [req-5847965d-8864-4f9b-9262-46aac7f2c6bf req-3fe8b7f3-1b01-4be8-b1f4-0531d247f254 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "e32ecf59-145a-4ae9-a91e-288419407cd0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:57:02 compute-1 nova_compute[225855]: 2026-01-20 14:57:02.348 225859 DEBUG oslo_concurrency.lockutils [req-5847965d-8864-4f9b-9262-46aac7f2c6bf req-3fe8b7f3-1b01-4be8-b1f4-0531d247f254 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "e32ecf59-145a-4ae9-a91e-288419407cd0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:57:02 compute-1 nova_compute[225855]: 2026-01-20 14:57:02.348 225859 DEBUG oslo_concurrency.lockutils [req-5847965d-8864-4f9b-9262-46aac7f2c6bf req-3fe8b7f3-1b01-4be8-b1f4-0531d247f254 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "e32ecf59-145a-4ae9-a91e-288419407cd0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:57:02 compute-1 nova_compute[225855]: 2026-01-20 14:57:02.348 225859 DEBUG nova.compute.manager [req-5847965d-8864-4f9b-9262-46aac7f2c6bf req-3fe8b7f3-1b01-4be8-b1f4-0531d247f254 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: e32ecf59-145a-4ae9-a91e-288419407cd0] No waiting events found dispatching network-vif-plugged-5909a21f-c1fb-4265-a7de-a6b0e6136194 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:57:02 compute-1 nova_compute[225855]: 2026-01-20 14:57:02.348 225859 WARNING nova.compute.manager [req-5847965d-8864-4f9b-9262-46aac7f2c6bf req-3fe8b7f3-1b01-4be8-b1f4-0531d247f254 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: e32ecf59-145a-4ae9-a91e-288419407cd0] Received unexpected event network-vif-plugged-5909a21f-c1fb-4265-a7de-a6b0e6136194 for instance with vm_state active and task_state deleting.
Jan 20 14:57:02 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:57:02 compute-1 ceph-mon[81775]: pgmap v2140: 321 pgs: 321 active+clean; 422 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 6.0 MiB/s rd, 2.8 MiB/s wr, 353 op/s
Jan 20 14:57:02 compute-1 nova_compute[225855]: 2026-01-20 14:57:02.697 225859 DEBUG nova.network.neutron [-] [instance: e32ecf59-145a-4ae9-a91e-288419407cd0] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:57:02 compute-1 nova_compute[225855]: 2026-01-20 14:57:02.714 225859 INFO nova.compute.manager [-] [instance: e32ecf59-145a-4ae9-a91e-288419407cd0] Took 2.44 seconds to deallocate network for instance.
Jan 20 14:57:02 compute-1 nova_compute[225855]: 2026-01-20 14:57:02.761 225859 DEBUG oslo_concurrency.lockutils [None req-71e0153c-f8fc-4e1a-86c2-5791b270d513 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:57:02 compute-1 nova_compute[225855]: 2026-01-20 14:57:02.762 225859 DEBUG oslo_concurrency.lockutils [None req-71e0153c-f8fc-4e1a-86c2-5791b270d513 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:57:02 compute-1 nova_compute[225855]: 2026-01-20 14:57:02.806 225859 DEBUG nova.compute.manager [req-27d8ae0c-49bf-418b-adb3-264f32eeae8e req-f1a4ed8a-e3d9-4390-accb-b6d1da175cb5 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: e32ecf59-145a-4ae9-a91e-288419407cd0] Received event network-vif-deleted-5909a21f-c1fb-4265-a7de-a6b0e6136194 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:57:02 compute-1 nova_compute[225855]: 2026-01-20 14:57:02.834 225859 DEBUG oslo_concurrency.processutils [None req-71e0153c-f8fc-4e1a-86c2-5791b270d513 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:57:03 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:57:03 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/234247054' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:57:03 compute-1 nova_compute[225855]: 2026-01-20 14:57:03.290 225859 DEBUG oslo_concurrency.processutils [None req-71e0153c-f8fc-4e1a-86c2-5791b270d513 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:57:03 compute-1 nova_compute[225855]: 2026-01-20 14:57:03.298 225859 DEBUG nova.compute.provider_tree [None req-71e0153c-f8fc-4e1a-86c2-5791b270d513 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:57:03 compute-1 nova_compute[225855]: 2026-01-20 14:57:03.318 225859 DEBUG nova.scheduler.client.report [None req-71e0153c-f8fc-4e1a-86c2-5791b270d513 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:57:03 compute-1 nova_compute[225855]: 2026-01-20 14:57:03.360 225859 DEBUG oslo_concurrency.lockutils [None req-71e0153c-f8fc-4e1a-86c2-5791b270d513 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.598s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:57:03 compute-1 nova_compute[225855]: 2026-01-20 14:57:03.386 225859 INFO nova.scheduler.client.report [None req-71e0153c-f8fc-4e1a-86c2-5791b270d513 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Deleted allocations for instance e32ecf59-145a-4ae9-a91e-288419407cd0
Jan 20 14:57:03 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:57:03 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:57:03 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:57:03.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:57:03 compute-1 nova_compute[225855]: 2026-01-20 14:57:03.478 225859 DEBUG oslo_concurrency.lockutils [None req-71e0153c-f8fc-4e1a-86c2-5791b270d513 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Lock "e32ecf59-145a-4ae9-a91e-288419407cd0" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.034s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:57:03 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/234247054' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:57:03 compute-1 nova_compute[225855]: 2026-01-20 14:57:03.872 225859 DEBUG nova.virt.libvirt.driver [None req-7ceaa51e-5ae5-4572-8a7a-d5b83daa8d3a 168ca7898b964a44b76c90912fa89a66 4d4e37f4fd7f4dbbb25648ec639e0e43 - - default default] [instance: 538fe1f0-b666-4b97-b2ef-317adae0a47a] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Jan 20 14:57:04 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:57:04 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:57:04 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:57:04.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:57:04 compute-1 ceph-mon[81775]: pgmap v2141: 321 pgs: 321 active+clean; 385 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 1.1 MiB/s wr, 243 op/s
Jan 20 14:57:04 compute-1 nova_compute[225855]: 2026-01-20 14:57:04.728 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:57:05 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:57:05 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:57:05 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:57:05.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:57:06 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:57:06 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:57:06 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:57:06.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:57:06 compute-1 nova_compute[225855]: 2026-01-20 14:57:06.252 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:57:06 compute-1 kernel: tap6550efe7-72 (unregistering): left promiscuous mode
Jan 20 14:57:06 compute-1 NetworkManager[49104]: <info>  [1768921026.8414] device (tap6550efe7-72): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 20 14:57:06 compute-1 ovn_controller[130490]: 2026-01-20T14:57:06Z|00544|binding|INFO|Releasing lport 6550efe7-7235-437c-b9f3-728b676371ee from this chassis (sb_readonly=0)
Jan 20 14:57:06 compute-1 ovn_controller[130490]: 2026-01-20T14:57:06Z|00545|binding|INFO|Setting lport 6550efe7-7235-437c-b9f3-728b676371ee down in Southbound
Jan 20 14:57:06 compute-1 nova_compute[225855]: 2026-01-20 14:57:06.845 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:57:06 compute-1 ovn_controller[130490]: 2026-01-20T14:57:06Z|00546|binding|INFO|Removing iface tap6550efe7-72 ovn-installed in OVS
Jan 20 14:57:06 compute-1 nova_compute[225855]: 2026-01-20 14:57:06.848 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:57:06 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:57:06.853 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e3:4f:ce 10.100.0.3'], port_security=['fa:16:3e:e3:4f:ce 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '538fe1f0-b666-4b97-b2ef-317adae0a47a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-87caaa2e-d899-4eed-8b6a-8d19125c693b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4d4e37f4fd7f4dbbb25648ec639e0e43', 'neutron:revision_number': '4', 'neutron:security_group_ids': '16f3c0d4-753e-4c8b-b00a-7073cbcfa6dc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=be68bfdf-b1f2-46c8-82b2-2c275774a706, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], logical_port=6550efe7-7235-437c-b9f3-728b676371ee) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:57:06 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:57:06.855 140354 INFO neutron.agent.ovn.metadata.agent [-] Port 6550efe7-7235-437c-b9f3-728b676371ee in datapath 87caaa2e-d899-4eed-8b6a-8d19125c693b unbound from our chassis
Jan 20 14:57:06 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:57:06.856 140354 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 87caaa2e-d899-4eed-8b6a-8d19125c693b or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Jan 20 14:57:06 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:57:06.858 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[e74ffd80-0325-431f-9630-1e06287569e9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:57:06 compute-1 nova_compute[225855]: 2026-01-20 14:57:06.861 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:57:06 compute-1 systemd[1]: machine-qemu\x2d63\x2dinstance\x2d00000086.scope: Deactivated successfully.
Jan 20 14:57:06 compute-1 systemd[1]: machine-qemu\x2d63\x2dinstance\x2d00000086.scope: Consumed 13.681s CPU time.
Jan 20 14:57:06 compute-1 systemd-machined[194361]: Machine qemu-63-instance-00000086 terminated.
Jan 20 14:57:07 compute-1 nova_compute[225855]: 2026-01-20 14:57:07.085 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:57:07 compute-1 nova_compute[225855]: 2026-01-20 14:57:07.089 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:57:07 compute-1 nova_compute[225855]: 2026-01-20 14:57:07.101 225859 INFO nova.virt.libvirt.driver [None req-7ceaa51e-5ae5-4572-8a7a-d5b83daa8d3a 168ca7898b964a44b76c90912fa89a66 4d4e37f4fd7f4dbbb25648ec639e0e43 - - default default] [instance: 538fe1f0-b666-4b97-b2ef-317adae0a47a] Instance shutdown successfully after 13 seconds.
Jan 20 14:57:07 compute-1 nova_compute[225855]: 2026-01-20 14:57:07.107 225859 INFO nova.virt.libvirt.driver [-] [instance: 538fe1f0-b666-4b97-b2ef-317adae0a47a] Instance destroyed successfully.
Jan 20 14:57:07 compute-1 nova_compute[225855]: 2026-01-20 14:57:07.107 225859 DEBUG nova.objects.instance [None req-7ceaa51e-5ae5-4572-8a7a-d5b83daa8d3a 168ca7898b964a44b76c90912fa89a66 4d4e37f4fd7f4dbbb25648ec639e0e43 - - default default] Lazy-loading 'numa_topology' on Instance uuid 538fe1f0-b666-4b97-b2ef-317adae0a47a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:57:07 compute-1 ceph-mon[81775]: pgmap v2142: 321 pgs: 321 active+clean; 403 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 2.6 MiB/s wr, 197 op/s
Jan 20 14:57:07 compute-1 nova_compute[225855]: 2026-01-20 14:57:07.165 225859 DEBUG nova.compute.manager [req-1b828cd4-e917-4e7f-a96c-761f07ad436c req-5e39838f-59bd-4fcc-891e-b8ef3d113d04 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 538fe1f0-b666-4b97-b2ef-317adae0a47a] Received event network-vif-unplugged-6550efe7-7235-437c-b9f3-728b676371ee external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:57:07 compute-1 nova_compute[225855]: 2026-01-20 14:57:07.165 225859 DEBUG oslo_concurrency.lockutils [req-1b828cd4-e917-4e7f-a96c-761f07ad436c req-5e39838f-59bd-4fcc-891e-b8ef3d113d04 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "538fe1f0-b666-4b97-b2ef-317adae0a47a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:57:07 compute-1 nova_compute[225855]: 2026-01-20 14:57:07.165 225859 DEBUG oslo_concurrency.lockutils [req-1b828cd4-e917-4e7f-a96c-761f07ad436c req-5e39838f-59bd-4fcc-891e-b8ef3d113d04 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "538fe1f0-b666-4b97-b2ef-317adae0a47a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:57:07 compute-1 nova_compute[225855]: 2026-01-20 14:57:07.166 225859 DEBUG oslo_concurrency.lockutils [req-1b828cd4-e917-4e7f-a96c-761f07ad436c req-5e39838f-59bd-4fcc-891e-b8ef3d113d04 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "538fe1f0-b666-4b97-b2ef-317adae0a47a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:57:07 compute-1 nova_compute[225855]: 2026-01-20 14:57:07.166 225859 DEBUG nova.compute.manager [req-1b828cd4-e917-4e7f-a96c-761f07ad436c req-5e39838f-59bd-4fcc-891e-b8ef3d113d04 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 538fe1f0-b666-4b97-b2ef-317adae0a47a] No waiting events found dispatching network-vif-unplugged-6550efe7-7235-437c-b9f3-728b676371ee pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:57:07 compute-1 nova_compute[225855]: 2026-01-20 14:57:07.166 225859 WARNING nova.compute.manager [req-1b828cd4-e917-4e7f-a96c-761f07ad436c req-5e39838f-59bd-4fcc-891e-b8ef3d113d04 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 538fe1f0-b666-4b97-b2ef-317adae0a47a] Received unexpected event network-vif-unplugged-6550efe7-7235-437c-b9f3-728b676371ee for instance with vm_state active and task_state rescuing.
Jan 20 14:57:07 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:57:07 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:57:07 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:57:07.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:57:07 compute-1 nova_compute[225855]: 2026-01-20 14:57:07.485 225859 INFO nova.virt.libvirt.driver [None req-7ceaa51e-5ae5-4572-8a7a-d5b83daa8d3a 168ca7898b964a44b76c90912fa89a66 4d4e37f4fd7f4dbbb25648ec639e0e43 - - default default] [instance: 538fe1f0-b666-4b97-b2ef-317adae0a47a] Attempting rescue
Jan 20 14:57:07 compute-1 nova_compute[225855]: 2026-01-20 14:57:07.486 225859 DEBUG nova.virt.libvirt.driver [None req-7ceaa51e-5ae5-4572-8a7a-d5b83daa8d3a 168ca7898b964a44b76c90912fa89a66 4d4e37f4fd7f4dbbb25648ec639e0e43 - - default default] rescue generated disk_info: {'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} rescue /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4314
Jan 20 14:57:07 compute-1 nova_compute[225855]: 2026-01-20 14:57:07.489 225859 DEBUG nova.virt.libvirt.driver [None req-7ceaa51e-5ae5-4572-8a7a-d5b83daa8d3a 168ca7898b964a44b76c90912fa89a66 4d4e37f4fd7f4dbbb25648ec639e0e43 - - default default] [instance: 538fe1f0-b666-4b97-b2ef-317adae0a47a] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719
Jan 20 14:57:07 compute-1 nova_compute[225855]: 2026-01-20 14:57:07.490 225859 INFO nova.virt.libvirt.driver [None req-7ceaa51e-5ae5-4572-8a7a-d5b83daa8d3a 168ca7898b964a44b76c90912fa89a66 4d4e37f4fd7f4dbbb25648ec639e0e43 - - default default] [instance: 538fe1f0-b666-4b97-b2ef-317adae0a47a] Creating image(s)
Jan 20 14:57:07 compute-1 nova_compute[225855]: 2026-01-20 14:57:07.515 225859 DEBUG nova.storage.rbd_utils [None req-7ceaa51e-5ae5-4572-8a7a-d5b83daa8d3a 168ca7898b964a44b76c90912fa89a66 4d4e37f4fd7f4dbbb25648ec639e0e43 - - default default] rbd image 538fe1f0-b666-4b97-b2ef-317adae0a47a_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:57:07 compute-1 nova_compute[225855]: 2026-01-20 14:57:07.519 225859 DEBUG nova.objects.instance [None req-7ceaa51e-5ae5-4572-8a7a-d5b83daa8d3a 168ca7898b964a44b76c90912fa89a66 4d4e37f4fd7f4dbbb25648ec639e0e43 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 538fe1f0-b666-4b97-b2ef-317adae0a47a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:57:07 compute-1 nova_compute[225855]: 2026-01-20 14:57:07.583 225859 DEBUG nova.storage.rbd_utils [None req-7ceaa51e-5ae5-4572-8a7a-d5b83daa8d3a 168ca7898b964a44b76c90912fa89a66 4d4e37f4fd7f4dbbb25648ec639e0e43 - - default default] rbd image 538fe1f0-b666-4b97-b2ef-317adae0a47a_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:57:07 compute-1 nova_compute[225855]: 2026-01-20 14:57:07.608 225859 DEBUG nova.storage.rbd_utils [None req-7ceaa51e-5ae5-4572-8a7a-d5b83daa8d3a 168ca7898b964a44b76c90912fa89a66 4d4e37f4fd7f4dbbb25648ec639e0e43 - - default default] rbd image 538fe1f0-b666-4b97-b2ef-317adae0a47a_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:57:07 compute-1 nova_compute[225855]: 2026-01-20 14:57:07.613 225859 DEBUG oslo_concurrency.processutils [None req-7ceaa51e-5ae5-4572-8a7a-d5b83daa8d3a 168ca7898b964a44b76c90912fa89a66 4d4e37f4fd7f4dbbb25648ec639e0e43 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:57:07 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:57:07 compute-1 nova_compute[225855]: 2026-01-20 14:57:07.673 225859 DEBUG oslo_concurrency.processutils [None req-7ceaa51e-5ae5-4572-8a7a-d5b83daa8d3a 168ca7898b964a44b76c90912fa89a66 4d4e37f4fd7f4dbbb25648ec639e0e43 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:57:07 compute-1 nova_compute[225855]: 2026-01-20 14:57:07.675 225859 DEBUG oslo_concurrency.lockutils [None req-7ceaa51e-5ae5-4572-8a7a-d5b83daa8d3a 168ca7898b964a44b76c90912fa89a66 4d4e37f4fd7f4dbbb25648ec639e0e43 - - default default] Acquiring lock "82d5c1918fd7c974214c7a48c1793a7a82560462" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:57:07 compute-1 nova_compute[225855]: 2026-01-20 14:57:07.675 225859 DEBUG oslo_concurrency.lockutils [None req-7ceaa51e-5ae5-4572-8a7a-d5b83daa8d3a 168ca7898b964a44b76c90912fa89a66 4d4e37f4fd7f4dbbb25648ec639e0e43 - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:57:07 compute-1 nova_compute[225855]: 2026-01-20 14:57:07.676 225859 DEBUG oslo_concurrency.lockutils [None req-7ceaa51e-5ae5-4572-8a7a-d5b83daa8d3a 168ca7898b964a44b76c90912fa89a66 4d4e37f4fd7f4dbbb25648ec639e0e43 - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:57:07 compute-1 nova_compute[225855]: 2026-01-20 14:57:07.700 225859 DEBUG nova.storage.rbd_utils [None req-7ceaa51e-5ae5-4572-8a7a-d5b83daa8d3a 168ca7898b964a44b76c90912fa89a66 4d4e37f4fd7f4dbbb25648ec639e0e43 - - default default] rbd image 538fe1f0-b666-4b97-b2ef-317adae0a47a_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:57:07 compute-1 nova_compute[225855]: 2026-01-20 14:57:07.704 225859 DEBUG oslo_concurrency.processutils [None req-7ceaa51e-5ae5-4572-8a7a-d5b83daa8d3a 168ca7898b964a44b76c90912fa89a66 4d4e37f4fd7f4dbbb25648ec639e0e43 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 538fe1f0-b666-4b97-b2ef-317adae0a47a_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:57:07 compute-1 nova_compute[225855]: 2026-01-20 14:57:07.997 225859 DEBUG oslo_concurrency.processutils [None req-7ceaa51e-5ae5-4572-8a7a-d5b83daa8d3a 168ca7898b964a44b76c90912fa89a66 4d4e37f4fd7f4dbbb25648ec639e0e43 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 538fe1f0-b666-4b97-b2ef-317adae0a47a_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.294s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:57:07 compute-1 nova_compute[225855]: 2026-01-20 14:57:07.998 225859 DEBUG nova.objects.instance [None req-7ceaa51e-5ae5-4572-8a7a-d5b83daa8d3a 168ca7898b964a44b76c90912fa89a66 4d4e37f4fd7f4dbbb25648ec639e0e43 - - default default] Lazy-loading 'migration_context' on Instance uuid 538fe1f0-b666-4b97-b2ef-317adae0a47a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:57:08 compute-1 nova_compute[225855]: 2026-01-20 14:57:08.013 225859 DEBUG nova.virt.libvirt.driver [None req-7ceaa51e-5ae5-4572-8a7a-d5b83daa8d3a 168ca7898b964a44b76c90912fa89a66 4d4e37f4fd7f4dbbb25648ec639e0e43 - - default default] [instance: 538fe1f0-b666-4b97-b2ef-317adae0a47a] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 20 14:57:08 compute-1 nova_compute[225855]: 2026-01-20 14:57:08.014 225859 DEBUG nova.virt.libvirt.driver [None req-7ceaa51e-5ae5-4572-8a7a-d5b83daa8d3a 168ca7898b964a44b76c90912fa89a66 4d4e37f4fd7f4dbbb25648ec639e0e43 - - default default] [instance: 538fe1f0-b666-4b97-b2ef-317adae0a47a] Start _get_guest_xml network_info=[{"id": "6550efe7-7235-437c-b9f3-728b676371ee", "address": "fa:16:3e:e3:4f:ce", "network": {"id": "87caaa2e-d899-4eed-8b6a-8d19125c693b", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-347508957-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueTestJSONUnderV235-347508957-network", "vif_mac": "fa:16:3e:e3:4f:ce"}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "4d4e37f4fd7f4dbbb25648ec639e0e43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6550efe7-72", "ovs_interfaceid": "6550efe7-7235-437c-b9f3-728b676371ee", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>) rescue={'image_id': 'a32b3e07-16d8-46fd-9a7b-c242c432fcf9', 'kernel_id': '', 'ramdisk_id': ''} block_device_info=None _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 20 14:57:08 compute-1 nova_compute[225855]: 2026-01-20 14:57:08.014 225859 DEBUG nova.objects.instance [None req-7ceaa51e-5ae5-4572-8a7a-d5b83daa8d3a 168ca7898b964a44b76c90912fa89a66 4d4e37f4fd7f4dbbb25648ec639e0e43 - - default default] Lazy-loading 'resources' on Instance uuid 538fe1f0-b666-4b97-b2ef-317adae0a47a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:57:08 compute-1 nova_compute[225855]: 2026-01-20 14:57:08.032 225859 WARNING nova.virt.libvirt.driver [None req-7ceaa51e-5ae5-4572-8a7a-d5b83daa8d3a 168ca7898b964a44b76c90912fa89a66 4d4e37f4fd7f4dbbb25648ec639e0e43 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 14:57:08 compute-1 nova_compute[225855]: 2026-01-20 14:57:08.037 225859 DEBUG nova.virt.libvirt.host [None req-7ceaa51e-5ae5-4572-8a7a-d5b83daa8d3a 168ca7898b964a44b76c90912fa89a66 4d4e37f4fd7f4dbbb25648ec639e0e43 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 20 14:57:08 compute-1 nova_compute[225855]: 2026-01-20 14:57:08.038 225859 DEBUG nova.virt.libvirt.host [None req-7ceaa51e-5ae5-4572-8a7a-d5b83daa8d3a 168ca7898b964a44b76c90912fa89a66 4d4e37f4fd7f4dbbb25648ec639e0e43 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 20 14:57:08 compute-1 nova_compute[225855]: 2026-01-20 14:57:08.041 225859 DEBUG nova.virt.libvirt.host [None req-7ceaa51e-5ae5-4572-8a7a-d5b83daa8d3a 168ca7898b964a44b76c90912fa89a66 4d4e37f4fd7f4dbbb25648ec639e0e43 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 20 14:57:08 compute-1 nova_compute[225855]: 2026-01-20 14:57:08.041 225859 DEBUG nova.virt.libvirt.host [None req-7ceaa51e-5ae5-4572-8a7a-d5b83daa8d3a 168ca7898b964a44b76c90912fa89a66 4d4e37f4fd7f4dbbb25648ec639e0e43 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 20 14:57:08 compute-1 nova_compute[225855]: 2026-01-20 14:57:08.042 225859 DEBUG nova.virt.libvirt.driver [None req-7ceaa51e-5ae5-4572-8a7a-d5b83daa8d3a 168ca7898b964a44b76c90912fa89a66 4d4e37f4fd7f4dbbb25648ec639e0e43 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 20 14:57:08 compute-1 nova_compute[225855]: 2026-01-20 14:57:08.042 225859 DEBUG nova.virt.hardware [None req-7ceaa51e-5ae5-4572-8a7a-d5b83daa8d3a 168ca7898b964a44b76c90912fa89a66 4d4e37f4fd7f4dbbb25648ec639e0e43 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-20T14:21:55Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='522deaab-a741-4dbb-932d-d8b13a211c33',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 20 14:57:08 compute-1 nova_compute[225855]: 2026-01-20 14:57:08.043 225859 DEBUG nova.virt.hardware [None req-7ceaa51e-5ae5-4572-8a7a-d5b83daa8d3a 168ca7898b964a44b76c90912fa89a66 4d4e37f4fd7f4dbbb25648ec639e0e43 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 20 14:57:08 compute-1 nova_compute[225855]: 2026-01-20 14:57:08.043 225859 DEBUG nova.virt.hardware [None req-7ceaa51e-5ae5-4572-8a7a-d5b83daa8d3a 168ca7898b964a44b76c90912fa89a66 4d4e37f4fd7f4dbbb25648ec639e0e43 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 20 14:57:08 compute-1 nova_compute[225855]: 2026-01-20 14:57:08.043 225859 DEBUG nova.virt.hardware [None req-7ceaa51e-5ae5-4572-8a7a-d5b83daa8d3a 168ca7898b964a44b76c90912fa89a66 4d4e37f4fd7f4dbbb25648ec639e0e43 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 20 14:57:08 compute-1 nova_compute[225855]: 2026-01-20 14:57:08.043 225859 DEBUG nova.virt.hardware [None req-7ceaa51e-5ae5-4572-8a7a-d5b83daa8d3a 168ca7898b964a44b76c90912fa89a66 4d4e37f4fd7f4dbbb25648ec639e0e43 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 20 14:57:08 compute-1 nova_compute[225855]: 2026-01-20 14:57:08.044 225859 DEBUG nova.virt.hardware [None req-7ceaa51e-5ae5-4572-8a7a-d5b83daa8d3a 168ca7898b964a44b76c90912fa89a66 4d4e37f4fd7f4dbbb25648ec639e0e43 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 20 14:57:08 compute-1 nova_compute[225855]: 2026-01-20 14:57:08.044 225859 DEBUG nova.virt.hardware [None req-7ceaa51e-5ae5-4572-8a7a-d5b83daa8d3a 168ca7898b964a44b76c90912fa89a66 4d4e37f4fd7f4dbbb25648ec639e0e43 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 20 14:57:08 compute-1 nova_compute[225855]: 2026-01-20 14:57:08.044 225859 DEBUG nova.virt.hardware [None req-7ceaa51e-5ae5-4572-8a7a-d5b83daa8d3a 168ca7898b964a44b76c90912fa89a66 4d4e37f4fd7f4dbbb25648ec639e0e43 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 20 14:57:08 compute-1 nova_compute[225855]: 2026-01-20 14:57:08.044 225859 DEBUG nova.virt.hardware [None req-7ceaa51e-5ae5-4572-8a7a-d5b83daa8d3a 168ca7898b964a44b76c90912fa89a66 4d4e37f4fd7f4dbbb25648ec639e0e43 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 20 14:57:08 compute-1 nova_compute[225855]: 2026-01-20 14:57:08.044 225859 DEBUG nova.virt.hardware [None req-7ceaa51e-5ae5-4572-8a7a-d5b83daa8d3a 168ca7898b964a44b76c90912fa89a66 4d4e37f4fd7f4dbbb25648ec639e0e43 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 20 14:57:08 compute-1 nova_compute[225855]: 2026-01-20 14:57:08.045 225859 DEBUG nova.virt.hardware [None req-7ceaa51e-5ae5-4572-8a7a-d5b83daa8d3a 168ca7898b964a44b76c90912fa89a66 4d4e37f4fd7f4dbbb25648ec639e0e43 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 20 14:57:08 compute-1 nova_compute[225855]: 2026-01-20 14:57:08.045 225859 DEBUG nova.objects.instance [None req-7ceaa51e-5ae5-4572-8a7a-d5b83daa8d3a 168ca7898b964a44b76c90912fa89a66 4d4e37f4fd7f4dbbb25648ec639e0e43 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 538fe1f0-b666-4b97-b2ef-317adae0a47a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:57:08 compute-1 nova_compute[225855]: 2026-01-20 14:57:08.065 225859 DEBUG oslo_concurrency.processutils [None req-7ceaa51e-5ae5-4572-8a7a-d5b83daa8d3a 168ca7898b964a44b76c90912fa89a66 4d4e37f4fd7f4dbbb25648ec639e0e43 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:57:08 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:57:08 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:57:08 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:57:08.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:57:08 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:57:08 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1167924893' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:57:08 compute-1 nova_compute[225855]: 2026-01-20 14:57:08.511 225859 DEBUG oslo_concurrency.processutils [None req-7ceaa51e-5ae5-4572-8a7a-d5b83daa8d3a 168ca7898b964a44b76c90912fa89a66 4d4e37f4fd7f4dbbb25648ec639e0e43 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:57:08 compute-1 nova_compute[225855]: 2026-01-20 14:57:08.512 225859 DEBUG oslo_concurrency.processutils [None req-7ceaa51e-5ae5-4572-8a7a-d5b83daa8d3a 168ca7898b964a44b76c90912fa89a66 4d4e37f4fd7f4dbbb25648ec639e0e43 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:57:08 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:57:08 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1449155061' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:57:08 compute-1 nova_compute[225855]: 2026-01-20 14:57:08.959 225859 DEBUG oslo_concurrency.processutils [None req-7ceaa51e-5ae5-4572-8a7a-d5b83daa8d3a 168ca7898b964a44b76c90912fa89a66 4d4e37f4fd7f4dbbb25648ec639e0e43 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:57:08 compute-1 nova_compute[225855]: 2026-01-20 14:57:08.961 225859 DEBUG oslo_concurrency.processutils [None req-7ceaa51e-5ae5-4572-8a7a-d5b83daa8d3a 168ca7898b964a44b76c90912fa89a66 4d4e37f4fd7f4dbbb25648ec639e0e43 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:57:09 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e313 e313: 3 total, 3 up, 3 in
Jan 20 14:57:09 compute-1 ceph-mon[81775]: pgmap v2143: 321 pgs: 321 active+clean; 405 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 430 KiB/s rd, 2.6 MiB/s wr, 146 op/s
Jan 20 14:57:09 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/1167924893' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:57:09 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/1449155061' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:57:09 compute-1 nova_compute[225855]: 2026-01-20 14:57:09.327 225859 DEBUG nova.compute.manager [req-d0791654-f534-423b-9e22-2cf86c20a94e req-1f6df6af-5915-476e-b99c-149056b6cdc7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 538fe1f0-b666-4b97-b2ef-317adae0a47a] Received event network-vif-plugged-6550efe7-7235-437c-b9f3-728b676371ee external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:57:09 compute-1 nova_compute[225855]: 2026-01-20 14:57:09.328 225859 DEBUG oslo_concurrency.lockutils [req-d0791654-f534-423b-9e22-2cf86c20a94e req-1f6df6af-5915-476e-b99c-149056b6cdc7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "538fe1f0-b666-4b97-b2ef-317adae0a47a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:57:09 compute-1 nova_compute[225855]: 2026-01-20 14:57:09.328 225859 DEBUG oslo_concurrency.lockutils [req-d0791654-f534-423b-9e22-2cf86c20a94e req-1f6df6af-5915-476e-b99c-149056b6cdc7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "538fe1f0-b666-4b97-b2ef-317adae0a47a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:57:09 compute-1 nova_compute[225855]: 2026-01-20 14:57:09.328 225859 DEBUG oslo_concurrency.lockutils [req-d0791654-f534-423b-9e22-2cf86c20a94e req-1f6df6af-5915-476e-b99c-149056b6cdc7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "538fe1f0-b666-4b97-b2ef-317adae0a47a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:57:09 compute-1 nova_compute[225855]: 2026-01-20 14:57:09.329 225859 DEBUG nova.compute.manager [req-d0791654-f534-423b-9e22-2cf86c20a94e req-1f6df6af-5915-476e-b99c-149056b6cdc7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 538fe1f0-b666-4b97-b2ef-317adae0a47a] No waiting events found dispatching network-vif-plugged-6550efe7-7235-437c-b9f3-728b676371ee pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:57:09 compute-1 nova_compute[225855]: 2026-01-20 14:57:09.329 225859 WARNING nova.compute.manager [req-d0791654-f534-423b-9e22-2cf86c20a94e req-1f6df6af-5915-476e-b99c-149056b6cdc7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 538fe1f0-b666-4b97-b2ef-317adae0a47a] Received unexpected event network-vif-plugged-6550efe7-7235-437c-b9f3-728b676371ee for instance with vm_state active and task_state rescuing.
Jan 20 14:57:09 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:57:09 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4079294048' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:57:09 compute-1 nova_compute[225855]: 2026-01-20 14:57:09.396 225859 DEBUG oslo_concurrency.processutils [None req-7ceaa51e-5ae5-4572-8a7a-d5b83daa8d3a 168ca7898b964a44b76c90912fa89a66 4d4e37f4fd7f4dbbb25648ec639e0e43 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:57:09 compute-1 nova_compute[225855]: 2026-01-20 14:57:09.398 225859 DEBUG nova.virt.libvirt.vif [None req-7ceaa51e-5ae5-4572-8a7a-d5b83daa8d3a 168ca7898b964a44b76c90912fa89a66 4d4e37f4fd7f4dbbb25648ec639e0e43 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T14:56:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueTestJSONUnderV235-server-430397789',display_name='tempest-ServerRescueTestJSONUnderV235-server-430397789',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverrescuetestjsonunderv235-server-430397789',id=134,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-20T14:56:50Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='4d4e37f4fd7f4dbbb25648ec639e0e43',ramdisk_id='',reservation_id='r-xjy771y2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_m
odel='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueTestJSONUnderV235-201664875',owner_user_name='tempest-ServerRescueTestJSONUnderV235-201664875-project-member'},tags=<?>,task_state='rescuing',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T14:56:50Z,user_data=None,user_id='168ca7898b964a44b76c90912fa89a66',uuid=538fe1f0-b666-4b97-b2ef-317adae0a47a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "6550efe7-7235-437c-b9f3-728b676371ee", "address": "fa:16:3e:e3:4f:ce", "network": {"id": "87caaa2e-d899-4eed-8b6a-8d19125c693b", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-347508957-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueTestJSONUnderV235-347508957-network", "vif_mac": "fa:16:3e:e3:4f:ce"}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "4d4e37f4fd7f4dbbb25648ec639e0e43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6550efe7-72", "ovs_interfaceid": "6550efe7-7235-437c-b9f3-728b676371ee", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 20 14:57:09 compute-1 nova_compute[225855]: 2026-01-20 14:57:09.398 225859 DEBUG nova.network.os_vif_util [None req-7ceaa51e-5ae5-4572-8a7a-d5b83daa8d3a 168ca7898b964a44b76c90912fa89a66 4d4e37f4fd7f4dbbb25648ec639e0e43 - - default default] Converting VIF {"id": "6550efe7-7235-437c-b9f3-728b676371ee", "address": "fa:16:3e:e3:4f:ce", "network": {"id": "87caaa2e-d899-4eed-8b6a-8d19125c693b", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-347508957-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueTestJSONUnderV235-347508957-network", "vif_mac": "fa:16:3e:e3:4f:ce"}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "4d4e37f4fd7f4dbbb25648ec639e0e43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6550efe7-72", "ovs_interfaceid": "6550efe7-7235-437c-b9f3-728b676371ee", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:57:09 compute-1 nova_compute[225855]: 2026-01-20 14:57:09.399 225859 DEBUG nova.network.os_vif_util [None req-7ceaa51e-5ae5-4572-8a7a-d5b83daa8d3a 168ca7898b964a44b76c90912fa89a66 4d4e37f4fd7f4dbbb25648ec639e0e43 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:e3:4f:ce,bridge_name='br-int',has_traffic_filtering=True,id=6550efe7-7235-437c-b9f3-728b676371ee,network=Network(87caaa2e-d899-4eed-8b6a-8d19125c693b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6550efe7-72') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:57:09 compute-1 nova_compute[225855]: 2026-01-20 14:57:09.400 225859 DEBUG nova.objects.instance [None req-7ceaa51e-5ae5-4572-8a7a-d5b83daa8d3a 168ca7898b964a44b76c90912fa89a66 4d4e37f4fd7f4dbbb25648ec639e0e43 - - default default] Lazy-loading 'pci_devices' on Instance uuid 538fe1f0-b666-4b97-b2ef-317adae0a47a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:57:09 compute-1 nova_compute[225855]: 2026-01-20 14:57:09.413 225859 DEBUG nova.virt.libvirt.driver [None req-7ceaa51e-5ae5-4572-8a7a-d5b83daa8d3a 168ca7898b964a44b76c90912fa89a66 4d4e37f4fd7f4dbbb25648ec639e0e43 - - default default] [instance: 538fe1f0-b666-4b97-b2ef-317adae0a47a] End _get_guest_xml xml=<domain type="kvm">
Jan 20 14:57:09 compute-1 nova_compute[225855]:   <uuid>538fe1f0-b666-4b97-b2ef-317adae0a47a</uuid>
Jan 20 14:57:09 compute-1 nova_compute[225855]:   <name>instance-00000086</name>
Jan 20 14:57:09 compute-1 nova_compute[225855]:   <memory>131072</memory>
Jan 20 14:57:09 compute-1 nova_compute[225855]:   <vcpu>1</vcpu>
Jan 20 14:57:09 compute-1 nova_compute[225855]:   <metadata>
Jan 20 14:57:09 compute-1 nova_compute[225855]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 14:57:09 compute-1 nova_compute[225855]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 14:57:09 compute-1 nova_compute[225855]:       <nova:name>tempest-ServerRescueTestJSONUnderV235-server-430397789</nova:name>
Jan 20 14:57:09 compute-1 nova_compute[225855]:       <nova:creationTime>2026-01-20 14:57:08</nova:creationTime>
Jan 20 14:57:09 compute-1 nova_compute[225855]:       <nova:flavor name="m1.nano">
Jan 20 14:57:09 compute-1 nova_compute[225855]:         <nova:memory>128</nova:memory>
Jan 20 14:57:09 compute-1 nova_compute[225855]:         <nova:disk>1</nova:disk>
Jan 20 14:57:09 compute-1 nova_compute[225855]:         <nova:swap>0</nova:swap>
Jan 20 14:57:09 compute-1 nova_compute[225855]:         <nova:ephemeral>0</nova:ephemeral>
Jan 20 14:57:09 compute-1 nova_compute[225855]:         <nova:vcpus>1</nova:vcpus>
Jan 20 14:57:09 compute-1 nova_compute[225855]:       </nova:flavor>
Jan 20 14:57:09 compute-1 nova_compute[225855]:       <nova:owner>
Jan 20 14:57:09 compute-1 nova_compute[225855]:         <nova:user uuid="168ca7898b964a44b76c90912fa89a66">tempest-ServerRescueTestJSONUnderV235-201664875-project-member</nova:user>
Jan 20 14:57:09 compute-1 nova_compute[225855]:         <nova:project uuid="4d4e37f4fd7f4dbbb25648ec639e0e43">tempest-ServerRescueTestJSONUnderV235-201664875</nova:project>
Jan 20 14:57:09 compute-1 nova_compute[225855]:       </nova:owner>
Jan 20 14:57:09 compute-1 nova_compute[225855]:       <nova:root type="image" uuid="a32b3e07-16d8-46fd-9a7b-c242c432fcf9"/>
Jan 20 14:57:09 compute-1 nova_compute[225855]:       <nova:ports>
Jan 20 14:57:09 compute-1 nova_compute[225855]:         <nova:port uuid="6550efe7-7235-437c-b9f3-728b676371ee">
Jan 20 14:57:09 compute-1 nova_compute[225855]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Jan 20 14:57:09 compute-1 nova_compute[225855]:         </nova:port>
Jan 20 14:57:09 compute-1 nova_compute[225855]:       </nova:ports>
Jan 20 14:57:09 compute-1 nova_compute[225855]:     </nova:instance>
Jan 20 14:57:09 compute-1 nova_compute[225855]:   </metadata>
Jan 20 14:57:09 compute-1 nova_compute[225855]:   <sysinfo type="smbios">
Jan 20 14:57:09 compute-1 nova_compute[225855]:     <system>
Jan 20 14:57:09 compute-1 nova_compute[225855]:       <entry name="manufacturer">RDO</entry>
Jan 20 14:57:09 compute-1 nova_compute[225855]:       <entry name="product">OpenStack Compute</entry>
Jan 20 14:57:09 compute-1 nova_compute[225855]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 14:57:09 compute-1 nova_compute[225855]:       <entry name="serial">538fe1f0-b666-4b97-b2ef-317adae0a47a</entry>
Jan 20 14:57:09 compute-1 nova_compute[225855]:       <entry name="uuid">538fe1f0-b666-4b97-b2ef-317adae0a47a</entry>
Jan 20 14:57:09 compute-1 nova_compute[225855]:       <entry name="family">Virtual Machine</entry>
Jan 20 14:57:09 compute-1 nova_compute[225855]:     </system>
Jan 20 14:57:09 compute-1 nova_compute[225855]:   </sysinfo>
Jan 20 14:57:09 compute-1 nova_compute[225855]:   <os>
Jan 20 14:57:09 compute-1 nova_compute[225855]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 20 14:57:09 compute-1 nova_compute[225855]:     <smbios mode="sysinfo"/>
Jan 20 14:57:09 compute-1 nova_compute[225855]:   </os>
Jan 20 14:57:09 compute-1 nova_compute[225855]:   <features>
Jan 20 14:57:09 compute-1 nova_compute[225855]:     <acpi/>
Jan 20 14:57:09 compute-1 nova_compute[225855]:     <apic/>
Jan 20 14:57:09 compute-1 nova_compute[225855]:     <vmcoreinfo/>
Jan 20 14:57:09 compute-1 nova_compute[225855]:   </features>
Jan 20 14:57:09 compute-1 nova_compute[225855]:   <clock offset="utc">
Jan 20 14:57:09 compute-1 nova_compute[225855]:     <timer name="pit" tickpolicy="delay"/>
Jan 20 14:57:09 compute-1 nova_compute[225855]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 20 14:57:09 compute-1 nova_compute[225855]:     <timer name="hpet" present="no"/>
Jan 20 14:57:09 compute-1 nova_compute[225855]:   </clock>
Jan 20 14:57:09 compute-1 nova_compute[225855]:   <cpu mode="custom" match="exact">
Jan 20 14:57:09 compute-1 nova_compute[225855]:     <model>Nehalem</model>
Jan 20 14:57:09 compute-1 nova_compute[225855]:     <topology sockets="1" cores="1" threads="1"/>
Jan 20 14:57:09 compute-1 nova_compute[225855]:   </cpu>
Jan 20 14:57:09 compute-1 nova_compute[225855]:   <devices>
Jan 20 14:57:09 compute-1 nova_compute[225855]:     <disk type="network" device="disk">
Jan 20 14:57:09 compute-1 nova_compute[225855]:       <driver type="raw" cache="none"/>
Jan 20 14:57:09 compute-1 nova_compute[225855]:       <source protocol="rbd" name="vms/538fe1f0-b666-4b97-b2ef-317adae0a47a_disk.rescue">
Jan 20 14:57:09 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:57:09 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:57:09 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:57:09 compute-1 nova_compute[225855]:       </source>
Jan 20 14:57:09 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 14:57:09 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:57:09 compute-1 nova_compute[225855]:       </auth>
Jan 20 14:57:09 compute-1 nova_compute[225855]:       <target dev="vda" bus="virtio"/>
Jan 20 14:57:09 compute-1 nova_compute[225855]:     </disk>
Jan 20 14:57:09 compute-1 nova_compute[225855]:     <disk type="network" device="disk">
Jan 20 14:57:09 compute-1 nova_compute[225855]:       <driver type="raw" cache="none"/>
Jan 20 14:57:09 compute-1 nova_compute[225855]:       <source protocol="rbd" name="vms/538fe1f0-b666-4b97-b2ef-317adae0a47a_disk">
Jan 20 14:57:09 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:57:09 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:57:09 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:57:09 compute-1 nova_compute[225855]:       </source>
Jan 20 14:57:09 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 14:57:09 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:57:09 compute-1 nova_compute[225855]:       </auth>
Jan 20 14:57:09 compute-1 nova_compute[225855]:       <target dev="vdb" bus="virtio"/>
Jan 20 14:57:09 compute-1 nova_compute[225855]:     </disk>
Jan 20 14:57:09 compute-1 nova_compute[225855]:     <disk type="network" device="cdrom">
Jan 20 14:57:09 compute-1 nova_compute[225855]:       <driver type="raw" cache="none"/>
Jan 20 14:57:09 compute-1 nova_compute[225855]:       <source protocol="rbd" name="vms/538fe1f0-b666-4b97-b2ef-317adae0a47a_disk.config.rescue">
Jan 20 14:57:09 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:57:09 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:57:09 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:57:09 compute-1 nova_compute[225855]:       </source>
Jan 20 14:57:09 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 14:57:09 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:57:09 compute-1 nova_compute[225855]:       </auth>
Jan 20 14:57:09 compute-1 nova_compute[225855]:       <target dev="sda" bus="sata"/>
Jan 20 14:57:09 compute-1 nova_compute[225855]:     </disk>
Jan 20 14:57:09 compute-1 nova_compute[225855]:     <interface type="ethernet">
Jan 20 14:57:09 compute-1 nova_compute[225855]:       <mac address="fa:16:3e:e3:4f:ce"/>
Jan 20 14:57:09 compute-1 nova_compute[225855]:       <model type="virtio"/>
Jan 20 14:57:09 compute-1 nova_compute[225855]:       <driver name="vhost" rx_queue_size="512"/>
Jan 20 14:57:09 compute-1 nova_compute[225855]:       <mtu size="1442"/>
Jan 20 14:57:09 compute-1 nova_compute[225855]:       <target dev="tap6550efe7-72"/>
Jan 20 14:57:09 compute-1 nova_compute[225855]:     </interface>
Jan 20 14:57:09 compute-1 nova_compute[225855]:     <serial type="pty">
Jan 20 14:57:09 compute-1 nova_compute[225855]:       <log file="/var/lib/nova/instances/538fe1f0-b666-4b97-b2ef-317adae0a47a/console.log" append="off"/>
Jan 20 14:57:09 compute-1 nova_compute[225855]:     </serial>
Jan 20 14:57:09 compute-1 nova_compute[225855]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 20 14:57:09 compute-1 nova_compute[225855]:     <video>
Jan 20 14:57:09 compute-1 nova_compute[225855]:       <model type="virtio"/>
Jan 20 14:57:09 compute-1 nova_compute[225855]:     </video>
Jan 20 14:57:09 compute-1 nova_compute[225855]:     <input type="tablet" bus="usb"/>
Jan 20 14:57:09 compute-1 nova_compute[225855]:     <rng model="virtio">
Jan 20 14:57:09 compute-1 nova_compute[225855]:       <backend model="random">/dev/urandom</backend>
Jan 20 14:57:09 compute-1 nova_compute[225855]:     </rng>
Jan 20 14:57:09 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root"/>
Jan 20 14:57:09 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:57:09 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:57:09 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:57:09 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:57:09 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:57:09 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:57:09 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:57:09 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:57:09 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:57:09 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:57:09 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:57:09 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:57:09 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:57:09 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:57:09 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:57:09 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:57:09 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:57:09 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:57:09 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:57:09 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:57:09 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:57:09 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:57:09 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:57:09 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:57:09 compute-1 nova_compute[225855]:     <controller type="usb" index="0"/>
Jan 20 14:57:09 compute-1 nova_compute[225855]:     <memballoon model="virtio">
Jan 20 14:57:09 compute-1 nova_compute[225855]:       <stats period="10"/>
Jan 20 14:57:09 compute-1 nova_compute[225855]:     </memballoon>
Jan 20 14:57:09 compute-1 nova_compute[225855]:   </devices>
Jan 20 14:57:09 compute-1 nova_compute[225855]: </domain>
Jan 20 14:57:09 compute-1 nova_compute[225855]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 20 14:57:09 compute-1 nova_compute[225855]: 2026-01-20 14:57:09.422 225859 INFO nova.virt.libvirt.driver [-] [instance: 538fe1f0-b666-4b97-b2ef-317adae0a47a] Instance destroyed successfully.
Jan 20 14:57:09 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:57:09 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:57:09 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:57:09.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:57:09 compute-1 nova_compute[225855]: 2026-01-20 14:57:09.475 225859 DEBUG nova.virt.libvirt.driver [None req-7ceaa51e-5ae5-4572-8a7a-d5b83daa8d3a 168ca7898b964a44b76c90912fa89a66 4d4e37f4fd7f4dbbb25648ec639e0e43 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:57:09 compute-1 nova_compute[225855]: 2026-01-20 14:57:09.476 225859 DEBUG nova.virt.libvirt.driver [None req-7ceaa51e-5ae5-4572-8a7a-d5b83daa8d3a 168ca7898b964a44b76c90912fa89a66 4d4e37f4fd7f4dbbb25648ec639e0e43 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:57:09 compute-1 nova_compute[225855]: 2026-01-20 14:57:09.476 225859 DEBUG nova.virt.libvirt.driver [None req-7ceaa51e-5ae5-4572-8a7a-d5b83daa8d3a 168ca7898b964a44b76c90912fa89a66 4d4e37f4fd7f4dbbb25648ec639e0e43 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:57:09 compute-1 nova_compute[225855]: 2026-01-20 14:57:09.476 225859 DEBUG nova.virt.libvirt.driver [None req-7ceaa51e-5ae5-4572-8a7a-d5b83daa8d3a 168ca7898b964a44b76c90912fa89a66 4d4e37f4fd7f4dbbb25648ec639e0e43 - - default default] No VIF found with MAC fa:16:3e:e3:4f:ce, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 20 14:57:09 compute-1 nova_compute[225855]: 2026-01-20 14:57:09.476 225859 INFO nova.virt.libvirt.driver [None req-7ceaa51e-5ae5-4572-8a7a-d5b83daa8d3a 168ca7898b964a44b76c90912fa89a66 4d4e37f4fd7f4dbbb25648ec639e0e43 - - default default] [instance: 538fe1f0-b666-4b97-b2ef-317adae0a47a] Using config drive
Jan 20 14:57:09 compute-1 nova_compute[225855]: 2026-01-20 14:57:09.504 225859 DEBUG nova.storage.rbd_utils [None req-7ceaa51e-5ae5-4572-8a7a-d5b83daa8d3a 168ca7898b964a44b76c90912fa89a66 4d4e37f4fd7f4dbbb25648ec639e0e43 - - default default] rbd image 538fe1f0-b666-4b97-b2ef-317adae0a47a_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:57:09 compute-1 nova_compute[225855]: 2026-01-20 14:57:09.521 225859 DEBUG nova.objects.instance [None req-7ceaa51e-5ae5-4572-8a7a-d5b83daa8d3a 168ca7898b964a44b76c90912fa89a66 4d4e37f4fd7f4dbbb25648ec639e0e43 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 538fe1f0-b666-4b97-b2ef-317adae0a47a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:57:09 compute-1 nova_compute[225855]: 2026-01-20 14:57:09.561 225859 DEBUG nova.objects.instance [None req-7ceaa51e-5ae5-4572-8a7a-d5b83daa8d3a 168ca7898b964a44b76c90912fa89a66 4d4e37f4fd7f4dbbb25648ec639e0e43 - - default default] Lazy-loading 'keypairs' on Instance uuid 538fe1f0-b666-4b97-b2ef-317adae0a47a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:57:09 compute-1 nova_compute[225855]: 2026-01-20 14:57:09.716 225859 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1768921014.7143466, 3426109c-5671-4cc7-89b6-fea13983f921 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:57:09 compute-1 nova_compute[225855]: 2026-01-20 14:57:09.717 225859 INFO nova.compute.manager [-] [instance: 3426109c-5671-4cc7-89b6-fea13983f921] VM Stopped (Lifecycle Event)
Jan 20 14:57:09 compute-1 nova_compute[225855]: 2026-01-20 14:57:09.732 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:57:09 compute-1 nova_compute[225855]: 2026-01-20 14:57:09.737 225859 DEBUG nova.compute.manager [None req-5fa53512-648d-49fd-8780-c95b446ec4b7 - - - - - -] [instance: 3426109c-5671-4cc7-89b6-fea13983f921] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:57:10 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:57:10 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:57:10 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:57:10.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:57:10 compute-1 ceph-mon[81775]: osdmap e313: 3 total, 3 up, 3 in
Jan 20 14:57:10 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/4079294048' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:57:10 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/1771001101' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:57:10 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e314 e314: 3 total, 3 up, 3 in
Jan 20 14:57:10 compute-1 nova_compute[225855]: 2026-01-20 14:57:10.212 225859 INFO nova.virt.libvirt.driver [None req-7ceaa51e-5ae5-4572-8a7a-d5b83daa8d3a 168ca7898b964a44b76c90912fa89a66 4d4e37f4fd7f4dbbb25648ec639e0e43 - - default default] [instance: 538fe1f0-b666-4b97-b2ef-317adae0a47a] Creating config drive at /var/lib/nova/instances/538fe1f0-b666-4b97-b2ef-317adae0a47a/disk.config.rescue
Jan 20 14:57:10 compute-1 nova_compute[225855]: 2026-01-20 14:57:10.217 225859 DEBUG oslo_concurrency.processutils [None req-7ceaa51e-5ae5-4572-8a7a-d5b83daa8d3a 168ca7898b964a44b76c90912fa89a66 4d4e37f4fd7f4dbbb25648ec639e0e43 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/538fe1f0-b666-4b97-b2ef-317adae0a47a/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpdl7b9i4x execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:57:10 compute-1 nova_compute[225855]: 2026-01-20 14:57:10.354 225859 DEBUG oslo_concurrency.processutils [None req-7ceaa51e-5ae5-4572-8a7a-d5b83daa8d3a 168ca7898b964a44b76c90912fa89a66 4d4e37f4fd7f4dbbb25648ec639e0e43 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/538fe1f0-b666-4b97-b2ef-317adae0a47a/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpdl7b9i4x" returned: 0 in 0.137s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:57:10 compute-1 nova_compute[225855]: 2026-01-20 14:57:10.395 225859 DEBUG nova.storage.rbd_utils [None req-7ceaa51e-5ae5-4572-8a7a-d5b83daa8d3a 168ca7898b964a44b76c90912fa89a66 4d4e37f4fd7f4dbbb25648ec639e0e43 - - default default] rbd image 538fe1f0-b666-4b97-b2ef-317adae0a47a_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:57:10 compute-1 nova_compute[225855]: 2026-01-20 14:57:10.399 225859 DEBUG oslo_concurrency.processutils [None req-7ceaa51e-5ae5-4572-8a7a-d5b83daa8d3a 168ca7898b964a44b76c90912fa89a66 4d4e37f4fd7f4dbbb25648ec639e0e43 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/538fe1f0-b666-4b97-b2ef-317adae0a47a/disk.config.rescue 538fe1f0-b666-4b97-b2ef-317adae0a47a_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:57:10 compute-1 nova_compute[225855]: 2026-01-20 14:57:10.721 225859 DEBUG oslo_concurrency.processutils [None req-7ceaa51e-5ae5-4572-8a7a-d5b83daa8d3a 168ca7898b964a44b76c90912fa89a66 4d4e37f4fd7f4dbbb25648ec639e0e43 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/538fe1f0-b666-4b97-b2ef-317adae0a47a/disk.config.rescue 538fe1f0-b666-4b97-b2ef-317adae0a47a_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.322s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:57:10 compute-1 nova_compute[225855]: 2026-01-20 14:57:10.722 225859 INFO nova.virt.libvirt.driver [None req-7ceaa51e-5ae5-4572-8a7a-d5b83daa8d3a 168ca7898b964a44b76c90912fa89a66 4d4e37f4fd7f4dbbb25648ec639e0e43 - - default default] [instance: 538fe1f0-b666-4b97-b2ef-317adae0a47a] Deleting local config drive /var/lib/nova/instances/538fe1f0-b666-4b97-b2ef-317adae0a47a/disk.config.rescue because it was imported into RBD.
Jan 20 14:57:10 compute-1 kernel: tap6550efe7-72: entered promiscuous mode
Jan 20 14:57:10 compute-1 NetworkManager[49104]: <info>  [1768921030.7846] manager: (tap6550efe7-72): new Tun device (/org/freedesktop/NetworkManager/Devices/234)
Jan 20 14:57:10 compute-1 ovn_controller[130490]: 2026-01-20T14:57:10Z|00547|binding|INFO|Claiming lport 6550efe7-7235-437c-b9f3-728b676371ee for this chassis.
Jan 20 14:57:10 compute-1 ovn_controller[130490]: 2026-01-20T14:57:10Z|00548|binding|INFO|6550efe7-7235-437c-b9f3-728b676371ee: Claiming fa:16:3e:e3:4f:ce 10.100.0.3
Jan 20 14:57:10 compute-1 nova_compute[225855]: 2026-01-20 14:57:10.786 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:57:10 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:57:10.794 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e3:4f:ce 10.100.0.3'], port_security=['fa:16:3e:e3:4f:ce 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '538fe1f0-b666-4b97-b2ef-317adae0a47a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-87caaa2e-d899-4eed-8b6a-8d19125c693b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4d4e37f4fd7f4dbbb25648ec639e0e43', 'neutron:revision_number': '5', 'neutron:security_group_ids': '16f3c0d4-753e-4c8b-b00a-7073cbcfa6dc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=be68bfdf-b1f2-46c8-82b2-2c275774a706, chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], logical_port=6550efe7-7235-437c-b9f3-728b676371ee) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:57:10 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:57:10.795 140354 INFO neutron.agent.ovn.metadata.agent [-] Port 6550efe7-7235-437c-b9f3-728b676371ee in datapath 87caaa2e-d899-4eed-8b6a-8d19125c693b bound to our chassis
Jan 20 14:57:10 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:57:10.796 140354 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 87caaa2e-d899-4eed-8b6a-8d19125c693b or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Jan 20 14:57:10 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:57:10.797 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[32888d90-943a-4fa6-a682-c2451340a758]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:57:10 compute-1 ovn_controller[130490]: 2026-01-20T14:57:10Z|00549|binding|INFO|Setting lport 6550efe7-7235-437c-b9f3-728b676371ee up in Southbound
Jan 20 14:57:10 compute-1 ovn_controller[130490]: 2026-01-20T14:57:10Z|00550|binding|INFO|Setting lport 6550efe7-7235-437c-b9f3-728b676371ee ovn-installed in OVS
Jan 20 14:57:10 compute-1 nova_compute[225855]: 2026-01-20 14:57:10.809 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:57:10 compute-1 nova_compute[225855]: 2026-01-20 14:57:10.814 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:57:10 compute-1 systemd-udevd[279297]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 14:57:10 compute-1 systemd-machined[194361]: New machine qemu-65-instance-00000086.
Jan 20 14:57:10 compute-1 NetworkManager[49104]: <info>  [1768921030.8361] device (tap6550efe7-72): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 20 14:57:10 compute-1 NetworkManager[49104]: <info>  [1768921030.8369] device (tap6550efe7-72): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 20 14:57:10 compute-1 systemd[1]: Started Virtual Machine qemu-65-instance-00000086.
Jan 20 14:57:11 compute-1 ceph-mon[81775]: pgmap v2145: 321 pgs: 321 active+clean; 414 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 438 KiB/s rd, 3.2 MiB/s wr, 149 op/s
Jan 20 14:57:11 compute-1 ceph-mon[81775]: osdmap e314: 3 total, 3 up, 3 in
Jan 20 14:57:11 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e315 e315: 3 total, 3 up, 3 in
Jan 20 14:57:11 compute-1 nova_compute[225855]: 2026-01-20 14:57:11.253 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:57:11 compute-1 nova_compute[225855]: 2026-01-20 14:57:11.258 225859 DEBUG nova.virt.libvirt.host [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Removed pending event for 538fe1f0-b666-4b97-b2ef-317adae0a47a due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Jan 20 14:57:11 compute-1 nova_compute[225855]: 2026-01-20 14:57:11.258 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768921031.2578287, 538fe1f0-b666-4b97-b2ef-317adae0a47a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:57:11 compute-1 nova_compute[225855]: 2026-01-20 14:57:11.259 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 538fe1f0-b666-4b97-b2ef-317adae0a47a] VM Resumed (Lifecycle Event)
Jan 20 14:57:11 compute-1 nova_compute[225855]: 2026-01-20 14:57:11.263 225859 DEBUG nova.compute.manager [None req-7ceaa51e-5ae5-4572-8a7a-d5b83daa8d3a 168ca7898b964a44b76c90912fa89a66 4d4e37f4fd7f4dbbb25648ec639e0e43 - - default default] [instance: 538fe1f0-b666-4b97-b2ef-317adae0a47a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:57:11 compute-1 nova_compute[225855]: 2026-01-20 14:57:11.298 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 538fe1f0-b666-4b97-b2ef-317adae0a47a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:57:11 compute-1 nova_compute[225855]: 2026-01-20 14:57:11.301 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 538fe1f0-b666-4b97-b2ef-317adae0a47a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:57:11 compute-1 nova_compute[225855]: 2026-01-20 14:57:11.352 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 538fe1f0-b666-4b97-b2ef-317adae0a47a] During sync_power_state the instance has a pending task (rescuing). Skip.
Jan 20 14:57:11 compute-1 nova_compute[225855]: 2026-01-20 14:57:11.353 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768921031.258838, 538fe1f0-b666-4b97-b2ef-317adae0a47a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:57:11 compute-1 nova_compute[225855]: 2026-01-20 14:57:11.353 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 538fe1f0-b666-4b97-b2ef-317adae0a47a] VM Started (Lifecycle Event)
Jan 20 14:57:11 compute-1 nova_compute[225855]: 2026-01-20 14:57:11.374 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 538fe1f0-b666-4b97-b2ef-317adae0a47a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:57:11 compute-1 nova_compute[225855]: 2026-01-20 14:57:11.378 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 538fe1f0-b666-4b97-b2ef-317adae0a47a] Synchronizing instance power state after lifecycle event "Started"; current vm_state: rescued, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:57:11 compute-1 nova_compute[225855]: 2026-01-20 14:57:11.450 225859 DEBUG nova.compute.manager [req-841e0443-306c-46ab-9816-902cb5dc85c4 req-8377e3d0-3687-417d-b697-c2226b0e2fbd 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 538fe1f0-b666-4b97-b2ef-317adae0a47a] Received event network-vif-plugged-6550efe7-7235-437c-b9f3-728b676371ee external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:57:11 compute-1 nova_compute[225855]: 2026-01-20 14:57:11.450 225859 DEBUG oslo_concurrency.lockutils [req-841e0443-306c-46ab-9816-902cb5dc85c4 req-8377e3d0-3687-417d-b697-c2226b0e2fbd 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "538fe1f0-b666-4b97-b2ef-317adae0a47a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:57:11 compute-1 nova_compute[225855]: 2026-01-20 14:57:11.451 225859 DEBUG oslo_concurrency.lockutils [req-841e0443-306c-46ab-9816-902cb5dc85c4 req-8377e3d0-3687-417d-b697-c2226b0e2fbd 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "538fe1f0-b666-4b97-b2ef-317adae0a47a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:57:11 compute-1 nova_compute[225855]: 2026-01-20 14:57:11.451 225859 DEBUG oslo_concurrency.lockutils [req-841e0443-306c-46ab-9816-902cb5dc85c4 req-8377e3d0-3687-417d-b697-c2226b0e2fbd 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "538fe1f0-b666-4b97-b2ef-317adae0a47a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:57:11 compute-1 nova_compute[225855]: 2026-01-20 14:57:11.451 225859 DEBUG nova.compute.manager [req-841e0443-306c-46ab-9816-902cb5dc85c4 req-8377e3d0-3687-417d-b697-c2226b0e2fbd 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 538fe1f0-b666-4b97-b2ef-317adae0a47a] No waiting events found dispatching network-vif-plugged-6550efe7-7235-437c-b9f3-728b676371ee pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:57:11 compute-1 nova_compute[225855]: 2026-01-20 14:57:11.451 225859 WARNING nova.compute.manager [req-841e0443-306c-46ab-9816-902cb5dc85c4 req-8377e3d0-3687-417d-b697-c2226b0e2fbd 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 538fe1f0-b666-4b97-b2ef-317adae0a47a] Received unexpected event network-vif-plugged-6550efe7-7235-437c-b9f3-728b676371ee for instance with vm_state rescued and task_state None.
Jan 20 14:57:11 compute-1 nova_compute[225855]: 2026-01-20 14:57:11.452 225859 DEBUG nova.compute.manager [req-841e0443-306c-46ab-9816-902cb5dc85c4 req-8377e3d0-3687-417d-b697-c2226b0e2fbd 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 538fe1f0-b666-4b97-b2ef-317adae0a47a] Received event network-vif-plugged-6550efe7-7235-437c-b9f3-728b676371ee external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:57:11 compute-1 nova_compute[225855]: 2026-01-20 14:57:11.452 225859 DEBUG oslo_concurrency.lockutils [req-841e0443-306c-46ab-9816-902cb5dc85c4 req-8377e3d0-3687-417d-b697-c2226b0e2fbd 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "538fe1f0-b666-4b97-b2ef-317adae0a47a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:57:11 compute-1 nova_compute[225855]: 2026-01-20 14:57:11.452 225859 DEBUG oslo_concurrency.lockutils [req-841e0443-306c-46ab-9816-902cb5dc85c4 req-8377e3d0-3687-417d-b697-c2226b0e2fbd 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "538fe1f0-b666-4b97-b2ef-317adae0a47a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:57:11 compute-1 nova_compute[225855]: 2026-01-20 14:57:11.453 225859 DEBUG oslo_concurrency.lockutils [req-841e0443-306c-46ab-9816-902cb5dc85c4 req-8377e3d0-3687-417d-b697-c2226b0e2fbd 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "538fe1f0-b666-4b97-b2ef-317adae0a47a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:57:11 compute-1 nova_compute[225855]: 2026-01-20 14:57:11.453 225859 DEBUG nova.compute.manager [req-841e0443-306c-46ab-9816-902cb5dc85c4 req-8377e3d0-3687-417d-b697-c2226b0e2fbd 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 538fe1f0-b666-4b97-b2ef-317adae0a47a] No waiting events found dispatching network-vif-plugged-6550efe7-7235-437c-b9f3-728b676371ee pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:57:11 compute-1 nova_compute[225855]: 2026-01-20 14:57:11.453 225859 WARNING nova.compute.manager [req-841e0443-306c-46ab-9816-902cb5dc85c4 req-8377e3d0-3687-417d-b697-c2226b0e2fbd 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 538fe1f0-b666-4b97-b2ef-317adae0a47a] Received unexpected event network-vif-plugged-6550efe7-7235-437c-b9f3-728b676371ee for instance with vm_state rescued and task_state None.
Jan 20 14:57:11 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:57:11 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:57:11 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:57:11.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:57:12 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:57:12 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:57:12 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:57:12.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:57:12 compute-1 ceph-mon[81775]: osdmap e315: 3 total, 3 up, 3 in
Jan 20 14:57:12 compute-1 ceph-mon[81775]: pgmap v2148: 321 pgs: 321 active+clean; 449 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 4.9 MiB/s rd, 3.5 MiB/s wr, 61 op/s
Jan 20 14:57:12 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:57:13 compute-1 sudo[279367]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:57:13 compute-1 sudo[279367]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:57:13 compute-1 sudo[279367]: pam_unix(sudo:session): session closed for user root
Jan 20 14:57:13 compute-1 sudo[279392]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:57:13 compute-1 sudo[279392]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:57:13 compute-1 sudo[279392]: pam_unix(sudo:session): session closed for user root
Jan 20 14:57:13 compute-1 nova_compute[225855]: 2026-01-20 14:57:13.310 225859 DEBUG nova.compute.manager [req-a1088e4c-91c6-4fec-87ff-b6e6b2a182a1 req-9d56062e-3a50-4a93-808d-6c531ff0647b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 538fe1f0-b666-4b97-b2ef-317adae0a47a] Received event network-changed-6550efe7-7235-437c-b9f3-728b676371ee external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:57:13 compute-1 nova_compute[225855]: 2026-01-20 14:57:13.310 225859 DEBUG nova.compute.manager [req-a1088e4c-91c6-4fec-87ff-b6e6b2a182a1 req-9d56062e-3a50-4a93-808d-6c531ff0647b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 538fe1f0-b666-4b97-b2ef-317adae0a47a] Refreshing instance network info cache due to event network-changed-6550efe7-7235-437c-b9f3-728b676371ee. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 14:57:13 compute-1 nova_compute[225855]: 2026-01-20 14:57:13.310 225859 DEBUG oslo_concurrency.lockutils [req-a1088e4c-91c6-4fec-87ff-b6e6b2a182a1 req-9d56062e-3a50-4a93-808d-6c531ff0647b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-538fe1f0-b666-4b97-b2ef-317adae0a47a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:57:13 compute-1 nova_compute[225855]: 2026-01-20 14:57:13.311 225859 DEBUG oslo_concurrency.lockutils [req-a1088e4c-91c6-4fec-87ff-b6e6b2a182a1 req-9d56062e-3a50-4a93-808d-6c531ff0647b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-538fe1f0-b666-4b97-b2ef-317adae0a47a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:57:13 compute-1 nova_compute[225855]: 2026-01-20 14:57:13.311 225859 DEBUG nova.network.neutron [req-a1088e4c-91c6-4fec-87ff-b6e6b2a182a1 req-9d56062e-3a50-4a93-808d-6c531ff0647b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 538fe1f0-b666-4b97-b2ef-317adae0a47a] Refreshing network info cache for port 6550efe7-7235-437c-b9f3-728b676371ee _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 14:57:13 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:57:13 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:57:13 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:57:13.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:57:13 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 20 14:57:13 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1115444861' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 14:57:13 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 20 14:57:13 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1115444861' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 14:57:13 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/1115444861' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 14:57:13 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/1115444861' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 14:57:14 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:57:14 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:57:14 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:57:14.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:57:14 compute-1 nova_compute[225855]: 2026-01-20 14:57:14.609 225859 DEBUG nova.compute.manager [req-1bbe517c-a35b-4cba-ab38-ff961cf7497c req-2f512ad2-cbe5-418e-b8fd-28913ee6540f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 538fe1f0-b666-4b97-b2ef-317adae0a47a] Received event network-changed-6550efe7-7235-437c-b9f3-728b676371ee external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:57:14 compute-1 nova_compute[225855]: 2026-01-20 14:57:14.609 225859 DEBUG nova.compute.manager [req-1bbe517c-a35b-4cba-ab38-ff961cf7497c req-2f512ad2-cbe5-418e-b8fd-28913ee6540f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 538fe1f0-b666-4b97-b2ef-317adae0a47a] Refreshing instance network info cache due to event network-changed-6550efe7-7235-437c-b9f3-728b676371ee. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 14:57:14 compute-1 nova_compute[225855]: 2026-01-20 14:57:14.610 225859 DEBUG oslo_concurrency.lockutils [req-1bbe517c-a35b-4cba-ab38-ff961cf7497c req-2f512ad2-cbe5-418e-b8fd-28913ee6540f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-538fe1f0-b666-4b97-b2ef-317adae0a47a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:57:14 compute-1 ceph-mon[81775]: pgmap v2149: 321 pgs: 321 active+clean; 550 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 10 MiB/s rd, 13 MiB/s wr, 320 op/s
Jan 20 14:57:14 compute-1 nova_compute[225855]: 2026-01-20 14:57:14.681 225859 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1768921019.6797974, e32ecf59-145a-4ae9-a91e-288419407cd0 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:57:14 compute-1 nova_compute[225855]: 2026-01-20 14:57:14.681 225859 INFO nova.compute.manager [-] [instance: e32ecf59-145a-4ae9-a91e-288419407cd0] VM Stopped (Lifecycle Event)
Jan 20 14:57:14 compute-1 nova_compute[225855]: 2026-01-20 14:57:14.700 225859 DEBUG nova.compute.manager [None req-e2df0b17-f31b-430c-9f6f-ad4fd589e421 - - - - - -] [instance: e32ecf59-145a-4ae9-a91e-288419407cd0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:57:14 compute-1 nova_compute[225855]: 2026-01-20 14:57:14.734 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:57:15 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:57:15 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:57:15 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:57:15.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:57:16 compute-1 podman[279419]: 2026-01-20 14:57:16.048163115 +0000 UTC m=+0.096572365 container health_status 72be95c12e041eb2cd1a16ad70dfa1391d9f03a5f3180a614ead9459dcfcd46f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller)
Jan 20 14:57:16 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:57:16 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:57:16 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:57:16.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:57:16 compute-1 nova_compute[225855]: 2026-01-20 14:57:16.190 225859 DEBUG nova.network.neutron [req-a1088e4c-91c6-4fec-87ff-b6e6b2a182a1 req-9d56062e-3a50-4a93-808d-6c531ff0647b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 538fe1f0-b666-4b97-b2ef-317adae0a47a] Updated VIF entry in instance network info cache for port 6550efe7-7235-437c-b9f3-728b676371ee. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 14:57:16 compute-1 nova_compute[225855]: 2026-01-20 14:57:16.191 225859 DEBUG nova.network.neutron [req-a1088e4c-91c6-4fec-87ff-b6e6b2a182a1 req-9d56062e-3a50-4a93-808d-6c531ff0647b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 538fe1f0-b666-4b97-b2ef-317adae0a47a] Updating instance_info_cache with network_info: [{"id": "6550efe7-7235-437c-b9f3-728b676371ee", "address": "fa:16:3e:e3:4f:ce", "network": {"id": "87caaa2e-d899-4eed-8b6a-8d19125c693b", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-347508957-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "4d4e37f4fd7f4dbbb25648ec639e0e43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6550efe7-72", "ovs_interfaceid": "6550efe7-7235-437c-b9f3-728b676371ee", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:57:16 compute-1 nova_compute[225855]: 2026-01-20 14:57:16.204 225859 DEBUG oslo_concurrency.lockutils [req-a1088e4c-91c6-4fec-87ff-b6e6b2a182a1 req-9d56062e-3a50-4a93-808d-6c531ff0647b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-538fe1f0-b666-4b97-b2ef-317adae0a47a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:57:16 compute-1 nova_compute[225855]: 2026-01-20 14:57:16.205 225859 DEBUG oslo_concurrency.lockutils [req-1bbe517c-a35b-4cba-ab38-ff961cf7497c req-2f512ad2-cbe5-418e-b8fd-28913ee6540f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-538fe1f0-b666-4b97-b2ef-317adae0a47a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:57:16 compute-1 nova_compute[225855]: 2026-01-20 14:57:16.205 225859 DEBUG nova.network.neutron [req-1bbe517c-a35b-4cba-ab38-ff961cf7497c req-2f512ad2-cbe5-418e-b8fd-28913ee6540f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 538fe1f0-b666-4b97-b2ef-317adae0a47a] Refreshing network info cache for port 6550efe7-7235-437c-b9f3-728b676371ee _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 14:57:16 compute-1 nova_compute[225855]: 2026-01-20 14:57:16.254 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:57:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:57:16.416 140354 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:57:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:57:16.417 140354 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:57:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:57:16.417 140354 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:57:17 compute-1 ceph-mon[81775]: pgmap v2150: 321 pgs: 321 active+clean; 577 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 9.7 MiB/s rd, 12 MiB/s wr, 325 op/s
Jan 20 14:57:17 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/2422669962' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:57:17 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/3750816865' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:57:17 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:57:17 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:57:17 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:57:17.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:57:17 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:57:18 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:57:18 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:57:18 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:57:18.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:57:18 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/4255108125' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:57:18 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e316 e316: 3 total, 3 up, 3 in
Jan 20 14:57:18 compute-1 nova_compute[225855]: 2026-01-20 14:57:18.443 225859 DEBUG nova.network.neutron [req-1bbe517c-a35b-4cba-ab38-ff961cf7497c req-2f512ad2-cbe5-418e-b8fd-28913ee6540f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 538fe1f0-b666-4b97-b2ef-317adae0a47a] Updated VIF entry in instance network info cache for port 6550efe7-7235-437c-b9f3-728b676371ee. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 14:57:18 compute-1 nova_compute[225855]: 2026-01-20 14:57:18.443 225859 DEBUG nova.network.neutron [req-1bbe517c-a35b-4cba-ab38-ff961cf7497c req-2f512ad2-cbe5-418e-b8fd-28913ee6540f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 538fe1f0-b666-4b97-b2ef-317adae0a47a] Updating instance_info_cache with network_info: [{"id": "6550efe7-7235-437c-b9f3-728b676371ee", "address": "fa:16:3e:e3:4f:ce", "network": {"id": "87caaa2e-d899-4eed-8b6a-8d19125c693b", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-347508957-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "4d4e37f4fd7f4dbbb25648ec639e0e43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6550efe7-72", "ovs_interfaceid": "6550efe7-7235-437c-b9f3-728b676371ee", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:57:18 compute-1 nova_compute[225855]: 2026-01-20 14:57:18.466 225859 DEBUG oslo_concurrency.lockutils [req-1bbe517c-a35b-4cba-ab38-ff961cf7497c req-2f512ad2-cbe5-418e-b8fd-28913ee6540f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-538fe1f0-b666-4b97-b2ef-317adae0a47a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:57:19 compute-1 ceph-mon[81775]: pgmap v2151: 321 pgs: 321 active+clean; 565 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 8.8 MiB/s rd, 10 MiB/s wr, 325 op/s
Jan 20 14:57:19 compute-1 ceph-mon[81775]: osdmap e316: 3 total, 3 up, 3 in
Jan 20 14:57:19 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:57:19 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:57:19 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:57:19.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:57:19 compute-1 NetworkManager[49104]: <info>  [1768921039.4920] manager: (patch-provnet-b62c391b-f7a3-4a38-a0df-72ac0383ca74-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/235)
Jan 20 14:57:19 compute-1 NetworkManager[49104]: <info>  [1768921039.4931] manager: (patch-br-int-to-provnet-b62c391b-f7a3-4a38-a0df-72ac0383ca74): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/236)
Jan 20 14:57:19 compute-1 nova_compute[225855]: 2026-01-20 14:57:19.491 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:57:19 compute-1 nova_compute[225855]: 2026-01-20 14:57:19.692 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:57:19 compute-1 nova_compute[225855]: 2026-01-20 14:57:19.715 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:57:19 compute-1 nova_compute[225855]: 2026-01-20 14:57:19.735 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:57:19 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e317 e317: 3 total, 3 up, 3 in
Jan 20 14:57:20 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:57:20 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:57:20 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:57:20.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:57:20 compute-1 ceph-mon[81775]: osdmap e317: 3 total, 3 up, 3 in
Jan 20 14:57:20 compute-1 ceph-mon[81775]: pgmap v2154: 321 pgs: 321 active+clean; 542 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 5.3 MiB/s rd, 9.1 MiB/s wr, 339 op/s
Jan 20 14:57:21 compute-1 nova_compute[225855]: 2026-01-20 14:57:21.251 225859 DEBUG nova.compute.manager [req-87b9a3a5-4e98-4201-9755-e486bd6571dc req-c541d0da-fda3-4442-a775-ba4641e234f0 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 538fe1f0-b666-4b97-b2ef-317adae0a47a] Received event network-changed-6550efe7-7235-437c-b9f3-728b676371ee external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:57:21 compute-1 nova_compute[225855]: 2026-01-20 14:57:21.251 225859 DEBUG nova.compute.manager [req-87b9a3a5-4e98-4201-9755-e486bd6571dc req-c541d0da-fda3-4442-a775-ba4641e234f0 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 538fe1f0-b666-4b97-b2ef-317adae0a47a] Refreshing instance network info cache due to event network-changed-6550efe7-7235-437c-b9f3-728b676371ee. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 14:57:21 compute-1 nova_compute[225855]: 2026-01-20 14:57:21.251 225859 DEBUG oslo_concurrency.lockutils [req-87b9a3a5-4e98-4201-9755-e486bd6571dc req-c541d0da-fda3-4442-a775-ba4641e234f0 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-538fe1f0-b666-4b97-b2ef-317adae0a47a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:57:21 compute-1 nova_compute[225855]: 2026-01-20 14:57:21.252 225859 DEBUG oslo_concurrency.lockutils [req-87b9a3a5-4e98-4201-9755-e486bd6571dc req-c541d0da-fda3-4442-a775-ba4641e234f0 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-538fe1f0-b666-4b97-b2ef-317adae0a47a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:57:21 compute-1 nova_compute[225855]: 2026-01-20 14:57:21.252 225859 DEBUG nova.network.neutron [req-87b9a3a5-4e98-4201-9755-e486bd6571dc req-c541d0da-fda3-4442-a775-ba4641e234f0 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 538fe1f0-b666-4b97-b2ef-317adae0a47a] Refreshing network info cache for port 6550efe7-7235-437c-b9f3-728b676371ee _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 14:57:21 compute-1 nova_compute[225855]: 2026-01-20 14:57:21.257 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:57:21 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:57:21 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:57:21 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:57:21.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:57:22 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:57:22 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:57:22 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:57:22.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:57:22 compute-1 ceph-osd[79119]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 20 14:57:22 compute-1 ceph-osd[79119]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3600.1 total, 600.0 interval
                                           Cumulative writes: 43K writes, 171K keys, 43K commit groups, 1.0 writes per commit group, ingest: 0.16 GB, 0.05 MB/s
                                           Cumulative WAL: 43K writes, 15K syncs, 2.75 writes per sync, written: 0.16 GB, 0.05 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 12K writes, 47K keys, 12K commit groups, 1.0 writes per commit group, ingest: 46.56 MB, 0.08 MB/s
                                           Interval WAL: 12K writes, 4997 syncs, 2.47 writes per sync, written: 0.05 GB, 0.08 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 20 14:57:22 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e317 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:57:23 compute-1 ceph-mon[81775]: pgmap v2155: 321 pgs: 321 active+clean; 502 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 2.2 MiB/s wr, 159 op/s
Jan 20 14:57:23 compute-1 nova_compute[225855]: 2026-01-20 14:57:23.214 225859 DEBUG nova.network.neutron [req-87b9a3a5-4e98-4201-9755-e486bd6571dc req-c541d0da-fda3-4442-a775-ba4641e234f0 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 538fe1f0-b666-4b97-b2ef-317adae0a47a] Updated VIF entry in instance network info cache for port 6550efe7-7235-437c-b9f3-728b676371ee. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 14:57:23 compute-1 nova_compute[225855]: 2026-01-20 14:57:23.214 225859 DEBUG nova.network.neutron [req-87b9a3a5-4e98-4201-9755-e486bd6571dc req-c541d0da-fda3-4442-a775-ba4641e234f0 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 538fe1f0-b666-4b97-b2ef-317adae0a47a] Updating instance_info_cache with network_info: [{"id": "6550efe7-7235-437c-b9f3-728b676371ee", "address": "fa:16:3e:e3:4f:ce", "network": {"id": "87caaa2e-d899-4eed-8b6a-8d19125c693b", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-347508957-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.207", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "4d4e37f4fd7f4dbbb25648ec639e0e43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6550efe7-72", "ovs_interfaceid": "6550efe7-7235-437c-b9f3-728b676371ee", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:57:23 compute-1 nova_compute[225855]: 2026-01-20 14:57:23.241 225859 DEBUG oslo_concurrency.lockutils [req-87b9a3a5-4e98-4201-9755-e486bd6571dc req-c541d0da-fda3-4442-a775-ba4641e234f0 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-538fe1f0-b666-4b97-b2ef-317adae0a47a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:57:23 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:57:23 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:57:23 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:57:23.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:57:24 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:57:24 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:57:24 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:57:24.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:57:24 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/4028243345' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:57:24 compute-1 nova_compute[225855]: 2026-01-20 14:57:24.737 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:57:25 compute-1 ceph-mon[81775]: pgmap v2156: 321 pgs: 321 active+clean; 510 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 2.6 MiB/s wr, 210 op/s
Jan 20 14:57:25 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/1658765285' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:57:25 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:57:25 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:57:25 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:57:25.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:57:25 compute-1 nova_compute[225855]: 2026-01-20 14:57:25.500 225859 DEBUG nova.compute.manager [req-f4718a15-cde7-4e38-ae03-2cfac926ba5b req-a94a2c68-9914-47fb-89ae-617fc949dc96 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 538fe1f0-b666-4b97-b2ef-317adae0a47a] Received event network-changed-6550efe7-7235-437c-b9f3-728b676371ee external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:57:25 compute-1 nova_compute[225855]: 2026-01-20 14:57:25.500 225859 DEBUG nova.compute.manager [req-f4718a15-cde7-4e38-ae03-2cfac926ba5b req-a94a2c68-9914-47fb-89ae-617fc949dc96 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 538fe1f0-b666-4b97-b2ef-317adae0a47a] Refreshing instance network info cache due to event network-changed-6550efe7-7235-437c-b9f3-728b676371ee. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 14:57:25 compute-1 nova_compute[225855]: 2026-01-20 14:57:25.500 225859 DEBUG oslo_concurrency.lockutils [req-f4718a15-cde7-4e38-ae03-2cfac926ba5b req-a94a2c68-9914-47fb-89ae-617fc949dc96 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-538fe1f0-b666-4b97-b2ef-317adae0a47a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:57:25 compute-1 nova_compute[225855]: 2026-01-20 14:57:25.501 225859 DEBUG oslo_concurrency.lockutils [req-f4718a15-cde7-4e38-ae03-2cfac926ba5b req-a94a2c68-9914-47fb-89ae-617fc949dc96 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-538fe1f0-b666-4b97-b2ef-317adae0a47a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:57:25 compute-1 nova_compute[225855]: 2026-01-20 14:57:25.501 225859 DEBUG nova.network.neutron [req-f4718a15-cde7-4e38-ae03-2cfac926ba5b req-a94a2c68-9914-47fb-89ae-617fc949dc96 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 538fe1f0-b666-4b97-b2ef-317adae0a47a] Refreshing network info cache for port 6550efe7-7235-437c-b9f3-728b676371ee _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 14:57:26 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:57:26 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:57:26 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:57:26.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:57:26 compute-1 nova_compute[225855]: 2026-01-20 14:57:26.258 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:57:27 compute-1 ceph-mon[81775]: pgmap v2157: 321 pgs: 321 active+clean; 484 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 2.6 MiB/s wr, 249 op/s
Jan 20 14:57:27 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:57:27 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:57:27 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:57:27.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:57:27 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e317 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:57:28 compute-1 podman[279454]: 2026-01-20 14:57:28.083031161 +0000 UTC m=+0.110338374 container health_status 533cd8e0c3b8b7f910f15cef49e7f879ab46c12ce646a35628c15f13725eb500 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, container_name=ovn_metadata_agent, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 20 14:57:28 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:57:28 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:57:28 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:57:28.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:57:28 compute-1 nova_compute[225855]: 2026-01-20 14:57:28.412 225859 DEBUG nova.network.neutron [req-f4718a15-cde7-4e38-ae03-2cfac926ba5b req-a94a2c68-9914-47fb-89ae-617fc949dc96 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 538fe1f0-b666-4b97-b2ef-317adae0a47a] Updated VIF entry in instance network info cache for port 6550efe7-7235-437c-b9f3-728b676371ee. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 14:57:28 compute-1 nova_compute[225855]: 2026-01-20 14:57:28.413 225859 DEBUG nova.network.neutron [req-f4718a15-cde7-4e38-ae03-2cfac926ba5b req-a94a2c68-9914-47fb-89ae-617fc949dc96 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 538fe1f0-b666-4b97-b2ef-317adae0a47a] Updating instance_info_cache with network_info: [{"id": "6550efe7-7235-437c-b9f3-728b676371ee", "address": "fa:16:3e:e3:4f:ce", "network": {"id": "87caaa2e-d899-4eed-8b6a-8d19125c693b", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-347508957-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "4d4e37f4fd7f4dbbb25648ec639e0e43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6550efe7-72", "ovs_interfaceid": "6550efe7-7235-437c-b9f3-728b676371ee", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:57:28 compute-1 nova_compute[225855]: 2026-01-20 14:57:28.456 225859 DEBUG oslo_concurrency.lockutils [req-f4718a15-cde7-4e38-ae03-2cfac926ba5b req-a94a2c68-9914-47fb-89ae-617fc949dc96 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-538fe1f0-b666-4b97-b2ef-317adae0a47a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:57:28 compute-1 ceph-osd[79119]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [P] New memtable created with log file: #51. Immutable memtables: 0.
Jan 20 14:57:29 compute-1 ceph-mon[81775]: pgmap v2158: 321 pgs: 321 active+clean; 472 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 8.3 MiB/s rd, 2.1 MiB/s wr, 239 op/s
Jan 20 14:57:29 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:57:29 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:57:29 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:57:29.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:57:29 compute-1 nova_compute[225855]: 2026-01-20 14:57:29.738 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:57:29 compute-1 nova_compute[225855]: 2026-01-20 14:57:29.758 225859 DEBUG oslo_concurrency.lockutils [None req-c0990b97-4a4d-4801-b9ca-ab15bce827dc 168ca7898b964a44b76c90912fa89a66 4d4e37f4fd7f4dbbb25648ec639e0e43 - - default default] Acquiring lock "538fe1f0-b666-4b97-b2ef-317adae0a47a" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:57:29 compute-1 nova_compute[225855]: 2026-01-20 14:57:29.759 225859 DEBUG oslo_concurrency.lockutils [None req-c0990b97-4a4d-4801-b9ca-ab15bce827dc 168ca7898b964a44b76c90912fa89a66 4d4e37f4fd7f4dbbb25648ec639e0e43 - - default default] Lock "538fe1f0-b666-4b97-b2ef-317adae0a47a" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:57:29 compute-1 nova_compute[225855]: 2026-01-20 14:57:29.759 225859 DEBUG oslo_concurrency.lockutils [None req-c0990b97-4a4d-4801-b9ca-ab15bce827dc 168ca7898b964a44b76c90912fa89a66 4d4e37f4fd7f4dbbb25648ec639e0e43 - - default default] Acquiring lock "538fe1f0-b666-4b97-b2ef-317adae0a47a-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:57:29 compute-1 nova_compute[225855]: 2026-01-20 14:57:29.759 225859 DEBUG oslo_concurrency.lockutils [None req-c0990b97-4a4d-4801-b9ca-ab15bce827dc 168ca7898b964a44b76c90912fa89a66 4d4e37f4fd7f4dbbb25648ec639e0e43 - - default default] Lock "538fe1f0-b666-4b97-b2ef-317adae0a47a-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:57:29 compute-1 nova_compute[225855]: 2026-01-20 14:57:29.759 225859 DEBUG oslo_concurrency.lockutils [None req-c0990b97-4a4d-4801-b9ca-ab15bce827dc 168ca7898b964a44b76c90912fa89a66 4d4e37f4fd7f4dbbb25648ec639e0e43 - - default default] Lock "538fe1f0-b666-4b97-b2ef-317adae0a47a-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:57:29 compute-1 nova_compute[225855]: 2026-01-20 14:57:29.760 225859 INFO nova.compute.manager [None req-c0990b97-4a4d-4801-b9ca-ab15bce827dc 168ca7898b964a44b76c90912fa89a66 4d4e37f4fd7f4dbbb25648ec639e0e43 - - default default] [instance: 538fe1f0-b666-4b97-b2ef-317adae0a47a] Terminating instance
Jan 20 14:57:29 compute-1 nova_compute[225855]: 2026-01-20 14:57:29.761 225859 DEBUG nova.compute.manager [None req-c0990b97-4a4d-4801-b9ca-ab15bce827dc 168ca7898b964a44b76c90912fa89a66 4d4e37f4fd7f4dbbb25648ec639e0e43 - - default default] [instance: 538fe1f0-b666-4b97-b2ef-317adae0a47a] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 20 14:57:29 compute-1 kernel: tap6550efe7-72 (unregistering): left promiscuous mode
Jan 20 14:57:29 compute-1 NetworkManager[49104]: <info>  [1768921049.8282] device (tap6550efe7-72): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 20 14:57:29 compute-1 nova_compute[225855]: 2026-01-20 14:57:29.874 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:57:29 compute-1 ovn_controller[130490]: 2026-01-20T14:57:29Z|00551|binding|INFO|Releasing lport 6550efe7-7235-437c-b9f3-728b676371ee from this chassis (sb_readonly=0)
Jan 20 14:57:29 compute-1 ovn_controller[130490]: 2026-01-20T14:57:29Z|00552|binding|INFO|Setting lport 6550efe7-7235-437c-b9f3-728b676371ee down in Southbound
Jan 20 14:57:29 compute-1 ovn_controller[130490]: 2026-01-20T14:57:29Z|00553|binding|INFO|Removing iface tap6550efe7-72 ovn-installed in OVS
Jan 20 14:57:29 compute-1 nova_compute[225855]: 2026-01-20 14:57:29.876 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:57:29 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:57:29.881 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e3:4f:ce 10.100.0.3'], port_security=['fa:16:3e:e3:4f:ce 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '538fe1f0-b666-4b97-b2ef-317adae0a47a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-87caaa2e-d899-4eed-8b6a-8d19125c693b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4d4e37f4fd7f4dbbb25648ec639e0e43', 'neutron:revision_number': '8', 'neutron:security_group_ids': '16f3c0d4-753e-4c8b-b00a-7073cbcfa6dc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=be68bfdf-b1f2-46c8-82b2-2c275774a706, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], logical_port=6550efe7-7235-437c-b9f3-728b676371ee) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:57:29 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:57:29.883 140354 INFO neutron.agent.ovn.metadata.agent [-] Port 6550efe7-7235-437c-b9f3-728b676371ee in datapath 87caaa2e-d899-4eed-8b6a-8d19125c693b unbound from our chassis
Jan 20 14:57:29 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:57:29.883 140354 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 87caaa2e-d899-4eed-8b6a-8d19125c693b or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Jan 20 14:57:29 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:57:29.884 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[0b99a7f5-be60-487e-b2a6-de3cf20ee086]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:57:29 compute-1 nova_compute[225855]: 2026-01-20 14:57:29.893 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:57:29 compute-1 systemd[1]: machine-qemu\x2d65\x2dinstance\x2d00000086.scope: Deactivated successfully.
Jan 20 14:57:29 compute-1 systemd[1]: machine-qemu\x2d65\x2dinstance\x2d00000086.scope: Consumed 13.460s CPU time.
Jan 20 14:57:29 compute-1 systemd-machined[194361]: Machine qemu-65-instance-00000086 terminated.
Jan 20 14:57:30 compute-1 nova_compute[225855]: 2026-01-20 14:57:30.005 225859 INFO nova.virt.libvirt.driver [-] [instance: 538fe1f0-b666-4b97-b2ef-317adae0a47a] Instance destroyed successfully.
Jan 20 14:57:30 compute-1 nova_compute[225855]: 2026-01-20 14:57:30.006 225859 DEBUG nova.objects.instance [None req-c0990b97-4a4d-4801-b9ca-ab15bce827dc 168ca7898b964a44b76c90912fa89a66 4d4e37f4fd7f4dbbb25648ec639e0e43 - - default default] Lazy-loading 'resources' on Instance uuid 538fe1f0-b666-4b97-b2ef-317adae0a47a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:57:30 compute-1 nova_compute[225855]: 2026-01-20 14:57:30.016 225859 DEBUG nova.virt.libvirt.vif [None req-c0990b97-4a4d-4801-b9ca-ab15bce827dc 168ca7898b964a44b76c90912fa89a66 4d4e37f4fd7f4dbbb25648ec639e0e43 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T14:56:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueTestJSONUnderV235-server-430397789',display_name='tempest-ServerRescueTestJSONUnderV235-server-430397789',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverrescuetestjsonunderv235-server-430397789',id=134,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-20T14:57:11Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='4d4e37f4fd7f4dbbb25648ec639e0e43',ramdisk_id='',reservation_id='r-xjy771y2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueTestJSONUnderV235-201664875',owner_user_name='tempest-ServerRescueTestJSONUnderV235-201664875-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T14:57:11Z,user_data=None,user_id='168ca7898b964a44b76c90912fa89a66',uuid=538fe1f0-b666-4b97-b2ef-317adae0a47a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='rescued') vif={"id": "6550efe7-7235-437c-b9f3-728b676371ee", "address": "fa:16:3e:e3:4f:ce", "network": {"id": "87caaa2e-d899-4eed-8b6a-8d19125c693b", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-347508957-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "4d4e37f4fd7f4dbbb25648ec639e0e43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6550efe7-72", "ovs_interfaceid": "6550efe7-7235-437c-b9f3-728b676371ee", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 20 14:57:30 compute-1 nova_compute[225855]: 2026-01-20 14:57:30.017 225859 DEBUG nova.network.os_vif_util [None req-c0990b97-4a4d-4801-b9ca-ab15bce827dc 168ca7898b964a44b76c90912fa89a66 4d4e37f4fd7f4dbbb25648ec639e0e43 - - default default] Converting VIF {"id": "6550efe7-7235-437c-b9f3-728b676371ee", "address": "fa:16:3e:e3:4f:ce", "network": {"id": "87caaa2e-d899-4eed-8b6a-8d19125c693b", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-347508957-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "4d4e37f4fd7f4dbbb25648ec639e0e43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6550efe7-72", "ovs_interfaceid": "6550efe7-7235-437c-b9f3-728b676371ee", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:57:30 compute-1 nova_compute[225855]: 2026-01-20 14:57:30.017 225859 DEBUG nova.network.os_vif_util [None req-c0990b97-4a4d-4801-b9ca-ab15bce827dc 168ca7898b964a44b76c90912fa89a66 4d4e37f4fd7f4dbbb25648ec639e0e43 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:e3:4f:ce,bridge_name='br-int',has_traffic_filtering=True,id=6550efe7-7235-437c-b9f3-728b676371ee,network=Network(87caaa2e-d899-4eed-8b6a-8d19125c693b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6550efe7-72') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:57:30 compute-1 nova_compute[225855]: 2026-01-20 14:57:30.018 225859 DEBUG os_vif [None req-c0990b97-4a4d-4801-b9ca-ab15bce827dc 168ca7898b964a44b76c90912fa89a66 4d4e37f4fd7f4dbbb25648ec639e0e43 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:e3:4f:ce,bridge_name='br-int',has_traffic_filtering=True,id=6550efe7-7235-437c-b9f3-728b676371ee,network=Network(87caaa2e-d899-4eed-8b6a-8d19125c693b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6550efe7-72') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 20 14:57:30 compute-1 nova_compute[225855]: 2026-01-20 14:57:30.021 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:57:30 compute-1 nova_compute[225855]: 2026-01-20 14:57:30.021 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6550efe7-72, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:57:30 compute-1 nova_compute[225855]: 2026-01-20 14:57:30.023 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:57:30 compute-1 nova_compute[225855]: 2026-01-20 14:57:30.025 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:57:30 compute-1 nova_compute[225855]: 2026-01-20 14:57:30.027 225859 INFO os_vif [None req-c0990b97-4a4d-4801-b9ca-ab15bce827dc 168ca7898b964a44b76c90912fa89a66 4d4e37f4fd7f4dbbb25648ec639e0e43 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:e3:4f:ce,bridge_name='br-int',has_traffic_filtering=True,id=6550efe7-7235-437c-b9f3-728b676371ee,network=Network(87caaa2e-d899-4eed-8b6a-8d19125c693b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6550efe7-72')
Jan 20 14:57:30 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:57:30 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2654572084' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:57:30 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:57:30 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:57:30 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:57:30.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:57:30 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/2445595473' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:57:30 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/535457568' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:57:30 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/2654572084' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:57:30 compute-1 nova_compute[225855]: 2026-01-20 14:57:30.377 225859 DEBUG nova.compute.manager [req-a9e68a66-8cd2-4b84-945f-3935fada19a2 req-4b090397-1a12-4fc5-a7da-f2e42b79bde1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 538fe1f0-b666-4b97-b2ef-317adae0a47a] Received event network-vif-unplugged-6550efe7-7235-437c-b9f3-728b676371ee external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:57:30 compute-1 nova_compute[225855]: 2026-01-20 14:57:30.378 225859 DEBUG oslo_concurrency.lockutils [req-a9e68a66-8cd2-4b84-945f-3935fada19a2 req-4b090397-1a12-4fc5-a7da-f2e42b79bde1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "538fe1f0-b666-4b97-b2ef-317adae0a47a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:57:30 compute-1 nova_compute[225855]: 2026-01-20 14:57:30.378 225859 DEBUG oslo_concurrency.lockutils [req-a9e68a66-8cd2-4b84-945f-3935fada19a2 req-4b090397-1a12-4fc5-a7da-f2e42b79bde1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "538fe1f0-b666-4b97-b2ef-317adae0a47a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:57:30 compute-1 nova_compute[225855]: 2026-01-20 14:57:30.379 225859 DEBUG oslo_concurrency.lockutils [req-a9e68a66-8cd2-4b84-945f-3935fada19a2 req-4b090397-1a12-4fc5-a7da-f2e42b79bde1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "538fe1f0-b666-4b97-b2ef-317adae0a47a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:57:30 compute-1 nova_compute[225855]: 2026-01-20 14:57:30.379 225859 DEBUG nova.compute.manager [req-a9e68a66-8cd2-4b84-945f-3935fada19a2 req-4b090397-1a12-4fc5-a7da-f2e42b79bde1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 538fe1f0-b666-4b97-b2ef-317adae0a47a] No waiting events found dispatching network-vif-unplugged-6550efe7-7235-437c-b9f3-728b676371ee pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:57:30 compute-1 nova_compute[225855]: 2026-01-20 14:57:30.379 225859 DEBUG nova.compute.manager [req-a9e68a66-8cd2-4b84-945f-3935fada19a2 req-4b090397-1a12-4fc5-a7da-f2e42b79bde1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 538fe1f0-b666-4b97-b2ef-317adae0a47a] Received event network-vif-unplugged-6550efe7-7235-437c-b9f3-728b676371ee for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 20 14:57:31 compute-1 nova_compute[225855]: 2026-01-20 14:57:31.005 225859 INFO nova.virt.libvirt.driver [None req-c0990b97-4a4d-4801-b9ca-ab15bce827dc 168ca7898b964a44b76c90912fa89a66 4d4e37f4fd7f4dbbb25648ec639e0e43 - - default default] [instance: 538fe1f0-b666-4b97-b2ef-317adae0a47a] Deleting instance files /var/lib/nova/instances/538fe1f0-b666-4b97-b2ef-317adae0a47a_del
Jan 20 14:57:31 compute-1 nova_compute[225855]: 2026-01-20 14:57:31.006 225859 INFO nova.virt.libvirt.driver [None req-c0990b97-4a4d-4801-b9ca-ab15bce827dc 168ca7898b964a44b76c90912fa89a66 4d4e37f4fd7f4dbbb25648ec639e0e43 - - default default] [instance: 538fe1f0-b666-4b97-b2ef-317adae0a47a] Deletion of /var/lib/nova/instances/538fe1f0-b666-4b97-b2ef-317adae0a47a_del complete
Jan 20 14:57:31 compute-1 nova_compute[225855]: 2026-01-20 14:57:31.073 225859 INFO nova.compute.manager [None req-c0990b97-4a4d-4801-b9ca-ab15bce827dc 168ca7898b964a44b76c90912fa89a66 4d4e37f4fd7f4dbbb25648ec639e0e43 - - default default] [instance: 538fe1f0-b666-4b97-b2ef-317adae0a47a] Took 1.31 seconds to destroy the instance on the hypervisor.
Jan 20 14:57:31 compute-1 nova_compute[225855]: 2026-01-20 14:57:31.074 225859 DEBUG oslo.service.loopingcall [None req-c0990b97-4a4d-4801-b9ca-ab15bce827dc 168ca7898b964a44b76c90912fa89a66 4d4e37f4fd7f4dbbb25648ec639e0e43 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 20 14:57:31 compute-1 nova_compute[225855]: 2026-01-20 14:57:31.075 225859 DEBUG nova.compute.manager [-] [instance: 538fe1f0-b666-4b97-b2ef-317adae0a47a] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 20 14:57:31 compute-1 nova_compute[225855]: 2026-01-20 14:57:31.075 225859 DEBUG nova.network.neutron [-] [instance: 538fe1f0-b666-4b97-b2ef-317adae0a47a] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 20 14:57:31 compute-1 ceph-mon[81775]: pgmap v2159: 321 pgs: 321 active+clean; 486 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 9.1 MiB/s rd, 2.6 MiB/s wr, 248 op/s
Jan 20 14:57:31 compute-1 nova_compute[225855]: 2026-01-20 14:57:31.261 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:57:31 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:57:31 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:57:31 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:57:31.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:57:31 compute-1 sudo[279514]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:57:31 compute-1 sudo[279514]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:57:31 compute-1 sudo[279514]: pam_unix(sudo:session): session closed for user root
Jan 20 14:57:31 compute-1 sudo[279539]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:57:31 compute-1 sudo[279539]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:57:31 compute-1 sudo[279539]: pam_unix(sudo:session): session closed for user root
Jan 20 14:57:31 compute-1 sudo[279564]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:57:31 compute-1 sudo[279564]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:57:31 compute-1 sudo[279564]: pam_unix(sudo:session): session closed for user root
Jan 20 14:57:32 compute-1 sudo[279589]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 20 14:57:32 compute-1 sudo[279589]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:57:32 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:57:32 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:57:32 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:57:32.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:57:32 compute-1 ceph-mon[81775]: pgmap v2160: 321 pgs: 321 active+clean; 582 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 10 MiB/s rd, 6.7 MiB/s wr, 329 op/s
Jan 20 14:57:32 compute-1 nova_compute[225855]: 2026-01-20 14:57:32.381 225859 DEBUG nova.network.neutron [-] [instance: 538fe1f0-b666-4b97-b2ef-317adae0a47a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:57:32 compute-1 nova_compute[225855]: 2026-01-20 14:57:32.403 225859 INFO nova.compute.manager [-] [instance: 538fe1f0-b666-4b97-b2ef-317adae0a47a] Took 1.33 seconds to deallocate network for instance.
Jan 20 14:57:32 compute-1 nova_compute[225855]: 2026-01-20 14:57:32.448 225859 DEBUG oslo_concurrency.lockutils [None req-c0990b97-4a4d-4801-b9ca-ab15bce827dc 168ca7898b964a44b76c90912fa89a66 4d4e37f4fd7f4dbbb25648ec639e0e43 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:57:32 compute-1 nova_compute[225855]: 2026-01-20 14:57:32.448 225859 DEBUG oslo_concurrency.lockutils [None req-c0990b97-4a4d-4801-b9ca-ab15bce827dc 168ca7898b964a44b76c90912fa89a66 4d4e37f4fd7f4dbbb25648ec639e0e43 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:57:32 compute-1 sudo[279589]: pam_unix(sudo:session): session closed for user root
Jan 20 14:57:32 compute-1 nova_compute[225855]: 2026-01-20 14:57:32.476 225859 DEBUG nova.compute.manager [req-c69a41d3-e66d-473b-96f6-c004ee7ad447 req-4b83c1a8-f345-4a8c-a0af-12a187c28569 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 538fe1f0-b666-4b97-b2ef-317adae0a47a] Received event network-vif-plugged-6550efe7-7235-437c-b9f3-728b676371ee external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:57:32 compute-1 nova_compute[225855]: 2026-01-20 14:57:32.476 225859 DEBUG oslo_concurrency.lockutils [req-c69a41d3-e66d-473b-96f6-c004ee7ad447 req-4b83c1a8-f345-4a8c-a0af-12a187c28569 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "538fe1f0-b666-4b97-b2ef-317adae0a47a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:57:32 compute-1 nova_compute[225855]: 2026-01-20 14:57:32.476 225859 DEBUG oslo_concurrency.lockutils [req-c69a41d3-e66d-473b-96f6-c004ee7ad447 req-4b83c1a8-f345-4a8c-a0af-12a187c28569 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "538fe1f0-b666-4b97-b2ef-317adae0a47a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:57:32 compute-1 nova_compute[225855]: 2026-01-20 14:57:32.477 225859 DEBUG oslo_concurrency.lockutils [req-c69a41d3-e66d-473b-96f6-c004ee7ad447 req-4b83c1a8-f345-4a8c-a0af-12a187c28569 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "538fe1f0-b666-4b97-b2ef-317adae0a47a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:57:32 compute-1 nova_compute[225855]: 2026-01-20 14:57:32.477 225859 DEBUG nova.compute.manager [req-c69a41d3-e66d-473b-96f6-c004ee7ad447 req-4b83c1a8-f345-4a8c-a0af-12a187c28569 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 538fe1f0-b666-4b97-b2ef-317adae0a47a] No waiting events found dispatching network-vif-plugged-6550efe7-7235-437c-b9f3-728b676371ee pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:57:32 compute-1 nova_compute[225855]: 2026-01-20 14:57:32.477 225859 WARNING nova.compute.manager [req-c69a41d3-e66d-473b-96f6-c004ee7ad447 req-4b83c1a8-f345-4a8c-a0af-12a187c28569 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 538fe1f0-b666-4b97-b2ef-317adae0a47a] Received unexpected event network-vif-plugged-6550efe7-7235-437c-b9f3-728b676371ee for instance with vm_state deleted and task_state None.
Jan 20 14:57:32 compute-1 nova_compute[225855]: 2026-01-20 14:57:32.496 225859 DEBUG oslo_concurrency.processutils [None req-c0990b97-4a4d-4801-b9ca-ab15bce827dc 168ca7898b964a44b76c90912fa89a66 4d4e37f4fd7f4dbbb25648ec639e0e43 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:57:32 compute-1 sudo[279646]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:57:32 compute-1 sudo[279646]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:57:32 compute-1 sudo[279646]: pam_unix(sudo:session): session closed for user root
Jan 20 14:57:32 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e317 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:57:32 compute-1 sudo[279690]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:57:32 compute-1 sudo[279690]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:57:32 compute-1 sudo[279690]: pam_unix(sudo:session): session closed for user root
Jan 20 14:57:32 compute-1 sudo[279715]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:57:32 compute-1 sudo[279715]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:57:32 compute-1 sudo[279715]: pam_unix(sudo:session): session closed for user root
Jan 20 14:57:32 compute-1 sudo[279740]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 list-networks
Jan 20 14:57:32 compute-1 sudo[279740]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:57:32 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:57:32 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1816590284' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:57:32 compute-1 nova_compute[225855]: 2026-01-20 14:57:32.953 225859 DEBUG oslo_concurrency.processutils [None req-c0990b97-4a4d-4801-b9ca-ab15bce827dc 168ca7898b964a44b76c90912fa89a66 4d4e37f4fd7f4dbbb25648ec639e0e43 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:57:32 compute-1 nova_compute[225855]: 2026-01-20 14:57:32.959 225859 DEBUG nova.compute.provider_tree [None req-c0990b97-4a4d-4801-b9ca-ab15bce827dc 168ca7898b964a44b76c90912fa89a66 4d4e37f4fd7f4dbbb25648ec639e0e43 - - default default] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:57:32 compute-1 nova_compute[225855]: 2026-01-20 14:57:32.981 225859 DEBUG nova.scheduler.client.report [None req-c0990b97-4a4d-4801-b9ca-ab15bce827dc 168ca7898b964a44b76c90912fa89a66 4d4e37f4fd7f4dbbb25648ec639e0e43 - - default default] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:57:33 compute-1 nova_compute[225855]: 2026-01-20 14:57:33.023 225859 DEBUG oslo_concurrency.lockutils [None req-c0990b97-4a4d-4801-b9ca-ab15bce827dc 168ca7898b964a44b76c90912fa89a66 4d4e37f4fd7f4dbbb25648ec639e0e43 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.574s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:57:33 compute-1 nova_compute[225855]: 2026-01-20 14:57:33.058 225859 INFO nova.scheduler.client.report [None req-c0990b97-4a4d-4801-b9ca-ab15bce827dc 168ca7898b964a44b76c90912fa89a66 4d4e37f4fd7f4dbbb25648ec639e0e43 - - default default] Deleted allocations for instance 538fe1f0-b666-4b97-b2ef-317adae0a47a
Jan 20 14:57:33 compute-1 sudo[279740]: pam_unix(sudo:session): session closed for user root
Jan 20 14:57:33 compute-1 nova_compute[225855]: 2026-01-20 14:57:33.180 225859 DEBUG oslo_concurrency.lockutils [None req-c0990b97-4a4d-4801-b9ca-ab15bce827dc 168ca7898b964a44b76c90912fa89a66 4d4e37f4fd7f4dbbb25648ec639e0e43 - - default default] Lock "538fe1f0-b666-4b97-b2ef-317adae0a47a" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.421s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:57:33 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e318 e318: 3 total, 3 up, 3 in
Jan 20 14:57:33 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/1816590284' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:57:33 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:57:33 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:57:33 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:57:33 compute-1 sudo[279784]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:57:33 compute-1 sudo[279784]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:57:33 compute-1 sudo[279784]: pam_unix(sudo:session): session closed for user root
Jan 20 14:57:33 compute-1 sudo[279809]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:57:33 compute-1 sudo[279809]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:57:33 compute-1 sudo[279809]: pam_unix(sudo:session): session closed for user root
Jan 20 14:57:33 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:57:33 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:57:33 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:57:33.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:57:33 compute-1 nova_compute[225855]: 2026-01-20 14:57:33.908 225859 DEBUG nova.compute.manager [req-559b044e-78df-473b-85df-b624d1fd284a req-12db5d70-896f-4bcc-9a31-8a8e532fc4e6 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 538fe1f0-b666-4b97-b2ef-317adae0a47a] Received event network-vif-deleted-6550efe7-7235-437c-b9f3-728b676371ee external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:57:34 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:57:34 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:57:34 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:57:34.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:57:35 compute-1 nova_compute[225855]: 2026-01-20 14:57:35.024 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:57:35 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:57:35 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:57:35 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:57:35.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:57:35 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:57:35 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:57:35 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 14:57:35 compute-1 ceph-mon[81775]: osdmap e318: 3 total, 3 up, 3 in
Jan 20 14:57:35 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:57:35 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 14:57:35 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 14:57:35 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:57:35 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/3524889482' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:57:35 compute-1 ceph-mon[81775]: pgmap v2162: 321 pgs: 321 active+clean; 591 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 13 MiB/s rd, 10 MiB/s wr, 369 op/s
Jan 20 14:57:35 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/2690735207' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:57:35 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/4079449944' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:57:35 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:57:35 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/759606216' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:57:36 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:57:36 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:57:36 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:57:36.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:57:36 compute-1 nova_compute[225855]: 2026-01-20 14:57:36.262 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:57:36 compute-1 nova_compute[225855]: 2026-01-20 14:57:36.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:57:36 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/759606216' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:57:36 compute-1 ceph-mon[81775]: pgmap v2163: 321 pgs: 321 active+clean; 531 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 13 MiB/s rd, 13 MiB/s wr, 491 op/s
Jan 20 14:57:37 compute-1 nova_compute[225855]: 2026-01-20 14:57:37.339 225859 DEBUG oslo_concurrency.lockutils [None req-6ee258f6-71b3-490f-ac4f-856603e106f3 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] Acquiring lock "1ebdefed-0903-4d72-b78d-912666c5ce61" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:57:37 compute-1 nova_compute[225855]: 2026-01-20 14:57:37.339 225859 DEBUG oslo_concurrency.lockutils [None req-6ee258f6-71b3-490f-ac4f-856603e106f3 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] Lock "1ebdefed-0903-4d72-b78d-912666c5ce61" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:57:37 compute-1 nova_compute[225855]: 2026-01-20 14:57:37.360 225859 DEBUG nova.compute.manager [None req-6ee258f6-71b3-490f-ac4f-856603e106f3 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] [instance: 1ebdefed-0903-4d72-b78d-912666c5ce61] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 20 14:57:37 compute-1 nova_compute[225855]: 2026-01-20 14:57:37.442 225859 DEBUG oslo_concurrency.lockutils [None req-6ee258f6-71b3-490f-ac4f-856603e106f3 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:57:37 compute-1 nova_compute[225855]: 2026-01-20 14:57:37.442 225859 DEBUG oslo_concurrency.lockutils [None req-6ee258f6-71b3-490f-ac4f-856603e106f3 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:57:37 compute-1 nova_compute[225855]: 2026-01-20 14:57:37.448 225859 DEBUG nova.virt.hardware [None req-6ee258f6-71b3-490f-ac4f-856603e106f3 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 20 14:57:37 compute-1 nova_compute[225855]: 2026-01-20 14:57:37.449 225859 INFO nova.compute.claims [None req-6ee258f6-71b3-490f-ac4f-856603e106f3 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] [instance: 1ebdefed-0903-4d72-b78d-912666c5ce61] Claim successful on node compute-1.ctlplane.example.com
Jan 20 14:57:37 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:57:37 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:57:37 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:57:37.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:57:37 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:57:37.538 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=42, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '12:bb:42', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '06:92:24:f7:15:56'}, ipsec=False) old=SB_Global(nb_cfg=41) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:57:37 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:57:37.539 140354 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 20 14:57:37 compute-1 nova_compute[225855]: 2026-01-20 14:57:37.541 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:57:37 compute-1 nova_compute[225855]: 2026-01-20 14:57:37.600 225859 DEBUG oslo_concurrency.processutils [None req-6ee258f6-71b3-490f-ac4f-856603e106f3 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:57:37 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e318 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:57:37 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e319 e319: 3 total, 3 up, 3 in
Jan 20 14:57:38 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:57:38 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4005775922' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:57:38 compute-1 nova_compute[225855]: 2026-01-20 14:57:38.053 225859 DEBUG oslo_concurrency.processutils [None req-6ee258f6-71b3-490f-ac4f-856603e106f3 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:57:38 compute-1 nova_compute[225855]: 2026-01-20 14:57:38.061 225859 DEBUG nova.compute.provider_tree [None req-6ee258f6-71b3-490f-ac4f-856603e106f3 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:57:38 compute-1 nova_compute[225855]: 2026-01-20 14:57:38.082 225859 DEBUG nova.scheduler.client.report [None req-6ee258f6-71b3-490f-ac4f-856603e106f3 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:57:38 compute-1 nova_compute[225855]: 2026-01-20 14:57:38.106 225859 DEBUG oslo_concurrency.lockutils [None req-6ee258f6-71b3-490f-ac4f-856603e106f3 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.664s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:57:38 compute-1 nova_compute[225855]: 2026-01-20 14:57:38.107 225859 DEBUG nova.compute.manager [None req-6ee258f6-71b3-490f-ac4f-856603e106f3 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] [instance: 1ebdefed-0903-4d72-b78d-912666c5ce61] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 20 14:57:38 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:57:38 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:57:38 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:57:38.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:57:38 compute-1 nova_compute[225855]: 2026-01-20 14:57:38.155 225859 DEBUG nova.compute.manager [None req-6ee258f6-71b3-490f-ac4f-856603e106f3 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] [instance: 1ebdefed-0903-4d72-b78d-912666c5ce61] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 20 14:57:38 compute-1 nova_compute[225855]: 2026-01-20 14:57:38.155 225859 DEBUG nova.network.neutron [None req-6ee258f6-71b3-490f-ac4f-856603e106f3 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] [instance: 1ebdefed-0903-4d72-b78d-912666c5ce61] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 20 14:57:38 compute-1 nova_compute[225855]: 2026-01-20 14:57:38.173 225859 INFO nova.virt.libvirt.driver [None req-6ee258f6-71b3-490f-ac4f-856603e106f3 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] [instance: 1ebdefed-0903-4d72-b78d-912666c5ce61] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 20 14:57:38 compute-1 nova_compute[225855]: 2026-01-20 14:57:38.200 225859 DEBUG nova.compute.manager [None req-6ee258f6-71b3-490f-ac4f-856603e106f3 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] [instance: 1ebdefed-0903-4d72-b78d-912666c5ce61] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 20 14:57:38 compute-1 nova_compute[225855]: 2026-01-20 14:57:38.247 225859 INFO nova.virt.block_device [None req-6ee258f6-71b3-490f-ac4f-856603e106f3 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] [instance: 1ebdefed-0903-4d72-b78d-912666c5ce61] Booting with volume 5728e8f8-a711-41d5-aa04-a1d9faada8d9 at /dev/vda
Jan 20 14:57:38 compute-1 nova_compute[225855]: 2026-01-20 14:57:38.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:57:38 compute-1 nova_compute[225855]: 2026-01-20 14:57:38.367 225859 DEBUG nova.policy [None req-6ee258f6-71b3-490f-ac4f-856603e106f3 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ed2c9bd268d1491fa3484d86bcdb9ec6', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '107c1f3b5b7b413d9a389ca1166e331f', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 20 14:57:38 compute-1 nova_compute[225855]: 2026-01-20 14:57:38.440 225859 DEBUG os_brick.utils [None req-6ee258f6-71b3-490f-ac4f-856603e106f3 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.101', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-1.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Jan 20 14:57:38 compute-1 nova_compute[225855]: 2026-01-20 14:57:38.442 231081 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:57:38 compute-1 nova_compute[225855]: 2026-01-20 14:57:38.453 231081 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:57:38 compute-1 nova_compute[225855]: 2026-01-20 14:57:38.454 231081 DEBUG oslo.privsep.daemon [-] privsep: reply[88dcf51e-1c54-4265-ba65-c0c0bf856399]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:57:38 compute-1 nova_compute[225855]: 2026-01-20 14:57:38.455 231081 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:57:38 compute-1 nova_compute[225855]: 2026-01-20 14:57:38.463 231081 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:57:38 compute-1 nova_compute[225855]: 2026-01-20 14:57:38.463 231081 DEBUG oslo.privsep.daemon [-] privsep: reply[c67e2fa4-47b1-49a0-920e-824fb90f414f]: (4, ('InitiatorName=iqn.1994-05.com.redhat:1821ea3dc03d', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:57:38 compute-1 nova_compute[225855]: 2026-01-20 14:57:38.464 231081 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:57:38 compute-1 nova_compute[225855]: 2026-01-20 14:57:38.473 231081 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:57:38 compute-1 nova_compute[225855]: 2026-01-20 14:57:38.473 231081 DEBUG oslo.privsep.daemon [-] privsep: reply[42682ae7-2e00-4760-a46e-11643b0a2530]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:57:38 compute-1 nova_compute[225855]: 2026-01-20 14:57:38.475 231081 DEBUG oslo.privsep.daemon [-] privsep: reply[3387c024-4eeb-491d-bf3a-f594e7dc4a13]: (4, '870b1f1c-f19c-477b-b282-ee6eeba50974') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:57:38 compute-1 nova_compute[225855]: 2026-01-20 14:57:38.475 225859 DEBUG oslo_concurrency.processutils [None req-6ee258f6-71b3-490f-ac4f-856603e106f3 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:57:38 compute-1 nova_compute[225855]: 2026-01-20 14:57:38.499 225859 DEBUG oslo_concurrency.processutils [None req-6ee258f6-71b3-490f-ac4f-856603e106f3 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] CMD "nvme version" returned: 0 in 0.024s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:57:38 compute-1 nova_compute[225855]: 2026-01-20 14:57:38.502 225859 DEBUG os_brick.initiator.connectors.lightos [None req-6ee258f6-71b3-490f-ac4f-856603e106f3 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Jan 20 14:57:38 compute-1 nova_compute[225855]: 2026-01-20 14:57:38.502 225859 DEBUG os_brick.initiator.connectors.lightos [None req-6ee258f6-71b3-490f-ac4f-856603e106f3 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Jan 20 14:57:38 compute-1 nova_compute[225855]: 2026-01-20 14:57:38.502 225859 DEBUG os_brick.initiator.connectors.lightos [None req-6ee258f6-71b3-490f-ac4f-856603e106f3 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:5350774e-8b5e-4dba-80a9-92d405981c1d dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Jan 20 14:57:38 compute-1 nova_compute[225855]: 2026-01-20 14:57:38.503 225859 DEBUG os_brick.utils [None req-6ee258f6-71b3-490f-ac4f-856603e106f3 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] <== get_connector_properties: return (61ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.101', 'host': 'compute-1.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:1821ea3dc03d', 'do_local_attach': False, 'nvme_hostid': '5350774e-8b5e-4dba-80a9-92d405981c1d', 'system uuid': '870b1f1c-f19c-477b-b282-ee6eeba50974', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:5350774e-8b5e-4dba-80a9-92d405981c1d', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Jan 20 14:57:38 compute-1 nova_compute[225855]: 2026-01-20 14:57:38.503 225859 DEBUG nova.virt.block_device [None req-6ee258f6-71b3-490f-ac4f-856603e106f3 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] [instance: 1ebdefed-0903-4d72-b78d-912666c5ce61] Updating existing volume attachment record: 25d46c6d-0955-42e9-9edd-2c90ded91a6c _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Jan 20 14:57:38 compute-1 ceph-mon[81775]: osdmap e319: 3 total, 3 up, 3 in
Jan 20 14:57:38 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/4005775922' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:57:38 compute-1 ceph-mon[81775]: pgmap v2165: 321 pgs: 321 active+clean; 531 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 9.0 MiB/s rd, 15 MiB/s wr, 516 op/s
Jan 20 14:57:39 compute-1 sudo[279866]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:57:39 compute-1 sudo[279866]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:57:39 compute-1 sudo[279866]: pam_unix(sudo:session): session closed for user root
Jan 20 14:57:39 compute-1 sudo[279891]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 14:57:39 compute-1 sudo[279891]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:57:39 compute-1 sudo[279891]: pam_unix(sudo:session): session closed for user root
Jan 20 14:57:39 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:57:39 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:57:39 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:57:39.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:57:39 compute-1 nova_compute[225855]: 2026-01-20 14:57:39.546 225859 DEBUG nova.compute.manager [None req-6ee258f6-71b3-490f-ac4f-856603e106f3 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] [instance: 1ebdefed-0903-4d72-b78d-912666c5ce61] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 20 14:57:39 compute-1 nova_compute[225855]: 2026-01-20 14:57:39.548 225859 DEBUG nova.virt.libvirt.driver [None req-6ee258f6-71b3-490f-ac4f-856603e106f3 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] [instance: 1ebdefed-0903-4d72-b78d-912666c5ce61] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 20 14:57:39 compute-1 nova_compute[225855]: 2026-01-20 14:57:39.548 225859 INFO nova.virt.libvirt.driver [None req-6ee258f6-71b3-490f-ac4f-856603e106f3 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] [instance: 1ebdefed-0903-4d72-b78d-912666c5ce61] Creating image(s)
Jan 20 14:57:39 compute-1 nova_compute[225855]: 2026-01-20 14:57:39.549 225859 DEBUG nova.virt.libvirt.driver [None req-6ee258f6-71b3-490f-ac4f-856603e106f3 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] [instance: 1ebdefed-0903-4d72-b78d-912666c5ce61] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Jan 20 14:57:39 compute-1 nova_compute[225855]: 2026-01-20 14:57:39.549 225859 DEBUG nova.virt.libvirt.driver [None req-6ee258f6-71b3-490f-ac4f-856603e106f3 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] [instance: 1ebdefed-0903-4d72-b78d-912666c5ce61] Ensure instance console log exists: /var/lib/nova/instances/1ebdefed-0903-4d72-b78d-912666c5ce61/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 20 14:57:39 compute-1 nova_compute[225855]: 2026-01-20 14:57:39.549 225859 DEBUG oslo_concurrency.lockutils [None req-6ee258f6-71b3-490f-ac4f-856603e106f3 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:57:39 compute-1 nova_compute[225855]: 2026-01-20 14:57:39.550 225859 DEBUG oslo_concurrency.lockutils [None req-6ee258f6-71b3-490f-ac4f-856603e106f3 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:57:39 compute-1 nova_compute[225855]: 2026-01-20 14:57:39.550 225859 DEBUG oslo_concurrency.lockutils [None req-6ee258f6-71b3-490f-ac4f-856603e106f3 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:57:39 compute-1 nova_compute[225855]: 2026-01-20 14:57:39.883 225859 DEBUG nova.network.neutron [None req-6ee258f6-71b3-490f-ac4f-856603e106f3 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] [instance: 1ebdefed-0903-4d72-b78d-912666c5ce61] Successfully created port: 3067803c-07f3-4a15-a5ee-47f9a770efca _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 20 14:57:39 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e320 e320: 3 total, 3 up, 3 in
Jan 20 14:57:39 compute-1 ceph-mon[81775]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #100. Immutable memtables: 0.
Jan 20 14:57:39 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:57:39.991235) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 20 14:57:39 compute-1 ceph-mon[81775]: rocksdb: [db/flush_job.cc:856] [default] [JOB 61] Flushing memtable with next log file: 100
Jan 20 14:57:39 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768921059991392, "job": 61, "event": "flush_started", "num_memtables": 1, "num_entries": 1600, "num_deletes": 256, "total_data_size": 3286196, "memory_usage": 3359312, "flush_reason": "Manual Compaction"}
Jan 20 14:57:39 compute-1 ceph-mon[81775]: rocksdb: [db/flush_job.cc:885] [default] [JOB 61] Level-0 flush table #101: started
Jan 20 14:57:40 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768921060012444, "cf_name": "default", "job": 61, "event": "table_file_creation", "file_number": 101, "file_size": 2143187, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 50922, "largest_seqno": 52517, "table_properties": {"data_size": 2136383, "index_size": 3811, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1925, "raw_key_size": 15753, "raw_average_key_size": 20, "raw_value_size": 2122188, "raw_average_value_size": 2822, "num_data_blocks": 166, "num_entries": 752, "num_filter_entries": 752, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768920955, "oldest_key_time": 1768920955, "file_creation_time": 1768921059, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1539d774-8a6f-4e48-b253-137c44586344", "db_session_id": "LFF7G2OZDOU7TKQ8MKAH", "orig_file_number": 101, "seqno_to_time_mapping": "N/A"}}
Jan 20 14:57:40 compute-1 ceph-mon[81775]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 61] Flush lasted 21276 microseconds, and 5882 cpu microseconds.
Jan 20 14:57:40 compute-1 ceph-mon[81775]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 14:57:40 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:57:40.012536) [db/flush_job.cc:967] [default] [JOB 61] Level-0 flush table #101: 2143187 bytes OK
Jan 20 14:57:40 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:57:40.012570) [db/memtable_list.cc:519] [default] Level-0 commit table #101 started
Jan 20 14:57:40 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:57:40.015460) [db/memtable_list.cc:722] [default] Level-0 commit table #101: memtable #1 done
Jan 20 14:57:40 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:57:40.015479) EVENT_LOG_v1 {"time_micros": 1768921060015474, "job": 61, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 20 14:57:40 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:57:40.015505) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 20 14:57:40 compute-1 ceph-mon[81775]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 61] Try to delete WAL files size 3278640, prev total WAL file size 3278640, number of live WAL files 2.
Jan 20 14:57:40 compute-1 ceph-mon[81775]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000097.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 14:57:40 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:57:40.016668) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034323637' seq:72057594037927935, type:22 .. '7061786F730034353139' seq:0, type:0; will stop at (end)
Jan 20 14:57:40 compute-1 ceph-mon[81775]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 62] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 20 14:57:40 compute-1 ceph-mon[81775]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 61 Base level 0, inputs: [101(2092KB)], [99(9849KB)]
Jan 20 14:57:40 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768921060016707, "job": 62, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [101], "files_L6": [99], "score": -1, "input_data_size": 12229236, "oldest_snapshot_seqno": -1}
Jan 20 14:57:40 compute-1 nova_compute[225855]: 2026-01-20 14:57:40.026 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:57:40 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:57:40 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:57:40 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:57:40.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:57:40 compute-1 ceph-mon[81775]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 62] Generated table #102: 7779 keys, 10351315 bytes, temperature: kUnknown
Jan 20 14:57:40 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768921060128189, "cf_name": "default", "job": 62, "event": "table_file_creation", "file_number": 102, "file_size": 10351315, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10300487, "index_size": 30300, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 19461, "raw_key_size": 201302, "raw_average_key_size": 25, "raw_value_size": 10162771, "raw_average_value_size": 1306, "num_data_blocks": 1188, "num_entries": 7779, "num_filter_entries": 7779, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768917474, "oldest_key_time": 0, "file_creation_time": 1768921060, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1539d774-8a6f-4e48-b253-137c44586344", "db_session_id": "LFF7G2OZDOU7TKQ8MKAH", "orig_file_number": 102, "seqno_to_time_mapping": "N/A"}}
Jan 20 14:57:40 compute-1 ceph-mon[81775]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 14:57:40 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:57:40.128401) [db/compaction/compaction_job.cc:1663] [default] [JOB 62] Compacted 1@0 + 1@6 files to L6 => 10351315 bytes
Jan 20 14:57:40 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:57:40.129916) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 109.6 rd, 92.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.0, 9.6 +0.0 blob) out(9.9 +0.0 blob), read-write-amplify(10.5) write-amplify(4.8) OK, records in: 8312, records dropped: 533 output_compression: NoCompression
Jan 20 14:57:40 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:57:40.129933) EVENT_LOG_v1 {"time_micros": 1768921060129924, "job": 62, "event": "compaction_finished", "compaction_time_micros": 111541, "compaction_time_cpu_micros": 37984, "output_level": 6, "num_output_files": 1, "total_output_size": 10351315, "num_input_records": 8312, "num_output_records": 7779, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 20 14:57:40 compute-1 ceph-mon[81775]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000101.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 14:57:40 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768921060130432, "job": 62, "event": "table_file_deletion", "file_number": 101}
Jan 20 14:57:40 compute-1 ceph-mon[81775]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000099.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 14:57:40 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768921060132232, "job": 62, "event": "table_file_deletion", "file_number": 99}
Jan 20 14:57:40 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:57:40.016552) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:57:40 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:57:40.132314) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:57:40 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:57:40.132322) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:57:40 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:57:40.132325) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:57:40 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:57:40.132328) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:57:40 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-14:57:40.132331) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:57:40 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:57:40 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:57:40 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/1873084837' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:57:40 compute-1 ceph-mon[81775]: osdmap e320: 3 total, 3 up, 3 in
Jan 20 14:57:40 compute-1 nova_compute[225855]: 2026-01-20 14:57:40.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:57:40 compute-1 nova_compute[225855]: 2026-01-20 14:57:40.340 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 14:57:40 compute-1 nova_compute[225855]: 2026-01-20 14:57:40.340 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 20 14:57:40 compute-1 nova_compute[225855]: 2026-01-20 14:57:40.360 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] [instance: 1ebdefed-0903-4d72-b78d-912666c5ce61] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Jan 20 14:57:40 compute-1 nova_compute[225855]: 2026-01-20 14:57:40.360 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 20 14:57:40 compute-1 nova_compute[225855]: 2026-01-20 14:57:40.612 225859 DEBUG oslo_concurrency.lockutils [None req-0a7677da-1522-4eea-a664-8af5ee58a977 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] Acquiring lock "b4c1468d-9914-426a-9464-c1167de53632" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:57:40 compute-1 nova_compute[225855]: 2026-01-20 14:57:40.612 225859 DEBUG oslo_concurrency.lockutils [None req-0a7677da-1522-4eea-a664-8af5ee58a977 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] Lock "b4c1468d-9914-426a-9464-c1167de53632" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:57:40 compute-1 nova_compute[225855]: 2026-01-20 14:57:40.632 225859 DEBUG nova.compute.manager [None req-0a7677da-1522-4eea-a664-8af5ee58a977 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] [instance: b4c1468d-9914-426a-9464-c1167de53632] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 20 14:57:40 compute-1 nova_compute[225855]: 2026-01-20 14:57:40.718 225859 DEBUG oslo_concurrency.lockutils [None req-0a7677da-1522-4eea-a664-8af5ee58a977 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:57:40 compute-1 nova_compute[225855]: 2026-01-20 14:57:40.718 225859 DEBUG oslo_concurrency.lockutils [None req-0a7677da-1522-4eea-a664-8af5ee58a977 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:57:40 compute-1 nova_compute[225855]: 2026-01-20 14:57:40.724 225859 DEBUG nova.virt.hardware [None req-0a7677da-1522-4eea-a664-8af5ee58a977 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 20 14:57:40 compute-1 nova_compute[225855]: 2026-01-20 14:57:40.724 225859 INFO nova.compute.claims [None req-0a7677da-1522-4eea-a664-8af5ee58a977 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] [instance: b4c1468d-9914-426a-9464-c1167de53632] Claim successful on node compute-1.ctlplane.example.com
Jan 20 14:57:40 compute-1 nova_compute[225855]: 2026-01-20 14:57:40.839 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:57:40 compute-1 nova_compute[225855]: 2026-01-20 14:57:40.931 225859 DEBUG oslo_concurrency.processutils [None req-0a7677da-1522-4eea-a664-8af5ee58a977 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:57:40 compute-1 nova_compute[225855]: 2026-01-20 14:57:40.992 225859 DEBUG nova.network.neutron [None req-6ee258f6-71b3-490f-ac4f-856603e106f3 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] [instance: 1ebdefed-0903-4d72-b78d-912666c5ce61] Successfully updated port: 3067803c-07f3-4a15-a5ee-47f9a770efca _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 20 14:57:41 compute-1 nova_compute[225855]: 2026-01-20 14:57:41.009 225859 DEBUG oslo_concurrency.lockutils [None req-6ee258f6-71b3-490f-ac4f-856603e106f3 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] Acquiring lock "refresh_cache-1ebdefed-0903-4d72-b78d-912666c5ce61" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:57:41 compute-1 nova_compute[225855]: 2026-01-20 14:57:41.010 225859 DEBUG oslo_concurrency.lockutils [None req-6ee258f6-71b3-490f-ac4f-856603e106f3 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] Acquired lock "refresh_cache-1ebdefed-0903-4d72-b78d-912666c5ce61" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:57:41 compute-1 nova_compute[225855]: 2026-01-20 14:57:41.010 225859 DEBUG nova.network.neutron [None req-6ee258f6-71b3-490f-ac4f-856603e106f3 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] [instance: 1ebdefed-0903-4d72-b78d-912666c5ce61] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 14:57:41 compute-1 nova_compute[225855]: 2026-01-20 14:57:41.106 225859 DEBUG nova.compute.manager [req-14f5a19f-bf01-4364-9068-6d17f04bf8d0 req-12031be3-e681-499f-bd66-2c01dbc0736c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 1ebdefed-0903-4d72-b78d-912666c5ce61] Received event network-changed-3067803c-07f3-4a15-a5ee-47f9a770efca external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:57:41 compute-1 nova_compute[225855]: 2026-01-20 14:57:41.107 225859 DEBUG nova.compute.manager [req-14f5a19f-bf01-4364-9068-6d17f04bf8d0 req-12031be3-e681-499f-bd66-2c01dbc0736c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 1ebdefed-0903-4d72-b78d-912666c5ce61] Refreshing instance network info cache due to event network-changed-3067803c-07f3-4a15-a5ee-47f9a770efca. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 14:57:41 compute-1 nova_compute[225855]: 2026-01-20 14:57:41.108 225859 DEBUG oslo_concurrency.lockutils [req-14f5a19f-bf01-4364-9068-6d17f04bf8d0 req-12031be3-e681-499f-bd66-2c01dbc0736c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-1ebdefed-0903-4d72-b78d-912666c5ce61" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:57:41 compute-1 nova_compute[225855]: 2026-01-20 14:57:41.157 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:57:41 compute-1 ceph-mon[81775]: pgmap v2167: 321 pgs: 321 active+clean; 519 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 7.0 MiB/s rd, 10 MiB/s wr, 425 op/s
Jan 20 14:57:41 compute-1 nova_compute[225855]: 2026-01-20 14:57:41.202 225859 DEBUG nova.network.neutron [None req-6ee258f6-71b3-490f-ac4f-856603e106f3 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] [instance: 1ebdefed-0903-4d72-b78d-912666c5ce61] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 20 14:57:41 compute-1 nova_compute[225855]: 2026-01-20 14:57:41.265 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:57:41 compute-1 nova_compute[225855]: 2026-01-20 14:57:41.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:57:41 compute-1 nova_compute[225855]: 2026-01-20 14:57:41.340 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 14:57:41 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:57:41 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/721238381' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:57:41 compute-1 nova_compute[225855]: 2026-01-20 14:57:41.411 225859 DEBUG oslo_concurrency.processutils [None req-0a7677da-1522-4eea-a664-8af5ee58a977 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:57:41 compute-1 nova_compute[225855]: 2026-01-20 14:57:41.420 225859 DEBUG nova.compute.provider_tree [None req-0a7677da-1522-4eea-a664-8af5ee58a977 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:57:41 compute-1 nova_compute[225855]: 2026-01-20 14:57:41.442 225859 DEBUG nova.scheduler.client.report [None req-0a7677da-1522-4eea-a664-8af5ee58a977 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:57:41 compute-1 nova_compute[225855]: 2026-01-20 14:57:41.466 225859 DEBUG oslo_concurrency.lockutils [None req-0a7677da-1522-4eea-a664-8af5ee58a977 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.748s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:57:41 compute-1 nova_compute[225855]: 2026-01-20 14:57:41.468 225859 DEBUG nova.compute.manager [None req-0a7677da-1522-4eea-a664-8af5ee58a977 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] [instance: b4c1468d-9914-426a-9464-c1167de53632] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 20 14:57:41 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:57:41 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:57:41 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:57:41.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:57:41 compute-1 nova_compute[225855]: 2026-01-20 14:57:41.522 225859 DEBUG nova.compute.manager [None req-0a7677da-1522-4eea-a664-8af5ee58a977 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] [instance: b4c1468d-9914-426a-9464-c1167de53632] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 20 14:57:41 compute-1 nova_compute[225855]: 2026-01-20 14:57:41.523 225859 DEBUG nova.network.neutron [None req-0a7677da-1522-4eea-a664-8af5ee58a977 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] [instance: b4c1468d-9914-426a-9464-c1167de53632] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 20 14:57:41 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:57:41.541 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5ffd4ac3-9266-4927-98ad-20a17782c725, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '42'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:57:41 compute-1 nova_compute[225855]: 2026-01-20 14:57:41.547 225859 INFO nova.virt.libvirt.driver [None req-0a7677da-1522-4eea-a664-8af5ee58a977 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] [instance: b4c1468d-9914-426a-9464-c1167de53632] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 20 14:57:41 compute-1 nova_compute[225855]: 2026-01-20 14:57:41.569 225859 DEBUG nova.compute.manager [None req-0a7677da-1522-4eea-a664-8af5ee58a977 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] [instance: b4c1468d-9914-426a-9464-c1167de53632] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 20 14:57:41 compute-1 nova_compute[225855]: 2026-01-20 14:57:41.654 225859 INFO nova.virt.block_device [None req-0a7677da-1522-4eea-a664-8af5ee58a977 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] [instance: b4c1468d-9914-426a-9464-c1167de53632] Booting with volume 002e39e3-1bec-4033-aca2-f1428e495087 at /dev/vda
Jan 20 14:57:41 compute-1 nova_compute[225855]: 2026-01-20 14:57:41.884 225859 DEBUG os_brick.utils [None req-0a7677da-1522-4eea-a664-8af5ee58a977 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.101', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-1.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Jan 20 14:57:41 compute-1 nova_compute[225855]: 2026-01-20 14:57:41.886 231081 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:57:41 compute-1 nova_compute[225855]: 2026-01-20 14:57:41.908 231081 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.022s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:57:41 compute-1 nova_compute[225855]: 2026-01-20 14:57:41.909 231081 DEBUG oslo.privsep.daemon [-] privsep: reply[a566225c-38dd-4ec3-b98a-48e62eface13]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:57:41 compute-1 nova_compute[225855]: 2026-01-20 14:57:41.911 231081 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:57:41 compute-1 nova_compute[225855]: 2026-01-20 14:57:41.927 231081 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.016s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:57:41 compute-1 nova_compute[225855]: 2026-01-20 14:57:41.927 231081 DEBUG oslo.privsep.daemon [-] privsep: reply[b7d4938f-eeda-4ae2-903c-92eec5a25b5d]: (4, ('InitiatorName=iqn.1994-05.com.redhat:1821ea3dc03d', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:57:41 compute-1 nova_compute[225855]: 2026-01-20 14:57:41.929 231081 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:57:41 compute-1 nova_compute[225855]: 2026-01-20 14:57:41.943 225859 DEBUG nova.policy [None req-0a7677da-1522-4eea-a664-8af5ee58a977 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ed2c9bd268d1491fa3484d86bcdb9ec6', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '107c1f3b5b7b413d9a389ca1166e331f', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 20 14:57:41 compute-1 nova_compute[225855]: 2026-01-20 14:57:41.942 231081 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.013s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:57:41 compute-1 nova_compute[225855]: 2026-01-20 14:57:41.943 231081 DEBUG oslo.privsep.daemon [-] privsep: reply[653a7e03-824d-4060-80da-dea418d08767]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:57:41 compute-1 nova_compute[225855]: 2026-01-20 14:57:41.947 231081 DEBUG oslo.privsep.daemon [-] privsep: reply[52e22383-c219-4a16-bb5e-bd7a8b742f9a]: (4, '870b1f1c-f19c-477b-b282-ee6eeba50974') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:57:41 compute-1 nova_compute[225855]: 2026-01-20 14:57:41.948 225859 DEBUG oslo_concurrency.processutils [None req-0a7677da-1522-4eea-a664-8af5ee58a977 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:57:41 compute-1 nova_compute[225855]: 2026-01-20 14:57:41.974 225859 DEBUG oslo_concurrency.processutils [None req-0a7677da-1522-4eea-a664-8af5ee58a977 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] CMD "nvme version" returned: 0 in 0.026s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:57:41 compute-1 nova_compute[225855]: 2026-01-20 14:57:41.977 225859 DEBUG os_brick.initiator.connectors.lightos [None req-0a7677da-1522-4eea-a664-8af5ee58a977 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Jan 20 14:57:41 compute-1 nova_compute[225855]: 2026-01-20 14:57:41.977 225859 DEBUG os_brick.initiator.connectors.lightos [None req-0a7677da-1522-4eea-a664-8af5ee58a977 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Jan 20 14:57:41 compute-1 nova_compute[225855]: 2026-01-20 14:57:41.977 225859 DEBUG os_brick.initiator.connectors.lightos [None req-0a7677da-1522-4eea-a664-8af5ee58a977 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:5350774e-8b5e-4dba-80a9-92d405981c1d dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Jan 20 14:57:41 compute-1 nova_compute[225855]: 2026-01-20 14:57:41.978 225859 DEBUG os_brick.utils [None req-0a7677da-1522-4eea-a664-8af5ee58a977 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] <== get_connector_properties: return (92ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.101', 'host': 'compute-1.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:1821ea3dc03d', 'do_local_attach': False, 'nvme_hostid': '5350774e-8b5e-4dba-80a9-92d405981c1d', 'system uuid': '870b1f1c-f19c-477b-b282-ee6eeba50974', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:5350774e-8b5e-4dba-80a9-92d405981c1d', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Jan 20 14:57:41 compute-1 nova_compute[225855]: 2026-01-20 14:57:41.978 225859 DEBUG nova.virt.block_device [None req-0a7677da-1522-4eea-a664-8af5ee58a977 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] [instance: b4c1468d-9914-426a-9464-c1167de53632] Updating existing volume attachment record: b8b8cc31-54c0-4f4d-80cc-6fca4e9cae9f _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Jan 20 14:57:42 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:57:42 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:57:42 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:57:42.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:57:42 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/721238381' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:57:42 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/2972576168' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:57:42 compute-1 nova_compute[225855]: 2026-01-20 14:57:42.340 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:57:42 compute-1 nova_compute[225855]: 2026-01-20 14:57:42.377 225859 DEBUG nova.network.neutron [None req-6ee258f6-71b3-490f-ac4f-856603e106f3 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] [instance: 1ebdefed-0903-4d72-b78d-912666c5ce61] Updating instance_info_cache with network_info: [{"id": "3067803c-07f3-4a15-a5ee-47f9a770efca", "address": "fa:16:3e:cd:b7:b1", "network": {"id": "58d966e1-4d26-414a-920e-0be2d77abb59", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-1896990059-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "107c1f3b5b7b413d9a389ca1166e331f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3067803c-07", "ovs_interfaceid": "3067803c-07f3-4a15-a5ee-47f9a770efca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:57:42 compute-1 nova_compute[225855]: 2026-01-20 14:57:42.398 225859 DEBUG oslo_concurrency.lockutils [None req-6ee258f6-71b3-490f-ac4f-856603e106f3 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] Releasing lock "refresh_cache-1ebdefed-0903-4d72-b78d-912666c5ce61" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:57:42 compute-1 nova_compute[225855]: 2026-01-20 14:57:42.399 225859 DEBUG nova.compute.manager [None req-6ee258f6-71b3-490f-ac4f-856603e106f3 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] [instance: 1ebdefed-0903-4d72-b78d-912666c5ce61] Instance network_info: |[{"id": "3067803c-07f3-4a15-a5ee-47f9a770efca", "address": "fa:16:3e:cd:b7:b1", "network": {"id": "58d966e1-4d26-414a-920e-0be2d77abb59", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-1896990059-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "107c1f3b5b7b413d9a389ca1166e331f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3067803c-07", "ovs_interfaceid": "3067803c-07f3-4a15-a5ee-47f9a770efca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 20 14:57:42 compute-1 nova_compute[225855]: 2026-01-20 14:57:42.399 225859 DEBUG oslo_concurrency.lockutils [req-14f5a19f-bf01-4364-9068-6d17f04bf8d0 req-12031be3-e681-499f-bd66-2c01dbc0736c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-1ebdefed-0903-4d72-b78d-912666c5ce61" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:57:42 compute-1 nova_compute[225855]: 2026-01-20 14:57:42.400 225859 DEBUG nova.network.neutron [req-14f5a19f-bf01-4364-9068-6d17f04bf8d0 req-12031be3-e681-499f-bd66-2c01dbc0736c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 1ebdefed-0903-4d72-b78d-912666c5ce61] Refreshing network info cache for port 3067803c-07f3-4a15-a5ee-47f9a770efca _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 14:57:42 compute-1 nova_compute[225855]: 2026-01-20 14:57:42.406 225859 DEBUG nova.virt.libvirt.driver [None req-6ee258f6-71b3-490f-ac4f-856603e106f3 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] [instance: 1ebdefed-0903-4d72-b78d-912666c5ce61] Start _get_guest_xml network_info=[{"id": "3067803c-07f3-4a15-a5ee-47f9a770efca", "address": "fa:16:3e:cd:b7:b1", "network": {"id": "58d966e1-4d26-414a-920e-0be2d77abb59", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-1896990059-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "107c1f3b5b7b413d9a389ca1166e331f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3067803c-07", "ovs_interfaceid": "3067803c-07f3-4a15-a5ee-47f9a770efca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'delete_on_termination': False, 'device_type': 'disk', 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-5728e8f8-a711-41d5-aa04-a1d9faada8d9', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '5728e8f8-a711-41d5-aa04-a1d9faada8d9', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '1ebdefed-0903-4d72-b78d-912666c5ce61', 'attached_at': '', 'detached_at': '', 'volume_id': '5728e8f8-a711-41d5-aa04-a1d9faada8d9', 'serial': '5728e8f8-a711-41d5-aa04-a1d9faada8d9'}, 'guest_format': None, 'boot_index': 0, 'mount_device': '/dev/vda', 'attachment_id': '25d46c6d-0955-42e9-9edd-2c90ded91a6c', 'disk_bus': 'virtio', 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 20 14:57:42 compute-1 nova_compute[225855]: 2026-01-20 14:57:42.412 225859 WARNING nova.virt.libvirt.driver [None req-6ee258f6-71b3-490f-ac4f-856603e106f3 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 14:57:42 compute-1 nova_compute[225855]: 2026-01-20 14:57:42.419 225859 DEBUG nova.virt.libvirt.host [None req-6ee258f6-71b3-490f-ac4f-856603e106f3 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 20 14:57:42 compute-1 nova_compute[225855]: 2026-01-20 14:57:42.420 225859 DEBUG nova.virt.libvirt.host [None req-6ee258f6-71b3-490f-ac4f-856603e106f3 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 20 14:57:42 compute-1 nova_compute[225855]: 2026-01-20 14:57:42.432 225859 DEBUG nova.virt.libvirt.host [None req-6ee258f6-71b3-490f-ac4f-856603e106f3 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 20 14:57:42 compute-1 nova_compute[225855]: 2026-01-20 14:57:42.433 225859 DEBUG nova.virt.libvirt.host [None req-6ee258f6-71b3-490f-ac4f-856603e106f3 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 20 14:57:42 compute-1 nova_compute[225855]: 2026-01-20 14:57:42.435 225859 DEBUG nova.virt.libvirt.driver [None req-6ee258f6-71b3-490f-ac4f-856603e106f3 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 20 14:57:42 compute-1 nova_compute[225855]: 2026-01-20 14:57:42.436 225859 DEBUG nova.virt.hardware [None req-6ee258f6-71b3-490f-ac4f-856603e106f3 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-20T14:21:55Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='522deaab-a741-4dbb-932d-d8b13a211c33',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 20 14:57:42 compute-1 nova_compute[225855]: 2026-01-20 14:57:42.436 225859 DEBUG nova.virt.hardware [None req-6ee258f6-71b3-490f-ac4f-856603e106f3 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 20 14:57:42 compute-1 nova_compute[225855]: 2026-01-20 14:57:42.437 225859 DEBUG nova.virt.hardware [None req-6ee258f6-71b3-490f-ac4f-856603e106f3 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 20 14:57:42 compute-1 nova_compute[225855]: 2026-01-20 14:57:42.437 225859 DEBUG nova.virt.hardware [None req-6ee258f6-71b3-490f-ac4f-856603e106f3 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 20 14:57:42 compute-1 nova_compute[225855]: 2026-01-20 14:57:42.438 225859 DEBUG nova.virt.hardware [None req-6ee258f6-71b3-490f-ac4f-856603e106f3 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 20 14:57:42 compute-1 nova_compute[225855]: 2026-01-20 14:57:42.438 225859 DEBUG nova.virt.hardware [None req-6ee258f6-71b3-490f-ac4f-856603e106f3 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 20 14:57:42 compute-1 nova_compute[225855]: 2026-01-20 14:57:42.439 225859 DEBUG nova.virt.hardware [None req-6ee258f6-71b3-490f-ac4f-856603e106f3 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 20 14:57:42 compute-1 nova_compute[225855]: 2026-01-20 14:57:42.439 225859 DEBUG nova.virt.hardware [None req-6ee258f6-71b3-490f-ac4f-856603e106f3 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 20 14:57:42 compute-1 nova_compute[225855]: 2026-01-20 14:57:42.440 225859 DEBUG nova.virt.hardware [None req-6ee258f6-71b3-490f-ac4f-856603e106f3 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 20 14:57:42 compute-1 nova_compute[225855]: 2026-01-20 14:57:42.440 225859 DEBUG nova.virt.hardware [None req-6ee258f6-71b3-490f-ac4f-856603e106f3 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 20 14:57:42 compute-1 nova_compute[225855]: 2026-01-20 14:57:42.441 225859 DEBUG nova.virt.hardware [None req-6ee258f6-71b3-490f-ac4f-856603e106f3 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 20 14:57:42 compute-1 nova_compute[225855]: 2026-01-20 14:57:42.484 225859 DEBUG nova.storage.rbd_utils [None req-6ee258f6-71b3-490f-ac4f-856603e106f3 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] rbd image 1ebdefed-0903-4d72-b78d-912666c5ce61_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:57:42 compute-1 nova_compute[225855]: 2026-01-20 14:57:42.491 225859 DEBUG oslo_concurrency.processutils [None req-6ee258f6-71b3-490f-ac4f-856603e106f3 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:57:42 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e320 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:57:42 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:57:42 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/663477354' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:57:42 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:57:42 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2004004617' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:57:42 compute-1 nova_compute[225855]: 2026-01-20 14:57:42.996 225859 DEBUG oslo_concurrency.processutils [None req-6ee258f6-71b3-490f-ac4f-856603e106f3 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.505s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:57:43 compute-1 nova_compute[225855]: 2026-01-20 14:57:43.046 225859 DEBUG nova.virt.libvirt.vif [None req-6ee258f6-71b3-490f-ac4f-856603e106f3 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T14:57:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-TestInstancesWithCinderVolumes-server-1983668831',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testinstanceswithcindervolumes-server-1983668831',id=139,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGBbRGX2xZT3D1ftqdKpZwTwb/ukXbRv/O5UyYYLjii3gk46qsw4SNMi6p0GpNIY5l/f9OSIg9UlRsUFQqLszBoQT2vJic2iOBlI6VLyxyg71obcHOZQEGpjfcTfqUsJeQ==',key_name='tempest-TestInstancesWithCinderVolumes-1812188149',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='107c1f3b5b7b413d9a389ca1166e331f',ramdisk_id='',reservation_id='r-c4vqjrp4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestInstancesWithCinderVolumes-1174033615',owner_user_name='tempest-TestInstancesWithCinderVolumes-1174033615-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T14:57:38Z,user_data=None,user_id='ed2c9bd268d1491fa3484d86bcdb9ec6',uuid=1ebdefed-0903-4d72-b78d-912666c5ce61,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3067803c-07f3-4a15-a5ee-47f9a770efca", "address": "fa:16:3e:cd:b7:b1", "network": {"id": "58d966e1-4d26-414a-920e-0be2d77abb59", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-1896990059-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "107c1f3b5b7b413d9a389ca1166e331f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3067803c-07", "ovs_interfaceid": "3067803c-07f3-4a15-a5ee-47f9a770efca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 20 14:57:43 compute-1 nova_compute[225855]: 2026-01-20 14:57:43.047 225859 DEBUG nova.network.os_vif_util [None req-6ee258f6-71b3-490f-ac4f-856603e106f3 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] Converting VIF {"id": "3067803c-07f3-4a15-a5ee-47f9a770efca", "address": "fa:16:3e:cd:b7:b1", "network": {"id": "58d966e1-4d26-414a-920e-0be2d77abb59", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-1896990059-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "107c1f3b5b7b413d9a389ca1166e331f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3067803c-07", "ovs_interfaceid": "3067803c-07f3-4a15-a5ee-47f9a770efca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:57:43 compute-1 nova_compute[225855]: 2026-01-20 14:57:43.048 225859 DEBUG nova.network.os_vif_util [None req-6ee258f6-71b3-490f-ac4f-856603e106f3 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cd:b7:b1,bridge_name='br-int',has_traffic_filtering=True,id=3067803c-07f3-4a15-a5ee-47f9a770efca,network=Network(58d966e1-4d26-414a-920e-0be2d77abb59),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3067803c-07') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:57:43 compute-1 nova_compute[225855]: 2026-01-20 14:57:43.051 225859 DEBUG nova.objects.instance [None req-6ee258f6-71b3-490f-ac4f-856603e106f3 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] Lazy-loading 'pci_devices' on Instance uuid 1ebdefed-0903-4d72-b78d-912666c5ce61 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:57:43 compute-1 nova_compute[225855]: 2026-01-20 14:57:43.073 225859 DEBUG nova.virt.libvirt.driver [None req-6ee258f6-71b3-490f-ac4f-856603e106f3 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] [instance: 1ebdefed-0903-4d72-b78d-912666c5ce61] End _get_guest_xml xml=<domain type="kvm">
Jan 20 14:57:43 compute-1 nova_compute[225855]:   <uuid>1ebdefed-0903-4d72-b78d-912666c5ce61</uuid>
Jan 20 14:57:43 compute-1 nova_compute[225855]:   <name>instance-0000008b</name>
Jan 20 14:57:43 compute-1 nova_compute[225855]:   <memory>131072</memory>
Jan 20 14:57:43 compute-1 nova_compute[225855]:   <vcpu>1</vcpu>
Jan 20 14:57:43 compute-1 nova_compute[225855]:   <metadata>
Jan 20 14:57:43 compute-1 nova_compute[225855]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 14:57:43 compute-1 nova_compute[225855]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 14:57:43 compute-1 nova_compute[225855]:       <nova:name>tempest-TestInstancesWithCinderVolumes-server-1983668831</nova:name>
Jan 20 14:57:43 compute-1 nova_compute[225855]:       <nova:creationTime>2026-01-20 14:57:42</nova:creationTime>
Jan 20 14:57:43 compute-1 nova_compute[225855]:       <nova:flavor name="m1.nano">
Jan 20 14:57:43 compute-1 nova_compute[225855]:         <nova:memory>128</nova:memory>
Jan 20 14:57:43 compute-1 nova_compute[225855]:         <nova:disk>1</nova:disk>
Jan 20 14:57:43 compute-1 nova_compute[225855]:         <nova:swap>0</nova:swap>
Jan 20 14:57:43 compute-1 nova_compute[225855]:         <nova:ephemeral>0</nova:ephemeral>
Jan 20 14:57:43 compute-1 nova_compute[225855]:         <nova:vcpus>1</nova:vcpus>
Jan 20 14:57:43 compute-1 nova_compute[225855]:       </nova:flavor>
Jan 20 14:57:43 compute-1 nova_compute[225855]:       <nova:owner>
Jan 20 14:57:43 compute-1 nova_compute[225855]:         <nova:user uuid="ed2c9bd268d1491fa3484d86bcdb9ec6">tempest-TestInstancesWithCinderVolumes-1174033615-project-member</nova:user>
Jan 20 14:57:43 compute-1 nova_compute[225855]:         <nova:project uuid="107c1f3b5b7b413d9a389ca1166e331f">tempest-TestInstancesWithCinderVolumes-1174033615</nova:project>
Jan 20 14:57:43 compute-1 nova_compute[225855]:       </nova:owner>
Jan 20 14:57:43 compute-1 nova_compute[225855]:       <nova:ports>
Jan 20 14:57:43 compute-1 nova_compute[225855]:         <nova:port uuid="3067803c-07f3-4a15-a5ee-47f9a770efca">
Jan 20 14:57:43 compute-1 nova_compute[225855]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Jan 20 14:57:43 compute-1 nova_compute[225855]:         </nova:port>
Jan 20 14:57:43 compute-1 nova_compute[225855]:       </nova:ports>
Jan 20 14:57:43 compute-1 nova_compute[225855]:     </nova:instance>
Jan 20 14:57:43 compute-1 nova_compute[225855]:   </metadata>
Jan 20 14:57:43 compute-1 nova_compute[225855]:   <sysinfo type="smbios">
Jan 20 14:57:43 compute-1 nova_compute[225855]:     <system>
Jan 20 14:57:43 compute-1 nova_compute[225855]:       <entry name="manufacturer">RDO</entry>
Jan 20 14:57:43 compute-1 nova_compute[225855]:       <entry name="product">OpenStack Compute</entry>
Jan 20 14:57:43 compute-1 nova_compute[225855]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 14:57:43 compute-1 nova_compute[225855]:       <entry name="serial">1ebdefed-0903-4d72-b78d-912666c5ce61</entry>
Jan 20 14:57:43 compute-1 nova_compute[225855]:       <entry name="uuid">1ebdefed-0903-4d72-b78d-912666c5ce61</entry>
Jan 20 14:57:43 compute-1 nova_compute[225855]:       <entry name="family">Virtual Machine</entry>
Jan 20 14:57:43 compute-1 nova_compute[225855]:     </system>
Jan 20 14:57:43 compute-1 nova_compute[225855]:   </sysinfo>
Jan 20 14:57:43 compute-1 nova_compute[225855]:   <os>
Jan 20 14:57:43 compute-1 nova_compute[225855]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 20 14:57:43 compute-1 nova_compute[225855]:     <boot dev="hd"/>
Jan 20 14:57:43 compute-1 nova_compute[225855]:     <smbios mode="sysinfo"/>
Jan 20 14:57:43 compute-1 nova_compute[225855]:   </os>
Jan 20 14:57:43 compute-1 nova_compute[225855]:   <features>
Jan 20 14:57:43 compute-1 nova_compute[225855]:     <acpi/>
Jan 20 14:57:43 compute-1 nova_compute[225855]:     <apic/>
Jan 20 14:57:43 compute-1 nova_compute[225855]:     <vmcoreinfo/>
Jan 20 14:57:43 compute-1 nova_compute[225855]:   </features>
Jan 20 14:57:43 compute-1 nova_compute[225855]:   <clock offset="utc">
Jan 20 14:57:43 compute-1 nova_compute[225855]:     <timer name="pit" tickpolicy="delay"/>
Jan 20 14:57:43 compute-1 nova_compute[225855]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 20 14:57:43 compute-1 nova_compute[225855]:     <timer name="hpet" present="no"/>
Jan 20 14:57:43 compute-1 nova_compute[225855]:   </clock>
Jan 20 14:57:43 compute-1 nova_compute[225855]:   <cpu mode="custom" match="exact">
Jan 20 14:57:43 compute-1 nova_compute[225855]:     <model>Nehalem</model>
Jan 20 14:57:43 compute-1 nova_compute[225855]:     <topology sockets="1" cores="1" threads="1"/>
Jan 20 14:57:43 compute-1 nova_compute[225855]:   </cpu>
Jan 20 14:57:43 compute-1 nova_compute[225855]:   <devices>
Jan 20 14:57:43 compute-1 nova_compute[225855]:     <disk type="network" device="cdrom">
Jan 20 14:57:43 compute-1 nova_compute[225855]:       <driver type="raw" cache="none"/>
Jan 20 14:57:43 compute-1 nova_compute[225855]:       <source protocol="rbd" name="vms/1ebdefed-0903-4d72-b78d-912666c5ce61_disk.config">
Jan 20 14:57:43 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:57:43 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:57:43 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:57:43 compute-1 nova_compute[225855]:       </source>
Jan 20 14:57:43 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 14:57:43 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:57:43 compute-1 nova_compute[225855]:       </auth>
Jan 20 14:57:43 compute-1 nova_compute[225855]:       <target dev="sda" bus="sata"/>
Jan 20 14:57:43 compute-1 nova_compute[225855]:     </disk>
Jan 20 14:57:43 compute-1 nova_compute[225855]:     <disk type="network" device="disk">
Jan 20 14:57:43 compute-1 nova_compute[225855]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 20 14:57:43 compute-1 nova_compute[225855]:       <source protocol="rbd" name="volumes/volume-5728e8f8-a711-41d5-aa04-a1d9faada8d9">
Jan 20 14:57:43 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:57:43 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:57:43 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:57:43 compute-1 nova_compute[225855]:       </source>
Jan 20 14:57:43 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 14:57:43 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:57:43 compute-1 nova_compute[225855]:       </auth>
Jan 20 14:57:43 compute-1 nova_compute[225855]:       <target dev="vda" bus="virtio"/>
Jan 20 14:57:43 compute-1 nova_compute[225855]:       <serial>5728e8f8-a711-41d5-aa04-a1d9faada8d9</serial>
Jan 20 14:57:43 compute-1 nova_compute[225855]:     </disk>
Jan 20 14:57:43 compute-1 nova_compute[225855]:     <interface type="ethernet">
Jan 20 14:57:43 compute-1 nova_compute[225855]:       <mac address="fa:16:3e:cd:b7:b1"/>
Jan 20 14:57:43 compute-1 nova_compute[225855]:       <model type="virtio"/>
Jan 20 14:57:43 compute-1 nova_compute[225855]:       <driver name="vhost" rx_queue_size="512"/>
Jan 20 14:57:43 compute-1 nova_compute[225855]:       <mtu size="1442"/>
Jan 20 14:57:43 compute-1 nova_compute[225855]:       <target dev="tap3067803c-07"/>
Jan 20 14:57:43 compute-1 nova_compute[225855]:     </interface>
Jan 20 14:57:43 compute-1 nova_compute[225855]:     <serial type="pty">
Jan 20 14:57:43 compute-1 nova_compute[225855]:       <log file="/var/lib/nova/instances/1ebdefed-0903-4d72-b78d-912666c5ce61/console.log" append="off"/>
Jan 20 14:57:43 compute-1 nova_compute[225855]:     </serial>
Jan 20 14:57:43 compute-1 nova_compute[225855]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 20 14:57:43 compute-1 nova_compute[225855]:     <video>
Jan 20 14:57:43 compute-1 nova_compute[225855]:       <model type="virtio"/>
Jan 20 14:57:43 compute-1 nova_compute[225855]:     </video>
Jan 20 14:57:43 compute-1 nova_compute[225855]:     <input type="tablet" bus="usb"/>
Jan 20 14:57:43 compute-1 nova_compute[225855]:     <rng model="virtio">
Jan 20 14:57:43 compute-1 nova_compute[225855]:       <backend model="random">/dev/urandom</backend>
Jan 20 14:57:43 compute-1 nova_compute[225855]:     </rng>
Jan 20 14:57:43 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root"/>
Jan 20 14:57:43 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:57:43 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:57:43 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:57:43 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:57:43 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:57:43 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:57:43 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:57:43 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:57:43 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:57:43 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:57:43 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:57:43 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:57:43 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:57:43 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:57:43 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:57:43 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:57:43 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:57:43 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:57:43 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:57:43 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:57:43 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:57:43 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:57:43 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:57:43 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:57:43 compute-1 nova_compute[225855]:     <controller type="usb" index="0"/>
Jan 20 14:57:43 compute-1 nova_compute[225855]:     <memballoon model="virtio">
Jan 20 14:57:43 compute-1 nova_compute[225855]:       <stats period="10"/>
Jan 20 14:57:43 compute-1 nova_compute[225855]:     </memballoon>
Jan 20 14:57:43 compute-1 nova_compute[225855]:   </devices>
Jan 20 14:57:43 compute-1 nova_compute[225855]: </domain>
Jan 20 14:57:43 compute-1 nova_compute[225855]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 20 14:57:43 compute-1 nova_compute[225855]: 2026-01-20 14:57:43.075 225859 DEBUG nova.compute.manager [None req-6ee258f6-71b3-490f-ac4f-856603e106f3 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] [instance: 1ebdefed-0903-4d72-b78d-912666c5ce61] Preparing to wait for external event network-vif-plugged-3067803c-07f3-4a15-a5ee-47f9a770efca prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 20 14:57:43 compute-1 nova_compute[225855]: 2026-01-20 14:57:43.075 225859 DEBUG oslo_concurrency.lockutils [None req-6ee258f6-71b3-490f-ac4f-856603e106f3 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] Acquiring lock "1ebdefed-0903-4d72-b78d-912666c5ce61-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:57:43 compute-1 nova_compute[225855]: 2026-01-20 14:57:43.076 225859 DEBUG oslo_concurrency.lockutils [None req-6ee258f6-71b3-490f-ac4f-856603e106f3 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] Lock "1ebdefed-0903-4d72-b78d-912666c5ce61-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:57:43 compute-1 nova_compute[225855]: 2026-01-20 14:57:43.076 225859 DEBUG oslo_concurrency.lockutils [None req-6ee258f6-71b3-490f-ac4f-856603e106f3 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] Lock "1ebdefed-0903-4d72-b78d-912666c5ce61-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:57:43 compute-1 nova_compute[225855]: 2026-01-20 14:57:43.076 225859 DEBUG nova.virt.libvirt.vif [None req-6ee258f6-71b3-490f-ac4f-856603e106f3 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T14:57:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-TestInstancesWithCinderVolumes-server-1983668831',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testinstanceswithcindervolumes-server-1983668831',id=139,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGBbRGX2xZT3D1ftqdKpZwTwb/ukXbRv/O5UyYYLjii3gk46qsw4SNMi6p0GpNIY5l/f9OSIg9UlRsUFQqLszBoQT2vJic2iOBlI6VLyxyg71obcHOZQEGpjfcTfqUsJeQ==',key_name='tempest-TestInstancesWithCinderVolumes-1812188149',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='107c1f3b5b7b413d9a389ca1166e331f',ramdisk_id='',reservation_id='r-c4vqjrp4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestInstancesWithCinderVolumes-1174033615',owner_user_name='tempest-TestInstancesWithCinderVolumes-1174033615-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T14:57:38Z,user_data=None,user_id='ed2c9bd268d1491fa3484d86bcdb9ec6',uuid=1ebdefed-0903-4d72-b78d-912666c5ce61,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3067803c-07f3-4a15-a5ee-47f9a770efca", "address": "fa:16:3e:cd:b7:b1", "network": {"id": "58d966e1-4d26-414a-920e-0be2d77abb59", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-1896990059-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "107c1f3b5b7b413d9a389ca1166e331f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3067803c-07", "ovs_interfaceid": "3067803c-07f3-4a15-a5ee-47f9a770efca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 20 14:57:43 compute-1 nova_compute[225855]: 2026-01-20 14:57:43.077 225859 DEBUG nova.network.os_vif_util [None req-6ee258f6-71b3-490f-ac4f-856603e106f3 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] Converting VIF {"id": "3067803c-07f3-4a15-a5ee-47f9a770efca", "address": "fa:16:3e:cd:b7:b1", "network": {"id": "58d966e1-4d26-414a-920e-0be2d77abb59", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-1896990059-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "107c1f3b5b7b413d9a389ca1166e331f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3067803c-07", "ovs_interfaceid": "3067803c-07f3-4a15-a5ee-47f9a770efca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:57:43 compute-1 nova_compute[225855]: 2026-01-20 14:57:43.077 225859 DEBUG nova.network.os_vif_util [None req-6ee258f6-71b3-490f-ac4f-856603e106f3 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cd:b7:b1,bridge_name='br-int',has_traffic_filtering=True,id=3067803c-07f3-4a15-a5ee-47f9a770efca,network=Network(58d966e1-4d26-414a-920e-0be2d77abb59),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3067803c-07') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:57:43 compute-1 nova_compute[225855]: 2026-01-20 14:57:43.077 225859 DEBUG os_vif [None req-6ee258f6-71b3-490f-ac4f-856603e106f3 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:cd:b7:b1,bridge_name='br-int',has_traffic_filtering=True,id=3067803c-07f3-4a15-a5ee-47f9a770efca,network=Network(58d966e1-4d26-414a-920e-0be2d77abb59),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3067803c-07') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 20 14:57:43 compute-1 nova_compute[225855]: 2026-01-20 14:57:43.078 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:57:43 compute-1 nova_compute[225855]: 2026-01-20 14:57:43.078 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:57:43 compute-1 nova_compute[225855]: 2026-01-20 14:57:43.079 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:57:43 compute-1 nova_compute[225855]: 2026-01-20 14:57:43.082 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:57:43 compute-1 nova_compute[225855]: 2026-01-20 14:57:43.082 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3067803c-07, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:57:43 compute-1 nova_compute[225855]: 2026-01-20 14:57:43.082 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap3067803c-07, col_values=(('external_ids', {'iface-id': '3067803c-07f3-4a15-a5ee-47f9a770efca', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:cd:b7:b1', 'vm-uuid': '1ebdefed-0903-4d72-b78d-912666c5ce61'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:57:43 compute-1 nova_compute[225855]: 2026-01-20 14:57:43.083 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:57:43 compute-1 NetworkManager[49104]: <info>  [1768921063.0854] manager: (tap3067803c-07): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/237)
Jan 20 14:57:43 compute-1 nova_compute[225855]: 2026-01-20 14:57:43.086 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 14:57:43 compute-1 nova_compute[225855]: 2026-01-20 14:57:43.090 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:57:43 compute-1 nova_compute[225855]: 2026-01-20 14:57:43.091 225859 INFO os_vif [None req-6ee258f6-71b3-490f-ac4f-856603e106f3 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:cd:b7:b1,bridge_name='br-int',has_traffic_filtering=True,id=3067803c-07f3-4a15-a5ee-47f9a770efca,network=Network(58d966e1-4d26-414a-920e-0be2d77abb59),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3067803c-07')
Jan 20 14:57:43 compute-1 ceph-mon[81775]: pgmap v2168: 321 pgs: 321 active+clean; 485 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 5.0 MiB/s rd, 3.6 MiB/s wr, 374 op/s
Jan 20 14:57:43 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/663477354' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:57:43 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/2004004617' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:57:43 compute-1 nova_compute[225855]: 2026-01-20 14:57:43.211 225859 DEBUG nova.virt.libvirt.driver [None req-6ee258f6-71b3-490f-ac4f-856603e106f3 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:57:43 compute-1 nova_compute[225855]: 2026-01-20 14:57:43.211 225859 DEBUG nova.virt.libvirt.driver [None req-6ee258f6-71b3-490f-ac4f-856603e106f3 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:57:43 compute-1 nova_compute[225855]: 2026-01-20 14:57:43.211 225859 DEBUG nova.virt.libvirt.driver [None req-6ee258f6-71b3-490f-ac4f-856603e106f3 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] No VIF found with MAC fa:16:3e:cd:b7:b1, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 20 14:57:43 compute-1 nova_compute[225855]: 2026-01-20 14:57:43.212 225859 INFO nova.virt.libvirt.driver [None req-6ee258f6-71b3-490f-ac4f-856603e106f3 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] [instance: 1ebdefed-0903-4d72-b78d-912666c5ce61] Using config drive
Jan 20 14:57:43 compute-1 nova_compute[225855]: 2026-01-20 14:57:43.236 225859 DEBUG nova.storage.rbd_utils [None req-6ee258f6-71b3-490f-ac4f-856603e106f3 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] rbd image 1ebdefed-0903-4d72-b78d-912666c5ce61_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:57:43 compute-1 nova_compute[225855]: 2026-01-20 14:57:43.272 225859 DEBUG nova.network.neutron [None req-0a7677da-1522-4eea-a664-8af5ee58a977 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] [instance: b4c1468d-9914-426a-9464-c1167de53632] Successfully created port: 7c572239-9b2e-493c-8be5-632f27cc634a _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 20 14:57:43 compute-1 nova_compute[225855]: 2026-01-20 14:57:43.440 225859 DEBUG nova.compute.manager [None req-0a7677da-1522-4eea-a664-8af5ee58a977 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] [instance: b4c1468d-9914-426a-9464-c1167de53632] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 20 14:57:43 compute-1 nova_compute[225855]: 2026-01-20 14:57:43.441 225859 DEBUG nova.virt.libvirt.driver [None req-0a7677da-1522-4eea-a664-8af5ee58a977 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] [instance: b4c1468d-9914-426a-9464-c1167de53632] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 20 14:57:43 compute-1 nova_compute[225855]: 2026-01-20 14:57:43.441 225859 INFO nova.virt.libvirt.driver [None req-0a7677da-1522-4eea-a664-8af5ee58a977 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] [instance: b4c1468d-9914-426a-9464-c1167de53632] Creating image(s)
Jan 20 14:57:43 compute-1 nova_compute[225855]: 2026-01-20 14:57:43.442 225859 DEBUG nova.virt.libvirt.driver [None req-0a7677da-1522-4eea-a664-8af5ee58a977 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] [instance: b4c1468d-9914-426a-9464-c1167de53632] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Jan 20 14:57:43 compute-1 nova_compute[225855]: 2026-01-20 14:57:43.442 225859 DEBUG nova.virt.libvirt.driver [None req-0a7677da-1522-4eea-a664-8af5ee58a977 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] [instance: b4c1468d-9914-426a-9464-c1167de53632] Ensure instance console log exists: /var/lib/nova/instances/b4c1468d-9914-426a-9464-c1167de53632/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 20 14:57:43 compute-1 nova_compute[225855]: 2026-01-20 14:57:43.442 225859 DEBUG oslo_concurrency.lockutils [None req-0a7677da-1522-4eea-a664-8af5ee58a977 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:57:43 compute-1 nova_compute[225855]: 2026-01-20 14:57:43.442 225859 DEBUG oslo_concurrency.lockutils [None req-0a7677da-1522-4eea-a664-8af5ee58a977 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:57:43 compute-1 nova_compute[225855]: 2026-01-20 14:57:43.443 225859 DEBUG oslo_concurrency.lockutils [None req-0a7677da-1522-4eea-a664-8af5ee58a977 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:57:43 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:57:43 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:57:43 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:57:43.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:57:44 compute-1 nova_compute[225855]: 2026-01-20 14:57:44.084 225859 INFO nova.virt.libvirt.driver [None req-6ee258f6-71b3-490f-ac4f-856603e106f3 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] [instance: 1ebdefed-0903-4d72-b78d-912666c5ce61] Creating config drive at /var/lib/nova/instances/1ebdefed-0903-4d72-b78d-912666c5ce61/disk.config
Jan 20 14:57:44 compute-1 nova_compute[225855]: 2026-01-20 14:57:44.088 225859 DEBUG oslo_concurrency.processutils [None req-6ee258f6-71b3-490f-ac4f-856603e106f3 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/1ebdefed-0903-4d72-b78d-912666c5ce61/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpperrk6d_ execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:57:44 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:57:44 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:57:44 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:57:44.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:57:44 compute-1 nova_compute[225855]: 2026-01-20 14:57:44.231 225859 DEBUG oslo_concurrency.processutils [None req-6ee258f6-71b3-490f-ac4f-856603e106f3 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/1ebdefed-0903-4d72-b78d-912666c5ce61/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpperrk6d_" returned: 0 in 0.142s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:57:44 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/2874127425' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:57:44 compute-1 nova_compute[225855]: 2026-01-20 14:57:44.310 225859 DEBUG nova.storage.rbd_utils [None req-6ee258f6-71b3-490f-ac4f-856603e106f3 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] rbd image 1ebdefed-0903-4d72-b78d-912666c5ce61_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:57:44 compute-1 nova_compute[225855]: 2026-01-20 14:57:44.315 225859 DEBUG oslo_concurrency.processutils [None req-6ee258f6-71b3-490f-ac4f-856603e106f3 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/1ebdefed-0903-4d72-b78d-912666c5ce61/disk.config 1ebdefed-0903-4d72-b78d-912666c5ce61_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:57:44 compute-1 nova_compute[225855]: 2026-01-20 14:57:44.436 225859 DEBUG nova.network.neutron [req-14f5a19f-bf01-4364-9068-6d17f04bf8d0 req-12031be3-e681-499f-bd66-2c01dbc0736c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 1ebdefed-0903-4d72-b78d-912666c5ce61] Updated VIF entry in instance network info cache for port 3067803c-07f3-4a15-a5ee-47f9a770efca. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 14:57:44 compute-1 nova_compute[225855]: 2026-01-20 14:57:44.437 225859 DEBUG nova.network.neutron [req-14f5a19f-bf01-4364-9068-6d17f04bf8d0 req-12031be3-e681-499f-bd66-2c01dbc0736c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 1ebdefed-0903-4d72-b78d-912666c5ce61] Updating instance_info_cache with network_info: [{"id": "3067803c-07f3-4a15-a5ee-47f9a770efca", "address": "fa:16:3e:cd:b7:b1", "network": {"id": "58d966e1-4d26-414a-920e-0be2d77abb59", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-1896990059-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "107c1f3b5b7b413d9a389ca1166e331f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3067803c-07", "ovs_interfaceid": "3067803c-07f3-4a15-a5ee-47f9a770efca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:57:44 compute-1 nova_compute[225855]: 2026-01-20 14:57:44.497 225859 DEBUG oslo_concurrency.processutils [None req-6ee258f6-71b3-490f-ac4f-856603e106f3 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/1ebdefed-0903-4d72-b78d-912666c5ce61/disk.config 1ebdefed-0903-4d72-b78d-912666c5ce61_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.182s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:57:44 compute-1 nova_compute[225855]: 2026-01-20 14:57:44.498 225859 INFO nova.virt.libvirt.driver [None req-6ee258f6-71b3-490f-ac4f-856603e106f3 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] [instance: 1ebdefed-0903-4d72-b78d-912666c5ce61] Deleting local config drive /var/lib/nova/instances/1ebdefed-0903-4d72-b78d-912666c5ce61/disk.config because it was imported into RBD.
Jan 20 14:57:44 compute-1 kernel: tap3067803c-07: entered promiscuous mode
Jan 20 14:57:44 compute-1 NetworkManager[49104]: <info>  [1768921064.5510] manager: (tap3067803c-07): new Tun device (/org/freedesktop/NetworkManager/Devices/238)
Jan 20 14:57:44 compute-1 nova_compute[225855]: 2026-01-20 14:57:44.550 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:57:44 compute-1 ovn_controller[130490]: 2026-01-20T14:57:44Z|00554|binding|INFO|Claiming lport 3067803c-07f3-4a15-a5ee-47f9a770efca for this chassis.
Jan 20 14:57:44 compute-1 ovn_controller[130490]: 2026-01-20T14:57:44Z|00555|binding|INFO|3067803c-07f3-4a15-a5ee-47f9a770efca: Claiming fa:16:3e:cd:b7:b1 10.100.0.10
Jan 20 14:57:44 compute-1 nova_compute[225855]: 2026-01-20 14:57:44.556 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:57:44 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:57:44.569 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cd:b7:b1 10.100.0.10'], port_security=['fa:16:3e:cd:b7:b1 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '1ebdefed-0903-4d72-b78d-912666c5ce61', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-58d966e1-4d26-414a-920e-0be2d77abb59', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '107c1f3b5b7b413d9a389ca1166e331f', 'neutron:revision_number': '2', 'neutron:security_group_ids': '207accdf-2d5c-48e9-bf02-5dfcc7d28063', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9a2edf59-0338-43ad-aa77-d6a806c781a6, chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], logical_port=3067803c-07f3-4a15-a5ee-47f9a770efca) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:57:44 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:57:44.571 140354 INFO neutron.agent.ovn.metadata.agent [-] Port 3067803c-07f3-4a15-a5ee-47f9a770efca in datapath 58d966e1-4d26-414a-920e-0be2d77abb59 bound to our chassis
Jan 20 14:57:44 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:57:44.573 140354 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 58d966e1-4d26-414a-920e-0be2d77abb59
Jan 20 14:57:44 compute-1 nova_compute[225855]: 2026-01-20 14:57:44.580 225859 DEBUG oslo_concurrency.lockutils [req-14f5a19f-bf01-4364-9068-6d17f04bf8d0 req-12031be3-e681-499f-bd66-2c01dbc0736c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-1ebdefed-0903-4d72-b78d-912666c5ce61" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:57:44 compute-1 systemd-machined[194361]: New machine qemu-66-instance-0000008b.
Jan 20 14:57:44 compute-1 systemd-udevd[280064]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 14:57:44 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:57:44.587 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[85bc5411-7e3f-4c88-99dd-8aa8d9f6653e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:57:44 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:57:44.588 140354 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap58d966e1-41 in ovnmeta-58d966e1-4d26-414a-920e-0be2d77abb59 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 20 14:57:44 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:57:44.590 229707 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap58d966e1-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 20 14:57:44 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:57:44.590 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[ceba51b4-66bf-41fb-9d73-f42e4847bd41]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:57:44 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:57:44.591 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[f160bf01-6a7a-40b8-80b9-b469a4c872e8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:57:44 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:57:44.603 140466 DEBUG oslo.privsep.daemon [-] privsep: reply[a614bb58-4944-4cce-82c3-173d607df94b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:57:44 compute-1 NetworkManager[49104]: <info>  [1768921064.6057] device (tap3067803c-07): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 20 14:57:44 compute-1 NetworkManager[49104]: <info>  [1768921064.6067] device (tap3067803c-07): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 20 14:57:44 compute-1 systemd[1]: Started Virtual Machine qemu-66-instance-0000008b.
Jan 20 14:57:44 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:57:44.631 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[1e519688-760a-49fc-bc55-7a3802aff371]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:57:44 compute-1 nova_compute[225855]: 2026-01-20 14:57:44.642 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:57:44 compute-1 ovn_controller[130490]: 2026-01-20T14:57:44Z|00556|binding|INFO|Setting lport 3067803c-07f3-4a15-a5ee-47f9a770efca ovn-installed in OVS
Jan 20 14:57:44 compute-1 ovn_controller[130490]: 2026-01-20T14:57:44Z|00557|binding|INFO|Setting lport 3067803c-07f3-4a15-a5ee-47f9a770efca up in Southbound
Jan 20 14:57:44 compute-1 nova_compute[225855]: 2026-01-20 14:57:44.648 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:57:44 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:57:44.677 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[22763f14-d4c6-4449-a227-19a5819441f9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:57:44 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:57:44.682 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[7b28a384-cde4-4d31-bfdb-f1a3339ae807]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:57:44 compute-1 NetworkManager[49104]: <info>  [1768921064.6832] manager: (tap58d966e1-40): new Veth device (/org/freedesktop/NetworkManager/Devices/239)
Jan 20 14:57:44 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:57:44.713 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[8f430bad-abed-419c-92d1-8a0dede51621]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:57:44 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:57:44.716 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[c393bf1d-b34e-4bf8-9de6-52dc967814fa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:57:44 compute-1 NetworkManager[49104]: <info>  [1768921064.7372] device (tap58d966e1-40): carrier: link connected
Jan 20 14:57:44 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:57:44.741 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[61d0a49c-5d16-4a65-bce8-db9f09e202c0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:57:44 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:57:44.761 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[bc2839f9-28f1-4092-8bbc-7bf652f749b8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap58d966e1-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c9:c8:2a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 159], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 610366, 'reachable_time': 35351, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 280096, 'error': None, 'target': 'ovnmeta-58d966e1-4d26-414a-920e-0be2d77abb59', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:57:44 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:57:44.776 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[bb2b4e4a-c123-4cf8-bfb4-713feb6061f3]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fec9:c82a'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 610366, 'tstamp': 610366}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 280097, 'error': None, 'target': 'ovnmeta-58d966e1-4d26-414a-920e-0be2d77abb59', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:57:44 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:57:44.796 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[2dcc0e89-b368-441f-afce-811d45de02a3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap58d966e1-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c9:c8:2a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 159], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 610366, 'reachable_time': 35351, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 280105, 'error': None, 'target': 'ovnmeta-58d966e1-4d26-414a-920e-0be2d77abb59', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:57:44 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:57:44.827 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[2647c2a4-5ba8-4194-a29d-2a9153abf2e8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:57:44 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:57:44.897 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[e56a2aa4-9007-4a60-97c0-98ce7639d002]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:57:44 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:57:44.899 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap58d966e1-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:57:44 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:57:44.899 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:57:44 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:57:44.899 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap58d966e1-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:57:44 compute-1 nova_compute[225855]: 2026-01-20 14:57:44.955 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:57:44 compute-1 NetworkManager[49104]: <info>  [1768921064.9562] manager: (tap58d966e1-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/240)
Jan 20 14:57:44 compute-1 kernel: tap58d966e1-40: entered promiscuous mode
Jan 20 14:57:44 compute-1 nova_compute[225855]: 2026-01-20 14:57:44.959 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:57:44 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:57:44.961 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap58d966e1-40, col_values=(('external_ids', {'iface-id': '1623097d-35b0-4d71-9dc2-c4d659492102'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:57:44 compute-1 nova_compute[225855]: 2026-01-20 14:57:44.963 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:57:44 compute-1 ovn_controller[130490]: 2026-01-20T14:57:44Z|00558|binding|INFO|Releasing lport 1623097d-35b0-4d71-9dc2-c4d659492102 from this chassis (sb_readonly=0)
Jan 20 14:57:44 compute-1 nova_compute[225855]: 2026-01-20 14:57:44.964 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:57:44 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:57:44.964 140354 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/58d966e1-4d26-414a-920e-0be2d77abb59.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/58d966e1-4d26-414a-920e-0be2d77abb59.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 20 14:57:44 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:57:44.965 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[e9cf55f3-66ca-4efa-b667-5931b46097a7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:57:44 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:57:44.966 140354 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 20 14:57:44 compute-1 ovn_metadata_agent[140349]: global
Jan 20 14:57:44 compute-1 ovn_metadata_agent[140349]:     log         /dev/log local0 debug
Jan 20 14:57:44 compute-1 ovn_metadata_agent[140349]:     log-tag     haproxy-metadata-proxy-58d966e1-4d26-414a-920e-0be2d77abb59
Jan 20 14:57:44 compute-1 ovn_metadata_agent[140349]:     user        root
Jan 20 14:57:44 compute-1 ovn_metadata_agent[140349]:     group       root
Jan 20 14:57:44 compute-1 ovn_metadata_agent[140349]:     maxconn     1024
Jan 20 14:57:44 compute-1 ovn_metadata_agent[140349]:     pidfile     /var/lib/neutron/external/pids/58d966e1-4d26-414a-920e-0be2d77abb59.pid.haproxy
Jan 20 14:57:44 compute-1 ovn_metadata_agent[140349]:     daemon
Jan 20 14:57:44 compute-1 ovn_metadata_agent[140349]: 
Jan 20 14:57:44 compute-1 ovn_metadata_agent[140349]: defaults
Jan 20 14:57:44 compute-1 ovn_metadata_agent[140349]:     log global
Jan 20 14:57:44 compute-1 ovn_metadata_agent[140349]:     mode http
Jan 20 14:57:44 compute-1 ovn_metadata_agent[140349]:     option httplog
Jan 20 14:57:44 compute-1 ovn_metadata_agent[140349]:     option dontlognull
Jan 20 14:57:44 compute-1 ovn_metadata_agent[140349]:     option http-server-close
Jan 20 14:57:44 compute-1 ovn_metadata_agent[140349]:     option forwardfor
Jan 20 14:57:44 compute-1 ovn_metadata_agent[140349]:     retries                 3
Jan 20 14:57:44 compute-1 ovn_metadata_agent[140349]:     timeout http-request    30s
Jan 20 14:57:44 compute-1 ovn_metadata_agent[140349]:     timeout connect         30s
Jan 20 14:57:44 compute-1 ovn_metadata_agent[140349]:     timeout client          32s
Jan 20 14:57:44 compute-1 ovn_metadata_agent[140349]:     timeout server          32s
Jan 20 14:57:44 compute-1 ovn_metadata_agent[140349]:     timeout http-keep-alive 30s
Jan 20 14:57:44 compute-1 ovn_metadata_agent[140349]: 
Jan 20 14:57:44 compute-1 ovn_metadata_agent[140349]: 
Jan 20 14:57:44 compute-1 ovn_metadata_agent[140349]: listen listener
Jan 20 14:57:44 compute-1 ovn_metadata_agent[140349]:     bind 169.254.169.254:80
Jan 20 14:57:44 compute-1 ovn_metadata_agent[140349]:     server metadata /var/lib/neutron/metadata_proxy
Jan 20 14:57:44 compute-1 ovn_metadata_agent[140349]:     http-request add-header X-OVN-Network-ID 58d966e1-4d26-414a-920e-0be2d77abb59
Jan 20 14:57:44 compute-1 ovn_metadata_agent[140349]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 20 14:57:44 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:57:44.967 140354 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-58d966e1-4d26-414a-920e-0be2d77abb59', 'env', 'PROCESS_TAG=haproxy-58d966e1-4d26-414a-920e-0be2d77abb59', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/58d966e1-4d26-414a-920e-0be2d77abb59.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 20 14:57:44 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e321 e321: 3 total, 3 up, 3 in
Jan 20 14:57:44 compute-1 nova_compute[225855]: 2026-01-20 14:57:44.980 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:57:45 compute-1 nova_compute[225855]: 2026-01-20 14:57:45.004 225859 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1768921050.003141, 538fe1f0-b666-4b97-b2ef-317adae0a47a => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:57:45 compute-1 nova_compute[225855]: 2026-01-20 14:57:45.004 225859 INFO nova.compute.manager [-] [instance: 538fe1f0-b666-4b97-b2ef-317adae0a47a] VM Stopped (Lifecycle Event)
Jan 20 14:57:45 compute-1 nova_compute[225855]: 2026-01-20 14:57:45.020 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768921065.01918, 1ebdefed-0903-4d72-b78d-912666c5ce61 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:57:45 compute-1 nova_compute[225855]: 2026-01-20 14:57:45.020 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 1ebdefed-0903-4d72-b78d-912666c5ce61] VM Started (Lifecycle Event)
Jan 20 14:57:45 compute-1 nova_compute[225855]: 2026-01-20 14:57:45.034 225859 DEBUG nova.compute.manager [None req-f3c6c40c-0f7f-433b-bf59-38cded4bc938 - - - - - -] [instance: 538fe1f0-b666-4b97-b2ef-317adae0a47a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:57:45 compute-1 nova_compute[225855]: 2026-01-20 14:57:45.054 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 1ebdefed-0903-4d72-b78d-912666c5ce61] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:57:45 compute-1 nova_compute[225855]: 2026-01-20 14:57:45.059 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768921065.019513, 1ebdefed-0903-4d72-b78d-912666c5ce61 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:57:45 compute-1 nova_compute[225855]: 2026-01-20 14:57:45.059 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 1ebdefed-0903-4d72-b78d-912666c5ce61] VM Paused (Lifecycle Event)
Jan 20 14:57:45 compute-1 nova_compute[225855]: 2026-01-20 14:57:45.076 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 1ebdefed-0903-4d72-b78d-912666c5ce61] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:57:45 compute-1 nova_compute[225855]: 2026-01-20 14:57:45.080 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 1ebdefed-0903-4d72-b78d-912666c5ce61] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:57:45 compute-1 nova_compute[225855]: 2026-01-20 14:57:45.098 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 1ebdefed-0903-4d72-b78d-912666c5ce61] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 14:57:45 compute-1 ceph-mon[81775]: pgmap v2169: 321 pgs: 321 active+clean; 472 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 39 KiB/s wr, 157 op/s
Jan 20 14:57:45 compute-1 ceph-mon[81775]: osdmap e321: 3 total, 3 up, 3 in
Jan 20 14:57:45 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/607676245' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 14:57:45 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/607676245' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 14:57:45 compute-1 nova_compute[225855]: 2026-01-20 14:57:45.335 225859 DEBUG nova.network.neutron [None req-0a7677da-1522-4eea-a664-8af5ee58a977 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] [instance: b4c1468d-9914-426a-9464-c1167de53632] Successfully updated port: 7c572239-9b2e-493c-8be5-632f27cc634a _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 20 14:57:45 compute-1 nova_compute[225855]: 2026-01-20 14:57:45.338 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:57:45 compute-1 nova_compute[225855]: 2026-01-20 14:57:45.347 225859 DEBUG oslo_concurrency.lockutils [None req-0a7677da-1522-4eea-a664-8af5ee58a977 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] Acquiring lock "refresh_cache-b4c1468d-9914-426a-9464-c1167de53632" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:57:45 compute-1 nova_compute[225855]: 2026-01-20 14:57:45.347 225859 DEBUG oslo_concurrency.lockutils [None req-0a7677da-1522-4eea-a664-8af5ee58a977 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] Acquired lock "refresh_cache-b4c1468d-9914-426a-9464-c1167de53632" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:57:45 compute-1 nova_compute[225855]: 2026-01-20 14:57:45.347 225859 DEBUG nova.network.neutron [None req-0a7677da-1522-4eea-a664-8af5ee58a977 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] [instance: b4c1468d-9914-426a-9464-c1167de53632] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 14:57:45 compute-1 podman[280172]: 2026-01-20 14:57:45.381598484 +0000 UTC m=+0.060356254 container create f09eb4085a157e25ca55ec28f9deac3d8ef6af0026f2f7a138323503b9f81ddc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-58d966e1-4d26-414a-920e-0be2d77abb59, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 20 14:57:45 compute-1 systemd[1]: Started libpod-conmon-f09eb4085a157e25ca55ec28f9deac3d8ef6af0026f2f7a138323503b9f81ddc.scope.
Jan 20 14:57:45 compute-1 podman[280172]: 2026-01-20 14:57:45.353161812 +0000 UTC m=+0.031919602 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 20 14:57:45 compute-1 systemd[1]: Started libcrun container.
Jan 20 14:57:45 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad0bfac44a2e71e205ef5911174c9794e1609c1289dfb400ba4190926919b056/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 20 14:57:45 compute-1 podman[280172]: 2026-01-20 14:57:45.471944653 +0000 UTC m=+0.150702513 container init f09eb4085a157e25ca55ec28f9deac3d8ef6af0026f2f7a138323503b9f81ddc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-58d966e1-4d26-414a-920e-0be2d77abb59, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS)
Jan 20 14:57:45 compute-1 podman[280172]: 2026-01-20 14:57:45.47893925 +0000 UTC m=+0.157697060 container start f09eb4085a157e25ca55ec28f9deac3d8ef6af0026f2f7a138323503b9f81ddc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-58d966e1-4d26-414a-920e-0be2d77abb59, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Jan 20 14:57:45 compute-1 neutron-haproxy-ovnmeta-58d966e1-4d26-414a-920e-0be2d77abb59[280188]: [NOTICE]   (280192) : New worker (280194) forked
Jan 20 14:57:45 compute-1 neutron-haproxy-ovnmeta-58d966e1-4d26-414a-920e-0be2d77abb59[280188]: [NOTICE]   (280192) : Loading success.
Jan 20 14:57:45 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:57:45 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:57:45 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:57:45.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:57:45 compute-1 nova_compute[225855]: 2026-01-20 14:57:45.624 225859 DEBUG nova.compute.manager [req-f4ced9dd-9f62-47e7-9fc5-556d98108fde req-dbecc6d7-69fd-41aa-9235-62d0c43035af 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 1ebdefed-0903-4d72-b78d-912666c5ce61] Received event network-vif-plugged-3067803c-07f3-4a15-a5ee-47f9a770efca external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:57:45 compute-1 nova_compute[225855]: 2026-01-20 14:57:45.625 225859 DEBUG oslo_concurrency.lockutils [req-f4ced9dd-9f62-47e7-9fc5-556d98108fde req-dbecc6d7-69fd-41aa-9235-62d0c43035af 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "1ebdefed-0903-4d72-b78d-912666c5ce61-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:57:45 compute-1 nova_compute[225855]: 2026-01-20 14:57:45.626 225859 DEBUG oslo_concurrency.lockutils [req-f4ced9dd-9f62-47e7-9fc5-556d98108fde req-dbecc6d7-69fd-41aa-9235-62d0c43035af 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "1ebdefed-0903-4d72-b78d-912666c5ce61-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:57:45 compute-1 nova_compute[225855]: 2026-01-20 14:57:45.627 225859 DEBUG oslo_concurrency.lockutils [req-f4ced9dd-9f62-47e7-9fc5-556d98108fde req-dbecc6d7-69fd-41aa-9235-62d0c43035af 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "1ebdefed-0903-4d72-b78d-912666c5ce61-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:57:45 compute-1 nova_compute[225855]: 2026-01-20 14:57:45.627 225859 DEBUG nova.compute.manager [req-f4ced9dd-9f62-47e7-9fc5-556d98108fde req-dbecc6d7-69fd-41aa-9235-62d0c43035af 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 1ebdefed-0903-4d72-b78d-912666c5ce61] Processing event network-vif-plugged-3067803c-07f3-4a15-a5ee-47f9a770efca _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 20 14:57:45 compute-1 nova_compute[225855]: 2026-01-20 14:57:45.628 225859 DEBUG nova.compute.manager [req-f4ced9dd-9f62-47e7-9fc5-556d98108fde req-dbecc6d7-69fd-41aa-9235-62d0c43035af 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b4c1468d-9914-426a-9464-c1167de53632] Received event network-changed-7c572239-9b2e-493c-8be5-632f27cc634a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:57:45 compute-1 nova_compute[225855]: 2026-01-20 14:57:45.628 225859 DEBUG nova.compute.manager [req-f4ced9dd-9f62-47e7-9fc5-556d98108fde req-dbecc6d7-69fd-41aa-9235-62d0c43035af 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b4c1468d-9914-426a-9464-c1167de53632] Refreshing instance network info cache due to event network-changed-7c572239-9b2e-493c-8be5-632f27cc634a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 14:57:45 compute-1 nova_compute[225855]: 2026-01-20 14:57:45.629 225859 DEBUG oslo_concurrency.lockutils [req-f4ced9dd-9f62-47e7-9fc5-556d98108fde req-dbecc6d7-69fd-41aa-9235-62d0c43035af 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-b4c1468d-9914-426a-9464-c1167de53632" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:57:45 compute-1 nova_compute[225855]: 2026-01-20 14:57:45.630 225859 DEBUG nova.compute.manager [None req-6ee258f6-71b3-490f-ac4f-856603e106f3 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] [instance: 1ebdefed-0903-4d72-b78d-912666c5ce61] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 20 14:57:45 compute-1 nova_compute[225855]: 2026-01-20 14:57:45.635 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768921065.6355221, 1ebdefed-0903-4d72-b78d-912666c5ce61 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:57:45 compute-1 nova_compute[225855]: 2026-01-20 14:57:45.636 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 1ebdefed-0903-4d72-b78d-912666c5ce61] VM Resumed (Lifecycle Event)
Jan 20 14:57:45 compute-1 nova_compute[225855]: 2026-01-20 14:57:45.639 225859 DEBUG nova.virt.libvirt.driver [None req-6ee258f6-71b3-490f-ac4f-856603e106f3 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] [instance: 1ebdefed-0903-4d72-b78d-912666c5ce61] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 20 14:57:45 compute-1 nova_compute[225855]: 2026-01-20 14:57:45.641 225859 INFO nova.virt.libvirt.driver [-] [instance: 1ebdefed-0903-4d72-b78d-912666c5ce61] Instance spawned successfully.
Jan 20 14:57:45 compute-1 nova_compute[225855]: 2026-01-20 14:57:45.642 225859 DEBUG nova.virt.libvirt.driver [None req-6ee258f6-71b3-490f-ac4f-856603e106f3 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] [instance: 1ebdefed-0903-4d72-b78d-912666c5ce61] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 20 14:57:45 compute-1 nova_compute[225855]: 2026-01-20 14:57:45.657 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 1ebdefed-0903-4d72-b78d-912666c5ce61] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:57:45 compute-1 nova_compute[225855]: 2026-01-20 14:57:45.665 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 1ebdefed-0903-4d72-b78d-912666c5ce61] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:57:45 compute-1 nova_compute[225855]: 2026-01-20 14:57:45.669 225859 DEBUG nova.virt.libvirt.driver [None req-6ee258f6-71b3-490f-ac4f-856603e106f3 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] [instance: 1ebdefed-0903-4d72-b78d-912666c5ce61] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:57:45 compute-1 nova_compute[225855]: 2026-01-20 14:57:45.669 225859 DEBUG nova.virt.libvirt.driver [None req-6ee258f6-71b3-490f-ac4f-856603e106f3 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] [instance: 1ebdefed-0903-4d72-b78d-912666c5ce61] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:57:45 compute-1 nova_compute[225855]: 2026-01-20 14:57:45.670 225859 DEBUG nova.virt.libvirt.driver [None req-6ee258f6-71b3-490f-ac4f-856603e106f3 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] [instance: 1ebdefed-0903-4d72-b78d-912666c5ce61] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:57:45 compute-1 nova_compute[225855]: 2026-01-20 14:57:45.670 225859 DEBUG nova.virt.libvirt.driver [None req-6ee258f6-71b3-490f-ac4f-856603e106f3 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] [instance: 1ebdefed-0903-4d72-b78d-912666c5ce61] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:57:45 compute-1 nova_compute[225855]: 2026-01-20 14:57:45.670 225859 DEBUG nova.virt.libvirt.driver [None req-6ee258f6-71b3-490f-ac4f-856603e106f3 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] [instance: 1ebdefed-0903-4d72-b78d-912666c5ce61] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:57:45 compute-1 nova_compute[225855]: 2026-01-20 14:57:45.671 225859 DEBUG nova.virt.libvirt.driver [None req-6ee258f6-71b3-490f-ac4f-856603e106f3 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] [instance: 1ebdefed-0903-4d72-b78d-912666c5ce61] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:57:45 compute-1 nova_compute[225855]: 2026-01-20 14:57:45.705 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 1ebdefed-0903-4d72-b78d-912666c5ce61] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 14:57:45 compute-1 nova_compute[225855]: 2026-01-20 14:57:45.739 225859 INFO nova.compute.manager [None req-6ee258f6-71b3-490f-ac4f-856603e106f3 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] [instance: 1ebdefed-0903-4d72-b78d-912666c5ce61] Took 6.19 seconds to spawn the instance on the hypervisor.
Jan 20 14:57:45 compute-1 nova_compute[225855]: 2026-01-20 14:57:45.740 225859 DEBUG nova.compute.manager [None req-6ee258f6-71b3-490f-ac4f-856603e106f3 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] [instance: 1ebdefed-0903-4d72-b78d-912666c5ce61] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:57:45 compute-1 nova_compute[225855]: 2026-01-20 14:57:45.817 225859 INFO nova.compute.manager [None req-6ee258f6-71b3-490f-ac4f-856603e106f3 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] [instance: 1ebdefed-0903-4d72-b78d-912666c5ce61] Took 8.39 seconds to build instance.
Jan 20 14:57:45 compute-1 nova_compute[225855]: 2026-01-20 14:57:45.852 225859 DEBUG oslo_concurrency.lockutils [None req-6ee258f6-71b3-490f-ac4f-856603e106f3 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] Lock "1ebdefed-0903-4d72-b78d-912666c5ce61" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.512s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:57:45 compute-1 nova_compute[225855]: 2026-01-20 14:57:45.879 225859 DEBUG nova.network.neutron [None req-0a7677da-1522-4eea-a664-8af5ee58a977 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] [instance: b4c1468d-9914-426a-9464-c1167de53632] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 20 14:57:46 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:57:46 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:57:46 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:57:46.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:57:46 compute-1 nova_compute[225855]: 2026-01-20 14:57:46.268 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:57:46 compute-1 ceph-mon[81775]: pgmap v2171: 321 pgs: 321 active+clean; 406 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 60 KiB/s wr, 267 op/s
Jan 20 14:57:46 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/3342654723' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:57:46 compute-1 nova_compute[225855]: 2026-01-20 14:57:46.935 225859 DEBUG nova.network.neutron [None req-0a7677da-1522-4eea-a664-8af5ee58a977 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] [instance: b4c1468d-9914-426a-9464-c1167de53632] Updating instance_info_cache with network_info: [{"id": "7c572239-9b2e-493c-8be5-632f27cc634a", "address": "fa:16:3e:d9:6a:1f", "network": {"id": "58d966e1-4d26-414a-920e-0be2d77abb59", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-1896990059-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "107c1f3b5b7b413d9a389ca1166e331f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7c572239-9b", "ovs_interfaceid": "7c572239-9b2e-493c-8be5-632f27cc634a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:57:46 compute-1 nova_compute[225855]: 2026-01-20 14:57:46.974 225859 DEBUG oslo_concurrency.lockutils [None req-0a7677da-1522-4eea-a664-8af5ee58a977 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] Releasing lock "refresh_cache-b4c1468d-9914-426a-9464-c1167de53632" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:57:46 compute-1 nova_compute[225855]: 2026-01-20 14:57:46.975 225859 DEBUG nova.compute.manager [None req-0a7677da-1522-4eea-a664-8af5ee58a977 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] [instance: b4c1468d-9914-426a-9464-c1167de53632] Instance network_info: |[{"id": "7c572239-9b2e-493c-8be5-632f27cc634a", "address": "fa:16:3e:d9:6a:1f", "network": {"id": "58d966e1-4d26-414a-920e-0be2d77abb59", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-1896990059-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "107c1f3b5b7b413d9a389ca1166e331f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7c572239-9b", "ovs_interfaceid": "7c572239-9b2e-493c-8be5-632f27cc634a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 20 14:57:46 compute-1 nova_compute[225855]: 2026-01-20 14:57:46.975 225859 DEBUG oslo_concurrency.lockutils [req-f4ced9dd-9f62-47e7-9fc5-556d98108fde req-dbecc6d7-69fd-41aa-9235-62d0c43035af 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-b4c1468d-9914-426a-9464-c1167de53632" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:57:46 compute-1 nova_compute[225855]: 2026-01-20 14:57:46.976 225859 DEBUG nova.network.neutron [req-f4ced9dd-9f62-47e7-9fc5-556d98108fde req-dbecc6d7-69fd-41aa-9235-62d0c43035af 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b4c1468d-9914-426a-9464-c1167de53632] Refreshing network info cache for port 7c572239-9b2e-493c-8be5-632f27cc634a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 14:57:46 compute-1 nova_compute[225855]: 2026-01-20 14:57:46.979 225859 DEBUG nova.virt.libvirt.driver [None req-0a7677da-1522-4eea-a664-8af5ee58a977 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] [instance: b4c1468d-9914-426a-9464-c1167de53632] Start _get_guest_xml network_info=[{"id": "7c572239-9b2e-493c-8be5-632f27cc634a", "address": "fa:16:3e:d9:6a:1f", "network": {"id": "58d966e1-4d26-414a-920e-0be2d77abb59", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-1896990059-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "107c1f3b5b7b413d9a389ca1166e331f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7c572239-9b", "ovs_interfaceid": "7c572239-9b2e-493c-8be5-632f27cc634a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'delete_on_termination': False, 'device_type': 'disk', 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-002e39e3-1bec-4033-aca2-f1428e495087', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '002e39e3-1bec-4033-aca2-f1428e495087', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': 'b4c1468d-9914-426a-9464-c1167de53632', 'attached_at': '', 'detached_at': '', 'volume_id': '002e39e3-1bec-4033-aca2-f1428e495087', 'serial': '002e39e3-1bec-4033-aca2-f1428e495087'}, 'guest_format': None, 'boot_index': 0, 'mount_device': '/dev/vda', 'attachment_id': 'b8b8cc31-54c0-4f4d-80cc-6fca4e9cae9f', 'disk_bus': 'virtio', 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 20 14:57:46 compute-1 nova_compute[225855]: 2026-01-20 14:57:46.984 225859 WARNING nova.virt.libvirt.driver [None req-0a7677da-1522-4eea-a664-8af5ee58a977 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 14:57:46 compute-1 nova_compute[225855]: 2026-01-20 14:57:46.989 225859 DEBUG nova.virt.libvirt.host [None req-0a7677da-1522-4eea-a664-8af5ee58a977 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 20 14:57:46 compute-1 nova_compute[225855]: 2026-01-20 14:57:46.990 225859 DEBUG nova.virt.libvirt.host [None req-0a7677da-1522-4eea-a664-8af5ee58a977 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 20 14:57:47 compute-1 nova_compute[225855]: 2026-01-20 14:57:46.992 225859 DEBUG nova.virt.libvirt.host [None req-0a7677da-1522-4eea-a664-8af5ee58a977 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 20 14:57:47 compute-1 nova_compute[225855]: 2026-01-20 14:57:46.992 225859 DEBUG nova.virt.libvirt.host [None req-0a7677da-1522-4eea-a664-8af5ee58a977 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 20 14:57:47 compute-1 nova_compute[225855]: 2026-01-20 14:57:46.993 225859 DEBUG nova.virt.libvirt.driver [None req-0a7677da-1522-4eea-a664-8af5ee58a977 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 20 14:57:47 compute-1 nova_compute[225855]: 2026-01-20 14:57:46.993 225859 DEBUG nova.virt.hardware [None req-0a7677da-1522-4eea-a664-8af5ee58a977 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-20T14:21:55Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='522deaab-a741-4dbb-932d-d8b13a211c33',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 20 14:57:47 compute-1 nova_compute[225855]: 2026-01-20 14:57:46.994 225859 DEBUG nova.virt.hardware [None req-0a7677da-1522-4eea-a664-8af5ee58a977 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 20 14:57:47 compute-1 nova_compute[225855]: 2026-01-20 14:57:46.994 225859 DEBUG nova.virt.hardware [None req-0a7677da-1522-4eea-a664-8af5ee58a977 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 20 14:57:47 compute-1 nova_compute[225855]: 2026-01-20 14:57:46.994 225859 DEBUG nova.virt.hardware [None req-0a7677da-1522-4eea-a664-8af5ee58a977 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 20 14:57:47 compute-1 nova_compute[225855]: 2026-01-20 14:57:46.994 225859 DEBUG nova.virt.hardware [None req-0a7677da-1522-4eea-a664-8af5ee58a977 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 20 14:57:47 compute-1 nova_compute[225855]: 2026-01-20 14:57:46.994 225859 DEBUG nova.virt.hardware [None req-0a7677da-1522-4eea-a664-8af5ee58a977 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 20 14:57:47 compute-1 nova_compute[225855]: 2026-01-20 14:57:46.995 225859 DEBUG nova.virt.hardware [None req-0a7677da-1522-4eea-a664-8af5ee58a977 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 20 14:57:47 compute-1 nova_compute[225855]: 2026-01-20 14:57:46.995 225859 DEBUG nova.virt.hardware [None req-0a7677da-1522-4eea-a664-8af5ee58a977 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 20 14:57:47 compute-1 nova_compute[225855]: 2026-01-20 14:57:46.995 225859 DEBUG nova.virt.hardware [None req-0a7677da-1522-4eea-a664-8af5ee58a977 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 20 14:57:47 compute-1 nova_compute[225855]: 2026-01-20 14:57:46.995 225859 DEBUG nova.virt.hardware [None req-0a7677da-1522-4eea-a664-8af5ee58a977 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 20 14:57:47 compute-1 nova_compute[225855]: 2026-01-20 14:57:46.995 225859 DEBUG nova.virt.hardware [None req-0a7677da-1522-4eea-a664-8af5ee58a977 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 20 14:57:47 compute-1 nova_compute[225855]: 2026-01-20 14:57:47.023 225859 DEBUG nova.storage.rbd_utils [None req-0a7677da-1522-4eea-a664-8af5ee58a977 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] rbd image b4c1468d-9914-426a-9464-c1167de53632_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:57:47 compute-1 nova_compute[225855]: 2026-01-20 14:57:47.027 225859 DEBUG oslo_concurrency.processutils [None req-0a7677da-1522-4eea-a664-8af5ee58a977 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:57:47 compute-1 podman[280204]: 2026-01-20 14:57:47.05417519 +0000 UTC m=+0.086643436 container health_status 72be95c12e041eb2cd1a16ad70dfa1391d9f03a5f3180a614ead9459dcfcd46f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Jan 20 14:57:47 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/1436669365' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:57:47 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/1505514556' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:57:47 compute-1 nova_compute[225855]: 2026-01-20 14:57:47.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:57:47 compute-1 nova_compute[225855]: 2026-01-20 14:57:47.385 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:57:47 compute-1 nova_compute[225855]: 2026-01-20 14:57:47.385 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:57:47 compute-1 nova_compute[225855]: 2026-01-20 14:57:47.386 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:57:47 compute-1 nova_compute[225855]: 2026-01-20 14:57:47.386 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 14:57:47 compute-1 nova_compute[225855]: 2026-01-20 14:57:47.386 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:57:47 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:57:47 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2911748478' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:57:47 compute-1 nova_compute[225855]: 2026-01-20 14:57:47.516 225859 DEBUG oslo_concurrency.processutils [None req-0a7677da-1522-4eea-a664-8af5ee58a977 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:57:47 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:57:47 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:57:47 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:57:47.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:57:47 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e321 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:57:47 compute-1 nova_compute[225855]: 2026-01-20 14:57:47.695 225859 DEBUG nova.virt.libvirt.vif [None req-0a7677da-1522-4eea-a664-8af5ee58a977 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T14:57:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-TestInstancesWithCinderVolumes-server-65714861',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testinstanceswithcindervolumes-server-65714861',id=140,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGBbRGX2xZT3D1ftqdKpZwTwb/ukXbRv/O5UyYYLjii3gk46qsw4SNMi6p0GpNIY5l/f9OSIg9UlRsUFQqLszBoQT2vJic2iOBlI6VLyxyg71obcHOZQEGpjfcTfqUsJeQ==',key_name='tempest-TestInstancesWithCinderVolumes-1812188149',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='107c1f3b5b7b413d9a389ca1166e331f',ramdisk_id='',reservation_id='r-8l7rw241',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestInstancesWithCinderVolumes-1174033615',owner_user_name='tempest-TestInstancesWithCinderVolumes-1174033615-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T14:57:41Z,user_data=None,user_id='ed2c9bd268d1491fa3484d86bcdb9ec6',uuid=b4c1468d-9914-426a-9464-c1167de53632,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7c572239-9b2e-493c-8be5-632f27cc634a", "address": "fa:16:3e:d9:6a:1f", "network": {"id": "58d966e1-4d26-414a-920e-0be2d77abb59", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-1896990059-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "107c1f3b5b7b413d9a389ca1166e331f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7c572239-9b", "ovs_interfaceid": "7c572239-9b2e-493c-8be5-632f27cc634a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 20 14:57:47 compute-1 nova_compute[225855]: 2026-01-20 14:57:47.696 225859 DEBUG nova.network.os_vif_util [None req-0a7677da-1522-4eea-a664-8af5ee58a977 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] Converting VIF {"id": "7c572239-9b2e-493c-8be5-632f27cc634a", "address": "fa:16:3e:d9:6a:1f", "network": {"id": "58d966e1-4d26-414a-920e-0be2d77abb59", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-1896990059-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "107c1f3b5b7b413d9a389ca1166e331f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7c572239-9b", "ovs_interfaceid": "7c572239-9b2e-493c-8be5-632f27cc634a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:57:47 compute-1 nova_compute[225855]: 2026-01-20 14:57:47.697 225859 DEBUG nova.network.os_vif_util [None req-0a7677da-1522-4eea-a664-8af5ee58a977 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d9:6a:1f,bridge_name='br-int',has_traffic_filtering=True,id=7c572239-9b2e-493c-8be5-632f27cc634a,network=Network(58d966e1-4d26-414a-920e-0be2d77abb59),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7c572239-9b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:57:47 compute-1 nova_compute[225855]: 2026-01-20 14:57:47.698 225859 DEBUG nova.objects.instance [None req-0a7677da-1522-4eea-a664-8af5ee58a977 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] Lazy-loading 'pci_devices' on Instance uuid b4c1468d-9914-426a-9464-c1167de53632 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:57:47 compute-1 nova_compute[225855]: 2026-01-20 14:57:47.751 225859 DEBUG nova.virt.libvirt.driver [None req-0a7677da-1522-4eea-a664-8af5ee58a977 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] [instance: b4c1468d-9914-426a-9464-c1167de53632] End _get_guest_xml xml=<domain type="kvm">
Jan 20 14:57:47 compute-1 nova_compute[225855]:   <uuid>b4c1468d-9914-426a-9464-c1167de53632</uuid>
Jan 20 14:57:47 compute-1 nova_compute[225855]:   <name>instance-0000008c</name>
Jan 20 14:57:47 compute-1 nova_compute[225855]:   <memory>131072</memory>
Jan 20 14:57:47 compute-1 nova_compute[225855]:   <vcpu>1</vcpu>
Jan 20 14:57:47 compute-1 nova_compute[225855]:   <metadata>
Jan 20 14:57:47 compute-1 nova_compute[225855]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 14:57:47 compute-1 nova_compute[225855]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 14:57:47 compute-1 nova_compute[225855]:       <nova:name>tempest-TestInstancesWithCinderVolumes-server-65714861</nova:name>
Jan 20 14:57:47 compute-1 nova_compute[225855]:       <nova:creationTime>2026-01-20 14:57:46</nova:creationTime>
Jan 20 14:57:47 compute-1 nova_compute[225855]:       <nova:flavor name="m1.nano">
Jan 20 14:57:47 compute-1 nova_compute[225855]:         <nova:memory>128</nova:memory>
Jan 20 14:57:47 compute-1 nova_compute[225855]:         <nova:disk>1</nova:disk>
Jan 20 14:57:47 compute-1 nova_compute[225855]:         <nova:swap>0</nova:swap>
Jan 20 14:57:47 compute-1 nova_compute[225855]:         <nova:ephemeral>0</nova:ephemeral>
Jan 20 14:57:47 compute-1 nova_compute[225855]:         <nova:vcpus>1</nova:vcpus>
Jan 20 14:57:47 compute-1 nova_compute[225855]:       </nova:flavor>
Jan 20 14:57:47 compute-1 nova_compute[225855]:       <nova:owner>
Jan 20 14:57:47 compute-1 nova_compute[225855]:         <nova:user uuid="ed2c9bd268d1491fa3484d86bcdb9ec6">tempest-TestInstancesWithCinderVolumes-1174033615-project-member</nova:user>
Jan 20 14:57:47 compute-1 nova_compute[225855]:         <nova:project uuid="107c1f3b5b7b413d9a389ca1166e331f">tempest-TestInstancesWithCinderVolumes-1174033615</nova:project>
Jan 20 14:57:47 compute-1 nova_compute[225855]:       </nova:owner>
Jan 20 14:57:47 compute-1 nova_compute[225855]:       <nova:ports>
Jan 20 14:57:47 compute-1 nova_compute[225855]:         <nova:port uuid="7c572239-9b2e-493c-8be5-632f27cc634a">
Jan 20 14:57:47 compute-1 nova_compute[225855]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Jan 20 14:57:47 compute-1 nova_compute[225855]:         </nova:port>
Jan 20 14:57:47 compute-1 nova_compute[225855]:       </nova:ports>
Jan 20 14:57:47 compute-1 nova_compute[225855]:     </nova:instance>
Jan 20 14:57:47 compute-1 nova_compute[225855]:   </metadata>
Jan 20 14:57:47 compute-1 nova_compute[225855]:   <sysinfo type="smbios">
Jan 20 14:57:47 compute-1 nova_compute[225855]:     <system>
Jan 20 14:57:47 compute-1 nova_compute[225855]:       <entry name="manufacturer">RDO</entry>
Jan 20 14:57:47 compute-1 nova_compute[225855]:       <entry name="product">OpenStack Compute</entry>
Jan 20 14:57:47 compute-1 nova_compute[225855]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 14:57:47 compute-1 nova_compute[225855]:       <entry name="serial">b4c1468d-9914-426a-9464-c1167de53632</entry>
Jan 20 14:57:47 compute-1 nova_compute[225855]:       <entry name="uuid">b4c1468d-9914-426a-9464-c1167de53632</entry>
Jan 20 14:57:47 compute-1 nova_compute[225855]:       <entry name="family">Virtual Machine</entry>
Jan 20 14:57:47 compute-1 nova_compute[225855]:     </system>
Jan 20 14:57:47 compute-1 nova_compute[225855]:   </sysinfo>
Jan 20 14:57:47 compute-1 nova_compute[225855]:   <os>
Jan 20 14:57:47 compute-1 nova_compute[225855]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 20 14:57:47 compute-1 nova_compute[225855]:     <boot dev="hd"/>
Jan 20 14:57:47 compute-1 nova_compute[225855]:     <smbios mode="sysinfo"/>
Jan 20 14:57:47 compute-1 nova_compute[225855]:   </os>
Jan 20 14:57:47 compute-1 nova_compute[225855]:   <features>
Jan 20 14:57:47 compute-1 nova_compute[225855]:     <acpi/>
Jan 20 14:57:47 compute-1 nova_compute[225855]:     <apic/>
Jan 20 14:57:47 compute-1 nova_compute[225855]:     <vmcoreinfo/>
Jan 20 14:57:47 compute-1 nova_compute[225855]:   </features>
Jan 20 14:57:47 compute-1 nova_compute[225855]:   <clock offset="utc">
Jan 20 14:57:47 compute-1 nova_compute[225855]:     <timer name="pit" tickpolicy="delay"/>
Jan 20 14:57:47 compute-1 nova_compute[225855]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 20 14:57:47 compute-1 nova_compute[225855]:     <timer name="hpet" present="no"/>
Jan 20 14:57:47 compute-1 nova_compute[225855]:   </clock>
Jan 20 14:57:47 compute-1 nova_compute[225855]:   <cpu mode="custom" match="exact">
Jan 20 14:57:47 compute-1 nova_compute[225855]:     <model>Nehalem</model>
Jan 20 14:57:47 compute-1 nova_compute[225855]:     <topology sockets="1" cores="1" threads="1"/>
Jan 20 14:57:47 compute-1 nova_compute[225855]:   </cpu>
Jan 20 14:57:47 compute-1 nova_compute[225855]:   <devices>
Jan 20 14:57:47 compute-1 nova_compute[225855]:     <disk type="network" device="cdrom">
Jan 20 14:57:47 compute-1 nova_compute[225855]:       <driver type="raw" cache="none"/>
Jan 20 14:57:47 compute-1 nova_compute[225855]:       <source protocol="rbd" name="vms/b4c1468d-9914-426a-9464-c1167de53632_disk.config">
Jan 20 14:57:47 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:57:47 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:57:47 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:57:47 compute-1 nova_compute[225855]:       </source>
Jan 20 14:57:47 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 14:57:47 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:57:47 compute-1 nova_compute[225855]:       </auth>
Jan 20 14:57:47 compute-1 nova_compute[225855]:       <target dev="sda" bus="sata"/>
Jan 20 14:57:47 compute-1 nova_compute[225855]:     </disk>
Jan 20 14:57:47 compute-1 nova_compute[225855]:     <disk type="network" device="disk">
Jan 20 14:57:47 compute-1 nova_compute[225855]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 20 14:57:47 compute-1 nova_compute[225855]:       <source protocol="rbd" name="volumes/volume-002e39e3-1bec-4033-aca2-f1428e495087">
Jan 20 14:57:47 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:57:47 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:57:47 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:57:47 compute-1 nova_compute[225855]:       </source>
Jan 20 14:57:47 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 14:57:47 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:57:47 compute-1 nova_compute[225855]:       </auth>
Jan 20 14:57:47 compute-1 nova_compute[225855]:       <target dev="vda" bus="virtio"/>
Jan 20 14:57:47 compute-1 nova_compute[225855]:       <serial>002e39e3-1bec-4033-aca2-f1428e495087</serial>
Jan 20 14:57:47 compute-1 nova_compute[225855]:     </disk>
Jan 20 14:57:47 compute-1 nova_compute[225855]:     <interface type="ethernet">
Jan 20 14:57:47 compute-1 nova_compute[225855]:       <mac address="fa:16:3e:d9:6a:1f"/>
Jan 20 14:57:47 compute-1 nova_compute[225855]:       <model type="virtio"/>
Jan 20 14:57:47 compute-1 nova_compute[225855]:       <driver name="vhost" rx_queue_size="512"/>
Jan 20 14:57:47 compute-1 nova_compute[225855]:       <mtu size="1442"/>
Jan 20 14:57:47 compute-1 nova_compute[225855]:       <target dev="tap7c572239-9b"/>
Jan 20 14:57:47 compute-1 nova_compute[225855]:     </interface>
Jan 20 14:57:47 compute-1 nova_compute[225855]:     <serial type="pty">
Jan 20 14:57:47 compute-1 nova_compute[225855]:       <log file="/var/lib/nova/instances/b4c1468d-9914-426a-9464-c1167de53632/console.log" append="off"/>
Jan 20 14:57:47 compute-1 nova_compute[225855]:     </serial>
Jan 20 14:57:47 compute-1 nova_compute[225855]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 20 14:57:47 compute-1 nova_compute[225855]:     <video>
Jan 20 14:57:47 compute-1 nova_compute[225855]:       <model type="virtio"/>
Jan 20 14:57:47 compute-1 nova_compute[225855]:     </video>
Jan 20 14:57:47 compute-1 nova_compute[225855]:     <input type="tablet" bus="usb"/>
Jan 20 14:57:47 compute-1 nova_compute[225855]:     <rng model="virtio">
Jan 20 14:57:47 compute-1 nova_compute[225855]:       <backend model="random">/dev/urandom</backend>
Jan 20 14:57:47 compute-1 nova_compute[225855]:     </rng>
Jan 20 14:57:47 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root"/>
Jan 20 14:57:47 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:57:47 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:57:47 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:57:47 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:57:47 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:57:47 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:57:47 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:57:47 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:57:47 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:57:47 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:57:47 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:57:47 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:57:47 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:57:47 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:57:47 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:57:47 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:57:47 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:57:47 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:57:47 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:57:47 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:57:47 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:57:47 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:57:47 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:57:47 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:57:47 compute-1 nova_compute[225855]:     <controller type="usb" index="0"/>
Jan 20 14:57:47 compute-1 nova_compute[225855]:     <memballoon model="virtio">
Jan 20 14:57:47 compute-1 nova_compute[225855]:       <stats period="10"/>
Jan 20 14:57:47 compute-1 nova_compute[225855]:     </memballoon>
Jan 20 14:57:47 compute-1 nova_compute[225855]:   </devices>
Jan 20 14:57:47 compute-1 nova_compute[225855]: </domain>
Jan 20 14:57:47 compute-1 nova_compute[225855]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 20 14:57:47 compute-1 nova_compute[225855]: 2026-01-20 14:57:47.765 225859 DEBUG nova.compute.manager [None req-0a7677da-1522-4eea-a664-8af5ee58a977 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] [instance: b4c1468d-9914-426a-9464-c1167de53632] Preparing to wait for external event network-vif-plugged-7c572239-9b2e-493c-8be5-632f27cc634a prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 20 14:57:47 compute-1 nova_compute[225855]: 2026-01-20 14:57:47.766 225859 DEBUG oslo_concurrency.lockutils [None req-0a7677da-1522-4eea-a664-8af5ee58a977 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] Acquiring lock "b4c1468d-9914-426a-9464-c1167de53632-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:57:47 compute-1 nova_compute[225855]: 2026-01-20 14:57:47.766 225859 DEBUG oslo_concurrency.lockutils [None req-0a7677da-1522-4eea-a664-8af5ee58a977 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] Lock "b4c1468d-9914-426a-9464-c1167de53632-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:57:47 compute-1 nova_compute[225855]: 2026-01-20 14:57:47.766 225859 DEBUG oslo_concurrency.lockutils [None req-0a7677da-1522-4eea-a664-8af5ee58a977 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] Lock "b4c1468d-9914-426a-9464-c1167de53632-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:57:47 compute-1 nova_compute[225855]: 2026-01-20 14:57:47.768 225859 DEBUG nova.virt.libvirt.vif [None req-0a7677da-1522-4eea-a664-8af5ee58a977 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T14:57:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-TestInstancesWithCinderVolumes-server-65714861',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testinstanceswithcindervolumes-server-65714861',id=140,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGBbRGX2xZT3D1ftqdKpZwTwb/ukXbRv/O5UyYYLjii3gk46qsw4SNMi6p0GpNIY5l/f9OSIg9UlRsUFQqLszBoQT2vJic2iOBlI6VLyxyg71obcHOZQEGpjfcTfqUsJeQ==',key_name='tempest-TestInstancesWithCinderVolumes-1812188149',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='107c1f3b5b7b413d9a389ca1166e331f',ramdisk_id='',reservation_id='r-8l7rw241',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestInstancesWithCinderVolumes-1174033615',owner_user_name='tempest-TestInstancesWithCinderVolumes-1174033615-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T14:57:41Z,user_data=None,user_id='ed2c9bd268d1491fa3484d86bcdb9ec6',uuid=b4c1468d-9914-426a-9464-c1167de53632,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7c572239-9b2e-493c-8be5-632f27cc634a", "address": "fa:16:3e:d9:6a:1f", "network": {"id": "58d966e1-4d26-414a-920e-0be2d77abb59", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-1896990059-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "107c1f3b5b7b413d9a389ca1166e331f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7c572239-9b", "ovs_interfaceid": "7c572239-9b2e-493c-8be5-632f27cc634a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 20 14:57:47 compute-1 nova_compute[225855]: 2026-01-20 14:57:47.768 225859 DEBUG nova.network.os_vif_util [None req-0a7677da-1522-4eea-a664-8af5ee58a977 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] Converting VIF {"id": "7c572239-9b2e-493c-8be5-632f27cc634a", "address": "fa:16:3e:d9:6a:1f", "network": {"id": "58d966e1-4d26-414a-920e-0be2d77abb59", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-1896990059-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "107c1f3b5b7b413d9a389ca1166e331f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7c572239-9b", "ovs_interfaceid": "7c572239-9b2e-493c-8be5-632f27cc634a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:57:47 compute-1 nova_compute[225855]: 2026-01-20 14:57:47.770 225859 DEBUG nova.network.os_vif_util [None req-0a7677da-1522-4eea-a664-8af5ee58a977 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d9:6a:1f,bridge_name='br-int',has_traffic_filtering=True,id=7c572239-9b2e-493c-8be5-632f27cc634a,network=Network(58d966e1-4d26-414a-920e-0be2d77abb59),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7c572239-9b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:57:47 compute-1 nova_compute[225855]: 2026-01-20 14:57:47.771 225859 DEBUG os_vif [None req-0a7677da-1522-4eea-a664-8af5ee58a977 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d9:6a:1f,bridge_name='br-int',has_traffic_filtering=True,id=7c572239-9b2e-493c-8be5-632f27cc634a,network=Network(58d966e1-4d26-414a-920e-0be2d77abb59),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7c572239-9b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 20 14:57:47 compute-1 nova_compute[225855]: 2026-01-20 14:57:47.772 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:57:47 compute-1 nova_compute[225855]: 2026-01-20 14:57:47.773 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:57:47 compute-1 nova_compute[225855]: 2026-01-20 14:57:47.774 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:57:47 compute-1 nova_compute[225855]: 2026-01-20 14:57:47.778 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:57:47 compute-1 nova_compute[225855]: 2026-01-20 14:57:47.779 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7c572239-9b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:57:47 compute-1 nova_compute[225855]: 2026-01-20 14:57:47.781 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap7c572239-9b, col_values=(('external_ids', {'iface-id': '7c572239-9b2e-493c-8be5-632f27cc634a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d9:6a:1f', 'vm-uuid': 'b4c1468d-9914-426a-9464-c1167de53632'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:57:47 compute-1 nova_compute[225855]: 2026-01-20 14:57:47.783 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:57:47 compute-1 NetworkManager[49104]: <info>  [1768921067.7845] manager: (tap7c572239-9b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/241)
Jan 20 14:57:47 compute-1 nova_compute[225855]: 2026-01-20 14:57:47.789 225859 DEBUG nova.compute.manager [req-929ba64e-9ddf-49e3-890a-8757a6aa0e6c req-bd027a3e-3670-4dc9-a6ae-08ac1e85deb8 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 1ebdefed-0903-4d72-b78d-912666c5ce61] Received event network-vif-plugged-3067803c-07f3-4a15-a5ee-47f9a770efca external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:57:47 compute-1 nova_compute[225855]: 2026-01-20 14:57:47.790 225859 DEBUG oslo_concurrency.lockutils [req-929ba64e-9ddf-49e3-890a-8757a6aa0e6c req-bd027a3e-3670-4dc9-a6ae-08ac1e85deb8 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "1ebdefed-0903-4d72-b78d-912666c5ce61-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:57:47 compute-1 nova_compute[225855]: 2026-01-20 14:57:47.791 225859 DEBUG oslo_concurrency.lockutils [req-929ba64e-9ddf-49e3-890a-8757a6aa0e6c req-bd027a3e-3670-4dc9-a6ae-08ac1e85deb8 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "1ebdefed-0903-4d72-b78d-912666c5ce61-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:57:47 compute-1 nova_compute[225855]: 2026-01-20 14:57:47.791 225859 DEBUG oslo_concurrency.lockutils [req-929ba64e-9ddf-49e3-890a-8757a6aa0e6c req-bd027a3e-3670-4dc9-a6ae-08ac1e85deb8 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "1ebdefed-0903-4d72-b78d-912666c5ce61-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:57:47 compute-1 nova_compute[225855]: 2026-01-20 14:57:47.792 225859 DEBUG nova.compute.manager [req-929ba64e-9ddf-49e3-890a-8757a6aa0e6c req-bd027a3e-3670-4dc9-a6ae-08ac1e85deb8 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 1ebdefed-0903-4d72-b78d-912666c5ce61] No waiting events found dispatching network-vif-plugged-3067803c-07f3-4a15-a5ee-47f9a770efca pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:57:47 compute-1 nova_compute[225855]: 2026-01-20 14:57:47.792 225859 WARNING nova.compute.manager [req-929ba64e-9ddf-49e3-890a-8757a6aa0e6c req-bd027a3e-3670-4dc9-a6ae-08ac1e85deb8 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 1ebdefed-0903-4d72-b78d-912666c5ce61] Received unexpected event network-vif-plugged-3067803c-07f3-4a15-a5ee-47f9a770efca for instance with vm_state active and task_state None.
Jan 20 14:57:47 compute-1 nova_compute[225855]: 2026-01-20 14:57:47.793 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:57:47 compute-1 nova_compute[225855]: 2026-01-20 14:57:47.795 225859 INFO os_vif [None req-0a7677da-1522-4eea-a664-8af5ee58a977 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d9:6a:1f,bridge_name='br-int',has_traffic_filtering=True,id=7c572239-9b2e-493c-8be5-632f27cc634a,network=Network(58d966e1-4d26-414a-920e-0be2d77abb59),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7c572239-9b')
Jan 20 14:57:47 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:57:47 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1045157438' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:57:47 compute-1 nova_compute[225855]: 2026-01-20 14:57:47.838 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:57:47 compute-1 nova_compute[225855]: 2026-01-20 14:57:47.871 225859 DEBUG nova.virt.libvirt.driver [None req-0a7677da-1522-4eea-a664-8af5ee58a977 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:57:47 compute-1 nova_compute[225855]: 2026-01-20 14:57:47.872 225859 DEBUG nova.virt.libvirt.driver [None req-0a7677da-1522-4eea-a664-8af5ee58a977 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:57:47 compute-1 nova_compute[225855]: 2026-01-20 14:57:47.873 225859 DEBUG nova.virt.libvirt.driver [None req-0a7677da-1522-4eea-a664-8af5ee58a977 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] No VIF found with MAC fa:16:3e:d9:6a:1f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 20 14:57:47 compute-1 nova_compute[225855]: 2026-01-20 14:57:47.874 225859 INFO nova.virt.libvirt.driver [None req-0a7677da-1522-4eea-a664-8af5ee58a977 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] [instance: b4c1468d-9914-426a-9464-c1167de53632] Using config drive
Jan 20 14:57:47 compute-1 nova_compute[225855]: 2026-01-20 14:57:47.900 225859 DEBUG nova.storage.rbd_utils [None req-0a7677da-1522-4eea-a664-8af5ee58a977 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] rbd image b4c1468d-9914-426a-9464-c1167de53632_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:57:48 compute-1 nova_compute[225855]: 2026-01-20 14:57:48.032 225859 DEBUG nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] skipping disk for instance-0000008c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 14:57:48 compute-1 nova_compute[225855]: 2026-01-20 14:57:48.033 225859 DEBUG nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] skipping disk for instance-0000008c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 14:57:48 compute-1 nova_compute[225855]: 2026-01-20 14:57:48.037 225859 DEBUG nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] skipping disk for instance-0000008b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 14:57:48 compute-1 nova_compute[225855]: 2026-01-20 14:57:48.037 225859 DEBUG nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] skipping disk for instance-0000008b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 14:57:48 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:57:48 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:57:48 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:57:48.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:57:48 compute-1 nova_compute[225855]: 2026-01-20 14:57:48.225 225859 WARNING nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 14:57:48 compute-1 nova_compute[225855]: 2026-01-20 14:57:48.227 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4229MB free_disk=20.876060485839844GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 14:57:48 compute-1 nova_compute[225855]: 2026-01-20 14:57:48.228 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:57:48 compute-1 nova_compute[225855]: 2026-01-20 14:57:48.228 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:57:48 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/2911748478' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:57:48 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/1674329294' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:57:48 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/1045157438' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:57:48 compute-1 ceph-mon[81775]: pgmap v2172: 321 pgs: 321 active+clean; 406 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 59 KiB/s wr, 239 op/s
Jan 20 14:57:48 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/4255159192' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:57:48 compute-1 nova_compute[225855]: 2026-01-20 14:57:48.480 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Instance 1ebdefed-0903-4d72-b78d-912666c5ce61 actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 20 14:57:48 compute-1 nova_compute[225855]: 2026-01-20 14:57:48.481 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Instance b4c1468d-9914-426a-9464-c1167de53632 actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 20 14:57:48 compute-1 nova_compute[225855]: 2026-01-20 14:57:48.482 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 14:57:48 compute-1 nova_compute[225855]: 2026-01-20 14:57:48.482 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 14:57:48 compute-1 nova_compute[225855]: 2026-01-20 14:57:48.624 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:57:49 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:57:49 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3301246247' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:57:49 compute-1 nova_compute[225855]: 2026-01-20 14:57:49.107 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:57:49 compute-1 nova_compute[225855]: 2026-01-20 14:57:49.115 225859 DEBUG nova.compute.provider_tree [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:57:49 compute-1 nova_compute[225855]: 2026-01-20 14:57:49.142 225859 DEBUG nova.scheduler.client.report [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:57:49 compute-1 nova_compute[225855]: 2026-01-20 14:57:49.324 225859 INFO nova.virt.libvirt.driver [None req-0a7677da-1522-4eea-a664-8af5ee58a977 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] [instance: b4c1468d-9914-426a-9464-c1167de53632] Creating config drive at /var/lib/nova/instances/b4c1468d-9914-426a-9464-c1167de53632/disk.config
Jan 20 14:57:49 compute-1 nova_compute[225855]: 2026-01-20 14:57:49.330 225859 DEBUG oslo_concurrency.processutils [None req-0a7677da-1522-4eea-a664-8af5ee58a977 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b4c1468d-9914-426a-9464-c1167de53632/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpp89qkppa execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:57:49 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/3301246247' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:57:49 compute-1 nova_compute[225855]: 2026-01-20 14:57:49.488 225859 DEBUG oslo_concurrency.processutils [None req-0a7677da-1522-4eea-a664-8af5ee58a977 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b4c1468d-9914-426a-9464-c1167de53632/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpp89qkppa" returned: 0 in 0.159s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:57:49 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:57:49 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:57:49 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:57:49.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:57:49 compute-1 nova_compute[225855]: 2026-01-20 14:57:49.533 225859 DEBUG nova.storage.rbd_utils [None req-0a7677da-1522-4eea-a664-8af5ee58a977 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] rbd image b4c1468d-9914-426a-9464-c1167de53632_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:57:49 compute-1 nova_compute[225855]: 2026-01-20 14:57:49.538 225859 DEBUG oslo_concurrency.processutils [None req-0a7677da-1522-4eea-a664-8af5ee58a977 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b4c1468d-9914-426a-9464-c1167de53632/disk.config b4c1468d-9914-426a-9464-c1167de53632_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:57:49 compute-1 nova_compute[225855]: 2026-01-20 14:57:49.608 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 14:57:49 compute-1 nova_compute[225855]: 2026-01-20 14:57:49.609 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.381s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:57:49 compute-1 nova_compute[225855]: 2026-01-20 14:57:49.753 225859 DEBUG oslo_concurrency.processutils [None req-0a7677da-1522-4eea-a664-8af5ee58a977 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b4c1468d-9914-426a-9464-c1167de53632/disk.config b4c1468d-9914-426a-9464-c1167de53632_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.215s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:57:49 compute-1 nova_compute[225855]: 2026-01-20 14:57:49.754 225859 INFO nova.virt.libvirt.driver [None req-0a7677da-1522-4eea-a664-8af5ee58a977 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] [instance: b4c1468d-9914-426a-9464-c1167de53632] Deleting local config drive /var/lib/nova/instances/b4c1468d-9914-426a-9464-c1167de53632/disk.config because it was imported into RBD.
Jan 20 14:57:49 compute-1 kernel: tap7c572239-9b: entered promiscuous mode
Jan 20 14:57:49 compute-1 NetworkManager[49104]: <info>  [1768921069.8320] manager: (tap7c572239-9b): new Tun device (/org/freedesktop/NetworkManager/Devices/242)
Jan 20 14:57:49 compute-1 ovn_controller[130490]: 2026-01-20T14:57:49Z|00559|binding|INFO|Claiming lport 7c572239-9b2e-493c-8be5-632f27cc634a for this chassis.
Jan 20 14:57:49 compute-1 nova_compute[225855]: 2026-01-20 14:57:49.831 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:57:49 compute-1 ovn_controller[130490]: 2026-01-20T14:57:49Z|00560|binding|INFO|7c572239-9b2e-493c-8be5-632f27cc634a: Claiming fa:16:3e:d9:6a:1f 10.100.0.9
Jan 20 14:57:49 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:57:49.845 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d9:6a:1f 10.100.0.9'], port_security=['fa:16:3e:d9:6a:1f 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'b4c1468d-9914-426a-9464-c1167de53632', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-58d966e1-4d26-414a-920e-0be2d77abb59', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '107c1f3b5b7b413d9a389ca1166e331f', 'neutron:revision_number': '2', 'neutron:security_group_ids': '207accdf-2d5c-48e9-bf02-5dfcc7d28063', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9a2edf59-0338-43ad-aa77-d6a806c781a6, chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], logical_port=7c572239-9b2e-493c-8be5-632f27cc634a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:57:49 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:57:49.847 140354 INFO neutron.agent.ovn.metadata.agent [-] Port 7c572239-9b2e-493c-8be5-632f27cc634a in datapath 58d966e1-4d26-414a-920e-0be2d77abb59 bound to our chassis
Jan 20 14:57:49 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:57:49.849 140354 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 58d966e1-4d26-414a-920e-0be2d77abb59
Jan 20 14:57:49 compute-1 ovn_controller[130490]: 2026-01-20T14:57:49Z|00561|binding|INFO|Setting lport 7c572239-9b2e-493c-8be5-632f27cc634a ovn-installed in OVS
Jan 20 14:57:49 compute-1 ovn_controller[130490]: 2026-01-20T14:57:49Z|00562|binding|INFO|Setting lport 7c572239-9b2e-493c-8be5-632f27cc634a up in Southbound
Jan 20 14:57:49 compute-1 nova_compute[225855]: 2026-01-20 14:57:49.854 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:57:49 compute-1 nova_compute[225855]: 2026-01-20 14:57:49.860 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:57:49 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:57:49.869 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[c8ecfad7-d166-45d4-92b8-47b3eec4a47e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:57:49 compute-1 systemd-machined[194361]: New machine qemu-67-instance-0000008c.
Jan 20 14:57:49 compute-1 systemd[1]: Started Virtual Machine qemu-67-instance-0000008c.
Jan 20 14:57:49 compute-1 systemd-udevd[280396]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 14:57:49 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:57:49.907 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[3144f15d-a2df-4885-8a7a-55b546f2d681]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:57:49 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:57:49.911 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[295d6385-ed6e-4a9f-a800-d636030ce5f9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:57:49 compute-1 NetworkManager[49104]: <info>  [1768921069.9172] device (tap7c572239-9b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 20 14:57:49 compute-1 NetworkManager[49104]: <info>  [1768921069.9181] device (tap7c572239-9b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 20 14:57:49 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:57:49.942 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[096d4bbe-0918-4f13-b3cd-abf16c66b5c5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:57:49 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:57:49.959 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[9b6341f2-ed4e-44ee-a3a2-09ae3f7a0ddf]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap58d966e1-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c9:c8:2a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 6, 'rx_bytes': 532, 'tx_bytes': 444, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 6, 'rx_bytes': 532, 'tx_bytes': 444, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 159], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 610366, 'reachable_time': 35351, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 280404, 'error': None, 'target': 'ovnmeta-58d966e1-4d26-414a-920e-0be2d77abb59', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:57:49 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:57:49.990 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[2b1e8aaf-ce4e-4711-894c-b24e09c01a6c]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap58d966e1-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 610378, 'tstamp': 610378}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 280407, 'error': None, 'target': 'ovnmeta-58d966e1-4d26-414a-920e-0be2d77abb59', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap58d966e1-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 610381, 'tstamp': 610381}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 280407, 'error': None, 'target': 'ovnmeta-58d966e1-4d26-414a-920e-0be2d77abb59', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:57:49 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:57:49.992 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap58d966e1-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:57:49 compute-1 nova_compute[225855]: 2026-01-20 14:57:49.993 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:57:49 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:57:49.994 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap58d966e1-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:57:49 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:57:49.995 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:57:49 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:57:49.995 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap58d966e1-40, col_values=(('external_ids', {'iface-id': '1623097d-35b0-4d71-9dc2-c4d659492102'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:57:49 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:57:49.995 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:57:50 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:57:50 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:57:50 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:57:50.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:57:50 compute-1 ceph-mon[81775]: pgmap v2173: 321 pgs: 321 active+clean; 391 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 5.0 MiB/s rd, 48 KiB/s wr, 280 op/s
Jan 20 14:57:50 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/319160311' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:57:50 compute-1 nova_compute[225855]: 2026-01-20 14:57:50.698 225859 DEBUG nova.compute.manager [req-cf40b2e5-cf3b-44ca-a3eb-1b9dab98a6bc req-be2b7343-0f5f-46d3-b515-426e50b6ded3 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b4c1468d-9914-426a-9464-c1167de53632] Received event network-vif-plugged-7c572239-9b2e-493c-8be5-632f27cc634a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:57:50 compute-1 nova_compute[225855]: 2026-01-20 14:57:50.700 225859 DEBUG oslo_concurrency.lockutils [req-cf40b2e5-cf3b-44ca-a3eb-1b9dab98a6bc req-be2b7343-0f5f-46d3-b515-426e50b6ded3 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "b4c1468d-9914-426a-9464-c1167de53632-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:57:50 compute-1 nova_compute[225855]: 2026-01-20 14:57:50.700 225859 DEBUG oslo_concurrency.lockutils [req-cf40b2e5-cf3b-44ca-a3eb-1b9dab98a6bc req-be2b7343-0f5f-46d3-b515-426e50b6ded3 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "b4c1468d-9914-426a-9464-c1167de53632-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:57:50 compute-1 nova_compute[225855]: 2026-01-20 14:57:50.700 225859 DEBUG oslo_concurrency.lockutils [req-cf40b2e5-cf3b-44ca-a3eb-1b9dab98a6bc req-be2b7343-0f5f-46d3-b515-426e50b6ded3 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "b4c1468d-9914-426a-9464-c1167de53632-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:57:50 compute-1 nova_compute[225855]: 2026-01-20 14:57:50.701 225859 DEBUG nova.compute.manager [req-cf40b2e5-cf3b-44ca-a3eb-1b9dab98a6bc req-be2b7343-0f5f-46d3-b515-426e50b6ded3 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b4c1468d-9914-426a-9464-c1167de53632] Processing event network-vif-plugged-7c572239-9b2e-493c-8be5-632f27cc634a _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 20 14:57:51 compute-1 nova_compute[225855]: 2026-01-20 14:57:51.085 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768921071.0852692, b4c1468d-9914-426a-9464-c1167de53632 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:57:51 compute-1 nova_compute[225855]: 2026-01-20 14:57:51.086 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: b4c1468d-9914-426a-9464-c1167de53632] VM Started (Lifecycle Event)
Jan 20 14:57:51 compute-1 nova_compute[225855]: 2026-01-20 14:57:51.088 225859 DEBUG nova.compute.manager [None req-0a7677da-1522-4eea-a664-8af5ee58a977 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] [instance: b4c1468d-9914-426a-9464-c1167de53632] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 20 14:57:51 compute-1 nova_compute[225855]: 2026-01-20 14:57:51.091 225859 DEBUG nova.virt.libvirt.driver [None req-0a7677da-1522-4eea-a664-8af5ee58a977 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] [instance: b4c1468d-9914-426a-9464-c1167de53632] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 20 14:57:51 compute-1 nova_compute[225855]: 2026-01-20 14:57:51.094 225859 INFO nova.virt.libvirt.driver [-] [instance: b4c1468d-9914-426a-9464-c1167de53632] Instance spawned successfully.
Jan 20 14:57:51 compute-1 nova_compute[225855]: 2026-01-20 14:57:51.094 225859 DEBUG nova.virt.libvirt.driver [None req-0a7677da-1522-4eea-a664-8af5ee58a977 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] [instance: b4c1468d-9914-426a-9464-c1167de53632] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 20 14:57:51 compute-1 nova_compute[225855]: 2026-01-20 14:57:51.159 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: b4c1468d-9914-426a-9464-c1167de53632] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:57:51 compute-1 nova_compute[225855]: 2026-01-20 14:57:51.163 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: b4c1468d-9914-426a-9464-c1167de53632] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:57:51 compute-1 nova_compute[225855]: 2026-01-20 14:57:51.198 225859 DEBUG nova.virt.libvirt.driver [None req-0a7677da-1522-4eea-a664-8af5ee58a977 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] [instance: b4c1468d-9914-426a-9464-c1167de53632] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:57:51 compute-1 nova_compute[225855]: 2026-01-20 14:57:51.199 225859 DEBUG nova.virt.libvirt.driver [None req-0a7677da-1522-4eea-a664-8af5ee58a977 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] [instance: b4c1468d-9914-426a-9464-c1167de53632] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:57:51 compute-1 nova_compute[225855]: 2026-01-20 14:57:51.199 225859 DEBUG nova.virt.libvirt.driver [None req-0a7677da-1522-4eea-a664-8af5ee58a977 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] [instance: b4c1468d-9914-426a-9464-c1167de53632] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:57:51 compute-1 nova_compute[225855]: 2026-01-20 14:57:51.200 225859 DEBUG nova.virt.libvirt.driver [None req-0a7677da-1522-4eea-a664-8af5ee58a977 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] [instance: b4c1468d-9914-426a-9464-c1167de53632] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:57:51 compute-1 nova_compute[225855]: 2026-01-20 14:57:51.200 225859 DEBUG nova.virt.libvirt.driver [None req-0a7677da-1522-4eea-a664-8af5ee58a977 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] [instance: b4c1468d-9914-426a-9464-c1167de53632] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:57:51 compute-1 nova_compute[225855]: 2026-01-20 14:57:51.200 225859 DEBUG nova.virt.libvirt.driver [None req-0a7677da-1522-4eea-a664-8af5ee58a977 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] [instance: b4c1468d-9914-426a-9464-c1167de53632] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:57:51 compute-1 nova_compute[225855]: 2026-01-20 14:57:51.252 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: b4c1468d-9914-426a-9464-c1167de53632] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 14:57:51 compute-1 nova_compute[225855]: 2026-01-20 14:57:51.253 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768921071.0862257, b4c1468d-9914-426a-9464-c1167de53632 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:57:51 compute-1 nova_compute[225855]: 2026-01-20 14:57:51.253 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: b4c1468d-9914-426a-9464-c1167de53632] VM Paused (Lifecycle Event)
Jan 20 14:57:51 compute-1 nova_compute[225855]: 2026-01-20 14:57:51.269 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:57:51 compute-1 nova_compute[225855]: 2026-01-20 14:57:51.360 225859 INFO nova.compute.manager [None req-0a7677da-1522-4eea-a664-8af5ee58a977 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] [instance: b4c1468d-9914-426a-9464-c1167de53632] Took 7.92 seconds to spawn the instance on the hypervisor.
Jan 20 14:57:51 compute-1 nova_compute[225855]: 2026-01-20 14:57:51.361 225859 DEBUG nova.compute.manager [None req-0a7677da-1522-4eea-a664-8af5ee58a977 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] [instance: b4c1468d-9914-426a-9464-c1167de53632] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:57:51 compute-1 nova_compute[225855]: 2026-01-20 14:57:51.427 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: b4c1468d-9914-426a-9464-c1167de53632] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:57:51 compute-1 nova_compute[225855]: 2026-01-20 14:57:51.431 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768921071.0908244, b4c1468d-9914-426a-9464-c1167de53632 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:57:51 compute-1 nova_compute[225855]: 2026-01-20 14:57:51.432 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: b4c1468d-9914-426a-9464-c1167de53632] VM Resumed (Lifecycle Event)
Jan 20 14:57:51 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:57:51 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:57:51 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:57:51.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:57:51 compute-1 nova_compute[225855]: 2026-01-20 14:57:51.571 225859 INFO nova.compute.manager [None req-0a7677da-1522-4eea-a664-8af5ee58a977 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] [instance: b4c1468d-9914-426a-9464-c1167de53632] Took 10.87 seconds to build instance.
Jan 20 14:57:51 compute-1 nova_compute[225855]: 2026-01-20 14:57:51.604 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: b4c1468d-9914-426a-9464-c1167de53632] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:57:51 compute-1 nova_compute[225855]: 2026-01-20 14:57:51.609 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: b4c1468d-9914-426a-9464-c1167de53632] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:57:51 compute-1 nova_compute[225855]: 2026-01-20 14:57:51.649 225859 DEBUG oslo_concurrency.lockutils [None req-0a7677da-1522-4eea-a664-8af5ee58a977 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] Lock "b4c1468d-9914-426a-9464-c1167de53632" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.036s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:57:52 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:57:52 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:57:52 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:57:52.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:57:52 compute-1 nova_compute[225855]: 2026-01-20 14:57:52.610 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:57:52 compute-1 nova_compute[225855]: 2026-01-20 14:57:52.611 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:57:52 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e321 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:57:52 compute-1 nova_compute[225855]: 2026-01-20 14:57:52.667 225859 DEBUG nova.network.neutron [req-f4ced9dd-9f62-47e7-9fc5-556d98108fde req-dbecc6d7-69fd-41aa-9235-62d0c43035af 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b4c1468d-9914-426a-9464-c1167de53632] Updated VIF entry in instance network info cache for port 7c572239-9b2e-493c-8be5-632f27cc634a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 14:57:52 compute-1 nova_compute[225855]: 2026-01-20 14:57:52.668 225859 DEBUG nova.network.neutron [req-f4ced9dd-9f62-47e7-9fc5-556d98108fde req-dbecc6d7-69fd-41aa-9235-62d0c43035af 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b4c1468d-9914-426a-9464-c1167de53632] Updating instance_info_cache with network_info: [{"id": "7c572239-9b2e-493c-8be5-632f27cc634a", "address": "fa:16:3e:d9:6a:1f", "network": {"id": "58d966e1-4d26-414a-920e-0be2d77abb59", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-1896990059-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "107c1f3b5b7b413d9a389ca1166e331f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7c572239-9b", "ovs_interfaceid": "7c572239-9b2e-493c-8be5-632f27cc634a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:57:52 compute-1 nova_compute[225855]: 2026-01-20 14:57:52.686 225859 DEBUG oslo_concurrency.lockutils [req-f4ced9dd-9f62-47e7-9fc5-556d98108fde req-dbecc6d7-69fd-41aa-9235-62d0c43035af 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-b4c1468d-9914-426a-9464-c1167de53632" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:57:52 compute-1 nova_compute[225855]: 2026-01-20 14:57:52.783 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:57:52 compute-1 nova_compute[225855]: 2026-01-20 14:57:52.920 225859 DEBUG nova.compute.manager [req-07e5061e-243d-4cb0-a7f8-0d2b32f10e4f req-804f87cf-26af-491a-82bf-76d703a8605f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b4c1468d-9914-426a-9464-c1167de53632] Received event network-vif-plugged-7c572239-9b2e-493c-8be5-632f27cc634a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:57:52 compute-1 nova_compute[225855]: 2026-01-20 14:57:52.921 225859 DEBUG oslo_concurrency.lockutils [req-07e5061e-243d-4cb0-a7f8-0d2b32f10e4f req-804f87cf-26af-491a-82bf-76d703a8605f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "b4c1468d-9914-426a-9464-c1167de53632-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:57:52 compute-1 nova_compute[225855]: 2026-01-20 14:57:52.921 225859 DEBUG oslo_concurrency.lockutils [req-07e5061e-243d-4cb0-a7f8-0d2b32f10e4f req-804f87cf-26af-491a-82bf-76d703a8605f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "b4c1468d-9914-426a-9464-c1167de53632-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:57:52 compute-1 nova_compute[225855]: 2026-01-20 14:57:52.922 225859 DEBUG oslo_concurrency.lockutils [req-07e5061e-243d-4cb0-a7f8-0d2b32f10e4f req-804f87cf-26af-491a-82bf-76d703a8605f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "b4c1468d-9914-426a-9464-c1167de53632-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:57:52 compute-1 nova_compute[225855]: 2026-01-20 14:57:52.922 225859 DEBUG nova.compute.manager [req-07e5061e-243d-4cb0-a7f8-0d2b32f10e4f req-804f87cf-26af-491a-82bf-76d703a8605f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b4c1468d-9914-426a-9464-c1167de53632] No waiting events found dispatching network-vif-plugged-7c572239-9b2e-493c-8be5-632f27cc634a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:57:52 compute-1 nova_compute[225855]: 2026-01-20 14:57:52.922 225859 WARNING nova.compute.manager [req-07e5061e-243d-4cb0-a7f8-0d2b32f10e4f req-804f87cf-26af-491a-82bf-76d703a8605f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b4c1468d-9914-426a-9464-c1167de53632] Received unexpected event network-vif-plugged-7c572239-9b2e-493c-8be5-632f27cc634a for instance with vm_state active and task_state None.
Jan 20 14:57:53 compute-1 ceph-mon[81775]: pgmap v2174: 321 pgs: 321 active+clean; 345 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 5.1 MiB/s rd, 1.9 MiB/s wr, 314 op/s
Jan 20 14:57:53 compute-1 sudo[280452]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:57:53 compute-1 sudo[280452]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:57:53 compute-1 sudo[280452]: pam_unix(sudo:session): session closed for user root
Jan 20 14:57:53 compute-1 sudo[280477]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:57:53 compute-1 sudo[280477]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:57:53 compute-1 sudo[280477]: pam_unix(sudo:session): session closed for user root
Jan 20 14:57:53 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:57:53 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:57:53 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:57:53.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:57:54 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:57:54 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:57:54 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:57:54.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:57:54 compute-1 ceph-mon[81775]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 20 14:57:54 compute-1 ceph-mon[81775]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3600.0 total, 600.0 interval
                                           Cumulative writes: 10K writes, 52K keys, 10K commit groups, 1.0 writes per commit group, ingest: 0.10 GB, 0.03 MB/s
                                           Cumulative WAL: 10K writes, 10K syncs, 1.00 writes per sync, written: 0.10 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1854 writes, 9385 keys, 1854 commit groups, 1.0 writes per commit group, ingest: 17.68 MB, 0.03 MB/s
                                           Interval WAL: 1855 writes, 1855 syncs, 1.00 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     73.2      0.86              0.23        31    0.028       0      0       0.0       0.0
                                             L6      1/0    9.87 MB   0.0      0.3     0.1      0.3       0.3      0.0       0.0   4.4    106.8     89.5      3.06              0.90        30    0.102    183K    16K       0.0       0.0
                                            Sum      1/0    9.87 MB   0.0      0.3     0.1      0.3       0.3      0.1       0.0   5.4     83.4     85.9      3.92              1.13        61    0.064    183K    16K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   7.5     90.0     90.7      0.93              0.26        14    0.066     54K   3667       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.3     0.1      0.3       0.3      0.0       0.0   0.0    106.8     89.5      3.06              0.90        30    0.102    183K    16K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     73.3      0.86              0.23        30    0.029       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.7      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 3600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.061, interval 0.011
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.33 GB write, 0.09 MB/s write, 0.32 GB read, 0.09 MB/s read, 3.9 seconds
                                           Interval compaction: 0.08 GB write, 0.14 MB/s write, 0.08 GB read, 0.14 MB/s read, 0.9 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564d515a71f0#2 capacity: 304.00 MB usage: 38.25 MB table_size: 0 occupancy: 18446744073709551615 collections: 7 last_copies: 0 last_secs: 0.00042 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2201,36.88 MB,12.1318%) FilterBlock(61,519.36 KB,0.166838%) IndexBlock(61,877.77 KB,0.281971%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Jan 20 14:57:55 compute-1 ceph-mon[81775]: pgmap v2175: 321 pgs: 321 active+clean; 353 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 6.2 MiB/s rd, 2.5 MiB/s wr, 357 op/s
Jan 20 14:57:55 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:57:55 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:57:55 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:57:55.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:57:56 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:57:56 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2303621507' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:57:56 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:57:56 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:57:56 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:57:56.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:57:56 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/2303621507' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:57:56 compute-1 nova_compute[225855]: 2026-01-20 14:57:56.271 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:57:57 compute-1 ceph-mon[81775]: pgmap v2176: 321 pgs: 321 active+clean; 360 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 6.6 MiB/s rd, 2.3 MiB/s wr, 376 op/s
Jan 20 14:57:57 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:57:57 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:57:57 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:57:57.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:57:57 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e321 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:57:57 compute-1 nova_compute[225855]: 2026-01-20 14:57:57.785 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:57:58 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:57:58 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:57:58 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:57:58.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:57:58 compute-1 ceph-mon[81775]: pgmap v2177: 321 pgs: 321 active+clean; 360 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 5.4 MiB/s rd, 2.2 MiB/s wr, 275 op/s
Jan 20 14:57:59 compute-1 podman[280505]: 2026-01-20 14:57:59.055937531 +0000 UTC m=+0.069734088 container health_status 533cd8e0c3b8b7f910f15cef49e7f879ab46c12ce646a35628c15f13725eb500 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Jan 20 14:57:59 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:57:59 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:57:59 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:57:59.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:57:59 compute-1 ovn_controller[130490]: 2026-01-20T14:57:59Z|00068|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:cd:b7:b1 10.100.0.10
Jan 20 14:57:59 compute-1 ovn_controller[130490]: 2026-01-20T14:57:59Z|00069|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:cd:b7:b1 10.100.0.10
Jan 20 14:58:00 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:58:00 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:58:00 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:58:00.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:58:00 compute-1 nova_compute[225855]: 2026-01-20 14:58:00.153 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:58:00 compute-1 NetworkManager[49104]: <info>  [1768921080.1544] manager: (patch-provnet-b62c391b-f7a3-4a38-a0df-72ac0383ca74-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/243)
Jan 20 14:58:00 compute-1 NetworkManager[49104]: <info>  [1768921080.1555] manager: (patch-br-int-to-provnet-b62c391b-f7a3-4a38-a0df-72ac0383ca74): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/244)
Jan 20 14:58:00 compute-1 nova_compute[225855]: 2026-01-20 14:58:00.351 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:58:00 compute-1 ovn_controller[130490]: 2026-01-20T14:58:00Z|00563|binding|INFO|Releasing lport 1623097d-35b0-4d71-9dc2-c4d659492102 from this chassis (sb_readonly=0)
Jan 20 14:58:00 compute-1 nova_compute[225855]: 2026-01-20 14:58:00.374 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:58:01 compute-1 nova_compute[225855]: 2026-01-20 14:58:01.090 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:58:01 compute-1 ceph-mon[81775]: pgmap v2178: 321 pgs: 321 active+clean; 389 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 5.7 MiB/s rd, 3.5 MiB/s wr, 328 op/s
Jan 20 14:58:01 compute-1 nova_compute[225855]: 2026-01-20 14:58:01.274 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:58:01 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:58:01 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:58:01 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:58:01.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:58:02 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:58:02 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:58:02 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:58:02.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:58:02 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e321 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:58:02 compute-1 nova_compute[225855]: 2026-01-20 14:58:02.787 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:58:03 compute-1 ceph-mon[81775]: pgmap v2179: 321 pgs: 321 active+clean; 424 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 6.3 MiB/s wr, 328 op/s
Jan 20 14:58:03 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:58:03 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:58:03 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:58:03.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:58:04 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:58:04 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:58:04 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:58:04.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:58:04 compute-1 ceph-mon[81775]: pgmap v2180: 321 pgs: 321 active+clean; 407 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 5.2 MiB/s wr, 253 op/s
Jan 20 14:58:04 compute-1 ovn_controller[130490]: 2026-01-20T14:58:04Z|00070|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:d9:6a:1f 10.100.0.9
Jan 20 14:58:04 compute-1 ovn_controller[130490]: 2026-01-20T14:58:04Z|00071|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:d9:6a:1f 10.100.0.9
Jan 20 14:58:05 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:58:05 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:58:05 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:58:05.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:58:06 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:58:06 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:58:06 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:58:06.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:58:06 compute-1 nova_compute[225855]: 2026-01-20 14:58:06.278 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:58:07 compute-1 ceph-mon[81775]: pgmap v2181: 321 pgs: 321 active+clean; 372 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 6.4 MiB/s wr, 255 op/s
Jan 20 14:58:07 compute-1 nova_compute[225855]: 2026-01-20 14:58:07.340 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._run_image_cache_manager_pass run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:58:07 compute-1 nova_compute[225855]: 2026-01-20 14:58:07.340 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:58:07 compute-1 nova_compute[225855]: 2026-01-20 14:58:07.341 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:58:07 compute-1 nova_compute[225855]: 2026-01-20 14:58:07.342 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:58:07 compute-1 nova_compute[225855]: 2026-01-20 14:58:07.342 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:58:07 compute-1 nova_compute[225855]: 2026-01-20 14:58:07.342 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:58:07 compute-1 nova_compute[225855]: 2026-01-20 14:58:07.342 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:58:07 compute-1 nova_compute[225855]: 2026-01-20 14:58:07.429 225859 DEBUG nova.virt.libvirt.imagecache [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Verify base images _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:314
Jan 20 14:58:07 compute-1 nova_compute[225855]: 2026-01-20 14:58:07.430 225859 DEBUG nova.virt.libvirt.imagecache [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Image id  yields fingerprint da39a3ee5e6b4b0d3255bfef95601890afd80709 _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:319
Jan 20 14:58:07 compute-1 nova_compute[225855]: 2026-01-20 14:58:07.430 225859 DEBUG nova.virt.libvirt.imagecache [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] 1ebdefed-0903-4d72-b78d-912666c5ce61 is a valid instance name _list_backing_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:126
Jan 20 14:58:07 compute-1 nova_compute[225855]: 2026-01-20 14:58:07.431 225859 DEBUG nova.virt.libvirt.imagecache [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] b4c1468d-9914-426a-9464-c1167de53632 is a valid instance name _list_backing_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:126
Jan 20 14:58:07 compute-1 nova_compute[225855]: 2026-01-20 14:58:07.431 225859 WARNING nova.virt.libvirt.imagecache [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Unknown base file: /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462
Jan 20 14:58:07 compute-1 nova_compute[225855]: 2026-01-20 14:58:07.431 225859 WARNING nova.virt.libvirt.imagecache [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Unknown base file: /var/lib/nova/instances/_base/a4ed0d2b98aa460c005e878d78a49ccb6f511f7c
Jan 20 14:58:07 compute-1 nova_compute[225855]: 2026-01-20 14:58:07.431 225859 INFO nova.virt.libvirt.imagecache [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Removable base files: /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 /var/lib/nova/instances/_base/a4ed0d2b98aa460c005e878d78a49ccb6f511f7c
Jan 20 14:58:07 compute-1 nova_compute[225855]: 2026-01-20 14:58:07.431 225859 INFO nova.virt.libvirt.imagecache [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462
Jan 20 14:58:07 compute-1 nova_compute[225855]: 2026-01-20 14:58:07.432 225859 INFO nova.virt.libvirt.imagecache [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/a4ed0d2b98aa460c005e878d78a49ccb6f511f7c
Jan 20 14:58:07 compute-1 nova_compute[225855]: 2026-01-20 14:58:07.432 225859 DEBUG nova.virt.libvirt.imagecache [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Verification complete _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:350
Jan 20 14:58:07 compute-1 nova_compute[225855]: 2026-01-20 14:58:07.432 225859 DEBUG nova.virt.libvirt.imagecache [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Verify swap images _age_and_verify_swap_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:299
Jan 20 14:58:07 compute-1 nova_compute[225855]: 2026-01-20 14:58:07.432 225859 DEBUG nova.virt.libvirt.imagecache [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Verify ephemeral images _age_and_verify_ephemeral_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:284
Jan 20 14:58:07 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:58:07 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:58:07 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:58:07.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:58:07 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e321 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:58:07 compute-1 nova_compute[225855]: 2026-01-20 14:58:07.789 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:58:08 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:58:08 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:58:08 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:58:08.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:58:09 compute-1 ceph-mon[81775]: pgmap v2182: 321 pgs: 321 active+clean; 372 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1002 KiB/s rd, 6.3 MiB/s wr, 204 op/s
Jan 20 14:58:09 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/2689632524' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:58:09 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:58:09 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:58:09 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:58:09.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:58:10 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:58:10 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:58:10 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:58:10.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:58:11 compute-1 ceph-mon[81775]: pgmap v2183: 321 pgs: 321 active+clean; 379 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.0 MiB/s rd, 6.4 MiB/s wr, 222 op/s
Jan 20 14:58:11 compute-1 nova_compute[225855]: 2026-01-20 14:58:11.278 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:58:11 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:58:11 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:58:11 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:58:11.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:58:12 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:58:12 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:58:12 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:58:12.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:58:12 compute-1 ceph-mon[81775]: pgmap v2184: 321 pgs: 321 active+clean; 379 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 717 KiB/s rd, 5.1 MiB/s wr, 169 op/s
Jan 20 14:58:12 compute-1 nova_compute[225855]: 2026-01-20 14:58:12.624 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:58:12 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e321 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:58:12 compute-1 nova_compute[225855]: 2026-01-20 14:58:12.793 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:58:13 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:58:13 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:58:13 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:58:13.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:58:13 compute-1 sudo[280530]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:58:13 compute-1 sudo[280530]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:58:13 compute-1 sudo[280530]: pam_unix(sudo:session): session closed for user root
Jan 20 14:58:13 compute-1 sudo[280555]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:58:13 compute-1 sudo[280555]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:58:13 compute-1 sudo[280555]: pam_unix(sudo:session): session closed for user root
Jan 20 14:58:14 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:58:14 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:58:14 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:58:14.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:58:14 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/1767471805' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 14:58:14 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/1767471805' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 14:58:15 compute-1 ceph-mon[81775]: pgmap v2185: 321 pgs: 321 active+clean; 368 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 362 KiB/s rd, 2.3 MiB/s wr, 101 op/s
Jan 20 14:58:15 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/697642607' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:58:15 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:58:15 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:58:15 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:58:15.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:58:16 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:58:16 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.002000056s ======
Jan 20 14:58:16 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:58:16.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000056s
Jan 20 14:58:16 compute-1 ceph-mon[81775]: pgmap v2186: 321 pgs: 321 active+clean; 302 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 297 KiB/s rd, 2.1 MiB/s wr, 103 op/s
Jan 20 14:58:16 compute-1 nova_compute[225855]: 2026-01-20 14:58:16.281 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:58:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:58:16.419 140354 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:58:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:58:16.419 140354 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:58:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:58:16.420 140354 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:58:17 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:58:17 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:58:17 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:58:17.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:58:17 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e321 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:58:17 compute-1 nova_compute[225855]: 2026-01-20 14:58:17.796 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:58:18 compute-1 podman[280584]: 2026-01-20 14:58:18.036509511 +0000 UTC m=+0.082656103 container health_status 72be95c12e041eb2cd1a16ad70dfa1391d9f03a5f3180a614ead9459dcfcd46f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Jan 20 14:58:18 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:58:18 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:58:18 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:58:18.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:58:18 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 20 14:58:18 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3000352245' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 14:58:18 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 20 14:58:18 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3000352245' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 14:58:18 compute-1 nova_compute[225855]: 2026-01-20 14:58:18.600 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:58:19 compute-1 ceph-mon[81775]: pgmap v2187: 321 pgs: 321 active+clean; 302 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 83 KiB/s rd, 290 KiB/s wr, 52 op/s
Jan 20 14:58:19 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/3000352245' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 14:58:19 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/3000352245' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 14:58:19 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:58:19 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:58:19 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:58:19.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:58:20 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:58:20 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:58:20 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:58:20.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:58:20 compute-1 ceph-mon[81775]: pgmap v2188: 321 pgs: 321 active+clean; 302 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 83 KiB/s rd, 305 KiB/s wr, 55 op/s
Jan 20 14:58:21 compute-1 nova_compute[225855]: 2026-01-20 14:58:21.282 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:58:21 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:58:21 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:58:21 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:58:21.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:58:22 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:58:22 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:58:22 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:58:22.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:58:22 compute-1 ceph-mon[81775]: pgmap v2189: 321 pgs: 321 active+clean; 302 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 39 KiB/s rd, 393 KiB/s wr, 57 op/s
Jan 20 14:58:22 compute-1 nova_compute[225855]: 2026-01-20 14:58:22.514 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:58:22 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e321 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:58:22 compute-1 nova_compute[225855]: 2026-01-20 14:58:22.798 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:58:23 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/2286653853' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 14:58:23 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/2286653853' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 14:58:23 compute-1 nova_compute[225855]: 2026-01-20 14:58:23.523 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:58:23 compute-1 ovn_controller[130490]: 2026-01-20T14:58:23Z|00564|binding|INFO|Releasing lport 1623097d-35b0-4d71-9dc2-c4d659492102 from this chassis (sb_readonly=0)
Jan 20 14:58:23 compute-1 nova_compute[225855]: 2026-01-20 14:58:23.552 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:58:23 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:58:23 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:58:23 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:58:23.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:58:23 compute-1 ovn_controller[130490]: 2026-01-20T14:58:23Z|00565|binding|INFO|Releasing lport 1623097d-35b0-4d71-9dc2-c4d659492102 from this chassis (sb_readonly=0)
Jan 20 14:58:23 compute-1 nova_compute[225855]: 2026-01-20 14:58:23.854 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:58:24 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:58:24 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:58:24 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:58:24.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:58:24 compute-1 ceph-mon[81775]: pgmap v2190: 321 pgs: 321 active+clean; 302 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 39 KiB/s rd, 389 KiB/s wr, 56 op/s
Jan 20 14:58:25 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:58:25 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:58:25 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:58:25.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:58:26 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:58:26 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:58:26 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:58:26.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:58:26 compute-1 nova_compute[225855]: 2026-01-20 14:58:26.284 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:58:27 compute-1 ceph-mon[81775]: pgmap v2191: 321 pgs: 321 active+clean; 300 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 48 KiB/s rd, 376 KiB/s wr, 68 op/s
Jan 20 14:58:27 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:58:27 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:58:27 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:58:27.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:58:27 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e321 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:58:27 compute-1 nova_compute[225855]: 2026-01-20 14:58:27.799 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:58:28 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:58:28 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:58:28 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:58:28.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:58:29 compute-1 ceph-mon[81775]: pgmap v2192: 321 pgs: 321 active+clean; 300 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 25 KiB/s rd, 195 KiB/s wr, 37 op/s
Jan 20 14:58:29 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:58:29 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:58:29 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:58:29.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:58:30 compute-1 podman[280617]: 2026-01-20 14:58:30.00074224 +0000 UTC m=+0.047201622 container health_status 533cd8e0c3b8b7f910f15cef49e7f879ab46c12ce646a35628c15f13725eb500 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent)
Jan 20 14:58:30 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:58:30 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:58:30 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:58:30.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:58:30 compute-1 nova_compute[225855]: 2026-01-20 14:58:30.772 225859 DEBUG oslo_concurrency.lockutils [None req-0af08aac-4274-42e8-a15a-78e36ad79581 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] Acquiring lock "1ebdefed-0903-4d72-b78d-912666c5ce61" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:58:30 compute-1 nova_compute[225855]: 2026-01-20 14:58:30.772 225859 DEBUG oslo_concurrency.lockutils [None req-0af08aac-4274-42e8-a15a-78e36ad79581 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] Lock "1ebdefed-0903-4d72-b78d-912666c5ce61" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:58:30 compute-1 nova_compute[225855]: 2026-01-20 14:58:30.793 225859 DEBUG nova.objects.instance [None req-0af08aac-4274-42e8-a15a-78e36ad79581 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] Lazy-loading 'flavor' on Instance uuid 1ebdefed-0903-4d72-b78d-912666c5ce61 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:58:30 compute-1 nova_compute[225855]: 2026-01-20 14:58:30.855 225859 DEBUG oslo_concurrency.lockutils [None req-0af08aac-4274-42e8-a15a-78e36ad79581 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] Lock "1ebdefed-0903-4d72-b78d-912666c5ce61" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.082s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:58:31 compute-1 nova_compute[225855]: 2026-01-20 14:58:31.151 225859 DEBUG oslo_concurrency.lockutils [None req-0af08aac-4274-42e8-a15a-78e36ad79581 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] Acquiring lock "1ebdefed-0903-4d72-b78d-912666c5ce61" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:58:31 compute-1 nova_compute[225855]: 2026-01-20 14:58:31.151 225859 DEBUG oslo_concurrency.lockutils [None req-0af08aac-4274-42e8-a15a-78e36ad79581 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] Lock "1ebdefed-0903-4d72-b78d-912666c5ce61" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:58:31 compute-1 nova_compute[225855]: 2026-01-20 14:58:31.152 225859 INFO nova.compute.manager [None req-0af08aac-4274-42e8-a15a-78e36ad79581 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] [instance: 1ebdefed-0903-4d72-b78d-912666c5ce61] Attaching volume 3381d324-93a9-4d2f-ab25-8460bb2b8e95 to /dev/vdb
Jan 20 14:58:31 compute-1 nova_compute[225855]: 2026-01-20 14:58:31.287 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:58:31 compute-1 nova_compute[225855]: 2026-01-20 14:58:31.460 225859 DEBUG os_brick.utils [None req-0af08aac-4274-42e8-a15a-78e36ad79581 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.101', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-1.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Jan 20 14:58:31 compute-1 nova_compute[225855]: 2026-01-20 14:58:31.461 231081 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:58:31 compute-1 nova_compute[225855]: 2026-01-20 14:58:31.474 231081 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.013s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:58:31 compute-1 nova_compute[225855]: 2026-01-20 14:58:31.475 231081 DEBUG oslo.privsep.daemon [-] privsep: reply[f435e800-c679-496d-ad9f-991d52216fd7]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:58:31 compute-1 nova_compute[225855]: 2026-01-20 14:58:31.477 231081 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:58:31 compute-1 nova_compute[225855]: 2026-01-20 14:58:31.486 231081 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:58:31 compute-1 nova_compute[225855]: 2026-01-20 14:58:31.486 231081 DEBUG oslo.privsep.daemon [-] privsep: reply[5f2a7265-60d6-4baf-af34-d017ced7f9a5]: (4, ('InitiatorName=iqn.1994-05.com.redhat:1821ea3dc03d', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:58:31 compute-1 nova_compute[225855]: 2026-01-20 14:58:31.488 231081 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:58:31 compute-1 nova_compute[225855]: 2026-01-20 14:58:31.496 231081 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:58:31 compute-1 nova_compute[225855]: 2026-01-20 14:58:31.497 231081 DEBUG oslo.privsep.daemon [-] privsep: reply[dd05cb19-022d-486e-ad5d-a521073f1907]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:58:31 compute-1 nova_compute[225855]: 2026-01-20 14:58:31.499 231081 DEBUG oslo.privsep.daemon [-] privsep: reply[6eaf37dc-44df-4213-995e-d13dd4b7e4b3]: (4, '870b1f1c-f19c-477b-b282-ee6eeba50974') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:58:31 compute-1 nova_compute[225855]: 2026-01-20 14:58:31.499 225859 DEBUG oslo_concurrency.processutils [None req-0af08aac-4274-42e8-a15a-78e36ad79581 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:58:31 compute-1 ceph-mon[81775]: pgmap v2193: 321 pgs: 321 active+clean; 300 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 25 KiB/s rd, 195 KiB/s wr, 37 op/s
Jan 20 14:58:31 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/1304440551' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:58:31 compute-1 nova_compute[225855]: 2026-01-20 14:58:31.524 225859 DEBUG oslo_concurrency.processutils [None req-0af08aac-4274-42e8-a15a-78e36ad79581 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] CMD "nvme version" returned: 0 in 0.025s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:58:31 compute-1 nova_compute[225855]: 2026-01-20 14:58:31.526 225859 DEBUG os_brick.initiator.connectors.lightos [None req-0af08aac-4274-42e8-a15a-78e36ad79581 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Jan 20 14:58:31 compute-1 nova_compute[225855]: 2026-01-20 14:58:31.526 225859 DEBUG os_brick.initiator.connectors.lightos [None req-0af08aac-4274-42e8-a15a-78e36ad79581 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Jan 20 14:58:31 compute-1 nova_compute[225855]: 2026-01-20 14:58:31.527 225859 DEBUG os_brick.initiator.connectors.lightos [None req-0af08aac-4274-42e8-a15a-78e36ad79581 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:5350774e-8b5e-4dba-80a9-92d405981c1d dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Jan 20 14:58:31 compute-1 nova_compute[225855]: 2026-01-20 14:58:31.527 225859 DEBUG os_brick.utils [None req-0af08aac-4274-42e8-a15a-78e36ad79581 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] <== get_connector_properties: return (65ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.101', 'host': 'compute-1.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:1821ea3dc03d', 'do_local_attach': False, 'nvme_hostid': '5350774e-8b5e-4dba-80a9-92d405981c1d', 'system uuid': '870b1f1c-f19c-477b-b282-ee6eeba50974', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:5350774e-8b5e-4dba-80a9-92d405981c1d', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Jan 20 14:58:31 compute-1 nova_compute[225855]: 2026-01-20 14:58:31.527 225859 DEBUG nova.virt.block_device [None req-0af08aac-4274-42e8-a15a-78e36ad79581 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] [instance: 1ebdefed-0903-4d72-b78d-912666c5ce61] Updating existing volume attachment record: 33fc5a49-bcff-4298-84b6-8c4fe61f57ab _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Jan 20 14:58:31 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:58:31 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:58:31 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:58:31.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:58:32 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:58:32 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:58:32 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:58:32.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:58:32 compute-1 nova_compute[225855]: 2026-01-20 14:58:32.426 225859 DEBUG nova.objects.instance [None req-0af08aac-4274-42e8-a15a-78e36ad79581 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] Lazy-loading 'flavor' on Instance uuid 1ebdefed-0903-4d72-b78d-912666c5ce61 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:58:32 compute-1 nova_compute[225855]: 2026-01-20 14:58:32.464 225859 DEBUG nova.virt.libvirt.driver [None req-0af08aac-4274-42e8-a15a-78e36ad79581 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] [instance: 1ebdefed-0903-4d72-b78d-912666c5ce61] Attempting to attach volume 3381d324-93a9-4d2f-ab25-8460bb2b8e95 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
Jan 20 14:58:32 compute-1 nova_compute[225855]: 2026-01-20 14:58:32.467 225859 DEBUG nova.virt.libvirt.guest [None req-0af08aac-4274-42e8-a15a-78e36ad79581 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] attach device xml: <disk type="network" device="disk">
Jan 20 14:58:32 compute-1 nova_compute[225855]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 20 14:58:32 compute-1 nova_compute[225855]:   <source protocol="rbd" name="volumes/volume-3381d324-93a9-4d2f-ab25-8460bb2b8e95">
Jan 20 14:58:32 compute-1 nova_compute[225855]:     <host name="192.168.122.100" port="6789"/>
Jan 20 14:58:32 compute-1 nova_compute[225855]:     <host name="192.168.122.102" port="6789"/>
Jan 20 14:58:32 compute-1 nova_compute[225855]:     <host name="192.168.122.101" port="6789"/>
Jan 20 14:58:32 compute-1 nova_compute[225855]:   </source>
Jan 20 14:58:32 compute-1 nova_compute[225855]:   <auth username="openstack">
Jan 20 14:58:32 compute-1 nova_compute[225855]:     <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:58:32 compute-1 nova_compute[225855]:   </auth>
Jan 20 14:58:32 compute-1 nova_compute[225855]:   <target dev="vdb" bus="virtio"/>
Jan 20 14:58:32 compute-1 nova_compute[225855]:   <serial>3381d324-93a9-4d2f-ab25-8460bb2b8e95</serial>
Jan 20 14:58:32 compute-1 nova_compute[225855]: </disk>
Jan 20 14:58:32 compute-1 nova_compute[225855]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Jan 20 14:58:32 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/721070655' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:58:32 compute-1 ceph-mon[81775]: pgmap v2194: 321 pgs: 321 active+clean; 300 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 25 KiB/s rd, 181 KiB/s wr, 34 op/s
Jan 20 14:58:32 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/2569693860' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:58:32 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e321 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:58:32 compute-1 nova_compute[225855]: 2026-01-20 14:58:32.678 225859 DEBUG nova.virt.libvirt.driver [None req-0af08aac-4274-42e8-a15a-78e36ad79581 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:58:32 compute-1 nova_compute[225855]: 2026-01-20 14:58:32.679 225859 DEBUG nova.virt.libvirt.driver [None req-0af08aac-4274-42e8-a15a-78e36ad79581 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:58:32 compute-1 nova_compute[225855]: 2026-01-20 14:58:32.680 225859 DEBUG nova.virt.libvirt.driver [None req-0af08aac-4274-42e8-a15a-78e36ad79581 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:58:32 compute-1 nova_compute[225855]: 2026-01-20 14:58:32.680 225859 DEBUG nova.virt.libvirt.driver [None req-0af08aac-4274-42e8-a15a-78e36ad79581 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] No VIF found with MAC fa:16:3e:cd:b7:b1, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 20 14:58:32 compute-1 nova_compute[225855]: 2026-01-20 14:58:32.802 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:58:33 compute-1 nova_compute[225855]: 2026-01-20 14:58:33.013 225859 DEBUG oslo_concurrency.lockutils [None req-0af08aac-4274-42e8-a15a-78e36ad79581 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] Lock "1ebdefed-0903-4d72-b78d-912666c5ce61" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 1.861s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:58:33 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:58:33 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:58:33 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:58:33.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:58:33 compute-1 sudo[280665]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:58:33 compute-1 sudo[280665]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:58:33 compute-1 sudo[280665]: pam_unix(sudo:session): session closed for user root
Jan 20 14:58:33 compute-1 sudo[280690]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:58:33 compute-1 sudo[280690]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:58:33 compute-1 sudo[280690]: pam_unix(sudo:session): session closed for user root
Jan 20 14:58:34 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:58:34 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:58:34 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:58:34.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:58:35 compute-1 nova_compute[225855]: 2026-01-20 14:58:35.194 225859 DEBUG oslo_concurrency.lockutils [None req-12d62678-a738-4b9a-89aa-a51e996a5b41 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] Acquiring lock "1ebdefed-0903-4d72-b78d-912666c5ce61" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:58:35 compute-1 nova_compute[225855]: 2026-01-20 14:58:35.197 225859 DEBUG oslo_concurrency.lockutils [None req-12d62678-a738-4b9a-89aa-a51e996a5b41 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] Lock "1ebdefed-0903-4d72-b78d-912666c5ce61" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:58:35 compute-1 nova_compute[225855]: 2026-01-20 14:58:35.221 225859 DEBUG nova.objects.instance [None req-12d62678-a738-4b9a-89aa-a51e996a5b41 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] Lazy-loading 'flavor' on Instance uuid 1ebdefed-0903-4d72-b78d-912666c5ce61 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:58:35 compute-1 nova_compute[225855]: 2026-01-20 14:58:35.309 225859 DEBUG oslo_concurrency.lockutils [None req-12d62678-a738-4b9a-89aa-a51e996a5b41 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] Lock "1ebdefed-0903-4d72-b78d-912666c5ce61" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.112s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:58:35 compute-1 nova_compute[225855]: 2026-01-20 14:58:35.575 225859 DEBUG oslo_concurrency.lockutils [None req-12d62678-a738-4b9a-89aa-a51e996a5b41 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] Acquiring lock "1ebdefed-0903-4d72-b78d-912666c5ce61" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:58:35 compute-1 nova_compute[225855]: 2026-01-20 14:58:35.576 225859 DEBUG oslo_concurrency.lockutils [None req-12d62678-a738-4b9a-89aa-a51e996a5b41 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] Lock "1ebdefed-0903-4d72-b78d-912666c5ce61" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:58:35 compute-1 nova_compute[225855]: 2026-01-20 14:58:35.576 225859 INFO nova.compute.manager [None req-12d62678-a738-4b9a-89aa-a51e996a5b41 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] [instance: 1ebdefed-0903-4d72-b78d-912666c5ce61] Attaching volume 0d487092-de99-40b0-be3f-425947d7010c to /dev/vdc
Jan 20 14:58:35 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:58:35 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:58:35 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:58:35.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:58:35 compute-1 nova_compute[225855]: 2026-01-20 14:58:35.742 225859 DEBUG os_brick.utils [None req-12d62678-a738-4b9a-89aa-a51e996a5b41 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.101', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-1.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Jan 20 14:58:35 compute-1 nova_compute[225855]: 2026-01-20 14:58:35.744 231081 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:58:35 compute-1 nova_compute[225855]: 2026-01-20 14:58:35.756 231081 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:58:35 compute-1 nova_compute[225855]: 2026-01-20 14:58:35.756 231081 DEBUG oslo.privsep.daemon [-] privsep: reply[5cef7ea8-bbd6-4261-895d-5eaec3439c85]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:58:35 compute-1 nova_compute[225855]: 2026-01-20 14:58:35.758 231081 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:58:35 compute-1 nova_compute[225855]: 2026-01-20 14:58:35.768 231081 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:58:35 compute-1 nova_compute[225855]: 2026-01-20 14:58:35.768 231081 DEBUG oslo.privsep.daemon [-] privsep: reply[4789073c-dc63-4300-8a67-09379cd2c259]: (4, ('InitiatorName=iqn.1994-05.com.redhat:1821ea3dc03d', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:58:35 compute-1 nova_compute[225855]: 2026-01-20 14:58:35.770 231081 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:58:35 compute-1 nova_compute[225855]: 2026-01-20 14:58:35.780 231081 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:58:35 compute-1 nova_compute[225855]: 2026-01-20 14:58:35.780 231081 DEBUG oslo.privsep.daemon [-] privsep: reply[ed4b15bd-4e02-43d1-9f9d-c89df80f63b3]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:58:35 compute-1 nova_compute[225855]: 2026-01-20 14:58:35.782 231081 DEBUG oslo.privsep.daemon [-] privsep: reply[970f23d2-4f2a-4d38-bc79-34d366de9072]: (4, '870b1f1c-f19c-477b-b282-ee6eeba50974') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:58:35 compute-1 nova_compute[225855]: 2026-01-20 14:58:35.782 225859 DEBUG oslo_concurrency.processutils [None req-12d62678-a738-4b9a-89aa-a51e996a5b41 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:58:35 compute-1 nova_compute[225855]: 2026-01-20 14:58:35.810 225859 DEBUG oslo_concurrency.processutils [None req-12d62678-a738-4b9a-89aa-a51e996a5b41 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] CMD "nvme version" returned: 0 in 0.027s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:58:35 compute-1 nova_compute[225855]: 2026-01-20 14:58:35.813 225859 DEBUG os_brick.initiator.connectors.lightos [None req-12d62678-a738-4b9a-89aa-a51e996a5b41 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Jan 20 14:58:35 compute-1 nova_compute[225855]: 2026-01-20 14:58:35.813 225859 DEBUG os_brick.initiator.connectors.lightos [None req-12d62678-a738-4b9a-89aa-a51e996a5b41 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Jan 20 14:58:35 compute-1 nova_compute[225855]: 2026-01-20 14:58:35.813 225859 DEBUG os_brick.initiator.connectors.lightos [None req-12d62678-a738-4b9a-89aa-a51e996a5b41 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:5350774e-8b5e-4dba-80a9-92d405981c1d dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Jan 20 14:58:35 compute-1 nova_compute[225855]: 2026-01-20 14:58:35.814 225859 DEBUG os_brick.utils [None req-12d62678-a738-4b9a-89aa-a51e996a5b41 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] <== get_connector_properties: return (70ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.101', 'host': 'compute-1.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:1821ea3dc03d', 'do_local_attach': False, 'nvme_hostid': '5350774e-8b5e-4dba-80a9-92d405981c1d', 'system uuid': '870b1f1c-f19c-477b-b282-ee6eeba50974', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:5350774e-8b5e-4dba-80a9-92d405981c1d', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Jan 20 14:58:35 compute-1 nova_compute[225855]: 2026-01-20 14:58:35.814 225859 DEBUG nova.virt.block_device [None req-12d62678-a738-4b9a-89aa-a51e996a5b41 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] [instance: 1ebdefed-0903-4d72-b78d-912666c5ce61] Updating existing volume attachment record: aa9114da-45ac-4223-b30c-7a489822200c _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Jan 20 14:58:36 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:58:36 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:58:36 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:58:36.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:58:36 compute-1 nova_compute[225855]: 2026-01-20 14:58:36.744 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:58:36 compute-1 ceph-mon[81775]: pgmap v2195: 321 pgs: 321 active+clean; 322 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 11 KiB/s rd, 1.2 MiB/s wr, 17 op/s
Jan 20 14:58:37 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:58:37 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:58:37 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:58:37.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:58:37 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e321 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:58:37 compute-1 ceph-mon[81775]: pgmap v2196: 321 pgs: 321 active+clean; 392 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 47 KiB/s rd, 3.5 MiB/s wr, 72 op/s
Jan 20 14:58:37 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/3023486754' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:58:37 compute-1 nova_compute[225855]: 2026-01-20 14:58:37.843 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:58:37 compute-1 nova_compute[225855]: 2026-01-20 14:58:37.849 225859 DEBUG nova.objects.instance [None req-12d62678-a738-4b9a-89aa-a51e996a5b41 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] Lazy-loading 'flavor' on Instance uuid 1ebdefed-0903-4d72-b78d-912666c5ce61 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:58:37 compute-1 nova_compute[225855]: 2026-01-20 14:58:37.880 225859 DEBUG nova.virt.libvirt.driver [None req-12d62678-a738-4b9a-89aa-a51e996a5b41 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] [instance: 1ebdefed-0903-4d72-b78d-912666c5ce61] Attempting to attach volume 0d487092-de99-40b0-be3f-425947d7010c with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
Jan 20 14:58:37 compute-1 nova_compute[225855]: 2026-01-20 14:58:37.883 225859 DEBUG nova.virt.libvirt.guest [None req-12d62678-a738-4b9a-89aa-a51e996a5b41 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] attach device xml: <disk type="network" device="disk">
Jan 20 14:58:37 compute-1 nova_compute[225855]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 20 14:58:37 compute-1 nova_compute[225855]:   <source protocol="rbd" name="volumes/volume-0d487092-de99-40b0-be3f-425947d7010c">
Jan 20 14:58:37 compute-1 nova_compute[225855]:     <host name="192.168.122.100" port="6789"/>
Jan 20 14:58:37 compute-1 nova_compute[225855]:     <host name="192.168.122.102" port="6789"/>
Jan 20 14:58:37 compute-1 nova_compute[225855]:     <host name="192.168.122.101" port="6789"/>
Jan 20 14:58:37 compute-1 nova_compute[225855]:   </source>
Jan 20 14:58:37 compute-1 nova_compute[225855]:   <auth username="openstack">
Jan 20 14:58:37 compute-1 nova_compute[225855]:     <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:58:37 compute-1 nova_compute[225855]:   </auth>
Jan 20 14:58:37 compute-1 nova_compute[225855]:   <target dev="vdc" bus="virtio"/>
Jan 20 14:58:37 compute-1 nova_compute[225855]:   <serial>0d487092-de99-40b0-be3f-425947d7010c</serial>
Jan 20 14:58:37 compute-1 nova_compute[225855]: </disk>
Jan 20 14:58:37 compute-1 nova_compute[225855]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Jan 20 14:58:37 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:58:37.934 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=43, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '12:bb:42', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '06:92:24:f7:15:56'}, ipsec=False) old=SB_Global(nb_cfg=42) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:58:37 compute-1 nova_compute[225855]: 2026-01-20 14:58:37.934 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:58:37 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:58:37.936 140354 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 20 14:58:37 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:58:37.937 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5ffd4ac3-9266-4927-98ad-20a17782c725, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '43'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:58:38 compute-1 nova_compute[225855]: 2026-01-20 14:58:38.016 225859 DEBUG nova.virt.libvirt.driver [None req-12d62678-a738-4b9a-89aa-a51e996a5b41 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:58:38 compute-1 nova_compute[225855]: 2026-01-20 14:58:38.017 225859 DEBUG nova.virt.libvirt.driver [None req-12d62678-a738-4b9a-89aa-a51e996a5b41 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:58:38 compute-1 nova_compute[225855]: 2026-01-20 14:58:38.017 225859 DEBUG nova.virt.libvirt.driver [None req-12d62678-a738-4b9a-89aa-a51e996a5b41 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:58:38 compute-1 nova_compute[225855]: 2026-01-20 14:58:38.017 225859 DEBUG nova.virt.libvirt.driver [None req-12d62678-a738-4b9a-89aa-a51e996a5b41 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] No BDM found with device name vdc, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:58:38 compute-1 nova_compute[225855]: 2026-01-20 14:58:38.017 225859 DEBUG nova.virt.libvirt.driver [None req-12d62678-a738-4b9a-89aa-a51e996a5b41 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] No VIF found with MAC fa:16:3e:cd:b7:b1, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 20 14:58:38 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:58:38 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:58:38 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:58:38.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:58:38 compute-1 nova_compute[225855]: 2026-01-20 14:58:38.432 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:58:38 compute-1 ceph-mon[81775]: pgmap v2197: 321 pgs: 321 active+clean; 392 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 36 KiB/s rd, 3.5 MiB/s wr, 57 op/s
Jan 20 14:58:38 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/2554415008' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:58:39 compute-1 nova_compute[225855]: 2026-01-20 14:58:39.030 225859 DEBUG oslo_concurrency.lockutils [None req-12d62678-a738-4b9a-89aa-a51e996a5b41 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] Lock "1ebdefed-0903-4d72-b78d-912666c5ce61" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 3.454s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:58:39 compute-1 nova_compute[225855]: 2026-01-20 14:58:39.340 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:58:39 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:58:39 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:58:39 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:58:39.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:58:39 compute-1 sudo[280744]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:58:39 compute-1 sudo[280744]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:58:39 compute-1 sudo[280744]: pam_unix(sudo:session): session closed for user root
Jan 20 14:58:39 compute-1 sudo[280770]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:58:39 compute-1 sudo[280770]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:58:39 compute-1 sudo[280770]: pam_unix(sudo:session): session closed for user root
Jan 20 14:58:39 compute-1 sudo[280795]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:58:39 compute-1 sudo[280795]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:58:39 compute-1 sudo[280795]: pam_unix(sudo:session): session closed for user root
Jan 20 14:58:39 compute-1 sudo[280820]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 20 14:58:39 compute-1 sudo[280820]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:58:40 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:58:40 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:58:40 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:58:40.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:58:40 compute-1 sudo[280820]: pam_unix(sudo:session): session closed for user root
Jan 20 14:58:40 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/761535922' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:58:40 compute-1 sudo[280875]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:58:40 compute-1 sudo[280875]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:58:40 compute-1 sudo[280875]: pam_unix(sudo:session): session closed for user root
Jan 20 14:58:40 compute-1 sudo[280900]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:58:40 compute-1 sudo[280900]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:58:40 compute-1 sudo[280900]: pam_unix(sudo:session): session closed for user root
Jan 20 14:58:40 compute-1 sudo[280925]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:58:40 compute-1 sudo[280925]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:58:40 compute-1 sudo[280925]: pam_unix(sudo:session): session closed for user root
Jan 20 14:58:40 compute-1 sudo[280950]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- inventory --format=json-pretty --filter-for-batch
Jan 20 14:58:40 compute-1 sudo[280950]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:58:40 compute-1 podman[281014]: 2026-01-20 14:58:40.880060775 +0000 UTC m=+0.042030676 container create cc7b8f5914dd70d3c1118742ed90c9d05585301c0188a641cfa384dca5a7b650 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_bartik, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 20 14:58:40 compute-1 systemd[1]: Started libpod-conmon-cc7b8f5914dd70d3c1118742ed90c9d05585301c0188a641cfa384dca5a7b650.scope.
Jan 20 14:58:40 compute-1 systemd[1]: Started libcrun container.
Jan 20 14:58:40 compute-1 podman[281014]: 2026-01-20 14:58:40.86037648 +0000 UTC m=+0.022346411 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:58:40 compute-1 podman[281014]: 2026-01-20 14:58:40.970988751 +0000 UTC m=+0.132958662 container init cc7b8f5914dd70d3c1118742ed90c9d05585301c0188a641cfa384dca5a7b650 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_bartik, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Jan 20 14:58:40 compute-1 podman[281014]: 2026-01-20 14:58:40.978823952 +0000 UTC m=+0.140793863 container start cc7b8f5914dd70d3c1118742ed90c9d05585301c0188a641cfa384dca5a7b650 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_bartik, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:58:40 compute-1 podman[281014]: 2026-01-20 14:58:40.982440894 +0000 UTC m=+0.144410815 container attach cc7b8f5914dd70d3c1118742ed90c9d05585301c0188a641cfa384dca5a7b650 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_bartik, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Jan 20 14:58:40 compute-1 hopeful_bartik[281030]: 167 167
Jan 20 14:58:40 compute-1 systemd[1]: libpod-cc7b8f5914dd70d3c1118742ed90c9d05585301c0188a641cfa384dca5a7b650.scope: Deactivated successfully.
Jan 20 14:58:40 compute-1 conmon[281030]: conmon cc7b8f5914dd70d3c111 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-cc7b8f5914dd70d3c1118742ed90c9d05585301c0188a641cfa384dca5a7b650.scope/container/memory.events
Jan 20 14:58:40 compute-1 podman[281014]: 2026-01-20 14:58:40.987830636 +0000 UTC m=+0.149800537 container died cc7b8f5914dd70d3c1118742ed90c9d05585301c0188a641cfa384dca5a7b650 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_bartik, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:58:41 compute-1 systemd[1]: var-lib-containers-storage-overlay-36f29b6d777380cd9507e337a3c5538750b36403649d8f3b104c6741d0c68e4b-merged.mount: Deactivated successfully.
Jan 20 14:58:41 compute-1 podman[281014]: 2026-01-20 14:58:41.031422086 +0000 UTC m=+0.193391987 container remove cc7b8f5914dd70d3c1118742ed90c9d05585301c0188a641cfa384dca5a7b650 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_bartik, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 20 14:58:41 compute-1 systemd[1]: libpod-conmon-cc7b8f5914dd70d3c1118742ed90c9d05585301c0188a641cfa384dca5a7b650.scope: Deactivated successfully.
Jan 20 14:58:41 compute-1 podman[281054]: 2026-01-20 14:58:41.203257904 +0000 UTC m=+0.047069660 container create 23078b88548d945c9f4e8597c26978ce3a7e785df42bedfca978cb6e87ecbd3b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_ride, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:58:41 compute-1 systemd[1]: Started libpod-conmon-23078b88548d945c9f4e8597c26978ce3a7e785df42bedfca978cb6e87ecbd3b.scope.
Jan 20 14:58:41 compute-1 systemd[1]: Started libcrun container.
Jan 20 14:58:41 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dbfe4ec9ae8e87949c3114b277088125c28f9a2cdb29103e7a54dd08f6848943/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:58:41 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dbfe4ec9ae8e87949c3114b277088125c28f9a2cdb29103e7a54dd08f6848943/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:58:41 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dbfe4ec9ae8e87949c3114b277088125c28f9a2cdb29103e7a54dd08f6848943/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:58:41 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dbfe4ec9ae8e87949c3114b277088125c28f9a2cdb29103e7a54dd08f6848943/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:58:41 compute-1 podman[281054]: 2026-01-20 14:58:41.183059604 +0000 UTC m=+0.026871380 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:58:41 compute-1 podman[281054]: 2026-01-20 14:58:41.277112567 +0000 UTC m=+0.120924343 container init 23078b88548d945c9f4e8597c26978ce3a7e785df42bedfca978cb6e87ecbd3b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_ride, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Jan 20 14:58:41 compute-1 podman[281054]: 2026-01-20 14:58:41.282785437 +0000 UTC m=+0.126597193 container start 23078b88548d945c9f4e8597c26978ce3a7e785df42bedfca978cb6e87ecbd3b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_ride, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 20 14:58:41 compute-1 podman[281054]: 2026-01-20 14:58:41.286999646 +0000 UTC m=+0.130811402 container attach 23078b88548d945c9f4e8597c26978ce3a7e785df42bedfca978cb6e87ecbd3b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_ride, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:58:41 compute-1 nova_compute[225855]: 2026-01-20 14:58:41.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:58:41 compute-1 nova_compute[225855]: 2026-01-20 14:58:41.341 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 14:58:41 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:58:41 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:58:41 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:58:41.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:58:41 compute-1 nova_compute[225855]: 2026-01-20 14:58:41.747 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:58:41 compute-1 ceph-mon[81775]: pgmap v2198: 321 pgs: 321 active+clean; 392 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 36 KiB/s rd, 3.5 MiB/s wr, 57 op/s
Jan 20 14:58:41 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:58:41 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:58:42 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:58:42 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:58:42 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:58:42.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:58:42 compute-1 nova_compute[225855]: 2026-01-20 14:58:42.341 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:58:42 compute-1 nova_compute[225855]: 2026-01-20 14:58:42.343 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 14:58:42 compute-1 nova_compute[225855]: 2026-01-20 14:58:42.343 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 20 14:58:42 compute-1 interesting_ride[281070]: [
Jan 20 14:58:42 compute-1 interesting_ride[281070]:     {
Jan 20 14:58:42 compute-1 interesting_ride[281070]:         "available": false,
Jan 20 14:58:42 compute-1 interesting_ride[281070]:         "ceph_device": false,
Jan 20 14:58:42 compute-1 interesting_ride[281070]:         "device_id": "QEMU_DVD-ROM_QM00001",
Jan 20 14:58:42 compute-1 interesting_ride[281070]:         "lsm_data": {},
Jan 20 14:58:42 compute-1 interesting_ride[281070]:         "lvs": [],
Jan 20 14:58:42 compute-1 interesting_ride[281070]:         "path": "/dev/sr0",
Jan 20 14:58:42 compute-1 interesting_ride[281070]:         "rejected_reasons": [
Jan 20 14:58:42 compute-1 interesting_ride[281070]:             "Insufficient space (<5GB)",
Jan 20 14:58:42 compute-1 interesting_ride[281070]:             "Has a FileSystem"
Jan 20 14:58:42 compute-1 interesting_ride[281070]:         ],
Jan 20 14:58:42 compute-1 interesting_ride[281070]:         "sys_api": {
Jan 20 14:58:42 compute-1 interesting_ride[281070]:             "actuators": null,
Jan 20 14:58:42 compute-1 interesting_ride[281070]:             "device_nodes": "sr0",
Jan 20 14:58:42 compute-1 interesting_ride[281070]:             "devname": "sr0",
Jan 20 14:58:42 compute-1 interesting_ride[281070]:             "human_readable_size": "482.00 KB",
Jan 20 14:58:42 compute-1 interesting_ride[281070]:             "id_bus": "ata",
Jan 20 14:58:42 compute-1 interesting_ride[281070]:             "model": "QEMU DVD-ROM",
Jan 20 14:58:42 compute-1 interesting_ride[281070]:             "nr_requests": "2",
Jan 20 14:58:42 compute-1 interesting_ride[281070]:             "parent": "/dev/sr0",
Jan 20 14:58:42 compute-1 interesting_ride[281070]:             "partitions": {},
Jan 20 14:58:42 compute-1 interesting_ride[281070]:             "path": "/dev/sr0",
Jan 20 14:58:42 compute-1 interesting_ride[281070]:             "removable": "1",
Jan 20 14:58:42 compute-1 interesting_ride[281070]:             "rev": "2.5+",
Jan 20 14:58:42 compute-1 interesting_ride[281070]:             "ro": "0",
Jan 20 14:58:42 compute-1 interesting_ride[281070]:             "rotational": "1",
Jan 20 14:58:42 compute-1 interesting_ride[281070]:             "sas_address": "",
Jan 20 14:58:42 compute-1 interesting_ride[281070]:             "sas_device_handle": "",
Jan 20 14:58:42 compute-1 interesting_ride[281070]:             "scheduler_mode": "mq-deadline",
Jan 20 14:58:42 compute-1 interesting_ride[281070]:             "sectors": 0,
Jan 20 14:58:42 compute-1 interesting_ride[281070]:             "sectorsize": "2048",
Jan 20 14:58:42 compute-1 interesting_ride[281070]:             "size": 493568.0,
Jan 20 14:58:42 compute-1 interesting_ride[281070]:             "support_discard": "2048",
Jan 20 14:58:42 compute-1 interesting_ride[281070]:             "type": "disk",
Jan 20 14:58:42 compute-1 interesting_ride[281070]:             "vendor": "QEMU"
Jan 20 14:58:42 compute-1 interesting_ride[281070]:         }
Jan 20 14:58:42 compute-1 interesting_ride[281070]:     }
Jan 20 14:58:42 compute-1 interesting_ride[281070]: ]
Jan 20 14:58:42 compute-1 systemd[1]: libpod-23078b88548d945c9f4e8597c26978ce3a7e785df42bedfca978cb6e87ecbd3b.scope: Deactivated successfully.
Jan 20 14:58:42 compute-1 podman[281054]: 2026-01-20 14:58:42.485889889 +0000 UTC m=+1.329701645 container died 23078b88548d945c9f4e8597c26978ce3a7e785df42bedfca978cb6e87ecbd3b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_ride, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 20 14:58:42 compute-1 systemd[1]: libpod-23078b88548d945c9f4e8597c26978ce3a7e785df42bedfca978cb6e87ecbd3b.scope: Consumed 1.183s CPU time.
Jan 20 14:58:42 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e321 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:58:42 compute-1 systemd[1]: var-lib-containers-storage-overlay-dbfe4ec9ae8e87949c3114b277088125c28f9a2cdb29103e7a54dd08f6848943-merged.mount: Deactivated successfully.
Jan 20 14:58:42 compute-1 podman[281054]: 2026-01-20 14:58:42.747688285 +0000 UTC m=+1.591500041 container remove 23078b88548d945c9f4e8597c26978ce3a7e785df42bedfca978cb6e87ecbd3b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_ride, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:58:42 compute-1 systemd[1]: libpod-conmon-23078b88548d945c9f4e8597c26978ce3a7e785df42bedfca978cb6e87ecbd3b.scope: Deactivated successfully.
Jan 20 14:58:42 compute-1 sudo[280950]: pam_unix(sudo:session): session closed for user root
Jan 20 14:58:42 compute-1 nova_compute[225855]: 2026-01-20 14:58:42.845 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:58:42 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:58:42 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:58:42 compute-1 ceph-mon[81775]: pgmap v2199: 321 pgs: 321 active+clean; 392 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 38 KiB/s rd, 3.5 MiB/s wr, 60 op/s
Jan 20 14:58:42 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:58:43 compute-1 nova_compute[225855]: 2026-01-20 14:58:43.058 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "refresh_cache-1ebdefed-0903-4d72-b78d-912666c5ce61" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:58:43 compute-1 nova_compute[225855]: 2026-01-20 14:58:43.058 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquired lock "refresh_cache-1ebdefed-0903-4d72-b78d-912666c5ce61" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:58:43 compute-1 nova_compute[225855]: 2026-01-20 14:58:43.059 225859 DEBUG nova.network.neutron [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] [instance: 1ebdefed-0903-4d72-b78d-912666c5ce61] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 20 14:58:43 compute-1 nova_compute[225855]: 2026-01-20 14:58:43.059 225859 DEBUG nova.objects.instance [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 1ebdefed-0903-4d72-b78d-912666c5ce61 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:58:43 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:58:43 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:58:43 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:58:43.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:58:43 compute-1 nova_compute[225855]: 2026-01-20 14:58:43.615 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:58:43 compute-1 NetworkManager[49104]: <info>  [1768921123.6158] manager: (patch-provnet-b62c391b-f7a3-4a38-a0df-72ac0383ca74-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/245)
Jan 20 14:58:43 compute-1 NetworkManager[49104]: <info>  [1768921123.6170] manager: (patch-br-int-to-provnet-b62c391b-f7a3-4a38-a0df-72ac0383ca74): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/246)
Jan 20 14:58:43 compute-1 nova_compute[225855]: 2026-01-20 14:58:43.868 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:58:43 compute-1 ovn_controller[130490]: 2026-01-20T14:58:43Z|00566|binding|INFO|Releasing lport 1623097d-35b0-4d71-9dc2-c4d659492102 from this chassis (sb_readonly=0)
Jan 20 14:58:43 compute-1 nova_compute[225855]: 2026-01-20 14:58:43.893 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:58:44 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:58:44 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/3110291340' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:58:44 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/2546240148' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:58:44 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:58:44 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:58:44 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:58:44.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:58:45 compute-1 nova_compute[225855]: 2026-01-20 14:58:45.006 225859 DEBUG nova.compute.manager [req-d9775999-6ef5-477b-9731-7984c7264ff7 req-e178faaa-69bb-40e9-aa96-3cce2867e5bf 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 1ebdefed-0903-4d72-b78d-912666c5ce61] Received event network-changed-3067803c-07f3-4a15-a5ee-47f9a770efca external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:58:45 compute-1 nova_compute[225855]: 2026-01-20 14:58:45.006 225859 DEBUG nova.compute.manager [req-d9775999-6ef5-477b-9731-7984c7264ff7 req-e178faaa-69bb-40e9-aa96-3cce2867e5bf 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 1ebdefed-0903-4d72-b78d-912666c5ce61] Refreshing instance network info cache due to event network-changed-3067803c-07f3-4a15-a5ee-47f9a770efca. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 14:58:45 compute-1 nova_compute[225855]: 2026-01-20 14:58:45.007 225859 DEBUG oslo_concurrency.lockutils [req-d9775999-6ef5-477b-9731-7984c7264ff7 req-e178faaa-69bb-40e9-aa96-3cce2867e5bf 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-1ebdefed-0903-4d72-b78d-912666c5ce61" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:58:45 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:58:45 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:58:45 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:58:45.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:58:46 compute-1 ceph-mon[81775]: pgmap v2200: 321 pgs: 321 active+clean; 392 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 38 KiB/s rd, 3.5 MiB/s wr, 60 op/s
Jan 20 14:58:46 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:58:46 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:58:46 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:58:46.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:58:46 compute-1 nova_compute[225855]: 2026-01-20 14:58:46.749 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:58:47 compute-1 ceph-mon[81775]: pgmap v2201: 321 pgs: 321 active+clean; 392 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 759 KiB/s rd, 2.4 MiB/s wr, 92 op/s
Jan 20 14:58:47 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:58:47 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:58:47 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:58:47 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 14:58:47 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:58:47 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 14:58:47 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 14:58:47 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:58:47 compute-1 nova_compute[225855]: 2026-01-20 14:58:47.514 225859 DEBUG nova.network.neutron [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] [instance: 1ebdefed-0903-4d72-b78d-912666c5ce61] Updating instance_info_cache with network_info: [{"id": "3067803c-07f3-4a15-a5ee-47f9a770efca", "address": "fa:16:3e:cd:b7:b1", "network": {"id": "58d966e1-4d26-414a-920e-0be2d77abb59", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-1896990059-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.174", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "107c1f3b5b7b413d9a389ca1166e331f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3067803c-07", "ovs_interfaceid": "3067803c-07f3-4a15-a5ee-47f9a770efca", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:58:47 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:58:47 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/579533939' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:58:47 compute-1 nova_compute[225855]: 2026-01-20 14:58:47.594 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Releasing lock "refresh_cache-1ebdefed-0903-4d72-b78d-912666c5ce61" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:58:47 compute-1 nova_compute[225855]: 2026-01-20 14:58:47.595 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] [instance: 1ebdefed-0903-4d72-b78d-912666c5ce61] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 20 14:58:47 compute-1 nova_compute[225855]: 2026-01-20 14:58:47.595 225859 DEBUG oslo_concurrency.lockutils [req-d9775999-6ef5-477b-9731-7984c7264ff7 req-e178faaa-69bb-40e9-aa96-3cce2867e5bf 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-1ebdefed-0903-4d72-b78d-912666c5ce61" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:58:47 compute-1 nova_compute[225855]: 2026-01-20 14:58:47.595 225859 DEBUG nova.network.neutron [req-d9775999-6ef5-477b-9731-7984c7264ff7 req-e178faaa-69bb-40e9-aa96-3cce2867e5bf 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 1ebdefed-0903-4d72-b78d-912666c5ce61] Refreshing network info cache for port 3067803c-07f3-4a15-a5ee-47f9a770efca _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 14:58:47 compute-1 nova_compute[225855]: 2026-01-20 14:58:47.596 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:58:47 compute-1 nova_compute[225855]: 2026-01-20 14:58:47.597 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:58:47 compute-1 nova_compute[225855]: 2026-01-20 14:58:47.597 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:58:47 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:58:47 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:58:47 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:58:47.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:58:47 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e321 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:58:47 compute-1 nova_compute[225855]: 2026-01-20 14:58:47.848 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:58:48 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:58:48 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:58:48 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:58:48.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:58:48 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/579533939' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:58:49 compute-1 podman[282214]: 2026-01-20 14:58:49.070755656 +0000 UTC m=+0.100107715 container health_status 72be95c12e041eb2cd1a16ad70dfa1391d9f03a5f3180a614ead9459dcfcd46f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 20 14:58:49 compute-1 nova_compute[225855]: 2026-01-20 14:58:49.360 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:58:49 compute-1 nova_compute[225855]: 2026-01-20 14:58:49.392 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:58:49 compute-1 nova_compute[225855]: 2026-01-20 14:58:49.392 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:58:49 compute-1 nova_compute[225855]: 2026-01-20 14:58:49.392 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:58:49 compute-1 nova_compute[225855]: 2026-01-20 14:58:49.393 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 14:58:49 compute-1 nova_compute[225855]: 2026-01-20 14:58:49.393 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:58:49 compute-1 ceph-mon[81775]: pgmap v2202: 321 pgs: 321 active+clean; 392 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 723 KiB/s rd, 15 KiB/s wr, 37 op/s
Jan 20 14:58:49 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/2719794551' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:58:49 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/4234632430' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:58:49 compute-1 nova_compute[225855]: 2026-01-20 14:58:49.458 225859 DEBUG nova.compute.manager [req-1228f766-6739-4e95-bcd3-57f32a6438c2 req-951e0959-c92a-4128-9f5d-3bfe55509b04 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 1ebdefed-0903-4d72-b78d-912666c5ce61] Received event network-changed-3067803c-07f3-4a15-a5ee-47f9a770efca external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:58:49 compute-1 nova_compute[225855]: 2026-01-20 14:58:49.459 225859 DEBUG nova.compute.manager [req-1228f766-6739-4e95-bcd3-57f32a6438c2 req-951e0959-c92a-4128-9f5d-3bfe55509b04 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 1ebdefed-0903-4d72-b78d-912666c5ce61] Refreshing instance network info cache due to event network-changed-3067803c-07f3-4a15-a5ee-47f9a770efca. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 14:58:49 compute-1 nova_compute[225855]: 2026-01-20 14:58:49.459 225859 DEBUG oslo_concurrency.lockutils [req-1228f766-6739-4e95-bcd3-57f32a6438c2 req-951e0959-c92a-4128-9f5d-3bfe55509b04 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-1ebdefed-0903-4d72-b78d-912666c5ce61" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:58:49 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:58:49 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:58:49 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:58:49.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:58:49 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:58:49 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2379891077' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:58:49 compute-1 nova_compute[225855]: 2026-01-20 14:58:49.856 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:58:49 compute-1 nova_compute[225855]: 2026-01-20 14:58:49.942 225859 DEBUG nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] skipping disk for instance-0000008c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 14:58:49 compute-1 nova_compute[225855]: 2026-01-20 14:58:49.943 225859 DEBUG nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] skipping disk for instance-0000008c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 14:58:49 compute-1 nova_compute[225855]: 2026-01-20 14:58:49.947 225859 DEBUG nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] skipping disk for instance-0000008b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 14:58:49 compute-1 nova_compute[225855]: 2026-01-20 14:58:49.948 225859 DEBUG nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] skipping disk for instance-0000008b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 14:58:49 compute-1 nova_compute[225855]: 2026-01-20 14:58:49.948 225859 DEBUG nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] skipping disk for instance-0000008b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 14:58:49 compute-1 nova_compute[225855]: 2026-01-20 14:58:49.948 225859 DEBUG nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] skipping disk for instance-0000008b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 14:58:50 compute-1 nova_compute[225855]: 2026-01-20 14:58:50.109 225859 DEBUG nova.network.neutron [req-d9775999-6ef5-477b-9731-7984c7264ff7 req-e178faaa-69bb-40e9-aa96-3cce2867e5bf 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 1ebdefed-0903-4d72-b78d-912666c5ce61] Updated VIF entry in instance network info cache for port 3067803c-07f3-4a15-a5ee-47f9a770efca. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 14:58:50 compute-1 nova_compute[225855]: 2026-01-20 14:58:50.109 225859 DEBUG nova.network.neutron [req-d9775999-6ef5-477b-9731-7984c7264ff7 req-e178faaa-69bb-40e9-aa96-3cce2867e5bf 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 1ebdefed-0903-4d72-b78d-912666c5ce61] Updating instance_info_cache with network_info: [{"id": "3067803c-07f3-4a15-a5ee-47f9a770efca", "address": "fa:16:3e:cd:b7:b1", "network": {"id": "58d966e1-4d26-414a-920e-0be2d77abb59", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-1896990059-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.174", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "107c1f3b5b7b413d9a389ca1166e331f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3067803c-07", "ovs_interfaceid": "3067803c-07f3-4a15-a5ee-47f9a770efca", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:58:50 compute-1 nova_compute[225855]: 2026-01-20 14:58:50.116 225859 WARNING nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 14:58:50 compute-1 nova_compute[225855]: 2026-01-20 14:58:50.117 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=3872MB free_disk=20.94619369506836GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 14:58:50 compute-1 nova_compute[225855]: 2026-01-20 14:58:50.117 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:58:50 compute-1 nova_compute[225855]: 2026-01-20 14:58:50.118 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:58:50 compute-1 nova_compute[225855]: 2026-01-20 14:58:50.127 225859 DEBUG oslo_concurrency.lockutils [req-d9775999-6ef5-477b-9731-7984c7264ff7 req-e178faaa-69bb-40e9-aa96-3cce2867e5bf 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-1ebdefed-0903-4d72-b78d-912666c5ce61" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:58:50 compute-1 nova_compute[225855]: 2026-01-20 14:58:50.128 225859 DEBUG oslo_concurrency.lockutils [req-1228f766-6739-4e95-bcd3-57f32a6438c2 req-951e0959-c92a-4128-9f5d-3bfe55509b04 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-1ebdefed-0903-4d72-b78d-912666c5ce61" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:58:50 compute-1 nova_compute[225855]: 2026-01-20 14:58:50.128 225859 DEBUG nova.network.neutron [req-1228f766-6739-4e95-bcd3-57f32a6438c2 req-951e0959-c92a-4128-9f5d-3bfe55509b04 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 1ebdefed-0903-4d72-b78d-912666c5ce61] Refreshing network info cache for port 3067803c-07f3-4a15-a5ee-47f9a770efca _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 14:58:50 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:58:50 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:58:50 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:58:50.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:58:50 compute-1 nova_compute[225855]: 2026-01-20 14:58:50.210 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Instance 1ebdefed-0903-4d72-b78d-912666c5ce61 actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 20 14:58:50 compute-1 nova_compute[225855]: 2026-01-20 14:58:50.211 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Instance b4c1468d-9914-426a-9464-c1167de53632 actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 20 14:58:50 compute-1 nova_compute[225855]: 2026-01-20 14:58:50.211 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 14:58:50 compute-1 nova_compute[225855]: 2026-01-20 14:58:50.211 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 14:58:50 compute-1 nova_compute[225855]: 2026-01-20 14:58:50.303 225859 DEBUG nova.scheduler.client.report [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Refreshing inventories for resource provider bbb02880-a710-4ac1-8b2c-5c09765848d1 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 20 14:58:50 compute-1 nova_compute[225855]: 2026-01-20 14:58:50.377 225859 DEBUG nova.scheduler.client.report [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Updating ProviderTree inventory for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 20 14:58:50 compute-1 nova_compute[225855]: 2026-01-20 14:58:50.377 225859 DEBUG nova.compute.provider_tree [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Updating inventory in ProviderTree for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 20 14:58:50 compute-1 nova_compute[225855]: 2026-01-20 14:58:50.404 225859 DEBUG nova.scheduler.client.report [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Refreshing aggregate associations for resource provider bbb02880-a710-4ac1-8b2c-5c09765848d1, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 20 14:58:50 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/2379891077' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:58:50 compute-1 ceph-mon[81775]: pgmap v2203: 321 pgs: 321 active+clean; 392 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 16 KiB/s wr, 141 op/s
Jan 20 14:58:50 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/2990147587' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:58:50 compute-1 nova_compute[225855]: 2026-01-20 14:58:50.429 225859 DEBUG nova.scheduler.client.report [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Refreshing trait associations for resource provider bbb02880-a710-4ac1-8b2c-5c09765848d1, traits: COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_STORAGE_BUS_SATA,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_TRUSTED_CERTS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSSE3,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_VOLUME_EXTEND,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_RESCUE_BFV,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_USB,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE42,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SOCKET_PCI_NUMA_AFFINITY _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 20 14:58:50 compute-1 nova_compute[225855]: 2026-01-20 14:58:50.506 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:58:50 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:58:50 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3942644152' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:58:50 compute-1 nova_compute[225855]: 2026-01-20 14:58:50.951 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:58:50 compute-1 nova_compute[225855]: 2026-01-20 14:58:50.960 225859 DEBUG nova.compute.provider_tree [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:58:50 compute-1 nova_compute[225855]: 2026-01-20 14:58:50.984 225859 DEBUG nova.scheduler.client.report [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:58:51 compute-1 nova_compute[225855]: 2026-01-20 14:58:51.010 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 14:58:51 compute-1 nova_compute[225855]: 2026-01-20 14:58:51.010 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.893s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:58:51 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/3942644152' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:58:51 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:58:51 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:58:51 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:58:51.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:58:51 compute-1 nova_compute[225855]: 2026-01-20 14:58:51.751 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:58:52 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:58:52 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:58:52 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:58:52.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:58:52 compute-1 ceph-mon[81775]: pgmap v2204: 321 pgs: 321 active+clean; 393 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 27 KiB/s wr, 191 op/s
Jan 20 14:58:52 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e321 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:58:52 compute-1 nova_compute[225855]: 2026-01-20 14:58:52.851 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:58:52 compute-1 nova_compute[225855]: 2026-01-20 14:58:52.880 225859 DEBUG nova.compute.manager [req-60aaeb3a-d272-4037-a354-4bac3988ba44 req-ddf24908-4dd9-46f1-b484-1dc55b6d0ebd 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 1ebdefed-0903-4d72-b78d-912666c5ce61] Received event network-changed-3067803c-07f3-4a15-a5ee-47f9a770efca external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:58:52 compute-1 nova_compute[225855]: 2026-01-20 14:58:52.880 225859 DEBUG nova.compute.manager [req-60aaeb3a-d272-4037-a354-4bac3988ba44 req-ddf24908-4dd9-46f1-b484-1dc55b6d0ebd 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 1ebdefed-0903-4d72-b78d-912666c5ce61] Refreshing instance network info cache due to event network-changed-3067803c-07f3-4a15-a5ee-47f9a770efca. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 14:58:52 compute-1 nova_compute[225855]: 2026-01-20 14:58:52.880 225859 DEBUG oslo_concurrency.lockutils [req-60aaeb3a-d272-4037-a354-4bac3988ba44 req-ddf24908-4dd9-46f1-b484-1dc55b6d0ebd 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-1ebdefed-0903-4d72-b78d-912666c5ce61" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:58:52 compute-1 nova_compute[225855]: 2026-01-20 14:58:52.989 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:58:52 compute-1 nova_compute[225855]: 2026-01-20 14:58:52.990 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:58:53 compute-1 nova_compute[225855]: 2026-01-20 14:58:53.094 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:58:53 compute-1 sudo[282287]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:58:53 compute-1 sudo[282287]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:58:53 compute-1 sudo[282287]: pam_unix(sudo:session): session closed for user root
Jan 20 14:58:53 compute-1 sudo[282312]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 14:58:53 compute-1 sudo[282312]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:58:53 compute-1 sudo[282312]: pam_unix(sudo:session): session closed for user root
Jan 20 14:58:53 compute-1 nova_compute[225855]: 2026-01-20 14:58:53.340 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:58:53 compute-1 nova_compute[225855]: 2026-01-20 14:58:53.342 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 20 14:58:53 compute-1 nova_compute[225855]: 2026-01-20 14:58:53.372 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 20 14:58:53 compute-1 nova_compute[225855]: 2026-01-20 14:58:53.475 225859 DEBUG nova.network.neutron [req-1228f766-6739-4e95-bcd3-57f32a6438c2 req-951e0959-c92a-4128-9f5d-3bfe55509b04 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 1ebdefed-0903-4d72-b78d-912666c5ce61] Updated VIF entry in instance network info cache for port 3067803c-07f3-4a15-a5ee-47f9a770efca. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 14:58:53 compute-1 nova_compute[225855]: 2026-01-20 14:58:53.476 225859 DEBUG nova.network.neutron [req-1228f766-6739-4e95-bcd3-57f32a6438c2 req-951e0959-c92a-4128-9f5d-3bfe55509b04 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 1ebdefed-0903-4d72-b78d-912666c5ce61] Updating instance_info_cache with network_info: [{"id": "3067803c-07f3-4a15-a5ee-47f9a770efca", "address": "fa:16:3e:cd:b7:b1", "network": {"id": "58d966e1-4d26-414a-920e-0be2d77abb59", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-1896990059-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.174", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "107c1f3b5b7b413d9a389ca1166e331f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3067803c-07", "ovs_interfaceid": "3067803c-07f3-4a15-a5ee-47f9a770efca", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:58:53 compute-1 nova_compute[225855]: 2026-01-20 14:58:53.491 225859 DEBUG oslo_concurrency.lockutils [req-1228f766-6739-4e95-bcd3-57f32a6438c2 req-951e0959-c92a-4128-9f5d-3bfe55509b04 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-1ebdefed-0903-4d72-b78d-912666c5ce61" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:58:53 compute-1 nova_compute[225855]: 2026-01-20 14:58:53.492 225859 DEBUG oslo_concurrency.lockutils [req-60aaeb3a-d272-4037-a354-4bac3988ba44 req-ddf24908-4dd9-46f1-b484-1dc55b6d0ebd 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-1ebdefed-0903-4d72-b78d-912666c5ce61" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:58:53 compute-1 nova_compute[225855]: 2026-01-20 14:58:53.492 225859 DEBUG nova.network.neutron [req-60aaeb3a-d272-4037-a354-4bac3988ba44 req-ddf24908-4dd9-46f1-b484-1dc55b6d0ebd 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 1ebdefed-0903-4d72-b78d-912666c5ce61] Refreshing network info cache for port 3067803c-07f3-4a15-a5ee-47f9a770efca _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 14:58:53 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:58:53 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:58:53 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:58:53.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:58:53 compute-1 sudo[282338]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:58:53 compute-1 sudo[282338]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:58:53 compute-1 sudo[282338]: pam_unix(sudo:session): session closed for user root
Jan 20 14:58:53 compute-1 sudo[282363]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:58:53 compute-1 sudo[282363]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:58:53 compute-1 sudo[282363]: pam_unix(sudo:session): session closed for user root
Jan 20 14:58:54 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:58:54 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:58:54 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:58:54 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:58:54 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:58:54.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:58:55 compute-1 ceph-mon[81775]: pgmap v2205: 321 pgs: 321 active+clean; 374 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 26 KiB/s wr, 282 op/s
Jan 20 14:58:55 compute-1 nova_compute[225855]: 2026-01-20 14:58:55.340 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:58:55 compute-1 nova_compute[225855]: 2026-01-20 14:58:55.340 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 20 14:58:55 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:58:55 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:58:55 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:58:55.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:58:56 compute-1 nova_compute[225855]: 2026-01-20 14:58:56.030 225859 DEBUG nova.network.neutron [req-60aaeb3a-d272-4037-a354-4bac3988ba44 req-ddf24908-4dd9-46f1-b484-1dc55b6d0ebd 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 1ebdefed-0903-4d72-b78d-912666c5ce61] Updated VIF entry in instance network info cache for port 3067803c-07f3-4a15-a5ee-47f9a770efca. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 14:58:56 compute-1 nova_compute[225855]: 2026-01-20 14:58:56.031 225859 DEBUG nova.network.neutron [req-60aaeb3a-d272-4037-a354-4bac3988ba44 req-ddf24908-4dd9-46f1-b484-1dc55b6d0ebd 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 1ebdefed-0903-4d72-b78d-912666c5ce61] Updating instance_info_cache with network_info: [{"id": "3067803c-07f3-4a15-a5ee-47f9a770efca", "address": "fa:16:3e:cd:b7:b1", "network": {"id": "58d966e1-4d26-414a-920e-0be2d77abb59", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-1896990059-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.174", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "107c1f3b5b7b413d9a389ca1166e331f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3067803c-07", "ovs_interfaceid": "3067803c-07f3-4a15-a5ee-47f9a770efca", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:58:56 compute-1 nova_compute[225855]: 2026-01-20 14:58:56.047 225859 DEBUG oslo_concurrency.lockutils [req-60aaeb3a-d272-4037-a354-4bac3988ba44 req-ddf24908-4dd9-46f1-b484-1dc55b6d0ebd 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-1ebdefed-0903-4d72-b78d-912666c5ce61" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:58:56 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/1260322840' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:58:56 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:58:56 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:58:56 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:58:56.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:58:56 compute-1 nova_compute[225855]: 2026-01-20 14:58:56.753 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:58:57 compute-1 ceph-mon[81775]: pgmap v2206: 321 pgs: 321 active+clean; 304 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 29 KiB/s wr, 411 op/s
Jan 20 14:58:57 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/3497436' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:58:57 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:58:57 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.002000057s ======
Jan 20 14:58:57 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:58:57.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000057s
Jan 20 14:58:57 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e321 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:58:57 compute-1 nova_compute[225855]: 2026-01-20 14:58:57.876 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:58:58 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:58:58 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:58:58 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:58:58.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:58:58 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/2355284715' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:58:58 compute-1 nova_compute[225855]: 2026-01-20 14:58:58.302 225859 DEBUG nova.compute.manager [req-a03f3cd4-0774-4e61-afbb-563b287813be req-ba37c252-f184-4a83-bea2-a386e7d54318 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 1ebdefed-0903-4d72-b78d-912666c5ce61] Received event network-changed-3067803c-07f3-4a15-a5ee-47f9a770efca external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:58:58 compute-1 nova_compute[225855]: 2026-01-20 14:58:58.302 225859 DEBUG nova.compute.manager [req-a03f3cd4-0774-4e61-afbb-563b287813be req-ba37c252-f184-4a83-bea2-a386e7d54318 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 1ebdefed-0903-4d72-b78d-912666c5ce61] Refreshing instance network info cache due to event network-changed-3067803c-07f3-4a15-a5ee-47f9a770efca. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 14:58:58 compute-1 nova_compute[225855]: 2026-01-20 14:58:58.303 225859 DEBUG oslo_concurrency.lockutils [req-a03f3cd4-0774-4e61-afbb-563b287813be req-ba37c252-f184-4a83-bea2-a386e7d54318 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-1ebdefed-0903-4d72-b78d-912666c5ce61" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:58:58 compute-1 nova_compute[225855]: 2026-01-20 14:58:58.303 225859 DEBUG oslo_concurrency.lockutils [req-a03f3cd4-0774-4e61-afbb-563b287813be req-ba37c252-f184-4a83-bea2-a386e7d54318 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-1ebdefed-0903-4d72-b78d-912666c5ce61" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:58:58 compute-1 nova_compute[225855]: 2026-01-20 14:58:58.303 225859 DEBUG nova.network.neutron [req-a03f3cd4-0774-4e61-afbb-563b287813be req-ba37c252-f184-4a83-bea2-a386e7d54318 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 1ebdefed-0903-4d72-b78d-912666c5ce61] Refreshing network info cache for port 3067803c-07f3-4a15-a5ee-47f9a770efca _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 14:58:58 compute-1 nova_compute[225855]: 2026-01-20 14:58:58.864 225859 DEBUG oslo_concurrency.lockutils [None req-56e6039d-e3ad-400d-9aed-443fde2a9d46 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] Acquiring lock "1ebdefed-0903-4d72-b78d-912666c5ce61" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:58:58 compute-1 nova_compute[225855]: 2026-01-20 14:58:58.865 225859 DEBUG oslo_concurrency.lockutils [None req-56e6039d-e3ad-400d-9aed-443fde2a9d46 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] Lock "1ebdefed-0903-4d72-b78d-912666c5ce61" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:58:58 compute-1 nova_compute[225855]: 2026-01-20 14:58:58.885 225859 INFO nova.compute.manager [None req-56e6039d-e3ad-400d-9aed-443fde2a9d46 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] [instance: 1ebdefed-0903-4d72-b78d-912666c5ce61] Detaching volume 3381d324-93a9-4d2f-ab25-8460bb2b8e95
Jan 20 14:58:59 compute-1 nova_compute[225855]: 2026-01-20 14:58:59.257 225859 INFO nova.virt.block_device [None req-56e6039d-e3ad-400d-9aed-443fde2a9d46 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] [instance: 1ebdefed-0903-4d72-b78d-912666c5ce61] Attempting to driver detach volume 3381d324-93a9-4d2f-ab25-8460bb2b8e95 from mountpoint /dev/vdb
Jan 20 14:58:59 compute-1 ceph-mon[81775]: pgmap v2207: 321 pgs: 321 active+clean; 304 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 16 KiB/s wr, 378 op/s
Jan 20 14:58:59 compute-1 nova_compute[225855]: 2026-01-20 14:58:59.264 225859 DEBUG nova.virt.libvirt.driver [None req-56e6039d-e3ad-400d-9aed-443fde2a9d46 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] Attempting to detach device vdb from instance 1ebdefed-0903-4d72-b78d-912666c5ce61 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Jan 20 14:58:59 compute-1 nova_compute[225855]: 2026-01-20 14:58:59.265 225859 DEBUG nova.virt.libvirt.guest [None req-56e6039d-e3ad-400d-9aed-443fde2a9d46 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] detach device xml: <disk type="network" device="disk">
Jan 20 14:58:59 compute-1 nova_compute[225855]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 20 14:58:59 compute-1 nova_compute[225855]:   <source protocol="rbd" name="volumes/volume-3381d324-93a9-4d2f-ab25-8460bb2b8e95">
Jan 20 14:58:59 compute-1 nova_compute[225855]:     <host name="192.168.122.100" port="6789"/>
Jan 20 14:58:59 compute-1 nova_compute[225855]:     <host name="192.168.122.102" port="6789"/>
Jan 20 14:58:59 compute-1 nova_compute[225855]:     <host name="192.168.122.101" port="6789"/>
Jan 20 14:58:59 compute-1 nova_compute[225855]:   </source>
Jan 20 14:58:59 compute-1 nova_compute[225855]:   <target dev="vdb" bus="virtio"/>
Jan 20 14:58:59 compute-1 nova_compute[225855]:   <serial>3381d324-93a9-4d2f-ab25-8460bb2b8e95</serial>
Jan 20 14:58:59 compute-1 nova_compute[225855]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Jan 20 14:58:59 compute-1 nova_compute[225855]: </disk>
Jan 20 14:58:59 compute-1 nova_compute[225855]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Jan 20 14:58:59 compute-1 nova_compute[225855]: 2026-01-20 14:58:59.272 225859 INFO nova.virt.libvirt.driver [None req-56e6039d-e3ad-400d-9aed-443fde2a9d46 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] Successfully detached device vdb from instance 1ebdefed-0903-4d72-b78d-912666c5ce61 from the persistent domain config.
Jan 20 14:58:59 compute-1 nova_compute[225855]: 2026-01-20 14:58:59.273 225859 DEBUG nova.virt.libvirt.driver [None req-56e6039d-e3ad-400d-9aed-443fde2a9d46 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance 1ebdefed-0903-4d72-b78d-912666c5ce61 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Jan 20 14:58:59 compute-1 nova_compute[225855]: 2026-01-20 14:58:59.273 225859 DEBUG nova.virt.libvirt.guest [None req-56e6039d-e3ad-400d-9aed-443fde2a9d46 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] detach device xml: <disk type="network" device="disk">
Jan 20 14:58:59 compute-1 nova_compute[225855]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 20 14:58:59 compute-1 nova_compute[225855]:   <source protocol="rbd" name="volumes/volume-3381d324-93a9-4d2f-ab25-8460bb2b8e95">
Jan 20 14:58:59 compute-1 nova_compute[225855]:     <host name="192.168.122.100" port="6789"/>
Jan 20 14:58:59 compute-1 nova_compute[225855]:     <host name="192.168.122.102" port="6789"/>
Jan 20 14:58:59 compute-1 nova_compute[225855]:     <host name="192.168.122.101" port="6789"/>
Jan 20 14:58:59 compute-1 nova_compute[225855]:   </source>
Jan 20 14:58:59 compute-1 nova_compute[225855]:   <target dev="vdb" bus="virtio"/>
Jan 20 14:58:59 compute-1 nova_compute[225855]:   <serial>3381d324-93a9-4d2f-ab25-8460bb2b8e95</serial>
Jan 20 14:58:59 compute-1 nova_compute[225855]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Jan 20 14:58:59 compute-1 nova_compute[225855]: </disk>
Jan 20 14:58:59 compute-1 nova_compute[225855]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Jan 20 14:58:59 compute-1 nova_compute[225855]: 2026-01-20 14:58:59.332 225859 DEBUG nova.virt.libvirt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Received event <DeviceRemovedEvent: 1768921139.3320618, 1ebdefed-0903-4d72-b78d-912666c5ce61 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Jan 20 14:58:59 compute-1 nova_compute[225855]: 2026-01-20 14:58:59.334 225859 DEBUG nova.virt.libvirt.driver [None req-56e6039d-e3ad-400d-9aed-443fde2a9d46 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance 1ebdefed-0903-4d72-b78d-912666c5ce61 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Jan 20 14:58:59 compute-1 nova_compute[225855]: 2026-01-20 14:58:59.336 225859 INFO nova.virt.libvirt.driver [None req-56e6039d-e3ad-400d-9aed-443fde2a9d46 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] Successfully detached device vdb from instance 1ebdefed-0903-4d72-b78d-912666c5ce61 from the live domain config.
Jan 20 14:58:59 compute-1 nova_compute[225855]: 2026-01-20 14:58:59.609 225859 DEBUG nova.objects.instance [None req-56e6039d-e3ad-400d-9aed-443fde2a9d46 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] Lazy-loading 'flavor' on Instance uuid 1ebdefed-0903-4d72-b78d-912666c5ce61 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:58:59 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:58:59 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:58:59 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:58:59.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:58:59 compute-1 nova_compute[225855]: 2026-01-20 14:58:59.649 225859 DEBUG oslo_concurrency.lockutils [None req-56e6039d-e3ad-400d-9aed-443fde2a9d46 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] Lock "1ebdefed-0903-4d72-b78d-912666c5ce61" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.784s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:59:00 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:59:00 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:59:00 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:59:00.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:59:00 compute-1 nova_compute[225855]: 2026-01-20 14:59:00.551 225859 DEBUG nova.network.neutron [req-a03f3cd4-0774-4e61-afbb-563b287813be req-ba37c252-f184-4a83-bea2-a386e7d54318 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 1ebdefed-0903-4d72-b78d-912666c5ce61] Updated VIF entry in instance network info cache for port 3067803c-07f3-4a15-a5ee-47f9a770efca. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 14:59:00 compute-1 nova_compute[225855]: 2026-01-20 14:59:00.551 225859 DEBUG nova.network.neutron [req-a03f3cd4-0774-4e61-afbb-563b287813be req-ba37c252-f184-4a83-bea2-a386e7d54318 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 1ebdefed-0903-4d72-b78d-912666c5ce61] Updating instance_info_cache with network_info: [{"id": "3067803c-07f3-4a15-a5ee-47f9a770efca", "address": "fa:16:3e:cd:b7:b1", "network": {"id": "58d966e1-4d26-414a-920e-0be2d77abb59", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-1896990059-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.174", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "107c1f3b5b7b413d9a389ca1166e331f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3067803c-07", "ovs_interfaceid": "3067803c-07f3-4a15-a5ee-47f9a770efca", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:59:00 compute-1 nova_compute[225855]: 2026-01-20 14:59:00.573 225859 DEBUG oslo_concurrency.lockutils [req-a03f3cd4-0774-4e61-afbb-563b287813be req-ba37c252-f184-4a83-bea2-a386e7d54318 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-1ebdefed-0903-4d72-b78d-912666c5ce61" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:59:00 compute-1 ovn_controller[130490]: 2026-01-20T14:59:00Z|00567|binding|INFO|Releasing lport 1623097d-35b0-4d71-9dc2-c4d659492102 from this chassis (sb_readonly=0)
Jan 20 14:59:00 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 20 14:59:00 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2943226124' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 14:59:00 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 20 14:59:00 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2943226124' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 14:59:00 compute-1 nova_compute[225855]: 2026-01-20 14:59:00.881 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:59:01 compute-1 podman[282393]: 2026-01-20 14:59:01.014881828 +0000 UTC m=+0.057395551 container health_status 533cd8e0c3b8b7f910f15cef49e7f879ab46c12ce646a35628c15f13725eb500 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3)
Jan 20 14:59:01 compute-1 ovn_controller[130490]: 2026-01-20T14:59:01Z|00568|binding|INFO|Releasing lport 1623097d-35b0-4d71-9dc2-c4d659492102 from this chassis (sb_readonly=0)
Jan 20 14:59:01 compute-1 nova_compute[225855]: 2026-01-20 14:59:01.195 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:59:01 compute-1 ceph-mon[81775]: pgmap v2208: 321 pgs: 321 active+clean; 333 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 1007 KiB/s wr, 387 op/s
Jan 20 14:59:01 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/2943226124' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 14:59:01 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/2943226124' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 14:59:01 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:59:01 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:59:01 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:59:01.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:59:01 compute-1 nova_compute[225855]: 2026-01-20 14:59:01.756 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:59:01 compute-1 anacron[89046]: Job `cron.weekly' started
Jan 20 14:59:02 compute-1 anacron[89046]: Job `cron.weekly' terminated
Jan 20 14:59:02 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:59:02 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:59:02 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:59:02.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:59:02 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e321 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:59:02 compute-1 nova_compute[225855]: 2026-01-20 14:59:02.878 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:59:03 compute-1 nova_compute[225855]: 2026-01-20 14:59:03.271 225859 DEBUG oslo_concurrency.lockutils [None req-500eed47-730e-41d1-8c78-dcb0ded30f39 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] Acquiring lock "1ebdefed-0903-4d72-b78d-912666c5ce61" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:59:03 compute-1 nova_compute[225855]: 2026-01-20 14:59:03.271 225859 DEBUG oslo_concurrency.lockutils [None req-500eed47-730e-41d1-8c78-dcb0ded30f39 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] Lock "1ebdefed-0903-4d72-b78d-912666c5ce61" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:59:03 compute-1 ceph-mon[81775]: pgmap v2209: 321 pgs: 321 active+clean; 340 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 1.5 MiB/s wr, 302 op/s
Jan 20 14:59:03 compute-1 nova_compute[225855]: 2026-01-20 14:59:03.327 225859 INFO nova.compute.manager [None req-500eed47-730e-41d1-8c78-dcb0ded30f39 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] [instance: 1ebdefed-0903-4d72-b78d-912666c5ce61] Detaching volume 0d487092-de99-40b0-be3f-425947d7010c
Jan 20 14:59:03 compute-1 nova_compute[225855]: 2026-01-20 14:59:03.402 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:59:03 compute-1 nova_compute[225855]: 2026-01-20 14:59:03.479 225859 INFO nova.virt.block_device [None req-500eed47-730e-41d1-8c78-dcb0ded30f39 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] [instance: 1ebdefed-0903-4d72-b78d-912666c5ce61] Attempting to driver detach volume 0d487092-de99-40b0-be3f-425947d7010c from mountpoint /dev/vdc
Jan 20 14:59:03 compute-1 nova_compute[225855]: 2026-01-20 14:59:03.487 225859 DEBUG nova.virt.libvirt.driver [None req-500eed47-730e-41d1-8c78-dcb0ded30f39 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] Attempting to detach device vdc from instance 1ebdefed-0903-4d72-b78d-912666c5ce61 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Jan 20 14:59:03 compute-1 nova_compute[225855]: 2026-01-20 14:59:03.488 225859 DEBUG nova.virt.libvirt.guest [None req-500eed47-730e-41d1-8c78-dcb0ded30f39 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] detach device xml: <disk type="network" device="disk">
Jan 20 14:59:03 compute-1 nova_compute[225855]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 20 14:59:03 compute-1 nova_compute[225855]:   <source protocol="rbd" name="volumes/volume-0d487092-de99-40b0-be3f-425947d7010c">
Jan 20 14:59:03 compute-1 nova_compute[225855]:     <host name="192.168.122.100" port="6789"/>
Jan 20 14:59:03 compute-1 nova_compute[225855]:     <host name="192.168.122.102" port="6789"/>
Jan 20 14:59:03 compute-1 nova_compute[225855]:     <host name="192.168.122.101" port="6789"/>
Jan 20 14:59:03 compute-1 nova_compute[225855]:   </source>
Jan 20 14:59:03 compute-1 nova_compute[225855]:   <target dev="vdc" bus="virtio"/>
Jan 20 14:59:03 compute-1 nova_compute[225855]:   <serial>0d487092-de99-40b0-be3f-425947d7010c</serial>
Jan 20 14:59:03 compute-1 nova_compute[225855]:   <address type="pci" domain="0x0000" bus="0x07" slot="0x00" function="0x0"/>
Jan 20 14:59:03 compute-1 nova_compute[225855]: </disk>
Jan 20 14:59:03 compute-1 nova_compute[225855]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Jan 20 14:59:03 compute-1 nova_compute[225855]: 2026-01-20 14:59:03.494 225859 INFO nova.virt.libvirt.driver [None req-500eed47-730e-41d1-8c78-dcb0ded30f39 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] Successfully detached device vdc from instance 1ebdefed-0903-4d72-b78d-912666c5ce61 from the persistent domain config.
Jan 20 14:59:03 compute-1 nova_compute[225855]: 2026-01-20 14:59:03.495 225859 DEBUG nova.virt.libvirt.driver [None req-500eed47-730e-41d1-8c78-dcb0ded30f39 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] (1/8): Attempting to detach device vdc with device alias virtio-disk2 from instance 1ebdefed-0903-4d72-b78d-912666c5ce61 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Jan 20 14:59:03 compute-1 nova_compute[225855]: 2026-01-20 14:59:03.495 225859 DEBUG nova.virt.libvirt.guest [None req-500eed47-730e-41d1-8c78-dcb0ded30f39 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] detach device xml: <disk type="network" device="disk">
Jan 20 14:59:03 compute-1 nova_compute[225855]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 20 14:59:03 compute-1 nova_compute[225855]:   <source protocol="rbd" name="volumes/volume-0d487092-de99-40b0-be3f-425947d7010c">
Jan 20 14:59:03 compute-1 nova_compute[225855]:     <host name="192.168.122.100" port="6789"/>
Jan 20 14:59:03 compute-1 nova_compute[225855]:     <host name="192.168.122.102" port="6789"/>
Jan 20 14:59:03 compute-1 nova_compute[225855]:     <host name="192.168.122.101" port="6789"/>
Jan 20 14:59:03 compute-1 nova_compute[225855]:   </source>
Jan 20 14:59:03 compute-1 nova_compute[225855]:   <target dev="vdc" bus="virtio"/>
Jan 20 14:59:03 compute-1 nova_compute[225855]:   <serial>0d487092-de99-40b0-be3f-425947d7010c</serial>
Jan 20 14:59:03 compute-1 nova_compute[225855]:   <address type="pci" domain="0x0000" bus="0x07" slot="0x00" function="0x0"/>
Jan 20 14:59:03 compute-1 nova_compute[225855]: </disk>
Jan 20 14:59:03 compute-1 nova_compute[225855]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Jan 20 14:59:03 compute-1 nova_compute[225855]: 2026-01-20 14:59:03.557 225859 DEBUG nova.virt.libvirt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Received event <DeviceRemovedEvent: 1768921143.5572193, 1ebdefed-0903-4d72-b78d-912666c5ce61 => virtio-disk2> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Jan 20 14:59:03 compute-1 nova_compute[225855]: 2026-01-20 14:59:03.558 225859 DEBUG nova.virt.libvirt.driver [None req-500eed47-730e-41d1-8c78-dcb0ded30f39 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] Start waiting for the detach event from libvirt for device vdc with device alias virtio-disk2 for instance 1ebdefed-0903-4d72-b78d-912666c5ce61 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Jan 20 14:59:03 compute-1 nova_compute[225855]: 2026-01-20 14:59:03.560 225859 INFO nova.virt.libvirt.driver [None req-500eed47-730e-41d1-8c78-dcb0ded30f39 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] Successfully detached device vdc from instance 1ebdefed-0903-4d72-b78d-912666c5ce61 from the live domain config.
Jan 20 14:59:03 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:59:03 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:59:03 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:59:03.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:59:03 compute-1 nova_compute[225855]: 2026-01-20 14:59:03.825 225859 DEBUG nova.objects.instance [None req-500eed47-730e-41d1-8c78-dcb0ded30f39 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] Lazy-loading 'flavor' on Instance uuid 1ebdefed-0903-4d72-b78d-912666c5ce61 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:59:03 compute-1 nova_compute[225855]: 2026-01-20 14:59:03.875 225859 DEBUG oslo_concurrency.lockutils [None req-500eed47-730e-41d1-8c78-dcb0ded30f39 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] Lock "1ebdefed-0903-4d72-b78d-912666c5ce61" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.604s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:59:04 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:59:04 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:59:04 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:59:04.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:59:04 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/626265555' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:59:05 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:59:05 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:59:05 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:59:05.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:59:05 compute-1 nova_compute[225855]: 2026-01-20 14:59:05.824 225859 DEBUG nova.compute.manager [req-0e40f913-f074-49f7-b0da-ea75d7bbfac4 req-c9f2fdc6-19d0-4b1f-9289-2b31e42ea5e7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 1ebdefed-0903-4d72-b78d-912666c5ce61] Received event network-changed-3067803c-07f3-4a15-a5ee-47f9a770efca external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:59:05 compute-1 nova_compute[225855]: 2026-01-20 14:59:05.825 225859 DEBUG nova.compute.manager [req-0e40f913-f074-49f7-b0da-ea75d7bbfac4 req-c9f2fdc6-19d0-4b1f-9289-2b31e42ea5e7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 1ebdefed-0903-4d72-b78d-912666c5ce61] Refreshing instance network info cache due to event network-changed-3067803c-07f3-4a15-a5ee-47f9a770efca. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 14:59:05 compute-1 nova_compute[225855]: 2026-01-20 14:59:05.825 225859 DEBUG oslo_concurrency.lockutils [req-0e40f913-f074-49f7-b0da-ea75d7bbfac4 req-c9f2fdc6-19d0-4b1f-9289-2b31e42ea5e7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-1ebdefed-0903-4d72-b78d-912666c5ce61" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:59:05 compute-1 nova_compute[225855]: 2026-01-20 14:59:05.825 225859 DEBUG oslo_concurrency.lockutils [req-0e40f913-f074-49f7-b0da-ea75d7bbfac4 req-c9f2fdc6-19d0-4b1f-9289-2b31e42ea5e7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-1ebdefed-0903-4d72-b78d-912666c5ce61" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:59:05 compute-1 nova_compute[225855]: 2026-01-20 14:59:05.826 225859 DEBUG nova.network.neutron [req-0e40f913-f074-49f7-b0da-ea75d7bbfac4 req-c9f2fdc6-19d0-4b1f-9289-2b31e42ea5e7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 1ebdefed-0903-4d72-b78d-912666c5ce61] Refreshing network info cache for port 3067803c-07f3-4a15-a5ee-47f9a770efca _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 14:59:06 compute-1 ceph-mon[81775]: pgmap v2210: 321 pgs: 321 active+clean; 348 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 505 KiB/s rd, 2.1 MiB/s wr, 266 op/s
Jan 20 14:59:06 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/1110536162' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:59:06 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/1895765005' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 14:59:06 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/1895765005' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 14:59:06 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:59:06 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:59:06 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:59:06.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:59:06 compute-1 nova_compute[225855]: 2026-01-20 14:59:06.758 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:59:07 compute-1 ceph-mon[81775]: pgmap v2211: 321 pgs: 321 active+clean; 348 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 134 KiB/s rd, 2.1 MiB/s wr, 199 op/s
Jan 20 14:59:07 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:59:07 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:59:07 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:59:07.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:59:07 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e321 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:59:07 compute-1 nova_compute[225855]: 2026-01-20 14:59:07.879 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:59:08 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:59:08 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:59:08 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:59:08.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:59:08 compute-1 nova_compute[225855]: 2026-01-20 14:59:08.396 225859 DEBUG nova.network.neutron [req-0e40f913-f074-49f7-b0da-ea75d7bbfac4 req-c9f2fdc6-19d0-4b1f-9289-2b31e42ea5e7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 1ebdefed-0903-4d72-b78d-912666c5ce61] Updated VIF entry in instance network info cache for port 3067803c-07f3-4a15-a5ee-47f9a770efca. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 14:59:08 compute-1 nova_compute[225855]: 2026-01-20 14:59:08.397 225859 DEBUG nova.network.neutron [req-0e40f913-f074-49f7-b0da-ea75d7bbfac4 req-c9f2fdc6-19d0-4b1f-9289-2b31e42ea5e7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 1ebdefed-0903-4d72-b78d-912666c5ce61] Updating instance_info_cache with network_info: [{"id": "3067803c-07f3-4a15-a5ee-47f9a770efca", "address": "fa:16:3e:cd:b7:b1", "network": {"id": "58d966e1-4d26-414a-920e-0be2d77abb59", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-1896990059-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "107c1f3b5b7b413d9a389ca1166e331f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3067803c-07", "ovs_interfaceid": "3067803c-07f3-4a15-a5ee-47f9a770efca", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:59:08 compute-1 ovn_controller[130490]: 2026-01-20T14:59:08Z|00569|binding|INFO|Releasing lport 1623097d-35b0-4d71-9dc2-c4d659492102 from this chassis (sb_readonly=0)
Jan 20 14:59:08 compute-1 nova_compute[225855]: 2026-01-20 14:59:08.517 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:59:08 compute-1 nova_compute[225855]: 2026-01-20 14:59:08.520 225859 DEBUG oslo_concurrency.lockutils [req-0e40f913-f074-49f7-b0da-ea75d7bbfac4 req-c9f2fdc6-19d0-4b1f-9289-2b31e42ea5e7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-1ebdefed-0903-4d72-b78d-912666c5ce61" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:59:08 compute-1 ovn_controller[130490]: 2026-01-20T14:59:08Z|00570|binding|INFO|Releasing lport 1623097d-35b0-4d71-9dc2-c4d659492102 from this chassis (sb_readonly=0)
Jan 20 14:59:08 compute-1 nova_compute[225855]: 2026-01-20 14:59:08.702 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:59:09 compute-1 ceph-mon[81775]: pgmap v2212: 321 pgs: 321 active+clean; 348 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 48 KiB/s rd, 2.1 MiB/s wr, 70 op/s
Jan 20 14:59:09 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:59:09 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:59:09 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:59:09.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:59:10 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:59:10 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:59:10 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:59:10.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:59:11 compute-1 ceph-mon[81775]: pgmap v2213: 321 pgs: 321 active+clean; 346 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 952 KiB/s rd, 2.2 MiB/s wr, 110 op/s
Jan 20 14:59:11 compute-1 nova_compute[225855]: 2026-01-20 14:59:11.579 225859 DEBUG oslo_concurrency.lockutils [None req-6570a512-ed7a-403c-8c93-7866d8503ce8 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] Acquiring lock "b4c1468d-9914-426a-9464-c1167de53632" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:59:11 compute-1 nova_compute[225855]: 2026-01-20 14:59:11.579 225859 DEBUG oslo_concurrency.lockutils [None req-6570a512-ed7a-403c-8c93-7866d8503ce8 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] Lock "b4c1468d-9914-426a-9464-c1167de53632" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:59:11 compute-1 nova_compute[225855]: 2026-01-20 14:59:11.602 225859 DEBUG nova.objects.instance [None req-6570a512-ed7a-403c-8c93-7866d8503ce8 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] Lazy-loading 'flavor' on Instance uuid b4c1468d-9914-426a-9464-c1167de53632 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:59:11 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:59:11 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:59:11 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:59:11.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:59:11 compute-1 nova_compute[225855]: 2026-01-20 14:59:11.668 225859 DEBUG oslo_concurrency.lockutils [None req-6570a512-ed7a-403c-8c93-7866d8503ce8 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] Lock "b4c1468d-9914-426a-9464-c1167de53632" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.089s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:59:11 compute-1 nova_compute[225855]: 2026-01-20 14:59:11.759 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:59:12 compute-1 nova_compute[225855]: 2026-01-20 14:59:12.058 225859 DEBUG oslo_concurrency.lockutils [None req-6570a512-ed7a-403c-8c93-7866d8503ce8 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] Acquiring lock "b4c1468d-9914-426a-9464-c1167de53632" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:59:12 compute-1 nova_compute[225855]: 2026-01-20 14:59:12.059 225859 DEBUG oslo_concurrency.lockutils [None req-6570a512-ed7a-403c-8c93-7866d8503ce8 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] Lock "b4c1468d-9914-426a-9464-c1167de53632" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:59:12 compute-1 nova_compute[225855]: 2026-01-20 14:59:12.059 225859 INFO nova.compute.manager [None req-6570a512-ed7a-403c-8c93-7866d8503ce8 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] [instance: b4c1468d-9914-426a-9464-c1167de53632] Attaching volume b0619b28-88eb-4051-9e30-36100f39c117 to /dev/vdb
Jan 20 14:59:12 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:59:12 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:59:12 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:59:12.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:59:12 compute-1 nova_compute[225855]: 2026-01-20 14:59:12.279 225859 DEBUG os_brick.utils [None req-6570a512-ed7a-403c-8c93-7866d8503ce8 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.101', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-1.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Jan 20 14:59:12 compute-1 nova_compute[225855]: 2026-01-20 14:59:12.280 231081 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:59:12 compute-1 nova_compute[225855]: 2026-01-20 14:59:12.290 231081 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:59:12 compute-1 nova_compute[225855]: 2026-01-20 14:59:12.290 231081 DEBUG oslo.privsep.daemon [-] privsep: reply[c6cb6dab-e494-44c3-9c62-78fa6b4811a7]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:59:12 compute-1 nova_compute[225855]: 2026-01-20 14:59:12.292 231081 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:59:12 compute-1 nova_compute[225855]: 2026-01-20 14:59:12.299 231081 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:59:12 compute-1 nova_compute[225855]: 2026-01-20 14:59:12.299 231081 DEBUG oslo.privsep.daemon [-] privsep: reply[984c0414-3218-472e-b16f-f2a8798187ce]: (4, ('InitiatorName=iqn.1994-05.com.redhat:1821ea3dc03d', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:59:12 compute-1 nova_compute[225855]: 2026-01-20 14:59:12.300 231081 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:59:12 compute-1 nova_compute[225855]: 2026-01-20 14:59:12.307 231081 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.006s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:59:12 compute-1 nova_compute[225855]: 2026-01-20 14:59:12.307 231081 DEBUG oslo.privsep.daemon [-] privsep: reply[918b5cf1-8bae-4c41-9784-d406be505af9]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:59:12 compute-1 nova_compute[225855]: 2026-01-20 14:59:12.309 231081 DEBUG oslo.privsep.daemon [-] privsep: reply[69882a38-eab2-471f-bddb-a8d00c60d926]: (4, '870b1f1c-f19c-477b-b282-ee6eeba50974') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:59:12 compute-1 nova_compute[225855]: 2026-01-20 14:59:12.309 225859 DEBUG oslo_concurrency.processutils [None req-6570a512-ed7a-403c-8c93-7866d8503ce8 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:59:12 compute-1 nova_compute[225855]: 2026-01-20 14:59:12.334 225859 DEBUG oslo_concurrency.processutils [None req-6570a512-ed7a-403c-8c93-7866d8503ce8 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] CMD "nvme version" returned: 0 in 0.025s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:59:12 compute-1 nova_compute[225855]: 2026-01-20 14:59:12.336 225859 DEBUG os_brick.initiator.connectors.lightos [None req-6570a512-ed7a-403c-8c93-7866d8503ce8 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Jan 20 14:59:12 compute-1 nova_compute[225855]: 2026-01-20 14:59:12.336 225859 DEBUG os_brick.initiator.connectors.lightos [None req-6570a512-ed7a-403c-8c93-7866d8503ce8 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Jan 20 14:59:12 compute-1 nova_compute[225855]: 2026-01-20 14:59:12.337 225859 DEBUG os_brick.initiator.connectors.lightos [None req-6570a512-ed7a-403c-8c93-7866d8503ce8 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:5350774e-8b5e-4dba-80a9-92d405981c1d dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Jan 20 14:59:12 compute-1 nova_compute[225855]: 2026-01-20 14:59:12.337 225859 DEBUG os_brick.utils [None req-6570a512-ed7a-403c-8c93-7866d8503ce8 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] <== get_connector_properties: return (58ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.101', 'host': 'compute-1.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:1821ea3dc03d', 'do_local_attach': False, 'nvme_hostid': '5350774e-8b5e-4dba-80a9-92d405981c1d', 'system uuid': '870b1f1c-f19c-477b-b282-ee6eeba50974', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:5350774e-8b5e-4dba-80a9-92d405981c1d', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Jan 20 14:59:12 compute-1 nova_compute[225855]: 2026-01-20 14:59:12.338 225859 DEBUG nova.virt.block_device [None req-6570a512-ed7a-403c-8c93-7866d8503ce8 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] [instance: b4c1468d-9914-426a-9464-c1167de53632] Updating existing volume attachment record: 17fc8ff5-edb4-4dc8-aba8-9dd268a5f344 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Jan 20 14:59:12 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e321 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:59:12 compute-1 nova_compute[225855]: 2026-01-20 14:59:12.882 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:59:12 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:59:12 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/19077082' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:59:13 compute-1 nova_compute[225855]: 2026-01-20 14:59:13.111 225859 DEBUG nova.objects.instance [None req-6570a512-ed7a-403c-8c93-7866d8503ce8 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] Lazy-loading 'flavor' on Instance uuid b4c1468d-9914-426a-9464-c1167de53632 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:59:13 compute-1 nova_compute[225855]: 2026-01-20 14:59:13.132 225859 DEBUG nova.virt.libvirt.driver [None req-6570a512-ed7a-403c-8c93-7866d8503ce8 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] [instance: b4c1468d-9914-426a-9464-c1167de53632] Attempting to attach volume b0619b28-88eb-4051-9e30-36100f39c117 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
Jan 20 14:59:13 compute-1 nova_compute[225855]: 2026-01-20 14:59:13.134 225859 DEBUG nova.virt.libvirt.guest [None req-6570a512-ed7a-403c-8c93-7866d8503ce8 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] attach device xml: <disk type="network" device="disk">
Jan 20 14:59:13 compute-1 nova_compute[225855]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 20 14:59:13 compute-1 nova_compute[225855]:   <source protocol="rbd" name="volumes/volume-b0619b28-88eb-4051-9e30-36100f39c117">
Jan 20 14:59:13 compute-1 nova_compute[225855]:     <host name="192.168.122.100" port="6789"/>
Jan 20 14:59:13 compute-1 nova_compute[225855]:     <host name="192.168.122.102" port="6789"/>
Jan 20 14:59:13 compute-1 nova_compute[225855]:     <host name="192.168.122.101" port="6789"/>
Jan 20 14:59:13 compute-1 nova_compute[225855]:   </source>
Jan 20 14:59:13 compute-1 nova_compute[225855]:   <auth username="openstack">
Jan 20 14:59:13 compute-1 nova_compute[225855]:     <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:59:13 compute-1 nova_compute[225855]:   </auth>
Jan 20 14:59:13 compute-1 nova_compute[225855]:   <target dev="vdb" bus="virtio"/>
Jan 20 14:59:13 compute-1 nova_compute[225855]:   <serial>b0619b28-88eb-4051-9e30-36100f39c117</serial>
Jan 20 14:59:13 compute-1 nova_compute[225855]: </disk>
Jan 20 14:59:13 compute-1 nova_compute[225855]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Jan 20 14:59:13 compute-1 nova_compute[225855]: 2026-01-20 14:59:13.249 225859 DEBUG nova.virt.libvirt.driver [None req-6570a512-ed7a-403c-8c93-7866d8503ce8 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:59:13 compute-1 nova_compute[225855]: 2026-01-20 14:59:13.250 225859 DEBUG nova.virt.libvirt.driver [None req-6570a512-ed7a-403c-8c93-7866d8503ce8 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:59:13 compute-1 nova_compute[225855]: 2026-01-20 14:59:13.250 225859 DEBUG nova.virt.libvirt.driver [None req-6570a512-ed7a-403c-8c93-7866d8503ce8 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:59:13 compute-1 nova_compute[225855]: 2026-01-20 14:59:13.250 225859 DEBUG nova.virt.libvirt.driver [None req-6570a512-ed7a-403c-8c93-7866d8503ce8 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] No VIF found with MAC fa:16:3e:d9:6a:1f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 20 14:59:13 compute-1 ceph-mon[81775]: pgmap v2214: 321 pgs: 321 active+clean; 346 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 1.2 MiB/s wr, 119 op/s
Jan 20 14:59:13 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/19077082' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:59:13 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 20 14:59:13 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/857366021' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 14:59:13 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 20 14:59:13 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/857366021' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 14:59:13 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:59:13 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:59:13 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:59:13.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:59:14 compute-1 sudo[282451]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:59:14 compute-1 sudo[282451]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:59:14 compute-1 sudo[282451]: pam_unix(sudo:session): session closed for user root
Jan 20 14:59:14 compute-1 sudo[282476]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:59:14 compute-1 sudo[282476]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:59:14 compute-1 sudo[282476]: pam_unix(sudo:session): session closed for user root
Jan 20 14:59:14 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:59:14 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:59:14 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:59:14.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:59:14 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/857366021' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 14:59:14 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/857366021' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 14:59:15 compute-1 ceph-mon[81775]: pgmap v2215: 321 pgs: 321 active+clean; 346 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 713 KiB/s wr, 117 op/s
Jan 20 14:59:15 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:59:15 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:59:15 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:59:15.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:59:16 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:59:16 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:59:16 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:59:16.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:59:16 compute-1 nova_compute[225855]: 2026-01-20 14:59:16.381 225859 DEBUG oslo_concurrency.lockutils [None req-6570a512-ed7a-403c-8c93-7866d8503ce8 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] Lock "b4c1468d-9914-426a-9464-c1167de53632" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 4.323s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:59:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:59:16.419 140354 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:59:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:59:16.420 140354 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:59:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:59:16.421 140354 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:59:16 compute-1 nova_compute[225855]: 2026-01-20 14:59:16.762 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:59:16 compute-1 nova_compute[225855]: 2026-01-20 14:59:16.888 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:59:16 compute-1 ceph-mon[81775]: pgmap v2216: 321 pgs: 321 active+clean; 346 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 25 KiB/s wr, 104 op/s
Jan 20 14:59:17 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e321 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:59:17 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:59:17 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:59:17 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:59:17.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:59:17 compute-1 nova_compute[225855]: 2026-01-20 14:59:17.884 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:59:18 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:59:18 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:59:18 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:59:18.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:59:18 compute-1 nova_compute[225855]: 2026-01-20 14:59:18.378 225859 DEBUG oslo_concurrency.lockutils [None req-12e9d29c-04e8-4082-a7d9-9ccfe2314a89 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] Acquiring lock "b4c1468d-9914-426a-9464-c1167de53632" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:59:18 compute-1 nova_compute[225855]: 2026-01-20 14:59:18.379 225859 DEBUG oslo_concurrency.lockutils [None req-12e9d29c-04e8-4082-a7d9-9ccfe2314a89 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] Lock "b4c1468d-9914-426a-9464-c1167de53632" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:59:18 compute-1 nova_compute[225855]: 2026-01-20 14:59:18.446 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Triggering sync for uuid 1ebdefed-0903-4d72-b78d-912666c5ce61 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Jan 20 14:59:18 compute-1 nova_compute[225855]: 2026-01-20 14:59:18.446 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Triggering sync for uuid b4c1468d-9914-426a-9464-c1167de53632 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Jan 20 14:59:18 compute-1 nova_compute[225855]: 2026-01-20 14:59:18.447 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "1ebdefed-0903-4d72-b78d-912666c5ce61" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:59:18 compute-1 nova_compute[225855]: 2026-01-20 14:59:18.447 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "1ebdefed-0903-4d72-b78d-912666c5ce61" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:59:18 compute-1 nova_compute[225855]: 2026-01-20 14:59:18.447 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "b4c1468d-9914-426a-9464-c1167de53632" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:59:18 compute-1 nova_compute[225855]: 2026-01-20 14:59:18.455 225859 DEBUG nova.objects.instance [None req-12e9d29c-04e8-4082-a7d9-9ccfe2314a89 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] Lazy-loading 'flavor' on Instance uuid b4c1468d-9914-426a-9464-c1167de53632 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:59:18 compute-1 nova_compute[225855]: 2026-01-20 14:59:18.468 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "1ebdefed-0903-4d72-b78d-912666c5ce61" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.021s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:59:18 compute-1 nova_compute[225855]: 2026-01-20 14:59:18.500 225859 DEBUG oslo_concurrency.lockutils [None req-12e9d29c-04e8-4082-a7d9-9ccfe2314a89 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] Lock "b4c1468d-9914-426a-9464-c1167de53632" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.121s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:59:18 compute-1 nova_compute[225855]: 2026-01-20 14:59:18.500 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "b4c1468d-9914-426a-9464-c1167de53632" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.053s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:59:18 compute-1 nova_compute[225855]: 2026-01-20 14:59:18.545 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "b4c1468d-9914-426a-9464-c1167de53632" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.044s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:59:18 compute-1 nova_compute[225855]: 2026-01-20 14:59:18.927 225859 DEBUG oslo_concurrency.lockutils [None req-12e9d29c-04e8-4082-a7d9-9ccfe2314a89 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] Acquiring lock "b4c1468d-9914-426a-9464-c1167de53632" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:59:18 compute-1 nova_compute[225855]: 2026-01-20 14:59:18.928 225859 DEBUG oslo_concurrency.lockutils [None req-12e9d29c-04e8-4082-a7d9-9ccfe2314a89 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] Lock "b4c1468d-9914-426a-9464-c1167de53632" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:59:18 compute-1 nova_compute[225855]: 2026-01-20 14:59:18.928 225859 INFO nova.compute.manager [None req-12e9d29c-04e8-4082-a7d9-9ccfe2314a89 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] [instance: b4c1468d-9914-426a-9464-c1167de53632] Attaching volume ab9a0bc3-5c61-456d-9fc8-3cb8ce358e66 to /dev/vdc
Jan 20 14:59:19 compute-1 nova_compute[225855]: 2026-01-20 14:59:19.115 225859 DEBUG os_brick.utils [None req-12e9d29c-04e8-4082-a7d9-9ccfe2314a89 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.101', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-1.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Jan 20 14:59:19 compute-1 nova_compute[225855]: 2026-01-20 14:59:19.116 231081 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:59:19 compute-1 nova_compute[225855]: 2026-01-20 14:59:19.125 231081 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:59:19 compute-1 nova_compute[225855]: 2026-01-20 14:59:19.126 231081 DEBUG oslo.privsep.daemon [-] privsep: reply[3f502614-d0bb-472e-9503-40ff11cd9649]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:59:19 compute-1 nova_compute[225855]: 2026-01-20 14:59:19.127 231081 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:59:19 compute-1 nova_compute[225855]: 2026-01-20 14:59:19.133 231081 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:59:19 compute-1 nova_compute[225855]: 2026-01-20 14:59:19.134 231081 DEBUG oslo.privsep.daemon [-] privsep: reply[f187c59e-974e-4864-8ccb-6bcc5c2f8912]: (4, ('InitiatorName=iqn.1994-05.com.redhat:1821ea3dc03d', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:59:19 compute-1 nova_compute[225855]: 2026-01-20 14:59:19.135 231081 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:59:19 compute-1 nova_compute[225855]: 2026-01-20 14:59:19.142 231081 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:59:19 compute-1 nova_compute[225855]: 2026-01-20 14:59:19.142 231081 DEBUG oslo.privsep.daemon [-] privsep: reply[62123aa2-3a74-4f07-91e9-26d55759b95d]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:59:19 compute-1 nova_compute[225855]: 2026-01-20 14:59:19.143 231081 DEBUG oslo.privsep.daemon [-] privsep: reply[01a2bcaf-e2a4-41a2-b562-a2e49f499352]: (4, '870b1f1c-f19c-477b-b282-ee6eeba50974') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:59:19 compute-1 nova_compute[225855]: 2026-01-20 14:59:19.144 225859 DEBUG oslo_concurrency.processutils [None req-12e9d29c-04e8-4082-a7d9-9ccfe2314a89 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:59:19 compute-1 nova_compute[225855]: 2026-01-20 14:59:19.171 225859 DEBUG oslo_concurrency.processutils [None req-12e9d29c-04e8-4082-a7d9-9ccfe2314a89 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] CMD "nvme version" returned: 0 in 0.027s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:59:19 compute-1 nova_compute[225855]: 2026-01-20 14:59:19.173 225859 DEBUG os_brick.initiator.connectors.lightos [None req-12e9d29c-04e8-4082-a7d9-9ccfe2314a89 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Jan 20 14:59:19 compute-1 nova_compute[225855]: 2026-01-20 14:59:19.173 225859 DEBUG os_brick.initiator.connectors.lightos [None req-12e9d29c-04e8-4082-a7d9-9ccfe2314a89 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Jan 20 14:59:19 compute-1 nova_compute[225855]: 2026-01-20 14:59:19.173 225859 DEBUG os_brick.initiator.connectors.lightos [None req-12e9d29c-04e8-4082-a7d9-9ccfe2314a89 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:5350774e-8b5e-4dba-80a9-92d405981c1d dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Jan 20 14:59:19 compute-1 nova_compute[225855]: 2026-01-20 14:59:19.174 225859 DEBUG os_brick.utils [None req-12e9d29c-04e8-4082-a7d9-9ccfe2314a89 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] <== get_connector_properties: return (58ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.101', 'host': 'compute-1.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:1821ea3dc03d', 'do_local_attach': False, 'nvme_hostid': '5350774e-8b5e-4dba-80a9-92d405981c1d', 'system uuid': '870b1f1c-f19c-477b-b282-ee6eeba50974', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:5350774e-8b5e-4dba-80a9-92d405981c1d', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Jan 20 14:59:19 compute-1 nova_compute[225855]: 2026-01-20 14:59:19.174 225859 DEBUG nova.virt.block_device [None req-12e9d29c-04e8-4082-a7d9-9ccfe2314a89 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] [instance: b4c1468d-9914-426a-9464-c1167de53632] Updating existing volume attachment record: c891cf33-bcd1-4101-99a6-243f9f47c95b _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Jan 20 14:59:19 compute-1 ceph-mon[81775]: pgmap v2217: 321 pgs: 321 active+clean; 346 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 23 KiB/s wr, 78 op/s
Jan 20 14:59:19 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:59:19 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:59:19 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:59:19.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:59:20 compute-1 podman[282512]: 2026-01-20 14:59:20.065943465 +0000 UTC m=+0.101552066 container health_status 72be95c12e041eb2cd1a16ad70dfa1391d9f03a5f3180a614ead9459dcfcd46f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller)
Jan 20 14:59:20 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:59:20 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2904376589' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:59:20 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/2897357433' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:59:20 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/2904376589' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:59:20 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:59:20 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:59:20 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:59:20.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:59:20 compute-1 nova_compute[225855]: 2026-01-20 14:59:20.235 225859 DEBUG nova.objects.instance [None req-12e9d29c-04e8-4082-a7d9-9ccfe2314a89 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] Lazy-loading 'flavor' on Instance uuid b4c1468d-9914-426a-9464-c1167de53632 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:59:20 compute-1 nova_compute[225855]: 2026-01-20 14:59:20.264 225859 DEBUG nova.virt.libvirt.driver [None req-12e9d29c-04e8-4082-a7d9-9ccfe2314a89 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] [instance: b4c1468d-9914-426a-9464-c1167de53632] Attempting to attach volume ab9a0bc3-5c61-456d-9fc8-3cb8ce358e66 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
Jan 20 14:59:20 compute-1 nova_compute[225855]: 2026-01-20 14:59:20.266 225859 DEBUG nova.virt.libvirt.guest [None req-12e9d29c-04e8-4082-a7d9-9ccfe2314a89 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] attach device xml: <disk type="network" device="disk">
Jan 20 14:59:20 compute-1 nova_compute[225855]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 20 14:59:20 compute-1 nova_compute[225855]:   <source protocol="rbd" name="volumes/volume-ab9a0bc3-5c61-456d-9fc8-3cb8ce358e66">
Jan 20 14:59:20 compute-1 nova_compute[225855]:     <host name="192.168.122.100" port="6789"/>
Jan 20 14:59:20 compute-1 nova_compute[225855]:     <host name="192.168.122.102" port="6789"/>
Jan 20 14:59:20 compute-1 nova_compute[225855]:     <host name="192.168.122.101" port="6789"/>
Jan 20 14:59:20 compute-1 nova_compute[225855]:   </source>
Jan 20 14:59:20 compute-1 nova_compute[225855]:   <auth username="openstack">
Jan 20 14:59:20 compute-1 nova_compute[225855]:     <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:59:20 compute-1 nova_compute[225855]:   </auth>
Jan 20 14:59:20 compute-1 nova_compute[225855]:   <target dev="vdc" bus="virtio"/>
Jan 20 14:59:20 compute-1 nova_compute[225855]:   <serial>ab9a0bc3-5c61-456d-9fc8-3cb8ce358e66</serial>
Jan 20 14:59:20 compute-1 nova_compute[225855]: </disk>
Jan 20 14:59:20 compute-1 nova_compute[225855]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Jan 20 14:59:20 compute-1 nova_compute[225855]: 2026-01-20 14:59:20.473 225859 DEBUG nova.virt.libvirt.driver [None req-12e9d29c-04e8-4082-a7d9-9ccfe2314a89 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:59:20 compute-1 nova_compute[225855]: 2026-01-20 14:59:20.474 225859 DEBUG nova.virt.libvirt.driver [None req-12e9d29c-04e8-4082-a7d9-9ccfe2314a89 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:59:20 compute-1 nova_compute[225855]: 2026-01-20 14:59:20.474 225859 DEBUG nova.virt.libvirt.driver [None req-12e9d29c-04e8-4082-a7d9-9ccfe2314a89 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:59:20 compute-1 nova_compute[225855]: 2026-01-20 14:59:20.474 225859 DEBUG nova.virt.libvirt.driver [None req-12e9d29c-04e8-4082-a7d9-9ccfe2314a89 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] No BDM found with device name vdc, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:59:20 compute-1 nova_compute[225855]: 2026-01-20 14:59:20.474 225859 DEBUG nova.virt.libvirt.driver [None req-12e9d29c-04e8-4082-a7d9-9ccfe2314a89 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] No VIF found with MAC fa:16:3e:d9:6a:1f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 20 14:59:20 compute-1 nova_compute[225855]: 2026-01-20 14:59:20.893 225859 DEBUG oslo_concurrency.lockutils [None req-12e9d29c-04e8-4082-a7d9-9ccfe2314a89 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] Lock "b4c1468d-9914-426a-9464-c1167de53632" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 1.965s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:59:21 compute-1 ceph-mon[81775]: pgmap v2218: 321 pgs: 321 active+clean; 349 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 120 KiB/s wr, 85 op/s
Jan 20 14:59:21 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:59:21 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:59:21 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:59:21.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:59:21 compute-1 nova_compute[225855]: 2026-01-20 14:59:21.763 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:59:22 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:59:22 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:59:22 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:59:22.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:59:22 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e321 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:59:22 compute-1 nova_compute[225855]: 2026-01-20 14:59:22.886 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:59:23 compute-1 ceph-mon[81775]: pgmap v2219: 321 pgs: 321 active+clean; 361 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 662 KiB/s wr, 56 op/s
Jan 20 14:59:23 compute-1 nova_compute[225855]: 2026-01-20 14:59:23.530 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:59:23 compute-1 NetworkManager[49104]: <info>  [1768921163.5311] manager: (patch-br-int-to-provnet-b62c391b-f7a3-4a38-a0df-72ac0383ca74): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/247)
Jan 20 14:59:23 compute-1 NetworkManager[49104]: <info>  [1768921163.5318] manager: (patch-provnet-b62c391b-f7a3-4a38-a0df-72ac0383ca74-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/248)
Jan 20 14:59:23 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:59:23 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:59:23 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:59:23.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:59:23 compute-1 nova_compute[225855]: 2026-01-20 14:59:23.701 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:59:23 compute-1 ovn_controller[130490]: 2026-01-20T14:59:23Z|00571|binding|INFO|Releasing lport 1623097d-35b0-4d71-9dc2-c4d659492102 from this chassis (sb_readonly=0)
Jan 20 14:59:23 compute-1 nova_compute[225855]: 2026-01-20 14:59:23.719 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:59:24 compute-1 nova_compute[225855]: 2026-01-20 14:59:24.168 225859 DEBUG nova.compute.manager [req-682a67f2-50bf-43a3-a24d-d8492a545836 req-a973bf47-e1db-4a7b-b77c-bc6c46e3543a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b4c1468d-9914-426a-9464-c1167de53632] Received event network-changed-7c572239-9b2e-493c-8be5-632f27cc634a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:59:24 compute-1 nova_compute[225855]: 2026-01-20 14:59:24.168 225859 DEBUG nova.compute.manager [req-682a67f2-50bf-43a3-a24d-d8492a545836 req-a973bf47-e1db-4a7b-b77c-bc6c46e3543a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b4c1468d-9914-426a-9464-c1167de53632] Refreshing instance network info cache due to event network-changed-7c572239-9b2e-493c-8be5-632f27cc634a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 14:59:24 compute-1 nova_compute[225855]: 2026-01-20 14:59:24.169 225859 DEBUG oslo_concurrency.lockutils [req-682a67f2-50bf-43a3-a24d-d8492a545836 req-a973bf47-e1db-4a7b-b77c-bc6c46e3543a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-b4c1468d-9914-426a-9464-c1167de53632" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:59:24 compute-1 nova_compute[225855]: 2026-01-20 14:59:24.169 225859 DEBUG oslo_concurrency.lockutils [req-682a67f2-50bf-43a3-a24d-d8492a545836 req-a973bf47-e1db-4a7b-b77c-bc6c46e3543a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-b4c1468d-9914-426a-9464-c1167de53632" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:59:24 compute-1 nova_compute[225855]: 2026-01-20 14:59:24.169 225859 DEBUG nova.network.neutron [req-682a67f2-50bf-43a3-a24d-d8492a545836 req-a973bf47-e1db-4a7b-b77c-bc6c46e3543a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b4c1468d-9914-426a-9464-c1167de53632] Refreshing network info cache for port 7c572239-9b2e-493c-8be5-632f27cc634a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 14:59:24 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:59:24 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:59:24 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:59:24.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:59:24 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/2388556621' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:59:24 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/1151045178' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:59:25 compute-1 ceph-mon[81775]: pgmap v2220: 321 pgs: 321 active+clean; 377 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 583 KiB/s rd, 1.4 MiB/s wr, 49 op/s
Jan 20 14:59:25 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:59:25 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:59:25 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:59:25.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:59:25 compute-1 nova_compute[225855]: 2026-01-20 14:59:25.960 225859 DEBUG nova.network.neutron [req-682a67f2-50bf-43a3-a24d-d8492a545836 req-a973bf47-e1db-4a7b-b77c-bc6c46e3543a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b4c1468d-9914-426a-9464-c1167de53632] Updated VIF entry in instance network info cache for port 7c572239-9b2e-493c-8be5-632f27cc634a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 14:59:25 compute-1 nova_compute[225855]: 2026-01-20 14:59:25.961 225859 DEBUG nova.network.neutron [req-682a67f2-50bf-43a3-a24d-d8492a545836 req-a973bf47-e1db-4a7b-b77c-bc6c46e3543a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b4c1468d-9914-426a-9464-c1167de53632] Updating instance_info_cache with network_info: [{"id": "7c572239-9b2e-493c-8be5-632f27cc634a", "address": "fa:16:3e:d9:6a:1f", "network": {"id": "58d966e1-4d26-414a-920e-0be2d77abb59", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-1896990059-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "107c1f3b5b7b413d9a389ca1166e331f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7c572239-9b", "ovs_interfaceid": "7c572239-9b2e-493c-8be5-632f27cc634a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:59:26 compute-1 nova_compute[225855]: 2026-01-20 14:59:26.000 225859 DEBUG oslo_concurrency.lockutils [req-682a67f2-50bf-43a3-a24d-d8492a545836 req-a973bf47-e1db-4a7b-b77c-bc6c46e3543a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-b4c1468d-9914-426a-9464-c1167de53632" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:59:26 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:59:26 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:59:26 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:59:26.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:59:26 compute-1 nova_compute[225855]: 2026-01-20 14:59:26.300 225859 DEBUG nova.compute.manager [req-6d8759a5-db1b-4688-b918-5a947368cf1f req-e6514c73-467e-46b9-8470-848fbf9e5278 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b4c1468d-9914-426a-9464-c1167de53632] Received event network-changed-7c572239-9b2e-493c-8be5-632f27cc634a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:59:26 compute-1 nova_compute[225855]: 2026-01-20 14:59:26.300 225859 DEBUG nova.compute.manager [req-6d8759a5-db1b-4688-b918-5a947368cf1f req-e6514c73-467e-46b9-8470-848fbf9e5278 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b4c1468d-9914-426a-9464-c1167de53632] Refreshing instance network info cache due to event network-changed-7c572239-9b2e-493c-8be5-632f27cc634a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 14:59:26 compute-1 nova_compute[225855]: 2026-01-20 14:59:26.301 225859 DEBUG oslo_concurrency.lockutils [req-6d8759a5-db1b-4688-b918-5a947368cf1f req-e6514c73-467e-46b9-8470-848fbf9e5278 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-b4c1468d-9914-426a-9464-c1167de53632" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:59:26 compute-1 nova_compute[225855]: 2026-01-20 14:59:26.301 225859 DEBUG oslo_concurrency.lockutils [req-6d8759a5-db1b-4688-b918-5a947368cf1f req-e6514c73-467e-46b9-8470-848fbf9e5278 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-b4c1468d-9914-426a-9464-c1167de53632" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:59:26 compute-1 nova_compute[225855]: 2026-01-20 14:59:26.301 225859 DEBUG nova.network.neutron [req-6d8759a5-db1b-4688-b918-5a947368cf1f req-e6514c73-467e-46b9-8470-848fbf9e5278 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b4c1468d-9914-426a-9464-c1167de53632] Refreshing network info cache for port 7c572239-9b2e-493c-8be5-632f27cc634a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 14:59:26 compute-1 ovn_controller[130490]: 2026-01-20T14:59:26Z|00572|binding|INFO|Releasing lport 1623097d-35b0-4d71-9dc2-c4d659492102 from this chassis (sb_readonly=0)
Jan 20 14:59:26 compute-1 nova_compute[225855]: 2026-01-20 14:59:26.650 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:59:26 compute-1 nova_compute[225855]: 2026-01-20 14:59:26.765 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:59:27 compute-1 ceph-mon[81775]: pgmap v2221: 321 pgs: 321 active+clean; 425 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 349 KiB/s rd, 3.9 MiB/s wr, 96 op/s
Jan 20 14:59:27 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e321 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:59:27 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:59:27 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:59:27 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:59:27.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:59:27 compute-1 nova_compute[225855]: 2026-01-20 14:59:27.867 225859 DEBUG nova.network.neutron [req-6d8759a5-db1b-4688-b918-5a947368cf1f req-e6514c73-467e-46b9-8470-848fbf9e5278 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b4c1468d-9914-426a-9464-c1167de53632] Updated VIF entry in instance network info cache for port 7c572239-9b2e-493c-8be5-632f27cc634a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 14:59:27 compute-1 nova_compute[225855]: 2026-01-20 14:59:27.868 225859 DEBUG nova.network.neutron [req-6d8759a5-db1b-4688-b918-5a947368cf1f req-e6514c73-467e-46b9-8470-848fbf9e5278 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b4c1468d-9914-426a-9464-c1167de53632] Updating instance_info_cache with network_info: [{"id": "7c572239-9b2e-493c-8be5-632f27cc634a", "address": "fa:16:3e:d9:6a:1f", "network": {"id": "58d966e1-4d26-414a-920e-0be2d77abb59", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-1896990059-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "107c1f3b5b7b413d9a389ca1166e331f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7c572239-9b", "ovs_interfaceid": "7c572239-9b2e-493c-8be5-632f27cc634a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:59:27 compute-1 nova_compute[225855]: 2026-01-20 14:59:27.888 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:59:27 compute-1 nova_compute[225855]: 2026-01-20 14:59:27.890 225859 DEBUG oslo_concurrency.lockutils [req-6d8759a5-db1b-4688-b918-5a947368cf1f req-e6514c73-467e-46b9-8470-848fbf9e5278 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-b4c1468d-9914-426a-9464-c1167de53632" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:59:27 compute-1 nova_compute[225855]: 2026-01-20 14:59:27.890 225859 DEBUG nova.compute.manager [req-6d8759a5-db1b-4688-b918-5a947368cf1f req-e6514c73-467e-46b9-8470-848fbf9e5278 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b4c1468d-9914-426a-9464-c1167de53632] Received event network-changed-7c572239-9b2e-493c-8be5-632f27cc634a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:59:27 compute-1 nova_compute[225855]: 2026-01-20 14:59:27.890 225859 DEBUG nova.compute.manager [req-6d8759a5-db1b-4688-b918-5a947368cf1f req-e6514c73-467e-46b9-8470-848fbf9e5278 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b4c1468d-9914-426a-9464-c1167de53632] Refreshing instance network info cache due to event network-changed-7c572239-9b2e-493c-8be5-632f27cc634a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 14:59:27 compute-1 nova_compute[225855]: 2026-01-20 14:59:27.890 225859 DEBUG oslo_concurrency.lockutils [req-6d8759a5-db1b-4688-b918-5a947368cf1f req-e6514c73-467e-46b9-8470-848fbf9e5278 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-b4c1468d-9914-426a-9464-c1167de53632" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:59:27 compute-1 nova_compute[225855]: 2026-01-20 14:59:27.891 225859 DEBUG oslo_concurrency.lockutils [req-6d8759a5-db1b-4688-b918-5a947368cf1f req-e6514c73-467e-46b9-8470-848fbf9e5278 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-b4c1468d-9914-426a-9464-c1167de53632" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:59:27 compute-1 nova_compute[225855]: 2026-01-20 14:59:27.891 225859 DEBUG nova.network.neutron [req-6d8759a5-db1b-4688-b918-5a947368cf1f req-e6514c73-467e-46b9-8470-848fbf9e5278 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b4c1468d-9914-426a-9464-c1167de53632] Refreshing network info cache for port 7c572239-9b2e-493c-8be5-632f27cc634a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 14:59:28 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:59:28 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:59:28 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:59:28.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:59:28 compute-1 ceph-mon[81775]: pgmap v2222: 321 pgs: 321 active+clean; 425 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 347 KiB/s rd, 3.9 MiB/s wr, 93 op/s
Jan 20 14:59:29 compute-1 nova_compute[225855]: 2026-01-20 14:59:29.516 225859 DEBUG nova.network.neutron [req-6d8759a5-db1b-4688-b918-5a947368cf1f req-e6514c73-467e-46b9-8470-848fbf9e5278 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b4c1468d-9914-426a-9464-c1167de53632] Updated VIF entry in instance network info cache for port 7c572239-9b2e-493c-8be5-632f27cc634a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 14:59:29 compute-1 nova_compute[225855]: 2026-01-20 14:59:29.516 225859 DEBUG nova.network.neutron [req-6d8759a5-db1b-4688-b918-5a947368cf1f req-e6514c73-467e-46b9-8470-848fbf9e5278 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b4c1468d-9914-426a-9464-c1167de53632] Updating instance_info_cache with network_info: [{"id": "7c572239-9b2e-493c-8be5-632f27cc634a", "address": "fa:16:3e:d9:6a:1f", "network": {"id": "58d966e1-4d26-414a-920e-0be2d77abb59", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-1896990059-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "107c1f3b5b7b413d9a389ca1166e331f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7c572239-9b", "ovs_interfaceid": "7c572239-9b2e-493c-8be5-632f27cc634a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:59:29 compute-1 nova_compute[225855]: 2026-01-20 14:59:29.551 225859 DEBUG oslo_concurrency.lockutils [req-6d8759a5-db1b-4688-b918-5a947368cf1f req-e6514c73-467e-46b9-8470-848fbf9e5278 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-b4c1468d-9914-426a-9464-c1167de53632" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:59:29 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:59:29 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:59:29 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:59:29.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:59:30 compute-1 nova_compute[225855]: 2026-01-20 14:59:30.208 225859 DEBUG nova.compute.manager [req-a66d5549-f4d4-4c87-9c17-fc333baab02a req-9fbf6400-3219-4b7d-9055-230cb1d7330c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b4c1468d-9914-426a-9464-c1167de53632] Received event network-changed-7c572239-9b2e-493c-8be5-632f27cc634a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:59:30 compute-1 nova_compute[225855]: 2026-01-20 14:59:30.208 225859 DEBUG nova.compute.manager [req-a66d5549-f4d4-4c87-9c17-fc333baab02a req-9fbf6400-3219-4b7d-9055-230cb1d7330c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b4c1468d-9914-426a-9464-c1167de53632] Refreshing instance network info cache due to event network-changed-7c572239-9b2e-493c-8be5-632f27cc634a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 14:59:30 compute-1 nova_compute[225855]: 2026-01-20 14:59:30.209 225859 DEBUG oslo_concurrency.lockutils [req-a66d5549-f4d4-4c87-9c17-fc333baab02a req-9fbf6400-3219-4b7d-9055-230cb1d7330c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-b4c1468d-9914-426a-9464-c1167de53632" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:59:30 compute-1 nova_compute[225855]: 2026-01-20 14:59:30.209 225859 DEBUG oslo_concurrency.lockutils [req-a66d5549-f4d4-4c87-9c17-fc333baab02a req-9fbf6400-3219-4b7d-9055-230cb1d7330c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-b4c1468d-9914-426a-9464-c1167de53632" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:59:30 compute-1 nova_compute[225855]: 2026-01-20 14:59:30.209 225859 DEBUG nova.network.neutron [req-a66d5549-f4d4-4c87-9c17-fc333baab02a req-9fbf6400-3219-4b7d-9055-230cb1d7330c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b4c1468d-9914-426a-9464-c1167de53632] Refreshing network info cache for port 7c572239-9b2e-493c-8be5-632f27cc634a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 14:59:30 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:59:30 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:59:30 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:59:30.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:59:31 compute-1 ceph-mon[81775]: pgmap v2223: 321 pgs: 321 active+clean; 399 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 3.9 MiB/s wr, 171 op/s
Jan 20 14:59:31 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/2951671161' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:59:31 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:59:31 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:59:31 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:59:31.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:59:31 compute-1 nova_compute[225855]: 2026-01-20 14:59:31.767 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:59:32 compute-1 podman[282565]: 2026-01-20 14:59:32.003738749 +0000 UTC m=+0.048705545 container health_status 533cd8e0c3b8b7f910f15cef49e7f879ab46c12ce646a35628c15f13725eb500 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Jan 20 14:59:32 compute-1 nova_compute[225855]: 2026-01-20 14:59:32.097 225859 DEBUG nova.network.neutron [req-a66d5549-f4d4-4c87-9c17-fc333baab02a req-9fbf6400-3219-4b7d-9055-230cb1d7330c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b4c1468d-9914-426a-9464-c1167de53632] Updated VIF entry in instance network info cache for port 7c572239-9b2e-493c-8be5-632f27cc634a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 14:59:32 compute-1 nova_compute[225855]: 2026-01-20 14:59:32.097 225859 DEBUG nova.network.neutron [req-a66d5549-f4d4-4c87-9c17-fc333baab02a req-9fbf6400-3219-4b7d-9055-230cb1d7330c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b4c1468d-9914-426a-9464-c1167de53632] Updating instance_info_cache with network_info: [{"id": "7c572239-9b2e-493c-8be5-632f27cc634a", "address": "fa:16:3e:d9:6a:1f", "network": {"id": "58d966e1-4d26-414a-920e-0be2d77abb59", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-1896990059-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "107c1f3b5b7b413d9a389ca1166e331f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7c572239-9b", "ovs_interfaceid": "7c572239-9b2e-493c-8be5-632f27cc634a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:59:32 compute-1 nova_compute[225855]: 2026-01-20 14:59:32.149 225859 DEBUG oslo_concurrency.lockutils [req-a66d5549-f4d4-4c87-9c17-fc333baab02a req-9fbf6400-3219-4b7d-9055-230cb1d7330c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-b4c1468d-9914-426a-9464-c1167de53632" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:59:32 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:59:32 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:59:32 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:59:32.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:59:32 compute-1 nova_compute[225855]: 2026-01-20 14:59:32.493 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:59:32 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e321 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:59:32 compute-1 nova_compute[225855]: 2026-01-20 14:59:32.890 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:59:33 compute-1 ceph-mon[81775]: pgmap v2224: 321 pgs: 321 active+clean; 395 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 3.8 MiB/s wr, 182 op/s
Jan 20 14:59:33 compute-1 nova_compute[225855]: 2026-01-20 14:59:33.248 225859 DEBUG oslo_concurrency.lockutils [None req-b80c68e5-e08d-494b-a7ad-c2e7cbbeaae4 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] Acquiring lock "b4c1468d-9914-426a-9464-c1167de53632" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:59:33 compute-1 nova_compute[225855]: 2026-01-20 14:59:33.249 225859 DEBUG oslo_concurrency.lockutils [None req-b80c68e5-e08d-494b-a7ad-c2e7cbbeaae4 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] Lock "b4c1468d-9914-426a-9464-c1167de53632" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:59:33 compute-1 nova_compute[225855]: 2026-01-20 14:59:33.264 225859 INFO nova.compute.manager [None req-b80c68e5-e08d-494b-a7ad-c2e7cbbeaae4 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] [instance: b4c1468d-9914-426a-9464-c1167de53632] Detaching volume b0619b28-88eb-4051-9e30-36100f39c117
Jan 20 14:59:33 compute-1 nova_compute[225855]: 2026-01-20 14:59:33.474 225859 INFO nova.virt.block_device [None req-b80c68e5-e08d-494b-a7ad-c2e7cbbeaae4 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] [instance: b4c1468d-9914-426a-9464-c1167de53632] Attempting to driver detach volume b0619b28-88eb-4051-9e30-36100f39c117 from mountpoint /dev/vdb
Jan 20 14:59:33 compute-1 nova_compute[225855]: 2026-01-20 14:59:33.486 225859 DEBUG nova.virt.libvirt.driver [None req-b80c68e5-e08d-494b-a7ad-c2e7cbbeaae4 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] Attempting to detach device vdb from instance b4c1468d-9914-426a-9464-c1167de53632 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Jan 20 14:59:33 compute-1 nova_compute[225855]: 2026-01-20 14:59:33.487 225859 DEBUG nova.virt.libvirt.guest [None req-b80c68e5-e08d-494b-a7ad-c2e7cbbeaae4 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] detach device xml: <disk type="network" device="disk">
Jan 20 14:59:33 compute-1 nova_compute[225855]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 20 14:59:33 compute-1 nova_compute[225855]:   <source protocol="rbd" name="volumes/volume-b0619b28-88eb-4051-9e30-36100f39c117">
Jan 20 14:59:33 compute-1 nova_compute[225855]:     <host name="192.168.122.100" port="6789"/>
Jan 20 14:59:33 compute-1 nova_compute[225855]:     <host name="192.168.122.102" port="6789"/>
Jan 20 14:59:33 compute-1 nova_compute[225855]:     <host name="192.168.122.101" port="6789"/>
Jan 20 14:59:33 compute-1 nova_compute[225855]:   </source>
Jan 20 14:59:33 compute-1 nova_compute[225855]:   <target dev="vdb" bus="virtio"/>
Jan 20 14:59:33 compute-1 nova_compute[225855]:   <serial>b0619b28-88eb-4051-9e30-36100f39c117</serial>
Jan 20 14:59:33 compute-1 nova_compute[225855]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Jan 20 14:59:33 compute-1 nova_compute[225855]: </disk>
Jan 20 14:59:33 compute-1 nova_compute[225855]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Jan 20 14:59:33 compute-1 nova_compute[225855]: 2026-01-20 14:59:33.495 225859 INFO nova.virt.libvirt.driver [None req-b80c68e5-e08d-494b-a7ad-c2e7cbbeaae4 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] Successfully detached device vdb from instance b4c1468d-9914-426a-9464-c1167de53632 from the persistent domain config.
Jan 20 14:59:33 compute-1 nova_compute[225855]: 2026-01-20 14:59:33.495 225859 DEBUG nova.virt.libvirt.driver [None req-b80c68e5-e08d-494b-a7ad-c2e7cbbeaae4 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance b4c1468d-9914-426a-9464-c1167de53632 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Jan 20 14:59:33 compute-1 nova_compute[225855]: 2026-01-20 14:59:33.496 225859 DEBUG nova.virt.libvirt.guest [None req-b80c68e5-e08d-494b-a7ad-c2e7cbbeaae4 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] detach device xml: <disk type="network" device="disk">
Jan 20 14:59:33 compute-1 nova_compute[225855]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 20 14:59:33 compute-1 nova_compute[225855]:   <source protocol="rbd" name="volumes/volume-b0619b28-88eb-4051-9e30-36100f39c117">
Jan 20 14:59:33 compute-1 nova_compute[225855]:     <host name="192.168.122.100" port="6789"/>
Jan 20 14:59:33 compute-1 nova_compute[225855]:     <host name="192.168.122.102" port="6789"/>
Jan 20 14:59:33 compute-1 nova_compute[225855]:     <host name="192.168.122.101" port="6789"/>
Jan 20 14:59:33 compute-1 nova_compute[225855]:   </source>
Jan 20 14:59:33 compute-1 nova_compute[225855]:   <target dev="vdb" bus="virtio"/>
Jan 20 14:59:33 compute-1 nova_compute[225855]:   <serial>b0619b28-88eb-4051-9e30-36100f39c117</serial>
Jan 20 14:59:33 compute-1 nova_compute[225855]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Jan 20 14:59:33 compute-1 nova_compute[225855]: </disk>
Jan 20 14:59:33 compute-1 nova_compute[225855]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Jan 20 14:59:33 compute-1 nova_compute[225855]: 2026-01-20 14:59:33.564 225859 DEBUG nova.virt.libvirt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Received event <DeviceRemovedEvent: 1768921173.5637622, b4c1468d-9914-426a-9464-c1167de53632 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Jan 20 14:59:33 compute-1 nova_compute[225855]: 2026-01-20 14:59:33.566 225859 DEBUG nova.virt.libvirt.driver [None req-b80c68e5-e08d-494b-a7ad-c2e7cbbeaae4 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance b4c1468d-9914-426a-9464-c1167de53632 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Jan 20 14:59:33 compute-1 nova_compute[225855]: 2026-01-20 14:59:33.568 225859 INFO nova.virt.libvirt.driver [None req-b80c68e5-e08d-494b-a7ad-c2e7cbbeaae4 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] Successfully detached device vdb from instance b4c1468d-9914-426a-9464-c1167de53632 from the live domain config.
Jan 20 14:59:33 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:59:33 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:59:33 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:59:33.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:59:33 compute-1 nova_compute[225855]: 2026-01-20 14:59:33.889 225859 DEBUG nova.objects.instance [None req-b80c68e5-e08d-494b-a7ad-c2e7cbbeaae4 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] Lazy-loading 'flavor' on Instance uuid b4c1468d-9914-426a-9464-c1167de53632 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:59:33 compute-1 nova_compute[225855]: 2026-01-20 14:59:33.937 225859 DEBUG oslo_concurrency.lockutils [None req-b80c68e5-e08d-494b-a7ad-c2e7cbbeaae4 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] Lock "b4c1468d-9914-426a-9464-c1167de53632" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.688s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:59:34 compute-1 sudo[282588]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:59:34 compute-1 sudo[282588]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:59:34 compute-1 sudo[282588]: pam_unix(sudo:session): session closed for user root
Jan 20 14:59:34 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:59:34 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:59:34 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:59:34.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:59:34 compute-1 sudo[282613]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:59:34 compute-1 sudo[282613]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:59:34 compute-1 sudo[282613]: pam_unix(sudo:session): session closed for user root
Jan 20 14:59:35 compute-1 ceph-mon[81775]: pgmap v2225: 321 pgs: 321 active+clean; 381 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 3.5 MiB/s wr, 180 op/s
Jan 20 14:59:35 compute-1 nova_compute[225855]: 2026-01-20 14:59:35.441 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:59:35 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:59:35 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:59:35 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:59:35.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:59:36 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:59:36 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:59:36 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:59:36.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:59:36 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/280429019' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 14:59:36 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/280429019' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 14:59:36 compute-1 nova_compute[225855]: 2026-01-20 14:59:36.769 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:59:36 compute-1 nova_compute[225855]: 2026-01-20 14:59:36.860 225859 DEBUG oslo_concurrency.lockutils [None req-cd488a83-9815-4af5-90e2-201aab21d807 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] Acquiring lock "b4c1468d-9914-426a-9464-c1167de53632" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:59:36 compute-1 nova_compute[225855]: 2026-01-20 14:59:36.861 225859 DEBUG oslo_concurrency.lockutils [None req-cd488a83-9815-4af5-90e2-201aab21d807 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] Lock "b4c1468d-9914-426a-9464-c1167de53632" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:59:36 compute-1 nova_compute[225855]: 2026-01-20 14:59:36.874 225859 INFO nova.compute.manager [None req-cd488a83-9815-4af5-90e2-201aab21d807 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] [instance: b4c1468d-9914-426a-9464-c1167de53632] Detaching volume ab9a0bc3-5c61-456d-9fc8-3cb8ce358e66
Jan 20 14:59:37 compute-1 nova_compute[225855]: 2026-01-20 14:59:37.017 225859 INFO nova.virt.block_device [None req-cd488a83-9815-4af5-90e2-201aab21d807 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] [instance: b4c1468d-9914-426a-9464-c1167de53632] Attempting to driver detach volume ab9a0bc3-5c61-456d-9fc8-3cb8ce358e66 from mountpoint /dev/vdc
Jan 20 14:59:37 compute-1 nova_compute[225855]: 2026-01-20 14:59:37.025 225859 DEBUG nova.virt.libvirt.driver [None req-cd488a83-9815-4af5-90e2-201aab21d807 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] Attempting to detach device vdc from instance b4c1468d-9914-426a-9464-c1167de53632 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Jan 20 14:59:37 compute-1 nova_compute[225855]: 2026-01-20 14:59:37.026 225859 DEBUG nova.virt.libvirt.guest [None req-cd488a83-9815-4af5-90e2-201aab21d807 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] detach device xml: <disk type="network" device="disk">
Jan 20 14:59:37 compute-1 nova_compute[225855]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 20 14:59:37 compute-1 nova_compute[225855]:   <source protocol="rbd" name="volumes/volume-ab9a0bc3-5c61-456d-9fc8-3cb8ce358e66">
Jan 20 14:59:37 compute-1 nova_compute[225855]:     <host name="192.168.122.100" port="6789"/>
Jan 20 14:59:37 compute-1 nova_compute[225855]:     <host name="192.168.122.102" port="6789"/>
Jan 20 14:59:37 compute-1 nova_compute[225855]:     <host name="192.168.122.101" port="6789"/>
Jan 20 14:59:37 compute-1 nova_compute[225855]:   </source>
Jan 20 14:59:37 compute-1 nova_compute[225855]:   <target dev="vdc" bus="virtio"/>
Jan 20 14:59:37 compute-1 nova_compute[225855]:   <serial>ab9a0bc3-5c61-456d-9fc8-3cb8ce358e66</serial>
Jan 20 14:59:37 compute-1 nova_compute[225855]:   <address type="pci" domain="0x0000" bus="0x07" slot="0x00" function="0x0"/>
Jan 20 14:59:37 compute-1 nova_compute[225855]: </disk>
Jan 20 14:59:37 compute-1 nova_compute[225855]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Jan 20 14:59:37 compute-1 nova_compute[225855]: 2026-01-20 14:59:37.049 225859 INFO nova.virt.libvirt.driver [None req-cd488a83-9815-4af5-90e2-201aab21d807 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] Successfully detached device vdc from instance b4c1468d-9914-426a-9464-c1167de53632 from the persistent domain config.
Jan 20 14:59:37 compute-1 nova_compute[225855]: 2026-01-20 14:59:37.049 225859 DEBUG nova.virt.libvirt.driver [None req-cd488a83-9815-4af5-90e2-201aab21d807 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] (1/8): Attempting to detach device vdc with device alias virtio-disk2 from instance b4c1468d-9914-426a-9464-c1167de53632 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Jan 20 14:59:37 compute-1 nova_compute[225855]: 2026-01-20 14:59:37.050 225859 DEBUG nova.virt.libvirt.guest [None req-cd488a83-9815-4af5-90e2-201aab21d807 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] detach device xml: <disk type="network" device="disk">
Jan 20 14:59:37 compute-1 nova_compute[225855]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 20 14:59:37 compute-1 nova_compute[225855]:   <source protocol="rbd" name="volumes/volume-ab9a0bc3-5c61-456d-9fc8-3cb8ce358e66">
Jan 20 14:59:37 compute-1 nova_compute[225855]:     <host name="192.168.122.100" port="6789"/>
Jan 20 14:59:37 compute-1 nova_compute[225855]:     <host name="192.168.122.102" port="6789"/>
Jan 20 14:59:37 compute-1 nova_compute[225855]:     <host name="192.168.122.101" port="6789"/>
Jan 20 14:59:37 compute-1 nova_compute[225855]:   </source>
Jan 20 14:59:37 compute-1 nova_compute[225855]:   <target dev="vdc" bus="virtio"/>
Jan 20 14:59:37 compute-1 nova_compute[225855]:   <serial>ab9a0bc3-5c61-456d-9fc8-3cb8ce358e66</serial>
Jan 20 14:59:37 compute-1 nova_compute[225855]:   <address type="pci" domain="0x0000" bus="0x07" slot="0x00" function="0x0"/>
Jan 20 14:59:37 compute-1 nova_compute[225855]: </disk>
Jan 20 14:59:37 compute-1 nova_compute[225855]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Jan 20 14:59:37 compute-1 nova_compute[225855]: 2026-01-20 14:59:37.131 225859 DEBUG nova.virt.libvirt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Received event <DeviceRemovedEvent: 1768921177.1310332, b4c1468d-9914-426a-9464-c1167de53632 => virtio-disk2> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Jan 20 14:59:37 compute-1 nova_compute[225855]: 2026-01-20 14:59:37.133 225859 DEBUG nova.virt.libvirt.driver [None req-cd488a83-9815-4af5-90e2-201aab21d807 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] Start waiting for the detach event from libvirt for device vdc with device alias virtio-disk2 for instance b4c1468d-9914-426a-9464-c1167de53632 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Jan 20 14:59:37 compute-1 nova_compute[225855]: 2026-01-20 14:59:37.135 225859 INFO nova.virt.libvirt.driver [None req-cd488a83-9815-4af5-90e2-201aab21d807 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] Successfully detached device vdc from instance b4c1468d-9914-426a-9464-c1167de53632 from the live domain config.
Jan 20 14:59:37 compute-1 nova_compute[225855]: 2026-01-20 14:59:37.347 225859 DEBUG nova.objects.instance [None req-cd488a83-9815-4af5-90e2-201aab21d807 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] Lazy-loading 'flavor' on Instance uuid b4c1468d-9914-426a-9464-c1167de53632 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:59:37 compute-1 ceph-mon[81775]: pgmap v2226: 321 pgs: 321 active+clean; 381 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.7 MiB/s wr, 184 op/s
Jan 20 14:59:37 compute-1 nova_compute[225855]: 2026-01-20 14:59:37.396 225859 DEBUG oslo_concurrency.lockutils [None req-cd488a83-9815-4af5-90e2-201aab21d807 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] Lock "b4c1468d-9914-426a-9464-c1167de53632" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.535s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:59:37 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e321 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:59:37 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:59:37 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:59:37 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:59:37.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:59:37 compute-1 nova_compute[225855]: 2026-01-20 14:59:37.892 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:59:38 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:59:38 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:59:38 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:59:38.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:59:38 compute-1 ceph-mon[81775]: pgmap v2227: 321 pgs: 321 active+clean; 381 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 215 KiB/s wr, 121 op/s
Jan 20 14:59:38 compute-1 nova_compute[225855]: 2026-01-20 14:59:38.855 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:59:38 compute-1 nova_compute[225855]: 2026-01-20 14:59:38.885 225859 DEBUG nova.compute.manager [req-7530eee5-eefe-4483-a826-64dfa87a7d90 req-6fa91407-0234-4c78-8cbc-125bec97a4ae 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b4c1468d-9914-426a-9464-c1167de53632] Received event network-changed-7c572239-9b2e-493c-8be5-632f27cc634a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:59:38 compute-1 nova_compute[225855]: 2026-01-20 14:59:38.885 225859 DEBUG nova.compute.manager [req-7530eee5-eefe-4483-a826-64dfa87a7d90 req-6fa91407-0234-4c78-8cbc-125bec97a4ae 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b4c1468d-9914-426a-9464-c1167de53632] Refreshing instance network info cache due to event network-changed-7c572239-9b2e-493c-8be5-632f27cc634a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 14:59:38 compute-1 nova_compute[225855]: 2026-01-20 14:59:38.886 225859 DEBUG oslo_concurrency.lockutils [req-7530eee5-eefe-4483-a826-64dfa87a7d90 req-6fa91407-0234-4c78-8cbc-125bec97a4ae 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-b4c1468d-9914-426a-9464-c1167de53632" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:59:38 compute-1 nova_compute[225855]: 2026-01-20 14:59:38.886 225859 DEBUG oslo_concurrency.lockutils [req-7530eee5-eefe-4483-a826-64dfa87a7d90 req-6fa91407-0234-4c78-8cbc-125bec97a4ae 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-b4c1468d-9914-426a-9464-c1167de53632" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:59:38 compute-1 nova_compute[225855]: 2026-01-20 14:59:38.886 225859 DEBUG nova.network.neutron [req-7530eee5-eefe-4483-a826-64dfa87a7d90 req-6fa91407-0234-4c78-8cbc-125bec97a4ae 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b4c1468d-9914-426a-9464-c1167de53632] Refreshing network info cache for port 7c572239-9b2e-493c-8be5-632f27cc634a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 14:59:39 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 20 14:59:39 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/385370313' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 14:59:39 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 20 14:59:39 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/385370313' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 14:59:39 compute-1 nova_compute[225855]: 2026-01-20 14:59:39.198 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:59:39 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/385370313' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 14:59:39 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/385370313' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 14:59:39 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:59:39 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:59:39 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:59:39.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:59:39 compute-1 nova_compute[225855]: 2026-01-20 14:59:39.898 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:59:40 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:59:40 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:59:40 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:59:40.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:59:40 compute-1 ceph-mon[81775]: pgmap v2228: 321 pgs: 321 active+clean; 379 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 392 KiB/s wr, 129 op/s
Jan 20 14:59:40 compute-1 nova_compute[225855]: 2026-01-20 14:59:40.768 225859 DEBUG nova.network.neutron [req-7530eee5-eefe-4483-a826-64dfa87a7d90 req-6fa91407-0234-4c78-8cbc-125bec97a4ae 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b4c1468d-9914-426a-9464-c1167de53632] Updated VIF entry in instance network info cache for port 7c572239-9b2e-493c-8be5-632f27cc634a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 14:59:40 compute-1 nova_compute[225855]: 2026-01-20 14:59:40.769 225859 DEBUG nova.network.neutron [req-7530eee5-eefe-4483-a826-64dfa87a7d90 req-6fa91407-0234-4c78-8cbc-125bec97a4ae 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b4c1468d-9914-426a-9464-c1167de53632] Updating instance_info_cache with network_info: [{"id": "7c572239-9b2e-493c-8be5-632f27cc634a", "address": "fa:16:3e:d9:6a:1f", "network": {"id": "58d966e1-4d26-414a-920e-0be2d77abb59", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-1896990059-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "107c1f3b5b7b413d9a389ca1166e331f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7c572239-9b", "ovs_interfaceid": "7c572239-9b2e-493c-8be5-632f27cc634a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:59:40 compute-1 nova_compute[225855]: 2026-01-20 14:59:40.798 225859 DEBUG oslo_concurrency.lockutils [req-7530eee5-eefe-4483-a826-64dfa87a7d90 req-6fa91407-0234-4c78-8cbc-125bec97a4ae 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-b4c1468d-9914-426a-9464-c1167de53632" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:59:41 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:59:41.017 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=44, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '12:bb:42', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '06:92:24:f7:15:56'}, ipsec=False) old=SB_Global(nb_cfg=43) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:59:41 compute-1 nova_compute[225855]: 2026-01-20 14:59:41.017 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:59:41 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:59:41.018 140354 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 20 14:59:41 compute-1 nova_compute[225855]: 2026-01-20 14:59:41.340 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:59:41 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:59:41 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:59:41 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:59:41.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:59:41 compute-1 nova_compute[225855]: 2026-01-20 14:59:41.771 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:59:42 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:59:42 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:59:42 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:59:42.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:59:42 compute-1 nova_compute[225855]: 2026-01-20 14:59:42.340 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:59:42 compute-1 nova_compute[225855]: 2026-01-20 14:59:42.340 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 14:59:42 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e321 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:59:42 compute-1 nova_compute[225855]: 2026-01-20 14:59:42.894 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:59:43 compute-1 ceph-mon[81775]: pgmap v2229: 321 pgs: 321 active+clean; 379 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 211 KiB/s rd, 368 KiB/s wr, 66 op/s
Jan 20 14:59:43 compute-1 nova_compute[225855]: 2026-01-20 14:59:43.340 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:59:43 compute-1 nova_compute[225855]: 2026-01-20 14:59:43.341 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 14:59:43 compute-1 nova_compute[225855]: 2026-01-20 14:59:43.586 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "refresh_cache-b4c1468d-9914-426a-9464-c1167de53632" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:59:43 compute-1 nova_compute[225855]: 2026-01-20 14:59:43.586 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquired lock "refresh_cache-b4c1468d-9914-426a-9464-c1167de53632" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:59:43 compute-1 nova_compute[225855]: 2026-01-20 14:59:43.587 225859 DEBUG nova.network.neutron [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] [instance: b4c1468d-9914-426a-9464-c1167de53632] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 20 14:59:43 compute-1 nova_compute[225855]: 2026-01-20 14:59:43.641 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:59:43 compute-1 nova_compute[225855]: 2026-01-20 14:59:43.658 225859 DEBUG nova.compute.manager [req-ade06d21-a8ba-44d2-8d65-9051a1ae8bdf req-2acbb262-49bf-49ef-b902-910eaa03f9a2 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b4c1468d-9914-426a-9464-c1167de53632] Received event network-changed-7c572239-9b2e-493c-8be5-632f27cc634a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:59:43 compute-1 nova_compute[225855]: 2026-01-20 14:59:43.658 225859 DEBUG nova.compute.manager [req-ade06d21-a8ba-44d2-8d65-9051a1ae8bdf req-2acbb262-49bf-49ef-b902-910eaa03f9a2 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b4c1468d-9914-426a-9464-c1167de53632] Refreshing instance network info cache due to event network-changed-7c572239-9b2e-493c-8be5-632f27cc634a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 14:59:43 compute-1 nova_compute[225855]: 2026-01-20 14:59:43.659 225859 DEBUG oslo_concurrency.lockutils [req-ade06d21-a8ba-44d2-8d65-9051a1ae8bdf req-2acbb262-49bf-49ef-b902-910eaa03f9a2 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-b4c1468d-9914-426a-9464-c1167de53632" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:59:43 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:59:43 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:59:43 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:59:43.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:59:44 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:59:44 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 14:59:44 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:59:44.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 14:59:44 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/764431920' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:59:45 compute-1 ceph-mon[81775]: pgmap v2230: 321 pgs: 321 active+clean; 379 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 34 KiB/s rd, 368 KiB/s wr, 48 op/s
Jan 20 14:59:45 compute-1 nova_compute[225855]: 2026-01-20 14:59:45.520 225859 DEBUG nova.network.neutron [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] [instance: b4c1468d-9914-426a-9464-c1167de53632] Updating instance_info_cache with network_info: [{"id": "7c572239-9b2e-493c-8be5-632f27cc634a", "address": "fa:16:3e:d9:6a:1f", "network": {"id": "58d966e1-4d26-414a-920e-0be2d77abb59", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-1896990059-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "107c1f3b5b7b413d9a389ca1166e331f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7c572239-9b", "ovs_interfaceid": "7c572239-9b2e-493c-8be5-632f27cc634a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:59:45 compute-1 nova_compute[225855]: 2026-01-20 14:59:45.596 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Releasing lock "refresh_cache-b4c1468d-9914-426a-9464-c1167de53632" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:59:45 compute-1 nova_compute[225855]: 2026-01-20 14:59:45.596 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] [instance: b4c1468d-9914-426a-9464-c1167de53632] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 20 14:59:45 compute-1 nova_compute[225855]: 2026-01-20 14:59:45.597 225859 DEBUG oslo_concurrency.lockutils [req-ade06d21-a8ba-44d2-8d65-9051a1ae8bdf req-2acbb262-49bf-49ef-b902-910eaa03f9a2 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-b4c1468d-9914-426a-9464-c1167de53632" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:59:45 compute-1 nova_compute[225855]: 2026-01-20 14:59:45.597 225859 DEBUG nova.network.neutron [req-ade06d21-a8ba-44d2-8d65-9051a1ae8bdf req-2acbb262-49bf-49ef-b902-910eaa03f9a2 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b4c1468d-9914-426a-9464-c1167de53632] Refreshing network info cache for port 7c572239-9b2e-493c-8be5-632f27cc634a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 14:59:45 compute-1 nova_compute[225855]: 2026-01-20 14:59:45.598 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:59:45 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:59:45 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:59:45 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:59:45.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:59:46 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:59:46 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:59:46 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:59:46.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:59:46 compute-1 nova_compute[225855]: 2026-01-20 14:59:46.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:59:46 compute-1 ceph-mon[81775]: pgmap v2231: 321 pgs: 321 active+clean; 416 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 44 KiB/s rd, 1.7 MiB/s wr, 63 op/s
Jan 20 14:59:46 compute-1 nova_compute[225855]: 2026-01-20 14:59:46.773 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:59:47 compute-1 nova_compute[225855]: 2026-01-20 14:59:47.014 225859 DEBUG nova.compute.manager [req-dfb5baab-cb45-49f0-ab4a-92873c87e12d req-cd87dfdc-674d-4cde-932a-7f89d3c335b1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 1ebdefed-0903-4d72-b78d-912666c5ce61] Received event network-changed-3067803c-07f3-4a15-a5ee-47f9a770efca external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:59:47 compute-1 nova_compute[225855]: 2026-01-20 14:59:47.015 225859 DEBUG nova.compute.manager [req-dfb5baab-cb45-49f0-ab4a-92873c87e12d req-cd87dfdc-674d-4cde-932a-7f89d3c335b1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 1ebdefed-0903-4d72-b78d-912666c5ce61] Refreshing instance network info cache due to event network-changed-3067803c-07f3-4a15-a5ee-47f9a770efca. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 14:59:47 compute-1 nova_compute[225855]: 2026-01-20 14:59:47.015 225859 DEBUG oslo_concurrency.lockutils [req-dfb5baab-cb45-49f0-ab4a-92873c87e12d req-cd87dfdc-674d-4cde-932a-7f89d3c335b1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-1ebdefed-0903-4d72-b78d-912666c5ce61" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:59:47 compute-1 nova_compute[225855]: 2026-01-20 14:59:47.016 225859 DEBUG oslo_concurrency.lockutils [req-dfb5baab-cb45-49f0-ab4a-92873c87e12d req-cd87dfdc-674d-4cde-932a-7f89d3c335b1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-1ebdefed-0903-4d72-b78d-912666c5ce61" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:59:47 compute-1 nova_compute[225855]: 2026-01-20 14:59:47.016 225859 DEBUG nova.network.neutron [req-dfb5baab-cb45-49f0-ab4a-92873c87e12d req-cd87dfdc-674d-4cde-932a-7f89d3c335b1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 1ebdefed-0903-4d72-b78d-912666c5ce61] Refreshing network info cache for port 3067803c-07f3-4a15-a5ee-47f9a770efca _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 14:59:47 compute-1 nova_compute[225855]: 2026-01-20 14:59:47.354 225859 DEBUG nova.network.neutron [req-ade06d21-a8ba-44d2-8d65-9051a1ae8bdf req-2acbb262-49bf-49ef-b902-910eaa03f9a2 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b4c1468d-9914-426a-9464-c1167de53632] Updated VIF entry in instance network info cache for port 7c572239-9b2e-493c-8be5-632f27cc634a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 14:59:47 compute-1 nova_compute[225855]: 2026-01-20 14:59:47.355 225859 DEBUG nova.network.neutron [req-ade06d21-a8ba-44d2-8d65-9051a1ae8bdf req-2acbb262-49bf-49ef-b902-910eaa03f9a2 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b4c1468d-9914-426a-9464-c1167de53632] Updating instance_info_cache with network_info: [{"id": "7c572239-9b2e-493c-8be5-632f27cc634a", "address": "fa:16:3e:d9:6a:1f", "network": {"id": "58d966e1-4d26-414a-920e-0be2d77abb59", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-1896990059-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "107c1f3b5b7b413d9a389ca1166e331f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7c572239-9b", "ovs_interfaceid": "7c572239-9b2e-493c-8be5-632f27cc634a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:59:47 compute-1 nova_compute[225855]: 2026-01-20 14:59:47.377 225859 DEBUG oslo_concurrency.lockutils [req-ade06d21-a8ba-44d2-8d65-9051a1ae8bdf req-2acbb262-49bf-49ef-b902-910eaa03f9a2 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-b4c1468d-9914-426a-9464-c1167de53632" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:59:47 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e321 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:59:47 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:59:47 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 14:59:47 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:59:47.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 14:59:47 compute-1 nova_compute[225855]: 2026-01-20 14:59:47.897 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:59:48 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:59:48 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:59:48 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:59:48.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:59:49 compute-1 ceph-mon[81775]: pgmap v2232: 321 pgs: 321 active+clean; 416 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 31 KiB/s rd, 1.7 MiB/s wr, 47 op/s
Jan 20 14:59:49 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/937710557' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:59:49 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:59:49 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:59:49 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:59:49.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:59:49 compute-1 nova_compute[225855]: 2026-01-20 14:59:49.996 225859 DEBUG nova.network.neutron [req-dfb5baab-cb45-49f0-ab4a-92873c87e12d req-cd87dfdc-674d-4cde-932a-7f89d3c335b1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 1ebdefed-0903-4d72-b78d-912666c5ce61] Updated VIF entry in instance network info cache for port 3067803c-07f3-4a15-a5ee-47f9a770efca. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 14:59:49 compute-1 nova_compute[225855]: 2026-01-20 14:59:49.997 225859 DEBUG nova.network.neutron [req-dfb5baab-cb45-49f0-ab4a-92873c87e12d req-cd87dfdc-674d-4cde-932a-7f89d3c335b1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 1ebdefed-0903-4d72-b78d-912666c5ce61] Updating instance_info_cache with network_info: [{"id": "3067803c-07f3-4a15-a5ee-47f9a770efca", "address": "fa:16:3e:cd:b7:b1", "network": {"id": "58d966e1-4d26-414a-920e-0be2d77abb59", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-1896990059-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "107c1f3b5b7b413d9a389ca1166e331f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3067803c-07", "ovs_interfaceid": "3067803c-07f3-4a15-a5ee-47f9a770efca", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:59:50 compute-1 nova_compute[225855]: 2026-01-20 14:59:50.016 225859 DEBUG oslo_concurrency.lockutils [req-dfb5baab-cb45-49f0-ab4a-92873c87e12d req-cd87dfdc-674d-4cde-932a-7f89d3c335b1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-1ebdefed-0903-4d72-b78d-912666c5ce61" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:59:50 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:59:50.020 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5ffd4ac3-9266-4927-98ad-20a17782c725, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '44'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:59:50 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/1559493939' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:59:50 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:59:50 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:59:50 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:59:50.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:59:50 compute-1 nova_compute[225855]: 2026-01-20 14:59:50.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:59:50 compute-1 nova_compute[225855]: 2026-01-20 14:59:50.372 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:59:50 compute-1 nova_compute[225855]: 2026-01-20 14:59:50.372 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:59:50 compute-1 nova_compute[225855]: 2026-01-20 14:59:50.372 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:59:50 compute-1 nova_compute[225855]: 2026-01-20 14:59:50.372 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 14:59:50 compute-1 nova_compute[225855]: 2026-01-20 14:59:50.373 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:59:50 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:59:50 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1363363369' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:59:50 compute-1 nova_compute[225855]: 2026-01-20 14:59:50.839 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:59:50 compute-1 nova_compute[225855]: 2026-01-20 14:59:50.896 225859 DEBUG nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] skipping disk for instance-0000008c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 14:59:50 compute-1 nova_compute[225855]: 2026-01-20 14:59:50.897 225859 DEBUG nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] skipping disk for instance-0000008c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 14:59:50 compute-1 nova_compute[225855]: 2026-01-20 14:59:50.900 225859 DEBUG nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] skipping disk for instance-0000008b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 14:59:50 compute-1 nova_compute[225855]: 2026-01-20 14:59:50.900 225859 DEBUG nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] skipping disk for instance-0000008b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 14:59:51 compute-1 podman[282670]: 2026-01-20 14:59:51.042544575 +0000 UTC m=+0.084968938 container health_status 72be95c12e041eb2cd1a16ad70dfa1391d9f03a5f3180a614ead9459dcfcd46f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible)
Jan 20 14:59:51 compute-1 nova_compute[225855]: 2026-01-20 14:59:51.070 225859 WARNING nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 14:59:51 compute-1 nova_compute[225855]: 2026-01-20 14:59:51.071 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=3943MB free_disk=20.92159652709961GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 14:59:51 compute-1 nova_compute[225855]: 2026-01-20 14:59:51.072 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:59:51 compute-1 nova_compute[225855]: 2026-01-20 14:59:51.072 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:59:51 compute-1 nova_compute[225855]: 2026-01-20 14:59:51.176 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Instance 1ebdefed-0903-4d72-b78d-912666c5ce61 actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 20 14:59:51 compute-1 nova_compute[225855]: 2026-01-20 14:59:51.177 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Instance b4c1468d-9914-426a-9464-c1167de53632 actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 20 14:59:51 compute-1 nova_compute[225855]: 2026-01-20 14:59:51.177 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 14:59:51 compute-1 nova_compute[225855]: 2026-01-20 14:59:51.177 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 14:59:51 compute-1 nova_compute[225855]: 2026-01-20 14:59:51.238 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:59:51 compute-1 ceph-mon[81775]: pgmap v2233: 321 pgs: 321 active+clean; 425 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 33 KiB/s rd, 1.9 MiB/s wr, 50 op/s
Jan 20 14:59:51 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/2081012519' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:59:51 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/1363363369' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:59:51 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/1540140882' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:59:51 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:59:51 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2238468495' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:59:51 compute-1 nova_compute[225855]: 2026-01-20 14:59:51.682 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:59:51 compute-1 nova_compute[225855]: 2026-01-20 14:59:51.688 225859 DEBUG nova.compute.provider_tree [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:59:51 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:59:51 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:59:51 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:59:51.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:59:51 compute-1 nova_compute[225855]: 2026-01-20 14:59:51.704 225859 DEBUG nova.scheduler.client.report [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:59:51 compute-1 nova_compute[225855]: 2026-01-20 14:59:51.706 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 14:59:51 compute-1 nova_compute[225855]: 2026-01-20 14:59:51.706 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.634s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:59:51 compute-1 nova_compute[225855]: 2026-01-20 14:59:51.776 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:59:52 compute-1 nova_compute[225855]: 2026-01-20 14:59:52.168 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:59:52 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:59:52 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:59:52 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:59:52.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:59:52 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/4015297812' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:59:52 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/2238468495' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:59:52 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/2401738277' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:59:52 compute-1 ceph-mon[81775]: pgmap v2234: 321 pgs: 321 active+clean; 425 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 28 KiB/s rd, 1.8 MiB/s wr, 42 op/s
Jan 20 14:59:52 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e321 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:59:52 compute-1 nova_compute[225855]: 2026-01-20 14:59:52.739 225859 DEBUG oslo_concurrency.lockutils [None req-f281c6c5-7161-4cc0-ba81-18ff8176d767 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] Acquiring lock "b4c1468d-9914-426a-9464-c1167de53632" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:59:52 compute-1 nova_compute[225855]: 2026-01-20 14:59:52.740 225859 DEBUG oslo_concurrency.lockutils [None req-f281c6c5-7161-4cc0-ba81-18ff8176d767 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] Lock "b4c1468d-9914-426a-9464-c1167de53632" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:59:52 compute-1 nova_compute[225855]: 2026-01-20 14:59:52.741 225859 DEBUG oslo_concurrency.lockutils [None req-f281c6c5-7161-4cc0-ba81-18ff8176d767 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] Acquiring lock "b4c1468d-9914-426a-9464-c1167de53632-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:59:52 compute-1 nova_compute[225855]: 2026-01-20 14:59:52.741 225859 DEBUG oslo_concurrency.lockutils [None req-f281c6c5-7161-4cc0-ba81-18ff8176d767 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] Lock "b4c1468d-9914-426a-9464-c1167de53632-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:59:52 compute-1 nova_compute[225855]: 2026-01-20 14:59:52.741 225859 DEBUG oslo_concurrency.lockutils [None req-f281c6c5-7161-4cc0-ba81-18ff8176d767 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] Lock "b4c1468d-9914-426a-9464-c1167de53632-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:59:52 compute-1 nova_compute[225855]: 2026-01-20 14:59:52.743 225859 INFO nova.compute.manager [None req-f281c6c5-7161-4cc0-ba81-18ff8176d767 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] [instance: b4c1468d-9914-426a-9464-c1167de53632] Terminating instance
Jan 20 14:59:52 compute-1 nova_compute[225855]: 2026-01-20 14:59:52.744 225859 DEBUG nova.compute.manager [None req-f281c6c5-7161-4cc0-ba81-18ff8176d767 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] [instance: b4c1468d-9914-426a-9464-c1167de53632] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 20 14:59:52 compute-1 kernel: tap7c572239-9b (unregistering): left promiscuous mode
Jan 20 14:59:52 compute-1 nova_compute[225855]: 2026-01-20 14:59:52.899 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:59:52 compute-1 NetworkManager[49104]: <info>  [1768921192.9029] device (tap7c572239-9b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 20 14:59:52 compute-1 nova_compute[225855]: 2026-01-20 14:59:52.914 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:59:52 compute-1 ovn_controller[130490]: 2026-01-20T14:59:52Z|00573|binding|INFO|Releasing lport 7c572239-9b2e-493c-8be5-632f27cc634a from this chassis (sb_readonly=0)
Jan 20 14:59:52 compute-1 ovn_controller[130490]: 2026-01-20T14:59:52Z|00574|binding|INFO|Setting lport 7c572239-9b2e-493c-8be5-632f27cc634a down in Southbound
Jan 20 14:59:52 compute-1 ovn_controller[130490]: 2026-01-20T14:59:52Z|00575|binding|INFO|Removing iface tap7c572239-9b ovn-installed in OVS
Jan 20 14:59:52 compute-1 nova_compute[225855]: 2026-01-20 14:59:52.931 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:59:52 compute-1 systemd[1]: machine-qemu\x2d67\x2dinstance\x2d0000008c.scope: Deactivated successfully.
Jan 20 14:59:52 compute-1 systemd[1]: machine-qemu\x2d67\x2dinstance\x2d0000008c.scope: Consumed 19.864s CPU time.
Jan 20 14:59:52 compute-1 systemd-machined[194361]: Machine qemu-67-instance-0000008c terminated.
Jan 20 14:59:52 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:59:52.975 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d9:6a:1f 10.100.0.9'], port_security=['fa:16:3e:d9:6a:1f 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'b4c1468d-9914-426a-9464-c1167de53632', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-58d966e1-4d26-414a-920e-0be2d77abb59', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '107c1f3b5b7b413d9a389ca1166e331f', 'neutron:revision_number': '6', 'neutron:security_group_ids': '207accdf-2d5c-48e9-bf02-5dfcc7d28063', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9a2edf59-0338-43ad-aa77-d6a806c781a6, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], logical_port=7c572239-9b2e-493c-8be5-632f27cc634a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:59:52 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:59:52.976 140354 INFO neutron.agent.ovn.metadata.agent [-] Port 7c572239-9b2e-493c-8be5-632f27cc634a in datapath 58d966e1-4d26-414a-920e-0be2d77abb59 unbound from our chassis
Jan 20 14:59:52 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:59:52.977 140354 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 58d966e1-4d26-414a-920e-0be2d77abb59
Jan 20 14:59:52 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:59:52.995 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[61dc4c9f-dd68-4779-9483-b81d4df41da8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:59:53 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:59:53.026 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[aaee87f1-5c69-4301-96fa-c636ec34d97a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:59:53 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:59:53.029 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[f61595b8-b66d-4005-a9d2-7eb9bbcc6333]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:59:53 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:59:53.063 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[29cf5371-cb52-49a4-aa4a-dc4147115994]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:59:53 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:59:53.081 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[466a40d4-68e4-4bd1-80e8-122745cb153a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap58d966e1-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c9:c8:2a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 8, 'rx_bytes': 700, 'tx_bytes': 528, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 8, 'rx_bytes': 700, 'tx_bytes': 528, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 159], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 610366, 'reachable_time': 35351, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 282731, 'error': None, 'target': 'ovnmeta-58d966e1-4d26-414a-920e-0be2d77abb59', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:59:53 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:59:53.098 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[93b192bd-bcfa-4629-a5c7-a73fb5eb3a20]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap58d966e1-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 610378, 'tstamp': 610378}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 282732, 'error': None, 'target': 'ovnmeta-58d966e1-4d26-414a-920e-0be2d77abb59', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap58d966e1-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 610381, 'tstamp': 610381}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 282732, 'error': None, 'target': 'ovnmeta-58d966e1-4d26-414a-920e-0be2d77abb59', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:59:53 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:59:53.100 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap58d966e1-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:59:53 compute-1 nova_compute[225855]: 2026-01-20 14:59:53.102 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:59:53 compute-1 nova_compute[225855]: 2026-01-20 14:59:53.106 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:59:53 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:59:53.107 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap58d966e1-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:59:53 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:59:53.107 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:59:53 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:59:53.108 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap58d966e1-40, col_values=(('external_ids', {'iface-id': '1623097d-35b0-4d71-9dc2-c4d659492102'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:59:53 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:59:53.108 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:59:53 compute-1 nova_compute[225855]: 2026-01-20 14:59:53.178 225859 INFO nova.virt.libvirt.driver [-] [instance: b4c1468d-9914-426a-9464-c1167de53632] Instance destroyed successfully.
Jan 20 14:59:53 compute-1 nova_compute[225855]: 2026-01-20 14:59:53.178 225859 DEBUG nova.objects.instance [None req-f281c6c5-7161-4cc0-ba81-18ff8176d767 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] Lazy-loading 'resources' on Instance uuid b4c1468d-9914-426a-9464-c1167de53632 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:59:53 compute-1 nova_compute[225855]: 2026-01-20 14:59:53.192 225859 DEBUG nova.virt.libvirt.vif [None req-f281c6c5-7161-4cc0-ba81-18ff8176d767 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T14:57:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-TestInstancesWithCinderVolumes-server-65714861',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testinstanceswithcindervolumes-server-65714861',id=140,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGBbRGX2xZT3D1ftqdKpZwTwb/ukXbRv/O5UyYYLjii3gk46qsw4SNMi6p0GpNIY5l/f9OSIg9UlRsUFQqLszBoQT2vJic2iOBlI6VLyxyg71obcHOZQEGpjfcTfqUsJeQ==',key_name='tempest-TestInstancesWithCinderVolumes-1812188149',keypairs=<?>,launch_index=0,launched_at=2026-01-20T14:57:51Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='107c1f3b5b7b413d9a389ca1166e331f',ramdisk_id='',reservation_id='r-8l7rw241',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',imag
e_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TestInstancesWithCinderVolumes-1174033615',owner_user_name='tempest-TestInstancesWithCinderVolumes-1174033615-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T14:57:51Z,user_data=None,user_id='ed2c9bd268d1491fa3484d86bcdb9ec6',uuid=b4c1468d-9914-426a-9464-c1167de53632,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "7c572239-9b2e-493c-8be5-632f27cc634a", "address": "fa:16:3e:d9:6a:1f", "network": {"id": "58d966e1-4d26-414a-920e-0be2d77abb59", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-1896990059-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "107c1f3b5b7b413d9a389ca1166e331f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7c572239-9b", "ovs_interfaceid": "7c572239-9b2e-493c-8be5-632f27cc634a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 20 14:59:53 compute-1 nova_compute[225855]: 2026-01-20 14:59:53.193 225859 DEBUG nova.network.os_vif_util [None req-f281c6c5-7161-4cc0-ba81-18ff8176d767 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] Converting VIF {"id": "7c572239-9b2e-493c-8be5-632f27cc634a", "address": "fa:16:3e:d9:6a:1f", "network": {"id": "58d966e1-4d26-414a-920e-0be2d77abb59", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-1896990059-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "107c1f3b5b7b413d9a389ca1166e331f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7c572239-9b", "ovs_interfaceid": "7c572239-9b2e-493c-8be5-632f27cc634a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:59:53 compute-1 nova_compute[225855]: 2026-01-20 14:59:53.194 225859 DEBUG nova.network.os_vif_util [None req-f281c6c5-7161-4cc0-ba81-18ff8176d767 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:d9:6a:1f,bridge_name='br-int',has_traffic_filtering=True,id=7c572239-9b2e-493c-8be5-632f27cc634a,network=Network(58d966e1-4d26-414a-920e-0be2d77abb59),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7c572239-9b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:59:53 compute-1 nova_compute[225855]: 2026-01-20 14:59:53.195 225859 DEBUG os_vif [None req-f281c6c5-7161-4cc0-ba81-18ff8176d767 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:d9:6a:1f,bridge_name='br-int',has_traffic_filtering=True,id=7c572239-9b2e-493c-8be5-632f27cc634a,network=Network(58d966e1-4d26-414a-920e-0be2d77abb59),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7c572239-9b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 20 14:59:53 compute-1 nova_compute[225855]: 2026-01-20 14:59:53.197 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:59:53 compute-1 nova_compute[225855]: 2026-01-20 14:59:53.197 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7c572239-9b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:59:53 compute-1 nova_compute[225855]: 2026-01-20 14:59:53.199 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:59:53 compute-1 nova_compute[225855]: 2026-01-20 14:59:53.201 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:59:53 compute-1 nova_compute[225855]: 2026-01-20 14:59:53.205 225859 INFO os_vif [None req-f281c6c5-7161-4cc0-ba81-18ff8176d767 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:d9:6a:1f,bridge_name='br-int',has_traffic_filtering=True,id=7c572239-9b2e-493c-8be5-632f27cc634a,network=Network(58d966e1-4d26-414a-920e-0be2d77abb59),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7c572239-9b')
Jan 20 14:59:53 compute-1 nova_compute[225855]: 2026-01-20 14:59:53.414 225859 INFO nova.virt.libvirt.driver [None req-f281c6c5-7161-4cc0-ba81-18ff8176d767 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] [instance: b4c1468d-9914-426a-9464-c1167de53632] Deleting instance files /var/lib/nova/instances/b4c1468d-9914-426a-9464-c1167de53632_del
Jan 20 14:59:53 compute-1 nova_compute[225855]: 2026-01-20 14:59:53.415 225859 INFO nova.virt.libvirt.driver [None req-f281c6c5-7161-4cc0-ba81-18ff8176d767 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] [instance: b4c1468d-9914-426a-9464-c1167de53632] Deletion of /var/lib/nova/instances/b4c1468d-9914-426a-9464-c1167de53632_del complete
Jan 20 14:59:53 compute-1 sudo[282764]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:59:53 compute-1 sudo[282764]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:59:53 compute-1 sudo[282764]: pam_unix(sudo:session): session closed for user root
Jan 20 14:59:53 compute-1 sudo[282789]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:59:53 compute-1 sudo[282789]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:59:53 compute-1 sudo[282789]: pam_unix(sudo:session): session closed for user root
Jan 20 14:59:53 compute-1 nova_compute[225855]: 2026-01-20 14:59:53.527 225859 INFO nova.compute.manager [None req-f281c6c5-7161-4cc0-ba81-18ff8176d767 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] [instance: b4c1468d-9914-426a-9464-c1167de53632] Took 0.78 seconds to destroy the instance on the hypervisor.
Jan 20 14:59:53 compute-1 nova_compute[225855]: 2026-01-20 14:59:53.528 225859 DEBUG oslo.service.loopingcall [None req-f281c6c5-7161-4cc0-ba81-18ff8176d767 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 20 14:59:53 compute-1 nova_compute[225855]: 2026-01-20 14:59:53.528 225859 DEBUG nova.compute.manager [-] [instance: b4c1468d-9914-426a-9464-c1167de53632] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 20 14:59:53 compute-1 nova_compute[225855]: 2026-01-20 14:59:53.528 225859 DEBUG nova.network.neutron [-] [instance: b4c1468d-9914-426a-9464-c1167de53632] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 20 14:59:53 compute-1 sudo[282814]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:59:53 compute-1 sudo[282814]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:59:53 compute-1 sudo[282814]: pam_unix(sudo:session): session closed for user root
Jan 20 14:59:53 compute-1 sudo[282839]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Jan 20 14:59:53 compute-1 sudo[282839]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:59:53 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:59:53 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:59:53 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:59:53.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:59:53 compute-1 nova_compute[225855]: 2026-01-20 14:59:53.701 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:59:53 compute-1 nova_compute[225855]: 2026-01-20 14:59:53.702 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:59:53 compute-1 sudo[282839]: pam_unix(sudo:session): session closed for user root
Jan 20 14:59:53 compute-1 sudo[282885]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:59:53 compute-1 sudo[282885]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:59:53 compute-1 sudo[282885]: pam_unix(sudo:session): session closed for user root
Jan 20 14:59:53 compute-1 nova_compute[225855]: 2026-01-20 14:59:53.967 225859 DEBUG nova.compute.manager [req-b7e5455a-e464-4068-9a1a-67557e9ff05c req-d88290a0-b8f0-4294-b214-02717eb37506 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b4c1468d-9914-426a-9464-c1167de53632] Received event network-vif-unplugged-7c572239-9b2e-493c-8be5-632f27cc634a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:59:53 compute-1 nova_compute[225855]: 2026-01-20 14:59:53.967 225859 DEBUG oslo_concurrency.lockutils [req-b7e5455a-e464-4068-9a1a-67557e9ff05c req-d88290a0-b8f0-4294-b214-02717eb37506 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "b4c1468d-9914-426a-9464-c1167de53632-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:59:53 compute-1 nova_compute[225855]: 2026-01-20 14:59:53.968 225859 DEBUG oslo_concurrency.lockutils [req-b7e5455a-e464-4068-9a1a-67557e9ff05c req-d88290a0-b8f0-4294-b214-02717eb37506 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "b4c1468d-9914-426a-9464-c1167de53632-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:59:53 compute-1 nova_compute[225855]: 2026-01-20 14:59:53.968 225859 DEBUG oslo_concurrency.lockutils [req-b7e5455a-e464-4068-9a1a-67557e9ff05c req-d88290a0-b8f0-4294-b214-02717eb37506 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "b4c1468d-9914-426a-9464-c1167de53632-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:59:53 compute-1 nova_compute[225855]: 2026-01-20 14:59:53.968 225859 DEBUG nova.compute.manager [req-b7e5455a-e464-4068-9a1a-67557e9ff05c req-d88290a0-b8f0-4294-b214-02717eb37506 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b4c1468d-9914-426a-9464-c1167de53632] No waiting events found dispatching network-vif-unplugged-7c572239-9b2e-493c-8be5-632f27cc634a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:59:53 compute-1 nova_compute[225855]: 2026-01-20 14:59:53.968 225859 DEBUG nova.compute.manager [req-b7e5455a-e464-4068-9a1a-67557e9ff05c req-d88290a0-b8f0-4294-b214-02717eb37506 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b4c1468d-9914-426a-9464-c1167de53632] Received event network-vif-unplugged-7c572239-9b2e-493c-8be5-632f27cc634a for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 20 14:59:53 compute-1 nova_compute[225855]: 2026-01-20 14:59:53.969 225859 DEBUG nova.compute.manager [req-b7e5455a-e464-4068-9a1a-67557e9ff05c req-d88290a0-b8f0-4294-b214-02717eb37506 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b4c1468d-9914-426a-9464-c1167de53632] Received event network-vif-plugged-7c572239-9b2e-493c-8be5-632f27cc634a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:59:53 compute-1 nova_compute[225855]: 2026-01-20 14:59:53.969 225859 DEBUG oslo_concurrency.lockutils [req-b7e5455a-e464-4068-9a1a-67557e9ff05c req-d88290a0-b8f0-4294-b214-02717eb37506 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "b4c1468d-9914-426a-9464-c1167de53632-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:59:53 compute-1 nova_compute[225855]: 2026-01-20 14:59:53.969 225859 DEBUG oslo_concurrency.lockutils [req-b7e5455a-e464-4068-9a1a-67557e9ff05c req-d88290a0-b8f0-4294-b214-02717eb37506 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "b4c1468d-9914-426a-9464-c1167de53632-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:59:53 compute-1 nova_compute[225855]: 2026-01-20 14:59:53.969 225859 DEBUG oslo_concurrency.lockutils [req-b7e5455a-e464-4068-9a1a-67557e9ff05c req-d88290a0-b8f0-4294-b214-02717eb37506 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "b4c1468d-9914-426a-9464-c1167de53632-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:59:53 compute-1 nova_compute[225855]: 2026-01-20 14:59:53.969 225859 DEBUG nova.compute.manager [req-b7e5455a-e464-4068-9a1a-67557e9ff05c req-d88290a0-b8f0-4294-b214-02717eb37506 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b4c1468d-9914-426a-9464-c1167de53632] No waiting events found dispatching network-vif-plugged-7c572239-9b2e-493c-8be5-632f27cc634a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:59:53 compute-1 nova_compute[225855]: 2026-01-20 14:59:53.970 225859 WARNING nova.compute.manager [req-b7e5455a-e464-4068-9a1a-67557e9ff05c req-d88290a0-b8f0-4294-b214-02717eb37506 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b4c1468d-9914-426a-9464-c1167de53632] Received unexpected event network-vif-plugged-7c572239-9b2e-493c-8be5-632f27cc634a for instance with vm_state active and task_state deleting.
Jan 20 14:59:53 compute-1 sudo[282910]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:59:54 compute-1 sudo[282910]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:59:54 compute-1 sudo[282910]: pam_unix(sudo:session): session closed for user root
Jan 20 14:59:54 compute-1 sudo[282935]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:59:54 compute-1 sudo[282935]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:59:54 compute-1 sudo[282935]: pam_unix(sudo:session): session closed for user root
Jan 20 14:59:54 compute-1 sudo[282960]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 20 14:59:54 compute-1 sudo[282960]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:59:54 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:59:54 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:59:54 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:59:54.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:59:54 compute-1 nova_compute[225855]: 2026-01-20 14:59:54.292 225859 DEBUG nova.network.neutron [-] [instance: b4c1468d-9914-426a-9464-c1167de53632] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:59:54 compute-1 nova_compute[225855]: 2026-01-20 14:59:54.339 225859 INFO nova.compute.manager [-] [instance: b4c1468d-9914-426a-9464-c1167de53632] Took 0.81 seconds to deallocate network for instance.
Jan 20 14:59:54 compute-1 sudo[282999]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:59:54 compute-1 sudo[282999]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:59:54 compute-1 sudo[282999]: pam_unix(sudo:session): session closed for user root
Jan 20 14:59:54 compute-1 sudo[283027]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:59:54 compute-1 sudo[283027]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:59:54 compute-1 sudo[283027]: pam_unix(sudo:session): session closed for user root
Jan 20 14:59:54 compute-1 nova_compute[225855]: 2026-01-20 14:59:54.569 225859 INFO nova.compute.manager [None req-f281c6c5-7161-4cc0-ba81-18ff8176d767 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] [instance: b4c1468d-9914-426a-9464-c1167de53632] Took 0.23 seconds to detach 1 volumes for instance.
Jan 20 14:59:54 compute-1 sudo[282960]: pam_unix(sudo:session): session closed for user root
Jan 20 14:59:54 compute-1 nova_compute[225855]: 2026-01-20 14:59:54.615 225859 DEBUG oslo_concurrency.lockutils [None req-f281c6c5-7161-4cc0-ba81-18ff8176d767 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:59:54 compute-1 nova_compute[225855]: 2026-01-20 14:59:54.616 225859 DEBUG oslo_concurrency.lockutils [None req-f281c6c5-7161-4cc0-ba81-18ff8176d767 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:59:54 compute-1 nova_compute[225855]: 2026-01-20 14:59:54.685 225859 DEBUG oslo_concurrency.processutils [None req-f281c6c5-7161-4cc0-ba81-18ff8176d767 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:59:54 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:59:54 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:59:54 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:59:54 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:59:54 compute-1 ceph-mon[81775]: pgmap v2235: 321 pgs: 321 active+clean; 425 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 31 op/s
Jan 20 14:59:54 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 20 14:59:54 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 20 14:59:55 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:59:55 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2354938243' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:59:55 compute-1 nova_compute[225855]: 2026-01-20 14:59:55.152 225859 DEBUG oslo_concurrency.processutils [None req-f281c6c5-7161-4cc0-ba81-18ff8176d767 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:59:55 compute-1 nova_compute[225855]: 2026-01-20 14:59:55.159 225859 DEBUG nova.compute.provider_tree [None req-f281c6c5-7161-4cc0-ba81-18ff8176d767 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:59:55 compute-1 nova_compute[225855]: 2026-01-20 14:59:55.668 225859 DEBUG nova.scheduler.client.report [None req-f281c6c5-7161-4cc0-ba81-18ff8176d767 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:59:55 compute-1 nova_compute[225855]: 2026-01-20 14:59:55.693 225859 DEBUG oslo_concurrency.lockutils [None req-f281c6c5-7161-4cc0-ba81-18ff8176d767 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.077s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:59:55 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:59:55 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:59:55 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:59:55.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:59:55 compute-1 nova_compute[225855]: 2026-01-20 14:59:55.762 225859 INFO nova.scheduler.client.report [None req-f281c6c5-7161-4cc0-ba81-18ff8176d767 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] Deleted allocations for instance b4c1468d-9914-426a-9464-c1167de53632
Jan 20 14:59:55 compute-1 nova_compute[225855]: 2026-01-20 14:59:55.840 225859 DEBUG oslo_concurrency.lockutils [None req-f281c6c5-7161-4cc0-ba81-18ff8176d767 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] Lock "b4c1468d-9914-426a-9464-c1167de53632" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.100s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:59:55 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/2354938243' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:59:56 compute-1 nova_compute[225855]: 2026-01-20 14:59:56.070 225859 DEBUG nova.compute.manager [req-faa8b653-d449-4ee6-956f-51f6df16183e req-a0f2553b-215e-4709-9226-9fb94f0a4751 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b4c1468d-9914-426a-9464-c1167de53632] Received event network-vif-deleted-7c572239-9b2e-493c-8be5-632f27cc634a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:59:56 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:59:56 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:59:56 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:59:56.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:59:56 compute-1 nova_compute[225855]: 2026-01-20 14:59:56.777 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:59:57 compute-1 nova_compute[225855]: 2026-01-20 14:59:57.202 225859 DEBUG oslo_concurrency.lockutils [None req-ef91f40a-4b99-482f-a429-c9ca6ccd5151 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] Acquiring lock "1ebdefed-0903-4d72-b78d-912666c5ce61" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:59:57 compute-1 nova_compute[225855]: 2026-01-20 14:59:57.202 225859 DEBUG oslo_concurrency.lockutils [None req-ef91f40a-4b99-482f-a429-c9ca6ccd5151 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] Lock "1ebdefed-0903-4d72-b78d-912666c5ce61" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:59:57 compute-1 nova_compute[225855]: 2026-01-20 14:59:57.203 225859 DEBUG oslo_concurrency.lockutils [None req-ef91f40a-4b99-482f-a429-c9ca6ccd5151 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] Acquiring lock "1ebdefed-0903-4d72-b78d-912666c5ce61-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:59:57 compute-1 nova_compute[225855]: 2026-01-20 14:59:57.203 225859 DEBUG oslo_concurrency.lockutils [None req-ef91f40a-4b99-482f-a429-c9ca6ccd5151 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] Lock "1ebdefed-0903-4d72-b78d-912666c5ce61-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:59:57 compute-1 nova_compute[225855]: 2026-01-20 14:59:57.203 225859 DEBUG oslo_concurrency.lockutils [None req-ef91f40a-4b99-482f-a429-c9ca6ccd5151 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] Lock "1ebdefed-0903-4d72-b78d-912666c5ce61-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:59:57 compute-1 nova_compute[225855]: 2026-01-20 14:59:57.204 225859 INFO nova.compute.manager [None req-ef91f40a-4b99-482f-a429-c9ca6ccd5151 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] [instance: 1ebdefed-0903-4d72-b78d-912666c5ce61] Terminating instance
Jan 20 14:59:57 compute-1 nova_compute[225855]: 2026-01-20 14:59:57.205 225859 DEBUG nova.compute.manager [None req-ef91f40a-4b99-482f-a429-c9ca6ccd5151 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] [instance: 1ebdefed-0903-4d72-b78d-912666c5ce61] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 20 14:59:57 compute-1 kernel: tap3067803c-07 (unregistering): left promiscuous mode
Jan 20 14:59:57 compute-1 NetworkManager[49104]: <info>  [1768921197.2598] device (tap3067803c-07): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 20 14:59:57 compute-1 ovn_controller[130490]: 2026-01-20T14:59:57Z|00576|binding|INFO|Releasing lport 3067803c-07f3-4a15-a5ee-47f9a770efca from this chassis (sb_readonly=0)
Jan 20 14:59:57 compute-1 nova_compute[225855]: 2026-01-20 14:59:57.268 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:59:57 compute-1 ovn_controller[130490]: 2026-01-20T14:59:57Z|00577|binding|INFO|Setting lport 3067803c-07f3-4a15-a5ee-47f9a770efca down in Southbound
Jan 20 14:59:57 compute-1 ovn_controller[130490]: 2026-01-20T14:59:57Z|00578|binding|INFO|Removing iface tap3067803c-07 ovn-installed in OVS
Jan 20 14:59:57 compute-1 nova_compute[225855]: 2026-01-20 14:59:57.271 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:59:57 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:59:57.276 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cd:b7:b1 10.100.0.10'], port_security=['fa:16:3e:cd:b7:b1 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '1ebdefed-0903-4d72-b78d-912666c5ce61', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-58d966e1-4d26-414a-920e-0be2d77abb59', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '107c1f3b5b7b413d9a389ca1166e331f', 'neutron:revision_number': '6', 'neutron:security_group_ids': '207accdf-2d5c-48e9-bf02-5dfcc7d28063', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9a2edf59-0338-43ad-aa77-d6a806c781a6, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], logical_port=3067803c-07f3-4a15-a5ee-47f9a770efca) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:59:57 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:59:57.278 140354 INFO neutron.agent.ovn.metadata.agent [-] Port 3067803c-07f3-4a15-a5ee-47f9a770efca in datapath 58d966e1-4d26-414a-920e-0be2d77abb59 unbound from our chassis
Jan 20 14:59:57 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:59:57.280 140354 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 58d966e1-4d26-414a-920e-0be2d77abb59, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 20 14:59:57 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:59:57.281 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[8ac2f8ae-617e-4341-b8d3-8330e01847a4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:59:57 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:59:57.282 140354 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-58d966e1-4d26-414a-920e-0be2d77abb59 namespace which is not needed anymore
Jan 20 14:59:57 compute-1 nova_compute[225855]: 2026-01-20 14:59:57.286 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:59:57 compute-1 ceph-mon[81775]: pgmap v2236: 321 pgs: 321 active+clean; 425 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 1.8 MiB/s wr, 102 op/s
Jan 20 14:59:57 compute-1 systemd[1]: machine-qemu\x2d66\x2dinstance\x2d0000008b.scope: Deactivated successfully.
Jan 20 14:59:57 compute-1 systemd[1]: machine-qemu\x2d66\x2dinstance\x2d0000008b.scope: Consumed 19.019s CPU time.
Jan 20 14:59:57 compute-1 systemd-machined[194361]: Machine qemu-66-instance-0000008b terminated.
Jan 20 14:59:57 compute-1 neutron-haproxy-ovnmeta-58d966e1-4d26-414a-920e-0be2d77abb59[280188]: [NOTICE]   (280192) : haproxy version is 2.8.14-c23fe91
Jan 20 14:59:57 compute-1 neutron-haproxy-ovnmeta-58d966e1-4d26-414a-920e-0be2d77abb59[280188]: [NOTICE]   (280192) : path to executable is /usr/sbin/haproxy
Jan 20 14:59:57 compute-1 neutron-haproxy-ovnmeta-58d966e1-4d26-414a-920e-0be2d77abb59[280188]: [WARNING]  (280192) : Exiting Master process...
Jan 20 14:59:57 compute-1 neutron-haproxy-ovnmeta-58d966e1-4d26-414a-920e-0be2d77abb59[280188]: [WARNING]  (280192) : Exiting Master process...
Jan 20 14:59:57 compute-1 neutron-haproxy-ovnmeta-58d966e1-4d26-414a-920e-0be2d77abb59[280188]: [ALERT]    (280192) : Current worker (280194) exited with code 143 (Terminated)
Jan 20 14:59:57 compute-1 neutron-haproxy-ovnmeta-58d966e1-4d26-414a-920e-0be2d77abb59[280188]: [WARNING]  (280192) : All workers exited. Exiting... (0)
Jan 20 14:59:57 compute-1 systemd[1]: libpod-f09eb4085a157e25ca55ec28f9deac3d8ef6af0026f2f7a138323503b9f81ddc.scope: Deactivated successfully.
Jan 20 14:59:57 compute-1 podman[283113]: 2026-01-20 14:59:57.427985928 +0000 UTC m=+0.050208738 container died f09eb4085a157e25ca55ec28f9deac3d8ef6af0026f2f7a138323503b9f81ddc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-58d966e1-4d26-414a-920e-0be2d77abb59, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Jan 20 14:59:57 compute-1 nova_compute[225855]: 2026-01-20 14:59:57.447 225859 INFO nova.virt.libvirt.driver [-] [instance: 1ebdefed-0903-4d72-b78d-912666c5ce61] Instance destroyed successfully.
Jan 20 14:59:57 compute-1 nova_compute[225855]: 2026-01-20 14:59:57.448 225859 DEBUG nova.objects.instance [None req-ef91f40a-4b99-482f-a429-c9ca6ccd5151 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] Lazy-loading 'resources' on Instance uuid 1ebdefed-0903-4d72-b78d-912666c5ce61 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:59:57 compute-1 nova_compute[225855]: 2026-01-20 14:59:57.465 225859 DEBUG nova.virt.libvirt.vif [None req-ef91f40a-4b99-482f-a429-c9ca6ccd5151 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T14:57:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-TestInstancesWithCinderVolumes-server-1983668831',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testinstanceswithcindervolumes-server-1983668831',id=139,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGBbRGX2xZT3D1ftqdKpZwTwb/ukXbRv/O5UyYYLjii3gk46qsw4SNMi6p0GpNIY5l/f9OSIg9UlRsUFQqLszBoQT2vJic2iOBlI6VLyxyg71obcHOZQEGpjfcTfqUsJeQ==',key_name='tempest-TestInstancesWithCinderVolumes-1812188149',keypairs=<?>,launch_index=0,launched_at=2026-01-20T14:57:45Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='107c1f3b5b7b413d9a389ca1166e331f',ramdisk_id='',reservation_id='r-c4vqjrp4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TestInstancesWithCinderVolumes-1174033615',owner_user_name='tempest-TestInstancesWithCinderVolumes-1174033615-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T14:57:45Z,user_data=None,user_id='ed2c9bd268d1491fa3484d86bcdb9ec6',uuid=1ebdefed-0903-4d72-b78d-912666c5ce61,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3067803c-07f3-4a15-a5ee-47f9a770efca", "address": "fa:16:3e:cd:b7:b1", "network": {"id": "58d966e1-4d26-414a-920e-0be2d77abb59", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-1896990059-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "107c1f3b5b7b413d9a389ca1166e331f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3067803c-07", "ovs_interfaceid": "3067803c-07f3-4a15-a5ee-47f9a770efca", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 20 14:59:57 compute-1 nova_compute[225855]: 2026-01-20 14:59:57.467 225859 DEBUG nova.network.os_vif_util [None req-ef91f40a-4b99-482f-a429-c9ca6ccd5151 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] Converting VIF {"id": "3067803c-07f3-4a15-a5ee-47f9a770efca", "address": "fa:16:3e:cd:b7:b1", "network": {"id": "58d966e1-4d26-414a-920e-0be2d77abb59", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-1896990059-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "107c1f3b5b7b413d9a389ca1166e331f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3067803c-07", "ovs_interfaceid": "3067803c-07f3-4a15-a5ee-47f9a770efca", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:59:57 compute-1 nova_compute[225855]: 2026-01-20 14:59:57.467 225859 DEBUG nova.network.os_vif_util [None req-ef91f40a-4b99-482f-a429-c9ca6ccd5151 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:cd:b7:b1,bridge_name='br-int',has_traffic_filtering=True,id=3067803c-07f3-4a15-a5ee-47f9a770efca,network=Network(58d966e1-4d26-414a-920e-0be2d77abb59),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3067803c-07') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:59:57 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f09eb4085a157e25ca55ec28f9deac3d8ef6af0026f2f7a138323503b9f81ddc-userdata-shm.mount: Deactivated successfully.
Jan 20 14:59:57 compute-1 nova_compute[225855]: 2026-01-20 14:59:57.472 225859 DEBUG os_vif [None req-ef91f40a-4b99-482f-a429-c9ca6ccd5151 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:cd:b7:b1,bridge_name='br-int',has_traffic_filtering=True,id=3067803c-07f3-4a15-a5ee-47f9a770efca,network=Network(58d966e1-4d26-414a-920e-0be2d77abb59),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3067803c-07') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 20 14:59:57 compute-1 systemd[1]: var-lib-containers-storage-overlay-ad0bfac44a2e71e205ef5911174c9794e1609c1289dfb400ba4190926919b056-merged.mount: Deactivated successfully.
Jan 20 14:59:57 compute-1 nova_compute[225855]: 2026-01-20 14:59:57.475 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:59:57 compute-1 nova_compute[225855]: 2026-01-20 14:59:57.475 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3067803c-07, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:59:57 compute-1 nova_compute[225855]: 2026-01-20 14:59:57.480 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 14:59:57 compute-1 nova_compute[225855]: 2026-01-20 14:59:57.482 225859 INFO os_vif [None req-ef91f40a-4b99-482f-a429-c9ca6ccd5151 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:cd:b7:b1,bridge_name='br-int',has_traffic_filtering=True,id=3067803c-07f3-4a15-a5ee-47f9a770efca,network=Network(58d966e1-4d26-414a-920e-0be2d77abb59),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3067803c-07')
Jan 20 14:59:57 compute-1 podman[283113]: 2026-01-20 14:59:57.487154897 +0000 UTC m=+0.109377687 container cleanup f09eb4085a157e25ca55ec28f9deac3d8ef6af0026f2f7a138323503b9f81ddc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-58d966e1-4d26-414a-920e-0be2d77abb59, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:59:57 compute-1 systemd[1]: libpod-conmon-f09eb4085a157e25ca55ec28f9deac3d8ef6af0026f2f7a138323503b9f81ddc.scope: Deactivated successfully.
Jan 20 14:59:57 compute-1 podman[283167]: 2026-01-20 14:59:57.56241252 +0000 UTC m=+0.052285216 container remove f09eb4085a157e25ca55ec28f9deac3d8ef6af0026f2f7a138323503b9f81ddc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-58d966e1-4d26-414a-920e-0be2d77abb59, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, io.buildah.version=1.41.3)
Jan 20 14:59:57 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:59:57.570 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[a5738345-d644-414a-91a2-b15530ef5cee]: (4, ('Tue Jan 20 02:59:57 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-58d966e1-4d26-414a-920e-0be2d77abb59 (f09eb4085a157e25ca55ec28f9deac3d8ef6af0026f2f7a138323503b9f81ddc)\nf09eb4085a157e25ca55ec28f9deac3d8ef6af0026f2f7a138323503b9f81ddc\nTue Jan 20 02:59:57 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-58d966e1-4d26-414a-920e-0be2d77abb59 (f09eb4085a157e25ca55ec28f9deac3d8ef6af0026f2f7a138323503b9f81ddc)\nf09eb4085a157e25ca55ec28f9deac3d8ef6af0026f2f7a138323503b9f81ddc\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:59:57 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:59:57.572 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[e35ad46c-9cb7-41a4-88a8-f847de456a30]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:59:57 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:59:57.573 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap58d966e1-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:59:57 compute-1 nova_compute[225855]: 2026-01-20 14:59:57.574 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:59:57 compute-1 kernel: tap58d966e1-40: left promiscuous mode
Jan 20 14:59:57 compute-1 nova_compute[225855]: 2026-01-20 14:59:57.595 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:59:57 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:59:57.597 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[78583c0a-b613-42f5-ae10-fcf720178eee]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:59:57 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:59:57.612 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[c885642f-1836-4dda-bc7b-3d6dbe1afd67]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:59:57 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:59:57.614 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[eae6b91d-e263-4bc3-bd77-2ea2616f8139]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:59:57 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:59:57.635 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[46a8cd80-fc1b-4a9c-8bc3-da38303b05ba]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 610359, 'reachable_time': 17831, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 283186, 'error': None, 'target': 'ovnmeta-58d966e1-4d26-414a-920e-0be2d77abb59', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:59:57 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:59:57.638 140466 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-58d966e1-4d26-414a-920e-0be2d77abb59 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 20 14:59:57 compute-1 ovn_metadata_agent[140349]: 2026-01-20 14:59:57.639 140466 DEBUG oslo.privsep.daemon [-] privsep: reply[dbdfd5d1-456e-42d7-83d4-39a54b000ca7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:59:57 compute-1 systemd[1]: run-netns-ovnmeta\x2d58d966e1\x2d4d26\x2d414a\x2d920e\x2d0be2d77abb59.mount: Deactivated successfully.
Jan 20 14:59:57 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e321 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:59:57 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:59:57 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:59:57 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:59:57.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:59:57 compute-1 nova_compute[225855]: 2026-01-20 14:59:57.717 225859 INFO nova.virt.libvirt.driver [None req-ef91f40a-4b99-482f-a429-c9ca6ccd5151 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] [instance: 1ebdefed-0903-4d72-b78d-912666c5ce61] Deleting instance files /var/lib/nova/instances/1ebdefed-0903-4d72-b78d-912666c5ce61_del
Jan 20 14:59:57 compute-1 nova_compute[225855]: 2026-01-20 14:59:57.718 225859 INFO nova.virt.libvirt.driver [None req-ef91f40a-4b99-482f-a429-c9ca6ccd5151 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] [instance: 1ebdefed-0903-4d72-b78d-912666c5ce61] Deletion of /var/lib/nova/instances/1ebdefed-0903-4d72-b78d-912666c5ce61_del complete
Jan 20 14:59:57 compute-1 nova_compute[225855]: 2026-01-20 14:59:57.765 225859 INFO nova.compute.manager [None req-ef91f40a-4b99-482f-a429-c9ca6ccd5151 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] [instance: 1ebdefed-0903-4d72-b78d-912666c5ce61] Took 0.56 seconds to destroy the instance on the hypervisor.
Jan 20 14:59:57 compute-1 nova_compute[225855]: 2026-01-20 14:59:57.766 225859 DEBUG oslo.service.loopingcall [None req-ef91f40a-4b99-482f-a429-c9ca6ccd5151 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 20 14:59:57 compute-1 nova_compute[225855]: 2026-01-20 14:59:57.766 225859 DEBUG nova.compute.manager [-] [instance: 1ebdefed-0903-4d72-b78d-912666c5ce61] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 20 14:59:57 compute-1 nova_compute[225855]: 2026-01-20 14:59:57.766 225859 DEBUG nova.network.neutron [-] [instance: 1ebdefed-0903-4d72-b78d-912666c5ce61] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 20 14:59:58 compute-1 nova_compute[225855]: 2026-01-20 14:59:58.177 225859 DEBUG nova.compute.manager [req-3061d5b3-2d7e-440f-a12c-b9cb0028654a req-7afca30d-572e-4290-ba3d-72108b581c02 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 1ebdefed-0903-4d72-b78d-912666c5ce61] Received event network-vif-unplugged-3067803c-07f3-4a15-a5ee-47f9a770efca external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:59:58 compute-1 nova_compute[225855]: 2026-01-20 14:59:58.177 225859 DEBUG oslo_concurrency.lockutils [req-3061d5b3-2d7e-440f-a12c-b9cb0028654a req-7afca30d-572e-4290-ba3d-72108b581c02 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "1ebdefed-0903-4d72-b78d-912666c5ce61-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:59:58 compute-1 nova_compute[225855]: 2026-01-20 14:59:58.177 225859 DEBUG oslo_concurrency.lockutils [req-3061d5b3-2d7e-440f-a12c-b9cb0028654a req-7afca30d-572e-4290-ba3d-72108b581c02 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "1ebdefed-0903-4d72-b78d-912666c5ce61-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:59:58 compute-1 nova_compute[225855]: 2026-01-20 14:59:58.177 225859 DEBUG oslo_concurrency.lockutils [req-3061d5b3-2d7e-440f-a12c-b9cb0028654a req-7afca30d-572e-4290-ba3d-72108b581c02 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "1ebdefed-0903-4d72-b78d-912666c5ce61-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:59:58 compute-1 nova_compute[225855]: 2026-01-20 14:59:58.178 225859 DEBUG nova.compute.manager [req-3061d5b3-2d7e-440f-a12c-b9cb0028654a req-7afca30d-572e-4290-ba3d-72108b581c02 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 1ebdefed-0903-4d72-b78d-912666c5ce61] No waiting events found dispatching network-vif-unplugged-3067803c-07f3-4a15-a5ee-47f9a770efca pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:59:58 compute-1 nova_compute[225855]: 2026-01-20 14:59:58.178 225859 DEBUG nova.compute.manager [req-3061d5b3-2d7e-440f-a12c-b9cb0028654a req-7afca30d-572e-4290-ba3d-72108b581c02 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 1ebdefed-0903-4d72-b78d-912666c5ce61] Received event network-vif-unplugged-3067803c-07f3-4a15-a5ee-47f9a770efca for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 20 14:59:58 compute-1 nova_compute[225855]: 2026-01-20 14:59:58.178 225859 DEBUG nova.compute.manager [req-3061d5b3-2d7e-440f-a12c-b9cb0028654a req-7afca30d-572e-4290-ba3d-72108b581c02 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 1ebdefed-0903-4d72-b78d-912666c5ce61] Received event network-vif-plugged-3067803c-07f3-4a15-a5ee-47f9a770efca external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:59:58 compute-1 nova_compute[225855]: 2026-01-20 14:59:58.178 225859 DEBUG oslo_concurrency.lockutils [req-3061d5b3-2d7e-440f-a12c-b9cb0028654a req-7afca30d-572e-4290-ba3d-72108b581c02 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "1ebdefed-0903-4d72-b78d-912666c5ce61-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:59:58 compute-1 nova_compute[225855]: 2026-01-20 14:59:58.179 225859 DEBUG oslo_concurrency.lockutils [req-3061d5b3-2d7e-440f-a12c-b9cb0028654a req-7afca30d-572e-4290-ba3d-72108b581c02 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "1ebdefed-0903-4d72-b78d-912666c5ce61-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:59:58 compute-1 nova_compute[225855]: 2026-01-20 14:59:58.179 225859 DEBUG oslo_concurrency.lockutils [req-3061d5b3-2d7e-440f-a12c-b9cb0028654a req-7afca30d-572e-4290-ba3d-72108b581c02 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "1ebdefed-0903-4d72-b78d-912666c5ce61-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:59:58 compute-1 nova_compute[225855]: 2026-01-20 14:59:58.179 225859 DEBUG nova.compute.manager [req-3061d5b3-2d7e-440f-a12c-b9cb0028654a req-7afca30d-572e-4290-ba3d-72108b581c02 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 1ebdefed-0903-4d72-b78d-912666c5ce61] No waiting events found dispatching network-vif-plugged-3067803c-07f3-4a15-a5ee-47f9a770efca pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:59:58 compute-1 nova_compute[225855]: 2026-01-20 14:59:58.179 225859 WARNING nova.compute.manager [req-3061d5b3-2d7e-440f-a12c-b9cb0028654a req-7afca30d-572e-4290-ba3d-72108b581c02 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 1ebdefed-0903-4d72-b78d-912666c5ce61] Received unexpected event network-vif-plugged-3067803c-07f3-4a15-a5ee-47f9a770efca for instance with vm_state active and task_state deleting.
Jan 20 14:59:58 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:59:58 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:59:58 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:59:58.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:59:58 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:59:58 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:59:58 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:59:58 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 14:59:58 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:59:58 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 14:59:58 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 14:59:58 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:59:58 compute-1 nova_compute[225855]: 2026-01-20 14:59:58.920 225859 DEBUG nova.network.neutron [-] [instance: 1ebdefed-0903-4d72-b78d-912666c5ce61] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:59:58 compute-1 nova_compute[225855]: 2026-01-20 14:59:58.944 225859 INFO nova.compute.manager [-] [instance: 1ebdefed-0903-4d72-b78d-912666c5ce61] Took 1.18 seconds to deallocate network for instance.
Jan 20 14:59:59 compute-1 nova_compute[225855]: 2026-01-20 14:59:59.165 225859 INFO nova.compute.manager [None req-ef91f40a-4b99-482f-a429-c9ca6ccd5151 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] [instance: 1ebdefed-0903-4d72-b78d-912666c5ce61] Took 0.22 seconds to detach 1 volumes for instance.
Jan 20 14:59:59 compute-1 nova_compute[225855]: 2026-01-20 14:59:59.220 225859 DEBUG oslo_concurrency.lockutils [None req-ef91f40a-4b99-482f-a429-c9ca6ccd5151 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:59:59 compute-1 nova_compute[225855]: 2026-01-20 14:59:59.221 225859 DEBUG oslo_concurrency.lockutils [None req-ef91f40a-4b99-482f-a429-c9ca6ccd5151 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:59:59 compute-1 nova_compute[225855]: 2026-01-20 14:59:59.305 225859 DEBUG oslo_concurrency.processutils [None req-ef91f40a-4b99-482f-a429-c9ca6ccd5151 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:59:59 compute-1 ceph-mon[81775]: pgmap v2237: 321 pgs: 321 active+clean; 425 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 317 KiB/s wr, 77 op/s
Jan 20 14:59:59 compute-1 nova_compute[225855]: 2026-01-20 14:59:59.460 225859 DEBUG nova.compute.manager [req-3d061aff-00c5-406c-a159-5683fff8ec28 req-0dede328-7eb2-474f-8907-5c1968c939c6 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 1ebdefed-0903-4d72-b78d-912666c5ce61] Received event network-vif-deleted-3067803c-07f3-4a15-a5ee-47f9a770efca external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:59:59 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 14:59:59 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:59:59 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:59:59.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:59:59 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:59:59 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1717918447' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:59:59 compute-1 nova_compute[225855]: 2026-01-20 14:59:59.766 225859 DEBUG oslo_concurrency.processutils [None req-ef91f40a-4b99-482f-a429-c9ca6ccd5151 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:59:59 compute-1 nova_compute[225855]: 2026-01-20 14:59:59.773 225859 DEBUG nova.compute.provider_tree [None req-ef91f40a-4b99-482f-a429-c9ca6ccd5151 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:59:59 compute-1 nova_compute[225855]: 2026-01-20 14:59:59.789 225859 DEBUG nova.scheduler.client.report [None req-ef91f40a-4b99-482f-a429-c9ca6ccd5151 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:59:59 compute-1 nova_compute[225855]: 2026-01-20 14:59:59.810 225859 DEBUG oslo_concurrency.lockutils [None req-ef91f40a-4b99-482f-a429-c9ca6ccd5151 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.588s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:59:59 compute-1 nova_compute[225855]: 2026-01-20 14:59:59.853 225859 INFO nova.scheduler.client.report [None req-ef91f40a-4b99-482f-a429-c9ca6ccd5151 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] Deleted allocations for instance 1ebdefed-0903-4d72-b78d-912666c5ce61
Jan 20 14:59:59 compute-1 nova_compute[225855]: 2026-01-20 14:59:59.933 225859 DEBUG oslo_concurrency.lockutils [None req-ef91f40a-4b99-482f-a429-c9ca6ccd5151 ed2c9bd268d1491fa3484d86bcdb9ec6 107c1f3b5b7b413d9a389ca1166e331f - - default default] Lock "1ebdefed-0903-4d72-b78d-912666c5ce61" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.730s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:00:00 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:00:00 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:00:00 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:00:00.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:00:00 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/1717918447' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:00:00 compute-1 ceph-mon[81775]: overall HEALTH_OK
Jan 20 15:00:00 compute-1 nova_compute[225855]: 2026-01-20 15:00:00.939 225859 DEBUG oslo_concurrency.lockutils [None req-bcfce5a4-4ed1-4cf4-a1c3-2572368c4e32 2a3fc2ba2a08423eb2e0bd7cf0fd5cf7 994b02a8c0094d2daa7b775b1f86f394 - - default default] Acquiring lock "f1757bed-1718-45e4-a731-11f1a3b4f068" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:00:00 compute-1 nova_compute[225855]: 2026-01-20 15:00:00.940 225859 DEBUG oslo_concurrency.lockutils [None req-bcfce5a4-4ed1-4cf4-a1c3-2572368c4e32 2a3fc2ba2a08423eb2e0bd7cf0fd5cf7 994b02a8c0094d2daa7b775b1f86f394 - - default default] Lock "f1757bed-1718-45e4-a731-11f1a3b4f068" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:00:00 compute-1 nova_compute[225855]: 2026-01-20 15:00:00.956 225859 DEBUG nova.compute.manager [None req-bcfce5a4-4ed1-4cf4-a1c3-2572368c4e32 2a3fc2ba2a08423eb2e0bd7cf0fd5cf7 994b02a8c0094d2daa7b775b1f86f394 - - default default] [instance: f1757bed-1718-45e4-a731-11f1a3b4f068] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 20 15:00:01 compute-1 nova_compute[225855]: 2026-01-20 15:00:01.058 225859 DEBUG oslo_concurrency.lockutils [None req-bcfce5a4-4ed1-4cf4-a1c3-2572368c4e32 2a3fc2ba2a08423eb2e0bd7cf0fd5cf7 994b02a8c0094d2daa7b775b1f86f394 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:00:01 compute-1 nova_compute[225855]: 2026-01-20 15:00:01.059 225859 DEBUG oslo_concurrency.lockutils [None req-bcfce5a4-4ed1-4cf4-a1c3-2572368c4e32 2a3fc2ba2a08423eb2e0bd7cf0fd5cf7 994b02a8c0094d2daa7b775b1f86f394 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:00:01 compute-1 nova_compute[225855]: 2026-01-20 15:00:01.066 225859 DEBUG nova.virt.hardware [None req-bcfce5a4-4ed1-4cf4-a1c3-2572368c4e32 2a3fc2ba2a08423eb2e0bd7cf0fd5cf7 994b02a8c0094d2daa7b775b1f86f394 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 20 15:00:01 compute-1 nova_compute[225855]: 2026-01-20 15:00:01.067 225859 INFO nova.compute.claims [None req-bcfce5a4-4ed1-4cf4-a1c3-2572368c4e32 2a3fc2ba2a08423eb2e0bd7cf0fd5cf7 994b02a8c0094d2daa7b775b1f86f394 - - default default] [instance: f1757bed-1718-45e4-a731-11f1a3b4f068] Claim successful on node compute-1.ctlplane.example.com
Jan 20 15:00:01 compute-1 nova_compute[225855]: 2026-01-20 15:00:01.209 225859 DEBUG oslo_concurrency.processutils [None req-bcfce5a4-4ed1-4cf4-a1c3-2572368c4e32 2a3fc2ba2a08423eb2e0bd7cf0fd5cf7 994b02a8c0094d2daa7b775b1f86f394 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:00:01 compute-1 ceph-mon[81775]: pgmap v2238: 321 pgs: 321 active+clean; 400 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 318 KiB/s wr, 106 op/s
Jan 20 15:00:01 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/3689656102' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:00:01 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:00:01 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3486972496' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:00:01 compute-1 nova_compute[225855]: 2026-01-20 15:00:01.677 225859 DEBUG oslo_concurrency.processutils [None req-bcfce5a4-4ed1-4cf4-a1c3-2572368c4e32 2a3fc2ba2a08423eb2e0bd7cf0fd5cf7 994b02a8c0094d2daa7b775b1f86f394 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:00:01 compute-1 nova_compute[225855]: 2026-01-20 15:00:01.684 225859 DEBUG nova.compute.provider_tree [None req-bcfce5a4-4ed1-4cf4-a1c3-2572368c4e32 2a3fc2ba2a08423eb2e0bd7cf0fd5cf7 994b02a8c0094d2daa7b775b1f86f394 - - default default] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:00:01 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:00:01 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:00:01 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:00:01.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:00:01 compute-1 nova_compute[225855]: 2026-01-20 15:00:01.709 225859 DEBUG nova.scheduler.client.report [None req-bcfce5a4-4ed1-4cf4-a1c3-2572368c4e32 2a3fc2ba2a08423eb2e0bd7cf0fd5cf7 994b02a8c0094d2daa7b775b1f86f394 - - default default] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:00:01 compute-1 nova_compute[225855]: 2026-01-20 15:00:01.735 225859 DEBUG oslo_concurrency.lockutils [None req-bcfce5a4-4ed1-4cf4-a1c3-2572368c4e32 2a3fc2ba2a08423eb2e0bd7cf0fd5cf7 994b02a8c0094d2daa7b775b1f86f394 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.676s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:00:01 compute-1 nova_compute[225855]: 2026-01-20 15:00:01.736 225859 DEBUG nova.compute.manager [None req-bcfce5a4-4ed1-4cf4-a1c3-2572368c4e32 2a3fc2ba2a08423eb2e0bd7cf0fd5cf7 994b02a8c0094d2daa7b775b1f86f394 - - default default] [instance: f1757bed-1718-45e4-a731-11f1a3b4f068] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 20 15:00:01 compute-1 nova_compute[225855]: 2026-01-20 15:00:01.779 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:00:01 compute-1 nova_compute[225855]: 2026-01-20 15:00:01.785 225859 DEBUG nova.compute.manager [None req-bcfce5a4-4ed1-4cf4-a1c3-2572368c4e32 2a3fc2ba2a08423eb2e0bd7cf0fd5cf7 994b02a8c0094d2daa7b775b1f86f394 - - default default] [instance: f1757bed-1718-45e4-a731-11f1a3b4f068] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 20 15:00:01 compute-1 nova_compute[225855]: 2026-01-20 15:00:01.786 225859 DEBUG nova.network.neutron [None req-bcfce5a4-4ed1-4cf4-a1c3-2572368c4e32 2a3fc2ba2a08423eb2e0bd7cf0fd5cf7 994b02a8c0094d2daa7b775b1f86f394 - - default default] [instance: f1757bed-1718-45e4-a731-11f1a3b4f068] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 20 15:00:01 compute-1 nova_compute[225855]: 2026-01-20 15:00:01.820 225859 INFO nova.virt.libvirt.driver [None req-bcfce5a4-4ed1-4cf4-a1c3-2572368c4e32 2a3fc2ba2a08423eb2e0bd7cf0fd5cf7 994b02a8c0094d2daa7b775b1f86f394 - - default default] [instance: f1757bed-1718-45e4-a731-11f1a3b4f068] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 20 15:00:01 compute-1 nova_compute[225855]: 2026-01-20 15:00:01.842 225859 DEBUG nova.compute.manager [None req-bcfce5a4-4ed1-4cf4-a1c3-2572368c4e32 2a3fc2ba2a08423eb2e0bd7cf0fd5cf7 994b02a8c0094d2daa7b775b1f86f394 - - default default] [instance: f1757bed-1718-45e4-a731-11f1a3b4f068] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 20 15:00:01 compute-1 nova_compute[225855]: 2026-01-20 15:00:01.930 225859 DEBUG nova.compute.manager [None req-bcfce5a4-4ed1-4cf4-a1c3-2572368c4e32 2a3fc2ba2a08423eb2e0bd7cf0fd5cf7 994b02a8c0094d2daa7b775b1f86f394 - - default default] [instance: f1757bed-1718-45e4-a731-11f1a3b4f068] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 20 15:00:01 compute-1 nova_compute[225855]: 2026-01-20 15:00:01.932 225859 DEBUG nova.virt.libvirt.driver [None req-bcfce5a4-4ed1-4cf4-a1c3-2572368c4e32 2a3fc2ba2a08423eb2e0bd7cf0fd5cf7 994b02a8c0094d2daa7b775b1f86f394 - - default default] [instance: f1757bed-1718-45e4-a731-11f1a3b4f068] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 20 15:00:01 compute-1 nova_compute[225855]: 2026-01-20 15:00:01.932 225859 INFO nova.virt.libvirt.driver [None req-bcfce5a4-4ed1-4cf4-a1c3-2572368c4e32 2a3fc2ba2a08423eb2e0bd7cf0fd5cf7 994b02a8c0094d2daa7b775b1f86f394 - - default default] [instance: f1757bed-1718-45e4-a731-11f1a3b4f068] Creating image(s)
Jan 20 15:00:01 compute-1 nova_compute[225855]: 2026-01-20 15:00:01.959 225859 DEBUG nova.storage.rbd_utils [None req-bcfce5a4-4ed1-4cf4-a1c3-2572368c4e32 2a3fc2ba2a08423eb2e0bd7cf0fd5cf7 994b02a8c0094d2daa7b775b1f86f394 - - default default] rbd image f1757bed-1718-45e4-a731-11f1a3b4f068_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:00:01 compute-1 nova_compute[225855]: 2026-01-20 15:00:01.990 225859 DEBUG nova.storage.rbd_utils [None req-bcfce5a4-4ed1-4cf4-a1c3-2572368c4e32 2a3fc2ba2a08423eb2e0bd7cf0fd5cf7 994b02a8c0094d2daa7b775b1f86f394 - - default default] rbd image f1757bed-1718-45e4-a731-11f1a3b4f068_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:00:02 compute-1 nova_compute[225855]: 2026-01-20 15:00:02.021 225859 DEBUG nova.storage.rbd_utils [None req-bcfce5a4-4ed1-4cf4-a1c3-2572368c4e32 2a3fc2ba2a08423eb2e0bd7cf0fd5cf7 994b02a8c0094d2daa7b775b1f86f394 - - default default] rbd image f1757bed-1718-45e4-a731-11f1a3b4f068_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:00:02 compute-1 nova_compute[225855]: 2026-01-20 15:00:02.026 225859 DEBUG oslo_concurrency.processutils [None req-bcfce5a4-4ed1-4cf4-a1c3-2572368c4e32 2a3fc2ba2a08423eb2e0bd7cf0fd5cf7 994b02a8c0094d2daa7b775b1f86f394 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:00:02 compute-1 nova_compute[225855]: 2026-01-20 15:00:02.053 225859 DEBUG nova.policy [None req-bcfce5a4-4ed1-4cf4-a1c3-2572368c4e32 2a3fc2ba2a08423eb2e0bd7cf0fd5cf7 994b02a8c0094d2daa7b775b1f86f394 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '2a3fc2ba2a08423eb2e0bd7cf0fd5cf7', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '994b02a8c0094d2daa7b775b1f86f394', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 20 15:00:02 compute-1 nova_compute[225855]: 2026-01-20 15:00:02.096 225859 DEBUG oslo_concurrency.processutils [None req-bcfce5a4-4ed1-4cf4-a1c3-2572368c4e32 2a3fc2ba2a08423eb2e0bd7cf0fd5cf7 994b02a8c0094d2daa7b775b1f86f394 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:00:02 compute-1 nova_compute[225855]: 2026-01-20 15:00:02.097 225859 DEBUG oslo_concurrency.lockutils [None req-bcfce5a4-4ed1-4cf4-a1c3-2572368c4e32 2a3fc2ba2a08423eb2e0bd7cf0fd5cf7 994b02a8c0094d2daa7b775b1f86f394 - - default default] Acquiring lock "82d5c1918fd7c974214c7a48c1793a7a82560462" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:00:02 compute-1 nova_compute[225855]: 2026-01-20 15:00:02.097 225859 DEBUG oslo_concurrency.lockutils [None req-bcfce5a4-4ed1-4cf4-a1c3-2572368c4e32 2a3fc2ba2a08423eb2e0bd7cf0fd5cf7 994b02a8c0094d2daa7b775b1f86f394 - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:00:02 compute-1 nova_compute[225855]: 2026-01-20 15:00:02.098 225859 DEBUG oslo_concurrency.lockutils [None req-bcfce5a4-4ed1-4cf4-a1c3-2572368c4e32 2a3fc2ba2a08423eb2e0bd7cf0fd5cf7 994b02a8c0094d2daa7b775b1f86f394 - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:00:02 compute-1 nova_compute[225855]: 2026-01-20 15:00:02.125 225859 DEBUG nova.storage.rbd_utils [None req-bcfce5a4-4ed1-4cf4-a1c3-2572368c4e32 2a3fc2ba2a08423eb2e0bd7cf0fd5cf7 994b02a8c0094d2daa7b775b1f86f394 - - default default] rbd image f1757bed-1718-45e4-a731-11f1a3b4f068_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:00:02 compute-1 nova_compute[225855]: 2026-01-20 15:00:02.130 225859 DEBUG oslo_concurrency.processutils [None req-bcfce5a4-4ed1-4cf4-a1c3-2572368c4e32 2a3fc2ba2a08423eb2e0bd7cf0fd5cf7 994b02a8c0094d2daa7b775b1f86f394 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 f1757bed-1718-45e4-a731-11f1a3b4f068_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:00:02 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:00:02 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:00:02 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:00:02.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:00:02 compute-1 nova_compute[225855]: 2026-01-20 15:00:02.379 225859 DEBUG oslo_concurrency.processutils [None req-bcfce5a4-4ed1-4cf4-a1c3-2572368c4e32 2a3fc2ba2a08423eb2e0bd7cf0fd5cf7 994b02a8c0094d2daa7b775b1f86f394 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 f1757bed-1718-45e4-a731-11f1a3b4f068_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.249s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:00:02 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/3486972496' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:00:02 compute-1 nova_compute[225855]: 2026-01-20 15:00:02.457 225859 DEBUG nova.storage.rbd_utils [None req-bcfce5a4-4ed1-4cf4-a1c3-2572368c4e32 2a3fc2ba2a08423eb2e0bd7cf0fd5cf7 994b02a8c0094d2daa7b775b1f86f394 - - default default] resizing rbd image f1757bed-1718-45e4-a731-11f1a3b4f068_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 20 15:00:02 compute-1 nova_compute[225855]: 2026-01-20 15:00:02.488 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:00:02 compute-1 nova_compute[225855]: 2026-01-20 15:00:02.564 225859 DEBUG nova.objects.instance [None req-bcfce5a4-4ed1-4cf4-a1c3-2572368c4e32 2a3fc2ba2a08423eb2e0bd7cf0fd5cf7 994b02a8c0094d2daa7b775b1f86f394 - - default default] Lazy-loading 'migration_context' on Instance uuid f1757bed-1718-45e4-a731-11f1a3b4f068 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:00:02 compute-1 nova_compute[225855]: 2026-01-20 15:00:02.602 225859 DEBUG nova.virt.libvirt.driver [None req-bcfce5a4-4ed1-4cf4-a1c3-2572368c4e32 2a3fc2ba2a08423eb2e0bd7cf0fd5cf7 994b02a8c0094d2daa7b775b1f86f394 - - default default] [instance: f1757bed-1718-45e4-a731-11f1a3b4f068] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 20 15:00:02 compute-1 nova_compute[225855]: 2026-01-20 15:00:02.602 225859 DEBUG nova.virt.libvirt.driver [None req-bcfce5a4-4ed1-4cf4-a1c3-2572368c4e32 2a3fc2ba2a08423eb2e0bd7cf0fd5cf7 994b02a8c0094d2daa7b775b1f86f394 - - default default] [instance: f1757bed-1718-45e4-a731-11f1a3b4f068] Ensure instance console log exists: /var/lib/nova/instances/f1757bed-1718-45e4-a731-11f1a3b4f068/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 20 15:00:02 compute-1 nova_compute[225855]: 2026-01-20 15:00:02.603 225859 DEBUG oslo_concurrency.lockutils [None req-bcfce5a4-4ed1-4cf4-a1c3-2572368c4e32 2a3fc2ba2a08423eb2e0bd7cf0fd5cf7 994b02a8c0094d2daa7b775b1f86f394 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:00:02 compute-1 nova_compute[225855]: 2026-01-20 15:00:02.603 225859 DEBUG oslo_concurrency.lockutils [None req-bcfce5a4-4ed1-4cf4-a1c3-2572368c4e32 2a3fc2ba2a08423eb2e0bd7cf0fd5cf7 994b02a8c0094d2daa7b775b1f86f394 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:00:02 compute-1 nova_compute[225855]: 2026-01-20 15:00:02.603 225859 DEBUG oslo_concurrency.lockutils [None req-bcfce5a4-4ed1-4cf4-a1c3-2572368c4e32 2a3fc2ba2a08423eb2e0bd7cf0fd5cf7 994b02a8c0094d2daa7b775b1f86f394 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:00:02 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e321 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:00:03 compute-1 podman[283401]: 2026-01-20 15:00:03.006793484 +0000 UTC m=+0.054297443 container health_status 533cd8e0c3b8b7f910f15cef49e7f879ab46c12ce646a35628c15f13725eb500 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Jan 20 15:00:03 compute-1 sudo[283420]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:00:03 compute-1 sudo[283420]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:00:03 compute-1 sudo[283420]: pam_unix(sudo:session): session closed for user root
Jan 20 15:00:03 compute-1 sudo[283445]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 15:00:03 compute-1 sudo[283445]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:00:03 compute-1 sudo[283445]: pam_unix(sudo:session): session closed for user root
Jan 20 15:00:03 compute-1 ceph-mon[81775]: pgmap v2239: 321 pgs: 321 active+clean; 388 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 18 KiB/s wr, 126 op/s
Jan 20 15:00:03 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/2499902095' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:00:03 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:00:03 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:00:03 compute-1 nova_compute[225855]: 2026-01-20 15:00:03.527 225859 DEBUG nova.network.neutron [None req-bcfce5a4-4ed1-4cf4-a1c3-2572368c4e32 2a3fc2ba2a08423eb2e0bd7cf0fd5cf7 994b02a8c0094d2daa7b775b1f86f394 - - default default] [instance: f1757bed-1718-45e4-a731-11f1a3b4f068] Successfully created port: f62f622f-1d0a-4a68-9540-d1a7f48a66d0 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 20 15:00:03 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:00:03 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:00:03 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:00:03.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:00:04 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:00:04 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:00:04 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:00:04.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:00:04 compute-1 ceph-mon[81775]: pgmap v2240: 321 pgs: 321 active+clean; 394 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 705 KiB/s wr, 141 op/s
Jan 20 15:00:04 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 20 15:00:04 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3429916275' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:00:04 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 20 15:00:04 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3429916275' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:00:05 compute-1 ceph-mon[81775]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #103. Immutable memtables: 0.
Jan 20 15:00:05 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:00:05.017052) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 20 15:00:05 compute-1 ceph-mon[81775]: rocksdb: [db/flush_job.cc:856] [default] [JOB 63] Flushing memtable with next log file: 103
Jan 20 15:00:05 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768921205017130, "job": 63, "event": "flush_started", "num_memtables": 1, "num_entries": 1918, "num_deletes": 259, "total_data_size": 4384204, "memory_usage": 4429136, "flush_reason": "Manual Compaction"}
Jan 20 15:00:05 compute-1 ceph-mon[81775]: rocksdb: [db/flush_job.cc:885] [default] [JOB 63] Level-0 flush table #104: started
Jan 20 15:00:05 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768921205039284, "cf_name": "default", "job": 63, "event": "table_file_creation", "file_number": 104, "file_size": 2830046, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 52522, "largest_seqno": 54435, "table_properties": {"data_size": 2822148, "index_size": 4648, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2181, "raw_key_size": 17420, "raw_average_key_size": 20, "raw_value_size": 2805888, "raw_average_value_size": 3277, "num_data_blocks": 203, "num_entries": 856, "num_filter_entries": 856, "num_deletions": 259, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768921060, "oldest_key_time": 1768921060, "file_creation_time": 1768921205, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1539d774-8a6f-4e48-b253-137c44586344", "db_session_id": "LFF7G2OZDOU7TKQ8MKAH", "orig_file_number": 104, "seqno_to_time_mapping": "N/A"}}
Jan 20 15:00:05 compute-1 ceph-mon[81775]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 63] Flush lasted 22271 microseconds, and 6384 cpu microseconds.
Jan 20 15:00:05 compute-1 ceph-mon[81775]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 15:00:05 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:00:05.039345) [db/flush_job.cc:967] [default] [JOB 63] Level-0 flush table #104: 2830046 bytes OK
Jan 20 15:00:05 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:00:05.039363) [db/memtable_list.cc:519] [default] Level-0 commit table #104 started
Jan 20 15:00:05 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:00:05.042709) [db/memtable_list.cc:722] [default] Level-0 commit table #104: memtable #1 done
Jan 20 15:00:05 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:00:05.042725) EVENT_LOG_v1 {"time_micros": 1768921205042720, "job": 63, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 20 15:00:05 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:00:05.042744) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 20 15:00:05 compute-1 ceph-mon[81775]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 63] Try to delete WAL files size 4375403, prev total WAL file size 4375403, number of live WAL files 2.
Jan 20 15:00:05 compute-1 ceph-mon[81775]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000100.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 15:00:05 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:00:05.043736) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031373634' seq:72057594037927935, type:22 .. '6C6F676D0032303138' seq:0, type:0; will stop at (end)
Jan 20 15:00:05 compute-1 ceph-mon[81775]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 64] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 20 15:00:05 compute-1 ceph-mon[81775]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 63 Base level 0, inputs: [104(2763KB)], [102(10108KB)]
Jan 20 15:00:05 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768921205043815, "job": 64, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [104], "files_L6": [102], "score": -1, "input_data_size": 13181361, "oldest_snapshot_seqno": -1}
Jan 20 15:00:05 compute-1 nova_compute[225855]: 2026-01-20 15:00:05.105 225859 DEBUG nova.network.neutron [None req-bcfce5a4-4ed1-4cf4-a1c3-2572368c4e32 2a3fc2ba2a08423eb2e0bd7cf0fd5cf7 994b02a8c0094d2daa7b775b1f86f394 - - default default] [instance: f1757bed-1718-45e4-a731-11f1a3b4f068] Successfully updated port: f62f622f-1d0a-4a68-9540-d1a7f48a66d0 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 20 15:00:05 compute-1 nova_compute[225855]: 2026-01-20 15:00:05.121 225859 DEBUG oslo_concurrency.lockutils [None req-bcfce5a4-4ed1-4cf4-a1c3-2572368c4e32 2a3fc2ba2a08423eb2e0bd7cf0fd5cf7 994b02a8c0094d2daa7b775b1f86f394 - - default default] Acquiring lock "refresh_cache-f1757bed-1718-45e4-a731-11f1a3b4f068" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:00:05 compute-1 nova_compute[225855]: 2026-01-20 15:00:05.121 225859 DEBUG oslo_concurrency.lockutils [None req-bcfce5a4-4ed1-4cf4-a1c3-2572368c4e32 2a3fc2ba2a08423eb2e0bd7cf0fd5cf7 994b02a8c0094d2daa7b775b1f86f394 - - default default] Acquired lock "refresh_cache-f1757bed-1718-45e4-a731-11f1a3b4f068" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:00:05 compute-1 nova_compute[225855]: 2026-01-20 15:00:05.122 225859 DEBUG nova.network.neutron [None req-bcfce5a4-4ed1-4cf4-a1c3-2572368c4e32 2a3fc2ba2a08423eb2e0bd7cf0fd5cf7 994b02a8c0094d2daa7b775b1f86f394 - - default default] [instance: f1757bed-1718-45e4-a731-11f1a3b4f068] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 15:00:05 compute-1 ceph-mon[81775]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 64] Generated table #105: 8095 keys, 13023021 bytes, temperature: kUnknown
Jan 20 15:00:05 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768921205164068, "cf_name": "default", "job": 64, "event": "table_file_creation", "file_number": 105, "file_size": 13023021, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12967333, "index_size": 34328, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 20293, "raw_key_size": 209010, "raw_average_key_size": 25, "raw_value_size": 12821522, "raw_average_value_size": 1583, "num_data_blocks": 1357, "num_entries": 8095, "num_filter_entries": 8095, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768917474, "oldest_key_time": 0, "file_creation_time": 1768921205, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1539d774-8a6f-4e48-b253-137c44586344", "db_session_id": "LFF7G2OZDOU7TKQ8MKAH", "orig_file_number": 105, "seqno_to_time_mapping": "N/A"}}
Jan 20 15:00:05 compute-1 ceph-mon[81775]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 15:00:05 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:00:05.164314) [db/compaction/compaction_job.cc:1663] [default] [JOB 64] Compacted 1@0 + 1@6 files to L6 => 13023021 bytes
Jan 20 15:00:05 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:00:05.166394) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 109.5 rd, 108.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.7, 9.9 +0.0 blob) out(12.4 +0.0 blob), read-write-amplify(9.3) write-amplify(4.6) OK, records in: 8635, records dropped: 540 output_compression: NoCompression
Jan 20 15:00:05 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:00:05.166440) EVENT_LOG_v1 {"time_micros": 1768921205166424, "job": 64, "event": "compaction_finished", "compaction_time_micros": 120328, "compaction_time_cpu_micros": 43342, "output_level": 6, "num_output_files": 1, "total_output_size": 13023021, "num_input_records": 8635, "num_output_records": 8095, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 20 15:00:05 compute-1 ceph-mon[81775]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000104.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 15:00:05 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768921205167129, "job": 64, "event": "table_file_deletion", "file_number": 104}
Jan 20 15:00:05 compute-1 ceph-mon[81775]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000102.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 15:00:05 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768921205169009, "job": 64, "event": "table_file_deletion", "file_number": 102}
Jan 20 15:00:05 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:00:05.043590) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:00:05 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:00:05.169098) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:00:05 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:00:05.169104) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:00:05 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:00:05.169106) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:00:05 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:00:05.169107) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:00:05 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:00:05.169109) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:00:05 compute-1 nova_compute[225855]: 2026-01-20 15:00:05.235 225859 DEBUG nova.compute.manager [req-eb198a38-ff06-4e43-bacb-710a317b1b28 req-77adfd5a-ef37-4760-ab9d-e4706e74e755 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f1757bed-1718-45e4-a731-11f1a3b4f068] Received event network-changed-f62f622f-1d0a-4a68-9540-d1a7f48a66d0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:00:05 compute-1 nova_compute[225855]: 2026-01-20 15:00:05.235 225859 DEBUG nova.compute.manager [req-eb198a38-ff06-4e43-bacb-710a317b1b28 req-77adfd5a-ef37-4760-ab9d-e4706e74e755 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f1757bed-1718-45e4-a731-11f1a3b4f068] Refreshing instance network info cache due to event network-changed-f62f622f-1d0a-4a68-9540-d1a7f48a66d0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 15:00:05 compute-1 nova_compute[225855]: 2026-01-20 15:00:05.236 225859 DEBUG oslo_concurrency.lockutils [req-eb198a38-ff06-4e43-bacb-710a317b1b28 req-77adfd5a-ef37-4760-ab9d-e4706e74e755 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-f1757bed-1718-45e4-a731-11f1a3b4f068" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:00:05 compute-1 nova_compute[225855]: 2026-01-20 15:00:05.469 225859 DEBUG nova.network.neutron [None req-bcfce5a4-4ed1-4cf4-a1c3-2572368c4e32 2a3fc2ba2a08423eb2e0bd7cf0fd5cf7 994b02a8c0094d2daa7b775b1f86f394 - - default default] [instance: f1757bed-1718-45e4-a731-11f1a3b4f068] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 20 15:00:05 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/3429916275' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:00:05 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/3429916275' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:00:05 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:00:05 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:00:05 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:00:05.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:00:05 compute-1 nova_compute[225855]: 2026-01-20 15:00:05.995 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:00:06 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:00:06 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:00:06 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:00:06.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:00:06 compute-1 nova_compute[225855]: 2026-01-20 15:00:06.313 225859 DEBUG nova.network.neutron [None req-bcfce5a4-4ed1-4cf4-a1c3-2572368c4e32 2a3fc2ba2a08423eb2e0bd7cf0fd5cf7 994b02a8c0094d2daa7b775b1f86f394 - - default default] [instance: f1757bed-1718-45e4-a731-11f1a3b4f068] Updating instance_info_cache with network_info: [{"id": "f62f622f-1d0a-4a68-9540-d1a7f48a66d0", "address": "fa:16:3e:cb:97:d3", "network": {"id": "347be9eb-2d7b-4b2d-b1e8-8ed5a063f269", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1057973257-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "994b02a8c0094d2daa7b775b1f86f394", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf62f622f-1d", "ovs_interfaceid": "f62f622f-1d0a-4a68-9540-d1a7f48a66d0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:00:06 compute-1 nova_compute[225855]: 2026-01-20 15:00:06.320 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:00:06 compute-1 nova_compute[225855]: 2026-01-20 15:00:06.340 225859 DEBUG oslo_concurrency.lockutils [None req-bcfce5a4-4ed1-4cf4-a1c3-2572368c4e32 2a3fc2ba2a08423eb2e0bd7cf0fd5cf7 994b02a8c0094d2daa7b775b1f86f394 - - default default] Releasing lock "refresh_cache-f1757bed-1718-45e4-a731-11f1a3b4f068" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:00:06 compute-1 nova_compute[225855]: 2026-01-20 15:00:06.341 225859 DEBUG nova.compute.manager [None req-bcfce5a4-4ed1-4cf4-a1c3-2572368c4e32 2a3fc2ba2a08423eb2e0bd7cf0fd5cf7 994b02a8c0094d2daa7b775b1f86f394 - - default default] [instance: f1757bed-1718-45e4-a731-11f1a3b4f068] Instance network_info: |[{"id": "f62f622f-1d0a-4a68-9540-d1a7f48a66d0", "address": "fa:16:3e:cb:97:d3", "network": {"id": "347be9eb-2d7b-4b2d-b1e8-8ed5a063f269", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1057973257-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "994b02a8c0094d2daa7b775b1f86f394", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf62f622f-1d", "ovs_interfaceid": "f62f622f-1d0a-4a68-9540-d1a7f48a66d0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 20 15:00:06 compute-1 nova_compute[225855]: 2026-01-20 15:00:06.341 225859 DEBUG oslo_concurrency.lockutils [req-eb198a38-ff06-4e43-bacb-710a317b1b28 req-77adfd5a-ef37-4760-ab9d-e4706e74e755 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-f1757bed-1718-45e4-a731-11f1a3b4f068" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:00:06 compute-1 nova_compute[225855]: 2026-01-20 15:00:06.342 225859 DEBUG nova.network.neutron [req-eb198a38-ff06-4e43-bacb-710a317b1b28 req-77adfd5a-ef37-4760-ab9d-e4706e74e755 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f1757bed-1718-45e4-a731-11f1a3b4f068] Refreshing network info cache for port f62f622f-1d0a-4a68-9540-d1a7f48a66d0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 15:00:06 compute-1 nova_compute[225855]: 2026-01-20 15:00:06.345 225859 DEBUG nova.virt.libvirt.driver [None req-bcfce5a4-4ed1-4cf4-a1c3-2572368c4e32 2a3fc2ba2a08423eb2e0bd7cf0fd5cf7 994b02a8c0094d2daa7b775b1f86f394 - - default default] [instance: f1757bed-1718-45e4-a731-11f1a3b4f068] Start _get_guest_xml network_info=[{"id": "f62f622f-1d0a-4a68-9540-d1a7f48a66d0", "address": "fa:16:3e:cb:97:d3", "network": {"id": "347be9eb-2d7b-4b2d-b1e8-8ed5a063f269", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1057973257-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "994b02a8c0094d2daa7b775b1f86f394", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf62f622f-1d", "ovs_interfaceid": "f62f622f-1d0a-4a68-9540-d1a7f48a66d0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'device_type': 'disk', 'encryption_options': None, 'size': 0, 'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'guest_format': None, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'image_id': 'a32b3e07-16d8-46fd-9a7b-c242c432fcf9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 20 15:00:06 compute-1 nova_compute[225855]: 2026-01-20 15:00:06.349 225859 WARNING nova.virt.libvirt.driver [None req-bcfce5a4-4ed1-4cf4-a1c3-2572368c4e32 2a3fc2ba2a08423eb2e0bd7cf0fd5cf7 994b02a8c0094d2daa7b775b1f86f394 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 15:00:06 compute-1 nova_compute[225855]: 2026-01-20 15:00:06.356 225859 DEBUG nova.virt.libvirt.host [None req-bcfce5a4-4ed1-4cf4-a1c3-2572368c4e32 2a3fc2ba2a08423eb2e0bd7cf0fd5cf7 994b02a8c0094d2daa7b775b1f86f394 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 20 15:00:06 compute-1 nova_compute[225855]: 2026-01-20 15:00:06.356 225859 DEBUG nova.virt.libvirt.host [None req-bcfce5a4-4ed1-4cf4-a1c3-2572368c4e32 2a3fc2ba2a08423eb2e0bd7cf0fd5cf7 994b02a8c0094d2daa7b775b1f86f394 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 20 15:00:06 compute-1 nova_compute[225855]: 2026-01-20 15:00:06.361 225859 DEBUG nova.virt.libvirt.host [None req-bcfce5a4-4ed1-4cf4-a1c3-2572368c4e32 2a3fc2ba2a08423eb2e0bd7cf0fd5cf7 994b02a8c0094d2daa7b775b1f86f394 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 20 15:00:06 compute-1 nova_compute[225855]: 2026-01-20 15:00:06.362 225859 DEBUG nova.virt.libvirt.host [None req-bcfce5a4-4ed1-4cf4-a1c3-2572368c4e32 2a3fc2ba2a08423eb2e0bd7cf0fd5cf7 994b02a8c0094d2daa7b775b1f86f394 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 20 15:00:06 compute-1 nova_compute[225855]: 2026-01-20 15:00:06.363 225859 DEBUG nova.virt.libvirt.driver [None req-bcfce5a4-4ed1-4cf4-a1c3-2572368c4e32 2a3fc2ba2a08423eb2e0bd7cf0fd5cf7 994b02a8c0094d2daa7b775b1f86f394 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 20 15:00:06 compute-1 nova_compute[225855]: 2026-01-20 15:00:06.363 225859 DEBUG nova.virt.hardware [None req-bcfce5a4-4ed1-4cf4-a1c3-2572368c4e32 2a3fc2ba2a08423eb2e0bd7cf0fd5cf7 994b02a8c0094d2daa7b775b1f86f394 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-20T14:21:55Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='522deaab-a741-4dbb-932d-d8b13a211c33',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 20 15:00:06 compute-1 nova_compute[225855]: 2026-01-20 15:00:06.364 225859 DEBUG nova.virt.hardware [None req-bcfce5a4-4ed1-4cf4-a1c3-2572368c4e32 2a3fc2ba2a08423eb2e0bd7cf0fd5cf7 994b02a8c0094d2daa7b775b1f86f394 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 20 15:00:06 compute-1 nova_compute[225855]: 2026-01-20 15:00:06.364 225859 DEBUG nova.virt.hardware [None req-bcfce5a4-4ed1-4cf4-a1c3-2572368c4e32 2a3fc2ba2a08423eb2e0bd7cf0fd5cf7 994b02a8c0094d2daa7b775b1f86f394 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 20 15:00:06 compute-1 nova_compute[225855]: 2026-01-20 15:00:06.364 225859 DEBUG nova.virt.hardware [None req-bcfce5a4-4ed1-4cf4-a1c3-2572368c4e32 2a3fc2ba2a08423eb2e0bd7cf0fd5cf7 994b02a8c0094d2daa7b775b1f86f394 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 20 15:00:06 compute-1 nova_compute[225855]: 2026-01-20 15:00:06.365 225859 DEBUG nova.virt.hardware [None req-bcfce5a4-4ed1-4cf4-a1c3-2572368c4e32 2a3fc2ba2a08423eb2e0bd7cf0fd5cf7 994b02a8c0094d2daa7b775b1f86f394 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 20 15:00:06 compute-1 nova_compute[225855]: 2026-01-20 15:00:06.365 225859 DEBUG nova.virt.hardware [None req-bcfce5a4-4ed1-4cf4-a1c3-2572368c4e32 2a3fc2ba2a08423eb2e0bd7cf0fd5cf7 994b02a8c0094d2daa7b775b1f86f394 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 20 15:00:06 compute-1 nova_compute[225855]: 2026-01-20 15:00:06.365 225859 DEBUG nova.virt.hardware [None req-bcfce5a4-4ed1-4cf4-a1c3-2572368c4e32 2a3fc2ba2a08423eb2e0bd7cf0fd5cf7 994b02a8c0094d2daa7b775b1f86f394 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 20 15:00:06 compute-1 nova_compute[225855]: 2026-01-20 15:00:06.365 225859 DEBUG nova.virt.hardware [None req-bcfce5a4-4ed1-4cf4-a1c3-2572368c4e32 2a3fc2ba2a08423eb2e0bd7cf0fd5cf7 994b02a8c0094d2daa7b775b1f86f394 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 20 15:00:06 compute-1 nova_compute[225855]: 2026-01-20 15:00:06.366 225859 DEBUG nova.virt.hardware [None req-bcfce5a4-4ed1-4cf4-a1c3-2572368c4e32 2a3fc2ba2a08423eb2e0bd7cf0fd5cf7 994b02a8c0094d2daa7b775b1f86f394 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 20 15:00:06 compute-1 nova_compute[225855]: 2026-01-20 15:00:06.366 225859 DEBUG nova.virt.hardware [None req-bcfce5a4-4ed1-4cf4-a1c3-2572368c4e32 2a3fc2ba2a08423eb2e0bd7cf0fd5cf7 994b02a8c0094d2daa7b775b1f86f394 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 20 15:00:06 compute-1 nova_compute[225855]: 2026-01-20 15:00:06.366 225859 DEBUG nova.virt.hardware [None req-bcfce5a4-4ed1-4cf4-a1c3-2572368c4e32 2a3fc2ba2a08423eb2e0bd7cf0fd5cf7 994b02a8c0094d2daa7b775b1f86f394 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 20 15:00:06 compute-1 nova_compute[225855]: 2026-01-20 15:00:06.369 225859 DEBUG oslo_concurrency.processutils [None req-bcfce5a4-4ed1-4cf4-a1c3-2572368c4e32 2a3fc2ba2a08423eb2e0bd7cf0fd5cf7 994b02a8c0094d2daa7b775b1f86f394 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:00:06 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/3768612646' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:00:06 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/3768612646' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:00:06 compute-1 ceph-mon[81775]: pgmap v2241: 321 pgs: 321 active+clean; 368 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 179 op/s
Jan 20 15:00:06 compute-1 nova_compute[225855]: 2026-01-20 15:00:06.781 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:00:06 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 15:00:06 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1652995315' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:00:06 compute-1 nova_compute[225855]: 2026-01-20 15:00:06.809 225859 DEBUG oslo_concurrency.processutils [None req-bcfce5a4-4ed1-4cf4-a1c3-2572368c4e32 2a3fc2ba2a08423eb2e0bd7cf0fd5cf7 994b02a8c0094d2daa7b775b1f86f394 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:00:06 compute-1 nova_compute[225855]: 2026-01-20 15:00:06.834 225859 DEBUG nova.storage.rbd_utils [None req-bcfce5a4-4ed1-4cf4-a1c3-2572368c4e32 2a3fc2ba2a08423eb2e0bd7cf0fd5cf7 994b02a8c0094d2daa7b775b1f86f394 - - default default] rbd image f1757bed-1718-45e4-a731-11f1a3b4f068_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:00:06 compute-1 nova_compute[225855]: 2026-01-20 15:00:06.838 225859 DEBUG oslo_concurrency.processutils [None req-bcfce5a4-4ed1-4cf4-a1c3-2572368c4e32 2a3fc2ba2a08423eb2e0bd7cf0fd5cf7 994b02a8c0094d2daa7b775b1f86f394 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:00:06 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 20 15:00:06 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2825997578' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:00:06 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 20 15:00:06 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2825997578' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:00:07 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 15:00:07 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/407955871' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:00:07 compute-1 nova_compute[225855]: 2026-01-20 15:00:07.287 225859 DEBUG oslo_concurrency.processutils [None req-bcfce5a4-4ed1-4cf4-a1c3-2572368c4e32 2a3fc2ba2a08423eb2e0bd7cf0fd5cf7 994b02a8c0094d2daa7b775b1f86f394 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:00:07 compute-1 nova_compute[225855]: 2026-01-20 15:00:07.289 225859 DEBUG nova.virt.libvirt.vif [None req-bcfce5a4-4ed1-4cf4-a1c3-2572368c4e32 2a3fc2ba2a08423eb2e0bd7cf0fd5cf7 994b02a8c0094d2daa7b775b1f86f394 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T14:59:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerBasicOps-server-2075964816',display_name='tempest-TestServerBasicOps-server-2075964816',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testserverbasicops-server-2075964816',id=146,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCiCeGBNIIjQtIa5Udw9T4nKiS3sQpmIFto+Zj/ppiwHl3KoPC1ZwXSQfteIxtI2AuErtkRwyRat7WVpBCL4SK6jCl43k4+LHYwocVMfWmtSf2fkMge6nUPK98YTBKBV9g==',key_name='tempest-TestServerBasicOps-469960215',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={meta1='data1',meta2='data2',metaN='dataN'},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='994b02a8c0094d2daa7b775b1f86f394',ramdisk_id='',reservation_id='r-c012j9n0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerBasicOps-2006244970',owner_user_name='tempest-TestServerBasicOps-2006244970-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T15:00:01Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='2a3fc2ba2a08423eb2e0bd7cf0fd5cf7',uuid=f1757bed-1718-45e4-a731-11f1a3b4f068,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f62f622f-1d0a-4a68-9540-d1a7f48a66d0", "address": "fa:16:3e:cb:97:d3", "network": {"id": "347be9eb-2d7b-4b2d-b1e8-8ed5a063f269", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1057973257-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, 
"ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "994b02a8c0094d2daa7b775b1f86f394", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf62f622f-1d", "ovs_interfaceid": "f62f622f-1d0a-4a68-9540-d1a7f48a66d0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 20 15:00:07 compute-1 nova_compute[225855]: 2026-01-20 15:00:07.290 225859 DEBUG nova.network.os_vif_util [None req-bcfce5a4-4ed1-4cf4-a1c3-2572368c4e32 2a3fc2ba2a08423eb2e0bd7cf0fd5cf7 994b02a8c0094d2daa7b775b1f86f394 - - default default] Converting VIF {"id": "f62f622f-1d0a-4a68-9540-d1a7f48a66d0", "address": "fa:16:3e:cb:97:d3", "network": {"id": "347be9eb-2d7b-4b2d-b1e8-8ed5a063f269", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1057973257-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "994b02a8c0094d2daa7b775b1f86f394", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf62f622f-1d", "ovs_interfaceid": "f62f622f-1d0a-4a68-9540-d1a7f48a66d0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 15:00:07 compute-1 nova_compute[225855]: 2026-01-20 15:00:07.291 225859 DEBUG nova.network.os_vif_util [None req-bcfce5a4-4ed1-4cf4-a1c3-2572368c4e32 2a3fc2ba2a08423eb2e0bd7cf0fd5cf7 994b02a8c0094d2daa7b775b1f86f394 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cb:97:d3,bridge_name='br-int',has_traffic_filtering=True,id=f62f622f-1d0a-4a68-9540-d1a7f48a66d0,network=Network(347be9eb-2d7b-4b2d-b1e8-8ed5a063f269),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf62f622f-1d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 15:00:07 compute-1 nova_compute[225855]: 2026-01-20 15:00:07.292 225859 DEBUG nova.objects.instance [None req-bcfce5a4-4ed1-4cf4-a1c3-2572368c4e32 2a3fc2ba2a08423eb2e0bd7cf0fd5cf7 994b02a8c0094d2daa7b775b1f86f394 - - default default] Lazy-loading 'pci_devices' on Instance uuid f1757bed-1718-45e4-a731-11f1a3b4f068 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:00:07 compute-1 nova_compute[225855]: 2026-01-20 15:00:07.393 225859 DEBUG nova.virt.libvirt.driver [None req-bcfce5a4-4ed1-4cf4-a1c3-2572368c4e32 2a3fc2ba2a08423eb2e0bd7cf0fd5cf7 994b02a8c0094d2daa7b775b1f86f394 - - default default] [instance: f1757bed-1718-45e4-a731-11f1a3b4f068] End _get_guest_xml xml=<domain type="kvm">
Jan 20 15:00:07 compute-1 nova_compute[225855]:   <uuid>f1757bed-1718-45e4-a731-11f1a3b4f068</uuid>
Jan 20 15:00:07 compute-1 nova_compute[225855]:   <name>instance-00000092</name>
Jan 20 15:00:07 compute-1 nova_compute[225855]:   <memory>131072</memory>
Jan 20 15:00:07 compute-1 nova_compute[225855]:   <vcpu>1</vcpu>
Jan 20 15:00:07 compute-1 nova_compute[225855]:   <metadata>
Jan 20 15:00:07 compute-1 nova_compute[225855]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 15:00:07 compute-1 nova_compute[225855]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 15:00:07 compute-1 nova_compute[225855]:       <nova:name>tempest-TestServerBasicOps-server-2075964816</nova:name>
Jan 20 15:00:07 compute-1 nova_compute[225855]:       <nova:creationTime>2026-01-20 15:00:06</nova:creationTime>
Jan 20 15:00:07 compute-1 nova_compute[225855]:       <nova:flavor name="m1.nano">
Jan 20 15:00:07 compute-1 nova_compute[225855]:         <nova:memory>128</nova:memory>
Jan 20 15:00:07 compute-1 nova_compute[225855]:         <nova:disk>1</nova:disk>
Jan 20 15:00:07 compute-1 nova_compute[225855]:         <nova:swap>0</nova:swap>
Jan 20 15:00:07 compute-1 nova_compute[225855]:         <nova:ephemeral>0</nova:ephemeral>
Jan 20 15:00:07 compute-1 nova_compute[225855]:         <nova:vcpus>1</nova:vcpus>
Jan 20 15:00:07 compute-1 nova_compute[225855]:       </nova:flavor>
Jan 20 15:00:07 compute-1 nova_compute[225855]:       <nova:owner>
Jan 20 15:00:07 compute-1 nova_compute[225855]:         <nova:user uuid="2a3fc2ba2a08423eb2e0bd7cf0fd5cf7">tempest-TestServerBasicOps-2006244970-project-member</nova:user>
Jan 20 15:00:07 compute-1 nova_compute[225855]:         <nova:project uuid="994b02a8c0094d2daa7b775b1f86f394">tempest-TestServerBasicOps-2006244970</nova:project>
Jan 20 15:00:07 compute-1 nova_compute[225855]:       </nova:owner>
Jan 20 15:00:07 compute-1 nova_compute[225855]:       <nova:root type="image" uuid="a32b3e07-16d8-46fd-9a7b-c242c432fcf9"/>
Jan 20 15:00:07 compute-1 nova_compute[225855]:       <nova:ports>
Jan 20 15:00:07 compute-1 nova_compute[225855]:         <nova:port uuid="f62f622f-1d0a-4a68-9540-d1a7f48a66d0">
Jan 20 15:00:07 compute-1 nova_compute[225855]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Jan 20 15:00:07 compute-1 nova_compute[225855]:         </nova:port>
Jan 20 15:00:07 compute-1 nova_compute[225855]:       </nova:ports>
Jan 20 15:00:07 compute-1 nova_compute[225855]:     </nova:instance>
Jan 20 15:00:07 compute-1 nova_compute[225855]:   </metadata>
Jan 20 15:00:07 compute-1 nova_compute[225855]:   <sysinfo type="smbios">
Jan 20 15:00:07 compute-1 nova_compute[225855]:     <system>
Jan 20 15:00:07 compute-1 nova_compute[225855]:       <entry name="manufacturer">RDO</entry>
Jan 20 15:00:07 compute-1 nova_compute[225855]:       <entry name="product">OpenStack Compute</entry>
Jan 20 15:00:07 compute-1 nova_compute[225855]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 15:00:07 compute-1 nova_compute[225855]:       <entry name="serial">f1757bed-1718-45e4-a731-11f1a3b4f068</entry>
Jan 20 15:00:07 compute-1 nova_compute[225855]:       <entry name="uuid">f1757bed-1718-45e4-a731-11f1a3b4f068</entry>
Jan 20 15:00:07 compute-1 nova_compute[225855]:       <entry name="family">Virtual Machine</entry>
Jan 20 15:00:07 compute-1 nova_compute[225855]:     </system>
Jan 20 15:00:07 compute-1 nova_compute[225855]:   </sysinfo>
Jan 20 15:00:07 compute-1 nova_compute[225855]:   <os>
Jan 20 15:00:07 compute-1 nova_compute[225855]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 20 15:00:07 compute-1 nova_compute[225855]:     <boot dev="hd"/>
Jan 20 15:00:07 compute-1 nova_compute[225855]:     <smbios mode="sysinfo"/>
Jan 20 15:00:07 compute-1 nova_compute[225855]:   </os>
Jan 20 15:00:07 compute-1 nova_compute[225855]:   <features>
Jan 20 15:00:07 compute-1 nova_compute[225855]:     <acpi/>
Jan 20 15:00:07 compute-1 nova_compute[225855]:     <apic/>
Jan 20 15:00:07 compute-1 nova_compute[225855]:     <vmcoreinfo/>
Jan 20 15:00:07 compute-1 nova_compute[225855]:   </features>
Jan 20 15:00:07 compute-1 nova_compute[225855]:   <clock offset="utc">
Jan 20 15:00:07 compute-1 nova_compute[225855]:     <timer name="pit" tickpolicy="delay"/>
Jan 20 15:00:07 compute-1 nova_compute[225855]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 20 15:00:07 compute-1 nova_compute[225855]:     <timer name="hpet" present="no"/>
Jan 20 15:00:07 compute-1 nova_compute[225855]:   </clock>
Jan 20 15:00:07 compute-1 nova_compute[225855]:   <cpu mode="custom" match="exact">
Jan 20 15:00:07 compute-1 nova_compute[225855]:     <model>Nehalem</model>
Jan 20 15:00:07 compute-1 nova_compute[225855]:     <topology sockets="1" cores="1" threads="1"/>
Jan 20 15:00:07 compute-1 nova_compute[225855]:   </cpu>
Jan 20 15:00:07 compute-1 nova_compute[225855]:   <devices>
Jan 20 15:00:07 compute-1 nova_compute[225855]:     <disk type="network" device="disk">
Jan 20 15:00:07 compute-1 nova_compute[225855]:       <driver type="raw" cache="none"/>
Jan 20 15:00:07 compute-1 nova_compute[225855]:       <source protocol="rbd" name="vms/f1757bed-1718-45e4-a731-11f1a3b4f068_disk">
Jan 20 15:00:07 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 15:00:07 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 15:00:07 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 15:00:07 compute-1 nova_compute[225855]:       </source>
Jan 20 15:00:07 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 15:00:07 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 15:00:07 compute-1 nova_compute[225855]:       </auth>
Jan 20 15:00:07 compute-1 nova_compute[225855]:       <target dev="vda" bus="virtio"/>
Jan 20 15:00:07 compute-1 nova_compute[225855]:     </disk>
Jan 20 15:00:07 compute-1 nova_compute[225855]:     <disk type="network" device="cdrom">
Jan 20 15:00:07 compute-1 nova_compute[225855]:       <driver type="raw" cache="none"/>
Jan 20 15:00:07 compute-1 nova_compute[225855]:       <source protocol="rbd" name="vms/f1757bed-1718-45e4-a731-11f1a3b4f068_disk.config">
Jan 20 15:00:07 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 15:00:07 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 15:00:07 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 15:00:07 compute-1 nova_compute[225855]:       </source>
Jan 20 15:00:07 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 15:00:07 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 15:00:07 compute-1 nova_compute[225855]:       </auth>
Jan 20 15:00:07 compute-1 nova_compute[225855]:       <target dev="sda" bus="sata"/>
Jan 20 15:00:07 compute-1 nova_compute[225855]:     </disk>
Jan 20 15:00:07 compute-1 nova_compute[225855]:     <interface type="ethernet">
Jan 20 15:00:07 compute-1 nova_compute[225855]:       <mac address="fa:16:3e:cb:97:d3"/>
Jan 20 15:00:07 compute-1 nova_compute[225855]:       <model type="virtio"/>
Jan 20 15:00:07 compute-1 nova_compute[225855]:       <driver name="vhost" rx_queue_size="512"/>
Jan 20 15:00:07 compute-1 nova_compute[225855]:       <mtu size="1442"/>
Jan 20 15:00:07 compute-1 nova_compute[225855]:       <target dev="tapf62f622f-1d"/>
Jan 20 15:00:07 compute-1 nova_compute[225855]:     </interface>
Jan 20 15:00:07 compute-1 nova_compute[225855]:     <serial type="pty">
Jan 20 15:00:07 compute-1 nova_compute[225855]:       <log file="/var/lib/nova/instances/f1757bed-1718-45e4-a731-11f1a3b4f068/console.log" append="off"/>
Jan 20 15:00:07 compute-1 nova_compute[225855]:     </serial>
Jan 20 15:00:07 compute-1 nova_compute[225855]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 20 15:00:07 compute-1 nova_compute[225855]:     <video>
Jan 20 15:00:07 compute-1 nova_compute[225855]:       <model type="virtio"/>
Jan 20 15:00:07 compute-1 nova_compute[225855]:     </video>
Jan 20 15:00:07 compute-1 nova_compute[225855]:     <input type="tablet" bus="usb"/>
Jan 20 15:00:07 compute-1 nova_compute[225855]:     <rng model="virtio">
Jan 20 15:00:07 compute-1 nova_compute[225855]:       <backend model="random">/dev/urandom</backend>
Jan 20 15:00:07 compute-1 nova_compute[225855]:     </rng>
Jan 20 15:00:07 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root"/>
Jan 20 15:00:07 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:00:07 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:00:07 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:00:07 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:00:07 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:00:07 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:00:07 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:00:07 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:00:07 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:00:07 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:00:07 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:00:07 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:00:07 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:00:07 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:00:07 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:00:07 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:00:07 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:00:07 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:00:07 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:00:07 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:00:07 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:00:07 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:00:07 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:00:07 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:00:07 compute-1 nova_compute[225855]:     <controller type="usb" index="0"/>
Jan 20 15:00:07 compute-1 nova_compute[225855]:     <memballoon model="virtio">
Jan 20 15:00:07 compute-1 nova_compute[225855]:       <stats period="10"/>
Jan 20 15:00:07 compute-1 nova_compute[225855]:     </memballoon>
Jan 20 15:00:07 compute-1 nova_compute[225855]:   </devices>
Jan 20 15:00:07 compute-1 nova_compute[225855]: </domain>
Jan 20 15:00:07 compute-1 nova_compute[225855]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 20 15:00:07 compute-1 nova_compute[225855]: 2026-01-20 15:00:07.395 225859 DEBUG nova.compute.manager [None req-bcfce5a4-4ed1-4cf4-a1c3-2572368c4e32 2a3fc2ba2a08423eb2e0bd7cf0fd5cf7 994b02a8c0094d2daa7b775b1f86f394 - - default default] [instance: f1757bed-1718-45e4-a731-11f1a3b4f068] Preparing to wait for external event network-vif-plugged-f62f622f-1d0a-4a68-9540-d1a7f48a66d0 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 20 15:00:07 compute-1 nova_compute[225855]: 2026-01-20 15:00:07.395 225859 DEBUG oslo_concurrency.lockutils [None req-bcfce5a4-4ed1-4cf4-a1c3-2572368c4e32 2a3fc2ba2a08423eb2e0bd7cf0fd5cf7 994b02a8c0094d2daa7b775b1f86f394 - - default default] Acquiring lock "f1757bed-1718-45e4-a731-11f1a3b4f068-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:00:07 compute-1 nova_compute[225855]: 2026-01-20 15:00:07.396 225859 DEBUG oslo_concurrency.lockutils [None req-bcfce5a4-4ed1-4cf4-a1c3-2572368c4e32 2a3fc2ba2a08423eb2e0bd7cf0fd5cf7 994b02a8c0094d2daa7b775b1f86f394 - - default default] Lock "f1757bed-1718-45e4-a731-11f1a3b4f068-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:00:07 compute-1 nova_compute[225855]: 2026-01-20 15:00:07.396 225859 DEBUG oslo_concurrency.lockutils [None req-bcfce5a4-4ed1-4cf4-a1c3-2572368c4e32 2a3fc2ba2a08423eb2e0bd7cf0fd5cf7 994b02a8c0094d2daa7b775b1f86f394 - - default default] Lock "f1757bed-1718-45e4-a731-11f1a3b4f068-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:00:07 compute-1 nova_compute[225855]: 2026-01-20 15:00:07.397 225859 DEBUG nova.virt.libvirt.vif [None req-bcfce5a4-4ed1-4cf4-a1c3-2572368c4e32 2a3fc2ba2a08423eb2e0bd7cf0fd5cf7 994b02a8c0094d2daa7b775b1f86f394 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T14:59:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerBasicOps-server-2075964816',display_name='tempest-TestServerBasicOps-server-2075964816',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testserverbasicops-server-2075964816',id=146,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCiCeGBNIIjQtIa5Udw9T4nKiS3sQpmIFto+Zj/ppiwHl3KoPC1ZwXSQfteIxtI2AuErtkRwyRat7WVpBCL4SK6jCl43k4+LHYwocVMfWmtSf2fkMge6nUPK98YTBKBV9g==',key_name='tempest-TestServerBasicOps-469960215',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={meta1='data1',meta2='data2',metaN='dataN'},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='994b02a8c0094d2daa7b775b1f86f394',ramdisk_id='',reservation_id='r-c012j9n0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerBasicOps-2006244970',owner_user_name='tempest-TestServerBasicOps-2006244970-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T15:00:01Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='2a3fc2ba2a08423eb2e0bd7cf0fd5cf7',uuid=f1757bed-1718-45e4-a731-11f1a3b4f068,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f62f622f-1d0a-4a68-9540-d1a7f48a66d0", "address": "fa:16:3e:cb:97:d3", "network": {"id": "347be9eb-2d7b-4b2d-b1e8-8ed5a063f269", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1057973257-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "994b02a8c0094d2daa7b775b1f86f394", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf62f622f-1d", "ovs_interfaceid": "f62f622f-1d0a-4a68-9540-d1a7f48a66d0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 20 15:00:07 compute-1 nova_compute[225855]: 2026-01-20 15:00:07.397 225859 DEBUG nova.network.os_vif_util [None req-bcfce5a4-4ed1-4cf4-a1c3-2572368c4e32 2a3fc2ba2a08423eb2e0bd7cf0fd5cf7 994b02a8c0094d2daa7b775b1f86f394 - - default default] Converting VIF {"id": "f62f622f-1d0a-4a68-9540-d1a7f48a66d0", "address": "fa:16:3e:cb:97:d3", "network": {"id": "347be9eb-2d7b-4b2d-b1e8-8ed5a063f269", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1057973257-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "994b02a8c0094d2daa7b775b1f86f394", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf62f622f-1d", "ovs_interfaceid": "f62f622f-1d0a-4a68-9540-d1a7f48a66d0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 15:00:07 compute-1 nova_compute[225855]: 2026-01-20 15:00:07.397 225859 DEBUG nova.network.os_vif_util [None req-bcfce5a4-4ed1-4cf4-a1c3-2572368c4e32 2a3fc2ba2a08423eb2e0bd7cf0fd5cf7 994b02a8c0094d2daa7b775b1f86f394 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cb:97:d3,bridge_name='br-int',has_traffic_filtering=True,id=f62f622f-1d0a-4a68-9540-d1a7f48a66d0,network=Network(347be9eb-2d7b-4b2d-b1e8-8ed5a063f269),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf62f622f-1d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 15:00:07 compute-1 nova_compute[225855]: 2026-01-20 15:00:07.398 225859 DEBUG os_vif [None req-bcfce5a4-4ed1-4cf4-a1c3-2572368c4e32 2a3fc2ba2a08423eb2e0bd7cf0fd5cf7 994b02a8c0094d2daa7b775b1f86f394 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:cb:97:d3,bridge_name='br-int',has_traffic_filtering=True,id=f62f622f-1d0a-4a68-9540-d1a7f48a66d0,network=Network(347be9eb-2d7b-4b2d-b1e8-8ed5a063f269),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf62f622f-1d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 20 15:00:07 compute-1 nova_compute[225855]: 2026-01-20 15:00:07.398 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:00:07 compute-1 nova_compute[225855]: 2026-01-20 15:00:07.399 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:00:07 compute-1 nova_compute[225855]: 2026-01-20 15:00:07.399 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 15:00:07 compute-1 nova_compute[225855]: 2026-01-20 15:00:07.402 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:00:07 compute-1 nova_compute[225855]: 2026-01-20 15:00:07.402 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf62f622f-1d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:00:07 compute-1 nova_compute[225855]: 2026-01-20 15:00:07.403 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf62f622f-1d, col_values=(('external_ids', {'iface-id': 'f62f622f-1d0a-4a68-9540-d1a7f48a66d0', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:cb:97:d3', 'vm-uuid': 'f1757bed-1718-45e4-a731-11f1a3b4f068'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:00:07 compute-1 nova_compute[225855]: 2026-01-20 15:00:07.404 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:00:07 compute-1 NetworkManager[49104]: <info>  [1768921207.4052] manager: (tapf62f622f-1d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/249)
Jan 20 15:00:07 compute-1 nova_compute[225855]: 2026-01-20 15:00:07.406 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 15:00:07 compute-1 nova_compute[225855]: 2026-01-20 15:00:07.410 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:00:07 compute-1 nova_compute[225855]: 2026-01-20 15:00:07.410 225859 INFO os_vif [None req-bcfce5a4-4ed1-4cf4-a1c3-2572368c4e32 2a3fc2ba2a08423eb2e0bd7cf0fd5cf7 994b02a8c0094d2daa7b775b1f86f394 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:cb:97:d3,bridge_name='br-int',has_traffic_filtering=True,id=f62f622f-1d0a-4a68-9540-d1a7f48a66d0,network=Network(347be9eb-2d7b-4b2d-b1e8-8ed5a063f269),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf62f622f-1d')
Jan 20 15:00:07 compute-1 nova_compute[225855]: 2026-01-20 15:00:07.464 225859 DEBUG nova.virt.libvirt.driver [None req-bcfce5a4-4ed1-4cf4-a1c3-2572368c4e32 2a3fc2ba2a08423eb2e0bd7cf0fd5cf7 994b02a8c0094d2daa7b775b1f86f394 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 15:00:07 compute-1 nova_compute[225855]: 2026-01-20 15:00:07.465 225859 DEBUG nova.virt.libvirt.driver [None req-bcfce5a4-4ed1-4cf4-a1c3-2572368c4e32 2a3fc2ba2a08423eb2e0bd7cf0fd5cf7 994b02a8c0094d2daa7b775b1f86f394 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 15:00:07 compute-1 nova_compute[225855]: 2026-01-20 15:00:07.465 225859 DEBUG nova.virt.libvirt.driver [None req-bcfce5a4-4ed1-4cf4-a1c3-2572368c4e32 2a3fc2ba2a08423eb2e0bd7cf0fd5cf7 994b02a8c0094d2daa7b775b1f86f394 - - default default] No VIF found with MAC fa:16:3e:cb:97:d3, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 20 15:00:07 compute-1 nova_compute[225855]: 2026-01-20 15:00:07.465 225859 INFO nova.virt.libvirt.driver [None req-bcfce5a4-4ed1-4cf4-a1c3-2572368c4e32 2a3fc2ba2a08423eb2e0bd7cf0fd5cf7 994b02a8c0094d2daa7b775b1f86f394 - - default default] [instance: f1757bed-1718-45e4-a731-11f1a3b4f068] Using config drive
Jan 20 15:00:07 compute-1 nova_compute[225855]: 2026-01-20 15:00:07.492 225859 DEBUG nova.storage.rbd_utils [None req-bcfce5a4-4ed1-4cf4-a1c3-2572368c4e32 2a3fc2ba2a08423eb2e0bd7cf0fd5cf7 994b02a8c0094d2daa7b775b1f86f394 - - default default] rbd image f1757bed-1718-45e4-a731-11f1a3b4f068_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:00:07 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/1652995315' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:00:07 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/2825997578' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:00:07 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/2825997578' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:00:07 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/407955871' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:00:07 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e321 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:00:07 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:00:07 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:00:07 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:00:07.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:00:08 compute-1 nova_compute[225855]: 2026-01-20 15:00:08.177 225859 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1768921193.175826, b4c1468d-9914-426a-9464-c1167de53632 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:00:08 compute-1 nova_compute[225855]: 2026-01-20 15:00:08.177 225859 INFO nova.compute.manager [-] [instance: b4c1468d-9914-426a-9464-c1167de53632] VM Stopped (Lifecycle Event)
Jan 20 15:00:08 compute-1 nova_compute[225855]: 2026-01-20 15:00:08.205 225859 DEBUG nova.compute.manager [None req-bf9d1724-5477-49ba-badc-82a806421648 - - - - - -] [instance: b4c1468d-9914-426a-9464-c1167de53632] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:00:08 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:00:08 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:00:08 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:00:08.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:00:08 compute-1 ceph-mon[81775]: pgmap v2242: 321 pgs: 321 active+clean; 368 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 461 KiB/s rd, 1.8 MiB/s wr, 108 op/s
Jan 20 15:00:08 compute-1 nova_compute[225855]: 2026-01-20 15:00:08.964 225859 INFO nova.virt.libvirt.driver [None req-bcfce5a4-4ed1-4cf4-a1c3-2572368c4e32 2a3fc2ba2a08423eb2e0bd7cf0fd5cf7 994b02a8c0094d2daa7b775b1f86f394 - - default default] [instance: f1757bed-1718-45e4-a731-11f1a3b4f068] Creating config drive at /var/lib/nova/instances/f1757bed-1718-45e4-a731-11f1a3b4f068/disk.config
Jan 20 15:00:08 compute-1 nova_compute[225855]: 2026-01-20 15:00:08.971 225859 DEBUG oslo_concurrency.processutils [None req-bcfce5a4-4ed1-4cf4-a1c3-2572368c4e32 2a3fc2ba2a08423eb2e0bd7cf0fd5cf7 994b02a8c0094d2daa7b775b1f86f394 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f1757bed-1718-45e4-a731-11f1a3b4f068/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpmq204j3q execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:00:09 compute-1 nova_compute[225855]: 2026-01-20 15:00:09.105 225859 DEBUG oslo_concurrency.processutils [None req-bcfce5a4-4ed1-4cf4-a1c3-2572368c4e32 2a3fc2ba2a08423eb2e0bd7cf0fd5cf7 994b02a8c0094d2daa7b775b1f86f394 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f1757bed-1718-45e4-a731-11f1a3b4f068/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpmq204j3q" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:00:09 compute-1 nova_compute[225855]: 2026-01-20 15:00:09.134 225859 DEBUG nova.storage.rbd_utils [None req-bcfce5a4-4ed1-4cf4-a1c3-2572368c4e32 2a3fc2ba2a08423eb2e0bd7cf0fd5cf7 994b02a8c0094d2daa7b775b1f86f394 - - default default] rbd image f1757bed-1718-45e4-a731-11f1a3b4f068_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:00:09 compute-1 nova_compute[225855]: 2026-01-20 15:00:09.139 225859 DEBUG oslo_concurrency.processutils [None req-bcfce5a4-4ed1-4cf4-a1c3-2572368c4e32 2a3fc2ba2a08423eb2e0bd7cf0fd5cf7 994b02a8c0094d2daa7b775b1f86f394 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/f1757bed-1718-45e4-a731-11f1a3b4f068/disk.config f1757bed-1718-45e4-a731-11f1a3b4f068_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:00:09 compute-1 nova_compute[225855]: 2026-01-20 15:00:09.167 225859 DEBUG nova.network.neutron [req-eb198a38-ff06-4e43-bacb-710a317b1b28 req-77adfd5a-ef37-4760-ab9d-e4706e74e755 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f1757bed-1718-45e4-a731-11f1a3b4f068] Updated VIF entry in instance network info cache for port f62f622f-1d0a-4a68-9540-d1a7f48a66d0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 15:00:09 compute-1 nova_compute[225855]: 2026-01-20 15:00:09.168 225859 DEBUG nova.network.neutron [req-eb198a38-ff06-4e43-bacb-710a317b1b28 req-77adfd5a-ef37-4760-ab9d-e4706e74e755 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f1757bed-1718-45e4-a731-11f1a3b4f068] Updating instance_info_cache with network_info: [{"id": "f62f622f-1d0a-4a68-9540-d1a7f48a66d0", "address": "fa:16:3e:cb:97:d3", "network": {"id": "347be9eb-2d7b-4b2d-b1e8-8ed5a063f269", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1057973257-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "994b02a8c0094d2daa7b775b1f86f394", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf62f622f-1d", "ovs_interfaceid": "f62f622f-1d0a-4a68-9540-d1a7f48a66d0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:00:09 compute-1 nova_compute[225855]: 2026-01-20 15:00:09.195 225859 DEBUG oslo_concurrency.lockutils [req-eb198a38-ff06-4e43-bacb-710a317b1b28 req-77adfd5a-ef37-4760-ab9d-e4706e74e755 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-f1757bed-1718-45e4-a731-11f1a3b4f068" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:00:09 compute-1 nova_compute[225855]: 2026-01-20 15:00:09.305 225859 DEBUG oslo_concurrency.processutils [None req-bcfce5a4-4ed1-4cf4-a1c3-2572368c4e32 2a3fc2ba2a08423eb2e0bd7cf0fd5cf7 994b02a8c0094d2daa7b775b1f86f394 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/f1757bed-1718-45e4-a731-11f1a3b4f068/disk.config f1757bed-1718-45e4-a731-11f1a3b4f068_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.167s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:00:09 compute-1 nova_compute[225855]: 2026-01-20 15:00:09.306 225859 INFO nova.virt.libvirt.driver [None req-bcfce5a4-4ed1-4cf4-a1c3-2572368c4e32 2a3fc2ba2a08423eb2e0bd7cf0fd5cf7 994b02a8c0094d2daa7b775b1f86f394 - - default default] [instance: f1757bed-1718-45e4-a731-11f1a3b4f068] Deleting local config drive /var/lib/nova/instances/f1757bed-1718-45e4-a731-11f1a3b4f068/disk.config because it was imported into RBD.
Jan 20 15:00:09 compute-1 kernel: tapf62f622f-1d: entered promiscuous mode
Jan 20 15:00:09 compute-1 NetworkManager[49104]: <info>  [1768921209.3555] manager: (tapf62f622f-1d): new Tun device (/org/freedesktop/NetworkManager/Devices/250)
Jan 20 15:00:09 compute-1 nova_compute[225855]: 2026-01-20 15:00:09.356 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:00:09 compute-1 ovn_controller[130490]: 2026-01-20T15:00:09Z|00579|binding|INFO|Claiming lport f62f622f-1d0a-4a68-9540-d1a7f48a66d0 for this chassis.
Jan 20 15:00:09 compute-1 ovn_controller[130490]: 2026-01-20T15:00:09Z|00580|binding|INFO|f62f622f-1d0a-4a68-9540-d1a7f48a66d0: Claiming fa:16:3e:cb:97:d3 10.100.0.13
Jan 20 15:00:09 compute-1 nova_compute[225855]: 2026-01-20 15:00:09.361 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:00:09 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:00:09.368 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cb:97:d3 10.100.0.13'], port_security=['fa:16:3e:cb:97:d3 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'f1757bed-1718-45e4-a731-11f1a3b4f068', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-347be9eb-2d7b-4b2d-b1e8-8ed5a063f269', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '994b02a8c0094d2daa7b775b1f86f394', 'neutron:revision_number': '2', 'neutron:security_group_ids': '6cdfb95a-0d3a-472b-8deb-06068f9edf9a c3b2b511-7001-4a00-abcb-ee7970518e80', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=34e153e6-2244-443e-a2b3-4d18f8409d44, chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], logical_port=f62f622f-1d0a-4a68-9540-d1a7f48a66d0) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:00:09 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:00:09.369 140354 INFO neutron.agent.ovn.metadata.agent [-] Port f62f622f-1d0a-4a68-9540-d1a7f48a66d0 in datapath 347be9eb-2d7b-4b2d-b1e8-8ed5a063f269 bound to our chassis
Jan 20 15:00:09 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:00:09.370 140354 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 347be9eb-2d7b-4b2d-b1e8-8ed5a063f269
Jan 20 15:00:09 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:00:09.380 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[22690c3c-8831-4d5b-b597-17e5ce6250e3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:00:09 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:00:09.381 140354 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap347be9eb-21 in ovnmeta-347be9eb-2d7b-4b2d-b1e8-8ed5a063f269 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 20 15:00:09 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:00:09.383 229707 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap347be9eb-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 20 15:00:09 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:00:09.383 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[c506447a-6b64-47c9-ade0-1f272883e060]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:00:09 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:00:09.383 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[1eb5127c-52c7-466e-ab6b-a4260ab750b4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:00:09 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:00:09.394 140466 DEBUG oslo.privsep.daemon [-] privsep: reply[ff75f058-a56b-4cf2-a50a-5893033d7a8c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:00:09 compute-1 systemd-udevd[283611]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 15:00:09 compute-1 systemd-machined[194361]: New machine qemu-68-instance-00000092.
Jan 20 15:00:09 compute-1 NetworkManager[49104]: <info>  [1768921209.4129] device (tapf62f622f-1d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 20 15:00:09 compute-1 NetworkManager[49104]: <info>  [1768921209.4137] device (tapf62f622f-1d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 20 15:00:09 compute-1 systemd[1]: Started Virtual Machine qemu-68-instance-00000092.
Jan 20 15:00:09 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:00:09.422 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[e048b386-0650-489c-bf84-31e4a06eee2c]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:00:09 compute-1 nova_compute[225855]: 2026-01-20 15:00:09.427 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:00:09 compute-1 ovn_controller[130490]: 2026-01-20T15:00:09Z|00581|binding|INFO|Setting lport f62f622f-1d0a-4a68-9540-d1a7f48a66d0 ovn-installed in OVS
Jan 20 15:00:09 compute-1 ovn_controller[130490]: 2026-01-20T15:00:09Z|00582|binding|INFO|Setting lport f62f622f-1d0a-4a68-9540-d1a7f48a66d0 up in Southbound
Jan 20 15:00:09 compute-1 nova_compute[225855]: 2026-01-20 15:00:09.436 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:00:09 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:00:09.450 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[b014a32a-39c2-41a2-b0b9-894c14d59861]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:00:09 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:00:09.455 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[2565c99d-bebe-4656-aa25-021ecb90076b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:00:09 compute-1 NetworkManager[49104]: <info>  [1768921209.4571] manager: (tap347be9eb-20): new Veth device (/org/freedesktop/NetworkManager/Devices/251)
Jan 20 15:00:09 compute-1 systemd-udevd[283615]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 15:00:09 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:00:09.489 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[ede41b79-2134-4c96-b69c-4b0aa498e963]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:00:09 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:00:09.492 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[01e330ff-29d8-475e-ab81-82e03392e5f6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:00:09 compute-1 NetworkManager[49104]: <info>  [1768921209.5152] device (tap347be9eb-20): carrier: link connected
Jan 20 15:00:09 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:00:09.519 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[bd6686cf-9bbd-4e25-8464-211df3fa66dc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:00:09 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:00:09.537 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[96d6ccc8-871c-4f6e-b698-cf013c2c6574]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap347be9eb-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8a:7f:17'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 164], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 624844, 'reachable_time': 18143, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 283643, 'error': None, 'target': 'ovnmeta-347be9eb-2d7b-4b2d-b1e8-8ed5a063f269', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:00:09 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:00:09.553 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[2c2c731e-e364-4d95-93bc-b9d5edffde38]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe8a:7f17'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 624844, 'tstamp': 624844}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 283644, 'error': None, 'target': 'ovnmeta-347be9eb-2d7b-4b2d-b1e8-8ed5a063f269', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:00:09 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:00:09.571 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[55105227-023b-498b-a0db-5e01aea412e0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap347be9eb-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8a:7f:17'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 164], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 624844, 'reachable_time': 18143, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 283645, 'error': None, 'target': 'ovnmeta-347be9eb-2d7b-4b2d-b1e8-8ed5a063f269', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:00:09 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:00:09.605 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[2d0c21eb-b3c2-44a0-9279-22ae18ee6be3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:00:09 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:00:09.669 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[18665a3f-6c01-42ba-97a0-8d6e2fb8abbf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:00:09 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:00:09.671 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap347be9eb-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:00:09 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:00:09.671 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 15:00:09 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:00:09.671 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap347be9eb-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:00:09 compute-1 nova_compute[225855]: 2026-01-20 15:00:09.673 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:00:09 compute-1 NetworkManager[49104]: <info>  [1768921209.6745] manager: (tap347be9eb-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/252)
Jan 20 15:00:09 compute-1 kernel: tap347be9eb-20: entered promiscuous mode
Jan 20 15:00:09 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:00:09.677 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap347be9eb-20, col_values=(('external_ids', {'iface-id': '22c381cf-5510-43a8-bc2e-16e5bc5ac409'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:00:09 compute-1 nova_compute[225855]: 2026-01-20 15:00:09.678 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:00:09 compute-1 ovn_controller[130490]: 2026-01-20T15:00:09Z|00583|binding|INFO|Releasing lport 22c381cf-5510-43a8-bc2e-16e5bc5ac409 from this chassis (sb_readonly=0)
Jan 20 15:00:09 compute-1 nova_compute[225855]: 2026-01-20 15:00:09.695 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:00:09 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:00:09.696 140354 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/347be9eb-2d7b-4b2d-b1e8-8ed5a063f269.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/347be9eb-2d7b-4b2d-b1e8-8ed5a063f269.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 20 15:00:09 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:00:09.697 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[70204339-f03a-4adc-8c0b-5b6245d27d14]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:00:09 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:00:09.698 140354 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 20 15:00:09 compute-1 ovn_metadata_agent[140349]: global
Jan 20 15:00:09 compute-1 ovn_metadata_agent[140349]:     log         /dev/log local0 debug
Jan 20 15:00:09 compute-1 ovn_metadata_agent[140349]:     log-tag     haproxy-metadata-proxy-347be9eb-2d7b-4b2d-b1e8-8ed5a063f269
Jan 20 15:00:09 compute-1 ovn_metadata_agent[140349]:     user        root
Jan 20 15:00:09 compute-1 ovn_metadata_agent[140349]:     group       root
Jan 20 15:00:09 compute-1 ovn_metadata_agent[140349]:     maxconn     1024
Jan 20 15:00:09 compute-1 ovn_metadata_agent[140349]:     pidfile     /var/lib/neutron/external/pids/347be9eb-2d7b-4b2d-b1e8-8ed5a063f269.pid.haproxy
Jan 20 15:00:09 compute-1 ovn_metadata_agent[140349]:     daemon
Jan 20 15:00:09 compute-1 ovn_metadata_agent[140349]: 
Jan 20 15:00:09 compute-1 ovn_metadata_agent[140349]: defaults
Jan 20 15:00:09 compute-1 ovn_metadata_agent[140349]:     log global
Jan 20 15:00:09 compute-1 ovn_metadata_agent[140349]:     mode http
Jan 20 15:00:09 compute-1 ovn_metadata_agent[140349]:     option httplog
Jan 20 15:00:09 compute-1 ovn_metadata_agent[140349]:     option dontlognull
Jan 20 15:00:09 compute-1 ovn_metadata_agent[140349]:     option http-server-close
Jan 20 15:00:09 compute-1 ovn_metadata_agent[140349]:     option forwardfor
Jan 20 15:00:09 compute-1 ovn_metadata_agent[140349]:     retries                 3
Jan 20 15:00:09 compute-1 ovn_metadata_agent[140349]:     timeout http-request    30s
Jan 20 15:00:09 compute-1 ovn_metadata_agent[140349]:     timeout connect         30s
Jan 20 15:00:09 compute-1 ovn_metadata_agent[140349]:     timeout client          32s
Jan 20 15:00:09 compute-1 ovn_metadata_agent[140349]:     timeout server          32s
Jan 20 15:00:09 compute-1 ovn_metadata_agent[140349]:     timeout http-keep-alive 30s
Jan 20 15:00:09 compute-1 ovn_metadata_agent[140349]: 
Jan 20 15:00:09 compute-1 ovn_metadata_agent[140349]: 
Jan 20 15:00:09 compute-1 ovn_metadata_agent[140349]: listen listener
Jan 20 15:00:09 compute-1 ovn_metadata_agent[140349]:     bind 169.254.169.254:80
Jan 20 15:00:09 compute-1 ovn_metadata_agent[140349]:     server metadata /var/lib/neutron/metadata_proxy
Jan 20 15:00:09 compute-1 ovn_metadata_agent[140349]:     http-request add-header X-OVN-Network-ID 347be9eb-2d7b-4b2d-b1e8-8ed5a063f269
Jan 20 15:00:09 compute-1 ovn_metadata_agent[140349]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 20 15:00:09 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:00:09.698 140354 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-347be9eb-2d7b-4b2d-b1e8-8ed5a063f269', 'env', 'PROCESS_TAG=haproxy-347be9eb-2d7b-4b2d-b1e8-8ed5a063f269', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/347be9eb-2d7b-4b2d-b1e8-8ed5a063f269.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 20 15:00:09 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:00:09 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:00:09 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:00:09.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:00:09 compute-1 nova_compute[225855]: 2026-01-20 15:00:09.740 225859 DEBUG nova.compute.manager [req-cdd13de9-6c41-45a6-9770-5c2d74c12cc4 req-76ada829-2b14-4f79-834c-ed1e9b2a7d54 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f1757bed-1718-45e4-a731-11f1a3b4f068] Received event network-vif-plugged-f62f622f-1d0a-4a68-9540-d1a7f48a66d0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:00:09 compute-1 nova_compute[225855]: 2026-01-20 15:00:09.741 225859 DEBUG oslo_concurrency.lockutils [req-cdd13de9-6c41-45a6-9770-5c2d74c12cc4 req-76ada829-2b14-4f79-834c-ed1e9b2a7d54 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "f1757bed-1718-45e4-a731-11f1a3b4f068-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:00:09 compute-1 nova_compute[225855]: 2026-01-20 15:00:09.741 225859 DEBUG oslo_concurrency.lockutils [req-cdd13de9-6c41-45a6-9770-5c2d74c12cc4 req-76ada829-2b14-4f79-834c-ed1e9b2a7d54 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "f1757bed-1718-45e4-a731-11f1a3b4f068-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:00:09 compute-1 nova_compute[225855]: 2026-01-20 15:00:09.741 225859 DEBUG oslo_concurrency.lockutils [req-cdd13de9-6c41-45a6-9770-5c2d74c12cc4 req-76ada829-2b14-4f79-834c-ed1e9b2a7d54 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "f1757bed-1718-45e4-a731-11f1a3b4f068-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:00:09 compute-1 nova_compute[225855]: 2026-01-20 15:00:09.741 225859 DEBUG nova.compute.manager [req-cdd13de9-6c41-45a6-9770-5c2d74c12cc4 req-76ada829-2b14-4f79-834c-ed1e9b2a7d54 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f1757bed-1718-45e4-a731-11f1a3b4f068] Processing event network-vif-plugged-f62f622f-1d0a-4a68-9540-d1a7f48a66d0 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 20 15:00:09 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e322 e322: 3 total, 3 up, 3 in
Jan 20 15:00:09 compute-1 nova_compute[225855]: 2026-01-20 15:00:09.871 225859 DEBUG nova.compute.manager [None req-bcfce5a4-4ed1-4cf4-a1c3-2572368c4e32 2a3fc2ba2a08423eb2e0bd7cf0fd5cf7 994b02a8c0094d2daa7b775b1f86f394 - - default default] [instance: f1757bed-1718-45e4-a731-11f1a3b4f068] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 20 15:00:09 compute-1 nova_compute[225855]: 2026-01-20 15:00:09.871 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768921209.8703406, f1757bed-1718-45e4-a731-11f1a3b4f068 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:00:09 compute-1 nova_compute[225855]: 2026-01-20 15:00:09.872 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: f1757bed-1718-45e4-a731-11f1a3b4f068] VM Started (Lifecycle Event)
Jan 20 15:00:09 compute-1 nova_compute[225855]: 2026-01-20 15:00:09.883 225859 DEBUG nova.virt.libvirt.driver [None req-bcfce5a4-4ed1-4cf4-a1c3-2572368c4e32 2a3fc2ba2a08423eb2e0bd7cf0fd5cf7 994b02a8c0094d2daa7b775b1f86f394 - - default default] [instance: f1757bed-1718-45e4-a731-11f1a3b4f068] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 20 15:00:09 compute-1 nova_compute[225855]: 2026-01-20 15:00:09.886 225859 INFO nova.virt.libvirt.driver [-] [instance: f1757bed-1718-45e4-a731-11f1a3b4f068] Instance spawned successfully.
Jan 20 15:00:09 compute-1 nova_compute[225855]: 2026-01-20 15:00:09.887 225859 DEBUG nova.virt.libvirt.driver [None req-bcfce5a4-4ed1-4cf4-a1c3-2572368c4e32 2a3fc2ba2a08423eb2e0bd7cf0fd5cf7 994b02a8c0094d2daa7b775b1f86f394 - - default default] [instance: f1757bed-1718-45e4-a731-11f1a3b4f068] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 20 15:00:09 compute-1 nova_compute[225855]: 2026-01-20 15:00:09.890 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: f1757bed-1718-45e4-a731-11f1a3b4f068] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:00:09 compute-1 nova_compute[225855]: 2026-01-20 15:00:09.894 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: f1757bed-1718-45e4-a731-11f1a3b4f068] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 15:00:09 compute-1 nova_compute[225855]: 2026-01-20 15:00:09.902 225859 DEBUG nova.virt.libvirt.driver [None req-bcfce5a4-4ed1-4cf4-a1c3-2572368c4e32 2a3fc2ba2a08423eb2e0bd7cf0fd5cf7 994b02a8c0094d2daa7b775b1f86f394 - - default default] [instance: f1757bed-1718-45e4-a731-11f1a3b4f068] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:00:09 compute-1 nova_compute[225855]: 2026-01-20 15:00:09.903 225859 DEBUG nova.virt.libvirt.driver [None req-bcfce5a4-4ed1-4cf4-a1c3-2572368c4e32 2a3fc2ba2a08423eb2e0bd7cf0fd5cf7 994b02a8c0094d2daa7b775b1f86f394 - - default default] [instance: f1757bed-1718-45e4-a731-11f1a3b4f068] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:00:09 compute-1 nova_compute[225855]: 2026-01-20 15:00:09.903 225859 DEBUG nova.virt.libvirt.driver [None req-bcfce5a4-4ed1-4cf4-a1c3-2572368c4e32 2a3fc2ba2a08423eb2e0bd7cf0fd5cf7 994b02a8c0094d2daa7b775b1f86f394 - - default default] [instance: f1757bed-1718-45e4-a731-11f1a3b4f068] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:00:09 compute-1 nova_compute[225855]: 2026-01-20 15:00:09.903 225859 DEBUG nova.virt.libvirt.driver [None req-bcfce5a4-4ed1-4cf4-a1c3-2572368c4e32 2a3fc2ba2a08423eb2e0bd7cf0fd5cf7 994b02a8c0094d2daa7b775b1f86f394 - - default default] [instance: f1757bed-1718-45e4-a731-11f1a3b4f068] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:00:09 compute-1 nova_compute[225855]: 2026-01-20 15:00:09.904 225859 DEBUG nova.virt.libvirt.driver [None req-bcfce5a4-4ed1-4cf4-a1c3-2572368c4e32 2a3fc2ba2a08423eb2e0bd7cf0fd5cf7 994b02a8c0094d2daa7b775b1f86f394 - - default default] [instance: f1757bed-1718-45e4-a731-11f1a3b4f068] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:00:09 compute-1 nova_compute[225855]: 2026-01-20 15:00:09.904 225859 DEBUG nova.virt.libvirt.driver [None req-bcfce5a4-4ed1-4cf4-a1c3-2572368c4e32 2a3fc2ba2a08423eb2e0bd7cf0fd5cf7 994b02a8c0094d2daa7b775b1f86f394 - - default default] [instance: f1757bed-1718-45e4-a731-11f1a3b4f068] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:00:09 compute-1 nova_compute[225855]: 2026-01-20 15:00:09.909 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: f1757bed-1718-45e4-a731-11f1a3b4f068] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 15:00:09 compute-1 nova_compute[225855]: 2026-01-20 15:00:09.910 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768921209.8776488, f1757bed-1718-45e4-a731-11f1a3b4f068 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:00:09 compute-1 nova_compute[225855]: 2026-01-20 15:00:09.910 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: f1757bed-1718-45e4-a731-11f1a3b4f068] VM Paused (Lifecycle Event)
Jan 20 15:00:09 compute-1 nova_compute[225855]: 2026-01-20 15:00:09.936 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: f1757bed-1718-45e4-a731-11f1a3b4f068] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:00:09 compute-1 nova_compute[225855]: 2026-01-20 15:00:09.939 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768921209.882652, f1757bed-1718-45e4-a731-11f1a3b4f068 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:00:09 compute-1 nova_compute[225855]: 2026-01-20 15:00:09.939 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: f1757bed-1718-45e4-a731-11f1a3b4f068] VM Resumed (Lifecycle Event)
Jan 20 15:00:09 compute-1 nova_compute[225855]: 2026-01-20 15:00:09.958 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: f1757bed-1718-45e4-a731-11f1a3b4f068] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:00:09 compute-1 nova_compute[225855]: 2026-01-20 15:00:09.962 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: f1757bed-1718-45e4-a731-11f1a3b4f068] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 15:00:09 compute-1 nova_compute[225855]: 2026-01-20 15:00:09.967 225859 INFO nova.compute.manager [None req-bcfce5a4-4ed1-4cf4-a1c3-2572368c4e32 2a3fc2ba2a08423eb2e0bd7cf0fd5cf7 994b02a8c0094d2daa7b775b1f86f394 - - default default] [instance: f1757bed-1718-45e4-a731-11f1a3b4f068] Took 8.04 seconds to spawn the instance on the hypervisor.
Jan 20 15:00:09 compute-1 nova_compute[225855]: 2026-01-20 15:00:09.968 225859 DEBUG nova.compute.manager [None req-bcfce5a4-4ed1-4cf4-a1c3-2572368c4e32 2a3fc2ba2a08423eb2e0bd7cf0fd5cf7 994b02a8c0094d2daa7b775b1f86f394 - - default default] [instance: f1757bed-1718-45e4-a731-11f1a3b4f068] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:00:10 compute-1 nova_compute[225855]: 2026-01-20 15:00:09.999 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: f1757bed-1718-45e4-a731-11f1a3b4f068] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 15:00:10 compute-1 nova_compute[225855]: 2026-01-20 15:00:10.043 225859 INFO nova.compute.manager [None req-bcfce5a4-4ed1-4cf4-a1c3-2572368c4e32 2a3fc2ba2a08423eb2e0bd7cf0fd5cf7 994b02a8c0094d2daa7b775b1f86f394 - - default default] [instance: f1757bed-1718-45e4-a731-11f1a3b4f068] Took 9.01 seconds to build instance.
Jan 20 15:00:10 compute-1 nova_compute[225855]: 2026-01-20 15:00:10.065 225859 DEBUG oslo_concurrency.lockutils [None req-bcfce5a4-4ed1-4cf4-a1c3-2572368c4e32 2a3fc2ba2a08423eb2e0bd7cf0fd5cf7 994b02a8c0094d2daa7b775b1f86f394 - - default default] Lock "f1757bed-1718-45e4-a731-11f1a3b4f068" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.125s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:00:10 compute-1 podman[283720]: 2026-01-20 15:00:10.09339982 +0000 UTC m=+0.059687095 container create 62f09e6d34cd4e3fc9344987f57c02ddc6251d2f2f53944064b170b013df8b8c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-347be9eb-2d7b-4b2d-b1e8-8ed5a063f269, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0)
Jan 20 15:00:10 compute-1 systemd[1]: Started libpod-conmon-62f09e6d34cd4e3fc9344987f57c02ddc6251d2f2f53944064b170b013df8b8c.scope.
Jan 20 15:00:10 compute-1 podman[283720]: 2026-01-20 15:00:10.058598478 +0000 UTC m=+0.024885773 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 20 15:00:10 compute-1 systemd[1]: Started libcrun container.
Jan 20 15:00:10 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4b2c64753b02dd1a3ea792cfd3a0a537701996b7974d1a901afebdbb9cc4a05/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 20 15:00:10 compute-1 podman[283720]: 2026-01-20 15:00:10.18061396 +0000 UTC m=+0.146901255 container init 62f09e6d34cd4e3fc9344987f57c02ddc6251d2f2f53944064b170b013df8b8c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-347be9eb-2d7b-4b2d-b1e8-8ed5a063f269, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202)
Jan 20 15:00:10 compute-1 podman[283720]: 2026-01-20 15:00:10.186842866 +0000 UTC m=+0.153130141 container start 62f09e6d34cd4e3fc9344987f57c02ddc6251d2f2f53944064b170b013df8b8c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-347be9eb-2d7b-4b2d-b1e8-8ed5a063f269, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 20 15:00:10 compute-1 neutron-haproxy-ovnmeta-347be9eb-2d7b-4b2d-b1e8-8ed5a063f269[283735]: [NOTICE]   (283739) : New worker (283741) forked
Jan 20 15:00:10 compute-1 neutron-haproxy-ovnmeta-347be9eb-2d7b-4b2d-b1e8-8ed5a063f269[283735]: [NOTICE]   (283739) : Loading success.
Jan 20 15:00:10 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:00:10 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:00:10 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:00:10.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:00:10 compute-1 ceph-mon[81775]: osdmap e322: 3 total, 3 up, 3 in
Jan 20 15:00:10 compute-1 ceph-mon[81775]: pgmap v2244: 321 pgs: 321 active+clean; 205 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 98 KiB/s rd, 2.1 MiB/s wr, 143 op/s
Jan 20 15:00:11 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:00:11 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:00:11 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:00:11.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:00:11 compute-1 nova_compute[225855]: 2026-01-20 15:00:11.785 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:00:12 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:00:12 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:00:12 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:00:12.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:00:12 compute-1 nova_compute[225855]: 2026-01-20 15:00:12.404 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:00:12 compute-1 nova_compute[225855]: 2026-01-20 15:00:12.445 225859 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1768921197.443742, 1ebdefed-0903-4d72-b78d-912666c5ce61 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:00:12 compute-1 nova_compute[225855]: 2026-01-20 15:00:12.445 225859 INFO nova.compute.manager [-] [instance: 1ebdefed-0903-4d72-b78d-912666c5ce61] VM Stopped (Lifecycle Event)
Jan 20 15:00:12 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e322 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:00:12 compute-1 nova_compute[225855]: 2026-01-20 15:00:12.743 225859 DEBUG nova.compute.manager [req-0baa5c00-22bb-4a05-bb49-390b91d239c0 req-15eb2abd-93aa-4d8c-8ac4-1b40c998950b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f1757bed-1718-45e4-a731-11f1a3b4f068] Received event network-vif-plugged-f62f622f-1d0a-4a68-9540-d1a7f48a66d0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:00:12 compute-1 nova_compute[225855]: 2026-01-20 15:00:12.744 225859 DEBUG oslo_concurrency.lockutils [req-0baa5c00-22bb-4a05-bb49-390b91d239c0 req-15eb2abd-93aa-4d8c-8ac4-1b40c998950b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "f1757bed-1718-45e4-a731-11f1a3b4f068-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:00:12 compute-1 nova_compute[225855]: 2026-01-20 15:00:12.744 225859 DEBUG oslo_concurrency.lockutils [req-0baa5c00-22bb-4a05-bb49-390b91d239c0 req-15eb2abd-93aa-4d8c-8ac4-1b40c998950b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "f1757bed-1718-45e4-a731-11f1a3b4f068-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:00:12 compute-1 nova_compute[225855]: 2026-01-20 15:00:12.745 225859 DEBUG oslo_concurrency.lockutils [req-0baa5c00-22bb-4a05-bb49-390b91d239c0 req-15eb2abd-93aa-4d8c-8ac4-1b40c998950b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "f1757bed-1718-45e4-a731-11f1a3b4f068-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:00:12 compute-1 nova_compute[225855]: 2026-01-20 15:00:12.745 225859 DEBUG nova.compute.manager [req-0baa5c00-22bb-4a05-bb49-390b91d239c0 req-15eb2abd-93aa-4d8c-8ac4-1b40c998950b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f1757bed-1718-45e4-a731-11f1a3b4f068] No waiting events found dispatching network-vif-plugged-f62f622f-1d0a-4a68-9540-d1a7f48a66d0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:00:12 compute-1 nova_compute[225855]: 2026-01-20 15:00:12.745 225859 WARNING nova.compute.manager [req-0baa5c00-22bb-4a05-bb49-390b91d239c0 req-15eb2abd-93aa-4d8c-8ac4-1b40c998950b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f1757bed-1718-45e4-a731-11f1a3b4f068] Received unexpected event network-vif-plugged-f62f622f-1d0a-4a68-9540-d1a7f48a66d0 for instance with vm_state active and task_state None.
Jan 20 15:00:12 compute-1 nova_compute[225855]: 2026-01-20 15:00:12.799 225859 DEBUG nova.compute.manager [None req-5e119f5d-7f1c-41d0-8f91-7df112143d2c - - - - - -] [instance: 1ebdefed-0903-4d72-b78d-912666c5ce61] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:00:13 compute-1 ceph-mon[81775]: pgmap v2245: 321 pgs: 321 active+clean; 171 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 295 KiB/s rd, 2.1 MiB/s wr, 134 op/s
Jan 20 15:00:13 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:00:13 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:00:13 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:00:13.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:00:14 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:00:14 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:00:14 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:00:14.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:00:14 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/3702275476' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:00:14 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/3702275476' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:00:14 compute-1 nova_compute[225855]: 2026-01-20 15:00:14.522 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:00:14 compute-1 NetworkManager[49104]: <info>  [1768921214.5260] manager: (patch-br-int-to-provnet-b62c391b-f7a3-4a38-a0df-72ac0383ca74): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/253)
Jan 20 15:00:14 compute-1 NetworkManager[49104]: <info>  [1768921214.5274] manager: (patch-provnet-b62c391b-f7a3-4a38-a0df-72ac0383ca74-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/254)
Jan 20 15:00:14 compute-1 sudo[283752]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:00:14 compute-1 sudo[283752]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:00:14 compute-1 sudo[283752]: pam_unix(sudo:session): session closed for user root
Jan 20 15:00:14 compute-1 sudo[283777]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:00:14 compute-1 sudo[283777]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:00:14 compute-1 sudo[283777]: pam_unix(sudo:session): session closed for user root
Jan 20 15:00:14 compute-1 nova_compute[225855]: 2026-01-20 15:00:14.715 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:00:14 compute-1 ovn_controller[130490]: 2026-01-20T15:00:14Z|00584|binding|INFO|Releasing lport 22c381cf-5510-43a8-bc2e-16e5bc5ac409 from this chassis (sb_readonly=0)
Jan 20 15:00:14 compute-1 nova_compute[225855]: 2026-01-20 15:00:14.736 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:00:15 compute-1 nova_compute[225855]: 2026-01-20 15:00:15.041 225859 DEBUG nova.compute.manager [req-1895007c-e922-4783-ba2f-19daac2a4a1c req-05ddc67f-95bd-499b-bca5-b9e77a62ade5 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f1757bed-1718-45e4-a731-11f1a3b4f068] Received event network-changed-f62f622f-1d0a-4a68-9540-d1a7f48a66d0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:00:15 compute-1 nova_compute[225855]: 2026-01-20 15:00:15.041 225859 DEBUG nova.compute.manager [req-1895007c-e922-4783-ba2f-19daac2a4a1c req-05ddc67f-95bd-499b-bca5-b9e77a62ade5 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f1757bed-1718-45e4-a731-11f1a3b4f068] Refreshing instance network info cache due to event network-changed-f62f622f-1d0a-4a68-9540-d1a7f48a66d0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 15:00:15 compute-1 nova_compute[225855]: 2026-01-20 15:00:15.041 225859 DEBUG oslo_concurrency.lockutils [req-1895007c-e922-4783-ba2f-19daac2a4a1c req-05ddc67f-95bd-499b-bca5-b9e77a62ade5 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-f1757bed-1718-45e4-a731-11f1a3b4f068" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:00:15 compute-1 nova_compute[225855]: 2026-01-20 15:00:15.042 225859 DEBUG oslo_concurrency.lockutils [req-1895007c-e922-4783-ba2f-19daac2a4a1c req-05ddc67f-95bd-499b-bca5-b9e77a62ade5 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-f1757bed-1718-45e4-a731-11f1a3b4f068" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:00:15 compute-1 nova_compute[225855]: 2026-01-20 15:00:15.042 225859 DEBUG nova.network.neutron [req-1895007c-e922-4783-ba2f-19daac2a4a1c req-05ddc67f-95bd-499b-bca5-b9e77a62ade5 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f1757bed-1718-45e4-a731-11f1a3b4f068] Refreshing network info cache for port f62f622f-1d0a-4a68-9540-d1a7f48a66d0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 15:00:15 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e323 e323: 3 total, 3 up, 3 in
Jan 20 15:00:15 compute-1 ceph-mon[81775]: pgmap v2246: 321 pgs: 321 active+clean; 167 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 1.4 MiB/s wr, 163 op/s
Jan 20 15:00:15 compute-1 ceph-mon[81775]: osdmap e323: 3 total, 3 up, 3 in
Jan 20 15:00:15 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:00:15 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:00:15 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:00:15.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:00:16 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:00:16 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:00:16 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:00:16.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:00:16 compute-1 ovn_controller[130490]: 2026-01-20T15:00:16Z|00585|binding|INFO|Releasing lport 22c381cf-5510-43a8-bc2e-16e5bc5ac409 from this chassis (sb_readonly=0)
Jan 20 15:00:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:00:16.421 140354 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:00:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:00:16.421 140354 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:00:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:00:16.422 140354 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:00:16 compute-1 nova_compute[225855]: 2026-01-20 15:00:16.452 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:00:16 compute-1 nova_compute[225855]: 2026-01-20 15:00:16.606 225859 DEBUG nova.network.neutron [req-1895007c-e922-4783-ba2f-19daac2a4a1c req-05ddc67f-95bd-499b-bca5-b9e77a62ade5 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f1757bed-1718-45e4-a731-11f1a3b4f068] Updated VIF entry in instance network info cache for port f62f622f-1d0a-4a68-9540-d1a7f48a66d0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 15:00:16 compute-1 nova_compute[225855]: 2026-01-20 15:00:16.607 225859 DEBUG nova.network.neutron [req-1895007c-e922-4783-ba2f-19daac2a4a1c req-05ddc67f-95bd-499b-bca5-b9e77a62ade5 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f1757bed-1718-45e4-a731-11f1a3b4f068] Updating instance_info_cache with network_info: [{"id": "f62f622f-1d0a-4a68-9540-d1a7f48a66d0", "address": "fa:16:3e:cb:97:d3", "network": {"id": "347be9eb-2d7b-4b2d-b1e8-8ed5a063f269", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1057973257-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "994b02a8c0094d2daa7b775b1f86f394", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf62f622f-1d", "ovs_interfaceid": "f62f622f-1d0a-4a68-9540-d1a7f48a66d0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:00:16 compute-1 nova_compute[225855]: 2026-01-20 15:00:16.629 225859 DEBUG oslo_concurrency.lockutils [req-1895007c-e922-4783-ba2f-19daac2a4a1c req-05ddc67f-95bd-499b-bca5-b9e77a62ade5 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-f1757bed-1718-45e4-a731-11f1a3b4f068" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:00:16 compute-1 nova_compute[225855]: 2026-01-20 15:00:16.787 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:00:17 compute-1 ceph-mon[81775]: pgmap v2248: 321 pgs: 321 active+clean; 167 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 25 KiB/s wr, 196 op/s
Jan 20 15:00:17 compute-1 nova_compute[225855]: 2026-01-20 15:00:17.405 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:00:17 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:00:17 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:00:17 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:00:17 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:00:17.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:00:17 compute-1 nova_compute[225855]: 2026-01-20 15:00:17.980 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:00:18 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:00:18 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:00:18 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:00:18.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:00:19 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:00:19.084 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=45, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '12:bb:42', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '06:92:24:f7:15:56'}, ipsec=False) old=SB_Global(nb_cfg=44) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:00:19 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:00:19.085 140354 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 20 15:00:19 compute-1 nova_compute[225855]: 2026-01-20 15:00:19.087 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:00:19 compute-1 ceph-mon[81775]: pgmap v2249: 321 pgs: 321 active+clean; 167 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 24 KiB/s wr, 164 op/s
Jan 20 15:00:19 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:00:19 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:00:19 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:00:19.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:00:20 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:00:20 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:00:20 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:00:20.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:00:20 compute-1 ceph-mon[81775]: pgmap v2250: 321 pgs: 321 active+clean; 167 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 19 KiB/s wr, 108 op/s
Jan 20 15:00:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:00:21.087 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5ffd4ac3-9266-4927-98ad-20a17782c725, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '45'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:00:21 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:00:21 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:00:21 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:00:21.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:00:21 compute-1 nova_compute[225855]: 2026-01-20 15:00:21.788 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:00:22 compute-1 podman[283808]: 2026-01-20 15:00:22.070167346 +0000 UTC m=+0.095773583 container health_status 72be95c12e041eb2cd1a16ad70dfa1391d9f03a5f3180a614ead9459dcfcd46f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.build-date=20251202, tcib_managed=true)
Jan 20 15:00:22 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:00:22 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:00:22 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:00:22.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:00:22 compute-1 nova_compute[225855]: 2026-01-20 15:00:22.407 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:00:22 compute-1 ceph-mon[81775]: pgmap v2251: 321 pgs: 321 active+clean; 167 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 18 KiB/s wr, 90 op/s
Jan 20 15:00:22 compute-1 ovn_controller[130490]: 2026-01-20T15:00:22Z|00586|binding|INFO|Releasing lport 22c381cf-5510-43a8-bc2e-16e5bc5ac409 from this chassis (sb_readonly=0)
Jan 20 15:00:22 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:00:22 compute-1 nova_compute[225855]: 2026-01-20 15:00:22.670 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:00:23 compute-1 nova_compute[225855]: 2026-01-20 15:00:23.399 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:00:23 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:00:23 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:00:23 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:00:23.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:00:23 compute-1 ovn_controller[130490]: 2026-01-20T15:00:23Z|00072|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:cb:97:d3 10.100.0.13
Jan 20 15:00:23 compute-1 ovn_controller[130490]: 2026-01-20T15:00:23Z|00073|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:cb:97:d3 10.100.0.13
Jan 20 15:00:24 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:00:24 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:00:24 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:00:24.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:00:25 compute-1 ceph-mon[81775]: pgmap v2252: 321 pgs: 321 active+clean; 170 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 523 KiB/s wr, 61 op/s
Jan 20 15:00:25 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:00:25 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:00:25 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:00:25.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:00:26 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:00:26 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:00:26 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:00:26.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:00:26 compute-1 nova_compute[225855]: 2026-01-20 15:00:26.790 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:00:27 compute-1 ceph-mon[81775]: pgmap v2253: 321 pgs: 321 active+clean; 197 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 556 KiB/s rd, 2.3 MiB/s wr, 67 op/s
Jan 20 15:00:27 compute-1 nova_compute[225855]: 2026-01-20 15:00:27.409 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:00:27 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:00:27 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:00:27 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:00:27 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:00:27.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:00:28 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:00:28 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:00:28 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:00:28.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:00:29 compute-1 ceph-mon[81775]: pgmap v2254: 321 pgs: 321 active+clean; 197 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 369 KiB/s rd, 2.1 MiB/s wr, 58 op/s
Jan 20 15:00:29 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:00:29 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:00:29 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:00:29.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:00:30 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:00:30 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:00:30 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:00:30.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:00:31 compute-1 ceph-mon[81775]: pgmap v2255: 321 pgs: 321 active+clean; 200 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 378 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Jan 20 15:00:31 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/3920558439' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:00:31 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:00:31 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:00:31 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:00:31.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:00:31 compute-1 nova_compute[225855]: 2026-01-20 15:00:31.792 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:00:32 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:00:32 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:00:32 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:00:32.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:00:32 compute-1 nova_compute[225855]: 2026-01-20 15:00:32.410 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:00:32 compute-1 ceph-mon[81775]: pgmap v2256: 321 pgs: 321 active+clean; 200 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 379 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Jan 20 15:00:32 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:00:32 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:00:32.944 140461 DEBUG eventlet.wsgi.server [-] (140461) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004
Jan 20 15:00:32 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:00:32.946 140461 DEBUG neutron.agent.ovn.metadata.server [-] Request: GET /latest/meta-data/public-ipv4 HTTP/1.0
Jan 20 15:00:32 compute-1 ovn_metadata_agent[140349]: Accept: */*
Jan 20 15:00:32 compute-1 ovn_metadata_agent[140349]: Connection: close
Jan 20 15:00:32 compute-1 ovn_metadata_agent[140349]: Content-Type: text/plain
Jan 20 15:00:32 compute-1 ovn_metadata_agent[140349]: Host: 169.254.169.254
Jan 20 15:00:32 compute-1 ovn_metadata_agent[140349]: User-Agent: curl/7.84.0
Jan 20 15:00:32 compute-1 ovn_metadata_agent[140349]: X-Forwarded-For: 10.100.0.13
Jan 20 15:00:32 compute-1 ovn_metadata_agent[140349]: X-Ovn-Network-Id: 347be9eb-2d7b-4b2d-b1e8-8ed5a063f269 __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82
Jan 20 15:00:33 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:00:33.431 140461 DEBUG neutron.agent.ovn.metadata.server [-] <Response [200]> _proxy_request /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:161
Jan 20 15:00:33 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:00:33.432 140461 INFO eventlet.wsgi.server [-] 10.100.0.13,<local> "GET /latest/meta-data/public-ipv4 HTTP/1.1" status: 200  len: 151 time: 0.4856715
Jan 20 15:00:33 compute-1 haproxy-metadata-proxy-347be9eb-2d7b-4b2d-b1e8-8ed5a063f269[283741]: 10.100.0.13:48962 [20/Jan/2026:15:00:32.943] listener listener/metadata 0/0/0/488/488 200 135 - - ---- 1/1/0/0/0 0/0 "GET /latest/meta-data/public-ipv4 HTTP/1.1"
Jan 20 15:00:33 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:00:33.562 140461 DEBUG eventlet.wsgi.server [-] (140461) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004
Jan 20 15:00:33 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:00:33.563 140461 DEBUG neutron.agent.ovn.metadata.server [-] Request: POST /openstack/2013-10-17/password HTTP/1.0
Jan 20 15:00:33 compute-1 ovn_metadata_agent[140349]: Accept: */*
Jan 20 15:00:33 compute-1 ovn_metadata_agent[140349]: Connection: close
Jan 20 15:00:33 compute-1 ovn_metadata_agent[140349]: Content-Length: 100
Jan 20 15:00:33 compute-1 ovn_metadata_agent[140349]: Content-Type: application/x-www-form-urlencoded
Jan 20 15:00:33 compute-1 ovn_metadata_agent[140349]: Host: 169.254.169.254
Jan 20 15:00:33 compute-1 ovn_metadata_agent[140349]: User-Agent: curl/7.84.0
Jan 20 15:00:33 compute-1 ovn_metadata_agent[140349]: X-Forwarded-For: 10.100.0.13
Jan 20 15:00:33 compute-1 ovn_metadata_agent[140349]: X-Ovn-Network-Id: 347be9eb-2d7b-4b2d-b1e8-8ed5a063f269
Jan 20 15:00:33 compute-1 ovn_metadata_agent[140349]: 
Jan 20 15:00:33 compute-1 ovn_metadata_agent[140349]: testtesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttest __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82
Jan 20 15:00:33 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:00:33.734 140461 DEBUG neutron.agent.ovn.metadata.server [-] <Response [200]> _proxy_request /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:161
Jan 20 15:00:33 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:00:33.735 140461 INFO eventlet.wsgi.server [-] 10.100.0.13,<local> "POST /openstack/2013-10-17/password HTTP/1.1" status: 200  len: 134 time: 0.1713569
Jan 20 15:00:33 compute-1 haproxy-metadata-proxy-347be9eb-2d7b-4b2d-b1e8-8ed5a063f269[283741]: 10.100.0.13:60928 [20/Jan/2026:15:00:33.562] listener listener/metadata 0/0/0/172/172 200 118 - - ---- 1/1/0/0/0 0/0 "POST /openstack/2013-10-17/password HTTP/1.1"
Jan 20 15:00:33 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:00:33 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:00:33 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:00:33.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:00:34 compute-1 podman[283840]: 2026-01-20 15:00:34.018333674 +0000 UTC m=+0.060788136 container health_status 533cd8e0c3b8b7f910f15cef49e7f879ab46c12ce646a35628c15f13725eb500 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Jan 20 15:00:34 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:00:34 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:00:34 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:00:34.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:00:34 compute-1 sudo[283859]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:00:34 compute-1 sudo[283859]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:00:34 compute-1 sudo[283859]: pam_unix(sudo:session): session closed for user root
Jan 20 15:00:34 compute-1 sudo[283884]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:00:34 compute-1 sudo[283884]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:00:34 compute-1 sudo[283884]: pam_unix(sudo:session): session closed for user root
Jan 20 15:00:35 compute-1 ceph-mon[81775]: pgmap v2257: 321 pgs: 321 active+clean; 212 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 389 KiB/s rd, 2.5 MiB/s wr, 77 op/s
Jan 20 15:00:35 compute-1 nova_compute[225855]: 2026-01-20 15:00:35.712 225859 DEBUG oslo_concurrency.lockutils [None req-de380fa1-243d-40f5-9128-16bf99a9f662 2a3fc2ba2a08423eb2e0bd7cf0fd5cf7 994b02a8c0094d2daa7b775b1f86f394 - - default default] Acquiring lock "f1757bed-1718-45e4-a731-11f1a3b4f068" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:00:35 compute-1 nova_compute[225855]: 2026-01-20 15:00:35.713 225859 DEBUG oslo_concurrency.lockutils [None req-de380fa1-243d-40f5-9128-16bf99a9f662 2a3fc2ba2a08423eb2e0bd7cf0fd5cf7 994b02a8c0094d2daa7b775b1f86f394 - - default default] Lock "f1757bed-1718-45e4-a731-11f1a3b4f068" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:00:35 compute-1 nova_compute[225855]: 2026-01-20 15:00:35.713 225859 DEBUG oslo_concurrency.lockutils [None req-de380fa1-243d-40f5-9128-16bf99a9f662 2a3fc2ba2a08423eb2e0bd7cf0fd5cf7 994b02a8c0094d2daa7b775b1f86f394 - - default default] Acquiring lock "f1757bed-1718-45e4-a731-11f1a3b4f068-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:00:35 compute-1 nova_compute[225855]: 2026-01-20 15:00:35.714 225859 DEBUG oslo_concurrency.lockutils [None req-de380fa1-243d-40f5-9128-16bf99a9f662 2a3fc2ba2a08423eb2e0bd7cf0fd5cf7 994b02a8c0094d2daa7b775b1f86f394 - - default default] Lock "f1757bed-1718-45e4-a731-11f1a3b4f068-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:00:35 compute-1 nova_compute[225855]: 2026-01-20 15:00:35.714 225859 DEBUG oslo_concurrency.lockutils [None req-de380fa1-243d-40f5-9128-16bf99a9f662 2a3fc2ba2a08423eb2e0bd7cf0fd5cf7 994b02a8c0094d2daa7b775b1f86f394 - - default default] Lock "f1757bed-1718-45e4-a731-11f1a3b4f068-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:00:35 compute-1 nova_compute[225855]: 2026-01-20 15:00:35.715 225859 INFO nova.compute.manager [None req-de380fa1-243d-40f5-9128-16bf99a9f662 2a3fc2ba2a08423eb2e0bd7cf0fd5cf7 994b02a8c0094d2daa7b775b1f86f394 - - default default] [instance: f1757bed-1718-45e4-a731-11f1a3b4f068] Terminating instance
Jan 20 15:00:35 compute-1 nova_compute[225855]: 2026-01-20 15:00:35.716 225859 DEBUG nova.compute.manager [None req-de380fa1-243d-40f5-9128-16bf99a9f662 2a3fc2ba2a08423eb2e0bd7cf0fd5cf7 994b02a8c0094d2daa7b775b1f86f394 - - default default] [instance: f1757bed-1718-45e4-a731-11f1a3b4f068] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 20 15:00:35 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:00:35 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:00:35 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:00:35.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:00:35 compute-1 kernel: tapf62f622f-1d (unregistering): left promiscuous mode
Jan 20 15:00:35 compute-1 NetworkManager[49104]: <info>  [1768921235.7785] device (tapf62f622f-1d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 20 15:00:35 compute-1 ovn_controller[130490]: 2026-01-20T15:00:35Z|00587|binding|INFO|Releasing lport f62f622f-1d0a-4a68-9540-d1a7f48a66d0 from this chassis (sb_readonly=0)
Jan 20 15:00:35 compute-1 ovn_controller[130490]: 2026-01-20T15:00:35Z|00588|binding|INFO|Setting lport f62f622f-1d0a-4a68-9540-d1a7f48a66d0 down in Southbound
Jan 20 15:00:35 compute-1 nova_compute[225855]: 2026-01-20 15:00:35.789 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:00:35 compute-1 ovn_controller[130490]: 2026-01-20T15:00:35Z|00589|binding|INFO|Removing iface tapf62f622f-1d ovn-installed in OVS
Jan 20 15:00:35 compute-1 nova_compute[225855]: 2026-01-20 15:00:35.791 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:00:35 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:00:35.795 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cb:97:d3 10.100.0.13'], port_security=['fa:16:3e:cb:97:d3 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'f1757bed-1718-45e4-a731-11f1a3b4f068', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-347be9eb-2d7b-4b2d-b1e8-8ed5a063f269', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '994b02a8c0094d2daa7b775b1f86f394', 'neutron:revision_number': '4', 'neutron:security_group_ids': '6cdfb95a-0d3a-472b-8deb-06068f9edf9a c3b2b511-7001-4a00-abcb-ee7970518e80', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com', 'neutron:port_fip': '192.168.122.175'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=34e153e6-2244-443e-a2b3-4d18f8409d44, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], logical_port=f62f622f-1d0a-4a68-9540-d1a7f48a66d0) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:00:35 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:00:35.796 140354 INFO neutron.agent.ovn.metadata.agent [-] Port f62f622f-1d0a-4a68-9540-d1a7f48a66d0 in datapath 347be9eb-2d7b-4b2d-b1e8-8ed5a063f269 unbound from our chassis
Jan 20 15:00:35 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:00:35.798 140354 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 347be9eb-2d7b-4b2d-b1e8-8ed5a063f269, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 20 15:00:35 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:00:35.799 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[c9c2bbb8-5f8c-4cfd-ab75-2dece8688209]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:00:35 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:00:35.799 140354 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-347be9eb-2d7b-4b2d-b1e8-8ed5a063f269 namespace which is not needed anymore
Jan 20 15:00:35 compute-1 nova_compute[225855]: 2026-01-20 15:00:35.811 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:00:35 compute-1 systemd[1]: machine-qemu\x2d68\x2dinstance\x2d00000092.scope: Deactivated successfully.
Jan 20 15:00:35 compute-1 systemd[1]: machine-qemu\x2d68\x2dinstance\x2d00000092.scope: Consumed 15.091s CPU time.
Jan 20 15:00:35 compute-1 systemd-machined[194361]: Machine qemu-68-instance-00000092 terminated.
Jan 20 15:00:35 compute-1 nova_compute[225855]: 2026-01-20 15:00:35.935 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:00:35 compute-1 nova_compute[225855]: 2026-01-20 15:00:35.940 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:00:35 compute-1 neutron-haproxy-ovnmeta-347be9eb-2d7b-4b2d-b1e8-8ed5a063f269[283735]: [NOTICE]   (283739) : haproxy version is 2.8.14-c23fe91
Jan 20 15:00:35 compute-1 neutron-haproxy-ovnmeta-347be9eb-2d7b-4b2d-b1e8-8ed5a063f269[283735]: [NOTICE]   (283739) : path to executable is /usr/sbin/haproxy
Jan 20 15:00:35 compute-1 neutron-haproxy-ovnmeta-347be9eb-2d7b-4b2d-b1e8-8ed5a063f269[283735]: [WARNING]  (283739) : Exiting Master process...
Jan 20 15:00:35 compute-1 neutron-haproxy-ovnmeta-347be9eb-2d7b-4b2d-b1e8-8ed5a063f269[283735]: [ALERT]    (283739) : Current worker (283741) exited with code 143 (Terminated)
Jan 20 15:00:35 compute-1 neutron-haproxy-ovnmeta-347be9eb-2d7b-4b2d-b1e8-8ed5a063f269[283735]: [WARNING]  (283739) : All workers exited. Exiting... (0)
Jan 20 15:00:35 compute-1 systemd[1]: libpod-62f09e6d34cd4e3fc9344987f57c02ddc6251d2f2f53944064b170b013df8b8c.scope: Deactivated successfully.
Jan 20 15:00:35 compute-1 podman[283934]: 2026-01-20 15:00:35.956623226 +0000 UTC m=+0.060924290 container died 62f09e6d34cd4e3fc9344987f57c02ddc6251d2f2f53944064b170b013df8b8c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-347be9eb-2d7b-4b2d-b1e8-8ed5a063f269, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 20 15:00:35 compute-1 nova_compute[225855]: 2026-01-20 15:00:35.958 225859 INFO nova.virt.libvirt.driver [-] [instance: f1757bed-1718-45e4-a731-11f1a3b4f068] Instance destroyed successfully.
Jan 20 15:00:35 compute-1 nova_compute[225855]: 2026-01-20 15:00:35.959 225859 DEBUG nova.objects.instance [None req-de380fa1-243d-40f5-9128-16bf99a9f662 2a3fc2ba2a08423eb2e0bd7cf0fd5cf7 994b02a8c0094d2daa7b775b1f86f394 - - default default] Lazy-loading 'resources' on Instance uuid f1757bed-1718-45e4-a731-11f1a3b4f068 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:00:35 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-62f09e6d34cd4e3fc9344987f57c02ddc6251d2f2f53944064b170b013df8b8c-userdata-shm.mount: Deactivated successfully.
Jan 20 15:00:35 compute-1 systemd[1]: var-lib-containers-storage-overlay-f4b2c64753b02dd1a3ea792cfd3a0a537701996b7974d1a901afebdbb9cc4a05-merged.mount: Deactivated successfully.
Jan 20 15:00:36 compute-1 nova_compute[225855]: 2026-01-20 15:00:36.011 225859 DEBUG nova.virt.libvirt.vif [None req-de380fa1-243d-40f5-9128-16bf99a9f662 2a3fc2ba2a08423eb2e0bd7cf0fd5cf7 994b02a8c0094d2daa7b775b1f86f394 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T14:59:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestServerBasicOps-server-2075964816',display_name='tempest-TestServerBasicOps-server-2075964816',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testserverbasicops-server-2075964816',id=146,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCiCeGBNIIjQtIa5Udw9T4nKiS3sQpmIFto+Zj/ppiwHl3KoPC1ZwXSQfteIxtI2AuErtkRwyRat7WVpBCL4SK6jCl43k4+LHYwocVMfWmtSf2fkMge6nUPK98YTBKBV9g==',key_name='tempest-TestServerBasicOps-469960215',keypairs=<?>,launch_index=0,launched_at=2026-01-20T15:00:09Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={meta1='data1',meta2='data2',metaN='dataN'},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='994b02a8c0094d2daa7b775b1f86f394',ramdisk_id='',reservation_id='r-c012j9n0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestServerBasicOps-2006244970',owner_user_name='tempest-TestServerBasicOps-2006244970-project-member',password_0='testtesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttest',password_1='',password_2='',password_3=''},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T15:00:33Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='2a3fc2ba2a08423eb2e0bd7cf0fd5cf7',uuid=f1757bed-1718-45e4-a731-11f1a3b4f068,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f62f622f-1d0a-4a68-9540-d1a7f48a66d0", "address": "fa:16:3e:cb:97:d3", "network": {"id": "347be9eb-2d7b-4b2d-b1e8-8ed5a063f269", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1057973257-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "994b02a8c0094d2daa7b775b1f86f394", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf62f622f-1d", "ovs_interfaceid": "f62f622f-1d0a-4a68-9540-d1a7f48a66d0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 20 15:00:36 compute-1 nova_compute[225855]: 2026-01-20 15:00:36.012 225859 DEBUG nova.network.os_vif_util [None req-de380fa1-243d-40f5-9128-16bf99a9f662 2a3fc2ba2a08423eb2e0bd7cf0fd5cf7 994b02a8c0094d2daa7b775b1f86f394 - - default default] Converting VIF {"id": "f62f622f-1d0a-4a68-9540-d1a7f48a66d0", "address": "fa:16:3e:cb:97:d3", "network": {"id": "347be9eb-2d7b-4b2d-b1e8-8ed5a063f269", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1057973257-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "994b02a8c0094d2daa7b775b1f86f394", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf62f622f-1d", "ovs_interfaceid": "f62f622f-1d0a-4a68-9540-d1a7f48a66d0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 15:00:36 compute-1 nova_compute[225855]: 2026-01-20 15:00:36.014 225859 DEBUG nova.network.os_vif_util [None req-de380fa1-243d-40f5-9128-16bf99a9f662 2a3fc2ba2a08423eb2e0bd7cf0fd5cf7 994b02a8c0094d2daa7b775b1f86f394 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:cb:97:d3,bridge_name='br-int',has_traffic_filtering=True,id=f62f622f-1d0a-4a68-9540-d1a7f48a66d0,network=Network(347be9eb-2d7b-4b2d-b1e8-8ed5a063f269),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf62f622f-1d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 15:00:36 compute-1 nova_compute[225855]: 2026-01-20 15:00:36.014 225859 DEBUG os_vif [None req-de380fa1-243d-40f5-9128-16bf99a9f662 2a3fc2ba2a08423eb2e0bd7cf0fd5cf7 994b02a8c0094d2daa7b775b1f86f394 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:cb:97:d3,bridge_name='br-int',has_traffic_filtering=True,id=f62f622f-1d0a-4a68-9540-d1a7f48a66d0,network=Network(347be9eb-2d7b-4b2d-b1e8-8ed5a063f269),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf62f622f-1d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 20 15:00:36 compute-1 nova_compute[225855]: 2026-01-20 15:00:36.017 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:00:36 compute-1 nova_compute[225855]: 2026-01-20 15:00:36.017 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf62f622f-1d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:00:36 compute-1 nova_compute[225855]: 2026-01-20 15:00:36.019 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:00:36 compute-1 nova_compute[225855]: 2026-01-20 15:00:36.021 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:00:36 compute-1 podman[283934]: 2026-01-20 15:00:36.025077197 +0000 UTC m=+0.129378221 container cleanup 62f09e6d34cd4e3fc9344987f57c02ddc6251d2f2f53944064b170b013df8b8c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-347be9eb-2d7b-4b2d-b1e8-8ed5a063f269, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 20 15:00:36 compute-1 nova_compute[225855]: 2026-01-20 15:00:36.025 225859 INFO os_vif [None req-de380fa1-243d-40f5-9128-16bf99a9f662 2a3fc2ba2a08423eb2e0bd7cf0fd5cf7 994b02a8c0094d2daa7b775b1f86f394 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:cb:97:d3,bridge_name='br-int',has_traffic_filtering=True,id=f62f622f-1d0a-4a68-9540-d1a7f48a66d0,network=Network(347be9eb-2d7b-4b2d-b1e8-8ed5a063f269),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf62f622f-1d')
Jan 20 15:00:36 compute-1 systemd[1]: libpod-conmon-62f09e6d34cd4e3fc9344987f57c02ddc6251d2f2f53944064b170b013df8b8c.scope: Deactivated successfully.
Jan 20 15:00:36 compute-1 podman[283980]: 2026-01-20 15:00:36.095603627 +0000 UTC m=+0.047236534 container remove 62f09e6d34cd4e3fc9344987f57c02ddc6251d2f2f53944064b170b013df8b8c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-347be9eb-2d7b-4b2d-b1e8-8ed5a063f269, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Jan 20 15:00:36 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:00:36.102 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[fb4f35a6-71bf-4557-8a4a-dbf9407b6a26]: (4, ('Tue Jan 20 03:00:35 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-347be9eb-2d7b-4b2d-b1e8-8ed5a063f269 (62f09e6d34cd4e3fc9344987f57c02ddc6251d2f2f53944064b170b013df8b8c)\n62f09e6d34cd4e3fc9344987f57c02ddc6251d2f2f53944064b170b013df8b8c\nTue Jan 20 03:00:36 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-347be9eb-2d7b-4b2d-b1e8-8ed5a063f269 (62f09e6d34cd4e3fc9344987f57c02ddc6251d2f2f53944064b170b013df8b8c)\n62f09e6d34cd4e3fc9344987f57c02ddc6251d2f2f53944064b170b013df8b8c\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:00:36 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:00:36.104 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[ceb24972-bca1-4618-913d-40183a4dcf83]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:00:36 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:00:36.105 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap347be9eb-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:00:36 compute-1 nova_compute[225855]: 2026-01-20 15:00:36.107 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:00:36 compute-1 kernel: tap347be9eb-20: left promiscuous mode
Jan 20 15:00:36 compute-1 nova_compute[225855]: 2026-01-20 15:00:36.121 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:00:36 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:00:36.124 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[6b1306bd-6e94-448d-9cf0-a00a079e4b05]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:00:36 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:00:36.140 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[f8c08823-abb7-4c69-b1c3-acc039a94c1f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:00:36 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:00:36.142 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[c1204772-b20a-45f6-a007-b4d97d81f4fd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:00:36 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:00:36.159 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[55ea87be-b97d-4422-be04-295c757196c2]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 624837, 'reachable_time': 42436, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 284004, 'error': None, 'target': 'ovnmeta-347be9eb-2d7b-4b2d-b1e8-8ed5a063f269', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:00:36 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:00:36.162 140466 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-347be9eb-2d7b-4b2d-b1e8-8ed5a063f269 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 20 15:00:36 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:00:36.163 140466 DEBUG oslo.privsep.daemon [-] privsep: reply[0a10c0f8-f60c-4a29-aa5b-430c9330b316]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:00:36 compute-1 systemd[1]: run-netns-ovnmeta\x2d347be9eb\x2d2d7b\x2d4b2d\x2db1e8\x2d8ed5a063f269.mount: Deactivated successfully.
Jan 20 15:00:36 compute-1 nova_compute[225855]: 2026-01-20 15:00:36.178 225859 DEBUG nova.compute.manager [req-fa54498c-5459-43d1-b64f-943bd2ba939e req-90679d2e-6ed8-4b9e-be4e-d6a68542b000 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f1757bed-1718-45e4-a731-11f1a3b4f068] Received event network-vif-unplugged-f62f622f-1d0a-4a68-9540-d1a7f48a66d0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:00:36 compute-1 nova_compute[225855]: 2026-01-20 15:00:36.179 225859 DEBUG oslo_concurrency.lockutils [req-fa54498c-5459-43d1-b64f-943bd2ba939e req-90679d2e-6ed8-4b9e-be4e-d6a68542b000 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "f1757bed-1718-45e4-a731-11f1a3b4f068-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:00:36 compute-1 nova_compute[225855]: 2026-01-20 15:00:36.179 225859 DEBUG oslo_concurrency.lockutils [req-fa54498c-5459-43d1-b64f-943bd2ba939e req-90679d2e-6ed8-4b9e-be4e-d6a68542b000 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "f1757bed-1718-45e4-a731-11f1a3b4f068-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:00:36 compute-1 nova_compute[225855]: 2026-01-20 15:00:36.180 225859 DEBUG oslo_concurrency.lockutils [req-fa54498c-5459-43d1-b64f-943bd2ba939e req-90679d2e-6ed8-4b9e-be4e-d6a68542b000 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "f1757bed-1718-45e4-a731-11f1a3b4f068-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:00:36 compute-1 nova_compute[225855]: 2026-01-20 15:00:36.180 225859 DEBUG nova.compute.manager [req-fa54498c-5459-43d1-b64f-943bd2ba939e req-90679d2e-6ed8-4b9e-be4e-d6a68542b000 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f1757bed-1718-45e4-a731-11f1a3b4f068] No waiting events found dispatching network-vif-unplugged-f62f622f-1d0a-4a68-9540-d1a7f48a66d0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:00:36 compute-1 nova_compute[225855]: 2026-01-20 15:00:36.180 225859 DEBUG nova.compute.manager [req-fa54498c-5459-43d1-b64f-943bd2ba939e req-90679d2e-6ed8-4b9e-be4e-d6a68542b000 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f1757bed-1718-45e4-a731-11f1a3b4f068] Received event network-vif-unplugged-f62f622f-1d0a-4a68-9540-d1a7f48a66d0 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 20 15:00:36 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/987469144' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:00:36 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/3787191888' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:00:36 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:00:36 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:00:36 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:00:36.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:00:36 compute-1 nova_compute[225855]: 2026-01-20 15:00:36.577 225859 INFO nova.virt.libvirt.driver [None req-de380fa1-243d-40f5-9128-16bf99a9f662 2a3fc2ba2a08423eb2e0bd7cf0fd5cf7 994b02a8c0094d2daa7b775b1f86f394 - - default default] [instance: f1757bed-1718-45e4-a731-11f1a3b4f068] Deleting instance files /var/lib/nova/instances/f1757bed-1718-45e4-a731-11f1a3b4f068_del
Jan 20 15:00:36 compute-1 nova_compute[225855]: 2026-01-20 15:00:36.578 225859 INFO nova.virt.libvirt.driver [None req-de380fa1-243d-40f5-9128-16bf99a9f662 2a3fc2ba2a08423eb2e0bd7cf0fd5cf7 994b02a8c0094d2daa7b775b1f86f394 - - default default] [instance: f1757bed-1718-45e4-a731-11f1a3b4f068] Deletion of /var/lib/nova/instances/f1757bed-1718-45e4-a731-11f1a3b4f068_del complete
Jan 20 15:00:36 compute-1 nova_compute[225855]: 2026-01-20 15:00:36.637 225859 INFO nova.compute.manager [None req-de380fa1-243d-40f5-9128-16bf99a9f662 2a3fc2ba2a08423eb2e0bd7cf0fd5cf7 994b02a8c0094d2daa7b775b1f86f394 - - default default] [instance: f1757bed-1718-45e4-a731-11f1a3b4f068] Took 0.92 seconds to destroy the instance on the hypervisor.
Jan 20 15:00:36 compute-1 nova_compute[225855]: 2026-01-20 15:00:36.638 225859 DEBUG oslo.service.loopingcall [None req-de380fa1-243d-40f5-9128-16bf99a9f662 2a3fc2ba2a08423eb2e0bd7cf0fd5cf7 994b02a8c0094d2daa7b775b1f86f394 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 20 15:00:36 compute-1 nova_compute[225855]: 2026-01-20 15:00:36.638 225859 DEBUG nova.compute.manager [-] [instance: f1757bed-1718-45e4-a731-11f1a3b4f068] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 20 15:00:36 compute-1 nova_compute[225855]: 2026-01-20 15:00:36.638 225859 DEBUG nova.network.neutron [-] [instance: f1757bed-1718-45e4-a731-11f1a3b4f068] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 20 15:00:36 compute-1 nova_compute[225855]: 2026-01-20 15:00:36.794 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:00:37 compute-1 ceph-mon[81775]: pgmap v2258: 321 pgs: 321 active+clean; 246 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 372 KiB/s rd, 3.5 MiB/s wr, 78 op/s
Jan 20 15:00:37 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:00:37 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:00:37 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:00:37 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:00:37.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:00:38 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:00:38 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:00:38 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:00:38.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:00:38 compute-1 nova_compute[225855]: 2026-01-20 15:00:38.512 225859 DEBUG nova.network.neutron [-] [instance: f1757bed-1718-45e4-a731-11f1a3b4f068] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:00:38 compute-1 nova_compute[225855]: 2026-01-20 15:00:38.556 225859 INFO nova.compute.manager [-] [instance: f1757bed-1718-45e4-a731-11f1a3b4f068] Took 1.92 seconds to deallocate network for instance.
Jan 20 15:00:38 compute-1 nova_compute[225855]: 2026-01-20 15:00:38.628 225859 DEBUG oslo_concurrency.lockutils [None req-de380fa1-243d-40f5-9128-16bf99a9f662 2a3fc2ba2a08423eb2e0bd7cf0fd5cf7 994b02a8c0094d2daa7b775b1f86f394 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:00:38 compute-1 nova_compute[225855]: 2026-01-20 15:00:38.629 225859 DEBUG oslo_concurrency.lockutils [None req-de380fa1-243d-40f5-9128-16bf99a9f662 2a3fc2ba2a08423eb2e0bd7cf0fd5cf7 994b02a8c0094d2daa7b775b1f86f394 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:00:38 compute-1 nova_compute[225855]: 2026-01-20 15:00:38.839 225859 DEBUG oslo_concurrency.processutils [None req-de380fa1-243d-40f5-9128-16bf99a9f662 2a3fc2ba2a08423eb2e0bd7cf0fd5cf7 994b02a8c0094d2daa7b775b1f86f394 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:00:38 compute-1 nova_compute[225855]: 2026-01-20 15:00:38.874 225859 DEBUG nova.compute.manager [req-522d8237-f10e-411b-b0c3-a305cee240f4 req-772d1398-e054-42b9-8153-4bf31d12d75c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f1757bed-1718-45e4-a731-11f1a3b4f068] Received event network-vif-plugged-f62f622f-1d0a-4a68-9540-d1a7f48a66d0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:00:38 compute-1 nova_compute[225855]: 2026-01-20 15:00:38.874 225859 DEBUG oslo_concurrency.lockutils [req-522d8237-f10e-411b-b0c3-a305cee240f4 req-772d1398-e054-42b9-8153-4bf31d12d75c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "f1757bed-1718-45e4-a731-11f1a3b4f068-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:00:38 compute-1 nova_compute[225855]: 2026-01-20 15:00:38.874 225859 DEBUG oslo_concurrency.lockutils [req-522d8237-f10e-411b-b0c3-a305cee240f4 req-772d1398-e054-42b9-8153-4bf31d12d75c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "f1757bed-1718-45e4-a731-11f1a3b4f068-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:00:38 compute-1 nova_compute[225855]: 2026-01-20 15:00:38.875 225859 DEBUG oslo_concurrency.lockutils [req-522d8237-f10e-411b-b0c3-a305cee240f4 req-772d1398-e054-42b9-8153-4bf31d12d75c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "f1757bed-1718-45e4-a731-11f1a3b4f068-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:00:38 compute-1 nova_compute[225855]: 2026-01-20 15:00:38.875 225859 DEBUG nova.compute.manager [req-522d8237-f10e-411b-b0c3-a305cee240f4 req-772d1398-e054-42b9-8153-4bf31d12d75c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f1757bed-1718-45e4-a731-11f1a3b4f068] No waiting events found dispatching network-vif-plugged-f62f622f-1d0a-4a68-9540-d1a7f48a66d0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:00:38 compute-1 nova_compute[225855]: 2026-01-20 15:00:38.875 225859 WARNING nova.compute.manager [req-522d8237-f10e-411b-b0c3-a305cee240f4 req-772d1398-e054-42b9-8153-4bf31d12d75c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f1757bed-1718-45e4-a731-11f1a3b4f068] Received unexpected event network-vif-plugged-f62f622f-1d0a-4a68-9540-d1a7f48a66d0 for instance with vm_state deleted and task_state None.
Jan 20 15:00:39 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:00:39 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/842019597' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:00:39 compute-1 nova_compute[225855]: 2026-01-20 15:00:39.303 225859 DEBUG oslo_concurrency.processutils [None req-de380fa1-243d-40f5-9128-16bf99a9f662 2a3fc2ba2a08423eb2e0bd7cf0fd5cf7 994b02a8c0094d2daa7b775b1f86f394 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:00:39 compute-1 nova_compute[225855]: 2026-01-20 15:00:39.309 225859 DEBUG nova.compute.provider_tree [None req-de380fa1-243d-40f5-9128-16bf99a9f662 2a3fc2ba2a08423eb2e0bd7cf0fd5cf7 994b02a8c0094d2daa7b775b1f86f394 - - default default] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:00:39 compute-1 nova_compute[225855]: 2026-01-20 15:00:39.326 225859 DEBUG nova.scheduler.client.report [None req-de380fa1-243d-40f5-9128-16bf99a9f662 2a3fc2ba2a08423eb2e0bd7cf0fd5cf7 994b02a8c0094d2daa7b775b1f86f394 - - default default] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:00:39 compute-1 nova_compute[225855]: 2026-01-20 15:00:39.363 225859 DEBUG oslo_concurrency.lockutils [None req-de380fa1-243d-40f5-9128-16bf99a9f662 2a3fc2ba2a08423eb2e0bd7cf0fd5cf7 994b02a8c0094d2daa7b775b1f86f394 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.734s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:00:39 compute-1 nova_compute[225855]: 2026-01-20 15:00:39.412 225859 INFO nova.scheduler.client.report [None req-de380fa1-243d-40f5-9128-16bf99a9f662 2a3fc2ba2a08423eb2e0bd7cf0fd5cf7 994b02a8c0094d2daa7b775b1f86f394 - - default default] Deleted allocations for instance f1757bed-1718-45e4-a731-11f1a3b4f068
Jan 20 15:00:39 compute-1 ceph-mon[81775]: pgmap v2259: 321 pgs: 321 active+clean; 246 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 29 KiB/s rd, 1.8 MiB/s wr, 35 op/s
Jan 20 15:00:39 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/842019597' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:00:39 compute-1 nova_compute[225855]: 2026-01-20 15:00:39.469 225859 DEBUG oslo_concurrency.lockutils [None req-de380fa1-243d-40f5-9128-16bf99a9f662 2a3fc2ba2a08423eb2e0bd7cf0fd5cf7 994b02a8c0094d2daa7b775b1f86f394 - - default default] Lock "f1757bed-1718-45e4-a731-11f1a3b4f068" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.756s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:00:39 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:00:39 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:00:39 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:00:39.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:00:40 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:00:40 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:00:40 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:00:40.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:00:40 compute-1 nova_compute[225855]: 2026-01-20 15:00:40.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:00:40 compute-1 ceph-mon[81775]: pgmap v2260: 321 pgs: 321 active+clean; 186 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 550 KiB/s rd, 1.8 MiB/s wr, 87 op/s
Jan 20 15:00:41 compute-1 nova_compute[225855]: 2026-01-20 15:00:41.021 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:00:41 compute-1 nova_compute[225855]: 2026-01-20 15:00:41.260 225859 DEBUG oslo_concurrency.lockutils [None req-6782266e-ea27-401c-a11a-7f70c6603bcc 37466ba8c9504f1ca6cfbce8add0b52a 41da7b7508634e869bbbe5203e7023cc - - default default] Acquiring lock "3b9ae6db-82fd-4f0d-96f4-92a09c1c1677" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:00:41 compute-1 nova_compute[225855]: 2026-01-20 15:00:41.261 225859 DEBUG oslo_concurrency.lockutils [None req-6782266e-ea27-401c-a11a-7f70c6603bcc 37466ba8c9504f1ca6cfbce8add0b52a 41da7b7508634e869bbbe5203e7023cc - - default default] Lock "3b9ae6db-82fd-4f0d-96f4-92a09c1c1677" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:00:41 compute-1 nova_compute[225855]: 2026-01-20 15:00:41.283 225859 DEBUG nova.compute.manager [None req-6782266e-ea27-401c-a11a-7f70c6603bcc 37466ba8c9504f1ca6cfbce8add0b52a 41da7b7508634e869bbbe5203e7023cc - - default default] [instance: 3b9ae6db-82fd-4f0d-96f4-92a09c1c1677] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 20 15:00:41 compute-1 nova_compute[225855]: 2026-01-20 15:00:41.340 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:00:41 compute-1 nova_compute[225855]: 2026-01-20 15:00:41.379 225859 DEBUG nova.compute.manager [req-e3b24e08-7905-49db-a354-c3b2ec77f508 req-0fd169e9-9d1f-47de-b996-6fc8719ccac2 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f1757bed-1718-45e4-a731-11f1a3b4f068] Received event network-vif-deleted-f62f622f-1d0a-4a68-9540-d1a7f48a66d0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:00:41 compute-1 nova_compute[225855]: 2026-01-20 15:00:41.381 225859 DEBUG oslo_concurrency.lockutils [None req-6782266e-ea27-401c-a11a-7f70c6603bcc 37466ba8c9504f1ca6cfbce8add0b52a 41da7b7508634e869bbbe5203e7023cc - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:00:41 compute-1 nova_compute[225855]: 2026-01-20 15:00:41.381 225859 DEBUG oslo_concurrency.lockutils [None req-6782266e-ea27-401c-a11a-7f70c6603bcc 37466ba8c9504f1ca6cfbce8add0b52a 41da7b7508634e869bbbe5203e7023cc - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:00:41 compute-1 nova_compute[225855]: 2026-01-20 15:00:41.387 225859 DEBUG nova.virt.hardware [None req-6782266e-ea27-401c-a11a-7f70c6603bcc 37466ba8c9504f1ca6cfbce8add0b52a 41da7b7508634e869bbbe5203e7023cc - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 20 15:00:41 compute-1 nova_compute[225855]: 2026-01-20 15:00:41.387 225859 INFO nova.compute.claims [None req-6782266e-ea27-401c-a11a-7f70c6603bcc 37466ba8c9504f1ca6cfbce8add0b52a 41da7b7508634e869bbbe5203e7023cc - - default default] [instance: 3b9ae6db-82fd-4f0d-96f4-92a09c1c1677] Claim successful on node compute-1.ctlplane.example.com
Jan 20 15:00:41 compute-1 nova_compute[225855]: 2026-01-20 15:00:41.505 225859 DEBUG oslo_concurrency.processutils [None req-6782266e-ea27-401c-a11a-7f70c6603bcc 37466ba8c9504f1ca6cfbce8add0b52a 41da7b7508634e869bbbe5203e7023cc - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:00:41 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:00:41 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:00:41 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:00:41.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:00:41 compute-1 nova_compute[225855]: 2026-01-20 15:00:41.796 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:00:41 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:00:41 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/292610090' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:00:41 compute-1 nova_compute[225855]: 2026-01-20 15:00:41.975 225859 DEBUG oslo_concurrency.processutils [None req-6782266e-ea27-401c-a11a-7f70c6603bcc 37466ba8c9504f1ca6cfbce8add0b52a 41da7b7508634e869bbbe5203e7023cc - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:00:41 compute-1 nova_compute[225855]: 2026-01-20 15:00:41.984 225859 DEBUG nova.compute.provider_tree [None req-6782266e-ea27-401c-a11a-7f70c6603bcc 37466ba8c9504f1ca6cfbce8add0b52a 41da7b7508634e869bbbe5203e7023cc - - default default] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:00:42 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/292610090' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:00:42 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:00:42 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:00:42 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:00:42.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:00:42 compute-1 nova_compute[225855]: 2026-01-20 15:00:42.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:00:42 compute-1 nova_compute[225855]: 2026-01-20 15:00:42.340 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 15:00:42 compute-1 nova_compute[225855]: 2026-01-20 15:00:42.617 225859 DEBUG nova.scheduler.client.report [None req-6782266e-ea27-401c-a11a-7f70c6603bcc 37466ba8c9504f1ca6cfbce8add0b52a 41da7b7508634e869bbbe5203e7023cc - - default default] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:00:42 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:00:42 compute-1 nova_compute[225855]: 2026-01-20 15:00:42.671 225859 DEBUG oslo_concurrency.lockutils [None req-6782266e-ea27-401c-a11a-7f70c6603bcc 37466ba8c9504f1ca6cfbce8add0b52a 41da7b7508634e869bbbe5203e7023cc - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.290s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:00:42 compute-1 nova_compute[225855]: 2026-01-20 15:00:42.672 225859 DEBUG nova.compute.manager [None req-6782266e-ea27-401c-a11a-7f70c6603bcc 37466ba8c9504f1ca6cfbce8add0b52a 41da7b7508634e869bbbe5203e7023cc - - default default] [instance: 3b9ae6db-82fd-4f0d-96f4-92a09c1c1677] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 20 15:00:42 compute-1 nova_compute[225855]: 2026-01-20 15:00:42.728 225859 DEBUG nova.compute.manager [None req-6782266e-ea27-401c-a11a-7f70c6603bcc 37466ba8c9504f1ca6cfbce8add0b52a 41da7b7508634e869bbbe5203e7023cc - - default default] [instance: 3b9ae6db-82fd-4f0d-96f4-92a09c1c1677] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 20 15:00:42 compute-1 nova_compute[225855]: 2026-01-20 15:00:42.729 225859 DEBUG nova.network.neutron [None req-6782266e-ea27-401c-a11a-7f70c6603bcc 37466ba8c9504f1ca6cfbce8add0b52a 41da7b7508634e869bbbe5203e7023cc - - default default] [instance: 3b9ae6db-82fd-4f0d-96f4-92a09c1c1677] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 20 15:00:42 compute-1 nova_compute[225855]: 2026-01-20 15:00:42.760 225859 INFO nova.virt.libvirt.driver [None req-6782266e-ea27-401c-a11a-7f70c6603bcc 37466ba8c9504f1ca6cfbce8add0b52a 41da7b7508634e869bbbe5203e7023cc - - default default] [instance: 3b9ae6db-82fd-4f0d-96f4-92a09c1c1677] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 20 15:00:42 compute-1 nova_compute[225855]: 2026-01-20 15:00:42.776 225859 DEBUG nova.compute.manager [None req-6782266e-ea27-401c-a11a-7f70c6603bcc 37466ba8c9504f1ca6cfbce8add0b52a 41da7b7508634e869bbbe5203e7023cc - - default default] [instance: 3b9ae6db-82fd-4f0d-96f4-92a09c1c1677] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 20 15:00:42 compute-1 nova_compute[225855]: 2026-01-20 15:00:42.874 225859 DEBUG nova.compute.manager [None req-6782266e-ea27-401c-a11a-7f70c6603bcc 37466ba8c9504f1ca6cfbce8add0b52a 41da7b7508634e869bbbe5203e7023cc - - default default] [instance: 3b9ae6db-82fd-4f0d-96f4-92a09c1c1677] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 20 15:00:42 compute-1 nova_compute[225855]: 2026-01-20 15:00:42.875 225859 DEBUG nova.virt.libvirt.driver [None req-6782266e-ea27-401c-a11a-7f70c6603bcc 37466ba8c9504f1ca6cfbce8add0b52a 41da7b7508634e869bbbe5203e7023cc - - default default] [instance: 3b9ae6db-82fd-4f0d-96f4-92a09c1c1677] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 20 15:00:42 compute-1 nova_compute[225855]: 2026-01-20 15:00:42.876 225859 INFO nova.virt.libvirt.driver [None req-6782266e-ea27-401c-a11a-7f70c6603bcc 37466ba8c9504f1ca6cfbce8add0b52a 41da7b7508634e869bbbe5203e7023cc - - default default] [instance: 3b9ae6db-82fd-4f0d-96f4-92a09c1c1677] Creating image(s)
Jan 20 15:00:42 compute-1 nova_compute[225855]: 2026-01-20 15:00:42.903 225859 DEBUG nova.storage.rbd_utils [None req-6782266e-ea27-401c-a11a-7f70c6603bcc 37466ba8c9504f1ca6cfbce8add0b52a 41da7b7508634e869bbbe5203e7023cc - - default default] rbd image 3b9ae6db-82fd-4f0d-96f4-92a09c1c1677_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:00:42 compute-1 nova_compute[225855]: 2026-01-20 15:00:42.932 225859 DEBUG nova.storage.rbd_utils [None req-6782266e-ea27-401c-a11a-7f70c6603bcc 37466ba8c9504f1ca6cfbce8add0b52a 41da7b7508634e869bbbe5203e7023cc - - default default] rbd image 3b9ae6db-82fd-4f0d-96f4-92a09c1c1677_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:00:42 compute-1 nova_compute[225855]: 2026-01-20 15:00:42.961 225859 DEBUG nova.storage.rbd_utils [None req-6782266e-ea27-401c-a11a-7f70c6603bcc 37466ba8c9504f1ca6cfbce8add0b52a 41da7b7508634e869bbbe5203e7023cc - - default default] rbd image 3b9ae6db-82fd-4f0d-96f4-92a09c1c1677_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:00:42 compute-1 nova_compute[225855]: 2026-01-20 15:00:42.965 225859 DEBUG oslo_concurrency.processutils [None req-6782266e-ea27-401c-a11a-7f70c6603bcc 37466ba8c9504f1ca6cfbce8add0b52a 41da7b7508634e869bbbe5203e7023cc - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:00:43 compute-1 ceph-mon[81775]: pgmap v2261: 321 pgs: 321 active+clean; 167 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 862 KiB/s rd, 1.8 MiB/s wr, 95 op/s
Jan 20 15:00:43 compute-1 nova_compute[225855]: 2026-01-20 15:00:43.033 225859 DEBUG oslo_concurrency.processutils [None req-6782266e-ea27-401c-a11a-7f70c6603bcc 37466ba8c9504f1ca6cfbce8add0b52a 41da7b7508634e869bbbe5203e7023cc - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:00:43 compute-1 nova_compute[225855]: 2026-01-20 15:00:43.034 225859 DEBUG oslo_concurrency.lockutils [None req-6782266e-ea27-401c-a11a-7f70c6603bcc 37466ba8c9504f1ca6cfbce8add0b52a 41da7b7508634e869bbbe5203e7023cc - - default default] Acquiring lock "82d5c1918fd7c974214c7a48c1793a7a82560462" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:00:43 compute-1 nova_compute[225855]: 2026-01-20 15:00:43.035 225859 DEBUG oslo_concurrency.lockutils [None req-6782266e-ea27-401c-a11a-7f70c6603bcc 37466ba8c9504f1ca6cfbce8add0b52a 41da7b7508634e869bbbe5203e7023cc - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:00:43 compute-1 nova_compute[225855]: 2026-01-20 15:00:43.035 225859 DEBUG oslo_concurrency.lockutils [None req-6782266e-ea27-401c-a11a-7f70c6603bcc 37466ba8c9504f1ca6cfbce8add0b52a 41da7b7508634e869bbbe5203e7023cc - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:00:43 compute-1 nova_compute[225855]: 2026-01-20 15:00:43.058 225859 DEBUG nova.storage.rbd_utils [None req-6782266e-ea27-401c-a11a-7f70c6603bcc 37466ba8c9504f1ca6cfbce8add0b52a 41da7b7508634e869bbbe5203e7023cc - - default default] rbd image 3b9ae6db-82fd-4f0d-96f4-92a09c1c1677_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:00:43 compute-1 nova_compute[225855]: 2026-01-20 15:00:43.061 225859 DEBUG oslo_concurrency.processutils [None req-6782266e-ea27-401c-a11a-7f70c6603bcc 37466ba8c9504f1ca6cfbce8add0b52a 41da7b7508634e869bbbe5203e7023cc - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 3b9ae6db-82fd-4f0d-96f4-92a09c1c1677_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:00:43 compute-1 nova_compute[225855]: 2026-01-20 15:00:43.091 225859 DEBUG nova.policy [None req-6782266e-ea27-401c-a11a-7f70c6603bcc 37466ba8c9504f1ca6cfbce8add0b52a 41da7b7508634e869bbbe5203e7023cc - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '37466ba8c9504f1ca6cfbce8add0b52a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '41da7b7508634e869bbbe5203e7023cc', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 20 15:00:43 compute-1 nova_compute[225855]: 2026-01-20 15:00:43.334 225859 DEBUG oslo_concurrency.processutils [None req-6782266e-ea27-401c-a11a-7f70c6603bcc 37466ba8c9504f1ca6cfbce8add0b52a 41da7b7508634e869bbbe5203e7023cc - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 3b9ae6db-82fd-4f0d-96f4-92a09c1c1677_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.272s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:00:43 compute-1 nova_compute[225855]: 2026-01-20 15:00:43.363 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:00:43 compute-1 nova_compute[225855]: 2026-01-20 15:00:43.364 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 15:00:43 compute-1 nova_compute[225855]: 2026-01-20 15:00:43.364 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 20 15:00:43 compute-1 nova_compute[225855]: 2026-01-20 15:00:43.406 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] [instance: 3b9ae6db-82fd-4f0d-96f4-92a09c1c1677] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Jan 20 15:00:43 compute-1 nova_compute[225855]: 2026-01-20 15:00:43.406 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 20 15:00:43 compute-1 nova_compute[225855]: 2026-01-20 15:00:43.414 225859 DEBUG nova.storage.rbd_utils [None req-6782266e-ea27-401c-a11a-7f70c6603bcc 37466ba8c9504f1ca6cfbce8add0b52a 41da7b7508634e869bbbe5203e7023cc - - default default] resizing rbd image 3b9ae6db-82fd-4f0d-96f4-92a09c1c1677_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 20 15:00:43 compute-1 nova_compute[225855]: 2026-01-20 15:00:43.523 225859 DEBUG nova.objects.instance [None req-6782266e-ea27-401c-a11a-7f70c6603bcc 37466ba8c9504f1ca6cfbce8add0b52a 41da7b7508634e869bbbe5203e7023cc - - default default] Lazy-loading 'migration_context' on Instance uuid 3b9ae6db-82fd-4f0d-96f4-92a09c1c1677 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:00:43 compute-1 nova_compute[225855]: 2026-01-20 15:00:43.563 225859 DEBUG nova.virt.libvirt.driver [None req-6782266e-ea27-401c-a11a-7f70c6603bcc 37466ba8c9504f1ca6cfbce8add0b52a 41da7b7508634e869bbbe5203e7023cc - - default default] [instance: 3b9ae6db-82fd-4f0d-96f4-92a09c1c1677] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 20 15:00:43 compute-1 nova_compute[225855]: 2026-01-20 15:00:43.564 225859 DEBUG nova.virt.libvirt.driver [None req-6782266e-ea27-401c-a11a-7f70c6603bcc 37466ba8c9504f1ca6cfbce8add0b52a 41da7b7508634e869bbbe5203e7023cc - - default default] [instance: 3b9ae6db-82fd-4f0d-96f4-92a09c1c1677] Ensure instance console log exists: /var/lib/nova/instances/3b9ae6db-82fd-4f0d-96f4-92a09c1c1677/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 20 15:00:43 compute-1 nova_compute[225855]: 2026-01-20 15:00:43.565 225859 DEBUG oslo_concurrency.lockutils [None req-6782266e-ea27-401c-a11a-7f70c6603bcc 37466ba8c9504f1ca6cfbce8add0b52a 41da7b7508634e869bbbe5203e7023cc - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:00:43 compute-1 nova_compute[225855]: 2026-01-20 15:00:43.565 225859 DEBUG oslo_concurrency.lockutils [None req-6782266e-ea27-401c-a11a-7f70c6603bcc 37466ba8c9504f1ca6cfbce8add0b52a 41da7b7508634e869bbbe5203e7023cc - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:00:43 compute-1 nova_compute[225855]: 2026-01-20 15:00:43.565 225859 DEBUG oslo_concurrency.lockutils [None req-6782266e-ea27-401c-a11a-7f70c6603bcc 37466ba8c9504f1ca6cfbce8add0b52a 41da7b7508634e869bbbe5203e7023cc - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:00:43 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:00:43 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:00:43 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:00:43.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:00:44 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/2942301129' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:00:44 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:00:44 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:00:44 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:00:44.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:00:44 compute-1 nova_compute[225855]: 2026-01-20 15:00:44.390 225859 DEBUG nova.network.neutron [None req-6782266e-ea27-401c-a11a-7f70c6603bcc 37466ba8c9504f1ca6cfbce8add0b52a 41da7b7508634e869bbbe5203e7023cc - - default default] [instance: 3b9ae6db-82fd-4f0d-96f4-92a09c1c1677] Successfully created port: 086e4aee-1846-436c-8c93-dab333d31521 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 20 15:00:45 compute-1 ceph-mon[81775]: pgmap v2262: 321 pgs: 321 active+clean; 171 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 2.2 MiB/s wr, 125 op/s
Jan 20 15:00:45 compute-1 nova_compute[225855]: 2026-01-20 15:00:45.340 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:00:45 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:00:45 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:00:45 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:00:45.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:00:46 compute-1 nova_compute[225855]: 2026-01-20 15:00:46.023 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:00:46 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:00:46 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:00:46 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:00:46.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:00:46 compute-1 ceph-mon[81775]: pgmap v2263: 321 pgs: 321 active+clean; 167 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.2 MiB/s wr, 175 op/s
Jan 20 15:00:46 compute-1 nova_compute[225855]: 2026-01-20 15:00:46.798 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:00:47 compute-1 nova_compute[225855]: 2026-01-20 15:00:47.340 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:00:47 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e324 e324: 3 total, 3 up, 3 in
Jan 20 15:00:47 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e324 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:00:47 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:00:47 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:00:47 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:00:47.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:00:48 compute-1 nova_compute[225855]: 2026-01-20 15:00:48.278 225859 DEBUG nova.network.neutron [None req-6782266e-ea27-401c-a11a-7f70c6603bcc 37466ba8c9504f1ca6cfbce8add0b52a 41da7b7508634e869bbbe5203e7023cc - - default default] [instance: 3b9ae6db-82fd-4f0d-96f4-92a09c1c1677] Successfully updated port: 086e4aee-1846-436c-8c93-dab333d31521 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 20 15:00:48 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:00:48 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:00:48 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:00:48.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:00:48 compute-1 nova_compute[225855]: 2026-01-20 15:00:48.387 225859 DEBUG oslo_concurrency.lockutils [None req-6782266e-ea27-401c-a11a-7f70c6603bcc 37466ba8c9504f1ca6cfbce8add0b52a 41da7b7508634e869bbbe5203e7023cc - - default default] Acquiring lock "refresh_cache-3b9ae6db-82fd-4f0d-96f4-92a09c1c1677" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:00:48 compute-1 nova_compute[225855]: 2026-01-20 15:00:48.388 225859 DEBUG oslo_concurrency.lockutils [None req-6782266e-ea27-401c-a11a-7f70c6603bcc 37466ba8c9504f1ca6cfbce8add0b52a 41da7b7508634e869bbbe5203e7023cc - - default default] Acquired lock "refresh_cache-3b9ae6db-82fd-4f0d-96f4-92a09c1c1677" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:00:48 compute-1 nova_compute[225855]: 2026-01-20 15:00:48.388 225859 DEBUG nova.network.neutron [None req-6782266e-ea27-401c-a11a-7f70c6603bcc 37466ba8c9504f1ca6cfbce8add0b52a 41da7b7508634e869bbbe5203e7023cc - - default default] [instance: 3b9ae6db-82fd-4f0d-96f4-92a09c1c1677] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 15:00:48 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e325 e325: 3 total, 3 up, 3 in
Jan 20 15:00:48 compute-1 ceph-mon[81775]: osdmap e324: 3 total, 3 up, 3 in
Jan 20 15:00:48 compute-1 ceph-mon[81775]: pgmap v2265: 321 pgs: 321 active+clean; 167 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 2.2 MiB/s wr, 190 op/s
Jan 20 15:00:48 compute-1 nova_compute[225855]: 2026-01-20 15:00:48.638 225859 DEBUG nova.compute.manager [req-77b38338-1360-4fd3-9c19-63944101294d req-5d9fb130-7c6b-4c98-961a-5977116485f4 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 3b9ae6db-82fd-4f0d-96f4-92a09c1c1677] Received event network-changed-086e4aee-1846-436c-8c93-dab333d31521 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:00:48 compute-1 nova_compute[225855]: 2026-01-20 15:00:48.638 225859 DEBUG nova.compute.manager [req-77b38338-1360-4fd3-9c19-63944101294d req-5d9fb130-7c6b-4c98-961a-5977116485f4 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 3b9ae6db-82fd-4f0d-96f4-92a09c1c1677] Refreshing instance network info cache due to event network-changed-086e4aee-1846-436c-8c93-dab333d31521. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 15:00:48 compute-1 nova_compute[225855]: 2026-01-20 15:00:48.639 225859 DEBUG oslo_concurrency.lockutils [req-77b38338-1360-4fd3-9c19-63944101294d req-5d9fb130-7c6b-4c98-961a-5977116485f4 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-3b9ae6db-82fd-4f0d-96f4-92a09c1c1677" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:00:48 compute-1 nova_compute[225855]: 2026-01-20 15:00:48.672 225859 DEBUG nova.network.neutron [None req-6782266e-ea27-401c-a11a-7f70c6603bcc 37466ba8c9504f1ca6cfbce8add0b52a 41da7b7508634e869bbbe5203e7023cc - - default default] [instance: 3b9ae6db-82fd-4f0d-96f4-92a09c1c1677] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 20 15:00:49 compute-1 ceph-mon[81775]: osdmap e325: 3 total, 3 up, 3 in
Jan 20 15:00:49 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/2122915934' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:00:49 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/1288864457' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:00:49 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e326 e326: 3 total, 3 up, 3 in
Jan 20 15:00:49 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:00:49 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:00:49 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:00:49.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:00:50 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:00:50 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:00:50 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:00:50.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:00:50 compute-1 nova_compute[225855]: 2026-01-20 15:00:50.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:00:50 compute-1 nova_compute[225855]: 2026-01-20 15:00:50.369 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:00:50 compute-1 nova_compute[225855]: 2026-01-20 15:00:50.370 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:00:50 compute-1 nova_compute[225855]: 2026-01-20 15:00:50.370 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:00:50 compute-1 nova_compute[225855]: 2026-01-20 15:00:50.370 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 15:00:50 compute-1 nova_compute[225855]: 2026-01-20 15:00:50.371 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:00:50 compute-1 nova_compute[225855]: 2026-01-20 15:00:50.578 225859 DEBUG nova.network.neutron [None req-6782266e-ea27-401c-a11a-7f70c6603bcc 37466ba8c9504f1ca6cfbce8add0b52a 41da7b7508634e869bbbe5203e7023cc - - default default] [instance: 3b9ae6db-82fd-4f0d-96f4-92a09c1c1677] Updating instance_info_cache with network_info: [{"id": "086e4aee-1846-436c-8c93-dab333d31521", "address": "fa:16:3e:f3:aa:10", "network": {"id": "e442dddb-90bf-46c8-b680-3f7b90171ffe", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1981769764-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "41da7b7508634e869bbbe5203e7023cc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap086e4aee-18", "ovs_interfaceid": "086e4aee-1846-436c-8c93-dab333d31521", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:00:50 compute-1 ceph-mon[81775]: osdmap e326: 3 total, 3 up, 3 in
Jan 20 15:00:50 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/4026246915' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:00:50 compute-1 ceph-mon[81775]: pgmap v2268: 321 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 318 active+clean; 213 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 7.3 MiB/s rd, 7.7 MiB/s wr, 241 op/s
Jan 20 15:00:50 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/1511139781' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:00:50 compute-1 nova_compute[225855]: 2026-01-20 15:00:50.611 225859 DEBUG oslo_concurrency.lockutils [None req-6782266e-ea27-401c-a11a-7f70c6603bcc 37466ba8c9504f1ca6cfbce8add0b52a 41da7b7508634e869bbbe5203e7023cc - - default default] Releasing lock "refresh_cache-3b9ae6db-82fd-4f0d-96f4-92a09c1c1677" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:00:50 compute-1 nova_compute[225855]: 2026-01-20 15:00:50.612 225859 DEBUG nova.compute.manager [None req-6782266e-ea27-401c-a11a-7f70c6603bcc 37466ba8c9504f1ca6cfbce8add0b52a 41da7b7508634e869bbbe5203e7023cc - - default default] [instance: 3b9ae6db-82fd-4f0d-96f4-92a09c1c1677] Instance network_info: |[{"id": "086e4aee-1846-436c-8c93-dab333d31521", "address": "fa:16:3e:f3:aa:10", "network": {"id": "e442dddb-90bf-46c8-b680-3f7b90171ffe", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1981769764-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "41da7b7508634e869bbbe5203e7023cc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap086e4aee-18", "ovs_interfaceid": "086e4aee-1846-436c-8c93-dab333d31521", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 20 15:00:50 compute-1 nova_compute[225855]: 2026-01-20 15:00:50.612 225859 DEBUG oslo_concurrency.lockutils [req-77b38338-1360-4fd3-9c19-63944101294d req-5d9fb130-7c6b-4c98-961a-5977116485f4 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-3b9ae6db-82fd-4f0d-96f4-92a09c1c1677" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:00:50 compute-1 nova_compute[225855]: 2026-01-20 15:00:50.613 225859 DEBUG nova.network.neutron [req-77b38338-1360-4fd3-9c19-63944101294d req-5d9fb130-7c6b-4c98-961a-5977116485f4 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 3b9ae6db-82fd-4f0d-96f4-92a09c1c1677] Refreshing network info cache for port 086e4aee-1846-436c-8c93-dab333d31521 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 15:00:50 compute-1 nova_compute[225855]: 2026-01-20 15:00:50.616 225859 DEBUG nova.virt.libvirt.driver [None req-6782266e-ea27-401c-a11a-7f70c6603bcc 37466ba8c9504f1ca6cfbce8add0b52a 41da7b7508634e869bbbe5203e7023cc - - default default] [instance: 3b9ae6db-82fd-4f0d-96f4-92a09c1c1677] Start _get_guest_xml network_info=[{"id": "086e4aee-1846-436c-8c93-dab333d31521", "address": "fa:16:3e:f3:aa:10", "network": {"id": "e442dddb-90bf-46c8-b680-3f7b90171ffe", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1981769764-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "41da7b7508634e869bbbe5203e7023cc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap086e4aee-18", "ovs_interfaceid": "086e4aee-1846-436c-8c93-dab333d31521", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'device_type': 'disk', 'encryption_options': None, 'size': 0, 'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'guest_format': None, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'image_id': 'a32b3e07-16d8-46fd-9a7b-c242c432fcf9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 20 15:00:50 compute-1 nova_compute[225855]: 2026-01-20 15:00:50.620 225859 WARNING nova.virt.libvirt.driver [None req-6782266e-ea27-401c-a11a-7f70c6603bcc 37466ba8c9504f1ca6cfbce8add0b52a 41da7b7508634e869bbbe5203e7023cc - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 15:00:50 compute-1 nova_compute[225855]: 2026-01-20 15:00:50.624 225859 DEBUG nova.virt.libvirt.host [None req-6782266e-ea27-401c-a11a-7f70c6603bcc 37466ba8c9504f1ca6cfbce8add0b52a 41da7b7508634e869bbbe5203e7023cc - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 20 15:00:50 compute-1 nova_compute[225855]: 2026-01-20 15:00:50.625 225859 DEBUG nova.virt.libvirt.host [None req-6782266e-ea27-401c-a11a-7f70c6603bcc 37466ba8c9504f1ca6cfbce8add0b52a 41da7b7508634e869bbbe5203e7023cc - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 20 15:00:50 compute-1 nova_compute[225855]: 2026-01-20 15:00:50.627 225859 DEBUG nova.virt.libvirt.host [None req-6782266e-ea27-401c-a11a-7f70c6603bcc 37466ba8c9504f1ca6cfbce8add0b52a 41da7b7508634e869bbbe5203e7023cc - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 20 15:00:50 compute-1 nova_compute[225855]: 2026-01-20 15:00:50.628 225859 DEBUG nova.virt.libvirt.host [None req-6782266e-ea27-401c-a11a-7f70c6603bcc 37466ba8c9504f1ca6cfbce8add0b52a 41da7b7508634e869bbbe5203e7023cc - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 20 15:00:50 compute-1 nova_compute[225855]: 2026-01-20 15:00:50.629 225859 DEBUG nova.virt.libvirt.driver [None req-6782266e-ea27-401c-a11a-7f70c6603bcc 37466ba8c9504f1ca6cfbce8add0b52a 41da7b7508634e869bbbe5203e7023cc - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 20 15:00:50 compute-1 nova_compute[225855]: 2026-01-20 15:00:50.629 225859 DEBUG nova.virt.hardware [None req-6782266e-ea27-401c-a11a-7f70c6603bcc 37466ba8c9504f1ca6cfbce8add0b52a 41da7b7508634e869bbbe5203e7023cc - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-20T14:21:55Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='522deaab-a741-4dbb-932d-d8b13a211c33',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 20 15:00:50 compute-1 nova_compute[225855]: 2026-01-20 15:00:50.630 225859 DEBUG nova.virt.hardware [None req-6782266e-ea27-401c-a11a-7f70c6603bcc 37466ba8c9504f1ca6cfbce8add0b52a 41da7b7508634e869bbbe5203e7023cc - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 20 15:00:50 compute-1 nova_compute[225855]: 2026-01-20 15:00:50.630 225859 DEBUG nova.virt.hardware [None req-6782266e-ea27-401c-a11a-7f70c6603bcc 37466ba8c9504f1ca6cfbce8add0b52a 41da7b7508634e869bbbe5203e7023cc - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 20 15:00:50 compute-1 nova_compute[225855]: 2026-01-20 15:00:50.630 225859 DEBUG nova.virt.hardware [None req-6782266e-ea27-401c-a11a-7f70c6603bcc 37466ba8c9504f1ca6cfbce8add0b52a 41da7b7508634e869bbbe5203e7023cc - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 20 15:00:50 compute-1 nova_compute[225855]: 2026-01-20 15:00:50.631 225859 DEBUG nova.virt.hardware [None req-6782266e-ea27-401c-a11a-7f70c6603bcc 37466ba8c9504f1ca6cfbce8add0b52a 41da7b7508634e869bbbe5203e7023cc - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 20 15:00:50 compute-1 nova_compute[225855]: 2026-01-20 15:00:50.631 225859 DEBUG nova.virt.hardware [None req-6782266e-ea27-401c-a11a-7f70c6603bcc 37466ba8c9504f1ca6cfbce8add0b52a 41da7b7508634e869bbbe5203e7023cc - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 20 15:00:50 compute-1 nova_compute[225855]: 2026-01-20 15:00:50.631 225859 DEBUG nova.virt.hardware [None req-6782266e-ea27-401c-a11a-7f70c6603bcc 37466ba8c9504f1ca6cfbce8add0b52a 41da7b7508634e869bbbe5203e7023cc - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 20 15:00:50 compute-1 nova_compute[225855]: 2026-01-20 15:00:50.631 225859 DEBUG nova.virt.hardware [None req-6782266e-ea27-401c-a11a-7f70c6603bcc 37466ba8c9504f1ca6cfbce8add0b52a 41da7b7508634e869bbbe5203e7023cc - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 20 15:00:50 compute-1 nova_compute[225855]: 2026-01-20 15:00:50.631 225859 DEBUG nova.virt.hardware [None req-6782266e-ea27-401c-a11a-7f70c6603bcc 37466ba8c9504f1ca6cfbce8add0b52a 41da7b7508634e869bbbe5203e7023cc - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 20 15:00:50 compute-1 nova_compute[225855]: 2026-01-20 15:00:50.632 225859 DEBUG nova.virt.hardware [None req-6782266e-ea27-401c-a11a-7f70c6603bcc 37466ba8c9504f1ca6cfbce8add0b52a 41da7b7508634e869bbbe5203e7023cc - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 20 15:00:50 compute-1 nova_compute[225855]: 2026-01-20 15:00:50.632 225859 DEBUG nova.virt.hardware [None req-6782266e-ea27-401c-a11a-7f70c6603bcc 37466ba8c9504f1ca6cfbce8add0b52a 41da7b7508634e869bbbe5203e7023cc - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 20 15:00:50 compute-1 nova_compute[225855]: 2026-01-20 15:00:50.635 225859 DEBUG oslo_concurrency.processutils [None req-6782266e-ea27-401c-a11a-7f70c6603bcc 37466ba8c9504f1ca6cfbce8add0b52a 41da7b7508634e869bbbe5203e7023cc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:00:50 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:00:50 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2685457807' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:00:50 compute-1 nova_compute[225855]: 2026-01-20 15:00:50.803 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:00:50 compute-1 nova_compute[225855]: 2026-01-20 15:00:50.955 225859 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1768921235.9552252, f1757bed-1718-45e4-a731-11f1a3b4f068 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:00:50 compute-1 nova_compute[225855]: 2026-01-20 15:00:50.956 225859 INFO nova.compute.manager [-] [instance: f1757bed-1718-45e4-a731-11f1a3b4f068] VM Stopped (Lifecycle Event)
Jan 20 15:00:50 compute-1 nova_compute[225855]: 2026-01-20 15:00:50.963 225859 WARNING nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 15:00:50 compute-1 nova_compute[225855]: 2026-01-20 15:00:50.964 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4355MB free_disk=20.921974182128906GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 15:00:50 compute-1 nova_compute[225855]: 2026-01-20 15:00:50.964 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:00:50 compute-1 nova_compute[225855]: 2026-01-20 15:00:50.965 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:00:50 compute-1 nova_compute[225855]: 2026-01-20 15:00:50.983 225859 DEBUG nova.compute.manager [None req-afb7133f-56a8-4042-847e-bdfd3968424a - - - - - -] [instance: f1757bed-1718-45e4-a731-11f1a3b4f068] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:00:51 compute-1 nova_compute[225855]: 2026-01-20 15:00:51.026 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:00:51 compute-1 nova_compute[225855]: 2026-01-20 15:00:51.038 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Instance 3b9ae6db-82fd-4f0d-96f4-92a09c1c1677 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 20 15:00:51 compute-1 nova_compute[225855]: 2026-01-20 15:00:51.039 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 15:00:51 compute-1 nova_compute[225855]: 2026-01-20 15:00:51.039 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 15:00:51 compute-1 nova_compute[225855]: 2026-01-20 15:00:51.045 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:00:51 compute-1 nova_compute[225855]: 2026-01-20 15:00:51.087 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:00:51 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 15:00:51 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1371259099' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:00:51 compute-1 nova_compute[225855]: 2026-01-20 15:00:51.113 225859 DEBUG oslo_concurrency.processutils [None req-6782266e-ea27-401c-a11a-7f70c6603bcc 37466ba8c9504f1ca6cfbce8add0b52a 41da7b7508634e869bbbe5203e7023cc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:00:51 compute-1 nova_compute[225855]: 2026-01-20 15:00:51.140 225859 DEBUG nova.storage.rbd_utils [None req-6782266e-ea27-401c-a11a-7f70c6603bcc 37466ba8c9504f1ca6cfbce8add0b52a 41da7b7508634e869bbbe5203e7023cc - - default default] rbd image 3b9ae6db-82fd-4f0d-96f4-92a09c1c1677_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:00:51 compute-1 nova_compute[225855]: 2026-01-20 15:00:51.144 225859 DEBUG oslo_concurrency.processutils [None req-6782266e-ea27-401c-a11a-7f70c6603bcc 37466ba8c9504f1ca6cfbce8add0b52a 41da7b7508634e869bbbe5203e7023cc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:00:51 compute-1 nova_compute[225855]: 2026-01-20 15:00:51.273 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:00:51 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:00:51 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3543908004' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:00:51 compute-1 nova_compute[225855]: 2026-01-20 15:00:51.518 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:00:51 compute-1 nova_compute[225855]: 2026-01-20 15:00:51.524 225859 DEBUG nova.compute.provider_tree [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:00:51 compute-1 nova_compute[225855]: 2026-01-20 15:00:51.553 225859 DEBUG nova.scheduler.client.report [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:00:51 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 15:00:51 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2231245816' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:00:51 compute-1 nova_compute[225855]: 2026-01-20 15:00:51.581 225859 DEBUG oslo_concurrency.processutils [None req-6782266e-ea27-401c-a11a-7f70c6603bcc 37466ba8c9504f1ca6cfbce8add0b52a 41da7b7508634e869bbbe5203e7023cc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:00:51 compute-1 nova_compute[225855]: 2026-01-20 15:00:51.582 225859 DEBUG nova.virt.libvirt.vif [None req-6782266e-ea27-401c-a11a-7f70c6603bcc 37466ba8c9504f1ca6cfbce8add0b52a 41da7b7508634e869bbbe5203e7023cc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T15:00:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerAdvancedOps-server-1699199838',display_name='tempest-TestServerAdvancedOps-server-1699199838',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testserveradvancedops-server-1699199838',id=148,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='41da7b7508634e869bbbe5203e7023cc',ramdisk_id='',reservation_id='r-y4cbtvvj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerAdvancedOps-1175826361',owner_user_name='tempest-TestServerAdvancedOps
-1175826361-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T15:00:42Z,user_data=None,user_id='37466ba8c9504f1ca6cfbce8add0b52a',uuid=3b9ae6db-82fd-4f0d-96f4-92a09c1c1677,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "086e4aee-1846-436c-8c93-dab333d31521", "address": "fa:16:3e:f3:aa:10", "network": {"id": "e442dddb-90bf-46c8-b680-3f7b90171ffe", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1981769764-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "41da7b7508634e869bbbe5203e7023cc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap086e4aee-18", "ovs_interfaceid": "086e4aee-1846-436c-8c93-dab333d31521", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 20 15:00:51 compute-1 nova_compute[225855]: 2026-01-20 15:00:51.583 225859 DEBUG nova.network.os_vif_util [None req-6782266e-ea27-401c-a11a-7f70c6603bcc 37466ba8c9504f1ca6cfbce8add0b52a 41da7b7508634e869bbbe5203e7023cc - - default default] Converting VIF {"id": "086e4aee-1846-436c-8c93-dab333d31521", "address": "fa:16:3e:f3:aa:10", "network": {"id": "e442dddb-90bf-46c8-b680-3f7b90171ffe", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1981769764-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "41da7b7508634e869bbbe5203e7023cc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap086e4aee-18", "ovs_interfaceid": "086e4aee-1846-436c-8c93-dab333d31521", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 15:00:51 compute-1 nova_compute[225855]: 2026-01-20 15:00:51.583 225859 DEBUG nova.network.os_vif_util [None req-6782266e-ea27-401c-a11a-7f70c6603bcc 37466ba8c9504f1ca6cfbce8add0b52a 41da7b7508634e869bbbe5203e7023cc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f3:aa:10,bridge_name='br-int',has_traffic_filtering=True,id=086e4aee-1846-436c-8c93-dab333d31521,network=Network(e442dddb-90bf-46c8-b680-3f7b90171ffe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap086e4aee-18') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 15:00:51 compute-1 nova_compute[225855]: 2026-01-20 15:00:51.584 225859 DEBUG nova.objects.instance [None req-6782266e-ea27-401c-a11a-7f70c6603bcc 37466ba8c9504f1ca6cfbce8add0b52a 41da7b7508634e869bbbe5203e7023cc - - default default] Lazy-loading 'pci_devices' on Instance uuid 3b9ae6db-82fd-4f0d-96f4-92a09c1c1677 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:00:51 compute-1 nova_compute[225855]: 2026-01-20 15:00:51.586 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 15:00:51 compute-1 nova_compute[225855]: 2026-01-20 15:00:51.586 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.621s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:00:51 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/2685457807' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:00:51 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/1371259099' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:00:51 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/3543908004' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:00:51 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/2231245816' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:00:51 compute-1 nova_compute[225855]: 2026-01-20 15:00:51.631 225859 DEBUG nova.virt.libvirt.driver [None req-6782266e-ea27-401c-a11a-7f70c6603bcc 37466ba8c9504f1ca6cfbce8add0b52a 41da7b7508634e869bbbe5203e7023cc - - default default] [instance: 3b9ae6db-82fd-4f0d-96f4-92a09c1c1677] End _get_guest_xml xml=<domain type="kvm">
Jan 20 15:00:51 compute-1 nova_compute[225855]:   <uuid>3b9ae6db-82fd-4f0d-96f4-92a09c1c1677</uuid>
Jan 20 15:00:51 compute-1 nova_compute[225855]:   <name>instance-00000094</name>
Jan 20 15:00:51 compute-1 nova_compute[225855]:   <memory>131072</memory>
Jan 20 15:00:51 compute-1 nova_compute[225855]:   <vcpu>1</vcpu>
Jan 20 15:00:51 compute-1 nova_compute[225855]:   <metadata>
Jan 20 15:00:51 compute-1 nova_compute[225855]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 15:00:51 compute-1 nova_compute[225855]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 15:00:51 compute-1 nova_compute[225855]:       <nova:name>tempest-TestServerAdvancedOps-server-1699199838</nova:name>
Jan 20 15:00:51 compute-1 nova_compute[225855]:       <nova:creationTime>2026-01-20 15:00:50</nova:creationTime>
Jan 20 15:00:51 compute-1 nova_compute[225855]:       <nova:flavor name="m1.nano">
Jan 20 15:00:51 compute-1 nova_compute[225855]:         <nova:memory>128</nova:memory>
Jan 20 15:00:51 compute-1 nova_compute[225855]:         <nova:disk>1</nova:disk>
Jan 20 15:00:51 compute-1 nova_compute[225855]:         <nova:swap>0</nova:swap>
Jan 20 15:00:51 compute-1 nova_compute[225855]:         <nova:ephemeral>0</nova:ephemeral>
Jan 20 15:00:51 compute-1 nova_compute[225855]:         <nova:vcpus>1</nova:vcpus>
Jan 20 15:00:51 compute-1 nova_compute[225855]:       </nova:flavor>
Jan 20 15:00:51 compute-1 nova_compute[225855]:       <nova:owner>
Jan 20 15:00:51 compute-1 nova_compute[225855]:         <nova:user uuid="37466ba8c9504f1ca6cfbce8add0b52a">tempest-TestServerAdvancedOps-1175826361-project-member</nova:user>
Jan 20 15:00:51 compute-1 nova_compute[225855]:         <nova:project uuid="41da7b7508634e869bbbe5203e7023cc">tempest-TestServerAdvancedOps-1175826361</nova:project>
Jan 20 15:00:51 compute-1 nova_compute[225855]:       </nova:owner>
Jan 20 15:00:51 compute-1 nova_compute[225855]:       <nova:root type="image" uuid="a32b3e07-16d8-46fd-9a7b-c242c432fcf9"/>
Jan 20 15:00:51 compute-1 nova_compute[225855]:       <nova:ports>
Jan 20 15:00:51 compute-1 nova_compute[225855]:         <nova:port uuid="086e4aee-1846-436c-8c93-dab333d31521">
Jan 20 15:00:51 compute-1 nova_compute[225855]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Jan 20 15:00:51 compute-1 nova_compute[225855]:         </nova:port>
Jan 20 15:00:51 compute-1 nova_compute[225855]:       </nova:ports>
Jan 20 15:00:51 compute-1 nova_compute[225855]:     </nova:instance>
Jan 20 15:00:51 compute-1 nova_compute[225855]:   </metadata>
Jan 20 15:00:51 compute-1 nova_compute[225855]:   <sysinfo type="smbios">
Jan 20 15:00:51 compute-1 nova_compute[225855]:     <system>
Jan 20 15:00:51 compute-1 nova_compute[225855]:       <entry name="manufacturer">RDO</entry>
Jan 20 15:00:51 compute-1 nova_compute[225855]:       <entry name="product">OpenStack Compute</entry>
Jan 20 15:00:51 compute-1 nova_compute[225855]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 15:00:51 compute-1 nova_compute[225855]:       <entry name="serial">3b9ae6db-82fd-4f0d-96f4-92a09c1c1677</entry>
Jan 20 15:00:51 compute-1 nova_compute[225855]:       <entry name="uuid">3b9ae6db-82fd-4f0d-96f4-92a09c1c1677</entry>
Jan 20 15:00:51 compute-1 nova_compute[225855]:       <entry name="family">Virtual Machine</entry>
Jan 20 15:00:51 compute-1 nova_compute[225855]:     </system>
Jan 20 15:00:51 compute-1 nova_compute[225855]:   </sysinfo>
Jan 20 15:00:51 compute-1 nova_compute[225855]:   <os>
Jan 20 15:00:51 compute-1 nova_compute[225855]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 20 15:00:51 compute-1 nova_compute[225855]:     <boot dev="hd"/>
Jan 20 15:00:51 compute-1 nova_compute[225855]:     <smbios mode="sysinfo"/>
Jan 20 15:00:51 compute-1 nova_compute[225855]:   </os>
Jan 20 15:00:51 compute-1 nova_compute[225855]:   <features>
Jan 20 15:00:51 compute-1 nova_compute[225855]:     <acpi/>
Jan 20 15:00:51 compute-1 nova_compute[225855]:     <apic/>
Jan 20 15:00:51 compute-1 nova_compute[225855]:     <vmcoreinfo/>
Jan 20 15:00:51 compute-1 nova_compute[225855]:   </features>
Jan 20 15:00:51 compute-1 nova_compute[225855]:   <clock offset="utc">
Jan 20 15:00:51 compute-1 nova_compute[225855]:     <timer name="pit" tickpolicy="delay"/>
Jan 20 15:00:51 compute-1 nova_compute[225855]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 20 15:00:51 compute-1 nova_compute[225855]:     <timer name="hpet" present="no"/>
Jan 20 15:00:51 compute-1 nova_compute[225855]:   </clock>
Jan 20 15:00:51 compute-1 nova_compute[225855]:   <cpu mode="custom" match="exact">
Jan 20 15:00:51 compute-1 nova_compute[225855]:     <model>Nehalem</model>
Jan 20 15:00:51 compute-1 nova_compute[225855]:     <topology sockets="1" cores="1" threads="1"/>
Jan 20 15:00:51 compute-1 nova_compute[225855]:   </cpu>
Jan 20 15:00:51 compute-1 nova_compute[225855]:   <devices>
Jan 20 15:00:51 compute-1 nova_compute[225855]:     <disk type="network" device="disk">
Jan 20 15:00:51 compute-1 nova_compute[225855]:       <driver type="raw" cache="none"/>
Jan 20 15:00:51 compute-1 nova_compute[225855]:       <source protocol="rbd" name="vms/3b9ae6db-82fd-4f0d-96f4-92a09c1c1677_disk">
Jan 20 15:00:51 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 15:00:51 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 15:00:51 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 15:00:51 compute-1 nova_compute[225855]:       </source>
Jan 20 15:00:51 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 15:00:51 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 15:00:51 compute-1 nova_compute[225855]:       </auth>
Jan 20 15:00:51 compute-1 nova_compute[225855]:       <target dev="vda" bus="virtio"/>
Jan 20 15:00:51 compute-1 nova_compute[225855]:     </disk>
Jan 20 15:00:51 compute-1 nova_compute[225855]:     <disk type="network" device="cdrom">
Jan 20 15:00:51 compute-1 nova_compute[225855]:       <driver type="raw" cache="none"/>
Jan 20 15:00:51 compute-1 nova_compute[225855]:       <source protocol="rbd" name="vms/3b9ae6db-82fd-4f0d-96f4-92a09c1c1677_disk.config">
Jan 20 15:00:51 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 15:00:51 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 15:00:51 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 15:00:51 compute-1 nova_compute[225855]:       </source>
Jan 20 15:00:51 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 15:00:51 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 15:00:51 compute-1 nova_compute[225855]:       </auth>
Jan 20 15:00:51 compute-1 nova_compute[225855]:       <target dev="sda" bus="sata"/>
Jan 20 15:00:51 compute-1 nova_compute[225855]:     </disk>
Jan 20 15:00:51 compute-1 nova_compute[225855]:     <interface type="ethernet">
Jan 20 15:00:51 compute-1 nova_compute[225855]:       <mac address="fa:16:3e:f3:aa:10"/>
Jan 20 15:00:51 compute-1 nova_compute[225855]:       <model type="virtio"/>
Jan 20 15:00:51 compute-1 nova_compute[225855]:       <driver name="vhost" rx_queue_size="512"/>
Jan 20 15:00:51 compute-1 nova_compute[225855]:       <mtu size="1442"/>
Jan 20 15:00:51 compute-1 nova_compute[225855]:       <target dev="tap086e4aee-18"/>
Jan 20 15:00:51 compute-1 nova_compute[225855]:     </interface>
Jan 20 15:00:51 compute-1 nova_compute[225855]:     <serial type="pty">
Jan 20 15:00:51 compute-1 nova_compute[225855]:       <log file="/var/lib/nova/instances/3b9ae6db-82fd-4f0d-96f4-92a09c1c1677/console.log" append="off"/>
Jan 20 15:00:51 compute-1 nova_compute[225855]:     </serial>
Jan 20 15:00:51 compute-1 nova_compute[225855]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 20 15:00:51 compute-1 nova_compute[225855]:     <video>
Jan 20 15:00:51 compute-1 nova_compute[225855]:       <model type="virtio"/>
Jan 20 15:00:51 compute-1 nova_compute[225855]:     </video>
Jan 20 15:00:51 compute-1 nova_compute[225855]:     <input type="tablet" bus="usb"/>
Jan 20 15:00:51 compute-1 nova_compute[225855]:     <rng model="virtio">
Jan 20 15:00:51 compute-1 nova_compute[225855]:       <backend model="random">/dev/urandom</backend>
Jan 20 15:00:51 compute-1 nova_compute[225855]:     </rng>
Jan 20 15:00:51 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root"/>
Jan 20 15:00:51 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:00:51 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:00:51 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:00:51 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:00:51 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:00:51 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:00:51 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:00:51 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:00:51 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:00:51 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:00:51 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:00:51 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:00:51 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:00:51 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:00:51 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:00:51 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:00:51 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:00:51 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:00:51 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:00:51 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:00:51 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:00:51 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:00:51 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:00:51 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:00:51 compute-1 nova_compute[225855]:     <controller type="usb" index="0"/>
Jan 20 15:00:51 compute-1 nova_compute[225855]:     <memballoon model="virtio">
Jan 20 15:00:51 compute-1 nova_compute[225855]:       <stats period="10"/>
Jan 20 15:00:51 compute-1 nova_compute[225855]:     </memballoon>
Jan 20 15:00:51 compute-1 nova_compute[225855]:   </devices>
Jan 20 15:00:51 compute-1 nova_compute[225855]: </domain>
Jan 20 15:00:51 compute-1 nova_compute[225855]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 20 15:00:51 compute-1 nova_compute[225855]: 2026-01-20 15:00:51.632 225859 DEBUG nova.compute.manager [None req-6782266e-ea27-401c-a11a-7f70c6603bcc 37466ba8c9504f1ca6cfbce8add0b52a 41da7b7508634e869bbbe5203e7023cc - - default default] [instance: 3b9ae6db-82fd-4f0d-96f4-92a09c1c1677] Preparing to wait for external event network-vif-plugged-086e4aee-1846-436c-8c93-dab333d31521 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 20 15:00:51 compute-1 nova_compute[225855]: 2026-01-20 15:00:51.633 225859 DEBUG oslo_concurrency.lockutils [None req-6782266e-ea27-401c-a11a-7f70c6603bcc 37466ba8c9504f1ca6cfbce8add0b52a 41da7b7508634e869bbbe5203e7023cc - - default default] Acquiring lock "3b9ae6db-82fd-4f0d-96f4-92a09c1c1677-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:00:51 compute-1 nova_compute[225855]: 2026-01-20 15:00:51.633 225859 DEBUG oslo_concurrency.lockutils [None req-6782266e-ea27-401c-a11a-7f70c6603bcc 37466ba8c9504f1ca6cfbce8add0b52a 41da7b7508634e869bbbe5203e7023cc - - default default] Lock "3b9ae6db-82fd-4f0d-96f4-92a09c1c1677-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:00:51 compute-1 nova_compute[225855]: 2026-01-20 15:00:51.633 225859 DEBUG oslo_concurrency.lockutils [None req-6782266e-ea27-401c-a11a-7f70c6603bcc 37466ba8c9504f1ca6cfbce8add0b52a 41da7b7508634e869bbbe5203e7023cc - - default default] Lock "3b9ae6db-82fd-4f0d-96f4-92a09c1c1677-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:00:51 compute-1 nova_compute[225855]: 2026-01-20 15:00:51.634 225859 DEBUG nova.virt.libvirt.vif [None req-6782266e-ea27-401c-a11a-7f70c6603bcc 37466ba8c9504f1ca6cfbce8add0b52a 41da7b7508634e869bbbe5203e7023cc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T15:00:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerAdvancedOps-server-1699199838',display_name='tempest-TestServerAdvancedOps-server-1699199838',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testserveradvancedops-server-1699199838',id=148,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='41da7b7508634e869bbbe5203e7023cc',ramdisk_id='',reservation_id='r-y4cbtvvj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerAdvancedOps-1175826361',owner_user_name='tempest-TestServerAdvancedOps-1175826361-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T15:00:42Z,user_data=None,user_id='37466ba8c9504f1ca6cfbce8add0b52a',uuid=3b9ae6db-82fd-4f0d-96f4-92a09c1c1677,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "086e4aee-1846-436c-8c93-dab333d31521", "address": "fa:16:3e:f3:aa:10", "network": {"id": "e442dddb-90bf-46c8-b680-3f7b90171ffe", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1981769764-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "41da7b7508634e869bbbe5203e7023cc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap086e4aee-18", "ovs_interfaceid": "086e4aee-1846-436c-8c93-dab333d31521", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 20 15:00:51 compute-1 nova_compute[225855]: 2026-01-20 15:00:51.634 225859 DEBUG nova.network.os_vif_util [None req-6782266e-ea27-401c-a11a-7f70c6603bcc 37466ba8c9504f1ca6cfbce8add0b52a 41da7b7508634e869bbbe5203e7023cc - - default default] Converting VIF {"id": "086e4aee-1846-436c-8c93-dab333d31521", "address": "fa:16:3e:f3:aa:10", "network": {"id": "e442dddb-90bf-46c8-b680-3f7b90171ffe", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1981769764-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "41da7b7508634e869bbbe5203e7023cc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap086e4aee-18", "ovs_interfaceid": "086e4aee-1846-436c-8c93-dab333d31521", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 15:00:51 compute-1 nova_compute[225855]: 2026-01-20 15:00:51.635 225859 DEBUG nova.network.os_vif_util [None req-6782266e-ea27-401c-a11a-7f70c6603bcc 37466ba8c9504f1ca6cfbce8add0b52a 41da7b7508634e869bbbe5203e7023cc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f3:aa:10,bridge_name='br-int',has_traffic_filtering=True,id=086e4aee-1846-436c-8c93-dab333d31521,network=Network(e442dddb-90bf-46c8-b680-3f7b90171ffe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap086e4aee-18') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 15:00:51 compute-1 nova_compute[225855]: 2026-01-20 15:00:51.635 225859 DEBUG os_vif [None req-6782266e-ea27-401c-a11a-7f70c6603bcc 37466ba8c9504f1ca6cfbce8add0b52a 41da7b7508634e869bbbe5203e7023cc - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f3:aa:10,bridge_name='br-int',has_traffic_filtering=True,id=086e4aee-1846-436c-8c93-dab333d31521,network=Network(e442dddb-90bf-46c8-b680-3f7b90171ffe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap086e4aee-18') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 20 15:00:51 compute-1 nova_compute[225855]: 2026-01-20 15:00:51.636 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:00:51 compute-1 nova_compute[225855]: 2026-01-20 15:00:51.636 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:00:51 compute-1 nova_compute[225855]: 2026-01-20 15:00:51.636 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 15:00:51 compute-1 nova_compute[225855]: 2026-01-20 15:00:51.639 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:00:51 compute-1 nova_compute[225855]: 2026-01-20 15:00:51.639 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap086e4aee-18, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:00:51 compute-1 nova_compute[225855]: 2026-01-20 15:00:51.639 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap086e4aee-18, col_values=(('external_ids', {'iface-id': '086e4aee-1846-436c-8c93-dab333d31521', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f3:aa:10', 'vm-uuid': '3b9ae6db-82fd-4f0d-96f4-92a09c1c1677'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:00:51 compute-1 ceph-mon[81775]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #106. Immutable memtables: 0.
Jan 20 15:00:51 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:00:51.640435) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 20 15:00:51 compute-1 ceph-mon[81775]: rocksdb: [db/flush_job.cc:856] [default] [JOB 65] Flushing memtable with next log file: 106
Jan 20 15:00:51 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768921251640552, "job": 65, "event": "flush_started", "num_memtables": 1, "num_entries": 772, "num_deletes": 253, "total_data_size": 1297546, "memory_usage": 1312440, "flush_reason": "Manual Compaction"}
Jan 20 15:00:51 compute-1 ceph-mon[81775]: rocksdb: [db/flush_job.cc:885] [default] [JOB 65] Level-0 flush table #107: started
Jan 20 15:00:51 compute-1 NetworkManager[49104]: <info>  [1768921251.6419] manager: (tap086e4aee-18): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/255)
Jan 20 15:00:51 compute-1 nova_compute[225855]: 2026-01-20 15:00:51.644 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 15:00:51 compute-1 nova_compute[225855]: 2026-01-20 15:00:51.647 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:00:51 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768921251648708, "cf_name": "default", "job": 65, "event": "table_file_creation", "file_number": 107, "file_size": 855033, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 54440, "largest_seqno": 55207, "table_properties": {"data_size": 851339, "index_size": 1474, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1157, "raw_key_size": 8784, "raw_average_key_size": 19, "raw_value_size": 843815, "raw_average_value_size": 1904, "num_data_blocks": 66, "num_entries": 443, "num_filter_entries": 443, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768921205, "oldest_key_time": 1768921205, "file_creation_time": 1768921251, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1539d774-8a6f-4e48-b253-137c44586344", "db_session_id": "LFF7G2OZDOU7TKQ8MKAH", "orig_file_number": 107, "seqno_to_time_mapping": "N/A"}}
Jan 20 15:00:51 compute-1 ceph-mon[81775]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 65] Flush lasted 8315 microseconds, and 3777 cpu microseconds.
Jan 20 15:00:51 compute-1 ceph-mon[81775]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 15:00:51 compute-1 nova_compute[225855]: 2026-01-20 15:00:51.648 225859 INFO os_vif [None req-6782266e-ea27-401c-a11a-7f70c6603bcc 37466ba8c9504f1ca6cfbce8add0b52a 41da7b7508634e869bbbe5203e7023cc - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f3:aa:10,bridge_name='br-int',has_traffic_filtering=True,id=086e4aee-1846-436c-8c93-dab333d31521,network=Network(e442dddb-90bf-46c8-b680-3f7b90171ffe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap086e4aee-18')
Jan 20 15:00:51 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:00:51.648763) [db/flush_job.cc:967] [default] [JOB 65] Level-0 flush table #107: 855033 bytes OK
Jan 20 15:00:51 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:00:51.648780) [db/memtable_list.cc:519] [default] Level-0 commit table #107 started
Jan 20 15:00:51 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:00:51.650063) [db/memtable_list.cc:722] [default] Level-0 commit table #107: memtable #1 done
Jan 20 15:00:51 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:00:51.650077) EVENT_LOG_v1 {"time_micros": 1768921251650073, "job": 65, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 20 15:00:51 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:00:51.650098) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 20 15:00:51 compute-1 ceph-mon[81775]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 65] Try to delete WAL files size 1293437, prev total WAL file size 1293437, number of live WAL files 2.
Jan 20 15:00:51 compute-1 ceph-mon[81775]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000103.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 15:00:51 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:00:51.650778) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034353138' seq:72057594037927935, type:22 .. '7061786F730034373730' seq:0, type:0; will stop at (end)
Jan 20 15:00:51 compute-1 ceph-mon[81775]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 66] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 20 15:00:51 compute-1 ceph-mon[81775]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 65 Base level 0, inputs: [107(834KB)], [105(12MB)]
Jan 20 15:00:51 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768921251650853, "job": 66, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [107], "files_L6": [105], "score": -1, "input_data_size": 13878054, "oldest_snapshot_seqno": -1}
Jan 20 15:00:51 compute-1 nova_compute[225855]: 2026-01-20 15:00:51.704 225859 DEBUG nova.virt.libvirt.driver [None req-6782266e-ea27-401c-a11a-7f70c6603bcc 37466ba8c9504f1ca6cfbce8add0b52a 41da7b7508634e869bbbe5203e7023cc - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 15:00:51 compute-1 nova_compute[225855]: 2026-01-20 15:00:51.705 225859 DEBUG nova.virt.libvirt.driver [None req-6782266e-ea27-401c-a11a-7f70c6603bcc 37466ba8c9504f1ca6cfbce8add0b52a 41da7b7508634e869bbbe5203e7023cc - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 15:00:51 compute-1 nova_compute[225855]: 2026-01-20 15:00:51.705 225859 DEBUG nova.virt.libvirt.driver [None req-6782266e-ea27-401c-a11a-7f70c6603bcc 37466ba8c9504f1ca6cfbce8add0b52a 41da7b7508634e869bbbe5203e7023cc - - default default] No VIF found with MAC fa:16:3e:f3:aa:10, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 20 15:00:51 compute-1 nova_compute[225855]: 2026-01-20 15:00:51.706 225859 INFO nova.virt.libvirt.driver [None req-6782266e-ea27-401c-a11a-7f70c6603bcc 37466ba8c9504f1ca6cfbce8add0b52a 41da7b7508634e869bbbe5203e7023cc - - default default] [instance: 3b9ae6db-82fd-4f0d-96f4-92a09c1c1677] Using config drive
Jan 20 15:00:51 compute-1 nova_compute[225855]: 2026-01-20 15:00:51.733 225859 DEBUG nova.storage.rbd_utils [None req-6782266e-ea27-401c-a11a-7f70c6603bcc 37466ba8c9504f1ca6cfbce8add0b52a 41da7b7508634e869bbbe5203e7023cc - - default default] rbd image 3b9ae6db-82fd-4f0d-96f4-92a09c1c1677_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:00:51 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:00:51 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:00:51 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:00:51.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:00:51 compute-1 ceph-mon[81775]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 66] Generated table #108: 8020 keys, 11993455 bytes, temperature: kUnknown
Jan 20 15:00:51 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768921251780434, "cf_name": "default", "job": 66, "event": "table_file_creation", "file_number": 108, "file_size": 11993455, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11939269, "index_size": 33018, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 20101, "raw_key_size": 208244, "raw_average_key_size": 25, "raw_value_size": 11795682, "raw_average_value_size": 1470, "num_data_blocks": 1295, "num_entries": 8020, "num_filter_entries": 8020, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768917474, "oldest_key_time": 0, "file_creation_time": 1768921251, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1539d774-8a6f-4e48-b253-137c44586344", "db_session_id": "LFF7G2OZDOU7TKQ8MKAH", "orig_file_number": 108, "seqno_to_time_mapping": "N/A"}}
Jan 20 15:00:51 compute-1 ceph-mon[81775]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 15:00:51 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:00:51.780698) [db/compaction/compaction_job.cc:1663] [default] [JOB 66] Compacted 1@0 + 1@6 files to L6 => 11993455 bytes
Jan 20 15:00:51 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:00:51.782102) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 107.0 rd, 92.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.8, 12.4 +0.0 blob) out(11.4 +0.0 blob), read-write-amplify(30.3) write-amplify(14.0) OK, records in: 8538, records dropped: 518 output_compression: NoCompression
Jan 20 15:00:51 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:00:51.782122) EVENT_LOG_v1 {"time_micros": 1768921251782113, "job": 66, "event": "compaction_finished", "compaction_time_micros": 129641, "compaction_time_cpu_micros": 32241, "output_level": 6, "num_output_files": 1, "total_output_size": 11993455, "num_input_records": 8538, "num_output_records": 8020, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 20 15:00:51 compute-1 ceph-mon[81775]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000107.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 15:00:51 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768921251782350, "job": 66, "event": "table_file_deletion", "file_number": 107}
Jan 20 15:00:51 compute-1 ceph-mon[81775]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000105.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 15:00:51 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768921251784279, "job": 66, "event": "table_file_deletion", "file_number": 105}
Jan 20 15:00:51 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:00:51.650657) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:00:51 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:00:51.784315) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:00:51 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:00:51.784319) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:00:51 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:00:51.784320) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:00:51 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:00:51.784321) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:00:51 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:00:51.784323) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:00:51 compute-1 nova_compute[225855]: 2026-01-20 15:00:51.800 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:00:52 compute-1 nova_compute[225855]: 2026-01-20 15:00:52.067 225859 DEBUG nova.network.neutron [req-77b38338-1360-4fd3-9c19-63944101294d req-5d9fb130-7c6b-4c98-961a-5977116485f4 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 3b9ae6db-82fd-4f0d-96f4-92a09c1c1677] Updated VIF entry in instance network info cache for port 086e4aee-1846-436c-8c93-dab333d31521. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 15:00:52 compute-1 nova_compute[225855]: 2026-01-20 15:00:52.067 225859 DEBUG nova.network.neutron [req-77b38338-1360-4fd3-9c19-63944101294d req-5d9fb130-7c6b-4c98-961a-5977116485f4 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 3b9ae6db-82fd-4f0d-96f4-92a09c1c1677] Updating instance_info_cache with network_info: [{"id": "086e4aee-1846-436c-8c93-dab333d31521", "address": "fa:16:3e:f3:aa:10", "network": {"id": "e442dddb-90bf-46c8-b680-3f7b90171ffe", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1981769764-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "41da7b7508634e869bbbe5203e7023cc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap086e4aee-18", "ovs_interfaceid": "086e4aee-1846-436c-8c93-dab333d31521", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:00:52 compute-1 nova_compute[225855]: 2026-01-20 15:00:52.088 225859 DEBUG oslo_concurrency.lockutils [req-77b38338-1360-4fd3-9c19-63944101294d req-5d9fb130-7c6b-4c98-961a-5977116485f4 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-3b9ae6db-82fd-4f0d-96f4-92a09c1c1677" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:00:52 compute-1 nova_compute[225855]: 2026-01-20 15:00:52.237 225859 INFO nova.virt.libvirt.driver [None req-6782266e-ea27-401c-a11a-7f70c6603bcc 37466ba8c9504f1ca6cfbce8add0b52a 41da7b7508634e869bbbe5203e7023cc - - default default] [instance: 3b9ae6db-82fd-4f0d-96f4-92a09c1c1677] Creating config drive at /var/lib/nova/instances/3b9ae6db-82fd-4f0d-96f4-92a09c1c1677/disk.config
Jan 20 15:00:52 compute-1 nova_compute[225855]: 2026-01-20 15:00:52.243 225859 DEBUG oslo_concurrency.processutils [None req-6782266e-ea27-401c-a11a-7f70c6603bcc 37466ba8c9504f1ca6cfbce8add0b52a 41da7b7508634e869bbbe5203e7023cc - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/3b9ae6db-82fd-4f0d-96f4-92a09c1c1677/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpssu39_w3 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:00:52 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:00:52 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:00:52 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:00:52.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:00:52 compute-1 nova_compute[225855]: 2026-01-20 15:00:52.378 225859 DEBUG oslo_concurrency.processutils [None req-6782266e-ea27-401c-a11a-7f70c6603bcc 37466ba8c9504f1ca6cfbce8add0b52a 41da7b7508634e869bbbe5203e7023cc - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/3b9ae6db-82fd-4f0d-96f4-92a09c1c1677/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpssu39_w3" returned: 0 in 0.135s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:00:52 compute-1 nova_compute[225855]: 2026-01-20 15:00:52.405 225859 DEBUG nova.storage.rbd_utils [None req-6782266e-ea27-401c-a11a-7f70c6603bcc 37466ba8c9504f1ca6cfbce8add0b52a 41da7b7508634e869bbbe5203e7023cc - - default default] rbd image 3b9ae6db-82fd-4f0d-96f4-92a09c1c1677_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:00:52 compute-1 nova_compute[225855]: 2026-01-20 15:00:52.410 225859 DEBUG oslo_concurrency.processutils [None req-6782266e-ea27-401c-a11a-7f70c6603bcc 37466ba8c9504f1ca6cfbce8add0b52a 41da7b7508634e869bbbe5203e7023cc - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/3b9ae6db-82fd-4f0d-96f4-92a09c1c1677/disk.config 3b9ae6db-82fd-4f0d-96f4-92a09c1c1677_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:00:52 compute-1 nova_compute[225855]: 2026-01-20 15:00:52.611 225859 DEBUG oslo_concurrency.processutils [None req-6782266e-ea27-401c-a11a-7f70c6603bcc 37466ba8c9504f1ca6cfbce8add0b52a 41da7b7508634e869bbbe5203e7023cc - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/3b9ae6db-82fd-4f0d-96f4-92a09c1c1677/disk.config 3b9ae6db-82fd-4f0d-96f4-92a09c1c1677_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.201s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:00:52 compute-1 nova_compute[225855]: 2026-01-20 15:00:52.612 225859 INFO nova.virt.libvirt.driver [None req-6782266e-ea27-401c-a11a-7f70c6603bcc 37466ba8c9504f1ca6cfbce8add0b52a 41da7b7508634e869bbbe5203e7023cc - - default default] [instance: 3b9ae6db-82fd-4f0d-96f4-92a09c1c1677] Deleting local config drive /var/lib/nova/instances/3b9ae6db-82fd-4f0d-96f4-92a09c1c1677/disk.config because it was imported into RBD.
Jan 20 15:00:52 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e326 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:00:52 compute-1 kernel: tap086e4aee-18: entered promiscuous mode
Jan 20 15:00:52 compute-1 NetworkManager[49104]: <info>  [1768921252.6591] manager: (tap086e4aee-18): new Tun device (/org/freedesktop/NetworkManager/Devices/256)
Jan 20 15:00:52 compute-1 ceph-mon[81775]: pgmap v2269: 321 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 318 active+clean; 246 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 7.8 MiB/s rd, 7.8 MiB/s wr, 140 op/s
Jan 20 15:00:52 compute-1 ovn_controller[130490]: 2026-01-20T15:00:52Z|00590|binding|INFO|Claiming lport 086e4aee-1846-436c-8c93-dab333d31521 for this chassis.
Jan 20 15:00:52 compute-1 ovn_controller[130490]: 2026-01-20T15:00:52Z|00591|binding|INFO|086e4aee-1846-436c-8c93-dab333d31521: Claiming fa:16:3e:f3:aa:10 10.100.0.13
Jan 20 15:00:52 compute-1 nova_compute[225855]: 2026-01-20 15:00:52.662 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:00:52 compute-1 systemd-machined[194361]: New machine qemu-69-instance-00000094.
Jan 20 15:00:52 compute-1 systemd-udevd[284411]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 15:00:52 compute-1 systemd[1]: Started Virtual Machine qemu-69-instance-00000094.
Jan 20 15:00:52 compute-1 NetworkManager[49104]: <info>  [1768921252.7086] device (tap086e4aee-18): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 20 15:00:52 compute-1 NetworkManager[49104]: <info>  [1768921252.7124] device (tap086e4aee-18): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 20 15:00:52 compute-1 ovn_controller[130490]: 2026-01-20T15:00:52Z|00592|binding|INFO|Setting lport 086e4aee-1846-436c-8c93-dab333d31521 ovn-installed in OVS
Jan 20 15:00:52 compute-1 nova_compute[225855]: 2026-01-20 15:00:52.710 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:00:52 compute-1 nova_compute[225855]: 2026-01-20 15:00:52.714 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:00:52 compute-1 ovn_controller[130490]: 2026-01-20T15:00:52Z|00593|binding|INFO|Setting lport 086e4aee-1846-436c-8c93-dab333d31521 up in Southbound
Jan 20 15:00:52 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:00:52.749 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f3:aa:10 10.100.0.13'], port_security=['fa:16:3e:f3:aa:10 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '3b9ae6db-82fd-4f0d-96f4-92a09c1c1677', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e442dddb-90bf-46c8-b680-3f7b90171ffe', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '41da7b7508634e869bbbe5203e7023cc', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9fe8372e-13b8-4476-ba27-8f6ac71e4da5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a0874ec4-826b-4e92-aca5-efa961e93290, chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], logical_port=086e4aee-1846-436c-8c93-dab333d31521) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:00:52 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:00:52.751 140354 INFO neutron.agent.ovn.metadata.agent [-] Port 086e4aee-1846-436c-8c93-dab333d31521 in datapath e442dddb-90bf-46c8-b680-3f7b90171ffe bound to our chassis
Jan 20 15:00:52 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:00:52.751 140354 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network e442dddb-90bf-46c8-b680-3f7b90171ffe or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Jan 20 15:00:52 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:00:52.752 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[62fd0905-5e2d-421f-b9fe-7a6508ecdbc7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:00:52 compute-1 podman[284402]: 2026-01-20 15:00:52.772266042 +0000 UTC m=+0.090548035 container health_status 72be95c12e041eb2cd1a16ad70dfa1391d9f03a5f3180a614ead9459dcfcd46f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Jan 20 15:00:53 compute-1 nova_compute[225855]: 2026-01-20 15:00:53.206 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768921253.2063863, 3b9ae6db-82fd-4f0d-96f4-92a09c1c1677 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:00:53 compute-1 nova_compute[225855]: 2026-01-20 15:00:53.207 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 3b9ae6db-82fd-4f0d-96f4-92a09c1c1677] VM Started (Lifecycle Event)
Jan 20 15:00:53 compute-1 nova_compute[225855]: 2026-01-20 15:00:53.446 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 3b9ae6db-82fd-4f0d-96f4-92a09c1c1677] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:00:53 compute-1 nova_compute[225855]: 2026-01-20 15:00:53.451 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768921253.2092814, 3b9ae6db-82fd-4f0d-96f4-92a09c1c1677 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:00:53 compute-1 nova_compute[225855]: 2026-01-20 15:00:53.452 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 3b9ae6db-82fd-4f0d-96f4-92a09c1c1677] VM Paused (Lifecycle Event)
Jan 20 15:00:53 compute-1 nova_compute[225855]: 2026-01-20 15:00:53.633 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 3b9ae6db-82fd-4f0d-96f4-92a09c1c1677] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:00:53 compute-1 nova_compute[225855]: 2026-01-20 15:00:53.637 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 3b9ae6db-82fd-4f0d-96f4-92a09c1c1677] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 15:00:53 compute-1 nova_compute[225855]: 2026-01-20 15:00:53.678 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 3b9ae6db-82fd-4f0d-96f4-92a09c1c1677] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 15:00:53 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:00:53 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:00:53 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:00:53.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:00:53 compute-1 nova_compute[225855]: 2026-01-20 15:00:53.941 225859 DEBUG nova.compute.manager [req-577aa2c7-4203-4617-9e55-ff97da7b172e req-8670956c-2d1b-4bdc-b464-5f2482a6bab5 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 3b9ae6db-82fd-4f0d-96f4-92a09c1c1677] Received event network-vif-plugged-086e4aee-1846-436c-8c93-dab333d31521 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:00:53 compute-1 nova_compute[225855]: 2026-01-20 15:00:53.942 225859 DEBUG oslo_concurrency.lockutils [req-577aa2c7-4203-4617-9e55-ff97da7b172e req-8670956c-2d1b-4bdc-b464-5f2482a6bab5 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "3b9ae6db-82fd-4f0d-96f4-92a09c1c1677-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:00:53 compute-1 nova_compute[225855]: 2026-01-20 15:00:53.942 225859 DEBUG oslo_concurrency.lockutils [req-577aa2c7-4203-4617-9e55-ff97da7b172e req-8670956c-2d1b-4bdc-b464-5f2482a6bab5 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "3b9ae6db-82fd-4f0d-96f4-92a09c1c1677-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:00:53 compute-1 nova_compute[225855]: 2026-01-20 15:00:53.942 225859 DEBUG oslo_concurrency.lockutils [req-577aa2c7-4203-4617-9e55-ff97da7b172e req-8670956c-2d1b-4bdc-b464-5f2482a6bab5 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "3b9ae6db-82fd-4f0d-96f4-92a09c1c1677-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:00:53 compute-1 nova_compute[225855]: 2026-01-20 15:00:53.942 225859 DEBUG nova.compute.manager [req-577aa2c7-4203-4617-9e55-ff97da7b172e req-8670956c-2d1b-4bdc-b464-5f2482a6bab5 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 3b9ae6db-82fd-4f0d-96f4-92a09c1c1677] Processing event network-vif-plugged-086e4aee-1846-436c-8c93-dab333d31521 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 20 15:00:53 compute-1 nova_compute[225855]: 2026-01-20 15:00:53.943 225859 DEBUG nova.compute.manager [None req-6782266e-ea27-401c-a11a-7f70c6603bcc 37466ba8c9504f1ca6cfbce8add0b52a 41da7b7508634e869bbbe5203e7023cc - - default default] [instance: 3b9ae6db-82fd-4f0d-96f4-92a09c1c1677] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 20 15:00:53 compute-1 nova_compute[225855]: 2026-01-20 15:00:53.946 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768921253.9464118, 3b9ae6db-82fd-4f0d-96f4-92a09c1c1677 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:00:53 compute-1 nova_compute[225855]: 2026-01-20 15:00:53.946 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 3b9ae6db-82fd-4f0d-96f4-92a09c1c1677] VM Resumed (Lifecycle Event)
Jan 20 15:00:53 compute-1 nova_compute[225855]: 2026-01-20 15:00:53.948 225859 DEBUG nova.virt.libvirt.driver [None req-6782266e-ea27-401c-a11a-7f70c6603bcc 37466ba8c9504f1ca6cfbce8add0b52a 41da7b7508634e869bbbe5203e7023cc - - default default] [instance: 3b9ae6db-82fd-4f0d-96f4-92a09c1c1677] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 20 15:00:53 compute-1 nova_compute[225855]: 2026-01-20 15:00:53.951 225859 INFO nova.virt.libvirt.driver [-] [instance: 3b9ae6db-82fd-4f0d-96f4-92a09c1c1677] Instance spawned successfully.
Jan 20 15:00:53 compute-1 nova_compute[225855]: 2026-01-20 15:00:53.951 225859 DEBUG nova.virt.libvirt.driver [None req-6782266e-ea27-401c-a11a-7f70c6603bcc 37466ba8c9504f1ca6cfbce8add0b52a 41da7b7508634e869bbbe5203e7023cc - - default default] [instance: 3b9ae6db-82fd-4f0d-96f4-92a09c1c1677] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 20 15:00:53 compute-1 nova_compute[225855]: 2026-01-20 15:00:53.966 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 3b9ae6db-82fd-4f0d-96f4-92a09c1c1677] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:00:53 compute-1 nova_compute[225855]: 2026-01-20 15:00:53.972 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 3b9ae6db-82fd-4f0d-96f4-92a09c1c1677] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 15:00:53 compute-1 nova_compute[225855]: 2026-01-20 15:00:53.975 225859 DEBUG nova.virt.libvirt.driver [None req-6782266e-ea27-401c-a11a-7f70c6603bcc 37466ba8c9504f1ca6cfbce8add0b52a 41da7b7508634e869bbbe5203e7023cc - - default default] [instance: 3b9ae6db-82fd-4f0d-96f4-92a09c1c1677] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:00:53 compute-1 nova_compute[225855]: 2026-01-20 15:00:53.976 225859 DEBUG nova.virt.libvirt.driver [None req-6782266e-ea27-401c-a11a-7f70c6603bcc 37466ba8c9504f1ca6cfbce8add0b52a 41da7b7508634e869bbbe5203e7023cc - - default default] [instance: 3b9ae6db-82fd-4f0d-96f4-92a09c1c1677] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:00:53 compute-1 nova_compute[225855]: 2026-01-20 15:00:53.976 225859 DEBUG nova.virt.libvirt.driver [None req-6782266e-ea27-401c-a11a-7f70c6603bcc 37466ba8c9504f1ca6cfbce8add0b52a 41da7b7508634e869bbbe5203e7023cc - - default default] [instance: 3b9ae6db-82fd-4f0d-96f4-92a09c1c1677] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:00:53 compute-1 nova_compute[225855]: 2026-01-20 15:00:53.977 225859 DEBUG nova.virt.libvirt.driver [None req-6782266e-ea27-401c-a11a-7f70c6603bcc 37466ba8c9504f1ca6cfbce8add0b52a 41da7b7508634e869bbbe5203e7023cc - - default default] [instance: 3b9ae6db-82fd-4f0d-96f4-92a09c1c1677] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:00:53 compute-1 nova_compute[225855]: 2026-01-20 15:00:53.977 225859 DEBUG nova.virt.libvirt.driver [None req-6782266e-ea27-401c-a11a-7f70c6603bcc 37466ba8c9504f1ca6cfbce8add0b52a 41da7b7508634e869bbbe5203e7023cc - - default default] [instance: 3b9ae6db-82fd-4f0d-96f4-92a09c1c1677] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:00:53 compute-1 nova_compute[225855]: 2026-01-20 15:00:53.977 225859 DEBUG nova.virt.libvirt.driver [None req-6782266e-ea27-401c-a11a-7f70c6603bcc 37466ba8c9504f1ca6cfbce8add0b52a 41da7b7508634e869bbbe5203e7023cc - - default default] [instance: 3b9ae6db-82fd-4f0d-96f4-92a09c1c1677] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:00:54 compute-1 nova_compute[225855]: 2026-01-20 15:00:54.018 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 3b9ae6db-82fd-4f0d-96f4-92a09c1c1677] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 15:00:54 compute-1 nova_compute[225855]: 2026-01-20 15:00:54.048 225859 INFO nova.compute.manager [None req-6782266e-ea27-401c-a11a-7f70c6603bcc 37466ba8c9504f1ca6cfbce8add0b52a 41da7b7508634e869bbbe5203e7023cc - - default default] [instance: 3b9ae6db-82fd-4f0d-96f4-92a09c1c1677] Took 11.17 seconds to spawn the instance on the hypervisor.
Jan 20 15:00:54 compute-1 nova_compute[225855]: 2026-01-20 15:00:54.049 225859 DEBUG nova.compute.manager [None req-6782266e-ea27-401c-a11a-7f70c6603bcc 37466ba8c9504f1ca6cfbce8add0b52a 41da7b7508634e869bbbe5203e7023cc - - default default] [instance: 3b9ae6db-82fd-4f0d-96f4-92a09c1c1677] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:00:54 compute-1 nova_compute[225855]: 2026-01-20 15:00:54.140 225859 INFO nova.compute.manager [None req-6782266e-ea27-401c-a11a-7f70c6603bcc 37466ba8c9504f1ca6cfbce8add0b52a 41da7b7508634e869bbbe5203e7023cc - - default default] [instance: 3b9ae6db-82fd-4f0d-96f4-92a09c1c1677] Took 12.78 seconds to build instance.
Jan 20 15:00:54 compute-1 nova_compute[225855]: 2026-01-20 15:00:54.161 225859 DEBUG oslo_concurrency.lockutils [None req-6782266e-ea27-401c-a11a-7f70c6603bcc 37466ba8c9504f1ca6cfbce8add0b52a 41da7b7508634e869bbbe5203e7023cc - - default default] Lock "3b9ae6db-82fd-4f0d-96f4-92a09c1c1677" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.900s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:00:54 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:00:54 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:00:54 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:00:54.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:00:54 compute-1 nova_compute[225855]: 2026-01-20 15:00:54.582 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:00:54 compute-1 nova_compute[225855]: 2026-01-20 15:00:54.582 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:00:54 compute-1 sudo[284481]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:00:54 compute-1 sudo[284481]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:00:54 compute-1 sudo[284481]: pam_unix(sudo:session): session closed for user root
Jan 20 15:00:54 compute-1 sudo[284506]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:00:54 compute-1 sudo[284506]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:00:54 compute-1 sudo[284506]: pam_unix(sudo:session): session closed for user root
Jan 20 15:00:55 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e327 e327: 3 total, 3 up, 3 in
Jan 20 15:00:55 compute-1 ceph-mon[81775]: pgmap v2270: 321 pgs: 321 active+clean; 246 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 7.1 MiB/s rd, 7.0 MiB/s wr, 155 op/s
Jan 20 15:00:55 compute-1 ceph-mon[81775]: osdmap e327: 3 total, 3 up, 3 in
Jan 20 15:00:55 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:00:55 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:00:55 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:00:55.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:00:56 compute-1 nova_compute[225855]: 2026-01-20 15:00:56.183 225859 DEBUG nova.compute.manager [req-d501a09c-d195-4c14-8170-f1497ef13b7a req-3b29a097-c578-4f0d-b25a-7f2bdc104690 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 3b9ae6db-82fd-4f0d-96f4-92a09c1c1677] Received event network-vif-plugged-086e4aee-1846-436c-8c93-dab333d31521 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:00:56 compute-1 nova_compute[225855]: 2026-01-20 15:00:56.184 225859 DEBUG oslo_concurrency.lockutils [req-d501a09c-d195-4c14-8170-f1497ef13b7a req-3b29a097-c578-4f0d-b25a-7f2bdc104690 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "3b9ae6db-82fd-4f0d-96f4-92a09c1c1677-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:00:56 compute-1 nova_compute[225855]: 2026-01-20 15:00:56.187 225859 DEBUG oslo_concurrency.lockutils [req-d501a09c-d195-4c14-8170-f1497ef13b7a req-3b29a097-c578-4f0d-b25a-7f2bdc104690 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "3b9ae6db-82fd-4f0d-96f4-92a09c1c1677-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:00:56 compute-1 nova_compute[225855]: 2026-01-20 15:00:56.188 225859 DEBUG oslo_concurrency.lockutils [req-d501a09c-d195-4c14-8170-f1497ef13b7a req-3b29a097-c578-4f0d-b25a-7f2bdc104690 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "3b9ae6db-82fd-4f0d-96f4-92a09c1c1677-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:00:56 compute-1 nova_compute[225855]: 2026-01-20 15:00:56.188 225859 DEBUG nova.compute.manager [req-d501a09c-d195-4c14-8170-f1497ef13b7a req-3b29a097-c578-4f0d-b25a-7f2bdc104690 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 3b9ae6db-82fd-4f0d-96f4-92a09c1c1677] No waiting events found dispatching network-vif-plugged-086e4aee-1846-436c-8c93-dab333d31521 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:00:56 compute-1 nova_compute[225855]: 2026-01-20 15:00:56.188 225859 WARNING nova.compute.manager [req-d501a09c-d195-4c14-8170-f1497ef13b7a req-3b29a097-c578-4f0d-b25a-7f2bdc104690 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 3b9ae6db-82fd-4f0d-96f4-92a09c1c1677] Received unexpected event network-vif-plugged-086e4aee-1846-436c-8c93-dab333d31521 for instance with vm_state active and task_state None.
Jan 20 15:00:56 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:00:56 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:00:56 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:00:56.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:00:56 compute-1 nova_compute[225855]: 2026-01-20 15:00:56.641 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:00:56 compute-1 nova_compute[225855]: 2026-01-20 15:00:56.802 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:00:57 compute-1 nova_compute[225855]: 2026-01-20 15:00:57.015 225859 DEBUG nova.objects.instance [None req-acfd2f9b-c323-4854-88e3-dd45ac87b1ba 37466ba8c9504f1ca6cfbce8add0b52a 41da7b7508634e869bbbe5203e7023cc - - default default] Lazy-loading 'pci_devices' on Instance uuid 3b9ae6db-82fd-4f0d-96f4-92a09c1c1677 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:00:57 compute-1 nova_compute[225855]: 2026-01-20 15:00:57.042 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768921257.0416844, 3b9ae6db-82fd-4f0d-96f4-92a09c1c1677 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:00:57 compute-1 nova_compute[225855]: 2026-01-20 15:00:57.042 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 3b9ae6db-82fd-4f0d-96f4-92a09c1c1677] VM Paused (Lifecycle Event)
Jan 20 15:00:57 compute-1 nova_compute[225855]: 2026-01-20 15:00:57.128 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 3b9ae6db-82fd-4f0d-96f4-92a09c1c1677] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:00:57 compute-1 nova_compute[225855]: 2026-01-20 15:00:57.133 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 3b9ae6db-82fd-4f0d-96f4-92a09c1c1677] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 15:00:57 compute-1 nova_compute[225855]: 2026-01-20 15:00:57.209 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 3b9ae6db-82fd-4f0d-96f4-92a09c1c1677] During sync_power_state the instance has a pending task (suspending). Skip.
Jan 20 15:00:57 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:00:57 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:00:57 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:00:57.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:00:57 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e327 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:00:57 compute-1 ceph-mon[81775]: pgmap v2272: 321 pgs: 321 active+clean; 246 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 7.6 MiB/s rd, 6.1 MiB/s wr, 198 op/s
Jan 20 15:00:58 compute-1 kernel: tap086e4aee-18 (unregistering): left promiscuous mode
Jan 20 15:00:58 compute-1 NetworkManager[49104]: <info>  [1768921258.0392] device (tap086e4aee-18): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 20 15:00:58 compute-1 ovn_controller[130490]: 2026-01-20T15:00:58Z|00594|binding|INFO|Releasing lport 086e4aee-1846-436c-8c93-dab333d31521 from this chassis (sb_readonly=0)
Jan 20 15:00:58 compute-1 ovn_controller[130490]: 2026-01-20T15:00:58Z|00595|binding|INFO|Setting lport 086e4aee-1846-436c-8c93-dab333d31521 down in Southbound
Jan 20 15:00:58 compute-1 ovn_controller[130490]: 2026-01-20T15:00:58Z|00596|binding|INFO|Removing iface tap086e4aee-18 ovn-installed in OVS
Jan 20 15:00:58 compute-1 nova_compute[225855]: 2026-01-20 15:00:58.042 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:00:58 compute-1 nova_compute[225855]: 2026-01-20 15:00:58.044 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:00:58 compute-1 nova_compute[225855]: 2026-01-20 15:00:58.059 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:00:58 compute-1 systemd[1]: machine-qemu\x2d69\x2dinstance\x2d00000094.scope: Deactivated successfully.
Jan 20 15:00:58 compute-1 systemd[1]: machine-qemu\x2d69\x2dinstance\x2d00000094.scope: Consumed 3.781s CPU time.
Jan 20 15:00:58 compute-1 systemd-machined[194361]: Machine qemu-69-instance-00000094 terminated.
Jan 20 15:00:58 compute-1 kernel: tap086e4aee-18: entered promiscuous mode
Jan 20 15:00:58 compute-1 kernel: tap086e4aee-18 (unregistering): left promiscuous mode
Jan 20 15:00:58 compute-1 NetworkManager[49104]: <info>  [1768921258.1998] manager: (tap086e4aee-18): new Tun device (/org/freedesktop/NetworkManager/Devices/257)
Jan 20 15:00:58 compute-1 nova_compute[225855]: 2026-01-20 15:00:58.204 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:00:58 compute-1 nova_compute[225855]: 2026-01-20 15:00:58.219 225859 DEBUG nova.compute.manager [None req-acfd2f9b-c323-4854-88e3-dd45ac87b1ba 37466ba8c9504f1ca6cfbce8add0b52a 41da7b7508634e869bbbe5203e7023cc - - default default] [instance: 3b9ae6db-82fd-4f0d-96f4-92a09c1c1677] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:00:58 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:00:58 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:00:58 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:00:58.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:00:58 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:00:58.430 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f3:aa:10 10.100.0.13'], port_security=['fa:16:3e:f3:aa:10 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '3b9ae6db-82fd-4f0d-96f4-92a09c1c1677', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e442dddb-90bf-46c8-b680-3f7b90171ffe', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '41da7b7508634e869bbbe5203e7023cc', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9fe8372e-13b8-4476-ba27-8f6ac71e4da5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a0874ec4-826b-4e92-aca5-efa961e93290, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], logical_port=086e4aee-1846-436c-8c93-dab333d31521) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:00:58 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:00:58.431 140354 INFO neutron.agent.ovn.metadata.agent [-] Port 086e4aee-1846-436c-8c93-dab333d31521 in datapath e442dddb-90bf-46c8-b680-3f7b90171ffe unbound from our chassis
Jan 20 15:00:58 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:00:58.432 140354 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network e442dddb-90bf-46c8-b680-3f7b90171ffe or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Jan 20 15:00:58 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:00:58.432 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[e7eb9be5-17c3-41e8-b5cd-ecf36e87347d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:00:59 compute-1 ceph-mon[81775]: pgmap v2273: 321 pgs: 321 active+clean; 246 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 3.6 MiB/s wr, 100 op/s
Jan 20 15:00:59 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:00:59 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:00:59 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:00:59.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:01:00 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:01:00 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:01:00 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:01:00.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:01:00 compute-1 nova_compute[225855]: 2026-01-20 15:01:00.766 225859 DEBUG nova.compute.manager [req-ee99b29b-8b3c-4f9e-bba9-14d13623edc5 req-a18aa099-4f65-4c5c-903a-7756c3beb7c4 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 3b9ae6db-82fd-4f0d-96f4-92a09c1c1677] Received event network-vif-unplugged-086e4aee-1846-436c-8c93-dab333d31521 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:01:00 compute-1 nova_compute[225855]: 2026-01-20 15:01:00.766 225859 DEBUG oslo_concurrency.lockutils [req-ee99b29b-8b3c-4f9e-bba9-14d13623edc5 req-a18aa099-4f65-4c5c-903a-7756c3beb7c4 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "3b9ae6db-82fd-4f0d-96f4-92a09c1c1677-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:01:00 compute-1 nova_compute[225855]: 2026-01-20 15:01:00.767 225859 DEBUG oslo_concurrency.lockutils [req-ee99b29b-8b3c-4f9e-bba9-14d13623edc5 req-a18aa099-4f65-4c5c-903a-7756c3beb7c4 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "3b9ae6db-82fd-4f0d-96f4-92a09c1c1677-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:01:00 compute-1 nova_compute[225855]: 2026-01-20 15:01:00.767 225859 DEBUG oslo_concurrency.lockutils [req-ee99b29b-8b3c-4f9e-bba9-14d13623edc5 req-a18aa099-4f65-4c5c-903a-7756c3beb7c4 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "3b9ae6db-82fd-4f0d-96f4-92a09c1c1677-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:01:00 compute-1 nova_compute[225855]: 2026-01-20 15:01:00.767 225859 DEBUG nova.compute.manager [req-ee99b29b-8b3c-4f9e-bba9-14d13623edc5 req-a18aa099-4f65-4c5c-903a-7756c3beb7c4 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 3b9ae6db-82fd-4f0d-96f4-92a09c1c1677] No waiting events found dispatching network-vif-unplugged-086e4aee-1846-436c-8c93-dab333d31521 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:01:00 compute-1 nova_compute[225855]: 2026-01-20 15:01:00.767 225859 WARNING nova.compute.manager [req-ee99b29b-8b3c-4f9e-bba9-14d13623edc5 req-a18aa099-4f65-4c5c-903a-7756c3beb7c4 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 3b9ae6db-82fd-4f0d-96f4-92a09c1c1677] Received unexpected event network-vif-unplugged-086e4aee-1846-436c-8c93-dab333d31521 for instance with vm_state suspended and task_state None.
Jan 20 15:01:01 compute-1 CROND[284555]: (root) CMD (run-parts /etc/cron.hourly)
Jan 20 15:01:01 compute-1 run-parts[284558]: (/etc/cron.hourly) starting 0anacron
Jan 20 15:01:01 compute-1 run-parts[284564]: (/etc/cron.hourly) finished 0anacron
Jan 20 15:01:01 compute-1 CROND[284554]: (root) CMDEND (run-parts /etc/cron.hourly)
Jan 20 15:01:01 compute-1 ceph-mon[81775]: pgmap v2274: 321 pgs: 321 active+clean; 189 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 1.7 MiB/s wr, 142 op/s
Jan 20 15:01:01 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/203376434' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:01:01 compute-1 nova_compute[225855]: 2026-01-20 15:01:01.645 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:01:01 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:01:01 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:01:01 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:01:01.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:01:01 compute-1 nova_compute[225855]: 2026-01-20 15:01:01.803 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:01:01 compute-1 nova_compute[225855]: 2026-01-20 15:01:01.944 225859 INFO nova.compute.manager [None req-db0cc92f-8ea4-468c-9d7a-2b3c1a447a80 37466ba8c9504f1ca6cfbce8add0b52a 41da7b7508634e869bbbe5203e7023cc - - default default] [instance: 3b9ae6db-82fd-4f0d-96f4-92a09c1c1677] Resuming
Jan 20 15:01:01 compute-1 nova_compute[225855]: 2026-01-20 15:01:01.945 225859 DEBUG nova.objects.instance [None req-db0cc92f-8ea4-468c-9d7a-2b3c1a447a80 37466ba8c9504f1ca6cfbce8add0b52a 41da7b7508634e869bbbe5203e7023cc - - default default] Lazy-loading 'flavor' on Instance uuid 3b9ae6db-82fd-4f0d-96f4-92a09c1c1677 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:01:01 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:01:01.964 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=46, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '12:bb:42', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '06:92:24:f7:15:56'}, ipsec=False) old=SB_Global(nb_cfg=45) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:01:01 compute-1 nova_compute[225855]: 2026-01-20 15:01:01.967 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:01:01 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:01:01.967 140354 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 20 15:01:01 compute-1 nova_compute[225855]: 2026-01-20 15:01:01.994 225859 DEBUG oslo_concurrency.lockutils [None req-db0cc92f-8ea4-468c-9d7a-2b3c1a447a80 37466ba8c9504f1ca6cfbce8add0b52a 41da7b7508634e869bbbe5203e7023cc - - default default] Acquiring lock "refresh_cache-3b9ae6db-82fd-4f0d-96f4-92a09c1c1677" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:01:01 compute-1 nova_compute[225855]: 2026-01-20 15:01:01.995 225859 DEBUG oslo_concurrency.lockutils [None req-db0cc92f-8ea4-468c-9d7a-2b3c1a447a80 37466ba8c9504f1ca6cfbce8add0b52a 41da7b7508634e869bbbe5203e7023cc - - default default] Acquired lock "refresh_cache-3b9ae6db-82fd-4f0d-96f4-92a09c1c1677" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:01:01 compute-1 nova_compute[225855]: 2026-01-20 15:01:01.995 225859 DEBUG nova.network.neutron [None req-db0cc92f-8ea4-468c-9d7a-2b3c1a447a80 37466ba8c9504f1ca6cfbce8add0b52a 41da7b7508634e869bbbe5203e7023cc - - default default] [instance: 3b9ae6db-82fd-4f0d-96f4-92a09c1c1677] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 15:01:02 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:01:02 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:01:02 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:01:02.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:01:02 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e327 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:01:03 compute-1 nova_compute[225855]: 2026-01-20 15:01:03.160 225859 DEBUG nova.compute.manager [req-ee07a9b2-5bc5-4988-bee9-7bf6b5e67f1b req-630d6936-cad2-4ad4-b4d1-8879c5ed9727 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 3b9ae6db-82fd-4f0d-96f4-92a09c1c1677] Received event network-vif-plugged-086e4aee-1846-436c-8c93-dab333d31521 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:01:03 compute-1 nova_compute[225855]: 2026-01-20 15:01:03.161 225859 DEBUG oslo_concurrency.lockutils [req-ee07a9b2-5bc5-4988-bee9-7bf6b5e67f1b req-630d6936-cad2-4ad4-b4d1-8879c5ed9727 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "3b9ae6db-82fd-4f0d-96f4-92a09c1c1677-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:01:03 compute-1 nova_compute[225855]: 2026-01-20 15:01:03.161 225859 DEBUG oslo_concurrency.lockutils [req-ee07a9b2-5bc5-4988-bee9-7bf6b5e67f1b req-630d6936-cad2-4ad4-b4d1-8879c5ed9727 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "3b9ae6db-82fd-4f0d-96f4-92a09c1c1677-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:01:03 compute-1 nova_compute[225855]: 2026-01-20 15:01:03.161 225859 DEBUG oslo_concurrency.lockutils [req-ee07a9b2-5bc5-4988-bee9-7bf6b5e67f1b req-630d6936-cad2-4ad4-b4d1-8879c5ed9727 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "3b9ae6db-82fd-4f0d-96f4-92a09c1c1677-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:01:03 compute-1 nova_compute[225855]: 2026-01-20 15:01:03.161 225859 DEBUG nova.compute.manager [req-ee07a9b2-5bc5-4988-bee9-7bf6b5e67f1b req-630d6936-cad2-4ad4-b4d1-8879c5ed9727 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 3b9ae6db-82fd-4f0d-96f4-92a09c1c1677] No waiting events found dispatching network-vif-plugged-086e4aee-1846-436c-8c93-dab333d31521 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:01:03 compute-1 nova_compute[225855]: 2026-01-20 15:01:03.162 225859 WARNING nova.compute.manager [req-ee07a9b2-5bc5-4988-bee9-7bf6b5e67f1b req-630d6936-cad2-4ad4-b4d1-8879c5ed9727 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 3b9ae6db-82fd-4f0d-96f4-92a09c1c1677] Received unexpected event network-vif-plugged-086e4aee-1846-436c-8c93-dab333d31521 for instance with vm_state suspended and task_state resuming.
Jan 20 15:01:03 compute-1 ceph-mon[81775]: pgmap v2275: 321 pgs: 321 active+clean; 167 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 17 KiB/s wr, 140 op/s
Jan 20 15:01:03 compute-1 sudo[284566]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:01:03 compute-1 sudo[284566]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:01:03 compute-1 sudo[284566]: pam_unix(sudo:session): session closed for user root
Jan 20 15:01:03 compute-1 sudo[284591]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:01:03 compute-1 sudo[284591]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:01:03 compute-1 sudo[284591]: pam_unix(sudo:session): session closed for user root
Jan 20 15:01:03 compute-1 sudo[284616]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:01:03 compute-1 sudo[284616]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:01:03 compute-1 sudo[284616]: pam_unix(sudo:session): session closed for user root
Jan 20 15:01:03 compute-1 sudo[284641]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 20 15:01:03 compute-1 sudo[284641]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:01:03 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:01:03 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:01:03 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:01:03.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:01:03 compute-1 nova_compute[225855]: 2026-01-20 15:01:03.930 225859 DEBUG nova.network.neutron [None req-db0cc92f-8ea4-468c-9d7a-2b3c1a447a80 37466ba8c9504f1ca6cfbce8add0b52a 41da7b7508634e869bbbe5203e7023cc - - default default] [instance: 3b9ae6db-82fd-4f0d-96f4-92a09c1c1677] Updating instance_info_cache with network_info: [{"id": "086e4aee-1846-436c-8c93-dab333d31521", "address": "fa:16:3e:f3:aa:10", "network": {"id": "e442dddb-90bf-46c8-b680-3f7b90171ffe", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1981769764-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "41da7b7508634e869bbbe5203e7023cc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap086e4aee-18", "ovs_interfaceid": "086e4aee-1846-436c-8c93-dab333d31521", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:01:04 compute-1 sudo[284641]: pam_unix(sudo:session): session closed for user root
Jan 20 15:01:04 compute-1 nova_compute[225855]: 2026-01-20 15:01:04.086 225859 DEBUG oslo_concurrency.lockutils [None req-db0cc92f-8ea4-468c-9d7a-2b3c1a447a80 37466ba8c9504f1ca6cfbce8add0b52a 41da7b7508634e869bbbe5203e7023cc - - default default] Releasing lock "refresh_cache-3b9ae6db-82fd-4f0d-96f4-92a09c1c1677" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:01:04 compute-1 nova_compute[225855]: 2026-01-20 15:01:04.090 225859 DEBUG nova.virt.libvirt.vif [None req-db0cc92f-8ea4-468c-9d7a-2b3c1a447a80 37466ba8c9504f1ca6cfbce8add0b52a 41da7b7508634e869bbbe5203e7023cc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T15:00:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestServerAdvancedOps-server-1699199838',display_name='tempest-TestServerAdvancedOps-server-1699199838',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testserveradvancedops-server-1699199838',id=148,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-20T15:00:54Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='41da7b7508634e869bbbe5203e7023cc',ramdisk_id='',reservation_id='r-y4cbtvvj',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-TestServerAdvancedOps-1175826361',owner_user_name='tempest-TestServerAdvancedOps-1175826361-project-member'},tags=<?>,task_state='resuming',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T15:00:59Z,user_data=None,user_id='37466ba8c9504f1ca6cfbce8add0b52a',uuid=3b9ae6db-82fd-4f0d-96f4-92a09c1c1677,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='suspended') vif={"id": "086e4aee-1846-436c-8c93-dab333d31521", "address": "fa:16:3e:f3:aa:10", "network": {"id": "e442dddb-90bf-46c8-b680-3f7b90171ffe", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1981769764-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "41da7b7508634e869bbbe5203e7023cc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap086e4aee-18", "ovs_interfaceid": "086e4aee-1846-436c-8c93-dab333d31521", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 20 15:01:04 compute-1 nova_compute[225855]: 2026-01-20 15:01:04.090 225859 DEBUG nova.network.os_vif_util [None req-db0cc92f-8ea4-468c-9d7a-2b3c1a447a80 37466ba8c9504f1ca6cfbce8add0b52a 41da7b7508634e869bbbe5203e7023cc - - default default] Converting VIF {"id": "086e4aee-1846-436c-8c93-dab333d31521", "address": "fa:16:3e:f3:aa:10", "network": {"id": "e442dddb-90bf-46c8-b680-3f7b90171ffe", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1981769764-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "41da7b7508634e869bbbe5203e7023cc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap086e4aee-18", "ovs_interfaceid": "086e4aee-1846-436c-8c93-dab333d31521", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 15:01:04 compute-1 nova_compute[225855]: 2026-01-20 15:01:04.091 225859 DEBUG nova.network.os_vif_util [None req-db0cc92f-8ea4-468c-9d7a-2b3c1a447a80 37466ba8c9504f1ca6cfbce8add0b52a 41da7b7508634e869bbbe5203e7023cc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f3:aa:10,bridge_name='br-int',has_traffic_filtering=True,id=086e4aee-1846-436c-8c93-dab333d31521,network=Network(e442dddb-90bf-46c8-b680-3f7b90171ffe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap086e4aee-18') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 15:01:04 compute-1 nova_compute[225855]: 2026-01-20 15:01:04.092 225859 DEBUG os_vif [None req-db0cc92f-8ea4-468c-9d7a-2b3c1a447a80 37466ba8c9504f1ca6cfbce8add0b52a 41da7b7508634e869bbbe5203e7023cc - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f3:aa:10,bridge_name='br-int',has_traffic_filtering=True,id=086e4aee-1846-436c-8c93-dab333d31521,network=Network(e442dddb-90bf-46c8-b680-3f7b90171ffe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap086e4aee-18') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 20 15:01:04 compute-1 nova_compute[225855]: 2026-01-20 15:01:04.092 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:01:04 compute-1 nova_compute[225855]: 2026-01-20 15:01:04.093 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:01:04 compute-1 nova_compute[225855]: 2026-01-20 15:01:04.093 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 15:01:04 compute-1 nova_compute[225855]: 2026-01-20 15:01:04.096 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:01:04 compute-1 nova_compute[225855]: 2026-01-20 15:01:04.096 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap086e4aee-18, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:01:04 compute-1 nova_compute[225855]: 2026-01-20 15:01:04.096 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap086e4aee-18, col_values=(('external_ids', {'iface-id': '086e4aee-1846-436c-8c93-dab333d31521', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f3:aa:10', 'vm-uuid': '3b9ae6db-82fd-4f0d-96f4-92a09c1c1677'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:01:04 compute-1 nova_compute[225855]: 2026-01-20 15:01:04.097 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 15:01:04 compute-1 nova_compute[225855]: 2026-01-20 15:01:04.097 225859 INFO os_vif [None req-db0cc92f-8ea4-468c-9d7a-2b3c1a447a80 37466ba8c9504f1ca6cfbce8add0b52a 41da7b7508634e869bbbe5203e7023cc - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f3:aa:10,bridge_name='br-int',has_traffic_filtering=True,id=086e4aee-1846-436c-8c93-dab333d31521,network=Network(e442dddb-90bf-46c8-b680-3f7b90171ffe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap086e4aee-18')
Jan 20 15:01:04 compute-1 nova_compute[225855]: 2026-01-20 15:01:04.119 225859 DEBUG nova.objects.instance [None req-db0cc92f-8ea4-468c-9d7a-2b3c1a447a80 37466ba8c9504f1ca6cfbce8add0b52a 41da7b7508634e869bbbe5203e7023cc - - default default] Lazy-loading 'numa_topology' on Instance uuid 3b9ae6db-82fd-4f0d-96f4-92a09c1c1677 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:01:04 compute-1 podman[284701]: 2026-01-20 15:01:04.186712553 +0000 UTC m=+0.054649643 container health_status 533cd8e0c3b8b7f910f15cef49e7f879ab46c12ce646a35628c15f13725eb500 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 20 15:01:04 compute-1 kernel: tap086e4aee-18: entered promiscuous mode
Jan 20 15:01:04 compute-1 NetworkManager[49104]: <info>  [1768921264.2201] manager: (tap086e4aee-18): new Tun device (/org/freedesktop/NetworkManager/Devices/258)
Jan 20 15:01:04 compute-1 nova_compute[225855]: 2026-01-20 15:01:04.220 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:01:04 compute-1 ovn_controller[130490]: 2026-01-20T15:01:04Z|00597|binding|INFO|Claiming lport 086e4aee-1846-436c-8c93-dab333d31521 for this chassis.
Jan 20 15:01:04 compute-1 ovn_controller[130490]: 2026-01-20T15:01:04Z|00598|binding|INFO|086e4aee-1846-436c-8c93-dab333d31521: Claiming fa:16:3e:f3:aa:10 10.100.0.13
Jan 20 15:01:04 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:01:04.227 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f3:aa:10 10.100.0.13'], port_security=['fa:16:3e:f3:aa:10 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '3b9ae6db-82fd-4f0d-96f4-92a09c1c1677', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e442dddb-90bf-46c8-b680-3f7b90171ffe', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '41da7b7508634e869bbbe5203e7023cc', 'neutron:revision_number': '5', 'neutron:security_group_ids': '9fe8372e-13b8-4476-ba27-8f6ac71e4da5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a0874ec4-826b-4e92-aca5-efa961e93290, chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], logical_port=086e4aee-1846-436c-8c93-dab333d31521) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:01:04 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:01:04.228 140354 INFO neutron.agent.ovn.metadata.agent [-] Port 086e4aee-1846-436c-8c93-dab333d31521 in datapath e442dddb-90bf-46c8-b680-3f7b90171ffe bound to our chassis
Jan 20 15:01:04 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:01:04.228 140354 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network e442dddb-90bf-46c8-b680-3f7b90171ffe or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Jan 20 15:01:04 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:01:04.229 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[6f7f183e-27ee-4f28-8fcb-7eea23b11925]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:01:04 compute-1 ovn_controller[130490]: 2026-01-20T15:01:04Z|00599|binding|INFO|Setting lport 086e4aee-1846-436c-8c93-dab333d31521 up in Southbound
Jan 20 15:01:04 compute-1 ovn_controller[130490]: 2026-01-20T15:01:04Z|00600|binding|INFO|Setting lport 086e4aee-1846-436c-8c93-dab333d31521 ovn-installed in OVS
Jan 20 15:01:04 compute-1 nova_compute[225855]: 2026-01-20 15:01:04.238 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:01:04 compute-1 nova_compute[225855]: 2026-01-20 15:01:04.242 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:01:04 compute-1 systemd-udevd[284734]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 15:01:04 compute-1 systemd-machined[194361]: New machine qemu-70-instance-00000094.
Jan 20 15:01:04 compute-1 NetworkManager[49104]: <info>  [1768921264.2645] device (tap086e4aee-18): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 20 15:01:04 compute-1 NetworkManager[49104]: <info>  [1768921264.2657] device (tap086e4aee-18): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 20 15:01:04 compute-1 systemd[1]: Started Virtual Machine qemu-70-instance-00000094.
Jan 20 15:01:04 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 20 15:01:04 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:01:04 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 15:01:04 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:01:04 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 15:01:04 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 15:01:04 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:01:04 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:01:04 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:01:04 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:01:04.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:01:04 compute-1 nova_compute[225855]: 2026-01-20 15:01:04.335 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:01:04 compute-1 nova_compute[225855]: 2026-01-20 15:01:04.745 225859 DEBUG nova.virt.libvirt.host [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Removed pending event for 3b9ae6db-82fd-4f0d-96f4-92a09c1c1677 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Jan 20 15:01:04 compute-1 nova_compute[225855]: 2026-01-20 15:01:04.746 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768921264.7451344, 3b9ae6db-82fd-4f0d-96f4-92a09c1c1677 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:01:04 compute-1 nova_compute[225855]: 2026-01-20 15:01:04.746 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 3b9ae6db-82fd-4f0d-96f4-92a09c1c1677] VM Started (Lifecycle Event)
Jan 20 15:01:04 compute-1 nova_compute[225855]: 2026-01-20 15:01:04.762 225859 DEBUG nova.compute.manager [None req-db0cc92f-8ea4-468c-9d7a-2b3c1a447a80 37466ba8c9504f1ca6cfbce8add0b52a 41da7b7508634e869bbbe5203e7023cc - - default default] [instance: 3b9ae6db-82fd-4f0d-96f4-92a09c1c1677] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 20 15:01:04 compute-1 nova_compute[225855]: 2026-01-20 15:01:04.762 225859 DEBUG nova.objects.instance [None req-db0cc92f-8ea4-468c-9d7a-2b3c1a447a80 37466ba8c9504f1ca6cfbce8add0b52a 41da7b7508634e869bbbe5203e7023cc - - default default] Lazy-loading 'pci_devices' on Instance uuid 3b9ae6db-82fd-4f0d-96f4-92a09c1c1677 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:01:05 compute-1 ceph-mon[81775]: pgmap v2276: 321 pgs: 321 active+clean; 167 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 16 KiB/s wr, 122 op/s
Jan 20 15:01:05 compute-1 nova_compute[225855]: 2026-01-20 15:01:05.754 225859 DEBUG nova.compute.manager [req-0ea6a4cc-828e-4437-9e6f-8499645d0a88 req-bf8f50ec-897d-492a-bf84-c95d8346946a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 3b9ae6db-82fd-4f0d-96f4-92a09c1c1677] Received event network-vif-plugged-086e4aee-1846-436c-8c93-dab333d31521 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:01:05 compute-1 nova_compute[225855]: 2026-01-20 15:01:05.754 225859 DEBUG oslo_concurrency.lockutils [req-0ea6a4cc-828e-4437-9e6f-8499645d0a88 req-bf8f50ec-897d-492a-bf84-c95d8346946a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "3b9ae6db-82fd-4f0d-96f4-92a09c1c1677-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:01:05 compute-1 nova_compute[225855]: 2026-01-20 15:01:05.754 225859 DEBUG oslo_concurrency.lockutils [req-0ea6a4cc-828e-4437-9e6f-8499645d0a88 req-bf8f50ec-897d-492a-bf84-c95d8346946a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "3b9ae6db-82fd-4f0d-96f4-92a09c1c1677-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:01:05 compute-1 nova_compute[225855]: 2026-01-20 15:01:05.755 225859 DEBUG oslo_concurrency.lockutils [req-0ea6a4cc-828e-4437-9e6f-8499645d0a88 req-bf8f50ec-897d-492a-bf84-c95d8346946a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "3b9ae6db-82fd-4f0d-96f4-92a09c1c1677-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:01:05 compute-1 nova_compute[225855]: 2026-01-20 15:01:05.755 225859 DEBUG nova.compute.manager [req-0ea6a4cc-828e-4437-9e6f-8499645d0a88 req-bf8f50ec-897d-492a-bf84-c95d8346946a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 3b9ae6db-82fd-4f0d-96f4-92a09c1c1677] No waiting events found dispatching network-vif-plugged-086e4aee-1846-436c-8c93-dab333d31521 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:01:05 compute-1 nova_compute[225855]: 2026-01-20 15:01:05.755 225859 WARNING nova.compute.manager [req-0ea6a4cc-828e-4437-9e6f-8499645d0a88 req-bf8f50ec-897d-492a-bf84-c95d8346946a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 3b9ae6db-82fd-4f0d-96f4-92a09c1c1677] Received unexpected event network-vif-plugged-086e4aee-1846-436c-8c93-dab333d31521 for instance with vm_state suspended and task_state resuming.
Jan 20 15:01:05 compute-1 nova_compute[225855]: 2026-01-20 15:01:05.770 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 3b9ae6db-82fd-4f0d-96f4-92a09c1c1677] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:01:05 compute-1 nova_compute[225855]: 2026-01-20 15:01:05.772 225859 INFO nova.virt.libvirt.driver [-] [instance: 3b9ae6db-82fd-4f0d-96f4-92a09c1c1677] Instance running successfully.
Jan 20 15:01:05 compute-1 virtqemud[225396]: argument unsupported: QEMU guest agent is not configured
Jan 20 15:01:05 compute-1 nova_compute[225855]: 2026-01-20 15:01:05.776 225859 DEBUG nova.virt.libvirt.guest [None req-db0cc92f-8ea4-468c-9d7a-2b3c1a447a80 37466ba8c9504f1ca6cfbce8add0b52a 41da7b7508634e869bbbe5203e7023cc - - default default] [instance: 3b9ae6db-82fd-4f0d-96f4-92a09c1c1677] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200
Jan 20 15:01:05 compute-1 nova_compute[225855]: 2026-01-20 15:01:05.776 225859 DEBUG nova.compute.manager [None req-db0cc92f-8ea4-468c-9d7a-2b3c1a447a80 37466ba8c9504f1ca6cfbce8add0b52a 41da7b7508634e869bbbe5203e7023cc - - default default] [instance: 3b9ae6db-82fd-4f0d-96f4-92a09c1c1677] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:01:05 compute-1 nova_compute[225855]: 2026-01-20 15:01:05.777 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 3b9ae6db-82fd-4f0d-96f4-92a09c1c1677] Synchronizing instance power state after lifecycle event "Started"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 15:01:05 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:01:05 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:01:05 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:01:05.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:01:05 compute-1 nova_compute[225855]: 2026-01-20 15:01:05.844 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 3b9ae6db-82fd-4f0d-96f4-92a09c1c1677] During sync_power_state the instance has a pending task (resuming). Skip.
Jan 20 15:01:05 compute-1 nova_compute[225855]: 2026-01-20 15:01:05.845 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768921264.748487, 3b9ae6db-82fd-4f0d-96f4-92a09c1c1677 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:01:05 compute-1 nova_compute[225855]: 2026-01-20 15:01:05.845 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 3b9ae6db-82fd-4f0d-96f4-92a09c1c1677] VM Resumed (Lifecycle Event)
Jan 20 15:01:06 compute-1 nova_compute[225855]: 2026-01-20 15:01:06.079 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 3b9ae6db-82fd-4f0d-96f4-92a09c1c1677] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:01:06 compute-1 nova_compute[225855]: 2026-01-20 15:01:06.083 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 3b9ae6db-82fd-4f0d-96f4-92a09c1c1677] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 15:01:06 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:01:06 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:01:06 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:01:06.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:01:06 compute-1 nova_compute[225855]: 2026-01-20 15:01:06.647 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:01:06 compute-1 nova_compute[225855]: 2026-01-20 15:01:06.805 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:01:06 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:01:06.969 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5ffd4ac3-9266-4927-98ad-20a17782c725, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '46'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:01:07 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:01:07 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:01:07 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:01:07.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:01:07 compute-1 nova_compute[225855]: 2026-01-20 15:01:07.873 225859 DEBUG nova.compute.manager [req-7f785eed-1da4-4bf4-888e-37973a098707 req-a4ba6737-a010-498e-917e-1248fd07739a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 3b9ae6db-82fd-4f0d-96f4-92a09c1c1677] Received event network-vif-plugged-086e4aee-1846-436c-8c93-dab333d31521 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:01:07 compute-1 nova_compute[225855]: 2026-01-20 15:01:07.873 225859 DEBUG oslo_concurrency.lockutils [req-7f785eed-1da4-4bf4-888e-37973a098707 req-a4ba6737-a010-498e-917e-1248fd07739a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "3b9ae6db-82fd-4f0d-96f4-92a09c1c1677-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:01:07 compute-1 nova_compute[225855]: 2026-01-20 15:01:07.874 225859 DEBUG oslo_concurrency.lockutils [req-7f785eed-1da4-4bf4-888e-37973a098707 req-a4ba6737-a010-498e-917e-1248fd07739a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "3b9ae6db-82fd-4f0d-96f4-92a09c1c1677-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:01:07 compute-1 nova_compute[225855]: 2026-01-20 15:01:07.874 225859 DEBUG oslo_concurrency.lockutils [req-7f785eed-1da4-4bf4-888e-37973a098707 req-a4ba6737-a010-498e-917e-1248fd07739a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "3b9ae6db-82fd-4f0d-96f4-92a09c1c1677-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:01:07 compute-1 nova_compute[225855]: 2026-01-20 15:01:07.874 225859 DEBUG nova.compute.manager [req-7f785eed-1da4-4bf4-888e-37973a098707 req-a4ba6737-a010-498e-917e-1248fd07739a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 3b9ae6db-82fd-4f0d-96f4-92a09c1c1677] No waiting events found dispatching network-vif-plugged-086e4aee-1846-436c-8c93-dab333d31521 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:01:07 compute-1 nova_compute[225855]: 2026-01-20 15:01:07.875 225859 WARNING nova.compute.manager [req-7f785eed-1da4-4bf4-888e-37973a098707 req-a4ba6737-a010-498e-917e-1248fd07739a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 3b9ae6db-82fd-4f0d-96f4-92a09c1c1677] Received unexpected event network-vif-plugged-086e4aee-1846-436c-8c93-dab333d31521 for instance with vm_state active and task_state None.
Jan 20 15:01:07 compute-1 ceph-mon[81775]: pgmap v2277: 321 pgs: 321 active+clean; 167 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 14 KiB/s wr, 100 op/s
Jan 20 15:01:07 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e327 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:01:08 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:01:08 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:01:08 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:01:08.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:01:09 compute-1 ceph-mon[81775]: pgmap v2278: 321 pgs: 321 active+clean; 167 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.0 MiB/s rd, 1.2 KiB/s wr, 66 op/s
Jan 20 15:01:09 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:01:09 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:01:09 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:01:09.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:01:09 compute-1 nova_compute[225855]: 2026-01-20 15:01:09.937 225859 DEBUG nova.objects.instance [None req-7149fc9e-4a24-474f-bf7d-a9deb298197c 37466ba8c9504f1ca6cfbce8add0b52a 41da7b7508634e869bbbe5203e7023cc - - default default] Lazy-loading 'pci_devices' on Instance uuid 3b9ae6db-82fd-4f0d-96f4-92a09c1c1677 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:01:09 compute-1 nova_compute[225855]: 2026-01-20 15:01:09.957 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768921269.957014, 3b9ae6db-82fd-4f0d-96f4-92a09c1c1677 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:01:09 compute-1 nova_compute[225855]: 2026-01-20 15:01:09.958 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 3b9ae6db-82fd-4f0d-96f4-92a09c1c1677] VM Paused (Lifecycle Event)
Jan 20 15:01:09 compute-1 nova_compute[225855]: 2026-01-20 15:01:09.984 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 3b9ae6db-82fd-4f0d-96f4-92a09c1c1677] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:01:09 compute-1 nova_compute[225855]: 2026-01-20 15:01:09.988 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 3b9ae6db-82fd-4f0d-96f4-92a09c1c1677] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 15:01:10 compute-1 nova_compute[225855]: 2026-01-20 15:01:10.021 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 3b9ae6db-82fd-4f0d-96f4-92a09c1c1677] During sync_power_state the instance has a pending task (suspending). Skip.
Jan 20 15:01:10 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:01:10 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:01:10 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:01:10.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:01:10 compute-1 kernel: tap086e4aee-18 (unregistering): left promiscuous mode
Jan 20 15:01:10 compute-1 NetworkManager[49104]: <info>  [1768921270.6107] device (tap086e4aee-18): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 20 15:01:10 compute-1 ovn_controller[130490]: 2026-01-20T15:01:10Z|00601|binding|INFO|Releasing lport 086e4aee-1846-436c-8c93-dab333d31521 from this chassis (sb_readonly=0)
Jan 20 15:01:10 compute-1 ovn_controller[130490]: 2026-01-20T15:01:10Z|00602|binding|INFO|Setting lport 086e4aee-1846-436c-8c93-dab333d31521 down in Southbound
Jan 20 15:01:10 compute-1 ovn_controller[130490]: 2026-01-20T15:01:10Z|00603|binding|INFO|Removing iface tap086e4aee-18 ovn-installed in OVS
Jan 20 15:01:10 compute-1 nova_compute[225855]: 2026-01-20 15:01:10.616 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:01:10 compute-1 nova_compute[225855]: 2026-01-20 15:01:10.619 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:01:10 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:01:10.623 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f3:aa:10 10.100.0.13'], port_security=['fa:16:3e:f3:aa:10 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '3b9ae6db-82fd-4f0d-96f4-92a09c1c1677', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e442dddb-90bf-46c8-b680-3f7b90171ffe', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '41da7b7508634e869bbbe5203e7023cc', 'neutron:revision_number': '6', 'neutron:security_group_ids': '9fe8372e-13b8-4476-ba27-8f6ac71e4da5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a0874ec4-826b-4e92-aca5-efa961e93290, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], logical_port=086e4aee-1846-436c-8c93-dab333d31521) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:01:10 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:01:10.624 140354 INFO neutron.agent.ovn.metadata.agent [-] Port 086e4aee-1846-436c-8c93-dab333d31521 in datapath e442dddb-90bf-46c8-b680-3f7b90171ffe unbound from our chassis
Jan 20 15:01:10 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:01:10.624 140354 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network e442dddb-90bf-46c8-b680-3f7b90171ffe or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Jan 20 15:01:10 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:01:10.625 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[43e0694b-53d3-4f1f-8dc4-96a7341432fa]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:01:10 compute-1 nova_compute[225855]: 2026-01-20 15:01:10.635 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:01:10 compute-1 systemd[1]: machine-qemu\x2d70\x2dinstance\x2d00000094.scope: Deactivated successfully.
Jan 20 15:01:10 compute-1 systemd[1]: machine-qemu\x2d70\x2dinstance\x2d00000094.scope: Consumed 5.782s CPU time.
Jan 20 15:01:10 compute-1 systemd-machined[194361]: Machine qemu-70-instance-00000094 terminated.
Jan 20 15:01:10 compute-1 nova_compute[225855]: 2026-01-20 15:01:10.790 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:01:10 compute-1 nova_compute[225855]: 2026-01-20 15:01:10.795 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:01:10 compute-1 nova_compute[225855]: 2026-01-20 15:01:10.807 225859 DEBUG nova.compute.manager [None req-7149fc9e-4a24-474f-bf7d-a9deb298197c 37466ba8c9504f1ca6cfbce8add0b52a 41da7b7508634e869bbbe5203e7023cc - - default default] [instance: 3b9ae6db-82fd-4f0d-96f4-92a09c1c1677] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:01:10 compute-1 ceph-mon[81775]: pgmap v2279: 321 pgs: 321 active+clean; 167 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.0 MiB/s rd, 1.2 KiB/s wr, 66 op/s
Jan 20 15:01:11 compute-1 nova_compute[225855]: 2026-01-20 15:01:11.649 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:01:11 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:01:11 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:01:11 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:01:11.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:01:11 compute-1 nova_compute[225855]: 2026-01-20 15:01:11.807 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:01:12 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:01:12 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:01:12 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:01:12.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:01:12 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e327 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:01:13 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/1687987888' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:01:13 compute-1 sudo[284809]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:01:13 compute-1 sudo[284809]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:01:13 compute-1 sudo[284809]: pam_unix(sudo:session): session closed for user root
Jan 20 15:01:13 compute-1 sudo[284835]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 15:01:13 compute-1 sudo[284835]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:01:13 compute-1 sudo[284835]: pam_unix(sudo:session): session closed for user root
Jan 20 15:01:13 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:01:13 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:01:13 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:01:13.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:01:14 compute-1 ceph-mon[81775]: pgmap v2280: 321 pgs: 321 active+clean; 167 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 9.2 KiB/s rd, 1.2 KiB/s wr, 13 op/s
Jan 20 15:01:14 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:01:14 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:01:14 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/3160512075' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:01:14 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/3160512075' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:01:14 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:01:14 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:01:14 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:01:14.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:01:14 compute-1 nova_compute[225855]: 2026-01-20 15:01:14.420 225859 INFO nova.compute.manager [None req-c40f9667-ac01-4bfc-952d-90e51ac5e7f5 37466ba8c9504f1ca6cfbce8add0b52a 41da7b7508634e869bbbe5203e7023cc - - default default] [instance: 3b9ae6db-82fd-4f0d-96f4-92a09c1c1677] Resuming
Jan 20 15:01:14 compute-1 nova_compute[225855]: 2026-01-20 15:01:14.420 225859 DEBUG nova.objects.instance [None req-c40f9667-ac01-4bfc-952d-90e51ac5e7f5 37466ba8c9504f1ca6cfbce8add0b52a 41da7b7508634e869bbbe5203e7023cc - - default default] Lazy-loading 'flavor' on Instance uuid 3b9ae6db-82fd-4f0d-96f4-92a09c1c1677 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:01:14 compute-1 nova_compute[225855]: 2026-01-20 15:01:14.545 225859 DEBUG oslo_concurrency.lockutils [None req-c40f9667-ac01-4bfc-952d-90e51ac5e7f5 37466ba8c9504f1ca6cfbce8add0b52a 41da7b7508634e869bbbe5203e7023cc - - default default] Acquiring lock "refresh_cache-3b9ae6db-82fd-4f0d-96f4-92a09c1c1677" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:01:14 compute-1 nova_compute[225855]: 2026-01-20 15:01:14.545 225859 DEBUG oslo_concurrency.lockutils [None req-c40f9667-ac01-4bfc-952d-90e51ac5e7f5 37466ba8c9504f1ca6cfbce8add0b52a 41da7b7508634e869bbbe5203e7023cc - - default default] Acquired lock "refresh_cache-3b9ae6db-82fd-4f0d-96f4-92a09c1c1677" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:01:14 compute-1 nova_compute[225855]: 2026-01-20 15:01:14.546 225859 DEBUG nova.network.neutron [None req-c40f9667-ac01-4bfc-952d-90e51ac5e7f5 37466ba8c9504f1ca6cfbce8add0b52a 41da7b7508634e869bbbe5203e7023cc - - default default] [instance: 3b9ae6db-82fd-4f0d-96f4-92a09c1c1677] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 15:01:15 compute-1 sudo[284860]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:01:15 compute-1 sudo[284860]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:01:15 compute-1 sudo[284860]: pam_unix(sudo:session): session closed for user root
Jan 20 15:01:15 compute-1 sudo[284885]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:01:15 compute-1 sudo[284885]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:01:15 compute-1 sudo[284885]: pam_unix(sudo:session): session closed for user root
Jan 20 15:01:15 compute-1 ceph-mon[81775]: pgmap v2281: 321 pgs: 321 active+clean; 167 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 4.2 KiB/s rd, 4 op/s
Jan 20 15:01:15 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:01:15 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:01:15 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:01:15.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:01:16 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:01:16 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:01:16 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:01:16.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:01:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:01:16.421 140354 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:01:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:01:16.422 140354 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:01:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:01:16.422 140354 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:01:16 compute-1 nova_compute[225855]: 2026-01-20 15:01:16.615 225859 DEBUG nova.network.neutron [None req-c40f9667-ac01-4bfc-952d-90e51ac5e7f5 37466ba8c9504f1ca6cfbce8add0b52a 41da7b7508634e869bbbe5203e7023cc - - default default] [instance: 3b9ae6db-82fd-4f0d-96f4-92a09c1c1677] Updating instance_info_cache with network_info: [{"id": "086e4aee-1846-436c-8c93-dab333d31521", "address": "fa:16:3e:f3:aa:10", "network": {"id": "e442dddb-90bf-46c8-b680-3f7b90171ffe", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1981769764-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "41da7b7508634e869bbbe5203e7023cc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap086e4aee-18", "ovs_interfaceid": "086e4aee-1846-436c-8c93-dab333d31521", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:01:16 compute-1 nova_compute[225855]: 2026-01-20 15:01:16.626 225859 DEBUG nova.compute.manager [req-ecc20836-0a95-46f5-9121-0e05ea05b3bb req-30cfcb20-0d16-4e32-9e9c-e261c0c2fb0c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 3b9ae6db-82fd-4f0d-96f4-92a09c1c1677] Received event network-vif-unplugged-086e4aee-1846-436c-8c93-dab333d31521 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:01:16 compute-1 nova_compute[225855]: 2026-01-20 15:01:16.627 225859 DEBUG oslo_concurrency.lockutils [req-ecc20836-0a95-46f5-9121-0e05ea05b3bb req-30cfcb20-0d16-4e32-9e9c-e261c0c2fb0c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "3b9ae6db-82fd-4f0d-96f4-92a09c1c1677-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:01:16 compute-1 nova_compute[225855]: 2026-01-20 15:01:16.627 225859 DEBUG oslo_concurrency.lockutils [req-ecc20836-0a95-46f5-9121-0e05ea05b3bb req-30cfcb20-0d16-4e32-9e9c-e261c0c2fb0c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "3b9ae6db-82fd-4f0d-96f4-92a09c1c1677-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:01:16 compute-1 nova_compute[225855]: 2026-01-20 15:01:16.627 225859 DEBUG oslo_concurrency.lockutils [req-ecc20836-0a95-46f5-9121-0e05ea05b3bb req-30cfcb20-0d16-4e32-9e9c-e261c0c2fb0c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "3b9ae6db-82fd-4f0d-96f4-92a09c1c1677-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:01:16 compute-1 nova_compute[225855]: 2026-01-20 15:01:16.627 225859 DEBUG nova.compute.manager [req-ecc20836-0a95-46f5-9121-0e05ea05b3bb req-30cfcb20-0d16-4e32-9e9c-e261c0c2fb0c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 3b9ae6db-82fd-4f0d-96f4-92a09c1c1677] No waiting events found dispatching network-vif-unplugged-086e4aee-1846-436c-8c93-dab333d31521 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:01:16 compute-1 nova_compute[225855]: 2026-01-20 15:01:16.627 225859 WARNING nova.compute.manager [req-ecc20836-0a95-46f5-9121-0e05ea05b3bb req-30cfcb20-0d16-4e32-9e9c-e261c0c2fb0c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 3b9ae6db-82fd-4f0d-96f4-92a09c1c1677] Received unexpected event network-vif-unplugged-086e4aee-1846-436c-8c93-dab333d31521 for instance with vm_state suspended and task_state resuming.
Jan 20 15:01:16 compute-1 nova_compute[225855]: 2026-01-20 15:01:16.636 225859 DEBUG oslo_concurrency.lockutils [None req-c40f9667-ac01-4bfc-952d-90e51ac5e7f5 37466ba8c9504f1ca6cfbce8add0b52a 41da7b7508634e869bbbe5203e7023cc - - default default] Releasing lock "refresh_cache-3b9ae6db-82fd-4f0d-96f4-92a09c1c1677" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:01:16 compute-1 nova_compute[225855]: 2026-01-20 15:01:16.641 225859 DEBUG nova.virt.libvirt.vif [None req-c40f9667-ac01-4bfc-952d-90e51ac5e7f5 37466ba8c9504f1ca6cfbce8add0b52a 41da7b7508634e869bbbe5203e7023cc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T15:00:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestServerAdvancedOps-server-1699199838',display_name='tempest-TestServerAdvancedOps-server-1699199838',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testserveradvancedops-server-1699199838',id=148,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-20T15:00:54Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='41da7b7508634e869bbbe5203e7023cc',ramdisk_id='',reservation_id='r-y4cbtvvj',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-TestServerAdvancedOps-1175826361',owner_user_name='tempest-TestServerAdvancedOps-1175826361-project-member'},tags=<?>,task_state='resuming',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T15:01:10Z,user_data=None,user_id='37466ba8c9504f1ca6cfbce8add0b52a',uuid=3b9ae6db-82fd-4f0d-96f4-92a09c1c1677,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='suspended') vif={"id": "086e4aee-1846-436c-8c93-dab333d31521", "address": "fa:16:3e:f3:aa:10", "network": {"id": "e442dddb-90bf-46c8-b680-3f7b90171ffe", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1981769764-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "41da7b7508634e869bbbe5203e7023cc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap086e4aee-18", "ovs_interfaceid": "086e4aee-1846-436c-8c93-dab333d31521", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 20 15:01:16 compute-1 nova_compute[225855]: 2026-01-20 15:01:16.642 225859 DEBUG nova.network.os_vif_util [None req-c40f9667-ac01-4bfc-952d-90e51ac5e7f5 37466ba8c9504f1ca6cfbce8add0b52a 41da7b7508634e869bbbe5203e7023cc - - default default] Converting VIF {"id": "086e4aee-1846-436c-8c93-dab333d31521", "address": "fa:16:3e:f3:aa:10", "network": {"id": "e442dddb-90bf-46c8-b680-3f7b90171ffe", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1981769764-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "41da7b7508634e869bbbe5203e7023cc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap086e4aee-18", "ovs_interfaceid": "086e4aee-1846-436c-8c93-dab333d31521", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 15:01:16 compute-1 nova_compute[225855]: 2026-01-20 15:01:16.643 225859 DEBUG nova.network.os_vif_util [None req-c40f9667-ac01-4bfc-952d-90e51ac5e7f5 37466ba8c9504f1ca6cfbce8add0b52a 41da7b7508634e869bbbe5203e7023cc - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:f3:aa:10,bridge_name='br-int',has_traffic_filtering=True,id=086e4aee-1846-436c-8c93-dab333d31521,network=Network(e442dddb-90bf-46c8-b680-3f7b90171ffe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap086e4aee-18') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 15:01:16 compute-1 nova_compute[225855]: 2026-01-20 15:01:16.643 225859 DEBUG os_vif [None req-c40f9667-ac01-4bfc-952d-90e51ac5e7f5 37466ba8c9504f1ca6cfbce8add0b52a 41da7b7508634e869bbbe5203e7023cc - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:f3:aa:10,bridge_name='br-int',has_traffic_filtering=True,id=086e4aee-1846-436c-8c93-dab333d31521,network=Network(e442dddb-90bf-46c8-b680-3f7b90171ffe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap086e4aee-18') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 20 15:01:16 compute-1 nova_compute[225855]: 2026-01-20 15:01:16.643 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:01:16 compute-1 nova_compute[225855]: 2026-01-20 15:01:16.644 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:01:16 compute-1 nova_compute[225855]: 2026-01-20 15:01:16.644 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 15:01:16 compute-1 nova_compute[225855]: 2026-01-20 15:01:16.647 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:01:16 compute-1 nova_compute[225855]: 2026-01-20 15:01:16.648 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap086e4aee-18, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:01:16 compute-1 nova_compute[225855]: 2026-01-20 15:01:16.648 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap086e4aee-18, col_values=(('external_ids', {'iface-id': '086e4aee-1846-436c-8c93-dab333d31521', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f3:aa:10', 'vm-uuid': '3b9ae6db-82fd-4f0d-96f4-92a09c1c1677'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:01:16 compute-1 nova_compute[225855]: 2026-01-20 15:01:16.649 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 15:01:16 compute-1 nova_compute[225855]: 2026-01-20 15:01:16.650 225859 INFO os_vif [None req-c40f9667-ac01-4bfc-952d-90e51ac5e7f5 37466ba8c9504f1ca6cfbce8add0b52a 41da7b7508634e869bbbe5203e7023cc - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:f3:aa:10,bridge_name='br-int',has_traffic_filtering=True,id=086e4aee-1846-436c-8c93-dab333d31521,network=Network(e442dddb-90bf-46c8-b680-3f7b90171ffe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap086e4aee-18')
Jan 20 15:01:16 compute-1 nova_compute[225855]: 2026-01-20 15:01:16.652 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:01:16 compute-1 nova_compute[225855]: 2026-01-20 15:01:16.670 225859 DEBUG nova.objects.instance [None req-c40f9667-ac01-4bfc-952d-90e51ac5e7f5 37466ba8c9504f1ca6cfbce8add0b52a 41da7b7508634e869bbbe5203e7023cc - - default default] Lazy-loading 'numa_topology' on Instance uuid 3b9ae6db-82fd-4f0d-96f4-92a09c1c1677 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:01:16 compute-1 kernel: tap086e4aee-18: entered promiscuous mode
Jan 20 15:01:16 compute-1 NetworkManager[49104]: <info>  [1768921276.7315] manager: (tap086e4aee-18): new Tun device (/org/freedesktop/NetworkManager/Devices/259)
Jan 20 15:01:16 compute-1 ovn_controller[130490]: 2026-01-20T15:01:16Z|00604|binding|INFO|Claiming lport 086e4aee-1846-436c-8c93-dab333d31521 for this chassis.
Jan 20 15:01:16 compute-1 ovn_controller[130490]: 2026-01-20T15:01:16Z|00605|binding|INFO|086e4aee-1846-436c-8c93-dab333d31521: Claiming fa:16:3e:f3:aa:10 10.100.0.13
Jan 20 15:01:16 compute-1 nova_compute[225855]: 2026-01-20 15:01:16.733 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:01:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:01:16.742 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f3:aa:10 10.100.0.13'], port_security=['fa:16:3e:f3:aa:10 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '3b9ae6db-82fd-4f0d-96f4-92a09c1c1677', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e442dddb-90bf-46c8-b680-3f7b90171ffe', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '41da7b7508634e869bbbe5203e7023cc', 'neutron:revision_number': '7', 'neutron:security_group_ids': '9fe8372e-13b8-4476-ba27-8f6ac71e4da5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a0874ec4-826b-4e92-aca5-efa961e93290, chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], logical_port=086e4aee-1846-436c-8c93-dab333d31521) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:01:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:01:16.743 140354 INFO neutron.agent.ovn.metadata.agent [-] Port 086e4aee-1846-436c-8c93-dab333d31521 in datapath e442dddb-90bf-46c8-b680-3f7b90171ffe bound to our chassis
Jan 20 15:01:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:01:16.744 140354 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network e442dddb-90bf-46c8-b680-3f7b90171ffe or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Jan 20 15:01:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:01:16.745 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[2e9832d3-5794-462c-86de-3bfcfb2d3620]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:01:16 compute-1 ovn_controller[130490]: 2026-01-20T15:01:16Z|00606|binding|INFO|Setting lport 086e4aee-1846-436c-8c93-dab333d31521 up in Southbound
Jan 20 15:01:16 compute-1 ovn_controller[130490]: 2026-01-20T15:01:16Z|00607|binding|INFO|Setting lport 086e4aee-1846-436c-8c93-dab333d31521 ovn-installed in OVS
Jan 20 15:01:16 compute-1 nova_compute[225855]: 2026-01-20 15:01:16.758 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:01:16 compute-1 systemd-udevd[284924]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 15:01:16 compute-1 nova_compute[225855]: 2026-01-20 15:01:16.763 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:01:16 compute-1 systemd-machined[194361]: New machine qemu-71-instance-00000094.
Jan 20 15:01:16 compute-1 NetworkManager[49104]: <info>  [1768921276.7741] device (tap086e4aee-18): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 20 15:01:16 compute-1 NetworkManager[49104]: <info>  [1768921276.7753] device (tap086e4aee-18): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 20 15:01:16 compute-1 systemd[1]: Started Virtual Machine qemu-71-instance-00000094.
Jan 20 15:01:16 compute-1 nova_compute[225855]: 2026-01-20 15:01:16.809 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:01:17 compute-1 nova_compute[225855]: 2026-01-20 15:01:17.558 225859 DEBUG nova.virt.libvirt.host [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Removed pending event for 3b9ae6db-82fd-4f0d-96f4-92a09c1c1677 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Jan 20 15:01:17 compute-1 nova_compute[225855]: 2026-01-20 15:01:17.558 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768921277.5577242, 3b9ae6db-82fd-4f0d-96f4-92a09c1c1677 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:01:17 compute-1 nova_compute[225855]: 2026-01-20 15:01:17.558 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 3b9ae6db-82fd-4f0d-96f4-92a09c1c1677] VM Started (Lifecycle Event)
Jan 20 15:01:17 compute-1 ceph-mon[81775]: pgmap v2282: 321 pgs: 321 active+clean; 167 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 4.2 KiB/s rd, 4 op/s
Jan 20 15:01:17 compute-1 nova_compute[225855]: 2026-01-20 15:01:17.577 225859 DEBUG nova.compute.manager [None req-c40f9667-ac01-4bfc-952d-90e51ac5e7f5 37466ba8c9504f1ca6cfbce8add0b52a 41da7b7508634e869bbbe5203e7023cc - - default default] [instance: 3b9ae6db-82fd-4f0d-96f4-92a09c1c1677] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 20 15:01:17 compute-1 nova_compute[225855]: 2026-01-20 15:01:17.578 225859 DEBUG nova.objects.instance [None req-c40f9667-ac01-4bfc-952d-90e51ac5e7f5 37466ba8c9504f1ca6cfbce8add0b52a 41da7b7508634e869bbbe5203e7023cc - - default default] Lazy-loading 'pci_devices' on Instance uuid 3b9ae6db-82fd-4f0d-96f4-92a09c1c1677 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:01:17 compute-1 nova_compute[225855]: 2026-01-20 15:01:17.584 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 3b9ae6db-82fd-4f0d-96f4-92a09c1c1677] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:01:17 compute-1 nova_compute[225855]: 2026-01-20 15:01:17.588 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 3b9ae6db-82fd-4f0d-96f4-92a09c1c1677] Synchronizing instance power state after lifecycle event "Started"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 15:01:17 compute-1 nova_compute[225855]: 2026-01-20 15:01:17.595 225859 INFO nova.virt.libvirt.driver [-] [instance: 3b9ae6db-82fd-4f0d-96f4-92a09c1c1677] Instance running successfully.
Jan 20 15:01:17 compute-1 virtqemud[225396]: argument unsupported: QEMU guest agent is not configured
Jan 20 15:01:17 compute-1 nova_compute[225855]: 2026-01-20 15:01:17.597 225859 DEBUG nova.virt.libvirt.guest [None req-c40f9667-ac01-4bfc-952d-90e51ac5e7f5 37466ba8c9504f1ca6cfbce8add0b52a 41da7b7508634e869bbbe5203e7023cc - - default default] [instance: 3b9ae6db-82fd-4f0d-96f4-92a09c1c1677] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200
Jan 20 15:01:17 compute-1 nova_compute[225855]: 2026-01-20 15:01:17.597 225859 DEBUG nova.compute.manager [None req-c40f9667-ac01-4bfc-952d-90e51ac5e7f5 37466ba8c9504f1ca6cfbce8add0b52a 41da7b7508634e869bbbe5203e7023cc - - default default] [instance: 3b9ae6db-82fd-4f0d-96f4-92a09c1c1677] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:01:17 compute-1 nova_compute[225855]: 2026-01-20 15:01:17.631 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 3b9ae6db-82fd-4f0d-96f4-92a09c1c1677] During sync_power_state the instance has a pending task (resuming). Skip.
Jan 20 15:01:17 compute-1 nova_compute[225855]: 2026-01-20 15:01:17.632 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768921277.5636852, 3b9ae6db-82fd-4f0d-96f4-92a09c1c1677 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:01:17 compute-1 nova_compute[225855]: 2026-01-20 15:01:17.632 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 3b9ae6db-82fd-4f0d-96f4-92a09c1c1677] VM Resumed (Lifecycle Event)
Jan 20 15:01:17 compute-1 nova_compute[225855]: 2026-01-20 15:01:17.658 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 3b9ae6db-82fd-4f0d-96f4-92a09c1c1677] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:01:17 compute-1 nova_compute[225855]: 2026-01-20 15:01:17.667 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 3b9ae6db-82fd-4f0d-96f4-92a09c1c1677] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 15:01:17 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:01:17 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:01:17 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:01:17.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:01:17 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:01:17 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1919300651' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:01:17 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e327 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:01:18 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:01:18 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:01:18 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:01:18.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:01:18 compute-1 nova_compute[225855]: 2026-01-20 15:01:18.738 225859 DEBUG nova.compute.manager [req-e08989d6-c841-449f-9dc4-7bacc87b7f5e req-90f136ed-61af-40cb-b507-138ff93265a2 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 3b9ae6db-82fd-4f0d-96f4-92a09c1c1677] Received event network-vif-plugged-086e4aee-1846-436c-8c93-dab333d31521 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:01:18 compute-1 nova_compute[225855]: 2026-01-20 15:01:18.739 225859 DEBUG oslo_concurrency.lockutils [req-e08989d6-c841-449f-9dc4-7bacc87b7f5e req-90f136ed-61af-40cb-b507-138ff93265a2 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "3b9ae6db-82fd-4f0d-96f4-92a09c1c1677-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:01:18 compute-1 nova_compute[225855]: 2026-01-20 15:01:18.739 225859 DEBUG oslo_concurrency.lockutils [req-e08989d6-c841-449f-9dc4-7bacc87b7f5e req-90f136ed-61af-40cb-b507-138ff93265a2 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "3b9ae6db-82fd-4f0d-96f4-92a09c1c1677-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:01:18 compute-1 nova_compute[225855]: 2026-01-20 15:01:18.739 225859 DEBUG oslo_concurrency.lockutils [req-e08989d6-c841-449f-9dc4-7bacc87b7f5e req-90f136ed-61af-40cb-b507-138ff93265a2 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "3b9ae6db-82fd-4f0d-96f4-92a09c1c1677-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:01:18 compute-1 nova_compute[225855]: 2026-01-20 15:01:18.739 225859 DEBUG nova.compute.manager [req-e08989d6-c841-449f-9dc4-7bacc87b7f5e req-90f136ed-61af-40cb-b507-138ff93265a2 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 3b9ae6db-82fd-4f0d-96f4-92a09c1c1677] No waiting events found dispatching network-vif-plugged-086e4aee-1846-436c-8c93-dab333d31521 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:01:18 compute-1 nova_compute[225855]: 2026-01-20 15:01:18.740 225859 WARNING nova.compute.manager [req-e08989d6-c841-449f-9dc4-7bacc87b7f5e req-90f136ed-61af-40cb-b507-138ff93265a2 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 3b9ae6db-82fd-4f0d-96f4-92a09c1c1677] Received unexpected event network-vif-plugged-086e4aee-1846-436c-8c93-dab333d31521 for instance with vm_state active and task_state None.
Jan 20 15:01:18 compute-1 nova_compute[225855]: 2026-01-20 15:01:18.740 225859 DEBUG nova.compute.manager [req-e08989d6-c841-449f-9dc4-7bacc87b7f5e req-90f136ed-61af-40cb-b507-138ff93265a2 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 3b9ae6db-82fd-4f0d-96f4-92a09c1c1677] Received event network-vif-plugged-086e4aee-1846-436c-8c93-dab333d31521 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:01:18 compute-1 nova_compute[225855]: 2026-01-20 15:01:18.740 225859 DEBUG oslo_concurrency.lockutils [req-e08989d6-c841-449f-9dc4-7bacc87b7f5e req-90f136ed-61af-40cb-b507-138ff93265a2 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "3b9ae6db-82fd-4f0d-96f4-92a09c1c1677-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:01:18 compute-1 nova_compute[225855]: 2026-01-20 15:01:18.740 225859 DEBUG oslo_concurrency.lockutils [req-e08989d6-c841-449f-9dc4-7bacc87b7f5e req-90f136ed-61af-40cb-b507-138ff93265a2 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "3b9ae6db-82fd-4f0d-96f4-92a09c1c1677-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:01:18 compute-1 nova_compute[225855]: 2026-01-20 15:01:18.740 225859 DEBUG oslo_concurrency.lockutils [req-e08989d6-c841-449f-9dc4-7bacc87b7f5e req-90f136ed-61af-40cb-b507-138ff93265a2 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "3b9ae6db-82fd-4f0d-96f4-92a09c1c1677-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:01:18 compute-1 nova_compute[225855]: 2026-01-20 15:01:18.741 225859 DEBUG nova.compute.manager [req-e08989d6-c841-449f-9dc4-7bacc87b7f5e req-90f136ed-61af-40cb-b507-138ff93265a2 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 3b9ae6db-82fd-4f0d-96f4-92a09c1c1677] No waiting events found dispatching network-vif-plugged-086e4aee-1846-436c-8c93-dab333d31521 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:01:18 compute-1 nova_compute[225855]: 2026-01-20 15:01:18.741 225859 WARNING nova.compute.manager [req-e08989d6-c841-449f-9dc4-7bacc87b7f5e req-90f136ed-61af-40cb-b507-138ff93265a2 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 3b9ae6db-82fd-4f0d-96f4-92a09c1c1677] Received unexpected event network-vif-plugged-086e4aee-1846-436c-8c93-dab333d31521 for instance with vm_state active and task_state None.
Jan 20 15:01:18 compute-1 nova_compute[225855]: 2026-01-20 15:01:18.741 225859 DEBUG nova.compute.manager [req-e08989d6-c841-449f-9dc4-7bacc87b7f5e req-90f136ed-61af-40cb-b507-138ff93265a2 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 3b9ae6db-82fd-4f0d-96f4-92a09c1c1677] Received event network-vif-plugged-086e4aee-1846-436c-8c93-dab333d31521 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:01:18 compute-1 nova_compute[225855]: 2026-01-20 15:01:18.741 225859 DEBUG oslo_concurrency.lockutils [req-e08989d6-c841-449f-9dc4-7bacc87b7f5e req-90f136ed-61af-40cb-b507-138ff93265a2 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "3b9ae6db-82fd-4f0d-96f4-92a09c1c1677-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:01:18 compute-1 nova_compute[225855]: 2026-01-20 15:01:18.742 225859 DEBUG oslo_concurrency.lockutils [req-e08989d6-c841-449f-9dc4-7bacc87b7f5e req-90f136ed-61af-40cb-b507-138ff93265a2 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "3b9ae6db-82fd-4f0d-96f4-92a09c1c1677-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:01:18 compute-1 nova_compute[225855]: 2026-01-20 15:01:18.742 225859 DEBUG oslo_concurrency.lockutils [req-e08989d6-c841-449f-9dc4-7bacc87b7f5e req-90f136ed-61af-40cb-b507-138ff93265a2 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "3b9ae6db-82fd-4f0d-96f4-92a09c1c1677-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:01:18 compute-1 nova_compute[225855]: 2026-01-20 15:01:18.742 225859 DEBUG nova.compute.manager [req-e08989d6-c841-449f-9dc4-7bacc87b7f5e req-90f136ed-61af-40cb-b507-138ff93265a2 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 3b9ae6db-82fd-4f0d-96f4-92a09c1c1677] No waiting events found dispatching network-vif-plugged-086e4aee-1846-436c-8c93-dab333d31521 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:01:18 compute-1 nova_compute[225855]: 2026-01-20 15:01:18.742 225859 WARNING nova.compute.manager [req-e08989d6-c841-449f-9dc4-7bacc87b7f5e req-90f136ed-61af-40cb-b507-138ff93265a2 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 3b9ae6db-82fd-4f0d-96f4-92a09c1c1677] Received unexpected event network-vif-plugged-086e4aee-1846-436c-8c93-dab333d31521 for instance with vm_state active and task_state None.
Jan 20 15:01:18 compute-1 nova_compute[225855]: 2026-01-20 15:01:18.953 225859 DEBUG oslo_concurrency.lockutils [None req-b57706f9-218c-42cb-8795-8e80f5b2bba6 37466ba8c9504f1ca6cfbce8add0b52a 41da7b7508634e869bbbe5203e7023cc - - default default] Acquiring lock "3b9ae6db-82fd-4f0d-96f4-92a09c1c1677" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:01:18 compute-1 nova_compute[225855]: 2026-01-20 15:01:18.954 225859 DEBUG oslo_concurrency.lockutils [None req-b57706f9-218c-42cb-8795-8e80f5b2bba6 37466ba8c9504f1ca6cfbce8add0b52a 41da7b7508634e869bbbe5203e7023cc - - default default] Lock "3b9ae6db-82fd-4f0d-96f4-92a09c1c1677" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:01:18 compute-1 nova_compute[225855]: 2026-01-20 15:01:18.954 225859 DEBUG oslo_concurrency.lockutils [None req-b57706f9-218c-42cb-8795-8e80f5b2bba6 37466ba8c9504f1ca6cfbce8add0b52a 41da7b7508634e869bbbe5203e7023cc - - default default] Acquiring lock "3b9ae6db-82fd-4f0d-96f4-92a09c1c1677-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:01:18 compute-1 nova_compute[225855]: 2026-01-20 15:01:18.955 225859 DEBUG oslo_concurrency.lockutils [None req-b57706f9-218c-42cb-8795-8e80f5b2bba6 37466ba8c9504f1ca6cfbce8add0b52a 41da7b7508634e869bbbe5203e7023cc - - default default] Lock "3b9ae6db-82fd-4f0d-96f4-92a09c1c1677-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:01:18 compute-1 nova_compute[225855]: 2026-01-20 15:01:18.955 225859 DEBUG oslo_concurrency.lockutils [None req-b57706f9-218c-42cb-8795-8e80f5b2bba6 37466ba8c9504f1ca6cfbce8add0b52a 41da7b7508634e869bbbe5203e7023cc - - default default] Lock "3b9ae6db-82fd-4f0d-96f4-92a09c1c1677-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:01:18 compute-1 nova_compute[225855]: 2026-01-20 15:01:18.956 225859 INFO nova.compute.manager [None req-b57706f9-218c-42cb-8795-8e80f5b2bba6 37466ba8c9504f1ca6cfbce8add0b52a 41da7b7508634e869bbbe5203e7023cc - - default default] [instance: 3b9ae6db-82fd-4f0d-96f4-92a09c1c1677] Terminating instance
Jan 20 15:01:18 compute-1 nova_compute[225855]: 2026-01-20 15:01:18.958 225859 DEBUG nova.compute.manager [None req-b57706f9-218c-42cb-8795-8e80f5b2bba6 37466ba8c9504f1ca6cfbce8add0b52a 41da7b7508634e869bbbe5203e7023cc - - default default] [instance: 3b9ae6db-82fd-4f0d-96f4-92a09c1c1677] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 20 15:01:18 compute-1 kernel: tap086e4aee-18 (unregistering): left promiscuous mode
Jan 20 15:01:18 compute-1 NetworkManager[49104]: <info>  [1768921278.9975] device (tap086e4aee-18): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 20 15:01:19 compute-1 ovn_controller[130490]: 2026-01-20T15:01:19Z|00608|binding|INFO|Releasing lport 086e4aee-1846-436c-8c93-dab333d31521 from this chassis (sb_readonly=0)
Jan 20 15:01:19 compute-1 ovn_controller[130490]: 2026-01-20T15:01:19Z|00609|binding|INFO|Setting lport 086e4aee-1846-436c-8c93-dab333d31521 down in Southbound
Jan 20 15:01:19 compute-1 nova_compute[225855]: 2026-01-20 15:01:19.007 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:01:19 compute-1 ovn_controller[130490]: 2026-01-20T15:01:19Z|00610|binding|INFO|Removing iface tap086e4aee-18 ovn-installed in OVS
Jan 20 15:01:19 compute-1 nova_compute[225855]: 2026-01-20 15:01:19.008 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:01:19 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:01:19.013 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f3:aa:10 10.100.0.13'], port_security=['fa:16:3e:f3:aa:10 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '3b9ae6db-82fd-4f0d-96f4-92a09c1c1677', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e442dddb-90bf-46c8-b680-3f7b90171ffe', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '41da7b7508634e869bbbe5203e7023cc', 'neutron:revision_number': '8', 'neutron:security_group_ids': '9fe8372e-13b8-4476-ba27-8f6ac71e4da5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a0874ec4-826b-4e92-aca5-efa961e93290, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], logical_port=086e4aee-1846-436c-8c93-dab333d31521) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:01:19 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:01:19.014 140354 INFO neutron.agent.ovn.metadata.agent [-] Port 086e4aee-1846-436c-8c93-dab333d31521 in datapath e442dddb-90bf-46c8-b680-3f7b90171ffe unbound from our chassis
Jan 20 15:01:19 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:01:19.015 140354 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network e442dddb-90bf-46c8-b680-3f7b90171ffe or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Jan 20 15:01:19 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:01:19.015 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[c077dda9-8c32-40d7-9225-79a6b2ff0b79]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:01:19 compute-1 nova_compute[225855]: 2026-01-20 15:01:19.025 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:01:19 compute-1 systemd[1]: machine-qemu\x2d71\x2dinstance\x2d00000094.scope: Deactivated successfully.
Jan 20 15:01:19 compute-1 systemd[1]: machine-qemu\x2d71\x2dinstance\x2d00000094.scope: Consumed 2.122s CPU time.
Jan 20 15:01:19 compute-1 systemd-machined[194361]: Machine qemu-71-instance-00000094 terminated.
Jan 20 15:01:19 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/1919300651' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:01:19 compute-1 ceph-mon[81775]: pgmap v2283: 321 pgs: 321 active+clean; 167 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:01:19 compute-1 NetworkManager[49104]: <info>  [1768921279.1731] manager: (tap086e4aee-18): new Tun device (/org/freedesktop/NetworkManager/Devices/260)
Jan 20 15:01:19 compute-1 nova_compute[225855]: 2026-01-20 15:01:19.175 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:01:19 compute-1 nova_compute[225855]: 2026-01-20 15:01:19.181 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:01:19 compute-1 nova_compute[225855]: 2026-01-20 15:01:19.192 225859 INFO nova.virt.libvirt.driver [-] [instance: 3b9ae6db-82fd-4f0d-96f4-92a09c1c1677] Instance destroyed successfully.
Jan 20 15:01:19 compute-1 nova_compute[225855]: 2026-01-20 15:01:19.192 225859 DEBUG nova.objects.instance [None req-b57706f9-218c-42cb-8795-8e80f5b2bba6 37466ba8c9504f1ca6cfbce8add0b52a 41da7b7508634e869bbbe5203e7023cc - - default default] Lazy-loading 'resources' on Instance uuid 3b9ae6db-82fd-4f0d-96f4-92a09c1c1677 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:01:19 compute-1 nova_compute[225855]: 2026-01-20 15:01:19.213 225859 DEBUG nova.virt.libvirt.vif [None req-b57706f9-218c-42cb-8795-8e80f5b2bba6 37466ba8c9504f1ca6cfbce8add0b52a 41da7b7508634e869bbbe5203e7023cc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T15:00:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestServerAdvancedOps-server-1699199838',display_name='tempest-TestServerAdvancedOps-server-1699199838',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testserveradvancedops-server-1699199838',id=148,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-20T15:00:54Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='41da7b7508634e869bbbe5203e7023cc',ramdisk_id='',reservation_id='r-y4cbtvvj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestServerAdvancedOps-1175826361',owner_user_name='tempest-TestServerAdvancedOps-1175826361-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T15:01:17Z,user_data=None,user_id='37466ba8c9504f1ca6cfbce8add0b52a',uuid=3b9ae6db-82fd-4f0d-96f4-92a09c1c1677,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "086e4aee-1846-436c-8c93-dab333d31521", "address": "fa:16:3e:f3:aa:10", "network": {"id": "e442dddb-90bf-46c8-b680-3f7b90171ffe", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1981769764-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "41da7b7508634e869bbbe5203e7023cc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap086e4aee-18", "ovs_interfaceid": "086e4aee-1846-436c-8c93-dab333d31521", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 20 15:01:19 compute-1 nova_compute[225855]: 2026-01-20 15:01:19.213 225859 DEBUG nova.network.os_vif_util [None req-b57706f9-218c-42cb-8795-8e80f5b2bba6 37466ba8c9504f1ca6cfbce8add0b52a 41da7b7508634e869bbbe5203e7023cc - - default default] Converting VIF {"id": "086e4aee-1846-436c-8c93-dab333d31521", "address": "fa:16:3e:f3:aa:10", "network": {"id": "e442dddb-90bf-46c8-b680-3f7b90171ffe", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1981769764-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "41da7b7508634e869bbbe5203e7023cc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap086e4aee-18", "ovs_interfaceid": "086e4aee-1846-436c-8c93-dab333d31521", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 15:01:19 compute-1 nova_compute[225855]: 2026-01-20 15:01:19.214 225859 DEBUG nova.network.os_vif_util [None req-b57706f9-218c-42cb-8795-8e80f5b2bba6 37466ba8c9504f1ca6cfbce8add0b52a 41da7b7508634e869bbbe5203e7023cc - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:f3:aa:10,bridge_name='br-int',has_traffic_filtering=True,id=086e4aee-1846-436c-8c93-dab333d31521,network=Network(e442dddb-90bf-46c8-b680-3f7b90171ffe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap086e4aee-18') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 15:01:19 compute-1 nova_compute[225855]: 2026-01-20 15:01:19.214 225859 DEBUG os_vif [None req-b57706f9-218c-42cb-8795-8e80f5b2bba6 37466ba8c9504f1ca6cfbce8add0b52a 41da7b7508634e869bbbe5203e7023cc - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:f3:aa:10,bridge_name='br-int',has_traffic_filtering=True,id=086e4aee-1846-436c-8c93-dab333d31521,network=Network(e442dddb-90bf-46c8-b680-3f7b90171ffe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap086e4aee-18') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 20 15:01:19 compute-1 nova_compute[225855]: 2026-01-20 15:01:19.216 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:01:19 compute-1 nova_compute[225855]: 2026-01-20 15:01:19.216 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap086e4aee-18, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:01:19 compute-1 nova_compute[225855]: 2026-01-20 15:01:19.218 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:01:19 compute-1 nova_compute[225855]: 2026-01-20 15:01:19.219 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:01:19 compute-1 nova_compute[225855]: 2026-01-20 15:01:19.222 225859 INFO os_vif [None req-b57706f9-218c-42cb-8795-8e80f5b2bba6 37466ba8c9504f1ca6cfbce8add0b52a 41da7b7508634e869bbbe5203e7023cc - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:f3:aa:10,bridge_name='br-int',has_traffic_filtering=True,id=086e4aee-1846-436c-8c93-dab333d31521,network=Network(e442dddb-90bf-46c8-b680-3f7b90171ffe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap086e4aee-18')
Jan 20 15:01:19 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:01:19 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:01:19 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:01:19.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:01:20 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/1583981785' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:01:20 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/2262638477' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:01:20 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:01:20 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:01:20 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:01:20.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:01:20 compute-1 nova_compute[225855]: 2026-01-20 15:01:20.882 225859 DEBUG nova.compute.manager [req-12c895fc-f6d0-45e9-b45a-eaedac3f0c2f req-422b6e7f-7cf2-4729-9e64-f4a924479861 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 3b9ae6db-82fd-4f0d-96f4-92a09c1c1677] Received event network-vif-unplugged-086e4aee-1846-436c-8c93-dab333d31521 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:01:20 compute-1 nova_compute[225855]: 2026-01-20 15:01:20.882 225859 DEBUG oslo_concurrency.lockutils [req-12c895fc-f6d0-45e9-b45a-eaedac3f0c2f req-422b6e7f-7cf2-4729-9e64-f4a924479861 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "3b9ae6db-82fd-4f0d-96f4-92a09c1c1677-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:01:20 compute-1 nova_compute[225855]: 2026-01-20 15:01:20.883 225859 DEBUG oslo_concurrency.lockutils [req-12c895fc-f6d0-45e9-b45a-eaedac3f0c2f req-422b6e7f-7cf2-4729-9e64-f4a924479861 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "3b9ae6db-82fd-4f0d-96f4-92a09c1c1677-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:01:20 compute-1 nova_compute[225855]: 2026-01-20 15:01:20.883 225859 DEBUG oslo_concurrency.lockutils [req-12c895fc-f6d0-45e9-b45a-eaedac3f0c2f req-422b6e7f-7cf2-4729-9e64-f4a924479861 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "3b9ae6db-82fd-4f0d-96f4-92a09c1c1677-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:01:20 compute-1 nova_compute[225855]: 2026-01-20 15:01:20.883 225859 DEBUG nova.compute.manager [req-12c895fc-f6d0-45e9-b45a-eaedac3f0c2f req-422b6e7f-7cf2-4729-9e64-f4a924479861 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 3b9ae6db-82fd-4f0d-96f4-92a09c1c1677] No waiting events found dispatching network-vif-unplugged-086e4aee-1846-436c-8c93-dab333d31521 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:01:20 compute-1 nova_compute[225855]: 2026-01-20 15:01:20.883 225859 DEBUG nova.compute.manager [req-12c895fc-f6d0-45e9-b45a-eaedac3f0c2f req-422b6e7f-7cf2-4729-9e64-f4a924479861 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 3b9ae6db-82fd-4f0d-96f4-92a09c1c1677] Received event network-vif-unplugged-086e4aee-1846-436c-8c93-dab333d31521 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 20 15:01:20 compute-1 nova_compute[225855]: 2026-01-20 15:01:20.883 225859 DEBUG nova.compute.manager [req-12c895fc-f6d0-45e9-b45a-eaedac3f0c2f req-422b6e7f-7cf2-4729-9e64-f4a924479861 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 3b9ae6db-82fd-4f0d-96f4-92a09c1c1677] Received event network-vif-plugged-086e4aee-1846-436c-8c93-dab333d31521 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:01:20 compute-1 nova_compute[225855]: 2026-01-20 15:01:20.884 225859 DEBUG oslo_concurrency.lockutils [req-12c895fc-f6d0-45e9-b45a-eaedac3f0c2f req-422b6e7f-7cf2-4729-9e64-f4a924479861 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "3b9ae6db-82fd-4f0d-96f4-92a09c1c1677-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:01:20 compute-1 nova_compute[225855]: 2026-01-20 15:01:20.884 225859 DEBUG oslo_concurrency.lockutils [req-12c895fc-f6d0-45e9-b45a-eaedac3f0c2f req-422b6e7f-7cf2-4729-9e64-f4a924479861 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "3b9ae6db-82fd-4f0d-96f4-92a09c1c1677-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:01:20 compute-1 nova_compute[225855]: 2026-01-20 15:01:20.884 225859 DEBUG oslo_concurrency.lockutils [req-12c895fc-f6d0-45e9-b45a-eaedac3f0c2f req-422b6e7f-7cf2-4729-9e64-f4a924479861 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "3b9ae6db-82fd-4f0d-96f4-92a09c1c1677-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:01:20 compute-1 nova_compute[225855]: 2026-01-20 15:01:20.884 225859 DEBUG nova.compute.manager [req-12c895fc-f6d0-45e9-b45a-eaedac3f0c2f req-422b6e7f-7cf2-4729-9e64-f4a924479861 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 3b9ae6db-82fd-4f0d-96f4-92a09c1c1677] No waiting events found dispatching network-vif-plugged-086e4aee-1846-436c-8c93-dab333d31521 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:01:20 compute-1 nova_compute[225855]: 2026-01-20 15:01:20.884 225859 WARNING nova.compute.manager [req-12c895fc-f6d0-45e9-b45a-eaedac3f0c2f req-422b6e7f-7cf2-4729-9e64-f4a924479861 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 3b9ae6db-82fd-4f0d-96f4-92a09c1c1677] Received unexpected event network-vif-plugged-086e4aee-1846-436c-8c93-dab333d31521 for instance with vm_state active and task_state deleting.
Jan 20 15:01:21 compute-1 ceph-mon[81775]: pgmap v2284: 321 pgs: 321 active+clean; 225 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 3.4 MiB/s wr, 66 op/s
Jan 20 15:01:21 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:01:21 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:01:21 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:01:21.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:01:21 compute-1 nova_compute[225855]: 2026-01-20 15:01:21.803 225859 INFO nova.virt.libvirt.driver [None req-b57706f9-218c-42cb-8795-8e80f5b2bba6 37466ba8c9504f1ca6cfbce8add0b52a 41da7b7508634e869bbbe5203e7023cc - - default default] [instance: 3b9ae6db-82fd-4f0d-96f4-92a09c1c1677] Deleting instance files /var/lib/nova/instances/3b9ae6db-82fd-4f0d-96f4-92a09c1c1677_del
Jan 20 15:01:21 compute-1 nova_compute[225855]: 2026-01-20 15:01:21.804 225859 INFO nova.virt.libvirt.driver [None req-b57706f9-218c-42cb-8795-8e80f5b2bba6 37466ba8c9504f1ca6cfbce8add0b52a 41da7b7508634e869bbbe5203e7023cc - - default default] [instance: 3b9ae6db-82fd-4f0d-96f4-92a09c1c1677] Deletion of /var/lib/nova/instances/3b9ae6db-82fd-4f0d-96f4-92a09c1c1677_del complete
Jan 20 15:01:21 compute-1 nova_compute[225855]: 2026-01-20 15:01:21.811 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:01:21 compute-1 nova_compute[225855]: 2026-01-20 15:01:21.872 225859 INFO nova.compute.manager [None req-b57706f9-218c-42cb-8795-8e80f5b2bba6 37466ba8c9504f1ca6cfbce8add0b52a 41da7b7508634e869bbbe5203e7023cc - - default default] [instance: 3b9ae6db-82fd-4f0d-96f4-92a09c1c1677] Took 2.91 seconds to destroy the instance on the hypervisor.
Jan 20 15:01:21 compute-1 nova_compute[225855]: 2026-01-20 15:01:21.873 225859 DEBUG oslo.service.loopingcall [None req-b57706f9-218c-42cb-8795-8e80f5b2bba6 37466ba8c9504f1ca6cfbce8add0b52a 41da7b7508634e869bbbe5203e7023cc - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 20 15:01:21 compute-1 nova_compute[225855]: 2026-01-20 15:01:21.873 225859 DEBUG nova.compute.manager [-] [instance: 3b9ae6db-82fd-4f0d-96f4-92a09c1c1677] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 20 15:01:21 compute-1 nova_compute[225855]: 2026-01-20 15:01:21.873 225859 DEBUG nova.network.neutron [-] [instance: 3b9ae6db-82fd-4f0d-96f4-92a09c1c1677] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 20 15:01:22 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:01:22 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:01:22 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:01:22.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:01:22 compute-1 ceph-mon[81775]: pgmap v2285: 321 pgs: 321 active+clean; 258 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 5.6 MiB/s wr, 108 op/s
Jan 20 15:01:22 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e328 e328: 3 total, 3 up, 3 in
Jan 20 15:01:22 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e328 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:01:23 compute-1 podman[285012]: 2026-01-20 15:01:23.04862288 +0000 UTC m=+0.086093240 container health_status 72be95c12e041eb2cd1a16ad70dfa1391d9f03a5f3180a614ead9459dcfcd46f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.build-date=20251202, tcib_managed=true)
Jan 20 15:01:23 compute-1 nova_compute[225855]: 2026-01-20 15:01:23.177 225859 DEBUG nova.network.neutron [-] [instance: 3b9ae6db-82fd-4f0d-96f4-92a09c1c1677] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:01:23 compute-1 nova_compute[225855]: 2026-01-20 15:01:23.201 225859 INFO nova.compute.manager [-] [instance: 3b9ae6db-82fd-4f0d-96f4-92a09c1c1677] Took 1.33 seconds to deallocate network for instance.
Jan 20 15:01:23 compute-1 nova_compute[225855]: 2026-01-20 15:01:23.263 225859 DEBUG oslo_concurrency.lockutils [None req-b57706f9-218c-42cb-8795-8e80f5b2bba6 37466ba8c9504f1ca6cfbce8add0b52a 41da7b7508634e869bbbe5203e7023cc - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:01:23 compute-1 nova_compute[225855]: 2026-01-20 15:01:23.263 225859 DEBUG oslo_concurrency.lockutils [None req-b57706f9-218c-42cb-8795-8e80f5b2bba6 37466ba8c9504f1ca6cfbce8add0b52a 41da7b7508634e869bbbe5203e7023cc - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:01:23 compute-1 nova_compute[225855]: 2026-01-20 15:01:23.285 225859 DEBUG nova.compute.manager [req-d3ea38bc-5b53-4be2-a4cb-a2443637b348 req-74bca486-55c1-4583-a28d-8f87d8eadfd6 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 3b9ae6db-82fd-4f0d-96f4-92a09c1c1677] Received event network-vif-deleted-086e4aee-1846-436c-8c93-dab333d31521 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:01:23 compute-1 nova_compute[225855]: 2026-01-20 15:01:23.328 225859 DEBUG oslo_concurrency.processutils [None req-b57706f9-218c-42cb-8795-8e80f5b2bba6 37466ba8c9504f1ca6cfbce8add0b52a 41da7b7508634e869bbbe5203e7023cc - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:01:23 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:01:23 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4191032683' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:01:23 compute-1 ceph-mon[81775]: osdmap e328: 3 total, 3 up, 3 in
Jan 20 15:01:23 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/1218882102' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:01:23 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:01:23 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:01:23 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:01:23.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:01:23 compute-1 nova_compute[225855]: 2026-01-20 15:01:23.803 225859 DEBUG oslo_concurrency.processutils [None req-b57706f9-218c-42cb-8795-8e80f5b2bba6 37466ba8c9504f1ca6cfbce8add0b52a 41da7b7508634e869bbbe5203e7023cc - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:01:23 compute-1 nova_compute[225855]: 2026-01-20 15:01:23.810 225859 DEBUG nova.compute.provider_tree [None req-b57706f9-218c-42cb-8795-8e80f5b2bba6 37466ba8c9504f1ca6cfbce8add0b52a 41da7b7508634e869bbbe5203e7023cc - - default default] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:01:23 compute-1 nova_compute[225855]: 2026-01-20 15:01:23.825 225859 DEBUG nova.scheduler.client.report [None req-b57706f9-218c-42cb-8795-8e80f5b2bba6 37466ba8c9504f1ca6cfbce8add0b52a 41da7b7508634e869bbbe5203e7023cc - - default default] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:01:23 compute-1 nova_compute[225855]: 2026-01-20 15:01:23.847 225859 DEBUG oslo_concurrency.lockutils [None req-b57706f9-218c-42cb-8795-8e80f5b2bba6 37466ba8c9504f1ca6cfbce8add0b52a 41da7b7508634e869bbbe5203e7023cc - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.584s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:01:23 compute-1 nova_compute[225855]: 2026-01-20 15:01:23.885 225859 INFO nova.scheduler.client.report [None req-b57706f9-218c-42cb-8795-8e80f5b2bba6 37466ba8c9504f1ca6cfbce8add0b52a 41da7b7508634e869bbbe5203e7023cc - - default default] Deleted allocations for instance 3b9ae6db-82fd-4f0d-96f4-92a09c1c1677
Jan 20 15:01:23 compute-1 nova_compute[225855]: 2026-01-20 15:01:23.966 225859 DEBUG oslo_concurrency.lockutils [None req-b57706f9-218c-42cb-8795-8e80f5b2bba6 37466ba8c9504f1ca6cfbce8add0b52a 41da7b7508634e869bbbe5203e7023cc - - default default] Lock "3b9ae6db-82fd-4f0d-96f4-92a09c1c1677" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.012s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:01:24 compute-1 nova_compute[225855]: 2026-01-20 15:01:24.218 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:01:24 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:01:24 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:01:24 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:01:24.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:01:24 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/4191032683' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:01:24 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/2458772179' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:01:24 compute-1 ceph-mon[81775]: pgmap v2287: 321 pgs: 321 active+clean; 220 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 5.5 MiB/s rd, 6.8 MiB/s wr, 198 op/s
Jan 20 15:01:25 compute-1 nova_compute[225855]: 2026-01-20 15:01:25.455 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:01:25 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:01:25 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:01:25 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:01:25.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:01:26 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:01:26 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:01:26 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:01:26.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:01:26 compute-1 nova_compute[225855]: 2026-01-20 15:01:26.813 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:01:27 compute-1 ceph-mon[81775]: pgmap v2288: 321 pgs: 321 active+clean; 167 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 7.1 MiB/s rd, 6.8 MiB/s wr, 282 op/s
Jan 20 15:01:27 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:01:27 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:01:27 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:01:27.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:01:27 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e328 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:01:28 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:01:28 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:01:28 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:01:28.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:01:29 compute-1 nova_compute[225855]: 2026-01-20 15:01:29.222 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:01:29 compute-1 ceph-mon[81775]: pgmap v2289: 321 pgs: 321 active+clean; 167 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 7.1 MiB/s rd, 6.8 MiB/s wr, 282 op/s
Jan 20 15:01:29 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:01:29 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:01:29 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:01:29.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:01:30 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e329 e329: 3 total, 3 up, 3 in
Jan 20 15:01:30 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:01:30 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:01:30 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:01:30.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:01:31 compute-1 ceph-mon[81775]: osdmap e329: 3 total, 3 up, 3 in
Jan 20 15:01:31 compute-1 ceph-mon[81775]: pgmap v2291: 321 pgs: 321 active+clean; 167 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 110 KiB/s wr, 200 op/s
Jan 20 15:01:31 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:01:31 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:01:31 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:01:31.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:01:31 compute-1 nova_compute[225855]: 2026-01-20 15:01:31.859 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:01:32 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:01:32 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:01:32 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:01:32.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:01:32 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e329 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:01:33 compute-1 ceph-mon[81775]: pgmap v2292: 321 pgs: 321 active+clean; 167 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 93 KiB/s wr, 200 op/s
Jan 20 15:01:33 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e330 e330: 3 total, 3 up, 3 in
Jan 20 15:01:33 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:01:33 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:01:33 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:01:33.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:01:34 compute-1 nova_compute[225855]: 2026-01-20 15:01:34.190 225859 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1768921279.1881814, 3b9ae6db-82fd-4f0d-96f4-92a09c1c1677 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:01:34 compute-1 nova_compute[225855]: 2026-01-20 15:01:34.190 225859 INFO nova.compute.manager [-] [instance: 3b9ae6db-82fd-4f0d-96f4-92a09c1c1677] VM Stopped (Lifecycle Event)
Jan 20 15:01:34 compute-1 nova_compute[225855]: 2026-01-20 15:01:34.222 225859 DEBUG nova.compute.manager [None req-248dfd82-efa5-48ad-9e2c-6f8acd429ecb - - - - - -] [instance: 3b9ae6db-82fd-4f0d-96f4-92a09c1c1677] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:01:34 compute-1 nova_compute[225855]: 2026-01-20 15:01:34.226 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:01:34 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:01:34 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:01:34 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:01:34.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:01:34 compute-1 ceph-mon[81775]: osdmap e330: 3 total, 3 up, 3 in
Jan 20 15:01:34 compute-1 podman[285067]: 2026-01-20 15:01:34.998289812 +0000 UTC m=+0.045720491 container health_status 533cd8e0c3b8b7f910f15cef49e7f879ab46c12ce646a35628c15f13725eb500 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Jan 20 15:01:35 compute-1 sudo[285088]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:01:35 compute-1 sudo[285088]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:01:35 compute-1 sudo[285088]: pam_unix(sudo:session): session closed for user root
Jan 20 15:01:35 compute-1 sudo[285113]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:01:35 compute-1 sudo[285113]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:01:35 compute-1 sudo[285113]: pam_unix(sudo:session): session closed for user root
Jan 20 15:01:35 compute-1 ceph-mon[81775]: pgmap v2294: 321 pgs: 321 active+clean; 175 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 120 KiB/s wr, 58 op/s
Jan 20 15:01:35 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e331 e331: 3 total, 3 up, 3 in
Jan 20 15:01:35 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:01:35 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:01:35 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:01:35.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:01:36 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:01:36 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:01:36 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:01:36.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:01:36 compute-1 ceph-mon[81775]: osdmap e331: 3 total, 3 up, 3 in
Jan 20 15:01:36 compute-1 ceph-mon[81775]: pgmap v2296: 321 pgs: 321 active+clean; 198 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 7.1 MiB/s rd, 1.5 MiB/s wr, 216 op/s
Jan 20 15:01:36 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e332 e332: 3 total, 3 up, 3 in
Jan 20 15:01:36 compute-1 nova_compute[225855]: 2026-01-20 15:01:36.861 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:01:37 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:01:37 compute-1 ceph-mon[81775]: osdmap e332: 3 total, 3 up, 3 in
Jan 20 15:01:37 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:01:37 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:01:37.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:01:37 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:01:38 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:01:38 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:01:38 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:01:38.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:01:38 compute-1 ceph-mon[81775]: pgmap v2298: 321 pgs: 321 active+clean; 198 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 6.0 MiB/s rd, 1.6 MiB/s wr, 175 op/s
Jan 20 15:01:39 compute-1 nova_compute[225855]: 2026-01-20 15:01:39.229 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:01:39 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:01:39 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:01:39 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:01:39.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:01:40 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:01:40 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:01:40 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:01:40.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:01:41 compute-1 ceph-mon[81775]: pgmap v2299: 321 pgs: 321 active+clean; 213 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 5.4 MiB/s rd, 3.1 MiB/s wr, 192 op/s
Jan 20 15:01:41 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:01:41 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:01:41 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:01:41.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:01:41 compute-1 nova_compute[225855]: 2026-01-20 15:01:41.862 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:01:42 compute-1 nova_compute[225855]: 2026-01-20 15:01:42.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:01:42 compute-1 nova_compute[225855]: 2026-01-20 15:01:42.340 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:01:42 compute-1 nova_compute[225855]: 2026-01-20 15:01:42.341 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 15:01:42 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:01:42 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:01:42 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:01:42.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:01:42 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:01:43 compute-1 nova_compute[225855]: 2026-01-20 15:01:43.341 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:01:43 compute-1 ceph-mon[81775]: pgmap v2300: 321 pgs: 321 active+clean; 213 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 4.7 MiB/s rd, 2.6 MiB/s wr, 166 op/s
Jan 20 15:01:43 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:01:43 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:01:43 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:01:43.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:01:44 compute-1 nova_compute[225855]: 2026-01-20 15:01:44.233 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:01:44 compute-1 nova_compute[225855]: 2026-01-20 15:01:44.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:01:44 compute-1 nova_compute[225855]: 2026-01-20 15:01:44.340 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 15:01:44 compute-1 nova_compute[225855]: 2026-01-20 15:01:44.340 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 20 15:01:44 compute-1 nova_compute[225855]: 2026-01-20 15:01:44.355 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 20 15:01:44 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:01:44 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:01:44 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:01:44.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:01:44 compute-1 nova_compute[225855]: 2026-01-20 15:01:44.382 225859 DEBUG oslo_concurrency.lockutils [None req-de5e8f40-ac38-43be-94d3-f6d29767939e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Acquiring lock "474cec75-3b01-411a-9074-75859d2a9ddf" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:01:44 compute-1 nova_compute[225855]: 2026-01-20 15:01:44.382 225859 DEBUG oslo_concurrency.lockutils [None req-de5e8f40-ac38-43be-94d3-f6d29767939e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Lock "474cec75-3b01-411a-9074-75859d2a9ddf" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:01:44 compute-1 nova_compute[225855]: 2026-01-20 15:01:44.401 225859 DEBUG nova.compute.manager [None req-de5e8f40-ac38-43be-94d3-f6d29767939e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: 474cec75-3b01-411a-9074-75859d2a9ddf] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 20 15:01:44 compute-1 nova_compute[225855]: 2026-01-20 15:01:44.480 225859 DEBUG oslo_concurrency.lockutils [None req-de5e8f40-ac38-43be-94d3-f6d29767939e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:01:44 compute-1 nova_compute[225855]: 2026-01-20 15:01:44.481 225859 DEBUG oslo_concurrency.lockutils [None req-de5e8f40-ac38-43be-94d3-f6d29767939e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:01:44 compute-1 nova_compute[225855]: 2026-01-20 15:01:44.488 225859 DEBUG nova.virt.hardware [None req-de5e8f40-ac38-43be-94d3-f6d29767939e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 20 15:01:44 compute-1 nova_compute[225855]: 2026-01-20 15:01:44.488 225859 INFO nova.compute.claims [None req-de5e8f40-ac38-43be-94d3-f6d29767939e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: 474cec75-3b01-411a-9074-75859d2a9ddf] Claim successful on node compute-1.ctlplane.example.com
Jan 20 15:01:44 compute-1 nova_compute[225855]: 2026-01-20 15:01:44.675 225859 DEBUG oslo_concurrency.processutils [None req-de5e8f40-ac38-43be-94d3-f6d29767939e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:01:44 compute-1 nova_compute[225855]: 2026-01-20 15:01:44.888 225859 DEBUG oslo_concurrency.lockutils [None req-a557ce2c-7da9-48a8-8108-5744558daadb 1654794111844ca88666b3529173e9a7 3a1d679d5c954662a271e842fe2f2c05 - - default default] Acquiring lock "2b9353e4-2bd0-4a4a-b9a1-dd24929a4af1" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:01:44 compute-1 nova_compute[225855]: 2026-01-20 15:01:44.888 225859 DEBUG oslo_concurrency.lockutils [None req-a557ce2c-7da9-48a8-8108-5744558daadb 1654794111844ca88666b3529173e9a7 3a1d679d5c954662a271e842fe2f2c05 - - default default] Lock "2b9353e4-2bd0-4a4a-b9a1-dd24929a4af1" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:01:44 compute-1 nova_compute[225855]: 2026-01-20 15:01:44.903 225859 DEBUG nova.compute.manager [None req-a557ce2c-7da9-48a8-8108-5744558daadb 1654794111844ca88666b3529173e9a7 3a1d679d5c954662a271e842fe2f2c05 - - default default] [instance: 2b9353e4-2bd0-4a4a-b9a1-dd24929a4af1] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 20 15:01:44 compute-1 nova_compute[225855]: 2026-01-20 15:01:44.956 225859 DEBUG oslo_concurrency.lockutils [None req-a557ce2c-7da9-48a8-8108-5744558daadb 1654794111844ca88666b3529173e9a7 3a1d679d5c954662a271e842fe2f2c05 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:01:45 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e333 e333: 3 total, 3 up, 3 in
Jan 20 15:01:45 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:01:45 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3249063162' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:01:45 compute-1 nova_compute[225855]: 2026-01-20 15:01:45.164 225859 DEBUG oslo_concurrency.processutils [None req-de5e8f40-ac38-43be-94d3-f6d29767939e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:01:45 compute-1 nova_compute[225855]: 2026-01-20 15:01:45.169 225859 DEBUG nova.compute.provider_tree [None req-de5e8f40-ac38-43be-94d3-f6d29767939e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:01:45 compute-1 nova_compute[225855]: 2026-01-20 15:01:45.188 225859 DEBUG nova.scheduler.client.report [None req-de5e8f40-ac38-43be-94d3-f6d29767939e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:01:45 compute-1 nova_compute[225855]: 2026-01-20 15:01:45.209 225859 DEBUG oslo_concurrency.lockutils [None req-de5e8f40-ac38-43be-94d3-f6d29767939e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.728s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:01:45 compute-1 nova_compute[225855]: 2026-01-20 15:01:45.210 225859 DEBUG nova.compute.manager [None req-de5e8f40-ac38-43be-94d3-f6d29767939e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: 474cec75-3b01-411a-9074-75859d2a9ddf] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 20 15:01:45 compute-1 nova_compute[225855]: 2026-01-20 15:01:45.212 225859 DEBUG oslo_concurrency.lockutils [None req-a557ce2c-7da9-48a8-8108-5744558daadb 1654794111844ca88666b3529173e9a7 3a1d679d5c954662a271e842fe2f2c05 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.255s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:01:45 compute-1 nova_compute[225855]: 2026-01-20 15:01:45.217 225859 DEBUG nova.virt.hardware [None req-a557ce2c-7da9-48a8-8108-5744558daadb 1654794111844ca88666b3529173e9a7 3a1d679d5c954662a271e842fe2f2c05 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 20 15:01:45 compute-1 nova_compute[225855]: 2026-01-20 15:01:45.218 225859 INFO nova.compute.claims [None req-a557ce2c-7da9-48a8-8108-5744558daadb 1654794111844ca88666b3529173e9a7 3a1d679d5c954662a271e842fe2f2c05 - - default default] [instance: 2b9353e4-2bd0-4a4a-b9a1-dd24929a4af1] Claim successful on node compute-1.ctlplane.example.com
Jan 20 15:01:45 compute-1 nova_compute[225855]: 2026-01-20 15:01:45.283 225859 DEBUG nova.compute.manager [None req-de5e8f40-ac38-43be-94d3-f6d29767939e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: 474cec75-3b01-411a-9074-75859d2a9ddf] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 20 15:01:45 compute-1 nova_compute[225855]: 2026-01-20 15:01:45.283 225859 DEBUG nova.network.neutron [None req-de5e8f40-ac38-43be-94d3-f6d29767939e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: 474cec75-3b01-411a-9074-75859d2a9ddf] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 20 15:01:45 compute-1 nova_compute[225855]: 2026-01-20 15:01:45.300 225859 INFO nova.virt.libvirt.driver [None req-de5e8f40-ac38-43be-94d3-f6d29767939e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: 474cec75-3b01-411a-9074-75859d2a9ddf] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 20 15:01:45 compute-1 nova_compute[225855]: 2026-01-20 15:01:45.314 225859 DEBUG nova.compute.manager [None req-de5e8f40-ac38-43be-94d3-f6d29767939e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: 474cec75-3b01-411a-9074-75859d2a9ddf] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 20 15:01:45 compute-1 nova_compute[225855]: 2026-01-20 15:01:45.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:01:45 compute-1 nova_compute[225855]: 2026-01-20 15:01:45.347 225859 INFO nova.virt.block_device [None req-de5e8f40-ac38-43be-94d3-f6d29767939e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: 474cec75-3b01-411a-9074-75859d2a9ddf] Booting with blank volume at /dev/vda
Jan 20 15:01:45 compute-1 ceph-mon[81775]: pgmap v2301: 321 pgs: 321 active+clean; 219 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 2.4 MiB/s wr, 87 op/s
Jan 20 15:01:45 compute-1 ceph-mon[81775]: osdmap e333: 3 total, 3 up, 3 in
Jan 20 15:01:45 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/3249063162' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:01:45 compute-1 nova_compute[225855]: 2026-01-20 15:01:45.375 225859 DEBUG oslo_concurrency.processutils [None req-a557ce2c-7da9-48a8-8108-5744558daadb 1654794111844ca88666b3529173e9a7 3a1d679d5c954662a271e842fe2f2c05 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:01:45 compute-1 nova_compute[225855]: 2026-01-20 15:01:45.557 225859 DEBUG nova.policy [None req-de5e8f40-ac38-43be-94d3-f6d29767939e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '2446e8399b344b29986c1aaf8bf73adf', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '63555e5851564db08c6429231d264f2c', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 20 15:01:45 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:01:45 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1401472554' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:01:45 compute-1 nova_compute[225855]: 2026-01-20 15:01:45.785 225859 DEBUG oslo_concurrency.processutils [None req-a557ce2c-7da9-48a8-8108-5744558daadb 1654794111844ca88666b3529173e9a7 3a1d679d5c954662a271e842fe2f2c05 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.410s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:01:45 compute-1 nova_compute[225855]: 2026-01-20 15:01:45.792 225859 DEBUG nova.compute.provider_tree [None req-a557ce2c-7da9-48a8-8108-5744558daadb 1654794111844ca88666b3529173e9a7 3a1d679d5c954662a271e842fe2f2c05 - - default default] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:01:45 compute-1 nova_compute[225855]: 2026-01-20 15:01:45.813 225859 DEBUG nova.scheduler.client.report [None req-a557ce2c-7da9-48a8-8108-5744558daadb 1654794111844ca88666b3529173e9a7 3a1d679d5c954662a271e842fe2f2c05 - - default default] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:01:45 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:01:45 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:01:45 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:01:45.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:01:45 compute-1 nova_compute[225855]: 2026-01-20 15:01:45.841 225859 DEBUG oslo_concurrency.lockutils [None req-a557ce2c-7da9-48a8-8108-5744558daadb 1654794111844ca88666b3529173e9a7 3a1d679d5c954662a271e842fe2f2c05 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.629s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:01:45 compute-1 nova_compute[225855]: 2026-01-20 15:01:45.842 225859 DEBUG nova.compute.manager [None req-a557ce2c-7da9-48a8-8108-5744558daadb 1654794111844ca88666b3529173e9a7 3a1d679d5c954662a271e842fe2f2c05 - - default default] [instance: 2b9353e4-2bd0-4a4a-b9a1-dd24929a4af1] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 20 15:01:45 compute-1 nova_compute[225855]: 2026-01-20 15:01:45.907 225859 DEBUG nova.compute.manager [None req-a557ce2c-7da9-48a8-8108-5744558daadb 1654794111844ca88666b3529173e9a7 3a1d679d5c954662a271e842fe2f2c05 - - default default] [instance: 2b9353e4-2bd0-4a4a-b9a1-dd24929a4af1] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 20 15:01:45 compute-1 nova_compute[225855]: 2026-01-20 15:01:45.908 225859 DEBUG nova.network.neutron [None req-a557ce2c-7da9-48a8-8108-5744558daadb 1654794111844ca88666b3529173e9a7 3a1d679d5c954662a271e842fe2f2c05 - - default default] [instance: 2b9353e4-2bd0-4a4a-b9a1-dd24929a4af1] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 20 15:01:45 compute-1 nova_compute[225855]: 2026-01-20 15:01:45.928 225859 INFO nova.virt.libvirt.driver [None req-a557ce2c-7da9-48a8-8108-5744558daadb 1654794111844ca88666b3529173e9a7 3a1d679d5c954662a271e842fe2f2c05 - - default default] [instance: 2b9353e4-2bd0-4a4a-b9a1-dd24929a4af1] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 20 15:01:45 compute-1 nova_compute[225855]: 2026-01-20 15:01:45.952 225859 DEBUG nova.compute.manager [None req-a557ce2c-7da9-48a8-8108-5744558daadb 1654794111844ca88666b3529173e9a7 3a1d679d5c954662a271e842fe2f2c05 - - default default] [instance: 2b9353e4-2bd0-4a4a-b9a1-dd24929a4af1] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 20 15:01:46 compute-1 nova_compute[225855]: 2026-01-20 15:01:46.087 225859 DEBUG nova.compute.manager [None req-a557ce2c-7da9-48a8-8108-5744558daadb 1654794111844ca88666b3529173e9a7 3a1d679d5c954662a271e842fe2f2c05 - - default default] [instance: 2b9353e4-2bd0-4a4a-b9a1-dd24929a4af1] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 20 15:01:46 compute-1 nova_compute[225855]: 2026-01-20 15:01:46.088 225859 DEBUG nova.virt.libvirt.driver [None req-a557ce2c-7da9-48a8-8108-5744558daadb 1654794111844ca88666b3529173e9a7 3a1d679d5c954662a271e842fe2f2c05 - - default default] [instance: 2b9353e4-2bd0-4a4a-b9a1-dd24929a4af1] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 20 15:01:46 compute-1 nova_compute[225855]: 2026-01-20 15:01:46.089 225859 INFO nova.virt.libvirt.driver [None req-a557ce2c-7da9-48a8-8108-5744558daadb 1654794111844ca88666b3529173e9a7 3a1d679d5c954662a271e842fe2f2c05 - - default default] [instance: 2b9353e4-2bd0-4a4a-b9a1-dd24929a4af1] Creating image(s)
Jan 20 15:01:46 compute-1 nova_compute[225855]: 2026-01-20 15:01:46.114 225859 DEBUG nova.storage.rbd_utils [None req-a557ce2c-7da9-48a8-8108-5744558daadb 1654794111844ca88666b3529173e9a7 3a1d679d5c954662a271e842fe2f2c05 - - default default] rbd image 2b9353e4-2bd0-4a4a-b9a1-dd24929a4af1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:01:46 compute-1 nova_compute[225855]: 2026-01-20 15:01:46.144 225859 DEBUG nova.storage.rbd_utils [None req-a557ce2c-7da9-48a8-8108-5744558daadb 1654794111844ca88666b3529173e9a7 3a1d679d5c954662a271e842fe2f2c05 - - default default] rbd image 2b9353e4-2bd0-4a4a-b9a1-dd24929a4af1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:01:46 compute-1 nova_compute[225855]: 2026-01-20 15:01:46.174 225859 DEBUG nova.storage.rbd_utils [None req-a557ce2c-7da9-48a8-8108-5744558daadb 1654794111844ca88666b3529173e9a7 3a1d679d5c954662a271e842fe2f2c05 - - default default] rbd image 2b9353e4-2bd0-4a4a-b9a1-dd24929a4af1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:01:46 compute-1 nova_compute[225855]: 2026-01-20 15:01:46.178 225859 DEBUG oslo_concurrency.processutils [None req-a557ce2c-7da9-48a8-8108-5744558daadb 1654794111844ca88666b3529173e9a7 3a1d679d5c954662a271e842fe2f2c05 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:01:46 compute-1 nova_compute[225855]: 2026-01-20 15:01:46.205 225859 DEBUG nova.network.neutron [None req-de5e8f40-ac38-43be-94d3-f6d29767939e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: 474cec75-3b01-411a-9074-75859d2a9ddf] Successfully created port: 244332ba-1b58-4d42-98b0-245f9460c50f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 20 15:01:46 compute-1 nova_compute[225855]: 2026-01-20 15:01:46.214 225859 DEBUG nova.policy [None req-a557ce2c-7da9-48a8-8108-5744558daadb 1654794111844ca88666b3529173e9a7 3a1d679d5c954662a271e842fe2f2c05 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '1654794111844ca88666b3529173e9a7', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '3a1d679d5c954662a271e842fe2f2c05', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 20 15:01:46 compute-1 nova_compute[225855]: 2026-01-20 15:01:46.241 225859 DEBUG oslo_concurrency.processutils [None req-a557ce2c-7da9-48a8-8108-5744558daadb 1654794111844ca88666b3529173e9a7 3a1d679d5c954662a271e842fe2f2c05 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:01:46 compute-1 nova_compute[225855]: 2026-01-20 15:01:46.242 225859 DEBUG oslo_concurrency.lockutils [None req-a557ce2c-7da9-48a8-8108-5744558daadb 1654794111844ca88666b3529173e9a7 3a1d679d5c954662a271e842fe2f2c05 - - default default] Acquiring lock "82d5c1918fd7c974214c7a48c1793a7a82560462" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:01:46 compute-1 nova_compute[225855]: 2026-01-20 15:01:46.243 225859 DEBUG oslo_concurrency.lockutils [None req-a557ce2c-7da9-48a8-8108-5744558daadb 1654794111844ca88666b3529173e9a7 3a1d679d5c954662a271e842fe2f2c05 - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:01:46 compute-1 nova_compute[225855]: 2026-01-20 15:01:46.243 225859 DEBUG oslo_concurrency.lockutils [None req-a557ce2c-7da9-48a8-8108-5744558daadb 1654794111844ca88666b3529173e9a7 3a1d679d5c954662a271e842fe2f2c05 - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:01:46 compute-1 nova_compute[225855]: 2026-01-20 15:01:46.298 225859 DEBUG nova.storage.rbd_utils [None req-a557ce2c-7da9-48a8-8108-5744558daadb 1654794111844ca88666b3529173e9a7 3a1d679d5c954662a271e842fe2f2c05 - - default default] rbd image 2b9353e4-2bd0-4a4a-b9a1-dd24929a4af1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:01:46 compute-1 nova_compute[225855]: 2026-01-20 15:01:46.302 225859 DEBUG oslo_concurrency.processutils [None req-a557ce2c-7da9-48a8-8108-5744558daadb 1654794111844ca88666b3529173e9a7 3a1d679d5c954662a271e842fe2f2c05 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 2b9353e4-2bd0-4a4a-b9a1-dd24929a4af1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:01:46 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/1401472554' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:01:46 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:01:46 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:01:46 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:01:46.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:01:46 compute-1 nova_compute[225855]: 2026-01-20 15:01:46.754 225859 DEBUG oslo_concurrency.processutils [None req-a557ce2c-7da9-48a8-8108-5744558daadb 1654794111844ca88666b3529173e9a7 3a1d679d5c954662a271e842fe2f2c05 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 2b9353e4-2bd0-4a4a-b9a1-dd24929a4af1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:01:46 compute-1 nova_compute[225855]: 2026-01-20 15:01:46.821 225859 DEBUG os_brick.utils [None req-de5e8f40-ac38-43be-94d3-f6d29767939e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.101', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-1.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Jan 20 15:01:46 compute-1 nova_compute[225855]: 2026-01-20 15:01:46.823 231081 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:01:46 compute-1 nova_compute[225855]: 2026-01-20 15:01:46.827 225859 DEBUG nova.storage.rbd_utils [None req-a557ce2c-7da9-48a8-8108-5744558daadb 1654794111844ca88666b3529173e9a7 3a1d679d5c954662a271e842fe2f2c05 - - default default] resizing rbd image 2b9353e4-2bd0-4a4a-b9a1-dd24929a4af1_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 20 15:01:46 compute-1 nova_compute[225855]: 2026-01-20 15:01:46.835 231081 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:01:46 compute-1 nova_compute[225855]: 2026-01-20 15:01:46.835 231081 DEBUG oslo.privsep.daemon [-] privsep: reply[5e313ee8-c423-428e-8988-2ec8c1e58971]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:01:46 compute-1 nova_compute[225855]: 2026-01-20 15:01:46.862 231081 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:01:46 compute-1 nova_compute[225855]: 2026-01-20 15:01:46.863 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:01:46 compute-1 nova_compute[225855]: 2026-01-20 15:01:46.871 231081 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:01:46 compute-1 nova_compute[225855]: 2026-01-20 15:01:46.872 231081 DEBUG oslo.privsep.daemon [-] privsep: reply[981660ef-7e3d-4cc2-bd78-766703893b6f]: (4, ('InitiatorName=iqn.1994-05.com.redhat:1821ea3dc03d', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:01:46 compute-1 nova_compute[225855]: 2026-01-20 15:01:46.874 231081 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:01:46 compute-1 nova_compute[225855]: 2026-01-20 15:01:46.884 231081 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:01:46 compute-1 nova_compute[225855]: 2026-01-20 15:01:46.885 231081 DEBUG oslo.privsep.daemon [-] privsep: reply[09a1c65b-2645-4706-9cae-90bb459c0d67]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:01:46 compute-1 nova_compute[225855]: 2026-01-20 15:01:46.886 231081 DEBUG oslo.privsep.daemon [-] privsep: reply[26bb15ab-c521-4600-b569-a0cd2146e509]: (4, '870b1f1c-f19c-477b-b282-ee6eeba50974') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:01:46 compute-1 nova_compute[225855]: 2026-01-20 15:01:46.887 225859 DEBUG oslo_concurrency.processutils [None req-de5e8f40-ac38-43be-94d3-f6d29767939e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:01:46 compute-1 nova_compute[225855]: 2026-01-20 15:01:46.919 225859 DEBUG oslo_concurrency.processutils [None req-de5e8f40-ac38-43be-94d3-f6d29767939e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] CMD "nvme version" returned: 0 in 0.032s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:01:46 compute-1 nova_compute[225855]: 2026-01-20 15:01:46.921 225859 DEBUG os_brick.initiator.connectors.lightos [None req-de5e8f40-ac38-43be-94d3-f6d29767939e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Jan 20 15:01:46 compute-1 nova_compute[225855]: 2026-01-20 15:01:46.922 225859 DEBUG os_brick.initiator.connectors.lightos [None req-de5e8f40-ac38-43be-94d3-f6d29767939e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Jan 20 15:01:46 compute-1 nova_compute[225855]: 2026-01-20 15:01:46.922 225859 DEBUG os_brick.initiator.connectors.lightos [None req-de5e8f40-ac38-43be-94d3-f6d29767939e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:5350774e-8b5e-4dba-80a9-92d405981c1d dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Jan 20 15:01:46 compute-1 nova_compute[225855]: 2026-01-20 15:01:46.922 225859 DEBUG os_brick.utils [None req-de5e8f40-ac38-43be-94d3-f6d29767939e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] <== get_connector_properties: return (100ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.101', 'host': 'compute-1.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:1821ea3dc03d', 'do_local_attach': False, 'nvme_hostid': '5350774e-8b5e-4dba-80a9-92d405981c1d', 'system uuid': '870b1f1c-f19c-477b-b282-ee6eeba50974', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:5350774e-8b5e-4dba-80a9-92d405981c1d', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Jan 20 15:01:46 compute-1 nova_compute[225855]: 2026-01-20 15:01:46.923 225859 DEBUG nova.virt.block_device [None req-de5e8f40-ac38-43be-94d3-f6d29767939e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: 474cec75-3b01-411a-9074-75859d2a9ddf] Updating existing volume attachment record: 20658306-e0e7-4d9c-a904-24cfdd1b82ee _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Jan 20 15:01:46 compute-1 nova_compute[225855]: 2026-01-20 15:01:46.967 225859 DEBUG nova.objects.instance [None req-a557ce2c-7da9-48a8-8108-5744558daadb 1654794111844ca88666b3529173e9a7 3a1d679d5c954662a271e842fe2f2c05 - - default default] Lazy-loading 'migration_context' on Instance uuid 2b9353e4-2bd0-4a4a-b9a1-dd24929a4af1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:01:46 compute-1 nova_compute[225855]: 2026-01-20 15:01:46.980 225859 DEBUG nova.virt.libvirt.driver [None req-a557ce2c-7da9-48a8-8108-5744558daadb 1654794111844ca88666b3529173e9a7 3a1d679d5c954662a271e842fe2f2c05 - - default default] [instance: 2b9353e4-2bd0-4a4a-b9a1-dd24929a4af1] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 20 15:01:46 compute-1 nova_compute[225855]: 2026-01-20 15:01:46.980 225859 DEBUG nova.virt.libvirt.driver [None req-a557ce2c-7da9-48a8-8108-5744558daadb 1654794111844ca88666b3529173e9a7 3a1d679d5c954662a271e842fe2f2c05 - - default default] [instance: 2b9353e4-2bd0-4a4a-b9a1-dd24929a4af1] Ensure instance console log exists: /var/lib/nova/instances/2b9353e4-2bd0-4a4a-b9a1-dd24929a4af1/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 20 15:01:46 compute-1 nova_compute[225855]: 2026-01-20 15:01:46.981 225859 DEBUG oslo_concurrency.lockutils [None req-a557ce2c-7da9-48a8-8108-5744558daadb 1654794111844ca88666b3529173e9a7 3a1d679d5c954662a271e842fe2f2c05 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:01:46 compute-1 nova_compute[225855]: 2026-01-20 15:01:46.981 225859 DEBUG oslo_concurrency.lockutils [None req-a557ce2c-7da9-48a8-8108-5744558daadb 1654794111844ca88666b3529173e9a7 3a1d679d5c954662a271e842fe2f2c05 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:01:46 compute-1 nova_compute[225855]: 2026-01-20 15:01:46.982 225859 DEBUG oslo_concurrency.lockutils [None req-a557ce2c-7da9-48a8-8108-5744558daadb 1654794111844ca88666b3529173e9a7 3a1d679d5c954662a271e842fe2f2c05 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:01:47 compute-1 nova_compute[225855]: 2026-01-20 15:01:47.327 225859 DEBUG nova.network.neutron [None req-a557ce2c-7da9-48a8-8108-5744558daadb 1654794111844ca88666b3529173e9a7 3a1d679d5c954662a271e842fe2f2c05 - - default default] [instance: 2b9353e4-2bd0-4a4a-b9a1-dd24929a4af1] Successfully created port: 6216baae-337d-44a3-aa38-60c2afb5d13f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 20 15:01:47 compute-1 nova_compute[225855]: 2026-01-20 15:01:47.333 225859 DEBUG nova.network.neutron [None req-de5e8f40-ac38-43be-94d3-f6d29767939e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: 474cec75-3b01-411a-9074-75859d2a9ddf] Successfully updated port: 244332ba-1b58-4d42-98b0-245f9460c50f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 20 15:01:47 compute-1 nova_compute[225855]: 2026-01-20 15:01:47.354 225859 DEBUG oslo_concurrency.lockutils [None req-de5e8f40-ac38-43be-94d3-f6d29767939e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Acquiring lock "refresh_cache-474cec75-3b01-411a-9074-75859d2a9ddf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:01:47 compute-1 nova_compute[225855]: 2026-01-20 15:01:47.354 225859 DEBUG oslo_concurrency.lockutils [None req-de5e8f40-ac38-43be-94d3-f6d29767939e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Acquired lock "refresh_cache-474cec75-3b01-411a-9074-75859d2a9ddf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:01:47 compute-1 nova_compute[225855]: 2026-01-20 15:01:47.355 225859 DEBUG nova.network.neutron [None req-de5e8f40-ac38-43be-94d3-f6d29767939e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: 474cec75-3b01-411a-9074-75859d2a9ddf] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 15:01:47 compute-1 ceph-mon[81775]: pgmap v2303: 321 pgs: 321 active+clean; 246 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 4.0 MiB/s wr, 169 op/s
Jan 20 15:01:47 compute-1 nova_compute[225855]: 2026-01-20 15:01:47.444 225859 DEBUG nova.compute.manager [req-caaf922f-48c2-4bec-a8fb-33fc70f25712 req-2f67e3bb-0a0e-44ee-b55e-0afae93f40cf 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 474cec75-3b01-411a-9074-75859d2a9ddf] Received event network-changed-244332ba-1b58-4d42-98b0-245f9460c50f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:01:47 compute-1 nova_compute[225855]: 2026-01-20 15:01:47.445 225859 DEBUG nova.compute.manager [req-caaf922f-48c2-4bec-a8fb-33fc70f25712 req-2f67e3bb-0a0e-44ee-b55e-0afae93f40cf 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 474cec75-3b01-411a-9074-75859d2a9ddf] Refreshing instance network info cache due to event network-changed-244332ba-1b58-4d42-98b0-245f9460c50f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 15:01:47 compute-1 nova_compute[225855]: 2026-01-20 15:01:47.445 225859 DEBUG oslo_concurrency.lockutils [req-caaf922f-48c2-4bec-a8fb-33fc70f25712 req-2f67e3bb-0a0e-44ee-b55e-0afae93f40cf 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-474cec75-3b01-411a-9074-75859d2a9ddf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:01:47 compute-1 nova_compute[225855]: 2026-01-20 15:01:47.645 225859 DEBUG nova.network.neutron [None req-de5e8f40-ac38-43be-94d3-f6d29767939e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: 474cec75-3b01-411a-9074-75859d2a9ddf] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 20 15:01:47 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:01:47 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:01:47 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:01:47.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:01:47 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e333 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:01:48 compute-1 nova_compute[225855]: 2026-01-20 15:01:48.033 225859 DEBUG nova.compute.manager [None req-de5e8f40-ac38-43be-94d3-f6d29767939e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: 474cec75-3b01-411a-9074-75859d2a9ddf] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 20 15:01:48 compute-1 nova_compute[225855]: 2026-01-20 15:01:48.035 225859 DEBUG nova.virt.libvirt.driver [None req-de5e8f40-ac38-43be-94d3-f6d29767939e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: 474cec75-3b01-411a-9074-75859d2a9ddf] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 20 15:01:48 compute-1 nova_compute[225855]: 2026-01-20 15:01:48.035 225859 INFO nova.virt.libvirt.driver [None req-de5e8f40-ac38-43be-94d3-f6d29767939e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: 474cec75-3b01-411a-9074-75859d2a9ddf] Creating image(s)
Jan 20 15:01:48 compute-1 nova_compute[225855]: 2026-01-20 15:01:48.035 225859 DEBUG nova.virt.libvirt.driver [None req-de5e8f40-ac38-43be-94d3-f6d29767939e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: 474cec75-3b01-411a-9074-75859d2a9ddf] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Jan 20 15:01:48 compute-1 nova_compute[225855]: 2026-01-20 15:01:48.036 225859 DEBUG nova.virt.libvirt.driver [None req-de5e8f40-ac38-43be-94d3-f6d29767939e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: 474cec75-3b01-411a-9074-75859d2a9ddf] Ensure instance console log exists: /var/lib/nova/instances/474cec75-3b01-411a-9074-75859d2a9ddf/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 20 15:01:48 compute-1 nova_compute[225855]: 2026-01-20 15:01:48.036 225859 DEBUG oslo_concurrency.lockutils [None req-de5e8f40-ac38-43be-94d3-f6d29767939e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:01:48 compute-1 nova_compute[225855]: 2026-01-20 15:01:48.036 225859 DEBUG oslo_concurrency.lockutils [None req-de5e8f40-ac38-43be-94d3-f6d29767939e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:01:48 compute-1 nova_compute[225855]: 2026-01-20 15:01:48.037 225859 DEBUG oslo_concurrency.lockutils [None req-de5e8f40-ac38-43be-94d3-f6d29767939e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:01:48 compute-1 nova_compute[225855]: 2026-01-20 15:01:48.340 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:01:48 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:01:48 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:01:48 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:01:48.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:01:48 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/2885650483' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:01:48 compute-1 ceph-mon[81775]: pgmap v2304: 321 pgs: 321 active+clean; 246 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 3.7 MiB/s wr, 159 op/s
Jan 20 15:01:48 compute-1 nova_compute[225855]: 2026-01-20 15:01:48.776 225859 DEBUG nova.network.neutron [None req-a557ce2c-7da9-48a8-8108-5744558daadb 1654794111844ca88666b3529173e9a7 3a1d679d5c954662a271e842fe2f2c05 - - default default] [instance: 2b9353e4-2bd0-4a4a-b9a1-dd24929a4af1] Successfully updated port: 6216baae-337d-44a3-aa38-60c2afb5d13f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 20 15:01:48 compute-1 nova_compute[225855]: 2026-01-20 15:01:48.803 225859 DEBUG oslo_concurrency.lockutils [None req-a557ce2c-7da9-48a8-8108-5744558daadb 1654794111844ca88666b3529173e9a7 3a1d679d5c954662a271e842fe2f2c05 - - default default] Acquiring lock "refresh_cache-2b9353e4-2bd0-4a4a-b9a1-dd24929a4af1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:01:48 compute-1 nova_compute[225855]: 2026-01-20 15:01:48.804 225859 DEBUG oslo_concurrency.lockutils [None req-a557ce2c-7da9-48a8-8108-5744558daadb 1654794111844ca88666b3529173e9a7 3a1d679d5c954662a271e842fe2f2c05 - - default default] Acquired lock "refresh_cache-2b9353e4-2bd0-4a4a-b9a1-dd24929a4af1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:01:48 compute-1 nova_compute[225855]: 2026-01-20 15:01:48.804 225859 DEBUG nova.network.neutron [None req-a557ce2c-7da9-48a8-8108-5744558daadb 1654794111844ca88666b3529173e9a7 3a1d679d5c954662a271e842fe2f2c05 - - default default] [instance: 2b9353e4-2bd0-4a4a-b9a1-dd24929a4af1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 15:01:49 compute-1 nova_compute[225855]: 2026-01-20 15:01:49.029 225859 DEBUG nova.compute.manager [req-9a937e69-f6fd-46bc-9bbf-4370a0ca8765 req-61a44dde-7864-4631-8912-f6885a52cef2 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 2b9353e4-2bd0-4a4a-b9a1-dd24929a4af1] Received event network-changed-6216baae-337d-44a3-aa38-60c2afb5d13f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:01:49 compute-1 nova_compute[225855]: 2026-01-20 15:01:49.029 225859 DEBUG nova.compute.manager [req-9a937e69-f6fd-46bc-9bbf-4370a0ca8765 req-61a44dde-7864-4631-8912-f6885a52cef2 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 2b9353e4-2bd0-4a4a-b9a1-dd24929a4af1] Refreshing instance network info cache due to event network-changed-6216baae-337d-44a3-aa38-60c2afb5d13f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 15:01:49 compute-1 nova_compute[225855]: 2026-01-20 15:01:49.029 225859 DEBUG oslo_concurrency.lockutils [req-9a937e69-f6fd-46bc-9bbf-4370a0ca8765 req-61a44dde-7864-4631-8912-f6885a52cef2 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-2b9353e4-2bd0-4a4a-b9a1-dd24929a4af1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:01:49 compute-1 nova_compute[225855]: 2026-01-20 15:01:49.073 225859 DEBUG nova.network.neutron [None req-a557ce2c-7da9-48a8-8108-5744558daadb 1654794111844ca88666b3529173e9a7 3a1d679d5c954662a271e842fe2f2c05 - - default default] [instance: 2b9353e4-2bd0-4a4a-b9a1-dd24929a4af1] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 20 15:01:49 compute-1 nova_compute[225855]: 2026-01-20 15:01:49.218 225859 DEBUG nova.network.neutron [None req-de5e8f40-ac38-43be-94d3-f6d29767939e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: 474cec75-3b01-411a-9074-75859d2a9ddf] Updating instance_info_cache with network_info: [{"id": "244332ba-1b58-4d42-98b0-245f9460c50f", "address": "fa:16:3e:6f:36:24", "network": {"id": "671e28d0-0b9e-41e0-b5e0-db1ccd4717ec", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-884777184-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "63555e5851564db08c6429231d264f2c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap244332ba-1b", "ovs_interfaceid": "244332ba-1b58-4d42-98b0-245f9460c50f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:01:49 compute-1 nova_compute[225855]: 2026-01-20 15:01:49.257 225859 DEBUG oslo_concurrency.lockutils [None req-de5e8f40-ac38-43be-94d3-f6d29767939e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Releasing lock "refresh_cache-474cec75-3b01-411a-9074-75859d2a9ddf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:01:49 compute-1 nova_compute[225855]: 2026-01-20 15:01:49.257 225859 DEBUG nova.compute.manager [None req-de5e8f40-ac38-43be-94d3-f6d29767939e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: 474cec75-3b01-411a-9074-75859d2a9ddf] Instance network_info: |[{"id": "244332ba-1b58-4d42-98b0-245f9460c50f", "address": "fa:16:3e:6f:36:24", "network": {"id": "671e28d0-0b9e-41e0-b5e0-db1ccd4717ec", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-884777184-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "63555e5851564db08c6429231d264f2c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap244332ba-1b", "ovs_interfaceid": "244332ba-1b58-4d42-98b0-245f9460c50f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 20 15:01:49 compute-1 nova_compute[225855]: 2026-01-20 15:01:49.258 225859 DEBUG oslo_concurrency.lockutils [req-caaf922f-48c2-4bec-a8fb-33fc70f25712 req-2f67e3bb-0a0e-44ee-b55e-0afae93f40cf 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-474cec75-3b01-411a-9074-75859d2a9ddf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:01:49 compute-1 nova_compute[225855]: 2026-01-20 15:01:49.258 225859 DEBUG nova.network.neutron [req-caaf922f-48c2-4bec-a8fb-33fc70f25712 req-2f67e3bb-0a0e-44ee-b55e-0afae93f40cf 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 474cec75-3b01-411a-9074-75859d2a9ddf] Refreshing network info cache for port 244332ba-1b58-4d42-98b0-245f9460c50f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 15:01:49 compute-1 nova_compute[225855]: 2026-01-20 15:01:49.262 225859 DEBUG nova.virt.libvirt.driver [None req-de5e8f40-ac38-43be-94d3-f6d29767939e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: 474cec75-3b01-411a-9074-75859d2a9ddf] Start _get_guest_xml network_info=[{"id": "244332ba-1b58-4d42-98b0-245f9460c50f", "address": "fa:16:3e:6f:36:24", "network": {"id": "671e28d0-0b9e-41e0-b5e0-db1ccd4717ec", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-884777184-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "63555e5851564db08c6429231d264f2c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap244332ba-1b", "ovs_interfaceid": "244332ba-1b58-4d42-98b0-245f9460c50f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'delete_on_termination': False, 'device_type': 'disk', 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-41743468-7add-45cb-bc94-02eb6f850278', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '41743468-7add-45cb-bc94-02eb6f850278', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '474cec75-3b01-411a-9074-75859d2a9ddf', 'attached_at': '', 'detached_at': '', 'volume_id': '41743468-7add-45cb-bc94-02eb6f850278', 'serial': '41743468-7add-45cb-bc94-02eb6f850278'}, 'guest_format': None, 'boot_index': 0, 'mount_device': '/dev/vda', 'attachment_id': '20658306-e0e7-4d9c-a904-24cfdd1b82ee', 'disk_bus': 'virtio', 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 20 15:01:49 compute-1 nova_compute[225855]: 2026-01-20 15:01:49.274 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:01:49 compute-1 nova_compute[225855]: 2026-01-20 15:01:49.279 225859 WARNING nova.virt.libvirt.driver [None req-de5e8f40-ac38-43be-94d3-f6d29767939e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 15:01:49 compute-1 nova_compute[225855]: 2026-01-20 15:01:49.283 225859 DEBUG nova.virt.libvirt.host [None req-de5e8f40-ac38-43be-94d3-f6d29767939e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 20 15:01:49 compute-1 nova_compute[225855]: 2026-01-20 15:01:49.283 225859 DEBUG nova.virt.libvirt.host [None req-de5e8f40-ac38-43be-94d3-f6d29767939e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 20 15:01:49 compute-1 nova_compute[225855]: 2026-01-20 15:01:49.291 225859 DEBUG nova.virt.libvirt.host [None req-de5e8f40-ac38-43be-94d3-f6d29767939e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 20 15:01:49 compute-1 nova_compute[225855]: 2026-01-20 15:01:49.292 225859 DEBUG nova.virt.libvirt.host [None req-de5e8f40-ac38-43be-94d3-f6d29767939e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 20 15:01:49 compute-1 nova_compute[225855]: 2026-01-20 15:01:49.293 225859 DEBUG nova.virt.libvirt.driver [None req-de5e8f40-ac38-43be-94d3-f6d29767939e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 20 15:01:49 compute-1 nova_compute[225855]: 2026-01-20 15:01:49.293 225859 DEBUG nova.virt.hardware [None req-de5e8f40-ac38-43be-94d3-f6d29767939e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-20T14:21:55Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='522deaab-a741-4dbb-932d-d8b13a211c33',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 20 15:01:49 compute-1 nova_compute[225855]: 2026-01-20 15:01:49.293 225859 DEBUG nova.virt.hardware [None req-de5e8f40-ac38-43be-94d3-f6d29767939e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 20 15:01:49 compute-1 nova_compute[225855]: 2026-01-20 15:01:49.294 225859 DEBUG nova.virt.hardware [None req-de5e8f40-ac38-43be-94d3-f6d29767939e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 20 15:01:49 compute-1 nova_compute[225855]: 2026-01-20 15:01:49.294 225859 DEBUG nova.virt.hardware [None req-de5e8f40-ac38-43be-94d3-f6d29767939e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 20 15:01:49 compute-1 nova_compute[225855]: 2026-01-20 15:01:49.294 225859 DEBUG nova.virt.hardware [None req-de5e8f40-ac38-43be-94d3-f6d29767939e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 20 15:01:49 compute-1 nova_compute[225855]: 2026-01-20 15:01:49.294 225859 DEBUG nova.virt.hardware [None req-de5e8f40-ac38-43be-94d3-f6d29767939e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 20 15:01:49 compute-1 nova_compute[225855]: 2026-01-20 15:01:49.294 225859 DEBUG nova.virt.hardware [None req-de5e8f40-ac38-43be-94d3-f6d29767939e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 20 15:01:49 compute-1 nova_compute[225855]: 2026-01-20 15:01:49.295 225859 DEBUG nova.virt.hardware [None req-de5e8f40-ac38-43be-94d3-f6d29767939e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 20 15:01:49 compute-1 nova_compute[225855]: 2026-01-20 15:01:49.295 225859 DEBUG nova.virt.hardware [None req-de5e8f40-ac38-43be-94d3-f6d29767939e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 20 15:01:49 compute-1 nova_compute[225855]: 2026-01-20 15:01:49.295 225859 DEBUG nova.virt.hardware [None req-de5e8f40-ac38-43be-94d3-f6d29767939e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 20 15:01:49 compute-1 nova_compute[225855]: 2026-01-20 15:01:49.295 225859 DEBUG nova.virt.hardware [None req-de5e8f40-ac38-43be-94d3-f6d29767939e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 20 15:01:49 compute-1 nova_compute[225855]: 2026-01-20 15:01:49.324 225859 DEBUG nova.storage.rbd_utils [None req-de5e8f40-ac38-43be-94d3-f6d29767939e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] rbd image 474cec75-3b01-411a-9074-75859d2a9ddf_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:01:49 compute-1 nova_compute[225855]: 2026-01-20 15:01:49.329 225859 DEBUG oslo_concurrency.processutils [None req-de5e8f40-ac38-43be-94d3-f6d29767939e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:01:49 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 15:01:49 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/193634755' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:01:49 compute-1 nova_compute[225855]: 2026-01-20 15:01:49.760 225859 DEBUG oslo_concurrency.processutils [None req-de5e8f40-ac38-43be-94d3-f6d29767939e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:01:49 compute-1 nova_compute[225855]: 2026-01-20 15:01:49.786 225859 DEBUG nova.virt.libvirt.vif [None req-de5e8f40-ac38-43be-94d3-f6d29767939e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T15:01:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-ServerBootFromVolumeStableRescueTest-server-254746207',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverbootfromvolumestablerescuetest-server-254746207',id=150,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='63555e5851564db08c6429231d264f2c',ramdisk_id='',reservation_id='r-solng1yz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerBootFromVolumeStableRescueTest-1871371328',owner_user_name='tempest-ServerBootFromVolumeStableRescueTest-1871371328-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T15:01:45Z,user_data=None,user_id='2446e8399b344b29986c1aaf8bf73adf',uuid=474cec75-3b01-411a-9074-75859d2a9ddf,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "244332ba-1b58-4d42-98b0-245f9460c50f", "address": "fa:16:3e:6f:36:24", "network": {"id": "671e28d0-0b9e-41e0-b5e0-db1ccd4717ec", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-884777184-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "63555e5851564db08c6429231d264f2c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap244332ba-1b", "ovs_interfaceid": "244332ba-1b58-4d42-98b0-245f9460c50f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 20 15:01:49 compute-1 nova_compute[225855]: 2026-01-20 15:01:49.787 225859 DEBUG nova.network.os_vif_util [None req-de5e8f40-ac38-43be-94d3-f6d29767939e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Converting VIF {"id": "244332ba-1b58-4d42-98b0-245f9460c50f", "address": "fa:16:3e:6f:36:24", "network": {"id": "671e28d0-0b9e-41e0-b5e0-db1ccd4717ec", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-884777184-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "63555e5851564db08c6429231d264f2c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap244332ba-1b", "ovs_interfaceid": "244332ba-1b58-4d42-98b0-245f9460c50f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 15:01:49 compute-1 nova_compute[225855]: 2026-01-20 15:01:49.788 225859 DEBUG nova.network.os_vif_util [None req-de5e8f40-ac38-43be-94d3-f6d29767939e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6f:36:24,bridge_name='br-int',has_traffic_filtering=True,id=244332ba-1b58-4d42-98b0-245f9460c50f,network=Network(671e28d0-0b9e-41e0-b5e0-db1ccd4717ec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap244332ba-1b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 15:01:49 compute-1 nova_compute[225855]: 2026-01-20 15:01:49.789 225859 DEBUG nova.objects.instance [None req-de5e8f40-ac38-43be-94d3-f6d29767939e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Lazy-loading 'pci_devices' on Instance uuid 474cec75-3b01-411a-9074-75859d2a9ddf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:01:49 compute-1 nova_compute[225855]: 2026-01-20 15:01:49.803 225859 DEBUG nova.virt.libvirt.driver [None req-de5e8f40-ac38-43be-94d3-f6d29767939e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: 474cec75-3b01-411a-9074-75859d2a9ddf] End _get_guest_xml xml=<domain type="kvm">
Jan 20 15:01:49 compute-1 nova_compute[225855]:   <uuid>474cec75-3b01-411a-9074-75859d2a9ddf</uuid>
Jan 20 15:01:49 compute-1 nova_compute[225855]:   <name>instance-00000096</name>
Jan 20 15:01:49 compute-1 nova_compute[225855]:   <memory>131072</memory>
Jan 20 15:01:49 compute-1 nova_compute[225855]:   <vcpu>1</vcpu>
Jan 20 15:01:49 compute-1 nova_compute[225855]:   <metadata>
Jan 20 15:01:49 compute-1 nova_compute[225855]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 15:01:49 compute-1 nova_compute[225855]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 15:01:49 compute-1 nova_compute[225855]:       <nova:name>tempest-ServerBootFromVolumeStableRescueTest-server-254746207</nova:name>
Jan 20 15:01:49 compute-1 nova_compute[225855]:       <nova:creationTime>2026-01-20 15:01:49</nova:creationTime>
Jan 20 15:01:49 compute-1 nova_compute[225855]:       <nova:flavor name="m1.nano">
Jan 20 15:01:49 compute-1 nova_compute[225855]:         <nova:memory>128</nova:memory>
Jan 20 15:01:49 compute-1 nova_compute[225855]:         <nova:disk>1</nova:disk>
Jan 20 15:01:49 compute-1 nova_compute[225855]:         <nova:swap>0</nova:swap>
Jan 20 15:01:49 compute-1 nova_compute[225855]:         <nova:ephemeral>0</nova:ephemeral>
Jan 20 15:01:49 compute-1 nova_compute[225855]:         <nova:vcpus>1</nova:vcpus>
Jan 20 15:01:49 compute-1 nova_compute[225855]:       </nova:flavor>
Jan 20 15:01:49 compute-1 nova_compute[225855]:       <nova:owner>
Jan 20 15:01:49 compute-1 nova_compute[225855]:         <nova:user uuid="2446e8399b344b29986c1aaf8bf73adf">tempest-ServerBootFromVolumeStableRescueTest-1871371328-project-member</nova:user>
Jan 20 15:01:49 compute-1 nova_compute[225855]:         <nova:project uuid="63555e5851564db08c6429231d264f2c">tempest-ServerBootFromVolumeStableRescueTest-1871371328</nova:project>
Jan 20 15:01:49 compute-1 nova_compute[225855]:       </nova:owner>
Jan 20 15:01:49 compute-1 nova_compute[225855]:       <nova:root type="image" uuid="a32b3e07-16d8-46fd-9a7b-c242c432fcf9"/>
Jan 20 15:01:49 compute-1 nova_compute[225855]:       <nova:ports>
Jan 20 15:01:49 compute-1 nova_compute[225855]:         <nova:port uuid="244332ba-1b58-4d42-98b0-245f9460c50f">
Jan 20 15:01:49 compute-1 nova_compute[225855]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Jan 20 15:01:49 compute-1 nova_compute[225855]:         </nova:port>
Jan 20 15:01:49 compute-1 nova_compute[225855]:       </nova:ports>
Jan 20 15:01:49 compute-1 nova_compute[225855]:     </nova:instance>
Jan 20 15:01:49 compute-1 nova_compute[225855]:   </metadata>
Jan 20 15:01:49 compute-1 nova_compute[225855]:   <sysinfo type="smbios">
Jan 20 15:01:49 compute-1 nova_compute[225855]:     <system>
Jan 20 15:01:49 compute-1 nova_compute[225855]:       <entry name="manufacturer">RDO</entry>
Jan 20 15:01:49 compute-1 nova_compute[225855]:       <entry name="product">OpenStack Compute</entry>
Jan 20 15:01:49 compute-1 nova_compute[225855]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 15:01:49 compute-1 nova_compute[225855]:       <entry name="serial">474cec75-3b01-411a-9074-75859d2a9ddf</entry>
Jan 20 15:01:49 compute-1 nova_compute[225855]:       <entry name="uuid">474cec75-3b01-411a-9074-75859d2a9ddf</entry>
Jan 20 15:01:49 compute-1 nova_compute[225855]:       <entry name="family">Virtual Machine</entry>
Jan 20 15:01:49 compute-1 nova_compute[225855]:     </system>
Jan 20 15:01:49 compute-1 nova_compute[225855]:   </sysinfo>
Jan 20 15:01:49 compute-1 nova_compute[225855]:   <os>
Jan 20 15:01:49 compute-1 nova_compute[225855]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 20 15:01:49 compute-1 nova_compute[225855]:     <boot dev="hd"/>
Jan 20 15:01:49 compute-1 nova_compute[225855]:     <smbios mode="sysinfo"/>
Jan 20 15:01:49 compute-1 nova_compute[225855]:   </os>
Jan 20 15:01:49 compute-1 nova_compute[225855]:   <features>
Jan 20 15:01:49 compute-1 nova_compute[225855]:     <acpi/>
Jan 20 15:01:49 compute-1 nova_compute[225855]:     <apic/>
Jan 20 15:01:49 compute-1 nova_compute[225855]:     <vmcoreinfo/>
Jan 20 15:01:49 compute-1 nova_compute[225855]:   </features>
Jan 20 15:01:49 compute-1 nova_compute[225855]:   <clock offset="utc">
Jan 20 15:01:49 compute-1 nova_compute[225855]:     <timer name="pit" tickpolicy="delay"/>
Jan 20 15:01:49 compute-1 nova_compute[225855]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 20 15:01:49 compute-1 nova_compute[225855]:     <timer name="hpet" present="no"/>
Jan 20 15:01:49 compute-1 nova_compute[225855]:   </clock>
Jan 20 15:01:49 compute-1 nova_compute[225855]:   <cpu mode="custom" match="exact">
Jan 20 15:01:49 compute-1 nova_compute[225855]:     <model>Nehalem</model>
Jan 20 15:01:49 compute-1 nova_compute[225855]:     <topology sockets="1" cores="1" threads="1"/>
Jan 20 15:01:49 compute-1 nova_compute[225855]:   </cpu>
Jan 20 15:01:49 compute-1 nova_compute[225855]:   <devices>
Jan 20 15:01:49 compute-1 nova_compute[225855]:     <disk type="network" device="cdrom">
Jan 20 15:01:49 compute-1 nova_compute[225855]:       <driver type="raw" cache="none"/>
Jan 20 15:01:49 compute-1 nova_compute[225855]:       <source protocol="rbd" name="vms/474cec75-3b01-411a-9074-75859d2a9ddf_disk.config">
Jan 20 15:01:49 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 15:01:49 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 15:01:49 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 15:01:49 compute-1 nova_compute[225855]:       </source>
Jan 20 15:01:49 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 15:01:49 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 15:01:49 compute-1 nova_compute[225855]:       </auth>
Jan 20 15:01:49 compute-1 nova_compute[225855]:       <target dev="sda" bus="sata"/>
Jan 20 15:01:49 compute-1 nova_compute[225855]:     </disk>
Jan 20 15:01:49 compute-1 nova_compute[225855]:     <disk type="network" device="disk">
Jan 20 15:01:49 compute-1 nova_compute[225855]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 20 15:01:49 compute-1 nova_compute[225855]:       <source protocol="rbd" name="volumes/volume-41743468-7add-45cb-bc94-02eb6f850278">
Jan 20 15:01:49 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 15:01:49 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 15:01:49 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 15:01:49 compute-1 nova_compute[225855]:       </source>
Jan 20 15:01:49 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 15:01:49 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 15:01:49 compute-1 nova_compute[225855]:       </auth>
Jan 20 15:01:49 compute-1 nova_compute[225855]:       <target dev="vda" bus="virtio"/>
Jan 20 15:01:49 compute-1 nova_compute[225855]:       <serial>41743468-7add-45cb-bc94-02eb6f850278</serial>
Jan 20 15:01:49 compute-1 nova_compute[225855]:     </disk>
Jan 20 15:01:49 compute-1 nova_compute[225855]:     <interface type="ethernet">
Jan 20 15:01:49 compute-1 nova_compute[225855]:       <mac address="fa:16:3e:6f:36:24"/>
Jan 20 15:01:49 compute-1 nova_compute[225855]:       <model type="virtio"/>
Jan 20 15:01:49 compute-1 nova_compute[225855]:       <driver name="vhost" rx_queue_size="512"/>
Jan 20 15:01:49 compute-1 nova_compute[225855]:       <mtu size="1442"/>
Jan 20 15:01:49 compute-1 nova_compute[225855]:       <target dev="tap244332ba-1b"/>
Jan 20 15:01:49 compute-1 nova_compute[225855]:     </interface>
Jan 20 15:01:49 compute-1 nova_compute[225855]:     <serial type="pty">
Jan 20 15:01:49 compute-1 nova_compute[225855]:       <log file="/var/lib/nova/instances/474cec75-3b01-411a-9074-75859d2a9ddf/console.log" append="off"/>
Jan 20 15:01:49 compute-1 nova_compute[225855]:     </serial>
Jan 20 15:01:49 compute-1 nova_compute[225855]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 20 15:01:49 compute-1 nova_compute[225855]:     <video>
Jan 20 15:01:49 compute-1 nova_compute[225855]:       <model type="virtio"/>
Jan 20 15:01:49 compute-1 nova_compute[225855]:     </video>
Jan 20 15:01:49 compute-1 nova_compute[225855]:     <input type="tablet" bus="usb"/>
Jan 20 15:01:49 compute-1 nova_compute[225855]:     <rng model="virtio">
Jan 20 15:01:49 compute-1 nova_compute[225855]:       <backend model="random">/dev/urandom</backend>
Jan 20 15:01:49 compute-1 nova_compute[225855]:     </rng>
Jan 20 15:01:49 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root"/>
Jan 20 15:01:49 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:01:49 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:01:49 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:01:49 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:01:49 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:01:49 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:01:49 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:01:49 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:01:49 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:01:49 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:01:49 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:01:49 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:01:49 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:01:49 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:01:49 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:01:49 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:01:49 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:01:49 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:01:49 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:01:49 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:01:49 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:01:49 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:01:49 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:01:49 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:01:49 compute-1 nova_compute[225855]:     <controller type="usb" index="0"/>
Jan 20 15:01:49 compute-1 nova_compute[225855]:     <memballoon model="virtio">
Jan 20 15:01:49 compute-1 nova_compute[225855]:       <stats period="10"/>
Jan 20 15:01:49 compute-1 nova_compute[225855]:     </memballoon>
Jan 20 15:01:49 compute-1 nova_compute[225855]:   </devices>
Jan 20 15:01:49 compute-1 nova_compute[225855]: </domain>
Jan 20 15:01:49 compute-1 nova_compute[225855]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 20 15:01:49 compute-1 nova_compute[225855]: 2026-01-20 15:01:49.804 225859 DEBUG nova.compute.manager [None req-de5e8f40-ac38-43be-94d3-f6d29767939e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: 474cec75-3b01-411a-9074-75859d2a9ddf] Preparing to wait for external event network-vif-plugged-244332ba-1b58-4d42-98b0-245f9460c50f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 20 15:01:49 compute-1 nova_compute[225855]: 2026-01-20 15:01:49.805 225859 DEBUG oslo_concurrency.lockutils [None req-de5e8f40-ac38-43be-94d3-f6d29767939e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Acquiring lock "474cec75-3b01-411a-9074-75859d2a9ddf-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:01:49 compute-1 nova_compute[225855]: 2026-01-20 15:01:49.805 225859 DEBUG oslo_concurrency.lockutils [None req-de5e8f40-ac38-43be-94d3-f6d29767939e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Lock "474cec75-3b01-411a-9074-75859d2a9ddf-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:01:49 compute-1 nova_compute[225855]: 2026-01-20 15:01:49.805 225859 DEBUG oslo_concurrency.lockutils [None req-de5e8f40-ac38-43be-94d3-f6d29767939e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Lock "474cec75-3b01-411a-9074-75859d2a9ddf-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:01:49 compute-1 nova_compute[225855]: 2026-01-20 15:01:49.806 225859 DEBUG nova.virt.libvirt.vif [None req-de5e8f40-ac38-43be-94d3-f6d29767939e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T15:01:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-ServerBootFromVolumeStableRescueTest-server-254746207',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverbootfromvolumestablerescuetest-server-254746207',id=150,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='63555e5851564db08c6429231d264f2c',ramdisk_id='',reservation_id='r-solng1yz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerBootFromVolumeStableRescueTest-1871371328',owner_user_name='tempest-ServerBootFromVolumeStableRescueTest-1871371328-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T15:01:45Z,user_data=None,user_id='2446e8399b344b29986c1aaf8bf73adf',uuid=474cec75-3b01-411a-9074-75859d2a9ddf,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "244332ba-1b58-4d42-98b0-245f9460c50f", "address": "fa:16:3e:6f:36:24", "network": {"id": "671e28d0-0b9e-41e0-b5e0-db1ccd4717ec", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-884777184-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "63555e5851564db08c6429231d264f2c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap244332ba-1b", "ovs_interfaceid": "244332ba-1b58-4d42-98b0-245f9460c50f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 20 15:01:49 compute-1 nova_compute[225855]: 2026-01-20 15:01:49.806 225859 DEBUG nova.network.os_vif_util [None req-de5e8f40-ac38-43be-94d3-f6d29767939e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Converting VIF {"id": "244332ba-1b58-4d42-98b0-245f9460c50f", "address": "fa:16:3e:6f:36:24", "network": {"id": "671e28d0-0b9e-41e0-b5e0-db1ccd4717ec", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-884777184-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "63555e5851564db08c6429231d264f2c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap244332ba-1b", "ovs_interfaceid": "244332ba-1b58-4d42-98b0-245f9460c50f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 15:01:49 compute-1 nova_compute[225855]: 2026-01-20 15:01:49.807 225859 DEBUG nova.network.os_vif_util [None req-de5e8f40-ac38-43be-94d3-f6d29767939e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6f:36:24,bridge_name='br-int',has_traffic_filtering=True,id=244332ba-1b58-4d42-98b0-245f9460c50f,network=Network(671e28d0-0b9e-41e0-b5e0-db1ccd4717ec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap244332ba-1b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 15:01:49 compute-1 nova_compute[225855]: 2026-01-20 15:01:49.807 225859 DEBUG os_vif [None req-de5e8f40-ac38-43be-94d3-f6d29767939e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6f:36:24,bridge_name='br-int',has_traffic_filtering=True,id=244332ba-1b58-4d42-98b0-245f9460c50f,network=Network(671e28d0-0b9e-41e0-b5e0-db1ccd4717ec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap244332ba-1b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 20 15:01:49 compute-1 nova_compute[225855]: 2026-01-20 15:01:49.808 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:01:49 compute-1 nova_compute[225855]: 2026-01-20 15:01:49.808 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:01:49 compute-1 nova_compute[225855]: 2026-01-20 15:01:49.809 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 15:01:49 compute-1 nova_compute[225855]: 2026-01-20 15:01:49.811 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:01:49 compute-1 nova_compute[225855]: 2026-01-20 15:01:49.812 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap244332ba-1b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:01:49 compute-1 nova_compute[225855]: 2026-01-20 15:01:49.812 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap244332ba-1b, col_values=(('external_ids', {'iface-id': '244332ba-1b58-4d42-98b0-245f9460c50f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:6f:36:24', 'vm-uuid': '474cec75-3b01-411a-9074-75859d2a9ddf'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:01:49 compute-1 NetworkManager[49104]: <info>  [1768921309.8149] manager: (tap244332ba-1b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/261)
Jan 20 15:01:49 compute-1 nova_compute[225855]: 2026-01-20 15:01:49.816 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 15:01:49 compute-1 nova_compute[225855]: 2026-01-20 15:01:49.819 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:01:49 compute-1 nova_compute[225855]: 2026-01-20 15:01:49.820 225859 INFO os_vif [None req-de5e8f40-ac38-43be-94d3-f6d29767939e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6f:36:24,bridge_name='br-int',has_traffic_filtering=True,id=244332ba-1b58-4d42-98b0-245f9460c50f,network=Network(671e28d0-0b9e-41e0-b5e0-db1ccd4717ec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap244332ba-1b')
Jan 20 15:01:49 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:01:49 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:01:49 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:01:49.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:01:49 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/193634755' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:01:49 compute-1 nova_compute[225855]: 2026-01-20 15:01:49.898 225859 DEBUG nova.virt.libvirt.driver [None req-de5e8f40-ac38-43be-94d3-f6d29767939e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 15:01:49 compute-1 nova_compute[225855]: 2026-01-20 15:01:49.899 225859 DEBUG nova.virt.libvirt.driver [None req-de5e8f40-ac38-43be-94d3-f6d29767939e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 15:01:49 compute-1 nova_compute[225855]: 2026-01-20 15:01:49.899 225859 DEBUG nova.virt.libvirt.driver [None req-de5e8f40-ac38-43be-94d3-f6d29767939e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] No VIF found with MAC fa:16:3e:6f:36:24, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 20 15:01:49 compute-1 nova_compute[225855]: 2026-01-20 15:01:49.899 225859 INFO nova.virt.libvirt.driver [None req-de5e8f40-ac38-43be-94d3-f6d29767939e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: 474cec75-3b01-411a-9074-75859d2a9ddf] Using config drive
Jan 20 15:01:49 compute-1 nova_compute[225855]: 2026-01-20 15:01:49.923 225859 DEBUG nova.storage.rbd_utils [None req-de5e8f40-ac38-43be-94d3-f6d29767939e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] rbd image 474cec75-3b01-411a-9074-75859d2a9ddf_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:01:50 compute-1 nova_compute[225855]: 2026-01-20 15:01:50.340 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:01:50 compute-1 nova_compute[225855]: 2026-01-20 15:01:50.366 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:01:50 compute-1 nova_compute[225855]: 2026-01-20 15:01:50.367 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:01:50 compute-1 nova_compute[225855]: 2026-01-20 15:01:50.367 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:01:50 compute-1 nova_compute[225855]: 2026-01-20 15:01:50.367 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 15:01:50 compute-1 nova_compute[225855]: 2026-01-20 15:01:50.368 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:01:50 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:01:50 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:01:50 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:01:50.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:01:50 compute-1 nova_compute[225855]: 2026-01-20 15:01:50.629 225859 DEBUG nova.network.neutron [None req-a557ce2c-7da9-48a8-8108-5744558daadb 1654794111844ca88666b3529173e9a7 3a1d679d5c954662a271e842fe2f2c05 - - default default] [instance: 2b9353e4-2bd0-4a4a-b9a1-dd24929a4af1] Updating instance_info_cache with network_info: [{"id": "6216baae-337d-44a3-aa38-60c2afb5d13f", "address": "fa:16:3e:87:b9:ea", "network": {"id": "43d3be8f-9be1-4892-bbfe-d0ba2d7157ad", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1740636070-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3a1d679d5c954662a271e842fe2f2c05", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6216baae-33", "ovs_interfaceid": "6216baae-337d-44a3-aa38-60c2afb5d13f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:01:50 compute-1 nova_compute[225855]: 2026-01-20 15:01:50.653 225859 DEBUG oslo_concurrency.lockutils [None req-a557ce2c-7da9-48a8-8108-5744558daadb 1654794111844ca88666b3529173e9a7 3a1d679d5c954662a271e842fe2f2c05 - - default default] Releasing lock "refresh_cache-2b9353e4-2bd0-4a4a-b9a1-dd24929a4af1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:01:50 compute-1 nova_compute[225855]: 2026-01-20 15:01:50.654 225859 DEBUG nova.compute.manager [None req-a557ce2c-7da9-48a8-8108-5744558daadb 1654794111844ca88666b3529173e9a7 3a1d679d5c954662a271e842fe2f2c05 - - default default] [instance: 2b9353e4-2bd0-4a4a-b9a1-dd24929a4af1] Instance network_info: |[{"id": "6216baae-337d-44a3-aa38-60c2afb5d13f", "address": "fa:16:3e:87:b9:ea", "network": {"id": "43d3be8f-9be1-4892-bbfe-d0ba2d7157ad", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1740636070-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3a1d679d5c954662a271e842fe2f2c05", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6216baae-33", "ovs_interfaceid": "6216baae-337d-44a3-aa38-60c2afb5d13f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 20 15:01:50 compute-1 nova_compute[225855]: 2026-01-20 15:01:50.654 225859 DEBUG oslo_concurrency.lockutils [req-9a937e69-f6fd-46bc-9bbf-4370a0ca8765 req-61a44dde-7864-4631-8912-f6885a52cef2 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-2b9353e4-2bd0-4a4a-b9a1-dd24929a4af1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:01:50 compute-1 nova_compute[225855]: 2026-01-20 15:01:50.655 225859 DEBUG nova.network.neutron [req-9a937e69-f6fd-46bc-9bbf-4370a0ca8765 req-61a44dde-7864-4631-8912-f6885a52cef2 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 2b9353e4-2bd0-4a4a-b9a1-dd24929a4af1] Refreshing network info cache for port 6216baae-337d-44a3-aa38-60c2afb5d13f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 15:01:50 compute-1 nova_compute[225855]: 2026-01-20 15:01:50.658 225859 DEBUG nova.virt.libvirt.driver [None req-a557ce2c-7da9-48a8-8108-5744558daadb 1654794111844ca88666b3529173e9a7 3a1d679d5c954662a271e842fe2f2c05 - - default default] [instance: 2b9353e4-2bd0-4a4a-b9a1-dd24929a4af1] Start _get_guest_xml network_info=[{"id": "6216baae-337d-44a3-aa38-60c2afb5d13f", "address": "fa:16:3e:87:b9:ea", "network": {"id": "43d3be8f-9be1-4892-bbfe-d0ba2d7157ad", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1740636070-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3a1d679d5c954662a271e842fe2f2c05", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6216baae-33", "ovs_interfaceid": "6216baae-337d-44a3-aa38-60c2afb5d13f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'device_type': 'disk', 'encryption_options': None, 'size': 0, 'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'guest_format': None, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'image_id': 'a32b3e07-16d8-46fd-9a7b-c242c432fcf9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 20 15:01:50 compute-1 nova_compute[225855]: 2026-01-20 15:01:50.665 225859 WARNING nova.virt.libvirt.driver [None req-a557ce2c-7da9-48a8-8108-5744558daadb 1654794111844ca88666b3529173e9a7 3a1d679d5c954662a271e842fe2f2c05 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 15:01:50 compute-1 nova_compute[225855]: 2026-01-20 15:01:50.670 225859 DEBUG nova.virt.libvirt.host [None req-a557ce2c-7da9-48a8-8108-5744558daadb 1654794111844ca88666b3529173e9a7 3a1d679d5c954662a271e842fe2f2c05 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 20 15:01:50 compute-1 nova_compute[225855]: 2026-01-20 15:01:50.671 225859 DEBUG nova.virt.libvirt.host [None req-a557ce2c-7da9-48a8-8108-5744558daadb 1654794111844ca88666b3529173e9a7 3a1d679d5c954662a271e842fe2f2c05 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 20 15:01:50 compute-1 nova_compute[225855]: 2026-01-20 15:01:50.680 225859 DEBUG nova.virt.libvirt.host [None req-a557ce2c-7da9-48a8-8108-5744558daadb 1654794111844ca88666b3529173e9a7 3a1d679d5c954662a271e842fe2f2c05 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 20 15:01:50 compute-1 nova_compute[225855]: 2026-01-20 15:01:50.680 225859 DEBUG nova.virt.libvirt.host [None req-a557ce2c-7da9-48a8-8108-5744558daadb 1654794111844ca88666b3529173e9a7 3a1d679d5c954662a271e842fe2f2c05 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 20 15:01:50 compute-1 nova_compute[225855]: 2026-01-20 15:01:50.681 225859 DEBUG nova.virt.libvirt.driver [None req-a557ce2c-7da9-48a8-8108-5744558daadb 1654794111844ca88666b3529173e9a7 3a1d679d5c954662a271e842fe2f2c05 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 20 15:01:50 compute-1 nova_compute[225855]: 2026-01-20 15:01:50.682 225859 DEBUG nova.virt.hardware [None req-a557ce2c-7da9-48a8-8108-5744558daadb 1654794111844ca88666b3529173e9a7 3a1d679d5c954662a271e842fe2f2c05 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-20T14:21:55Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='522deaab-a741-4dbb-932d-d8b13a211c33',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 20 15:01:50 compute-1 nova_compute[225855]: 2026-01-20 15:01:50.682 225859 DEBUG nova.virt.hardware [None req-a557ce2c-7da9-48a8-8108-5744558daadb 1654794111844ca88666b3529173e9a7 3a1d679d5c954662a271e842fe2f2c05 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 20 15:01:50 compute-1 nova_compute[225855]: 2026-01-20 15:01:50.682 225859 DEBUG nova.virt.hardware [None req-a557ce2c-7da9-48a8-8108-5744558daadb 1654794111844ca88666b3529173e9a7 3a1d679d5c954662a271e842fe2f2c05 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 20 15:01:50 compute-1 nova_compute[225855]: 2026-01-20 15:01:50.683 225859 DEBUG nova.virt.hardware [None req-a557ce2c-7da9-48a8-8108-5744558daadb 1654794111844ca88666b3529173e9a7 3a1d679d5c954662a271e842fe2f2c05 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 20 15:01:50 compute-1 nova_compute[225855]: 2026-01-20 15:01:50.683 225859 DEBUG nova.virt.hardware [None req-a557ce2c-7da9-48a8-8108-5744558daadb 1654794111844ca88666b3529173e9a7 3a1d679d5c954662a271e842fe2f2c05 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 20 15:01:50 compute-1 nova_compute[225855]: 2026-01-20 15:01:50.683 225859 DEBUG nova.virt.hardware [None req-a557ce2c-7da9-48a8-8108-5744558daadb 1654794111844ca88666b3529173e9a7 3a1d679d5c954662a271e842fe2f2c05 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 20 15:01:50 compute-1 nova_compute[225855]: 2026-01-20 15:01:50.683 225859 DEBUG nova.virt.hardware [None req-a557ce2c-7da9-48a8-8108-5744558daadb 1654794111844ca88666b3529173e9a7 3a1d679d5c954662a271e842fe2f2c05 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 20 15:01:50 compute-1 nova_compute[225855]: 2026-01-20 15:01:50.683 225859 DEBUG nova.virt.hardware [None req-a557ce2c-7da9-48a8-8108-5744558daadb 1654794111844ca88666b3529173e9a7 3a1d679d5c954662a271e842fe2f2c05 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 20 15:01:50 compute-1 nova_compute[225855]: 2026-01-20 15:01:50.684 225859 DEBUG nova.virt.hardware [None req-a557ce2c-7da9-48a8-8108-5744558daadb 1654794111844ca88666b3529173e9a7 3a1d679d5c954662a271e842fe2f2c05 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 20 15:01:50 compute-1 nova_compute[225855]: 2026-01-20 15:01:50.684 225859 DEBUG nova.virt.hardware [None req-a557ce2c-7da9-48a8-8108-5744558daadb 1654794111844ca88666b3529173e9a7 3a1d679d5c954662a271e842fe2f2c05 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 20 15:01:50 compute-1 nova_compute[225855]: 2026-01-20 15:01:50.684 225859 DEBUG nova.virt.hardware [None req-a557ce2c-7da9-48a8-8108-5744558daadb 1654794111844ca88666b3529173e9a7 3a1d679d5c954662a271e842fe2f2c05 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 20 15:01:50 compute-1 nova_compute[225855]: 2026-01-20 15:01:50.687 225859 DEBUG oslo_concurrency.processutils [None req-a557ce2c-7da9-48a8-8108-5744558daadb 1654794111844ca88666b3529173e9a7 3a1d679d5c954662a271e842fe2f2c05 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:01:50 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:01:50 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3373454363' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:01:50 compute-1 nova_compute[225855]: 2026-01-20 15:01:50.803 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:01:50 compute-1 nova_compute[225855]: 2026-01-20 15:01:50.836 225859 INFO nova.virt.libvirt.driver [None req-de5e8f40-ac38-43be-94d3-f6d29767939e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: 474cec75-3b01-411a-9074-75859d2a9ddf] Creating config drive at /var/lib/nova/instances/474cec75-3b01-411a-9074-75859d2a9ddf/disk.config
Jan 20 15:01:50 compute-1 nova_compute[225855]: 2026-01-20 15:01:50.842 225859 DEBUG oslo_concurrency.processutils [None req-de5e8f40-ac38-43be-94d3-f6d29767939e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/474cec75-3b01-411a-9074-75859d2a9ddf/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpvpwqoz6p execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:01:50 compute-1 nova_compute[225855]: 2026-01-20 15:01:50.894 225859 DEBUG nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] skipping disk for instance-00000096 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 15:01:50 compute-1 nova_compute[225855]: 2026-01-20 15:01:50.895 225859 DEBUG nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] skipping disk for instance-00000096 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 15:01:50 compute-1 ceph-mon[81775]: pgmap v2305: 321 pgs: 321 active+clean; 280 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 3.8 MiB/s wr, 164 op/s
Jan 20 15:01:50 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/3373454363' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:01:50 compute-1 nova_compute[225855]: 2026-01-20 15:01:50.976 225859 DEBUG oslo_concurrency.processutils [None req-de5e8f40-ac38-43be-94d3-f6d29767939e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/474cec75-3b01-411a-9074-75859d2a9ddf/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpvpwqoz6p" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:01:50 compute-1 nova_compute[225855]: 2026-01-20 15:01:50.999 225859 DEBUG nova.storage.rbd_utils [None req-de5e8f40-ac38-43be-94d3-f6d29767939e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] rbd image 474cec75-3b01-411a-9074-75859d2a9ddf_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:01:51 compute-1 nova_compute[225855]: 2026-01-20 15:01:51.002 225859 DEBUG oslo_concurrency.processutils [None req-de5e8f40-ac38-43be-94d3-f6d29767939e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/474cec75-3b01-411a-9074-75859d2a9ddf/disk.config 474cec75-3b01-411a-9074-75859d2a9ddf_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:01:51 compute-1 nova_compute[225855]: 2026-01-20 15:01:51.118 225859 WARNING nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 15:01:51 compute-1 nova_compute[225855]: 2026-01-20 15:01:51.120 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4311MB free_disk=20.897380828857422GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 15:01:51 compute-1 nova_compute[225855]: 2026-01-20 15:01:51.120 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:01:51 compute-1 nova_compute[225855]: 2026-01-20 15:01:51.121 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:01:51 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 15:01:51 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2575874763' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:01:51 compute-1 nova_compute[225855]: 2026-01-20 15:01:51.145 225859 DEBUG oslo_concurrency.processutils [None req-a557ce2c-7da9-48a8-8108-5744558daadb 1654794111844ca88666b3529173e9a7 3a1d679d5c954662a271e842fe2f2c05 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:01:51 compute-1 nova_compute[225855]: 2026-01-20 15:01:51.168 225859 DEBUG nova.storage.rbd_utils [None req-a557ce2c-7da9-48a8-8108-5744558daadb 1654794111844ca88666b3529173e9a7 3a1d679d5c954662a271e842fe2f2c05 - - default default] rbd image 2b9353e4-2bd0-4a4a-b9a1-dd24929a4af1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:01:51 compute-1 nova_compute[225855]: 2026-01-20 15:01:51.172 225859 DEBUG oslo_concurrency.processutils [None req-a557ce2c-7da9-48a8-8108-5744558daadb 1654794111844ca88666b3529173e9a7 3a1d679d5c954662a271e842fe2f2c05 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:01:51 compute-1 nova_compute[225855]: 2026-01-20 15:01:51.197 225859 DEBUG oslo_concurrency.processutils [None req-de5e8f40-ac38-43be-94d3-f6d29767939e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/474cec75-3b01-411a-9074-75859d2a9ddf/disk.config 474cec75-3b01-411a-9074-75859d2a9ddf_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.195s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:01:51 compute-1 nova_compute[225855]: 2026-01-20 15:01:51.198 225859 INFO nova.virt.libvirt.driver [None req-de5e8f40-ac38-43be-94d3-f6d29767939e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: 474cec75-3b01-411a-9074-75859d2a9ddf] Deleting local config drive /var/lib/nova/instances/474cec75-3b01-411a-9074-75859d2a9ddf/disk.config because it was imported into RBD.
Jan 20 15:01:51 compute-1 NetworkManager[49104]: <info>  [1768921311.2472] manager: (tap244332ba-1b): new Tun device (/org/freedesktop/NetworkManager/Devices/262)
Jan 20 15:01:51 compute-1 nova_compute[225855]: 2026-01-20 15:01:51.247 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Instance 474cec75-3b01-411a-9074-75859d2a9ddf actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 20 15:01:51 compute-1 nova_compute[225855]: 2026-01-20 15:01:51.247 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Instance 2b9353e4-2bd0-4a4a-b9a1-dd24929a4af1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 20 15:01:51 compute-1 nova_compute[225855]: 2026-01-20 15:01:51.248 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 15:01:51 compute-1 nova_compute[225855]: 2026-01-20 15:01:51.248 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 15:01:51 compute-1 kernel: tap244332ba-1b: entered promiscuous mode
Jan 20 15:01:51 compute-1 nova_compute[225855]: 2026-01-20 15:01:51.251 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:01:51 compute-1 ovn_controller[130490]: 2026-01-20T15:01:51Z|00611|binding|INFO|Claiming lport 244332ba-1b58-4d42-98b0-245f9460c50f for this chassis.
Jan 20 15:01:51 compute-1 ovn_controller[130490]: 2026-01-20T15:01:51Z|00612|binding|INFO|244332ba-1b58-4d42-98b0-245f9460c50f: Claiming fa:16:3e:6f:36:24 10.100.0.4
Jan 20 15:01:51 compute-1 nova_compute[225855]: 2026-01-20 15:01:51.260 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:01:51 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:01:51.268 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6f:36:24 10.100.0.4'], port_security=['fa:16:3e:6f:36:24 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '474cec75-3b01-411a-9074-75859d2a9ddf', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-671e28d0-0b9e-41e0-b5e0-db1ccd4717ec', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '63555e5851564db08c6429231d264f2c', 'neutron:revision_number': '2', 'neutron:security_group_ids': '7e54c470-6a6f-454e-ae01-9d2d59b2c74d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=248fa32c-94be-4e1b-b4d3-cb9fac0ec155, chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], logical_port=244332ba-1b58-4d42-98b0-245f9460c50f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:01:51 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:01:51.269 140354 INFO neutron.agent.ovn.metadata.agent [-] Port 244332ba-1b58-4d42-98b0-245f9460c50f in datapath 671e28d0-0b9e-41e0-b5e0-db1ccd4717ec bound to our chassis
Jan 20 15:01:51 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:01:51.271 140354 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 671e28d0-0b9e-41e0-b5e0-db1ccd4717ec
Jan 20 15:01:51 compute-1 systemd-machined[194361]: New machine qemu-72-instance-00000096.
Jan 20 15:01:51 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:01:51.290 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[c35fa35e-a517-484c-bc39-a153017c50c8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:01:51 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:01:51.290 140354 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap671e28d0-01 in ovnmeta-671e28d0-0b9e-41e0-b5e0-db1ccd4717ec namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 20 15:01:51 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:01:51.292 229707 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap671e28d0-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 20 15:01:51 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:01:51.292 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[d60af356-92cd-4c0f-a067-5257859fa9ea]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:01:51 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:01:51.293 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[85bb7cfd-85ec-44f4-a337-f3968852c99d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:01:51 compute-1 systemd[1]: Started Virtual Machine qemu-72-instance-00000096.
Jan 20 15:01:51 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:01:51.305 140466 DEBUG oslo.privsep.daemon [-] privsep: reply[aaaa6497-d0c9-4e02-8084-89dc648d7ea5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:01:51 compute-1 ovn_controller[130490]: 2026-01-20T15:01:51Z|00613|binding|INFO|Setting lport 244332ba-1b58-4d42-98b0-245f9460c50f ovn-installed in OVS
Jan 20 15:01:51 compute-1 ovn_controller[130490]: 2026-01-20T15:01:51Z|00614|binding|INFO|Setting lport 244332ba-1b58-4d42-98b0-245f9460c50f up in Southbound
Jan 20 15:01:51 compute-1 systemd-udevd[285562]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 15:01:51 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:01:51.333 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[6ee3a330-a29b-461f-8368-7c2e48a598a9]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:01:51 compute-1 nova_compute[225855]: 2026-01-20 15:01:51.336 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:01:51 compute-1 NetworkManager[49104]: <info>  [1768921311.3492] device (tap244332ba-1b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 20 15:01:51 compute-1 NetworkManager[49104]: <info>  [1768921311.3499] device (tap244332ba-1b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 20 15:01:51 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:01:51.376 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[72b66347-e393-42d0-8c90-fe7cf75d0aed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:01:51 compute-1 nova_compute[225855]: 2026-01-20 15:01:51.376 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:01:51 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:01:51.382 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[19cb985a-867d-4650-88ee-2a13d9f3fe9e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:01:51 compute-1 NetworkManager[49104]: <info>  [1768921311.3832] manager: (tap671e28d0-00): new Veth device (/org/freedesktop/NetworkManager/Devices/263)
Jan 20 15:01:51 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:01:51.421 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[d0e65acb-9db3-4634-850d-76b483aadeec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:01:51 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:01:51.424 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[c01aab0c-7532-4d57-aadb-7e3e7123376c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:01:51 compute-1 NetworkManager[49104]: <info>  [1768921311.4457] device (tap671e28d0-00): carrier: link connected
Jan 20 15:01:51 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:01:51.451 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[6cba133a-e9c1-4e5f-9575-ca4478f82c15]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:01:51 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:01:51.467 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[ec9eb4c9-9ed4-4fb0-bca0-6206d4a45e03]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap671e28d0-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2b:4e:69'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 173], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 635037, 'reachable_time': 30729, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 285595, 'error': None, 'target': 'ovnmeta-671e28d0-0b9e-41e0-b5e0-db1ccd4717ec', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:01:51 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:01:51.481 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[7ef09764-ac6e-4e3a-825f-2dfc24dd8625]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe2b:4e69'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 635037, 'tstamp': 635037}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 285596, 'error': None, 'target': 'ovnmeta-671e28d0-0b9e-41e0-b5e0-db1ccd4717ec', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:01:51 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:01:51.496 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[0a047f33-f554-4486-a6c0-31ff79179bd9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap671e28d0-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2b:4e:69'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 173], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 635037, 'reachable_time': 30729, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 285598, 'error': None, 'target': 'ovnmeta-671e28d0-0b9e-41e0-b5e0-db1ccd4717ec', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:01:51 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:01:51.524 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[9d3a5f4f-6eb6-4479-8673-4e7bfbb287e9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:01:51 compute-1 nova_compute[225855]: 2026-01-20 15:01:51.529 225859 DEBUG nova.network.neutron [req-caaf922f-48c2-4bec-a8fb-33fc70f25712 req-2f67e3bb-0a0e-44ee-b55e-0afae93f40cf 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 474cec75-3b01-411a-9074-75859d2a9ddf] Updated VIF entry in instance network info cache for port 244332ba-1b58-4d42-98b0-245f9460c50f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 15:01:51 compute-1 nova_compute[225855]: 2026-01-20 15:01:51.531 225859 DEBUG nova.network.neutron [req-caaf922f-48c2-4bec-a8fb-33fc70f25712 req-2f67e3bb-0a0e-44ee-b55e-0afae93f40cf 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 474cec75-3b01-411a-9074-75859d2a9ddf] Updating instance_info_cache with network_info: [{"id": "244332ba-1b58-4d42-98b0-245f9460c50f", "address": "fa:16:3e:6f:36:24", "network": {"id": "671e28d0-0b9e-41e0-b5e0-db1ccd4717ec", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-884777184-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "63555e5851564db08c6429231d264f2c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap244332ba-1b", "ovs_interfaceid": "244332ba-1b58-4d42-98b0-245f9460c50f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:01:51 compute-1 nova_compute[225855]: 2026-01-20 15:01:51.561 225859 DEBUG oslo_concurrency.lockutils [req-caaf922f-48c2-4bec-a8fb-33fc70f25712 req-2f67e3bb-0a0e-44ee-b55e-0afae93f40cf 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-474cec75-3b01-411a-9074-75859d2a9ddf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:01:51 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:01:51.587 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[ce5ed1a5-5f5a-460b-9d86-384f6e5043a9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:01:51 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:01:51.589 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap671e28d0-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:01:51 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:01:51.589 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 15:01:51 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:01:51.590 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap671e28d0-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:01:51 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 15:01:51 compute-1 kernel: tap671e28d0-00: entered promiscuous mode
Jan 20 15:01:51 compute-1 NetworkManager[49104]: <info>  [1768921311.6376] manager: (tap671e28d0-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/264)
Jan 20 15:01:51 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/120102328' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:01:51 compute-1 nova_compute[225855]: 2026-01-20 15:01:51.637 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:01:51 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:01:51.640 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap671e28d0-00, col_values=(('external_ids', {'iface-id': 'a8628d9e-196f-4b84-89fd-d3a41792b8a0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:01:51 compute-1 nova_compute[225855]: 2026-01-20 15:01:51.641 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:01:51 compute-1 ovn_controller[130490]: 2026-01-20T15:01:51Z|00615|binding|INFO|Releasing lport a8628d9e-196f-4b84-89fd-d3a41792b8a0 from this chassis (sb_readonly=0)
Jan 20 15:01:51 compute-1 nova_compute[225855]: 2026-01-20 15:01:51.660 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:01:51 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:01:51.660 140354 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/671e28d0-0b9e-41e0-b5e0-db1ccd4717ec.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/671e28d0-0b9e-41e0-b5e0-db1ccd4717ec.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 20 15:01:51 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:01:51.661 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[7b3e5480-4f76-4bdd-a24f-d53ad9a2b5f0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:01:51 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:01:51.662 140354 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 20 15:01:51 compute-1 ovn_metadata_agent[140349]: global
Jan 20 15:01:51 compute-1 ovn_metadata_agent[140349]:     log         /dev/log local0 debug
Jan 20 15:01:51 compute-1 ovn_metadata_agent[140349]:     log-tag     haproxy-metadata-proxy-671e28d0-0b9e-41e0-b5e0-db1ccd4717ec
Jan 20 15:01:51 compute-1 ovn_metadata_agent[140349]:     user        root
Jan 20 15:01:51 compute-1 ovn_metadata_agent[140349]:     group       root
Jan 20 15:01:51 compute-1 ovn_metadata_agent[140349]:     maxconn     1024
Jan 20 15:01:51 compute-1 ovn_metadata_agent[140349]:     pidfile     /var/lib/neutron/external/pids/671e28d0-0b9e-41e0-b5e0-db1ccd4717ec.pid.haproxy
Jan 20 15:01:51 compute-1 ovn_metadata_agent[140349]:     daemon
Jan 20 15:01:51 compute-1 ovn_metadata_agent[140349]: 
Jan 20 15:01:51 compute-1 ovn_metadata_agent[140349]: defaults
Jan 20 15:01:51 compute-1 ovn_metadata_agent[140349]:     log global
Jan 20 15:01:51 compute-1 ovn_metadata_agent[140349]:     mode http
Jan 20 15:01:51 compute-1 ovn_metadata_agent[140349]:     option httplog
Jan 20 15:01:51 compute-1 ovn_metadata_agent[140349]:     option dontlognull
Jan 20 15:01:51 compute-1 ovn_metadata_agent[140349]:     option http-server-close
Jan 20 15:01:51 compute-1 ovn_metadata_agent[140349]:     option forwardfor
Jan 20 15:01:51 compute-1 ovn_metadata_agent[140349]:     retries                 3
Jan 20 15:01:51 compute-1 ovn_metadata_agent[140349]:     timeout http-request    30s
Jan 20 15:01:51 compute-1 ovn_metadata_agent[140349]:     timeout connect         30s
Jan 20 15:01:51 compute-1 ovn_metadata_agent[140349]:     timeout client          32s
Jan 20 15:01:51 compute-1 ovn_metadata_agent[140349]:     timeout server          32s
Jan 20 15:01:51 compute-1 ovn_metadata_agent[140349]:     timeout http-keep-alive 30s
Jan 20 15:01:51 compute-1 ovn_metadata_agent[140349]: 
Jan 20 15:01:51 compute-1 ovn_metadata_agent[140349]: 
Jan 20 15:01:51 compute-1 ovn_metadata_agent[140349]: listen listener
Jan 20 15:01:51 compute-1 ovn_metadata_agent[140349]:     bind 169.254.169.254:80
Jan 20 15:01:51 compute-1 ovn_metadata_agent[140349]:     server metadata /var/lib/neutron/metadata_proxy
Jan 20 15:01:51 compute-1 ovn_metadata_agent[140349]:     http-request add-header X-OVN-Network-ID 671e28d0-0b9e-41e0-b5e0-db1ccd4717ec
Jan 20 15:01:51 compute-1 ovn_metadata_agent[140349]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 20 15:01:51 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:01:51.663 140354 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-671e28d0-0b9e-41e0-b5e0-db1ccd4717ec', 'env', 'PROCESS_TAG=haproxy-671e28d0-0b9e-41e0-b5e0-db1ccd4717ec', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/671e28d0-0b9e-41e0-b5e0-db1ccd4717ec.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 20 15:01:51 compute-1 nova_compute[225855]: 2026-01-20 15:01:51.672 225859 DEBUG oslo_concurrency.processutils [None req-a557ce2c-7da9-48a8-8108-5744558daadb 1654794111844ca88666b3529173e9a7 3a1d679d5c954662a271e842fe2f2c05 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.500s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:01:51 compute-1 nova_compute[225855]: 2026-01-20 15:01:51.674 225859 DEBUG nova.virt.libvirt.vif [None req-a557ce2c-7da9-48a8-8108-5744558daadb 1654794111844ca88666b3529173e9a7 3a1d679d5c954662a271e842fe2f2c05 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T15:01:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestSnapshotPattern-server-2070424486',display_name='tempest-TestSnapshotPattern-server-2070424486',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testsnapshotpattern-server-2070424486',id=151,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHt2Pjp5fO1h9ikmCXDj2fSFlpzjIfjh7jCgXMa0An0AiWgQhFRQBExuSvqHDwsNMcN7FUPQzPGoYvUkqz0I21jbk9kMja07pP6W664P26WxVinBA8YoIkVl5tlHownM8g==',key_name='tempest-TestSnapshotPattern-503298877',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3a1d679d5c954662a271e842fe2f2c05',ramdisk_id='',reservation_id='r-4u8oxks9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSnapshotPattern-1341092631',owner_user_name='tempest-TestSnapshotPattern-1341092631-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T15:01:45Z,user_data=None,user_id='1654794111844ca88666b3529173e9a7',uuid=2b9353e4-2bd0-4a4a-b9a1-dd24929a4af1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6216baae-337d-44a3-aa38-60c2afb5d13f", "address": "fa:16:3e:87:b9:ea", "network": {"id": "43d3be8f-9be1-4892-bbfe-d0ba2d7157ad", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1740636070-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], 
"meta": {"injected": false, "tenant_id": "3a1d679d5c954662a271e842fe2f2c05", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6216baae-33", "ovs_interfaceid": "6216baae-337d-44a3-aa38-60c2afb5d13f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 20 15:01:51 compute-1 nova_compute[225855]: 2026-01-20 15:01:51.674 225859 DEBUG nova.network.os_vif_util [None req-a557ce2c-7da9-48a8-8108-5744558daadb 1654794111844ca88666b3529173e9a7 3a1d679d5c954662a271e842fe2f2c05 - - default default] Converting VIF {"id": "6216baae-337d-44a3-aa38-60c2afb5d13f", "address": "fa:16:3e:87:b9:ea", "network": {"id": "43d3be8f-9be1-4892-bbfe-d0ba2d7157ad", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1740636070-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3a1d679d5c954662a271e842fe2f2c05", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6216baae-33", "ovs_interfaceid": "6216baae-337d-44a3-aa38-60c2afb5d13f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 15:01:51 compute-1 nova_compute[225855]: 2026-01-20 15:01:51.675 225859 DEBUG nova.network.os_vif_util [None req-a557ce2c-7da9-48a8-8108-5744558daadb 1654794111844ca88666b3529173e9a7 3a1d679d5c954662a271e842fe2f2c05 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:87:b9:ea,bridge_name='br-int',has_traffic_filtering=True,id=6216baae-337d-44a3-aa38-60c2afb5d13f,network=Network(43d3be8f-9be1-4892-bbfe-d0ba2d7157ad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6216baae-33') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 15:01:51 compute-1 nova_compute[225855]: 2026-01-20 15:01:51.677 225859 DEBUG nova.objects.instance [None req-a557ce2c-7da9-48a8-8108-5744558daadb 1654794111844ca88666b3529173e9a7 3a1d679d5c954662a271e842fe2f2c05 - - default default] Lazy-loading 'pci_devices' on Instance uuid 2b9353e4-2bd0-4a4a-b9a1-dd24929a4af1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:01:51 compute-1 nova_compute[225855]: 2026-01-20 15:01:51.701 225859 DEBUG nova.virt.libvirt.driver [None req-a557ce2c-7da9-48a8-8108-5744558daadb 1654794111844ca88666b3529173e9a7 3a1d679d5c954662a271e842fe2f2c05 - - default default] [instance: 2b9353e4-2bd0-4a4a-b9a1-dd24929a4af1] End _get_guest_xml xml=<domain type="kvm">
Jan 20 15:01:51 compute-1 nova_compute[225855]:   <uuid>2b9353e4-2bd0-4a4a-b9a1-dd24929a4af1</uuid>
Jan 20 15:01:51 compute-1 nova_compute[225855]:   <name>instance-00000097</name>
Jan 20 15:01:51 compute-1 nova_compute[225855]:   <memory>131072</memory>
Jan 20 15:01:51 compute-1 nova_compute[225855]:   <vcpu>1</vcpu>
Jan 20 15:01:51 compute-1 nova_compute[225855]:   <metadata>
Jan 20 15:01:51 compute-1 nova_compute[225855]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 15:01:51 compute-1 nova_compute[225855]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 15:01:51 compute-1 nova_compute[225855]:       <nova:name>tempest-TestSnapshotPattern-server-2070424486</nova:name>
Jan 20 15:01:51 compute-1 nova_compute[225855]:       <nova:creationTime>2026-01-20 15:01:50</nova:creationTime>
Jan 20 15:01:51 compute-1 nova_compute[225855]:       <nova:flavor name="m1.nano">
Jan 20 15:01:51 compute-1 nova_compute[225855]:         <nova:memory>128</nova:memory>
Jan 20 15:01:51 compute-1 nova_compute[225855]:         <nova:disk>1</nova:disk>
Jan 20 15:01:51 compute-1 nova_compute[225855]:         <nova:swap>0</nova:swap>
Jan 20 15:01:51 compute-1 nova_compute[225855]:         <nova:ephemeral>0</nova:ephemeral>
Jan 20 15:01:51 compute-1 nova_compute[225855]:         <nova:vcpus>1</nova:vcpus>
Jan 20 15:01:51 compute-1 nova_compute[225855]:       </nova:flavor>
Jan 20 15:01:51 compute-1 nova_compute[225855]:       <nova:owner>
Jan 20 15:01:51 compute-1 nova_compute[225855]:         <nova:user uuid="1654794111844ca88666b3529173e9a7">tempest-TestSnapshotPattern-1341092631-project-member</nova:user>
Jan 20 15:01:51 compute-1 nova_compute[225855]:         <nova:project uuid="3a1d679d5c954662a271e842fe2f2c05">tempest-TestSnapshotPattern-1341092631</nova:project>
Jan 20 15:01:51 compute-1 nova_compute[225855]:       </nova:owner>
Jan 20 15:01:51 compute-1 nova_compute[225855]:       <nova:root type="image" uuid="a32b3e07-16d8-46fd-9a7b-c242c432fcf9"/>
Jan 20 15:01:51 compute-1 nova_compute[225855]:       <nova:ports>
Jan 20 15:01:51 compute-1 nova_compute[225855]:         <nova:port uuid="6216baae-337d-44a3-aa38-60c2afb5d13f">
Jan 20 15:01:51 compute-1 nova_compute[225855]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Jan 20 15:01:51 compute-1 nova_compute[225855]:         </nova:port>
Jan 20 15:01:51 compute-1 nova_compute[225855]:       </nova:ports>
Jan 20 15:01:51 compute-1 nova_compute[225855]:     </nova:instance>
Jan 20 15:01:51 compute-1 nova_compute[225855]:   </metadata>
Jan 20 15:01:51 compute-1 nova_compute[225855]:   <sysinfo type="smbios">
Jan 20 15:01:51 compute-1 nova_compute[225855]:     <system>
Jan 20 15:01:51 compute-1 nova_compute[225855]:       <entry name="manufacturer">RDO</entry>
Jan 20 15:01:51 compute-1 nova_compute[225855]:       <entry name="product">OpenStack Compute</entry>
Jan 20 15:01:51 compute-1 nova_compute[225855]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 15:01:51 compute-1 nova_compute[225855]:       <entry name="serial">2b9353e4-2bd0-4a4a-b9a1-dd24929a4af1</entry>
Jan 20 15:01:51 compute-1 nova_compute[225855]:       <entry name="uuid">2b9353e4-2bd0-4a4a-b9a1-dd24929a4af1</entry>
Jan 20 15:01:51 compute-1 nova_compute[225855]:       <entry name="family">Virtual Machine</entry>
Jan 20 15:01:51 compute-1 nova_compute[225855]:     </system>
Jan 20 15:01:51 compute-1 nova_compute[225855]:   </sysinfo>
Jan 20 15:01:51 compute-1 nova_compute[225855]:   <os>
Jan 20 15:01:51 compute-1 nova_compute[225855]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 20 15:01:51 compute-1 nova_compute[225855]:     <boot dev="hd"/>
Jan 20 15:01:51 compute-1 nova_compute[225855]:     <smbios mode="sysinfo"/>
Jan 20 15:01:51 compute-1 nova_compute[225855]:   </os>
Jan 20 15:01:51 compute-1 nova_compute[225855]:   <features>
Jan 20 15:01:51 compute-1 nova_compute[225855]:     <acpi/>
Jan 20 15:01:51 compute-1 nova_compute[225855]:     <apic/>
Jan 20 15:01:51 compute-1 nova_compute[225855]:     <vmcoreinfo/>
Jan 20 15:01:51 compute-1 nova_compute[225855]:   </features>
Jan 20 15:01:51 compute-1 nova_compute[225855]:   <clock offset="utc">
Jan 20 15:01:51 compute-1 nova_compute[225855]:     <timer name="pit" tickpolicy="delay"/>
Jan 20 15:01:51 compute-1 nova_compute[225855]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 20 15:01:51 compute-1 nova_compute[225855]:     <timer name="hpet" present="no"/>
Jan 20 15:01:51 compute-1 nova_compute[225855]:   </clock>
Jan 20 15:01:51 compute-1 nova_compute[225855]:   <cpu mode="custom" match="exact">
Jan 20 15:01:51 compute-1 nova_compute[225855]:     <model>Nehalem</model>
Jan 20 15:01:51 compute-1 nova_compute[225855]:     <topology sockets="1" cores="1" threads="1"/>
Jan 20 15:01:51 compute-1 nova_compute[225855]:   </cpu>
Jan 20 15:01:51 compute-1 nova_compute[225855]:   <devices>
Jan 20 15:01:51 compute-1 nova_compute[225855]:     <disk type="network" device="disk">
Jan 20 15:01:51 compute-1 nova_compute[225855]:       <driver type="raw" cache="none"/>
Jan 20 15:01:51 compute-1 nova_compute[225855]:       <source protocol="rbd" name="vms/2b9353e4-2bd0-4a4a-b9a1-dd24929a4af1_disk">
Jan 20 15:01:51 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 15:01:51 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 15:01:51 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 15:01:51 compute-1 nova_compute[225855]:       </source>
Jan 20 15:01:51 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 15:01:51 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 15:01:51 compute-1 nova_compute[225855]:       </auth>
Jan 20 15:01:51 compute-1 nova_compute[225855]:       <target dev="vda" bus="virtio"/>
Jan 20 15:01:51 compute-1 nova_compute[225855]:     </disk>
Jan 20 15:01:51 compute-1 nova_compute[225855]:     <disk type="network" device="cdrom">
Jan 20 15:01:51 compute-1 nova_compute[225855]:       <driver type="raw" cache="none"/>
Jan 20 15:01:51 compute-1 nova_compute[225855]:       <source protocol="rbd" name="vms/2b9353e4-2bd0-4a4a-b9a1-dd24929a4af1_disk.config">
Jan 20 15:01:51 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 15:01:51 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 15:01:51 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 15:01:51 compute-1 nova_compute[225855]:       </source>
Jan 20 15:01:51 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 15:01:51 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 15:01:51 compute-1 nova_compute[225855]:       </auth>
Jan 20 15:01:51 compute-1 nova_compute[225855]:       <target dev="sda" bus="sata"/>
Jan 20 15:01:51 compute-1 nova_compute[225855]:     </disk>
Jan 20 15:01:51 compute-1 nova_compute[225855]:     <interface type="ethernet">
Jan 20 15:01:51 compute-1 nova_compute[225855]:       <mac address="fa:16:3e:87:b9:ea"/>
Jan 20 15:01:51 compute-1 nova_compute[225855]:       <model type="virtio"/>
Jan 20 15:01:51 compute-1 nova_compute[225855]:       <driver name="vhost" rx_queue_size="512"/>
Jan 20 15:01:51 compute-1 nova_compute[225855]:       <mtu size="1442"/>
Jan 20 15:01:51 compute-1 nova_compute[225855]:       <target dev="tap6216baae-33"/>
Jan 20 15:01:51 compute-1 nova_compute[225855]:     </interface>
Jan 20 15:01:51 compute-1 nova_compute[225855]:     <serial type="pty">
Jan 20 15:01:51 compute-1 nova_compute[225855]:       <log file="/var/lib/nova/instances/2b9353e4-2bd0-4a4a-b9a1-dd24929a4af1/console.log" append="off"/>
Jan 20 15:01:51 compute-1 nova_compute[225855]:     </serial>
Jan 20 15:01:51 compute-1 nova_compute[225855]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 20 15:01:51 compute-1 nova_compute[225855]:     <video>
Jan 20 15:01:51 compute-1 nova_compute[225855]:       <model type="virtio"/>
Jan 20 15:01:51 compute-1 nova_compute[225855]:     </video>
Jan 20 15:01:51 compute-1 nova_compute[225855]:     <input type="tablet" bus="usb"/>
Jan 20 15:01:51 compute-1 nova_compute[225855]:     <rng model="virtio">
Jan 20 15:01:51 compute-1 nova_compute[225855]:       <backend model="random">/dev/urandom</backend>
Jan 20 15:01:51 compute-1 nova_compute[225855]:     </rng>
Jan 20 15:01:51 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root"/>
Jan 20 15:01:51 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:01:51 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:01:51 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:01:51 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:01:51 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:01:51 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:01:51 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:01:51 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:01:51 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:01:51 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:01:51 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:01:51 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:01:51 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:01:51 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:01:51 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:01:51 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:01:51 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:01:51 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:01:51 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:01:51 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:01:51 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:01:51 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:01:51 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:01:51 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:01:51 compute-1 nova_compute[225855]:     <controller type="usb" index="0"/>
Jan 20 15:01:51 compute-1 nova_compute[225855]:     <memballoon model="virtio">
Jan 20 15:01:51 compute-1 nova_compute[225855]:       <stats period="10"/>
Jan 20 15:01:51 compute-1 nova_compute[225855]:     </memballoon>
Jan 20 15:01:51 compute-1 nova_compute[225855]:   </devices>
Jan 20 15:01:51 compute-1 nova_compute[225855]: </domain>
Jan 20 15:01:51 compute-1 nova_compute[225855]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 20 15:01:51 compute-1 nova_compute[225855]: 2026-01-20 15:01:51.708 225859 DEBUG nova.compute.manager [None req-a557ce2c-7da9-48a8-8108-5744558daadb 1654794111844ca88666b3529173e9a7 3a1d679d5c954662a271e842fe2f2c05 - - default default] [instance: 2b9353e4-2bd0-4a4a-b9a1-dd24929a4af1] Preparing to wait for external event network-vif-plugged-6216baae-337d-44a3-aa38-60c2afb5d13f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 20 15:01:51 compute-1 nova_compute[225855]: 2026-01-20 15:01:51.709 225859 DEBUG oslo_concurrency.lockutils [None req-a557ce2c-7da9-48a8-8108-5744558daadb 1654794111844ca88666b3529173e9a7 3a1d679d5c954662a271e842fe2f2c05 - - default default] Acquiring lock "2b9353e4-2bd0-4a4a-b9a1-dd24929a4af1-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:01:51 compute-1 nova_compute[225855]: 2026-01-20 15:01:51.709 225859 DEBUG oslo_concurrency.lockutils [None req-a557ce2c-7da9-48a8-8108-5744558daadb 1654794111844ca88666b3529173e9a7 3a1d679d5c954662a271e842fe2f2c05 - - default default] Lock "2b9353e4-2bd0-4a4a-b9a1-dd24929a4af1-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:01:51 compute-1 nova_compute[225855]: 2026-01-20 15:01:51.709 225859 DEBUG oslo_concurrency.lockutils [None req-a557ce2c-7da9-48a8-8108-5744558daadb 1654794111844ca88666b3529173e9a7 3a1d679d5c954662a271e842fe2f2c05 - - default default] Lock "2b9353e4-2bd0-4a4a-b9a1-dd24929a4af1-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:01:51 compute-1 nova_compute[225855]: 2026-01-20 15:01:51.710 225859 DEBUG nova.virt.libvirt.vif [None req-a557ce2c-7da9-48a8-8108-5744558daadb 1654794111844ca88666b3529173e9a7 3a1d679d5c954662a271e842fe2f2c05 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T15:01:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestSnapshotPattern-server-2070424486',display_name='tempest-TestSnapshotPattern-server-2070424486',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testsnapshotpattern-server-2070424486',id=151,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHt2Pjp5fO1h9ikmCXDj2fSFlpzjIfjh7jCgXMa0An0AiWgQhFRQBExuSvqHDwsNMcN7FUPQzPGoYvUkqz0I21jbk9kMja07pP6W664P26WxVinBA8YoIkVl5tlHownM8g==',key_name='tempest-TestSnapshotPattern-503298877',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3a1d679d5c954662a271e842fe2f2c05',ramdisk_id='',reservation_id='r-4u8oxks9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSnapshotPattern-1341092631',owner_user_name='tempest-TestSnapshotPattern-1341092631-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T15:01:45Z,user_data=None,user_id='1654794111844ca88666b3529173e9a7',uuid=2b9353e4-2bd0-4a4a-b9a1-dd24929a4af1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6216baae-337d-44a3-aa38-60c2afb5d13f", "address": "fa:16:3e:87:b9:ea", "network": {"id": "43d3be8f-9be1-4892-bbfe-d0ba2d7157ad", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1740636070-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "3a1d679d5c954662a271e842fe2f2c05", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6216baae-33", "ovs_interfaceid": "6216baae-337d-44a3-aa38-60c2afb5d13f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 20 15:01:51 compute-1 nova_compute[225855]: 2026-01-20 15:01:51.711 225859 DEBUG nova.network.os_vif_util [None req-a557ce2c-7da9-48a8-8108-5744558daadb 1654794111844ca88666b3529173e9a7 3a1d679d5c954662a271e842fe2f2c05 - - default default] Converting VIF {"id": "6216baae-337d-44a3-aa38-60c2afb5d13f", "address": "fa:16:3e:87:b9:ea", "network": {"id": "43d3be8f-9be1-4892-bbfe-d0ba2d7157ad", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1740636070-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3a1d679d5c954662a271e842fe2f2c05", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6216baae-33", "ovs_interfaceid": "6216baae-337d-44a3-aa38-60c2afb5d13f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 15:01:51 compute-1 nova_compute[225855]: 2026-01-20 15:01:51.712 225859 DEBUG nova.network.os_vif_util [None req-a557ce2c-7da9-48a8-8108-5744558daadb 1654794111844ca88666b3529173e9a7 3a1d679d5c954662a271e842fe2f2c05 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:87:b9:ea,bridge_name='br-int',has_traffic_filtering=True,id=6216baae-337d-44a3-aa38-60c2afb5d13f,network=Network(43d3be8f-9be1-4892-bbfe-d0ba2d7157ad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6216baae-33') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 15:01:51 compute-1 nova_compute[225855]: 2026-01-20 15:01:51.712 225859 DEBUG os_vif [None req-a557ce2c-7da9-48a8-8108-5744558daadb 1654794111844ca88666b3529173e9a7 3a1d679d5c954662a271e842fe2f2c05 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:87:b9:ea,bridge_name='br-int',has_traffic_filtering=True,id=6216baae-337d-44a3-aa38-60c2afb5d13f,network=Network(43d3be8f-9be1-4892-bbfe-d0ba2d7157ad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6216baae-33') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 20 15:01:51 compute-1 nova_compute[225855]: 2026-01-20 15:01:51.713 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:01:51 compute-1 nova_compute[225855]: 2026-01-20 15:01:51.713 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:01:51 compute-1 nova_compute[225855]: 2026-01-20 15:01:51.714 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 15:01:51 compute-1 nova_compute[225855]: 2026-01-20 15:01:51.716 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:01:51 compute-1 nova_compute[225855]: 2026-01-20 15:01:51.717 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6216baae-33, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:01:51 compute-1 nova_compute[225855]: 2026-01-20 15:01:51.717 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap6216baae-33, col_values=(('external_ids', {'iface-id': '6216baae-337d-44a3-aa38-60c2afb5d13f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:87:b9:ea', 'vm-uuid': '2b9353e4-2bd0-4a4a-b9a1-dd24929a4af1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:01:51 compute-1 nova_compute[225855]: 2026-01-20 15:01:51.718 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:01:51 compute-1 NetworkManager[49104]: <info>  [1768921311.7197] manager: (tap6216baae-33): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/265)
Jan 20 15:01:51 compute-1 nova_compute[225855]: 2026-01-20 15:01:51.721 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 15:01:51 compute-1 nova_compute[225855]: 2026-01-20 15:01:51.723 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:01:51 compute-1 nova_compute[225855]: 2026-01-20 15:01:51.724 225859 INFO os_vif [None req-a557ce2c-7da9-48a8-8108-5744558daadb 1654794111844ca88666b3529173e9a7 3a1d679d5c954662a271e842fe2f2c05 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:87:b9:ea,bridge_name='br-int',has_traffic_filtering=True,id=6216baae-337d-44a3-aa38-60c2afb5d13f,network=Network(43d3be8f-9be1-4892-bbfe-d0ba2d7157ad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6216baae-33')
Jan 20 15:01:51 compute-1 nova_compute[225855]: 2026-01-20 15:01:51.751 225859 DEBUG nova.compute.manager [req-7c410de6-4154-4108-a488-cda6e89a65b3 req-cfec6579-19b4-4958-b792-98efc3293a0b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 474cec75-3b01-411a-9074-75859d2a9ddf] Received event network-vif-plugged-244332ba-1b58-4d42-98b0-245f9460c50f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:01:51 compute-1 nova_compute[225855]: 2026-01-20 15:01:51.753 225859 DEBUG oslo_concurrency.lockutils [req-7c410de6-4154-4108-a488-cda6e89a65b3 req-cfec6579-19b4-4958-b792-98efc3293a0b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "474cec75-3b01-411a-9074-75859d2a9ddf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:01:51 compute-1 nova_compute[225855]: 2026-01-20 15:01:51.753 225859 DEBUG oslo_concurrency.lockutils [req-7c410de6-4154-4108-a488-cda6e89a65b3 req-cfec6579-19b4-4958-b792-98efc3293a0b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "474cec75-3b01-411a-9074-75859d2a9ddf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:01:51 compute-1 nova_compute[225855]: 2026-01-20 15:01:51.753 225859 DEBUG oslo_concurrency.lockutils [req-7c410de6-4154-4108-a488-cda6e89a65b3 req-cfec6579-19b4-4958-b792-98efc3293a0b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "474cec75-3b01-411a-9074-75859d2a9ddf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:01:51 compute-1 nova_compute[225855]: 2026-01-20 15:01:51.754 225859 DEBUG nova.compute.manager [req-7c410de6-4154-4108-a488-cda6e89a65b3 req-cfec6579-19b4-4958-b792-98efc3293a0b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 474cec75-3b01-411a-9074-75859d2a9ddf] Processing event network-vif-plugged-244332ba-1b58-4d42-98b0-245f9460c50f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 20 15:01:51 compute-1 nova_compute[225855]: 2026-01-20 15:01:51.818 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:01:51 compute-1 nova_compute[225855]: 2026-01-20 15:01:51.826 225859 DEBUG nova.compute.provider_tree [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:01:51 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:01:51 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:01:51 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:01:51.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:01:51 compute-1 nova_compute[225855]: 2026-01-20 15:01:51.859 225859 DEBUG nova.scheduler.client.report [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:01:51 compute-1 nova_compute[225855]: 2026-01-20 15:01:51.867 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:01:51 compute-1 nova_compute[225855]: 2026-01-20 15:01:51.873 225859 DEBUG nova.virt.libvirt.driver [None req-a557ce2c-7da9-48a8-8108-5744558daadb 1654794111844ca88666b3529173e9a7 3a1d679d5c954662a271e842fe2f2c05 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 15:01:51 compute-1 nova_compute[225855]: 2026-01-20 15:01:51.873 225859 DEBUG nova.virt.libvirt.driver [None req-a557ce2c-7da9-48a8-8108-5744558daadb 1654794111844ca88666b3529173e9a7 3a1d679d5c954662a271e842fe2f2c05 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 15:01:51 compute-1 nova_compute[225855]: 2026-01-20 15:01:51.874 225859 DEBUG nova.virt.libvirt.driver [None req-a557ce2c-7da9-48a8-8108-5744558daadb 1654794111844ca88666b3529173e9a7 3a1d679d5c954662a271e842fe2f2c05 - - default default] No VIF found with MAC fa:16:3e:87:b9:ea, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 20 15:01:51 compute-1 nova_compute[225855]: 2026-01-20 15:01:51.874 225859 INFO nova.virt.libvirt.driver [None req-a557ce2c-7da9-48a8-8108-5744558daadb 1654794111844ca88666b3529173e9a7 3a1d679d5c954662a271e842fe2f2c05 - - default default] [instance: 2b9353e4-2bd0-4a4a-b9a1-dd24929a4af1] Using config drive
Jan 20 15:01:51 compute-1 nova_compute[225855]: 2026-01-20 15:01:51.908 225859 DEBUG nova.storage.rbd_utils [None req-a557ce2c-7da9-48a8-8108-5744558daadb 1654794111844ca88666b3529173e9a7 3a1d679d5c954662a271e842fe2f2c05 - - default default] rbd image 2b9353e4-2bd0-4a4a-b9a1-dd24929a4af1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:01:51 compute-1 nova_compute[225855]: 2026-01-20 15:01:51.916 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 15:01:51 compute-1 nova_compute[225855]: 2026-01-20 15:01:51.916 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.796s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:01:51 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/2575874763' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:01:51 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/3482231016' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:01:51 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/1211339276' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:01:51 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/2784167277' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:01:51 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/2784167277' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:01:51 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/120102328' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:01:51 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/2888294495' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:01:52 compute-1 podman[285672]: 2026-01-20 15:01:52.041055805 +0000 UTC m=+0.051337249 container create c3195ecf870ff81516979fba2a40a24375f9afe5dc36f02bd45e63cf475acd84 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-671e28d0-0b9e-41e0-b5e0-db1ccd4717ec, tcib_managed=true, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 20 15:01:52 compute-1 systemd[1]: Started libpod-conmon-c3195ecf870ff81516979fba2a40a24375f9afe5dc36f02bd45e63cf475acd84.scope.
Jan 20 15:01:52 compute-1 podman[285672]: 2026-01-20 15:01:52.015733101 +0000 UTC m=+0.026014575 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 20 15:01:52 compute-1 systemd[1]: Started libcrun container.
Jan 20 15:01:52 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9583a611064af7f6e72be93b7a47c8363e51f876af054788b7c0ed954b9e3b3d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 20 15:01:52 compute-1 podman[285672]: 2026-01-20 15:01:52.135674755 +0000 UTC m=+0.145956209 container init c3195ecf870ff81516979fba2a40a24375f9afe5dc36f02bd45e63cf475acd84 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-671e28d0-0b9e-41e0-b5e0-db1ccd4717ec, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 20 15:01:52 compute-1 podman[285672]: 2026-01-20 15:01:52.141248412 +0000 UTC m=+0.151529866 container start c3195ecf870ff81516979fba2a40a24375f9afe5dc36f02bd45e63cf475acd84 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-671e28d0-0b9e-41e0-b5e0-db1ccd4717ec, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 20 15:01:52 compute-1 neutron-haproxy-ovnmeta-671e28d0-0b9e-41e0-b5e0-db1ccd4717ec[285687]: [NOTICE]   (285691) : New worker (285693) forked
Jan 20 15:01:52 compute-1 neutron-haproxy-ovnmeta-671e28d0-0b9e-41e0-b5e0-db1ccd4717ec[285687]: [NOTICE]   (285691) : Loading success.
Jan 20 15:01:52 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:01:52 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:01:52 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:01:52.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:01:52 compute-1 nova_compute[225855]: 2026-01-20 15:01:52.646 225859 DEBUG nova.compute.manager [None req-de5e8f40-ac38-43be-94d3-f6d29767939e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: 474cec75-3b01-411a-9074-75859d2a9ddf] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 20 15:01:52 compute-1 nova_compute[225855]: 2026-01-20 15:01:52.648 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768921312.646165, 474cec75-3b01-411a-9074-75859d2a9ddf => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:01:52 compute-1 nova_compute[225855]: 2026-01-20 15:01:52.648 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 474cec75-3b01-411a-9074-75859d2a9ddf] VM Started (Lifecycle Event)
Jan 20 15:01:52 compute-1 nova_compute[225855]: 2026-01-20 15:01:52.652 225859 DEBUG nova.virt.libvirt.driver [None req-de5e8f40-ac38-43be-94d3-f6d29767939e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: 474cec75-3b01-411a-9074-75859d2a9ddf] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 20 15:01:52 compute-1 nova_compute[225855]: 2026-01-20 15:01:52.655 225859 INFO nova.virt.libvirt.driver [-] [instance: 474cec75-3b01-411a-9074-75859d2a9ddf] Instance spawned successfully.
Jan 20 15:01:52 compute-1 nova_compute[225855]: 2026-01-20 15:01:52.655 225859 DEBUG nova.virt.libvirt.driver [None req-de5e8f40-ac38-43be-94d3-f6d29767939e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: 474cec75-3b01-411a-9074-75859d2a9ddf] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 20 15:01:52 compute-1 nova_compute[225855]: 2026-01-20 15:01:52.675 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 474cec75-3b01-411a-9074-75859d2a9ddf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:01:52 compute-1 nova_compute[225855]: 2026-01-20 15:01:52.681 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 474cec75-3b01-411a-9074-75859d2a9ddf] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 15:01:52 compute-1 nova_compute[225855]: 2026-01-20 15:01:52.684 225859 DEBUG nova.virt.libvirt.driver [None req-de5e8f40-ac38-43be-94d3-f6d29767939e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: 474cec75-3b01-411a-9074-75859d2a9ddf] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:01:52 compute-1 nova_compute[225855]: 2026-01-20 15:01:52.684 225859 DEBUG nova.virt.libvirt.driver [None req-de5e8f40-ac38-43be-94d3-f6d29767939e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: 474cec75-3b01-411a-9074-75859d2a9ddf] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:01:52 compute-1 nova_compute[225855]: 2026-01-20 15:01:52.685 225859 DEBUG nova.virt.libvirt.driver [None req-de5e8f40-ac38-43be-94d3-f6d29767939e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: 474cec75-3b01-411a-9074-75859d2a9ddf] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:01:52 compute-1 nova_compute[225855]: 2026-01-20 15:01:52.685 225859 DEBUG nova.virt.libvirt.driver [None req-de5e8f40-ac38-43be-94d3-f6d29767939e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: 474cec75-3b01-411a-9074-75859d2a9ddf] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:01:52 compute-1 nova_compute[225855]: 2026-01-20 15:01:52.686 225859 DEBUG nova.virt.libvirt.driver [None req-de5e8f40-ac38-43be-94d3-f6d29767939e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: 474cec75-3b01-411a-9074-75859d2a9ddf] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:01:52 compute-1 nova_compute[225855]: 2026-01-20 15:01:52.686 225859 DEBUG nova.virt.libvirt.driver [None req-de5e8f40-ac38-43be-94d3-f6d29767939e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: 474cec75-3b01-411a-9074-75859d2a9ddf] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:01:52 compute-1 nova_compute[225855]: 2026-01-20 15:01:52.714 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 474cec75-3b01-411a-9074-75859d2a9ddf] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 15:01:52 compute-1 nova_compute[225855]: 2026-01-20 15:01:52.715 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768921312.647177, 474cec75-3b01-411a-9074-75859d2a9ddf => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:01:52 compute-1 nova_compute[225855]: 2026-01-20 15:01:52.715 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 474cec75-3b01-411a-9074-75859d2a9ddf] VM Paused (Lifecycle Event)
Jan 20 15:01:52 compute-1 nova_compute[225855]: 2026-01-20 15:01:52.728 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:01:52 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:01:52.729 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=47, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '12:bb:42', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '06:92:24:f7:15:56'}, ipsec=False) old=SB_Global(nb_cfg=46) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:01:52 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:01:52.730 140354 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 20 15:01:52 compute-1 nova_compute[225855]: 2026-01-20 15:01:52.764 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 474cec75-3b01-411a-9074-75859d2a9ddf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:01:52 compute-1 nova_compute[225855]: 2026-01-20 15:01:52.768 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768921312.6512923, 474cec75-3b01-411a-9074-75859d2a9ddf => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:01:52 compute-1 nova_compute[225855]: 2026-01-20 15:01:52.768 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 474cec75-3b01-411a-9074-75859d2a9ddf] VM Resumed (Lifecycle Event)
Jan 20 15:01:52 compute-1 nova_compute[225855]: 2026-01-20 15:01:52.772 225859 INFO nova.compute.manager [None req-de5e8f40-ac38-43be-94d3-f6d29767939e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: 474cec75-3b01-411a-9074-75859d2a9ddf] Took 4.74 seconds to spawn the instance on the hypervisor.
Jan 20 15:01:52 compute-1 nova_compute[225855]: 2026-01-20 15:01:52.772 225859 DEBUG nova.compute.manager [None req-de5e8f40-ac38-43be-94d3-f6d29767939e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: 474cec75-3b01-411a-9074-75859d2a9ddf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:01:52 compute-1 nova_compute[225855]: 2026-01-20 15:01:52.825 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 474cec75-3b01-411a-9074-75859d2a9ddf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:01:52 compute-1 nova_compute[225855]: 2026-01-20 15:01:52.828 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 474cec75-3b01-411a-9074-75859d2a9ddf] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 15:01:52 compute-1 nova_compute[225855]: 2026-01-20 15:01:52.841 225859 INFO nova.virt.libvirt.driver [None req-a557ce2c-7da9-48a8-8108-5744558daadb 1654794111844ca88666b3529173e9a7 3a1d679d5c954662a271e842fe2f2c05 - - default default] [instance: 2b9353e4-2bd0-4a4a-b9a1-dd24929a4af1] Creating config drive at /var/lib/nova/instances/2b9353e4-2bd0-4a4a-b9a1-dd24929a4af1/disk.config
Jan 20 15:01:52 compute-1 nova_compute[225855]: 2026-01-20 15:01:52.847 225859 DEBUG oslo_concurrency.processutils [None req-a557ce2c-7da9-48a8-8108-5744558daadb 1654794111844ca88666b3529173e9a7 3a1d679d5c954662a271e842fe2f2c05 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/2b9353e4-2bd0-4a4a-b9a1-dd24929a4af1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp28zsa7s7 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:01:52 compute-1 nova_compute[225855]: 2026-01-20 15:01:52.882 225859 INFO nova.compute.manager [None req-de5e8f40-ac38-43be-94d3-f6d29767939e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: 474cec75-3b01-411a-9074-75859d2a9ddf] Took 8.43 seconds to build instance.
Jan 20 15:01:52 compute-1 nova_compute[225855]: 2026-01-20 15:01:52.908 225859 DEBUG nova.network.neutron [req-9a937e69-f6fd-46bc-9bbf-4370a0ca8765 req-61a44dde-7864-4631-8912-f6885a52cef2 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 2b9353e4-2bd0-4a4a-b9a1-dd24929a4af1] Updated VIF entry in instance network info cache for port 6216baae-337d-44a3-aa38-60c2afb5d13f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 15:01:52 compute-1 nova_compute[225855]: 2026-01-20 15:01:52.909 225859 DEBUG nova.network.neutron [req-9a937e69-f6fd-46bc-9bbf-4370a0ca8765 req-61a44dde-7864-4631-8912-f6885a52cef2 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 2b9353e4-2bd0-4a4a-b9a1-dd24929a4af1] Updating instance_info_cache with network_info: [{"id": "6216baae-337d-44a3-aa38-60c2afb5d13f", "address": "fa:16:3e:87:b9:ea", "network": {"id": "43d3be8f-9be1-4892-bbfe-d0ba2d7157ad", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1740636070-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3a1d679d5c954662a271e842fe2f2c05", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6216baae-33", "ovs_interfaceid": "6216baae-337d-44a3-aa38-60c2afb5d13f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:01:52 compute-1 nova_compute[225855]: 2026-01-20 15:01:52.917 225859 DEBUG oslo_concurrency.lockutils [None req-de5e8f40-ac38-43be-94d3-f6d29767939e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Lock "474cec75-3b01-411a-9074-75859d2a9ddf" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.535s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:01:52 compute-1 nova_compute[225855]: 2026-01-20 15:01:52.928 225859 DEBUG oslo_concurrency.lockutils [req-9a937e69-f6fd-46bc-9bbf-4370a0ca8765 req-61a44dde-7864-4631-8912-f6885a52cef2 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-2b9353e4-2bd0-4a4a-b9a1-dd24929a4af1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:01:52 compute-1 nova_compute[225855]: 2026-01-20 15:01:52.979 225859 DEBUG oslo_concurrency.processutils [None req-a557ce2c-7da9-48a8-8108-5744558daadb 1654794111844ca88666b3529173e9a7 3a1d679d5c954662a271e842fe2f2c05 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/2b9353e4-2bd0-4a4a-b9a1-dd24929a4af1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp28zsa7s7" returned: 0 in 0.132s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:01:52 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e333 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:01:53 compute-1 ceph-mon[81775]: pgmap v2306: 321 pgs: 321 active+clean; 294 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 965 KiB/s rd, 4.7 MiB/s wr, 155 op/s
Jan 20 15:01:53 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/2513427491' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:01:53 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/4289062265' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:01:53 compute-1 nova_compute[225855]: 2026-01-20 15:01:53.005 225859 DEBUG nova.storage.rbd_utils [None req-a557ce2c-7da9-48a8-8108-5744558daadb 1654794111844ca88666b3529173e9a7 3a1d679d5c954662a271e842fe2f2c05 - - default default] rbd image 2b9353e4-2bd0-4a4a-b9a1-dd24929a4af1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:01:53 compute-1 nova_compute[225855]: 2026-01-20 15:01:53.011 225859 DEBUG oslo_concurrency.processutils [None req-a557ce2c-7da9-48a8-8108-5744558daadb 1654794111844ca88666b3529173e9a7 3a1d679d5c954662a271e842fe2f2c05 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/2b9353e4-2bd0-4a4a-b9a1-dd24929a4af1/disk.config 2b9353e4-2bd0-4a4a-b9a1-dd24929a4af1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:01:53 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 20 15:01:53 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2483513834' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:01:53 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 20 15:01:53 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2483513834' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:01:53 compute-1 nova_compute[225855]: 2026-01-20 15:01:53.371 225859 DEBUG oslo_concurrency.processutils [None req-a557ce2c-7da9-48a8-8108-5744558daadb 1654794111844ca88666b3529173e9a7 3a1d679d5c954662a271e842fe2f2c05 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/2b9353e4-2bd0-4a4a-b9a1-dd24929a4af1/disk.config 2b9353e4-2bd0-4a4a-b9a1-dd24929a4af1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.359s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:01:53 compute-1 nova_compute[225855]: 2026-01-20 15:01:53.372 225859 INFO nova.virt.libvirt.driver [None req-a557ce2c-7da9-48a8-8108-5744558daadb 1654794111844ca88666b3529173e9a7 3a1d679d5c954662a271e842fe2f2c05 - - default default] [instance: 2b9353e4-2bd0-4a4a-b9a1-dd24929a4af1] Deleting local config drive /var/lib/nova/instances/2b9353e4-2bd0-4a4a-b9a1-dd24929a4af1/disk.config because it was imported into RBD.
Jan 20 15:01:53 compute-1 kernel: tap6216baae-33: entered promiscuous mode
Jan 20 15:01:53 compute-1 NetworkManager[49104]: <info>  [1768921313.4277] manager: (tap6216baae-33): new Tun device (/org/freedesktop/NetworkManager/Devices/266)
Jan 20 15:01:53 compute-1 systemd-udevd[285590]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 15:01:53 compute-1 ovn_controller[130490]: 2026-01-20T15:01:53Z|00616|binding|INFO|Claiming lport 6216baae-337d-44a3-aa38-60c2afb5d13f for this chassis.
Jan 20 15:01:53 compute-1 ovn_controller[130490]: 2026-01-20T15:01:53Z|00617|binding|INFO|6216baae-337d-44a3-aa38-60c2afb5d13f: Claiming fa:16:3e:87:b9:ea 10.100.0.12
Jan 20 15:01:53 compute-1 nova_compute[225855]: 2026-01-20 15:01:53.431 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:01:53 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:01:53.439 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:87:b9:ea 10.100.0.12'], port_security=['fa:16:3e:87:b9:ea 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '2b9353e4-2bd0-4a4a-b9a1-dd24929a4af1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-43d3be8f-9be1-4892-bbfe-d0ba2d7157ad', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3a1d679d5c954662a271e842fe2f2c05', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f11f0ae2-6b78-4d57-a9ea-5a7c52439262', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=773a665f-440e-445e-8ca6-20a8b67e017a, chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], logical_port=6216baae-337d-44a3-aa38-60c2afb5d13f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:01:53 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:01:53.440 140354 INFO neutron.agent.ovn.metadata.agent [-] Port 6216baae-337d-44a3-aa38-60c2afb5d13f in datapath 43d3be8f-9be1-4892-bbfe-d0ba2d7157ad bound to our chassis
Jan 20 15:01:53 compute-1 NetworkManager[49104]: <info>  [1768921313.4420] device (tap6216baae-33): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 20 15:01:53 compute-1 NetworkManager[49104]: <info>  [1768921313.4437] device (tap6216baae-33): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 20 15:01:53 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:01:53.442 140354 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 43d3be8f-9be1-4892-bbfe-d0ba2d7157ad
Jan 20 15:01:53 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:01:53.454 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[62dc41d6-e889-498a-ac25-a0c5aedb243c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:01:53 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:01:53.454 140354 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap43d3be8f-91 in ovnmeta-43d3be8f-9be1-4892-bbfe-d0ba2d7157ad namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 20 15:01:53 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:01:53.456 229707 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap43d3be8f-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 20 15:01:53 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:01:53.456 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[8ff27907-c7b1-49ae-9871-a0e06379458f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:01:53 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:01:53.457 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[30c16653-92f6-454a-bb43-1e9306e68414]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:01:53 compute-1 systemd-machined[194361]: New machine qemu-73-instance-00000097.
Jan 20 15:01:53 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:01:53.469 140466 DEBUG oslo.privsep.daemon [-] privsep: reply[ccbf268c-2b3b-4387-a301-9ce44c22c0a3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:01:53 compute-1 systemd[1]: Started Virtual Machine qemu-73-instance-00000097.
Jan 20 15:01:53 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:01:53.492 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[cda80882-3d91-4356-b3e7-596bcb6a6bd7]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:01:53 compute-1 nova_compute[225855]: 2026-01-20 15:01:53.502 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:01:53 compute-1 podman[285785]: 2026-01-20 15:01:53.505610173 +0000 UTC m=+0.100210888 container health_status 72be95c12e041eb2cd1a16ad70dfa1391d9f03a5f3180a614ead9459dcfcd46f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, 
org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible)
Jan 20 15:01:53 compute-1 ovn_controller[130490]: 2026-01-20T15:01:53Z|00618|binding|INFO|Setting lport 6216baae-337d-44a3-aa38-60c2afb5d13f ovn-installed in OVS
Jan 20 15:01:53 compute-1 ovn_controller[130490]: 2026-01-20T15:01:53Z|00619|binding|INFO|Setting lport 6216baae-337d-44a3-aa38-60c2afb5d13f up in Southbound
Jan 20 15:01:53 compute-1 nova_compute[225855]: 2026-01-20 15:01:53.507 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:01:53 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:01:53.521 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[333e9fa2-a0bd-4deb-9f9c-e8d1da93c7e2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:01:53 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:01:53.527 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[f616764e-ce11-4c90-8916-30011c814a9e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:01:53 compute-1 NetworkManager[49104]: <info>  [1768921313.5281] manager: (tap43d3be8f-90): new Veth device (/org/freedesktop/NetworkManager/Devices/267)
Jan 20 15:01:53 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:01:53.553 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[a931b3b2-d0af-48ed-aa37-d60b1dd822da]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:01:53 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:01:53.556 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[a6c064ae-e4ef-4791-b977-f776965bda85]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:01:53 compute-1 NetworkManager[49104]: <info>  [1768921313.5774] device (tap43d3be8f-90): carrier: link connected
Jan 20 15:01:53 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:01:53.582 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[6bb4ae05-029a-4287-be41-7ba79d3f0ef0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:01:53 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:01:53.598 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[b9707558-b6ab-4857-b8dc-191ba160e05d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap43d3be8f-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0d:0f:60'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 175], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 635250, 'reachable_time': 42674, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 285839, 'error': None, 'target': 'ovnmeta-43d3be8f-9be1-4892-bbfe-d0ba2d7157ad', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:01:53 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:01:53.613 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[b2dce26e-757e-4827-9cc0-effe7210a464]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe0d:f60'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 635250, 'tstamp': 635250}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 285840, 'error': None, 'target': 'ovnmeta-43d3be8f-9be1-4892-bbfe-d0ba2d7157ad', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:01:53 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:01:53.633 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[e2310d5a-4b73-42b7-821b-67a07e4900f7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap43d3be8f-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0d:0f:60'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 175], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 635250, 'reachable_time': 42674, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 285841, 'error': None, 'target': 'ovnmeta-43d3be8f-9be1-4892-bbfe-d0ba2d7157ad', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:01:53 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:01:53.667 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[ee3676eb-fc98-41f8-b886-db4858a3f3c8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:01:53 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:01:53.726 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[6b22ca45-0f00-48ba-86de-edb90bf97fa4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:01:53 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:01:53.727 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap43d3be8f-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:01:53 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:01:53.727 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 15:01:53 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:01:53.728 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap43d3be8f-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:01:53 compute-1 NetworkManager[49104]: <info>  [1768921313.7317] manager: (tap43d3be8f-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/268)
Jan 20 15:01:53 compute-1 kernel: tap43d3be8f-90: entered promiscuous mode
Jan 20 15:01:53 compute-1 nova_compute[225855]: 2026-01-20 15:01:53.732 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:01:53 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:01:53.733 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap43d3be8f-90, col_values=(('external_ids', {'iface-id': '32afa112-2ec4-4d59-b6eb-a77db2858bd4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:01:53 compute-1 ovn_controller[130490]: 2026-01-20T15:01:53Z|00620|binding|INFO|Releasing lport 32afa112-2ec4-4d59-b6eb-a77db2858bd4 from this chassis (sb_readonly=0)
Jan 20 15:01:53 compute-1 nova_compute[225855]: 2026-01-20 15:01:53.749 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:01:53 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:01:53.751 140354 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/43d3be8f-9be1-4892-bbfe-d0ba2d7157ad.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/43d3be8f-9be1-4892-bbfe-d0ba2d7157ad.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 20 15:01:53 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:01:53.752 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[f07ac62f-6b1e-4e0f-873f-d94a7414a6e5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:01:53 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:01:53.753 140354 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 20 15:01:53 compute-1 ovn_metadata_agent[140349]: global
Jan 20 15:01:53 compute-1 ovn_metadata_agent[140349]:     log         /dev/log local0 debug
Jan 20 15:01:53 compute-1 ovn_metadata_agent[140349]:     log-tag     haproxy-metadata-proxy-43d3be8f-9be1-4892-bbfe-d0ba2d7157ad
Jan 20 15:01:53 compute-1 ovn_metadata_agent[140349]:     user        root
Jan 20 15:01:53 compute-1 ovn_metadata_agent[140349]:     group       root
Jan 20 15:01:53 compute-1 ovn_metadata_agent[140349]:     maxconn     1024
Jan 20 15:01:53 compute-1 ovn_metadata_agent[140349]:     pidfile     /var/lib/neutron/external/pids/43d3be8f-9be1-4892-bbfe-d0ba2d7157ad.pid.haproxy
Jan 20 15:01:53 compute-1 ovn_metadata_agent[140349]:     daemon
Jan 20 15:01:53 compute-1 ovn_metadata_agent[140349]: 
Jan 20 15:01:53 compute-1 ovn_metadata_agent[140349]: defaults
Jan 20 15:01:53 compute-1 ovn_metadata_agent[140349]:     log global
Jan 20 15:01:53 compute-1 ovn_metadata_agent[140349]:     mode http
Jan 20 15:01:53 compute-1 ovn_metadata_agent[140349]:     option httplog
Jan 20 15:01:53 compute-1 ovn_metadata_agent[140349]:     option dontlognull
Jan 20 15:01:53 compute-1 ovn_metadata_agent[140349]:     option http-server-close
Jan 20 15:01:53 compute-1 ovn_metadata_agent[140349]:     option forwardfor
Jan 20 15:01:53 compute-1 ovn_metadata_agent[140349]:     retries                 3
Jan 20 15:01:53 compute-1 ovn_metadata_agent[140349]:     timeout http-request    30s
Jan 20 15:01:53 compute-1 ovn_metadata_agent[140349]:     timeout connect         30s
Jan 20 15:01:53 compute-1 ovn_metadata_agent[140349]:     timeout client          32s
Jan 20 15:01:53 compute-1 ovn_metadata_agent[140349]:     timeout server          32s
Jan 20 15:01:53 compute-1 ovn_metadata_agent[140349]:     timeout http-keep-alive 30s
Jan 20 15:01:53 compute-1 ovn_metadata_agent[140349]: 
Jan 20 15:01:53 compute-1 ovn_metadata_agent[140349]: 
Jan 20 15:01:53 compute-1 ovn_metadata_agent[140349]: listen listener
Jan 20 15:01:53 compute-1 ovn_metadata_agent[140349]:     bind 169.254.169.254:80
Jan 20 15:01:53 compute-1 ovn_metadata_agent[140349]:     server metadata /var/lib/neutron/metadata_proxy
Jan 20 15:01:53 compute-1 ovn_metadata_agent[140349]:     http-request add-header X-OVN-Network-ID 43d3be8f-9be1-4892-bbfe-d0ba2d7157ad
Jan 20 15:01:53 compute-1 ovn_metadata_agent[140349]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 20 15:01:53 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:01:53.755 140354 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-43d3be8f-9be1-4892-bbfe-d0ba2d7157ad', 'env', 'PROCESS_TAG=haproxy-43d3be8f-9be1-4892-bbfe-d0ba2d7157ad', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/43d3be8f-9be1-4892-bbfe-d0ba2d7157ad.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 20 15:01:53 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:01:53 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:01:53 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:01:53.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:01:53 compute-1 nova_compute[225855]: 2026-01-20 15:01:53.856 225859 DEBUG nova.compute.manager [req-d0939939-3372-4503-a492-d19bf543f0f5 req-9e42d97d-b2bc-49f3-948f-0ff2ef99cd8d 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 474cec75-3b01-411a-9074-75859d2a9ddf] Received event network-vif-plugged-244332ba-1b58-4d42-98b0-245f9460c50f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:01:53 compute-1 nova_compute[225855]: 2026-01-20 15:01:53.857 225859 DEBUG oslo_concurrency.lockutils [req-d0939939-3372-4503-a492-d19bf543f0f5 req-9e42d97d-b2bc-49f3-948f-0ff2ef99cd8d 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "474cec75-3b01-411a-9074-75859d2a9ddf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:01:53 compute-1 nova_compute[225855]: 2026-01-20 15:01:53.857 225859 DEBUG oslo_concurrency.lockutils [req-d0939939-3372-4503-a492-d19bf543f0f5 req-9e42d97d-b2bc-49f3-948f-0ff2ef99cd8d 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "474cec75-3b01-411a-9074-75859d2a9ddf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:01:53 compute-1 nova_compute[225855]: 2026-01-20 15:01:53.857 225859 DEBUG oslo_concurrency.lockutils [req-d0939939-3372-4503-a492-d19bf543f0f5 req-9e42d97d-b2bc-49f3-948f-0ff2ef99cd8d 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "474cec75-3b01-411a-9074-75859d2a9ddf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:01:53 compute-1 nova_compute[225855]: 2026-01-20 15:01:53.857 225859 DEBUG nova.compute.manager [req-d0939939-3372-4503-a492-d19bf543f0f5 req-9e42d97d-b2bc-49f3-948f-0ff2ef99cd8d 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 474cec75-3b01-411a-9074-75859d2a9ddf] No waiting events found dispatching network-vif-plugged-244332ba-1b58-4d42-98b0-245f9460c50f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:01:53 compute-1 nova_compute[225855]: 2026-01-20 15:01:53.858 225859 WARNING nova.compute.manager [req-d0939939-3372-4503-a492-d19bf543f0f5 req-9e42d97d-b2bc-49f3-948f-0ff2ef99cd8d 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 474cec75-3b01-411a-9074-75859d2a9ddf] Received unexpected event network-vif-plugged-244332ba-1b58-4d42-98b0-245f9460c50f for instance with vm_state active and task_state None.
Jan 20 15:01:53 compute-1 nova_compute[225855]: 2026-01-20 15:01:53.858 225859 DEBUG nova.compute.manager [req-d0939939-3372-4503-a492-d19bf543f0f5 req-9e42d97d-b2bc-49f3-948f-0ff2ef99cd8d 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 2b9353e4-2bd0-4a4a-b9a1-dd24929a4af1] Received event network-vif-plugged-6216baae-337d-44a3-aa38-60c2afb5d13f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:01:53 compute-1 nova_compute[225855]: 2026-01-20 15:01:53.858 225859 DEBUG oslo_concurrency.lockutils [req-d0939939-3372-4503-a492-d19bf543f0f5 req-9e42d97d-b2bc-49f3-948f-0ff2ef99cd8d 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "2b9353e4-2bd0-4a4a-b9a1-dd24929a4af1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:01:53 compute-1 nova_compute[225855]: 2026-01-20 15:01:53.858 225859 DEBUG oslo_concurrency.lockutils [req-d0939939-3372-4503-a492-d19bf543f0f5 req-9e42d97d-b2bc-49f3-948f-0ff2ef99cd8d 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "2b9353e4-2bd0-4a4a-b9a1-dd24929a4af1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:01:53 compute-1 nova_compute[225855]: 2026-01-20 15:01:53.859 225859 DEBUG oslo_concurrency.lockutils [req-d0939939-3372-4503-a492-d19bf543f0f5 req-9e42d97d-b2bc-49f3-948f-0ff2ef99cd8d 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "2b9353e4-2bd0-4a4a-b9a1-dd24929a4af1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:01:53 compute-1 nova_compute[225855]: 2026-01-20 15:01:53.859 225859 DEBUG nova.compute.manager [req-d0939939-3372-4503-a492-d19bf543f0f5 req-9e42d97d-b2bc-49f3-948f-0ff2ef99cd8d 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 2b9353e4-2bd0-4a4a-b9a1-dd24929a4af1] Processing event network-vif-plugged-6216baae-337d-44a3-aa38-60c2afb5d13f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 20 15:01:54 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/2483513834' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:01:54 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/2483513834' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:01:54 compute-1 podman[285874]: 2026-01-20 15:01:54.11805894 +0000 UTC m=+0.049114656 container create 858464b04d58a5cb3b3a1293894336ed3fa1c40b3e83ce1971c3bfe8ff0e8d70 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-43d3be8f-9be1-4892-bbfe-d0ba2d7157ad, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team)
Jan 20 15:01:54 compute-1 systemd[1]: Started libpod-conmon-858464b04d58a5cb3b3a1293894336ed3fa1c40b3e83ce1971c3bfe8ff0e8d70.scope.
Jan 20 15:01:54 compute-1 systemd[1]: Started libcrun container.
Jan 20 15:01:54 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a768a9de3a5bcf02ee70e03b9ac0d04f5f61e6aa1b57b9c32ed35cc799999e46/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 20 15:01:54 compute-1 podman[285874]: 2026-01-20 15:01:54.09392861 +0000 UTC m=+0.024984346 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 20 15:01:54 compute-1 podman[285874]: 2026-01-20 15:01:54.203942433 +0000 UTC m=+0.134998169 container init 858464b04d58a5cb3b3a1293894336ed3fa1c40b3e83ce1971c3bfe8ff0e8d70 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-43d3be8f-9be1-4892-bbfe-d0ba2d7157ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 20 15:01:54 compute-1 podman[285874]: 2026-01-20 15:01:54.211583619 +0000 UTC m=+0.142639345 container start 858464b04d58a5cb3b3a1293894336ed3fa1c40b3e83ce1971c3bfe8ff0e8d70 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-43d3be8f-9be1-4892-bbfe-d0ba2d7157ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 20 15:01:54 compute-1 neutron-haproxy-ovnmeta-43d3be8f-9be1-4892-bbfe-d0ba2d7157ad[285889]: [NOTICE]   (285893) : New worker (285895) forked
Jan 20 15:01:54 compute-1 neutron-haproxy-ovnmeta-43d3be8f-9be1-4892-bbfe-d0ba2d7157ad[285889]: [NOTICE]   (285893) : Loading success.
Jan 20 15:01:54 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:01:54 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:01:54 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:01:54.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:01:54 compute-1 nova_compute[225855]: 2026-01-20 15:01:54.573 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768921314.5728111, 2b9353e4-2bd0-4a4a-b9a1-dd24929a4af1 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:01:54 compute-1 nova_compute[225855]: 2026-01-20 15:01:54.575 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 2b9353e4-2bd0-4a4a-b9a1-dd24929a4af1] VM Started (Lifecycle Event)
Jan 20 15:01:54 compute-1 nova_compute[225855]: 2026-01-20 15:01:54.578 225859 DEBUG nova.compute.manager [None req-a557ce2c-7da9-48a8-8108-5744558daadb 1654794111844ca88666b3529173e9a7 3a1d679d5c954662a271e842fe2f2c05 - - default default] [instance: 2b9353e4-2bd0-4a4a-b9a1-dd24929a4af1] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 20 15:01:54 compute-1 nova_compute[225855]: 2026-01-20 15:01:54.584 225859 DEBUG nova.virt.libvirt.driver [None req-a557ce2c-7da9-48a8-8108-5744558daadb 1654794111844ca88666b3529173e9a7 3a1d679d5c954662a271e842fe2f2c05 - - default default] [instance: 2b9353e4-2bd0-4a4a-b9a1-dd24929a4af1] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 20 15:01:54 compute-1 nova_compute[225855]: 2026-01-20 15:01:54.588 225859 INFO nova.virt.libvirt.driver [-] [instance: 2b9353e4-2bd0-4a4a-b9a1-dd24929a4af1] Instance spawned successfully.
Jan 20 15:01:54 compute-1 nova_compute[225855]: 2026-01-20 15:01:54.589 225859 DEBUG nova.virt.libvirt.driver [None req-a557ce2c-7da9-48a8-8108-5744558daadb 1654794111844ca88666b3529173e9a7 3a1d679d5c954662a271e842fe2f2c05 - - default default] [instance: 2b9353e4-2bd0-4a4a-b9a1-dd24929a4af1] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 20 15:01:54 compute-1 nova_compute[225855]: 2026-01-20 15:01:54.599 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 2b9353e4-2bd0-4a4a-b9a1-dd24929a4af1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:01:54 compute-1 nova_compute[225855]: 2026-01-20 15:01:54.605 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 2b9353e4-2bd0-4a4a-b9a1-dd24929a4af1] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 15:01:54 compute-1 nova_compute[225855]: 2026-01-20 15:01:54.610 225859 DEBUG nova.virt.libvirt.driver [None req-a557ce2c-7da9-48a8-8108-5744558daadb 1654794111844ca88666b3529173e9a7 3a1d679d5c954662a271e842fe2f2c05 - - default default] [instance: 2b9353e4-2bd0-4a4a-b9a1-dd24929a4af1] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:01:54 compute-1 nova_compute[225855]: 2026-01-20 15:01:54.610 225859 DEBUG nova.virt.libvirt.driver [None req-a557ce2c-7da9-48a8-8108-5744558daadb 1654794111844ca88666b3529173e9a7 3a1d679d5c954662a271e842fe2f2c05 - - default default] [instance: 2b9353e4-2bd0-4a4a-b9a1-dd24929a4af1] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:01:54 compute-1 nova_compute[225855]: 2026-01-20 15:01:54.611 225859 DEBUG nova.virt.libvirt.driver [None req-a557ce2c-7da9-48a8-8108-5744558daadb 1654794111844ca88666b3529173e9a7 3a1d679d5c954662a271e842fe2f2c05 - - default default] [instance: 2b9353e4-2bd0-4a4a-b9a1-dd24929a4af1] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:01:54 compute-1 nova_compute[225855]: 2026-01-20 15:01:54.611 225859 DEBUG nova.virt.libvirt.driver [None req-a557ce2c-7da9-48a8-8108-5744558daadb 1654794111844ca88666b3529173e9a7 3a1d679d5c954662a271e842fe2f2c05 - - default default] [instance: 2b9353e4-2bd0-4a4a-b9a1-dd24929a4af1] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:01:54 compute-1 nova_compute[225855]: 2026-01-20 15:01:54.611 225859 DEBUG nova.virt.libvirt.driver [None req-a557ce2c-7da9-48a8-8108-5744558daadb 1654794111844ca88666b3529173e9a7 3a1d679d5c954662a271e842fe2f2c05 - - default default] [instance: 2b9353e4-2bd0-4a4a-b9a1-dd24929a4af1] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:01:54 compute-1 nova_compute[225855]: 2026-01-20 15:01:54.612 225859 DEBUG nova.virt.libvirt.driver [None req-a557ce2c-7da9-48a8-8108-5744558daadb 1654794111844ca88666b3529173e9a7 3a1d679d5c954662a271e842fe2f2c05 - - default default] [instance: 2b9353e4-2bd0-4a4a-b9a1-dd24929a4af1] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:01:54 compute-1 nova_compute[225855]: 2026-01-20 15:01:54.627 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 2b9353e4-2bd0-4a4a-b9a1-dd24929a4af1] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 15:01:54 compute-1 nova_compute[225855]: 2026-01-20 15:01:54.627 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768921314.5729702, 2b9353e4-2bd0-4a4a-b9a1-dd24929a4af1 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:01:54 compute-1 nova_compute[225855]: 2026-01-20 15:01:54.627 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 2b9353e4-2bd0-4a4a-b9a1-dd24929a4af1] VM Paused (Lifecycle Event)
Jan 20 15:01:54 compute-1 nova_compute[225855]: 2026-01-20 15:01:54.655 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 2b9353e4-2bd0-4a4a-b9a1-dd24929a4af1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:01:54 compute-1 nova_compute[225855]: 2026-01-20 15:01:54.659 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768921314.5828328, 2b9353e4-2bd0-4a4a-b9a1-dd24929a4af1 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:01:54 compute-1 nova_compute[225855]: 2026-01-20 15:01:54.659 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 2b9353e4-2bd0-4a4a-b9a1-dd24929a4af1] VM Resumed (Lifecycle Event)
Jan 20 15:01:54 compute-1 nova_compute[225855]: 2026-01-20 15:01:54.677 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 2b9353e4-2bd0-4a4a-b9a1-dd24929a4af1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:01:54 compute-1 nova_compute[225855]: 2026-01-20 15:01:54.680 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 2b9353e4-2bd0-4a4a-b9a1-dd24929a4af1] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 15:01:54 compute-1 nova_compute[225855]: 2026-01-20 15:01:54.693 225859 INFO nova.compute.manager [None req-a557ce2c-7da9-48a8-8108-5744558daadb 1654794111844ca88666b3529173e9a7 3a1d679d5c954662a271e842fe2f2c05 - - default default] [instance: 2b9353e4-2bd0-4a4a-b9a1-dd24929a4af1] Took 8.61 seconds to spawn the instance on the hypervisor.
Jan 20 15:01:54 compute-1 nova_compute[225855]: 2026-01-20 15:01:54.693 225859 DEBUG nova.compute.manager [None req-a557ce2c-7da9-48a8-8108-5744558daadb 1654794111844ca88666b3529173e9a7 3a1d679d5c954662a271e842fe2f2c05 - - default default] [instance: 2b9353e4-2bd0-4a4a-b9a1-dd24929a4af1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:01:54 compute-1 nova_compute[225855]: 2026-01-20 15:01:54.722 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 2b9353e4-2bd0-4a4a-b9a1-dd24929a4af1] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 15:01:54 compute-1 nova_compute[225855]: 2026-01-20 15:01:54.785 225859 INFO nova.compute.manager [None req-a557ce2c-7da9-48a8-8108-5744558daadb 1654794111844ca88666b3529173e9a7 3a1d679d5c954662a271e842fe2f2c05 - - default default] [instance: 2b9353e4-2bd0-4a4a-b9a1-dd24929a4af1] Took 9.84 seconds to build instance.
Jan 20 15:01:54 compute-1 nova_compute[225855]: 2026-01-20 15:01:54.813 225859 DEBUG oslo_concurrency.lockutils [None req-a557ce2c-7da9-48a8-8108-5744558daadb 1654794111844ca88666b3529173e9a7 3a1d679d5c954662a271e842fe2f2c05 - - default default] Lock "2b9353e4-2bd0-4a4a-b9a1-dd24929a4af1" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.924s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:01:54 compute-1 nova_compute[225855]: 2026-01-20 15:01:54.911 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:01:55 compute-1 ceph-mon[81775]: pgmap v2307: 321 pgs: 321 active+clean; 295 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 823 KiB/s rd, 4.7 MiB/s wr, 132 op/s
Jan 20 15:01:55 compute-1 sudo[285947]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:01:55 compute-1 sudo[285947]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:01:55 compute-1 sudo[285947]: pam_unix(sudo:session): session closed for user root
Jan 20 15:01:55 compute-1 sudo[285972]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:01:55 compute-1 sudo[285972]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:01:55 compute-1 sudo[285972]: pam_unix(sudo:session): session closed for user root
Jan 20 15:01:55 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:01:55 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:01:55 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:01:55.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:01:56 compute-1 nova_compute[225855]: 2026-01-20 15:01:56.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:01:56 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:01:56 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:01:56 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:01:56.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:01:56 compute-1 nova_compute[225855]: 2026-01-20 15:01:56.720 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:01:56 compute-1 nova_compute[225855]: 2026-01-20 15:01:56.778 225859 DEBUG nova.compute.manager [req-a2d79d48-0e15-4bd9-b3b1-3e6e70fbfb25 req-87cc4941-192f-4471-b3cc-949d4b17a459 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 2b9353e4-2bd0-4a4a-b9a1-dd24929a4af1] Received event network-vif-plugged-6216baae-337d-44a3-aa38-60c2afb5d13f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:01:56 compute-1 nova_compute[225855]: 2026-01-20 15:01:56.778 225859 DEBUG oslo_concurrency.lockutils [req-a2d79d48-0e15-4bd9-b3b1-3e6e70fbfb25 req-87cc4941-192f-4471-b3cc-949d4b17a459 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "2b9353e4-2bd0-4a4a-b9a1-dd24929a4af1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:01:56 compute-1 nova_compute[225855]: 2026-01-20 15:01:56.779 225859 DEBUG oslo_concurrency.lockutils [req-a2d79d48-0e15-4bd9-b3b1-3e6e70fbfb25 req-87cc4941-192f-4471-b3cc-949d4b17a459 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "2b9353e4-2bd0-4a4a-b9a1-dd24929a4af1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:01:56 compute-1 nova_compute[225855]: 2026-01-20 15:01:56.779 225859 DEBUG oslo_concurrency.lockutils [req-a2d79d48-0e15-4bd9-b3b1-3e6e70fbfb25 req-87cc4941-192f-4471-b3cc-949d4b17a459 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "2b9353e4-2bd0-4a4a-b9a1-dd24929a4af1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:01:56 compute-1 nova_compute[225855]: 2026-01-20 15:01:56.779 225859 DEBUG nova.compute.manager [req-a2d79d48-0e15-4bd9-b3b1-3e6e70fbfb25 req-87cc4941-192f-4471-b3cc-949d4b17a459 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 2b9353e4-2bd0-4a4a-b9a1-dd24929a4af1] No waiting events found dispatching network-vif-plugged-6216baae-337d-44a3-aa38-60c2afb5d13f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:01:56 compute-1 nova_compute[225855]: 2026-01-20 15:01:56.780 225859 WARNING nova.compute.manager [req-a2d79d48-0e15-4bd9-b3b1-3e6e70fbfb25 req-87cc4941-192f-4471-b3cc-949d4b17a459 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 2b9353e4-2bd0-4a4a-b9a1-dd24929a4af1] Received unexpected event network-vif-plugged-6216baae-337d-44a3-aa38-60c2afb5d13f for instance with vm_state active and task_state None.
Jan 20 15:01:56 compute-1 nova_compute[225855]: 2026-01-20 15:01:56.856 225859 INFO nova.compute.manager [None req-615f4080-5b92-4f81-a031-8b89a6db13e8 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: 474cec75-3b01-411a-9074-75859d2a9ddf] Rescuing
Jan 20 15:01:56 compute-1 nova_compute[225855]: 2026-01-20 15:01:56.857 225859 DEBUG oslo_concurrency.lockutils [None req-615f4080-5b92-4f81-a031-8b89a6db13e8 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Acquiring lock "refresh_cache-474cec75-3b01-411a-9074-75859d2a9ddf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:01:56 compute-1 nova_compute[225855]: 2026-01-20 15:01:56.858 225859 DEBUG oslo_concurrency.lockutils [None req-615f4080-5b92-4f81-a031-8b89a6db13e8 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Acquired lock "refresh_cache-474cec75-3b01-411a-9074-75859d2a9ddf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:01:56 compute-1 nova_compute[225855]: 2026-01-20 15:01:56.858 225859 DEBUG nova.network.neutron [None req-615f4080-5b92-4f81-a031-8b89a6db13e8 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: 474cec75-3b01-411a-9074-75859d2a9ddf] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 15:01:56 compute-1 nova_compute[225855]: 2026-01-20 15:01:56.867 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:01:57 compute-1 ceph-mon[81775]: pgmap v2308: 321 pgs: 321 active+clean; 238 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 819 KiB/s rd, 2.5 MiB/s wr, 150 op/s
Jan 20 15:01:57 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:01:57 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:01:57 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:01:57.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:01:57 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e333 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:01:58 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/2765111203' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:01:58 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:01:58 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:01:58 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:01:58.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:01:58 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:01:58.732 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5ffd4ac3-9266-4927-98ad-20a17782c725, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '47'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:01:59 compute-1 ceph-mon[81775]: pgmap v2309: 321 pgs: 321 active+clean; 238 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 466 KiB/s rd, 1.8 MiB/s wr, 108 op/s
Jan 20 15:01:59 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:01:59 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:01:59 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:01:59.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:02:00 compute-1 nova_compute[225855]: 2026-01-20 15:02:00.041 225859 DEBUG nova.network.neutron [None req-615f4080-5b92-4f81-a031-8b89a6db13e8 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: 474cec75-3b01-411a-9074-75859d2a9ddf] Updating instance_info_cache with network_info: [{"id": "244332ba-1b58-4d42-98b0-245f9460c50f", "address": "fa:16:3e:6f:36:24", "network": {"id": "671e28d0-0b9e-41e0-b5e0-db1ccd4717ec", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-884777184-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "63555e5851564db08c6429231d264f2c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap244332ba-1b", "ovs_interfaceid": "244332ba-1b58-4d42-98b0-245f9460c50f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:02:00 compute-1 nova_compute[225855]: 2026-01-20 15:02:00.063 225859 DEBUG oslo_concurrency.lockutils [None req-615f4080-5b92-4f81-a031-8b89a6db13e8 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Releasing lock "refresh_cache-474cec75-3b01-411a-9074-75859d2a9ddf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:02:00 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:02:00 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:02:00 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:02:00.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:02:00 compute-1 nova_compute[225855]: 2026-01-20 15:02:00.418 225859 DEBUG nova.virt.libvirt.driver [None req-615f4080-5b92-4f81-a031-8b89a6db13e8 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: 474cec75-3b01-411a-9074-75859d2a9ddf] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Jan 20 15:02:01 compute-1 ceph-mon[81775]: pgmap v2310: 321 pgs: 321 active+clean; 214 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 1.8 MiB/s wr, 156 op/s
Jan 20 15:02:01 compute-1 nova_compute[225855]: 2026-01-20 15:02:01.769 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:02:01 compute-1 nova_compute[225855]: 2026-01-20 15:02:01.772 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:02:01 compute-1 NetworkManager[49104]: <info>  [1768921321.7731] manager: (patch-provnet-b62c391b-f7a3-4a38-a0df-72ac0383ca74-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/269)
Jan 20 15:02:01 compute-1 NetworkManager[49104]: <info>  [1768921321.7740] manager: (patch-br-int-to-provnet-b62c391b-f7a3-4a38-a0df-72ac0383ca74): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/270)
Jan 20 15:02:01 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:02:01 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:02:01 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:02:01.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:02:01 compute-1 nova_compute[225855]: 2026-01-20 15:02:01.921 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:02:01 compute-1 nova_compute[225855]: 2026-01-20 15:02:01.923 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:02:01 compute-1 ovn_controller[130490]: 2026-01-20T15:02:01Z|00621|binding|INFO|Releasing lport 32afa112-2ec4-4d59-b6eb-a77db2858bd4 from this chassis (sb_readonly=0)
Jan 20 15:02:01 compute-1 ovn_controller[130490]: 2026-01-20T15:02:01Z|00622|binding|INFO|Releasing lport a8628d9e-196f-4b84-89fd-d3a41792b8a0 from this chassis (sb_readonly=0)
Jan 20 15:02:01 compute-1 nova_compute[225855]: 2026-01-20 15:02:01.947 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:02:02 compute-1 nova_compute[225855]: 2026-01-20 15:02:02.137 225859 DEBUG nova.compute.manager [req-313b9c23-99a1-49e6-8300-b14e5e036c76 req-8f48e61e-58ab-4153-b2e8-67936b63ae4c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 2b9353e4-2bd0-4a4a-b9a1-dd24929a4af1] Received event network-changed-6216baae-337d-44a3-aa38-60c2afb5d13f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:02:02 compute-1 nova_compute[225855]: 2026-01-20 15:02:02.137 225859 DEBUG nova.compute.manager [req-313b9c23-99a1-49e6-8300-b14e5e036c76 req-8f48e61e-58ab-4153-b2e8-67936b63ae4c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 2b9353e4-2bd0-4a4a-b9a1-dd24929a4af1] Refreshing instance network info cache due to event network-changed-6216baae-337d-44a3-aa38-60c2afb5d13f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 15:02:02 compute-1 nova_compute[225855]: 2026-01-20 15:02:02.137 225859 DEBUG oslo_concurrency.lockutils [req-313b9c23-99a1-49e6-8300-b14e5e036c76 req-8f48e61e-58ab-4153-b2e8-67936b63ae4c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-2b9353e4-2bd0-4a4a-b9a1-dd24929a4af1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:02:02 compute-1 nova_compute[225855]: 2026-01-20 15:02:02.137 225859 DEBUG oslo_concurrency.lockutils [req-313b9c23-99a1-49e6-8300-b14e5e036c76 req-8f48e61e-58ab-4153-b2e8-67936b63ae4c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-2b9353e4-2bd0-4a4a-b9a1-dd24929a4af1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:02:02 compute-1 nova_compute[225855]: 2026-01-20 15:02:02.138 225859 DEBUG nova.network.neutron [req-313b9c23-99a1-49e6-8300-b14e5e036c76 req-8f48e61e-58ab-4153-b2e8-67936b63ae4c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 2b9353e4-2bd0-4a4a-b9a1-dd24929a4af1] Refreshing network info cache for port 6216baae-337d-44a3-aa38-60c2afb5d13f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 15:02:02 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:02:02 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:02:02 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:02:02.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:02:02 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e333 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:02:03 compute-1 ceph-mon[81775]: pgmap v2311: 321 pgs: 321 active+clean; 214 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 844 KiB/s wr, 140 op/s
Jan 20 15:02:03 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/3880140080' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:02:03 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:02:03 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:02:03 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:02:03.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:02:04 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:02:04 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:02:04 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:02:04.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:02:04 compute-1 nova_compute[225855]: 2026-01-20 15:02:04.515 225859 DEBUG nova.network.neutron [req-313b9c23-99a1-49e6-8300-b14e5e036c76 req-8f48e61e-58ab-4153-b2e8-67936b63ae4c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 2b9353e4-2bd0-4a4a-b9a1-dd24929a4af1] Updated VIF entry in instance network info cache for port 6216baae-337d-44a3-aa38-60c2afb5d13f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 15:02:04 compute-1 nova_compute[225855]: 2026-01-20 15:02:04.516 225859 DEBUG nova.network.neutron [req-313b9c23-99a1-49e6-8300-b14e5e036c76 req-8f48e61e-58ab-4153-b2e8-67936b63ae4c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 2b9353e4-2bd0-4a4a-b9a1-dd24929a4af1] Updating instance_info_cache with network_info: [{"id": "6216baae-337d-44a3-aa38-60c2afb5d13f", "address": "fa:16:3e:87:b9:ea", "network": {"id": "43d3be8f-9be1-4892-bbfe-d0ba2d7157ad", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1740636070-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.181", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3a1d679d5c954662a271e842fe2f2c05", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6216baae-33", "ovs_interfaceid": "6216baae-337d-44a3-aa38-60c2afb5d13f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:02:04 compute-1 ovn_controller[130490]: 2026-01-20T15:02:04Z|00623|binding|INFO|Releasing lport 32afa112-2ec4-4d59-b6eb-a77db2858bd4 from this chassis (sb_readonly=0)
Jan 20 15:02:04 compute-1 ovn_controller[130490]: 2026-01-20T15:02:04Z|00624|binding|INFO|Releasing lport a8628d9e-196f-4b84-89fd-d3a41792b8a0 from this chassis (sb_readonly=0)
Jan 20 15:02:04 compute-1 nova_compute[225855]: 2026-01-20 15:02:04.565 225859 DEBUG oslo_concurrency.lockutils [req-313b9c23-99a1-49e6-8300-b14e5e036c76 req-8f48e61e-58ab-4153-b2e8-67936b63ae4c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-2b9353e4-2bd0-4a4a-b9a1-dd24929a4af1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:02:04 compute-1 nova_compute[225855]: 2026-01-20 15:02:04.567 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:02:05 compute-1 ceph-mon[81775]: pgmap v2312: 321 pgs: 321 active+clean; 225 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 506 KiB/s wr, 150 op/s
Jan 20 15:02:05 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/1790087133' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:02:05 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:02:05 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:02:05 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:02:05.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:02:06 compute-1 podman[286004]: 2026-01-20 15:02:06.016702402 +0000 UTC m=+0.053405957 container health_status 533cd8e0c3b8b7f910f15cef49e7f879ab46c12ce646a35628c15f13725eb500 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Jan 20 15:02:06 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:02:06 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:02:06 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:02:06.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:02:06 compute-1 nova_compute[225855]: 2026-01-20 15:02:06.774 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:02:06 compute-1 nova_compute[225855]: 2026-01-20 15:02:06.925 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:02:06 compute-1 nova_compute[225855]: 2026-01-20 15:02:06.992 225859 DEBUG oslo_concurrency.lockutils [None req-4994abe5-feb5-4df3-8b7a-ac887128508a 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] Acquiring lock "0a74eb9c-7f01-437d-a0c8-c01696fc8f9d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:02:06 compute-1 nova_compute[225855]: 2026-01-20 15:02:06.992 225859 DEBUG oslo_concurrency.lockutils [None req-4994abe5-feb5-4df3-8b7a-ac887128508a 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] Lock "0a74eb9c-7f01-437d-a0c8-c01696fc8f9d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:02:07 compute-1 nova_compute[225855]: 2026-01-20 15:02:07.009 225859 DEBUG nova.compute.manager [None req-4994abe5-feb5-4df3-8b7a-ac887128508a 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] [instance: 0a74eb9c-7f01-437d-a0c8-c01696fc8f9d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 20 15:02:07 compute-1 nova_compute[225855]: 2026-01-20 15:02:07.072 225859 DEBUG oslo_concurrency.lockutils [None req-4994abe5-feb5-4df3-8b7a-ac887128508a 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:02:07 compute-1 nova_compute[225855]: 2026-01-20 15:02:07.073 225859 DEBUG oslo_concurrency.lockutils [None req-4994abe5-feb5-4df3-8b7a-ac887128508a 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:02:07 compute-1 nova_compute[225855]: 2026-01-20 15:02:07.087 225859 DEBUG nova.virt.hardware [None req-4994abe5-feb5-4df3-8b7a-ac887128508a 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 20 15:02:07 compute-1 nova_compute[225855]: 2026-01-20 15:02:07.088 225859 INFO nova.compute.claims [None req-4994abe5-feb5-4df3-8b7a-ac887128508a 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] [instance: 0a74eb9c-7f01-437d-a0c8-c01696fc8f9d] Claim successful on node compute-1.ctlplane.example.com
Jan 20 15:02:07 compute-1 nova_compute[225855]: 2026-01-20 15:02:07.237 225859 DEBUG oslo_concurrency.processutils [None req-4994abe5-feb5-4df3-8b7a-ac887128508a 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:02:07 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:02:07 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2601699422' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:02:07 compute-1 nova_compute[225855]: 2026-01-20 15:02:07.670 225859 DEBUG oslo_concurrency.processutils [None req-4994abe5-feb5-4df3-8b7a-ac887128508a 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:02:07 compute-1 nova_compute[225855]: 2026-01-20 15:02:07.676 225859 DEBUG nova.compute.provider_tree [None req-4994abe5-feb5-4df3-8b7a-ac887128508a 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:02:07 compute-1 nova_compute[225855]: 2026-01-20 15:02:07.695 225859 DEBUG nova.scheduler.client.report [None req-4994abe5-feb5-4df3-8b7a-ac887128508a 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:02:07 compute-1 nova_compute[225855]: 2026-01-20 15:02:07.725 225859 DEBUG oslo_concurrency.lockutils [None req-4994abe5-feb5-4df3-8b7a-ac887128508a 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.652s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:02:07 compute-1 nova_compute[225855]: 2026-01-20 15:02:07.726 225859 DEBUG nova.compute.manager [None req-4994abe5-feb5-4df3-8b7a-ac887128508a 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] [instance: 0a74eb9c-7f01-437d-a0c8-c01696fc8f9d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 20 15:02:07 compute-1 nova_compute[225855]: 2026-01-20 15:02:07.790 225859 DEBUG nova.compute.manager [None req-4994abe5-feb5-4df3-8b7a-ac887128508a 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] [instance: 0a74eb9c-7f01-437d-a0c8-c01696fc8f9d] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 20 15:02:07 compute-1 nova_compute[225855]: 2026-01-20 15:02:07.790 225859 DEBUG nova.network.neutron [None req-4994abe5-feb5-4df3-8b7a-ac887128508a 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] [instance: 0a74eb9c-7f01-437d-a0c8-c01696fc8f9d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 20 15:02:07 compute-1 nova_compute[225855]: 2026-01-20 15:02:07.820 225859 INFO nova.virt.libvirt.driver [None req-4994abe5-feb5-4df3-8b7a-ac887128508a 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] [instance: 0a74eb9c-7f01-437d-a0c8-c01696fc8f9d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 20 15:02:07 compute-1 nova_compute[225855]: 2026-01-20 15:02:07.841 225859 DEBUG nova.compute.manager [None req-4994abe5-feb5-4df3-8b7a-ac887128508a 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] [instance: 0a74eb9c-7f01-437d-a0c8-c01696fc8f9d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 20 15:02:07 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:02:07 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:02:07 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:02:07.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:02:07 compute-1 ceph-mon[81775]: pgmap v2313: 321 pgs: 321 active+clean; 260 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 160 op/s
Jan 20 15:02:07 compute-1 nova_compute[225855]: 2026-01-20 15:02:07.929 225859 DEBUG nova.compute.manager [None req-4994abe5-feb5-4df3-8b7a-ac887128508a 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] [instance: 0a74eb9c-7f01-437d-a0c8-c01696fc8f9d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 20 15:02:07 compute-1 nova_compute[225855]: 2026-01-20 15:02:07.930 225859 DEBUG nova.virt.libvirt.driver [None req-4994abe5-feb5-4df3-8b7a-ac887128508a 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] [instance: 0a74eb9c-7f01-437d-a0c8-c01696fc8f9d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 20 15:02:07 compute-1 nova_compute[225855]: 2026-01-20 15:02:07.931 225859 INFO nova.virt.libvirt.driver [None req-4994abe5-feb5-4df3-8b7a-ac887128508a 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] [instance: 0a74eb9c-7f01-437d-a0c8-c01696fc8f9d] Creating image(s)
Jan 20 15:02:07 compute-1 nova_compute[225855]: 2026-01-20 15:02:07.960 225859 DEBUG nova.storage.rbd_utils [None req-4994abe5-feb5-4df3-8b7a-ac887128508a 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] rbd image 0a74eb9c-7f01-437d-a0c8-c01696fc8f9d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:02:07 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e333 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:02:07 compute-1 nova_compute[225855]: 2026-01-20 15:02:07.988 225859 DEBUG nova.storage.rbd_utils [None req-4994abe5-feb5-4df3-8b7a-ac887128508a 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] rbd image 0a74eb9c-7f01-437d-a0c8-c01696fc8f9d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:02:08 compute-1 nova_compute[225855]: 2026-01-20 15:02:08.013 225859 DEBUG nova.storage.rbd_utils [None req-4994abe5-feb5-4df3-8b7a-ac887128508a 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] rbd image 0a74eb9c-7f01-437d-a0c8-c01696fc8f9d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:02:08 compute-1 nova_compute[225855]: 2026-01-20 15:02:08.017 225859 DEBUG oslo_concurrency.processutils [None req-4994abe5-feb5-4df3-8b7a-ac887128508a 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:02:08 compute-1 nova_compute[225855]: 2026-01-20 15:02:08.089 225859 DEBUG oslo_concurrency.processutils [None req-4994abe5-feb5-4df3-8b7a-ac887128508a 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:02:08 compute-1 nova_compute[225855]: 2026-01-20 15:02:08.091 225859 DEBUG oslo_concurrency.lockutils [None req-4994abe5-feb5-4df3-8b7a-ac887128508a 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] Acquiring lock "82d5c1918fd7c974214c7a48c1793a7a82560462" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:02:08 compute-1 nova_compute[225855]: 2026-01-20 15:02:08.092 225859 DEBUG oslo_concurrency.lockutils [None req-4994abe5-feb5-4df3-8b7a-ac887128508a 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:02:08 compute-1 nova_compute[225855]: 2026-01-20 15:02:08.092 225859 DEBUG oslo_concurrency.lockutils [None req-4994abe5-feb5-4df3-8b7a-ac887128508a 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:02:08 compute-1 nova_compute[225855]: 2026-01-20 15:02:08.122 225859 DEBUG nova.storage.rbd_utils [None req-4994abe5-feb5-4df3-8b7a-ac887128508a 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] rbd image 0a74eb9c-7f01-437d-a0c8-c01696fc8f9d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:02:08 compute-1 nova_compute[225855]: 2026-01-20 15:02:08.126 225859 DEBUG oslo_concurrency.processutils [None req-4994abe5-feb5-4df3-8b7a-ac887128508a 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 0a74eb9c-7f01-437d-a0c8-c01696fc8f9d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:02:08 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:02:08 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:02:08 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:02:08.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:02:08 compute-1 nova_compute[225855]: 2026-01-20 15:02:08.697 225859 DEBUG oslo_concurrency.processutils [None req-4994abe5-feb5-4df3-8b7a-ac887128508a 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 0a74eb9c-7f01-437d-a0c8-c01696fc8f9d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.571s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:02:08 compute-1 nova_compute[225855]: 2026-01-20 15:02:08.760 225859 DEBUG nova.storage.rbd_utils [None req-4994abe5-feb5-4df3-8b7a-ac887128508a 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] resizing rbd image 0a74eb9c-7f01-437d-a0c8-c01696fc8f9d_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 20 15:02:08 compute-1 nova_compute[225855]: 2026-01-20 15:02:08.864 225859 DEBUG nova.objects.instance [None req-4994abe5-feb5-4df3-8b7a-ac887128508a 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] Lazy-loading 'migration_context' on Instance uuid 0a74eb9c-7f01-437d-a0c8-c01696fc8f9d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:02:08 compute-1 nova_compute[225855]: 2026-01-20 15:02:08.879 225859 DEBUG nova.virt.libvirt.driver [None req-4994abe5-feb5-4df3-8b7a-ac887128508a 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] [instance: 0a74eb9c-7f01-437d-a0c8-c01696fc8f9d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 20 15:02:08 compute-1 nova_compute[225855]: 2026-01-20 15:02:08.880 225859 DEBUG nova.virt.libvirt.driver [None req-4994abe5-feb5-4df3-8b7a-ac887128508a 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] [instance: 0a74eb9c-7f01-437d-a0c8-c01696fc8f9d] Ensure instance console log exists: /var/lib/nova/instances/0a74eb9c-7f01-437d-a0c8-c01696fc8f9d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 20 15:02:08 compute-1 nova_compute[225855]: 2026-01-20 15:02:08.880 225859 DEBUG oslo_concurrency.lockutils [None req-4994abe5-feb5-4df3-8b7a-ac887128508a 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:02:08 compute-1 nova_compute[225855]: 2026-01-20 15:02:08.881 225859 DEBUG oslo_concurrency.lockutils [None req-4994abe5-feb5-4df3-8b7a-ac887128508a 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:02:08 compute-1 nova_compute[225855]: 2026-01-20 15:02:08.881 225859 DEBUG oslo_concurrency.lockutils [None req-4994abe5-feb5-4df3-8b7a-ac887128508a 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:02:08 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/2601699422' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:02:08 compute-1 ceph-mon[81775]: pgmap v2314: 321 pgs: 321 active+clean; 260 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 1.8 MiB/s wr, 86 op/s
Jan 20 15:02:08 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/2172022102' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:02:08 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/2349293346' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:02:09 compute-1 ovn_controller[130490]: 2026-01-20T15:02:09Z|00074|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:87:b9:ea 10.100.0.12
Jan 20 15:02:09 compute-1 ovn_controller[130490]: 2026-01-20T15:02:09Z|00075|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:87:b9:ea 10.100.0.12
Jan 20 15:02:09 compute-1 nova_compute[225855]: 2026-01-20 15:02:09.715 225859 DEBUG nova.network.neutron [None req-4994abe5-feb5-4df3-8b7a-ac887128508a 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] [instance: 0a74eb9c-7f01-437d-a0c8-c01696fc8f9d] Successfully created port: d9897519-3517-45da-be53-d342192fa380 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 20 15:02:09 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:02:09 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:02:09 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:02:09.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:02:10 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:02:10 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:02:10 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:02:10.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:02:10 compute-1 nova_compute[225855]: 2026-01-20 15:02:10.462 225859 DEBUG nova.virt.libvirt.driver [None req-615f4080-5b92-4f81-a031-8b89a6db13e8 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: 474cec75-3b01-411a-9074-75859d2a9ddf] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Jan 20 15:02:10 compute-1 nova_compute[225855]: 2026-01-20 15:02:10.723 225859 DEBUG nova.network.neutron [None req-4994abe5-feb5-4df3-8b7a-ac887128508a 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] [instance: 0a74eb9c-7f01-437d-a0c8-c01696fc8f9d] Successfully updated port: d9897519-3517-45da-be53-d342192fa380 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 20 15:02:10 compute-1 nova_compute[225855]: 2026-01-20 15:02:10.742 225859 DEBUG oslo_concurrency.lockutils [None req-4994abe5-feb5-4df3-8b7a-ac887128508a 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] Acquiring lock "refresh_cache-0a74eb9c-7f01-437d-a0c8-c01696fc8f9d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:02:10 compute-1 nova_compute[225855]: 2026-01-20 15:02:10.743 225859 DEBUG oslo_concurrency.lockutils [None req-4994abe5-feb5-4df3-8b7a-ac887128508a 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] Acquired lock "refresh_cache-0a74eb9c-7f01-437d-a0c8-c01696fc8f9d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:02:10 compute-1 nova_compute[225855]: 2026-01-20 15:02:10.743 225859 DEBUG nova.network.neutron [None req-4994abe5-feb5-4df3-8b7a-ac887128508a 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] [instance: 0a74eb9c-7f01-437d-a0c8-c01696fc8f9d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 15:02:10 compute-1 nova_compute[225855]: 2026-01-20 15:02:10.871 225859 DEBUG nova.compute.manager [req-4004eee4-5a31-423e-8a25-56d4197964f0 req-8aacd456-145e-49ed-9d29-a40ed8f1ce23 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 0a74eb9c-7f01-437d-a0c8-c01696fc8f9d] Received event network-changed-d9897519-3517-45da-be53-d342192fa380 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:02:10 compute-1 nova_compute[225855]: 2026-01-20 15:02:10.871 225859 DEBUG nova.compute.manager [req-4004eee4-5a31-423e-8a25-56d4197964f0 req-8aacd456-145e-49ed-9d29-a40ed8f1ce23 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 0a74eb9c-7f01-437d-a0c8-c01696fc8f9d] Refreshing instance network info cache due to event network-changed-d9897519-3517-45da-be53-d342192fa380. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 15:02:10 compute-1 nova_compute[225855]: 2026-01-20 15:02:10.872 225859 DEBUG oslo_concurrency.lockutils [req-4004eee4-5a31-423e-8a25-56d4197964f0 req-8aacd456-145e-49ed-9d29-a40ed8f1ce23 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-0a74eb9c-7f01-437d-a0c8-c01696fc8f9d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:02:11 compute-1 ceph-mon[81775]: pgmap v2315: 321 pgs: 321 active+clean; 336 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 5.0 MiB/s wr, 177 op/s
Jan 20 15:02:11 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/1448665557' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:02:11 compute-1 nova_compute[225855]: 2026-01-20 15:02:11.572 225859 DEBUG nova.network.neutron [None req-4994abe5-feb5-4df3-8b7a-ac887128508a 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] [instance: 0a74eb9c-7f01-437d-a0c8-c01696fc8f9d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 20 15:02:11 compute-1 nova_compute[225855]: 2026-01-20 15:02:11.778 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:02:11 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:02:11 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:02:11 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:02:11.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:02:11 compute-1 nova_compute[225855]: 2026-01-20 15:02:11.982 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:02:12 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:02:12 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:02:12 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:02:12.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:02:12 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/3771773152' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:02:12 compute-1 ceph-mon[81775]: pgmap v2316: 321 pgs: 321 active+clean; 371 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 748 KiB/s rd, 6.6 MiB/s wr, 155 op/s
Jan 20 15:02:12 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e333 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:02:13 compute-1 nova_compute[225855]: 2026-01-20 15:02:13.379 225859 DEBUG nova.network.neutron [None req-4994abe5-feb5-4df3-8b7a-ac887128508a 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] [instance: 0a74eb9c-7f01-437d-a0c8-c01696fc8f9d] Updating instance_info_cache with network_info: [{"id": "d9897519-3517-45da-be53-d342192fa380", "address": "fa:16:3e:7a:5f:fd", "network": {"id": "0296a21f-6ec4-43a7-8731-1d3692a5de4a", "bridge": "br-int", "label": "tempest-TestServerMultinode-1878354210-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "908b5ba217ab458e8c9aa0e5a471c194", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd9897519-35", "ovs_interfaceid": "d9897519-3517-45da-be53-d342192fa380", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:02:13 compute-1 nova_compute[225855]: 2026-01-20 15:02:13.411 225859 DEBUG oslo_concurrency.lockutils [None req-4994abe5-feb5-4df3-8b7a-ac887128508a 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] Releasing lock "refresh_cache-0a74eb9c-7f01-437d-a0c8-c01696fc8f9d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:02:13 compute-1 nova_compute[225855]: 2026-01-20 15:02:13.411 225859 DEBUG nova.compute.manager [None req-4994abe5-feb5-4df3-8b7a-ac887128508a 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] [instance: 0a74eb9c-7f01-437d-a0c8-c01696fc8f9d] Instance network_info: |[{"id": "d9897519-3517-45da-be53-d342192fa380", "address": "fa:16:3e:7a:5f:fd", "network": {"id": "0296a21f-6ec4-43a7-8731-1d3692a5de4a", "bridge": "br-int", "label": "tempest-TestServerMultinode-1878354210-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "908b5ba217ab458e8c9aa0e5a471c194", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd9897519-35", "ovs_interfaceid": "d9897519-3517-45da-be53-d342192fa380", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 20 15:02:13 compute-1 nova_compute[225855]: 2026-01-20 15:02:13.412 225859 DEBUG oslo_concurrency.lockutils [req-4004eee4-5a31-423e-8a25-56d4197964f0 req-8aacd456-145e-49ed-9d29-a40ed8f1ce23 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-0a74eb9c-7f01-437d-a0c8-c01696fc8f9d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:02:13 compute-1 nova_compute[225855]: 2026-01-20 15:02:13.412 225859 DEBUG nova.network.neutron [req-4004eee4-5a31-423e-8a25-56d4197964f0 req-8aacd456-145e-49ed-9d29-a40ed8f1ce23 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 0a74eb9c-7f01-437d-a0c8-c01696fc8f9d] Refreshing network info cache for port d9897519-3517-45da-be53-d342192fa380 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 15:02:13 compute-1 nova_compute[225855]: 2026-01-20 15:02:13.415 225859 DEBUG nova.virt.libvirt.driver [None req-4994abe5-feb5-4df3-8b7a-ac887128508a 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] [instance: 0a74eb9c-7f01-437d-a0c8-c01696fc8f9d] Start _get_guest_xml network_info=[{"id": "d9897519-3517-45da-be53-d342192fa380", "address": "fa:16:3e:7a:5f:fd", "network": {"id": "0296a21f-6ec4-43a7-8731-1d3692a5de4a", "bridge": "br-int", "label": "tempest-TestServerMultinode-1878354210-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "908b5ba217ab458e8c9aa0e5a471c194", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd9897519-35", "ovs_interfaceid": "d9897519-3517-45da-be53-d342192fa380", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'device_type': 'disk', 'encryption_options': None, 'size': 0, 'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'guest_format': None, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'image_id': 'a32b3e07-16d8-46fd-9a7b-c242c432fcf9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 20 15:02:13 compute-1 nova_compute[225855]: 2026-01-20 15:02:13.420 225859 WARNING nova.virt.libvirt.driver [None req-4994abe5-feb5-4df3-8b7a-ac887128508a 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 15:02:13 compute-1 nova_compute[225855]: 2026-01-20 15:02:13.426 225859 DEBUG nova.virt.libvirt.host [None req-4994abe5-feb5-4df3-8b7a-ac887128508a 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 20 15:02:13 compute-1 nova_compute[225855]: 2026-01-20 15:02:13.427 225859 DEBUG nova.virt.libvirt.host [None req-4994abe5-feb5-4df3-8b7a-ac887128508a 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 20 15:02:13 compute-1 nova_compute[225855]: 2026-01-20 15:02:13.430 225859 DEBUG nova.virt.libvirt.host [None req-4994abe5-feb5-4df3-8b7a-ac887128508a 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 20 15:02:13 compute-1 nova_compute[225855]: 2026-01-20 15:02:13.431 225859 DEBUG nova.virt.libvirt.host [None req-4994abe5-feb5-4df3-8b7a-ac887128508a 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 20 15:02:13 compute-1 nova_compute[225855]: 2026-01-20 15:02:13.432 225859 DEBUG nova.virt.libvirt.driver [None req-4994abe5-feb5-4df3-8b7a-ac887128508a 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 20 15:02:13 compute-1 nova_compute[225855]: 2026-01-20 15:02:13.432 225859 DEBUG nova.virt.hardware [None req-4994abe5-feb5-4df3-8b7a-ac887128508a 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-20T14:21:55Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='522deaab-a741-4dbb-932d-d8b13a211c33',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 20 15:02:13 compute-1 nova_compute[225855]: 2026-01-20 15:02:13.432 225859 DEBUG nova.virt.hardware [None req-4994abe5-feb5-4df3-8b7a-ac887128508a 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 20 15:02:13 compute-1 nova_compute[225855]: 2026-01-20 15:02:13.433 225859 DEBUG nova.virt.hardware [None req-4994abe5-feb5-4df3-8b7a-ac887128508a 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 20 15:02:13 compute-1 nova_compute[225855]: 2026-01-20 15:02:13.433 225859 DEBUG nova.virt.hardware [None req-4994abe5-feb5-4df3-8b7a-ac887128508a 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 20 15:02:13 compute-1 nova_compute[225855]: 2026-01-20 15:02:13.433 225859 DEBUG nova.virt.hardware [None req-4994abe5-feb5-4df3-8b7a-ac887128508a 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 20 15:02:13 compute-1 nova_compute[225855]: 2026-01-20 15:02:13.433 225859 DEBUG nova.virt.hardware [None req-4994abe5-feb5-4df3-8b7a-ac887128508a 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 20 15:02:13 compute-1 nova_compute[225855]: 2026-01-20 15:02:13.434 225859 DEBUG nova.virt.hardware [None req-4994abe5-feb5-4df3-8b7a-ac887128508a 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 20 15:02:13 compute-1 nova_compute[225855]: 2026-01-20 15:02:13.434 225859 DEBUG nova.virt.hardware [None req-4994abe5-feb5-4df3-8b7a-ac887128508a 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 20 15:02:13 compute-1 nova_compute[225855]: 2026-01-20 15:02:13.434 225859 DEBUG nova.virt.hardware [None req-4994abe5-feb5-4df3-8b7a-ac887128508a 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 20 15:02:13 compute-1 nova_compute[225855]: 2026-01-20 15:02:13.434 225859 DEBUG nova.virt.hardware [None req-4994abe5-feb5-4df3-8b7a-ac887128508a 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 20 15:02:13 compute-1 nova_compute[225855]: 2026-01-20 15:02:13.435 225859 DEBUG nova.virt.hardware [None req-4994abe5-feb5-4df3-8b7a-ac887128508a 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 20 15:02:13 compute-1 nova_compute[225855]: 2026-01-20 15:02:13.437 225859 DEBUG oslo_concurrency.processutils [None req-4994abe5-feb5-4df3-8b7a-ac887128508a 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:02:13 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:02:13 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:02:13 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:02:13.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:02:13 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/3642955436' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:02:13 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/3642955436' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:02:13 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 15:02:13 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3696145664' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:02:13 compute-1 sudo[286237]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:02:13 compute-1 sudo[286237]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:02:13 compute-1 sudo[286237]: pam_unix(sudo:session): session closed for user root
Jan 20 15:02:13 compute-1 nova_compute[225855]: 2026-01-20 15:02:13.959 225859 DEBUG oslo_concurrency.processutils [None req-4994abe5-feb5-4df3-8b7a-ac887128508a 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.522s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:02:13 compute-1 nova_compute[225855]: 2026-01-20 15:02:13.995 225859 DEBUG nova.storage.rbd_utils [None req-4994abe5-feb5-4df3-8b7a-ac887128508a 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] rbd image 0a74eb9c-7f01-437d-a0c8-c01696fc8f9d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:02:14 compute-1 nova_compute[225855]: 2026-01-20 15:02:13.999 225859 DEBUG oslo_concurrency.processutils [None req-4994abe5-feb5-4df3-8b7a-ac887128508a 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:02:14 compute-1 sudo[286265]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:02:14 compute-1 sudo[286265]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:02:14 compute-1 sudo[286265]: pam_unix(sudo:session): session closed for user root
Jan 20 15:02:14 compute-1 sudo[286308]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:02:14 compute-1 sudo[286308]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:02:14 compute-1 sudo[286308]: pam_unix(sudo:session): session closed for user root
Jan 20 15:02:14 compute-1 sudo[286333]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Jan 20 15:02:14 compute-1 sudo[286333]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:02:14 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:02:14 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:02:14 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:02:14.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:02:14 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 15:02:14 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/295816442' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:02:14 compute-1 nova_compute[225855]: 2026-01-20 15:02:14.437 225859 DEBUG oslo_concurrency.processutils [None req-4994abe5-feb5-4df3-8b7a-ac887128508a 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:02:14 compute-1 nova_compute[225855]: 2026-01-20 15:02:14.439 225859 DEBUG nova.virt.libvirt.vif [None req-4994abe5-feb5-4df3-8b7a-ac887128508a 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T15:02:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerMultinode-server-1200545801',display_name='tempest-TestServerMultinode-server-1200545801',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testservermultinode-server-1200545801',id=154,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='654b3ce7b3644fc58f8dc9f60529320b',ramdisk_id='',reservation_id='r-nu0b3t9m',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,admin,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerMultinode-1071973011',owner_user_name='tempest-TestServerMultinode-107
1973011-project-admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T15:02:07Z,user_data=None,user_id='158563a99d4a420890aaa00b05c8bb57',uuid=0a74eb9c-7f01-437d-a0c8-c01696fc8f9d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d9897519-3517-45da-be53-d342192fa380", "address": "fa:16:3e:7a:5f:fd", "network": {"id": "0296a21f-6ec4-43a7-8731-1d3692a5de4a", "bridge": "br-int", "label": "tempest-TestServerMultinode-1878354210-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "908b5ba217ab458e8c9aa0e5a471c194", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd9897519-35", "ovs_interfaceid": "d9897519-3517-45da-be53-d342192fa380", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 20 15:02:14 compute-1 nova_compute[225855]: 2026-01-20 15:02:14.439 225859 DEBUG nova.network.os_vif_util [None req-4994abe5-feb5-4df3-8b7a-ac887128508a 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] Converting VIF {"id": "d9897519-3517-45da-be53-d342192fa380", "address": "fa:16:3e:7a:5f:fd", "network": {"id": "0296a21f-6ec4-43a7-8731-1d3692a5de4a", "bridge": "br-int", "label": "tempest-TestServerMultinode-1878354210-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "908b5ba217ab458e8c9aa0e5a471c194", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd9897519-35", "ovs_interfaceid": "d9897519-3517-45da-be53-d342192fa380", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 15:02:14 compute-1 nova_compute[225855]: 2026-01-20 15:02:14.440 225859 DEBUG nova.network.os_vif_util [None req-4994abe5-feb5-4df3-8b7a-ac887128508a 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7a:5f:fd,bridge_name='br-int',has_traffic_filtering=True,id=d9897519-3517-45da-be53-d342192fa380,network=Network(0296a21f-6ec4-43a7-8731-1d3692a5de4a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd9897519-35') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 15:02:14 compute-1 nova_compute[225855]: 2026-01-20 15:02:14.441 225859 DEBUG nova.objects.instance [None req-4994abe5-feb5-4df3-8b7a-ac887128508a 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] Lazy-loading 'pci_devices' on Instance uuid 0a74eb9c-7f01-437d-a0c8-c01696fc8f9d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:02:14 compute-1 nova_compute[225855]: 2026-01-20 15:02:14.460 225859 DEBUG nova.virt.libvirt.driver [None req-4994abe5-feb5-4df3-8b7a-ac887128508a 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] [instance: 0a74eb9c-7f01-437d-a0c8-c01696fc8f9d] End _get_guest_xml xml=<domain type="kvm">
Jan 20 15:02:14 compute-1 nova_compute[225855]:   <uuid>0a74eb9c-7f01-437d-a0c8-c01696fc8f9d</uuid>
Jan 20 15:02:14 compute-1 nova_compute[225855]:   <name>instance-0000009a</name>
Jan 20 15:02:14 compute-1 nova_compute[225855]:   <memory>131072</memory>
Jan 20 15:02:14 compute-1 nova_compute[225855]:   <vcpu>1</vcpu>
Jan 20 15:02:14 compute-1 nova_compute[225855]:   <metadata>
Jan 20 15:02:14 compute-1 nova_compute[225855]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 15:02:14 compute-1 nova_compute[225855]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 15:02:14 compute-1 nova_compute[225855]:       <nova:name>tempest-TestServerMultinode-server-1200545801</nova:name>
Jan 20 15:02:14 compute-1 nova_compute[225855]:       <nova:creationTime>2026-01-20 15:02:13</nova:creationTime>
Jan 20 15:02:14 compute-1 nova_compute[225855]:       <nova:flavor name="m1.nano">
Jan 20 15:02:14 compute-1 nova_compute[225855]:         <nova:memory>128</nova:memory>
Jan 20 15:02:14 compute-1 nova_compute[225855]:         <nova:disk>1</nova:disk>
Jan 20 15:02:14 compute-1 nova_compute[225855]:         <nova:swap>0</nova:swap>
Jan 20 15:02:14 compute-1 nova_compute[225855]:         <nova:ephemeral>0</nova:ephemeral>
Jan 20 15:02:14 compute-1 nova_compute[225855]:         <nova:vcpus>1</nova:vcpus>
Jan 20 15:02:14 compute-1 nova_compute[225855]:       </nova:flavor>
Jan 20 15:02:14 compute-1 nova_compute[225855]:       <nova:owner>
Jan 20 15:02:14 compute-1 nova_compute[225855]:         <nova:user uuid="158563a99d4a420890aaa00b05c8bb57">tempest-TestServerMultinode-1071973011-project-admin</nova:user>
Jan 20 15:02:14 compute-1 nova_compute[225855]:         <nova:project uuid="654b3ce7b3644fc58f8dc9f60529320b">tempest-TestServerMultinode-1071973011</nova:project>
Jan 20 15:02:14 compute-1 nova_compute[225855]:       </nova:owner>
Jan 20 15:02:14 compute-1 nova_compute[225855]:       <nova:root type="image" uuid="a32b3e07-16d8-46fd-9a7b-c242c432fcf9"/>
Jan 20 15:02:14 compute-1 nova_compute[225855]:       <nova:ports>
Jan 20 15:02:14 compute-1 nova_compute[225855]:         <nova:port uuid="d9897519-3517-45da-be53-d342192fa380">
Jan 20 15:02:14 compute-1 nova_compute[225855]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Jan 20 15:02:14 compute-1 nova_compute[225855]:         </nova:port>
Jan 20 15:02:14 compute-1 nova_compute[225855]:       </nova:ports>
Jan 20 15:02:14 compute-1 nova_compute[225855]:     </nova:instance>
Jan 20 15:02:14 compute-1 nova_compute[225855]:   </metadata>
Jan 20 15:02:14 compute-1 nova_compute[225855]:   <sysinfo type="smbios">
Jan 20 15:02:14 compute-1 nova_compute[225855]:     <system>
Jan 20 15:02:14 compute-1 nova_compute[225855]:       <entry name="manufacturer">RDO</entry>
Jan 20 15:02:14 compute-1 nova_compute[225855]:       <entry name="product">OpenStack Compute</entry>
Jan 20 15:02:14 compute-1 nova_compute[225855]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 15:02:14 compute-1 nova_compute[225855]:       <entry name="serial">0a74eb9c-7f01-437d-a0c8-c01696fc8f9d</entry>
Jan 20 15:02:14 compute-1 nova_compute[225855]:       <entry name="uuid">0a74eb9c-7f01-437d-a0c8-c01696fc8f9d</entry>
Jan 20 15:02:14 compute-1 nova_compute[225855]:       <entry name="family">Virtual Machine</entry>
Jan 20 15:02:14 compute-1 nova_compute[225855]:     </system>
Jan 20 15:02:14 compute-1 nova_compute[225855]:   </sysinfo>
Jan 20 15:02:14 compute-1 nova_compute[225855]:   <os>
Jan 20 15:02:14 compute-1 nova_compute[225855]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 20 15:02:14 compute-1 nova_compute[225855]:     <boot dev="hd"/>
Jan 20 15:02:14 compute-1 nova_compute[225855]:     <smbios mode="sysinfo"/>
Jan 20 15:02:14 compute-1 nova_compute[225855]:   </os>
Jan 20 15:02:14 compute-1 nova_compute[225855]:   <features>
Jan 20 15:02:14 compute-1 nova_compute[225855]:     <acpi/>
Jan 20 15:02:14 compute-1 nova_compute[225855]:     <apic/>
Jan 20 15:02:14 compute-1 nova_compute[225855]:     <vmcoreinfo/>
Jan 20 15:02:14 compute-1 nova_compute[225855]:   </features>
Jan 20 15:02:14 compute-1 nova_compute[225855]:   <clock offset="utc">
Jan 20 15:02:14 compute-1 nova_compute[225855]:     <timer name="pit" tickpolicy="delay"/>
Jan 20 15:02:14 compute-1 nova_compute[225855]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 20 15:02:14 compute-1 nova_compute[225855]:     <timer name="hpet" present="no"/>
Jan 20 15:02:14 compute-1 nova_compute[225855]:   </clock>
Jan 20 15:02:14 compute-1 nova_compute[225855]:   <cpu mode="custom" match="exact">
Jan 20 15:02:14 compute-1 nova_compute[225855]:     <model>Nehalem</model>
Jan 20 15:02:14 compute-1 nova_compute[225855]:     <topology sockets="1" cores="1" threads="1"/>
Jan 20 15:02:14 compute-1 nova_compute[225855]:   </cpu>
Jan 20 15:02:14 compute-1 nova_compute[225855]:   <devices>
Jan 20 15:02:14 compute-1 nova_compute[225855]:     <disk type="network" device="disk">
Jan 20 15:02:14 compute-1 nova_compute[225855]:       <driver type="raw" cache="none"/>
Jan 20 15:02:14 compute-1 nova_compute[225855]:       <source protocol="rbd" name="vms/0a74eb9c-7f01-437d-a0c8-c01696fc8f9d_disk">
Jan 20 15:02:14 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 15:02:14 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 15:02:14 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 15:02:14 compute-1 nova_compute[225855]:       </source>
Jan 20 15:02:14 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 15:02:14 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 15:02:14 compute-1 nova_compute[225855]:       </auth>
Jan 20 15:02:14 compute-1 nova_compute[225855]:       <target dev="vda" bus="virtio"/>
Jan 20 15:02:14 compute-1 nova_compute[225855]:     </disk>
Jan 20 15:02:14 compute-1 nova_compute[225855]:     <disk type="network" device="cdrom">
Jan 20 15:02:14 compute-1 nova_compute[225855]:       <driver type="raw" cache="none"/>
Jan 20 15:02:14 compute-1 nova_compute[225855]:       <source protocol="rbd" name="vms/0a74eb9c-7f01-437d-a0c8-c01696fc8f9d_disk.config">
Jan 20 15:02:14 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 15:02:14 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 15:02:14 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 15:02:14 compute-1 nova_compute[225855]:       </source>
Jan 20 15:02:14 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 15:02:14 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 15:02:14 compute-1 nova_compute[225855]:       </auth>
Jan 20 15:02:14 compute-1 nova_compute[225855]:       <target dev="sda" bus="sata"/>
Jan 20 15:02:14 compute-1 nova_compute[225855]:     </disk>
Jan 20 15:02:14 compute-1 nova_compute[225855]:     <interface type="ethernet">
Jan 20 15:02:14 compute-1 nova_compute[225855]:       <mac address="fa:16:3e:7a:5f:fd"/>
Jan 20 15:02:14 compute-1 nova_compute[225855]:       <model type="virtio"/>
Jan 20 15:02:14 compute-1 nova_compute[225855]:       <driver name="vhost" rx_queue_size="512"/>
Jan 20 15:02:14 compute-1 nova_compute[225855]:       <mtu size="1442"/>
Jan 20 15:02:14 compute-1 nova_compute[225855]:       <target dev="tapd9897519-35"/>
Jan 20 15:02:14 compute-1 nova_compute[225855]:     </interface>
Jan 20 15:02:14 compute-1 nova_compute[225855]:     <serial type="pty">
Jan 20 15:02:14 compute-1 nova_compute[225855]:       <log file="/var/lib/nova/instances/0a74eb9c-7f01-437d-a0c8-c01696fc8f9d/console.log" append="off"/>
Jan 20 15:02:14 compute-1 nova_compute[225855]:     </serial>
Jan 20 15:02:14 compute-1 nova_compute[225855]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 20 15:02:14 compute-1 nova_compute[225855]:     <video>
Jan 20 15:02:14 compute-1 nova_compute[225855]:       <model type="virtio"/>
Jan 20 15:02:14 compute-1 nova_compute[225855]:     </video>
Jan 20 15:02:14 compute-1 nova_compute[225855]:     <input type="tablet" bus="usb"/>
Jan 20 15:02:14 compute-1 nova_compute[225855]:     <rng model="virtio">
Jan 20 15:02:14 compute-1 nova_compute[225855]:       <backend model="random">/dev/urandom</backend>
Jan 20 15:02:14 compute-1 nova_compute[225855]:     </rng>
Jan 20 15:02:14 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root"/>
Jan 20 15:02:14 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:02:14 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:02:14 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:02:14 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:02:14 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:02:14 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:02:14 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:02:14 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:02:14 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:02:14 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:02:14 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:02:14 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:02:14 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:02:14 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:02:14 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:02:14 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:02:14 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:02:14 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:02:14 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:02:14 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:02:14 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:02:14 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:02:14 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:02:14 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:02:14 compute-1 nova_compute[225855]:     <controller type="usb" index="0"/>
Jan 20 15:02:14 compute-1 nova_compute[225855]:     <memballoon model="virtio">
Jan 20 15:02:14 compute-1 nova_compute[225855]:       <stats period="10"/>
Jan 20 15:02:14 compute-1 nova_compute[225855]:     </memballoon>
Jan 20 15:02:14 compute-1 nova_compute[225855]:   </devices>
Jan 20 15:02:14 compute-1 nova_compute[225855]: </domain>
Jan 20 15:02:14 compute-1 nova_compute[225855]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 20 15:02:14 compute-1 nova_compute[225855]: 2026-01-20 15:02:14.460 225859 DEBUG nova.compute.manager [None req-4994abe5-feb5-4df3-8b7a-ac887128508a 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] [instance: 0a74eb9c-7f01-437d-a0c8-c01696fc8f9d] Preparing to wait for external event network-vif-plugged-d9897519-3517-45da-be53-d342192fa380 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 20 15:02:14 compute-1 nova_compute[225855]: 2026-01-20 15:02:14.461 225859 DEBUG oslo_concurrency.lockutils [None req-4994abe5-feb5-4df3-8b7a-ac887128508a 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] Acquiring lock "0a74eb9c-7f01-437d-a0c8-c01696fc8f9d-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:02:14 compute-1 nova_compute[225855]: 2026-01-20 15:02:14.461 225859 DEBUG oslo_concurrency.lockutils [None req-4994abe5-feb5-4df3-8b7a-ac887128508a 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] Lock "0a74eb9c-7f01-437d-a0c8-c01696fc8f9d-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:02:14 compute-1 nova_compute[225855]: 2026-01-20 15:02:14.461 225859 DEBUG oslo_concurrency.lockutils [None req-4994abe5-feb5-4df3-8b7a-ac887128508a 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] Lock "0a74eb9c-7f01-437d-a0c8-c01696fc8f9d-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:02:14 compute-1 nova_compute[225855]: 2026-01-20 15:02:14.462 225859 DEBUG nova.virt.libvirt.vif [None req-4994abe5-feb5-4df3-8b7a-ac887128508a 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T15:02:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerMultinode-server-1200545801',display_name='tempest-TestServerMultinode-server-1200545801',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testservermultinode-server-1200545801',id=154,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='654b3ce7b3644fc58f8dc9f60529320b',ramdisk_id='',reservation_id='r-nu0b3t9m',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,admin,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerMultinode-1071973011',owner_user_name='tempest-TestServerMultinode-1071973011-project-admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T15:02:07Z,user_data=None,user_id='158563a99d4a420890aaa00b05c8bb57',uuid=0a74eb9c-7f01-437d-a0c8-c01696fc8f9d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d9897519-3517-45da-be53-d342192fa380", "address": "fa:16:3e:7a:5f:fd", "network": {"id": "0296a21f-6ec4-43a7-8731-1d3692a5de4a", "bridge": "br-int", "label": "tempest-TestServerMultinode-1878354210-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "908b5ba217ab458e8c9aa0e5a471c194", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd9897519-35", "ovs_interfaceid": "d9897519-3517-45da-be53-d342192fa380", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 20 15:02:14 compute-1 nova_compute[225855]: 2026-01-20 15:02:14.462 225859 DEBUG nova.network.os_vif_util [None req-4994abe5-feb5-4df3-8b7a-ac887128508a 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] Converting VIF {"id": "d9897519-3517-45da-be53-d342192fa380", "address": "fa:16:3e:7a:5f:fd", "network": {"id": "0296a21f-6ec4-43a7-8731-1d3692a5de4a", "bridge": "br-int", "label": "tempest-TestServerMultinode-1878354210-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "908b5ba217ab458e8c9aa0e5a471c194", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd9897519-35", "ovs_interfaceid": "d9897519-3517-45da-be53-d342192fa380", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 15:02:14 compute-1 nova_compute[225855]: 2026-01-20 15:02:14.463 225859 DEBUG nova.network.os_vif_util [None req-4994abe5-feb5-4df3-8b7a-ac887128508a 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7a:5f:fd,bridge_name='br-int',has_traffic_filtering=True,id=d9897519-3517-45da-be53-d342192fa380,network=Network(0296a21f-6ec4-43a7-8731-1d3692a5de4a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd9897519-35') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 15:02:14 compute-1 nova_compute[225855]: 2026-01-20 15:02:14.463 225859 DEBUG os_vif [None req-4994abe5-feb5-4df3-8b7a-ac887128508a 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7a:5f:fd,bridge_name='br-int',has_traffic_filtering=True,id=d9897519-3517-45da-be53-d342192fa380,network=Network(0296a21f-6ec4-43a7-8731-1d3692a5de4a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd9897519-35') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 20 15:02:14 compute-1 nova_compute[225855]: 2026-01-20 15:02:14.467 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:02:14 compute-1 nova_compute[225855]: 2026-01-20 15:02:14.467 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:02:14 compute-1 nova_compute[225855]: 2026-01-20 15:02:14.467 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 15:02:14 compute-1 nova_compute[225855]: 2026-01-20 15:02:14.471 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:02:14 compute-1 nova_compute[225855]: 2026-01-20 15:02:14.471 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd9897519-35, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:02:14 compute-1 nova_compute[225855]: 2026-01-20 15:02:14.472 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd9897519-35, col_values=(('external_ids', {'iface-id': 'd9897519-3517-45da-be53-d342192fa380', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:7a:5f:fd', 'vm-uuid': '0a74eb9c-7f01-437d-a0c8-c01696fc8f9d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:02:14 compute-1 nova_compute[225855]: 2026-01-20 15:02:14.473 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:02:14 compute-1 NetworkManager[49104]: <info>  [1768921334.4747] manager: (tapd9897519-35): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/271)
Jan 20 15:02:14 compute-1 nova_compute[225855]: 2026-01-20 15:02:14.475 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 15:02:14 compute-1 nova_compute[225855]: 2026-01-20 15:02:14.481 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:02:14 compute-1 nova_compute[225855]: 2026-01-20 15:02:14.483 225859 INFO os_vif [None req-4994abe5-feb5-4df3-8b7a-ac887128508a 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7a:5f:fd,bridge_name='br-int',has_traffic_filtering=True,id=d9897519-3517-45da-be53-d342192fa380,network=Network(0296a21f-6ec4-43a7-8731-1d3692a5de4a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd9897519-35')
Jan 20 15:02:14 compute-1 nova_compute[225855]: 2026-01-20 15:02:14.603 225859 DEBUG nova.virt.libvirt.driver [None req-4994abe5-feb5-4df3-8b7a-ac887128508a 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 15:02:14 compute-1 nova_compute[225855]: 2026-01-20 15:02:14.603 225859 DEBUG nova.virt.libvirt.driver [None req-4994abe5-feb5-4df3-8b7a-ac887128508a 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 15:02:14 compute-1 nova_compute[225855]: 2026-01-20 15:02:14.604 225859 DEBUG nova.virt.libvirt.driver [None req-4994abe5-feb5-4df3-8b7a-ac887128508a 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] No VIF found with MAC fa:16:3e:7a:5f:fd, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 20 15:02:14 compute-1 nova_compute[225855]: 2026-01-20 15:02:14.604 225859 INFO nova.virt.libvirt.driver [None req-4994abe5-feb5-4df3-8b7a-ac887128508a 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] [instance: 0a74eb9c-7f01-437d-a0c8-c01696fc8f9d] Using config drive
Jan 20 15:02:14 compute-1 nova_compute[225855]: 2026-01-20 15:02:14.636 225859 DEBUG nova.storage.rbd_utils [None req-4994abe5-feb5-4df3-8b7a-ac887128508a 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] rbd image 0a74eb9c-7f01-437d-a0c8-c01696fc8f9d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:02:14 compute-1 podman[286450]: 2026-01-20 15:02:14.638047526 +0000 UTC m=+0.085607856 container exec 718ebba7a543e42aad7051248d2c7dc014068c35c89c5b87f27b82d4de39c009 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-crash-compute-1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 15:02:14 compute-1 nova_compute[225855]: 2026-01-20 15:02:14.739 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:02:14 compute-1 podman[286450]: 2026-01-20 15:02:14.754346077 +0000 UTC m=+0.201906377 container exec_died 718ebba7a543e42aad7051248d2c7dc014068c35c89c5b87f27b82d4de39c009 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-crash-compute-1, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 20 15:02:14 compute-1 nova_compute[225855]: 2026-01-20 15:02:14.868 225859 DEBUG nova.network.neutron [req-4004eee4-5a31-423e-8a25-56d4197964f0 req-8aacd456-145e-49ed-9d29-a40ed8f1ce23 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 0a74eb9c-7f01-437d-a0c8-c01696fc8f9d] Updated VIF entry in instance network info cache for port d9897519-3517-45da-be53-d342192fa380. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 15:02:14 compute-1 nova_compute[225855]: 2026-01-20 15:02:14.868 225859 DEBUG nova.network.neutron [req-4004eee4-5a31-423e-8a25-56d4197964f0 req-8aacd456-145e-49ed-9d29-a40ed8f1ce23 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 0a74eb9c-7f01-437d-a0c8-c01696fc8f9d] Updating instance_info_cache with network_info: [{"id": "d9897519-3517-45da-be53-d342192fa380", "address": "fa:16:3e:7a:5f:fd", "network": {"id": "0296a21f-6ec4-43a7-8731-1d3692a5de4a", "bridge": "br-int", "label": "tempest-TestServerMultinode-1878354210-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "908b5ba217ab458e8c9aa0e5a471c194", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd9897519-35", "ovs_interfaceid": "d9897519-3517-45da-be53-d342192fa380", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:02:14 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/3696145664' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:02:14 compute-1 ceph-mon[81775]: pgmap v2317: 321 pgs: 321 active+clean; 385 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 912 KiB/s rd, 7.4 MiB/s wr, 169 op/s
Jan 20 15:02:14 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/295816442' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:02:14 compute-1 nova_compute[225855]: 2026-01-20 15:02:14.886 225859 DEBUG oslo_concurrency.lockutils [req-4004eee4-5a31-423e-8a25-56d4197964f0 req-8aacd456-145e-49ed-9d29-a40ed8f1ce23 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-0a74eb9c-7f01-437d-a0c8-c01696fc8f9d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:02:15 compute-1 nova_compute[225855]: 2026-01-20 15:02:15.131 225859 INFO nova.virt.libvirt.driver [None req-4994abe5-feb5-4df3-8b7a-ac887128508a 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] [instance: 0a74eb9c-7f01-437d-a0c8-c01696fc8f9d] Creating config drive at /var/lib/nova/instances/0a74eb9c-7f01-437d-a0c8-c01696fc8f9d/disk.config
Jan 20 15:02:15 compute-1 nova_compute[225855]: 2026-01-20 15:02:15.136 225859 DEBUG oslo_concurrency.processutils [None req-4994abe5-feb5-4df3-8b7a-ac887128508a 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/0a74eb9c-7f01-437d-a0c8-c01696fc8f9d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp57ra_zsl execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:02:15 compute-1 nova_compute[225855]: 2026-01-20 15:02:15.270 225859 DEBUG oslo_concurrency.processutils [None req-4994abe5-feb5-4df3-8b7a-ac887128508a 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/0a74eb9c-7f01-437d-a0c8-c01696fc8f9d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp57ra_zsl" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:02:15 compute-1 nova_compute[225855]: 2026-01-20 15:02:15.305 225859 DEBUG nova.storage.rbd_utils [None req-4994abe5-feb5-4df3-8b7a-ac887128508a 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] rbd image 0a74eb9c-7f01-437d-a0c8-c01696fc8f9d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:02:15 compute-1 nova_compute[225855]: 2026-01-20 15:02:15.312 225859 DEBUG oslo_concurrency.processutils [None req-4994abe5-feb5-4df3-8b7a-ac887128508a 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/0a74eb9c-7f01-437d-a0c8-c01696fc8f9d/disk.config 0a74eb9c-7f01-437d-a0c8-c01696fc8f9d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:02:15 compute-1 podman[286633]: 2026-01-20 15:02:15.362708299 +0000 UTC m=+0.057535215 container exec 25e2c3387bc15944c21038272559da5fbf75910d8dd4add0faa995fb4e0f7788 (image=quay.io/ceph/haproxy:2.3, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-haproxy-rgw-default-compute-1-uyeocq)
Jan 20 15:02:15 compute-1 podman[286665]: 2026-01-20 15:02:15.434547685 +0000 UTC m=+0.052411089 container exec_died 25e2c3387bc15944c21038272559da5fbf75910d8dd4add0faa995fb4e0f7788 (image=quay.io/ceph/haproxy:2.3, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-haproxy-rgw-default-compute-1-uyeocq)
Jan 20 15:02:15 compute-1 podman[286633]: 2026-01-20 15:02:15.441247794 +0000 UTC m=+0.136074700 container exec_died 25e2c3387bc15944c21038272559da5fbf75910d8dd4add0faa995fb4e0f7788 (image=quay.io/ceph/haproxy:2.3, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-haproxy-rgw-default-compute-1-uyeocq)
Jan 20 15:02:15 compute-1 sudo[286683]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:02:15 compute-1 sudo[286683]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:02:15 compute-1 sudo[286683]: pam_unix(sudo:session): session closed for user root
Jan 20 15:02:15 compute-1 sudo[286715]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:02:15 compute-1 nova_compute[225855]: 2026-01-20 15:02:15.539 225859 DEBUG oslo_concurrency.processutils [None req-4994abe5-feb5-4df3-8b7a-ac887128508a 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/0a74eb9c-7f01-437d-a0c8-c01696fc8f9d/disk.config 0a74eb9c-7f01-437d-a0c8-c01696fc8f9d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.227s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:02:15 compute-1 nova_compute[225855]: 2026-01-20 15:02:15.540 225859 INFO nova.virt.libvirt.driver [None req-4994abe5-feb5-4df3-8b7a-ac887128508a 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] [instance: 0a74eb9c-7f01-437d-a0c8-c01696fc8f9d] Deleting local config drive /var/lib/nova/instances/0a74eb9c-7f01-437d-a0c8-c01696fc8f9d/disk.config because it was imported into RBD.
Jan 20 15:02:15 compute-1 sudo[286715]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:02:15 compute-1 sudo[286715]: pam_unix(sudo:session): session closed for user root
Jan 20 15:02:15 compute-1 kernel: tapd9897519-35: entered promiscuous mode
Jan 20 15:02:15 compute-1 NetworkManager[49104]: <info>  [1768921335.5935] manager: (tapd9897519-35): new Tun device (/org/freedesktop/NetworkManager/Devices/272)
Jan 20 15:02:15 compute-1 ovn_controller[130490]: 2026-01-20T15:02:15Z|00625|binding|INFO|Claiming lport d9897519-3517-45da-be53-d342192fa380 for this chassis.
Jan 20 15:02:15 compute-1 ovn_controller[130490]: 2026-01-20T15:02:15Z|00626|binding|INFO|d9897519-3517-45da-be53-d342192fa380: Claiming fa:16:3e:7a:5f:fd 10.100.0.8
Jan 20 15:02:15 compute-1 nova_compute[225855]: 2026-01-20 15:02:15.597 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:02:15 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:02:15.607 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7a:5f:fd 10.100.0.8'], port_security=['fa:16:3e:7a:5f:fd 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '0a74eb9c-7f01-437d-a0c8-c01696fc8f9d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0296a21f-6ec4-43a7-8731-1d3692a5de4a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '654b3ce7b3644fc58f8dc9f60529320b', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'ff1c5b6a-5ab6-401e-b333-7f359193e012', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6a3d5928-255d-4c0c-af70-f26be5196416, chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], logical_port=d9897519-3517-45da-be53-d342192fa380) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:02:15 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:02:15.608 140354 INFO neutron.agent.ovn.metadata.agent [-] Port d9897519-3517-45da-be53-d342192fa380 in datapath 0296a21f-6ec4-43a7-8731-1d3692a5de4a bound to our chassis
Jan 20 15:02:15 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:02:15.610 140354 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0296a21f-6ec4-43a7-8731-1d3692a5de4a
Jan 20 15:02:15 compute-1 ovn_controller[130490]: 2026-01-20T15:02:15Z|00627|binding|INFO|Setting lport d9897519-3517-45da-be53-d342192fa380 ovn-installed in OVS
Jan 20 15:02:15 compute-1 ovn_controller[130490]: 2026-01-20T15:02:15Z|00628|binding|INFO|Setting lport d9897519-3517-45da-be53-d342192fa380 up in Southbound
Jan 20 15:02:15 compute-1 nova_compute[225855]: 2026-01-20 15:02:15.620 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:02:15 compute-1 nova_compute[225855]: 2026-01-20 15:02:15.621 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:02:15 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:02:15.625 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[c18260e8-537b-4ba6-aed5-3ee695d04ce5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:02:15 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:02:15.625 140354 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap0296a21f-61 in ovnmeta-0296a21f-6ec4-43a7-8731-1d3692a5de4a namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 20 15:02:15 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:02:15.627 229707 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap0296a21f-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 20 15:02:15 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:02:15.627 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[97389464-2adb-483a-988e-b9da3ce46685]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:02:15 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:02:15.628 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[2e07438e-a020-4b9f-8fc1-1c4595d10dcd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:02:15 compute-1 systemd-udevd[286790]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 15:02:15 compute-1 systemd-machined[194361]: New machine qemu-74-instance-0000009a.
Jan 20 15:02:15 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:02:15.640 140466 DEBUG oslo.privsep.daemon [-] privsep: reply[34627889-b4e2-43f5-b432-abbbef51ca41]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:02:15 compute-1 NetworkManager[49104]: <info>  [1768921335.6482] device (tapd9897519-35): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 20 15:02:15 compute-1 NetworkManager[49104]: <info>  [1768921335.6492] device (tapd9897519-35): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 20 15:02:15 compute-1 systemd[1]: Started Virtual Machine qemu-74-instance-0000009a.
Jan 20 15:02:15 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:02:15.666 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[99c7638f-b672-4f08-8c2c-8b223a636b4a]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:02:15 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:02:15.694 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[3fba78a1-6fed-4eef-87b9-51d458270c6a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:02:15 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:02:15.700 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[439ed9da-6dd1-4a71-b52e-8d96d578ea82]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:02:15 compute-1 NetworkManager[49104]: <info>  [1768921335.7012] manager: (tap0296a21f-60): new Veth device (/org/freedesktop/NetworkManager/Devices/273)
Jan 20 15:02:15 compute-1 podman[286780]: 2026-01-20 15:02:15.705297814 +0000 UTC m=+0.075051899 container exec e27b69e4cc956b06482c80498336e112a56122514cd7345d3d4b39a4d206f962 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-keepalived-rgw-default-compute-1-cevitz, version=2.2.4, io.openshift.tags=Ceph keepalived, distribution-scope=public, release=1793, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Keepalived on RHEL 9, architecture=x86_64, build-date=2023-02-22T09:23:20, com.redhat.component=keepalived-container, summary=Provides keepalived on RHEL 9 for Ceph., vendor=Red Hat, Inc., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.28.2, description=keepalived for Ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793)
Jan 20 15:02:15 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:02:15.733 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[d1e4ae46-22db-47c1-9875-441c9c24ce59]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:02:15 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:02:15.736 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[3459ccfc-0f4f-4e98-a323-7e1aa525197b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:02:15 compute-1 podman[286780]: 2026-01-20 15:02:15.747546986 +0000 UTC m=+0.117301061 container exec_died e27b69e4cc956b06482c80498336e112a56122514cd7345d3d4b39a4d206f962 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-keepalived-rgw-default-compute-1-cevitz, distribution-scope=public, name=keepalived, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Keepalived on RHEL 9, build-date=2023-02-22T09:23:20, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.28.2, version=2.2.4, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1793, summary=Provides keepalived on RHEL 9 for Ceph., vendor=Red Hat, Inc., architecture=x86_64, com.redhat.component=keepalived-container, io.openshift.tags=Ceph keepalived, description=keepalived for Ceph, io.openshift.expose-services=)
Jan 20 15:02:15 compute-1 NetworkManager[49104]: <info>  [1768921335.7639] device (tap0296a21f-60): carrier: link connected
Jan 20 15:02:15 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:02:15.770 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[9949f421-0e25-4148-8c78-ca53df948026]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:02:15 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:02:15.790 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[c95526a3-8f22-4bbe-9847-dcc9dfcaabfd]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0296a21f-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e3:1c:68'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 177], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 637469, 'reachable_time': 44936, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 286846, 'error': None, 'target': 'ovnmeta-0296a21f-6ec4-43a7-8731-1d3692a5de4a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:02:15 compute-1 sudo[286333]: pam_unix(sudo:session): session closed for user root
Jan 20 15:02:15 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:02:15.808 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[19f646ca-58ff-4d95-8183-888b0aee162c]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fee3:1c68'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 637469, 'tstamp': 637469}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 286847, 'error': None, 'target': 'ovnmeta-0296a21f-6ec4-43a7-8731-1d3692a5de4a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:02:15 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:02:15.827 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[f89cf3d1-12e7-4278-8afd-7d630919e20d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0296a21f-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e3:1c:68'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 177], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 637469, 'reachable_time': 44936, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 286848, 'error': None, 'target': 'ovnmeta-0296a21f-6ec4-43a7-8731-1d3692a5de4a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:02:15 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:02:15.859 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[ae739682-f2f3-4825-84ff-07abfabf369f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:02:15 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:02:15 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:02:15 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:02:15.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:02:15 compute-1 sudo[286850]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:02:15 compute-1 sudo[286850]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:02:15 compute-1 sudo[286850]: pam_unix(sudo:session): session closed for user root
Jan 20 15:02:15 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:02:15.916 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[a41f7a57-ad33-4b24-b228-2d954f974e3c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:02:15 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:02:15.920 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0296a21f-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:02:15 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:02:15.920 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 15:02:15 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:02:15.921 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0296a21f-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:02:15 compute-1 kernel: tap0296a21f-60: entered promiscuous mode
Jan 20 15:02:15 compute-1 nova_compute[225855]: 2026-01-20 15:02:15.922 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:02:15 compute-1 NetworkManager[49104]: <info>  [1768921335.9239] manager: (tap0296a21f-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/274)
Jan 20 15:02:15 compute-1 nova_compute[225855]: 2026-01-20 15:02:15.925 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:02:15 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:02:15.926 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0296a21f-60, col_values=(('external_ids', {'iface-id': 'a6fccd00-2fdb-4d49-8d76-4860c81e4a5f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:02:15 compute-1 nova_compute[225855]: 2026-01-20 15:02:15.928 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:02:15 compute-1 ovn_controller[130490]: 2026-01-20T15:02:15Z|00629|binding|INFO|Releasing lport a6fccd00-2fdb-4d49-8d76-4860c81e4a5f from this chassis (sb_readonly=0)
Jan 20 15:02:15 compute-1 nova_compute[225855]: 2026-01-20 15:02:15.944 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:02:15 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:02:15.945 140354 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/0296a21f-6ec4-43a7-8731-1d3692a5de4a.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/0296a21f-6ec4-43a7-8731-1d3692a5de4a.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 20 15:02:15 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:02:15.946 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[7f451004-0463-41c4-8bed-7814188c7799]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:02:15 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:02:15.947 140354 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 20 15:02:15 compute-1 ovn_metadata_agent[140349]: global
Jan 20 15:02:15 compute-1 ovn_metadata_agent[140349]:     log         /dev/log local0 debug
Jan 20 15:02:15 compute-1 ovn_metadata_agent[140349]:     log-tag     haproxy-metadata-proxy-0296a21f-6ec4-43a7-8731-1d3692a5de4a
Jan 20 15:02:15 compute-1 ovn_metadata_agent[140349]:     user        root
Jan 20 15:02:15 compute-1 ovn_metadata_agent[140349]:     group       root
Jan 20 15:02:15 compute-1 ovn_metadata_agent[140349]:     maxconn     1024
Jan 20 15:02:15 compute-1 ovn_metadata_agent[140349]:     pidfile     /var/lib/neutron/external/pids/0296a21f-6ec4-43a7-8731-1d3692a5de4a.pid.haproxy
Jan 20 15:02:15 compute-1 ovn_metadata_agent[140349]:     daemon
Jan 20 15:02:15 compute-1 ovn_metadata_agent[140349]: 
Jan 20 15:02:15 compute-1 ovn_metadata_agent[140349]: defaults
Jan 20 15:02:15 compute-1 ovn_metadata_agent[140349]:     log global
Jan 20 15:02:15 compute-1 ovn_metadata_agent[140349]:     mode http
Jan 20 15:02:15 compute-1 ovn_metadata_agent[140349]:     option httplog
Jan 20 15:02:15 compute-1 ovn_metadata_agent[140349]:     option dontlognull
Jan 20 15:02:15 compute-1 ovn_metadata_agent[140349]:     option http-server-close
Jan 20 15:02:15 compute-1 ovn_metadata_agent[140349]:     option forwardfor
Jan 20 15:02:15 compute-1 ovn_metadata_agent[140349]:     retries                 3
Jan 20 15:02:15 compute-1 ovn_metadata_agent[140349]:     timeout http-request    30s
Jan 20 15:02:15 compute-1 ovn_metadata_agent[140349]:     timeout connect         30s
Jan 20 15:02:15 compute-1 ovn_metadata_agent[140349]:     timeout client          32s
Jan 20 15:02:15 compute-1 ovn_metadata_agent[140349]:     timeout server          32s
Jan 20 15:02:15 compute-1 ovn_metadata_agent[140349]:     timeout http-keep-alive 30s
Jan 20 15:02:15 compute-1 ovn_metadata_agent[140349]: 
Jan 20 15:02:15 compute-1 ovn_metadata_agent[140349]: 
Jan 20 15:02:15 compute-1 ovn_metadata_agent[140349]: listen listener
Jan 20 15:02:15 compute-1 ovn_metadata_agent[140349]:     bind 169.254.169.254:80
Jan 20 15:02:15 compute-1 ovn_metadata_agent[140349]:     server metadata /var/lib/neutron/metadata_proxy
Jan 20 15:02:15 compute-1 ovn_metadata_agent[140349]:     http-request add-header X-OVN-Network-ID 0296a21f-6ec4-43a7-8731-1d3692a5de4a
Jan 20 15:02:15 compute-1 ovn_metadata_agent[140349]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 20 15:02:15 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:02:15.948 140354 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-0296a21f-6ec4-43a7-8731-1d3692a5de4a', 'env', 'PROCESS_TAG=haproxy-0296a21f-6ec4-43a7-8731-1d3692a5de4a', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/0296a21f-6ec4-43a7-8731-1d3692a5de4a.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 20 15:02:15 compute-1 sudo[286879]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:02:15 compute-1 sudo[286879]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:02:15 compute-1 sudo[286879]: pam_unix(sudo:session): session closed for user root
Jan 20 15:02:15 compute-1 nova_compute[225855]: 2026-01-20 15:02:15.998 225859 DEBUG nova.compute.manager [req-4f27abd5-f123-410c-8c71-c1bce10dde58 req-73e39e3a-2b6d-4748-a151-f8dc15115a6b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 0a74eb9c-7f01-437d-a0c8-c01696fc8f9d] Received event network-vif-plugged-d9897519-3517-45da-be53-d342192fa380 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:02:15 compute-1 nova_compute[225855]: 2026-01-20 15:02:15.998 225859 DEBUG oslo_concurrency.lockutils [req-4f27abd5-f123-410c-8c71-c1bce10dde58 req-73e39e3a-2b6d-4748-a151-f8dc15115a6b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "0a74eb9c-7f01-437d-a0c8-c01696fc8f9d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:02:15 compute-1 nova_compute[225855]: 2026-01-20 15:02:15.999 225859 DEBUG oslo_concurrency.lockutils [req-4f27abd5-f123-410c-8c71-c1bce10dde58 req-73e39e3a-2b6d-4748-a151-f8dc15115a6b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "0a74eb9c-7f01-437d-a0c8-c01696fc8f9d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:02:15 compute-1 nova_compute[225855]: 2026-01-20 15:02:15.999 225859 DEBUG oslo_concurrency.lockutils [req-4f27abd5-f123-410c-8c71-c1bce10dde58 req-73e39e3a-2b6d-4748-a151-f8dc15115a6b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "0a74eb9c-7f01-437d-a0c8-c01696fc8f9d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:02:15 compute-1 nova_compute[225855]: 2026-01-20 15:02:15.999 225859 DEBUG nova.compute.manager [req-4f27abd5-f123-410c-8c71-c1bce10dde58 req-73e39e3a-2b6d-4748-a151-f8dc15115a6b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 0a74eb9c-7f01-437d-a0c8-c01696fc8f9d] Processing event network-vif-plugged-d9897519-3517-45da-be53-d342192fa380 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 20 15:02:16 compute-1 sudo[286939]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:02:16 compute-1 sudo[286939]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:02:16 compute-1 sudo[286939]: pam_unix(sudo:session): session closed for user root
Jan 20 15:02:16 compute-1 nova_compute[225855]: 2026-01-20 15:02:16.094 225859 DEBUG nova.compute.manager [None req-4994abe5-feb5-4df3-8b7a-ac887128508a 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] [instance: 0a74eb9c-7f01-437d-a0c8-c01696fc8f9d] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 20 15:02:16 compute-1 nova_compute[225855]: 2026-01-20 15:02:16.096 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768921336.0955298, 0a74eb9c-7f01-437d-a0c8-c01696fc8f9d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:02:16 compute-1 nova_compute[225855]: 2026-01-20 15:02:16.096 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 0a74eb9c-7f01-437d-a0c8-c01696fc8f9d] VM Started (Lifecycle Event)
Jan 20 15:02:16 compute-1 nova_compute[225855]: 2026-01-20 15:02:16.100 225859 DEBUG nova.virt.libvirt.driver [None req-4994abe5-feb5-4df3-8b7a-ac887128508a 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] [instance: 0a74eb9c-7f01-437d-a0c8-c01696fc8f9d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 20 15:02:16 compute-1 nova_compute[225855]: 2026-01-20 15:02:16.105 225859 INFO nova.virt.libvirt.driver [-] [instance: 0a74eb9c-7f01-437d-a0c8-c01696fc8f9d] Instance spawned successfully.
Jan 20 15:02:16 compute-1 nova_compute[225855]: 2026-01-20 15:02:16.105 225859 DEBUG nova.virt.libvirt.driver [None req-4994abe5-feb5-4df3-8b7a-ac887128508a 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] [instance: 0a74eb9c-7f01-437d-a0c8-c01696fc8f9d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 20 15:02:16 compute-1 sudo[286974]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 20 15:02:16 compute-1 sudo[286974]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:02:16 compute-1 nova_compute[225855]: 2026-01-20 15:02:16.150 225859 DEBUG nova.virt.libvirt.driver [None req-4994abe5-feb5-4df3-8b7a-ac887128508a 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] [instance: 0a74eb9c-7f01-437d-a0c8-c01696fc8f9d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:02:16 compute-1 nova_compute[225855]: 2026-01-20 15:02:16.150 225859 DEBUG nova.virt.libvirt.driver [None req-4994abe5-feb5-4df3-8b7a-ac887128508a 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] [instance: 0a74eb9c-7f01-437d-a0c8-c01696fc8f9d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:02:16 compute-1 nova_compute[225855]: 2026-01-20 15:02:16.151 225859 DEBUG nova.virt.libvirt.driver [None req-4994abe5-feb5-4df3-8b7a-ac887128508a 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] [instance: 0a74eb9c-7f01-437d-a0c8-c01696fc8f9d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:02:16 compute-1 nova_compute[225855]: 2026-01-20 15:02:16.151 225859 DEBUG nova.virt.libvirt.driver [None req-4994abe5-feb5-4df3-8b7a-ac887128508a 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] [instance: 0a74eb9c-7f01-437d-a0c8-c01696fc8f9d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:02:16 compute-1 nova_compute[225855]: 2026-01-20 15:02:16.152 225859 DEBUG nova.virt.libvirt.driver [None req-4994abe5-feb5-4df3-8b7a-ac887128508a 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] [instance: 0a74eb9c-7f01-437d-a0c8-c01696fc8f9d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:02:16 compute-1 nova_compute[225855]: 2026-01-20 15:02:16.152 225859 DEBUG nova.virt.libvirt.driver [None req-4994abe5-feb5-4df3-8b7a-ac887128508a 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] [instance: 0a74eb9c-7f01-437d-a0c8-c01696fc8f9d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:02:16 compute-1 nova_compute[225855]: 2026-01-20 15:02:16.164 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 0a74eb9c-7f01-437d-a0c8-c01696fc8f9d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:02:16 compute-1 nova_compute[225855]: 2026-01-20 15:02:16.168 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 0a74eb9c-7f01-437d-a0c8-c01696fc8f9d] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 15:02:16 compute-1 nova_compute[225855]: 2026-01-20 15:02:16.201 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 0a74eb9c-7f01-437d-a0c8-c01696fc8f9d] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 15:02:16 compute-1 nova_compute[225855]: 2026-01-20 15:02:16.202 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768921336.0957355, 0a74eb9c-7f01-437d-a0c8-c01696fc8f9d => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:02:16 compute-1 nova_compute[225855]: 2026-01-20 15:02:16.209 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 0a74eb9c-7f01-437d-a0c8-c01696fc8f9d] VM Paused (Lifecycle Event)
Jan 20 15:02:16 compute-1 nova_compute[225855]: 2026-01-20 15:02:16.237 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 0a74eb9c-7f01-437d-a0c8-c01696fc8f9d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:02:16 compute-1 nova_compute[225855]: 2026-01-20 15:02:16.243 225859 INFO nova.compute.manager [None req-4994abe5-feb5-4df3-8b7a-ac887128508a 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] [instance: 0a74eb9c-7f01-437d-a0c8-c01696fc8f9d] Took 8.31 seconds to spawn the instance on the hypervisor.
Jan 20 15:02:16 compute-1 nova_compute[225855]: 2026-01-20 15:02:16.244 225859 DEBUG nova.compute.manager [None req-4994abe5-feb5-4df3-8b7a-ac887128508a 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] [instance: 0a74eb9c-7f01-437d-a0c8-c01696fc8f9d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:02:16 compute-1 nova_compute[225855]: 2026-01-20 15:02:16.245 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768921336.0983849, 0a74eb9c-7f01-437d-a0c8-c01696fc8f9d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:02:16 compute-1 nova_compute[225855]: 2026-01-20 15:02:16.245 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 0a74eb9c-7f01-437d-a0c8-c01696fc8f9d] VM Resumed (Lifecycle Event)
Jan 20 15:02:16 compute-1 nova_compute[225855]: 2026-01-20 15:02:16.279 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 0a74eb9c-7f01-437d-a0c8-c01696fc8f9d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:02:16 compute-1 nova_compute[225855]: 2026-01-20 15:02:16.283 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 0a74eb9c-7f01-437d-a0c8-c01696fc8f9d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 15:02:16 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:02:16 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:02:16 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:02:16 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:02:16 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:02:16 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:02:16 compute-1 nova_compute[225855]: 2026-01-20 15:02:16.319 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 0a74eb9c-7f01-437d-a0c8-c01696fc8f9d] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 15:02:16 compute-1 nova_compute[225855]: 2026-01-20 15:02:16.321 225859 INFO nova.compute.manager [None req-4994abe5-feb5-4df3-8b7a-ac887128508a 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] [instance: 0a74eb9c-7f01-437d-a0c8-c01696fc8f9d] Took 9.27 seconds to build instance.
Jan 20 15:02:16 compute-1 nova_compute[225855]: 2026-01-20 15:02:16.345 225859 DEBUG oslo_concurrency.lockutils [None req-4994abe5-feb5-4df3-8b7a-ac887128508a 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] Lock "0a74eb9c-7f01-437d-a0c8-c01696fc8f9d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.353s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:02:16 compute-1 podman[287033]: 2026-01-20 15:02:16.371735945 +0000 UTC m=+0.077153557 container create 2d3a4bd0692f59e12e25a197a7fb7fc22e2f341a10b58391cee2862341e502e7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0296a21f-6ec4-43a7-8731-1d3692a5de4a, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 20 15:02:16 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:02:16 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:02:16 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:02:16.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:02:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:02:16.422 140354 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:02:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:02:16.422 140354 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:02:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:02:16.423 140354 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:02:16 compute-1 podman[287033]: 2026-01-20 15:02:16.328107285 +0000 UTC m=+0.033524917 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 20 15:02:16 compute-1 systemd[1]: Started libpod-conmon-2d3a4bd0692f59e12e25a197a7fb7fc22e2f341a10b58391cee2862341e502e7.scope.
Jan 20 15:02:16 compute-1 systemd[1]: Started libcrun container.
Jan 20 15:02:16 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b849f5b9085b963f8484cdd30046286b99fb772010029aac19f41105e734dffc/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 20 15:02:16 compute-1 podman[287033]: 2026-01-20 15:02:16.477052407 +0000 UTC m=+0.182470039 container init 2d3a4bd0692f59e12e25a197a7fb7fc22e2f341a10b58391cee2862341e502e7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0296a21f-6ec4-43a7-8731-1d3692a5de4a, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 15:02:16 compute-1 podman[287033]: 2026-01-20 15:02:16.482626164 +0000 UTC m=+0.188043776 container start 2d3a4bd0692f59e12e25a197a7fb7fc22e2f341a10b58391cee2862341e502e7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0296a21f-6ec4-43a7-8731-1d3692a5de4a, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 20 15:02:16 compute-1 neutron-haproxy-ovnmeta-0296a21f-6ec4-43a7-8731-1d3692a5de4a[287052]: [NOTICE]   (287056) : New worker (287060) forked
Jan 20 15:02:16 compute-1 neutron-haproxy-ovnmeta-0296a21f-6ec4-43a7-8731-1d3692a5de4a[287052]: [NOTICE]   (287056) : Loading success.
Jan 20 15:02:16 compute-1 sudo[286974]: pam_unix(sudo:session): session closed for user root
Jan 20 15:02:16 compute-1 nova_compute[225855]: 2026-01-20 15:02:16.984 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:02:17 compute-1 ceph-mon[81775]: pgmap v2318: 321 pgs: 321 active+clean; 386 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 7.0 MiB/s wr, 266 op/s
Jan 20 15:02:17 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:02:17 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 15:02:17 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:02:17 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 15:02:17 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 15:02:17 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:02:17 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:02:17 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:02:17 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:02:17.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:02:17 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e333 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:02:18 compute-1 nova_compute[225855]: 2026-01-20 15:02:18.123 225859 DEBUG nova.compute.manager [req-e08ff055-5bec-44fa-8e5c-ccc422962091 req-a32039a5-6bd8-49b2-80b2-58406b13dfb2 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 0a74eb9c-7f01-437d-a0c8-c01696fc8f9d] Received event network-vif-plugged-d9897519-3517-45da-be53-d342192fa380 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:02:18 compute-1 nova_compute[225855]: 2026-01-20 15:02:18.124 225859 DEBUG oslo_concurrency.lockutils [req-e08ff055-5bec-44fa-8e5c-ccc422962091 req-a32039a5-6bd8-49b2-80b2-58406b13dfb2 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "0a74eb9c-7f01-437d-a0c8-c01696fc8f9d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:02:18 compute-1 nova_compute[225855]: 2026-01-20 15:02:18.125 225859 DEBUG oslo_concurrency.lockutils [req-e08ff055-5bec-44fa-8e5c-ccc422962091 req-a32039a5-6bd8-49b2-80b2-58406b13dfb2 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "0a74eb9c-7f01-437d-a0c8-c01696fc8f9d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:02:18 compute-1 nova_compute[225855]: 2026-01-20 15:02:18.125 225859 DEBUG oslo_concurrency.lockutils [req-e08ff055-5bec-44fa-8e5c-ccc422962091 req-a32039a5-6bd8-49b2-80b2-58406b13dfb2 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "0a74eb9c-7f01-437d-a0c8-c01696fc8f9d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:02:18 compute-1 nova_compute[225855]: 2026-01-20 15:02:18.126 225859 DEBUG nova.compute.manager [req-e08ff055-5bec-44fa-8e5c-ccc422962091 req-a32039a5-6bd8-49b2-80b2-58406b13dfb2 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 0a74eb9c-7f01-437d-a0c8-c01696fc8f9d] No waiting events found dispatching network-vif-plugged-d9897519-3517-45da-be53-d342192fa380 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:02:18 compute-1 nova_compute[225855]: 2026-01-20 15:02:18.126 225859 WARNING nova.compute.manager [req-e08ff055-5bec-44fa-8e5c-ccc422962091 req-a32039a5-6bd8-49b2-80b2-58406b13dfb2 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 0a74eb9c-7f01-437d-a0c8-c01696fc8f9d] Received unexpected event network-vif-plugged-d9897519-3517-45da-be53-d342192fa380 for instance with vm_state active and task_state None.
Jan 20 15:02:18 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:02:18 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:02:18 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:02:18.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:02:18 compute-1 nova_compute[225855]: 2026-01-20 15:02:18.795 225859 DEBUG nova.compute.manager [None req-08ef4085-d599-4c96-ac53-ce83fd2a6586 1654794111844ca88666b3529173e9a7 3a1d679d5c954662a271e842fe2f2c05 - - default default] [instance: 2b9353e4-2bd0-4a4a-b9a1-dd24929a4af1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:02:18 compute-1 nova_compute[225855]: 2026-01-20 15:02:18.859 225859 INFO nova.compute.manager [None req-08ef4085-d599-4c96-ac53-ce83fd2a6586 1654794111844ca88666b3529173e9a7 3a1d679d5c954662a271e842fe2f2c05 - - default default] [instance: 2b9353e4-2bd0-4a4a-b9a1-dd24929a4af1] instance snapshotting
Jan 20 15:02:19 compute-1 nova_compute[225855]: 2026-01-20 15:02:19.042 225859 DEBUG oslo_concurrency.lockutils [None req-c0d13fe6-9e14-49ab-958a-aa1da6889b1e 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] Acquiring lock "0a74eb9c-7f01-437d-a0c8-c01696fc8f9d" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:02:19 compute-1 nova_compute[225855]: 2026-01-20 15:02:19.043 225859 DEBUG oslo_concurrency.lockutils [None req-c0d13fe6-9e14-49ab-958a-aa1da6889b1e 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] Lock "0a74eb9c-7f01-437d-a0c8-c01696fc8f9d" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:02:19 compute-1 nova_compute[225855]: 2026-01-20 15:02:19.043 225859 DEBUG oslo_concurrency.lockutils [None req-c0d13fe6-9e14-49ab-958a-aa1da6889b1e 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] Acquiring lock "0a74eb9c-7f01-437d-a0c8-c01696fc8f9d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:02:19 compute-1 nova_compute[225855]: 2026-01-20 15:02:19.043 225859 DEBUG oslo_concurrency.lockutils [None req-c0d13fe6-9e14-49ab-958a-aa1da6889b1e 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] Lock "0a74eb9c-7f01-437d-a0c8-c01696fc8f9d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:02:19 compute-1 nova_compute[225855]: 2026-01-20 15:02:19.043 225859 DEBUG oslo_concurrency.lockutils [None req-c0d13fe6-9e14-49ab-958a-aa1da6889b1e 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] Lock "0a74eb9c-7f01-437d-a0c8-c01696fc8f9d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:02:19 compute-1 nova_compute[225855]: 2026-01-20 15:02:19.044 225859 INFO nova.compute.manager [None req-c0d13fe6-9e14-49ab-958a-aa1da6889b1e 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] [instance: 0a74eb9c-7f01-437d-a0c8-c01696fc8f9d] Terminating instance
Jan 20 15:02:19 compute-1 nova_compute[225855]: 2026-01-20 15:02:19.046 225859 DEBUG nova.compute.manager [None req-c0d13fe6-9e14-49ab-958a-aa1da6889b1e 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] [instance: 0a74eb9c-7f01-437d-a0c8-c01696fc8f9d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 20 15:02:19 compute-1 ceph-mon[81775]: pgmap v2319: 321 pgs: 321 active+clean; 386 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 5.7 MiB/s wr, 251 op/s
Jan 20 15:02:19 compute-1 nova_compute[225855]: 2026-01-20 15:02:19.465 225859 INFO nova.virt.libvirt.driver [None req-08ef4085-d599-4c96-ac53-ce83fd2a6586 1654794111844ca88666b3529173e9a7 3a1d679d5c954662a271e842fe2f2c05 - - default default] [instance: 2b9353e4-2bd0-4a4a-b9a1-dd24929a4af1] Beginning live snapshot process
Jan 20 15:02:19 compute-1 nova_compute[225855]: 2026-01-20 15:02:19.476 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:02:19 compute-1 kernel: tapd9897519-35 (unregistering): left promiscuous mode
Jan 20 15:02:19 compute-1 NetworkManager[49104]: <info>  [1768921339.5113] device (tapd9897519-35): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 20 15:02:19 compute-1 ovn_controller[130490]: 2026-01-20T15:02:19Z|00630|binding|INFO|Releasing lport d9897519-3517-45da-be53-d342192fa380 from this chassis (sb_readonly=0)
Jan 20 15:02:19 compute-1 ovn_controller[130490]: 2026-01-20T15:02:19Z|00631|binding|INFO|Setting lport d9897519-3517-45da-be53-d342192fa380 down in Southbound
Jan 20 15:02:19 compute-1 nova_compute[225855]: 2026-01-20 15:02:19.567 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:02:19 compute-1 ovn_controller[130490]: 2026-01-20T15:02:19Z|00632|binding|INFO|Removing iface tapd9897519-35 ovn-installed in OVS
Jan 20 15:02:19 compute-1 nova_compute[225855]: 2026-01-20 15:02:19.569 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:02:19 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:02:19.574 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7a:5f:fd 10.100.0.8'], port_security=['fa:16:3e:7a:5f:fd 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '0a74eb9c-7f01-437d-a0c8-c01696fc8f9d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0296a21f-6ec4-43a7-8731-1d3692a5de4a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '654b3ce7b3644fc58f8dc9f60529320b', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'ff1c5b6a-5ab6-401e-b333-7f359193e012', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6a3d5928-255d-4c0c-af70-f26be5196416, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], logical_port=d9897519-3517-45da-be53-d342192fa380) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:02:19 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:02:19.575 140354 INFO neutron.agent.ovn.metadata.agent [-] Port d9897519-3517-45da-be53-d342192fa380 in datapath 0296a21f-6ec4-43a7-8731-1d3692a5de4a unbound from our chassis
Jan 20 15:02:19 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:02:19.577 140354 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0296a21f-6ec4-43a7-8731-1d3692a5de4a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 20 15:02:19 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:02:19.578 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[7357c450-9ff7-49f5-b4c7-9a6099266654]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:02:19 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:02:19.578 140354 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-0296a21f-6ec4-43a7-8731-1d3692a5de4a namespace which is not needed anymore
Jan 20 15:02:19 compute-1 nova_compute[225855]: 2026-01-20 15:02:19.584 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:02:19 compute-1 systemd[1]: machine-qemu\x2d74\x2dinstance\x2d0000009a.scope: Deactivated successfully.
Jan 20 15:02:19 compute-1 systemd[1]: machine-qemu\x2d74\x2dinstance\x2d0000009a.scope: Consumed 3.440s CPU time.
Jan 20 15:02:19 compute-1 systemd-machined[194361]: Machine qemu-74-instance-0000009a terminated.
Jan 20 15:02:19 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:02:19 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:02:19 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:02:19.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:02:19 compute-1 nova_compute[225855]: 2026-01-20 15:02:19.892 225859 DEBUG nova.virt.libvirt.imagebackend [None req-08ef4085-d599-4c96-ac53-ce83fd2a6586 1654794111844ca88666b3529173e9a7 3a1d679d5c954662a271e842fe2f2c05 - - default default] No parent info for a32b3e07-16d8-46fd-9a7b-c242c432fcf9; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Jan 20 15:02:19 compute-1 nova_compute[225855]: 2026-01-20 15:02:19.899 225859 INFO nova.virt.libvirt.driver [-] [instance: 0a74eb9c-7f01-437d-a0c8-c01696fc8f9d] Instance destroyed successfully.
Jan 20 15:02:19 compute-1 nova_compute[225855]: 2026-01-20 15:02:19.899 225859 DEBUG nova.objects.instance [None req-c0d13fe6-9e14-49ab-958a-aa1da6889b1e 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] Lazy-loading 'resources' on Instance uuid 0a74eb9c-7f01-437d-a0c8-c01696fc8f9d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:02:19 compute-1 nova_compute[225855]: 2026-01-20 15:02:19.923 225859 DEBUG nova.virt.libvirt.vif [None req-c0d13fe6-9e14-49ab-958a-aa1da6889b1e 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T15:02:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestServerMultinode-server-1200545801',display_name='tempest-TestServerMultinode-server-1200545801',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testservermultinode-server-1200545801',id=154,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-20T15:02:16Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='654b3ce7b3644fc58f8dc9f60529320b',ramdisk_id='',reservation_id='r-nu0b3t9m',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,admin,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestServerMultinode-1071973011',owner_user_name='tempest-TestServerMultinode-1071973011-project-admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T15:02:16Z,user_data=None,user_id='158563a99d4a420890aaa00b05c8bb57',uuid=0a74eb9c-7f01-437d-a0c8-c01696fc8f9d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d9897519-3517-45da-be53-d342192fa380", "address": "fa:16:3e:7a:5f:fd", "network": {"id": "0296a21f-6ec4-43a7-8731-1d3692a5de4a", "bridge": "br-int", "label": "tempest-TestServerMultinode-1878354210-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "908b5ba217ab458e8c9aa0e5a471c194", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd9897519-35", "ovs_interfaceid": "d9897519-3517-45da-be53-d342192fa380", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 20 15:02:19 compute-1 nova_compute[225855]: 2026-01-20 15:02:19.923 225859 DEBUG nova.network.os_vif_util [None req-c0d13fe6-9e14-49ab-958a-aa1da6889b1e 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] Converting VIF {"id": "d9897519-3517-45da-be53-d342192fa380", "address": "fa:16:3e:7a:5f:fd", "network": {"id": "0296a21f-6ec4-43a7-8731-1d3692a5de4a", "bridge": "br-int", "label": "tempest-TestServerMultinode-1878354210-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "908b5ba217ab458e8c9aa0e5a471c194", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd9897519-35", "ovs_interfaceid": "d9897519-3517-45da-be53-d342192fa380", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 15:02:19 compute-1 nova_compute[225855]: 2026-01-20 15:02:19.924 225859 DEBUG nova.network.os_vif_util [None req-c0d13fe6-9e14-49ab-958a-aa1da6889b1e 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7a:5f:fd,bridge_name='br-int',has_traffic_filtering=True,id=d9897519-3517-45da-be53-d342192fa380,network=Network(0296a21f-6ec4-43a7-8731-1d3692a5de4a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd9897519-35') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 15:02:19 compute-1 nova_compute[225855]: 2026-01-20 15:02:19.924 225859 DEBUG os_vif [None req-c0d13fe6-9e14-49ab-958a-aa1da6889b1e 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7a:5f:fd,bridge_name='br-int',has_traffic_filtering=True,id=d9897519-3517-45da-be53-d342192fa380,network=Network(0296a21f-6ec4-43a7-8731-1d3692a5de4a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd9897519-35') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 20 15:02:19 compute-1 nova_compute[225855]: 2026-01-20 15:02:19.926 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:02:19 compute-1 nova_compute[225855]: 2026-01-20 15:02:19.926 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd9897519-35, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:02:19 compute-1 nova_compute[225855]: 2026-01-20 15:02:19.929 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:02:19 compute-1 nova_compute[225855]: 2026-01-20 15:02:19.932 225859 INFO os_vif [None req-c0d13fe6-9e14-49ab-958a-aa1da6889b1e 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7a:5f:fd,bridge_name='br-int',has_traffic_filtering=True,id=d9897519-3517-45da-be53-d342192fa380,network=Network(0296a21f-6ec4-43a7-8731-1d3692a5de4a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd9897519-35')
Jan 20 15:02:20 compute-1 nova_compute[225855]: 2026-01-20 15:02:20.097 225859 DEBUG nova.storage.rbd_utils [None req-08ef4085-d599-4c96-ac53-ce83fd2a6586 1654794111844ca88666b3529173e9a7 3a1d679d5c954662a271e842fe2f2c05 - - default default] creating snapshot(ff4322e85df1493480d9bf54ecc676ab) on rbd image(2b9353e4-2bd0-4a4a-b9a1-dd24929a4af1_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 20 15:02:20 compute-1 neutron-haproxy-ovnmeta-0296a21f-6ec4-43a7-8731-1d3692a5de4a[287052]: [NOTICE]   (287056) : haproxy version is 2.8.14-c23fe91
Jan 20 15:02:20 compute-1 neutron-haproxy-ovnmeta-0296a21f-6ec4-43a7-8731-1d3692a5de4a[287052]: [NOTICE]   (287056) : path to executable is /usr/sbin/haproxy
Jan 20 15:02:20 compute-1 neutron-haproxy-ovnmeta-0296a21f-6ec4-43a7-8731-1d3692a5de4a[287052]: [WARNING]  (287056) : Exiting Master process...
Jan 20 15:02:20 compute-1 neutron-haproxy-ovnmeta-0296a21f-6ec4-43a7-8731-1d3692a5de4a[287052]: [ALERT]    (287056) : Current worker (287060) exited with code 143 (Terminated)
Jan 20 15:02:20 compute-1 neutron-haproxy-ovnmeta-0296a21f-6ec4-43a7-8731-1d3692a5de4a[287052]: [WARNING]  (287056) : All workers exited. Exiting... (0)
Jan 20 15:02:20 compute-1 systemd[1]: libpod-2d3a4bd0692f59e12e25a197a7fb7fc22e2f341a10b58391cee2862341e502e7.scope: Deactivated successfully.
Jan 20 15:02:20 compute-1 conmon[287052]: conmon 2d3a4bd0692f59e12e25 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2d3a4bd0692f59e12e25a197a7fb7fc22e2f341a10b58391cee2862341e502e7.scope/container/memory.events
Jan 20 15:02:20 compute-1 podman[287113]: 2026-01-20 15:02:20.184640384 +0000 UTC m=+0.519672772 container died 2d3a4bd0692f59e12e25a197a7fb7fc22e2f341a10b58391cee2862341e502e7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0296a21f-6ec4-43a7-8731-1d3692a5de4a, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 15:02:20 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:02:20 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:02:20 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:02:20.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:02:20 compute-1 nova_compute[225855]: 2026-01-20 15:02:20.678 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:02:20 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2d3a4bd0692f59e12e25a197a7fb7fc22e2f341a10b58391cee2862341e502e7-userdata-shm.mount: Deactivated successfully.
Jan 20 15:02:20 compute-1 systemd[1]: var-lib-containers-storage-overlay-b849f5b9085b963f8484cdd30046286b99fb772010029aac19f41105e734dffc-merged.mount: Deactivated successfully.
Jan 20 15:02:20 compute-1 podman[287113]: 2026-01-20 15:02:20.712976919 +0000 UTC m=+1.048009307 container cleanup 2d3a4bd0692f59e12e25a197a7fb7fc22e2f341a10b58391cee2862341e502e7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0296a21f-6ec4-43a7-8731-1d3692a5de4a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 20 15:02:20 compute-1 systemd[1]: libpod-conmon-2d3a4bd0692f59e12e25a197a7fb7fc22e2f341a10b58391cee2862341e502e7.scope: Deactivated successfully.
Jan 20 15:02:20 compute-1 podman[287213]: 2026-01-20 15:02:20.781573854 +0000 UTC m=+0.042198361 container remove 2d3a4bd0692f59e12e25a197a7fb7fc22e2f341a10b58391cee2862341e502e7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0296a21f-6ec4-43a7-8731-1d3692a5de4a, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 15:02:20 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:02:20.788 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[6c340738-3b21-4f98-8537-5e4eb94048c7]: (4, ('Tue Jan 20 03:02:19 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-0296a21f-6ec4-43a7-8731-1d3692a5de4a (2d3a4bd0692f59e12e25a197a7fb7fc22e2f341a10b58391cee2862341e502e7)\n2d3a4bd0692f59e12e25a197a7fb7fc22e2f341a10b58391cee2862341e502e7\nTue Jan 20 03:02:20 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-0296a21f-6ec4-43a7-8731-1d3692a5de4a (2d3a4bd0692f59e12e25a197a7fb7fc22e2f341a10b58391cee2862341e502e7)\n2d3a4bd0692f59e12e25a197a7fb7fc22e2f341a10b58391cee2862341e502e7\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:02:20 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:02:20.790 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[7191af5d-3ed5-41aa-b391-4b77d840a9d6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:02:20 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:02:20.791 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0296a21f-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:02:20 compute-1 nova_compute[225855]: 2026-01-20 15:02:20.792 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:02:20 compute-1 kernel: tap0296a21f-60: left promiscuous mode
Jan 20 15:02:20 compute-1 nova_compute[225855]: 2026-01-20 15:02:20.813 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:02:20 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:02:20.816 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[9e8c679a-333f-438c-97fb-f1f8ffa71024]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:02:20 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:02:20.828 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[b07ca782-dac1-4090-8914-d7684b22e78b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:02:20 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:02:20.829 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[a59ce970-9931-4673-9aef-35d4858dceb4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:02:20 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:02:20.846 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[bb280cf7-0b8a-43b3-89e4-e1f93af9e20b]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 637461, 'reachable_time': 43720, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 287229, 'error': None, 'target': 'ovnmeta-0296a21f-6ec4-43a7-8731-1d3692a5de4a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:02:20 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:02:20.849 140466 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-0296a21f-6ec4-43a7-8731-1d3692a5de4a deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 20 15:02:20 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:02:20.849 140466 DEBUG oslo.privsep.daemon [-] privsep: reply[3e9fa55a-8ef8-4d3f-80d3-6c165cce7813]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:02:20 compute-1 systemd[1]: run-netns-ovnmeta\x2d0296a21f\x2d6ec4\x2d43a7\x2d8731\x2d1d3692a5de4a.mount: Deactivated successfully.
Jan 20 15:02:21 compute-1 ceph-mon[81775]: pgmap v2320: 321 pgs: 321 active+clean; 386 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 5.4 MiB/s rd, 5.7 MiB/s wr, 317 op/s
Jan 20 15:02:21 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e334 e334: 3 total, 3 up, 3 in
Jan 20 15:02:21 compute-1 nova_compute[225855]: 2026-01-20 15:02:21.456 225859 DEBUG nova.storage.rbd_utils [None req-08ef4085-d599-4c96-ac53-ce83fd2a6586 1654794111844ca88666b3529173e9a7 3a1d679d5c954662a271e842fe2f2c05 - - default default] cloning vms/2b9353e4-2bd0-4a4a-b9a1-dd24929a4af1_disk@ff4322e85df1493480d9bf54ecc676ab to images/97fb0fa0-6803-480b-96d2-4a219153376d clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Jan 20 15:02:21 compute-1 nova_compute[225855]: 2026-01-20 15:02:21.524 225859 DEBUG nova.virt.libvirt.driver [None req-615f4080-5b92-4f81-a031-8b89a6db13e8 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: 474cec75-3b01-411a-9074-75859d2a9ddf] Instance in state 1 after 21 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Jan 20 15:02:21 compute-1 nova_compute[225855]: 2026-01-20 15:02:21.577 225859 DEBUG nova.storage.rbd_utils [None req-08ef4085-d599-4c96-ac53-ce83fd2a6586 1654794111844ca88666b3529173e9a7 3a1d679d5c954662a271e842fe2f2c05 - - default default] flattening images/97fb0fa0-6803-480b-96d2-4a219153376d flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Jan 20 15:02:21 compute-1 nova_compute[225855]: 2026-01-20 15:02:21.625 225859 INFO nova.virt.libvirt.driver [None req-c0d13fe6-9e14-49ab-958a-aa1da6889b1e 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] [instance: 0a74eb9c-7f01-437d-a0c8-c01696fc8f9d] Deleting instance files /var/lib/nova/instances/0a74eb9c-7f01-437d-a0c8-c01696fc8f9d_del
Jan 20 15:02:21 compute-1 nova_compute[225855]: 2026-01-20 15:02:21.626 225859 INFO nova.virt.libvirt.driver [None req-c0d13fe6-9e14-49ab-958a-aa1da6889b1e 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] [instance: 0a74eb9c-7f01-437d-a0c8-c01696fc8f9d] Deletion of /var/lib/nova/instances/0a74eb9c-7f01-437d-a0c8-c01696fc8f9d_del complete
Jan 20 15:02:21 compute-1 nova_compute[225855]: 2026-01-20 15:02:21.753 225859 INFO nova.compute.manager [None req-c0d13fe6-9e14-49ab-958a-aa1da6889b1e 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] [instance: 0a74eb9c-7f01-437d-a0c8-c01696fc8f9d] Took 2.71 seconds to destroy the instance on the hypervisor.
Jan 20 15:02:21 compute-1 nova_compute[225855]: 2026-01-20 15:02:21.754 225859 DEBUG oslo.service.loopingcall [None req-c0d13fe6-9e14-49ab-958a-aa1da6889b1e 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 20 15:02:21 compute-1 nova_compute[225855]: 2026-01-20 15:02:21.755 225859 DEBUG nova.compute.manager [-] [instance: 0a74eb9c-7f01-437d-a0c8-c01696fc8f9d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 20 15:02:21 compute-1 nova_compute[225855]: 2026-01-20 15:02:21.755 225859 DEBUG nova.network.neutron [-] [instance: 0a74eb9c-7f01-437d-a0c8-c01696fc8f9d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 20 15:02:21 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:02:21 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:02:21 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:02:21.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:02:21 compute-1 nova_compute[225855]: 2026-01-20 15:02:21.986 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:02:22 compute-1 nova_compute[225855]: 2026-01-20 15:02:22.079 225859 DEBUG nova.storage.rbd_utils [None req-08ef4085-d599-4c96-ac53-ce83fd2a6586 1654794111844ca88666b3529173e9a7 3a1d679d5c954662a271e842fe2f2c05 - - default default] removing snapshot(ff4322e85df1493480d9bf54ecc676ab) on rbd image(2b9353e4-2bd0-4a4a-b9a1-dd24929a4af1_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Jan 20 15:02:22 compute-1 nova_compute[225855]: 2026-01-20 15:02:22.341 225859 DEBUG nova.network.neutron [-] [instance: 0a74eb9c-7f01-437d-a0c8-c01696fc8f9d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:02:22 compute-1 nova_compute[225855]: 2026-01-20 15:02:22.364 225859 INFO nova.compute.manager [-] [instance: 0a74eb9c-7f01-437d-a0c8-c01696fc8f9d] Took 0.61 seconds to deallocate network for instance.
Jan 20 15:02:22 compute-1 ceph-mon[81775]: osdmap e334: 3 total, 3 up, 3 in
Jan 20 15:02:22 compute-1 nova_compute[225855]: 2026-01-20 15:02:22.408 225859 DEBUG nova.compute.manager [req-1abd9743-9543-4f0b-a76e-26bfdf3f3bbb req-8301eff1-6a0a-4a63-b185-14fc78858c09 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 0a74eb9c-7f01-437d-a0c8-c01696fc8f9d] Received event network-vif-deleted-d9897519-3517-45da-be53-d342192fa380 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:02:22 compute-1 nova_compute[225855]: 2026-01-20 15:02:22.410 225859 DEBUG oslo_concurrency.lockutils [None req-c0d13fe6-9e14-49ab-958a-aa1da6889b1e 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:02:22 compute-1 nova_compute[225855]: 2026-01-20 15:02:22.411 225859 DEBUG oslo_concurrency.lockutils [None req-c0d13fe6-9e14-49ab-958a-aa1da6889b1e 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:02:22 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:02:22 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:02:22 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:02:22.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:02:22 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e335 e335: 3 total, 3 up, 3 in
Jan 20 15:02:22 compute-1 nova_compute[225855]: 2026-01-20 15:02:22.454 225859 DEBUG nova.storage.rbd_utils [None req-08ef4085-d599-4c96-ac53-ce83fd2a6586 1654794111844ca88666b3529173e9a7 3a1d679d5c954662a271e842fe2f2c05 - - default default] creating snapshot(snap) on rbd image(97fb0fa0-6803-480b-96d2-4a219153376d) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 20 15:02:22 compute-1 nova_compute[225855]: 2026-01-20 15:02:22.590 225859 DEBUG oslo_concurrency.processutils [None req-c0d13fe6-9e14-49ab-958a-aa1da6889b1e 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:02:22 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e335 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:02:23 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:02:23 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/33524825' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:02:23 compute-1 nova_compute[225855]: 2026-01-20 15:02:23.032 225859 DEBUG oslo_concurrency.processutils [None req-c0d13fe6-9e14-49ab-958a-aa1da6889b1e 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:02:23 compute-1 nova_compute[225855]: 2026-01-20 15:02:23.039 225859 DEBUG nova.compute.provider_tree [None req-c0d13fe6-9e14-49ab-958a-aa1da6889b1e 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:02:23 compute-1 nova_compute[225855]: 2026-01-20 15:02:23.058 225859 DEBUG nova.scheduler.client.report [None req-c0d13fe6-9e14-49ab-958a-aa1da6889b1e 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:02:23 compute-1 nova_compute[225855]: 2026-01-20 15:02:23.092 225859 DEBUG oslo_concurrency.lockutils [None req-c0d13fe6-9e14-49ab-958a-aa1da6889b1e 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.681s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:02:23 compute-1 nova_compute[225855]: 2026-01-20 15:02:23.126 225859 INFO nova.scheduler.client.report [None req-c0d13fe6-9e14-49ab-958a-aa1da6889b1e 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] Deleted allocations for instance 0a74eb9c-7f01-437d-a0c8-c01696fc8f9d
Jan 20 15:02:23 compute-1 nova_compute[225855]: 2026-01-20 15:02:23.187 225859 DEBUG oslo_concurrency.lockutils [None req-c0d13fe6-9e14-49ab-958a-aa1da6889b1e 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] Lock "0a74eb9c-7f01-437d-a0c8-c01696fc8f9d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.144s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:02:23 compute-1 ceph-mon[81775]: pgmap v2322: 321 pgs: 321 active+clean; 386 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 6.9 MiB/s rd, 1.1 MiB/s wr, 272 op/s
Jan 20 15:02:23 compute-1 ceph-mon[81775]: osdmap e335: 3 total, 3 up, 3 in
Jan 20 15:02:23 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/33524825' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:02:23 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e336 e336: 3 total, 3 up, 3 in
Jan 20 15:02:23 compute-1 sudo[287345]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:02:23 compute-1 sudo[287345]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:02:23 compute-1 sudo[287345]: pam_unix(sudo:session): session closed for user root
Jan 20 15:02:23 compute-1 sudo[287377]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 15:02:23 compute-1 sudo[287377]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:02:23 compute-1 sudo[287377]: pam_unix(sudo:session): session closed for user root
Jan 20 15:02:23 compute-1 podman[287370]: 2026-01-20 15:02:23.774977064 +0000 UTC m=+0.084798183 container health_status 72be95c12e041eb2cd1a16ad70dfa1391d9f03a5f3180a614ead9459dcfcd46f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, 
io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 20 15:02:23 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:02:23 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:02:23 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:02:23.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:02:24 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:02:24 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:02:24 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:02:24.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:02:24 compute-1 ceph-mon[81775]: osdmap e336: 3 total, 3 up, 3 in
Jan 20 15:02:24 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:02:24 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:02:24 compute-1 ceph-mon[81775]: pgmap v2325: 321 pgs: 321 active+clean; 402 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 8.6 MiB/s rd, 3.0 MiB/s wr, 252 op/s
Jan 20 15:02:24 compute-1 nova_compute[225855]: 2026-01-20 15:02:24.977 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:02:25 compute-1 nova_compute[225855]: 2026-01-20 15:02:25.594 225859 INFO nova.virt.libvirt.driver [None req-08ef4085-d599-4c96-ac53-ce83fd2a6586 1654794111844ca88666b3529173e9a7 3a1d679d5c954662a271e842fe2f2c05 - - default default] [instance: 2b9353e4-2bd0-4a4a-b9a1-dd24929a4af1] Snapshot image upload complete
Jan 20 15:02:25 compute-1 nova_compute[225855]: 2026-01-20 15:02:25.595 225859 INFO nova.compute.manager [None req-08ef4085-d599-4c96-ac53-ce83fd2a6586 1654794111844ca88666b3529173e9a7 3a1d679d5c954662a271e842fe2f2c05 - - default default] [instance: 2b9353e4-2bd0-4a4a-b9a1-dd24929a4af1] Took 6.73 seconds to snapshot the instance on the hypervisor.
Jan 20 15:02:25 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:02:25 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:02:25 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:02:25.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:02:26 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:02:26 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:02:26 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:02:26.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:02:26 compute-1 nova_compute[225855]: 2026-01-20 15:02:26.988 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:02:27 compute-1 ceph-mon[81775]: pgmap v2326: 321 pgs: 321 active+clean; 405 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 9.3 MiB/s rd, 8.2 MiB/s wr, 294 op/s
Jan 20 15:02:27 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:02:27 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:02:27 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:02:27.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:02:27 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e336 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:02:28 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/841351583' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:02:28 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:02:28 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:02:28 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:02:28.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:02:29 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:02:29 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:02:29 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:02:29.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:02:29 compute-1 ceph-mon[81775]: pgmap v2327: 321 pgs: 321 active+clean; 405 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 7.0 MiB/s rd, 7.2 MiB/s wr, 212 op/s
Jan 20 15:02:29 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/3390699459' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:02:29 compute-1 nova_compute[225855]: 2026-01-20 15:02:29.979 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:02:30 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e337 e337: 3 total, 3 up, 3 in
Jan 20 15:02:30 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:02:30 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:02:30 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:02:30.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:02:31 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/3652761978' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:02:31 compute-1 ceph-mon[81775]: pgmap v2328: 321 pgs: 321 active+clean; 437 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 6.4 MiB/s rd, 11 MiB/s wr, 334 op/s
Jan 20 15:02:31 compute-1 ceph-mon[81775]: osdmap e337: 3 total, 3 up, 3 in
Jan 20 15:02:31 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:02:31 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:02:31 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:02:31.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:02:31 compute-1 nova_compute[225855]: 2026-01-20 15:02:31.989 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:02:32 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:02:32 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:02:32 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:02:32.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:02:32 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/3110069501' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:02:32 compute-1 nova_compute[225855]: 2026-01-20 15:02:32.592 225859 DEBUG nova.virt.libvirt.driver [None req-615f4080-5b92-4f81-a031-8b89a6db13e8 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: 474cec75-3b01-411a-9074-75859d2a9ddf] Instance in state 1 after 32 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Jan 20 15:02:32 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e337 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:02:33 compute-1 ceph-mon[81775]: pgmap v2330: 321 pgs: 321 active+clean; 451 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 5.9 MiB/s rd, 11 MiB/s wr, 327 op/s
Jan 20 15:02:33 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/2716342833' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:02:33 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:02:33 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:02:33 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:02:33.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:02:34 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:02:34 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:02:34 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:02:34.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:02:34 compute-1 ceph-mon[81775]: pgmap v2331: 321 pgs: 321 active+clean; 435 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 7.6 MiB/s wr, 258 op/s
Jan 20 15:02:34 compute-1 nova_compute[225855]: 2026-01-20 15:02:34.887 225859 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1768921339.68305, 0a74eb9c-7f01-437d-a0c8-c01696fc8f9d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:02:34 compute-1 nova_compute[225855]: 2026-01-20 15:02:34.888 225859 INFO nova.compute.manager [-] [instance: 0a74eb9c-7f01-437d-a0c8-c01696fc8f9d] VM Stopped (Lifecycle Event)
Jan 20 15:02:34 compute-1 nova_compute[225855]: 2026-01-20 15:02:34.926 225859 DEBUG nova.compute.manager [None req-4bb4e7bb-29ad-4602-8b5b-d205e42cebca - - - - - -] [instance: 0a74eb9c-7f01-437d-a0c8-c01696fc8f9d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:02:34 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:02:34.941 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=48, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '12:bb:42', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '06:92:24:f7:15:56'}, ipsec=False) old=SB_Global(nb_cfg=47) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:02:34 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:02:34.942 140354 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 20 15:02:34 compute-1 nova_compute[225855]: 2026-01-20 15:02:34.954 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:02:34 compute-1 nova_compute[225855]: 2026-01-20 15:02:34.981 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:02:35 compute-1 sudo[287427]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:02:35 compute-1 sudo[287427]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:02:35 compute-1 sudo[287427]: pam_unix(sudo:session): session closed for user root
Jan 20 15:02:35 compute-1 sudo[287453]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:02:35 compute-1 sudo[287453]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:02:35 compute-1 sudo[287453]: pam_unix(sudo:session): session closed for user root
Jan 20 15:02:35 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/1771894953' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:02:35 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/4115469443' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:02:35 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:02:35 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:02:35 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:02:35.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:02:36 compute-1 podman[287478]: 2026-01-20 15:02:36.249145813 +0000 UTC m=+0.048333034 container health_status 533cd8e0c3b8b7f910f15cef49e7f879ab46c12ce646a35628c15f13725eb500 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS)
Jan 20 15:02:36 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:02:36 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:02:36 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:02:36.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:02:36 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/3456037100' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:02:36 compute-1 ceph-mon[81775]: pgmap v2332: 321 pgs: 321 active+clean; 372 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 4.4 MiB/s wr, 239 op/s
Jan 20 15:02:36 compute-1 nova_compute[225855]: 2026-01-20 15:02:36.990 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:02:37 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:02:37 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:02:37 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:02:37.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:02:37 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:02:37.944 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5ffd4ac3-9266-4927-98ad-20a17782c725, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '48'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:02:37 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e337 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:02:38 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:02:38 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:02:38 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:02:38.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:02:39 compute-1 ceph-mon[81775]: pgmap v2333: 321 pgs: 321 active+clean; 372 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 4.4 MiB/s wr, 239 op/s
Jan 20 15:02:39 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:02:39 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:02:39 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:02:39.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:02:39 compute-1 nova_compute[225855]: 2026-01-20 15:02:39.984 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:02:40 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:02:40 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:02:40 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:02:40.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:02:40 compute-1 ceph-mon[81775]: pgmap v2334: 321 pgs: 321 active+clean; 372 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 880 KiB/s wr, 181 op/s
Jan 20 15:02:41 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:02:41 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:02:41 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:02:41.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:02:41 compute-1 nova_compute[225855]: 2026-01-20 15:02:41.993 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:02:42 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:02:42 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:02:42 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:02:42.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:02:42 compute-1 ovn_controller[130490]: 2026-01-20T15:02:42Z|00633|binding|INFO|Releasing lport 32afa112-2ec4-4d59-b6eb-a77db2858bd4 from this chassis (sb_readonly=0)
Jan 20 15:02:42 compute-1 ovn_controller[130490]: 2026-01-20T15:02:42Z|00634|binding|INFO|Releasing lport a8628d9e-196f-4b84-89fd-d3a41792b8a0 from this chassis (sb_readonly=0)
Jan 20 15:02:42 compute-1 nova_compute[225855]: 2026-01-20 15:02:42.655 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:02:42 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e337 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:02:43 compute-1 ceph-mon[81775]: pgmap v2335: 321 pgs: 321 active+clean; 372 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 749 KiB/s wr, 176 op/s
Jan 20 15:02:43 compute-1 nova_compute[225855]: 2026-01-20 15:02:43.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:02:43 compute-1 nova_compute[225855]: 2026-01-20 15:02:43.339 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 15:02:43 compute-1 nova_compute[225855]: 2026-01-20 15:02:43.633 225859 DEBUG nova.virt.libvirt.driver [None req-615f4080-5b92-4f81-a031-8b89a6db13e8 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: 474cec75-3b01-411a-9074-75859d2a9ddf] Instance in state 1 after 43 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Jan 20 15:02:43 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:02:43 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:02:43 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:02:43.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:02:44 compute-1 nova_compute[225855]: 2026-01-20 15:02:44.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:02:44 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:02:44 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:02:44 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:02:44.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:02:44 compute-1 nova_compute[225855]: 2026-01-20 15:02:44.988 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:02:45 compute-1 nova_compute[225855]: 2026-01-20 15:02:45.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:02:45 compute-1 nova_compute[225855]: 2026-01-20 15:02:45.340 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:02:45 compute-1 ceph-mon[81775]: pgmap v2336: 321 pgs: 321 active+clean; 372 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 29 KiB/s wr, 184 op/s
Jan 20 15:02:45 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:02:45 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:02:45 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:02:45.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:02:46 compute-1 nova_compute[225855]: 2026-01-20 15:02:46.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:02:46 compute-1 nova_compute[225855]: 2026-01-20 15:02:46.339 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 15:02:46 compute-1 nova_compute[225855]: 2026-01-20 15:02:46.340 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 20 15:02:46 compute-1 nova_compute[225855]: 2026-01-20 15:02:46.376 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "refresh_cache-474cec75-3b01-411a-9074-75859d2a9ddf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:02:46 compute-1 nova_compute[225855]: 2026-01-20 15:02:46.377 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquired lock "refresh_cache-474cec75-3b01-411a-9074-75859d2a9ddf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:02:46 compute-1 nova_compute[225855]: 2026-01-20 15:02:46.377 225859 DEBUG nova.network.neutron [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] [instance: 474cec75-3b01-411a-9074-75859d2a9ddf] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 20 15:02:46 compute-1 nova_compute[225855]: 2026-01-20 15:02:46.377 225859 DEBUG nova.objects.instance [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 474cec75-3b01-411a-9074-75859d2a9ddf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:02:46 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:02:46 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:02:46 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:02:46.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:02:46 compute-1 nova_compute[225855]: 2026-01-20 15:02:46.996 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:02:47 compute-1 ceph-mon[81775]: pgmap v2337: 321 pgs: 321 active+clean; 375 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 124 KiB/s wr, 190 op/s
Jan 20 15:02:47 compute-1 ovn_controller[130490]: 2026-01-20T15:02:47Z|00635|binding|INFO|Releasing lport 32afa112-2ec4-4d59-b6eb-a77db2858bd4 from this chassis (sb_readonly=0)
Jan 20 15:02:47 compute-1 ovn_controller[130490]: 2026-01-20T15:02:47Z|00636|binding|INFO|Releasing lport a8628d9e-196f-4b84-89fd-d3a41792b8a0 from this chassis (sb_readonly=0)
Jan 20 15:02:47 compute-1 nova_compute[225855]: 2026-01-20 15:02:47.522 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:02:47 compute-1 nova_compute[225855]: 2026-01-20 15:02:47.529 225859 DEBUG nova.network.neutron [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] [instance: 474cec75-3b01-411a-9074-75859d2a9ddf] Updating instance_info_cache with network_info: [{"id": "244332ba-1b58-4d42-98b0-245f9460c50f", "address": "fa:16:3e:6f:36:24", "network": {"id": "671e28d0-0b9e-41e0-b5e0-db1ccd4717ec", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-884777184-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "63555e5851564db08c6429231d264f2c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap244332ba-1b", "ovs_interfaceid": "244332ba-1b58-4d42-98b0-245f9460c50f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:02:47 compute-1 nova_compute[225855]: 2026-01-20 15:02:47.546 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Releasing lock "refresh_cache-474cec75-3b01-411a-9074-75859d2a9ddf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:02:47 compute-1 nova_compute[225855]: 2026-01-20 15:02:47.546 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] [instance: 474cec75-3b01-411a-9074-75859d2a9ddf] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 20 15:02:47 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:02:47 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:02:47 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:02:47.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:02:47 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e337 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:02:48 compute-1 nova_compute[225855]: 2026-01-20 15:02:48.340 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:02:48 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:02:48 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:02:48 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:02:48.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:02:48 compute-1 ceph-mon[81775]: pgmap v2338: 321 pgs: 321 active+clean; 375 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 122 KiB/s wr, 119 op/s
Jan 20 15:02:49 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:02:49 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:02:49 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:02:49.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:02:49 compute-1 nova_compute[225855]: 2026-01-20 15:02:49.991 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:02:50 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:02:50 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:02:50 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:02:50.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:02:51 compute-1 ceph-mon[81775]: pgmap v2339: 321 pgs: 321 active+clean; 395 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 1.8 MiB/s wr, 167 op/s
Jan 20 15:02:51 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:02:51 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:02:51 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:02:51.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:02:52 compute-1 nova_compute[225855]: 2026-01-20 15:02:52.039 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:02:52 compute-1 nova_compute[225855]: 2026-01-20 15:02:52.340 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:02:52 compute-1 nova_compute[225855]: 2026-01-20 15:02:52.394 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:02:52 compute-1 nova_compute[225855]: 2026-01-20 15:02:52.394 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:02:52 compute-1 nova_compute[225855]: 2026-01-20 15:02:52.395 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:02:52 compute-1 nova_compute[225855]: 2026-01-20 15:02:52.395 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 15:02:52 compute-1 nova_compute[225855]: 2026-01-20 15:02:52.395 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:02:52 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/2485144396' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:02:52 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/51400553' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:02:52 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:02:52 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:02:52 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:02:52.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:02:52 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:02:52 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3186788731' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:02:52 compute-1 nova_compute[225855]: 2026-01-20 15:02:52.837 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:02:52 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e337 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:02:53 compute-1 nova_compute[225855]: 2026-01-20 15:02:53.025 225859 DEBUG nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] skipping disk for instance-00000096 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 15:02:53 compute-1 nova_compute[225855]: 2026-01-20 15:02:53.026 225859 DEBUG nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] skipping disk for instance-00000096 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 15:02:53 compute-1 nova_compute[225855]: 2026-01-20 15:02:53.029 225859 DEBUG nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] skipping disk for instance-00000097 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 15:02:53 compute-1 nova_compute[225855]: 2026-01-20 15:02:53.030 225859 DEBUG nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] skipping disk for instance-00000097 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 15:02:53 compute-1 nova_compute[225855]: 2026-01-20 15:02:53.211 225859 WARNING nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 15:02:53 compute-1 nova_compute[225855]: 2026-01-20 15:02:53.212 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4067MB free_disk=20.851417541503906GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 15:02:53 compute-1 nova_compute[225855]: 2026-01-20 15:02:53.212 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:02:53 compute-1 nova_compute[225855]: 2026-01-20 15:02:53.213 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:02:53 compute-1 nova_compute[225855]: 2026-01-20 15:02:53.286 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Instance 474cec75-3b01-411a-9074-75859d2a9ddf actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 20 15:02:53 compute-1 nova_compute[225855]: 2026-01-20 15:02:53.286 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Instance 2b9353e4-2bd0-4a4a-b9a1-dd24929a4af1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 20 15:02:53 compute-1 nova_compute[225855]: 2026-01-20 15:02:53.287 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 15:02:53 compute-1 nova_compute[225855]: 2026-01-20 15:02:53.287 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 15:02:53 compute-1 nova_compute[225855]: 2026-01-20 15:02:53.344 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:02:53 compute-1 ceph-mon[81775]: pgmap v2340: 321 pgs: 321 active+clean; 405 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.2 MiB/s wr, 127 op/s
Jan 20 15:02:53 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/2220083062' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:02:53 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/3186788731' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:02:53 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/2057302993' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:02:53 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:02:53 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4019807425' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:02:53 compute-1 nova_compute[225855]: 2026-01-20 15:02:53.790 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:02:53 compute-1 nova_compute[225855]: 2026-01-20 15:02:53.795 225859 DEBUG nova.compute.provider_tree [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:02:53 compute-1 nova_compute[225855]: 2026-01-20 15:02:53.811 225859 DEBUG nova.scheduler.client.report [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:02:53 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:02:53 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:02:53 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:02:53.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:02:54 compute-1 podman[287551]: 2026-01-20 15:02:54.036428001 +0000 UTC m=+0.088315303 container health_status 72be95c12e041eb2cd1a16ad70dfa1391d9f03a5f3180a614ead9459dcfcd46f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 20 15:02:54 compute-1 nova_compute[225855]: 2026-01-20 15:02:54.090 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 15:02:54 compute-1 nova_compute[225855]: 2026-01-20 15:02:54.091 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.878s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:02:54 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:02:54 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:02:54 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:02:54.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:02:54 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/4019807425' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:02:54 compute-1 ceph-mon[81775]: pgmap v2341: 321 pgs: 321 active+clean; 409 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.4 MiB/s wr, 121 op/s
Jan 20 15:02:54 compute-1 nova_compute[225855]: 2026-01-20 15:02:54.676 225859 DEBUG nova.virt.libvirt.driver [None req-615f4080-5b92-4f81-a031-8b89a6db13e8 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: 474cec75-3b01-411a-9074-75859d2a9ddf] Instance in state 1 after 54 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Jan 20 15:02:54 compute-1 nova_compute[225855]: 2026-01-20 15:02:54.993 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:02:55 compute-1 sudo[287578]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:02:55 compute-1 sudo[287578]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:02:55 compute-1 sudo[287578]: pam_unix(sudo:session): session closed for user root
Jan 20 15:02:55 compute-1 sudo[287603]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:02:55 compute-1 sudo[287603]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:02:55 compute-1 sudo[287603]: pam_unix(sudo:session): session closed for user root
Jan 20 15:02:55 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:02:55 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:02:55 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:02:55.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:02:56 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:02:56 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:02:56 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:02:56.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:02:57 compute-1 nova_compute[225855]: 2026-01-20 15:02:57.050 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:02:57 compute-1 nova_compute[225855]: 2026-01-20 15:02:57.086 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:02:57 compute-1 ceph-mon[81775]: pgmap v2342: 321 pgs: 321 active+clean; 419 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 2.6 MiB/s wr, 131 op/s
Jan 20 15:02:57 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 15:02:57 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2084056353' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:02:57 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:02:57 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:02:57 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:02:57.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:02:57 compute-1 nova_compute[225855]: 2026-01-20 15:02:57.958 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:02:57 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e337 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:02:58 compute-1 nova_compute[225855]: 2026-01-20 15:02:58.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:02:58 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/2084056353' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:02:58 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:02:58 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:02:58 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:02:58.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:02:59 compute-1 ceph-mon[81775]: pgmap v2343: 321 pgs: 321 active+clean; 419 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 2.5 MiB/s wr, 114 op/s
Jan 20 15:02:59 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:02:59 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:02:59 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:02:59.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:02:59 compute-1 nova_compute[225855]: 2026-01-20 15:02:59.995 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:03:00 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:03:00 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:03:00 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:03:00.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:03:00 compute-1 ceph-mon[81775]: pgmap v2344: 321 pgs: 321 active+clean; 423 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 2.6 MiB/s wr, 120 op/s
Jan 20 15:03:00 compute-1 nova_compute[225855]: 2026-01-20 15:03:00.700 225859 INFO nova.virt.libvirt.driver [None req-615f4080-5b92-4f81-a031-8b89a6db13e8 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: 474cec75-3b01-411a-9074-75859d2a9ddf] Instance failed to shutdown in 60 seconds.
Jan 20 15:03:00 compute-1 kernel: tap244332ba-1b (unregistering): left promiscuous mode
Jan 20 15:03:00 compute-1 NetworkManager[49104]: <info>  [1768921380.8029] device (tap244332ba-1b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 20 15:03:00 compute-1 ovn_controller[130490]: 2026-01-20T15:03:00Z|00637|binding|INFO|Releasing lport 244332ba-1b58-4d42-98b0-245f9460c50f from this chassis (sb_readonly=0)
Jan 20 15:03:00 compute-1 ovn_controller[130490]: 2026-01-20T15:03:00Z|00638|binding|INFO|Setting lport 244332ba-1b58-4d42-98b0-245f9460c50f down in Southbound
Jan 20 15:03:00 compute-1 ovn_controller[130490]: 2026-01-20T15:03:00Z|00639|binding|INFO|Removing iface tap244332ba-1b ovn-installed in OVS
Jan 20 15:03:00 compute-1 nova_compute[225855]: 2026-01-20 15:03:00.817 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:03:00 compute-1 nova_compute[225855]: 2026-01-20 15:03:00.819 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:03:00 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:03:00.825 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6f:36:24 10.100.0.4'], port_security=['fa:16:3e:6f:36:24 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '474cec75-3b01-411a-9074-75859d2a9ddf', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-671e28d0-0b9e-41e0-b5e0-db1ccd4717ec', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '63555e5851564db08c6429231d264f2c', 'neutron:revision_number': '4', 'neutron:security_group_ids': '7e54c470-6a6f-454e-ae01-9d2d59b2c74d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=248fa32c-94be-4e1b-b4d3-cb9fac0ec155, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], logical_port=244332ba-1b58-4d42-98b0-245f9460c50f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:03:00 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:03:00.827 140354 INFO neutron.agent.ovn.metadata.agent [-] Port 244332ba-1b58-4d42-98b0-245f9460c50f in datapath 671e28d0-0b9e-41e0-b5e0-db1ccd4717ec unbound from our chassis
Jan 20 15:03:00 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:03:00.831 140354 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 671e28d0-0b9e-41e0-b5e0-db1ccd4717ec, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 20 15:03:00 compute-1 nova_compute[225855]: 2026-01-20 15:03:00.834 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:03:00 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:03:00.834 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[a0e43fef-8746-4384-b951-6ad0be9c3680]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:03:00 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:03:00.836 140354 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-671e28d0-0b9e-41e0-b5e0-db1ccd4717ec namespace which is not needed anymore
Jan 20 15:03:00 compute-1 systemd[1]: machine-qemu\x2d72\x2dinstance\x2d00000096.scope: Deactivated successfully.
Jan 20 15:03:00 compute-1 systemd[1]: machine-qemu\x2d72\x2dinstance\x2d00000096.scope: Consumed 2.067s CPU time.
Jan 20 15:03:00 compute-1 systemd-machined[194361]: Machine qemu-72-instance-00000096 terminated.
Jan 20 15:03:00 compute-1 nova_compute[225855]: 2026-01-20 15:03:00.933 225859 INFO nova.virt.libvirt.driver [-] [instance: 474cec75-3b01-411a-9074-75859d2a9ddf] Instance destroyed successfully.
Jan 20 15:03:00 compute-1 nova_compute[225855]: 2026-01-20 15:03:00.934 225859 DEBUG nova.objects.instance [None req-615f4080-5b92-4f81-a031-8b89a6db13e8 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Lazy-loading 'numa_topology' on Instance uuid 474cec75-3b01-411a-9074-75859d2a9ddf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:03:00 compute-1 nova_compute[225855]: 2026-01-20 15:03:00.951 225859 INFO nova.virt.libvirt.driver [None req-615f4080-5b92-4f81-a031-8b89a6db13e8 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: 474cec75-3b01-411a-9074-75859d2a9ddf] Attempting a stable device rescue
Jan 20 15:03:00 compute-1 neutron-haproxy-ovnmeta-671e28d0-0b9e-41e0-b5e0-db1ccd4717ec[285687]: [NOTICE]   (285691) : haproxy version is 2.8.14-c23fe91
Jan 20 15:03:00 compute-1 neutron-haproxy-ovnmeta-671e28d0-0b9e-41e0-b5e0-db1ccd4717ec[285687]: [NOTICE]   (285691) : path to executable is /usr/sbin/haproxy
Jan 20 15:03:00 compute-1 neutron-haproxy-ovnmeta-671e28d0-0b9e-41e0-b5e0-db1ccd4717ec[285687]: [WARNING]  (285691) : Exiting Master process...
Jan 20 15:03:00 compute-1 neutron-haproxy-ovnmeta-671e28d0-0b9e-41e0-b5e0-db1ccd4717ec[285687]: [WARNING]  (285691) : Exiting Master process...
Jan 20 15:03:00 compute-1 neutron-haproxy-ovnmeta-671e28d0-0b9e-41e0-b5e0-db1ccd4717ec[285687]: [ALERT]    (285691) : Current worker (285693) exited with code 143 (Terminated)
Jan 20 15:03:00 compute-1 neutron-haproxy-ovnmeta-671e28d0-0b9e-41e0-b5e0-db1ccd4717ec[285687]: [WARNING]  (285691) : All workers exited. Exiting... (0)
Jan 20 15:03:00 compute-1 systemd[1]: libpod-c3195ecf870ff81516979fba2a40a24375f9afe5dc36f02bd45e63cf475acd84.scope: Deactivated successfully.
Jan 20 15:03:00 compute-1 podman[287658]: 2026-01-20 15:03:00.988706915 +0000 UTC m=+0.053451189 container died c3195ecf870ff81516979fba2a40a24375f9afe5dc36f02bd45e63cf475acd84 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-671e28d0-0b9e-41e0-b5e0-db1ccd4717ec, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.license=GPLv2)
Jan 20 15:03:01 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c3195ecf870ff81516979fba2a40a24375f9afe5dc36f02bd45e63cf475acd84-userdata-shm.mount: Deactivated successfully.
Jan 20 15:03:01 compute-1 systemd[1]: var-lib-containers-storage-overlay-9583a611064af7f6e72be93b7a47c8363e51f876af054788b7c0ed954b9e3b3d-merged.mount: Deactivated successfully.
Jan 20 15:03:01 compute-1 podman[287658]: 2026-01-20 15:03:01.025158753 +0000 UTC m=+0.089903057 container cleanup c3195ecf870ff81516979fba2a40a24375f9afe5dc36f02bd45e63cf475acd84 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-671e28d0-0b9e-41e0-b5e0-db1ccd4717ec, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 20 15:03:01 compute-1 systemd[1]: libpod-conmon-c3195ecf870ff81516979fba2a40a24375f9afe5dc36f02bd45e63cf475acd84.scope: Deactivated successfully.
Jan 20 15:03:01 compute-1 podman[287692]: 2026-01-20 15:03:01.113554097 +0000 UTC m=+0.063988707 container remove c3195ecf870ff81516979fba2a40a24375f9afe5dc36f02bd45e63cf475acd84 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-671e28d0-0b9e-41e0-b5e0-db1ccd4717ec, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 20 15:03:01 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:03:01.121 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[a186a4d1-8063-463c-a368-1181aaf39d7f]: (4, ('Tue Jan 20 03:03:00 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-671e28d0-0b9e-41e0-b5e0-db1ccd4717ec (c3195ecf870ff81516979fba2a40a24375f9afe5dc36f02bd45e63cf475acd84)\nc3195ecf870ff81516979fba2a40a24375f9afe5dc36f02bd45e63cf475acd84\nTue Jan 20 03:03:01 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-671e28d0-0b9e-41e0-b5e0-db1ccd4717ec (c3195ecf870ff81516979fba2a40a24375f9afe5dc36f02bd45e63cf475acd84)\nc3195ecf870ff81516979fba2a40a24375f9afe5dc36f02bd45e63cf475acd84\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:03:01 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:03:01.123 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[3db5cfad-c551-4141-b656-9b4aabae0bfa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:03:01 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:03:01.123 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap671e28d0-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:03:01 compute-1 nova_compute[225855]: 2026-01-20 15:03:01.125 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:03:01 compute-1 kernel: tap671e28d0-00: left promiscuous mode
Jan 20 15:03:01 compute-1 nova_compute[225855]: 2026-01-20 15:03:01.144 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:03:01 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:03:01.148 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[678f65af-d52b-4c0e-b9b9-e623bb6c8fd6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:03:01 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:03:01.167 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[fa196537-0594-4955-b06e-58ce0c409a62]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:03:01 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:03:01.168 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[41599865-25eb-47e5-b4d0-4195e257bcf6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:03:01 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:03:01.184 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[ac02020c-a52b-425d-bd6b-e89c82c4732f]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 635029, 'reachable_time': 43720, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 287709, 'error': None, 'target': 'ovnmeta-671e28d0-0b9e-41e0-b5e0-db1ccd4717ec', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:03:01 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:03:01.187 140466 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-671e28d0-0b9e-41e0-b5e0-db1ccd4717ec deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 20 15:03:01 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:03:01.187 140466 DEBUG oslo.privsep.daemon [-] privsep: reply[36d5aa68-b993-4d9a-a226-5c8476dc36c7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:03:01 compute-1 systemd[1]: run-netns-ovnmeta\x2d671e28d0\x2d0b9e\x2d41e0\x2db5e0\x2ddb1ccd4717ec.mount: Deactivated successfully.
Jan 20 15:03:01 compute-1 nova_compute[225855]: 2026-01-20 15:03:01.341 225859 DEBUG nova.virt.libvirt.driver [None req-615f4080-5b92-4f81-a031-8b89a6db13e8 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] rescue generated disk_info: {'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}, 'disk.rescue': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}}} rescue /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4314
Jan 20 15:03:01 compute-1 nova_compute[225855]: 2026-01-20 15:03:01.347 225859 DEBUG nova.virt.libvirt.driver [None req-615f4080-5b92-4f81-a031-8b89a6db13e8 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: 474cec75-3b01-411a-9074-75859d2a9ddf] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719
Jan 20 15:03:01 compute-1 nova_compute[225855]: 2026-01-20 15:03:01.348 225859 INFO nova.virt.libvirt.driver [None req-615f4080-5b92-4f81-a031-8b89a6db13e8 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: 474cec75-3b01-411a-9074-75859d2a9ddf] Creating image(s)
Jan 20 15:03:01 compute-1 nova_compute[225855]: 2026-01-20 15:03:01.379 225859 DEBUG nova.storage.rbd_utils [None req-615f4080-5b92-4f81-a031-8b89a6db13e8 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] rbd image 474cec75-3b01-411a-9074-75859d2a9ddf_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:03:01 compute-1 nova_compute[225855]: 2026-01-20 15:03:01.384 225859 DEBUG nova.objects.instance [None req-615f4080-5b92-4f81-a031-8b89a6db13e8 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Lazy-loading 'trusted_certs' on Instance uuid 474cec75-3b01-411a-9074-75859d2a9ddf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:03:01 compute-1 nova_compute[225855]: 2026-01-20 15:03:01.631 225859 DEBUG nova.storage.rbd_utils [None req-615f4080-5b92-4f81-a031-8b89a6db13e8 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] rbd image 474cec75-3b01-411a-9074-75859d2a9ddf_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:03:01 compute-1 nova_compute[225855]: 2026-01-20 15:03:01.659 225859 DEBUG nova.storage.rbd_utils [None req-615f4080-5b92-4f81-a031-8b89a6db13e8 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] rbd image 474cec75-3b01-411a-9074-75859d2a9ddf_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:03:01 compute-1 nova_compute[225855]: 2026-01-20 15:03:01.663 225859 DEBUG oslo_concurrency.lockutils [None req-615f4080-5b92-4f81-a031-8b89a6db13e8 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Acquiring lock "79f6afbb8111f4bd3cacc8182575e32c185fa390" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:03:01 compute-1 nova_compute[225855]: 2026-01-20 15:03:01.664 225859 DEBUG oslo_concurrency.lockutils [None req-615f4080-5b92-4f81-a031-8b89a6db13e8 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Lock "79f6afbb8111f4bd3cacc8182575e32c185fa390" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:03:01 compute-1 nova_compute[225855]: 2026-01-20 15:03:01.819 225859 DEBUG nova.compute.manager [req-aa6982d4-ac7c-4fa5-b983-76cc5d59e4cf req-6912e038-dc88-4ebe-b2f8-d2fe8adc5c1f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 474cec75-3b01-411a-9074-75859d2a9ddf] Received event network-vif-unplugged-244332ba-1b58-4d42-98b0-245f9460c50f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:03:01 compute-1 nova_compute[225855]: 2026-01-20 15:03:01.819 225859 DEBUG oslo_concurrency.lockutils [req-aa6982d4-ac7c-4fa5-b983-76cc5d59e4cf req-6912e038-dc88-4ebe-b2f8-d2fe8adc5c1f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "474cec75-3b01-411a-9074-75859d2a9ddf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:03:01 compute-1 nova_compute[225855]: 2026-01-20 15:03:01.819 225859 DEBUG oslo_concurrency.lockutils [req-aa6982d4-ac7c-4fa5-b983-76cc5d59e4cf req-6912e038-dc88-4ebe-b2f8-d2fe8adc5c1f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "474cec75-3b01-411a-9074-75859d2a9ddf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:03:01 compute-1 nova_compute[225855]: 2026-01-20 15:03:01.820 225859 DEBUG oslo_concurrency.lockutils [req-aa6982d4-ac7c-4fa5-b983-76cc5d59e4cf req-6912e038-dc88-4ebe-b2f8-d2fe8adc5c1f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "474cec75-3b01-411a-9074-75859d2a9ddf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:03:01 compute-1 nova_compute[225855]: 2026-01-20 15:03:01.820 225859 DEBUG nova.compute.manager [req-aa6982d4-ac7c-4fa5-b983-76cc5d59e4cf req-6912e038-dc88-4ebe-b2f8-d2fe8adc5c1f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 474cec75-3b01-411a-9074-75859d2a9ddf] No waiting events found dispatching network-vif-unplugged-244332ba-1b58-4d42-98b0-245f9460c50f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:03:01 compute-1 nova_compute[225855]: 2026-01-20 15:03:01.820 225859 WARNING nova.compute.manager [req-aa6982d4-ac7c-4fa5-b983-76cc5d59e4cf req-6912e038-dc88-4ebe-b2f8-d2fe8adc5c1f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 474cec75-3b01-411a-9074-75859d2a9ddf] Received unexpected event network-vif-unplugged-244332ba-1b58-4d42-98b0-245f9460c50f for instance with vm_state active and task_state rescuing.
Jan 20 15:03:01 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:03:01 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:03:01 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:03:01.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:03:01 compute-1 nova_compute[225855]: 2026-01-20 15:03:01.994 225859 DEBUG nova.virt.libvirt.imagebackend [None req-615f4080-5b92-4f81-a031-8b89a6db13e8 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Image locations are: [{'url': 'rbd://e399cf45-e6b6-5393-99f1-75c601d3f188/images/9c1c8ad1-376e-4dd8-93d8-70f0aa412977/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://e399cf45-e6b6-5393-99f1-75c601d3f188/images/9c1c8ad1-376e-4dd8-93d8-70f0aa412977/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Jan 20 15:03:02 compute-1 nova_compute[225855]: 2026-01-20 15:03:02.048 225859 DEBUG nova.virt.libvirt.imagebackend [None req-615f4080-5b92-4f81-a031-8b89a6db13e8 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Selected location: {'url': 'rbd://e399cf45-e6b6-5393-99f1-75c601d3f188/images/9c1c8ad1-376e-4dd8-93d8-70f0aa412977/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094
Jan 20 15:03:02 compute-1 nova_compute[225855]: 2026-01-20 15:03:02.049 225859 DEBUG nova.storage.rbd_utils [None req-615f4080-5b92-4f81-a031-8b89a6db13e8 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] cloning images/9c1c8ad1-376e-4dd8-93d8-70f0aa412977@snap to None/474cec75-3b01-411a-9074-75859d2a9ddf_disk.rescue clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Jan 20 15:03:02 compute-1 nova_compute[225855]: 2026-01-20 15:03:02.077 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:03:02 compute-1 nova_compute[225855]: 2026-01-20 15:03:02.287 225859 DEBUG oslo_concurrency.lockutils [None req-615f4080-5b92-4f81-a031-8b89a6db13e8 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Lock "79f6afbb8111f4bd3cacc8182575e32c185fa390" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.623s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:03:02 compute-1 nova_compute[225855]: 2026-01-20 15:03:02.338 225859 DEBUG nova.objects.instance [None req-615f4080-5b92-4f81-a031-8b89a6db13e8 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Lazy-loading 'migration_context' on Instance uuid 474cec75-3b01-411a-9074-75859d2a9ddf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:03:02 compute-1 nova_compute[225855]: 2026-01-20 15:03:02.379 225859 DEBUG nova.virt.libvirt.driver [None req-615f4080-5b92-4f81-a031-8b89a6db13e8 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: 474cec75-3b01-411a-9074-75859d2a9ddf] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 20 15:03:02 compute-1 nova_compute[225855]: 2026-01-20 15:03:02.382 225859 DEBUG nova.virt.libvirt.driver [None req-615f4080-5b92-4f81-a031-8b89a6db13e8 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: 474cec75-3b01-411a-9074-75859d2a9ddf] Start _get_guest_xml network_info=[{"id": "244332ba-1b58-4d42-98b0-245f9460c50f", "address": "fa:16:3e:6f:36:24", "network": {"id": "671e28d0-0b9e-41e0-b5e0-db1ccd4717ec", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-884777184-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerBootFromVolumeStableRescueTest-884777184-network", "vif_mac": "fa:16:3e:6f:36:24"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "63555e5851564db08c6429231d264f2c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap244332ba-1b", "ovs_interfaceid": "244332ba-1b58-4d42-98b0-245f9460c50f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}, 'disk.rescue': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>) rescue={'image_id': '9c1c8ad1-376e-4dd8-93d8-70f0aa412977', 'kernel_id': '', 'ramdisk_id': ''} block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'delete_on_termination': False, 'device_type': 'disk', 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-41743468-7add-45cb-bc94-02eb6f850278', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '41743468-7add-45cb-bc94-02eb6f850278', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '474cec75-3b01-411a-9074-75859d2a9ddf', 'attached_at': '', 'detached_at': '', 'volume_id': '41743468-7add-45cb-bc94-02eb6f850278', 'serial': '41743468-7add-45cb-bc94-02eb6f850278'}, 'guest_format': None, 'boot_index': 0, 'mount_device': '/dev/vda', 'attachment_id': '20658306-e0e7-4d9c-a904-24cfdd1b82ee', 'disk_bus': 'virtio', 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 20 15:03:02 compute-1 nova_compute[225855]: 2026-01-20 15:03:02.382 225859 DEBUG nova.objects.instance [None req-615f4080-5b92-4f81-a031-8b89a6db13e8 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Lazy-loading 'resources' on Instance uuid 474cec75-3b01-411a-9074-75859d2a9ddf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:03:02 compute-1 nova_compute[225855]: 2026-01-20 15:03:02.407 225859 WARNING nova.virt.libvirt.driver [None req-615f4080-5b92-4f81-a031-8b89a6db13e8 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 15:03:02 compute-1 nova_compute[225855]: 2026-01-20 15:03:02.413 225859 DEBUG nova.virt.libvirt.host [None req-615f4080-5b92-4f81-a031-8b89a6db13e8 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 20 15:03:02 compute-1 nova_compute[225855]: 2026-01-20 15:03:02.413 225859 DEBUG nova.virt.libvirt.host [None req-615f4080-5b92-4f81-a031-8b89a6db13e8 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 20 15:03:02 compute-1 nova_compute[225855]: 2026-01-20 15:03:02.416 225859 DEBUG nova.virt.libvirt.host [None req-615f4080-5b92-4f81-a031-8b89a6db13e8 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 20 15:03:02 compute-1 nova_compute[225855]: 2026-01-20 15:03:02.417 225859 DEBUG nova.virt.libvirt.host [None req-615f4080-5b92-4f81-a031-8b89a6db13e8 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 20 15:03:02 compute-1 nova_compute[225855]: 2026-01-20 15:03:02.418 225859 DEBUG nova.virt.libvirt.driver [None req-615f4080-5b92-4f81-a031-8b89a6db13e8 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 20 15:03:02 compute-1 nova_compute[225855]: 2026-01-20 15:03:02.418 225859 DEBUG nova.virt.hardware [None req-615f4080-5b92-4f81-a031-8b89a6db13e8 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-20T14:21:55Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='522deaab-a741-4dbb-932d-d8b13a211c33',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 20 15:03:02 compute-1 nova_compute[225855]: 2026-01-20 15:03:02.419 225859 DEBUG nova.virt.hardware [None req-615f4080-5b92-4f81-a031-8b89a6db13e8 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 20 15:03:02 compute-1 nova_compute[225855]: 2026-01-20 15:03:02.419 225859 DEBUG nova.virt.hardware [None req-615f4080-5b92-4f81-a031-8b89a6db13e8 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 20 15:03:02 compute-1 nova_compute[225855]: 2026-01-20 15:03:02.419 225859 DEBUG nova.virt.hardware [None req-615f4080-5b92-4f81-a031-8b89a6db13e8 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 20 15:03:02 compute-1 nova_compute[225855]: 2026-01-20 15:03:02.419 225859 DEBUG nova.virt.hardware [None req-615f4080-5b92-4f81-a031-8b89a6db13e8 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 20 15:03:02 compute-1 nova_compute[225855]: 2026-01-20 15:03:02.420 225859 DEBUG nova.virt.hardware [None req-615f4080-5b92-4f81-a031-8b89a6db13e8 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 20 15:03:02 compute-1 nova_compute[225855]: 2026-01-20 15:03:02.420 225859 DEBUG nova.virt.hardware [None req-615f4080-5b92-4f81-a031-8b89a6db13e8 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 20 15:03:02 compute-1 nova_compute[225855]: 2026-01-20 15:03:02.420 225859 DEBUG nova.virt.hardware [None req-615f4080-5b92-4f81-a031-8b89a6db13e8 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 20 15:03:02 compute-1 nova_compute[225855]: 2026-01-20 15:03:02.420 225859 DEBUG nova.virt.hardware [None req-615f4080-5b92-4f81-a031-8b89a6db13e8 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 20 15:03:02 compute-1 nova_compute[225855]: 2026-01-20 15:03:02.421 225859 DEBUG nova.virt.hardware [None req-615f4080-5b92-4f81-a031-8b89a6db13e8 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 20 15:03:02 compute-1 nova_compute[225855]: 2026-01-20 15:03:02.421 225859 DEBUG nova.virt.hardware [None req-615f4080-5b92-4f81-a031-8b89a6db13e8 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 20 15:03:02 compute-1 nova_compute[225855]: 2026-01-20 15:03:02.421 225859 DEBUG nova.objects.instance [None req-615f4080-5b92-4f81-a031-8b89a6db13e8 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Lazy-loading 'vcpu_model' on Instance uuid 474cec75-3b01-411a-9074-75859d2a9ddf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:03:02 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:03:02 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:03:02 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:03:02.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:03:02 compute-1 nova_compute[225855]: 2026-01-20 15:03:02.478 225859 DEBUG oslo_concurrency.processutils [None req-615f4080-5b92-4f81-a031-8b89a6db13e8 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:03:02 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 15:03:02 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2658445181' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:03:02 compute-1 nova_compute[225855]: 2026-01-20 15:03:02.969 225859 DEBUG oslo_concurrency.processutils [None req-615f4080-5b92-4f81-a031-8b89a6db13e8 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.491s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:03:02 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e337 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:03:03 compute-1 nova_compute[225855]: 2026-01-20 15:03:03.004 225859 DEBUG oslo_concurrency.processutils [None req-615f4080-5b92-4f81-a031-8b89a6db13e8 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:03:03 compute-1 ceph-mon[81775]: pgmap v2345: 321 pgs: 321 active+clean; 423 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 909 KiB/s wr, 75 op/s
Jan 20 15:03:03 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/2658445181' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:03:03 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 15:03:03 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2503648110' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:03:03 compute-1 nova_compute[225855]: 2026-01-20 15:03:03.509 225859 DEBUG oslo_concurrency.processutils [None req-615f4080-5b92-4f81-a031-8b89a6db13e8 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.506s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:03:03 compute-1 nova_compute[225855]: 2026-01-20 15:03:03.511 225859 DEBUG nova.virt.libvirt.vif [None req-615f4080-5b92-4f81-a031-8b89a6db13e8 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T15:01:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-ServerBootFromVolumeStableRescueTest-server-254746207',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverbootfromvolumestablerescuetest-server-254746207',id=150,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-20T15:01:52Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='63555e5851564db08c6429231d264f2c',ramdisk_id='',reservation_id='r-solng1yz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image
_min_ram='0',owner_project_name='tempest-ServerBootFromVolumeStableRescueTest-1871371328',owner_user_name='tempest-ServerBootFromVolumeStableRescueTest-1871371328-project-member'},tags=<?>,task_state='rescuing',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T15:01:52Z,user_data=None,user_id='2446e8399b344b29986c1aaf8bf73adf',uuid=474cec75-3b01-411a-9074-75859d2a9ddf,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "244332ba-1b58-4d42-98b0-245f9460c50f", "address": "fa:16:3e:6f:36:24", "network": {"id": "671e28d0-0b9e-41e0-b5e0-db1ccd4717ec", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-884777184-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerBootFromVolumeStableRescueTest-884777184-network", "vif_mac": "fa:16:3e:6f:36:24"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "63555e5851564db08c6429231d264f2c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap244332ba-1b", "ovs_interfaceid": "244332ba-1b58-4d42-98b0-245f9460c50f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 20 15:03:03 compute-1 nova_compute[225855]: 2026-01-20 15:03:03.511 225859 DEBUG nova.network.os_vif_util [None req-615f4080-5b92-4f81-a031-8b89a6db13e8 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Converting VIF {"id": "244332ba-1b58-4d42-98b0-245f9460c50f", "address": "fa:16:3e:6f:36:24", "network": {"id": "671e28d0-0b9e-41e0-b5e0-db1ccd4717ec", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-884777184-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerBootFromVolumeStableRescueTest-884777184-network", "vif_mac": "fa:16:3e:6f:36:24"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "63555e5851564db08c6429231d264f2c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap244332ba-1b", "ovs_interfaceid": "244332ba-1b58-4d42-98b0-245f9460c50f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 15:03:03 compute-1 nova_compute[225855]: 2026-01-20 15:03:03.512 225859 DEBUG nova.network.os_vif_util [None req-615f4080-5b92-4f81-a031-8b89a6db13e8 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:6f:36:24,bridge_name='br-int',has_traffic_filtering=True,id=244332ba-1b58-4d42-98b0-245f9460c50f,network=Network(671e28d0-0b9e-41e0-b5e0-db1ccd4717ec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap244332ba-1b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 15:03:03 compute-1 nova_compute[225855]: 2026-01-20 15:03:03.513 225859 DEBUG nova.objects.instance [None req-615f4080-5b92-4f81-a031-8b89a6db13e8 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Lazy-loading 'pci_devices' on Instance uuid 474cec75-3b01-411a-9074-75859d2a9ddf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:03:03 compute-1 nova_compute[225855]: 2026-01-20 15:03:03.527 225859 DEBUG nova.virt.libvirt.driver [None req-615f4080-5b92-4f81-a031-8b89a6db13e8 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: 474cec75-3b01-411a-9074-75859d2a9ddf] End _get_guest_xml xml=<domain type="kvm">
Jan 20 15:03:03 compute-1 nova_compute[225855]:   <uuid>474cec75-3b01-411a-9074-75859d2a9ddf</uuid>
Jan 20 15:03:03 compute-1 nova_compute[225855]:   <name>instance-00000096</name>
Jan 20 15:03:03 compute-1 nova_compute[225855]:   <memory>131072</memory>
Jan 20 15:03:03 compute-1 nova_compute[225855]:   <vcpu>1</vcpu>
Jan 20 15:03:03 compute-1 nova_compute[225855]:   <metadata>
Jan 20 15:03:03 compute-1 nova_compute[225855]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 15:03:03 compute-1 nova_compute[225855]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 15:03:03 compute-1 nova_compute[225855]:       <nova:name>tempest-ServerBootFromVolumeStableRescueTest-server-254746207</nova:name>
Jan 20 15:03:03 compute-1 nova_compute[225855]:       <nova:creationTime>2026-01-20 15:03:02</nova:creationTime>
Jan 20 15:03:03 compute-1 nova_compute[225855]:       <nova:flavor name="m1.nano">
Jan 20 15:03:03 compute-1 nova_compute[225855]:         <nova:memory>128</nova:memory>
Jan 20 15:03:03 compute-1 nova_compute[225855]:         <nova:disk>1</nova:disk>
Jan 20 15:03:03 compute-1 nova_compute[225855]:         <nova:swap>0</nova:swap>
Jan 20 15:03:03 compute-1 nova_compute[225855]:         <nova:ephemeral>0</nova:ephemeral>
Jan 20 15:03:03 compute-1 nova_compute[225855]:         <nova:vcpus>1</nova:vcpus>
Jan 20 15:03:03 compute-1 nova_compute[225855]:       </nova:flavor>
Jan 20 15:03:03 compute-1 nova_compute[225855]:       <nova:owner>
Jan 20 15:03:03 compute-1 nova_compute[225855]:         <nova:user uuid="2446e8399b344b29986c1aaf8bf73adf">tempest-ServerBootFromVolumeStableRescueTest-1871371328-project-member</nova:user>
Jan 20 15:03:03 compute-1 nova_compute[225855]:         <nova:project uuid="63555e5851564db08c6429231d264f2c">tempest-ServerBootFromVolumeStableRescueTest-1871371328</nova:project>
Jan 20 15:03:03 compute-1 nova_compute[225855]:       </nova:owner>
Jan 20 15:03:03 compute-1 nova_compute[225855]:       <nova:root type="image" uuid="a32b3e07-16d8-46fd-9a7b-c242c432fcf9"/>
Jan 20 15:03:03 compute-1 nova_compute[225855]:       <nova:ports>
Jan 20 15:03:03 compute-1 nova_compute[225855]:         <nova:port uuid="244332ba-1b58-4d42-98b0-245f9460c50f">
Jan 20 15:03:03 compute-1 nova_compute[225855]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Jan 20 15:03:03 compute-1 nova_compute[225855]:         </nova:port>
Jan 20 15:03:03 compute-1 nova_compute[225855]:       </nova:ports>
Jan 20 15:03:03 compute-1 nova_compute[225855]:     </nova:instance>
Jan 20 15:03:03 compute-1 nova_compute[225855]:   </metadata>
Jan 20 15:03:03 compute-1 nova_compute[225855]:   <sysinfo type="smbios">
Jan 20 15:03:03 compute-1 nova_compute[225855]:     <system>
Jan 20 15:03:03 compute-1 nova_compute[225855]:       <entry name="manufacturer">RDO</entry>
Jan 20 15:03:03 compute-1 nova_compute[225855]:       <entry name="product">OpenStack Compute</entry>
Jan 20 15:03:03 compute-1 nova_compute[225855]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 15:03:03 compute-1 nova_compute[225855]:       <entry name="serial">474cec75-3b01-411a-9074-75859d2a9ddf</entry>
Jan 20 15:03:03 compute-1 nova_compute[225855]:       <entry name="uuid">474cec75-3b01-411a-9074-75859d2a9ddf</entry>
Jan 20 15:03:03 compute-1 nova_compute[225855]:       <entry name="family">Virtual Machine</entry>
Jan 20 15:03:03 compute-1 nova_compute[225855]:     </system>
Jan 20 15:03:03 compute-1 nova_compute[225855]:   </sysinfo>
Jan 20 15:03:03 compute-1 nova_compute[225855]:   <os>
Jan 20 15:03:03 compute-1 nova_compute[225855]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 20 15:03:03 compute-1 nova_compute[225855]:     <smbios mode="sysinfo"/>
Jan 20 15:03:03 compute-1 nova_compute[225855]:   </os>
Jan 20 15:03:03 compute-1 nova_compute[225855]:   <features>
Jan 20 15:03:03 compute-1 nova_compute[225855]:     <acpi/>
Jan 20 15:03:03 compute-1 nova_compute[225855]:     <apic/>
Jan 20 15:03:03 compute-1 nova_compute[225855]:     <vmcoreinfo/>
Jan 20 15:03:03 compute-1 nova_compute[225855]:   </features>
Jan 20 15:03:03 compute-1 nova_compute[225855]:   <clock offset="utc">
Jan 20 15:03:03 compute-1 nova_compute[225855]:     <timer name="pit" tickpolicy="delay"/>
Jan 20 15:03:03 compute-1 nova_compute[225855]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 20 15:03:03 compute-1 nova_compute[225855]:     <timer name="hpet" present="no"/>
Jan 20 15:03:03 compute-1 nova_compute[225855]:   </clock>
Jan 20 15:03:03 compute-1 nova_compute[225855]:   <cpu mode="custom" match="exact">
Jan 20 15:03:03 compute-1 nova_compute[225855]:     <model>Nehalem</model>
Jan 20 15:03:03 compute-1 nova_compute[225855]:     <topology sockets="1" cores="1" threads="1"/>
Jan 20 15:03:03 compute-1 nova_compute[225855]:   </cpu>
Jan 20 15:03:03 compute-1 nova_compute[225855]:   <devices>
Jan 20 15:03:03 compute-1 nova_compute[225855]:     <disk type="network" device="cdrom">
Jan 20 15:03:03 compute-1 nova_compute[225855]:       <driver type="raw" cache="none"/>
Jan 20 15:03:03 compute-1 nova_compute[225855]:       <source protocol="rbd" name="vms/474cec75-3b01-411a-9074-75859d2a9ddf_disk.config">
Jan 20 15:03:03 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 15:03:03 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 15:03:03 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 15:03:03 compute-1 nova_compute[225855]:       </source>
Jan 20 15:03:03 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 15:03:03 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 15:03:03 compute-1 nova_compute[225855]:       </auth>
Jan 20 15:03:03 compute-1 nova_compute[225855]:       <target dev="sda" bus="sata"/>
Jan 20 15:03:03 compute-1 nova_compute[225855]:     </disk>
Jan 20 15:03:03 compute-1 nova_compute[225855]:     <disk type="network" device="disk">
Jan 20 15:03:03 compute-1 nova_compute[225855]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 20 15:03:03 compute-1 nova_compute[225855]:       <source protocol="rbd" name="volumes/volume-41743468-7add-45cb-bc94-02eb6f850278">
Jan 20 15:03:03 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 15:03:03 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 15:03:03 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 15:03:03 compute-1 nova_compute[225855]:       </source>
Jan 20 15:03:03 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 15:03:03 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 15:03:03 compute-1 nova_compute[225855]:       </auth>
Jan 20 15:03:03 compute-1 nova_compute[225855]:       <target dev="vda" bus="virtio"/>
Jan 20 15:03:03 compute-1 nova_compute[225855]:       <serial>41743468-7add-45cb-bc94-02eb6f850278</serial>
Jan 20 15:03:03 compute-1 nova_compute[225855]:     </disk>
Jan 20 15:03:03 compute-1 nova_compute[225855]:     <disk type="network" device="disk">
Jan 20 15:03:03 compute-1 nova_compute[225855]:       <driver type="raw" cache="none"/>
Jan 20 15:03:03 compute-1 nova_compute[225855]:       <source protocol="rbd" name="vms/474cec75-3b01-411a-9074-75859d2a9ddf_disk.rescue">
Jan 20 15:03:03 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 15:03:03 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 15:03:03 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 15:03:03 compute-1 nova_compute[225855]:       </source>
Jan 20 15:03:03 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 15:03:03 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 15:03:03 compute-1 nova_compute[225855]:       </auth>
Jan 20 15:03:03 compute-1 nova_compute[225855]:       <target dev="vdb" bus="virtio"/>
Jan 20 15:03:03 compute-1 nova_compute[225855]:       <boot order="1"/>
Jan 20 15:03:03 compute-1 nova_compute[225855]:     </disk>
Jan 20 15:03:03 compute-1 nova_compute[225855]:     <interface type="ethernet">
Jan 20 15:03:03 compute-1 nova_compute[225855]:       <mac address="fa:16:3e:6f:36:24"/>
Jan 20 15:03:03 compute-1 nova_compute[225855]:       <model type="virtio"/>
Jan 20 15:03:03 compute-1 nova_compute[225855]:       <driver name="vhost" rx_queue_size="512"/>
Jan 20 15:03:03 compute-1 nova_compute[225855]:       <mtu size="1442"/>
Jan 20 15:03:03 compute-1 nova_compute[225855]:       <target dev="tap244332ba-1b"/>
Jan 20 15:03:03 compute-1 nova_compute[225855]:     </interface>
Jan 20 15:03:03 compute-1 nova_compute[225855]:     <serial type="pty">
Jan 20 15:03:03 compute-1 nova_compute[225855]:       <log file="/var/lib/nova/instances/474cec75-3b01-411a-9074-75859d2a9ddf/console.log" append="off"/>
Jan 20 15:03:03 compute-1 nova_compute[225855]:     </serial>
Jan 20 15:03:03 compute-1 nova_compute[225855]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 20 15:03:03 compute-1 nova_compute[225855]:     <video>
Jan 20 15:03:03 compute-1 nova_compute[225855]:       <model type="virtio"/>
Jan 20 15:03:03 compute-1 nova_compute[225855]:     </video>
Jan 20 15:03:03 compute-1 nova_compute[225855]:     <input type="tablet" bus="usb"/>
Jan 20 15:03:03 compute-1 nova_compute[225855]:     <rng model="virtio">
Jan 20 15:03:03 compute-1 nova_compute[225855]:       <backend model="random">/dev/urandom</backend>
Jan 20 15:03:03 compute-1 nova_compute[225855]:     </rng>
Jan 20 15:03:03 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root"/>
Jan 20 15:03:03 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:03:03 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:03:03 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:03:03 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:03:03 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:03:03 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:03:03 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:03:03 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:03:03 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:03:03 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:03:03 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:03:03 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:03:03 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:03:03 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:03:03 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:03:03 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:03:03 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:03:03 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:03:03 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:03:03 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:03:03 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:03:03 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:03:03 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:03:03 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:03:03 compute-1 nova_compute[225855]:     <controller type="usb" index="0"/>
Jan 20 15:03:03 compute-1 nova_compute[225855]:     <memballoon model="virtio">
Jan 20 15:03:03 compute-1 nova_compute[225855]:       <stats period="10"/>
Jan 20 15:03:03 compute-1 nova_compute[225855]:     </memballoon>
Jan 20 15:03:03 compute-1 nova_compute[225855]:   </devices>
Jan 20 15:03:03 compute-1 nova_compute[225855]: </domain>
Jan 20 15:03:03 compute-1 nova_compute[225855]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 20 15:03:03 compute-1 nova_compute[225855]: 2026-01-20 15:03:03.534 225859 INFO nova.virt.libvirt.driver [-] [instance: 474cec75-3b01-411a-9074-75859d2a9ddf] Instance destroyed successfully.
Jan 20 15:03:03 compute-1 nova_compute[225855]: 2026-01-20 15:03:03.587 225859 DEBUG nova.virt.libvirt.driver [None req-615f4080-5b92-4f81-a031-8b89a6db13e8 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 15:03:03 compute-1 nova_compute[225855]: 2026-01-20 15:03:03.588 225859 DEBUG nova.virt.libvirt.driver [None req-615f4080-5b92-4f81-a031-8b89a6db13e8 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 15:03:03 compute-1 nova_compute[225855]: 2026-01-20 15:03:03.588 225859 DEBUG nova.virt.libvirt.driver [None req-615f4080-5b92-4f81-a031-8b89a6db13e8 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 15:03:03 compute-1 nova_compute[225855]: 2026-01-20 15:03:03.588 225859 DEBUG nova.virt.libvirt.driver [None req-615f4080-5b92-4f81-a031-8b89a6db13e8 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] No VIF found with MAC fa:16:3e:6f:36:24, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 20 15:03:03 compute-1 nova_compute[225855]: 2026-01-20 15:03:03.589 225859 INFO nova.virt.libvirt.driver [None req-615f4080-5b92-4f81-a031-8b89a6db13e8 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: 474cec75-3b01-411a-9074-75859d2a9ddf] Using config drive
Jan 20 15:03:03 compute-1 nova_compute[225855]: 2026-01-20 15:03:03.614 225859 DEBUG nova.storage.rbd_utils [None req-615f4080-5b92-4f81-a031-8b89a6db13e8 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] rbd image 474cec75-3b01-411a-9074-75859d2a9ddf_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:03:03 compute-1 nova_compute[225855]: 2026-01-20 15:03:03.634 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:03:03 compute-1 nova_compute[225855]: 2026-01-20 15:03:03.662 225859 DEBUG nova.objects.instance [None req-615f4080-5b92-4f81-a031-8b89a6db13e8 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Lazy-loading 'ec2_ids' on Instance uuid 474cec75-3b01-411a-9074-75859d2a9ddf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:03:03 compute-1 nova_compute[225855]: 2026-01-20 15:03:03.686 225859 DEBUG nova.objects.instance [None req-615f4080-5b92-4f81-a031-8b89a6db13e8 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Lazy-loading 'keypairs' on Instance uuid 474cec75-3b01-411a-9074-75859d2a9ddf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:03:03 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:03:03 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:03:03 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:03:03.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:03:03 compute-1 nova_compute[225855]: 2026-01-20 15:03:03.954 225859 DEBUG nova.compute.manager [req-276513a8-845b-4dc6-9d8b-a89548569679 req-34f7f5d1-04a0-4687-b1c2-423cde9a631c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 474cec75-3b01-411a-9074-75859d2a9ddf] Received event network-vif-plugged-244332ba-1b58-4d42-98b0-245f9460c50f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:03:03 compute-1 nova_compute[225855]: 2026-01-20 15:03:03.955 225859 DEBUG oslo_concurrency.lockutils [req-276513a8-845b-4dc6-9d8b-a89548569679 req-34f7f5d1-04a0-4687-b1c2-423cde9a631c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "474cec75-3b01-411a-9074-75859d2a9ddf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:03:03 compute-1 nova_compute[225855]: 2026-01-20 15:03:03.955 225859 DEBUG oslo_concurrency.lockutils [req-276513a8-845b-4dc6-9d8b-a89548569679 req-34f7f5d1-04a0-4687-b1c2-423cde9a631c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "474cec75-3b01-411a-9074-75859d2a9ddf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:03:03 compute-1 nova_compute[225855]: 2026-01-20 15:03:03.955 225859 DEBUG oslo_concurrency.lockutils [req-276513a8-845b-4dc6-9d8b-a89548569679 req-34f7f5d1-04a0-4687-b1c2-423cde9a631c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "474cec75-3b01-411a-9074-75859d2a9ddf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:03:03 compute-1 nova_compute[225855]: 2026-01-20 15:03:03.956 225859 DEBUG nova.compute.manager [req-276513a8-845b-4dc6-9d8b-a89548569679 req-34f7f5d1-04a0-4687-b1c2-423cde9a631c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 474cec75-3b01-411a-9074-75859d2a9ddf] No waiting events found dispatching network-vif-plugged-244332ba-1b58-4d42-98b0-245f9460c50f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:03:03 compute-1 nova_compute[225855]: 2026-01-20 15:03:03.956 225859 WARNING nova.compute.manager [req-276513a8-845b-4dc6-9d8b-a89548569679 req-34f7f5d1-04a0-4687-b1c2-423cde9a631c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 474cec75-3b01-411a-9074-75859d2a9ddf] Received unexpected event network-vif-plugged-244332ba-1b58-4d42-98b0-245f9460c50f for instance with vm_state active and task_state rescuing.
Jan 20 15:03:04 compute-1 nova_compute[225855]: 2026-01-20 15:03:04.125 225859 INFO nova.virt.libvirt.driver [None req-615f4080-5b92-4f81-a031-8b89a6db13e8 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: 474cec75-3b01-411a-9074-75859d2a9ddf] Creating config drive at /var/lib/nova/instances/474cec75-3b01-411a-9074-75859d2a9ddf/disk.config.rescue
Jan 20 15:03:04 compute-1 nova_compute[225855]: 2026-01-20 15:03:04.130 225859 DEBUG oslo_concurrency.processutils [None req-615f4080-5b92-4f81-a031-8b89a6db13e8 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/474cec75-3b01-411a-9074-75859d2a9ddf/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmppn39s63x execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:03:04 compute-1 nova_compute[225855]: 2026-01-20 15:03:04.278 225859 DEBUG oslo_concurrency.processutils [None req-615f4080-5b92-4f81-a031-8b89a6db13e8 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/474cec75-3b01-411a-9074-75859d2a9ddf/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmppn39s63x" returned: 0 in 0.148s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:03:04 compute-1 nova_compute[225855]: 2026-01-20 15:03:04.307 225859 DEBUG nova.storage.rbd_utils [None req-615f4080-5b92-4f81-a031-8b89a6db13e8 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] rbd image 474cec75-3b01-411a-9074-75859d2a9ddf_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:03:04 compute-1 nova_compute[225855]: 2026-01-20 15:03:04.311 225859 DEBUG oslo_concurrency.processutils [None req-615f4080-5b92-4f81-a031-8b89a6db13e8 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/474cec75-3b01-411a-9074-75859d2a9ddf/disk.config.rescue 474cec75-3b01-411a-9074-75859d2a9ddf_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:03:04 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/2503648110' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:03:04 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:03:04 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:03:04 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:03:04.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:03:04 compute-1 nova_compute[225855]: 2026-01-20 15:03:04.482 225859 DEBUG oslo_concurrency.processutils [None req-615f4080-5b92-4f81-a031-8b89a6db13e8 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/474cec75-3b01-411a-9074-75859d2a9ddf/disk.config.rescue 474cec75-3b01-411a-9074-75859d2a9ddf_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.171s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:03:04 compute-1 nova_compute[225855]: 2026-01-20 15:03:04.483 225859 INFO nova.virt.libvirt.driver [None req-615f4080-5b92-4f81-a031-8b89a6db13e8 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: 474cec75-3b01-411a-9074-75859d2a9ddf] Deleting local config drive /var/lib/nova/instances/474cec75-3b01-411a-9074-75859d2a9ddf/disk.config.rescue because it was imported into RBD.
Jan 20 15:03:04 compute-1 kernel: tap244332ba-1b: entered promiscuous mode
Jan 20 15:03:04 compute-1 NetworkManager[49104]: <info>  [1768921384.5255] manager: (tap244332ba-1b): new Tun device (/org/freedesktop/NetworkManager/Devices/275)
Jan 20 15:03:04 compute-1 ovn_controller[130490]: 2026-01-20T15:03:04Z|00640|binding|INFO|Claiming lport 244332ba-1b58-4d42-98b0-245f9460c50f for this chassis.
Jan 20 15:03:04 compute-1 ovn_controller[130490]: 2026-01-20T15:03:04Z|00641|binding|INFO|244332ba-1b58-4d42-98b0-245f9460c50f: Claiming fa:16:3e:6f:36:24 10.100.0.4
Jan 20 15:03:04 compute-1 nova_compute[225855]: 2026-01-20 15:03:04.525 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:03:04 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:03:04.534 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6f:36:24 10.100.0.4'], port_security=['fa:16:3e:6f:36:24 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '474cec75-3b01-411a-9074-75859d2a9ddf', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-671e28d0-0b9e-41e0-b5e0-db1ccd4717ec', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '63555e5851564db08c6429231d264f2c', 'neutron:revision_number': '5', 'neutron:security_group_ids': '7e54c470-6a6f-454e-ae01-9d2d59b2c74d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=248fa32c-94be-4e1b-b4d3-cb9fac0ec155, chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], logical_port=244332ba-1b58-4d42-98b0-245f9460c50f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:03:04 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:03:04.535 140354 INFO neutron.agent.ovn.metadata.agent [-] Port 244332ba-1b58-4d42-98b0-245f9460c50f in datapath 671e28d0-0b9e-41e0-b5e0-db1ccd4717ec bound to our chassis
Jan 20 15:03:04 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:03:04.537 140354 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 671e28d0-0b9e-41e0-b5e0-db1ccd4717ec
Jan 20 15:03:04 compute-1 ovn_controller[130490]: 2026-01-20T15:03:04Z|00642|binding|INFO|Setting lport 244332ba-1b58-4d42-98b0-245f9460c50f ovn-installed in OVS
Jan 20 15:03:04 compute-1 ovn_controller[130490]: 2026-01-20T15:03:04Z|00643|binding|INFO|Setting lport 244332ba-1b58-4d42-98b0-245f9460c50f up in Southbound
Jan 20 15:03:04 compute-1 nova_compute[225855]: 2026-01-20 15:03:04.545 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:03:04 compute-1 nova_compute[225855]: 2026-01-20 15:03:04.549 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:03:04 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:03:04.549 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[dca39067-7502-49ec-a772-0f10fcd9d440]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:03:04 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:03:04.550 140354 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap671e28d0-01 in ovnmeta-671e28d0-0b9e-41e0-b5e0-db1ccd4717ec namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 20 15:03:04 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:03:04.552 229707 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap671e28d0-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 20 15:03:04 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:03:04.552 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[406d8972-37ff-4fb8-abd6-2619ea60ce28]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:03:04 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:03:04.553 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[d8a1911f-ad54-44c2-99f0-12ac0cc7f217]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:03:04 compute-1 systemd-machined[194361]: New machine qemu-75-instance-00000096.
Jan 20 15:03:04 compute-1 systemd-udevd[287988]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 15:03:04 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:03:04.563 140466 DEBUG oslo.privsep.daemon [-] privsep: reply[01c36c84-113a-41d4-b148-d22c00fe3c0e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:03:04 compute-1 systemd[1]: Started Virtual Machine qemu-75-instance-00000096.
Jan 20 15:03:04 compute-1 NetworkManager[49104]: <info>  [1768921384.5717] device (tap244332ba-1b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 20 15:03:04 compute-1 NetworkManager[49104]: <info>  [1768921384.5726] device (tap244332ba-1b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 20 15:03:04 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:03:04.587 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[c56093f2-92a3-4c2e-9daa-500c8f3319c7]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:03:04 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:03:04.612 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[3ac0ba3c-c9bc-4b8b-8e86-257b7678f775]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:03:04 compute-1 systemd-udevd[287992]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 15:03:04 compute-1 NetworkManager[49104]: <info>  [1768921384.6201] manager: (tap671e28d0-00): new Veth device (/org/freedesktop/NetworkManager/Devices/276)
Jan 20 15:03:04 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:03:04.619 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[b6481005-23f9-4359-8707-a7da677b0289]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:03:04 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:03:04.651 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[91ab9a8a-cd21-41ba-99e1-497d2c23ffa6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:03:04 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:03:04.654 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[2071ac7c-02ce-4fc5-bb92-2c30d756fcbb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:03:04 compute-1 NetworkManager[49104]: <info>  [1768921384.6747] device (tap671e28d0-00): carrier: link connected
Jan 20 15:03:04 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:03:04.679 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[ff64cb72-eb1a-4861-82d5-e0eb57aa7709]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:03:04 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:03:04.694 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[0d02a7b5-6ab9-4c06-903c-411dca8daf50]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap671e28d0-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2b:4e:69'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 181], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 642360, 'reachable_time': 36035, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 288020, 'error': None, 'target': 'ovnmeta-671e28d0-0b9e-41e0-b5e0-db1ccd4717ec', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:03:04 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:03:04.711 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[2a1841b4-3b5b-4c04-ad05-aa17f964075c]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe2b:4e69'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 642360, 'tstamp': 642360}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 288021, 'error': None, 'target': 'ovnmeta-671e28d0-0b9e-41e0-b5e0-db1ccd4717ec', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:03:04 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:03:04.728 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[2a7f3951-1249-4c91-b054-09b9dca2a8e5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap671e28d0-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2b:4e:69'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 181], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 642360, 'reachable_time': 36035, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 288022, 'error': None, 'target': 'ovnmeta-671e28d0-0b9e-41e0-b5e0-db1ccd4717ec', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:03:04 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:03:04.764 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[4a4ab88d-90a7-43f5-a737-f05f02e21c5a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:03:04 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:03:04.823 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[de624ca4-6d36-4d52-8c25-f1454eae3efa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:03:04 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:03:04.824 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap671e28d0-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:03:04 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:03:04.825 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 15:03:04 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:03:04.825 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap671e28d0-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:03:04 compute-1 nova_compute[225855]: 2026-01-20 15:03:04.826 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:03:04 compute-1 NetworkManager[49104]: <info>  [1768921384.8273] manager: (tap671e28d0-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/277)
Jan 20 15:03:04 compute-1 kernel: tap671e28d0-00: entered promiscuous mode
Jan 20 15:03:04 compute-1 nova_compute[225855]: 2026-01-20 15:03:04.828 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:03:04 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:03:04.829 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap671e28d0-00, col_values=(('external_ids', {'iface-id': 'a8628d9e-196f-4b84-89fd-d3a41792b8a0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:03:04 compute-1 nova_compute[225855]: 2026-01-20 15:03:04.830 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:03:04 compute-1 ovn_controller[130490]: 2026-01-20T15:03:04Z|00644|binding|INFO|Releasing lport a8628d9e-196f-4b84-89fd-d3a41792b8a0 from this chassis (sb_readonly=0)
Jan 20 15:03:04 compute-1 nova_compute[225855]: 2026-01-20 15:03:04.844 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:03:04 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:03:04.846 140354 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/671e28d0-0b9e-41e0-b5e0-db1ccd4717ec.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/671e28d0-0b9e-41e0-b5e0-db1ccd4717ec.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 20 15:03:04 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:03:04.846 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[745d6a8b-68d1-4d32-933d-8ecfea7d3c93]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:03:04 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:03:04.847 140354 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 20 15:03:04 compute-1 ovn_metadata_agent[140349]: global
Jan 20 15:03:04 compute-1 ovn_metadata_agent[140349]:     log         /dev/log local0 debug
Jan 20 15:03:04 compute-1 ovn_metadata_agent[140349]:     log-tag     haproxy-metadata-proxy-671e28d0-0b9e-41e0-b5e0-db1ccd4717ec
Jan 20 15:03:04 compute-1 ovn_metadata_agent[140349]:     user        root
Jan 20 15:03:04 compute-1 ovn_metadata_agent[140349]:     group       root
Jan 20 15:03:04 compute-1 ovn_metadata_agent[140349]:     maxconn     1024
Jan 20 15:03:04 compute-1 ovn_metadata_agent[140349]:     pidfile     /var/lib/neutron/external/pids/671e28d0-0b9e-41e0-b5e0-db1ccd4717ec.pid.haproxy
Jan 20 15:03:04 compute-1 ovn_metadata_agent[140349]:     daemon
Jan 20 15:03:04 compute-1 ovn_metadata_agent[140349]: 
Jan 20 15:03:04 compute-1 ovn_metadata_agent[140349]: defaults
Jan 20 15:03:04 compute-1 ovn_metadata_agent[140349]:     log global
Jan 20 15:03:04 compute-1 ovn_metadata_agent[140349]:     mode http
Jan 20 15:03:04 compute-1 ovn_metadata_agent[140349]:     option httplog
Jan 20 15:03:04 compute-1 ovn_metadata_agent[140349]:     option dontlognull
Jan 20 15:03:04 compute-1 ovn_metadata_agent[140349]:     option http-server-close
Jan 20 15:03:04 compute-1 ovn_metadata_agent[140349]:     option forwardfor
Jan 20 15:03:04 compute-1 ovn_metadata_agent[140349]:     retries                 3
Jan 20 15:03:04 compute-1 ovn_metadata_agent[140349]:     timeout http-request    30s
Jan 20 15:03:04 compute-1 ovn_metadata_agent[140349]:     timeout connect         30s
Jan 20 15:03:04 compute-1 ovn_metadata_agent[140349]:     timeout client          32s
Jan 20 15:03:04 compute-1 ovn_metadata_agent[140349]:     timeout server          32s
Jan 20 15:03:04 compute-1 ovn_metadata_agent[140349]:     timeout http-keep-alive 30s
Jan 20 15:03:04 compute-1 ovn_metadata_agent[140349]: 
Jan 20 15:03:04 compute-1 ovn_metadata_agent[140349]: 
Jan 20 15:03:04 compute-1 ovn_metadata_agent[140349]: listen listener
Jan 20 15:03:04 compute-1 ovn_metadata_agent[140349]:     bind 169.254.169.254:80
Jan 20 15:03:04 compute-1 ovn_metadata_agent[140349]:     server metadata /var/lib/neutron/metadata_proxy
Jan 20 15:03:04 compute-1 ovn_metadata_agent[140349]:     http-request add-header X-OVN-Network-ID 671e28d0-0b9e-41e0-b5e0-db1ccd4717ec
Jan 20 15:03:04 compute-1 ovn_metadata_agent[140349]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 20 15:03:04 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:03:04.848 140354 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-671e28d0-0b9e-41e0-b5e0-db1ccd4717ec', 'env', 'PROCESS_TAG=haproxy-671e28d0-0b9e-41e0-b5e0-db1ccd4717ec', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/671e28d0-0b9e-41e0-b5e0-db1ccd4717ec.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 20 15:03:04 compute-1 nova_compute[225855]: 2026-01-20 15:03:04.984 225859 DEBUG nova.virt.libvirt.host [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Removed pending event for 474cec75-3b01-411a-9074-75859d2a9ddf due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Jan 20 15:03:04 compute-1 nova_compute[225855]: 2026-01-20 15:03:04.985 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768921384.9840426, 474cec75-3b01-411a-9074-75859d2a9ddf => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:03:04 compute-1 nova_compute[225855]: 2026-01-20 15:03:04.985 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 474cec75-3b01-411a-9074-75859d2a9ddf] VM Resumed (Lifecycle Event)
Jan 20 15:03:04 compute-1 nova_compute[225855]: 2026-01-20 15:03:04.992 225859 DEBUG nova.compute.manager [None req-615f4080-5b92-4f81-a031-8b89a6db13e8 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: 474cec75-3b01-411a-9074-75859d2a9ddf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:03:05 compute-1 nova_compute[225855]: 2026-01-20 15:03:05.000 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:03:05 compute-1 nova_compute[225855]: 2026-01-20 15:03:05.006 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 474cec75-3b01-411a-9074-75859d2a9ddf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:03:05 compute-1 nova_compute[225855]: 2026-01-20 15:03:05.010 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 474cec75-3b01-411a-9074-75859d2a9ddf] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 15:03:05 compute-1 nova_compute[225855]: 2026-01-20 15:03:05.035 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 474cec75-3b01-411a-9074-75859d2a9ddf] During sync_power_state the instance has a pending task (rescuing). Skip.
Jan 20 15:03:05 compute-1 nova_compute[225855]: 2026-01-20 15:03:05.035 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768921384.988022, 474cec75-3b01-411a-9074-75859d2a9ddf => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:03:05 compute-1 nova_compute[225855]: 2026-01-20 15:03:05.036 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 474cec75-3b01-411a-9074-75859d2a9ddf] VM Started (Lifecycle Event)
Jan 20 15:03:05 compute-1 nova_compute[225855]: 2026-01-20 15:03:05.069 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 474cec75-3b01-411a-9074-75859d2a9ddf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:03:05 compute-1 nova_compute[225855]: 2026-01-20 15:03:05.073 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 474cec75-3b01-411a-9074-75859d2a9ddf] Synchronizing instance power state after lifecycle event "Started"; current vm_state: rescued, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 15:03:05 compute-1 podman[288114]: 2026-01-20 15:03:05.236513802 +0000 UTC m=+0.062998588 container create 7e9df3d86ba0ee1b1089109f5e9eeba2832f1bd1a9b80703602546b20ab605e7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-671e28d0-0b9e-41e0-b5e0-db1ccd4717ec, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Jan 20 15:03:05 compute-1 systemd[1]: Started libpod-conmon-7e9df3d86ba0ee1b1089109f5e9eeba2832f1bd1a9b80703602546b20ab605e7.scope.
Jan 20 15:03:05 compute-1 podman[288114]: 2026-01-20 15:03:05.201785873 +0000 UTC m=+0.028270739 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 20 15:03:05 compute-1 systemd[1]: Started libcrun container.
Jan 20 15:03:05 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dee5eb1c0534cfb4c70b46823a0b5c9d68a80b01a6c011e78131e499ba71b19d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 20 15:03:05 compute-1 podman[288114]: 2026-01-20 15:03:05.315660675 +0000 UTC m=+0.142145471 container init 7e9df3d86ba0ee1b1089109f5e9eeba2832f1bd1a9b80703602546b20ab605e7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-671e28d0-0b9e-41e0-b5e0-db1ccd4717ec, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Jan 20 15:03:05 compute-1 podman[288114]: 2026-01-20 15:03:05.321401346 +0000 UTC m=+0.147886132 container start 7e9df3d86ba0ee1b1089109f5e9eeba2832f1bd1a9b80703602546b20ab605e7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-671e28d0-0b9e-41e0-b5e0-db1ccd4717ec, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 20 15:03:05 compute-1 neutron-haproxy-ovnmeta-671e28d0-0b9e-41e0-b5e0-db1ccd4717ec[288129]: [NOTICE]   (288133) : New worker (288135) forked
Jan 20 15:03:05 compute-1 neutron-haproxy-ovnmeta-671e28d0-0b9e-41e0-b5e0-db1ccd4717ec[288129]: [NOTICE]   (288133) : Loading success.
Jan 20 15:03:05 compute-1 ceph-mon[81775]: pgmap v2346: 321 pgs: 321 active+clean; 423 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 577 KiB/s wr, 71 op/s
Jan 20 15:03:05 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:03:05 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:03:05 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:03:05.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:03:05 compute-1 nova_compute[225855]: 2026-01-20 15:03:05.953 225859 INFO nova.compute.manager [None req-bdcbe869-5a54-47f5-8e01-6c8bd5494045 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: 474cec75-3b01-411a-9074-75859d2a9ddf] Unrescuing
Jan 20 15:03:05 compute-1 nova_compute[225855]: 2026-01-20 15:03:05.953 225859 DEBUG oslo_concurrency.lockutils [None req-bdcbe869-5a54-47f5-8e01-6c8bd5494045 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Acquiring lock "refresh_cache-474cec75-3b01-411a-9074-75859d2a9ddf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:03:05 compute-1 nova_compute[225855]: 2026-01-20 15:03:05.953 225859 DEBUG oslo_concurrency.lockutils [None req-bdcbe869-5a54-47f5-8e01-6c8bd5494045 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Acquired lock "refresh_cache-474cec75-3b01-411a-9074-75859d2a9ddf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:03:05 compute-1 nova_compute[225855]: 2026-01-20 15:03:05.954 225859 DEBUG nova.network.neutron [None req-bdcbe869-5a54-47f5-8e01-6c8bd5494045 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: 474cec75-3b01-411a-9074-75859d2a9ddf] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 15:03:06 compute-1 nova_compute[225855]: 2026-01-20 15:03:06.084 225859 DEBUG nova.compute.manager [req-3351e714-3409-457d-9ef4-48b173385f5c req-46eb7f1d-2331-456f-95fb-2e9cfb0f490b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 474cec75-3b01-411a-9074-75859d2a9ddf] Received event network-vif-plugged-244332ba-1b58-4d42-98b0-245f9460c50f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:03:06 compute-1 nova_compute[225855]: 2026-01-20 15:03:06.085 225859 DEBUG oslo_concurrency.lockutils [req-3351e714-3409-457d-9ef4-48b173385f5c req-46eb7f1d-2331-456f-95fb-2e9cfb0f490b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "474cec75-3b01-411a-9074-75859d2a9ddf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:03:06 compute-1 nova_compute[225855]: 2026-01-20 15:03:06.085 225859 DEBUG oslo_concurrency.lockutils [req-3351e714-3409-457d-9ef4-48b173385f5c req-46eb7f1d-2331-456f-95fb-2e9cfb0f490b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "474cec75-3b01-411a-9074-75859d2a9ddf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:03:06 compute-1 nova_compute[225855]: 2026-01-20 15:03:06.085 225859 DEBUG oslo_concurrency.lockutils [req-3351e714-3409-457d-9ef4-48b173385f5c req-46eb7f1d-2331-456f-95fb-2e9cfb0f490b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "474cec75-3b01-411a-9074-75859d2a9ddf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:03:06 compute-1 nova_compute[225855]: 2026-01-20 15:03:06.085 225859 DEBUG nova.compute.manager [req-3351e714-3409-457d-9ef4-48b173385f5c req-46eb7f1d-2331-456f-95fb-2e9cfb0f490b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 474cec75-3b01-411a-9074-75859d2a9ddf] No waiting events found dispatching network-vif-plugged-244332ba-1b58-4d42-98b0-245f9460c50f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:03:06 compute-1 nova_compute[225855]: 2026-01-20 15:03:06.086 225859 WARNING nova.compute.manager [req-3351e714-3409-457d-9ef4-48b173385f5c req-46eb7f1d-2331-456f-95fb-2e9cfb0f490b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 474cec75-3b01-411a-9074-75859d2a9ddf] Received unexpected event network-vif-plugged-244332ba-1b58-4d42-98b0-245f9460c50f for instance with vm_state rescued and task_state unrescuing.
Jan 20 15:03:06 compute-1 nova_compute[225855]: 2026-01-20 15:03:06.086 225859 DEBUG nova.compute.manager [req-3351e714-3409-457d-9ef4-48b173385f5c req-46eb7f1d-2331-456f-95fb-2e9cfb0f490b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 474cec75-3b01-411a-9074-75859d2a9ddf] Received event network-vif-plugged-244332ba-1b58-4d42-98b0-245f9460c50f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:03:06 compute-1 nova_compute[225855]: 2026-01-20 15:03:06.086 225859 DEBUG oslo_concurrency.lockutils [req-3351e714-3409-457d-9ef4-48b173385f5c req-46eb7f1d-2331-456f-95fb-2e9cfb0f490b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "474cec75-3b01-411a-9074-75859d2a9ddf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:03:06 compute-1 nova_compute[225855]: 2026-01-20 15:03:06.086 225859 DEBUG oslo_concurrency.lockutils [req-3351e714-3409-457d-9ef4-48b173385f5c req-46eb7f1d-2331-456f-95fb-2e9cfb0f490b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "474cec75-3b01-411a-9074-75859d2a9ddf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:03:06 compute-1 nova_compute[225855]: 2026-01-20 15:03:06.086 225859 DEBUG oslo_concurrency.lockutils [req-3351e714-3409-457d-9ef4-48b173385f5c req-46eb7f1d-2331-456f-95fb-2e9cfb0f490b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "474cec75-3b01-411a-9074-75859d2a9ddf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:03:06 compute-1 nova_compute[225855]: 2026-01-20 15:03:06.086 225859 DEBUG nova.compute.manager [req-3351e714-3409-457d-9ef4-48b173385f5c req-46eb7f1d-2331-456f-95fb-2e9cfb0f490b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 474cec75-3b01-411a-9074-75859d2a9ddf] No waiting events found dispatching network-vif-plugged-244332ba-1b58-4d42-98b0-245f9460c50f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:03:06 compute-1 nova_compute[225855]: 2026-01-20 15:03:06.087 225859 WARNING nova.compute.manager [req-3351e714-3409-457d-9ef4-48b173385f5c req-46eb7f1d-2331-456f-95fb-2e9cfb0f490b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 474cec75-3b01-411a-9074-75859d2a9ddf] Received unexpected event network-vif-plugged-244332ba-1b58-4d42-98b0-245f9460c50f for instance with vm_state rescued and task_state unrescuing.
Jan 20 15:03:06 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:03:06 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:03:06 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:03:06.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:03:07 compute-1 podman[288145]: 2026-01-20 15:03:07.022604731 +0000 UTC m=+0.067551927 container health_status 533cd8e0c3b8b7f910f15cef49e7f879ab46c12ce646a35628c15f13725eb500 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent)
Jan 20 15:03:07 compute-1 nova_compute[225855]: 2026-01-20 15:03:07.059 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:03:07 compute-1 ceph-mon[81775]: pgmap v2347: 321 pgs: 321 active+clean; 423 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 587 KiB/s rd, 333 KiB/s wr, 80 op/s
Jan 20 15:03:07 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/4071570443' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:03:07 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:03:07 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:03:07 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:03:07.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:03:07 compute-1 nova_compute[225855]: 2026-01-20 15:03:07.980 225859 DEBUG nova.network.neutron [None req-bdcbe869-5a54-47f5-8e01-6c8bd5494045 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: 474cec75-3b01-411a-9074-75859d2a9ddf] Updating instance_info_cache with network_info: [{"id": "244332ba-1b58-4d42-98b0-245f9460c50f", "address": "fa:16:3e:6f:36:24", "network": {"id": "671e28d0-0b9e-41e0-b5e0-db1ccd4717ec", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-884777184-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "63555e5851564db08c6429231d264f2c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap244332ba-1b", "ovs_interfaceid": "244332ba-1b58-4d42-98b0-245f9460c50f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:03:07 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e337 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:03:08 compute-1 nova_compute[225855]: 2026-01-20 15:03:08.010 225859 DEBUG oslo_concurrency.lockutils [None req-bdcbe869-5a54-47f5-8e01-6c8bd5494045 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Releasing lock "refresh_cache-474cec75-3b01-411a-9074-75859d2a9ddf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:03:08 compute-1 nova_compute[225855]: 2026-01-20 15:03:08.011 225859 DEBUG nova.objects.instance [None req-bdcbe869-5a54-47f5-8e01-6c8bd5494045 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Lazy-loading 'flavor' on Instance uuid 474cec75-3b01-411a-9074-75859d2a9ddf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:03:08 compute-1 kernel: tap244332ba-1b (unregistering): left promiscuous mode
Jan 20 15:03:08 compute-1 NetworkManager[49104]: <info>  [1768921388.1736] device (tap244332ba-1b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 20 15:03:08 compute-1 ovn_controller[130490]: 2026-01-20T15:03:08Z|00645|binding|INFO|Releasing lport 244332ba-1b58-4d42-98b0-245f9460c50f from this chassis (sb_readonly=0)
Jan 20 15:03:08 compute-1 ovn_controller[130490]: 2026-01-20T15:03:08Z|00646|binding|INFO|Setting lport 244332ba-1b58-4d42-98b0-245f9460c50f down in Southbound
Jan 20 15:03:08 compute-1 nova_compute[225855]: 2026-01-20 15:03:08.184 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:03:08 compute-1 ovn_controller[130490]: 2026-01-20T15:03:08Z|00647|binding|INFO|Removing iface tap244332ba-1b ovn-installed in OVS
Jan 20 15:03:08 compute-1 nova_compute[225855]: 2026-01-20 15:03:08.185 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:03:08 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:03:08.191 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6f:36:24 10.100.0.4'], port_security=['fa:16:3e:6f:36:24 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '474cec75-3b01-411a-9074-75859d2a9ddf', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-671e28d0-0b9e-41e0-b5e0-db1ccd4717ec', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '63555e5851564db08c6429231d264f2c', 'neutron:revision_number': '6', 'neutron:security_group_ids': '7e54c470-6a6f-454e-ae01-9d2d59b2c74d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=248fa32c-94be-4e1b-b4d3-cb9fac0ec155, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], logical_port=244332ba-1b58-4d42-98b0-245f9460c50f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:03:08 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:03:08.193 140354 INFO neutron.agent.ovn.metadata.agent [-] Port 244332ba-1b58-4d42-98b0-245f9460c50f in datapath 671e28d0-0b9e-41e0-b5e0-db1ccd4717ec unbound from our chassis
Jan 20 15:03:08 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:03:08.195 140354 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 671e28d0-0b9e-41e0-b5e0-db1ccd4717ec, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 20 15:03:08 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:03:08.195 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[5039fbe9-0b56-4713-a74c-fab2f4882032]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:03:08 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:03:08.196 140354 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-671e28d0-0b9e-41e0-b5e0-db1ccd4717ec namespace which is not needed anymore
Jan 20 15:03:08 compute-1 nova_compute[225855]: 2026-01-20 15:03:08.201 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:03:08 compute-1 systemd[1]: machine-qemu\x2d75\x2dinstance\x2d00000096.scope: Deactivated successfully.
Jan 20 15:03:08 compute-1 systemd[1]: machine-qemu\x2d75\x2dinstance\x2d00000096.scope: Consumed 3.695s CPU time.
Jan 20 15:03:08 compute-1 systemd-machined[194361]: Machine qemu-75-instance-00000096 terminated.
Jan 20 15:03:08 compute-1 nova_compute[225855]: 2026-01-20 15:03:08.280 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:03:08 compute-1 nova_compute[225855]: 2026-01-20 15:03:08.285 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:03:08 compute-1 nova_compute[225855]: 2026-01-20 15:03:08.294 225859 INFO nova.virt.libvirt.driver [-] [instance: 474cec75-3b01-411a-9074-75859d2a9ddf] Instance destroyed successfully.
Jan 20 15:03:08 compute-1 nova_compute[225855]: 2026-01-20 15:03:08.295 225859 DEBUG nova.objects.instance [None req-bdcbe869-5a54-47f5-8e01-6c8bd5494045 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Lazy-loading 'numa_topology' on Instance uuid 474cec75-3b01-411a-9074-75859d2a9ddf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:03:08 compute-1 neutron-haproxy-ovnmeta-671e28d0-0b9e-41e0-b5e0-db1ccd4717ec[288129]: [NOTICE]   (288133) : haproxy version is 2.8.14-c23fe91
Jan 20 15:03:08 compute-1 neutron-haproxy-ovnmeta-671e28d0-0b9e-41e0-b5e0-db1ccd4717ec[288129]: [NOTICE]   (288133) : path to executable is /usr/sbin/haproxy
Jan 20 15:03:08 compute-1 neutron-haproxy-ovnmeta-671e28d0-0b9e-41e0-b5e0-db1ccd4717ec[288129]: [WARNING]  (288133) : Exiting Master process...
Jan 20 15:03:08 compute-1 neutron-haproxy-ovnmeta-671e28d0-0b9e-41e0-b5e0-db1ccd4717ec[288129]: [ALERT]    (288133) : Current worker (288135) exited with code 143 (Terminated)
Jan 20 15:03:08 compute-1 neutron-haproxy-ovnmeta-671e28d0-0b9e-41e0-b5e0-db1ccd4717ec[288129]: [WARNING]  (288133) : All workers exited. Exiting... (0)
Jan 20 15:03:08 compute-1 systemd[1]: libpod-7e9df3d86ba0ee1b1089109f5e9eeba2832f1bd1a9b80703602546b20ab605e7.scope: Deactivated successfully.
Jan 20 15:03:08 compute-1 podman[288192]: 2026-01-20 15:03:08.337168697 +0000 UTC m=+0.050506796 container died 7e9df3d86ba0ee1b1089109f5e9eeba2832f1bd1a9b80703602546b20ab605e7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-671e28d0-0b9e-41e0-b5e0-db1ccd4717ec, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 15:03:08 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7e9df3d86ba0ee1b1089109f5e9eeba2832f1bd1a9b80703602546b20ab605e7-userdata-shm.mount: Deactivated successfully.
Jan 20 15:03:08 compute-1 systemd[1]: var-lib-containers-storage-overlay-dee5eb1c0534cfb4c70b46823a0b5c9d68a80b01a6c011e78131e499ba71b19d-merged.mount: Deactivated successfully.
Jan 20 15:03:08 compute-1 podman[288192]: 2026-01-20 15:03:08.376364633 +0000 UTC m=+0.089702732 container cleanup 7e9df3d86ba0ee1b1089109f5e9eeba2832f1bd1a9b80703602546b20ab605e7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-671e28d0-0b9e-41e0-b5e0-db1ccd4717ec, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Jan 20 15:03:08 compute-1 kernel: tap244332ba-1b: entered promiscuous mode
Jan 20 15:03:08 compute-1 NetworkManager[49104]: <info>  [1768921388.3834] manager: (tap244332ba-1b): new Tun device (/org/freedesktop/NetworkManager/Devices/278)
Jan 20 15:03:08 compute-1 nova_compute[225855]: 2026-01-20 15:03:08.383 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:03:08 compute-1 systemd[1]: libpod-conmon-7e9df3d86ba0ee1b1089109f5e9eeba2832f1bd1a9b80703602546b20ab605e7.scope: Deactivated successfully.
Jan 20 15:03:08 compute-1 ovn_controller[130490]: 2026-01-20T15:03:08Z|00648|binding|INFO|Claiming lport 244332ba-1b58-4d42-98b0-245f9460c50f for this chassis.
Jan 20 15:03:08 compute-1 ovn_controller[130490]: 2026-01-20T15:03:08Z|00649|binding|INFO|244332ba-1b58-4d42-98b0-245f9460c50f: Claiming fa:16:3e:6f:36:24 10.100.0.4
Jan 20 15:03:08 compute-1 systemd-udevd[288169]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 15:03:08 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:03:08.393 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6f:36:24 10.100.0.4'], port_security=['fa:16:3e:6f:36:24 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '474cec75-3b01-411a-9074-75859d2a9ddf', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-671e28d0-0b9e-41e0-b5e0-db1ccd4717ec', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '63555e5851564db08c6429231d264f2c', 'neutron:revision_number': '6', 'neutron:security_group_ids': '7e54c470-6a6f-454e-ae01-9d2d59b2c74d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=248fa32c-94be-4e1b-b4d3-cb9fac0ec155, chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], logical_port=244332ba-1b58-4d42-98b0-245f9460c50f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:03:08 compute-1 NetworkManager[49104]: <info>  [1768921388.3960] device (tap244332ba-1b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 20 15:03:08 compute-1 NetworkManager[49104]: <info>  [1768921388.3968] device (tap244332ba-1b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 20 15:03:08 compute-1 ovn_controller[130490]: 2026-01-20T15:03:08Z|00650|binding|INFO|Setting lport 244332ba-1b58-4d42-98b0-245f9460c50f ovn-installed in OVS
Jan 20 15:03:08 compute-1 ovn_controller[130490]: 2026-01-20T15:03:08Z|00651|binding|INFO|Setting lport 244332ba-1b58-4d42-98b0-245f9460c50f up in Southbound
Jan 20 15:03:08 compute-1 nova_compute[225855]: 2026-01-20 15:03:08.403 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:03:08 compute-1 nova_compute[225855]: 2026-01-20 15:03:08.406 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:03:08 compute-1 systemd-machined[194361]: New machine qemu-76-instance-00000096.
Jan 20 15:03:08 compute-1 systemd[1]: Started Virtual Machine qemu-76-instance-00000096.
Jan 20 15:03:08 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/424647466' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:03:08 compute-1 ceph-mon[81775]: pgmap v2348: 321 pgs: 321 active+clean; 423 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 241 KiB/s rd, 94 KiB/s wr, 45 op/s
Jan 20 15:03:08 compute-1 podman[288241]: 2026-01-20 15:03:08.447196571 +0000 UTC m=+0.044592959 container remove 7e9df3d86ba0ee1b1089109f5e9eeba2832f1bd1a9b80703602546b20ab605e7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-671e28d0-0b9e-41e0-b5e0-db1ccd4717ec, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 20 15:03:08 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:03:08.455 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[b1b5d68e-a44d-4c04-a179-18b7495e279f]: (4, ('Tue Jan 20 03:03:08 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-671e28d0-0b9e-41e0-b5e0-db1ccd4717ec (7e9df3d86ba0ee1b1089109f5e9eeba2832f1bd1a9b80703602546b20ab605e7)\n7e9df3d86ba0ee1b1089109f5e9eeba2832f1bd1a9b80703602546b20ab605e7\nTue Jan 20 03:03:08 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-671e28d0-0b9e-41e0-b5e0-db1ccd4717ec (7e9df3d86ba0ee1b1089109f5e9eeba2832f1bd1a9b80703602546b20ab605e7)\n7e9df3d86ba0ee1b1089109f5e9eeba2832f1bd1a9b80703602546b20ab605e7\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:03:08 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:03:08.458 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[6654953d-a13f-487d-8011-d9d6468a3798]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:03:08 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:03:08.459 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap671e28d0-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:03:08 compute-1 kernel: tap671e28d0-00: left promiscuous mode
Jan 20 15:03:08 compute-1 nova_compute[225855]: 2026-01-20 15:03:08.461 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:03:08 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:03:08 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:03:08 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:03:08.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:03:08 compute-1 nova_compute[225855]: 2026-01-20 15:03:08.475 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:03:08 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:03:08.478 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[7cd23492-45f6-4226-ab63-1066c0f9ff35]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:03:08 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:03:08.494 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[cd110acc-2fb1-492d-a51e-33e24f8215b6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:03:08 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:03:08.495 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[5cb0e61e-36e6-41c5-9bbe-58a3db1eb1f4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:03:08 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:03:08.512 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[ea691813-0599-46bd-a88f-d5b7d2318912]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 642353, 'reachable_time': 36789, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 288263, 'error': None, 'target': 'ovnmeta-671e28d0-0b9e-41e0-b5e0-db1ccd4717ec', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:03:08 compute-1 systemd[1]: run-netns-ovnmeta\x2d671e28d0\x2d0b9e\x2d41e0\x2db5e0\x2ddb1ccd4717ec.mount: Deactivated successfully.
Jan 20 15:03:08 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:03:08.517 140466 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-671e28d0-0b9e-41e0-b5e0-db1ccd4717ec deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 20 15:03:08 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:03:08.517 140466 DEBUG oslo.privsep.daemon [-] privsep: reply[98e7e985-caa6-4fcc-9101-9c74d8eeac0c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:03:08 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:03:08.518 140354 INFO neutron.agent.ovn.metadata.agent [-] Port 244332ba-1b58-4d42-98b0-245f9460c50f in datapath 671e28d0-0b9e-41e0-b5e0-db1ccd4717ec unbound from our chassis
Jan 20 15:03:08 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:03:08.520 140354 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 671e28d0-0b9e-41e0-b5e0-db1ccd4717ec
Jan 20 15:03:08 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:03:08.530 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[9210d910-ee55-405e-aaad-eddbc90e8e25]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:03:08 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:03:08.531 140354 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap671e28d0-01 in ovnmeta-671e28d0-0b9e-41e0-b5e0-db1ccd4717ec namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 20 15:03:08 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:03:08.534 229707 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap671e28d0-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 20 15:03:08 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:03:08.534 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[52129f3f-6ab2-4537-b331-89f0f2612650]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:03:08 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:03:08.535 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[51617bc7-118d-483b-a2aa-e17d9eabb404]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:03:08 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:03:08.547 140466 DEBUG oslo.privsep.daemon [-] privsep: reply[a82d2654-bc40-452d-b927-87dc4cae4c14]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:03:08 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:03:08.567 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[d6325685-53bb-451d-8b08-f3404f376c0f]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:03:08 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:03:08.595 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[769ff5de-fc8d-47bc-85ef-97250f8529bc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:03:08 compute-1 NetworkManager[49104]: <info>  [1768921388.6014] manager: (tap671e28d0-00): new Veth device (/org/freedesktop/NetworkManager/Devices/279)
Jan 20 15:03:08 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:03:08.600 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[37e19214-88f3-4c00-83ea-36e610a70e5b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:03:08 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:03:08.641 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[26503e07-036e-49f2-b6b7-2424cf2d8f29]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:03:08 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:03:08.644 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[c98acd26-bb15-4fc6-af44-a5aa848fc080]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:03:08 compute-1 NetworkManager[49104]: <info>  [1768921388.6645] device (tap671e28d0-00): carrier: link connected
Jan 20 15:03:08 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:03:08.670 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[54477511-f7d1-4f8f-8167-15bb69b12dfe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:03:08 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:03:08.685 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[dc64bb97-3066-4a1e-9215-2d6350f8128a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap671e28d0-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2b:4e:69'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 184], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 642759, 'reachable_time': 32213, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 288288, 'error': None, 'target': 'ovnmeta-671e28d0-0b9e-41e0-b5e0-db1ccd4717ec', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:03:08 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:03:08.703 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[a2ddfd07-21a4-4a2d-b89c-e73711c27b7a]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe2b:4e69'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 642759, 'tstamp': 642759}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 288289, 'error': None, 'target': 'ovnmeta-671e28d0-0b9e-41e0-b5e0-db1ccd4717ec', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:03:08 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:03:08.723 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[5b96edec-c371-4b21-9b81-fd9401907627]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap671e28d0-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2b:4e:69'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 184], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 642759, 'reachable_time': 32213, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 288290, 'error': None, 'target': 'ovnmeta-671e28d0-0b9e-41e0-b5e0-db1ccd4717ec', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:03:08 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:03:08.752 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[c5c2d831-e70e-46bd-9c59-d02218f1cda1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:03:08 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:03:08.809 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[1b56bc6b-a821-44c4-b585-9bb1a9238f43]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:03:08 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:03:08.811 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap671e28d0-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:03:08 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:03:08.811 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 15:03:08 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:03:08.812 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap671e28d0-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:03:08 compute-1 nova_compute[225855]: 2026-01-20 15:03:08.813 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:03:08 compute-1 NetworkManager[49104]: <info>  [1768921388.8144] manager: (tap671e28d0-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/280)
Jan 20 15:03:08 compute-1 kernel: tap671e28d0-00: entered promiscuous mode
Jan 20 15:03:08 compute-1 nova_compute[225855]: 2026-01-20 15:03:08.815 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:03:08 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:03:08.816 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap671e28d0-00, col_values=(('external_ids', {'iface-id': 'a8628d9e-196f-4b84-89fd-d3a41792b8a0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:03:08 compute-1 nova_compute[225855]: 2026-01-20 15:03:08.817 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:03:08 compute-1 ovn_controller[130490]: 2026-01-20T15:03:08Z|00652|binding|INFO|Releasing lport a8628d9e-196f-4b84-89fd-d3a41792b8a0 from this chassis (sb_readonly=0)
Jan 20 15:03:08 compute-1 nova_compute[225855]: 2026-01-20 15:03:08.832 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:03:08 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:03:08.833 140354 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/671e28d0-0b9e-41e0-b5e0-db1ccd4717ec.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/671e28d0-0b9e-41e0-b5e0-db1ccd4717ec.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 20 15:03:08 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:03:08.834 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[5d862f45-9651-4460-917a-999facffe7c7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:03:08 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:03:08.835 140354 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 20 15:03:08 compute-1 ovn_metadata_agent[140349]: global
Jan 20 15:03:08 compute-1 ovn_metadata_agent[140349]:     log         /dev/log local0 debug
Jan 20 15:03:08 compute-1 ovn_metadata_agent[140349]:     log-tag     haproxy-metadata-proxy-671e28d0-0b9e-41e0-b5e0-db1ccd4717ec
Jan 20 15:03:08 compute-1 ovn_metadata_agent[140349]:     user        root
Jan 20 15:03:08 compute-1 ovn_metadata_agent[140349]:     group       root
Jan 20 15:03:08 compute-1 ovn_metadata_agent[140349]:     maxconn     1024
Jan 20 15:03:08 compute-1 ovn_metadata_agent[140349]:     pidfile     /var/lib/neutron/external/pids/671e28d0-0b9e-41e0-b5e0-db1ccd4717ec.pid.haproxy
Jan 20 15:03:08 compute-1 ovn_metadata_agent[140349]:     daemon
Jan 20 15:03:08 compute-1 ovn_metadata_agent[140349]: 
Jan 20 15:03:08 compute-1 ovn_metadata_agent[140349]: defaults
Jan 20 15:03:08 compute-1 ovn_metadata_agent[140349]:     log global
Jan 20 15:03:08 compute-1 ovn_metadata_agent[140349]:     mode http
Jan 20 15:03:08 compute-1 ovn_metadata_agent[140349]:     option httplog
Jan 20 15:03:08 compute-1 ovn_metadata_agent[140349]:     option dontlognull
Jan 20 15:03:08 compute-1 ovn_metadata_agent[140349]:     option http-server-close
Jan 20 15:03:08 compute-1 ovn_metadata_agent[140349]:     option forwardfor
Jan 20 15:03:08 compute-1 ovn_metadata_agent[140349]:     retries                 3
Jan 20 15:03:08 compute-1 ovn_metadata_agent[140349]:     timeout http-request    30s
Jan 20 15:03:08 compute-1 ovn_metadata_agent[140349]:     timeout connect         30s
Jan 20 15:03:08 compute-1 ovn_metadata_agent[140349]:     timeout client          32s
Jan 20 15:03:08 compute-1 ovn_metadata_agent[140349]:     timeout server          32s
Jan 20 15:03:08 compute-1 ovn_metadata_agent[140349]:     timeout http-keep-alive 30s
Jan 20 15:03:08 compute-1 ovn_metadata_agent[140349]: 
Jan 20 15:03:08 compute-1 ovn_metadata_agent[140349]: 
Jan 20 15:03:08 compute-1 ovn_metadata_agent[140349]: listen listener
Jan 20 15:03:08 compute-1 ovn_metadata_agent[140349]:     bind 169.254.169.254:80
Jan 20 15:03:08 compute-1 ovn_metadata_agent[140349]:     server metadata /var/lib/neutron/metadata_proxy
Jan 20 15:03:08 compute-1 ovn_metadata_agent[140349]:     http-request add-header X-OVN-Network-ID 671e28d0-0b9e-41e0-b5e0-db1ccd4717ec
Jan 20 15:03:08 compute-1 ovn_metadata_agent[140349]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 20 15:03:08 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:03:08.835 140354 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-671e28d0-0b9e-41e0-b5e0-db1ccd4717ec', 'env', 'PROCESS_TAG=haproxy-671e28d0-0b9e-41e0-b5e0-db1ccd4717ec', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/671e28d0-0b9e-41e0-b5e0-db1ccd4717ec.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 20 15:03:09 compute-1 nova_compute[225855]: 2026-01-20 15:03:09.067 225859 DEBUG nova.compute.manager [req-c3a5acde-fb08-461b-94cb-65a7665133c6 req-e8c26888-f98d-4b0c-93dc-2cd72015a302 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 474cec75-3b01-411a-9074-75859d2a9ddf] Received event network-vif-unplugged-244332ba-1b58-4d42-98b0-245f9460c50f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:03:09 compute-1 nova_compute[225855]: 2026-01-20 15:03:09.068 225859 DEBUG oslo_concurrency.lockutils [req-c3a5acde-fb08-461b-94cb-65a7665133c6 req-e8c26888-f98d-4b0c-93dc-2cd72015a302 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "474cec75-3b01-411a-9074-75859d2a9ddf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:03:09 compute-1 nova_compute[225855]: 2026-01-20 15:03:09.068 225859 DEBUG oslo_concurrency.lockutils [req-c3a5acde-fb08-461b-94cb-65a7665133c6 req-e8c26888-f98d-4b0c-93dc-2cd72015a302 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "474cec75-3b01-411a-9074-75859d2a9ddf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:03:09 compute-1 nova_compute[225855]: 2026-01-20 15:03:09.068 225859 DEBUG oslo_concurrency.lockutils [req-c3a5acde-fb08-461b-94cb-65a7665133c6 req-e8c26888-f98d-4b0c-93dc-2cd72015a302 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "474cec75-3b01-411a-9074-75859d2a9ddf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:03:09 compute-1 nova_compute[225855]: 2026-01-20 15:03:09.068 225859 DEBUG nova.compute.manager [req-c3a5acde-fb08-461b-94cb-65a7665133c6 req-e8c26888-f98d-4b0c-93dc-2cd72015a302 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 474cec75-3b01-411a-9074-75859d2a9ddf] No waiting events found dispatching network-vif-unplugged-244332ba-1b58-4d42-98b0-245f9460c50f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:03:09 compute-1 nova_compute[225855]: 2026-01-20 15:03:09.068 225859 WARNING nova.compute.manager [req-c3a5acde-fb08-461b-94cb-65a7665133c6 req-e8c26888-f98d-4b0c-93dc-2cd72015a302 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 474cec75-3b01-411a-9074-75859d2a9ddf] Received unexpected event network-vif-unplugged-244332ba-1b58-4d42-98b0-245f9460c50f for instance with vm_state rescued and task_state unrescuing.
Jan 20 15:03:09 compute-1 podman[288363]: 2026-01-20 15:03:09.218584443 +0000 UTC m=+0.045524096 container create 791837c27285c68cb685dfa82b8474123f452a4f4d63910113cfbdc8f564544e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-671e28d0-0b9e-41e0-b5e0-db1ccd4717ec, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3)
Jan 20 15:03:09 compute-1 nova_compute[225855]: 2026-01-20 15:03:09.219 225859 DEBUG nova.virt.libvirt.host [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Removed pending event for 474cec75-3b01-411a-9074-75859d2a9ddf due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Jan 20 15:03:09 compute-1 nova_compute[225855]: 2026-01-20 15:03:09.220 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768921389.2193475, 474cec75-3b01-411a-9074-75859d2a9ddf => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:03:09 compute-1 nova_compute[225855]: 2026-01-20 15:03:09.220 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 474cec75-3b01-411a-9074-75859d2a9ddf] VM Resumed (Lifecycle Event)
Jan 20 15:03:09 compute-1 nova_compute[225855]: 2026-01-20 15:03:09.250 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 474cec75-3b01-411a-9074-75859d2a9ddf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:03:09 compute-1 systemd[1]: Started libpod-conmon-791837c27285c68cb685dfa82b8474123f452a4f4d63910113cfbdc8f564544e.scope.
Jan 20 15:03:09 compute-1 nova_compute[225855]: 2026-01-20 15:03:09.255 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 474cec75-3b01-411a-9074-75859d2a9ddf] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: rescued, current task_state: unrescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 15:03:09 compute-1 systemd[1]: Started libcrun container.
Jan 20 15:03:09 compute-1 nova_compute[225855]: 2026-01-20 15:03:09.280 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 474cec75-3b01-411a-9074-75859d2a9ddf] During sync_power_state the instance has a pending task (unrescuing). Skip.
Jan 20 15:03:09 compute-1 nova_compute[225855]: 2026-01-20 15:03:09.280 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768921389.2208128, 474cec75-3b01-411a-9074-75859d2a9ddf => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:03:09 compute-1 nova_compute[225855]: 2026-01-20 15:03:09.280 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 474cec75-3b01-411a-9074-75859d2a9ddf] VM Started (Lifecycle Event)
Jan 20 15:03:09 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a9b0c9e2a81eedae01229880b31ef78cadc4986b7f57eb0b4c053b0733a83f7/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 20 15:03:09 compute-1 podman[288363]: 2026-01-20 15:03:09.197308552 +0000 UTC m=+0.024248225 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 20 15:03:09 compute-1 podman[288363]: 2026-01-20 15:03:09.29893747 +0000 UTC m=+0.125877133 container init 791837c27285c68cb685dfa82b8474123f452a4f4d63910113cfbdc8f564544e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-671e28d0-0b9e-41e0-b5e0-db1ccd4717ec, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 20 15:03:09 compute-1 nova_compute[225855]: 2026-01-20 15:03:09.302 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 474cec75-3b01-411a-9074-75859d2a9ddf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:03:09 compute-1 podman[288363]: 2026-01-20 15:03:09.304756174 +0000 UTC m=+0.131695827 container start 791837c27285c68cb685dfa82b8474123f452a4f4d63910113cfbdc8f564544e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-671e28d0-0b9e-41e0-b5e0-db1ccd4717ec, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 20 15:03:09 compute-1 nova_compute[225855]: 2026-01-20 15:03:09.306 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 474cec75-3b01-411a-9074-75859d2a9ddf] Synchronizing instance power state after lifecycle event "Started"; current vm_state: rescued, current task_state: unrescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 15:03:09 compute-1 nova_compute[225855]: 2026-01-20 15:03:09.327 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 474cec75-3b01-411a-9074-75859d2a9ddf] During sync_power_state the instance has a pending task (unrescuing). Skip.
Jan 20 15:03:09 compute-1 neutron-haproxy-ovnmeta-671e28d0-0b9e-41e0-b5e0-db1ccd4717ec[288392]: [NOTICE]   (288401) : New worker (288403) forked
Jan 20 15:03:09 compute-1 neutron-haproxy-ovnmeta-671e28d0-0b9e-41e0-b5e0-db1ccd4717ec[288392]: [NOTICE]   (288401) : Loading success.
Jan 20 15:03:09 compute-1 nova_compute[225855]: 2026-01-20 15:03:09.335 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:03:09 compute-1 nova_compute[225855]: 2026-01-20 15:03:09.652 225859 DEBUG nova.compute.manager [None req-bdcbe869-5a54-47f5-8e01-6c8bd5494045 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: 474cec75-3b01-411a-9074-75859d2a9ddf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:03:09 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:03:09 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:03:09 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:03:09.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:03:10 compute-1 nova_compute[225855]: 2026-01-20 15:03:10.001 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:03:10 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:03:10 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:03:10 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:03:10.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:03:10 compute-1 ceph-mon[81775]: pgmap v2349: 321 pgs: 321 active+clean; 423 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 94 KiB/s wr, 146 op/s
Jan 20 15:03:10 compute-1 nova_compute[225855]: 2026-01-20 15:03:10.716 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:03:11 compute-1 nova_compute[225855]: 2026-01-20 15:03:11.211 225859 DEBUG nova.compute.manager [req-eacff317-3504-4bd7-b253-ff0c6d8b0780 req-5de96166-8d49-4003-bd2a-5fcace0cd512 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 474cec75-3b01-411a-9074-75859d2a9ddf] Received event network-vif-plugged-244332ba-1b58-4d42-98b0-245f9460c50f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:03:11 compute-1 nova_compute[225855]: 2026-01-20 15:03:11.212 225859 DEBUG oslo_concurrency.lockutils [req-eacff317-3504-4bd7-b253-ff0c6d8b0780 req-5de96166-8d49-4003-bd2a-5fcace0cd512 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "474cec75-3b01-411a-9074-75859d2a9ddf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:03:11 compute-1 nova_compute[225855]: 2026-01-20 15:03:11.212 225859 DEBUG oslo_concurrency.lockutils [req-eacff317-3504-4bd7-b253-ff0c6d8b0780 req-5de96166-8d49-4003-bd2a-5fcace0cd512 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "474cec75-3b01-411a-9074-75859d2a9ddf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:03:11 compute-1 nova_compute[225855]: 2026-01-20 15:03:11.212 225859 DEBUG oslo_concurrency.lockutils [req-eacff317-3504-4bd7-b253-ff0c6d8b0780 req-5de96166-8d49-4003-bd2a-5fcace0cd512 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "474cec75-3b01-411a-9074-75859d2a9ddf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:03:11 compute-1 nova_compute[225855]: 2026-01-20 15:03:11.212 225859 DEBUG nova.compute.manager [req-eacff317-3504-4bd7-b253-ff0c6d8b0780 req-5de96166-8d49-4003-bd2a-5fcace0cd512 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 474cec75-3b01-411a-9074-75859d2a9ddf] No waiting events found dispatching network-vif-plugged-244332ba-1b58-4d42-98b0-245f9460c50f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:03:11 compute-1 nova_compute[225855]: 2026-01-20 15:03:11.213 225859 WARNING nova.compute.manager [req-eacff317-3504-4bd7-b253-ff0c6d8b0780 req-5de96166-8d49-4003-bd2a-5fcace0cd512 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 474cec75-3b01-411a-9074-75859d2a9ddf] Received unexpected event network-vif-plugged-244332ba-1b58-4d42-98b0-245f9460c50f for instance with vm_state active and task_state None.
Jan 20 15:03:11 compute-1 nova_compute[225855]: 2026-01-20 15:03:11.213 225859 DEBUG nova.compute.manager [req-eacff317-3504-4bd7-b253-ff0c6d8b0780 req-5de96166-8d49-4003-bd2a-5fcace0cd512 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 474cec75-3b01-411a-9074-75859d2a9ddf] Received event network-vif-plugged-244332ba-1b58-4d42-98b0-245f9460c50f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:03:11 compute-1 nova_compute[225855]: 2026-01-20 15:03:11.213 225859 DEBUG oslo_concurrency.lockutils [req-eacff317-3504-4bd7-b253-ff0c6d8b0780 req-5de96166-8d49-4003-bd2a-5fcace0cd512 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "474cec75-3b01-411a-9074-75859d2a9ddf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:03:11 compute-1 nova_compute[225855]: 2026-01-20 15:03:11.213 225859 DEBUG oslo_concurrency.lockutils [req-eacff317-3504-4bd7-b253-ff0c6d8b0780 req-5de96166-8d49-4003-bd2a-5fcace0cd512 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "474cec75-3b01-411a-9074-75859d2a9ddf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:03:11 compute-1 nova_compute[225855]: 2026-01-20 15:03:11.213 225859 DEBUG oslo_concurrency.lockutils [req-eacff317-3504-4bd7-b253-ff0c6d8b0780 req-5de96166-8d49-4003-bd2a-5fcace0cd512 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "474cec75-3b01-411a-9074-75859d2a9ddf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:03:11 compute-1 nova_compute[225855]: 2026-01-20 15:03:11.214 225859 DEBUG nova.compute.manager [req-eacff317-3504-4bd7-b253-ff0c6d8b0780 req-5de96166-8d49-4003-bd2a-5fcace0cd512 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 474cec75-3b01-411a-9074-75859d2a9ddf] No waiting events found dispatching network-vif-plugged-244332ba-1b58-4d42-98b0-245f9460c50f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:03:11 compute-1 nova_compute[225855]: 2026-01-20 15:03:11.214 225859 WARNING nova.compute.manager [req-eacff317-3504-4bd7-b253-ff0c6d8b0780 req-5de96166-8d49-4003-bd2a-5fcace0cd512 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 474cec75-3b01-411a-9074-75859d2a9ddf] Received unexpected event network-vif-plugged-244332ba-1b58-4d42-98b0-245f9460c50f for instance with vm_state active and task_state None.
Jan 20 15:03:11 compute-1 nova_compute[225855]: 2026-01-20 15:03:11.214 225859 DEBUG nova.compute.manager [req-eacff317-3504-4bd7-b253-ff0c6d8b0780 req-5de96166-8d49-4003-bd2a-5fcace0cd512 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 474cec75-3b01-411a-9074-75859d2a9ddf] Received event network-vif-plugged-244332ba-1b58-4d42-98b0-245f9460c50f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:03:11 compute-1 nova_compute[225855]: 2026-01-20 15:03:11.214 225859 DEBUG oslo_concurrency.lockutils [req-eacff317-3504-4bd7-b253-ff0c6d8b0780 req-5de96166-8d49-4003-bd2a-5fcace0cd512 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "474cec75-3b01-411a-9074-75859d2a9ddf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:03:11 compute-1 nova_compute[225855]: 2026-01-20 15:03:11.214 225859 DEBUG oslo_concurrency.lockutils [req-eacff317-3504-4bd7-b253-ff0c6d8b0780 req-5de96166-8d49-4003-bd2a-5fcace0cd512 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "474cec75-3b01-411a-9074-75859d2a9ddf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:03:11 compute-1 nova_compute[225855]: 2026-01-20 15:03:11.215 225859 DEBUG oslo_concurrency.lockutils [req-eacff317-3504-4bd7-b253-ff0c6d8b0780 req-5de96166-8d49-4003-bd2a-5fcace0cd512 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "474cec75-3b01-411a-9074-75859d2a9ddf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:03:11 compute-1 nova_compute[225855]: 2026-01-20 15:03:11.215 225859 DEBUG nova.compute.manager [req-eacff317-3504-4bd7-b253-ff0c6d8b0780 req-5de96166-8d49-4003-bd2a-5fcace0cd512 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 474cec75-3b01-411a-9074-75859d2a9ddf] No waiting events found dispatching network-vif-plugged-244332ba-1b58-4d42-98b0-245f9460c50f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:03:11 compute-1 nova_compute[225855]: 2026-01-20 15:03:11.215 225859 WARNING nova.compute.manager [req-eacff317-3504-4bd7-b253-ff0c6d8b0780 req-5de96166-8d49-4003-bd2a-5fcace0cd512 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 474cec75-3b01-411a-9074-75859d2a9ddf] Received unexpected event network-vif-plugged-244332ba-1b58-4d42-98b0-245f9460c50f for instance with vm_state active and task_state None.
Jan 20 15:03:11 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:03:11 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:03:11 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:03:11.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:03:12 compute-1 nova_compute[225855]: 2026-01-20 15:03:12.105 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:03:12 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:03:12 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:03:12 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:03:12.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:03:12 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e337 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:03:13 compute-1 ceph-mon[81775]: pgmap v2350: 321 pgs: 321 active+clean; 423 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 36 KiB/s wr, 188 op/s
Jan 20 15:03:13 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:03:13 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:03:13 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:03:13.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:03:14 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/605902267' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:03:14 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/605902267' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:03:14 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:03:14 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:03:14 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:03:14.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:03:15 compute-1 nova_compute[225855]: 2026-01-20 15:03:15.003 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:03:15 compute-1 nova_compute[225855]: 2026-01-20 15:03:15.414 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:03:15 compute-1 ceph-mon[81775]: pgmap v2351: 321 pgs: 321 active+clean; 423 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 28 KiB/s wr, 204 op/s
Jan 20 15:03:15 compute-1 sudo[288417]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:03:15 compute-1 sudo[288417]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:03:15 compute-1 sudo[288417]: pam_unix(sudo:session): session closed for user root
Jan 20 15:03:15 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:03:15 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:03:15 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:03:15.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:03:15 compute-1 sudo[288442]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:03:15 compute-1 sudo[288442]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:03:15 compute-1 sudo[288442]: pam_unix(sudo:session): session closed for user root
Jan 20 15:03:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:03:16.423 140354 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:03:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:03:16.424 140354 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:03:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:03:16.424 140354 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:03:16 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:03:16 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:03:16 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:03:16.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:03:16 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/3936023232' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:03:16 compute-1 ceph-mon[81775]: pgmap v2352: 321 pgs: 321 active+clean; 426 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 213 KiB/s wr, 197 op/s
Jan 20 15:03:17 compute-1 nova_compute[225855]: 2026-01-20 15:03:17.107 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:03:17 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e338 e338: 3 total, 3 up, 3 in
Jan 20 15:03:17 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:03:17 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:03:17 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:03:17.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:03:17 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e338 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:03:18 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:03:18 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:03:18 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:03:18.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:03:18 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e339 e339: 3 total, 3 up, 3 in
Jan 20 15:03:18 compute-1 ceph-mon[81775]: osdmap e338: 3 total, 3 up, 3 in
Jan 20 15:03:18 compute-1 ceph-mon[81775]: pgmap v2354: 321 pgs: 321 active+clean; 426 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 236 KiB/s wr, 205 op/s
Jan 20 15:03:19 compute-1 ceph-mon[81775]: osdmap e339: 3 total, 3 up, 3 in
Jan 20 15:03:19 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e340 e340: 3 total, 3 up, 3 in
Jan 20 15:03:19 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:03:19 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:03:19 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:03:19.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:03:20 compute-1 nova_compute[225855]: 2026-01-20 15:03:20.005 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:03:20 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:03:20 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:03:20 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:03:20.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:03:20 compute-1 ceph-mon[81775]: osdmap e340: 3 total, 3 up, 3 in
Jan 20 15:03:20 compute-1 ceph-mon[81775]: pgmap v2357: 321 pgs: 2 active+clean+snaptrim, 4 active+clean+snaptrim_wait, 315 active+clean; 514 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 6.7 MiB/s rd, 11 MiB/s wr, 118 op/s
Jan 20 15:03:21 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/3281668790' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:03:21 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/3406533712' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:03:21 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:03:21 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:03:21 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:03:21.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:03:22 compute-1 nova_compute[225855]: 2026-01-20 15:03:22.109 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:03:22 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:03:22 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:03:22 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:03:22.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:03:22 compute-1 ceph-mon[81775]: pgmap v2358: 321 pgs: 2 active+clean+snaptrim, 4 active+clean+snaptrim_wait, 315 active+clean; 574 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 8.5 MiB/s rd, 18 MiB/s wr, 256 op/s
Jan 20 15:03:22 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e340 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:03:23 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e341 e341: 3 total, 3 up, 3 in
Jan 20 15:03:23 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:03:23 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:03:23 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:03:23.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:03:23 compute-1 sudo[288472]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:03:23 compute-1 sudo[288472]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:03:23 compute-1 sudo[288472]: pam_unix(sudo:session): session closed for user root
Jan 20 15:03:24 compute-1 sudo[288497]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:03:24 compute-1 sudo[288497]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:03:24 compute-1 sudo[288497]: pam_unix(sudo:session): session closed for user root
Jan 20 15:03:24 compute-1 sudo[288523]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:03:24 compute-1 sudo[288523]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:03:24 compute-1 sudo[288523]: pam_unix(sudo:session): session closed for user root
Jan 20 15:03:24 compute-1 podman[288521]: 2026-01-20 15:03:24.168806405 +0000 UTC m=+0.087088138 container health_status 72be95c12e041eb2cd1a16ad70dfa1391d9f03a5f3180a614ead9459dcfcd46f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 20 15:03:24 compute-1 sudo[288570]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 20 15:03:24 compute-1 sudo[288570]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:03:24 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:03:24 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:03:24 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:03:24.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:03:24 compute-1 ceph-mon[81775]: osdmap e341: 3 total, 3 up, 3 in
Jan 20 15:03:24 compute-1 ceph-mon[81775]: pgmap v2360: 321 pgs: 321 active+clean; 574 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 8.9 MiB/s rd, 18 MiB/s wr, 297 op/s
Jan 20 15:03:24 compute-1 sudo[288570]: pam_unix(sudo:session): session closed for user root
Jan 20 15:03:25 compute-1 nova_compute[225855]: 2026-01-20 15:03:25.006 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:03:25 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e342 e342: 3 total, 3 up, 3 in
Jan 20 15:03:25 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:03:25 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 15:03:25 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:03:25 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 15:03:25 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 15:03:25 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:03:25 compute-1 ceph-mon[81775]: osdmap e342: 3 total, 3 up, 3 in
Jan 20 15:03:25 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:03:25 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:03:25 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:03:25.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:03:26 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:03:26 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:03:26 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:03:26.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:03:26 compute-1 ceph-mon[81775]: pgmap v2362: 321 pgs: 321 active+clean; 505 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 7.6 MiB/s rd, 12 MiB/s wr, 372 op/s
Jan 20 15:03:27 compute-1 nova_compute[225855]: 2026-01-20 15:03:27.111 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:03:27 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/817872846' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:03:27 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:03:27 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:03:27 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:03:27.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:03:27 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e342 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:03:28 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:03:28 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:03:28 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:03:28.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:03:29 compute-1 ceph-mon[81775]: pgmap v2363: 321 pgs: 321 active+clean; 505 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 6.0 MiB/s wr, 273 op/s
Jan 20 15:03:29 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:03:29 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:03:29 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:03:29.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:03:30 compute-1 nova_compute[225855]: 2026-01-20 15:03:30.008 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:03:30 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:03:30 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:03:30 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:03:30.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:03:31 compute-1 sudo[288632]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:03:31 compute-1 sudo[288632]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:03:31 compute-1 sudo[288632]: pam_unix(sudo:session): session closed for user root
Jan 20 15:03:31 compute-1 sudo[288657]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 15:03:31 compute-1 sudo[288657]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:03:31 compute-1 sudo[288657]: pam_unix(sudo:session): session closed for user root
Jan 20 15:03:31 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e343 e343: 3 total, 3 up, 3 in
Jan 20 15:03:31 compute-1 ceph-mon[81775]: pgmap v2364: 321 pgs: 321 active+clean; 480 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 1.4 MiB/s wr, 271 op/s
Jan 20 15:03:31 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:03:31 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:03:31 compute-1 ceph-osd[79119]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #52. Immutable memtables: 8.
Jan 20 15:03:31 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:03:31 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:03:31 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:03:31.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:03:32 compute-1 nova_compute[225855]: 2026-01-20 15:03:32.113 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:03:32 compute-1 ceph-mon[81775]: osdmap e343: 3 total, 3 up, 3 in
Jan 20 15:03:32 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e344 e344: 3 total, 3 up, 3 in
Jan 20 15:03:32 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:03:32 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:03:32 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:03:32.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:03:32 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e344 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:03:33 compute-1 ceph-mon[81775]: pgmap v2366: 321 pgs: 321 active+clean; 488 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 1.9 MiB/s wr, 267 op/s
Jan 20 15:03:33 compute-1 ceph-mon[81775]: osdmap e344: 3 total, 3 up, 3 in
Jan 20 15:03:33 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e345 e345: 3 total, 3 up, 3 in
Jan 20 15:03:33 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:03:33 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:03:33 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:03:33.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:03:34 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:03:34 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:03:34 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:03:34.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:03:34 compute-1 ceph-mon[81775]: osdmap e345: 3 total, 3 up, 3 in
Jan 20 15:03:34 compute-1 ceph-mon[81775]: pgmap v2369: 321 pgs: 321 active+clean; 516 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 4.8 MiB/s wr, 193 op/s
Jan 20 15:03:35 compute-1 nova_compute[225855]: 2026-01-20 15:03:35.010 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:03:35 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e346 e346: 3 total, 3 up, 3 in
Jan 20 15:03:35 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:03:35 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:03:35 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:03:35.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:03:36 compute-1 sudo[288685]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:03:36 compute-1 sudo[288685]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:03:36 compute-1 sudo[288685]: pam_unix(sudo:session): session closed for user root
Jan 20 15:03:36 compute-1 sudo[288710]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:03:36 compute-1 sudo[288710]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:03:36 compute-1 sudo[288710]: pam_unix(sudo:session): session closed for user root
Jan 20 15:03:36 compute-1 ceph-mon[81775]: osdmap e346: 3 total, 3 up, 3 in
Jan 20 15:03:36 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:03:36 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:03:36 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:03:36.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:03:37 compute-1 nova_compute[225855]: 2026-01-20 15:03:37.039 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:03:37 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:03:37.042 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=49, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '12:bb:42', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '06:92:24:f7:15:56'}, ipsec=False) old=SB_Global(nb_cfg=48) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:03:37 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:03:37.043 140354 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 20 15:03:37 compute-1 nova_compute[225855]: 2026-01-20 15:03:37.115 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:03:37 compute-1 ceph-mon[81775]: pgmap v2371: 321 pgs: 321 active+clean; 546 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 5.7 MiB/s wr, 149 op/s
Jan 20 15:03:37 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:03:37 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:03:37 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:03:37.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:03:37 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e346 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:03:38 compute-1 podman[288736]: 2026-01-20 15:03:38.011751066 +0000 UTC m=+0.049912189 container health_status 533cd8e0c3b8b7f910f15cef49e7f879ab46c12ce646a35628c15f13725eb500 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, managed_by=edpm_ansible, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0)
Jan 20 15:03:38 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/1143980325' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:03:38 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:03:38 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:03:38 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:03:38.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:03:39 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e347 e347: 3 total, 3 up, 3 in
Jan 20 15:03:39 compute-1 ceph-mon[81775]: pgmap v2372: 321 pgs: 321 active+clean; 546 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 4.6 MiB/s wr, 123 op/s
Jan 20 15:03:39 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:03:39 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:03:39 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:03:39.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:03:40 compute-1 nova_compute[225855]: 2026-01-20 15:03:40.012 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:03:40 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e348 e348: 3 total, 3 up, 3 in
Jan 20 15:03:40 compute-1 ceph-mon[81775]: osdmap e347: 3 total, 3 up, 3 in
Jan 20 15:03:40 compute-1 ceph-mon[81775]: osdmap e348: 3 total, 3 up, 3 in
Jan 20 15:03:40 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:03:40 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:03:40 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:03:40.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:03:41 compute-1 ceph-mon[81775]: pgmap v2375: 321 pgs: 321 active+clean; 575 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 5.7 MiB/s wr, 232 op/s
Jan 20 15:03:41 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:03:41 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:03:41 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:03:41.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:03:42 compute-1 nova_compute[225855]: 2026-01-20 15:03:42.118 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:03:42 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/1625105208' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:03:42 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/1704404478' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:03:42 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:03:42 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:03:42 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:03:42.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:03:42 compute-1 nova_compute[225855]: 2026-01-20 15:03:42.876 225859 DEBUG oslo_concurrency.lockutils [None req-b9235182-8291-45c2-a2b2-fc1ae2b3ea60 1654794111844ca88666b3529173e9a7 3a1d679d5c954662a271e842fe2f2c05 - - default default] Acquiring lock "2b9353e4-2bd0-4a4a-b9a1-dd24929a4af1" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:03:42 compute-1 nova_compute[225855]: 2026-01-20 15:03:42.877 225859 DEBUG oslo_concurrency.lockutils [None req-b9235182-8291-45c2-a2b2-fc1ae2b3ea60 1654794111844ca88666b3529173e9a7 3a1d679d5c954662a271e842fe2f2c05 - - default default] Lock "2b9353e4-2bd0-4a4a-b9a1-dd24929a4af1" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:03:42 compute-1 nova_compute[225855]: 2026-01-20 15:03:42.877 225859 DEBUG oslo_concurrency.lockutils [None req-b9235182-8291-45c2-a2b2-fc1ae2b3ea60 1654794111844ca88666b3529173e9a7 3a1d679d5c954662a271e842fe2f2c05 - - default default] Acquiring lock "2b9353e4-2bd0-4a4a-b9a1-dd24929a4af1-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:03:42 compute-1 nova_compute[225855]: 2026-01-20 15:03:42.878 225859 DEBUG oslo_concurrency.lockutils [None req-b9235182-8291-45c2-a2b2-fc1ae2b3ea60 1654794111844ca88666b3529173e9a7 3a1d679d5c954662a271e842fe2f2c05 - - default default] Lock "2b9353e4-2bd0-4a4a-b9a1-dd24929a4af1-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:03:42 compute-1 nova_compute[225855]: 2026-01-20 15:03:42.878 225859 DEBUG oslo_concurrency.lockutils [None req-b9235182-8291-45c2-a2b2-fc1ae2b3ea60 1654794111844ca88666b3529173e9a7 3a1d679d5c954662a271e842fe2f2c05 - - default default] Lock "2b9353e4-2bd0-4a4a-b9a1-dd24929a4af1-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:03:42 compute-1 nova_compute[225855]: 2026-01-20 15:03:42.879 225859 INFO nova.compute.manager [None req-b9235182-8291-45c2-a2b2-fc1ae2b3ea60 1654794111844ca88666b3529173e9a7 3a1d679d5c954662a271e842fe2f2c05 - - default default] [instance: 2b9353e4-2bd0-4a4a-b9a1-dd24929a4af1] Terminating instance
Jan 20 15:03:42 compute-1 nova_compute[225855]: 2026-01-20 15:03:42.880 225859 DEBUG nova.compute.manager [None req-b9235182-8291-45c2-a2b2-fc1ae2b3ea60 1654794111844ca88666b3529173e9a7 3a1d679d5c954662a271e842fe2f2c05 - - default default] [instance: 2b9353e4-2bd0-4a4a-b9a1-dd24929a4af1] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 20 15:03:43 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e348 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:03:43 compute-1 kernel: tap6216baae-33 (unregistering): left promiscuous mode
Jan 20 15:03:43 compute-1 NetworkManager[49104]: <info>  [1768921423.1911] device (tap6216baae-33): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 20 15:03:43 compute-1 ovn_controller[130490]: 2026-01-20T15:03:43Z|00653|binding|INFO|Releasing lport 6216baae-337d-44a3-aa38-60c2afb5d13f from this chassis (sb_readonly=0)
Jan 20 15:03:43 compute-1 ovn_controller[130490]: 2026-01-20T15:03:43Z|00654|binding|INFO|Setting lport 6216baae-337d-44a3-aa38-60c2afb5d13f down in Southbound
Jan 20 15:03:43 compute-1 ovn_controller[130490]: 2026-01-20T15:03:43Z|00655|binding|INFO|Removing iface tap6216baae-33 ovn-installed in OVS
Jan 20 15:03:43 compute-1 nova_compute[225855]: 2026-01-20 15:03:43.200 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:03:43 compute-1 nova_compute[225855]: 2026-01-20 15:03:43.203 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:03:43 compute-1 nova_compute[225855]: 2026-01-20 15:03:43.216 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:03:43 compute-1 systemd[1]: machine-qemu\x2d73\x2dinstance\x2d00000097.scope: Deactivated successfully.
Jan 20 15:03:43 compute-1 systemd[1]: machine-qemu\x2d73\x2dinstance\x2d00000097.scope: Consumed 17.836s CPU time.
Jan 20 15:03:43 compute-1 systemd-machined[194361]: Machine qemu-73-instance-00000097 terminated.
Jan 20 15:03:43 compute-1 nova_compute[225855]: 2026-01-20 15:03:43.321 225859 INFO nova.virt.libvirt.driver [-] [instance: 2b9353e4-2bd0-4a4a-b9a1-dd24929a4af1] Instance destroyed successfully.
Jan 20 15:03:43 compute-1 nova_compute[225855]: 2026-01-20 15:03:43.322 225859 DEBUG nova.objects.instance [None req-b9235182-8291-45c2-a2b2-fc1ae2b3ea60 1654794111844ca88666b3529173e9a7 3a1d679d5c954662a271e842fe2f2c05 - - default default] Lazy-loading 'resources' on Instance uuid 2b9353e4-2bd0-4a4a-b9a1-dd24929a4af1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:03:43 compute-1 nova_compute[225855]: 2026-01-20 15:03:43.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:03:43 compute-1 nova_compute[225855]: 2026-01-20 15:03:43.340 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 15:03:43 compute-1 ceph-mon[81775]: pgmap v2376: 321 pgs: 321 active+clean; 560 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 4.4 MiB/s wr, 205 op/s
Jan 20 15:03:43 compute-1 nova_compute[225855]: 2026-01-20 15:03:43.430 225859 DEBUG nova.compute.manager [req-a3481e62-fec8-4292-9bb1-45fb97d9aaf5 req-9a2cfc97-eb2c-4f0a-ae47-39363ba4a923 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 2b9353e4-2bd0-4a4a-b9a1-dd24929a4af1] Received event network-changed-6216baae-337d-44a3-aa38-60c2afb5d13f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:03:43 compute-1 nova_compute[225855]: 2026-01-20 15:03:43.430 225859 DEBUG nova.compute.manager [req-a3481e62-fec8-4292-9bb1-45fb97d9aaf5 req-9a2cfc97-eb2c-4f0a-ae47-39363ba4a923 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 2b9353e4-2bd0-4a4a-b9a1-dd24929a4af1] Refreshing instance network info cache due to event network-changed-6216baae-337d-44a3-aa38-60c2afb5d13f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 15:03:43 compute-1 nova_compute[225855]: 2026-01-20 15:03:43.430 225859 DEBUG oslo_concurrency.lockutils [req-a3481e62-fec8-4292-9bb1-45fb97d9aaf5 req-9a2cfc97-eb2c-4f0a-ae47-39363ba4a923 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-2b9353e4-2bd0-4a4a-b9a1-dd24929a4af1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:03:43 compute-1 nova_compute[225855]: 2026-01-20 15:03:43.430 225859 DEBUG oslo_concurrency.lockutils [req-a3481e62-fec8-4292-9bb1-45fb97d9aaf5 req-9a2cfc97-eb2c-4f0a-ae47-39363ba4a923 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-2b9353e4-2bd0-4a4a-b9a1-dd24929a4af1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:03:43 compute-1 nova_compute[225855]: 2026-01-20 15:03:43.430 225859 DEBUG nova.network.neutron [req-a3481e62-fec8-4292-9bb1-45fb97d9aaf5 req-9a2cfc97-eb2c-4f0a-ae47-39363ba4a923 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 2b9353e4-2bd0-4a4a-b9a1-dd24929a4af1] Refreshing network info cache for port 6216baae-337d-44a3-aa38-60c2afb5d13f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 15:03:43 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:03:43.440 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:87:b9:ea 10.100.0.12'], port_security=['fa:16:3e:87:b9:ea 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '2b9353e4-2bd0-4a4a-b9a1-dd24929a4af1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-43d3be8f-9be1-4892-bbfe-d0ba2d7157ad', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3a1d679d5c954662a271e842fe2f2c05', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f11f0ae2-6b78-4d57-a9ea-5a7c52439262', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=773a665f-440e-445e-8ca6-20a8b67e017a, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], logical_port=6216baae-337d-44a3-aa38-60c2afb5d13f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:03:43 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:03:43.441 140354 INFO neutron.agent.ovn.metadata.agent [-] Port 6216baae-337d-44a3-aa38-60c2afb5d13f in datapath 43d3be8f-9be1-4892-bbfe-d0ba2d7157ad unbound from our chassis
Jan 20 15:03:43 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:03:43.443 140354 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 43d3be8f-9be1-4892-bbfe-d0ba2d7157ad, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 20 15:03:43 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:03:43.444 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[0e5922f7-0d49-4696-89a2-60eaa06cfaa1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:03:43 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:03:43.445 140354 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-43d3be8f-9be1-4892-bbfe-d0ba2d7157ad namespace which is not needed anymore
Jan 20 15:03:43 compute-1 nova_compute[225855]: 2026-01-20 15:03:43.490 225859 DEBUG nova.virt.libvirt.vif [None req-b9235182-8291-45c2-a2b2-fc1ae2b3ea60 1654794111844ca88666b3529173e9a7 3a1d679d5c954662a271e842fe2f2c05 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T15:01:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestSnapshotPattern-server-2070424486',display_name='tempest-TestSnapshotPattern-server-2070424486',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testsnapshotpattern-server-2070424486',id=151,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHt2Pjp5fO1h9ikmCXDj2fSFlpzjIfjh7jCgXMa0An0AiWgQhFRQBExuSvqHDwsNMcN7FUPQzPGoYvUkqz0I21jbk9kMja07pP6W664P26WxVinBA8YoIkVl5tlHownM8g==',key_name='tempest-TestSnapshotPattern-503298877',keypairs=<?>,launch_index=0,launched_at=2026-01-20T15:01:54Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3a1d679d5c954662a271e842fe2f2c05',ramdisk_id='',reservation_id='r-4u8oxks9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSnapshotPattern-1341092631',owner_user_name='tempest-TestSnapshotPattern-1341092631-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T15:02:25Z,user_data=None,user_id='1654794111844ca88666b3529173e9a7',uuid=2b9353e4-2bd0-4a4a-b9a1-dd24929a4af1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "6216baae-337d-44a3-aa38-60c2afb5d13f", "address": "fa:16:3e:87:b9:ea", "network": {"id": "43d3be8f-9be1-4892-bbfe-d0ba2d7157ad", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1740636070-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": 
"fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.181", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3a1d679d5c954662a271e842fe2f2c05", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6216baae-33", "ovs_interfaceid": "6216baae-337d-44a3-aa38-60c2afb5d13f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 20 15:03:43 compute-1 nova_compute[225855]: 2026-01-20 15:03:43.490 225859 DEBUG nova.network.os_vif_util [None req-b9235182-8291-45c2-a2b2-fc1ae2b3ea60 1654794111844ca88666b3529173e9a7 3a1d679d5c954662a271e842fe2f2c05 - - default default] Converting VIF {"id": "6216baae-337d-44a3-aa38-60c2afb5d13f", "address": "fa:16:3e:87:b9:ea", "network": {"id": "43d3be8f-9be1-4892-bbfe-d0ba2d7157ad", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1740636070-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.181", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3a1d679d5c954662a271e842fe2f2c05", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6216baae-33", "ovs_interfaceid": "6216baae-337d-44a3-aa38-60c2afb5d13f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 15:03:43 compute-1 nova_compute[225855]: 2026-01-20 15:03:43.491 225859 DEBUG nova.network.os_vif_util [None req-b9235182-8291-45c2-a2b2-fc1ae2b3ea60 1654794111844ca88666b3529173e9a7 3a1d679d5c954662a271e842fe2f2c05 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:87:b9:ea,bridge_name='br-int',has_traffic_filtering=True,id=6216baae-337d-44a3-aa38-60c2afb5d13f,network=Network(43d3be8f-9be1-4892-bbfe-d0ba2d7157ad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6216baae-33') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 15:03:43 compute-1 nova_compute[225855]: 2026-01-20 15:03:43.492 225859 DEBUG os_vif [None req-b9235182-8291-45c2-a2b2-fc1ae2b3ea60 1654794111844ca88666b3529173e9a7 3a1d679d5c954662a271e842fe2f2c05 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:87:b9:ea,bridge_name='br-int',has_traffic_filtering=True,id=6216baae-337d-44a3-aa38-60c2afb5d13f,network=Network(43d3be8f-9be1-4892-bbfe-d0ba2d7157ad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6216baae-33') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 20 15:03:43 compute-1 nova_compute[225855]: 2026-01-20 15:03:43.494 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:03:43 compute-1 nova_compute[225855]: 2026-01-20 15:03:43.495 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6216baae-33, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:03:43 compute-1 nova_compute[225855]: 2026-01-20 15:03:43.498 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:03:43 compute-1 nova_compute[225855]: 2026-01-20 15:03:43.507 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 15:03:43 compute-1 nova_compute[225855]: 2026-01-20 15:03:43.510 225859 INFO os_vif [None req-b9235182-8291-45c2-a2b2-fc1ae2b3ea60 1654794111844ca88666b3529173e9a7 3a1d679d5c954662a271e842fe2f2c05 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:87:b9:ea,bridge_name='br-int',has_traffic_filtering=True,id=6216baae-337d-44a3-aa38-60c2afb5d13f,network=Network(43d3be8f-9be1-4892-bbfe-d0ba2d7157ad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6216baae-33')
Jan 20 15:03:43 compute-1 neutron-haproxy-ovnmeta-43d3be8f-9be1-4892-bbfe-d0ba2d7157ad[285889]: [NOTICE]   (285893) : haproxy version is 2.8.14-c23fe91
Jan 20 15:03:43 compute-1 neutron-haproxy-ovnmeta-43d3be8f-9be1-4892-bbfe-d0ba2d7157ad[285889]: [NOTICE]   (285893) : path to executable is /usr/sbin/haproxy
Jan 20 15:03:43 compute-1 neutron-haproxy-ovnmeta-43d3be8f-9be1-4892-bbfe-d0ba2d7157ad[285889]: [WARNING]  (285893) : Exiting Master process...
Jan 20 15:03:43 compute-1 neutron-haproxy-ovnmeta-43d3be8f-9be1-4892-bbfe-d0ba2d7157ad[285889]: [WARNING]  (285893) : Exiting Master process...
Jan 20 15:03:43 compute-1 neutron-haproxy-ovnmeta-43d3be8f-9be1-4892-bbfe-d0ba2d7157ad[285889]: [ALERT]    (285893) : Current worker (285895) exited with code 143 (Terminated)
Jan 20 15:03:43 compute-1 neutron-haproxy-ovnmeta-43d3be8f-9be1-4892-bbfe-d0ba2d7157ad[285889]: [WARNING]  (285893) : All workers exited. Exiting... (0)
Jan 20 15:03:43 compute-1 systemd[1]: libpod-858464b04d58a5cb3b3a1293894336ed3fa1c40b3e83ce1971c3bfe8ff0e8d70.scope: Deactivated successfully.
Jan 20 15:03:43 compute-1 podman[288808]: 2026-01-20 15:03:43.587980371 +0000 UTC m=+0.048347205 container died 858464b04d58a5cb3b3a1293894336ed3fa1c40b3e83ce1971c3bfe8ff0e8d70 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-43d3be8f-9be1-4892-bbfe-d0ba2d7157ad, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 20 15:03:43 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-858464b04d58a5cb3b3a1293894336ed3fa1c40b3e83ce1971c3bfe8ff0e8d70-userdata-shm.mount: Deactivated successfully.
Jan 20 15:03:43 compute-1 systemd[1]: var-lib-containers-storage-overlay-a768a9de3a5bcf02ee70e03b9ac0d04f5f61e6aa1b57b9c32ed35cc799999e46-merged.mount: Deactivated successfully.
Jan 20 15:03:43 compute-1 podman[288808]: 2026-01-20 15:03:43.629381379 +0000 UTC m=+0.089748203 container cleanup 858464b04d58a5cb3b3a1293894336ed3fa1c40b3e83ce1971c3bfe8ff0e8d70 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-43d3be8f-9be1-4892-bbfe-d0ba2d7157ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202)
Jan 20 15:03:43 compute-1 systemd[1]: libpod-conmon-858464b04d58a5cb3b3a1293894336ed3fa1c40b3e83ce1971c3bfe8ff0e8d70.scope: Deactivated successfully.
Jan 20 15:03:43 compute-1 podman[288843]: 2026-01-20 15:03:43.700145585 +0000 UTC m=+0.046805001 container remove 858464b04d58a5cb3b3a1293894336ed3fa1c40b3e83ce1971c3bfe8ff0e8d70 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-43d3be8f-9be1-4892-bbfe-d0ba2d7157ad, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 20 15:03:43 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:03:43.706 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[1f4cfe09-3e74-4d9a-9ac9-57502967576d]: (4, ('Tue Jan 20 03:03:43 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-43d3be8f-9be1-4892-bbfe-d0ba2d7157ad (858464b04d58a5cb3b3a1293894336ed3fa1c40b3e83ce1971c3bfe8ff0e8d70)\n858464b04d58a5cb3b3a1293894336ed3fa1c40b3e83ce1971c3bfe8ff0e8d70\nTue Jan 20 03:03:43 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-43d3be8f-9be1-4892-bbfe-d0ba2d7157ad (858464b04d58a5cb3b3a1293894336ed3fa1c40b3e83ce1971c3bfe8ff0e8d70)\n858464b04d58a5cb3b3a1293894336ed3fa1c40b3e83ce1971c3bfe8ff0e8d70\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:03:43 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:03:43.707 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[45ac42af-7062-4d46-8341-588b36dfb22b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:03:43 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:03:43.708 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap43d3be8f-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:03:43 compute-1 nova_compute[225855]: 2026-01-20 15:03:43.710 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:03:43 compute-1 kernel: tap43d3be8f-90: left promiscuous mode
Jan 20 15:03:43 compute-1 nova_compute[225855]: 2026-01-20 15:03:43.725 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:03:43 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:03:43.727 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[6c8f47a1-2a73-456f-83c8-315d28dd7289]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:03:43 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:03:43.754 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[bec636d5-ee60-45a4-badd-6ae9e6cf72d3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:03:43 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:03:43.756 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[20f645c2-31b8-4c56-be7e-6f7094c078c6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:03:43 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:03:43.771 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[28a9a447-3c6e-420a-80b3-b7caeed14da1]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 635244, 'reachable_time': 33822, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 288858, 'error': None, 'target': 'ovnmeta-43d3be8f-9be1-4892-bbfe-d0ba2d7157ad', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:03:43 compute-1 systemd[1]: run-netns-ovnmeta\x2d43d3be8f\x2d9be1\x2d4892\x2dbbfe\x2dd0ba2d7157ad.mount: Deactivated successfully.
Jan 20 15:03:43 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:03:43.775 140466 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-43d3be8f-9be1-4892-bbfe-d0ba2d7157ad deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 20 15:03:43 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:03:43.776 140466 DEBUG oslo.privsep.daemon [-] privsep: reply[3b466783-0f2f-4fcc-a02b-9eca4f4b49e1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:03:43 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:03:43 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:03:43 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:03:43.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:03:43 compute-1 nova_compute[225855]: 2026-01-20 15:03:43.985 225859 INFO nova.virt.libvirt.driver [None req-b9235182-8291-45c2-a2b2-fc1ae2b3ea60 1654794111844ca88666b3529173e9a7 3a1d679d5c954662a271e842fe2f2c05 - - default default] [instance: 2b9353e4-2bd0-4a4a-b9a1-dd24929a4af1] Deleting instance files /var/lib/nova/instances/2b9353e4-2bd0-4a4a-b9a1-dd24929a4af1_del
Jan 20 15:03:43 compute-1 nova_compute[225855]: 2026-01-20 15:03:43.986 225859 INFO nova.virt.libvirt.driver [None req-b9235182-8291-45c2-a2b2-fc1ae2b3ea60 1654794111844ca88666b3529173e9a7 3a1d679d5c954662a271e842fe2f2c05 - - default default] [instance: 2b9353e4-2bd0-4a4a-b9a1-dd24929a4af1] Deletion of /var/lib/nova/instances/2b9353e4-2bd0-4a4a-b9a1-dd24929a4af1_del complete
Jan 20 15:03:44 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:03:44 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:03:44 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:03:44.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:03:45 compute-1 nova_compute[225855]: 2026-01-20 15:03:45.119 225859 DEBUG nova.compute.manager [req-43928d7a-a92d-489b-b123-cc4e9622df67 req-2e5744cb-58da-4641-96fb-7b6c44ba0358 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 2b9353e4-2bd0-4a4a-b9a1-dd24929a4af1] Received event network-vif-unplugged-6216baae-337d-44a3-aa38-60c2afb5d13f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:03:45 compute-1 nova_compute[225855]: 2026-01-20 15:03:45.120 225859 DEBUG oslo_concurrency.lockutils [req-43928d7a-a92d-489b-b123-cc4e9622df67 req-2e5744cb-58da-4641-96fb-7b6c44ba0358 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "2b9353e4-2bd0-4a4a-b9a1-dd24929a4af1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:03:45 compute-1 nova_compute[225855]: 2026-01-20 15:03:45.120 225859 DEBUG oslo_concurrency.lockutils [req-43928d7a-a92d-489b-b123-cc4e9622df67 req-2e5744cb-58da-4641-96fb-7b6c44ba0358 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "2b9353e4-2bd0-4a4a-b9a1-dd24929a4af1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:03:45 compute-1 nova_compute[225855]: 2026-01-20 15:03:45.120 225859 DEBUG oslo_concurrency.lockutils [req-43928d7a-a92d-489b-b123-cc4e9622df67 req-2e5744cb-58da-4641-96fb-7b6c44ba0358 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "2b9353e4-2bd0-4a4a-b9a1-dd24929a4af1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:03:45 compute-1 nova_compute[225855]: 2026-01-20 15:03:45.121 225859 DEBUG nova.compute.manager [req-43928d7a-a92d-489b-b123-cc4e9622df67 req-2e5744cb-58da-4641-96fb-7b6c44ba0358 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 2b9353e4-2bd0-4a4a-b9a1-dd24929a4af1] No waiting events found dispatching network-vif-unplugged-6216baae-337d-44a3-aa38-60c2afb5d13f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:03:45 compute-1 nova_compute[225855]: 2026-01-20 15:03:45.121 225859 DEBUG nova.compute.manager [req-43928d7a-a92d-489b-b123-cc4e9622df67 req-2e5744cb-58da-4641-96fb-7b6c44ba0358 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 2b9353e4-2bd0-4a4a-b9a1-dd24929a4af1] Received event network-vif-unplugged-6216baae-337d-44a3-aa38-60c2afb5d13f for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 20 15:03:45 compute-1 nova_compute[225855]: 2026-01-20 15:03:45.212 225859 INFO nova.compute.manager [None req-b9235182-8291-45c2-a2b2-fc1ae2b3ea60 1654794111844ca88666b3529173e9a7 3a1d679d5c954662a271e842fe2f2c05 - - default default] [instance: 2b9353e4-2bd0-4a4a-b9a1-dd24929a4af1] Took 2.33 seconds to destroy the instance on the hypervisor.
Jan 20 15:03:45 compute-1 nova_compute[225855]: 2026-01-20 15:03:45.213 225859 DEBUG oslo.service.loopingcall [None req-b9235182-8291-45c2-a2b2-fc1ae2b3ea60 1654794111844ca88666b3529173e9a7 3a1d679d5c954662a271e842fe2f2c05 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 20 15:03:45 compute-1 nova_compute[225855]: 2026-01-20 15:03:45.213 225859 DEBUG nova.compute.manager [-] [instance: 2b9353e4-2bd0-4a4a-b9a1-dd24929a4af1] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 20 15:03:45 compute-1 nova_compute[225855]: 2026-01-20 15:03:45.214 225859 DEBUG nova.network.neutron [-] [instance: 2b9353e4-2bd0-4a4a-b9a1-dd24929a4af1] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 20 15:03:45 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e349 e349: 3 total, 3 up, 3 in
Jan 20 15:03:45 compute-1 nova_compute[225855]: 2026-01-20 15:03:45.340 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:03:45 compute-1 ceph-mon[81775]: pgmap v2377: 321 pgs: 321 active+clean; 528 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 541 KiB/s rd, 3.2 MiB/s wr, 134 op/s
Jan 20 15:03:45 compute-1 ceph-mon[81775]: osdmap e349: 3 total, 3 up, 3 in
Jan 20 15:03:45 compute-1 nova_compute[225855]: 2026-01-20 15:03:45.918 225859 DEBUG oslo_concurrency.lockutils [None req-9f6f69f3-98d5-40ce-bed7-1eadbfe8563e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Acquiring lock "c1db561e-0c8b-4cfb-97bb-55f8d4731b87" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:03:45 compute-1 nova_compute[225855]: 2026-01-20 15:03:45.918 225859 DEBUG oslo_concurrency.lockutils [None req-9f6f69f3-98d5-40ce-bed7-1eadbfe8563e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Lock "c1db561e-0c8b-4cfb-97bb-55f8d4731b87" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:03:45 compute-1 nova_compute[225855]: 2026-01-20 15:03:45.957 225859 DEBUG nova.compute.manager [None req-9f6f69f3-98d5-40ce-bed7-1eadbfe8563e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: c1db561e-0c8b-4cfb-97bb-55f8d4731b87] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 20 15:03:45 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:03:45 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:03:45 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:03:45.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:03:46 compute-1 nova_compute[225855]: 2026-01-20 15:03:46.115 225859 DEBUG oslo_concurrency.lockutils [None req-9f6f69f3-98d5-40ce-bed7-1eadbfe8563e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:03:46 compute-1 nova_compute[225855]: 2026-01-20 15:03:46.115 225859 DEBUG oslo_concurrency.lockutils [None req-9f6f69f3-98d5-40ce-bed7-1eadbfe8563e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:03:46 compute-1 nova_compute[225855]: 2026-01-20 15:03:46.123 225859 DEBUG nova.virt.hardware [None req-9f6f69f3-98d5-40ce-bed7-1eadbfe8563e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 20 15:03:46 compute-1 nova_compute[225855]: 2026-01-20 15:03:46.123 225859 INFO nova.compute.claims [None req-9f6f69f3-98d5-40ce-bed7-1eadbfe8563e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: c1db561e-0c8b-4cfb-97bb-55f8d4731b87] Claim successful on node compute-1.ctlplane.example.com
Jan 20 15:03:46 compute-1 nova_compute[225855]: 2026-01-20 15:03:46.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:03:46 compute-1 nova_compute[225855]: 2026-01-20 15:03:46.409 225859 DEBUG oslo_concurrency.processutils [None req-9f6f69f3-98d5-40ce-bed7-1eadbfe8563e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:03:46 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:03:46 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:03:46 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:03:46.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:03:46 compute-1 nova_compute[225855]: 2026-01-20 15:03:46.753 225859 DEBUG nova.network.neutron [-] [instance: 2b9353e4-2bd0-4a4a-b9a1-dd24929a4af1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:03:46 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:03:46 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/375679175' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:03:46 compute-1 nova_compute[225855]: 2026-01-20 15:03:46.856 225859 DEBUG oslo_concurrency.processutils [None req-9f6f69f3-98d5-40ce-bed7-1eadbfe8563e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:03:46 compute-1 nova_compute[225855]: 2026-01-20 15:03:46.862 225859 DEBUG nova.compute.provider_tree [None req-9f6f69f3-98d5-40ce-bed7-1eadbfe8563e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:03:47 compute-1 nova_compute[225855]: 2026-01-20 15:03:47.008 225859 DEBUG nova.network.neutron [req-a3481e62-fec8-4292-9bb1-45fb97d9aaf5 req-9a2cfc97-eb2c-4f0a-ae47-39363ba4a923 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 2b9353e4-2bd0-4a4a-b9a1-dd24929a4af1] Updated VIF entry in instance network info cache for port 6216baae-337d-44a3-aa38-60c2afb5d13f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 15:03:47 compute-1 nova_compute[225855]: 2026-01-20 15:03:47.009 225859 DEBUG nova.network.neutron [req-a3481e62-fec8-4292-9bb1-45fb97d9aaf5 req-9a2cfc97-eb2c-4f0a-ae47-39363ba4a923 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 2b9353e4-2bd0-4a4a-b9a1-dd24929a4af1] Updating instance_info_cache with network_info: [{"id": "6216baae-337d-44a3-aa38-60c2afb5d13f", "address": "fa:16:3e:87:b9:ea", "network": {"id": "43d3be8f-9be1-4892-bbfe-d0ba2d7157ad", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1740636070-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3a1d679d5c954662a271e842fe2f2c05", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6216baae-33", "ovs_interfaceid": "6216baae-337d-44a3-aa38-60c2afb5d13f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:03:47 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:03:47.044 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5ffd4ac3-9266-4927-98ad-20a17782c725, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '49'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:03:47 compute-1 nova_compute[225855]: 2026-01-20 15:03:47.120 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:03:47 compute-1 nova_compute[225855]: 2026-01-20 15:03:47.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:03:47 compute-1 nova_compute[225855]: 2026-01-20 15:03:47.340 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:03:47 compute-1 ceph-mon[81775]: pgmap v2379: 321 pgs: 321 active+clean; 455 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 624 KiB/s rd, 2.5 MiB/s wr, 181 op/s
Jan 20 15:03:47 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/375679175' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:03:47 compute-1 nova_compute[225855]: 2026-01-20 15:03:47.682 225859 DEBUG nova.compute.manager [req-d6b6cc3b-38a7-4d76-bf05-85c0e612f3d3 req-c11dd991-7154-4c0d-b404-a77db5b06fb2 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 2b9353e4-2bd0-4a4a-b9a1-dd24929a4af1] Received event network-vif-deleted-6216baae-337d-44a3-aa38-60c2afb5d13f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:03:47 compute-1 nova_compute[225855]: 2026-01-20 15:03:47.682 225859 INFO nova.compute.manager [req-d6b6cc3b-38a7-4d76-bf05-85c0e612f3d3 req-c11dd991-7154-4c0d-b404-a77db5b06fb2 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 2b9353e4-2bd0-4a4a-b9a1-dd24929a4af1] Neutron deleted interface 6216baae-337d-44a3-aa38-60c2afb5d13f; detaching it from the instance and deleting it from the info cache
Jan 20 15:03:47 compute-1 nova_compute[225855]: 2026-01-20 15:03:47.683 225859 DEBUG nova.network.neutron [req-d6b6cc3b-38a7-4d76-bf05-85c0e612f3d3 req-c11dd991-7154-4c0d-b404-a77db5b06fb2 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 2b9353e4-2bd0-4a4a-b9a1-dd24929a4af1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:03:47 compute-1 nova_compute[225855]: 2026-01-20 15:03:47.706 225859 DEBUG nova.scheduler.client.report [None req-9f6f69f3-98d5-40ce-bed7-1eadbfe8563e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:03:47 compute-1 nova_compute[225855]: 2026-01-20 15:03:47.762 225859 DEBUG oslo_concurrency.lockutils [req-a3481e62-fec8-4292-9bb1-45fb97d9aaf5 req-9a2cfc97-eb2c-4f0a-ae47-39363ba4a923 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-2b9353e4-2bd0-4a4a-b9a1-dd24929a4af1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:03:47 compute-1 nova_compute[225855]: 2026-01-20 15:03:47.768 225859 DEBUG oslo_concurrency.lockutils [None req-9f6f69f3-98d5-40ce-bed7-1eadbfe8563e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.653s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:03:47 compute-1 nova_compute[225855]: 2026-01-20 15:03:47.769 225859 DEBUG nova.compute.manager [None req-9f6f69f3-98d5-40ce-bed7-1eadbfe8563e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: c1db561e-0c8b-4cfb-97bb-55f8d4731b87] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 20 15:03:47 compute-1 nova_compute[225855]: 2026-01-20 15:03:47.772 225859 INFO nova.compute.manager [-] [instance: 2b9353e4-2bd0-4a4a-b9a1-dd24929a4af1] Took 2.56 seconds to deallocate network for instance.
Jan 20 15:03:47 compute-1 nova_compute[225855]: 2026-01-20 15:03:47.778 225859 DEBUG nova.compute.manager [req-d6b6cc3b-38a7-4d76-bf05-85c0e612f3d3 req-c11dd991-7154-4c0d-b404-a77db5b06fb2 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 2b9353e4-2bd0-4a4a-b9a1-dd24929a4af1] Detach interface failed, port_id=6216baae-337d-44a3-aa38-60c2afb5d13f, reason: Instance 2b9353e4-2bd0-4a4a-b9a1-dd24929a4af1 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Jan 20 15:03:47 compute-1 nova_compute[225855]: 2026-01-20 15:03:47.832 225859 DEBUG nova.compute.manager [req-387e23a1-1366-456c-a24c-59ecf75e57ce req-c33055b8-0320-4986-bfee-d8f334dd9690 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 2b9353e4-2bd0-4a4a-b9a1-dd24929a4af1] Received event network-vif-plugged-6216baae-337d-44a3-aa38-60c2afb5d13f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:03:47 compute-1 nova_compute[225855]: 2026-01-20 15:03:47.833 225859 DEBUG oslo_concurrency.lockutils [req-387e23a1-1366-456c-a24c-59ecf75e57ce req-c33055b8-0320-4986-bfee-d8f334dd9690 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "2b9353e4-2bd0-4a4a-b9a1-dd24929a4af1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:03:47 compute-1 nova_compute[225855]: 2026-01-20 15:03:47.833 225859 DEBUG oslo_concurrency.lockutils [req-387e23a1-1366-456c-a24c-59ecf75e57ce req-c33055b8-0320-4986-bfee-d8f334dd9690 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "2b9353e4-2bd0-4a4a-b9a1-dd24929a4af1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:03:47 compute-1 nova_compute[225855]: 2026-01-20 15:03:47.833 225859 DEBUG oslo_concurrency.lockutils [req-387e23a1-1366-456c-a24c-59ecf75e57ce req-c33055b8-0320-4986-bfee-d8f334dd9690 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "2b9353e4-2bd0-4a4a-b9a1-dd24929a4af1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:03:47 compute-1 nova_compute[225855]: 2026-01-20 15:03:47.834 225859 DEBUG nova.compute.manager [req-387e23a1-1366-456c-a24c-59ecf75e57ce req-c33055b8-0320-4986-bfee-d8f334dd9690 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 2b9353e4-2bd0-4a4a-b9a1-dd24929a4af1] No waiting events found dispatching network-vif-plugged-6216baae-337d-44a3-aa38-60c2afb5d13f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:03:47 compute-1 nova_compute[225855]: 2026-01-20 15:03:47.834 225859 WARNING nova.compute.manager [req-387e23a1-1366-456c-a24c-59ecf75e57ce req-c33055b8-0320-4986-bfee-d8f334dd9690 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 2b9353e4-2bd0-4a4a-b9a1-dd24929a4af1] Received unexpected event network-vif-plugged-6216baae-337d-44a3-aa38-60c2afb5d13f for instance with vm_state active and task_state deleting.
Jan 20 15:03:47 compute-1 nova_compute[225855]: 2026-01-20 15:03:47.878 225859 DEBUG nova.compute.manager [None req-9f6f69f3-98d5-40ce-bed7-1eadbfe8563e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: c1db561e-0c8b-4cfb-97bb-55f8d4731b87] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 20 15:03:47 compute-1 nova_compute[225855]: 2026-01-20 15:03:47.879 225859 DEBUG nova.network.neutron [None req-9f6f69f3-98d5-40ce-bed7-1eadbfe8563e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: c1db561e-0c8b-4cfb-97bb-55f8d4731b87] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 20 15:03:47 compute-1 nova_compute[225855]: 2026-01-20 15:03:47.913 225859 DEBUG oslo_concurrency.lockutils [None req-b9235182-8291-45c2-a2b2-fc1ae2b3ea60 1654794111844ca88666b3529173e9a7 3a1d679d5c954662a271e842fe2f2c05 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:03:47 compute-1 nova_compute[225855]: 2026-01-20 15:03:47.914 225859 DEBUG oslo_concurrency.lockutils [None req-b9235182-8291-45c2-a2b2-fc1ae2b3ea60 1654794111844ca88666b3529173e9a7 3a1d679d5c954662a271e842fe2f2c05 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:03:47 compute-1 nova_compute[225855]: 2026-01-20 15:03:47.915 225859 INFO nova.virt.libvirt.driver [None req-9f6f69f3-98d5-40ce-bed7-1eadbfe8563e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: c1db561e-0c8b-4cfb-97bb-55f8d4731b87] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 20 15:03:47 compute-1 nova_compute[225855]: 2026-01-20 15:03:47.933 225859 DEBUG nova.compute.manager [None req-9f6f69f3-98d5-40ce-bed7-1eadbfe8563e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: c1db561e-0c8b-4cfb-97bb-55f8d4731b87] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 20 15:03:47 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:03:47 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:03:47 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:03:47.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:03:48 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e349 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:03:48 compute-1 nova_compute[225855]: 2026-01-20 15:03:48.026 225859 INFO nova.virt.block_device [None req-9f6f69f3-98d5-40ce-bed7-1eadbfe8563e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: c1db561e-0c8b-4cfb-97bb-55f8d4731b87] Booting with volume-backed-image a32b3e07-16d8-46fd-9a7b-c242c432fcf9 at /dev/vda
Jan 20 15:03:48 compute-1 nova_compute[225855]: 2026-01-20 15:03:48.106 225859 DEBUG oslo_concurrency.processutils [None req-b9235182-8291-45c2-a2b2-fc1ae2b3ea60 1654794111844ca88666b3529173e9a7 3a1d679d5c954662a271e842fe2f2c05 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:03:48 compute-1 nova_compute[225855]: 2026-01-20 15:03:48.424 225859 DEBUG nova.policy [None req-9f6f69f3-98d5-40ce-bed7-1eadbfe8563e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '2446e8399b344b29986c1aaf8bf73adf', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '63555e5851564db08c6429231d264f2c', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 20 15:03:48 compute-1 nova_compute[225855]: 2026-01-20 15:03:48.498 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:03:48 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:03:48 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:03:48 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:03:48.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:03:48 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:03:48 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1811257346' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:03:48 compute-1 nova_compute[225855]: 2026-01-20 15:03:48.569 225859 DEBUG oslo_concurrency.processutils [None req-b9235182-8291-45c2-a2b2-fc1ae2b3ea60 1654794111844ca88666b3529173e9a7 3a1d679d5c954662a271e842fe2f2c05 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:03:48 compute-1 nova_compute[225855]: 2026-01-20 15:03:48.575 225859 DEBUG nova.compute.provider_tree [None req-b9235182-8291-45c2-a2b2-fc1ae2b3ea60 1654794111844ca88666b3529173e9a7 3a1d679d5c954662a271e842fe2f2c05 - - default default] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:03:48 compute-1 ceph-mon[81775]: pgmap v2380: 321 pgs: 321 active+clean; 455 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 146 KiB/s rd, 703 KiB/s wr, 78 op/s
Jan 20 15:03:48 compute-1 nova_compute[225855]: 2026-01-20 15:03:48.712 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:03:48 compute-1 nova_compute[225855]: 2026-01-20 15:03:48.712 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 15:03:49 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/1811257346' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:03:49 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:03:49 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:03:49 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:03:49.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:03:50 compute-1 nova_compute[225855]: 2026-01-20 15:03:50.450 225859 DEBUG nova.scheduler.client.report [None req-b9235182-8291-45c2-a2b2-fc1ae2b3ea60 1654794111844ca88666b3529173e9a7 3a1d679d5c954662a271e842fe2f2c05 - - default default] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:03:50 compute-1 nova_compute[225855]: 2026-01-20 15:03:50.495 225859 DEBUG oslo_concurrency.lockutils [None req-b9235182-8291-45c2-a2b2-fc1ae2b3ea60 1654794111844ca88666b3529173e9a7 3a1d679d5c954662a271e842fe2f2c05 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 2.581s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:03:50 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:03:50 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:03:50 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:03:50.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:03:50 compute-1 ceph-mon[81775]: pgmap v2381: 321 pgs: 321 active+clean; 420 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 582 KiB/s wr, 143 op/s
Jan 20 15:03:50 compute-1 nova_compute[225855]: 2026-01-20 15:03:50.708 225859 INFO nova.scheduler.client.report [None req-b9235182-8291-45c2-a2b2-fc1ae2b3ea60 1654794111844ca88666b3529173e9a7 3a1d679d5c954662a271e842fe2f2c05 - - default default] Deleted allocations for instance 2b9353e4-2bd0-4a4a-b9a1-dd24929a4af1
Jan 20 15:03:50 compute-1 nova_compute[225855]: 2026-01-20 15:03:50.794 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "refresh_cache-2b9353e4-2bd0-4a4a-b9a1-dd24929a4af1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:03:50 compute-1 nova_compute[225855]: 2026-01-20 15:03:50.794 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquired lock "refresh_cache-2b9353e4-2bd0-4a4a-b9a1-dd24929a4af1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:03:50 compute-1 nova_compute[225855]: 2026-01-20 15:03:50.794 225859 DEBUG nova.network.neutron [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] [instance: 2b9353e4-2bd0-4a4a-b9a1-dd24929a4af1] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 20 15:03:50 compute-1 nova_compute[225855]: 2026-01-20 15:03:50.831 225859 DEBUG oslo_concurrency.lockutils [None req-b9235182-8291-45c2-a2b2-fc1ae2b3ea60 1654794111844ca88666b3529173e9a7 3a1d679d5c954662a271e842fe2f2c05 - - default default] Lock "2b9353e4-2bd0-4a4a-b9a1-dd24929a4af1" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 7.954s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:03:51 compute-1 nova_compute[225855]: 2026-01-20 15:03:51.836 225859 DEBUG nova.network.neutron [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] [instance: 2b9353e4-2bd0-4a4a-b9a1-dd24929a4af1] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 20 15:03:51 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:03:51 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:03:51 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:03:51.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:03:52 compute-1 nova_compute[225855]: 2026-01-20 15:03:52.122 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:03:52 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:03:52 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:03:52 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:03:52.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:03:52 compute-1 nova_compute[225855]: 2026-01-20 15:03:52.607 225859 DEBUG nova.network.neutron [None req-9f6f69f3-98d5-40ce-bed7-1eadbfe8563e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: c1db561e-0c8b-4cfb-97bb-55f8d4731b87] Successfully created port: 22663aa0-a7f4-431c-b5a9-4433da2dff09 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 20 15:03:52 compute-1 nova_compute[225855]: 2026-01-20 15:03:52.978 225859 DEBUG nova.network.neutron [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] [instance: 2b9353e4-2bd0-4a4a-b9a1-dd24929a4af1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:03:53 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e349 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:03:53 compute-1 nova_compute[225855]: 2026-01-20 15:03:53.094 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Releasing lock "refresh_cache-2b9353e4-2bd0-4a4a-b9a1-dd24929a4af1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:03:53 compute-1 nova_compute[225855]: 2026-01-20 15:03:53.095 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] [instance: 2b9353e4-2bd0-4a4a-b9a1-dd24929a4af1] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 20 15:03:53 compute-1 nova_compute[225855]: 2026-01-20 15:03:53.096 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:03:53 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:03:53 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3930603493' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:03:53 compute-1 nova_compute[225855]: 2026-01-20 15:03:53.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:03:53 compute-1 nova_compute[225855]: 2026-01-20 15:03:53.396 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:03:53 compute-1 nova_compute[225855]: 2026-01-20 15:03:53.396 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:03:53 compute-1 nova_compute[225855]: 2026-01-20 15:03:53.396 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:03:53 compute-1 nova_compute[225855]: 2026-01-20 15:03:53.397 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 15:03:53 compute-1 nova_compute[225855]: 2026-01-20 15:03:53.397 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:03:53 compute-1 ceph-mon[81775]: pgmap v2382: 321 pgs: 321 active+clean; 420 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 38 KiB/s wr, 137 op/s
Jan 20 15:03:53 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/3845887982' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:03:53 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/3930603493' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:03:53 compute-1 nova_compute[225855]: 2026-01-20 15:03:53.500 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:03:53 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:03:53 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1886119184' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:03:53 compute-1 nova_compute[225855]: 2026-01-20 15:03:53.864 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:03:53 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:03:53 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:03:53 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:03:53.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:03:54 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/1886119184' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:03:54 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/548541487' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:03:54 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:03:54 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:03:54 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:03:54.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:03:55 compute-1 podman[288932]: 2026-01-20 15:03:55.073684442 +0000 UTC m=+0.119617936 container health_status 72be95c12e041eb2cd1a16ad70dfa1391d9f03a5f3180a614ead9459dcfcd46f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 20 15:03:55 compute-1 nova_compute[225855]: 2026-01-20 15:03:55.469 225859 DEBUG nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] skipping disk for instance-00000096 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 15:03:55 compute-1 nova_compute[225855]: 2026-01-20 15:03:55.470 225859 DEBUG nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] skipping disk for instance-00000096 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 15:03:55 compute-1 ceph-mon[81775]: pgmap v2383: 321 pgs: 321 active+clean; 420 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 37 KiB/s wr, 135 op/s
Jan 20 15:03:55 compute-1 nova_compute[225855]: 2026-01-20 15:03:55.634 225859 WARNING nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 15:03:55 compute-1 nova_compute[225855]: 2026-01-20 15:03:55.636 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4295MB free_disk=20.830543518066406GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 15:03:55 compute-1 nova_compute[225855]: 2026-01-20 15:03:55.636 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:03:55 compute-1 nova_compute[225855]: 2026-01-20 15:03:55.636 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:03:55 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:03:55 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:03:55 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:03:55.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:03:56 compute-1 sudo[288959]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:03:56 compute-1 sudo[288959]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:03:56 compute-1 sudo[288959]: pam_unix(sudo:session): session closed for user root
Jan 20 15:03:56 compute-1 sudo[288984]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:03:56 compute-1 sudo[288984]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:03:56 compute-1 sudo[288984]: pam_unix(sudo:session): session closed for user root
Jan 20 15:03:56 compute-1 ceph-mon[81775]: pgmap v2384: 321 pgs: 321 active+clean; 420 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 32 KiB/s wr, 102 op/s
Jan 20 15:03:56 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:03:56 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:03:56 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:03:56.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:03:56 compute-1 ceph-mon[81775]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #109. Immutable memtables: 0.
Jan 20 15:03:56 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:03:56.521013) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 20 15:03:56 compute-1 ceph-mon[81775]: rocksdb: [db/flush_job.cc:856] [default] [JOB 67] Flushing memtable with next log file: 109
Jan 20 15:03:56 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768921436521040, "job": 67, "event": "flush_started", "num_memtables": 1, "num_entries": 2502, "num_deletes": 257, "total_data_size": 5667805, "memory_usage": 5746816, "flush_reason": "Manual Compaction"}
Jan 20 15:03:56 compute-1 ceph-mon[81775]: rocksdb: [db/flush_job.cc:885] [default] [JOB 67] Level-0 flush table #110: started
Jan 20 15:03:56 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768921436559362, "cf_name": "default", "job": 67, "event": "table_file_creation", "file_number": 110, "file_size": 3702117, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 55213, "largest_seqno": 57709, "table_properties": {"data_size": 3691912, "index_size": 6507, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2629, "raw_key_size": 21936, "raw_average_key_size": 21, "raw_value_size": 3671190, "raw_average_value_size": 3519, "num_data_blocks": 281, "num_entries": 1043, "num_filter_entries": 1043, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768921251, "oldest_key_time": 1768921251, "file_creation_time": 1768921436, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1539d774-8a6f-4e48-b253-137c44586344", "db_session_id": "LFF7G2OZDOU7TKQ8MKAH", "orig_file_number": 110, "seqno_to_time_mapping": "N/A"}}
Jan 20 15:03:56 compute-1 ceph-mon[81775]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 67] Flush lasted 38426 microseconds, and 7615 cpu microseconds.
Jan 20 15:03:56 compute-1 ceph-mon[81775]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 15:03:56 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:03:56.559430) [db/flush_job.cc:967] [default] [JOB 67] Level-0 flush table #110: 3702117 bytes OK
Jan 20 15:03:56 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:03:56.559459) [db/memtable_list.cc:519] [default] Level-0 commit table #110 started
Jan 20 15:03:56 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:03:56.562840) [db/memtable_list.cc:722] [default] Level-0 commit table #110: memtable #1 done
Jan 20 15:03:56 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:03:56.562920) EVENT_LOG_v1 {"time_micros": 1768921436562907, "job": 67, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 20 15:03:56 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:03:56.562949) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 20 15:03:56 compute-1 ceph-mon[81775]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 67] Try to delete WAL files size 5656670, prev total WAL file size 5656670, number of live WAL files 2.
Jan 20 15:03:56 compute-1 ceph-mon[81775]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000106.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 15:03:56 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:03:56.565562) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034373639' seq:72057594037927935, type:22 .. '7061786F730035303231' seq:0, type:0; will stop at (end)
Jan 20 15:03:56 compute-1 ceph-mon[81775]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 68] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 20 15:03:56 compute-1 ceph-mon[81775]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 67 Base level 0, inputs: [110(3615KB)], [108(11MB)]
Jan 20 15:03:56 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768921436565647, "job": 68, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [110], "files_L6": [108], "score": -1, "input_data_size": 15695572, "oldest_snapshot_seqno": -1}
Jan 20 15:03:56 compute-1 ceph-mon[81775]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 68] Generated table #111: 8531 keys, 13825127 bytes, temperature: kUnknown
Jan 20 15:03:56 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768921436787564, "cf_name": "default", "job": 68, "event": "table_file_creation", "file_number": 111, "file_size": 13825127, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13765827, "index_size": 36852, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 21381, "raw_key_size": 219971, "raw_average_key_size": 25, "raw_value_size": 13611672, "raw_average_value_size": 1595, "num_data_blocks": 1451, "num_entries": 8531, "num_filter_entries": 8531, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768917474, "oldest_key_time": 0, "file_creation_time": 1768921436, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1539d774-8a6f-4e48-b253-137c44586344", "db_session_id": "LFF7G2OZDOU7TKQ8MKAH", "orig_file_number": 111, "seqno_to_time_mapping": "N/A"}}
Jan 20 15:03:56 compute-1 ceph-mon[81775]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 15:03:56 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:03:56.787817) [db/compaction/compaction_job.cc:1663] [default] [JOB 68] Compacted 1@0 + 1@6 files to L6 => 13825127 bytes
Jan 20 15:03:56 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:03:56.790102) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 70.7 rd, 62.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.5, 11.4 +0.0 blob) out(13.2 +0.0 blob), read-write-amplify(8.0) write-amplify(3.7) OK, records in: 9063, records dropped: 532 output_compression: NoCompression
Jan 20 15:03:56 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:03:56.790126) EVENT_LOG_v1 {"time_micros": 1768921436790117, "job": 68, "event": "compaction_finished", "compaction_time_micros": 221983, "compaction_time_cpu_micros": 35279, "output_level": 6, "num_output_files": 1, "total_output_size": 13825127, "num_input_records": 9063, "num_output_records": 8531, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 20 15:03:56 compute-1 ceph-mon[81775]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000110.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 15:03:56 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768921436790836, "job": 68, "event": "table_file_deletion", "file_number": 110}
Jan 20 15:03:56 compute-1 ceph-mon[81775]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000108.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 15:03:56 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768921436793131, "job": 68, "event": "table_file_deletion", "file_number": 108}
Jan 20 15:03:56 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:03:56.565418) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:03:56 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:03:56.793223) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:03:56 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:03:56.793230) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:03:56 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:03:56.793233) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:03:56 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:03:56.793237) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:03:56 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:03:56.793240) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:03:56 compute-1 nova_compute[225855]: 2026-01-20 15:03:56.796 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Instance 474cec75-3b01-411a-9074-75859d2a9ddf actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 20 15:03:56 compute-1 nova_compute[225855]: 2026-01-20 15:03:56.797 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Instance c1db561e-0c8b-4cfb-97bb-55f8d4731b87 actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 20 15:03:56 compute-1 nova_compute[225855]: 2026-01-20 15:03:56.797 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 15:03:56 compute-1 nova_compute[225855]: 2026-01-20 15:03:56.798 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 15:03:56 compute-1 nova_compute[225855]: 2026-01-20 15:03:56.843 225859 DEBUG nova.scheduler.client.report [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Refreshing inventories for resource provider bbb02880-a710-4ac1-8b2c-5c09765848d1 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 20 15:03:56 compute-1 nova_compute[225855]: 2026-01-20 15:03:56.910 225859 DEBUG nova.scheduler.client.report [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Updating ProviderTree inventory for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 20 15:03:56 compute-1 nova_compute[225855]: 2026-01-20 15:03:56.911 225859 DEBUG nova.compute.provider_tree [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Updating inventory in ProviderTree for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 20 15:03:56 compute-1 nova_compute[225855]: 2026-01-20 15:03:56.930 225859 DEBUG nova.scheduler.client.report [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Refreshing aggregate associations for resource provider bbb02880-a710-4ac1-8b2c-5c09765848d1, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 20 15:03:56 compute-1 nova_compute[225855]: 2026-01-20 15:03:56.988 225859 DEBUG nova.scheduler.client.report [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Refreshing trait associations for resource provider bbb02880-a710-4ac1-8b2c-5c09765848d1, traits: COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_STORAGE_BUS_SATA,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_TRUSTED_CERTS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSSE3,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_VOLUME_EXTEND,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_RESCUE_BFV,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_USB,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE42,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SOCKET_PCI_NUMA_AFFINITY _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 20 15:03:57 compute-1 nova_compute[225855]: 2026-01-20 15:03:57.124 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:03:57 compute-1 nova_compute[225855]: 2026-01-20 15:03:57.127 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:03:57 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:03:57 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3864538602' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:03:57 compute-1 nova_compute[225855]: 2026-01-20 15:03:57.662 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.535s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:03:57 compute-1 nova_compute[225855]: 2026-01-20 15:03:57.668 225859 DEBUG nova.compute.provider_tree [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:03:57 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/3864538602' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:03:57 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:03:57 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:03:58 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:03:57.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:03:58 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e349 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:03:58 compute-1 nova_compute[225855]: 2026-01-20 15:03:58.320 225859 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1768921423.3193264, 2b9353e4-2bd0-4a4a-b9a1-dd24929a4af1 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:03:58 compute-1 nova_compute[225855]: 2026-01-20 15:03:58.321 225859 INFO nova.compute.manager [-] [instance: 2b9353e4-2bd0-4a4a-b9a1-dd24929a4af1] VM Stopped (Lifecycle Event)
Jan 20 15:03:58 compute-1 nova_compute[225855]: 2026-01-20 15:03:58.502 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:03:58 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:03:58 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:03:58 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:03:58.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:03:58 compute-1 nova_compute[225855]: 2026-01-20 15:03:58.536 225859 DEBUG nova.scheduler.client.report [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:03:58 compute-1 nova_compute[225855]: 2026-01-20 15:03:58.574 225859 DEBUG nova.compute.manager [None req-7cde4017-05a0-4550-b14a-1a1c814fc584 - - - - - -] [instance: 2b9353e4-2bd0-4a4a-b9a1-dd24929a4af1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:03:58 compute-1 nova_compute[225855]: 2026-01-20 15:03:58.629 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 15:03:58 compute-1 nova_compute[225855]: 2026-01-20 15:03:58.629 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.993s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:03:58 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/571537298' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:03:58 compute-1 ceph-mon[81775]: pgmap v2385: 321 pgs: 321 active+clean; 420 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 18 KiB/s wr, 89 op/s
Jan 20 15:03:59 compute-1 nova_compute[225855]: 2026-01-20 15:03:59.036 225859 DEBUG nova.network.neutron [None req-9f6f69f3-98d5-40ce-bed7-1eadbfe8563e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: c1db561e-0c8b-4cfb-97bb-55f8d4731b87] Successfully updated port: 22663aa0-a7f4-431c-b5a9-4433da2dff09 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 20 15:03:59 compute-1 nova_compute[225855]: 2026-01-20 15:03:59.104 225859 DEBUG oslo_concurrency.lockutils [None req-9f6f69f3-98d5-40ce-bed7-1eadbfe8563e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Acquiring lock "refresh_cache-c1db561e-0c8b-4cfb-97bb-55f8d4731b87" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:03:59 compute-1 nova_compute[225855]: 2026-01-20 15:03:59.104 225859 DEBUG oslo_concurrency.lockutils [None req-9f6f69f3-98d5-40ce-bed7-1eadbfe8563e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Acquired lock "refresh_cache-c1db561e-0c8b-4cfb-97bb-55f8d4731b87" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:03:59 compute-1 nova_compute[225855]: 2026-01-20 15:03:59.104 225859 DEBUG nova.network.neutron [None req-9f6f69f3-98d5-40ce-bed7-1eadbfe8563e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: c1db561e-0c8b-4cfb-97bb-55f8d4731b87] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 15:03:59 compute-1 nova_compute[225855]: 2026-01-20 15:03:59.591 225859 DEBUG nova.compute.manager [req-fe260f39-b2f0-46ba-b8ac-a1e47eb5575d req-97a01128-eebf-4fec-8a5c-24cede52db50 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: c1db561e-0c8b-4cfb-97bb-55f8d4731b87] Received event network-changed-22663aa0-a7f4-431c-b5a9-4433da2dff09 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:03:59 compute-1 nova_compute[225855]: 2026-01-20 15:03:59.591 225859 DEBUG nova.compute.manager [req-fe260f39-b2f0-46ba-b8ac-a1e47eb5575d req-97a01128-eebf-4fec-8a5c-24cede52db50 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: c1db561e-0c8b-4cfb-97bb-55f8d4731b87] Refreshing instance network info cache due to event network-changed-22663aa0-a7f4-431c-b5a9-4433da2dff09. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 15:03:59 compute-1 nova_compute[225855]: 2026-01-20 15:03:59.592 225859 DEBUG oslo_concurrency.lockutils [req-fe260f39-b2f0-46ba-b8ac-a1e47eb5575d req-97a01128-eebf-4fec-8a5c-24cede52db50 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-c1db561e-0c8b-4cfb-97bb-55f8d4731b87" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:03:59 compute-1 nova_compute[225855]: 2026-01-20 15:03:59.941 225859 DEBUG nova.network.neutron [None req-9f6f69f3-98d5-40ce-bed7-1eadbfe8563e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: c1db561e-0c8b-4cfb-97bb-55f8d4731b87] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 20 15:04:00 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:04:00 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:04:00 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:04:00.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:04:00 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:04:00 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:04:00 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:04:00.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:04:01 compute-1 ceph-mon[81775]: pgmap v2386: 321 pgs: 321 active+clean; 438 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 1.6 MiB/s wr, 130 op/s
Jan 20 15:04:02 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:04:02 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:04:02 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:04:02.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:04:02 compute-1 nova_compute[225855]: 2026-01-20 15:04:02.127 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:04:02 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/3862250939' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:04:02 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/656038064' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:04:02 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:04:02 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:04:02 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:04:02.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:04:02 compute-1 nova_compute[225855]: 2026-01-20 15:04:02.625 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:04:02 compute-1 nova_compute[225855]: 2026-01-20 15:04:02.625 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:04:03 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e349 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:04:03 compute-1 nova_compute[225855]: 2026-01-20 15:04:03.298 225859 DEBUG nova.network.neutron [None req-9f6f69f3-98d5-40ce-bed7-1eadbfe8563e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: c1db561e-0c8b-4cfb-97bb-55f8d4731b87] Updating instance_info_cache with network_info: [{"id": "22663aa0-a7f4-431c-b5a9-4433da2dff09", "address": "fa:16:3e:2e:ec:9e", "network": {"id": "671e28d0-0b9e-41e0-b5e0-db1ccd4717ec", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-884777184-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "63555e5851564db08c6429231d264f2c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap22663aa0-a7", "ovs_interfaceid": "22663aa0-a7f4-431c-b5a9-4433da2dff09", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:04:03 compute-1 ceph-mon[81775]: pgmap v2387: 321 pgs: 321 active+clean; 451 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 2.1 MiB/s wr, 77 op/s
Jan 20 15:04:03 compute-1 nova_compute[225855]: 2026-01-20 15:04:03.504 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:04:04 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:04:04 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:04:04 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:04:04.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:04:04 compute-1 nova_compute[225855]: 2026-01-20 15:04:04.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:04:04 compute-1 nova_compute[225855]: 2026-01-20 15:04:04.340 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 20 15:04:04 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:04:04 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:04:04 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:04:04.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:04:04 compute-1 nova_compute[225855]: 2026-01-20 15:04:04.701 225859 DEBUG oslo_concurrency.lockutils [None req-9f6f69f3-98d5-40ce-bed7-1eadbfe8563e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Releasing lock "refresh_cache-c1db561e-0c8b-4cfb-97bb-55f8d4731b87" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:04:04 compute-1 nova_compute[225855]: 2026-01-20 15:04:04.701 225859 DEBUG nova.compute.manager [None req-9f6f69f3-98d5-40ce-bed7-1eadbfe8563e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: c1db561e-0c8b-4cfb-97bb-55f8d4731b87] Instance network_info: |[{"id": "22663aa0-a7f4-431c-b5a9-4433da2dff09", "address": "fa:16:3e:2e:ec:9e", "network": {"id": "671e28d0-0b9e-41e0-b5e0-db1ccd4717ec", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-884777184-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "63555e5851564db08c6429231d264f2c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap22663aa0-a7", "ovs_interfaceid": "22663aa0-a7f4-431c-b5a9-4433da2dff09", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 20 15:04:04 compute-1 nova_compute[225855]: 2026-01-20 15:04:04.702 225859 DEBUG oslo_concurrency.lockutils [req-fe260f39-b2f0-46ba-b8ac-a1e47eb5575d req-97a01128-eebf-4fec-8a5c-24cede52db50 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-c1db561e-0c8b-4cfb-97bb-55f8d4731b87" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:04:04 compute-1 nova_compute[225855]: 2026-01-20 15:04:04.702 225859 DEBUG nova.network.neutron [req-fe260f39-b2f0-46ba-b8ac-a1e47eb5575d req-97a01128-eebf-4fec-8a5c-24cede52db50 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: c1db561e-0c8b-4cfb-97bb-55f8d4731b87] Refreshing network info cache for port 22663aa0-a7f4-431c-b5a9-4433da2dff09 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 15:04:04 compute-1 nova_compute[225855]: 2026-01-20 15:04:04.718 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 20 15:04:05 compute-1 nova_compute[225855]: 2026-01-20 15:04:05.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:04:05 compute-1 nova_compute[225855]: 2026-01-20 15:04:05.339 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 20 15:04:05 compute-1 ceph-mon[81775]: pgmap v2388: 321 pgs: 321 active+clean; 453 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.1 MiB/s wr, 71 op/s
Jan 20 15:04:06 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:04:06 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:04:06 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:04:06.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:04:06 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:04:06 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:04:06 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:04:06.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:04:07 compute-1 nova_compute[225855]: 2026-01-20 15:04:07.128 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:04:07 compute-1 ceph-mon[81775]: pgmap v2389: 321 pgs: 321 active+clean; 453 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 2.1 MiB/s wr, 133 op/s
Jan 20 15:04:08 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e349 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:04:08 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:04:08 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:04:08 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:04:08.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:04:08 compute-1 nova_compute[225855]: 2026-01-20 15:04:08.506 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:04:08 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:04:08 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:04:08 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:04:08.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:04:08 compute-1 ovn_controller[130490]: 2026-01-20T15:04:08Z|00656|binding|INFO|Releasing lport a8628d9e-196f-4b84-89fd-d3a41792b8a0 from this chassis (sb_readonly=0)
Jan 20 15:04:08 compute-1 nova_compute[225855]: 2026-01-20 15:04:08.608 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:04:08 compute-1 ovn_controller[130490]: 2026-01-20T15:04:08Z|00657|binding|INFO|Releasing lport a8628d9e-196f-4b84-89fd-d3a41792b8a0 from this chassis (sb_readonly=0)
Jan 20 15:04:08 compute-1 nova_compute[225855]: 2026-01-20 15:04:08.889 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:04:09 compute-1 podman[289038]: 2026-01-20 15:04:09.028783139 +0000 UTC m=+0.055238629 container health_status 533cd8e0c3b8b7f910f15cef49e7f879ab46c12ce646a35628c15f13725eb500 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible)
Jan 20 15:04:09 compute-1 nova_compute[225855]: 2026-01-20 15:04:09.072 225859 DEBUG os_brick.utils [None req-9f6f69f3-98d5-40ce-bed7-1eadbfe8563e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.101', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-1.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Jan 20 15:04:09 compute-1 nova_compute[225855]: 2026-01-20 15:04:09.073 231081 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:04:09 compute-1 nova_compute[225855]: 2026-01-20 15:04:09.084 231081 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:04:09 compute-1 nova_compute[225855]: 2026-01-20 15:04:09.084 231081 DEBUG oslo.privsep.daemon [-] privsep: reply[cf56d54c-1c88-450a-9219-528da2357c16]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:04:09 compute-1 nova_compute[225855]: 2026-01-20 15:04:09.085 231081 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:04:09 compute-1 nova_compute[225855]: 2026-01-20 15:04:09.093 231081 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:04:09 compute-1 nova_compute[225855]: 2026-01-20 15:04:09.093 231081 DEBUG oslo.privsep.daemon [-] privsep: reply[24ec3c1a-4d08-4507-99dd-2a0eadffc8d3]: (4, ('InitiatorName=iqn.1994-05.com.redhat:1821ea3dc03d', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:04:09 compute-1 nova_compute[225855]: 2026-01-20 15:04:09.094 231081 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:04:09 compute-1 nova_compute[225855]: 2026-01-20 15:04:09.102 231081 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:04:09 compute-1 nova_compute[225855]: 2026-01-20 15:04:09.103 231081 DEBUG oslo.privsep.daemon [-] privsep: reply[9271998a-6cb6-453a-8ac5-b955a9f9fc55]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:04:09 compute-1 nova_compute[225855]: 2026-01-20 15:04:09.104 231081 DEBUG oslo.privsep.daemon [-] privsep: reply[d3d8197c-0c04-4c92-bf48-7d1c0298c337]: (4, '870b1f1c-f19c-477b-b282-ee6eeba50974') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:04:09 compute-1 nova_compute[225855]: 2026-01-20 15:04:09.104 225859 DEBUG oslo_concurrency.processutils [None req-9f6f69f3-98d5-40ce-bed7-1eadbfe8563e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:04:09 compute-1 nova_compute[225855]: 2026-01-20 15:04:09.130 225859 DEBUG oslo_concurrency.processutils [None req-9f6f69f3-98d5-40ce-bed7-1eadbfe8563e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] CMD "nvme version" returned: 0 in 0.025s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:04:09 compute-1 nova_compute[225855]: 2026-01-20 15:04:09.132 225859 DEBUG os_brick.initiator.connectors.lightos [None req-9f6f69f3-98d5-40ce-bed7-1eadbfe8563e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Jan 20 15:04:09 compute-1 nova_compute[225855]: 2026-01-20 15:04:09.133 225859 DEBUG os_brick.initiator.connectors.lightos [None req-9f6f69f3-98d5-40ce-bed7-1eadbfe8563e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Jan 20 15:04:09 compute-1 nova_compute[225855]: 2026-01-20 15:04:09.133 225859 DEBUG os_brick.initiator.connectors.lightos [None req-9f6f69f3-98d5-40ce-bed7-1eadbfe8563e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:5350774e-8b5e-4dba-80a9-92d405981c1d dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Jan 20 15:04:09 compute-1 nova_compute[225855]: 2026-01-20 15:04:09.134 225859 DEBUG os_brick.utils [None req-9f6f69f3-98d5-40ce-bed7-1eadbfe8563e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] <== get_connector_properties: return (61ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.101', 'host': 'compute-1.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:1821ea3dc03d', 'do_local_attach': False, 'nvme_hostid': '5350774e-8b5e-4dba-80a9-92d405981c1d', 'system uuid': '870b1f1c-f19c-477b-b282-ee6eeba50974', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:5350774e-8b5e-4dba-80a9-92d405981c1d', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Jan 20 15:04:09 compute-1 nova_compute[225855]: 2026-01-20 15:04:09.134 225859 DEBUG nova.virt.block_device [None req-9f6f69f3-98d5-40ce-bed7-1eadbfe8563e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: c1db561e-0c8b-4cfb-97bb-55f8d4731b87] Updating existing volume attachment record: 2e3b09a8-1ad6-45d8-9497-37288b706f5c _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Jan 20 15:04:09 compute-1 ceph-mon[81775]: pgmap v2390: 321 pgs: 321 active+clean; 453 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 2.1 MiB/s wr, 132 op/s
Jan 20 15:04:10 compute-1 nova_compute[225855]: 2026-01-20 15:04:09.998 225859 DEBUG nova.network.neutron [req-fe260f39-b2f0-46ba-b8ac-a1e47eb5575d req-97a01128-eebf-4fec-8a5c-24cede52db50 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: c1db561e-0c8b-4cfb-97bb-55f8d4731b87] Updated VIF entry in instance network info cache for port 22663aa0-a7f4-431c-b5a9-4433da2dff09. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 15:04:10 compute-1 nova_compute[225855]: 2026-01-20 15:04:09.999 225859 DEBUG nova.network.neutron [req-fe260f39-b2f0-46ba-b8ac-a1e47eb5575d req-97a01128-eebf-4fec-8a5c-24cede52db50 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: c1db561e-0c8b-4cfb-97bb-55f8d4731b87] Updating instance_info_cache with network_info: [{"id": "22663aa0-a7f4-431c-b5a9-4433da2dff09", "address": "fa:16:3e:2e:ec:9e", "network": {"id": "671e28d0-0b9e-41e0-b5e0-db1ccd4717ec", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-884777184-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "63555e5851564db08c6429231d264f2c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap22663aa0-a7", "ovs_interfaceid": "22663aa0-a7f4-431c-b5a9-4433da2dff09", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:04:10 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:04:10 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:04:10 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:04:10.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:04:10 compute-1 nova_compute[225855]: 2026-01-20 15:04:10.018 225859 DEBUG oslo_concurrency.lockutils [req-fe260f39-b2f0-46ba-b8ac-a1e47eb5575d req-97a01128-eebf-4fec-8a5c-24cede52db50 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-c1db561e-0c8b-4cfb-97bb-55f8d4731b87" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:04:10 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:04:10 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:04:10 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:04:10.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:04:11 compute-1 ceph-mon[81775]: pgmap v2391: 321 pgs: 321 active+clean; 474 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 3.1 MiB/s wr, 169 op/s
Jan 20 15:04:11 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/1358398100' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:04:12 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:04:12 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:04:12 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:04:12.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:04:12 compute-1 nova_compute[225855]: 2026-01-20 15:04:12.196 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:04:12 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:04:12 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:04:12 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:04:12.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:04:13 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e349 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:04:13 compute-1 nova_compute[225855]: 2026-01-20 15:04:13.196 225859 DEBUG nova.compute.manager [None req-9f6f69f3-98d5-40ce-bed7-1eadbfe8563e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: c1db561e-0c8b-4cfb-97bb-55f8d4731b87] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 20 15:04:13 compute-1 nova_compute[225855]: 2026-01-20 15:04:13.198 225859 DEBUG nova.virt.libvirt.driver [None req-9f6f69f3-98d5-40ce-bed7-1eadbfe8563e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: c1db561e-0c8b-4cfb-97bb-55f8d4731b87] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 20 15:04:13 compute-1 nova_compute[225855]: 2026-01-20 15:04:13.198 225859 INFO nova.virt.libvirt.driver [None req-9f6f69f3-98d5-40ce-bed7-1eadbfe8563e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: c1db561e-0c8b-4cfb-97bb-55f8d4731b87] Creating image(s)
Jan 20 15:04:13 compute-1 nova_compute[225855]: 2026-01-20 15:04:13.199 225859 DEBUG nova.virt.libvirt.driver [None req-9f6f69f3-98d5-40ce-bed7-1eadbfe8563e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: c1db561e-0c8b-4cfb-97bb-55f8d4731b87] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Jan 20 15:04:13 compute-1 nova_compute[225855]: 2026-01-20 15:04:13.199 225859 DEBUG nova.virt.libvirt.driver [None req-9f6f69f3-98d5-40ce-bed7-1eadbfe8563e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: c1db561e-0c8b-4cfb-97bb-55f8d4731b87] Ensure instance console log exists: /var/lib/nova/instances/c1db561e-0c8b-4cfb-97bb-55f8d4731b87/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 20 15:04:13 compute-1 nova_compute[225855]: 2026-01-20 15:04:13.199 225859 DEBUG oslo_concurrency.lockutils [None req-9f6f69f3-98d5-40ce-bed7-1eadbfe8563e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:04:13 compute-1 nova_compute[225855]: 2026-01-20 15:04:13.199 225859 DEBUG oslo_concurrency.lockutils [None req-9f6f69f3-98d5-40ce-bed7-1eadbfe8563e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:04:13 compute-1 nova_compute[225855]: 2026-01-20 15:04:13.200 225859 DEBUG oslo_concurrency.lockutils [None req-9f6f69f3-98d5-40ce-bed7-1eadbfe8563e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:04:13 compute-1 nova_compute[225855]: 2026-01-20 15:04:13.202 225859 DEBUG nova.virt.libvirt.driver [None req-9f6f69f3-98d5-40ce-bed7-1eadbfe8563e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: c1db561e-0c8b-4cfb-97bb-55f8d4731b87] Start _get_guest_xml network_info=[{"id": "22663aa0-a7f4-431c-b5a9-4433da2dff09", "address": "fa:16:3e:2e:ec:9e", "network": {"id": "671e28d0-0b9e-41e0-b5e0-db1ccd4717ec", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-884777184-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "63555e5851564db08c6429231d264f2c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap22663aa0-a7", "ovs_interfaceid": "22663aa0-a7f4-431c-b5a9-4433da2dff09", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'delete_on_termination': False, 'device_type': 'disk', 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-9d237554-9581-4577-897a-3907d38a0cb3', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '9d237554-9581-4577-897a-3907d38a0cb3', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': 'c1db561e-0c8b-4cfb-97bb-55f8d4731b87', 'attached_at': '', 'detached_at': '', 'volume_id': '9d237554-9581-4577-897a-3907d38a0cb3', 'serial': '9d237554-9581-4577-897a-3907d38a0cb3'}, 'guest_format': None, 'boot_index': 0, 'mount_device': '/dev/vda', 'attachment_id': '2e3b09a8-1ad6-45d8-9497-37288b706f5c', 'disk_bus': 'virtio', 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 20 15:04:13 compute-1 nova_compute[225855]: 2026-01-20 15:04:13.206 225859 WARNING nova.virt.libvirt.driver [None req-9f6f69f3-98d5-40ce-bed7-1eadbfe8563e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 15:04:13 compute-1 nova_compute[225855]: 2026-01-20 15:04:13.212 225859 DEBUG nova.virt.libvirt.host [None req-9f6f69f3-98d5-40ce-bed7-1eadbfe8563e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 20 15:04:13 compute-1 nova_compute[225855]: 2026-01-20 15:04:13.213 225859 DEBUG nova.virt.libvirt.host [None req-9f6f69f3-98d5-40ce-bed7-1eadbfe8563e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 20 15:04:13 compute-1 nova_compute[225855]: 2026-01-20 15:04:13.216 225859 DEBUG nova.virt.libvirt.host [None req-9f6f69f3-98d5-40ce-bed7-1eadbfe8563e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 20 15:04:13 compute-1 nova_compute[225855]: 2026-01-20 15:04:13.216 225859 DEBUG nova.virt.libvirt.host [None req-9f6f69f3-98d5-40ce-bed7-1eadbfe8563e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 20 15:04:13 compute-1 nova_compute[225855]: 2026-01-20 15:04:13.217 225859 DEBUG nova.virt.libvirt.driver [None req-9f6f69f3-98d5-40ce-bed7-1eadbfe8563e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 20 15:04:13 compute-1 nova_compute[225855]: 2026-01-20 15:04:13.217 225859 DEBUG nova.virt.hardware [None req-9f6f69f3-98d5-40ce-bed7-1eadbfe8563e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-20T14:21:55Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='522deaab-a741-4dbb-932d-d8b13a211c33',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 20 15:04:13 compute-1 nova_compute[225855]: 2026-01-20 15:04:13.218 225859 DEBUG nova.virt.hardware [None req-9f6f69f3-98d5-40ce-bed7-1eadbfe8563e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 20 15:04:13 compute-1 nova_compute[225855]: 2026-01-20 15:04:13.218 225859 DEBUG nova.virt.hardware [None req-9f6f69f3-98d5-40ce-bed7-1eadbfe8563e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 20 15:04:13 compute-1 nova_compute[225855]: 2026-01-20 15:04:13.218 225859 DEBUG nova.virt.hardware [None req-9f6f69f3-98d5-40ce-bed7-1eadbfe8563e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 20 15:04:13 compute-1 nova_compute[225855]: 2026-01-20 15:04:13.218 225859 DEBUG nova.virt.hardware [None req-9f6f69f3-98d5-40ce-bed7-1eadbfe8563e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 20 15:04:13 compute-1 nova_compute[225855]: 2026-01-20 15:04:13.219 225859 DEBUG nova.virt.hardware [None req-9f6f69f3-98d5-40ce-bed7-1eadbfe8563e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 20 15:04:13 compute-1 nova_compute[225855]: 2026-01-20 15:04:13.219 225859 DEBUG nova.virt.hardware [None req-9f6f69f3-98d5-40ce-bed7-1eadbfe8563e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 20 15:04:13 compute-1 nova_compute[225855]: 2026-01-20 15:04:13.219 225859 DEBUG nova.virt.hardware [None req-9f6f69f3-98d5-40ce-bed7-1eadbfe8563e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 20 15:04:13 compute-1 nova_compute[225855]: 2026-01-20 15:04:13.219 225859 DEBUG nova.virt.hardware [None req-9f6f69f3-98d5-40ce-bed7-1eadbfe8563e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 20 15:04:13 compute-1 nova_compute[225855]: 2026-01-20 15:04:13.220 225859 DEBUG nova.virt.hardware [None req-9f6f69f3-98d5-40ce-bed7-1eadbfe8563e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 20 15:04:13 compute-1 nova_compute[225855]: 2026-01-20 15:04:13.220 225859 DEBUG nova.virt.hardware [None req-9f6f69f3-98d5-40ce-bed7-1eadbfe8563e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 20 15:04:13 compute-1 nova_compute[225855]: 2026-01-20 15:04:13.245 225859 DEBUG nova.storage.rbd_utils [None req-9f6f69f3-98d5-40ce-bed7-1eadbfe8563e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] rbd image c1db561e-0c8b-4cfb-97bb-55f8d4731b87_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:04:13 compute-1 nova_compute[225855]: 2026-01-20 15:04:13.249 225859 DEBUG oslo_concurrency.processutils [None req-9f6f69f3-98d5-40ce-bed7-1eadbfe8563e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:04:13 compute-1 ceph-mon[81775]: pgmap v2392: 321 pgs: 321 active+clean; 500 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.4 MiB/s wr, 133 op/s
Jan 20 15:04:13 compute-1 nova_compute[225855]: 2026-01-20 15:04:13.509 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:04:13 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 15:04:13 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/950145876' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:04:13 compute-1 nova_compute[225855]: 2026-01-20 15:04:13.744 225859 DEBUG oslo_concurrency.processutils [None req-9f6f69f3-98d5-40ce-bed7-1eadbfe8563e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.495s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:04:13 compute-1 nova_compute[225855]: 2026-01-20 15:04:13.794 225859 DEBUG nova.virt.libvirt.vif [None req-9f6f69f3-98d5-40ce-bed7-1eadbfe8563e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T15:03:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-ServerBootFromVolumeStableRescueTest-server-1670904176',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverbootfromvolumestablerescuetest-server-1670904176',id=159,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='63555e5851564db08c6429231d264f2c',ramdisk_id='',reservation_id='r-qovaqapn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerBootFromVolumeStableRescueTest-1871371328',owner_user_name='tempest-ServerBootFromVolumeStableRescueTest-1871371328-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T15:03:47Z,user_data=None,user_id='2446e8399b344b29986c1aaf8bf73adf',uuid=c1db561e-0c8b-4cfb-97bb-55f8d4731b87,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "22663aa0-a7f4-431c-b5a9-4433da2dff09", "address": "fa:16:3e:2e:ec:9e", "network": {"id": "671e28d0-0b9e-41e0-b5e0-db1ccd4717ec", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-884777184-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "63555e5851564db08c6429231d264f2c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap22663aa0-a7", "ovs_interfaceid": "22663aa0-a7f4-431c-b5a9-4433da2dff09", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 20 15:04:13 compute-1 nova_compute[225855]: 2026-01-20 15:04:13.795 225859 DEBUG nova.network.os_vif_util [None req-9f6f69f3-98d5-40ce-bed7-1eadbfe8563e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Converting VIF {"id": "22663aa0-a7f4-431c-b5a9-4433da2dff09", "address": "fa:16:3e:2e:ec:9e", "network": {"id": "671e28d0-0b9e-41e0-b5e0-db1ccd4717ec", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-884777184-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "63555e5851564db08c6429231d264f2c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap22663aa0-a7", "ovs_interfaceid": "22663aa0-a7f4-431c-b5a9-4433da2dff09", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 15:04:13 compute-1 nova_compute[225855]: 2026-01-20 15:04:13.795 225859 DEBUG nova.network.os_vif_util [None req-9f6f69f3-98d5-40ce-bed7-1eadbfe8563e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2e:ec:9e,bridge_name='br-int',has_traffic_filtering=True,id=22663aa0-a7f4-431c-b5a9-4433da2dff09,network=Network(671e28d0-0b9e-41e0-b5e0-db1ccd4717ec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap22663aa0-a7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 15:04:13 compute-1 nova_compute[225855]: 2026-01-20 15:04:13.797 225859 DEBUG nova.objects.instance [None req-9f6f69f3-98d5-40ce-bed7-1eadbfe8563e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Lazy-loading 'pci_devices' on Instance uuid c1db561e-0c8b-4cfb-97bb-55f8d4731b87 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:04:13 compute-1 nova_compute[225855]: 2026-01-20 15:04:13.820 225859 DEBUG nova.virt.libvirt.driver [None req-9f6f69f3-98d5-40ce-bed7-1eadbfe8563e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: c1db561e-0c8b-4cfb-97bb-55f8d4731b87] End _get_guest_xml xml=<domain type="kvm">
Jan 20 15:04:13 compute-1 nova_compute[225855]:   <uuid>c1db561e-0c8b-4cfb-97bb-55f8d4731b87</uuid>
Jan 20 15:04:13 compute-1 nova_compute[225855]:   <name>instance-0000009f</name>
Jan 20 15:04:13 compute-1 nova_compute[225855]:   <memory>131072</memory>
Jan 20 15:04:13 compute-1 nova_compute[225855]:   <vcpu>1</vcpu>
Jan 20 15:04:13 compute-1 nova_compute[225855]:   <metadata>
Jan 20 15:04:13 compute-1 nova_compute[225855]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 15:04:13 compute-1 nova_compute[225855]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 15:04:13 compute-1 nova_compute[225855]:       <nova:name>tempest-ServerBootFromVolumeStableRescueTest-server-1670904176</nova:name>
Jan 20 15:04:13 compute-1 nova_compute[225855]:       <nova:creationTime>2026-01-20 15:04:13</nova:creationTime>
Jan 20 15:04:13 compute-1 nova_compute[225855]:       <nova:flavor name="m1.nano">
Jan 20 15:04:13 compute-1 nova_compute[225855]:         <nova:memory>128</nova:memory>
Jan 20 15:04:13 compute-1 nova_compute[225855]:         <nova:disk>1</nova:disk>
Jan 20 15:04:13 compute-1 nova_compute[225855]:         <nova:swap>0</nova:swap>
Jan 20 15:04:13 compute-1 nova_compute[225855]:         <nova:ephemeral>0</nova:ephemeral>
Jan 20 15:04:13 compute-1 nova_compute[225855]:         <nova:vcpus>1</nova:vcpus>
Jan 20 15:04:13 compute-1 nova_compute[225855]:       </nova:flavor>
Jan 20 15:04:13 compute-1 nova_compute[225855]:       <nova:owner>
Jan 20 15:04:13 compute-1 nova_compute[225855]:         <nova:user uuid="2446e8399b344b29986c1aaf8bf73adf">tempest-ServerBootFromVolumeStableRescueTest-1871371328-project-member</nova:user>
Jan 20 15:04:13 compute-1 nova_compute[225855]:         <nova:project uuid="63555e5851564db08c6429231d264f2c">tempest-ServerBootFromVolumeStableRescueTest-1871371328</nova:project>
Jan 20 15:04:13 compute-1 nova_compute[225855]:       </nova:owner>
Jan 20 15:04:13 compute-1 nova_compute[225855]:       <nova:root type="image" uuid="a32b3e07-16d8-46fd-9a7b-c242c432fcf9"/>
Jan 20 15:04:13 compute-1 nova_compute[225855]:       <nova:ports>
Jan 20 15:04:13 compute-1 nova_compute[225855]:         <nova:port uuid="22663aa0-a7f4-431c-b5a9-4433da2dff09">
Jan 20 15:04:13 compute-1 nova_compute[225855]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Jan 20 15:04:13 compute-1 nova_compute[225855]:         </nova:port>
Jan 20 15:04:13 compute-1 nova_compute[225855]:       </nova:ports>
Jan 20 15:04:13 compute-1 nova_compute[225855]:     </nova:instance>
Jan 20 15:04:13 compute-1 nova_compute[225855]:   </metadata>
Jan 20 15:04:13 compute-1 nova_compute[225855]:   <sysinfo type="smbios">
Jan 20 15:04:13 compute-1 nova_compute[225855]:     <system>
Jan 20 15:04:13 compute-1 nova_compute[225855]:       <entry name="manufacturer">RDO</entry>
Jan 20 15:04:13 compute-1 nova_compute[225855]:       <entry name="product">OpenStack Compute</entry>
Jan 20 15:04:13 compute-1 nova_compute[225855]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 15:04:13 compute-1 nova_compute[225855]:       <entry name="serial">c1db561e-0c8b-4cfb-97bb-55f8d4731b87</entry>
Jan 20 15:04:13 compute-1 nova_compute[225855]:       <entry name="uuid">c1db561e-0c8b-4cfb-97bb-55f8d4731b87</entry>
Jan 20 15:04:13 compute-1 nova_compute[225855]:       <entry name="family">Virtual Machine</entry>
Jan 20 15:04:13 compute-1 nova_compute[225855]:     </system>
Jan 20 15:04:13 compute-1 nova_compute[225855]:   </sysinfo>
Jan 20 15:04:13 compute-1 nova_compute[225855]:   <os>
Jan 20 15:04:13 compute-1 nova_compute[225855]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 20 15:04:13 compute-1 nova_compute[225855]:     <boot dev="hd"/>
Jan 20 15:04:13 compute-1 nova_compute[225855]:     <smbios mode="sysinfo"/>
Jan 20 15:04:13 compute-1 nova_compute[225855]:   </os>
Jan 20 15:04:13 compute-1 nova_compute[225855]:   <features>
Jan 20 15:04:13 compute-1 nova_compute[225855]:     <acpi/>
Jan 20 15:04:13 compute-1 nova_compute[225855]:     <apic/>
Jan 20 15:04:13 compute-1 nova_compute[225855]:     <vmcoreinfo/>
Jan 20 15:04:13 compute-1 nova_compute[225855]:   </features>
Jan 20 15:04:13 compute-1 nova_compute[225855]:   <clock offset="utc">
Jan 20 15:04:13 compute-1 nova_compute[225855]:     <timer name="pit" tickpolicy="delay"/>
Jan 20 15:04:13 compute-1 nova_compute[225855]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 20 15:04:13 compute-1 nova_compute[225855]:     <timer name="hpet" present="no"/>
Jan 20 15:04:13 compute-1 nova_compute[225855]:   </clock>
Jan 20 15:04:13 compute-1 nova_compute[225855]:   <cpu mode="custom" match="exact">
Jan 20 15:04:13 compute-1 nova_compute[225855]:     <model>Nehalem</model>
Jan 20 15:04:13 compute-1 nova_compute[225855]:     <topology sockets="1" cores="1" threads="1"/>
Jan 20 15:04:13 compute-1 nova_compute[225855]:   </cpu>
Jan 20 15:04:13 compute-1 nova_compute[225855]:   <devices>
Jan 20 15:04:13 compute-1 nova_compute[225855]:     <disk type="network" device="cdrom">
Jan 20 15:04:13 compute-1 nova_compute[225855]:       <driver type="raw" cache="none"/>
Jan 20 15:04:13 compute-1 nova_compute[225855]:       <source protocol="rbd" name="vms/c1db561e-0c8b-4cfb-97bb-55f8d4731b87_disk.config">
Jan 20 15:04:13 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 15:04:13 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 15:04:13 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 15:04:13 compute-1 nova_compute[225855]:       </source>
Jan 20 15:04:13 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 15:04:13 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 15:04:13 compute-1 nova_compute[225855]:       </auth>
Jan 20 15:04:13 compute-1 nova_compute[225855]:       <target dev="sda" bus="sata"/>
Jan 20 15:04:13 compute-1 nova_compute[225855]:     </disk>
Jan 20 15:04:13 compute-1 nova_compute[225855]:     <disk type="network" device="disk">
Jan 20 15:04:13 compute-1 nova_compute[225855]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 20 15:04:13 compute-1 nova_compute[225855]:       <source protocol="rbd" name="volumes/volume-9d237554-9581-4577-897a-3907d38a0cb3">
Jan 20 15:04:13 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 15:04:13 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 15:04:13 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 15:04:13 compute-1 nova_compute[225855]:       </source>
Jan 20 15:04:13 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 15:04:13 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 15:04:13 compute-1 nova_compute[225855]:       </auth>
Jan 20 15:04:13 compute-1 nova_compute[225855]:       <target dev="vda" bus="virtio"/>
Jan 20 15:04:13 compute-1 nova_compute[225855]:       <serial>9d237554-9581-4577-897a-3907d38a0cb3</serial>
Jan 20 15:04:13 compute-1 nova_compute[225855]:     </disk>
Jan 20 15:04:13 compute-1 nova_compute[225855]:     <interface type="ethernet">
Jan 20 15:04:13 compute-1 nova_compute[225855]:       <mac address="fa:16:3e:2e:ec:9e"/>
Jan 20 15:04:13 compute-1 nova_compute[225855]:       <model type="virtio"/>
Jan 20 15:04:13 compute-1 nova_compute[225855]:       <driver name="vhost" rx_queue_size="512"/>
Jan 20 15:04:13 compute-1 nova_compute[225855]:       <mtu size="1442"/>
Jan 20 15:04:13 compute-1 nova_compute[225855]:       <target dev="tap22663aa0-a7"/>
Jan 20 15:04:13 compute-1 nova_compute[225855]:     </interface>
Jan 20 15:04:13 compute-1 nova_compute[225855]:     <serial type="pty">
Jan 20 15:04:13 compute-1 nova_compute[225855]:       <log file="/var/lib/nova/instances/c1db561e-0c8b-4cfb-97bb-55f8d4731b87/console.log" append="off"/>
Jan 20 15:04:13 compute-1 nova_compute[225855]:     </serial>
Jan 20 15:04:13 compute-1 nova_compute[225855]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 20 15:04:13 compute-1 nova_compute[225855]:     <video>
Jan 20 15:04:13 compute-1 nova_compute[225855]:       <model type="virtio"/>
Jan 20 15:04:13 compute-1 nova_compute[225855]:     </video>
Jan 20 15:04:13 compute-1 nova_compute[225855]:     <input type="tablet" bus="usb"/>
Jan 20 15:04:13 compute-1 nova_compute[225855]:     <rng model="virtio">
Jan 20 15:04:13 compute-1 nova_compute[225855]:       <backend model="random">/dev/urandom</backend>
Jan 20 15:04:13 compute-1 nova_compute[225855]:     </rng>
Jan 20 15:04:13 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root"/>
Jan 20 15:04:13 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:04:13 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:04:13 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:04:13 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:04:13 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:04:13 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:04:13 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:04:13 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:04:13 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:04:13 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:04:13 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:04:13 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:04:13 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:04:13 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:04:13 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:04:13 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:04:13 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:04:13 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:04:13 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:04:13 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:04:13 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:04:13 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:04:13 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:04:13 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:04:13 compute-1 nova_compute[225855]:     <controller type="usb" index="0"/>
Jan 20 15:04:13 compute-1 nova_compute[225855]:     <memballoon model="virtio">
Jan 20 15:04:13 compute-1 nova_compute[225855]:       <stats period="10"/>
Jan 20 15:04:13 compute-1 nova_compute[225855]:     </memballoon>
Jan 20 15:04:13 compute-1 nova_compute[225855]:   </devices>
Jan 20 15:04:13 compute-1 nova_compute[225855]: </domain>
Jan 20 15:04:13 compute-1 nova_compute[225855]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 20 15:04:13 compute-1 nova_compute[225855]: 2026-01-20 15:04:13.821 225859 DEBUG nova.compute.manager [None req-9f6f69f3-98d5-40ce-bed7-1eadbfe8563e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: c1db561e-0c8b-4cfb-97bb-55f8d4731b87] Preparing to wait for external event network-vif-plugged-22663aa0-a7f4-431c-b5a9-4433da2dff09 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 20 15:04:13 compute-1 nova_compute[225855]: 2026-01-20 15:04:13.821 225859 DEBUG oslo_concurrency.lockutils [None req-9f6f69f3-98d5-40ce-bed7-1eadbfe8563e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Acquiring lock "c1db561e-0c8b-4cfb-97bb-55f8d4731b87-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:04:13 compute-1 nova_compute[225855]: 2026-01-20 15:04:13.822 225859 DEBUG oslo_concurrency.lockutils [None req-9f6f69f3-98d5-40ce-bed7-1eadbfe8563e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Lock "c1db561e-0c8b-4cfb-97bb-55f8d4731b87-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:04:13 compute-1 nova_compute[225855]: 2026-01-20 15:04:13.822 225859 DEBUG oslo_concurrency.lockutils [None req-9f6f69f3-98d5-40ce-bed7-1eadbfe8563e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Lock "c1db561e-0c8b-4cfb-97bb-55f8d4731b87-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:04:13 compute-1 nova_compute[225855]: 2026-01-20 15:04:13.822 225859 DEBUG nova.virt.libvirt.vif [None req-9f6f69f3-98d5-40ce-bed7-1eadbfe8563e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T15:03:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-ServerBootFromVolumeStableRescueTest-server-1670904176',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverbootfromvolumestablerescuetest-server-1670904176',id=159,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='63555e5851564db08c6429231d264f2c',ramdisk_id='',reservation_id='r-qovaqapn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerBootFromVolumeStableRescueTest-1871371328',owner_user_name='tempest-ServerBootFromVolumeStableRescueTest-1871371328-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T15:03:47Z,user_data=None,user_id='2446e8399b344b29986c1aaf8bf73adf',uuid=c1db561e-0c8b-4cfb-97bb-55f8d4731b87,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "22663aa0-a7f4-431c-b5a9-4433da2dff09", "address": "fa:16:3e:2e:ec:9e", "network": {"id": "671e28d0-0b9e-41e0-b5e0-db1ccd4717ec", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-884777184-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "63555e5851564db08c6429231d264f2c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap22663aa0-a7", "ovs_interfaceid": "22663aa0-a7f4-431c-b5a9-4433da2dff09", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 20 15:04:13 compute-1 nova_compute[225855]: 2026-01-20 15:04:13.823 225859 DEBUG nova.network.os_vif_util [None req-9f6f69f3-98d5-40ce-bed7-1eadbfe8563e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Converting VIF {"id": "22663aa0-a7f4-431c-b5a9-4433da2dff09", "address": "fa:16:3e:2e:ec:9e", "network": {"id": "671e28d0-0b9e-41e0-b5e0-db1ccd4717ec", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-884777184-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "63555e5851564db08c6429231d264f2c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap22663aa0-a7", "ovs_interfaceid": "22663aa0-a7f4-431c-b5a9-4433da2dff09", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 15:04:13 compute-1 nova_compute[225855]: 2026-01-20 15:04:13.823 225859 DEBUG nova.network.os_vif_util [None req-9f6f69f3-98d5-40ce-bed7-1eadbfe8563e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2e:ec:9e,bridge_name='br-int',has_traffic_filtering=True,id=22663aa0-a7f4-431c-b5a9-4433da2dff09,network=Network(671e28d0-0b9e-41e0-b5e0-db1ccd4717ec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap22663aa0-a7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 15:04:13 compute-1 nova_compute[225855]: 2026-01-20 15:04:13.824 225859 DEBUG os_vif [None req-9f6f69f3-98d5-40ce-bed7-1eadbfe8563e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:2e:ec:9e,bridge_name='br-int',has_traffic_filtering=True,id=22663aa0-a7f4-431c-b5a9-4433da2dff09,network=Network(671e28d0-0b9e-41e0-b5e0-db1ccd4717ec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap22663aa0-a7') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 20 15:04:13 compute-1 nova_compute[225855]: 2026-01-20 15:04:13.824 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:04:13 compute-1 nova_compute[225855]: 2026-01-20 15:04:13.825 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:04:13 compute-1 nova_compute[225855]: 2026-01-20 15:04:13.825 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 15:04:13 compute-1 nova_compute[225855]: 2026-01-20 15:04:13.827 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:04:13 compute-1 nova_compute[225855]: 2026-01-20 15:04:13.827 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap22663aa0-a7, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:04:13 compute-1 nova_compute[225855]: 2026-01-20 15:04:13.828 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap22663aa0-a7, col_values=(('external_ids', {'iface-id': '22663aa0-a7f4-431c-b5a9-4433da2dff09', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:2e:ec:9e', 'vm-uuid': 'c1db561e-0c8b-4cfb-97bb-55f8d4731b87'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:04:13 compute-1 nova_compute[225855]: 2026-01-20 15:04:13.829 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:04:13 compute-1 NetworkManager[49104]: <info>  [1768921453.8298] manager: (tap22663aa0-a7): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/281)
Jan 20 15:04:13 compute-1 nova_compute[225855]: 2026-01-20 15:04:13.831 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 15:04:13 compute-1 nova_compute[225855]: 2026-01-20 15:04:13.835 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:04:13 compute-1 nova_compute[225855]: 2026-01-20 15:04:13.836 225859 INFO os_vif [None req-9f6f69f3-98d5-40ce-bed7-1eadbfe8563e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:2e:ec:9e,bridge_name='br-int',has_traffic_filtering=True,id=22663aa0-a7f4-431c-b5a9-4433da2dff09,network=Network(671e28d0-0b9e-41e0-b5e0-db1ccd4717ec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap22663aa0-a7')
Jan 20 15:04:13 compute-1 nova_compute[225855]: 2026-01-20 15:04:13.946 225859 DEBUG nova.virt.libvirt.driver [None req-9f6f69f3-98d5-40ce-bed7-1eadbfe8563e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 15:04:13 compute-1 nova_compute[225855]: 2026-01-20 15:04:13.947 225859 DEBUG nova.virt.libvirt.driver [None req-9f6f69f3-98d5-40ce-bed7-1eadbfe8563e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 15:04:13 compute-1 nova_compute[225855]: 2026-01-20 15:04:13.947 225859 DEBUG nova.virt.libvirt.driver [None req-9f6f69f3-98d5-40ce-bed7-1eadbfe8563e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] No VIF found with MAC fa:16:3e:2e:ec:9e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 20 15:04:13 compute-1 nova_compute[225855]: 2026-01-20 15:04:13.947 225859 INFO nova.virt.libvirt.driver [None req-9f6f69f3-98d5-40ce-bed7-1eadbfe8563e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: c1db561e-0c8b-4cfb-97bb-55f8d4731b87] Using config drive
Jan 20 15:04:13 compute-1 nova_compute[225855]: 2026-01-20 15:04:13.970 225859 DEBUG nova.storage.rbd_utils [None req-9f6f69f3-98d5-40ce-bed7-1eadbfe8563e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] rbd image c1db561e-0c8b-4cfb-97bb-55f8d4731b87_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:04:14 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:04:14 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:04:14 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:04:14.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:04:14 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e350 e350: 3 total, 3 up, 3 in
Jan 20 15:04:14 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/2818727511' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:04:14 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/2818727511' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:04:14 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/950145876' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:04:14 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:04:14 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:04:14 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:04:14.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:04:15 compute-1 nova_compute[225855]: 2026-01-20 15:04:15.008 225859 INFO nova.virt.libvirt.driver [None req-9f6f69f3-98d5-40ce-bed7-1eadbfe8563e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: c1db561e-0c8b-4cfb-97bb-55f8d4731b87] Creating config drive at /var/lib/nova/instances/c1db561e-0c8b-4cfb-97bb-55f8d4731b87/disk.config
Jan 20 15:04:15 compute-1 nova_compute[225855]: 2026-01-20 15:04:15.013 225859 DEBUG oslo_concurrency.processutils [None req-9f6f69f3-98d5-40ce-bed7-1eadbfe8563e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c1db561e-0c8b-4cfb-97bb-55f8d4731b87/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpvy95zk4h execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:04:15 compute-1 nova_compute[225855]: 2026-01-20 15:04:15.143 225859 DEBUG oslo_concurrency.processutils [None req-9f6f69f3-98d5-40ce-bed7-1eadbfe8563e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c1db561e-0c8b-4cfb-97bb-55f8d4731b87/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpvy95zk4h" returned: 0 in 0.131s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:04:15 compute-1 nova_compute[225855]: 2026-01-20 15:04:15.169 225859 DEBUG nova.storage.rbd_utils [None req-9f6f69f3-98d5-40ce-bed7-1eadbfe8563e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] rbd image c1db561e-0c8b-4cfb-97bb-55f8d4731b87_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:04:15 compute-1 nova_compute[225855]: 2026-01-20 15:04:15.173 225859 DEBUG oslo_concurrency.processutils [None req-9f6f69f3-98d5-40ce-bed7-1eadbfe8563e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c1db561e-0c8b-4cfb-97bb-55f8d4731b87/disk.config c1db561e-0c8b-4cfb-97bb-55f8d4731b87_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:04:15 compute-1 ceph-mon[81775]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #112. Immutable memtables: 0.
Jan 20 15:04:15 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:04:15.242519) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 20 15:04:15 compute-1 ceph-mon[81775]: rocksdb: [db/flush_job.cc:856] [default] [JOB 69] Flushing memtable with next log file: 112
Jan 20 15:04:15 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768921455242542, "job": 69, "event": "flush_started", "num_memtables": 1, "num_entries": 430, "num_deletes": 250, "total_data_size": 493335, "memory_usage": 501616, "flush_reason": "Manual Compaction"}
Jan 20 15:04:15 compute-1 ceph-mon[81775]: rocksdb: [db/flush_job.cc:885] [default] [JOB 69] Level-0 flush table #113: started
Jan 20 15:04:15 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768921455245688, "cf_name": "default", "job": 69, "event": "table_file_creation", "file_number": 113, "file_size": 278919, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 57714, "largest_seqno": 58139, "table_properties": {"data_size": 276551, "index_size": 468, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 837, "raw_key_size": 6502, "raw_average_key_size": 20, "raw_value_size": 271740, "raw_average_value_size": 854, "num_data_blocks": 21, "num_entries": 318, "num_filter_entries": 318, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768921437, "oldest_key_time": 1768921437, "file_creation_time": 1768921455, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1539d774-8a6f-4e48-b253-137c44586344", "db_session_id": "LFF7G2OZDOU7TKQ8MKAH", "orig_file_number": 113, "seqno_to_time_mapping": "N/A"}}
Jan 20 15:04:15 compute-1 ceph-mon[81775]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 69] Flush lasted 3204 microseconds, and 1151 cpu microseconds.
Jan 20 15:04:15 compute-1 ceph-mon[81775]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 15:04:15 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:04:15.245720) [db/flush_job.cc:967] [default] [JOB 69] Level-0 flush table #113: 278919 bytes OK
Jan 20 15:04:15 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:04:15.245738) [db/memtable_list.cc:519] [default] Level-0 commit table #113 started
Jan 20 15:04:15 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:04:15.248143) [db/memtable_list.cc:722] [default] Level-0 commit table #113: memtable #1 done
Jan 20 15:04:15 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:04:15.248161) EVENT_LOG_v1 {"time_micros": 1768921455248155, "job": 69, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 20 15:04:15 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:04:15.248180) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 20 15:04:15 compute-1 ceph-mon[81775]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 69] Try to delete WAL files size 490627, prev total WAL file size 490627, number of live WAL files 2.
Jan 20 15:04:15 compute-1 ceph-mon[81775]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000109.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 15:04:15 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:04:15.248686) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031373535' seq:72057594037927935, type:22 .. '6D6772737461740032303036' seq:0, type:0; will stop at (end)
Jan 20 15:04:15 compute-1 ceph-mon[81775]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 70] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 20 15:04:15 compute-1 ceph-mon[81775]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 69 Base level 0, inputs: [113(272KB)], [111(13MB)]
Jan 20 15:04:15 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768921455248788, "job": 70, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [113], "files_L6": [111], "score": -1, "input_data_size": 14104046, "oldest_snapshot_seqno": -1}
Jan 20 15:04:15 compute-1 ceph-mon[81775]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 70] Generated table #114: 8340 keys, 10313447 bytes, temperature: kUnknown
Jan 20 15:04:15 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768921455369998, "cf_name": "default", "job": 70, "event": "table_file_creation", "file_number": 114, "file_size": 10313447, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10260102, "index_size": 31409, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 20869, "raw_key_size": 216205, "raw_average_key_size": 25, "raw_value_size": 10113916, "raw_average_value_size": 1212, "num_data_blocks": 1224, "num_entries": 8340, "num_filter_entries": 8340, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768917474, "oldest_key_time": 0, "file_creation_time": 1768921455, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1539d774-8a6f-4e48-b253-137c44586344", "db_session_id": "LFF7G2OZDOU7TKQ8MKAH", "orig_file_number": 114, "seqno_to_time_mapping": "N/A"}}
Jan 20 15:04:15 compute-1 ceph-mon[81775]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 15:04:15 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:04:15.370250) [db/compaction/compaction_job.cc:1663] [default] [JOB 70] Compacted 1@0 + 1@6 files to L6 => 10313447 bytes
Jan 20 15:04:15 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:04:15.372023) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 116.3 rd, 85.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.3, 13.2 +0.0 blob) out(9.8 +0.0 blob), read-write-amplify(87.5) write-amplify(37.0) OK, records in: 8849, records dropped: 509 output_compression: NoCompression
Jan 20 15:04:15 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:04:15.372040) EVENT_LOG_v1 {"time_micros": 1768921455372032, "job": 70, "event": "compaction_finished", "compaction_time_micros": 121263, "compaction_time_cpu_micros": 29564, "output_level": 6, "num_output_files": 1, "total_output_size": 10313447, "num_input_records": 8849, "num_output_records": 8340, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 20 15:04:15 compute-1 ceph-mon[81775]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000113.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 15:04:15 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768921455372194, "job": 70, "event": "table_file_deletion", "file_number": 113}
Jan 20 15:04:15 compute-1 ceph-mon[81775]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000111.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 15:04:15 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768921455374653, "job": 70, "event": "table_file_deletion", "file_number": 111}
Jan 20 15:04:15 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:04:15.248578) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:04:15 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:04:15.374718) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:04:15 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:04:15.374723) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:04:15 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:04:15.374724) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:04:15 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:04:15.374726) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:04:15 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:04:15.374727) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:04:15 compute-1 ceph-mon[81775]: pgmap v2393: 321 pgs: 321 active+clean; 500 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.9 MiB/s wr, 119 op/s
Jan 20 15:04:15 compute-1 ceph-mon[81775]: osdmap e350: 3 total, 3 up, 3 in
Jan 20 15:04:15 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e351 e351: 3 total, 3 up, 3 in
Jan 20 15:04:15 compute-1 nova_compute[225855]: 2026-01-20 15:04:15.589 225859 DEBUG oslo_concurrency.processutils [None req-9f6f69f3-98d5-40ce-bed7-1eadbfe8563e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c1db561e-0c8b-4cfb-97bb-55f8d4731b87/disk.config c1db561e-0c8b-4cfb-97bb-55f8d4731b87_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.416s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:04:15 compute-1 nova_compute[225855]: 2026-01-20 15:04:15.590 225859 INFO nova.virt.libvirt.driver [None req-9f6f69f3-98d5-40ce-bed7-1eadbfe8563e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: c1db561e-0c8b-4cfb-97bb-55f8d4731b87] Deleting local config drive /var/lib/nova/instances/c1db561e-0c8b-4cfb-97bb-55f8d4731b87/disk.config because it was imported into RBD.
Jan 20 15:04:15 compute-1 kernel: tap22663aa0-a7: entered promiscuous mode
Jan 20 15:04:15 compute-1 NetworkManager[49104]: <info>  [1768921455.6482] manager: (tap22663aa0-a7): new Tun device (/org/freedesktop/NetworkManager/Devices/282)
Jan 20 15:04:15 compute-1 ovn_controller[130490]: 2026-01-20T15:04:15Z|00658|binding|INFO|Claiming lport 22663aa0-a7f4-431c-b5a9-4433da2dff09 for this chassis.
Jan 20 15:04:15 compute-1 ovn_controller[130490]: 2026-01-20T15:04:15Z|00659|binding|INFO|22663aa0-a7f4-431c-b5a9-4433da2dff09: Claiming fa:16:3e:2e:ec:9e 10.100.0.10
Jan 20 15:04:15 compute-1 nova_compute[225855]: 2026-01-20 15:04:15.651 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:04:15 compute-1 ovn_controller[130490]: 2026-01-20T15:04:15Z|00660|binding|INFO|Setting lport 22663aa0-a7f4-431c-b5a9-4433da2dff09 ovn-installed in OVS
Jan 20 15:04:15 compute-1 nova_compute[225855]: 2026-01-20 15:04:15.670 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:04:15 compute-1 nova_compute[225855]: 2026-01-20 15:04:15.674 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:04:15 compute-1 ovn_controller[130490]: 2026-01-20T15:04:15Z|00661|binding|INFO|Setting lport 22663aa0-a7f4-431c-b5a9-4433da2dff09 up in Southbound
Jan 20 15:04:15 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:04:15.676 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2e:ec:9e 10.100.0.10'], port_security=['fa:16:3e:2e:ec:9e 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'c1db561e-0c8b-4cfb-97bb-55f8d4731b87', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-671e28d0-0b9e-41e0-b5e0-db1ccd4717ec', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '63555e5851564db08c6429231d264f2c', 'neutron:revision_number': '2', 'neutron:security_group_ids': '7e54c470-6a6f-454e-ae01-9d2d59b2c74d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=248fa32c-94be-4e1b-b4d3-cb9fac0ec155, chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], logical_port=22663aa0-a7f4-431c-b5a9-4433da2dff09) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:04:15 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:04:15.677 140354 INFO neutron.agent.ovn.metadata.agent [-] Port 22663aa0-a7f4-431c-b5a9-4433da2dff09 in datapath 671e28d0-0b9e-41e0-b5e0-db1ccd4717ec bound to our chassis
Jan 20 15:04:15 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:04:15.679 140354 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 671e28d0-0b9e-41e0-b5e0-db1ccd4717ec
Jan 20 15:04:15 compute-1 systemd-udevd[289182]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 15:04:15 compute-1 systemd-machined[194361]: New machine qemu-77-instance-0000009f.
Jan 20 15:04:15 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:04:15.693 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[14a84b44-a527-4225-8b4d-ed1026d75d95]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:04:15 compute-1 NetworkManager[49104]: <info>  [1768921455.7026] device (tap22663aa0-a7): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 20 15:04:15 compute-1 NetworkManager[49104]: <info>  [1768921455.7041] device (tap22663aa0-a7): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 20 15:04:15 compute-1 systemd[1]: Started Virtual Machine qemu-77-instance-0000009f.
Jan 20 15:04:15 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:04:15.725 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[46eec561-ca79-4607-91ce-1588f5cef2fc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:04:15 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:04:15.727 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[697850e5-b75b-42ad-92e3-e3ad7fad0b04]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:04:15 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:04:15.754 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[53640314-522d-42e7-96eb-79ce183f01ec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:04:15 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:04:15.770 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[d851606a-5a46-426a-a5c1-3d9e6450a6c1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap671e28d0-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2b:4e:69'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 184], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 642759, 'reachable_time': 32213, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 289196, 'error': None, 'target': 'ovnmeta-671e28d0-0b9e-41e0-b5e0-db1ccd4717ec', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:04:15 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:04:15.783 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[df115565-6db8-44d9-9369-71b35a7dc5ed]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap671e28d0-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 642770, 'tstamp': 642770}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 289197, 'error': None, 'target': 'ovnmeta-671e28d0-0b9e-41e0-b5e0-db1ccd4717ec', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap671e28d0-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 642773, 'tstamp': 642773}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 289197, 'error': None, 'target': 'ovnmeta-671e28d0-0b9e-41e0-b5e0-db1ccd4717ec', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:04:15 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:04:15.785 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap671e28d0-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:04:15 compute-1 nova_compute[225855]: 2026-01-20 15:04:15.786 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:04:15 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:04:15.789 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap671e28d0-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:04:15 compute-1 nova_compute[225855]: 2026-01-20 15:04:15.788 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:04:15 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:04:15.789 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 15:04:15 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:04:15.789 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap671e28d0-00, col_values=(('external_ids', {'iface-id': 'a8628d9e-196f-4b84-89fd-d3a41792b8a0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:04:15 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:04:15.790 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 15:04:16 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:04:16 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:04:16 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:04:16.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:04:16 compute-1 nova_compute[225855]: 2026-01-20 15:04:16.342 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768921456.3416662, c1db561e-0c8b-4cfb-97bb-55f8d4731b87 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:04:16 compute-1 nova_compute[225855]: 2026-01-20 15:04:16.342 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: c1db561e-0c8b-4cfb-97bb-55f8d4731b87] VM Started (Lifecycle Event)
Jan 20 15:04:16 compute-1 sudo[289241]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:04:16 compute-1 sudo[289241]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:04:16 compute-1 sudo[289241]: pam_unix(sudo:session): session closed for user root
Jan 20 15:04:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:04:16.424 140354 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:04:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:04:16.425 140354 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:04:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:04:16.425 140354 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:04:16 compute-1 nova_compute[225855]: 2026-01-20 15:04:16.436 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: c1db561e-0c8b-4cfb-97bb-55f8d4731b87] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:04:16 compute-1 nova_compute[225855]: 2026-01-20 15:04:16.440 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768921456.3420632, c1db561e-0c8b-4cfb-97bb-55f8d4731b87 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:04:16 compute-1 nova_compute[225855]: 2026-01-20 15:04:16.440 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: c1db561e-0c8b-4cfb-97bb-55f8d4731b87] VM Paused (Lifecycle Event)
Jan 20 15:04:16 compute-1 sudo[289266]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:04:16 compute-1 sudo[289266]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:04:16 compute-1 sudo[289266]: pam_unix(sudo:session): session closed for user root
Jan 20 15:04:16 compute-1 nova_compute[225855]: 2026-01-20 15:04:16.528 225859 DEBUG nova.compute.manager [req-5cacb33e-42b8-4f8d-be61-9c70f4181505 req-9d7d69e4-b163-4f73-b02d-c2c5cbce688e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: c1db561e-0c8b-4cfb-97bb-55f8d4731b87] Received event network-vif-plugged-22663aa0-a7f4-431c-b5a9-4433da2dff09 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:04:16 compute-1 nova_compute[225855]: 2026-01-20 15:04:16.529 225859 DEBUG oslo_concurrency.lockutils [req-5cacb33e-42b8-4f8d-be61-9c70f4181505 req-9d7d69e4-b163-4f73-b02d-c2c5cbce688e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "c1db561e-0c8b-4cfb-97bb-55f8d4731b87-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:04:16 compute-1 nova_compute[225855]: 2026-01-20 15:04:16.529 225859 DEBUG oslo_concurrency.lockutils [req-5cacb33e-42b8-4f8d-be61-9c70f4181505 req-9d7d69e4-b163-4f73-b02d-c2c5cbce688e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "c1db561e-0c8b-4cfb-97bb-55f8d4731b87-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:04:16 compute-1 nova_compute[225855]: 2026-01-20 15:04:16.529 225859 DEBUG oslo_concurrency.lockutils [req-5cacb33e-42b8-4f8d-be61-9c70f4181505 req-9d7d69e4-b163-4f73-b02d-c2c5cbce688e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "c1db561e-0c8b-4cfb-97bb-55f8d4731b87-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:04:16 compute-1 nova_compute[225855]: 2026-01-20 15:04:16.530 225859 DEBUG nova.compute.manager [req-5cacb33e-42b8-4f8d-be61-9c70f4181505 req-9d7d69e4-b163-4f73-b02d-c2c5cbce688e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: c1db561e-0c8b-4cfb-97bb-55f8d4731b87] Processing event network-vif-plugged-22663aa0-a7f4-431c-b5a9-4433da2dff09 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 20 15:04:16 compute-1 nova_compute[225855]: 2026-01-20 15:04:16.531 225859 DEBUG nova.compute.manager [None req-9f6f69f3-98d5-40ce-bed7-1eadbfe8563e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: c1db561e-0c8b-4cfb-97bb-55f8d4731b87] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 20 15:04:16 compute-1 nova_compute[225855]: 2026-01-20 15:04:16.532 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: c1db561e-0c8b-4cfb-97bb-55f8d4731b87] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:04:16 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:04:16 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:04:16 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:04:16.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:04:16 compute-1 nova_compute[225855]: 2026-01-20 15:04:16.537 225859 DEBUG nova.virt.libvirt.driver [None req-9f6f69f3-98d5-40ce-bed7-1eadbfe8563e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: c1db561e-0c8b-4cfb-97bb-55f8d4731b87] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 20 15:04:16 compute-1 nova_compute[225855]: 2026-01-20 15:04:16.540 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768921456.53564, c1db561e-0c8b-4cfb-97bb-55f8d4731b87 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:04:16 compute-1 nova_compute[225855]: 2026-01-20 15:04:16.541 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: c1db561e-0c8b-4cfb-97bb-55f8d4731b87] VM Resumed (Lifecycle Event)
Jan 20 15:04:16 compute-1 nova_compute[225855]: 2026-01-20 15:04:16.545 225859 INFO nova.virt.libvirt.driver [-] [instance: c1db561e-0c8b-4cfb-97bb-55f8d4731b87] Instance spawned successfully.
Jan 20 15:04:16 compute-1 nova_compute[225855]: 2026-01-20 15:04:16.546 225859 DEBUG nova.virt.libvirt.driver [None req-9f6f69f3-98d5-40ce-bed7-1eadbfe8563e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: c1db561e-0c8b-4cfb-97bb-55f8d4731b87] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 20 15:04:16 compute-1 nova_compute[225855]: 2026-01-20 15:04:16.591 225859 DEBUG nova.virt.libvirt.driver [None req-9f6f69f3-98d5-40ce-bed7-1eadbfe8563e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: c1db561e-0c8b-4cfb-97bb-55f8d4731b87] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:04:16 compute-1 nova_compute[225855]: 2026-01-20 15:04:16.591 225859 DEBUG nova.virt.libvirt.driver [None req-9f6f69f3-98d5-40ce-bed7-1eadbfe8563e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: c1db561e-0c8b-4cfb-97bb-55f8d4731b87] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:04:16 compute-1 nova_compute[225855]: 2026-01-20 15:04:16.592 225859 DEBUG nova.virt.libvirt.driver [None req-9f6f69f3-98d5-40ce-bed7-1eadbfe8563e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: c1db561e-0c8b-4cfb-97bb-55f8d4731b87] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:04:16 compute-1 nova_compute[225855]: 2026-01-20 15:04:16.593 225859 DEBUG nova.virt.libvirt.driver [None req-9f6f69f3-98d5-40ce-bed7-1eadbfe8563e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: c1db561e-0c8b-4cfb-97bb-55f8d4731b87] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:04:16 compute-1 nova_compute[225855]: 2026-01-20 15:04:16.593 225859 DEBUG nova.virt.libvirt.driver [None req-9f6f69f3-98d5-40ce-bed7-1eadbfe8563e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: c1db561e-0c8b-4cfb-97bb-55f8d4731b87] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:04:16 compute-1 nova_compute[225855]: 2026-01-20 15:04:16.593 225859 DEBUG nova.virt.libvirt.driver [None req-9f6f69f3-98d5-40ce-bed7-1eadbfe8563e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: c1db561e-0c8b-4cfb-97bb-55f8d4731b87] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:04:16 compute-1 nova_compute[225855]: 2026-01-20 15:04:16.599 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: c1db561e-0c8b-4cfb-97bb-55f8d4731b87] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:04:16 compute-1 nova_compute[225855]: 2026-01-20 15:04:16.602 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: c1db561e-0c8b-4cfb-97bb-55f8d4731b87] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 15:04:16 compute-1 nova_compute[225855]: 2026-01-20 15:04:16.642 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: c1db561e-0c8b-4cfb-97bb-55f8d4731b87] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 15:04:16 compute-1 nova_compute[225855]: 2026-01-20 15:04:16.671 225859 INFO nova.compute.manager [None req-9f6f69f3-98d5-40ce-bed7-1eadbfe8563e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: c1db561e-0c8b-4cfb-97bb-55f8d4731b87] Took 3.47 seconds to spawn the instance on the hypervisor.
Jan 20 15:04:16 compute-1 nova_compute[225855]: 2026-01-20 15:04:16.671 225859 DEBUG nova.compute.manager [None req-9f6f69f3-98d5-40ce-bed7-1eadbfe8563e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: c1db561e-0c8b-4cfb-97bb-55f8d4731b87] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:04:16 compute-1 nova_compute[225855]: 2026-01-20 15:04:16.729 225859 INFO nova.compute.manager [None req-9f6f69f3-98d5-40ce-bed7-1eadbfe8563e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: c1db561e-0c8b-4cfb-97bb-55f8d4731b87] Took 30.64 seconds to build instance.
Jan 20 15:04:16 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e352 e352: 3 total, 3 up, 3 in
Jan 20 15:04:16 compute-1 ceph-mon[81775]: osdmap e351: 3 total, 3 up, 3 in
Jan 20 15:04:16 compute-1 ceph-mon[81775]: pgmap v2396: 321 pgs: 2 active+clean+snaptrim, 4 active+clean+snaptrim_wait, 315 active+clean; 536 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.3 MiB/s rd, 5.4 MiB/s wr, 113 op/s
Jan 20 15:04:17 compute-1 nova_compute[225855]: 2026-01-20 15:04:17.199 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:04:17 compute-1 nova_compute[225855]: 2026-01-20 15:04:17.458 225859 DEBUG oslo_concurrency.lockutils [None req-9f6f69f3-98d5-40ce-bed7-1eadbfe8563e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Lock "c1db561e-0c8b-4cfb-97bb-55f8d4731b87" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 31.540s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:04:17 compute-1 ceph-mon[81775]: osdmap e352: 3 total, 3 up, 3 in
Jan 20 15:04:18 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e352 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:04:18 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:04:18 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:04:18 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:04:18.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:04:18 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:04:18 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:04:18 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:04:18.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:04:18 compute-1 nova_compute[225855]: 2026-01-20 15:04:18.758 225859 DEBUG nova.compute.manager [req-593c411e-5cd2-4029-84df-63a51fc14b21 req-2511cc59-080a-437c-b22c-20af8e5eac96 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: c1db561e-0c8b-4cfb-97bb-55f8d4731b87] Received event network-vif-plugged-22663aa0-a7f4-431c-b5a9-4433da2dff09 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:04:18 compute-1 nova_compute[225855]: 2026-01-20 15:04:18.759 225859 DEBUG oslo_concurrency.lockutils [req-593c411e-5cd2-4029-84df-63a51fc14b21 req-2511cc59-080a-437c-b22c-20af8e5eac96 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "c1db561e-0c8b-4cfb-97bb-55f8d4731b87-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:04:18 compute-1 nova_compute[225855]: 2026-01-20 15:04:18.759 225859 DEBUG oslo_concurrency.lockutils [req-593c411e-5cd2-4029-84df-63a51fc14b21 req-2511cc59-080a-437c-b22c-20af8e5eac96 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "c1db561e-0c8b-4cfb-97bb-55f8d4731b87-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:04:18 compute-1 nova_compute[225855]: 2026-01-20 15:04:18.759 225859 DEBUG oslo_concurrency.lockutils [req-593c411e-5cd2-4029-84df-63a51fc14b21 req-2511cc59-080a-437c-b22c-20af8e5eac96 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "c1db561e-0c8b-4cfb-97bb-55f8d4731b87-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:04:18 compute-1 nova_compute[225855]: 2026-01-20 15:04:18.760 225859 DEBUG nova.compute.manager [req-593c411e-5cd2-4029-84df-63a51fc14b21 req-2511cc59-080a-437c-b22c-20af8e5eac96 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: c1db561e-0c8b-4cfb-97bb-55f8d4731b87] No waiting events found dispatching network-vif-plugged-22663aa0-a7f4-431c-b5a9-4433da2dff09 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:04:18 compute-1 nova_compute[225855]: 2026-01-20 15:04:18.760 225859 WARNING nova.compute.manager [req-593c411e-5cd2-4029-84df-63a51fc14b21 req-2511cc59-080a-437c-b22c-20af8e5eac96 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: c1db561e-0c8b-4cfb-97bb-55f8d4731b87] Received unexpected event network-vif-plugged-22663aa0-a7f4-431c-b5a9-4433da2dff09 for instance with vm_state active and task_state None.
Jan 20 15:04:18 compute-1 ceph-mon[81775]: pgmap v2398: 321 pgs: 2 active+clean+snaptrim, 4 active+clean+snaptrim_wait, 315 active+clean; 536 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 5.0 MiB/s rd, 3.6 MiB/s wr, 67 op/s
Jan 20 15:04:18 compute-1 nova_compute[225855]: 2026-01-20 15:04:18.830 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:04:20 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:04:20 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:04:20 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:04:20.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:04:20 compute-1 nova_compute[225855]: 2026-01-20 15:04:20.242 225859 INFO nova.compute.manager [None req-b0daa2a6-7c5c-4d0f-9607-8a8fbe9e85a0 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: c1db561e-0c8b-4cfb-97bb-55f8d4731b87] Rescuing
Jan 20 15:04:20 compute-1 nova_compute[225855]: 2026-01-20 15:04:20.242 225859 DEBUG oslo_concurrency.lockutils [None req-b0daa2a6-7c5c-4d0f-9607-8a8fbe9e85a0 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Acquiring lock "refresh_cache-c1db561e-0c8b-4cfb-97bb-55f8d4731b87" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:04:20 compute-1 nova_compute[225855]: 2026-01-20 15:04:20.243 225859 DEBUG oslo_concurrency.lockutils [None req-b0daa2a6-7c5c-4d0f-9607-8a8fbe9e85a0 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Acquired lock "refresh_cache-c1db561e-0c8b-4cfb-97bb-55f8d4731b87" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:04:20 compute-1 nova_compute[225855]: 2026-01-20 15:04:20.243 225859 DEBUG nova.network.neutron [None req-b0daa2a6-7c5c-4d0f-9607-8a8fbe9e85a0 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: c1db561e-0c8b-4cfb-97bb-55f8d4731b87] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 15:04:20 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:04:20 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:04:20 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:04:20.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:04:21 compute-1 ceph-mon[81775]: pgmap v2399: 321 pgs: 2 active+clean+snaptrim, 4 active+clean+snaptrim_wait, 315 active+clean; 579 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 11 MiB/s rd, 7.8 MiB/s wr, 317 op/s
Jan 20 15:04:22 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:04:22 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:04:22 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:04:22.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:04:22 compute-1 nova_compute[225855]: 2026-01-20 15:04:22.200 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:04:22 compute-1 ceph-mon[81775]: pgmap v2400: 321 pgs: 321 active+clean; 579 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 9.8 MiB/s rd, 6.0 MiB/s wr, 304 op/s
Jan 20 15:04:22 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:04:22 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:04:22 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:04:22.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:04:22 compute-1 nova_compute[225855]: 2026-01-20 15:04:22.917 225859 DEBUG nova.network.neutron [None req-b0daa2a6-7c5c-4d0f-9607-8a8fbe9e85a0 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: c1db561e-0c8b-4cfb-97bb-55f8d4731b87] Updating instance_info_cache with network_info: [{"id": "22663aa0-a7f4-431c-b5a9-4433da2dff09", "address": "fa:16:3e:2e:ec:9e", "network": {"id": "671e28d0-0b9e-41e0-b5e0-db1ccd4717ec", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-884777184-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "63555e5851564db08c6429231d264f2c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap22663aa0-a7", "ovs_interfaceid": "22663aa0-a7f4-431c-b5a9-4433da2dff09", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:04:22 compute-1 nova_compute[225855]: 2026-01-20 15:04:22.972 225859 DEBUG oslo_concurrency.lockutils [None req-b0daa2a6-7c5c-4d0f-9607-8a8fbe9e85a0 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Releasing lock "refresh_cache-c1db561e-0c8b-4cfb-97bb-55f8d4731b87" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:04:23 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e352 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:04:23 compute-1 nova_compute[225855]: 2026-01-20 15:04:23.641 225859 DEBUG nova.virt.libvirt.driver [None req-b0daa2a6-7c5c-4d0f-9607-8a8fbe9e85a0 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: c1db561e-0c8b-4cfb-97bb-55f8d4731b87] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Jan 20 15:04:23 compute-1 nova_compute[225855]: 2026-01-20 15:04:23.832 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:04:24 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:04:24 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:04:24 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:04:24.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:04:24 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:04:24 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:04:24 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:04:24.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:04:25 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e353 e353: 3 total, 3 up, 3 in
Jan 20 15:04:25 compute-1 ceph-mon[81775]: pgmap v2401: 321 pgs: 321 active+clean; 558 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 6.5 MiB/s rd, 4.8 MiB/s wr, 255 op/s
Jan 20 15:04:26 compute-1 podman[289297]: 2026-01-20 15:04:26.037882476 +0000 UTC m=+0.083066685 container health_status 72be95c12e041eb2cd1a16ad70dfa1391d9f03a5f3180a614ead9459dcfcd46f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 20 15:04:26 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:04:26 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:04:26 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:04:26.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:04:26 compute-1 ceph-mon[81775]: osdmap e353: 3 total, 3 up, 3 in
Jan 20 15:04:26 compute-1 ceph-mon[81775]: pgmap v2403: 321 pgs: 321 active+clean; 498 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.9 MiB/s rd, 2.7 MiB/s wr, 252 op/s
Jan 20 15:04:26 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:04:26 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:04:26 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:04:26.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:04:27 compute-1 nova_compute[225855]: 2026-01-20 15:04:27.202 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:04:27 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/3419700180' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:04:27 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/962284744' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:04:28 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e353 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:04:28 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:04:28 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:04:28 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:04:28.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:04:28 compute-1 ceph-mon[81775]: pgmap v2404: 321 pgs: 321 active+clean; 498 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 2.6 MiB/s wr, 239 op/s
Jan 20 15:04:28 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:04:28 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:04:28 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:04:28.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:04:28 compute-1 nova_compute[225855]: 2026-01-20 15:04:28.835 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:04:30 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:04:30 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:04:30 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:04:30.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:04:30 compute-1 ovn_controller[130490]: 2026-01-20T15:04:30Z|00076|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:2e:ec:9e 10.100.0.10
Jan 20 15:04:30 compute-1 ovn_controller[130490]: 2026-01-20T15:04:30Z|00077|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:2e:ec:9e 10.100.0.10
Jan 20 15:04:30 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:04:30 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:04:30 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:04:30.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:04:31 compute-1 ceph-mon[81775]: pgmap v2405: 321 pgs: 321 active+clean; 448 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 683 KiB/s wr, 148 op/s
Jan 20 15:04:31 compute-1 sudo[289326]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:04:31 compute-1 sudo[289326]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:04:31 compute-1 sudo[289326]: pam_unix(sudo:session): session closed for user root
Jan 20 15:04:31 compute-1 sudo[289351]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:04:31 compute-1 sudo[289351]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:04:31 compute-1 sudo[289351]: pam_unix(sudo:session): session closed for user root
Jan 20 15:04:31 compute-1 sudo[289376]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:04:31 compute-1 sudo[289376]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:04:31 compute-1 sudo[289376]: pam_unix(sudo:session): session closed for user root
Jan 20 15:04:31 compute-1 sudo[289401]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 20 15:04:31 compute-1 sudo[289401]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:04:32 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:04:32 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:04:32 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:04:32.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:04:32 compute-1 sudo[289401]: pam_unix(sudo:session): session closed for user root
Jan 20 15:04:32 compute-1 nova_compute[225855]: 2026-01-20 15:04:32.206 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:04:32 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:04:32 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 15:04:32 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:04:32 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 15:04:32 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 15:04:32 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:04:32 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:04:32 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:04:32 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:04:32.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:04:33 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e353 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:04:33 compute-1 ceph-mon[81775]: pgmap v2406: 321 pgs: 321 active+clean; 447 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 420 KiB/s rd, 2.5 MiB/s wr, 140 op/s
Jan 20 15:04:33 compute-1 nova_compute[225855]: 2026-01-20 15:04:33.690 225859 DEBUG nova.virt.libvirt.driver [None req-b0daa2a6-7c5c-4d0f-9607-8a8fbe9e85a0 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: c1db561e-0c8b-4cfb-97bb-55f8d4731b87] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Jan 20 15:04:33 compute-1 nova_compute[225855]: 2026-01-20 15:04:33.838 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:04:34 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:04:34 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:04:34 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:04:34.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:04:34 compute-1 ceph-mon[81775]: pgmap v2407: 321 pgs: 321 active+clean; 451 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 435 KiB/s rd, 2.6 MiB/s wr, 141 op/s
Jan 20 15:04:34 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:04:34 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:04:34 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:04:34.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:04:35 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/1897084' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:04:36 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:04:36 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:04:36 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:04:36.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:04:36 compute-1 kernel: tap22663aa0-a7 (unregistering): left promiscuous mode
Jan 20 15:04:36 compute-1 NetworkManager[49104]: <info>  [1768921476.2500] device (tap22663aa0-a7): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 20 15:04:36 compute-1 ovn_controller[130490]: 2026-01-20T15:04:36Z|00662|binding|INFO|Releasing lport 22663aa0-a7f4-431c-b5a9-4433da2dff09 from this chassis (sb_readonly=0)
Jan 20 15:04:36 compute-1 ovn_controller[130490]: 2026-01-20T15:04:36Z|00663|binding|INFO|Setting lport 22663aa0-a7f4-431c-b5a9-4433da2dff09 down in Southbound
Jan 20 15:04:36 compute-1 nova_compute[225855]: 2026-01-20 15:04:36.257 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:04:36 compute-1 ovn_controller[130490]: 2026-01-20T15:04:36Z|00664|binding|INFO|Removing iface tap22663aa0-a7 ovn-installed in OVS
Jan 20 15:04:36 compute-1 nova_compute[225855]: 2026-01-20 15:04:36.259 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:04:36 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:04:36.268 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2e:ec:9e 10.100.0.10'], port_security=['fa:16:3e:2e:ec:9e 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'c1db561e-0c8b-4cfb-97bb-55f8d4731b87', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-671e28d0-0b9e-41e0-b5e0-db1ccd4717ec', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '63555e5851564db08c6429231d264f2c', 'neutron:revision_number': '4', 'neutron:security_group_ids': '7e54c470-6a6f-454e-ae01-9d2d59b2c74d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=248fa32c-94be-4e1b-b4d3-cb9fac0ec155, chassis=[], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], logical_port=22663aa0-a7f4-431c-b5a9-4433da2dff09) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:04:36 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:04:36.269 140354 INFO neutron.agent.ovn.metadata.agent [-] Port 22663aa0-a7f4-431c-b5a9-4433da2dff09 in datapath 671e28d0-0b9e-41e0-b5e0-db1ccd4717ec unbound from our chassis
Jan 20 15:04:36 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:04:36.272 140354 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 671e28d0-0b9e-41e0-b5e0-db1ccd4717ec
Jan 20 15:04:36 compute-1 nova_compute[225855]: 2026-01-20 15:04:36.273 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:04:36 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:04:36.295 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[6d12d7f8-b71d-4a86-b219-39a3a6099672]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:04:36 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:04:36.322 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[72aa32b0-dae8-4728-91a7-d5a3f795d925]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:04:36 compute-1 systemd[1]: machine-qemu\x2d77\x2dinstance\x2d0000009f.scope: Deactivated successfully.
Jan 20 15:04:36 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:04:36.326 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[04b0ca05-35ea-4dc9-ae3d-2f55f8f3e429]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:04:36 compute-1 systemd[1]: machine-qemu\x2d77\x2dinstance\x2d0000009f.scope: Consumed 13.682s CPU time.
Jan 20 15:04:36 compute-1 systemd-machined[194361]: Machine qemu-77-instance-0000009f terminated.
Jan 20 15:04:36 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:04:36.355 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[c11b4ec7-9106-4a47-bdb2-991c6f9a35ff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:04:36 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:04:36.372 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[58b7452c-c3e2-4330-9405-b99d83d0db03]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap671e28d0-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2b:4e:69'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 7, 'rx_bytes': 532, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 7, 'rx_bytes': 532, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 184], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 642759, 'reachable_time': 32213, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 289472, 'error': None, 'target': 'ovnmeta-671e28d0-0b9e-41e0-b5e0-db1ccd4717ec', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:04:36 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:04:36.385 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[b463cf19-aad8-45bb-9305-4af88fb68f7c]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap671e28d0-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 642770, 'tstamp': 642770}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 289473, 'error': None, 'target': 'ovnmeta-671e28d0-0b9e-41e0-b5e0-db1ccd4717ec', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap671e28d0-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 642773, 'tstamp': 642773}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 289473, 'error': None, 'target': 'ovnmeta-671e28d0-0b9e-41e0-b5e0-db1ccd4717ec', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:04:36 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:04:36.387 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap671e28d0-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:04:36 compute-1 nova_compute[225855]: 2026-01-20 15:04:36.389 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:04:36 compute-1 nova_compute[225855]: 2026-01-20 15:04:36.392 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:04:36 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:04:36.393 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap671e28d0-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:04:36 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:04:36.394 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 15:04:36 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:04:36.394 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap671e28d0-00, col_values=(('external_ids', {'iface-id': 'a8628d9e-196f-4b84-89fd-d3a41792b8a0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:04:36 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:04:36.394 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 15:04:36 compute-1 ceph-mon[81775]: pgmap v2408: 321 pgs: 321 active+clean; 451 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 386 KiB/s rd, 2.4 MiB/s wr, 109 op/s
Jan 20 15:04:36 compute-1 sudo[289476]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:04:36 compute-1 sudo[289476]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:04:36 compute-1 sudo[289476]: pam_unix(sudo:session): session closed for user root
Jan 20 15:04:36 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:04:36 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:04:36 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:04:36.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:04:36 compute-1 sudo[289510]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:04:36 compute-1 sudo[289510]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:04:36 compute-1 sudo[289510]: pam_unix(sudo:session): session closed for user root
Jan 20 15:04:36 compute-1 nova_compute[225855]: 2026-01-20 15:04:36.702 225859 INFO nova.virt.libvirt.driver [None req-b0daa2a6-7c5c-4d0f-9607-8a8fbe9e85a0 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: c1db561e-0c8b-4cfb-97bb-55f8d4731b87] Instance shutdown successfully after 13 seconds.
Jan 20 15:04:36 compute-1 nova_compute[225855]: 2026-01-20 15:04:36.708 225859 INFO nova.virt.libvirt.driver [-] [instance: c1db561e-0c8b-4cfb-97bb-55f8d4731b87] Instance destroyed successfully.
Jan 20 15:04:36 compute-1 nova_compute[225855]: 2026-01-20 15:04:36.708 225859 DEBUG nova.objects.instance [None req-b0daa2a6-7c5c-4d0f-9607-8a8fbe9e85a0 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Lazy-loading 'numa_topology' on Instance uuid c1db561e-0c8b-4cfb-97bb-55f8d4731b87 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:04:36 compute-1 nova_compute[225855]: 2026-01-20 15:04:36.799 225859 INFO nova.virt.libvirt.driver [None req-b0daa2a6-7c5c-4d0f-9607-8a8fbe9e85a0 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: c1db561e-0c8b-4cfb-97bb-55f8d4731b87] Attempting a stable device rescue
Jan 20 15:04:36 compute-1 nova_compute[225855]: 2026-01-20 15:04:36.914 225859 DEBUG nova.compute.manager [req-1c14e547-8f6a-4df8-9f37-5cda5a7abeb1 req-ddf6430c-e623-4953-918f-9d7ef43e199c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: c1db561e-0c8b-4cfb-97bb-55f8d4731b87] Received event network-vif-unplugged-22663aa0-a7f4-431c-b5a9-4433da2dff09 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:04:36 compute-1 nova_compute[225855]: 2026-01-20 15:04:36.915 225859 DEBUG oslo_concurrency.lockutils [req-1c14e547-8f6a-4df8-9f37-5cda5a7abeb1 req-ddf6430c-e623-4953-918f-9d7ef43e199c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "c1db561e-0c8b-4cfb-97bb-55f8d4731b87-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:04:36 compute-1 nova_compute[225855]: 2026-01-20 15:04:36.915 225859 DEBUG oslo_concurrency.lockutils [req-1c14e547-8f6a-4df8-9f37-5cda5a7abeb1 req-ddf6430c-e623-4953-918f-9d7ef43e199c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "c1db561e-0c8b-4cfb-97bb-55f8d4731b87-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:04:36 compute-1 nova_compute[225855]: 2026-01-20 15:04:36.915 225859 DEBUG oslo_concurrency.lockutils [req-1c14e547-8f6a-4df8-9f37-5cda5a7abeb1 req-ddf6430c-e623-4953-918f-9d7ef43e199c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "c1db561e-0c8b-4cfb-97bb-55f8d4731b87-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:04:36 compute-1 nova_compute[225855]: 2026-01-20 15:04:36.916 225859 DEBUG nova.compute.manager [req-1c14e547-8f6a-4df8-9f37-5cda5a7abeb1 req-ddf6430c-e623-4953-918f-9d7ef43e199c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: c1db561e-0c8b-4cfb-97bb-55f8d4731b87] No waiting events found dispatching network-vif-unplugged-22663aa0-a7f4-431c-b5a9-4433da2dff09 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:04:36 compute-1 nova_compute[225855]: 2026-01-20 15:04:36.916 225859 WARNING nova.compute.manager [req-1c14e547-8f6a-4df8-9f37-5cda5a7abeb1 req-ddf6430c-e623-4953-918f-9d7ef43e199c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: c1db561e-0c8b-4cfb-97bb-55f8d4731b87] Received unexpected event network-vif-unplugged-22663aa0-a7f4-431c-b5a9-4433da2dff09 for instance with vm_state active and task_state rescuing.
Jan 20 15:04:37 compute-1 nova_compute[225855]: 2026-01-20 15:04:37.208 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:04:37 compute-1 nova_compute[225855]: 2026-01-20 15:04:37.286 225859 DEBUG nova.virt.libvirt.driver [None req-b0daa2a6-7c5c-4d0f-9607-8a8fbe9e85a0 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] rescue generated disk_info: {'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}, 'disk.rescue': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}}} rescue /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4314
Jan 20 15:04:37 compute-1 nova_compute[225855]: 2026-01-20 15:04:37.290 225859 DEBUG nova.virt.libvirt.driver [None req-b0daa2a6-7c5c-4d0f-9607-8a8fbe9e85a0 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: c1db561e-0c8b-4cfb-97bb-55f8d4731b87] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719
Jan 20 15:04:37 compute-1 nova_compute[225855]: 2026-01-20 15:04:37.290 225859 INFO nova.virt.libvirt.driver [None req-b0daa2a6-7c5c-4d0f-9607-8a8fbe9e85a0 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: c1db561e-0c8b-4cfb-97bb-55f8d4731b87] Creating image(s)
Jan 20 15:04:37 compute-1 nova_compute[225855]: 2026-01-20 15:04:37.314 225859 DEBUG nova.storage.rbd_utils [None req-b0daa2a6-7c5c-4d0f-9607-8a8fbe9e85a0 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] rbd image c1db561e-0c8b-4cfb-97bb-55f8d4731b87_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:04:37 compute-1 nova_compute[225855]: 2026-01-20 15:04:37.318 225859 DEBUG nova.objects.instance [None req-b0daa2a6-7c5c-4d0f-9607-8a8fbe9e85a0 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Lazy-loading 'trusted_certs' on Instance uuid c1db561e-0c8b-4cfb-97bb-55f8d4731b87 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:04:37 compute-1 nova_compute[225855]: 2026-01-20 15:04:37.383 225859 DEBUG nova.storage.rbd_utils [None req-b0daa2a6-7c5c-4d0f-9607-8a8fbe9e85a0 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] rbd image c1db561e-0c8b-4cfb-97bb-55f8d4731b87_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:04:37 compute-1 nova_compute[225855]: 2026-01-20 15:04:37.405 225859 DEBUG nova.storage.rbd_utils [None req-b0daa2a6-7c5c-4d0f-9607-8a8fbe9e85a0 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] rbd image c1db561e-0c8b-4cfb-97bb-55f8d4731b87_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:04:37 compute-1 nova_compute[225855]: 2026-01-20 15:04:37.409 225859 DEBUG oslo_concurrency.lockutils [None req-b0daa2a6-7c5c-4d0f-9607-8a8fbe9e85a0 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Acquiring lock "132c88f1a4a6a63e2a2024a3c1506ff21c276bf0" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:04:37 compute-1 nova_compute[225855]: 2026-01-20 15:04:37.410 225859 DEBUG oslo_concurrency.lockutils [None req-b0daa2a6-7c5c-4d0f-9607-8a8fbe9e85a0 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Lock "132c88f1a4a6a63e2a2024a3c1506ff21c276bf0" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:04:38 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e353 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:04:38 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:04:38 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:04:38 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:04:38.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:04:38 compute-1 nova_compute[225855]: 2026-01-20 15:04:38.123 225859 DEBUG nova.virt.libvirt.imagebackend [None req-b0daa2a6-7c5c-4d0f-9607-8a8fbe9e85a0 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Image locations are: [{'url': 'rbd://e399cf45-e6b6-5393-99f1-75c601d3f188/images/7f0d068e-5d2b-485d-b65c-7244508ab6b6/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://e399cf45-e6b6-5393-99f1-75c601d3f188/images/7f0d068e-5d2b-485d-b65c-7244508ab6b6/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Jan 20 15:04:38 compute-1 nova_compute[225855]: 2026-01-20 15:04:38.270 225859 DEBUG nova.virt.libvirt.imagebackend [None req-b0daa2a6-7c5c-4d0f-9607-8a8fbe9e85a0 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Selected location: {'url': 'rbd://e399cf45-e6b6-5393-99f1-75c601d3f188/images/7f0d068e-5d2b-485d-b65c-7244508ab6b6/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094
Jan 20 15:04:38 compute-1 nova_compute[225855]: 2026-01-20 15:04:38.272 225859 DEBUG nova.storage.rbd_utils [None req-b0daa2a6-7c5c-4d0f-9607-8a8fbe9e85a0 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] cloning images/7f0d068e-5d2b-485d-b65c-7244508ab6b6@snap to None/c1db561e-0c8b-4cfb-97bb-55f8d4731b87_disk.rescue clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Jan 20 15:04:38 compute-1 nova_compute[225855]: 2026-01-20 15:04:38.390 225859 DEBUG oslo_concurrency.lockutils [None req-b0daa2a6-7c5c-4d0f-9607-8a8fbe9e85a0 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Lock "132c88f1a4a6a63e2a2024a3c1506ff21c276bf0" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.980s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:04:38 compute-1 sudo[289659]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:04:38 compute-1 sudo[289659]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:04:38 compute-1 sudo[289659]: pam_unix(sudo:session): session closed for user root
Jan 20 15:04:38 compute-1 nova_compute[225855]: 2026-01-20 15:04:38.437 225859 DEBUG nova.objects.instance [None req-b0daa2a6-7c5c-4d0f-9607-8a8fbe9e85a0 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Lazy-loading 'migration_context' on Instance uuid c1db561e-0c8b-4cfb-97bb-55f8d4731b87 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:04:38 compute-1 nova_compute[225855]: 2026-01-20 15:04:38.454 225859 DEBUG nova.virt.libvirt.driver [None req-b0daa2a6-7c5c-4d0f-9607-8a8fbe9e85a0 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: c1db561e-0c8b-4cfb-97bb-55f8d4731b87] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 20 15:04:38 compute-1 nova_compute[225855]: 2026-01-20 15:04:38.457 225859 DEBUG nova.virt.libvirt.driver [None req-b0daa2a6-7c5c-4d0f-9607-8a8fbe9e85a0 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: c1db561e-0c8b-4cfb-97bb-55f8d4731b87] Start _get_guest_xml network_info=[{"id": "22663aa0-a7f4-431c-b5a9-4433da2dff09", "address": "fa:16:3e:2e:ec:9e", "network": {"id": "671e28d0-0b9e-41e0-b5e0-db1ccd4717ec", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-884777184-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerBootFromVolumeStableRescueTest-884777184-network", "vif_mac": "fa:16:3e:2e:ec:9e"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "63555e5851564db08c6429231d264f2c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap22663aa0-a7", "ovs_interfaceid": "22663aa0-a7f4-431c-b5a9-4433da2dff09", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}, 'disk.rescue': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>) rescue={'image_id': '7f0d068e-5d2b-485d-b65c-7244508ab6b6', 'kernel_id': '', 'ramdisk_id': ''} block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'delete_on_termination': False, 'device_type': 'disk', 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-9d237554-9581-4577-897a-3907d38a0cb3', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '9d237554-9581-4577-897a-3907d38a0cb3', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': 'c1db561e-0c8b-4cfb-97bb-55f8d4731b87', 'attached_at': '', 'detached_at': '', 'volume_id': '9d237554-9581-4577-897a-3907d38a0cb3', 'serial': '9d237554-9581-4577-897a-3907d38a0cb3'}, 'guest_format': None, 'boot_index': 0, 'mount_device': '/dev/vda', 'attachment_id': '2e3b09a8-1ad6-45d8-9497-37288b706f5c', 'disk_bus': 'virtio', 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 20 15:04:38 compute-1 nova_compute[225855]: 2026-01-20 15:04:38.457 225859 DEBUG nova.objects.instance [None req-b0daa2a6-7c5c-4d0f-9607-8a8fbe9e85a0 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Lazy-loading 'resources' on Instance uuid c1db561e-0c8b-4cfb-97bb-55f8d4731b87 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:04:38 compute-1 sudo[289702]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 15:04:38 compute-1 sudo[289702]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:04:38 compute-1 sudo[289702]: pam_unix(sudo:session): session closed for user root
Jan 20 15:04:38 compute-1 nova_compute[225855]: 2026-01-20 15:04:38.488 225859 WARNING nova.virt.libvirt.driver [None req-b0daa2a6-7c5c-4d0f-9607-8a8fbe9e85a0 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 15:04:38 compute-1 nova_compute[225855]: 2026-01-20 15:04:38.504 225859 DEBUG nova.virt.libvirt.host [None req-b0daa2a6-7c5c-4d0f-9607-8a8fbe9e85a0 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 20 15:04:38 compute-1 nova_compute[225855]: 2026-01-20 15:04:38.505 225859 DEBUG nova.virt.libvirt.host [None req-b0daa2a6-7c5c-4d0f-9607-8a8fbe9e85a0 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 20 15:04:38 compute-1 nova_compute[225855]: 2026-01-20 15:04:38.516 225859 DEBUG nova.virt.libvirt.host [None req-b0daa2a6-7c5c-4d0f-9607-8a8fbe9e85a0 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 20 15:04:38 compute-1 nova_compute[225855]: 2026-01-20 15:04:38.516 225859 DEBUG nova.virt.libvirt.host [None req-b0daa2a6-7c5c-4d0f-9607-8a8fbe9e85a0 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 20 15:04:38 compute-1 nova_compute[225855]: 2026-01-20 15:04:38.518 225859 DEBUG nova.virt.libvirt.driver [None req-b0daa2a6-7c5c-4d0f-9607-8a8fbe9e85a0 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 20 15:04:38 compute-1 nova_compute[225855]: 2026-01-20 15:04:38.518 225859 DEBUG nova.virt.hardware [None req-b0daa2a6-7c5c-4d0f-9607-8a8fbe9e85a0 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-20T14:21:55Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='522deaab-a741-4dbb-932d-d8b13a211c33',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 20 15:04:38 compute-1 nova_compute[225855]: 2026-01-20 15:04:38.518 225859 DEBUG nova.virt.hardware [None req-b0daa2a6-7c5c-4d0f-9607-8a8fbe9e85a0 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 20 15:04:38 compute-1 nova_compute[225855]: 2026-01-20 15:04:38.519 225859 DEBUG nova.virt.hardware [None req-b0daa2a6-7c5c-4d0f-9607-8a8fbe9e85a0 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 20 15:04:38 compute-1 nova_compute[225855]: 2026-01-20 15:04:38.519 225859 DEBUG nova.virt.hardware [None req-b0daa2a6-7c5c-4d0f-9607-8a8fbe9e85a0 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 20 15:04:38 compute-1 nova_compute[225855]: 2026-01-20 15:04:38.519 225859 DEBUG nova.virt.hardware [None req-b0daa2a6-7c5c-4d0f-9607-8a8fbe9e85a0 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 20 15:04:38 compute-1 nova_compute[225855]: 2026-01-20 15:04:38.519 225859 DEBUG nova.virt.hardware [None req-b0daa2a6-7c5c-4d0f-9607-8a8fbe9e85a0 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 20 15:04:38 compute-1 nova_compute[225855]: 2026-01-20 15:04:38.520 225859 DEBUG nova.virt.hardware [None req-b0daa2a6-7c5c-4d0f-9607-8a8fbe9e85a0 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 20 15:04:38 compute-1 nova_compute[225855]: 2026-01-20 15:04:38.520 225859 DEBUG nova.virt.hardware [None req-b0daa2a6-7c5c-4d0f-9607-8a8fbe9e85a0 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 20 15:04:38 compute-1 nova_compute[225855]: 2026-01-20 15:04:38.520 225859 DEBUG nova.virt.hardware [None req-b0daa2a6-7c5c-4d0f-9607-8a8fbe9e85a0 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 20 15:04:38 compute-1 nova_compute[225855]: 2026-01-20 15:04:38.520 225859 DEBUG nova.virt.hardware [None req-b0daa2a6-7c5c-4d0f-9607-8a8fbe9e85a0 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 20 15:04:38 compute-1 nova_compute[225855]: 2026-01-20 15:04:38.520 225859 DEBUG nova.virt.hardware [None req-b0daa2a6-7c5c-4d0f-9607-8a8fbe9e85a0 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 20 15:04:38 compute-1 nova_compute[225855]: 2026-01-20 15:04:38.521 225859 DEBUG nova.objects.instance [None req-b0daa2a6-7c5c-4d0f-9607-8a8fbe9e85a0 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Lazy-loading 'vcpu_model' on Instance uuid c1db561e-0c8b-4cfb-97bb-55f8d4731b87 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:04:38 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:04:38 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:04:38 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:04:38.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:04:38 compute-1 nova_compute[225855]: 2026-01-20 15:04:38.589 225859 DEBUG oslo_concurrency.processutils [None req-b0daa2a6-7c5c-4d0f-9607-8a8fbe9e85a0 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:04:38 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:04:38.828 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=50, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '12:bb:42', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '06:92:24:f7:15:56'}, ipsec=False) old=SB_Global(nb_cfg=49) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:04:38 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:04:38.830 140354 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 20 15:04:38 compute-1 nova_compute[225855]: 2026-01-20 15:04:38.830 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:04:38 compute-1 nova_compute[225855]: 2026-01-20 15:04:38.838 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:04:39 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 15:04:39 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3218525619' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:04:39 compute-1 nova_compute[225855]: 2026-01-20 15:04:39.057 225859 DEBUG oslo_concurrency.processutils [None req-b0daa2a6-7c5c-4d0f-9607-8a8fbe9e85a0 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:04:39 compute-1 nova_compute[225855]: 2026-01-20 15:04:39.077 225859 DEBUG nova.compute.manager [req-e89e0823-e6ca-46e1-8a61-9d860589d0f9 req-541487a6-4201-47e8-b49f-1e3d8e6a9fee 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: c1db561e-0c8b-4cfb-97bb-55f8d4731b87] Received event network-vif-plugged-22663aa0-a7f4-431c-b5a9-4433da2dff09 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:04:39 compute-1 nova_compute[225855]: 2026-01-20 15:04:39.077 225859 DEBUG oslo_concurrency.lockutils [req-e89e0823-e6ca-46e1-8a61-9d860589d0f9 req-541487a6-4201-47e8-b49f-1e3d8e6a9fee 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "c1db561e-0c8b-4cfb-97bb-55f8d4731b87-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:04:39 compute-1 nova_compute[225855]: 2026-01-20 15:04:39.077 225859 DEBUG oslo_concurrency.lockutils [req-e89e0823-e6ca-46e1-8a61-9d860589d0f9 req-541487a6-4201-47e8-b49f-1e3d8e6a9fee 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "c1db561e-0c8b-4cfb-97bb-55f8d4731b87-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:04:39 compute-1 nova_compute[225855]: 2026-01-20 15:04:39.078 225859 DEBUG oslo_concurrency.lockutils [req-e89e0823-e6ca-46e1-8a61-9d860589d0f9 req-541487a6-4201-47e8-b49f-1e3d8e6a9fee 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "c1db561e-0c8b-4cfb-97bb-55f8d4731b87-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:04:39 compute-1 nova_compute[225855]: 2026-01-20 15:04:39.078 225859 DEBUG nova.compute.manager [req-e89e0823-e6ca-46e1-8a61-9d860589d0f9 req-541487a6-4201-47e8-b49f-1e3d8e6a9fee 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: c1db561e-0c8b-4cfb-97bb-55f8d4731b87] No waiting events found dispatching network-vif-plugged-22663aa0-a7f4-431c-b5a9-4433da2dff09 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:04:39 compute-1 nova_compute[225855]: 2026-01-20 15:04:39.078 225859 WARNING nova.compute.manager [req-e89e0823-e6ca-46e1-8a61-9d860589d0f9 req-541487a6-4201-47e8-b49f-1e3d8e6a9fee 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: c1db561e-0c8b-4cfb-97bb-55f8d4731b87] Received unexpected event network-vif-plugged-22663aa0-a7f4-431c-b5a9-4433da2dff09 for instance with vm_state active and task_state rescuing.
Jan 20 15:04:39 compute-1 nova_compute[225855]: 2026-01-20 15:04:39.085 225859 DEBUG oslo_concurrency.processutils [None req-b0daa2a6-7c5c-4d0f-9607-8a8fbe9e85a0 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:04:39 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:04:39 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:04:39 compute-1 ceph-mon[81775]: pgmap v2409: 321 pgs: 321 active+clean; 451 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 344 KiB/s rd, 2.1 MiB/s wr, 92 op/s
Jan 20 15:04:39 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/3218525619' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:04:39 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 15:04:39 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1912797042' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:04:39 compute-1 nova_compute[225855]: 2026-01-20 15:04:39.727 225859 DEBUG oslo_concurrency.processutils [None req-b0daa2a6-7c5c-4d0f-9607-8a8fbe9e85a0 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.643s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:04:39 compute-1 nova_compute[225855]: 2026-01-20 15:04:39.729 225859 DEBUG nova.virt.libvirt.vif [None req-b0daa2a6-7c5c-4d0f-9607-8a8fbe9e85a0 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T15:03:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-ServerBootFromVolumeStableRescueTest-server-1670904176',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverbootfromvolumestablerescuetest-server-1670904176',id=159,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-20T15:04:16Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='63555e5851564db08c6429231d264f2c',ramdisk_id='',reservation_id='r-qovaqapn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',ima
ge_min_ram='0',owner_project_name='tempest-ServerBootFromVolumeStableRescueTest-1871371328',owner_user_name='tempest-ServerBootFromVolumeStableRescueTest-1871371328-project-member'},tags=<?>,task_state='rescuing',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T15:04:16Z,user_data=None,user_id='2446e8399b344b29986c1aaf8bf73adf',uuid=c1db561e-0c8b-4cfb-97bb-55f8d4731b87,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "22663aa0-a7f4-431c-b5a9-4433da2dff09", "address": "fa:16:3e:2e:ec:9e", "network": {"id": "671e28d0-0b9e-41e0-b5e0-db1ccd4717ec", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-884777184-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerBootFromVolumeStableRescueTest-884777184-network", "vif_mac": "fa:16:3e:2e:ec:9e"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "63555e5851564db08c6429231d264f2c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap22663aa0-a7", "ovs_interfaceid": "22663aa0-a7f4-431c-b5a9-4433da2dff09", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 20 15:04:39 compute-1 nova_compute[225855]: 2026-01-20 15:04:39.730 225859 DEBUG nova.network.os_vif_util [None req-b0daa2a6-7c5c-4d0f-9607-8a8fbe9e85a0 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Converting VIF {"id": "22663aa0-a7f4-431c-b5a9-4433da2dff09", "address": "fa:16:3e:2e:ec:9e", "network": {"id": "671e28d0-0b9e-41e0-b5e0-db1ccd4717ec", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-884777184-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerBootFromVolumeStableRescueTest-884777184-network", "vif_mac": "fa:16:3e:2e:ec:9e"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "63555e5851564db08c6429231d264f2c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap22663aa0-a7", "ovs_interfaceid": "22663aa0-a7f4-431c-b5a9-4433da2dff09", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 15:04:39 compute-1 nova_compute[225855]: 2026-01-20 15:04:39.731 225859 DEBUG nova.network.os_vif_util [None req-b0daa2a6-7c5c-4d0f-9607-8a8fbe9e85a0 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:2e:ec:9e,bridge_name='br-int',has_traffic_filtering=True,id=22663aa0-a7f4-431c-b5a9-4433da2dff09,network=Network(671e28d0-0b9e-41e0-b5e0-db1ccd4717ec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap22663aa0-a7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 15:04:39 compute-1 nova_compute[225855]: 2026-01-20 15:04:39.732 225859 DEBUG nova.objects.instance [None req-b0daa2a6-7c5c-4d0f-9607-8a8fbe9e85a0 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Lazy-loading 'pci_devices' on Instance uuid c1db561e-0c8b-4cfb-97bb-55f8d4731b87 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:04:39 compute-1 nova_compute[225855]: 2026-01-20 15:04:39.760 225859 DEBUG nova.virt.libvirt.driver [None req-b0daa2a6-7c5c-4d0f-9607-8a8fbe9e85a0 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: c1db561e-0c8b-4cfb-97bb-55f8d4731b87] End _get_guest_xml xml=<domain type="kvm">
Jan 20 15:04:39 compute-1 nova_compute[225855]:   <uuid>c1db561e-0c8b-4cfb-97bb-55f8d4731b87</uuid>
Jan 20 15:04:39 compute-1 nova_compute[225855]:   <name>instance-0000009f</name>
Jan 20 15:04:39 compute-1 nova_compute[225855]:   <memory>131072</memory>
Jan 20 15:04:39 compute-1 nova_compute[225855]:   <vcpu>1</vcpu>
Jan 20 15:04:39 compute-1 nova_compute[225855]:   <metadata>
Jan 20 15:04:39 compute-1 nova_compute[225855]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 15:04:39 compute-1 nova_compute[225855]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 15:04:39 compute-1 nova_compute[225855]:       <nova:name>tempest-ServerBootFromVolumeStableRescueTest-server-1670904176</nova:name>
Jan 20 15:04:39 compute-1 nova_compute[225855]:       <nova:creationTime>2026-01-20 15:04:38</nova:creationTime>
Jan 20 15:04:39 compute-1 nova_compute[225855]:       <nova:flavor name="m1.nano">
Jan 20 15:04:39 compute-1 nova_compute[225855]:         <nova:memory>128</nova:memory>
Jan 20 15:04:39 compute-1 nova_compute[225855]:         <nova:disk>1</nova:disk>
Jan 20 15:04:39 compute-1 nova_compute[225855]:         <nova:swap>0</nova:swap>
Jan 20 15:04:39 compute-1 nova_compute[225855]:         <nova:ephemeral>0</nova:ephemeral>
Jan 20 15:04:39 compute-1 nova_compute[225855]:         <nova:vcpus>1</nova:vcpus>
Jan 20 15:04:39 compute-1 nova_compute[225855]:       </nova:flavor>
Jan 20 15:04:39 compute-1 nova_compute[225855]:       <nova:owner>
Jan 20 15:04:39 compute-1 nova_compute[225855]:         <nova:user uuid="2446e8399b344b29986c1aaf8bf73adf">tempest-ServerBootFromVolumeStableRescueTest-1871371328-project-member</nova:user>
Jan 20 15:04:39 compute-1 nova_compute[225855]:         <nova:project uuid="63555e5851564db08c6429231d264f2c">tempest-ServerBootFromVolumeStableRescueTest-1871371328</nova:project>
Jan 20 15:04:39 compute-1 nova_compute[225855]:       </nova:owner>
Jan 20 15:04:39 compute-1 nova_compute[225855]:       <nova:root type="image" uuid="a32b3e07-16d8-46fd-9a7b-c242c432fcf9"/>
Jan 20 15:04:39 compute-1 nova_compute[225855]:       <nova:ports>
Jan 20 15:04:39 compute-1 nova_compute[225855]:         <nova:port uuid="22663aa0-a7f4-431c-b5a9-4433da2dff09">
Jan 20 15:04:39 compute-1 nova_compute[225855]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Jan 20 15:04:39 compute-1 nova_compute[225855]:         </nova:port>
Jan 20 15:04:39 compute-1 nova_compute[225855]:       </nova:ports>
Jan 20 15:04:39 compute-1 nova_compute[225855]:     </nova:instance>
Jan 20 15:04:39 compute-1 nova_compute[225855]:   </metadata>
Jan 20 15:04:39 compute-1 nova_compute[225855]:   <sysinfo type="smbios">
Jan 20 15:04:39 compute-1 nova_compute[225855]:     <system>
Jan 20 15:04:39 compute-1 nova_compute[225855]:       <entry name="manufacturer">RDO</entry>
Jan 20 15:04:39 compute-1 nova_compute[225855]:       <entry name="product">OpenStack Compute</entry>
Jan 20 15:04:39 compute-1 nova_compute[225855]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 15:04:39 compute-1 nova_compute[225855]:       <entry name="serial">c1db561e-0c8b-4cfb-97bb-55f8d4731b87</entry>
Jan 20 15:04:39 compute-1 nova_compute[225855]:       <entry name="uuid">c1db561e-0c8b-4cfb-97bb-55f8d4731b87</entry>
Jan 20 15:04:39 compute-1 nova_compute[225855]:       <entry name="family">Virtual Machine</entry>
Jan 20 15:04:39 compute-1 nova_compute[225855]:     </system>
Jan 20 15:04:39 compute-1 nova_compute[225855]:   </sysinfo>
Jan 20 15:04:39 compute-1 nova_compute[225855]:   <os>
Jan 20 15:04:39 compute-1 nova_compute[225855]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 20 15:04:39 compute-1 nova_compute[225855]:     <smbios mode="sysinfo"/>
Jan 20 15:04:39 compute-1 nova_compute[225855]:   </os>
Jan 20 15:04:39 compute-1 nova_compute[225855]:   <features>
Jan 20 15:04:39 compute-1 nova_compute[225855]:     <acpi/>
Jan 20 15:04:39 compute-1 nova_compute[225855]:     <apic/>
Jan 20 15:04:39 compute-1 nova_compute[225855]:     <vmcoreinfo/>
Jan 20 15:04:39 compute-1 nova_compute[225855]:   </features>
Jan 20 15:04:39 compute-1 nova_compute[225855]:   <clock offset="utc">
Jan 20 15:04:39 compute-1 nova_compute[225855]:     <timer name="pit" tickpolicy="delay"/>
Jan 20 15:04:39 compute-1 nova_compute[225855]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 20 15:04:39 compute-1 nova_compute[225855]:     <timer name="hpet" present="no"/>
Jan 20 15:04:39 compute-1 nova_compute[225855]:   </clock>
Jan 20 15:04:39 compute-1 nova_compute[225855]:   <cpu mode="custom" match="exact">
Jan 20 15:04:39 compute-1 nova_compute[225855]:     <model>Nehalem</model>
Jan 20 15:04:39 compute-1 nova_compute[225855]:     <topology sockets="1" cores="1" threads="1"/>
Jan 20 15:04:39 compute-1 nova_compute[225855]:   </cpu>
Jan 20 15:04:39 compute-1 nova_compute[225855]:   <devices>
Jan 20 15:04:39 compute-1 nova_compute[225855]:     <disk type="network" device="cdrom">
Jan 20 15:04:39 compute-1 nova_compute[225855]:       <driver type="raw" cache="none"/>
Jan 20 15:04:39 compute-1 nova_compute[225855]:       <source protocol="rbd" name="vms/c1db561e-0c8b-4cfb-97bb-55f8d4731b87_disk.config">
Jan 20 15:04:39 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 15:04:39 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 15:04:39 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 15:04:39 compute-1 nova_compute[225855]:       </source>
Jan 20 15:04:39 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 15:04:39 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 15:04:39 compute-1 nova_compute[225855]:       </auth>
Jan 20 15:04:39 compute-1 nova_compute[225855]:       <target dev="sda" bus="sata"/>
Jan 20 15:04:39 compute-1 nova_compute[225855]:     </disk>
Jan 20 15:04:39 compute-1 nova_compute[225855]:     <disk type="network" device="disk">
Jan 20 15:04:39 compute-1 nova_compute[225855]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 20 15:04:39 compute-1 nova_compute[225855]:       <source protocol="rbd" name="volumes/volume-9d237554-9581-4577-897a-3907d38a0cb3">
Jan 20 15:04:39 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 15:04:39 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 15:04:39 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 15:04:39 compute-1 nova_compute[225855]:       </source>
Jan 20 15:04:39 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 15:04:39 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 15:04:39 compute-1 nova_compute[225855]:       </auth>
Jan 20 15:04:39 compute-1 nova_compute[225855]:       <target dev="vda" bus="virtio"/>
Jan 20 15:04:39 compute-1 nova_compute[225855]:       <serial>9d237554-9581-4577-897a-3907d38a0cb3</serial>
Jan 20 15:04:39 compute-1 nova_compute[225855]:     </disk>
Jan 20 15:04:39 compute-1 nova_compute[225855]:     <disk type="network" device="disk">
Jan 20 15:04:39 compute-1 nova_compute[225855]:       <driver type="raw" cache="none"/>
Jan 20 15:04:39 compute-1 nova_compute[225855]:       <source protocol="rbd" name="vms/c1db561e-0c8b-4cfb-97bb-55f8d4731b87_disk.rescue">
Jan 20 15:04:39 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 15:04:39 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 15:04:39 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 15:04:39 compute-1 nova_compute[225855]:       </source>
Jan 20 15:04:39 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 15:04:39 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 15:04:39 compute-1 nova_compute[225855]:       </auth>
Jan 20 15:04:39 compute-1 nova_compute[225855]:       <target dev="vdb" bus="virtio"/>
Jan 20 15:04:39 compute-1 nova_compute[225855]:       <boot order="1"/>
Jan 20 15:04:39 compute-1 nova_compute[225855]:     </disk>
Jan 20 15:04:39 compute-1 nova_compute[225855]:     <interface type="ethernet">
Jan 20 15:04:39 compute-1 nova_compute[225855]:       <mac address="fa:16:3e:2e:ec:9e"/>
Jan 20 15:04:39 compute-1 nova_compute[225855]:       <model type="virtio"/>
Jan 20 15:04:39 compute-1 nova_compute[225855]:       <driver name="vhost" rx_queue_size="512"/>
Jan 20 15:04:39 compute-1 nova_compute[225855]:       <mtu size="1442"/>
Jan 20 15:04:39 compute-1 nova_compute[225855]:       <target dev="tap22663aa0-a7"/>
Jan 20 15:04:39 compute-1 nova_compute[225855]:     </interface>
Jan 20 15:04:39 compute-1 nova_compute[225855]:     <serial type="pty">
Jan 20 15:04:39 compute-1 nova_compute[225855]:       <log file="/var/lib/nova/instances/c1db561e-0c8b-4cfb-97bb-55f8d4731b87/console.log" append="off"/>
Jan 20 15:04:39 compute-1 nova_compute[225855]:     </serial>
Jan 20 15:04:39 compute-1 nova_compute[225855]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 20 15:04:39 compute-1 nova_compute[225855]:     <video>
Jan 20 15:04:39 compute-1 nova_compute[225855]:       <model type="virtio"/>
Jan 20 15:04:39 compute-1 nova_compute[225855]:     </video>
Jan 20 15:04:39 compute-1 nova_compute[225855]:     <input type="tablet" bus="usb"/>
Jan 20 15:04:39 compute-1 nova_compute[225855]:     <rng model="virtio">
Jan 20 15:04:39 compute-1 nova_compute[225855]:       <backend model="random">/dev/urandom</backend>
Jan 20 15:04:39 compute-1 nova_compute[225855]:     </rng>
Jan 20 15:04:39 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root"/>
Jan 20 15:04:39 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:04:39 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:04:39 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:04:39 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:04:39 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:04:39 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:04:39 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:04:39 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:04:39 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:04:39 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:04:39 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:04:39 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:04:39 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:04:39 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:04:39 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:04:39 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:04:39 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:04:39 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:04:39 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:04:39 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:04:39 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:04:39 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:04:39 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:04:39 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:04:39 compute-1 nova_compute[225855]:     <controller type="usb" index="0"/>
Jan 20 15:04:39 compute-1 nova_compute[225855]:     <memballoon model="virtio">
Jan 20 15:04:39 compute-1 nova_compute[225855]:       <stats period="10"/>
Jan 20 15:04:39 compute-1 nova_compute[225855]:     </memballoon>
Jan 20 15:04:39 compute-1 nova_compute[225855]:   </devices>
Jan 20 15:04:39 compute-1 nova_compute[225855]: </domain>
Jan 20 15:04:39 compute-1 nova_compute[225855]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 20 15:04:39 compute-1 nova_compute[225855]: 2026-01-20 15:04:39.768 225859 INFO nova.virt.libvirt.driver [-] [instance: c1db561e-0c8b-4cfb-97bb-55f8d4731b87] Instance destroyed successfully.
Jan 20 15:04:39 compute-1 nova_compute[225855]: 2026-01-20 15:04:39.829 225859 DEBUG nova.virt.libvirt.driver [None req-b0daa2a6-7c5c-4d0f-9607-8a8fbe9e85a0 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 15:04:39 compute-1 nova_compute[225855]: 2026-01-20 15:04:39.829 225859 DEBUG nova.virt.libvirt.driver [None req-b0daa2a6-7c5c-4d0f-9607-8a8fbe9e85a0 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 15:04:39 compute-1 nova_compute[225855]: 2026-01-20 15:04:39.829 225859 DEBUG nova.virt.libvirt.driver [None req-b0daa2a6-7c5c-4d0f-9607-8a8fbe9e85a0 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 15:04:39 compute-1 nova_compute[225855]: 2026-01-20 15:04:39.830 225859 DEBUG nova.virt.libvirt.driver [None req-b0daa2a6-7c5c-4d0f-9607-8a8fbe9e85a0 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] No VIF found with MAC fa:16:3e:2e:ec:9e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 20 15:04:39 compute-1 nova_compute[225855]: 2026-01-20 15:04:39.830 225859 INFO nova.virt.libvirt.driver [None req-b0daa2a6-7c5c-4d0f-9607-8a8fbe9e85a0 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: c1db561e-0c8b-4cfb-97bb-55f8d4731b87] Using config drive
Jan 20 15:04:39 compute-1 nova_compute[225855]: 2026-01-20 15:04:39.853 225859 DEBUG nova.storage.rbd_utils [None req-b0daa2a6-7c5c-4d0f-9607-8a8fbe9e85a0 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] rbd image c1db561e-0c8b-4cfb-97bb-55f8d4731b87_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:04:39 compute-1 podman[289791]: 2026-01-20 15:04:39.87100559 +0000 UTC m=+0.064550682 container health_status 533cd8e0c3b8b7f910f15cef49e7f879ab46c12ce646a35628c15f13725eb500 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 20 15:04:39 compute-1 nova_compute[225855]: 2026-01-20 15:04:39.889 225859 DEBUG nova.objects.instance [None req-b0daa2a6-7c5c-4d0f-9607-8a8fbe9e85a0 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Lazy-loading 'ec2_ids' on Instance uuid c1db561e-0c8b-4cfb-97bb-55f8d4731b87 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:04:39 compute-1 nova_compute[225855]: 2026-01-20 15:04:39.924 225859 DEBUG nova.objects.instance [None req-b0daa2a6-7c5c-4d0f-9607-8a8fbe9e85a0 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Lazy-loading 'keypairs' on Instance uuid c1db561e-0c8b-4cfb-97bb-55f8d4731b87 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:04:40 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:04:40 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:04:40 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:04:40.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:04:40 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/1912797042' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:04:40 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:04:40 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:04:40 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:04:40.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:04:40 compute-1 nova_compute[225855]: 2026-01-20 15:04:40.649 225859 INFO nova.virt.libvirt.driver [None req-b0daa2a6-7c5c-4d0f-9607-8a8fbe9e85a0 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: c1db561e-0c8b-4cfb-97bb-55f8d4731b87] Creating config drive at /var/lib/nova/instances/c1db561e-0c8b-4cfb-97bb-55f8d4731b87/disk.config.rescue
Jan 20 15:04:40 compute-1 nova_compute[225855]: 2026-01-20 15:04:40.654 225859 DEBUG oslo_concurrency.processutils [None req-b0daa2a6-7c5c-4d0f-9607-8a8fbe9e85a0 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c1db561e-0c8b-4cfb-97bb-55f8d4731b87/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpczuypwxj execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:04:40 compute-1 nova_compute[225855]: 2026-01-20 15:04:40.797 225859 DEBUG oslo_concurrency.processutils [None req-b0daa2a6-7c5c-4d0f-9607-8a8fbe9e85a0 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c1db561e-0c8b-4cfb-97bb-55f8d4731b87/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpczuypwxj" returned: 0 in 0.142s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:04:40 compute-1 nova_compute[225855]: 2026-01-20 15:04:40.827 225859 DEBUG nova.storage.rbd_utils [None req-b0daa2a6-7c5c-4d0f-9607-8a8fbe9e85a0 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] rbd image c1db561e-0c8b-4cfb-97bb-55f8d4731b87_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:04:40 compute-1 nova_compute[225855]: 2026-01-20 15:04:40.831 225859 DEBUG oslo_concurrency.processutils [None req-b0daa2a6-7c5c-4d0f-9607-8a8fbe9e85a0 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c1db561e-0c8b-4cfb-97bb-55f8d4731b87/disk.config.rescue c1db561e-0c8b-4cfb-97bb-55f8d4731b87_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:04:41 compute-1 nova_compute[225855]: 2026-01-20 15:04:41.007 225859 DEBUG oslo_concurrency.processutils [None req-b0daa2a6-7c5c-4d0f-9607-8a8fbe9e85a0 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c1db561e-0c8b-4cfb-97bb-55f8d4731b87/disk.config.rescue c1db561e-0c8b-4cfb-97bb-55f8d4731b87_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.176s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:04:41 compute-1 nova_compute[225855]: 2026-01-20 15:04:41.007 225859 INFO nova.virt.libvirt.driver [None req-b0daa2a6-7c5c-4d0f-9607-8a8fbe9e85a0 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: c1db561e-0c8b-4cfb-97bb-55f8d4731b87] Deleting local config drive /var/lib/nova/instances/c1db561e-0c8b-4cfb-97bb-55f8d4731b87/disk.config.rescue because it was imported into RBD.
Jan 20 15:04:41 compute-1 kernel: tap22663aa0-a7: entered promiscuous mode
Jan 20 15:04:41 compute-1 ovn_controller[130490]: 2026-01-20T15:04:41Z|00665|binding|INFO|Claiming lport 22663aa0-a7f4-431c-b5a9-4433da2dff09 for this chassis.
Jan 20 15:04:41 compute-1 ovn_controller[130490]: 2026-01-20T15:04:41Z|00666|binding|INFO|22663aa0-a7f4-431c-b5a9-4433da2dff09: Claiming fa:16:3e:2e:ec:9e 10.100.0.10
Jan 20 15:04:41 compute-1 NetworkManager[49104]: <info>  [1768921481.0575] manager: (tap22663aa0-a7): new Tun device (/org/freedesktop/NetworkManager/Devices/283)
Jan 20 15:04:41 compute-1 nova_compute[225855]: 2026-01-20 15:04:41.056 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:04:41 compute-1 ovn_controller[130490]: 2026-01-20T15:04:41Z|00667|binding|INFO|Setting lport 22663aa0-a7f4-431c-b5a9-4433da2dff09 ovn-installed in OVS
Jan 20 15:04:41 compute-1 nova_compute[225855]: 2026-01-20 15:04:41.075 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:04:41 compute-1 ovn_controller[130490]: 2026-01-20T15:04:41Z|00668|binding|INFO|Setting lport 22663aa0-a7f4-431c-b5a9-4433da2dff09 up in Southbound
Jan 20 15:04:41 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:04:41.078 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2e:ec:9e 10.100.0.10'], port_security=['fa:16:3e:2e:ec:9e 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'c1db561e-0c8b-4cfb-97bb-55f8d4731b87', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-671e28d0-0b9e-41e0-b5e0-db1ccd4717ec', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '63555e5851564db08c6429231d264f2c', 'neutron:revision_number': '5', 'neutron:security_group_ids': '7e54c470-6a6f-454e-ae01-9d2d59b2c74d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=248fa32c-94be-4e1b-b4d3-cb9fac0ec155, chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], logical_port=22663aa0-a7f4-431c-b5a9-4433da2dff09) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:04:41 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:04:41.079 140354 INFO neutron.agent.ovn.metadata.agent [-] Port 22663aa0-a7f4-431c-b5a9-4433da2dff09 in datapath 671e28d0-0b9e-41e0-b5e0-db1ccd4717ec bound to our chassis
Jan 20 15:04:41 compute-1 nova_compute[225855]: 2026-01-20 15:04:41.081 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:04:41 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:04:41.081 140354 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 671e28d0-0b9e-41e0-b5e0-db1ccd4717ec
Jan 20 15:04:41 compute-1 systemd-udevd[289879]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 15:04:41 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:04:41.097 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[d7d3c5dc-7ec4-4ae0-ac7f-97026790497e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:04:41 compute-1 systemd-machined[194361]: New machine qemu-78-instance-0000009f.
Jan 20 15:04:41 compute-1 NetworkManager[49104]: <info>  [1768921481.1031] device (tap22663aa0-a7): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 20 15:04:41 compute-1 NetworkManager[49104]: <info>  [1768921481.1039] device (tap22663aa0-a7): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 20 15:04:41 compute-1 systemd[1]: Started Virtual Machine qemu-78-instance-0000009f.
Jan 20 15:04:41 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:04:41.131 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[0280643d-c3e9-4448-b83d-b1fefcc47c93]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:04:41 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:04:41.136 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[3189c29a-9f92-42d4-bbfd-aaf06066af3b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:04:41 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:04:41.163 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[0c4e2d02-ac89-4db5-96b6-b96835e79aaf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:04:41 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:04:41.179 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[035f5a6d-aee7-4df9-8efa-f9f4ff085b3b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap671e28d0-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2b:4e:69'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 9, 'rx_bytes': 532, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 9, 'rx_bytes': 532, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 184], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 642759, 'reachable_time': 32213, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 289892, 'error': None, 'target': 'ovnmeta-671e28d0-0b9e-41e0-b5e0-db1ccd4717ec', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:04:41 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:04:41.199 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[a2364a94-4972-471a-8fc6-8d141c72fc63]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap671e28d0-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 642770, 'tstamp': 642770}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 289894, 'error': None, 'target': 'ovnmeta-671e28d0-0b9e-41e0-b5e0-db1ccd4717ec', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap671e28d0-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 642773, 'tstamp': 642773}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 289894, 'error': None, 'target': 'ovnmeta-671e28d0-0b9e-41e0-b5e0-db1ccd4717ec', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:04:41 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:04:41.201 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap671e28d0-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:04:41 compute-1 nova_compute[225855]: 2026-01-20 15:04:41.203 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:04:41 compute-1 nova_compute[225855]: 2026-01-20 15:04:41.204 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:04:41 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:04:41.204 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap671e28d0-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:04:41 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:04:41.205 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 15:04:41 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:04:41.205 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap671e28d0-00, col_values=(('external_ids', {'iface-id': 'a8628d9e-196f-4b84-89fd-d3a41792b8a0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:04:41 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:04:41.205 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 15:04:41 compute-1 ceph-mon[81775]: pgmap v2410: 321 pgs: 321 active+clean; 451 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 358 KiB/s rd, 2.2 MiB/s wr, 110 op/s
Jan 20 15:04:41 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/3769637538' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:04:41 compute-1 nova_compute[225855]: 2026-01-20 15:04:41.666 225859 DEBUG nova.virt.libvirt.host [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Removed pending event for c1db561e-0c8b-4cfb-97bb-55f8d4731b87 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Jan 20 15:04:41 compute-1 nova_compute[225855]: 2026-01-20 15:04:41.668 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768921481.665639, c1db561e-0c8b-4cfb-97bb-55f8d4731b87 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:04:41 compute-1 nova_compute[225855]: 2026-01-20 15:04:41.668 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: c1db561e-0c8b-4cfb-97bb-55f8d4731b87] VM Resumed (Lifecycle Event)
Jan 20 15:04:41 compute-1 nova_compute[225855]: 2026-01-20 15:04:41.677 225859 DEBUG nova.compute.manager [None req-b0daa2a6-7c5c-4d0f-9607-8a8fbe9e85a0 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: c1db561e-0c8b-4cfb-97bb-55f8d4731b87] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:04:41 compute-1 nova_compute[225855]: 2026-01-20 15:04:41.722 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: c1db561e-0c8b-4cfb-97bb-55f8d4731b87] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:04:41 compute-1 nova_compute[225855]: 2026-01-20 15:04:41.726 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: c1db561e-0c8b-4cfb-97bb-55f8d4731b87] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 15:04:41 compute-1 nova_compute[225855]: 2026-01-20 15:04:41.755 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: c1db561e-0c8b-4cfb-97bb-55f8d4731b87] During sync_power_state the instance has a pending task (rescuing). Skip.
Jan 20 15:04:41 compute-1 nova_compute[225855]: 2026-01-20 15:04:41.756 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768921481.6660094, c1db561e-0c8b-4cfb-97bb-55f8d4731b87 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:04:41 compute-1 nova_compute[225855]: 2026-01-20 15:04:41.756 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: c1db561e-0c8b-4cfb-97bb-55f8d4731b87] VM Started (Lifecycle Event)
Jan 20 15:04:41 compute-1 nova_compute[225855]: 2026-01-20 15:04:41.812 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: c1db561e-0c8b-4cfb-97bb-55f8d4731b87] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:04:41 compute-1 nova_compute[225855]: 2026-01-20 15:04:41.816 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: c1db561e-0c8b-4cfb-97bb-55f8d4731b87] Synchronizing instance power state after lifecycle event "Started"; current vm_state: rescued, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 15:04:42 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:04:42 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:04:42 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:04:42.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:04:42 compute-1 nova_compute[225855]: 2026-01-20 15:04:42.210 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:04:42 compute-1 nova_compute[225855]: 2026-01-20 15:04:42.329 225859 DEBUG nova.compute.manager [req-fe92bba3-7e00-4163-b400-ede3e41863e4 req-9cff9c6f-ae26-433d-8db1-e5a1a6137130 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: c1db561e-0c8b-4cfb-97bb-55f8d4731b87] Received event network-vif-plugged-22663aa0-a7f4-431c-b5a9-4433da2dff09 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:04:42 compute-1 nova_compute[225855]: 2026-01-20 15:04:42.330 225859 DEBUG oslo_concurrency.lockutils [req-fe92bba3-7e00-4163-b400-ede3e41863e4 req-9cff9c6f-ae26-433d-8db1-e5a1a6137130 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "c1db561e-0c8b-4cfb-97bb-55f8d4731b87-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:04:42 compute-1 nova_compute[225855]: 2026-01-20 15:04:42.331 225859 DEBUG oslo_concurrency.lockutils [req-fe92bba3-7e00-4163-b400-ede3e41863e4 req-9cff9c6f-ae26-433d-8db1-e5a1a6137130 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "c1db561e-0c8b-4cfb-97bb-55f8d4731b87-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:04:42 compute-1 nova_compute[225855]: 2026-01-20 15:04:42.331 225859 DEBUG oslo_concurrency.lockutils [req-fe92bba3-7e00-4163-b400-ede3e41863e4 req-9cff9c6f-ae26-433d-8db1-e5a1a6137130 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "c1db561e-0c8b-4cfb-97bb-55f8d4731b87-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:04:42 compute-1 nova_compute[225855]: 2026-01-20 15:04:42.331 225859 DEBUG nova.compute.manager [req-fe92bba3-7e00-4163-b400-ede3e41863e4 req-9cff9c6f-ae26-433d-8db1-e5a1a6137130 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: c1db561e-0c8b-4cfb-97bb-55f8d4731b87] No waiting events found dispatching network-vif-plugged-22663aa0-a7f4-431c-b5a9-4433da2dff09 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:04:42 compute-1 nova_compute[225855]: 2026-01-20 15:04:42.331 225859 WARNING nova.compute.manager [req-fe92bba3-7e00-4163-b400-ede3e41863e4 req-9cff9c6f-ae26-433d-8db1-e5a1a6137130 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: c1db561e-0c8b-4cfb-97bb-55f8d4731b87] Received unexpected event network-vif-plugged-22663aa0-a7f4-431c-b5a9-4433da2dff09 for instance with vm_state rescued and task_state None.
Jan 20 15:04:42 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:04:42 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:04:42 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:04:42.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:04:43 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e353 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:04:43 compute-1 ceph-mon[81775]: pgmap v2411: 321 pgs: 321 active+clean; 451 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 253 KiB/s rd, 1.6 MiB/s wr, 63 op/s
Jan 20 15:04:43 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/2179692119' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:04:43 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/3343341651' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:04:43 compute-1 nova_compute[225855]: 2026-01-20 15:04:43.840 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:04:44 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:04:44 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:04:44 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:04:44.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:04:44 compute-1 nova_compute[225855]: 2026-01-20 15:04:44.403 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:04:44 compute-1 nova_compute[225855]: 2026-01-20 15:04:44.404 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 15:04:44 compute-1 nova_compute[225855]: 2026-01-20 15:04:44.515 225859 DEBUG nova.compute.manager [req-0d3777e8-a9ea-4f8a-8a76-b2b5aa3c1f40 req-d260426f-7eec-42b0-b134-23d162b144d4 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: c1db561e-0c8b-4cfb-97bb-55f8d4731b87] Received event network-vif-plugged-22663aa0-a7f4-431c-b5a9-4433da2dff09 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:04:44 compute-1 nova_compute[225855]: 2026-01-20 15:04:44.517 225859 DEBUG oslo_concurrency.lockutils [req-0d3777e8-a9ea-4f8a-8a76-b2b5aa3c1f40 req-d260426f-7eec-42b0-b134-23d162b144d4 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "c1db561e-0c8b-4cfb-97bb-55f8d4731b87-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:04:44 compute-1 nova_compute[225855]: 2026-01-20 15:04:44.517 225859 DEBUG oslo_concurrency.lockutils [req-0d3777e8-a9ea-4f8a-8a76-b2b5aa3c1f40 req-d260426f-7eec-42b0-b134-23d162b144d4 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "c1db561e-0c8b-4cfb-97bb-55f8d4731b87-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:04:44 compute-1 nova_compute[225855]: 2026-01-20 15:04:44.517 225859 DEBUG oslo_concurrency.lockutils [req-0d3777e8-a9ea-4f8a-8a76-b2b5aa3c1f40 req-d260426f-7eec-42b0-b134-23d162b144d4 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "c1db561e-0c8b-4cfb-97bb-55f8d4731b87-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:04:44 compute-1 nova_compute[225855]: 2026-01-20 15:04:44.518 225859 DEBUG nova.compute.manager [req-0d3777e8-a9ea-4f8a-8a76-b2b5aa3c1f40 req-d260426f-7eec-42b0-b134-23d162b144d4 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: c1db561e-0c8b-4cfb-97bb-55f8d4731b87] No waiting events found dispatching network-vif-plugged-22663aa0-a7f4-431c-b5a9-4433da2dff09 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:04:44 compute-1 nova_compute[225855]: 2026-01-20 15:04:44.518 225859 WARNING nova.compute.manager [req-0d3777e8-a9ea-4f8a-8a76-b2b5aa3c1f40 req-d260426f-7eec-42b0-b134-23d162b144d4 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: c1db561e-0c8b-4cfb-97bb-55f8d4731b87] Received unexpected event network-vif-plugged-22663aa0-a7f4-431c-b5a9-4433da2dff09 for instance with vm_state rescued and task_state None.
Jan 20 15:04:44 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:04:44 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:04:44 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:04:44.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:04:44 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:04:44.832 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5ffd4ac3-9266-4927-98ad-20a17782c725, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '50'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:04:45 compute-1 nova_compute[225855]: 2026-01-20 15:04:45.104 225859 INFO nova.compute.manager [None req-06ed0cbf-d946-4a7b-81ed-059b0fd3853e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: c1db561e-0c8b-4cfb-97bb-55f8d4731b87] Unrescuing
Jan 20 15:04:45 compute-1 nova_compute[225855]: 2026-01-20 15:04:45.105 225859 DEBUG oslo_concurrency.lockutils [None req-06ed0cbf-d946-4a7b-81ed-059b0fd3853e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Acquiring lock "refresh_cache-c1db561e-0c8b-4cfb-97bb-55f8d4731b87" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:04:45 compute-1 nova_compute[225855]: 2026-01-20 15:04:45.105 225859 DEBUG oslo_concurrency.lockutils [None req-06ed0cbf-d946-4a7b-81ed-059b0fd3853e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Acquired lock "refresh_cache-c1db561e-0c8b-4cfb-97bb-55f8d4731b87" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:04:45 compute-1 nova_compute[225855]: 2026-01-20 15:04:45.105 225859 DEBUG nova.network.neutron [None req-06ed0cbf-d946-4a7b-81ed-059b0fd3853e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: c1db561e-0c8b-4cfb-97bb-55f8d4731b87] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 15:04:45 compute-1 ceph-mon[81775]: pgmap v2412: 321 pgs: 321 active+clean; 482 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 1.9 MiB/s wr, 94 op/s
Jan 20 15:04:46 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:04:46 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:04:46 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:04:46.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:04:46 compute-1 nova_compute[225855]: 2026-01-20 15:04:46.341 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:04:46 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e354 e354: 3 total, 3 up, 3 in
Jan 20 15:04:46 compute-1 ceph-mon[81775]: pgmap v2413: 321 pgs: 321 active+clean; 577 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 5.9 MiB/s rd, 5.7 MiB/s wr, 208 op/s
Jan 20 15:04:46 compute-1 ceph-mon[81775]: osdmap e354: 3 total, 3 up, 3 in
Jan 20 15:04:46 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:04:46 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:04:46 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:04:46.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:04:47 compute-1 nova_compute[225855]: 2026-01-20 15:04:47.212 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:04:47 compute-1 nova_compute[225855]: 2026-01-20 15:04:47.341 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:04:48 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e354 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:04:48 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:04:48 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:04:48 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:04:48.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:04:48 compute-1 nova_compute[225855]: 2026-01-20 15:04:48.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:04:48 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:04:48 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:04:48 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:04:48.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:04:48 compute-1 ceph-mon[81775]: pgmap v2415: 321 pgs: 321 active+clean; 577 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 7.1 MiB/s rd, 6.8 MiB/s wr, 246 op/s
Jan 20 15:04:48 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/259831174' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:04:48 compute-1 nova_compute[225855]: 2026-01-20 15:04:48.844 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:04:49 compute-1 nova_compute[225855]: 2026-01-20 15:04:49.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:04:49 compute-1 nova_compute[225855]: 2026-01-20 15:04:49.340 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 15:04:49 compute-1 nova_compute[225855]: 2026-01-20 15:04:49.340 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 20 15:04:49 compute-1 nova_compute[225855]: 2026-01-20 15:04:49.579 225859 DEBUG nova.network.neutron [None req-06ed0cbf-d946-4a7b-81ed-059b0fd3853e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: c1db561e-0c8b-4cfb-97bb-55f8d4731b87] Updating instance_info_cache with network_info: [{"id": "22663aa0-a7f4-431c-b5a9-4433da2dff09", "address": "fa:16:3e:2e:ec:9e", "network": {"id": "671e28d0-0b9e-41e0-b5e0-db1ccd4717ec", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-884777184-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "63555e5851564db08c6429231d264f2c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap22663aa0-a7", "ovs_interfaceid": "22663aa0-a7f4-431c-b5a9-4433da2dff09", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:04:49 compute-1 nova_compute[225855]: 2026-01-20 15:04:49.637 225859 DEBUG oslo_concurrency.lockutils [None req-06ed0cbf-d946-4a7b-81ed-059b0fd3853e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Releasing lock "refresh_cache-c1db561e-0c8b-4cfb-97bb-55f8d4731b87" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:04:49 compute-1 nova_compute[225855]: 2026-01-20 15:04:49.638 225859 DEBUG nova.objects.instance [None req-06ed0cbf-d946-4a7b-81ed-059b0fd3853e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Lazy-loading 'flavor' on Instance uuid c1db561e-0c8b-4cfb-97bb-55f8d4731b87 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:04:49 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/2211886453' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:04:49 compute-1 kernel: tap22663aa0-a7 (unregistering): left promiscuous mode
Jan 20 15:04:49 compute-1 NetworkManager[49104]: <info>  [1768921489.7742] device (tap22663aa0-a7): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 20 15:04:49 compute-1 nova_compute[225855]: 2026-01-20 15:04:49.781 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:04:49 compute-1 ovn_controller[130490]: 2026-01-20T15:04:49Z|00669|binding|INFO|Releasing lport 22663aa0-a7f4-431c-b5a9-4433da2dff09 from this chassis (sb_readonly=0)
Jan 20 15:04:49 compute-1 ovn_controller[130490]: 2026-01-20T15:04:49Z|00670|binding|INFO|Setting lport 22663aa0-a7f4-431c-b5a9-4433da2dff09 down in Southbound
Jan 20 15:04:49 compute-1 ovn_controller[130490]: 2026-01-20T15:04:49Z|00671|binding|INFO|Removing iface tap22663aa0-a7 ovn-installed in OVS
Jan 20 15:04:49 compute-1 nova_compute[225855]: 2026-01-20 15:04:49.783 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:04:49 compute-1 nova_compute[225855]: 2026-01-20 15:04:49.801 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:04:49 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:04:49.822 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2e:ec:9e 10.100.0.10'], port_security=['fa:16:3e:2e:ec:9e 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'c1db561e-0c8b-4cfb-97bb-55f8d4731b87', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-671e28d0-0b9e-41e0-b5e0-db1ccd4717ec', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '63555e5851564db08c6429231d264f2c', 'neutron:revision_number': '6', 'neutron:security_group_ids': '7e54c470-6a6f-454e-ae01-9d2d59b2c74d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=248fa32c-94be-4e1b-b4d3-cb9fac0ec155, chassis=[], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], logical_port=22663aa0-a7f4-431c-b5a9-4433da2dff09) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:04:49 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:04:49.823 140354 INFO neutron.agent.ovn.metadata.agent [-] Port 22663aa0-a7f4-431c-b5a9-4433da2dff09 in datapath 671e28d0-0b9e-41e0-b5e0-db1ccd4717ec unbound from our chassis
Jan 20 15:04:49 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:04:49.825 140354 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 671e28d0-0b9e-41e0-b5e0-db1ccd4717ec
Jan 20 15:04:49 compute-1 systemd[1]: machine-qemu\x2d78\x2dinstance\x2d0000009f.scope: Deactivated successfully.
Jan 20 15:04:49 compute-1 systemd[1]: machine-qemu\x2d78\x2dinstance\x2d0000009f.scope: Consumed 8.934s CPU time.
Jan 20 15:04:49 compute-1 systemd-machined[194361]: Machine qemu-78-instance-0000009f terminated.
Jan 20 15:04:49 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:04:49.842 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[7d478294-3839-46f8-a893-125bc908d6ef]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:04:49 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:04:49.873 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[f0e16032-8a1f-4a46-8fbf-5e883b325846]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:04:49 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:04:49.878 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[e70a0e25-5834-4b2e-8683-15cea42b974f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:04:49 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:04:49.905 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[5e74f77e-676c-4090-9f4f-68fc33f90579]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:04:49 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:04:49.931 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[4035c115-9c83-4ef3-be3f-49db93cf76ab]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap671e28d0-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2b:4e:69'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 11, 'rx_bytes': 532, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 11, 'rx_bytes': 532, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 184], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 642759, 'reachable_time': 32213, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 289971, 'error': None, 'target': 'ovnmeta-671e28d0-0b9e-41e0-b5e0-db1ccd4717ec', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:04:49 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:04:49.954 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[5e4bd608-1131-4728-a579-d110351a63f5]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap671e28d0-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 642770, 'tstamp': 642770}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 289974, 'error': None, 'target': 'ovnmeta-671e28d0-0b9e-41e0-b5e0-db1ccd4717ec', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap671e28d0-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 642773, 'tstamp': 642773}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 289974, 'error': None, 'target': 'ovnmeta-671e28d0-0b9e-41e0-b5e0-db1ccd4717ec', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:04:49 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:04:49.956 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap671e28d0-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:04:49 compute-1 nova_compute[225855]: 2026-01-20 15:04:49.958 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:04:49 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:04:49.962 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap671e28d0-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:04:49 compute-1 nova_compute[225855]: 2026-01-20 15:04:49.961 225859 INFO nova.virt.libvirt.driver [-] [instance: c1db561e-0c8b-4cfb-97bb-55f8d4731b87] Instance destroyed successfully.
Jan 20 15:04:49 compute-1 nova_compute[225855]: 2026-01-20 15:04:49.961 225859 DEBUG nova.objects.instance [None req-06ed0cbf-d946-4a7b-81ed-059b0fd3853e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Lazy-loading 'numa_topology' on Instance uuid c1db561e-0c8b-4cfb-97bb-55f8d4731b87 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:04:49 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:04:49.962 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 15:04:49 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:04:49.962 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap671e28d0-00, col_values=(('external_ids', {'iface-id': 'a8628d9e-196f-4b84-89fd-d3a41792b8a0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:04:49 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:04:49.963 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 15:04:49 compute-1 nova_compute[225855]: 2026-01-20 15:04:49.963 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:04:50 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:04:50 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:04:50 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:04:50.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:04:50 compute-1 kernel: tap22663aa0-a7: entered promiscuous mode
Jan 20 15:04:50 compute-1 systemd-udevd[289962]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 15:04:50 compute-1 NetworkManager[49104]: <info>  [1768921490.1803] manager: (tap22663aa0-a7): new Tun device (/org/freedesktop/NetworkManager/Devices/284)
Jan 20 15:04:50 compute-1 ovn_controller[130490]: 2026-01-20T15:04:50Z|00672|binding|INFO|Claiming lport 22663aa0-a7f4-431c-b5a9-4433da2dff09 for this chassis.
Jan 20 15:04:50 compute-1 ovn_controller[130490]: 2026-01-20T15:04:50Z|00673|binding|INFO|22663aa0-a7f4-431c-b5a9-4433da2dff09: Claiming fa:16:3e:2e:ec:9e 10.100.0.10
Jan 20 15:04:50 compute-1 nova_compute[225855]: 2026-01-20 15:04:50.180 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:04:50 compute-1 NetworkManager[49104]: <info>  [1768921490.1903] device (tap22663aa0-a7): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 20 15:04:50 compute-1 NetworkManager[49104]: <info>  [1768921490.1910] device (tap22663aa0-a7): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 20 15:04:50 compute-1 ovn_controller[130490]: 2026-01-20T15:04:50Z|00674|binding|INFO|Setting lport 22663aa0-a7f4-431c-b5a9-4433da2dff09 ovn-installed in OVS
Jan 20 15:04:50 compute-1 nova_compute[225855]: 2026-01-20 15:04:50.199 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "refresh_cache-474cec75-3b01-411a-9074-75859d2a9ddf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:04:50 compute-1 nova_compute[225855]: 2026-01-20 15:04:50.199 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquired lock "refresh_cache-474cec75-3b01-411a-9074-75859d2a9ddf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:04:50 compute-1 nova_compute[225855]: 2026-01-20 15:04:50.199 225859 DEBUG nova.network.neutron [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] [instance: 474cec75-3b01-411a-9074-75859d2a9ddf] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 20 15:04:50 compute-1 nova_compute[225855]: 2026-01-20 15:04:50.199 225859 DEBUG nova.objects.instance [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 474cec75-3b01-411a-9074-75859d2a9ddf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:04:50 compute-1 nova_compute[225855]: 2026-01-20 15:04:50.201 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:04:50 compute-1 systemd-machined[194361]: New machine qemu-79-instance-0000009f.
Jan 20 15:04:50 compute-1 systemd[1]: Started Virtual Machine qemu-79-instance-0000009f.
Jan 20 15:04:50 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:04:50.346 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2e:ec:9e 10.100.0.10'], port_security=['fa:16:3e:2e:ec:9e 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'c1db561e-0c8b-4cfb-97bb-55f8d4731b87', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-671e28d0-0b9e-41e0-b5e0-db1ccd4717ec', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '63555e5851564db08c6429231d264f2c', 'neutron:revision_number': '6', 'neutron:security_group_ids': '7e54c470-6a6f-454e-ae01-9d2d59b2c74d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=248fa32c-94be-4e1b-b4d3-cb9fac0ec155, chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], logical_port=22663aa0-a7f4-431c-b5a9-4433da2dff09) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:04:50 compute-1 ovn_controller[130490]: 2026-01-20T15:04:50Z|00675|binding|INFO|Setting lport 22663aa0-a7f4-431c-b5a9-4433da2dff09 up in Southbound
Jan 20 15:04:50 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:04:50.348 140354 INFO neutron.agent.ovn.metadata.agent [-] Port 22663aa0-a7f4-431c-b5a9-4433da2dff09 in datapath 671e28d0-0b9e-41e0-b5e0-db1ccd4717ec bound to our chassis
Jan 20 15:04:50 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:04:50.349 140354 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 671e28d0-0b9e-41e0-b5e0-db1ccd4717ec
Jan 20 15:04:50 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:04:50.363 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[7806279d-da7b-4a6b-9088-aac96a7f29eb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:04:50 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:04:50.396 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[d47c37ba-5f9f-41a8-bd48-0218b4a063bf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:04:50 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:04:50.399 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[4ad1a253-4891-43fc-932e-33c44affd2ee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:04:50 compute-1 nova_compute[225855]: 2026-01-20 15:04:50.423 225859 DEBUG nova.compute.manager [req-c15dff0f-234c-4e23-9290-c6a4e8977476 req-5ed0313a-c8ae-4859-b39f-c33d37781ff4 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: c1db561e-0c8b-4cfb-97bb-55f8d4731b87] Received event network-vif-unplugged-22663aa0-a7f4-431c-b5a9-4433da2dff09 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:04:50 compute-1 nova_compute[225855]: 2026-01-20 15:04:50.424 225859 DEBUG oslo_concurrency.lockutils [req-c15dff0f-234c-4e23-9290-c6a4e8977476 req-5ed0313a-c8ae-4859-b39f-c33d37781ff4 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "c1db561e-0c8b-4cfb-97bb-55f8d4731b87-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:04:50 compute-1 nova_compute[225855]: 2026-01-20 15:04:50.424 225859 DEBUG oslo_concurrency.lockutils [req-c15dff0f-234c-4e23-9290-c6a4e8977476 req-5ed0313a-c8ae-4859-b39f-c33d37781ff4 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "c1db561e-0c8b-4cfb-97bb-55f8d4731b87-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:04:50 compute-1 nova_compute[225855]: 2026-01-20 15:04:50.425 225859 DEBUG oslo_concurrency.lockutils [req-c15dff0f-234c-4e23-9290-c6a4e8977476 req-5ed0313a-c8ae-4859-b39f-c33d37781ff4 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "c1db561e-0c8b-4cfb-97bb-55f8d4731b87-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:04:50 compute-1 nova_compute[225855]: 2026-01-20 15:04:50.425 225859 DEBUG nova.compute.manager [req-c15dff0f-234c-4e23-9290-c6a4e8977476 req-5ed0313a-c8ae-4859-b39f-c33d37781ff4 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: c1db561e-0c8b-4cfb-97bb-55f8d4731b87] No waiting events found dispatching network-vif-unplugged-22663aa0-a7f4-431c-b5a9-4433da2dff09 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:04:50 compute-1 nova_compute[225855]: 2026-01-20 15:04:50.425 225859 WARNING nova.compute.manager [req-c15dff0f-234c-4e23-9290-c6a4e8977476 req-5ed0313a-c8ae-4859-b39f-c33d37781ff4 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: c1db561e-0c8b-4cfb-97bb-55f8d4731b87] Received unexpected event network-vif-unplugged-22663aa0-a7f4-431c-b5a9-4433da2dff09 for instance with vm_state rescued and task_state unrescuing.
Jan 20 15:04:50 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:04:50.431 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[bc7ea5a0-60c4-46ca-8f14-4805ebb31171]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:04:50 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:04:50.447 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[833de69c-2d3b-42c2-89fb-51e61b7f9468]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap671e28d0-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2b:4e:69'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 13, 'rx_bytes': 532, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 13, 'rx_bytes': 532, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 184], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 642759, 'reachable_time': 32213, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 290004, 'error': None, 'target': 'ovnmeta-671e28d0-0b9e-41e0-b5e0-db1ccd4717ec', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:04:50 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:04:50.471 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[2c0e4051-9f4f-45c9-b2b2-95e804491e49]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap671e28d0-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 642770, 'tstamp': 642770}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 290005, 'error': None, 'target': 'ovnmeta-671e28d0-0b9e-41e0-b5e0-db1ccd4717ec', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap671e28d0-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 642773, 'tstamp': 642773}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 290005, 'error': None, 'target': 'ovnmeta-671e28d0-0b9e-41e0-b5e0-db1ccd4717ec', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:04:50 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:04:50.472 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap671e28d0-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:04:50 compute-1 nova_compute[225855]: 2026-01-20 15:04:50.473 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:04:50 compute-1 nova_compute[225855]: 2026-01-20 15:04:50.475 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:04:50 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:04:50.475 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap671e28d0-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:04:50 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:04:50.475 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 15:04:50 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:04:50.476 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap671e28d0-00, col_values=(('external_ids', {'iface-id': 'a8628d9e-196f-4b84-89fd-d3a41792b8a0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:04:50 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:04:50.476 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 15:04:50 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:04:50 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:04:50 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:04:50.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:04:50 compute-1 ceph-mon[81775]: pgmap v2416: 321 pgs: 321 active+clean; 522 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 8.9 MiB/s rd, 6.8 MiB/s wr, 321 op/s
Jan 20 15:04:50 compute-1 nova_compute[225855]: 2026-01-20 15:04:50.682 225859 DEBUG nova.virt.libvirt.host [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Removed pending event for c1db561e-0c8b-4cfb-97bb-55f8d4731b87 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Jan 20 15:04:50 compute-1 nova_compute[225855]: 2026-01-20 15:04:50.682 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768921490.6819882, c1db561e-0c8b-4cfb-97bb-55f8d4731b87 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:04:50 compute-1 nova_compute[225855]: 2026-01-20 15:04:50.682 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: c1db561e-0c8b-4cfb-97bb-55f8d4731b87] VM Resumed (Lifecycle Event)
Jan 20 15:04:50 compute-1 nova_compute[225855]: 2026-01-20 15:04:50.729 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: c1db561e-0c8b-4cfb-97bb-55f8d4731b87] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:04:50 compute-1 nova_compute[225855]: 2026-01-20 15:04:50.734 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: c1db561e-0c8b-4cfb-97bb-55f8d4731b87] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: rescued, current task_state: unrescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 15:04:50 compute-1 nova_compute[225855]: 2026-01-20 15:04:50.776 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: c1db561e-0c8b-4cfb-97bb-55f8d4731b87] During sync_power_state the instance has a pending task (unrescuing). Skip.
Jan 20 15:04:50 compute-1 nova_compute[225855]: 2026-01-20 15:04:50.776 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768921490.6841216, c1db561e-0c8b-4cfb-97bb-55f8d4731b87 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:04:50 compute-1 nova_compute[225855]: 2026-01-20 15:04:50.776 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: c1db561e-0c8b-4cfb-97bb-55f8d4731b87] VM Started (Lifecycle Event)
Jan 20 15:04:50 compute-1 nova_compute[225855]: 2026-01-20 15:04:50.799 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: c1db561e-0c8b-4cfb-97bb-55f8d4731b87] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:04:50 compute-1 nova_compute[225855]: 2026-01-20 15:04:50.803 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: c1db561e-0c8b-4cfb-97bb-55f8d4731b87] Synchronizing instance power state after lifecycle event "Started"; current vm_state: rescued, current task_state: unrescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 15:04:50 compute-1 nova_compute[225855]: 2026-01-20 15:04:50.854 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: c1db561e-0c8b-4cfb-97bb-55f8d4731b87] During sync_power_state the instance has a pending task (unrescuing). Skip.
Jan 20 15:04:51 compute-1 nova_compute[225855]: 2026-01-20 15:04:51.044 225859 DEBUG nova.compute.manager [None req-06ed0cbf-d946-4a7b-81ed-059b0fd3853e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: c1db561e-0c8b-4cfb-97bb-55f8d4731b87] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:04:52 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:04:52 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:04:52 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:04:52.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:04:52 compute-1 nova_compute[225855]: 2026-01-20 15:04:52.214 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:04:52 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:04:52 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:04:52 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:04:52.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:04:52 compute-1 nova_compute[225855]: 2026-01-20 15:04:52.700 225859 DEBUG nova.compute.manager [req-76a19a51-5bb1-4e29-b020-777903b3b7e1 req-d500da47-6b0a-482c-a946-31c3b5c4f34c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: c1db561e-0c8b-4cfb-97bb-55f8d4731b87] Received event network-vif-plugged-22663aa0-a7f4-431c-b5a9-4433da2dff09 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:04:52 compute-1 nova_compute[225855]: 2026-01-20 15:04:52.701 225859 DEBUG oslo_concurrency.lockutils [req-76a19a51-5bb1-4e29-b020-777903b3b7e1 req-d500da47-6b0a-482c-a946-31c3b5c4f34c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "c1db561e-0c8b-4cfb-97bb-55f8d4731b87-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:04:52 compute-1 nova_compute[225855]: 2026-01-20 15:04:52.701 225859 DEBUG oslo_concurrency.lockutils [req-76a19a51-5bb1-4e29-b020-777903b3b7e1 req-d500da47-6b0a-482c-a946-31c3b5c4f34c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "c1db561e-0c8b-4cfb-97bb-55f8d4731b87-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:04:52 compute-1 nova_compute[225855]: 2026-01-20 15:04:52.701 225859 DEBUG oslo_concurrency.lockutils [req-76a19a51-5bb1-4e29-b020-777903b3b7e1 req-d500da47-6b0a-482c-a946-31c3b5c4f34c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "c1db561e-0c8b-4cfb-97bb-55f8d4731b87-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:04:52 compute-1 nova_compute[225855]: 2026-01-20 15:04:52.702 225859 DEBUG nova.compute.manager [req-76a19a51-5bb1-4e29-b020-777903b3b7e1 req-d500da47-6b0a-482c-a946-31c3b5c4f34c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: c1db561e-0c8b-4cfb-97bb-55f8d4731b87] No waiting events found dispatching network-vif-plugged-22663aa0-a7f4-431c-b5a9-4433da2dff09 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:04:52 compute-1 nova_compute[225855]: 2026-01-20 15:04:52.702 225859 WARNING nova.compute.manager [req-76a19a51-5bb1-4e29-b020-777903b3b7e1 req-d500da47-6b0a-482c-a946-31c3b5c4f34c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: c1db561e-0c8b-4cfb-97bb-55f8d4731b87] Received unexpected event network-vif-plugged-22663aa0-a7f4-431c-b5a9-4433da2dff09 for instance with vm_state active and task_state None.
Jan 20 15:04:52 compute-1 nova_compute[225855]: 2026-01-20 15:04:52.702 225859 DEBUG nova.compute.manager [req-76a19a51-5bb1-4e29-b020-777903b3b7e1 req-d500da47-6b0a-482c-a946-31c3b5c4f34c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: c1db561e-0c8b-4cfb-97bb-55f8d4731b87] Received event network-vif-plugged-22663aa0-a7f4-431c-b5a9-4433da2dff09 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:04:52 compute-1 nova_compute[225855]: 2026-01-20 15:04:52.703 225859 DEBUG oslo_concurrency.lockutils [req-76a19a51-5bb1-4e29-b020-777903b3b7e1 req-d500da47-6b0a-482c-a946-31c3b5c4f34c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "c1db561e-0c8b-4cfb-97bb-55f8d4731b87-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:04:52 compute-1 nova_compute[225855]: 2026-01-20 15:04:52.703 225859 DEBUG oslo_concurrency.lockutils [req-76a19a51-5bb1-4e29-b020-777903b3b7e1 req-d500da47-6b0a-482c-a946-31c3b5c4f34c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "c1db561e-0c8b-4cfb-97bb-55f8d4731b87-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:04:52 compute-1 nova_compute[225855]: 2026-01-20 15:04:52.703 225859 DEBUG oslo_concurrency.lockutils [req-76a19a51-5bb1-4e29-b020-777903b3b7e1 req-d500da47-6b0a-482c-a946-31c3b5c4f34c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "c1db561e-0c8b-4cfb-97bb-55f8d4731b87-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:04:52 compute-1 nova_compute[225855]: 2026-01-20 15:04:52.703 225859 DEBUG nova.compute.manager [req-76a19a51-5bb1-4e29-b020-777903b3b7e1 req-d500da47-6b0a-482c-a946-31c3b5c4f34c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: c1db561e-0c8b-4cfb-97bb-55f8d4731b87] No waiting events found dispatching network-vif-plugged-22663aa0-a7f4-431c-b5a9-4433da2dff09 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:04:52 compute-1 nova_compute[225855]: 2026-01-20 15:04:52.704 225859 WARNING nova.compute.manager [req-76a19a51-5bb1-4e29-b020-777903b3b7e1 req-d500da47-6b0a-482c-a946-31c3b5c4f34c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: c1db561e-0c8b-4cfb-97bb-55f8d4731b87] Received unexpected event network-vif-plugged-22663aa0-a7f4-431c-b5a9-4433da2dff09 for instance with vm_state active and task_state None.
Jan 20 15:04:52 compute-1 nova_compute[225855]: 2026-01-20 15:04:52.704 225859 DEBUG nova.compute.manager [req-76a19a51-5bb1-4e29-b020-777903b3b7e1 req-d500da47-6b0a-482c-a946-31c3b5c4f34c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: c1db561e-0c8b-4cfb-97bb-55f8d4731b87] Received event network-vif-plugged-22663aa0-a7f4-431c-b5a9-4433da2dff09 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:04:52 compute-1 nova_compute[225855]: 2026-01-20 15:04:52.704 225859 DEBUG oslo_concurrency.lockutils [req-76a19a51-5bb1-4e29-b020-777903b3b7e1 req-d500da47-6b0a-482c-a946-31c3b5c4f34c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "c1db561e-0c8b-4cfb-97bb-55f8d4731b87-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:04:52 compute-1 nova_compute[225855]: 2026-01-20 15:04:52.704 225859 DEBUG oslo_concurrency.lockutils [req-76a19a51-5bb1-4e29-b020-777903b3b7e1 req-d500da47-6b0a-482c-a946-31c3b5c4f34c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "c1db561e-0c8b-4cfb-97bb-55f8d4731b87-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:04:52 compute-1 nova_compute[225855]: 2026-01-20 15:04:52.705 225859 DEBUG oslo_concurrency.lockutils [req-76a19a51-5bb1-4e29-b020-777903b3b7e1 req-d500da47-6b0a-482c-a946-31c3b5c4f34c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "c1db561e-0c8b-4cfb-97bb-55f8d4731b87-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:04:52 compute-1 nova_compute[225855]: 2026-01-20 15:04:52.705 225859 DEBUG nova.compute.manager [req-76a19a51-5bb1-4e29-b020-777903b3b7e1 req-d500da47-6b0a-482c-a946-31c3b5c4f34c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: c1db561e-0c8b-4cfb-97bb-55f8d4731b87] No waiting events found dispatching network-vif-plugged-22663aa0-a7f4-431c-b5a9-4433da2dff09 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:04:52 compute-1 nova_compute[225855]: 2026-01-20 15:04:52.705 225859 WARNING nova.compute.manager [req-76a19a51-5bb1-4e29-b020-777903b3b7e1 req-d500da47-6b0a-482c-a946-31c3b5c4f34c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: c1db561e-0c8b-4cfb-97bb-55f8d4731b87] Received unexpected event network-vif-plugged-22663aa0-a7f4-431c-b5a9-4433da2dff09 for instance with vm_state active and task_state None.
Jan 20 15:04:53 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e354 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:04:53 compute-1 nova_compute[225855]: 2026-01-20 15:04:53.012 225859 DEBUG oslo_concurrency.lockutils [None req-c4974407-f15e-4815-90b7-11ad883cd1a8 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Acquiring lock "c1db561e-0c8b-4cfb-97bb-55f8d4731b87" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:04:53 compute-1 nova_compute[225855]: 2026-01-20 15:04:53.013 225859 DEBUG oslo_concurrency.lockutils [None req-c4974407-f15e-4815-90b7-11ad883cd1a8 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Lock "c1db561e-0c8b-4cfb-97bb-55f8d4731b87" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:04:53 compute-1 nova_compute[225855]: 2026-01-20 15:04:53.014 225859 DEBUG oslo_concurrency.lockutils [None req-c4974407-f15e-4815-90b7-11ad883cd1a8 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Acquiring lock "c1db561e-0c8b-4cfb-97bb-55f8d4731b87-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:04:53 compute-1 nova_compute[225855]: 2026-01-20 15:04:53.014 225859 DEBUG oslo_concurrency.lockutils [None req-c4974407-f15e-4815-90b7-11ad883cd1a8 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Lock "c1db561e-0c8b-4cfb-97bb-55f8d4731b87-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:04:53 compute-1 nova_compute[225855]: 2026-01-20 15:04:53.015 225859 DEBUG oslo_concurrency.lockutils [None req-c4974407-f15e-4815-90b7-11ad883cd1a8 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Lock "c1db561e-0c8b-4cfb-97bb-55f8d4731b87-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:04:53 compute-1 nova_compute[225855]: 2026-01-20 15:04:53.016 225859 INFO nova.compute.manager [None req-c4974407-f15e-4815-90b7-11ad883cd1a8 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: c1db561e-0c8b-4cfb-97bb-55f8d4731b87] Terminating instance
Jan 20 15:04:53 compute-1 nova_compute[225855]: 2026-01-20 15:04:53.018 225859 DEBUG nova.compute.manager [None req-c4974407-f15e-4815-90b7-11ad883cd1a8 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: c1db561e-0c8b-4cfb-97bb-55f8d4731b87] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 20 15:04:53 compute-1 kernel: tap22663aa0-a7 (unregistering): left promiscuous mode
Jan 20 15:04:53 compute-1 NetworkManager[49104]: <info>  [1768921493.0698] device (tap22663aa0-a7): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 20 15:04:53 compute-1 ovn_controller[130490]: 2026-01-20T15:04:53Z|00676|binding|INFO|Releasing lport 22663aa0-a7f4-431c-b5a9-4433da2dff09 from this chassis (sb_readonly=0)
Jan 20 15:04:53 compute-1 ovn_controller[130490]: 2026-01-20T15:04:53Z|00677|binding|INFO|Setting lport 22663aa0-a7f4-431c-b5a9-4433da2dff09 down in Southbound
Jan 20 15:04:53 compute-1 ovn_controller[130490]: 2026-01-20T15:04:53Z|00678|binding|INFO|Removing iface tap22663aa0-a7 ovn-installed in OVS
Jan 20 15:04:53 compute-1 nova_compute[225855]: 2026-01-20 15:04:53.080 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:04:53 compute-1 nova_compute[225855]: 2026-01-20 15:04:53.094 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:04:53 compute-1 systemd[1]: machine-qemu\x2d79\x2dinstance\x2d0000009f.scope: Deactivated successfully.
Jan 20 15:04:53 compute-1 systemd[1]: machine-qemu\x2d79\x2dinstance\x2d0000009f.scope: Consumed 2.941s CPU time.
Jan 20 15:04:53 compute-1 systemd-machined[194361]: Machine qemu-79-instance-0000009f terminated.
Jan 20 15:04:53 compute-1 nova_compute[225855]: 2026-01-20 15:04:53.263 225859 INFO nova.virt.libvirt.driver [-] [instance: c1db561e-0c8b-4cfb-97bb-55f8d4731b87] Instance destroyed successfully.
Jan 20 15:04:53 compute-1 nova_compute[225855]: 2026-01-20 15:04:53.264 225859 DEBUG nova.objects.instance [None req-c4974407-f15e-4815-90b7-11ad883cd1a8 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Lazy-loading 'resources' on Instance uuid c1db561e-0c8b-4cfb-97bb-55f8d4731b87 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:04:53 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:04:53.269 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2e:ec:9e 10.100.0.10'], port_security=['fa:16:3e:2e:ec:9e 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'c1db561e-0c8b-4cfb-97bb-55f8d4731b87', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-671e28d0-0b9e-41e0-b5e0-db1ccd4717ec', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '63555e5851564db08c6429231d264f2c', 'neutron:revision_number': '8', 'neutron:security_group_ids': '7e54c470-6a6f-454e-ae01-9d2d59b2c74d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=248fa32c-94be-4e1b-b4d3-cb9fac0ec155, chassis=[], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], logical_port=22663aa0-a7f4-431c-b5a9-4433da2dff09) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:04:53 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:04:53.270 140354 INFO neutron.agent.ovn.metadata.agent [-] Port 22663aa0-a7f4-431c-b5a9-4433da2dff09 in datapath 671e28d0-0b9e-41e0-b5e0-db1ccd4717ec unbound from our chassis
Jan 20 15:04:53 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:04:53.271 140354 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 671e28d0-0b9e-41e0-b5e0-db1ccd4717ec
Jan 20 15:04:53 compute-1 nova_compute[225855]: 2026-01-20 15:04:53.285 225859 DEBUG nova.virt.libvirt.vif [None req-c4974407-f15e-4815-90b7-11ad883cd1a8 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T15:03:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-ServerBootFromVolumeStableRescueTest-server-1670904176',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverbootfromvolumestablerescuetest-server-1670904176',id=159,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-20T15:04:41Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='63555e5851564db08c6429231d264f2c',ramdisk_id='',reservation_id='r-qovaqapn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerBootFromVolumeStableRescueTest-1871371328',owner_user_name='tempest-ServerBootFromVolumeStableRescueTest-1871371328-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T15:04:51Z,user_data=None,user_id='2446e8399b344b29986c1aaf8bf73adf',uuid=c1db561e-0c8b-4cfb-97bb-55f8d4731b87,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "22663aa0-a7f4-431c-b5a9-4433da2dff09", "address": "fa:16:3e:2e:ec:9e", "network": {"id": "671e28d0-0b9e-41e0-b5e0-db1ccd4717ec", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-884777184-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "63555e5851564db08c6429231d264f2c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap22663aa0-a7", "ovs_interfaceid": "22663aa0-a7f4-431c-b5a9-4433da2dff09", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 20 15:04:53 compute-1 nova_compute[225855]: 2026-01-20 15:04:53.285 225859 DEBUG nova.network.os_vif_util [None req-c4974407-f15e-4815-90b7-11ad883cd1a8 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Converting VIF {"id": "22663aa0-a7f4-431c-b5a9-4433da2dff09", "address": "fa:16:3e:2e:ec:9e", "network": {"id": "671e28d0-0b9e-41e0-b5e0-db1ccd4717ec", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-884777184-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "63555e5851564db08c6429231d264f2c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap22663aa0-a7", "ovs_interfaceid": "22663aa0-a7f4-431c-b5a9-4433da2dff09", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 15:04:53 compute-1 nova_compute[225855]: 2026-01-20 15:04:53.286 225859 DEBUG nova.network.os_vif_util [None req-c4974407-f15e-4815-90b7-11ad883cd1a8 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:2e:ec:9e,bridge_name='br-int',has_traffic_filtering=True,id=22663aa0-a7f4-431c-b5a9-4433da2dff09,network=Network(671e28d0-0b9e-41e0-b5e0-db1ccd4717ec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap22663aa0-a7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 15:04:53 compute-1 nova_compute[225855]: 2026-01-20 15:04:53.286 225859 DEBUG os_vif [None req-c4974407-f15e-4815-90b7-11ad883cd1a8 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:2e:ec:9e,bridge_name='br-int',has_traffic_filtering=True,id=22663aa0-a7f4-431c-b5a9-4433da2dff09,network=Network(671e28d0-0b9e-41e0-b5e0-db1ccd4717ec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap22663aa0-a7') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 20 15:04:53 compute-1 nova_compute[225855]: 2026-01-20 15:04:53.288 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:04:53 compute-1 nova_compute[225855]: 2026-01-20 15:04:53.288 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap22663aa0-a7, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:04:53 compute-1 nova_compute[225855]: 2026-01-20 15:04:53.290 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:04:53 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:04:53.291 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[3757cdd5-4f79-4fbb-9658-6b15b462645b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:04:53 compute-1 nova_compute[225855]: 2026-01-20 15:04:53.292 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:04:53 compute-1 nova_compute[225855]: 2026-01-20 15:04:53.294 225859 INFO os_vif [None req-c4974407-f15e-4815-90b7-11ad883cd1a8 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:2e:ec:9e,bridge_name='br-int',has_traffic_filtering=True,id=22663aa0-a7f4-431c-b5a9-4433da2dff09,network=Network(671e28d0-0b9e-41e0-b5e0-db1ccd4717ec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap22663aa0-a7')
Jan 20 15:04:53 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:04:53.319 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[735b1d35-28f0-49aa-b18f-b124c7de9ed7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:04:53 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:04:53.322 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[e4c9414c-a0c5-4291-adf5-3c638fe2ce2b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:04:53 compute-1 ceph-mon[81775]: pgmap v2417: 321 pgs: 321 active+clean; 498 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 9.4 MiB/s rd, 6.8 MiB/s wr, 336 op/s
Jan 20 15:04:53 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:04:53.350 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[52adab2b-dd6b-4da9-b098-57ebd4177e90]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:04:53 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:04:53.371 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[1c79c956-8d9b-4f46-b81b-149736630759]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap671e28d0-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2b:4e:69'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 15, 'rx_bytes': 532, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 15, 'rx_bytes': 532, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 184], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 642759, 'reachable_time': 32213, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 290108, 'error': None, 'target': 'ovnmeta-671e28d0-0b9e-41e0-b5e0-db1ccd4717ec', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:04:53 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:04:53.388 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[37ca0fce-47d3-4f6b-bae7-dc1213df13b1]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap671e28d0-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 642770, 'tstamp': 642770}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 290109, 'error': None, 'target': 'ovnmeta-671e28d0-0b9e-41e0-b5e0-db1ccd4717ec', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap671e28d0-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 642773, 'tstamp': 642773}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 290109, 'error': None, 'target': 'ovnmeta-671e28d0-0b9e-41e0-b5e0-db1ccd4717ec', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:04:53 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:04:53.389 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap671e28d0-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:04:53 compute-1 nova_compute[225855]: 2026-01-20 15:04:53.392 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:04:53 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:04:53.392 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap671e28d0-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:04:53 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:04:53.392 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 15:04:53 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:04:53.392 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap671e28d0-00, col_values=(('external_ids', {'iface-id': 'a8628d9e-196f-4b84-89fd-d3a41792b8a0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:04:53 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:04:53.393 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 15:04:53 compute-1 nova_compute[225855]: 2026-01-20 15:04:53.398 225859 DEBUG nova.network.neutron [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] [instance: 474cec75-3b01-411a-9074-75859d2a9ddf] Updating instance_info_cache with network_info: [{"id": "244332ba-1b58-4d42-98b0-245f9460c50f", "address": "fa:16:3e:6f:36:24", "network": {"id": "671e28d0-0b9e-41e0-b5e0-db1ccd4717ec", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-884777184-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "63555e5851564db08c6429231d264f2c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap244332ba-1b", "ovs_interfaceid": "244332ba-1b58-4d42-98b0-245f9460c50f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:04:53 compute-1 nova_compute[225855]: 2026-01-20 15:04:53.490 225859 INFO nova.virt.libvirt.driver [None req-c4974407-f15e-4815-90b7-11ad883cd1a8 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: c1db561e-0c8b-4cfb-97bb-55f8d4731b87] Deleting instance files /var/lib/nova/instances/c1db561e-0c8b-4cfb-97bb-55f8d4731b87_del
Jan 20 15:04:53 compute-1 nova_compute[225855]: 2026-01-20 15:04:53.491 225859 INFO nova.virt.libvirt.driver [None req-c4974407-f15e-4815-90b7-11ad883cd1a8 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: c1db561e-0c8b-4cfb-97bb-55f8d4731b87] Deletion of /var/lib/nova/instances/c1db561e-0c8b-4cfb-97bb-55f8d4731b87_del complete
Jan 20 15:04:53 compute-1 nova_compute[225855]: 2026-01-20 15:04:53.508 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Releasing lock "refresh_cache-474cec75-3b01-411a-9074-75859d2a9ddf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:04:53 compute-1 nova_compute[225855]: 2026-01-20 15:04:53.509 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] [instance: 474cec75-3b01-411a-9074-75859d2a9ddf] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 20 15:04:53 compute-1 nova_compute[225855]: 2026-01-20 15:04:53.509 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:04:53 compute-1 nova_compute[225855]: 2026-01-20 15:04:53.798 225859 INFO nova.compute.manager [None req-c4974407-f15e-4815-90b7-11ad883cd1a8 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: c1db561e-0c8b-4cfb-97bb-55f8d4731b87] Took 0.78 seconds to destroy the instance on the hypervisor.
Jan 20 15:04:53 compute-1 nova_compute[225855]: 2026-01-20 15:04:53.799 225859 DEBUG oslo.service.loopingcall [None req-c4974407-f15e-4815-90b7-11ad883cd1a8 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 20 15:04:53 compute-1 nova_compute[225855]: 2026-01-20 15:04:53.799 225859 DEBUG nova.compute.manager [-] [instance: c1db561e-0c8b-4cfb-97bb-55f8d4731b87] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 20 15:04:53 compute-1 nova_compute[225855]: 2026-01-20 15:04:53.799 225859 DEBUG nova.network.neutron [-] [instance: c1db561e-0c8b-4cfb-97bb-55f8d4731b87] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 20 15:04:54 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:04:54 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:04:54 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:04:54.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:04:54 compute-1 nova_compute[225855]: 2026-01-20 15:04:54.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:04:54 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/3317582168' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:04:54 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:04:54 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:04:54 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:04:54.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:04:54 compute-1 nova_compute[225855]: 2026-01-20 15:04:54.870 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:04:54 compute-1 nova_compute[225855]: 2026-01-20 15:04:54.871 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:04:54 compute-1 nova_compute[225855]: 2026-01-20 15:04:54.871 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:04:54 compute-1 nova_compute[225855]: 2026-01-20 15:04:54.872 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 15:04:54 compute-1 nova_compute[225855]: 2026-01-20 15:04:54.872 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:04:55 compute-1 nova_compute[225855]: 2026-01-20 15:04:55.320 225859 DEBUG nova.compute.manager [req-1e46f14a-5589-42a6-8996-75241031ac4c req-9593386c-83c8-4ecf-9426-67566f273212 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: c1db561e-0c8b-4cfb-97bb-55f8d4731b87] Received event network-vif-unplugged-22663aa0-a7f4-431c-b5a9-4433da2dff09 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:04:55 compute-1 nova_compute[225855]: 2026-01-20 15:04:55.320 225859 DEBUG oslo_concurrency.lockutils [req-1e46f14a-5589-42a6-8996-75241031ac4c req-9593386c-83c8-4ecf-9426-67566f273212 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "c1db561e-0c8b-4cfb-97bb-55f8d4731b87-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:04:55 compute-1 nova_compute[225855]: 2026-01-20 15:04:55.321 225859 DEBUG oslo_concurrency.lockutils [req-1e46f14a-5589-42a6-8996-75241031ac4c req-9593386c-83c8-4ecf-9426-67566f273212 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "c1db561e-0c8b-4cfb-97bb-55f8d4731b87-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:04:55 compute-1 nova_compute[225855]: 2026-01-20 15:04:55.321 225859 DEBUG oslo_concurrency.lockutils [req-1e46f14a-5589-42a6-8996-75241031ac4c req-9593386c-83c8-4ecf-9426-67566f273212 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "c1db561e-0c8b-4cfb-97bb-55f8d4731b87-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:04:55 compute-1 nova_compute[225855]: 2026-01-20 15:04:55.321 225859 DEBUG nova.compute.manager [req-1e46f14a-5589-42a6-8996-75241031ac4c req-9593386c-83c8-4ecf-9426-67566f273212 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: c1db561e-0c8b-4cfb-97bb-55f8d4731b87] No waiting events found dispatching network-vif-unplugged-22663aa0-a7f4-431c-b5a9-4433da2dff09 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:04:55 compute-1 nova_compute[225855]: 2026-01-20 15:04:55.321 225859 DEBUG nova.compute.manager [req-1e46f14a-5589-42a6-8996-75241031ac4c req-9593386c-83c8-4ecf-9426-67566f273212 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: c1db561e-0c8b-4cfb-97bb-55f8d4731b87] Received event network-vif-unplugged-22663aa0-a7f4-431c-b5a9-4433da2dff09 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 20 15:04:55 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:04:55 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1672774260' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:04:55 compute-1 ceph-mon[81775]: pgmap v2418: 321 pgs: 321 active+clean; 498 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 7.0 MiB/s rd, 4.7 MiB/s wr, 287 op/s
Jan 20 15:04:55 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/943230783' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:04:55 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/1672774260' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:04:55 compute-1 nova_compute[225855]: 2026-01-20 15:04:55.378 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.506s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:04:55 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e355 e355: 3 total, 3 up, 3 in
Jan 20 15:04:55 compute-1 nova_compute[225855]: 2026-01-20 15:04:55.544 225859 DEBUG nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] skipping disk for instance-00000096 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 15:04:55 compute-1 nova_compute[225855]: 2026-01-20 15:04:55.545 225859 DEBUG nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] skipping disk for instance-00000096 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 15:04:55 compute-1 nova_compute[225855]: 2026-01-20 15:04:55.705 225859 WARNING nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 15:04:55 compute-1 nova_compute[225855]: 2026-01-20 15:04:55.706 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4237MB free_disk=20.830379486083984GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 15:04:55 compute-1 nova_compute[225855]: 2026-01-20 15:04:55.706 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:04:55 compute-1 nova_compute[225855]: 2026-01-20 15:04:55.706 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:04:56 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:04:56 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:04:56 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:04:56.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:04:56 compute-1 nova_compute[225855]: 2026-01-20 15:04:56.355 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Instance 474cec75-3b01-411a-9074-75859d2a9ddf actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 20 15:04:56 compute-1 nova_compute[225855]: 2026-01-20 15:04:56.355 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Instance c1db561e-0c8b-4cfb-97bb-55f8d4731b87 actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 20 15:04:56 compute-1 nova_compute[225855]: 2026-01-20 15:04:56.355 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 15:04:56 compute-1 nova_compute[225855]: 2026-01-20 15:04:56.356 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 15:04:56 compute-1 ceph-mon[81775]: osdmap e355: 3 total, 3 up, 3 in
Jan 20 15:04:56 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/828516164' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:04:56 compute-1 ceph-mon[81775]: pgmap v2420: 321 pgs: 321 active+clean; 498 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.8 MiB/s rd, 36 KiB/s wr, 258 op/s
Jan 20 15:04:56 compute-1 nova_compute[225855]: 2026-01-20 15:04:56.524 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:04:56 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:04:56 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:04:56 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:04:56.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:04:56 compute-1 sudo[290137]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:04:56 compute-1 sudo[290137]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:04:56 compute-1 sudo[290137]: pam_unix(sudo:session): session closed for user root
Jan 20 15:04:56 compute-1 sudo[290187]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:04:56 compute-1 sudo[290187]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:04:56 compute-1 sudo[290187]: pam_unix(sudo:session): session closed for user root
Jan 20 15:04:56 compute-1 podman[290180]: 2026-01-20 15:04:56.8158216 +0000 UTC m=+0.099043255 container health_status 72be95c12e041eb2cd1a16ad70dfa1391d9f03a5f3180a614ead9459dcfcd46f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Jan 20 15:04:56 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:04:56 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2547310442' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:04:56 compute-1 nova_compute[225855]: 2026-01-20 15:04:56.993 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:04:57 compute-1 nova_compute[225855]: 2026-01-20 15:04:57.005 225859 DEBUG nova.compute.provider_tree [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:04:57 compute-1 nova_compute[225855]: 2026-01-20 15:04:57.054 225859 DEBUG nova.network.neutron [-] [instance: c1db561e-0c8b-4cfb-97bb-55f8d4731b87] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:04:57 compute-1 nova_compute[225855]: 2026-01-20 15:04:57.058 225859 DEBUG nova.scheduler.client.report [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:04:57 compute-1 nova_compute[225855]: 2026-01-20 15:04:57.105 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 15:04:57 compute-1 nova_compute[225855]: 2026-01-20 15:04:57.106 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.400s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:04:57 compute-1 nova_compute[225855]: 2026-01-20 15:04:57.119 225859 INFO nova.compute.manager [-] [instance: c1db561e-0c8b-4cfb-97bb-55f8d4731b87] Took 3.32 seconds to deallocate network for instance.
Jan 20 15:04:57 compute-1 nova_compute[225855]: 2026-01-20 15:04:57.216 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:04:57 compute-1 nova_compute[225855]: 2026-01-20 15:04:57.502 225859 DEBUG nova.compute.manager [req-cbf0bcec-d0ea-487f-aadb-10f340d606d0 req-5c866872-954c-4508-a6c1-10fbbdc4edd7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: c1db561e-0c8b-4cfb-97bb-55f8d4731b87] Received event network-vif-plugged-22663aa0-a7f4-431c-b5a9-4433da2dff09 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:04:57 compute-1 nova_compute[225855]: 2026-01-20 15:04:57.502 225859 DEBUG oslo_concurrency.lockutils [req-cbf0bcec-d0ea-487f-aadb-10f340d606d0 req-5c866872-954c-4508-a6c1-10fbbdc4edd7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "c1db561e-0c8b-4cfb-97bb-55f8d4731b87-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:04:57 compute-1 nova_compute[225855]: 2026-01-20 15:04:57.503 225859 DEBUG oslo_concurrency.lockutils [req-cbf0bcec-d0ea-487f-aadb-10f340d606d0 req-5c866872-954c-4508-a6c1-10fbbdc4edd7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "c1db561e-0c8b-4cfb-97bb-55f8d4731b87-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:04:57 compute-1 nova_compute[225855]: 2026-01-20 15:04:57.503 225859 DEBUG oslo_concurrency.lockutils [req-cbf0bcec-d0ea-487f-aadb-10f340d606d0 req-5c866872-954c-4508-a6c1-10fbbdc4edd7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "c1db561e-0c8b-4cfb-97bb-55f8d4731b87-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:04:57 compute-1 nova_compute[225855]: 2026-01-20 15:04:57.503 225859 DEBUG nova.compute.manager [req-cbf0bcec-d0ea-487f-aadb-10f340d606d0 req-5c866872-954c-4508-a6c1-10fbbdc4edd7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: c1db561e-0c8b-4cfb-97bb-55f8d4731b87] No waiting events found dispatching network-vif-plugged-22663aa0-a7f4-431c-b5a9-4433da2dff09 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:04:57 compute-1 nova_compute[225855]: 2026-01-20 15:04:57.503 225859 WARNING nova.compute.manager [req-cbf0bcec-d0ea-487f-aadb-10f340d606d0 req-5c866872-954c-4508-a6c1-10fbbdc4edd7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: c1db561e-0c8b-4cfb-97bb-55f8d4731b87] Received unexpected event network-vif-plugged-22663aa0-a7f4-431c-b5a9-4433da2dff09 for instance with vm_state active and task_state deleting.
Jan 20 15:04:57 compute-1 nova_compute[225855]: 2026-01-20 15:04:57.503 225859 DEBUG nova.compute.manager [req-cbf0bcec-d0ea-487f-aadb-10f340d606d0 req-5c866872-954c-4508-a6c1-10fbbdc4edd7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: c1db561e-0c8b-4cfb-97bb-55f8d4731b87] Received event network-vif-deleted-22663aa0-a7f4-431c-b5a9-4433da2dff09 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:04:57 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/2547310442' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:04:57 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/3976258873' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:04:57 compute-1 nova_compute[225855]: 2026-01-20 15:04:57.691 225859 INFO nova.compute.manager [None req-c4974407-f15e-4815-90b7-11ad883cd1a8 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: c1db561e-0c8b-4cfb-97bb-55f8d4731b87] Took 0.57 seconds to detach 1 volumes for instance.
Jan 20 15:04:57 compute-1 nova_compute[225855]: 2026-01-20 15:04:57.744 225859 DEBUG oslo_concurrency.lockutils [None req-c4974407-f15e-4815-90b7-11ad883cd1a8 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:04:57 compute-1 nova_compute[225855]: 2026-01-20 15:04:57.745 225859 DEBUG oslo_concurrency.lockutils [None req-c4974407-f15e-4815-90b7-11ad883cd1a8 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:04:57 compute-1 nova_compute[225855]: 2026-01-20 15:04:57.877 225859 DEBUG oslo_concurrency.processutils [None req-c4974407-f15e-4815-90b7-11ad883cd1a8 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:04:58 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e355 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:04:58 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:04:58 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:04:58 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:04:58.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:04:58 compute-1 nova_compute[225855]: 2026-01-20 15:04:58.324 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:04:58 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:04:58 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2668567201' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:04:58 compute-1 nova_compute[225855]: 2026-01-20 15:04:58.361 225859 DEBUG oslo_concurrency.processutils [None req-c4974407-f15e-4815-90b7-11ad883cd1a8 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:04:58 compute-1 nova_compute[225855]: 2026-01-20 15:04:58.366 225859 DEBUG nova.compute.provider_tree [None req-c4974407-f15e-4815-90b7-11ad883cd1a8 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:04:58 compute-1 nova_compute[225855]: 2026-01-20 15:04:58.390 225859 DEBUG nova.scheduler.client.report [None req-c4974407-f15e-4815-90b7-11ad883cd1a8 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:04:58 compute-1 nova_compute[225855]: 2026-01-20 15:04:58.448 225859 DEBUG oslo_concurrency.lockutils [None req-c4974407-f15e-4815-90b7-11ad883cd1a8 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.703s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:04:58 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:04:58 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:04:58 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:04:58.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:04:58 compute-1 nova_compute[225855]: 2026-01-20 15:04:58.601 225859 INFO nova.scheduler.client.report [None req-c4974407-f15e-4815-90b7-11ad883cd1a8 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Deleted allocations for instance c1db561e-0c8b-4cfb-97bb-55f8d4731b87
Jan 20 15:04:58 compute-1 ceph-mon[81775]: pgmap v2421: 321 pgs: 321 active+clean; 498 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.7 MiB/s rd, 35 KiB/s wr, 253 op/s
Jan 20 15:04:58 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/2668567201' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:04:59 compute-1 nova_compute[225855]: 2026-01-20 15:04:59.098 225859 DEBUG oslo_concurrency.lockutils [None req-c4974407-f15e-4815-90b7-11ad883cd1a8 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Lock "c1db561e-0c8b-4cfb-97bb-55f8d4731b87" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.085s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:05:00 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:05:00 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:05:00 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:05:00.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:05:00 compute-1 nova_compute[225855]: 2026-01-20 15:05:00.101 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:05:00 compute-1 nova_compute[225855]: 2026-01-20 15:05:00.101 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:05:00 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e356 e356: 3 total, 3 up, 3 in
Jan 20 15:05:00 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:05:00 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:05:00 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:05:00.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:05:01 compute-1 ceph-mon[81775]: pgmap v2422: 321 pgs: 321 active+clean; 498 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 5.3 MiB/s rd, 34 KiB/s wr, 263 op/s
Jan 20 15:05:01 compute-1 ceph-mon[81775]: osdmap e356: 3 total, 3 up, 3 in
Jan 20 15:05:02 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:05:02 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:05:02 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:05:02.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:05:02 compute-1 nova_compute[225855]: 2026-01-20 15:05:02.218 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:05:02 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:05:02 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:05:02 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:05:02.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:05:03 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e356 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:05:03 compute-1 nova_compute[225855]: 2026-01-20 15:05:03.328 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:05:03 compute-1 ceph-mon[81775]: pgmap v2424: 321 pgs: 1 active+clean+snaptrim, 320 active+clean; 480 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 6.0 MiB/s rd, 23 KiB/s wr, 307 op/s
Jan 20 15:05:04 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:05:04 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:05:04 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:05:04.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:05:04 compute-1 ceph-mon[81775]: pgmap v2425: 321 pgs: 1 active+clean+snaptrim, 320 active+clean; 440 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.5 MiB/s rd, 22 KiB/s wr, 205 op/s
Jan 20 15:05:04 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:05:04 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:05:04 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:05:04.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:05:05 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/2855842943' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:05:06 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:05:06 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:05:06 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:05:06.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:05:06 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:05:06 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:05:06 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:05:06.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:05:06 compute-1 nova_compute[225855]: 2026-01-20 15:05:06.641 225859 DEBUG oslo_concurrency.lockutils [None req-34cd6af6-9069-4587-949a-6da0bc77cc0c 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Acquiring lock "474cec75-3b01-411a-9074-75859d2a9ddf" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:05:06 compute-1 nova_compute[225855]: 2026-01-20 15:05:06.642 225859 DEBUG oslo_concurrency.lockutils [None req-34cd6af6-9069-4587-949a-6da0bc77cc0c 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Lock "474cec75-3b01-411a-9074-75859d2a9ddf" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:05:06 compute-1 nova_compute[225855]: 2026-01-20 15:05:06.642 225859 DEBUG oslo_concurrency.lockutils [None req-34cd6af6-9069-4587-949a-6da0bc77cc0c 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Acquiring lock "474cec75-3b01-411a-9074-75859d2a9ddf-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:05:06 compute-1 nova_compute[225855]: 2026-01-20 15:05:06.642 225859 DEBUG oslo_concurrency.lockutils [None req-34cd6af6-9069-4587-949a-6da0bc77cc0c 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Lock "474cec75-3b01-411a-9074-75859d2a9ddf-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:05:06 compute-1 nova_compute[225855]: 2026-01-20 15:05:06.642 225859 DEBUG oslo_concurrency.lockutils [None req-34cd6af6-9069-4587-949a-6da0bc77cc0c 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Lock "474cec75-3b01-411a-9074-75859d2a9ddf-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:05:06 compute-1 nova_compute[225855]: 2026-01-20 15:05:06.644 225859 INFO nova.compute.manager [None req-34cd6af6-9069-4587-949a-6da0bc77cc0c 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: 474cec75-3b01-411a-9074-75859d2a9ddf] Terminating instance
Jan 20 15:05:06 compute-1 nova_compute[225855]: 2026-01-20 15:05:06.645 225859 DEBUG nova.compute.manager [None req-34cd6af6-9069-4587-949a-6da0bc77cc0c 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: 474cec75-3b01-411a-9074-75859d2a9ddf] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 20 15:05:06 compute-1 kernel: tap244332ba-1b (unregistering): left promiscuous mode
Jan 20 15:05:06 compute-1 NetworkManager[49104]: <info>  [1768921506.7547] device (tap244332ba-1b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 20 15:05:06 compute-1 nova_compute[225855]: 2026-01-20 15:05:06.763 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:05:06 compute-1 ovn_controller[130490]: 2026-01-20T15:05:06Z|00679|binding|INFO|Releasing lport 244332ba-1b58-4d42-98b0-245f9460c50f from this chassis (sb_readonly=0)
Jan 20 15:05:06 compute-1 ovn_controller[130490]: 2026-01-20T15:05:06Z|00680|binding|INFO|Setting lport 244332ba-1b58-4d42-98b0-245f9460c50f down in Southbound
Jan 20 15:05:06 compute-1 ovn_controller[130490]: 2026-01-20T15:05:06Z|00681|binding|INFO|Removing iface tap244332ba-1b ovn-installed in OVS
Jan 20 15:05:06 compute-1 nova_compute[225855]: 2026-01-20 15:05:06.764 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:05:06 compute-1 ceph-mon[81775]: pgmap v2426: 321 pgs: 321 active+clean; 374 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 31 KiB/s wr, 189 op/s
Jan 20 15:05:06 compute-1 nova_compute[225855]: 2026-01-20 15:05:06.784 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:05:06 compute-1 systemd[1]: machine-qemu\x2d76\x2dinstance\x2d00000096.scope: Deactivated successfully.
Jan 20 15:05:06 compute-1 systemd[1]: machine-qemu\x2d76\x2dinstance\x2d00000096.scope: Consumed 1.842s CPU time.
Jan 20 15:05:06 compute-1 systemd-machined[194361]: Machine qemu-76-instance-00000096 terminated.
Jan 20 15:05:06 compute-1 nova_compute[225855]: 2026-01-20 15:05:06.878 225859 INFO nova.virt.libvirt.driver [-] [instance: 474cec75-3b01-411a-9074-75859d2a9ddf] Instance destroyed successfully.
Jan 20 15:05:06 compute-1 nova_compute[225855]: 2026-01-20 15:05:06.879 225859 DEBUG nova.objects.instance [None req-34cd6af6-9069-4587-949a-6da0bc77cc0c 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Lazy-loading 'resources' on Instance uuid 474cec75-3b01-411a-9074-75859d2a9ddf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:05:06 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:05:06.911 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6f:36:24 10.100.0.4'], port_security=['fa:16:3e:6f:36:24 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '474cec75-3b01-411a-9074-75859d2a9ddf', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-671e28d0-0b9e-41e0-b5e0-db1ccd4717ec', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '63555e5851564db08c6429231d264f2c', 'neutron:revision_number': '8', 'neutron:security_group_ids': '7e54c470-6a6f-454e-ae01-9d2d59b2c74d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=248fa32c-94be-4e1b-b4d3-cb9fac0ec155, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], logical_port=244332ba-1b58-4d42-98b0-245f9460c50f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:05:06 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:05:06.912 140354 INFO neutron.agent.ovn.metadata.agent [-] Port 244332ba-1b58-4d42-98b0-245f9460c50f in datapath 671e28d0-0b9e-41e0-b5e0-db1ccd4717ec unbound from our chassis
Jan 20 15:05:06 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:05:06.914 140354 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 671e28d0-0b9e-41e0-b5e0-db1ccd4717ec, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 20 15:05:06 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:05:06.915 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[ff6cac15-53dd-42a0-b1c1-33f2a5a12ec3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:05:06 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:05:06.915 140354 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-671e28d0-0b9e-41e0-b5e0-db1ccd4717ec namespace which is not needed anymore
Jan 20 15:05:06 compute-1 nova_compute[225855]: 2026-01-20 15:05:06.917 225859 DEBUG nova.virt.libvirt.vif [None req-34cd6af6-9069-4587-949a-6da0bc77cc0c 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T15:01:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-ServerBootFromVolumeStableRescueTest-server-254746207',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverbootfromvolumestablerescuetest-server-254746207',id=150,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-20T15:03:04Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='63555e5851564db08c6429231d264f2c',ramdisk_id='',reservation_id='r-solng1yz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerBootFromVolumeStableRescueTest-1871371328',owner_user_name='tempest-ServerBootFromVolumeStableRescueTest-1871371328-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T15:03:09Z,user_data=None,user_id='2446e8399b344b29986c1aaf8bf73adf',uuid=474cec75-3b01-411a-9074-75859d2a9ddf,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "244332ba-1b58-4d42-98b0-245f9460c50f", "address": "fa:16:3e:6f:36:24", "network": {"id": "671e28d0-0b9e-41e0-b5e0-db1ccd4717ec", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-884777184-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "63555e5851564db08c6429231d264f2c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap244332ba-1b", "ovs_interfaceid": "244332ba-1b58-4d42-98b0-245f9460c50f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 20 15:05:06 compute-1 nova_compute[225855]: 2026-01-20 15:05:06.917 225859 DEBUG nova.network.os_vif_util [None req-34cd6af6-9069-4587-949a-6da0bc77cc0c 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Converting VIF {"id": "244332ba-1b58-4d42-98b0-245f9460c50f", "address": "fa:16:3e:6f:36:24", "network": {"id": "671e28d0-0b9e-41e0-b5e0-db1ccd4717ec", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-884777184-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "63555e5851564db08c6429231d264f2c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap244332ba-1b", "ovs_interfaceid": "244332ba-1b58-4d42-98b0-245f9460c50f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 15:05:06 compute-1 nova_compute[225855]: 2026-01-20 15:05:06.918 225859 DEBUG nova.network.os_vif_util [None req-34cd6af6-9069-4587-949a-6da0bc77cc0c 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:6f:36:24,bridge_name='br-int',has_traffic_filtering=True,id=244332ba-1b58-4d42-98b0-245f9460c50f,network=Network(671e28d0-0b9e-41e0-b5e0-db1ccd4717ec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap244332ba-1b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 15:05:06 compute-1 nova_compute[225855]: 2026-01-20 15:05:06.918 225859 DEBUG os_vif [None req-34cd6af6-9069-4587-949a-6da0bc77cc0c 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:6f:36:24,bridge_name='br-int',has_traffic_filtering=True,id=244332ba-1b58-4d42-98b0-245f9460c50f,network=Network(671e28d0-0b9e-41e0-b5e0-db1ccd4717ec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap244332ba-1b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 20 15:05:06 compute-1 nova_compute[225855]: 2026-01-20 15:05:06.919 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:05:06 compute-1 nova_compute[225855]: 2026-01-20 15:05:06.920 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap244332ba-1b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:05:06 compute-1 nova_compute[225855]: 2026-01-20 15:05:06.921 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:05:06 compute-1 nova_compute[225855]: 2026-01-20 15:05:06.923 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:05:06 compute-1 nova_compute[225855]: 2026-01-20 15:05:06.925 225859 INFO os_vif [None req-34cd6af6-9069-4587-949a-6da0bc77cc0c 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:6f:36:24,bridge_name='br-int',has_traffic_filtering=True,id=244332ba-1b58-4d42-98b0-245f9460c50f,network=Network(671e28d0-0b9e-41e0-b5e0-db1ccd4717ec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap244332ba-1b')
Jan 20 15:05:07 compute-1 nova_compute[225855]: 2026-01-20 15:05:07.221 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:05:07 compute-1 neutron-haproxy-ovnmeta-671e28d0-0b9e-41e0-b5e0-db1ccd4717ec[288392]: [NOTICE]   (288401) : haproxy version is 2.8.14-c23fe91
Jan 20 15:05:07 compute-1 neutron-haproxy-ovnmeta-671e28d0-0b9e-41e0-b5e0-db1ccd4717ec[288392]: [NOTICE]   (288401) : path to executable is /usr/sbin/haproxy
Jan 20 15:05:07 compute-1 neutron-haproxy-ovnmeta-671e28d0-0b9e-41e0-b5e0-db1ccd4717ec[288392]: [WARNING]  (288401) : Exiting Master process...
Jan 20 15:05:07 compute-1 neutron-haproxy-ovnmeta-671e28d0-0b9e-41e0-b5e0-db1ccd4717ec[288392]: [WARNING]  (288401) : Exiting Master process...
Jan 20 15:05:07 compute-1 neutron-haproxy-ovnmeta-671e28d0-0b9e-41e0-b5e0-db1ccd4717ec[288392]: [ALERT]    (288401) : Current worker (288403) exited with code 143 (Terminated)
Jan 20 15:05:07 compute-1 neutron-haproxy-ovnmeta-671e28d0-0b9e-41e0-b5e0-db1ccd4717ec[288392]: [WARNING]  (288401) : All workers exited. Exiting... (0)
Jan 20 15:05:07 compute-1 systemd[1]: libpod-791837c27285c68cb685dfa82b8474123f452a4f4d63910113cfbdc8f564544e.scope: Deactivated successfully.
Jan 20 15:05:07 compute-1 podman[290311]: 2026-01-20 15:05:07.387924778 +0000 UTC m=+0.371769319 container died 791837c27285c68cb685dfa82b8474123f452a4f4d63910113cfbdc8f564544e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-671e28d0-0b9e-41e0-b5e0-db1ccd4717ec, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Jan 20 15:05:07 compute-1 systemd[1]: var-lib-containers-storage-overlay-9a9b0c9e2a81eedae01229880b31ef78cadc4986b7f57eb0b4c053b0733a83f7-merged.mount: Deactivated successfully.
Jan 20 15:05:07 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-791837c27285c68cb685dfa82b8474123f452a4f4d63910113cfbdc8f564544e-userdata-shm.mount: Deactivated successfully.
Jan 20 15:05:07 compute-1 podman[290311]: 2026-01-20 15:05:07.690033171 +0000 UTC m=+0.673877712 container cleanup 791837c27285c68cb685dfa82b8474123f452a4f4d63910113cfbdc8f564544e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-671e28d0-0b9e-41e0-b5e0-db1ccd4717ec, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Jan 20 15:05:07 compute-1 systemd[1]: libpod-conmon-791837c27285c68cb685dfa82b8474123f452a4f4d63910113cfbdc8f564544e.scope: Deactivated successfully.
Jan 20 15:05:07 compute-1 podman[290343]: 2026-01-20 15:05:07.915701878 +0000 UTC m=+0.206228459 container remove 791837c27285c68cb685dfa82b8474123f452a4f4d63910113cfbdc8f564544e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-671e28d0-0b9e-41e0-b5e0-db1ccd4717ec, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 20 15:05:07 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:05:07.921 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[930bdca2-9146-42c5-b6e5-3e9a047c1a5e]: (4, ('Tue Jan 20 03:05:07 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-671e28d0-0b9e-41e0-b5e0-db1ccd4717ec (791837c27285c68cb685dfa82b8474123f452a4f4d63910113cfbdc8f564544e)\n791837c27285c68cb685dfa82b8474123f452a4f4d63910113cfbdc8f564544e\nTue Jan 20 03:05:07 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-671e28d0-0b9e-41e0-b5e0-db1ccd4717ec (791837c27285c68cb685dfa82b8474123f452a4f4d63910113cfbdc8f564544e)\n791837c27285c68cb685dfa82b8474123f452a4f4d63910113cfbdc8f564544e\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:05:07 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:05:07.923 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[1a50cb63-198b-4301-9a28-9ca0f4ed76c9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:05:07 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:05:07.924 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap671e28d0-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:05:07 compute-1 nova_compute[225855]: 2026-01-20 15:05:07.925 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:05:07 compute-1 kernel: tap671e28d0-00: left promiscuous mode
Jan 20 15:05:07 compute-1 nova_compute[225855]: 2026-01-20 15:05:07.940 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:05:07 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:05:07.943 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[b37b3464-9155-4d12-865f-13079a4ec34e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:05:07 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:05:07.966 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[487c3fb9-a856-42fb-b63e-038ab23a58bb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:05:07 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:05:07.967 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[aa79f33b-b49b-4354-9ac6-d106c7bba49a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:05:07 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:05:07.982 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[41271373-5ba2-4734-81ab-a4e05a69d876]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 642751, 'reachable_time': 33787, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 290359, 'error': None, 'target': 'ovnmeta-671e28d0-0b9e-41e0-b5e0-db1ccd4717ec', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:05:07 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:05:07.984 140466 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-671e28d0-0b9e-41e0-b5e0-db1ccd4717ec deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 20 15:05:07 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:05:07.984 140466 DEBUG oslo.privsep.daemon [-] privsep: reply[69982be6-ec57-4487-b2f4-6b352544f001]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:05:07 compute-1 systemd[1]: run-netns-ovnmeta\x2d671e28d0\x2d0b9e\x2d41e0\x2db5e0\x2ddb1ccd4717ec.mount: Deactivated successfully.
Jan 20 15:05:08 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e356 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:05:08 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:05:08 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:05:08 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:05:08.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:05:08 compute-1 nova_compute[225855]: 2026-01-20 15:05:08.262 225859 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1768921493.2610023, c1db561e-0c8b-4cfb-97bb-55f8d4731b87 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:05:08 compute-1 nova_compute[225855]: 2026-01-20 15:05:08.262 225859 INFO nova.compute.manager [-] [instance: c1db561e-0c8b-4cfb-97bb-55f8d4731b87] VM Stopped (Lifecycle Event)
Jan 20 15:05:08 compute-1 nova_compute[225855]: 2026-01-20 15:05:08.413 225859 DEBUG nova.compute.manager [None req-46b383e3-2bfb-481c-bda9-895b17668af2 - - - - - -] [instance: c1db561e-0c8b-4cfb-97bb-55f8d4731b87] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:05:08 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:05:08 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:05:08 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:05:08.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:05:08 compute-1 ceph-mon[81775]: pgmap v2427: 321 pgs: 321 active+clean; 374 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 31 KiB/s wr, 189 op/s
Jan 20 15:05:08 compute-1 nova_compute[225855]: 2026-01-20 15:05:08.843 225859 INFO nova.virt.libvirt.driver [None req-34cd6af6-9069-4587-949a-6da0bc77cc0c 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: 474cec75-3b01-411a-9074-75859d2a9ddf] Deleting instance files /var/lib/nova/instances/474cec75-3b01-411a-9074-75859d2a9ddf_del
Jan 20 15:05:08 compute-1 nova_compute[225855]: 2026-01-20 15:05:08.844 225859 INFO nova.virt.libvirt.driver [None req-34cd6af6-9069-4587-949a-6da0bc77cc0c 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: 474cec75-3b01-411a-9074-75859d2a9ddf] Deletion of /var/lib/nova/instances/474cec75-3b01-411a-9074-75859d2a9ddf_del complete
Jan 20 15:05:08 compute-1 nova_compute[225855]: 2026-01-20 15:05:08.939 225859 INFO nova.compute.manager [None req-34cd6af6-9069-4587-949a-6da0bc77cc0c 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: 474cec75-3b01-411a-9074-75859d2a9ddf] Took 2.29 seconds to destroy the instance on the hypervisor.
Jan 20 15:05:08 compute-1 nova_compute[225855]: 2026-01-20 15:05:08.940 225859 DEBUG oslo.service.loopingcall [None req-34cd6af6-9069-4587-949a-6da0bc77cc0c 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 20 15:05:08 compute-1 nova_compute[225855]: 2026-01-20 15:05:08.940 225859 DEBUG nova.compute.manager [-] [instance: 474cec75-3b01-411a-9074-75859d2a9ddf] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 20 15:05:08 compute-1 nova_compute[225855]: 2026-01-20 15:05:08.941 225859 DEBUG nova.network.neutron [-] [instance: 474cec75-3b01-411a-9074-75859d2a9ddf] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 20 15:05:09 compute-1 nova_compute[225855]: 2026-01-20 15:05:09.335 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:05:09 compute-1 nova_compute[225855]: 2026-01-20 15:05:09.761 225859 DEBUG nova.compute.manager [req-3ae0ac1f-18f0-4c98-9981-9895698cdca3 req-63ad801d-8872-47c1-906e-0b2e118e5358 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 474cec75-3b01-411a-9074-75859d2a9ddf] Received event network-vif-unplugged-244332ba-1b58-4d42-98b0-245f9460c50f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:05:09 compute-1 nova_compute[225855]: 2026-01-20 15:05:09.761 225859 DEBUG oslo_concurrency.lockutils [req-3ae0ac1f-18f0-4c98-9981-9895698cdca3 req-63ad801d-8872-47c1-906e-0b2e118e5358 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "474cec75-3b01-411a-9074-75859d2a9ddf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:05:09 compute-1 nova_compute[225855]: 2026-01-20 15:05:09.762 225859 DEBUG oslo_concurrency.lockutils [req-3ae0ac1f-18f0-4c98-9981-9895698cdca3 req-63ad801d-8872-47c1-906e-0b2e118e5358 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "474cec75-3b01-411a-9074-75859d2a9ddf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:05:09 compute-1 nova_compute[225855]: 2026-01-20 15:05:09.762 225859 DEBUG oslo_concurrency.lockutils [req-3ae0ac1f-18f0-4c98-9981-9895698cdca3 req-63ad801d-8872-47c1-906e-0b2e118e5358 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "474cec75-3b01-411a-9074-75859d2a9ddf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:05:09 compute-1 nova_compute[225855]: 2026-01-20 15:05:09.762 225859 DEBUG nova.compute.manager [req-3ae0ac1f-18f0-4c98-9981-9895698cdca3 req-63ad801d-8872-47c1-906e-0b2e118e5358 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 474cec75-3b01-411a-9074-75859d2a9ddf] No waiting events found dispatching network-vif-unplugged-244332ba-1b58-4d42-98b0-245f9460c50f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:05:09 compute-1 nova_compute[225855]: 2026-01-20 15:05:09.762 225859 DEBUG nova.compute.manager [req-3ae0ac1f-18f0-4c98-9981-9895698cdca3 req-63ad801d-8872-47c1-906e-0b2e118e5358 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 474cec75-3b01-411a-9074-75859d2a9ddf] Received event network-vif-unplugged-244332ba-1b58-4d42-98b0-245f9460c50f for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 20 15:05:09 compute-1 nova_compute[225855]: 2026-01-20 15:05:09.762 225859 DEBUG nova.compute.manager [req-3ae0ac1f-18f0-4c98-9981-9895698cdca3 req-63ad801d-8872-47c1-906e-0b2e118e5358 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 474cec75-3b01-411a-9074-75859d2a9ddf] Received event network-vif-plugged-244332ba-1b58-4d42-98b0-245f9460c50f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:05:09 compute-1 nova_compute[225855]: 2026-01-20 15:05:09.763 225859 DEBUG oslo_concurrency.lockutils [req-3ae0ac1f-18f0-4c98-9981-9895698cdca3 req-63ad801d-8872-47c1-906e-0b2e118e5358 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "474cec75-3b01-411a-9074-75859d2a9ddf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:05:09 compute-1 nova_compute[225855]: 2026-01-20 15:05:09.763 225859 DEBUG oslo_concurrency.lockutils [req-3ae0ac1f-18f0-4c98-9981-9895698cdca3 req-63ad801d-8872-47c1-906e-0b2e118e5358 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "474cec75-3b01-411a-9074-75859d2a9ddf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:05:09 compute-1 nova_compute[225855]: 2026-01-20 15:05:09.763 225859 DEBUG oslo_concurrency.lockutils [req-3ae0ac1f-18f0-4c98-9981-9895698cdca3 req-63ad801d-8872-47c1-906e-0b2e118e5358 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "474cec75-3b01-411a-9074-75859d2a9ddf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:05:09 compute-1 nova_compute[225855]: 2026-01-20 15:05:09.763 225859 DEBUG nova.compute.manager [req-3ae0ac1f-18f0-4c98-9981-9895698cdca3 req-63ad801d-8872-47c1-906e-0b2e118e5358 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 474cec75-3b01-411a-9074-75859d2a9ddf] No waiting events found dispatching network-vif-plugged-244332ba-1b58-4d42-98b0-245f9460c50f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:05:09 compute-1 nova_compute[225855]: 2026-01-20 15:05:09.764 225859 WARNING nova.compute.manager [req-3ae0ac1f-18f0-4c98-9981-9895698cdca3 req-63ad801d-8872-47c1-906e-0b2e118e5358 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 474cec75-3b01-411a-9074-75859d2a9ddf] Received unexpected event network-vif-plugged-244332ba-1b58-4d42-98b0-245f9460c50f for instance with vm_state active and task_state deleting.
Jan 20 15:05:10 compute-1 podman[290362]: 2026-01-20 15:05:10.022698341 +0000 UTC m=+0.060870759 container health_status 533cd8e0c3b8b7f910f15cef49e7f879ab46c12ce646a35628c15f13725eb500 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 20 15:05:10 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:05:10 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:05:10 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:05:10.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:05:10 compute-1 nova_compute[225855]: 2026-01-20 15:05:10.280 225859 DEBUG nova.network.neutron [-] [instance: 474cec75-3b01-411a-9074-75859d2a9ddf] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:05:10 compute-1 nova_compute[225855]: 2026-01-20 15:05:10.341 225859 INFO nova.compute.manager [-] [instance: 474cec75-3b01-411a-9074-75859d2a9ddf] Took 1.40 seconds to deallocate network for instance.
Jan 20 15:05:10 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e357 e357: 3 total, 3 up, 3 in
Jan 20 15:05:10 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:05:10 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:05:10 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:05:10.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:05:10 compute-1 nova_compute[225855]: 2026-01-20 15:05:10.720 225859 INFO nova.compute.manager [None req-34cd6af6-9069-4587-949a-6da0bc77cc0c 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: 474cec75-3b01-411a-9074-75859d2a9ddf] Took 0.38 seconds to detach 1 volumes for instance.
Jan 20 15:05:10 compute-1 nova_compute[225855]: 2026-01-20 15:05:10.807 225859 DEBUG nova.compute.manager [req-a8b5f4c0-2810-44ec-a395-79d27527a7b8 req-1dc026ef-cd92-4298-ae18-b7c0a8c3b387 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 474cec75-3b01-411a-9074-75859d2a9ddf] Received event network-vif-deleted-244332ba-1b58-4d42-98b0-245f9460c50f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:05:10 compute-1 nova_compute[225855]: 2026-01-20 15:05:10.811 225859 DEBUG oslo_concurrency.lockutils [None req-34cd6af6-9069-4587-949a-6da0bc77cc0c 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:05:10 compute-1 nova_compute[225855]: 2026-01-20 15:05:10.812 225859 DEBUG oslo_concurrency.lockutils [None req-34cd6af6-9069-4587-949a-6da0bc77cc0c 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:05:10 compute-1 nova_compute[225855]: 2026-01-20 15:05:10.883 225859 DEBUG oslo_concurrency.processutils [None req-34cd6af6-9069-4587-949a-6da0bc77cc0c 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:05:11 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:05:11 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1033225945' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:05:11 compute-1 nova_compute[225855]: 2026-01-20 15:05:11.306 225859 DEBUG oslo_concurrency.processutils [None req-34cd6af6-9069-4587-949a-6da0bc77cc0c 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.423s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:05:11 compute-1 nova_compute[225855]: 2026-01-20 15:05:11.313 225859 DEBUG nova.compute.provider_tree [None req-34cd6af6-9069-4587-949a-6da0bc77cc0c 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:05:11 compute-1 nova_compute[225855]: 2026-01-20 15:05:11.347 225859 DEBUG nova.scheduler.client.report [None req-34cd6af6-9069-4587-949a-6da0bc77cc0c 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:05:11 compute-1 nova_compute[225855]: 2026-01-20 15:05:11.401 225859 DEBUG oslo_concurrency.lockutils [None req-34cd6af6-9069-4587-949a-6da0bc77cc0c 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.589s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:05:11 compute-1 nova_compute[225855]: 2026-01-20 15:05:11.483 225859 INFO nova.scheduler.client.report [None req-34cd6af6-9069-4587-949a-6da0bc77cc0c 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Deleted allocations for instance 474cec75-3b01-411a-9074-75859d2a9ddf
Jan 20 15:05:11 compute-1 ceph-mon[81775]: pgmap v2428: 321 pgs: 321 active+clean; 347 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 943 KiB/s rd, 1.6 MiB/s wr, 148 op/s
Jan 20 15:05:11 compute-1 ceph-mon[81775]: osdmap e357: 3 total, 3 up, 3 in
Jan 20 15:05:11 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/1033225945' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:05:11 compute-1 nova_compute[225855]: 2026-01-20 15:05:11.923 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:05:11 compute-1 nova_compute[225855]: 2026-01-20 15:05:11.959 225859 DEBUG oslo_concurrency.lockutils [None req-34cd6af6-9069-4587-949a-6da0bc77cc0c 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Lock "474cec75-3b01-411a-9074-75859d2a9ddf" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.317s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:05:12 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:05:12 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:05:12 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:05:12.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:05:12 compute-1 nova_compute[225855]: 2026-01-20 15:05:12.223 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:05:12 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:05:12 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:05:12 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:05:12.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:05:12 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/2837469865' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:05:12 compute-1 ceph-mon[81775]: pgmap v2430: 321 pgs: 321 active+clean; 339 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 676 KiB/s rd, 2.5 MiB/s wr, 163 op/s
Jan 20 15:05:13 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e357 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:05:13 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e358 e358: 3 total, 3 up, 3 in
Jan 20 15:05:14 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/1230677250' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:05:14 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:05:14 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:05:14 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:05:14.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:05:14 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:05:14 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:05:14 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:05:14.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:05:15 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/1230677250' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:05:15 compute-1 ceph-mon[81775]: osdmap e358: 3 total, 3 up, 3 in
Jan 20 15:05:15 compute-1 ceph-mon[81775]: pgmap v2432: 321 pgs: 321 active+clean; 323 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 887 KiB/s rd, 3.1 MiB/s wr, 172 op/s
Jan 20 15:05:16 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:05:16 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:05:16 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:05:16.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:05:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:05:16.425 140354 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:05:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:05:16.425 140354 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:05:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:05:16.425 140354 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:05:16 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:05:16 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:05:16 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:05:16.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:05:16 compute-1 sudo[290408]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:05:16 compute-1 sudo[290408]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:05:16 compute-1 sudo[290408]: pam_unix(sudo:session): session closed for user root
Jan 20 15:05:16 compute-1 sudo[290434]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:05:16 compute-1 sudo[290434]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:05:16 compute-1 sudo[290434]: pam_unix(sudo:session): session closed for user root
Jan 20 15:05:16 compute-1 nova_compute[225855]: 2026-01-20 15:05:16.926 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:05:17 compute-1 nova_compute[225855]: 2026-01-20 15:05:17.225 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:05:18 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:05:18 compute-1 ceph-mon[81775]: pgmap v2433: 321 pgs: 321 active+clean; 306 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 906 KiB/s rd, 3.2 MiB/s wr, 186 op/s
Jan 20 15:05:18 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:05:18 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:05:18 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:05:18.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:05:18 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:05:18 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:05:18 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:05:18.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:05:19 compute-1 ceph-mon[81775]: pgmap v2434: 321 pgs: 321 active+clean; 306 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 381 KiB/s rd, 1.2 MiB/s wr, 104 op/s
Jan 20 15:05:19 compute-1 nova_compute[225855]: 2026-01-20 15:05:19.933 225859 DEBUG oslo_concurrency.lockutils [None req-81c2281d-dca4-4bae-856a-c49b3b605249 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] Acquiring lock "33ba7a73-3233-40a3-a49a-e5bbd604dc3c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:05:19 compute-1 nova_compute[225855]: 2026-01-20 15:05:19.934 225859 DEBUG oslo_concurrency.lockutils [None req-81c2281d-dca4-4bae-856a-c49b3b605249 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] Lock "33ba7a73-3233-40a3-a49a-e5bbd604dc3c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:05:19 compute-1 nova_compute[225855]: 2026-01-20 15:05:19.960 225859 DEBUG nova.compute.manager [None req-81c2281d-dca4-4bae-856a-c49b3b605249 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] [instance: 33ba7a73-3233-40a3-a49a-e5bbd604dc3c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 20 15:05:20 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:05:20 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:05:20 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:05:20.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:05:20 compute-1 nova_compute[225855]: 2026-01-20 15:05:20.140 225859 DEBUG oslo_concurrency.lockutils [None req-81c2281d-dca4-4bae-856a-c49b3b605249 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:05:20 compute-1 nova_compute[225855]: 2026-01-20 15:05:20.141 225859 DEBUG oslo_concurrency.lockutils [None req-81c2281d-dca4-4bae-856a-c49b3b605249 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:05:20 compute-1 nova_compute[225855]: 2026-01-20 15:05:20.151 225859 DEBUG nova.virt.hardware [None req-81c2281d-dca4-4bae-856a-c49b3b605249 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 20 15:05:20 compute-1 nova_compute[225855]: 2026-01-20 15:05:20.152 225859 INFO nova.compute.claims [None req-81c2281d-dca4-4bae-856a-c49b3b605249 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] [instance: 33ba7a73-3233-40a3-a49a-e5bbd604dc3c] Claim successful on node compute-1.ctlplane.example.com
Jan 20 15:05:20 compute-1 nova_compute[225855]: 2026-01-20 15:05:20.377 225859 DEBUG oslo_concurrency.processutils [None req-81c2281d-dca4-4bae-856a-c49b3b605249 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:05:20 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:05:20 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:05:20 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:05:20.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:05:20 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:05:20 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1800558050' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:05:20 compute-1 nova_compute[225855]: 2026-01-20 15:05:20.826 225859 DEBUG oslo_concurrency.processutils [None req-81c2281d-dca4-4bae-856a-c49b3b605249 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:05:20 compute-1 nova_compute[225855]: 2026-01-20 15:05:20.833 225859 DEBUG nova.compute.provider_tree [None req-81c2281d-dca4-4bae-856a-c49b3b605249 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:05:20 compute-1 nova_compute[225855]: 2026-01-20 15:05:20.856 225859 DEBUG nova.scheduler.client.report [None req-81c2281d-dca4-4bae-856a-c49b3b605249 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:05:20 compute-1 nova_compute[225855]: 2026-01-20 15:05:20.892 225859 DEBUG oslo_concurrency.lockutils [None req-81c2281d-dca4-4bae-856a-c49b3b605249 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.752s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:05:20 compute-1 nova_compute[225855]: 2026-01-20 15:05:20.894 225859 DEBUG nova.compute.manager [None req-81c2281d-dca4-4bae-856a-c49b3b605249 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] [instance: 33ba7a73-3233-40a3-a49a-e5bbd604dc3c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 20 15:05:20 compute-1 nova_compute[225855]: 2026-01-20 15:05:20.960 225859 DEBUG nova.compute.manager [None req-81c2281d-dca4-4bae-856a-c49b3b605249 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] [instance: 33ba7a73-3233-40a3-a49a-e5bbd604dc3c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 20 15:05:20 compute-1 nova_compute[225855]: 2026-01-20 15:05:20.961 225859 DEBUG nova.network.neutron [None req-81c2281d-dca4-4bae-856a-c49b3b605249 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] [instance: 33ba7a73-3233-40a3-a49a-e5bbd604dc3c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 20 15:05:20 compute-1 nova_compute[225855]: 2026-01-20 15:05:20.981 225859 INFO nova.virt.libvirt.driver [None req-81c2281d-dca4-4bae-856a-c49b3b605249 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] [instance: 33ba7a73-3233-40a3-a49a-e5bbd604dc3c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 20 15:05:21 compute-1 nova_compute[225855]: 2026-01-20 15:05:21.002 225859 DEBUG nova.compute.manager [None req-81c2281d-dca4-4bae-856a-c49b3b605249 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] [instance: 33ba7a73-3233-40a3-a49a-e5bbd604dc3c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 20 15:05:21 compute-1 nova_compute[225855]: 2026-01-20 15:05:21.130 225859 DEBUG nova.compute.manager [None req-81c2281d-dca4-4bae-856a-c49b3b605249 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] [instance: 33ba7a73-3233-40a3-a49a-e5bbd604dc3c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 20 15:05:21 compute-1 nova_compute[225855]: 2026-01-20 15:05:21.132 225859 DEBUG nova.virt.libvirt.driver [None req-81c2281d-dca4-4bae-856a-c49b3b605249 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] [instance: 33ba7a73-3233-40a3-a49a-e5bbd604dc3c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 20 15:05:21 compute-1 nova_compute[225855]: 2026-01-20 15:05:21.133 225859 INFO nova.virt.libvirt.driver [None req-81c2281d-dca4-4bae-856a-c49b3b605249 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] [instance: 33ba7a73-3233-40a3-a49a-e5bbd604dc3c] Creating image(s)
Jan 20 15:05:21 compute-1 nova_compute[225855]: 2026-01-20 15:05:21.164 225859 DEBUG nova.storage.rbd_utils [None req-81c2281d-dca4-4bae-856a-c49b3b605249 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] rbd image 33ba7a73-3233-40a3-a49a-e5bbd604dc3c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:05:21 compute-1 nova_compute[225855]: 2026-01-20 15:05:21.195 225859 DEBUG nova.storage.rbd_utils [None req-81c2281d-dca4-4bae-856a-c49b3b605249 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] rbd image 33ba7a73-3233-40a3-a49a-e5bbd604dc3c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:05:21 compute-1 nova_compute[225855]: 2026-01-20 15:05:21.219 225859 DEBUG nova.storage.rbd_utils [None req-81c2281d-dca4-4bae-856a-c49b3b605249 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] rbd image 33ba7a73-3233-40a3-a49a-e5bbd604dc3c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:05:21 compute-1 nova_compute[225855]: 2026-01-20 15:05:21.223 225859 DEBUG oslo_concurrency.processutils [None req-81c2281d-dca4-4bae-856a-c49b3b605249 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:05:21 compute-1 nova_compute[225855]: 2026-01-20 15:05:21.286 225859 DEBUG nova.policy [None req-81c2281d-dca4-4bae-856a-c49b3b605249 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'bc554998e71a4322bdd27ac727a9044c', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'e142d118583b4f9ba3531bcf3838e256', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 20 15:05:21 compute-1 nova_compute[225855]: 2026-01-20 15:05:21.289 225859 DEBUG oslo_concurrency.processutils [None req-81c2281d-dca4-4bae-856a-c49b3b605249 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:05:21 compute-1 nova_compute[225855]: 2026-01-20 15:05:21.290 225859 DEBUG oslo_concurrency.lockutils [None req-81c2281d-dca4-4bae-856a-c49b3b605249 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] Acquiring lock "82d5c1918fd7c974214c7a48c1793a7a82560462" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:05:21 compute-1 nova_compute[225855]: 2026-01-20 15:05:21.290 225859 DEBUG oslo_concurrency.lockutils [None req-81c2281d-dca4-4bae-856a-c49b3b605249 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:05:21 compute-1 nova_compute[225855]: 2026-01-20 15:05:21.291 225859 DEBUG oslo_concurrency.lockutils [None req-81c2281d-dca4-4bae-856a-c49b3b605249 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:05:21 compute-1 nova_compute[225855]: 2026-01-20 15:05:21.314 225859 DEBUG nova.storage.rbd_utils [None req-81c2281d-dca4-4bae-856a-c49b3b605249 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] rbd image 33ba7a73-3233-40a3-a49a-e5bbd604dc3c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:05:21 compute-1 nova_compute[225855]: 2026-01-20 15:05:21.318 225859 DEBUG oslo_concurrency.processutils [None req-81c2281d-dca4-4bae-856a-c49b3b605249 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 33ba7a73-3233-40a3-a49a-e5bbd604dc3c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:05:21 compute-1 nova_compute[225855]: 2026-01-20 15:05:21.878 225859 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1768921506.8769155, 474cec75-3b01-411a-9074-75859d2a9ddf => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:05:21 compute-1 nova_compute[225855]: 2026-01-20 15:05:21.879 225859 INFO nova.compute.manager [-] [instance: 474cec75-3b01-411a-9074-75859d2a9ddf] VM Stopped (Lifecycle Event)
Jan 20 15:05:21 compute-1 nova_compute[225855]: 2026-01-20 15:05:21.910 225859 DEBUG nova.compute.manager [None req-6475071c-9006-46db-8395-65b257e850e3 - - - - - -] [instance: 474cec75-3b01-411a-9074-75859d2a9ddf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:05:21 compute-1 nova_compute[225855]: 2026-01-20 15:05:21.929 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:05:22 compute-1 ceph-mon[81775]: pgmap v2435: 321 pgs: 321 active+clean; 279 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 327 KiB/s rd, 984 KiB/s wr, 107 op/s
Jan 20 15:05:22 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/1800558050' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:05:22 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:05:22 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:05:22 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:05:22.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:05:22 compute-1 nova_compute[225855]: 2026-01-20 15:05:22.256 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:05:22 compute-1 nova_compute[225855]: 2026-01-20 15:05:22.336 225859 DEBUG oslo_concurrency.processutils [None req-81c2281d-dca4-4bae-856a-c49b3b605249 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 33ba7a73-3233-40a3-a49a-e5bbd604dc3c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.018s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:05:22 compute-1 nova_compute[225855]: 2026-01-20 15:05:22.404 225859 DEBUG nova.storage.rbd_utils [None req-81c2281d-dca4-4bae-856a-c49b3b605249 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] resizing rbd image 33ba7a73-3233-40a3-a49a-e5bbd604dc3c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 20 15:05:22 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:05:22 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:05:22 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:05:22.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:05:22 compute-1 nova_compute[225855]: 2026-01-20 15:05:22.658 225859 DEBUG nova.network.neutron [None req-81c2281d-dca4-4bae-856a-c49b3b605249 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] [instance: 33ba7a73-3233-40a3-a49a-e5bbd604dc3c] Successfully created port: 070862f1-1db2-45c2-9787-752e6d88449a _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 20 15:05:22 compute-1 nova_compute[225855]: 2026-01-20 15:05:22.754 225859 DEBUG nova.objects.instance [None req-81c2281d-dca4-4bae-856a-c49b3b605249 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] Lazy-loading 'migration_context' on Instance uuid 33ba7a73-3233-40a3-a49a-e5bbd604dc3c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:05:22 compute-1 nova_compute[225855]: 2026-01-20 15:05:22.768 225859 DEBUG nova.virt.libvirt.driver [None req-81c2281d-dca4-4bae-856a-c49b3b605249 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] [instance: 33ba7a73-3233-40a3-a49a-e5bbd604dc3c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 20 15:05:22 compute-1 nova_compute[225855]: 2026-01-20 15:05:22.768 225859 DEBUG nova.virt.libvirt.driver [None req-81c2281d-dca4-4bae-856a-c49b3b605249 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] [instance: 33ba7a73-3233-40a3-a49a-e5bbd604dc3c] Ensure instance console log exists: /var/lib/nova/instances/33ba7a73-3233-40a3-a49a-e5bbd604dc3c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 20 15:05:22 compute-1 nova_compute[225855]: 2026-01-20 15:05:22.769 225859 DEBUG oslo_concurrency.lockutils [None req-81c2281d-dca4-4bae-856a-c49b3b605249 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:05:22 compute-1 nova_compute[225855]: 2026-01-20 15:05:22.769 225859 DEBUG oslo_concurrency.lockutils [None req-81c2281d-dca4-4bae-856a-c49b3b605249 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:05:22 compute-1 nova_compute[225855]: 2026-01-20 15:05:22.769 225859 DEBUG oslo_concurrency.lockutils [None req-81c2281d-dca4-4bae-856a-c49b3b605249 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:05:23 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:05:23 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/3052434455' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:05:23 compute-1 ceph-mon[81775]: pgmap v2436: 321 pgs: 321 active+clean; 279 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 110 KiB/s rd, 123 KiB/s wr, 64 op/s
Jan 20 15:05:24 compute-1 nova_compute[225855]: 2026-01-20 15:05:24.095 225859 DEBUG nova.network.neutron [None req-81c2281d-dca4-4bae-856a-c49b3b605249 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] [instance: 33ba7a73-3233-40a3-a49a-e5bbd604dc3c] Successfully updated port: 070862f1-1db2-45c2-9787-752e6d88449a _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 20 15:05:24 compute-1 nova_compute[225855]: 2026-01-20 15:05:24.114 225859 DEBUG oslo_concurrency.lockutils [None req-81c2281d-dca4-4bae-856a-c49b3b605249 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] Acquiring lock "refresh_cache-33ba7a73-3233-40a3-a49a-e5bbd604dc3c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:05:24 compute-1 nova_compute[225855]: 2026-01-20 15:05:24.114 225859 DEBUG oslo_concurrency.lockutils [None req-81c2281d-dca4-4bae-856a-c49b3b605249 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] Acquired lock "refresh_cache-33ba7a73-3233-40a3-a49a-e5bbd604dc3c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:05:24 compute-1 nova_compute[225855]: 2026-01-20 15:05:24.115 225859 DEBUG nova.network.neutron [None req-81c2281d-dca4-4bae-856a-c49b3b605249 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] [instance: 33ba7a73-3233-40a3-a49a-e5bbd604dc3c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 15:05:24 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:05:24 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:05:24 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:05:24.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:05:24 compute-1 nova_compute[225855]: 2026-01-20 15:05:24.286 225859 DEBUG nova.compute.manager [req-1181a5ab-d1d8-40d2-adc7-b4d6ad28e428 req-723d4bed-c818-47d7-abad-c5f2d60f4592 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 33ba7a73-3233-40a3-a49a-e5bbd604dc3c] Received event network-changed-070862f1-1db2-45c2-9787-752e6d88449a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:05:24 compute-1 nova_compute[225855]: 2026-01-20 15:05:24.287 225859 DEBUG nova.compute.manager [req-1181a5ab-d1d8-40d2-adc7-b4d6ad28e428 req-723d4bed-c818-47d7-abad-c5f2d60f4592 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 33ba7a73-3233-40a3-a49a-e5bbd604dc3c] Refreshing instance network info cache due to event network-changed-070862f1-1db2-45c2-9787-752e6d88449a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 15:05:24 compute-1 nova_compute[225855]: 2026-01-20 15:05:24.287 225859 DEBUG oslo_concurrency.lockutils [req-1181a5ab-d1d8-40d2-adc7-b4d6ad28e428 req-723d4bed-c818-47d7-abad-c5f2d60f4592 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-33ba7a73-3233-40a3-a49a-e5bbd604dc3c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:05:24 compute-1 nova_compute[225855]: 2026-01-20 15:05:24.525 225859 DEBUG nova.network.neutron [None req-81c2281d-dca4-4bae-856a-c49b3b605249 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] [instance: 33ba7a73-3233-40a3-a49a-e5bbd604dc3c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 20 15:05:24 compute-1 ceph-mon[81775]: pgmap v2437: 321 pgs: 321 active+clean; 282 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 39 KiB/s rd, 1.1 MiB/s wr, 45 op/s
Jan 20 15:05:24 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:05:24 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:05:24 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:05:24.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:05:25 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e359 e359: 3 total, 3 up, 3 in
Jan 20 15:05:26 compute-1 nova_compute[225855]: 2026-01-20 15:05:26.020 225859 DEBUG nova.network.neutron [None req-81c2281d-dca4-4bae-856a-c49b3b605249 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] [instance: 33ba7a73-3233-40a3-a49a-e5bbd604dc3c] Updating instance_info_cache with network_info: [{"id": "070862f1-1db2-45c2-9787-752e6d88449a", "address": "fa:16:3e:e5:e7:09", "network": {"id": "8472bae1-476b-4100-b9fa-e8827bc4f7bf", "bridge": "br-int", "label": "tempest-TestStampPattern-1138931002-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e142d118583b4f9ba3531bcf3838e256", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap070862f1-1d", "ovs_interfaceid": "070862f1-1db2-45c2-9787-752e6d88449a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:05:26 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:05:26 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:05:26 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:05:26.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:05:26 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:05:26 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:05:26 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:05:26.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:05:26 compute-1 nova_compute[225855]: 2026-01-20 15:05:26.638 225859 DEBUG oslo_concurrency.lockutils [None req-81c2281d-dca4-4bae-856a-c49b3b605249 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] Releasing lock "refresh_cache-33ba7a73-3233-40a3-a49a-e5bbd604dc3c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:05:26 compute-1 nova_compute[225855]: 2026-01-20 15:05:26.639 225859 DEBUG nova.compute.manager [None req-81c2281d-dca4-4bae-856a-c49b3b605249 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] [instance: 33ba7a73-3233-40a3-a49a-e5bbd604dc3c] Instance network_info: |[{"id": "070862f1-1db2-45c2-9787-752e6d88449a", "address": "fa:16:3e:e5:e7:09", "network": {"id": "8472bae1-476b-4100-b9fa-e8827bc4f7bf", "bridge": "br-int", "label": "tempest-TestStampPattern-1138931002-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e142d118583b4f9ba3531bcf3838e256", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap070862f1-1d", "ovs_interfaceid": "070862f1-1db2-45c2-9787-752e6d88449a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 20 15:05:26 compute-1 nova_compute[225855]: 2026-01-20 15:05:26.639 225859 DEBUG oslo_concurrency.lockutils [req-1181a5ab-d1d8-40d2-adc7-b4d6ad28e428 req-723d4bed-c818-47d7-abad-c5f2d60f4592 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-33ba7a73-3233-40a3-a49a-e5bbd604dc3c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:05:26 compute-1 nova_compute[225855]: 2026-01-20 15:05:26.639 225859 DEBUG nova.network.neutron [req-1181a5ab-d1d8-40d2-adc7-b4d6ad28e428 req-723d4bed-c818-47d7-abad-c5f2d60f4592 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 33ba7a73-3233-40a3-a49a-e5bbd604dc3c] Refreshing network info cache for port 070862f1-1db2-45c2-9787-752e6d88449a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 15:05:26 compute-1 nova_compute[225855]: 2026-01-20 15:05:26.642 225859 DEBUG nova.virt.libvirt.driver [None req-81c2281d-dca4-4bae-856a-c49b3b605249 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] [instance: 33ba7a73-3233-40a3-a49a-e5bbd604dc3c] Start _get_guest_xml network_info=[{"id": "070862f1-1db2-45c2-9787-752e6d88449a", "address": "fa:16:3e:e5:e7:09", "network": {"id": "8472bae1-476b-4100-b9fa-e8827bc4f7bf", "bridge": "br-int", "label": "tempest-TestStampPattern-1138931002-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e142d118583b4f9ba3531bcf3838e256", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap070862f1-1d", "ovs_interfaceid": "070862f1-1db2-45c2-9787-752e6d88449a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'device_type': 'disk', 'encryption_options': None, 'size': 0, 'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'guest_format': None, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'image_id': 'a32b3e07-16d8-46fd-9a7b-c242c432fcf9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 20 15:05:26 compute-1 nova_compute[225855]: 2026-01-20 15:05:26.645 225859 WARNING nova.virt.libvirt.driver [None req-81c2281d-dca4-4bae-856a-c49b3b605249 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 15:05:26 compute-1 nova_compute[225855]: 2026-01-20 15:05:26.650 225859 DEBUG nova.virt.libvirt.host [None req-81c2281d-dca4-4bae-856a-c49b3b605249 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 20 15:05:26 compute-1 nova_compute[225855]: 2026-01-20 15:05:26.650 225859 DEBUG nova.virt.libvirt.host [None req-81c2281d-dca4-4bae-856a-c49b3b605249 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 20 15:05:26 compute-1 nova_compute[225855]: 2026-01-20 15:05:26.653 225859 DEBUG nova.virt.libvirt.host [None req-81c2281d-dca4-4bae-856a-c49b3b605249 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 20 15:05:26 compute-1 nova_compute[225855]: 2026-01-20 15:05:26.653 225859 DEBUG nova.virt.libvirt.host [None req-81c2281d-dca4-4bae-856a-c49b3b605249 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 20 15:05:26 compute-1 nova_compute[225855]: 2026-01-20 15:05:26.654 225859 DEBUG nova.virt.libvirt.driver [None req-81c2281d-dca4-4bae-856a-c49b3b605249 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 20 15:05:26 compute-1 nova_compute[225855]: 2026-01-20 15:05:26.655 225859 DEBUG nova.virt.hardware [None req-81c2281d-dca4-4bae-856a-c49b3b605249 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-20T14:21:55Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='522deaab-a741-4dbb-932d-d8b13a211c33',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 20 15:05:26 compute-1 nova_compute[225855]: 2026-01-20 15:05:26.655 225859 DEBUG nova.virt.hardware [None req-81c2281d-dca4-4bae-856a-c49b3b605249 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 20 15:05:26 compute-1 nova_compute[225855]: 2026-01-20 15:05:26.655 225859 DEBUG nova.virt.hardware [None req-81c2281d-dca4-4bae-856a-c49b3b605249 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 20 15:05:26 compute-1 nova_compute[225855]: 2026-01-20 15:05:26.655 225859 DEBUG nova.virt.hardware [None req-81c2281d-dca4-4bae-856a-c49b3b605249 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 20 15:05:26 compute-1 nova_compute[225855]: 2026-01-20 15:05:26.655 225859 DEBUG nova.virt.hardware [None req-81c2281d-dca4-4bae-856a-c49b3b605249 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 20 15:05:26 compute-1 nova_compute[225855]: 2026-01-20 15:05:26.655 225859 DEBUG nova.virt.hardware [None req-81c2281d-dca4-4bae-856a-c49b3b605249 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 20 15:05:26 compute-1 nova_compute[225855]: 2026-01-20 15:05:26.656 225859 DEBUG nova.virt.hardware [None req-81c2281d-dca4-4bae-856a-c49b3b605249 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 20 15:05:26 compute-1 nova_compute[225855]: 2026-01-20 15:05:26.656 225859 DEBUG nova.virt.hardware [None req-81c2281d-dca4-4bae-856a-c49b3b605249 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 20 15:05:26 compute-1 nova_compute[225855]: 2026-01-20 15:05:26.656 225859 DEBUG nova.virt.hardware [None req-81c2281d-dca4-4bae-856a-c49b3b605249 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 20 15:05:26 compute-1 nova_compute[225855]: 2026-01-20 15:05:26.656 225859 DEBUG nova.virt.hardware [None req-81c2281d-dca4-4bae-856a-c49b3b605249 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 20 15:05:26 compute-1 nova_compute[225855]: 2026-01-20 15:05:26.657 225859 DEBUG nova.virt.hardware [None req-81c2281d-dca4-4bae-856a-c49b3b605249 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 20 15:05:26 compute-1 nova_compute[225855]: 2026-01-20 15:05:26.659 225859 DEBUG oslo_concurrency.processutils [None req-81c2281d-dca4-4bae-856a-c49b3b605249 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:05:26 compute-1 ceph-mon[81775]: osdmap e359: 3 total, 3 up, 3 in
Jan 20 15:05:26 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/883178459' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:05:26 compute-1 ceph-mon[81775]: pgmap v2439: 321 pgs: 321 active+clean; 246 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 93 op/s
Jan 20 15:05:26 compute-1 nova_compute[225855]: 2026-01-20 15:05:26.933 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:05:27 compute-1 podman[290673]: 2026-01-20 15:05:27.037959855 +0000 UTC m=+0.083163601 container health_status 72be95c12e041eb2cd1a16ad70dfa1391d9f03a5f3180a614ead9459dcfcd46f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.build-date=20251202, tcib_managed=true)
Jan 20 15:05:27 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 15:05:27 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2898267171' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:05:27 compute-1 nova_compute[225855]: 2026-01-20 15:05:27.113 225859 DEBUG oslo_concurrency.processutils [None req-81c2281d-dca4-4bae-856a-c49b3b605249 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:05:27 compute-1 nova_compute[225855]: 2026-01-20 15:05:27.143 225859 DEBUG nova.storage.rbd_utils [None req-81c2281d-dca4-4bae-856a-c49b3b605249 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] rbd image 33ba7a73-3233-40a3-a49a-e5bbd604dc3c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:05:27 compute-1 nova_compute[225855]: 2026-01-20 15:05:27.148 225859 DEBUG oslo_concurrency.processutils [None req-81c2281d-dca4-4bae-856a-c49b3b605249 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:05:27 compute-1 nova_compute[225855]: 2026-01-20 15:05:27.259 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:05:27 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 15:05:27 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3990956425' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:05:27 compute-1 nova_compute[225855]: 2026-01-20 15:05:27.594 225859 DEBUG oslo_concurrency.processutils [None req-81c2281d-dca4-4bae-856a-c49b3b605249 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:05:27 compute-1 nova_compute[225855]: 2026-01-20 15:05:27.595 225859 DEBUG nova.virt.libvirt.vif [None req-81c2281d-dca4-4bae-856a-c49b3b605249 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T15:05:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestStampPattern-server-2111868448',display_name='tempest-TestStampPattern-server-2111868448',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-teststamppattern-server-2111868448',id=161,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHW99EAKkcMHbb6foGeGxm9beD/C9AeSuQLW3fqIuoocya0hep1/utcjh4cUxZzvt5K+5yMQG3K45jiLKihqKM6cawBqTQvgzcywKN5pk06AjS3tvq9GuiAvDAys6caVkA==',key_name='tempest-TestStampPattern-1928143162',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e142d118583b4f9ba3531bcf3838e256',ramdisk_id='',reservation_id='r-7ei3hy41',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestStampPattern-487600181',owner_user_name='tempest-TestStampPattern-487600181-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T15:05:21Z,user_data=None,user_id='bc554998e71a4322bdd27ac727a9044c',uuid=33ba7a73-3233-40a3-a49a-e5bbd604dc3c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "070862f1-1db2-45c2-9787-752e6d88449a", "address": "fa:16:3e:e5:e7:09", "network": {"id": "8472bae1-476b-4100-b9fa-e8827bc4f7bf", "bridge": "br-int", "label": "tempest-TestStampPattern-1138931002-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e142d118583b4f9ba3531bcf3838e256", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap070862f1-1d", "ovs_interfaceid": "070862f1-1db2-45c2-9787-752e6d88449a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 20 15:05:27 compute-1 nova_compute[225855]: 2026-01-20 15:05:27.596 225859 DEBUG nova.network.os_vif_util [None req-81c2281d-dca4-4bae-856a-c49b3b605249 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] Converting VIF {"id": "070862f1-1db2-45c2-9787-752e6d88449a", "address": "fa:16:3e:e5:e7:09", "network": {"id": "8472bae1-476b-4100-b9fa-e8827bc4f7bf", "bridge": "br-int", "label": "tempest-TestStampPattern-1138931002-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e142d118583b4f9ba3531bcf3838e256", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap070862f1-1d", "ovs_interfaceid": "070862f1-1db2-45c2-9787-752e6d88449a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 15:05:27 compute-1 nova_compute[225855]: 2026-01-20 15:05:27.597 225859 DEBUG nova.network.os_vif_util [None req-81c2281d-dca4-4bae-856a-c49b3b605249 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e5:e7:09,bridge_name='br-int',has_traffic_filtering=True,id=070862f1-1db2-45c2-9787-752e6d88449a,network=Network(8472bae1-476b-4100-b9fa-e8827bc4f7bf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap070862f1-1d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 15:05:27 compute-1 nova_compute[225855]: 2026-01-20 15:05:27.598 225859 DEBUG nova.objects.instance [None req-81c2281d-dca4-4bae-856a-c49b3b605249 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] Lazy-loading 'pci_devices' on Instance uuid 33ba7a73-3233-40a3-a49a-e5bbd604dc3c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:05:27 compute-1 nova_compute[225855]: 2026-01-20 15:05:27.615 225859 DEBUG nova.virt.libvirt.driver [None req-81c2281d-dca4-4bae-856a-c49b3b605249 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] [instance: 33ba7a73-3233-40a3-a49a-e5bbd604dc3c] End _get_guest_xml xml=<domain type="kvm">
Jan 20 15:05:27 compute-1 nova_compute[225855]:   <uuid>33ba7a73-3233-40a3-a49a-e5bbd604dc3c</uuid>
Jan 20 15:05:27 compute-1 nova_compute[225855]:   <name>instance-000000a1</name>
Jan 20 15:05:27 compute-1 nova_compute[225855]:   <memory>131072</memory>
Jan 20 15:05:27 compute-1 nova_compute[225855]:   <vcpu>1</vcpu>
Jan 20 15:05:27 compute-1 nova_compute[225855]:   <metadata>
Jan 20 15:05:27 compute-1 nova_compute[225855]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 15:05:27 compute-1 nova_compute[225855]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 15:05:27 compute-1 nova_compute[225855]:       <nova:name>tempest-TestStampPattern-server-2111868448</nova:name>
Jan 20 15:05:27 compute-1 nova_compute[225855]:       <nova:creationTime>2026-01-20 15:05:26</nova:creationTime>
Jan 20 15:05:27 compute-1 nova_compute[225855]:       <nova:flavor name="m1.nano">
Jan 20 15:05:27 compute-1 nova_compute[225855]:         <nova:memory>128</nova:memory>
Jan 20 15:05:27 compute-1 nova_compute[225855]:         <nova:disk>1</nova:disk>
Jan 20 15:05:27 compute-1 nova_compute[225855]:         <nova:swap>0</nova:swap>
Jan 20 15:05:27 compute-1 nova_compute[225855]:         <nova:ephemeral>0</nova:ephemeral>
Jan 20 15:05:27 compute-1 nova_compute[225855]:         <nova:vcpus>1</nova:vcpus>
Jan 20 15:05:27 compute-1 nova_compute[225855]:       </nova:flavor>
Jan 20 15:05:27 compute-1 nova_compute[225855]:       <nova:owner>
Jan 20 15:05:27 compute-1 nova_compute[225855]:         <nova:user uuid="bc554998e71a4322bdd27ac727a9044c">tempest-TestStampPattern-487600181-project-member</nova:user>
Jan 20 15:05:27 compute-1 nova_compute[225855]:         <nova:project uuid="e142d118583b4f9ba3531bcf3838e256">tempest-TestStampPattern-487600181</nova:project>
Jan 20 15:05:27 compute-1 nova_compute[225855]:       </nova:owner>
Jan 20 15:05:27 compute-1 nova_compute[225855]:       <nova:root type="image" uuid="a32b3e07-16d8-46fd-9a7b-c242c432fcf9"/>
Jan 20 15:05:27 compute-1 nova_compute[225855]:       <nova:ports>
Jan 20 15:05:27 compute-1 nova_compute[225855]:         <nova:port uuid="070862f1-1db2-45c2-9787-752e6d88449a">
Jan 20 15:05:27 compute-1 nova_compute[225855]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Jan 20 15:05:27 compute-1 nova_compute[225855]:         </nova:port>
Jan 20 15:05:27 compute-1 nova_compute[225855]:       </nova:ports>
Jan 20 15:05:27 compute-1 nova_compute[225855]:     </nova:instance>
Jan 20 15:05:27 compute-1 nova_compute[225855]:   </metadata>
Jan 20 15:05:27 compute-1 nova_compute[225855]:   <sysinfo type="smbios">
Jan 20 15:05:27 compute-1 nova_compute[225855]:     <system>
Jan 20 15:05:27 compute-1 nova_compute[225855]:       <entry name="manufacturer">RDO</entry>
Jan 20 15:05:27 compute-1 nova_compute[225855]:       <entry name="product">OpenStack Compute</entry>
Jan 20 15:05:27 compute-1 nova_compute[225855]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 15:05:27 compute-1 nova_compute[225855]:       <entry name="serial">33ba7a73-3233-40a3-a49a-e5bbd604dc3c</entry>
Jan 20 15:05:27 compute-1 nova_compute[225855]:       <entry name="uuid">33ba7a73-3233-40a3-a49a-e5bbd604dc3c</entry>
Jan 20 15:05:27 compute-1 nova_compute[225855]:       <entry name="family">Virtual Machine</entry>
Jan 20 15:05:27 compute-1 nova_compute[225855]:     </system>
Jan 20 15:05:27 compute-1 nova_compute[225855]:   </sysinfo>
Jan 20 15:05:27 compute-1 nova_compute[225855]:   <os>
Jan 20 15:05:27 compute-1 nova_compute[225855]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 20 15:05:27 compute-1 nova_compute[225855]:     <boot dev="hd"/>
Jan 20 15:05:27 compute-1 nova_compute[225855]:     <smbios mode="sysinfo"/>
Jan 20 15:05:27 compute-1 nova_compute[225855]:   </os>
Jan 20 15:05:27 compute-1 nova_compute[225855]:   <features>
Jan 20 15:05:27 compute-1 nova_compute[225855]:     <acpi/>
Jan 20 15:05:27 compute-1 nova_compute[225855]:     <apic/>
Jan 20 15:05:27 compute-1 nova_compute[225855]:     <vmcoreinfo/>
Jan 20 15:05:27 compute-1 nova_compute[225855]:   </features>
Jan 20 15:05:27 compute-1 nova_compute[225855]:   <clock offset="utc">
Jan 20 15:05:27 compute-1 nova_compute[225855]:     <timer name="pit" tickpolicy="delay"/>
Jan 20 15:05:27 compute-1 nova_compute[225855]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 20 15:05:27 compute-1 nova_compute[225855]:     <timer name="hpet" present="no"/>
Jan 20 15:05:27 compute-1 nova_compute[225855]:   </clock>
Jan 20 15:05:27 compute-1 nova_compute[225855]:   <cpu mode="custom" match="exact">
Jan 20 15:05:27 compute-1 nova_compute[225855]:     <model>Nehalem</model>
Jan 20 15:05:27 compute-1 nova_compute[225855]:     <topology sockets="1" cores="1" threads="1"/>
Jan 20 15:05:27 compute-1 nova_compute[225855]:   </cpu>
Jan 20 15:05:27 compute-1 nova_compute[225855]:   <devices>
Jan 20 15:05:27 compute-1 nova_compute[225855]:     <disk type="network" device="disk">
Jan 20 15:05:27 compute-1 nova_compute[225855]:       <driver type="raw" cache="none"/>
Jan 20 15:05:27 compute-1 nova_compute[225855]:       <source protocol="rbd" name="vms/33ba7a73-3233-40a3-a49a-e5bbd604dc3c_disk">
Jan 20 15:05:27 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 15:05:27 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 15:05:27 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 15:05:27 compute-1 nova_compute[225855]:       </source>
Jan 20 15:05:27 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 15:05:27 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 15:05:27 compute-1 nova_compute[225855]:       </auth>
Jan 20 15:05:27 compute-1 nova_compute[225855]:       <target dev="vda" bus="virtio"/>
Jan 20 15:05:27 compute-1 nova_compute[225855]:     </disk>
Jan 20 15:05:27 compute-1 nova_compute[225855]:     <disk type="network" device="cdrom">
Jan 20 15:05:27 compute-1 nova_compute[225855]:       <driver type="raw" cache="none"/>
Jan 20 15:05:27 compute-1 nova_compute[225855]:       <source protocol="rbd" name="vms/33ba7a73-3233-40a3-a49a-e5bbd604dc3c_disk.config">
Jan 20 15:05:27 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 15:05:27 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 15:05:27 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 15:05:27 compute-1 nova_compute[225855]:       </source>
Jan 20 15:05:27 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 15:05:27 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 15:05:27 compute-1 nova_compute[225855]:       </auth>
Jan 20 15:05:27 compute-1 nova_compute[225855]:       <target dev="sda" bus="sata"/>
Jan 20 15:05:27 compute-1 nova_compute[225855]:     </disk>
Jan 20 15:05:27 compute-1 nova_compute[225855]:     <interface type="ethernet">
Jan 20 15:05:27 compute-1 nova_compute[225855]:       <mac address="fa:16:3e:e5:e7:09"/>
Jan 20 15:05:27 compute-1 nova_compute[225855]:       <model type="virtio"/>
Jan 20 15:05:27 compute-1 nova_compute[225855]:       <driver name="vhost" rx_queue_size="512"/>
Jan 20 15:05:27 compute-1 nova_compute[225855]:       <mtu size="1442"/>
Jan 20 15:05:27 compute-1 nova_compute[225855]:       <target dev="tap070862f1-1d"/>
Jan 20 15:05:27 compute-1 nova_compute[225855]:     </interface>
Jan 20 15:05:27 compute-1 nova_compute[225855]:     <serial type="pty">
Jan 20 15:05:27 compute-1 nova_compute[225855]:       <log file="/var/lib/nova/instances/33ba7a73-3233-40a3-a49a-e5bbd604dc3c/console.log" append="off"/>
Jan 20 15:05:27 compute-1 nova_compute[225855]:     </serial>
Jan 20 15:05:27 compute-1 nova_compute[225855]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 20 15:05:27 compute-1 nova_compute[225855]:     <video>
Jan 20 15:05:27 compute-1 nova_compute[225855]:       <model type="virtio"/>
Jan 20 15:05:27 compute-1 nova_compute[225855]:     </video>
Jan 20 15:05:27 compute-1 nova_compute[225855]:     <input type="tablet" bus="usb"/>
Jan 20 15:05:27 compute-1 nova_compute[225855]:     <rng model="virtio">
Jan 20 15:05:27 compute-1 nova_compute[225855]:       <backend model="random">/dev/urandom</backend>
Jan 20 15:05:27 compute-1 nova_compute[225855]:     </rng>
Jan 20 15:05:27 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root"/>
Jan 20 15:05:27 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:05:27 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:05:27 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:05:27 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:05:27 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:05:27 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:05:27 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:05:27 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:05:27 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:05:27 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:05:27 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:05:27 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:05:27 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:05:27 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:05:27 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:05:27 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:05:27 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:05:27 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:05:27 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:05:27 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:05:27 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:05:27 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:05:27 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:05:27 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:05:27 compute-1 nova_compute[225855]:     <controller type="usb" index="0"/>
Jan 20 15:05:27 compute-1 nova_compute[225855]:     <memballoon model="virtio">
Jan 20 15:05:27 compute-1 nova_compute[225855]:       <stats period="10"/>
Jan 20 15:05:27 compute-1 nova_compute[225855]:     </memballoon>
Jan 20 15:05:27 compute-1 nova_compute[225855]:   </devices>
Jan 20 15:05:27 compute-1 nova_compute[225855]: </domain>
Jan 20 15:05:27 compute-1 nova_compute[225855]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 20 15:05:27 compute-1 nova_compute[225855]: 2026-01-20 15:05:27.617 225859 DEBUG nova.compute.manager [None req-81c2281d-dca4-4bae-856a-c49b3b605249 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] [instance: 33ba7a73-3233-40a3-a49a-e5bbd604dc3c] Preparing to wait for external event network-vif-plugged-070862f1-1db2-45c2-9787-752e6d88449a prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 20 15:05:27 compute-1 nova_compute[225855]: 2026-01-20 15:05:27.617 225859 DEBUG oslo_concurrency.lockutils [None req-81c2281d-dca4-4bae-856a-c49b3b605249 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] Acquiring lock "33ba7a73-3233-40a3-a49a-e5bbd604dc3c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:05:27 compute-1 nova_compute[225855]: 2026-01-20 15:05:27.617 225859 DEBUG oslo_concurrency.lockutils [None req-81c2281d-dca4-4bae-856a-c49b3b605249 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] Lock "33ba7a73-3233-40a3-a49a-e5bbd604dc3c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:05:27 compute-1 nova_compute[225855]: 2026-01-20 15:05:27.618 225859 DEBUG oslo_concurrency.lockutils [None req-81c2281d-dca4-4bae-856a-c49b3b605249 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] Lock "33ba7a73-3233-40a3-a49a-e5bbd604dc3c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:05:27 compute-1 nova_compute[225855]: 2026-01-20 15:05:27.618 225859 DEBUG nova.virt.libvirt.vif [None req-81c2281d-dca4-4bae-856a-c49b3b605249 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T15:05:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestStampPattern-server-2111868448',display_name='tempest-TestStampPattern-server-2111868448',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-teststamppattern-server-2111868448',id=161,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHW99EAKkcMHbb6foGeGxm9beD/C9AeSuQLW3fqIuoocya0hep1/utcjh4cUxZzvt5K+5yMQG3K45jiLKihqKM6cawBqTQvgzcywKN5pk06AjS3tvq9GuiAvDAys6caVkA==',key_name='tempest-TestStampPattern-1928143162',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e142d118583b4f9ba3531bcf3838e256',ramdisk_id='',reservation_id='r-7ei3hy41',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestStampPattern-487600181',owner_user_name='tempest-TestStampPattern-487600181-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T15:05:21Z,user_data=None,user_id='bc554998e71a4322bdd27ac727a9044c',uuid=33ba7a73-3233-40a3-a49a-e5bbd604dc3c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "070862f1-1db2-45c2-9787-752e6d88449a", "address": "fa:16:3e:e5:e7:09", "network": {"id": "8472bae1-476b-4100-b9fa-e8827bc4f7bf", "bridge": "br-int", "label": "tempest-TestStampPattern-1138931002-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e142d118583b4f9ba3531bcf3838e256", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap070862f1-1d", "ovs_interfaceid": "070862f1-1db2-45c2-9787-752e6d88449a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 20 15:05:27 compute-1 nova_compute[225855]: 2026-01-20 15:05:27.619 225859 DEBUG nova.network.os_vif_util [None req-81c2281d-dca4-4bae-856a-c49b3b605249 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] Converting VIF {"id": "070862f1-1db2-45c2-9787-752e6d88449a", "address": "fa:16:3e:e5:e7:09", "network": {"id": "8472bae1-476b-4100-b9fa-e8827bc4f7bf", "bridge": "br-int", "label": "tempest-TestStampPattern-1138931002-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e142d118583b4f9ba3531bcf3838e256", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap070862f1-1d", "ovs_interfaceid": "070862f1-1db2-45c2-9787-752e6d88449a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 15:05:27 compute-1 nova_compute[225855]: 2026-01-20 15:05:27.619 225859 DEBUG nova.network.os_vif_util [None req-81c2281d-dca4-4bae-856a-c49b3b605249 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e5:e7:09,bridge_name='br-int',has_traffic_filtering=True,id=070862f1-1db2-45c2-9787-752e6d88449a,network=Network(8472bae1-476b-4100-b9fa-e8827bc4f7bf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap070862f1-1d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 15:05:27 compute-1 nova_compute[225855]: 2026-01-20 15:05:27.620 225859 DEBUG os_vif [None req-81c2281d-dca4-4bae-856a-c49b3b605249 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e5:e7:09,bridge_name='br-int',has_traffic_filtering=True,id=070862f1-1db2-45c2-9787-752e6d88449a,network=Network(8472bae1-476b-4100-b9fa-e8827bc4f7bf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap070862f1-1d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 20 15:05:27 compute-1 nova_compute[225855]: 2026-01-20 15:05:27.620 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:05:27 compute-1 nova_compute[225855]: 2026-01-20 15:05:27.620 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:05:27 compute-1 nova_compute[225855]: 2026-01-20 15:05:27.621 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 15:05:27 compute-1 nova_compute[225855]: 2026-01-20 15:05:27.624 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:05:27 compute-1 nova_compute[225855]: 2026-01-20 15:05:27.625 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap070862f1-1d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:05:27 compute-1 nova_compute[225855]: 2026-01-20 15:05:27.625 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap070862f1-1d, col_values=(('external_ids', {'iface-id': '070862f1-1db2-45c2-9787-752e6d88449a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e5:e7:09', 'vm-uuid': '33ba7a73-3233-40a3-a49a-e5bbd604dc3c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:05:27 compute-1 nova_compute[225855]: 2026-01-20 15:05:27.639 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:05:27 compute-1 NetworkManager[49104]: <info>  [1768921527.6400] manager: (tap070862f1-1d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/285)
Jan 20 15:05:27 compute-1 nova_compute[225855]: 2026-01-20 15:05:27.642 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 15:05:27 compute-1 nova_compute[225855]: 2026-01-20 15:05:27.644 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:05:27 compute-1 nova_compute[225855]: 2026-01-20 15:05:27.644 225859 INFO os_vif [None req-81c2281d-dca4-4bae-856a-c49b3b605249 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e5:e7:09,bridge_name='br-int',has_traffic_filtering=True,id=070862f1-1db2-45c2-9787-752e6d88449a,network=Network(8472bae1-476b-4100-b9fa-e8827bc4f7bf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap070862f1-1d')
Jan 20 15:05:27 compute-1 nova_compute[225855]: 2026-01-20 15:05:27.738 225859 DEBUG nova.virt.libvirt.driver [None req-81c2281d-dca4-4bae-856a-c49b3b605249 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 15:05:27 compute-1 nova_compute[225855]: 2026-01-20 15:05:27.738 225859 DEBUG nova.virt.libvirt.driver [None req-81c2281d-dca4-4bae-856a-c49b3b605249 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 15:05:27 compute-1 nova_compute[225855]: 2026-01-20 15:05:27.739 225859 DEBUG nova.virt.libvirt.driver [None req-81c2281d-dca4-4bae-856a-c49b3b605249 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] No VIF found with MAC fa:16:3e:e5:e7:09, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 20 15:05:27 compute-1 nova_compute[225855]: 2026-01-20 15:05:27.739 225859 INFO nova.virt.libvirt.driver [None req-81c2281d-dca4-4bae-856a-c49b3b605249 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] [instance: 33ba7a73-3233-40a3-a49a-e5bbd604dc3c] Using config drive
Jan 20 15:05:27 compute-1 nova_compute[225855]: 2026-01-20 15:05:27.768 225859 DEBUG nova.storage.rbd_utils [None req-81c2281d-dca4-4bae-856a-c49b3b605249 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] rbd image 33ba7a73-3233-40a3-a49a-e5bbd604dc3c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:05:27 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/2898267171' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:05:27 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/3990956425' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:05:28 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e359 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:05:28 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:05:28 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:05:28 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:05:28.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:05:28 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:05:28 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:05:28 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:05:28.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:05:28 compute-1 nova_compute[225855]: 2026-01-20 15:05:28.977 225859 INFO nova.virt.libvirt.driver [None req-81c2281d-dca4-4bae-856a-c49b3b605249 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] [instance: 33ba7a73-3233-40a3-a49a-e5bbd604dc3c] Creating config drive at /var/lib/nova/instances/33ba7a73-3233-40a3-a49a-e5bbd604dc3c/disk.config
Jan 20 15:05:28 compute-1 nova_compute[225855]: 2026-01-20 15:05:28.983 225859 DEBUG oslo_concurrency.processutils [None req-81c2281d-dca4-4bae-856a-c49b3b605249 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/33ba7a73-3233-40a3-a49a-e5bbd604dc3c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpr_r57kcm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:05:29 compute-1 nova_compute[225855]: 2026-01-20 15:05:29.115 225859 DEBUG oslo_concurrency.processutils [None req-81c2281d-dca4-4bae-856a-c49b3b605249 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/33ba7a73-3233-40a3-a49a-e5bbd604dc3c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpr_r57kcm" returned: 0 in 0.132s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:05:29 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/3884054008' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:05:29 compute-1 ceph-mon[81775]: pgmap v2440: 321 pgs: 321 active+clean; 246 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 93 op/s
Jan 20 15:05:29 compute-1 nova_compute[225855]: 2026-01-20 15:05:29.299 225859 DEBUG nova.storage.rbd_utils [None req-81c2281d-dca4-4bae-856a-c49b3b605249 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] rbd image 33ba7a73-3233-40a3-a49a-e5bbd604dc3c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:05:29 compute-1 nova_compute[225855]: 2026-01-20 15:05:29.304 225859 DEBUG oslo_concurrency.processutils [None req-81c2281d-dca4-4bae-856a-c49b3b605249 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/33ba7a73-3233-40a3-a49a-e5bbd604dc3c/disk.config 33ba7a73-3233-40a3-a49a-e5bbd604dc3c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:05:29 compute-1 nova_compute[225855]: 2026-01-20 15:05:29.574 225859 DEBUG oslo_concurrency.processutils [None req-81c2281d-dca4-4bae-856a-c49b3b605249 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/33ba7a73-3233-40a3-a49a-e5bbd604dc3c/disk.config 33ba7a73-3233-40a3-a49a-e5bbd604dc3c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.270s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:05:29 compute-1 nova_compute[225855]: 2026-01-20 15:05:29.575 225859 INFO nova.virt.libvirt.driver [None req-81c2281d-dca4-4bae-856a-c49b3b605249 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] [instance: 33ba7a73-3233-40a3-a49a-e5bbd604dc3c] Deleting local config drive /var/lib/nova/instances/33ba7a73-3233-40a3-a49a-e5bbd604dc3c/disk.config because it was imported into RBD.
Jan 20 15:05:29 compute-1 kernel: tap070862f1-1d: entered promiscuous mode
Jan 20 15:05:29 compute-1 NetworkManager[49104]: <info>  [1768921529.6277] manager: (tap070862f1-1d): new Tun device (/org/freedesktop/NetworkManager/Devices/286)
Jan 20 15:05:29 compute-1 ovn_controller[130490]: 2026-01-20T15:05:29Z|00682|binding|INFO|Claiming lport 070862f1-1db2-45c2-9787-752e6d88449a for this chassis.
Jan 20 15:05:29 compute-1 ovn_controller[130490]: 2026-01-20T15:05:29Z|00683|binding|INFO|070862f1-1db2-45c2-9787-752e6d88449a: Claiming fa:16:3e:e5:e7:09 10.100.0.7
Jan 20 15:05:29 compute-1 nova_compute[225855]: 2026-01-20 15:05:29.628 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:05:29 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:05:29.641 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e5:e7:09 10.100.0.7'], port_security=['fa:16:3e:e5:e7:09 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '33ba7a73-3233-40a3-a49a-e5bbd604dc3c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8472bae1-476b-4100-b9fa-e8827bc4f7bf', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e142d118583b4f9ba3531bcf3838e256', 'neutron:revision_number': '2', 'neutron:security_group_ids': '37efc868-18af-48b7-8d56-e37fd1ec4df0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f9deb561-4473-4aa7-8b6f-d70e20e7cf6d, chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], logical_port=070862f1-1db2-45c2-9787-752e6d88449a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:05:29 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:05:29.642 140354 INFO neutron.agent.ovn.metadata.agent [-] Port 070862f1-1db2-45c2-9787-752e6d88449a in datapath 8472bae1-476b-4100-b9fa-e8827bc4f7bf bound to our chassis
Jan 20 15:05:29 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:05:29.644 140354 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8472bae1-476b-4100-b9fa-e8827bc4f7bf
Jan 20 15:05:29 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:05:29.657 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[3fd01cc5-89dd-495d-b639-f5c8a8a23ee8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:05:29 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:05:29.658 140354 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap8472bae1-41 in ovnmeta-8472bae1-476b-4100-b9fa-e8827bc4f7bf namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 20 15:05:29 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:05:29.660 229707 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap8472bae1-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 20 15:05:29 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:05:29.660 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[dd4c6243-e40e-4bd8-ae0a-30494f77c964]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:05:29 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:05:29.661 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[2ce31a6d-f4f4-43d9-8067-498437602899]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:05:29 compute-1 systemd-udevd[290818]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 15:05:29 compute-1 systemd-machined[194361]: New machine qemu-80-instance-000000a1.
Jan 20 15:05:29 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:05:29.672 140466 DEBUG oslo.privsep.daemon [-] privsep: reply[9df1230a-87bb-42f6-98ee-bcc629fc304a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:05:29 compute-1 NetworkManager[49104]: <info>  [1768921529.6819] device (tap070862f1-1d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 20 15:05:29 compute-1 NetworkManager[49104]: <info>  [1768921529.6826] device (tap070862f1-1d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 20 15:05:29 compute-1 nova_compute[225855]: 2026-01-20 15:05:29.696 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:05:29 compute-1 systemd[1]: Started Virtual Machine qemu-80-instance-000000a1.
Jan 20 15:05:29 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:05:29.698 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[72ca53cd-e252-4d1c-9811-82ce42b7b397]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:05:29 compute-1 nova_compute[225855]: 2026-01-20 15:05:29.704 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:05:29 compute-1 ovn_controller[130490]: 2026-01-20T15:05:29Z|00684|binding|INFO|Setting lport 070862f1-1db2-45c2-9787-752e6d88449a ovn-installed in OVS
Jan 20 15:05:29 compute-1 ovn_controller[130490]: 2026-01-20T15:05:29Z|00685|binding|INFO|Setting lport 070862f1-1db2-45c2-9787-752e6d88449a up in Southbound
Jan 20 15:05:29 compute-1 nova_compute[225855]: 2026-01-20 15:05:29.708 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:05:29 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:05:29.725 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[e4923156-881d-4fba-89b5-827249749980]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:05:29 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:05:29.731 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[3e5e8b41-690e-4b27-ab5b-b4b7b7f46900]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:05:29 compute-1 NetworkManager[49104]: <info>  [1768921529.7326] manager: (tap8472bae1-40): new Veth device (/org/freedesktop/NetworkManager/Devices/287)
Jan 20 15:05:29 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:05:29.766 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[99166ce2-c3c3-4e43-a0da-f294c2c2dc02]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:05:29 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:05:29.769 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[117e2c3b-2b93-4e7d-8d39-235081c6dc83]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:05:29 compute-1 NetworkManager[49104]: <info>  [1768921529.7919] device (tap8472bae1-40): carrier: link connected
Jan 20 15:05:29 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:05:29.797 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[c704badc-1b74-421f-84f0-062b0a4fe7c4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:05:29 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:05:29.814 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[30696ba6-248f-48c2-baa3-fc46c86abe0b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8472bae1-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7a:38:ca'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 194], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 656871, 'reachable_time': 27771, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 152, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 152, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 290850, 'error': None, 'target': 'ovnmeta-8472bae1-476b-4100-b9fa-e8827bc4f7bf', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:05:29 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:05:29.830 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[cc9e5466-f467-49cd-b069-6d1b8a67ceaf]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe7a:38ca'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 656871, 'tstamp': 656871}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 290851, 'error': None, 'target': 'ovnmeta-8472bae1-476b-4100-b9fa-e8827bc4f7bf', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:05:29 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:05:29.849 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[47f9244e-f7a5-413a-9ffc-8c61c838f2d9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8472bae1-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7a:38:ca'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 194], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 656871, 'reachable_time': 27771, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 152, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 152, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 290852, 'error': None, 'target': 'ovnmeta-8472bae1-476b-4100-b9fa-e8827bc4f7bf', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:05:29 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:05:29.879 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[5d226740-ade9-491c-9157-96e8c47c8489]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:05:29 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:05:29.931 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[457857c7-aa0e-4e21-9e9c-df5bd3846bc9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:05:29 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:05:29.932 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8472bae1-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:05:29 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:05:29.933 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 15:05:29 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:05:29.933 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8472bae1-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:05:29 compute-1 nova_compute[225855]: 2026-01-20 15:05:29.935 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:05:29 compute-1 NetworkManager[49104]: <info>  [1768921529.9359] manager: (tap8472bae1-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/288)
Jan 20 15:05:29 compute-1 kernel: tap8472bae1-40: entered promiscuous mode
Jan 20 15:05:29 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:05:29.937 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8472bae1-40, col_values=(('external_ids', {'iface-id': 'a48fbce9-f06f-49f1-8e61-d1d46e8f5808'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:05:29 compute-1 nova_compute[225855]: 2026-01-20 15:05:29.938 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:05:29 compute-1 ovn_controller[130490]: 2026-01-20T15:05:29Z|00686|binding|INFO|Releasing lport a48fbce9-f06f-49f1-8e61-d1d46e8f5808 from this chassis (sb_readonly=0)
Jan 20 15:05:29 compute-1 nova_compute[225855]: 2026-01-20 15:05:29.952 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:05:29 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:05:29.953 140354 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/8472bae1-476b-4100-b9fa-e8827bc4f7bf.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/8472bae1-476b-4100-b9fa-e8827bc4f7bf.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 20 15:05:29 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:05:29.954 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[4f9e7658-aaa3-49b3-9708-e84448693e5c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:05:29 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:05:29.954 140354 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 20 15:05:29 compute-1 ovn_metadata_agent[140349]: global
Jan 20 15:05:29 compute-1 ovn_metadata_agent[140349]:     log         /dev/log local0 debug
Jan 20 15:05:29 compute-1 ovn_metadata_agent[140349]:     log-tag     haproxy-metadata-proxy-8472bae1-476b-4100-b9fa-e8827bc4f7bf
Jan 20 15:05:29 compute-1 ovn_metadata_agent[140349]:     user        root
Jan 20 15:05:29 compute-1 ovn_metadata_agent[140349]:     group       root
Jan 20 15:05:29 compute-1 ovn_metadata_agent[140349]:     maxconn     1024
Jan 20 15:05:29 compute-1 ovn_metadata_agent[140349]:     pidfile     /var/lib/neutron/external/pids/8472bae1-476b-4100-b9fa-e8827bc4f7bf.pid.haproxy
Jan 20 15:05:29 compute-1 ovn_metadata_agent[140349]:     daemon
Jan 20 15:05:29 compute-1 ovn_metadata_agent[140349]: 
Jan 20 15:05:29 compute-1 ovn_metadata_agent[140349]: defaults
Jan 20 15:05:29 compute-1 ovn_metadata_agent[140349]:     log global
Jan 20 15:05:29 compute-1 ovn_metadata_agent[140349]:     mode http
Jan 20 15:05:29 compute-1 ovn_metadata_agent[140349]:     option httplog
Jan 20 15:05:29 compute-1 ovn_metadata_agent[140349]:     option dontlognull
Jan 20 15:05:29 compute-1 ovn_metadata_agent[140349]:     option http-server-close
Jan 20 15:05:29 compute-1 ovn_metadata_agent[140349]:     option forwardfor
Jan 20 15:05:29 compute-1 ovn_metadata_agent[140349]:     retries                 3
Jan 20 15:05:29 compute-1 ovn_metadata_agent[140349]:     timeout http-request    30s
Jan 20 15:05:29 compute-1 ovn_metadata_agent[140349]:     timeout connect         30s
Jan 20 15:05:29 compute-1 ovn_metadata_agent[140349]:     timeout client          32s
Jan 20 15:05:29 compute-1 ovn_metadata_agent[140349]:     timeout server          32s
Jan 20 15:05:29 compute-1 ovn_metadata_agent[140349]:     timeout http-keep-alive 30s
Jan 20 15:05:29 compute-1 ovn_metadata_agent[140349]: 
Jan 20 15:05:29 compute-1 ovn_metadata_agent[140349]: 
Jan 20 15:05:29 compute-1 ovn_metadata_agent[140349]: listen listener
Jan 20 15:05:29 compute-1 ovn_metadata_agent[140349]:     bind 169.254.169.254:80
Jan 20 15:05:29 compute-1 ovn_metadata_agent[140349]:     server metadata /var/lib/neutron/metadata_proxy
Jan 20 15:05:29 compute-1 ovn_metadata_agent[140349]:     http-request add-header X-OVN-Network-ID 8472bae1-476b-4100-b9fa-e8827bc4f7bf
Jan 20 15:05:29 compute-1 ovn_metadata_agent[140349]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 20 15:05:29 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:05:29.955 140354 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-8472bae1-476b-4100-b9fa-e8827bc4f7bf', 'env', 'PROCESS_TAG=haproxy-8472bae1-476b-4100-b9fa-e8827bc4f7bf', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/8472bae1-476b-4100-b9fa-e8827bc4f7bf.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 20 15:05:30 compute-1 nova_compute[225855]: 2026-01-20 15:05:30.029 225859 DEBUG nova.network.neutron [req-1181a5ab-d1d8-40d2-adc7-b4d6ad28e428 req-723d4bed-c818-47d7-abad-c5f2d60f4592 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 33ba7a73-3233-40a3-a49a-e5bbd604dc3c] Updated VIF entry in instance network info cache for port 070862f1-1db2-45c2-9787-752e6d88449a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 15:05:30 compute-1 nova_compute[225855]: 2026-01-20 15:05:30.030 225859 DEBUG nova.network.neutron [req-1181a5ab-d1d8-40d2-adc7-b4d6ad28e428 req-723d4bed-c818-47d7-abad-c5f2d60f4592 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 33ba7a73-3233-40a3-a49a-e5bbd604dc3c] Updating instance_info_cache with network_info: [{"id": "070862f1-1db2-45c2-9787-752e6d88449a", "address": "fa:16:3e:e5:e7:09", "network": {"id": "8472bae1-476b-4100-b9fa-e8827bc4f7bf", "bridge": "br-int", "label": "tempest-TestStampPattern-1138931002-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e142d118583b4f9ba3531bcf3838e256", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap070862f1-1d", "ovs_interfaceid": "070862f1-1db2-45c2-9787-752e6d88449a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:05:30 compute-1 nova_compute[225855]: 2026-01-20 15:05:30.052 225859 DEBUG oslo_concurrency.lockutils [req-1181a5ab-d1d8-40d2-adc7-b4d6ad28e428 req-723d4bed-c818-47d7-abad-c5f2d60f4592 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-33ba7a73-3233-40a3-a49a-e5bbd604dc3c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:05:30 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:05:30 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:05:30 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:05:30.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:05:30 compute-1 podman[290884]: 2026-01-20 15:05:30.324193477 +0000 UTC m=+0.058278655 container create 49cb1fc9eefac14d88a80a448d309dbf7c2e07925d5c789c364c5dddff3d7edd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8472bae1-476b-4100-b9fa-e8827bc4f7bf, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202)
Jan 20 15:05:30 compute-1 systemd[1]: Started libpod-conmon-49cb1fc9eefac14d88a80a448d309dbf7c2e07925d5c789c364c5dddff3d7edd.scope.
Jan 20 15:05:30 compute-1 podman[290884]: 2026-01-20 15:05:30.296852391 +0000 UTC m=+0.030937599 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 20 15:05:30 compute-1 systemd[1]: Started libcrun container.
Jan 20 15:05:30 compute-1 nova_compute[225855]: 2026-01-20 15:05:30.401 225859 DEBUG nova.compute.manager [req-437f8ecd-1461-496f-969a-8cdeed601b9f req-3ab938b4-c86a-4e3b-8e78-90d342aaf2de 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 33ba7a73-3233-40a3-a49a-e5bbd604dc3c] Received event network-vif-plugged-070862f1-1db2-45c2-9787-752e6d88449a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:05:30 compute-1 nova_compute[225855]: 2026-01-20 15:05:30.402 225859 DEBUG oslo_concurrency.lockutils [req-437f8ecd-1461-496f-969a-8cdeed601b9f req-3ab938b4-c86a-4e3b-8e78-90d342aaf2de 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "33ba7a73-3233-40a3-a49a-e5bbd604dc3c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:05:30 compute-1 nova_compute[225855]: 2026-01-20 15:05:30.402 225859 DEBUG oslo_concurrency.lockutils [req-437f8ecd-1461-496f-969a-8cdeed601b9f req-3ab938b4-c86a-4e3b-8e78-90d342aaf2de 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "33ba7a73-3233-40a3-a49a-e5bbd604dc3c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:05:30 compute-1 nova_compute[225855]: 2026-01-20 15:05:30.403 225859 DEBUG oslo_concurrency.lockutils [req-437f8ecd-1461-496f-969a-8cdeed601b9f req-3ab938b4-c86a-4e3b-8e78-90d342aaf2de 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "33ba7a73-3233-40a3-a49a-e5bbd604dc3c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:05:30 compute-1 nova_compute[225855]: 2026-01-20 15:05:30.403 225859 DEBUG nova.compute.manager [req-437f8ecd-1461-496f-969a-8cdeed601b9f req-3ab938b4-c86a-4e3b-8e78-90d342aaf2de 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 33ba7a73-3233-40a3-a49a-e5bbd604dc3c] Processing event network-vif-plugged-070862f1-1db2-45c2-9787-752e6d88449a _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 20 15:05:30 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5103777890e53183e892eb29a7309526c22c2d478bcad4b3e03ce4ad7bb2fbd2/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 20 15:05:30 compute-1 podman[290884]: 2026-01-20 15:05:30.418257376 +0000 UTC m=+0.152342584 container init 49cb1fc9eefac14d88a80a448d309dbf7c2e07925d5c789c364c5dddff3d7edd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8472bae1-476b-4100-b9fa-e8827bc4f7bf, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 20 15:05:30 compute-1 podman[290884]: 2026-01-20 15:05:30.425007617 +0000 UTC m=+0.159092815 container start 49cb1fc9eefac14d88a80a448d309dbf7c2e07925d5c789c364c5dddff3d7edd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8472bae1-476b-4100-b9fa-e8827bc4f7bf, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 20 15:05:30 compute-1 neutron-haproxy-ovnmeta-8472bae1-476b-4100-b9fa-e8827bc4f7bf[290899]: [NOTICE]   (290921) : New worker (290938) forked
Jan 20 15:05:30 compute-1 neutron-haproxy-ovnmeta-8472bae1-476b-4100-b9fa-e8827bc4f7bf[290899]: [NOTICE]   (290921) : Loading success.
Jan 20 15:05:30 compute-1 nova_compute[225855]: 2026-01-20 15:05:30.573 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768921530.572652, 33ba7a73-3233-40a3-a49a-e5bbd604dc3c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:05:30 compute-1 nova_compute[225855]: 2026-01-20 15:05:30.574 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 33ba7a73-3233-40a3-a49a-e5bbd604dc3c] VM Started (Lifecycle Event)
Jan 20 15:05:30 compute-1 nova_compute[225855]: 2026-01-20 15:05:30.575 225859 DEBUG nova.compute.manager [None req-81c2281d-dca4-4bae-856a-c49b3b605249 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] [instance: 33ba7a73-3233-40a3-a49a-e5bbd604dc3c] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 20 15:05:30 compute-1 nova_compute[225855]: 2026-01-20 15:05:30.578 225859 DEBUG nova.virt.libvirt.driver [None req-81c2281d-dca4-4bae-856a-c49b3b605249 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] [instance: 33ba7a73-3233-40a3-a49a-e5bbd604dc3c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 20 15:05:30 compute-1 nova_compute[225855]: 2026-01-20 15:05:30.581 225859 INFO nova.virt.libvirt.driver [-] [instance: 33ba7a73-3233-40a3-a49a-e5bbd604dc3c] Instance spawned successfully.
Jan 20 15:05:30 compute-1 nova_compute[225855]: 2026-01-20 15:05:30.581 225859 DEBUG nova.virt.libvirt.driver [None req-81c2281d-dca4-4bae-856a-c49b3b605249 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] [instance: 33ba7a73-3233-40a3-a49a-e5bbd604dc3c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 20 15:05:30 compute-1 nova_compute[225855]: 2026-01-20 15:05:30.608 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 33ba7a73-3233-40a3-a49a-e5bbd604dc3c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:05:30 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:05:30 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:05:30 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:05:30.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:05:30 compute-1 nova_compute[225855]: 2026-01-20 15:05:30.614 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 33ba7a73-3233-40a3-a49a-e5bbd604dc3c] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 15:05:30 compute-1 nova_compute[225855]: 2026-01-20 15:05:30.618 225859 DEBUG nova.virt.libvirt.driver [None req-81c2281d-dca4-4bae-856a-c49b3b605249 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] [instance: 33ba7a73-3233-40a3-a49a-e5bbd604dc3c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:05:30 compute-1 nova_compute[225855]: 2026-01-20 15:05:30.618 225859 DEBUG nova.virt.libvirt.driver [None req-81c2281d-dca4-4bae-856a-c49b3b605249 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] [instance: 33ba7a73-3233-40a3-a49a-e5bbd604dc3c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:05:30 compute-1 nova_compute[225855]: 2026-01-20 15:05:30.619 225859 DEBUG nova.virt.libvirt.driver [None req-81c2281d-dca4-4bae-856a-c49b3b605249 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] [instance: 33ba7a73-3233-40a3-a49a-e5bbd604dc3c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:05:30 compute-1 nova_compute[225855]: 2026-01-20 15:05:30.619 225859 DEBUG nova.virt.libvirt.driver [None req-81c2281d-dca4-4bae-856a-c49b3b605249 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] [instance: 33ba7a73-3233-40a3-a49a-e5bbd604dc3c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:05:30 compute-1 nova_compute[225855]: 2026-01-20 15:05:30.620 225859 DEBUG nova.virt.libvirt.driver [None req-81c2281d-dca4-4bae-856a-c49b3b605249 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] [instance: 33ba7a73-3233-40a3-a49a-e5bbd604dc3c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:05:30 compute-1 nova_compute[225855]: 2026-01-20 15:05:30.620 225859 DEBUG nova.virt.libvirt.driver [None req-81c2281d-dca4-4bae-856a-c49b3b605249 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] [instance: 33ba7a73-3233-40a3-a49a-e5bbd604dc3c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:05:30 compute-1 nova_compute[225855]: 2026-01-20 15:05:30.649 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 33ba7a73-3233-40a3-a49a-e5bbd604dc3c] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 15:05:30 compute-1 nova_compute[225855]: 2026-01-20 15:05:30.652 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768921530.5727563, 33ba7a73-3233-40a3-a49a-e5bbd604dc3c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:05:30 compute-1 nova_compute[225855]: 2026-01-20 15:05:30.652 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 33ba7a73-3233-40a3-a49a-e5bbd604dc3c] VM Paused (Lifecycle Event)
Jan 20 15:05:30 compute-1 nova_compute[225855]: 2026-01-20 15:05:30.732 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 33ba7a73-3233-40a3-a49a-e5bbd604dc3c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:05:30 compute-1 nova_compute[225855]: 2026-01-20 15:05:30.737 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768921530.5777285, 33ba7a73-3233-40a3-a49a-e5bbd604dc3c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:05:30 compute-1 nova_compute[225855]: 2026-01-20 15:05:30.737 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 33ba7a73-3233-40a3-a49a-e5bbd604dc3c] VM Resumed (Lifecycle Event)
Jan 20 15:05:30 compute-1 nova_compute[225855]: 2026-01-20 15:05:30.787 225859 INFO nova.compute.manager [None req-81c2281d-dca4-4bae-856a-c49b3b605249 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] [instance: 33ba7a73-3233-40a3-a49a-e5bbd604dc3c] Took 9.66 seconds to spawn the instance on the hypervisor.
Jan 20 15:05:30 compute-1 nova_compute[225855]: 2026-01-20 15:05:30.788 225859 DEBUG nova.compute.manager [None req-81c2281d-dca4-4bae-856a-c49b3b605249 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] [instance: 33ba7a73-3233-40a3-a49a-e5bbd604dc3c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:05:30 compute-1 nova_compute[225855]: 2026-01-20 15:05:30.788 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 33ba7a73-3233-40a3-a49a-e5bbd604dc3c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:05:30 compute-1 nova_compute[225855]: 2026-01-20 15:05:30.795 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 33ba7a73-3233-40a3-a49a-e5bbd604dc3c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 15:05:30 compute-1 nova_compute[225855]: 2026-01-20 15:05:30.940 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 33ba7a73-3233-40a3-a49a-e5bbd604dc3c] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 15:05:31 compute-1 nova_compute[225855]: 2026-01-20 15:05:31.008 225859 INFO nova.compute.manager [None req-81c2281d-dca4-4bae-856a-c49b3b605249 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] [instance: 33ba7a73-3233-40a3-a49a-e5bbd604dc3c] Took 10.90 seconds to build instance.
Jan 20 15:05:31 compute-1 nova_compute[225855]: 2026-01-20 15:05:31.037 225859 DEBUG oslo_concurrency.lockutils [None req-81c2281d-dca4-4bae-856a-c49b3b605249 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] Lock "33ba7a73-3233-40a3-a49a-e5bbd604dc3c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.103s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:05:31 compute-1 ceph-mon[81775]: pgmap v2441: 321 pgs: 321 active+clean; 246 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 95 op/s
Jan 20 15:05:32 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:05:32 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:05:32 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:05:32.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:05:32 compute-1 nova_compute[225855]: 2026-01-20 15:05:32.296 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:05:32 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:05:32 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:05:32 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:05:32.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:05:32 compute-1 nova_compute[225855]: 2026-01-20 15:05:32.638 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:05:32 compute-1 ceph-mon[81775]: pgmap v2442: 321 pgs: 321 active+clean; 273 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 3.6 MiB/s wr, 109 op/s
Jan 20 15:05:32 compute-1 nova_compute[225855]: 2026-01-20 15:05:32.645 225859 DEBUG nova.compute.manager [req-5c7b46b8-20f6-47ed-8631-6812815e4af0 req-f55b047e-cac4-4d22-8b66-612bed7c89e5 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 33ba7a73-3233-40a3-a49a-e5bbd604dc3c] Received event network-vif-plugged-070862f1-1db2-45c2-9787-752e6d88449a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:05:32 compute-1 nova_compute[225855]: 2026-01-20 15:05:32.645 225859 DEBUG oslo_concurrency.lockutils [req-5c7b46b8-20f6-47ed-8631-6812815e4af0 req-f55b047e-cac4-4d22-8b66-612bed7c89e5 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "33ba7a73-3233-40a3-a49a-e5bbd604dc3c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:05:32 compute-1 nova_compute[225855]: 2026-01-20 15:05:32.645 225859 DEBUG oslo_concurrency.lockutils [req-5c7b46b8-20f6-47ed-8631-6812815e4af0 req-f55b047e-cac4-4d22-8b66-612bed7c89e5 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "33ba7a73-3233-40a3-a49a-e5bbd604dc3c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:05:32 compute-1 nova_compute[225855]: 2026-01-20 15:05:32.646 225859 DEBUG oslo_concurrency.lockutils [req-5c7b46b8-20f6-47ed-8631-6812815e4af0 req-f55b047e-cac4-4d22-8b66-612bed7c89e5 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "33ba7a73-3233-40a3-a49a-e5bbd604dc3c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:05:32 compute-1 nova_compute[225855]: 2026-01-20 15:05:32.646 225859 DEBUG nova.compute.manager [req-5c7b46b8-20f6-47ed-8631-6812815e4af0 req-f55b047e-cac4-4d22-8b66-612bed7c89e5 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 33ba7a73-3233-40a3-a49a-e5bbd604dc3c] No waiting events found dispatching network-vif-plugged-070862f1-1db2-45c2-9787-752e6d88449a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:05:32 compute-1 nova_compute[225855]: 2026-01-20 15:05:32.646 225859 WARNING nova.compute.manager [req-5c7b46b8-20f6-47ed-8631-6812815e4af0 req-f55b047e-cac4-4d22-8b66-612bed7c89e5 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 33ba7a73-3233-40a3-a49a-e5bbd604dc3c] Received unexpected event network-vif-plugged-070862f1-1db2-45c2-9787-752e6d88449a for instance with vm_state active and task_state None.
Jan 20 15:05:33 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e359 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:05:33 compute-1 ovn_controller[130490]: 2026-01-20T15:05:33Z|00687|binding|INFO|Releasing lport a48fbce9-f06f-49f1-8e61-d1d46e8f5808 from this chassis (sb_readonly=0)
Jan 20 15:05:33 compute-1 nova_compute[225855]: 2026-01-20 15:05:33.216 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:05:34 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:05:34 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:05:34 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:05:34.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:05:34 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:05:34 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:05:34 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:05:34.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:05:35 compute-1 nova_compute[225855]: 2026-01-20 15:05:35.539 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:05:35 compute-1 NetworkManager[49104]: <info>  [1768921535.5400] manager: (patch-provnet-b62c391b-f7a3-4a38-a0df-72ac0383ca74-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/289)
Jan 20 15:05:35 compute-1 NetworkManager[49104]: <info>  [1768921535.5409] manager: (patch-br-int-to-provnet-b62c391b-f7a3-4a38-a0df-72ac0383ca74): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/290)
Jan 20 15:05:35 compute-1 nova_compute[225855]: 2026-01-20 15:05:35.541 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:05:35 compute-1 ceph-mon[81775]: pgmap v2443: 321 pgs: 321 active+clean; 277 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 2.6 MiB/s wr, 129 op/s
Jan 20 15:05:35 compute-1 ovn_controller[130490]: 2026-01-20T15:05:35Z|00688|binding|INFO|Releasing lport a48fbce9-f06f-49f1-8e61-d1d46e8f5808 from this chassis (sb_readonly=0)
Jan 20 15:05:35 compute-1 nova_compute[225855]: 2026-01-20 15:05:35.561 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:05:35 compute-1 nova_compute[225855]: 2026-01-20 15:05:35.825 225859 DEBUG nova.compute.manager [req-6b95bc0c-501a-499f-abce-aca5bcfe3379 req-ba08f24e-4478-41d4-9d21-0c2338fe6db1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 33ba7a73-3233-40a3-a49a-e5bbd604dc3c] Received event network-changed-070862f1-1db2-45c2-9787-752e6d88449a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:05:35 compute-1 nova_compute[225855]: 2026-01-20 15:05:35.826 225859 DEBUG nova.compute.manager [req-6b95bc0c-501a-499f-abce-aca5bcfe3379 req-ba08f24e-4478-41d4-9d21-0c2338fe6db1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 33ba7a73-3233-40a3-a49a-e5bbd604dc3c] Refreshing instance network info cache due to event network-changed-070862f1-1db2-45c2-9787-752e6d88449a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 15:05:35 compute-1 nova_compute[225855]: 2026-01-20 15:05:35.826 225859 DEBUG oslo_concurrency.lockutils [req-6b95bc0c-501a-499f-abce-aca5bcfe3379 req-ba08f24e-4478-41d4-9d21-0c2338fe6db1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-33ba7a73-3233-40a3-a49a-e5bbd604dc3c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:05:35 compute-1 nova_compute[225855]: 2026-01-20 15:05:35.827 225859 DEBUG oslo_concurrency.lockutils [req-6b95bc0c-501a-499f-abce-aca5bcfe3379 req-ba08f24e-4478-41d4-9d21-0c2338fe6db1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-33ba7a73-3233-40a3-a49a-e5bbd604dc3c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:05:35 compute-1 nova_compute[225855]: 2026-01-20 15:05:35.827 225859 DEBUG nova.network.neutron [req-6b95bc0c-501a-499f-abce-aca5bcfe3379 req-ba08f24e-4478-41d4-9d21-0c2338fe6db1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 33ba7a73-3233-40a3-a49a-e5bbd604dc3c] Refreshing network info cache for port 070862f1-1db2-45c2-9787-752e6d88449a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 15:05:36 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:05:36 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:05:36 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:05:36.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:05:36 compute-1 ceph-mon[81775]: pgmap v2444: 321 pgs: 321 active+clean; 293 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 127 op/s
Jan 20 15:05:36 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:05:36 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:05:36 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:05:36.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:05:36 compute-1 sudo[290960]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:05:37 compute-1 sudo[290960]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:05:37 compute-1 sudo[290960]: pam_unix(sudo:session): session closed for user root
Jan 20 15:05:37 compute-1 sudo[290985]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:05:37 compute-1 sudo[290985]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:05:37 compute-1 sudo[290985]: pam_unix(sudo:session): session closed for user root
Jan 20 15:05:37 compute-1 nova_compute[225855]: 2026-01-20 15:05:37.298 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:05:37 compute-1 nova_compute[225855]: 2026-01-20 15:05:37.678 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:05:37 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/3262336420' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:05:38 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e359 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:05:38 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:05:38 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:05:38 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:05:38.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:05:38 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:05:38 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:05:38 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:05:38.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:05:38 compute-1 sudo[291011]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:05:38 compute-1 sudo[291011]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:05:38 compute-1 sudo[291011]: pam_unix(sudo:session): session closed for user root
Jan 20 15:05:38 compute-1 sudo[291036]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:05:38 compute-1 sudo[291036]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:05:38 compute-1 sudo[291036]: pam_unix(sudo:session): session closed for user root
Jan 20 15:05:38 compute-1 sudo[291061]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:05:38 compute-1 sudo[291061]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:05:38 compute-1 sudo[291061]: pam_unix(sudo:session): session closed for user root
Jan 20 15:05:38 compute-1 sudo[291086]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 20 15:05:38 compute-1 sudo[291086]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:05:38 compute-1 ceph-mon[81775]: pgmap v2445: 321 pgs: 321 active+clean; 293 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 111 op/s
Jan 20 15:05:39 compute-1 sudo[291086]: pam_unix(sudo:session): session closed for user root
Jan 20 15:05:39 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 15:05:39 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2436202043' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:05:39 compute-1 nova_compute[225855]: 2026-01-20 15:05:39.750 225859 DEBUG nova.network.neutron [req-6b95bc0c-501a-499f-abce-aca5bcfe3379 req-ba08f24e-4478-41d4-9d21-0c2338fe6db1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 33ba7a73-3233-40a3-a49a-e5bbd604dc3c] Updated VIF entry in instance network info cache for port 070862f1-1db2-45c2-9787-752e6d88449a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 15:05:39 compute-1 nova_compute[225855]: 2026-01-20 15:05:39.751 225859 DEBUG nova.network.neutron [req-6b95bc0c-501a-499f-abce-aca5bcfe3379 req-ba08f24e-4478-41d4-9d21-0c2338fe6db1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 33ba7a73-3233-40a3-a49a-e5bbd604dc3c] Updating instance_info_cache with network_info: [{"id": "070862f1-1db2-45c2-9787-752e6d88449a", "address": "fa:16:3e:e5:e7:09", "network": {"id": "8472bae1-476b-4100-b9fa-e8827bc4f7bf", "bridge": "br-int", "label": "tempest-TestStampPattern-1138931002-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.208", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e142d118583b4f9ba3531bcf3838e256", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap070862f1-1d", "ovs_interfaceid": "070862f1-1db2-45c2-9787-752e6d88449a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:05:39 compute-1 nova_compute[225855]: 2026-01-20 15:05:39.788 225859 DEBUG oslo_concurrency.lockutils [req-6b95bc0c-501a-499f-abce-aca5bcfe3379 req-ba08f24e-4478-41d4-9d21-0c2338fe6db1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-33ba7a73-3233-40a3-a49a-e5bbd604dc3c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:05:40 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:05:40 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 15:05:40 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:05:40 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 15:05:40 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 15:05:40 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:05:40 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/2436202043' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:05:40 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:05:40 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:05:40 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:05:40.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:05:40 compute-1 nova_compute[225855]: 2026-01-20 15:05:40.341 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:05:40 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:05:40.342 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=51, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '12:bb:42', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '06:92:24:f7:15:56'}, ipsec=False) old=SB_Global(nb_cfg=50) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:05:40 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:05:40.343 140354 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 20 15:05:40 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:05:40 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:05:40 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:05:40.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:05:41 compute-1 podman[291140]: 2026-01-20 15:05:41.020687643 +0000 UTC m=+0.063999727 container health_status 533cd8e0c3b8b7f910f15cef49e7f879ab46c12ce646a35628c15f13725eb500 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible)
Jan 20 15:05:41 compute-1 ceph-mon[81775]: pgmap v2446: 321 pgs: 321 active+clean; 239 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 127 op/s
Jan 20 15:05:41 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/1135024478' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:05:42 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:05:42 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:05:42 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:05:42.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:05:42 compute-1 nova_compute[225855]: 2026-01-20 15:05:42.301 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:05:42 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:05:42 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:05:42 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:05:42.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:05:42 compute-1 nova_compute[225855]: 2026-01-20 15:05:42.682 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:05:43 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e359 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:05:43 compute-1 nova_compute[225855]: 2026-01-20 15:05:43.097 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:05:43 compute-1 ovn_controller[130490]: 2026-01-20T15:05:43Z|00078|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:e5:e7:09 10.100.0.7
Jan 20 15:05:43 compute-1 ovn_controller[130490]: 2026-01-20T15:05:43Z|00079|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:e5:e7:09 10.100.0.7
Jan 20 15:05:43 compute-1 ceph-mon[81775]: pgmap v2447: 321 pgs: 321 active+clean; 213 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 119 op/s
Jan 20 15:05:44 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:05:44 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:05:44 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:05:44.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:05:44 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/4287930050' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:05:44 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/4287930050' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:05:44 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/2430875220' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:05:44 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:05:44 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:05:44 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:05:44.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:05:45 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:05:45.345 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5ffd4ac3-9266-4927-98ad-20a17782c725, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '51'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:05:45 compute-1 ceph-mon[81775]: pgmap v2448: 321 pgs: 321 active+clean; 214 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 760 KiB/s wr, 124 op/s
Jan 20 15:05:45 compute-1 sudo[291164]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:05:45 compute-1 sudo[291164]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:05:45 compute-1 sudo[291164]: pam_unix(sudo:session): session closed for user root
Jan 20 15:05:45 compute-1 sudo[291189]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 15:05:45 compute-1 sudo[291189]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:05:45 compute-1 sudo[291189]: pam_unix(sudo:session): session closed for user root
Jan 20 15:05:46 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:05:46 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:05:46 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:05:46.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:05:46 compute-1 nova_compute[225855]: 2026-01-20 15:05:46.340 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:05:46 compute-1 nova_compute[225855]: 2026-01-20 15:05:46.340 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 15:05:46 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/186537975' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:05:46 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/186537975' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:05:46 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:05:46 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:05:46 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:05:46 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:05:46 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:05:46.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:05:47 compute-1 nova_compute[225855]: 2026-01-20 15:05:47.304 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:05:47 compute-1 nova_compute[225855]: 2026-01-20 15:05:47.341 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:05:47 compute-1 ceph-mon[81775]: pgmap v2449: 321 pgs: 321 active+clean; 243 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 2.7 MiB/s wr, 165 op/s
Jan 20 15:05:47 compute-1 nova_compute[225855]: 2026-01-20 15:05:47.684 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:05:48 compute-1 nova_compute[225855]: 2026-01-20 15:05:48.033 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:05:48 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e359 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:05:48 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:05:48 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:05:48 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:05:48.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:05:48 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/3225821264' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:05:48 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/3225821264' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:05:48 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:05:48 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:05:48 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:05:48.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:05:49 compute-1 nova_compute[225855]: 2026-01-20 15:05:49.340 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:05:49 compute-1 nova_compute[225855]: 2026-01-20 15:05:49.340 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 15:05:49 compute-1 nova_compute[225855]: 2026-01-20 15:05:49.372 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 20 15:05:49 compute-1 nova_compute[225855]: 2026-01-20 15:05:49.372 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:05:49 compute-1 ceph-mon[81775]: pgmap v2450: 321 pgs: 321 active+clean; 243 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 359 KiB/s rd, 2.1 MiB/s wr, 115 op/s
Jan 20 15:05:50 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:05:50 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:05:50 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:05:50.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:05:50 compute-1 nova_compute[225855]: 2026-01-20 15:05:50.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:05:50 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:05:50 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:05:50 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:05:50.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:05:51 compute-1 nova_compute[225855]: 2026-01-20 15:05:51.340 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:05:51 compute-1 ceph-mon[81775]: pgmap v2451: 321 pgs: 321 active+clean; 246 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 992 KiB/s rd, 2.1 MiB/s wr, 157 op/s
Jan 20 15:05:51 compute-1 nova_compute[225855]: 2026-01-20 15:05:51.832 225859 DEBUG oslo_concurrency.lockutils [None req-bdcb178d-591d-435f-ba7a-6af358df18ba bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] Acquiring lock "33ba7a73-3233-40a3-a49a-e5bbd604dc3c" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:05:51 compute-1 nova_compute[225855]: 2026-01-20 15:05:51.832 225859 DEBUG oslo_concurrency.lockutils [None req-bdcb178d-591d-435f-ba7a-6af358df18ba bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] Lock "33ba7a73-3233-40a3-a49a-e5bbd604dc3c" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:05:51 compute-1 nova_compute[225855]: 2026-01-20 15:05:51.855 225859 DEBUG nova.objects.instance [None req-bdcb178d-591d-435f-ba7a-6af358df18ba bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] Lazy-loading 'flavor' on Instance uuid 33ba7a73-3233-40a3-a49a-e5bbd604dc3c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:05:51 compute-1 nova_compute[225855]: 2026-01-20 15:05:51.897 225859 DEBUG oslo_concurrency.lockutils [None req-bdcb178d-591d-435f-ba7a-6af358df18ba bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] Lock "33ba7a73-3233-40a3-a49a-e5bbd604dc3c" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.065s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:05:52 compute-1 nova_compute[225855]: 2026-01-20 15:05:52.115 225859 DEBUG oslo_concurrency.lockutils [None req-bdcb178d-591d-435f-ba7a-6af358df18ba bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] Acquiring lock "33ba7a73-3233-40a3-a49a-e5bbd604dc3c" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:05:52 compute-1 nova_compute[225855]: 2026-01-20 15:05:52.115 225859 DEBUG oslo_concurrency.lockutils [None req-bdcb178d-591d-435f-ba7a-6af358df18ba bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] Lock "33ba7a73-3233-40a3-a49a-e5bbd604dc3c" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:05:52 compute-1 nova_compute[225855]: 2026-01-20 15:05:52.116 225859 INFO nova.compute.manager [None req-bdcb178d-591d-435f-ba7a-6af358df18ba bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] [instance: 33ba7a73-3233-40a3-a49a-e5bbd604dc3c] Attaching volume 4a621494-2aaf-461c-b7c1-05665913aaf9 to /dev/vdb
Jan 20 15:05:52 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:05:52 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:05:52 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:05:52.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:05:52 compute-1 nova_compute[225855]: 2026-01-20 15:05:52.288 225859 DEBUG os_brick.utils [None req-bdcb178d-591d-435f-ba7a-6af358df18ba bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.101', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-1.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Jan 20 15:05:52 compute-1 nova_compute[225855]: 2026-01-20 15:05:52.289 231081 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:05:52 compute-1 nova_compute[225855]: 2026-01-20 15:05:52.301 231081 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:05:52 compute-1 nova_compute[225855]: 2026-01-20 15:05:52.302 231081 DEBUG oslo.privsep.daemon [-] privsep: reply[9aabd57a-3196-4d2a-946b-8c724c961506]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:05:52 compute-1 nova_compute[225855]: 2026-01-20 15:05:52.304 231081 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:05:52 compute-1 nova_compute[225855]: 2026-01-20 15:05:52.334 231081 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.030s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:05:52 compute-1 nova_compute[225855]: 2026-01-20 15:05:52.335 231081 DEBUG oslo.privsep.daemon [-] privsep: reply[8b253404-f9b0-4167-9514-cec9ad1c659d]: (4, ('InitiatorName=iqn.1994-05.com.redhat:1821ea3dc03d', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:05:52 compute-1 nova_compute[225855]: 2026-01-20 15:05:52.335 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:05:52 compute-1 nova_compute[225855]: 2026-01-20 15:05:52.338 231081 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:05:52 compute-1 nova_compute[225855]: 2026-01-20 15:05:52.347 231081 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:05:52 compute-1 nova_compute[225855]: 2026-01-20 15:05:52.347 231081 DEBUG oslo.privsep.daemon [-] privsep: reply[8380cd7b-ae81-4fea-a892-361384ce5d82]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:05:52 compute-1 nova_compute[225855]: 2026-01-20 15:05:52.348 231081 DEBUG oslo.privsep.daemon [-] privsep: reply[234b23b9-b0a0-4313-b91b-7dfbe6e4b9b0]: (4, '870b1f1c-f19c-477b-b282-ee6eeba50974') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:05:52 compute-1 nova_compute[225855]: 2026-01-20 15:05:52.349 225859 DEBUG oslo_concurrency.processutils [None req-bdcb178d-591d-435f-ba7a-6af358df18ba bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:05:52 compute-1 nova_compute[225855]: 2026-01-20 15:05:52.381 225859 DEBUG oslo_concurrency.processutils [None req-bdcb178d-591d-435f-ba7a-6af358df18ba bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] CMD "nvme version" returned: 0 in 0.032s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:05:52 compute-1 nova_compute[225855]: 2026-01-20 15:05:52.383 225859 DEBUG os_brick.initiator.connectors.lightos [None req-bdcb178d-591d-435f-ba7a-6af358df18ba bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Jan 20 15:05:52 compute-1 nova_compute[225855]: 2026-01-20 15:05:52.383 225859 DEBUG os_brick.initiator.connectors.lightos [None req-bdcb178d-591d-435f-ba7a-6af358df18ba bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Jan 20 15:05:52 compute-1 nova_compute[225855]: 2026-01-20 15:05:52.383 225859 DEBUG os_brick.initiator.connectors.lightos [None req-bdcb178d-591d-435f-ba7a-6af358df18ba bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:5350774e-8b5e-4dba-80a9-92d405981c1d dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Jan 20 15:05:52 compute-1 nova_compute[225855]: 2026-01-20 15:05:52.384 225859 DEBUG os_brick.utils [None req-bdcb178d-591d-435f-ba7a-6af358df18ba bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] <== get_connector_properties: return (95ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.101', 'host': 'compute-1.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:1821ea3dc03d', 'do_local_attach': False, 'nvme_hostid': '5350774e-8b5e-4dba-80a9-92d405981c1d', 'system uuid': '870b1f1c-f19c-477b-b282-ee6eeba50974', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:5350774e-8b5e-4dba-80a9-92d405981c1d', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Jan 20 15:05:52 compute-1 nova_compute[225855]: 2026-01-20 15:05:52.384 225859 DEBUG nova.virt.block_device [None req-bdcb178d-591d-435f-ba7a-6af358df18ba bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] [instance: 33ba7a73-3233-40a3-a49a-e5bbd604dc3c] Updating existing volume attachment record: c023db9a-8197-4b91-bb70-8b3b1d00be58 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Jan 20 15:05:52 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:05:52 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:05:52 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:05:52.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:05:52 compute-1 nova_compute[225855]: 2026-01-20 15:05:52.685 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:05:53 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e359 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:05:53 compute-1 nova_compute[225855]: 2026-01-20 15:05:53.325 225859 DEBUG nova.objects.instance [None req-bdcb178d-591d-435f-ba7a-6af358df18ba bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] Lazy-loading 'flavor' on Instance uuid 33ba7a73-3233-40a3-a49a-e5bbd604dc3c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:05:53 compute-1 nova_compute[225855]: 2026-01-20 15:05:53.381 225859 DEBUG nova.virt.libvirt.driver [None req-bdcb178d-591d-435f-ba7a-6af358df18ba bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] [instance: 33ba7a73-3233-40a3-a49a-e5bbd604dc3c] Attempting to attach volume 4a621494-2aaf-461c-b7c1-05665913aaf9 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
Jan 20 15:05:53 compute-1 nova_compute[225855]: 2026-01-20 15:05:53.384 225859 DEBUG nova.virt.libvirt.guest [None req-bdcb178d-591d-435f-ba7a-6af358df18ba bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] attach device xml: <disk type="network" device="disk">
Jan 20 15:05:53 compute-1 nova_compute[225855]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 20 15:05:53 compute-1 nova_compute[225855]:   <source protocol="rbd" name="volumes/volume-4a621494-2aaf-461c-b7c1-05665913aaf9">
Jan 20 15:05:53 compute-1 nova_compute[225855]:     <host name="192.168.122.100" port="6789"/>
Jan 20 15:05:53 compute-1 nova_compute[225855]:     <host name="192.168.122.102" port="6789"/>
Jan 20 15:05:53 compute-1 nova_compute[225855]:     <host name="192.168.122.101" port="6789"/>
Jan 20 15:05:53 compute-1 nova_compute[225855]:   </source>
Jan 20 15:05:53 compute-1 nova_compute[225855]:   <auth username="openstack">
Jan 20 15:05:53 compute-1 nova_compute[225855]:     <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 15:05:53 compute-1 nova_compute[225855]:   </auth>
Jan 20 15:05:53 compute-1 nova_compute[225855]:   <target dev="vdb" bus="virtio"/>
Jan 20 15:05:53 compute-1 nova_compute[225855]:   <serial>4a621494-2aaf-461c-b7c1-05665913aaf9</serial>
Jan 20 15:05:53 compute-1 nova_compute[225855]: </disk>
Jan 20 15:05:53 compute-1 nova_compute[225855]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Jan 20 15:05:53 compute-1 ceph-mon[81775]: pgmap v2452: 321 pgs: 321 active+clean; 246 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.2 MiB/s wr, 188 op/s
Jan 20 15:05:53 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/3737704240' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:05:53 compute-1 ovn_controller[130490]: 2026-01-20T15:05:53Z|00689|binding|INFO|Releasing lport a48fbce9-f06f-49f1-8e61-d1d46e8f5808 from this chassis (sb_readonly=0)
Jan 20 15:05:53 compute-1 nova_compute[225855]: 2026-01-20 15:05:53.542 225859 DEBUG nova.virt.libvirt.driver [None req-bdcb178d-591d-435f-ba7a-6af358df18ba bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 15:05:53 compute-1 nova_compute[225855]: 2026-01-20 15:05:53.543 225859 DEBUG nova.virt.libvirt.driver [None req-bdcb178d-591d-435f-ba7a-6af358df18ba bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 15:05:53 compute-1 nova_compute[225855]: 2026-01-20 15:05:53.544 225859 DEBUG nova.virt.libvirt.driver [None req-bdcb178d-591d-435f-ba7a-6af358df18ba bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 15:05:53 compute-1 nova_compute[225855]: 2026-01-20 15:05:53.544 225859 DEBUG nova.virt.libvirt.driver [None req-bdcb178d-591d-435f-ba7a-6af358df18ba bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] No VIF found with MAC fa:16:3e:e5:e7:09, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 20 15:05:53 compute-1 nova_compute[225855]: 2026-01-20 15:05:53.549 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:05:53 compute-1 nova_compute[225855]: 2026-01-20 15:05:53.767 225859 DEBUG oslo_concurrency.lockutils [None req-bdcb178d-591d-435f-ba7a-6af358df18ba bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] Lock "33ba7a73-3233-40a3-a49a-e5bbd604dc3c" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 1.651s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:05:54 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:05:54 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:05:54 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:05:54.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:05:54 compute-1 nova_compute[225855]: 2026-01-20 15:05:54.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:05:54 compute-1 nova_compute[225855]: 2026-01-20 15:05:54.388 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:05:54 compute-1 nova_compute[225855]: 2026-01-20 15:05:54.389 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:05:54 compute-1 nova_compute[225855]: 2026-01-20 15:05:54.389 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:05:54 compute-1 nova_compute[225855]: 2026-01-20 15:05:54.390 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 15:05:54 compute-1 nova_compute[225855]: 2026-01-20 15:05:54.390 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:05:54 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/2589043131' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:05:54 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:05:54 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:05:54 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:05:54.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:05:54 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:05:54 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2670266435' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:05:54 compute-1 nova_compute[225855]: 2026-01-20 15:05:54.833 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:05:54 compute-1 nova_compute[225855]: 2026-01-20 15:05:54.912 225859 DEBUG nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] skipping disk for instance-000000a1 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 15:05:54 compute-1 nova_compute[225855]: 2026-01-20 15:05:54.912 225859 DEBUG nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] skipping disk for instance-000000a1 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 15:05:54 compute-1 nova_compute[225855]: 2026-01-20 15:05:54.912 225859 DEBUG nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] skipping disk for instance-000000a1 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 15:05:55 compute-1 nova_compute[225855]: 2026-01-20 15:05:55.054 225859 WARNING nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 15:05:55 compute-1 nova_compute[225855]: 2026-01-20 15:05:55.055 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4132MB free_disk=20.942607879638672GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 15:05:55 compute-1 nova_compute[225855]: 2026-01-20 15:05:55.055 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:05:55 compute-1 nova_compute[225855]: 2026-01-20 15:05:55.056 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:05:55 compute-1 nova_compute[225855]: 2026-01-20 15:05:55.128 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Instance 33ba7a73-3233-40a3-a49a-e5bbd604dc3c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 20 15:05:55 compute-1 nova_compute[225855]: 2026-01-20 15:05:55.128 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 15:05:55 compute-1 nova_compute[225855]: 2026-01-20 15:05:55.128 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 15:05:55 compute-1 nova_compute[225855]: 2026-01-20 15:05:55.333 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:05:55 compute-1 ceph-mon[81775]: pgmap v2453: 321 pgs: 321 active+clean; 246 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.2 MiB/s wr, 177 op/s
Jan 20 15:05:55 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/2670266435' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:05:55 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:05:55 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3251268508' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:05:55 compute-1 nova_compute[225855]: 2026-01-20 15:05:55.765 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:05:55 compute-1 nova_compute[225855]: 2026-01-20 15:05:55.771 225859 DEBUG nova.compute.provider_tree [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:05:55 compute-1 nova_compute[225855]: 2026-01-20 15:05:55.787 225859 DEBUG nova.scheduler.client.report [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:05:55 compute-1 nova_compute[225855]: 2026-01-20 15:05:55.812 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 15:05:55 compute-1 nova_compute[225855]: 2026-01-20 15:05:55.813 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.757s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:05:56 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:05:56 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:05:56 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:05:56.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:05:56 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/2881052373' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:05:56 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/3251268508' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:05:56 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/2070363681' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:05:56 compute-1 nova_compute[225855]: 2026-01-20 15:05:56.570 225859 DEBUG oslo_concurrency.lockutils [None req-1e86e0aa-03c9-4e89-9521-3f8705123b1e bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] Acquiring lock "33ba7a73-3233-40a3-a49a-e5bbd604dc3c" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:05:56 compute-1 nova_compute[225855]: 2026-01-20 15:05:56.571 225859 DEBUG oslo_concurrency.lockutils [None req-1e86e0aa-03c9-4e89-9521-3f8705123b1e bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] Lock "33ba7a73-3233-40a3-a49a-e5bbd604dc3c" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:05:56 compute-1 nova_compute[225855]: 2026-01-20 15:05:56.605 225859 INFO nova.compute.manager [None req-1e86e0aa-03c9-4e89-9521-3f8705123b1e bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] [instance: 33ba7a73-3233-40a3-a49a-e5bbd604dc3c] Detaching volume 4a621494-2aaf-461c-b7c1-05665913aaf9
Jan 20 15:05:56 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:05:56 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:05:56 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:05:56.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:05:56 compute-1 nova_compute[225855]: 2026-01-20 15:05:56.876 225859 INFO nova.virt.block_device [None req-1e86e0aa-03c9-4e89-9521-3f8705123b1e bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] [instance: 33ba7a73-3233-40a3-a49a-e5bbd604dc3c] Attempting to driver detach volume 4a621494-2aaf-461c-b7c1-05665913aaf9 from mountpoint /dev/vdb
Jan 20 15:05:56 compute-1 nova_compute[225855]: 2026-01-20 15:05:56.884 225859 DEBUG nova.virt.libvirt.driver [None req-1e86e0aa-03c9-4e89-9521-3f8705123b1e bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] Attempting to detach device vdb from instance 33ba7a73-3233-40a3-a49a-e5bbd604dc3c from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Jan 20 15:05:56 compute-1 nova_compute[225855]: 2026-01-20 15:05:56.885 225859 DEBUG nova.virt.libvirt.guest [None req-1e86e0aa-03c9-4e89-9521-3f8705123b1e bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] detach device xml: <disk type="network" device="disk">
Jan 20 15:05:56 compute-1 nova_compute[225855]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 20 15:05:56 compute-1 nova_compute[225855]:   <source protocol="rbd" name="volumes/volume-4a621494-2aaf-461c-b7c1-05665913aaf9">
Jan 20 15:05:56 compute-1 nova_compute[225855]:     <host name="192.168.122.100" port="6789"/>
Jan 20 15:05:56 compute-1 nova_compute[225855]:     <host name="192.168.122.102" port="6789"/>
Jan 20 15:05:56 compute-1 nova_compute[225855]:     <host name="192.168.122.101" port="6789"/>
Jan 20 15:05:56 compute-1 nova_compute[225855]:   </source>
Jan 20 15:05:56 compute-1 nova_compute[225855]:   <target dev="vdb" bus="virtio"/>
Jan 20 15:05:56 compute-1 nova_compute[225855]:   <serial>4a621494-2aaf-461c-b7c1-05665913aaf9</serial>
Jan 20 15:05:56 compute-1 nova_compute[225855]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Jan 20 15:05:56 compute-1 nova_compute[225855]: </disk>
Jan 20 15:05:56 compute-1 nova_compute[225855]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Jan 20 15:05:56 compute-1 nova_compute[225855]: 2026-01-20 15:05:56.932 225859 INFO nova.virt.libvirt.driver [None req-1e86e0aa-03c9-4e89-9521-3f8705123b1e bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] Successfully detached device vdb from instance 33ba7a73-3233-40a3-a49a-e5bbd604dc3c from the persistent domain config.
Jan 20 15:05:56 compute-1 nova_compute[225855]: 2026-01-20 15:05:56.933 225859 DEBUG nova.virt.libvirt.driver [None req-1e86e0aa-03c9-4e89-9521-3f8705123b1e bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance 33ba7a73-3233-40a3-a49a-e5bbd604dc3c from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Jan 20 15:05:56 compute-1 nova_compute[225855]: 2026-01-20 15:05:56.933 225859 DEBUG nova.virt.libvirt.guest [None req-1e86e0aa-03c9-4e89-9521-3f8705123b1e bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] detach device xml: <disk type="network" device="disk">
Jan 20 15:05:56 compute-1 nova_compute[225855]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 20 15:05:56 compute-1 nova_compute[225855]:   <source protocol="rbd" name="volumes/volume-4a621494-2aaf-461c-b7c1-05665913aaf9">
Jan 20 15:05:56 compute-1 nova_compute[225855]:     <host name="192.168.122.100" port="6789"/>
Jan 20 15:05:56 compute-1 nova_compute[225855]:     <host name="192.168.122.102" port="6789"/>
Jan 20 15:05:56 compute-1 nova_compute[225855]:     <host name="192.168.122.101" port="6789"/>
Jan 20 15:05:56 compute-1 nova_compute[225855]:   </source>
Jan 20 15:05:56 compute-1 nova_compute[225855]:   <target dev="vdb" bus="virtio"/>
Jan 20 15:05:56 compute-1 nova_compute[225855]:   <serial>4a621494-2aaf-461c-b7c1-05665913aaf9</serial>
Jan 20 15:05:56 compute-1 nova_compute[225855]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Jan 20 15:05:56 compute-1 nova_compute[225855]: </disk>
Jan 20 15:05:56 compute-1 nova_compute[225855]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Jan 20 15:05:57 compute-1 nova_compute[225855]: 2026-01-20 15:05:57.031 225859 DEBUG nova.virt.libvirt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Received event <DeviceRemovedEvent: 1768921557.0311704, 33ba7a73-3233-40a3-a49a-e5bbd604dc3c => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Jan 20 15:05:57 compute-1 nova_compute[225855]: 2026-01-20 15:05:57.033 225859 DEBUG nova.virt.libvirt.driver [None req-1e86e0aa-03c9-4e89-9521-3f8705123b1e bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance 33ba7a73-3233-40a3-a49a-e5bbd604dc3c _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Jan 20 15:05:57 compute-1 nova_compute[225855]: 2026-01-20 15:05:57.036 225859 INFO nova.virt.libvirt.driver [None req-1e86e0aa-03c9-4e89-9521-3f8705123b1e bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] Successfully detached device vdb from instance 33ba7a73-3233-40a3-a49a-e5bbd604dc3c from the live domain config.
Jan 20 15:05:57 compute-1 sudo[291293]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:05:57 compute-1 sudo[291293]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:05:57 compute-1 sudo[291293]: pam_unix(sudo:session): session closed for user root
Jan 20 15:05:57 compute-1 sudo[291324]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:05:57 compute-1 sudo[291324]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:05:57 compute-1 sudo[291324]: pam_unix(sudo:session): session closed for user root
Jan 20 15:05:57 compute-1 podman[291317]: 2026-01-20 15:05:57.281414856 +0000 UTC m=+0.082890063 container health_status 72be95c12e041eb2cd1a16ad70dfa1391d9f03a5f3180a614ead9459dcfcd46f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, org.label-schema.schema-version=1.0)
Jan 20 15:05:57 compute-1 nova_compute[225855]: 2026-01-20 15:05:57.332 225859 DEBUG nova.objects.instance [None req-1e86e0aa-03c9-4e89-9521-3f8705123b1e bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] Lazy-loading 'flavor' on Instance uuid 33ba7a73-3233-40a3-a49a-e5bbd604dc3c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:05:57 compute-1 nova_compute[225855]: 2026-01-20 15:05:57.337 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:05:57 compute-1 nova_compute[225855]: 2026-01-20 15:05:57.393 225859 DEBUG oslo_concurrency.lockutils [None req-1e86e0aa-03c9-4e89-9521-3f8705123b1e bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] Lock "33ba7a73-3233-40a3-a49a-e5bbd604dc3c" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.822s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:05:57 compute-1 ceph-mon[81775]: pgmap v2454: 321 pgs: 321 active+clean; 264 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.7 MiB/s wr, 177 op/s
Jan 20 15:05:57 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/2043803133' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:05:57 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/3670247231' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:05:57 compute-1 nova_compute[225855]: 2026-01-20 15:05:57.687 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:05:58 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e359 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:05:58 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:05:58 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:05:58 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:05:58.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:05:58 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:05:58 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:05:58 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:05:58.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:05:58 compute-1 nova_compute[225855]: 2026-01-20 15:05:58.808 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:05:59 compute-1 nova_compute[225855]: 2026-01-20 15:05:59.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:05:59 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e360 e360: 3 total, 3 up, 3 in
Jan 20 15:05:59 compute-1 ceph-mon[81775]: pgmap v2455: 321 pgs: 321 active+clean; 264 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 814 KiB/s wr, 108 op/s
Jan 20 15:06:00 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:06:00 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:06:00 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:06:00.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:06:00 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/3048424037' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:06:00 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/2837693437' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:06:00 compute-1 ceph-mon[81775]: osdmap e360: 3 total, 3 up, 3 in
Jan 20 15:06:00 compute-1 ceph-mon[81775]: pgmap v2457: 321 pgs: 321 active+clean; 306 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 3.5 MiB/s wr, 129 op/s
Jan 20 15:06:00 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e361 e361: 3 total, 3 up, 3 in
Jan 20 15:06:00 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:06:00 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:06:00 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:06:00.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:06:01 compute-1 nova_compute[225855]: 2026-01-20 15:06:01.431 225859 DEBUG nova.compute.manager [None req-e51df534-7bef-4f9c-bfbf-ab53b38c3363 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] [instance: 33ba7a73-3233-40a3-a49a-e5bbd604dc3c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:06:01 compute-1 nova_compute[225855]: 2026-01-20 15:06:01.477 225859 INFO nova.compute.manager [None req-e51df534-7bef-4f9c-bfbf-ab53b38c3363 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] [instance: 33ba7a73-3233-40a3-a49a-e5bbd604dc3c] instance snapshotting
Jan 20 15:06:01 compute-1 ceph-mon[81775]: osdmap e361: 3 total, 3 up, 3 in
Jan 20 15:06:01 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e362 e362: 3 total, 3 up, 3 in
Jan 20 15:06:01 compute-1 nova_compute[225855]: 2026-01-20 15:06:01.944 225859 INFO nova.virt.libvirt.driver [None req-e51df534-7bef-4f9c-bfbf-ab53b38c3363 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] [instance: 33ba7a73-3233-40a3-a49a-e5bbd604dc3c] Beginning live snapshot process
Jan 20 15:06:02 compute-1 nova_compute[225855]: 2026-01-20 15:06:02.105 225859 DEBUG nova.virt.libvirt.imagebackend [None req-e51df534-7bef-4f9c-bfbf-ab53b38c3363 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] No parent info for a32b3e07-16d8-46fd-9a7b-c242c432fcf9; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Jan 20 15:06:02 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:06:02 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:06:02 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:06:02.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:06:02 compute-1 nova_compute[225855]: 2026-01-20 15:06:02.339 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:06:02 compute-1 nova_compute[225855]: 2026-01-20 15:06:02.398 225859 DEBUG nova.storage.rbd_utils [None req-e51df534-7bef-4f9c-bfbf-ab53b38c3363 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] creating snapshot(b1129a7f63564f43b029059be651efa0) on rbd image(33ba7a73-3233-40a3-a49a-e5bbd604dc3c_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 20 15:06:02 compute-1 ceph-mon[81775]: osdmap e362: 3 total, 3 up, 3 in
Jan 20 15:06:02 compute-1 ceph-mon[81775]: pgmap v2460: 321 pgs: 321 active+clean; 311 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 480 KiB/s rd, 5.4 MiB/s wr, 134 op/s
Jan 20 15:06:02 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e363 e363: 3 total, 3 up, 3 in
Jan 20 15:06:02 compute-1 nova_compute[225855]: 2026-01-20 15:06:02.634 225859 DEBUG nova.storage.rbd_utils [None req-e51df534-7bef-4f9c-bfbf-ab53b38c3363 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] cloning vms/33ba7a73-3233-40a3-a49a-e5bbd604dc3c_disk@b1129a7f63564f43b029059be651efa0 to images/8c970c65-2888-4da3-891e-c2b6eb3ea735 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Jan 20 15:06:02 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:06:02 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:06:02 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:06:02.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:06:02 compute-1 nova_compute[225855]: 2026-01-20 15:06:02.729 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:06:02 compute-1 nova_compute[225855]: 2026-01-20 15:06:02.810 225859 DEBUG nova.storage.rbd_utils [None req-e51df534-7bef-4f9c-bfbf-ab53b38c3363 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] flattening images/8c970c65-2888-4da3-891e-c2b6eb3ea735 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Jan 20 15:06:03 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e363 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:06:03 compute-1 nova_compute[225855]: 2026-01-20 15:06:03.221 225859 DEBUG nova.storage.rbd_utils [None req-e51df534-7bef-4f9c-bfbf-ab53b38c3363 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] removing snapshot(b1129a7f63564f43b029059be651efa0) on rbd image(33ba7a73-3233-40a3-a49a-e5bbd604dc3c_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Jan 20 15:06:03 compute-1 ceph-mon[81775]: osdmap e363: 3 total, 3 up, 3 in
Jan 20 15:06:03 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/2097377447' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:06:03 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/2097377447' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:06:03 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e364 e364: 3 total, 3 up, 3 in
Jan 20 15:06:03 compute-1 nova_compute[225855]: 2026-01-20 15:06:03.648 225859 DEBUG nova.storage.rbd_utils [None req-e51df534-7bef-4f9c-bfbf-ab53b38c3363 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] creating snapshot(snap) on rbd image(8c970c65-2888-4da3-891e-c2b6eb3ea735) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 20 15:06:04 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:06:04 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:06:04 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:06:04.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:06:04 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e365 e365: 3 total, 3 up, 3 in
Jan 20 15:06:04 compute-1 ceph-mon[81775]: osdmap e364: 3 total, 3 up, 3 in
Jan 20 15:06:04 compute-1 ceph-mon[81775]: pgmap v2463: 321 pgs: 2 active+clean+snaptrim, 3 active+clean+snaptrim_wait, 316 active+clean; 349 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.8 MiB/s rd, 6.4 MiB/s wr, 279 op/s
Jan 20 15:06:04 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:06:04 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:06:04 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:06:04.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:06:05 compute-1 ceph-mon[81775]: osdmap e365: 3 total, 3 up, 3 in
Jan 20 15:06:06 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:06:06 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:06:06 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:06:06.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:06:06 compute-1 ceph-mon[81775]: pgmap v2465: 321 pgs: 2 active+clean+snaptrim, 3 active+clean+snaptrim_wait, 316 active+clean; 407 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 15 MiB/s rd, 11 MiB/s wr, 508 op/s
Jan 20 15:06:06 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:06:06 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:06:06 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:06:06.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:06:06 compute-1 nova_compute[225855]: 2026-01-20 15:06:06.864 225859 INFO nova.virt.libvirt.driver [None req-e51df534-7bef-4f9c-bfbf-ab53b38c3363 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] [instance: 33ba7a73-3233-40a3-a49a-e5bbd604dc3c] Snapshot image upload complete
Jan 20 15:06:06 compute-1 nova_compute[225855]: 2026-01-20 15:06:06.865 225859 INFO nova.compute.manager [None req-e51df534-7bef-4f9c-bfbf-ab53b38c3363 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] [instance: 33ba7a73-3233-40a3-a49a-e5bbd604dc3c] Took 5.39 seconds to snapshot the instance on the hypervisor.
Jan 20 15:06:07 compute-1 nova_compute[225855]: 2026-01-20 15:06:07.340 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:06:07 compute-1 nova_compute[225855]: 2026-01-20 15:06:07.731 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:06:08 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e365 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:06:08 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:06:08 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:06:08 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:06:08.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:06:08 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:06:08 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:06:08 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:06:08.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:06:09 compute-1 ceph-mon[81775]: pgmap v2466: 321 pgs: 2 active+clean+snaptrim, 3 active+clean+snaptrim_wait, 316 active+clean; 407 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 12 MiB/s rd, 9.0 MiB/s wr, 400 op/s
Jan 20 15:06:10 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:06:10 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:06:10 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:06:10.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:06:10 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:06:10 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:06:10 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:06:10.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:06:10 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e366 e366: 3 total, 3 up, 3 in
Jan 20 15:06:11 compute-1 ceph-mon[81775]: pgmap v2467: 321 pgs: 321 active+clean; 407 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 9.3 MiB/s rd, 7.1 MiB/s wr, 339 op/s
Jan 20 15:06:11 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/1653416062' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:06:11 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/1653416062' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:06:11 compute-1 ceph-mon[81775]: osdmap e366: 3 total, 3 up, 3 in
Jan 20 15:06:12 compute-1 podman[291518]: 2026-01-20 15:06:12.002581667 +0000 UTC m=+0.049942858 container health_status 533cd8e0c3b8b7f910f15cef49e7f879ab46c12ce646a35628c15f13725eb500 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Jan 20 15:06:12 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:06:12 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:06:12 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:06:12.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:06:12 compute-1 nova_compute[225855]: 2026-01-20 15:06:12.376 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:06:12 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:06:12 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:06:12 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:06:12.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:06:12 compute-1 nova_compute[225855]: 2026-01-20 15:06:12.733 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:06:13 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e366 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:06:13 compute-1 ceph-mon[81775]: pgmap v2469: 321 pgs: 321 active+clean; 407 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 6.7 MiB/s rd, 4.6 MiB/s wr, 258 op/s
Jan 20 15:06:13 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/865544307' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:06:14 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:06:14 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:06:14 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:06:14.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:06:14 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/3713738264' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:06:14 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/3713738264' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:06:14 compute-1 ceph-mon[81775]: pgmap v2470: 321 pgs: 321 active+clean; 408 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 5.6 MiB/s rd, 3.9 MiB/s wr, 226 op/s
Jan 20 15:06:14 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:06:14 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:06:14 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:06:14.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:06:16 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:06:16 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:06:16 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:06:16.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:06:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:06:16.425 140354 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:06:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:06:16.426 140354 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:06:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:06:16.426 140354 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:06:16 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:06:16 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:06:16 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:06:16.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:06:17 compute-1 sudo[291541]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:06:17 compute-1 sudo[291541]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:06:17 compute-1 sudo[291541]: pam_unix(sudo:session): session closed for user root
Jan 20 15:06:17 compute-1 sudo[291566]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:06:17 compute-1 sudo[291566]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:06:17 compute-1 nova_compute[225855]: 2026-01-20 15:06:17.379 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:06:17 compute-1 ceph-mon[81775]: pgmap v2471: 321 pgs: 321 active+clean; 430 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 385 KiB/s rd, 2.6 MiB/s wr, 141 op/s
Jan 20 15:06:17 compute-1 sudo[291566]: pam_unix(sudo:session): session closed for user root
Jan 20 15:06:17 compute-1 nova_compute[225855]: 2026-01-20 15:06:17.735 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:06:17 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:06:17.930 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=52, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '12:bb:42', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '06:92:24:f7:15:56'}, ipsec=False) old=SB_Global(nb_cfg=51) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:06:17 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:06:17.931 140354 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 20 15:06:17 compute-1 nova_compute[225855]: 2026-01-20 15:06:17.933 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:06:18 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e366 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:06:18 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:06:18 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:06:18 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:06:18.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:06:18 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/3494959299' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:06:18 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:06:18 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:06:18 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:06:18.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:06:19 compute-1 ceph-mon[81775]: pgmap v2472: 321 pgs: 321 active+clean; 430 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 385 KiB/s rd, 2.6 MiB/s wr, 141 op/s
Jan 20 15:06:19 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/1917396713' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:06:19 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/4247174354' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:06:20 compute-1 nova_compute[225855]: 2026-01-20 15:06:20.142 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:06:20 compute-1 nova_compute[225855]: 2026-01-20 15:06:20.156 225859 DEBUG oslo_concurrency.lockutils [None req-ae7ef1c3-3d4a-4f00-a894-01c19671aa6e b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] Acquiring lock "5feeb9de-434b-4ec7-aa99-6da718514c6f" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:06:20 compute-1 nova_compute[225855]: 2026-01-20 15:06:20.157 225859 DEBUG oslo_concurrency.lockutils [None req-ae7ef1c3-3d4a-4f00-a894-01c19671aa6e b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] Lock "5feeb9de-434b-4ec7-aa99-6da718514c6f" acquired by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:06:20 compute-1 nova_compute[225855]: 2026-01-20 15:06:20.157 225859 INFO nova.compute.manager [None req-ae7ef1c3-3d4a-4f00-a894-01c19671aa6e b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] [instance: 5feeb9de-434b-4ec7-aa99-6da718514c6f] Unshelving
Jan 20 15:06:20 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:06:20 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:06:20 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:06:20.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:06:20 compute-1 nova_compute[225855]: 2026-01-20 15:06:20.254 225859 INFO nova.virt.block_device [None req-ae7ef1c3-3d4a-4f00-a894-01c19671aa6e b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] [instance: 5feeb9de-434b-4ec7-aa99-6da718514c6f] Booting with volume 94300d81-b4ca-4c0a-9283-83b76826d40f at /dev/vda
Jan 20 15:06:20 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/2692737353' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:06:20 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/2544975497' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:06:20 compute-1 nova_compute[225855]: 2026-01-20 15:06:20.658 225859 DEBUG os_brick.utils [None req-ae7ef1c3-3d4a-4f00-a894-01c19671aa6e b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.101', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-1.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Jan 20 15:06:20 compute-1 nova_compute[225855]: 2026-01-20 15:06:20.659 231081 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:06:20 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:06:20 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:06:20 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:06:20.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:06:20 compute-1 nova_compute[225855]: 2026-01-20 15:06:20.670 231081 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:06:20 compute-1 nova_compute[225855]: 2026-01-20 15:06:20.670 231081 DEBUG oslo.privsep.daemon [-] privsep: reply[5155bd37-dd5a-4f59-8e0d-5df2a95f4d0a]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:06:20 compute-1 nova_compute[225855]: 2026-01-20 15:06:20.671 231081 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:06:20 compute-1 nova_compute[225855]: 2026-01-20 15:06:20.679 231081 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:06:20 compute-1 nova_compute[225855]: 2026-01-20 15:06:20.680 231081 DEBUG oslo.privsep.daemon [-] privsep: reply[8ab853a2-d59b-48b6-9d30-2dbea2c17bc2]: (4, ('InitiatorName=iqn.1994-05.com.redhat:1821ea3dc03d', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:06:20 compute-1 nova_compute[225855]: 2026-01-20 15:06:20.681 231081 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:06:20 compute-1 nova_compute[225855]: 2026-01-20 15:06:20.690 231081 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:06:20 compute-1 nova_compute[225855]: 2026-01-20 15:06:20.691 231081 DEBUG oslo.privsep.daemon [-] privsep: reply[49363ee3-5e53-44a6-a321-c3616c388a7e]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:06:20 compute-1 nova_compute[225855]: 2026-01-20 15:06:20.692 231081 DEBUG oslo.privsep.daemon [-] privsep: reply[a97b22ef-cb54-46c1-8e9b-f12df7e90550]: (4, '870b1f1c-f19c-477b-b282-ee6eeba50974') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:06:20 compute-1 nova_compute[225855]: 2026-01-20 15:06:20.693 225859 DEBUG oslo_concurrency.processutils [None req-ae7ef1c3-3d4a-4f00-a894-01c19671aa6e b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:06:20 compute-1 nova_compute[225855]: 2026-01-20 15:06:20.732 225859 DEBUG oslo_concurrency.processutils [None req-ae7ef1c3-3d4a-4f00-a894-01c19671aa6e b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] CMD "nvme version" returned: 0 in 0.039s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:06:20 compute-1 nova_compute[225855]: 2026-01-20 15:06:20.734 225859 DEBUG os_brick.initiator.connectors.lightos [None req-ae7ef1c3-3d4a-4f00-a894-01c19671aa6e b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Jan 20 15:06:20 compute-1 nova_compute[225855]: 2026-01-20 15:06:20.735 225859 DEBUG os_brick.initiator.connectors.lightos [None req-ae7ef1c3-3d4a-4f00-a894-01c19671aa6e b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Jan 20 15:06:20 compute-1 nova_compute[225855]: 2026-01-20 15:06:20.735 225859 DEBUG os_brick.initiator.connectors.lightos [None req-ae7ef1c3-3d4a-4f00-a894-01c19671aa6e b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:5350774e-8b5e-4dba-80a9-92d405981c1d dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Jan 20 15:06:20 compute-1 nova_compute[225855]: 2026-01-20 15:06:20.735 225859 DEBUG os_brick.utils [None req-ae7ef1c3-3d4a-4f00-a894-01c19671aa6e b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] <== get_connector_properties: return (77ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.101', 'host': 'compute-1.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:1821ea3dc03d', 'do_local_attach': False, 'nvme_hostid': '5350774e-8b5e-4dba-80a9-92d405981c1d', 'system uuid': '870b1f1c-f19c-477b-b282-ee6eeba50974', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:5350774e-8b5e-4dba-80a9-92d405981c1d', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Jan 20 15:06:20 compute-1 nova_compute[225855]: 2026-01-20 15:06:20.736 225859 DEBUG nova.virt.block_device [None req-ae7ef1c3-3d4a-4f00-a894-01c19671aa6e b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] [instance: 5feeb9de-434b-4ec7-aa99-6da718514c6f] Updating existing volume attachment record: 50d9296f-3ad8-43e9-a963-2b942f9bc3e3 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Jan 20 15:06:20 compute-1 ceph-mon[81775]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #115. Immutable memtables: 0.
Jan 20 15:06:20 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:06:20.869242) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 20 15:06:20 compute-1 ceph-mon[81775]: rocksdb: [db/flush_job.cc:856] [default] [JOB 71] Flushing memtable with next log file: 115
Jan 20 15:06:20 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768921580869314, "job": 71, "event": "flush_started", "num_memtables": 1, "num_entries": 1764, "num_deletes": 265, "total_data_size": 3693740, "memory_usage": 3754728, "flush_reason": "Manual Compaction"}
Jan 20 15:06:20 compute-1 ceph-mon[81775]: rocksdb: [db/flush_job.cc:885] [default] [JOB 71] Level-0 flush table #116: started
Jan 20 15:06:20 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768921580883990, "cf_name": "default", "job": 71, "event": "table_file_creation", "file_number": 116, "file_size": 2422490, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 58144, "largest_seqno": 59903, "table_properties": {"data_size": 2415041, "index_size": 4327, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2053, "raw_key_size": 16450, "raw_average_key_size": 20, "raw_value_size": 2399765, "raw_average_value_size": 2999, "num_data_blocks": 189, "num_entries": 800, "num_filter_entries": 800, "num_deletions": 265, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768921455, "oldest_key_time": 1768921455, "file_creation_time": 1768921580, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1539d774-8a6f-4e48-b253-137c44586344", "db_session_id": "LFF7G2OZDOU7TKQ8MKAH", "orig_file_number": 116, "seqno_to_time_mapping": "N/A"}}
Jan 20 15:06:20 compute-1 ceph-mon[81775]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 71] Flush lasted 14804 microseconds, and 5955 cpu microseconds.
Jan 20 15:06:20 compute-1 ceph-mon[81775]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 15:06:20 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:06:20.884041) [db/flush_job.cc:967] [default] [JOB 71] Level-0 flush table #116: 2422490 bytes OK
Jan 20 15:06:20 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:06:20.884065) [db/memtable_list.cc:519] [default] Level-0 commit table #116 started
Jan 20 15:06:20 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:06:20.891384) [db/memtable_list.cc:722] [default] Level-0 commit table #116: memtable #1 done
Jan 20 15:06:20 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:06:20.891409) EVENT_LOG_v1 {"time_micros": 1768921580891402, "job": 71, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 20 15:06:20 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:06:20.891430) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 20 15:06:20 compute-1 ceph-mon[81775]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 71] Try to delete WAL files size 3685531, prev total WAL file size 3685531, number of live WAL files 2.
Jan 20 15:06:20 compute-1 ceph-mon[81775]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000112.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 15:06:20 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:06:20.892454) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032303137' seq:72057594037927935, type:22 .. '6C6F676D0032323731' seq:0, type:0; will stop at (end)
Jan 20 15:06:20 compute-1 ceph-mon[81775]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 72] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 20 15:06:20 compute-1 ceph-mon[81775]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 71 Base level 0, inputs: [116(2365KB)], [114(10071KB)]
Jan 20 15:06:20 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768921580892529, "job": 72, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [116], "files_L6": [114], "score": -1, "input_data_size": 12735937, "oldest_snapshot_seqno": -1}
Jan 20 15:06:21 compute-1 ceph-mon[81775]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 72] Generated table #117: 8596 keys, 12588995 bytes, temperature: kUnknown
Jan 20 15:06:21 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768921581038942, "cf_name": "default", "job": 72, "event": "table_file_creation", "file_number": 117, "file_size": 12588995, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12531223, "index_size": 35196, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 21509, "raw_key_size": 222783, "raw_average_key_size": 25, "raw_value_size": 12377915, "raw_average_value_size": 1439, "num_data_blocks": 1380, "num_entries": 8596, "num_filter_entries": 8596, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768917474, "oldest_key_time": 0, "file_creation_time": 1768921580, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1539d774-8a6f-4e48-b253-137c44586344", "db_session_id": "LFF7G2OZDOU7TKQ8MKAH", "orig_file_number": 117, "seqno_to_time_mapping": "N/A"}}
Jan 20 15:06:21 compute-1 ceph-mon[81775]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 15:06:21 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:06:21.039230) [db/compaction/compaction_job.cc:1663] [default] [JOB 72] Compacted 1@0 + 1@6 files to L6 => 12588995 bytes
Jan 20 15:06:21 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:06:21.040552) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 86.9 rd, 85.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.3, 9.8 +0.0 blob) out(12.0 +0.0 blob), read-write-amplify(10.5) write-amplify(5.2) OK, records in: 9140, records dropped: 544 output_compression: NoCompression
Jan 20 15:06:21 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:06:21.040568) EVENT_LOG_v1 {"time_micros": 1768921581040560, "job": 72, "event": "compaction_finished", "compaction_time_micros": 146475, "compaction_time_cpu_micros": 35056, "output_level": 6, "num_output_files": 1, "total_output_size": 12588995, "num_input_records": 9140, "num_output_records": 8596, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 20 15:06:21 compute-1 ceph-mon[81775]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000116.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 15:06:21 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768921581041029, "job": 72, "event": "table_file_deletion", "file_number": 116}
Jan 20 15:06:21 compute-1 ceph-mon[81775]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000114.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 15:06:21 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768921581043057, "job": 72, "event": "table_file_deletion", "file_number": 114}
Jan 20 15:06:21 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:06:20.892284) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:06:21 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:06:21.043145) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:06:21 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:06:21.043151) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:06:21 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:06:21.043153) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:06:21 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:06:21.043155) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:06:21 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:06:21.043157) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:06:21 compute-1 ceph-mon[81775]: pgmap v2473: 321 pgs: 321 active+clean; 458 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 455 KiB/s rd, 3.5 MiB/s wr, 157 op/s
Jan 20 15:06:21 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/1540173418' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:06:21 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/2218343567' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:06:21 compute-1 nova_compute[225855]: 2026-01-20 15:06:21.799 225859 DEBUG oslo_concurrency.lockutils [None req-ae7ef1c3-3d4a-4f00-a894-01c19671aa6e b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:06:21 compute-1 nova_compute[225855]: 2026-01-20 15:06:21.801 225859 DEBUG oslo_concurrency.lockutils [None req-ae7ef1c3-3d4a-4f00-a894-01c19671aa6e b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:06:21 compute-1 nova_compute[225855]: 2026-01-20 15:06:21.805 225859 DEBUG nova.objects.instance [None req-ae7ef1c3-3d4a-4f00-a894-01c19671aa6e b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] Lazy-loading 'pci_requests' on Instance uuid 5feeb9de-434b-4ec7-aa99-6da718514c6f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:06:21 compute-1 nova_compute[225855]: 2026-01-20 15:06:21.825 225859 DEBUG nova.objects.instance [None req-ae7ef1c3-3d4a-4f00-a894-01c19671aa6e b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] Lazy-loading 'numa_topology' on Instance uuid 5feeb9de-434b-4ec7-aa99-6da718514c6f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:06:21 compute-1 nova_compute[225855]: 2026-01-20 15:06:21.840 225859 DEBUG nova.virt.hardware [None req-ae7ef1c3-3d4a-4f00-a894-01c19671aa6e b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 20 15:06:21 compute-1 nova_compute[225855]: 2026-01-20 15:06:21.840 225859 INFO nova.compute.claims [None req-ae7ef1c3-3d4a-4f00-a894-01c19671aa6e b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] [instance: 5feeb9de-434b-4ec7-aa99-6da718514c6f] Claim successful on node compute-1.ctlplane.example.com
Jan 20 15:06:21 compute-1 nova_compute[225855]: 2026-01-20 15:06:21.967 225859 DEBUG oslo_concurrency.processutils [None req-ae7ef1c3-3d4a-4f00-a894-01c19671aa6e b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:06:22 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:06:22 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:06:22 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:06:22.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:06:22 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:06:22 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3242676901' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:06:22 compute-1 nova_compute[225855]: 2026-01-20 15:06:22.424 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:06:22 compute-1 nova_compute[225855]: 2026-01-20 15:06:22.439 225859 DEBUG oslo_concurrency.processutils [None req-ae7ef1c3-3d4a-4f00-a894-01c19671aa6e b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:06:22 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/3242676901' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:06:22 compute-1 nova_compute[225855]: 2026-01-20 15:06:22.447 225859 DEBUG nova.compute.provider_tree [None req-ae7ef1c3-3d4a-4f00-a894-01c19671aa6e b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:06:22 compute-1 nova_compute[225855]: 2026-01-20 15:06:22.462 225859 DEBUG nova.scheduler.client.report [None req-ae7ef1c3-3d4a-4f00-a894-01c19671aa6e b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:06:22 compute-1 nova_compute[225855]: 2026-01-20 15:06:22.486 225859 DEBUG oslo_concurrency.lockutils [None req-ae7ef1c3-3d4a-4f00-a894-01c19671aa6e b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.685s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:06:22 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:06:22 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:06:22 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:06:22.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:06:22 compute-1 nova_compute[225855]: 2026-01-20 15:06:22.737 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:06:22 compute-1 nova_compute[225855]: 2026-01-20 15:06:22.808 225859 INFO nova.network.neutron [None req-ae7ef1c3-3d4a-4f00-a894-01c19671aa6e b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] [instance: 5feeb9de-434b-4ec7-aa99-6da718514c6f] Updating port 70668adb-f9ad-41cb-8eac-2e0aba32bf22 with attributes {'binding:host_id': 'compute-1.ctlplane.example.com', 'device_owner': 'compute:nova'}
Jan 20 15:06:23 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e366 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:06:23 compute-1 ceph-mon[81775]: pgmap v2474: 321 pgs: 321 active+clean; 482 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 390 KiB/s rd, 4.1 MiB/s wr, 134 op/s
Jan 20 15:06:23 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:06:23.933 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5ffd4ac3-9266-4927-98ad-20a17782c725, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '52'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:06:24 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:06:24 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:06:24 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:06:24.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:06:24 compute-1 nova_compute[225855]: 2026-01-20 15:06:24.550 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:06:24 compute-1 nova_compute[225855]: 2026-01-20 15:06:24.660 225859 DEBUG oslo_concurrency.lockutils [None req-ae7ef1c3-3d4a-4f00-a894-01c19671aa6e b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] Acquiring lock "refresh_cache-5feeb9de-434b-4ec7-aa99-6da718514c6f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:06:24 compute-1 nova_compute[225855]: 2026-01-20 15:06:24.661 225859 DEBUG oslo_concurrency.lockutils [None req-ae7ef1c3-3d4a-4f00-a894-01c19671aa6e b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] Acquired lock "refresh_cache-5feeb9de-434b-4ec7-aa99-6da718514c6f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:06:24 compute-1 nova_compute[225855]: 2026-01-20 15:06:24.661 225859 DEBUG nova.network.neutron [None req-ae7ef1c3-3d4a-4f00-a894-01c19671aa6e b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] [instance: 5feeb9de-434b-4ec7-aa99-6da718514c6f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 15:06:24 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:06:24 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:06:24 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:06:24.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:06:24 compute-1 nova_compute[225855]: 2026-01-20 15:06:24.975 225859 DEBUG nova.compute.manager [req-b0baf1dc-9f24-4299-8c90-d5a44fd6c42a req-49edfd08-24e1-411d-babc-f90e50b80d9e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 5feeb9de-434b-4ec7-aa99-6da718514c6f] Received event network-changed-70668adb-f9ad-41cb-8eac-2e0aba32bf22 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:06:24 compute-1 nova_compute[225855]: 2026-01-20 15:06:24.975 225859 DEBUG nova.compute.manager [req-b0baf1dc-9f24-4299-8c90-d5a44fd6c42a req-49edfd08-24e1-411d-babc-f90e50b80d9e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 5feeb9de-434b-4ec7-aa99-6da718514c6f] Refreshing instance network info cache due to event network-changed-70668adb-f9ad-41cb-8eac-2e0aba32bf22. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 15:06:24 compute-1 nova_compute[225855]: 2026-01-20 15:06:24.975 225859 DEBUG oslo_concurrency.lockutils [req-b0baf1dc-9f24-4299-8c90-d5a44fd6c42a req-49edfd08-24e1-411d-babc-f90e50b80d9e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-5feeb9de-434b-4ec7-aa99-6da718514c6f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:06:25 compute-1 ceph-mon[81775]: pgmap v2475: 321 pgs: 321 active+clean; 486 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 4.0 MiB/s wr, 164 op/s
Jan 20 15:06:26 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:06:26 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:06:26 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:06:26.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:06:26 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:06:26 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:06:26 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:06:26.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:06:27 compute-1 nova_compute[225855]: 2026-01-20 15:06:27.425 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:06:27 compute-1 ceph-mon[81775]: pgmap v2476: 321 pgs: 321 active+clean; 486 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 3.9 MiB/s wr, 269 op/s
Jan 20 15:06:27 compute-1 nova_compute[225855]: 2026-01-20 15:06:27.739 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:06:28 compute-1 podman[291626]: 2026-01-20 15:06:28.042970988 +0000 UTC m=+0.082602245 container health_status 72be95c12e041eb2cd1a16ad70dfa1391d9f03a5f3180a614ead9459dcfcd46f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3)
Jan 20 15:06:28 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e366 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:06:28 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:06:28 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:06:28 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:06:28.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:06:28 compute-1 nova_compute[225855]: 2026-01-20 15:06:28.251 225859 DEBUG nova.network.neutron [None req-ae7ef1c3-3d4a-4f00-a894-01c19671aa6e b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] [instance: 5feeb9de-434b-4ec7-aa99-6da718514c6f] Updating instance_info_cache with network_info: [{"id": "70668adb-f9ad-41cb-8eac-2e0aba32bf22", "address": "fa:16:3e:6a:c0:d3", "network": {"id": "0f434e83-45c8-454d-820b-af39b696a1d5", "bridge": "br-int", "label": "tempest-TestShelveInstance-1862824485-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0fc924d2df984301897e81920c5e192f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap70668adb-f9", "ovs_interfaceid": "70668adb-f9ad-41cb-8eac-2e0aba32bf22", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:06:28 compute-1 nova_compute[225855]: 2026-01-20 15:06:28.272 225859 DEBUG oslo_concurrency.lockutils [None req-ae7ef1c3-3d4a-4f00-a894-01c19671aa6e b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] Releasing lock "refresh_cache-5feeb9de-434b-4ec7-aa99-6da718514c6f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:06:28 compute-1 nova_compute[225855]: 2026-01-20 15:06:28.274 225859 DEBUG nova.virt.libvirt.driver [None req-ae7ef1c3-3d4a-4f00-a894-01c19671aa6e b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] [instance: 5feeb9de-434b-4ec7-aa99-6da718514c6f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 20 15:06:28 compute-1 nova_compute[225855]: 2026-01-20 15:06:28.275 225859 INFO nova.virt.libvirt.driver [None req-ae7ef1c3-3d4a-4f00-a894-01c19671aa6e b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] [instance: 5feeb9de-434b-4ec7-aa99-6da718514c6f] Creating image(s)
Jan 20 15:06:28 compute-1 nova_compute[225855]: 2026-01-20 15:06:28.275 225859 DEBUG nova.virt.libvirt.driver [None req-ae7ef1c3-3d4a-4f00-a894-01c19671aa6e b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] [instance: 5feeb9de-434b-4ec7-aa99-6da718514c6f] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Jan 20 15:06:28 compute-1 nova_compute[225855]: 2026-01-20 15:06:28.275 225859 DEBUG nova.virt.libvirt.driver [None req-ae7ef1c3-3d4a-4f00-a894-01c19671aa6e b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] [instance: 5feeb9de-434b-4ec7-aa99-6da718514c6f] Ensure instance console log exists: /var/lib/nova/instances/5feeb9de-434b-4ec7-aa99-6da718514c6f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 20 15:06:28 compute-1 nova_compute[225855]: 2026-01-20 15:06:28.276 225859 DEBUG oslo_concurrency.lockutils [None req-ae7ef1c3-3d4a-4f00-a894-01c19671aa6e b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:06:28 compute-1 nova_compute[225855]: 2026-01-20 15:06:28.276 225859 DEBUG oslo_concurrency.lockutils [None req-ae7ef1c3-3d4a-4f00-a894-01c19671aa6e b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:06:28 compute-1 nova_compute[225855]: 2026-01-20 15:06:28.276 225859 DEBUG oslo_concurrency.lockutils [None req-ae7ef1c3-3d4a-4f00-a894-01c19671aa6e b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:06:28 compute-1 nova_compute[225855]: 2026-01-20 15:06:28.279 225859 DEBUG nova.virt.libvirt.driver [None req-ae7ef1c3-3d4a-4f00-a894-01c19671aa6e b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] [instance: 5feeb9de-434b-4ec7-aa99-6da718514c6f] Start _get_guest_xml network_info=[{"id": "70668adb-f9ad-41cb-8eac-2e0aba32bf22", "address": "fa:16:3e:6a:c0:d3", "network": {"id": "0f434e83-45c8-454d-820b-af39b696a1d5", "bridge": "br-int", "label": "tempest-TestShelveInstance-1862824485-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0fc924d2df984301897e81920c5e192f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap70668adb-f9", "ovs_interfaceid": "70668adb-f9ad-41cb-8eac-2e0aba32bf22", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'delete_on_termination': True, 'device_type': 'disk', 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-94300d81-b4ca-4c0a-9283-83b76826d40f', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '94300d81-b4ca-4c0a-9283-83b76826d40f', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '5feeb9de-434b-4ec7-aa99-6da718514c6f', 'attached_at': '', 'detached_at': '', 'volume_id': '94300d81-b4ca-4c0a-9283-83b76826d40f', 'serial': '94300d81-b4ca-4c0a-9283-83b76826d40f'}, 'guest_format': None, 'boot_index': 0, 'mount_device': '/dev/vda', 'attachment_id': '50d9296f-3ad8-43e9-a963-2b942f9bc3e3', 'disk_bus': 'virtio', 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 20 15:06:28 compute-1 nova_compute[225855]: 2026-01-20 15:06:28.280 225859 DEBUG oslo_concurrency.lockutils [req-b0baf1dc-9f24-4299-8c90-d5a44fd6c42a req-49edfd08-24e1-411d-babc-f90e50b80d9e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-5feeb9de-434b-4ec7-aa99-6da718514c6f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:06:28 compute-1 nova_compute[225855]: 2026-01-20 15:06:28.280 225859 DEBUG nova.network.neutron [req-b0baf1dc-9f24-4299-8c90-d5a44fd6c42a req-49edfd08-24e1-411d-babc-f90e50b80d9e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 5feeb9de-434b-4ec7-aa99-6da718514c6f] Refreshing network info cache for port 70668adb-f9ad-41cb-8eac-2e0aba32bf22 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 15:06:28 compute-1 nova_compute[225855]: 2026-01-20 15:06:28.287 225859 WARNING nova.virt.libvirt.driver [None req-ae7ef1c3-3d4a-4f00-a894-01c19671aa6e b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 15:06:28 compute-1 nova_compute[225855]: 2026-01-20 15:06:28.294 225859 DEBUG nova.virt.libvirt.host [None req-ae7ef1c3-3d4a-4f00-a894-01c19671aa6e b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 20 15:06:28 compute-1 nova_compute[225855]: 2026-01-20 15:06:28.295 225859 DEBUG nova.virt.libvirt.host [None req-ae7ef1c3-3d4a-4f00-a894-01c19671aa6e b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 20 15:06:28 compute-1 nova_compute[225855]: 2026-01-20 15:06:28.303 225859 DEBUG nova.virt.libvirt.host [None req-ae7ef1c3-3d4a-4f00-a894-01c19671aa6e b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 20 15:06:28 compute-1 nova_compute[225855]: 2026-01-20 15:06:28.303 225859 DEBUG nova.virt.libvirt.host [None req-ae7ef1c3-3d4a-4f00-a894-01c19671aa6e b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 20 15:06:28 compute-1 nova_compute[225855]: 2026-01-20 15:06:28.305 225859 DEBUG nova.virt.libvirt.driver [None req-ae7ef1c3-3d4a-4f00-a894-01c19671aa6e b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 20 15:06:28 compute-1 nova_compute[225855]: 2026-01-20 15:06:28.305 225859 DEBUG nova.virt.hardware [None req-ae7ef1c3-3d4a-4f00-a894-01c19671aa6e b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-20T14:21:55Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='522deaab-a741-4dbb-932d-d8b13a211c33',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 20 15:06:28 compute-1 nova_compute[225855]: 2026-01-20 15:06:28.305 225859 DEBUG nova.virt.hardware [None req-ae7ef1c3-3d4a-4f00-a894-01c19671aa6e b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 20 15:06:28 compute-1 nova_compute[225855]: 2026-01-20 15:06:28.306 225859 DEBUG nova.virt.hardware [None req-ae7ef1c3-3d4a-4f00-a894-01c19671aa6e b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 20 15:06:28 compute-1 nova_compute[225855]: 2026-01-20 15:06:28.306 225859 DEBUG nova.virt.hardware [None req-ae7ef1c3-3d4a-4f00-a894-01c19671aa6e b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 20 15:06:28 compute-1 nova_compute[225855]: 2026-01-20 15:06:28.306 225859 DEBUG nova.virt.hardware [None req-ae7ef1c3-3d4a-4f00-a894-01c19671aa6e b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 20 15:06:28 compute-1 nova_compute[225855]: 2026-01-20 15:06:28.306 225859 DEBUG nova.virt.hardware [None req-ae7ef1c3-3d4a-4f00-a894-01c19671aa6e b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 20 15:06:28 compute-1 nova_compute[225855]: 2026-01-20 15:06:28.306 225859 DEBUG nova.virt.hardware [None req-ae7ef1c3-3d4a-4f00-a894-01c19671aa6e b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 20 15:06:28 compute-1 nova_compute[225855]: 2026-01-20 15:06:28.306 225859 DEBUG nova.virt.hardware [None req-ae7ef1c3-3d4a-4f00-a894-01c19671aa6e b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 20 15:06:28 compute-1 nova_compute[225855]: 2026-01-20 15:06:28.307 225859 DEBUG nova.virt.hardware [None req-ae7ef1c3-3d4a-4f00-a894-01c19671aa6e b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 20 15:06:28 compute-1 nova_compute[225855]: 2026-01-20 15:06:28.307 225859 DEBUG nova.virt.hardware [None req-ae7ef1c3-3d4a-4f00-a894-01c19671aa6e b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 20 15:06:28 compute-1 nova_compute[225855]: 2026-01-20 15:06:28.307 225859 DEBUG nova.virt.hardware [None req-ae7ef1c3-3d4a-4f00-a894-01c19671aa6e b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 20 15:06:28 compute-1 nova_compute[225855]: 2026-01-20 15:06:28.307 225859 DEBUG nova.objects.instance [None req-ae7ef1c3-3d4a-4f00-a894-01c19671aa6e b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] Lazy-loading 'vcpu_model' on Instance uuid 5feeb9de-434b-4ec7-aa99-6da718514c6f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:06:28 compute-1 nova_compute[225855]: 2026-01-20 15:06:28.355 225859 DEBUG nova.storage.rbd_utils [None req-ae7ef1c3-3d4a-4f00-a894-01c19671aa6e b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] rbd image 5feeb9de-434b-4ec7-aa99-6da718514c6f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:06:28 compute-1 nova_compute[225855]: 2026-01-20 15:06:28.362 225859 DEBUG oslo_concurrency.processutils [None req-ae7ef1c3-3d4a-4f00-a894-01c19671aa6e b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:06:28 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:06:28 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:06:28 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:06:28.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:06:28 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 15:06:28 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2756618266' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:06:28 compute-1 nova_compute[225855]: 2026-01-20 15:06:28.857 225859 DEBUG oslo_concurrency.processutils [None req-ae7ef1c3-3d4a-4f00-a894-01c19671aa6e b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.495s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:06:28 compute-1 nova_compute[225855]: 2026-01-20 15:06:28.886 225859 DEBUG nova.virt.libvirt.vif [None req-ae7ef1c3-3d4a-4f00-a894-01c19671aa6e b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-20T15:05:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestShelveInstance-server-670486896',display_name='tempest-TestShelveInstance-server-670486896',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testshelveinstance-server-670486896',id=162,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name='tempest-TestShelveInstance-1862119958',keypairs=<?>,launch_index=0,launched_at=2026-01-20T15:05:47Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='0fc924d2df984301897e81920c5e192f',ramdisk_id='',reservation_id='r-18qxw31n',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TestShelveInstance-1425544575',owner_user_name='tempest-TestShelveInstance-1425544575-project-member'},tags=<?>,task_state='spawning',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T15:06:20Z,user_data=None,user_id='b02a8ef6cc3946ceb2c8846aae2eae68',uuid=5feeb9de-434b-4ec7-aa99-6da718514c6f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='shelved_offloaded') vif={"id": "70668adb-f9ad-41cb-8eac-2e0aba32bf22", "address": "fa:16:3e:6a:c0:d3", "network": {"id": "0f434e83-45c8-454d-820b-af39b696a1d5", "bridge": "br-int", "label": "tempest-TestShelveInstance-1862824485-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0fc924d2df984301897e81920c5e192f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap70668adb-f9", "ovs_interfaceid": "70668adb-f9ad-41cb-8eac-2e0aba32bf22", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 20 15:06:28 compute-1 nova_compute[225855]: 2026-01-20 15:06:28.887 225859 DEBUG nova.network.os_vif_util [None req-ae7ef1c3-3d4a-4f00-a894-01c19671aa6e b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] Converting VIF {"id": "70668adb-f9ad-41cb-8eac-2e0aba32bf22", "address": "fa:16:3e:6a:c0:d3", "network": {"id": "0f434e83-45c8-454d-820b-af39b696a1d5", "bridge": "br-int", "label": "tempest-TestShelveInstance-1862824485-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0fc924d2df984301897e81920c5e192f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap70668adb-f9", "ovs_interfaceid": "70668adb-f9ad-41cb-8eac-2e0aba32bf22", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 15:06:28 compute-1 nova_compute[225855]: 2026-01-20 15:06:28.888 225859 DEBUG nova.network.os_vif_util [None req-ae7ef1c3-3d4a-4f00-a894-01c19671aa6e b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6a:c0:d3,bridge_name='br-int',has_traffic_filtering=True,id=70668adb-f9ad-41cb-8eac-2e0aba32bf22,network=Network(0f434e83-45c8-454d-820b-af39b696a1d5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap70668adb-f9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 15:06:28 compute-1 nova_compute[225855]: 2026-01-20 15:06:28.889 225859 DEBUG nova.objects.instance [None req-ae7ef1c3-3d4a-4f00-a894-01c19671aa6e b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] Lazy-loading 'pci_devices' on Instance uuid 5feeb9de-434b-4ec7-aa99-6da718514c6f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:06:28 compute-1 nova_compute[225855]: 2026-01-20 15:06:28.907 225859 DEBUG nova.virt.libvirt.driver [None req-ae7ef1c3-3d4a-4f00-a894-01c19671aa6e b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] [instance: 5feeb9de-434b-4ec7-aa99-6da718514c6f] End _get_guest_xml xml=<domain type="kvm">
Jan 20 15:06:28 compute-1 nova_compute[225855]:   <uuid>5feeb9de-434b-4ec7-aa99-6da718514c6f</uuid>
Jan 20 15:06:28 compute-1 nova_compute[225855]:   <name>instance-000000a2</name>
Jan 20 15:06:28 compute-1 nova_compute[225855]:   <memory>131072</memory>
Jan 20 15:06:28 compute-1 nova_compute[225855]:   <vcpu>1</vcpu>
Jan 20 15:06:28 compute-1 nova_compute[225855]:   <metadata>
Jan 20 15:06:28 compute-1 nova_compute[225855]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 15:06:28 compute-1 nova_compute[225855]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 15:06:28 compute-1 nova_compute[225855]:       <nova:name>tempest-TestShelveInstance-server-670486896</nova:name>
Jan 20 15:06:28 compute-1 nova_compute[225855]:       <nova:creationTime>2026-01-20 15:06:28</nova:creationTime>
Jan 20 15:06:28 compute-1 nova_compute[225855]:       <nova:flavor name="m1.nano">
Jan 20 15:06:28 compute-1 nova_compute[225855]:         <nova:memory>128</nova:memory>
Jan 20 15:06:28 compute-1 nova_compute[225855]:         <nova:disk>1</nova:disk>
Jan 20 15:06:28 compute-1 nova_compute[225855]:         <nova:swap>0</nova:swap>
Jan 20 15:06:28 compute-1 nova_compute[225855]:         <nova:ephemeral>0</nova:ephemeral>
Jan 20 15:06:28 compute-1 nova_compute[225855]:         <nova:vcpus>1</nova:vcpus>
Jan 20 15:06:28 compute-1 nova_compute[225855]:       </nova:flavor>
Jan 20 15:06:28 compute-1 nova_compute[225855]:       <nova:owner>
Jan 20 15:06:28 compute-1 nova_compute[225855]:         <nova:user uuid="b02a8ef6cc3946ceb2c8846aae2eae68">tempest-TestShelveInstance-1425544575-project-member</nova:user>
Jan 20 15:06:28 compute-1 nova_compute[225855]:         <nova:project uuid="0fc924d2df984301897e81920c5e192f">tempest-TestShelveInstance-1425544575</nova:project>
Jan 20 15:06:28 compute-1 nova_compute[225855]:       </nova:owner>
Jan 20 15:06:28 compute-1 nova_compute[225855]:       <nova:ports>
Jan 20 15:06:28 compute-1 nova_compute[225855]:         <nova:port uuid="70668adb-f9ad-41cb-8eac-2e0aba32bf22">
Jan 20 15:06:28 compute-1 nova_compute[225855]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Jan 20 15:06:28 compute-1 nova_compute[225855]:         </nova:port>
Jan 20 15:06:28 compute-1 nova_compute[225855]:       </nova:ports>
Jan 20 15:06:28 compute-1 nova_compute[225855]:     </nova:instance>
Jan 20 15:06:28 compute-1 nova_compute[225855]:   </metadata>
Jan 20 15:06:28 compute-1 nova_compute[225855]:   <sysinfo type="smbios">
Jan 20 15:06:28 compute-1 nova_compute[225855]:     <system>
Jan 20 15:06:28 compute-1 nova_compute[225855]:       <entry name="manufacturer">RDO</entry>
Jan 20 15:06:28 compute-1 nova_compute[225855]:       <entry name="product">OpenStack Compute</entry>
Jan 20 15:06:28 compute-1 nova_compute[225855]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 15:06:28 compute-1 nova_compute[225855]:       <entry name="serial">5feeb9de-434b-4ec7-aa99-6da718514c6f</entry>
Jan 20 15:06:28 compute-1 nova_compute[225855]:       <entry name="uuid">5feeb9de-434b-4ec7-aa99-6da718514c6f</entry>
Jan 20 15:06:28 compute-1 nova_compute[225855]:       <entry name="family">Virtual Machine</entry>
Jan 20 15:06:28 compute-1 nova_compute[225855]:     </system>
Jan 20 15:06:28 compute-1 nova_compute[225855]:   </sysinfo>
Jan 20 15:06:28 compute-1 nova_compute[225855]:   <os>
Jan 20 15:06:28 compute-1 nova_compute[225855]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 20 15:06:28 compute-1 nova_compute[225855]:     <boot dev="hd"/>
Jan 20 15:06:28 compute-1 nova_compute[225855]:     <smbios mode="sysinfo"/>
Jan 20 15:06:28 compute-1 nova_compute[225855]:   </os>
Jan 20 15:06:28 compute-1 nova_compute[225855]:   <features>
Jan 20 15:06:28 compute-1 nova_compute[225855]:     <acpi/>
Jan 20 15:06:28 compute-1 nova_compute[225855]:     <apic/>
Jan 20 15:06:28 compute-1 nova_compute[225855]:     <vmcoreinfo/>
Jan 20 15:06:28 compute-1 nova_compute[225855]:   </features>
Jan 20 15:06:28 compute-1 nova_compute[225855]:   <clock offset="utc">
Jan 20 15:06:28 compute-1 nova_compute[225855]:     <timer name="pit" tickpolicy="delay"/>
Jan 20 15:06:28 compute-1 nova_compute[225855]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 20 15:06:28 compute-1 nova_compute[225855]:     <timer name="hpet" present="no"/>
Jan 20 15:06:28 compute-1 nova_compute[225855]:   </clock>
Jan 20 15:06:28 compute-1 nova_compute[225855]:   <cpu mode="custom" match="exact">
Jan 20 15:06:28 compute-1 nova_compute[225855]:     <model>Nehalem</model>
Jan 20 15:06:28 compute-1 nova_compute[225855]:     <topology sockets="1" cores="1" threads="1"/>
Jan 20 15:06:28 compute-1 nova_compute[225855]:   </cpu>
Jan 20 15:06:28 compute-1 nova_compute[225855]:   <devices>
Jan 20 15:06:28 compute-1 nova_compute[225855]:     <disk type="network" device="cdrom">
Jan 20 15:06:28 compute-1 nova_compute[225855]:       <driver type="raw" cache="none"/>
Jan 20 15:06:28 compute-1 nova_compute[225855]:       <source protocol="rbd" name="vms/5feeb9de-434b-4ec7-aa99-6da718514c6f_disk.config">
Jan 20 15:06:28 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 15:06:28 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 15:06:28 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 15:06:28 compute-1 nova_compute[225855]:       </source>
Jan 20 15:06:28 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 15:06:28 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 15:06:28 compute-1 nova_compute[225855]:       </auth>
Jan 20 15:06:28 compute-1 nova_compute[225855]:       <target dev="sda" bus="sata"/>
Jan 20 15:06:28 compute-1 nova_compute[225855]:     </disk>
Jan 20 15:06:28 compute-1 nova_compute[225855]:     <disk type="network" device="disk">
Jan 20 15:06:28 compute-1 nova_compute[225855]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 20 15:06:28 compute-1 nova_compute[225855]:       <source protocol="rbd" name="volumes/volume-94300d81-b4ca-4c0a-9283-83b76826d40f">
Jan 20 15:06:28 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 15:06:28 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 15:06:28 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 15:06:28 compute-1 nova_compute[225855]:       </source>
Jan 20 15:06:28 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 15:06:28 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 15:06:28 compute-1 nova_compute[225855]:       </auth>
Jan 20 15:06:28 compute-1 nova_compute[225855]:       <target dev="vda" bus="virtio"/>
Jan 20 15:06:28 compute-1 nova_compute[225855]:       <serial>94300d81-b4ca-4c0a-9283-83b76826d40f</serial>
Jan 20 15:06:28 compute-1 nova_compute[225855]:     </disk>
Jan 20 15:06:28 compute-1 nova_compute[225855]:     <interface type="ethernet">
Jan 20 15:06:28 compute-1 nova_compute[225855]:       <mac address="fa:16:3e:6a:c0:d3"/>
Jan 20 15:06:28 compute-1 nova_compute[225855]:       <model type="virtio"/>
Jan 20 15:06:28 compute-1 nova_compute[225855]:       <driver name="vhost" rx_queue_size="512"/>
Jan 20 15:06:28 compute-1 nova_compute[225855]:       <mtu size="1442"/>
Jan 20 15:06:28 compute-1 nova_compute[225855]:       <target dev="tap70668adb-f9"/>
Jan 20 15:06:28 compute-1 nova_compute[225855]:     </interface>
Jan 20 15:06:28 compute-1 nova_compute[225855]:     <serial type="pty">
Jan 20 15:06:28 compute-1 nova_compute[225855]:       <log file="/var/lib/nova/instances/5feeb9de-434b-4ec7-aa99-6da718514c6f/console.log" append="off"/>
Jan 20 15:06:28 compute-1 nova_compute[225855]:     </serial>
Jan 20 15:06:28 compute-1 nova_compute[225855]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 20 15:06:28 compute-1 nova_compute[225855]:     <video>
Jan 20 15:06:28 compute-1 nova_compute[225855]:       <model type="virtio"/>
Jan 20 15:06:28 compute-1 nova_compute[225855]:     </video>
Jan 20 15:06:28 compute-1 nova_compute[225855]:     <input type="tablet" bus="usb"/>
Jan 20 15:06:28 compute-1 nova_compute[225855]:     <input type="keyboard" bus="usb"/>
Jan 20 15:06:28 compute-1 nova_compute[225855]:     <rng model="virtio">
Jan 20 15:06:28 compute-1 nova_compute[225855]:       <backend model="random">/dev/urandom</backend>
Jan 20 15:06:28 compute-1 nova_compute[225855]:     </rng>
Jan 20 15:06:28 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root"/>
Jan 20 15:06:28 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:06:28 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:06:28 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:06:28 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:06:28 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:06:28 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:06:28 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:06:28 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:06:28 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:06:28 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:06:28 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:06:28 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:06:28 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:06:28 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:06:28 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:06:28 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:06:28 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:06:28 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:06:28 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:06:28 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:06:28 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:06:28 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:06:28 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:06:28 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:06:28 compute-1 nova_compute[225855]:     <controller type="usb" index="0"/>
Jan 20 15:06:28 compute-1 nova_compute[225855]:     <memballoon model="virtio">
Jan 20 15:06:28 compute-1 nova_compute[225855]:       <stats period="10"/>
Jan 20 15:06:28 compute-1 nova_compute[225855]:     </memballoon>
Jan 20 15:06:28 compute-1 nova_compute[225855]:   </devices>
Jan 20 15:06:28 compute-1 nova_compute[225855]: </domain>
Jan 20 15:06:28 compute-1 nova_compute[225855]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 20 15:06:28 compute-1 nova_compute[225855]: 2026-01-20 15:06:28.908 225859 DEBUG nova.compute.manager [None req-ae7ef1c3-3d4a-4f00-a894-01c19671aa6e b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] [instance: 5feeb9de-434b-4ec7-aa99-6da718514c6f] Preparing to wait for external event network-vif-plugged-70668adb-f9ad-41cb-8eac-2e0aba32bf22 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 20 15:06:28 compute-1 nova_compute[225855]: 2026-01-20 15:06:28.908 225859 DEBUG oslo_concurrency.lockutils [None req-ae7ef1c3-3d4a-4f00-a894-01c19671aa6e b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] Acquiring lock "5feeb9de-434b-4ec7-aa99-6da718514c6f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:06:28 compute-1 nova_compute[225855]: 2026-01-20 15:06:28.908 225859 DEBUG oslo_concurrency.lockutils [None req-ae7ef1c3-3d4a-4f00-a894-01c19671aa6e b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] Lock "5feeb9de-434b-4ec7-aa99-6da718514c6f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:06:28 compute-1 nova_compute[225855]: 2026-01-20 15:06:28.908 225859 DEBUG oslo_concurrency.lockutils [None req-ae7ef1c3-3d4a-4f00-a894-01c19671aa6e b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] Lock "5feeb9de-434b-4ec7-aa99-6da718514c6f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:06:28 compute-1 nova_compute[225855]: 2026-01-20 15:06:28.909 225859 DEBUG nova.virt.libvirt.vif [None req-ae7ef1c3-3d4a-4f00-a894-01c19671aa6e b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-20T15:05:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestShelveInstance-server-670486896',display_name='tempest-TestShelveInstance-server-670486896',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testshelveinstance-server-670486896',id=162,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name='tempest-TestShelveInstance-1862119958',keypairs=<?>,launch_index=0,launched_at=2026-01-20T15:05:47Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='0fc924d2df984301897e81920c5e192f',ramdisk_id='',reservation_id='r-18qxw31n',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False'
,owner_project_name='tempest-TestShelveInstance-1425544575',owner_user_name='tempest-TestShelveInstance-1425544575-project-member'},tags=<?>,task_state='spawning',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T15:06:20Z,user_data=None,user_id='b02a8ef6cc3946ceb2c8846aae2eae68',uuid=5feeb9de-434b-4ec7-aa99-6da718514c6f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='shelved_offloaded') vif={"id": "70668adb-f9ad-41cb-8eac-2e0aba32bf22", "address": "fa:16:3e:6a:c0:d3", "network": {"id": "0f434e83-45c8-454d-820b-af39b696a1d5", "bridge": "br-int", "label": "tempest-TestShelveInstance-1862824485-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0fc924d2df984301897e81920c5e192f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap70668adb-f9", "ovs_interfaceid": "70668adb-f9ad-41cb-8eac-2e0aba32bf22", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 20 15:06:28 compute-1 nova_compute[225855]: 2026-01-20 15:06:28.909 225859 DEBUG nova.network.os_vif_util [None req-ae7ef1c3-3d4a-4f00-a894-01c19671aa6e b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] Converting VIF {"id": "70668adb-f9ad-41cb-8eac-2e0aba32bf22", "address": "fa:16:3e:6a:c0:d3", "network": {"id": "0f434e83-45c8-454d-820b-af39b696a1d5", "bridge": "br-int", "label": "tempest-TestShelveInstance-1862824485-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0fc924d2df984301897e81920c5e192f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap70668adb-f9", "ovs_interfaceid": "70668adb-f9ad-41cb-8eac-2e0aba32bf22", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 15:06:28 compute-1 nova_compute[225855]: 2026-01-20 15:06:28.910 225859 DEBUG nova.network.os_vif_util [None req-ae7ef1c3-3d4a-4f00-a894-01c19671aa6e b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6a:c0:d3,bridge_name='br-int',has_traffic_filtering=True,id=70668adb-f9ad-41cb-8eac-2e0aba32bf22,network=Network(0f434e83-45c8-454d-820b-af39b696a1d5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap70668adb-f9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 15:06:28 compute-1 nova_compute[225855]: 2026-01-20 15:06:28.910 225859 DEBUG os_vif [None req-ae7ef1c3-3d4a-4f00-a894-01c19671aa6e b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6a:c0:d3,bridge_name='br-int',has_traffic_filtering=True,id=70668adb-f9ad-41cb-8eac-2e0aba32bf22,network=Network(0f434e83-45c8-454d-820b-af39b696a1d5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap70668adb-f9') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 20 15:06:28 compute-1 nova_compute[225855]: 2026-01-20 15:06:28.911 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:06:28 compute-1 nova_compute[225855]: 2026-01-20 15:06:28.911 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:06:28 compute-1 nova_compute[225855]: 2026-01-20 15:06:28.911 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 15:06:28 compute-1 nova_compute[225855]: 2026-01-20 15:06:28.914 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:06:28 compute-1 nova_compute[225855]: 2026-01-20 15:06:28.915 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap70668adb-f9, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:06:28 compute-1 nova_compute[225855]: 2026-01-20 15:06:28.915 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap70668adb-f9, col_values=(('external_ids', {'iface-id': '70668adb-f9ad-41cb-8eac-2e0aba32bf22', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:6a:c0:d3', 'vm-uuid': '5feeb9de-434b-4ec7-aa99-6da718514c6f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:06:28 compute-1 nova_compute[225855]: 2026-01-20 15:06:28.916 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:06:28 compute-1 NetworkManager[49104]: <info>  [1768921588.9176] manager: (tap70668adb-f9): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/291)
Jan 20 15:06:28 compute-1 nova_compute[225855]: 2026-01-20 15:06:28.918 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 15:06:28 compute-1 nova_compute[225855]: 2026-01-20 15:06:28.923 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:06:28 compute-1 nova_compute[225855]: 2026-01-20 15:06:28.925 225859 INFO os_vif [None req-ae7ef1c3-3d4a-4f00-a894-01c19671aa6e b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6a:c0:d3,bridge_name='br-int',has_traffic_filtering=True,id=70668adb-f9ad-41cb-8eac-2e0aba32bf22,network=Network(0f434e83-45c8-454d-820b-af39b696a1d5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap70668adb-f9')
Jan 20 15:06:28 compute-1 nova_compute[225855]: 2026-01-20 15:06:28.972 225859 DEBUG nova.virt.libvirt.driver [None req-ae7ef1c3-3d4a-4f00-a894-01c19671aa6e b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 15:06:28 compute-1 nova_compute[225855]: 2026-01-20 15:06:28.972 225859 DEBUG nova.virt.libvirt.driver [None req-ae7ef1c3-3d4a-4f00-a894-01c19671aa6e b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 15:06:28 compute-1 nova_compute[225855]: 2026-01-20 15:06:28.972 225859 DEBUG nova.virt.libvirt.driver [None req-ae7ef1c3-3d4a-4f00-a894-01c19671aa6e b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] No VIF found with MAC fa:16:3e:6a:c0:d3, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 20 15:06:28 compute-1 nova_compute[225855]: 2026-01-20 15:06:28.973 225859 INFO nova.virt.libvirt.driver [None req-ae7ef1c3-3d4a-4f00-a894-01c19671aa6e b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] [instance: 5feeb9de-434b-4ec7-aa99-6da718514c6f] Using config drive
Jan 20 15:06:28 compute-1 nova_compute[225855]: 2026-01-20 15:06:28.996 225859 DEBUG nova.storage.rbd_utils [None req-ae7ef1c3-3d4a-4f00-a894-01c19671aa6e b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] rbd image 5feeb9de-434b-4ec7-aa99-6da718514c6f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:06:29 compute-1 nova_compute[225855]: 2026-01-20 15:06:29.013 225859 DEBUG nova.objects.instance [None req-ae7ef1c3-3d4a-4f00-a894-01c19671aa6e b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] Lazy-loading 'ec2_ids' on Instance uuid 5feeb9de-434b-4ec7-aa99-6da718514c6f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:06:29 compute-1 nova_compute[225855]: 2026-01-20 15:06:29.050 225859 DEBUG nova.objects.instance [None req-ae7ef1c3-3d4a-4f00-a894-01c19671aa6e b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] Lazy-loading 'keypairs' on Instance uuid 5feeb9de-434b-4ec7-aa99-6da718514c6f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:06:29 compute-1 ceph-mon[81775]: pgmap v2477: 321 pgs: 321 active+clean; 486 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 1.9 MiB/s wr, 201 op/s
Jan 20 15:06:29 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/2756618266' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:06:30 compute-1 nova_compute[225855]: 2026-01-20 15:06:30.192 225859 INFO nova.virt.libvirt.driver [None req-ae7ef1c3-3d4a-4f00-a894-01c19671aa6e b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] [instance: 5feeb9de-434b-4ec7-aa99-6da718514c6f] Creating config drive at /var/lib/nova/instances/5feeb9de-434b-4ec7-aa99-6da718514c6f/disk.config
Jan 20 15:06:30 compute-1 nova_compute[225855]: 2026-01-20 15:06:30.203 225859 DEBUG oslo_concurrency.processutils [None req-ae7ef1c3-3d4a-4f00-a894-01c19671aa6e b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/5feeb9de-434b-4ec7-aa99-6da718514c6f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp63nupge2 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:06:30 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:06:30 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:06:30 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:06:30.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:06:30 compute-1 nova_compute[225855]: 2026-01-20 15:06:30.346 225859 DEBUG oslo_concurrency.processutils [None req-ae7ef1c3-3d4a-4f00-a894-01c19671aa6e b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/5feeb9de-434b-4ec7-aa99-6da718514c6f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp63nupge2" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:06:30 compute-1 nova_compute[225855]: 2026-01-20 15:06:30.379 225859 DEBUG nova.storage.rbd_utils [None req-ae7ef1c3-3d4a-4f00-a894-01c19671aa6e b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] rbd image 5feeb9de-434b-4ec7-aa99-6da718514c6f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:06:30 compute-1 nova_compute[225855]: 2026-01-20 15:06:30.383 225859 DEBUG oslo_concurrency.processutils [None req-ae7ef1c3-3d4a-4f00-a894-01c19671aa6e b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/5feeb9de-434b-4ec7-aa99-6da718514c6f/disk.config 5feeb9de-434b-4ec7-aa99-6da718514c6f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:06:30 compute-1 nova_compute[225855]: 2026-01-20 15:06:30.526 225859 DEBUG oslo_concurrency.processutils [None req-ae7ef1c3-3d4a-4f00-a894-01c19671aa6e b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/5feeb9de-434b-4ec7-aa99-6da718514c6f/disk.config 5feeb9de-434b-4ec7-aa99-6da718514c6f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.142s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:06:30 compute-1 nova_compute[225855]: 2026-01-20 15:06:30.526 225859 INFO nova.virt.libvirt.driver [None req-ae7ef1c3-3d4a-4f00-a894-01c19671aa6e b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] [instance: 5feeb9de-434b-4ec7-aa99-6da718514c6f] Deleting local config drive /var/lib/nova/instances/5feeb9de-434b-4ec7-aa99-6da718514c6f/disk.config because it was imported into RBD.
Jan 20 15:06:30 compute-1 kernel: tap70668adb-f9: entered promiscuous mode
Jan 20 15:06:30 compute-1 NetworkManager[49104]: <info>  [1768921590.5857] manager: (tap70668adb-f9): new Tun device (/org/freedesktop/NetworkManager/Devices/292)
Jan 20 15:06:30 compute-1 ovn_controller[130490]: 2026-01-20T15:06:30Z|00690|binding|INFO|Claiming lport 70668adb-f9ad-41cb-8eac-2e0aba32bf22 for this chassis.
Jan 20 15:06:30 compute-1 ovn_controller[130490]: 2026-01-20T15:06:30Z|00691|binding|INFO|70668adb-f9ad-41cb-8eac-2e0aba32bf22: Claiming fa:16:3e:6a:c0:d3 10.100.0.3
Jan 20 15:06:30 compute-1 nova_compute[225855]: 2026-01-20 15:06:30.587 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:06:30 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:06:30.596 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6a:c0:d3 10.100.0.3'], port_security=['fa:16:3e:6a:c0:d3 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '5feeb9de-434b-4ec7-aa99-6da718514c6f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0f434e83-45c8-454d-820b-af39b696a1d5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0fc924d2df984301897e81920c5e192f', 'neutron:revision_number': '7', 'neutron:security_group_ids': 'd8d958e0-892e-4275-9633-96783d5a96b0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.233'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cf0669b1-9b02-4bfa-859e-dac906b93fdc, chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], logical_port=70668adb-f9ad-41cb-8eac-2e0aba32bf22) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:06:30 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:06:30.597 140354 INFO neutron.agent.ovn.metadata.agent [-] Port 70668adb-f9ad-41cb-8eac-2e0aba32bf22 in datapath 0f434e83-45c8-454d-820b-af39b696a1d5 bound to our chassis
Jan 20 15:06:30 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:06:30.599 140354 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0f434e83-45c8-454d-820b-af39b696a1d5
Jan 20 15:06:30 compute-1 ovn_controller[130490]: 2026-01-20T15:06:30Z|00692|binding|INFO|Setting lport 70668adb-f9ad-41cb-8eac-2e0aba32bf22 ovn-installed in OVS
Jan 20 15:06:30 compute-1 ovn_controller[130490]: 2026-01-20T15:06:30Z|00693|binding|INFO|Setting lport 70668adb-f9ad-41cb-8eac-2e0aba32bf22 up in Southbound
Jan 20 15:06:30 compute-1 nova_compute[225855]: 2026-01-20 15:06:30.608 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:06:30 compute-1 nova_compute[225855]: 2026-01-20 15:06:30.610 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:06:30 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:06:30.612 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[b92cea63-8afa-420b-802c-158c259a139f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:06:30 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:06:30.614 140354 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap0f434e83-41 in ovnmeta-0f434e83-45c8-454d-820b-af39b696a1d5 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 20 15:06:30 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:06:30.617 229707 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap0f434e83-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 20 15:06:30 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:06:30.617 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[d2e54bf5-e1a3-4a6b-842a-f2d758d910ce]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:06:30 compute-1 systemd-udevd[291768]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 15:06:30 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:06:30.618 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[1250e6ed-b352-4d77-8735-22a05cd68966]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:06:30 compute-1 ceph-mon[81775]: pgmap v2478: 321 pgs: 321 active+clean; 486 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 1.9 MiB/s wr, 204 op/s
Jan 20 15:06:30 compute-1 systemd-machined[194361]: New machine qemu-81-instance-000000a2.
Jan 20 15:06:30 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:06:30.632 140466 DEBUG oslo.privsep.daemon [-] privsep: reply[82ad6d2c-6b45-41ef-a0f4-b7db00217865]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:06:30 compute-1 NetworkManager[49104]: <info>  [1768921590.6345] device (tap70668adb-f9): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 20 15:06:30 compute-1 NetworkManager[49104]: <info>  [1768921590.6352] device (tap70668adb-f9): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 20 15:06:30 compute-1 systemd[1]: Started Virtual Machine qemu-81-instance-000000a2.
Jan 20 15:06:30 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:06:30.655 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[65c806d5-fa64-4ba0-b944-44c2db6208c3]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:06:30 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:06:30 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:06:30 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:06:30.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:06:30 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:06:30.682 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[a3cc42d5-1ac5-451d-a521-40e340b4bd52]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:06:30 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:06:30.686 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[77bc25be-847d-4756-b206-a2b2bee1aa4f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:06:30 compute-1 NetworkManager[49104]: <info>  [1768921590.6881] manager: (tap0f434e83-40): new Veth device (/org/freedesktop/NetworkManager/Devices/293)
Jan 20 15:06:30 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:06:30.718 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[d47d7515-3be9-4e87-abba-71ab833036b1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:06:30 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:06:30.721 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[254e7f0d-9a14-4f2c-a52d-ebaa928e7b8a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:06:30 compute-1 NetworkManager[49104]: <info>  [1768921590.7440] device (tap0f434e83-40): carrier: link connected
Jan 20 15:06:30 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:06:30.748 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[50d0c082-05c2-42e6-8a2b-fcfa7cb015e7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:06:30 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:06:30.765 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[336af903-3b21-49d1-bf73-ca2345c02dd5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0f434e83-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2e:12:8d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 196], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 662967, 'reachable_time': 24643, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 291801, 'error': None, 'target': 'ovnmeta-0f434e83-45c8-454d-820b-af39b696a1d5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:06:30 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:06:30.778 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[f1dac8bb-b0d0-4cc5-928b-d50c2a675733]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe2e:128d'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 662967, 'tstamp': 662967}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 291802, 'error': None, 'target': 'ovnmeta-0f434e83-45c8-454d-820b-af39b696a1d5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:06:30 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:06:30.791 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[42edd7c2-330b-4629-ba2e-fbc9966ab0f0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0f434e83-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2e:12:8d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 196], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 662967, 'reachable_time': 24643, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 291803, 'error': None, 'target': 'ovnmeta-0f434e83-45c8-454d-820b-af39b696a1d5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:06:30 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:06:30.817 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[c4399698-cea6-4eef-95e6-0e8ca37b1e2c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:06:30 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:06:30.867 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[087e4bfe-e2b1-444b-ac91-97b540550198]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:06:30 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:06:30.869 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0f434e83-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:06:30 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:06:30.869 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 15:06:30 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:06:30.870 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0f434e83-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:06:30 compute-1 NetworkManager[49104]: <info>  [1768921590.9165] manager: (tap0f434e83-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/294)
Jan 20 15:06:30 compute-1 kernel: tap0f434e83-40: entered promiscuous mode
Jan 20 15:06:30 compute-1 nova_compute[225855]: 2026-01-20 15:06:30.917 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:06:30 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:06:30.920 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0f434e83-40, col_values=(('external_ids', {'iface-id': '6133323e-bf50-4bbd-bc0b-9ecf135d8cd5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:06:30 compute-1 nova_compute[225855]: 2026-01-20 15:06:30.921 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:06:30 compute-1 ovn_controller[130490]: 2026-01-20T15:06:30Z|00694|binding|INFO|Releasing lport 6133323e-bf50-4bbd-bc0b-9ecf135d8cd5 from this chassis (sb_readonly=0)
Jan 20 15:06:30 compute-1 nova_compute[225855]: 2026-01-20 15:06:30.936 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:06:30 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:06:30.938 140354 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/0f434e83-45c8-454d-820b-af39b696a1d5.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/0f434e83-45c8-454d-820b-af39b696a1d5.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 20 15:06:30 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:06:30.938 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[5e3ba259-87ca-4fa9-82ba-ab49fd709f95]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:06:30 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:06:30.939 140354 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 20 15:06:30 compute-1 ovn_metadata_agent[140349]: global
Jan 20 15:06:30 compute-1 ovn_metadata_agent[140349]:     log         /dev/log local0 debug
Jan 20 15:06:30 compute-1 ovn_metadata_agent[140349]:     log-tag     haproxy-metadata-proxy-0f434e83-45c8-454d-820b-af39b696a1d5
Jan 20 15:06:30 compute-1 ovn_metadata_agent[140349]:     user        root
Jan 20 15:06:30 compute-1 ovn_metadata_agent[140349]:     group       root
Jan 20 15:06:30 compute-1 ovn_metadata_agent[140349]:     maxconn     1024
Jan 20 15:06:30 compute-1 ovn_metadata_agent[140349]:     pidfile     /var/lib/neutron/external/pids/0f434e83-45c8-454d-820b-af39b696a1d5.pid.haproxy
Jan 20 15:06:30 compute-1 ovn_metadata_agent[140349]:     daemon
Jan 20 15:06:30 compute-1 ovn_metadata_agent[140349]: 
Jan 20 15:06:30 compute-1 ovn_metadata_agent[140349]: defaults
Jan 20 15:06:30 compute-1 ovn_metadata_agent[140349]:     log global
Jan 20 15:06:30 compute-1 ovn_metadata_agent[140349]:     mode http
Jan 20 15:06:30 compute-1 ovn_metadata_agent[140349]:     option httplog
Jan 20 15:06:30 compute-1 ovn_metadata_agent[140349]:     option dontlognull
Jan 20 15:06:30 compute-1 ovn_metadata_agent[140349]:     option http-server-close
Jan 20 15:06:30 compute-1 ovn_metadata_agent[140349]:     option forwardfor
Jan 20 15:06:30 compute-1 ovn_metadata_agent[140349]:     retries                 3
Jan 20 15:06:30 compute-1 ovn_metadata_agent[140349]:     timeout http-request    30s
Jan 20 15:06:30 compute-1 ovn_metadata_agent[140349]:     timeout connect         30s
Jan 20 15:06:30 compute-1 ovn_metadata_agent[140349]:     timeout client          32s
Jan 20 15:06:30 compute-1 ovn_metadata_agent[140349]:     timeout server          32s
Jan 20 15:06:30 compute-1 ovn_metadata_agent[140349]:     timeout http-keep-alive 30s
Jan 20 15:06:30 compute-1 ovn_metadata_agent[140349]: 
Jan 20 15:06:30 compute-1 ovn_metadata_agent[140349]: 
Jan 20 15:06:30 compute-1 ovn_metadata_agent[140349]: listen listener
Jan 20 15:06:30 compute-1 ovn_metadata_agent[140349]:     bind 169.254.169.254:80
Jan 20 15:06:30 compute-1 ovn_metadata_agent[140349]:     server metadata /var/lib/neutron/metadata_proxy
Jan 20 15:06:30 compute-1 ovn_metadata_agent[140349]:     http-request add-header X-OVN-Network-ID 0f434e83-45c8-454d-820b-af39b696a1d5
Jan 20 15:06:30 compute-1 ovn_metadata_agent[140349]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 20 15:06:30 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:06:30.940 140354 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-0f434e83-45c8-454d-820b-af39b696a1d5', 'env', 'PROCESS_TAG=haproxy-0f434e83-45c8-454d-820b-af39b696a1d5', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/0f434e83-45c8-454d-820b-af39b696a1d5.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 20 15:06:31 compute-1 nova_compute[225855]: 2026-01-20 15:06:31.200 225859 DEBUG nova.compute.manager [req-42b443c1-3a98-4913-9854-23990427110c req-905f3f03-50ac-4e62-b92f-4be760e1549b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 5feeb9de-434b-4ec7-aa99-6da718514c6f] Received event network-vif-plugged-70668adb-f9ad-41cb-8eac-2e0aba32bf22 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:06:31 compute-1 nova_compute[225855]: 2026-01-20 15:06:31.201 225859 DEBUG oslo_concurrency.lockutils [req-42b443c1-3a98-4913-9854-23990427110c req-905f3f03-50ac-4e62-b92f-4be760e1549b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "5feeb9de-434b-4ec7-aa99-6da718514c6f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:06:31 compute-1 nova_compute[225855]: 2026-01-20 15:06:31.202 225859 DEBUG oslo_concurrency.lockutils [req-42b443c1-3a98-4913-9854-23990427110c req-905f3f03-50ac-4e62-b92f-4be760e1549b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "5feeb9de-434b-4ec7-aa99-6da718514c6f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:06:31 compute-1 nova_compute[225855]: 2026-01-20 15:06:31.203 225859 DEBUG oslo_concurrency.lockutils [req-42b443c1-3a98-4913-9854-23990427110c req-905f3f03-50ac-4e62-b92f-4be760e1549b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "5feeb9de-434b-4ec7-aa99-6da718514c6f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:06:31 compute-1 nova_compute[225855]: 2026-01-20 15:06:31.203 225859 DEBUG nova.compute.manager [req-42b443c1-3a98-4913-9854-23990427110c req-905f3f03-50ac-4e62-b92f-4be760e1549b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 5feeb9de-434b-4ec7-aa99-6da718514c6f] Processing event network-vif-plugged-70668adb-f9ad-41cb-8eac-2e0aba32bf22 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 20 15:06:31 compute-1 nova_compute[225855]: 2026-01-20 15:06:31.204 225859 DEBUG nova.network.neutron [req-b0baf1dc-9f24-4299-8c90-d5a44fd6c42a req-49edfd08-24e1-411d-babc-f90e50b80d9e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 5feeb9de-434b-4ec7-aa99-6da718514c6f] Updated VIF entry in instance network info cache for port 70668adb-f9ad-41cb-8eac-2e0aba32bf22. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 15:06:31 compute-1 nova_compute[225855]: 2026-01-20 15:06:31.205 225859 DEBUG nova.network.neutron [req-b0baf1dc-9f24-4299-8c90-d5a44fd6c42a req-49edfd08-24e1-411d-babc-f90e50b80d9e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 5feeb9de-434b-4ec7-aa99-6da718514c6f] Updating instance_info_cache with network_info: [{"id": "70668adb-f9ad-41cb-8eac-2e0aba32bf22", "address": "fa:16:3e:6a:c0:d3", "network": {"id": "0f434e83-45c8-454d-820b-af39b696a1d5", "bridge": "br-int", "label": "tempest-TestShelveInstance-1862824485-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0fc924d2df984301897e81920c5e192f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap70668adb-f9", "ovs_interfaceid": "70668adb-f9ad-41cb-8eac-2e0aba32bf22", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:06:31 compute-1 nova_compute[225855]: 2026-01-20 15:06:31.220 225859 DEBUG oslo_concurrency.lockutils [req-b0baf1dc-9f24-4299-8c90-d5a44fd6c42a req-49edfd08-24e1-411d-babc-f90e50b80d9e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-5feeb9de-434b-4ec7-aa99-6da718514c6f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:06:31 compute-1 nova_compute[225855]: 2026-01-20 15:06:31.223 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768921591.2233129, 5feeb9de-434b-4ec7-aa99-6da718514c6f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:06:31 compute-1 nova_compute[225855]: 2026-01-20 15:06:31.224 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 5feeb9de-434b-4ec7-aa99-6da718514c6f] VM Started (Lifecycle Event)
Jan 20 15:06:31 compute-1 nova_compute[225855]: 2026-01-20 15:06:31.226 225859 DEBUG nova.compute.manager [None req-ae7ef1c3-3d4a-4f00-a894-01c19671aa6e b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] [instance: 5feeb9de-434b-4ec7-aa99-6da718514c6f] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 20 15:06:31 compute-1 nova_compute[225855]: 2026-01-20 15:06:31.229 225859 DEBUG nova.virt.libvirt.driver [None req-ae7ef1c3-3d4a-4f00-a894-01c19671aa6e b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] [instance: 5feeb9de-434b-4ec7-aa99-6da718514c6f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 20 15:06:31 compute-1 nova_compute[225855]: 2026-01-20 15:06:31.232 225859 INFO nova.virt.libvirt.driver [-] [instance: 5feeb9de-434b-4ec7-aa99-6da718514c6f] Instance spawned successfully.
Jan 20 15:06:31 compute-1 nova_compute[225855]: 2026-01-20 15:06:31.232 225859 DEBUG nova.compute.manager [None req-ae7ef1c3-3d4a-4f00-a894-01c19671aa6e b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] [instance: 5feeb9de-434b-4ec7-aa99-6da718514c6f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:06:31 compute-1 nova_compute[225855]: 2026-01-20 15:06:31.279 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 5feeb9de-434b-4ec7-aa99-6da718514c6f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:06:31 compute-1 nova_compute[225855]: 2026-01-20 15:06:31.282 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 5feeb9de-434b-4ec7-aa99-6da718514c6f] Synchronizing instance power state after lifecycle event "Started"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 15:06:31 compute-1 podman[291877]: 2026-01-20 15:06:31.327006977 +0000 UTC m=+0.053841448 container create 7a60dbc62e73615be07404bd95899ae47d0c88a0458e87e1347e628a09141daa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0f434e83-45c8-454d-820b-af39b696a1d5, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 20 15:06:31 compute-1 nova_compute[225855]: 2026-01-20 15:06:31.336 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 5feeb9de-434b-4ec7-aa99-6da718514c6f] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 15:06:31 compute-1 nova_compute[225855]: 2026-01-20 15:06:31.337 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768921591.2241447, 5feeb9de-434b-4ec7-aa99-6da718514c6f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:06:31 compute-1 nova_compute[225855]: 2026-01-20 15:06:31.337 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 5feeb9de-434b-4ec7-aa99-6da718514c6f] VM Paused (Lifecycle Event)
Jan 20 15:06:31 compute-1 nova_compute[225855]: 2026-01-20 15:06:31.354 225859 DEBUG oslo_concurrency.lockutils [None req-ae7ef1c3-3d4a-4f00-a894-01c19671aa6e b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] Lock "5feeb9de-434b-4ec7-aa99-6da718514c6f" "released" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: held 11.197s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:06:31 compute-1 systemd[1]: Started libpod-conmon-7a60dbc62e73615be07404bd95899ae47d0c88a0458e87e1347e628a09141daa.scope.
Jan 20 15:06:31 compute-1 nova_compute[225855]: 2026-01-20 15:06:31.373 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 5feeb9de-434b-4ec7-aa99-6da718514c6f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:06:31 compute-1 nova_compute[225855]: 2026-01-20 15:06:31.377 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768921591.2283845, 5feeb9de-434b-4ec7-aa99-6da718514c6f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:06:31 compute-1 nova_compute[225855]: 2026-01-20 15:06:31.378 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 5feeb9de-434b-4ec7-aa99-6da718514c6f] VM Resumed (Lifecycle Event)
Jan 20 15:06:31 compute-1 systemd[1]: Started libcrun container.
Jan 20 15:06:31 compute-1 podman[291877]: 2026-01-20 15:06:31.299749234 +0000 UTC m=+0.026583745 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 20 15:06:31 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2da085baf4d307a9cd37776373e9378476598690798d692b40336094440304d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 20 15:06:31 compute-1 nova_compute[225855]: 2026-01-20 15:06:31.399 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 5feeb9de-434b-4ec7-aa99-6da718514c6f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:06:31 compute-1 nova_compute[225855]: 2026-01-20 15:06:31.402 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 5feeb9de-434b-4ec7-aa99-6da718514c6f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 15:06:31 compute-1 podman[291877]: 2026-01-20 15:06:31.409943561 +0000 UTC m=+0.136778052 container init 7a60dbc62e73615be07404bd95899ae47d0c88a0458e87e1347e628a09141daa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0f434e83-45c8-454d-820b-af39b696a1d5, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 20 15:06:31 compute-1 podman[291877]: 2026-01-20 15:06:31.414896171 +0000 UTC m=+0.141730642 container start 7a60dbc62e73615be07404bd95899ae47d0c88a0458e87e1347e628a09141daa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0f434e83-45c8-454d-820b-af39b696a1d5, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, tcib_managed=true)
Jan 20 15:06:31 compute-1 neutron-haproxy-ovnmeta-0f434e83-45c8-454d-820b-af39b696a1d5[291892]: [NOTICE]   (291896) : New worker (291898) forked
Jan 20 15:06:31 compute-1 neutron-haproxy-ovnmeta-0f434e83-45c8-454d-820b-af39b696a1d5[291892]: [NOTICE]   (291896) : Loading success.
Jan 20 15:06:31 compute-1 nova_compute[225855]: 2026-01-20 15:06:31.461 225859 DEBUG oslo_concurrency.lockutils [None req-3a9b1e1c-62ab-4a29-8bb2-7f8e6748dcd8 a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] Acquiring lock "5f8a2718-2106-431c-82c1-2609a52e7fb2" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:06:31 compute-1 nova_compute[225855]: 2026-01-20 15:06:31.461 225859 DEBUG oslo_concurrency.lockutils [None req-3a9b1e1c-62ab-4a29-8bb2-7f8e6748dcd8 a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] Lock "5f8a2718-2106-431c-82c1-2609a52e7fb2" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:06:31 compute-1 nova_compute[225855]: 2026-01-20 15:06:31.477 225859 DEBUG nova.compute.manager [None req-3a9b1e1c-62ab-4a29-8bb2-7f8e6748dcd8 a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] [instance: 5f8a2718-2106-431c-82c1-2609a52e7fb2] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 20 15:06:31 compute-1 nova_compute[225855]: 2026-01-20 15:06:31.548 225859 DEBUG oslo_concurrency.lockutils [None req-3a9b1e1c-62ab-4a29-8bb2-7f8e6748dcd8 a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:06:31 compute-1 nova_compute[225855]: 2026-01-20 15:06:31.549 225859 DEBUG oslo_concurrency.lockutils [None req-3a9b1e1c-62ab-4a29-8bb2-7f8e6748dcd8 a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:06:31 compute-1 nova_compute[225855]: 2026-01-20 15:06:31.555 225859 DEBUG nova.virt.hardware [None req-3a9b1e1c-62ab-4a29-8bb2-7f8e6748dcd8 a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 20 15:06:31 compute-1 nova_compute[225855]: 2026-01-20 15:06:31.556 225859 INFO nova.compute.claims [None req-3a9b1e1c-62ab-4a29-8bb2-7f8e6748dcd8 a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] [instance: 5f8a2718-2106-431c-82c1-2609a52e7fb2] Claim successful on node compute-1.ctlplane.example.com
Jan 20 15:06:31 compute-1 nova_compute[225855]: 2026-01-20 15:06:31.711 225859 DEBUG oslo_concurrency.processutils [None req-3a9b1e1c-62ab-4a29-8bb2-7f8e6748dcd8 a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:06:32 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:06:32 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3538617676' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:06:32 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:06:32 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:06:32 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:06:32.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:06:32 compute-1 nova_compute[225855]: 2026-01-20 15:06:32.229 225859 DEBUG oslo_concurrency.processutils [None req-3a9b1e1c-62ab-4a29-8bb2-7f8e6748dcd8 a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.518s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:06:32 compute-1 nova_compute[225855]: 2026-01-20 15:06:32.235 225859 DEBUG nova.compute.provider_tree [None req-3a9b1e1c-62ab-4a29-8bb2-7f8e6748dcd8 a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:06:32 compute-1 nova_compute[225855]: 2026-01-20 15:06:32.254 225859 DEBUG nova.scheduler.client.report [None req-3a9b1e1c-62ab-4a29-8bb2-7f8e6748dcd8 a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:06:32 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/3538617676' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:06:32 compute-1 nova_compute[225855]: 2026-01-20 15:06:32.278 225859 DEBUG oslo_concurrency.lockutils [None req-3a9b1e1c-62ab-4a29-8bb2-7f8e6748dcd8 a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.729s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:06:32 compute-1 nova_compute[225855]: 2026-01-20 15:06:32.279 225859 DEBUG nova.compute.manager [None req-3a9b1e1c-62ab-4a29-8bb2-7f8e6748dcd8 a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] [instance: 5f8a2718-2106-431c-82c1-2609a52e7fb2] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 20 15:06:32 compute-1 nova_compute[225855]: 2026-01-20 15:06:32.331 225859 DEBUG nova.compute.manager [None req-3a9b1e1c-62ab-4a29-8bb2-7f8e6748dcd8 a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] [instance: 5f8a2718-2106-431c-82c1-2609a52e7fb2] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 20 15:06:32 compute-1 nova_compute[225855]: 2026-01-20 15:06:32.332 225859 DEBUG nova.network.neutron [None req-3a9b1e1c-62ab-4a29-8bb2-7f8e6748dcd8 a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] [instance: 5f8a2718-2106-431c-82c1-2609a52e7fb2] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 20 15:06:32 compute-1 nova_compute[225855]: 2026-01-20 15:06:32.368 225859 INFO nova.virt.libvirt.driver [None req-3a9b1e1c-62ab-4a29-8bb2-7f8e6748dcd8 a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] [instance: 5f8a2718-2106-431c-82c1-2609a52e7fb2] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 20 15:06:32 compute-1 nova_compute[225855]: 2026-01-20 15:06:32.384 225859 DEBUG nova.compute.manager [None req-3a9b1e1c-62ab-4a29-8bb2-7f8e6748dcd8 a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] [instance: 5f8a2718-2106-431c-82c1-2609a52e7fb2] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 20 15:06:32 compute-1 nova_compute[225855]: 2026-01-20 15:06:32.429 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:06:32 compute-1 nova_compute[225855]: 2026-01-20 15:06:32.485 225859 DEBUG nova.compute.manager [None req-3a9b1e1c-62ab-4a29-8bb2-7f8e6748dcd8 a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] [instance: 5f8a2718-2106-431c-82c1-2609a52e7fb2] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 20 15:06:32 compute-1 nova_compute[225855]: 2026-01-20 15:06:32.486 225859 DEBUG nova.virt.libvirt.driver [None req-3a9b1e1c-62ab-4a29-8bb2-7f8e6748dcd8 a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] [instance: 5f8a2718-2106-431c-82c1-2609a52e7fb2] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 20 15:06:32 compute-1 nova_compute[225855]: 2026-01-20 15:06:32.487 225859 INFO nova.virt.libvirt.driver [None req-3a9b1e1c-62ab-4a29-8bb2-7f8e6748dcd8 a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] [instance: 5f8a2718-2106-431c-82c1-2609a52e7fb2] Creating image(s)
Jan 20 15:06:32 compute-1 nova_compute[225855]: 2026-01-20 15:06:32.509 225859 DEBUG nova.storage.rbd_utils [None req-3a9b1e1c-62ab-4a29-8bb2-7f8e6748dcd8 a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] rbd image 5f8a2718-2106-431c-82c1-2609a52e7fb2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:06:32 compute-1 nova_compute[225855]: 2026-01-20 15:06:32.535 225859 DEBUG nova.storage.rbd_utils [None req-3a9b1e1c-62ab-4a29-8bb2-7f8e6748dcd8 a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] rbd image 5f8a2718-2106-431c-82c1-2609a52e7fb2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:06:32 compute-1 nova_compute[225855]: 2026-01-20 15:06:32.560 225859 DEBUG nova.storage.rbd_utils [None req-3a9b1e1c-62ab-4a29-8bb2-7f8e6748dcd8 a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] rbd image 5f8a2718-2106-431c-82c1-2609a52e7fb2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:06:32 compute-1 nova_compute[225855]: 2026-01-20 15:06:32.563 225859 DEBUG oslo_concurrency.processutils [None req-3a9b1e1c-62ab-4a29-8bb2-7f8e6748dcd8 a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:06:32 compute-1 nova_compute[225855]: 2026-01-20 15:06:32.598 225859 DEBUG nova.policy [None req-3a9b1e1c-62ab-4a29-8bb2-7f8e6748dcd8 a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a2beb3d6247e457abd6e8d93cc602f02', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '5e161d5a47f845fd89eb3f10627a0830', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 20 15:06:32 compute-1 nova_compute[225855]: 2026-01-20 15:06:32.643 225859 DEBUG oslo_concurrency.processutils [None req-3a9b1e1c-62ab-4a29-8bb2-7f8e6748dcd8 a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:06:32 compute-1 nova_compute[225855]: 2026-01-20 15:06:32.644 225859 DEBUG oslo_concurrency.lockutils [None req-3a9b1e1c-62ab-4a29-8bb2-7f8e6748dcd8 a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] Acquiring lock "82d5c1918fd7c974214c7a48c1793a7a82560462" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:06:32 compute-1 nova_compute[225855]: 2026-01-20 15:06:32.645 225859 DEBUG oslo_concurrency.lockutils [None req-3a9b1e1c-62ab-4a29-8bb2-7f8e6748dcd8 a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:06:32 compute-1 nova_compute[225855]: 2026-01-20 15:06:32.645 225859 DEBUG oslo_concurrency.lockutils [None req-3a9b1e1c-62ab-4a29-8bb2-7f8e6748dcd8 a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:06:32 compute-1 nova_compute[225855]: 2026-01-20 15:06:32.672 225859 DEBUG nova.storage.rbd_utils [None req-3a9b1e1c-62ab-4a29-8bb2-7f8e6748dcd8 a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] rbd image 5f8a2718-2106-431c-82c1-2609a52e7fb2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:06:32 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:06:32 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:06:32 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:06:32.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:06:32 compute-1 nova_compute[225855]: 2026-01-20 15:06:32.677 225859 DEBUG oslo_concurrency.processutils [None req-3a9b1e1c-62ab-4a29-8bb2-7f8e6748dcd8 a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 5f8a2718-2106-431c-82c1-2609a52e7fb2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:06:32 compute-1 nova_compute[225855]: 2026-01-20 15:06:32.955 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:06:32 compute-1 nova_compute[225855]: 2026-01-20 15:06:32.972 225859 DEBUG oslo_concurrency.processutils [None req-3a9b1e1c-62ab-4a29-8bb2-7f8e6748dcd8 a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 5f8a2718-2106-431c-82c1-2609a52e7fb2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.295s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:06:33 compute-1 nova_compute[225855]: 2026-01-20 15:06:33.051 225859 DEBUG nova.storage.rbd_utils [None req-3a9b1e1c-62ab-4a29-8bb2-7f8e6748dcd8 a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] resizing rbd image 5f8a2718-2106-431c-82c1-2609a52e7fb2_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 20 15:06:33 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e366 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:06:33 compute-1 nova_compute[225855]: 2026-01-20 15:06:33.148 225859 DEBUG nova.objects.instance [None req-3a9b1e1c-62ab-4a29-8bb2-7f8e6748dcd8 a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] Lazy-loading 'migration_context' on Instance uuid 5f8a2718-2106-431c-82c1-2609a52e7fb2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:06:33 compute-1 nova_compute[225855]: 2026-01-20 15:06:33.165 225859 DEBUG nova.virt.libvirt.driver [None req-3a9b1e1c-62ab-4a29-8bb2-7f8e6748dcd8 a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] [instance: 5f8a2718-2106-431c-82c1-2609a52e7fb2] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 20 15:06:33 compute-1 nova_compute[225855]: 2026-01-20 15:06:33.166 225859 DEBUG nova.virt.libvirt.driver [None req-3a9b1e1c-62ab-4a29-8bb2-7f8e6748dcd8 a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] [instance: 5f8a2718-2106-431c-82c1-2609a52e7fb2] Ensure instance console log exists: /var/lib/nova/instances/5f8a2718-2106-431c-82c1-2609a52e7fb2/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 20 15:06:33 compute-1 nova_compute[225855]: 2026-01-20 15:06:33.166 225859 DEBUG oslo_concurrency.lockutils [None req-3a9b1e1c-62ab-4a29-8bb2-7f8e6748dcd8 a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:06:33 compute-1 nova_compute[225855]: 2026-01-20 15:06:33.167 225859 DEBUG oslo_concurrency.lockutils [None req-3a9b1e1c-62ab-4a29-8bb2-7f8e6748dcd8 a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:06:33 compute-1 nova_compute[225855]: 2026-01-20 15:06:33.167 225859 DEBUG oslo_concurrency.lockutils [None req-3a9b1e1c-62ab-4a29-8bb2-7f8e6748dcd8 a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:06:33 compute-1 ceph-mon[81775]: pgmap v2479: 321 pgs: 321 active+clean; 486 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 1.1 MiB/s wr, 173 op/s
Jan 20 15:06:33 compute-1 nova_compute[225855]: 2026-01-20 15:06:33.326 225859 DEBUG nova.compute.manager [req-dbb32488-35f0-46ea-9e59-ba9c5923d0c7 req-1a7a350b-dc89-4eea-b390-f3a59cd6e5ea 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 5feeb9de-434b-4ec7-aa99-6da718514c6f] Received event network-vif-plugged-70668adb-f9ad-41cb-8eac-2e0aba32bf22 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:06:33 compute-1 nova_compute[225855]: 2026-01-20 15:06:33.327 225859 DEBUG oslo_concurrency.lockutils [req-dbb32488-35f0-46ea-9e59-ba9c5923d0c7 req-1a7a350b-dc89-4eea-b390-f3a59cd6e5ea 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "5feeb9de-434b-4ec7-aa99-6da718514c6f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:06:33 compute-1 nova_compute[225855]: 2026-01-20 15:06:33.327 225859 DEBUG oslo_concurrency.lockutils [req-dbb32488-35f0-46ea-9e59-ba9c5923d0c7 req-1a7a350b-dc89-4eea-b390-f3a59cd6e5ea 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "5feeb9de-434b-4ec7-aa99-6da718514c6f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:06:33 compute-1 nova_compute[225855]: 2026-01-20 15:06:33.327 225859 DEBUG oslo_concurrency.lockutils [req-dbb32488-35f0-46ea-9e59-ba9c5923d0c7 req-1a7a350b-dc89-4eea-b390-f3a59cd6e5ea 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "5feeb9de-434b-4ec7-aa99-6da718514c6f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:06:33 compute-1 nova_compute[225855]: 2026-01-20 15:06:33.327 225859 DEBUG nova.compute.manager [req-dbb32488-35f0-46ea-9e59-ba9c5923d0c7 req-1a7a350b-dc89-4eea-b390-f3a59cd6e5ea 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 5feeb9de-434b-4ec7-aa99-6da718514c6f] No waiting events found dispatching network-vif-plugged-70668adb-f9ad-41cb-8eac-2e0aba32bf22 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:06:33 compute-1 nova_compute[225855]: 2026-01-20 15:06:33.327 225859 WARNING nova.compute.manager [req-dbb32488-35f0-46ea-9e59-ba9c5923d0c7 req-1a7a350b-dc89-4eea-b390-f3a59cd6e5ea 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 5feeb9de-434b-4ec7-aa99-6da718514c6f] Received unexpected event network-vif-plugged-70668adb-f9ad-41cb-8eac-2e0aba32bf22 for instance with vm_state active and task_state None.
Jan 20 15:06:33 compute-1 nova_compute[225855]: 2026-01-20 15:06:33.334 225859 DEBUG nova.network.neutron [None req-3a9b1e1c-62ab-4a29-8bb2-7f8e6748dcd8 a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] [instance: 5f8a2718-2106-431c-82c1-2609a52e7fb2] Successfully created port: 6e7af943-7ef0-441d-a402-bd595082f98e _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 20 15:06:33 compute-1 nova_compute[225855]: 2026-01-20 15:06:33.917 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:06:34 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:06:34 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:06:34 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:06:34.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:06:34 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:06:34 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:06:34 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:06:34.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:06:34 compute-1 nova_compute[225855]: 2026-01-20 15:06:34.977 225859 DEBUG nova.network.neutron [None req-3a9b1e1c-62ab-4a29-8bb2-7f8e6748dcd8 a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] [instance: 5f8a2718-2106-431c-82c1-2609a52e7fb2] Successfully updated port: 6e7af943-7ef0-441d-a402-bd595082f98e _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 20 15:06:34 compute-1 nova_compute[225855]: 2026-01-20 15:06:34.997 225859 DEBUG oslo_concurrency.lockutils [None req-3a9b1e1c-62ab-4a29-8bb2-7f8e6748dcd8 a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] Acquiring lock "refresh_cache-5f8a2718-2106-431c-82c1-2609a52e7fb2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:06:34 compute-1 nova_compute[225855]: 2026-01-20 15:06:34.997 225859 DEBUG oslo_concurrency.lockutils [None req-3a9b1e1c-62ab-4a29-8bb2-7f8e6748dcd8 a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] Acquired lock "refresh_cache-5f8a2718-2106-431c-82c1-2609a52e7fb2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:06:34 compute-1 nova_compute[225855]: 2026-01-20 15:06:34.997 225859 DEBUG nova.network.neutron [None req-3a9b1e1c-62ab-4a29-8bb2-7f8e6748dcd8 a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] [instance: 5f8a2718-2106-431c-82c1-2609a52e7fb2] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 15:06:35 compute-1 nova_compute[225855]: 2026-01-20 15:06:35.092 225859 DEBUG nova.compute.manager [req-5ece68d1-de02-4889-9091-0f350535054a req-cbcc52ea-193a-4403-9426-b15d13d5da97 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 5f8a2718-2106-431c-82c1-2609a52e7fb2] Received event network-changed-6e7af943-7ef0-441d-a402-bd595082f98e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:06:35 compute-1 nova_compute[225855]: 2026-01-20 15:06:35.093 225859 DEBUG nova.compute.manager [req-5ece68d1-de02-4889-9091-0f350535054a req-cbcc52ea-193a-4403-9426-b15d13d5da97 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 5f8a2718-2106-431c-82c1-2609a52e7fb2] Refreshing instance network info cache due to event network-changed-6e7af943-7ef0-441d-a402-bd595082f98e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 15:06:35 compute-1 nova_compute[225855]: 2026-01-20 15:06:35.093 225859 DEBUG oslo_concurrency.lockutils [req-5ece68d1-de02-4889-9091-0f350535054a req-cbcc52ea-193a-4403-9426-b15d13d5da97 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-5f8a2718-2106-431c-82c1-2609a52e7fb2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:06:35 compute-1 nova_compute[225855]: 2026-01-20 15:06:35.207 225859 DEBUG nova.network.neutron [None req-3a9b1e1c-62ab-4a29-8bb2-7f8e6748dcd8 a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] [instance: 5f8a2718-2106-431c-82c1-2609a52e7fb2] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 20 15:06:35 compute-1 ceph-mon[81775]: pgmap v2480: 321 pgs: 321 active+clean; 497 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 335 KiB/s wr, 174 op/s
Jan 20 15:06:36 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:06:36 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:06:36 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:06:36.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:06:36 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:06:36 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:06:36 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:06:36.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:06:36 compute-1 nova_compute[225855]: 2026-01-20 15:06:36.974 225859 DEBUG nova.network.neutron [None req-3a9b1e1c-62ab-4a29-8bb2-7f8e6748dcd8 a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] [instance: 5f8a2718-2106-431c-82c1-2609a52e7fb2] Updating instance_info_cache with network_info: [{"id": "6e7af943-7ef0-441d-a402-bd595082f98e", "address": "fa:16:3e:21:8b:e2", "network": {"id": "5bac39b9-563a-456f-9168-fd10b1b28c21", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-3053955-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "5e161d5a47f845fd89eb3f10627a0830", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e7af943-7e", "ovs_interfaceid": "6e7af943-7ef0-441d-a402-bd595082f98e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:06:36 compute-1 nova_compute[225855]: 2026-01-20 15:06:36.994 225859 DEBUG oslo_concurrency.lockutils [None req-3a9b1e1c-62ab-4a29-8bb2-7f8e6748dcd8 a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] Releasing lock "refresh_cache-5f8a2718-2106-431c-82c1-2609a52e7fb2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:06:36 compute-1 nova_compute[225855]: 2026-01-20 15:06:36.995 225859 DEBUG nova.compute.manager [None req-3a9b1e1c-62ab-4a29-8bb2-7f8e6748dcd8 a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] [instance: 5f8a2718-2106-431c-82c1-2609a52e7fb2] Instance network_info: |[{"id": "6e7af943-7ef0-441d-a402-bd595082f98e", "address": "fa:16:3e:21:8b:e2", "network": {"id": "5bac39b9-563a-456f-9168-fd10b1b28c21", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-3053955-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "5e161d5a47f845fd89eb3f10627a0830", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e7af943-7e", "ovs_interfaceid": "6e7af943-7ef0-441d-a402-bd595082f98e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 20 15:06:36 compute-1 nova_compute[225855]: 2026-01-20 15:06:36.995 225859 DEBUG oslo_concurrency.lockutils [req-5ece68d1-de02-4889-9091-0f350535054a req-cbcc52ea-193a-4403-9426-b15d13d5da97 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-5f8a2718-2106-431c-82c1-2609a52e7fb2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:06:36 compute-1 nova_compute[225855]: 2026-01-20 15:06:36.995 225859 DEBUG nova.network.neutron [req-5ece68d1-de02-4889-9091-0f350535054a req-cbcc52ea-193a-4403-9426-b15d13d5da97 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 5f8a2718-2106-431c-82c1-2609a52e7fb2] Refreshing network info cache for port 6e7af943-7ef0-441d-a402-bd595082f98e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 15:06:36 compute-1 nova_compute[225855]: 2026-01-20 15:06:36.998 225859 DEBUG nova.virt.libvirt.driver [None req-3a9b1e1c-62ab-4a29-8bb2-7f8e6748dcd8 a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] [instance: 5f8a2718-2106-431c-82c1-2609a52e7fb2] Start _get_guest_xml network_info=[{"id": "6e7af943-7ef0-441d-a402-bd595082f98e", "address": "fa:16:3e:21:8b:e2", "network": {"id": "5bac39b9-563a-456f-9168-fd10b1b28c21", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-3053955-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "5e161d5a47f845fd89eb3f10627a0830", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e7af943-7e", "ovs_interfaceid": "6e7af943-7ef0-441d-a402-bd595082f98e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'device_type': 'disk', 'encryption_options': None, 'size': 0, 'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'guest_format': None, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'image_id': 'a32b3e07-16d8-46fd-9a7b-c242c432fcf9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 20 15:06:37 compute-1 nova_compute[225855]: 2026-01-20 15:06:37.184 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:06:37 compute-1 nova_compute[225855]: 2026-01-20 15:06:37.295 225859 WARNING nova.virt.libvirt.driver [None req-3a9b1e1c-62ab-4a29-8bb2-7f8e6748dcd8 a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 15:06:37 compute-1 nova_compute[225855]: 2026-01-20 15:06:37.300 225859 DEBUG nova.virt.libvirt.host [None req-3a9b1e1c-62ab-4a29-8bb2-7f8e6748dcd8 a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 20 15:06:37 compute-1 nova_compute[225855]: 2026-01-20 15:06:37.301 225859 DEBUG nova.virt.libvirt.host [None req-3a9b1e1c-62ab-4a29-8bb2-7f8e6748dcd8 a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 20 15:06:37 compute-1 nova_compute[225855]: 2026-01-20 15:06:37.303 225859 DEBUG nova.virt.libvirt.host [None req-3a9b1e1c-62ab-4a29-8bb2-7f8e6748dcd8 a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 20 15:06:37 compute-1 nova_compute[225855]: 2026-01-20 15:06:37.304 225859 DEBUG nova.virt.libvirt.host [None req-3a9b1e1c-62ab-4a29-8bb2-7f8e6748dcd8 a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 20 15:06:37 compute-1 nova_compute[225855]: 2026-01-20 15:06:37.305 225859 DEBUG nova.virt.libvirt.driver [None req-3a9b1e1c-62ab-4a29-8bb2-7f8e6748dcd8 a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 20 15:06:37 compute-1 nova_compute[225855]: 2026-01-20 15:06:37.305 225859 DEBUG nova.virt.hardware [None req-3a9b1e1c-62ab-4a29-8bb2-7f8e6748dcd8 a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-20T14:21:55Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='522deaab-a741-4dbb-932d-d8b13a211c33',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 20 15:06:37 compute-1 nova_compute[225855]: 2026-01-20 15:06:37.305 225859 DEBUG nova.virt.hardware [None req-3a9b1e1c-62ab-4a29-8bb2-7f8e6748dcd8 a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 20 15:06:37 compute-1 nova_compute[225855]: 2026-01-20 15:06:37.306 225859 DEBUG nova.virt.hardware [None req-3a9b1e1c-62ab-4a29-8bb2-7f8e6748dcd8 a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 20 15:06:37 compute-1 nova_compute[225855]: 2026-01-20 15:06:37.306 225859 DEBUG nova.virt.hardware [None req-3a9b1e1c-62ab-4a29-8bb2-7f8e6748dcd8 a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 20 15:06:37 compute-1 nova_compute[225855]: 2026-01-20 15:06:37.306 225859 DEBUG nova.virt.hardware [None req-3a9b1e1c-62ab-4a29-8bb2-7f8e6748dcd8 a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 20 15:06:37 compute-1 nova_compute[225855]: 2026-01-20 15:06:37.306 225859 DEBUG nova.virt.hardware [None req-3a9b1e1c-62ab-4a29-8bb2-7f8e6748dcd8 a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 20 15:06:37 compute-1 nova_compute[225855]: 2026-01-20 15:06:37.307 225859 DEBUG nova.virt.hardware [None req-3a9b1e1c-62ab-4a29-8bb2-7f8e6748dcd8 a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 20 15:06:37 compute-1 nova_compute[225855]: 2026-01-20 15:06:37.307 225859 DEBUG nova.virt.hardware [None req-3a9b1e1c-62ab-4a29-8bb2-7f8e6748dcd8 a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 20 15:06:37 compute-1 nova_compute[225855]: 2026-01-20 15:06:37.307 225859 DEBUG nova.virt.hardware [None req-3a9b1e1c-62ab-4a29-8bb2-7f8e6748dcd8 a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 20 15:06:37 compute-1 nova_compute[225855]: 2026-01-20 15:06:37.307 225859 DEBUG nova.virt.hardware [None req-3a9b1e1c-62ab-4a29-8bb2-7f8e6748dcd8 a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 20 15:06:37 compute-1 nova_compute[225855]: 2026-01-20 15:06:37.307 225859 DEBUG nova.virt.hardware [None req-3a9b1e1c-62ab-4a29-8bb2-7f8e6748dcd8 a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 20 15:06:37 compute-1 nova_compute[225855]: 2026-01-20 15:06:37.310 225859 DEBUG oslo_concurrency.processutils [None req-3a9b1e1c-62ab-4a29-8bb2-7f8e6748dcd8 a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:06:37 compute-1 nova_compute[225855]: 2026-01-20 15:06:37.430 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:06:37 compute-1 ceph-mon[81775]: pgmap v2481: 321 pgs: 321 active+clean; 543 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 6.0 MiB/s rd, 2.2 MiB/s wr, 294 op/s
Jan 20 15:06:37 compute-1 sudo[292118]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:06:37 compute-1 sudo[292118]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:06:37 compute-1 sudo[292118]: pam_unix(sudo:session): session closed for user root
Jan 20 15:06:37 compute-1 sudo[292143]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:06:37 compute-1 sudo[292143]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:06:37 compute-1 sudo[292143]: pam_unix(sudo:session): session closed for user root
Jan 20 15:06:37 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 15:06:37 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3922319981' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:06:37 compute-1 nova_compute[225855]: 2026-01-20 15:06:37.790 225859 DEBUG oslo_concurrency.processutils [None req-3a9b1e1c-62ab-4a29-8bb2-7f8e6748dcd8 a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:06:37 compute-1 nova_compute[225855]: 2026-01-20 15:06:37.830 225859 DEBUG nova.storage.rbd_utils [None req-3a9b1e1c-62ab-4a29-8bb2-7f8e6748dcd8 a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] rbd image 5f8a2718-2106-431c-82c1-2609a52e7fb2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:06:37 compute-1 nova_compute[225855]: 2026-01-20 15:06:37.834 225859 DEBUG oslo_concurrency.processutils [None req-3a9b1e1c-62ab-4a29-8bb2-7f8e6748dcd8 a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:06:38 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e366 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:06:38 compute-1 nova_compute[225855]: 2026-01-20 15:06:38.198 225859 DEBUG nova.network.neutron [req-5ece68d1-de02-4889-9091-0f350535054a req-cbcc52ea-193a-4403-9426-b15d13d5da97 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 5f8a2718-2106-431c-82c1-2609a52e7fb2] Updated VIF entry in instance network info cache for port 6e7af943-7ef0-441d-a402-bd595082f98e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 15:06:38 compute-1 nova_compute[225855]: 2026-01-20 15:06:38.199 225859 DEBUG nova.network.neutron [req-5ece68d1-de02-4889-9091-0f350535054a req-cbcc52ea-193a-4403-9426-b15d13d5da97 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 5f8a2718-2106-431c-82c1-2609a52e7fb2] Updating instance_info_cache with network_info: [{"id": "6e7af943-7ef0-441d-a402-bd595082f98e", "address": "fa:16:3e:21:8b:e2", "network": {"id": "5bac39b9-563a-456f-9168-fd10b1b28c21", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-3053955-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "5e161d5a47f845fd89eb3f10627a0830", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e7af943-7e", "ovs_interfaceid": "6e7af943-7ef0-441d-a402-bd595082f98e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:06:38 compute-1 nova_compute[225855]: 2026-01-20 15:06:38.216 225859 DEBUG oslo_concurrency.lockutils [req-5ece68d1-de02-4889-9091-0f350535054a req-cbcc52ea-193a-4403-9426-b15d13d5da97 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-5f8a2718-2106-431c-82c1-2609a52e7fb2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:06:38 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:06:38 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:06:38 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:06:38.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:06:38 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 15:06:38 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4171499437' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:06:38 compute-1 nova_compute[225855]: 2026-01-20 15:06:38.266 225859 DEBUG oslo_concurrency.processutils [None req-3a9b1e1c-62ab-4a29-8bb2-7f8e6748dcd8 a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:06:38 compute-1 nova_compute[225855]: 2026-01-20 15:06:38.267 225859 DEBUG nova.virt.libvirt.vif [None req-3a9b1e1c-62ab-4a29-8bb2-7f8e6748dcd8 a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T15:06:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueTestJSON-server-1315322326',display_name='tempest-ServerRescueTestJSON-server-1315322326',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverrescuetestjson-server-1315322326',id=165,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5e161d5a47f845fd89eb3f10627a0830',ramdisk_id='',reservation_id='r-g1s28307',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueTestJSON-1151598672',owner_user_name='tempest-ServerRescueTestJSON-1151
598672-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T15:06:32Z,user_data=None,user_id='a2beb3d6247e457abd6e8d93cc602f02',uuid=5f8a2718-2106-431c-82c1-2609a52e7fb2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6e7af943-7ef0-441d-a402-bd595082f98e", "address": "fa:16:3e:21:8b:e2", "network": {"id": "5bac39b9-563a-456f-9168-fd10b1b28c21", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-3053955-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "5e161d5a47f845fd89eb3f10627a0830", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e7af943-7e", "ovs_interfaceid": "6e7af943-7ef0-441d-a402-bd595082f98e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 20 15:06:38 compute-1 nova_compute[225855]: 2026-01-20 15:06:38.268 225859 DEBUG nova.network.os_vif_util [None req-3a9b1e1c-62ab-4a29-8bb2-7f8e6748dcd8 a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] Converting VIF {"id": "6e7af943-7ef0-441d-a402-bd595082f98e", "address": "fa:16:3e:21:8b:e2", "network": {"id": "5bac39b9-563a-456f-9168-fd10b1b28c21", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-3053955-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "5e161d5a47f845fd89eb3f10627a0830", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e7af943-7e", "ovs_interfaceid": "6e7af943-7ef0-441d-a402-bd595082f98e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 15:06:38 compute-1 nova_compute[225855]: 2026-01-20 15:06:38.269 225859 DEBUG nova.network.os_vif_util [None req-3a9b1e1c-62ab-4a29-8bb2-7f8e6748dcd8 a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:21:8b:e2,bridge_name='br-int',has_traffic_filtering=True,id=6e7af943-7ef0-441d-a402-bd595082f98e,network=Network(5bac39b9-563a-456f-9168-fd10b1b28c21),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6e7af943-7e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 15:06:38 compute-1 nova_compute[225855]: 2026-01-20 15:06:38.270 225859 DEBUG nova.objects.instance [None req-3a9b1e1c-62ab-4a29-8bb2-7f8e6748dcd8 a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] Lazy-loading 'pci_devices' on Instance uuid 5f8a2718-2106-431c-82c1-2609a52e7fb2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:06:38 compute-1 nova_compute[225855]: 2026-01-20 15:06:38.285 225859 DEBUG nova.virt.libvirt.driver [None req-3a9b1e1c-62ab-4a29-8bb2-7f8e6748dcd8 a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] [instance: 5f8a2718-2106-431c-82c1-2609a52e7fb2] End _get_guest_xml xml=<domain type="kvm">
Jan 20 15:06:38 compute-1 nova_compute[225855]:   <uuid>5f8a2718-2106-431c-82c1-2609a52e7fb2</uuid>
Jan 20 15:06:38 compute-1 nova_compute[225855]:   <name>instance-000000a5</name>
Jan 20 15:06:38 compute-1 nova_compute[225855]:   <memory>131072</memory>
Jan 20 15:06:38 compute-1 nova_compute[225855]:   <vcpu>1</vcpu>
Jan 20 15:06:38 compute-1 nova_compute[225855]:   <metadata>
Jan 20 15:06:38 compute-1 nova_compute[225855]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 15:06:38 compute-1 nova_compute[225855]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 15:06:38 compute-1 nova_compute[225855]:       <nova:name>tempest-ServerRescueTestJSON-server-1315322326</nova:name>
Jan 20 15:06:38 compute-1 nova_compute[225855]:       <nova:creationTime>2026-01-20 15:06:37</nova:creationTime>
Jan 20 15:06:38 compute-1 nova_compute[225855]:       <nova:flavor name="m1.nano">
Jan 20 15:06:38 compute-1 nova_compute[225855]:         <nova:memory>128</nova:memory>
Jan 20 15:06:38 compute-1 nova_compute[225855]:         <nova:disk>1</nova:disk>
Jan 20 15:06:38 compute-1 nova_compute[225855]:         <nova:swap>0</nova:swap>
Jan 20 15:06:38 compute-1 nova_compute[225855]:         <nova:ephemeral>0</nova:ephemeral>
Jan 20 15:06:38 compute-1 nova_compute[225855]:         <nova:vcpus>1</nova:vcpus>
Jan 20 15:06:38 compute-1 nova_compute[225855]:       </nova:flavor>
Jan 20 15:06:38 compute-1 nova_compute[225855]:       <nova:owner>
Jan 20 15:06:38 compute-1 nova_compute[225855]:         <nova:user uuid="a2beb3d6247e457abd6e8d93cc602f02">tempest-ServerRescueTestJSON-1151598672-project-member</nova:user>
Jan 20 15:06:38 compute-1 nova_compute[225855]:         <nova:project uuid="5e161d5a47f845fd89eb3f10627a0830">tempest-ServerRescueTestJSON-1151598672</nova:project>
Jan 20 15:06:38 compute-1 nova_compute[225855]:       </nova:owner>
Jan 20 15:06:38 compute-1 nova_compute[225855]:       <nova:root type="image" uuid="a32b3e07-16d8-46fd-9a7b-c242c432fcf9"/>
Jan 20 15:06:38 compute-1 nova_compute[225855]:       <nova:ports>
Jan 20 15:06:38 compute-1 nova_compute[225855]:         <nova:port uuid="6e7af943-7ef0-441d-a402-bd595082f98e">
Jan 20 15:06:38 compute-1 nova_compute[225855]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Jan 20 15:06:38 compute-1 nova_compute[225855]:         </nova:port>
Jan 20 15:06:38 compute-1 nova_compute[225855]:       </nova:ports>
Jan 20 15:06:38 compute-1 nova_compute[225855]:     </nova:instance>
Jan 20 15:06:38 compute-1 nova_compute[225855]:   </metadata>
Jan 20 15:06:38 compute-1 nova_compute[225855]:   <sysinfo type="smbios">
Jan 20 15:06:38 compute-1 nova_compute[225855]:     <system>
Jan 20 15:06:38 compute-1 nova_compute[225855]:       <entry name="manufacturer">RDO</entry>
Jan 20 15:06:38 compute-1 nova_compute[225855]:       <entry name="product">OpenStack Compute</entry>
Jan 20 15:06:38 compute-1 nova_compute[225855]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 15:06:38 compute-1 nova_compute[225855]:       <entry name="serial">5f8a2718-2106-431c-82c1-2609a52e7fb2</entry>
Jan 20 15:06:38 compute-1 nova_compute[225855]:       <entry name="uuid">5f8a2718-2106-431c-82c1-2609a52e7fb2</entry>
Jan 20 15:06:38 compute-1 nova_compute[225855]:       <entry name="family">Virtual Machine</entry>
Jan 20 15:06:38 compute-1 nova_compute[225855]:     </system>
Jan 20 15:06:38 compute-1 nova_compute[225855]:   </sysinfo>
Jan 20 15:06:38 compute-1 nova_compute[225855]:   <os>
Jan 20 15:06:38 compute-1 nova_compute[225855]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 20 15:06:38 compute-1 nova_compute[225855]:     <boot dev="hd"/>
Jan 20 15:06:38 compute-1 nova_compute[225855]:     <smbios mode="sysinfo"/>
Jan 20 15:06:38 compute-1 nova_compute[225855]:   </os>
Jan 20 15:06:38 compute-1 nova_compute[225855]:   <features>
Jan 20 15:06:38 compute-1 nova_compute[225855]:     <acpi/>
Jan 20 15:06:38 compute-1 nova_compute[225855]:     <apic/>
Jan 20 15:06:38 compute-1 nova_compute[225855]:     <vmcoreinfo/>
Jan 20 15:06:38 compute-1 nova_compute[225855]:   </features>
Jan 20 15:06:38 compute-1 nova_compute[225855]:   <clock offset="utc">
Jan 20 15:06:38 compute-1 nova_compute[225855]:     <timer name="pit" tickpolicy="delay"/>
Jan 20 15:06:38 compute-1 nova_compute[225855]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 20 15:06:38 compute-1 nova_compute[225855]:     <timer name="hpet" present="no"/>
Jan 20 15:06:38 compute-1 nova_compute[225855]:   </clock>
Jan 20 15:06:38 compute-1 nova_compute[225855]:   <cpu mode="custom" match="exact">
Jan 20 15:06:38 compute-1 nova_compute[225855]:     <model>Nehalem</model>
Jan 20 15:06:38 compute-1 nova_compute[225855]:     <topology sockets="1" cores="1" threads="1"/>
Jan 20 15:06:38 compute-1 nova_compute[225855]:   </cpu>
Jan 20 15:06:38 compute-1 nova_compute[225855]:   <devices>
Jan 20 15:06:38 compute-1 nova_compute[225855]:     <disk type="network" device="disk">
Jan 20 15:06:38 compute-1 nova_compute[225855]:       <driver type="raw" cache="none"/>
Jan 20 15:06:38 compute-1 nova_compute[225855]:       <source protocol="rbd" name="vms/5f8a2718-2106-431c-82c1-2609a52e7fb2_disk">
Jan 20 15:06:38 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 15:06:38 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 15:06:38 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 15:06:38 compute-1 nova_compute[225855]:       </source>
Jan 20 15:06:38 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 15:06:38 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 15:06:38 compute-1 nova_compute[225855]:       </auth>
Jan 20 15:06:38 compute-1 nova_compute[225855]:       <target dev="vda" bus="virtio"/>
Jan 20 15:06:38 compute-1 nova_compute[225855]:     </disk>
Jan 20 15:06:38 compute-1 nova_compute[225855]:     <disk type="network" device="cdrom">
Jan 20 15:06:38 compute-1 nova_compute[225855]:       <driver type="raw" cache="none"/>
Jan 20 15:06:38 compute-1 nova_compute[225855]:       <source protocol="rbd" name="vms/5f8a2718-2106-431c-82c1-2609a52e7fb2_disk.config">
Jan 20 15:06:38 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 15:06:38 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 15:06:38 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 15:06:38 compute-1 nova_compute[225855]:       </source>
Jan 20 15:06:38 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 15:06:38 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 15:06:38 compute-1 nova_compute[225855]:       </auth>
Jan 20 15:06:38 compute-1 nova_compute[225855]:       <target dev="sda" bus="sata"/>
Jan 20 15:06:38 compute-1 nova_compute[225855]:     </disk>
Jan 20 15:06:38 compute-1 nova_compute[225855]:     <interface type="ethernet">
Jan 20 15:06:38 compute-1 nova_compute[225855]:       <mac address="fa:16:3e:21:8b:e2"/>
Jan 20 15:06:38 compute-1 nova_compute[225855]:       <model type="virtio"/>
Jan 20 15:06:38 compute-1 nova_compute[225855]:       <driver name="vhost" rx_queue_size="512"/>
Jan 20 15:06:38 compute-1 nova_compute[225855]:       <mtu size="1442"/>
Jan 20 15:06:38 compute-1 nova_compute[225855]:       <target dev="tap6e7af943-7e"/>
Jan 20 15:06:38 compute-1 nova_compute[225855]:     </interface>
Jan 20 15:06:38 compute-1 nova_compute[225855]:     <serial type="pty">
Jan 20 15:06:38 compute-1 nova_compute[225855]:       <log file="/var/lib/nova/instances/5f8a2718-2106-431c-82c1-2609a52e7fb2/console.log" append="off"/>
Jan 20 15:06:38 compute-1 nova_compute[225855]:     </serial>
Jan 20 15:06:38 compute-1 nova_compute[225855]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 20 15:06:38 compute-1 nova_compute[225855]:     <video>
Jan 20 15:06:38 compute-1 nova_compute[225855]:       <model type="virtio"/>
Jan 20 15:06:38 compute-1 nova_compute[225855]:     </video>
Jan 20 15:06:38 compute-1 nova_compute[225855]:     <input type="tablet" bus="usb"/>
Jan 20 15:06:38 compute-1 nova_compute[225855]:     <rng model="virtio">
Jan 20 15:06:38 compute-1 nova_compute[225855]:       <backend model="random">/dev/urandom</backend>
Jan 20 15:06:38 compute-1 nova_compute[225855]:     </rng>
Jan 20 15:06:38 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root"/>
Jan 20 15:06:38 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:06:38 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:06:38 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:06:38 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:06:38 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:06:38 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:06:38 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:06:38 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:06:38 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:06:38 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:06:38 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:06:38 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:06:38 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:06:38 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:06:38 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:06:38 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:06:38 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:06:38 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:06:38 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:06:38 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:06:38 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:06:38 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:06:38 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:06:38 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:06:38 compute-1 nova_compute[225855]:     <controller type="usb" index="0"/>
Jan 20 15:06:38 compute-1 nova_compute[225855]:     <memballoon model="virtio">
Jan 20 15:06:38 compute-1 nova_compute[225855]:       <stats period="10"/>
Jan 20 15:06:38 compute-1 nova_compute[225855]:     </memballoon>
Jan 20 15:06:38 compute-1 nova_compute[225855]:   </devices>
Jan 20 15:06:38 compute-1 nova_compute[225855]: </domain>
Jan 20 15:06:38 compute-1 nova_compute[225855]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 20 15:06:38 compute-1 nova_compute[225855]: 2026-01-20 15:06:38.291 225859 DEBUG nova.compute.manager [None req-3a9b1e1c-62ab-4a29-8bb2-7f8e6748dcd8 a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] [instance: 5f8a2718-2106-431c-82c1-2609a52e7fb2] Preparing to wait for external event network-vif-plugged-6e7af943-7ef0-441d-a402-bd595082f98e prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 20 15:06:38 compute-1 nova_compute[225855]: 2026-01-20 15:06:38.292 225859 DEBUG oslo_concurrency.lockutils [None req-3a9b1e1c-62ab-4a29-8bb2-7f8e6748dcd8 a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] Acquiring lock "5f8a2718-2106-431c-82c1-2609a52e7fb2-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:06:38 compute-1 nova_compute[225855]: 2026-01-20 15:06:38.292 225859 DEBUG oslo_concurrency.lockutils [None req-3a9b1e1c-62ab-4a29-8bb2-7f8e6748dcd8 a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] Lock "5f8a2718-2106-431c-82c1-2609a52e7fb2-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:06:38 compute-1 nova_compute[225855]: 2026-01-20 15:06:38.292 225859 DEBUG oslo_concurrency.lockutils [None req-3a9b1e1c-62ab-4a29-8bb2-7f8e6748dcd8 a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] Lock "5f8a2718-2106-431c-82c1-2609a52e7fb2-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:06:38 compute-1 nova_compute[225855]: 2026-01-20 15:06:38.293 225859 DEBUG nova.virt.libvirt.vif [None req-3a9b1e1c-62ab-4a29-8bb2-7f8e6748dcd8 a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T15:06:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueTestJSON-server-1315322326',display_name='tempest-ServerRescueTestJSON-server-1315322326',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverrescuetestjson-server-1315322326',id=165,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5e161d5a47f845fd89eb3f10627a0830',ramdisk_id='',reservation_id='r-g1s28307',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueTestJSON-1151598672',owner_user_name='tempest-ServerRescueTestJSON-1151598672-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T15:06:32Z,user_data=None,user_id='a2beb3d6247e457abd6e8d93cc602f02',uuid=5f8a2718-2106-431c-82c1-2609a52e7fb2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6e7af943-7ef0-441d-a402-bd595082f98e", "address": "fa:16:3e:21:8b:e2", "network": {"id": "5bac39b9-563a-456f-9168-fd10b1b28c21", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-3053955-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "5e161d5a47f845fd89eb3f10627a0830", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e7af943-7e", "ovs_interfaceid": "6e7af943-7ef0-441d-a402-bd595082f98e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 20 15:06:38 compute-1 nova_compute[225855]: 2026-01-20 15:06:38.293 225859 DEBUG nova.network.os_vif_util [None req-3a9b1e1c-62ab-4a29-8bb2-7f8e6748dcd8 a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] Converting VIF {"id": "6e7af943-7ef0-441d-a402-bd595082f98e", "address": "fa:16:3e:21:8b:e2", "network": {"id": "5bac39b9-563a-456f-9168-fd10b1b28c21", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-3053955-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "5e161d5a47f845fd89eb3f10627a0830", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e7af943-7e", "ovs_interfaceid": "6e7af943-7ef0-441d-a402-bd595082f98e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 15:06:38 compute-1 nova_compute[225855]: 2026-01-20 15:06:38.294 225859 DEBUG nova.network.os_vif_util [None req-3a9b1e1c-62ab-4a29-8bb2-7f8e6748dcd8 a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:21:8b:e2,bridge_name='br-int',has_traffic_filtering=True,id=6e7af943-7ef0-441d-a402-bd595082f98e,network=Network(5bac39b9-563a-456f-9168-fd10b1b28c21),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6e7af943-7e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 15:06:38 compute-1 nova_compute[225855]: 2026-01-20 15:06:38.295 225859 DEBUG os_vif [None req-3a9b1e1c-62ab-4a29-8bb2-7f8e6748dcd8 a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:21:8b:e2,bridge_name='br-int',has_traffic_filtering=True,id=6e7af943-7ef0-441d-a402-bd595082f98e,network=Network(5bac39b9-563a-456f-9168-fd10b1b28c21),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6e7af943-7e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 20 15:06:38 compute-1 nova_compute[225855]: 2026-01-20 15:06:38.296 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:06:38 compute-1 nova_compute[225855]: 2026-01-20 15:06:38.296 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:06:38 compute-1 nova_compute[225855]: 2026-01-20 15:06:38.297 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 15:06:38 compute-1 nova_compute[225855]: 2026-01-20 15:06:38.299 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:06:38 compute-1 nova_compute[225855]: 2026-01-20 15:06:38.299 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6e7af943-7e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:06:38 compute-1 nova_compute[225855]: 2026-01-20 15:06:38.300 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap6e7af943-7e, col_values=(('external_ids', {'iface-id': '6e7af943-7ef0-441d-a402-bd595082f98e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:21:8b:e2', 'vm-uuid': '5f8a2718-2106-431c-82c1-2609a52e7fb2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:06:38 compute-1 nova_compute[225855]: 2026-01-20 15:06:38.301 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:06:38 compute-1 NetworkManager[49104]: <info>  [1768921598.3021] manager: (tap6e7af943-7e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/295)
Jan 20 15:06:38 compute-1 nova_compute[225855]: 2026-01-20 15:06:38.305 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 15:06:38 compute-1 nova_compute[225855]: 2026-01-20 15:06:38.308 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:06:38 compute-1 nova_compute[225855]: 2026-01-20 15:06:38.309 225859 INFO os_vif [None req-3a9b1e1c-62ab-4a29-8bb2-7f8e6748dcd8 a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:21:8b:e2,bridge_name='br-int',has_traffic_filtering=True,id=6e7af943-7ef0-441d-a402-bd595082f98e,network=Network(5bac39b9-563a-456f-9168-fd10b1b28c21),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6e7af943-7e')
Jan 20 15:06:38 compute-1 nova_compute[225855]: 2026-01-20 15:06:38.390 225859 DEBUG nova.virt.libvirt.driver [None req-3a9b1e1c-62ab-4a29-8bb2-7f8e6748dcd8 a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 15:06:38 compute-1 nova_compute[225855]: 2026-01-20 15:06:38.390 225859 DEBUG nova.virt.libvirt.driver [None req-3a9b1e1c-62ab-4a29-8bb2-7f8e6748dcd8 a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 15:06:38 compute-1 nova_compute[225855]: 2026-01-20 15:06:38.391 225859 DEBUG nova.virt.libvirt.driver [None req-3a9b1e1c-62ab-4a29-8bb2-7f8e6748dcd8 a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] No VIF found with MAC fa:16:3e:21:8b:e2, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 20 15:06:38 compute-1 nova_compute[225855]: 2026-01-20 15:06:38.391 225859 INFO nova.virt.libvirt.driver [None req-3a9b1e1c-62ab-4a29-8bb2-7f8e6748dcd8 a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] [instance: 5f8a2718-2106-431c-82c1-2609a52e7fb2] Using config drive
Jan 20 15:06:38 compute-1 nova_compute[225855]: 2026-01-20 15:06:38.418 225859 DEBUG nova.storage.rbd_utils [None req-3a9b1e1c-62ab-4a29-8bb2-7f8e6748dcd8 a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] rbd image 5f8a2718-2106-431c-82c1-2609a52e7fb2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:06:38 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/3922319981' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:06:38 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/4171499437' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:06:38 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:06:38 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:06:38 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:06:38.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:06:38 compute-1 nova_compute[225855]: 2026-01-20 15:06:38.890 225859 INFO nova.virt.libvirt.driver [None req-3a9b1e1c-62ab-4a29-8bb2-7f8e6748dcd8 a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] [instance: 5f8a2718-2106-431c-82c1-2609a52e7fb2] Creating config drive at /var/lib/nova/instances/5f8a2718-2106-431c-82c1-2609a52e7fb2/disk.config
Jan 20 15:06:38 compute-1 nova_compute[225855]: 2026-01-20 15:06:38.896 225859 DEBUG oslo_concurrency.processutils [None req-3a9b1e1c-62ab-4a29-8bb2-7f8e6748dcd8 a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/5f8a2718-2106-431c-82c1-2609a52e7fb2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6dfgh5ii execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:06:39 compute-1 nova_compute[225855]: 2026-01-20 15:06:39.028 225859 DEBUG oslo_concurrency.processutils [None req-3a9b1e1c-62ab-4a29-8bb2-7f8e6748dcd8 a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/5f8a2718-2106-431c-82c1-2609a52e7fb2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6dfgh5ii" returned: 0 in 0.132s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:06:39 compute-1 nova_compute[225855]: 2026-01-20 15:06:39.058 225859 DEBUG nova.storage.rbd_utils [None req-3a9b1e1c-62ab-4a29-8bb2-7f8e6748dcd8 a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] rbd image 5f8a2718-2106-431c-82c1-2609a52e7fb2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:06:39 compute-1 nova_compute[225855]: 2026-01-20 15:06:39.061 225859 DEBUG oslo_concurrency.processutils [None req-3a9b1e1c-62ab-4a29-8bb2-7f8e6748dcd8 a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/5f8a2718-2106-431c-82c1-2609a52e7fb2/disk.config 5f8a2718-2106-431c-82c1-2609a52e7fb2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:06:39 compute-1 nova_compute[225855]: 2026-01-20 15:06:39.222 225859 DEBUG oslo_concurrency.processutils [None req-3a9b1e1c-62ab-4a29-8bb2-7f8e6748dcd8 a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/5f8a2718-2106-431c-82c1-2609a52e7fb2/disk.config 5f8a2718-2106-431c-82c1-2609a52e7fb2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.161s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:06:39 compute-1 nova_compute[225855]: 2026-01-20 15:06:39.223 225859 INFO nova.virt.libvirt.driver [None req-3a9b1e1c-62ab-4a29-8bb2-7f8e6748dcd8 a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] [instance: 5f8a2718-2106-431c-82c1-2609a52e7fb2] Deleting local config drive /var/lib/nova/instances/5f8a2718-2106-431c-82c1-2609a52e7fb2/disk.config because it was imported into RBD.
Jan 20 15:06:39 compute-1 NetworkManager[49104]: <info>  [1768921599.2700] manager: (tap6e7af943-7e): new Tun device (/org/freedesktop/NetworkManager/Devices/296)
Jan 20 15:06:39 compute-1 kernel: tap6e7af943-7e: entered promiscuous mode
Jan 20 15:06:39 compute-1 nova_compute[225855]: 2026-01-20 15:06:39.274 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:06:39 compute-1 ovn_controller[130490]: 2026-01-20T15:06:39Z|00695|binding|INFO|Claiming lport 6e7af943-7ef0-441d-a402-bd595082f98e for this chassis.
Jan 20 15:06:39 compute-1 ovn_controller[130490]: 2026-01-20T15:06:39Z|00696|binding|INFO|6e7af943-7ef0-441d-a402-bd595082f98e: Claiming fa:16:3e:21:8b:e2 10.100.0.13
Jan 20 15:06:39 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:06:39.291 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:21:8b:e2 10.100.0.13'], port_security=['fa:16:3e:21:8b:e2 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '5f8a2718-2106-431c-82c1-2609a52e7fb2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5bac39b9-563a-456f-9168-fd10b1b28c21', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5e161d5a47f845fd89eb3f10627a0830', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'cd72b979-cfcf-4dbd-bbff-8e22cd1b4096', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c853729a-de72-4ddb-be59-bc41e08984ce, chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], logical_port=6e7af943-7ef0-441d-a402-bd595082f98e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:06:39 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:06:39.294 140354 INFO neutron.agent.ovn.metadata.agent [-] Port 6e7af943-7ef0-441d-a402-bd595082f98e in datapath 5bac39b9-563a-456f-9168-fd10b1b28c21 bound to our chassis
Jan 20 15:06:39 compute-1 ovn_controller[130490]: 2026-01-20T15:06:39Z|00697|binding|INFO|Setting lport 6e7af943-7ef0-441d-a402-bd595082f98e ovn-installed in OVS
Jan 20 15:06:39 compute-1 ovn_controller[130490]: 2026-01-20T15:06:39Z|00698|binding|INFO|Setting lport 6e7af943-7ef0-441d-a402-bd595082f98e up in Southbound
Jan 20 15:06:39 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:06:39.297 140354 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 5bac39b9-563a-456f-9168-fd10b1b28c21 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Jan 20 15:06:39 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:06:39.298 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[68f66ed2-e82b-4f40-aa74-7058e851b5d9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:06:39 compute-1 nova_compute[225855]: 2026-01-20 15:06:39.299 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:06:39 compute-1 nova_compute[225855]: 2026-01-20 15:06:39.304 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:06:39 compute-1 systemd-udevd[292284]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 15:06:39 compute-1 systemd-machined[194361]: New machine qemu-82-instance-000000a5.
Jan 20 15:06:39 compute-1 NetworkManager[49104]: <info>  [1768921599.3228] device (tap6e7af943-7e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 20 15:06:39 compute-1 NetworkManager[49104]: <info>  [1768921599.3237] device (tap6e7af943-7e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 20 15:06:39 compute-1 systemd[1]: Started Virtual Machine qemu-82-instance-000000a5.
Jan 20 15:06:39 compute-1 ceph-mon[81775]: pgmap v2482: 321 pgs: 321 active+clean; 543 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 2.2 MiB/s wr, 180 op/s
Jan 20 15:06:39 compute-1 nova_compute[225855]: 2026-01-20 15:06:39.884 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768921599.88368, 5f8a2718-2106-431c-82c1-2609a52e7fb2 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:06:39 compute-1 nova_compute[225855]: 2026-01-20 15:06:39.885 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 5f8a2718-2106-431c-82c1-2609a52e7fb2] VM Started (Lifecycle Event)
Jan 20 15:06:39 compute-1 nova_compute[225855]: 2026-01-20 15:06:39.906 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 5f8a2718-2106-431c-82c1-2609a52e7fb2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:06:39 compute-1 nova_compute[225855]: 2026-01-20 15:06:39.909 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768921599.8844101, 5f8a2718-2106-431c-82c1-2609a52e7fb2 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:06:39 compute-1 nova_compute[225855]: 2026-01-20 15:06:39.909 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 5f8a2718-2106-431c-82c1-2609a52e7fb2] VM Paused (Lifecycle Event)
Jan 20 15:06:39 compute-1 nova_compute[225855]: 2026-01-20 15:06:39.929 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 5f8a2718-2106-431c-82c1-2609a52e7fb2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:06:39 compute-1 nova_compute[225855]: 2026-01-20 15:06:39.932 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 5f8a2718-2106-431c-82c1-2609a52e7fb2] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 15:06:39 compute-1 nova_compute[225855]: 2026-01-20 15:06:39.965 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 5f8a2718-2106-431c-82c1-2609a52e7fb2] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 15:06:40 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:06:40 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:06:40 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:06:40.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:06:40 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:06:40 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:06:40 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:06:40.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:06:40 compute-1 nova_compute[225855]: 2026-01-20 15:06:40.820 225859 DEBUG nova.compute.manager [req-0eb15549-94bb-4129-9232-93f13bc5e44f req-c256054e-d834-4bba-b003-3b8fa5cbe07f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 5f8a2718-2106-431c-82c1-2609a52e7fb2] Received event network-vif-plugged-6e7af943-7ef0-441d-a402-bd595082f98e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:06:40 compute-1 nova_compute[225855]: 2026-01-20 15:06:40.820 225859 DEBUG oslo_concurrency.lockutils [req-0eb15549-94bb-4129-9232-93f13bc5e44f req-c256054e-d834-4bba-b003-3b8fa5cbe07f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "5f8a2718-2106-431c-82c1-2609a52e7fb2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:06:40 compute-1 nova_compute[225855]: 2026-01-20 15:06:40.821 225859 DEBUG oslo_concurrency.lockutils [req-0eb15549-94bb-4129-9232-93f13bc5e44f req-c256054e-d834-4bba-b003-3b8fa5cbe07f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "5f8a2718-2106-431c-82c1-2609a52e7fb2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:06:40 compute-1 nova_compute[225855]: 2026-01-20 15:06:40.821 225859 DEBUG oslo_concurrency.lockutils [req-0eb15549-94bb-4129-9232-93f13bc5e44f req-c256054e-d834-4bba-b003-3b8fa5cbe07f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "5f8a2718-2106-431c-82c1-2609a52e7fb2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:06:40 compute-1 nova_compute[225855]: 2026-01-20 15:06:40.821 225859 DEBUG nova.compute.manager [req-0eb15549-94bb-4129-9232-93f13bc5e44f req-c256054e-d834-4bba-b003-3b8fa5cbe07f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 5f8a2718-2106-431c-82c1-2609a52e7fb2] Processing event network-vif-plugged-6e7af943-7ef0-441d-a402-bd595082f98e _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 20 15:06:40 compute-1 nova_compute[225855]: 2026-01-20 15:06:40.822 225859 DEBUG nova.compute.manager [None req-3a9b1e1c-62ab-4a29-8bb2-7f8e6748dcd8 a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] [instance: 5f8a2718-2106-431c-82c1-2609a52e7fb2] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 20 15:06:40 compute-1 nova_compute[225855]: 2026-01-20 15:06:40.825 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768921600.8251643, 5f8a2718-2106-431c-82c1-2609a52e7fb2 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:06:40 compute-1 nova_compute[225855]: 2026-01-20 15:06:40.825 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 5f8a2718-2106-431c-82c1-2609a52e7fb2] VM Resumed (Lifecycle Event)
Jan 20 15:06:40 compute-1 nova_compute[225855]: 2026-01-20 15:06:40.827 225859 DEBUG nova.virt.libvirt.driver [None req-3a9b1e1c-62ab-4a29-8bb2-7f8e6748dcd8 a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] [instance: 5f8a2718-2106-431c-82c1-2609a52e7fb2] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 20 15:06:40 compute-1 nova_compute[225855]: 2026-01-20 15:06:40.831 225859 INFO nova.virt.libvirt.driver [-] [instance: 5f8a2718-2106-431c-82c1-2609a52e7fb2] Instance spawned successfully.
Jan 20 15:06:40 compute-1 nova_compute[225855]: 2026-01-20 15:06:40.832 225859 DEBUG nova.virt.libvirt.driver [None req-3a9b1e1c-62ab-4a29-8bb2-7f8e6748dcd8 a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] [instance: 5f8a2718-2106-431c-82c1-2609a52e7fb2] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 20 15:06:40 compute-1 nova_compute[225855]: 2026-01-20 15:06:40.849 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 5f8a2718-2106-431c-82c1-2609a52e7fb2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:06:40 compute-1 nova_compute[225855]: 2026-01-20 15:06:40.854 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 5f8a2718-2106-431c-82c1-2609a52e7fb2] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 15:06:40 compute-1 nova_compute[225855]: 2026-01-20 15:06:40.858 225859 DEBUG nova.virt.libvirt.driver [None req-3a9b1e1c-62ab-4a29-8bb2-7f8e6748dcd8 a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] [instance: 5f8a2718-2106-431c-82c1-2609a52e7fb2] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:06:40 compute-1 nova_compute[225855]: 2026-01-20 15:06:40.858 225859 DEBUG nova.virt.libvirt.driver [None req-3a9b1e1c-62ab-4a29-8bb2-7f8e6748dcd8 a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] [instance: 5f8a2718-2106-431c-82c1-2609a52e7fb2] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:06:40 compute-1 nova_compute[225855]: 2026-01-20 15:06:40.858 225859 DEBUG nova.virt.libvirt.driver [None req-3a9b1e1c-62ab-4a29-8bb2-7f8e6748dcd8 a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] [instance: 5f8a2718-2106-431c-82c1-2609a52e7fb2] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:06:40 compute-1 nova_compute[225855]: 2026-01-20 15:06:40.859 225859 DEBUG nova.virt.libvirt.driver [None req-3a9b1e1c-62ab-4a29-8bb2-7f8e6748dcd8 a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] [instance: 5f8a2718-2106-431c-82c1-2609a52e7fb2] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:06:40 compute-1 nova_compute[225855]: 2026-01-20 15:06:40.859 225859 DEBUG nova.virt.libvirt.driver [None req-3a9b1e1c-62ab-4a29-8bb2-7f8e6748dcd8 a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] [instance: 5f8a2718-2106-431c-82c1-2609a52e7fb2] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:06:40 compute-1 nova_compute[225855]: 2026-01-20 15:06:40.860 225859 DEBUG nova.virt.libvirt.driver [None req-3a9b1e1c-62ab-4a29-8bb2-7f8e6748dcd8 a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] [instance: 5f8a2718-2106-431c-82c1-2609a52e7fb2] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:06:40 compute-1 nova_compute[225855]: 2026-01-20 15:06:40.886 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 5f8a2718-2106-431c-82c1-2609a52e7fb2] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 15:06:40 compute-1 nova_compute[225855]: 2026-01-20 15:06:40.913 225859 INFO nova.compute.manager [None req-3a9b1e1c-62ab-4a29-8bb2-7f8e6748dcd8 a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] [instance: 5f8a2718-2106-431c-82c1-2609a52e7fb2] Took 8.43 seconds to spawn the instance on the hypervisor.
Jan 20 15:06:40 compute-1 nova_compute[225855]: 2026-01-20 15:06:40.913 225859 DEBUG nova.compute.manager [None req-3a9b1e1c-62ab-4a29-8bb2-7f8e6748dcd8 a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] [instance: 5f8a2718-2106-431c-82c1-2609a52e7fb2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:06:40 compute-1 nova_compute[225855]: 2026-01-20 15:06:40.968 225859 INFO nova.compute.manager [None req-3a9b1e1c-62ab-4a29-8bb2-7f8e6748dcd8 a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] [instance: 5f8a2718-2106-431c-82c1-2609a52e7fb2] Took 9.44 seconds to build instance.
Jan 20 15:06:40 compute-1 nova_compute[225855]: 2026-01-20 15:06:40.981 225859 DEBUG oslo_concurrency.lockutils [None req-3a9b1e1c-62ab-4a29-8bb2-7f8e6748dcd8 a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] Lock "5f8a2718-2106-431c-82c1-2609a52e7fb2" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.520s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:06:41 compute-1 ceph-mon[81775]: pgmap v2483: 321 pgs: 321 active+clean; 547 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 2.3 MiB/s wr, 221 op/s
Jan 20 15:06:42 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:06:42 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:06:42 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:06:42.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:06:42 compute-1 nova_compute[225855]: 2026-01-20 15:06:42.433 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:06:42 compute-1 nova_compute[225855]: 2026-01-20 15:06:42.639 225859 INFO nova.compute.manager [None req-66758f5d-5806-48e6-86ec-5a05118ea96c a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] [instance: 5f8a2718-2106-431c-82c1-2609a52e7fb2] Rescuing
Jan 20 15:06:42 compute-1 nova_compute[225855]: 2026-01-20 15:06:42.640 225859 DEBUG oslo_concurrency.lockutils [None req-66758f5d-5806-48e6-86ec-5a05118ea96c a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] Acquiring lock "refresh_cache-5f8a2718-2106-431c-82c1-2609a52e7fb2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:06:42 compute-1 nova_compute[225855]: 2026-01-20 15:06:42.641 225859 DEBUG oslo_concurrency.lockutils [None req-66758f5d-5806-48e6-86ec-5a05118ea96c a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] Acquired lock "refresh_cache-5f8a2718-2106-431c-82c1-2609a52e7fb2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:06:42 compute-1 nova_compute[225855]: 2026-01-20 15:06:42.641 225859 DEBUG nova.network.neutron [None req-66758f5d-5806-48e6-86ec-5a05118ea96c a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] [instance: 5f8a2718-2106-431c-82c1-2609a52e7fb2] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 15:06:42 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:06:42 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:06:42 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:06:42.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:06:42 compute-1 nova_compute[225855]: 2026-01-20 15:06:42.948 225859 DEBUG nova.compute.manager [req-7f9cb29a-f1bc-4b20-8202-2125b4da7b69 req-a6170d29-7c3f-419a-97a0-81cf695a9ee9 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 5f8a2718-2106-431c-82c1-2609a52e7fb2] Received event network-vif-plugged-6e7af943-7ef0-441d-a402-bd595082f98e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:06:42 compute-1 nova_compute[225855]: 2026-01-20 15:06:42.949 225859 DEBUG oslo_concurrency.lockutils [req-7f9cb29a-f1bc-4b20-8202-2125b4da7b69 req-a6170d29-7c3f-419a-97a0-81cf695a9ee9 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "5f8a2718-2106-431c-82c1-2609a52e7fb2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:06:42 compute-1 nova_compute[225855]: 2026-01-20 15:06:42.949 225859 DEBUG oslo_concurrency.lockutils [req-7f9cb29a-f1bc-4b20-8202-2125b4da7b69 req-a6170d29-7c3f-419a-97a0-81cf695a9ee9 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "5f8a2718-2106-431c-82c1-2609a52e7fb2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:06:42 compute-1 nova_compute[225855]: 2026-01-20 15:06:42.950 225859 DEBUG oslo_concurrency.lockutils [req-7f9cb29a-f1bc-4b20-8202-2125b4da7b69 req-a6170d29-7c3f-419a-97a0-81cf695a9ee9 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "5f8a2718-2106-431c-82c1-2609a52e7fb2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:06:42 compute-1 nova_compute[225855]: 2026-01-20 15:06:42.950 225859 DEBUG nova.compute.manager [req-7f9cb29a-f1bc-4b20-8202-2125b4da7b69 req-a6170d29-7c3f-419a-97a0-81cf695a9ee9 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 5f8a2718-2106-431c-82c1-2609a52e7fb2] No waiting events found dispatching network-vif-plugged-6e7af943-7ef0-441d-a402-bd595082f98e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:06:42 compute-1 nova_compute[225855]: 2026-01-20 15:06:42.951 225859 WARNING nova.compute.manager [req-7f9cb29a-f1bc-4b20-8202-2125b4da7b69 req-a6170d29-7c3f-419a-97a0-81cf695a9ee9 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 5f8a2718-2106-431c-82c1-2609a52e7fb2] Received unexpected event network-vif-plugged-6e7af943-7ef0-441d-a402-bd595082f98e for instance with vm_state active and task_state rescuing.
Jan 20 15:06:43 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e366 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:06:43 compute-1 podman[292337]: 2026-01-20 15:06:43.098114085 +0000 UTC m=+0.123029642 container health_status 533cd8e0c3b8b7f910f15cef49e7f879ab46c12ce646a35628c15f13725eb500 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 20 15:06:43 compute-1 nova_compute[225855]: 2026-01-20 15:06:43.302 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:06:43 compute-1 ceph-mon[81775]: pgmap v2484: 321 pgs: 321 active+clean; 547 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 2.3 MiB/s wr, 221 op/s
Jan 20 15:06:43 compute-1 nova_compute[225855]: 2026-01-20 15:06:43.896 225859 DEBUG nova.network.neutron [None req-66758f5d-5806-48e6-86ec-5a05118ea96c a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] [instance: 5f8a2718-2106-431c-82c1-2609a52e7fb2] Updating instance_info_cache with network_info: [{"id": "6e7af943-7ef0-441d-a402-bd595082f98e", "address": "fa:16:3e:21:8b:e2", "network": {"id": "5bac39b9-563a-456f-9168-fd10b1b28c21", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-3053955-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "5e161d5a47f845fd89eb3f10627a0830", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e7af943-7e", "ovs_interfaceid": "6e7af943-7ef0-441d-a402-bd595082f98e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:06:43 compute-1 nova_compute[225855]: 2026-01-20 15:06:43.924 225859 DEBUG oslo_concurrency.lockutils [None req-66758f5d-5806-48e6-86ec-5a05118ea96c a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] Releasing lock "refresh_cache-5f8a2718-2106-431c-82c1-2609a52e7fb2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:06:44 compute-1 nova_compute[225855]: 2026-01-20 15:06:44.211 225859 DEBUG nova.virt.libvirt.driver [None req-66758f5d-5806-48e6-86ec-5a05118ea96c a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] [instance: 5f8a2718-2106-431c-82c1-2609a52e7fb2] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Jan 20 15:06:44 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:06:44 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:06:44 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:06:44.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:06:44 compute-1 ovn_controller[130490]: 2026-01-20T15:06:44Z|00080|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:6a:c0:d3 10.100.0.3
Jan 20 15:06:44 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:06:44 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:06:44 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:06:44.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:06:45 compute-1 ceph-mon[81775]: pgmap v2485: 321 pgs: 321 active+clean; 552 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 2.3 MiB/s wr, 243 op/s
Jan 20 15:06:46 compute-1 sudo[292359]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:06:46 compute-1 sudo[292359]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:06:46 compute-1 sudo[292359]: pam_unix(sudo:session): session closed for user root
Jan 20 15:06:46 compute-1 sudo[292384]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:06:46 compute-1 sudo[292384]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:06:46 compute-1 sudo[292384]: pam_unix(sudo:session): session closed for user root
Jan 20 15:06:46 compute-1 sudo[292409]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:06:46 compute-1 sudo[292409]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:06:46 compute-1 sudo[292409]: pam_unix(sudo:session): session closed for user root
Jan 20 15:06:46 compute-1 sudo[292434]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 20 15:06:46 compute-1 sudo[292434]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:06:46 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:06:46 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:06:46 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:06:46.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:06:46 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/3937162431' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:06:46 compute-1 sudo[292434]: pam_unix(sudo:session): session closed for user root
Jan 20 15:06:46 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:06:46 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:06:46 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:06:46.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:06:47 compute-1 nova_compute[225855]: 2026-01-20 15:06:47.437 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:06:47 compute-1 ceph-mon[81775]: pgmap v2486: 321 pgs: 321 active+clean; 552 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 6.0 MiB/s rd, 2.1 MiB/s wr, 311 op/s
Jan 20 15:06:47 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:06:47 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 15:06:47 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:06:47 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 15:06:47 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 15:06:47 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:06:48 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e366 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:06:48 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:06:48 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:06:48 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:06:48.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:06:48 compute-1 nova_compute[225855]: 2026-01-20 15:06:48.304 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:06:48 compute-1 nova_compute[225855]: 2026-01-20 15:06:48.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:06:48 compute-1 nova_compute[225855]: 2026-01-20 15:06:48.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:06:48 compute-1 nova_compute[225855]: 2026-01-20 15:06:48.339 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 15:06:48 compute-1 ceph-mon[81775]: pgmap v2487: 321 pgs: 321 active+clean; 552 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 152 KiB/s wr, 154 op/s
Jan 20 15:06:48 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:06:48 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:06:48 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:06:48.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:06:49 compute-1 ovn_controller[130490]: 2026-01-20T15:06:49Z|00699|memory|INFO|peak resident set size grew 50% in last 3527.6 seconds, from 16256 kB to 24392 kB
Jan 20 15:06:49 compute-1 ovn_controller[130490]: 2026-01-20T15:06:49Z|00700|memory|INFO|idl-cells-OVN_Southbound:11021 idl-cells-Open_vSwitch:1041 if_status_mgr_ifaces_state_usage-KB:1 if_status_mgr_ifaces_usage-KB:1 lflow-cache-entries-cache-expr:395 lflow-cache-entries-cache-matches:294 lflow-cache-size-KB:1610 local_datapath_usage-KB:3 ofctrl_desired_flow_usage-KB:678 ofctrl_installed_flow_usage-KB:496 ofctrl_sb_flow_ref_usage-KB:256
Jan 20 15:06:50 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:06:50 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:06:50 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:06:50.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:06:50 compute-1 nova_compute[225855]: 2026-01-20 15:06:50.340 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:06:50 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:06:50 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:06:50 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:06:50.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:06:51 compute-1 nova_compute[225855]: 2026-01-20 15:06:51.340 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:06:51 compute-1 nova_compute[225855]: 2026-01-20 15:06:51.340 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 15:06:51 compute-1 nova_compute[225855]: 2026-01-20 15:06:51.340 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 20 15:06:51 compute-1 ceph-mon[81775]: pgmap v2488: 321 pgs: 321 active+clean; 598 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 1.7 MiB/s wr, 183 op/s
Jan 20 15:06:52 compute-1 nova_compute[225855]: 2026-01-20 15:06:52.009 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "refresh_cache-33ba7a73-3233-40a3-a49a-e5bbd604dc3c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:06:52 compute-1 nova_compute[225855]: 2026-01-20 15:06:52.010 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquired lock "refresh_cache-33ba7a73-3233-40a3-a49a-e5bbd604dc3c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:06:52 compute-1 nova_compute[225855]: 2026-01-20 15:06:52.010 225859 DEBUG nova.network.neutron [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] [instance: 33ba7a73-3233-40a3-a49a-e5bbd604dc3c] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 20 15:06:52 compute-1 nova_compute[225855]: 2026-01-20 15:06:52.010 225859 DEBUG nova.objects.instance [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 33ba7a73-3233-40a3-a49a-e5bbd604dc3c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:06:52 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:06:52 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:06:52 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:06:52.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:06:52 compute-1 nova_compute[225855]: 2026-01-20 15:06:52.477 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:06:52 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:06:52 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:06:52 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:06:52.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:06:53 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e366 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:06:53 compute-1 nova_compute[225855]: 2026-01-20 15:06:53.173 225859 DEBUG nova.network.neutron [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] [instance: 33ba7a73-3233-40a3-a49a-e5bbd604dc3c] Updating instance_info_cache with network_info: [{"id": "070862f1-1db2-45c2-9787-752e6d88449a", "address": "fa:16:3e:e5:e7:09", "network": {"id": "8472bae1-476b-4100-b9fa-e8827bc4f7bf", "bridge": "br-int", "label": "tempest-TestStampPattern-1138931002-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.208", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e142d118583b4f9ba3531bcf3838e256", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap070862f1-1d", "ovs_interfaceid": "070862f1-1db2-45c2-9787-752e6d88449a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:06:53 compute-1 nova_compute[225855]: 2026-01-20 15:06:53.188 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Releasing lock "refresh_cache-33ba7a73-3233-40a3-a49a-e5bbd604dc3c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:06:53 compute-1 nova_compute[225855]: 2026-01-20 15:06:53.189 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] [instance: 33ba7a73-3233-40a3-a49a-e5bbd604dc3c] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 20 15:06:53 compute-1 nova_compute[225855]: 2026-01-20 15:06:53.189 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:06:53 compute-1 nova_compute[225855]: 2026-01-20 15:06:53.189 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:06:53 compute-1 nova_compute[225855]: 2026-01-20 15:06:53.306 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:06:53 compute-1 ceph-mon[81775]: pgmap v2489: 321 pgs: 321 active+clean; 604 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 2.1 MiB/s wr, 151 op/s
Jan 20 15:06:53 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/1888624225' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:06:53 compute-1 sudo[292495]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:06:53 compute-1 sudo[292495]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:06:53 compute-1 sudo[292495]: pam_unix(sudo:session): session closed for user root
Jan 20 15:06:53 compute-1 sudo[292520]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 15:06:53 compute-1 sudo[292520]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:06:53 compute-1 sudo[292520]: pam_unix(sudo:session): session closed for user root
Jan 20 15:06:54 compute-1 nova_compute[225855]: 2026-01-20 15:06:54.252 225859 DEBUG nova.virt.libvirt.driver [None req-66758f5d-5806-48e6-86ec-5a05118ea96c a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] [instance: 5f8a2718-2106-431c-82c1-2609a52e7fb2] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Jan 20 15:06:54 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:06:54 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:06:54 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:06:54.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:06:54 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:06:54 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:06:54 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/1255956302' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:06:54 compute-1 ceph-mon[81775]: pgmap v2490: 321 pgs: 321 active+clean; 604 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 2.1 MiB/s wr, 149 op/s
Jan 20 15:06:54 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:06:54 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:06:54 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:06:54.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:06:56 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/1344781968' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:06:56 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:06:56 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:06:56 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:06:56.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:06:56 compute-1 nova_compute[225855]: 2026-01-20 15:06:56.340 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:06:56 compute-1 nova_compute[225855]: 2026-01-20 15:06:56.389 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:06:56 compute-1 nova_compute[225855]: 2026-01-20 15:06:56.389 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:06:56 compute-1 nova_compute[225855]: 2026-01-20 15:06:56.389 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:06:56 compute-1 nova_compute[225855]: 2026-01-20 15:06:56.390 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 15:06:56 compute-1 nova_compute[225855]: 2026-01-20 15:06:56.390 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:06:56 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:06:56 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:06:56 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:06:56.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:06:56 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:06:56 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1577499332' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:06:56 compute-1 nova_compute[225855]: 2026-01-20 15:06:56.821 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:06:56 compute-1 nova_compute[225855]: 2026-01-20 15:06:56.903 225859 DEBUG nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] skipping disk for instance-000000a1 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 15:06:56 compute-1 nova_compute[225855]: 2026-01-20 15:06:56.903 225859 DEBUG nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] skipping disk for instance-000000a1 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 15:06:56 compute-1 nova_compute[225855]: 2026-01-20 15:06:56.906 225859 DEBUG nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] skipping disk for instance-000000a5 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 15:06:56 compute-1 nova_compute[225855]: 2026-01-20 15:06:56.906 225859 DEBUG nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] skipping disk for instance-000000a5 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 15:06:56 compute-1 nova_compute[225855]: 2026-01-20 15:06:56.909 225859 DEBUG nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] skipping disk for instance-000000a2 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 15:06:56 compute-1 nova_compute[225855]: 2026-01-20 15:06:56.909 225859 DEBUG nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] skipping disk for instance-000000a2 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 15:06:57 compute-1 nova_compute[225855]: 2026-01-20 15:06:57.078 225859 WARNING nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 15:06:57 compute-1 nova_compute[225855]: 2026-01-20 15:06:57.080 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=3747MB free_disk=20.825679779052734GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 15:06:57 compute-1 nova_compute[225855]: 2026-01-20 15:06:57.080 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:06:57 compute-1 nova_compute[225855]: 2026-01-20 15:06:57.080 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:06:57 compute-1 nova_compute[225855]: 2026-01-20 15:06:57.180 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Instance 33ba7a73-3233-40a3-a49a-e5bbd604dc3c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 20 15:06:57 compute-1 nova_compute[225855]: 2026-01-20 15:06:57.181 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Instance 5feeb9de-434b-4ec7-aa99-6da718514c6f actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 20 15:06:57 compute-1 nova_compute[225855]: 2026-01-20 15:06:57.181 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Instance 5f8a2718-2106-431c-82c1-2609a52e7fb2 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 20 15:06:57 compute-1 nova_compute[225855]: 2026-01-20 15:06:57.181 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 15:06:57 compute-1 nova_compute[225855]: 2026-01-20 15:06:57.181 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 15:06:57 compute-1 ceph-mon[81775]: pgmap v2491: 321 pgs: 321 active+clean; 620 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 3.5 MiB/s wr, 163 op/s
Jan 20 15:06:57 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/1667667141' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:06:57 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/1577499332' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:06:57 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/485878596' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:06:57 compute-1 nova_compute[225855]: 2026-01-20 15:06:57.281 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:06:57 compute-1 nova_compute[225855]: 2026-01-20 15:06:57.480 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:06:57 compute-1 sudo[292589]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:06:57 compute-1 sudo[292589]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:06:57 compute-1 sudo[292589]: pam_unix(sudo:session): session closed for user root
Jan 20 15:06:57 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:06:57 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4168235077' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:06:57 compute-1 nova_compute[225855]: 2026-01-20 15:06:57.716 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:06:57 compute-1 nova_compute[225855]: 2026-01-20 15:06:57.722 225859 DEBUG nova.compute.provider_tree [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:06:57 compute-1 sudo[292615]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:06:57 compute-1 sudo[292615]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:06:57 compute-1 sudo[292615]: pam_unix(sudo:session): session closed for user root
Jan 20 15:06:57 compute-1 nova_compute[225855]: 2026-01-20 15:06:57.740 225859 DEBUG nova.scheduler.client.report [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:06:57 compute-1 nova_compute[225855]: 2026-01-20 15:06:57.761 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 15:06:57 compute-1 nova_compute[225855]: 2026-01-20 15:06:57.761 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.681s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:06:58 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e366 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:06:58 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:06:58 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:06:58 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:06:58.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:06:58 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/3241671892' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:06:58 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/4168235077' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:06:58 compute-1 nova_compute[225855]: 2026-01-20 15:06:58.308 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:06:58 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:06:58 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:06:58 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:06:58.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:06:59 compute-1 podman[292642]: 2026-01-20 15:06:59.041017981 +0000 UTC m=+0.088924145 container health_status 72be95c12e041eb2cd1a16ad70dfa1391d9f03a5f3180a614ead9459dcfcd46f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Jan 20 15:06:59 compute-1 ceph-mon[81775]: pgmap v2492: 321 pgs: 321 active+clean; 620 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 306 KiB/s rd, 3.5 MiB/s wr, 75 op/s
Jan 20 15:06:59 compute-1 nova_compute[225855]: 2026-01-20 15:06:59.756 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:07:00 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:07:00 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:07:00 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:07:00.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:07:00 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/3667091613' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:07:00 compute-1 nova_compute[225855]: 2026-01-20 15:07:00.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:07:00 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:07:00 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:07:00 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:07:00.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:07:01 compute-1 ceph-mon[81775]: pgmap v2493: 321 pgs: 321 active+clean; 636 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 4.1 MiB/s wr, 156 op/s
Jan 20 15:07:02 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:07:02 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:07:02 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:07:02.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:07:02 compute-1 nova_compute[225855]: 2026-01-20 15:07:02.481 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:07:02 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:07:02 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:07:02 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:07:02.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:07:03 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e366 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:07:03 compute-1 nova_compute[225855]: 2026-01-20 15:07:03.309 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:07:03 compute-1 ceph-mon[81775]: pgmap v2494: 321 pgs: 321 active+clean; 637 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 2.6 MiB/s wr, 149 op/s
Jan 20 15:07:04 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:07:04 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:07:04 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:07:04.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:07:04 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:07:04 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:07:04 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:07:04.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:07:05 compute-1 nova_compute[225855]: 2026-01-20 15:07:05.293 225859 DEBUG nova.virt.libvirt.driver [None req-66758f5d-5806-48e6-86ec-5a05118ea96c a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] [instance: 5f8a2718-2106-431c-82c1-2609a52e7fb2] Instance in state 1 after 21 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Jan 20 15:07:05 compute-1 ceph-mon[81775]: pgmap v2495: 321 pgs: 321 active+clean; 637 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.2 MiB/s wr, 140 op/s
Jan 20 15:07:06 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:07:06 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:07:06 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:07:06.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:07:06 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:07:06 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:07:06 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:07:06.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:07:07 compute-1 ceph-mon[81775]: pgmap v2496: 321 pgs: 321 active+clean; 634 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 2.3 MiB/s wr, 172 op/s
Jan 20 15:07:07 compute-1 nova_compute[225855]: 2026-01-20 15:07:07.484 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:07:07 compute-1 kernel: tap6e7af943-7e (unregistering): left promiscuous mode
Jan 20 15:07:07 compute-1 NetworkManager[49104]: <info>  [1768921627.6022] device (tap6e7af943-7e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 20 15:07:07 compute-1 nova_compute[225855]: 2026-01-20 15:07:07.616 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:07:07 compute-1 ovn_controller[130490]: 2026-01-20T15:07:07Z|00701|binding|INFO|Releasing lport 6e7af943-7ef0-441d-a402-bd595082f98e from this chassis (sb_readonly=0)
Jan 20 15:07:07 compute-1 ovn_controller[130490]: 2026-01-20T15:07:07Z|00702|binding|INFO|Setting lport 6e7af943-7ef0-441d-a402-bd595082f98e down in Southbound
Jan 20 15:07:07 compute-1 ovn_controller[130490]: 2026-01-20T15:07:07Z|00703|binding|INFO|Removing iface tap6e7af943-7e ovn-installed in OVS
Jan 20 15:07:07 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:07:07.625 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:21:8b:e2 10.100.0.13'], port_security=['fa:16:3e:21:8b:e2 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '5f8a2718-2106-431c-82c1-2609a52e7fb2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5bac39b9-563a-456f-9168-fd10b1b28c21', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5e161d5a47f845fd89eb3f10627a0830', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'cd72b979-cfcf-4dbd-bbff-8e22cd1b4096', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c853729a-de72-4ddb-be59-bc41e08984ce, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], logical_port=6e7af943-7ef0-441d-a402-bd595082f98e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:07:07 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:07:07.628 140354 INFO neutron.agent.ovn.metadata.agent [-] Port 6e7af943-7ef0-441d-a402-bd595082f98e in datapath 5bac39b9-563a-456f-9168-fd10b1b28c21 unbound from our chassis
Jan 20 15:07:07 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:07:07.629 140354 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 5bac39b9-563a-456f-9168-fd10b1b28c21 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Jan 20 15:07:07 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:07:07.631 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[5650f4d5-c3b8-408d-9fc1-6d44449c7093]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:07:07 compute-1 nova_compute[225855]: 2026-01-20 15:07:07.652 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:07:07 compute-1 systemd[1]: machine-qemu\x2d82\x2dinstance\x2d000000a5.scope: Deactivated successfully.
Jan 20 15:07:07 compute-1 systemd[1]: machine-qemu\x2d82\x2dinstance\x2d000000a5.scope: Consumed 15.990s CPU time.
Jan 20 15:07:07 compute-1 systemd-machined[194361]: Machine qemu-82-instance-000000a5 terminated.
Jan 20 15:07:08 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e366 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:07:08 compute-1 nova_compute[225855]: 2026-01-20 15:07:08.220 225859 DEBUG nova.compute.manager [req-ecc9b2c8-a707-41b3-b495-d831e828ed4f req-d1e29016-1c31-4f11-bffd-385fbac493a4 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 5f8a2718-2106-431c-82c1-2609a52e7fb2] Received event network-vif-unplugged-6e7af943-7ef0-441d-a402-bd595082f98e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:07:08 compute-1 nova_compute[225855]: 2026-01-20 15:07:08.222 225859 DEBUG oslo_concurrency.lockutils [req-ecc9b2c8-a707-41b3-b495-d831e828ed4f req-d1e29016-1c31-4f11-bffd-385fbac493a4 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "5f8a2718-2106-431c-82c1-2609a52e7fb2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:07:08 compute-1 nova_compute[225855]: 2026-01-20 15:07:08.222 225859 DEBUG oslo_concurrency.lockutils [req-ecc9b2c8-a707-41b3-b495-d831e828ed4f req-d1e29016-1c31-4f11-bffd-385fbac493a4 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "5f8a2718-2106-431c-82c1-2609a52e7fb2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:07:08 compute-1 nova_compute[225855]: 2026-01-20 15:07:08.222 225859 DEBUG oslo_concurrency.lockutils [req-ecc9b2c8-a707-41b3-b495-d831e828ed4f req-d1e29016-1c31-4f11-bffd-385fbac493a4 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "5f8a2718-2106-431c-82c1-2609a52e7fb2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:07:08 compute-1 nova_compute[225855]: 2026-01-20 15:07:08.223 225859 DEBUG nova.compute.manager [req-ecc9b2c8-a707-41b3-b495-d831e828ed4f req-d1e29016-1c31-4f11-bffd-385fbac493a4 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 5f8a2718-2106-431c-82c1-2609a52e7fb2] No waiting events found dispatching network-vif-unplugged-6e7af943-7ef0-441d-a402-bd595082f98e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:07:08 compute-1 nova_compute[225855]: 2026-01-20 15:07:08.223 225859 WARNING nova.compute.manager [req-ecc9b2c8-a707-41b3-b495-d831e828ed4f req-d1e29016-1c31-4f11-bffd-385fbac493a4 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 5f8a2718-2106-431c-82c1-2609a52e7fb2] Received unexpected event network-vif-unplugged-6e7af943-7ef0-441d-a402-bd595082f98e for instance with vm_state active and task_state rescuing.
Jan 20 15:07:08 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:07:08 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:07:08 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:07:08.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:07:08 compute-1 nova_compute[225855]: 2026-01-20 15:07:08.305 225859 INFO nova.virt.libvirt.driver [None req-66758f5d-5806-48e6-86ec-5a05118ea96c a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] [instance: 5f8a2718-2106-431c-82c1-2609a52e7fb2] Instance shutdown successfully after 24 seconds.
Jan 20 15:07:08 compute-1 nova_compute[225855]: 2026-01-20 15:07:08.310 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:07:08 compute-1 nova_compute[225855]: 2026-01-20 15:07:08.311 225859 INFO nova.virt.libvirt.driver [-] [instance: 5f8a2718-2106-431c-82c1-2609a52e7fb2] Instance destroyed successfully.
Jan 20 15:07:08 compute-1 nova_compute[225855]: 2026-01-20 15:07:08.311 225859 DEBUG nova.objects.instance [None req-66758f5d-5806-48e6-86ec-5a05118ea96c a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] Lazy-loading 'numa_topology' on Instance uuid 5f8a2718-2106-431c-82c1-2609a52e7fb2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:07:08 compute-1 nova_compute[225855]: 2026-01-20 15:07:08.332 225859 INFO nova.virt.libvirt.driver [None req-66758f5d-5806-48e6-86ec-5a05118ea96c a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] [instance: 5f8a2718-2106-431c-82c1-2609a52e7fb2] Attempting rescue
Jan 20 15:07:08 compute-1 nova_compute[225855]: 2026-01-20 15:07:08.333 225859 DEBUG nova.virt.libvirt.driver [None req-66758f5d-5806-48e6-86ec-5a05118ea96c a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] rescue generated disk_info: {'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} rescue /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4314
Jan 20 15:07:08 compute-1 nova_compute[225855]: 2026-01-20 15:07:08.337 225859 DEBUG nova.virt.libvirt.driver [None req-66758f5d-5806-48e6-86ec-5a05118ea96c a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] [instance: 5f8a2718-2106-431c-82c1-2609a52e7fb2] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719
Jan 20 15:07:08 compute-1 nova_compute[225855]: 2026-01-20 15:07:08.337 225859 INFO nova.virt.libvirt.driver [None req-66758f5d-5806-48e6-86ec-5a05118ea96c a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] [instance: 5f8a2718-2106-431c-82c1-2609a52e7fb2] Creating image(s)
Jan 20 15:07:08 compute-1 nova_compute[225855]: 2026-01-20 15:07:08.361 225859 DEBUG nova.storage.rbd_utils [None req-66758f5d-5806-48e6-86ec-5a05118ea96c a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] rbd image 5f8a2718-2106-431c-82c1-2609a52e7fb2_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:07:08 compute-1 nova_compute[225855]: 2026-01-20 15:07:08.365 225859 DEBUG nova.objects.instance [None req-66758f5d-5806-48e6-86ec-5a05118ea96c a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 5f8a2718-2106-431c-82c1-2609a52e7fb2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:07:08 compute-1 nova_compute[225855]: 2026-01-20 15:07:08.401 225859 DEBUG nova.storage.rbd_utils [None req-66758f5d-5806-48e6-86ec-5a05118ea96c a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] rbd image 5f8a2718-2106-431c-82c1-2609a52e7fb2_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:07:08 compute-1 nova_compute[225855]: 2026-01-20 15:07:08.425 225859 DEBUG nova.storage.rbd_utils [None req-66758f5d-5806-48e6-86ec-5a05118ea96c a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] rbd image 5f8a2718-2106-431c-82c1-2609a52e7fb2_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:07:08 compute-1 nova_compute[225855]: 2026-01-20 15:07:08.429 225859 DEBUG oslo_concurrency.processutils [None req-66758f5d-5806-48e6-86ec-5a05118ea96c a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:07:08 compute-1 nova_compute[225855]: 2026-01-20 15:07:08.491 225859 DEBUG oslo_concurrency.processutils [None req-66758f5d-5806-48e6-86ec-5a05118ea96c a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:07:08 compute-1 nova_compute[225855]: 2026-01-20 15:07:08.492 225859 DEBUG oslo_concurrency.lockutils [None req-66758f5d-5806-48e6-86ec-5a05118ea96c a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] Acquiring lock "82d5c1918fd7c974214c7a48c1793a7a82560462" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:07:08 compute-1 nova_compute[225855]: 2026-01-20 15:07:08.493 225859 DEBUG oslo_concurrency.lockutils [None req-66758f5d-5806-48e6-86ec-5a05118ea96c a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:07:08 compute-1 nova_compute[225855]: 2026-01-20 15:07:08.493 225859 DEBUG oslo_concurrency.lockutils [None req-66758f5d-5806-48e6-86ec-5a05118ea96c a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:07:08 compute-1 nova_compute[225855]: 2026-01-20 15:07:08.517 225859 DEBUG nova.storage.rbd_utils [None req-66758f5d-5806-48e6-86ec-5a05118ea96c a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] rbd image 5f8a2718-2106-431c-82c1-2609a52e7fb2_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:07:08 compute-1 nova_compute[225855]: 2026-01-20 15:07:08.521 225859 DEBUG oslo_concurrency.processutils [None req-66758f5d-5806-48e6-86ec-5a05118ea96c a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 5f8a2718-2106-431c-82c1-2609a52e7fb2_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:07:08 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:07:08 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:07:08 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:07:08.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:07:08 compute-1 nova_compute[225855]: 2026-01-20 15:07:08.853 225859 DEBUG oslo_concurrency.processutils [None req-66758f5d-5806-48e6-86ec-5a05118ea96c a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 5f8a2718-2106-431c-82c1-2609a52e7fb2_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.332s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:07:08 compute-1 nova_compute[225855]: 2026-01-20 15:07:08.854 225859 DEBUG nova.objects.instance [None req-66758f5d-5806-48e6-86ec-5a05118ea96c a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] Lazy-loading 'migration_context' on Instance uuid 5f8a2718-2106-431c-82c1-2609a52e7fb2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:07:08 compute-1 nova_compute[225855]: 2026-01-20 15:07:08.875 225859 DEBUG nova.virt.libvirt.driver [None req-66758f5d-5806-48e6-86ec-5a05118ea96c a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] [instance: 5f8a2718-2106-431c-82c1-2609a52e7fb2] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 20 15:07:08 compute-1 nova_compute[225855]: 2026-01-20 15:07:08.876 225859 DEBUG nova.virt.libvirt.driver [None req-66758f5d-5806-48e6-86ec-5a05118ea96c a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] [instance: 5f8a2718-2106-431c-82c1-2609a52e7fb2] Start _get_guest_xml network_info=[{"id": "6e7af943-7ef0-441d-a402-bd595082f98e", "address": "fa:16:3e:21:8b:e2", "network": {"id": "5bac39b9-563a-456f-9168-fd10b1b28c21", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-3053955-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueTestJSON-3053955-network", "vif_mac": "fa:16:3e:21:8b:e2"}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "5e161d5a47f845fd89eb3f10627a0830", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e7af943-7e", "ovs_interfaceid": "6e7af943-7ef0-441d-a402-bd595082f98e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>) rescue={'image_id': 'a32b3e07-16d8-46fd-9a7b-c242c432fcf9', 'kernel_id': '', 'ramdisk_id': ''} block_device_info=None _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 20 15:07:08 compute-1 nova_compute[225855]: 2026-01-20 15:07:08.876 225859 DEBUG nova.objects.instance [None req-66758f5d-5806-48e6-86ec-5a05118ea96c a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] Lazy-loading 'resources' on Instance uuid 5f8a2718-2106-431c-82c1-2609a52e7fb2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:07:08 compute-1 nova_compute[225855]: 2026-01-20 15:07:08.878 225859 DEBUG oslo_concurrency.lockutils [None req-6697113e-e9a3-43ee-bf8b-903b9d311a88 b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] Acquiring lock "5feeb9de-434b-4ec7-aa99-6da718514c6f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:07:08 compute-1 nova_compute[225855]: 2026-01-20 15:07:08.878 225859 DEBUG oslo_concurrency.lockutils [None req-6697113e-e9a3-43ee-bf8b-903b9d311a88 b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] Lock "5feeb9de-434b-4ec7-aa99-6da718514c6f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:07:08 compute-1 nova_compute[225855]: 2026-01-20 15:07:08.878 225859 DEBUG oslo_concurrency.lockutils [None req-6697113e-e9a3-43ee-bf8b-903b9d311a88 b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] Acquiring lock "5feeb9de-434b-4ec7-aa99-6da718514c6f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:07:08 compute-1 nova_compute[225855]: 2026-01-20 15:07:08.879 225859 DEBUG oslo_concurrency.lockutils [None req-6697113e-e9a3-43ee-bf8b-903b9d311a88 b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] Lock "5feeb9de-434b-4ec7-aa99-6da718514c6f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:07:08 compute-1 nova_compute[225855]: 2026-01-20 15:07:08.879 225859 DEBUG oslo_concurrency.lockutils [None req-6697113e-e9a3-43ee-bf8b-903b9d311a88 b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] Lock "5feeb9de-434b-4ec7-aa99-6da718514c6f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:07:08 compute-1 nova_compute[225855]: 2026-01-20 15:07:08.881 225859 INFO nova.compute.manager [None req-6697113e-e9a3-43ee-bf8b-903b9d311a88 b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] [instance: 5feeb9de-434b-4ec7-aa99-6da718514c6f] Terminating instance
Jan 20 15:07:08 compute-1 nova_compute[225855]: 2026-01-20 15:07:08.882 225859 DEBUG nova.compute.manager [None req-6697113e-e9a3-43ee-bf8b-903b9d311a88 b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] [instance: 5feeb9de-434b-4ec7-aa99-6da718514c6f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 20 15:07:08 compute-1 nova_compute[225855]: 2026-01-20 15:07:08.901 225859 WARNING nova.virt.libvirt.driver [None req-66758f5d-5806-48e6-86ec-5a05118ea96c a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 15:07:08 compute-1 nova_compute[225855]: 2026-01-20 15:07:08.911 225859 DEBUG nova.virt.libvirt.host [None req-66758f5d-5806-48e6-86ec-5a05118ea96c a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 20 15:07:08 compute-1 nova_compute[225855]: 2026-01-20 15:07:08.912 225859 DEBUG nova.virt.libvirt.host [None req-66758f5d-5806-48e6-86ec-5a05118ea96c a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 20 15:07:08 compute-1 nova_compute[225855]: 2026-01-20 15:07:08.915 225859 DEBUG nova.virt.libvirt.host [None req-66758f5d-5806-48e6-86ec-5a05118ea96c a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 20 15:07:08 compute-1 nova_compute[225855]: 2026-01-20 15:07:08.916 225859 DEBUG nova.virt.libvirt.host [None req-66758f5d-5806-48e6-86ec-5a05118ea96c a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 20 15:07:08 compute-1 nova_compute[225855]: 2026-01-20 15:07:08.917 225859 DEBUG nova.virt.libvirt.driver [None req-66758f5d-5806-48e6-86ec-5a05118ea96c a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 20 15:07:08 compute-1 nova_compute[225855]: 2026-01-20 15:07:08.917 225859 DEBUG nova.virt.hardware [None req-66758f5d-5806-48e6-86ec-5a05118ea96c a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-20T14:21:55Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='522deaab-a741-4dbb-932d-d8b13a211c33',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 20 15:07:08 compute-1 nova_compute[225855]: 2026-01-20 15:07:08.917 225859 DEBUG nova.virt.hardware [None req-66758f5d-5806-48e6-86ec-5a05118ea96c a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 20 15:07:08 compute-1 nova_compute[225855]: 2026-01-20 15:07:08.917 225859 DEBUG nova.virt.hardware [None req-66758f5d-5806-48e6-86ec-5a05118ea96c a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 20 15:07:08 compute-1 nova_compute[225855]: 2026-01-20 15:07:08.918 225859 DEBUG nova.virt.hardware [None req-66758f5d-5806-48e6-86ec-5a05118ea96c a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 20 15:07:08 compute-1 nova_compute[225855]: 2026-01-20 15:07:08.918 225859 DEBUG nova.virt.hardware [None req-66758f5d-5806-48e6-86ec-5a05118ea96c a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 20 15:07:08 compute-1 nova_compute[225855]: 2026-01-20 15:07:08.918 225859 DEBUG nova.virt.hardware [None req-66758f5d-5806-48e6-86ec-5a05118ea96c a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 20 15:07:08 compute-1 nova_compute[225855]: 2026-01-20 15:07:08.918 225859 DEBUG nova.virt.hardware [None req-66758f5d-5806-48e6-86ec-5a05118ea96c a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 20 15:07:08 compute-1 nova_compute[225855]: 2026-01-20 15:07:08.918 225859 DEBUG nova.virt.hardware [None req-66758f5d-5806-48e6-86ec-5a05118ea96c a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 20 15:07:08 compute-1 nova_compute[225855]: 2026-01-20 15:07:08.918 225859 DEBUG nova.virt.hardware [None req-66758f5d-5806-48e6-86ec-5a05118ea96c a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 20 15:07:08 compute-1 nova_compute[225855]: 2026-01-20 15:07:08.919 225859 DEBUG nova.virt.hardware [None req-66758f5d-5806-48e6-86ec-5a05118ea96c a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 20 15:07:08 compute-1 nova_compute[225855]: 2026-01-20 15:07:08.919 225859 DEBUG nova.virt.hardware [None req-66758f5d-5806-48e6-86ec-5a05118ea96c a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 20 15:07:08 compute-1 nova_compute[225855]: 2026-01-20 15:07:08.919 225859 DEBUG nova.objects.instance [None req-66758f5d-5806-48e6-86ec-5a05118ea96c a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 5f8a2718-2106-431c-82c1-2609a52e7fb2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:07:08 compute-1 kernel: tap70668adb-f9 (unregistering): left promiscuous mode
Jan 20 15:07:08 compute-1 NetworkManager[49104]: <info>  [1768921628.9269] device (tap70668adb-f9): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 20 15:07:08 compute-1 nova_compute[225855]: 2026-01-20 15:07:08.932 225859 DEBUG oslo_concurrency.processutils [None req-66758f5d-5806-48e6-86ec-5a05118ea96c a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:07:08 compute-1 ovn_controller[130490]: 2026-01-20T15:07:08Z|00704|binding|INFO|Releasing lport 70668adb-f9ad-41cb-8eac-2e0aba32bf22 from this chassis (sb_readonly=0)
Jan 20 15:07:08 compute-1 ovn_controller[130490]: 2026-01-20T15:07:08Z|00705|binding|INFO|Setting lport 70668adb-f9ad-41cb-8eac-2e0aba32bf22 down in Southbound
Jan 20 15:07:08 compute-1 ovn_controller[130490]: 2026-01-20T15:07:08Z|00706|binding|INFO|Removing iface tap70668adb-f9 ovn-installed in OVS
Jan 20 15:07:08 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:07:08.950 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6a:c0:d3 10.100.0.3'], port_security=['fa:16:3e:6a:c0:d3 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '5feeb9de-434b-4ec7-aa99-6da718514c6f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0f434e83-45c8-454d-820b-af39b696a1d5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0fc924d2df984301897e81920c5e192f', 'neutron:revision_number': '9', 'neutron:security_group_ids': 'd8d958e0-892e-4275-9633-96783d5a96b0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cf0669b1-9b02-4bfa-859e-dac906b93fdc, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], logical_port=70668adb-f9ad-41cb-8eac-2e0aba32bf22) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:07:08 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:07:08.951 140354 INFO neutron.agent.ovn.metadata.agent [-] Port 70668adb-f9ad-41cb-8eac-2e0aba32bf22 in datapath 0f434e83-45c8-454d-820b-af39b696a1d5 unbound from our chassis
Jan 20 15:07:08 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:07:08.952 140354 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0f434e83-45c8-454d-820b-af39b696a1d5, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 20 15:07:08 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:07:08.953 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[33386d12-9fe3-4be1-af46-b564f7cb7c9e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:07:08 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:07:08.954 140354 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-0f434e83-45c8-454d-820b-af39b696a1d5 namespace which is not needed anymore
Jan 20 15:07:08 compute-1 nova_compute[225855]: 2026-01-20 15:07:08.960 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:07:08 compute-1 systemd[1]: machine-qemu\x2d81\x2dinstance\x2d000000a2.scope: Deactivated successfully.
Jan 20 15:07:08 compute-1 systemd[1]: machine-qemu\x2d81\x2dinstance\x2d000000a2.scope: Consumed 14.279s CPU time.
Jan 20 15:07:08 compute-1 systemd-machined[194361]: Machine qemu-81-instance-000000a2 terminated.
Jan 20 15:07:09 compute-1 nova_compute[225855]: 2026-01-20 15:07:09.055 225859 DEBUG nova.compute.manager [req-545408b4-58ab-4abc-b542-a6874fa00bf3 req-9d3fdaa7-2978-4034-b6b0-df47a2cb3cca 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 5feeb9de-434b-4ec7-aa99-6da718514c6f] Received event network-changed-70668adb-f9ad-41cb-8eac-2e0aba32bf22 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:07:09 compute-1 nova_compute[225855]: 2026-01-20 15:07:09.057 225859 DEBUG nova.compute.manager [req-545408b4-58ab-4abc-b542-a6874fa00bf3 req-9d3fdaa7-2978-4034-b6b0-df47a2cb3cca 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 5feeb9de-434b-4ec7-aa99-6da718514c6f] Refreshing instance network info cache due to event network-changed-70668adb-f9ad-41cb-8eac-2e0aba32bf22. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 15:07:09 compute-1 nova_compute[225855]: 2026-01-20 15:07:09.057 225859 DEBUG oslo_concurrency.lockutils [req-545408b4-58ab-4abc-b542-a6874fa00bf3 req-9d3fdaa7-2978-4034-b6b0-df47a2cb3cca 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-5feeb9de-434b-4ec7-aa99-6da718514c6f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:07:09 compute-1 nova_compute[225855]: 2026-01-20 15:07:09.058 225859 DEBUG oslo_concurrency.lockutils [req-545408b4-58ab-4abc-b542-a6874fa00bf3 req-9d3fdaa7-2978-4034-b6b0-df47a2cb3cca 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-5feeb9de-434b-4ec7-aa99-6da718514c6f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:07:09 compute-1 nova_compute[225855]: 2026-01-20 15:07:09.058 225859 DEBUG nova.network.neutron [req-545408b4-58ab-4abc-b542-a6874fa00bf3 req-9d3fdaa7-2978-4034-b6b0-df47a2cb3cca 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 5feeb9de-434b-4ec7-aa99-6da718514c6f] Refreshing network info cache for port 70668adb-f9ad-41cb-8eac-2e0aba32bf22 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 15:07:09 compute-1 nova_compute[225855]: 2026-01-20 15:07:09.114 225859 INFO nova.virt.libvirt.driver [-] [instance: 5feeb9de-434b-4ec7-aa99-6da718514c6f] Instance destroyed successfully.
Jan 20 15:07:09 compute-1 nova_compute[225855]: 2026-01-20 15:07:09.115 225859 DEBUG nova.objects.instance [None req-6697113e-e9a3-43ee-bf8b-903b9d311a88 b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] Lazy-loading 'resources' on Instance uuid 5feeb9de-434b-4ec7-aa99-6da718514c6f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:07:09 compute-1 nova_compute[225855]: 2026-01-20 15:07:09.136 225859 DEBUG nova.virt.libvirt.vif [None req-6697113e-e9a3-43ee-bf8b-903b9d311a88 b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-20T15:05:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestShelveInstance-server-670486896',display_name='tempest-TestShelveInstance-server-670486896',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testshelveinstance-server-670486896',id=162,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGFkn7BLthBV1Y62q/iaiaFYVNSXov56cyC6gJDof3vS0dj6UwuVwvMnqOok2l8W+oqb55YucgjGf+63NOxxoSCxoRUO/Jcx5MarGHmdQPdT+6u18ixvV1ghiExv/Y0Nog==',key_name='tempest-TestShelveInstance-1862119958',keypairs=<?>,launch_index=0,launched_at=2026-01-20T15:06:31Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='0fc924d2df984301897e81920c5e192f',ramdisk_id='',reservation_id='r-18qxw31n',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TestShelveInstance-1425544575',owner_user_name='tempest-TestShelveInstance-1425544575-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T15:06:31Z,user_data=None,user_id='b02a8ef6cc3946ceb2c8846aae2eae68',uuid=5feeb9de-434b-4ec7-aa99-6da718514c6f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "70668adb-f9ad-41cb-8eac-2e0aba32bf22", "address": "fa:16:3e:6a:c0:d3", "network": {"id": "0f434e83-45c8-454d-820b-af39b696a1d5", "bridge": "br-int", "label": "tempest-TestShelveInstance-1862824485-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, 
"floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0fc924d2df984301897e81920c5e192f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap70668adb-f9", "ovs_interfaceid": "70668adb-f9ad-41cb-8eac-2e0aba32bf22", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 20 15:07:09 compute-1 nova_compute[225855]: 2026-01-20 15:07:09.137 225859 DEBUG nova.network.os_vif_util [None req-6697113e-e9a3-43ee-bf8b-903b9d311a88 b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] Converting VIF {"id": "70668adb-f9ad-41cb-8eac-2e0aba32bf22", "address": "fa:16:3e:6a:c0:d3", "network": {"id": "0f434e83-45c8-454d-820b-af39b696a1d5", "bridge": "br-int", "label": "tempest-TestShelveInstance-1862824485-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0fc924d2df984301897e81920c5e192f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap70668adb-f9", "ovs_interfaceid": "70668adb-f9ad-41cb-8eac-2e0aba32bf22", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 15:07:09 compute-1 nova_compute[225855]: 2026-01-20 15:07:09.138 225859 DEBUG nova.network.os_vif_util [None req-6697113e-e9a3-43ee-bf8b-903b9d311a88 b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6a:c0:d3,bridge_name='br-int',has_traffic_filtering=True,id=70668adb-f9ad-41cb-8eac-2e0aba32bf22,network=Network(0f434e83-45c8-454d-820b-af39b696a1d5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap70668adb-f9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 15:07:09 compute-1 nova_compute[225855]: 2026-01-20 15:07:09.139 225859 DEBUG os_vif [None req-6697113e-e9a3-43ee-bf8b-903b9d311a88 b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6a:c0:d3,bridge_name='br-int',has_traffic_filtering=True,id=70668adb-f9ad-41cb-8eac-2e0aba32bf22,network=Network(0f434e83-45c8-454d-820b-af39b696a1d5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap70668adb-f9') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 20 15:07:09 compute-1 nova_compute[225855]: 2026-01-20 15:07:09.142 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:07:09 compute-1 nova_compute[225855]: 2026-01-20 15:07:09.142 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap70668adb-f9, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:07:09 compute-1 nova_compute[225855]: 2026-01-20 15:07:09.145 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:07:09 compute-1 nova_compute[225855]: 2026-01-20 15:07:09.147 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 15:07:09 compute-1 nova_compute[225855]: 2026-01-20 15:07:09.150 225859 INFO os_vif [None req-6697113e-e9a3-43ee-bf8b-903b9d311a88 b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6a:c0:d3,bridge_name='br-int',has_traffic_filtering=True,id=70668adb-f9ad-41cb-8eac-2e0aba32bf22,network=Network(0f434e83-45c8-454d-820b-af39b696a1d5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap70668adb-f9')
Jan 20 15:07:09 compute-1 neutron-haproxy-ovnmeta-0f434e83-45c8-454d-820b-af39b696a1d5[291892]: [NOTICE]   (291896) : haproxy version is 2.8.14-c23fe91
Jan 20 15:07:09 compute-1 neutron-haproxy-ovnmeta-0f434e83-45c8-454d-820b-af39b696a1d5[291892]: [NOTICE]   (291896) : path to executable is /usr/sbin/haproxy
Jan 20 15:07:09 compute-1 neutron-haproxy-ovnmeta-0f434e83-45c8-454d-820b-af39b696a1d5[291892]: [WARNING]  (291896) : Exiting Master process...
Jan 20 15:07:09 compute-1 neutron-haproxy-ovnmeta-0f434e83-45c8-454d-820b-af39b696a1d5[291892]: [WARNING]  (291896) : Exiting Master process...
Jan 20 15:07:09 compute-1 neutron-haproxy-ovnmeta-0f434e83-45c8-454d-820b-af39b696a1d5[291892]: [ALERT]    (291896) : Current worker (291898) exited with code 143 (Terminated)
Jan 20 15:07:09 compute-1 neutron-haproxy-ovnmeta-0f434e83-45c8-454d-820b-af39b696a1d5[291892]: [WARNING]  (291896) : All workers exited. Exiting... (0)
Jan 20 15:07:09 compute-1 systemd[1]: libpod-7a60dbc62e73615be07404bd95899ae47d0c88a0458e87e1347e628a09141daa.scope: Deactivated successfully.
Jan 20 15:07:09 compute-1 podman[292809]: 2026-01-20 15:07:09.231113249 +0000 UTC m=+0.181694596 container died 7a60dbc62e73615be07404bd95899ae47d0c88a0458e87e1347e628a09141daa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0f434e83-45c8-454d-820b-af39b696a1d5, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 20 15:07:09 compute-1 systemd[1]: var-lib-containers-storage-overlay-a2da085baf4d307a9cd37776373e9378476598690798d692b40336094440304d-merged.mount: Deactivated successfully.
Jan 20 15:07:09 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7a60dbc62e73615be07404bd95899ae47d0c88a0458e87e1347e628a09141daa-userdata-shm.mount: Deactivated successfully.
Jan 20 15:07:09 compute-1 podman[292809]: 2026-01-20 15:07:09.284298998 +0000 UTC m=+0.234880315 container cleanup 7a60dbc62e73615be07404bd95899ae47d0c88a0458e87e1347e628a09141daa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0f434e83-45c8-454d-820b-af39b696a1d5, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 20 15:07:09 compute-1 systemd[1]: libpod-conmon-7a60dbc62e73615be07404bd95899ae47d0c88a0458e87e1347e628a09141daa.scope: Deactivated successfully.
Jan 20 15:07:09 compute-1 nova_compute[225855]: 2026-01-20 15:07:09.335 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:07:09 compute-1 podman[292888]: 2026-01-20 15:07:09.35272743 +0000 UTC m=+0.047513889 container remove 7a60dbc62e73615be07404bd95899ae47d0c88a0458e87e1347e628a09141daa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0f434e83-45c8-454d-820b-af39b696a1d5, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Jan 20 15:07:09 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:07:09.358 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[aae38b1b-dc45-44dc-901c-dd460bf80b61]: (4, ('Tue Jan 20 03:07:09 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-0f434e83-45c8-454d-820b-af39b696a1d5 (7a60dbc62e73615be07404bd95899ae47d0c88a0458e87e1347e628a09141daa)\n7a60dbc62e73615be07404bd95899ae47d0c88a0458e87e1347e628a09141daa\nTue Jan 20 03:07:09 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-0f434e83-45c8-454d-820b-af39b696a1d5 (7a60dbc62e73615be07404bd95899ae47d0c88a0458e87e1347e628a09141daa)\n7a60dbc62e73615be07404bd95899ae47d0c88a0458e87e1347e628a09141daa\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:07:09 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:07:09.360 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[94f37c74-d184-46cb-a0d6-bccd2e801dde]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:07:09 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:07:09.361 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0f434e83-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:07:09 compute-1 nova_compute[225855]: 2026-01-20 15:07:09.362 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:07:09 compute-1 kernel: tap0f434e83-40: left promiscuous mode
Jan 20 15:07:09 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 15:07:09 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3574389037' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:07:09 compute-1 nova_compute[225855]: 2026-01-20 15:07:09.380 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:07:09 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:07:09.383 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[9e50404a-7be7-4c1d-ae37-a760f69de589]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:07:09 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:07:09.398 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[82066958-ef94-47ae-a70e-0f784e8e1acd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:07:09 compute-1 nova_compute[225855]: 2026-01-20 15:07:09.399 225859 DEBUG oslo_concurrency.processutils [None req-66758f5d-5806-48e6-86ec-5a05118ea96c a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:07:09 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:07:09.400 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[9be82b8b-77ef-4950-bdb1-3b0841699140]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:07:09 compute-1 nova_compute[225855]: 2026-01-20 15:07:09.400 225859 DEBUG oslo_concurrency.processutils [None req-66758f5d-5806-48e6-86ec-5a05118ea96c a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:07:09 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:07:09.415 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[ca1ae3da-b748-4ff6-88ef-c51c3a738264]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 662960, 'reachable_time': 28584, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 292911, 'error': None, 'target': 'ovnmeta-0f434e83-45c8-454d-820b-af39b696a1d5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:07:09 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:07:09.417 140466 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-0f434e83-45c8-454d-820b-af39b696a1d5 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 20 15:07:09 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:07:09.417 140466 DEBUG oslo.privsep.daemon [-] privsep: reply[f92d946b-3a96-4562-9534-cf0ff2a9bc1a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:07:09 compute-1 systemd[1]: run-netns-ovnmeta\x2d0f434e83\x2d45c8\x2d454d\x2d820b\x2daf39b696a1d5.mount: Deactivated successfully.
Jan 20 15:07:09 compute-1 nova_compute[225855]: 2026-01-20 15:07:09.433 225859 INFO nova.virt.libvirt.driver [None req-6697113e-e9a3-43ee-bf8b-903b9d311a88 b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] [instance: 5feeb9de-434b-4ec7-aa99-6da718514c6f] Deleting instance files /var/lib/nova/instances/5feeb9de-434b-4ec7-aa99-6da718514c6f_del
Jan 20 15:07:09 compute-1 nova_compute[225855]: 2026-01-20 15:07:09.435 225859 INFO nova.virt.libvirt.driver [None req-6697113e-e9a3-43ee-bf8b-903b9d311a88 b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] [instance: 5feeb9de-434b-4ec7-aa99-6da718514c6f] Deletion of /var/lib/nova/instances/5feeb9de-434b-4ec7-aa99-6da718514c6f_del complete
Jan 20 15:07:09 compute-1 ceph-mon[81775]: pgmap v2497: 321 pgs: 321 active+clean; 634 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 864 KiB/s wr, 135 op/s
Jan 20 15:07:09 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/3574389037' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:07:09 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:07:09 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2868866756' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:07:09 compute-1 nova_compute[225855]: 2026-01-20 15:07:09.482 225859 INFO nova.compute.manager [None req-6697113e-e9a3-43ee-bf8b-903b9d311a88 b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] [instance: 5feeb9de-434b-4ec7-aa99-6da718514c6f] Took 0.60 seconds to destroy the instance on the hypervisor.
Jan 20 15:07:09 compute-1 nova_compute[225855]: 2026-01-20 15:07:09.482 225859 DEBUG oslo.service.loopingcall [None req-6697113e-e9a3-43ee-bf8b-903b9d311a88 b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 20 15:07:09 compute-1 nova_compute[225855]: 2026-01-20 15:07:09.482 225859 DEBUG nova.compute.manager [-] [instance: 5feeb9de-434b-4ec7-aa99-6da718514c6f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 20 15:07:09 compute-1 nova_compute[225855]: 2026-01-20 15:07:09.483 225859 DEBUG nova.network.neutron [-] [instance: 5feeb9de-434b-4ec7-aa99-6da718514c6f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 20 15:07:09 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 15:07:09 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3150868772' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:07:09 compute-1 nova_compute[225855]: 2026-01-20 15:07:09.839 225859 DEBUG oslo_concurrency.processutils [None req-66758f5d-5806-48e6-86ec-5a05118ea96c a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:07:09 compute-1 nova_compute[225855]: 2026-01-20 15:07:09.840 225859 DEBUG oslo_concurrency.processutils [None req-66758f5d-5806-48e6-86ec-5a05118ea96c a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:07:10 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 15:07:10 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/272620387' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:07:10 compute-1 nova_compute[225855]: 2026-01-20 15:07:10.258 225859 DEBUG oslo_concurrency.processutils [None req-66758f5d-5806-48e6-86ec-5a05118ea96c a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.418s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:07:10 compute-1 nova_compute[225855]: 2026-01-20 15:07:10.260 225859 DEBUG nova.virt.libvirt.vif [None req-66758f5d-5806-48e6-86ec-5a05118ea96c a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T15:06:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueTestJSON-server-1315322326',display_name='tempest-ServerRescueTestJSON-server-1315322326',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverrescuetestjson-server-1315322326',id=165,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-20T15:06:40Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='5e161d5a47f845fd89eb3f10627a0830',ramdisk_id='',reservation_id='r-g1s28307',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_
disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueTestJSON-1151598672',owner_user_name='tempest-ServerRescueTestJSON-1151598672-project-member'},tags=<?>,task_state='rescuing',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T15:06:40Z,user_data=None,user_id='a2beb3d6247e457abd6e8d93cc602f02',uuid=5f8a2718-2106-431c-82c1-2609a52e7fb2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "6e7af943-7ef0-441d-a402-bd595082f98e", "address": "fa:16:3e:21:8b:e2", "network": {"id": "5bac39b9-563a-456f-9168-fd10b1b28c21", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-3053955-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueTestJSON-3053955-network", "vif_mac": "fa:16:3e:21:8b:e2"}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "5e161d5a47f845fd89eb3f10627a0830", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e7af943-7e", "ovs_interfaceid": "6e7af943-7ef0-441d-a402-bd595082f98e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 20 15:07:10 compute-1 nova_compute[225855]: 2026-01-20 15:07:10.261 225859 DEBUG nova.network.os_vif_util [None req-66758f5d-5806-48e6-86ec-5a05118ea96c a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] Converting VIF {"id": "6e7af943-7ef0-441d-a402-bd595082f98e", "address": "fa:16:3e:21:8b:e2", "network": {"id": "5bac39b9-563a-456f-9168-fd10b1b28c21", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-3053955-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueTestJSON-3053955-network", "vif_mac": "fa:16:3e:21:8b:e2"}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "5e161d5a47f845fd89eb3f10627a0830", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e7af943-7e", "ovs_interfaceid": "6e7af943-7ef0-441d-a402-bd595082f98e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 15:07:10 compute-1 nova_compute[225855]: 2026-01-20 15:07:10.262 225859 DEBUG nova.network.os_vif_util [None req-66758f5d-5806-48e6-86ec-5a05118ea96c a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:21:8b:e2,bridge_name='br-int',has_traffic_filtering=True,id=6e7af943-7ef0-441d-a402-bd595082f98e,network=Network(5bac39b9-563a-456f-9168-fd10b1b28c21),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6e7af943-7e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 15:07:10 compute-1 nova_compute[225855]: 2026-01-20 15:07:10.265 225859 DEBUG nova.objects.instance [None req-66758f5d-5806-48e6-86ec-5a05118ea96c a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] Lazy-loading 'pci_devices' on Instance uuid 5f8a2718-2106-431c-82c1-2609a52e7fb2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:07:10 compute-1 nova_compute[225855]: 2026-01-20 15:07:10.283 225859 DEBUG nova.virt.libvirt.driver [None req-66758f5d-5806-48e6-86ec-5a05118ea96c a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] [instance: 5f8a2718-2106-431c-82c1-2609a52e7fb2] End _get_guest_xml xml=<domain type="kvm">
Jan 20 15:07:10 compute-1 nova_compute[225855]:   <uuid>5f8a2718-2106-431c-82c1-2609a52e7fb2</uuid>
Jan 20 15:07:10 compute-1 nova_compute[225855]:   <name>instance-000000a5</name>
Jan 20 15:07:10 compute-1 nova_compute[225855]:   <memory>131072</memory>
Jan 20 15:07:10 compute-1 nova_compute[225855]:   <vcpu>1</vcpu>
Jan 20 15:07:10 compute-1 nova_compute[225855]:   <metadata>
Jan 20 15:07:10 compute-1 nova_compute[225855]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 15:07:10 compute-1 nova_compute[225855]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 15:07:10 compute-1 nova_compute[225855]:       <nova:name>tempest-ServerRescueTestJSON-server-1315322326</nova:name>
Jan 20 15:07:10 compute-1 nova_compute[225855]:       <nova:creationTime>2026-01-20 15:07:08</nova:creationTime>
Jan 20 15:07:10 compute-1 nova_compute[225855]:       <nova:flavor name="m1.nano">
Jan 20 15:07:10 compute-1 nova_compute[225855]:         <nova:memory>128</nova:memory>
Jan 20 15:07:10 compute-1 nova_compute[225855]:         <nova:disk>1</nova:disk>
Jan 20 15:07:10 compute-1 nova_compute[225855]:         <nova:swap>0</nova:swap>
Jan 20 15:07:10 compute-1 nova_compute[225855]:         <nova:ephemeral>0</nova:ephemeral>
Jan 20 15:07:10 compute-1 nova_compute[225855]:         <nova:vcpus>1</nova:vcpus>
Jan 20 15:07:10 compute-1 nova_compute[225855]:       </nova:flavor>
Jan 20 15:07:10 compute-1 nova_compute[225855]:       <nova:owner>
Jan 20 15:07:10 compute-1 nova_compute[225855]:         <nova:user uuid="a2beb3d6247e457abd6e8d93cc602f02">tempest-ServerRescueTestJSON-1151598672-project-member</nova:user>
Jan 20 15:07:10 compute-1 nova_compute[225855]:         <nova:project uuid="5e161d5a47f845fd89eb3f10627a0830">tempest-ServerRescueTestJSON-1151598672</nova:project>
Jan 20 15:07:10 compute-1 nova_compute[225855]:       </nova:owner>
Jan 20 15:07:10 compute-1 nova_compute[225855]:       <nova:root type="image" uuid="a32b3e07-16d8-46fd-9a7b-c242c432fcf9"/>
Jan 20 15:07:10 compute-1 nova_compute[225855]:       <nova:ports>
Jan 20 15:07:10 compute-1 nova_compute[225855]:         <nova:port uuid="6e7af943-7ef0-441d-a402-bd595082f98e">
Jan 20 15:07:10 compute-1 nova_compute[225855]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Jan 20 15:07:10 compute-1 nova_compute[225855]:         </nova:port>
Jan 20 15:07:10 compute-1 nova_compute[225855]:       </nova:ports>
Jan 20 15:07:10 compute-1 nova_compute[225855]:     </nova:instance>
Jan 20 15:07:10 compute-1 nova_compute[225855]:   </metadata>
Jan 20 15:07:10 compute-1 nova_compute[225855]:   <sysinfo type="smbios">
Jan 20 15:07:10 compute-1 nova_compute[225855]:     <system>
Jan 20 15:07:10 compute-1 nova_compute[225855]:       <entry name="manufacturer">RDO</entry>
Jan 20 15:07:10 compute-1 nova_compute[225855]:       <entry name="product">OpenStack Compute</entry>
Jan 20 15:07:10 compute-1 nova_compute[225855]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 15:07:10 compute-1 nova_compute[225855]:       <entry name="serial">5f8a2718-2106-431c-82c1-2609a52e7fb2</entry>
Jan 20 15:07:10 compute-1 nova_compute[225855]:       <entry name="uuid">5f8a2718-2106-431c-82c1-2609a52e7fb2</entry>
Jan 20 15:07:10 compute-1 nova_compute[225855]:       <entry name="family">Virtual Machine</entry>
Jan 20 15:07:10 compute-1 nova_compute[225855]:     </system>
Jan 20 15:07:10 compute-1 nova_compute[225855]:   </sysinfo>
Jan 20 15:07:10 compute-1 nova_compute[225855]:   <os>
Jan 20 15:07:10 compute-1 nova_compute[225855]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 20 15:07:10 compute-1 nova_compute[225855]:     <smbios mode="sysinfo"/>
Jan 20 15:07:10 compute-1 nova_compute[225855]:   </os>
Jan 20 15:07:10 compute-1 nova_compute[225855]:   <features>
Jan 20 15:07:10 compute-1 nova_compute[225855]:     <acpi/>
Jan 20 15:07:10 compute-1 nova_compute[225855]:     <apic/>
Jan 20 15:07:10 compute-1 nova_compute[225855]:     <vmcoreinfo/>
Jan 20 15:07:10 compute-1 nova_compute[225855]:   </features>
Jan 20 15:07:10 compute-1 nova_compute[225855]:   <clock offset="utc">
Jan 20 15:07:10 compute-1 nova_compute[225855]:     <timer name="pit" tickpolicy="delay"/>
Jan 20 15:07:10 compute-1 nova_compute[225855]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 20 15:07:10 compute-1 nova_compute[225855]:     <timer name="hpet" present="no"/>
Jan 20 15:07:10 compute-1 nova_compute[225855]:   </clock>
Jan 20 15:07:10 compute-1 nova_compute[225855]:   <cpu mode="custom" match="exact">
Jan 20 15:07:10 compute-1 nova_compute[225855]:     <model>Nehalem</model>
Jan 20 15:07:10 compute-1 nova_compute[225855]:     <topology sockets="1" cores="1" threads="1"/>
Jan 20 15:07:10 compute-1 nova_compute[225855]:   </cpu>
Jan 20 15:07:10 compute-1 nova_compute[225855]:   <devices>
Jan 20 15:07:10 compute-1 nova_compute[225855]:     <disk type="network" device="disk">
Jan 20 15:07:10 compute-1 nova_compute[225855]:       <driver type="raw" cache="none"/>
Jan 20 15:07:10 compute-1 nova_compute[225855]:       <source protocol="rbd" name="vms/5f8a2718-2106-431c-82c1-2609a52e7fb2_disk.rescue">
Jan 20 15:07:10 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 15:07:10 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 15:07:10 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 15:07:10 compute-1 nova_compute[225855]:       </source>
Jan 20 15:07:10 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 15:07:10 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 15:07:10 compute-1 nova_compute[225855]:       </auth>
Jan 20 15:07:10 compute-1 nova_compute[225855]:       <target dev="vda" bus="virtio"/>
Jan 20 15:07:10 compute-1 nova_compute[225855]:     </disk>
Jan 20 15:07:10 compute-1 nova_compute[225855]:     <disk type="network" device="disk">
Jan 20 15:07:10 compute-1 nova_compute[225855]:       <driver type="raw" cache="none"/>
Jan 20 15:07:10 compute-1 nova_compute[225855]:       <source protocol="rbd" name="vms/5f8a2718-2106-431c-82c1-2609a52e7fb2_disk">
Jan 20 15:07:10 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 15:07:10 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 15:07:10 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 15:07:10 compute-1 nova_compute[225855]:       </source>
Jan 20 15:07:10 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 15:07:10 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 15:07:10 compute-1 nova_compute[225855]:       </auth>
Jan 20 15:07:10 compute-1 nova_compute[225855]:       <target dev="vdb" bus="virtio"/>
Jan 20 15:07:10 compute-1 nova_compute[225855]:     </disk>
Jan 20 15:07:10 compute-1 nova_compute[225855]:     <disk type="network" device="cdrom">
Jan 20 15:07:10 compute-1 nova_compute[225855]:       <driver type="raw" cache="none"/>
Jan 20 15:07:10 compute-1 nova_compute[225855]:       <source protocol="rbd" name="vms/5f8a2718-2106-431c-82c1-2609a52e7fb2_disk.config.rescue">
Jan 20 15:07:10 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 15:07:10 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 15:07:10 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 15:07:10 compute-1 nova_compute[225855]:       </source>
Jan 20 15:07:10 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 15:07:10 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 15:07:10 compute-1 nova_compute[225855]:       </auth>
Jan 20 15:07:10 compute-1 nova_compute[225855]:       <target dev="sda" bus="sata"/>
Jan 20 15:07:10 compute-1 nova_compute[225855]:     </disk>
Jan 20 15:07:10 compute-1 nova_compute[225855]:     <interface type="ethernet">
Jan 20 15:07:10 compute-1 nova_compute[225855]:       <mac address="fa:16:3e:21:8b:e2"/>
Jan 20 15:07:10 compute-1 nova_compute[225855]:       <model type="virtio"/>
Jan 20 15:07:10 compute-1 nova_compute[225855]:       <driver name="vhost" rx_queue_size="512"/>
Jan 20 15:07:10 compute-1 nova_compute[225855]:       <mtu size="1442"/>
Jan 20 15:07:10 compute-1 nova_compute[225855]:       <target dev="tap6e7af943-7e"/>
Jan 20 15:07:10 compute-1 nova_compute[225855]:     </interface>
Jan 20 15:07:10 compute-1 nova_compute[225855]:     <serial type="pty">
Jan 20 15:07:10 compute-1 nova_compute[225855]:       <log file="/var/lib/nova/instances/5f8a2718-2106-431c-82c1-2609a52e7fb2/console.log" append="off"/>
Jan 20 15:07:10 compute-1 nova_compute[225855]:     </serial>
Jan 20 15:07:10 compute-1 nova_compute[225855]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 20 15:07:10 compute-1 nova_compute[225855]:     <video>
Jan 20 15:07:10 compute-1 nova_compute[225855]:       <model type="virtio"/>
Jan 20 15:07:10 compute-1 nova_compute[225855]:     </video>
Jan 20 15:07:10 compute-1 nova_compute[225855]:     <input type="tablet" bus="usb"/>
Jan 20 15:07:10 compute-1 nova_compute[225855]:     <rng model="virtio">
Jan 20 15:07:10 compute-1 nova_compute[225855]:       <backend model="random">/dev/urandom</backend>
Jan 20 15:07:10 compute-1 nova_compute[225855]:     </rng>
Jan 20 15:07:10 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root"/>
Jan 20 15:07:10 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:07:10 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:07:10 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:07:10 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:07:10 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:07:10 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:07:10 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:07:10 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:07:10 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:07:10 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:07:10 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:07:10 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:07:10 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:07:10 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:07:10 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:07:10 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:07:10 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:07:10 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:07:10 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:07:10 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:07:10 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:07:10 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:07:10 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:07:10 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:07:10 compute-1 nova_compute[225855]:     <controller type="usb" index="0"/>
Jan 20 15:07:10 compute-1 nova_compute[225855]:     <memballoon model="virtio">
Jan 20 15:07:10 compute-1 nova_compute[225855]:       <stats period="10"/>
Jan 20 15:07:10 compute-1 nova_compute[225855]:     </memballoon>
Jan 20 15:07:10 compute-1 nova_compute[225855]:   </devices>
Jan 20 15:07:10 compute-1 nova_compute[225855]: </domain>
Jan 20 15:07:10 compute-1 nova_compute[225855]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 20 15:07:10 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:07:10 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:07:10 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:07:10.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:07:10 compute-1 nova_compute[225855]: 2026-01-20 15:07:10.290 225859 INFO nova.virt.libvirt.driver [-] [instance: 5f8a2718-2106-431c-82c1-2609a52e7fb2] Instance destroyed successfully.
Jan 20 15:07:10 compute-1 nova_compute[225855]: 2026-01-20 15:07:10.345 225859 DEBUG nova.compute.manager [req-05a36f30-c49b-460b-a1c2-d4abe25b8acb req-95220ae8-b1d5-4b8d-a02d-d4cf6450ddf7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 5f8a2718-2106-431c-82c1-2609a52e7fb2] Received event network-vif-plugged-6e7af943-7ef0-441d-a402-bd595082f98e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:07:10 compute-1 nova_compute[225855]: 2026-01-20 15:07:10.345 225859 DEBUG oslo_concurrency.lockutils [req-05a36f30-c49b-460b-a1c2-d4abe25b8acb req-95220ae8-b1d5-4b8d-a02d-d4cf6450ddf7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "5f8a2718-2106-431c-82c1-2609a52e7fb2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:07:10 compute-1 nova_compute[225855]: 2026-01-20 15:07:10.345 225859 DEBUG oslo_concurrency.lockutils [req-05a36f30-c49b-460b-a1c2-d4abe25b8acb req-95220ae8-b1d5-4b8d-a02d-d4cf6450ddf7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "5f8a2718-2106-431c-82c1-2609a52e7fb2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:07:10 compute-1 nova_compute[225855]: 2026-01-20 15:07:10.346 225859 DEBUG oslo_concurrency.lockutils [req-05a36f30-c49b-460b-a1c2-d4abe25b8acb req-95220ae8-b1d5-4b8d-a02d-d4cf6450ddf7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "5f8a2718-2106-431c-82c1-2609a52e7fb2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:07:10 compute-1 nova_compute[225855]: 2026-01-20 15:07:10.346 225859 DEBUG nova.compute.manager [req-05a36f30-c49b-460b-a1c2-d4abe25b8acb req-95220ae8-b1d5-4b8d-a02d-d4cf6450ddf7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 5f8a2718-2106-431c-82c1-2609a52e7fb2] No waiting events found dispatching network-vif-plugged-6e7af943-7ef0-441d-a402-bd595082f98e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:07:10 compute-1 nova_compute[225855]: 2026-01-20 15:07:10.346 225859 WARNING nova.compute.manager [req-05a36f30-c49b-460b-a1c2-d4abe25b8acb req-95220ae8-b1d5-4b8d-a02d-d4cf6450ddf7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 5f8a2718-2106-431c-82c1-2609a52e7fb2] Received unexpected event network-vif-plugged-6e7af943-7ef0-441d-a402-bd595082f98e for instance with vm_state active and task_state rescuing.
Jan 20 15:07:10 compute-1 nova_compute[225855]: 2026-01-20 15:07:10.346 225859 DEBUG nova.compute.manager [req-05a36f30-c49b-460b-a1c2-d4abe25b8acb req-95220ae8-b1d5-4b8d-a02d-d4cf6450ddf7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 5feeb9de-434b-4ec7-aa99-6da718514c6f] Received event network-vif-unplugged-70668adb-f9ad-41cb-8eac-2e0aba32bf22 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:07:10 compute-1 nova_compute[225855]: 2026-01-20 15:07:10.346 225859 DEBUG oslo_concurrency.lockutils [req-05a36f30-c49b-460b-a1c2-d4abe25b8acb req-95220ae8-b1d5-4b8d-a02d-d4cf6450ddf7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "5feeb9de-434b-4ec7-aa99-6da718514c6f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:07:10 compute-1 nova_compute[225855]: 2026-01-20 15:07:10.346 225859 DEBUG oslo_concurrency.lockutils [req-05a36f30-c49b-460b-a1c2-d4abe25b8acb req-95220ae8-b1d5-4b8d-a02d-d4cf6450ddf7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "5feeb9de-434b-4ec7-aa99-6da718514c6f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:07:10 compute-1 nova_compute[225855]: 2026-01-20 15:07:10.347 225859 DEBUG oslo_concurrency.lockutils [req-05a36f30-c49b-460b-a1c2-d4abe25b8acb req-95220ae8-b1d5-4b8d-a02d-d4cf6450ddf7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "5feeb9de-434b-4ec7-aa99-6da718514c6f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:07:10 compute-1 nova_compute[225855]: 2026-01-20 15:07:10.347 225859 DEBUG nova.compute.manager [req-05a36f30-c49b-460b-a1c2-d4abe25b8acb req-95220ae8-b1d5-4b8d-a02d-d4cf6450ddf7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 5feeb9de-434b-4ec7-aa99-6da718514c6f] No waiting events found dispatching network-vif-unplugged-70668adb-f9ad-41cb-8eac-2e0aba32bf22 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:07:10 compute-1 nova_compute[225855]: 2026-01-20 15:07:10.347 225859 DEBUG nova.compute.manager [req-05a36f30-c49b-460b-a1c2-d4abe25b8acb req-95220ae8-b1d5-4b8d-a02d-d4cf6450ddf7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 5feeb9de-434b-4ec7-aa99-6da718514c6f] Received event network-vif-unplugged-70668adb-f9ad-41cb-8eac-2e0aba32bf22 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 20 15:07:10 compute-1 nova_compute[225855]: 2026-01-20 15:07:10.347 225859 DEBUG nova.compute.manager [req-05a36f30-c49b-460b-a1c2-d4abe25b8acb req-95220ae8-b1d5-4b8d-a02d-d4cf6450ddf7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 5feeb9de-434b-4ec7-aa99-6da718514c6f] Received event network-vif-plugged-70668adb-f9ad-41cb-8eac-2e0aba32bf22 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:07:10 compute-1 nova_compute[225855]: 2026-01-20 15:07:10.347 225859 DEBUG oslo_concurrency.lockutils [req-05a36f30-c49b-460b-a1c2-d4abe25b8acb req-95220ae8-b1d5-4b8d-a02d-d4cf6450ddf7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "5feeb9de-434b-4ec7-aa99-6da718514c6f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:07:10 compute-1 nova_compute[225855]: 2026-01-20 15:07:10.347 225859 DEBUG oslo_concurrency.lockutils [req-05a36f30-c49b-460b-a1c2-d4abe25b8acb req-95220ae8-b1d5-4b8d-a02d-d4cf6450ddf7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "5feeb9de-434b-4ec7-aa99-6da718514c6f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:07:10 compute-1 nova_compute[225855]: 2026-01-20 15:07:10.348 225859 DEBUG oslo_concurrency.lockutils [req-05a36f30-c49b-460b-a1c2-d4abe25b8acb req-95220ae8-b1d5-4b8d-a02d-d4cf6450ddf7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "5feeb9de-434b-4ec7-aa99-6da718514c6f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:07:10 compute-1 nova_compute[225855]: 2026-01-20 15:07:10.348 225859 DEBUG nova.compute.manager [req-05a36f30-c49b-460b-a1c2-d4abe25b8acb req-95220ae8-b1d5-4b8d-a02d-d4cf6450ddf7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 5feeb9de-434b-4ec7-aa99-6da718514c6f] No waiting events found dispatching network-vif-plugged-70668adb-f9ad-41cb-8eac-2e0aba32bf22 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:07:10 compute-1 nova_compute[225855]: 2026-01-20 15:07:10.348 225859 WARNING nova.compute.manager [req-05a36f30-c49b-460b-a1c2-d4abe25b8acb req-95220ae8-b1d5-4b8d-a02d-d4cf6450ddf7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 5feeb9de-434b-4ec7-aa99-6da718514c6f] Received unexpected event network-vif-plugged-70668adb-f9ad-41cb-8eac-2e0aba32bf22 for instance with vm_state active and task_state deleting.
Jan 20 15:07:10 compute-1 nova_compute[225855]: 2026-01-20 15:07:10.360 225859 DEBUG nova.virt.libvirt.driver [None req-66758f5d-5806-48e6-86ec-5a05118ea96c a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 15:07:10 compute-1 nova_compute[225855]: 2026-01-20 15:07:10.360 225859 DEBUG nova.virt.libvirt.driver [None req-66758f5d-5806-48e6-86ec-5a05118ea96c a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 15:07:10 compute-1 nova_compute[225855]: 2026-01-20 15:07:10.360 225859 DEBUG nova.virt.libvirt.driver [None req-66758f5d-5806-48e6-86ec-5a05118ea96c a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 15:07:10 compute-1 nova_compute[225855]: 2026-01-20 15:07:10.361 225859 DEBUG nova.virt.libvirt.driver [None req-66758f5d-5806-48e6-86ec-5a05118ea96c a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] No VIF found with MAC fa:16:3e:21:8b:e2, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 20 15:07:10 compute-1 nova_compute[225855]: 2026-01-20 15:07:10.361 225859 INFO nova.virt.libvirt.driver [None req-66758f5d-5806-48e6-86ec-5a05118ea96c a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] [instance: 5f8a2718-2106-431c-82c1-2609a52e7fb2] Using config drive
Jan 20 15:07:10 compute-1 nova_compute[225855]: 2026-01-20 15:07:10.385 225859 DEBUG nova.storage.rbd_utils [None req-66758f5d-5806-48e6-86ec-5a05118ea96c a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] rbd image 5f8a2718-2106-431c-82c1-2609a52e7fb2_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:07:10 compute-1 nova_compute[225855]: 2026-01-20 15:07:10.406 225859 DEBUG nova.objects.instance [None req-66758f5d-5806-48e6-86ec-5a05118ea96c a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 5f8a2718-2106-431c-82c1-2609a52e7fb2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:07:10 compute-1 nova_compute[225855]: 2026-01-20 15:07:10.435 225859 DEBUG nova.objects.instance [None req-66758f5d-5806-48e6-86ec-5a05118ea96c a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] Lazy-loading 'keypairs' on Instance uuid 5f8a2718-2106-431c-82c1-2609a52e7fb2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:07:10 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/2868866756' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:07:10 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/3150868772' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:07:10 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/272620387' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:07:10 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:07:10 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:07:10 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:07:10.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:07:11 compute-1 nova_compute[225855]: 2026-01-20 15:07:11.075 225859 DEBUG nova.network.neutron [-] [instance: 5feeb9de-434b-4ec7-aa99-6da718514c6f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:07:11 compute-1 nova_compute[225855]: 2026-01-20 15:07:11.093 225859 INFO nova.compute.manager [-] [instance: 5feeb9de-434b-4ec7-aa99-6da718514c6f] Took 1.61 seconds to deallocate network for instance.
Jan 20 15:07:11 compute-1 nova_compute[225855]: 2026-01-20 15:07:11.205 225859 DEBUG nova.compute.manager [req-1c84b992-2071-4344-8efa-5dd5f6f9f46d req-e91bb6b5-196a-4585-a0cd-a854da354926 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 5feeb9de-434b-4ec7-aa99-6da718514c6f] Received event network-vif-deleted-70668adb-f9ad-41cb-8eac-2e0aba32bf22 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:07:11 compute-1 nova_compute[225855]: 2026-01-20 15:07:11.383 225859 INFO nova.compute.manager [None req-6697113e-e9a3-43ee-bf8b-903b9d311a88 b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] [instance: 5feeb9de-434b-4ec7-aa99-6da718514c6f] Took 0.29 seconds to detach 1 volumes for instance.
Jan 20 15:07:11 compute-1 nova_compute[225855]: 2026-01-20 15:07:11.385 225859 DEBUG nova.compute.manager [None req-6697113e-e9a3-43ee-bf8b-903b9d311a88 b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] [instance: 5feeb9de-434b-4ec7-aa99-6da718514c6f] Deleting volume: 94300d81-b4ca-4c0a-9283-83b76826d40f _cleanup_volumes /usr/lib/python3.9/site-packages/nova/compute/manager.py:3217
Jan 20 15:07:11 compute-1 nova_compute[225855]: 2026-01-20 15:07:11.422 225859 INFO nova.virt.libvirt.driver [None req-66758f5d-5806-48e6-86ec-5a05118ea96c a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] [instance: 5f8a2718-2106-431c-82c1-2609a52e7fb2] Creating config drive at /var/lib/nova/instances/5f8a2718-2106-431c-82c1-2609a52e7fb2/disk.config.rescue
Jan 20 15:07:11 compute-1 nova_compute[225855]: 2026-01-20 15:07:11.427 225859 DEBUG oslo_concurrency.processutils [None req-66758f5d-5806-48e6-86ec-5a05118ea96c a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/5f8a2718-2106-431c-82c1-2609a52e7fb2/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpl297ss98 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:07:11 compute-1 ceph-mon[81775]: pgmap v2498: 321 pgs: 321 active+clean; 667 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 3.9 MiB/s wr, 220 op/s
Jan 20 15:07:11 compute-1 nova_compute[225855]: 2026-01-20 15:07:11.561 225859 DEBUG oslo_concurrency.processutils [None req-66758f5d-5806-48e6-86ec-5a05118ea96c a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/5f8a2718-2106-431c-82c1-2609a52e7fb2/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpl297ss98" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:07:11 compute-1 nova_compute[225855]: 2026-01-20 15:07:11.586 225859 DEBUG nova.storage.rbd_utils [None req-66758f5d-5806-48e6-86ec-5a05118ea96c a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] rbd image 5f8a2718-2106-431c-82c1-2609a52e7fb2_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:07:11 compute-1 nova_compute[225855]: 2026-01-20 15:07:11.590 225859 DEBUG oslo_concurrency.processutils [None req-66758f5d-5806-48e6-86ec-5a05118ea96c a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/5f8a2718-2106-431c-82c1-2609a52e7fb2/disk.config.rescue 5f8a2718-2106-431c-82c1-2609a52e7fb2_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:07:11 compute-1 nova_compute[225855]: 2026-01-20 15:07:11.616 225859 DEBUG nova.network.neutron [req-545408b4-58ab-4abc-b542-a6874fa00bf3 req-9d3fdaa7-2978-4034-b6b0-df47a2cb3cca 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 5feeb9de-434b-4ec7-aa99-6da718514c6f] Updated VIF entry in instance network info cache for port 70668adb-f9ad-41cb-8eac-2e0aba32bf22. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 15:07:11 compute-1 nova_compute[225855]: 2026-01-20 15:07:11.617 225859 DEBUG nova.network.neutron [req-545408b4-58ab-4abc-b542-a6874fa00bf3 req-9d3fdaa7-2978-4034-b6b0-df47a2cb3cca 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 5feeb9de-434b-4ec7-aa99-6da718514c6f] Updating instance_info_cache with network_info: [{"id": "70668adb-f9ad-41cb-8eac-2e0aba32bf22", "address": "fa:16:3e:6a:c0:d3", "network": {"id": "0f434e83-45c8-454d-820b-af39b696a1d5", "bridge": "br-int", "label": "tempest-TestShelveInstance-1862824485-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0fc924d2df984301897e81920c5e192f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap70668adb-f9", "ovs_interfaceid": "70668adb-f9ad-41cb-8eac-2e0aba32bf22", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:07:11 compute-1 nova_compute[225855]: 2026-01-20 15:07:11.619 225859 DEBUG oslo_concurrency.lockutils [None req-6697113e-e9a3-43ee-bf8b-903b9d311a88 b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:07:11 compute-1 nova_compute[225855]: 2026-01-20 15:07:11.619 225859 DEBUG oslo_concurrency.lockutils [None req-6697113e-e9a3-43ee-bf8b-903b9d311a88 b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:07:11 compute-1 nova_compute[225855]: 2026-01-20 15:07:11.639 225859 DEBUG oslo_concurrency.lockutils [req-545408b4-58ab-4abc-b542-a6874fa00bf3 req-9d3fdaa7-2978-4034-b6b0-df47a2cb3cca 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-5feeb9de-434b-4ec7-aa99-6da718514c6f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:07:11 compute-1 nova_compute[225855]: 2026-01-20 15:07:11.729 225859 DEBUG oslo_concurrency.processutils [None req-6697113e-e9a3-43ee-bf8b-903b9d311a88 b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:07:11 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 20 15:07:11 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1005697138' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:07:11 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 20 15:07:11 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1005697138' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:07:11 compute-1 nova_compute[225855]: 2026-01-20 15:07:11.916 225859 DEBUG oslo_concurrency.processutils [None req-66758f5d-5806-48e6-86ec-5a05118ea96c a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/5f8a2718-2106-431c-82c1-2609a52e7fb2/disk.config.rescue 5f8a2718-2106-431c-82c1-2609a52e7fb2_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.327s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:07:11 compute-1 nova_compute[225855]: 2026-01-20 15:07:11.917 225859 INFO nova.virt.libvirt.driver [None req-66758f5d-5806-48e6-86ec-5a05118ea96c a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] [instance: 5f8a2718-2106-431c-82c1-2609a52e7fb2] Deleting local config drive /var/lib/nova/instances/5f8a2718-2106-431c-82c1-2609a52e7fb2/disk.config.rescue because it was imported into RBD.
Jan 20 15:07:11 compute-1 NetworkManager[49104]: <info>  [1768921631.9743] manager: (tap6e7af943-7e): new Tun device (/org/freedesktop/NetworkManager/Devices/297)
Jan 20 15:07:11 compute-1 kernel: tap6e7af943-7e: entered promiscuous mode
Jan 20 15:07:11 compute-1 ovn_controller[130490]: 2026-01-20T15:07:11Z|00707|binding|INFO|Claiming lport 6e7af943-7ef0-441d-a402-bd595082f98e for this chassis.
Jan 20 15:07:11 compute-1 ovn_controller[130490]: 2026-01-20T15:07:11Z|00708|binding|INFO|6e7af943-7ef0-441d-a402-bd595082f98e: Claiming fa:16:3e:21:8b:e2 10.100.0.13
Jan 20 15:07:11 compute-1 nova_compute[225855]: 2026-01-20 15:07:11.984 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:07:11 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:07:11.990 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:21:8b:e2 10.100.0.13'], port_security=['fa:16:3e:21:8b:e2 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '5f8a2718-2106-431c-82c1-2609a52e7fb2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5bac39b9-563a-456f-9168-fd10b1b28c21', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5e161d5a47f845fd89eb3f10627a0830', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'cd72b979-cfcf-4dbd-bbff-8e22cd1b4096', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c853729a-de72-4ddb-be59-bc41e08984ce, chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], logical_port=6e7af943-7ef0-441d-a402-bd595082f98e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:07:11 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:07:11.992 140354 INFO neutron.agent.ovn.metadata.agent [-] Port 6e7af943-7ef0-441d-a402-bd595082f98e in datapath 5bac39b9-563a-456f-9168-fd10b1b28c21 bound to our chassis
Jan 20 15:07:11 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:07:11.994 140354 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 5bac39b9-563a-456f-9168-fd10b1b28c21 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Jan 20 15:07:11 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:07:11.995 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[21d4b404-b079-4df1-8c3a-e1402f6f76b2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:07:12 compute-1 ovn_controller[130490]: 2026-01-20T15:07:12Z|00709|binding|INFO|Setting lport 6e7af943-7ef0-441d-a402-bd595082f98e up in Southbound
Jan 20 15:07:12 compute-1 ovn_controller[130490]: 2026-01-20T15:07:12Z|00710|binding|INFO|Setting lport 6e7af943-7ef0-441d-a402-bd595082f98e ovn-installed in OVS
Jan 20 15:07:12 compute-1 systemd-machined[194361]: New machine qemu-83-instance-000000a5.
Jan 20 15:07:12 compute-1 nova_compute[225855]: 2026-01-20 15:07:12.014 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:07:12 compute-1 systemd[1]: Started Virtual Machine qemu-83-instance-000000a5.
Jan 20 15:07:12 compute-1 systemd-udevd[293051]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 15:07:12 compute-1 NetworkManager[49104]: <info>  [1768921632.0639] device (tap6e7af943-7e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 20 15:07:12 compute-1 NetworkManager[49104]: <info>  [1768921632.0651] device (tap6e7af943-7e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 20 15:07:12 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:07:12 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1766338785' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:07:12 compute-1 nova_compute[225855]: 2026-01-20 15:07:12.179 225859 DEBUG oslo_concurrency.processutils [None req-6697113e-e9a3-43ee-bf8b-903b9d311a88 b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:07:12 compute-1 nova_compute[225855]: 2026-01-20 15:07:12.185 225859 DEBUG nova.compute.provider_tree [None req-6697113e-e9a3-43ee-bf8b-903b9d311a88 b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:07:12 compute-1 nova_compute[225855]: 2026-01-20 15:07:12.211 225859 DEBUG nova.scheduler.client.report [None req-6697113e-e9a3-43ee-bf8b-903b9d311a88 b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:07:12 compute-1 nova_compute[225855]: 2026-01-20 15:07:12.237 225859 DEBUG oslo_concurrency.lockutils [None req-6697113e-e9a3-43ee-bf8b-903b9d311a88 b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.618s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:07:12 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:07:12 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:07:12 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:07:12.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:07:12 compute-1 nova_compute[225855]: 2026-01-20 15:07:12.296 225859 INFO nova.scheduler.client.report [None req-6697113e-e9a3-43ee-bf8b-903b9d311a88 b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] Deleted allocations for instance 5feeb9de-434b-4ec7-aa99-6da718514c6f
Jan 20 15:07:12 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 20 15:07:12 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3016473623' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:07:12 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 20 15:07:12 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3016473623' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:07:12 compute-1 nova_compute[225855]: 2026-01-20 15:07:12.389 225859 DEBUG oslo_concurrency.lockutils [None req-6697113e-e9a3-43ee-bf8b-903b9d311a88 b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] Lock "5feeb9de-434b-4ec7-aa99-6da718514c6f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.511s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:07:12 compute-1 nova_compute[225855]: 2026-01-20 15:07:12.439 225859 DEBUG nova.compute.manager [req-5330001c-adbf-4d70-881f-ddbd6f048873 req-d26baadf-b86a-49ec-bdbd-2b1c4395c2ec 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 5f8a2718-2106-431c-82c1-2609a52e7fb2] Received event network-vif-plugged-6e7af943-7ef0-441d-a402-bd595082f98e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:07:12 compute-1 nova_compute[225855]: 2026-01-20 15:07:12.439 225859 DEBUG oslo_concurrency.lockutils [req-5330001c-adbf-4d70-881f-ddbd6f048873 req-d26baadf-b86a-49ec-bdbd-2b1c4395c2ec 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "5f8a2718-2106-431c-82c1-2609a52e7fb2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:07:12 compute-1 nova_compute[225855]: 2026-01-20 15:07:12.439 225859 DEBUG oslo_concurrency.lockutils [req-5330001c-adbf-4d70-881f-ddbd6f048873 req-d26baadf-b86a-49ec-bdbd-2b1c4395c2ec 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "5f8a2718-2106-431c-82c1-2609a52e7fb2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:07:12 compute-1 nova_compute[225855]: 2026-01-20 15:07:12.441 225859 DEBUG oslo_concurrency.lockutils [req-5330001c-adbf-4d70-881f-ddbd6f048873 req-d26baadf-b86a-49ec-bdbd-2b1c4395c2ec 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "5f8a2718-2106-431c-82c1-2609a52e7fb2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:07:12 compute-1 nova_compute[225855]: 2026-01-20 15:07:12.441 225859 DEBUG nova.compute.manager [req-5330001c-adbf-4d70-881f-ddbd6f048873 req-d26baadf-b86a-49ec-bdbd-2b1c4395c2ec 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 5f8a2718-2106-431c-82c1-2609a52e7fb2] No waiting events found dispatching network-vif-plugged-6e7af943-7ef0-441d-a402-bd595082f98e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:07:12 compute-1 nova_compute[225855]: 2026-01-20 15:07:12.442 225859 WARNING nova.compute.manager [req-5330001c-adbf-4d70-881f-ddbd6f048873 req-d26baadf-b86a-49ec-bdbd-2b1c4395c2ec 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 5f8a2718-2106-431c-82c1-2609a52e7fb2] Received unexpected event network-vif-plugged-6e7af943-7ef0-441d-a402-bd595082f98e for instance with vm_state active and task_state rescuing.
Jan 20 15:07:12 compute-1 nova_compute[225855]: 2026-01-20 15:07:12.442 225859 DEBUG nova.compute.manager [req-5330001c-adbf-4d70-881f-ddbd6f048873 req-d26baadf-b86a-49ec-bdbd-2b1c4395c2ec 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 5f8a2718-2106-431c-82c1-2609a52e7fb2] Received event network-vif-plugged-6e7af943-7ef0-441d-a402-bd595082f98e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:07:12 compute-1 nova_compute[225855]: 2026-01-20 15:07:12.442 225859 DEBUG oslo_concurrency.lockutils [req-5330001c-adbf-4d70-881f-ddbd6f048873 req-d26baadf-b86a-49ec-bdbd-2b1c4395c2ec 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "5f8a2718-2106-431c-82c1-2609a52e7fb2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:07:12 compute-1 nova_compute[225855]: 2026-01-20 15:07:12.442 225859 DEBUG oslo_concurrency.lockutils [req-5330001c-adbf-4d70-881f-ddbd6f048873 req-d26baadf-b86a-49ec-bdbd-2b1c4395c2ec 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "5f8a2718-2106-431c-82c1-2609a52e7fb2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:07:12 compute-1 nova_compute[225855]: 2026-01-20 15:07:12.443 225859 DEBUG oslo_concurrency.lockutils [req-5330001c-adbf-4d70-881f-ddbd6f048873 req-d26baadf-b86a-49ec-bdbd-2b1c4395c2ec 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "5f8a2718-2106-431c-82c1-2609a52e7fb2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:07:12 compute-1 nova_compute[225855]: 2026-01-20 15:07:12.443 225859 DEBUG nova.compute.manager [req-5330001c-adbf-4d70-881f-ddbd6f048873 req-d26baadf-b86a-49ec-bdbd-2b1c4395c2ec 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 5f8a2718-2106-431c-82c1-2609a52e7fb2] No waiting events found dispatching network-vif-plugged-6e7af943-7ef0-441d-a402-bd595082f98e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:07:12 compute-1 nova_compute[225855]: 2026-01-20 15:07:12.443 225859 WARNING nova.compute.manager [req-5330001c-adbf-4d70-881f-ddbd6f048873 req-d26baadf-b86a-49ec-bdbd-2b1c4395c2ec 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 5f8a2718-2106-431c-82c1-2609a52e7fb2] Received unexpected event network-vif-plugged-6e7af943-7ef0-441d-a402-bd595082f98e for instance with vm_state active and task_state rescuing.
Jan 20 15:07:12 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/1005697138' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:07:12 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/1005697138' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:07:12 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/1766338785' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:07:12 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/3016473623' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:07:12 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/3016473623' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:07:12 compute-1 nova_compute[225855]: 2026-01-20 15:07:12.486 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:07:12 compute-1 ceph-mon[81775]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #118. Immutable memtables: 0.
Jan 20 15:07:12 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:07:12.488714) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 20 15:07:12 compute-1 ceph-mon[81775]: rocksdb: [db/flush_job.cc:856] [default] [JOB 73] Flushing memtable with next log file: 118
Jan 20 15:07:12 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768921632488783, "job": 73, "event": "flush_started", "num_memtables": 1, "num_entries": 798, "num_deletes": 251, "total_data_size": 1410236, "memory_usage": 1432560, "flush_reason": "Manual Compaction"}
Jan 20 15:07:12 compute-1 ceph-mon[81775]: rocksdb: [db/flush_job.cc:885] [default] [JOB 73] Level-0 flush table #119: started
Jan 20 15:07:12 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768921632503019, "cf_name": "default", "job": 73, "event": "table_file_creation", "file_number": 119, "file_size": 930086, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 59908, "largest_seqno": 60701, "table_properties": {"data_size": 926352, "index_size": 1514, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1157, "raw_key_size": 8947, "raw_average_key_size": 19, "raw_value_size": 918689, "raw_average_value_size": 2028, "num_data_blocks": 67, "num_entries": 453, "num_filter_entries": 453, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768921581, "oldest_key_time": 1768921581, "file_creation_time": 1768921632, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1539d774-8a6f-4e48-b253-137c44586344", "db_session_id": "LFF7G2OZDOU7TKQ8MKAH", "orig_file_number": 119, "seqno_to_time_mapping": "N/A"}}
Jan 20 15:07:12 compute-1 ceph-mon[81775]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 73] Flush lasted 14328 microseconds, and 2940 cpu microseconds.
Jan 20 15:07:12 compute-1 ceph-mon[81775]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 15:07:12 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:07:12.503049) [db/flush_job.cc:967] [default] [JOB 73] Level-0 flush table #119: 930086 bytes OK
Jan 20 15:07:12 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:07:12.503115) [db/memtable_list.cc:519] [default] Level-0 commit table #119 started
Jan 20 15:07:12 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:07:12.505356) [db/memtable_list.cc:722] [default] Level-0 commit table #119: memtable #1 done
Jan 20 15:07:12 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:07:12.505368) EVENT_LOG_v1 {"time_micros": 1768921632505365, "job": 73, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 20 15:07:12 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:07:12.505386) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 20 15:07:12 compute-1 ceph-mon[81775]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 73] Try to delete WAL files size 1406019, prev total WAL file size 1406019, number of live WAL files 2.
Jan 20 15:07:12 compute-1 ceph-mon[81775]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000115.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 15:07:12 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:07:12.505897) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035303230' seq:72057594037927935, type:22 .. '7061786F730035323732' seq:0, type:0; will stop at (end)
Jan 20 15:07:12 compute-1 ceph-mon[81775]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 74] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 20 15:07:12 compute-1 ceph-mon[81775]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 73 Base level 0, inputs: [119(908KB)], [117(12MB)]
Jan 20 15:07:12 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768921632505929, "job": 74, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [119], "files_L6": [117], "score": -1, "input_data_size": 13519081, "oldest_snapshot_seqno": -1}
Jan 20 15:07:12 compute-1 ceph-mon[81775]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 74] Generated table #120: 8534 keys, 11606337 bytes, temperature: kUnknown
Jan 20 15:07:12 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768921632654993, "cf_name": "default", "job": 74, "event": "table_file_creation", "file_number": 120, "file_size": 11606337, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11549916, "index_size": 33973, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 21381, "raw_key_size": 222268, "raw_average_key_size": 26, "raw_value_size": 11398608, "raw_average_value_size": 1335, "num_data_blocks": 1323, "num_entries": 8534, "num_filter_entries": 8534, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768917474, "oldest_key_time": 0, "file_creation_time": 1768921632, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1539d774-8a6f-4e48-b253-137c44586344", "db_session_id": "LFF7G2OZDOU7TKQ8MKAH", "orig_file_number": 120, "seqno_to_time_mapping": "N/A"}}
Jan 20 15:07:12 compute-1 ceph-mon[81775]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 15:07:12 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:07:12.655232) [db/compaction/compaction_job.cc:1663] [default] [JOB 74] Compacted 1@0 + 1@6 files to L6 => 11606337 bytes
Jan 20 15:07:12 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:07:12.656850) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 90.6 rd, 77.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.9, 12.0 +0.0 blob) out(11.1 +0.0 blob), read-write-amplify(27.0) write-amplify(12.5) OK, records in: 9049, records dropped: 515 output_compression: NoCompression
Jan 20 15:07:12 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:07:12.656881) EVENT_LOG_v1 {"time_micros": 1768921632656873, "job": 74, "event": "compaction_finished", "compaction_time_micros": 149145, "compaction_time_cpu_micros": 26851, "output_level": 6, "num_output_files": 1, "total_output_size": 11606337, "num_input_records": 9049, "num_output_records": 8534, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 20 15:07:12 compute-1 ceph-mon[81775]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000119.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 15:07:12 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768921632657255, "job": 74, "event": "table_file_deletion", "file_number": 119}
Jan 20 15:07:12 compute-1 ceph-mon[81775]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000117.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 15:07:12 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768921632659741, "job": 74, "event": "table_file_deletion", "file_number": 117}
Jan 20 15:07:12 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:07:12.505820) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:07:12 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:07:12.659793) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:07:12 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:07:12.659796) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:07:12 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:07:12.659798) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:07:12 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:07:12.659799) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:07:12 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:07:12.659801) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:07:12 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:07:12 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:07:12 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:07:12.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:07:12 compute-1 nova_compute[225855]: 2026-01-20 15:07:12.850 225859 DEBUG nova.virt.libvirt.host [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Removed pending event for 5f8a2718-2106-431c-82c1-2609a52e7fb2 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Jan 20 15:07:12 compute-1 nova_compute[225855]: 2026-01-20 15:07:12.851 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768921632.8498917, 5f8a2718-2106-431c-82c1-2609a52e7fb2 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:07:12 compute-1 nova_compute[225855]: 2026-01-20 15:07:12.851 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 5f8a2718-2106-431c-82c1-2609a52e7fb2] VM Resumed (Lifecycle Event)
Jan 20 15:07:12 compute-1 nova_compute[225855]: 2026-01-20 15:07:12.856 225859 DEBUG nova.compute.manager [None req-66758f5d-5806-48e6-86ec-5a05118ea96c a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] [instance: 5f8a2718-2106-431c-82c1-2609a52e7fb2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:07:12 compute-1 nova_compute[225855]: 2026-01-20 15:07:12.886 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 5f8a2718-2106-431c-82c1-2609a52e7fb2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:07:12 compute-1 nova_compute[225855]: 2026-01-20 15:07:12.889 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 5f8a2718-2106-431c-82c1-2609a52e7fb2] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 15:07:12 compute-1 nova_compute[225855]: 2026-01-20 15:07:12.918 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 5f8a2718-2106-431c-82c1-2609a52e7fb2] During sync_power_state the instance has a pending task (rescuing). Skip.
Jan 20 15:07:12 compute-1 nova_compute[225855]: 2026-01-20 15:07:12.918 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768921632.8517895, 5f8a2718-2106-431c-82c1-2609a52e7fb2 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:07:12 compute-1 nova_compute[225855]: 2026-01-20 15:07:12.918 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 5f8a2718-2106-431c-82c1-2609a52e7fb2] VM Started (Lifecycle Event)
Jan 20 15:07:12 compute-1 nova_compute[225855]: 2026-01-20 15:07:12.946 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 5f8a2718-2106-431c-82c1-2609a52e7fb2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:07:12 compute-1 nova_compute[225855]: 2026-01-20 15:07:12.949 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 5f8a2718-2106-431c-82c1-2609a52e7fb2] Synchronizing instance power state after lifecycle event "Started"; current vm_state: rescued, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 15:07:13 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e366 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:07:13 compute-1 ceph-mon[81775]: pgmap v2499: 321 pgs: 321 active+clean; 690 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.4 MiB/s rd, 4.0 MiB/s wr, 152 op/s
Jan 20 15:07:13 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e367 e367: 3 total, 3 up, 3 in
Jan 20 15:07:14 compute-1 podman[293122]: 2026-01-20 15:07:14.015308064 +0000 UTC m=+0.052950754 container health_status 533cd8e0c3b8b7f910f15cef49e7f879ab46c12ce646a35628c15f13725eb500 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Jan 20 15:07:14 compute-1 nova_compute[225855]: 2026-01-20 15:07:14.146 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:07:14 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:07:14 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:07:14 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:07:14.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:07:14 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e368 e368: 3 total, 3 up, 3 in
Jan 20 15:07:14 compute-1 ceph-mon[81775]: osdmap e367: 3 total, 3 up, 3 in
Jan 20 15:07:14 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/2028028369' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:07:14 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/2028028369' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:07:14 compute-1 ceph-mon[81775]: pgmap v2501: 321 pgs: 3 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 316 active+clean; 677 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.4 MiB/s rd, 4.9 MiB/s wr, 189 op/s
Jan 20 15:07:14 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:07:14 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:07:14 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:07:14.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:07:14 compute-1 nova_compute[225855]: 2026-01-20 15:07:14.924 225859 INFO nova.compute.manager [None req-e7e5be06-3ee0-449a-a5a4-42a0cfebbe4c a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] [instance: 5f8a2718-2106-431c-82c1-2609a52e7fb2] Unrescuing
Jan 20 15:07:14 compute-1 nova_compute[225855]: 2026-01-20 15:07:14.925 225859 DEBUG oslo_concurrency.lockutils [None req-e7e5be06-3ee0-449a-a5a4-42a0cfebbe4c a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] Acquiring lock "refresh_cache-5f8a2718-2106-431c-82c1-2609a52e7fb2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:07:14 compute-1 nova_compute[225855]: 2026-01-20 15:07:14.925 225859 DEBUG oslo_concurrency.lockutils [None req-e7e5be06-3ee0-449a-a5a4-42a0cfebbe4c a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] Acquired lock "refresh_cache-5f8a2718-2106-431c-82c1-2609a52e7fb2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:07:14 compute-1 nova_compute[225855]: 2026-01-20 15:07:14.925 225859 DEBUG nova.network.neutron [None req-e7e5be06-3ee0-449a-a5a4-42a0cfebbe4c a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] [instance: 5f8a2718-2106-431c-82c1-2609a52e7fb2] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 15:07:15 compute-1 ceph-mon[81775]: osdmap e368: 3 total, 3 up, 3 in
Jan 20 15:07:16 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:07:16 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:07:16 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:07:16.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:07:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:07:16.298 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=53, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '12:bb:42', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '06:92:24:f7:15:56'}, ipsec=False) old=SB_Global(nb_cfg=52) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:07:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:07:16.299 140354 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 20 15:07:16 compute-1 nova_compute[225855]: 2026-01-20 15:07:16.299 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:07:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:07:16.426 140354 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:07:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:07:16.427 140354 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:07:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:07:16.427 140354 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:07:16 compute-1 ceph-mon[81775]: pgmap v2503: 321 pgs: 3 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 316 active+clean; 559 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 5.9 MiB/s wr, 375 op/s
Jan 20 15:07:16 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:07:16 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:07:16 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:07:16.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:07:17 compute-1 nova_compute[225855]: 2026-01-20 15:07:17.224 225859 DEBUG nova.compute.manager [req-217159b0-8069-4aa8-b849-e25238e89e71 req-5fe5a655-30ec-4e9d-a814-d83d9f3d08a9 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 33ba7a73-3233-40a3-a49a-e5bbd604dc3c] Received event network-changed-070862f1-1db2-45c2-9787-752e6d88449a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:07:17 compute-1 nova_compute[225855]: 2026-01-20 15:07:17.225 225859 DEBUG nova.compute.manager [req-217159b0-8069-4aa8-b849-e25238e89e71 req-5fe5a655-30ec-4e9d-a814-d83d9f3d08a9 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 33ba7a73-3233-40a3-a49a-e5bbd604dc3c] Refreshing instance network info cache due to event network-changed-070862f1-1db2-45c2-9787-752e6d88449a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 15:07:17 compute-1 nova_compute[225855]: 2026-01-20 15:07:17.225 225859 DEBUG oslo_concurrency.lockutils [req-217159b0-8069-4aa8-b849-e25238e89e71 req-5fe5a655-30ec-4e9d-a814-d83d9f3d08a9 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-33ba7a73-3233-40a3-a49a-e5bbd604dc3c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:07:17 compute-1 nova_compute[225855]: 2026-01-20 15:07:17.225 225859 DEBUG oslo_concurrency.lockutils [req-217159b0-8069-4aa8-b849-e25238e89e71 req-5fe5a655-30ec-4e9d-a814-d83d9f3d08a9 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-33ba7a73-3233-40a3-a49a-e5bbd604dc3c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:07:17 compute-1 nova_compute[225855]: 2026-01-20 15:07:17.225 225859 DEBUG nova.network.neutron [req-217159b0-8069-4aa8-b849-e25238e89e71 req-5fe5a655-30ec-4e9d-a814-d83d9f3d08a9 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 33ba7a73-3233-40a3-a49a-e5bbd604dc3c] Refreshing network info cache for port 070862f1-1db2-45c2-9787-752e6d88449a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 15:07:17 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:07:17.301 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5ffd4ac3-9266-4927-98ad-20a17782c725, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '53'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:07:17 compute-1 nova_compute[225855]: 2026-01-20 15:07:17.328 225859 DEBUG oslo_concurrency.lockutils [None req-b7e9202a-dde3-4024-80ac-056613d9de92 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] Acquiring lock "33ba7a73-3233-40a3-a49a-e5bbd604dc3c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:07:17 compute-1 nova_compute[225855]: 2026-01-20 15:07:17.328 225859 DEBUG oslo_concurrency.lockutils [None req-b7e9202a-dde3-4024-80ac-056613d9de92 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] Lock "33ba7a73-3233-40a3-a49a-e5bbd604dc3c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:07:17 compute-1 nova_compute[225855]: 2026-01-20 15:07:17.329 225859 DEBUG oslo_concurrency.lockutils [None req-b7e9202a-dde3-4024-80ac-056613d9de92 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] Acquiring lock "33ba7a73-3233-40a3-a49a-e5bbd604dc3c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:07:17 compute-1 nova_compute[225855]: 2026-01-20 15:07:17.329 225859 DEBUG oslo_concurrency.lockutils [None req-b7e9202a-dde3-4024-80ac-056613d9de92 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] Lock "33ba7a73-3233-40a3-a49a-e5bbd604dc3c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:07:17 compute-1 nova_compute[225855]: 2026-01-20 15:07:17.329 225859 DEBUG oslo_concurrency.lockutils [None req-b7e9202a-dde3-4024-80ac-056613d9de92 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] Lock "33ba7a73-3233-40a3-a49a-e5bbd604dc3c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:07:17 compute-1 nova_compute[225855]: 2026-01-20 15:07:17.330 225859 INFO nova.compute.manager [None req-b7e9202a-dde3-4024-80ac-056613d9de92 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] [instance: 33ba7a73-3233-40a3-a49a-e5bbd604dc3c] Terminating instance
Jan 20 15:07:17 compute-1 nova_compute[225855]: 2026-01-20 15:07:17.331 225859 DEBUG nova.compute.manager [None req-b7e9202a-dde3-4024-80ac-056613d9de92 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] [instance: 33ba7a73-3233-40a3-a49a-e5bbd604dc3c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 20 15:07:17 compute-1 kernel: tap070862f1-1d (unregistering): left promiscuous mode
Jan 20 15:07:17 compute-1 NetworkManager[49104]: <info>  [1768921637.4007] device (tap070862f1-1d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 20 15:07:17 compute-1 ovn_controller[130490]: 2026-01-20T15:07:17Z|00711|binding|INFO|Releasing lport 070862f1-1db2-45c2-9787-752e6d88449a from this chassis (sb_readonly=0)
Jan 20 15:07:17 compute-1 ovn_controller[130490]: 2026-01-20T15:07:17Z|00712|binding|INFO|Setting lport 070862f1-1db2-45c2-9787-752e6d88449a down in Southbound
Jan 20 15:07:17 compute-1 ovn_controller[130490]: 2026-01-20T15:07:17Z|00713|binding|INFO|Removing iface tap070862f1-1d ovn-installed in OVS
Jan 20 15:07:17 compute-1 nova_compute[225855]: 2026-01-20 15:07:17.410 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:07:17 compute-1 nova_compute[225855]: 2026-01-20 15:07:17.413 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:07:17 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:07:17.419 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e5:e7:09 10.100.0.7'], port_security=['fa:16:3e:e5:e7:09 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '33ba7a73-3233-40a3-a49a-e5bbd604dc3c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8472bae1-476b-4100-b9fa-e8827bc4f7bf', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e142d118583b4f9ba3531bcf3838e256', 'neutron:revision_number': '4', 'neutron:security_group_ids': '37efc868-18af-48b7-8d56-e37fd1ec4df0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f9deb561-4473-4aa7-8b6f-d70e20e7cf6d, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], logical_port=070862f1-1db2-45c2-9787-752e6d88449a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:07:17 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:07:17.421 140354 INFO neutron.agent.ovn.metadata.agent [-] Port 070862f1-1db2-45c2-9787-752e6d88449a in datapath 8472bae1-476b-4100-b9fa-e8827bc4f7bf unbound from our chassis
Jan 20 15:07:17 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:07:17.423 140354 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8472bae1-476b-4100-b9fa-e8827bc4f7bf, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 20 15:07:17 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:07:17.424 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[fb6807c5-8b88-4ea3-a10b-ed83a592a9f3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:07:17 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:07:17.425 140354 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-8472bae1-476b-4100-b9fa-e8827bc4f7bf namespace which is not needed anymore
Jan 20 15:07:17 compute-1 nova_compute[225855]: 2026-01-20 15:07:17.428 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:07:17 compute-1 systemd[1]: machine-qemu\x2d80\x2dinstance\x2d000000a1.scope: Deactivated successfully.
Jan 20 15:07:17 compute-1 systemd[1]: machine-qemu\x2d80\x2dinstance\x2d000000a1.scope: Consumed 17.971s CPU time.
Jan 20 15:07:17 compute-1 systemd-machined[194361]: Machine qemu-80-instance-000000a1 terminated.
Jan 20 15:07:17 compute-1 nova_compute[225855]: 2026-01-20 15:07:17.488 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:07:17 compute-1 nova_compute[225855]: 2026-01-20 15:07:17.572 225859 INFO nova.virt.libvirt.driver [-] [instance: 33ba7a73-3233-40a3-a49a-e5bbd604dc3c] Instance destroyed successfully.
Jan 20 15:07:17 compute-1 nova_compute[225855]: 2026-01-20 15:07:17.573 225859 DEBUG nova.objects.instance [None req-b7e9202a-dde3-4024-80ac-056613d9de92 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] Lazy-loading 'resources' on Instance uuid 33ba7a73-3233-40a3-a49a-e5bbd604dc3c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:07:17 compute-1 neutron-haproxy-ovnmeta-8472bae1-476b-4100-b9fa-e8827bc4f7bf[290899]: [NOTICE]   (290921) : haproxy version is 2.8.14-c23fe91
Jan 20 15:07:17 compute-1 neutron-haproxy-ovnmeta-8472bae1-476b-4100-b9fa-e8827bc4f7bf[290899]: [NOTICE]   (290921) : path to executable is /usr/sbin/haproxy
Jan 20 15:07:17 compute-1 neutron-haproxy-ovnmeta-8472bae1-476b-4100-b9fa-e8827bc4f7bf[290899]: [WARNING]  (290921) : Exiting Master process...
Jan 20 15:07:17 compute-1 neutron-haproxy-ovnmeta-8472bae1-476b-4100-b9fa-e8827bc4f7bf[290899]: [WARNING]  (290921) : Exiting Master process...
Jan 20 15:07:17 compute-1 neutron-haproxy-ovnmeta-8472bae1-476b-4100-b9fa-e8827bc4f7bf[290899]: [ALERT]    (290921) : Current worker (290938) exited with code 143 (Terminated)
Jan 20 15:07:17 compute-1 neutron-haproxy-ovnmeta-8472bae1-476b-4100-b9fa-e8827bc4f7bf[290899]: [WARNING]  (290921) : All workers exited. Exiting... (0)
Jan 20 15:07:17 compute-1 nova_compute[225855]: 2026-01-20 15:07:17.593 225859 DEBUG nova.virt.libvirt.vif [None req-b7e9202a-dde3-4024-80ac-056613d9de92 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T15:05:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestStampPattern-server-2111868448',display_name='tempest-TestStampPattern-server-2111868448',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-teststamppattern-server-2111868448',id=161,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHW99EAKkcMHbb6foGeGxm9beD/C9AeSuQLW3fqIuoocya0hep1/utcjh4cUxZzvt5K+5yMQG3K45jiLKihqKM6cawBqTQvgzcywKN5pk06AjS3tvq9GuiAvDAys6caVkA==',key_name='tempest-TestStampPattern-1928143162',keypairs=<?>,launch_index=0,launched_at=2026-01-20T15:05:30Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='e142d118583b4f9ba3531bcf3838e256',ramdisk_id='',reservation_id='r-7ei3hy41',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestStampPattern-487600181',owner_user_name='tempest-TestStampPattern-487600181-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T15:06:06Z,user_data=None,user_id='bc554998e71a4322bdd27ac727a9044c',uuid=33ba7a73-3233-40a3-a49a-e5bbd604dc3c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "070862f1-1db2-45c2-9787-752e6d88449a", "address": "fa:16:3e:e5:e7:09", "network": {"id": "8472bae1-476b-4100-b9fa-e8827bc4f7bf", "bridge": "br-int", "label": "tempest-TestStampPattern-1138931002-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.208", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e142d118583b4f9ba3531bcf3838e256", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap070862f1-1d", "ovs_interfaceid": "070862f1-1db2-45c2-9787-752e6d88449a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 20 15:07:17 compute-1 nova_compute[225855]: 2026-01-20 15:07:17.594 225859 DEBUG nova.network.os_vif_util [None req-b7e9202a-dde3-4024-80ac-056613d9de92 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] Converting VIF {"id": "070862f1-1db2-45c2-9787-752e6d88449a", "address": "fa:16:3e:e5:e7:09", "network": {"id": "8472bae1-476b-4100-b9fa-e8827bc4f7bf", "bridge": "br-int", "label": "tempest-TestStampPattern-1138931002-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.208", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e142d118583b4f9ba3531bcf3838e256", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap070862f1-1d", "ovs_interfaceid": "070862f1-1db2-45c2-9787-752e6d88449a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 15:07:17 compute-1 systemd[1]: libpod-49cb1fc9eefac14d88a80a448d309dbf7c2e07925d5c789c364c5dddff3d7edd.scope: Deactivated successfully.
Jan 20 15:07:17 compute-1 nova_compute[225855]: 2026-01-20 15:07:17.594 225859 DEBUG nova.network.os_vif_util [None req-b7e9202a-dde3-4024-80ac-056613d9de92 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:e5:e7:09,bridge_name='br-int',has_traffic_filtering=True,id=070862f1-1db2-45c2-9787-752e6d88449a,network=Network(8472bae1-476b-4100-b9fa-e8827bc4f7bf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap070862f1-1d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 15:07:17 compute-1 nova_compute[225855]: 2026-01-20 15:07:17.595 225859 DEBUG os_vif [None req-b7e9202a-dde3-4024-80ac-056613d9de92 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:e5:e7:09,bridge_name='br-int',has_traffic_filtering=True,id=070862f1-1db2-45c2-9787-752e6d88449a,network=Network(8472bae1-476b-4100-b9fa-e8827bc4f7bf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap070862f1-1d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 20 15:07:17 compute-1 nova_compute[225855]: 2026-01-20 15:07:17.597 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:07:17 compute-1 nova_compute[225855]: 2026-01-20 15:07:17.597 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap070862f1-1d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:07:17 compute-1 nova_compute[225855]: 2026-01-20 15:07:17.598 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:07:17 compute-1 nova_compute[225855]: 2026-01-20 15:07:17.601 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 15:07:17 compute-1 podman[293166]: 2026-01-20 15:07:17.601447975 +0000 UTC m=+0.071063427 container died 49cb1fc9eefac14d88a80a448d309dbf7c2e07925d5c789c364c5dddff3d7edd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8472bae1-476b-4100-b9fa-e8827bc4f7bf, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 20 15:07:17 compute-1 nova_compute[225855]: 2026-01-20 15:07:17.603 225859 INFO os_vif [None req-b7e9202a-dde3-4024-80ac-056613d9de92 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:e5:e7:09,bridge_name='br-int',has_traffic_filtering=True,id=070862f1-1db2-45c2-9787-752e6d88449a,network=Network(8472bae1-476b-4100-b9fa-e8827bc4f7bf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap070862f1-1d')
Jan 20 15:07:17 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-49cb1fc9eefac14d88a80a448d309dbf7c2e07925d5c789c364c5dddff3d7edd-userdata-shm.mount: Deactivated successfully.
Jan 20 15:07:17 compute-1 systemd[1]: var-lib-containers-storage-overlay-5103777890e53183e892eb29a7309526c22c2d478bcad4b3e03ce4ad7bb2fbd2-merged.mount: Deactivated successfully.
Jan 20 15:07:17 compute-1 podman[293166]: 2026-01-20 15:07:17.648642234 +0000 UTC m=+0.118257686 container cleanup 49cb1fc9eefac14d88a80a448d309dbf7c2e07925d5c789c364c5dddff3d7edd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8472bae1-476b-4100-b9fa-e8827bc4f7bf, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3)
Jan 20 15:07:17 compute-1 systemd[1]: libpod-conmon-49cb1fc9eefac14d88a80a448d309dbf7c2e07925d5c789c364c5dddff3d7edd.scope: Deactivated successfully.
Jan 20 15:07:17 compute-1 podman[293225]: 2026-01-20 15:07:17.723169739 +0000 UTC m=+0.047405786 container remove 49cb1fc9eefac14d88a80a448d309dbf7c2e07925d5c789c364c5dddff3d7edd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8472bae1-476b-4100-b9fa-e8827bc4f7bf, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true)
Jan 20 15:07:17 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:07:17.730 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[5fb4145f-118c-4765-8f7a-b21b1a205dfc]: (4, ('Tue Jan 20 03:07:17 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-8472bae1-476b-4100-b9fa-e8827bc4f7bf (49cb1fc9eefac14d88a80a448d309dbf7c2e07925d5c789c364c5dddff3d7edd)\n49cb1fc9eefac14d88a80a448d309dbf7c2e07925d5c789c364c5dddff3d7edd\nTue Jan 20 03:07:17 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-8472bae1-476b-4100-b9fa-e8827bc4f7bf (49cb1fc9eefac14d88a80a448d309dbf7c2e07925d5c789c364c5dddff3d7edd)\n49cb1fc9eefac14d88a80a448d309dbf7c2e07925d5c789c364c5dddff3d7edd\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:07:17 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:07:17.737 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[627e22cd-9394-42be-8175-c375b1c2bc14]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:07:17 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:07:17.738 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8472bae1-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:07:17 compute-1 kernel: tap8472bae1-40: left promiscuous mode
Jan 20 15:07:17 compute-1 nova_compute[225855]: 2026-01-20 15:07:17.740 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:07:17 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:07:17.746 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[6ad1f3b8-593c-4600-9376-4ceeabd5c84f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:07:17 compute-1 nova_compute[225855]: 2026-01-20 15:07:17.759 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:07:17 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:07:17.771 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[f95f5505-1d6f-4765-a39b-3aa549cb8912]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:07:17 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:07:17.773 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[6033c587-585b-45e8-8b4f-73ace45b67dc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:07:17 compute-1 nova_compute[225855]: 2026-01-20 15:07:17.787 225859 DEBUG nova.compute.manager [req-c24f78e0-362e-4656-bda9-7588239ba960 req-bbe01543-8357-45cf-927d-c42af224c790 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 33ba7a73-3233-40a3-a49a-e5bbd604dc3c] Received event network-vif-unplugged-070862f1-1db2-45c2-9787-752e6d88449a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:07:17 compute-1 nova_compute[225855]: 2026-01-20 15:07:17.788 225859 DEBUG oslo_concurrency.lockutils [req-c24f78e0-362e-4656-bda9-7588239ba960 req-bbe01543-8357-45cf-927d-c42af224c790 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "33ba7a73-3233-40a3-a49a-e5bbd604dc3c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:07:17 compute-1 nova_compute[225855]: 2026-01-20 15:07:17.788 225859 DEBUG oslo_concurrency.lockutils [req-c24f78e0-362e-4656-bda9-7588239ba960 req-bbe01543-8357-45cf-927d-c42af224c790 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "33ba7a73-3233-40a3-a49a-e5bbd604dc3c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:07:17 compute-1 nova_compute[225855]: 2026-01-20 15:07:17.788 225859 DEBUG oslo_concurrency.lockutils [req-c24f78e0-362e-4656-bda9-7588239ba960 req-bbe01543-8357-45cf-927d-c42af224c790 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "33ba7a73-3233-40a3-a49a-e5bbd604dc3c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:07:17 compute-1 nova_compute[225855]: 2026-01-20 15:07:17.788 225859 DEBUG nova.compute.manager [req-c24f78e0-362e-4656-bda9-7588239ba960 req-bbe01543-8357-45cf-927d-c42af224c790 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 33ba7a73-3233-40a3-a49a-e5bbd604dc3c] No waiting events found dispatching network-vif-unplugged-070862f1-1db2-45c2-9787-752e6d88449a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:07:17 compute-1 nova_compute[225855]: 2026-01-20 15:07:17.788 225859 DEBUG nova.compute.manager [req-c24f78e0-362e-4656-bda9-7588239ba960 req-bbe01543-8357-45cf-927d-c42af224c790 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 33ba7a73-3233-40a3-a49a-e5bbd604dc3c] Received event network-vif-unplugged-070862f1-1db2-45c2-9787-752e6d88449a for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 20 15:07:17 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:07:17.790 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[4e350b79-f7f8-45a5-b1a7-b006bc7de2c6]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 656864, 'reachable_time': 32489, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 293253, 'error': None, 'target': 'ovnmeta-8472bae1-476b-4100-b9fa-e8827bc4f7bf', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:07:17 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:07:17.792 140466 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-8472bae1-476b-4100-b9fa-e8827bc4f7bf deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 20 15:07:17 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:07:17.792 140466 DEBUG oslo.privsep.daemon [-] privsep: reply[b43bbb6f-f442-4e89-a002-eb5774f4f62e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:07:17 compute-1 systemd[1]: run-netns-ovnmeta\x2d8472bae1\x2d476b\x2d4100\x2db9fa\x2de8827bc4f7bf.mount: Deactivated successfully.
Jan 20 15:07:17 compute-1 sudo[293238]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:07:17 compute-1 sudo[293238]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:07:17 compute-1 sudo[293238]: pam_unix(sudo:session): session closed for user root
Jan 20 15:07:17 compute-1 sudo[293266]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:07:17 compute-1 sudo[293266]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:07:17 compute-1 sudo[293266]: pam_unix(sudo:session): session closed for user root
Jan 20 15:07:18 compute-1 nova_compute[225855]: 2026-01-20 15:07:18.070 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:07:18 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e368 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:07:18 compute-1 nova_compute[225855]: 2026-01-20 15:07:18.106 225859 INFO nova.virt.libvirt.driver [None req-b7e9202a-dde3-4024-80ac-056613d9de92 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] [instance: 33ba7a73-3233-40a3-a49a-e5bbd604dc3c] Deleting instance files /var/lib/nova/instances/33ba7a73-3233-40a3-a49a-e5bbd604dc3c_del
Jan 20 15:07:18 compute-1 nova_compute[225855]: 2026-01-20 15:07:18.107 225859 INFO nova.virt.libvirt.driver [None req-b7e9202a-dde3-4024-80ac-056613d9de92 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] [instance: 33ba7a73-3233-40a3-a49a-e5bbd604dc3c] Deletion of /var/lib/nova/instances/33ba7a73-3233-40a3-a49a-e5bbd604dc3c_del complete
Jan 20 15:07:18 compute-1 nova_compute[225855]: 2026-01-20 15:07:18.181 225859 INFO nova.compute.manager [None req-b7e9202a-dde3-4024-80ac-056613d9de92 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] [instance: 33ba7a73-3233-40a3-a49a-e5bbd604dc3c] Took 0.85 seconds to destroy the instance on the hypervisor.
Jan 20 15:07:18 compute-1 nova_compute[225855]: 2026-01-20 15:07:18.182 225859 DEBUG oslo.service.loopingcall [None req-b7e9202a-dde3-4024-80ac-056613d9de92 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 20 15:07:18 compute-1 nova_compute[225855]: 2026-01-20 15:07:18.182 225859 DEBUG nova.compute.manager [-] [instance: 33ba7a73-3233-40a3-a49a-e5bbd604dc3c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 20 15:07:18 compute-1 nova_compute[225855]: 2026-01-20 15:07:18.182 225859 DEBUG nova.network.neutron [-] [instance: 33ba7a73-3233-40a3-a49a-e5bbd604dc3c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 20 15:07:18 compute-1 nova_compute[225855]: 2026-01-20 15:07:18.293 225859 DEBUG nova.network.neutron [None req-e7e5be06-3ee0-449a-a5a4-42a0cfebbe4c a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] [instance: 5f8a2718-2106-431c-82c1-2609a52e7fb2] Updating instance_info_cache with network_info: [{"id": "6e7af943-7ef0-441d-a402-bd595082f98e", "address": "fa:16:3e:21:8b:e2", "network": {"id": "5bac39b9-563a-456f-9168-fd10b1b28c21", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-3053955-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "5e161d5a47f845fd89eb3f10627a0830", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e7af943-7e", "ovs_interfaceid": "6e7af943-7ef0-441d-a402-bd595082f98e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:07:18 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:07:18 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:07:18 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:07:18.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:07:18 compute-1 nova_compute[225855]: 2026-01-20 15:07:18.372 225859 DEBUG oslo_concurrency.lockutils [None req-e7e5be06-3ee0-449a-a5a4-42a0cfebbe4c a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] Releasing lock "refresh_cache-5f8a2718-2106-431c-82c1-2609a52e7fb2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:07:18 compute-1 nova_compute[225855]: 2026-01-20 15:07:18.373 225859 DEBUG nova.objects.instance [None req-e7e5be06-3ee0-449a-a5a4-42a0cfebbe4c a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] Lazy-loading 'flavor' on Instance uuid 5f8a2718-2106-431c-82c1-2609a52e7fb2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:07:18 compute-1 nova_compute[225855]: 2026-01-20 15:07:18.409 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:07:18 compute-1 kernel: tap6e7af943-7e (unregistering): left promiscuous mode
Jan 20 15:07:18 compute-1 NetworkManager[49104]: <info>  [1768921638.4630] device (tap6e7af943-7e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 20 15:07:18 compute-1 ovn_controller[130490]: 2026-01-20T15:07:18Z|00714|binding|INFO|Releasing lport 6e7af943-7ef0-441d-a402-bd595082f98e from this chassis (sb_readonly=0)
Jan 20 15:07:18 compute-1 ovn_controller[130490]: 2026-01-20T15:07:18Z|00715|binding|INFO|Setting lport 6e7af943-7ef0-441d-a402-bd595082f98e down in Southbound
Jan 20 15:07:18 compute-1 ovn_controller[130490]: 2026-01-20T15:07:18Z|00716|binding|INFO|Removing iface tap6e7af943-7e ovn-installed in OVS
Jan 20 15:07:18 compute-1 nova_compute[225855]: 2026-01-20 15:07:18.468 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:07:18 compute-1 nova_compute[225855]: 2026-01-20 15:07:18.469 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:07:18 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:07:18.476 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:21:8b:e2 10.100.0.13'], port_security=['fa:16:3e:21:8b:e2 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '5f8a2718-2106-431c-82c1-2609a52e7fb2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5bac39b9-563a-456f-9168-fd10b1b28c21', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5e161d5a47f845fd89eb3f10627a0830', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'cd72b979-cfcf-4dbd-bbff-8e22cd1b4096', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c853729a-de72-4ddb-be59-bc41e08984ce, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], logical_port=6e7af943-7ef0-441d-a402-bd595082f98e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:07:18 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:07:18.477 140354 INFO neutron.agent.ovn.metadata.agent [-] Port 6e7af943-7ef0-441d-a402-bd595082f98e in datapath 5bac39b9-563a-456f-9168-fd10b1b28c21 unbound from our chassis
Jan 20 15:07:18 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:07:18.478 140354 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 5bac39b9-563a-456f-9168-fd10b1b28c21 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Jan 20 15:07:18 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:07:18.479 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[947b88bf-015b-4bab-86cc-86b71118186b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:07:18 compute-1 nova_compute[225855]: 2026-01-20 15:07:18.484 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:07:18 compute-1 systemd[1]: machine-qemu\x2d83\x2dinstance\x2d000000a5.scope: Deactivated successfully.
Jan 20 15:07:18 compute-1 systemd[1]: machine-qemu\x2d83\x2dinstance\x2d000000a5.scope: Consumed 6.393s CPU time.
Jan 20 15:07:18 compute-1 systemd-machined[194361]: Machine qemu-83-instance-000000a5 terminated.
Jan 20 15:07:18 compute-1 NetworkManager[49104]: <info>  [1768921638.6374] manager: (tap6e7af943-7e): new Tun device (/org/freedesktop/NetworkManager/Devices/298)
Jan 20 15:07:18 compute-1 nova_compute[225855]: 2026-01-20 15:07:18.661 225859 INFO nova.virt.libvirt.driver [-] [instance: 5f8a2718-2106-431c-82c1-2609a52e7fb2] Instance destroyed successfully.
Jan 20 15:07:18 compute-1 nova_compute[225855]: 2026-01-20 15:07:18.662 225859 DEBUG nova.objects.instance [None req-e7e5be06-3ee0-449a-a5a4-42a0cfebbe4c a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] Lazy-loading 'numa_topology' on Instance uuid 5f8a2718-2106-431c-82c1-2609a52e7fb2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:07:18 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:07:18 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:07:18 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:07:18.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:07:18 compute-1 kernel: tap6e7af943-7e: entered promiscuous mode
Jan 20 15:07:18 compute-1 NetworkManager[49104]: <info>  [1768921638.8600] manager: (tap6e7af943-7e): new Tun device (/org/freedesktop/NetworkManager/Devices/299)
Jan 20 15:07:18 compute-1 systemd-udevd[293143]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 15:07:18 compute-1 nova_compute[225855]: 2026-01-20 15:07:18.862 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:07:18 compute-1 ovn_controller[130490]: 2026-01-20T15:07:18Z|00717|binding|INFO|Claiming lport 6e7af943-7ef0-441d-a402-bd595082f98e for this chassis.
Jan 20 15:07:18 compute-1 ovn_controller[130490]: 2026-01-20T15:07:18Z|00718|binding|INFO|6e7af943-7ef0-441d-a402-bd595082f98e: Claiming fa:16:3e:21:8b:e2 10.100.0.13
Jan 20 15:07:18 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:07:18.868 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:21:8b:e2 10.100.0.13'], port_security=['fa:16:3e:21:8b:e2 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '5f8a2718-2106-431c-82c1-2609a52e7fb2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5bac39b9-563a-456f-9168-fd10b1b28c21', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5e161d5a47f845fd89eb3f10627a0830', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'cd72b979-cfcf-4dbd-bbff-8e22cd1b4096', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c853729a-de72-4ddb-be59-bc41e08984ce, chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], logical_port=6e7af943-7ef0-441d-a402-bd595082f98e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:07:18 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:07:18.870 140354 INFO neutron.agent.ovn.metadata.agent [-] Port 6e7af943-7ef0-441d-a402-bd595082f98e in datapath 5bac39b9-563a-456f-9168-fd10b1b28c21 bound to our chassis
Jan 20 15:07:18 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:07:18.871 140354 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 5bac39b9-563a-456f-9168-fd10b1b28c21 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Jan 20 15:07:18 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:07:18.871 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[4872e6a1-c331-449f-899f-55723bcb659e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:07:18 compute-1 NetworkManager[49104]: <info>  [1768921638.8756] device (tap6e7af943-7e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 20 15:07:18 compute-1 NetworkManager[49104]: <info>  [1768921638.8771] device (tap6e7af943-7e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 20 15:07:18 compute-1 ovn_controller[130490]: 2026-01-20T15:07:18Z|00719|binding|INFO|Setting lport 6e7af943-7ef0-441d-a402-bd595082f98e ovn-installed in OVS
Jan 20 15:07:18 compute-1 ovn_controller[130490]: 2026-01-20T15:07:18Z|00720|binding|INFO|Setting lport 6e7af943-7ef0-441d-a402-bd595082f98e up in Southbound
Jan 20 15:07:18 compute-1 nova_compute[225855]: 2026-01-20 15:07:18.889 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:07:18 compute-1 nova_compute[225855]: 2026-01-20 15:07:18.891 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:07:18 compute-1 nova_compute[225855]: 2026-01-20 15:07:18.899 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:07:18 compute-1 systemd-machined[194361]: New machine qemu-84-instance-000000a5.
Jan 20 15:07:18 compute-1 systemd[1]: Started Virtual Machine qemu-84-instance-000000a5.
Jan 20 15:07:19 compute-1 nova_compute[225855]: 2026-01-20 15:07:19.378 225859 DEBUG nova.virt.libvirt.host [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Removed pending event for 5f8a2718-2106-431c-82c1-2609a52e7fb2 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Jan 20 15:07:19 compute-1 nova_compute[225855]: 2026-01-20 15:07:19.379 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768921639.3776076, 5f8a2718-2106-431c-82c1-2609a52e7fb2 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:07:19 compute-1 nova_compute[225855]: 2026-01-20 15:07:19.380 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 5f8a2718-2106-431c-82c1-2609a52e7fb2] VM Resumed (Lifecycle Event)
Jan 20 15:07:19 compute-1 nova_compute[225855]: 2026-01-20 15:07:19.410 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 5f8a2718-2106-431c-82c1-2609a52e7fb2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:07:19 compute-1 ceph-mon[81775]: pgmap v2504: 321 pgs: 3 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 316 active+clean; 559 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 1.4 MiB/s wr, 248 op/s
Jan 20 15:07:19 compute-1 nova_compute[225855]: 2026-01-20 15:07:19.417 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 5f8a2718-2106-431c-82c1-2609a52e7fb2] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: rescued, current task_state: unrescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 15:07:19 compute-1 nova_compute[225855]: 2026-01-20 15:07:19.455 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 5f8a2718-2106-431c-82c1-2609a52e7fb2] During sync_power_state the instance has a pending task (unrescuing). Skip.
Jan 20 15:07:19 compute-1 nova_compute[225855]: 2026-01-20 15:07:19.456 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768921639.3822367, 5f8a2718-2106-431c-82c1-2609a52e7fb2 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:07:19 compute-1 nova_compute[225855]: 2026-01-20 15:07:19.456 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 5f8a2718-2106-431c-82c1-2609a52e7fb2] VM Started (Lifecycle Event)
Jan 20 15:07:19 compute-1 nova_compute[225855]: 2026-01-20 15:07:19.477 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 5f8a2718-2106-431c-82c1-2609a52e7fb2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:07:19 compute-1 nova_compute[225855]: 2026-01-20 15:07:19.482 225859 DEBUG nova.compute.manager [req-7f2f672b-c6f4-445b-abf2-d10afb883840 req-016a1ffe-f196-42f6-84e7-acd4a623fb72 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 5f8a2718-2106-431c-82c1-2609a52e7fb2] Received event network-vif-unplugged-6e7af943-7ef0-441d-a402-bd595082f98e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:07:19 compute-1 nova_compute[225855]: 2026-01-20 15:07:19.483 225859 DEBUG oslo_concurrency.lockutils [req-7f2f672b-c6f4-445b-abf2-d10afb883840 req-016a1ffe-f196-42f6-84e7-acd4a623fb72 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "5f8a2718-2106-431c-82c1-2609a52e7fb2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:07:19 compute-1 nova_compute[225855]: 2026-01-20 15:07:19.483 225859 DEBUG oslo_concurrency.lockutils [req-7f2f672b-c6f4-445b-abf2-d10afb883840 req-016a1ffe-f196-42f6-84e7-acd4a623fb72 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "5f8a2718-2106-431c-82c1-2609a52e7fb2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:07:19 compute-1 nova_compute[225855]: 2026-01-20 15:07:19.483 225859 DEBUG oslo_concurrency.lockutils [req-7f2f672b-c6f4-445b-abf2-d10afb883840 req-016a1ffe-f196-42f6-84e7-acd4a623fb72 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "5f8a2718-2106-431c-82c1-2609a52e7fb2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:07:19 compute-1 nova_compute[225855]: 2026-01-20 15:07:19.484 225859 DEBUG nova.compute.manager [req-7f2f672b-c6f4-445b-abf2-d10afb883840 req-016a1ffe-f196-42f6-84e7-acd4a623fb72 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 5f8a2718-2106-431c-82c1-2609a52e7fb2] No waiting events found dispatching network-vif-unplugged-6e7af943-7ef0-441d-a402-bd595082f98e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:07:19 compute-1 nova_compute[225855]: 2026-01-20 15:07:19.484 225859 WARNING nova.compute.manager [req-7f2f672b-c6f4-445b-abf2-d10afb883840 req-016a1ffe-f196-42f6-84e7-acd4a623fb72 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 5f8a2718-2106-431c-82c1-2609a52e7fb2] Received unexpected event network-vif-unplugged-6e7af943-7ef0-441d-a402-bd595082f98e for instance with vm_state rescued and task_state unrescuing.
Jan 20 15:07:19 compute-1 nova_compute[225855]: 2026-01-20 15:07:19.489 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 5f8a2718-2106-431c-82c1-2609a52e7fb2] Synchronizing instance power state after lifecycle event "Started"; current vm_state: rescued, current task_state: unrescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 15:07:19 compute-1 nova_compute[225855]: 2026-01-20 15:07:19.514 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 5f8a2718-2106-431c-82c1-2609a52e7fb2] During sync_power_state the instance has a pending task (unrescuing). Skip.
Jan 20 15:07:19 compute-1 nova_compute[225855]: 2026-01-20 15:07:19.879 225859 DEBUG nova.compute.manager [req-8fa37a04-c694-40aa-8d25-7b9708fd6a59 req-e6e0042a-efd7-476f-bbe4-20cc0d13487a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 33ba7a73-3233-40a3-a49a-e5bbd604dc3c] Received event network-vif-plugged-070862f1-1db2-45c2-9787-752e6d88449a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:07:19 compute-1 nova_compute[225855]: 2026-01-20 15:07:19.880 225859 DEBUG oslo_concurrency.lockutils [req-8fa37a04-c694-40aa-8d25-7b9708fd6a59 req-e6e0042a-efd7-476f-bbe4-20cc0d13487a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "33ba7a73-3233-40a3-a49a-e5bbd604dc3c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:07:19 compute-1 nova_compute[225855]: 2026-01-20 15:07:19.880 225859 DEBUG oslo_concurrency.lockutils [req-8fa37a04-c694-40aa-8d25-7b9708fd6a59 req-e6e0042a-efd7-476f-bbe4-20cc0d13487a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "33ba7a73-3233-40a3-a49a-e5bbd604dc3c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:07:19 compute-1 nova_compute[225855]: 2026-01-20 15:07:19.880 225859 DEBUG oslo_concurrency.lockutils [req-8fa37a04-c694-40aa-8d25-7b9708fd6a59 req-e6e0042a-efd7-476f-bbe4-20cc0d13487a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "33ba7a73-3233-40a3-a49a-e5bbd604dc3c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:07:19 compute-1 nova_compute[225855]: 2026-01-20 15:07:19.880 225859 DEBUG nova.compute.manager [req-8fa37a04-c694-40aa-8d25-7b9708fd6a59 req-e6e0042a-efd7-476f-bbe4-20cc0d13487a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 33ba7a73-3233-40a3-a49a-e5bbd604dc3c] No waiting events found dispatching network-vif-plugged-070862f1-1db2-45c2-9787-752e6d88449a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:07:19 compute-1 nova_compute[225855]: 2026-01-20 15:07:19.881 225859 WARNING nova.compute.manager [req-8fa37a04-c694-40aa-8d25-7b9708fd6a59 req-e6e0042a-efd7-476f-bbe4-20cc0d13487a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 33ba7a73-3233-40a3-a49a-e5bbd604dc3c] Received unexpected event network-vif-plugged-070862f1-1db2-45c2-9787-752e6d88449a for instance with vm_state active and task_state deleting.
Jan 20 15:07:19 compute-1 nova_compute[225855]: 2026-01-20 15:07:19.973 225859 DEBUG nova.network.neutron [req-217159b0-8069-4aa8-b849-e25238e89e71 req-5fe5a655-30ec-4e9d-a814-d83d9f3d08a9 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 33ba7a73-3233-40a3-a49a-e5bbd604dc3c] Updated VIF entry in instance network info cache for port 070862f1-1db2-45c2-9787-752e6d88449a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 15:07:19 compute-1 nova_compute[225855]: 2026-01-20 15:07:19.974 225859 DEBUG nova.network.neutron [req-217159b0-8069-4aa8-b849-e25238e89e71 req-5fe5a655-30ec-4e9d-a814-d83d9f3d08a9 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 33ba7a73-3233-40a3-a49a-e5bbd604dc3c] Updating instance_info_cache with network_info: [{"id": "070862f1-1db2-45c2-9787-752e6d88449a", "address": "fa:16:3e:e5:e7:09", "network": {"id": "8472bae1-476b-4100-b9fa-e8827bc4f7bf", "bridge": "br-int", "label": "tempest-TestStampPattern-1138931002-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e142d118583b4f9ba3531bcf3838e256", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap070862f1-1d", "ovs_interfaceid": "070862f1-1db2-45c2-9787-752e6d88449a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:07:20 compute-1 nova_compute[225855]: 2026-01-20 15:07:20.009 225859 DEBUG oslo_concurrency.lockutils [req-217159b0-8069-4aa8-b849-e25238e89e71 req-5fe5a655-30ec-4e9d-a814-d83d9f3d08a9 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-33ba7a73-3233-40a3-a49a-e5bbd604dc3c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:07:20 compute-1 nova_compute[225855]: 2026-01-20 15:07:20.059 225859 DEBUG nova.network.neutron [-] [instance: 33ba7a73-3233-40a3-a49a-e5bbd604dc3c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:07:20 compute-1 nova_compute[225855]: 2026-01-20 15:07:20.090 225859 INFO nova.compute.manager [-] [instance: 33ba7a73-3233-40a3-a49a-e5bbd604dc3c] Took 1.91 seconds to deallocate network for instance.
Jan 20 15:07:20 compute-1 nova_compute[225855]: 2026-01-20 15:07:20.144 225859 DEBUG oslo_concurrency.lockutils [None req-b7e9202a-dde3-4024-80ac-056613d9de92 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:07:20 compute-1 nova_compute[225855]: 2026-01-20 15:07:20.144 225859 DEBUG oslo_concurrency.lockutils [None req-b7e9202a-dde3-4024-80ac-056613d9de92 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:07:20 compute-1 nova_compute[225855]: 2026-01-20 15:07:20.189 225859 DEBUG nova.compute.manager [None req-e7e5be06-3ee0-449a-a5a4-42a0cfebbe4c a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] [instance: 5f8a2718-2106-431c-82c1-2609a52e7fb2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:07:20 compute-1 nova_compute[225855]: 2026-01-20 15:07:20.239 225859 DEBUG oslo_concurrency.processutils [None req-b7e9202a-dde3-4024-80ac-056613d9de92 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:07:20 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:07:20 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:07:20 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:07:20.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:07:20 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:07:20 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3481712973' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:07:20 compute-1 nova_compute[225855]: 2026-01-20 15:07:20.698 225859 DEBUG oslo_concurrency.processutils [None req-b7e9202a-dde3-4024-80ac-056613d9de92 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:07:20 compute-1 nova_compute[225855]: 2026-01-20 15:07:20.704 225859 DEBUG nova.compute.provider_tree [None req-b7e9202a-dde3-4024-80ac-056613d9de92 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:07:20 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:07:20 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:07:20 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:07:20.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:07:20 compute-1 nova_compute[225855]: 2026-01-20 15:07:20.732 225859 DEBUG nova.scheduler.client.report [None req-b7e9202a-dde3-4024-80ac-056613d9de92 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:07:20 compute-1 nova_compute[225855]: 2026-01-20 15:07:20.767 225859 DEBUG oslo_concurrency.lockutils [None req-b7e9202a-dde3-4024-80ac-056613d9de92 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.623s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:07:20 compute-1 nova_compute[225855]: 2026-01-20 15:07:20.794 225859 INFO nova.scheduler.client.report [None req-b7e9202a-dde3-4024-80ac-056613d9de92 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] Deleted allocations for instance 33ba7a73-3233-40a3-a49a-e5bbd604dc3c
Jan 20 15:07:20 compute-1 nova_compute[225855]: 2026-01-20 15:07:20.860 225859 DEBUG oslo_concurrency.lockutils [None req-b7e9202a-dde3-4024-80ac-056613d9de92 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] Lock "33ba7a73-3233-40a3-a49a-e5bbd604dc3c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.531s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:07:20 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e369 e369: 3 total, 3 up, 3 in
Jan 20 15:07:21 compute-1 nova_compute[225855]: 2026-01-20 15:07:21.194 225859 DEBUG oslo_concurrency.lockutils [None req-8c89f417-ee5e-4e39-8750-d07a141e37ba a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] Acquiring lock "5f8a2718-2106-431c-82c1-2609a52e7fb2" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:07:21 compute-1 nova_compute[225855]: 2026-01-20 15:07:21.195 225859 DEBUG oslo_concurrency.lockutils [None req-8c89f417-ee5e-4e39-8750-d07a141e37ba a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] Lock "5f8a2718-2106-431c-82c1-2609a52e7fb2" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:07:21 compute-1 nova_compute[225855]: 2026-01-20 15:07:21.195 225859 DEBUG oslo_concurrency.lockutils [None req-8c89f417-ee5e-4e39-8750-d07a141e37ba a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] Acquiring lock "5f8a2718-2106-431c-82c1-2609a52e7fb2-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:07:21 compute-1 nova_compute[225855]: 2026-01-20 15:07:21.196 225859 DEBUG oslo_concurrency.lockutils [None req-8c89f417-ee5e-4e39-8750-d07a141e37ba a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] Lock "5f8a2718-2106-431c-82c1-2609a52e7fb2-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:07:21 compute-1 nova_compute[225855]: 2026-01-20 15:07:21.196 225859 DEBUG oslo_concurrency.lockutils [None req-8c89f417-ee5e-4e39-8750-d07a141e37ba a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] Lock "5f8a2718-2106-431c-82c1-2609a52e7fb2-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:07:21 compute-1 nova_compute[225855]: 2026-01-20 15:07:21.197 225859 INFO nova.compute.manager [None req-8c89f417-ee5e-4e39-8750-d07a141e37ba a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] [instance: 5f8a2718-2106-431c-82c1-2609a52e7fb2] Terminating instance
Jan 20 15:07:21 compute-1 nova_compute[225855]: 2026-01-20 15:07:21.198 225859 DEBUG nova.compute.manager [None req-8c89f417-ee5e-4e39-8750-d07a141e37ba a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] [instance: 5f8a2718-2106-431c-82c1-2609a52e7fb2] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 20 15:07:21 compute-1 ceph-mon[81775]: pgmap v2505: 321 pgs: 321 active+clean; 453 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 166 KiB/s wr, 285 op/s
Jan 20 15:07:21 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/3481712973' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:07:21 compute-1 ceph-mon[81775]: osdmap e369: 3 total, 3 up, 3 in
Jan 20 15:07:21 compute-1 kernel: tap6e7af943-7e (unregistering): left promiscuous mode
Jan 20 15:07:21 compute-1 NetworkManager[49104]: <info>  [1768921641.5096] device (tap6e7af943-7e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 20 15:07:21 compute-1 ovn_controller[130490]: 2026-01-20T15:07:21Z|00721|binding|INFO|Releasing lport 6e7af943-7ef0-441d-a402-bd595082f98e from this chassis (sb_readonly=0)
Jan 20 15:07:21 compute-1 ovn_controller[130490]: 2026-01-20T15:07:21Z|00722|binding|INFO|Setting lport 6e7af943-7ef0-441d-a402-bd595082f98e down in Southbound
Jan 20 15:07:21 compute-1 nova_compute[225855]: 2026-01-20 15:07:21.540 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:07:21 compute-1 ovn_controller[130490]: 2026-01-20T15:07:21Z|00723|binding|INFO|Removing iface tap6e7af943-7e ovn-installed in OVS
Jan 20 15:07:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:07:21.546 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:21:8b:e2 10.100.0.13'], port_security=['fa:16:3e:21:8b:e2 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '5f8a2718-2106-431c-82c1-2609a52e7fb2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5bac39b9-563a-456f-9168-fd10b1b28c21', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5e161d5a47f845fd89eb3f10627a0830', 'neutron:revision_number': '8', 'neutron:security_group_ids': 'cd72b979-cfcf-4dbd-bbff-8e22cd1b4096', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c853729a-de72-4ddb-be59-bc41e08984ce, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], logical_port=6e7af943-7ef0-441d-a402-bd595082f98e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:07:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:07:21.547 140354 INFO neutron.agent.ovn.metadata.agent [-] Port 6e7af943-7ef0-441d-a402-bd595082f98e in datapath 5bac39b9-563a-456f-9168-fd10b1b28c21 unbound from our chassis
Jan 20 15:07:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:07:21.548 140354 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 5bac39b9-563a-456f-9168-fd10b1b28c21 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Jan 20 15:07:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:07:21.549 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[19b39b15-fb6b-474a-af9f-8ba08f911743]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:07:21 compute-1 nova_compute[225855]: 2026-01-20 15:07:21.557 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:07:21 compute-1 systemd[1]: machine-qemu\x2d84\x2dinstance\x2d000000a5.scope: Deactivated successfully.
Jan 20 15:07:21 compute-1 systemd[1]: machine-qemu\x2d84\x2dinstance\x2d000000a5.scope: Consumed 2.415s CPU time.
Jan 20 15:07:21 compute-1 systemd-machined[194361]: Machine qemu-84-instance-000000a5 terminated.
Jan 20 15:07:21 compute-1 nova_compute[225855]: 2026-01-20 15:07:21.594 225859 DEBUG nova.compute.manager [req-5c678cc2-b95b-481e-8aa8-5eee4aae19cc req-f53a18ec-c2a8-4bc0-889a-c18424f3d0fd 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 5f8a2718-2106-431c-82c1-2609a52e7fb2] Received event network-vif-plugged-6e7af943-7ef0-441d-a402-bd595082f98e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:07:21 compute-1 nova_compute[225855]: 2026-01-20 15:07:21.594 225859 DEBUG oslo_concurrency.lockutils [req-5c678cc2-b95b-481e-8aa8-5eee4aae19cc req-f53a18ec-c2a8-4bc0-889a-c18424f3d0fd 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "5f8a2718-2106-431c-82c1-2609a52e7fb2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:07:21 compute-1 nova_compute[225855]: 2026-01-20 15:07:21.595 225859 DEBUG oslo_concurrency.lockutils [req-5c678cc2-b95b-481e-8aa8-5eee4aae19cc req-f53a18ec-c2a8-4bc0-889a-c18424f3d0fd 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "5f8a2718-2106-431c-82c1-2609a52e7fb2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:07:21 compute-1 nova_compute[225855]: 2026-01-20 15:07:21.595 225859 DEBUG oslo_concurrency.lockutils [req-5c678cc2-b95b-481e-8aa8-5eee4aae19cc req-f53a18ec-c2a8-4bc0-889a-c18424f3d0fd 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "5f8a2718-2106-431c-82c1-2609a52e7fb2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:07:21 compute-1 nova_compute[225855]: 2026-01-20 15:07:21.595 225859 DEBUG nova.compute.manager [req-5c678cc2-b95b-481e-8aa8-5eee4aae19cc req-f53a18ec-c2a8-4bc0-889a-c18424f3d0fd 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 5f8a2718-2106-431c-82c1-2609a52e7fb2] No waiting events found dispatching network-vif-plugged-6e7af943-7ef0-441d-a402-bd595082f98e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:07:21 compute-1 nova_compute[225855]: 2026-01-20 15:07:21.595 225859 WARNING nova.compute.manager [req-5c678cc2-b95b-481e-8aa8-5eee4aae19cc req-f53a18ec-c2a8-4bc0-889a-c18424f3d0fd 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 5f8a2718-2106-431c-82c1-2609a52e7fb2] Received unexpected event network-vif-plugged-6e7af943-7ef0-441d-a402-bd595082f98e for instance with vm_state active and task_state deleting.
Jan 20 15:07:21 compute-1 nova_compute[225855]: 2026-01-20 15:07:21.596 225859 DEBUG nova.compute.manager [req-5c678cc2-b95b-481e-8aa8-5eee4aae19cc req-f53a18ec-c2a8-4bc0-889a-c18424f3d0fd 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 5f8a2718-2106-431c-82c1-2609a52e7fb2] Received event network-vif-plugged-6e7af943-7ef0-441d-a402-bd595082f98e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:07:21 compute-1 nova_compute[225855]: 2026-01-20 15:07:21.596 225859 DEBUG oslo_concurrency.lockutils [req-5c678cc2-b95b-481e-8aa8-5eee4aae19cc req-f53a18ec-c2a8-4bc0-889a-c18424f3d0fd 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "5f8a2718-2106-431c-82c1-2609a52e7fb2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:07:21 compute-1 nova_compute[225855]: 2026-01-20 15:07:21.596 225859 DEBUG oslo_concurrency.lockutils [req-5c678cc2-b95b-481e-8aa8-5eee4aae19cc req-f53a18ec-c2a8-4bc0-889a-c18424f3d0fd 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "5f8a2718-2106-431c-82c1-2609a52e7fb2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:07:21 compute-1 nova_compute[225855]: 2026-01-20 15:07:21.596 225859 DEBUG oslo_concurrency.lockutils [req-5c678cc2-b95b-481e-8aa8-5eee4aae19cc req-f53a18ec-c2a8-4bc0-889a-c18424f3d0fd 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "5f8a2718-2106-431c-82c1-2609a52e7fb2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:07:21 compute-1 nova_compute[225855]: 2026-01-20 15:07:21.596 225859 DEBUG nova.compute.manager [req-5c678cc2-b95b-481e-8aa8-5eee4aae19cc req-f53a18ec-c2a8-4bc0-889a-c18424f3d0fd 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 5f8a2718-2106-431c-82c1-2609a52e7fb2] No waiting events found dispatching network-vif-plugged-6e7af943-7ef0-441d-a402-bd595082f98e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:07:21 compute-1 nova_compute[225855]: 2026-01-20 15:07:21.597 225859 WARNING nova.compute.manager [req-5c678cc2-b95b-481e-8aa8-5eee4aae19cc req-f53a18ec-c2a8-4bc0-889a-c18424f3d0fd 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 5f8a2718-2106-431c-82c1-2609a52e7fb2] Received unexpected event network-vif-plugged-6e7af943-7ef0-441d-a402-bd595082f98e for instance with vm_state active and task_state deleting.
Jan 20 15:07:21 compute-1 nova_compute[225855]: 2026-01-20 15:07:21.597 225859 DEBUG nova.compute.manager [req-5c678cc2-b95b-481e-8aa8-5eee4aae19cc req-f53a18ec-c2a8-4bc0-889a-c18424f3d0fd 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 33ba7a73-3233-40a3-a49a-e5bbd604dc3c] Received event network-vif-deleted-070862f1-1db2-45c2-9787-752e6d88449a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:07:21 compute-1 nova_compute[225855]: 2026-01-20 15:07:21.597 225859 DEBUG nova.compute.manager [req-5c678cc2-b95b-481e-8aa8-5eee4aae19cc req-f53a18ec-c2a8-4bc0-889a-c18424f3d0fd 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 5f8a2718-2106-431c-82c1-2609a52e7fb2] Received event network-vif-plugged-6e7af943-7ef0-441d-a402-bd595082f98e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:07:21 compute-1 nova_compute[225855]: 2026-01-20 15:07:21.597 225859 DEBUG oslo_concurrency.lockutils [req-5c678cc2-b95b-481e-8aa8-5eee4aae19cc req-f53a18ec-c2a8-4bc0-889a-c18424f3d0fd 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "5f8a2718-2106-431c-82c1-2609a52e7fb2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:07:21 compute-1 nova_compute[225855]: 2026-01-20 15:07:21.598 225859 DEBUG oslo_concurrency.lockutils [req-5c678cc2-b95b-481e-8aa8-5eee4aae19cc req-f53a18ec-c2a8-4bc0-889a-c18424f3d0fd 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "5f8a2718-2106-431c-82c1-2609a52e7fb2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:07:21 compute-1 nova_compute[225855]: 2026-01-20 15:07:21.598 225859 DEBUG oslo_concurrency.lockutils [req-5c678cc2-b95b-481e-8aa8-5eee4aae19cc req-f53a18ec-c2a8-4bc0-889a-c18424f3d0fd 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "5f8a2718-2106-431c-82c1-2609a52e7fb2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:07:21 compute-1 nova_compute[225855]: 2026-01-20 15:07:21.598 225859 DEBUG nova.compute.manager [req-5c678cc2-b95b-481e-8aa8-5eee4aae19cc req-f53a18ec-c2a8-4bc0-889a-c18424f3d0fd 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 5f8a2718-2106-431c-82c1-2609a52e7fb2] No waiting events found dispatching network-vif-plugged-6e7af943-7ef0-441d-a402-bd595082f98e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:07:21 compute-1 nova_compute[225855]: 2026-01-20 15:07:21.598 225859 WARNING nova.compute.manager [req-5c678cc2-b95b-481e-8aa8-5eee4aae19cc req-f53a18ec-c2a8-4bc0-889a-c18424f3d0fd 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 5f8a2718-2106-431c-82c1-2609a52e7fb2] Received unexpected event network-vif-plugged-6e7af943-7ef0-441d-a402-bd595082f98e for instance with vm_state active and task_state deleting.
Jan 20 15:07:21 compute-1 nova_compute[225855]: 2026-01-20 15:07:21.638 225859 INFO nova.virt.libvirt.driver [-] [instance: 5f8a2718-2106-431c-82c1-2609a52e7fb2] Instance destroyed successfully.
Jan 20 15:07:21 compute-1 nova_compute[225855]: 2026-01-20 15:07:21.638 225859 DEBUG nova.objects.instance [None req-8c89f417-ee5e-4e39-8750-d07a141e37ba a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] Lazy-loading 'resources' on Instance uuid 5f8a2718-2106-431c-82c1-2609a52e7fb2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:07:21 compute-1 nova_compute[225855]: 2026-01-20 15:07:21.670 225859 DEBUG nova.virt.libvirt.vif [None req-8c89f417-ee5e-4e39-8750-d07a141e37ba a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T15:06:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueTestJSON-server-1315322326',display_name='tempest-ServerRescueTestJSON-server-1315322326',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverrescuetestjson-server-1315322326',id=165,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-20T15:07:12Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='5e161d5a47f845fd89eb3f10627a0830',ramdisk_id='',reservation_id='r-g1s28307',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_d
isk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueTestJSON-1151598672',owner_user_name='tempest-ServerRescueTestJSON-1151598672-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T15:07:20Z,user_data=None,user_id='a2beb3d6247e457abd6e8d93cc602f02',uuid=5f8a2718-2106-431c-82c1-2609a52e7fb2,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "6e7af943-7ef0-441d-a402-bd595082f98e", "address": "fa:16:3e:21:8b:e2", "network": {"id": "5bac39b9-563a-456f-9168-fd10b1b28c21", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-3053955-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "5e161d5a47f845fd89eb3f10627a0830", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e7af943-7e", "ovs_interfaceid": "6e7af943-7ef0-441d-a402-bd595082f98e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 20 15:07:21 compute-1 nova_compute[225855]: 2026-01-20 15:07:21.671 225859 DEBUG nova.network.os_vif_util [None req-8c89f417-ee5e-4e39-8750-d07a141e37ba a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] Converting VIF {"id": "6e7af943-7ef0-441d-a402-bd595082f98e", "address": "fa:16:3e:21:8b:e2", "network": {"id": "5bac39b9-563a-456f-9168-fd10b1b28c21", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-3053955-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "5e161d5a47f845fd89eb3f10627a0830", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e7af943-7e", "ovs_interfaceid": "6e7af943-7ef0-441d-a402-bd595082f98e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 15:07:21 compute-1 nova_compute[225855]: 2026-01-20 15:07:21.672 225859 DEBUG nova.network.os_vif_util [None req-8c89f417-ee5e-4e39-8750-d07a141e37ba a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:21:8b:e2,bridge_name='br-int',has_traffic_filtering=True,id=6e7af943-7ef0-441d-a402-bd595082f98e,network=Network(5bac39b9-563a-456f-9168-fd10b1b28c21),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6e7af943-7e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 15:07:21 compute-1 nova_compute[225855]: 2026-01-20 15:07:21.672 225859 DEBUG os_vif [None req-8c89f417-ee5e-4e39-8750-d07a141e37ba a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:21:8b:e2,bridge_name='br-int',has_traffic_filtering=True,id=6e7af943-7ef0-441d-a402-bd595082f98e,network=Network(5bac39b9-563a-456f-9168-fd10b1b28c21),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6e7af943-7e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 20 15:07:21 compute-1 nova_compute[225855]: 2026-01-20 15:07:21.674 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:07:21 compute-1 nova_compute[225855]: 2026-01-20 15:07:21.674 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6e7af943-7e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:07:21 compute-1 nova_compute[225855]: 2026-01-20 15:07:21.676 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:07:21 compute-1 nova_compute[225855]: 2026-01-20 15:07:21.677 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:07:21 compute-1 nova_compute[225855]: 2026-01-20 15:07:21.680 225859 INFO os_vif [None req-8c89f417-ee5e-4e39-8750-d07a141e37ba a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:21:8b:e2,bridge_name='br-int',has_traffic_filtering=True,id=6e7af943-7ef0-441d-a402-bd595082f98e,network=Network(5bac39b9-563a-456f-9168-fd10b1b28c21),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6e7af943-7e')
Jan 20 15:07:22 compute-1 nova_compute[225855]: 2026-01-20 15:07:22.122 225859 INFO nova.virt.libvirt.driver [None req-8c89f417-ee5e-4e39-8750-d07a141e37ba a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] [instance: 5f8a2718-2106-431c-82c1-2609a52e7fb2] Deleting instance files /var/lib/nova/instances/5f8a2718-2106-431c-82c1-2609a52e7fb2_del
Jan 20 15:07:22 compute-1 nova_compute[225855]: 2026-01-20 15:07:22.123 225859 INFO nova.virt.libvirt.driver [None req-8c89f417-ee5e-4e39-8750-d07a141e37ba a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] [instance: 5f8a2718-2106-431c-82c1-2609a52e7fb2] Deletion of /var/lib/nova/instances/5f8a2718-2106-431c-82c1-2609a52e7fb2_del complete
Jan 20 15:07:22 compute-1 nova_compute[225855]: 2026-01-20 15:07:22.194 225859 INFO nova.compute.manager [None req-8c89f417-ee5e-4e39-8750-d07a141e37ba a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] [instance: 5f8a2718-2106-431c-82c1-2609a52e7fb2] Took 1.00 seconds to destroy the instance on the hypervisor.
Jan 20 15:07:22 compute-1 nova_compute[225855]: 2026-01-20 15:07:22.195 225859 DEBUG oslo.service.loopingcall [None req-8c89f417-ee5e-4e39-8750-d07a141e37ba a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 20 15:07:22 compute-1 nova_compute[225855]: 2026-01-20 15:07:22.195 225859 DEBUG nova.compute.manager [-] [instance: 5f8a2718-2106-431c-82c1-2609a52e7fb2] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 20 15:07:22 compute-1 nova_compute[225855]: 2026-01-20 15:07:22.196 225859 DEBUG nova.network.neutron [-] [instance: 5f8a2718-2106-431c-82c1-2609a52e7fb2] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 20 15:07:22 compute-1 ceph-osd[79119]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 20 15:07:22 compute-1 ceph-osd[79119]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 4200.1 total, 600.0 interval
                                           Cumulative writes: 51K writes, 201K keys, 51K commit groups, 1.0 writes per commit group, ingest: 0.19 GB, 0.05 MB/s
                                           Cumulative WAL: 51K writes, 19K syncs, 2.68 writes per sync, written: 0.19 GB, 0.05 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 8536 writes, 30K keys, 8536 commit groups, 1.0 writes per commit group, ingest: 29.31 MB, 0.05 MB/s
                                           Interval WAL: 8536 writes, 3570 syncs, 2.39 writes per sync, written: 0.03 GB, 0.05 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 20 15:07:22 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:07:22 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:07:22 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:07:22.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:07:22 compute-1 nova_compute[225855]: 2026-01-20 15:07:22.490 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:07:22 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/777705666' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:07:22 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/777705666' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:07:22 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:07:22 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:07:22 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:07:22.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:07:22 compute-1 nova_compute[225855]: 2026-01-20 15:07:22.936 225859 DEBUG nova.network.neutron [-] [instance: 5f8a2718-2106-431c-82c1-2609a52e7fb2] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:07:23 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e369 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:07:23 compute-1 nova_compute[225855]: 2026-01-20 15:07:23.146 225859 INFO nova.compute.manager [-] [instance: 5f8a2718-2106-431c-82c1-2609a52e7fb2] Took 0.95 seconds to deallocate network for instance.
Jan 20 15:07:23 compute-1 nova_compute[225855]: 2026-01-20 15:07:23.276 225859 DEBUG oslo_concurrency.lockutils [None req-8c89f417-ee5e-4e39-8750-d07a141e37ba a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:07:23 compute-1 nova_compute[225855]: 2026-01-20 15:07:23.277 225859 DEBUG oslo_concurrency.lockutils [None req-8c89f417-ee5e-4e39-8750-d07a141e37ba a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:07:23 compute-1 nova_compute[225855]: 2026-01-20 15:07:23.335 225859 DEBUG oslo_concurrency.processutils [None req-8c89f417-ee5e-4e39-8750-d07a141e37ba a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:07:23 compute-1 ceph-mon[81775]: pgmap v2507: 321 pgs: 321 active+clean; 421 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 43 KiB/s wr, 299 op/s
Jan 20 15:07:23 compute-1 nova_compute[225855]: 2026-01-20 15:07:23.715 225859 DEBUG nova.compute.manager [req-c51d29d2-74cf-4444-b21e-83faf4721dfc req-96aa0d8a-fe9a-4a00-af4a-e644b5ea9f60 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 5f8a2718-2106-431c-82c1-2609a52e7fb2] Received event network-vif-unplugged-6e7af943-7ef0-441d-a402-bd595082f98e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:07:23 compute-1 nova_compute[225855]: 2026-01-20 15:07:23.715 225859 DEBUG oslo_concurrency.lockutils [req-c51d29d2-74cf-4444-b21e-83faf4721dfc req-96aa0d8a-fe9a-4a00-af4a-e644b5ea9f60 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "5f8a2718-2106-431c-82c1-2609a52e7fb2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:07:23 compute-1 nova_compute[225855]: 2026-01-20 15:07:23.716 225859 DEBUG oslo_concurrency.lockutils [req-c51d29d2-74cf-4444-b21e-83faf4721dfc req-96aa0d8a-fe9a-4a00-af4a-e644b5ea9f60 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "5f8a2718-2106-431c-82c1-2609a52e7fb2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:07:23 compute-1 nova_compute[225855]: 2026-01-20 15:07:23.716 225859 DEBUG oslo_concurrency.lockutils [req-c51d29d2-74cf-4444-b21e-83faf4721dfc req-96aa0d8a-fe9a-4a00-af4a-e644b5ea9f60 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "5f8a2718-2106-431c-82c1-2609a52e7fb2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:07:23 compute-1 nova_compute[225855]: 2026-01-20 15:07:23.716 225859 DEBUG nova.compute.manager [req-c51d29d2-74cf-4444-b21e-83faf4721dfc req-96aa0d8a-fe9a-4a00-af4a-e644b5ea9f60 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 5f8a2718-2106-431c-82c1-2609a52e7fb2] No waiting events found dispatching network-vif-unplugged-6e7af943-7ef0-441d-a402-bd595082f98e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:07:23 compute-1 nova_compute[225855]: 2026-01-20 15:07:23.717 225859 WARNING nova.compute.manager [req-c51d29d2-74cf-4444-b21e-83faf4721dfc req-96aa0d8a-fe9a-4a00-af4a-e644b5ea9f60 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 5f8a2718-2106-431c-82c1-2609a52e7fb2] Received unexpected event network-vif-unplugged-6e7af943-7ef0-441d-a402-bd595082f98e for instance with vm_state deleted and task_state None.
Jan 20 15:07:23 compute-1 nova_compute[225855]: 2026-01-20 15:07:23.717 225859 DEBUG nova.compute.manager [req-c51d29d2-74cf-4444-b21e-83faf4721dfc req-96aa0d8a-fe9a-4a00-af4a-e644b5ea9f60 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 5f8a2718-2106-431c-82c1-2609a52e7fb2] Received event network-vif-plugged-6e7af943-7ef0-441d-a402-bd595082f98e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:07:23 compute-1 nova_compute[225855]: 2026-01-20 15:07:23.717 225859 DEBUG oslo_concurrency.lockutils [req-c51d29d2-74cf-4444-b21e-83faf4721dfc req-96aa0d8a-fe9a-4a00-af4a-e644b5ea9f60 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "5f8a2718-2106-431c-82c1-2609a52e7fb2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:07:23 compute-1 nova_compute[225855]: 2026-01-20 15:07:23.717 225859 DEBUG oslo_concurrency.lockutils [req-c51d29d2-74cf-4444-b21e-83faf4721dfc req-96aa0d8a-fe9a-4a00-af4a-e644b5ea9f60 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "5f8a2718-2106-431c-82c1-2609a52e7fb2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:07:23 compute-1 nova_compute[225855]: 2026-01-20 15:07:23.718 225859 DEBUG oslo_concurrency.lockutils [req-c51d29d2-74cf-4444-b21e-83faf4721dfc req-96aa0d8a-fe9a-4a00-af4a-e644b5ea9f60 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "5f8a2718-2106-431c-82c1-2609a52e7fb2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:07:23 compute-1 nova_compute[225855]: 2026-01-20 15:07:23.718 225859 DEBUG nova.compute.manager [req-c51d29d2-74cf-4444-b21e-83faf4721dfc req-96aa0d8a-fe9a-4a00-af4a-e644b5ea9f60 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 5f8a2718-2106-431c-82c1-2609a52e7fb2] No waiting events found dispatching network-vif-plugged-6e7af943-7ef0-441d-a402-bd595082f98e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:07:23 compute-1 nova_compute[225855]: 2026-01-20 15:07:23.718 225859 WARNING nova.compute.manager [req-c51d29d2-74cf-4444-b21e-83faf4721dfc req-96aa0d8a-fe9a-4a00-af4a-e644b5ea9f60 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 5f8a2718-2106-431c-82c1-2609a52e7fb2] Received unexpected event network-vif-plugged-6e7af943-7ef0-441d-a402-bd595082f98e for instance with vm_state deleted and task_state None.
Jan 20 15:07:23 compute-1 nova_compute[225855]: 2026-01-20 15:07:23.719 225859 DEBUG nova.compute.manager [req-c51d29d2-74cf-4444-b21e-83faf4721dfc req-96aa0d8a-fe9a-4a00-af4a-e644b5ea9f60 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 5f8a2718-2106-431c-82c1-2609a52e7fb2] Received event network-vif-deleted-6e7af943-7ef0-441d-a402-bd595082f98e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:07:23 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:07:23 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1342745173' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:07:23 compute-1 nova_compute[225855]: 2026-01-20 15:07:23.786 225859 DEBUG oslo_concurrency.processutils [None req-8c89f417-ee5e-4e39-8750-d07a141e37ba a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:07:23 compute-1 nova_compute[225855]: 2026-01-20 15:07:23.792 225859 DEBUG nova.compute.provider_tree [None req-8c89f417-ee5e-4e39-8750-d07a141e37ba a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:07:23 compute-1 nova_compute[225855]: 2026-01-20 15:07:23.807 225859 DEBUG nova.scheduler.client.report [None req-8c89f417-ee5e-4e39-8750-d07a141e37ba a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:07:23 compute-1 nova_compute[225855]: 2026-01-20 15:07:23.830 225859 DEBUG oslo_concurrency.lockutils [None req-8c89f417-ee5e-4e39-8750-d07a141e37ba a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.552s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:07:23 compute-1 nova_compute[225855]: 2026-01-20 15:07:23.858 225859 INFO nova.scheduler.client.report [None req-8c89f417-ee5e-4e39-8750-d07a141e37ba a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] Deleted allocations for instance 5f8a2718-2106-431c-82c1-2609a52e7fb2
Jan 20 15:07:23 compute-1 nova_compute[225855]: 2026-01-20 15:07:23.916 225859 DEBUG oslo_concurrency.lockutils [None req-8c89f417-ee5e-4e39-8750-d07a141e37ba a2beb3d6247e457abd6e8d93cc602f02 5e161d5a47f845fd89eb3f10627a0830 - - default default] Lock "5f8a2718-2106-431c-82c1-2609a52e7fb2" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.721s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:07:24 compute-1 nova_compute[225855]: 2026-01-20 15:07:24.113 225859 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1768921629.1123605, 5feeb9de-434b-4ec7-aa99-6da718514c6f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:07:24 compute-1 nova_compute[225855]: 2026-01-20 15:07:24.113 225859 INFO nova.compute.manager [-] [instance: 5feeb9de-434b-4ec7-aa99-6da718514c6f] VM Stopped (Lifecycle Event)
Jan 20 15:07:24 compute-1 nova_compute[225855]: 2026-01-20 15:07:24.141 225859 DEBUG nova.compute.manager [None req-6fd29b76-ca57-4c38-a508-7c49c42ac1ba - - - - - -] [instance: 5feeb9de-434b-4ec7-aa99-6da718514c6f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:07:24 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:07:24 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:07:24 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:07:24.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:07:24 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/1342745173' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:07:24 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:07:24 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:07:24 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:07:24.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:07:25 compute-1 ceph-mon[81775]: pgmap v2508: 321 pgs: 321 active+clean; 386 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 36 KiB/s wr, 282 op/s
Jan 20 15:07:25 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e370 e370: 3 total, 3 up, 3 in
Jan 20 15:07:26 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:07:26 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:07:26 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:07:26.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:07:26 compute-1 nova_compute[225855]: 2026-01-20 15:07:26.676 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:07:26 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:07:26 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:07:26 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:07:26.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:07:26 compute-1 ceph-mon[81775]: osdmap e370: 3 total, 3 up, 3 in
Jan 20 15:07:26 compute-1 ceph-mon[81775]: pgmap v2510: 321 pgs: 321 active+clean; 306 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 31 KiB/s wr, 288 op/s
Jan 20 15:07:27 compute-1 nova_compute[225855]: 2026-01-20 15:07:27.490 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:07:28 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e370 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:07:28 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:07:28 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:07:28 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:07:28.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:07:28 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/479322346' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:07:28 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:07:28 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:07:28 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:07:28.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:07:29 compute-1 ceph-mon[81775]: pgmap v2511: 321 pgs: 321 active+clean; 306 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 11 KiB/s wr, 231 op/s
Jan 20 15:07:30 compute-1 podman[293475]: 2026-01-20 15:07:30.045547087 +0000 UTC m=+0.084292503 container health_status 72be95c12e041eb2cd1a16ad70dfa1391d9f03a5f3180a614ead9459dcfcd46f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS 
Stream 9 Base Image, container_name=ovn_controller)
Jan 20 15:07:30 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:07:30 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:07:30 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:07:30.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:07:30 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:07:30 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:07:30 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:07:30.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:07:31 compute-1 ceph-mon[81775]: pgmap v2512: 321 pgs: 321 active+clean; 224 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 11 KiB/s wr, 187 op/s
Jan 20 15:07:31 compute-1 nova_compute[225855]: 2026-01-20 15:07:31.709 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:07:32 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:07:32 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:07:32 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:07:32.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:07:32 compute-1 nova_compute[225855]: 2026-01-20 15:07:32.491 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:07:32 compute-1 nova_compute[225855]: 2026-01-20 15:07:32.570 225859 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1768921637.5692608, 33ba7a73-3233-40a3-a49a-e5bbd604dc3c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:07:32 compute-1 nova_compute[225855]: 2026-01-20 15:07:32.570 225859 INFO nova.compute.manager [-] [instance: 33ba7a73-3233-40a3-a49a-e5bbd604dc3c] VM Stopped (Lifecycle Event)
Jan 20 15:07:32 compute-1 nova_compute[225855]: 2026-01-20 15:07:32.593 225859 DEBUG nova.compute.manager [None req-0a64a240-fdde-4ce5-8bcd-4a6765e48bb8 - - - - - -] [instance: 33ba7a73-3233-40a3-a49a-e5bbd604dc3c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:07:32 compute-1 ceph-mon[81775]: pgmap v2513: 321 pgs: 321 active+clean; 200 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 11 KiB/s wr, 181 op/s
Jan 20 15:07:32 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:07:32 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:07:32 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:07:32.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:07:33 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e370 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:07:33 compute-1 nova_compute[225855]: 2026-01-20 15:07:33.386 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:07:33 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 15:07:33 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1685285972' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:07:34 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:07:34 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:07:34 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:07:34.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:07:34 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/1685285972' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:07:34 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:07:34 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:07:34 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:07:34.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:07:35 compute-1 ceph-mon[81775]: pgmap v2514: 321 pgs: 321 active+clean; 200 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 10 KiB/s wr, 146 op/s
Jan 20 15:07:36 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:07:36 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:07:36 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:07:36.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:07:36 compute-1 nova_compute[225855]: 2026-01-20 15:07:36.636 225859 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1768921641.635253, 5f8a2718-2106-431c-82c1-2609a52e7fb2 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:07:36 compute-1 nova_compute[225855]: 2026-01-20 15:07:36.636 225859 INFO nova.compute.manager [-] [instance: 5f8a2718-2106-431c-82c1-2609a52e7fb2] VM Stopped (Lifecycle Event)
Jan 20 15:07:36 compute-1 nova_compute[225855]: 2026-01-20 15:07:36.660 225859 DEBUG nova.compute.manager [None req-d30b17a3-b490-4e23-a6ca-ea8250130e4d - - - - - -] [instance: 5f8a2718-2106-431c-82c1-2609a52e7fb2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:07:36 compute-1 nova_compute[225855]: 2026-01-20 15:07:36.712 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:07:36 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:07:36 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:07:36 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:07:36.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:07:37 compute-1 ceph-mon[81775]: pgmap v2515: 321 pgs: 321 active+clean; 200 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 28 KiB/s rd, 3.0 KiB/s wr, 41 op/s
Jan 20 15:07:37 compute-1 nova_compute[225855]: 2026-01-20 15:07:37.492 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:07:37 compute-1 sudo[293505]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:07:37 compute-1 sudo[293505]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:07:37 compute-1 sudo[293505]: pam_unix(sudo:session): session closed for user root
Jan 20 15:07:38 compute-1 sudo[293530]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:07:38 compute-1 sudo[293530]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:07:38 compute-1 sudo[293530]: pam_unix(sudo:session): session closed for user root
Jan 20 15:07:38 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e370 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:07:38 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:07:38 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:07:38 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:07:38.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:07:38 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:07:38 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:07:38 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:07:38.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:07:39 compute-1 ceph-mon[81775]: pgmap v2516: 321 pgs: 321 active+clean; 200 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 24 KiB/s rd, 2.6 KiB/s wr, 36 op/s
Jan 20 15:07:40 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:07:40 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:07:40 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:07:40.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:07:40 compute-1 ceph-mon[81775]: pgmap v2517: 321 pgs: 321 active+clean; 200 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 24 KiB/s rd, 4.2 KiB/s wr, 36 op/s
Jan 20 15:07:40 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:07:40 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:07:40 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:07:40.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:07:41 compute-1 nova_compute[225855]: 2026-01-20 15:07:41.714 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:07:42 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:07:42 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:07:42 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:07:42.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:07:42 compute-1 nova_compute[225855]: 2026-01-20 15:07:42.493 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:07:42 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:07:42 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:07:42 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:07:42.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:07:43 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e370 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:07:43 compute-1 ceph-mon[81775]: pgmap v2518: 321 pgs: 321 active+clean; 200 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 26 KiB/s rd, 2.2 KiB/s wr, 7 op/s
Jan 20 15:07:44 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:07:44 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:07:44 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:07:44.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:07:44 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:07:44 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:07:44 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:07:44.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:07:45 compute-1 podman[293558]: 2026-01-20 15:07:45.015772863 +0000 UTC m=+0.057437231 container health_status 533cd8e0c3b8b7f910f15cef49e7f879ab46c12ce646a35628c15f13725eb500 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 20 15:07:45 compute-1 ceph-mon[81775]: pgmap v2519: 321 pgs: 321 active+clean; 177 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 23 KiB/s rd, 2.7 KiB/s wr, 3 op/s
Jan 20 15:07:45 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/2631843235' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:07:46 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:07:46 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:07:46 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:07:46.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:07:46 compute-1 nova_compute[225855]: 2026-01-20 15:07:46.716 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:07:46 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:07:46 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:07:46 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:07:46.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:07:47 compute-1 ceph-mon[81775]: pgmap v2520: 321 pgs: 321 active+clean; 121 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 42 KiB/s rd, 5.2 KiB/s wr, 31 op/s
Jan 20 15:07:47 compute-1 nova_compute[225855]: 2026-01-20 15:07:47.495 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:07:47 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 20 15:07:47 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2739921256' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:07:47 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 20 15:07:47 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2739921256' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:07:48 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e370 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:07:48 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:07:48 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:07:48 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:07:48.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:07:48 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/2739921256' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:07:48 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/2739921256' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:07:48 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:07:48 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:07:48 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:07:48.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:07:49 compute-1 ceph-mon[81775]: pgmap v2521: 321 pgs: 321 active+clean; 121 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 40 KiB/s rd, 5.2 KiB/s wr, 28 op/s
Jan 20 15:07:50 compute-1 nova_compute[225855]: 2026-01-20 15:07:50.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:07:50 compute-1 nova_compute[225855]: 2026-01-20 15:07:50.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:07:50 compute-1 nova_compute[225855]: 2026-01-20 15:07:50.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:07:50 compute-1 nova_compute[225855]: 2026-01-20 15:07:50.339 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 15:07:50 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:07:50 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:07:50 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:07:50.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:07:50 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:07:50 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:07:50 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:07:50.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:07:51 compute-1 ceph-mon[81775]: pgmap v2522: 321 pgs: 321 active+clean; 121 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 40 KiB/s rd, 5.2 KiB/s wr, 29 op/s
Jan 20 15:07:51 compute-1 nova_compute[225855]: 2026-01-20 15:07:51.719 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:07:52 compute-1 nova_compute[225855]: 2026-01-20 15:07:52.340 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:07:52 compute-1 nova_compute[225855]: 2026-01-20 15:07:52.340 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 15:07:52 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:07:52 compute-1 rsyslogd[1002]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 20 15:07:52 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:07:52 compute-1 rsyslogd[1002]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 20 15:07:52 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:07:52.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:07:52 compute-1 nova_compute[225855]: 2026-01-20 15:07:52.403 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 20 15:07:52 compute-1 nova_compute[225855]: 2026-01-20 15:07:52.404 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:07:52 compute-1 nova_compute[225855]: 2026-01-20 15:07:52.497 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:07:52 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:07:52 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:07:52 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:07:52.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:07:53 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e370 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:07:53 compute-1 nova_compute[225855]: 2026-01-20 15:07:53.340 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:07:53 compute-1 ceph-mon[81775]: pgmap v2523: 321 pgs: 321 active+clean; 120 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 51 KiB/s rd, 4.1 KiB/s wr, 43 op/s
Jan 20 15:07:54 compute-1 sudo[293584]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:07:54 compute-1 sudo[293584]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:07:54 compute-1 sudo[293584]: pam_unix(sudo:session): session closed for user root
Jan 20 15:07:54 compute-1 sudo[293609]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:07:54 compute-1 sudo[293609]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:07:54 compute-1 sudo[293609]: pam_unix(sudo:session): session closed for user root
Jan 20 15:07:54 compute-1 sudo[293634]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:07:54 compute-1 sudo[293634]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:07:54 compute-1 sudo[293634]: pam_unix(sudo:session): session closed for user root
Jan 20 15:07:54 compute-1 sudo[293659]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 20 15:07:54 compute-1 sudo[293659]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:07:54 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:07:54 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:07:54 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:07:54.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:07:54 compute-1 ceph-mon[81775]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 20 15:07:54 compute-1 ceph-mon[81775]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 4200.0 total, 600.0 interval
                                           Cumulative writes: 12K writes, 61K keys, 12K commit groups, 1.0 writes per commit group, ingest: 0.12 GB, 0.03 MB/s
                                           Cumulative WAL: 12K writes, 12K syncs, 1.00 writes per sync, written: 0.12 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1705 writes, 8440 keys, 1705 commit groups, 1.0 writes per commit group, ingest: 16.78 MB, 0.03 MB/s
                                           Interval WAL: 1705 writes, 1705 syncs, 1.00 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     76.4      0.96              0.25        37    0.026       0      0       0.0       0.0
                                             L6      1/0   11.07 MB   0.0      0.4     0.1      0.3       0.3      0.0       0.0   4.7    102.8     87.1      3.95              1.11        36    0.110    236K    19K       0.0       0.0
                                            Sum      1/0   11.07 MB   0.0      0.4     0.1      0.3       0.4      0.1       0.0   5.7     82.7     85.0      4.91              1.36        73    0.067    236K    19K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   7.7     80.0     81.3      0.99              0.23        12    0.083     53K   3158       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.4     0.1      0.3       0.3      0.0       0.0   0.0    102.8     87.1      3.95              1.11        36    0.110    236K    19K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     76.6      0.96              0.25        36    0.027       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.7      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 4200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.072, interval 0.010
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.41 GB write, 0.10 MB/s write, 0.40 GB read, 0.10 MB/s read, 4.9 seconds
                                           Interval compaction: 0.08 GB write, 0.13 MB/s write, 0.08 GB read, 0.13 MB/s read, 1.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564d515a71f0#2 capacity: 304.00 MB usage: 46.39 MB table_size: 0 occupancy: 18446744073709551615 collections: 8 last_copies: 0 last_secs: 0.000325 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2665,44.63 MB,14.6812%) FilterBlock(73,666.36 KB,0.21406%) IndexBlock(73,1.10 MB,0.363054%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Jan 20 15:07:54 compute-1 sudo[293659]: pam_unix(sudo:session): session closed for user root
Jan 20 15:07:54 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:07:54 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:07:54 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:07:54.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:07:55 compute-1 ceph-mon[81775]: pgmap v2524: 321 pgs: 321 active+clean; 120 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 30 KiB/s rd, 4.1 KiB/s wr, 43 op/s
Jan 20 15:07:55 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:07:55 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 15:07:55 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:07:55 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 15:07:55 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 15:07:55 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:07:55 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/1077560602' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:07:56 compute-1 nova_compute[225855]: 2026-01-20 15:07:56.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:07:56 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:07:56 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:07:56 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:07:56.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:07:56 compute-1 nova_compute[225855]: 2026-01-20 15:07:56.386 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:07:56 compute-1 nova_compute[225855]: 2026-01-20 15:07:56.387 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:07:56 compute-1 nova_compute[225855]: 2026-01-20 15:07:56.387 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:07:56 compute-1 nova_compute[225855]: 2026-01-20 15:07:56.387 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 15:07:56 compute-1 nova_compute[225855]: 2026-01-20 15:07:56.388 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:07:56 compute-1 nova_compute[225855]: 2026-01-20 15:07:56.723 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:07:56 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:07:56 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:07:56 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:07:56.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:07:56 compute-1 nova_compute[225855]: 2026-01-20 15:07:56.794 225859 DEBUG oslo_concurrency.lockutils [None req-1b5afbb8-83be-4882-a4ba-3dcc1a91e616 27658864f96d453586dd0846a4c55b7d fc74c4a296554866969b05aef75252af - - default default] Acquiring lock "a25af5a3-096f-4363-842e-d960c22eb16b" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:07:56 compute-1 nova_compute[225855]: 2026-01-20 15:07:56.795 225859 DEBUG oslo_concurrency.lockutils [None req-1b5afbb8-83be-4882-a4ba-3dcc1a91e616 27658864f96d453586dd0846a4c55b7d fc74c4a296554866969b05aef75252af - - default default] Lock "a25af5a3-096f-4363-842e-d960c22eb16b" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:07:56 compute-1 nova_compute[225855]: 2026-01-20 15:07:56.815 225859 DEBUG nova.compute.manager [None req-1b5afbb8-83be-4882-a4ba-3dcc1a91e616 27658864f96d453586dd0846a4c55b7d fc74c4a296554866969b05aef75252af - - default default] [instance: a25af5a3-096f-4363-842e-d960c22eb16b] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 20 15:07:56 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:07:56 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1416750325' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:07:56 compute-1 nova_compute[225855]: 2026-01-20 15:07:56.863 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:07:56 compute-1 nova_compute[225855]: 2026-01-20 15:07:56.902 225859 DEBUG oslo_concurrency.lockutils [None req-1b5afbb8-83be-4882-a4ba-3dcc1a91e616 27658864f96d453586dd0846a4c55b7d fc74c4a296554866969b05aef75252af - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:07:56 compute-1 nova_compute[225855]: 2026-01-20 15:07:56.903 225859 DEBUG oslo_concurrency.lockutils [None req-1b5afbb8-83be-4882-a4ba-3dcc1a91e616 27658864f96d453586dd0846a4c55b7d fc74c4a296554866969b05aef75252af - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:07:56 compute-1 nova_compute[225855]: 2026-01-20 15:07:56.910 225859 DEBUG nova.virt.hardware [None req-1b5afbb8-83be-4882-a4ba-3dcc1a91e616 27658864f96d453586dd0846a4c55b7d fc74c4a296554866969b05aef75252af - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 20 15:07:56 compute-1 nova_compute[225855]: 2026-01-20 15:07:56.911 225859 INFO nova.compute.claims [None req-1b5afbb8-83be-4882-a4ba-3dcc1a91e616 27658864f96d453586dd0846a4c55b7d fc74c4a296554866969b05aef75252af - - default default] [instance: a25af5a3-096f-4363-842e-d960c22eb16b] Claim successful on node compute-1.ctlplane.example.com
Jan 20 15:07:57 compute-1 nova_compute[225855]: 2026-01-20 15:07:57.032 225859 DEBUG oslo_concurrency.processutils [None req-1b5afbb8-83be-4882-a4ba-3dcc1a91e616 27658864f96d453586dd0846a4c55b7d fc74c4a296554866969b05aef75252af - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:07:57 compute-1 nova_compute[225855]: 2026-01-20 15:07:57.118 225859 WARNING nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 15:07:57 compute-1 nova_compute[225855]: 2026-01-20 15:07:57.119 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4309MB free_disk=20.986278533935547GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 15:07:57 compute-1 nova_compute[225855]: 2026-01-20 15:07:57.120 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:07:57 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:07:57 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3044378009' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:07:57 compute-1 nova_compute[225855]: 2026-01-20 15:07:57.469 225859 DEBUG oslo_concurrency.processutils [None req-1b5afbb8-83be-4882-a4ba-3dcc1a91e616 27658864f96d453586dd0846a4c55b7d fc74c4a296554866969b05aef75252af - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:07:57 compute-1 nova_compute[225855]: 2026-01-20 15:07:57.476 225859 DEBUG nova.compute.provider_tree [None req-1b5afbb8-83be-4882-a4ba-3dcc1a91e616 27658864f96d453586dd0846a4c55b7d fc74c4a296554866969b05aef75252af - - default default] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:07:57 compute-1 nova_compute[225855]: 2026-01-20 15:07:57.496 225859 DEBUG nova.scheduler.client.report [None req-1b5afbb8-83be-4882-a4ba-3dcc1a91e616 27658864f96d453586dd0846a4c55b7d fc74c4a296554866969b05aef75252af - - default default] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:07:57 compute-1 nova_compute[225855]: 2026-01-20 15:07:57.501 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:07:57 compute-1 nova_compute[225855]: 2026-01-20 15:07:57.523 225859 DEBUG oslo_concurrency.lockutils [None req-1b5afbb8-83be-4882-a4ba-3dcc1a91e616 27658864f96d453586dd0846a4c55b7d fc74c4a296554866969b05aef75252af - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.619s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:07:57 compute-1 nova_compute[225855]: 2026-01-20 15:07:57.523 225859 DEBUG nova.compute.manager [None req-1b5afbb8-83be-4882-a4ba-3dcc1a91e616 27658864f96d453586dd0846a4c55b7d fc74c4a296554866969b05aef75252af - - default default] [instance: a25af5a3-096f-4363-842e-d960c22eb16b] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 20 15:07:57 compute-1 nova_compute[225855]: 2026-01-20 15:07:57.526 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.406s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:07:57 compute-1 ceph-mon[81775]: pgmap v2525: 321 pgs: 321 active+clean; 126 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 30 KiB/s rd, 178 KiB/s wr, 42 op/s
Jan 20 15:07:57 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/1416750325' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:07:57 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/1565062573' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:07:57 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/3044378009' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:07:57 compute-1 nova_compute[225855]: 2026-01-20 15:07:57.597 225859 DEBUG nova.compute.manager [None req-1b5afbb8-83be-4882-a4ba-3dcc1a91e616 27658864f96d453586dd0846a4c55b7d fc74c4a296554866969b05aef75252af - - default default] [instance: a25af5a3-096f-4363-842e-d960c22eb16b] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 20 15:07:57 compute-1 nova_compute[225855]: 2026-01-20 15:07:57.598 225859 DEBUG nova.network.neutron [None req-1b5afbb8-83be-4882-a4ba-3dcc1a91e616 27658864f96d453586dd0846a4c55b7d fc74c4a296554866969b05aef75252af - - default default] [instance: a25af5a3-096f-4363-842e-d960c22eb16b] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 20 15:07:57 compute-1 nova_compute[225855]: 2026-01-20 15:07:57.616 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Instance a25af5a3-096f-4363-842e-d960c22eb16b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 20 15:07:57 compute-1 nova_compute[225855]: 2026-01-20 15:07:57.616 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 15:07:57 compute-1 nova_compute[225855]: 2026-01-20 15:07:57.616 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 15:07:57 compute-1 nova_compute[225855]: 2026-01-20 15:07:57.621 225859 INFO nova.virt.libvirt.driver [None req-1b5afbb8-83be-4882-a4ba-3dcc1a91e616 27658864f96d453586dd0846a4c55b7d fc74c4a296554866969b05aef75252af - - default default] [instance: a25af5a3-096f-4363-842e-d960c22eb16b] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 20 15:07:57 compute-1 nova_compute[225855]: 2026-01-20 15:07:57.641 225859 DEBUG nova.compute.manager [None req-1b5afbb8-83be-4882-a4ba-3dcc1a91e616 27658864f96d453586dd0846a4c55b7d fc74c4a296554866969b05aef75252af - - default default] [instance: a25af5a3-096f-4363-842e-d960c22eb16b] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 20 15:07:57 compute-1 nova_compute[225855]: 2026-01-20 15:07:57.671 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:07:57 compute-1 nova_compute[225855]: 2026-01-20 15:07:57.738 225859 DEBUG nova.compute.manager [None req-1b5afbb8-83be-4882-a4ba-3dcc1a91e616 27658864f96d453586dd0846a4c55b7d fc74c4a296554866969b05aef75252af - - default default] [instance: a25af5a3-096f-4363-842e-d960c22eb16b] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 20 15:07:57 compute-1 nova_compute[225855]: 2026-01-20 15:07:57.740 225859 DEBUG nova.virt.libvirt.driver [None req-1b5afbb8-83be-4882-a4ba-3dcc1a91e616 27658864f96d453586dd0846a4c55b7d fc74c4a296554866969b05aef75252af - - default default] [instance: a25af5a3-096f-4363-842e-d960c22eb16b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 20 15:07:57 compute-1 nova_compute[225855]: 2026-01-20 15:07:57.740 225859 INFO nova.virt.libvirt.driver [None req-1b5afbb8-83be-4882-a4ba-3dcc1a91e616 27658864f96d453586dd0846a4c55b7d fc74c4a296554866969b05aef75252af - - default default] [instance: a25af5a3-096f-4363-842e-d960c22eb16b] Creating image(s)
Jan 20 15:07:57 compute-1 nova_compute[225855]: 2026-01-20 15:07:57.765 225859 DEBUG nova.storage.rbd_utils [None req-1b5afbb8-83be-4882-a4ba-3dcc1a91e616 27658864f96d453586dd0846a4c55b7d fc74c4a296554866969b05aef75252af - - default default] rbd image a25af5a3-096f-4363-842e-d960c22eb16b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:07:57 compute-1 nova_compute[225855]: 2026-01-20 15:07:57.789 225859 DEBUG nova.storage.rbd_utils [None req-1b5afbb8-83be-4882-a4ba-3dcc1a91e616 27658864f96d453586dd0846a4c55b7d fc74c4a296554866969b05aef75252af - - default default] rbd image a25af5a3-096f-4363-842e-d960c22eb16b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:07:57 compute-1 nova_compute[225855]: 2026-01-20 15:07:57.818 225859 DEBUG nova.storage.rbd_utils [None req-1b5afbb8-83be-4882-a4ba-3dcc1a91e616 27658864f96d453586dd0846a4c55b7d fc74c4a296554866969b05aef75252af - - default default] rbd image a25af5a3-096f-4363-842e-d960c22eb16b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:07:57 compute-1 nova_compute[225855]: 2026-01-20 15:07:57.822 225859 DEBUG oslo_concurrency.processutils [None req-1b5afbb8-83be-4882-a4ba-3dcc1a91e616 27658864f96d453586dd0846a4c55b7d fc74c4a296554866969b05aef75252af - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:07:57 compute-1 nova_compute[225855]: 2026-01-20 15:07:57.859 225859 DEBUG nova.policy [None req-1b5afbb8-83be-4882-a4ba-3dcc1a91e616 27658864f96d453586dd0846a4c55b7d fc74c4a296554866969b05aef75252af - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '27658864f96d453586dd0846a4c55b7d', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'fc74c4a296554866969b05aef75252af', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 20 15:07:57 compute-1 nova_compute[225855]: 2026-01-20 15:07:57.886 225859 DEBUG oslo_concurrency.processutils [None req-1b5afbb8-83be-4882-a4ba-3dcc1a91e616 27658864f96d453586dd0846a4c55b7d fc74c4a296554866969b05aef75252af - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:07:57 compute-1 nova_compute[225855]: 2026-01-20 15:07:57.887 225859 DEBUG oslo_concurrency.lockutils [None req-1b5afbb8-83be-4882-a4ba-3dcc1a91e616 27658864f96d453586dd0846a4c55b7d fc74c4a296554866969b05aef75252af - - default default] Acquiring lock "82d5c1918fd7c974214c7a48c1793a7a82560462" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:07:57 compute-1 nova_compute[225855]: 2026-01-20 15:07:57.887 225859 DEBUG oslo_concurrency.lockutils [None req-1b5afbb8-83be-4882-a4ba-3dcc1a91e616 27658864f96d453586dd0846a4c55b7d fc74c4a296554866969b05aef75252af - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:07:57 compute-1 nova_compute[225855]: 2026-01-20 15:07:57.888 225859 DEBUG oslo_concurrency.lockutils [None req-1b5afbb8-83be-4882-a4ba-3dcc1a91e616 27658864f96d453586dd0846a4c55b7d fc74c4a296554866969b05aef75252af - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:07:57 compute-1 nova_compute[225855]: 2026-01-20 15:07:57.911 225859 DEBUG nova.storage.rbd_utils [None req-1b5afbb8-83be-4882-a4ba-3dcc1a91e616 27658864f96d453586dd0846a4c55b7d fc74c4a296554866969b05aef75252af - - default default] rbd image a25af5a3-096f-4363-842e-d960c22eb16b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:07:57 compute-1 nova_compute[225855]: 2026-01-20 15:07:57.915 225859 DEBUG oslo_concurrency.processutils [None req-1b5afbb8-83be-4882-a4ba-3dcc1a91e616 27658864f96d453586dd0846a4c55b7d fc74c4a296554866969b05aef75252af - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 a25af5a3-096f-4363-842e-d960c22eb16b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:07:58 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:07:58 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4139490107' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:07:58 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e370 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:07:58 compute-1 nova_compute[225855]: 2026-01-20 15:07:58.110 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:07:58 compute-1 nova_compute[225855]: 2026-01-20 15:07:58.115 225859 DEBUG nova.compute.provider_tree [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:07:58 compute-1 sudo[293875]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:07:58 compute-1 sudo[293875]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:07:58 compute-1 sudo[293875]: pam_unix(sudo:session): session closed for user root
Jan 20 15:07:58 compute-1 nova_compute[225855]: 2026-01-20 15:07:58.148 225859 DEBUG nova.scheduler.client.report [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:07:58 compute-1 sudo[293902]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:07:58 compute-1 sudo[293902]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:07:58 compute-1 sudo[293902]: pam_unix(sudo:session): session closed for user root
Jan 20 15:07:58 compute-1 nova_compute[225855]: 2026-01-20 15:07:58.202 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 15:07:58 compute-1 nova_compute[225855]: 2026-01-20 15:07:58.202 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.676s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:07:58 compute-1 nova_compute[225855]: 2026-01-20 15:07:58.272 225859 DEBUG oslo_concurrency.processutils [None req-1b5afbb8-83be-4882-a4ba-3dcc1a91e616 27658864f96d453586dd0846a4c55b7d fc74c4a296554866969b05aef75252af - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 a25af5a3-096f-4363-842e-d960c22eb16b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.357s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:07:58 compute-1 nova_compute[225855]: 2026-01-20 15:07:58.343 225859 DEBUG nova.storage.rbd_utils [None req-1b5afbb8-83be-4882-a4ba-3dcc1a91e616 27658864f96d453586dd0846a4c55b7d fc74c4a296554866969b05aef75252af - - default default] resizing rbd image a25af5a3-096f-4363-842e-d960c22eb16b_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 20 15:07:58 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:07:58 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:07:58 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:07:58.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:07:58 compute-1 nova_compute[225855]: 2026-01-20 15:07:58.490 225859 DEBUG nova.objects.instance [None req-1b5afbb8-83be-4882-a4ba-3dcc1a91e616 27658864f96d453586dd0846a4c55b7d fc74c4a296554866969b05aef75252af - - default default] Lazy-loading 'migration_context' on Instance uuid a25af5a3-096f-4363-842e-d960c22eb16b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:07:58 compute-1 nova_compute[225855]: 2026-01-20 15:07:58.535 225859 DEBUG nova.virt.libvirt.driver [None req-1b5afbb8-83be-4882-a4ba-3dcc1a91e616 27658864f96d453586dd0846a4c55b7d fc74c4a296554866969b05aef75252af - - default default] [instance: a25af5a3-096f-4363-842e-d960c22eb16b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 20 15:07:58 compute-1 nova_compute[225855]: 2026-01-20 15:07:58.535 225859 DEBUG nova.virt.libvirt.driver [None req-1b5afbb8-83be-4882-a4ba-3dcc1a91e616 27658864f96d453586dd0846a4c55b7d fc74c4a296554866969b05aef75252af - - default default] [instance: a25af5a3-096f-4363-842e-d960c22eb16b] Ensure instance console log exists: /var/lib/nova/instances/a25af5a3-096f-4363-842e-d960c22eb16b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 20 15:07:58 compute-1 nova_compute[225855]: 2026-01-20 15:07:58.536 225859 DEBUG oslo_concurrency.lockutils [None req-1b5afbb8-83be-4882-a4ba-3dcc1a91e616 27658864f96d453586dd0846a4c55b7d fc74c4a296554866969b05aef75252af - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:07:58 compute-1 nova_compute[225855]: 2026-01-20 15:07:58.536 225859 DEBUG oslo_concurrency.lockutils [None req-1b5afbb8-83be-4882-a4ba-3dcc1a91e616 27658864f96d453586dd0846a4c55b7d fc74c4a296554866969b05aef75252af - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:07:58 compute-1 nova_compute[225855]: 2026-01-20 15:07:58.536 225859 DEBUG oslo_concurrency.lockutils [None req-1b5afbb8-83be-4882-a4ba-3dcc1a91e616 27658864f96d453586dd0846a4c55b7d fc74c4a296554866969b05aef75252af - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:07:58 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/694324407' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:07:58 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/3322640165' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:07:58 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/4139490107' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:07:58 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/4227773044' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:07:58 compute-1 nova_compute[225855]: 2026-01-20 15:07:58.638 225859 DEBUG nova.network.neutron [None req-1b5afbb8-83be-4882-a4ba-3dcc1a91e616 27658864f96d453586dd0846a4c55b7d fc74c4a296554866969b05aef75252af - - default default] [instance: a25af5a3-096f-4363-842e-d960c22eb16b] Successfully created port: 6b7cb043-d1f4-4c2b-8173-1e3e2a664767 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 20 15:07:58 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:07:58 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:07:58 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:07:58.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:07:59 compute-1 ceph-mon[81775]: pgmap v2526: 321 pgs: 321 active+clean; 126 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 12 KiB/s rd, 175 KiB/s wr, 15 op/s
Jan 20 15:07:59 compute-1 nova_compute[225855]: 2026-01-20 15:07:59.779 225859 DEBUG nova.network.neutron [None req-1b5afbb8-83be-4882-a4ba-3dcc1a91e616 27658864f96d453586dd0846a4c55b7d fc74c4a296554866969b05aef75252af - - default default] [instance: a25af5a3-096f-4363-842e-d960c22eb16b] Successfully updated port: 6b7cb043-d1f4-4c2b-8173-1e3e2a664767 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 20 15:07:59 compute-1 nova_compute[225855]: 2026-01-20 15:07:59.797 225859 DEBUG oslo_concurrency.lockutils [None req-1b5afbb8-83be-4882-a4ba-3dcc1a91e616 27658864f96d453586dd0846a4c55b7d fc74c4a296554866969b05aef75252af - - default default] Acquiring lock "refresh_cache-a25af5a3-096f-4363-842e-d960c22eb16b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:07:59 compute-1 nova_compute[225855]: 2026-01-20 15:07:59.797 225859 DEBUG oslo_concurrency.lockutils [None req-1b5afbb8-83be-4882-a4ba-3dcc1a91e616 27658864f96d453586dd0846a4c55b7d fc74c4a296554866969b05aef75252af - - default default] Acquired lock "refresh_cache-a25af5a3-096f-4363-842e-d960c22eb16b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:07:59 compute-1 nova_compute[225855]: 2026-01-20 15:07:59.797 225859 DEBUG nova.network.neutron [None req-1b5afbb8-83be-4882-a4ba-3dcc1a91e616 27658864f96d453586dd0846a4c55b7d fc74c4a296554866969b05aef75252af - - default default] [instance: a25af5a3-096f-4363-842e-d960c22eb16b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 15:07:59 compute-1 nova_compute[225855]: 2026-01-20 15:07:59.938 225859 DEBUG nova.compute.manager [req-2bd30d7c-ea96-47b7-a938-f9edecd4020b req-b9f3137b-5f1e-4198-9025-50f15059a5f5 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: a25af5a3-096f-4363-842e-d960c22eb16b] Received event network-changed-6b7cb043-d1f4-4c2b-8173-1e3e2a664767 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:07:59 compute-1 nova_compute[225855]: 2026-01-20 15:07:59.939 225859 DEBUG nova.compute.manager [req-2bd30d7c-ea96-47b7-a938-f9edecd4020b req-b9f3137b-5f1e-4198-9025-50f15059a5f5 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: a25af5a3-096f-4363-842e-d960c22eb16b] Refreshing instance network info cache due to event network-changed-6b7cb043-d1f4-4c2b-8173-1e3e2a664767. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 15:07:59 compute-1 nova_compute[225855]: 2026-01-20 15:07:59.939 225859 DEBUG oslo_concurrency.lockutils [req-2bd30d7c-ea96-47b7-a938-f9edecd4020b req-b9f3137b-5f1e-4198-9025-50f15059a5f5 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-a25af5a3-096f-4363-842e-d960c22eb16b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:08:00 compute-1 nova_compute[225855]: 2026-01-20 15:08:00.011 225859 DEBUG nova.network.neutron [None req-1b5afbb8-83be-4882-a4ba-3dcc1a91e616 27658864f96d453586dd0846a4c55b7d fc74c4a296554866969b05aef75252af - - default default] [instance: a25af5a3-096f-4363-842e-d960c22eb16b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 20 15:08:00 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:08:00 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:08:00 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:08:00.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:08:00 compute-1 ceph-mon[81775]: pgmap v2527: 321 pgs: 321 active+clean; 196 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 29 KiB/s rd, 3.0 MiB/s wr, 45 op/s
Jan 20 15:08:00 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:08:00 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:08:00 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:08:00.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:08:00 compute-1 ceph-mon[81775]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #121. Immutable memtables: 0.
Jan 20 15:08:00 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:08:00.905579) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 20 15:08:00 compute-1 ceph-mon[81775]: rocksdb: [db/flush_job.cc:856] [default] [JOB 75] Flushing memtable with next log file: 121
Jan 20 15:08:00 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768921680905646, "job": 75, "event": "flush_started", "num_memtables": 1, "num_entries": 806, "num_deletes": 253, "total_data_size": 1391325, "memory_usage": 1409968, "flush_reason": "Manual Compaction"}
Jan 20 15:08:00 compute-1 ceph-mon[81775]: rocksdb: [db/flush_job.cc:885] [default] [JOB 75] Level-0 flush table #122: started
Jan 20 15:08:00 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768921680913783, "cf_name": "default", "job": 75, "event": "table_file_creation", "file_number": 122, "file_size": 917779, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 60707, "largest_seqno": 61507, "table_properties": {"data_size": 913954, "index_size": 1605, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1157, "raw_key_size": 8198, "raw_average_key_size": 17, "raw_value_size": 906097, "raw_average_value_size": 1982, "num_data_blocks": 70, "num_entries": 457, "num_filter_entries": 457, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768921632, "oldest_key_time": 1768921632, "file_creation_time": 1768921680, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1539d774-8a6f-4e48-b253-137c44586344", "db_session_id": "LFF7G2OZDOU7TKQ8MKAH", "orig_file_number": 122, "seqno_to_time_mapping": "N/A"}}
Jan 20 15:08:00 compute-1 ceph-mon[81775]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 75] Flush lasted 8236 microseconds, and 3329 cpu microseconds.
Jan 20 15:08:00 compute-1 ceph-mon[81775]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 15:08:00 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:08:00.913826) [db/flush_job.cc:967] [default] [JOB 75] Level-0 flush table #122: 917779 bytes OK
Jan 20 15:08:00 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:08:00.913845) [db/memtable_list.cc:519] [default] Level-0 commit table #122 started
Jan 20 15:08:00 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:08:00.916406) [db/memtable_list.cc:722] [default] Level-0 commit table #122: memtable #1 done
Jan 20 15:08:00 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:08:00.916420) EVENT_LOG_v1 {"time_micros": 1768921680916416, "job": 75, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 20 15:08:00 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:08:00.916440) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 20 15:08:00 compute-1 ceph-mon[81775]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 75] Try to delete WAL files size 1387086, prev total WAL file size 1387086, number of live WAL files 2.
Jan 20 15:08:00 compute-1 ceph-mon[81775]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000118.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 15:08:00 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:08:00.916978) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B7600323536' seq:72057594037927935, type:22 .. '6B7600353037' seq:0, type:0; will stop at (end)
Jan 20 15:08:00 compute-1 ceph-mon[81775]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 76] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 20 15:08:00 compute-1 ceph-mon[81775]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 75 Base level 0, inputs: [122(896KB)], [120(11MB)]
Jan 20 15:08:00 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768921680917035, "job": 76, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [122], "files_L6": [120], "score": -1, "input_data_size": 12524116, "oldest_snapshot_seqno": -1}
Jan 20 15:08:01 compute-1 ceph-mon[81775]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 76] Generated table #123: 8469 keys, 11460969 bytes, temperature: kUnknown
Jan 20 15:08:01 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768921681048742, "cf_name": "default", "job": 76, "event": "table_file_creation", "file_number": 123, "file_size": 11460969, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11405026, "index_size": 33687, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 21189, "raw_key_size": 222677, "raw_average_key_size": 26, "raw_value_size": 11254639, "raw_average_value_size": 1328, "num_data_blocks": 1293, "num_entries": 8469, "num_filter_entries": 8469, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768917474, "oldest_key_time": 0, "file_creation_time": 1768921680, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1539d774-8a6f-4e48-b253-137c44586344", "db_session_id": "LFF7G2OZDOU7TKQ8MKAH", "orig_file_number": 123, "seqno_to_time_mapping": "N/A"}}
Jan 20 15:08:01 compute-1 ceph-mon[81775]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 15:08:01 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:08:01.049066) [db/compaction/compaction_job.cc:1663] [default] [JOB 76] Compacted 1@0 + 1@6 files to L6 => 11460969 bytes
Jan 20 15:08:01 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:08:01.051729) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 95.0 rd, 87.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.9, 11.1 +0.0 blob) out(10.9 +0.0 blob), read-write-amplify(26.1) write-amplify(12.5) OK, records in: 8991, records dropped: 522 output_compression: NoCompression
Jan 20 15:08:01 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:08:01.051754) EVENT_LOG_v1 {"time_micros": 1768921681051743, "job": 76, "event": "compaction_finished", "compaction_time_micros": 131809, "compaction_time_cpu_micros": 29134, "output_level": 6, "num_output_files": 1, "total_output_size": 11460969, "num_input_records": 8991, "num_output_records": 8469, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 20 15:08:01 compute-1 ceph-mon[81775]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000122.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 15:08:01 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768921681052613, "job": 76, "event": "table_file_deletion", "file_number": 122}
Jan 20 15:08:01 compute-1 sudo[294020]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:08:01 compute-1 ceph-mon[81775]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000120.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 15:08:01 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768921681055050, "job": 76, "event": "table_file_deletion", "file_number": 120}
Jan 20 15:08:01 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:08:00.916923) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:08:01 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:08:01.055155) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:08:01 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:08:01.055160) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:08:01 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:08:01.055162) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:08:01 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:08:01.055163) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:08:01 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:08:01.055165) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:08:01 compute-1 sudo[294020]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:08:01 compute-1 sudo[294020]: pam_unix(sudo:session): session closed for user root
Jan 20 15:08:01 compute-1 podman[294000]: 2026-01-20 15:08:01.115866369 +0000 UTC m=+0.160747762 container health_status 72be95c12e041eb2cd1a16ad70dfa1391d9f03a5f3180a614ead9459dcfcd46f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_controller)
Jan 20 15:08:01 compute-1 sudo[294050]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 15:08:01 compute-1 sudo[294050]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:08:01 compute-1 sudo[294050]: pam_unix(sudo:session): session closed for user root
Jan 20 15:08:01 compute-1 nova_compute[225855]: 2026-01-20 15:08:01.197 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:08:01 compute-1 nova_compute[225855]: 2026-01-20 15:08:01.474 225859 DEBUG nova.network.neutron [None req-1b5afbb8-83be-4882-a4ba-3dcc1a91e616 27658864f96d453586dd0846a4c55b7d fc74c4a296554866969b05aef75252af - - default default] [instance: a25af5a3-096f-4363-842e-d960c22eb16b] Updating instance_info_cache with network_info: [{"id": "6b7cb043-d1f4-4c2b-8173-1e3e2a664767", "address": "fa:16:3e:bf:9e:90", "network": {"id": "3967ae21-1590-4685-8881-8bd1bcf25258", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-285441107-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fc74c4a296554866969b05aef75252af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b7cb043-d1", "ovs_interfaceid": "6b7cb043-d1f4-4c2b-8173-1e3e2a664767", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:08:01 compute-1 nova_compute[225855]: 2026-01-20 15:08:01.496 225859 DEBUG oslo_concurrency.lockutils [None req-1b5afbb8-83be-4882-a4ba-3dcc1a91e616 27658864f96d453586dd0846a4c55b7d fc74c4a296554866969b05aef75252af - - default default] Releasing lock "refresh_cache-a25af5a3-096f-4363-842e-d960c22eb16b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:08:01 compute-1 nova_compute[225855]: 2026-01-20 15:08:01.497 225859 DEBUG nova.compute.manager [None req-1b5afbb8-83be-4882-a4ba-3dcc1a91e616 27658864f96d453586dd0846a4c55b7d fc74c4a296554866969b05aef75252af - - default default] [instance: a25af5a3-096f-4363-842e-d960c22eb16b] Instance network_info: |[{"id": "6b7cb043-d1f4-4c2b-8173-1e3e2a664767", "address": "fa:16:3e:bf:9e:90", "network": {"id": "3967ae21-1590-4685-8881-8bd1bcf25258", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-285441107-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fc74c4a296554866969b05aef75252af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b7cb043-d1", "ovs_interfaceid": "6b7cb043-d1f4-4c2b-8173-1e3e2a664767", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 20 15:08:01 compute-1 nova_compute[225855]: 2026-01-20 15:08:01.497 225859 DEBUG oslo_concurrency.lockutils [req-2bd30d7c-ea96-47b7-a938-f9edecd4020b req-b9f3137b-5f1e-4198-9025-50f15059a5f5 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-a25af5a3-096f-4363-842e-d960c22eb16b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:08:01 compute-1 nova_compute[225855]: 2026-01-20 15:08:01.498 225859 DEBUG nova.network.neutron [req-2bd30d7c-ea96-47b7-a938-f9edecd4020b req-b9f3137b-5f1e-4198-9025-50f15059a5f5 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: a25af5a3-096f-4363-842e-d960c22eb16b] Refreshing network info cache for port 6b7cb043-d1f4-4c2b-8173-1e3e2a664767 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 15:08:01 compute-1 nova_compute[225855]: 2026-01-20 15:08:01.501 225859 DEBUG nova.virt.libvirt.driver [None req-1b5afbb8-83be-4882-a4ba-3dcc1a91e616 27658864f96d453586dd0846a4c55b7d fc74c4a296554866969b05aef75252af - - default default] [instance: a25af5a3-096f-4363-842e-d960c22eb16b] Start _get_guest_xml network_info=[{"id": "6b7cb043-d1f4-4c2b-8173-1e3e2a664767", "address": "fa:16:3e:bf:9e:90", "network": {"id": "3967ae21-1590-4685-8881-8bd1bcf25258", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-285441107-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fc74c4a296554866969b05aef75252af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b7cb043-d1", "ovs_interfaceid": "6b7cb043-d1f4-4c2b-8173-1e3e2a664767", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'device_type': 'disk', 'encryption_options': None, 'size': 0, 'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'guest_format': None, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'image_id': 'a32b3e07-16d8-46fd-9a7b-c242c432fcf9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 20 15:08:01 compute-1 nova_compute[225855]: 2026-01-20 15:08:01.506 225859 WARNING nova.virt.libvirt.driver [None req-1b5afbb8-83be-4882-a4ba-3dcc1a91e616 27658864f96d453586dd0846a4c55b7d fc74c4a296554866969b05aef75252af - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 15:08:01 compute-1 nova_compute[225855]: 2026-01-20 15:08:01.510 225859 DEBUG nova.virt.libvirt.host [None req-1b5afbb8-83be-4882-a4ba-3dcc1a91e616 27658864f96d453586dd0846a4c55b7d fc74c4a296554866969b05aef75252af - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 20 15:08:01 compute-1 nova_compute[225855]: 2026-01-20 15:08:01.511 225859 DEBUG nova.virt.libvirt.host [None req-1b5afbb8-83be-4882-a4ba-3dcc1a91e616 27658864f96d453586dd0846a4c55b7d fc74c4a296554866969b05aef75252af - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 20 15:08:01 compute-1 nova_compute[225855]: 2026-01-20 15:08:01.513 225859 DEBUG nova.virt.libvirt.host [None req-1b5afbb8-83be-4882-a4ba-3dcc1a91e616 27658864f96d453586dd0846a4c55b7d fc74c4a296554866969b05aef75252af - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 20 15:08:01 compute-1 nova_compute[225855]: 2026-01-20 15:08:01.513 225859 DEBUG nova.virt.libvirt.host [None req-1b5afbb8-83be-4882-a4ba-3dcc1a91e616 27658864f96d453586dd0846a4c55b7d fc74c4a296554866969b05aef75252af - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 20 15:08:01 compute-1 nova_compute[225855]: 2026-01-20 15:08:01.515 225859 DEBUG nova.virt.libvirt.driver [None req-1b5afbb8-83be-4882-a4ba-3dcc1a91e616 27658864f96d453586dd0846a4c55b7d fc74c4a296554866969b05aef75252af - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 20 15:08:01 compute-1 nova_compute[225855]: 2026-01-20 15:08:01.515 225859 DEBUG nova.virt.hardware [None req-1b5afbb8-83be-4882-a4ba-3dcc1a91e616 27658864f96d453586dd0846a4c55b7d fc74c4a296554866969b05aef75252af - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-20T14:21:55Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='522deaab-a741-4dbb-932d-d8b13a211c33',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 20 15:08:01 compute-1 nova_compute[225855]: 2026-01-20 15:08:01.515 225859 DEBUG nova.virt.hardware [None req-1b5afbb8-83be-4882-a4ba-3dcc1a91e616 27658864f96d453586dd0846a4c55b7d fc74c4a296554866969b05aef75252af - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 20 15:08:01 compute-1 nova_compute[225855]: 2026-01-20 15:08:01.516 225859 DEBUG nova.virt.hardware [None req-1b5afbb8-83be-4882-a4ba-3dcc1a91e616 27658864f96d453586dd0846a4c55b7d fc74c4a296554866969b05aef75252af - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 20 15:08:01 compute-1 nova_compute[225855]: 2026-01-20 15:08:01.516 225859 DEBUG nova.virt.hardware [None req-1b5afbb8-83be-4882-a4ba-3dcc1a91e616 27658864f96d453586dd0846a4c55b7d fc74c4a296554866969b05aef75252af - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 20 15:08:01 compute-1 nova_compute[225855]: 2026-01-20 15:08:01.516 225859 DEBUG nova.virt.hardware [None req-1b5afbb8-83be-4882-a4ba-3dcc1a91e616 27658864f96d453586dd0846a4c55b7d fc74c4a296554866969b05aef75252af - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 20 15:08:01 compute-1 nova_compute[225855]: 2026-01-20 15:08:01.517 225859 DEBUG nova.virt.hardware [None req-1b5afbb8-83be-4882-a4ba-3dcc1a91e616 27658864f96d453586dd0846a4c55b7d fc74c4a296554866969b05aef75252af - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 20 15:08:01 compute-1 nova_compute[225855]: 2026-01-20 15:08:01.517 225859 DEBUG nova.virt.hardware [None req-1b5afbb8-83be-4882-a4ba-3dcc1a91e616 27658864f96d453586dd0846a4c55b7d fc74c4a296554866969b05aef75252af - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 20 15:08:01 compute-1 nova_compute[225855]: 2026-01-20 15:08:01.517 225859 DEBUG nova.virt.hardware [None req-1b5afbb8-83be-4882-a4ba-3dcc1a91e616 27658864f96d453586dd0846a4c55b7d fc74c4a296554866969b05aef75252af - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 20 15:08:01 compute-1 nova_compute[225855]: 2026-01-20 15:08:01.517 225859 DEBUG nova.virt.hardware [None req-1b5afbb8-83be-4882-a4ba-3dcc1a91e616 27658864f96d453586dd0846a4c55b7d fc74c4a296554866969b05aef75252af - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 20 15:08:01 compute-1 nova_compute[225855]: 2026-01-20 15:08:01.518 225859 DEBUG nova.virt.hardware [None req-1b5afbb8-83be-4882-a4ba-3dcc1a91e616 27658864f96d453586dd0846a4c55b7d fc74c4a296554866969b05aef75252af - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 20 15:08:01 compute-1 nova_compute[225855]: 2026-01-20 15:08:01.518 225859 DEBUG nova.virt.hardware [None req-1b5afbb8-83be-4882-a4ba-3dcc1a91e616 27658864f96d453586dd0846a4c55b7d fc74c4a296554866969b05aef75252af - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 20 15:08:01 compute-1 nova_compute[225855]: 2026-01-20 15:08:01.521 225859 DEBUG oslo_concurrency.processutils [None req-1b5afbb8-83be-4882-a4ba-3dcc1a91e616 27658864f96d453586dd0846a4c55b7d fc74c4a296554866969b05aef75252af - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:08:01 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/537198121' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:08:01 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:08:01 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:08:01 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/3470223084' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:08:01 compute-1 nova_compute[225855]: 2026-01-20 15:08:01.726 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:08:01 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 15:08:01 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2382603208' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:08:01 compute-1 nova_compute[225855]: 2026-01-20 15:08:01.992 225859 DEBUG oslo_concurrency.processutils [None req-1b5afbb8-83be-4882-a4ba-3dcc1a91e616 27658864f96d453586dd0846a4c55b7d fc74c4a296554866969b05aef75252af - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:08:02 compute-1 nova_compute[225855]: 2026-01-20 15:08:02.017 225859 DEBUG nova.storage.rbd_utils [None req-1b5afbb8-83be-4882-a4ba-3dcc1a91e616 27658864f96d453586dd0846a4c55b7d fc74c4a296554866969b05aef75252af - - default default] rbd image a25af5a3-096f-4363-842e-d960c22eb16b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:08:02 compute-1 nova_compute[225855]: 2026-01-20 15:08:02.020 225859 DEBUG oslo_concurrency.processutils [None req-1b5afbb8-83be-4882-a4ba-3dcc1a91e616 27658864f96d453586dd0846a4c55b7d fc74c4a296554866969b05aef75252af - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:08:02 compute-1 nova_compute[225855]: 2026-01-20 15:08:02.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:08:02 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:08:02 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:08:02 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:08:02.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:08:02 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 15:08:02 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1789117815' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:08:02 compute-1 nova_compute[225855]: 2026-01-20 15:08:02.435 225859 DEBUG oslo_concurrency.processutils [None req-1b5afbb8-83be-4882-a4ba-3dcc1a91e616 27658864f96d453586dd0846a4c55b7d fc74c4a296554866969b05aef75252af - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.415s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:08:02 compute-1 nova_compute[225855]: 2026-01-20 15:08:02.438 225859 DEBUG nova.virt.libvirt.vif [None req-1b5afbb8-83be-4882-a4ba-3dcc1a91e616 27658864f96d453586dd0846a4c55b7d fc74c4a296554866969b05aef75252af - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T15:07:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueNegativeTestJSON-server-950743647',display_name='tempest-ServerRescueNegativeTestJSON-server-950743647',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverrescuenegativetestjson-server-950743647',id=168,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='fc74c4a296554866969b05aef75252af',ramdisk_id='',reservation_id='r-gkbc59sh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueNegativeTestJSON-1649662639',owner_user_name='tempest-ServerRescueNegativeTestJSON-1649662639-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T15:07:57Z,user_data=None,user_id='27658864f96d453586dd0846a4c55b7d',uuid=a25af5a3-096f-4363-842e-d960c22eb16b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6b7cb043-d1f4-4c2b-8173-1e3e2a664767", "address": "fa:16:3e:bf:9e:90", "network": {"id": "3967ae21-1590-4685-8881-8bd1bcf25258", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-285441107-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fc74c4a296554866969b05aef75252af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b7cb043-d1", "ovs_interfaceid": "6b7cb043-d1f4-4c2b-8173-1e3e2a664767", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 20 15:08:02 compute-1 nova_compute[225855]: 2026-01-20 15:08:02.439 225859 DEBUG nova.network.os_vif_util [None req-1b5afbb8-83be-4882-a4ba-3dcc1a91e616 27658864f96d453586dd0846a4c55b7d fc74c4a296554866969b05aef75252af - - default default] Converting VIF {"id": "6b7cb043-d1f4-4c2b-8173-1e3e2a664767", "address": "fa:16:3e:bf:9e:90", "network": {"id": "3967ae21-1590-4685-8881-8bd1bcf25258", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-285441107-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fc74c4a296554866969b05aef75252af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b7cb043-d1", "ovs_interfaceid": "6b7cb043-d1f4-4c2b-8173-1e3e2a664767", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 15:08:02 compute-1 nova_compute[225855]: 2026-01-20 15:08:02.440 225859 DEBUG nova.network.os_vif_util [None req-1b5afbb8-83be-4882-a4ba-3dcc1a91e616 27658864f96d453586dd0846a4c55b7d fc74c4a296554866969b05aef75252af - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bf:9e:90,bridge_name='br-int',has_traffic_filtering=True,id=6b7cb043-d1f4-4c2b-8173-1e3e2a664767,network=Network(3967ae21-1590-4685-8881-8bd1bcf25258),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6b7cb043-d1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 15:08:02 compute-1 nova_compute[225855]: 2026-01-20 15:08:02.442 225859 DEBUG nova.objects.instance [None req-1b5afbb8-83be-4882-a4ba-3dcc1a91e616 27658864f96d453586dd0846a4c55b7d fc74c4a296554866969b05aef75252af - - default default] Lazy-loading 'pci_devices' on Instance uuid a25af5a3-096f-4363-842e-d960c22eb16b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:08:02 compute-1 nova_compute[225855]: 2026-01-20 15:08:02.460 225859 DEBUG nova.virt.libvirt.driver [None req-1b5afbb8-83be-4882-a4ba-3dcc1a91e616 27658864f96d453586dd0846a4c55b7d fc74c4a296554866969b05aef75252af - - default default] [instance: a25af5a3-096f-4363-842e-d960c22eb16b] End _get_guest_xml xml=<domain type="kvm">
Jan 20 15:08:02 compute-1 nova_compute[225855]:   <uuid>a25af5a3-096f-4363-842e-d960c22eb16b</uuid>
Jan 20 15:08:02 compute-1 nova_compute[225855]:   <name>instance-000000a8</name>
Jan 20 15:08:02 compute-1 nova_compute[225855]:   <memory>131072</memory>
Jan 20 15:08:02 compute-1 nova_compute[225855]:   <vcpu>1</vcpu>
Jan 20 15:08:02 compute-1 nova_compute[225855]:   <metadata>
Jan 20 15:08:02 compute-1 nova_compute[225855]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 15:08:02 compute-1 nova_compute[225855]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 15:08:02 compute-1 nova_compute[225855]:       <nova:name>tempest-ServerRescueNegativeTestJSON-server-950743647</nova:name>
Jan 20 15:08:02 compute-1 nova_compute[225855]:       <nova:creationTime>2026-01-20 15:08:01</nova:creationTime>
Jan 20 15:08:02 compute-1 nova_compute[225855]:       <nova:flavor name="m1.nano">
Jan 20 15:08:02 compute-1 nova_compute[225855]:         <nova:memory>128</nova:memory>
Jan 20 15:08:02 compute-1 nova_compute[225855]:         <nova:disk>1</nova:disk>
Jan 20 15:08:02 compute-1 nova_compute[225855]:         <nova:swap>0</nova:swap>
Jan 20 15:08:02 compute-1 nova_compute[225855]:         <nova:ephemeral>0</nova:ephemeral>
Jan 20 15:08:02 compute-1 nova_compute[225855]:         <nova:vcpus>1</nova:vcpus>
Jan 20 15:08:02 compute-1 nova_compute[225855]:       </nova:flavor>
Jan 20 15:08:02 compute-1 nova_compute[225855]:       <nova:owner>
Jan 20 15:08:02 compute-1 nova_compute[225855]:         <nova:user uuid="27658864f96d453586dd0846a4c55b7d">tempest-ServerRescueNegativeTestJSON-1649662639-project-member</nova:user>
Jan 20 15:08:02 compute-1 nova_compute[225855]:         <nova:project uuid="fc74c4a296554866969b05aef75252af">tempest-ServerRescueNegativeTestJSON-1649662639</nova:project>
Jan 20 15:08:02 compute-1 nova_compute[225855]:       </nova:owner>
Jan 20 15:08:02 compute-1 nova_compute[225855]:       <nova:root type="image" uuid="a32b3e07-16d8-46fd-9a7b-c242c432fcf9"/>
Jan 20 15:08:02 compute-1 nova_compute[225855]:       <nova:ports>
Jan 20 15:08:02 compute-1 nova_compute[225855]:         <nova:port uuid="6b7cb043-d1f4-4c2b-8173-1e3e2a664767">
Jan 20 15:08:02 compute-1 nova_compute[225855]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Jan 20 15:08:02 compute-1 nova_compute[225855]:         </nova:port>
Jan 20 15:08:02 compute-1 nova_compute[225855]:       </nova:ports>
Jan 20 15:08:02 compute-1 nova_compute[225855]:     </nova:instance>
Jan 20 15:08:02 compute-1 nova_compute[225855]:   </metadata>
Jan 20 15:08:02 compute-1 nova_compute[225855]:   <sysinfo type="smbios">
Jan 20 15:08:02 compute-1 nova_compute[225855]:     <system>
Jan 20 15:08:02 compute-1 nova_compute[225855]:       <entry name="manufacturer">RDO</entry>
Jan 20 15:08:02 compute-1 nova_compute[225855]:       <entry name="product">OpenStack Compute</entry>
Jan 20 15:08:02 compute-1 nova_compute[225855]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 15:08:02 compute-1 nova_compute[225855]:       <entry name="serial">a25af5a3-096f-4363-842e-d960c22eb16b</entry>
Jan 20 15:08:02 compute-1 nova_compute[225855]:       <entry name="uuid">a25af5a3-096f-4363-842e-d960c22eb16b</entry>
Jan 20 15:08:02 compute-1 nova_compute[225855]:       <entry name="family">Virtual Machine</entry>
Jan 20 15:08:02 compute-1 nova_compute[225855]:     </system>
Jan 20 15:08:02 compute-1 nova_compute[225855]:   </sysinfo>
Jan 20 15:08:02 compute-1 nova_compute[225855]:   <os>
Jan 20 15:08:02 compute-1 nova_compute[225855]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 20 15:08:02 compute-1 nova_compute[225855]:     <boot dev="hd"/>
Jan 20 15:08:02 compute-1 nova_compute[225855]:     <smbios mode="sysinfo"/>
Jan 20 15:08:02 compute-1 nova_compute[225855]:   </os>
Jan 20 15:08:02 compute-1 nova_compute[225855]:   <features>
Jan 20 15:08:02 compute-1 nova_compute[225855]:     <acpi/>
Jan 20 15:08:02 compute-1 nova_compute[225855]:     <apic/>
Jan 20 15:08:02 compute-1 nova_compute[225855]:     <vmcoreinfo/>
Jan 20 15:08:02 compute-1 nova_compute[225855]:   </features>
Jan 20 15:08:02 compute-1 nova_compute[225855]:   <clock offset="utc">
Jan 20 15:08:02 compute-1 nova_compute[225855]:     <timer name="pit" tickpolicy="delay"/>
Jan 20 15:08:02 compute-1 nova_compute[225855]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 20 15:08:02 compute-1 nova_compute[225855]:     <timer name="hpet" present="no"/>
Jan 20 15:08:02 compute-1 nova_compute[225855]:   </clock>
Jan 20 15:08:02 compute-1 nova_compute[225855]:   <cpu mode="custom" match="exact">
Jan 20 15:08:02 compute-1 nova_compute[225855]:     <model>Nehalem</model>
Jan 20 15:08:02 compute-1 nova_compute[225855]:     <topology sockets="1" cores="1" threads="1"/>
Jan 20 15:08:02 compute-1 nova_compute[225855]:   </cpu>
Jan 20 15:08:02 compute-1 nova_compute[225855]:   <devices>
Jan 20 15:08:02 compute-1 nova_compute[225855]:     <disk type="network" device="disk">
Jan 20 15:08:02 compute-1 nova_compute[225855]:       <driver type="raw" cache="none"/>
Jan 20 15:08:02 compute-1 nova_compute[225855]:       <source protocol="rbd" name="vms/a25af5a3-096f-4363-842e-d960c22eb16b_disk">
Jan 20 15:08:02 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 15:08:02 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 15:08:02 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 15:08:02 compute-1 nova_compute[225855]:       </source>
Jan 20 15:08:02 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 15:08:02 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 15:08:02 compute-1 nova_compute[225855]:       </auth>
Jan 20 15:08:02 compute-1 nova_compute[225855]:       <target dev="vda" bus="virtio"/>
Jan 20 15:08:02 compute-1 nova_compute[225855]:     </disk>
Jan 20 15:08:02 compute-1 nova_compute[225855]:     <disk type="network" device="cdrom">
Jan 20 15:08:02 compute-1 nova_compute[225855]:       <driver type="raw" cache="none"/>
Jan 20 15:08:02 compute-1 nova_compute[225855]:       <source protocol="rbd" name="vms/a25af5a3-096f-4363-842e-d960c22eb16b_disk.config">
Jan 20 15:08:02 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 15:08:02 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 15:08:02 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 15:08:02 compute-1 nova_compute[225855]:       </source>
Jan 20 15:08:02 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 15:08:02 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 15:08:02 compute-1 nova_compute[225855]:       </auth>
Jan 20 15:08:02 compute-1 nova_compute[225855]:       <target dev="sda" bus="sata"/>
Jan 20 15:08:02 compute-1 nova_compute[225855]:     </disk>
Jan 20 15:08:02 compute-1 nova_compute[225855]:     <interface type="ethernet">
Jan 20 15:08:02 compute-1 nova_compute[225855]:       <mac address="fa:16:3e:bf:9e:90"/>
Jan 20 15:08:02 compute-1 nova_compute[225855]:       <model type="virtio"/>
Jan 20 15:08:02 compute-1 nova_compute[225855]:       <driver name="vhost" rx_queue_size="512"/>
Jan 20 15:08:02 compute-1 nova_compute[225855]:       <mtu size="1442"/>
Jan 20 15:08:02 compute-1 nova_compute[225855]:       <target dev="tap6b7cb043-d1"/>
Jan 20 15:08:02 compute-1 nova_compute[225855]:     </interface>
Jan 20 15:08:02 compute-1 nova_compute[225855]:     <serial type="pty">
Jan 20 15:08:02 compute-1 nova_compute[225855]:       <log file="/var/lib/nova/instances/a25af5a3-096f-4363-842e-d960c22eb16b/console.log" append="off"/>
Jan 20 15:08:02 compute-1 nova_compute[225855]:     </serial>
Jan 20 15:08:02 compute-1 nova_compute[225855]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 20 15:08:02 compute-1 nova_compute[225855]:     <video>
Jan 20 15:08:02 compute-1 nova_compute[225855]:       <model type="virtio"/>
Jan 20 15:08:02 compute-1 nova_compute[225855]:     </video>
Jan 20 15:08:02 compute-1 nova_compute[225855]:     <input type="tablet" bus="usb"/>
Jan 20 15:08:02 compute-1 nova_compute[225855]:     <rng model="virtio">
Jan 20 15:08:02 compute-1 nova_compute[225855]:       <backend model="random">/dev/urandom</backend>
Jan 20 15:08:02 compute-1 nova_compute[225855]:     </rng>
Jan 20 15:08:02 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root"/>
Jan 20 15:08:02 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:08:02 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:08:02 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:08:02 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:08:02 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:08:02 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:08:02 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:08:02 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:08:02 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:08:02 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:08:02 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:08:02 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:08:02 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:08:02 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:08:02 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:08:02 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:08:02 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:08:02 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:08:02 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:08:02 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:08:02 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:08:02 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:08:02 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:08:02 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:08:02 compute-1 nova_compute[225855]:     <controller type="usb" index="0"/>
Jan 20 15:08:02 compute-1 nova_compute[225855]:     <memballoon model="virtio">
Jan 20 15:08:02 compute-1 nova_compute[225855]:       <stats period="10"/>
Jan 20 15:08:02 compute-1 nova_compute[225855]:     </memballoon>
Jan 20 15:08:02 compute-1 nova_compute[225855]:   </devices>
Jan 20 15:08:02 compute-1 nova_compute[225855]: </domain>
Jan 20 15:08:02 compute-1 nova_compute[225855]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 20 15:08:02 compute-1 nova_compute[225855]: 2026-01-20 15:08:02.462 225859 DEBUG nova.compute.manager [None req-1b5afbb8-83be-4882-a4ba-3dcc1a91e616 27658864f96d453586dd0846a4c55b7d fc74c4a296554866969b05aef75252af - - default default] [instance: a25af5a3-096f-4363-842e-d960c22eb16b] Preparing to wait for external event network-vif-plugged-6b7cb043-d1f4-4c2b-8173-1e3e2a664767 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 20 15:08:02 compute-1 nova_compute[225855]: 2026-01-20 15:08:02.462 225859 DEBUG oslo_concurrency.lockutils [None req-1b5afbb8-83be-4882-a4ba-3dcc1a91e616 27658864f96d453586dd0846a4c55b7d fc74c4a296554866969b05aef75252af - - default default] Acquiring lock "a25af5a3-096f-4363-842e-d960c22eb16b-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:08:02 compute-1 nova_compute[225855]: 2026-01-20 15:08:02.462 225859 DEBUG oslo_concurrency.lockutils [None req-1b5afbb8-83be-4882-a4ba-3dcc1a91e616 27658864f96d453586dd0846a4c55b7d fc74c4a296554866969b05aef75252af - - default default] Lock "a25af5a3-096f-4363-842e-d960c22eb16b-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:08:02 compute-1 nova_compute[225855]: 2026-01-20 15:08:02.462 225859 DEBUG oslo_concurrency.lockutils [None req-1b5afbb8-83be-4882-a4ba-3dcc1a91e616 27658864f96d453586dd0846a4c55b7d fc74c4a296554866969b05aef75252af - - default default] Lock "a25af5a3-096f-4363-842e-d960c22eb16b-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:08:02 compute-1 nova_compute[225855]: 2026-01-20 15:08:02.463 225859 DEBUG nova.virt.libvirt.vif [None req-1b5afbb8-83be-4882-a4ba-3dcc1a91e616 27658864f96d453586dd0846a4c55b7d fc74c4a296554866969b05aef75252af - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T15:07:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueNegativeTestJSON-server-950743647',display_name='tempest-ServerRescueNegativeTestJSON-server-950743647',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverrescuenegativetestjson-server-950743647',id=168,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='fc74c4a296554866969b05aef75252af',ramdisk_id='',reservation_id='r-gkbc59sh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueNegativeTestJSON-1649662639',owner_user_name='tempest-ServerRescueNegativeTestJSON-1649662639-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T15:07:57Z,user_data=None,user_id='27658864f96d453586dd0846a4c55b7d',uuid=a25af5a3-096f-4363-842e-d960c22eb16b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6b7cb043-d1f4-4c2b-8173-1e3e2a664767", "address": "fa:16:3e:bf:9e:90", "network": {"id": "3967ae21-1590-4685-8881-8bd1bcf25258", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-285441107-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fc74c4a296554866969b05aef75252af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b7cb043-d1", "ovs_interfaceid": "6b7cb043-d1f4-4c2b-8173-1e3e2a664767", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 20 15:08:02 compute-1 nova_compute[225855]: 2026-01-20 15:08:02.463 225859 DEBUG nova.network.os_vif_util [None req-1b5afbb8-83be-4882-a4ba-3dcc1a91e616 27658864f96d453586dd0846a4c55b7d fc74c4a296554866969b05aef75252af - - default default] Converting VIF {"id": "6b7cb043-d1f4-4c2b-8173-1e3e2a664767", "address": "fa:16:3e:bf:9e:90", "network": {"id": "3967ae21-1590-4685-8881-8bd1bcf25258", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-285441107-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fc74c4a296554866969b05aef75252af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b7cb043-d1", "ovs_interfaceid": "6b7cb043-d1f4-4c2b-8173-1e3e2a664767", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 15:08:02 compute-1 nova_compute[225855]: 2026-01-20 15:08:02.464 225859 DEBUG nova.network.os_vif_util [None req-1b5afbb8-83be-4882-a4ba-3dcc1a91e616 27658864f96d453586dd0846a4c55b7d fc74c4a296554866969b05aef75252af - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bf:9e:90,bridge_name='br-int',has_traffic_filtering=True,id=6b7cb043-d1f4-4c2b-8173-1e3e2a664767,network=Network(3967ae21-1590-4685-8881-8bd1bcf25258),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6b7cb043-d1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 15:08:02 compute-1 nova_compute[225855]: 2026-01-20 15:08:02.464 225859 DEBUG os_vif [None req-1b5afbb8-83be-4882-a4ba-3dcc1a91e616 27658864f96d453586dd0846a4c55b7d fc74c4a296554866969b05aef75252af - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:bf:9e:90,bridge_name='br-int',has_traffic_filtering=True,id=6b7cb043-d1f4-4c2b-8173-1e3e2a664767,network=Network(3967ae21-1590-4685-8881-8bd1bcf25258),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6b7cb043-d1') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 20 15:08:02 compute-1 nova_compute[225855]: 2026-01-20 15:08:02.465 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:08:02 compute-1 nova_compute[225855]: 2026-01-20 15:08:02.466 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:08:02 compute-1 nova_compute[225855]: 2026-01-20 15:08:02.466 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 15:08:02 compute-1 nova_compute[225855]: 2026-01-20 15:08:02.469 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:08:02 compute-1 nova_compute[225855]: 2026-01-20 15:08:02.469 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6b7cb043-d1, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:08:02 compute-1 nova_compute[225855]: 2026-01-20 15:08:02.469 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap6b7cb043-d1, col_values=(('external_ids', {'iface-id': '6b7cb043-d1f4-4c2b-8173-1e3e2a664767', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:bf:9e:90', 'vm-uuid': 'a25af5a3-096f-4363-842e-d960c22eb16b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:08:02 compute-1 nova_compute[225855]: 2026-01-20 15:08:02.471 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:08:02 compute-1 NetworkManager[49104]: <info>  [1768921682.4725] manager: (tap6b7cb043-d1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/300)
Jan 20 15:08:02 compute-1 nova_compute[225855]: 2026-01-20 15:08:02.475 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 15:08:02 compute-1 nova_compute[225855]: 2026-01-20 15:08:02.476 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:08:02 compute-1 nova_compute[225855]: 2026-01-20 15:08:02.477 225859 INFO os_vif [None req-1b5afbb8-83be-4882-a4ba-3dcc1a91e616 27658864f96d453586dd0846a4c55b7d fc74c4a296554866969b05aef75252af - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:bf:9e:90,bridge_name='br-int',has_traffic_filtering=True,id=6b7cb043-d1f4-4c2b-8173-1e3e2a664767,network=Network(3967ae21-1590-4685-8881-8bd1bcf25258),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6b7cb043-d1')
Jan 20 15:08:02 compute-1 nova_compute[225855]: 2026-01-20 15:08:02.502 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:08:02 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/2382603208' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:08:02 compute-1 ceph-mon[81775]: pgmap v2528: 321 pgs: 321 active+clean; 213 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 45 KiB/s rd, 3.5 MiB/s wr, 68 op/s
Jan 20 15:08:02 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/1789117815' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:08:02 compute-1 nova_compute[225855]: 2026-01-20 15:08:02.684 225859 DEBUG nova.virt.libvirt.driver [None req-1b5afbb8-83be-4882-a4ba-3dcc1a91e616 27658864f96d453586dd0846a4c55b7d fc74c4a296554866969b05aef75252af - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 15:08:02 compute-1 nova_compute[225855]: 2026-01-20 15:08:02.684 225859 DEBUG nova.virt.libvirt.driver [None req-1b5afbb8-83be-4882-a4ba-3dcc1a91e616 27658864f96d453586dd0846a4c55b7d fc74c4a296554866969b05aef75252af - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 15:08:02 compute-1 nova_compute[225855]: 2026-01-20 15:08:02.685 225859 DEBUG nova.virt.libvirt.driver [None req-1b5afbb8-83be-4882-a4ba-3dcc1a91e616 27658864f96d453586dd0846a4c55b7d fc74c4a296554866969b05aef75252af - - default default] No VIF found with MAC fa:16:3e:bf:9e:90, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 20 15:08:02 compute-1 nova_compute[225855]: 2026-01-20 15:08:02.685 225859 INFO nova.virt.libvirt.driver [None req-1b5afbb8-83be-4882-a4ba-3dcc1a91e616 27658864f96d453586dd0846a4c55b7d fc74c4a296554866969b05aef75252af - - default default] [instance: a25af5a3-096f-4363-842e-d960c22eb16b] Using config drive
Jan 20 15:08:02 compute-1 nova_compute[225855]: 2026-01-20 15:08:02.714 225859 DEBUG nova.storage.rbd_utils [None req-1b5afbb8-83be-4882-a4ba-3dcc1a91e616 27658864f96d453586dd0846a4c55b7d fc74c4a296554866969b05aef75252af - - default default] rbd image a25af5a3-096f-4363-842e-d960c22eb16b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:08:02 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:08:02 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:08:02 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:08:02.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:08:03 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e370 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:08:03 compute-1 nova_compute[225855]: 2026-01-20 15:08:03.307 225859 DEBUG nova.network.neutron [req-2bd30d7c-ea96-47b7-a938-f9edecd4020b req-b9f3137b-5f1e-4198-9025-50f15059a5f5 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: a25af5a3-096f-4363-842e-d960c22eb16b] Updated VIF entry in instance network info cache for port 6b7cb043-d1f4-4c2b-8173-1e3e2a664767. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 15:08:03 compute-1 nova_compute[225855]: 2026-01-20 15:08:03.308 225859 DEBUG nova.network.neutron [req-2bd30d7c-ea96-47b7-a938-f9edecd4020b req-b9f3137b-5f1e-4198-9025-50f15059a5f5 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: a25af5a3-096f-4363-842e-d960c22eb16b] Updating instance_info_cache with network_info: [{"id": "6b7cb043-d1f4-4c2b-8173-1e3e2a664767", "address": "fa:16:3e:bf:9e:90", "network": {"id": "3967ae21-1590-4685-8881-8bd1bcf25258", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-285441107-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fc74c4a296554866969b05aef75252af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b7cb043-d1", "ovs_interfaceid": "6b7cb043-d1f4-4c2b-8173-1e3e2a664767", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:08:03 compute-1 nova_compute[225855]: 2026-01-20 15:08:03.323 225859 DEBUG oslo_concurrency.lockutils [req-2bd30d7c-ea96-47b7-a938-f9edecd4020b req-b9f3137b-5f1e-4198-9025-50f15059a5f5 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-a25af5a3-096f-4363-842e-d960c22eb16b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:08:03 compute-1 nova_compute[225855]: 2026-01-20 15:08:03.400 225859 INFO nova.virt.libvirt.driver [None req-1b5afbb8-83be-4882-a4ba-3dcc1a91e616 27658864f96d453586dd0846a4c55b7d fc74c4a296554866969b05aef75252af - - default default] [instance: a25af5a3-096f-4363-842e-d960c22eb16b] Creating config drive at /var/lib/nova/instances/a25af5a3-096f-4363-842e-d960c22eb16b/disk.config
Jan 20 15:08:03 compute-1 nova_compute[225855]: 2026-01-20 15:08:03.405 225859 DEBUG oslo_concurrency.processutils [None req-1b5afbb8-83be-4882-a4ba-3dcc1a91e616 27658864f96d453586dd0846a4c55b7d fc74c4a296554866969b05aef75252af - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a25af5a3-096f-4363-842e-d960c22eb16b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpb8fhnp_o execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:08:03 compute-1 nova_compute[225855]: 2026-01-20 15:08:03.539 225859 DEBUG oslo_concurrency.processutils [None req-1b5afbb8-83be-4882-a4ba-3dcc1a91e616 27658864f96d453586dd0846a4c55b7d fc74c4a296554866969b05aef75252af - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a25af5a3-096f-4363-842e-d960c22eb16b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpb8fhnp_o" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:08:03 compute-1 nova_compute[225855]: 2026-01-20 15:08:03.568 225859 DEBUG nova.storage.rbd_utils [None req-1b5afbb8-83be-4882-a4ba-3dcc1a91e616 27658864f96d453586dd0846a4c55b7d fc74c4a296554866969b05aef75252af - - default default] rbd image a25af5a3-096f-4363-842e-d960c22eb16b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:08:03 compute-1 nova_compute[225855]: 2026-01-20 15:08:03.572 225859 DEBUG oslo_concurrency.processutils [None req-1b5afbb8-83be-4882-a4ba-3dcc1a91e616 27658864f96d453586dd0846a4c55b7d fc74c4a296554866969b05aef75252af - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/a25af5a3-096f-4363-842e-d960c22eb16b/disk.config a25af5a3-096f-4363-842e-d960c22eb16b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:08:03 compute-1 nova_compute[225855]: 2026-01-20 15:08:03.737 225859 DEBUG oslo_concurrency.processutils [None req-1b5afbb8-83be-4882-a4ba-3dcc1a91e616 27658864f96d453586dd0846a4c55b7d fc74c4a296554866969b05aef75252af - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/a25af5a3-096f-4363-842e-d960c22eb16b/disk.config a25af5a3-096f-4363-842e-d960c22eb16b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.165s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:08:03 compute-1 nova_compute[225855]: 2026-01-20 15:08:03.738 225859 INFO nova.virt.libvirt.driver [None req-1b5afbb8-83be-4882-a4ba-3dcc1a91e616 27658864f96d453586dd0846a4c55b7d fc74c4a296554866969b05aef75252af - - default default] [instance: a25af5a3-096f-4363-842e-d960c22eb16b] Deleting local config drive /var/lib/nova/instances/a25af5a3-096f-4363-842e-d960c22eb16b/disk.config because it was imported into RBD.
Jan 20 15:08:03 compute-1 kernel: tap6b7cb043-d1: entered promiscuous mode
Jan 20 15:08:03 compute-1 NetworkManager[49104]: <info>  [1768921683.8021] manager: (tap6b7cb043-d1): new Tun device (/org/freedesktop/NetworkManager/Devices/301)
Jan 20 15:08:03 compute-1 ovn_controller[130490]: 2026-01-20T15:08:03Z|00724|binding|INFO|Claiming lport 6b7cb043-d1f4-4c2b-8173-1e3e2a664767 for this chassis.
Jan 20 15:08:03 compute-1 ovn_controller[130490]: 2026-01-20T15:08:03Z|00725|binding|INFO|6b7cb043-d1f4-4c2b-8173-1e3e2a664767: Claiming fa:16:3e:bf:9e:90 10.100.0.6
Jan 20 15:08:03 compute-1 nova_compute[225855]: 2026-01-20 15:08:03.804 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:08:03 compute-1 nova_compute[225855]: 2026-01-20 15:08:03.808 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:08:03 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:08:03.818 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bf:9e:90 10.100.0.6'], port_security=['fa:16:3e:bf:9e:90 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'a25af5a3-096f-4363-842e-d960c22eb16b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3967ae21-1590-4685-8881-8bd1bcf25258', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fc74c4a296554866969b05aef75252af', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd56d9c6d-4bb5-4a73-ab07-9e0ee1fd3b93', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=89ced88f-b1ed-4329-8a53-1931e6b0e3e9, chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], logical_port=6b7cb043-d1f4-4c2b-8173-1e3e2a664767) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:08:03 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:08:03.819 140354 INFO neutron.agent.ovn.metadata.agent [-] Port 6b7cb043-d1f4-4c2b-8173-1e3e2a664767 in datapath 3967ae21-1590-4685-8881-8bd1bcf25258 bound to our chassis
Jan 20 15:08:03 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:08:03.821 140354 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3967ae21-1590-4685-8881-8bd1bcf25258
Jan 20 15:08:03 compute-1 systemd-udevd[294213]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 15:08:03 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:08:03.836 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[d2503d68-3509-46ed-9fab-61f3c0bc1ac2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:08:03 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:08:03.837 140354 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap3967ae21-11 in ovnmeta-3967ae21-1590-4685-8881-8bd1bcf25258 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 20 15:08:03 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:08:03.840 229707 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap3967ae21-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 20 15:08:03 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:08:03.840 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[2c8f43d2-2f9e-444b-b028-a10022d38135]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:08:03 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:08:03.841 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[231d3e0d-93ec-4550-8f18-606de4ef5f8b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:08:03 compute-1 systemd-machined[194361]: New machine qemu-85-instance-000000a8.
Jan 20 15:08:03 compute-1 NetworkManager[49104]: <info>  [1768921683.8562] device (tap6b7cb043-d1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 20 15:08:03 compute-1 NetworkManager[49104]: <info>  [1768921683.8571] device (tap6b7cb043-d1): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 20 15:08:03 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:08:03.859 140466 DEBUG oslo.privsep.daemon [-] privsep: reply[200b1161-c057-4836-8a51-534a16b189c2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:08:03 compute-1 nova_compute[225855]: 2026-01-20 15:08:03.876 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:08:03 compute-1 ovn_controller[130490]: 2026-01-20T15:08:03Z|00726|binding|INFO|Setting lport 6b7cb043-d1f4-4c2b-8173-1e3e2a664767 ovn-installed in OVS
Jan 20 15:08:03 compute-1 ovn_controller[130490]: 2026-01-20T15:08:03Z|00727|binding|INFO|Setting lport 6b7cb043-d1f4-4c2b-8173-1e3e2a664767 up in Southbound
Jan 20 15:08:03 compute-1 nova_compute[225855]: 2026-01-20 15:08:03.883 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:08:03 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:08:03.885 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[a472e7b4-8ea1-434f-a24a-da77d8fa5bfd]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:08:03 compute-1 systemd[1]: Started Virtual Machine qemu-85-instance-000000a8.
Jan 20 15:08:03 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:08:03.916 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[4cf7319b-a156-46ee-ab0e-b651da7fa01d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:08:03 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:08:03.922 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[88e61a42-0446-4096-8b2c-93ca07eccb22]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:08:03 compute-1 NetworkManager[49104]: <info>  [1768921683.9247] manager: (tap3967ae21-10): new Veth device (/org/freedesktop/NetworkManager/Devices/302)
Jan 20 15:08:03 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:08:03.963 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[4c6e0ae2-da1b-4498-9675-1768dc55fe4d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:08:03 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:08:03.968 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[c4272311-5fbe-4b4b-8eb3-a2b2f98732dc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:08:03 compute-1 NetworkManager[49104]: <info>  [1768921683.9974] device (tap3967ae21-10): carrier: link connected
Jan 20 15:08:04 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:08:04.002 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[8aaabd28-1334-4f21-be52-025a92ef38e5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:08:04 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:08:04.021 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[9ff60764-19a4-426b-9e85-fbd21e67f8b4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3967ae21-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b5:ce:9d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 206], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 672292, 'reachable_time': 37097, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 294246, 'error': None, 'target': 'ovnmeta-3967ae21-1590-4685-8881-8bd1bcf25258', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:08:04 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:08:04.038 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[a0a99505-6f9c-43db-9b34-bbc91a9f03d2]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feb5:ce9d'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 672292, 'tstamp': 672292}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 294247, 'error': None, 'target': 'ovnmeta-3967ae21-1590-4685-8881-8bd1bcf25258', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:08:04 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:08:04.055 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[03b564e0-8709-4699-b430-717c655cce82]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3967ae21-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b5:ce:9d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 206], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 672292, 'reachable_time': 37097, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 294248, 'error': None, 'target': 'ovnmeta-3967ae21-1590-4685-8881-8bd1bcf25258', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:08:04 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:08:04.086 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[cfaf8a44-3908-414b-87b1-e527683fafcb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:08:04 compute-1 nova_compute[225855]: 2026-01-20 15:08:04.148 225859 DEBUG nova.compute.manager [req-e2f2f4d3-a6c4-4690-91b1-7945d9fc0659 req-4bbf4d54-2ace-4b98-a622-7b31886b23f4 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: a25af5a3-096f-4363-842e-d960c22eb16b] Received event network-vif-plugged-6b7cb043-d1f4-4c2b-8173-1e3e2a664767 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:08:04 compute-1 nova_compute[225855]: 2026-01-20 15:08:04.149 225859 DEBUG oslo_concurrency.lockutils [req-e2f2f4d3-a6c4-4690-91b1-7945d9fc0659 req-4bbf4d54-2ace-4b98-a622-7b31886b23f4 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "a25af5a3-096f-4363-842e-d960c22eb16b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:08:04 compute-1 nova_compute[225855]: 2026-01-20 15:08:04.150 225859 DEBUG oslo_concurrency.lockutils [req-e2f2f4d3-a6c4-4690-91b1-7945d9fc0659 req-4bbf4d54-2ace-4b98-a622-7b31886b23f4 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "a25af5a3-096f-4363-842e-d960c22eb16b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:08:04 compute-1 nova_compute[225855]: 2026-01-20 15:08:04.150 225859 DEBUG oslo_concurrency.lockutils [req-e2f2f4d3-a6c4-4690-91b1-7945d9fc0659 req-4bbf4d54-2ace-4b98-a622-7b31886b23f4 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "a25af5a3-096f-4363-842e-d960c22eb16b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:08:04 compute-1 nova_compute[225855]: 2026-01-20 15:08:04.150 225859 DEBUG nova.compute.manager [req-e2f2f4d3-a6c4-4690-91b1-7945d9fc0659 req-4bbf4d54-2ace-4b98-a622-7b31886b23f4 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: a25af5a3-096f-4363-842e-d960c22eb16b] Processing event network-vif-plugged-6b7cb043-d1f4-4c2b-8173-1e3e2a664767 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 20 15:08:04 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:08:04.158 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[c4efaf49-ad37-433b-bec7-845453232ba9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:08:04 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:08:04.160 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3967ae21-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:08:04 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:08:04.161 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 15:08:04 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:08:04.162 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3967ae21-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:08:04 compute-1 kernel: tap3967ae21-10: entered promiscuous mode
Jan 20 15:08:04 compute-1 NetworkManager[49104]: <info>  [1768921684.1659] manager: (tap3967ae21-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/303)
Jan 20 15:08:04 compute-1 nova_compute[225855]: 2026-01-20 15:08:04.167 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:08:04 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:08:04.168 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3967ae21-10, col_values=(('external_ids', {'iface-id': 'b19d6956-fc8d-42c8-af98-d0f2fe9fe3a3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:08:04 compute-1 ovn_controller[130490]: 2026-01-20T15:08:04Z|00728|binding|INFO|Releasing lport b19d6956-fc8d-42c8-af98-d0f2fe9fe3a3 from this chassis (sb_readonly=0)
Jan 20 15:08:04 compute-1 nova_compute[225855]: 2026-01-20 15:08:04.184 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:08:04 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:08:04.184 140354 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/3967ae21-1590-4685-8881-8bd1bcf25258.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/3967ae21-1590-4685-8881-8bd1bcf25258.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 20 15:08:04 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:08:04.185 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[9e65d0e5-1769-4dc9-8d0b-17e1b0d62a0e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:08:04 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:08:04.186 140354 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 20 15:08:04 compute-1 ovn_metadata_agent[140349]: global
Jan 20 15:08:04 compute-1 ovn_metadata_agent[140349]:     log         /dev/log local0 debug
Jan 20 15:08:04 compute-1 ovn_metadata_agent[140349]:     log-tag     haproxy-metadata-proxy-3967ae21-1590-4685-8881-8bd1bcf25258
Jan 20 15:08:04 compute-1 ovn_metadata_agent[140349]:     user        root
Jan 20 15:08:04 compute-1 ovn_metadata_agent[140349]:     group       root
Jan 20 15:08:04 compute-1 ovn_metadata_agent[140349]:     maxconn     1024
Jan 20 15:08:04 compute-1 ovn_metadata_agent[140349]:     pidfile     /var/lib/neutron/external/pids/3967ae21-1590-4685-8881-8bd1bcf25258.pid.haproxy
Jan 20 15:08:04 compute-1 ovn_metadata_agent[140349]:     daemon
Jan 20 15:08:04 compute-1 ovn_metadata_agent[140349]: 
Jan 20 15:08:04 compute-1 ovn_metadata_agent[140349]: defaults
Jan 20 15:08:04 compute-1 ovn_metadata_agent[140349]:     log global
Jan 20 15:08:04 compute-1 ovn_metadata_agent[140349]:     mode http
Jan 20 15:08:04 compute-1 ovn_metadata_agent[140349]:     option httplog
Jan 20 15:08:04 compute-1 ovn_metadata_agent[140349]:     option dontlognull
Jan 20 15:08:04 compute-1 ovn_metadata_agent[140349]:     option http-server-close
Jan 20 15:08:04 compute-1 ovn_metadata_agent[140349]:     option forwardfor
Jan 20 15:08:04 compute-1 ovn_metadata_agent[140349]:     retries                 3
Jan 20 15:08:04 compute-1 ovn_metadata_agent[140349]:     timeout http-request    30s
Jan 20 15:08:04 compute-1 ovn_metadata_agent[140349]:     timeout connect         30s
Jan 20 15:08:04 compute-1 ovn_metadata_agent[140349]:     timeout client          32s
Jan 20 15:08:04 compute-1 ovn_metadata_agent[140349]:     timeout server          32s
Jan 20 15:08:04 compute-1 ovn_metadata_agent[140349]:     timeout http-keep-alive 30s
Jan 20 15:08:04 compute-1 ovn_metadata_agent[140349]: 
Jan 20 15:08:04 compute-1 ovn_metadata_agent[140349]: 
Jan 20 15:08:04 compute-1 ovn_metadata_agent[140349]: listen listener
Jan 20 15:08:04 compute-1 ovn_metadata_agent[140349]:     bind 169.254.169.254:80
Jan 20 15:08:04 compute-1 ovn_metadata_agent[140349]:     server metadata /var/lib/neutron/metadata_proxy
Jan 20 15:08:04 compute-1 ovn_metadata_agent[140349]:     http-request add-header X-OVN-Network-ID 3967ae21-1590-4685-8881-8bd1bcf25258
Jan 20 15:08:04 compute-1 ovn_metadata_agent[140349]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 20 15:08:04 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:08:04.187 140354 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-3967ae21-1590-4685-8881-8bd1bcf25258', 'env', 'PROCESS_TAG=haproxy-3967ae21-1590-4685-8881-8bd1bcf25258', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/3967ae21-1590-4685-8881-8bd1bcf25258.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 20 15:08:04 compute-1 nova_compute[225855]: 2026-01-20 15:08:04.308 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768921684.308199, a25af5a3-096f-4363-842e-d960c22eb16b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:08:04 compute-1 nova_compute[225855]: 2026-01-20 15:08:04.309 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: a25af5a3-096f-4363-842e-d960c22eb16b] VM Started (Lifecycle Event)
Jan 20 15:08:04 compute-1 nova_compute[225855]: 2026-01-20 15:08:04.312 225859 DEBUG nova.compute.manager [None req-1b5afbb8-83be-4882-a4ba-3dcc1a91e616 27658864f96d453586dd0846a4c55b7d fc74c4a296554866969b05aef75252af - - default default] [instance: a25af5a3-096f-4363-842e-d960c22eb16b] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 20 15:08:04 compute-1 nova_compute[225855]: 2026-01-20 15:08:04.315 225859 DEBUG nova.virt.libvirt.driver [None req-1b5afbb8-83be-4882-a4ba-3dcc1a91e616 27658864f96d453586dd0846a4c55b7d fc74c4a296554866969b05aef75252af - - default default] [instance: a25af5a3-096f-4363-842e-d960c22eb16b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 20 15:08:04 compute-1 nova_compute[225855]: 2026-01-20 15:08:04.319 225859 INFO nova.virt.libvirt.driver [-] [instance: a25af5a3-096f-4363-842e-d960c22eb16b] Instance spawned successfully.
Jan 20 15:08:04 compute-1 nova_compute[225855]: 2026-01-20 15:08:04.320 225859 DEBUG nova.virt.libvirt.driver [None req-1b5afbb8-83be-4882-a4ba-3dcc1a91e616 27658864f96d453586dd0846a4c55b7d fc74c4a296554866969b05aef75252af - - default default] [instance: a25af5a3-096f-4363-842e-d960c22eb16b] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 20 15:08:04 compute-1 nova_compute[225855]: 2026-01-20 15:08:04.337 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: a25af5a3-096f-4363-842e-d960c22eb16b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:08:04 compute-1 nova_compute[225855]: 2026-01-20 15:08:04.344 225859 DEBUG nova.virt.libvirt.driver [None req-1b5afbb8-83be-4882-a4ba-3dcc1a91e616 27658864f96d453586dd0846a4c55b7d fc74c4a296554866969b05aef75252af - - default default] [instance: a25af5a3-096f-4363-842e-d960c22eb16b] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:08:04 compute-1 nova_compute[225855]: 2026-01-20 15:08:04.345 225859 DEBUG nova.virt.libvirt.driver [None req-1b5afbb8-83be-4882-a4ba-3dcc1a91e616 27658864f96d453586dd0846a4c55b7d fc74c4a296554866969b05aef75252af - - default default] [instance: a25af5a3-096f-4363-842e-d960c22eb16b] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:08:04 compute-1 nova_compute[225855]: 2026-01-20 15:08:04.346 225859 DEBUG nova.virt.libvirt.driver [None req-1b5afbb8-83be-4882-a4ba-3dcc1a91e616 27658864f96d453586dd0846a4c55b7d fc74c4a296554866969b05aef75252af - - default default] [instance: a25af5a3-096f-4363-842e-d960c22eb16b] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:08:04 compute-1 nova_compute[225855]: 2026-01-20 15:08:04.346 225859 DEBUG nova.virt.libvirt.driver [None req-1b5afbb8-83be-4882-a4ba-3dcc1a91e616 27658864f96d453586dd0846a4c55b7d fc74c4a296554866969b05aef75252af - - default default] [instance: a25af5a3-096f-4363-842e-d960c22eb16b] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:08:04 compute-1 nova_compute[225855]: 2026-01-20 15:08:04.347 225859 DEBUG nova.virt.libvirt.driver [None req-1b5afbb8-83be-4882-a4ba-3dcc1a91e616 27658864f96d453586dd0846a4c55b7d fc74c4a296554866969b05aef75252af - - default default] [instance: a25af5a3-096f-4363-842e-d960c22eb16b] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:08:04 compute-1 nova_compute[225855]: 2026-01-20 15:08:04.348 225859 DEBUG nova.virt.libvirt.driver [None req-1b5afbb8-83be-4882-a4ba-3dcc1a91e616 27658864f96d453586dd0846a4c55b7d fc74c4a296554866969b05aef75252af - - default default] [instance: a25af5a3-096f-4363-842e-d960c22eb16b] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:08:04 compute-1 nova_compute[225855]: 2026-01-20 15:08:04.351 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: a25af5a3-096f-4363-842e-d960c22eb16b] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 15:08:04 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:08:04 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:08:04 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:08:04.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:08:04 compute-1 nova_compute[225855]: 2026-01-20 15:08:04.398 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: a25af5a3-096f-4363-842e-d960c22eb16b] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 15:08:04 compute-1 nova_compute[225855]: 2026-01-20 15:08:04.399 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768921684.309127, a25af5a3-096f-4363-842e-d960c22eb16b => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:08:04 compute-1 nova_compute[225855]: 2026-01-20 15:08:04.399 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: a25af5a3-096f-4363-842e-d960c22eb16b] VM Paused (Lifecycle Event)
Jan 20 15:08:04 compute-1 nova_compute[225855]: 2026-01-20 15:08:04.422 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: a25af5a3-096f-4363-842e-d960c22eb16b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:08:04 compute-1 nova_compute[225855]: 2026-01-20 15:08:04.426 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768921684.3151796, a25af5a3-096f-4363-842e-d960c22eb16b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:08:04 compute-1 nova_compute[225855]: 2026-01-20 15:08:04.427 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: a25af5a3-096f-4363-842e-d960c22eb16b] VM Resumed (Lifecycle Event)
Jan 20 15:08:04 compute-1 nova_compute[225855]: 2026-01-20 15:08:04.434 225859 INFO nova.compute.manager [None req-1b5afbb8-83be-4882-a4ba-3dcc1a91e616 27658864f96d453586dd0846a4c55b7d fc74c4a296554866969b05aef75252af - - default default] [instance: a25af5a3-096f-4363-842e-d960c22eb16b] Took 6.70 seconds to spawn the instance on the hypervisor.
Jan 20 15:08:04 compute-1 nova_compute[225855]: 2026-01-20 15:08:04.435 225859 DEBUG nova.compute.manager [None req-1b5afbb8-83be-4882-a4ba-3dcc1a91e616 27658864f96d453586dd0846a4c55b7d fc74c4a296554866969b05aef75252af - - default default] [instance: a25af5a3-096f-4363-842e-d960c22eb16b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:08:04 compute-1 nova_compute[225855]: 2026-01-20 15:08:04.462 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: a25af5a3-096f-4363-842e-d960c22eb16b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:08:04 compute-1 nova_compute[225855]: 2026-01-20 15:08:04.465 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: a25af5a3-096f-4363-842e-d960c22eb16b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 15:08:04 compute-1 nova_compute[225855]: 2026-01-20 15:08:04.491 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: a25af5a3-096f-4363-842e-d960c22eb16b] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 15:08:04 compute-1 podman[294323]: 2026-01-20 15:08:04.604289848 +0000 UTC m=+0.058124671 container create 619281131e0bd7b526e0f1262ee854d58e71ee2663a61f0675cef0cbbbf34306 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3967ae21-1590-4685-8881-8bd1bcf25258, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 20 15:08:04 compute-1 systemd[1]: Started libpod-conmon-619281131e0bd7b526e0f1262ee854d58e71ee2663a61f0675cef0cbbbf34306.scope.
Jan 20 15:08:04 compute-1 podman[294323]: 2026-01-20 15:08:04.571839467 +0000 UTC m=+0.025674320 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 20 15:08:04 compute-1 systemd[1]: Started libcrun container.
Jan 20 15:08:04 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e15a8fbf0d3cd1123225aae1fd10c9e39fdb8283817ce9b1cbec15659d2486e8/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 20 15:08:04 compute-1 podman[294323]: 2026-01-20 15:08:04.708968988 +0000 UTC m=+0.162803831 container init 619281131e0bd7b526e0f1262ee854d58e71ee2663a61f0675cef0cbbbf34306 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3967ae21-1590-4685-8881-8bd1bcf25258, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202)
Jan 20 15:08:04 compute-1 nova_compute[225855]: 2026-01-20 15:08:04.710 225859 INFO nova.compute.manager [None req-1b5afbb8-83be-4882-a4ba-3dcc1a91e616 27658864f96d453586dd0846a4c55b7d fc74c4a296554866969b05aef75252af - - default default] [instance: a25af5a3-096f-4363-842e-d960c22eb16b] Took 7.84 seconds to build instance.
Jan 20 15:08:04 compute-1 podman[294323]: 2026-01-20 15:08:04.714493515 +0000 UTC m=+0.168328338 container start 619281131e0bd7b526e0f1262ee854d58e71ee2663a61f0675cef0cbbbf34306 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3967ae21-1590-4685-8881-8bd1bcf25258, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 20 15:08:04 compute-1 neutron-haproxy-ovnmeta-3967ae21-1590-4685-8881-8bd1bcf25258[294339]: [NOTICE]   (294343) : New worker (294345) forked
Jan 20 15:08:04 compute-1 neutron-haproxy-ovnmeta-3967ae21-1590-4685-8881-8bd1bcf25258[294339]: [NOTICE]   (294343) : Loading success.
Jan 20 15:08:04 compute-1 nova_compute[225855]: 2026-01-20 15:08:04.758 225859 DEBUG oslo_concurrency.lockutils [None req-1b5afbb8-83be-4882-a4ba-3dcc1a91e616 27658864f96d453586dd0846a4c55b7d fc74c4a296554866969b05aef75252af - - default default] Lock "a25af5a3-096f-4363-842e-d960c22eb16b" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.963s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:08:04 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:08:04 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:08:04 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:08:04.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:08:05 compute-1 ceph-mon[81775]: pgmap v2529: 321 pgs: 321 active+clean; 213 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 39 KiB/s rd, 3.6 MiB/s wr, 61 op/s
Jan 20 15:08:06 compute-1 nova_compute[225855]: 2026-01-20 15:08:06.271 225859 DEBUG nova.compute.manager [req-d7f389f7-bdda-488c-88a6-9c510ef40a2a req-8e289a31-1d18-4a10-ac43-aad26c86bae6 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: a25af5a3-096f-4363-842e-d960c22eb16b] Received event network-vif-plugged-6b7cb043-d1f4-4c2b-8173-1e3e2a664767 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:08:06 compute-1 nova_compute[225855]: 2026-01-20 15:08:06.272 225859 DEBUG oslo_concurrency.lockutils [req-d7f389f7-bdda-488c-88a6-9c510ef40a2a req-8e289a31-1d18-4a10-ac43-aad26c86bae6 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "a25af5a3-096f-4363-842e-d960c22eb16b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:08:06 compute-1 nova_compute[225855]: 2026-01-20 15:08:06.272 225859 DEBUG oslo_concurrency.lockutils [req-d7f389f7-bdda-488c-88a6-9c510ef40a2a req-8e289a31-1d18-4a10-ac43-aad26c86bae6 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "a25af5a3-096f-4363-842e-d960c22eb16b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:08:06 compute-1 nova_compute[225855]: 2026-01-20 15:08:06.273 225859 DEBUG oslo_concurrency.lockutils [req-d7f389f7-bdda-488c-88a6-9c510ef40a2a req-8e289a31-1d18-4a10-ac43-aad26c86bae6 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "a25af5a3-096f-4363-842e-d960c22eb16b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:08:06 compute-1 nova_compute[225855]: 2026-01-20 15:08:06.273 225859 DEBUG nova.compute.manager [req-d7f389f7-bdda-488c-88a6-9c510ef40a2a req-8e289a31-1d18-4a10-ac43-aad26c86bae6 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: a25af5a3-096f-4363-842e-d960c22eb16b] No waiting events found dispatching network-vif-plugged-6b7cb043-d1f4-4c2b-8173-1e3e2a664767 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:08:06 compute-1 nova_compute[225855]: 2026-01-20 15:08:06.273 225859 WARNING nova.compute.manager [req-d7f389f7-bdda-488c-88a6-9c510ef40a2a req-8e289a31-1d18-4a10-ac43-aad26c86bae6 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: a25af5a3-096f-4363-842e-d960c22eb16b] Received unexpected event network-vif-plugged-6b7cb043-d1f4-4c2b-8173-1e3e2a664767 for instance with vm_state active and task_state None.
Jan 20 15:08:06 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:08:06 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:08:06 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:08:06.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:08:06 compute-1 nova_compute[225855]: 2026-01-20 15:08:06.755 225859 INFO nova.compute.manager [None req-cbffac24-6e30-414d-95cc-0bbdcd9d1b37 27658864f96d453586dd0846a4c55b7d fc74c4a296554866969b05aef75252af - - default default] [instance: a25af5a3-096f-4363-842e-d960c22eb16b] Rescuing
Jan 20 15:08:06 compute-1 nova_compute[225855]: 2026-01-20 15:08:06.756 225859 DEBUG oslo_concurrency.lockutils [None req-cbffac24-6e30-414d-95cc-0bbdcd9d1b37 27658864f96d453586dd0846a4c55b7d fc74c4a296554866969b05aef75252af - - default default] Acquiring lock "refresh_cache-a25af5a3-096f-4363-842e-d960c22eb16b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:08:06 compute-1 nova_compute[225855]: 2026-01-20 15:08:06.756 225859 DEBUG oslo_concurrency.lockutils [None req-cbffac24-6e30-414d-95cc-0bbdcd9d1b37 27658864f96d453586dd0846a4c55b7d fc74c4a296554866969b05aef75252af - - default default] Acquired lock "refresh_cache-a25af5a3-096f-4363-842e-d960c22eb16b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:08:06 compute-1 nova_compute[225855]: 2026-01-20 15:08:06.756 225859 DEBUG nova.network.neutron [None req-cbffac24-6e30-414d-95cc-0bbdcd9d1b37 27658864f96d453586dd0846a4c55b7d fc74c4a296554866969b05aef75252af - - default default] [instance: a25af5a3-096f-4363-842e-d960c22eb16b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 15:08:06 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:08:06 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:08:06 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:08:06.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:08:07 compute-1 nova_compute[225855]: 2026-01-20 15:08:07.473 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:08:07 compute-1 nova_compute[225855]: 2026-01-20 15:08:07.505 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:08:07 compute-1 ceph-mon[81775]: pgmap v2530: 321 pgs: 321 active+clean; 213 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 3.6 MiB/s wr, 146 op/s
Jan 20 15:08:08 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e370 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:08:08 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:08:08 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:08:08 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:08:08.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:08:08 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:08:08 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:08:08 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:08:08.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:08:09 compute-1 ceph-mon[81775]: pgmap v2531: 321 pgs: 321 active+clean; 213 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 3.4 MiB/s wr, 145 op/s
Jan 20 15:08:10 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:08:10 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:08:10 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:08:10.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:08:10 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:08:10 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:08:10 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:08:10.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:08:10 compute-1 nova_compute[225855]: 2026-01-20 15:08:10.962 225859 DEBUG nova.network.neutron [None req-cbffac24-6e30-414d-95cc-0bbdcd9d1b37 27658864f96d453586dd0846a4c55b7d fc74c4a296554866969b05aef75252af - - default default] [instance: a25af5a3-096f-4363-842e-d960c22eb16b] Updating instance_info_cache with network_info: [{"id": "6b7cb043-d1f4-4c2b-8173-1e3e2a664767", "address": "fa:16:3e:bf:9e:90", "network": {"id": "3967ae21-1590-4685-8881-8bd1bcf25258", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-285441107-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fc74c4a296554866969b05aef75252af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b7cb043-d1", "ovs_interfaceid": "6b7cb043-d1f4-4c2b-8173-1e3e2a664767", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:08:10 compute-1 nova_compute[225855]: 2026-01-20 15:08:10.983 225859 DEBUG oslo_concurrency.lockutils [None req-cbffac24-6e30-414d-95cc-0bbdcd9d1b37 27658864f96d453586dd0846a4c55b7d fc74c4a296554866969b05aef75252af - - default default] Releasing lock "refresh_cache-a25af5a3-096f-4363-842e-d960c22eb16b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:08:11 compute-1 nova_compute[225855]: 2026-01-20 15:08:11.343 225859 DEBUG nova.virt.libvirt.driver [None req-cbffac24-6e30-414d-95cc-0bbdcd9d1b37 27658864f96d453586dd0846a4c55b7d fc74c4a296554866969b05aef75252af - - default default] [instance: a25af5a3-096f-4363-842e-d960c22eb16b] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Jan 20 15:08:11 compute-1 ceph-mon[81775]: pgmap v2532: 321 pgs: 321 active+clean; 229 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 5.6 MiB/s rd, 3.8 MiB/s wr, 235 op/s
Jan 20 15:08:12 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:08:12 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:08:12 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:08:12.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:08:12 compute-1 nova_compute[225855]: 2026-01-20 15:08:12.476 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:08:12 compute-1 nova_compute[225855]: 2026-01-20 15:08:12.506 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:08:12 compute-1 ceph-mon[81775]: pgmap v2533: 321 pgs: 321 active+clean; 241 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 5.6 MiB/s rd, 1.6 MiB/s wr, 210 op/s
Jan 20 15:08:12 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:08:12 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:08:12 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:08:12.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:08:13 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e370 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:08:13 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 20 15:08:13 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3538750701' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:08:13 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 20 15:08:13 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3538750701' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:08:13 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e371 e371: 3 total, 3 up, 3 in
Jan 20 15:08:13 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/1594079858' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:08:14 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:08:14 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:08:14 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:08:14.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:08:14 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/3538750701' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:08:14 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/3538750701' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:08:14 compute-1 ceph-mon[81775]: osdmap e371: 3 total, 3 up, 3 in
Jan 20 15:08:14 compute-1 ceph-mon[81775]: pgmap v2535: 321 pgs: 321 active+clean; 260 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 6.7 MiB/s rd, 2.1 MiB/s wr, 217 op/s
Jan 20 15:08:14 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e372 e372: 3 total, 3 up, 3 in
Jan 20 15:08:14 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:08:14 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:08:14 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:08:14.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:08:15 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e373 e373: 3 total, 3 up, 3 in
Jan 20 15:08:15 compute-1 ceph-mon[81775]: osdmap e372: 3 total, 3 up, 3 in
Jan 20 15:08:16 compute-1 podman[294360]: 2026-01-20 15:08:16.025347333 +0000 UTC m=+0.068432712 container health_status 533cd8e0c3b8b7f910f15cef49e7f879ab46c12ce646a35628c15f13725eb500 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS)
Jan 20 15:08:16 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:08:16 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:08:16 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:08:16.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:08:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:08:16.427 140354 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:08:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:08:16.427 140354 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:08:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:08:16.428 140354 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:08:16 compute-1 ceph-mon[81775]: osdmap e373: 3 total, 3 up, 3 in
Jan 20 15:08:16 compute-1 ceph-mon[81775]: pgmap v2538: 321 pgs: 321 active+clean; 281 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 202 KiB/s rd, 6.8 MiB/s wr, 74 op/s
Jan 20 15:08:16 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e374 e374: 3 total, 3 up, 3 in
Jan 20 15:08:16 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:08:16 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:08:16 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:08:16.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:08:16 compute-1 ovn_controller[130490]: 2026-01-20T15:08:16Z|00081|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:bf:9e:90 10.100.0.6
Jan 20 15:08:16 compute-1 ovn_controller[130490]: 2026-01-20T15:08:16Z|00082|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:bf:9e:90 10.100.0.6
Jan 20 15:08:17 compute-1 nova_compute[225855]: 2026-01-20 15:08:17.481 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:08:17 compute-1 nova_compute[225855]: 2026-01-20 15:08:17.507 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:08:17 compute-1 ceph-mon[81775]: osdmap e374: 3 total, 3 up, 3 in
Jan 20 15:08:18 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e374 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:08:18 compute-1 sudo[294381]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:08:18 compute-1 sudo[294381]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:08:18 compute-1 sudo[294381]: pam_unix(sudo:session): session closed for user root
Jan 20 15:08:18 compute-1 sudo[294406]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:08:18 compute-1 sudo[294406]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:08:18 compute-1 sudo[294406]: pam_unix(sudo:session): session closed for user root
Jan 20 15:08:18 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:08:18 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:08:18 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:08:18.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:08:18 compute-1 ceph-mon[81775]: pgmap v2540: 321 pgs: 321 active+clean; 281 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 253 KiB/s rd, 7.1 MiB/s wr, 83 op/s
Jan 20 15:08:18 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:08:18 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:08:18 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:08:18.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:08:19 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 15:08:19 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2580296979' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:08:19 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:08:19.575 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=54, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '12:bb:42', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '06:92:24:f7:15:56'}, ipsec=False) old=SB_Global(nb_cfg=53) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:08:19 compute-1 nova_compute[225855]: 2026-01-20 15:08:19.575 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:08:19 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:08:19.576 140354 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 20 15:08:19 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/2580296979' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:08:20 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:08:20 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:08:20 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:08:20.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:08:20 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e375 e375: 3 total, 3 up, 3 in
Jan 20 15:08:20 compute-1 ceph-mon[81775]: pgmap v2541: 321 pgs: 321 active+clean; 360 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.9 MiB/s rd, 11 MiB/s wr, 278 op/s
Jan 20 15:08:20 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:08:20 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:08:20 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:08:20.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:08:20 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e376 e376: 3 total, 3 up, 3 in
Jan 20 15:08:21 compute-1 nova_compute[225855]: 2026-01-20 15:08:21.389 225859 DEBUG nova.virt.libvirt.driver [None req-cbffac24-6e30-414d-95cc-0bbdcd9d1b37 27658864f96d453586dd0846a4c55b7d fc74c4a296554866969b05aef75252af - - default default] [instance: a25af5a3-096f-4363-842e-d960c22eb16b] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Jan 20 15:08:22 compute-1 ceph-mon[81775]: osdmap e375: 3 total, 3 up, 3 in
Jan 20 15:08:22 compute-1 ceph-mon[81775]: osdmap e376: 3 total, 3 up, 3 in
Jan 20 15:08:22 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:08:22 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:08:22 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:08:22.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:08:22 compute-1 nova_compute[225855]: 2026-01-20 15:08:22.483 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:08:22 compute-1 nova_compute[225855]: 2026-01-20 15:08:22.511 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:08:22 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:08:22 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:08:22 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:08:22.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:08:23 compute-1 ceph-mon[81775]: pgmap v2544: 321 pgs: 321 active+clean; 390 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 6.2 MiB/s rd, 9.2 MiB/s wr, 263 op/s
Jan 20 15:08:23 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e376 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:08:23 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:08:23.578 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5ffd4ac3-9266-4927-98ad-20a17782c725, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '54'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:08:23 compute-1 kernel: tap6b7cb043-d1 (unregistering): left promiscuous mode
Jan 20 15:08:23 compute-1 NetworkManager[49104]: <info>  [1768921703.8524] device (tap6b7cb043-d1): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 20 15:08:23 compute-1 ovn_controller[130490]: 2026-01-20T15:08:23Z|00729|binding|INFO|Releasing lport 6b7cb043-d1f4-4c2b-8173-1e3e2a664767 from this chassis (sb_readonly=0)
Jan 20 15:08:23 compute-1 nova_compute[225855]: 2026-01-20 15:08:23.857 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:08:23 compute-1 ovn_controller[130490]: 2026-01-20T15:08:23Z|00730|binding|INFO|Setting lport 6b7cb043-d1f4-4c2b-8173-1e3e2a664767 down in Southbound
Jan 20 15:08:23 compute-1 ovn_controller[130490]: 2026-01-20T15:08:23Z|00731|binding|INFO|Removing iface tap6b7cb043-d1 ovn-installed in OVS
Jan 20 15:08:23 compute-1 nova_compute[225855]: 2026-01-20 15:08:23.859 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:08:23 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:08:23.868 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bf:9e:90 10.100.0.6'], port_security=['fa:16:3e:bf:9e:90 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'a25af5a3-096f-4363-842e-d960c22eb16b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3967ae21-1590-4685-8881-8bd1bcf25258', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fc74c4a296554866969b05aef75252af', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd56d9c6d-4bb5-4a73-ab07-9e0ee1fd3b93', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=89ced88f-b1ed-4329-8a53-1931e6b0e3e9, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], logical_port=6b7cb043-d1f4-4c2b-8173-1e3e2a664767) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:08:23 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:08:23.869 140354 INFO neutron.agent.ovn.metadata.agent [-] Port 6b7cb043-d1f4-4c2b-8173-1e3e2a664767 in datapath 3967ae21-1590-4685-8881-8bd1bcf25258 unbound from our chassis
Jan 20 15:08:23 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:08:23.870 140354 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3967ae21-1590-4685-8881-8bd1bcf25258, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 20 15:08:23 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:08:23.871 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[f0748019-1163-4337-996a-1e5001bed076]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:08:23 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:08:23.872 140354 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-3967ae21-1590-4685-8881-8bd1bcf25258 namespace which is not needed anymore
Jan 20 15:08:23 compute-1 nova_compute[225855]: 2026-01-20 15:08:23.891 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:08:23 compute-1 systemd[1]: machine-qemu\x2d85\x2dinstance\x2d000000a8.scope: Deactivated successfully.
Jan 20 15:08:23 compute-1 systemd[1]: machine-qemu\x2d85\x2dinstance\x2d000000a8.scope: Consumed 13.379s CPU time.
Jan 20 15:08:23 compute-1 systemd-machined[194361]: Machine qemu-85-instance-000000a8 terminated.
Jan 20 15:08:24 compute-1 neutron-haproxy-ovnmeta-3967ae21-1590-4685-8881-8bd1bcf25258[294339]: [NOTICE]   (294343) : haproxy version is 2.8.14-c23fe91
Jan 20 15:08:24 compute-1 neutron-haproxy-ovnmeta-3967ae21-1590-4685-8881-8bd1bcf25258[294339]: [NOTICE]   (294343) : path to executable is /usr/sbin/haproxy
Jan 20 15:08:24 compute-1 neutron-haproxy-ovnmeta-3967ae21-1590-4685-8881-8bd1bcf25258[294339]: [WARNING]  (294343) : Exiting Master process...
Jan 20 15:08:24 compute-1 neutron-haproxy-ovnmeta-3967ae21-1590-4685-8881-8bd1bcf25258[294339]: [ALERT]    (294343) : Current worker (294345) exited with code 143 (Terminated)
Jan 20 15:08:24 compute-1 neutron-haproxy-ovnmeta-3967ae21-1590-4685-8881-8bd1bcf25258[294339]: [WARNING]  (294343) : All workers exited. Exiting... (0)
Jan 20 15:08:24 compute-1 systemd[1]: libpod-619281131e0bd7b526e0f1262ee854d58e71ee2663a61f0675cef0cbbbf34306.scope: Deactivated successfully.
Jan 20 15:08:24 compute-1 podman[294458]: 2026-01-20 15:08:24.036784945 +0000 UTC m=+0.064545122 container died 619281131e0bd7b526e0f1262ee854d58e71ee2663a61f0675cef0cbbbf34306 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3967ae21-1590-4685-8881-8bd1bcf25258, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 20 15:08:24 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-619281131e0bd7b526e0f1262ee854d58e71ee2663a61f0675cef0cbbbf34306-userdata-shm.mount: Deactivated successfully.
Jan 20 15:08:24 compute-1 systemd[1]: var-lib-containers-storage-overlay-e15a8fbf0d3cd1123225aae1fd10c9e39fdb8283817ce9b1cbec15659d2486e8-merged.mount: Deactivated successfully.
Jan 20 15:08:24 compute-1 podman[294458]: 2026-01-20 15:08:24.076014348 +0000 UTC m=+0.103774535 container cleanup 619281131e0bd7b526e0f1262ee854d58e71ee2663a61f0675cef0cbbbf34306 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3967ae21-1590-4685-8881-8bd1bcf25258, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Jan 20 15:08:24 compute-1 systemd[1]: libpod-conmon-619281131e0bd7b526e0f1262ee854d58e71ee2663a61f0675cef0cbbbf34306.scope: Deactivated successfully.
Jan 20 15:08:24 compute-1 podman[294491]: 2026-01-20 15:08:24.137638457 +0000 UTC m=+0.038323899 container remove 619281131e0bd7b526e0f1262ee854d58e71ee2663a61f0675cef0cbbbf34306 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3967ae21-1590-4685-8881-8bd1bcf25258, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 20 15:08:24 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:08:24.143 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[7db549f5-225d-4ead-97d6-7d3ddcf1da59]: (4, ('Tue Jan 20 03:08:23 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-3967ae21-1590-4685-8881-8bd1bcf25258 (619281131e0bd7b526e0f1262ee854d58e71ee2663a61f0675cef0cbbbf34306)\n619281131e0bd7b526e0f1262ee854d58e71ee2663a61f0675cef0cbbbf34306\nTue Jan 20 03:08:24 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-3967ae21-1590-4685-8881-8bd1bcf25258 (619281131e0bd7b526e0f1262ee854d58e71ee2663a61f0675cef0cbbbf34306)\n619281131e0bd7b526e0f1262ee854d58e71ee2663a61f0675cef0cbbbf34306\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:08:24 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:08:24.145 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[5aa1970d-2fc8-4d83-b019-fb1e1b1eb125]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:08:24 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:08:24.147 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3967ae21-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:08:24 compute-1 nova_compute[225855]: 2026-01-20 15:08:24.149 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:08:24 compute-1 kernel: tap3967ae21-10: left promiscuous mode
Jan 20 15:08:24 compute-1 nova_compute[225855]: 2026-01-20 15:08:24.171 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:08:24 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:08:24.175 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[f7ac8e78-080f-4f28-b991-2708d08c3368]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:08:24 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:08:24.194 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[28d9d163-9355-4fed-8b54-083b7cd0afc9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:08:24 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:08:24.195 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[dab90d20-0ecb-4f0b-b944-815e17a89c35]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:08:24 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:08:24.210 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[9960d6c2-a2d4-46e0-ba73-723f993ee9f5]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 672283, 'reachable_time': 33815, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 294517, 'error': None, 'target': 'ovnmeta-3967ae21-1590-4685-8881-8bd1bcf25258', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:08:24 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:08:24.213 140466 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-3967ae21-1590-4685-8881-8bd1bcf25258 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 20 15:08:24 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:08:24.213 140466 DEBUG oslo.privsep.daemon [-] privsep: reply[d4ec6393-d94f-49de-ae27-d5016da24abe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:08:24 compute-1 systemd[1]: run-netns-ovnmeta\x2d3967ae21\x2d1590\x2d4685\x2d8881\x2d8bd1bcf25258.mount: Deactivated successfully.
Jan 20 15:08:24 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:08:24 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:08:24 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:08:24.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:08:24 compute-1 nova_compute[225855]: 2026-01-20 15:08:24.399 225859 DEBUG nova.compute.manager [req-9b156bec-7cc6-4bcc-a6c1-ba8ede0b3547 req-c0db19e3-0daf-485b-9c4b-8da0f3dd84de 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: a25af5a3-096f-4363-842e-d960c22eb16b] Received event network-vif-unplugged-6b7cb043-d1f4-4c2b-8173-1e3e2a664767 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:08:24 compute-1 nova_compute[225855]: 2026-01-20 15:08:24.399 225859 DEBUG oslo_concurrency.lockutils [req-9b156bec-7cc6-4bcc-a6c1-ba8ede0b3547 req-c0db19e3-0daf-485b-9c4b-8da0f3dd84de 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "a25af5a3-096f-4363-842e-d960c22eb16b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:08:24 compute-1 nova_compute[225855]: 2026-01-20 15:08:24.400 225859 DEBUG oslo_concurrency.lockutils [req-9b156bec-7cc6-4bcc-a6c1-ba8ede0b3547 req-c0db19e3-0daf-485b-9c4b-8da0f3dd84de 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "a25af5a3-096f-4363-842e-d960c22eb16b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:08:24 compute-1 nova_compute[225855]: 2026-01-20 15:08:24.400 225859 DEBUG oslo_concurrency.lockutils [req-9b156bec-7cc6-4bcc-a6c1-ba8ede0b3547 req-c0db19e3-0daf-485b-9c4b-8da0f3dd84de 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "a25af5a3-096f-4363-842e-d960c22eb16b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:08:24 compute-1 nova_compute[225855]: 2026-01-20 15:08:24.400 225859 DEBUG nova.compute.manager [req-9b156bec-7cc6-4bcc-a6c1-ba8ede0b3547 req-c0db19e3-0daf-485b-9c4b-8da0f3dd84de 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: a25af5a3-096f-4363-842e-d960c22eb16b] No waiting events found dispatching network-vif-unplugged-6b7cb043-d1f4-4c2b-8173-1e3e2a664767 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:08:24 compute-1 nova_compute[225855]: 2026-01-20 15:08:24.400 225859 WARNING nova.compute.manager [req-9b156bec-7cc6-4bcc-a6c1-ba8ede0b3547 req-c0db19e3-0daf-485b-9c4b-8da0f3dd84de 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: a25af5a3-096f-4363-842e-d960c22eb16b] Received unexpected event network-vif-unplugged-6b7cb043-d1f4-4c2b-8173-1e3e2a664767 for instance with vm_state active and task_state rescuing.
Jan 20 15:08:24 compute-1 nova_compute[225855]: 2026-01-20 15:08:24.404 225859 INFO nova.virt.libvirt.driver [None req-cbffac24-6e30-414d-95cc-0bbdcd9d1b37 27658864f96d453586dd0846a4c55b7d fc74c4a296554866969b05aef75252af - - default default] [instance: a25af5a3-096f-4363-842e-d960c22eb16b] Instance shutdown successfully after 13 seconds.
Jan 20 15:08:24 compute-1 nova_compute[225855]: 2026-01-20 15:08:24.409 225859 INFO nova.virt.libvirt.driver [-] [instance: a25af5a3-096f-4363-842e-d960c22eb16b] Instance destroyed successfully.
Jan 20 15:08:24 compute-1 nova_compute[225855]: 2026-01-20 15:08:24.410 225859 DEBUG nova.objects.instance [None req-cbffac24-6e30-414d-95cc-0bbdcd9d1b37 27658864f96d453586dd0846a4c55b7d fc74c4a296554866969b05aef75252af - - default default] Lazy-loading 'numa_topology' on Instance uuid a25af5a3-096f-4363-842e-d960c22eb16b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:08:24 compute-1 nova_compute[225855]: 2026-01-20 15:08:24.428 225859 INFO nova.virt.libvirt.driver [None req-cbffac24-6e30-414d-95cc-0bbdcd9d1b37 27658864f96d453586dd0846a4c55b7d fc74c4a296554866969b05aef75252af - - default default] [instance: a25af5a3-096f-4363-842e-d960c22eb16b] Attempting rescue
Jan 20 15:08:24 compute-1 nova_compute[225855]: 2026-01-20 15:08:24.429 225859 DEBUG nova.virt.libvirt.driver [None req-cbffac24-6e30-414d-95cc-0bbdcd9d1b37 27658864f96d453586dd0846a4c55b7d fc74c4a296554866969b05aef75252af - - default default] rescue generated disk_info: {'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} rescue /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4314
Jan 20 15:08:24 compute-1 nova_compute[225855]: 2026-01-20 15:08:24.432 225859 DEBUG nova.virt.libvirt.driver [None req-cbffac24-6e30-414d-95cc-0bbdcd9d1b37 27658864f96d453586dd0846a4c55b7d fc74c4a296554866969b05aef75252af - - default default] [instance: a25af5a3-096f-4363-842e-d960c22eb16b] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719
Jan 20 15:08:24 compute-1 nova_compute[225855]: 2026-01-20 15:08:24.433 225859 INFO nova.virt.libvirt.driver [None req-cbffac24-6e30-414d-95cc-0bbdcd9d1b37 27658864f96d453586dd0846a4c55b7d fc74c4a296554866969b05aef75252af - - default default] [instance: a25af5a3-096f-4363-842e-d960c22eb16b] Creating image(s)
Jan 20 15:08:24 compute-1 nova_compute[225855]: 2026-01-20 15:08:24.456 225859 DEBUG nova.storage.rbd_utils [None req-cbffac24-6e30-414d-95cc-0bbdcd9d1b37 27658864f96d453586dd0846a4c55b7d fc74c4a296554866969b05aef75252af - - default default] rbd image a25af5a3-096f-4363-842e-d960c22eb16b_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:08:24 compute-1 nova_compute[225855]: 2026-01-20 15:08:24.459 225859 DEBUG nova.objects.instance [None req-cbffac24-6e30-414d-95cc-0bbdcd9d1b37 27658864f96d453586dd0846a4c55b7d fc74c4a296554866969b05aef75252af - - default default] Lazy-loading 'trusted_certs' on Instance uuid a25af5a3-096f-4363-842e-d960c22eb16b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:08:24 compute-1 nova_compute[225855]: 2026-01-20 15:08:24.491 225859 DEBUG nova.storage.rbd_utils [None req-cbffac24-6e30-414d-95cc-0bbdcd9d1b37 27658864f96d453586dd0846a4c55b7d fc74c4a296554866969b05aef75252af - - default default] rbd image a25af5a3-096f-4363-842e-d960c22eb16b_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:08:24 compute-1 nova_compute[225855]: 2026-01-20 15:08:24.516 225859 DEBUG nova.storage.rbd_utils [None req-cbffac24-6e30-414d-95cc-0bbdcd9d1b37 27658864f96d453586dd0846a4c55b7d fc74c4a296554866969b05aef75252af - - default default] rbd image a25af5a3-096f-4363-842e-d960c22eb16b_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:08:24 compute-1 nova_compute[225855]: 2026-01-20 15:08:24.518 225859 DEBUG oslo_concurrency.processutils [None req-cbffac24-6e30-414d-95cc-0bbdcd9d1b37 27658864f96d453586dd0846a4c55b7d fc74c4a296554866969b05aef75252af - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:08:24 compute-1 nova_compute[225855]: 2026-01-20 15:08:24.580 225859 DEBUG oslo_concurrency.processutils [None req-cbffac24-6e30-414d-95cc-0bbdcd9d1b37 27658864f96d453586dd0846a4c55b7d fc74c4a296554866969b05aef75252af - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:08:24 compute-1 nova_compute[225855]: 2026-01-20 15:08:24.580 225859 DEBUG oslo_concurrency.lockutils [None req-cbffac24-6e30-414d-95cc-0bbdcd9d1b37 27658864f96d453586dd0846a4c55b7d fc74c4a296554866969b05aef75252af - - default default] Acquiring lock "82d5c1918fd7c974214c7a48c1793a7a82560462" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:08:24 compute-1 nova_compute[225855]: 2026-01-20 15:08:24.581 225859 DEBUG oslo_concurrency.lockutils [None req-cbffac24-6e30-414d-95cc-0bbdcd9d1b37 27658864f96d453586dd0846a4c55b7d fc74c4a296554866969b05aef75252af - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:08:24 compute-1 nova_compute[225855]: 2026-01-20 15:08:24.581 225859 DEBUG oslo_concurrency.lockutils [None req-cbffac24-6e30-414d-95cc-0bbdcd9d1b37 27658864f96d453586dd0846a4c55b7d fc74c4a296554866969b05aef75252af - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:08:24 compute-1 nova_compute[225855]: 2026-01-20 15:08:24.606 225859 DEBUG nova.storage.rbd_utils [None req-cbffac24-6e30-414d-95cc-0bbdcd9d1b37 27658864f96d453586dd0846a4c55b7d fc74c4a296554866969b05aef75252af - - default default] rbd image a25af5a3-096f-4363-842e-d960c22eb16b_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:08:24 compute-1 nova_compute[225855]: 2026-01-20 15:08:24.609 225859 DEBUG oslo_concurrency.processutils [None req-cbffac24-6e30-414d-95cc-0bbdcd9d1b37 27658864f96d453586dd0846a4c55b7d fc74c4a296554866969b05aef75252af - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 a25af5a3-096f-4363-842e-d960c22eb16b_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:08:24 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:08:24 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:08:24 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:08:24.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:08:24 compute-1 nova_compute[225855]: 2026-01-20 15:08:24.891 225859 DEBUG oslo_concurrency.processutils [None req-cbffac24-6e30-414d-95cc-0bbdcd9d1b37 27658864f96d453586dd0846a4c55b7d fc74c4a296554866969b05aef75252af - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 a25af5a3-096f-4363-842e-d960c22eb16b_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.282s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:08:24 compute-1 nova_compute[225855]: 2026-01-20 15:08:24.893 225859 DEBUG nova.objects.instance [None req-cbffac24-6e30-414d-95cc-0bbdcd9d1b37 27658864f96d453586dd0846a4c55b7d fc74c4a296554866969b05aef75252af - - default default] Lazy-loading 'migration_context' on Instance uuid a25af5a3-096f-4363-842e-d960c22eb16b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:08:24 compute-1 nova_compute[225855]: 2026-01-20 15:08:24.912 225859 DEBUG nova.virt.libvirt.driver [None req-cbffac24-6e30-414d-95cc-0bbdcd9d1b37 27658864f96d453586dd0846a4c55b7d fc74c4a296554866969b05aef75252af - - default default] [instance: a25af5a3-096f-4363-842e-d960c22eb16b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 20 15:08:24 compute-1 nova_compute[225855]: 2026-01-20 15:08:24.913 225859 DEBUG nova.virt.libvirt.driver [None req-cbffac24-6e30-414d-95cc-0bbdcd9d1b37 27658864f96d453586dd0846a4c55b7d fc74c4a296554866969b05aef75252af - - default default] [instance: a25af5a3-096f-4363-842e-d960c22eb16b] Start _get_guest_xml network_info=[{"id": "6b7cb043-d1f4-4c2b-8173-1e3e2a664767", "address": "fa:16:3e:bf:9e:90", "network": {"id": "3967ae21-1590-4685-8881-8bd1bcf25258", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-285441107-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueNegativeTestJSON-285441107-network", "vif_mac": "fa:16:3e:bf:9e:90"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fc74c4a296554866969b05aef75252af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b7cb043-d1", "ovs_interfaceid": "6b7cb043-d1f4-4c2b-8173-1e3e2a664767", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>) rescue={'image_id': 'a32b3e07-16d8-46fd-9a7b-c242c432fcf9', 'kernel_id': '', 'ramdisk_id': ''} block_device_info=None _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 20 15:08:24 compute-1 nova_compute[225855]: 2026-01-20 15:08:24.914 225859 DEBUG nova.objects.instance [None req-cbffac24-6e30-414d-95cc-0bbdcd9d1b37 27658864f96d453586dd0846a4c55b7d fc74c4a296554866969b05aef75252af - - default default] Lazy-loading 'resources' on Instance uuid a25af5a3-096f-4363-842e-d960c22eb16b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:08:24 compute-1 nova_compute[225855]: 2026-01-20 15:08:24.936 225859 WARNING nova.virt.libvirt.driver [None req-cbffac24-6e30-414d-95cc-0bbdcd9d1b37 27658864f96d453586dd0846a4c55b7d fc74c4a296554866969b05aef75252af - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 15:08:24 compute-1 nova_compute[225855]: 2026-01-20 15:08:24.947 225859 DEBUG nova.virt.libvirt.host [None req-cbffac24-6e30-414d-95cc-0bbdcd9d1b37 27658864f96d453586dd0846a4c55b7d fc74c4a296554866969b05aef75252af - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 20 15:08:24 compute-1 nova_compute[225855]: 2026-01-20 15:08:24.948 225859 DEBUG nova.virt.libvirt.host [None req-cbffac24-6e30-414d-95cc-0bbdcd9d1b37 27658864f96d453586dd0846a4c55b7d fc74c4a296554866969b05aef75252af - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 20 15:08:24 compute-1 nova_compute[225855]: 2026-01-20 15:08:24.951 225859 DEBUG nova.virt.libvirt.host [None req-cbffac24-6e30-414d-95cc-0bbdcd9d1b37 27658864f96d453586dd0846a4c55b7d fc74c4a296554866969b05aef75252af - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 20 15:08:24 compute-1 nova_compute[225855]: 2026-01-20 15:08:24.952 225859 DEBUG nova.virt.libvirt.host [None req-cbffac24-6e30-414d-95cc-0bbdcd9d1b37 27658864f96d453586dd0846a4c55b7d fc74c4a296554866969b05aef75252af - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 20 15:08:24 compute-1 nova_compute[225855]: 2026-01-20 15:08:24.953 225859 DEBUG nova.virt.libvirt.driver [None req-cbffac24-6e30-414d-95cc-0bbdcd9d1b37 27658864f96d453586dd0846a4c55b7d fc74c4a296554866969b05aef75252af - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 20 15:08:24 compute-1 nova_compute[225855]: 2026-01-20 15:08:24.953 225859 DEBUG nova.virt.hardware [None req-cbffac24-6e30-414d-95cc-0bbdcd9d1b37 27658864f96d453586dd0846a4c55b7d fc74c4a296554866969b05aef75252af - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-20T14:21:55Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='522deaab-a741-4dbb-932d-d8b13a211c33',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 20 15:08:24 compute-1 nova_compute[225855]: 2026-01-20 15:08:24.953 225859 DEBUG nova.virt.hardware [None req-cbffac24-6e30-414d-95cc-0bbdcd9d1b37 27658864f96d453586dd0846a4c55b7d fc74c4a296554866969b05aef75252af - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 20 15:08:24 compute-1 nova_compute[225855]: 2026-01-20 15:08:24.954 225859 DEBUG nova.virt.hardware [None req-cbffac24-6e30-414d-95cc-0bbdcd9d1b37 27658864f96d453586dd0846a4c55b7d fc74c4a296554866969b05aef75252af - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 20 15:08:24 compute-1 nova_compute[225855]: 2026-01-20 15:08:24.954 225859 DEBUG nova.virt.hardware [None req-cbffac24-6e30-414d-95cc-0bbdcd9d1b37 27658864f96d453586dd0846a4c55b7d fc74c4a296554866969b05aef75252af - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 20 15:08:24 compute-1 nova_compute[225855]: 2026-01-20 15:08:24.954 225859 DEBUG nova.virt.hardware [None req-cbffac24-6e30-414d-95cc-0bbdcd9d1b37 27658864f96d453586dd0846a4c55b7d fc74c4a296554866969b05aef75252af - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 20 15:08:24 compute-1 nova_compute[225855]: 2026-01-20 15:08:24.954 225859 DEBUG nova.virt.hardware [None req-cbffac24-6e30-414d-95cc-0bbdcd9d1b37 27658864f96d453586dd0846a4c55b7d fc74c4a296554866969b05aef75252af - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 20 15:08:24 compute-1 nova_compute[225855]: 2026-01-20 15:08:24.954 225859 DEBUG nova.virt.hardware [None req-cbffac24-6e30-414d-95cc-0bbdcd9d1b37 27658864f96d453586dd0846a4c55b7d fc74c4a296554866969b05aef75252af - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 20 15:08:24 compute-1 nova_compute[225855]: 2026-01-20 15:08:24.955 225859 DEBUG nova.virt.hardware [None req-cbffac24-6e30-414d-95cc-0bbdcd9d1b37 27658864f96d453586dd0846a4c55b7d fc74c4a296554866969b05aef75252af - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 20 15:08:24 compute-1 nova_compute[225855]: 2026-01-20 15:08:24.955 225859 DEBUG nova.virt.hardware [None req-cbffac24-6e30-414d-95cc-0bbdcd9d1b37 27658864f96d453586dd0846a4c55b7d fc74c4a296554866969b05aef75252af - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 20 15:08:24 compute-1 nova_compute[225855]: 2026-01-20 15:08:24.955 225859 DEBUG nova.virt.hardware [None req-cbffac24-6e30-414d-95cc-0bbdcd9d1b37 27658864f96d453586dd0846a4c55b7d fc74c4a296554866969b05aef75252af - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 20 15:08:24 compute-1 nova_compute[225855]: 2026-01-20 15:08:24.955 225859 DEBUG nova.virt.hardware [None req-cbffac24-6e30-414d-95cc-0bbdcd9d1b37 27658864f96d453586dd0846a4c55b7d fc74c4a296554866969b05aef75252af - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 20 15:08:24 compute-1 nova_compute[225855]: 2026-01-20 15:08:24.956 225859 DEBUG nova.objects.instance [None req-cbffac24-6e30-414d-95cc-0bbdcd9d1b37 27658864f96d453586dd0846a4c55b7d fc74c4a296554866969b05aef75252af - - default default] Lazy-loading 'vcpu_model' on Instance uuid a25af5a3-096f-4363-842e-d960c22eb16b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:08:24 compute-1 nova_compute[225855]: 2026-01-20 15:08:24.979 225859 DEBUG oslo_concurrency.processutils [None req-cbffac24-6e30-414d-95cc-0bbdcd9d1b37 27658864f96d453586dd0846a4c55b7d fc74c4a296554866969b05aef75252af - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:08:25 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 15:08:25 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2739269448' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:08:25 compute-1 nova_compute[225855]: 2026-01-20 15:08:25.422 225859 DEBUG oslo_concurrency.processutils [None req-cbffac24-6e30-414d-95cc-0bbdcd9d1b37 27658864f96d453586dd0846a4c55b7d fc74c4a296554866969b05aef75252af - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:08:25 compute-1 nova_compute[225855]: 2026-01-20 15:08:25.423 225859 DEBUG oslo_concurrency.processutils [None req-cbffac24-6e30-414d-95cc-0bbdcd9d1b37 27658864f96d453586dd0846a4c55b7d fc74c4a296554866969b05aef75252af - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:08:25 compute-1 ceph-mon[81775]: pgmap v2545: 321 pgs: 321 active+clean; 403 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 5.7 MiB/s rd, 8.4 MiB/s wr, 223 op/s
Jan 20 15:08:25 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/2739269448' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:08:25 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 15:08:25 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2989555521' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:08:25 compute-1 nova_compute[225855]: 2026-01-20 15:08:25.874 225859 DEBUG oslo_concurrency.processutils [None req-cbffac24-6e30-414d-95cc-0bbdcd9d1b37 27658864f96d453586dd0846a4c55b7d fc74c4a296554866969b05aef75252af - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:08:25 compute-1 nova_compute[225855]: 2026-01-20 15:08:25.876 225859 DEBUG oslo_concurrency.processutils [None req-cbffac24-6e30-414d-95cc-0bbdcd9d1b37 27658864f96d453586dd0846a4c55b7d fc74c4a296554866969b05aef75252af - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:08:26 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 15:08:26 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2370305360' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:08:26 compute-1 nova_compute[225855]: 2026-01-20 15:08:26.306 225859 DEBUG oslo_concurrency.processutils [None req-cbffac24-6e30-414d-95cc-0bbdcd9d1b37 27658864f96d453586dd0846a4c55b7d fc74c4a296554866969b05aef75252af - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:08:26 compute-1 nova_compute[225855]: 2026-01-20 15:08:26.308 225859 DEBUG nova.virt.libvirt.vif [None req-cbffac24-6e30-414d-95cc-0bbdcd9d1b37 27658864f96d453586dd0846a4c55b7d fc74c4a296554866969b05aef75252af - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T15:07:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueNegativeTestJSON-server-950743647',display_name='tempest-ServerRescueNegativeTestJSON-server-950743647',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverrescuenegativetestjson-server-950743647',id=168,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-20T15:08:04Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='fc74c4a296554866969b05aef75252af',ramdisk_id='',reservation_id='r-gkbc59sh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_mode
l='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueNegativeTestJSON-1649662639',owner_user_name='tempest-ServerRescueNegativeTestJSON-1649662639-project-member'},tags=<?>,task_state='rescuing',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T15:08:04Z,user_data=None,user_id='27658864f96d453586dd0846a4c55b7d',uuid=a25af5a3-096f-4363-842e-d960c22eb16b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "6b7cb043-d1f4-4c2b-8173-1e3e2a664767", "address": "fa:16:3e:bf:9e:90", "network": {"id": "3967ae21-1590-4685-8881-8bd1bcf25258", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-285441107-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueNegativeTestJSON-285441107-network", "vif_mac": "fa:16:3e:bf:9e:90"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fc74c4a296554866969b05aef75252af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b7cb043-d1", "ovs_interfaceid": "6b7cb043-d1f4-4c2b-8173-1e3e2a664767", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 20 15:08:26 compute-1 nova_compute[225855]: 2026-01-20 15:08:26.308 225859 DEBUG nova.network.os_vif_util [None req-cbffac24-6e30-414d-95cc-0bbdcd9d1b37 27658864f96d453586dd0846a4c55b7d fc74c4a296554866969b05aef75252af - - default default] Converting VIF {"id": "6b7cb043-d1f4-4c2b-8173-1e3e2a664767", "address": "fa:16:3e:bf:9e:90", "network": {"id": "3967ae21-1590-4685-8881-8bd1bcf25258", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-285441107-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueNegativeTestJSON-285441107-network", "vif_mac": "fa:16:3e:bf:9e:90"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fc74c4a296554866969b05aef75252af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b7cb043-d1", "ovs_interfaceid": "6b7cb043-d1f4-4c2b-8173-1e3e2a664767", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 15:08:26 compute-1 nova_compute[225855]: 2026-01-20 15:08:26.310 225859 DEBUG nova.network.os_vif_util [None req-cbffac24-6e30-414d-95cc-0bbdcd9d1b37 27658864f96d453586dd0846a4c55b7d fc74c4a296554866969b05aef75252af - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:bf:9e:90,bridge_name='br-int',has_traffic_filtering=True,id=6b7cb043-d1f4-4c2b-8173-1e3e2a664767,network=Network(3967ae21-1590-4685-8881-8bd1bcf25258),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6b7cb043-d1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 15:08:26 compute-1 nova_compute[225855]: 2026-01-20 15:08:26.311 225859 DEBUG nova.objects.instance [None req-cbffac24-6e30-414d-95cc-0bbdcd9d1b37 27658864f96d453586dd0846a4c55b7d fc74c4a296554866969b05aef75252af - - default default] Lazy-loading 'pci_devices' on Instance uuid a25af5a3-096f-4363-842e-d960c22eb16b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:08:26 compute-1 nova_compute[225855]: 2026-01-20 15:08:26.340 225859 DEBUG nova.virt.libvirt.driver [None req-cbffac24-6e30-414d-95cc-0bbdcd9d1b37 27658864f96d453586dd0846a4c55b7d fc74c4a296554866969b05aef75252af - - default default] [instance: a25af5a3-096f-4363-842e-d960c22eb16b] End _get_guest_xml xml=<domain type="kvm">
Jan 20 15:08:26 compute-1 nova_compute[225855]:   <uuid>a25af5a3-096f-4363-842e-d960c22eb16b</uuid>
Jan 20 15:08:26 compute-1 nova_compute[225855]:   <name>instance-000000a8</name>
Jan 20 15:08:26 compute-1 nova_compute[225855]:   <memory>131072</memory>
Jan 20 15:08:26 compute-1 nova_compute[225855]:   <vcpu>1</vcpu>
Jan 20 15:08:26 compute-1 nova_compute[225855]:   <metadata>
Jan 20 15:08:26 compute-1 nova_compute[225855]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 15:08:26 compute-1 nova_compute[225855]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 15:08:26 compute-1 nova_compute[225855]:       <nova:name>tempest-ServerRescueNegativeTestJSON-server-950743647</nova:name>
Jan 20 15:08:26 compute-1 nova_compute[225855]:       <nova:creationTime>2026-01-20 15:08:24</nova:creationTime>
Jan 20 15:08:26 compute-1 nova_compute[225855]:       <nova:flavor name="m1.nano">
Jan 20 15:08:26 compute-1 nova_compute[225855]:         <nova:memory>128</nova:memory>
Jan 20 15:08:26 compute-1 nova_compute[225855]:         <nova:disk>1</nova:disk>
Jan 20 15:08:26 compute-1 nova_compute[225855]:         <nova:swap>0</nova:swap>
Jan 20 15:08:26 compute-1 nova_compute[225855]:         <nova:ephemeral>0</nova:ephemeral>
Jan 20 15:08:26 compute-1 nova_compute[225855]:         <nova:vcpus>1</nova:vcpus>
Jan 20 15:08:26 compute-1 nova_compute[225855]:       </nova:flavor>
Jan 20 15:08:26 compute-1 nova_compute[225855]:       <nova:owner>
Jan 20 15:08:26 compute-1 nova_compute[225855]:         <nova:user uuid="27658864f96d453586dd0846a4c55b7d">tempest-ServerRescueNegativeTestJSON-1649662639-project-member</nova:user>
Jan 20 15:08:26 compute-1 nova_compute[225855]:         <nova:project uuid="fc74c4a296554866969b05aef75252af">tempest-ServerRescueNegativeTestJSON-1649662639</nova:project>
Jan 20 15:08:26 compute-1 nova_compute[225855]:       </nova:owner>
Jan 20 15:08:26 compute-1 nova_compute[225855]:       <nova:root type="image" uuid="a32b3e07-16d8-46fd-9a7b-c242c432fcf9"/>
Jan 20 15:08:26 compute-1 nova_compute[225855]:       <nova:ports>
Jan 20 15:08:26 compute-1 nova_compute[225855]:         <nova:port uuid="6b7cb043-d1f4-4c2b-8173-1e3e2a664767">
Jan 20 15:08:26 compute-1 nova_compute[225855]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Jan 20 15:08:26 compute-1 nova_compute[225855]:         </nova:port>
Jan 20 15:08:26 compute-1 nova_compute[225855]:       </nova:ports>
Jan 20 15:08:26 compute-1 nova_compute[225855]:     </nova:instance>
Jan 20 15:08:26 compute-1 nova_compute[225855]:   </metadata>
Jan 20 15:08:26 compute-1 nova_compute[225855]:   <sysinfo type="smbios">
Jan 20 15:08:26 compute-1 nova_compute[225855]:     <system>
Jan 20 15:08:26 compute-1 nova_compute[225855]:       <entry name="manufacturer">RDO</entry>
Jan 20 15:08:26 compute-1 nova_compute[225855]:       <entry name="product">OpenStack Compute</entry>
Jan 20 15:08:26 compute-1 nova_compute[225855]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 15:08:26 compute-1 nova_compute[225855]:       <entry name="serial">a25af5a3-096f-4363-842e-d960c22eb16b</entry>
Jan 20 15:08:26 compute-1 nova_compute[225855]:       <entry name="uuid">a25af5a3-096f-4363-842e-d960c22eb16b</entry>
Jan 20 15:08:26 compute-1 nova_compute[225855]:       <entry name="family">Virtual Machine</entry>
Jan 20 15:08:26 compute-1 nova_compute[225855]:     </system>
Jan 20 15:08:26 compute-1 nova_compute[225855]:   </sysinfo>
Jan 20 15:08:26 compute-1 nova_compute[225855]:   <os>
Jan 20 15:08:26 compute-1 nova_compute[225855]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 20 15:08:26 compute-1 nova_compute[225855]:     <smbios mode="sysinfo"/>
Jan 20 15:08:26 compute-1 nova_compute[225855]:   </os>
Jan 20 15:08:26 compute-1 nova_compute[225855]:   <features>
Jan 20 15:08:26 compute-1 nova_compute[225855]:     <acpi/>
Jan 20 15:08:26 compute-1 nova_compute[225855]:     <apic/>
Jan 20 15:08:26 compute-1 nova_compute[225855]:     <vmcoreinfo/>
Jan 20 15:08:26 compute-1 nova_compute[225855]:   </features>
Jan 20 15:08:26 compute-1 nova_compute[225855]:   <clock offset="utc">
Jan 20 15:08:26 compute-1 nova_compute[225855]:     <timer name="pit" tickpolicy="delay"/>
Jan 20 15:08:26 compute-1 nova_compute[225855]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 20 15:08:26 compute-1 nova_compute[225855]:     <timer name="hpet" present="no"/>
Jan 20 15:08:26 compute-1 nova_compute[225855]:   </clock>
Jan 20 15:08:26 compute-1 nova_compute[225855]:   <cpu mode="custom" match="exact">
Jan 20 15:08:26 compute-1 nova_compute[225855]:     <model>Nehalem</model>
Jan 20 15:08:26 compute-1 nova_compute[225855]:     <topology sockets="1" cores="1" threads="1"/>
Jan 20 15:08:26 compute-1 nova_compute[225855]:   </cpu>
Jan 20 15:08:26 compute-1 nova_compute[225855]:   <devices>
Jan 20 15:08:26 compute-1 nova_compute[225855]:     <disk type="network" device="disk">
Jan 20 15:08:26 compute-1 nova_compute[225855]:       <driver type="raw" cache="none"/>
Jan 20 15:08:26 compute-1 nova_compute[225855]:       <source protocol="rbd" name="vms/a25af5a3-096f-4363-842e-d960c22eb16b_disk.rescue">
Jan 20 15:08:26 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 15:08:26 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 15:08:26 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 15:08:26 compute-1 nova_compute[225855]:       </source>
Jan 20 15:08:26 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 15:08:26 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 15:08:26 compute-1 nova_compute[225855]:       </auth>
Jan 20 15:08:26 compute-1 nova_compute[225855]:       <target dev="vda" bus="virtio"/>
Jan 20 15:08:26 compute-1 nova_compute[225855]:     </disk>
Jan 20 15:08:26 compute-1 nova_compute[225855]:     <disk type="network" device="disk">
Jan 20 15:08:26 compute-1 nova_compute[225855]:       <driver type="raw" cache="none"/>
Jan 20 15:08:26 compute-1 nova_compute[225855]:       <source protocol="rbd" name="vms/a25af5a3-096f-4363-842e-d960c22eb16b_disk">
Jan 20 15:08:26 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 15:08:26 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 15:08:26 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 15:08:26 compute-1 nova_compute[225855]:       </source>
Jan 20 15:08:26 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 15:08:26 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 15:08:26 compute-1 nova_compute[225855]:       </auth>
Jan 20 15:08:26 compute-1 nova_compute[225855]:       <target dev="vdb" bus="virtio"/>
Jan 20 15:08:26 compute-1 nova_compute[225855]:     </disk>
Jan 20 15:08:26 compute-1 nova_compute[225855]:     <disk type="network" device="cdrom">
Jan 20 15:08:26 compute-1 nova_compute[225855]:       <driver type="raw" cache="none"/>
Jan 20 15:08:26 compute-1 nova_compute[225855]:       <source protocol="rbd" name="vms/a25af5a3-096f-4363-842e-d960c22eb16b_disk.config.rescue">
Jan 20 15:08:26 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 15:08:26 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 15:08:26 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 15:08:26 compute-1 nova_compute[225855]:       </source>
Jan 20 15:08:26 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 15:08:26 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 15:08:26 compute-1 nova_compute[225855]:       </auth>
Jan 20 15:08:26 compute-1 nova_compute[225855]:       <target dev="sda" bus="sata"/>
Jan 20 15:08:26 compute-1 nova_compute[225855]:     </disk>
Jan 20 15:08:26 compute-1 nova_compute[225855]:     <interface type="ethernet">
Jan 20 15:08:26 compute-1 nova_compute[225855]:       <mac address="fa:16:3e:bf:9e:90"/>
Jan 20 15:08:26 compute-1 nova_compute[225855]:       <model type="virtio"/>
Jan 20 15:08:26 compute-1 nova_compute[225855]:       <driver name="vhost" rx_queue_size="512"/>
Jan 20 15:08:26 compute-1 nova_compute[225855]:       <mtu size="1442"/>
Jan 20 15:08:26 compute-1 nova_compute[225855]:       <target dev="tap6b7cb043-d1"/>
Jan 20 15:08:26 compute-1 nova_compute[225855]:     </interface>
Jan 20 15:08:26 compute-1 nova_compute[225855]:     <serial type="pty">
Jan 20 15:08:26 compute-1 nova_compute[225855]:       <log file="/var/lib/nova/instances/a25af5a3-096f-4363-842e-d960c22eb16b/console.log" append="off"/>
Jan 20 15:08:26 compute-1 nova_compute[225855]:     </serial>
Jan 20 15:08:26 compute-1 nova_compute[225855]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 20 15:08:26 compute-1 nova_compute[225855]:     <video>
Jan 20 15:08:26 compute-1 nova_compute[225855]:       <model type="virtio"/>
Jan 20 15:08:26 compute-1 nova_compute[225855]:     </video>
Jan 20 15:08:26 compute-1 nova_compute[225855]:     <input type="tablet" bus="usb"/>
Jan 20 15:08:26 compute-1 nova_compute[225855]:     <rng model="virtio">
Jan 20 15:08:26 compute-1 nova_compute[225855]:       <backend model="random">/dev/urandom</backend>
Jan 20 15:08:26 compute-1 nova_compute[225855]:     </rng>
Jan 20 15:08:26 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root"/>
Jan 20 15:08:26 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:08:26 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:08:26 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:08:26 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:08:26 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:08:26 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:08:26 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:08:26 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:08:26 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:08:26 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:08:26 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:08:26 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:08:26 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:08:26 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:08:26 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:08:26 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:08:26 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:08:26 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:08:26 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:08:26 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:08:26 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:08:26 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:08:26 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:08:26 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:08:26 compute-1 nova_compute[225855]:     <controller type="usb" index="0"/>
Jan 20 15:08:26 compute-1 nova_compute[225855]:     <memballoon model="virtio">
Jan 20 15:08:26 compute-1 nova_compute[225855]:       <stats period="10"/>
Jan 20 15:08:26 compute-1 nova_compute[225855]:     </memballoon>
Jan 20 15:08:26 compute-1 nova_compute[225855]:   </devices>
Jan 20 15:08:26 compute-1 nova_compute[225855]: </domain>
Jan 20 15:08:26 compute-1 nova_compute[225855]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 20 15:08:26 compute-1 nova_compute[225855]: 2026-01-20 15:08:26.348 225859 INFO nova.virt.libvirt.driver [-] [instance: a25af5a3-096f-4363-842e-d960c22eb16b] Instance destroyed successfully.
Jan 20 15:08:26 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:08:26 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:08:26 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:08:26.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:08:26 compute-1 nova_compute[225855]: 2026-01-20 15:08:26.406 225859 DEBUG nova.virt.libvirt.driver [None req-cbffac24-6e30-414d-95cc-0bbdcd9d1b37 27658864f96d453586dd0846a4c55b7d fc74c4a296554866969b05aef75252af - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 15:08:26 compute-1 nova_compute[225855]: 2026-01-20 15:08:26.406 225859 DEBUG nova.virt.libvirt.driver [None req-cbffac24-6e30-414d-95cc-0bbdcd9d1b37 27658864f96d453586dd0846a4c55b7d fc74c4a296554866969b05aef75252af - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 15:08:26 compute-1 nova_compute[225855]: 2026-01-20 15:08:26.407 225859 DEBUG nova.virt.libvirt.driver [None req-cbffac24-6e30-414d-95cc-0bbdcd9d1b37 27658864f96d453586dd0846a4c55b7d fc74c4a296554866969b05aef75252af - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 15:08:26 compute-1 nova_compute[225855]: 2026-01-20 15:08:26.407 225859 DEBUG nova.virt.libvirt.driver [None req-cbffac24-6e30-414d-95cc-0bbdcd9d1b37 27658864f96d453586dd0846a4c55b7d fc74c4a296554866969b05aef75252af - - default default] No VIF found with MAC fa:16:3e:bf:9e:90, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 20 15:08:26 compute-1 nova_compute[225855]: 2026-01-20 15:08:26.407 225859 INFO nova.virt.libvirt.driver [None req-cbffac24-6e30-414d-95cc-0bbdcd9d1b37 27658864f96d453586dd0846a4c55b7d fc74c4a296554866969b05aef75252af - - default default] [instance: a25af5a3-096f-4363-842e-d960c22eb16b] Using config drive
Jan 20 15:08:26 compute-1 nova_compute[225855]: 2026-01-20 15:08:26.430 225859 DEBUG nova.storage.rbd_utils [None req-cbffac24-6e30-414d-95cc-0bbdcd9d1b37 27658864f96d453586dd0846a4c55b7d fc74c4a296554866969b05aef75252af - - default default] rbd image a25af5a3-096f-4363-842e-d960c22eb16b_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:08:26 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/2989555521' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:08:26 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/2370305360' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:08:26 compute-1 nova_compute[225855]: 2026-01-20 15:08:26.459 225859 DEBUG nova.objects.instance [None req-cbffac24-6e30-414d-95cc-0bbdcd9d1b37 27658864f96d453586dd0846a4c55b7d fc74c4a296554866969b05aef75252af - - default default] Lazy-loading 'ec2_ids' on Instance uuid a25af5a3-096f-4363-842e-d960c22eb16b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:08:26 compute-1 nova_compute[225855]: 2026-01-20 15:08:26.498 225859 DEBUG nova.objects.instance [None req-cbffac24-6e30-414d-95cc-0bbdcd9d1b37 27658864f96d453586dd0846a4c55b7d fc74c4a296554866969b05aef75252af - - default default] Lazy-loading 'keypairs' on Instance uuid a25af5a3-096f-4363-842e-d960c22eb16b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:08:26 compute-1 nova_compute[225855]: 2026-01-20 15:08:26.524 225859 DEBUG nova.compute.manager [req-f64749e5-2344-4374-bf6a-e7951e9c5a25 req-cdf39710-058d-43c0-9806-447cc0e8a76d 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: a25af5a3-096f-4363-842e-d960c22eb16b] Received event network-vif-plugged-6b7cb043-d1f4-4c2b-8173-1e3e2a664767 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:08:26 compute-1 nova_compute[225855]: 2026-01-20 15:08:26.525 225859 DEBUG oslo_concurrency.lockutils [req-f64749e5-2344-4374-bf6a-e7951e9c5a25 req-cdf39710-058d-43c0-9806-447cc0e8a76d 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "a25af5a3-096f-4363-842e-d960c22eb16b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:08:26 compute-1 nova_compute[225855]: 2026-01-20 15:08:26.525 225859 DEBUG oslo_concurrency.lockutils [req-f64749e5-2344-4374-bf6a-e7951e9c5a25 req-cdf39710-058d-43c0-9806-447cc0e8a76d 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "a25af5a3-096f-4363-842e-d960c22eb16b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:08:26 compute-1 nova_compute[225855]: 2026-01-20 15:08:26.526 225859 DEBUG oslo_concurrency.lockutils [req-f64749e5-2344-4374-bf6a-e7951e9c5a25 req-cdf39710-058d-43c0-9806-447cc0e8a76d 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "a25af5a3-096f-4363-842e-d960c22eb16b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:08:26 compute-1 nova_compute[225855]: 2026-01-20 15:08:26.526 225859 DEBUG nova.compute.manager [req-f64749e5-2344-4374-bf6a-e7951e9c5a25 req-cdf39710-058d-43c0-9806-447cc0e8a76d 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: a25af5a3-096f-4363-842e-d960c22eb16b] No waiting events found dispatching network-vif-plugged-6b7cb043-d1f4-4c2b-8173-1e3e2a664767 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:08:26 compute-1 nova_compute[225855]: 2026-01-20 15:08:26.526 225859 WARNING nova.compute.manager [req-f64749e5-2344-4374-bf6a-e7951e9c5a25 req-cdf39710-058d-43c0-9806-447cc0e8a76d 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: a25af5a3-096f-4363-842e-d960c22eb16b] Received unexpected event network-vif-plugged-6b7cb043-d1f4-4c2b-8173-1e3e2a664767 for instance with vm_state active and task_state rescuing.
Jan 20 15:08:26 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:08:26 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:08:26 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:08:26.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:08:26 compute-1 nova_compute[225855]: 2026-01-20 15:08:26.884 225859 INFO nova.virt.libvirt.driver [None req-cbffac24-6e30-414d-95cc-0bbdcd9d1b37 27658864f96d453586dd0846a4c55b7d fc74c4a296554866969b05aef75252af - - default default] [instance: a25af5a3-096f-4363-842e-d960c22eb16b] Creating config drive at /var/lib/nova/instances/a25af5a3-096f-4363-842e-d960c22eb16b/disk.config.rescue
Jan 20 15:08:26 compute-1 nova_compute[225855]: 2026-01-20 15:08:26.889 225859 DEBUG oslo_concurrency.processutils [None req-cbffac24-6e30-414d-95cc-0bbdcd9d1b37 27658864f96d453586dd0846a4c55b7d fc74c4a296554866969b05aef75252af - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a25af5a3-096f-4363-842e-d960c22eb16b/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp1bo1ol_4 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:08:27 compute-1 nova_compute[225855]: 2026-01-20 15:08:27.019 225859 DEBUG oslo_concurrency.processutils [None req-cbffac24-6e30-414d-95cc-0bbdcd9d1b37 27658864f96d453586dd0846a4c55b7d fc74c4a296554866969b05aef75252af - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a25af5a3-096f-4363-842e-d960c22eb16b/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp1bo1ol_4" returned: 0 in 0.130s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:08:27 compute-1 nova_compute[225855]: 2026-01-20 15:08:27.046 225859 DEBUG nova.storage.rbd_utils [None req-cbffac24-6e30-414d-95cc-0bbdcd9d1b37 27658864f96d453586dd0846a4c55b7d fc74c4a296554866969b05aef75252af - - default default] rbd image a25af5a3-096f-4363-842e-d960c22eb16b_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:08:27 compute-1 nova_compute[225855]: 2026-01-20 15:08:27.050 225859 DEBUG oslo_concurrency.processutils [None req-cbffac24-6e30-414d-95cc-0bbdcd9d1b37 27658864f96d453586dd0846a4c55b7d fc74c4a296554866969b05aef75252af - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/a25af5a3-096f-4363-842e-d960c22eb16b/disk.config.rescue a25af5a3-096f-4363-842e-d960c22eb16b_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:08:27 compute-1 nova_compute[225855]: 2026-01-20 15:08:27.222 225859 DEBUG oslo_concurrency.processutils [None req-cbffac24-6e30-414d-95cc-0bbdcd9d1b37 27658864f96d453586dd0846a4c55b7d fc74c4a296554866969b05aef75252af - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/a25af5a3-096f-4363-842e-d960c22eb16b/disk.config.rescue a25af5a3-096f-4363-842e-d960c22eb16b_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.171s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:08:27 compute-1 nova_compute[225855]: 2026-01-20 15:08:27.223 225859 INFO nova.virt.libvirt.driver [None req-cbffac24-6e30-414d-95cc-0bbdcd9d1b37 27658864f96d453586dd0846a4c55b7d fc74c4a296554866969b05aef75252af - - default default] [instance: a25af5a3-096f-4363-842e-d960c22eb16b] Deleting local config drive /var/lib/nova/instances/a25af5a3-096f-4363-842e-d960c22eb16b/disk.config.rescue because it was imported into RBD.
Jan 20 15:08:27 compute-1 kernel: tap6b7cb043-d1: entered promiscuous mode
Jan 20 15:08:27 compute-1 NetworkManager[49104]: <info>  [1768921707.2933] manager: (tap6b7cb043-d1): new Tun device (/org/freedesktop/NetworkManager/Devices/304)
Jan 20 15:08:27 compute-1 ovn_controller[130490]: 2026-01-20T15:08:27Z|00732|binding|INFO|Claiming lport 6b7cb043-d1f4-4c2b-8173-1e3e2a664767 for this chassis.
Jan 20 15:08:27 compute-1 ovn_controller[130490]: 2026-01-20T15:08:27Z|00733|binding|INFO|6b7cb043-d1f4-4c2b-8173-1e3e2a664767: Claiming fa:16:3e:bf:9e:90 10.100.0.6
Jan 20 15:08:27 compute-1 nova_compute[225855]: 2026-01-20 15:08:27.294 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:08:27 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:08:27.306 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bf:9e:90 10.100.0.6'], port_security=['fa:16:3e:bf:9e:90 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'a25af5a3-096f-4363-842e-d960c22eb16b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3967ae21-1590-4685-8881-8bd1bcf25258', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fc74c4a296554866969b05aef75252af', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'd56d9c6d-4bb5-4a73-ab07-9e0ee1fd3b93', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=89ced88f-b1ed-4329-8a53-1931e6b0e3e9, chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], logical_port=6b7cb043-d1f4-4c2b-8173-1e3e2a664767) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:08:27 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:08:27.307 140354 INFO neutron.agent.ovn.metadata.agent [-] Port 6b7cb043-d1f4-4c2b-8173-1e3e2a664767 in datapath 3967ae21-1590-4685-8881-8bd1bcf25258 bound to our chassis
Jan 20 15:08:27 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:08:27.309 140354 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3967ae21-1590-4685-8881-8bd1bcf25258
Jan 20 15:08:27 compute-1 nova_compute[225855]: 2026-01-20 15:08:27.311 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:08:27 compute-1 ovn_controller[130490]: 2026-01-20T15:08:27Z|00734|binding|INFO|Setting lport 6b7cb043-d1f4-4c2b-8173-1e3e2a664767 ovn-installed in OVS
Jan 20 15:08:27 compute-1 ovn_controller[130490]: 2026-01-20T15:08:27Z|00735|binding|INFO|Setting lport 6b7cb043-d1f4-4c2b-8173-1e3e2a664767 up in Southbound
Jan 20 15:08:27 compute-1 nova_compute[225855]: 2026-01-20 15:08:27.314 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:08:27 compute-1 nova_compute[225855]: 2026-01-20 15:08:27.316 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:08:27 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:08:27.323 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[636a2d9a-3c64-4fd8-92fb-8fd3c00dabd2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:08:27 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:08:27.323 140354 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap3967ae21-11 in ovnmeta-3967ae21-1590-4685-8881-8bd1bcf25258 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 20 15:08:27 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:08:27.326 229707 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap3967ae21-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 20 15:08:27 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:08:27.326 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[9eaeff23-291f-48e8-ad3f-5f6ef8b754e1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:08:27 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:08:27.327 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[bf328fc6-5972-49cd-a03f-09af5c767bab]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:08:27 compute-1 systemd-machined[194361]: New machine qemu-86-instance-000000a8.
Jan 20 15:08:27 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:08:27.338 140466 DEBUG oslo.privsep.daemon [-] privsep: reply[5a9e8c40-72c4-4418-89ca-310a3599ec67]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:08:27 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:08:27.350 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[d3693682-0966-4530-8616-0fe674d698b8]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:08:27 compute-1 systemd[1]: Started Virtual Machine qemu-86-instance-000000a8.
Jan 20 15:08:27 compute-1 systemd-udevd[294755]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 15:08:27 compute-1 NetworkManager[49104]: <info>  [1768921707.3795] device (tap6b7cb043-d1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 20 15:08:27 compute-1 NetworkManager[49104]: <info>  [1768921707.3801] device (tap6b7cb043-d1): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 20 15:08:27 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:08:27.384 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[6d01f1ce-100c-40e4-a212-e9c28dbbcb7a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:08:27 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:08:27.388 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[9688413f-fbdd-4a1a-8b35-0a98fa1f668b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:08:27 compute-1 systemd-udevd[294758]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 15:08:27 compute-1 NetworkManager[49104]: <info>  [1768921707.3913] manager: (tap3967ae21-10): new Veth device (/org/freedesktop/NetworkManager/Devices/305)
Jan 20 15:08:27 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:08:27.418 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[6ef7522e-3659-4ff1-9777-c948e36e5f1c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:08:27 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:08:27.421 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[a7ea9cee-e120-4570-b03d-b801b91874d7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:08:27 compute-1 NetworkManager[49104]: <info>  [1768921707.4452] device (tap3967ae21-10): carrier: link connected
Jan 20 15:08:27 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:08:27.450 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[3adbe640-5899-4572-9ca9-833af711cd57]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:08:27 compute-1 ceph-mon[81775]: pgmap v2546: 321 pgs: 321 active+clean; 445 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 6.2 MiB/s rd, 10 MiB/s wr, 255 op/s
Jan 20 15:08:27 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:08:27.469 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[7024665a-e84e-42b2-8456-bc911d2039d8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3967ae21-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b5:ce:9d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 209], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 674637, 'reachable_time': 39267, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 294784, 'error': None, 'target': 'ovnmeta-3967ae21-1590-4685-8881-8bd1bcf25258', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:08:27 compute-1 nova_compute[225855]: 2026-01-20 15:08:27.484 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:08:27 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:08:27.486 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[de93a492-c398-4b60-a884-d5a6dae16397]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feb5:ce9d'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 674637, 'tstamp': 674637}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 294785, 'error': None, 'target': 'ovnmeta-3967ae21-1590-4685-8881-8bd1bcf25258', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:08:27 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:08:27.505 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[0b019050-3ddf-4bf8-b7f0-bfd4b905d671]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3967ae21-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b5:ce:9d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 209], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 674637, 'reachable_time': 39267, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 294786, 'error': None, 'target': 'ovnmeta-3967ae21-1590-4685-8881-8bd1bcf25258', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:08:27 compute-1 nova_compute[225855]: 2026-01-20 15:08:27.514 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:08:27 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:08:27.532 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[f90ecff0-75a9-46c9-b65b-c0863b4ba005]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:08:27 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:08:27.596 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[4a602c3b-a586-4f3b-bd65-46f4c3c12011]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:08:27 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:08:27.597 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3967ae21-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:08:27 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:08:27.597 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 15:08:27 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:08:27.598 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3967ae21-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:08:27 compute-1 nova_compute[225855]: 2026-01-20 15:08:27.599 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:08:27 compute-1 kernel: tap3967ae21-10: entered promiscuous mode
Jan 20 15:08:27 compute-1 nova_compute[225855]: 2026-01-20 15:08:27.601 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:08:27 compute-1 NetworkManager[49104]: <info>  [1768921707.6018] manager: (tap3967ae21-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/306)
Jan 20 15:08:27 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:08:27.602 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3967ae21-10, col_values=(('external_ids', {'iface-id': 'b19d6956-fc8d-42c8-af98-d0f2fe9fe3a3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:08:27 compute-1 nova_compute[225855]: 2026-01-20 15:08:27.603 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:08:27 compute-1 nova_compute[225855]: 2026-01-20 15:08:27.605 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:08:27 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:08:27.605 140354 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/3967ae21-1590-4685-8881-8bd1bcf25258.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/3967ae21-1590-4685-8881-8bd1bcf25258.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 20 15:08:27 compute-1 ovn_controller[130490]: 2026-01-20T15:08:27Z|00736|binding|INFO|Releasing lport b19d6956-fc8d-42c8-af98-d0f2fe9fe3a3 from this chassis (sb_readonly=0)
Jan 20 15:08:27 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:08:27.606 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[0d9c1ed5-1466-4397-bdaa-935e623a21e5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:08:27 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:08:27.607 140354 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 20 15:08:27 compute-1 ovn_metadata_agent[140349]: global
Jan 20 15:08:27 compute-1 ovn_metadata_agent[140349]:     log         /dev/log local0 debug
Jan 20 15:08:27 compute-1 ovn_metadata_agent[140349]:     log-tag     haproxy-metadata-proxy-3967ae21-1590-4685-8881-8bd1bcf25258
Jan 20 15:08:27 compute-1 ovn_metadata_agent[140349]:     user        root
Jan 20 15:08:27 compute-1 ovn_metadata_agent[140349]:     group       root
Jan 20 15:08:27 compute-1 ovn_metadata_agent[140349]:     maxconn     1024
Jan 20 15:08:27 compute-1 ovn_metadata_agent[140349]:     pidfile     /var/lib/neutron/external/pids/3967ae21-1590-4685-8881-8bd1bcf25258.pid.haproxy
Jan 20 15:08:27 compute-1 ovn_metadata_agent[140349]:     daemon
Jan 20 15:08:27 compute-1 ovn_metadata_agent[140349]: 
Jan 20 15:08:27 compute-1 ovn_metadata_agent[140349]: defaults
Jan 20 15:08:27 compute-1 ovn_metadata_agent[140349]:     log global
Jan 20 15:08:27 compute-1 ovn_metadata_agent[140349]:     mode http
Jan 20 15:08:27 compute-1 ovn_metadata_agent[140349]:     option httplog
Jan 20 15:08:27 compute-1 ovn_metadata_agent[140349]:     option dontlognull
Jan 20 15:08:27 compute-1 ovn_metadata_agent[140349]:     option http-server-close
Jan 20 15:08:27 compute-1 ovn_metadata_agent[140349]:     option forwardfor
Jan 20 15:08:27 compute-1 ovn_metadata_agent[140349]:     retries                 3
Jan 20 15:08:27 compute-1 ovn_metadata_agent[140349]:     timeout http-request    30s
Jan 20 15:08:27 compute-1 ovn_metadata_agent[140349]:     timeout connect         30s
Jan 20 15:08:27 compute-1 ovn_metadata_agent[140349]:     timeout client          32s
Jan 20 15:08:27 compute-1 ovn_metadata_agent[140349]:     timeout server          32s
Jan 20 15:08:27 compute-1 ovn_metadata_agent[140349]:     timeout http-keep-alive 30s
Jan 20 15:08:27 compute-1 ovn_metadata_agent[140349]: 
Jan 20 15:08:27 compute-1 ovn_metadata_agent[140349]: 
Jan 20 15:08:27 compute-1 ovn_metadata_agent[140349]: listen listener
Jan 20 15:08:27 compute-1 ovn_metadata_agent[140349]:     bind 169.254.169.254:80
Jan 20 15:08:27 compute-1 ovn_metadata_agent[140349]:     server metadata /var/lib/neutron/metadata_proxy
Jan 20 15:08:27 compute-1 ovn_metadata_agent[140349]:     http-request add-header X-OVN-Network-ID 3967ae21-1590-4685-8881-8bd1bcf25258
Jan 20 15:08:27 compute-1 ovn_metadata_agent[140349]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 20 15:08:27 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:08:27.607 140354 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-3967ae21-1590-4685-8881-8bd1bcf25258', 'env', 'PROCESS_TAG=haproxy-3967ae21-1590-4685-8881-8bd1bcf25258', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/3967ae21-1590-4685-8881-8bd1bcf25258.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 20 15:08:27 compute-1 nova_compute[225855]: 2026-01-20 15:08:27.621 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:08:27 compute-1 podman[294819]: 2026-01-20 15:08:27.94658885 +0000 UTC m=+0.042929939 container create d749b2135e8b112af5d3a20cbeb705a74120f88ef0dbe175ef347b92349abb8e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3967ae21-1590-4685-8881-8bd1bcf25258, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 20 15:08:27 compute-1 systemd[1]: Started libpod-conmon-d749b2135e8b112af5d3a20cbeb705a74120f88ef0dbe175ef347b92349abb8e.scope.
Jan 20 15:08:28 compute-1 podman[294819]: 2026-01-20 15:08:27.925596504 +0000 UTC m=+0.021937613 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 20 15:08:28 compute-1 systemd[1]: Started libcrun container.
Jan 20 15:08:28 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a0a29c0cfe0897e5a9faa037b73161c66d2ed59c5ab8259820c45f49ae9bf20/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 20 15:08:28 compute-1 podman[294819]: 2026-01-20 15:08:28.041195974 +0000 UTC m=+0.137537083 container init d749b2135e8b112af5d3a20cbeb705a74120f88ef0dbe175ef347b92349abb8e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3967ae21-1590-4685-8881-8bd1bcf25258, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 20 15:08:28 compute-1 podman[294819]: 2026-01-20 15:08:28.046287259 +0000 UTC m=+0.142628348 container start d749b2135e8b112af5d3a20cbeb705a74120f88ef0dbe175ef347b92349abb8e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3967ae21-1590-4685-8881-8bd1bcf25258, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 20 15:08:28 compute-1 neutron-haproxy-ovnmeta-3967ae21-1590-4685-8881-8bd1bcf25258[294842]: [NOTICE]   (294872) : New worker (294884) forked
Jan 20 15:08:28 compute-1 neutron-haproxy-ovnmeta-3967ae21-1590-4685-8881-8bd1bcf25258[294842]: [NOTICE]   (294872) : Loading success.
Jan 20 15:08:28 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e376 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:08:28 compute-1 nova_compute[225855]: 2026-01-20 15:08:28.220 225859 DEBUG nova.virt.libvirt.host [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Removed pending event for a25af5a3-096f-4363-842e-d960c22eb16b due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Jan 20 15:08:28 compute-1 nova_compute[225855]: 2026-01-20 15:08:28.221 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768921708.220054, a25af5a3-096f-4363-842e-d960c22eb16b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:08:28 compute-1 nova_compute[225855]: 2026-01-20 15:08:28.222 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: a25af5a3-096f-4363-842e-d960c22eb16b] VM Resumed (Lifecycle Event)
Jan 20 15:08:28 compute-1 nova_compute[225855]: 2026-01-20 15:08:28.228 225859 DEBUG nova.compute.manager [None req-cbffac24-6e30-414d-95cc-0bbdcd9d1b37 27658864f96d453586dd0846a4c55b7d fc74c4a296554866969b05aef75252af - - default default] [instance: a25af5a3-096f-4363-842e-d960c22eb16b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:08:28 compute-1 nova_compute[225855]: 2026-01-20 15:08:28.277 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: a25af5a3-096f-4363-842e-d960c22eb16b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:08:28 compute-1 nova_compute[225855]: 2026-01-20 15:08:28.281 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: a25af5a3-096f-4363-842e-d960c22eb16b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 15:08:28 compute-1 nova_compute[225855]: 2026-01-20 15:08:28.314 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768921708.2202642, a25af5a3-096f-4363-842e-d960c22eb16b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:08:28 compute-1 nova_compute[225855]: 2026-01-20 15:08:28.314 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: a25af5a3-096f-4363-842e-d960c22eb16b] VM Started (Lifecycle Event)
Jan 20 15:08:28 compute-1 nova_compute[225855]: 2026-01-20 15:08:28.335 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: a25af5a3-096f-4363-842e-d960c22eb16b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:08:28 compute-1 nova_compute[225855]: 2026-01-20 15:08:28.338 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: a25af5a3-096f-4363-842e-d960c22eb16b] Synchronizing instance power state after lifecycle event "Started"; current vm_state: rescued, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 15:08:28 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:08:28 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:08:28 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:08:28.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:08:28 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/2684129964' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:08:28 compute-1 nova_compute[225855]: 2026-01-20 15:08:28.683 225859 DEBUG nova.compute.manager [req-af1bd6c5-6428-42f6-bdeb-491e8ece872c req-8c49eeeb-a2cb-4a88-95f3-640fb24c01cb 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: a25af5a3-096f-4363-842e-d960c22eb16b] Received event network-vif-plugged-6b7cb043-d1f4-4c2b-8173-1e3e2a664767 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:08:28 compute-1 nova_compute[225855]: 2026-01-20 15:08:28.683 225859 DEBUG oslo_concurrency.lockutils [req-af1bd6c5-6428-42f6-bdeb-491e8ece872c req-8c49eeeb-a2cb-4a88-95f3-640fb24c01cb 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "a25af5a3-096f-4363-842e-d960c22eb16b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:08:28 compute-1 nova_compute[225855]: 2026-01-20 15:08:28.684 225859 DEBUG oslo_concurrency.lockutils [req-af1bd6c5-6428-42f6-bdeb-491e8ece872c req-8c49eeeb-a2cb-4a88-95f3-640fb24c01cb 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "a25af5a3-096f-4363-842e-d960c22eb16b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:08:28 compute-1 nova_compute[225855]: 2026-01-20 15:08:28.684 225859 DEBUG oslo_concurrency.lockutils [req-af1bd6c5-6428-42f6-bdeb-491e8ece872c req-8c49eeeb-a2cb-4a88-95f3-640fb24c01cb 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "a25af5a3-096f-4363-842e-d960c22eb16b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:08:28 compute-1 nova_compute[225855]: 2026-01-20 15:08:28.684 225859 DEBUG nova.compute.manager [req-af1bd6c5-6428-42f6-bdeb-491e8ece872c req-8c49eeeb-a2cb-4a88-95f3-640fb24c01cb 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: a25af5a3-096f-4363-842e-d960c22eb16b] No waiting events found dispatching network-vif-plugged-6b7cb043-d1f4-4c2b-8173-1e3e2a664767 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:08:28 compute-1 nova_compute[225855]: 2026-01-20 15:08:28.684 225859 WARNING nova.compute.manager [req-af1bd6c5-6428-42f6-bdeb-491e8ece872c req-8c49eeeb-a2cb-4a88-95f3-640fb24c01cb 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: a25af5a3-096f-4363-842e-d960c22eb16b] Received unexpected event network-vif-plugged-6b7cb043-d1f4-4c2b-8173-1e3e2a664767 for instance with vm_state rescued and task_state None.
Jan 20 15:08:28 compute-1 nova_compute[225855]: 2026-01-20 15:08:28.685 225859 DEBUG nova.compute.manager [req-af1bd6c5-6428-42f6-bdeb-491e8ece872c req-8c49eeeb-a2cb-4a88-95f3-640fb24c01cb 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: a25af5a3-096f-4363-842e-d960c22eb16b] Received event network-vif-plugged-6b7cb043-d1f4-4c2b-8173-1e3e2a664767 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:08:28 compute-1 nova_compute[225855]: 2026-01-20 15:08:28.685 225859 DEBUG oslo_concurrency.lockutils [req-af1bd6c5-6428-42f6-bdeb-491e8ece872c req-8c49eeeb-a2cb-4a88-95f3-640fb24c01cb 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "a25af5a3-096f-4363-842e-d960c22eb16b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:08:28 compute-1 nova_compute[225855]: 2026-01-20 15:08:28.685 225859 DEBUG oslo_concurrency.lockutils [req-af1bd6c5-6428-42f6-bdeb-491e8ece872c req-8c49eeeb-a2cb-4a88-95f3-640fb24c01cb 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "a25af5a3-096f-4363-842e-d960c22eb16b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:08:28 compute-1 nova_compute[225855]: 2026-01-20 15:08:28.685 225859 DEBUG oslo_concurrency.lockutils [req-af1bd6c5-6428-42f6-bdeb-491e8ece872c req-8c49eeeb-a2cb-4a88-95f3-640fb24c01cb 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "a25af5a3-096f-4363-842e-d960c22eb16b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:08:28 compute-1 nova_compute[225855]: 2026-01-20 15:08:28.685 225859 DEBUG nova.compute.manager [req-af1bd6c5-6428-42f6-bdeb-491e8ece872c req-8c49eeeb-a2cb-4a88-95f3-640fb24c01cb 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: a25af5a3-096f-4363-842e-d960c22eb16b] No waiting events found dispatching network-vif-plugged-6b7cb043-d1f4-4c2b-8173-1e3e2a664767 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:08:28 compute-1 nova_compute[225855]: 2026-01-20 15:08:28.686 225859 WARNING nova.compute.manager [req-af1bd6c5-6428-42f6-bdeb-491e8ece872c req-8c49eeeb-a2cb-4a88-95f3-640fb24c01cb 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: a25af5a3-096f-4363-842e-d960c22eb16b] Received unexpected event network-vif-plugged-6b7cb043-d1f4-4c2b-8173-1e3e2a664767 for instance with vm_state rescued and task_state None.
Jan 20 15:08:28 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:08:28 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:08:28 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:08:28.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:08:29 compute-1 ceph-mon[81775]: pgmap v2547: 321 pgs: 321 active+clean; 445 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 5.2 MiB/s wr, 92 op/s
Jan 20 15:08:29 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/4189146071' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:08:30 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:08:30 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:08:30 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:08:30.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:08:30 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e377 e377: 3 total, 3 up, 3 in
Jan 20 15:08:30 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:08:30 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:08:30 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:08:30.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:08:31 compute-1 ceph-mon[81775]: pgmap v2548: 321 pgs: 321 active+clean; 464 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 5.2 MiB/s wr, 116 op/s
Jan 20 15:08:31 compute-1 ceph-mon[81775]: osdmap e377: 3 total, 3 up, 3 in
Jan 20 15:08:32 compute-1 podman[294912]: 2026-01-20 15:08:32.084927509 +0000 UTC m=+0.117567667 container health_status 72be95c12e041eb2cd1a16ad70dfa1391d9f03a5f3180a614ead9459dcfcd46f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 20 15:08:32 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:08:32 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:08:32 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:08:32.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:08:32 compute-1 nova_compute[225855]: 2026-01-20 15:08:32.487 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:08:32 compute-1 nova_compute[225855]: 2026-01-20 15:08:32.517 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:08:32 compute-1 ceph-mon[81775]: pgmap v2550: 321 pgs: 321 active+clean; 477 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 4.9 MiB/s wr, 135 op/s
Jan 20 15:08:32 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:08:32 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:08:32 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:08:32.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:08:33 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e377 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:08:33 compute-1 nova_compute[225855]: 2026-01-20 15:08:33.618 225859 DEBUG oslo_concurrency.lockutils [None req-e21a5016-04f7-439f-bc34-054eb2c5a89e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Acquiring lock "b4c55640-85f9-4d75-a4df-6ee77b21ca73" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:08:33 compute-1 nova_compute[225855]: 2026-01-20 15:08:33.619 225859 DEBUG oslo_concurrency.lockutils [None req-e21a5016-04f7-439f-bc34-054eb2c5a89e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Lock "b4c55640-85f9-4d75-a4df-6ee77b21ca73" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:08:33 compute-1 nova_compute[225855]: 2026-01-20 15:08:33.637 225859 DEBUG nova.compute.manager [None req-e21a5016-04f7-439f-bc34-054eb2c5a89e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] [instance: b4c55640-85f9-4d75-a4df-6ee77b21ca73] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 20 15:08:33 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/243867422' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:08:33 compute-1 nova_compute[225855]: 2026-01-20 15:08:33.726 225859 DEBUG oslo_concurrency.lockutils [None req-e21a5016-04f7-439f-bc34-054eb2c5a89e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:08:33 compute-1 nova_compute[225855]: 2026-01-20 15:08:33.727 225859 DEBUG oslo_concurrency.lockutils [None req-e21a5016-04f7-439f-bc34-054eb2c5a89e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:08:33 compute-1 nova_compute[225855]: 2026-01-20 15:08:33.734 225859 DEBUG nova.virt.hardware [None req-e21a5016-04f7-439f-bc34-054eb2c5a89e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 20 15:08:33 compute-1 nova_compute[225855]: 2026-01-20 15:08:33.735 225859 INFO nova.compute.claims [None req-e21a5016-04f7-439f-bc34-054eb2c5a89e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] [instance: b4c55640-85f9-4d75-a4df-6ee77b21ca73] Claim successful on node compute-1.ctlplane.example.com
Jan 20 15:08:33 compute-1 nova_compute[225855]: 2026-01-20 15:08:33.886 225859 DEBUG oslo_concurrency.processutils [None req-e21a5016-04f7-439f-bc34-054eb2c5a89e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:08:34 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:08:34 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/982314999' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:08:34 compute-1 nova_compute[225855]: 2026-01-20 15:08:34.315 225859 DEBUG oslo_concurrency.processutils [None req-e21a5016-04f7-439f-bc34-054eb2c5a89e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:08:34 compute-1 nova_compute[225855]: 2026-01-20 15:08:34.321 225859 DEBUG nova.compute.provider_tree [None req-e21a5016-04f7-439f-bc34-054eb2c5a89e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:08:34 compute-1 nova_compute[225855]: 2026-01-20 15:08:34.343 225859 DEBUG nova.scheduler.client.report [None req-e21a5016-04f7-439f-bc34-054eb2c5a89e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:08:34 compute-1 nova_compute[225855]: 2026-01-20 15:08:34.379 225859 DEBUG oslo_concurrency.lockutils [None req-e21a5016-04f7-439f-bc34-054eb2c5a89e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.652s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:08:34 compute-1 nova_compute[225855]: 2026-01-20 15:08:34.380 225859 DEBUG nova.compute.manager [None req-e21a5016-04f7-439f-bc34-054eb2c5a89e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] [instance: b4c55640-85f9-4d75-a4df-6ee77b21ca73] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 20 15:08:34 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:08:34 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:08:34 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:08:34.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:08:34 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:08:34 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:08:34 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:08:34.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:08:34 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/982314999' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:08:34 compute-1 ceph-mon[81775]: pgmap v2551: 321 pgs: 321 active+clean; 485 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 4.8 MiB/s wr, 157 op/s
Jan 20 15:08:34 compute-1 nova_compute[225855]: 2026-01-20 15:08:34.943 225859 DEBUG nova.compute.manager [None req-e21a5016-04f7-439f-bc34-054eb2c5a89e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] [instance: b4c55640-85f9-4d75-a4df-6ee77b21ca73] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 20 15:08:34 compute-1 nova_compute[225855]: 2026-01-20 15:08:34.944 225859 DEBUG nova.network.neutron [None req-e21a5016-04f7-439f-bc34-054eb2c5a89e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] [instance: b4c55640-85f9-4d75-a4df-6ee77b21ca73] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 20 15:08:34 compute-1 nova_compute[225855]: 2026-01-20 15:08:34.972 225859 INFO nova.virt.libvirt.driver [None req-e21a5016-04f7-439f-bc34-054eb2c5a89e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] [instance: b4c55640-85f9-4d75-a4df-6ee77b21ca73] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 20 15:08:35 compute-1 nova_compute[225855]: 2026-01-20 15:08:34.999 225859 DEBUG nova.compute.manager [None req-e21a5016-04f7-439f-bc34-054eb2c5a89e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] [instance: b4c55640-85f9-4d75-a4df-6ee77b21ca73] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 20 15:08:35 compute-1 nova_compute[225855]: 2026-01-20 15:08:35.092 225859 DEBUG nova.compute.manager [None req-e21a5016-04f7-439f-bc34-054eb2c5a89e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] [instance: b4c55640-85f9-4d75-a4df-6ee77b21ca73] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 20 15:08:35 compute-1 nova_compute[225855]: 2026-01-20 15:08:35.095 225859 DEBUG nova.virt.libvirt.driver [None req-e21a5016-04f7-439f-bc34-054eb2c5a89e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] [instance: b4c55640-85f9-4d75-a4df-6ee77b21ca73] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 20 15:08:35 compute-1 nova_compute[225855]: 2026-01-20 15:08:35.095 225859 INFO nova.virt.libvirt.driver [None req-e21a5016-04f7-439f-bc34-054eb2c5a89e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] [instance: b4c55640-85f9-4d75-a4df-6ee77b21ca73] Creating image(s)
Jan 20 15:08:35 compute-1 nova_compute[225855]: 2026-01-20 15:08:35.125 225859 DEBUG nova.storage.rbd_utils [None req-e21a5016-04f7-439f-bc34-054eb2c5a89e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] rbd image b4c55640-85f9-4d75-a4df-6ee77b21ca73_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:08:35 compute-1 nova_compute[225855]: 2026-01-20 15:08:35.159 225859 DEBUG nova.storage.rbd_utils [None req-e21a5016-04f7-439f-bc34-054eb2c5a89e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] rbd image b4c55640-85f9-4d75-a4df-6ee77b21ca73_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:08:35 compute-1 nova_compute[225855]: 2026-01-20 15:08:35.195 225859 DEBUG nova.storage.rbd_utils [None req-e21a5016-04f7-439f-bc34-054eb2c5a89e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] rbd image b4c55640-85f9-4d75-a4df-6ee77b21ca73_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:08:35 compute-1 nova_compute[225855]: 2026-01-20 15:08:35.202 225859 DEBUG oslo_concurrency.lockutils [None req-e21a5016-04f7-439f-bc34-054eb2c5a89e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Acquiring lock "a522636f3423dd1eea3b834dfd08917146e09c47" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:08:35 compute-1 nova_compute[225855]: 2026-01-20 15:08:35.203 225859 DEBUG oslo_concurrency.lockutils [None req-e21a5016-04f7-439f-bc34-054eb2c5a89e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Lock "a522636f3423dd1eea3b834dfd08917146e09c47" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:08:35 compute-1 nova_compute[225855]: 2026-01-20 15:08:35.219 225859 DEBUG nova.policy [None req-e21a5016-04f7-439f-bc34-054eb2c5a89e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'c98bd3f0904e48efa524d598bcad85e9', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '5b43342be22543f79d4a56e26c6d0c96', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 20 15:08:35 compute-1 nova_compute[225855]: 2026-01-20 15:08:35.514 225859 DEBUG nova.virt.libvirt.imagebackend [None req-e21a5016-04f7-439f-bc34-054eb2c5a89e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Image locations are: [{'url': 'rbd://e399cf45-e6b6-5393-99f1-75c601d3f188/images/5b64c953-6df3-45a3-ae28-e419ba117bb2/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://e399cf45-e6b6-5393-99f1-75c601d3f188/images/5b64c953-6df3-45a3-ae28-e419ba117bb2/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Jan 20 15:08:36 compute-1 nova_compute[225855]: 2026-01-20 15:08:36.109 225859 DEBUG nova.network.neutron [None req-e21a5016-04f7-439f-bc34-054eb2c5a89e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] [instance: b4c55640-85f9-4d75-a4df-6ee77b21ca73] Successfully created port: c6bf5189-ce8b-4ff1-9eba-e3d4195a6cbf _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 20 15:08:36 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:08:36 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:08:36 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:08:36.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:08:36 compute-1 nova_compute[225855]: 2026-01-20 15:08:36.623 225859 DEBUG oslo_concurrency.processutils [None req-e21a5016-04f7-439f-bc34-054eb2c5a89e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a522636f3423dd1eea3b834dfd08917146e09c47.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:08:36 compute-1 nova_compute[225855]: 2026-01-20 15:08:36.705 225859 DEBUG oslo_concurrency.processutils [None req-e21a5016-04f7-439f-bc34-054eb2c5a89e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a522636f3423dd1eea3b834dfd08917146e09c47.part --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:08:36 compute-1 nova_compute[225855]: 2026-01-20 15:08:36.707 225859 DEBUG nova.virt.images [None req-e21a5016-04f7-439f-bc34-054eb2c5a89e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] 5b64c953-6df3-45a3-ae28-e419ba117bb2 was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242
Jan 20 15:08:36 compute-1 nova_compute[225855]: 2026-01-20 15:08:36.708 225859 DEBUG nova.privsep.utils [None req-e21a5016-04f7-439f-bc34-054eb2c5a89e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Jan 20 15:08:36 compute-1 nova_compute[225855]: 2026-01-20 15:08:36.709 225859 DEBUG oslo_concurrency.processutils [None req-e21a5016-04f7-439f-bc34-054eb2c5a89e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/a522636f3423dd1eea3b834dfd08917146e09c47.part /var/lib/nova/instances/_base/a522636f3423dd1eea3b834dfd08917146e09c47.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:08:36 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:08:36 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:08:36 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:08:36.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:08:36 compute-1 nova_compute[225855]: 2026-01-20 15:08:36.923 225859 DEBUG oslo_concurrency.processutils [None req-e21a5016-04f7-439f-bc34-054eb2c5a89e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/a522636f3423dd1eea3b834dfd08917146e09c47.part /var/lib/nova/instances/_base/a522636f3423dd1eea3b834dfd08917146e09c47.converted" returned: 0 in 0.214s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:08:36 compute-1 nova_compute[225855]: 2026-01-20 15:08:36.927 225859 DEBUG oslo_concurrency.processutils [None req-e21a5016-04f7-439f-bc34-054eb2c5a89e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a522636f3423dd1eea3b834dfd08917146e09c47.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:08:36 compute-1 nova_compute[225855]: 2026-01-20 15:08:36.993 225859 DEBUG oslo_concurrency.processutils [None req-e21a5016-04f7-439f-bc34-054eb2c5a89e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a522636f3423dd1eea3b834dfd08917146e09c47.converted --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:08:36 compute-1 nova_compute[225855]: 2026-01-20 15:08:36.997 225859 DEBUG oslo_concurrency.lockutils [None req-e21a5016-04f7-439f-bc34-054eb2c5a89e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Lock "a522636f3423dd1eea3b834dfd08917146e09c47" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 1.794s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:08:37 compute-1 nova_compute[225855]: 2026-01-20 15:08:37.030 225859 DEBUG nova.storage.rbd_utils [None req-e21a5016-04f7-439f-bc34-054eb2c5a89e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] rbd image b4c55640-85f9-4d75-a4df-6ee77b21ca73_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:08:37 compute-1 nova_compute[225855]: 2026-01-20 15:08:37.034 225859 DEBUG oslo_concurrency.processutils [None req-e21a5016-04f7-439f-bc34-054eb2c5a89e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a522636f3423dd1eea3b834dfd08917146e09c47 b4c55640-85f9-4d75-a4df-6ee77b21ca73_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:08:37 compute-1 nova_compute[225855]: 2026-01-20 15:08:37.116 225859 DEBUG nova.network.neutron [None req-e21a5016-04f7-439f-bc34-054eb2c5a89e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] [instance: b4c55640-85f9-4d75-a4df-6ee77b21ca73] Successfully updated port: c6bf5189-ce8b-4ff1-9eba-e3d4195a6cbf _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 20 15:08:37 compute-1 nova_compute[225855]: 2026-01-20 15:08:37.133 225859 DEBUG oslo_concurrency.lockutils [None req-e21a5016-04f7-439f-bc34-054eb2c5a89e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Acquiring lock "refresh_cache-b4c55640-85f9-4d75-a4df-6ee77b21ca73" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:08:37 compute-1 nova_compute[225855]: 2026-01-20 15:08:37.134 225859 DEBUG oslo_concurrency.lockutils [None req-e21a5016-04f7-439f-bc34-054eb2c5a89e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Acquired lock "refresh_cache-b4c55640-85f9-4d75-a4df-6ee77b21ca73" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:08:37 compute-1 nova_compute[225855]: 2026-01-20 15:08:37.134 225859 DEBUG nova.network.neutron [None req-e21a5016-04f7-439f-bc34-054eb2c5a89e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] [instance: b4c55640-85f9-4d75-a4df-6ee77b21ca73] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 15:08:37 compute-1 nova_compute[225855]: 2026-01-20 15:08:37.259 225859 DEBUG nova.compute.manager [req-c895dafb-aad6-4954-aec1-9a04e26b7a68 req-639877df-4f50-42c7-8a54-1b53b4583caf 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b4c55640-85f9-4d75-a4df-6ee77b21ca73] Received event network-changed-c6bf5189-ce8b-4ff1-9eba-e3d4195a6cbf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:08:37 compute-1 nova_compute[225855]: 2026-01-20 15:08:37.262 225859 DEBUG nova.compute.manager [req-c895dafb-aad6-4954-aec1-9a04e26b7a68 req-639877df-4f50-42c7-8a54-1b53b4583caf 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b4c55640-85f9-4d75-a4df-6ee77b21ca73] Refreshing instance network info cache due to event network-changed-c6bf5189-ce8b-4ff1-9eba-e3d4195a6cbf. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 15:08:37 compute-1 nova_compute[225855]: 2026-01-20 15:08:37.262 225859 DEBUG oslo_concurrency.lockutils [req-c895dafb-aad6-4954-aec1-9a04e26b7a68 req-639877df-4f50-42c7-8a54-1b53b4583caf 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-b4c55640-85f9-4d75-a4df-6ee77b21ca73" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:08:37 compute-1 nova_compute[225855]: 2026-01-20 15:08:37.316 225859 DEBUG oslo_concurrency.processutils [None req-e21a5016-04f7-439f-bc34-054eb2c5a89e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a522636f3423dd1eea3b834dfd08917146e09c47 b4c55640-85f9-4d75-a4df-6ee77b21ca73_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.281s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:08:37 compute-1 nova_compute[225855]: 2026-01-20 15:08:37.372 225859 DEBUG nova.storage.rbd_utils [None req-e21a5016-04f7-439f-bc34-054eb2c5a89e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] resizing rbd image b4c55640-85f9-4d75-a4df-6ee77b21ca73_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 20 15:08:37 compute-1 nova_compute[225855]: 2026-01-20 15:08:37.407 225859 DEBUG nova.network.neutron [None req-e21a5016-04f7-439f-bc34-054eb2c5a89e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] [instance: b4c55640-85f9-4d75-a4df-6ee77b21ca73] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 20 15:08:37 compute-1 nova_compute[225855]: 2026-01-20 15:08:37.470 225859 DEBUG nova.objects.instance [None req-e21a5016-04f7-439f-bc34-054eb2c5a89e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Lazy-loading 'migration_context' on Instance uuid b4c55640-85f9-4d75-a4df-6ee77b21ca73 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:08:37 compute-1 nova_compute[225855]: 2026-01-20 15:08:37.490 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:08:37 compute-1 nova_compute[225855]: 2026-01-20 15:08:37.492 225859 DEBUG nova.virt.libvirt.driver [None req-e21a5016-04f7-439f-bc34-054eb2c5a89e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] [instance: b4c55640-85f9-4d75-a4df-6ee77b21ca73] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 20 15:08:37 compute-1 nova_compute[225855]: 2026-01-20 15:08:37.492 225859 DEBUG nova.virt.libvirt.driver [None req-e21a5016-04f7-439f-bc34-054eb2c5a89e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] [instance: b4c55640-85f9-4d75-a4df-6ee77b21ca73] Ensure instance console log exists: /var/lib/nova/instances/b4c55640-85f9-4d75-a4df-6ee77b21ca73/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 20 15:08:37 compute-1 nova_compute[225855]: 2026-01-20 15:08:37.493 225859 DEBUG oslo_concurrency.lockutils [None req-e21a5016-04f7-439f-bc34-054eb2c5a89e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:08:37 compute-1 nova_compute[225855]: 2026-01-20 15:08:37.493 225859 DEBUG oslo_concurrency.lockutils [None req-e21a5016-04f7-439f-bc34-054eb2c5a89e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:08:37 compute-1 nova_compute[225855]: 2026-01-20 15:08:37.494 225859 DEBUG oslo_concurrency.lockutils [None req-e21a5016-04f7-439f-bc34-054eb2c5a89e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:08:37 compute-1 nova_compute[225855]: 2026-01-20 15:08:37.518 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:08:37 compute-1 ceph-mon[81775]: pgmap v2552: 321 pgs: 321 active+clean; 485 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 3.0 MiB/s wr, 137 op/s
Jan 20 15:08:38 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e377 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:08:38 compute-1 nova_compute[225855]: 2026-01-20 15:08:38.394 225859 DEBUG nova.network.neutron [None req-e21a5016-04f7-439f-bc34-054eb2c5a89e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] [instance: b4c55640-85f9-4d75-a4df-6ee77b21ca73] Updating instance_info_cache with network_info: [{"id": "c6bf5189-ce8b-4ff1-9eba-e3d4195a6cbf", "address": "fa:16:3e:7f:85:09", "network": {"id": "e22d6ddc-0339-4395-bc21-95081825f05b", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1496899124-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5b43342be22543f79d4a56e26c6d0c96", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc6bf5189-ce", "ovs_interfaceid": "c6bf5189-ce8b-4ff1-9eba-e3d4195a6cbf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:08:38 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:08:38 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:08:38 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:08:38.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:08:38 compute-1 sudo[295141]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:08:38 compute-1 sudo[295141]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:08:38 compute-1 nova_compute[225855]: 2026-01-20 15:08:38.419 225859 DEBUG oslo_concurrency.lockutils [None req-e21a5016-04f7-439f-bc34-054eb2c5a89e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Releasing lock "refresh_cache-b4c55640-85f9-4d75-a4df-6ee77b21ca73" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:08:38 compute-1 nova_compute[225855]: 2026-01-20 15:08:38.420 225859 DEBUG nova.compute.manager [None req-e21a5016-04f7-439f-bc34-054eb2c5a89e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] [instance: b4c55640-85f9-4d75-a4df-6ee77b21ca73] Instance network_info: |[{"id": "c6bf5189-ce8b-4ff1-9eba-e3d4195a6cbf", "address": "fa:16:3e:7f:85:09", "network": {"id": "e22d6ddc-0339-4395-bc21-95081825f05b", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1496899124-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5b43342be22543f79d4a56e26c6d0c96", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc6bf5189-ce", "ovs_interfaceid": "c6bf5189-ce8b-4ff1-9eba-e3d4195a6cbf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 20 15:08:38 compute-1 nova_compute[225855]: 2026-01-20 15:08:38.421 225859 DEBUG oslo_concurrency.lockutils [req-c895dafb-aad6-4954-aec1-9a04e26b7a68 req-639877df-4f50-42c7-8a54-1b53b4583caf 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-b4c55640-85f9-4d75-a4df-6ee77b21ca73" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:08:38 compute-1 nova_compute[225855]: 2026-01-20 15:08:38.421 225859 DEBUG nova.network.neutron [req-c895dafb-aad6-4954-aec1-9a04e26b7a68 req-639877df-4f50-42c7-8a54-1b53b4583caf 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b4c55640-85f9-4d75-a4df-6ee77b21ca73] Refreshing network info cache for port c6bf5189-ce8b-4ff1-9eba-e3d4195a6cbf _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 15:08:38 compute-1 sudo[295141]: pam_unix(sudo:session): session closed for user root
Jan 20 15:08:38 compute-1 nova_compute[225855]: 2026-01-20 15:08:38.424 225859 DEBUG nova.virt.libvirt.driver [None req-e21a5016-04f7-439f-bc34-054eb2c5a89e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] [instance: b4c55640-85f9-4d75-a4df-6ee77b21ca73] Start _get_guest_xml network_info=[{"id": "c6bf5189-ce8b-4ff1-9eba-e3d4195a6cbf", "address": "fa:16:3e:7f:85:09", "network": {"id": "e22d6ddc-0339-4395-bc21-95081825f05b", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1496899124-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5b43342be22543f79d4a56e26c6d0c96", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc6bf5189-ce", "ovs_interfaceid": "c6bf5189-ce8b-4ff1-9eba-e3d4195a6cbf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T15:08:29Z,direct_url=<?>,disk_format='qcow2',id=5b64c953-6df3-45a3-ae28-e419ba117bb2,min_disk=0,min_ram=0,name='tempest-scenario-img--310583103',owner='5b43342be22543f79d4a56e26c6d0c96',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T15:08:30Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'device_type': 'disk', 'encryption_options': None, 'size': 0, 'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'guest_format': None, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'image_id': '5b64c953-6df3-45a3-ae28-e419ba117bb2'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 20 15:08:38 compute-1 nova_compute[225855]: 2026-01-20 15:08:38.429 225859 WARNING nova.virt.libvirt.driver [None req-e21a5016-04f7-439f-bc34-054eb2c5a89e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 15:08:38 compute-1 nova_compute[225855]: 2026-01-20 15:08:38.434 225859 DEBUG nova.virt.libvirt.host [None req-e21a5016-04f7-439f-bc34-054eb2c5a89e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 20 15:08:38 compute-1 nova_compute[225855]: 2026-01-20 15:08:38.434 225859 DEBUG nova.virt.libvirt.host [None req-e21a5016-04f7-439f-bc34-054eb2c5a89e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 20 15:08:38 compute-1 nova_compute[225855]: 2026-01-20 15:08:38.437 225859 DEBUG nova.virt.libvirt.host [None req-e21a5016-04f7-439f-bc34-054eb2c5a89e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 20 15:08:38 compute-1 nova_compute[225855]: 2026-01-20 15:08:38.437 225859 DEBUG nova.virt.libvirt.host [None req-e21a5016-04f7-439f-bc34-054eb2c5a89e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 20 15:08:38 compute-1 nova_compute[225855]: 2026-01-20 15:08:38.438 225859 DEBUG nova.virt.libvirt.driver [None req-e21a5016-04f7-439f-bc34-054eb2c5a89e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 20 15:08:38 compute-1 nova_compute[225855]: 2026-01-20 15:08:38.439 225859 DEBUG nova.virt.hardware [None req-e21a5016-04f7-439f-bc34-054eb2c5a89e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-20T14:21:55Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='522deaab-a741-4dbb-932d-d8b13a211c33',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T15:08:29Z,direct_url=<?>,disk_format='qcow2',id=5b64c953-6df3-45a3-ae28-e419ba117bb2,min_disk=0,min_ram=0,name='tempest-scenario-img--310583103',owner='5b43342be22543f79d4a56e26c6d0c96',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T15:08:30Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 20 15:08:38 compute-1 nova_compute[225855]: 2026-01-20 15:08:38.439 225859 DEBUG nova.virt.hardware [None req-e21a5016-04f7-439f-bc34-054eb2c5a89e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 20 15:08:38 compute-1 nova_compute[225855]: 2026-01-20 15:08:38.439 225859 DEBUG nova.virt.hardware [None req-e21a5016-04f7-439f-bc34-054eb2c5a89e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 20 15:08:38 compute-1 nova_compute[225855]: 2026-01-20 15:08:38.440 225859 DEBUG nova.virt.hardware [None req-e21a5016-04f7-439f-bc34-054eb2c5a89e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 20 15:08:38 compute-1 nova_compute[225855]: 2026-01-20 15:08:38.440 225859 DEBUG nova.virt.hardware [None req-e21a5016-04f7-439f-bc34-054eb2c5a89e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 20 15:08:38 compute-1 nova_compute[225855]: 2026-01-20 15:08:38.440 225859 DEBUG nova.virt.hardware [None req-e21a5016-04f7-439f-bc34-054eb2c5a89e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 20 15:08:38 compute-1 nova_compute[225855]: 2026-01-20 15:08:38.440 225859 DEBUG nova.virt.hardware [None req-e21a5016-04f7-439f-bc34-054eb2c5a89e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 20 15:08:38 compute-1 nova_compute[225855]: 2026-01-20 15:08:38.441 225859 DEBUG nova.virt.hardware [None req-e21a5016-04f7-439f-bc34-054eb2c5a89e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 20 15:08:38 compute-1 nova_compute[225855]: 2026-01-20 15:08:38.441 225859 DEBUG nova.virt.hardware [None req-e21a5016-04f7-439f-bc34-054eb2c5a89e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 20 15:08:38 compute-1 nova_compute[225855]: 2026-01-20 15:08:38.441 225859 DEBUG nova.virt.hardware [None req-e21a5016-04f7-439f-bc34-054eb2c5a89e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 20 15:08:38 compute-1 nova_compute[225855]: 2026-01-20 15:08:38.442 225859 DEBUG nova.virt.hardware [None req-e21a5016-04f7-439f-bc34-054eb2c5a89e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 20 15:08:38 compute-1 nova_compute[225855]: 2026-01-20 15:08:38.445 225859 DEBUG oslo_concurrency.processutils [None req-e21a5016-04f7-439f-bc34-054eb2c5a89e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:08:38 compute-1 sudo[295166]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:08:38 compute-1 sudo[295166]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:08:38 compute-1 sudo[295166]: pam_unix(sudo:session): session closed for user root
Jan 20 15:08:38 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:08:38 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:08:38 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:08:38.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:08:38 compute-1 ceph-mon[81775]: pgmap v2553: 321 pgs: 321 active+clean; 485 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 3.0 MiB/s wr, 137 op/s
Jan 20 15:08:38 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 15:08:38 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/571252648' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:08:38 compute-1 nova_compute[225855]: 2026-01-20 15:08:38.888 225859 DEBUG oslo_concurrency.processutils [None req-e21a5016-04f7-439f-bc34-054eb2c5a89e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:08:38 compute-1 nova_compute[225855]: 2026-01-20 15:08:38.919 225859 DEBUG nova.storage.rbd_utils [None req-e21a5016-04f7-439f-bc34-054eb2c5a89e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] rbd image b4c55640-85f9-4d75-a4df-6ee77b21ca73_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:08:38 compute-1 nova_compute[225855]: 2026-01-20 15:08:38.925 225859 DEBUG oslo_concurrency.processutils [None req-e21a5016-04f7-439f-bc34-054eb2c5a89e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:08:39 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 15:08:39 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/496706624' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:08:39 compute-1 nova_compute[225855]: 2026-01-20 15:08:39.400 225859 DEBUG oslo_concurrency.processutils [None req-e21a5016-04f7-439f-bc34-054eb2c5a89e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:08:39 compute-1 nova_compute[225855]: 2026-01-20 15:08:39.403 225859 DEBUG nova.virt.libvirt.vif [None req-e21a5016-04f7-439f-bc34-054eb2c5a89e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T15:08:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestMinimumBasicScenario-server-2033880413',display_name='tempest-TestMinimumBasicScenario-server-2033880413',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testminimumbasicscenario-server-2033880413',id=170,image_ref='5b64c953-6df3-45a3-ae28-e419ba117bb2',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAWye8KRiKbxhnt7Xo8hUChshePiRtRRGKKbmjGtRpAbQNhUcsWAOe/4okup4yaafm+06AxmRjgJ9R8sVLFUEsSHiOZRgv3dFKZL11GpIpeu6UGBzzNxvi+GaA/Guzx6LQ==',key_name='tempest-TestMinimumBasicScenario-1345705235',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5b43342be22543f79d4a56e26c6d0c96',ramdisk_id='',reservation_id='r-srebood0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='5b64c953-6df3-45a3-ae28-e419ba117bb2',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestMinimumBasicScenario-1665080150',owner_user_name='tempest-TestMinimumBasicScenario-1665080150-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T15:08:35Z,user_data=None,user_id='c98bd3f0904e48efa524d598bcad85e9',uuid=b4c55640-85f9-4d75-a4df-6ee77b21ca73,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c6bf5189-ce8b-4ff1-9eba-e3d4195a6cbf", "address": "fa:16:3e:7f:85:09", "network": {"id": "e22d6ddc-0339-4395-bc21-95081825f05b", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1496899124-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "5b43342be22543f79d4a56e26c6d0c96", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc6bf5189-ce", "ovs_interfaceid": "c6bf5189-ce8b-4ff1-9eba-e3d4195a6cbf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 20 15:08:39 compute-1 nova_compute[225855]: 2026-01-20 15:08:39.403 225859 DEBUG nova.network.os_vif_util [None req-e21a5016-04f7-439f-bc34-054eb2c5a89e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Converting VIF {"id": "c6bf5189-ce8b-4ff1-9eba-e3d4195a6cbf", "address": "fa:16:3e:7f:85:09", "network": {"id": "e22d6ddc-0339-4395-bc21-95081825f05b", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1496899124-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5b43342be22543f79d4a56e26c6d0c96", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc6bf5189-ce", "ovs_interfaceid": "c6bf5189-ce8b-4ff1-9eba-e3d4195a6cbf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 15:08:39 compute-1 nova_compute[225855]: 2026-01-20 15:08:39.404 225859 DEBUG nova.network.os_vif_util [None req-e21a5016-04f7-439f-bc34-054eb2c5a89e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7f:85:09,bridge_name='br-int',has_traffic_filtering=True,id=c6bf5189-ce8b-4ff1-9eba-e3d4195a6cbf,network=Network(e22d6ddc-0339-4395-bc21-95081825f05b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc6bf5189-ce') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 15:08:39 compute-1 nova_compute[225855]: 2026-01-20 15:08:39.406 225859 DEBUG nova.objects.instance [None req-e21a5016-04f7-439f-bc34-054eb2c5a89e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Lazy-loading 'pci_devices' on Instance uuid b4c55640-85f9-4d75-a4df-6ee77b21ca73 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:08:39 compute-1 nova_compute[225855]: 2026-01-20 15:08:39.422 225859 DEBUG nova.virt.libvirt.driver [None req-e21a5016-04f7-439f-bc34-054eb2c5a89e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] [instance: b4c55640-85f9-4d75-a4df-6ee77b21ca73] End _get_guest_xml xml=<domain type="kvm">
Jan 20 15:08:39 compute-1 nova_compute[225855]:   <uuid>b4c55640-85f9-4d75-a4df-6ee77b21ca73</uuid>
Jan 20 15:08:39 compute-1 nova_compute[225855]:   <name>instance-000000aa</name>
Jan 20 15:08:39 compute-1 nova_compute[225855]:   <memory>131072</memory>
Jan 20 15:08:39 compute-1 nova_compute[225855]:   <vcpu>1</vcpu>
Jan 20 15:08:39 compute-1 nova_compute[225855]:   <metadata>
Jan 20 15:08:39 compute-1 nova_compute[225855]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 15:08:39 compute-1 nova_compute[225855]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 15:08:39 compute-1 nova_compute[225855]:       <nova:name>tempest-TestMinimumBasicScenario-server-2033880413</nova:name>
Jan 20 15:08:39 compute-1 nova_compute[225855]:       <nova:creationTime>2026-01-20 15:08:38</nova:creationTime>
Jan 20 15:08:39 compute-1 nova_compute[225855]:       <nova:flavor name="m1.nano">
Jan 20 15:08:39 compute-1 nova_compute[225855]:         <nova:memory>128</nova:memory>
Jan 20 15:08:39 compute-1 nova_compute[225855]:         <nova:disk>1</nova:disk>
Jan 20 15:08:39 compute-1 nova_compute[225855]:         <nova:swap>0</nova:swap>
Jan 20 15:08:39 compute-1 nova_compute[225855]:         <nova:ephemeral>0</nova:ephemeral>
Jan 20 15:08:39 compute-1 nova_compute[225855]:         <nova:vcpus>1</nova:vcpus>
Jan 20 15:08:39 compute-1 nova_compute[225855]:       </nova:flavor>
Jan 20 15:08:39 compute-1 nova_compute[225855]:       <nova:owner>
Jan 20 15:08:39 compute-1 nova_compute[225855]:         <nova:user uuid="c98bd3f0904e48efa524d598bcad85e9">tempest-TestMinimumBasicScenario-1665080150-project-member</nova:user>
Jan 20 15:08:39 compute-1 nova_compute[225855]:         <nova:project uuid="5b43342be22543f79d4a56e26c6d0c96">tempest-TestMinimumBasicScenario-1665080150</nova:project>
Jan 20 15:08:39 compute-1 nova_compute[225855]:       </nova:owner>
Jan 20 15:08:39 compute-1 nova_compute[225855]:       <nova:root type="image" uuid="5b64c953-6df3-45a3-ae28-e419ba117bb2"/>
Jan 20 15:08:39 compute-1 nova_compute[225855]:       <nova:ports>
Jan 20 15:08:39 compute-1 nova_compute[225855]:         <nova:port uuid="c6bf5189-ce8b-4ff1-9eba-e3d4195a6cbf">
Jan 20 15:08:39 compute-1 nova_compute[225855]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Jan 20 15:08:39 compute-1 nova_compute[225855]:         </nova:port>
Jan 20 15:08:39 compute-1 nova_compute[225855]:       </nova:ports>
Jan 20 15:08:39 compute-1 nova_compute[225855]:     </nova:instance>
Jan 20 15:08:39 compute-1 nova_compute[225855]:   </metadata>
Jan 20 15:08:39 compute-1 nova_compute[225855]:   <sysinfo type="smbios">
Jan 20 15:08:39 compute-1 nova_compute[225855]:     <system>
Jan 20 15:08:39 compute-1 nova_compute[225855]:       <entry name="manufacturer">RDO</entry>
Jan 20 15:08:39 compute-1 nova_compute[225855]:       <entry name="product">OpenStack Compute</entry>
Jan 20 15:08:39 compute-1 nova_compute[225855]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 15:08:39 compute-1 nova_compute[225855]:       <entry name="serial">b4c55640-85f9-4d75-a4df-6ee77b21ca73</entry>
Jan 20 15:08:39 compute-1 nova_compute[225855]:       <entry name="uuid">b4c55640-85f9-4d75-a4df-6ee77b21ca73</entry>
Jan 20 15:08:39 compute-1 nova_compute[225855]:       <entry name="family">Virtual Machine</entry>
Jan 20 15:08:39 compute-1 nova_compute[225855]:     </system>
Jan 20 15:08:39 compute-1 nova_compute[225855]:   </sysinfo>
Jan 20 15:08:39 compute-1 nova_compute[225855]:   <os>
Jan 20 15:08:39 compute-1 nova_compute[225855]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 20 15:08:39 compute-1 nova_compute[225855]:     <boot dev="hd"/>
Jan 20 15:08:39 compute-1 nova_compute[225855]:     <smbios mode="sysinfo"/>
Jan 20 15:08:39 compute-1 nova_compute[225855]:   </os>
Jan 20 15:08:39 compute-1 nova_compute[225855]:   <features>
Jan 20 15:08:39 compute-1 nova_compute[225855]:     <acpi/>
Jan 20 15:08:39 compute-1 nova_compute[225855]:     <apic/>
Jan 20 15:08:39 compute-1 nova_compute[225855]:     <vmcoreinfo/>
Jan 20 15:08:39 compute-1 nova_compute[225855]:   </features>
Jan 20 15:08:39 compute-1 nova_compute[225855]:   <clock offset="utc">
Jan 20 15:08:39 compute-1 nova_compute[225855]:     <timer name="pit" tickpolicy="delay"/>
Jan 20 15:08:39 compute-1 nova_compute[225855]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 20 15:08:39 compute-1 nova_compute[225855]:     <timer name="hpet" present="no"/>
Jan 20 15:08:39 compute-1 nova_compute[225855]:   </clock>
Jan 20 15:08:39 compute-1 nova_compute[225855]:   <cpu mode="custom" match="exact">
Jan 20 15:08:39 compute-1 nova_compute[225855]:     <model>Nehalem</model>
Jan 20 15:08:39 compute-1 nova_compute[225855]:     <topology sockets="1" cores="1" threads="1"/>
Jan 20 15:08:39 compute-1 nova_compute[225855]:   </cpu>
Jan 20 15:08:39 compute-1 nova_compute[225855]:   <devices>
Jan 20 15:08:39 compute-1 nova_compute[225855]:     <disk type="network" device="disk">
Jan 20 15:08:39 compute-1 nova_compute[225855]:       <driver type="raw" cache="none"/>
Jan 20 15:08:39 compute-1 nova_compute[225855]:       <source protocol="rbd" name="vms/b4c55640-85f9-4d75-a4df-6ee77b21ca73_disk">
Jan 20 15:08:39 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 15:08:39 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 15:08:39 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 15:08:39 compute-1 nova_compute[225855]:       </source>
Jan 20 15:08:39 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 15:08:39 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 15:08:39 compute-1 nova_compute[225855]:       </auth>
Jan 20 15:08:39 compute-1 nova_compute[225855]:       <target dev="vda" bus="virtio"/>
Jan 20 15:08:39 compute-1 nova_compute[225855]:     </disk>
Jan 20 15:08:39 compute-1 nova_compute[225855]:     <disk type="network" device="cdrom">
Jan 20 15:08:39 compute-1 nova_compute[225855]:       <driver type="raw" cache="none"/>
Jan 20 15:08:39 compute-1 nova_compute[225855]:       <source protocol="rbd" name="vms/b4c55640-85f9-4d75-a4df-6ee77b21ca73_disk.config">
Jan 20 15:08:39 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 15:08:39 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 15:08:39 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 15:08:39 compute-1 nova_compute[225855]:       </source>
Jan 20 15:08:39 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 15:08:39 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 15:08:39 compute-1 nova_compute[225855]:       </auth>
Jan 20 15:08:39 compute-1 nova_compute[225855]:       <target dev="sda" bus="sata"/>
Jan 20 15:08:39 compute-1 nova_compute[225855]:     </disk>
Jan 20 15:08:39 compute-1 nova_compute[225855]:     <interface type="ethernet">
Jan 20 15:08:39 compute-1 nova_compute[225855]:       <mac address="fa:16:3e:7f:85:09"/>
Jan 20 15:08:39 compute-1 nova_compute[225855]:       <model type="virtio"/>
Jan 20 15:08:39 compute-1 nova_compute[225855]:       <driver name="vhost" rx_queue_size="512"/>
Jan 20 15:08:39 compute-1 nova_compute[225855]:       <mtu size="1442"/>
Jan 20 15:08:39 compute-1 nova_compute[225855]:       <target dev="tapc6bf5189-ce"/>
Jan 20 15:08:39 compute-1 nova_compute[225855]:     </interface>
Jan 20 15:08:39 compute-1 nova_compute[225855]:     <serial type="pty">
Jan 20 15:08:39 compute-1 nova_compute[225855]:       <log file="/var/lib/nova/instances/b4c55640-85f9-4d75-a4df-6ee77b21ca73/console.log" append="off"/>
Jan 20 15:08:39 compute-1 nova_compute[225855]:     </serial>
Jan 20 15:08:39 compute-1 nova_compute[225855]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 20 15:08:39 compute-1 nova_compute[225855]:     <video>
Jan 20 15:08:39 compute-1 nova_compute[225855]:       <model type="virtio"/>
Jan 20 15:08:39 compute-1 nova_compute[225855]:     </video>
Jan 20 15:08:39 compute-1 nova_compute[225855]:     <input type="tablet" bus="usb"/>
Jan 20 15:08:39 compute-1 nova_compute[225855]:     <rng model="virtio">
Jan 20 15:08:39 compute-1 nova_compute[225855]:       <backend model="random">/dev/urandom</backend>
Jan 20 15:08:39 compute-1 nova_compute[225855]:     </rng>
Jan 20 15:08:39 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root"/>
Jan 20 15:08:39 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:08:39 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:08:39 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:08:39 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:08:39 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:08:39 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:08:39 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:08:39 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:08:39 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:08:39 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:08:39 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:08:39 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:08:39 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:08:39 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:08:39 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:08:39 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:08:39 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:08:39 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:08:39 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:08:39 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:08:39 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:08:39 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:08:39 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:08:39 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:08:39 compute-1 nova_compute[225855]:     <controller type="usb" index="0"/>
Jan 20 15:08:39 compute-1 nova_compute[225855]:     <memballoon model="virtio">
Jan 20 15:08:39 compute-1 nova_compute[225855]:       <stats period="10"/>
Jan 20 15:08:39 compute-1 nova_compute[225855]:     </memballoon>
Jan 20 15:08:39 compute-1 nova_compute[225855]:   </devices>
Jan 20 15:08:39 compute-1 nova_compute[225855]: </domain>
Jan 20 15:08:39 compute-1 nova_compute[225855]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 20 15:08:39 compute-1 nova_compute[225855]: 2026-01-20 15:08:39.423 225859 DEBUG nova.compute.manager [None req-e21a5016-04f7-439f-bc34-054eb2c5a89e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] [instance: b4c55640-85f9-4d75-a4df-6ee77b21ca73] Preparing to wait for external event network-vif-plugged-c6bf5189-ce8b-4ff1-9eba-e3d4195a6cbf prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 20 15:08:39 compute-1 nova_compute[225855]: 2026-01-20 15:08:39.424 225859 DEBUG oslo_concurrency.lockutils [None req-e21a5016-04f7-439f-bc34-054eb2c5a89e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Acquiring lock "b4c55640-85f9-4d75-a4df-6ee77b21ca73-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:08:39 compute-1 nova_compute[225855]: 2026-01-20 15:08:39.424 225859 DEBUG oslo_concurrency.lockutils [None req-e21a5016-04f7-439f-bc34-054eb2c5a89e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Lock "b4c55640-85f9-4d75-a4df-6ee77b21ca73-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:08:39 compute-1 nova_compute[225855]: 2026-01-20 15:08:39.424 225859 DEBUG oslo_concurrency.lockutils [None req-e21a5016-04f7-439f-bc34-054eb2c5a89e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Lock "b4c55640-85f9-4d75-a4df-6ee77b21ca73-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:08:39 compute-1 nova_compute[225855]: 2026-01-20 15:08:39.425 225859 DEBUG nova.virt.libvirt.vif [None req-e21a5016-04f7-439f-bc34-054eb2c5a89e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T15:08:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestMinimumBasicScenario-server-2033880413',display_name='tempest-TestMinimumBasicScenario-server-2033880413',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testminimumbasicscenario-server-2033880413',id=170,image_ref='5b64c953-6df3-45a3-ae28-e419ba117bb2',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAWye8KRiKbxhnt7Xo8hUChshePiRtRRGKKbmjGtRpAbQNhUcsWAOe/4okup4yaafm+06AxmRjgJ9R8sVLFUEsSHiOZRgv3dFKZL11GpIpeu6UGBzzNxvi+GaA/Guzx6LQ==',key_name='tempest-TestMinimumBasicScenario-1345705235',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5b43342be22543f79d4a56e26c6d0c96',ramdisk_id='',reservation_id='r-srebood0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='5b64c953-6df3-45a3-ae28-e419ba117bb2',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestMinimumBasicScenario-1665080150',owner_user_name='tempest-TestMinimumBasicScenario-1665080150-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T15:08:35Z,user_data=None,user_id='c98bd3f0904e48efa524d598bcad85e9',uuid=b4c55640-85f9-4d75-a4df-6ee77b21ca73,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c6bf5189-ce8b-4ff1-9eba-e3d4195a6cbf", "address": "fa:16:3e:7f:85:09", "network": {"id": "e22d6ddc-0339-4395-bc21-95081825f05b", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1496899124-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], 
"meta": {"injected": false, "tenant_id": "5b43342be22543f79d4a56e26c6d0c96", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc6bf5189-ce", "ovs_interfaceid": "c6bf5189-ce8b-4ff1-9eba-e3d4195a6cbf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 20 15:08:39 compute-1 nova_compute[225855]: 2026-01-20 15:08:39.425 225859 DEBUG nova.network.os_vif_util [None req-e21a5016-04f7-439f-bc34-054eb2c5a89e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Converting VIF {"id": "c6bf5189-ce8b-4ff1-9eba-e3d4195a6cbf", "address": "fa:16:3e:7f:85:09", "network": {"id": "e22d6ddc-0339-4395-bc21-95081825f05b", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1496899124-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5b43342be22543f79d4a56e26c6d0c96", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc6bf5189-ce", "ovs_interfaceid": "c6bf5189-ce8b-4ff1-9eba-e3d4195a6cbf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 15:08:39 compute-1 nova_compute[225855]: 2026-01-20 15:08:39.426 225859 DEBUG nova.network.os_vif_util [None req-e21a5016-04f7-439f-bc34-054eb2c5a89e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7f:85:09,bridge_name='br-int',has_traffic_filtering=True,id=c6bf5189-ce8b-4ff1-9eba-e3d4195a6cbf,network=Network(e22d6ddc-0339-4395-bc21-95081825f05b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc6bf5189-ce') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 15:08:39 compute-1 nova_compute[225855]: 2026-01-20 15:08:39.426 225859 DEBUG os_vif [None req-e21a5016-04f7-439f-bc34-054eb2c5a89e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7f:85:09,bridge_name='br-int',has_traffic_filtering=True,id=c6bf5189-ce8b-4ff1-9eba-e3d4195a6cbf,network=Network(e22d6ddc-0339-4395-bc21-95081825f05b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc6bf5189-ce') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 20 15:08:39 compute-1 nova_compute[225855]: 2026-01-20 15:08:39.427 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:08:39 compute-1 nova_compute[225855]: 2026-01-20 15:08:39.427 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:08:39 compute-1 nova_compute[225855]: 2026-01-20 15:08:39.428 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 15:08:39 compute-1 nova_compute[225855]: 2026-01-20 15:08:39.431 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:08:39 compute-1 nova_compute[225855]: 2026-01-20 15:08:39.432 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc6bf5189-ce, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:08:39 compute-1 nova_compute[225855]: 2026-01-20 15:08:39.434 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc6bf5189-ce, col_values=(('external_ids', {'iface-id': 'c6bf5189-ce8b-4ff1-9eba-e3d4195a6cbf', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:7f:85:09', 'vm-uuid': 'b4c55640-85f9-4d75-a4df-6ee77b21ca73'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:08:39 compute-1 nova_compute[225855]: 2026-01-20 15:08:39.436 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:08:39 compute-1 NetworkManager[49104]: <info>  [1768921719.4375] manager: (tapc6bf5189-ce): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/307)
Jan 20 15:08:39 compute-1 nova_compute[225855]: 2026-01-20 15:08:39.438 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 15:08:39 compute-1 nova_compute[225855]: 2026-01-20 15:08:39.445 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:08:39 compute-1 nova_compute[225855]: 2026-01-20 15:08:39.446 225859 INFO os_vif [None req-e21a5016-04f7-439f-bc34-054eb2c5a89e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7f:85:09,bridge_name='br-int',has_traffic_filtering=True,id=c6bf5189-ce8b-4ff1-9eba-e3d4195a6cbf,network=Network(e22d6ddc-0339-4395-bc21-95081825f05b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc6bf5189-ce')
Jan 20 15:08:39 compute-1 nova_compute[225855]: 2026-01-20 15:08:39.505 225859 DEBUG nova.virt.libvirt.driver [None req-e21a5016-04f7-439f-bc34-054eb2c5a89e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 15:08:39 compute-1 nova_compute[225855]: 2026-01-20 15:08:39.505 225859 DEBUG nova.virt.libvirt.driver [None req-e21a5016-04f7-439f-bc34-054eb2c5a89e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 15:08:39 compute-1 nova_compute[225855]: 2026-01-20 15:08:39.506 225859 DEBUG nova.virt.libvirt.driver [None req-e21a5016-04f7-439f-bc34-054eb2c5a89e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] No VIF found with MAC fa:16:3e:7f:85:09, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 20 15:08:39 compute-1 nova_compute[225855]: 2026-01-20 15:08:39.506 225859 INFO nova.virt.libvirt.driver [None req-e21a5016-04f7-439f-bc34-054eb2c5a89e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] [instance: b4c55640-85f9-4d75-a4df-6ee77b21ca73] Using config drive
Jan 20 15:08:39 compute-1 nova_compute[225855]: 2026-01-20 15:08:39.537 225859 DEBUG nova.storage.rbd_utils [None req-e21a5016-04f7-439f-bc34-054eb2c5a89e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] rbd image b4c55640-85f9-4d75-a4df-6ee77b21ca73_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:08:39 compute-1 nova_compute[225855]: 2026-01-20 15:08:39.682 225859 DEBUG nova.network.neutron [req-c895dafb-aad6-4954-aec1-9a04e26b7a68 req-639877df-4f50-42c7-8a54-1b53b4583caf 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b4c55640-85f9-4d75-a4df-6ee77b21ca73] Updated VIF entry in instance network info cache for port c6bf5189-ce8b-4ff1-9eba-e3d4195a6cbf. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 15:08:39 compute-1 nova_compute[225855]: 2026-01-20 15:08:39.683 225859 DEBUG nova.network.neutron [req-c895dafb-aad6-4954-aec1-9a04e26b7a68 req-639877df-4f50-42c7-8a54-1b53b4583caf 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b4c55640-85f9-4d75-a4df-6ee77b21ca73] Updating instance_info_cache with network_info: [{"id": "c6bf5189-ce8b-4ff1-9eba-e3d4195a6cbf", "address": "fa:16:3e:7f:85:09", "network": {"id": "e22d6ddc-0339-4395-bc21-95081825f05b", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1496899124-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5b43342be22543f79d4a56e26c6d0c96", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc6bf5189-ce", "ovs_interfaceid": "c6bf5189-ce8b-4ff1-9eba-e3d4195a6cbf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:08:39 compute-1 nova_compute[225855]: 2026-01-20 15:08:39.698 225859 DEBUG oslo_concurrency.lockutils [req-c895dafb-aad6-4954-aec1-9a04e26b7a68 req-639877df-4f50-42c7-8a54-1b53b4583caf 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-b4c55640-85f9-4d75-a4df-6ee77b21ca73" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:08:39 compute-1 nova_compute[225855]: 2026-01-20 15:08:39.827 225859 INFO nova.virt.libvirt.driver [None req-e21a5016-04f7-439f-bc34-054eb2c5a89e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] [instance: b4c55640-85f9-4d75-a4df-6ee77b21ca73] Creating config drive at /var/lib/nova/instances/b4c55640-85f9-4d75-a4df-6ee77b21ca73/disk.config
Jan 20 15:08:39 compute-1 nova_compute[225855]: 2026-01-20 15:08:39.832 225859 DEBUG oslo_concurrency.processutils [None req-e21a5016-04f7-439f-bc34-054eb2c5a89e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b4c55640-85f9-4d75-a4df-6ee77b21ca73/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp03eyzntx execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:08:39 compute-1 nova_compute[225855]: 2026-01-20 15:08:39.965 225859 DEBUG oslo_concurrency.processutils [None req-e21a5016-04f7-439f-bc34-054eb2c5a89e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b4c55640-85f9-4d75-a4df-6ee77b21ca73/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp03eyzntx" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:08:40 compute-1 nova_compute[225855]: 2026-01-20 15:08:40.066 225859 DEBUG nova.storage.rbd_utils [None req-e21a5016-04f7-439f-bc34-054eb2c5a89e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] rbd image b4c55640-85f9-4d75-a4df-6ee77b21ca73_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:08:40 compute-1 nova_compute[225855]: 2026-01-20 15:08:40.073 225859 DEBUG oslo_concurrency.processutils [None req-e21a5016-04f7-439f-bc34-054eb2c5a89e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b4c55640-85f9-4d75-a4df-6ee77b21ca73/disk.config b4c55640-85f9-4d75-a4df-6ee77b21ca73_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:08:40 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/571252648' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:08:40 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/496706624' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:08:40 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/2722338201' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:08:40 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:08:40 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:08:40 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:08:40.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:08:40 compute-1 nova_compute[225855]: 2026-01-20 15:08:40.445 225859 DEBUG oslo_concurrency.processutils [None req-e21a5016-04f7-439f-bc34-054eb2c5a89e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b4c55640-85f9-4d75-a4df-6ee77b21ca73/disk.config b4c55640-85f9-4d75-a4df-6ee77b21ca73_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.373s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:08:40 compute-1 nova_compute[225855]: 2026-01-20 15:08:40.446 225859 INFO nova.virt.libvirt.driver [None req-e21a5016-04f7-439f-bc34-054eb2c5a89e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] [instance: b4c55640-85f9-4d75-a4df-6ee77b21ca73] Deleting local config drive /var/lib/nova/instances/b4c55640-85f9-4d75-a4df-6ee77b21ca73/disk.config because it was imported into RBD.
Jan 20 15:08:40 compute-1 kernel: tapc6bf5189-ce: entered promiscuous mode
Jan 20 15:08:40 compute-1 NetworkManager[49104]: <info>  [1768921720.4941] manager: (tapc6bf5189-ce): new Tun device (/org/freedesktop/NetworkManager/Devices/308)
Jan 20 15:08:40 compute-1 ovn_controller[130490]: 2026-01-20T15:08:40Z|00737|binding|INFO|Claiming lport c6bf5189-ce8b-4ff1-9eba-e3d4195a6cbf for this chassis.
Jan 20 15:08:40 compute-1 ovn_controller[130490]: 2026-01-20T15:08:40Z|00738|binding|INFO|c6bf5189-ce8b-4ff1-9eba-e3d4195a6cbf: Claiming fa:16:3e:7f:85:09 10.100.0.3
Jan 20 15:08:40 compute-1 nova_compute[225855]: 2026-01-20 15:08:40.496 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:08:40 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:08:40.507 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7f:85:09 10.100.0.3'], port_security=['fa:16:3e:7f:85:09 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'b4c55640-85f9-4d75-a4df-6ee77b21ca73', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e22d6ddc-0339-4395-bc21-95081825f05b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5b43342be22543f79d4a56e26c6d0c96', 'neutron:revision_number': '2', 'neutron:security_group_ids': '8514424f-703f-4374-a78e-584f6e7c233b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=11c19619-1e7e-40ee-be83-c9dbc347543e, chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], logical_port=c6bf5189-ce8b-4ff1-9eba-e3d4195a6cbf) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:08:40 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:08:40.509 140354 INFO neutron.agent.ovn.metadata.agent [-] Port c6bf5189-ce8b-4ff1-9eba-e3d4195a6cbf in datapath e22d6ddc-0339-4395-bc21-95081825f05b bound to our chassis
Jan 20 15:08:40 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:08:40.512 140354 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e22d6ddc-0339-4395-bc21-95081825f05b
Jan 20 15:08:40 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:08:40.525 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[d1531949-8cd0-4651-9524-af1a36fc2d55]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:08:40 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:08:40.526 140354 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tape22d6ddc-01 in ovnmeta-e22d6ddc-0339-4395-bc21-95081825f05b namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 20 15:08:40 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:08:40.529 229707 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tape22d6ddc-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 20 15:08:40 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:08:40.529 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[5c1b4b02-df86-406d-805f-35a694138dbc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:08:40 compute-1 systemd-udevd[295331]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 15:08:40 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:08:40.530 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[c034e42c-34bd-4ba4-9e1c-5cf908db42ac]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:08:40 compute-1 systemd-machined[194361]: New machine qemu-87-instance-000000aa.
Jan 20 15:08:40 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:08:40.544 140466 DEBUG oslo.privsep.daemon [-] privsep: reply[86abc2db-cb49-49f5-acb8-9a1e7fee4815]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:08:40 compute-1 NetworkManager[49104]: <info>  [1768921720.5476] device (tapc6bf5189-ce): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 20 15:08:40 compute-1 NetworkManager[49104]: <info>  [1768921720.5485] device (tapc6bf5189-ce): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 20 15:08:40 compute-1 nova_compute[225855]: 2026-01-20 15:08:40.567 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:08:40 compute-1 systemd[1]: Started Virtual Machine qemu-87-instance-000000aa.
Jan 20 15:08:40 compute-1 ovn_controller[130490]: 2026-01-20T15:08:40Z|00739|binding|INFO|Setting lport c6bf5189-ce8b-4ff1-9eba-e3d4195a6cbf ovn-installed in OVS
Jan 20 15:08:40 compute-1 ovn_controller[130490]: 2026-01-20T15:08:40Z|00740|binding|INFO|Setting lport c6bf5189-ce8b-4ff1-9eba-e3d4195a6cbf up in Southbound
Jan 20 15:08:40 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:08:40.575 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[00883eb0-1154-4453-97e8-6bbe64fc72a4]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:08:40 compute-1 nova_compute[225855]: 2026-01-20 15:08:40.579 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:08:40 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:08:40.603 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[99e03192-9728-47e7-ba68-131c81f1d285]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:08:40 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:08:40.612 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[2eb305f1-1b83-49d6-97ce-420821571bba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:08:40 compute-1 NetworkManager[49104]: <info>  [1768921720.6135] manager: (tape22d6ddc-00): new Veth device (/org/freedesktop/NetworkManager/Devices/309)
Jan 20 15:08:40 compute-1 systemd-udevd[295334]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 15:08:40 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:08:40.650 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[53953e07-37b8-472f-ad17-0e2c239f95c5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:08:40 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:08:40.654 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[26695018-1ff0-44d3-88e0-c35f1ee09af4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:08:40 compute-1 NetworkManager[49104]: <info>  [1768921720.6821] device (tape22d6ddc-00): carrier: link connected
Jan 20 15:08:40 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:08:40.688 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[47573274-576e-44ea-b9b8-6edb69d2c21e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:08:40 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:08:40.704 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[8c3c838d-9683-475f-9b43-3c0523ae2a84]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape22d6ddc-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9e:3f:5c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 211], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 675960, 'reachable_time': 35650, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 295363, 'error': None, 'target': 'ovnmeta-e22d6ddc-0339-4395-bc21-95081825f05b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:08:40 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:08:40.720 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[b7c3fb7b-fb18-4834-a1ad-8e298f68d8e0]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe9e:3f5c'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 675960, 'tstamp': 675960}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 295364, 'error': None, 'target': 'ovnmeta-e22d6ddc-0339-4395-bc21-95081825f05b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:08:40 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:08:40.740 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[b29715c6-3bfe-4c74-8d09-781a8a097fd4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape22d6ddc-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9e:3f:5c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 211], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 675960, 'reachable_time': 35650, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 295365, 'error': None, 'target': 'ovnmeta-e22d6ddc-0339-4395-bc21-95081825f05b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:08:40 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:08:40.771 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[899bd977-684b-4cc0-874e-32e4328a12a6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:08:40 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:08:40 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:08:40 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:08:40.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:08:40 compute-1 ovn_controller[130490]: 2026-01-20T15:08:40Z|00083|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:bf:9e:90 10.100.0.6
Jan 20 15:08:40 compute-1 ovn_controller[130490]: 2026-01-20T15:08:40Z|00084|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:bf:9e:90 10.100.0.6
Jan 20 15:08:40 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:08:40.824 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[2942a4d6-803d-4ba9-973c-4a016389092d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:08:40 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:08:40.826 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape22d6ddc-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:08:40 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:08:40.826 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 15:08:40 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:08:40.827 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape22d6ddc-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:08:40 compute-1 nova_compute[225855]: 2026-01-20 15:08:40.828 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:08:40 compute-1 NetworkManager[49104]: <info>  [1768921720.8292] manager: (tape22d6ddc-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/310)
Jan 20 15:08:40 compute-1 kernel: tape22d6ddc-00: entered promiscuous mode
Jan 20 15:08:40 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:08:40.832 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape22d6ddc-00, col_values=(('external_ids', {'iface-id': '940a1442-b0ab-49a2-87e8-750659cdda8d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:08:40 compute-1 ovn_controller[130490]: 2026-01-20T15:08:40Z|00741|binding|INFO|Releasing lport 940a1442-b0ab-49a2-87e8-750659cdda8d from this chassis (sb_readonly=0)
Jan 20 15:08:40 compute-1 nova_compute[225855]: 2026-01-20 15:08:40.833 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:08:40 compute-1 nova_compute[225855]: 2026-01-20 15:08:40.850 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:08:40 compute-1 nova_compute[225855]: 2026-01-20 15:08:40.851 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:08:40 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:08:40.852 140354 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/e22d6ddc-0339-4395-bc21-95081825f05b.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/e22d6ddc-0339-4395-bc21-95081825f05b.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 20 15:08:40 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:08:40.853 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[7aa47c91-3565-4bf8-ba65-2e554c803952]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:08:40 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:08:40.854 140354 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 20 15:08:40 compute-1 ovn_metadata_agent[140349]: global
Jan 20 15:08:40 compute-1 ovn_metadata_agent[140349]:     log         /dev/log local0 debug
Jan 20 15:08:40 compute-1 ovn_metadata_agent[140349]:     log-tag     haproxy-metadata-proxy-e22d6ddc-0339-4395-bc21-95081825f05b
Jan 20 15:08:40 compute-1 ovn_metadata_agent[140349]:     user        root
Jan 20 15:08:40 compute-1 ovn_metadata_agent[140349]:     group       root
Jan 20 15:08:40 compute-1 ovn_metadata_agent[140349]:     maxconn     1024
Jan 20 15:08:40 compute-1 ovn_metadata_agent[140349]:     pidfile     /var/lib/neutron/external/pids/e22d6ddc-0339-4395-bc21-95081825f05b.pid.haproxy
Jan 20 15:08:40 compute-1 ovn_metadata_agent[140349]:     daemon
Jan 20 15:08:40 compute-1 ovn_metadata_agent[140349]: 
Jan 20 15:08:40 compute-1 ovn_metadata_agent[140349]: defaults
Jan 20 15:08:40 compute-1 ovn_metadata_agent[140349]:     log global
Jan 20 15:08:40 compute-1 ovn_metadata_agent[140349]:     mode http
Jan 20 15:08:40 compute-1 ovn_metadata_agent[140349]:     option httplog
Jan 20 15:08:40 compute-1 ovn_metadata_agent[140349]:     option dontlognull
Jan 20 15:08:40 compute-1 ovn_metadata_agent[140349]:     option http-server-close
Jan 20 15:08:40 compute-1 ovn_metadata_agent[140349]:     option forwardfor
Jan 20 15:08:40 compute-1 ovn_metadata_agent[140349]:     retries                 3
Jan 20 15:08:40 compute-1 ovn_metadata_agent[140349]:     timeout http-request    30s
Jan 20 15:08:40 compute-1 ovn_metadata_agent[140349]:     timeout connect         30s
Jan 20 15:08:40 compute-1 ovn_metadata_agent[140349]:     timeout client          32s
Jan 20 15:08:40 compute-1 ovn_metadata_agent[140349]:     timeout server          32s
Jan 20 15:08:40 compute-1 ovn_metadata_agent[140349]:     timeout http-keep-alive 30s
Jan 20 15:08:40 compute-1 ovn_metadata_agent[140349]: 
Jan 20 15:08:40 compute-1 ovn_metadata_agent[140349]: 
Jan 20 15:08:40 compute-1 ovn_metadata_agent[140349]: listen listener
Jan 20 15:08:40 compute-1 ovn_metadata_agent[140349]:     bind 169.254.169.254:80
Jan 20 15:08:40 compute-1 ovn_metadata_agent[140349]:     server metadata /var/lib/neutron/metadata_proxy
Jan 20 15:08:40 compute-1 ovn_metadata_agent[140349]:     http-request add-header X-OVN-Network-ID e22d6ddc-0339-4395-bc21-95081825f05b
Jan 20 15:08:40 compute-1 ovn_metadata_agent[140349]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 20 15:08:40 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:08:40.855 140354 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-e22d6ddc-0339-4395-bc21-95081825f05b', 'env', 'PROCESS_TAG=haproxy-e22d6ddc-0339-4395-bc21-95081825f05b', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/e22d6ddc-0339-4395-bc21-95081825f05b.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 20 15:08:40 compute-1 nova_compute[225855]: 2026-01-20 15:08:40.864 225859 DEBUG nova.compute.manager [req-98393fd6-b7f3-40f5-8cde-f4754969fef9 req-314d61ff-1164-49d2-b5d3-2fc2ecf99a2a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b4c55640-85f9-4d75-a4df-6ee77b21ca73] Received event network-vif-plugged-c6bf5189-ce8b-4ff1-9eba-e3d4195a6cbf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:08:40 compute-1 nova_compute[225855]: 2026-01-20 15:08:40.864 225859 DEBUG oslo_concurrency.lockutils [req-98393fd6-b7f3-40f5-8cde-f4754969fef9 req-314d61ff-1164-49d2-b5d3-2fc2ecf99a2a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "b4c55640-85f9-4d75-a4df-6ee77b21ca73-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:08:40 compute-1 nova_compute[225855]: 2026-01-20 15:08:40.865 225859 DEBUG oslo_concurrency.lockutils [req-98393fd6-b7f3-40f5-8cde-f4754969fef9 req-314d61ff-1164-49d2-b5d3-2fc2ecf99a2a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "b4c55640-85f9-4d75-a4df-6ee77b21ca73-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:08:40 compute-1 nova_compute[225855]: 2026-01-20 15:08:40.865 225859 DEBUG oslo_concurrency.lockutils [req-98393fd6-b7f3-40f5-8cde-f4754969fef9 req-314d61ff-1164-49d2-b5d3-2fc2ecf99a2a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "b4c55640-85f9-4d75-a4df-6ee77b21ca73-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:08:40 compute-1 nova_compute[225855]: 2026-01-20 15:08:40.865 225859 DEBUG nova.compute.manager [req-98393fd6-b7f3-40f5-8cde-f4754969fef9 req-314d61ff-1164-49d2-b5d3-2fc2ecf99a2a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b4c55640-85f9-4d75-a4df-6ee77b21ca73] Processing event network-vif-plugged-c6bf5189-ce8b-4ff1-9eba-e3d4195a6cbf _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 20 15:08:41 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/2140542339' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:08:41 compute-1 ceph-mon[81775]: pgmap v2554: 321 pgs: 321 active+clean; 553 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.3 MiB/s rd, 5.2 MiB/s wr, 184 op/s
Jan 20 15:08:41 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/3093963373' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:08:41 compute-1 nova_compute[225855]: 2026-01-20 15:08:41.115 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768921721.1150274, b4c55640-85f9-4d75-a4df-6ee77b21ca73 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:08:41 compute-1 nova_compute[225855]: 2026-01-20 15:08:41.116 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: b4c55640-85f9-4d75-a4df-6ee77b21ca73] VM Started (Lifecycle Event)
Jan 20 15:08:41 compute-1 nova_compute[225855]: 2026-01-20 15:08:41.119 225859 DEBUG nova.compute.manager [None req-e21a5016-04f7-439f-bc34-054eb2c5a89e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] [instance: b4c55640-85f9-4d75-a4df-6ee77b21ca73] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 20 15:08:41 compute-1 nova_compute[225855]: 2026-01-20 15:08:41.123 225859 DEBUG nova.virt.libvirt.driver [None req-e21a5016-04f7-439f-bc34-054eb2c5a89e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] [instance: b4c55640-85f9-4d75-a4df-6ee77b21ca73] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 20 15:08:41 compute-1 nova_compute[225855]: 2026-01-20 15:08:41.127 225859 INFO nova.virt.libvirt.driver [-] [instance: b4c55640-85f9-4d75-a4df-6ee77b21ca73] Instance spawned successfully.
Jan 20 15:08:41 compute-1 nova_compute[225855]: 2026-01-20 15:08:41.128 225859 DEBUG nova.virt.libvirt.driver [None req-e21a5016-04f7-439f-bc34-054eb2c5a89e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] [instance: b4c55640-85f9-4d75-a4df-6ee77b21ca73] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 20 15:08:41 compute-1 nova_compute[225855]: 2026-01-20 15:08:41.156 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: b4c55640-85f9-4d75-a4df-6ee77b21ca73] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:08:41 compute-1 nova_compute[225855]: 2026-01-20 15:08:41.162 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: b4c55640-85f9-4d75-a4df-6ee77b21ca73] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 15:08:41 compute-1 nova_compute[225855]: 2026-01-20 15:08:41.166 225859 DEBUG nova.virt.libvirt.driver [None req-e21a5016-04f7-439f-bc34-054eb2c5a89e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] [instance: b4c55640-85f9-4d75-a4df-6ee77b21ca73] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:08:41 compute-1 nova_compute[225855]: 2026-01-20 15:08:41.166 225859 DEBUG nova.virt.libvirt.driver [None req-e21a5016-04f7-439f-bc34-054eb2c5a89e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] [instance: b4c55640-85f9-4d75-a4df-6ee77b21ca73] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:08:41 compute-1 nova_compute[225855]: 2026-01-20 15:08:41.167 225859 DEBUG nova.virt.libvirt.driver [None req-e21a5016-04f7-439f-bc34-054eb2c5a89e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] [instance: b4c55640-85f9-4d75-a4df-6ee77b21ca73] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:08:41 compute-1 nova_compute[225855]: 2026-01-20 15:08:41.167 225859 DEBUG nova.virt.libvirt.driver [None req-e21a5016-04f7-439f-bc34-054eb2c5a89e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] [instance: b4c55640-85f9-4d75-a4df-6ee77b21ca73] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:08:41 compute-1 nova_compute[225855]: 2026-01-20 15:08:41.168 225859 DEBUG nova.virt.libvirt.driver [None req-e21a5016-04f7-439f-bc34-054eb2c5a89e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] [instance: b4c55640-85f9-4d75-a4df-6ee77b21ca73] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:08:41 compute-1 nova_compute[225855]: 2026-01-20 15:08:41.168 225859 DEBUG nova.virt.libvirt.driver [None req-e21a5016-04f7-439f-bc34-054eb2c5a89e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] [instance: b4c55640-85f9-4d75-a4df-6ee77b21ca73] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:08:41 compute-1 nova_compute[225855]: 2026-01-20 15:08:41.197 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: b4c55640-85f9-4d75-a4df-6ee77b21ca73] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 15:08:41 compute-1 nova_compute[225855]: 2026-01-20 15:08:41.199 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768921721.1183376, b4c55640-85f9-4d75-a4df-6ee77b21ca73 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:08:41 compute-1 nova_compute[225855]: 2026-01-20 15:08:41.199 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: b4c55640-85f9-4d75-a4df-6ee77b21ca73] VM Paused (Lifecycle Event)
Jan 20 15:08:41 compute-1 nova_compute[225855]: 2026-01-20 15:08:41.222 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: b4c55640-85f9-4d75-a4df-6ee77b21ca73] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:08:41 compute-1 nova_compute[225855]: 2026-01-20 15:08:41.226 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768921721.1220424, b4c55640-85f9-4d75-a4df-6ee77b21ca73 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:08:41 compute-1 nova_compute[225855]: 2026-01-20 15:08:41.227 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: b4c55640-85f9-4d75-a4df-6ee77b21ca73] VM Resumed (Lifecycle Event)
Jan 20 15:08:41 compute-1 nova_compute[225855]: 2026-01-20 15:08:41.231 225859 INFO nova.compute.manager [None req-e21a5016-04f7-439f-bc34-054eb2c5a89e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] [instance: b4c55640-85f9-4d75-a4df-6ee77b21ca73] Took 6.14 seconds to spawn the instance on the hypervisor.
Jan 20 15:08:41 compute-1 nova_compute[225855]: 2026-01-20 15:08:41.232 225859 DEBUG nova.compute.manager [None req-e21a5016-04f7-439f-bc34-054eb2c5a89e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] [instance: b4c55640-85f9-4d75-a4df-6ee77b21ca73] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:08:41 compute-1 nova_compute[225855]: 2026-01-20 15:08:41.289 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: b4c55640-85f9-4d75-a4df-6ee77b21ca73] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:08:41 compute-1 nova_compute[225855]: 2026-01-20 15:08:41.292 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: b4c55640-85f9-4d75-a4df-6ee77b21ca73] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 15:08:41 compute-1 nova_compute[225855]: 2026-01-20 15:08:41.333 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: b4c55640-85f9-4d75-a4df-6ee77b21ca73] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 15:08:41 compute-1 nova_compute[225855]: 2026-01-20 15:08:41.334 225859 INFO nova.compute.manager [None req-e21a5016-04f7-439f-bc34-054eb2c5a89e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] [instance: b4c55640-85f9-4d75-a4df-6ee77b21ca73] Took 7.64 seconds to build instance.
Jan 20 15:08:41 compute-1 podman[295439]: 2026-01-20 15:08:41.239833326 +0000 UTC m=+0.022334925 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 20 15:08:41 compute-1 nova_compute[225855]: 2026-01-20 15:08:41.369 225859 DEBUG oslo_concurrency.lockutils [None req-e21a5016-04f7-439f-bc34-054eb2c5a89e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Lock "b4c55640-85f9-4d75-a4df-6ee77b21ca73" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.750s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:08:41 compute-1 podman[295439]: 2026-01-20 15:08:41.759232313 +0000 UTC m=+0.541733892 container create e1b6e797dfad2f34daa91ed383fcf864e44f993bbf42f43b0de989cb9646429a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e22d6ddc-0339-4395-bc21-95081825f05b, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team)
Jan 20 15:08:41 compute-1 systemd[1]: Started libpod-conmon-e1b6e797dfad2f34daa91ed383fcf864e44f993bbf42f43b0de989cb9646429a.scope.
Jan 20 15:08:41 compute-1 systemd[1]: Started libcrun container.
Jan 20 15:08:41 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2117d9a12e609e61fa38da0417b2e4ee38bd7ca4efd01c6c08034b29e0c123da/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 20 15:08:41 compute-1 podman[295439]: 2026-01-20 15:08:41.865078946 +0000 UTC m=+0.647580545 container init e1b6e797dfad2f34daa91ed383fcf864e44f993bbf42f43b0de989cb9646429a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e22d6ddc-0339-4395-bc21-95081825f05b, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 20 15:08:41 compute-1 podman[295439]: 2026-01-20 15:08:41.872334242 +0000 UTC m=+0.654835821 container start e1b6e797dfad2f34daa91ed383fcf864e44f993bbf42f43b0de989cb9646429a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e22d6ddc-0339-4395-bc21-95081825f05b, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 15:08:41 compute-1 neutron-haproxy-ovnmeta-e22d6ddc-0339-4395-bc21-95081825f05b[295454]: [NOTICE]   (295458) : New worker (295460) forked
Jan 20 15:08:41 compute-1 neutron-haproxy-ovnmeta-e22d6ddc-0339-4395-bc21-95081825f05b[295454]: [NOTICE]   (295458) : Loading success.
Jan 20 15:08:42 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:08:42 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:08:42 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:08:42.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:08:42 compute-1 nova_compute[225855]: 2026-01-20 15:08:42.521 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:08:42 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:08:42 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:08:42 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:08:42.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:08:43 compute-1 nova_compute[225855]: 2026-01-20 15:08:43.015 225859 DEBUG nova.compute.manager [req-06a3546f-1a29-404b-8f56-3c72d2b2d3e3 req-1d2d0d77-c80c-494a-96eb-2c23702f2ab1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b4c55640-85f9-4d75-a4df-6ee77b21ca73] Received event network-vif-plugged-c6bf5189-ce8b-4ff1-9eba-e3d4195a6cbf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:08:43 compute-1 nova_compute[225855]: 2026-01-20 15:08:43.016 225859 DEBUG oslo_concurrency.lockutils [req-06a3546f-1a29-404b-8f56-3c72d2b2d3e3 req-1d2d0d77-c80c-494a-96eb-2c23702f2ab1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "b4c55640-85f9-4d75-a4df-6ee77b21ca73-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:08:43 compute-1 nova_compute[225855]: 2026-01-20 15:08:43.016 225859 DEBUG oslo_concurrency.lockutils [req-06a3546f-1a29-404b-8f56-3c72d2b2d3e3 req-1d2d0d77-c80c-494a-96eb-2c23702f2ab1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "b4c55640-85f9-4d75-a4df-6ee77b21ca73-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:08:43 compute-1 nova_compute[225855]: 2026-01-20 15:08:43.016 225859 DEBUG oslo_concurrency.lockutils [req-06a3546f-1a29-404b-8f56-3c72d2b2d3e3 req-1d2d0d77-c80c-494a-96eb-2c23702f2ab1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "b4c55640-85f9-4d75-a4df-6ee77b21ca73-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:08:43 compute-1 nova_compute[225855]: 2026-01-20 15:08:43.017 225859 DEBUG nova.compute.manager [req-06a3546f-1a29-404b-8f56-3c72d2b2d3e3 req-1d2d0d77-c80c-494a-96eb-2c23702f2ab1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b4c55640-85f9-4d75-a4df-6ee77b21ca73] No waiting events found dispatching network-vif-plugged-c6bf5189-ce8b-4ff1-9eba-e3d4195a6cbf pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:08:43 compute-1 nova_compute[225855]: 2026-01-20 15:08:43.017 225859 WARNING nova.compute.manager [req-06a3546f-1a29-404b-8f56-3c72d2b2d3e3 req-1d2d0d77-c80c-494a-96eb-2c23702f2ab1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b4c55640-85f9-4d75-a4df-6ee77b21ca73] Received unexpected event network-vif-plugged-c6bf5189-ce8b-4ff1-9eba-e3d4195a6cbf for instance with vm_state active and task_state None.
Jan 20 15:08:43 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e377 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:08:43 compute-1 ceph-mon[81775]: pgmap v2555: 321 pgs: 321 active+clean; 570 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 5.3 MiB/s rd, 5.2 MiB/s wr, 217 op/s
Jan 20 15:08:44 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:08:44 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:08:44 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:08:44.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:08:44 compute-1 nova_compute[225855]: 2026-01-20 15:08:44.483 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:08:44 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:08:44 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:08:44 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:08:44.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:08:45 compute-1 ceph-mon[81775]: pgmap v2556: 321 pgs: 321 active+clean; 577 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 5.6 MiB/s rd, 4.2 MiB/s wr, 227 op/s
Jan 20 15:08:46 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:08:46 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:08:46 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:08:46.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:08:46 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:08:46 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:08:46 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:08:46.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:08:47 compute-1 podman[295471]: 2026-01-20 15:08:47.014639327 +0000 UTC m=+0.054595190 container health_status 533cd8e0c3b8b7f910f15cef49e7f879ab46c12ce646a35628c15f13725eb500 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 20 15:08:47 compute-1 ceph-mon[81775]: pgmap v2557: 321 pgs: 321 active+clean; 580 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 8.0 MiB/s rd, 3.6 MiB/s wr, 338 op/s
Jan 20 15:08:47 compute-1 nova_compute[225855]: 2026-01-20 15:08:47.523 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:08:48 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e377 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:08:48 compute-1 nova_compute[225855]: 2026-01-20 15:08:48.263 225859 DEBUG oslo_concurrency.lockutils [None req-7ca13fec-c26a-4224-9306-ab6ddca4a995 c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Acquiring lock "b4c55640-85f9-4d75-a4df-6ee77b21ca73" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:08:48 compute-1 nova_compute[225855]: 2026-01-20 15:08:48.264 225859 DEBUG oslo_concurrency.lockutils [None req-7ca13fec-c26a-4224-9306-ab6ddca4a995 c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Lock "b4c55640-85f9-4d75-a4df-6ee77b21ca73" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:08:48 compute-1 nova_compute[225855]: 2026-01-20 15:08:48.311 225859 DEBUG nova.objects.instance [None req-7ca13fec-c26a-4224-9306-ab6ddca4a995 c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Lazy-loading 'flavor' on Instance uuid b4c55640-85f9-4d75-a4df-6ee77b21ca73 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:08:48 compute-1 nova_compute[225855]: 2026-01-20 15:08:48.423 225859 DEBUG oslo_concurrency.lockutils [None req-7ca13fec-c26a-4224-9306-ab6ddca4a995 c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Lock "b4c55640-85f9-4d75-a4df-6ee77b21ca73" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.160s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:08:48 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:08:48 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:08:48 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:08:48.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:08:48 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:08:48 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:08:48 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:08:48.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:08:48 compute-1 nova_compute[225855]: 2026-01-20 15:08:48.865 225859 DEBUG oslo_concurrency.lockutils [None req-7ca13fec-c26a-4224-9306-ab6ddca4a995 c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Acquiring lock "b4c55640-85f9-4d75-a4df-6ee77b21ca73" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:08:48 compute-1 nova_compute[225855]: 2026-01-20 15:08:48.866 225859 DEBUG oslo_concurrency.lockutils [None req-7ca13fec-c26a-4224-9306-ab6ddca4a995 c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Lock "b4c55640-85f9-4d75-a4df-6ee77b21ca73" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:08:48 compute-1 nova_compute[225855]: 2026-01-20 15:08:48.866 225859 INFO nova.compute.manager [None req-7ca13fec-c26a-4224-9306-ab6ddca4a995 c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] [instance: b4c55640-85f9-4d75-a4df-6ee77b21ca73] Attaching volume 1ae2be75-c922-4458-bd11-a97b4f6fdd2b to /dev/vdb
Jan 20 15:08:49 compute-1 nova_compute[225855]: 2026-01-20 15:08:49.038 225859 DEBUG os_brick.utils [None req-7ca13fec-c26a-4224-9306-ab6ddca4a995 c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.101', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-1.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Jan 20 15:08:49 compute-1 nova_compute[225855]: 2026-01-20 15:08:49.040 231081 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:08:49 compute-1 nova_compute[225855]: 2026-01-20 15:08:49.051 231081 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:08:49 compute-1 nova_compute[225855]: 2026-01-20 15:08:49.051 231081 DEBUG oslo.privsep.daemon [-] privsep: reply[30906319-46d1-4a52-b966-f6c52bc6837e]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:08:49 compute-1 nova_compute[225855]: 2026-01-20 15:08:49.052 231081 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:08:49 compute-1 nova_compute[225855]: 2026-01-20 15:08:49.062 231081 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:08:49 compute-1 nova_compute[225855]: 2026-01-20 15:08:49.063 231081 DEBUG oslo.privsep.daemon [-] privsep: reply[10182486-989b-4ca5-9c4e-e718d0207583]: (4, ('InitiatorName=iqn.1994-05.com.redhat:1821ea3dc03d', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:08:49 compute-1 nova_compute[225855]: 2026-01-20 15:08:49.064 231081 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:08:49 compute-1 nova_compute[225855]: 2026-01-20 15:08:49.073 231081 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:08:49 compute-1 nova_compute[225855]: 2026-01-20 15:08:49.073 231081 DEBUG oslo.privsep.daemon [-] privsep: reply[1c858e3a-b220-428e-8825-faa332466374]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:08:49 compute-1 nova_compute[225855]: 2026-01-20 15:08:49.075 231081 DEBUG oslo.privsep.daemon [-] privsep: reply[6ecc4bae-375d-41a8-bacb-45776d7802bb]: (4, '870b1f1c-f19c-477b-b282-ee6eeba50974') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:08:49 compute-1 nova_compute[225855]: 2026-01-20 15:08:49.076 225859 DEBUG oslo_concurrency.processutils [None req-7ca13fec-c26a-4224-9306-ab6ddca4a995 c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:08:49 compute-1 nova_compute[225855]: 2026-01-20 15:08:49.108 225859 DEBUG oslo_concurrency.processutils [None req-7ca13fec-c26a-4224-9306-ab6ddca4a995 c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] CMD "nvme version" returned: 0 in 0.032s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:08:49 compute-1 nova_compute[225855]: 2026-01-20 15:08:49.111 225859 DEBUG os_brick.initiator.connectors.lightos [None req-7ca13fec-c26a-4224-9306-ab6ddca4a995 c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Jan 20 15:08:49 compute-1 nova_compute[225855]: 2026-01-20 15:08:49.112 225859 DEBUG os_brick.initiator.connectors.lightos [None req-7ca13fec-c26a-4224-9306-ab6ddca4a995 c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Jan 20 15:08:49 compute-1 nova_compute[225855]: 2026-01-20 15:08:49.112 225859 DEBUG os_brick.initiator.connectors.lightos [None req-7ca13fec-c26a-4224-9306-ab6ddca4a995 c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:5350774e-8b5e-4dba-80a9-92d405981c1d dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Jan 20 15:08:49 compute-1 nova_compute[225855]: 2026-01-20 15:08:49.112 225859 DEBUG os_brick.utils [None req-7ca13fec-c26a-4224-9306-ab6ddca4a995 c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] <== get_connector_properties: return (73ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.101', 'host': 'compute-1.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:1821ea3dc03d', 'do_local_attach': False, 'nvme_hostid': '5350774e-8b5e-4dba-80a9-92d405981c1d', 'system uuid': '870b1f1c-f19c-477b-b282-ee6eeba50974', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:5350774e-8b5e-4dba-80a9-92d405981c1d', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Jan 20 15:08:49 compute-1 nova_compute[225855]: 2026-01-20 15:08:49.113 225859 DEBUG nova.virt.block_device [None req-7ca13fec-c26a-4224-9306-ab6ddca4a995 c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] [instance: b4c55640-85f9-4d75-a4df-6ee77b21ca73] Updating existing volume attachment record: 381cb790-ccf8-436f-94a0-eae0e8e507cc _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Jan 20 15:08:49 compute-1 nova_compute[225855]: 2026-01-20 15:08:49.486 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:08:49 compute-1 ceph-mon[81775]: pgmap v2558: 321 pgs: 321 active+clean; 580 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 7.9 MiB/s rd, 3.6 MiB/s wr, 327 op/s
Jan 20 15:08:49 compute-1 radosgw[83787]: INFO: RGWReshardLock::lock found lock on reshard.0000000010 to be held by another RGW process; skipping for now
Jan 20 15:08:49 compute-1 radosgw[83787]: INFO: RGWReshardLock::lock found lock on reshard.0000000012 to be held by another RGW process; skipping for now
Jan 20 15:08:49 compute-1 radosgw[83787]: INFO: RGWReshardLock::lock found lock on reshard.0000000014 to be held by another RGW process; skipping for now
Jan 20 15:08:49 compute-1 nova_compute[225855]: 2026-01-20 15:08:49.984 225859 DEBUG nova.objects.instance [None req-7ca13fec-c26a-4224-9306-ab6ddca4a995 c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Lazy-loading 'flavor' on Instance uuid b4c55640-85f9-4d75-a4df-6ee77b21ca73 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:08:50 compute-1 nova_compute[225855]: 2026-01-20 15:08:50.006 225859 DEBUG nova.virt.libvirt.driver [None req-7ca13fec-c26a-4224-9306-ab6ddca4a995 c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] [instance: b4c55640-85f9-4d75-a4df-6ee77b21ca73] Attempting to attach volume 1ae2be75-c922-4458-bd11-a97b4f6fdd2b with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
Jan 20 15:08:50 compute-1 nova_compute[225855]: 2026-01-20 15:08:50.008 225859 DEBUG nova.virt.libvirt.guest [None req-7ca13fec-c26a-4224-9306-ab6ddca4a995 c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] attach device xml: <disk type="network" device="disk">
Jan 20 15:08:50 compute-1 nova_compute[225855]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 20 15:08:50 compute-1 nova_compute[225855]:   <source protocol="rbd" name="volumes/volume-1ae2be75-c922-4458-bd11-a97b4f6fdd2b">
Jan 20 15:08:50 compute-1 nova_compute[225855]:     <host name="192.168.122.100" port="6789"/>
Jan 20 15:08:50 compute-1 nova_compute[225855]:     <host name="192.168.122.102" port="6789"/>
Jan 20 15:08:50 compute-1 nova_compute[225855]:     <host name="192.168.122.101" port="6789"/>
Jan 20 15:08:50 compute-1 nova_compute[225855]:   </source>
Jan 20 15:08:50 compute-1 nova_compute[225855]:   <auth username="openstack">
Jan 20 15:08:50 compute-1 nova_compute[225855]:     <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 15:08:50 compute-1 nova_compute[225855]:   </auth>
Jan 20 15:08:50 compute-1 nova_compute[225855]:   <target dev="vdb" bus="virtio"/>
Jan 20 15:08:50 compute-1 nova_compute[225855]:   <serial>1ae2be75-c922-4458-bd11-a97b4f6fdd2b</serial>
Jan 20 15:08:50 compute-1 nova_compute[225855]: </disk>
Jan 20 15:08:50 compute-1 nova_compute[225855]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Jan 20 15:08:50 compute-1 nova_compute[225855]: 2026-01-20 15:08:50.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:08:50 compute-1 nova_compute[225855]: 2026-01-20 15:08:50.340 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 15:08:50 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:08:50 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:08:50 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:08:50.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:08:50 compute-1 nova_compute[225855]: 2026-01-20 15:08:50.444 225859 DEBUG nova.virt.libvirt.driver [None req-7ca13fec-c26a-4224-9306-ab6ddca4a995 c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 15:08:50 compute-1 nova_compute[225855]: 2026-01-20 15:08:50.444 225859 DEBUG nova.virt.libvirt.driver [None req-7ca13fec-c26a-4224-9306-ab6ddca4a995 c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 15:08:50 compute-1 nova_compute[225855]: 2026-01-20 15:08:50.445 225859 DEBUG nova.virt.libvirt.driver [None req-7ca13fec-c26a-4224-9306-ab6ddca4a995 c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 15:08:50 compute-1 nova_compute[225855]: 2026-01-20 15:08:50.445 225859 DEBUG nova.virt.libvirt.driver [None req-7ca13fec-c26a-4224-9306-ab6ddca4a995 c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] No VIF found with MAC fa:16:3e:7f:85:09, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 20 15:08:50 compute-1 nova_compute[225855]: 2026-01-20 15:08:50.643 225859 DEBUG oslo_concurrency.lockutils [None req-7ca13fec-c26a-4224-9306-ab6ddca4a995 c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Lock "b4c55640-85f9-4d75-a4df-6ee77b21ca73" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 1.777s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:08:50 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:08:50 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:08:50 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:08:50.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:08:51 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/2585228950' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:08:51 compute-1 nova_compute[225855]: 2026-01-20 15:08:51.340 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:08:52 compute-1 ceph-mon[81775]: pgmap v2559: 321 pgs: 321 active+clean; 599 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 8.1 MiB/s rd, 4.8 MiB/s wr, 379 op/s
Jan 20 15:08:52 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e378 e378: 3 total, 3 up, 3 in
Jan 20 15:08:52 compute-1 nova_compute[225855]: 2026-01-20 15:08:52.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:08:52 compute-1 nova_compute[225855]: 2026-01-20 15:08:52.340 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 15:08:52 compute-1 nova_compute[225855]: 2026-01-20 15:08:52.340 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 20 15:08:52 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:08:52 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:08:52 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:08:52.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:08:52 compute-1 nova_compute[225855]: 2026-01-20 15:08:52.494 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "refresh_cache-a25af5a3-096f-4363-842e-d960c22eb16b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:08:52 compute-1 nova_compute[225855]: 2026-01-20 15:08:52.495 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquired lock "refresh_cache-a25af5a3-096f-4363-842e-d960c22eb16b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:08:52 compute-1 nova_compute[225855]: 2026-01-20 15:08:52.495 225859 DEBUG nova.network.neutron [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] [instance: a25af5a3-096f-4363-842e-d960c22eb16b] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 20 15:08:52 compute-1 nova_compute[225855]: 2026-01-20 15:08:52.495 225859 DEBUG nova.objects.instance [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lazy-loading 'info_cache' on Instance uuid a25af5a3-096f-4363-842e-d960c22eb16b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:08:52 compute-1 nova_compute[225855]: 2026-01-20 15:08:52.526 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:08:52 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:08:52 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:08:52 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:08:52.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:08:53 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:08:53 compute-1 ceph-mon[81775]: osdmap e378: 3 total, 3 up, 3 in
Jan 20 15:08:53 compute-1 ceph-mon[81775]: pgmap v2561: 321 pgs: 321 active+clean; 616 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 5.4 MiB/s rd, 2.6 MiB/s wr, 319 op/s
Jan 20 15:08:54 compute-1 nova_compute[225855]: 2026-01-20 15:08:54.001 225859 DEBUG nova.network.neutron [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] [instance: a25af5a3-096f-4363-842e-d960c22eb16b] Updating instance_info_cache with network_info: [{"id": "6b7cb043-d1f4-4c2b-8173-1e3e2a664767", "address": "fa:16:3e:bf:9e:90", "network": {"id": "3967ae21-1590-4685-8881-8bd1bcf25258", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-285441107-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fc74c4a296554866969b05aef75252af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b7cb043-d1", "ovs_interfaceid": "6b7cb043-d1f4-4c2b-8173-1e3e2a664767", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:08:54 compute-1 nova_compute[225855]: 2026-01-20 15:08:54.017 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Releasing lock "refresh_cache-a25af5a3-096f-4363-842e-d960c22eb16b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:08:54 compute-1 nova_compute[225855]: 2026-01-20 15:08:54.018 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] [instance: a25af5a3-096f-4363-842e-d960c22eb16b] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 20 15:08:54 compute-1 nova_compute[225855]: 2026-01-20 15:08:54.018 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:08:54 compute-1 ovn_controller[130490]: 2026-01-20T15:08:54Z|00085|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:7f:85:09 10.100.0.3
Jan 20 15:08:54 compute-1 ovn_controller[130490]: 2026-01-20T15:08:54Z|00086|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:7f:85:09 10.100.0.3
Jan 20 15:08:54 compute-1 nova_compute[225855]: 2026-01-20 15:08:54.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:08:54 compute-1 nova_compute[225855]: 2026-01-20 15:08:54.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:08:54 compute-1 nova_compute[225855]: 2026-01-20 15:08:54.340 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:08:54 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:08:54 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:08:54 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:08:54.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:08:54 compute-1 nova_compute[225855]: 2026-01-20 15:08:54.489 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:08:54 compute-1 NetworkManager[49104]: <info>  [1768921734.5348] manager: (patch-provnet-b62c391b-f7a3-4a38-a0df-72ac0383ca74-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/311)
Jan 20 15:08:54 compute-1 NetworkManager[49104]: <info>  [1768921734.5355] manager: (patch-br-int-to-provnet-b62c391b-f7a3-4a38-a0df-72ac0383ca74): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/312)
Jan 20 15:08:54 compute-1 nova_compute[225855]: 2026-01-20 15:08:54.535 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:08:54 compute-1 nova_compute[225855]: 2026-01-20 15:08:54.718 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:08:54 compute-1 ovn_controller[130490]: 2026-01-20T15:08:54Z|00742|binding|INFO|Releasing lport b19d6956-fc8d-42c8-af98-d0f2fe9fe3a3 from this chassis (sb_readonly=0)
Jan 20 15:08:54 compute-1 ovn_controller[130490]: 2026-01-20T15:08:54Z|00743|binding|INFO|Releasing lport 940a1442-b0ab-49a2-87e8-750659cdda8d from this chassis (sb_readonly=0)
Jan 20 15:08:54 compute-1 nova_compute[225855]: 2026-01-20 15:08:54.741 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:08:54 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:08:54 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:08:54 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:08:54.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:08:54 compute-1 nova_compute[225855]: 2026-01-20 15:08:54.839 225859 DEBUG nova.compute.manager [req-8dfc7e61-9b50-4a25-8dfc-e9b38231d0cb req-d1768dd6-bd3e-46e5-88e5-7a0e67aa7feb 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b4c55640-85f9-4d75-a4df-6ee77b21ca73] Received event network-changed-c6bf5189-ce8b-4ff1-9eba-e3d4195a6cbf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:08:54 compute-1 nova_compute[225855]: 2026-01-20 15:08:54.840 225859 DEBUG nova.compute.manager [req-8dfc7e61-9b50-4a25-8dfc-e9b38231d0cb req-d1768dd6-bd3e-46e5-88e5-7a0e67aa7feb 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b4c55640-85f9-4d75-a4df-6ee77b21ca73] Refreshing instance network info cache due to event network-changed-c6bf5189-ce8b-4ff1-9eba-e3d4195a6cbf. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 15:08:54 compute-1 nova_compute[225855]: 2026-01-20 15:08:54.840 225859 DEBUG oslo_concurrency.lockutils [req-8dfc7e61-9b50-4a25-8dfc-e9b38231d0cb req-d1768dd6-bd3e-46e5-88e5-7a0e67aa7feb 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-b4c55640-85f9-4d75-a4df-6ee77b21ca73" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:08:54 compute-1 nova_compute[225855]: 2026-01-20 15:08:54.841 225859 DEBUG oslo_concurrency.lockutils [req-8dfc7e61-9b50-4a25-8dfc-e9b38231d0cb req-d1768dd6-bd3e-46e5-88e5-7a0e67aa7feb 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-b4c55640-85f9-4d75-a4df-6ee77b21ca73" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:08:54 compute-1 nova_compute[225855]: 2026-01-20 15:08:54.841 225859 DEBUG nova.network.neutron [req-8dfc7e61-9b50-4a25-8dfc-e9b38231d0cb req-d1768dd6-bd3e-46e5-88e5-7a0e67aa7feb 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b4c55640-85f9-4d75-a4df-6ee77b21ca73] Refreshing network info cache for port c6bf5189-ce8b-4ff1-9eba-e3d4195a6cbf _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 15:08:55 compute-1 ceph-mon[81775]: pgmap v2562: 321 pgs: 321 active+clean; 620 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 3.3 MiB/s wr, 349 op/s
Jan 20 15:08:56 compute-1 nova_compute[225855]: 2026-01-20 15:08:56.254 225859 DEBUG nova.network.neutron [req-8dfc7e61-9b50-4a25-8dfc-e9b38231d0cb req-d1768dd6-bd3e-46e5-88e5-7a0e67aa7feb 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b4c55640-85f9-4d75-a4df-6ee77b21ca73] Updated VIF entry in instance network info cache for port c6bf5189-ce8b-4ff1-9eba-e3d4195a6cbf. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 15:08:56 compute-1 nova_compute[225855]: 2026-01-20 15:08:56.254 225859 DEBUG nova.network.neutron [req-8dfc7e61-9b50-4a25-8dfc-e9b38231d0cb req-d1768dd6-bd3e-46e5-88e5-7a0e67aa7feb 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b4c55640-85f9-4d75-a4df-6ee77b21ca73] Updating instance_info_cache with network_info: [{"id": "c6bf5189-ce8b-4ff1-9eba-e3d4195a6cbf", "address": "fa:16:3e:7f:85:09", "network": {"id": "e22d6ddc-0339-4395-bc21-95081825f05b", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1496899124-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5b43342be22543f79d4a56e26c6d0c96", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc6bf5189-ce", "ovs_interfaceid": "c6bf5189-ce8b-4ff1-9eba-e3d4195a6cbf", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:08:56 compute-1 nova_compute[225855]: 2026-01-20 15:08:56.279 225859 DEBUG oslo_concurrency.lockutils [req-8dfc7e61-9b50-4a25-8dfc-e9b38231d0cb req-d1768dd6-bd3e-46e5-88e5-7a0e67aa7feb 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-b4c55640-85f9-4d75-a4df-6ee77b21ca73" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:08:56 compute-1 nova_compute[225855]: 2026-01-20 15:08:56.389 225859 DEBUG nova.compute.manager [req-475bc903-102a-46f8-b5cf-ffaa4fb4b5f9 req-c39949d2-7250-4d7b-a7ef-4b464f17cde7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b4c55640-85f9-4d75-a4df-6ee77b21ca73] Received event network-changed-c6bf5189-ce8b-4ff1-9eba-e3d4195a6cbf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:08:56 compute-1 nova_compute[225855]: 2026-01-20 15:08:56.390 225859 DEBUG nova.compute.manager [req-475bc903-102a-46f8-b5cf-ffaa4fb4b5f9 req-c39949d2-7250-4d7b-a7ef-4b464f17cde7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b4c55640-85f9-4d75-a4df-6ee77b21ca73] Refreshing instance network info cache due to event network-changed-c6bf5189-ce8b-4ff1-9eba-e3d4195a6cbf. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 15:08:56 compute-1 nova_compute[225855]: 2026-01-20 15:08:56.390 225859 DEBUG oslo_concurrency.lockutils [req-475bc903-102a-46f8-b5cf-ffaa4fb4b5f9 req-c39949d2-7250-4d7b-a7ef-4b464f17cde7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-b4c55640-85f9-4d75-a4df-6ee77b21ca73" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:08:56 compute-1 nova_compute[225855]: 2026-01-20 15:08:56.390 225859 DEBUG oslo_concurrency.lockutils [req-475bc903-102a-46f8-b5cf-ffaa4fb4b5f9 req-c39949d2-7250-4d7b-a7ef-4b464f17cde7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-b4c55640-85f9-4d75-a4df-6ee77b21ca73" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:08:56 compute-1 nova_compute[225855]: 2026-01-20 15:08:56.390 225859 DEBUG nova.network.neutron [req-475bc903-102a-46f8-b5cf-ffaa4fb4b5f9 req-c39949d2-7250-4d7b-a7ef-4b464f17cde7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b4c55640-85f9-4d75-a4df-6ee77b21ca73] Refreshing network info cache for port c6bf5189-ce8b-4ff1-9eba-e3d4195a6cbf _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 15:08:56 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:08:56 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:08:56 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:08:56.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:08:56 compute-1 ceph-mon[81775]: pgmap v2563: 321 pgs: 321 active+clean; 603 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 5.0 MiB/s wr, 444 op/s
Jan 20 15:08:56 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:08:56 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:08:56 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:08:56.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:08:57 compute-1 nova_compute[225855]: 2026-01-20 15:08:57.355 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:08:57 compute-1 nova_compute[225855]: 2026-01-20 15:08:57.382 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:08:57 compute-1 nova_compute[225855]: 2026-01-20 15:08:57.383 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:08:57 compute-1 nova_compute[225855]: 2026-01-20 15:08:57.383 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:08:57 compute-1 nova_compute[225855]: 2026-01-20 15:08:57.383 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 15:08:57 compute-1 nova_compute[225855]: 2026-01-20 15:08:57.383 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:08:57 compute-1 nova_compute[225855]: 2026-01-20 15:08:57.526 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:08:57 compute-1 nova_compute[225855]: 2026-01-20 15:08:57.722 225859 DEBUG nova.network.neutron [req-475bc903-102a-46f8-b5cf-ffaa4fb4b5f9 req-c39949d2-7250-4d7b-a7ef-4b464f17cde7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b4c55640-85f9-4d75-a4df-6ee77b21ca73] Updated VIF entry in instance network info cache for port c6bf5189-ce8b-4ff1-9eba-e3d4195a6cbf. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 15:08:57 compute-1 nova_compute[225855]: 2026-01-20 15:08:57.723 225859 DEBUG nova.network.neutron [req-475bc903-102a-46f8-b5cf-ffaa4fb4b5f9 req-c39949d2-7250-4d7b-a7ef-4b464f17cde7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b4c55640-85f9-4d75-a4df-6ee77b21ca73] Updating instance_info_cache with network_info: [{"id": "c6bf5189-ce8b-4ff1-9eba-e3d4195a6cbf", "address": "fa:16:3e:7f:85:09", "network": {"id": "e22d6ddc-0339-4395-bc21-95081825f05b", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1496899124-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5b43342be22543f79d4a56e26c6d0c96", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc6bf5189-ce", "ovs_interfaceid": "c6bf5189-ce8b-4ff1-9eba-e3d4195a6cbf", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:08:57 compute-1 nova_compute[225855]: 2026-01-20 15:08:57.744 225859 DEBUG oslo_concurrency.lockutils [req-475bc903-102a-46f8-b5cf-ffaa4fb4b5f9 req-c39949d2-7250-4d7b-a7ef-4b464f17cde7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-b4c55640-85f9-4d75-a4df-6ee77b21ca73" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:08:57 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:08:57 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3934644950' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:08:57 compute-1 nova_compute[225855]: 2026-01-20 15:08:57.856 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:08:57 compute-1 nova_compute[225855]: 2026-01-20 15:08:57.941 225859 DEBUG nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] skipping disk for instance-000000a8 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 15:08:57 compute-1 nova_compute[225855]: 2026-01-20 15:08:57.941 225859 DEBUG nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] skipping disk for instance-000000a8 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 15:08:57 compute-1 nova_compute[225855]: 2026-01-20 15:08:57.942 225859 DEBUG nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] skipping disk for instance-000000a8 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 15:08:57 compute-1 nova_compute[225855]: 2026-01-20 15:08:57.945 225859 DEBUG nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] skipping disk for instance-000000aa as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 15:08:57 compute-1 nova_compute[225855]: 2026-01-20 15:08:57.946 225859 DEBUG nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] skipping disk for instance-000000aa as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 15:08:57 compute-1 nova_compute[225855]: 2026-01-20 15:08:57.946 225859 DEBUG nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] skipping disk for instance-000000aa as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 15:08:58 compute-1 nova_compute[225855]: 2026-01-20 15:08:58.101 225859 WARNING nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 15:08:58 compute-1 nova_compute[225855]: 2026-01-20 15:08:58.102 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=3937MB free_disk=20.831497192382812GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 15:08:58 compute-1 nova_compute[225855]: 2026-01-20 15:08:58.102 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:08:58 compute-1 nova_compute[225855]: 2026-01-20 15:08:58.102 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:08:58 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:08:58 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/3934644950' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:08:58 compute-1 nova_compute[225855]: 2026-01-20 15:08:58.388 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Instance a25af5a3-096f-4363-842e-d960c22eb16b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 20 15:08:58 compute-1 nova_compute[225855]: 2026-01-20 15:08:58.389 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Instance b4c55640-85f9-4d75-a4df-6ee77b21ca73 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 20 15:08:58 compute-1 nova_compute[225855]: 2026-01-20 15:08:58.389 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 15:08:58 compute-1 nova_compute[225855]: 2026-01-20 15:08:58.390 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 15:08:58 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:08:58 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:08:58 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:08:58.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:08:58 compute-1 nova_compute[225855]: 2026-01-20 15:08:58.483 225859 DEBUG nova.scheduler.client.report [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Refreshing inventories for resource provider bbb02880-a710-4ac1-8b2c-5c09765848d1 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 20 15:08:58 compute-1 nova_compute[225855]: 2026-01-20 15:08:58.511 225859 DEBUG nova.compute.manager [req-ec3380e5-2c88-4197-b167-66b752108d0c req-1beb8c9b-b757-414e-9793-26b0fd2d09cb 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b4c55640-85f9-4d75-a4df-6ee77b21ca73] Received event network-changed-c6bf5189-ce8b-4ff1-9eba-e3d4195a6cbf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:08:58 compute-1 nova_compute[225855]: 2026-01-20 15:08:58.512 225859 DEBUG nova.compute.manager [req-ec3380e5-2c88-4197-b167-66b752108d0c req-1beb8c9b-b757-414e-9793-26b0fd2d09cb 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b4c55640-85f9-4d75-a4df-6ee77b21ca73] Refreshing instance network info cache due to event network-changed-c6bf5189-ce8b-4ff1-9eba-e3d4195a6cbf. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 15:08:58 compute-1 nova_compute[225855]: 2026-01-20 15:08:58.512 225859 DEBUG oslo_concurrency.lockutils [req-ec3380e5-2c88-4197-b167-66b752108d0c req-1beb8c9b-b757-414e-9793-26b0fd2d09cb 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-b4c55640-85f9-4d75-a4df-6ee77b21ca73" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:08:58 compute-1 nova_compute[225855]: 2026-01-20 15:08:58.512 225859 DEBUG oslo_concurrency.lockutils [req-ec3380e5-2c88-4197-b167-66b752108d0c req-1beb8c9b-b757-414e-9793-26b0fd2d09cb 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-b4c55640-85f9-4d75-a4df-6ee77b21ca73" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:08:58 compute-1 nova_compute[225855]: 2026-01-20 15:08:58.512 225859 DEBUG nova.network.neutron [req-ec3380e5-2c88-4197-b167-66b752108d0c req-1beb8c9b-b757-414e-9793-26b0fd2d09cb 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b4c55640-85f9-4d75-a4df-6ee77b21ca73] Refreshing network info cache for port c6bf5189-ce8b-4ff1-9eba-e3d4195a6cbf _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 15:08:58 compute-1 nova_compute[225855]: 2026-01-20 15:08:58.515 225859 DEBUG nova.scheduler.client.report [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Updating ProviderTree inventory for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 20 15:08:58 compute-1 nova_compute[225855]: 2026-01-20 15:08:58.515 225859 DEBUG nova.compute.provider_tree [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Updating inventory in ProviderTree for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 20 15:08:58 compute-1 nova_compute[225855]: 2026-01-20 15:08:58.534 225859 DEBUG nova.scheduler.client.report [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Refreshing aggregate associations for resource provider bbb02880-a710-4ac1-8b2c-5c09765848d1, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 20 15:08:58 compute-1 sudo[295548]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:08:58 compute-1 sudo[295548]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:08:58 compute-1 sudo[295548]: pam_unix(sudo:session): session closed for user root
Jan 20 15:08:58 compute-1 nova_compute[225855]: 2026-01-20 15:08:58.574 225859 DEBUG nova.scheduler.client.report [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Refreshing trait associations for resource provider bbb02880-a710-4ac1-8b2c-5c09765848d1, traits: COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_STORAGE_BUS_SATA,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_TRUSTED_CERTS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSSE3,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_VOLUME_EXTEND,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_RESCUE_BFV,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_USB,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE42,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SOCKET_PCI_NUMA_AFFINITY _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 20 15:08:58 compute-1 sudo[295573]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:08:58 compute-1 sudo[295573]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:08:58 compute-1 sudo[295573]: pam_unix(sudo:session): session closed for user root
Jan 20 15:08:58 compute-1 nova_compute[225855]: 2026-01-20 15:08:58.641 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:08:58 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:08:58 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:08:58 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:08:58.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:08:59 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:08:59 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/698210959' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:08:59 compute-1 nova_compute[225855]: 2026-01-20 15:08:59.071 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:08:59 compute-1 nova_compute[225855]: 2026-01-20 15:08:59.076 225859 DEBUG nova.compute.provider_tree [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:08:59 compute-1 nova_compute[225855]: 2026-01-20 15:08:59.092 225859 DEBUG nova.scheduler.client.report [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:08:59 compute-1 nova_compute[225855]: 2026-01-20 15:08:59.113 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 15:08:59 compute-1 nova_compute[225855]: 2026-01-20 15:08:59.114 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.012s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:08:59 compute-1 ceph-mon[81775]: pgmap v2564: 321 pgs: 321 active+clean; 603 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 5.0 MiB/s wr, 444 op/s
Jan 20 15:08:59 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/4045448645' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:08:59 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/698210959' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:08:59 compute-1 nova_compute[225855]: 2026-01-20 15:08:59.491 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:09:00 compute-1 nova_compute[225855]: 2026-01-20 15:09:00.380 225859 DEBUG nova.network.neutron [req-ec3380e5-2c88-4197-b167-66b752108d0c req-1beb8c9b-b757-414e-9793-26b0fd2d09cb 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b4c55640-85f9-4d75-a4df-6ee77b21ca73] Updated VIF entry in instance network info cache for port c6bf5189-ce8b-4ff1-9eba-e3d4195a6cbf. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 15:09:00 compute-1 nova_compute[225855]: 2026-01-20 15:09:00.381 225859 DEBUG nova.network.neutron [req-ec3380e5-2c88-4197-b167-66b752108d0c req-1beb8c9b-b757-414e-9793-26b0fd2d09cb 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b4c55640-85f9-4d75-a4df-6ee77b21ca73] Updating instance_info_cache with network_info: [{"id": "c6bf5189-ce8b-4ff1-9eba-e3d4195a6cbf", "address": "fa:16:3e:7f:85:09", "network": {"id": "e22d6ddc-0339-4395-bc21-95081825f05b", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1496899124-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5b43342be22543f79d4a56e26c6d0c96", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc6bf5189-ce", "ovs_interfaceid": "c6bf5189-ce8b-4ff1-9eba-e3d4195a6cbf", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:09:00 compute-1 nova_compute[225855]: 2026-01-20 15:09:00.398 225859 DEBUG oslo_concurrency.lockutils [req-ec3380e5-2c88-4197-b167-66b752108d0c req-1beb8c9b-b757-414e-9793-26b0fd2d09cb 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-b4c55640-85f9-4d75-a4df-6ee77b21ca73" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:09:00 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:09:00 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:09:00 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:09:00.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:09:00 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/2842859879' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:09:00 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/2808126759' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:09:00 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:09:00 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:09:00 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:09:00.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:09:00 compute-1 nova_compute[225855]: 2026-01-20 15:09:00.938 225859 DEBUG nova.compute.manager [req-434d3210-55cb-47db-8f9e-b2a322b7290f req-f877106e-1550-45aa-a162-785d1d8de159 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b4c55640-85f9-4d75-a4df-6ee77b21ca73] Received event network-changed-c6bf5189-ce8b-4ff1-9eba-e3d4195a6cbf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:09:00 compute-1 nova_compute[225855]: 2026-01-20 15:09:00.939 225859 DEBUG nova.compute.manager [req-434d3210-55cb-47db-8f9e-b2a322b7290f req-f877106e-1550-45aa-a162-785d1d8de159 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b4c55640-85f9-4d75-a4df-6ee77b21ca73] Refreshing instance network info cache due to event network-changed-c6bf5189-ce8b-4ff1-9eba-e3d4195a6cbf. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 15:09:00 compute-1 nova_compute[225855]: 2026-01-20 15:09:00.939 225859 DEBUG oslo_concurrency.lockutils [req-434d3210-55cb-47db-8f9e-b2a322b7290f req-f877106e-1550-45aa-a162-785d1d8de159 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-b4c55640-85f9-4d75-a4df-6ee77b21ca73" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:09:00 compute-1 nova_compute[225855]: 2026-01-20 15:09:00.939 225859 DEBUG oslo_concurrency.lockutils [req-434d3210-55cb-47db-8f9e-b2a322b7290f req-f877106e-1550-45aa-a162-785d1d8de159 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-b4c55640-85f9-4d75-a4df-6ee77b21ca73" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:09:00 compute-1 nova_compute[225855]: 2026-01-20 15:09:00.939 225859 DEBUG nova.network.neutron [req-434d3210-55cb-47db-8f9e-b2a322b7290f req-f877106e-1550-45aa-a162-785d1d8de159 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b4c55640-85f9-4d75-a4df-6ee77b21ca73] Refreshing network info cache for port c6bf5189-ce8b-4ff1-9eba-e3d4195a6cbf _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 15:09:01 compute-1 nova_compute[225855]: 2026-01-20 15:09:01.095 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:09:01 compute-1 sudo[295621]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:09:01 compute-1 sudo[295621]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:09:01 compute-1 sudo[295621]: pam_unix(sudo:session): session closed for user root
Jan 20 15:09:01 compute-1 sudo[295646]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:09:01 compute-1 sudo[295646]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:09:01 compute-1 sudo[295646]: pam_unix(sudo:session): session closed for user root
Jan 20 15:09:01 compute-1 sudo[295671]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:09:01 compute-1 sudo[295671]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:09:01 compute-1 sudo[295671]: pam_unix(sudo:session): session closed for user root
Jan 20 15:09:01 compute-1 sudo[295696]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 20 15:09:01 compute-1 sudo[295696]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:09:01 compute-1 ceph-mon[81775]: pgmap v2565: 321 pgs: 321 active+clean; 608 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 3.7 MiB/s wr, 408 op/s
Jan 20 15:09:01 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/3931540506' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:09:01 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/3777703864' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:09:01 compute-1 sudo[295696]: pam_unix(sudo:session): session closed for user root
Jan 20 15:09:02 compute-1 nova_compute[225855]: 2026-01-20 15:09:02.340 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:09:02 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:09:02 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:09:02 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:09:02.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:09:02 compute-1 nova_compute[225855]: 2026-01-20 15:09:02.528 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:09:02 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:09:02 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:09:02 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:09:02.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:09:02 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/3351445276' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:09:02 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:09:02 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:09:02 compute-1 ceph-mon[81775]: pgmap v2566: 321 pgs: 321 active+clean; 608 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 2.6 MiB/s wr, 394 op/s
Jan 20 15:09:03 compute-1 podman[295753]: 2026-01-20 15:09:03.042930806 +0000 UTC m=+0.087023910 container health_status 72be95c12e041eb2cd1a16ad70dfa1391d9f03a5f3180a614ead9459dcfcd46f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202)
Jan 20 15:09:03 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:09:03 compute-1 nova_compute[225855]: 2026-01-20 15:09:03.732 225859 DEBUG nova.network.neutron [req-434d3210-55cb-47db-8f9e-b2a322b7290f req-f877106e-1550-45aa-a162-785d1d8de159 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b4c55640-85f9-4d75-a4df-6ee77b21ca73] Updated VIF entry in instance network info cache for port c6bf5189-ce8b-4ff1-9eba-e3d4195a6cbf. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 15:09:03 compute-1 nova_compute[225855]: 2026-01-20 15:09:03.733 225859 DEBUG nova.network.neutron [req-434d3210-55cb-47db-8f9e-b2a322b7290f req-f877106e-1550-45aa-a162-785d1d8de159 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b4c55640-85f9-4d75-a4df-6ee77b21ca73] Updating instance_info_cache with network_info: [{"id": "c6bf5189-ce8b-4ff1-9eba-e3d4195a6cbf", "address": "fa:16:3e:7f:85:09", "network": {"id": "e22d6ddc-0339-4395-bc21-95081825f05b", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1496899124-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5b43342be22543f79d4a56e26c6d0c96", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc6bf5189-ce", "ovs_interfaceid": "c6bf5189-ce8b-4ff1-9eba-e3d4195a6cbf", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:09:03 compute-1 nova_compute[225855]: 2026-01-20 15:09:03.776 225859 DEBUG oslo_concurrency.lockutils [req-434d3210-55cb-47db-8f9e-b2a322b7290f req-f877106e-1550-45aa-a162-785d1d8de159 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-b4c55640-85f9-4d75-a4df-6ee77b21ca73" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:09:04 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:09:04 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 15:09:04 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:09:04 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 15:09:04 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 15:09:04 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:09:04 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:09:04 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:09:04 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:09:04.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:09:04 compute-1 nova_compute[225855]: 2026-01-20 15:09:04.495 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:09:04 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e379 e379: 3 total, 3 up, 3 in
Jan 20 15:09:04 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:09:04 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:09:04 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:09:04.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:09:05 compute-1 nova_compute[225855]: 2026-01-20 15:09:05.149 225859 DEBUG oslo_concurrency.lockutils [None req-67061584-2fa7-4fe7-a0eb-ebb74d618309 c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Acquiring lock "b4c55640-85f9-4d75-a4df-6ee77b21ca73" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:09:05 compute-1 nova_compute[225855]: 2026-01-20 15:09:05.149 225859 DEBUG oslo_concurrency.lockutils [None req-67061584-2fa7-4fe7-a0eb-ebb74d618309 c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Lock "b4c55640-85f9-4d75-a4df-6ee77b21ca73" acquired by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:09:05 compute-1 nova_compute[225855]: 2026-01-20 15:09:05.150 225859 INFO nova.compute.manager [None req-67061584-2fa7-4fe7-a0eb-ebb74d618309 c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] [instance: b4c55640-85f9-4d75-a4df-6ee77b21ca73] Rebooting instance
Jan 20 15:09:05 compute-1 nova_compute[225855]: 2026-01-20 15:09:05.166 225859 DEBUG oslo_concurrency.lockutils [None req-67061584-2fa7-4fe7-a0eb-ebb74d618309 c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Acquiring lock "refresh_cache-b4c55640-85f9-4d75-a4df-6ee77b21ca73" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:09:05 compute-1 nova_compute[225855]: 2026-01-20 15:09:05.167 225859 DEBUG oslo_concurrency.lockutils [None req-67061584-2fa7-4fe7-a0eb-ebb74d618309 c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Acquired lock "refresh_cache-b4c55640-85f9-4d75-a4df-6ee77b21ca73" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:09:05 compute-1 nova_compute[225855]: 2026-01-20 15:09:05.167 225859 DEBUG nova.network.neutron [None req-67061584-2fa7-4fe7-a0eb-ebb74d618309 c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] [instance: b4c55640-85f9-4d75-a4df-6ee77b21ca73] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 15:09:05 compute-1 ceph-mon[81775]: pgmap v2567: 321 pgs: 321 active+clean; 622 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.5 MiB/s rd, 2.8 MiB/s wr, 352 op/s
Jan 20 15:09:05 compute-1 ceph-mon[81775]: osdmap e379: 3 total, 3 up, 3 in
Jan 20 15:09:05 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/2658081429' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:09:05 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/2658081429' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:09:05 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e380 e380: 3 total, 3 up, 3 in
Jan 20 15:09:06 compute-1 nova_compute[225855]: 2026-01-20 15:09:06.305 225859 DEBUG nova.network.neutron [None req-67061584-2fa7-4fe7-a0eb-ebb74d618309 c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] [instance: b4c55640-85f9-4d75-a4df-6ee77b21ca73] Updating instance_info_cache with network_info: [{"id": "c6bf5189-ce8b-4ff1-9eba-e3d4195a6cbf", "address": "fa:16:3e:7f:85:09", "network": {"id": "e22d6ddc-0339-4395-bc21-95081825f05b", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1496899124-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5b43342be22543f79d4a56e26c6d0c96", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc6bf5189-ce", "ovs_interfaceid": "c6bf5189-ce8b-4ff1-9eba-e3d4195a6cbf", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:09:06 compute-1 nova_compute[225855]: 2026-01-20 15:09:06.377 225859 DEBUG oslo_concurrency.lockutils [None req-67061584-2fa7-4fe7-a0eb-ebb74d618309 c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Releasing lock "refresh_cache-b4c55640-85f9-4d75-a4df-6ee77b21ca73" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:09:06 compute-1 nova_compute[225855]: 2026-01-20 15:09:06.379 225859 DEBUG nova.compute.manager [None req-67061584-2fa7-4fe7-a0eb-ebb74d618309 c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] [instance: b4c55640-85f9-4d75-a4df-6ee77b21ca73] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:09:06 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:09:06 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:09:06 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:09:06.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:09:06 compute-1 kernel: tapc6bf5189-ce (unregistering): left promiscuous mode
Jan 20 15:09:06 compute-1 NetworkManager[49104]: <info>  [1768921746.5747] device (tapc6bf5189-ce): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 20 15:09:06 compute-1 ovn_controller[130490]: 2026-01-20T15:09:06Z|00744|binding|INFO|Releasing lport c6bf5189-ce8b-4ff1-9eba-e3d4195a6cbf from this chassis (sb_readonly=0)
Jan 20 15:09:06 compute-1 ovn_controller[130490]: 2026-01-20T15:09:06Z|00745|binding|INFO|Setting lport c6bf5189-ce8b-4ff1-9eba-e3d4195a6cbf down in Southbound
Jan 20 15:09:06 compute-1 ovn_controller[130490]: 2026-01-20T15:09:06Z|00746|binding|INFO|Removing iface tapc6bf5189-ce ovn-installed in OVS
Jan 20 15:09:06 compute-1 nova_compute[225855]: 2026-01-20 15:09:06.592 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:09:06 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:09:06.597 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7f:85:09 10.100.0.3'], port_security=['fa:16:3e:7f:85:09 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'b4c55640-85f9-4d75-a4df-6ee77b21ca73', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e22d6ddc-0339-4395-bc21-95081825f05b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5b43342be22543f79d4a56e26c6d0c96', 'neutron:revision_number': '5', 'neutron:security_group_ids': '8514424f-703f-4374-a78e-584f6e7c233b e405f81b-5d97-4611-81c1-7315a012415b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com', 'neutron:port_fip': '192.168.122.229'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=11c19619-1e7e-40ee-be83-c9dbc347543e, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], logical_port=c6bf5189-ce8b-4ff1-9eba-e3d4195a6cbf) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:09:06 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:09:06.599 140354 INFO neutron.agent.ovn.metadata.agent [-] Port c6bf5189-ce8b-4ff1-9eba-e3d4195a6cbf in datapath e22d6ddc-0339-4395-bc21-95081825f05b unbound from our chassis
Jan 20 15:09:06 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:09:06.601 140354 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e22d6ddc-0339-4395-bc21-95081825f05b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 20 15:09:06 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:09:06.603 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[fcc27be8-f572-4373-ace5-6ea57c2a8f12]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:09:06 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:09:06.603 140354 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-e22d6ddc-0339-4395-bc21-95081825f05b namespace which is not needed anymore
Jan 20 15:09:06 compute-1 nova_compute[225855]: 2026-01-20 15:09:06.606 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:09:06 compute-1 systemd[1]: machine-qemu\x2d87\x2dinstance\x2d000000aa.scope: Deactivated successfully.
Jan 20 15:09:06 compute-1 systemd[1]: machine-qemu\x2d87\x2dinstance\x2d000000aa.scope: Consumed 14.398s CPU time.
Jan 20 15:09:06 compute-1 systemd-machined[194361]: Machine qemu-87-instance-000000aa terminated.
Jan 20 15:09:06 compute-1 nova_compute[225855]: 2026-01-20 15:09:06.726 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:09:06 compute-1 neutron-haproxy-ovnmeta-e22d6ddc-0339-4395-bc21-95081825f05b[295454]: [NOTICE]   (295458) : haproxy version is 2.8.14-c23fe91
Jan 20 15:09:06 compute-1 neutron-haproxy-ovnmeta-e22d6ddc-0339-4395-bc21-95081825f05b[295454]: [NOTICE]   (295458) : path to executable is /usr/sbin/haproxy
Jan 20 15:09:06 compute-1 neutron-haproxy-ovnmeta-e22d6ddc-0339-4395-bc21-95081825f05b[295454]: [WARNING]  (295458) : Exiting Master process...
Jan 20 15:09:06 compute-1 neutron-haproxy-ovnmeta-e22d6ddc-0339-4395-bc21-95081825f05b[295454]: [WARNING]  (295458) : Exiting Master process...
Jan 20 15:09:06 compute-1 nova_compute[225855]: 2026-01-20 15:09:06.733 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:09:06 compute-1 neutron-haproxy-ovnmeta-e22d6ddc-0339-4395-bc21-95081825f05b[295454]: [ALERT]    (295458) : Current worker (295460) exited with code 143 (Terminated)
Jan 20 15:09:06 compute-1 neutron-haproxy-ovnmeta-e22d6ddc-0339-4395-bc21-95081825f05b[295454]: [WARNING]  (295458) : All workers exited. Exiting... (0)
Jan 20 15:09:06 compute-1 systemd[1]: libpod-e1b6e797dfad2f34daa91ed383fcf864e44f993bbf42f43b0de989cb9646429a.scope: Deactivated successfully.
Jan 20 15:09:06 compute-1 podman[295804]: 2026-01-20 15:09:06.744094481 +0000 UTC m=+0.052975084 container died e1b6e797dfad2f34daa91ed383fcf864e44f993bbf42f43b0de989cb9646429a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e22d6ddc-0339-4395-bc21-95081825f05b, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 20 15:09:06 compute-1 nova_compute[225855]: 2026-01-20 15:09:06.744 225859 INFO nova.virt.libvirt.driver [-] [instance: b4c55640-85f9-4d75-a4df-6ee77b21ca73] Instance destroyed successfully.
Jan 20 15:09:06 compute-1 nova_compute[225855]: 2026-01-20 15:09:06.744 225859 DEBUG nova.objects.instance [None req-67061584-2fa7-4fe7-a0eb-ebb74d618309 c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Lazy-loading 'resources' on Instance uuid b4c55640-85f9-4d75-a4df-6ee77b21ca73 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:09:06 compute-1 nova_compute[225855]: 2026-01-20 15:09:06.764 225859 DEBUG nova.virt.libvirt.vif [None req-67061584-2fa7-4fe7-a0eb-ebb74d618309 c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T15:08:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestMinimumBasicScenario-server-2033880413',display_name='tempest-TestMinimumBasicScenario-server-2033880413',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testminimumbasicscenario-server-2033880413',id=170,image_ref='5b64c953-6df3-45a3-ae28-e419ba117bb2',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAWye8KRiKbxhnt7Xo8hUChshePiRtRRGKKbmjGtRpAbQNhUcsWAOe/4okup4yaafm+06AxmRjgJ9R8sVLFUEsSHiOZRgv3dFKZL11GpIpeu6UGBzzNxvi+GaA/Guzx6LQ==',key_name='tempest-TestMinimumBasicScenario-1345705235',keypairs=<?>,launch_index=0,launched_at=2026-01-20T15:08:41Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='5b43342be22543f79d4a56e26c6d0c96',ramdisk_id='',reservation_id='r-srebood0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='5b64c953-6df3-45a3-ae28-e419ba117bb2',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestMinimumBasicScenario-1665080150',owner_user_name='tempest-TestMinimumBasicScenario-1665080150-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T15:09:06Z,user_data=None,user_id='c98bd3f0904e48efa524d598bcad85e9',uuid=b4c55640-85f9-4d75-a4df-6ee77b21ca73,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c6bf5189-ce8b-4ff1-9eba-e3d4195a6cbf", "address": "fa:16:3e:7f:85:09", "network": {"id": "e22d6ddc-0339-4395-bc21-95081825f05b", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1496899124-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5b43342be22543f79d4a56e26c6d0c96", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc6bf5189-ce", "ovs_interfaceid": "c6bf5189-ce8b-4ff1-9eba-e3d4195a6cbf", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 20 15:09:06 compute-1 nova_compute[225855]: 2026-01-20 15:09:06.764 225859 DEBUG nova.network.os_vif_util [None req-67061584-2fa7-4fe7-a0eb-ebb74d618309 c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Converting VIF {"id": "c6bf5189-ce8b-4ff1-9eba-e3d4195a6cbf", "address": "fa:16:3e:7f:85:09", "network": {"id": "e22d6ddc-0339-4395-bc21-95081825f05b", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1496899124-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5b43342be22543f79d4a56e26c6d0c96", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc6bf5189-ce", "ovs_interfaceid": "c6bf5189-ce8b-4ff1-9eba-e3d4195a6cbf", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 15:09:06 compute-1 nova_compute[225855]: 2026-01-20 15:09:06.765 225859 DEBUG nova.network.os_vif_util [None req-67061584-2fa7-4fe7-a0eb-ebb74d618309 c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:7f:85:09,bridge_name='br-int',has_traffic_filtering=True,id=c6bf5189-ce8b-4ff1-9eba-e3d4195a6cbf,network=Network(e22d6ddc-0339-4395-bc21-95081825f05b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc6bf5189-ce') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 15:09:06 compute-1 nova_compute[225855]: 2026-01-20 15:09:06.765 225859 DEBUG os_vif [None req-67061584-2fa7-4fe7-a0eb-ebb74d618309 c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:7f:85:09,bridge_name='br-int',has_traffic_filtering=True,id=c6bf5189-ce8b-4ff1-9eba-e3d4195a6cbf,network=Network(e22d6ddc-0339-4395-bc21-95081825f05b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc6bf5189-ce') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 20 15:09:06 compute-1 nova_compute[225855]: 2026-01-20 15:09:06.767 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:09:06 compute-1 nova_compute[225855]: 2026-01-20 15:09:06.768 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc6bf5189-ce, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:09:06 compute-1 nova_compute[225855]: 2026-01-20 15:09:06.769 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:09:06 compute-1 nova_compute[225855]: 2026-01-20 15:09:06.770 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:09:06 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e1b6e797dfad2f34daa91ed383fcf864e44f993bbf42f43b0de989cb9646429a-userdata-shm.mount: Deactivated successfully.
Jan 20 15:09:06 compute-1 ceph-mon[81775]: osdmap e380: 3 total, 3 up, 3 in
Jan 20 15:09:06 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/3460090808' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:09:06 compute-1 ceph-mon[81775]: pgmap v2570: 321 pgs: 17 active+clean+snaptrim_wait, 4 active+clean+snaptrim, 300 active+clean; 594 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.0 MiB/s rd, 2.9 MiB/s wr, 214 op/s
Jan 20 15:09:06 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/330773330' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:09:06 compute-1 systemd[1]: var-lib-containers-storage-overlay-2117d9a12e609e61fa38da0417b2e4ee38bd7ca4efd01c6c08034b29e0c123da-merged.mount: Deactivated successfully.
Jan 20 15:09:06 compute-1 nova_compute[225855]: 2026-01-20 15:09:06.779 225859 INFO os_vif [None req-67061584-2fa7-4fe7-a0eb-ebb74d618309 c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:7f:85:09,bridge_name='br-int',has_traffic_filtering=True,id=c6bf5189-ce8b-4ff1-9eba-e3d4195a6cbf,network=Network(e22d6ddc-0339-4395-bc21-95081825f05b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc6bf5189-ce')
Jan 20 15:09:06 compute-1 podman[295804]: 2026-01-20 15:09:06.790014164 +0000 UTC m=+0.098894747 container cleanup e1b6e797dfad2f34daa91ed383fcf864e44f993bbf42f43b0de989cb9646429a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e22d6ddc-0339-4395-bc21-95081825f05b, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 20 15:09:06 compute-1 nova_compute[225855]: 2026-01-20 15:09:06.790 225859 DEBUG nova.virt.libvirt.driver [None req-67061584-2fa7-4fe7-a0eb-ebb74d618309 c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] [instance: b4c55640-85f9-4d75-a4df-6ee77b21ca73] Start _get_guest_xml network_info=[{"id": "c6bf5189-ce8b-4ff1-9eba-e3d4195a6cbf", "address": "fa:16:3e:7f:85:09", "network": {"id": "e22d6ddc-0339-4395-bc21-95081825f05b", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1496899124-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5b43342be22543f79d4a56e26c6d0c96", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc6bf5189-ce", "ovs_interfaceid": "c6bf5189-ce8b-4ff1-9eba-e3d4195a6cbf", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vdb': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=5b64c953-6df3-45a3-ae28-e419ba117bb2,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'device_type': 'disk', 'encryption_options': None, 'size': 0, 'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'guest_format': None, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'image_id': '5b64c953-6df3-45a3-ae28-e419ba117bb2'}], 'ephemerals': [], 'block_device_mapping': [{'delete_on_termination': False, 'device_type': 'disk', 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-1ae2be75-c922-4458-bd11-a97b4f6fdd2b', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '1ae2be75-c922-4458-bd11-a97b4f6fdd2b', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': 'b4c55640-85f9-4d75-a4df-6ee77b21ca73', 'attached_at': '', 'detached_at': '', 'volume_id': '1ae2be75-c922-4458-bd11-a97b4f6fdd2b', 'serial': '1ae2be75-c922-4458-bd11-a97b4f6fdd2b'}, 'guest_format': None, 'boot_index': None, 'mount_device': '/dev/vdb', 'attachment_id': '381cb790-ccf8-436f-94a0-eae0e8e507cc', 'disk_bus': 'virtio', 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 20 15:09:06 compute-1 nova_compute[225855]: 2026-01-20 15:09:06.793 225859 WARNING nova.virt.libvirt.driver [None req-67061584-2fa7-4fe7-a0eb-ebb74d618309 c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 15:09:06 compute-1 nova_compute[225855]: 2026-01-20 15:09:06.798 225859 DEBUG nova.virt.libvirt.host [None req-67061584-2fa7-4fe7-a0eb-ebb74d618309 c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 20 15:09:06 compute-1 nova_compute[225855]: 2026-01-20 15:09:06.799 225859 DEBUG nova.virt.libvirt.host [None req-67061584-2fa7-4fe7-a0eb-ebb74d618309 c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 20 15:09:06 compute-1 systemd[1]: libpod-conmon-e1b6e797dfad2f34daa91ed383fcf864e44f993bbf42f43b0de989cb9646429a.scope: Deactivated successfully.
Jan 20 15:09:06 compute-1 nova_compute[225855]: 2026-01-20 15:09:06.806 225859 DEBUG nova.virt.libvirt.host [None req-67061584-2fa7-4fe7-a0eb-ebb74d618309 c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 20 15:09:06 compute-1 nova_compute[225855]: 2026-01-20 15:09:06.807 225859 DEBUG nova.virt.libvirt.host [None req-67061584-2fa7-4fe7-a0eb-ebb74d618309 c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 20 15:09:06 compute-1 nova_compute[225855]: 2026-01-20 15:09:06.808 225859 DEBUG nova.virt.libvirt.driver [None req-67061584-2fa7-4fe7-a0eb-ebb74d618309 c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 20 15:09:06 compute-1 nova_compute[225855]: 2026-01-20 15:09:06.808 225859 DEBUG nova.virt.hardware [None req-67061584-2fa7-4fe7-a0eb-ebb74d618309 c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-20T14:21:55Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='522deaab-a741-4dbb-932d-d8b13a211c33',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=5b64c953-6df3-45a3-ae28-e419ba117bb2,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 20 15:09:06 compute-1 nova_compute[225855]: 2026-01-20 15:09:06.809 225859 DEBUG nova.virt.hardware [None req-67061584-2fa7-4fe7-a0eb-ebb74d618309 c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 20 15:09:06 compute-1 nova_compute[225855]: 2026-01-20 15:09:06.809 225859 DEBUG nova.virt.hardware [None req-67061584-2fa7-4fe7-a0eb-ebb74d618309 c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 20 15:09:06 compute-1 nova_compute[225855]: 2026-01-20 15:09:06.809 225859 DEBUG nova.virt.hardware [None req-67061584-2fa7-4fe7-a0eb-ebb74d618309 c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 20 15:09:06 compute-1 nova_compute[225855]: 2026-01-20 15:09:06.809 225859 DEBUG nova.virt.hardware [None req-67061584-2fa7-4fe7-a0eb-ebb74d618309 c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 20 15:09:06 compute-1 nova_compute[225855]: 2026-01-20 15:09:06.809 225859 DEBUG nova.virt.hardware [None req-67061584-2fa7-4fe7-a0eb-ebb74d618309 c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 20 15:09:06 compute-1 nova_compute[225855]: 2026-01-20 15:09:06.810 225859 DEBUG nova.virt.hardware [None req-67061584-2fa7-4fe7-a0eb-ebb74d618309 c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 20 15:09:06 compute-1 nova_compute[225855]: 2026-01-20 15:09:06.810 225859 DEBUG nova.virt.hardware [None req-67061584-2fa7-4fe7-a0eb-ebb74d618309 c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 20 15:09:06 compute-1 nova_compute[225855]: 2026-01-20 15:09:06.810 225859 DEBUG nova.virt.hardware [None req-67061584-2fa7-4fe7-a0eb-ebb74d618309 c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 20 15:09:06 compute-1 nova_compute[225855]: 2026-01-20 15:09:06.810 225859 DEBUG nova.virt.hardware [None req-67061584-2fa7-4fe7-a0eb-ebb74d618309 c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 20 15:09:06 compute-1 nova_compute[225855]: 2026-01-20 15:09:06.810 225859 DEBUG nova.virt.hardware [None req-67061584-2fa7-4fe7-a0eb-ebb74d618309 c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 20 15:09:06 compute-1 nova_compute[225855]: 2026-01-20 15:09:06.811 225859 DEBUG nova.objects.instance [None req-67061584-2fa7-4fe7-a0eb-ebb74d618309 c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Lazy-loading 'vcpu_model' on Instance uuid b4c55640-85f9-4d75-a4df-6ee77b21ca73 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:09:06 compute-1 nova_compute[225855]: 2026-01-20 15:09:06.844 225859 DEBUG oslo_concurrency.processutils [None req-67061584-2fa7-4fe7-a0eb-ebb74d618309 c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:09:06 compute-1 podman[295843]: 2026-01-20 15:09:06.85510454 +0000 UTC m=+0.044801582 container remove e1b6e797dfad2f34daa91ed383fcf864e44f993bbf42f43b0de989cb9646429a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e22d6ddc-0339-4395-bc21-95081825f05b, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202)
Jan 20 15:09:06 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:09:06.861 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[7a1f5161-1c8f-46c7-85be-aa260ebac476]: (4, ('Tue Jan 20 03:09:06 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-e22d6ddc-0339-4395-bc21-95081825f05b (e1b6e797dfad2f34daa91ed383fcf864e44f993bbf42f43b0de989cb9646429a)\ne1b6e797dfad2f34daa91ed383fcf864e44f993bbf42f43b0de989cb9646429a\nTue Jan 20 03:09:06 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-e22d6ddc-0339-4395-bc21-95081825f05b (e1b6e797dfad2f34daa91ed383fcf864e44f993bbf42f43b0de989cb9646429a)\ne1b6e797dfad2f34daa91ed383fcf864e44f993bbf42f43b0de989cb9646429a\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:09:06 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:09:06.862 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[2e34013f-ed05-41cd-8e00-23162bc92362]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:09:06 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:09:06.863 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape22d6ddc-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:09:06 compute-1 kernel: tape22d6ddc-00: left promiscuous mode
Jan 20 15:09:06 compute-1 nova_compute[225855]: 2026-01-20 15:09:06.872 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:09:06 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:09:06 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:09:06 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:09:06.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:09:06 compute-1 nova_compute[225855]: 2026-01-20 15:09:06.882 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:09:06 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:09:06.885 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[1ba16376-ca95-44df-bd02-07395494aff3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:09:06 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:09:06.902 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[ce012454-3e6a-415c-a515-abe5f08b2b2c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:09:06 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:09:06.904 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[774c73bb-b892-4cc8-920d-769a01aeccce]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:09:06 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:09:06.922 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[776bbe1a-185d-45d2-956b-a97585e4ad16]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 675952, 'reachable_time': 41606, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 295859, 'error': None, 'target': 'ovnmeta-e22d6ddc-0339-4395-bc21-95081825f05b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:09:06 compute-1 systemd[1]: run-netns-ovnmeta\x2de22d6ddc\x2d0339\x2d4395\x2dbc21\x2d95081825f05b.mount: Deactivated successfully.
Jan 20 15:09:06 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:09:06.925 140466 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-e22d6ddc-0339-4395-bc21-95081825f05b deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 20 15:09:06 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:09:06.926 140466 DEBUG oslo.privsep.daemon [-] privsep: reply[4a81559c-b374-425d-80bb-d00fd2780cf8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:09:06 compute-1 nova_compute[225855]: 2026-01-20 15:09:06.990 225859 DEBUG nova.compute.manager [req-8f9ba387-f77e-4319-ba31-0407456e926b req-e124c674-28ca-4c70-a260-092c3f6aa93d 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b4c55640-85f9-4d75-a4df-6ee77b21ca73] Received event network-vif-unplugged-c6bf5189-ce8b-4ff1-9eba-e3d4195a6cbf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:09:06 compute-1 nova_compute[225855]: 2026-01-20 15:09:06.991 225859 DEBUG oslo_concurrency.lockutils [req-8f9ba387-f77e-4319-ba31-0407456e926b req-e124c674-28ca-4c70-a260-092c3f6aa93d 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "b4c55640-85f9-4d75-a4df-6ee77b21ca73-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:09:06 compute-1 nova_compute[225855]: 2026-01-20 15:09:06.991 225859 DEBUG oslo_concurrency.lockutils [req-8f9ba387-f77e-4319-ba31-0407456e926b req-e124c674-28ca-4c70-a260-092c3f6aa93d 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "b4c55640-85f9-4d75-a4df-6ee77b21ca73-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:09:06 compute-1 nova_compute[225855]: 2026-01-20 15:09:06.992 225859 DEBUG oslo_concurrency.lockutils [req-8f9ba387-f77e-4319-ba31-0407456e926b req-e124c674-28ca-4c70-a260-092c3f6aa93d 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "b4c55640-85f9-4d75-a4df-6ee77b21ca73-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:09:06 compute-1 nova_compute[225855]: 2026-01-20 15:09:06.992 225859 DEBUG nova.compute.manager [req-8f9ba387-f77e-4319-ba31-0407456e926b req-e124c674-28ca-4c70-a260-092c3f6aa93d 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b4c55640-85f9-4d75-a4df-6ee77b21ca73] No waiting events found dispatching network-vif-unplugged-c6bf5189-ce8b-4ff1-9eba-e3d4195a6cbf pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:09:06 compute-1 nova_compute[225855]: 2026-01-20 15:09:06.992 225859 WARNING nova.compute.manager [req-8f9ba387-f77e-4319-ba31-0407456e926b req-e124c674-28ca-4c70-a260-092c3f6aa93d 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b4c55640-85f9-4d75-a4df-6ee77b21ca73] Received unexpected event network-vif-unplugged-c6bf5189-ce8b-4ff1-9eba-e3d4195a6cbf for instance with vm_state active and task_state reboot_started_hard.
Jan 20 15:09:07 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 15:09:07 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1328764846' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:09:07 compute-1 nova_compute[225855]: 2026-01-20 15:09:07.305 225859 DEBUG oslo_concurrency.processutils [None req-67061584-2fa7-4fe7-a0eb-ebb74d618309 c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:09:07 compute-1 nova_compute[225855]: 2026-01-20 15:09:07.344 225859 DEBUG oslo_concurrency.processutils [None req-67061584-2fa7-4fe7-a0eb-ebb74d618309 c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:09:07 compute-1 nova_compute[225855]: 2026-01-20 15:09:07.529 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:09:07 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/3862902750' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:09:07 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/3862902750' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:09:07 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/1328764846' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:09:07 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 15:09:07 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4222916982' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:09:07 compute-1 nova_compute[225855]: 2026-01-20 15:09:07.815 225859 DEBUG oslo_concurrency.processutils [None req-67061584-2fa7-4fe7-a0eb-ebb74d618309 c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:09:07 compute-1 nova_compute[225855]: 2026-01-20 15:09:07.844 225859 DEBUG nova.virt.libvirt.vif [None req-67061584-2fa7-4fe7-a0eb-ebb74d618309 c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T15:08:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestMinimumBasicScenario-server-2033880413',display_name='tempest-TestMinimumBasicScenario-server-2033880413',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testminimumbasicscenario-server-2033880413',id=170,image_ref='5b64c953-6df3-45a3-ae28-e419ba117bb2',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAWye8KRiKbxhnt7Xo8hUChshePiRtRRGKKbmjGtRpAbQNhUcsWAOe/4okup4yaafm+06AxmRjgJ9R8sVLFUEsSHiOZRgv3dFKZL11GpIpeu6UGBzzNxvi+GaA/Guzx6LQ==',key_name='tempest-TestMinimumBasicScenario-1345705235',keypairs=<?>,launch_index=0,launched_at=2026-01-20T15:08:41Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='5b43342be22543f79d4a56e26c6d0c96',ramdisk_id='',reservation_id='r-srebood0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='5b64c953-6df3-45a3-ae28-e419ba117bb2',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestMinimumBasicScenario-1665080150',owner_user_name='tempest-TestMinimumBasicScenario-1665080150-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T15:09:06Z,user_data=None,user_id='c98bd3f0904e48efa524d598bcad85e9',uuid=b4c55640-85f9-4d75-a4df-6ee77b21ca73,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c6bf5189-ce8b-4ff1-9eba-e3d4195a6cbf", "address": "fa:16:3e:7f:85:09", "network": {"id": "e22d6ddc-0339-4395-bc21-95081825f05b", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1496899124-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5b43342be22543f79d4a56e26c6d0c96", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc6bf5189-ce", "ovs_interfaceid": "c6bf5189-ce8b-4ff1-9eba-e3d4195a6cbf", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 20 15:09:07 compute-1 nova_compute[225855]: 2026-01-20 15:09:07.845 225859 DEBUG nova.network.os_vif_util [None req-67061584-2fa7-4fe7-a0eb-ebb74d618309 c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Converting VIF {"id": "c6bf5189-ce8b-4ff1-9eba-e3d4195a6cbf", "address": "fa:16:3e:7f:85:09", "network": {"id": "e22d6ddc-0339-4395-bc21-95081825f05b", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1496899124-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5b43342be22543f79d4a56e26c6d0c96", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc6bf5189-ce", "ovs_interfaceid": "c6bf5189-ce8b-4ff1-9eba-e3d4195a6cbf", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 15:09:07 compute-1 nova_compute[225855]: 2026-01-20 15:09:07.845 225859 DEBUG nova.network.os_vif_util [None req-67061584-2fa7-4fe7-a0eb-ebb74d618309 c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:7f:85:09,bridge_name='br-int',has_traffic_filtering=True,id=c6bf5189-ce8b-4ff1-9eba-e3d4195a6cbf,network=Network(e22d6ddc-0339-4395-bc21-95081825f05b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc6bf5189-ce') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 15:09:07 compute-1 nova_compute[225855]: 2026-01-20 15:09:07.846 225859 DEBUG nova.objects.instance [None req-67061584-2fa7-4fe7-a0eb-ebb74d618309 c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Lazy-loading 'pci_devices' on Instance uuid b4c55640-85f9-4d75-a4df-6ee77b21ca73 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:09:07 compute-1 nova_compute[225855]: 2026-01-20 15:09:07.859 225859 DEBUG nova.virt.libvirt.driver [None req-67061584-2fa7-4fe7-a0eb-ebb74d618309 c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] [instance: b4c55640-85f9-4d75-a4df-6ee77b21ca73] End _get_guest_xml xml=<domain type="kvm">
Jan 20 15:09:07 compute-1 nova_compute[225855]:   <uuid>b4c55640-85f9-4d75-a4df-6ee77b21ca73</uuid>
Jan 20 15:09:07 compute-1 nova_compute[225855]:   <name>instance-000000aa</name>
Jan 20 15:09:07 compute-1 nova_compute[225855]:   <memory>131072</memory>
Jan 20 15:09:07 compute-1 nova_compute[225855]:   <vcpu>1</vcpu>
Jan 20 15:09:07 compute-1 nova_compute[225855]:   <metadata>
Jan 20 15:09:07 compute-1 nova_compute[225855]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 15:09:07 compute-1 nova_compute[225855]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 15:09:07 compute-1 nova_compute[225855]:       <nova:name>tempest-TestMinimumBasicScenario-server-2033880413</nova:name>
Jan 20 15:09:07 compute-1 nova_compute[225855]:       <nova:creationTime>2026-01-20 15:09:06</nova:creationTime>
Jan 20 15:09:07 compute-1 nova_compute[225855]:       <nova:flavor name="m1.nano">
Jan 20 15:09:07 compute-1 nova_compute[225855]:         <nova:memory>128</nova:memory>
Jan 20 15:09:07 compute-1 nova_compute[225855]:         <nova:disk>1</nova:disk>
Jan 20 15:09:07 compute-1 nova_compute[225855]:         <nova:swap>0</nova:swap>
Jan 20 15:09:07 compute-1 nova_compute[225855]:         <nova:ephemeral>0</nova:ephemeral>
Jan 20 15:09:07 compute-1 nova_compute[225855]:         <nova:vcpus>1</nova:vcpus>
Jan 20 15:09:07 compute-1 nova_compute[225855]:       </nova:flavor>
Jan 20 15:09:07 compute-1 nova_compute[225855]:       <nova:owner>
Jan 20 15:09:07 compute-1 nova_compute[225855]:         <nova:user uuid="c98bd3f0904e48efa524d598bcad85e9">tempest-TestMinimumBasicScenario-1665080150-project-member</nova:user>
Jan 20 15:09:07 compute-1 nova_compute[225855]:         <nova:project uuid="5b43342be22543f79d4a56e26c6d0c96">tempest-TestMinimumBasicScenario-1665080150</nova:project>
Jan 20 15:09:07 compute-1 nova_compute[225855]:       </nova:owner>
Jan 20 15:09:07 compute-1 nova_compute[225855]:       <nova:root type="image" uuid="5b64c953-6df3-45a3-ae28-e419ba117bb2"/>
Jan 20 15:09:07 compute-1 nova_compute[225855]:       <nova:ports>
Jan 20 15:09:07 compute-1 nova_compute[225855]:         <nova:port uuid="c6bf5189-ce8b-4ff1-9eba-e3d4195a6cbf">
Jan 20 15:09:07 compute-1 nova_compute[225855]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Jan 20 15:09:07 compute-1 nova_compute[225855]:         </nova:port>
Jan 20 15:09:07 compute-1 nova_compute[225855]:       </nova:ports>
Jan 20 15:09:07 compute-1 nova_compute[225855]:     </nova:instance>
Jan 20 15:09:07 compute-1 nova_compute[225855]:   </metadata>
Jan 20 15:09:07 compute-1 nova_compute[225855]:   <sysinfo type="smbios">
Jan 20 15:09:07 compute-1 nova_compute[225855]:     <system>
Jan 20 15:09:07 compute-1 nova_compute[225855]:       <entry name="manufacturer">RDO</entry>
Jan 20 15:09:07 compute-1 nova_compute[225855]:       <entry name="product">OpenStack Compute</entry>
Jan 20 15:09:07 compute-1 nova_compute[225855]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 15:09:07 compute-1 nova_compute[225855]:       <entry name="serial">b4c55640-85f9-4d75-a4df-6ee77b21ca73</entry>
Jan 20 15:09:07 compute-1 nova_compute[225855]:       <entry name="uuid">b4c55640-85f9-4d75-a4df-6ee77b21ca73</entry>
Jan 20 15:09:07 compute-1 nova_compute[225855]:       <entry name="family">Virtual Machine</entry>
Jan 20 15:09:07 compute-1 nova_compute[225855]:     </system>
Jan 20 15:09:07 compute-1 nova_compute[225855]:   </sysinfo>
Jan 20 15:09:07 compute-1 nova_compute[225855]:   <os>
Jan 20 15:09:07 compute-1 nova_compute[225855]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 20 15:09:07 compute-1 nova_compute[225855]:     <boot dev="hd"/>
Jan 20 15:09:07 compute-1 nova_compute[225855]:     <smbios mode="sysinfo"/>
Jan 20 15:09:07 compute-1 nova_compute[225855]:   </os>
Jan 20 15:09:07 compute-1 nova_compute[225855]:   <features>
Jan 20 15:09:07 compute-1 nova_compute[225855]:     <acpi/>
Jan 20 15:09:07 compute-1 nova_compute[225855]:     <apic/>
Jan 20 15:09:07 compute-1 nova_compute[225855]:     <vmcoreinfo/>
Jan 20 15:09:07 compute-1 nova_compute[225855]:   </features>
Jan 20 15:09:07 compute-1 nova_compute[225855]:   <clock offset="utc">
Jan 20 15:09:07 compute-1 nova_compute[225855]:     <timer name="pit" tickpolicy="delay"/>
Jan 20 15:09:07 compute-1 nova_compute[225855]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 20 15:09:07 compute-1 nova_compute[225855]:     <timer name="hpet" present="no"/>
Jan 20 15:09:07 compute-1 nova_compute[225855]:   </clock>
Jan 20 15:09:07 compute-1 nova_compute[225855]:   <cpu mode="custom" match="exact">
Jan 20 15:09:07 compute-1 nova_compute[225855]:     <model>Nehalem</model>
Jan 20 15:09:07 compute-1 nova_compute[225855]:     <topology sockets="1" cores="1" threads="1"/>
Jan 20 15:09:07 compute-1 nova_compute[225855]:   </cpu>
Jan 20 15:09:07 compute-1 nova_compute[225855]:   <devices>
Jan 20 15:09:07 compute-1 nova_compute[225855]:     <disk type="network" device="disk">
Jan 20 15:09:07 compute-1 nova_compute[225855]:       <driver type="raw" cache="none"/>
Jan 20 15:09:07 compute-1 nova_compute[225855]:       <source protocol="rbd" name="vms/b4c55640-85f9-4d75-a4df-6ee77b21ca73_disk">
Jan 20 15:09:07 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 15:09:07 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 15:09:07 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 15:09:07 compute-1 nova_compute[225855]:       </source>
Jan 20 15:09:07 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 15:09:07 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 15:09:07 compute-1 nova_compute[225855]:       </auth>
Jan 20 15:09:07 compute-1 nova_compute[225855]:       <target dev="vda" bus="virtio"/>
Jan 20 15:09:07 compute-1 nova_compute[225855]:     </disk>
Jan 20 15:09:07 compute-1 nova_compute[225855]:     <disk type="network" device="cdrom">
Jan 20 15:09:07 compute-1 nova_compute[225855]:       <driver type="raw" cache="none"/>
Jan 20 15:09:07 compute-1 nova_compute[225855]:       <source protocol="rbd" name="vms/b4c55640-85f9-4d75-a4df-6ee77b21ca73_disk.config">
Jan 20 15:09:07 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 15:09:07 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 15:09:07 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 15:09:07 compute-1 nova_compute[225855]:       </source>
Jan 20 15:09:07 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 15:09:07 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 15:09:07 compute-1 nova_compute[225855]:       </auth>
Jan 20 15:09:07 compute-1 nova_compute[225855]:       <target dev="sda" bus="sata"/>
Jan 20 15:09:07 compute-1 nova_compute[225855]:     </disk>
Jan 20 15:09:07 compute-1 nova_compute[225855]:     <disk type="network" device="disk">
Jan 20 15:09:07 compute-1 nova_compute[225855]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 20 15:09:07 compute-1 nova_compute[225855]:       <source protocol="rbd" name="volumes/volume-1ae2be75-c922-4458-bd11-a97b4f6fdd2b">
Jan 20 15:09:07 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 15:09:07 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 15:09:07 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 15:09:07 compute-1 nova_compute[225855]:       </source>
Jan 20 15:09:07 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 15:09:07 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 15:09:07 compute-1 nova_compute[225855]:       </auth>
Jan 20 15:09:07 compute-1 nova_compute[225855]:       <target dev="vdb" bus="virtio"/>
Jan 20 15:09:07 compute-1 nova_compute[225855]:       <serial>1ae2be75-c922-4458-bd11-a97b4f6fdd2b</serial>
Jan 20 15:09:07 compute-1 nova_compute[225855]:     </disk>
Jan 20 15:09:07 compute-1 nova_compute[225855]:     <interface type="ethernet">
Jan 20 15:09:07 compute-1 nova_compute[225855]:       <mac address="fa:16:3e:7f:85:09"/>
Jan 20 15:09:07 compute-1 nova_compute[225855]:       <model type="virtio"/>
Jan 20 15:09:07 compute-1 nova_compute[225855]:       <driver name="vhost" rx_queue_size="512"/>
Jan 20 15:09:07 compute-1 nova_compute[225855]:       <mtu size="1442"/>
Jan 20 15:09:07 compute-1 nova_compute[225855]:       <target dev="tapc6bf5189-ce"/>
Jan 20 15:09:07 compute-1 nova_compute[225855]:     </interface>
Jan 20 15:09:07 compute-1 nova_compute[225855]:     <serial type="pty">
Jan 20 15:09:07 compute-1 nova_compute[225855]:       <log file="/var/lib/nova/instances/b4c55640-85f9-4d75-a4df-6ee77b21ca73/console.log" append="off"/>
Jan 20 15:09:07 compute-1 nova_compute[225855]:     </serial>
Jan 20 15:09:07 compute-1 nova_compute[225855]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 20 15:09:07 compute-1 nova_compute[225855]:     <video>
Jan 20 15:09:07 compute-1 nova_compute[225855]:       <model type="virtio"/>
Jan 20 15:09:07 compute-1 nova_compute[225855]:     </video>
Jan 20 15:09:07 compute-1 nova_compute[225855]:     <input type="tablet" bus="usb"/>
Jan 20 15:09:07 compute-1 nova_compute[225855]:     <input type="keyboard" bus="usb"/>
Jan 20 15:09:07 compute-1 nova_compute[225855]:     <rng model="virtio">
Jan 20 15:09:07 compute-1 nova_compute[225855]:       <backend model="random">/dev/urandom</backend>
Jan 20 15:09:07 compute-1 nova_compute[225855]:     </rng>
Jan 20 15:09:07 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root"/>
Jan 20 15:09:07 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:09:07 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:09:07 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:09:07 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:09:07 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:09:07 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:09:07 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:09:07 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:09:07 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:09:07 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:09:07 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:09:07 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:09:07 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:09:07 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:09:07 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:09:07 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:09:07 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:09:07 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:09:07 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:09:07 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:09:07 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:09:07 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:09:07 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:09:07 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:09:07 compute-1 nova_compute[225855]:     <controller type="usb" index="0"/>
Jan 20 15:09:07 compute-1 nova_compute[225855]:     <memballoon model="virtio">
Jan 20 15:09:07 compute-1 nova_compute[225855]:       <stats period="10"/>
Jan 20 15:09:07 compute-1 nova_compute[225855]:     </memballoon>
Jan 20 15:09:07 compute-1 nova_compute[225855]:   </devices>
Jan 20 15:09:07 compute-1 nova_compute[225855]: </domain>
Jan 20 15:09:07 compute-1 nova_compute[225855]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 20 15:09:07 compute-1 nova_compute[225855]: 2026-01-20 15:09:07.860 225859 DEBUG nova.virt.libvirt.driver [None req-67061584-2fa7-4fe7-a0eb-ebb74d618309 c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] skipping disk for instance-000000aa as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 15:09:07 compute-1 nova_compute[225855]: 2026-01-20 15:09:07.861 225859 DEBUG nova.virt.libvirt.driver [None req-67061584-2fa7-4fe7-a0eb-ebb74d618309 c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] skipping disk for instance-000000aa as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 15:09:07 compute-1 nova_compute[225855]: 2026-01-20 15:09:07.861 225859 DEBUG nova.virt.libvirt.driver [None req-67061584-2fa7-4fe7-a0eb-ebb74d618309 c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] skipping disk for instance-000000aa as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 15:09:07 compute-1 nova_compute[225855]: 2026-01-20 15:09:07.862 225859 DEBUG nova.virt.libvirt.vif [None req-67061584-2fa7-4fe7-a0eb-ebb74d618309 c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T15:08:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestMinimumBasicScenario-server-2033880413',display_name='tempest-TestMinimumBasicScenario-server-2033880413',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testminimumbasicscenario-server-2033880413',id=170,image_ref='5b64c953-6df3-45a3-ae28-e419ba117bb2',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAWye8KRiKbxhnt7Xo8hUChshePiRtRRGKKbmjGtRpAbQNhUcsWAOe/4okup4yaafm+06AxmRjgJ9R8sVLFUEsSHiOZRgv3dFKZL11GpIpeu6UGBzzNxvi+GaA/Guzx6LQ==',key_name='tempest-TestMinimumBasicScenario-1345705235',keypairs=<?>,launch_index=0,launched_at=2026-01-20T15:08:41Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=1,progress=0,project_id='5b43342be22543f79d4a56e26c6d0c96',ramdisk_id='',reservation_id='r-srebood0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='5b64c953-6df3-45a3-ae28-e419ba117bb2',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestMinimumBasicScenario-1665080150',owner_user_name='tempest-TestMinimumBasicScenario-1665080150-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T15:09:06Z,user_data=None,user_id='c98bd3f0904e48efa524d598bcad85e9',uuid=b4c55640-85f9-4d75-a4df-6ee77b21ca73,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c6bf5189-ce8b-4ff1-9eba-e3d4195a6cbf", "address": "fa:16:3e:7f:85:09", "network": {"id": "e22d6ddc-0339-4395-bc21-95081825f05b", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1496899124-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5b43342be22543f79d4a56e26c6d0c96", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc6bf5189-ce", "ovs_interfaceid": "c6bf5189-ce8b-4ff1-9eba-e3d4195a6cbf", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 20 15:09:07 compute-1 nova_compute[225855]: 2026-01-20 15:09:07.862 225859 DEBUG nova.network.os_vif_util [None req-67061584-2fa7-4fe7-a0eb-ebb74d618309 c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Converting VIF {"id": "c6bf5189-ce8b-4ff1-9eba-e3d4195a6cbf", "address": "fa:16:3e:7f:85:09", "network": {"id": "e22d6ddc-0339-4395-bc21-95081825f05b", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1496899124-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5b43342be22543f79d4a56e26c6d0c96", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc6bf5189-ce", "ovs_interfaceid": "c6bf5189-ce8b-4ff1-9eba-e3d4195a6cbf", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 15:09:07 compute-1 nova_compute[225855]: 2026-01-20 15:09:07.863 225859 DEBUG nova.network.os_vif_util [None req-67061584-2fa7-4fe7-a0eb-ebb74d618309 c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:7f:85:09,bridge_name='br-int',has_traffic_filtering=True,id=c6bf5189-ce8b-4ff1-9eba-e3d4195a6cbf,network=Network(e22d6ddc-0339-4395-bc21-95081825f05b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc6bf5189-ce') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 15:09:07 compute-1 nova_compute[225855]: 2026-01-20 15:09:07.863 225859 DEBUG os_vif [None req-67061584-2fa7-4fe7-a0eb-ebb74d618309 c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:7f:85:09,bridge_name='br-int',has_traffic_filtering=True,id=c6bf5189-ce8b-4ff1-9eba-e3d4195a6cbf,network=Network(e22d6ddc-0339-4395-bc21-95081825f05b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc6bf5189-ce') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 20 15:09:07 compute-1 nova_compute[225855]: 2026-01-20 15:09:07.864 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:09:07 compute-1 nova_compute[225855]: 2026-01-20 15:09:07.864 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:09:07 compute-1 nova_compute[225855]: 2026-01-20 15:09:07.865 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 15:09:07 compute-1 nova_compute[225855]: 2026-01-20 15:09:07.868 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:09:07 compute-1 nova_compute[225855]: 2026-01-20 15:09:07.868 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc6bf5189-ce, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:09:07 compute-1 nova_compute[225855]: 2026-01-20 15:09:07.869 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc6bf5189-ce, col_values=(('external_ids', {'iface-id': 'c6bf5189-ce8b-4ff1-9eba-e3d4195a6cbf', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:7f:85:09', 'vm-uuid': 'b4c55640-85f9-4d75-a4df-6ee77b21ca73'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:09:07 compute-1 nova_compute[225855]: 2026-01-20 15:09:07.870 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:09:07 compute-1 NetworkManager[49104]: <info>  [1768921747.8712] manager: (tapc6bf5189-ce): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/313)
Jan 20 15:09:07 compute-1 nova_compute[225855]: 2026-01-20 15:09:07.872 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 15:09:07 compute-1 nova_compute[225855]: 2026-01-20 15:09:07.874 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:09:07 compute-1 nova_compute[225855]: 2026-01-20 15:09:07.875 225859 INFO os_vif [None req-67061584-2fa7-4fe7-a0eb-ebb74d618309 c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:7f:85:09,bridge_name='br-int',has_traffic_filtering=True,id=c6bf5189-ce8b-4ff1-9eba-e3d4195a6cbf,network=Network(e22d6ddc-0339-4395-bc21-95081825f05b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc6bf5189-ce')
Jan 20 15:09:07 compute-1 kernel: tapc6bf5189-ce: entered promiscuous mode
Jan 20 15:09:07 compute-1 NetworkManager[49104]: <info>  [1768921747.9380] manager: (tapc6bf5189-ce): new Tun device (/org/freedesktop/NetworkManager/Devices/314)
Jan 20 15:09:07 compute-1 ovn_controller[130490]: 2026-01-20T15:09:07Z|00747|binding|INFO|Claiming lport c6bf5189-ce8b-4ff1-9eba-e3d4195a6cbf for this chassis.
Jan 20 15:09:07 compute-1 ovn_controller[130490]: 2026-01-20T15:09:07Z|00748|binding|INFO|c6bf5189-ce8b-4ff1-9eba-e3d4195a6cbf: Claiming fa:16:3e:7f:85:09 10.100.0.3
Jan 20 15:09:07 compute-1 nova_compute[225855]: 2026-01-20 15:09:07.938 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:09:07 compute-1 systemd-udevd[295783]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 15:09:07 compute-1 NetworkManager[49104]: <info>  [1768921747.9498] device (tapc6bf5189-ce): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 20 15:09:07 compute-1 NetworkManager[49104]: <info>  [1768921747.9511] device (tapc6bf5189-ce): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 20 15:09:07 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:09:07.952 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7f:85:09 10.100.0.3'], port_security=['fa:16:3e:7f:85:09 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'b4c55640-85f9-4d75-a4df-6ee77b21ca73', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e22d6ddc-0339-4395-bc21-95081825f05b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5b43342be22543f79d4a56e26c6d0c96', 'neutron:revision_number': '6', 'neutron:security_group_ids': '8514424f-703f-4374-a78e-584f6e7c233b e405f81b-5d97-4611-81c1-7315a012415b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.229'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=11c19619-1e7e-40ee-be83-c9dbc347543e, chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], logical_port=c6bf5189-ce8b-4ff1-9eba-e3d4195a6cbf) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:09:07 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:09:07.954 140354 INFO neutron.agent.ovn.metadata.agent [-] Port c6bf5189-ce8b-4ff1-9eba-e3d4195a6cbf in datapath e22d6ddc-0339-4395-bc21-95081825f05b bound to our chassis
Jan 20 15:09:07 compute-1 ovn_controller[130490]: 2026-01-20T15:09:07Z|00749|binding|INFO|Setting lport c6bf5189-ce8b-4ff1-9eba-e3d4195a6cbf ovn-installed in OVS
Jan 20 15:09:07 compute-1 ovn_controller[130490]: 2026-01-20T15:09:07Z|00750|binding|INFO|Setting lport c6bf5189-ce8b-4ff1-9eba-e3d4195a6cbf up in Southbound
Jan 20 15:09:07 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:09:07.956 140354 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e22d6ddc-0339-4395-bc21-95081825f05b
Jan 20 15:09:07 compute-1 nova_compute[225855]: 2026-01-20 15:09:07.957 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:09:07 compute-1 nova_compute[225855]: 2026-01-20 15:09:07.960 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:09:07 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:09:07.967 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[49f3d1fe-9508-475c-b9b7-00ffbae96b29]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:09:07 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:09:07.968 140354 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tape22d6ddc-01 in ovnmeta-e22d6ddc-0339-4395-bc21-95081825f05b namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 20 15:09:07 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:09:07.969 229707 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tape22d6ddc-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 20 15:09:07 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:09:07.969 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[e2607557-fb72-4d71-900e-8bc9f9018ca4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:09:07 compute-1 systemd-machined[194361]: New machine qemu-88-instance-000000aa.
Jan 20 15:09:07 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:09:07.970 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[54d93927-f86a-43c1-829c-2cfd1db85208]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:09:07 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:09:07.980 140466 DEBUG oslo.privsep.daemon [-] privsep: reply[05978d9c-98c3-4f89-96c1-697230198f9e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:09:07 compute-1 systemd[1]: Started Virtual Machine qemu-88-instance-000000aa.
Jan 20 15:09:08 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:09:08.001 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[7dcc843b-afde-4e0f-a73f-5c860feefaf0]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:09:08 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:09:08.030 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[8d0fac90-dfdf-4f42-b40c-1899ece237de]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:09:08 compute-1 NetworkManager[49104]: <info>  [1768921748.0400] manager: (tape22d6ddc-00): new Veth device (/org/freedesktop/NetworkManager/Devices/315)
Jan 20 15:09:08 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:09:08.039 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[7e2adc23-08ee-4865-9c55-1198c146ad78]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:09:08 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:09:08.076 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[53c8e9d1-1682-4e32-815f-6aa7585fd50c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:09:08 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:09:08.079 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[ffdac37e-c7c2-44eb-9000-b005fa83c85f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:09:08 compute-1 NetworkManager[49104]: <info>  [1768921748.1007] device (tape22d6ddc-00): carrier: link connected
Jan 20 15:09:08 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:09:08.106 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[0b1a122c-c306-41d9-90ac-ab4f87acbaa7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:09:08 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e380 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:09:08 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:09:08.122 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[3e9ba8eb-9d11-43c3-9f68-fdbf82b12cea]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape22d6ddc-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9e:3f:5c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 214], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 678702, 'reachable_time': 15877, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 295968, 'error': None, 'target': 'ovnmeta-e22d6ddc-0339-4395-bc21-95081825f05b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:09:08 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:09:08.138 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[e637428b-c5f6-4b5c-9298-58de1b6073ff]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe9e:3f5c'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 678702, 'tstamp': 678702}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 295969, 'error': None, 'target': 'ovnmeta-e22d6ddc-0339-4395-bc21-95081825f05b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:09:08 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:09:08.158 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[348ec13a-958f-4ea5-abb4-f41d630e573b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape22d6ddc-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9e:3f:5c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 214], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 678702, 'reachable_time': 15877, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 295970, 'error': None, 'target': 'ovnmeta-e22d6ddc-0339-4395-bc21-95081825f05b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:09:08 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:09:08.186 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[857e585e-0f9f-4854-9e15-3694e3c6d0c6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:09:08 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:09:08.244 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[6a65875f-20cd-49a0-8c3b-315dc0ecc5a8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:09:08 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:09:08.245 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape22d6ddc-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:09:08 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:09:08.245 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 15:09:08 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:09:08.245 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape22d6ddc-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:09:08 compute-1 NetworkManager[49104]: <info>  [1768921748.2480] manager: (tape22d6ddc-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/316)
Jan 20 15:09:08 compute-1 kernel: tape22d6ddc-00: entered promiscuous mode
Jan 20 15:09:08 compute-1 nova_compute[225855]: 2026-01-20 15:09:08.249 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:09:08 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:09:08.251 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape22d6ddc-00, col_values=(('external_ids', {'iface-id': '940a1442-b0ab-49a2-87e8-750659cdda8d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:09:08 compute-1 nova_compute[225855]: 2026-01-20 15:09:08.252 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:09:08 compute-1 ovn_controller[130490]: 2026-01-20T15:09:08Z|00751|binding|INFO|Releasing lport 940a1442-b0ab-49a2-87e8-750659cdda8d from this chassis (sb_readonly=0)
Jan 20 15:09:08 compute-1 nova_compute[225855]: 2026-01-20 15:09:08.267 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:09:08 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:09:08.269 140354 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/e22d6ddc-0339-4395-bc21-95081825f05b.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/e22d6ddc-0339-4395-bc21-95081825f05b.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 20 15:09:08 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:09:08.270 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[03c002bd-4a81-4a2e-a577-f7eff7ac4346]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:09:08 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:09:08.271 140354 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 20 15:09:08 compute-1 ovn_metadata_agent[140349]: global
Jan 20 15:09:08 compute-1 ovn_metadata_agent[140349]:     log         /dev/log local0 debug
Jan 20 15:09:08 compute-1 ovn_metadata_agent[140349]:     log-tag     haproxy-metadata-proxy-e22d6ddc-0339-4395-bc21-95081825f05b
Jan 20 15:09:08 compute-1 ovn_metadata_agent[140349]:     user        root
Jan 20 15:09:08 compute-1 ovn_metadata_agent[140349]:     group       root
Jan 20 15:09:08 compute-1 ovn_metadata_agent[140349]:     maxconn     1024
Jan 20 15:09:08 compute-1 ovn_metadata_agent[140349]:     pidfile     /var/lib/neutron/external/pids/e22d6ddc-0339-4395-bc21-95081825f05b.pid.haproxy
Jan 20 15:09:08 compute-1 ovn_metadata_agent[140349]:     daemon
Jan 20 15:09:08 compute-1 ovn_metadata_agent[140349]: 
Jan 20 15:09:08 compute-1 ovn_metadata_agent[140349]: defaults
Jan 20 15:09:08 compute-1 ovn_metadata_agent[140349]:     log global
Jan 20 15:09:08 compute-1 ovn_metadata_agent[140349]:     mode http
Jan 20 15:09:08 compute-1 ovn_metadata_agent[140349]:     option httplog
Jan 20 15:09:08 compute-1 ovn_metadata_agent[140349]:     option dontlognull
Jan 20 15:09:08 compute-1 ovn_metadata_agent[140349]:     option http-server-close
Jan 20 15:09:08 compute-1 ovn_metadata_agent[140349]:     option forwardfor
Jan 20 15:09:08 compute-1 ovn_metadata_agent[140349]:     retries                 3
Jan 20 15:09:08 compute-1 ovn_metadata_agent[140349]:     timeout http-request    30s
Jan 20 15:09:08 compute-1 ovn_metadata_agent[140349]:     timeout connect         30s
Jan 20 15:09:08 compute-1 ovn_metadata_agent[140349]:     timeout client          32s
Jan 20 15:09:08 compute-1 ovn_metadata_agent[140349]:     timeout server          32s
Jan 20 15:09:08 compute-1 ovn_metadata_agent[140349]:     timeout http-keep-alive 30s
Jan 20 15:09:08 compute-1 ovn_metadata_agent[140349]: 
Jan 20 15:09:08 compute-1 ovn_metadata_agent[140349]: 
Jan 20 15:09:08 compute-1 ovn_metadata_agent[140349]: listen listener
Jan 20 15:09:08 compute-1 ovn_metadata_agent[140349]:     bind 169.254.169.254:80
Jan 20 15:09:08 compute-1 ovn_metadata_agent[140349]:     server metadata /var/lib/neutron/metadata_proxy
Jan 20 15:09:08 compute-1 ovn_metadata_agent[140349]:     http-request add-header X-OVN-Network-ID e22d6ddc-0339-4395-bc21-95081825f05b
Jan 20 15:09:08 compute-1 ovn_metadata_agent[140349]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 20 15:09:08 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:09:08.271 140354 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-e22d6ddc-0339-4395-bc21-95081825f05b', 'env', 'PROCESS_TAG=haproxy-e22d6ddc-0339-4395-bc21-95081825f05b', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/e22d6ddc-0339-4395-bc21-95081825f05b.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 20 15:09:08 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:09:08 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:09:08 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:09:08.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:09:08 compute-1 nova_compute[225855]: 2026-01-20 15:09:08.559 225859 DEBUG nova.virt.libvirt.host [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Removed pending event for b4c55640-85f9-4d75-a4df-6ee77b21ca73 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Jan 20 15:09:08 compute-1 nova_compute[225855]: 2026-01-20 15:09:08.561 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768921748.5593107, b4c55640-85f9-4d75-a4df-6ee77b21ca73 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:09:08 compute-1 nova_compute[225855]: 2026-01-20 15:09:08.561 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: b4c55640-85f9-4d75-a4df-6ee77b21ca73] VM Resumed (Lifecycle Event)
Jan 20 15:09:08 compute-1 nova_compute[225855]: 2026-01-20 15:09:08.563 225859 DEBUG nova.compute.manager [None req-67061584-2fa7-4fe7-a0eb-ebb74d618309 c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] [instance: b4c55640-85f9-4d75-a4df-6ee77b21ca73] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 20 15:09:08 compute-1 nova_compute[225855]: 2026-01-20 15:09:08.566 225859 INFO nova.virt.libvirt.driver [-] [instance: b4c55640-85f9-4d75-a4df-6ee77b21ca73] Instance rebooted successfully.
Jan 20 15:09:08 compute-1 nova_compute[225855]: 2026-01-20 15:09:08.566 225859 DEBUG nova.compute.manager [None req-67061584-2fa7-4fe7-a0eb-ebb74d618309 c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] [instance: b4c55640-85f9-4d75-a4df-6ee77b21ca73] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:09:08 compute-1 nova_compute[225855]: 2026-01-20 15:09:08.585 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: b4c55640-85f9-4d75-a4df-6ee77b21ca73] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:09:08 compute-1 nova_compute[225855]: 2026-01-20 15:09:08.588 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: b4c55640-85f9-4d75-a4df-6ee77b21ca73] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: reboot_started_hard, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 15:09:08 compute-1 nova_compute[225855]: 2026-01-20 15:09:08.606 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: b4c55640-85f9-4d75-a4df-6ee77b21ca73] During sync_power_state the instance has a pending task (reboot_started_hard). Skip.
Jan 20 15:09:08 compute-1 nova_compute[225855]: 2026-01-20 15:09:08.607 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768921748.5600595, b4c55640-85f9-4d75-a4df-6ee77b21ca73 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:09:08 compute-1 nova_compute[225855]: 2026-01-20 15:09:08.607 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: b4c55640-85f9-4d75-a4df-6ee77b21ca73] VM Started (Lifecycle Event)
Jan 20 15:09:08 compute-1 nova_compute[225855]: 2026-01-20 15:09:08.612 225859 DEBUG oslo_concurrency.lockutils [None req-67061584-2fa7-4fe7-a0eb-ebb74d618309 c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Lock "b4c55640-85f9-4d75-a4df-6ee77b21ca73" "released" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: held 3.463s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:09:08 compute-1 nova_compute[225855]: 2026-01-20 15:09:08.624 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: b4c55640-85f9-4d75-a4df-6ee77b21ca73] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:09:08 compute-1 nova_compute[225855]: 2026-01-20 15:09:08.627 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: b4c55640-85f9-4d75-a4df-6ee77b21ca73] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 15:09:08 compute-1 podman[296062]: 2026-01-20 15:09:08.640546559 +0000 UTC m=+0.066831067 container create 5934c01e984de0e040eb3c06a49fe9ae7a05253acf5b998f13a7c3ee6661ebfe (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e22d6ddc-0339-4395-bc21-95081825f05b, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 20 15:09:08 compute-1 podman[296062]: 2026-01-20 15:09:08.596266783 +0000 UTC m=+0.022551311 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 20 15:09:08 compute-1 systemd[1]: Started libpod-conmon-5934c01e984de0e040eb3c06a49fe9ae7a05253acf5b998f13a7c3ee6661ebfe.scope.
Jan 20 15:09:08 compute-1 systemd[1]: Started libcrun container.
Jan 20 15:09:08 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54aa6a90ad0d5f14e5fd3fa3051693ab8a0283f2ca8d04c2beba4ab7db8a43eb/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 20 15:09:08 compute-1 podman[296062]: 2026-01-20 15:09:08.75475293 +0000 UTC m=+0.181037468 container init 5934c01e984de0e040eb3c06a49fe9ae7a05253acf5b998f13a7c3ee6661ebfe (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e22d6ddc-0339-4395-bc21-95081825f05b, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 20 15:09:08 compute-1 podman[296062]: 2026-01-20 15:09:08.760772931 +0000 UTC m=+0.187057439 container start 5934c01e984de0e040eb3c06a49fe9ae7a05253acf5b998f13a7c3ee6661ebfe (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e22d6ddc-0339-4395-bc21-95081825f05b, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 20 15:09:08 compute-1 neutron-haproxy-ovnmeta-e22d6ddc-0339-4395-bc21-95081825f05b[296077]: [NOTICE]   (296081) : New worker (296083) forked
Jan 20 15:09:08 compute-1 neutron-haproxy-ovnmeta-e22d6ddc-0339-4395-bc21-95081825f05b[296077]: [NOTICE]   (296081) : Loading success.
Jan 20 15:09:08 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:09:08 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:09:08 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:09:08.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:09:09 compute-1 nova_compute[225855]: 2026-01-20 15:09:09.081 225859 DEBUG nova.compute.manager [req-3ed92bf2-7d06-4c1e-9650-c6f92520b8cd req-43575bf7-281c-42d5-98d9-d51d5656f8c7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b4c55640-85f9-4d75-a4df-6ee77b21ca73] Received event network-vif-plugged-c6bf5189-ce8b-4ff1-9eba-e3d4195a6cbf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:09:09 compute-1 nova_compute[225855]: 2026-01-20 15:09:09.081 225859 DEBUG oslo_concurrency.lockutils [req-3ed92bf2-7d06-4c1e-9650-c6f92520b8cd req-43575bf7-281c-42d5-98d9-d51d5656f8c7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "b4c55640-85f9-4d75-a4df-6ee77b21ca73-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:09:09 compute-1 nova_compute[225855]: 2026-01-20 15:09:09.083 225859 DEBUG oslo_concurrency.lockutils [req-3ed92bf2-7d06-4c1e-9650-c6f92520b8cd req-43575bf7-281c-42d5-98d9-d51d5656f8c7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "b4c55640-85f9-4d75-a4df-6ee77b21ca73-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:09:09 compute-1 nova_compute[225855]: 2026-01-20 15:09:09.084 225859 DEBUG oslo_concurrency.lockutils [req-3ed92bf2-7d06-4c1e-9650-c6f92520b8cd req-43575bf7-281c-42d5-98d9-d51d5656f8c7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "b4c55640-85f9-4d75-a4df-6ee77b21ca73-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:09:09 compute-1 nova_compute[225855]: 2026-01-20 15:09:09.084 225859 DEBUG nova.compute.manager [req-3ed92bf2-7d06-4c1e-9650-c6f92520b8cd req-43575bf7-281c-42d5-98d9-d51d5656f8c7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b4c55640-85f9-4d75-a4df-6ee77b21ca73] No waiting events found dispatching network-vif-plugged-c6bf5189-ce8b-4ff1-9eba-e3d4195a6cbf pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:09:09 compute-1 nova_compute[225855]: 2026-01-20 15:09:09.084 225859 WARNING nova.compute.manager [req-3ed92bf2-7d06-4c1e-9650-c6f92520b8cd req-43575bf7-281c-42d5-98d9-d51d5656f8c7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b4c55640-85f9-4d75-a4df-6ee77b21ca73] Received unexpected event network-vif-plugged-c6bf5189-ce8b-4ff1-9eba-e3d4195a6cbf for instance with vm_state active and task_state None.
Jan 20 15:09:09 compute-1 nova_compute[225855]: 2026-01-20 15:09:09.085 225859 DEBUG nova.compute.manager [req-3ed92bf2-7d06-4c1e-9650-c6f92520b8cd req-43575bf7-281c-42d5-98d9-d51d5656f8c7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b4c55640-85f9-4d75-a4df-6ee77b21ca73] Received event network-vif-plugged-c6bf5189-ce8b-4ff1-9eba-e3d4195a6cbf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:09:09 compute-1 nova_compute[225855]: 2026-01-20 15:09:09.085 225859 DEBUG oslo_concurrency.lockutils [req-3ed92bf2-7d06-4c1e-9650-c6f92520b8cd req-43575bf7-281c-42d5-98d9-d51d5656f8c7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "b4c55640-85f9-4d75-a4df-6ee77b21ca73-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:09:09 compute-1 nova_compute[225855]: 2026-01-20 15:09:09.085 225859 DEBUG oslo_concurrency.lockutils [req-3ed92bf2-7d06-4c1e-9650-c6f92520b8cd req-43575bf7-281c-42d5-98d9-d51d5656f8c7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "b4c55640-85f9-4d75-a4df-6ee77b21ca73-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:09:09 compute-1 nova_compute[225855]: 2026-01-20 15:09:09.086 225859 DEBUG oslo_concurrency.lockutils [req-3ed92bf2-7d06-4c1e-9650-c6f92520b8cd req-43575bf7-281c-42d5-98d9-d51d5656f8c7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "b4c55640-85f9-4d75-a4df-6ee77b21ca73-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:09:09 compute-1 nova_compute[225855]: 2026-01-20 15:09:09.086 225859 DEBUG nova.compute.manager [req-3ed92bf2-7d06-4c1e-9650-c6f92520b8cd req-43575bf7-281c-42d5-98d9-d51d5656f8c7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b4c55640-85f9-4d75-a4df-6ee77b21ca73] No waiting events found dispatching network-vif-plugged-c6bf5189-ce8b-4ff1-9eba-e3d4195a6cbf pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:09:09 compute-1 nova_compute[225855]: 2026-01-20 15:09:09.086 225859 WARNING nova.compute.manager [req-3ed92bf2-7d06-4c1e-9650-c6f92520b8cd req-43575bf7-281c-42d5-98d9-d51d5656f8c7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b4c55640-85f9-4d75-a4df-6ee77b21ca73] Received unexpected event network-vif-plugged-c6bf5189-ce8b-4ff1-9eba-e3d4195a6cbf for instance with vm_state active and task_state None.
Jan 20 15:09:09 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/4222916982' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:09:09 compute-1 ceph-mon[81775]: pgmap v2571: 321 pgs: 17 active+clean+snaptrim_wait, 4 active+clean+snaptrim, 300 active+clean; 594 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 868 KiB/s rd, 2.7 MiB/s wr, 181 op/s
Jan 20 15:09:10 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:09:10 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:09:10 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:09:10.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:09:10 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:09:10 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:09:10 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:09:10.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:09:10 compute-1 sudo[296093]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:09:10 compute-1 sudo[296093]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:09:10 compute-1 sudo[296093]: pam_unix(sudo:session): session closed for user root
Jan 20 15:09:10 compute-1 sudo[296118]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 15:09:10 compute-1 sudo[296118]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:09:10 compute-1 sudo[296118]: pam_unix(sudo:session): session closed for user root
Jan 20 15:09:11 compute-1 nova_compute[225855]: 2026-01-20 15:09:11.196 225859 DEBUG nova.compute.manager [req-b11fa76b-94ff-4357-8541-b1e664e64da0 req-f129f790-59ca-4771-9c48-edb270971680 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b4c55640-85f9-4d75-a4df-6ee77b21ca73] Received event network-vif-plugged-c6bf5189-ce8b-4ff1-9eba-e3d4195a6cbf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:09:11 compute-1 nova_compute[225855]: 2026-01-20 15:09:11.197 225859 DEBUG oslo_concurrency.lockutils [req-b11fa76b-94ff-4357-8541-b1e664e64da0 req-f129f790-59ca-4771-9c48-edb270971680 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "b4c55640-85f9-4d75-a4df-6ee77b21ca73-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:09:11 compute-1 nova_compute[225855]: 2026-01-20 15:09:11.197 225859 DEBUG oslo_concurrency.lockutils [req-b11fa76b-94ff-4357-8541-b1e664e64da0 req-f129f790-59ca-4771-9c48-edb270971680 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "b4c55640-85f9-4d75-a4df-6ee77b21ca73-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:09:11 compute-1 nova_compute[225855]: 2026-01-20 15:09:11.197 225859 DEBUG oslo_concurrency.lockutils [req-b11fa76b-94ff-4357-8541-b1e664e64da0 req-f129f790-59ca-4771-9c48-edb270971680 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "b4c55640-85f9-4d75-a4df-6ee77b21ca73-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:09:11 compute-1 nova_compute[225855]: 2026-01-20 15:09:11.197 225859 DEBUG nova.compute.manager [req-b11fa76b-94ff-4357-8541-b1e664e64da0 req-f129f790-59ca-4771-9c48-edb270971680 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b4c55640-85f9-4d75-a4df-6ee77b21ca73] No waiting events found dispatching network-vif-plugged-c6bf5189-ce8b-4ff1-9eba-e3d4195a6cbf pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:09:11 compute-1 nova_compute[225855]: 2026-01-20 15:09:11.197 225859 WARNING nova.compute.manager [req-b11fa76b-94ff-4357-8541-b1e664e64da0 req-f129f790-59ca-4771-9c48-edb270971680 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b4c55640-85f9-4d75-a4df-6ee77b21ca73] Received unexpected event network-vif-plugged-c6bf5189-ce8b-4ff1-9eba-e3d4195a6cbf for instance with vm_state active and task_state None.
Jan 20 15:09:11 compute-1 nova_compute[225855]: 2026-01-20 15:09:11.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:09:11 compute-1 nova_compute[225855]: 2026-01-20 15:09:11.340 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 20 15:09:11 compute-1 nova_compute[225855]: 2026-01-20 15:09:11.380 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 20 15:09:11 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e381 e381: 3 total, 3 up, 3 in
Jan 20 15:09:11 compute-1 ceph-mon[81775]: pgmap v2572: 321 pgs: 16 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 303 active+clean; 490 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 2.8 MiB/s wr, 333 op/s
Jan 20 15:09:11 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:09:11 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:09:12 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:09:12 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:09:12 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:09:12.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:09:12 compute-1 nova_compute[225855]: 2026-01-20 15:09:12.532 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:09:12 compute-1 nova_compute[225855]: 2026-01-20 15:09:12.870 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:09:12 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:09:12 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:09:12 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:09:12.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:09:13 compute-1 ovn_controller[130490]: 2026-01-20T15:09:13Z|00752|binding|INFO|Releasing lport b19d6956-fc8d-42c8-af98-d0f2fe9fe3a3 from this chassis (sb_readonly=0)
Jan 20 15:09:13 compute-1 ovn_controller[130490]: 2026-01-20T15:09:13Z|00753|binding|INFO|Releasing lport 940a1442-b0ab-49a2-87e8-750659cdda8d from this chassis (sb_readonly=0)
Jan 20 15:09:13 compute-1 nova_compute[225855]: 2026-01-20 15:09:13.114 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:09:13 compute-1 ceph-mon[81775]: osdmap e381: 3 total, 3 up, 3 in
Jan 20 15:09:13 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e381 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:09:13 compute-1 nova_compute[225855]: 2026-01-20 15:09:13.376 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:09:13 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 20 15:09:13 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/668410972' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:09:13 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 20 15:09:13 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/668410972' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:09:14 compute-1 ceph-mon[81775]: pgmap v2574: 321 pgs: 321 active+clean; 475 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.7 MiB/s rd, 2.0 MiB/s wr, 361 op/s
Jan 20 15:09:14 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/668410972' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:09:14 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/668410972' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:09:14 compute-1 nova_compute[225855]: 2026-01-20 15:09:14.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:09:14 compute-1 nova_compute[225855]: 2026-01-20 15:09:14.340 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 20 15:09:14 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:09:14 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:09:14 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:09:14.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:09:14 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:09:14 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:09:14 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:09:14.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:09:15 compute-1 ceph-mon[81775]: pgmap v2575: 321 pgs: 321 active+clean; 475 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 5.7 MiB/s rd, 574 KiB/s wr, 309 op/s
Jan 20 15:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:09:16.428 140354 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:09:16.429 140354 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:09:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:09:16.429 140354 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:09:16 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:09:16 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:09:16 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:09:16.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:09:16 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e382 e382: 3 total, 3 up, 3 in
Jan 20 15:09:16 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:09:16 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:09:16 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:09:16.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:09:17 compute-1 ceph-mon[81775]: pgmap v2576: 321 pgs: 321 active+clean; 475 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 37 KiB/s wr, 224 op/s
Jan 20 15:09:17 compute-1 ceph-mon[81775]: osdmap e382: 3 total, 3 up, 3 in
Jan 20 15:09:17 compute-1 nova_compute[225855]: 2026-01-20 15:09:17.534 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:09:17 compute-1 nova_compute[225855]: 2026-01-20 15:09:17.873 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:09:18 compute-1 podman[296148]: 2026-01-20 15:09:18.041673642 +0000 UTC m=+0.076599945 container health_status 533cd8e0c3b8b7f910f15cef49e7f879ab46c12ce646a35628c15f13725eb500 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 20 15:09:18 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e382 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:09:18 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:09:18 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:09:18 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:09:18.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:09:18 compute-1 sudo[296168]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:09:18 compute-1 sudo[296168]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:09:18 compute-1 sudo[296168]: pam_unix(sudo:session): session closed for user root
Jan 20 15:09:18 compute-1 sudo[296193]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:09:18 compute-1 sudo[296193]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:09:18 compute-1 sudo[296193]: pam_unix(sudo:session): session closed for user root
Jan 20 15:09:18 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:09:18 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:09:18 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:09:18.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:09:19 compute-1 ovn_controller[130490]: 2026-01-20T15:09:19Z|00754|binding|INFO|Releasing lport b19d6956-fc8d-42c8-af98-d0f2fe9fe3a3 from this chassis (sb_readonly=0)
Jan 20 15:09:19 compute-1 ovn_controller[130490]: 2026-01-20T15:09:19Z|00755|binding|INFO|Releasing lport 940a1442-b0ab-49a2-87e8-750659cdda8d from this chassis (sb_readonly=0)
Jan 20 15:09:19 compute-1 nova_compute[225855]: 2026-01-20 15:09:19.385 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:09:19 compute-1 ceph-mon[81775]: pgmap v2578: 321 pgs: 321 active+clean; 475 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 8.7 KiB/s wr, 110 op/s
Jan 20 15:09:20 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:09:20 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:09:20 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:09:20.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:09:20 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:09:20 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:09:20 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:09:20.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:09:21.385 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=55, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '12:bb:42', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '06:92:24:f7:15:56'}, ipsec=False) old=SB_Global(nb_cfg=54) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:09:21 compute-1 nova_compute[225855]: 2026-01-20 15:09:21.385 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:09:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:09:21.387 140354 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 20 15:09:21 compute-1 ceph-mon[81775]: pgmap v2579: 321 pgs: 321 active+clean; 475 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 6.3 KiB/s wr, 55 op/s
Jan 20 15:09:21 compute-1 ovn_controller[130490]: 2026-01-20T15:09:21Z|00087|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:7f:85:09 10.100.0.3
Jan 20 15:09:22 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:09:22 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:09:22 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:09:22.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:09:22 compute-1 nova_compute[225855]: 2026-01-20 15:09:22.536 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:09:22 compute-1 nova_compute[225855]: 2026-01-20 15:09:22.875 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:09:22 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:09:22 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:09:22 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:09:22.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:09:23 compute-1 ceph-mon[81775]: pgmap v2580: 321 pgs: 321 active+clean; 475 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 5.6 KiB/s wr, 50 op/s
Jan 20 15:09:23 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e382 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:09:24 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:09:24.389 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5ffd4ac3-9266-4927-98ad-20a17782c725, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '55'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:09:24 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:09:24 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:09:24 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:09:24.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:09:24 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:09:24 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:09:24 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:09:24.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:09:25 compute-1 ceph-mon[81775]: pgmap v2581: 321 pgs: 321 active+clean; 480 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 338 KiB/s rd, 246 KiB/s wr, 42 op/s
Jan 20 15:09:26 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:09:26 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:09:26 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:09:26.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:09:26 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:09:26 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:09:26 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:09:26.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:09:27 compute-1 nova_compute[225855]: 2026-01-20 15:09:27.538 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:09:27 compute-1 ceph-mon[81775]: pgmap v2582: 321 pgs: 321 active+clean; 508 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.0 MiB/s rd, 2.6 MiB/s wr, 130 op/s
Jan 20 15:09:27 compute-1 nova_compute[225855]: 2026-01-20 15:09:27.878 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:09:27 compute-1 nova_compute[225855]: 2026-01-20 15:09:27.918 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:09:27 compute-1 nova_compute[225855]: 2026-01-20 15:09:27.951 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Triggering sync for uuid a25af5a3-096f-4363-842e-d960c22eb16b _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Jan 20 15:09:27 compute-1 nova_compute[225855]: 2026-01-20 15:09:27.951 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Triggering sync for uuid b4c55640-85f9-4d75-a4df-6ee77b21ca73 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Jan 20 15:09:27 compute-1 nova_compute[225855]: 2026-01-20 15:09:27.951 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "a25af5a3-096f-4363-842e-d960c22eb16b" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:09:27 compute-1 nova_compute[225855]: 2026-01-20 15:09:27.952 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "a25af5a3-096f-4363-842e-d960c22eb16b" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:09:27 compute-1 nova_compute[225855]: 2026-01-20 15:09:27.952 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "b4c55640-85f9-4d75-a4df-6ee77b21ca73" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:09:27 compute-1 nova_compute[225855]: 2026-01-20 15:09:27.952 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "b4c55640-85f9-4d75-a4df-6ee77b21ca73" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:09:27 compute-1 nova_compute[225855]: 2026-01-20 15:09:27.986 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "b4c55640-85f9-4d75-a4df-6ee77b21ca73" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.034s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:09:27 compute-1 nova_compute[225855]: 2026-01-20 15:09:27.987 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "a25af5a3-096f-4363-842e-d960c22eb16b" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.036s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:09:28 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e382 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:09:28 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:09:28 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:09:28 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:09:28.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:09:28 compute-1 nova_compute[225855]: 2026-01-20 15:09:28.622 225859 DEBUG nova.compute.manager [req-df18a53a-0389-4506-b7d6-2153b1ed8c8e req-0d9f85fd-1a20-4876-a6f6-6366d9e9dc27 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b4c55640-85f9-4d75-a4df-6ee77b21ca73] Received event network-changed-c6bf5189-ce8b-4ff1-9eba-e3d4195a6cbf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:09:28 compute-1 nova_compute[225855]: 2026-01-20 15:09:28.623 225859 DEBUG nova.compute.manager [req-df18a53a-0389-4506-b7d6-2153b1ed8c8e req-0d9f85fd-1a20-4876-a6f6-6366d9e9dc27 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b4c55640-85f9-4d75-a4df-6ee77b21ca73] Refreshing instance network info cache due to event network-changed-c6bf5189-ce8b-4ff1-9eba-e3d4195a6cbf. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 15:09:28 compute-1 nova_compute[225855]: 2026-01-20 15:09:28.624 225859 DEBUG oslo_concurrency.lockutils [req-df18a53a-0389-4506-b7d6-2153b1ed8c8e req-0d9f85fd-1a20-4876-a6f6-6366d9e9dc27 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-b4c55640-85f9-4d75-a4df-6ee77b21ca73" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:09:28 compute-1 nova_compute[225855]: 2026-01-20 15:09:28.624 225859 DEBUG oslo_concurrency.lockutils [req-df18a53a-0389-4506-b7d6-2153b1ed8c8e req-0d9f85fd-1a20-4876-a6f6-6366d9e9dc27 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-b4c55640-85f9-4d75-a4df-6ee77b21ca73" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:09:28 compute-1 nova_compute[225855]: 2026-01-20 15:09:28.624 225859 DEBUG nova.network.neutron [req-df18a53a-0389-4506-b7d6-2153b1ed8c8e req-0d9f85fd-1a20-4876-a6f6-6366d9e9dc27 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b4c55640-85f9-4d75-a4df-6ee77b21ca73] Refreshing network info cache for port c6bf5189-ce8b-4ff1-9eba-e3d4195a6cbf _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 15:09:28 compute-1 ceph-mon[81775]: pgmap v2583: 321 pgs: 321 active+clean; 508 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 864 KiB/s rd, 2.2 MiB/s wr, 109 op/s
Jan 20 15:09:28 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:09:28 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:09:28 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:09:28.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:09:30 compute-1 nova_compute[225855]: 2026-01-20 15:09:30.368 225859 DEBUG nova.network.neutron [req-df18a53a-0389-4506-b7d6-2153b1ed8c8e req-0d9f85fd-1a20-4876-a6f6-6366d9e9dc27 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b4c55640-85f9-4d75-a4df-6ee77b21ca73] Updated VIF entry in instance network info cache for port c6bf5189-ce8b-4ff1-9eba-e3d4195a6cbf. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 15:09:30 compute-1 nova_compute[225855]: 2026-01-20 15:09:30.368 225859 DEBUG nova.network.neutron [req-df18a53a-0389-4506-b7d6-2153b1ed8c8e req-0d9f85fd-1a20-4876-a6f6-6366d9e9dc27 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b4c55640-85f9-4d75-a4df-6ee77b21ca73] Updating instance_info_cache with network_info: [{"id": "c6bf5189-ce8b-4ff1-9eba-e3d4195a6cbf", "address": "fa:16:3e:7f:85:09", "network": {"id": "e22d6ddc-0339-4395-bc21-95081825f05b", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1496899124-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5b43342be22543f79d4a56e26c6d0c96", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc6bf5189-ce", "ovs_interfaceid": "c6bf5189-ce8b-4ff1-9eba-e3d4195a6cbf", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:09:30 compute-1 nova_compute[225855]: 2026-01-20 15:09:30.384 225859 DEBUG oslo_concurrency.lockutils [req-df18a53a-0389-4506-b7d6-2153b1ed8c8e req-0d9f85fd-1a20-4876-a6f6-6366d9e9dc27 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-b4c55640-85f9-4d75-a4df-6ee77b21ca73" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:09:30 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:09:30 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:09:30 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:09:30.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:09:30 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/1946179170' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:09:30 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:09:30 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:09:30 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:09:30.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:09:31 compute-1 nova_compute[225855]: 2026-01-20 15:09:31.561 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:09:31 compute-1 ceph-mon[81775]: pgmap v2584: 321 pgs: 321 active+clean; 510 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1002 KiB/s rd, 2.2 MiB/s wr, 114 op/s
Jan 20 15:09:32 compute-1 nova_compute[225855]: 2026-01-20 15:09:32.072 225859 DEBUG nova.compute.manager [req-574ed8e3-7abb-4edd-be36-a9488ad56efa req-b02e5cbb-f78b-448c-9ac0-657b6fe98137 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b4c55640-85f9-4d75-a4df-6ee77b21ca73] Received event network-changed-c6bf5189-ce8b-4ff1-9eba-e3d4195a6cbf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:09:32 compute-1 nova_compute[225855]: 2026-01-20 15:09:32.073 225859 DEBUG nova.compute.manager [req-574ed8e3-7abb-4edd-be36-a9488ad56efa req-b02e5cbb-f78b-448c-9ac0-657b6fe98137 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b4c55640-85f9-4d75-a4df-6ee77b21ca73] Refreshing instance network info cache due to event network-changed-c6bf5189-ce8b-4ff1-9eba-e3d4195a6cbf. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 15:09:32 compute-1 nova_compute[225855]: 2026-01-20 15:09:32.073 225859 DEBUG oslo_concurrency.lockutils [req-574ed8e3-7abb-4edd-be36-a9488ad56efa req-b02e5cbb-f78b-448c-9ac0-657b6fe98137 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-b4c55640-85f9-4d75-a4df-6ee77b21ca73" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:09:32 compute-1 nova_compute[225855]: 2026-01-20 15:09:32.073 225859 DEBUG oslo_concurrency.lockutils [req-574ed8e3-7abb-4edd-be36-a9488ad56efa req-b02e5cbb-f78b-448c-9ac0-657b6fe98137 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-b4c55640-85f9-4d75-a4df-6ee77b21ca73" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:09:32 compute-1 nova_compute[225855]: 2026-01-20 15:09:32.073 225859 DEBUG nova.network.neutron [req-574ed8e3-7abb-4edd-be36-a9488ad56efa req-b02e5cbb-f78b-448c-9ac0-657b6fe98137 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b4c55640-85f9-4d75-a4df-6ee77b21ca73] Refreshing network info cache for port c6bf5189-ce8b-4ff1-9eba-e3d4195a6cbf _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 15:09:32 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:09:32 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:09:32 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:09:32.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:09:32 compute-1 nova_compute[225855]: 2026-01-20 15:09:32.542 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:09:32 compute-1 nova_compute[225855]: 2026-01-20 15:09:32.880 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:09:32 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:09:32 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:09:32 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:09:32.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:09:32 compute-1 ceph-mon[81775]: pgmap v2585: 321 pgs: 321 active+clean; 510 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.0 MiB/s rd, 2.2 MiB/s wr, 115 op/s
Jan 20 15:09:33 compute-1 nova_compute[225855]: 2026-01-20 15:09:33.312 225859 DEBUG nova.network.neutron [req-574ed8e3-7abb-4edd-be36-a9488ad56efa req-b02e5cbb-f78b-448c-9ac0-657b6fe98137 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b4c55640-85f9-4d75-a4df-6ee77b21ca73] Updated VIF entry in instance network info cache for port c6bf5189-ce8b-4ff1-9eba-e3d4195a6cbf. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 15:09:33 compute-1 nova_compute[225855]: 2026-01-20 15:09:33.312 225859 DEBUG nova.network.neutron [req-574ed8e3-7abb-4edd-be36-a9488ad56efa req-b02e5cbb-f78b-448c-9ac0-657b6fe98137 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b4c55640-85f9-4d75-a4df-6ee77b21ca73] Updating instance_info_cache with network_info: [{"id": "c6bf5189-ce8b-4ff1-9eba-e3d4195a6cbf", "address": "fa:16:3e:7f:85:09", "network": {"id": "e22d6ddc-0339-4395-bc21-95081825f05b", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1496899124-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5b43342be22543f79d4a56e26c6d0c96", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc6bf5189-ce", "ovs_interfaceid": "c6bf5189-ce8b-4ff1-9eba-e3d4195a6cbf", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:09:33 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e382 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:09:33 compute-1 nova_compute[225855]: 2026-01-20 15:09:33.331 225859 DEBUG oslo_concurrency.lockutils [req-574ed8e3-7abb-4edd-be36-a9488ad56efa req-b02e5cbb-f78b-448c-9ac0-657b6fe98137 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-b4c55640-85f9-4d75-a4df-6ee77b21ca73" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:09:33 compute-1 nova_compute[225855]: 2026-01-20 15:09:33.822 225859 DEBUG oslo_concurrency.lockutils [None req-80abe54b-873c-4099-951b-e62569f7f748 c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Acquiring lock "b4c55640-85f9-4d75-a4df-6ee77b21ca73" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:09:33 compute-1 nova_compute[225855]: 2026-01-20 15:09:33.823 225859 DEBUG oslo_concurrency.lockutils [None req-80abe54b-873c-4099-951b-e62569f7f748 c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Lock "b4c55640-85f9-4d75-a4df-6ee77b21ca73" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:09:33 compute-1 nova_compute[225855]: 2026-01-20 15:09:33.837 225859 INFO nova.compute.manager [None req-80abe54b-873c-4099-951b-e62569f7f748 c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] [instance: b4c55640-85f9-4d75-a4df-6ee77b21ca73] Detaching volume 1ae2be75-c922-4458-bd11-a97b4f6fdd2b
Jan 20 15:09:34 compute-1 nova_compute[225855]: 2026-01-20 15:09:34.063 225859 INFO nova.virt.block_device [None req-80abe54b-873c-4099-951b-e62569f7f748 c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] [instance: b4c55640-85f9-4d75-a4df-6ee77b21ca73] Attempting to driver detach volume 1ae2be75-c922-4458-bd11-a97b4f6fdd2b from mountpoint /dev/vdb
Jan 20 15:09:34 compute-1 nova_compute[225855]: 2026-01-20 15:09:34.074 225859 DEBUG nova.virt.libvirt.driver [None req-80abe54b-873c-4099-951b-e62569f7f748 c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Attempting to detach device vdb from instance b4c55640-85f9-4d75-a4df-6ee77b21ca73 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Jan 20 15:09:34 compute-1 nova_compute[225855]: 2026-01-20 15:09:34.075 225859 DEBUG nova.virt.libvirt.guest [None req-80abe54b-873c-4099-951b-e62569f7f748 c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] detach device xml: <disk type="network" device="disk">
Jan 20 15:09:34 compute-1 nova_compute[225855]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 20 15:09:34 compute-1 nova_compute[225855]:   <source protocol="rbd" name="volumes/volume-1ae2be75-c922-4458-bd11-a97b4f6fdd2b">
Jan 20 15:09:34 compute-1 nova_compute[225855]:     <host name="192.168.122.100" port="6789"/>
Jan 20 15:09:34 compute-1 nova_compute[225855]:     <host name="192.168.122.102" port="6789"/>
Jan 20 15:09:34 compute-1 nova_compute[225855]:     <host name="192.168.122.101" port="6789"/>
Jan 20 15:09:34 compute-1 nova_compute[225855]:   </source>
Jan 20 15:09:34 compute-1 nova_compute[225855]:   <target dev="vdb" bus="virtio"/>
Jan 20 15:09:34 compute-1 nova_compute[225855]:   <serial>1ae2be75-c922-4458-bd11-a97b4f6fdd2b</serial>
Jan 20 15:09:34 compute-1 nova_compute[225855]:   <address type="pci" domain="0x0000" bus="0x04" slot="0x00" function="0x0"/>
Jan 20 15:09:34 compute-1 nova_compute[225855]: </disk>
Jan 20 15:09:34 compute-1 nova_compute[225855]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Jan 20 15:09:34 compute-1 nova_compute[225855]: 2026-01-20 15:09:34.084 225859 INFO nova.virt.libvirt.driver [None req-80abe54b-873c-4099-951b-e62569f7f748 c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Successfully detached device vdb from instance b4c55640-85f9-4d75-a4df-6ee77b21ca73 from the persistent domain config.
Jan 20 15:09:34 compute-1 nova_compute[225855]: 2026-01-20 15:09:34.084 225859 DEBUG nova.virt.libvirt.driver [None req-80abe54b-873c-4099-951b-e62569f7f748 c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance b4c55640-85f9-4d75-a4df-6ee77b21ca73 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Jan 20 15:09:34 compute-1 nova_compute[225855]: 2026-01-20 15:09:34.085 225859 DEBUG nova.virt.libvirt.guest [None req-80abe54b-873c-4099-951b-e62569f7f748 c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] detach device xml: <disk type="network" device="disk">
Jan 20 15:09:34 compute-1 nova_compute[225855]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 20 15:09:34 compute-1 nova_compute[225855]:   <source protocol="rbd" name="volumes/volume-1ae2be75-c922-4458-bd11-a97b4f6fdd2b">
Jan 20 15:09:34 compute-1 nova_compute[225855]:     <host name="192.168.122.100" port="6789"/>
Jan 20 15:09:34 compute-1 nova_compute[225855]:     <host name="192.168.122.102" port="6789"/>
Jan 20 15:09:34 compute-1 nova_compute[225855]:     <host name="192.168.122.101" port="6789"/>
Jan 20 15:09:34 compute-1 nova_compute[225855]:   </source>
Jan 20 15:09:34 compute-1 nova_compute[225855]:   <target dev="vdb" bus="virtio"/>
Jan 20 15:09:34 compute-1 nova_compute[225855]:   <serial>1ae2be75-c922-4458-bd11-a97b4f6fdd2b</serial>
Jan 20 15:09:34 compute-1 nova_compute[225855]:   <address type="pci" domain="0x0000" bus="0x04" slot="0x00" function="0x0"/>
Jan 20 15:09:34 compute-1 nova_compute[225855]: </disk>
Jan 20 15:09:34 compute-1 nova_compute[225855]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Jan 20 15:09:34 compute-1 nova_compute[225855]: 2026-01-20 15:09:34.145 225859 DEBUG nova.virt.libvirt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Received event <DeviceRemovedEvent: 1768921774.1445324, b4c55640-85f9-4d75-a4df-6ee77b21ca73 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Jan 20 15:09:34 compute-1 nova_compute[225855]: 2026-01-20 15:09:34.147 225859 DEBUG nova.virt.libvirt.driver [None req-80abe54b-873c-4099-951b-e62569f7f748 c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance b4c55640-85f9-4d75-a4df-6ee77b21ca73 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Jan 20 15:09:34 compute-1 nova_compute[225855]: 2026-01-20 15:09:34.149 225859 INFO nova.virt.libvirt.driver [None req-80abe54b-873c-4099-951b-e62569f7f748 c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Successfully detached device vdb from instance b4c55640-85f9-4d75-a4df-6ee77b21ca73 from the live domain config.
Jan 20 15:09:34 compute-1 podman[296226]: 2026-01-20 15:09:34.222065387 +0000 UTC m=+0.132439069 container health_status 72be95c12e041eb2cd1a16ad70dfa1391d9f03a5f3180a614ead9459dcfcd46f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, org.label-schema.license=GPLv2)
Jan 20 15:09:34 compute-1 nova_compute[225855]: 2026-01-20 15:09:34.396 225859 DEBUG nova.objects.instance [None req-80abe54b-873c-4099-951b-e62569f7f748 c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Lazy-loading 'flavor' on Instance uuid b4c55640-85f9-4d75-a4df-6ee77b21ca73 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:09:34 compute-1 nova_compute[225855]: 2026-01-20 15:09:34.432 225859 DEBUG oslo_concurrency.lockutils [None req-80abe54b-873c-4099-951b-e62569f7f748 c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Lock "b4c55640-85f9-4d75-a4df-6ee77b21ca73" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.610s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:09:34 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:09:34 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:09:34 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:09:34.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:09:34 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:09:34 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:09:34 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:09:34.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:09:35 compute-1 ceph-mon[81775]: pgmap v2586: 321 pgs: 321 active+clean; 510 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.0 MiB/s rd, 2.2 MiB/s wr, 114 op/s
Jan 20 15:09:35 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 20 15:09:35 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2893274847' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:09:35 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 20 15:09:35 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2893274847' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:09:35 compute-1 nova_compute[225855]: 2026-01-20 15:09:35.993 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:09:36 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:09:36 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:09:36 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:09:36.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:09:36 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/2893274847' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:09:36 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/2893274847' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:09:36 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:09:36 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:09:36 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:09:36.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:09:37 compute-1 nova_compute[225855]: 2026-01-20 15:09:37.103 225859 DEBUG oslo_concurrency.lockutils [None req-7e715cb4-5d2d-4385-ad06-420ccbc20186 c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Acquiring lock "b4c55640-85f9-4d75-a4df-6ee77b21ca73" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:09:37 compute-1 nova_compute[225855]: 2026-01-20 15:09:37.104 225859 DEBUG oslo_concurrency.lockutils [None req-7e715cb4-5d2d-4385-ad06-420ccbc20186 c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Lock "b4c55640-85f9-4d75-a4df-6ee77b21ca73" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:09:37 compute-1 nova_compute[225855]: 2026-01-20 15:09:37.104 225859 DEBUG oslo_concurrency.lockutils [None req-7e715cb4-5d2d-4385-ad06-420ccbc20186 c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Acquiring lock "b4c55640-85f9-4d75-a4df-6ee77b21ca73-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:09:37 compute-1 nova_compute[225855]: 2026-01-20 15:09:37.104 225859 DEBUG oslo_concurrency.lockutils [None req-7e715cb4-5d2d-4385-ad06-420ccbc20186 c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Lock "b4c55640-85f9-4d75-a4df-6ee77b21ca73-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:09:37 compute-1 nova_compute[225855]: 2026-01-20 15:09:37.104 225859 DEBUG oslo_concurrency.lockutils [None req-7e715cb4-5d2d-4385-ad06-420ccbc20186 c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Lock "b4c55640-85f9-4d75-a4df-6ee77b21ca73-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:09:37 compute-1 nova_compute[225855]: 2026-01-20 15:09:37.105 225859 INFO nova.compute.manager [None req-7e715cb4-5d2d-4385-ad06-420ccbc20186 c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] [instance: b4c55640-85f9-4d75-a4df-6ee77b21ca73] Terminating instance
Jan 20 15:09:37 compute-1 nova_compute[225855]: 2026-01-20 15:09:37.106 225859 DEBUG nova.compute.manager [None req-7e715cb4-5d2d-4385-ad06-420ccbc20186 c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] [instance: b4c55640-85f9-4d75-a4df-6ee77b21ca73] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 20 15:09:37 compute-1 kernel: tapc6bf5189-ce (unregistering): left promiscuous mode
Jan 20 15:09:37 compute-1 NetworkManager[49104]: <info>  [1768921777.1550] device (tapc6bf5189-ce): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 20 15:09:37 compute-1 nova_compute[225855]: 2026-01-20 15:09:37.164 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:09:37 compute-1 ovn_controller[130490]: 2026-01-20T15:09:37Z|00756|binding|INFO|Releasing lport c6bf5189-ce8b-4ff1-9eba-e3d4195a6cbf from this chassis (sb_readonly=0)
Jan 20 15:09:37 compute-1 ovn_controller[130490]: 2026-01-20T15:09:37Z|00757|binding|INFO|Setting lport c6bf5189-ce8b-4ff1-9eba-e3d4195a6cbf down in Southbound
Jan 20 15:09:37 compute-1 ovn_controller[130490]: 2026-01-20T15:09:37Z|00758|binding|INFO|Removing iface tapc6bf5189-ce ovn-installed in OVS
Jan 20 15:09:37 compute-1 nova_compute[225855]: 2026-01-20 15:09:37.169 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:09:37 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:09:37.173 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7f:85:09 10.100.0.3'], port_security=['fa:16:3e:7f:85:09 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'b4c55640-85f9-4d75-a4df-6ee77b21ca73', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e22d6ddc-0339-4395-bc21-95081825f05b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5b43342be22543f79d4a56e26c6d0c96', 'neutron:revision_number': '8', 'neutron:security_group_ids': '8514424f-703f-4374-a78e-584f6e7c233b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=11c19619-1e7e-40ee-be83-c9dbc347543e, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], logical_port=c6bf5189-ce8b-4ff1-9eba-e3d4195a6cbf) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:09:37 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:09:37.174 140354 INFO neutron.agent.ovn.metadata.agent [-] Port c6bf5189-ce8b-4ff1-9eba-e3d4195a6cbf in datapath e22d6ddc-0339-4395-bc21-95081825f05b unbound from our chassis
Jan 20 15:09:37 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:09:37.175 140354 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e22d6ddc-0339-4395-bc21-95081825f05b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 20 15:09:37 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:09:37.177 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[37b6548c-324d-458e-842c-17315a49ebbd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:09:37 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:09:37.177 140354 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-e22d6ddc-0339-4395-bc21-95081825f05b namespace which is not needed anymore
Jan 20 15:09:37 compute-1 nova_compute[225855]: 2026-01-20 15:09:37.182 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:09:37 compute-1 systemd[1]: machine-qemu\x2d88\x2dinstance\x2d000000aa.scope: Deactivated successfully.
Jan 20 15:09:37 compute-1 systemd[1]: machine-qemu\x2d88\x2dinstance\x2d000000aa.scope: Consumed 14.398s CPU time.
Jan 20 15:09:37 compute-1 systemd-machined[194361]: Machine qemu-88-instance-000000aa terminated.
Jan 20 15:09:37 compute-1 kernel: tapc6bf5189-ce: entered promiscuous mode
Jan 20 15:09:37 compute-1 kernel: tapc6bf5189-ce (unregistering): left promiscuous mode
Jan 20 15:09:37 compute-1 nova_compute[225855]: 2026-01-20 15:09:37.330 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:09:37 compute-1 ovn_controller[130490]: 2026-01-20T15:09:37Z|00759|binding|INFO|Claiming lport c6bf5189-ce8b-4ff1-9eba-e3d4195a6cbf for this chassis.
Jan 20 15:09:37 compute-1 ovn_controller[130490]: 2026-01-20T15:09:37Z|00760|binding|INFO|c6bf5189-ce8b-4ff1-9eba-e3d4195a6cbf: Claiming fa:16:3e:7f:85:09 10.100.0.3
Jan 20 15:09:37 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:09:37.337 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7f:85:09 10.100.0.3'], port_security=['fa:16:3e:7f:85:09 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'b4c55640-85f9-4d75-a4df-6ee77b21ca73', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e22d6ddc-0339-4395-bc21-95081825f05b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5b43342be22543f79d4a56e26c6d0c96', 'neutron:revision_number': '8', 'neutron:security_group_ids': '8514424f-703f-4374-a78e-584f6e7c233b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=11c19619-1e7e-40ee-be83-c9dbc347543e, chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], logical_port=c6bf5189-ce8b-4ff1-9eba-e3d4195a6cbf) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:09:37 compute-1 nova_compute[225855]: 2026-01-20 15:09:37.346 225859 INFO nova.virt.libvirt.driver [-] [instance: b4c55640-85f9-4d75-a4df-6ee77b21ca73] Instance destroyed successfully.
Jan 20 15:09:37 compute-1 nova_compute[225855]: 2026-01-20 15:09:37.347 225859 DEBUG nova.objects.instance [None req-7e715cb4-5d2d-4385-ad06-420ccbc20186 c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Lazy-loading 'resources' on Instance uuid b4c55640-85f9-4d75-a4df-6ee77b21ca73 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:09:37 compute-1 neutron-haproxy-ovnmeta-e22d6ddc-0339-4395-bc21-95081825f05b[296077]: [NOTICE]   (296081) : haproxy version is 2.8.14-c23fe91
Jan 20 15:09:37 compute-1 neutron-haproxy-ovnmeta-e22d6ddc-0339-4395-bc21-95081825f05b[296077]: [NOTICE]   (296081) : path to executable is /usr/sbin/haproxy
Jan 20 15:09:37 compute-1 neutron-haproxy-ovnmeta-e22d6ddc-0339-4395-bc21-95081825f05b[296077]: [WARNING]  (296081) : Exiting Master process...
Jan 20 15:09:37 compute-1 neutron-haproxy-ovnmeta-e22d6ddc-0339-4395-bc21-95081825f05b[296077]: [WARNING]  (296081) : Exiting Master process...
Jan 20 15:09:37 compute-1 neutron-haproxy-ovnmeta-e22d6ddc-0339-4395-bc21-95081825f05b[296077]: [ALERT]    (296081) : Current worker (296083) exited with code 143 (Terminated)
Jan 20 15:09:37 compute-1 neutron-haproxy-ovnmeta-e22d6ddc-0339-4395-bc21-95081825f05b[296077]: [WARNING]  (296081) : All workers exited. Exiting... (0)
Jan 20 15:09:37 compute-1 systemd[1]: libpod-5934c01e984de0e040eb3c06a49fe9ae7a05253acf5b998f13a7c3ee6661ebfe.scope: Deactivated successfully.
Jan 20 15:09:37 compute-1 conmon[296077]: conmon 5934c01e984de0e040eb <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5934c01e984de0e040eb3c06a49fe9ae7a05253acf5b998f13a7c3ee6661ebfe.scope/container/memory.events
Jan 20 15:09:37 compute-1 ovn_controller[130490]: 2026-01-20T15:09:37Z|00761|binding|INFO|Setting lport c6bf5189-ce8b-4ff1-9eba-e3d4195a6cbf ovn-installed in OVS
Jan 20 15:09:37 compute-1 ovn_controller[130490]: 2026-01-20T15:09:37Z|00762|binding|INFO|Setting lport c6bf5189-ce8b-4ff1-9eba-e3d4195a6cbf up in Southbound
Jan 20 15:09:37 compute-1 nova_compute[225855]: 2026-01-20 15:09:37.352 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:09:37 compute-1 ovn_controller[130490]: 2026-01-20T15:09:37Z|00763|binding|INFO|Releasing lport c6bf5189-ce8b-4ff1-9eba-e3d4195a6cbf from this chassis (sb_readonly=1)
Jan 20 15:09:37 compute-1 ovn_controller[130490]: 2026-01-20T15:09:37Z|00764|if_status|INFO|Dropped 2 log messages in last 939 seconds (most recently, 939 seconds ago) due to excessive rate
Jan 20 15:09:37 compute-1 ovn_controller[130490]: 2026-01-20T15:09:37Z|00765|if_status|INFO|Not setting lport c6bf5189-ce8b-4ff1-9eba-e3d4195a6cbf down as sb is readonly
Jan 20 15:09:37 compute-1 ovn_controller[130490]: 2026-01-20T15:09:37Z|00766|binding|INFO|Removing iface tapc6bf5189-ce ovn-installed in OVS
Jan 20 15:09:37 compute-1 nova_compute[225855]: 2026-01-20 15:09:37.355 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:09:37 compute-1 ovn_controller[130490]: 2026-01-20T15:09:37Z|00767|binding|INFO|Releasing lport c6bf5189-ce8b-4ff1-9eba-e3d4195a6cbf from this chassis (sb_readonly=0)
Jan 20 15:09:37 compute-1 ovn_controller[130490]: 2026-01-20T15:09:37Z|00768|binding|INFO|Setting lport c6bf5189-ce8b-4ff1-9eba-e3d4195a6cbf down in Southbound
Jan 20 15:09:37 compute-1 podman[296278]: 2026-01-20 15:09:37.360759223 +0000 UTC m=+0.084196330 container died 5934c01e984de0e040eb3c06a49fe9ae7a05253acf5b998f13a7c3ee6661ebfe (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e22d6ddc-0339-4395-bc21-95081825f05b, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 15:09:37 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:09:37.364 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7f:85:09 10.100.0.3'], port_security=['fa:16:3e:7f:85:09 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'b4c55640-85f9-4d75-a4df-6ee77b21ca73', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e22d6ddc-0339-4395-bc21-95081825f05b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5b43342be22543f79d4a56e26c6d0c96', 'neutron:revision_number': '8', 'neutron:security_group_ids': '8514424f-703f-4374-a78e-584f6e7c233b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=11c19619-1e7e-40ee-be83-c9dbc347543e, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], logical_port=c6bf5189-ce8b-4ff1-9eba-e3d4195a6cbf) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:09:37 compute-1 nova_compute[225855]: 2026-01-20 15:09:37.364 225859 DEBUG nova.virt.libvirt.vif [None req-7e715cb4-5d2d-4385-ad06-420ccbc20186 c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T15:08:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestMinimumBasicScenario-server-2033880413',display_name='tempest-TestMinimumBasicScenario-server-2033880413',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testminimumbasicscenario-server-2033880413',id=170,image_ref='5b64c953-6df3-45a3-ae28-e419ba117bb2',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAWye8KRiKbxhnt7Xo8hUChshePiRtRRGKKbmjGtRpAbQNhUcsWAOe/4okup4yaafm+06AxmRjgJ9R8sVLFUEsSHiOZRgv3dFKZL11GpIpeu6UGBzzNxvi+GaA/Guzx6LQ==',key_name='tempest-TestMinimumBasicScenario-1345705235',keypairs=<?>,launch_index=0,launched_at=2026-01-20T15:08:41Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='5b43342be22543f79d4a56e26c6d0c96',ramdisk_id='',reservation_id='r-srebood0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='5b64c953-6df3-45a3-ae28-e419ba117bb2',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestMinimumBasicScenario-1665080150',owner_user_name='tempest-TestMinimumBasicScenario-1665080150-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T15:09:08Z,user_data=None,user_id='c98bd3f0904e48efa524d598bcad85e9',uuid=b4c55640-85f9-4d75-a4df-6ee77b21ca73,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c6bf5189-ce8b-4ff1-9eba-e3d4195a6cbf", "address": "fa:16:3e:7f:85:09", "network": {"id": "e22d6ddc-0339-4395-bc21-95081825f05b", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1496899124-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5b43342be22543f79d4a56e26c6d0c96", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc6bf5189-ce", "ovs_interfaceid": "c6bf5189-ce8b-4ff1-9eba-e3d4195a6cbf", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 20 15:09:37 compute-1 nova_compute[225855]: 2026-01-20 15:09:37.365 225859 DEBUG nova.network.os_vif_util [None req-7e715cb4-5d2d-4385-ad06-420ccbc20186 c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Converting VIF {"id": "c6bf5189-ce8b-4ff1-9eba-e3d4195a6cbf", "address": "fa:16:3e:7f:85:09", "network": {"id": "e22d6ddc-0339-4395-bc21-95081825f05b", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1496899124-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5b43342be22543f79d4a56e26c6d0c96", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc6bf5189-ce", "ovs_interfaceid": "c6bf5189-ce8b-4ff1-9eba-e3d4195a6cbf", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 15:09:37 compute-1 nova_compute[225855]: 2026-01-20 15:09:37.365 225859 DEBUG nova.network.os_vif_util [None req-7e715cb4-5d2d-4385-ad06-420ccbc20186 c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:7f:85:09,bridge_name='br-int',has_traffic_filtering=True,id=c6bf5189-ce8b-4ff1-9eba-e3d4195a6cbf,network=Network(e22d6ddc-0339-4395-bc21-95081825f05b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc6bf5189-ce') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 15:09:37 compute-1 nova_compute[225855]: 2026-01-20 15:09:37.366 225859 DEBUG os_vif [None req-7e715cb4-5d2d-4385-ad06-420ccbc20186 c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:7f:85:09,bridge_name='br-int',has_traffic_filtering=True,id=c6bf5189-ce8b-4ff1-9eba-e3d4195a6cbf,network=Network(e22d6ddc-0339-4395-bc21-95081825f05b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc6bf5189-ce') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 20 15:09:37 compute-1 nova_compute[225855]: 2026-01-20 15:09:37.368 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:09:37 compute-1 nova_compute[225855]: 2026-01-20 15:09:37.369 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc6bf5189-ce, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:09:37 compute-1 nova_compute[225855]: 2026-01-20 15:09:37.370 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:09:37 compute-1 nova_compute[225855]: 2026-01-20 15:09:37.370 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:09:37 compute-1 nova_compute[225855]: 2026-01-20 15:09:37.371 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:09:37 compute-1 nova_compute[225855]: 2026-01-20 15:09:37.374 225859 INFO os_vif [None req-7e715cb4-5d2d-4385-ad06-420ccbc20186 c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:7f:85:09,bridge_name='br-int',has_traffic_filtering=True,id=c6bf5189-ce8b-4ff1-9eba-e3d4195a6cbf,network=Network(e22d6ddc-0339-4395-bc21-95081825f05b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc6bf5189-ce')
Jan 20 15:09:37 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5934c01e984de0e040eb3c06a49fe9ae7a05253acf5b998f13a7c3ee6661ebfe-userdata-shm.mount: Deactivated successfully.
Jan 20 15:09:37 compute-1 systemd[1]: var-lib-containers-storage-overlay-54aa6a90ad0d5f14e5fd3fa3051693ab8a0283f2ca8d04c2beba4ab7db8a43eb-merged.mount: Deactivated successfully.
Jan 20 15:09:37 compute-1 podman[296278]: 2026-01-20 15:09:37.40118176 +0000 UTC m=+0.124618857 container cleanup 5934c01e984de0e040eb3c06a49fe9ae7a05253acf5b998f13a7c3ee6661ebfe (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e22d6ddc-0339-4395-bc21-95081825f05b, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 20 15:09:37 compute-1 systemd[1]: libpod-conmon-5934c01e984de0e040eb3c06a49fe9ae7a05253acf5b998f13a7c3ee6661ebfe.scope: Deactivated successfully.
Jan 20 15:09:37 compute-1 podman[296329]: 2026-01-20 15:09:37.468476209 +0000 UTC m=+0.045210594 container remove 5934c01e984de0e040eb3c06a49fe9ae7a05253acf5b998f13a7c3ee6661ebfe (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e22d6ddc-0339-4395-bc21-95081825f05b, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 20 15:09:37 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:09:37.474 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[bb2b253d-0bf1-4546-9fa6-37144c251062]: (4, ('Tue Jan 20 03:09:37 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-e22d6ddc-0339-4395-bc21-95081825f05b (5934c01e984de0e040eb3c06a49fe9ae7a05253acf5b998f13a7c3ee6661ebfe)\n5934c01e984de0e040eb3c06a49fe9ae7a05253acf5b998f13a7c3ee6661ebfe\nTue Jan 20 03:09:37 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-e22d6ddc-0339-4395-bc21-95081825f05b (5934c01e984de0e040eb3c06a49fe9ae7a05253acf5b998f13a7c3ee6661ebfe)\n5934c01e984de0e040eb3c06a49fe9ae7a05253acf5b998f13a7c3ee6661ebfe\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:09:37 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:09:37.476 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[730e7517-c85d-48a4-90fe-ba6c112a1d71]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:09:37 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:09:37.477 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape22d6ddc-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:09:37 compute-1 nova_compute[225855]: 2026-01-20 15:09:37.479 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:09:37 compute-1 kernel: tape22d6ddc-00: left promiscuous mode
Jan 20 15:09:37 compute-1 nova_compute[225855]: 2026-01-20 15:09:37.492 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:09:37 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:09:37.495 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[27cd1494-5c25-4285-a86c-de57cedc592d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:09:37 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:09:37.516 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[42f1f476-d145-4e7c-bf1d-9e331725d93f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:09:37 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:09:37.517 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[90c156a7-1b2b-4ade-9dc5-01ca1af1b967]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:09:37 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:09:37.532 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[b25b9493-4d34-47a1-ad3e-b564bb65cfd1]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 678695, 'reachable_time': 23313, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 296347, 'error': None, 'target': 'ovnmeta-e22d6ddc-0339-4395-bc21-95081825f05b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:09:37 compute-1 systemd[1]: run-netns-ovnmeta\x2de22d6ddc\x2d0339\x2d4395\x2dbc21\x2d95081825f05b.mount: Deactivated successfully.
Jan 20 15:09:37 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:09:37.537 140466 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-e22d6ddc-0339-4395-bc21-95081825f05b deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 20 15:09:37 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:09:37.537 140466 DEBUG oslo.privsep.daemon [-] privsep: reply[6d47169f-f777-4c85-b0e8-7f1719e3935d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:09:37 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:09:37.538 140354 INFO neutron.agent.ovn.metadata.agent [-] Port c6bf5189-ce8b-4ff1-9eba-e3d4195a6cbf in datapath e22d6ddc-0339-4395-bc21-95081825f05b unbound from our chassis
Jan 20 15:09:37 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:09:37.539 140354 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e22d6ddc-0339-4395-bc21-95081825f05b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 20 15:09:37 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:09:37.540 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[6037a122-024f-407b-b112-c9121db53780]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:09:37 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:09:37.540 140354 INFO neutron.agent.ovn.metadata.agent [-] Port c6bf5189-ce8b-4ff1-9eba-e3d4195a6cbf in datapath e22d6ddc-0339-4395-bc21-95081825f05b unbound from our chassis
Jan 20 15:09:37 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:09:37.541 140354 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e22d6ddc-0339-4395-bc21-95081825f05b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 20 15:09:37 compute-1 nova_compute[225855]: 2026-01-20 15:09:37.542 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:09:37 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:09:37.542 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[91c722a9-b93e-4453-b255-8d2a5a62d547]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:09:38 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e382 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:09:38 compute-1 nova_compute[225855]: 2026-01-20 15:09:38.490 225859 DEBUG nova.compute.manager [req-e72be83e-f1cf-4b25-8e0c-cc16bab90eb5 req-b631f6fa-04b0-48fe-8899-f009e5c6a38a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b4c55640-85f9-4d75-a4df-6ee77b21ca73] Received event network-vif-unplugged-c6bf5189-ce8b-4ff1-9eba-e3d4195a6cbf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:09:38 compute-1 nova_compute[225855]: 2026-01-20 15:09:38.491 225859 DEBUG oslo_concurrency.lockutils [req-e72be83e-f1cf-4b25-8e0c-cc16bab90eb5 req-b631f6fa-04b0-48fe-8899-f009e5c6a38a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "b4c55640-85f9-4d75-a4df-6ee77b21ca73-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:09:38 compute-1 nova_compute[225855]: 2026-01-20 15:09:38.491 225859 DEBUG oslo_concurrency.lockutils [req-e72be83e-f1cf-4b25-8e0c-cc16bab90eb5 req-b631f6fa-04b0-48fe-8899-f009e5c6a38a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "b4c55640-85f9-4d75-a4df-6ee77b21ca73-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:09:38 compute-1 nova_compute[225855]: 2026-01-20 15:09:38.491 225859 DEBUG oslo_concurrency.lockutils [req-e72be83e-f1cf-4b25-8e0c-cc16bab90eb5 req-b631f6fa-04b0-48fe-8899-f009e5c6a38a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "b4c55640-85f9-4d75-a4df-6ee77b21ca73-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:09:38 compute-1 nova_compute[225855]: 2026-01-20 15:09:38.491 225859 DEBUG nova.compute.manager [req-e72be83e-f1cf-4b25-8e0c-cc16bab90eb5 req-b631f6fa-04b0-48fe-8899-f009e5c6a38a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b4c55640-85f9-4d75-a4df-6ee77b21ca73] No waiting events found dispatching network-vif-unplugged-c6bf5189-ce8b-4ff1-9eba-e3d4195a6cbf pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:09:38 compute-1 nova_compute[225855]: 2026-01-20 15:09:38.492 225859 DEBUG nova.compute.manager [req-e72be83e-f1cf-4b25-8e0c-cc16bab90eb5 req-b631f6fa-04b0-48fe-8899-f009e5c6a38a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b4c55640-85f9-4d75-a4df-6ee77b21ca73] Received event network-vif-unplugged-c6bf5189-ce8b-4ff1-9eba-e3d4195a6cbf for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 20 15:09:38 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:09:38 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:09:38 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:09:38.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:09:38 compute-1 sudo[296349]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:09:38 compute-1 sudo[296349]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:09:38 compute-1 sudo[296349]: pam_unix(sudo:session): session closed for user root
Jan 20 15:09:38 compute-1 sudo[296374]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:09:38 compute-1 sudo[296374]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:09:38 compute-1 sudo[296374]: pam_unix(sudo:session): session closed for user root
Jan 20 15:09:38 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:09:38 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:09:38 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:09:38.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:09:39 compute-1 ceph-mon[81775]: pgmap v2587: 321 pgs: 321 active+clean; 510 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 801 KiB/s rd, 2.0 MiB/s wr, 98 op/s
Jan 20 15:09:39 compute-1 nova_compute[225855]: 2026-01-20 15:09:39.989 225859 INFO nova.virt.libvirt.driver [None req-7e715cb4-5d2d-4385-ad06-420ccbc20186 c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] [instance: b4c55640-85f9-4d75-a4df-6ee77b21ca73] Deleting instance files /var/lib/nova/instances/b4c55640-85f9-4d75-a4df-6ee77b21ca73_del
Jan 20 15:09:39 compute-1 nova_compute[225855]: 2026-01-20 15:09:39.990 225859 INFO nova.virt.libvirt.driver [None req-7e715cb4-5d2d-4385-ad06-420ccbc20186 c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] [instance: b4c55640-85f9-4d75-a4df-6ee77b21ca73] Deletion of /var/lib/nova/instances/b4c55640-85f9-4d75-a4df-6ee77b21ca73_del complete
Jan 20 15:09:40 compute-1 nova_compute[225855]: 2026-01-20 15:09:40.066 225859 INFO nova.compute.manager [None req-7e715cb4-5d2d-4385-ad06-420ccbc20186 c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] [instance: b4c55640-85f9-4d75-a4df-6ee77b21ca73] Took 2.96 seconds to destroy the instance on the hypervisor.
Jan 20 15:09:40 compute-1 nova_compute[225855]: 2026-01-20 15:09:40.067 225859 DEBUG oslo.service.loopingcall [None req-7e715cb4-5d2d-4385-ad06-420ccbc20186 c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 20 15:09:40 compute-1 nova_compute[225855]: 2026-01-20 15:09:40.067 225859 DEBUG nova.compute.manager [-] [instance: b4c55640-85f9-4d75-a4df-6ee77b21ca73] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 20 15:09:40 compute-1 nova_compute[225855]: 2026-01-20 15:09:40.067 225859 DEBUG nova.network.neutron [-] [instance: b4c55640-85f9-4d75-a4df-6ee77b21ca73] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 20 15:09:40 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:09:40 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:09:40 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:09:40.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:09:40 compute-1 nova_compute[225855]: 2026-01-20 15:09:40.679 225859 DEBUG nova.compute.manager [req-80e5f13a-04a8-4165-b9af-bcab7b57f19e req-6f7f5122-7ebd-44ae-afa4-a6fa28d58ae1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b4c55640-85f9-4d75-a4df-6ee77b21ca73] Received event network-vif-plugged-c6bf5189-ce8b-4ff1-9eba-e3d4195a6cbf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:09:40 compute-1 nova_compute[225855]: 2026-01-20 15:09:40.680 225859 DEBUG oslo_concurrency.lockutils [req-80e5f13a-04a8-4165-b9af-bcab7b57f19e req-6f7f5122-7ebd-44ae-afa4-a6fa28d58ae1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "b4c55640-85f9-4d75-a4df-6ee77b21ca73-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:09:40 compute-1 nova_compute[225855]: 2026-01-20 15:09:40.680 225859 DEBUG oslo_concurrency.lockutils [req-80e5f13a-04a8-4165-b9af-bcab7b57f19e req-6f7f5122-7ebd-44ae-afa4-a6fa28d58ae1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "b4c55640-85f9-4d75-a4df-6ee77b21ca73-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:09:40 compute-1 nova_compute[225855]: 2026-01-20 15:09:40.680 225859 DEBUG oslo_concurrency.lockutils [req-80e5f13a-04a8-4165-b9af-bcab7b57f19e req-6f7f5122-7ebd-44ae-afa4-a6fa28d58ae1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "b4c55640-85f9-4d75-a4df-6ee77b21ca73-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:09:40 compute-1 nova_compute[225855]: 2026-01-20 15:09:40.680 225859 DEBUG nova.compute.manager [req-80e5f13a-04a8-4165-b9af-bcab7b57f19e req-6f7f5122-7ebd-44ae-afa4-a6fa28d58ae1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b4c55640-85f9-4d75-a4df-6ee77b21ca73] No waiting events found dispatching network-vif-plugged-c6bf5189-ce8b-4ff1-9eba-e3d4195a6cbf pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:09:40 compute-1 nova_compute[225855]: 2026-01-20 15:09:40.681 225859 WARNING nova.compute.manager [req-80e5f13a-04a8-4165-b9af-bcab7b57f19e req-6f7f5122-7ebd-44ae-afa4-a6fa28d58ae1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b4c55640-85f9-4d75-a4df-6ee77b21ca73] Received unexpected event network-vif-plugged-c6bf5189-ce8b-4ff1-9eba-e3d4195a6cbf for instance with vm_state active and task_state deleting.
Jan 20 15:09:40 compute-1 nova_compute[225855]: 2026-01-20 15:09:40.681 225859 DEBUG nova.compute.manager [req-80e5f13a-04a8-4165-b9af-bcab7b57f19e req-6f7f5122-7ebd-44ae-afa4-a6fa28d58ae1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b4c55640-85f9-4d75-a4df-6ee77b21ca73] Received event network-vif-plugged-c6bf5189-ce8b-4ff1-9eba-e3d4195a6cbf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:09:40 compute-1 nova_compute[225855]: 2026-01-20 15:09:40.681 225859 DEBUG oslo_concurrency.lockutils [req-80e5f13a-04a8-4165-b9af-bcab7b57f19e req-6f7f5122-7ebd-44ae-afa4-a6fa28d58ae1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "b4c55640-85f9-4d75-a4df-6ee77b21ca73-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:09:40 compute-1 nova_compute[225855]: 2026-01-20 15:09:40.681 225859 DEBUG oslo_concurrency.lockutils [req-80e5f13a-04a8-4165-b9af-bcab7b57f19e req-6f7f5122-7ebd-44ae-afa4-a6fa28d58ae1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "b4c55640-85f9-4d75-a4df-6ee77b21ca73-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:09:40 compute-1 nova_compute[225855]: 2026-01-20 15:09:40.681 225859 DEBUG oslo_concurrency.lockutils [req-80e5f13a-04a8-4165-b9af-bcab7b57f19e req-6f7f5122-7ebd-44ae-afa4-a6fa28d58ae1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "b4c55640-85f9-4d75-a4df-6ee77b21ca73-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:09:40 compute-1 nova_compute[225855]: 2026-01-20 15:09:40.682 225859 DEBUG nova.compute.manager [req-80e5f13a-04a8-4165-b9af-bcab7b57f19e req-6f7f5122-7ebd-44ae-afa4-a6fa28d58ae1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b4c55640-85f9-4d75-a4df-6ee77b21ca73] No waiting events found dispatching network-vif-plugged-c6bf5189-ce8b-4ff1-9eba-e3d4195a6cbf pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:09:40 compute-1 nova_compute[225855]: 2026-01-20 15:09:40.682 225859 WARNING nova.compute.manager [req-80e5f13a-04a8-4165-b9af-bcab7b57f19e req-6f7f5122-7ebd-44ae-afa4-a6fa28d58ae1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b4c55640-85f9-4d75-a4df-6ee77b21ca73] Received unexpected event network-vif-plugged-c6bf5189-ce8b-4ff1-9eba-e3d4195a6cbf for instance with vm_state active and task_state deleting.
Jan 20 15:09:40 compute-1 nova_compute[225855]: 2026-01-20 15:09:40.682 225859 DEBUG nova.compute.manager [req-80e5f13a-04a8-4165-b9af-bcab7b57f19e req-6f7f5122-7ebd-44ae-afa4-a6fa28d58ae1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b4c55640-85f9-4d75-a4df-6ee77b21ca73] Received event network-vif-plugged-c6bf5189-ce8b-4ff1-9eba-e3d4195a6cbf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:09:40 compute-1 nova_compute[225855]: 2026-01-20 15:09:40.682 225859 DEBUG oslo_concurrency.lockutils [req-80e5f13a-04a8-4165-b9af-bcab7b57f19e req-6f7f5122-7ebd-44ae-afa4-a6fa28d58ae1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "b4c55640-85f9-4d75-a4df-6ee77b21ca73-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:09:40 compute-1 nova_compute[225855]: 2026-01-20 15:09:40.682 225859 DEBUG oslo_concurrency.lockutils [req-80e5f13a-04a8-4165-b9af-bcab7b57f19e req-6f7f5122-7ebd-44ae-afa4-a6fa28d58ae1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "b4c55640-85f9-4d75-a4df-6ee77b21ca73-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:09:40 compute-1 nova_compute[225855]: 2026-01-20 15:09:40.683 225859 DEBUG oslo_concurrency.lockutils [req-80e5f13a-04a8-4165-b9af-bcab7b57f19e req-6f7f5122-7ebd-44ae-afa4-a6fa28d58ae1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "b4c55640-85f9-4d75-a4df-6ee77b21ca73-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:09:40 compute-1 nova_compute[225855]: 2026-01-20 15:09:40.683 225859 DEBUG nova.compute.manager [req-80e5f13a-04a8-4165-b9af-bcab7b57f19e req-6f7f5122-7ebd-44ae-afa4-a6fa28d58ae1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b4c55640-85f9-4d75-a4df-6ee77b21ca73] No waiting events found dispatching network-vif-plugged-c6bf5189-ce8b-4ff1-9eba-e3d4195a6cbf pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:09:40 compute-1 nova_compute[225855]: 2026-01-20 15:09:40.683 225859 WARNING nova.compute.manager [req-80e5f13a-04a8-4165-b9af-bcab7b57f19e req-6f7f5122-7ebd-44ae-afa4-a6fa28d58ae1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b4c55640-85f9-4d75-a4df-6ee77b21ca73] Received unexpected event network-vif-plugged-c6bf5189-ce8b-4ff1-9eba-e3d4195a6cbf for instance with vm_state active and task_state deleting.
Jan 20 15:09:40 compute-1 nova_compute[225855]: 2026-01-20 15:09:40.683 225859 DEBUG nova.compute.manager [req-80e5f13a-04a8-4165-b9af-bcab7b57f19e req-6f7f5122-7ebd-44ae-afa4-a6fa28d58ae1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b4c55640-85f9-4d75-a4df-6ee77b21ca73] Received event network-vif-unplugged-c6bf5189-ce8b-4ff1-9eba-e3d4195a6cbf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:09:40 compute-1 nova_compute[225855]: 2026-01-20 15:09:40.684 225859 DEBUG oslo_concurrency.lockutils [req-80e5f13a-04a8-4165-b9af-bcab7b57f19e req-6f7f5122-7ebd-44ae-afa4-a6fa28d58ae1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "b4c55640-85f9-4d75-a4df-6ee77b21ca73-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:09:40 compute-1 nova_compute[225855]: 2026-01-20 15:09:40.684 225859 DEBUG oslo_concurrency.lockutils [req-80e5f13a-04a8-4165-b9af-bcab7b57f19e req-6f7f5122-7ebd-44ae-afa4-a6fa28d58ae1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "b4c55640-85f9-4d75-a4df-6ee77b21ca73-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:09:40 compute-1 nova_compute[225855]: 2026-01-20 15:09:40.684 225859 DEBUG oslo_concurrency.lockutils [req-80e5f13a-04a8-4165-b9af-bcab7b57f19e req-6f7f5122-7ebd-44ae-afa4-a6fa28d58ae1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "b4c55640-85f9-4d75-a4df-6ee77b21ca73-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:09:40 compute-1 nova_compute[225855]: 2026-01-20 15:09:40.684 225859 DEBUG nova.compute.manager [req-80e5f13a-04a8-4165-b9af-bcab7b57f19e req-6f7f5122-7ebd-44ae-afa4-a6fa28d58ae1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b4c55640-85f9-4d75-a4df-6ee77b21ca73] No waiting events found dispatching network-vif-unplugged-c6bf5189-ce8b-4ff1-9eba-e3d4195a6cbf pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:09:40 compute-1 nova_compute[225855]: 2026-01-20 15:09:40.684 225859 DEBUG nova.compute.manager [req-80e5f13a-04a8-4165-b9af-bcab7b57f19e req-6f7f5122-7ebd-44ae-afa4-a6fa28d58ae1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b4c55640-85f9-4d75-a4df-6ee77b21ca73] Received event network-vif-unplugged-c6bf5189-ce8b-4ff1-9eba-e3d4195a6cbf for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 20 15:09:40 compute-1 nova_compute[225855]: 2026-01-20 15:09:40.685 225859 DEBUG nova.compute.manager [req-80e5f13a-04a8-4165-b9af-bcab7b57f19e req-6f7f5122-7ebd-44ae-afa4-a6fa28d58ae1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b4c55640-85f9-4d75-a4df-6ee77b21ca73] Received event network-vif-plugged-c6bf5189-ce8b-4ff1-9eba-e3d4195a6cbf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:09:40 compute-1 nova_compute[225855]: 2026-01-20 15:09:40.685 225859 DEBUG oslo_concurrency.lockutils [req-80e5f13a-04a8-4165-b9af-bcab7b57f19e req-6f7f5122-7ebd-44ae-afa4-a6fa28d58ae1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "b4c55640-85f9-4d75-a4df-6ee77b21ca73-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:09:40 compute-1 nova_compute[225855]: 2026-01-20 15:09:40.685 225859 DEBUG oslo_concurrency.lockutils [req-80e5f13a-04a8-4165-b9af-bcab7b57f19e req-6f7f5122-7ebd-44ae-afa4-a6fa28d58ae1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "b4c55640-85f9-4d75-a4df-6ee77b21ca73-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:09:40 compute-1 nova_compute[225855]: 2026-01-20 15:09:40.685 225859 DEBUG oslo_concurrency.lockutils [req-80e5f13a-04a8-4165-b9af-bcab7b57f19e req-6f7f5122-7ebd-44ae-afa4-a6fa28d58ae1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "b4c55640-85f9-4d75-a4df-6ee77b21ca73-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:09:40 compute-1 nova_compute[225855]: 2026-01-20 15:09:40.686 225859 DEBUG nova.compute.manager [req-80e5f13a-04a8-4165-b9af-bcab7b57f19e req-6f7f5122-7ebd-44ae-afa4-a6fa28d58ae1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b4c55640-85f9-4d75-a4df-6ee77b21ca73] No waiting events found dispatching network-vif-plugged-c6bf5189-ce8b-4ff1-9eba-e3d4195a6cbf pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:09:40 compute-1 nova_compute[225855]: 2026-01-20 15:09:40.686 225859 WARNING nova.compute.manager [req-80e5f13a-04a8-4165-b9af-bcab7b57f19e req-6f7f5122-7ebd-44ae-afa4-a6fa28d58ae1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b4c55640-85f9-4d75-a4df-6ee77b21ca73] Received unexpected event network-vif-plugged-c6bf5189-ce8b-4ff1-9eba-e3d4195a6cbf for instance with vm_state active and task_state deleting.
Jan 20 15:09:40 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/3710696462' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:09:40 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/2564933822' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:09:40 compute-1 ceph-mon[81775]: pgmap v2588: 321 pgs: 321 active+clean; 510 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 225 KiB/s rd, 46 KiB/s wr, 24 op/s
Jan 20 15:09:40 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/628547865' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:09:40 compute-1 ceph-mon[81775]: pgmap v2589: 321 pgs: 321 active+clean; 497 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 236 KiB/s rd, 1.3 MiB/s wr, 44 op/s
Jan 20 15:09:40 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:09:40 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:09:40 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:09:40.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:09:41 compute-1 nova_compute[225855]: 2026-01-20 15:09:41.229 225859 DEBUG nova.network.neutron [-] [instance: b4c55640-85f9-4d75-a4df-6ee77b21ca73] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:09:41 compute-1 nova_compute[225855]: 2026-01-20 15:09:41.250 225859 INFO nova.compute.manager [-] [instance: b4c55640-85f9-4d75-a4df-6ee77b21ca73] Took 1.18 seconds to deallocate network for instance.
Jan 20 15:09:41 compute-1 nova_compute[225855]: 2026-01-20 15:09:41.309 225859 DEBUG oslo_concurrency.lockutils [None req-7e715cb4-5d2d-4385-ad06-420ccbc20186 c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:09:41 compute-1 nova_compute[225855]: 2026-01-20 15:09:41.310 225859 DEBUG oslo_concurrency.lockutils [None req-7e715cb4-5d2d-4385-ad06-420ccbc20186 c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:09:41 compute-1 nova_compute[225855]: 2026-01-20 15:09:41.342 225859 DEBUG nova.compute.manager [req-a578728c-527b-4bbd-a0e9-f2d202c54ad7 req-76634f58-7992-4a16-a902-63531ffc805b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b4c55640-85f9-4d75-a4df-6ee77b21ca73] Received event network-vif-deleted-c6bf5189-ce8b-4ff1-9eba-e3d4195a6cbf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:09:41 compute-1 nova_compute[225855]: 2026-01-20 15:09:41.407 225859 DEBUG oslo_concurrency.processutils [None req-7e715cb4-5d2d-4385-ad06-420ccbc20186 c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:09:41 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:09:41 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/230950978' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:09:41 compute-1 nova_compute[225855]: 2026-01-20 15:09:41.843 225859 DEBUG oslo_concurrency.processutils [None req-7e715cb4-5d2d-4385-ad06-420ccbc20186 c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:09:41 compute-1 nova_compute[225855]: 2026-01-20 15:09:41.850 225859 DEBUG nova.compute.provider_tree [None req-7e715cb4-5d2d-4385-ad06-420ccbc20186 c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:09:41 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/230950978' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:09:42 compute-1 nova_compute[225855]: 2026-01-20 15:09:42.019 225859 DEBUG nova.scheduler.client.report [None req-7e715cb4-5d2d-4385-ad06-420ccbc20186 c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:09:42 compute-1 nova_compute[225855]: 2026-01-20 15:09:42.050 225859 DEBUG oslo_concurrency.lockutils [None req-7e715cb4-5d2d-4385-ad06-420ccbc20186 c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.740s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:09:42 compute-1 nova_compute[225855]: 2026-01-20 15:09:42.072 225859 INFO nova.scheduler.client.report [None req-7e715cb4-5d2d-4385-ad06-420ccbc20186 c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Deleted allocations for instance b4c55640-85f9-4d75-a4df-6ee77b21ca73
Jan 20 15:09:42 compute-1 nova_compute[225855]: 2026-01-20 15:09:42.117 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:09:42 compute-1 nova_compute[225855]: 2026-01-20 15:09:42.129 225859 DEBUG oslo_concurrency.lockutils [None req-7e715cb4-5d2d-4385-ad06-420ccbc20186 c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Lock "b4c55640-85f9-4d75-a4df-6ee77b21ca73" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.025s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:09:42 compute-1 nova_compute[225855]: 2026-01-20 15:09:42.394 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:09:42 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:09:42 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:09:42 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:09:42.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:09:42 compute-1 nova_compute[225855]: 2026-01-20 15:09:42.544 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:09:42 compute-1 ceph-mon[81775]: pgmap v2590: 321 pgs: 321 active+clean; 504 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 103 KiB/s rd, 1.8 MiB/s wr, 56 op/s
Jan 20 15:09:42 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:09:42 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:09:42 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:09:42.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:09:43 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e382 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:09:43 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e383 e383: 3 total, 3 up, 3 in
Jan 20 15:09:44 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:09:44 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:09:44 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:09:44.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:09:44 compute-1 ceph-mon[81775]: osdmap e383: 3 total, 3 up, 3 in
Jan 20 15:09:44 compute-1 ceph-mon[81775]: pgmap v2592: 321 pgs: 321 active+clean; 475 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 785 KiB/s rd, 2.2 MiB/s wr, 100 op/s
Jan 20 15:09:44 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:09:44 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:09:44 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:09:44.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:09:45 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e384 e384: 3 total, 3 up, 3 in
Jan 20 15:09:46 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:09:46 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:09:46 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:09:46.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:09:46 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:09:46 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:09:46 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:09:46.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:09:46 compute-1 ceph-mon[81775]: osdmap e384: 3 total, 3 up, 3 in
Jan 20 15:09:46 compute-1 ceph-mon[81775]: pgmap v2594: 321 pgs: 321 active+clean; 467 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 3.2 MiB/s wr, 233 op/s
Jan 20 15:09:47 compute-1 nova_compute[225855]: 2026-01-20 15:09:47.420 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:09:47 compute-1 nova_compute[225855]: 2026-01-20 15:09:47.546 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:09:48 compute-1 nova_compute[225855]: 2026-01-20 15:09:48.230 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:09:48 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:09:48 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:09:48 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:09:48 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:09:48.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:09:48 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:09:48 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:09:48 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:09:48.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:09:49 compute-1 podman[296427]: 2026-01-20 15:09:49.013817131 +0000 UTC m=+0.057518813 container health_status 533cd8e0c3b8b7f910f15cef49e7f879ab46c12ce646a35628c15f13725eb500 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Jan 20 15:09:49 compute-1 ceph-mon[81775]: pgmap v2595: 321 pgs: 321 active+clean; 467 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 1.4 MiB/s wr, 203 op/s
Jan 20 15:09:50 compute-1 nova_compute[225855]: 2026-01-20 15:09:50.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:09:50 compute-1 nova_compute[225855]: 2026-01-20 15:09:50.340 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 15:09:50 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/867603612' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:09:50 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:09:50 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:09:50 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:09:50.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:09:50 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:09:50 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:09:50 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:09:50.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:09:51 compute-1 nova_compute[225855]: 2026-01-20 15:09:51.340 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:09:51 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e385 e385: 3 total, 3 up, 3 in
Jan 20 15:09:51 compute-1 ceph-mon[81775]: pgmap v2596: 321 pgs: 321 active+clean; 458 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.5 MiB/s rd, 2.6 MiB/s wr, 262 op/s
Jan 20 15:09:52 compute-1 nova_compute[225855]: 2026-01-20 15:09:52.344 225859 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1768921777.3435755, b4c55640-85f9-4d75-a4df-6ee77b21ca73 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:09:52 compute-1 nova_compute[225855]: 2026-01-20 15:09:52.345 225859 INFO nova.compute.manager [-] [instance: b4c55640-85f9-4d75-a4df-6ee77b21ca73] VM Stopped (Lifecycle Event)
Jan 20 15:09:52 compute-1 nova_compute[225855]: 2026-01-20 15:09:52.379 225859 DEBUG nova.compute.manager [None req-7749b0a5-add8-4d7b-a121-ccd929715148 - - - - - -] [instance: b4c55640-85f9-4d75-a4df-6ee77b21ca73] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:09:52 compute-1 nova_compute[225855]: 2026-01-20 15:09:52.456 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:09:52 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:09:52 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:09:52 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:09:52.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:09:52 compute-1 ceph-mon[81775]: osdmap e385: 3 total, 3 up, 3 in
Jan 20 15:09:52 compute-1 nova_compute[225855]: 2026-01-20 15:09:52.549 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:09:52 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:09:52 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:09:52 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:09:52.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:09:53 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e385 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:09:53 compute-1 ceph-mon[81775]: pgmap v2598: 321 pgs: 321 active+clean; 437 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 2.6 MiB/s wr, 262 op/s
Jan 20 15:09:54 compute-1 nova_compute[225855]: 2026-01-20 15:09:54.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:09:54 compute-1 nova_compute[225855]: 2026-01-20 15:09:54.340 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 15:09:54 compute-1 nova_compute[225855]: 2026-01-20 15:09:54.369 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 20 15:09:54 compute-1 nova_compute[225855]: 2026-01-20 15:09:54.370 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:09:54 compute-1 nova_compute[225855]: 2026-01-20 15:09:54.371 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:09:54 compute-1 nova_compute[225855]: 2026-01-20 15:09:54.371 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:09:54 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:09:54 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:09:54 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:09:54.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:09:54 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:09:54 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:09:54 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:09:54.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:09:55 compute-1 ceph-mon[81775]: pgmap v2599: 321 pgs: 321 active+clean; 440 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 2.4 MiB/s wr, 147 op/s
Jan 20 15:09:55 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/1171336010' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:09:56 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:09:56 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:09:56 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:09:56.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:09:56 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/4065963856' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:09:56 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:09:56 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:09:56 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:09:56.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:09:57 compute-1 nova_compute[225855]: 2026-01-20 15:09:57.458 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:09:57 compute-1 nova_compute[225855]: 2026-01-20 15:09:57.551 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:09:57 compute-1 ceph-mon[81775]: pgmap v2600: 321 pgs: 321 active+clean; 475 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 3.7 MiB/s wr, 165 op/s
Jan 20 15:09:58 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e385 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:09:58 compute-1 nova_compute[225855]: 2026-01-20 15:09:58.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:09:58 compute-1 nova_compute[225855]: 2026-01-20 15:09:58.373 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:09:58 compute-1 nova_compute[225855]: 2026-01-20 15:09:58.373 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:09:58 compute-1 nova_compute[225855]: 2026-01-20 15:09:58.374 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:09:58 compute-1 nova_compute[225855]: 2026-01-20 15:09:58.374 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 15:09:58 compute-1 nova_compute[225855]: 2026-01-20 15:09:58.374 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:09:58 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:09:58 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:09:58 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:09:58.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:09:58 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/3512189121' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:09:58 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:09:58 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/984211214' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:09:58 compute-1 nova_compute[225855]: 2026-01-20 15:09:58.822 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:09:58 compute-1 nova_compute[225855]: 2026-01-20 15:09:58.917 225859 DEBUG nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] skipping disk for instance-000000a8 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 15:09:58 compute-1 nova_compute[225855]: 2026-01-20 15:09:58.918 225859 DEBUG nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] skipping disk for instance-000000a8 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 15:09:58 compute-1 nova_compute[225855]: 2026-01-20 15:09:58.918 225859 DEBUG nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] skipping disk for instance-000000a8 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 15:09:58 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:09:58 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:09:58 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:09:58.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:09:58 compute-1 sudo[296474]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:09:58 compute-1 sudo[296474]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:09:58 compute-1 sudo[296474]: pam_unix(sudo:session): session closed for user root
Jan 20 15:09:59 compute-1 sudo[296499]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:09:59 compute-1 sudo[296499]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:09:59 compute-1 sudo[296499]: pam_unix(sudo:session): session closed for user root
Jan 20 15:09:59 compute-1 nova_compute[225855]: 2026-01-20 15:09:59.073 225859 WARNING nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 15:09:59 compute-1 nova_compute[225855]: 2026-01-20 15:09:59.074 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4119MB free_disk=20.80986785888672GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 15:09:59 compute-1 nova_compute[225855]: 2026-01-20 15:09:59.075 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:09:59 compute-1 nova_compute[225855]: 2026-01-20 15:09:59.075 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:09:59 compute-1 nova_compute[225855]: 2026-01-20 15:09:59.165 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Instance a25af5a3-096f-4363-842e-d960c22eb16b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 20 15:09:59 compute-1 nova_compute[225855]: 2026-01-20 15:09:59.165 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 15:09:59 compute-1 nova_compute[225855]: 2026-01-20 15:09:59.165 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 15:09:59 compute-1 nova_compute[225855]: 2026-01-20 15:09:59.211 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:09:59 compute-1 ceph-mon[81775]: pgmap v2601: 321 pgs: 321 active+clean; 475 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 3.7 MiB/s wr, 165 op/s
Jan 20 15:09:59 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/984211214' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:09:59 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/225497052' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:09:59 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:09:59 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3843724838' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:09:59 compute-1 nova_compute[225855]: 2026-01-20 15:09:59.630 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.419s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:09:59 compute-1 nova_compute[225855]: 2026-01-20 15:09:59.637 225859 DEBUG nova.compute.provider_tree [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:09:59 compute-1 nova_compute[225855]: 2026-01-20 15:09:59.660 225859 DEBUG nova.scheduler.client.report [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:09:59 compute-1 nova_compute[225855]: 2026-01-20 15:09:59.686 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 15:09:59 compute-1 nova_compute[225855]: 2026-01-20 15:09:59.686 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.611s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:10:00 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:10:00 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:10:00 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:10:00.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:10:00 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/3843724838' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:10:00 compute-1 ceph-mon[81775]: overall HEALTH_OK
Jan 20 15:10:00 compute-1 ceph-mon[81775]: pgmap v2602: 321 pgs: 321 active+clean; 476 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.9 MiB/s rd, 2.2 MiB/s wr, 170 op/s
Jan 20 15:10:00 compute-1 nova_compute[225855]: 2026-01-20 15:10:00.682 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:10:00 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:10:00 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:10:00 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:10:00.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:10:01 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/1240187548' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:10:02 compute-1 nova_compute[225855]: 2026-01-20 15:10:02.340 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:10:02 compute-1 nova_compute[225855]: 2026-01-20 15:10:02.461 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:10:02 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:10:02 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:10:02 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:10:02.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:10:02 compute-1 nova_compute[225855]: 2026-01-20 15:10:02.552 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:10:02 compute-1 ceph-mon[81775]: pgmap v2603: 321 pgs: 321 active+clean; 476 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.7 MiB/s rd, 2.0 MiB/s wr, 146 op/s
Jan 20 15:10:02 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/2876936489' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:10:02 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:10:02 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:10:02 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:10:02.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:10:03 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e385 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:10:03 compute-1 nova_compute[225855]: 2026-01-20 15:10:03.376 225859 DEBUG oslo_concurrency.lockutils [None req-05c30a27-224b-486c-a6a3-1a1f679cb609 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Acquiring lock "9e852872-788c-4dac-b7fb-d76d67e7a84f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:10:03 compute-1 nova_compute[225855]: 2026-01-20 15:10:03.377 225859 DEBUG oslo_concurrency.lockutils [None req-05c30a27-224b-486c-a6a3-1a1f679cb609 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Lock "9e852872-788c-4dac-b7fb-d76d67e7a84f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:10:03 compute-1 nova_compute[225855]: 2026-01-20 15:10:03.411 225859 DEBUG nova.compute.manager [None req-05c30a27-224b-486c-a6a3-1a1f679cb609 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] [instance: 9e852872-788c-4dac-b7fb-d76d67e7a84f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 20 15:10:03 compute-1 nova_compute[225855]: 2026-01-20 15:10:03.480 225859 DEBUG oslo_concurrency.lockutils [None req-05c30a27-224b-486c-a6a3-1a1f679cb609 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:10:03 compute-1 nova_compute[225855]: 2026-01-20 15:10:03.481 225859 DEBUG oslo_concurrency.lockutils [None req-05c30a27-224b-486c-a6a3-1a1f679cb609 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:10:03 compute-1 nova_compute[225855]: 2026-01-20 15:10:03.487 225859 DEBUG nova.virt.hardware [None req-05c30a27-224b-486c-a6a3-1a1f679cb609 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 20 15:10:03 compute-1 nova_compute[225855]: 2026-01-20 15:10:03.488 225859 INFO nova.compute.claims [None req-05c30a27-224b-486c-a6a3-1a1f679cb609 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] [instance: 9e852872-788c-4dac-b7fb-d76d67e7a84f] Claim successful on node compute-1.ctlplane.example.com
Jan 20 15:10:03 compute-1 nova_compute[225855]: 2026-01-20 15:10:03.606 225859 DEBUG oslo_concurrency.processutils [None req-05c30a27-224b-486c-a6a3-1a1f679cb609 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:10:04 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:10:04 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1931767600' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:10:04 compute-1 nova_compute[225855]: 2026-01-20 15:10:04.052 225859 DEBUG oslo_concurrency.processutils [None req-05c30a27-224b-486c-a6a3-1a1f679cb609 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:10:04 compute-1 nova_compute[225855]: 2026-01-20 15:10:04.057 225859 DEBUG nova.compute.provider_tree [None req-05c30a27-224b-486c-a6a3-1a1f679cb609 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:10:04 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/1931767600' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:10:04 compute-1 nova_compute[225855]: 2026-01-20 15:10:04.092 225859 DEBUG nova.scheduler.client.report [None req-05c30a27-224b-486c-a6a3-1a1f679cb609 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:10:04 compute-1 nova_compute[225855]: 2026-01-20 15:10:04.119 225859 DEBUG oslo_concurrency.lockutils [None req-05c30a27-224b-486c-a6a3-1a1f679cb609 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.638s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:10:04 compute-1 nova_compute[225855]: 2026-01-20 15:10:04.120 225859 DEBUG nova.compute.manager [None req-05c30a27-224b-486c-a6a3-1a1f679cb609 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] [instance: 9e852872-788c-4dac-b7fb-d76d67e7a84f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 20 15:10:04 compute-1 nova_compute[225855]: 2026-01-20 15:10:04.165 225859 DEBUG nova.compute.manager [None req-05c30a27-224b-486c-a6a3-1a1f679cb609 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] [instance: 9e852872-788c-4dac-b7fb-d76d67e7a84f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 20 15:10:04 compute-1 nova_compute[225855]: 2026-01-20 15:10:04.165 225859 DEBUG nova.network.neutron [None req-05c30a27-224b-486c-a6a3-1a1f679cb609 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] [instance: 9e852872-788c-4dac-b7fb-d76d67e7a84f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 20 15:10:04 compute-1 nova_compute[225855]: 2026-01-20 15:10:04.185 225859 INFO nova.virt.libvirt.driver [None req-05c30a27-224b-486c-a6a3-1a1f679cb609 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] [instance: 9e852872-788c-4dac-b7fb-d76d67e7a84f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 20 15:10:04 compute-1 nova_compute[225855]: 2026-01-20 15:10:04.202 225859 DEBUG nova.compute.manager [None req-05c30a27-224b-486c-a6a3-1a1f679cb609 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] [instance: 9e852872-788c-4dac-b7fb-d76d67e7a84f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 20 15:10:04 compute-1 nova_compute[225855]: 2026-01-20 15:10:04.250 225859 INFO nova.virt.block_device [None req-05c30a27-224b-486c-a6a3-1a1f679cb609 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] [instance: 9e852872-788c-4dac-b7fb-d76d67e7a84f] Booting with volume ccb7c984-4606-40ef-8fcd-a902f5382dee at /dev/vda
Jan 20 15:10:04 compute-1 nova_compute[225855]: 2026-01-20 15:10:04.412 225859 DEBUG os_brick.utils [None req-05c30a27-224b-486c-a6a3-1a1f679cb609 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.101', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-1.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Jan 20 15:10:04 compute-1 nova_compute[225855]: 2026-01-20 15:10:04.414 231081 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:10:04 compute-1 nova_compute[225855]: 2026-01-20 15:10:04.425 231081 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:10:04 compute-1 nova_compute[225855]: 2026-01-20 15:10:04.425 231081 DEBUG oslo.privsep.daemon [-] privsep: reply[97258e65-4038-4c1f-9d26-b80d1c2d80ef]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:10:04 compute-1 nova_compute[225855]: 2026-01-20 15:10:04.427 231081 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:10:04 compute-1 nova_compute[225855]: 2026-01-20 15:10:04.437 231081 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:10:04 compute-1 nova_compute[225855]: 2026-01-20 15:10:04.437 231081 DEBUG oslo.privsep.daemon [-] privsep: reply[e5e00cf0-0f75-4654-992b-1edaa2e1bdf8]: (4, ('InitiatorName=iqn.1994-05.com.redhat:1821ea3dc03d', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:10:04 compute-1 nova_compute[225855]: 2026-01-20 15:10:04.439 231081 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:10:04 compute-1 nova_compute[225855]: 2026-01-20 15:10:04.447 231081 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:10:04 compute-1 nova_compute[225855]: 2026-01-20 15:10:04.448 231081 DEBUG oslo.privsep.daemon [-] privsep: reply[bdc4c4aa-e00c-46af-bd34-9f310a8b507f]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:10:04 compute-1 nova_compute[225855]: 2026-01-20 15:10:04.449 231081 DEBUG oslo.privsep.daemon [-] privsep: reply[52288b68-d62a-4b67-8b5a-be86a9f1b957]: (4, '870b1f1c-f19c-477b-b282-ee6eeba50974') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:10:04 compute-1 nova_compute[225855]: 2026-01-20 15:10:04.449 225859 DEBUG oslo_concurrency.processutils [None req-05c30a27-224b-486c-a6a3-1a1f679cb609 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:10:04 compute-1 nova_compute[225855]: 2026-01-20 15:10:04.481 225859 DEBUG oslo_concurrency.processutils [None req-05c30a27-224b-486c-a6a3-1a1f679cb609 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] CMD "nvme version" returned: 0 in 0.031s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:10:04 compute-1 nova_compute[225855]: 2026-01-20 15:10:04.483 225859 DEBUG os_brick.initiator.connectors.lightos [None req-05c30a27-224b-486c-a6a3-1a1f679cb609 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Jan 20 15:10:04 compute-1 nova_compute[225855]: 2026-01-20 15:10:04.484 225859 DEBUG os_brick.initiator.connectors.lightos [None req-05c30a27-224b-486c-a6a3-1a1f679cb609 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Jan 20 15:10:04 compute-1 nova_compute[225855]: 2026-01-20 15:10:04.484 225859 DEBUG os_brick.initiator.connectors.lightos [None req-05c30a27-224b-486c-a6a3-1a1f679cb609 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:5350774e-8b5e-4dba-80a9-92d405981c1d dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Jan 20 15:10:04 compute-1 nova_compute[225855]: 2026-01-20 15:10:04.484 225859 DEBUG os_brick.utils [None req-05c30a27-224b-486c-a6a3-1a1f679cb609 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] <== get_connector_properties: return (71ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.101', 'host': 'compute-1.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:1821ea3dc03d', 'do_local_attach': False, 'nvme_hostid': '5350774e-8b5e-4dba-80a9-92d405981c1d', 'system uuid': '870b1f1c-f19c-477b-b282-ee6eeba50974', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:5350774e-8b5e-4dba-80a9-92d405981c1d', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Jan 20 15:10:04 compute-1 nova_compute[225855]: 2026-01-20 15:10:04.485 225859 DEBUG nova.virt.block_device [None req-05c30a27-224b-486c-a6a3-1a1f679cb609 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] [instance: 9e852872-788c-4dac-b7fb-d76d67e7a84f] Updating existing volume attachment record: f4780d3e-d038-4466-9222-3d7730703f45 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Jan 20 15:10:04 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:10:04 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:10:04 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:10:04.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:10:04 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:10:04 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:10:04 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:10:04.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:10:05 compute-1 podman[296578]: 2026-01-20 15:10:05.040205215 +0000 UTC m=+0.088782570 container health_status 72be95c12e041eb2cd1a16ad70dfa1391d9f03a5f3180a614ead9459dcfcd46f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 15:10:05 compute-1 ceph-mon[81775]: pgmap v2604: 321 pgs: 321 active+clean; 476 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.3 MiB/s rd, 1.8 MiB/s wr, 149 op/s
Jan 20 15:10:05 compute-1 nova_compute[225855]: 2026-01-20 15:10:05.154 225859 DEBUG nova.policy [None req-05c30a27-224b-486c-a6a3-1a1f679cb609 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'bf422e55e158420cbdae75f07a3bb97a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a49638950e1543fa8e0d251af5479623', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 20 15:10:05 compute-1 nova_compute[225855]: 2026-01-20 15:10:05.722 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:10:05 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:10:05.722 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=56, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '12:bb:42', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '06:92:24:f7:15:56'}, ipsec=False) old=SB_Global(nb_cfg=55) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:10:05 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:10:05.723 140354 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 20 15:10:05 compute-1 nova_compute[225855]: 2026-01-20 15:10:05.727 225859 DEBUG nova.compute.manager [None req-05c30a27-224b-486c-a6a3-1a1f679cb609 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] [instance: 9e852872-788c-4dac-b7fb-d76d67e7a84f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 20 15:10:05 compute-1 nova_compute[225855]: 2026-01-20 15:10:05.728 225859 DEBUG nova.virt.libvirt.driver [None req-05c30a27-224b-486c-a6a3-1a1f679cb609 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] [instance: 9e852872-788c-4dac-b7fb-d76d67e7a84f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 20 15:10:05 compute-1 nova_compute[225855]: 2026-01-20 15:10:05.728 225859 INFO nova.virt.libvirt.driver [None req-05c30a27-224b-486c-a6a3-1a1f679cb609 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] [instance: 9e852872-788c-4dac-b7fb-d76d67e7a84f] Creating image(s)
Jan 20 15:10:05 compute-1 nova_compute[225855]: 2026-01-20 15:10:05.729 225859 DEBUG nova.virt.libvirt.driver [None req-05c30a27-224b-486c-a6a3-1a1f679cb609 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] [instance: 9e852872-788c-4dac-b7fb-d76d67e7a84f] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Jan 20 15:10:05 compute-1 nova_compute[225855]: 2026-01-20 15:10:05.729 225859 DEBUG nova.virt.libvirt.driver [None req-05c30a27-224b-486c-a6a3-1a1f679cb609 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] [instance: 9e852872-788c-4dac-b7fb-d76d67e7a84f] Ensure instance console log exists: /var/lib/nova/instances/9e852872-788c-4dac-b7fb-d76d67e7a84f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 20 15:10:05 compute-1 nova_compute[225855]: 2026-01-20 15:10:05.729 225859 DEBUG oslo_concurrency.lockutils [None req-05c30a27-224b-486c-a6a3-1a1f679cb609 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:10:05 compute-1 nova_compute[225855]: 2026-01-20 15:10:05.730 225859 DEBUG oslo_concurrency.lockutils [None req-05c30a27-224b-486c-a6a3-1a1f679cb609 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:10:05 compute-1 nova_compute[225855]: 2026-01-20 15:10:05.730 225859 DEBUG oslo_concurrency.lockutils [None req-05c30a27-224b-486c-a6a3-1a1f679cb609 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:10:05 compute-1 nova_compute[225855]: 2026-01-20 15:10:05.842 225859 DEBUG nova.network.neutron [None req-05c30a27-224b-486c-a6a3-1a1f679cb609 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] [instance: 9e852872-788c-4dac-b7fb-d76d67e7a84f] Successfully created port: f9f19cf7-87f5-4dd4-a7be-78086c84e176 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 20 15:10:06 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/2067592141' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:10:06 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:10:06 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:10:06 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:10:06.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:10:06 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:10:06 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:10:06 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:10:06.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:10:07 compute-1 nova_compute[225855]: 2026-01-20 15:10:07.112 225859 DEBUG nova.network.neutron [None req-05c30a27-224b-486c-a6a3-1a1f679cb609 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] [instance: 9e852872-788c-4dac-b7fb-d76d67e7a84f] Successfully updated port: f9f19cf7-87f5-4dd4-a7be-78086c84e176 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 20 15:10:07 compute-1 nova_compute[225855]: 2026-01-20 15:10:07.124 225859 DEBUG oslo_concurrency.lockutils [None req-05c30a27-224b-486c-a6a3-1a1f679cb609 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Acquiring lock "refresh_cache-9e852872-788c-4dac-b7fb-d76d67e7a84f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:10:07 compute-1 nova_compute[225855]: 2026-01-20 15:10:07.125 225859 DEBUG oslo_concurrency.lockutils [None req-05c30a27-224b-486c-a6a3-1a1f679cb609 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Acquired lock "refresh_cache-9e852872-788c-4dac-b7fb-d76d67e7a84f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:10:07 compute-1 nova_compute[225855]: 2026-01-20 15:10:07.125 225859 DEBUG nova.network.neutron [None req-05c30a27-224b-486c-a6a3-1a1f679cb609 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] [instance: 9e852872-788c-4dac-b7fb-d76d67e7a84f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 15:10:07 compute-1 ceph-mon[81775]: pgmap v2605: 321 pgs: 321 active+clean; 476 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 1.5 MiB/s wr, 157 op/s
Jan 20 15:10:07 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/3240538515' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:10:07 compute-1 nova_compute[225855]: 2026-01-20 15:10:07.219 225859 DEBUG nova.compute.manager [req-9aea7350-b3f2-4184-b469-1f5945fa5981 req-aebb9bb6-8011-4901-b9c6-b66e7b6fb655 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 9e852872-788c-4dac-b7fb-d76d67e7a84f] Received event network-changed-f9f19cf7-87f5-4dd4-a7be-78086c84e176 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:10:07 compute-1 nova_compute[225855]: 2026-01-20 15:10:07.219 225859 DEBUG nova.compute.manager [req-9aea7350-b3f2-4184-b469-1f5945fa5981 req-aebb9bb6-8011-4901-b9c6-b66e7b6fb655 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 9e852872-788c-4dac-b7fb-d76d67e7a84f] Refreshing instance network info cache due to event network-changed-f9f19cf7-87f5-4dd4-a7be-78086c84e176. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 15:10:07 compute-1 nova_compute[225855]: 2026-01-20 15:10:07.219 225859 DEBUG oslo_concurrency.lockutils [req-9aea7350-b3f2-4184-b469-1f5945fa5981 req-aebb9bb6-8011-4901-b9c6-b66e7b6fb655 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-9e852872-788c-4dac-b7fb-d76d67e7a84f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:10:07 compute-1 nova_compute[225855]: 2026-01-20 15:10:07.464 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:10:07 compute-1 nova_compute[225855]: 2026-01-20 15:10:07.554 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:10:08 compute-1 nova_compute[225855]: 2026-01-20 15:10:08.116 225859 DEBUG nova.network.neutron [None req-05c30a27-224b-486c-a6a3-1a1f679cb609 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] [instance: 9e852872-788c-4dac-b7fb-d76d67e7a84f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 20 15:10:08 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e385 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:10:08 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:10:08 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:10:08 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:10:08.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:10:08 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:10:08.725 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5ffd4ac3-9266-4927-98ad-20a17782c725, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '56'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:10:08 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:10:08 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:10:08 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:10:08.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:10:09 compute-1 ceph-mon[81775]: pgmap v2606: 321 pgs: 321 active+clean; 476 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 52 KiB/s wr, 123 op/s
Jan 20 15:10:09 compute-1 ceph-osd[79119]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #53. Immutable memtables: 9.
Jan 20 15:10:10 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:10:10 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:10:10 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:10:10.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:10:10 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:10:10 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:10:10 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:10:10.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:10:11 compute-1 sudo[296607]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:10:11 compute-1 sudo[296607]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:10:11 compute-1 sudo[296607]: pam_unix(sudo:session): session closed for user root
Jan 20 15:10:11 compute-1 sudo[296632]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:10:11 compute-1 sudo[296632]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:10:11 compute-1 sudo[296632]: pam_unix(sudo:session): session closed for user root
Jan 20 15:10:11 compute-1 sudo[296657]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:10:11 compute-1 sudo[296657]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:10:11 compute-1 sudo[296657]: pam_unix(sudo:session): session closed for user root
Jan 20 15:10:11 compute-1 sudo[296682]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Jan 20 15:10:11 compute-1 sudo[296682]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:10:11 compute-1 nova_compute[225855]: 2026-01-20 15:10:11.379 225859 DEBUG nova.network.neutron [None req-05c30a27-224b-486c-a6a3-1a1f679cb609 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] [instance: 9e852872-788c-4dac-b7fb-d76d67e7a84f] Updating instance_info_cache with network_info: [{"id": "f9f19cf7-87f5-4dd4-a7be-78086c84e176", "address": "fa:16:3e:19:e2:73", "network": {"id": "b677f1a9-dbaa-4373-8466-bd9ccf067b91", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-408170906-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a49638950e1543fa8e0d251af5479623", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf9f19cf7-87", "ovs_interfaceid": "f9f19cf7-87f5-4dd4-a7be-78086c84e176", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:10:11 compute-1 nova_compute[225855]: 2026-01-20 15:10:11.467 225859 DEBUG oslo_concurrency.lockutils [None req-05c30a27-224b-486c-a6a3-1a1f679cb609 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Releasing lock "refresh_cache-9e852872-788c-4dac-b7fb-d76d67e7a84f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:10:11 compute-1 nova_compute[225855]: 2026-01-20 15:10:11.468 225859 DEBUG nova.compute.manager [None req-05c30a27-224b-486c-a6a3-1a1f679cb609 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] [instance: 9e852872-788c-4dac-b7fb-d76d67e7a84f] Instance network_info: |[{"id": "f9f19cf7-87f5-4dd4-a7be-78086c84e176", "address": "fa:16:3e:19:e2:73", "network": {"id": "b677f1a9-dbaa-4373-8466-bd9ccf067b91", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-408170906-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a49638950e1543fa8e0d251af5479623", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf9f19cf7-87", "ovs_interfaceid": "f9f19cf7-87f5-4dd4-a7be-78086c84e176", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 20 15:10:11 compute-1 nova_compute[225855]: 2026-01-20 15:10:11.468 225859 DEBUG oslo_concurrency.lockutils [req-9aea7350-b3f2-4184-b469-1f5945fa5981 req-aebb9bb6-8011-4901-b9c6-b66e7b6fb655 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-9e852872-788c-4dac-b7fb-d76d67e7a84f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:10:11 compute-1 nova_compute[225855]: 2026-01-20 15:10:11.469 225859 DEBUG nova.network.neutron [req-9aea7350-b3f2-4184-b469-1f5945fa5981 req-aebb9bb6-8011-4901-b9c6-b66e7b6fb655 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 9e852872-788c-4dac-b7fb-d76d67e7a84f] Refreshing network info cache for port f9f19cf7-87f5-4dd4-a7be-78086c84e176 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 15:10:11 compute-1 nova_compute[225855]: 2026-01-20 15:10:11.473 225859 DEBUG nova.virt.libvirt.driver [None req-05c30a27-224b-486c-a6a3-1a1f679cb609 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] [instance: 9e852872-788c-4dac-b7fb-d76d67e7a84f] Start _get_guest_xml network_info=[{"id": "f9f19cf7-87f5-4dd4-a7be-78086c84e176", "address": "fa:16:3e:19:e2:73", "network": {"id": "b677f1a9-dbaa-4373-8466-bd9ccf067b91", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-408170906-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a49638950e1543fa8e0d251af5479623", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf9f19cf7-87", "ovs_interfaceid": "f9f19cf7-87f5-4dd4-a7be-78086c84e176", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'delete_on_termination': False, 'device_type': 'disk', 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-ccb7c984-4606-40ef-8fcd-a902f5382dee', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'ccb7c984-4606-40ef-8fcd-a902f5382dee', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': '9e852872-788c-4dac-b7fb-d76d67e7a84f', 'attached_at': '', 'detached_at': '', 'volume_id': 'ccb7c984-4606-40ef-8fcd-a902f5382dee', 'serial': 'ccb7c984-4606-40ef-8fcd-a902f5382dee'}, 'guest_format': None, 'boot_index': 0, 'mount_device': '/dev/vda', 'attachment_id': 'f4780d3e-d038-4466-9222-3d7730703f45', 'disk_bus': 'virtio', 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 20 15:10:11 compute-1 nova_compute[225855]: 2026-01-20 15:10:11.480 225859 WARNING nova.virt.libvirt.driver [None req-05c30a27-224b-486c-a6a3-1a1f679cb609 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 15:10:11 compute-1 nova_compute[225855]: 2026-01-20 15:10:11.486 225859 DEBUG nova.virt.libvirt.host [None req-05c30a27-224b-486c-a6a3-1a1f679cb609 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 20 15:10:11 compute-1 nova_compute[225855]: 2026-01-20 15:10:11.487 225859 DEBUG nova.virt.libvirt.host [None req-05c30a27-224b-486c-a6a3-1a1f679cb609 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 20 15:10:11 compute-1 nova_compute[225855]: 2026-01-20 15:10:11.496 225859 DEBUG nova.virt.libvirt.host [None req-05c30a27-224b-486c-a6a3-1a1f679cb609 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 20 15:10:11 compute-1 nova_compute[225855]: 2026-01-20 15:10:11.497 225859 DEBUG nova.virt.libvirt.host [None req-05c30a27-224b-486c-a6a3-1a1f679cb609 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 20 15:10:11 compute-1 nova_compute[225855]: 2026-01-20 15:10:11.498 225859 DEBUG nova.virt.libvirt.driver [None req-05c30a27-224b-486c-a6a3-1a1f679cb609 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 20 15:10:11 compute-1 nova_compute[225855]: 2026-01-20 15:10:11.498 225859 DEBUG nova.virt.hardware [None req-05c30a27-224b-486c-a6a3-1a1f679cb609 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-20T14:21:55Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='522deaab-a741-4dbb-932d-d8b13a211c33',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 20 15:10:11 compute-1 nova_compute[225855]: 2026-01-20 15:10:11.499 225859 DEBUG nova.virt.hardware [None req-05c30a27-224b-486c-a6a3-1a1f679cb609 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 20 15:10:11 compute-1 nova_compute[225855]: 2026-01-20 15:10:11.499 225859 DEBUG nova.virt.hardware [None req-05c30a27-224b-486c-a6a3-1a1f679cb609 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 20 15:10:11 compute-1 nova_compute[225855]: 2026-01-20 15:10:11.499 225859 DEBUG nova.virt.hardware [None req-05c30a27-224b-486c-a6a3-1a1f679cb609 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 20 15:10:11 compute-1 nova_compute[225855]: 2026-01-20 15:10:11.499 225859 DEBUG nova.virt.hardware [None req-05c30a27-224b-486c-a6a3-1a1f679cb609 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 20 15:10:11 compute-1 nova_compute[225855]: 2026-01-20 15:10:11.499 225859 DEBUG nova.virt.hardware [None req-05c30a27-224b-486c-a6a3-1a1f679cb609 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 20 15:10:11 compute-1 nova_compute[225855]: 2026-01-20 15:10:11.500 225859 DEBUG nova.virt.hardware [None req-05c30a27-224b-486c-a6a3-1a1f679cb609 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 20 15:10:11 compute-1 nova_compute[225855]: 2026-01-20 15:10:11.500 225859 DEBUG nova.virt.hardware [None req-05c30a27-224b-486c-a6a3-1a1f679cb609 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 20 15:10:11 compute-1 nova_compute[225855]: 2026-01-20 15:10:11.500 225859 DEBUG nova.virt.hardware [None req-05c30a27-224b-486c-a6a3-1a1f679cb609 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 20 15:10:11 compute-1 nova_compute[225855]: 2026-01-20 15:10:11.500 225859 DEBUG nova.virt.hardware [None req-05c30a27-224b-486c-a6a3-1a1f679cb609 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 20 15:10:11 compute-1 nova_compute[225855]: 2026-01-20 15:10:11.501 225859 DEBUG nova.virt.hardware [None req-05c30a27-224b-486c-a6a3-1a1f679cb609 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 20 15:10:11 compute-1 ceph-mon[81775]: pgmap v2607: 321 pgs: 321 active+clean; 481 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 614 KiB/s wr, 130 op/s
Jan 20 15:10:11 compute-1 nova_compute[225855]: 2026-01-20 15:10:11.532 225859 DEBUG nova.storage.rbd_utils [None req-05c30a27-224b-486c-a6a3-1a1f679cb609 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] rbd image 9e852872-788c-4dac-b7fb-d76d67e7a84f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:10:11 compute-1 nova_compute[225855]: 2026-01-20 15:10:11.536 225859 DEBUG oslo_concurrency.processutils [None req-05c30a27-224b-486c-a6a3-1a1f679cb609 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:10:11 compute-1 sudo[296682]: pam_unix(sudo:session): session closed for user root
Jan 20 15:10:11 compute-1 sudo[296745]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:10:11 compute-1 sudo[296745]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:10:11 compute-1 sudo[296745]: pam_unix(sudo:session): session closed for user root
Jan 20 15:10:11 compute-1 sudo[296790]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:10:11 compute-1 sudo[296790]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:10:11 compute-1 sudo[296790]: pam_unix(sudo:session): session closed for user root
Jan 20 15:10:11 compute-1 sudo[296815]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:10:11 compute-1 sudo[296815]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:10:11 compute-1 sudo[296815]: pam_unix(sudo:session): session closed for user root
Jan 20 15:10:11 compute-1 sudo[296840]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 20 15:10:11 compute-1 sudo[296840]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:10:11 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 15:10:11 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1681359816' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:10:12 compute-1 nova_compute[225855]: 2026-01-20 15:10:12.012 225859 DEBUG oslo_concurrency.processutils [None req-05c30a27-224b-486c-a6a3-1a1f679cb609 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:10:12 compute-1 nova_compute[225855]: 2026-01-20 15:10:12.163 225859 DEBUG os_brick.encryptors [None req-05c30a27-224b-486c-a6a3-1a1f679cb609 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Using volume encryption metadata '{'encryption_key_id': '349091f0-57cd-4e7a-b935-410f986b1500', 'control_location': 'front-end', 'cipher': 'aes-xts-plain64', 'key_size': 256, 'provider': 'luks'}' for connection: {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-ccb7c984-4606-40ef-8fcd-a902f5382dee', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'ccb7c984-4606-40ef-8fcd-a902f5382dee', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': '9e852872-788c-4dac-b7fb-d76d67e7a84f', 'attached_at': '', 'detached_at': '', 'volume_id': 'ccb7c984-4606-40ef-8fcd-a902f5382dee', 'serial': '} get_encryption_metadata /usr/lib/python3.9/site-packages/os_brick/encryptors/__init__.py:135
Jan 20 15:10:12 compute-1 nova_compute[225855]: 2026-01-20 15:10:12.166 225859 DEBUG barbicanclient.client [None req-05c30a27-224b-486c-a6a3-1a1f679cb609 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Creating Client object Client /usr/lib/python3.9/site-packages/barbicanclient/client.py:163
Jan 20 15:10:12 compute-1 nova_compute[225855]: 2026-01-20 15:10:12.189 225859 DEBUG barbicanclient.v1.secrets [None req-05c30a27-224b-486c-a6a3-1a1f679cb609 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Getting secret - Secret href: https://barbican-internal.openstack.svc:9311/secrets/349091f0-57cd-4e7a-b935-410f986b1500 get /usr/lib/python3.9/site-packages/barbicanclient/v1/secrets.py:514
Jan 20 15:10:12 compute-1 nova_compute[225855]: 2026-01-20 15:10:12.190 225859 INFO barbicanclient.base [None req-05c30a27-224b-486c-a6a3-1a1f679cb609 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Calculated Secrets uuid ref: secrets/349091f0-57cd-4e7a-b935-410f986b1500
Jan 20 15:10:12 compute-1 nova_compute[225855]: 2026-01-20 15:10:12.228 225859 DEBUG barbicanclient.client [None req-05c30a27-224b-486c-a6a3-1a1f679cb609 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Jan 20 15:10:12 compute-1 nova_compute[225855]: 2026-01-20 15:10:12.228 225859 INFO barbicanclient.base [None req-05c30a27-224b-486c-a6a3-1a1f679cb609 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Calculated Secrets uuid ref: secrets/349091f0-57cd-4e7a-b935-410f986b1500
Jan 20 15:10:12 compute-1 nova_compute[225855]: 2026-01-20 15:10:12.255 225859 DEBUG barbicanclient.client [None req-05c30a27-224b-486c-a6a3-1a1f679cb609 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Jan 20 15:10:12 compute-1 nova_compute[225855]: 2026-01-20 15:10:12.255 225859 INFO barbicanclient.base [None req-05c30a27-224b-486c-a6a3-1a1f679cb609 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Calculated Secrets uuid ref: secrets/349091f0-57cd-4e7a-b935-410f986b1500
Jan 20 15:10:12 compute-1 nova_compute[225855]: 2026-01-20 15:10:12.280 225859 DEBUG barbicanclient.client [None req-05c30a27-224b-486c-a6a3-1a1f679cb609 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Jan 20 15:10:12 compute-1 nova_compute[225855]: 2026-01-20 15:10:12.280 225859 INFO barbicanclient.base [None req-05c30a27-224b-486c-a6a3-1a1f679cb609 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Calculated Secrets uuid ref: secrets/349091f0-57cd-4e7a-b935-410f986b1500
Jan 20 15:10:12 compute-1 nova_compute[225855]: 2026-01-20 15:10:12.300 225859 DEBUG barbicanclient.client [None req-05c30a27-224b-486c-a6a3-1a1f679cb609 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Jan 20 15:10:12 compute-1 nova_compute[225855]: 2026-01-20 15:10:12.301 225859 INFO barbicanclient.base [None req-05c30a27-224b-486c-a6a3-1a1f679cb609 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Calculated Secrets uuid ref: secrets/349091f0-57cd-4e7a-b935-410f986b1500
Jan 20 15:10:12 compute-1 sudo[296840]: pam_unix(sudo:session): session closed for user root
Jan 20 15:10:12 compute-1 nova_compute[225855]: 2026-01-20 15:10:12.325 225859 DEBUG barbicanclient.client [None req-05c30a27-224b-486c-a6a3-1a1f679cb609 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Jan 20 15:10:12 compute-1 nova_compute[225855]: 2026-01-20 15:10:12.325 225859 INFO barbicanclient.base [None req-05c30a27-224b-486c-a6a3-1a1f679cb609 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Calculated Secrets uuid ref: secrets/349091f0-57cd-4e7a-b935-410f986b1500
Jan 20 15:10:12 compute-1 nova_compute[225855]: 2026-01-20 15:10:12.379 225859 DEBUG barbicanclient.client [None req-05c30a27-224b-486c-a6a3-1a1f679cb609 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Jan 20 15:10:12 compute-1 nova_compute[225855]: 2026-01-20 15:10:12.379 225859 INFO barbicanclient.base [None req-05c30a27-224b-486c-a6a3-1a1f679cb609 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Calculated Secrets uuid ref: secrets/349091f0-57cd-4e7a-b935-410f986b1500
Jan 20 15:10:12 compute-1 nova_compute[225855]: 2026-01-20 15:10:12.423 225859 DEBUG barbicanclient.client [None req-05c30a27-224b-486c-a6a3-1a1f679cb609 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Jan 20 15:10:12 compute-1 nova_compute[225855]: 2026-01-20 15:10:12.423 225859 INFO barbicanclient.base [None req-05c30a27-224b-486c-a6a3-1a1f679cb609 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Calculated Secrets uuid ref: secrets/349091f0-57cd-4e7a-b935-410f986b1500
Jan 20 15:10:12 compute-1 nova_compute[225855]: 2026-01-20 15:10:12.443 225859 DEBUG barbicanclient.client [None req-05c30a27-224b-486c-a6a3-1a1f679cb609 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Jan 20 15:10:12 compute-1 nova_compute[225855]: 2026-01-20 15:10:12.444 225859 INFO barbicanclient.base [None req-05c30a27-224b-486c-a6a3-1a1f679cb609 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Calculated Secrets uuid ref: secrets/349091f0-57cd-4e7a-b935-410f986b1500
Jan 20 15:10:12 compute-1 nova_compute[225855]: 2026-01-20 15:10:12.467 225859 DEBUG barbicanclient.client [None req-05c30a27-224b-486c-a6a3-1a1f679cb609 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Jan 20 15:10:12 compute-1 nova_compute[225855]: 2026-01-20 15:10:12.468 225859 INFO barbicanclient.base [None req-05c30a27-224b-486c-a6a3-1a1f679cb609 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Calculated Secrets uuid ref: secrets/349091f0-57cd-4e7a-b935-410f986b1500
Jan 20 15:10:12 compute-1 nova_compute[225855]: 2026-01-20 15:10:12.494 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:10:12 compute-1 nova_compute[225855]: 2026-01-20 15:10:12.497 225859 DEBUG barbicanclient.client [None req-05c30a27-224b-486c-a6a3-1a1f679cb609 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Jan 20 15:10:12 compute-1 nova_compute[225855]: 2026-01-20 15:10:12.497 225859 INFO barbicanclient.base [None req-05c30a27-224b-486c-a6a3-1a1f679cb609 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Calculated Secrets uuid ref: secrets/349091f0-57cd-4e7a-b935-410f986b1500
Jan 20 15:10:12 compute-1 nova_compute[225855]: 2026-01-20 15:10:12.530 225859 DEBUG barbicanclient.client [None req-05c30a27-224b-486c-a6a3-1a1f679cb609 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Jan 20 15:10:12 compute-1 nova_compute[225855]: 2026-01-20 15:10:12.530 225859 INFO barbicanclient.base [None req-05c30a27-224b-486c-a6a3-1a1f679cb609 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Calculated Secrets uuid ref: secrets/349091f0-57cd-4e7a-b935-410f986b1500
Jan 20 15:10:12 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:10:12 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:10:12 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:10:12.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:10:12 compute-1 nova_compute[225855]: 2026-01-20 15:10:12.552 225859 DEBUG barbicanclient.client [None req-05c30a27-224b-486c-a6a3-1a1f679cb609 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Jan 20 15:10:12 compute-1 nova_compute[225855]: 2026-01-20 15:10:12.552 225859 INFO barbicanclient.base [None req-05c30a27-224b-486c-a6a3-1a1f679cb609 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Calculated Secrets uuid ref: secrets/349091f0-57cd-4e7a-b935-410f986b1500
Jan 20 15:10:12 compute-1 nova_compute[225855]: 2026-01-20 15:10:12.558 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:10:12 compute-1 nova_compute[225855]: 2026-01-20 15:10:12.572 225859 DEBUG barbicanclient.client [None req-05c30a27-224b-486c-a6a3-1a1f679cb609 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Jan 20 15:10:12 compute-1 nova_compute[225855]: 2026-01-20 15:10:12.572 225859 INFO barbicanclient.base [None req-05c30a27-224b-486c-a6a3-1a1f679cb609 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Calculated Secrets uuid ref: secrets/349091f0-57cd-4e7a-b935-410f986b1500
Jan 20 15:10:12 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:10:12 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:10:12 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:10:12 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:10:12 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 20 15:10:12 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/1681359816' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:10:12 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 20 15:10:12 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:10:12 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 15:10:12 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:10:12 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 15:10:12 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 15:10:12 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:10:12 compute-1 nova_compute[225855]: 2026-01-20 15:10:12.595 225859 DEBUG barbicanclient.client [None req-05c30a27-224b-486c-a6a3-1a1f679cb609 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Jan 20 15:10:12 compute-1 nova_compute[225855]: 2026-01-20 15:10:12.596 225859 INFO barbicanclient.base [None req-05c30a27-224b-486c-a6a3-1a1f679cb609 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Calculated Secrets uuid ref: secrets/349091f0-57cd-4e7a-b935-410f986b1500
Jan 20 15:10:12 compute-1 nova_compute[225855]: 2026-01-20 15:10:12.615 225859 DEBUG barbicanclient.client [None req-05c30a27-224b-486c-a6a3-1a1f679cb609 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Jan 20 15:10:12 compute-1 nova_compute[225855]: 2026-01-20 15:10:12.616 225859 DEBUG nova.virt.libvirt.host [None req-05c30a27-224b-486c-a6a3-1a1f679cb609 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Secret XML: <secret ephemeral="no" private="no">
Jan 20 15:10:12 compute-1 nova_compute[225855]:   <usage type="volume">
Jan 20 15:10:12 compute-1 nova_compute[225855]:     <volume>ccb7c984-4606-40ef-8fcd-a902f5382dee</volume>
Jan 20 15:10:12 compute-1 nova_compute[225855]:   </usage>
Jan 20 15:10:12 compute-1 nova_compute[225855]: </secret>
Jan 20 15:10:12 compute-1 nova_compute[225855]:  create_secret /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1131
Jan 20 15:10:12 compute-1 nova_compute[225855]: 2026-01-20 15:10:12.644 225859 DEBUG nova.virt.libvirt.vif [None req-05c30a27-224b-486c-a6a3-1a1f679cb609 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T15:10:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-1030995833',display_name='tempest-TestVolumeBootPattern-server-1030995833',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-1030995833',id=173,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a49638950e1543fa8e0d251af5479623',ramdisk_id='',reservation_id='r-wer4a7li',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-194644003',owner_user_name='tempest-TestVolumeBootPattern-194644003-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T15:10:04Z,user_data=None,user_id='b
f422e55e158420cbdae75f07a3bb97a',uuid=9e852872-788c-4dac-b7fb-d76d67e7a84f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f9f19cf7-87f5-4dd4-a7be-78086c84e176", "address": "fa:16:3e:19:e2:73", "network": {"id": "b677f1a9-dbaa-4373-8466-bd9ccf067b91", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-408170906-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a49638950e1543fa8e0d251af5479623", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf9f19cf7-87", "ovs_interfaceid": "f9f19cf7-87f5-4dd4-a7be-78086c84e176", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 20 15:10:12 compute-1 nova_compute[225855]: 2026-01-20 15:10:12.645 225859 DEBUG nova.network.os_vif_util [None req-05c30a27-224b-486c-a6a3-1a1f679cb609 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Converting VIF {"id": "f9f19cf7-87f5-4dd4-a7be-78086c84e176", "address": "fa:16:3e:19:e2:73", "network": {"id": "b677f1a9-dbaa-4373-8466-bd9ccf067b91", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-408170906-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a49638950e1543fa8e0d251af5479623", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf9f19cf7-87", "ovs_interfaceid": "f9f19cf7-87f5-4dd4-a7be-78086c84e176", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 15:10:12 compute-1 nova_compute[225855]: 2026-01-20 15:10:12.646 225859 DEBUG nova.network.os_vif_util [None req-05c30a27-224b-486c-a6a3-1a1f679cb609 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:19:e2:73,bridge_name='br-int',has_traffic_filtering=True,id=f9f19cf7-87f5-4dd4-a7be-78086c84e176,network=Network(b677f1a9-dbaa-4373-8466-bd9ccf067b91),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf9f19cf7-87') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 15:10:12 compute-1 nova_compute[225855]: 2026-01-20 15:10:12.648 225859 DEBUG nova.objects.instance [None req-05c30a27-224b-486c-a6a3-1a1f679cb609 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Lazy-loading 'pci_devices' on Instance uuid 9e852872-788c-4dac-b7fb-d76d67e7a84f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:10:12 compute-1 nova_compute[225855]: 2026-01-20 15:10:12.664 225859 DEBUG nova.virt.libvirt.driver [None req-05c30a27-224b-486c-a6a3-1a1f679cb609 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] [instance: 9e852872-788c-4dac-b7fb-d76d67e7a84f] End _get_guest_xml xml=<domain type="kvm">
Jan 20 15:10:12 compute-1 nova_compute[225855]:   <uuid>9e852872-788c-4dac-b7fb-d76d67e7a84f</uuid>
Jan 20 15:10:12 compute-1 nova_compute[225855]:   <name>instance-000000ad</name>
Jan 20 15:10:12 compute-1 nova_compute[225855]:   <memory>131072</memory>
Jan 20 15:10:12 compute-1 nova_compute[225855]:   <vcpu>1</vcpu>
Jan 20 15:10:12 compute-1 nova_compute[225855]:   <metadata>
Jan 20 15:10:12 compute-1 nova_compute[225855]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 15:10:12 compute-1 nova_compute[225855]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 15:10:12 compute-1 nova_compute[225855]:       <nova:name>tempest-TestVolumeBootPattern-server-1030995833</nova:name>
Jan 20 15:10:12 compute-1 nova_compute[225855]:       <nova:creationTime>2026-01-20 15:10:11</nova:creationTime>
Jan 20 15:10:12 compute-1 nova_compute[225855]:       <nova:flavor name="m1.nano">
Jan 20 15:10:12 compute-1 nova_compute[225855]:         <nova:memory>128</nova:memory>
Jan 20 15:10:12 compute-1 nova_compute[225855]:         <nova:disk>1</nova:disk>
Jan 20 15:10:12 compute-1 nova_compute[225855]:         <nova:swap>0</nova:swap>
Jan 20 15:10:12 compute-1 nova_compute[225855]:         <nova:ephemeral>0</nova:ephemeral>
Jan 20 15:10:12 compute-1 nova_compute[225855]:         <nova:vcpus>1</nova:vcpus>
Jan 20 15:10:12 compute-1 nova_compute[225855]:       </nova:flavor>
Jan 20 15:10:12 compute-1 nova_compute[225855]:       <nova:owner>
Jan 20 15:10:12 compute-1 nova_compute[225855]:         <nova:user uuid="bf422e55e158420cbdae75f07a3bb97a">tempest-TestVolumeBootPattern-194644003-project-member</nova:user>
Jan 20 15:10:12 compute-1 nova_compute[225855]:         <nova:project uuid="a49638950e1543fa8e0d251af5479623">tempest-TestVolumeBootPattern-194644003</nova:project>
Jan 20 15:10:12 compute-1 nova_compute[225855]:       </nova:owner>
Jan 20 15:10:12 compute-1 nova_compute[225855]:       <nova:ports>
Jan 20 15:10:12 compute-1 nova_compute[225855]:         <nova:port uuid="f9f19cf7-87f5-4dd4-a7be-78086c84e176">
Jan 20 15:10:12 compute-1 nova_compute[225855]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Jan 20 15:10:12 compute-1 nova_compute[225855]:         </nova:port>
Jan 20 15:10:12 compute-1 nova_compute[225855]:       </nova:ports>
Jan 20 15:10:12 compute-1 nova_compute[225855]:     </nova:instance>
Jan 20 15:10:12 compute-1 nova_compute[225855]:   </metadata>
Jan 20 15:10:12 compute-1 nova_compute[225855]:   <sysinfo type="smbios">
Jan 20 15:10:12 compute-1 nova_compute[225855]:     <system>
Jan 20 15:10:12 compute-1 nova_compute[225855]:       <entry name="manufacturer">RDO</entry>
Jan 20 15:10:12 compute-1 nova_compute[225855]:       <entry name="product">OpenStack Compute</entry>
Jan 20 15:10:12 compute-1 nova_compute[225855]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 15:10:12 compute-1 nova_compute[225855]:       <entry name="serial">9e852872-788c-4dac-b7fb-d76d67e7a84f</entry>
Jan 20 15:10:12 compute-1 nova_compute[225855]:       <entry name="uuid">9e852872-788c-4dac-b7fb-d76d67e7a84f</entry>
Jan 20 15:10:12 compute-1 nova_compute[225855]:       <entry name="family">Virtual Machine</entry>
Jan 20 15:10:12 compute-1 nova_compute[225855]:     </system>
Jan 20 15:10:12 compute-1 nova_compute[225855]:   </sysinfo>
Jan 20 15:10:12 compute-1 nova_compute[225855]:   <os>
Jan 20 15:10:12 compute-1 nova_compute[225855]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 20 15:10:12 compute-1 nova_compute[225855]:     <boot dev="hd"/>
Jan 20 15:10:12 compute-1 nova_compute[225855]:     <smbios mode="sysinfo"/>
Jan 20 15:10:12 compute-1 nova_compute[225855]:   </os>
Jan 20 15:10:12 compute-1 nova_compute[225855]:   <features>
Jan 20 15:10:12 compute-1 nova_compute[225855]:     <acpi/>
Jan 20 15:10:12 compute-1 nova_compute[225855]:     <apic/>
Jan 20 15:10:12 compute-1 nova_compute[225855]:     <vmcoreinfo/>
Jan 20 15:10:12 compute-1 nova_compute[225855]:   </features>
Jan 20 15:10:12 compute-1 nova_compute[225855]:   <clock offset="utc">
Jan 20 15:10:12 compute-1 nova_compute[225855]:     <timer name="pit" tickpolicy="delay"/>
Jan 20 15:10:12 compute-1 nova_compute[225855]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 20 15:10:12 compute-1 nova_compute[225855]:     <timer name="hpet" present="no"/>
Jan 20 15:10:12 compute-1 nova_compute[225855]:   </clock>
Jan 20 15:10:12 compute-1 nova_compute[225855]:   <cpu mode="custom" match="exact">
Jan 20 15:10:12 compute-1 nova_compute[225855]:     <model>Nehalem</model>
Jan 20 15:10:12 compute-1 nova_compute[225855]:     <topology sockets="1" cores="1" threads="1"/>
Jan 20 15:10:12 compute-1 nova_compute[225855]:   </cpu>
Jan 20 15:10:12 compute-1 nova_compute[225855]:   <devices>
Jan 20 15:10:12 compute-1 nova_compute[225855]:     <disk type="network" device="cdrom">
Jan 20 15:10:12 compute-1 nova_compute[225855]:       <driver type="raw" cache="none"/>
Jan 20 15:10:12 compute-1 nova_compute[225855]:       <source protocol="rbd" name="vms/9e852872-788c-4dac-b7fb-d76d67e7a84f_disk.config">
Jan 20 15:10:12 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 15:10:12 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 15:10:12 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 15:10:12 compute-1 nova_compute[225855]:       </source>
Jan 20 15:10:12 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 15:10:12 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 15:10:12 compute-1 nova_compute[225855]:       </auth>
Jan 20 15:10:12 compute-1 nova_compute[225855]:       <target dev="sda" bus="sata"/>
Jan 20 15:10:12 compute-1 nova_compute[225855]:     </disk>
Jan 20 15:10:12 compute-1 nova_compute[225855]:     <disk type="network" device="disk">
Jan 20 15:10:12 compute-1 nova_compute[225855]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 20 15:10:12 compute-1 nova_compute[225855]:       <source protocol="rbd" name="volumes/volume-ccb7c984-4606-40ef-8fcd-a902f5382dee">
Jan 20 15:10:12 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 15:10:12 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 15:10:12 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 15:10:12 compute-1 nova_compute[225855]:       </source>
Jan 20 15:10:12 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 15:10:12 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 15:10:12 compute-1 nova_compute[225855]:       </auth>
Jan 20 15:10:12 compute-1 nova_compute[225855]:       <target dev="vda" bus="virtio"/>
Jan 20 15:10:12 compute-1 nova_compute[225855]:       <serial>ccb7c984-4606-40ef-8fcd-a902f5382dee</serial>
Jan 20 15:10:12 compute-1 nova_compute[225855]:       <encryption format="luks">
Jan 20 15:10:12 compute-1 nova_compute[225855]:         <secret type="passphrase" uuid="4b9c7f6b-c79e-4c34-82d7-409b96a12d36"/>
Jan 20 15:10:12 compute-1 nova_compute[225855]:       </encryption>
Jan 20 15:10:12 compute-1 nova_compute[225855]:     </disk>
Jan 20 15:10:12 compute-1 nova_compute[225855]:     <interface type="ethernet">
Jan 20 15:10:12 compute-1 nova_compute[225855]:       <mac address="fa:16:3e:19:e2:73"/>
Jan 20 15:10:12 compute-1 nova_compute[225855]:       <model type="virtio"/>
Jan 20 15:10:12 compute-1 nova_compute[225855]:       <driver name="vhost" rx_queue_size="512"/>
Jan 20 15:10:12 compute-1 nova_compute[225855]:       <mtu size="1442"/>
Jan 20 15:10:12 compute-1 nova_compute[225855]:       <target dev="tapf9f19cf7-87"/>
Jan 20 15:10:12 compute-1 nova_compute[225855]:     </interface>
Jan 20 15:10:12 compute-1 nova_compute[225855]:     <serial type="pty">
Jan 20 15:10:12 compute-1 nova_compute[225855]:       <log file="/var/lib/nova/instances/9e852872-788c-4dac-b7fb-d76d67e7a84f/console.log" append="off"/>
Jan 20 15:10:12 compute-1 nova_compute[225855]:     </serial>
Jan 20 15:10:12 compute-1 nova_compute[225855]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 20 15:10:12 compute-1 nova_compute[225855]:     <video>
Jan 20 15:10:12 compute-1 nova_compute[225855]:       <model type="virtio"/>
Jan 20 15:10:12 compute-1 nova_compute[225855]:     </video>
Jan 20 15:10:12 compute-1 nova_compute[225855]:     <input type="tablet" bus="usb"/>
Jan 20 15:10:12 compute-1 nova_compute[225855]:     <rng model="virtio">
Jan 20 15:10:12 compute-1 nova_compute[225855]:       <backend model="random">/dev/urandom</backend>
Jan 20 15:10:12 compute-1 nova_compute[225855]:     </rng>
Jan 20 15:10:12 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root"/>
Jan 20 15:10:12 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:10:12 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:10:12 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:10:12 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:10:12 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:10:12 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:10:12 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:10:12 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:10:12 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:10:12 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:10:12 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:10:12 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:10:12 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:10:12 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:10:12 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:10:12 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:10:12 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:10:12 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:10:12 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:10:12 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:10:12 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:10:12 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:10:12 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:10:12 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:10:12 compute-1 nova_compute[225855]:     <controller type="usb" index="0"/>
Jan 20 15:10:12 compute-1 nova_compute[225855]:     <memballoon model="virtio">
Jan 20 15:10:12 compute-1 nova_compute[225855]:       <stats period="10"/>
Jan 20 15:10:12 compute-1 nova_compute[225855]:     </memballoon>
Jan 20 15:10:12 compute-1 nova_compute[225855]:   </devices>
Jan 20 15:10:12 compute-1 nova_compute[225855]: </domain>
Jan 20 15:10:12 compute-1 nova_compute[225855]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 20 15:10:12 compute-1 nova_compute[225855]: 2026-01-20 15:10:12.665 225859 DEBUG nova.compute.manager [None req-05c30a27-224b-486c-a6a3-1a1f679cb609 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] [instance: 9e852872-788c-4dac-b7fb-d76d67e7a84f] Preparing to wait for external event network-vif-plugged-f9f19cf7-87f5-4dd4-a7be-78086c84e176 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 20 15:10:12 compute-1 nova_compute[225855]: 2026-01-20 15:10:12.666 225859 DEBUG oslo_concurrency.lockutils [None req-05c30a27-224b-486c-a6a3-1a1f679cb609 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Acquiring lock "9e852872-788c-4dac-b7fb-d76d67e7a84f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:10:12 compute-1 nova_compute[225855]: 2026-01-20 15:10:12.666 225859 DEBUG oslo_concurrency.lockutils [None req-05c30a27-224b-486c-a6a3-1a1f679cb609 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Lock "9e852872-788c-4dac-b7fb-d76d67e7a84f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:10:12 compute-1 nova_compute[225855]: 2026-01-20 15:10:12.667 225859 DEBUG oslo_concurrency.lockutils [None req-05c30a27-224b-486c-a6a3-1a1f679cb609 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Lock "9e852872-788c-4dac-b7fb-d76d67e7a84f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:10:12 compute-1 nova_compute[225855]: 2026-01-20 15:10:12.668 225859 DEBUG nova.virt.libvirt.vif [None req-05c30a27-224b-486c-a6a3-1a1f679cb609 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T15:10:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-1030995833',display_name='tempest-TestVolumeBootPattern-server-1030995833',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-1030995833',id=173,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a49638950e1543fa8e0d251af5479623',ramdisk_id='',reservation_id='r-wer4a7li',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-194644003',owner_user_name='tempest-TestVolumeBootPattern-194644003-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T15:10:04Z,user_data=None,
user_id='bf422e55e158420cbdae75f07a3bb97a',uuid=9e852872-788c-4dac-b7fb-d76d67e7a84f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f9f19cf7-87f5-4dd4-a7be-78086c84e176", "address": "fa:16:3e:19:e2:73", "network": {"id": "b677f1a9-dbaa-4373-8466-bd9ccf067b91", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-408170906-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a49638950e1543fa8e0d251af5479623", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf9f19cf7-87", "ovs_interfaceid": "f9f19cf7-87f5-4dd4-a7be-78086c84e176", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 20 15:10:12 compute-1 nova_compute[225855]: 2026-01-20 15:10:12.668 225859 DEBUG nova.network.os_vif_util [None req-05c30a27-224b-486c-a6a3-1a1f679cb609 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Converting VIF {"id": "f9f19cf7-87f5-4dd4-a7be-78086c84e176", "address": "fa:16:3e:19:e2:73", "network": {"id": "b677f1a9-dbaa-4373-8466-bd9ccf067b91", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-408170906-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a49638950e1543fa8e0d251af5479623", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf9f19cf7-87", "ovs_interfaceid": "f9f19cf7-87f5-4dd4-a7be-78086c84e176", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 15:10:12 compute-1 nova_compute[225855]: 2026-01-20 15:10:12.669 225859 DEBUG nova.network.os_vif_util [None req-05c30a27-224b-486c-a6a3-1a1f679cb609 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:19:e2:73,bridge_name='br-int',has_traffic_filtering=True,id=f9f19cf7-87f5-4dd4-a7be-78086c84e176,network=Network(b677f1a9-dbaa-4373-8466-bd9ccf067b91),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf9f19cf7-87') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 15:10:12 compute-1 nova_compute[225855]: 2026-01-20 15:10:12.670 225859 DEBUG os_vif [None req-05c30a27-224b-486c-a6a3-1a1f679cb609 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:19:e2:73,bridge_name='br-int',has_traffic_filtering=True,id=f9f19cf7-87f5-4dd4-a7be-78086c84e176,network=Network(b677f1a9-dbaa-4373-8466-bd9ccf067b91),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf9f19cf7-87') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 20 15:10:12 compute-1 nova_compute[225855]: 2026-01-20 15:10:12.671 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:10:12 compute-1 nova_compute[225855]: 2026-01-20 15:10:12.672 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:10:12 compute-1 nova_compute[225855]: 2026-01-20 15:10:12.673 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 15:10:12 compute-1 nova_compute[225855]: 2026-01-20 15:10:12.677 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:10:12 compute-1 nova_compute[225855]: 2026-01-20 15:10:12.677 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf9f19cf7-87, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:10:12 compute-1 nova_compute[225855]: 2026-01-20 15:10:12.678 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf9f19cf7-87, col_values=(('external_ids', {'iface-id': 'f9f19cf7-87f5-4dd4-a7be-78086c84e176', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:19:e2:73', 'vm-uuid': '9e852872-788c-4dac-b7fb-d76d67e7a84f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:10:12 compute-1 nova_compute[225855]: 2026-01-20 15:10:12.679 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:10:12 compute-1 NetworkManager[49104]: <info>  [1768921812.6807] manager: (tapf9f19cf7-87): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/317)
Jan 20 15:10:12 compute-1 nova_compute[225855]: 2026-01-20 15:10:12.682 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 15:10:12 compute-1 nova_compute[225855]: 2026-01-20 15:10:12.687 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:10:12 compute-1 nova_compute[225855]: 2026-01-20 15:10:12.689 225859 INFO os_vif [None req-05c30a27-224b-486c-a6a3-1a1f679cb609 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:19:e2:73,bridge_name='br-int',has_traffic_filtering=True,id=f9f19cf7-87f5-4dd4-a7be-78086c84e176,network=Network(b677f1a9-dbaa-4373-8466-bd9ccf067b91),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf9f19cf7-87')
Jan 20 15:10:12 compute-1 nova_compute[225855]: 2026-01-20 15:10:12.748 225859 DEBUG nova.virt.libvirt.driver [None req-05c30a27-224b-486c-a6a3-1a1f679cb609 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 15:10:12 compute-1 nova_compute[225855]: 2026-01-20 15:10:12.749 225859 DEBUG nova.virt.libvirt.driver [None req-05c30a27-224b-486c-a6a3-1a1f679cb609 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 15:10:12 compute-1 nova_compute[225855]: 2026-01-20 15:10:12.749 225859 DEBUG nova.virt.libvirt.driver [None req-05c30a27-224b-486c-a6a3-1a1f679cb609 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] No VIF found with MAC fa:16:3e:19:e2:73, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 20 15:10:12 compute-1 nova_compute[225855]: 2026-01-20 15:10:12.750 225859 INFO nova.virt.libvirt.driver [None req-05c30a27-224b-486c-a6a3-1a1f679cb609 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] [instance: 9e852872-788c-4dac-b7fb-d76d67e7a84f] Using config drive
Jan 20 15:10:12 compute-1 nova_compute[225855]: 2026-01-20 15:10:12.774 225859 DEBUG nova.storage.rbd_utils [None req-05c30a27-224b-486c-a6a3-1a1f679cb609 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] rbd image 9e852872-788c-4dac-b7fb-d76d67e7a84f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:10:12 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:10:12 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:10:12 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:10:12.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:10:13 compute-1 nova_compute[225855]: 2026-01-20 15:10:13.305 225859 INFO nova.virt.libvirt.driver [None req-05c30a27-224b-486c-a6a3-1a1f679cb609 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] [instance: 9e852872-788c-4dac-b7fb-d76d67e7a84f] Creating config drive at /var/lib/nova/instances/9e852872-788c-4dac-b7fb-d76d67e7a84f/disk.config
Jan 20 15:10:13 compute-1 nova_compute[225855]: 2026-01-20 15:10:13.310 225859 DEBUG oslo_concurrency.processutils [None req-05c30a27-224b-486c-a6a3-1a1f679cb609 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9e852872-788c-4dac-b7fb-d76d67e7a84f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpd1zh_4aw execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:10:13 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e385 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:10:13 compute-1 nova_compute[225855]: 2026-01-20 15:10:13.442 225859 DEBUG oslo_concurrency.processutils [None req-05c30a27-224b-486c-a6a3-1a1f679cb609 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9e852872-788c-4dac-b7fb-d76d67e7a84f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpd1zh_4aw" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:10:13 compute-1 nova_compute[225855]: 2026-01-20 15:10:13.468 225859 DEBUG nova.storage.rbd_utils [None req-05c30a27-224b-486c-a6a3-1a1f679cb609 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] rbd image 9e852872-788c-4dac-b7fb-d76d67e7a84f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:10:13 compute-1 nova_compute[225855]: 2026-01-20 15:10:13.472 225859 DEBUG oslo_concurrency.processutils [None req-05c30a27-224b-486c-a6a3-1a1f679cb609 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9e852872-788c-4dac-b7fb-d76d67e7a84f/disk.config 9e852872-788c-4dac-b7fb-d76d67e7a84f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:10:13 compute-1 ceph-mon[81775]: pgmap v2608: 321 pgs: 321 active+clean; 487 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 1007 KiB/s wr, 94 op/s
Jan 20 15:10:13 compute-1 nova_compute[225855]: 2026-01-20 15:10:13.613 225859 DEBUG oslo_concurrency.processutils [None req-05c30a27-224b-486c-a6a3-1a1f679cb609 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9e852872-788c-4dac-b7fb-d76d67e7a84f/disk.config 9e852872-788c-4dac-b7fb-d76d67e7a84f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:10:13 compute-1 nova_compute[225855]: 2026-01-20 15:10:13.614 225859 INFO nova.virt.libvirt.driver [None req-05c30a27-224b-486c-a6a3-1a1f679cb609 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] [instance: 9e852872-788c-4dac-b7fb-d76d67e7a84f] Deleting local config drive /var/lib/nova/instances/9e852872-788c-4dac-b7fb-d76d67e7a84f/disk.config because it was imported into RBD.
Jan 20 15:10:13 compute-1 kernel: tapf9f19cf7-87: entered promiscuous mode
Jan 20 15:10:13 compute-1 NetworkManager[49104]: <info>  [1768921813.6601] manager: (tapf9f19cf7-87): new Tun device (/org/freedesktop/NetworkManager/Devices/318)
Jan 20 15:10:13 compute-1 systemd-udevd[296968]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 15:10:13 compute-1 ovn_controller[130490]: 2026-01-20T15:10:13Z|00769|binding|INFO|Claiming lport f9f19cf7-87f5-4dd4-a7be-78086c84e176 for this chassis.
Jan 20 15:10:13 compute-1 ovn_controller[130490]: 2026-01-20T15:10:13Z|00770|binding|INFO|f9f19cf7-87f5-4dd4-a7be-78086c84e176: Claiming fa:16:3e:19:e2:73 10.100.0.12
Jan 20 15:10:13 compute-1 nova_compute[225855]: 2026-01-20 15:10:13.699 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:10:13 compute-1 NetworkManager[49104]: <info>  [1768921813.7115] device (tapf9f19cf7-87): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 20 15:10:13 compute-1 NetworkManager[49104]: <info>  [1768921813.7137] device (tapf9f19cf7-87): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 20 15:10:13 compute-1 ovn_controller[130490]: 2026-01-20T15:10:13Z|00771|binding|INFO|Setting lport f9f19cf7-87f5-4dd4-a7be-78086c84e176 ovn-installed in OVS
Jan 20 15:10:13 compute-1 nova_compute[225855]: 2026-01-20 15:10:13.715 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:10:13 compute-1 systemd-machined[194361]: New machine qemu-89-instance-000000ad.
Jan 20 15:10:13 compute-1 systemd[1]: Started Virtual Machine qemu-89-instance-000000ad.
Jan 20 15:10:13 compute-1 ovn_controller[130490]: 2026-01-20T15:10:13Z|00772|binding|INFO|Setting lport f9f19cf7-87f5-4dd4-a7be-78086c84e176 up in Southbound
Jan 20 15:10:13 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:10:13.749 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:19:e2:73 10.100.0.12'], port_security=['fa:16:3e:19:e2:73 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '9e852872-788c-4dac-b7fb-d76d67e7a84f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b677f1a9-dbaa-4373-8466-bd9ccf067b91', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a49638950e1543fa8e0d251af5479623', 'neutron:revision_number': '2', 'neutron:security_group_ids': '64f52c7d-befd-4095-889b-e7a5da6821d7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=76ec1139-009f-49fe-bfde-07c0ef9e8b12, chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], logical_port=f9f19cf7-87f5-4dd4-a7be-78086c84e176) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:10:13 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:10:13.750 140354 INFO neutron.agent.ovn.metadata.agent [-] Port f9f19cf7-87f5-4dd4-a7be-78086c84e176 in datapath b677f1a9-dbaa-4373-8466-bd9ccf067b91 bound to our chassis
Jan 20 15:10:13 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:10:13.751 140354 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b677f1a9-dbaa-4373-8466-bd9ccf067b91
Jan 20 15:10:13 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:10:13.764 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[9fb3b639-50d8-4ee4-aac7-e74e76438a9e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:10:13 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:10:13.764 140354 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapb677f1a9-d1 in ovnmeta-b677f1a9-dbaa-4373-8466-bd9ccf067b91 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 20 15:10:13 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:10:13.766 229707 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapb677f1a9-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 20 15:10:13 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:10:13.766 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[fd589ad8-cee1-49a0-baea-0348e6efe766]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:10:13 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:10:13.767 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[1ea31feb-6c2f-4a8f-b73f-4d54411572ca]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:10:13 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:10:13.783 140466 DEBUG oslo.privsep.daemon [-] privsep: reply[648403fd-7ac4-4ebc-9eb4-90c1b81b25e5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:10:13 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:10:13.794 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[2b75b9b5-fff9-4dd2-aa00-c396a8e4738e]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:10:13 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:10:13.822 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[a67a248e-9d3a-49f1-ab46-53e62a9655f6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:10:13 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:10:13.828 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[c00eeb1f-77d2-4fc4-abb8-0d9996e52977]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:10:13 compute-1 NetworkManager[49104]: <info>  [1768921813.8292] manager: (tapb677f1a9-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/319)
Jan 20 15:10:13 compute-1 systemd-udevd[296972]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 15:10:13 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:10:13.864 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[1cac5e26-a394-4a77-8793-96a3cd91a4f7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:10:13 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:10:13.867 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[c8a84ada-a5ca-4b32-be0c-92f6a00201d3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:10:13 compute-1 NetworkManager[49104]: <info>  [1768921813.8981] device (tapb677f1a9-d0): carrier: link connected
Jan 20 15:10:13 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:10:13.905 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[f6062976-0f69-4e7f-a00f-d848803994de]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:10:13 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:10:13.923 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[b60ecf1b-dd25-455e-aaf5-e5507d4d1c76]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb677f1a9-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a4:c8:34'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 217], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 685282, 'reachable_time': 24586, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 297004, 'error': None, 'target': 'ovnmeta-b677f1a9-dbaa-4373-8466-bd9ccf067b91', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:10:13 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:10:13.947 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[0513ea1e-5a64-494e-b028-44432b484b4f]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fea4:c834'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 685282, 'tstamp': 685282}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 297005, 'error': None, 'target': 'ovnmeta-b677f1a9-dbaa-4373-8466-bd9ccf067b91', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:10:13 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:10:13.964 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[71cf3823-7806-46bb-8f4e-1ca9f3ef3f73]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb677f1a9-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a4:c8:34'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 217], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 685282, 'reachable_time': 24586, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 297006, 'error': None, 'target': 'ovnmeta-b677f1a9-dbaa-4373-8466-bd9ccf067b91', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:10:13 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:10:13.998 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[7fe80284-8492-43e6-b90f-bb6468743db9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:10:14 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:10:14.054 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[c59d8ce4-9a0c-4d2f-92f4-e787c3673c07]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:10:14 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:10:14.055 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb677f1a9-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:10:14 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:10:14.056 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 15:10:14 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:10:14.056 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb677f1a9-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:10:14 compute-1 nova_compute[225855]: 2026-01-20 15:10:14.058 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:10:14 compute-1 NetworkManager[49104]: <info>  [1768921814.0589] manager: (tapb677f1a9-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/320)
Jan 20 15:10:14 compute-1 kernel: tapb677f1a9-d0: entered promiscuous mode
Jan 20 15:10:14 compute-1 nova_compute[225855]: 2026-01-20 15:10:14.061 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:10:14 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:10:14.062 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb677f1a9-d0, col_values=(('external_ids', {'iface-id': '1aa285ce-a9ae-4d1e-b4b9-c72f4e0b8d65'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:10:14 compute-1 ovn_controller[130490]: 2026-01-20T15:10:14Z|00773|binding|INFO|Releasing lport 1aa285ce-a9ae-4d1e-b4b9-c72f4e0b8d65 from this chassis (sb_readonly=0)
Jan 20 15:10:14 compute-1 nova_compute[225855]: 2026-01-20 15:10:14.090 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:10:14 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:10:14.092 140354 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/b677f1a9-dbaa-4373-8466-bd9ccf067b91.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/b677f1a9-dbaa-4373-8466-bd9ccf067b91.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 20 15:10:14 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:10:14.092 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[e1b9ad93-b59f-456d-b3f4-229244a24080]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:10:14 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:10:14.093 140354 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 20 15:10:14 compute-1 ovn_metadata_agent[140349]: global
Jan 20 15:10:14 compute-1 ovn_metadata_agent[140349]:     log         /dev/log local0 debug
Jan 20 15:10:14 compute-1 ovn_metadata_agent[140349]:     log-tag     haproxy-metadata-proxy-b677f1a9-dbaa-4373-8466-bd9ccf067b91
Jan 20 15:10:14 compute-1 ovn_metadata_agent[140349]:     user        root
Jan 20 15:10:14 compute-1 ovn_metadata_agent[140349]:     group       root
Jan 20 15:10:14 compute-1 ovn_metadata_agent[140349]:     maxconn     1024
Jan 20 15:10:14 compute-1 ovn_metadata_agent[140349]:     pidfile     /var/lib/neutron/external/pids/b677f1a9-dbaa-4373-8466-bd9ccf067b91.pid.haproxy
Jan 20 15:10:14 compute-1 ovn_metadata_agent[140349]:     daemon
Jan 20 15:10:14 compute-1 ovn_metadata_agent[140349]: 
Jan 20 15:10:14 compute-1 ovn_metadata_agent[140349]: defaults
Jan 20 15:10:14 compute-1 ovn_metadata_agent[140349]:     log global
Jan 20 15:10:14 compute-1 ovn_metadata_agent[140349]:     mode http
Jan 20 15:10:14 compute-1 ovn_metadata_agent[140349]:     option httplog
Jan 20 15:10:14 compute-1 ovn_metadata_agent[140349]:     option dontlognull
Jan 20 15:10:14 compute-1 ovn_metadata_agent[140349]:     option http-server-close
Jan 20 15:10:14 compute-1 ovn_metadata_agent[140349]:     option forwardfor
Jan 20 15:10:14 compute-1 ovn_metadata_agent[140349]:     retries                 3
Jan 20 15:10:14 compute-1 ovn_metadata_agent[140349]:     timeout http-request    30s
Jan 20 15:10:14 compute-1 ovn_metadata_agent[140349]:     timeout connect         30s
Jan 20 15:10:14 compute-1 ovn_metadata_agent[140349]:     timeout client          32s
Jan 20 15:10:14 compute-1 ovn_metadata_agent[140349]:     timeout server          32s
Jan 20 15:10:14 compute-1 ovn_metadata_agent[140349]:     timeout http-keep-alive 30s
Jan 20 15:10:14 compute-1 ovn_metadata_agent[140349]: 
Jan 20 15:10:14 compute-1 ovn_metadata_agent[140349]: 
Jan 20 15:10:14 compute-1 ovn_metadata_agent[140349]: listen listener
Jan 20 15:10:14 compute-1 ovn_metadata_agent[140349]:     bind 169.254.169.254:80
Jan 20 15:10:14 compute-1 ovn_metadata_agent[140349]:     server metadata /var/lib/neutron/metadata_proxy
Jan 20 15:10:14 compute-1 ovn_metadata_agent[140349]:     http-request add-header X-OVN-Network-ID b677f1a9-dbaa-4373-8466-bd9ccf067b91
Jan 20 15:10:14 compute-1 ovn_metadata_agent[140349]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 20 15:10:14 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:10:14.094 140354 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-b677f1a9-dbaa-4373-8466-bd9ccf067b91', 'env', 'PROCESS_TAG=haproxy-b677f1a9-dbaa-4373-8466-bd9ccf067b91', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/b677f1a9-dbaa-4373-8466-bd9ccf067b91.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 20 15:10:14 compute-1 nova_compute[225855]: 2026-01-20 15:10:14.427 225859 DEBUG nova.network.neutron [req-9aea7350-b3f2-4184-b469-1f5945fa5981 req-aebb9bb6-8011-4901-b9c6-b66e7b6fb655 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 9e852872-788c-4dac-b7fb-d76d67e7a84f] Updated VIF entry in instance network info cache for port f9f19cf7-87f5-4dd4-a7be-78086c84e176. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 15:10:14 compute-1 nova_compute[225855]: 2026-01-20 15:10:14.428 225859 DEBUG nova.network.neutron [req-9aea7350-b3f2-4184-b469-1f5945fa5981 req-aebb9bb6-8011-4901-b9c6-b66e7b6fb655 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 9e852872-788c-4dac-b7fb-d76d67e7a84f] Updating instance_info_cache with network_info: [{"id": "f9f19cf7-87f5-4dd4-a7be-78086c84e176", "address": "fa:16:3e:19:e2:73", "network": {"id": "b677f1a9-dbaa-4373-8466-bd9ccf067b91", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-408170906-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a49638950e1543fa8e0d251af5479623", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf9f19cf7-87", "ovs_interfaceid": "f9f19cf7-87f5-4dd4-a7be-78086c84e176", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:10:14 compute-1 podman[297038]: 2026-01-20 15:10:14.456306393 +0000 UTC m=+0.048616560 container create e8bf3f1f69d8a9dca045ab143829c5417120c28e2a4cd71c921ad01e76c46c44 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b677f1a9-dbaa-4373-8466-bd9ccf067b91, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 15:10:14 compute-1 nova_compute[225855]: 2026-01-20 15:10:14.463 225859 DEBUG oslo_concurrency.lockutils [req-9aea7350-b3f2-4184-b469-1f5945fa5981 req-aebb9bb6-8011-4901-b9c6-b66e7b6fb655 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-9e852872-788c-4dac-b7fb-d76d67e7a84f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:10:14 compute-1 systemd[1]: Started libpod-conmon-e8bf3f1f69d8a9dca045ab143829c5417120c28e2a4cd71c921ad01e76c46c44.scope.
Jan 20 15:10:14 compute-1 nova_compute[225855]: 2026-01-20 15:10:14.523 225859 DEBUG nova.compute.manager [req-e86389a5-6022-4d4d-a058-d05b998b41e1 req-a43351fe-e447-49a3-9f7c-37752c1dd16e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 9e852872-788c-4dac-b7fb-d76d67e7a84f] Received event network-vif-plugged-f9f19cf7-87f5-4dd4-a7be-78086c84e176 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:10:14 compute-1 nova_compute[225855]: 2026-01-20 15:10:14.523 225859 DEBUG oslo_concurrency.lockutils [req-e86389a5-6022-4d4d-a058-d05b998b41e1 req-a43351fe-e447-49a3-9f7c-37752c1dd16e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "9e852872-788c-4dac-b7fb-d76d67e7a84f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:10:14 compute-1 nova_compute[225855]: 2026-01-20 15:10:14.523 225859 DEBUG oslo_concurrency.lockutils [req-e86389a5-6022-4d4d-a058-d05b998b41e1 req-a43351fe-e447-49a3-9f7c-37752c1dd16e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "9e852872-788c-4dac-b7fb-d76d67e7a84f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:10:14 compute-1 nova_compute[225855]: 2026-01-20 15:10:14.524 225859 DEBUG oslo_concurrency.lockutils [req-e86389a5-6022-4d4d-a058-d05b998b41e1 req-a43351fe-e447-49a3-9f7c-37752c1dd16e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "9e852872-788c-4dac-b7fb-d76d67e7a84f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:10:14 compute-1 nova_compute[225855]: 2026-01-20 15:10:14.524 225859 DEBUG nova.compute.manager [req-e86389a5-6022-4d4d-a058-d05b998b41e1 req-a43351fe-e447-49a3-9f7c-37752c1dd16e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 9e852872-788c-4dac-b7fb-d76d67e7a84f] Processing event network-vif-plugged-f9f19cf7-87f5-4dd4-a7be-78086c84e176 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 20 15:10:14 compute-1 podman[297038]: 2026-01-20 15:10:14.42977661 +0000 UTC m=+0.022086797 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 20 15:10:14 compute-1 systemd[1]: Started libcrun container.
Jan 20 15:10:14 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5075690fd2baf5ce67a4e697a92358c36b0bceb39f08aa2bba2b013ccbee9916/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 20 15:10:14 compute-1 podman[297038]: 2026-01-20 15:10:14.544356341 +0000 UTC m=+0.136666528 container init e8bf3f1f69d8a9dca045ab143829c5417120c28e2a4cd71c921ad01e76c46c44 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b677f1a9-dbaa-4373-8466-bd9ccf067b91, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3)
Jan 20 15:10:14 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:10:14 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:10:14 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:10:14.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:10:14 compute-1 podman[297038]: 2026-01-20 15:10:14.551485904 +0000 UTC m=+0.143796071 container start e8bf3f1f69d8a9dca045ab143829c5417120c28e2a4cd71c921ad01e76c46c44 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b677f1a9-dbaa-4373-8466-bd9ccf067b91, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 20 15:10:14 compute-1 neutron-haproxy-ovnmeta-b677f1a9-dbaa-4373-8466-bd9ccf067b91[297086]: [NOTICE]   (297093) : New worker (297095) forked
Jan 20 15:10:14 compute-1 neutron-haproxy-ovnmeta-b677f1a9-dbaa-4373-8466-bd9ccf067b91[297086]: [NOTICE]   (297093) : Loading success.
Jan 20 15:10:14 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/1131765128' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:10:14 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/1131765128' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:10:14 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:10:14 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:10:14 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:10:14.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:10:15 compute-1 ceph-mon[81775]: pgmap v2609: 321 pgs: 321 active+clean; 502 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 635 KiB/s rd, 2.1 MiB/s wr, 85 op/s
Jan 20 15:10:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:10:16.429 140354 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:10:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:10:16.430 140354 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:10:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:10:16.431 140354 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:10:16 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:10:16 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:10:16 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:10:16.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:10:16 compute-1 ceph-mon[81775]: pgmap v2610: 321 pgs: 321 active+clean; 511 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 721 KiB/s rd, 2.2 MiB/s wr, 101 op/s
Jan 20 15:10:16 compute-1 nova_compute[225855]: 2026-01-20 15:10:16.616 225859 DEBUG nova.compute.manager [req-b4be0a5d-d2d4-463a-882b-65187d28574a req-35613700-c0f8-4356-903c-5d673b4011a1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 9e852872-788c-4dac-b7fb-d76d67e7a84f] Received event network-vif-plugged-f9f19cf7-87f5-4dd4-a7be-78086c84e176 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:10:16 compute-1 nova_compute[225855]: 2026-01-20 15:10:16.616 225859 DEBUG oslo_concurrency.lockutils [req-b4be0a5d-d2d4-463a-882b-65187d28574a req-35613700-c0f8-4356-903c-5d673b4011a1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "9e852872-788c-4dac-b7fb-d76d67e7a84f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:10:16 compute-1 nova_compute[225855]: 2026-01-20 15:10:16.617 225859 DEBUG oslo_concurrency.lockutils [req-b4be0a5d-d2d4-463a-882b-65187d28574a req-35613700-c0f8-4356-903c-5d673b4011a1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "9e852872-788c-4dac-b7fb-d76d67e7a84f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:10:16 compute-1 nova_compute[225855]: 2026-01-20 15:10:16.617 225859 DEBUG oslo_concurrency.lockutils [req-b4be0a5d-d2d4-463a-882b-65187d28574a req-35613700-c0f8-4356-903c-5d673b4011a1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "9e852872-788c-4dac-b7fb-d76d67e7a84f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:10:16 compute-1 nova_compute[225855]: 2026-01-20 15:10:16.617 225859 DEBUG nova.compute.manager [req-b4be0a5d-d2d4-463a-882b-65187d28574a req-35613700-c0f8-4356-903c-5d673b4011a1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 9e852872-788c-4dac-b7fb-d76d67e7a84f] No waiting events found dispatching network-vif-plugged-f9f19cf7-87f5-4dd4-a7be-78086c84e176 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:10:16 compute-1 nova_compute[225855]: 2026-01-20 15:10:16.617 225859 WARNING nova.compute.manager [req-b4be0a5d-d2d4-463a-882b-65187d28574a req-35613700-c0f8-4356-903c-5d673b4011a1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 9e852872-788c-4dac-b7fb-d76d67e7a84f] Received unexpected event network-vif-plugged-f9f19cf7-87f5-4dd4-a7be-78086c84e176 for instance with vm_state building and task_state spawning.
Jan 20 15:10:16 compute-1 nova_compute[225855]: 2026-01-20 15:10:16.839 225859 DEBUG nova.compute.manager [None req-05c30a27-224b-486c-a6a3-1a1f679cb609 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] [instance: 9e852872-788c-4dac-b7fb-d76d67e7a84f] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 20 15:10:16 compute-1 nova_compute[225855]: 2026-01-20 15:10:16.840 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768921816.8385212, 9e852872-788c-4dac-b7fb-d76d67e7a84f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:10:16 compute-1 nova_compute[225855]: 2026-01-20 15:10:16.840 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 9e852872-788c-4dac-b7fb-d76d67e7a84f] VM Started (Lifecycle Event)
Jan 20 15:10:16 compute-1 nova_compute[225855]: 2026-01-20 15:10:16.844 225859 DEBUG nova.virt.libvirt.driver [None req-05c30a27-224b-486c-a6a3-1a1f679cb609 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] [instance: 9e852872-788c-4dac-b7fb-d76d67e7a84f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 20 15:10:16 compute-1 nova_compute[225855]: 2026-01-20 15:10:16.848 225859 INFO nova.virt.libvirt.driver [-] [instance: 9e852872-788c-4dac-b7fb-d76d67e7a84f] Instance spawned successfully.
Jan 20 15:10:16 compute-1 nova_compute[225855]: 2026-01-20 15:10:16.848 225859 DEBUG nova.virt.libvirt.driver [None req-05c30a27-224b-486c-a6a3-1a1f679cb609 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] [instance: 9e852872-788c-4dac-b7fb-d76d67e7a84f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 20 15:10:16 compute-1 nova_compute[225855]: 2026-01-20 15:10:16.864 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 9e852872-788c-4dac-b7fb-d76d67e7a84f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:10:16 compute-1 nova_compute[225855]: 2026-01-20 15:10:16.870 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 9e852872-788c-4dac-b7fb-d76d67e7a84f] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 15:10:16 compute-1 nova_compute[225855]: 2026-01-20 15:10:16.873 225859 DEBUG nova.virt.libvirt.driver [None req-05c30a27-224b-486c-a6a3-1a1f679cb609 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] [instance: 9e852872-788c-4dac-b7fb-d76d67e7a84f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:10:16 compute-1 nova_compute[225855]: 2026-01-20 15:10:16.874 225859 DEBUG nova.virt.libvirt.driver [None req-05c30a27-224b-486c-a6a3-1a1f679cb609 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] [instance: 9e852872-788c-4dac-b7fb-d76d67e7a84f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:10:16 compute-1 nova_compute[225855]: 2026-01-20 15:10:16.874 225859 DEBUG nova.virt.libvirt.driver [None req-05c30a27-224b-486c-a6a3-1a1f679cb609 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] [instance: 9e852872-788c-4dac-b7fb-d76d67e7a84f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:10:16 compute-1 nova_compute[225855]: 2026-01-20 15:10:16.874 225859 DEBUG nova.virt.libvirt.driver [None req-05c30a27-224b-486c-a6a3-1a1f679cb609 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] [instance: 9e852872-788c-4dac-b7fb-d76d67e7a84f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:10:16 compute-1 nova_compute[225855]: 2026-01-20 15:10:16.875 225859 DEBUG nova.virt.libvirt.driver [None req-05c30a27-224b-486c-a6a3-1a1f679cb609 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] [instance: 9e852872-788c-4dac-b7fb-d76d67e7a84f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:10:16 compute-1 nova_compute[225855]: 2026-01-20 15:10:16.875 225859 DEBUG nova.virt.libvirt.driver [None req-05c30a27-224b-486c-a6a3-1a1f679cb609 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] [instance: 9e852872-788c-4dac-b7fb-d76d67e7a84f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:10:16 compute-1 nova_compute[225855]: 2026-01-20 15:10:16.898 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 9e852872-788c-4dac-b7fb-d76d67e7a84f] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 15:10:16 compute-1 nova_compute[225855]: 2026-01-20 15:10:16.898 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768921816.8387506, 9e852872-788c-4dac-b7fb-d76d67e7a84f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:10:16 compute-1 nova_compute[225855]: 2026-01-20 15:10:16.898 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 9e852872-788c-4dac-b7fb-d76d67e7a84f] VM Paused (Lifecycle Event)
Jan 20 15:10:16 compute-1 nova_compute[225855]: 2026-01-20 15:10:16.930 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 9e852872-788c-4dac-b7fb-d76d67e7a84f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:10:16 compute-1 nova_compute[225855]: 2026-01-20 15:10:16.933 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768921816.8429008, 9e852872-788c-4dac-b7fb-d76d67e7a84f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:10:16 compute-1 nova_compute[225855]: 2026-01-20 15:10:16.933 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 9e852872-788c-4dac-b7fb-d76d67e7a84f] VM Resumed (Lifecycle Event)
Jan 20 15:10:16 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:10:16 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:10:16 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:10:16.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:10:16 compute-1 nova_compute[225855]: 2026-01-20 15:10:16.954 225859 INFO nova.compute.manager [None req-05c30a27-224b-486c-a6a3-1a1f679cb609 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] [instance: 9e852872-788c-4dac-b7fb-d76d67e7a84f] Took 11.23 seconds to spawn the instance on the hypervisor.
Jan 20 15:10:16 compute-1 nova_compute[225855]: 2026-01-20 15:10:16.955 225859 DEBUG nova.compute.manager [None req-05c30a27-224b-486c-a6a3-1a1f679cb609 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] [instance: 9e852872-788c-4dac-b7fb-d76d67e7a84f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:10:16 compute-1 nova_compute[225855]: 2026-01-20 15:10:16.962 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 9e852872-788c-4dac-b7fb-d76d67e7a84f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:10:16 compute-1 nova_compute[225855]: 2026-01-20 15:10:16.965 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 9e852872-788c-4dac-b7fb-d76d67e7a84f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 15:10:17 compute-1 nova_compute[225855]: 2026-01-20 15:10:17.015 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 9e852872-788c-4dac-b7fb-d76d67e7a84f] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 15:10:17 compute-1 nova_compute[225855]: 2026-01-20 15:10:17.041 225859 INFO nova.compute.manager [None req-05c30a27-224b-486c-a6a3-1a1f679cb609 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] [instance: 9e852872-788c-4dac-b7fb-d76d67e7a84f] Took 13.59 seconds to build instance.
Jan 20 15:10:17 compute-1 nova_compute[225855]: 2026-01-20 15:10:17.057 225859 DEBUG oslo_concurrency.lockutils [None req-05c30a27-224b-486c-a6a3-1a1f679cb609 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Lock "9e852872-788c-4dac-b7fb-d76d67e7a84f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.680s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:10:17 compute-1 nova_compute[225855]: 2026-01-20 15:10:17.560 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:10:17 compute-1 nova_compute[225855]: 2026-01-20 15:10:17.680 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:10:17 compute-1 ceph-mon[81775]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #124. Immutable memtables: 0.
Jan 20 15:10:17 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:10:17.699501) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 20 15:10:17 compute-1 ceph-mon[81775]: rocksdb: [db/flush_job.cc:856] [default] [JOB 77] Flushing memtable with next log file: 124
Jan 20 15:10:17 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768921817699596, "job": 77, "event": "flush_started", "num_memtables": 1, "num_entries": 1924, "num_deletes": 255, "total_data_size": 4164916, "memory_usage": 4222112, "flush_reason": "Manual Compaction"}
Jan 20 15:10:17 compute-1 ceph-mon[81775]: rocksdb: [db/flush_job.cc:885] [default] [JOB 77] Level-0 flush table #125: started
Jan 20 15:10:17 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768921817729569, "cf_name": "default", "job": 77, "event": "table_file_creation", "file_number": 125, "file_size": 2722827, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 61512, "largest_seqno": 63431, "table_properties": {"data_size": 2714791, "index_size": 4786, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2181, "raw_key_size": 17713, "raw_average_key_size": 20, "raw_value_size": 2698424, "raw_average_value_size": 3182, "num_data_blocks": 207, "num_entries": 848, "num_filter_entries": 848, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768921681, "oldest_key_time": 1768921681, "file_creation_time": 1768921817, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1539d774-8a6f-4e48-b253-137c44586344", "db_session_id": "LFF7G2OZDOU7TKQ8MKAH", "orig_file_number": 125, "seqno_to_time_mapping": "N/A"}}
Jan 20 15:10:17 compute-1 ceph-mon[81775]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 77] Flush lasted 30232 microseconds, and 9516 cpu microseconds.
Jan 20 15:10:17 compute-1 ceph-mon[81775]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 15:10:17 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:10:17.729731) [db/flush_job.cc:967] [default] [JOB 77] Level-0 flush table #125: 2722827 bytes OK
Jan 20 15:10:17 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:10:17.729790) [db/memtable_list.cc:519] [default] Level-0 commit table #125 started
Jan 20 15:10:17 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:10:17.735201) [db/memtable_list.cc:722] [default] Level-0 commit table #125: memtable #1 done
Jan 20 15:10:17 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:10:17.735222) EVENT_LOG_v1 {"time_micros": 1768921817735215, "job": 77, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 20 15:10:17 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:10:17.735248) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 20 15:10:17 compute-1 ceph-mon[81775]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 77] Try to delete WAL files size 4156125, prev total WAL file size 4156125, number of live WAL files 2.
Jan 20 15:10:17 compute-1 ceph-mon[81775]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000121.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 15:10:17 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:10:17.737126) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035323731' seq:72057594037927935, type:22 .. '7061786F730035353233' seq:0, type:0; will stop at (end)
Jan 20 15:10:17 compute-1 ceph-mon[81775]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 78] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 20 15:10:17 compute-1 ceph-mon[81775]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 77 Base level 0, inputs: [125(2659KB)], [123(10MB)]
Jan 20 15:10:17 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768921817737231, "job": 78, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [125], "files_L6": [123], "score": -1, "input_data_size": 14183796, "oldest_snapshot_seqno": -1}
Jan 20 15:10:17 compute-1 ceph-mon[81775]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 78] Generated table #126: 8788 keys, 12255515 bytes, temperature: kUnknown
Jan 20 15:10:17 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768921817867344, "cf_name": "default", "job": 78, "event": "table_file_creation", "file_number": 126, "file_size": 12255515, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12196741, "index_size": 35686, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 22021, "raw_key_size": 230322, "raw_average_key_size": 26, "raw_value_size": 12040134, "raw_average_value_size": 1370, "num_data_blocks": 1372, "num_entries": 8788, "num_filter_entries": 8788, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768917474, "oldest_key_time": 0, "file_creation_time": 1768921817, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1539d774-8a6f-4e48-b253-137c44586344", "db_session_id": "LFF7G2OZDOU7TKQ8MKAH", "orig_file_number": 126, "seqno_to_time_mapping": "N/A"}}
Jan 20 15:10:17 compute-1 ceph-mon[81775]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 15:10:17 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:10:17.867565) [db/compaction/compaction_job.cc:1663] [default] [JOB 78] Compacted 1@0 + 1@6 files to L6 => 12255515 bytes
Jan 20 15:10:17 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:10:17.869141) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 109.0 rd, 94.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.6, 10.9 +0.0 blob) out(11.7 +0.0 blob), read-write-amplify(9.7) write-amplify(4.5) OK, records in: 9317, records dropped: 529 output_compression: NoCompression
Jan 20 15:10:17 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:10:17.869160) EVENT_LOG_v1 {"time_micros": 1768921817869151, "job": 78, "event": "compaction_finished", "compaction_time_micros": 130160, "compaction_time_cpu_micros": 56595, "output_level": 6, "num_output_files": 1, "total_output_size": 12255515, "num_input_records": 9317, "num_output_records": 8788, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 20 15:10:17 compute-1 ceph-mon[81775]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000125.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 15:10:17 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768921817869669, "job": 78, "event": "table_file_deletion", "file_number": 125}
Jan 20 15:10:17 compute-1 ceph-mon[81775]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000123.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 15:10:17 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768921817871661, "job": 78, "event": "table_file_deletion", "file_number": 123}
Jan 20 15:10:17 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:10:17.736994) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:10:17 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:10:17.871718) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:10:17 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:10:17.871724) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:10:17 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:10:17.871728) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:10:17 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:10:17.871729) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:10:17 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:10:17.871730) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:10:17 compute-1 sudo[297114]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:10:17 compute-1 sudo[297114]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:10:17 compute-1 sudo[297114]: pam_unix(sudo:session): session closed for user root
Jan 20 15:10:17 compute-1 sudo[297139]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 15:10:17 compute-1 sudo[297139]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:10:17 compute-1 sudo[297139]: pam_unix(sudo:session): session closed for user root
Jan 20 15:10:18 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e385 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:10:18 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:10:18 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:10:18 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:10:18.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:10:18 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:10:18 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:10:18 compute-1 ceph-mon[81775]: pgmap v2611: 321 pgs: 321 active+clean; 511 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 376 KiB/s rd, 2.2 MiB/s wr, 74 op/s
Jan 20 15:10:18 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:10:18 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:10:18 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:10:18.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:10:19 compute-1 sudo[297164]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:10:19 compute-1 sudo[297164]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:10:19 compute-1 sudo[297164]: pam_unix(sudo:session): session closed for user root
Jan 20 15:10:19 compute-1 sudo[297193]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:10:19 compute-1 sudo[297193]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:10:19 compute-1 sudo[297193]: pam_unix(sudo:session): session closed for user root
Jan 20 15:10:19 compute-1 podman[297188]: 2026-01-20 15:10:19.171809808 +0000 UTC m=+0.060780876 container health_status 533cd8e0c3b8b7f910f15cef49e7f879ab46c12ce646a35628c15f13725eb500 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team)
Jan 20 15:10:20 compute-1 nova_compute[225855]: 2026-01-20 15:10:20.301 225859 DEBUG oslo_concurrency.lockutils [None req-14350567-d534-4580-8413-135ad632139b bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Acquiring lock "9e852872-788c-4dac-b7fb-d76d67e7a84f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:10:20 compute-1 nova_compute[225855]: 2026-01-20 15:10:20.302 225859 DEBUG oslo_concurrency.lockutils [None req-14350567-d534-4580-8413-135ad632139b bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Lock "9e852872-788c-4dac-b7fb-d76d67e7a84f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:10:20 compute-1 nova_compute[225855]: 2026-01-20 15:10:20.303 225859 DEBUG oslo_concurrency.lockutils [None req-14350567-d534-4580-8413-135ad632139b bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Acquiring lock "9e852872-788c-4dac-b7fb-d76d67e7a84f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:10:20 compute-1 nova_compute[225855]: 2026-01-20 15:10:20.303 225859 DEBUG oslo_concurrency.lockutils [None req-14350567-d534-4580-8413-135ad632139b bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Lock "9e852872-788c-4dac-b7fb-d76d67e7a84f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:10:20 compute-1 nova_compute[225855]: 2026-01-20 15:10:20.304 225859 DEBUG oslo_concurrency.lockutils [None req-14350567-d534-4580-8413-135ad632139b bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Lock "9e852872-788c-4dac-b7fb-d76d67e7a84f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:10:20 compute-1 nova_compute[225855]: 2026-01-20 15:10:20.307 225859 INFO nova.compute.manager [None req-14350567-d534-4580-8413-135ad632139b bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] [instance: 9e852872-788c-4dac-b7fb-d76d67e7a84f] Terminating instance
Jan 20 15:10:20 compute-1 nova_compute[225855]: 2026-01-20 15:10:20.309 225859 DEBUG nova.compute.manager [None req-14350567-d534-4580-8413-135ad632139b bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] [instance: 9e852872-788c-4dac-b7fb-d76d67e7a84f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 20 15:10:20 compute-1 kernel: tapf9f19cf7-87 (unregistering): left promiscuous mode
Jan 20 15:10:20 compute-1 NetworkManager[49104]: <info>  [1768921820.3572] device (tapf9f19cf7-87): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 20 15:10:20 compute-1 nova_compute[225855]: 2026-01-20 15:10:20.377 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:10:20 compute-1 ovn_controller[130490]: 2026-01-20T15:10:20Z|00774|binding|INFO|Releasing lport f9f19cf7-87f5-4dd4-a7be-78086c84e176 from this chassis (sb_readonly=0)
Jan 20 15:10:20 compute-1 ovn_controller[130490]: 2026-01-20T15:10:20Z|00775|binding|INFO|Setting lport f9f19cf7-87f5-4dd4-a7be-78086c84e176 down in Southbound
Jan 20 15:10:20 compute-1 ovn_controller[130490]: 2026-01-20T15:10:20Z|00776|binding|INFO|Removing iface tapf9f19cf7-87 ovn-installed in OVS
Jan 20 15:10:20 compute-1 nova_compute[225855]: 2026-01-20 15:10:20.380 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:10:20 compute-1 nova_compute[225855]: 2026-01-20 15:10:20.395 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:10:20 compute-1 systemd[1]: machine-qemu\x2d89\x2dinstance\x2d000000ad.scope: Deactivated successfully.
Jan 20 15:10:20 compute-1 systemd[1]: machine-qemu\x2d89\x2dinstance\x2d000000ad.scope: Consumed 3.515s CPU time.
Jan 20 15:10:20 compute-1 systemd-machined[194361]: Machine qemu-89-instance-000000ad terminated.
Jan 20 15:10:20 compute-1 nova_compute[225855]: 2026-01-20 15:10:20.550 225859 INFO nova.virt.libvirt.driver [-] [instance: 9e852872-788c-4dac-b7fb-d76d67e7a84f] Instance destroyed successfully.
Jan 20 15:10:20 compute-1 nova_compute[225855]: 2026-01-20 15:10:20.551 225859 DEBUG nova.objects.instance [None req-14350567-d534-4580-8413-135ad632139b bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Lazy-loading 'resources' on Instance uuid 9e852872-788c-4dac-b7fb-d76d67e7a84f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:10:20 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:10:20 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:10:20.557 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:19:e2:73 10.100.0.12'], port_security=['fa:16:3e:19:e2:73 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '9e852872-788c-4dac-b7fb-d76d67e7a84f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b677f1a9-dbaa-4373-8466-bd9ccf067b91', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a49638950e1543fa8e0d251af5479623', 'neutron:revision_number': '4', 'neutron:security_group_ids': '64f52c7d-befd-4095-889b-e7a5da6821d7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=76ec1139-009f-49fe-bfde-07c0ef9e8b12, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], logical_port=f9f19cf7-87f5-4dd4-a7be-78086c84e176) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:10:20 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:10:20 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:10:20.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:10:20 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:10:20.560 140354 INFO neutron.agent.ovn.metadata.agent [-] Port f9f19cf7-87f5-4dd4-a7be-78086c84e176 in datapath b677f1a9-dbaa-4373-8466-bd9ccf067b91 unbound from our chassis
Jan 20 15:10:20 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:10:20.563 140354 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b677f1a9-dbaa-4373-8466-bd9ccf067b91, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 20 15:10:20 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:10:20.564 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[b7ef2aa7-206f-43c1-a0c5-12f10b6053e6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:10:20 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:10:20.565 140354 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-b677f1a9-dbaa-4373-8466-bd9ccf067b91 namespace which is not needed anymore
Jan 20 15:10:20 compute-1 nova_compute[225855]: 2026-01-20 15:10:20.573 225859 DEBUG nova.virt.libvirt.vif [None req-14350567-d534-4580-8413-135ad632139b bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T15:10:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-1030995833',display_name='tempest-TestVolumeBootPattern-server-1030995833',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-1030995833',id=173,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-20T15:10:16Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a49638950e1543fa8e0d251af5479623',ramdisk_id='',reservation_id='r-wer4a7li',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestVolumeBootPattern-194644003',owner_user_name='tempest-TestVolumeBootPattern-194644003-project-m
ember'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T15:10:17Z,user_data=None,user_id='bf422e55e158420cbdae75f07a3bb97a',uuid=9e852872-788c-4dac-b7fb-d76d67e7a84f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f9f19cf7-87f5-4dd4-a7be-78086c84e176", "address": "fa:16:3e:19:e2:73", "network": {"id": "b677f1a9-dbaa-4373-8466-bd9ccf067b91", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-408170906-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a49638950e1543fa8e0d251af5479623", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf9f19cf7-87", "ovs_interfaceid": "f9f19cf7-87f5-4dd4-a7be-78086c84e176", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 20 15:10:20 compute-1 nova_compute[225855]: 2026-01-20 15:10:20.573 225859 DEBUG nova.network.os_vif_util [None req-14350567-d534-4580-8413-135ad632139b bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Converting VIF {"id": "f9f19cf7-87f5-4dd4-a7be-78086c84e176", "address": "fa:16:3e:19:e2:73", "network": {"id": "b677f1a9-dbaa-4373-8466-bd9ccf067b91", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-408170906-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a49638950e1543fa8e0d251af5479623", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf9f19cf7-87", "ovs_interfaceid": "f9f19cf7-87f5-4dd4-a7be-78086c84e176", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 15:10:20 compute-1 nova_compute[225855]: 2026-01-20 15:10:20.574 225859 DEBUG nova.network.os_vif_util [None req-14350567-d534-4580-8413-135ad632139b bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:19:e2:73,bridge_name='br-int',has_traffic_filtering=True,id=f9f19cf7-87f5-4dd4-a7be-78086c84e176,network=Network(b677f1a9-dbaa-4373-8466-bd9ccf067b91),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf9f19cf7-87') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 15:10:20 compute-1 nova_compute[225855]: 2026-01-20 15:10:20.574 225859 DEBUG os_vif [None req-14350567-d534-4580-8413-135ad632139b bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:19:e2:73,bridge_name='br-int',has_traffic_filtering=True,id=f9f19cf7-87f5-4dd4-a7be-78086c84e176,network=Network(b677f1a9-dbaa-4373-8466-bd9ccf067b91),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf9f19cf7-87') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 20 15:10:20 compute-1 nova_compute[225855]: 2026-01-20 15:10:20.575 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:10:20 compute-1 nova_compute[225855]: 2026-01-20 15:10:20.576 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf9f19cf7-87, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:10:20 compute-1 nova_compute[225855]: 2026-01-20 15:10:20.577 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:10:20 compute-1 nova_compute[225855]: 2026-01-20 15:10:20.579 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 15:10:20 compute-1 nova_compute[225855]: 2026-01-20 15:10:20.582 225859 INFO os_vif [None req-14350567-d534-4580-8413-135ad632139b bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:19:e2:73,bridge_name='br-int',has_traffic_filtering=True,id=f9f19cf7-87f5-4dd4-a7be-78086c84e176,network=Network(b677f1a9-dbaa-4373-8466-bd9ccf067b91),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf9f19cf7-87')
Jan 20 15:10:20 compute-1 neutron-haproxy-ovnmeta-b677f1a9-dbaa-4373-8466-bd9ccf067b91[297086]: [NOTICE]   (297093) : haproxy version is 2.8.14-c23fe91
Jan 20 15:10:20 compute-1 neutron-haproxy-ovnmeta-b677f1a9-dbaa-4373-8466-bd9ccf067b91[297086]: [NOTICE]   (297093) : path to executable is /usr/sbin/haproxy
Jan 20 15:10:20 compute-1 neutron-haproxy-ovnmeta-b677f1a9-dbaa-4373-8466-bd9ccf067b91[297086]: [WARNING]  (297093) : Exiting Master process...
Jan 20 15:10:20 compute-1 neutron-haproxy-ovnmeta-b677f1a9-dbaa-4373-8466-bd9ccf067b91[297086]: [ALERT]    (297093) : Current worker (297095) exited with code 143 (Terminated)
Jan 20 15:10:20 compute-1 neutron-haproxy-ovnmeta-b677f1a9-dbaa-4373-8466-bd9ccf067b91[297086]: [WARNING]  (297093) : All workers exited. Exiting... (0)
Jan 20 15:10:20 compute-1 systemd[1]: libpod-e8bf3f1f69d8a9dca045ab143829c5417120c28e2a4cd71c921ad01e76c46c44.scope: Deactivated successfully.
Jan 20 15:10:20 compute-1 podman[297286]: 2026-01-20 15:10:20.702345515 +0000 UTC m=+0.046690206 container died e8bf3f1f69d8a9dca045ab143829c5417120c28e2a4cd71c921ad01e76c46c44 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b677f1a9-dbaa-4373-8466-bd9ccf067b91, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 20 15:10:20 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e8bf3f1f69d8a9dca045ab143829c5417120c28e2a4cd71c921ad01e76c46c44-userdata-shm.mount: Deactivated successfully.
Jan 20 15:10:20 compute-1 systemd[1]: var-lib-containers-storage-overlay-5075690fd2baf5ce67a4e697a92358c36b0bceb39f08aa2bba2b013ccbee9916-merged.mount: Deactivated successfully.
Jan 20 15:10:20 compute-1 podman[297286]: 2026-01-20 15:10:20.746783285 +0000 UTC m=+0.091127986 container cleanup e8bf3f1f69d8a9dca045ab143829c5417120c28e2a4cd71c921ad01e76c46c44 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b677f1a9-dbaa-4373-8466-bd9ccf067b91, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true)
Jan 20 15:10:20 compute-1 systemd[1]: libpod-conmon-e8bf3f1f69d8a9dca045ab143829c5417120c28e2a4cd71c921ad01e76c46c44.scope: Deactivated successfully.
Jan 20 15:10:20 compute-1 nova_compute[225855]: 2026-01-20 15:10:20.784 225859 INFO nova.virt.libvirt.driver [None req-14350567-d534-4580-8413-135ad632139b bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] [instance: 9e852872-788c-4dac-b7fb-d76d67e7a84f] Deleting instance files /var/lib/nova/instances/9e852872-788c-4dac-b7fb-d76d67e7a84f_del
Jan 20 15:10:20 compute-1 nova_compute[225855]: 2026-01-20 15:10:20.784 225859 INFO nova.virt.libvirt.driver [None req-14350567-d534-4580-8413-135ad632139b bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] [instance: 9e852872-788c-4dac-b7fb-d76d67e7a84f] Deletion of /var/lib/nova/instances/9e852872-788c-4dac-b7fb-d76d67e7a84f_del complete
Jan 20 15:10:20 compute-1 podman[297316]: 2026-01-20 15:10:20.824643075 +0000 UTC m=+0.052077069 container remove e8bf3f1f69d8a9dca045ab143829c5417120c28e2a4cd71c921ad01e76c46c44 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b677f1a9-dbaa-4373-8466-bd9ccf067b91, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 20 15:10:20 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:10:20.831 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[cf5f51e4-7959-4749-82ac-2d53fe53fded]: (4, ('Tue Jan 20 03:10:20 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-b677f1a9-dbaa-4373-8466-bd9ccf067b91 (e8bf3f1f69d8a9dca045ab143829c5417120c28e2a4cd71c921ad01e76c46c44)\ne8bf3f1f69d8a9dca045ab143829c5417120c28e2a4cd71c921ad01e76c46c44\nTue Jan 20 03:10:20 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-b677f1a9-dbaa-4373-8466-bd9ccf067b91 (e8bf3f1f69d8a9dca045ab143829c5417120c28e2a4cd71c921ad01e76c46c44)\ne8bf3f1f69d8a9dca045ab143829c5417120c28e2a4cd71c921ad01e76c46c44\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:10:20 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:10:20.834 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[29326437-2cfe-4572-8710-05fcd73372de]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:10:20 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:10:20.836 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb677f1a9-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:10:20 compute-1 nova_compute[225855]: 2026-01-20 15:10:20.839 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:10:20 compute-1 kernel: tapb677f1a9-d0: left promiscuous mode
Jan 20 15:10:20 compute-1 nova_compute[225855]: 2026-01-20 15:10:20.845 225859 INFO nova.compute.manager [None req-14350567-d534-4580-8413-135ad632139b bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] [instance: 9e852872-788c-4dac-b7fb-d76d67e7a84f] Took 0.54 seconds to destroy the instance on the hypervisor.
Jan 20 15:10:20 compute-1 nova_compute[225855]: 2026-01-20 15:10:20.846 225859 DEBUG oslo.service.loopingcall [None req-14350567-d534-4580-8413-135ad632139b bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 20 15:10:20 compute-1 nova_compute[225855]: 2026-01-20 15:10:20.846 225859 DEBUG nova.compute.manager [-] [instance: 9e852872-788c-4dac-b7fb-d76d67e7a84f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 20 15:10:20 compute-1 nova_compute[225855]: 2026-01-20 15:10:20.847 225859 DEBUG nova.network.neutron [-] [instance: 9e852872-788c-4dac-b7fb-d76d67e7a84f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 20 15:10:20 compute-1 nova_compute[225855]: 2026-01-20 15:10:20.873 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:10:20 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:10:20.877 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[0018992a-3a19-4d93-9488-c6a4c21f2fb8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:10:20 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:10:20.892 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[5f276d48-d873-40a2-9426-f08db75106d5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:10:20 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:10:20.894 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[08daf50e-0818-4053-a455-19cd617b7ecf]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:10:20 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:10:20.911 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[939fc0e3-2764-4915-b5f5-5c16e3e7bcc7]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 685274, 'reachable_time': 16108, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 297332, 'error': None, 'target': 'ovnmeta-b677f1a9-dbaa-4373-8466-bd9ccf067b91', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:10:20 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:10:20.915 140466 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-b677f1a9-dbaa-4373-8466-bd9ccf067b91 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 20 15:10:20 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:10:20.915 140466 DEBUG oslo.privsep.daemon [-] privsep: reply[13dd8f21-7c30-4330-87e5-b3b37bc5192a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:10:20 compute-1 systemd[1]: run-netns-ovnmeta\x2db677f1a9\x2ddbaa\x2d4373\x2d8466\x2dbd9ccf067b91.mount: Deactivated successfully.
Jan 20 15:10:20 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:10:20 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:10:20 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:10:20.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:10:21 compute-1 nova_compute[225855]: 2026-01-20 15:10:21.422 225859 DEBUG nova.compute.manager [req-2644b226-28b8-490d-8f46-1cf0348f71b0 req-38e0afe8-395e-494f-a9d0-c9823ea60c96 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 9e852872-788c-4dac-b7fb-d76d67e7a84f] Received event network-vif-unplugged-f9f19cf7-87f5-4dd4-a7be-78086c84e176 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:10:21 compute-1 nova_compute[225855]: 2026-01-20 15:10:21.423 225859 DEBUG oslo_concurrency.lockutils [req-2644b226-28b8-490d-8f46-1cf0348f71b0 req-38e0afe8-395e-494f-a9d0-c9823ea60c96 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "9e852872-788c-4dac-b7fb-d76d67e7a84f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:10:21 compute-1 nova_compute[225855]: 2026-01-20 15:10:21.423 225859 DEBUG oslo_concurrency.lockutils [req-2644b226-28b8-490d-8f46-1cf0348f71b0 req-38e0afe8-395e-494f-a9d0-c9823ea60c96 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "9e852872-788c-4dac-b7fb-d76d67e7a84f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:10:21 compute-1 nova_compute[225855]: 2026-01-20 15:10:21.423 225859 DEBUG oslo_concurrency.lockutils [req-2644b226-28b8-490d-8f46-1cf0348f71b0 req-38e0afe8-395e-494f-a9d0-c9823ea60c96 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "9e852872-788c-4dac-b7fb-d76d67e7a84f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:10:21 compute-1 nova_compute[225855]: 2026-01-20 15:10:21.423 225859 DEBUG nova.compute.manager [req-2644b226-28b8-490d-8f46-1cf0348f71b0 req-38e0afe8-395e-494f-a9d0-c9823ea60c96 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 9e852872-788c-4dac-b7fb-d76d67e7a84f] No waiting events found dispatching network-vif-unplugged-f9f19cf7-87f5-4dd4-a7be-78086c84e176 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:10:21 compute-1 nova_compute[225855]: 2026-01-20 15:10:21.424 225859 DEBUG nova.compute.manager [req-2644b226-28b8-490d-8f46-1cf0348f71b0 req-38e0afe8-395e-494f-a9d0-c9823ea60c96 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 9e852872-788c-4dac-b7fb-d76d67e7a84f] Received event network-vif-unplugged-f9f19cf7-87f5-4dd4-a7be-78086c84e176 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 20 15:10:21 compute-1 ceph-mon[81775]: pgmap v2612: 321 pgs: 321 active+clean; 511 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 399 KiB/s rd, 2.2 MiB/s wr, 78 op/s
Jan 20 15:10:22 compute-1 nova_compute[225855]: 2026-01-20 15:10:22.194 225859 DEBUG nova.network.neutron [-] [instance: 9e852872-788c-4dac-b7fb-d76d67e7a84f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:10:22 compute-1 nova_compute[225855]: 2026-01-20 15:10:22.214 225859 INFO nova.compute.manager [-] [instance: 9e852872-788c-4dac-b7fb-d76d67e7a84f] Took 1.37 seconds to deallocate network for instance.
Jan 20 15:10:22 compute-1 nova_compute[225855]: 2026-01-20 15:10:22.271 225859 DEBUG nova.compute.manager [req-0f262437-f374-4145-91c4-2127196770d5 req-1eff2cd7-d76c-4f77-9e7c-a5ea724225e7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 9e852872-788c-4dac-b7fb-d76d67e7a84f] Received event network-vif-deleted-f9f19cf7-87f5-4dd4-a7be-78086c84e176 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:10:22 compute-1 nova_compute[225855]: 2026-01-20 15:10:22.434 225859 INFO nova.compute.manager [None req-14350567-d534-4580-8413-135ad632139b bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] [instance: 9e852872-788c-4dac-b7fb-d76d67e7a84f] Took 0.22 seconds to detach 1 volumes for instance.
Jan 20 15:10:22 compute-1 nova_compute[225855]: 2026-01-20 15:10:22.500 225859 DEBUG oslo_concurrency.lockutils [None req-14350567-d534-4580-8413-135ad632139b bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:10:22 compute-1 nova_compute[225855]: 2026-01-20 15:10:22.501 225859 DEBUG oslo_concurrency.lockutils [None req-14350567-d534-4580-8413-135ad632139b bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:10:22 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:10:22 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:10:22 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:10:22.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:10:22 compute-1 nova_compute[225855]: 2026-01-20 15:10:22.562 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:10:22 compute-1 nova_compute[225855]: 2026-01-20 15:10:22.596 225859 DEBUG oslo_concurrency.processutils [None req-14350567-d534-4580-8413-135ad632139b bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:10:22 compute-1 ceph-mon[81775]: pgmap v2613: 321 pgs: 321 active+clean; 510 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 398 KiB/s rd, 1.6 MiB/s wr, 73 op/s
Jan 20 15:10:22 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:10:22 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:10:22 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:10:22.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:10:23 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:10:23 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3238774180' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:10:23 compute-1 nova_compute[225855]: 2026-01-20 15:10:23.032 225859 DEBUG oslo_concurrency.processutils [None req-14350567-d534-4580-8413-135ad632139b bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:10:23 compute-1 nova_compute[225855]: 2026-01-20 15:10:23.039 225859 DEBUG nova.compute.provider_tree [None req-14350567-d534-4580-8413-135ad632139b bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:10:23 compute-1 nova_compute[225855]: 2026-01-20 15:10:23.052 225859 DEBUG nova.scheduler.client.report [None req-14350567-d534-4580-8413-135ad632139b bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:10:23 compute-1 nova_compute[225855]: 2026-01-20 15:10:23.072 225859 DEBUG oslo_concurrency.lockutils [None req-14350567-d534-4580-8413-135ad632139b bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.571s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:10:23 compute-1 nova_compute[225855]: 2026-01-20 15:10:23.105 225859 INFO nova.scheduler.client.report [None req-14350567-d534-4580-8413-135ad632139b bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Deleted allocations for instance 9e852872-788c-4dac-b7fb-d76d67e7a84f
Jan 20 15:10:23 compute-1 nova_compute[225855]: 2026-01-20 15:10:23.204 225859 DEBUG oslo_concurrency.lockutils [None req-14350567-d534-4580-8413-135ad632139b bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Lock "9e852872-788c-4dac-b7fb-d76d67e7a84f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.901s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:10:23 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e385 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:10:23 compute-1 nova_compute[225855]: 2026-01-20 15:10:23.520 225859 DEBUG nova.compute.manager [req-79cfc599-0d4a-4499-87a2-babd7747216a req-6ddfa326-023e-4447-8a77-114f8d6f6873 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 9e852872-788c-4dac-b7fb-d76d67e7a84f] Received event network-vif-plugged-f9f19cf7-87f5-4dd4-a7be-78086c84e176 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:10:23 compute-1 nova_compute[225855]: 2026-01-20 15:10:23.521 225859 DEBUG oslo_concurrency.lockutils [req-79cfc599-0d4a-4499-87a2-babd7747216a req-6ddfa326-023e-4447-8a77-114f8d6f6873 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "9e852872-788c-4dac-b7fb-d76d67e7a84f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:10:23 compute-1 nova_compute[225855]: 2026-01-20 15:10:23.522 225859 DEBUG oslo_concurrency.lockutils [req-79cfc599-0d4a-4499-87a2-babd7747216a req-6ddfa326-023e-4447-8a77-114f8d6f6873 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "9e852872-788c-4dac-b7fb-d76d67e7a84f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:10:23 compute-1 nova_compute[225855]: 2026-01-20 15:10:23.522 225859 DEBUG oslo_concurrency.lockutils [req-79cfc599-0d4a-4499-87a2-babd7747216a req-6ddfa326-023e-4447-8a77-114f8d6f6873 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "9e852872-788c-4dac-b7fb-d76d67e7a84f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:10:23 compute-1 nova_compute[225855]: 2026-01-20 15:10:23.522 225859 DEBUG nova.compute.manager [req-79cfc599-0d4a-4499-87a2-babd7747216a req-6ddfa326-023e-4447-8a77-114f8d6f6873 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 9e852872-788c-4dac-b7fb-d76d67e7a84f] No waiting events found dispatching network-vif-plugged-f9f19cf7-87f5-4dd4-a7be-78086c84e176 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:10:23 compute-1 nova_compute[225855]: 2026-01-20 15:10:23.523 225859 WARNING nova.compute.manager [req-79cfc599-0d4a-4499-87a2-babd7747216a req-6ddfa326-023e-4447-8a77-114f8d6f6873 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 9e852872-788c-4dac-b7fb-d76d67e7a84f] Received unexpected event network-vif-plugged-f9f19cf7-87f5-4dd4-a7be-78086c84e176 for instance with vm_state deleted and task_state None.
Jan 20 15:10:23 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/3238774180' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:10:24 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 20 15:10:24 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4078764243' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:10:24 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 20 15:10:24 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4078764243' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:10:24 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:10:24 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:10:24 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:10:24.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:10:24 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/4078764243' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:10:24 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/4078764243' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:10:24 compute-1 ceph-mon[81775]: pgmap v2614: 321 pgs: 321 active+clean; 510 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 317 KiB/s rd, 1.2 MiB/s wr, 49 op/s
Jan 20 15:10:24 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:10:24 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:10:24 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:10:24.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:10:25 compute-1 nova_compute[225855]: 2026-01-20 15:10:25.578 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:10:26 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:10:26 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:10:26 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:10:26.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:10:26 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:10:26 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:10:26 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:10:26.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:10:27 compute-1 ceph-mon[81775]: pgmap v2615: 321 pgs: 321 active+clean; 493 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 231 KiB/s rd, 146 KiB/s wr, 68 op/s
Jan 20 15:10:27 compute-1 nova_compute[225855]: 2026-01-20 15:10:27.564 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:10:28 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e385 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:10:28 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:10:28 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:10:28 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:10:28.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:10:28 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:10:28 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:10:28 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:10:28.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:10:29 compute-1 ceph-mon[81775]: pgmap v2616: 321 pgs: 321 active+clean; 493 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 48 KiB/s rd, 20 KiB/s wr, 36 op/s
Jan 20 15:10:29 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/3315978454' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:10:30 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:10:30 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:10:30 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:10:30.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:10:30 compute-1 nova_compute[225855]: 2026-01-20 15:10:30.581 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:10:30 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:10:30 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:10:30 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:10:30.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:10:31 compute-1 ceph-mon[81775]: pgmap v2617: 321 pgs: 321 active+clean; 429 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 72 KiB/s rd, 29 KiB/s wr, 67 op/s
Jan 20 15:10:32 compute-1 nova_compute[225855]: 2026-01-20 15:10:32.566 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:10:32 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:10:32 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:10:32 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:10:32.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:10:32 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:10:32 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:10:32 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:10:32.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:10:33 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e385 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:10:33 compute-1 ceph-mon[81775]: pgmap v2618: 321 pgs: 321 active+clean; 429 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 21 KiB/s wr, 67 op/s
Jan 20 15:10:34 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:10:34 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:10:34 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:10:34.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:10:34 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:10:34 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:10:34 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:10:34.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:10:35 compute-1 nova_compute[225855]: 2026-01-20 15:10:35.548 225859 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1768921820.5471656, 9e852872-788c-4dac-b7fb-d76d67e7a84f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:10:35 compute-1 nova_compute[225855]: 2026-01-20 15:10:35.548 225859 INFO nova.compute.manager [-] [instance: 9e852872-788c-4dac-b7fb-d76d67e7a84f] VM Stopped (Lifecycle Event)
Jan 20 15:10:35 compute-1 nova_compute[225855]: 2026-01-20 15:10:35.585 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:10:35 compute-1 ceph-mon[81775]: pgmap v2619: 321 pgs: 321 active+clean; 429 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 21 KiB/s wr, 82 op/s
Jan 20 15:10:35 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/3420058915' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:10:35 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/3420058915' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:10:35 compute-1 nova_compute[225855]: 2026-01-20 15:10:35.998 225859 DEBUG nova.compute.manager [None req-cf1c000e-3324-4782-b19d-01b8759efea7 - - - - - -] [instance: 9e852872-788c-4dac-b7fb-d76d67e7a84f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:10:36 compute-1 podman[297363]: 2026-01-20 15:10:36.055123017 +0000 UTC m=+0.093128834 container health_status 72be95c12e041eb2cd1a16ad70dfa1391d9f03a5f3180a614ead9459dcfcd46f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true)
Jan 20 15:10:36 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:10:36 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:10:36 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:10:36.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:10:36 compute-1 ceph-mon[81775]: pgmap v2620: 321 pgs: 321 active+clean; 429 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 21 KiB/s wr, 153 op/s
Jan 20 15:10:36 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 20 15:10:36 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1348110601' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:10:36 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 20 15:10:36 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1348110601' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:10:36 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:10:36 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:10:36 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:10:36.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:10:37 compute-1 nova_compute[225855]: 2026-01-20 15:10:37.569 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:10:37 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/1348110601' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:10:37 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/1348110601' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:10:37 compute-1 nova_compute[225855]: 2026-01-20 15:10:37.696 225859 DEBUG oslo_concurrency.lockutils [None req-671ee076-b046-4e19-829f-98b4be8b1947 27658864f96d453586dd0846a4c55b7d fc74c4a296554866969b05aef75252af - - default default] Acquiring lock "a25af5a3-096f-4363-842e-d960c22eb16b" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:10:37 compute-1 nova_compute[225855]: 2026-01-20 15:10:37.696 225859 DEBUG oslo_concurrency.lockutils [None req-671ee076-b046-4e19-829f-98b4be8b1947 27658864f96d453586dd0846a4c55b7d fc74c4a296554866969b05aef75252af - - default default] Lock "a25af5a3-096f-4363-842e-d960c22eb16b" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:10:37 compute-1 nova_compute[225855]: 2026-01-20 15:10:37.696 225859 DEBUG oslo_concurrency.lockutils [None req-671ee076-b046-4e19-829f-98b4be8b1947 27658864f96d453586dd0846a4c55b7d fc74c4a296554866969b05aef75252af - - default default] Acquiring lock "a25af5a3-096f-4363-842e-d960c22eb16b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:10:37 compute-1 nova_compute[225855]: 2026-01-20 15:10:37.697 225859 DEBUG oslo_concurrency.lockutils [None req-671ee076-b046-4e19-829f-98b4be8b1947 27658864f96d453586dd0846a4c55b7d fc74c4a296554866969b05aef75252af - - default default] Lock "a25af5a3-096f-4363-842e-d960c22eb16b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:10:37 compute-1 nova_compute[225855]: 2026-01-20 15:10:37.697 225859 DEBUG oslo_concurrency.lockutils [None req-671ee076-b046-4e19-829f-98b4be8b1947 27658864f96d453586dd0846a4c55b7d fc74c4a296554866969b05aef75252af - - default default] Lock "a25af5a3-096f-4363-842e-d960c22eb16b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:10:37 compute-1 nova_compute[225855]: 2026-01-20 15:10:37.698 225859 INFO nova.compute.manager [None req-671ee076-b046-4e19-829f-98b4be8b1947 27658864f96d453586dd0846a4c55b7d fc74c4a296554866969b05aef75252af - - default default] [instance: a25af5a3-096f-4363-842e-d960c22eb16b] Terminating instance
Jan 20 15:10:37 compute-1 nova_compute[225855]: 2026-01-20 15:10:37.699 225859 DEBUG nova.compute.manager [None req-671ee076-b046-4e19-829f-98b4be8b1947 27658864f96d453586dd0846a4c55b7d fc74c4a296554866969b05aef75252af - - default default] [instance: a25af5a3-096f-4363-842e-d960c22eb16b] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 20 15:10:37 compute-1 kernel: tap6b7cb043-d1 (unregistering): left promiscuous mode
Jan 20 15:10:37 compute-1 NetworkManager[49104]: <info>  [1768921837.7646] device (tap6b7cb043-d1): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 20 15:10:37 compute-1 nova_compute[225855]: 2026-01-20 15:10:37.777 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:10:37 compute-1 ovn_controller[130490]: 2026-01-20T15:10:37Z|00777|binding|INFO|Releasing lport 6b7cb043-d1f4-4c2b-8173-1e3e2a664767 from this chassis (sb_readonly=0)
Jan 20 15:10:37 compute-1 ovn_controller[130490]: 2026-01-20T15:10:37Z|00778|binding|INFO|Setting lport 6b7cb043-d1f4-4c2b-8173-1e3e2a664767 down in Southbound
Jan 20 15:10:37 compute-1 ovn_controller[130490]: 2026-01-20T15:10:37Z|00779|binding|INFO|Removing iface tap6b7cb043-d1 ovn-installed in OVS
Jan 20 15:10:37 compute-1 nova_compute[225855]: 2026-01-20 15:10:37.778 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:10:37 compute-1 nova_compute[225855]: 2026-01-20 15:10:37.812 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:10:37 compute-1 systemd[1]: machine-qemu\x2d86\x2dinstance\x2d000000a8.scope: Deactivated successfully.
Jan 20 15:10:37 compute-1 systemd[1]: machine-qemu\x2d86\x2dinstance\x2d000000a8.scope: Consumed 19.034s CPU time.
Jan 20 15:10:37 compute-1 systemd-machined[194361]: Machine qemu-86-instance-000000a8 terminated.
Jan 20 15:10:37 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:10:37.833 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bf:9e:90 10.100.0.6'], port_security=['fa:16:3e:bf:9e:90 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'a25af5a3-096f-4363-842e-d960c22eb16b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3967ae21-1590-4685-8881-8bd1bcf25258', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fc74c4a296554866969b05aef75252af', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'd56d9c6d-4bb5-4a73-ab07-9e0ee1fd3b93', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=89ced88f-b1ed-4329-8a53-1931e6b0e3e9, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], logical_port=6b7cb043-d1f4-4c2b-8173-1e3e2a664767) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:10:37 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:10:37.834 140354 INFO neutron.agent.ovn.metadata.agent [-] Port 6b7cb043-d1f4-4c2b-8173-1e3e2a664767 in datapath 3967ae21-1590-4685-8881-8bd1bcf25258 unbound from our chassis
Jan 20 15:10:37 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:10:37.836 140354 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3967ae21-1590-4685-8881-8bd1bcf25258, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 20 15:10:37 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:10:37.837 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[6e792391-8bf1-484a-8dc7-6375c7345ea7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:10:37 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:10:37.838 140354 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-3967ae21-1590-4685-8881-8bd1bcf25258 namespace which is not needed anymore
Jan 20 15:10:37 compute-1 nova_compute[225855]: 2026-01-20 15:10:37.929 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:10:37 compute-1 nova_compute[225855]: 2026-01-20 15:10:37.935 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:10:37 compute-1 nova_compute[225855]: 2026-01-20 15:10:37.944 225859 INFO nova.virt.libvirt.driver [-] [instance: a25af5a3-096f-4363-842e-d960c22eb16b] Instance destroyed successfully.
Jan 20 15:10:37 compute-1 nova_compute[225855]: 2026-01-20 15:10:37.944 225859 DEBUG nova.objects.instance [None req-671ee076-b046-4e19-829f-98b4be8b1947 27658864f96d453586dd0846a4c55b7d fc74c4a296554866969b05aef75252af - - default default] Lazy-loading 'resources' on Instance uuid a25af5a3-096f-4363-842e-d960c22eb16b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:10:37 compute-1 neutron-haproxy-ovnmeta-3967ae21-1590-4685-8881-8bd1bcf25258[294842]: [NOTICE]   (294872) : haproxy version is 2.8.14-c23fe91
Jan 20 15:10:37 compute-1 neutron-haproxy-ovnmeta-3967ae21-1590-4685-8881-8bd1bcf25258[294842]: [NOTICE]   (294872) : path to executable is /usr/sbin/haproxy
Jan 20 15:10:37 compute-1 neutron-haproxy-ovnmeta-3967ae21-1590-4685-8881-8bd1bcf25258[294842]: [WARNING]  (294872) : Exiting Master process...
Jan 20 15:10:37 compute-1 neutron-haproxy-ovnmeta-3967ae21-1590-4685-8881-8bd1bcf25258[294842]: [WARNING]  (294872) : Exiting Master process...
Jan 20 15:10:37 compute-1 neutron-haproxy-ovnmeta-3967ae21-1590-4685-8881-8bd1bcf25258[294842]: [ALERT]    (294872) : Current worker (294884) exited with code 143 (Terminated)
Jan 20 15:10:37 compute-1 neutron-haproxy-ovnmeta-3967ae21-1590-4685-8881-8bd1bcf25258[294842]: [WARNING]  (294872) : All workers exited. Exiting... (0)
Jan 20 15:10:38 compute-1 systemd[1]: libpod-d749b2135e8b112af5d3a20cbeb705a74120f88ef0dbe175ef347b92349abb8e.scope: Deactivated successfully.
Jan 20 15:10:38 compute-1 podman[297424]: 2026-01-20 15:10:38.00708267 +0000 UTC m=+0.046196982 container died d749b2135e8b112af5d3a20cbeb705a74120f88ef0dbe175ef347b92349abb8e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3967ae21-1590-4685-8881-8bd1bcf25258, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202)
Jan 20 15:10:38 compute-1 nova_compute[225855]: 2026-01-20 15:10:38.023 225859 DEBUG nova.virt.libvirt.vif [None req-671ee076-b046-4e19-829f-98b4be8b1947 27658864f96d453586dd0846a4c55b7d fc74c4a296554866969b05aef75252af - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T15:07:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueNegativeTestJSON-server-950743647',display_name='tempest-ServerRescueNegativeTestJSON-server-950743647',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverrescuenegativetestjson-server-950743647',id=168,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-20T15:08:28Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='fc74c4a296554866969b05aef75252af',ramdisk_id='',reservation_id='r-gkbc59sh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueNegativeTestJSON-1649662639',owner_user_name='tempest-ServerRescueNegativeTestJSON-1649662639-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T15:08:28Z,user_data=None,user_id='27658864f96d453586dd0846a4c55b7d',uuid=a25af5a3-096f-4363-842e-d960c22eb16b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='rescued') vif={"id": "6b7cb043-d1f4-4c2b-8173-1e3e2a664767", "address": "fa:16:3e:bf:9e:90", "network": {"id": "3967ae21-1590-4685-8881-8bd1bcf25258", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-285441107-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fc74c4a296554866969b05aef75252af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b7cb043-d1", "ovs_interfaceid": "6b7cb043-d1f4-4c2b-8173-1e3e2a664767", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 20 15:10:38 compute-1 nova_compute[225855]: 2026-01-20 15:10:38.024 225859 DEBUG nova.network.os_vif_util [None req-671ee076-b046-4e19-829f-98b4be8b1947 27658864f96d453586dd0846a4c55b7d fc74c4a296554866969b05aef75252af - - default default] Converting VIF {"id": "6b7cb043-d1f4-4c2b-8173-1e3e2a664767", "address": "fa:16:3e:bf:9e:90", "network": {"id": "3967ae21-1590-4685-8881-8bd1bcf25258", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-285441107-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fc74c4a296554866969b05aef75252af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b7cb043-d1", "ovs_interfaceid": "6b7cb043-d1f4-4c2b-8173-1e3e2a664767", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 15:10:38 compute-1 nova_compute[225855]: 2026-01-20 15:10:38.025 225859 DEBUG nova.network.os_vif_util [None req-671ee076-b046-4e19-829f-98b4be8b1947 27658864f96d453586dd0846a4c55b7d fc74c4a296554866969b05aef75252af - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:bf:9e:90,bridge_name='br-int',has_traffic_filtering=True,id=6b7cb043-d1f4-4c2b-8173-1e3e2a664767,network=Network(3967ae21-1590-4685-8881-8bd1bcf25258),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6b7cb043-d1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 15:10:38 compute-1 nova_compute[225855]: 2026-01-20 15:10:38.026 225859 DEBUG os_vif [None req-671ee076-b046-4e19-829f-98b4be8b1947 27658864f96d453586dd0846a4c55b7d fc74c4a296554866969b05aef75252af - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:bf:9e:90,bridge_name='br-int',has_traffic_filtering=True,id=6b7cb043-d1f4-4c2b-8173-1e3e2a664767,network=Network(3967ae21-1590-4685-8881-8bd1bcf25258),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6b7cb043-d1') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 20 15:10:38 compute-1 nova_compute[225855]: 2026-01-20 15:10:38.028 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:10:38 compute-1 nova_compute[225855]: 2026-01-20 15:10:38.028 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6b7cb043-d1, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:10:38 compute-1 nova_compute[225855]: 2026-01-20 15:10:38.030 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:10:38 compute-1 nova_compute[225855]: 2026-01-20 15:10:38.031 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:10:38 compute-1 nova_compute[225855]: 2026-01-20 15:10:38.033 225859 INFO os_vif [None req-671ee076-b046-4e19-829f-98b4be8b1947 27658864f96d453586dd0846a4c55b7d fc74c4a296554866969b05aef75252af - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:bf:9e:90,bridge_name='br-int',has_traffic_filtering=True,id=6b7cb043-d1f4-4c2b-8173-1e3e2a664767,network=Network(3967ae21-1590-4685-8881-8bd1bcf25258),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6b7cb043-d1')
Jan 20 15:10:38 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d749b2135e8b112af5d3a20cbeb705a74120f88ef0dbe175ef347b92349abb8e-userdata-shm.mount: Deactivated successfully.
Jan 20 15:10:38 compute-1 systemd[1]: var-lib-containers-storage-overlay-6a0a29c0cfe0897e5a9faa037b73161c66d2ed59c5ab8259820c45f49ae9bf20-merged.mount: Deactivated successfully.
Jan 20 15:10:38 compute-1 podman[297424]: 2026-01-20 15:10:38.046644053 +0000 UTC m=+0.085758365 container cleanup d749b2135e8b112af5d3a20cbeb705a74120f88ef0dbe175ef347b92349abb8e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3967ae21-1590-4685-8881-8bd1bcf25258, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 20 15:10:38 compute-1 nova_compute[225855]: 2026-01-20 15:10:38.062 225859 DEBUG nova.compute.manager [req-73cfee4f-3676-421f-bf11-c472da662cb3 req-6e07b650-6541-4f53-9bbb-c4bc3915bf08 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: a25af5a3-096f-4363-842e-d960c22eb16b] Received event network-vif-unplugged-6b7cb043-d1f4-4c2b-8173-1e3e2a664767 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:10:38 compute-1 nova_compute[225855]: 2026-01-20 15:10:38.062 225859 DEBUG oslo_concurrency.lockutils [req-73cfee4f-3676-421f-bf11-c472da662cb3 req-6e07b650-6541-4f53-9bbb-c4bc3915bf08 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "a25af5a3-096f-4363-842e-d960c22eb16b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:10:38 compute-1 nova_compute[225855]: 2026-01-20 15:10:38.062 225859 DEBUG oslo_concurrency.lockutils [req-73cfee4f-3676-421f-bf11-c472da662cb3 req-6e07b650-6541-4f53-9bbb-c4bc3915bf08 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "a25af5a3-096f-4363-842e-d960c22eb16b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:10:38 compute-1 nova_compute[225855]: 2026-01-20 15:10:38.062 225859 DEBUG oslo_concurrency.lockutils [req-73cfee4f-3676-421f-bf11-c472da662cb3 req-6e07b650-6541-4f53-9bbb-c4bc3915bf08 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "a25af5a3-096f-4363-842e-d960c22eb16b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:10:38 compute-1 nova_compute[225855]: 2026-01-20 15:10:38.063 225859 DEBUG nova.compute.manager [req-73cfee4f-3676-421f-bf11-c472da662cb3 req-6e07b650-6541-4f53-9bbb-c4bc3915bf08 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: a25af5a3-096f-4363-842e-d960c22eb16b] No waiting events found dispatching network-vif-unplugged-6b7cb043-d1f4-4c2b-8173-1e3e2a664767 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:10:38 compute-1 nova_compute[225855]: 2026-01-20 15:10:38.063 225859 DEBUG nova.compute.manager [req-73cfee4f-3676-421f-bf11-c472da662cb3 req-6e07b650-6541-4f53-9bbb-c4bc3915bf08 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: a25af5a3-096f-4363-842e-d960c22eb16b] Received event network-vif-unplugged-6b7cb043-d1f4-4c2b-8173-1e3e2a664767 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 20 15:10:38 compute-1 systemd[1]: libpod-conmon-d749b2135e8b112af5d3a20cbeb705a74120f88ef0dbe175ef347b92349abb8e.scope: Deactivated successfully.
Jan 20 15:10:38 compute-1 podman[297465]: 2026-01-20 15:10:38.115562088 +0000 UTC m=+0.046864641 container remove d749b2135e8b112af5d3a20cbeb705a74120f88ef0dbe175ef347b92349abb8e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3967ae21-1590-4685-8881-8bd1bcf25258, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Jan 20 15:10:38 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:10:38.121 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[ae4d48a0-6f0d-4572-a7f3-5619be614d23]: (4, ('Tue Jan 20 03:10:37 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-3967ae21-1590-4685-8881-8bd1bcf25258 (d749b2135e8b112af5d3a20cbeb705a74120f88ef0dbe175ef347b92349abb8e)\nd749b2135e8b112af5d3a20cbeb705a74120f88ef0dbe175ef347b92349abb8e\nTue Jan 20 03:10:38 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-3967ae21-1590-4685-8881-8bd1bcf25258 (d749b2135e8b112af5d3a20cbeb705a74120f88ef0dbe175ef347b92349abb8e)\nd749b2135e8b112af5d3a20cbeb705a74120f88ef0dbe175ef347b92349abb8e\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:10:38 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:10:38.122 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[caf51314-1d46-40e9-943a-a4518560755a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:10:38 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:10:38.123 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3967ae21-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:10:38 compute-1 nova_compute[225855]: 2026-01-20 15:10:38.125 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:10:38 compute-1 kernel: tap3967ae21-10: left promiscuous mode
Jan 20 15:10:38 compute-1 nova_compute[225855]: 2026-01-20 15:10:38.139 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:10:38 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:10:38.142 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[7915caf9-26f1-4841-ad08-39071b452225]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:10:38 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:10:38.162 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[a9b0efc4-609b-48d4-99e2-8b31656e2ddb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:10:38 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:10:38.163 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[74065a5e-d960-4237-b91c-28bb970ada94]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:10:38 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:10:38.182 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[8904880d-ee4e-4985-9980-aa002876114f]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 674630, 'reachable_time': 40173, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 297489, 'error': None, 'target': 'ovnmeta-3967ae21-1590-4685-8881-8bd1bcf25258', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:10:38 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:10:38.184 140466 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-3967ae21-1590-4685-8881-8bd1bcf25258 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 20 15:10:38 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:10:38.184 140466 DEBUG oslo.privsep.daemon [-] privsep: reply[d65f5794-a4b9-4244-ab48-fc1fb8cea321]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:10:38 compute-1 systemd[1]: run-netns-ovnmeta\x2d3967ae21\x2d1590\x2d4685\x2d8881\x2d8bd1bcf25258.mount: Deactivated successfully.
Jan 20 15:10:38 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e385 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:10:38 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:10:38 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:10:38 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:10:38.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:10:38 compute-1 ceph-mon[81775]: pgmap v2621: 321 pgs: 321 active+clean; 429 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 20 KiB/s wr, 121 op/s
Jan 20 15:10:38 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e386 e386: 3 total, 3 up, 3 in
Jan 20 15:10:38 compute-1 nova_compute[225855]: 2026-01-20 15:10:38.694 225859 INFO nova.virt.libvirt.driver [None req-671ee076-b046-4e19-829f-98b4be8b1947 27658864f96d453586dd0846a4c55b7d fc74c4a296554866969b05aef75252af - - default default] [instance: a25af5a3-096f-4363-842e-d960c22eb16b] Deleting instance files /var/lib/nova/instances/a25af5a3-096f-4363-842e-d960c22eb16b_del
Jan 20 15:10:38 compute-1 nova_compute[225855]: 2026-01-20 15:10:38.695 225859 INFO nova.virt.libvirt.driver [None req-671ee076-b046-4e19-829f-98b4be8b1947 27658864f96d453586dd0846a4c55b7d fc74c4a296554866969b05aef75252af - - default default] [instance: a25af5a3-096f-4363-842e-d960c22eb16b] Deletion of /var/lib/nova/instances/a25af5a3-096f-4363-842e-d960c22eb16b_del complete
Jan 20 15:10:38 compute-1 nova_compute[225855]: 2026-01-20 15:10:38.921 225859 INFO nova.compute.manager [None req-671ee076-b046-4e19-829f-98b4be8b1947 27658864f96d453586dd0846a4c55b7d fc74c4a296554866969b05aef75252af - - default default] [instance: a25af5a3-096f-4363-842e-d960c22eb16b] Took 1.22 seconds to destroy the instance on the hypervisor.
Jan 20 15:10:38 compute-1 nova_compute[225855]: 2026-01-20 15:10:38.922 225859 DEBUG oslo.service.loopingcall [None req-671ee076-b046-4e19-829f-98b4be8b1947 27658864f96d453586dd0846a4c55b7d fc74c4a296554866969b05aef75252af - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 20 15:10:38 compute-1 nova_compute[225855]: 2026-01-20 15:10:38.923 225859 DEBUG nova.compute.manager [-] [instance: a25af5a3-096f-4363-842e-d960c22eb16b] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 20 15:10:38 compute-1 nova_compute[225855]: 2026-01-20 15:10:38.923 225859 DEBUG nova.network.neutron [-] [instance: a25af5a3-096f-4363-842e-d960c22eb16b] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 20 15:10:38 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:10:38 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:10:38 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:10:38.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:10:39 compute-1 sudo[297491]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:10:39 compute-1 sudo[297491]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:10:39 compute-1 sudo[297491]: pam_unix(sudo:session): session closed for user root
Jan 20 15:10:39 compute-1 sudo[297516]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:10:39 compute-1 sudo[297516]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:10:39 compute-1 sudo[297516]: pam_unix(sudo:session): session closed for user root
Jan 20 15:10:39 compute-1 ceph-mon[81775]: osdmap e386: 3 total, 3 up, 3 in
Jan 20 15:10:40 compute-1 nova_compute[225855]: 2026-01-20 15:10:40.143 225859 DEBUG nova.network.neutron [-] [instance: a25af5a3-096f-4363-842e-d960c22eb16b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:10:40 compute-1 nova_compute[225855]: 2026-01-20 15:10:40.314 225859 DEBUG nova.compute.manager [req-c0103c11-0e8c-4b62-b8f6-4bd604ccce3a req-2c673116-4bc2-4995-ab14-d2613f9686bc 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: a25af5a3-096f-4363-842e-d960c22eb16b] Received event network-vif-deleted-6b7cb043-d1f4-4c2b-8173-1e3e2a664767 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:10:40 compute-1 nova_compute[225855]: 2026-01-20 15:10:40.314 225859 INFO nova.compute.manager [req-c0103c11-0e8c-4b62-b8f6-4bd604ccce3a req-2c673116-4bc2-4995-ab14-d2613f9686bc 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: a25af5a3-096f-4363-842e-d960c22eb16b] Neutron deleted interface 6b7cb043-d1f4-4c2b-8173-1e3e2a664767; detaching it from the instance and deleting it from the info cache
Jan 20 15:10:40 compute-1 nova_compute[225855]: 2026-01-20 15:10:40.315 225859 DEBUG nova.network.neutron [req-c0103c11-0e8c-4b62-b8f6-4bd604ccce3a req-2c673116-4bc2-4995-ab14-d2613f9686bc 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: a25af5a3-096f-4363-842e-d960c22eb16b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:10:40 compute-1 nova_compute[225855]: 2026-01-20 15:10:40.316 225859 DEBUG nova.compute.manager [req-eae4afd1-6747-4efe-9781-16f726358106 req-7c214a29-be3a-482a-afbd-db300295c926 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: a25af5a3-096f-4363-842e-d960c22eb16b] Received event network-vif-plugged-6b7cb043-d1f4-4c2b-8173-1e3e2a664767 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:10:40 compute-1 nova_compute[225855]: 2026-01-20 15:10:40.317 225859 DEBUG oslo_concurrency.lockutils [req-eae4afd1-6747-4efe-9781-16f726358106 req-7c214a29-be3a-482a-afbd-db300295c926 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "a25af5a3-096f-4363-842e-d960c22eb16b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:10:40 compute-1 nova_compute[225855]: 2026-01-20 15:10:40.317 225859 DEBUG oslo_concurrency.lockutils [req-eae4afd1-6747-4efe-9781-16f726358106 req-7c214a29-be3a-482a-afbd-db300295c926 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "a25af5a3-096f-4363-842e-d960c22eb16b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:10:40 compute-1 nova_compute[225855]: 2026-01-20 15:10:40.317 225859 DEBUG oslo_concurrency.lockutils [req-eae4afd1-6747-4efe-9781-16f726358106 req-7c214a29-be3a-482a-afbd-db300295c926 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "a25af5a3-096f-4363-842e-d960c22eb16b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:10:40 compute-1 nova_compute[225855]: 2026-01-20 15:10:40.317 225859 DEBUG nova.compute.manager [req-eae4afd1-6747-4efe-9781-16f726358106 req-7c214a29-be3a-482a-afbd-db300295c926 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: a25af5a3-096f-4363-842e-d960c22eb16b] No waiting events found dispatching network-vif-plugged-6b7cb043-d1f4-4c2b-8173-1e3e2a664767 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:10:40 compute-1 nova_compute[225855]: 2026-01-20 15:10:40.317 225859 WARNING nova.compute.manager [req-eae4afd1-6747-4efe-9781-16f726358106 req-7c214a29-be3a-482a-afbd-db300295c926 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: a25af5a3-096f-4363-842e-d960c22eb16b] Received unexpected event network-vif-plugged-6b7cb043-d1f4-4c2b-8173-1e3e2a664767 for instance with vm_state rescued and task_state deleting.
Jan 20 15:10:40 compute-1 nova_compute[225855]: 2026-01-20 15:10:40.321 225859 INFO nova.compute.manager [-] [instance: a25af5a3-096f-4363-842e-d960c22eb16b] Took 1.40 seconds to deallocate network for instance.
Jan 20 15:10:40 compute-1 nova_compute[225855]: 2026-01-20 15:10:40.359 225859 DEBUG nova.compute.manager [req-c0103c11-0e8c-4b62-b8f6-4bd604ccce3a req-2c673116-4bc2-4995-ab14-d2613f9686bc 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: a25af5a3-096f-4363-842e-d960c22eb16b] Detach interface failed, port_id=6b7cb043-d1f4-4c2b-8173-1e3e2a664767, reason: Instance a25af5a3-096f-4363-842e-d960c22eb16b could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Jan 20 15:10:40 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:10:40 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:10:40 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:10:40.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:10:40 compute-1 nova_compute[225855]: 2026-01-20 15:10:40.659 225859 DEBUG oslo_concurrency.lockutils [None req-671ee076-b046-4e19-829f-98b4be8b1947 27658864f96d453586dd0846a4c55b7d fc74c4a296554866969b05aef75252af - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:10:40 compute-1 nova_compute[225855]: 2026-01-20 15:10:40.659 225859 DEBUG oslo_concurrency.lockutils [None req-671ee076-b046-4e19-829f-98b4be8b1947 27658864f96d453586dd0846a4c55b7d fc74c4a296554866969b05aef75252af - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:10:40 compute-1 ceph-mon[81775]: pgmap v2623: 321 pgs: 321 active+clean; 375 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 1.2 MiB/s wr, 176 op/s
Jan 20 15:10:40 compute-1 nova_compute[225855]: 2026-01-20 15:10:40.701 225859 DEBUG oslo_concurrency.processutils [None req-671ee076-b046-4e19-829f-98b4be8b1947 27658864f96d453586dd0846a4c55b7d fc74c4a296554866969b05aef75252af - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:10:40 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:10:40 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:10:40 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:10:40.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:10:41 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:10:41 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1780300540' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:10:41 compute-1 nova_compute[225855]: 2026-01-20 15:10:41.163 225859 DEBUG oslo_concurrency.processutils [None req-671ee076-b046-4e19-829f-98b4be8b1947 27658864f96d453586dd0846a4c55b7d fc74c4a296554866969b05aef75252af - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:10:41 compute-1 nova_compute[225855]: 2026-01-20 15:10:41.169 225859 DEBUG nova.compute.provider_tree [None req-671ee076-b046-4e19-829f-98b4be8b1947 27658864f96d453586dd0846a4c55b7d fc74c4a296554866969b05aef75252af - - default default] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:10:41 compute-1 nova_compute[225855]: 2026-01-20 15:10:41.302 225859 DEBUG nova.scheduler.client.report [None req-671ee076-b046-4e19-829f-98b4be8b1947 27658864f96d453586dd0846a4c55b7d fc74c4a296554866969b05aef75252af - - default default] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:10:41 compute-1 nova_compute[225855]: 2026-01-20 15:10:41.357 225859 DEBUG oslo_concurrency.lockutils [None req-671ee076-b046-4e19-829f-98b4be8b1947 27658864f96d453586dd0846a4c55b7d fc74c4a296554866969b05aef75252af - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.698s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:10:41 compute-1 nova_compute[225855]: 2026-01-20 15:10:41.434 225859 INFO nova.scheduler.client.report [None req-671ee076-b046-4e19-829f-98b4be8b1947 27658864f96d453586dd0846a4c55b7d fc74c4a296554866969b05aef75252af - - default default] Deleted allocations for instance a25af5a3-096f-4363-842e-d960c22eb16b
Jan 20 15:10:41 compute-1 nova_compute[225855]: 2026-01-20 15:10:41.544 225859 DEBUG oslo_concurrency.lockutils [None req-671ee076-b046-4e19-829f-98b4be8b1947 27658864f96d453586dd0846a4c55b7d fc74c4a296554866969b05aef75252af - - default default] Lock "a25af5a3-096f-4363-842e-d960c22eb16b" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.848s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:10:41 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/1780300540' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:10:42 compute-1 nova_compute[225855]: 2026-01-20 15:10:42.572 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:10:42 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:10:42 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:10:42 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:10:42.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:10:42 compute-1 ceph-mon[81775]: pgmap v2624: 321 pgs: 321 active+clean; 348 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 2.1 MiB/s wr, 231 op/s
Jan 20 15:10:42 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:10:42 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:10:42 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:10:42.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:10:43 compute-1 nova_compute[225855]: 2026-01-20 15:10:43.031 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:10:43 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e386 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:10:43 compute-1 nova_compute[225855]: 2026-01-20 15:10:43.626 225859 DEBUG oslo_concurrency.lockutils [None req-bd00b6c0-2377-4e1e-be36-f79351d98531 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Acquiring lock "35ba4fcf-baa1-45a5-bf4c-4f5cb96653a8" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:10:43 compute-1 nova_compute[225855]: 2026-01-20 15:10:43.627 225859 DEBUG oslo_concurrency.lockutils [None req-bd00b6c0-2377-4e1e-be36-f79351d98531 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Lock "35ba4fcf-baa1-45a5-bf4c-4f5cb96653a8" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:10:43 compute-1 nova_compute[225855]: 2026-01-20 15:10:43.644 225859 DEBUG nova.compute.manager [None req-bd00b6c0-2377-4e1e-be36-f79351d98531 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] [instance: 35ba4fcf-baa1-45a5-bf4c-4f5cb96653a8] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 20 15:10:43 compute-1 nova_compute[225855]: 2026-01-20 15:10:43.711 225859 DEBUG oslo_concurrency.lockutils [None req-bd00b6c0-2377-4e1e-be36-f79351d98531 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:10:43 compute-1 nova_compute[225855]: 2026-01-20 15:10:43.712 225859 DEBUG oslo_concurrency.lockutils [None req-bd00b6c0-2377-4e1e-be36-f79351d98531 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:10:43 compute-1 nova_compute[225855]: 2026-01-20 15:10:43.719 225859 DEBUG nova.virt.hardware [None req-bd00b6c0-2377-4e1e-be36-f79351d98531 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 20 15:10:43 compute-1 nova_compute[225855]: 2026-01-20 15:10:43.719 225859 INFO nova.compute.claims [None req-bd00b6c0-2377-4e1e-be36-f79351d98531 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] [instance: 35ba4fcf-baa1-45a5-bf4c-4f5cb96653a8] Claim successful on node compute-1.ctlplane.example.com
Jan 20 15:10:43 compute-1 nova_compute[225855]: 2026-01-20 15:10:43.807 225859 DEBUG oslo_concurrency.processutils [None req-bd00b6c0-2377-4e1e-be36-f79351d98531 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:10:44 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:10:44 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1303016254' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:10:44 compute-1 nova_compute[225855]: 2026-01-20 15:10:44.269 225859 DEBUG oslo_concurrency.processutils [None req-bd00b6c0-2377-4e1e-be36-f79351d98531 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:10:44 compute-1 nova_compute[225855]: 2026-01-20 15:10:44.277 225859 DEBUG nova.compute.provider_tree [None req-bd00b6c0-2377-4e1e-be36-f79351d98531 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:10:44 compute-1 nova_compute[225855]: 2026-01-20 15:10:44.300 225859 DEBUG nova.scheduler.client.report [None req-bd00b6c0-2377-4e1e-be36-f79351d98531 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:10:44 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/1303016254' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:10:44 compute-1 nova_compute[225855]: 2026-01-20 15:10:44.335 225859 DEBUG oslo_concurrency.lockutils [None req-bd00b6c0-2377-4e1e-be36-f79351d98531 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.623s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:10:44 compute-1 nova_compute[225855]: 2026-01-20 15:10:44.336 225859 DEBUG nova.compute.manager [None req-bd00b6c0-2377-4e1e-be36-f79351d98531 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] [instance: 35ba4fcf-baa1-45a5-bf4c-4f5cb96653a8] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 20 15:10:44 compute-1 nova_compute[225855]: 2026-01-20 15:10:44.404 225859 DEBUG nova.compute.manager [None req-bd00b6c0-2377-4e1e-be36-f79351d98531 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] [instance: 35ba4fcf-baa1-45a5-bf4c-4f5cb96653a8] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 20 15:10:44 compute-1 nova_compute[225855]: 2026-01-20 15:10:44.405 225859 DEBUG nova.network.neutron [None req-bd00b6c0-2377-4e1e-be36-f79351d98531 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] [instance: 35ba4fcf-baa1-45a5-bf4c-4f5cb96653a8] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 20 15:10:44 compute-1 nova_compute[225855]: 2026-01-20 15:10:44.425 225859 INFO nova.virt.libvirt.driver [None req-bd00b6c0-2377-4e1e-be36-f79351d98531 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] [instance: 35ba4fcf-baa1-45a5-bf4c-4f5cb96653a8] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 20 15:10:44 compute-1 nova_compute[225855]: 2026-01-20 15:10:44.447 225859 DEBUG nova.compute.manager [None req-bd00b6c0-2377-4e1e-be36-f79351d98531 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] [instance: 35ba4fcf-baa1-45a5-bf4c-4f5cb96653a8] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 20 15:10:44 compute-1 nova_compute[225855]: 2026-01-20 15:10:44.496 225859 INFO nova.virt.block_device [None req-bd00b6c0-2377-4e1e-be36-f79351d98531 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] [instance: 35ba4fcf-baa1-45a5-bf4c-4f5cb96653a8] Booting with volume snapshot 33f09854-2fbb-41ab-84ee-a1c4a1274b2b at /dev/vda
Jan 20 15:10:44 compute-1 nova_compute[225855]: 2026-01-20 15:10:44.580 225859 DEBUG nova.policy [None req-bd00b6c0-2377-4e1e-be36-f79351d98531 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'bf422e55e158420cbdae75f07a3bb97a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a49638950e1543fa8e0d251af5479623', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 20 15:10:44 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:10:44 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:10:44 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:10:44.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:10:44 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:10:44 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:10:44 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:10:44.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:10:45 compute-1 ceph-mon[81775]: pgmap v2625: 321 pgs: 321 active+clean; 331 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.1 MiB/s wr, 228 op/s
Jan 20 15:10:45 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/1430308450' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:10:45 compute-1 nova_compute[225855]: 2026-01-20 15:10:45.825 225859 DEBUG nova.network.neutron [None req-bd00b6c0-2377-4e1e-be36-f79351d98531 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] [instance: 35ba4fcf-baa1-45a5-bf4c-4f5cb96653a8] Successfully created port: 1c883167-abba-4a7e-af5c-33a54aba9ce0 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 20 15:10:46 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:10:46 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:10:46 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:10:46.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:10:46 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:10:46 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:10:46 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:10:46.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:10:47 compute-1 nova_compute[225855]: 2026-01-20 15:10:47.199 225859 DEBUG nova.network.neutron [None req-bd00b6c0-2377-4e1e-be36-f79351d98531 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] [instance: 35ba4fcf-baa1-45a5-bf4c-4f5cb96653a8] Successfully updated port: 1c883167-abba-4a7e-af5c-33a54aba9ce0 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 20 15:10:47 compute-1 nova_compute[225855]: 2026-01-20 15:10:47.222 225859 DEBUG oslo_concurrency.lockutils [None req-bd00b6c0-2377-4e1e-be36-f79351d98531 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Acquiring lock "refresh_cache-35ba4fcf-baa1-45a5-bf4c-4f5cb96653a8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:10:47 compute-1 nova_compute[225855]: 2026-01-20 15:10:47.222 225859 DEBUG oslo_concurrency.lockutils [None req-bd00b6c0-2377-4e1e-be36-f79351d98531 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Acquired lock "refresh_cache-35ba4fcf-baa1-45a5-bf4c-4f5cb96653a8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:10:47 compute-1 nova_compute[225855]: 2026-01-20 15:10:47.222 225859 DEBUG nova.network.neutron [None req-bd00b6c0-2377-4e1e-be36-f79351d98531 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] [instance: 35ba4fcf-baa1-45a5-bf4c-4f5cb96653a8] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 15:10:47 compute-1 nova_compute[225855]: 2026-01-20 15:10:47.298 225859 DEBUG nova.compute.manager [req-b31ee416-8317-4348-93ef-231b53011c14 req-74b23f27-0e5f-4934-81c3-0cc7fa2c9700 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 35ba4fcf-baa1-45a5-bf4c-4f5cb96653a8] Received event network-changed-1c883167-abba-4a7e-af5c-33a54aba9ce0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:10:47 compute-1 nova_compute[225855]: 2026-01-20 15:10:47.298 225859 DEBUG nova.compute.manager [req-b31ee416-8317-4348-93ef-231b53011c14 req-74b23f27-0e5f-4934-81c3-0cc7fa2c9700 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 35ba4fcf-baa1-45a5-bf4c-4f5cb96653a8] Refreshing instance network info cache due to event network-changed-1c883167-abba-4a7e-af5c-33a54aba9ce0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 15:10:47 compute-1 nova_compute[225855]: 2026-01-20 15:10:47.299 225859 DEBUG oslo_concurrency.lockutils [req-b31ee416-8317-4348-93ef-231b53011c14 req-74b23f27-0e5f-4934-81c3-0cc7fa2c9700 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-35ba4fcf-baa1-45a5-bf4c-4f5cb96653a8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:10:47 compute-1 nova_compute[225855]: 2026-01-20 15:10:47.442 225859 DEBUG nova.network.neutron [None req-bd00b6c0-2377-4e1e-be36-f79351d98531 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] [instance: 35ba4fcf-baa1-45a5-bf4c-4f5cb96653a8] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 20 15:10:47 compute-1 ceph-mon[81775]: pgmap v2626: 321 pgs: 321 active+clean; 266 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 406 KiB/s rd, 2.1 MiB/s wr, 205 op/s
Jan 20 15:10:47 compute-1 nova_compute[225855]: 2026-01-20 15:10:47.574 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:10:48 compute-1 nova_compute[225855]: 2026-01-20 15:10:48.033 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:10:48 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e386 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:10:48 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:10:48 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:10:48 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:10:48.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:10:48 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:10:48 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:10:48 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:10:48.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:10:49 compute-1 nova_compute[225855]: 2026-01-20 15:10:49.013 225859 DEBUG os_brick.utils [None req-bd00b6c0-2377-4e1e-be36-f79351d98531 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.101', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-1.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Jan 20 15:10:49 compute-1 nova_compute[225855]: 2026-01-20 15:10:49.014 231081 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:10:49 compute-1 nova_compute[225855]: 2026-01-20 15:10:49.036 231081 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.021s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:10:49 compute-1 nova_compute[225855]: 2026-01-20 15:10:49.036 231081 DEBUG oslo.privsep.daemon [-] privsep: reply[6c7ca7b8-1040-46a2-8259-dff386738023]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:10:49 compute-1 nova_compute[225855]: 2026-01-20 15:10:49.038 231081 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:10:49 compute-1 nova_compute[225855]: 2026-01-20 15:10:49.047 231081 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:10:49 compute-1 nova_compute[225855]: 2026-01-20 15:10:49.048 231081 DEBUG oslo.privsep.daemon [-] privsep: reply[ceb3c032-1387-42ce-91b6-a4172ebbde65]: (4, ('InitiatorName=iqn.1994-05.com.redhat:1821ea3dc03d', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:10:49 compute-1 nova_compute[225855]: 2026-01-20 15:10:49.049 231081 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:10:49 compute-1 nova_compute[225855]: 2026-01-20 15:10:49.063 231081 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.014s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:10:49 compute-1 nova_compute[225855]: 2026-01-20 15:10:49.064 231081 DEBUG oslo.privsep.daemon [-] privsep: reply[84dfb702-a52d-4db6-8035-d6efc95196ac]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:10:49 compute-1 nova_compute[225855]: 2026-01-20 15:10:49.066 231081 DEBUG oslo.privsep.daemon [-] privsep: reply[4a9ed2f5-c176-419e-9aa1-758d10fb1bb1]: (4, '870b1f1c-f19c-477b-b282-ee6eeba50974') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:10:49 compute-1 nova_compute[225855]: 2026-01-20 15:10:49.066 225859 DEBUG oslo_concurrency.processutils [None req-bd00b6c0-2377-4e1e-be36-f79351d98531 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:10:49 compute-1 nova_compute[225855]: 2026-01-20 15:10:49.107 225859 DEBUG oslo_concurrency.processutils [None req-bd00b6c0-2377-4e1e-be36-f79351d98531 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] CMD "nvme version" returned: 0 in 0.041s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:10:49 compute-1 nova_compute[225855]: 2026-01-20 15:10:49.110 225859 DEBUG os_brick.initiator.connectors.lightos [None req-bd00b6c0-2377-4e1e-be36-f79351d98531 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Jan 20 15:10:49 compute-1 nova_compute[225855]: 2026-01-20 15:10:49.110 225859 DEBUG os_brick.initiator.connectors.lightos [None req-bd00b6c0-2377-4e1e-be36-f79351d98531 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Jan 20 15:10:49 compute-1 nova_compute[225855]: 2026-01-20 15:10:49.110 225859 DEBUG os_brick.initiator.connectors.lightos [None req-bd00b6c0-2377-4e1e-be36-f79351d98531 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:5350774e-8b5e-4dba-80a9-92d405981c1d dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Jan 20 15:10:49 compute-1 nova_compute[225855]: 2026-01-20 15:10:49.111 225859 DEBUG os_brick.utils [None req-bd00b6c0-2377-4e1e-be36-f79351d98531 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] <== get_connector_properties: return (97ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.101', 'host': 'compute-1.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:1821ea3dc03d', 'do_local_attach': False, 'nvme_hostid': '5350774e-8b5e-4dba-80a9-92d405981c1d', 'system uuid': '870b1f1c-f19c-477b-b282-ee6eeba50974', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:5350774e-8b5e-4dba-80a9-92d405981c1d', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Jan 20 15:10:49 compute-1 nova_compute[225855]: 2026-01-20 15:10:49.111 225859 DEBUG nova.virt.block_device [None req-bd00b6c0-2377-4e1e-be36-f79351d98531 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] [instance: 35ba4fcf-baa1-45a5-bf4c-4f5cb96653a8] Updating existing volume attachment record: 9aa44915-ba45-4054-be03-16dd5a3b8ca4 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Jan 20 15:10:49 compute-1 nova_compute[225855]: 2026-01-20 15:10:49.406 225859 DEBUG nova.network.neutron [None req-bd00b6c0-2377-4e1e-be36-f79351d98531 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] [instance: 35ba4fcf-baa1-45a5-bf4c-4f5cb96653a8] Updating instance_info_cache with network_info: [{"id": "1c883167-abba-4a7e-af5c-33a54aba9ce0", "address": "fa:16:3e:51:9b:64", "network": {"id": "b677f1a9-dbaa-4373-8466-bd9ccf067b91", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-408170906-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a49638950e1543fa8e0d251af5479623", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c883167-ab", "ovs_interfaceid": "1c883167-abba-4a7e-af5c-33a54aba9ce0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:10:49 compute-1 nova_compute[225855]: 2026-01-20 15:10:49.437 225859 DEBUG oslo_concurrency.lockutils [None req-bd00b6c0-2377-4e1e-be36-f79351d98531 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Releasing lock "refresh_cache-35ba4fcf-baa1-45a5-bf4c-4f5cb96653a8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:10:49 compute-1 nova_compute[225855]: 2026-01-20 15:10:49.438 225859 DEBUG nova.compute.manager [None req-bd00b6c0-2377-4e1e-be36-f79351d98531 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] [instance: 35ba4fcf-baa1-45a5-bf4c-4f5cb96653a8] Instance network_info: |[{"id": "1c883167-abba-4a7e-af5c-33a54aba9ce0", "address": "fa:16:3e:51:9b:64", "network": {"id": "b677f1a9-dbaa-4373-8466-bd9ccf067b91", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-408170906-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a49638950e1543fa8e0d251af5479623", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c883167-ab", "ovs_interfaceid": "1c883167-abba-4a7e-af5c-33a54aba9ce0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 20 15:10:49 compute-1 nova_compute[225855]: 2026-01-20 15:10:49.439 225859 DEBUG oslo_concurrency.lockutils [req-b31ee416-8317-4348-93ef-231b53011c14 req-74b23f27-0e5f-4934-81c3-0cc7fa2c9700 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-35ba4fcf-baa1-45a5-bf4c-4f5cb96653a8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:10:49 compute-1 nova_compute[225855]: 2026-01-20 15:10:49.440 225859 DEBUG nova.network.neutron [req-b31ee416-8317-4348-93ef-231b53011c14 req-74b23f27-0e5f-4934-81c3-0cc7fa2c9700 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 35ba4fcf-baa1-45a5-bf4c-4f5cb96653a8] Refreshing network info cache for port 1c883167-abba-4a7e-af5c-33a54aba9ce0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 15:10:49 compute-1 ceph-mon[81775]: pgmap v2627: 321 pgs: 321 active+clean; 266 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 406 KiB/s rd, 2.1 MiB/s wr, 205 op/s
Jan 20 15:10:50 compute-1 podman[297598]: 2026-01-20 15:10:50.020192734 +0000 UTC m=+0.064187762 container health_status 533cd8e0c3b8b7f910f15cef49e7f879ab46c12ce646a35628c15f13725eb500 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Jan 20 15:10:50 compute-1 nova_compute[225855]: 2026-01-20 15:10:50.112 225859 DEBUG nova.compute.manager [None req-bd00b6c0-2377-4e1e-be36-f79351d98531 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] [instance: 35ba4fcf-baa1-45a5-bf4c-4f5cb96653a8] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 20 15:10:50 compute-1 nova_compute[225855]: 2026-01-20 15:10:50.116 225859 DEBUG nova.virt.libvirt.driver [None req-bd00b6c0-2377-4e1e-be36-f79351d98531 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] [instance: 35ba4fcf-baa1-45a5-bf4c-4f5cb96653a8] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 20 15:10:50 compute-1 nova_compute[225855]: 2026-01-20 15:10:50.116 225859 INFO nova.virt.libvirt.driver [None req-bd00b6c0-2377-4e1e-be36-f79351d98531 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] [instance: 35ba4fcf-baa1-45a5-bf4c-4f5cb96653a8] Creating image(s)
Jan 20 15:10:50 compute-1 nova_compute[225855]: 2026-01-20 15:10:50.117 225859 DEBUG nova.virt.libvirt.driver [None req-bd00b6c0-2377-4e1e-be36-f79351d98531 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] [instance: 35ba4fcf-baa1-45a5-bf4c-4f5cb96653a8] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Jan 20 15:10:50 compute-1 nova_compute[225855]: 2026-01-20 15:10:50.118 225859 DEBUG nova.virt.libvirt.driver [None req-bd00b6c0-2377-4e1e-be36-f79351d98531 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] [instance: 35ba4fcf-baa1-45a5-bf4c-4f5cb96653a8] Ensure instance console log exists: /var/lib/nova/instances/35ba4fcf-baa1-45a5-bf4c-4f5cb96653a8/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 20 15:10:50 compute-1 nova_compute[225855]: 2026-01-20 15:10:50.118 225859 DEBUG oslo_concurrency.lockutils [None req-bd00b6c0-2377-4e1e-be36-f79351d98531 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:10:50 compute-1 nova_compute[225855]: 2026-01-20 15:10:50.119 225859 DEBUG oslo_concurrency.lockutils [None req-bd00b6c0-2377-4e1e-be36-f79351d98531 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:10:50 compute-1 nova_compute[225855]: 2026-01-20 15:10:50.120 225859 DEBUG oslo_concurrency.lockutils [None req-bd00b6c0-2377-4e1e-be36-f79351d98531 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:10:50 compute-1 nova_compute[225855]: 2026-01-20 15:10:50.125 225859 DEBUG nova.virt.libvirt.driver [None req-bd00b6c0-2377-4e1e-be36-f79351d98531 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] [instance: 35ba4fcf-baa1-45a5-bf4c-4f5cb96653a8] Start _get_guest_xml network_info=[{"id": "1c883167-abba-4a7e-af5c-33a54aba9ce0", "address": "fa:16:3e:51:9b:64", "network": {"id": "b677f1a9-dbaa-4373-8466-bd9ccf067b91", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-408170906-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a49638950e1543fa8e0d251af5479623", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c883167-ab", "ovs_interfaceid": "1c883167-abba-4a7e-af5c-33a54aba9ce0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'delete_on_termination': True, 'device_type': 'disk', 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-f396a213-a7f4-434e-a290-c5d9278be4af', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'f396a213-a7f4-434e-a290-c5d9278be4af', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '35ba4fcf-baa1-45a5-bf4c-4f5cb96653a8', 'attached_at': '', 'detached_at': '', 'volume_id': 'f396a213-a7f4-434e-a290-c5d9278be4af', 'serial': 'f396a213-a7f4-434e-a290-c5d9278be4af'}, 'guest_format': None, 'boot_index': 0, 'mount_device': '/dev/vda', 'attachment_id': '9aa44915-ba45-4054-be03-16dd5a3b8ca4', 'disk_bus': 'virtio', 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 20 15:10:50 compute-1 nova_compute[225855]: 2026-01-20 15:10:50.133 225859 WARNING nova.virt.libvirt.driver [None req-bd00b6c0-2377-4e1e-be36-f79351d98531 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 15:10:50 compute-1 nova_compute[225855]: 2026-01-20 15:10:50.139 225859 DEBUG nova.virt.libvirt.host [None req-bd00b6c0-2377-4e1e-be36-f79351d98531 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 20 15:10:50 compute-1 nova_compute[225855]: 2026-01-20 15:10:50.140 225859 DEBUG nova.virt.libvirt.host [None req-bd00b6c0-2377-4e1e-be36-f79351d98531 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 20 15:10:50 compute-1 nova_compute[225855]: 2026-01-20 15:10:50.144 225859 DEBUG nova.virt.libvirt.host [None req-bd00b6c0-2377-4e1e-be36-f79351d98531 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 20 15:10:50 compute-1 nova_compute[225855]: 2026-01-20 15:10:50.145 225859 DEBUG nova.virt.libvirt.host [None req-bd00b6c0-2377-4e1e-be36-f79351d98531 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 20 15:10:50 compute-1 nova_compute[225855]: 2026-01-20 15:10:50.147 225859 DEBUG nova.virt.libvirt.driver [None req-bd00b6c0-2377-4e1e-be36-f79351d98531 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 20 15:10:50 compute-1 nova_compute[225855]: 2026-01-20 15:10:50.148 225859 DEBUG nova.virt.hardware [None req-bd00b6c0-2377-4e1e-be36-f79351d98531 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-20T14:21:55Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='522deaab-a741-4dbb-932d-d8b13a211c33',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 20 15:10:50 compute-1 nova_compute[225855]: 2026-01-20 15:10:50.148 225859 DEBUG nova.virt.hardware [None req-bd00b6c0-2377-4e1e-be36-f79351d98531 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 20 15:10:50 compute-1 nova_compute[225855]: 2026-01-20 15:10:50.149 225859 DEBUG nova.virt.hardware [None req-bd00b6c0-2377-4e1e-be36-f79351d98531 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 20 15:10:50 compute-1 nova_compute[225855]: 2026-01-20 15:10:50.149 225859 DEBUG nova.virt.hardware [None req-bd00b6c0-2377-4e1e-be36-f79351d98531 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 20 15:10:50 compute-1 nova_compute[225855]: 2026-01-20 15:10:50.150 225859 DEBUG nova.virt.hardware [None req-bd00b6c0-2377-4e1e-be36-f79351d98531 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 20 15:10:50 compute-1 nova_compute[225855]: 2026-01-20 15:10:50.150 225859 DEBUG nova.virt.hardware [None req-bd00b6c0-2377-4e1e-be36-f79351d98531 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 20 15:10:50 compute-1 nova_compute[225855]: 2026-01-20 15:10:50.150 225859 DEBUG nova.virt.hardware [None req-bd00b6c0-2377-4e1e-be36-f79351d98531 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 20 15:10:50 compute-1 nova_compute[225855]: 2026-01-20 15:10:50.151 225859 DEBUG nova.virt.hardware [None req-bd00b6c0-2377-4e1e-be36-f79351d98531 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 20 15:10:50 compute-1 nova_compute[225855]: 2026-01-20 15:10:50.151 225859 DEBUG nova.virt.hardware [None req-bd00b6c0-2377-4e1e-be36-f79351d98531 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 20 15:10:50 compute-1 nova_compute[225855]: 2026-01-20 15:10:50.152 225859 DEBUG nova.virt.hardware [None req-bd00b6c0-2377-4e1e-be36-f79351d98531 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 20 15:10:50 compute-1 nova_compute[225855]: 2026-01-20 15:10:50.152 225859 DEBUG nova.virt.hardware [None req-bd00b6c0-2377-4e1e-be36-f79351d98531 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 20 15:10:50 compute-1 nova_compute[225855]: 2026-01-20 15:10:50.190 225859 DEBUG nova.storage.rbd_utils [None req-bd00b6c0-2377-4e1e-be36-f79351d98531 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] rbd image 35ba4fcf-baa1-45a5-bf4c-4f5cb96653a8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:10:50 compute-1 nova_compute[225855]: 2026-01-20 15:10:50.195 225859 DEBUG oslo_concurrency.processutils [None req-bd00b6c0-2377-4e1e-be36-f79351d98531 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:10:50 compute-1 nova_compute[225855]: 2026-01-20 15:10:50.271 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:10:50 compute-1 nova_compute[225855]: 2026-01-20 15:10:50.340 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:10:50 compute-1 nova_compute[225855]: 2026-01-20 15:10:50.340 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 15:10:50 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/1529396375' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:10:50 compute-1 nova_compute[225855]: 2026-01-20 15:10:50.560 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:10:50 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:10:50 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:10:50 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:10:50.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:10:50 compute-1 nova_compute[225855]: 2026-01-20 15:10:50.641 225859 DEBUG oslo_concurrency.processutils [None req-bd00b6c0-2377-4e1e-be36-f79351d98531 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:10:50 compute-1 nova_compute[225855]: 2026-01-20 15:10:50.667 225859 DEBUG nova.virt.libvirt.vif [None req-bd00b6c0-2377-4e1e-be36-f79351d98531 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T15:10:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-589796538',display_name='tempest-TestVolumeBootPattern-server-589796538',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-589796538',id=174,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a49638950e1543fa8e0d251af5479623',ramdisk_id='',reservation_id='r-sgiqpjjh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-194644003',owner_user_name='tempest-TestVolumeBootPattern-194644003-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None
,updated_at=2026-01-20T15:10:44Z,user_data=None,user_id='bf422e55e158420cbdae75f07a3bb97a',uuid=35ba4fcf-baa1-45a5-bf4c-4f5cb96653a8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1c883167-abba-4a7e-af5c-33a54aba9ce0", "address": "fa:16:3e:51:9b:64", "network": {"id": "b677f1a9-dbaa-4373-8466-bd9ccf067b91", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-408170906-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a49638950e1543fa8e0d251af5479623", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c883167-ab", "ovs_interfaceid": "1c883167-abba-4a7e-af5c-33a54aba9ce0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 20 15:10:50 compute-1 nova_compute[225855]: 2026-01-20 15:10:50.668 225859 DEBUG nova.network.os_vif_util [None req-bd00b6c0-2377-4e1e-be36-f79351d98531 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Converting VIF {"id": "1c883167-abba-4a7e-af5c-33a54aba9ce0", "address": "fa:16:3e:51:9b:64", "network": {"id": "b677f1a9-dbaa-4373-8466-bd9ccf067b91", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-408170906-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a49638950e1543fa8e0d251af5479623", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c883167-ab", "ovs_interfaceid": "1c883167-abba-4a7e-af5c-33a54aba9ce0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 15:10:50 compute-1 nova_compute[225855]: 2026-01-20 15:10:50.669 225859 DEBUG nova.network.os_vif_util [None req-bd00b6c0-2377-4e1e-be36-f79351d98531 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:51:9b:64,bridge_name='br-int',has_traffic_filtering=True,id=1c883167-abba-4a7e-af5c-33a54aba9ce0,network=Network(b677f1a9-dbaa-4373-8466-bd9ccf067b91),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1c883167-ab') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 15:10:50 compute-1 nova_compute[225855]: 2026-01-20 15:10:50.670 225859 DEBUG nova.objects.instance [None req-bd00b6c0-2377-4e1e-be36-f79351d98531 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Lazy-loading 'pci_devices' on Instance uuid 35ba4fcf-baa1-45a5-bf4c-4f5cb96653a8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:10:50 compute-1 nova_compute[225855]: 2026-01-20 15:10:50.686 225859 DEBUG nova.virt.libvirt.driver [None req-bd00b6c0-2377-4e1e-be36-f79351d98531 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] [instance: 35ba4fcf-baa1-45a5-bf4c-4f5cb96653a8] End _get_guest_xml xml=<domain type="kvm">
Jan 20 15:10:50 compute-1 nova_compute[225855]:   <uuid>35ba4fcf-baa1-45a5-bf4c-4f5cb96653a8</uuid>
Jan 20 15:10:50 compute-1 nova_compute[225855]:   <name>instance-000000ae</name>
Jan 20 15:10:50 compute-1 nova_compute[225855]:   <memory>131072</memory>
Jan 20 15:10:50 compute-1 nova_compute[225855]:   <vcpu>1</vcpu>
Jan 20 15:10:50 compute-1 nova_compute[225855]:   <metadata>
Jan 20 15:10:50 compute-1 nova_compute[225855]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 15:10:50 compute-1 nova_compute[225855]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 15:10:50 compute-1 nova_compute[225855]:       <nova:name>tempest-TestVolumeBootPattern-server-589796538</nova:name>
Jan 20 15:10:50 compute-1 nova_compute[225855]:       <nova:creationTime>2026-01-20 15:10:50</nova:creationTime>
Jan 20 15:10:50 compute-1 nova_compute[225855]:       <nova:flavor name="m1.nano">
Jan 20 15:10:50 compute-1 nova_compute[225855]:         <nova:memory>128</nova:memory>
Jan 20 15:10:50 compute-1 nova_compute[225855]:         <nova:disk>1</nova:disk>
Jan 20 15:10:50 compute-1 nova_compute[225855]:         <nova:swap>0</nova:swap>
Jan 20 15:10:50 compute-1 nova_compute[225855]:         <nova:ephemeral>0</nova:ephemeral>
Jan 20 15:10:50 compute-1 nova_compute[225855]:         <nova:vcpus>1</nova:vcpus>
Jan 20 15:10:50 compute-1 nova_compute[225855]:       </nova:flavor>
Jan 20 15:10:50 compute-1 nova_compute[225855]:       <nova:owner>
Jan 20 15:10:50 compute-1 nova_compute[225855]:         <nova:user uuid="bf422e55e158420cbdae75f07a3bb97a">tempest-TestVolumeBootPattern-194644003-project-member</nova:user>
Jan 20 15:10:50 compute-1 nova_compute[225855]:         <nova:project uuid="a49638950e1543fa8e0d251af5479623">tempest-TestVolumeBootPattern-194644003</nova:project>
Jan 20 15:10:50 compute-1 nova_compute[225855]:       </nova:owner>
Jan 20 15:10:50 compute-1 nova_compute[225855]:       <nova:ports>
Jan 20 15:10:50 compute-1 nova_compute[225855]:         <nova:port uuid="1c883167-abba-4a7e-af5c-33a54aba9ce0">
Jan 20 15:10:50 compute-1 nova_compute[225855]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Jan 20 15:10:50 compute-1 nova_compute[225855]:         </nova:port>
Jan 20 15:10:50 compute-1 nova_compute[225855]:       </nova:ports>
Jan 20 15:10:50 compute-1 nova_compute[225855]:     </nova:instance>
Jan 20 15:10:50 compute-1 nova_compute[225855]:   </metadata>
Jan 20 15:10:50 compute-1 nova_compute[225855]:   <sysinfo type="smbios">
Jan 20 15:10:50 compute-1 nova_compute[225855]:     <system>
Jan 20 15:10:50 compute-1 nova_compute[225855]:       <entry name="manufacturer">RDO</entry>
Jan 20 15:10:50 compute-1 nova_compute[225855]:       <entry name="product">OpenStack Compute</entry>
Jan 20 15:10:50 compute-1 nova_compute[225855]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 15:10:50 compute-1 nova_compute[225855]:       <entry name="serial">35ba4fcf-baa1-45a5-bf4c-4f5cb96653a8</entry>
Jan 20 15:10:50 compute-1 nova_compute[225855]:       <entry name="uuid">35ba4fcf-baa1-45a5-bf4c-4f5cb96653a8</entry>
Jan 20 15:10:50 compute-1 nova_compute[225855]:       <entry name="family">Virtual Machine</entry>
Jan 20 15:10:50 compute-1 nova_compute[225855]:     </system>
Jan 20 15:10:50 compute-1 nova_compute[225855]:   </sysinfo>
Jan 20 15:10:50 compute-1 nova_compute[225855]:   <os>
Jan 20 15:10:50 compute-1 nova_compute[225855]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 20 15:10:50 compute-1 nova_compute[225855]:     <boot dev="hd"/>
Jan 20 15:10:50 compute-1 nova_compute[225855]:     <smbios mode="sysinfo"/>
Jan 20 15:10:50 compute-1 nova_compute[225855]:   </os>
Jan 20 15:10:50 compute-1 nova_compute[225855]:   <features>
Jan 20 15:10:50 compute-1 nova_compute[225855]:     <acpi/>
Jan 20 15:10:50 compute-1 nova_compute[225855]:     <apic/>
Jan 20 15:10:50 compute-1 nova_compute[225855]:     <vmcoreinfo/>
Jan 20 15:10:50 compute-1 nova_compute[225855]:   </features>
Jan 20 15:10:50 compute-1 nova_compute[225855]:   <clock offset="utc">
Jan 20 15:10:50 compute-1 nova_compute[225855]:     <timer name="pit" tickpolicy="delay"/>
Jan 20 15:10:50 compute-1 nova_compute[225855]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 20 15:10:50 compute-1 nova_compute[225855]:     <timer name="hpet" present="no"/>
Jan 20 15:10:50 compute-1 nova_compute[225855]:   </clock>
Jan 20 15:10:50 compute-1 nova_compute[225855]:   <cpu mode="custom" match="exact">
Jan 20 15:10:50 compute-1 nova_compute[225855]:     <model>Nehalem</model>
Jan 20 15:10:50 compute-1 nova_compute[225855]:     <topology sockets="1" cores="1" threads="1"/>
Jan 20 15:10:50 compute-1 nova_compute[225855]:   </cpu>
Jan 20 15:10:50 compute-1 nova_compute[225855]:   <devices>
Jan 20 15:10:50 compute-1 nova_compute[225855]:     <disk type="network" device="cdrom">
Jan 20 15:10:50 compute-1 nova_compute[225855]:       <driver type="raw" cache="none"/>
Jan 20 15:10:50 compute-1 nova_compute[225855]:       <source protocol="rbd" name="vms/35ba4fcf-baa1-45a5-bf4c-4f5cb96653a8_disk.config">
Jan 20 15:10:50 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 15:10:50 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 15:10:50 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 15:10:50 compute-1 nova_compute[225855]:       </source>
Jan 20 15:10:50 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 15:10:50 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 15:10:50 compute-1 nova_compute[225855]:       </auth>
Jan 20 15:10:50 compute-1 nova_compute[225855]:       <target dev="sda" bus="sata"/>
Jan 20 15:10:50 compute-1 nova_compute[225855]:     </disk>
Jan 20 15:10:50 compute-1 nova_compute[225855]:     <disk type="network" device="disk">
Jan 20 15:10:50 compute-1 nova_compute[225855]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 20 15:10:50 compute-1 nova_compute[225855]:       <source protocol="rbd" name="volumes/volume-f396a213-a7f4-434e-a290-c5d9278be4af">
Jan 20 15:10:50 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 15:10:50 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 15:10:50 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 15:10:50 compute-1 nova_compute[225855]:       </source>
Jan 20 15:10:50 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 15:10:50 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 15:10:50 compute-1 nova_compute[225855]:       </auth>
Jan 20 15:10:50 compute-1 nova_compute[225855]:       <target dev="vda" bus="virtio"/>
Jan 20 15:10:50 compute-1 nova_compute[225855]:       <serial>f396a213-a7f4-434e-a290-c5d9278be4af</serial>
Jan 20 15:10:50 compute-1 nova_compute[225855]:     </disk>
Jan 20 15:10:50 compute-1 nova_compute[225855]:     <interface type="ethernet">
Jan 20 15:10:50 compute-1 nova_compute[225855]:       <mac address="fa:16:3e:51:9b:64"/>
Jan 20 15:10:50 compute-1 nova_compute[225855]:       <model type="virtio"/>
Jan 20 15:10:50 compute-1 nova_compute[225855]:       <driver name="vhost" rx_queue_size="512"/>
Jan 20 15:10:50 compute-1 nova_compute[225855]:       <mtu size="1442"/>
Jan 20 15:10:50 compute-1 nova_compute[225855]:       <target dev="tap1c883167-ab"/>
Jan 20 15:10:50 compute-1 nova_compute[225855]:     </interface>
Jan 20 15:10:50 compute-1 nova_compute[225855]:     <serial type="pty">
Jan 20 15:10:50 compute-1 nova_compute[225855]:       <log file="/var/lib/nova/instances/35ba4fcf-baa1-45a5-bf4c-4f5cb96653a8/console.log" append="off"/>
Jan 20 15:10:50 compute-1 nova_compute[225855]:     </serial>
Jan 20 15:10:50 compute-1 nova_compute[225855]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 20 15:10:50 compute-1 nova_compute[225855]:     <video>
Jan 20 15:10:50 compute-1 nova_compute[225855]:       <model type="virtio"/>
Jan 20 15:10:50 compute-1 nova_compute[225855]:     </video>
Jan 20 15:10:50 compute-1 nova_compute[225855]:     <input type="tablet" bus="usb"/>
Jan 20 15:10:50 compute-1 nova_compute[225855]:     <rng model="virtio">
Jan 20 15:10:50 compute-1 nova_compute[225855]:       <backend model="random">/dev/urandom</backend>
Jan 20 15:10:50 compute-1 nova_compute[225855]:     </rng>
Jan 20 15:10:50 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root"/>
Jan 20 15:10:50 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:10:50 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:10:50 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:10:50 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:10:50 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:10:50 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:10:50 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:10:50 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:10:50 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:10:50 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:10:50 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:10:50 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:10:50 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:10:50 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:10:50 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:10:50 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:10:50 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:10:50 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:10:50 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:10:50 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:10:50 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:10:50 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:10:50 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:10:50 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:10:50 compute-1 nova_compute[225855]:     <controller type="usb" index="0"/>
Jan 20 15:10:50 compute-1 nova_compute[225855]:     <memballoon model="virtio">
Jan 20 15:10:50 compute-1 nova_compute[225855]:       <stats period="10"/>
Jan 20 15:10:50 compute-1 nova_compute[225855]:     </memballoon>
Jan 20 15:10:50 compute-1 nova_compute[225855]:   </devices>
Jan 20 15:10:50 compute-1 nova_compute[225855]: </domain>
Jan 20 15:10:50 compute-1 nova_compute[225855]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 20 15:10:50 compute-1 nova_compute[225855]: 2026-01-20 15:10:50.687 225859 DEBUG nova.compute.manager [None req-bd00b6c0-2377-4e1e-be36-f79351d98531 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] [instance: 35ba4fcf-baa1-45a5-bf4c-4f5cb96653a8] Preparing to wait for external event network-vif-plugged-1c883167-abba-4a7e-af5c-33a54aba9ce0 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 20 15:10:50 compute-1 nova_compute[225855]: 2026-01-20 15:10:50.687 225859 DEBUG oslo_concurrency.lockutils [None req-bd00b6c0-2377-4e1e-be36-f79351d98531 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Acquiring lock "35ba4fcf-baa1-45a5-bf4c-4f5cb96653a8-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:10:50 compute-1 nova_compute[225855]: 2026-01-20 15:10:50.688 225859 DEBUG oslo_concurrency.lockutils [None req-bd00b6c0-2377-4e1e-be36-f79351d98531 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Lock "35ba4fcf-baa1-45a5-bf4c-4f5cb96653a8-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:10:50 compute-1 nova_compute[225855]: 2026-01-20 15:10:50.688 225859 DEBUG oslo_concurrency.lockutils [None req-bd00b6c0-2377-4e1e-be36-f79351d98531 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Lock "35ba4fcf-baa1-45a5-bf4c-4f5cb96653a8-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:10:50 compute-1 nova_compute[225855]: 2026-01-20 15:10:50.689 225859 DEBUG nova.virt.libvirt.vif [None req-bd00b6c0-2377-4e1e-be36-f79351d98531 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T15:10:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-589796538',display_name='tempest-TestVolumeBootPattern-server-589796538',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-589796538',id=174,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a49638950e1543fa8e0d251af5479623',ramdisk_id='',reservation_id='r-sgiqpjjh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-194644003',owner_user_name='tempest-TestVolumeBootPattern-194644003-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_
certs=None,updated_at=2026-01-20T15:10:44Z,user_data=None,user_id='bf422e55e158420cbdae75f07a3bb97a',uuid=35ba4fcf-baa1-45a5-bf4c-4f5cb96653a8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1c883167-abba-4a7e-af5c-33a54aba9ce0", "address": "fa:16:3e:51:9b:64", "network": {"id": "b677f1a9-dbaa-4373-8466-bd9ccf067b91", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-408170906-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a49638950e1543fa8e0d251af5479623", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c883167-ab", "ovs_interfaceid": "1c883167-abba-4a7e-af5c-33a54aba9ce0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 20 15:10:50 compute-1 nova_compute[225855]: 2026-01-20 15:10:50.689 225859 DEBUG nova.network.os_vif_util [None req-bd00b6c0-2377-4e1e-be36-f79351d98531 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Converting VIF {"id": "1c883167-abba-4a7e-af5c-33a54aba9ce0", "address": "fa:16:3e:51:9b:64", "network": {"id": "b677f1a9-dbaa-4373-8466-bd9ccf067b91", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-408170906-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a49638950e1543fa8e0d251af5479623", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c883167-ab", "ovs_interfaceid": "1c883167-abba-4a7e-af5c-33a54aba9ce0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 15:10:50 compute-1 nova_compute[225855]: 2026-01-20 15:10:50.689 225859 DEBUG nova.network.os_vif_util [None req-bd00b6c0-2377-4e1e-be36-f79351d98531 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:51:9b:64,bridge_name='br-int',has_traffic_filtering=True,id=1c883167-abba-4a7e-af5c-33a54aba9ce0,network=Network(b677f1a9-dbaa-4373-8466-bd9ccf067b91),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1c883167-ab') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 15:10:50 compute-1 nova_compute[225855]: 2026-01-20 15:10:50.690 225859 DEBUG os_vif [None req-bd00b6c0-2377-4e1e-be36-f79351d98531 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:51:9b:64,bridge_name='br-int',has_traffic_filtering=True,id=1c883167-abba-4a7e-af5c-33a54aba9ce0,network=Network(b677f1a9-dbaa-4373-8466-bd9ccf067b91),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1c883167-ab') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 20 15:10:50 compute-1 nova_compute[225855]: 2026-01-20 15:10:50.690 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:10:50 compute-1 nova_compute[225855]: 2026-01-20 15:10:50.691 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:10:50 compute-1 nova_compute[225855]: 2026-01-20 15:10:50.691 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 15:10:50 compute-1 nova_compute[225855]: 2026-01-20 15:10:50.693 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:10:50 compute-1 nova_compute[225855]: 2026-01-20 15:10:50.693 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1c883167-ab, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:10:50 compute-1 nova_compute[225855]: 2026-01-20 15:10:50.694 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1c883167-ab, col_values=(('external_ids', {'iface-id': '1c883167-abba-4a7e-af5c-33a54aba9ce0', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:51:9b:64', 'vm-uuid': '35ba4fcf-baa1-45a5-bf4c-4f5cb96653a8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:10:50 compute-1 nova_compute[225855]: 2026-01-20 15:10:50.738 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:10:50 compute-1 NetworkManager[49104]: <info>  [1768921850.7394] manager: (tap1c883167-ab): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/321)
Jan 20 15:10:50 compute-1 nova_compute[225855]: 2026-01-20 15:10:50.742 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 15:10:50 compute-1 nova_compute[225855]: 2026-01-20 15:10:50.744 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:10:50 compute-1 nova_compute[225855]: 2026-01-20 15:10:50.745 225859 INFO os_vif [None req-bd00b6c0-2377-4e1e-be36-f79351d98531 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:51:9b:64,bridge_name='br-int',has_traffic_filtering=True,id=1c883167-abba-4a7e-af5c-33a54aba9ce0,network=Network(b677f1a9-dbaa-4373-8466-bd9ccf067b91),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1c883167-ab')
Jan 20 15:10:50 compute-1 nova_compute[225855]: 2026-01-20 15:10:50.801 225859 DEBUG nova.virt.libvirt.driver [None req-bd00b6c0-2377-4e1e-be36-f79351d98531 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 15:10:50 compute-1 nova_compute[225855]: 2026-01-20 15:10:50.802 225859 DEBUG nova.virt.libvirt.driver [None req-bd00b6c0-2377-4e1e-be36-f79351d98531 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 15:10:50 compute-1 nova_compute[225855]: 2026-01-20 15:10:50.802 225859 DEBUG nova.virt.libvirt.driver [None req-bd00b6c0-2377-4e1e-be36-f79351d98531 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] No VIF found with MAC fa:16:3e:51:9b:64, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 20 15:10:50 compute-1 nova_compute[225855]: 2026-01-20 15:10:50.802 225859 INFO nova.virt.libvirt.driver [None req-bd00b6c0-2377-4e1e-be36-f79351d98531 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] [instance: 35ba4fcf-baa1-45a5-bf4c-4f5cb96653a8] Using config drive
Jan 20 15:10:50 compute-1 nova_compute[225855]: 2026-01-20 15:10:50.831 225859 DEBUG nova.storage.rbd_utils [None req-bd00b6c0-2377-4e1e-be36-f79351d98531 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] rbd image 35ba4fcf-baa1-45a5-bf4c-4f5cb96653a8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:10:50 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:10:50 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:10:50 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:10:50.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:10:51 compute-1 nova_compute[225855]: 2026-01-20 15:10:51.394 225859 INFO nova.virt.libvirt.driver [None req-bd00b6c0-2377-4e1e-be36-f79351d98531 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] [instance: 35ba4fcf-baa1-45a5-bf4c-4f5cb96653a8] Creating config drive at /var/lib/nova/instances/35ba4fcf-baa1-45a5-bf4c-4f5cb96653a8/disk.config
Jan 20 15:10:51 compute-1 nova_compute[225855]: 2026-01-20 15:10:51.399 225859 DEBUG oslo_concurrency.processutils [None req-bd00b6c0-2377-4e1e-be36-f79351d98531 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/35ba4fcf-baa1-45a5-bf4c-4f5cb96653a8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmplo4v1tx5 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:10:51 compute-1 nova_compute[225855]: 2026-01-20 15:10:51.435 225859 DEBUG nova.network.neutron [req-b31ee416-8317-4348-93ef-231b53011c14 req-74b23f27-0e5f-4934-81c3-0cc7fa2c9700 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 35ba4fcf-baa1-45a5-bf4c-4f5cb96653a8] Updated VIF entry in instance network info cache for port 1c883167-abba-4a7e-af5c-33a54aba9ce0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 15:10:51 compute-1 nova_compute[225855]: 2026-01-20 15:10:51.436 225859 DEBUG nova.network.neutron [req-b31ee416-8317-4348-93ef-231b53011c14 req-74b23f27-0e5f-4934-81c3-0cc7fa2c9700 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 35ba4fcf-baa1-45a5-bf4c-4f5cb96653a8] Updating instance_info_cache with network_info: [{"id": "1c883167-abba-4a7e-af5c-33a54aba9ce0", "address": "fa:16:3e:51:9b:64", "network": {"id": "b677f1a9-dbaa-4373-8466-bd9ccf067b91", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-408170906-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a49638950e1543fa8e0d251af5479623", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c883167-ab", "ovs_interfaceid": "1c883167-abba-4a7e-af5c-33a54aba9ce0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:10:51 compute-1 nova_compute[225855]: 2026-01-20 15:10:51.510 225859 DEBUG oslo_concurrency.lockutils [req-b31ee416-8317-4348-93ef-231b53011c14 req-74b23f27-0e5f-4934-81c3-0cc7fa2c9700 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-35ba4fcf-baa1-45a5-bf4c-4f5cb96653a8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:10:51 compute-1 nova_compute[225855]: 2026-01-20 15:10:51.532 225859 DEBUG oslo_concurrency.processutils [None req-bd00b6c0-2377-4e1e-be36-f79351d98531 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/35ba4fcf-baa1-45a5-bf4c-4f5cb96653a8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmplo4v1tx5" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:10:51 compute-1 ceph-mon[81775]: pgmap v2628: 321 pgs: 321 active+clean; 266 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 636 KiB/s rd, 1.8 MiB/s wr, 191 op/s
Jan 20 15:10:51 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/2095857021' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:10:51 compute-1 nova_compute[225855]: 2026-01-20 15:10:51.575 225859 DEBUG nova.storage.rbd_utils [None req-bd00b6c0-2377-4e1e-be36-f79351d98531 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] rbd image 35ba4fcf-baa1-45a5-bf4c-4f5cb96653a8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:10:51 compute-1 nova_compute[225855]: 2026-01-20 15:10:51.579 225859 DEBUG oslo_concurrency.processutils [None req-bd00b6c0-2377-4e1e-be36-f79351d98531 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/35ba4fcf-baa1-45a5-bf4c-4f5cb96653a8/disk.config 35ba4fcf-baa1-45a5-bf4c-4f5cb96653a8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:10:51 compute-1 nova_compute[225855]: 2026-01-20 15:10:51.719 225859 DEBUG oslo_concurrency.processutils [None req-bd00b6c0-2377-4e1e-be36-f79351d98531 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/35ba4fcf-baa1-45a5-bf4c-4f5cb96653a8/disk.config 35ba4fcf-baa1-45a5-bf4c-4f5cb96653a8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.139s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:10:51 compute-1 nova_compute[225855]: 2026-01-20 15:10:51.720 225859 INFO nova.virt.libvirt.driver [None req-bd00b6c0-2377-4e1e-be36-f79351d98531 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] [instance: 35ba4fcf-baa1-45a5-bf4c-4f5cb96653a8] Deleting local config drive /var/lib/nova/instances/35ba4fcf-baa1-45a5-bf4c-4f5cb96653a8/disk.config because it was imported into RBD.
Jan 20 15:10:51 compute-1 kernel: tap1c883167-ab: entered promiscuous mode
Jan 20 15:10:51 compute-1 NetworkManager[49104]: <info>  [1768921851.7763] manager: (tap1c883167-ab): new Tun device (/org/freedesktop/NetworkManager/Devices/322)
Jan 20 15:10:51 compute-1 ovn_controller[130490]: 2026-01-20T15:10:51Z|00780|binding|INFO|Claiming lport 1c883167-abba-4a7e-af5c-33a54aba9ce0 for this chassis.
Jan 20 15:10:51 compute-1 nova_compute[225855]: 2026-01-20 15:10:51.823 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:10:51 compute-1 ovn_controller[130490]: 2026-01-20T15:10:51Z|00781|binding|INFO|1c883167-abba-4a7e-af5c-33a54aba9ce0: Claiming fa:16:3e:51:9b:64 10.100.0.10
Jan 20 15:10:51 compute-1 systemd-udevd[297732]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 15:10:51 compute-1 systemd-machined[194361]: New machine qemu-90-instance-000000ae.
Jan 20 15:10:51 compute-1 NetworkManager[49104]: <info>  [1768921851.8703] device (tap1c883167-ab): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 20 15:10:51 compute-1 NetworkManager[49104]: <info>  [1768921851.8713] device (tap1c883167-ab): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 20 15:10:51 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:10:51.901 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:51:9b:64 10.100.0.10'], port_security=['fa:16:3e:51:9b:64 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '35ba4fcf-baa1-45a5-bf4c-4f5cb96653a8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b677f1a9-dbaa-4373-8466-bd9ccf067b91', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a49638950e1543fa8e0d251af5479623', 'neutron:revision_number': '2', 'neutron:security_group_ids': '64f52c7d-befd-4095-889b-e7a5da6821d7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=76ec1139-009f-49fe-bfde-07c0ef9e8b12, chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], logical_port=1c883167-abba-4a7e-af5c-33a54aba9ce0) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:10:51 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:10:51.902 140354 INFO neutron.agent.ovn.metadata.agent [-] Port 1c883167-abba-4a7e-af5c-33a54aba9ce0 in datapath b677f1a9-dbaa-4373-8466-bd9ccf067b91 bound to our chassis
Jan 20 15:10:51 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:10:51.904 140354 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b677f1a9-dbaa-4373-8466-bd9ccf067b91
Jan 20 15:10:51 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:10:51.918 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[25c0feb9-8909-46f3-b08a-900eb33bf4f5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:10:51 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:10:51.919 140354 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapb677f1a9-d1 in ovnmeta-b677f1a9-dbaa-4373-8466-bd9ccf067b91 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 20 15:10:51 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:10:51.922 229707 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapb677f1a9-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 20 15:10:51 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:10:51.922 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[f36d4e22-bea2-4869-995e-4bc4ee357e9b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:10:51 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:10:51.923 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[99af607a-1450-494e-82a3-ff1c1dd90f8d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:10:51 compute-1 systemd[1]: Started Virtual Machine qemu-90-instance-000000ae.
Jan 20 15:10:51 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:10:51.944 140466 DEBUG oslo.privsep.daemon [-] privsep: reply[fbddcc52-f651-4847-ba54-d739b9ccfce2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:10:51 compute-1 nova_compute[225855]: 2026-01-20 15:10:51.944 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:10:51 compute-1 ovn_controller[130490]: 2026-01-20T15:10:51Z|00782|binding|INFO|Setting lport 1c883167-abba-4a7e-af5c-33a54aba9ce0 ovn-installed in OVS
Jan 20 15:10:51 compute-1 ovn_controller[130490]: 2026-01-20T15:10:51Z|00783|binding|INFO|Setting lport 1c883167-abba-4a7e-af5c-33a54aba9ce0 up in Southbound
Jan 20 15:10:51 compute-1 nova_compute[225855]: 2026-01-20 15:10:51.951 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:10:51 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:10:51.957 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[6b350d30-6569-4996-9bbf-d6bc15817f47]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:10:51 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:10:51.990 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[d384bd71-827d-4fc9-a55e-2b1f1d8477a4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:10:51 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:10:51.995 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[f1dee5c0-cf4e-4dc4-b975-8a317508bb9d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:10:51 compute-1 NetworkManager[49104]: <info>  [1768921851.9970] manager: (tapb677f1a9-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/323)
Jan 20 15:10:51 compute-1 systemd-udevd[297734]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 15:10:52 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:10:52.030 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[dd64a5f9-b2bb-41c3-a197-3d8360e8e4d4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:10:52 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:10:52.033 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[49eb7d28-289c-4c3c-b101-3537b828e1a2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:10:52 compute-1 NetworkManager[49104]: <info>  [1768921852.0614] device (tapb677f1a9-d0): carrier: link connected
Jan 20 15:10:52 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:10:52.068 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[31035fae-715a-4802-9406-7c01712d2da8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:10:52 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:10:52.086 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[05808dca-a29b-4891-be57-286813d928db]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb677f1a9-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a4:c8:34'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 221], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 689098, 'reachable_time': 17931, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 297765, 'error': None, 'target': 'ovnmeta-b677f1a9-dbaa-4373-8466-bd9ccf067b91', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:10:52 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:10:52.103 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[a88bcc7d-4eb5-4f85-af74-b05a3bc64f08]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fea4:c834'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 689098, 'tstamp': 689098}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 297766, 'error': None, 'target': 'ovnmeta-b677f1a9-dbaa-4373-8466-bd9ccf067b91', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:10:52 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:10:52.120 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[b1af4f82-da03-4613-8ff3-530caaf1322a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb677f1a9-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a4:c8:34'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 221], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 689098, 'reachable_time': 17931, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 297767, 'error': None, 'target': 'ovnmeta-b677f1a9-dbaa-4373-8466-bd9ccf067b91', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:10:52 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:10:52.152 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[86852fcf-a0bc-4652-8d57-a85a716887d3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:10:52 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:10:52.207 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[67b8dfa6-896a-4dfe-adf4-e07d57c1d44e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:10:52 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:10:52.208 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb677f1a9-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:10:52 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:10:52.208 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 15:10:52 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:10:52.208 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb677f1a9-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:10:52 compute-1 nova_compute[225855]: 2026-01-20 15:10:52.210 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:10:52 compute-1 NetworkManager[49104]: <info>  [1768921852.2110] manager: (tapb677f1a9-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/324)
Jan 20 15:10:52 compute-1 kernel: tapb677f1a9-d0: entered promiscuous mode
Jan 20 15:10:52 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:10:52.212 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb677f1a9-d0, col_values=(('external_ids', {'iface-id': '1aa285ce-a9ae-4d1e-b4b9-c72f4e0b8d65'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:10:52 compute-1 nova_compute[225855]: 2026-01-20 15:10:52.213 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:10:52 compute-1 ovn_controller[130490]: 2026-01-20T15:10:52Z|00784|binding|INFO|Releasing lport 1aa285ce-a9ae-4d1e-b4b9-c72f4e0b8d65 from this chassis (sb_readonly=0)
Jan 20 15:10:52 compute-1 nova_compute[225855]: 2026-01-20 15:10:52.214 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:10:52 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:10:52.215 140354 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/b677f1a9-dbaa-4373-8466-bd9ccf067b91.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/b677f1a9-dbaa-4373-8466-bd9ccf067b91.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 20 15:10:52 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:10:52.215 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[56ea8e9b-8845-42da-b0cd-d8af9c6e2834]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:10:52 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:10:52.216 140354 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 20 15:10:52 compute-1 ovn_metadata_agent[140349]: global
Jan 20 15:10:52 compute-1 ovn_metadata_agent[140349]:     log         /dev/log local0 debug
Jan 20 15:10:52 compute-1 ovn_metadata_agent[140349]:     log-tag     haproxy-metadata-proxy-b677f1a9-dbaa-4373-8466-bd9ccf067b91
Jan 20 15:10:52 compute-1 ovn_metadata_agent[140349]:     user        root
Jan 20 15:10:52 compute-1 ovn_metadata_agent[140349]:     group       root
Jan 20 15:10:52 compute-1 ovn_metadata_agent[140349]:     maxconn     1024
Jan 20 15:10:52 compute-1 ovn_metadata_agent[140349]:     pidfile     /var/lib/neutron/external/pids/b677f1a9-dbaa-4373-8466-bd9ccf067b91.pid.haproxy
Jan 20 15:10:52 compute-1 ovn_metadata_agent[140349]:     daemon
Jan 20 15:10:52 compute-1 ovn_metadata_agent[140349]: 
Jan 20 15:10:52 compute-1 ovn_metadata_agent[140349]: defaults
Jan 20 15:10:52 compute-1 ovn_metadata_agent[140349]:     log global
Jan 20 15:10:52 compute-1 ovn_metadata_agent[140349]:     mode http
Jan 20 15:10:52 compute-1 ovn_metadata_agent[140349]:     option httplog
Jan 20 15:10:52 compute-1 ovn_metadata_agent[140349]:     option dontlognull
Jan 20 15:10:52 compute-1 ovn_metadata_agent[140349]:     option http-server-close
Jan 20 15:10:52 compute-1 ovn_metadata_agent[140349]:     option forwardfor
Jan 20 15:10:52 compute-1 ovn_metadata_agent[140349]:     retries                 3
Jan 20 15:10:52 compute-1 ovn_metadata_agent[140349]:     timeout http-request    30s
Jan 20 15:10:52 compute-1 ovn_metadata_agent[140349]:     timeout connect         30s
Jan 20 15:10:52 compute-1 ovn_metadata_agent[140349]:     timeout client          32s
Jan 20 15:10:52 compute-1 ovn_metadata_agent[140349]:     timeout server          32s
Jan 20 15:10:52 compute-1 ovn_metadata_agent[140349]:     timeout http-keep-alive 30s
Jan 20 15:10:52 compute-1 ovn_metadata_agent[140349]: 
Jan 20 15:10:52 compute-1 ovn_metadata_agent[140349]: 
Jan 20 15:10:52 compute-1 ovn_metadata_agent[140349]: listen listener
Jan 20 15:10:52 compute-1 ovn_metadata_agent[140349]:     bind 169.254.169.254:80
Jan 20 15:10:52 compute-1 ovn_metadata_agent[140349]:     server metadata /var/lib/neutron/metadata_proxy
Jan 20 15:10:52 compute-1 ovn_metadata_agent[140349]:     http-request add-header X-OVN-Network-ID b677f1a9-dbaa-4373-8466-bd9ccf067b91
Jan 20 15:10:52 compute-1 ovn_metadata_agent[140349]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 20 15:10:52 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:10:52.217 140354 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-b677f1a9-dbaa-4373-8466-bd9ccf067b91', 'env', 'PROCESS_TAG=haproxy-b677f1a9-dbaa-4373-8466-bd9ccf067b91', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/b677f1a9-dbaa-4373-8466-bd9ccf067b91.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 20 15:10:52 compute-1 nova_compute[225855]: 2026-01-20 15:10:52.227 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:10:52 compute-1 nova_compute[225855]: 2026-01-20 15:10:52.349 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768921852.3489478, 35ba4fcf-baa1-45a5-bf4c-4f5cb96653a8 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:10:52 compute-1 nova_compute[225855]: 2026-01-20 15:10:52.349 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 35ba4fcf-baa1-45a5-bf4c-4f5cb96653a8] VM Started (Lifecycle Event)
Jan 20 15:10:52 compute-1 nova_compute[225855]: 2026-01-20 15:10:52.396 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 35ba4fcf-baa1-45a5-bf4c-4f5cb96653a8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:10:52 compute-1 nova_compute[225855]: 2026-01-20 15:10:52.400 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768921852.3491285, 35ba4fcf-baa1-45a5-bf4c-4f5cb96653a8 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:10:52 compute-1 nova_compute[225855]: 2026-01-20 15:10:52.400 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 35ba4fcf-baa1-45a5-bf4c-4f5cb96653a8] VM Paused (Lifecycle Event)
Jan 20 15:10:52 compute-1 nova_compute[225855]: 2026-01-20 15:10:52.430 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 35ba4fcf-baa1-45a5-bf4c-4f5cb96653a8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:10:52 compute-1 nova_compute[225855]: 2026-01-20 15:10:52.432 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 35ba4fcf-baa1-45a5-bf4c-4f5cb96653a8] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 15:10:52 compute-1 nova_compute[225855]: 2026-01-20 15:10:52.571 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 35ba4fcf-baa1-45a5-bf4c-4f5cb96653a8] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 15:10:52 compute-1 nova_compute[225855]: 2026-01-20 15:10:52.576 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:10:52 compute-1 podman[297841]: 2026-01-20 15:10:52.587078655 +0000 UTC m=+0.047815537 container create ce93107e97f2afc68739b0e4240ca20f65ba11abe81e5b38fd289de5c6365fb2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b677f1a9-dbaa-4373-8466-bd9ccf067b91, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Jan 20 15:10:52 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:10:52 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:10:52 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:10:52.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:10:52 compute-1 systemd[1]: Started libpod-conmon-ce93107e97f2afc68739b0e4240ca20f65ba11abe81e5b38fd289de5c6365fb2.scope.
Jan 20 15:10:52 compute-1 podman[297841]: 2026-01-20 15:10:52.561759937 +0000 UTC m=+0.022496849 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 20 15:10:52 compute-1 systemd[1]: Started libcrun container.
Jan 20 15:10:52 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19daef63f9380ca40ee278346b76985622bb726e2f8a3577bdcd88908f79189d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 20 15:10:52 compute-1 podman[297841]: 2026-01-20 15:10:52.6781677 +0000 UTC m=+0.138904582 container init ce93107e97f2afc68739b0e4240ca20f65ba11abe81e5b38fd289de5c6365fb2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b677f1a9-dbaa-4373-8466-bd9ccf067b91, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 20 15:10:52 compute-1 podman[297841]: 2026-01-20 15:10:52.683001007 +0000 UTC m=+0.143737889 container start ce93107e97f2afc68739b0e4240ca20f65ba11abe81e5b38fd289de5c6365fb2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b677f1a9-dbaa-4373-8466-bd9ccf067b91, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 20 15:10:52 compute-1 neutron-haproxy-ovnmeta-b677f1a9-dbaa-4373-8466-bd9ccf067b91[297856]: [NOTICE]   (297860) : New worker (297862) forked
Jan 20 15:10:52 compute-1 neutron-haproxy-ovnmeta-b677f1a9-dbaa-4373-8466-bd9ccf067b91[297856]: [NOTICE]   (297860) : Loading success.
Jan 20 15:10:52 compute-1 nova_compute[225855]: 2026-01-20 15:10:52.943 225859 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1768921837.9421668, a25af5a3-096f-4363-842e-d960c22eb16b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:10:52 compute-1 nova_compute[225855]: 2026-01-20 15:10:52.943 225859 INFO nova.compute.manager [-] [instance: a25af5a3-096f-4363-842e-d960c22eb16b] VM Stopped (Lifecycle Event)
Jan 20 15:10:52 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:10:52 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:10:52 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:10:52.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:10:53 compute-1 nova_compute[225855]: 2026-01-20 15:10:53.009 225859 DEBUG nova.compute.manager [None req-40c3535a-6194-443f-922c-56d9d908b990 - - - - - -] [instance: a25af5a3-096f-4363-842e-d960c22eb16b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:10:53 compute-1 nova_compute[225855]: 2026-01-20 15:10:53.098 225859 DEBUG nova.compute.manager [req-d80ac688-df37-4bf3-995e-3bafd623c984 req-f4758ff9-d384-4d37-bbf1-c3f26c972322 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 35ba4fcf-baa1-45a5-bf4c-4f5cb96653a8] Received event network-vif-plugged-1c883167-abba-4a7e-af5c-33a54aba9ce0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:10:53 compute-1 nova_compute[225855]: 2026-01-20 15:10:53.099 225859 DEBUG oslo_concurrency.lockutils [req-d80ac688-df37-4bf3-995e-3bafd623c984 req-f4758ff9-d384-4d37-bbf1-c3f26c972322 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "35ba4fcf-baa1-45a5-bf4c-4f5cb96653a8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:10:53 compute-1 nova_compute[225855]: 2026-01-20 15:10:53.099 225859 DEBUG oslo_concurrency.lockutils [req-d80ac688-df37-4bf3-995e-3bafd623c984 req-f4758ff9-d384-4d37-bbf1-c3f26c972322 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "35ba4fcf-baa1-45a5-bf4c-4f5cb96653a8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:10:53 compute-1 nova_compute[225855]: 2026-01-20 15:10:53.100 225859 DEBUG oslo_concurrency.lockutils [req-d80ac688-df37-4bf3-995e-3bafd623c984 req-f4758ff9-d384-4d37-bbf1-c3f26c972322 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "35ba4fcf-baa1-45a5-bf4c-4f5cb96653a8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:10:53 compute-1 nova_compute[225855]: 2026-01-20 15:10:53.100 225859 DEBUG nova.compute.manager [req-d80ac688-df37-4bf3-995e-3bafd623c984 req-f4758ff9-d384-4d37-bbf1-c3f26c972322 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 35ba4fcf-baa1-45a5-bf4c-4f5cb96653a8] Processing event network-vif-plugged-1c883167-abba-4a7e-af5c-33a54aba9ce0 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 20 15:10:53 compute-1 nova_compute[225855]: 2026-01-20 15:10:53.100 225859 DEBUG nova.compute.manager [None req-bd00b6c0-2377-4e1e-be36-f79351d98531 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] [instance: 35ba4fcf-baa1-45a5-bf4c-4f5cb96653a8] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 20 15:10:53 compute-1 nova_compute[225855]: 2026-01-20 15:10:53.104 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768921853.1046283, 35ba4fcf-baa1-45a5-bf4c-4f5cb96653a8 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:10:53 compute-1 nova_compute[225855]: 2026-01-20 15:10:53.105 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 35ba4fcf-baa1-45a5-bf4c-4f5cb96653a8] VM Resumed (Lifecycle Event)
Jan 20 15:10:53 compute-1 nova_compute[225855]: 2026-01-20 15:10:53.106 225859 DEBUG nova.virt.libvirt.driver [None req-bd00b6c0-2377-4e1e-be36-f79351d98531 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] [instance: 35ba4fcf-baa1-45a5-bf4c-4f5cb96653a8] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 20 15:10:53 compute-1 nova_compute[225855]: 2026-01-20 15:10:53.110 225859 INFO nova.virt.libvirt.driver [-] [instance: 35ba4fcf-baa1-45a5-bf4c-4f5cb96653a8] Instance spawned successfully.
Jan 20 15:10:53 compute-1 nova_compute[225855]: 2026-01-20 15:10:53.110 225859 DEBUG nova.virt.libvirt.driver [None req-bd00b6c0-2377-4e1e-be36-f79351d98531 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] [instance: 35ba4fcf-baa1-45a5-bf4c-4f5cb96653a8] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 20 15:10:53 compute-1 nova_compute[225855]: 2026-01-20 15:10:53.122 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 35ba4fcf-baa1-45a5-bf4c-4f5cb96653a8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:10:53 compute-1 nova_compute[225855]: 2026-01-20 15:10:53.130 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 35ba4fcf-baa1-45a5-bf4c-4f5cb96653a8] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 15:10:53 compute-1 nova_compute[225855]: 2026-01-20 15:10:53.135 225859 DEBUG nova.virt.libvirt.driver [None req-bd00b6c0-2377-4e1e-be36-f79351d98531 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] [instance: 35ba4fcf-baa1-45a5-bf4c-4f5cb96653a8] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:10:53 compute-1 nova_compute[225855]: 2026-01-20 15:10:53.135 225859 DEBUG nova.virt.libvirt.driver [None req-bd00b6c0-2377-4e1e-be36-f79351d98531 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] [instance: 35ba4fcf-baa1-45a5-bf4c-4f5cb96653a8] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:10:53 compute-1 nova_compute[225855]: 2026-01-20 15:10:53.136 225859 DEBUG nova.virt.libvirt.driver [None req-bd00b6c0-2377-4e1e-be36-f79351d98531 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] [instance: 35ba4fcf-baa1-45a5-bf4c-4f5cb96653a8] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:10:53 compute-1 nova_compute[225855]: 2026-01-20 15:10:53.136 225859 DEBUG nova.virt.libvirt.driver [None req-bd00b6c0-2377-4e1e-be36-f79351d98531 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] [instance: 35ba4fcf-baa1-45a5-bf4c-4f5cb96653a8] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:10:53 compute-1 nova_compute[225855]: 2026-01-20 15:10:53.137 225859 DEBUG nova.virt.libvirt.driver [None req-bd00b6c0-2377-4e1e-be36-f79351d98531 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] [instance: 35ba4fcf-baa1-45a5-bf4c-4f5cb96653a8] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:10:53 compute-1 nova_compute[225855]: 2026-01-20 15:10:53.137 225859 DEBUG nova.virt.libvirt.driver [None req-bd00b6c0-2377-4e1e-be36-f79351d98531 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] [instance: 35ba4fcf-baa1-45a5-bf4c-4f5cb96653a8] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:10:53 compute-1 nova_compute[225855]: 2026-01-20 15:10:53.162 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 35ba4fcf-baa1-45a5-bf4c-4f5cb96653a8] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 15:10:53 compute-1 nova_compute[225855]: 2026-01-20 15:10:53.194 225859 INFO nova.compute.manager [None req-bd00b6c0-2377-4e1e-be36-f79351d98531 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] [instance: 35ba4fcf-baa1-45a5-bf4c-4f5cb96653a8] Took 3.08 seconds to spawn the instance on the hypervisor.
Jan 20 15:10:53 compute-1 nova_compute[225855]: 2026-01-20 15:10:53.194 225859 DEBUG nova.compute.manager [None req-bd00b6c0-2377-4e1e-be36-f79351d98531 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] [instance: 35ba4fcf-baa1-45a5-bf4c-4f5cb96653a8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:10:53 compute-1 nova_compute[225855]: 2026-01-20 15:10:53.260 225859 INFO nova.compute.manager [None req-bd00b6c0-2377-4e1e-be36-f79351d98531 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] [instance: 35ba4fcf-baa1-45a5-bf4c-4f5cb96653a8] Took 9.57 seconds to build instance.
Jan 20 15:10:53 compute-1 nova_compute[225855]: 2026-01-20 15:10:53.277 225859 DEBUG oslo_concurrency.lockutils [None req-bd00b6c0-2377-4e1e-be36-f79351d98531 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Lock "35ba4fcf-baa1-45a5-bf4c-4f5cb96653a8" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.651s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:10:53 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e386 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:10:53 compute-1 nova_compute[225855]: 2026-01-20 15:10:53.341 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:10:53 compute-1 ceph-mon[81775]: pgmap v2629: 321 pgs: 321 active+clean; 268 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 586 KiB/s rd, 855 KiB/s wr, 132 op/s
Jan 20 15:10:54 compute-1 nova_compute[225855]: 2026-01-20 15:10:54.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:10:54 compute-1 nova_compute[225855]: 2026-01-20 15:10:54.340 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 15:10:54 compute-1 nova_compute[225855]: 2026-01-20 15:10:54.340 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 20 15:10:54 compute-1 nova_compute[225855]: 2026-01-20 15:10:54.575 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "refresh_cache-35ba4fcf-baa1-45a5-bf4c-4f5cb96653a8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:10:54 compute-1 nova_compute[225855]: 2026-01-20 15:10:54.575 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquired lock "refresh_cache-35ba4fcf-baa1-45a5-bf4c-4f5cb96653a8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:10:54 compute-1 nova_compute[225855]: 2026-01-20 15:10:54.575 225859 DEBUG nova.network.neutron [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] [instance: 35ba4fcf-baa1-45a5-bf4c-4f5cb96653a8] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 20 15:10:54 compute-1 nova_compute[225855]: 2026-01-20 15:10:54.575 225859 DEBUG nova.objects.instance [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 35ba4fcf-baa1-45a5-bf4c-4f5cb96653a8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:10:54 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:10:54 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:10:54 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:10:54.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:10:54 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:10:54 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:10:54 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:10:54.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:10:55 compute-1 nova_compute[225855]: 2026-01-20 15:10:55.280 225859 DEBUG nova.compute.manager [req-c7bc8d05-784c-403b-b96e-2f34b2e0aeed req-8dcf7686-c2b8-4529-8a43-f470d751b122 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 35ba4fcf-baa1-45a5-bf4c-4f5cb96653a8] Received event network-vif-plugged-1c883167-abba-4a7e-af5c-33a54aba9ce0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:10:55 compute-1 nova_compute[225855]: 2026-01-20 15:10:55.281 225859 DEBUG oslo_concurrency.lockutils [req-c7bc8d05-784c-403b-b96e-2f34b2e0aeed req-8dcf7686-c2b8-4529-8a43-f470d751b122 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "35ba4fcf-baa1-45a5-bf4c-4f5cb96653a8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:10:55 compute-1 nova_compute[225855]: 2026-01-20 15:10:55.282 225859 DEBUG oslo_concurrency.lockutils [req-c7bc8d05-784c-403b-b96e-2f34b2e0aeed req-8dcf7686-c2b8-4529-8a43-f470d751b122 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "35ba4fcf-baa1-45a5-bf4c-4f5cb96653a8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:10:55 compute-1 nova_compute[225855]: 2026-01-20 15:10:55.282 225859 DEBUG oslo_concurrency.lockutils [req-c7bc8d05-784c-403b-b96e-2f34b2e0aeed req-8dcf7686-c2b8-4529-8a43-f470d751b122 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "35ba4fcf-baa1-45a5-bf4c-4f5cb96653a8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:10:55 compute-1 nova_compute[225855]: 2026-01-20 15:10:55.283 225859 DEBUG nova.compute.manager [req-c7bc8d05-784c-403b-b96e-2f34b2e0aeed req-8dcf7686-c2b8-4529-8a43-f470d751b122 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 35ba4fcf-baa1-45a5-bf4c-4f5cb96653a8] No waiting events found dispatching network-vif-plugged-1c883167-abba-4a7e-af5c-33a54aba9ce0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:10:55 compute-1 nova_compute[225855]: 2026-01-20 15:10:55.283 225859 WARNING nova.compute.manager [req-c7bc8d05-784c-403b-b96e-2f34b2e0aeed req-8dcf7686-c2b8-4529-8a43-f470d751b122 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 35ba4fcf-baa1-45a5-bf4c-4f5cb96653a8] Received unexpected event network-vif-plugged-1c883167-abba-4a7e-af5c-33a54aba9ce0 for instance with vm_state active and task_state None.
Jan 20 15:10:55 compute-1 ceph-mon[81775]: pgmap v2630: 321 pgs: 321 active+clean; 268 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 668 KiB/s rd, 36 KiB/s wr, 89 op/s
Jan 20 15:10:55 compute-1 nova_compute[225855]: 2026-01-20 15:10:55.741 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:10:55 compute-1 nova_compute[225855]: 2026-01-20 15:10:55.852 225859 DEBUG oslo_concurrency.lockutils [None req-b2f8c823-cde5-4dd1-bd8a-b06b9cd3b121 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Acquiring lock "35ba4fcf-baa1-45a5-bf4c-4f5cb96653a8" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:10:55 compute-1 nova_compute[225855]: 2026-01-20 15:10:55.853 225859 DEBUG oslo_concurrency.lockutils [None req-b2f8c823-cde5-4dd1-bd8a-b06b9cd3b121 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Lock "35ba4fcf-baa1-45a5-bf4c-4f5cb96653a8" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:10:55 compute-1 nova_compute[225855]: 2026-01-20 15:10:55.853 225859 DEBUG oslo_concurrency.lockutils [None req-b2f8c823-cde5-4dd1-bd8a-b06b9cd3b121 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Acquiring lock "35ba4fcf-baa1-45a5-bf4c-4f5cb96653a8-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:10:55 compute-1 nova_compute[225855]: 2026-01-20 15:10:55.854 225859 DEBUG oslo_concurrency.lockutils [None req-b2f8c823-cde5-4dd1-bd8a-b06b9cd3b121 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Lock "35ba4fcf-baa1-45a5-bf4c-4f5cb96653a8-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:10:55 compute-1 nova_compute[225855]: 2026-01-20 15:10:55.855 225859 DEBUG oslo_concurrency.lockutils [None req-b2f8c823-cde5-4dd1-bd8a-b06b9cd3b121 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Lock "35ba4fcf-baa1-45a5-bf4c-4f5cb96653a8-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:10:55 compute-1 nova_compute[225855]: 2026-01-20 15:10:55.857 225859 INFO nova.compute.manager [None req-b2f8c823-cde5-4dd1-bd8a-b06b9cd3b121 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] [instance: 35ba4fcf-baa1-45a5-bf4c-4f5cb96653a8] Terminating instance
Jan 20 15:10:55 compute-1 nova_compute[225855]: 2026-01-20 15:10:55.860 225859 DEBUG nova.compute.manager [None req-b2f8c823-cde5-4dd1-bd8a-b06b9cd3b121 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] [instance: 35ba4fcf-baa1-45a5-bf4c-4f5cb96653a8] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 20 15:10:55 compute-1 kernel: tap1c883167-ab (unregistering): left promiscuous mode
Jan 20 15:10:55 compute-1 NetworkManager[49104]: <info>  [1768921855.9079] device (tap1c883167-ab): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 20 15:10:55 compute-1 ovn_controller[130490]: 2026-01-20T15:10:55Z|00785|binding|INFO|Releasing lport 1c883167-abba-4a7e-af5c-33a54aba9ce0 from this chassis (sb_readonly=0)
Jan 20 15:10:55 compute-1 ovn_controller[130490]: 2026-01-20T15:10:55Z|00786|binding|INFO|Setting lport 1c883167-abba-4a7e-af5c-33a54aba9ce0 down in Southbound
Jan 20 15:10:55 compute-1 ovn_controller[130490]: 2026-01-20T15:10:55Z|00787|binding|INFO|Removing iface tap1c883167-ab ovn-installed in OVS
Jan 20 15:10:55 compute-1 nova_compute[225855]: 2026-01-20 15:10:55.917 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:10:55 compute-1 nova_compute[225855]: 2026-01-20 15:10:55.920 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:10:55 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:10:55.934 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:51:9b:64 10.100.0.10'], port_security=['fa:16:3e:51:9b:64 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '35ba4fcf-baa1-45a5-bf4c-4f5cb96653a8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b677f1a9-dbaa-4373-8466-bd9ccf067b91', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a49638950e1543fa8e0d251af5479623', 'neutron:revision_number': '4', 'neutron:security_group_ids': '64f52c7d-befd-4095-889b-e7a5da6821d7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=76ec1139-009f-49fe-bfde-07c0ef9e8b12, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], logical_port=1c883167-abba-4a7e-af5c-33a54aba9ce0) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:10:55 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:10:55.937 140354 INFO neutron.agent.ovn.metadata.agent [-] Port 1c883167-abba-4a7e-af5c-33a54aba9ce0 in datapath b677f1a9-dbaa-4373-8466-bd9ccf067b91 unbound from our chassis
Jan 20 15:10:55 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:10:55.940 140354 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b677f1a9-dbaa-4373-8466-bd9ccf067b91, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 20 15:10:55 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:10:55.941 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[cad7d707-aa60-4d5a-a21c-93f62be8ae1d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:10:55 compute-1 nova_compute[225855]: 2026-01-20 15:10:55.942 225859 DEBUG nova.network.neutron [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] [instance: 35ba4fcf-baa1-45a5-bf4c-4f5cb96653a8] Updating instance_info_cache with network_info: [{"id": "1c883167-abba-4a7e-af5c-33a54aba9ce0", "address": "fa:16:3e:51:9b:64", "network": {"id": "b677f1a9-dbaa-4373-8466-bd9ccf067b91", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-408170906-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a49638950e1543fa8e0d251af5479623", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c883167-ab", "ovs_interfaceid": "1c883167-abba-4a7e-af5c-33a54aba9ce0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:10:55 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:10:55.942 140354 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-b677f1a9-dbaa-4373-8466-bd9ccf067b91 namespace which is not needed anymore
Jan 20 15:10:55 compute-1 nova_compute[225855]: 2026-01-20 15:10:55.946 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:10:55 compute-1 nova_compute[225855]: 2026-01-20 15:10:55.961 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Releasing lock "refresh_cache-35ba4fcf-baa1-45a5-bf4c-4f5cb96653a8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:10:55 compute-1 nova_compute[225855]: 2026-01-20 15:10:55.962 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] [instance: 35ba4fcf-baa1-45a5-bf4c-4f5cb96653a8] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 20 15:10:55 compute-1 nova_compute[225855]: 2026-01-20 15:10:55.962 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:10:55 compute-1 nova_compute[225855]: 2026-01-20 15:10:55.962 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:10:55 compute-1 systemd[1]: machine-qemu\x2d90\x2dinstance\x2d000000ae.scope: Deactivated successfully.
Jan 20 15:10:55 compute-1 systemd[1]: machine-qemu\x2d90\x2dinstance\x2d000000ae.scope: Consumed 3.386s CPU time.
Jan 20 15:10:55 compute-1 systemd-machined[194361]: Machine qemu-90-instance-000000ae terminated.
Jan 20 15:10:56 compute-1 nova_compute[225855]: 2026-01-20 15:10:56.100 225859 INFO nova.virt.libvirt.driver [-] [instance: 35ba4fcf-baa1-45a5-bf4c-4f5cb96653a8] Instance destroyed successfully.
Jan 20 15:10:56 compute-1 nova_compute[225855]: 2026-01-20 15:10:56.100 225859 DEBUG nova.objects.instance [None req-b2f8c823-cde5-4dd1-bd8a-b06b9cd3b121 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Lazy-loading 'resources' on Instance uuid 35ba4fcf-baa1-45a5-bf4c-4f5cb96653a8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:10:56 compute-1 neutron-haproxy-ovnmeta-b677f1a9-dbaa-4373-8466-bd9ccf067b91[297856]: [NOTICE]   (297860) : haproxy version is 2.8.14-c23fe91
Jan 20 15:10:56 compute-1 neutron-haproxy-ovnmeta-b677f1a9-dbaa-4373-8466-bd9ccf067b91[297856]: [NOTICE]   (297860) : path to executable is /usr/sbin/haproxy
Jan 20 15:10:56 compute-1 neutron-haproxy-ovnmeta-b677f1a9-dbaa-4373-8466-bd9ccf067b91[297856]: [WARNING]  (297860) : Exiting Master process...
Jan 20 15:10:56 compute-1 neutron-haproxy-ovnmeta-b677f1a9-dbaa-4373-8466-bd9ccf067b91[297856]: [ALERT]    (297860) : Current worker (297862) exited with code 143 (Terminated)
Jan 20 15:10:56 compute-1 neutron-haproxy-ovnmeta-b677f1a9-dbaa-4373-8466-bd9ccf067b91[297856]: [WARNING]  (297860) : All workers exited. Exiting... (0)
Jan 20 15:10:56 compute-1 systemd[1]: libpod-ce93107e97f2afc68739b0e4240ca20f65ba11abe81e5b38fd289de5c6365fb2.scope: Deactivated successfully.
Jan 20 15:10:56 compute-1 podman[297897]: 2026-01-20 15:10:56.116208369 +0000 UTC m=+0.042514797 container died ce93107e97f2afc68739b0e4240ca20f65ba11abe81e5b38fd289de5c6365fb2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b677f1a9-dbaa-4373-8466-bd9ccf067b91, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 15:10:56 compute-1 nova_compute[225855]: 2026-01-20 15:10:56.120 225859 DEBUG nova.virt.libvirt.vif [None req-b2f8c823-cde5-4dd1-bd8a-b06b9cd3b121 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T15:10:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-589796538',display_name='tempest-TestVolumeBootPattern-server-589796538',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-589796538',id=174,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-20T15:10:53Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a49638950e1543fa8e0d251af5479623',ramdisk_id='',reservation_id='r-sgiqpjjh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TestVolumeBootPattern-194644003',owner_us
er_name='tempest-TestVolumeBootPattern-194644003-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T15:10:53Z,user_data=None,user_id='bf422e55e158420cbdae75f07a3bb97a',uuid=35ba4fcf-baa1-45a5-bf4c-4f5cb96653a8,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1c883167-abba-4a7e-af5c-33a54aba9ce0", "address": "fa:16:3e:51:9b:64", "network": {"id": "b677f1a9-dbaa-4373-8466-bd9ccf067b91", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-408170906-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a49638950e1543fa8e0d251af5479623", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c883167-ab", "ovs_interfaceid": "1c883167-abba-4a7e-af5c-33a54aba9ce0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 20 15:10:56 compute-1 nova_compute[225855]: 2026-01-20 15:10:56.121 225859 DEBUG nova.network.os_vif_util [None req-b2f8c823-cde5-4dd1-bd8a-b06b9cd3b121 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Converting VIF {"id": "1c883167-abba-4a7e-af5c-33a54aba9ce0", "address": "fa:16:3e:51:9b:64", "network": {"id": "b677f1a9-dbaa-4373-8466-bd9ccf067b91", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-408170906-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a49638950e1543fa8e0d251af5479623", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c883167-ab", "ovs_interfaceid": "1c883167-abba-4a7e-af5c-33a54aba9ce0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 15:10:56 compute-1 nova_compute[225855]: 2026-01-20 15:10:56.122 225859 DEBUG nova.network.os_vif_util [None req-b2f8c823-cde5-4dd1-bd8a-b06b9cd3b121 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:51:9b:64,bridge_name='br-int',has_traffic_filtering=True,id=1c883167-abba-4a7e-af5c-33a54aba9ce0,network=Network(b677f1a9-dbaa-4373-8466-bd9ccf067b91),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1c883167-ab') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 15:10:56 compute-1 nova_compute[225855]: 2026-01-20 15:10:56.122 225859 DEBUG os_vif [None req-b2f8c823-cde5-4dd1-bd8a-b06b9cd3b121 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:51:9b:64,bridge_name='br-int',has_traffic_filtering=True,id=1c883167-abba-4a7e-af5c-33a54aba9ce0,network=Network(b677f1a9-dbaa-4373-8466-bd9ccf067b91),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1c883167-ab') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 20 15:10:56 compute-1 nova_compute[225855]: 2026-01-20 15:10:56.124 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:10:56 compute-1 nova_compute[225855]: 2026-01-20 15:10:56.124 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1c883167-ab, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:10:56 compute-1 nova_compute[225855]: 2026-01-20 15:10:56.125 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:10:56 compute-1 nova_compute[225855]: 2026-01-20 15:10:56.128 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 15:10:56 compute-1 nova_compute[225855]: 2026-01-20 15:10:56.130 225859 INFO os_vif [None req-b2f8c823-cde5-4dd1-bd8a-b06b9cd3b121 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:51:9b:64,bridge_name='br-int',has_traffic_filtering=True,id=1c883167-abba-4a7e-af5c-33a54aba9ce0,network=Network(b677f1a9-dbaa-4373-8466-bd9ccf067b91),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1c883167-ab')
Jan 20 15:10:56 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ce93107e97f2afc68739b0e4240ca20f65ba11abe81e5b38fd289de5c6365fb2-userdata-shm.mount: Deactivated successfully.
Jan 20 15:10:56 compute-1 systemd[1]: var-lib-containers-storage-overlay-19daef63f9380ca40ee278346b76985622bb726e2f8a3577bdcd88908f79189d-merged.mount: Deactivated successfully.
Jan 20 15:10:56 compute-1 podman[297897]: 2026-01-20 15:10:56.151989654 +0000 UTC m=+0.078296082 container cleanup ce93107e97f2afc68739b0e4240ca20f65ba11abe81e5b38fd289de5c6365fb2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b677f1a9-dbaa-4373-8466-bd9ccf067b91, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 20 15:10:56 compute-1 systemd[1]: libpod-conmon-ce93107e97f2afc68739b0e4240ca20f65ba11abe81e5b38fd289de5c6365fb2.scope: Deactivated successfully.
Jan 20 15:10:56 compute-1 podman[297952]: 2026-01-20 15:10:56.215168997 +0000 UTC m=+0.043120855 container remove ce93107e97f2afc68739b0e4240ca20f65ba11abe81e5b38fd289de5c6365fb2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b677f1a9-dbaa-4373-8466-bd9ccf067b91, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 20 15:10:56 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:10:56.220 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[4f95a62b-7df5-434c-94ce-5bdc6e72cbe2]: (4, ('Tue Jan 20 03:10:56 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-b677f1a9-dbaa-4373-8466-bd9ccf067b91 (ce93107e97f2afc68739b0e4240ca20f65ba11abe81e5b38fd289de5c6365fb2)\nce93107e97f2afc68739b0e4240ca20f65ba11abe81e5b38fd289de5c6365fb2\nTue Jan 20 03:10:56 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-b677f1a9-dbaa-4373-8466-bd9ccf067b91 (ce93107e97f2afc68739b0e4240ca20f65ba11abe81e5b38fd289de5c6365fb2)\nce93107e97f2afc68739b0e4240ca20f65ba11abe81e5b38fd289de5c6365fb2\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:10:56 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:10:56.222 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[063b198f-b987-4884-8ea1-46e9962a9b5e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:10:56 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:10:56.223 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb677f1a9-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:10:56 compute-1 kernel: tapb677f1a9-d0: left promiscuous mode
Jan 20 15:10:56 compute-1 nova_compute[225855]: 2026-01-20 15:10:56.225 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:10:56 compute-1 nova_compute[225855]: 2026-01-20 15:10:56.237 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:10:56 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:10:56.240 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[73236760-c55e-4ec4-876e-b62927ff67bd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:10:56 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:10:56.252 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[c6db4696-6ec4-4d2c-b651-4ee4ae60cb2e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:10:56 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:10:56.253 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[8b47aff7-215a-4310-a75c-faf24828f642]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:10:56 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:10:56.266 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[2029f044-45fe-40c3-b52d-d9fdcb2cfba0]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 689091, 'reachable_time': 23553, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 297972, 'error': None, 'target': 'ovnmeta-b677f1a9-dbaa-4373-8466-bd9ccf067b91', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:10:56 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:10:56.268 140466 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-b677f1a9-dbaa-4373-8466-bd9ccf067b91 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 20 15:10:56 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:10:56.268 140466 DEBUG oslo.privsep.daemon [-] privsep: reply[822a9a71-63ff-44e1-9f39-96ba1571aeb2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:10:56 compute-1 systemd[1]: run-netns-ovnmeta\x2db677f1a9\x2ddbaa\x2d4373\x2d8466\x2dbd9ccf067b91.mount: Deactivated successfully.
Jan 20 15:10:56 compute-1 nova_compute[225855]: 2026-01-20 15:10:56.316 225859 INFO nova.virt.libvirt.driver [None req-b2f8c823-cde5-4dd1-bd8a-b06b9cd3b121 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] [instance: 35ba4fcf-baa1-45a5-bf4c-4f5cb96653a8] Deleting instance files /var/lib/nova/instances/35ba4fcf-baa1-45a5-bf4c-4f5cb96653a8_del
Jan 20 15:10:56 compute-1 nova_compute[225855]: 2026-01-20 15:10:56.317 225859 INFO nova.virt.libvirt.driver [None req-b2f8c823-cde5-4dd1-bd8a-b06b9cd3b121 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] [instance: 35ba4fcf-baa1-45a5-bf4c-4f5cb96653a8] Deletion of /var/lib/nova/instances/35ba4fcf-baa1-45a5-bf4c-4f5cb96653a8_del complete
Jan 20 15:10:56 compute-1 nova_compute[225855]: 2026-01-20 15:10:56.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:10:56 compute-1 nova_compute[225855]: 2026-01-20 15:10:56.519 225859 INFO nova.compute.manager [None req-b2f8c823-cde5-4dd1-bd8a-b06b9cd3b121 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] [instance: 35ba4fcf-baa1-45a5-bf4c-4f5cb96653a8] Took 0.66 seconds to destroy the instance on the hypervisor.
Jan 20 15:10:56 compute-1 nova_compute[225855]: 2026-01-20 15:10:56.520 225859 DEBUG oslo.service.loopingcall [None req-b2f8c823-cde5-4dd1-bd8a-b06b9cd3b121 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 20 15:10:56 compute-1 nova_compute[225855]: 2026-01-20 15:10:56.520 225859 DEBUG nova.compute.manager [-] [instance: 35ba4fcf-baa1-45a5-bf4c-4f5cb96653a8] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 20 15:10:56 compute-1 nova_compute[225855]: 2026-01-20 15:10:56.520 225859 DEBUG nova.network.neutron [-] [instance: 35ba4fcf-baa1-45a5-bf4c-4f5cb96653a8] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 20 15:10:56 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:10:56 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:10:56 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:10:56.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:10:56 compute-1 ceph-mon[81775]: pgmap v2631: 321 pgs: 321 active+clean; 268 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 36 KiB/s wr, 141 op/s
Jan 20 15:10:56 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:10:56 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:10:56 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:10:56.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:10:57 compute-1 nova_compute[225855]: 2026-01-20 15:10:57.068 225859 DEBUG nova.network.neutron [-] [instance: 35ba4fcf-baa1-45a5-bf4c-4f5cb96653a8] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:10:57 compute-1 nova_compute[225855]: 2026-01-20 15:10:57.093 225859 INFO nova.compute.manager [-] [instance: 35ba4fcf-baa1-45a5-bf4c-4f5cb96653a8] Took 0.57 seconds to deallocate network for instance.
Jan 20 15:10:57 compute-1 nova_compute[225855]: 2026-01-20 15:10:57.158 225859 DEBUG nova.compute.manager [req-349a0ed1-33b9-4394-8000-4456a480c885 req-b729d9d1-9999-46a2-bf95-1a87e72129b0 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 35ba4fcf-baa1-45a5-bf4c-4f5cb96653a8] Received event network-vif-deleted-1c883167-abba-4a7e-af5c-33a54aba9ce0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:10:57 compute-1 nova_compute[225855]: 2026-01-20 15:10:57.307 225859 INFO nova.compute.manager [None req-b2f8c823-cde5-4dd1-bd8a-b06b9cd3b121 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] [instance: 35ba4fcf-baa1-45a5-bf4c-4f5cb96653a8] Took 0.21 seconds to detach 1 volumes for instance.
Jan 20 15:10:57 compute-1 nova_compute[225855]: 2026-01-20 15:10:57.309 225859 DEBUG nova.compute.manager [None req-b2f8c823-cde5-4dd1-bd8a-b06b9cd3b121 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] [instance: 35ba4fcf-baa1-45a5-bf4c-4f5cb96653a8] Deleting volume: f396a213-a7f4-434e-a290-c5d9278be4af _cleanup_volumes /usr/lib/python3.9/site-packages/nova/compute/manager.py:3217
Jan 20 15:10:57 compute-1 nova_compute[225855]: 2026-01-20 15:10:57.408 225859 DEBUG nova.compute.manager [req-ea74f5c4-40f5-41fe-b400-24613d2dcf43 req-ecc41560-41fc-47b7-8696-e6865b24c08e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 35ba4fcf-baa1-45a5-bf4c-4f5cb96653a8] Received event network-vif-unplugged-1c883167-abba-4a7e-af5c-33a54aba9ce0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:10:57 compute-1 nova_compute[225855]: 2026-01-20 15:10:57.408 225859 DEBUG oslo_concurrency.lockutils [req-ea74f5c4-40f5-41fe-b400-24613d2dcf43 req-ecc41560-41fc-47b7-8696-e6865b24c08e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "35ba4fcf-baa1-45a5-bf4c-4f5cb96653a8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:10:57 compute-1 nova_compute[225855]: 2026-01-20 15:10:57.409 225859 DEBUG oslo_concurrency.lockutils [req-ea74f5c4-40f5-41fe-b400-24613d2dcf43 req-ecc41560-41fc-47b7-8696-e6865b24c08e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "35ba4fcf-baa1-45a5-bf4c-4f5cb96653a8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:10:57 compute-1 nova_compute[225855]: 2026-01-20 15:10:57.409 225859 DEBUG oslo_concurrency.lockutils [req-ea74f5c4-40f5-41fe-b400-24613d2dcf43 req-ecc41560-41fc-47b7-8696-e6865b24c08e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "35ba4fcf-baa1-45a5-bf4c-4f5cb96653a8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:10:57 compute-1 nova_compute[225855]: 2026-01-20 15:10:57.409 225859 DEBUG nova.compute.manager [req-ea74f5c4-40f5-41fe-b400-24613d2dcf43 req-ecc41560-41fc-47b7-8696-e6865b24c08e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 35ba4fcf-baa1-45a5-bf4c-4f5cb96653a8] No waiting events found dispatching network-vif-unplugged-1c883167-abba-4a7e-af5c-33a54aba9ce0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:10:57 compute-1 nova_compute[225855]: 2026-01-20 15:10:57.409 225859 DEBUG nova.compute.manager [req-ea74f5c4-40f5-41fe-b400-24613d2dcf43 req-ecc41560-41fc-47b7-8696-e6865b24c08e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 35ba4fcf-baa1-45a5-bf4c-4f5cb96653a8] Received event network-vif-unplugged-1c883167-abba-4a7e-af5c-33a54aba9ce0 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 20 15:10:57 compute-1 nova_compute[225855]: 2026-01-20 15:10:57.410 225859 DEBUG nova.compute.manager [req-ea74f5c4-40f5-41fe-b400-24613d2dcf43 req-ecc41560-41fc-47b7-8696-e6865b24c08e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 35ba4fcf-baa1-45a5-bf4c-4f5cb96653a8] Received event network-vif-plugged-1c883167-abba-4a7e-af5c-33a54aba9ce0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:10:57 compute-1 nova_compute[225855]: 2026-01-20 15:10:57.410 225859 DEBUG oslo_concurrency.lockutils [req-ea74f5c4-40f5-41fe-b400-24613d2dcf43 req-ecc41560-41fc-47b7-8696-e6865b24c08e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "35ba4fcf-baa1-45a5-bf4c-4f5cb96653a8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:10:57 compute-1 nova_compute[225855]: 2026-01-20 15:10:57.410 225859 DEBUG oslo_concurrency.lockutils [req-ea74f5c4-40f5-41fe-b400-24613d2dcf43 req-ecc41560-41fc-47b7-8696-e6865b24c08e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "35ba4fcf-baa1-45a5-bf4c-4f5cb96653a8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:10:57 compute-1 nova_compute[225855]: 2026-01-20 15:10:57.410 225859 DEBUG oslo_concurrency.lockutils [req-ea74f5c4-40f5-41fe-b400-24613d2dcf43 req-ecc41560-41fc-47b7-8696-e6865b24c08e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "35ba4fcf-baa1-45a5-bf4c-4f5cb96653a8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:10:57 compute-1 nova_compute[225855]: 2026-01-20 15:10:57.410 225859 DEBUG nova.compute.manager [req-ea74f5c4-40f5-41fe-b400-24613d2dcf43 req-ecc41560-41fc-47b7-8696-e6865b24c08e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 35ba4fcf-baa1-45a5-bf4c-4f5cb96653a8] No waiting events found dispatching network-vif-plugged-1c883167-abba-4a7e-af5c-33a54aba9ce0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:10:57 compute-1 nova_compute[225855]: 2026-01-20 15:10:57.410 225859 WARNING nova.compute.manager [req-ea74f5c4-40f5-41fe-b400-24613d2dcf43 req-ecc41560-41fc-47b7-8696-e6865b24c08e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 35ba4fcf-baa1-45a5-bf4c-4f5cb96653a8] Received unexpected event network-vif-plugged-1c883167-abba-4a7e-af5c-33a54aba9ce0 for instance with vm_state active and task_state deleting.
Jan 20 15:10:57 compute-1 nova_compute[225855]: 2026-01-20 15:10:57.578 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:10:57 compute-1 nova_compute[225855]: 2026-01-20 15:10:57.723 225859 DEBUG oslo_concurrency.lockutils [None req-b2f8c823-cde5-4dd1-bd8a-b06b9cd3b121 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:10:57 compute-1 nova_compute[225855]: 2026-01-20 15:10:57.724 225859 DEBUG oslo_concurrency.lockutils [None req-b2f8c823-cde5-4dd1-bd8a-b06b9cd3b121 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:10:57 compute-1 nova_compute[225855]: 2026-01-20 15:10:57.913 225859 DEBUG oslo_concurrency.processutils [None req-b2f8c823-cde5-4dd1-bd8a-b06b9cd3b121 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:10:58 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/93101636' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:10:58 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/93101636' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:10:58 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e386 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:10:58 compute-1 nova_compute[225855]: 2026-01-20 15:10:58.340 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:10:58 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:10:58 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2257383779' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:10:58 compute-1 nova_compute[225855]: 2026-01-20 15:10:58.364 225859 DEBUG oslo_concurrency.processutils [None req-b2f8c823-cde5-4dd1-bd8a-b06b9cd3b121 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:10:58 compute-1 nova_compute[225855]: 2026-01-20 15:10:58.370 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:10:58 compute-1 nova_compute[225855]: 2026-01-20 15:10:58.373 225859 DEBUG nova.compute.provider_tree [None req-b2f8c823-cde5-4dd1-bd8a-b06b9cd3b121 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:10:58 compute-1 nova_compute[225855]: 2026-01-20 15:10:58.388 225859 DEBUG nova.scheduler.client.report [None req-b2f8c823-cde5-4dd1-bd8a-b06b9cd3b121 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:10:58 compute-1 nova_compute[225855]: 2026-01-20 15:10:58.411 225859 DEBUG oslo_concurrency.lockutils [None req-b2f8c823-cde5-4dd1-bd8a-b06b9cd3b121 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.686s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:10:58 compute-1 nova_compute[225855]: 2026-01-20 15:10:58.414 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.045s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:10:58 compute-1 nova_compute[225855]: 2026-01-20 15:10:58.414 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:10:58 compute-1 nova_compute[225855]: 2026-01-20 15:10:58.414 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 15:10:58 compute-1 nova_compute[225855]: 2026-01-20 15:10:58.415 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:10:58 compute-1 nova_compute[225855]: 2026-01-20 15:10:58.468 225859 INFO nova.scheduler.client.report [None req-b2f8c823-cde5-4dd1-bd8a-b06b9cd3b121 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Deleted allocations for instance 35ba4fcf-baa1-45a5-bf4c-4f5cb96653a8
Jan 20 15:10:58 compute-1 nova_compute[225855]: 2026-01-20 15:10:58.556 225859 DEBUG oslo_concurrency.lockutils [None req-b2f8c823-cde5-4dd1-bd8a-b06b9cd3b121 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Lock "35ba4fcf-baa1-45a5-bf4c-4f5cb96653a8" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.703s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:10:58 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:10:58 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:10:58 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:10:58.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:10:58 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:10:58 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4107587614' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:10:58 compute-1 nova_compute[225855]: 2026-01-20 15:10:58.841 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:10:58 compute-1 nova_compute[225855]: 2026-01-20 15:10:58.990 225859 WARNING nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 15:10:58 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:10:58 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:10:58 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:10:58.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:10:58 compute-1 nova_compute[225855]: 2026-01-20 15:10:58.993 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4299MB free_disk=20.94251251220703GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 15:10:58 compute-1 nova_compute[225855]: 2026-01-20 15:10:58.993 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:10:58 compute-1 nova_compute[225855]: 2026-01-20 15:10:58.994 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:10:59 compute-1 nova_compute[225855]: 2026-01-20 15:10:59.117 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 15:10:59 compute-1 nova_compute[225855]: 2026-01-20 15:10:59.117 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 15:10:59 compute-1 nova_compute[225855]: 2026-01-20 15:10:59.141 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:10:59 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/2257383779' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:10:59 compute-1 ceph-mon[81775]: pgmap v2632: 321 pgs: 321 active+clean; 268 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 24 KiB/s wr, 90 op/s
Jan 20 15:10:59 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/4107587614' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:10:59 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/2215610935' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:10:59 compute-1 sudo[298040]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:10:59 compute-1 sudo[298040]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:10:59 compute-1 sudo[298040]: pam_unix(sudo:session): session closed for user root
Jan 20 15:10:59 compute-1 sudo[298065]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:10:59 compute-1 sudo[298065]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:10:59 compute-1 sudo[298065]: pam_unix(sudo:session): session closed for user root
Jan 20 15:10:59 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:10:59 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1208327764' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:10:59 compute-1 nova_compute[225855]: 2026-01-20 15:10:59.590 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:10:59 compute-1 nova_compute[225855]: 2026-01-20 15:10:59.599 225859 DEBUG nova.compute.provider_tree [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:10:59 compute-1 nova_compute[225855]: 2026-01-20 15:10:59.616 225859 DEBUG nova.scheduler.client.report [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:10:59 compute-1 nova_compute[225855]: 2026-01-20 15:10:59.642 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 15:10:59 compute-1 nova_compute[225855]: 2026-01-20 15:10:59.642 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.649s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:11:00 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e387 e387: 3 total, 3 up, 3 in
Jan 20 15:11:00 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/1208327764' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:11:00 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/2920973194' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:11:00 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:11:00 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:11:00 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:11:00.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:11:00 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:11:00 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:11:00 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:11:00.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:11:01 compute-1 nova_compute[225855]: 2026-01-20 15:11:01.126 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:11:01 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 20 15:11:01 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1603287857' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:11:01 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 20 15:11:01 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1603287857' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:11:01 compute-1 ceph-mon[81775]: osdmap e387: 3 total, 3 up, 3 in
Jan 20 15:11:01 compute-1 ceph-mon[81775]: pgmap v2634: 321 pgs: 321 active+clean; 268 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 28 KiB/s wr, 120 op/s
Jan 20 15:11:01 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/1603287857' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:11:01 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/1603287857' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:11:02 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:11:02 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:11:02 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:11:02.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:11:02 compute-1 nova_compute[225855]: 2026-01-20 15:11:02.618 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:11:02 compute-1 nova_compute[225855]: 2026-01-20 15:11:02.637 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:11:02 compute-1 nova_compute[225855]: 2026-01-20 15:11:02.638 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:11:02 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:11:02 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:11:02 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:11:02.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:11:03 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e387 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:11:03 compute-1 ceph-mon[81775]: pgmap v2635: 321 pgs: 321 active+clean; 268 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 20 KiB/s wr, 142 op/s
Jan 20 15:11:04 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/3353158919' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:11:04 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/416091169' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:11:04 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:11:04 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:11:04 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:11:04.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:11:04 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:11:04 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:11:05 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:11:04.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:11:05 compute-1 ceph-mon[81775]: pgmap v2636: 321 pgs: 321 active+clean; 255 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 5.7 KiB/s wr, 135 op/s
Jan 20 15:11:06 compute-1 nova_compute[225855]: 2026-01-20 15:11:06.127 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:11:06 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e388 e388: 3 total, 3 up, 3 in
Jan 20 15:11:06 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:11:06 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:11:06 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:11:06.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:11:07 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:11:07 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:11:07 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:11:06.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:11:07 compute-1 podman[298096]: 2026-01-20 15:11:07.067914048 +0000 UTC m=+0.113544753 container health_status 72be95c12e041eb2cd1a16ad70dfa1391d9f03a5f3180a614ead9459dcfcd46f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 20 15:11:07 compute-1 nova_compute[225855]: 2026-01-20 15:11:07.355 225859 DEBUG oslo_concurrency.lockutils [None req-e06e1530-9323-4750-87da-ce51787654ba 17140eb73c0b4236807367396cc4959b eac67fc3f12d4e9f9e47de6b79eea88f - - default default] Acquiring lock "b661b8a2-8bea-46be-afe4-537fd2523387" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:11:07 compute-1 nova_compute[225855]: 2026-01-20 15:11:07.355 225859 DEBUG oslo_concurrency.lockutils [None req-e06e1530-9323-4750-87da-ce51787654ba 17140eb73c0b4236807367396cc4959b eac67fc3f12d4e9f9e47de6b79eea88f - - default default] Lock "b661b8a2-8bea-46be-afe4-537fd2523387" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:11:07 compute-1 nova_compute[225855]: 2026-01-20 15:11:07.380 225859 DEBUG nova.compute.manager [None req-e06e1530-9323-4750-87da-ce51787654ba 17140eb73c0b4236807367396cc4959b eac67fc3f12d4e9f9e47de6b79eea88f - - default default] [instance: b661b8a2-8bea-46be-afe4-537fd2523387] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 20 15:11:07 compute-1 nova_compute[225855]: 2026-01-20 15:11:07.471 225859 DEBUG oslo_concurrency.lockutils [None req-e06e1530-9323-4750-87da-ce51787654ba 17140eb73c0b4236807367396cc4959b eac67fc3f12d4e9f9e47de6b79eea88f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:11:07 compute-1 nova_compute[225855]: 2026-01-20 15:11:07.471 225859 DEBUG oslo_concurrency.lockutils [None req-e06e1530-9323-4750-87da-ce51787654ba 17140eb73c0b4236807367396cc4959b eac67fc3f12d4e9f9e47de6b79eea88f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:11:07 compute-1 nova_compute[225855]: 2026-01-20 15:11:07.482 225859 DEBUG nova.virt.hardware [None req-e06e1530-9323-4750-87da-ce51787654ba 17140eb73c0b4236807367396cc4959b eac67fc3f12d4e9f9e47de6b79eea88f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 20 15:11:07 compute-1 nova_compute[225855]: 2026-01-20 15:11:07.483 225859 INFO nova.compute.claims [None req-e06e1530-9323-4750-87da-ce51787654ba 17140eb73c0b4236807367396cc4959b eac67fc3f12d4e9f9e47de6b79eea88f - - default default] [instance: b661b8a2-8bea-46be-afe4-537fd2523387] Claim successful on node compute-1.ctlplane.example.com
Jan 20 15:11:07 compute-1 ceph-mon[81775]: pgmap v2637: 321 pgs: 321 active+clean; 222 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 7.1 KiB/s wr, 72 op/s
Jan 20 15:11:07 compute-1 ceph-mon[81775]: osdmap e388: 3 total, 3 up, 3 in
Jan 20 15:11:07 compute-1 nova_compute[225855]: 2026-01-20 15:11:07.607 225859 DEBUG oslo_concurrency.processutils [None req-e06e1530-9323-4750-87da-ce51787654ba 17140eb73c0b4236807367396cc4959b eac67fc3f12d4e9f9e47de6b79eea88f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:11:07 compute-1 nova_compute[225855]: 2026-01-20 15:11:07.643 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:11:08 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:11:08 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/932732216' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:11:08 compute-1 nova_compute[225855]: 2026-01-20 15:11:08.070 225859 DEBUG oslo_concurrency.processutils [None req-e06e1530-9323-4750-87da-ce51787654ba 17140eb73c0b4236807367396cc4959b eac67fc3f12d4e9f9e47de6b79eea88f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:11:08 compute-1 nova_compute[225855]: 2026-01-20 15:11:08.076 225859 DEBUG nova.compute.provider_tree [None req-e06e1530-9323-4750-87da-ce51787654ba 17140eb73c0b4236807367396cc4959b eac67fc3f12d4e9f9e47de6b79eea88f - - default default] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:11:08 compute-1 nova_compute[225855]: 2026-01-20 15:11:08.105 225859 DEBUG nova.scheduler.client.report [None req-e06e1530-9323-4750-87da-ce51787654ba 17140eb73c0b4236807367396cc4959b eac67fc3f12d4e9f9e47de6b79eea88f - - default default] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:11:08 compute-1 nova_compute[225855]: 2026-01-20 15:11:08.123 225859 DEBUG oslo_concurrency.lockutils [None req-e06e1530-9323-4750-87da-ce51787654ba 17140eb73c0b4236807367396cc4959b eac67fc3f12d4e9f9e47de6b79eea88f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.652s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:11:08 compute-1 nova_compute[225855]: 2026-01-20 15:11:08.124 225859 DEBUG nova.compute.manager [None req-e06e1530-9323-4750-87da-ce51787654ba 17140eb73c0b4236807367396cc4959b eac67fc3f12d4e9f9e47de6b79eea88f - - default default] [instance: b661b8a2-8bea-46be-afe4-537fd2523387] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 20 15:11:08 compute-1 nova_compute[225855]: 2026-01-20 15:11:08.177 225859 DEBUG nova.compute.manager [None req-e06e1530-9323-4750-87da-ce51787654ba 17140eb73c0b4236807367396cc4959b eac67fc3f12d4e9f9e47de6b79eea88f - - default default] [instance: b661b8a2-8bea-46be-afe4-537fd2523387] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 20 15:11:08 compute-1 nova_compute[225855]: 2026-01-20 15:11:08.178 225859 DEBUG nova.network.neutron [None req-e06e1530-9323-4750-87da-ce51787654ba 17140eb73c0b4236807367396cc4959b eac67fc3f12d4e9f9e47de6b79eea88f - - default default] [instance: b661b8a2-8bea-46be-afe4-537fd2523387] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 20 15:11:08 compute-1 nova_compute[225855]: 2026-01-20 15:11:08.198 225859 INFO nova.virt.libvirt.driver [None req-e06e1530-9323-4750-87da-ce51787654ba 17140eb73c0b4236807367396cc4959b eac67fc3f12d4e9f9e47de6b79eea88f - - default default] [instance: b661b8a2-8bea-46be-afe4-537fd2523387] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 20 15:11:08 compute-1 nova_compute[225855]: 2026-01-20 15:11:08.220 225859 DEBUG nova.compute.manager [None req-e06e1530-9323-4750-87da-ce51787654ba 17140eb73c0b4236807367396cc4959b eac67fc3f12d4e9f9e47de6b79eea88f - - default default] [instance: b661b8a2-8bea-46be-afe4-537fd2523387] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 20 15:11:08 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e388 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:11:08 compute-1 nova_compute[225855]: 2026-01-20 15:11:08.341 225859 DEBUG nova.compute.manager [None req-e06e1530-9323-4750-87da-ce51787654ba 17140eb73c0b4236807367396cc4959b eac67fc3f12d4e9f9e47de6b79eea88f - - default default] [instance: b661b8a2-8bea-46be-afe4-537fd2523387] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 20 15:11:08 compute-1 nova_compute[225855]: 2026-01-20 15:11:08.343 225859 DEBUG nova.virt.libvirt.driver [None req-e06e1530-9323-4750-87da-ce51787654ba 17140eb73c0b4236807367396cc4959b eac67fc3f12d4e9f9e47de6b79eea88f - - default default] [instance: b661b8a2-8bea-46be-afe4-537fd2523387] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 20 15:11:08 compute-1 nova_compute[225855]: 2026-01-20 15:11:08.344 225859 INFO nova.virt.libvirt.driver [None req-e06e1530-9323-4750-87da-ce51787654ba 17140eb73c0b4236807367396cc4959b eac67fc3f12d4e9f9e47de6b79eea88f - - default default] [instance: b661b8a2-8bea-46be-afe4-537fd2523387] Creating image(s)
Jan 20 15:11:08 compute-1 nova_compute[225855]: 2026-01-20 15:11:08.384 225859 DEBUG nova.storage.rbd_utils [None req-e06e1530-9323-4750-87da-ce51787654ba 17140eb73c0b4236807367396cc4959b eac67fc3f12d4e9f9e47de6b79eea88f - - default default] rbd image b661b8a2-8bea-46be-afe4-537fd2523387_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:11:08 compute-1 nova_compute[225855]: 2026-01-20 15:11:08.426 225859 DEBUG nova.storage.rbd_utils [None req-e06e1530-9323-4750-87da-ce51787654ba 17140eb73c0b4236807367396cc4959b eac67fc3f12d4e9f9e47de6b79eea88f - - default default] rbd image b661b8a2-8bea-46be-afe4-537fd2523387_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:11:08 compute-1 nova_compute[225855]: 2026-01-20 15:11:08.454 225859 DEBUG nova.storage.rbd_utils [None req-e06e1530-9323-4750-87da-ce51787654ba 17140eb73c0b4236807367396cc4959b eac67fc3f12d4e9f9e47de6b79eea88f - - default default] rbd image b661b8a2-8bea-46be-afe4-537fd2523387_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:11:08 compute-1 nova_compute[225855]: 2026-01-20 15:11:08.457 225859 DEBUG oslo_concurrency.processutils [None req-e06e1530-9323-4750-87da-ce51787654ba 17140eb73c0b4236807367396cc4959b eac67fc3f12d4e9f9e47de6b79eea88f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:11:08 compute-1 nova_compute[225855]: 2026-01-20 15:11:08.493 225859 DEBUG nova.policy [None req-e06e1530-9323-4750-87da-ce51787654ba 17140eb73c0b4236807367396cc4959b eac67fc3f12d4e9f9e47de6b79eea88f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '17140eb73c0b4236807367396cc4959b', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'eac67fc3f12d4e9f9e47de6b79eea88f', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 20 15:11:08 compute-1 nova_compute[225855]: 2026-01-20 15:11:08.550 225859 DEBUG oslo_concurrency.processutils [None req-e06e1530-9323-4750-87da-ce51787654ba 17140eb73c0b4236807367396cc4959b eac67fc3f12d4e9f9e47de6b79eea88f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json" returned: 0 in 0.093s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:11:08 compute-1 nova_compute[225855]: 2026-01-20 15:11:08.551 225859 DEBUG oslo_concurrency.lockutils [None req-e06e1530-9323-4750-87da-ce51787654ba 17140eb73c0b4236807367396cc4959b eac67fc3f12d4e9f9e47de6b79eea88f - - default default] Acquiring lock "82d5c1918fd7c974214c7a48c1793a7a82560462" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:11:08 compute-1 nova_compute[225855]: 2026-01-20 15:11:08.551 225859 DEBUG oslo_concurrency.lockutils [None req-e06e1530-9323-4750-87da-ce51787654ba 17140eb73c0b4236807367396cc4959b eac67fc3f12d4e9f9e47de6b79eea88f - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:11:08 compute-1 nova_compute[225855]: 2026-01-20 15:11:08.552 225859 DEBUG oslo_concurrency.lockutils [None req-e06e1530-9323-4750-87da-ce51787654ba 17140eb73c0b4236807367396cc4959b eac67fc3f12d4e9f9e47de6b79eea88f - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:11:08 compute-1 nova_compute[225855]: 2026-01-20 15:11:08.576 225859 DEBUG nova.storage.rbd_utils [None req-e06e1530-9323-4750-87da-ce51787654ba 17140eb73c0b4236807367396cc4959b eac67fc3f12d4e9f9e47de6b79eea88f - - default default] rbd image b661b8a2-8bea-46be-afe4-537fd2523387_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:11:08 compute-1 nova_compute[225855]: 2026-01-20 15:11:08.580 225859 DEBUG oslo_concurrency.processutils [None req-e06e1530-9323-4750-87da-ce51787654ba 17140eb73c0b4236807367396cc4959b eac67fc3f12d4e9f9e47de6b79eea88f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 b661b8a2-8bea-46be-afe4-537fd2523387_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:11:08 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/932732216' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:11:08 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:11:08 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:11:08 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:11:08.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:11:08 compute-1 nova_compute[225855]: 2026-01-20 15:11:08.843 225859 DEBUG oslo_concurrency.processutils [None req-e06e1530-9323-4750-87da-ce51787654ba 17140eb73c0b4236807367396cc4959b eac67fc3f12d4e9f9e47de6b79eea88f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 b661b8a2-8bea-46be-afe4-537fd2523387_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.263s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:11:08 compute-1 nova_compute[225855]: 2026-01-20 15:11:08.927 225859 DEBUG nova.storage.rbd_utils [None req-e06e1530-9323-4750-87da-ce51787654ba 17140eb73c0b4236807367396cc4959b eac67fc3f12d4e9f9e47de6b79eea88f - - default default] resizing rbd image b661b8a2-8bea-46be-afe4-537fd2523387_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 20 15:11:09 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:11:09 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:11:09 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:11:09.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:11:09 compute-1 nova_compute[225855]: 2026-01-20 15:11:09.037 225859 DEBUG nova.objects.instance [None req-e06e1530-9323-4750-87da-ce51787654ba 17140eb73c0b4236807367396cc4959b eac67fc3f12d4e9f9e47de6b79eea88f - - default default] Lazy-loading 'migration_context' on Instance uuid b661b8a2-8bea-46be-afe4-537fd2523387 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:11:09 compute-1 nova_compute[225855]: 2026-01-20 15:11:09.098 225859 DEBUG nova.virt.libvirt.driver [None req-e06e1530-9323-4750-87da-ce51787654ba 17140eb73c0b4236807367396cc4959b eac67fc3f12d4e9f9e47de6b79eea88f - - default default] [instance: b661b8a2-8bea-46be-afe4-537fd2523387] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 20 15:11:09 compute-1 nova_compute[225855]: 2026-01-20 15:11:09.098 225859 DEBUG nova.virt.libvirt.driver [None req-e06e1530-9323-4750-87da-ce51787654ba 17140eb73c0b4236807367396cc4959b eac67fc3f12d4e9f9e47de6b79eea88f - - default default] [instance: b661b8a2-8bea-46be-afe4-537fd2523387] Ensure instance console log exists: /var/lib/nova/instances/b661b8a2-8bea-46be-afe4-537fd2523387/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 20 15:11:09 compute-1 nova_compute[225855]: 2026-01-20 15:11:09.099 225859 DEBUG oslo_concurrency.lockutils [None req-e06e1530-9323-4750-87da-ce51787654ba 17140eb73c0b4236807367396cc4959b eac67fc3f12d4e9f9e47de6b79eea88f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:11:09 compute-1 nova_compute[225855]: 2026-01-20 15:11:09.100 225859 DEBUG oslo_concurrency.lockutils [None req-e06e1530-9323-4750-87da-ce51787654ba 17140eb73c0b4236807367396cc4959b eac67fc3f12d4e9f9e47de6b79eea88f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:11:09 compute-1 nova_compute[225855]: 2026-01-20 15:11:09.100 225859 DEBUG oslo_concurrency.lockutils [None req-e06e1530-9323-4750-87da-ce51787654ba 17140eb73c0b4236807367396cc4959b eac67fc3f12d4e9f9e47de6b79eea88f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:11:09 compute-1 nova_compute[225855]: 2026-01-20 15:11:09.221 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:11:09 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:11:09.221 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=57, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '12:bb:42', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '06:92:24:f7:15:56'}, ipsec=False) old=SB_Global(nb_cfg=56) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:11:09 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:11:09.224 140354 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 20 15:11:09 compute-1 nova_compute[225855]: 2026-01-20 15:11:09.407 225859 DEBUG nova.network.neutron [None req-e06e1530-9323-4750-87da-ce51787654ba 17140eb73c0b4236807367396cc4959b eac67fc3f12d4e9f9e47de6b79eea88f - - default default] [instance: b661b8a2-8bea-46be-afe4-537fd2523387] Successfully created port: 7af44da8-4a04-4981-9a59-e94e346d071e _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 20 15:11:09 compute-1 ceph-mon[81775]: pgmap v2639: 321 pgs: 321 active+clean; 222 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 8.2 KiB/s wr, 50 op/s
Jan 20 15:11:10 compute-1 nova_compute[225855]: 2026-01-20 15:11:10.301 225859 DEBUG nova.network.neutron [None req-e06e1530-9323-4750-87da-ce51787654ba 17140eb73c0b4236807367396cc4959b eac67fc3f12d4e9f9e47de6b79eea88f - - default default] [instance: b661b8a2-8bea-46be-afe4-537fd2523387] Successfully updated port: 7af44da8-4a04-4981-9a59-e94e346d071e _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 20 15:11:10 compute-1 nova_compute[225855]: 2026-01-20 15:11:10.316 225859 DEBUG oslo_concurrency.lockutils [None req-e06e1530-9323-4750-87da-ce51787654ba 17140eb73c0b4236807367396cc4959b eac67fc3f12d4e9f9e47de6b79eea88f - - default default] Acquiring lock "refresh_cache-b661b8a2-8bea-46be-afe4-537fd2523387" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:11:10 compute-1 nova_compute[225855]: 2026-01-20 15:11:10.317 225859 DEBUG oslo_concurrency.lockutils [None req-e06e1530-9323-4750-87da-ce51787654ba 17140eb73c0b4236807367396cc4959b eac67fc3f12d4e9f9e47de6b79eea88f - - default default] Acquired lock "refresh_cache-b661b8a2-8bea-46be-afe4-537fd2523387" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:11:10 compute-1 nova_compute[225855]: 2026-01-20 15:11:10.317 225859 DEBUG nova.network.neutron [None req-e06e1530-9323-4750-87da-ce51787654ba 17140eb73c0b4236807367396cc4959b eac67fc3f12d4e9f9e47de6b79eea88f - - default default] [instance: b661b8a2-8bea-46be-afe4-537fd2523387] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 15:11:10 compute-1 nova_compute[225855]: 2026-01-20 15:11:10.421 225859 DEBUG nova.compute.manager [req-caae48a2-4ec2-4722-b445-33f7aa0d19fc req-80ca1a28-7212-4b44-8074-016c8b5079dd 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b661b8a2-8bea-46be-afe4-537fd2523387] Received event network-changed-7af44da8-4a04-4981-9a59-e94e346d071e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:11:10 compute-1 nova_compute[225855]: 2026-01-20 15:11:10.421 225859 DEBUG nova.compute.manager [req-caae48a2-4ec2-4722-b445-33f7aa0d19fc req-80ca1a28-7212-4b44-8074-016c8b5079dd 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b661b8a2-8bea-46be-afe4-537fd2523387] Refreshing instance network info cache due to event network-changed-7af44da8-4a04-4981-9a59-e94e346d071e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 15:11:10 compute-1 nova_compute[225855]: 2026-01-20 15:11:10.421 225859 DEBUG oslo_concurrency.lockutils [req-caae48a2-4ec2-4722-b445-33f7aa0d19fc req-80ca1a28-7212-4b44-8074-016c8b5079dd 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-b661b8a2-8bea-46be-afe4-537fd2523387" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:11:10 compute-1 nova_compute[225855]: 2026-01-20 15:11:10.534 225859 DEBUG nova.network.neutron [None req-e06e1530-9323-4750-87da-ce51787654ba 17140eb73c0b4236807367396cc4959b eac67fc3f12d4e9f9e47de6b79eea88f - - default default] [instance: b661b8a2-8bea-46be-afe4-537fd2523387] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 20 15:11:10 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:11:10 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:11:10 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:11:10.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:11:10 compute-1 ceph-mon[81775]: pgmap v2640: 321 pgs: 321 active+clean; 262 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 81 op/s
Jan 20 15:11:11 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:11:11 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:11:11 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:11:11.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:11:11 compute-1 nova_compute[225855]: 2026-01-20 15:11:11.098 225859 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1768921856.0979578, 35ba4fcf-baa1-45a5-bf4c-4f5cb96653a8 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:11:11 compute-1 nova_compute[225855]: 2026-01-20 15:11:11.099 225859 INFO nova.compute.manager [-] [instance: 35ba4fcf-baa1-45a5-bf4c-4f5cb96653a8] VM Stopped (Lifecycle Event)
Jan 20 15:11:11 compute-1 nova_compute[225855]: 2026-01-20 15:11:11.119 225859 DEBUG nova.compute.manager [None req-9f2fa34b-086d-491d-af98-71f582a9441f - - - - - -] [instance: 35ba4fcf-baa1-45a5-bf4c-4f5cb96653a8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:11:11 compute-1 nova_compute[225855]: 2026-01-20 15:11:11.129 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:11:11 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:11:11.227 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5ffd4ac3-9266-4927-98ad-20a17782c725, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '57'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:11:11 compute-1 nova_compute[225855]: 2026-01-20 15:11:11.485 225859 DEBUG nova.network.neutron [None req-e06e1530-9323-4750-87da-ce51787654ba 17140eb73c0b4236807367396cc4959b eac67fc3f12d4e9f9e47de6b79eea88f - - default default] [instance: b661b8a2-8bea-46be-afe4-537fd2523387] Updating instance_info_cache with network_info: [{"id": "7af44da8-4a04-4981-9a59-e94e346d071e", "address": "fa:16:3e:a5:4b:49", "network": {"id": "901fb6c4-d3d9-4ccd-b087-e1a254c7cc98", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-1118919703-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eac67fc3f12d4e9f9e47de6b79eea88f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7af44da8-4a", "ovs_interfaceid": "7af44da8-4a04-4981-9a59-e94e346d071e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:11:11 compute-1 nova_compute[225855]: 2026-01-20 15:11:11.505 225859 DEBUG oslo_concurrency.lockutils [None req-e06e1530-9323-4750-87da-ce51787654ba 17140eb73c0b4236807367396cc4959b eac67fc3f12d4e9f9e47de6b79eea88f - - default default] Releasing lock "refresh_cache-b661b8a2-8bea-46be-afe4-537fd2523387" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:11:11 compute-1 nova_compute[225855]: 2026-01-20 15:11:11.505 225859 DEBUG nova.compute.manager [None req-e06e1530-9323-4750-87da-ce51787654ba 17140eb73c0b4236807367396cc4959b eac67fc3f12d4e9f9e47de6b79eea88f - - default default] [instance: b661b8a2-8bea-46be-afe4-537fd2523387] Instance network_info: |[{"id": "7af44da8-4a04-4981-9a59-e94e346d071e", "address": "fa:16:3e:a5:4b:49", "network": {"id": "901fb6c4-d3d9-4ccd-b087-e1a254c7cc98", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-1118919703-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eac67fc3f12d4e9f9e47de6b79eea88f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7af44da8-4a", "ovs_interfaceid": "7af44da8-4a04-4981-9a59-e94e346d071e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 20 15:11:11 compute-1 nova_compute[225855]: 2026-01-20 15:11:11.505 225859 DEBUG oslo_concurrency.lockutils [req-caae48a2-4ec2-4722-b445-33f7aa0d19fc req-80ca1a28-7212-4b44-8074-016c8b5079dd 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-b661b8a2-8bea-46be-afe4-537fd2523387" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:11:11 compute-1 nova_compute[225855]: 2026-01-20 15:11:11.506 225859 DEBUG nova.network.neutron [req-caae48a2-4ec2-4722-b445-33f7aa0d19fc req-80ca1a28-7212-4b44-8074-016c8b5079dd 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b661b8a2-8bea-46be-afe4-537fd2523387] Refreshing network info cache for port 7af44da8-4a04-4981-9a59-e94e346d071e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 15:11:11 compute-1 nova_compute[225855]: 2026-01-20 15:11:11.508 225859 DEBUG nova.virt.libvirt.driver [None req-e06e1530-9323-4750-87da-ce51787654ba 17140eb73c0b4236807367396cc4959b eac67fc3f12d4e9f9e47de6b79eea88f - - default default] [instance: b661b8a2-8bea-46be-afe4-537fd2523387] Start _get_guest_xml network_info=[{"id": "7af44da8-4a04-4981-9a59-e94e346d071e", "address": "fa:16:3e:a5:4b:49", "network": {"id": "901fb6c4-d3d9-4ccd-b087-e1a254c7cc98", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-1118919703-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eac67fc3f12d4e9f9e47de6b79eea88f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7af44da8-4a", "ovs_interfaceid": "7af44da8-4a04-4981-9a59-e94e346d071e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'device_type': 'disk', 'encryption_options': None, 'size': 0, 'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'guest_format': None, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'image_id': 'a32b3e07-16d8-46fd-9a7b-c242c432fcf9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 20 15:11:11 compute-1 nova_compute[225855]: 2026-01-20 15:11:11.513 225859 WARNING nova.virt.libvirt.driver [None req-e06e1530-9323-4750-87da-ce51787654ba 17140eb73c0b4236807367396cc4959b eac67fc3f12d4e9f9e47de6b79eea88f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 15:11:11 compute-1 nova_compute[225855]: 2026-01-20 15:11:11.517 225859 DEBUG nova.virt.libvirt.host [None req-e06e1530-9323-4750-87da-ce51787654ba 17140eb73c0b4236807367396cc4959b eac67fc3f12d4e9f9e47de6b79eea88f - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 20 15:11:11 compute-1 nova_compute[225855]: 2026-01-20 15:11:11.517 225859 DEBUG nova.virt.libvirt.host [None req-e06e1530-9323-4750-87da-ce51787654ba 17140eb73c0b4236807367396cc4959b eac67fc3f12d4e9f9e47de6b79eea88f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 20 15:11:11 compute-1 nova_compute[225855]: 2026-01-20 15:11:11.522 225859 DEBUG nova.virt.libvirt.host [None req-e06e1530-9323-4750-87da-ce51787654ba 17140eb73c0b4236807367396cc4959b eac67fc3f12d4e9f9e47de6b79eea88f - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 20 15:11:11 compute-1 nova_compute[225855]: 2026-01-20 15:11:11.522 225859 DEBUG nova.virt.libvirt.host [None req-e06e1530-9323-4750-87da-ce51787654ba 17140eb73c0b4236807367396cc4959b eac67fc3f12d4e9f9e47de6b79eea88f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 20 15:11:11 compute-1 nova_compute[225855]: 2026-01-20 15:11:11.523 225859 DEBUG nova.virt.libvirt.driver [None req-e06e1530-9323-4750-87da-ce51787654ba 17140eb73c0b4236807367396cc4959b eac67fc3f12d4e9f9e47de6b79eea88f - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 20 15:11:11 compute-1 nova_compute[225855]: 2026-01-20 15:11:11.524 225859 DEBUG nova.virt.hardware [None req-e06e1530-9323-4750-87da-ce51787654ba 17140eb73c0b4236807367396cc4959b eac67fc3f12d4e9f9e47de6b79eea88f - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-20T14:21:55Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='522deaab-a741-4dbb-932d-d8b13a211c33',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 20 15:11:11 compute-1 nova_compute[225855]: 2026-01-20 15:11:11.524 225859 DEBUG nova.virt.hardware [None req-e06e1530-9323-4750-87da-ce51787654ba 17140eb73c0b4236807367396cc4959b eac67fc3f12d4e9f9e47de6b79eea88f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 20 15:11:11 compute-1 nova_compute[225855]: 2026-01-20 15:11:11.524 225859 DEBUG nova.virt.hardware [None req-e06e1530-9323-4750-87da-ce51787654ba 17140eb73c0b4236807367396cc4959b eac67fc3f12d4e9f9e47de6b79eea88f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 20 15:11:11 compute-1 nova_compute[225855]: 2026-01-20 15:11:11.525 225859 DEBUG nova.virt.hardware [None req-e06e1530-9323-4750-87da-ce51787654ba 17140eb73c0b4236807367396cc4959b eac67fc3f12d4e9f9e47de6b79eea88f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 20 15:11:11 compute-1 nova_compute[225855]: 2026-01-20 15:11:11.525 225859 DEBUG nova.virt.hardware [None req-e06e1530-9323-4750-87da-ce51787654ba 17140eb73c0b4236807367396cc4959b eac67fc3f12d4e9f9e47de6b79eea88f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 20 15:11:11 compute-1 nova_compute[225855]: 2026-01-20 15:11:11.525 225859 DEBUG nova.virt.hardware [None req-e06e1530-9323-4750-87da-ce51787654ba 17140eb73c0b4236807367396cc4959b eac67fc3f12d4e9f9e47de6b79eea88f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 20 15:11:11 compute-1 nova_compute[225855]: 2026-01-20 15:11:11.525 225859 DEBUG nova.virt.hardware [None req-e06e1530-9323-4750-87da-ce51787654ba 17140eb73c0b4236807367396cc4959b eac67fc3f12d4e9f9e47de6b79eea88f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 20 15:11:11 compute-1 nova_compute[225855]: 2026-01-20 15:11:11.526 225859 DEBUG nova.virt.hardware [None req-e06e1530-9323-4750-87da-ce51787654ba 17140eb73c0b4236807367396cc4959b eac67fc3f12d4e9f9e47de6b79eea88f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 20 15:11:11 compute-1 nova_compute[225855]: 2026-01-20 15:11:11.526 225859 DEBUG nova.virt.hardware [None req-e06e1530-9323-4750-87da-ce51787654ba 17140eb73c0b4236807367396cc4959b eac67fc3f12d4e9f9e47de6b79eea88f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 20 15:11:11 compute-1 nova_compute[225855]: 2026-01-20 15:11:11.526 225859 DEBUG nova.virt.hardware [None req-e06e1530-9323-4750-87da-ce51787654ba 17140eb73c0b4236807367396cc4959b eac67fc3f12d4e9f9e47de6b79eea88f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 20 15:11:11 compute-1 nova_compute[225855]: 2026-01-20 15:11:11.526 225859 DEBUG nova.virt.hardware [None req-e06e1530-9323-4750-87da-ce51787654ba 17140eb73c0b4236807367396cc4959b eac67fc3f12d4e9f9e47de6b79eea88f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 20 15:11:11 compute-1 nova_compute[225855]: 2026-01-20 15:11:11.529 225859 DEBUG oslo_concurrency.processutils [None req-e06e1530-9323-4750-87da-ce51787654ba 17140eb73c0b4236807367396cc4959b eac67fc3f12d4e9f9e47de6b79eea88f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:11:11 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 15:11:11 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1314462438' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:11:12 compute-1 nova_compute[225855]: 2026-01-20 15:11:12.000 225859 DEBUG oslo_concurrency.processutils [None req-e06e1530-9323-4750-87da-ce51787654ba 17140eb73c0b4236807367396cc4959b eac67fc3f12d4e9f9e47de6b79eea88f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:11:12 compute-1 nova_compute[225855]: 2026-01-20 15:11:12.024 225859 DEBUG nova.storage.rbd_utils [None req-e06e1530-9323-4750-87da-ce51787654ba 17140eb73c0b4236807367396cc4959b eac67fc3f12d4e9f9e47de6b79eea88f - - default default] rbd image b661b8a2-8bea-46be-afe4-537fd2523387_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:11:12 compute-1 nova_compute[225855]: 2026-01-20 15:11:12.028 225859 DEBUG oslo_concurrency.processutils [None req-e06e1530-9323-4750-87da-ce51787654ba 17140eb73c0b4236807367396cc4959b eac67fc3f12d4e9f9e47de6b79eea88f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:11:12 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/1314462438' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:11:12 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 15:11:12 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2034628683' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:11:12 compute-1 nova_compute[225855]: 2026-01-20 15:11:12.500 225859 DEBUG oslo_concurrency.processutils [None req-e06e1530-9323-4750-87da-ce51787654ba 17140eb73c0b4236807367396cc4959b eac67fc3f12d4e9f9e47de6b79eea88f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:11:12 compute-1 nova_compute[225855]: 2026-01-20 15:11:12.502 225859 DEBUG nova.virt.libvirt.vif [None req-e06e1530-9323-4750-87da-ce51787654ba 17140eb73c0b4236807367396cc4959b eac67fc3f12d4e9f9e47de6b79eea88f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T15:11:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-ServerTagsTestJSON-server-1893768580',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-servertagstestjson-server-1893768580',id=175,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='eac67fc3f12d4e9f9e47de6b79eea88f',ramdisk_id='',reservation_id='r-wkdfou8l',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerTagsTestJSON-232429935',owner_user_name='tempest-ServerTagsTestJSON-232429935-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T15:11:08Z,user_data=None,user_id='17140eb73c0b4236807367396cc4959b',uuid=b661b8a2-8bea-46be-afe4-537fd2523387,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7af44da8-4a04-4981-9a59-e94e346d071e", "address": "fa:16:3e:a5:4b:49", "network": {"id": "901fb6c4-d3d9-4ccd-b087-e1a254c7cc98", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-1118919703-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eac67fc3f12d4e9f9e47de6b79eea88f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7af44da8-4a", "ovs_interfaceid": "7af44da8-4a04-4981-9a59-e94e346d071e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 20 15:11:12 compute-1 nova_compute[225855]: 2026-01-20 15:11:12.502 225859 DEBUG nova.network.os_vif_util [None req-e06e1530-9323-4750-87da-ce51787654ba 17140eb73c0b4236807367396cc4959b eac67fc3f12d4e9f9e47de6b79eea88f - - default default] Converting VIF {"id": "7af44da8-4a04-4981-9a59-e94e346d071e", "address": "fa:16:3e:a5:4b:49", "network": {"id": "901fb6c4-d3d9-4ccd-b087-e1a254c7cc98", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-1118919703-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eac67fc3f12d4e9f9e47de6b79eea88f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7af44da8-4a", "ovs_interfaceid": "7af44da8-4a04-4981-9a59-e94e346d071e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 15:11:12 compute-1 nova_compute[225855]: 2026-01-20 15:11:12.503 225859 DEBUG nova.network.os_vif_util [None req-e06e1530-9323-4750-87da-ce51787654ba 17140eb73c0b4236807367396cc4959b eac67fc3f12d4e9f9e47de6b79eea88f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a5:4b:49,bridge_name='br-int',has_traffic_filtering=True,id=7af44da8-4a04-4981-9a59-e94e346d071e,network=Network(901fb6c4-d3d9-4ccd-b087-e1a254c7cc98),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7af44da8-4a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 15:11:12 compute-1 nova_compute[225855]: 2026-01-20 15:11:12.504 225859 DEBUG nova.objects.instance [None req-e06e1530-9323-4750-87da-ce51787654ba 17140eb73c0b4236807367396cc4959b eac67fc3f12d4e9f9e47de6b79eea88f - - default default] Lazy-loading 'pci_devices' on Instance uuid b661b8a2-8bea-46be-afe4-537fd2523387 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:11:12 compute-1 nova_compute[225855]: 2026-01-20 15:11:12.525 225859 DEBUG nova.virt.libvirt.driver [None req-e06e1530-9323-4750-87da-ce51787654ba 17140eb73c0b4236807367396cc4959b eac67fc3f12d4e9f9e47de6b79eea88f - - default default] [instance: b661b8a2-8bea-46be-afe4-537fd2523387] End _get_guest_xml xml=<domain type="kvm">
Jan 20 15:11:12 compute-1 nova_compute[225855]:   <uuid>b661b8a2-8bea-46be-afe4-537fd2523387</uuid>
Jan 20 15:11:12 compute-1 nova_compute[225855]:   <name>instance-000000af</name>
Jan 20 15:11:12 compute-1 nova_compute[225855]:   <memory>131072</memory>
Jan 20 15:11:12 compute-1 nova_compute[225855]:   <vcpu>1</vcpu>
Jan 20 15:11:12 compute-1 nova_compute[225855]:   <metadata>
Jan 20 15:11:12 compute-1 nova_compute[225855]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 15:11:12 compute-1 nova_compute[225855]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 15:11:12 compute-1 nova_compute[225855]:       <nova:name>tempest-ServerTagsTestJSON-server-1893768580</nova:name>
Jan 20 15:11:12 compute-1 nova_compute[225855]:       <nova:creationTime>2026-01-20 15:11:11</nova:creationTime>
Jan 20 15:11:12 compute-1 nova_compute[225855]:       <nova:flavor name="m1.nano">
Jan 20 15:11:12 compute-1 nova_compute[225855]:         <nova:memory>128</nova:memory>
Jan 20 15:11:12 compute-1 nova_compute[225855]:         <nova:disk>1</nova:disk>
Jan 20 15:11:12 compute-1 nova_compute[225855]:         <nova:swap>0</nova:swap>
Jan 20 15:11:12 compute-1 nova_compute[225855]:         <nova:ephemeral>0</nova:ephemeral>
Jan 20 15:11:12 compute-1 nova_compute[225855]:         <nova:vcpus>1</nova:vcpus>
Jan 20 15:11:12 compute-1 nova_compute[225855]:       </nova:flavor>
Jan 20 15:11:12 compute-1 nova_compute[225855]:       <nova:owner>
Jan 20 15:11:12 compute-1 nova_compute[225855]:         <nova:user uuid="17140eb73c0b4236807367396cc4959b">tempest-ServerTagsTestJSON-232429935-project-member</nova:user>
Jan 20 15:11:12 compute-1 nova_compute[225855]:         <nova:project uuid="eac67fc3f12d4e9f9e47de6b79eea88f">tempest-ServerTagsTestJSON-232429935</nova:project>
Jan 20 15:11:12 compute-1 nova_compute[225855]:       </nova:owner>
Jan 20 15:11:12 compute-1 nova_compute[225855]:       <nova:root type="image" uuid="a32b3e07-16d8-46fd-9a7b-c242c432fcf9"/>
Jan 20 15:11:12 compute-1 nova_compute[225855]:       <nova:ports>
Jan 20 15:11:12 compute-1 nova_compute[225855]:         <nova:port uuid="7af44da8-4a04-4981-9a59-e94e346d071e">
Jan 20 15:11:12 compute-1 nova_compute[225855]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Jan 20 15:11:12 compute-1 nova_compute[225855]:         </nova:port>
Jan 20 15:11:12 compute-1 nova_compute[225855]:       </nova:ports>
Jan 20 15:11:12 compute-1 nova_compute[225855]:     </nova:instance>
Jan 20 15:11:12 compute-1 nova_compute[225855]:   </metadata>
Jan 20 15:11:12 compute-1 nova_compute[225855]:   <sysinfo type="smbios">
Jan 20 15:11:12 compute-1 nova_compute[225855]:     <system>
Jan 20 15:11:12 compute-1 nova_compute[225855]:       <entry name="manufacturer">RDO</entry>
Jan 20 15:11:12 compute-1 nova_compute[225855]:       <entry name="product">OpenStack Compute</entry>
Jan 20 15:11:12 compute-1 nova_compute[225855]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 15:11:12 compute-1 nova_compute[225855]:       <entry name="serial">b661b8a2-8bea-46be-afe4-537fd2523387</entry>
Jan 20 15:11:12 compute-1 nova_compute[225855]:       <entry name="uuid">b661b8a2-8bea-46be-afe4-537fd2523387</entry>
Jan 20 15:11:12 compute-1 nova_compute[225855]:       <entry name="family">Virtual Machine</entry>
Jan 20 15:11:12 compute-1 nova_compute[225855]:     </system>
Jan 20 15:11:12 compute-1 nova_compute[225855]:   </sysinfo>
Jan 20 15:11:12 compute-1 nova_compute[225855]:   <os>
Jan 20 15:11:12 compute-1 nova_compute[225855]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 20 15:11:12 compute-1 nova_compute[225855]:     <boot dev="hd"/>
Jan 20 15:11:12 compute-1 nova_compute[225855]:     <smbios mode="sysinfo"/>
Jan 20 15:11:12 compute-1 nova_compute[225855]:   </os>
Jan 20 15:11:12 compute-1 nova_compute[225855]:   <features>
Jan 20 15:11:12 compute-1 nova_compute[225855]:     <acpi/>
Jan 20 15:11:12 compute-1 nova_compute[225855]:     <apic/>
Jan 20 15:11:12 compute-1 nova_compute[225855]:     <vmcoreinfo/>
Jan 20 15:11:12 compute-1 nova_compute[225855]:   </features>
Jan 20 15:11:12 compute-1 nova_compute[225855]:   <clock offset="utc">
Jan 20 15:11:12 compute-1 nova_compute[225855]:     <timer name="pit" tickpolicy="delay"/>
Jan 20 15:11:12 compute-1 nova_compute[225855]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 20 15:11:12 compute-1 nova_compute[225855]:     <timer name="hpet" present="no"/>
Jan 20 15:11:12 compute-1 nova_compute[225855]:   </clock>
Jan 20 15:11:12 compute-1 nova_compute[225855]:   <cpu mode="custom" match="exact">
Jan 20 15:11:12 compute-1 nova_compute[225855]:     <model>Nehalem</model>
Jan 20 15:11:12 compute-1 nova_compute[225855]:     <topology sockets="1" cores="1" threads="1"/>
Jan 20 15:11:12 compute-1 nova_compute[225855]:   </cpu>
Jan 20 15:11:12 compute-1 nova_compute[225855]:   <devices>
Jan 20 15:11:12 compute-1 nova_compute[225855]:     <disk type="network" device="disk">
Jan 20 15:11:12 compute-1 nova_compute[225855]:       <driver type="raw" cache="none"/>
Jan 20 15:11:12 compute-1 nova_compute[225855]:       <source protocol="rbd" name="vms/b661b8a2-8bea-46be-afe4-537fd2523387_disk">
Jan 20 15:11:12 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 15:11:12 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 15:11:12 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 15:11:12 compute-1 nova_compute[225855]:       </source>
Jan 20 15:11:12 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 15:11:12 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 15:11:12 compute-1 nova_compute[225855]:       </auth>
Jan 20 15:11:12 compute-1 nova_compute[225855]:       <target dev="vda" bus="virtio"/>
Jan 20 15:11:12 compute-1 nova_compute[225855]:     </disk>
Jan 20 15:11:12 compute-1 nova_compute[225855]:     <disk type="network" device="cdrom">
Jan 20 15:11:12 compute-1 nova_compute[225855]:       <driver type="raw" cache="none"/>
Jan 20 15:11:12 compute-1 nova_compute[225855]:       <source protocol="rbd" name="vms/b661b8a2-8bea-46be-afe4-537fd2523387_disk.config">
Jan 20 15:11:12 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 15:11:12 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 15:11:12 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 15:11:12 compute-1 nova_compute[225855]:       </source>
Jan 20 15:11:12 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 15:11:12 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 15:11:12 compute-1 nova_compute[225855]:       </auth>
Jan 20 15:11:12 compute-1 nova_compute[225855]:       <target dev="sda" bus="sata"/>
Jan 20 15:11:12 compute-1 nova_compute[225855]:     </disk>
Jan 20 15:11:12 compute-1 nova_compute[225855]:     <interface type="ethernet">
Jan 20 15:11:12 compute-1 nova_compute[225855]:       <mac address="fa:16:3e:a5:4b:49"/>
Jan 20 15:11:12 compute-1 nova_compute[225855]:       <model type="virtio"/>
Jan 20 15:11:12 compute-1 nova_compute[225855]:       <driver name="vhost" rx_queue_size="512"/>
Jan 20 15:11:12 compute-1 nova_compute[225855]:       <mtu size="1442"/>
Jan 20 15:11:12 compute-1 nova_compute[225855]:       <target dev="tap7af44da8-4a"/>
Jan 20 15:11:12 compute-1 nova_compute[225855]:     </interface>
Jan 20 15:11:12 compute-1 nova_compute[225855]:     <serial type="pty">
Jan 20 15:11:12 compute-1 nova_compute[225855]:       <log file="/var/lib/nova/instances/b661b8a2-8bea-46be-afe4-537fd2523387/console.log" append="off"/>
Jan 20 15:11:12 compute-1 nova_compute[225855]:     </serial>
Jan 20 15:11:12 compute-1 nova_compute[225855]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 20 15:11:12 compute-1 nova_compute[225855]:     <video>
Jan 20 15:11:12 compute-1 nova_compute[225855]:       <model type="virtio"/>
Jan 20 15:11:12 compute-1 nova_compute[225855]:     </video>
Jan 20 15:11:12 compute-1 nova_compute[225855]:     <input type="tablet" bus="usb"/>
Jan 20 15:11:12 compute-1 nova_compute[225855]:     <rng model="virtio">
Jan 20 15:11:12 compute-1 nova_compute[225855]:       <backend model="random">/dev/urandom</backend>
Jan 20 15:11:12 compute-1 nova_compute[225855]:     </rng>
Jan 20 15:11:12 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root"/>
Jan 20 15:11:12 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:11:12 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:11:12 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:11:12 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:11:12 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:11:12 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:11:12 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:11:12 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:11:12 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:11:12 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:11:12 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:11:12 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:11:12 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:11:12 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:11:12 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:11:12 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:11:12 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:11:12 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:11:12 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:11:12 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:11:12 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:11:12 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:11:12 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:11:12 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:11:12 compute-1 nova_compute[225855]:     <controller type="usb" index="0"/>
Jan 20 15:11:12 compute-1 nova_compute[225855]:     <memballoon model="virtio">
Jan 20 15:11:12 compute-1 nova_compute[225855]:       <stats period="10"/>
Jan 20 15:11:12 compute-1 nova_compute[225855]:     </memballoon>
Jan 20 15:11:12 compute-1 nova_compute[225855]:   </devices>
Jan 20 15:11:12 compute-1 nova_compute[225855]: </domain>
Jan 20 15:11:12 compute-1 nova_compute[225855]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 20 15:11:12 compute-1 nova_compute[225855]: 2026-01-20 15:11:12.526 225859 DEBUG nova.compute.manager [None req-e06e1530-9323-4750-87da-ce51787654ba 17140eb73c0b4236807367396cc4959b eac67fc3f12d4e9f9e47de6b79eea88f - - default default] [instance: b661b8a2-8bea-46be-afe4-537fd2523387] Preparing to wait for external event network-vif-plugged-7af44da8-4a04-4981-9a59-e94e346d071e prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 20 15:11:12 compute-1 nova_compute[225855]: 2026-01-20 15:11:12.526 225859 DEBUG oslo_concurrency.lockutils [None req-e06e1530-9323-4750-87da-ce51787654ba 17140eb73c0b4236807367396cc4959b eac67fc3f12d4e9f9e47de6b79eea88f - - default default] Acquiring lock "b661b8a2-8bea-46be-afe4-537fd2523387-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:11:12 compute-1 nova_compute[225855]: 2026-01-20 15:11:12.526 225859 DEBUG oslo_concurrency.lockutils [None req-e06e1530-9323-4750-87da-ce51787654ba 17140eb73c0b4236807367396cc4959b eac67fc3f12d4e9f9e47de6b79eea88f - - default default] Lock "b661b8a2-8bea-46be-afe4-537fd2523387-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:11:12 compute-1 nova_compute[225855]: 2026-01-20 15:11:12.527 225859 DEBUG oslo_concurrency.lockutils [None req-e06e1530-9323-4750-87da-ce51787654ba 17140eb73c0b4236807367396cc4959b eac67fc3f12d4e9f9e47de6b79eea88f - - default default] Lock "b661b8a2-8bea-46be-afe4-537fd2523387-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:11:12 compute-1 nova_compute[225855]: 2026-01-20 15:11:12.527 225859 DEBUG nova.virt.libvirt.vif [None req-e06e1530-9323-4750-87da-ce51787654ba 17140eb73c0b4236807367396cc4959b eac67fc3f12d4e9f9e47de6b79eea88f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T15:11:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-ServerTagsTestJSON-server-1893768580',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-servertagstestjson-server-1893768580',id=175,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='eac67fc3f12d4e9f9e47de6b79eea88f',ramdisk_id='',reservation_id='r-wkdfou8l',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerTagsTestJSON-232429935',owner_user_name='tempest-ServerTagsTestJSON-232429935-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T15:11:08Z,user_data=None,user_id='17140eb73c0b4236807367396cc4959b',uuid=b661b8a2-8bea-46be-afe4-537fd2523387,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7af44da8-4a04-4981-9a59-e94e346d071e", "address": "fa:16:3e:a5:4b:49", "network": {"id": "901fb6c4-d3d9-4ccd-b087-e1a254c7cc98", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-1118919703-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eac67fc3f12d4e9f9e47de6b79eea88f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7af44da8-4a", "ovs_interfaceid": "7af44da8-4a04-4981-9a59-e94e346d071e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 20 15:11:12 compute-1 nova_compute[225855]: 2026-01-20 15:11:12.527 225859 DEBUG nova.network.os_vif_util [None req-e06e1530-9323-4750-87da-ce51787654ba 17140eb73c0b4236807367396cc4959b eac67fc3f12d4e9f9e47de6b79eea88f - - default default] Converting VIF {"id": "7af44da8-4a04-4981-9a59-e94e346d071e", "address": "fa:16:3e:a5:4b:49", "network": {"id": "901fb6c4-d3d9-4ccd-b087-e1a254c7cc98", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-1118919703-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eac67fc3f12d4e9f9e47de6b79eea88f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7af44da8-4a", "ovs_interfaceid": "7af44da8-4a04-4981-9a59-e94e346d071e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 15:11:12 compute-1 nova_compute[225855]: 2026-01-20 15:11:12.528 225859 DEBUG nova.network.os_vif_util [None req-e06e1530-9323-4750-87da-ce51787654ba 17140eb73c0b4236807367396cc4959b eac67fc3f12d4e9f9e47de6b79eea88f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a5:4b:49,bridge_name='br-int',has_traffic_filtering=True,id=7af44da8-4a04-4981-9a59-e94e346d071e,network=Network(901fb6c4-d3d9-4ccd-b087-e1a254c7cc98),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7af44da8-4a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 15:11:12 compute-1 nova_compute[225855]: 2026-01-20 15:11:12.528 225859 DEBUG os_vif [None req-e06e1530-9323-4750-87da-ce51787654ba 17140eb73c0b4236807367396cc4959b eac67fc3f12d4e9f9e47de6b79eea88f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a5:4b:49,bridge_name='br-int',has_traffic_filtering=True,id=7af44da8-4a04-4981-9a59-e94e346d071e,network=Network(901fb6c4-d3d9-4ccd-b087-e1a254c7cc98),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7af44da8-4a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 20 15:11:12 compute-1 nova_compute[225855]: 2026-01-20 15:11:12.529 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:11:12 compute-1 nova_compute[225855]: 2026-01-20 15:11:12.529 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:11:12 compute-1 nova_compute[225855]: 2026-01-20 15:11:12.530 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 15:11:12 compute-1 nova_compute[225855]: 2026-01-20 15:11:12.532 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:11:12 compute-1 nova_compute[225855]: 2026-01-20 15:11:12.533 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7af44da8-4a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:11:12 compute-1 nova_compute[225855]: 2026-01-20 15:11:12.533 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap7af44da8-4a, col_values=(('external_ids', {'iface-id': '7af44da8-4a04-4981-9a59-e94e346d071e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a5:4b:49', 'vm-uuid': 'b661b8a2-8bea-46be-afe4-537fd2523387'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:11:12 compute-1 nova_compute[225855]: 2026-01-20 15:11:12.535 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:11:12 compute-1 NetworkManager[49104]: <info>  [1768921872.5362] manager: (tap7af44da8-4a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/325)
Jan 20 15:11:12 compute-1 nova_compute[225855]: 2026-01-20 15:11:12.538 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 15:11:12 compute-1 nova_compute[225855]: 2026-01-20 15:11:12.544 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:11:12 compute-1 nova_compute[225855]: 2026-01-20 15:11:12.545 225859 INFO os_vif [None req-e06e1530-9323-4750-87da-ce51787654ba 17140eb73c0b4236807367396cc4959b eac67fc3f12d4e9f9e47de6b79eea88f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a5:4b:49,bridge_name='br-int',has_traffic_filtering=True,id=7af44da8-4a04-4981-9a59-e94e346d071e,network=Network(901fb6c4-d3d9-4ccd-b087-e1a254c7cc98),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7af44da8-4a')
Jan 20 15:11:12 compute-1 nova_compute[225855]: 2026-01-20 15:11:12.617 225859 DEBUG nova.virt.libvirt.driver [None req-e06e1530-9323-4750-87da-ce51787654ba 17140eb73c0b4236807367396cc4959b eac67fc3f12d4e9f9e47de6b79eea88f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 15:11:12 compute-1 nova_compute[225855]: 2026-01-20 15:11:12.618 225859 DEBUG nova.virt.libvirt.driver [None req-e06e1530-9323-4750-87da-ce51787654ba 17140eb73c0b4236807367396cc4959b eac67fc3f12d4e9f9e47de6b79eea88f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 15:11:12 compute-1 nova_compute[225855]: 2026-01-20 15:11:12.618 225859 DEBUG nova.virt.libvirt.driver [None req-e06e1530-9323-4750-87da-ce51787654ba 17140eb73c0b4236807367396cc4959b eac67fc3f12d4e9f9e47de6b79eea88f - - default default] No VIF found with MAC fa:16:3e:a5:4b:49, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 20 15:11:12 compute-1 nova_compute[225855]: 2026-01-20 15:11:12.619 225859 INFO nova.virt.libvirt.driver [None req-e06e1530-9323-4750-87da-ce51787654ba 17140eb73c0b4236807367396cc4959b eac67fc3f12d4e9f9e47de6b79eea88f - - default default] [instance: b661b8a2-8bea-46be-afe4-537fd2523387] Using config drive
Jan 20 15:11:12 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:11:12 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:11:12 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:11:12.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:11:12 compute-1 nova_compute[225855]: 2026-01-20 15:11:12.654 225859 DEBUG nova.storage.rbd_utils [None req-e06e1530-9323-4750-87da-ce51787654ba 17140eb73c0b4236807367396cc4959b eac67fc3f12d4e9f9e47de6b79eea88f - - default default] rbd image b661b8a2-8bea-46be-afe4-537fd2523387_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:11:12 compute-1 nova_compute[225855]: 2026-01-20 15:11:12.658 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:11:13 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:11:13 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:11:13 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:11:13.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:11:13 compute-1 nova_compute[225855]: 2026-01-20 15:11:13.116 225859 INFO nova.virt.libvirt.driver [None req-e06e1530-9323-4750-87da-ce51787654ba 17140eb73c0b4236807367396cc4959b eac67fc3f12d4e9f9e47de6b79eea88f - - default default] [instance: b661b8a2-8bea-46be-afe4-537fd2523387] Creating config drive at /var/lib/nova/instances/b661b8a2-8bea-46be-afe4-537fd2523387/disk.config
Jan 20 15:11:13 compute-1 nova_compute[225855]: 2026-01-20 15:11:13.126 225859 DEBUG oslo_concurrency.processutils [None req-e06e1530-9323-4750-87da-ce51787654ba 17140eb73c0b4236807367396cc4959b eac67fc3f12d4e9f9e47de6b79eea88f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b661b8a2-8bea-46be-afe4-537fd2523387/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp62q643zy execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:11:13 compute-1 nova_compute[225855]: 2026-01-20 15:11:13.159 225859 DEBUG nova.network.neutron [req-caae48a2-4ec2-4722-b445-33f7aa0d19fc req-80ca1a28-7212-4b44-8074-016c8b5079dd 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b661b8a2-8bea-46be-afe4-537fd2523387] Updated VIF entry in instance network info cache for port 7af44da8-4a04-4981-9a59-e94e346d071e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 15:11:13 compute-1 nova_compute[225855]: 2026-01-20 15:11:13.161 225859 DEBUG nova.network.neutron [req-caae48a2-4ec2-4722-b445-33f7aa0d19fc req-80ca1a28-7212-4b44-8074-016c8b5079dd 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b661b8a2-8bea-46be-afe4-537fd2523387] Updating instance_info_cache with network_info: [{"id": "7af44da8-4a04-4981-9a59-e94e346d071e", "address": "fa:16:3e:a5:4b:49", "network": {"id": "901fb6c4-d3d9-4ccd-b087-e1a254c7cc98", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-1118919703-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eac67fc3f12d4e9f9e47de6b79eea88f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7af44da8-4a", "ovs_interfaceid": "7af44da8-4a04-4981-9a59-e94e346d071e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:11:13 compute-1 nova_compute[225855]: 2026-01-20 15:11:13.183 225859 DEBUG oslo_concurrency.lockutils [req-caae48a2-4ec2-4722-b445-33f7aa0d19fc req-80ca1a28-7212-4b44-8074-016c8b5079dd 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-b661b8a2-8bea-46be-afe4-537fd2523387" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:11:13 compute-1 nova_compute[225855]: 2026-01-20 15:11:13.263 225859 DEBUG oslo_concurrency.processutils [None req-e06e1530-9323-4750-87da-ce51787654ba 17140eb73c0b4236807367396cc4959b eac67fc3f12d4e9f9e47de6b79eea88f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b661b8a2-8bea-46be-afe4-537fd2523387/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp62q643zy" returned: 0 in 0.137s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:11:13 compute-1 nova_compute[225855]: 2026-01-20 15:11:13.300 225859 DEBUG nova.storage.rbd_utils [None req-e06e1530-9323-4750-87da-ce51787654ba 17140eb73c0b4236807367396cc4959b eac67fc3f12d4e9f9e47de6b79eea88f - - default default] rbd image b661b8a2-8bea-46be-afe4-537fd2523387_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:11:13 compute-1 nova_compute[225855]: 2026-01-20 15:11:13.306 225859 DEBUG oslo_concurrency.processutils [None req-e06e1530-9323-4750-87da-ce51787654ba 17140eb73c0b4236807367396cc4959b eac67fc3f12d4e9f9e47de6b79eea88f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b661b8a2-8bea-46be-afe4-537fd2523387/disk.config b661b8a2-8bea-46be-afe4-537fd2523387_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:11:13 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e388 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:11:13 compute-1 ceph-mon[81775]: pgmap v2641: 321 pgs: 321 active+clean; 293 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 3.3 MiB/s wr, 80 op/s
Jan 20 15:11:13 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/2034628683' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:11:13 compute-1 nova_compute[225855]: 2026-01-20 15:11:13.485 225859 DEBUG oslo_concurrency.processutils [None req-e06e1530-9323-4750-87da-ce51787654ba 17140eb73c0b4236807367396cc4959b eac67fc3f12d4e9f9e47de6b79eea88f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b661b8a2-8bea-46be-afe4-537fd2523387/disk.config b661b8a2-8bea-46be-afe4-537fd2523387_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.178s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:11:13 compute-1 nova_compute[225855]: 2026-01-20 15:11:13.486 225859 INFO nova.virt.libvirt.driver [None req-e06e1530-9323-4750-87da-ce51787654ba 17140eb73c0b4236807367396cc4959b eac67fc3f12d4e9f9e47de6b79eea88f - - default default] [instance: b661b8a2-8bea-46be-afe4-537fd2523387] Deleting local config drive /var/lib/nova/instances/b661b8a2-8bea-46be-afe4-537fd2523387/disk.config because it was imported into RBD.
Jan 20 15:11:13 compute-1 kernel: tap7af44da8-4a: entered promiscuous mode
Jan 20 15:11:13 compute-1 NetworkManager[49104]: <info>  [1768921873.5351] manager: (tap7af44da8-4a): new Tun device (/org/freedesktop/NetworkManager/Devices/326)
Jan 20 15:11:13 compute-1 ovn_controller[130490]: 2026-01-20T15:11:13Z|00788|binding|INFO|Claiming lport 7af44da8-4a04-4981-9a59-e94e346d071e for this chassis.
Jan 20 15:11:13 compute-1 nova_compute[225855]: 2026-01-20 15:11:13.536 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:11:13 compute-1 ovn_controller[130490]: 2026-01-20T15:11:13Z|00789|binding|INFO|7af44da8-4a04-4981-9a59-e94e346d071e: Claiming fa:16:3e:a5:4b:49 10.100.0.8
Jan 20 15:11:13 compute-1 nova_compute[225855]: 2026-01-20 15:11:13.541 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:11:13 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:11:13.552 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a5:4b:49 10.100.0.8'], port_security=['fa:16:3e:a5:4b:49 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'b661b8a2-8bea-46be-afe4-537fd2523387', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-901fb6c4-d3d9-4ccd-b087-e1a254c7cc98', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'eac67fc3f12d4e9f9e47de6b79eea88f', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd744446f-90e4-4fa0-a2ba-33abbbe03344', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ff7bbda1-70af-44d2-a4e3-9c7dbd69983d, chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], logical_port=7af44da8-4a04-4981-9a59-e94e346d071e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:11:13 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:11:13.554 140354 INFO neutron.agent.ovn.metadata.agent [-] Port 7af44da8-4a04-4981-9a59-e94e346d071e in datapath 901fb6c4-d3d9-4ccd-b087-e1a254c7cc98 bound to our chassis
Jan 20 15:11:13 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:11:13.556 140354 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 901fb6c4-d3d9-4ccd-b087-e1a254c7cc98
Jan 20 15:11:13 compute-1 systemd-udevd[298445]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 15:11:13 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:11:13.566 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[7f3aa4b6-fbd9-4680-a550-57700a490455]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:11:13 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:11:13.568 140354 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap901fb6c4-d1 in ovnmeta-901fb6c4-d3d9-4ccd-b087-e1a254c7cc98 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 20 15:11:13 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:11:13.569 229707 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap901fb6c4-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 20 15:11:13 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:11:13.569 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[0167dad4-04db-4436-9963-56b87f42d097]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:11:13 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:11:13.570 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[eb309a56-916e-4426-a6ff-8d7a87be834c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:11:13 compute-1 NetworkManager[49104]: <info>  [1768921873.5762] device (tap7af44da8-4a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 20 15:11:13 compute-1 NetworkManager[49104]: <info>  [1768921873.5781] device (tap7af44da8-4a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 20 15:11:13 compute-1 systemd-machined[194361]: New machine qemu-91-instance-000000af.
Jan 20 15:11:13 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:11:13.583 140466 DEBUG oslo.privsep.daemon [-] privsep: reply[029d9c6f-6cb6-440f-9b4f-02bd84051132]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:11:13 compute-1 nova_compute[225855]: 2026-01-20 15:11:13.604 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:11:13 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:11:13.605 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[dbbd4643-f09a-4b14-aa27-9aad453a35da]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:11:13 compute-1 systemd[1]: Started Virtual Machine qemu-91-instance-000000af.
Jan 20 15:11:13 compute-1 ovn_controller[130490]: 2026-01-20T15:11:13Z|00790|binding|INFO|Setting lport 7af44da8-4a04-4981-9a59-e94e346d071e ovn-installed in OVS
Jan 20 15:11:13 compute-1 ovn_controller[130490]: 2026-01-20T15:11:13Z|00791|binding|INFO|Setting lport 7af44da8-4a04-4981-9a59-e94e346d071e up in Southbound
Jan 20 15:11:13 compute-1 nova_compute[225855]: 2026-01-20 15:11:13.612 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:11:13 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:11:13.634 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[93de9b1e-b811-4474-8aeb-bf2f02020aa9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:11:13 compute-1 systemd-udevd[298450]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 15:11:13 compute-1 NetworkManager[49104]: <info>  [1768921873.6408] manager: (tap901fb6c4-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/327)
Jan 20 15:11:13 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:11:13.640 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[373fcd8f-be68-485a-860a-20295029bf82]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:11:13 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:11:13.668 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[5e4ecec0-fcae-4c39-881f-1d317e73090a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:11:13 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:11:13.672 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[54846de8-3321-44e8-a8e3-5cfab4b626db]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:11:13 compute-1 NetworkManager[49104]: <info>  [1768921873.6945] device (tap901fb6c4-d0): carrier: link connected
Jan 20 15:11:13 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:11:13.701 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[c07daa86-6bca-4c08-b901-22949ae35337]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:11:13 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:11:13.719 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[db9b66ae-4a38-481a-86fb-c8b5f4a45a40]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap901fb6c4-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9f:d6:37'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 224], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 691262, 'reachable_time': 16516, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 298481, 'error': None, 'target': 'ovnmeta-901fb6c4-d3d9-4ccd-b087-e1a254c7cc98', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:11:13 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:11:13.735 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[b4c82197-b2fe-42b4-b1f0-482670580025]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe9f:d637'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 691262, 'tstamp': 691262}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 298482, 'error': None, 'target': 'ovnmeta-901fb6c4-d3d9-4ccd-b087-e1a254c7cc98', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:11:13 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:11:13.754 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[842062b4-bd57-4081-9cef-fe1db793ece8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap901fb6c4-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9f:d6:37'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 224], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 691262, 'reachable_time': 16516, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 298483, 'error': None, 'target': 'ovnmeta-901fb6c4-d3d9-4ccd-b087-e1a254c7cc98', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:11:13 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:11:13.782 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[ba2f1476-8774-4466-8cd4-1f3ce7054507]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:11:13 compute-1 nova_compute[225855]: 2026-01-20 15:11:13.822 225859 DEBUG nova.compute.manager [req-2613c3b2-6f92-470e-a509-0bd5aa207be6 req-5c751510-433f-445b-ba03-96837db9e7a5 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b661b8a2-8bea-46be-afe4-537fd2523387] Received event network-vif-plugged-7af44da8-4a04-4981-9a59-e94e346d071e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:11:13 compute-1 nova_compute[225855]: 2026-01-20 15:11:13.822 225859 DEBUG oslo_concurrency.lockutils [req-2613c3b2-6f92-470e-a509-0bd5aa207be6 req-5c751510-433f-445b-ba03-96837db9e7a5 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "b661b8a2-8bea-46be-afe4-537fd2523387-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:11:13 compute-1 nova_compute[225855]: 2026-01-20 15:11:13.823 225859 DEBUG oslo_concurrency.lockutils [req-2613c3b2-6f92-470e-a509-0bd5aa207be6 req-5c751510-433f-445b-ba03-96837db9e7a5 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "b661b8a2-8bea-46be-afe4-537fd2523387-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:11:13 compute-1 nova_compute[225855]: 2026-01-20 15:11:13.823 225859 DEBUG oslo_concurrency.lockutils [req-2613c3b2-6f92-470e-a509-0bd5aa207be6 req-5c751510-433f-445b-ba03-96837db9e7a5 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "b661b8a2-8bea-46be-afe4-537fd2523387-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:11:13 compute-1 nova_compute[225855]: 2026-01-20 15:11:13.823 225859 DEBUG nova.compute.manager [req-2613c3b2-6f92-470e-a509-0bd5aa207be6 req-5c751510-433f-445b-ba03-96837db9e7a5 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b661b8a2-8bea-46be-afe4-537fd2523387] Processing event network-vif-plugged-7af44da8-4a04-4981-9a59-e94e346d071e _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 20 15:11:13 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:11:13.841 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[fb8e1d83-6dc5-4520-b8a2-7220af6611ef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:11:13 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:11:13.842 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap901fb6c4-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:11:13 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:11:13.843 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 15:11:13 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:11:13.843 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap901fb6c4-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:11:13 compute-1 nova_compute[225855]: 2026-01-20 15:11:13.844 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:11:13 compute-1 NetworkManager[49104]: <info>  [1768921873.8454] manager: (tap901fb6c4-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/328)
Jan 20 15:11:13 compute-1 kernel: tap901fb6c4-d0: entered promiscuous mode
Jan 20 15:11:13 compute-1 nova_compute[225855]: 2026-01-20 15:11:13.846 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:11:13 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:11:13.850 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap901fb6c4-d0, col_values=(('external_ids', {'iface-id': '46940fc6-bbdc-4a44-b120-2c8feb7a2a8b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:11:13 compute-1 nova_compute[225855]: 2026-01-20 15:11:13.851 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:11:13 compute-1 ovn_controller[130490]: 2026-01-20T15:11:13Z|00792|binding|INFO|Releasing lport 46940fc6-bbdc-4a44-b120-2c8feb7a2a8b from this chassis (sb_readonly=0)
Jan 20 15:11:13 compute-1 nova_compute[225855]: 2026-01-20 15:11:13.852 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:11:13 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:11:13.854 140354 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/901fb6c4-d3d9-4ccd-b087-e1a254c7cc98.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/901fb6c4-d3d9-4ccd-b087-e1a254c7cc98.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 20 15:11:13 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:11:13.855 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[72cbbecc-6b10-4ee8-a181-41f145e99a46]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:11:13 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:11:13.857 140354 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 20 15:11:13 compute-1 ovn_metadata_agent[140349]: global
Jan 20 15:11:13 compute-1 ovn_metadata_agent[140349]:     log         /dev/log local0 debug
Jan 20 15:11:13 compute-1 ovn_metadata_agent[140349]:     log-tag     haproxy-metadata-proxy-901fb6c4-d3d9-4ccd-b087-e1a254c7cc98
Jan 20 15:11:13 compute-1 ovn_metadata_agent[140349]:     user        root
Jan 20 15:11:13 compute-1 ovn_metadata_agent[140349]:     group       root
Jan 20 15:11:13 compute-1 ovn_metadata_agent[140349]:     maxconn     1024
Jan 20 15:11:13 compute-1 ovn_metadata_agent[140349]:     pidfile     /var/lib/neutron/external/pids/901fb6c4-d3d9-4ccd-b087-e1a254c7cc98.pid.haproxy
Jan 20 15:11:13 compute-1 ovn_metadata_agent[140349]:     daemon
Jan 20 15:11:13 compute-1 ovn_metadata_agent[140349]: 
Jan 20 15:11:13 compute-1 ovn_metadata_agent[140349]: defaults
Jan 20 15:11:13 compute-1 ovn_metadata_agent[140349]:     log global
Jan 20 15:11:13 compute-1 ovn_metadata_agent[140349]:     mode http
Jan 20 15:11:13 compute-1 ovn_metadata_agent[140349]:     option httplog
Jan 20 15:11:13 compute-1 ovn_metadata_agent[140349]:     option dontlognull
Jan 20 15:11:13 compute-1 ovn_metadata_agent[140349]:     option http-server-close
Jan 20 15:11:13 compute-1 ovn_metadata_agent[140349]:     option forwardfor
Jan 20 15:11:13 compute-1 ovn_metadata_agent[140349]:     retries                 3
Jan 20 15:11:13 compute-1 ovn_metadata_agent[140349]:     timeout http-request    30s
Jan 20 15:11:13 compute-1 ovn_metadata_agent[140349]:     timeout connect         30s
Jan 20 15:11:13 compute-1 ovn_metadata_agent[140349]:     timeout client          32s
Jan 20 15:11:13 compute-1 ovn_metadata_agent[140349]:     timeout server          32s
Jan 20 15:11:13 compute-1 ovn_metadata_agent[140349]:     timeout http-keep-alive 30s
Jan 20 15:11:13 compute-1 ovn_metadata_agent[140349]: 
Jan 20 15:11:13 compute-1 ovn_metadata_agent[140349]: 
Jan 20 15:11:13 compute-1 ovn_metadata_agent[140349]: listen listener
Jan 20 15:11:13 compute-1 ovn_metadata_agent[140349]:     bind 169.254.169.254:80
Jan 20 15:11:13 compute-1 ovn_metadata_agent[140349]:     server metadata /var/lib/neutron/metadata_proxy
Jan 20 15:11:13 compute-1 ovn_metadata_agent[140349]:     http-request add-header X-OVN-Network-ID 901fb6c4-d3d9-4ccd-b087-e1a254c7cc98
Jan 20 15:11:13 compute-1 ovn_metadata_agent[140349]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 20 15:11:13 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:11:13.858 140354 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-901fb6c4-d3d9-4ccd-b087-e1a254c7cc98', 'env', 'PROCESS_TAG=haproxy-901fb6c4-d3d9-4ccd-b087-e1a254c7cc98', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/901fb6c4-d3d9-4ccd-b087-e1a254c7cc98.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 20 15:11:13 compute-1 nova_compute[225855]: 2026-01-20 15:11:13.865 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:11:13 compute-1 nova_compute[225855]: 2026-01-20 15:11:13.974 225859 DEBUG nova.compute.manager [None req-e06e1530-9323-4750-87da-ce51787654ba 17140eb73c0b4236807367396cc4959b eac67fc3f12d4e9f9e47de6b79eea88f - - default default] [instance: b661b8a2-8bea-46be-afe4-537fd2523387] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 20 15:11:13 compute-1 nova_compute[225855]: 2026-01-20 15:11:13.975 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768921873.974276, b661b8a2-8bea-46be-afe4-537fd2523387 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:11:13 compute-1 nova_compute[225855]: 2026-01-20 15:11:13.975 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: b661b8a2-8bea-46be-afe4-537fd2523387] VM Started (Lifecycle Event)
Jan 20 15:11:13 compute-1 nova_compute[225855]: 2026-01-20 15:11:13.980 225859 DEBUG nova.virt.libvirt.driver [None req-e06e1530-9323-4750-87da-ce51787654ba 17140eb73c0b4236807367396cc4959b eac67fc3f12d4e9f9e47de6b79eea88f - - default default] [instance: b661b8a2-8bea-46be-afe4-537fd2523387] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 20 15:11:13 compute-1 nova_compute[225855]: 2026-01-20 15:11:13.984 225859 INFO nova.virt.libvirt.driver [-] [instance: b661b8a2-8bea-46be-afe4-537fd2523387] Instance spawned successfully.
Jan 20 15:11:13 compute-1 nova_compute[225855]: 2026-01-20 15:11:13.984 225859 DEBUG nova.virt.libvirt.driver [None req-e06e1530-9323-4750-87da-ce51787654ba 17140eb73c0b4236807367396cc4959b eac67fc3f12d4e9f9e47de6b79eea88f - - default default] [instance: b661b8a2-8bea-46be-afe4-537fd2523387] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 20 15:11:14 compute-1 nova_compute[225855]: 2026-01-20 15:11:14.016 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: b661b8a2-8bea-46be-afe4-537fd2523387] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:11:14 compute-1 nova_compute[225855]: 2026-01-20 15:11:14.021 225859 DEBUG nova.virt.libvirt.driver [None req-e06e1530-9323-4750-87da-ce51787654ba 17140eb73c0b4236807367396cc4959b eac67fc3f12d4e9f9e47de6b79eea88f - - default default] [instance: b661b8a2-8bea-46be-afe4-537fd2523387] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:11:14 compute-1 nova_compute[225855]: 2026-01-20 15:11:14.022 225859 DEBUG nova.virt.libvirt.driver [None req-e06e1530-9323-4750-87da-ce51787654ba 17140eb73c0b4236807367396cc4959b eac67fc3f12d4e9f9e47de6b79eea88f - - default default] [instance: b661b8a2-8bea-46be-afe4-537fd2523387] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:11:14 compute-1 nova_compute[225855]: 2026-01-20 15:11:14.023 225859 DEBUG nova.virt.libvirt.driver [None req-e06e1530-9323-4750-87da-ce51787654ba 17140eb73c0b4236807367396cc4959b eac67fc3f12d4e9f9e47de6b79eea88f - - default default] [instance: b661b8a2-8bea-46be-afe4-537fd2523387] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:11:14 compute-1 nova_compute[225855]: 2026-01-20 15:11:14.023 225859 DEBUG nova.virt.libvirt.driver [None req-e06e1530-9323-4750-87da-ce51787654ba 17140eb73c0b4236807367396cc4959b eac67fc3f12d4e9f9e47de6b79eea88f - - default default] [instance: b661b8a2-8bea-46be-afe4-537fd2523387] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:11:14 compute-1 nova_compute[225855]: 2026-01-20 15:11:14.024 225859 DEBUG nova.virt.libvirt.driver [None req-e06e1530-9323-4750-87da-ce51787654ba 17140eb73c0b4236807367396cc4959b eac67fc3f12d4e9f9e47de6b79eea88f - - default default] [instance: b661b8a2-8bea-46be-afe4-537fd2523387] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:11:14 compute-1 nova_compute[225855]: 2026-01-20 15:11:14.024 225859 DEBUG nova.virt.libvirt.driver [None req-e06e1530-9323-4750-87da-ce51787654ba 17140eb73c0b4236807367396cc4959b eac67fc3f12d4e9f9e47de6b79eea88f - - default default] [instance: b661b8a2-8bea-46be-afe4-537fd2523387] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:11:14 compute-1 nova_compute[225855]: 2026-01-20 15:11:14.031 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: b661b8a2-8bea-46be-afe4-537fd2523387] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 15:11:14 compute-1 nova_compute[225855]: 2026-01-20 15:11:14.062 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: b661b8a2-8bea-46be-afe4-537fd2523387] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 15:11:14 compute-1 nova_compute[225855]: 2026-01-20 15:11:14.063 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768921873.97832, b661b8a2-8bea-46be-afe4-537fd2523387 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:11:14 compute-1 nova_compute[225855]: 2026-01-20 15:11:14.063 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: b661b8a2-8bea-46be-afe4-537fd2523387] VM Paused (Lifecycle Event)
Jan 20 15:11:14 compute-1 nova_compute[225855]: 2026-01-20 15:11:14.263 225859 INFO nova.compute.manager [None req-e06e1530-9323-4750-87da-ce51787654ba 17140eb73c0b4236807367396cc4959b eac67fc3f12d4e9f9e47de6b79eea88f - - default default] [instance: b661b8a2-8bea-46be-afe4-537fd2523387] Took 5.92 seconds to spawn the instance on the hypervisor.
Jan 20 15:11:14 compute-1 nova_compute[225855]: 2026-01-20 15:11:14.264 225859 DEBUG nova.compute.manager [None req-e06e1530-9323-4750-87da-ce51787654ba 17140eb73c0b4236807367396cc4959b eac67fc3f12d4e9f9e47de6b79eea88f - - default default] [instance: b661b8a2-8bea-46be-afe4-537fd2523387] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:11:14 compute-1 nova_compute[225855]: 2026-01-20 15:11:14.266 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: b661b8a2-8bea-46be-afe4-537fd2523387] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:11:14 compute-1 nova_compute[225855]: 2026-01-20 15:11:14.278 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768921873.9803827, b661b8a2-8bea-46be-afe4-537fd2523387 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:11:14 compute-1 nova_compute[225855]: 2026-01-20 15:11:14.278 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: b661b8a2-8bea-46be-afe4-537fd2523387] VM Resumed (Lifecycle Event)
Jan 20 15:11:14 compute-1 podman[298555]: 2026-01-20 15:11:14.285129205 +0000 UTC m=+0.062528505 container create 39c10f61ce58f80611efb9597b63357309abfc9bfb51660b161305b57e340c4b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-901fb6c4-d3d9-4ccd-b087-e1a254c7cc98, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 20 15:11:14 compute-1 podman[298555]: 2026-01-20 15:11:14.246633163 +0000 UTC m=+0.024032483 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 20 15:11:14 compute-1 systemd[1]: Started libpod-conmon-39c10f61ce58f80611efb9597b63357309abfc9bfb51660b161305b57e340c4b.scope.
Jan 20 15:11:14 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/1024057289' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:11:14 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/1024057289' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:11:14 compute-1 systemd[1]: Started libcrun container.
Jan 20 15:11:14 compute-1 nova_compute[225855]: 2026-01-20 15:11:14.388 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: b661b8a2-8bea-46be-afe4-537fd2523387] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:11:14 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64736f063f2c1d26d577945ba7fc6e4bee2fe41a524dd9cef1de3e35037e7f23/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 20 15:11:14 compute-1 nova_compute[225855]: 2026-01-20 15:11:14.398 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: b661b8a2-8bea-46be-afe4-537fd2523387] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 15:11:14 compute-1 podman[298555]: 2026-01-20 15:11:14.403251777 +0000 UTC m=+0.180651107 container init 39c10f61ce58f80611efb9597b63357309abfc9bfb51660b161305b57e340c4b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-901fb6c4-d3d9-4ccd-b087-e1a254c7cc98, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Jan 20 15:11:14 compute-1 podman[298555]: 2026-01-20 15:11:14.410434441 +0000 UTC m=+0.187833741 container start 39c10f61ce58f80611efb9597b63357309abfc9bfb51660b161305b57e340c4b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-901fb6c4-d3d9-4ccd-b087-e1a254c7cc98, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 20 15:11:14 compute-1 nova_compute[225855]: 2026-01-20 15:11:14.426 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: b661b8a2-8bea-46be-afe4-537fd2523387] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 15:11:14 compute-1 nova_compute[225855]: 2026-01-20 15:11:14.428 225859 INFO nova.compute.manager [None req-e06e1530-9323-4750-87da-ce51787654ba 17140eb73c0b4236807367396cc4959b eac67fc3f12d4e9f9e47de6b79eea88f - - default default] [instance: b661b8a2-8bea-46be-afe4-537fd2523387] Took 7.00 seconds to build instance.
Jan 20 15:11:14 compute-1 neutron-haproxy-ovnmeta-901fb6c4-d3d9-4ccd-b087-e1a254c7cc98[298570]: [NOTICE]   (298574) : New worker (298576) forked
Jan 20 15:11:14 compute-1 neutron-haproxy-ovnmeta-901fb6c4-d3d9-4ccd-b087-e1a254c7cc98[298570]: [NOTICE]   (298574) : Loading success.
Jan 20 15:11:14 compute-1 nova_compute[225855]: 2026-01-20 15:11:14.451 225859 DEBUG oslo_concurrency.lockutils [None req-e06e1530-9323-4750-87da-ce51787654ba 17140eb73c0b4236807367396cc4959b eac67fc3f12d4e9f9e47de6b79eea88f - - default default] Lock "b661b8a2-8bea-46be-afe4-537fd2523387" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.096s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:11:14 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:11:14 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:11:14 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:11:14.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:11:15 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:11:15 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:11:15 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:11:15.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:11:15 compute-1 ceph-mon[81775]: pgmap v2642: 321 pgs: 321 active+clean; 303 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 3.9 MiB/s wr, 100 op/s
Jan 20 15:11:15 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 20 15:11:15 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/292567124' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:11:15 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 20 15:11:15 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/292567124' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:11:15 compute-1 nova_compute[225855]: 2026-01-20 15:11:15.910 225859 DEBUG nova.compute.manager [req-c9d531f8-7d03-45b4-af5a-f18c988143bf req-77d546d8-d984-40a1-98ff-8b4fd81587e1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b661b8a2-8bea-46be-afe4-537fd2523387] Received event network-vif-plugged-7af44da8-4a04-4981-9a59-e94e346d071e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:11:15 compute-1 nova_compute[225855]: 2026-01-20 15:11:15.910 225859 DEBUG oslo_concurrency.lockutils [req-c9d531f8-7d03-45b4-af5a-f18c988143bf req-77d546d8-d984-40a1-98ff-8b4fd81587e1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "b661b8a2-8bea-46be-afe4-537fd2523387-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:11:15 compute-1 nova_compute[225855]: 2026-01-20 15:11:15.910 225859 DEBUG oslo_concurrency.lockutils [req-c9d531f8-7d03-45b4-af5a-f18c988143bf req-77d546d8-d984-40a1-98ff-8b4fd81587e1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "b661b8a2-8bea-46be-afe4-537fd2523387-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:11:15 compute-1 nova_compute[225855]: 2026-01-20 15:11:15.911 225859 DEBUG oslo_concurrency.lockutils [req-c9d531f8-7d03-45b4-af5a-f18c988143bf req-77d546d8-d984-40a1-98ff-8b4fd81587e1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "b661b8a2-8bea-46be-afe4-537fd2523387-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:11:15 compute-1 nova_compute[225855]: 2026-01-20 15:11:15.911 225859 DEBUG nova.compute.manager [req-c9d531f8-7d03-45b4-af5a-f18c988143bf req-77d546d8-d984-40a1-98ff-8b4fd81587e1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b661b8a2-8bea-46be-afe4-537fd2523387] No waiting events found dispatching network-vif-plugged-7af44da8-4a04-4981-9a59-e94e346d071e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:11:15 compute-1 nova_compute[225855]: 2026-01-20 15:11:15.911 225859 WARNING nova.compute.manager [req-c9d531f8-7d03-45b4-af5a-f18c988143bf req-77d546d8-d984-40a1-98ff-8b4fd81587e1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b661b8a2-8bea-46be-afe4-537fd2523387] Received unexpected event network-vif-plugged-7af44da8-4a04-4981-9a59-e94e346d071e for instance with vm_state active and task_state None.
Jan 20 15:11:16 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/292567124' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:11:16 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/292567124' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:11:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:11:16.430 140354 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:11:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:11:16.431 140354 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:11:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:11:16.432 140354 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:11:16 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:11:16 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:11:16 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:11:16.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:11:17 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:11:17 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:11:17 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:11:17.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:11:17 compute-1 nova_compute[225855]: 2026-01-20 15:11:17.335 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:11:17 compute-1 ceph-mon[81775]: pgmap v2643: 321 pgs: 321 active+clean; 315 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 4.3 MiB/s wr, 130 op/s
Jan 20 15:11:17 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/2499990403' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:11:17 compute-1 nova_compute[225855]: 2026-01-20 15:11:17.536 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:11:17 compute-1 nova_compute[225855]: 2026-01-20 15:11:17.623 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:11:18 compute-1 sudo[298588]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:11:18 compute-1 sudo[298588]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:11:18 compute-1 sudo[298588]: pam_unix(sudo:session): session closed for user root
Jan 20 15:11:18 compute-1 sudo[298613]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:11:18 compute-1 sudo[298613]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:11:18 compute-1 sudo[298613]: pam_unix(sudo:session): session closed for user root
Jan 20 15:11:18 compute-1 sudo[298638]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:11:18 compute-1 sudo[298638]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:11:18 compute-1 sudo[298638]: pam_unix(sudo:session): session closed for user root
Jan 20 15:11:18 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e388 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:11:18 compute-1 sudo[298663]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 20 15:11:18 compute-1 sudo[298663]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:11:18 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:11:18 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:11:18 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:11:18.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:11:18 compute-1 sudo[298663]: pam_unix(sudo:session): session closed for user root
Jan 20 15:11:19 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:11:19 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:11:19 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:11:19.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:11:19 compute-1 ceph-mon[81775]: pgmap v2644: 321 pgs: 321 active+clean; 315 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 913 KiB/s rd, 3.6 MiB/s wr, 109 op/s
Jan 20 15:11:19 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/628713959' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:11:19 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/1273009450' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:11:19 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 20 15:11:19 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:11:19 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 15:11:19 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:11:19 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 15:11:19 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 15:11:19 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:11:19 compute-1 sudo[298719]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:11:19 compute-1 sudo[298719]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:11:19 compute-1 sudo[298719]: pam_unix(sudo:session): session closed for user root
Jan 20 15:11:19 compute-1 sudo[298744]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:11:19 compute-1 sudo[298744]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:11:19 compute-1 sudo[298744]: pam_unix(sudo:session): session closed for user root
Jan 20 15:11:19 compute-1 nova_compute[225855]: 2026-01-20 15:11:19.781 225859 DEBUG oslo_concurrency.lockutils [None req-8519ce3d-f8cd-4ecb-bd73-14d2719b55b8 17140eb73c0b4236807367396cc4959b eac67fc3f12d4e9f9e47de6b79eea88f - - default default] Acquiring lock "b661b8a2-8bea-46be-afe4-537fd2523387" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:11:19 compute-1 nova_compute[225855]: 2026-01-20 15:11:19.781 225859 DEBUG oslo_concurrency.lockutils [None req-8519ce3d-f8cd-4ecb-bd73-14d2719b55b8 17140eb73c0b4236807367396cc4959b eac67fc3f12d4e9f9e47de6b79eea88f - - default default] Lock "b661b8a2-8bea-46be-afe4-537fd2523387" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:11:19 compute-1 nova_compute[225855]: 2026-01-20 15:11:19.781 225859 DEBUG oslo_concurrency.lockutils [None req-8519ce3d-f8cd-4ecb-bd73-14d2719b55b8 17140eb73c0b4236807367396cc4959b eac67fc3f12d4e9f9e47de6b79eea88f - - default default] Acquiring lock "b661b8a2-8bea-46be-afe4-537fd2523387-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:11:19 compute-1 nova_compute[225855]: 2026-01-20 15:11:19.782 225859 DEBUG oslo_concurrency.lockutils [None req-8519ce3d-f8cd-4ecb-bd73-14d2719b55b8 17140eb73c0b4236807367396cc4959b eac67fc3f12d4e9f9e47de6b79eea88f - - default default] Lock "b661b8a2-8bea-46be-afe4-537fd2523387-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:11:19 compute-1 nova_compute[225855]: 2026-01-20 15:11:19.782 225859 DEBUG oslo_concurrency.lockutils [None req-8519ce3d-f8cd-4ecb-bd73-14d2719b55b8 17140eb73c0b4236807367396cc4959b eac67fc3f12d4e9f9e47de6b79eea88f - - default default] Lock "b661b8a2-8bea-46be-afe4-537fd2523387-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:11:19 compute-1 nova_compute[225855]: 2026-01-20 15:11:19.783 225859 INFO nova.compute.manager [None req-8519ce3d-f8cd-4ecb-bd73-14d2719b55b8 17140eb73c0b4236807367396cc4959b eac67fc3f12d4e9f9e47de6b79eea88f - - default default] [instance: b661b8a2-8bea-46be-afe4-537fd2523387] Terminating instance
Jan 20 15:11:19 compute-1 nova_compute[225855]: 2026-01-20 15:11:19.783 225859 DEBUG nova.compute.manager [None req-8519ce3d-f8cd-4ecb-bd73-14d2719b55b8 17140eb73c0b4236807367396cc4959b eac67fc3f12d4e9f9e47de6b79eea88f - - default default] [instance: b661b8a2-8bea-46be-afe4-537fd2523387] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 20 15:11:19 compute-1 kernel: tap7af44da8-4a (unregistering): left promiscuous mode
Jan 20 15:11:19 compute-1 NetworkManager[49104]: <info>  [1768921879.8321] device (tap7af44da8-4a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 20 15:11:19 compute-1 ovn_controller[130490]: 2026-01-20T15:11:19Z|00793|binding|INFO|Releasing lport 7af44da8-4a04-4981-9a59-e94e346d071e from this chassis (sb_readonly=0)
Jan 20 15:11:19 compute-1 nova_compute[225855]: 2026-01-20 15:11:19.847 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:11:19 compute-1 ovn_controller[130490]: 2026-01-20T15:11:19Z|00794|binding|INFO|Setting lport 7af44da8-4a04-4981-9a59-e94e346d071e down in Southbound
Jan 20 15:11:19 compute-1 nova_compute[225855]: 2026-01-20 15:11:19.849 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:11:19 compute-1 ovn_controller[130490]: 2026-01-20T15:11:19Z|00795|binding|INFO|Removing iface tap7af44da8-4a ovn-installed in OVS
Jan 20 15:11:19 compute-1 nova_compute[225855]: 2026-01-20 15:11:19.850 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:11:19 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:11:19.855 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a5:4b:49 10.100.0.8'], port_security=['fa:16:3e:a5:4b:49 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'b661b8a2-8bea-46be-afe4-537fd2523387', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-901fb6c4-d3d9-4ccd-b087-e1a254c7cc98', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'eac67fc3f12d4e9f9e47de6b79eea88f', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd744446f-90e4-4fa0-a2ba-33abbbe03344', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ff7bbda1-70af-44d2-a4e3-9c7dbd69983d, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], logical_port=7af44da8-4a04-4981-9a59-e94e346d071e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:11:19 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:11:19.856 140354 INFO neutron.agent.ovn.metadata.agent [-] Port 7af44da8-4a04-4981-9a59-e94e346d071e in datapath 901fb6c4-d3d9-4ccd-b087-e1a254c7cc98 unbound from our chassis
Jan 20 15:11:19 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:11:19.857 140354 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 901fb6c4-d3d9-4ccd-b087-e1a254c7cc98, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 20 15:11:19 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:11:19.858 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[50f78dcd-c7d8-42e5-996f-c41239078f1e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:11:19 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:11:19.859 140354 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-901fb6c4-d3d9-4ccd-b087-e1a254c7cc98 namespace which is not needed anymore
Jan 20 15:11:19 compute-1 nova_compute[225855]: 2026-01-20 15:11:19.867 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:11:19 compute-1 systemd[1]: machine-qemu\x2d91\x2dinstance\x2d000000af.scope: Deactivated successfully.
Jan 20 15:11:19 compute-1 systemd[1]: machine-qemu\x2d91\x2dinstance\x2d000000af.scope: Consumed 6.393s CPU time.
Jan 20 15:11:19 compute-1 systemd-machined[194361]: Machine qemu-91-instance-000000af terminated.
Jan 20 15:11:19 compute-1 neutron-haproxy-ovnmeta-901fb6c4-d3d9-4ccd-b087-e1a254c7cc98[298570]: [NOTICE]   (298574) : haproxy version is 2.8.14-c23fe91
Jan 20 15:11:19 compute-1 neutron-haproxy-ovnmeta-901fb6c4-d3d9-4ccd-b087-e1a254c7cc98[298570]: [NOTICE]   (298574) : path to executable is /usr/sbin/haproxy
Jan 20 15:11:19 compute-1 neutron-haproxy-ovnmeta-901fb6c4-d3d9-4ccd-b087-e1a254c7cc98[298570]: [WARNING]  (298574) : Exiting Master process...
Jan 20 15:11:19 compute-1 neutron-haproxy-ovnmeta-901fb6c4-d3d9-4ccd-b087-e1a254c7cc98[298570]: [ALERT]    (298574) : Current worker (298576) exited with code 143 (Terminated)
Jan 20 15:11:19 compute-1 neutron-haproxy-ovnmeta-901fb6c4-d3d9-4ccd-b087-e1a254c7cc98[298570]: [WARNING]  (298574) : All workers exited. Exiting... (0)
Jan 20 15:11:19 compute-1 systemd[1]: libpod-39c10f61ce58f80611efb9597b63357309abfc9bfb51660b161305b57e340c4b.scope: Deactivated successfully.
Jan 20 15:11:19 compute-1 podman[298795]: 2026-01-20 15:11:19.988452698 +0000 UTC m=+0.044450582 container died 39c10f61ce58f80611efb9597b63357309abfc9bfb51660b161305b57e340c4b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-901fb6c4-d3d9-4ccd-b087-e1a254c7cc98, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Jan 20 15:11:20 compute-1 kernel: tap7af44da8-4a: entered promiscuous mode
Jan 20 15:11:20 compute-1 NetworkManager[49104]: <info>  [1768921880.0079] manager: (tap7af44da8-4a): new Tun device (/org/freedesktop/NetworkManager/Devices/329)
Jan 20 15:11:20 compute-1 systemd-udevd[298774]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 15:11:20 compute-1 kernel: tap7af44da8-4a (unregistering): left promiscuous mode
Jan 20 15:11:20 compute-1 ovn_controller[130490]: 2026-01-20T15:11:20Z|00796|binding|INFO|Claiming lport 7af44da8-4a04-4981-9a59-e94e346d071e for this chassis.
Jan 20 15:11:20 compute-1 ovn_controller[130490]: 2026-01-20T15:11:20Z|00797|binding|INFO|7af44da8-4a04-4981-9a59-e94e346d071e: Claiming fa:16:3e:a5:4b:49 10.100.0.8
Jan 20 15:11:20 compute-1 nova_compute[225855]: 2026-01-20 15:11:20.012 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:11:20 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:11:20.018 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a5:4b:49 10.100.0.8'], port_security=['fa:16:3e:a5:4b:49 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'b661b8a2-8bea-46be-afe4-537fd2523387', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-901fb6c4-d3d9-4ccd-b087-e1a254c7cc98', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'eac67fc3f12d4e9f9e47de6b79eea88f', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd744446f-90e4-4fa0-a2ba-33abbbe03344', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ff7bbda1-70af-44d2-a4e3-9c7dbd69983d, chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], logical_port=7af44da8-4a04-4981-9a59-e94e346d071e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:11:20 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-39c10f61ce58f80611efb9597b63357309abfc9bfb51660b161305b57e340c4b-userdata-shm.mount: Deactivated successfully.
Jan 20 15:11:20 compute-1 systemd[1]: var-lib-containers-storage-overlay-64736f063f2c1d26d577945ba7fc6e4bee2fe41a524dd9cef1de3e35037e7f23-merged.mount: Deactivated successfully.
Jan 20 15:11:20 compute-1 podman[298795]: 2026-01-20 15:11:20.028190656 +0000 UTC m=+0.084188540 container cleanup 39c10f61ce58f80611efb9597b63357309abfc9bfb51660b161305b57e340c4b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-901fb6c4-d3d9-4ccd-b087-e1a254c7cc98, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Jan 20 15:11:20 compute-1 nova_compute[225855]: 2026-01-20 15:11:20.030 225859 INFO nova.virt.libvirt.driver [-] [instance: b661b8a2-8bea-46be-afe4-537fd2523387] Instance destroyed successfully.
Jan 20 15:11:20 compute-1 nova_compute[225855]: 2026-01-20 15:11:20.031 225859 DEBUG nova.objects.instance [None req-8519ce3d-f8cd-4ecb-bd73-14d2719b55b8 17140eb73c0b4236807367396cc4959b eac67fc3f12d4e9f9e47de6b79eea88f - - default default] Lazy-loading 'resources' on Instance uuid b661b8a2-8bea-46be-afe4-537fd2523387 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:11:20 compute-1 ovn_controller[130490]: 2026-01-20T15:11:20Z|00798|binding|INFO|Setting lport 7af44da8-4a04-4981-9a59-e94e346d071e ovn-installed in OVS
Jan 20 15:11:20 compute-1 ovn_controller[130490]: 2026-01-20T15:11:20Z|00799|binding|INFO|Setting lport 7af44da8-4a04-4981-9a59-e94e346d071e up in Southbound
Jan 20 15:11:20 compute-1 ovn_controller[130490]: 2026-01-20T15:11:20Z|00800|binding|INFO|Releasing lport 7af44da8-4a04-4981-9a59-e94e346d071e from this chassis (sb_readonly=1)
Jan 20 15:11:20 compute-1 ovn_controller[130490]: 2026-01-20T15:11:20Z|00801|if_status|INFO|Dropped 2 log messages in last 103 seconds (most recently, 103 seconds ago) due to excessive rate
Jan 20 15:11:20 compute-1 ovn_controller[130490]: 2026-01-20T15:11:20Z|00802|if_status|INFO|Not setting lport 7af44da8-4a04-4981-9a59-e94e346d071e down as sb is readonly
Jan 20 15:11:20 compute-1 ovn_controller[130490]: 2026-01-20T15:11:20Z|00803|binding|INFO|Removing iface tap7af44da8-4a ovn-installed in OVS
Jan 20 15:11:20 compute-1 nova_compute[225855]: 2026-01-20 15:11:20.034 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:11:20 compute-1 systemd[1]: libpod-conmon-39c10f61ce58f80611efb9597b63357309abfc9bfb51660b161305b57e340c4b.scope: Deactivated successfully.
Jan 20 15:11:20 compute-1 ovn_controller[130490]: 2026-01-20T15:11:20Z|00804|binding|INFO|Releasing lport 7af44da8-4a04-4981-9a59-e94e346d071e from this chassis (sb_readonly=0)
Jan 20 15:11:20 compute-1 ovn_controller[130490]: 2026-01-20T15:11:20Z|00805|binding|INFO|Setting lport 7af44da8-4a04-4981-9a59-e94e346d071e down in Southbound
Jan 20 15:11:20 compute-1 nova_compute[225855]: 2026-01-20 15:11:20.047 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:11:20 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:11:20.049 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a5:4b:49 10.100.0.8'], port_security=['fa:16:3e:a5:4b:49 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'b661b8a2-8bea-46be-afe4-537fd2523387', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-901fb6c4-d3d9-4ccd-b087-e1a254c7cc98', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'eac67fc3f12d4e9f9e47de6b79eea88f', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd744446f-90e4-4fa0-a2ba-33abbbe03344', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ff7bbda1-70af-44d2-a4e3-9c7dbd69983d, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], logical_port=7af44da8-4a04-4981-9a59-e94e346d071e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:11:20 compute-1 nova_compute[225855]: 2026-01-20 15:11:20.050 225859 DEBUG nova.virt.libvirt.vif [None req-8519ce3d-f8cd-4ecb-bd73-14d2719b55b8 17140eb73c0b4236807367396cc4959b eac67fc3f12d4e9f9e47de6b79eea88f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T15:11:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-ServerTagsTestJSON-server-1893768580',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-servertagstestjson-server-1893768580',id=175,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-20T15:11:14Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='eac67fc3f12d4e9f9e47de6b79eea88f',ramdisk_id='',reservation_id='r-wkdfou8l',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='te
mpest-ServerTagsTestJSON-232429935',owner_user_name='tempest-ServerTagsTestJSON-232429935-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T15:11:14Z,user_data=None,user_id='17140eb73c0b4236807367396cc4959b',uuid=b661b8a2-8bea-46be-afe4-537fd2523387,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "7af44da8-4a04-4981-9a59-e94e346d071e", "address": "fa:16:3e:a5:4b:49", "network": {"id": "901fb6c4-d3d9-4ccd-b087-e1a254c7cc98", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-1118919703-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eac67fc3f12d4e9f9e47de6b79eea88f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7af44da8-4a", "ovs_interfaceid": "7af44da8-4a04-4981-9a59-e94e346d071e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 20 15:11:20 compute-1 nova_compute[225855]: 2026-01-20 15:11:20.051 225859 DEBUG nova.network.os_vif_util [None req-8519ce3d-f8cd-4ecb-bd73-14d2719b55b8 17140eb73c0b4236807367396cc4959b eac67fc3f12d4e9f9e47de6b79eea88f - - default default] Converting VIF {"id": "7af44da8-4a04-4981-9a59-e94e346d071e", "address": "fa:16:3e:a5:4b:49", "network": {"id": "901fb6c4-d3d9-4ccd-b087-e1a254c7cc98", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-1118919703-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eac67fc3f12d4e9f9e47de6b79eea88f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7af44da8-4a", "ovs_interfaceid": "7af44da8-4a04-4981-9a59-e94e346d071e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 15:11:20 compute-1 nova_compute[225855]: 2026-01-20 15:11:20.052 225859 DEBUG nova.network.os_vif_util [None req-8519ce3d-f8cd-4ecb-bd73-14d2719b55b8 17140eb73c0b4236807367396cc4959b eac67fc3f12d4e9f9e47de6b79eea88f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a5:4b:49,bridge_name='br-int',has_traffic_filtering=True,id=7af44da8-4a04-4981-9a59-e94e346d071e,network=Network(901fb6c4-d3d9-4ccd-b087-e1a254c7cc98),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7af44da8-4a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 15:11:20 compute-1 nova_compute[225855]: 2026-01-20 15:11:20.052 225859 DEBUG os_vif [None req-8519ce3d-f8cd-4ecb-bd73-14d2719b55b8 17140eb73c0b4236807367396cc4959b eac67fc3f12d4e9f9e47de6b79eea88f - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a5:4b:49,bridge_name='br-int',has_traffic_filtering=True,id=7af44da8-4a04-4981-9a59-e94e346d071e,network=Network(901fb6c4-d3d9-4ccd-b087-e1a254c7cc98),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7af44da8-4a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 20 15:11:20 compute-1 nova_compute[225855]: 2026-01-20 15:11:20.054 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:11:20 compute-1 nova_compute[225855]: 2026-01-20 15:11:20.054 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7af44da8-4a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:11:20 compute-1 nova_compute[225855]: 2026-01-20 15:11:20.056 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:11:20 compute-1 nova_compute[225855]: 2026-01-20 15:11:20.057 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:11:20 compute-1 nova_compute[225855]: 2026-01-20 15:11:20.059 225859 INFO os_vif [None req-8519ce3d-f8cd-4ecb-bd73-14d2719b55b8 17140eb73c0b4236807367396cc4959b eac67fc3f12d4e9f9e47de6b79eea88f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a5:4b:49,bridge_name='br-int',has_traffic_filtering=True,id=7af44da8-4a04-4981-9a59-e94e346d071e,network=Network(901fb6c4-d3d9-4ccd-b087-e1a254c7cc98),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7af44da8-4a')
Jan 20 15:11:20 compute-1 podman[298826]: 2026-01-20 15:11:20.111728466 +0000 UTC m=+0.053761807 container remove 39c10f61ce58f80611efb9597b63357309abfc9bfb51660b161305b57e340c4b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-901fb6c4-d3d9-4ccd-b087-e1a254c7cc98, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 15:11:20 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:11:20.116 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[d4d96bf1-9327-4f6c-aa6d-3665317d1422]: (4, ('Tue Jan 20 03:11:19 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-901fb6c4-d3d9-4ccd-b087-e1a254c7cc98 (39c10f61ce58f80611efb9597b63357309abfc9bfb51660b161305b57e340c4b)\n39c10f61ce58f80611efb9597b63357309abfc9bfb51660b161305b57e340c4b\nTue Jan 20 03:11:20 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-901fb6c4-d3d9-4ccd-b087-e1a254c7cc98 (39c10f61ce58f80611efb9597b63357309abfc9bfb51660b161305b57e340c4b)\n39c10f61ce58f80611efb9597b63357309abfc9bfb51660b161305b57e340c4b\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:11:20 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:11:20.118 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[b26baaad-270f-430b-a6bf-8458abd57bf6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:11:20 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:11:20.120 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap901fb6c4-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:11:20 compute-1 nova_compute[225855]: 2026-01-20 15:11:20.122 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:11:20 compute-1 kernel: tap901fb6c4-d0: left promiscuous mode
Jan 20 15:11:20 compute-1 nova_compute[225855]: 2026-01-20 15:11:20.135 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:11:20 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:11:20.136 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[466a7dc3-6841-4a57-a21b-35d24ec98752]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:11:20 compute-1 podman[298827]: 2026-01-20 15:11:20.144727722 +0000 UTC m=+0.079904248 container health_status 533cd8e0c3b8b7f910f15cef49e7f879ab46c12ce646a35628c15f13725eb500 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 20 15:11:20 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:11:20.152 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[34862ac6-1ff7-4815-95e1-0420c8f287f9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:11:20 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:11:20.153 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[518126f2-93b2-4761-9c4e-8e008b815d29]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:11:20 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:11:20.169 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[56a79a4c-ddb4-4220-a1da-9934d430ea4a]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 691255, 'reachable_time': 34255, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 298878, 'error': None, 'target': 'ovnmeta-901fb6c4-d3d9-4ccd-b087-e1a254c7cc98', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:11:20 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:11:20.172 140466 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-901fb6c4-d3d9-4ccd-b087-e1a254c7cc98 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 20 15:11:20 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:11:20.172 140466 DEBUG oslo.privsep.daemon [-] privsep: reply[3abd030f-c874-435b-b42d-34abf6f66f68]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:11:20 compute-1 systemd[1]: run-netns-ovnmeta\x2d901fb6c4\x2dd3d9\x2d4ccd\x2db087\x2de1a254c7cc98.mount: Deactivated successfully.
Jan 20 15:11:20 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:11:20.173 140354 INFO neutron.agent.ovn.metadata.agent [-] Port 7af44da8-4a04-4981-9a59-e94e346d071e in datapath 901fb6c4-d3d9-4ccd-b087-e1a254c7cc98 unbound from our chassis
Jan 20 15:11:20 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:11:20.175 140354 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 901fb6c4-d3d9-4ccd-b087-e1a254c7cc98, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 20 15:11:20 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:11:20.176 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[bc37c0f2-58ab-4d32-a163-cb185130006e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:11:20 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:11:20.176 140354 INFO neutron.agent.ovn.metadata.agent [-] Port 7af44da8-4a04-4981-9a59-e94e346d071e in datapath 901fb6c4-d3d9-4ccd-b087-e1a254c7cc98 unbound from our chassis
Jan 20 15:11:20 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:11:20.178 140354 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 901fb6c4-d3d9-4ccd-b087-e1a254c7cc98, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 20 15:11:20 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:11:20.178 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[78db79e2-0a21-453c-9b3d-4e965c7e0e9e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:11:20 compute-1 nova_compute[225855]: 2026-01-20 15:11:20.180 225859 DEBUG nova.compute.manager [req-d5a1d988-97f8-4370-a6a9-5c23ce61f4f0 req-63040d03-16ff-4c43-b3de-1aba204e434e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b661b8a2-8bea-46be-afe4-537fd2523387] Received event network-vif-unplugged-7af44da8-4a04-4981-9a59-e94e346d071e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:11:20 compute-1 nova_compute[225855]: 2026-01-20 15:11:20.180 225859 DEBUG oslo_concurrency.lockutils [req-d5a1d988-97f8-4370-a6a9-5c23ce61f4f0 req-63040d03-16ff-4c43-b3de-1aba204e434e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "b661b8a2-8bea-46be-afe4-537fd2523387-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:11:20 compute-1 nova_compute[225855]: 2026-01-20 15:11:20.180 225859 DEBUG oslo_concurrency.lockutils [req-d5a1d988-97f8-4370-a6a9-5c23ce61f4f0 req-63040d03-16ff-4c43-b3de-1aba204e434e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "b661b8a2-8bea-46be-afe4-537fd2523387-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:11:20 compute-1 nova_compute[225855]: 2026-01-20 15:11:20.181 225859 DEBUG oslo_concurrency.lockutils [req-d5a1d988-97f8-4370-a6a9-5c23ce61f4f0 req-63040d03-16ff-4c43-b3de-1aba204e434e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "b661b8a2-8bea-46be-afe4-537fd2523387-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:11:20 compute-1 nova_compute[225855]: 2026-01-20 15:11:20.181 225859 DEBUG nova.compute.manager [req-d5a1d988-97f8-4370-a6a9-5c23ce61f4f0 req-63040d03-16ff-4c43-b3de-1aba204e434e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b661b8a2-8bea-46be-afe4-537fd2523387] No waiting events found dispatching network-vif-unplugged-7af44da8-4a04-4981-9a59-e94e346d071e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:11:20 compute-1 nova_compute[225855]: 2026-01-20 15:11:20.181 225859 DEBUG nova.compute.manager [req-d5a1d988-97f8-4370-a6a9-5c23ce61f4f0 req-63040d03-16ff-4c43-b3de-1aba204e434e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b661b8a2-8bea-46be-afe4-537fd2523387] Received event network-vif-unplugged-7af44da8-4a04-4981-9a59-e94e346d071e for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 20 15:11:20 compute-1 nova_compute[225855]: 2026-01-20 15:11:20.431 225859 INFO nova.virt.libvirt.driver [None req-8519ce3d-f8cd-4ecb-bd73-14d2719b55b8 17140eb73c0b4236807367396cc4959b eac67fc3f12d4e9f9e47de6b79eea88f - - default default] [instance: b661b8a2-8bea-46be-afe4-537fd2523387] Deleting instance files /var/lib/nova/instances/b661b8a2-8bea-46be-afe4-537fd2523387_del
Jan 20 15:11:20 compute-1 nova_compute[225855]: 2026-01-20 15:11:20.431 225859 INFO nova.virt.libvirt.driver [None req-8519ce3d-f8cd-4ecb-bd73-14d2719b55b8 17140eb73c0b4236807367396cc4959b eac67fc3f12d4e9f9e47de6b79eea88f - - default default] [instance: b661b8a2-8bea-46be-afe4-537fd2523387] Deletion of /var/lib/nova/instances/b661b8a2-8bea-46be-afe4-537fd2523387_del complete
Jan 20 15:11:20 compute-1 nova_compute[225855]: 2026-01-20 15:11:20.480 225859 INFO nova.compute.manager [None req-8519ce3d-f8cd-4ecb-bd73-14d2719b55b8 17140eb73c0b4236807367396cc4959b eac67fc3f12d4e9f9e47de6b79eea88f - - default default] [instance: b661b8a2-8bea-46be-afe4-537fd2523387] Took 0.70 seconds to destroy the instance on the hypervisor.
Jan 20 15:11:20 compute-1 nova_compute[225855]: 2026-01-20 15:11:20.480 225859 DEBUG oslo.service.loopingcall [None req-8519ce3d-f8cd-4ecb-bd73-14d2719b55b8 17140eb73c0b4236807367396cc4959b eac67fc3f12d4e9f9e47de6b79eea88f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 20 15:11:20 compute-1 nova_compute[225855]: 2026-01-20 15:11:20.481 225859 DEBUG nova.compute.manager [-] [instance: b661b8a2-8bea-46be-afe4-537fd2523387] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 20 15:11:20 compute-1 nova_compute[225855]: 2026-01-20 15:11:20.481 225859 DEBUG nova.network.neutron [-] [instance: b661b8a2-8bea-46be-afe4-537fd2523387] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 20 15:11:20 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:11:20 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:11:20 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:11:20.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:11:21 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e389 e389: 3 total, 3 up, 3 in
Jan 20 15:11:21 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:11:21 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:11:21 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:11:21.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:11:21 compute-1 ceph-mon[81775]: pgmap v2645: 321 pgs: 321 active+clean; 259 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 177 op/s
Jan 20 15:11:21 compute-1 ceph-mon[81775]: osdmap e389: 3 total, 3 up, 3 in
Jan 20 15:11:21 compute-1 nova_compute[225855]: 2026-01-20 15:11:21.530 225859 DEBUG nova.network.neutron [-] [instance: b661b8a2-8bea-46be-afe4-537fd2523387] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:11:21 compute-1 nova_compute[225855]: 2026-01-20 15:11:21.545 225859 INFO nova.compute.manager [-] [instance: b661b8a2-8bea-46be-afe4-537fd2523387] Took 1.06 seconds to deallocate network for instance.
Jan 20 15:11:21 compute-1 nova_compute[225855]: 2026-01-20 15:11:21.592 225859 DEBUG oslo_concurrency.lockutils [None req-8519ce3d-f8cd-4ecb-bd73-14d2719b55b8 17140eb73c0b4236807367396cc4959b eac67fc3f12d4e9f9e47de6b79eea88f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:11:21 compute-1 nova_compute[225855]: 2026-01-20 15:11:21.593 225859 DEBUG oslo_concurrency.lockutils [None req-8519ce3d-f8cd-4ecb-bd73-14d2719b55b8 17140eb73c0b4236807367396cc4959b eac67fc3f12d4e9f9e47de6b79eea88f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:11:21 compute-1 nova_compute[225855]: 2026-01-20 15:11:21.672 225859 DEBUG oslo_concurrency.processutils [None req-8519ce3d-f8cd-4ecb-bd73-14d2719b55b8 17140eb73c0b4236807367396cc4959b eac67fc3f12d4e9f9e47de6b79eea88f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:11:22 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:11:22 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1813241616' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:11:22 compute-1 nova_compute[225855]: 2026-01-20 15:11:22.120 225859 DEBUG oslo_concurrency.processutils [None req-8519ce3d-f8cd-4ecb-bd73-14d2719b55b8 17140eb73c0b4236807367396cc4959b eac67fc3f12d4e9f9e47de6b79eea88f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:11:22 compute-1 nova_compute[225855]: 2026-01-20 15:11:22.126 225859 DEBUG nova.compute.provider_tree [None req-8519ce3d-f8cd-4ecb-bd73-14d2719b55b8 17140eb73c0b4236807367396cc4959b eac67fc3f12d4e9f9e47de6b79eea88f - - default default] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:11:22 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/1118456220' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:11:22 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/1813241616' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:11:22 compute-1 nova_compute[225855]: 2026-01-20 15:11:22.164 225859 DEBUG nova.scheduler.client.report [None req-8519ce3d-f8cd-4ecb-bd73-14d2719b55b8 17140eb73c0b4236807367396cc4959b eac67fc3f12d4e9f9e47de6b79eea88f - - default default] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:11:22 compute-1 nova_compute[225855]: 2026-01-20 15:11:22.208 225859 DEBUG oslo_concurrency.lockutils [None req-8519ce3d-f8cd-4ecb-bd73-14d2719b55b8 17140eb73c0b4236807367396cc4959b eac67fc3f12d4e9f9e47de6b79eea88f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.616s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:11:22 compute-1 nova_compute[225855]: 2026-01-20 15:11:22.248 225859 INFO nova.scheduler.client.report [None req-8519ce3d-f8cd-4ecb-bd73-14d2719b55b8 17140eb73c0b4236807367396cc4959b eac67fc3f12d4e9f9e47de6b79eea88f - - default default] Deleted allocations for instance b661b8a2-8bea-46be-afe4-537fd2523387
Jan 20 15:11:22 compute-1 nova_compute[225855]: 2026-01-20 15:11:22.299 225859 DEBUG nova.compute.manager [req-4d3df9ae-d717-4425-9fae-419135dee5ff req-1deeb906-6935-4fc0-a0cf-d2b33d57ae43 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b661b8a2-8bea-46be-afe4-537fd2523387] Received event network-vif-plugged-7af44da8-4a04-4981-9a59-e94e346d071e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:11:22 compute-1 nova_compute[225855]: 2026-01-20 15:11:22.300 225859 DEBUG oslo_concurrency.lockutils [req-4d3df9ae-d717-4425-9fae-419135dee5ff req-1deeb906-6935-4fc0-a0cf-d2b33d57ae43 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "b661b8a2-8bea-46be-afe4-537fd2523387-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:11:22 compute-1 nova_compute[225855]: 2026-01-20 15:11:22.300 225859 DEBUG oslo_concurrency.lockutils [req-4d3df9ae-d717-4425-9fae-419135dee5ff req-1deeb906-6935-4fc0-a0cf-d2b33d57ae43 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "b661b8a2-8bea-46be-afe4-537fd2523387-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:11:22 compute-1 nova_compute[225855]: 2026-01-20 15:11:22.300 225859 DEBUG oslo_concurrency.lockutils [req-4d3df9ae-d717-4425-9fae-419135dee5ff req-1deeb906-6935-4fc0-a0cf-d2b33d57ae43 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "b661b8a2-8bea-46be-afe4-537fd2523387-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:11:22 compute-1 nova_compute[225855]: 2026-01-20 15:11:22.300 225859 DEBUG nova.compute.manager [req-4d3df9ae-d717-4425-9fae-419135dee5ff req-1deeb906-6935-4fc0-a0cf-d2b33d57ae43 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b661b8a2-8bea-46be-afe4-537fd2523387] No waiting events found dispatching network-vif-plugged-7af44da8-4a04-4981-9a59-e94e346d071e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:11:22 compute-1 nova_compute[225855]: 2026-01-20 15:11:22.300 225859 WARNING nova.compute.manager [req-4d3df9ae-d717-4425-9fae-419135dee5ff req-1deeb906-6935-4fc0-a0cf-d2b33d57ae43 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b661b8a2-8bea-46be-afe4-537fd2523387] Received unexpected event network-vif-plugged-7af44da8-4a04-4981-9a59-e94e346d071e for instance with vm_state deleted and task_state None.
Jan 20 15:11:22 compute-1 nova_compute[225855]: 2026-01-20 15:11:22.301 225859 DEBUG nova.compute.manager [req-4d3df9ae-d717-4425-9fae-419135dee5ff req-1deeb906-6935-4fc0-a0cf-d2b33d57ae43 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b661b8a2-8bea-46be-afe4-537fd2523387] Received event network-vif-plugged-7af44da8-4a04-4981-9a59-e94e346d071e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:11:22 compute-1 nova_compute[225855]: 2026-01-20 15:11:22.301 225859 DEBUG oslo_concurrency.lockutils [req-4d3df9ae-d717-4425-9fae-419135dee5ff req-1deeb906-6935-4fc0-a0cf-d2b33d57ae43 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "b661b8a2-8bea-46be-afe4-537fd2523387-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:11:22 compute-1 nova_compute[225855]: 2026-01-20 15:11:22.301 225859 DEBUG oslo_concurrency.lockutils [req-4d3df9ae-d717-4425-9fae-419135dee5ff req-1deeb906-6935-4fc0-a0cf-d2b33d57ae43 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "b661b8a2-8bea-46be-afe4-537fd2523387-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:11:22 compute-1 nova_compute[225855]: 2026-01-20 15:11:22.301 225859 DEBUG oslo_concurrency.lockutils [req-4d3df9ae-d717-4425-9fae-419135dee5ff req-1deeb906-6935-4fc0-a0cf-d2b33d57ae43 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "b661b8a2-8bea-46be-afe4-537fd2523387-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:11:22 compute-1 nova_compute[225855]: 2026-01-20 15:11:22.301 225859 DEBUG nova.compute.manager [req-4d3df9ae-d717-4425-9fae-419135dee5ff req-1deeb906-6935-4fc0-a0cf-d2b33d57ae43 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b661b8a2-8bea-46be-afe4-537fd2523387] No waiting events found dispatching network-vif-plugged-7af44da8-4a04-4981-9a59-e94e346d071e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:11:22 compute-1 nova_compute[225855]: 2026-01-20 15:11:22.302 225859 WARNING nova.compute.manager [req-4d3df9ae-d717-4425-9fae-419135dee5ff req-1deeb906-6935-4fc0-a0cf-d2b33d57ae43 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b661b8a2-8bea-46be-afe4-537fd2523387] Received unexpected event network-vif-plugged-7af44da8-4a04-4981-9a59-e94e346d071e for instance with vm_state deleted and task_state None.
Jan 20 15:11:22 compute-1 nova_compute[225855]: 2026-01-20 15:11:22.302 225859 DEBUG nova.compute.manager [req-4d3df9ae-d717-4425-9fae-419135dee5ff req-1deeb906-6935-4fc0-a0cf-d2b33d57ae43 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b661b8a2-8bea-46be-afe4-537fd2523387] Received event network-vif-plugged-7af44da8-4a04-4981-9a59-e94e346d071e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:11:22 compute-1 nova_compute[225855]: 2026-01-20 15:11:22.302 225859 DEBUG oslo_concurrency.lockutils [req-4d3df9ae-d717-4425-9fae-419135dee5ff req-1deeb906-6935-4fc0-a0cf-d2b33d57ae43 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "b661b8a2-8bea-46be-afe4-537fd2523387-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:11:22 compute-1 nova_compute[225855]: 2026-01-20 15:11:22.302 225859 DEBUG oslo_concurrency.lockutils [req-4d3df9ae-d717-4425-9fae-419135dee5ff req-1deeb906-6935-4fc0-a0cf-d2b33d57ae43 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "b661b8a2-8bea-46be-afe4-537fd2523387-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:11:22 compute-1 nova_compute[225855]: 2026-01-20 15:11:22.302 225859 DEBUG oslo_concurrency.lockutils [req-4d3df9ae-d717-4425-9fae-419135dee5ff req-1deeb906-6935-4fc0-a0cf-d2b33d57ae43 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "b661b8a2-8bea-46be-afe4-537fd2523387-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:11:22 compute-1 nova_compute[225855]: 2026-01-20 15:11:22.303 225859 DEBUG nova.compute.manager [req-4d3df9ae-d717-4425-9fae-419135dee5ff req-1deeb906-6935-4fc0-a0cf-d2b33d57ae43 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b661b8a2-8bea-46be-afe4-537fd2523387] No waiting events found dispatching network-vif-plugged-7af44da8-4a04-4981-9a59-e94e346d071e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:11:22 compute-1 nova_compute[225855]: 2026-01-20 15:11:22.303 225859 WARNING nova.compute.manager [req-4d3df9ae-d717-4425-9fae-419135dee5ff req-1deeb906-6935-4fc0-a0cf-d2b33d57ae43 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b661b8a2-8bea-46be-afe4-537fd2523387] Received unexpected event network-vif-plugged-7af44da8-4a04-4981-9a59-e94e346d071e for instance with vm_state deleted and task_state None.
Jan 20 15:11:22 compute-1 nova_compute[225855]: 2026-01-20 15:11:22.303 225859 DEBUG nova.compute.manager [req-4d3df9ae-d717-4425-9fae-419135dee5ff req-1deeb906-6935-4fc0-a0cf-d2b33d57ae43 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b661b8a2-8bea-46be-afe4-537fd2523387] Received event network-vif-unplugged-7af44da8-4a04-4981-9a59-e94e346d071e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:11:22 compute-1 nova_compute[225855]: 2026-01-20 15:11:22.303 225859 DEBUG oslo_concurrency.lockutils [req-4d3df9ae-d717-4425-9fae-419135dee5ff req-1deeb906-6935-4fc0-a0cf-d2b33d57ae43 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "b661b8a2-8bea-46be-afe4-537fd2523387-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:11:22 compute-1 nova_compute[225855]: 2026-01-20 15:11:22.303 225859 DEBUG oslo_concurrency.lockutils [req-4d3df9ae-d717-4425-9fae-419135dee5ff req-1deeb906-6935-4fc0-a0cf-d2b33d57ae43 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "b661b8a2-8bea-46be-afe4-537fd2523387-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:11:22 compute-1 nova_compute[225855]: 2026-01-20 15:11:22.303 225859 DEBUG oslo_concurrency.lockutils [req-4d3df9ae-d717-4425-9fae-419135dee5ff req-1deeb906-6935-4fc0-a0cf-d2b33d57ae43 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "b661b8a2-8bea-46be-afe4-537fd2523387-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:11:22 compute-1 nova_compute[225855]: 2026-01-20 15:11:22.304 225859 DEBUG nova.compute.manager [req-4d3df9ae-d717-4425-9fae-419135dee5ff req-1deeb906-6935-4fc0-a0cf-d2b33d57ae43 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b661b8a2-8bea-46be-afe4-537fd2523387] No waiting events found dispatching network-vif-unplugged-7af44da8-4a04-4981-9a59-e94e346d071e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:11:22 compute-1 nova_compute[225855]: 2026-01-20 15:11:22.304 225859 WARNING nova.compute.manager [req-4d3df9ae-d717-4425-9fae-419135dee5ff req-1deeb906-6935-4fc0-a0cf-d2b33d57ae43 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b661b8a2-8bea-46be-afe4-537fd2523387] Received unexpected event network-vif-unplugged-7af44da8-4a04-4981-9a59-e94e346d071e for instance with vm_state deleted and task_state None.
Jan 20 15:11:22 compute-1 nova_compute[225855]: 2026-01-20 15:11:22.304 225859 DEBUG nova.compute.manager [req-4d3df9ae-d717-4425-9fae-419135dee5ff req-1deeb906-6935-4fc0-a0cf-d2b33d57ae43 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b661b8a2-8bea-46be-afe4-537fd2523387] Received event network-vif-plugged-7af44da8-4a04-4981-9a59-e94e346d071e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:11:22 compute-1 nova_compute[225855]: 2026-01-20 15:11:22.304 225859 DEBUG oslo_concurrency.lockutils [req-4d3df9ae-d717-4425-9fae-419135dee5ff req-1deeb906-6935-4fc0-a0cf-d2b33d57ae43 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "b661b8a2-8bea-46be-afe4-537fd2523387-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:11:22 compute-1 nova_compute[225855]: 2026-01-20 15:11:22.304 225859 DEBUG oslo_concurrency.lockutils [req-4d3df9ae-d717-4425-9fae-419135dee5ff req-1deeb906-6935-4fc0-a0cf-d2b33d57ae43 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "b661b8a2-8bea-46be-afe4-537fd2523387-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:11:22 compute-1 nova_compute[225855]: 2026-01-20 15:11:22.305 225859 DEBUG oslo_concurrency.lockutils [req-4d3df9ae-d717-4425-9fae-419135dee5ff req-1deeb906-6935-4fc0-a0cf-d2b33d57ae43 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "b661b8a2-8bea-46be-afe4-537fd2523387-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:11:22 compute-1 nova_compute[225855]: 2026-01-20 15:11:22.305 225859 DEBUG nova.compute.manager [req-4d3df9ae-d717-4425-9fae-419135dee5ff req-1deeb906-6935-4fc0-a0cf-d2b33d57ae43 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b661b8a2-8bea-46be-afe4-537fd2523387] No waiting events found dispatching network-vif-plugged-7af44da8-4a04-4981-9a59-e94e346d071e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:11:22 compute-1 nova_compute[225855]: 2026-01-20 15:11:22.305 225859 WARNING nova.compute.manager [req-4d3df9ae-d717-4425-9fae-419135dee5ff req-1deeb906-6935-4fc0-a0cf-d2b33d57ae43 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b661b8a2-8bea-46be-afe4-537fd2523387] Received unexpected event network-vif-plugged-7af44da8-4a04-4981-9a59-e94e346d071e for instance with vm_state deleted and task_state None.
Jan 20 15:11:22 compute-1 nova_compute[225855]: 2026-01-20 15:11:22.305 225859 DEBUG nova.compute.manager [req-4d3df9ae-d717-4425-9fae-419135dee5ff req-1deeb906-6935-4fc0-a0cf-d2b33d57ae43 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b661b8a2-8bea-46be-afe4-537fd2523387] Received event network-vif-deleted-7af44da8-4a04-4981-9a59-e94e346d071e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:11:22 compute-1 nova_compute[225855]: 2026-01-20 15:11:22.321 225859 DEBUG oslo_concurrency.lockutils [None req-8519ce3d-f8cd-4ecb-bd73-14d2719b55b8 17140eb73c0b4236807367396cc4959b eac67fc3f12d4e9f9e47de6b79eea88f - - default default] Lock "b661b8a2-8bea-46be-afe4-537fd2523387" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.540s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:11:22 compute-1 nova_compute[225855]: 2026-01-20 15:11:22.625 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:11:22 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:11:22 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:11:22 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:11:22.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:11:23 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:11:23 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:11:23 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:11:23.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:11:23 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e389 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:11:23 compute-1 ceph-mon[81775]: pgmap v2647: 321 pgs: 321 active+clean; 226 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 960 KiB/s wr, 169 op/s
Jan 20 15:11:24 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:11:24 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:11:24 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:11:24.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:11:24 compute-1 ceph-mon[81775]: pgmap v2648: 321 pgs: 321 active+clean; 201 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 428 KiB/s wr, 154 op/s
Jan 20 15:11:25 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:11:25 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:11:25 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:11:25.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:11:25 compute-1 nova_compute[225855]: 2026-01-20 15:11:25.057 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:11:25 compute-1 sudo[298904]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:11:25 compute-1 sudo[298904]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:11:25 compute-1 sudo[298904]: pam_unix(sudo:session): session closed for user root
Jan 20 15:11:25 compute-1 sudo[298929]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 15:11:25 compute-1 sudo[298929]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:11:25 compute-1 sudo[298929]: pam_unix(sudo:session): session closed for user root
Jan 20 15:11:26 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:11:26 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:11:26 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e390 e390: 3 total, 3 up, 3 in
Jan 20 15:11:26 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:11:26 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:11:26 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:11:26.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:11:27 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:11:27 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:11:27 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:11:27.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:11:27 compute-1 NetworkManager[49104]: <info>  [1768921887.3456] manager: (patch-br-int-to-provnet-b62c391b-f7a3-4a38-a0df-72ac0383ca74): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/330)
Jan 20 15:11:27 compute-1 nova_compute[225855]: 2026-01-20 15:11:27.344 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:11:27 compute-1 NetworkManager[49104]: <info>  [1768921887.3463] manager: (patch-provnet-b62c391b-f7a3-4a38-a0df-72ac0383ca74-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/331)
Jan 20 15:11:27 compute-1 nova_compute[225855]: 2026-01-20 15:11:27.481 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:11:27 compute-1 nova_compute[225855]: 2026-01-20 15:11:27.493 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:11:27 compute-1 ceph-mon[81775]: pgmap v2649: 321 pgs: 321 active+clean; 167 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 22 KiB/s wr, 216 op/s
Jan 20 15:11:27 compute-1 ceph-mon[81775]: osdmap e390: 3 total, 3 up, 3 in
Jan 20 15:11:27 compute-1 nova_compute[225855]: 2026-01-20 15:11:27.706 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:11:27 compute-1 nova_compute[225855]: 2026-01-20 15:11:27.718 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:11:28 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e390 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:11:28 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:11:28 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:11:28 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:11:28.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:11:29 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:11:29 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:11:29 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:11:29.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:11:29 compute-1 ceph-mon[81775]: pgmap v2651: 321 pgs: 321 active+clean; 167 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 26 KiB/s wr, 167 op/s
Jan 20 15:11:30 compute-1 nova_compute[225855]: 2026-01-20 15:11:30.059 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:11:30 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:11:30 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:11:30 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:11:30.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:11:31 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:11:31 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:11:31 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:11:31.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:11:31 compute-1 ceph-mon[81775]: pgmap v2652: 321 pgs: 321 active+clean; 167 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 20 KiB/s wr, 141 op/s
Jan 20 15:11:32 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:11:32 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:11:32 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:11:32.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:11:32 compute-1 nova_compute[225855]: 2026-01-20 15:11:32.709 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:11:32 compute-1 ceph-mon[81775]: pgmap v2653: 321 pgs: 321 active+clean; 167 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 19 KiB/s wr, 133 op/s
Jan 20 15:11:33 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:11:33 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:11:33 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:11:33.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:11:33 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e390 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:11:34 compute-1 nova_compute[225855]: 2026-01-20 15:11:34.504 225859 DEBUG oslo_concurrency.lockutils [None req-460bb514-b6ca-4c3a-91b8-5945a3a306a8 8e699d2accae4b489d779507db44504e 37c01d33832740c3ba018515e081285b - - default default] Acquiring lock "7a223382-86d1-478e-8324-01ef43aef7e1" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:11:34 compute-1 nova_compute[225855]: 2026-01-20 15:11:34.504 225859 DEBUG oslo_concurrency.lockutils [None req-460bb514-b6ca-4c3a-91b8-5945a3a306a8 8e699d2accae4b489d779507db44504e 37c01d33832740c3ba018515e081285b - - default default] Lock "7a223382-86d1-478e-8324-01ef43aef7e1" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:11:34 compute-1 nova_compute[225855]: 2026-01-20 15:11:34.523 225859 DEBUG nova.compute.manager [None req-460bb514-b6ca-4c3a-91b8-5945a3a306a8 8e699d2accae4b489d779507db44504e 37c01d33832740c3ba018515e081285b - - default default] [instance: 7a223382-86d1-478e-8324-01ef43aef7e1] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 20 15:11:34 compute-1 nova_compute[225855]: 2026-01-20 15:11:34.630 225859 DEBUG oslo_concurrency.lockutils [None req-460bb514-b6ca-4c3a-91b8-5945a3a306a8 8e699d2accae4b489d779507db44504e 37c01d33832740c3ba018515e081285b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:11:34 compute-1 nova_compute[225855]: 2026-01-20 15:11:34.630 225859 DEBUG oslo_concurrency.lockutils [None req-460bb514-b6ca-4c3a-91b8-5945a3a306a8 8e699d2accae4b489d779507db44504e 37c01d33832740c3ba018515e081285b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:11:34 compute-1 nova_compute[225855]: 2026-01-20 15:11:34.638 225859 DEBUG nova.virt.hardware [None req-460bb514-b6ca-4c3a-91b8-5945a3a306a8 8e699d2accae4b489d779507db44504e 37c01d33832740c3ba018515e081285b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 20 15:11:34 compute-1 nova_compute[225855]: 2026-01-20 15:11:34.639 225859 INFO nova.compute.claims [None req-460bb514-b6ca-4c3a-91b8-5945a3a306a8 8e699d2accae4b489d779507db44504e 37c01d33832740c3ba018515e081285b - - default default] [instance: 7a223382-86d1-478e-8324-01ef43aef7e1] Claim successful on node compute-1.ctlplane.example.com
Jan 20 15:11:34 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:11:34 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:11:34 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:11:34.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:11:34 compute-1 ceph-mon[81775]: pgmap v2654: 321 pgs: 321 active+clean; 167 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 1.5 KiB/s wr, 128 op/s
Jan 20 15:11:34 compute-1 nova_compute[225855]: 2026-01-20 15:11:34.757 225859 DEBUG oslo_concurrency.processutils [None req-460bb514-b6ca-4c3a-91b8-5945a3a306a8 8e699d2accae4b489d779507db44504e 37c01d33832740c3ba018515e081285b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:11:35 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:11:35 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:11:35 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:11:35.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:11:35 compute-1 nova_compute[225855]: 2026-01-20 15:11:35.029 225859 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1768921880.0275724, b661b8a2-8bea-46be-afe4-537fd2523387 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:11:35 compute-1 nova_compute[225855]: 2026-01-20 15:11:35.029 225859 INFO nova.compute.manager [-] [instance: b661b8a2-8bea-46be-afe4-537fd2523387] VM Stopped (Lifecycle Event)
Jan 20 15:11:35 compute-1 nova_compute[225855]: 2026-01-20 15:11:35.062 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:11:35 compute-1 nova_compute[225855]: 2026-01-20 15:11:35.066 225859 DEBUG nova.compute.manager [None req-d2297132-de63-4433-9602-0dcd0c57feff - - - - - -] [instance: b661b8a2-8bea-46be-afe4-537fd2523387] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:11:35 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:11:35 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1087186560' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:11:35 compute-1 nova_compute[225855]: 2026-01-20 15:11:35.186 225859 DEBUG oslo_concurrency.processutils [None req-460bb514-b6ca-4c3a-91b8-5945a3a306a8 8e699d2accae4b489d779507db44504e 37c01d33832740c3ba018515e081285b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:11:35 compute-1 nova_compute[225855]: 2026-01-20 15:11:35.191 225859 DEBUG nova.compute.provider_tree [None req-460bb514-b6ca-4c3a-91b8-5945a3a306a8 8e699d2accae4b489d779507db44504e 37c01d33832740c3ba018515e081285b - - default default] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:11:35 compute-1 nova_compute[225855]: 2026-01-20 15:11:35.218 225859 DEBUG nova.scheduler.client.report [None req-460bb514-b6ca-4c3a-91b8-5945a3a306a8 8e699d2accae4b489d779507db44504e 37c01d33832740c3ba018515e081285b - - default default] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:11:35 compute-1 nova_compute[225855]: 2026-01-20 15:11:35.239 225859 DEBUG oslo_concurrency.lockutils [None req-460bb514-b6ca-4c3a-91b8-5945a3a306a8 8e699d2accae4b489d779507db44504e 37c01d33832740c3ba018515e081285b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.609s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:11:35 compute-1 nova_compute[225855]: 2026-01-20 15:11:35.240 225859 DEBUG nova.compute.manager [None req-460bb514-b6ca-4c3a-91b8-5945a3a306a8 8e699d2accae4b489d779507db44504e 37c01d33832740c3ba018515e081285b - - default default] [instance: 7a223382-86d1-478e-8324-01ef43aef7e1] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 20 15:11:35 compute-1 nova_compute[225855]: 2026-01-20 15:11:35.292 225859 DEBUG nova.compute.manager [None req-460bb514-b6ca-4c3a-91b8-5945a3a306a8 8e699d2accae4b489d779507db44504e 37c01d33832740c3ba018515e081285b - - default default] [instance: 7a223382-86d1-478e-8324-01ef43aef7e1] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948
Jan 20 15:11:35 compute-1 nova_compute[225855]: 2026-01-20 15:11:35.308 225859 INFO nova.virt.libvirt.driver [None req-460bb514-b6ca-4c3a-91b8-5945a3a306a8 8e699d2accae4b489d779507db44504e 37c01d33832740c3ba018515e081285b - - default default] [instance: 7a223382-86d1-478e-8324-01ef43aef7e1] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 20 15:11:35 compute-1 nova_compute[225855]: 2026-01-20 15:11:35.331 225859 DEBUG nova.compute.manager [None req-460bb514-b6ca-4c3a-91b8-5945a3a306a8 8e699d2accae4b489d779507db44504e 37c01d33832740c3ba018515e081285b - - default default] [instance: 7a223382-86d1-478e-8324-01ef43aef7e1] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 20 15:11:35 compute-1 nova_compute[225855]: 2026-01-20 15:11:35.441 225859 DEBUG nova.compute.manager [None req-460bb514-b6ca-4c3a-91b8-5945a3a306a8 8e699d2accae4b489d779507db44504e 37c01d33832740c3ba018515e081285b - - default default] [instance: 7a223382-86d1-478e-8324-01ef43aef7e1] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 20 15:11:35 compute-1 nova_compute[225855]: 2026-01-20 15:11:35.443 225859 DEBUG nova.virt.libvirt.driver [None req-460bb514-b6ca-4c3a-91b8-5945a3a306a8 8e699d2accae4b489d779507db44504e 37c01d33832740c3ba018515e081285b - - default default] [instance: 7a223382-86d1-478e-8324-01ef43aef7e1] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 20 15:11:35 compute-1 nova_compute[225855]: 2026-01-20 15:11:35.444 225859 INFO nova.virt.libvirt.driver [None req-460bb514-b6ca-4c3a-91b8-5945a3a306a8 8e699d2accae4b489d779507db44504e 37c01d33832740c3ba018515e081285b - - default default] [instance: 7a223382-86d1-478e-8324-01ef43aef7e1] Creating image(s)
Jan 20 15:11:35 compute-1 nova_compute[225855]: 2026-01-20 15:11:35.477 225859 DEBUG nova.storage.rbd_utils [None req-460bb514-b6ca-4c3a-91b8-5945a3a306a8 8e699d2accae4b489d779507db44504e 37c01d33832740c3ba018515e081285b - - default default] rbd image 7a223382-86d1-478e-8324-01ef43aef7e1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:11:35 compute-1 nova_compute[225855]: 2026-01-20 15:11:35.505 225859 DEBUG nova.storage.rbd_utils [None req-460bb514-b6ca-4c3a-91b8-5945a3a306a8 8e699d2accae4b489d779507db44504e 37c01d33832740c3ba018515e081285b - - default default] rbd image 7a223382-86d1-478e-8324-01ef43aef7e1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:11:35 compute-1 nova_compute[225855]: 2026-01-20 15:11:35.536 225859 DEBUG nova.storage.rbd_utils [None req-460bb514-b6ca-4c3a-91b8-5945a3a306a8 8e699d2accae4b489d779507db44504e 37c01d33832740c3ba018515e081285b - - default default] rbd image 7a223382-86d1-478e-8324-01ef43aef7e1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:11:35 compute-1 nova_compute[225855]: 2026-01-20 15:11:35.540 225859 DEBUG oslo_concurrency.processutils [None req-460bb514-b6ca-4c3a-91b8-5945a3a306a8 8e699d2accae4b489d779507db44504e 37c01d33832740c3ba018515e081285b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:11:35 compute-1 nova_compute[225855]: 2026-01-20 15:11:35.621 225859 DEBUG oslo_concurrency.processutils [None req-460bb514-b6ca-4c3a-91b8-5945a3a306a8 8e699d2accae4b489d779507db44504e 37c01d33832740c3ba018515e081285b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:11:35 compute-1 nova_compute[225855]: 2026-01-20 15:11:35.622 225859 DEBUG oslo_concurrency.lockutils [None req-460bb514-b6ca-4c3a-91b8-5945a3a306a8 8e699d2accae4b489d779507db44504e 37c01d33832740c3ba018515e081285b - - default default] Acquiring lock "82d5c1918fd7c974214c7a48c1793a7a82560462" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:11:35 compute-1 nova_compute[225855]: 2026-01-20 15:11:35.622 225859 DEBUG oslo_concurrency.lockutils [None req-460bb514-b6ca-4c3a-91b8-5945a3a306a8 8e699d2accae4b489d779507db44504e 37c01d33832740c3ba018515e081285b - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:11:35 compute-1 nova_compute[225855]: 2026-01-20 15:11:35.622 225859 DEBUG oslo_concurrency.lockutils [None req-460bb514-b6ca-4c3a-91b8-5945a3a306a8 8e699d2accae4b489d779507db44504e 37c01d33832740c3ba018515e081285b - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:11:35 compute-1 nova_compute[225855]: 2026-01-20 15:11:35.651 225859 DEBUG nova.storage.rbd_utils [None req-460bb514-b6ca-4c3a-91b8-5945a3a306a8 8e699d2accae4b489d779507db44504e 37c01d33832740c3ba018515e081285b - - default default] rbd image 7a223382-86d1-478e-8324-01ef43aef7e1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:11:35 compute-1 nova_compute[225855]: 2026-01-20 15:11:35.654 225859 DEBUG oslo_concurrency.processutils [None req-460bb514-b6ca-4c3a-91b8-5945a3a306a8 8e699d2accae4b489d779507db44504e 37c01d33832740c3ba018515e081285b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 7a223382-86d1-478e-8324-01ef43aef7e1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:11:35 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/1087186560' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:11:36 compute-1 nova_compute[225855]: 2026-01-20 15:11:36.028 225859 DEBUG oslo_concurrency.processutils [None req-460bb514-b6ca-4c3a-91b8-5945a3a306a8 8e699d2accae4b489d779507db44504e 37c01d33832740c3ba018515e081285b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 7a223382-86d1-478e-8324-01ef43aef7e1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.374s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:11:36 compute-1 nova_compute[225855]: 2026-01-20 15:11:36.125 225859 DEBUG nova.storage.rbd_utils [None req-460bb514-b6ca-4c3a-91b8-5945a3a306a8 8e699d2accae4b489d779507db44504e 37c01d33832740c3ba018515e081285b - - default default] resizing rbd image 7a223382-86d1-478e-8324-01ef43aef7e1_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 20 15:11:36 compute-1 nova_compute[225855]: 2026-01-20 15:11:36.237 225859 DEBUG nova.objects.instance [None req-460bb514-b6ca-4c3a-91b8-5945a3a306a8 8e699d2accae4b489d779507db44504e 37c01d33832740c3ba018515e081285b - - default default] Lazy-loading 'migration_context' on Instance uuid 7a223382-86d1-478e-8324-01ef43aef7e1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:11:36 compute-1 nova_compute[225855]: 2026-01-20 15:11:36.252 225859 DEBUG nova.virt.libvirt.driver [None req-460bb514-b6ca-4c3a-91b8-5945a3a306a8 8e699d2accae4b489d779507db44504e 37c01d33832740c3ba018515e081285b - - default default] [instance: 7a223382-86d1-478e-8324-01ef43aef7e1] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 20 15:11:36 compute-1 nova_compute[225855]: 2026-01-20 15:11:36.253 225859 DEBUG nova.virt.libvirt.driver [None req-460bb514-b6ca-4c3a-91b8-5945a3a306a8 8e699d2accae4b489d779507db44504e 37c01d33832740c3ba018515e081285b - - default default] [instance: 7a223382-86d1-478e-8324-01ef43aef7e1] Ensure instance console log exists: /var/lib/nova/instances/7a223382-86d1-478e-8324-01ef43aef7e1/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 20 15:11:36 compute-1 nova_compute[225855]: 2026-01-20 15:11:36.253 225859 DEBUG oslo_concurrency.lockutils [None req-460bb514-b6ca-4c3a-91b8-5945a3a306a8 8e699d2accae4b489d779507db44504e 37c01d33832740c3ba018515e081285b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:11:36 compute-1 nova_compute[225855]: 2026-01-20 15:11:36.253 225859 DEBUG oslo_concurrency.lockutils [None req-460bb514-b6ca-4c3a-91b8-5945a3a306a8 8e699d2accae4b489d779507db44504e 37c01d33832740c3ba018515e081285b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:11:36 compute-1 nova_compute[225855]: 2026-01-20 15:11:36.254 225859 DEBUG oslo_concurrency.lockutils [None req-460bb514-b6ca-4c3a-91b8-5945a3a306a8 8e699d2accae4b489d779507db44504e 37c01d33832740c3ba018515e081285b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:11:36 compute-1 nova_compute[225855]: 2026-01-20 15:11:36.255 225859 DEBUG nova.virt.libvirt.driver [None req-460bb514-b6ca-4c3a-91b8-5945a3a306a8 8e699d2accae4b489d779507db44504e 37c01d33832740c3ba018515e081285b - - default default] [instance: 7a223382-86d1-478e-8324-01ef43aef7e1] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'device_type': 'disk', 'encryption_options': None, 'size': 0, 'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'guest_format': None, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'image_id': 'a32b3e07-16d8-46fd-9a7b-c242c432fcf9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 20 15:11:36 compute-1 nova_compute[225855]: 2026-01-20 15:11:36.259 225859 WARNING nova.virt.libvirt.driver [None req-460bb514-b6ca-4c3a-91b8-5945a3a306a8 8e699d2accae4b489d779507db44504e 37c01d33832740c3ba018515e081285b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 15:11:36 compute-1 nova_compute[225855]: 2026-01-20 15:11:36.265 225859 DEBUG nova.virt.libvirt.host [None req-460bb514-b6ca-4c3a-91b8-5945a3a306a8 8e699d2accae4b489d779507db44504e 37c01d33832740c3ba018515e081285b - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 20 15:11:36 compute-1 nova_compute[225855]: 2026-01-20 15:11:36.266 225859 DEBUG nova.virt.libvirt.host [None req-460bb514-b6ca-4c3a-91b8-5945a3a306a8 8e699d2accae4b489d779507db44504e 37c01d33832740c3ba018515e081285b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 20 15:11:36 compute-1 nova_compute[225855]: 2026-01-20 15:11:36.269 225859 DEBUG nova.virt.libvirt.host [None req-460bb514-b6ca-4c3a-91b8-5945a3a306a8 8e699d2accae4b489d779507db44504e 37c01d33832740c3ba018515e081285b - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 20 15:11:36 compute-1 nova_compute[225855]: 2026-01-20 15:11:36.269 225859 DEBUG nova.virt.libvirt.host [None req-460bb514-b6ca-4c3a-91b8-5945a3a306a8 8e699d2accae4b489d779507db44504e 37c01d33832740c3ba018515e081285b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 20 15:11:36 compute-1 nova_compute[225855]: 2026-01-20 15:11:36.271 225859 DEBUG nova.virt.libvirt.driver [None req-460bb514-b6ca-4c3a-91b8-5945a3a306a8 8e699d2accae4b489d779507db44504e 37c01d33832740c3ba018515e081285b - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 20 15:11:36 compute-1 nova_compute[225855]: 2026-01-20 15:11:36.271 225859 DEBUG nova.virt.hardware [None req-460bb514-b6ca-4c3a-91b8-5945a3a306a8 8e699d2accae4b489d779507db44504e 37c01d33832740c3ba018515e081285b - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-20T14:21:55Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='522deaab-a741-4dbb-932d-d8b13a211c33',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 20 15:11:36 compute-1 nova_compute[225855]: 2026-01-20 15:11:36.271 225859 DEBUG nova.virt.hardware [None req-460bb514-b6ca-4c3a-91b8-5945a3a306a8 8e699d2accae4b489d779507db44504e 37c01d33832740c3ba018515e081285b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 20 15:11:36 compute-1 nova_compute[225855]: 2026-01-20 15:11:36.271 225859 DEBUG nova.virt.hardware [None req-460bb514-b6ca-4c3a-91b8-5945a3a306a8 8e699d2accae4b489d779507db44504e 37c01d33832740c3ba018515e081285b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 20 15:11:36 compute-1 nova_compute[225855]: 2026-01-20 15:11:36.271 225859 DEBUG nova.virt.hardware [None req-460bb514-b6ca-4c3a-91b8-5945a3a306a8 8e699d2accae4b489d779507db44504e 37c01d33832740c3ba018515e081285b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 20 15:11:36 compute-1 nova_compute[225855]: 2026-01-20 15:11:36.272 225859 DEBUG nova.virt.hardware [None req-460bb514-b6ca-4c3a-91b8-5945a3a306a8 8e699d2accae4b489d779507db44504e 37c01d33832740c3ba018515e081285b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 20 15:11:36 compute-1 nova_compute[225855]: 2026-01-20 15:11:36.272 225859 DEBUG nova.virt.hardware [None req-460bb514-b6ca-4c3a-91b8-5945a3a306a8 8e699d2accae4b489d779507db44504e 37c01d33832740c3ba018515e081285b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 20 15:11:36 compute-1 nova_compute[225855]: 2026-01-20 15:11:36.272 225859 DEBUG nova.virt.hardware [None req-460bb514-b6ca-4c3a-91b8-5945a3a306a8 8e699d2accae4b489d779507db44504e 37c01d33832740c3ba018515e081285b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 20 15:11:36 compute-1 nova_compute[225855]: 2026-01-20 15:11:36.272 225859 DEBUG nova.virt.hardware [None req-460bb514-b6ca-4c3a-91b8-5945a3a306a8 8e699d2accae4b489d779507db44504e 37c01d33832740c3ba018515e081285b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 20 15:11:36 compute-1 nova_compute[225855]: 2026-01-20 15:11:36.272 225859 DEBUG nova.virt.hardware [None req-460bb514-b6ca-4c3a-91b8-5945a3a306a8 8e699d2accae4b489d779507db44504e 37c01d33832740c3ba018515e081285b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 20 15:11:36 compute-1 nova_compute[225855]: 2026-01-20 15:11:36.272 225859 DEBUG nova.virt.hardware [None req-460bb514-b6ca-4c3a-91b8-5945a3a306a8 8e699d2accae4b489d779507db44504e 37c01d33832740c3ba018515e081285b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 20 15:11:36 compute-1 nova_compute[225855]: 2026-01-20 15:11:36.273 225859 DEBUG nova.virt.hardware [None req-460bb514-b6ca-4c3a-91b8-5945a3a306a8 8e699d2accae4b489d779507db44504e 37c01d33832740c3ba018515e081285b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 20 15:11:36 compute-1 nova_compute[225855]: 2026-01-20 15:11:36.275 225859 DEBUG oslo_concurrency.processutils [None req-460bb514-b6ca-4c3a-91b8-5945a3a306a8 8e699d2accae4b489d779507db44504e 37c01d33832740c3ba018515e081285b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:11:36 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:11:36 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:11:36 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:11:36.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:11:36 compute-1 ceph-mon[81775]: pgmap v2655: 321 pgs: 321 active+clean; 172 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 496 KiB/s rd, 847 KiB/s wr, 28 op/s
Jan 20 15:11:36 compute-1 nova_compute[225855]: 2026-01-20 15:11:36.779 225859 DEBUG oslo_concurrency.processutils [None req-460bb514-b6ca-4c3a-91b8-5945a3a306a8 8e699d2accae4b489d779507db44504e 37c01d33832740c3ba018515e081285b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.503s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:11:36 compute-1 nova_compute[225855]: 2026-01-20 15:11:36.812 225859 DEBUG nova.storage.rbd_utils [None req-460bb514-b6ca-4c3a-91b8-5945a3a306a8 8e699d2accae4b489d779507db44504e 37c01d33832740c3ba018515e081285b - - default default] rbd image 7a223382-86d1-478e-8324-01ef43aef7e1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:11:36 compute-1 nova_compute[225855]: 2026-01-20 15:11:36.817 225859 DEBUG oslo_concurrency.processutils [None req-460bb514-b6ca-4c3a-91b8-5945a3a306a8 8e699d2accae4b489d779507db44504e 37c01d33832740c3ba018515e081285b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:11:37 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:11:37 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:11:37 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:11:37.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:11:37 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 15:11:37 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3661868601' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:11:37 compute-1 nova_compute[225855]: 2026-01-20 15:11:37.266 225859 DEBUG oslo_concurrency.processutils [None req-460bb514-b6ca-4c3a-91b8-5945a3a306a8 8e699d2accae4b489d779507db44504e 37c01d33832740c3ba018515e081285b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:11:37 compute-1 nova_compute[225855]: 2026-01-20 15:11:37.268 225859 DEBUG nova.objects.instance [None req-460bb514-b6ca-4c3a-91b8-5945a3a306a8 8e699d2accae4b489d779507db44504e 37c01d33832740c3ba018515e081285b - - default default] Lazy-loading 'pci_devices' on Instance uuid 7a223382-86d1-478e-8324-01ef43aef7e1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:11:37 compute-1 nova_compute[225855]: 2026-01-20 15:11:37.295 225859 DEBUG nova.virt.libvirt.driver [None req-460bb514-b6ca-4c3a-91b8-5945a3a306a8 8e699d2accae4b489d779507db44504e 37c01d33832740c3ba018515e081285b - - default default] [instance: 7a223382-86d1-478e-8324-01ef43aef7e1] End _get_guest_xml xml=<domain type="kvm">
Jan 20 15:11:37 compute-1 nova_compute[225855]:   <uuid>7a223382-86d1-478e-8324-01ef43aef7e1</uuid>
Jan 20 15:11:37 compute-1 nova_compute[225855]:   <name>instance-000000b1</name>
Jan 20 15:11:37 compute-1 nova_compute[225855]:   <memory>131072</memory>
Jan 20 15:11:37 compute-1 nova_compute[225855]:   <vcpu>1</vcpu>
Jan 20 15:11:37 compute-1 nova_compute[225855]:   <metadata>
Jan 20 15:11:37 compute-1 nova_compute[225855]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 15:11:37 compute-1 nova_compute[225855]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 15:11:37 compute-1 nova_compute[225855]:       <nova:name>tempest-ServerShowV254Test-server-1643419890</nova:name>
Jan 20 15:11:37 compute-1 nova_compute[225855]:       <nova:creationTime>2026-01-20 15:11:36</nova:creationTime>
Jan 20 15:11:37 compute-1 nova_compute[225855]:       <nova:flavor name="m1.nano">
Jan 20 15:11:37 compute-1 nova_compute[225855]:         <nova:memory>128</nova:memory>
Jan 20 15:11:37 compute-1 nova_compute[225855]:         <nova:disk>1</nova:disk>
Jan 20 15:11:37 compute-1 nova_compute[225855]:         <nova:swap>0</nova:swap>
Jan 20 15:11:37 compute-1 nova_compute[225855]:         <nova:ephemeral>0</nova:ephemeral>
Jan 20 15:11:37 compute-1 nova_compute[225855]:         <nova:vcpus>1</nova:vcpus>
Jan 20 15:11:37 compute-1 nova_compute[225855]:       </nova:flavor>
Jan 20 15:11:37 compute-1 nova_compute[225855]:       <nova:owner>
Jan 20 15:11:37 compute-1 nova_compute[225855]:         <nova:user uuid="8e699d2accae4b489d779507db44504e">tempest-ServerShowV254Test-782263999-project-member</nova:user>
Jan 20 15:11:37 compute-1 nova_compute[225855]:         <nova:project uuid="37c01d33832740c3ba018515e081285b">tempest-ServerShowV254Test-782263999</nova:project>
Jan 20 15:11:37 compute-1 nova_compute[225855]:       </nova:owner>
Jan 20 15:11:37 compute-1 nova_compute[225855]:       <nova:root type="image" uuid="a32b3e07-16d8-46fd-9a7b-c242c432fcf9"/>
Jan 20 15:11:37 compute-1 nova_compute[225855]:       <nova:ports/>
Jan 20 15:11:37 compute-1 nova_compute[225855]:     </nova:instance>
Jan 20 15:11:37 compute-1 nova_compute[225855]:   </metadata>
Jan 20 15:11:37 compute-1 nova_compute[225855]:   <sysinfo type="smbios">
Jan 20 15:11:37 compute-1 nova_compute[225855]:     <system>
Jan 20 15:11:37 compute-1 nova_compute[225855]:       <entry name="manufacturer">RDO</entry>
Jan 20 15:11:37 compute-1 nova_compute[225855]:       <entry name="product">OpenStack Compute</entry>
Jan 20 15:11:37 compute-1 nova_compute[225855]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 15:11:37 compute-1 nova_compute[225855]:       <entry name="serial">7a223382-86d1-478e-8324-01ef43aef7e1</entry>
Jan 20 15:11:37 compute-1 nova_compute[225855]:       <entry name="uuid">7a223382-86d1-478e-8324-01ef43aef7e1</entry>
Jan 20 15:11:37 compute-1 nova_compute[225855]:       <entry name="family">Virtual Machine</entry>
Jan 20 15:11:37 compute-1 nova_compute[225855]:     </system>
Jan 20 15:11:37 compute-1 nova_compute[225855]:   </sysinfo>
Jan 20 15:11:37 compute-1 nova_compute[225855]:   <os>
Jan 20 15:11:37 compute-1 nova_compute[225855]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 20 15:11:37 compute-1 nova_compute[225855]:     <boot dev="hd"/>
Jan 20 15:11:37 compute-1 nova_compute[225855]:     <smbios mode="sysinfo"/>
Jan 20 15:11:37 compute-1 nova_compute[225855]:   </os>
Jan 20 15:11:37 compute-1 nova_compute[225855]:   <features>
Jan 20 15:11:37 compute-1 nova_compute[225855]:     <acpi/>
Jan 20 15:11:37 compute-1 nova_compute[225855]:     <apic/>
Jan 20 15:11:37 compute-1 nova_compute[225855]:     <vmcoreinfo/>
Jan 20 15:11:37 compute-1 nova_compute[225855]:   </features>
Jan 20 15:11:37 compute-1 nova_compute[225855]:   <clock offset="utc">
Jan 20 15:11:37 compute-1 nova_compute[225855]:     <timer name="pit" tickpolicy="delay"/>
Jan 20 15:11:37 compute-1 nova_compute[225855]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 20 15:11:37 compute-1 nova_compute[225855]:     <timer name="hpet" present="no"/>
Jan 20 15:11:37 compute-1 nova_compute[225855]:   </clock>
Jan 20 15:11:37 compute-1 nova_compute[225855]:   <cpu mode="custom" match="exact">
Jan 20 15:11:37 compute-1 nova_compute[225855]:     <model>Nehalem</model>
Jan 20 15:11:37 compute-1 nova_compute[225855]:     <topology sockets="1" cores="1" threads="1"/>
Jan 20 15:11:37 compute-1 nova_compute[225855]:   </cpu>
Jan 20 15:11:37 compute-1 nova_compute[225855]:   <devices>
Jan 20 15:11:37 compute-1 nova_compute[225855]:     <disk type="network" device="disk">
Jan 20 15:11:37 compute-1 nova_compute[225855]:       <driver type="raw" cache="none"/>
Jan 20 15:11:37 compute-1 nova_compute[225855]:       <source protocol="rbd" name="vms/7a223382-86d1-478e-8324-01ef43aef7e1_disk">
Jan 20 15:11:37 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 15:11:37 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 15:11:37 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 15:11:37 compute-1 nova_compute[225855]:       </source>
Jan 20 15:11:37 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 15:11:37 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 15:11:37 compute-1 nova_compute[225855]:       </auth>
Jan 20 15:11:37 compute-1 nova_compute[225855]:       <target dev="vda" bus="virtio"/>
Jan 20 15:11:37 compute-1 nova_compute[225855]:     </disk>
Jan 20 15:11:37 compute-1 nova_compute[225855]:     <disk type="network" device="cdrom">
Jan 20 15:11:37 compute-1 nova_compute[225855]:       <driver type="raw" cache="none"/>
Jan 20 15:11:37 compute-1 nova_compute[225855]:       <source protocol="rbd" name="vms/7a223382-86d1-478e-8324-01ef43aef7e1_disk.config">
Jan 20 15:11:37 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 15:11:37 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 15:11:37 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 15:11:37 compute-1 nova_compute[225855]:       </source>
Jan 20 15:11:37 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 15:11:37 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 15:11:37 compute-1 nova_compute[225855]:       </auth>
Jan 20 15:11:37 compute-1 nova_compute[225855]:       <target dev="sda" bus="sata"/>
Jan 20 15:11:37 compute-1 nova_compute[225855]:     </disk>
Jan 20 15:11:37 compute-1 nova_compute[225855]:     <serial type="pty">
Jan 20 15:11:37 compute-1 nova_compute[225855]:       <log file="/var/lib/nova/instances/7a223382-86d1-478e-8324-01ef43aef7e1/console.log" append="off"/>
Jan 20 15:11:37 compute-1 nova_compute[225855]:     </serial>
Jan 20 15:11:37 compute-1 nova_compute[225855]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 20 15:11:37 compute-1 nova_compute[225855]:     <video>
Jan 20 15:11:37 compute-1 nova_compute[225855]:       <model type="virtio"/>
Jan 20 15:11:37 compute-1 nova_compute[225855]:     </video>
Jan 20 15:11:37 compute-1 nova_compute[225855]:     <input type="tablet" bus="usb"/>
Jan 20 15:11:37 compute-1 nova_compute[225855]:     <rng model="virtio">
Jan 20 15:11:37 compute-1 nova_compute[225855]:       <backend model="random">/dev/urandom</backend>
Jan 20 15:11:37 compute-1 nova_compute[225855]:     </rng>
Jan 20 15:11:37 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root"/>
Jan 20 15:11:37 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:11:37 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:11:37 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:11:37 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:11:37 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:11:37 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:11:37 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:11:37 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:11:37 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:11:37 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:11:37 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:11:37 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:11:37 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:11:37 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:11:37 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:11:37 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:11:37 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:11:37 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:11:37 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:11:37 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:11:37 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:11:37 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:11:37 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:11:37 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:11:37 compute-1 nova_compute[225855]:     <controller type="usb" index="0"/>
Jan 20 15:11:37 compute-1 nova_compute[225855]:     <memballoon model="virtio">
Jan 20 15:11:37 compute-1 nova_compute[225855]:       <stats period="10"/>
Jan 20 15:11:37 compute-1 nova_compute[225855]:     </memballoon>
Jan 20 15:11:37 compute-1 nova_compute[225855]:   </devices>
Jan 20 15:11:37 compute-1 nova_compute[225855]: </domain>
Jan 20 15:11:37 compute-1 nova_compute[225855]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 20 15:11:37 compute-1 nova_compute[225855]: 2026-01-20 15:11:37.375 225859 DEBUG nova.virt.libvirt.driver [None req-460bb514-b6ca-4c3a-91b8-5945a3a306a8 8e699d2accae4b489d779507db44504e 37c01d33832740c3ba018515e081285b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 15:11:37 compute-1 nova_compute[225855]: 2026-01-20 15:11:37.375 225859 DEBUG nova.virt.libvirt.driver [None req-460bb514-b6ca-4c3a-91b8-5945a3a306a8 8e699d2accae4b489d779507db44504e 37c01d33832740c3ba018515e081285b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 15:11:37 compute-1 nova_compute[225855]: 2026-01-20 15:11:37.376 225859 INFO nova.virt.libvirt.driver [None req-460bb514-b6ca-4c3a-91b8-5945a3a306a8 8e699d2accae4b489d779507db44504e 37c01d33832740c3ba018515e081285b - - default default] [instance: 7a223382-86d1-478e-8324-01ef43aef7e1] Using config drive
Jan 20 15:11:37 compute-1 nova_compute[225855]: 2026-01-20 15:11:37.406 225859 DEBUG nova.storage.rbd_utils [None req-460bb514-b6ca-4c3a-91b8-5945a3a306a8 8e699d2accae4b489d779507db44504e 37c01d33832740c3ba018515e081285b - - default default] rbd image 7a223382-86d1-478e-8324-01ef43aef7e1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:11:37 compute-1 podman[299212]: 2026-01-20 15:11:37.419688192 +0000 UTC m=+0.084072276 container health_status 72be95c12e041eb2cd1a16ad70dfa1391d9f03a5f3180a614ead9459dcfcd46f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 20 15:11:37 compute-1 nova_compute[225855]: 2026-01-20 15:11:37.688 225859 INFO nova.virt.libvirt.driver [None req-460bb514-b6ca-4c3a-91b8-5945a3a306a8 8e699d2accae4b489d779507db44504e 37c01d33832740c3ba018515e081285b - - default default] [instance: 7a223382-86d1-478e-8324-01ef43aef7e1] Creating config drive at /var/lib/nova/instances/7a223382-86d1-478e-8324-01ef43aef7e1/disk.config
Jan 20 15:11:37 compute-1 nova_compute[225855]: 2026-01-20 15:11:37.695 225859 DEBUG oslo_concurrency.processutils [None req-460bb514-b6ca-4c3a-91b8-5945a3a306a8 8e699d2accae4b489d779507db44504e 37c01d33832740c3ba018515e081285b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7a223382-86d1-478e-8324-01ef43aef7e1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpbyxngv9v execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:11:37 compute-1 nova_compute[225855]: 2026-01-20 15:11:37.722 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:11:37 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/1084543734' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:11:37 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/3661868601' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:11:37 compute-1 nova_compute[225855]: 2026-01-20 15:11:37.829 225859 DEBUG oslo_concurrency.processutils [None req-460bb514-b6ca-4c3a-91b8-5945a3a306a8 8e699d2accae4b489d779507db44504e 37c01d33832740c3ba018515e081285b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7a223382-86d1-478e-8324-01ef43aef7e1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpbyxngv9v" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:11:37 compute-1 nova_compute[225855]: 2026-01-20 15:11:37.856 225859 DEBUG nova.storage.rbd_utils [None req-460bb514-b6ca-4c3a-91b8-5945a3a306a8 8e699d2accae4b489d779507db44504e 37c01d33832740c3ba018515e081285b - - default default] rbd image 7a223382-86d1-478e-8324-01ef43aef7e1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:11:37 compute-1 nova_compute[225855]: 2026-01-20 15:11:37.859 225859 DEBUG oslo_concurrency.processutils [None req-460bb514-b6ca-4c3a-91b8-5945a3a306a8 8e699d2accae4b489d779507db44504e 37c01d33832740c3ba018515e081285b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/7a223382-86d1-478e-8324-01ef43aef7e1/disk.config 7a223382-86d1-478e-8324-01ef43aef7e1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:11:37 compute-1 nova_compute[225855]: 2026-01-20 15:11:37.998 225859 DEBUG oslo_concurrency.processutils [None req-460bb514-b6ca-4c3a-91b8-5945a3a306a8 8e699d2accae4b489d779507db44504e 37c01d33832740c3ba018515e081285b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/7a223382-86d1-478e-8324-01ef43aef7e1/disk.config 7a223382-86d1-478e-8324-01ef43aef7e1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.139s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:11:37 compute-1 nova_compute[225855]: 2026-01-20 15:11:37.999 225859 INFO nova.virt.libvirt.driver [None req-460bb514-b6ca-4c3a-91b8-5945a3a306a8 8e699d2accae4b489d779507db44504e 37c01d33832740c3ba018515e081285b - - default default] [instance: 7a223382-86d1-478e-8324-01ef43aef7e1] Deleting local config drive /var/lib/nova/instances/7a223382-86d1-478e-8324-01ef43aef7e1/disk.config because it was imported into RBD.
Jan 20 15:11:38 compute-1 systemd-machined[194361]: New machine qemu-92-instance-000000b1.
Jan 20 15:11:38 compute-1 systemd[1]: Started Virtual Machine qemu-92-instance-000000b1.
Jan 20 15:11:38 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e390 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:11:38 compute-1 nova_compute[225855]: 2026-01-20 15:11:38.485 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768921898.4844081, 7a223382-86d1-478e-8324-01ef43aef7e1 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:11:38 compute-1 nova_compute[225855]: 2026-01-20 15:11:38.486 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 7a223382-86d1-478e-8324-01ef43aef7e1] VM Resumed (Lifecycle Event)
Jan 20 15:11:38 compute-1 nova_compute[225855]: 2026-01-20 15:11:38.492 225859 DEBUG nova.compute.manager [None req-460bb514-b6ca-4c3a-91b8-5945a3a306a8 8e699d2accae4b489d779507db44504e 37c01d33832740c3ba018515e081285b - - default default] [instance: 7a223382-86d1-478e-8324-01ef43aef7e1] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 20 15:11:38 compute-1 nova_compute[225855]: 2026-01-20 15:11:38.492 225859 DEBUG nova.virt.libvirt.driver [None req-460bb514-b6ca-4c3a-91b8-5945a3a306a8 8e699d2accae4b489d779507db44504e 37c01d33832740c3ba018515e081285b - - default default] [instance: 7a223382-86d1-478e-8324-01ef43aef7e1] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 20 15:11:38 compute-1 nova_compute[225855]: 2026-01-20 15:11:38.496 225859 INFO nova.virt.libvirt.driver [-] [instance: 7a223382-86d1-478e-8324-01ef43aef7e1] Instance spawned successfully.
Jan 20 15:11:38 compute-1 nova_compute[225855]: 2026-01-20 15:11:38.496 225859 DEBUG nova.virt.libvirt.driver [None req-460bb514-b6ca-4c3a-91b8-5945a3a306a8 8e699d2accae4b489d779507db44504e 37c01d33832740c3ba018515e081285b - - default default] [instance: 7a223382-86d1-478e-8324-01ef43aef7e1] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 20 15:11:38 compute-1 nova_compute[225855]: 2026-01-20 15:11:38.520 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 7a223382-86d1-478e-8324-01ef43aef7e1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:11:38 compute-1 nova_compute[225855]: 2026-01-20 15:11:38.525 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 7a223382-86d1-478e-8324-01ef43aef7e1] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 15:11:38 compute-1 nova_compute[225855]: 2026-01-20 15:11:38.528 225859 DEBUG nova.virt.libvirt.driver [None req-460bb514-b6ca-4c3a-91b8-5945a3a306a8 8e699d2accae4b489d779507db44504e 37c01d33832740c3ba018515e081285b - - default default] [instance: 7a223382-86d1-478e-8324-01ef43aef7e1] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:11:38 compute-1 nova_compute[225855]: 2026-01-20 15:11:38.528 225859 DEBUG nova.virt.libvirt.driver [None req-460bb514-b6ca-4c3a-91b8-5945a3a306a8 8e699d2accae4b489d779507db44504e 37c01d33832740c3ba018515e081285b - - default default] [instance: 7a223382-86d1-478e-8324-01ef43aef7e1] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:11:38 compute-1 nova_compute[225855]: 2026-01-20 15:11:38.529 225859 DEBUG nova.virt.libvirt.driver [None req-460bb514-b6ca-4c3a-91b8-5945a3a306a8 8e699d2accae4b489d779507db44504e 37c01d33832740c3ba018515e081285b - - default default] [instance: 7a223382-86d1-478e-8324-01ef43aef7e1] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:11:38 compute-1 nova_compute[225855]: 2026-01-20 15:11:38.529 225859 DEBUG nova.virt.libvirt.driver [None req-460bb514-b6ca-4c3a-91b8-5945a3a306a8 8e699d2accae4b489d779507db44504e 37c01d33832740c3ba018515e081285b - - default default] [instance: 7a223382-86d1-478e-8324-01ef43aef7e1] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:11:38 compute-1 nova_compute[225855]: 2026-01-20 15:11:38.529 225859 DEBUG nova.virt.libvirt.driver [None req-460bb514-b6ca-4c3a-91b8-5945a3a306a8 8e699d2accae4b489d779507db44504e 37c01d33832740c3ba018515e081285b - - default default] [instance: 7a223382-86d1-478e-8324-01ef43aef7e1] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:11:38 compute-1 nova_compute[225855]: 2026-01-20 15:11:38.530 225859 DEBUG nova.virt.libvirt.driver [None req-460bb514-b6ca-4c3a-91b8-5945a3a306a8 8e699d2accae4b489d779507db44504e 37c01d33832740c3ba018515e081285b - - default default] [instance: 7a223382-86d1-478e-8324-01ef43aef7e1] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:11:38 compute-1 nova_compute[225855]: 2026-01-20 15:11:38.663 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 7a223382-86d1-478e-8324-01ef43aef7e1] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 15:11:38 compute-1 nova_compute[225855]: 2026-01-20 15:11:38.664 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768921898.4855099, 7a223382-86d1-478e-8324-01ef43aef7e1 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:11:38 compute-1 nova_compute[225855]: 2026-01-20 15:11:38.664 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 7a223382-86d1-478e-8324-01ef43aef7e1] VM Started (Lifecycle Event)
Jan 20 15:11:38 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:11:38 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:11:38 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:11:38.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:11:38 compute-1 nova_compute[225855]: 2026-01-20 15:11:38.729 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 7a223382-86d1-478e-8324-01ef43aef7e1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:11:38 compute-1 nova_compute[225855]: 2026-01-20 15:11:38.733 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 7a223382-86d1-478e-8324-01ef43aef7e1] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 15:11:38 compute-1 nova_compute[225855]: 2026-01-20 15:11:38.748 225859 INFO nova.compute.manager [None req-460bb514-b6ca-4c3a-91b8-5945a3a306a8 8e699d2accae4b489d779507db44504e 37c01d33832740c3ba018515e081285b - - default default] [instance: 7a223382-86d1-478e-8324-01ef43aef7e1] Took 3.31 seconds to spawn the instance on the hypervisor.
Jan 20 15:11:38 compute-1 nova_compute[225855]: 2026-01-20 15:11:38.749 225859 DEBUG nova.compute.manager [None req-460bb514-b6ca-4c3a-91b8-5945a3a306a8 8e699d2accae4b489d779507db44504e 37c01d33832740c3ba018515e081285b - - default default] [instance: 7a223382-86d1-478e-8324-01ef43aef7e1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:11:38 compute-1 nova_compute[225855]: 2026-01-20 15:11:38.757 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 7a223382-86d1-478e-8324-01ef43aef7e1] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 15:11:38 compute-1 ceph-mon[81775]: pgmap v2656: 321 pgs: 321 active+clean; 172 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 418 KiB/s rd, 713 KiB/s wr, 23 op/s
Jan 20 15:11:38 compute-1 nova_compute[225855]: 2026-01-20 15:11:38.804 225859 INFO nova.compute.manager [None req-460bb514-b6ca-4c3a-91b8-5945a3a306a8 8e699d2accae4b489d779507db44504e 37c01d33832740c3ba018515e081285b - - default default] [instance: 7a223382-86d1-478e-8324-01ef43aef7e1] Took 4.21 seconds to build instance.
Jan 20 15:11:38 compute-1 nova_compute[225855]: 2026-01-20 15:11:38.819 225859 DEBUG oslo_concurrency.lockutils [None req-460bb514-b6ca-4c3a-91b8-5945a3a306a8 8e699d2accae4b489d779507db44504e 37c01d33832740c3ba018515e081285b - - default default] Lock "7a223382-86d1-478e-8324-01ef43aef7e1" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 4.315s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:11:39 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:11:39 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:11:39 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:11:39.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:11:39 compute-1 sudo[299354]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:11:39 compute-1 sudo[299354]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:11:39 compute-1 sudo[299354]: pam_unix(sudo:session): session closed for user root
Jan 20 15:11:39 compute-1 sudo[299380]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:11:39 compute-1 sudo[299380]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:11:39 compute-1 sudo[299380]: pam_unix(sudo:session): session closed for user root
Jan 20 15:11:40 compute-1 nova_compute[225855]: 2026-01-20 15:11:40.065 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:11:40 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:11:40 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:11:40 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:11:40.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:11:40 compute-1 nova_compute[225855]: 2026-01-20 15:11:40.751 225859 INFO nova.compute.manager [None req-0086cd3f-1509-4f39-9077-b0a04ab25c5f 8e699d2accae4b489d779507db44504e 37c01d33832740c3ba018515e081285b - - default default] [instance: 7a223382-86d1-478e-8324-01ef43aef7e1] Rebuilding instance
Jan 20 15:11:41 compute-1 nova_compute[225855]: 2026-01-20 15:11:41.014 225859 DEBUG nova.objects.instance [None req-0086cd3f-1509-4f39-9077-b0a04ab25c5f 8e699d2accae4b489d779507db44504e 37c01d33832740c3ba018515e081285b - - default default] Lazy-loading 'trusted_certs' on Instance uuid 7a223382-86d1-478e-8324-01ef43aef7e1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:11:41 compute-1 nova_compute[225855]: 2026-01-20 15:11:41.029 225859 DEBUG nova.compute.manager [None req-0086cd3f-1509-4f39-9077-b0a04ab25c5f 8e699d2accae4b489d779507db44504e 37c01d33832740c3ba018515e081285b - - default default] [instance: 7a223382-86d1-478e-8324-01ef43aef7e1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:11:41 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:11:41 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:11:41 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:11:41.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:11:41 compute-1 nova_compute[225855]: 2026-01-20 15:11:41.070 225859 DEBUG nova.objects.instance [None req-0086cd3f-1509-4f39-9077-b0a04ab25c5f 8e699d2accae4b489d779507db44504e 37c01d33832740c3ba018515e081285b - - default default] Lazy-loading 'pci_requests' on Instance uuid 7a223382-86d1-478e-8324-01ef43aef7e1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:11:41 compute-1 nova_compute[225855]: 2026-01-20 15:11:41.083 225859 DEBUG nova.objects.instance [None req-0086cd3f-1509-4f39-9077-b0a04ab25c5f 8e699d2accae4b489d779507db44504e 37c01d33832740c3ba018515e081285b - - default default] Lazy-loading 'pci_devices' on Instance uuid 7a223382-86d1-478e-8324-01ef43aef7e1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:11:41 compute-1 nova_compute[225855]: 2026-01-20 15:11:41.095 225859 DEBUG nova.objects.instance [None req-0086cd3f-1509-4f39-9077-b0a04ab25c5f 8e699d2accae4b489d779507db44504e 37c01d33832740c3ba018515e081285b - - default default] Lazy-loading 'resources' on Instance uuid 7a223382-86d1-478e-8324-01ef43aef7e1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:11:41 compute-1 nova_compute[225855]: 2026-01-20 15:11:41.111 225859 DEBUG nova.objects.instance [None req-0086cd3f-1509-4f39-9077-b0a04ab25c5f 8e699d2accae4b489d779507db44504e 37c01d33832740c3ba018515e081285b - - default default] Lazy-loading 'migration_context' on Instance uuid 7a223382-86d1-478e-8324-01ef43aef7e1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:11:41 compute-1 nova_compute[225855]: 2026-01-20 15:11:41.138 225859 DEBUG nova.objects.instance [None req-0086cd3f-1509-4f39-9077-b0a04ab25c5f 8e699d2accae4b489d779507db44504e 37c01d33832740c3ba018515e081285b - - default default] [instance: 7a223382-86d1-478e-8324-01ef43aef7e1] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Jan 20 15:11:41 compute-1 nova_compute[225855]: 2026-01-20 15:11:41.141 225859 DEBUG nova.virt.libvirt.driver [None req-0086cd3f-1509-4f39-9077-b0a04ab25c5f 8e699d2accae4b489d779507db44504e 37c01d33832740c3ba018515e081285b - - default default] [instance: 7a223382-86d1-478e-8324-01ef43aef7e1] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Jan 20 15:11:41 compute-1 ceph-mon[81775]: pgmap v2657: 321 pgs: 321 active+clean; 217 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 3.1 MiB/s wr, 128 op/s
Jan 20 15:11:42 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:11:42 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:11:42 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:11:42.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:11:42 compute-1 nova_compute[225855]: 2026-01-20 15:11:42.713 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:11:43 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:11:43 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:11:43 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:11:43.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:11:43 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e390 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:11:43 compute-1 ceph-mon[81775]: pgmap v2658: 321 pgs: 321 active+clean; 246 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.9 MiB/s wr, 156 op/s
Jan 20 15:11:44 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:11:44 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:11:44 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:11:44.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:11:45 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:11:45 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:11:45 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:11:45.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:11:45 compute-1 nova_compute[225855]: 2026-01-20 15:11:45.068 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:11:45 compute-1 ceph-mon[81775]: pgmap v2659: 321 pgs: 321 active+clean; 246 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 164 op/s
Jan 20 15:11:45 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e391 e391: 3 total, 3 up, 3 in
Jan 20 15:11:46 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e392 e392: 3 total, 3 up, 3 in
Jan 20 15:11:46 compute-1 ceph-mon[81775]: osdmap e391: 3 total, 3 up, 3 in
Jan 20 15:11:46 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:11:46 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:11:46 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:11:46.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:11:47 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:11:47 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:11:47 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:11:47.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:11:47 compute-1 ceph-mon[81775]: pgmap v2661: 321 pgs: 321 active+clean; 246 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 3.9 MiB/s wr, 189 op/s
Jan 20 15:11:47 compute-1 ceph-mon[81775]: osdmap e392: 3 total, 3 up, 3 in
Jan 20 15:11:47 compute-1 nova_compute[225855]: 2026-01-20 15:11:47.715 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:11:48 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e392 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:11:48 compute-1 ceph-mon[81775]: pgmap v2663: 321 pgs: 321 active+clean; 246 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 1.2 MiB/s wr, 79 op/s
Jan 20 15:11:48 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:11:48 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:11:48 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:11:48.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:11:49 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:11:49 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:11:49 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:11:49.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:11:50 compute-1 nova_compute[225855]: 2026-01-20 15:11:50.071 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:11:50 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:11:50 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:11:50 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:11:50.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:11:51 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:11:51 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:11:51 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:11:51.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:11:51 compute-1 podman[299410]: 2026-01-20 15:11:51.050545659 +0000 UTC m=+0.080797214 container health_status 533cd8e0c3b8b7f910f15cef49e7f879ab46c12ce646a35628c15f13725eb500 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent)
Jan 20 15:11:51 compute-1 nova_compute[225855]: 2026-01-20 15:11:51.184 225859 DEBUG nova.virt.libvirt.driver [None req-0086cd3f-1509-4f39-9077-b0a04ab25c5f 8e699d2accae4b489d779507db44504e 37c01d33832740c3ba018515e081285b - - default default] [instance: 7a223382-86d1-478e-8324-01ef43aef7e1] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Jan 20 15:11:51 compute-1 nova_compute[225855]: 2026-01-20 15:11:51.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:11:51 compute-1 nova_compute[225855]: 2026-01-20 15:11:51.339 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 15:11:51 compute-1 ceph-mon[81775]: pgmap v2664: 321 pgs: 321 active+clean; 249 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 435 KiB/s rd, 414 KiB/s wr, 45 op/s
Jan 20 15:11:52 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/3676783533' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:11:52 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:11:52 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:11:52 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:11:52.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:11:52 compute-1 nova_compute[225855]: 2026-01-20 15:11:52.717 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:11:53 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:11:53 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:11:53 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:11:53.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:11:53 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e392 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:11:53 compute-1 ceph-mon[81775]: pgmap v2665: 321 pgs: 321 active+clean; 260 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 143 KiB/s rd, 1.9 MiB/s wr, 54 op/s
Jan 20 15:11:54 compute-1 nova_compute[225855]: 2026-01-20 15:11:54.340 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:11:54 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/803720983' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:11:54 compute-1 systemd[1]: machine-qemu\x2d92\x2dinstance\x2d000000b1.scope: Deactivated successfully.
Jan 20 15:11:54 compute-1 systemd[1]: machine-qemu\x2d92\x2dinstance\x2d000000b1.scope: Consumed 13.312s CPU time.
Jan 20 15:11:54 compute-1 systemd-machined[194361]: Machine qemu-92-instance-000000b1 terminated.
Jan 20 15:11:54 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:11:54 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:11:54 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:11:54.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:11:55 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:11:55 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:11:55 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:11:55.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:11:55 compute-1 nova_compute[225855]: 2026-01-20 15:11:55.075 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:11:55 compute-1 nova_compute[225855]: 2026-01-20 15:11:55.205 225859 INFO nova.virt.libvirt.driver [None req-0086cd3f-1509-4f39-9077-b0a04ab25c5f 8e699d2accae4b489d779507db44504e 37c01d33832740c3ba018515e081285b - - default default] [instance: 7a223382-86d1-478e-8324-01ef43aef7e1] Instance shutdown successfully after 14 seconds.
Jan 20 15:11:55 compute-1 nova_compute[225855]: 2026-01-20 15:11:55.210 225859 INFO nova.virt.libvirt.driver [-] [instance: 7a223382-86d1-478e-8324-01ef43aef7e1] Instance destroyed successfully.
Jan 20 15:11:55 compute-1 nova_compute[225855]: 2026-01-20 15:11:55.214 225859 INFO nova.virt.libvirt.driver [-] [instance: 7a223382-86d1-478e-8324-01ef43aef7e1] Instance destroyed successfully.
Jan 20 15:11:55 compute-1 nova_compute[225855]: 2026-01-20 15:11:55.340 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:11:55 compute-1 nova_compute[225855]: 2026-01-20 15:11:55.340 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:11:55 compute-1 nova_compute[225855]: 2026-01-20 15:11:55.614 225859 INFO nova.virt.libvirt.driver [None req-0086cd3f-1509-4f39-9077-b0a04ab25c5f 8e699d2accae4b489d779507db44504e 37c01d33832740c3ba018515e081285b - - default default] [instance: 7a223382-86d1-478e-8324-01ef43aef7e1] Deleting instance files /var/lib/nova/instances/7a223382-86d1-478e-8324-01ef43aef7e1_del
Jan 20 15:11:55 compute-1 nova_compute[225855]: 2026-01-20 15:11:55.615 225859 INFO nova.virt.libvirt.driver [None req-0086cd3f-1509-4f39-9077-b0a04ab25c5f 8e699d2accae4b489d779507db44504e 37c01d33832740c3ba018515e081285b - - default default] [instance: 7a223382-86d1-478e-8324-01ef43aef7e1] Deletion of /var/lib/nova/instances/7a223382-86d1-478e-8324-01ef43aef7e1_del complete
Jan 20 15:11:55 compute-1 ceph-mon[81775]: pgmap v2666: 321 pgs: 321 active+clean; 276 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 158 KiB/s rd, 2.5 MiB/s wr, 64 op/s
Jan 20 15:11:56 compute-1 nova_compute[225855]: 2026-01-20 15:11:56.042 225859 DEBUG nova.virt.libvirt.driver [None req-0086cd3f-1509-4f39-9077-b0a04ab25c5f 8e699d2accae4b489d779507db44504e 37c01d33832740c3ba018515e081285b - - default default] [instance: 7a223382-86d1-478e-8324-01ef43aef7e1] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 20 15:11:56 compute-1 nova_compute[225855]: 2026-01-20 15:11:56.043 225859 INFO nova.virt.libvirt.driver [None req-0086cd3f-1509-4f39-9077-b0a04ab25c5f 8e699d2accae4b489d779507db44504e 37c01d33832740c3ba018515e081285b - - default default] [instance: 7a223382-86d1-478e-8324-01ef43aef7e1] Creating image(s)
Jan 20 15:11:56 compute-1 nova_compute[225855]: 2026-01-20 15:11:56.073 225859 DEBUG nova.storage.rbd_utils [None req-0086cd3f-1509-4f39-9077-b0a04ab25c5f 8e699d2accae4b489d779507db44504e 37c01d33832740c3ba018515e081285b - - default default] rbd image 7a223382-86d1-478e-8324-01ef43aef7e1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:11:56 compute-1 nova_compute[225855]: 2026-01-20 15:11:56.110 225859 DEBUG nova.storage.rbd_utils [None req-0086cd3f-1509-4f39-9077-b0a04ab25c5f 8e699d2accae4b489d779507db44504e 37c01d33832740c3ba018515e081285b - - default default] rbd image 7a223382-86d1-478e-8324-01ef43aef7e1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:11:56 compute-1 nova_compute[225855]: 2026-01-20 15:11:56.148 225859 DEBUG nova.storage.rbd_utils [None req-0086cd3f-1509-4f39-9077-b0a04ab25c5f 8e699d2accae4b489d779507db44504e 37c01d33832740c3ba018515e081285b - - default default] rbd image 7a223382-86d1-478e-8324-01ef43aef7e1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:11:56 compute-1 nova_compute[225855]: 2026-01-20 15:11:56.153 225859 DEBUG oslo_concurrency.processutils [None req-0086cd3f-1509-4f39-9077-b0a04ab25c5f 8e699d2accae4b489d779507db44504e 37c01d33832740c3ba018515e081285b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a4ed0d2b98aa460c005e878d78a49ccb6f511f7c --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:11:56 compute-1 nova_compute[225855]: 2026-01-20 15:11:56.225 225859 DEBUG oslo_concurrency.processutils [None req-0086cd3f-1509-4f39-9077-b0a04ab25c5f 8e699d2accae4b489d779507db44504e 37c01d33832740c3ba018515e081285b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a4ed0d2b98aa460c005e878d78a49ccb6f511f7c --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:11:56 compute-1 nova_compute[225855]: 2026-01-20 15:11:56.227 225859 DEBUG oslo_concurrency.lockutils [None req-0086cd3f-1509-4f39-9077-b0a04ab25c5f 8e699d2accae4b489d779507db44504e 37c01d33832740c3ba018515e081285b - - default default] Acquiring lock "a4ed0d2b98aa460c005e878d78a49ccb6f511f7c" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:11:56 compute-1 nova_compute[225855]: 2026-01-20 15:11:56.228 225859 DEBUG oslo_concurrency.lockutils [None req-0086cd3f-1509-4f39-9077-b0a04ab25c5f 8e699d2accae4b489d779507db44504e 37c01d33832740c3ba018515e081285b - - default default] Lock "a4ed0d2b98aa460c005e878d78a49ccb6f511f7c" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:11:56 compute-1 nova_compute[225855]: 2026-01-20 15:11:56.228 225859 DEBUG oslo_concurrency.lockutils [None req-0086cd3f-1509-4f39-9077-b0a04ab25c5f 8e699d2accae4b489d779507db44504e 37c01d33832740c3ba018515e081285b - - default default] Lock "a4ed0d2b98aa460c005e878d78a49ccb6f511f7c" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:11:56 compute-1 nova_compute[225855]: 2026-01-20 15:11:56.263 225859 DEBUG nova.storage.rbd_utils [None req-0086cd3f-1509-4f39-9077-b0a04ab25c5f 8e699d2accae4b489d779507db44504e 37c01d33832740c3ba018515e081285b - - default default] rbd image 7a223382-86d1-478e-8324-01ef43aef7e1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:11:56 compute-1 nova_compute[225855]: 2026-01-20 15:11:56.268 225859 DEBUG oslo_concurrency.processutils [None req-0086cd3f-1509-4f39-9077-b0a04ab25c5f 8e699d2accae4b489d779507db44504e 37c01d33832740c3ba018515e081285b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a4ed0d2b98aa460c005e878d78a49ccb6f511f7c 7a223382-86d1-478e-8324-01ef43aef7e1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:11:56 compute-1 nova_compute[225855]: 2026-01-20 15:11:56.340 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:11:56 compute-1 nova_compute[225855]: 2026-01-20 15:11:56.341 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 15:11:56 compute-1 nova_compute[225855]: 2026-01-20 15:11:56.341 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 20 15:11:56 compute-1 nova_compute[225855]: 2026-01-20 15:11:56.388 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "refresh_cache-7a223382-86d1-478e-8324-01ef43aef7e1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:11:56 compute-1 nova_compute[225855]: 2026-01-20 15:11:56.389 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquired lock "refresh_cache-7a223382-86d1-478e-8324-01ef43aef7e1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:11:56 compute-1 nova_compute[225855]: 2026-01-20 15:11:56.390 225859 DEBUG nova.network.neutron [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] [instance: 7a223382-86d1-478e-8324-01ef43aef7e1] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 20 15:11:56 compute-1 nova_compute[225855]: 2026-01-20 15:11:56.390 225859 DEBUG nova.objects.instance [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 7a223382-86d1-478e-8324-01ef43aef7e1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:11:56 compute-1 nova_compute[225855]: 2026-01-20 15:11:56.548 225859 DEBUG oslo_concurrency.processutils [None req-0086cd3f-1509-4f39-9077-b0a04ab25c5f 8e699d2accae4b489d779507db44504e 37c01d33832740c3ba018515e081285b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a4ed0d2b98aa460c005e878d78a49ccb6f511f7c 7a223382-86d1-478e-8324-01ef43aef7e1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.280s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:11:56 compute-1 nova_compute[225855]: 2026-01-20 15:11:56.644 225859 DEBUG nova.storage.rbd_utils [None req-0086cd3f-1509-4f39-9077-b0a04ab25c5f 8e699d2accae4b489d779507db44504e 37c01d33832740c3ba018515e081285b - - default default] resizing rbd image 7a223382-86d1-478e-8324-01ef43aef7e1_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 20 15:11:56 compute-1 ceph-mon[81775]: pgmap v2667: 321 pgs: 321 active+clean; 296 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 479 KiB/s rd, 4.7 MiB/s wr, 153 op/s
Jan 20 15:11:56 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:11:56 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:11:56 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:11:56.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:11:56 compute-1 nova_compute[225855]: 2026-01-20 15:11:56.777 225859 DEBUG nova.virt.libvirt.driver [None req-0086cd3f-1509-4f39-9077-b0a04ab25c5f 8e699d2accae4b489d779507db44504e 37c01d33832740c3ba018515e081285b - - default default] [instance: 7a223382-86d1-478e-8324-01ef43aef7e1] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 20 15:11:56 compute-1 nova_compute[225855]: 2026-01-20 15:11:56.779 225859 DEBUG nova.virt.libvirt.driver [None req-0086cd3f-1509-4f39-9077-b0a04ab25c5f 8e699d2accae4b489d779507db44504e 37c01d33832740c3ba018515e081285b - - default default] [instance: 7a223382-86d1-478e-8324-01ef43aef7e1] Ensure instance console log exists: /var/lib/nova/instances/7a223382-86d1-478e-8324-01ef43aef7e1/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 20 15:11:56 compute-1 nova_compute[225855]: 2026-01-20 15:11:56.780 225859 DEBUG oslo_concurrency.lockutils [None req-0086cd3f-1509-4f39-9077-b0a04ab25c5f 8e699d2accae4b489d779507db44504e 37c01d33832740c3ba018515e081285b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:11:56 compute-1 nova_compute[225855]: 2026-01-20 15:11:56.781 225859 DEBUG oslo_concurrency.lockutils [None req-0086cd3f-1509-4f39-9077-b0a04ab25c5f 8e699d2accae4b489d779507db44504e 37c01d33832740c3ba018515e081285b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:11:56 compute-1 nova_compute[225855]: 2026-01-20 15:11:56.782 225859 DEBUG oslo_concurrency.lockutils [None req-0086cd3f-1509-4f39-9077-b0a04ab25c5f 8e699d2accae4b489d779507db44504e 37c01d33832740c3ba018515e081285b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:11:56 compute-1 nova_compute[225855]: 2026-01-20 15:11:56.785 225859 DEBUG nova.virt.libvirt.driver [None req-0086cd3f-1509-4f39-9077-b0a04ab25c5f 8e699d2accae4b489d779507db44504e 37c01d33832740c3ba018515e081285b - - default default] [instance: 7a223382-86d1-478e-8324-01ef43aef7e1] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:22:02Z,direct_url=<?>,disk_format='qcow2',id=26699514-f465-4b50-98b7-36f2cfc6a308,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:04Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'device_type': 'disk', 'encryption_options': None, 'size': 0, 'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'guest_format': None, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'image_id': 'a32b3e07-16d8-46fd-9a7b-c242c432fcf9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 20 15:11:56 compute-1 nova_compute[225855]: 2026-01-20 15:11:56.790 225859 WARNING nova.virt.libvirt.driver [None req-0086cd3f-1509-4f39-9077-b0a04ab25c5f 8e699d2accae4b489d779507db44504e 37c01d33832740c3ba018515e081285b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError
Jan 20 15:11:56 compute-1 nova_compute[225855]: 2026-01-20 15:11:56.799 225859 DEBUG nova.virt.libvirt.host [None req-0086cd3f-1509-4f39-9077-b0a04ab25c5f 8e699d2accae4b489d779507db44504e 37c01d33832740c3ba018515e081285b - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 20 15:11:56 compute-1 nova_compute[225855]: 2026-01-20 15:11:56.801 225859 DEBUG nova.virt.libvirt.host [None req-0086cd3f-1509-4f39-9077-b0a04ab25c5f 8e699d2accae4b489d779507db44504e 37c01d33832740c3ba018515e081285b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 20 15:11:56 compute-1 nova_compute[225855]: 2026-01-20 15:11:56.806 225859 DEBUG nova.virt.libvirt.host [None req-0086cd3f-1509-4f39-9077-b0a04ab25c5f 8e699d2accae4b489d779507db44504e 37c01d33832740c3ba018515e081285b - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 20 15:11:56 compute-1 nova_compute[225855]: 2026-01-20 15:11:56.807 225859 DEBUG nova.virt.libvirt.host [None req-0086cd3f-1509-4f39-9077-b0a04ab25c5f 8e699d2accae4b489d779507db44504e 37c01d33832740c3ba018515e081285b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 20 15:11:56 compute-1 nova_compute[225855]: 2026-01-20 15:11:56.810 225859 DEBUG nova.virt.libvirt.driver [None req-0086cd3f-1509-4f39-9077-b0a04ab25c5f 8e699d2accae4b489d779507db44504e 37c01d33832740c3ba018515e081285b - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 20 15:11:56 compute-1 nova_compute[225855]: 2026-01-20 15:11:56.811 225859 DEBUG nova.virt.hardware [None req-0086cd3f-1509-4f39-9077-b0a04ab25c5f 8e699d2accae4b489d779507db44504e 37c01d33832740c3ba018515e081285b - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-20T14:21:55Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='522deaab-a741-4dbb-932d-d8b13a211c33',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:22:02Z,direct_url=<?>,disk_format='qcow2',id=26699514-f465-4b50-98b7-36f2cfc6a308,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:04Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 20 15:11:56 compute-1 nova_compute[225855]: 2026-01-20 15:11:56.812 225859 DEBUG nova.virt.hardware [None req-0086cd3f-1509-4f39-9077-b0a04ab25c5f 8e699d2accae4b489d779507db44504e 37c01d33832740c3ba018515e081285b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 20 15:11:56 compute-1 nova_compute[225855]: 2026-01-20 15:11:56.812 225859 DEBUG nova.virt.hardware [None req-0086cd3f-1509-4f39-9077-b0a04ab25c5f 8e699d2accae4b489d779507db44504e 37c01d33832740c3ba018515e081285b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 20 15:11:56 compute-1 nova_compute[225855]: 2026-01-20 15:11:56.813 225859 DEBUG nova.virt.hardware [None req-0086cd3f-1509-4f39-9077-b0a04ab25c5f 8e699d2accae4b489d779507db44504e 37c01d33832740c3ba018515e081285b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 20 15:11:56 compute-1 nova_compute[225855]: 2026-01-20 15:11:56.813 225859 DEBUG nova.virt.hardware [None req-0086cd3f-1509-4f39-9077-b0a04ab25c5f 8e699d2accae4b489d779507db44504e 37c01d33832740c3ba018515e081285b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 20 15:11:56 compute-1 nova_compute[225855]: 2026-01-20 15:11:56.813 225859 DEBUG nova.virt.hardware [None req-0086cd3f-1509-4f39-9077-b0a04ab25c5f 8e699d2accae4b489d779507db44504e 37c01d33832740c3ba018515e081285b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 20 15:11:56 compute-1 nova_compute[225855]: 2026-01-20 15:11:56.814 225859 DEBUG nova.virt.hardware [None req-0086cd3f-1509-4f39-9077-b0a04ab25c5f 8e699d2accae4b489d779507db44504e 37c01d33832740c3ba018515e081285b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 20 15:11:56 compute-1 nova_compute[225855]: 2026-01-20 15:11:56.814 225859 DEBUG nova.virt.hardware [None req-0086cd3f-1509-4f39-9077-b0a04ab25c5f 8e699d2accae4b489d779507db44504e 37c01d33832740c3ba018515e081285b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 20 15:11:56 compute-1 nova_compute[225855]: 2026-01-20 15:11:56.815 225859 DEBUG nova.virt.hardware [None req-0086cd3f-1509-4f39-9077-b0a04ab25c5f 8e699d2accae4b489d779507db44504e 37c01d33832740c3ba018515e081285b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 20 15:11:56 compute-1 nova_compute[225855]: 2026-01-20 15:11:56.815 225859 DEBUG nova.virt.hardware [None req-0086cd3f-1509-4f39-9077-b0a04ab25c5f 8e699d2accae4b489d779507db44504e 37c01d33832740c3ba018515e081285b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 20 15:11:56 compute-1 nova_compute[225855]: 2026-01-20 15:11:56.815 225859 DEBUG nova.virt.hardware [None req-0086cd3f-1509-4f39-9077-b0a04ab25c5f 8e699d2accae4b489d779507db44504e 37c01d33832740c3ba018515e081285b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 20 15:11:56 compute-1 nova_compute[225855]: 2026-01-20 15:11:56.816 225859 DEBUG nova.objects.instance [None req-0086cd3f-1509-4f39-9077-b0a04ab25c5f 8e699d2accae4b489d779507db44504e 37c01d33832740c3ba018515e081285b - - default default] Lazy-loading 'vcpu_model' on Instance uuid 7a223382-86d1-478e-8324-01ef43aef7e1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:11:56 compute-1 nova_compute[225855]: 2026-01-20 15:11:56.835 225859 DEBUG oslo_concurrency.processutils [None req-0086cd3f-1509-4f39-9077-b0a04ab25c5f 8e699d2accae4b489d779507db44504e 37c01d33832740c3ba018515e081285b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:11:57 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:11:57 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:11:57 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:11:57.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:11:57 compute-1 nova_compute[225855]: 2026-01-20 15:11:57.095 225859 DEBUG nova.network.neutron [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] [instance: 7a223382-86d1-478e-8324-01ef43aef7e1] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 20 15:11:57 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 15:11:57 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4078018162' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:11:57 compute-1 nova_compute[225855]: 2026-01-20 15:11:57.263 225859 DEBUG oslo_concurrency.processutils [None req-0086cd3f-1509-4f39-9077-b0a04ab25c5f 8e699d2accae4b489d779507db44504e 37c01d33832740c3ba018515e081285b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:11:57 compute-1 nova_compute[225855]: 2026-01-20 15:11:57.298 225859 DEBUG nova.storage.rbd_utils [None req-0086cd3f-1509-4f39-9077-b0a04ab25c5f 8e699d2accae4b489d779507db44504e 37c01d33832740c3ba018515e081285b - - default default] rbd image 7a223382-86d1-478e-8324-01ef43aef7e1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:11:57 compute-1 nova_compute[225855]: 2026-01-20 15:11:57.302 225859 DEBUG oslo_concurrency.processutils [None req-0086cd3f-1509-4f39-9077-b0a04ab25c5f 8e699d2accae4b489d779507db44504e 37c01d33832740c3ba018515e081285b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:11:57 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/4078018162' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:11:57 compute-1 nova_compute[225855]: 2026-01-20 15:11:57.720 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:11:57 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 15:11:57 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1131313242' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:11:57 compute-1 nova_compute[225855]: 2026-01-20 15:11:57.804 225859 DEBUG oslo_concurrency.processutils [None req-0086cd3f-1509-4f39-9077-b0a04ab25c5f 8e699d2accae4b489d779507db44504e 37c01d33832740c3ba018515e081285b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.502s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:11:57 compute-1 nova_compute[225855]: 2026-01-20 15:11:57.807 225859 DEBUG nova.virt.libvirt.driver [None req-0086cd3f-1509-4f39-9077-b0a04ab25c5f 8e699d2accae4b489d779507db44504e 37c01d33832740c3ba018515e081285b - - default default] [instance: 7a223382-86d1-478e-8324-01ef43aef7e1] End _get_guest_xml xml=<domain type="kvm">
Jan 20 15:11:57 compute-1 nova_compute[225855]:   <uuid>7a223382-86d1-478e-8324-01ef43aef7e1</uuid>
Jan 20 15:11:57 compute-1 nova_compute[225855]:   <name>instance-000000b1</name>
Jan 20 15:11:57 compute-1 nova_compute[225855]:   <memory>131072</memory>
Jan 20 15:11:57 compute-1 nova_compute[225855]:   <vcpu>1</vcpu>
Jan 20 15:11:57 compute-1 nova_compute[225855]:   <metadata>
Jan 20 15:11:57 compute-1 nova_compute[225855]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 15:11:57 compute-1 nova_compute[225855]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 15:11:57 compute-1 nova_compute[225855]:       <nova:name>tempest-ServerShowV254Test-server-1643419890</nova:name>
Jan 20 15:11:57 compute-1 nova_compute[225855]:       <nova:creationTime>2026-01-20 15:11:56</nova:creationTime>
Jan 20 15:11:57 compute-1 nova_compute[225855]:       <nova:flavor name="m1.nano">
Jan 20 15:11:57 compute-1 nova_compute[225855]:         <nova:memory>128</nova:memory>
Jan 20 15:11:57 compute-1 nova_compute[225855]:         <nova:disk>1</nova:disk>
Jan 20 15:11:57 compute-1 nova_compute[225855]:         <nova:swap>0</nova:swap>
Jan 20 15:11:57 compute-1 nova_compute[225855]:         <nova:ephemeral>0</nova:ephemeral>
Jan 20 15:11:57 compute-1 nova_compute[225855]:         <nova:vcpus>1</nova:vcpus>
Jan 20 15:11:57 compute-1 nova_compute[225855]:       </nova:flavor>
Jan 20 15:11:57 compute-1 nova_compute[225855]:       <nova:owner>
Jan 20 15:11:57 compute-1 nova_compute[225855]:         <nova:user uuid="8e699d2accae4b489d779507db44504e">tempest-ServerShowV254Test-782263999-project-member</nova:user>
Jan 20 15:11:57 compute-1 nova_compute[225855]:         <nova:project uuid="37c01d33832740c3ba018515e081285b">tempest-ServerShowV254Test-782263999</nova:project>
Jan 20 15:11:57 compute-1 nova_compute[225855]:       </nova:owner>
Jan 20 15:11:57 compute-1 nova_compute[225855]:       <nova:root type="image" uuid="26699514-f465-4b50-98b7-36f2cfc6a308"/>
Jan 20 15:11:57 compute-1 nova_compute[225855]:       <nova:ports/>
Jan 20 15:11:57 compute-1 nova_compute[225855]:     </nova:instance>
Jan 20 15:11:57 compute-1 nova_compute[225855]:   </metadata>
Jan 20 15:11:57 compute-1 nova_compute[225855]:   <sysinfo type="smbios">
Jan 20 15:11:57 compute-1 nova_compute[225855]:     <system>
Jan 20 15:11:57 compute-1 nova_compute[225855]:       <entry name="manufacturer">RDO</entry>
Jan 20 15:11:57 compute-1 nova_compute[225855]:       <entry name="product">OpenStack Compute</entry>
Jan 20 15:11:57 compute-1 nova_compute[225855]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 15:11:57 compute-1 nova_compute[225855]:       <entry name="serial">7a223382-86d1-478e-8324-01ef43aef7e1</entry>
Jan 20 15:11:57 compute-1 nova_compute[225855]:       <entry name="uuid">7a223382-86d1-478e-8324-01ef43aef7e1</entry>
Jan 20 15:11:57 compute-1 nova_compute[225855]:       <entry name="family">Virtual Machine</entry>
Jan 20 15:11:57 compute-1 nova_compute[225855]:     </system>
Jan 20 15:11:57 compute-1 nova_compute[225855]:   </sysinfo>
Jan 20 15:11:57 compute-1 nova_compute[225855]:   <os>
Jan 20 15:11:57 compute-1 nova_compute[225855]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 20 15:11:57 compute-1 nova_compute[225855]:     <boot dev="hd"/>
Jan 20 15:11:57 compute-1 nova_compute[225855]:     <smbios mode="sysinfo"/>
Jan 20 15:11:57 compute-1 nova_compute[225855]:   </os>
Jan 20 15:11:57 compute-1 nova_compute[225855]:   <features>
Jan 20 15:11:57 compute-1 nova_compute[225855]:     <acpi/>
Jan 20 15:11:57 compute-1 nova_compute[225855]:     <apic/>
Jan 20 15:11:57 compute-1 nova_compute[225855]:     <vmcoreinfo/>
Jan 20 15:11:57 compute-1 nova_compute[225855]:   </features>
Jan 20 15:11:57 compute-1 nova_compute[225855]:   <clock offset="utc">
Jan 20 15:11:57 compute-1 nova_compute[225855]:     <timer name="pit" tickpolicy="delay"/>
Jan 20 15:11:57 compute-1 nova_compute[225855]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 20 15:11:57 compute-1 nova_compute[225855]:     <timer name="hpet" present="no"/>
Jan 20 15:11:57 compute-1 nova_compute[225855]:   </clock>
Jan 20 15:11:57 compute-1 nova_compute[225855]:   <cpu mode="custom" match="exact">
Jan 20 15:11:57 compute-1 nova_compute[225855]:     <model>Nehalem</model>
Jan 20 15:11:57 compute-1 nova_compute[225855]:     <topology sockets="1" cores="1" threads="1"/>
Jan 20 15:11:57 compute-1 nova_compute[225855]:   </cpu>
Jan 20 15:11:57 compute-1 nova_compute[225855]:   <devices>
Jan 20 15:11:57 compute-1 nova_compute[225855]:     <disk type="network" device="disk">
Jan 20 15:11:57 compute-1 nova_compute[225855]:       <driver type="raw" cache="none"/>
Jan 20 15:11:57 compute-1 nova_compute[225855]:       <source protocol="rbd" name="vms/7a223382-86d1-478e-8324-01ef43aef7e1_disk">
Jan 20 15:11:57 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 15:11:57 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 15:11:57 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 15:11:57 compute-1 nova_compute[225855]:       </source>
Jan 20 15:11:57 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 15:11:57 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 15:11:57 compute-1 nova_compute[225855]:       </auth>
Jan 20 15:11:57 compute-1 nova_compute[225855]:       <target dev="vda" bus="virtio"/>
Jan 20 15:11:57 compute-1 nova_compute[225855]:     </disk>
Jan 20 15:11:57 compute-1 nova_compute[225855]:     <disk type="network" device="cdrom">
Jan 20 15:11:57 compute-1 nova_compute[225855]:       <driver type="raw" cache="none"/>
Jan 20 15:11:57 compute-1 nova_compute[225855]:       <source protocol="rbd" name="vms/7a223382-86d1-478e-8324-01ef43aef7e1_disk.config">
Jan 20 15:11:57 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 15:11:57 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 15:11:57 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 15:11:57 compute-1 nova_compute[225855]:       </source>
Jan 20 15:11:57 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 15:11:57 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 15:11:57 compute-1 nova_compute[225855]:       </auth>
Jan 20 15:11:57 compute-1 nova_compute[225855]:       <target dev="sda" bus="sata"/>
Jan 20 15:11:57 compute-1 nova_compute[225855]:     </disk>
Jan 20 15:11:57 compute-1 nova_compute[225855]:     <serial type="pty">
Jan 20 15:11:57 compute-1 nova_compute[225855]:       <log file="/var/lib/nova/instances/7a223382-86d1-478e-8324-01ef43aef7e1/console.log" append="off"/>
Jan 20 15:11:57 compute-1 nova_compute[225855]:     </serial>
Jan 20 15:11:57 compute-1 nova_compute[225855]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 20 15:11:57 compute-1 nova_compute[225855]:     <video>
Jan 20 15:11:57 compute-1 nova_compute[225855]:       <model type="virtio"/>
Jan 20 15:11:57 compute-1 nova_compute[225855]:     </video>
Jan 20 15:11:57 compute-1 nova_compute[225855]:     <input type="tablet" bus="usb"/>
Jan 20 15:11:57 compute-1 nova_compute[225855]:     <rng model="virtio">
Jan 20 15:11:57 compute-1 nova_compute[225855]:       <backend model="random">/dev/urandom</backend>
Jan 20 15:11:57 compute-1 nova_compute[225855]:     </rng>
Jan 20 15:11:57 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root"/>
Jan 20 15:11:57 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:11:57 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:11:57 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:11:57 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:11:57 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:11:57 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:11:57 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:11:57 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:11:57 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:11:57 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:11:57 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:11:57 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:11:57 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:11:57 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:11:57 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:11:57 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:11:57 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:11:57 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:11:57 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:11:57 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:11:57 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:11:57 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:11:57 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:11:57 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:11:57 compute-1 nova_compute[225855]:     <controller type="usb" index="0"/>
Jan 20 15:11:57 compute-1 nova_compute[225855]:     <memballoon model="virtio">
Jan 20 15:11:57 compute-1 nova_compute[225855]:       <stats period="10"/>
Jan 20 15:11:57 compute-1 nova_compute[225855]:     </memballoon>
Jan 20 15:11:57 compute-1 nova_compute[225855]:   </devices>
Jan 20 15:11:57 compute-1 nova_compute[225855]: </domain>
Jan 20 15:11:57 compute-1 nova_compute[225855]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 20 15:11:57 compute-1 nova_compute[225855]: 2026-01-20 15:11:57.873 225859 DEBUG nova.virt.libvirt.driver [None req-0086cd3f-1509-4f39-9077-b0a04ab25c5f 8e699d2accae4b489d779507db44504e 37c01d33832740c3ba018515e081285b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 15:11:57 compute-1 nova_compute[225855]: 2026-01-20 15:11:57.874 225859 DEBUG nova.virt.libvirt.driver [None req-0086cd3f-1509-4f39-9077-b0a04ab25c5f 8e699d2accae4b489d779507db44504e 37c01d33832740c3ba018515e081285b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 15:11:57 compute-1 nova_compute[225855]: 2026-01-20 15:11:57.874 225859 INFO nova.virt.libvirt.driver [None req-0086cd3f-1509-4f39-9077-b0a04ab25c5f 8e699d2accae4b489d779507db44504e 37c01d33832740c3ba018515e081285b - - default default] [instance: 7a223382-86d1-478e-8324-01ef43aef7e1] Using config drive
Jan 20 15:11:57 compute-1 nova_compute[225855]: 2026-01-20 15:11:57.905 225859 DEBUG nova.storage.rbd_utils [None req-0086cd3f-1509-4f39-9077-b0a04ab25c5f 8e699d2accae4b489d779507db44504e 37c01d33832740c3ba018515e081285b - - default default] rbd image 7a223382-86d1-478e-8324-01ef43aef7e1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:11:57 compute-1 nova_compute[225855]: 2026-01-20 15:11:57.928 225859 DEBUG nova.objects.instance [None req-0086cd3f-1509-4f39-9077-b0a04ab25c5f 8e699d2accae4b489d779507db44504e 37c01d33832740c3ba018515e081285b - - default default] Lazy-loading 'ec2_ids' on Instance uuid 7a223382-86d1-478e-8324-01ef43aef7e1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:11:58 compute-1 nova_compute[225855]: 2026-01-20 15:11:58.202 225859 INFO nova.virt.libvirt.driver [None req-0086cd3f-1509-4f39-9077-b0a04ab25c5f 8e699d2accae4b489d779507db44504e 37c01d33832740c3ba018515e081285b - - default default] [instance: 7a223382-86d1-478e-8324-01ef43aef7e1] Creating config drive at /var/lib/nova/instances/7a223382-86d1-478e-8324-01ef43aef7e1/disk.config
Jan 20 15:11:58 compute-1 nova_compute[225855]: 2026-01-20 15:11:58.206 225859 DEBUG oslo_concurrency.processutils [None req-0086cd3f-1509-4f39-9077-b0a04ab25c5f 8e699d2accae4b489d779507db44504e 37c01d33832740c3ba018515e081285b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7a223382-86d1-478e-8324-01ef43aef7e1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpyfy5mqls execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:11:58 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e392 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:11:58 compute-1 nova_compute[225855]: 2026-01-20 15:11:58.340 225859 DEBUG oslo_concurrency.processutils [None req-0086cd3f-1509-4f39-9077-b0a04ab25c5f 8e699d2accae4b489d779507db44504e 37c01d33832740c3ba018515e081285b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7a223382-86d1-478e-8324-01ef43aef7e1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpyfy5mqls" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:11:58 compute-1 nova_compute[225855]: 2026-01-20 15:11:58.383 225859 DEBUG nova.storage.rbd_utils [None req-0086cd3f-1509-4f39-9077-b0a04ab25c5f 8e699d2accae4b489d779507db44504e 37c01d33832740c3ba018515e081285b - - default default] rbd image 7a223382-86d1-478e-8324-01ef43aef7e1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:11:58 compute-1 nova_compute[225855]: 2026-01-20 15:11:58.388 225859 DEBUG oslo_concurrency.processutils [None req-0086cd3f-1509-4f39-9077-b0a04ab25c5f 8e699d2accae4b489d779507db44504e 37c01d33832740c3ba018515e081285b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/7a223382-86d1-478e-8324-01ef43aef7e1/disk.config 7a223382-86d1-478e-8324-01ef43aef7e1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:11:58 compute-1 nova_compute[225855]: 2026-01-20 15:11:58.581 225859 DEBUG oslo_concurrency.processutils [None req-0086cd3f-1509-4f39-9077-b0a04ab25c5f 8e699d2accae4b489d779507db44504e 37c01d33832740c3ba018515e081285b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/7a223382-86d1-478e-8324-01ef43aef7e1/disk.config 7a223382-86d1-478e-8324-01ef43aef7e1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.193s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:11:58 compute-1 nova_compute[225855]: 2026-01-20 15:11:58.582 225859 INFO nova.virt.libvirt.driver [None req-0086cd3f-1509-4f39-9077-b0a04ab25c5f 8e699d2accae4b489d779507db44504e 37c01d33832740c3ba018515e081285b - - default default] [instance: 7a223382-86d1-478e-8324-01ef43aef7e1] Deleting local config drive /var/lib/nova/instances/7a223382-86d1-478e-8324-01ef43aef7e1/disk.config because it was imported into RBD.
Jan 20 15:11:58 compute-1 systemd-machined[194361]: New machine qemu-93-instance-000000b1.
Jan 20 15:11:58 compute-1 systemd[1]: Started Virtual Machine qemu-93-instance-000000b1.
Jan 20 15:11:58 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:11:58 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:11:58 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:11:58.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:11:58 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/1131313242' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:11:58 compute-1 ceph-mon[81775]: pgmap v2668: 321 pgs: 321 active+clean; 296 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 404 KiB/s rd, 4.0 MiB/s wr, 129 op/s
Jan 20 15:11:58 compute-1 nova_compute[225855]: 2026-01-20 15:11:58.787 225859 DEBUG nova.network.neutron [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] [instance: 7a223382-86d1-478e-8324-01ef43aef7e1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:11:58 compute-1 nova_compute[225855]: 2026-01-20 15:11:58.806 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Releasing lock "refresh_cache-7a223382-86d1-478e-8324-01ef43aef7e1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:11:58 compute-1 nova_compute[225855]: 2026-01-20 15:11:58.807 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] [instance: 7a223382-86d1-478e-8324-01ef43aef7e1] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 20 15:11:58 compute-1 nova_compute[225855]: 2026-01-20 15:11:58.807 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:11:59 compute-1 nova_compute[225855]: 2026-01-20 15:11:59.039 225859 DEBUG nova.virt.libvirt.host [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Removed pending event for 7a223382-86d1-478e-8324-01ef43aef7e1 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Jan 20 15:11:59 compute-1 nova_compute[225855]: 2026-01-20 15:11:59.040 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768921919.038735, 7a223382-86d1-478e-8324-01ef43aef7e1 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:11:59 compute-1 nova_compute[225855]: 2026-01-20 15:11:59.040 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 7a223382-86d1-478e-8324-01ef43aef7e1] VM Resumed (Lifecycle Event)
Jan 20 15:11:59 compute-1 nova_compute[225855]: 2026-01-20 15:11:59.042 225859 DEBUG nova.compute.manager [None req-0086cd3f-1509-4f39-9077-b0a04ab25c5f 8e699d2accae4b489d779507db44504e 37c01d33832740c3ba018515e081285b - - default default] [instance: 7a223382-86d1-478e-8324-01ef43aef7e1] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 20 15:11:59 compute-1 nova_compute[225855]: 2026-01-20 15:11:59.043 225859 DEBUG nova.virt.libvirt.driver [None req-0086cd3f-1509-4f39-9077-b0a04ab25c5f 8e699d2accae4b489d779507db44504e 37c01d33832740c3ba018515e081285b - - default default] [instance: 7a223382-86d1-478e-8324-01ef43aef7e1] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 20 15:11:59 compute-1 nova_compute[225855]: 2026-01-20 15:11:59.046 225859 INFO nova.virt.libvirt.driver [-] [instance: 7a223382-86d1-478e-8324-01ef43aef7e1] Instance spawned successfully.
Jan 20 15:11:59 compute-1 nova_compute[225855]: 2026-01-20 15:11:59.046 225859 DEBUG nova.virt.libvirt.driver [None req-0086cd3f-1509-4f39-9077-b0a04ab25c5f 8e699d2accae4b489d779507db44504e 37c01d33832740c3ba018515e081285b - - default default] [instance: 7a223382-86d1-478e-8324-01ef43aef7e1] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 20 15:11:59 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:11:59 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:11:59 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:11:59.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:11:59 compute-1 nova_compute[225855]: 2026-01-20 15:11:59.074 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 7a223382-86d1-478e-8324-01ef43aef7e1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:11:59 compute-1 nova_compute[225855]: 2026-01-20 15:11:59.080 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 7a223382-86d1-478e-8324-01ef43aef7e1] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 15:11:59 compute-1 nova_compute[225855]: 2026-01-20 15:11:59.083 225859 DEBUG nova.virt.libvirt.driver [None req-0086cd3f-1509-4f39-9077-b0a04ab25c5f 8e699d2accae4b489d779507db44504e 37c01d33832740c3ba018515e081285b - - default default] [instance: 7a223382-86d1-478e-8324-01ef43aef7e1] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:11:59 compute-1 nova_compute[225855]: 2026-01-20 15:11:59.084 225859 DEBUG nova.virt.libvirt.driver [None req-0086cd3f-1509-4f39-9077-b0a04ab25c5f 8e699d2accae4b489d779507db44504e 37c01d33832740c3ba018515e081285b - - default default] [instance: 7a223382-86d1-478e-8324-01ef43aef7e1] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:11:59 compute-1 nova_compute[225855]: 2026-01-20 15:11:59.084 225859 DEBUG nova.virt.libvirt.driver [None req-0086cd3f-1509-4f39-9077-b0a04ab25c5f 8e699d2accae4b489d779507db44504e 37c01d33832740c3ba018515e081285b - - default default] [instance: 7a223382-86d1-478e-8324-01ef43aef7e1] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:11:59 compute-1 nova_compute[225855]: 2026-01-20 15:11:59.084 225859 DEBUG nova.virt.libvirt.driver [None req-0086cd3f-1509-4f39-9077-b0a04ab25c5f 8e699d2accae4b489d779507db44504e 37c01d33832740c3ba018515e081285b - - default default] [instance: 7a223382-86d1-478e-8324-01ef43aef7e1] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:11:59 compute-1 nova_compute[225855]: 2026-01-20 15:11:59.085 225859 DEBUG nova.virt.libvirt.driver [None req-0086cd3f-1509-4f39-9077-b0a04ab25c5f 8e699d2accae4b489d779507db44504e 37c01d33832740c3ba018515e081285b - - default default] [instance: 7a223382-86d1-478e-8324-01ef43aef7e1] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:11:59 compute-1 nova_compute[225855]: 2026-01-20 15:11:59.085 225859 DEBUG nova.virt.libvirt.driver [None req-0086cd3f-1509-4f39-9077-b0a04ab25c5f 8e699d2accae4b489d779507db44504e 37c01d33832740c3ba018515e081285b - - default default] [instance: 7a223382-86d1-478e-8324-01ef43aef7e1] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:11:59 compute-1 nova_compute[225855]: 2026-01-20 15:11:59.111 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 7a223382-86d1-478e-8324-01ef43aef7e1] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Jan 20 15:11:59 compute-1 nova_compute[225855]: 2026-01-20 15:11:59.112 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768921919.0398448, 7a223382-86d1-478e-8324-01ef43aef7e1 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:11:59 compute-1 nova_compute[225855]: 2026-01-20 15:11:59.112 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 7a223382-86d1-478e-8324-01ef43aef7e1] VM Started (Lifecycle Event)
Jan 20 15:11:59 compute-1 nova_compute[225855]: 2026-01-20 15:11:59.152 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 7a223382-86d1-478e-8324-01ef43aef7e1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:11:59 compute-1 nova_compute[225855]: 2026-01-20 15:11:59.155 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 7a223382-86d1-478e-8324-01ef43aef7e1] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 15:11:59 compute-1 nova_compute[225855]: 2026-01-20 15:11:59.172 225859 DEBUG nova.compute.manager [None req-0086cd3f-1509-4f39-9077-b0a04ab25c5f 8e699d2accae4b489d779507db44504e 37c01d33832740c3ba018515e081285b - - default default] [instance: 7a223382-86d1-478e-8324-01ef43aef7e1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:11:59 compute-1 nova_compute[225855]: 2026-01-20 15:11:59.203 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 7a223382-86d1-478e-8324-01ef43aef7e1] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Jan 20 15:11:59 compute-1 nova_compute[225855]: 2026-01-20 15:11:59.231 225859 DEBUG oslo_concurrency.lockutils [None req-0086cd3f-1509-4f39-9077-b0a04ab25c5f 8e699d2accae4b489d779507db44504e 37c01d33832740c3ba018515e081285b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:11:59 compute-1 nova_compute[225855]: 2026-01-20 15:11:59.231 225859 DEBUG oslo_concurrency.lockutils [None req-0086cd3f-1509-4f39-9077-b0a04ab25c5f 8e699d2accae4b489d779507db44504e 37c01d33832740c3ba018515e081285b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:11:59 compute-1 nova_compute[225855]: 2026-01-20 15:11:59.231 225859 DEBUG nova.objects.instance [None req-0086cd3f-1509-4f39-9077-b0a04ab25c5f 8e699d2accae4b489d779507db44504e 37c01d33832740c3ba018515e081285b - - default default] [instance: 7a223382-86d1-478e-8324-01ef43aef7e1] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Jan 20 15:11:59 compute-1 nova_compute[225855]: 2026-01-20 15:11:59.317 225859 DEBUG oslo_concurrency.lockutils [None req-0086cd3f-1509-4f39-9077-b0a04ab25c5f 8e699d2accae4b489d779507db44504e 37c01d33832740c3ba018515e081285b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.086s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:11:59 compute-1 nova_compute[225855]: 2026-01-20 15:11:59.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:11:59 compute-1 nova_compute[225855]: 2026-01-20 15:11:59.374 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:11:59 compute-1 nova_compute[225855]: 2026-01-20 15:11:59.375 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:11:59 compute-1 nova_compute[225855]: 2026-01-20 15:11:59.375 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:11:59 compute-1 nova_compute[225855]: 2026-01-20 15:11:59.376 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 15:11:59 compute-1 nova_compute[225855]: 2026-01-20 15:11:59.376 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:11:59 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/3538423909' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:11:59 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/4158599472' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:11:59 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:11:59 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1987985162' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:11:59 compute-1 nova_compute[225855]: 2026-01-20 15:11:59.837 225859 DEBUG oslo_concurrency.lockutils [None req-f40fb6eb-0551-43ad-b875-1ad5007a8abc 8e699d2accae4b489d779507db44504e 37c01d33832740c3ba018515e081285b - - default default] Acquiring lock "7a223382-86d1-478e-8324-01ef43aef7e1" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:11:59 compute-1 nova_compute[225855]: 2026-01-20 15:11:59.838 225859 DEBUG oslo_concurrency.lockutils [None req-f40fb6eb-0551-43ad-b875-1ad5007a8abc 8e699d2accae4b489d779507db44504e 37c01d33832740c3ba018515e081285b - - default default] Lock "7a223382-86d1-478e-8324-01ef43aef7e1" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:11:59 compute-1 nova_compute[225855]: 2026-01-20 15:11:59.838 225859 DEBUG oslo_concurrency.lockutils [None req-f40fb6eb-0551-43ad-b875-1ad5007a8abc 8e699d2accae4b489d779507db44504e 37c01d33832740c3ba018515e081285b - - default default] Acquiring lock "7a223382-86d1-478e-8324-01ef43aef7e1-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:11:59 compute-1 nova_compute[225855]: 2026-01-20 15:11:59.838 225859 DEBUG oslo_concurrency.lockutils [None req-f40fb6eb-0551-43ad-b875-1ad5007a8abc 8e699d2accae4b489d779507db44504e 37c01d33832740c3ba018515e081285b - - default default] Lock "7a223382-86d1-478e-8324-01ef43aef7e1-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:11:59 compute-1 nova_compute[225855]: 2026-01-20 15:11:59.839 225859 DEBUG oslo_concurrency.lockutils [None req-f40fb6eb-0551-43ad-b875-1ad5007a8abc 8e699d2accae4b489d779507db44504e 37c01d33832740c3ba018515e081285b - - default default] Lock "7a223382-86d1-478e-8324-01ef43aef7e1-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:11:59 compute-1 nova_compute[225855]: 2026-01-20 15:11:59.841 225859 INFO nova.compute.manager [None req-f40fb6eb-0551-43ad-b875-1ad5007a8abc 8e699d2accae4b489d779507db44504e 37c01d33832740c3ba018515e081285b - - default default] [instance: 7a223382-86d1-478e-8324-01ef43aef7e1] Terminating instance
Jan 20 15:11:59 compute-1 nova_compute[225855]: 2026-01-20 15:11:59.842 225859 DEBUG oslo_concurrency.lockutils [None req-f40fb6eb-0551-43ad-b875-1ad5007a8abc 8e699d2accae4b489d779507db44504e 37c01d33832740c3ba018515e081285b - - default default] Acquiring lock "refresh_cache-7a223382-86d1-478e-8324-01ef43aef7e1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:11:59 compute-1 nova_compute[225855]: 2026-01-20 15:11:59.842 225859 DEBUG oslo_concurrency.lockutils [None req-f40fb6eb-0551-43ad-b875-1ad5007a8abc 8e699d2accae4b489d779507db44504e 37c01d33832740c3ba018515e081285b - - default default] Acquired lock "refresh_cache-7a223382-86d1-478e-8324-01ef43aef7e1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:11:59 compute-1 nova_compute[225855]: 2026-01-20 15:11:59.842 225859 DEBUG nova.network.neutron [None req-f40fb6eb-0551-43ad-b875-1ad5007a8abc 8e699d2accae4b489d779507db44504e 37c01d33832740c3ba018515e081285b - - default default] [instance: 7a223382-86d1-478e-8324-01ef43aef7e1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 15:11:59 compute-1 nova_compute[225855]: 2026-01-20 15:11:59.844 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:11:59 compute-1 sudo[299821]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:11:59 compute-1 sudo[299821]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:11:59 compute-1 sudo[299821]: pam_unix(sudo:session): session closed for user root
Jan 20 15:11:59 compute-1 sudo[299848]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:11:59 compute-1 sudo[299848]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:11:59 compute-1 sudo[299848]: pam_unix(sudo:session): session closed for user root
Jan 20 15:11:59 compute-1 nova_compute[225855]: 2026-01-20 15:11:59.939 225859 DEBUG nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] skipping disk for instance-000000b1 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 15:11:59 compute-1 nova_compute[225855]: 2026-01-20 15:11:59.940 225859 DEBUG nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] skipping disk for instance-000000b1 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 15:12:00 compute-1 nova_compute[225855]: 2026-01-20 15:12:00.057 225859 DEBUG nova.network.neutron [None req-f40fb6eb-0551-43ad-b875-1ad5007a8abc 8e699d2accae4b489d779507db44504e 37c01d33832740c3ba018515e081285b - - default default] [instance: 7a223382-86d1-478e-8324-01ef43aef7e1] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 20 15:12:00 compute-1 nova_compute[225855]: 2026-01-20 15:12:00.079 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:12:00 compute-1 nova_compute[225855]: 2026-01-20 15:12:00.084 225859 WARNING nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 15:12:00 compute-1 nova_compute[225855]: 2026-01-20 15:12:00.085 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4228MB free_disk=20.937217712402344GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 15:12:00 compute-1 nova_compute[225855]: 2026-01-20 15:12:00.085 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:12:00 compute-1 nova_compute[225855]: 2026-01-20 15:12:00.085 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:12:00 compute-1 nova_compute[225855]: 2026-01-20 15:12:00.177 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Instance 7a223382-86d1-478e-8324-01ef43aef7e1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 20 15:12:00 compute-1 nova_compute[225855]: 2026-01-20 15:12:00.177 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 15:12:00 compute-1 nova_compute[225855]: 2026-01-20 15:12:00.178 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 15:12:00 compute-1 nova_compute[225855]: 2026-01-20 15:12:00.236 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:12:00 compute-1 nova_compute[225855]: 2026-01-20 15:12:00.455 225859 DEBUG nova.network.neutron [None req-f40fb6eb-0551-43ad-b875-1ad5007a8abc 8e699d2accae4b489d779507db44504e 37c01d33832740c3ba018515e081285b - - default default] [instance: 7a223382-86d1-478e-8324-01ef43aef7e1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:12:00 compute-1 nova_compute[225855]: 2026-01-20 15:12:00.472 225859 DEBUG oslo_concurrency.lockutils [None req-f40fb6eb-0551-43ad-b875-1ad5007a8abc 8e699d2accae4b489d779507db44504e 37c01d33832740c3ba018515e081285b - - default default] Releasing lock "refresh_cache-7a223382-86d1-478e-8324-01ef43aef7e1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:12:00 compute-1 nova_compute[225855]: 2026-01-20 15:12:00.473 225859 DEBUG nova.compute.manager [None req-f40fb6eb-0551-43ad-b875-1ad5007a8abc 8e699d2accae4b489d779507db44504e 37c01d33832740c3ba018515e081285b - - default default] [instance: 7a223382-86d1-478e-8324-01ef43aef7e1] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 20 15:12:00 compute-1 systemd[1]: machine-qemu\x2d93\x2dinstance\x2d000000b1.scope: Deactivated successfully.
Jan 20 15:12:00 compute-1 systemd[1]: machine-qemu\x2d93\x2dinstance\x2d000000b1.scope: Consumed 1.930s CPU time.
Jan 20 15:12:00 compute-1 systemd-machined[194361]: Machine qemu-93-instance-000000b1 terminated.
Jan 20 15:12:00 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:12:00 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/86498293' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:12:00 compute-1 nova_compute[225855]: 2026-01-20 15:12:00.659 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.423s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:12:00 compute-1 nova_compute[225855]: 2026-01-20 15:12:00.664 225859 DEBUG nova.compute.provider_tree [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:12:00 compute-1 nova_compute[225855]: 2026-01-20 15:12:00.684 225859 DEBUG nova.scheduler.client.report [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:12:00 compute-1 nova_compute[225855]: 2026-01-20 15:12:00.692 225859 INFO nova.virt.libvirt.driver [-] [instance: 7a223382-86d1-478e-8324-01ef43aef7e1] Instance destroyed successfully.
Jan 20 15:12:00 compute-1 nova_compute[225855]: 2026-01-20 15:12:00.692 225859 DEBUG nova.objects.instance [None req-f40fb6eb-0551-43ad-b875-1ad5007a8abc 8e699d2accae4b489d779507db44504e 37c01d33832740c3ba018515e081285b - - default default] Lazy-loading 'resources' on Instance uuid 7a223382-86d1-478e-8324-01ef43aef7e1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:12:00 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:12:00 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:12:00 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:12:00.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:12:00 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/1987985162' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:12:00 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/2331614767' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:12:00 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/4086234804' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:12:00 compute-1 ceph-mon[81775]: pgmap v2669: 321 pgs: 321 active+clean; 289 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 881 KiB/s rd, 5.7 MiB/s wr, 181 op/s
Jan 20 15:12:00 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/86498293' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:12:00 compute-1 nova_compute[225855]: 2026-01-20 15:12:00.728 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 15:12:00 compute-1 nova_compute[225855]: 2026-01-20 15:12:00.729 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.643s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:12:01 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:12:01 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:12:01 compute-1 nova_compute[225855]: 2026-01-20 15:12:01.054 225859 INFO nova.virt.libvirt.driver [None req-f40fb6eb-0551-43ad-b875-1ad5007a8abc 8e699d2accae4b489d779507db44504e 37c01d33832740c3ba018515e081285b - - default default] [instance: 7a223382-86d1-478e-8324-01ef43aef7e1] Deleting instance files /var/lib/nova/instances/7a223382-86d1-478e-8324-01ef43aef7e1_del
Jan 20 15:12:01 compute-1 nova_compute[225855]: 2026-01-20 15:12:01.054 225859 INFO nova.virt.libvirt.driver [None req-f40fb6eb-0551-43ad-b875-1ad5007a8abc 8e699d2accae4b489d779507db44504e 37c01d33832740c3ba018515e081285b - - default default] [instance: 7a223382-86d1-478e-8324-01ef43aef7e1] Deletion of /var/lib/nova/instances/7a223382-86d1-478e-8324-01ef43aef7e1_del complete
Jan 20 15:12:01 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:12:01.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:12:01 compute-1 nova_compute[225855]: 2026-01-20 15:12:01.108 225859 INFO nova.compute.manager [None req-f40fb6eb-0551-43ad-b875-1ad5007a8abc 8e699d2accae4b489d779507db44504e 37c01d33832740c3ba018515e081285b - - default default] [instance: 7a223382-86d1-478e-8324-01ef43aef7e1] Took 0.64 seconds to destroy the instance on the hypervisor.
Jan 20 15:12:01 compute-1 nova_compute[225855]: 2026-01-20 15:12:01.109 225859 DEBUG oslo.service.loopingcall [None req-f40fb6eb-0551-43ad-b875-1ad5007a8abc 8e699d2accae4b489d779507db44504e 37c01d33832740c3ba018515e081285b - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 20 15:12:01 compute-1 nova_compute[225855]: 2026-01-20 15:12:01.109 225859 DEBUG nova.compute.manager [-] [instance: 7a223382-86d1-478e-8324-01ef43aef7e1] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 20 15:12:01 compute-1 nova_compute[225855]: 2026-01-20 15:12:01.109 225859 DEBUG nova.network.neutron [-] [instance: 7a223382-86d1-478e-8324-01ef43aef7e1] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 20 15:12:01 compute-1 nova_compute[225855]: 2026-01-20 15:12:01.379 225859 DEBUG nova.network.neutron [-] [instance: 7a223382-86d1-478e-8324-01ef43aef7e1] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 20 15:12:01 compute-1 nova_compute[225855]: 2026-01-20 15:12:01.396 225859 DEBUG nova.network.neutron [-] [instance: 7a223382-86d1-478e-8324-01ef43aef7e1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:12:01 compute-1 nova_compute[225855]: 2026-01-20 15:12:01.415 225859 INFO nova.compute.manager [-] [instance: 7a223382-86d1-478e-8324-01ef43aef7e1] Took 0.31 seconds to deallocate network for instance.
Jan 20 15:12:01 compute-1 nova_compute[225855]: 2026-01-20 15:12:01.461 225859 DEBUG oslo_concurrency.lockutils [None req-f40fb6eb-0551-43ad-b875-1ad5007a8abc 8e699d2accae4b489d779507db44504e 37c01d33832740c3ba018515e081285b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:12:01 compute-1 nova_compute[225855]: 2026-01-20 15:12:01.461 225859 DEBUG oslo_concurrency.lockutils [None req-f40fb6eb-0551-43ad-b875-1ad5007a8abc 8e699d2accae4b489d779507db44504e 37c01d33832740c3ba018515e081285b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:12:01 compute-1 nova_compute[225855]: 2026-01-20 15:12:01.502 225859 DEBUG oslo_concurrency.processutils [None req-f40fb6eb-0551-43ad-b875-1ad5007a8abc 8e699d2accae4b489d779507db44504e 37c01d33832740c3ba018515e081285b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:12:01 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/1794335812' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:12:01 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:12:01 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3816362426' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:12:01 compute-1 nova_compute[225855]: 2026-01-20 15:12:01.954 225859 DEBUG oslo_concurrency.processutils [None req-f40fb6eb-0551-43ad-b875-1ad5007a8abc 8e699d2accae4b489d779507db44504e 37c01d33832740c3ba018515e081285b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:12:01 compute-1 nova_compute[225855]: 2026-01-20 15:12:01.962 225859 DEBUG nova.compute.provider_tree [None req-f40fb6eb-0551-43ad-b875-1ad5007a8abc 8e699d2accae4b489d779507db44504e 37c01d33832740c3ba018515e081285b - - default default] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:12:01 compute-1 nova_compute[225855]: 2026-01-20 15:12:01.977 225859 DEBUG nova.scheduler.client.report [None req-f40fb6eb-0551-43ad-b875-1ad5007a8abc 8e699d2accae4b489d779507db44504e 37c01d33832740c3ba018515e081285b - - default default] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:12:01 compute-1 nova_compute[225855]: 2026-01-20 15:12:01.997 225859 DEBUG oslo_concurrency.lockutils [None req-f40fb6eb-0551-43ad-b875-1ad5007a8abc 8e699d2accae4b489d779507db44504e 37c01d33832740c3ba018515e081285b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.536s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:12:02 compute-1 nova_compute[225855]: 2026-01-20 15:12:02.026 225859 INFO nova.scheduler.client.report [None req-f40fb6eb-0551-43ad-b875-1ad5007a8abc 8e699d2accae4b489d779507db44504e 37c01d33832740c3ba018515e081285b - - default default] Deleted allocations for instance 7a223382-86d1-478e-8324-01ef43aef7e1
Jan 20 15:12:02 compute-1 nova_compute[225855]: 2026-01-20 15:12:02.120 225859 DEBUG oslo_concurrency.lockutils [None req-f40fb6eb-0551-43ad-b875-1ad5007a8abc 8e699d2accae4b489d779507db44504e 37c01d33832740c3ba018515e081285b - - default default] Lock "7a223382-86d1-478e-8324-01ef43aef7e1" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.283s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:12:02 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:12:02 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:12:02 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:12:02.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:12:02 compute-1 nova_compute[225855]: 2026-01-20 15:12:02.721 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:12:02 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/3816362426' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:12:02 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/4256123011' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:12:02 compute-1 ceph-mon[81775]: pgmap v2670: 321 pgs: 321 active+clean; 293 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 915 KiB/s rd, 5.5 MiB/s wr, 177 op/s
Jan 20 15:12:03 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:12:03 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:12:03 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:12:03.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:12:03 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e392 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:12:03 compute-1 nova_compute[225855]: 2026-01-20 15:12:03.729 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:12:03 compute-1 nova_compute[225855]: 2026-01-20 15:12:03.729 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:12:03 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/1600982363' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:12:04 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:12:04 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:12:04 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:12:04.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:12:04 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/1997130354' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:12:04 compute-1 ceph-mon[81775]: pgmap v2671: 321 pgs: 321 active+clean; 275 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 4.4 MiB/s wr, 214 op/s
Jan 20 15:12:05 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:12:05 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:12:05 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:12:05.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:12:05 compute-1 nova_compute[225855]: 2026-01-20 15:12:05.082 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:12:06 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:12:06 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:12:06 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:12:06.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:12:07 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:12:07 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:12:07 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:12:07.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:12:07 compute-1 ceph-mon[81775]: pgmap v2672: 321 pgs: 321 active+clean; 247 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.9 MiB/s rd, 3.9 MiB/s wr, 336 op/s
Jan 20 15:12:07 compute-1 nova_compute[225855]: 2026-01-20 15:12:07.722 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:12:08 compute-1 podman[299943]: 2026-01-20 15:12:08.145129421 +0000 UTC m=+0.172783633 container health_status 72be95c12e041eb2cd1a16ad70dfa1391d9f03a5f3180a614ead9459dcfcd46f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=ovn_controller)
Jan 20 15:12:08 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e392 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:12:08 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:12:08 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:12:08 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:12:08.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:12:09 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:12:09 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:12:09 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:12:09.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:12:09 compute-1 ceph-mon[81775]: pgmap v2673: 321 pgs: 321 active+clean; 247 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 1.8 MiB/s wr, 255 op/s
Jan 20 15:12:10 compute-1 nova_compute[225855]: 2026-01-20 15:12:10.085 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:12:10 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:12:10 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:12:10 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:12:10.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:12:11 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:12:11 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:12:11 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:12:11.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:12:11 compute-1 ceph-mon[81775]: pgmap v2674: 321 pgs: 321 active+clean; 247 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 5.5 MiB/s rd, 1.8 MiB/s wr, 287 op/s
Jan 20 15:12:12 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:12:12 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:12:12 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:12:12.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:12:12 compute-1 nova_compute[225855]: 2026-01-20 15:12:12.723 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:12:13 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:12:13 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:12:13 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:12:13.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:12:13 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e392 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:12:13 compute-1 ceph-mon[81775]: pgmap v2675: 321 pgs: 321 active+clean; 247 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 5.3 MiB/s rd, 77 KiB/s wr, 245 op/s
Jan 20 15:12:14 compute-1 nova_compute[225855]: 2026-01-20 15:12:14.537 225859 DEBUG oslo_concurrency.lockutils [None req-c45cd0ee-36ca-4caa-8e42-97cd4c7dc0e0 a73d4f13c0bf4d1c9497cd04e5db6724 8550ab6f7bdb4d9faa423c65e76a6818 - - default default] Acquiring lock "4e7e9bf1-528e-4390-8d23-3ab48889e23c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:12:14 compute-1 nova_compute[225855]: 2026-01-20 15:12:14.537 225859 DEBUG oslo_concurrency.lockutils [None req-c45cd0ee-36ca-4caa-8e42-97cd4c7dc0e0 a73d4f13c0bf4d1c9497cd04e5db6724 8550ab6f7bdb4d9faa423c65e76a6818 - - default default] Lock "4e7e9bf1-528e-4390-8d23-3ab48889e23c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:12:14 compute-1 nova_compute[225855]: 2026-01-20 15:12:14.555 225859 DEBUG nova.compute.manager [None req-c45cd0ee-36ca-4caa-8e42-97cd4c7dc0e0 a73d4f13c0bf4d1c9497cd04e5db6724 8550ab6f7bdb4d9faa423c65e76a6818 - - default default] [instance: 4e7e9bf1-528e-4390-8d23-3ab48889e23c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 20 15:12:14 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/1857185143' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:12:14 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/1857185143' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:12:14 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:12:14 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:12:14 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:12:14.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:12:14 compute-1 nova_compute[225855]: 2026-01-20 15:12:14.794 225859 DEBUG oslo_concurrency.lockutils [None req-c45cd0ee-36ca-4caa-8e42-97cd4c7dc0e0 a73d4f13c0bf4d1c9497cd04e5db6724 8550ab6f7bdb4d9faa423c65e76a6818 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:12:14 compute-1 nova_compute[225855]: 2026-01-20 15:12:14.795 225859 DEBUG oslo_concurrency.lockutils [None req-c45cd0ee-36ca-4caa-8e42-97cd4c7dc0e0 a73d4f13c0bf4d1c9497cd04e5db6724 8550ab6f7bdb4d9faa423c65e76a6818 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:12:14 compute-1 nova_compute[225855]: 2026-01-20 15:12:14.805 225859 DEBUG nova.virt.hardware [None req-c45cd0ee-36ca-4caa-8e42-97cd4c7dc0e0 a73d4f13c0bf4d1c9497cd04e5db6724 8550ab6f7bdb4d9faa423c65e76a6818 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 20 15:12:14 compute-1 nova_compute[225855]: 2026-01-20 15:12:14.805 225859 INFO nova.compute.claims [None req-c45cd0ee-36ca-4caa-8e42-97cd4c7dc0e0 a73d4f13c0bf4d1c9497cd04e5db6724 8550ab6f7bdb4d9faa423c65e76a6818 - - default default] [instance: 4e7e9bf1-528e-4390-8d23-3ab48889e23c] Claim successful on node compute-1.ctlplane.example.com
Jan 20 15:12:14 compute-1 nova_compute[225855]: 2026-01-20 15:12:14.951 225859 DEBUG oslo_concurrency.processutils [None req-c45cd0ee-36ca-4caa-8e42-97cd4c7dc0e0 a73d4f13c0bf4d1c9497cd04e5db6724 8550ab6f7bdb4d9faa423c65e76a6818 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:12:15 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:12:15 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:12:15 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:12:15.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:12:15 compute-1 nova_compute[225855]: 2026-01-20 15:12:15.088 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:12:15 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:12:15 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/514934175' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:12:15 compute-1 nova_compute[225855]: 2026-01-20 15:12:15.418 225859 DEBUG oslo_concurrency.processutils [None req-c45cd0ee-36ca-4caa-8e42-97cd4c7dc0e0 a73d4f13c0bf4d1c9497cd04e5db6724 8550ab6f7bdb4d9faa423c65e76a6818 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:12:15 compute-1 nova_compute[225855]: 2026-01-20 15:12:15.424 225859 DEBUG nova.compute.provider_tree [None req-c45cd0ee-36ca-4caa-8e42-97cd4c7dc0e0 a73d4f13c0bf4d1c9497cd04e5db6724 8550ab6f7bdb4d9faa423c65e76a6818 - - default default] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:12:15 compute-1 nova_compute[225855]: 2026-01-20 15:12:15.458 225859 DEBUG nova.scheduler.client.report [None req-c45cd0ee-36ca-4caa-8e42-97cd4c7dc0e0 a73d4f13c0bf4d1c9497cd04e5db6724 8550ab6f7bdb4d9faa423c65e76a6818 - - default default] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:12:15 compute-1 nova_compute[225855]: 2026-01-20 15:12:15.514 225859 DEBUG oslo_concurrency.lockutils [None req-c45cd0ee-36ca-4caa-8e42-97cd4c7dc0e0 a73d4f13c0bf4d1c9497cd04e5db6724 8550ab6f7bdb4d9faa423c65e76a6818 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.719s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:12:15 compute-1 nova_compute[225855]: 2026-01-20 15:12:15.515 225859 DEBUG nova.compute.manager [None req-c45cd0ee-36ca-4caa-8e42-97cd4c7dc0e0 a73d4f13c0bf4d1c9497cd04e5db6724 8550ab6f7bdb4d9faa423c65e76a6818 - - default default] [instance: 4e7e9bf1-528e-4390-8d23-3ab48889e23c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 20 15:12:15 compute-1 nova_compute[225855]: 2026-01-20 15:12:15.592 225859 DEBUG nova.compute.manager [None req-c45cd0ee-36ca-4caa-8e42-97cd4c7dc0e0 a73d4f13c0bf4d1c9497cd04e5db6724 8550ab6f7bdb4d9faa423c65e76a6818 - - default default] [instance: 4e7e9bf1-528e-4390-8d23-3ab48889e23c] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948
Jan 20 15:12:15 compute-1 nova_compute[225855]: 2026-01-20 15:12:15.614 225859 INFO nova.virt.libvirt.driver [None req-c45cd0ee-36ca-4caa-8e42-97cd4c7dc0e0 a73d4f13c0bf4d1c9497cd04e5db6724 8550ab6f7bdb4d9faa423c65e76a6818 - - default default] [instance: 4e7e9bf1-528e-4390-8d23-3ab48889e23c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 20 15:12:15 compute-1 ceph-mon[81775]: pgmap v2676: 321 pgs: 321 active+clean; 247 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 5.3 MiB/s rd, 31 KiB/s wr, 228 op/s
Jan 20 15:12:15 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/514934175' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:12:15 compute-1 nova_compute[225855]: 2026-01-20 15:12:15.634 225859 DEBUG nova.compute.manager [None req-c45cd0ee-36ca-4caa-8e42-97cd4c7dc0e0 a73d4f13c0bf4d1c9497cd04e5db6724 8550ab6f7bdb4d9faa423c65e76a6818 - - default default] [instance: 4e7e9bf1-528e-4390-8d23-3ab48889e23c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 20 15:12:15 compute-1 nova_compute[225855]: 2026-01-20 15:12:15.690 225859 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1768921920.6896899, 7a223382-86d1-478e-8324-01ef43aef7e1 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:12:15 compute-1 nova_compute[225855]: 2026-01-20 15:12:15.691 225859 INFO nova.compute.manager [-] [instance: 7a223382-86d1-478e-8324-01ef43aef7e1] VM Stopped (Lifecycle Event)
Jan 20 15:12:15 compute-1 nova_compute[225855]: 2026-01-20 15:12:15.745 225859 DEBUG nova.compute.manager [None req-60a59d7f-7914-488e-b24a-acba2f4e8484 - - - - - -] [instance: 7a223382-86d1-478e-8324-01ef43aef7e1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:12:15 compute-1 nova_compute[225855]: 2026-01-20 15:12:15.791 225859 DEBUG nova.compute.manager [None req-c45cd0ee-36ca-4caa-8e42-97cd4c7dc0e0 a73d4f13c0bf4d1c9497cd04e5db6724 8550ab6f7bdb4d9faa423c65e76a6818 - - default default] [instance: 4e7e9bf1-528e-4390-8d23-3ab48889e23c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 20 15:12:15 compute-1 nova_compute[225855]: 2026-01-20 15:12:15.792 225859 DEBUG nova.virt.libvirt.driver [None req-c45cd0ee-36ca-4caa-8e42-97cd4c7dc0e0 a73d4f13c0bf4d1c9497cd04e5db6724 8550ab6f7bdb4d9faa423c65e76a6818 - - default default] [instance: 4e7e9bf1-528e-4390-8d23-3ab48889e23c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 20 15:12:15 compute-1 nova_compute[225855]: 2026-01-20 15:12:15.793 225859 INFO nova.virt.libvirt.driver [None req-c45cd0ee-36ca-4caa-8e42-97cd4c7dc0e0 a73d4f13c0bf4d1c9497cd04e5db6724 8550ab6f7bdb4d9faa423c65e76a6818 - - default default] [instance: 4e7e9bf1-528e-4390-8d23-3ab48889e23c] Creating image(s)
Jan 20 15:12:15 compute-1 nova_compute[225855]: 2026-01-20 15:12:15.828 225859 DEBUG nova.storage.rbd_utils [None req-c45cd0ee-36ca-4caa-8e42-97cd4c7dc0e0 a73d4f13c0bf4d1c9497cd04e5db6724 8550ab6f7bdb4d9faa423c65e76a6818 - - default default] rbd image 4e7e9bf1-528e-4390-8d23-3ab48889e23c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:12:15 compute-1 nova_compute[225855]: 2026-01-20 15:12:15.868 225859 DEBUG nova.storage.rbd_utils [None req-c45cd0ee-36ca-4caa-8e42-97cd4c7dc0e0 a73d4f13c0bf4d1c9497cd04e5db6724 8550ab6f7bdb4d9faa423c65e76a6818 - - default default] rbd image 4e7e9bf1-528e-4390-8d23-3ab48889e23c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:12:15 compute-1 nova_compute[225855]: 2026-01-20 15:12:15.903 225859 DEBUG nova.storage.rbd_utils [None req-c45cd0ee-36ca-4caa-8e42-97cd4c7dc0e0 a73d4f13c0bf4d1c9497cd04e5db6724 8550ab6f7bdb4d9faa423c65e76a6818 - - default default] rbd image 4e7e9bf1-528e-4390-8d23-3ab48889e23c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:12:15 compute-1 nova_compute[225855]: 2026-01-20 15:12:15.907 225859 DEBUG oslo_concurrency.processutils [None req-c45cd0ee-36ca-4caa-8e42-97cd4c7dc0e0 a73d4f13c0bf4d1c9497cd04e5db6724 8550ab6f7bdb4d9faa423c65e76a6818 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:12:15 compute-1 nova_compute[225855]: 2026-01-20 15:12:15.971 225859 DEBUG oslo_concurrency.processutils [None req-c45cd0ee-36ca-4caa-8e42-97cd4c7dc0e0 a73d4f13c0bf4d1c9497cd04e5db6724 8550ab6f7bdb4d9faa423c65e76a6818 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:12:15 compute-1 nova_compute[225855]: 2026-01-20 15:12:15.973 225859 DEBUG oslo_concurrency.lockutils [None req-c45cd0ee-36ca-4caa-8e42-97cd4c7dc0e0 a73d4f13c0bf4d1c9497cd04e5db6724 8550ab6f7bdb4d9faa423c65e76a6818 - - default default] Acquiring lock "82d5c1918fd7c974214c7a48c1793a7a82560462" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:12:15 compute-1 nova_compute[225855]: 2026-01-20 15:12:15.974 225859 DEBUG oslo_concurrency.lockutils [None req-c45cd0ee-36ca-4caa-8e42-97cd4c7dc0e0 a73d4f13c0bf4d1c9497cd04e5db6724 8550ab6f7bdb4d9faa423c65e76a6818 - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:12:15 compute-1 nova_compute[225855]: 2026-01-20 15:12:15.974 225859 DEBUG oslo_concurrency.lockutils [None req-c45cd0ee-36ca-4caa-8e42-97cd4c7dc0e0 a73d4f13c0bf4d1c9497cd04e5db6724 8550ab6f7bdb4d9faa423c65e76a6818 - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:12:16 compute-1 nova_compute[225855]: 2026-01-20 15:12:16.001 225859 DEBUG nova.storage.rbd_utils [None req-c45cd0ee-36ca-4caa-8e42-97cd4c7dc0e0 a73d4f13c0bf4d1c9497cd04e5db6724 8550ab6f7bdb4d9faa423c65e76a6818 - - default default] rbd image 4e7e9bf1-528e-4390-8d23-3ab48889e23c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:12:16 compute-1 nova_compute[225855]: 2026-01-20 15:12:16.006 225859 DEBUG oslo_concurrency.processutils [None req-c45cd0ee-36ca-4caa-8e42-97cd4c7dc0e0 a73d4f13c0bf4d1c9497cd04e5db6724 8550ab6f7bdb4d9faa423c65e76a6818 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 4e7e9bf1-528e-4390-8d23-3ab48889e23c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:12:16 compute-1 nova_compute[225855]: 2026-01-20 15:12:16.336 225859 DEBUG oslo_concurrency.processutils [None req-c45cd0ee-36ca-4caa-8e42-97cd4c7dc0e0 a73d4f13c0bf4d1c9497cd04e5db6724 8550ab6f7bdb4d9faa423c65e76a6818 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 4e7e9bf1-528e-4390-8d23-3ab48889e23c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.331s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:12:16 compute-1 nova_compute[225855]: 2026-01-20 15:12:16.427 225859 DEBUG nova.storage.rbd_utils [None req-c45cd0ee-36ca-4caa-8e42-97cd4c7dc0e0 a73d4f13c0bf4d1c9497cd04e5db6724 8550ab6f7bdb4d9faa423c65e76a6818 - - default default] resizing rbd image 4e7e9bf1-528e-4390-8d23-3ab48889e23c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 20 15:12:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:12:16.431 140354 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:12:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:12:16.432 140354 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:12:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:12:16.432 140354 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:12:16 compute-1 nova_compute[225855]: 2026-01-20 15:12:16.536 225859 DEBUG nova.objects.instance [None req-c45cd0ee-36ca-4caa-8e42-97cd4c7dc0e0 a73d4f13c0bf4d1c9497cd04e5db6724 8550ab6f7bdb4d9faa423c65e76a6818 - - default default] Lazy-loading 'migration_context' on Instance uuid 4e7e9bf1-528e-4390-8d23-3ab48889e23c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:12:16 compute-1 nova_compute[225855]: 2026-01-20 15:12:16.563 225859 DEBUG nova.virt.libvirt.driver [None req-c45cd0ee-36ca-4caa-8e42-97cd4c7dc0e0 a73d4f13c0bf4d1c9497cd04e5db6724 8550ab6f7bdb4d9faa423c65e76a6818 - - default default] [instance: 4e7e9bf1-528e-4390-8d23-3ab48889e23c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 20 15:12:16 compute-1 nova_compute[225855]: 2026-01-20 15:12:16.563 225859 DEBUG nova.virt.libvirt.driver [None req-c45cd0ee-36ca-4caa-8e42-97cd4c7dc0e0 a73d4f13c0bf4d1c9497cd04e5db6724 8550ab6f7bdb4d9faa423c65e76a6818 - - default default] [instance: 4e7e9bf1-528e-4390-8d23-3ab48889e23c] Ensure instance console log exists: /var/lib/nova/instances/4e7e9bf1-528e-4390-8d23-3ab48889e23c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 20 15:12:16 compute-1 nova_compute[225855]: 2026-01-20 15:12:16.564 225859 DEBUG oslo_concurrency.lockutils [None req-c45cd0ee-36ca-4caa-8e42-97cd4c7dc0e0 a73d4f13c0bf4d1c9497cd04e5db6724 8550ab6f7bdb4d9faa423c65e76a6818 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:12:16 compute-1 nova_compute[225855]: 2026-01-20 15:12:16.564 225859 DEBUG oslo_concurrency.lockutils [None req-c45cd0ee-36ca-4caa-8e42-97cd4c7dc0e0 a73d4f13c0bf4d1c9497cd04e5db6724 8550ab6f7bdb4d9faa423c65e76a6818 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:12:16 compute-1 nova_compute[225855]: 2026-01-20 15:12:16.565 225859 DEBUG oslo_concurrency.lockutils [None req-c45cd0ee-36ca-4caa-8e42-97cd4c7dc0e0 a73d4f13c0bf4d1c9497cd04e5db6724 8550ab6f7bdb4d9faa423c65e76a6818 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:12:16 compute-1 nova_compute[225855]: 2026-01-20 15:12:16.566 225859 DEBUG nova.virt.libvirt.driver [None req-c45cd0ee-36ca-4caa-8e42-97cd4c7dc0e0 a73d4f13c0bf4d1c9497cd04e5db6724 8550ab6f7bdb4d9faa423c65e76a6818 - - default default] [instance: 4e7e9bf1-528e-4390-8d23-3ab48889e23c] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'device_type': 'disk', 'encryption_options': None, 'size': 0, 'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'guest_format': None, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'image_id': 'a32b3e07-16d8-46fd-9a7b-c242c432fcf9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 20 15:12:16 compute-1 nova_compute[225855]: 2026-01-20 15:12:16.572 225859 WARNING nova.virt.libvirt.driver [None req-c45cd0ee-36ca-4caa-8e42-97cd4c7dc0e0 a73d4f13c0bf4d1c9497cd04e5db6724 8550ab6f7bdb4d9faa423c65e76a6818 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 15:12:16 compute-1 nova_compute[225855]: 2026-01-20 15:12:16.578 225859 DEBUG nova.virt.libvirt.host [None req-c45cd0ee-36ca-4caa-8e42-97cd4c7dc0e0 a73d4f13c0bf4d1c9497cd04e5db6724 8550ab6f7bdb4d9faa423c65e76a6818 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 20 15:12:16 compute-1 nova_compute[225855]: 2026-01-20 15:12:16.579 225859 DEBUG nova.virt.libvirt.host [None req-c45cd0ee-36ca-4caa-8e42-97cd4c7dc0e0 a73d4f13c0bf4d1c9497cd04e5db6724 8550ab6f7bdb4d9faa423c65e76a6818 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 20 15:12:16 compute-1 nova_compute[225855]: 2026-01-20 15:12:16.584 225859 DEBUG nova.virt.libvirt.host [None req-c45cd0ee-36ca-4caa-8e42-97cd4c7dc0e0 a73d4f13c0bf4d1c9497cd04e5db6724 8550ab6f7bdb4d9faa423c65e76a6818 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 20 15:12:16 compute-1 nova_compute[225855]: 2026-01-20 15:12:16.585 225859 DEBUG nova.virt.libvirt.host [None req-c45cd0ee-36ca-4caa-8e42-97cd4c7dc0e0 a73d4f13c0bf4d1c9497cd04e5db6724 8550ab6f7bdb4d9faa423c65e76a6818 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 20 15:12:16 compute-1 nova_compute[225855]: 2026-01-20 15:12:16.587 225859 DEBUG nova.virt.libvirt.driver [None req-c45cd0ee-36ca-4caa-8e42-97cd4c7dc0e0 a73d4f13c0bf4d1c9497cd04e5db6724 8550ab6f7bdb4d9faa423c65e76a6818 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 20 15:12:16 compute-1 nova_compute[225855]: 2026-01-20 15:12:16.588 225859 DEBUG nova.virt.hardware [None req-c45cd0ee-36ca-4caa-8e42-97cd4c7dc0e0 a73d4f13c0bf4d1c9497cd04e5db6724 8550ab6f7bdb4d9faa423c65e76a6818 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-20T14:21:55Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='522deaab-a741-4dbb-932d-d8b13a211c33',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 20 15:12:16 compute-1 nova_compute[225855]: 2026-01-20 15:12:16.589 225859 DEBUG nova.virt.hardware [None req-c45cd0ee-36ca-4caa-8e42-97cd4c7dc0e0 a73d4f13c0bf4d1c9497cd04e5db6724 8550ab6f7bdb4d9faa423c65e76a6818 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 20 15:12:16 compute-1 nova_compute[225855]: 2026-01-20 15:12:16.589 225859 DEBUG nova.virt.hardware [None req-c45cd0ee-36ca-4caa-8e42-97cd4c7dc0e0 a73d4f13c0bf4d1c9497cd04e5db6724 8550ab6f7bdb4d9faa423c65e76a6818 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 20 15:12:16 compute-1 nova_compute[225855]: 2026-01-20 15:12:16.590 225859 DEBUG nova.virt.hardware [None req-c45cd0ee-36ca-4caa-8e42-97cd4c7dc0e0 a73d4f13c0bf4d1c9497cd04e5db6724 8550ab6f7bdb4d9faa423c65e76a6818 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 20 15:12:16 compute-1 nova_compute[225855]: 2026-01-20 15:12:16.590 225859 DEBUG nova.virt.hardware [None req-c45cd0ee-36ca-4caa-8e42-97cd4c7dc0e0 a73d4f13c0bf4d1c9497cd04e5db6724 8550ab6f7bdb4d9faa423c65e76a6818 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 20 15:12:16 compute-1 nova_compute[225855]: 2026-01-20 15:12:16.591 225859 DEBUG nova.virt.hardware [None req-c45cd0ee-36ca-4caa-8e42-97cd4c7dc0e0 a73d4f13c0bf4d1c9497cd04e5db6724 8550ab6f7bdb4d9faa423c65e76a6818 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 20 15:12:16 compute-1 nova_compute[225855]: 2026-01-20 15:12:16.591 225859 DEBUG nova.virt.hardware [None req-c45cd0ee-36ca-4caa-8e42-97cd4c7dc0e0 a73d4f13c0bf4d1c9497cd04e5db6724 8550ab6f7bdb4d9faa423c65e76a6818 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 20 15:12:16 compute-1 nova_compute[225855]: 2026-01-20 15:12:16.592 225859 DEBUG nova.virt.hardware [None req-c45cd0ee-36ca-4caa-8e42-97cd4c7dc0e0 a73d4f13c0bf4d1c9497cd04e5db6724 8550ab6f7bdb4d9faa423c65e76a6818 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 20 15:12:16 compute-1 nova_compute[225855]: 2026-01-20 15:12:16.592 225859 DEBUG nova.virt.hardware [None req-c45cd0ee-36ca-4caa-8e42-97cd4c7dc0e0 a73d4f13c0bf4d1c9497cd04e5db6724 8550ab6f7bdb4d9faa423c65e76a6818 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 20 15:12:16 compute-1 nova_compute[225855]: 2026-01-20 15:12:16.593 225859 DEBUG nova.virt.hardware [None req-c45cd0ee-36ca-4caa-8e42-97cd4c7dc0e0 a73d4f13c0bf4d1c9497cd04e5db6724 8550ab6f7bdb4d9faa423c65e76a6818 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 20 15:12:16 compute-1 nova_compute[225855]: 2026-01-20 15:12:16.593 225859 DEBUG nova.virt.hardware [None req-c45cd0ee-36ca-4caa-8e42-97cd4c7dc0e0 a73d4f13c0bf4d1c9497cd04e5db6724 8550ab6f7bdb4d9faa423c65e76a6818 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 20 15:12:16 compute-1 nova_compute[225855]: 2026-01-20 15:12:16.599 225859 DEBUG oslo_concurrency.processutils [None req-c45cd0ee-36ca-4caa-8e42-97cd4c7dc0e0 a73d4f13c0bf4d1c9497cd04e5db6724 8550ab6f7bdb4d9faa423c65e76a6818 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:12:16 compute-1 ceph-mon[81775]: pgmap v2677: 321 pgs: 321 active+clean; 259 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 1.6 MiB/s wr, 203 op/s
Jan 20 15:12:16 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:12:16 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:12:16 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:12:16.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:12:17 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:12:17 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:12:17 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:12:17.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:12:17 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 15:12:17 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/394228248' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:12:17 compute-1 nova_compute[225855]: 2026-01-20 15:12:17.105 225859 DEBUG oslo_concurrency.processutils [None req-c45cd0ee-36ca-4caa-8e42-97cd4c7dc0e0 a73d4f13c0bf4d1c9497cd04e5db6724 8550ab6f7bdb4d9faa423c65e76a6818 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.507s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:12:17 compute-1 nova_compute[225855]: 2026-01-20 15:12:17.133 225859 DEBUG nova.storage.rbd_utils [None req-c45cd0ee-36ca-4caa-8e42-97cd4c7dc0e0 a73d4f13c0bf4d1c9497cd04e5db6724 8550ab6f7bdb4d9faa423c65e76a6818 - - default default] rbd image 4e7e9bf1-528e-4390-8d23-3ab48889e23c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:12:17 compute-1 nova_compute[225855]: 2026-01-20 15:12:17.138 225859 DEBUG oslo_concurrency.processutils [None req-c45cd0ee-36ca-4caa-8e42-97cd4c7dc0e0 a73d4f13c0bf4d1c9497cd04e5db6724 8550ab6f7bdb4d9faa423c65e76a6818 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:12:17 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 15:12:17 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/636621959' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:12:17 compute-1 nova_compute[225855]: 2026-01-20 15:12:17.607 225859 DEBUG oslo_concurrency.processutils [None req-c45cd0ee-36ca-4caa-8e42-97cd4c7dc0e0 a73d4f13c0bf4d1c9497cd04e5db6724 8550ab6f7bdb4d9faa423c65e76a6818 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:12:17 compute-1 nova_compute[225855]: 2026-01-20 15:12:17.609 225859 DEBUG nova.objects.instance [None req-c45cd0ee-36ca-4caa-8e42-97cd4c7dc0e0 a73d4f13c0bf4d1c9497cd04e5db6724 8550ab6f7bdb4d9faa423c65e76a6818 - - default default] Lazy-loading 'pci_devices' on Instance uuid 4e7e9bf1-528e-4390-8d23-3ab48889e23c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:12:17 compute-1 nova_compute[225855]: 2026-01-20 15:12:17.638 225859 DEBUG nova.virt.libvirt.driver [None req-c45cd0ee-36ca-4caa-8e42-97cd4c7dc0e0 a73d4f13c0bf4d1c9497cd04e5db6724 8550ab6f7bdb4d9faa423c65e76a6818 - - default default] [instance: 4e7e9bf1-528e-4390-8d23-3ab48889e23c] End _get_guest_xml xml=<domain type="kvm">
Jan 20 15:12:17 compute-1 nova_compute[225855]:   <uuid>4e7e9bf1-528e-4390-8d23-3ab48889e23c</uuid>
Jan 20 15:12:17 compute-1 nova_compute[225855]:   <name>instance-000000b4</name>
Jan 20 15:12:17 compute-1 nova_compute[225855]:   <memory>131072</memory>
Jan 20 15:12:17 compute-1 nova_compute[225855]:   <vcpu>1</vcpu>
Jan 20 15:12:17 compute-1 nova_compute[225855]:   <metadata>
Jan 20 15:12:17 compute-1 nova_compute[225855]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 15:12:17 compute-1 nova_compute[225855]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 15:12:17 compute-1 nova_compute[225855]:       <nova:name>tempest-ServerShowV257Test-server-1886200183</nova:name>
Jan 20 15:12:17 compute-1 nova_compute[225855]:       <nova:creationTime>2026-01-20 15:12:16</nova:creationTime>
Jan 20 15:12:17 compute-1 nova_compute[225855]:       <nova:flavor name="m1.nano">
Jan 20 15:12:17 compute-1 nova_compute[225855]:         <nova:memory>128</nova:memory>
Jan 20 15:12:17 compute-1 nova_compute[225855]:         <nova:disk>1</nova:disk>
Jan 20 15:12:17 compute-1 nova_compute[225855]:         <nova:swap>0</nova:swap>
Jan 20 15:12:17 compute-1 nova_compute[225855]:         <nova:ephemeral>0</nova:ephemeral>
Jan 20 15:12:17 compute-1 nova_compute[225855]:         <nova:vcpus>1</nova:vcpus>
Jan 20 15:12:17 compute-1 nova_compute[225855]:       </nova:flavor>
Jan 20 15:12:17 compute-1 nova_compute[225855]:       <nova:owner>
Jan 20 15:12:17 compute-1 nova_compute[225855]:         <nova:user uuid="a73d4f13c0bf4d1c9497cd04e5db6724">tempest-ServerShowV257Test-1887808980-project-member</nova:user>
Jan 20 15:12:17 compute-1 nova_compute[225855]:         <nova:project uuid="8550ab6f7bdb4d9faa423c65e76a6818">tempest-ServerShowV257Test-1887808980</nova:project>
Jan 20 15:12:17 compute-1 nova_compute[225855]:       </nova:owner>
Jan 20 15:12:17 compute-1 nova_compute[225855]:       <nova:root type="image" uuid="a32b3e07-16d8-46fd-9a7b-c242c432fcf9"/>
Jan 20 15:12:17 compute-1 nova_compute[225855]:       <nova:ports/>
Jan 20 15:12:17 compute-1 nova_compute[225855]:     </nova:instance>
Jan 20 15:12:17 compute-1 nova_compute[225855]:   </metadata>
Jan 20 15:12:17 compute-1 nova_compute[225855]:   <sysinfo type="smbios">
Jan 20 15:12:17 compute-1 nova_compute[225855]:     <system>
Jan 20 15:12:17 compute-1 nova_compute[225855]:       <entry name="manufacturer">RDO</entry>
Jan 20 15:12:17 compute-1 nova_compute[225855]:       <entry name="product">OpenStack Compute</entry>
Jan 20 15:12:17 compute-1 nova_compute[225855]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 15:12:17 compute-1 nova_compute[225855]:       <entry name="serial">4e7e9bf1-528e-4390-8d23-3ab48889e23c</entry>
Jan 20 15:12:17 compute-1 nova_compute[225855]:       <entry name="uuid">4e7e9bf1-528e-4390-8d23-3ab48889e23c</entry>
Jan 20 15:12:17 compute-1 nova_compute[225855]:       <entry name="family">Virtual Machine</entry>
Jan 20 15:12:17 compute-1 nova_compute[225855]:     </system>
Jan 20 15:12:17 compute-1 nova_compute[225855]:   </sysinfo>
Jan 20 15:12:17 compute-1 nova_compute[225855]:   <os>
Jan 20 15:12:17 compute-1 nova_compute[225855]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 20 15:12:17 compute-1 nova_compute[225855]:     <boot dev="hd"/>
Jan 20 15:12:17 compute-1 nova_compute[225855]:     <smbios mode="sysinfo"/>
Jan 20 15:12:17 compute-1 nova_compute[225855]:   </os>
Jan 20 15:12:17 compute-1 nova_compute[225855]:   <features>
Jan 20 15:12:17 compute-1 nova_compute[225855]:     <acpi/>
Jan 20 15:12:17 compute-1 nova_compute[225855]:     <apic/>
Jan 20 15:12:17 compute-1 nova_compute[225855]:     <vmcoreinfo/>
Jan 20 15:12:17 compute-1 nova_compute[225855]:   </features>
Jan 20 15:12:17 compute-1 nova_compute[225855]:   <clock offset="utc">
Jan 20 15:12:17 compute-1 nova_compute[225855]:     <timer name="pit" tickpolicy="delay"/>
Jan 20 15:12:17 compute-1 nova_compute[225855]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 20 15:12:17 compute-1 nova_compute[225855]:     <timer name="hpet" present="no"/>
Jan 20 15:12:17 compute-1 nova_compute[225855]:   </clock>
Jan 20 15:12:17 compute-1 nova_compute[225855]:   <cpu mode="custom" match="exact">
Jan 20 15:12:17 compute-1 nova_compute[225855]:     <model>Nehalem</model>
Jan 20 15:12:17 compute-1 nova_compute[225855]:     <topology sockets="1" cores="1" threads="1"/>
Jan 20 15:12:17 compute-1 nova_compute[225855]:   </cpu>
Jan 20 15:12:17 compute-1 nova_compute[225855]:   <devices>
Jan 20 15:12:17 compute-1 nova_compute[225855]:     <disk type="network" device="disk">
Jan 20 15:12:17 compute-1 nova_compute[225855]:       <driver type="raw" cache="none"/>
Jan 20 15:12:17 compute-1 nova_compute[225855]:       <source protocol="rbd" name="vms/4e7e9bf1-528e-4390-8d23-3ab48889e23c_disk">
Jan 20 15:12:17 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 15:12:17 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 15:12:17 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 15:12:17 compute-1 nova_compute[225855]:       </source>
Jan 20 15:12:17 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 15:12:17 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 15:12:17 compute-1 nova_compute[225855]:       </auth>
Jan 20 15:12:17 compute-1 nova_compute[225855]:       <target dev="vda" bus="virtio"/>
Jan 20 15:12:17 compute-1 nova_compute[225855]:     </disk>
Jan 20 15:12:17 compute-1 nova_compute[225855]:     <disk type="network" device="cdrom">
Jan 20 15:12:17 compute-1 nova_compute[225855]:       <driver type="raw" cache="none"/>
Jan 20 15:12:17 compute-1 nova_compute[225855]:       <source protocol="rbd" name="vms/4e7e9bf1-528e-4390-8d23-3ab48889e23c_disk.config">
Jan 20 15:12:17 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 15:12:17 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 15:12:17 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 15:12:17 compute-1 nova_compute[225855]:       </source>
Jan 20 15:12:17 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 15:12:17 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 15:12:17 compute-1 nova_compute[225855]:       </auth>
Jan 20 15:12:17 compute-1 nova_compute[225855]:       <target dev="sda" bus="sata"/>
Jan 20 15:12:17 compute-1 nova_compute[225855]:     </disk>
Jan 20 15:12:17 compute-1 nova_compute[225855]:     <serial type="pty">
Jan 20 15:12:17 compute-1 nova_compute[225855]:       <log file="/var/lib/nova/instances/4e7e9bf1-528e-4390-8d23-3ab48889e23c/console.log" append="off"/>
Jan 20 15:12:17 compute-1 nova_compute[225855]:     </serial>
Jan 20 15:12:17 compute-1 nova_compute[225855]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 20 15:12:17 compute-1 nova_compute[225855]:     <video>
Jan 20 15:12:17 compute-1 nova_compute[225855]:       <model type="virtio"/>
Jan 20 15:12:17 compute-1 nova_compute[225855]:     </video>
Jan 20 15:12:17 compute-1 nova_compute[225855]:     <input type="tablet" bus="usb"/>
Jan 20 15:12:17 compute-1 nova_compute[225855]:     <rng model="virtio">
Jan 20 15:12:17 compute-1 nova_compute[225855]:       <backend model="random">/dev/urandom</backend>
Jan 20 15:12:17 compute-1 nova_compute[225855]:     </rng>
Jan 20 15:12:17 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root"/>
Jan 20 15:12:17 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:12:17 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:12:17 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:12:17 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:12:17 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:12:17 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:12:17 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:12:17 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:12:17 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:12:17 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:12:17 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:12:17 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:12:17 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:12:17 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:12:17 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:12:17 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:12:17 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:12:17 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:12:17 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:12:17 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:12:17 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:12:17 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:12:17 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:12:17 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:12:17 compute-1 nova_compute[225855]:     <controller type="usb" index="0"/>
Jan 20 15:12:17 compute-1 nova_compute[225855]:     <memballoon model="virtio">
Jan 20 15:12:17 compute-1 nova_compute[225855]:       <stats period="10"/>
Jan 20 15:12:17 compute-1 nova_compute[225855]:     </memballoon>
Jan 20 15:12:17 compute-1 nova_compute[225855]:   </devices>
Jan 20 15:12:17 compute-1 nova_compute[225855]: </domain>
Jan 20 15:12:17 compute-1 nova_compute[225855]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 20 15:12:17 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/394228248' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:12:17 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/636621959' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:12:17 compute-1 nova_compute[225855]: 2026-01-20 15:12:17.702 225859 DEBUG nova.virt.libvirt.driver [None req-c45cd0ee-36ca-4caa-8e42-97cd4c7dc0e0 a73d4f13c0bf4d1c9497cd04e5db6724 8550ab6f7bdb4d9faa423c65e76a6818 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 15:12:17 compute-1 nova_compute[225855]: 2026-01-20 15:12:17.703 225859 DEBUG nova.virt.libvirt.driver [None req-c45cd0ee-36ca-4caa-8e42-97cd4c7dc0e0 a73d4f13c0bf4d1c9497cd04e5db6724 8550ab6f7bdb4d9faa423c65e76a6818 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 15:12:17 compute-1 nova_compute[225855]: 2026-01-20 15:12:17.704 225859 INFO nova.virt.libvirt.driver [None req-c45cd0ee-36ca-4caa-8e42-97cd4c7dc0e0 a73d4f13c0bf4d1c9497cd04e5db6724 8550ab6f7bdb4d9faa423c65e76a6818 - - default default] [instance: 4e7e9bf1-528e-4390-8d23-3ab48889e23c] Using config drive
Jan 20 15:12:17 compute-1 nova_compute[225855]: 2026-01-20 15:12:17.746 225859 DEBUG nova.storage.rbd_utils [None req-c45cd0ee-36ca-4caa-8e42-97cd4c7dc0e0 a73d4f13c0bf4d1c9497cd04e5db6724 8550ab6f7bdb4d9faa423c65e76a6818 - - default default] rbd image 4e7e9bf1-528e-4390-8d23-3ab48889e23c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:12:17 compute-1 nova_compute[225855]: 2026-01-20 15:12:17.752 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:12:18 compute-1 nova_compute[225855]: 2026-01-20 15:12:18.260 225859 INFO nova.virt.libvirt.driver [None req-c45cd0ee-36ca-4caa-8e42-97cd4c7dc0e0 a73d4f13c0bf4d1c9497cd04e5db6724 8550ab6f7bdb4d9faa423c65e76a6818 - - default default] [instance: 4e7e9bf1-528e-4390-8d23-3ab48889e23c] Creating config drive at /var/lib/nova/instances/4e7e9bf1-528e-4390-8d23-3ab48889e23c/disk.config
Jan 20 15:12:18 compute-1 nova_compute[225855]: 2026-01-20 15:12:18.266 225859 DEBUG oslo_concurrency.processutils [None req-c45cd0ee-36ca-4caa-8e42-97cd4c7dc0e0 a73d4f13c0bf4d1c9497cd04e5db6724 8550ab6f7bdb4d9faa423c65e76a6818 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4e7e9bf1-528e-4390-8d23-3ab48889e23c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpoq0lg9sm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:12:18 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e392 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:12:18 compute-1 nova_compute[225855]: 2026-01-20 15:12:18.408 225859 DEBUG oslo_concurrency.processutils [None req-c45cd0ee-36ca-4caa-8e42-97cd4c7dc0e0 a73d4f13c0bf4d1c9497cd04e5db6724 8550ab6f7bdb4d9faa423c65e76a6818 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4e7e9bf1-528e-4390-8d23-3ab48889e23c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpoq0lg9sm" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:12:18 compute-1 nova_compute[225855]: 2026-01-20 15:12:18.455 225859 DEBUG nova.storage.rbd_utils [None req-c45cd0ee-36ca-4caa-8e42-97cd4c7dc0e0 a73d4f13c0bf4d1c9497cd04e5db6724 8550ab6f7bdb4d9faa423c65e76a6818 - - default default] rbd image 4e7e9bf1-528e-4390-8d23-3ab48889e23c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:12:18 compute-1 nova_compute[225855]: 2026-01-20 15:12:18.461 225859 DEBUG oslo_concurrency.processutils [None req-c45cd0ee-36ca-4caa-8e42-97cd4c7dc0e0 a73d4f13c0bf4d1c9497cd04e5db6724 8550ab6f7bdb4d9faa423c65e76a6818 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/4e7e9bf1-528e-4390-8d23-3ab48889e23c/disk.config 4e7e9bf1-528e-4390-8d23-3ab48889e23c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:12:18 compute-1 nova_compute[225855]: 2026-01-20 15:12:18.660 225859 DEBUG oslo_concurrency.processutils [None req-c45cd0ee-36ca-4caa-8e42-97cd4c7dc0e0 a73d4f13c0bf4d1c9497cd04e5db6724 8550ab6f7bdb4d9faa423c65e76a6818 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/4e7e9bf1-528e-4390-8d23-3ab48889e23c/disk.config 4e7e9bf1-528e-4390-8d23-3ab48889e23c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.199s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:12:18 compute-1 nova_compute[225855]: 2026-01-20 15:12:18.662 225859 INFO nova.virt.libvirt.driver [None req-c45cd0ee-36ca-4caa-8e42-97cd4c7dc0e0 a73d4f13c0bf4d1c9497cd04e5db6724 8550ab6f7bdb4d9faa423c65e76a6818 - - default default] [instance: 4e7e9bf1-528e-4390-8d23-3ab48889e23c] Deleting local config drive /var/lib/nova/instances/4e7e9bf1-528e-4390-8d23-3ab48889e23c/disk.config because it was imported into RBD.
Jan 20 15:12:18 compute-1 ceph-mon[81775]: pgmap v2678: 321 pgs: 321 active+clean; 259 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 1.5 MiB/s wr, 69 op/s
Jan 20 15:12:18 compute-1 systemd-machined[194361]: New machine qemu-94-instance-000000b4.
Jan 20 15:12:18 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:12:18 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:12:18 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:12:18.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:12:18 compute-1 systemd[1]: Started Virtual Machine qemu-94-instance-000000b4.
Jan 20 15:12:19 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:12:19 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:12:19 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:12:19.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:12:19 compute-1 nova_compute[225855]: 2026-01-20 15:12:19.092 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768921939.0919037, 4e7e9bf1-528e-4390-8d23-3ab48889e23c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:12:19 compute-1 nova_compute[225855]: 2026-01-20 15:12:19.094 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 4e7e9bf1-528e-4390-8d23-3ab48889e23c] VM Resumed (Lifecycle Event)
Jan 20 15:12:19 compute-1 nova_compute[225855]: 2026-01-20 15:12:19.098 225859 DEBUG nova.compute.manager [None req-c45cd0ee-36ca-4caa-8e42-97cd4c7dc0e0 a73d4f13c0bf4d1c9497cd04e5db6724 8550ab6f7bdb4d9faa423c65e76a6818 - - default default] [instance: 4e7e9bf1-528e-4390-8d23-3ab48889e23c] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 20 15:12:19 compute-1 nova_compute[225855]: 2026-01-20 15:12:19.099 225859 DEBUG nova.virt.libvirt.driver [None req-c45cd0ee-36ca-4caa-8e42-97cd4c7dc0e0 a73d4f13c0bf4d1c9497cd04e5db6724 8550ab6f7bdb4d9faa423c65e76a6818 - - default default] [instance: 4e7e9bf1-528e-4390-8d23-3ab48889e23c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 20 15:12:19 compute-1 nova_compute[225855]: 2026-01-20 15:12:19.105 225859 INFO nova.virt.libvirt.driver [-] [instance: 4e7e9bf1-528e-4390-8d23-3ab48889e23c] Instance spawned successfully.
Jan 20 15:12:19 compute-1 nova_compute[225855]: 2026-01-20 15:12:19.106 225859 DEBUG nova.virt.libvirt.driver [None req-c45cd0ee-36ca-4caa-8e42-97cd4c7dc0e0 a73d4f13c0bf4d1c9497cd04e5db6724 8550ab6f7bdb4d9faa423c65e76a6818 - - default default] [instance: 4e7e9bf1-528e-4390-8d23-3ab48889e23c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 20 15:12:19 compute-1 nova_compute[225855]: 2026-01-20 15:12:19.127 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 4e7e9bf1-528e-4390-8d23-3ab48889e23c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:12:19 compute-1 nova_compute[225855]: 2026-01-20 15:12:19.136 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 4e7e9bf1-528e-4390-8d23-3ab48889e23c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 15:12:19 compute-1 nova_compute[225855]: 2026-01-20 15:12:19.145 225859 DEBUG nova.virt.libvirt.driver [None req-c45cd0ee-36ca-4caa-8e42-97cd4c7dc0e0 a73d4f13c0bf4d1c9497cd04e5db6724 8550ab6f7bdb4d9faa423c65e76a6818 - - default default] [instance: 4e7e9bf1-528e-4390-8d23-3ab48889e23c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:12:19 compute-1 nova_compute[225855]: 2026-01-20 15:12:19.146 225859 DEBUG nova.virt.libvirt.driver [None req-c45cd0ee-36ca-4caa-8e42-97cd4c7dc0e0 a73d4f13c0bf4d1c9497cd04e5db6724 8550ab6f7bdb4d9faa423c65e76a6818 - - default default] [instance: 4e7e9bf1-528e-4390-8d23-3ab48889e23c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:12:19 compute-1 nova_compute[225855]: 2026-01-20 15:12:19.147 225859 DEBUG nova.virt.libvirt.driver [None req-c45cd0ee-36ca-4caa-8e42-97cd4c7dc0e0 a73d4f13c0bf4d1c9497cd04e5db6724 8550ab6f7bdb4d9faa423c65e76a6818 - - default default] [instance: 4e7e9bf1-528e-4390-8d23-3ab48889e23c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:12:19 compute-1 nova_compute[225855]: 2026-01-20 15:12:19.148 225859 DEBUG nova.virt.libvirt.driver [None req-c45cd0ee-36ca-4caa-8e42-97cd4c7dc0e0 a73d4f13c0bf4d1c9497cd04e5db6724 8550ab6f7bdb4d9faa423c65e76a6818 - - default default] [instance: 4e7e9bf1-528e-4390-8d23-3ab48889e23c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:12:19 compute-1 nova_compute[225855]: 2026-01-20 15:12:19.149 225859 DEBUG nova.virt.libvirt.driver [None req-c45cd0ee-36ca-4caa-8e42-97cd4c7dc0e0 a73d4f13c0bf4d1c9497cd04e5db6724 8550ab6f7bdb4d9faa423c65e76a6818 - - default default] [instance: 4e7e9bf1-528e-4390-8d23-3ab48889e23c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:12:19 compute-1 nova_compute[225855]: 2026-01-20 15:12:19.150 225859 DEBUG nova.virt.libvirt.driver [None req-c45cd0ee-36ca-4caa-8e42-97cd4c7dc0e0 a73d4f13c0bf4d1c9497cd04e5db6724 8550ab6f7bdb4d9faa423c65e76a6818 - - default default] [instance: 4e7e9bf1-528e-4390-8d23-3ab48889e23c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:12:19 compute-1 nova_compute[225855]: 2026-01-20 15:12:19.188 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 4e7e9bf1-528e-4390-8d23-3ab48889e23c] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 15:12:19 compute-1 nova_compute[225855]: 2026-01-20 15:12:19.189 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768921939.093336, 4e7e9bf1-528e-4390-8d23-3ab48889e23c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:12:19 compute-1 nova_compute[225855]: 2026-01-20 15:12:19.189 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 4e7e9bf1-528e-4390-8d23-3ab48889e23c] VM Started (Lifecycle Event)
Jan 20 15:12:19 compute-1 nova_compute[225855]: 2026-01-20 15:12:19.226 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 4e7e9bf1-528e-4390-8d23-3ab48889e23c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:12:19 compute-1 nova_compute[225855]: 2026-01-20 15:12:19.231 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 4e7e9bf1-528e-4390-8d23-3ab48889e23c] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 15:12:19 compute-1 nova_compute[225855]: 2026-01-20 15:12:19.245 225859 INFO nova.compute.manager [None req-c45cd0ee-36ca-4caa-8e42-97cd4c7dc0e0 a73d4f13c0bf4d1c9497cd04e5db6724 8550ab6f7bdb4d9faa423c65e76a6818 - - default default] [instance: 4e7e9bf1-528e-4390-8d23-3ab48889e23c] Took 3.45 seconds to spawn the instance on the hypervisor.
Jan 20 15:12:19 compute-1 nova_compute[225855]: 2026-01-20 15:12:19.246 225859 DEBUG nova.compute.manager [None req-c45cd0ee-36ca-4caa-8e42-97cd4c7dc0e0 a73d4f13c0bf4d1c9497cd04e5db6724 8550ab6f7bdb4d9faa423c65e76a6818 - - default default] [instance: 4e7e9bf1-528e-4390-8d23-3ab48889e23c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:12:19 compute-1 nova_compute[225855]: 2026-01-20 15:12:19.260 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 4e7e9bf1-528e-4390-8d23-3ab48889e23c] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 15:12:19 compute-1 nova_compute[225855]: 2026-01-20 15:12:19.333 225859 INFO nova.compute.manager [None req-c45cd0ee-36ca-4caa-8e42-97cd4c7dc0e0 a73d4f13c0bf4d1c9497cd04e5db6724 8550ab6f7bdb4d9faa423c65e76a6818 - - default default] [instance: 4e7e9bf1-528e-4390-8d23-3ab48889e23c] Took 4.61 seconds to build instance.
Jan 20 15:12:19 compute-1 nova_compute[225855]: 2026-01-20 15:12:19.394 225859 DEBUG oslo_concurrency.lockutils [None req-c45cd0ee-36ca-4caa-8e42-97cd4c7dc0e0 a73d4f13c0bf4d1c9497cd04e5db6724 8550ab6f7bdb4d9faa423c65e76a6818 - - default default] Lock "4e7e9bf1-528e-4390-8d23-3ab48889e23c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 4.856s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:12:19 compute-1 sudo[300342]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:12:20 compute-1 sudo[300342]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:12:20 compute-1 sudo[300342]: pam_unix(sudo:session): session closed for user root
Jan 20 15:12:20 compute-1 sudo[300367]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:12:20 compute-1 sudo[300367]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:12:20 compute-1 sudo[300367]: pam_unix(sudo:session): session closed for user root
Jan 20 15:12:20 compute-1 nova_compute[225855]: 2026-01-20 15:12:20.090 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:12:20 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:12:20 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:12:20 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:12:20.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:12:20 compute-1 nova_compute[225855]: 2026-01-20 15:12:20.830 225859 INFO nova.compute.manager [None req-ae4beab2-80e3-49ac-a455-5af63bdede94 a73d4f13c0bf4d1c9497cd04e5db6724 8550ab6f7bdb4d9faa423c65e76a6818 - - default default] [instance: 4e7e9bf1-528e-4390-8d23-3ab48889e23c] Rebuilding instance
Jan 20 15:12:21 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:12:21 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:12:21 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:12:21.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:12:21 compute-1 nova_compute[225855]: 2026-01-20 15:12:21.253 225859 DEBUG nova.objects.instance [None req-ae4beab2-80e3-49ac-a455-5af63bdede94 a73d4f13c0bf4d1c9497cd04e5db6724 8550ab6f7bdb4d9faa423c65e76a6818 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 4e7e9bf1-528e-4390-8d23-3ab48889e23c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:12:21 compute-1 nova_compute[225855]: 2026-01-20 15:12:21.289 225859 DEBUG nova.compute.manager [None req-ae4beab2-80e3-49ac-a455-5af63bdede94 a73d4f13c0bf4d1c9497cd04e5db6724 8550ab6f7bdb4d9faa423c65e76a6818 - - default default] [instance: 4e7e9bf1-528e-4390-8d23-3ab48889e23c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:12:21 compute-1 nova_compute[225855]: 2026-01-20 15:12:21.428 225859 DEBUG nova.objects.instance [None req-ae4beab2-80e3-49ac-a455-5af63bdede94 a73d4f13c0bf4d1c9497cd04e5db6724 8550ab6f7bdb4d9faa423c65e76a6818 - - default default] Lazy-loading 'pci_requests' on Instance uuid 4e7e9bf1-528e-4390-8d23-3ab48889e23c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:12:21 compute-1 nova_compute[225855]: 2026-01-20 15:12:21.446 225859 DEBUG nova.objects.instance [None req-ae4beab2-80e3-49ac-a455-5af63bdede94 a73d4f13c0bf4d1c9497cd04e5db6724 8550ab6f7bdb4d9faa423c65e76a6818 - - default default] Lazy-loading 'pci_devices' on Instance uuid 4e7e9bf1-528e-4390-8d23-3ab48889e23c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:12:21 compute-1 nova_compute[225855]: 2026-01-20 15:12:21.467 225859 DEBUG nova.objects.instance [None req-ae4beab2-80e3-49ac-a455-5af63bdede94 a73d4f13c0bf4d1c9497cd04e5db6724 8550ab6f7bdb4d9faa423c65e76a6818 - - default default] Lazy-loading 'resources' on Instance uuid 4e7e9bf1-528e-4390-8d23-3ab48889e23c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:12:21 compute-1 nova_compute[225855]: 2026-01-20 15:12:21.483 225859 DEBUG nova.objects.instance [None req-ae4beab2-80e3-49ac-a455-5af63bdede94 a73d4f13c0bf4d1c9497cd04e5db6724 8550ab6f7bdb4d9faa423c65e76a6818 - - default default] Lazy-loading 'migration_context' on Instance uuid 4e7e9bf1-528e-4390-8d23-3ab48889e23c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:12:21 compute-1 nova_compute[225855]: 2026-01-20 15:12:21.497 225859 DEBUG nova.objects.instance [None req-ae4beab2-80e3-49ac-a455-5af63bdede94 a73d4f13c0bf4d1c9497cd04e5db6724 8550ab6f7bdb4d9faa423c65e76a6818 - - default default] [instance: 4e7e9bf1-528e-4390-8d23-3ab48889e23c] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Jan 20 15:12:21 compute-1 nova_compute[225855]: 2026-01-20 15:12:21.506 225859 DEBUG nova.virt.libvirt.driver [None req-ae4beab2-80e3-49ac-a455-5af63bdede94 a73d4f13c0bf4d1c9497cd04e5db6724 8550ab6f7bdb4d9faa423c65e76a6818 - - default default] [instance: 4e7e9bf1-528e-4390-8d23-3ab48889e23c] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Jan 20 15:12:21 compute-1 ceph-mon[81775]: pgmap v2679: 321 pgs: 321 active+clean; 322 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 4.1 MiB/s wr, 190 op/s
Jan 20 15:12:21 compute-1 ceph-mon[81775]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #127. Immutable memtables: 0.
Jan 20 15:12:21 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:12:21.595084) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 20 15:12:21 compute-1 ceph-mon[81775]: rocksdb: [db/flush_job.cc:856] [default] [JOB 79] Flushing memtable with next log file: 127
Jan 20 15:12:21 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768921941595135, "job": 79, "event": "flush_started", "num_memtables": 1, "num_entries": 1634, "num_deletes": 257, "total_data_size": 3389258, "memory_usage": 3443840, "flush_reason": "Manual Compaction"}
Jan 20 15:12:21 compute-1 ceph-mon[81775]: rocksdb: [db/flush_job.cc:885] [default] [JOB 79] Level-0 flush table #128: started
Jan 20 15:12:21 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768921941611108, "cf_name": "default", "job": 79, "event": "table_file_creation", "file_number": 128, "file_size": 2233908, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 63436, "largest_seqno": 65065, "table_properties": {"data_size": 2227138, "index_size": 3776, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1925, "raw_key_size": 15019, "raw_average_key_size": 20, "raw_value_size": 2213210, "raw_average_value_size": 2970, "num_data_blocks": 166, "num_entries": 745, "num_filter_entries": 745, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768921818, "oldest_key_time": 1768921818, "file_creation_time": 1768921941, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1539d774-8a6f-4e48-b253-137c44586344", "db_session_id": "LFF7G2OZDOU7TKQ8MKAH", "orig_file_number": 128, "seqno_to_time_mapping": "N/A"}}
Jan 20 15:12:21 compute-1 ceph-mon[81775]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 79] Flush lasted 16054 microseconds, and 5042 cpu microseconds.
Jan 20 15:12:21 compute-1 ceph-mon[81775]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 15:12:21 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:12:21.611136) [db/flush_job.cc:967] [default] [JOB 79] Level-0 flush table #128: 2233908 bytes OK
Jan 20 15:12:21 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:12:21.611152) [db/memtable_list.cc:519] [default] Level-0 commit table #128 started
Jan 20 15:12:21 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:12:21.612403) [db/memtable_list.cc:722] [default] Level-0 commit table #128: memtable #1 done
Jan 20 15:12:21 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:12:21.612412) EVENT_LOG_v1 {"time_micros": 1768921941612409, "job": 79, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 20 15:12:21 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:12:21.612426) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 20 15:12:21 compute-1 ceph-mon[81775]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 79] Try to delete WAL files size 3381691, prev total WAL file size 3381691, number of live WAL files 2.
Jan 20 15:12:21 compute-1 ceph-mon[81775]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000124.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 15:12:21 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:12:21.613166) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032323730' seq:72057594037927935, type:22 .. '6C6F676D0032353231' seq:0, type:0; will stop at (end)
Jan 20 15:12:21 compute-1 ceph-mon[81775]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 80] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 20 15:12:21 compute-1 ceph-mon[81775]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 79 Base level 0, inputs: [128(2181KB)], [126(11MB)]
Jan 20 15:12:21 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768921941613246, "job": 80, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [128], "files_L6": [126], "score": -1, "input_data_size": 14489423, "oldest_snapshot_seqno": -1}
Jan 20 15:12:21 compute-1 ceph-mon[81775]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 80] Generated table #129: 9002 keys, 14341651 bytes, temperature: kUnknown
Jan 20 15:12:21 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768921941789056, "cf_name": "default", "job": 80, "event": "table_file_creation", "file_number": 129, "file_size": 14341651, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14279201, "index_size": 38849, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 22533, "raw_key_size": 235900, "raw_average_key_size": 26, "raw_value_size": 14116626, "raw_average_value_size": 1568, "num_data_blocks": 1503, "num_entries": 9002, "num_filter_entries": 9002, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768917474, "oldest_key_time": 0, "file_creation_time": 1768921941, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1539d774-8a6f-4e48-b253-137c44586344", "db_session_id": "LFF7G2OZDOU7TKQ8MKAH", "orig_file_number": 129, "seqno_to_time_mapping": "N/A"}}
Jan 20 15:12:21 compute-1 ceph-mon[81775]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 15:12:21 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:12:21.789380) [db/compaction/compaction_job.cc:1663] [default] [JOB 80] Compacted 1@0 + 1@6 files to L6 => 14341651 bytes
Jan 20 15:12:21 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:12:21.791032) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 82.4 rd, 81.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.1, 11.7 +0.0 blob) out(13.7 +0.0 blob), read-write-amplify(12.9) write-amplify(6.4) OK, records in: 9533, records dropped: 531 output_compression: NoCompression
Jan 20 15:12:21 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:12:21.791077) EVENT_LOG_v1 {"time_micros": 1768921941791060, "job": 80, "event": "compaction_finished", "compaction_time_micros": 175919, "compaction_time_cpu_micros": 56538, "output_level": 6, "num_output_files": 1, "total_output_size": 14341651, "num_input_records": 9533, "num_output_records": 9002, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 20 15:12:21 compute-1 ceph-mon[81775]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000128.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 15:12:21 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768921941791800, "job": 80, "event": "table_file_deletion", "file_number": 128}
Jan 20 15:12:21 compute-1 ceph-mon[81775]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000126.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 15:12:21 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768921941794314, "job": 80, "event": "table_file_deletion", "file_number": 126}
Jan 20 15:12:21 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:12:21.613066) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:12:21 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:12:21.794454) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:12:21 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:12:21.794459) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:12:21 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:12:21.794461) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:12:21 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:12:21.794462) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:12:21 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:12:21.794463) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:12:22 compute-1 podman[300393]: 2026-01-20 15:12:22.026075481 +0000 UTC m=+0.068127164 container health_status 533cd8e0c3b8b7f910f15cef49e7f879ab46c12ce646a35628c15f13725eb500 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible)
Jan 20 15:12:22 compute-1 nova_compute[225855]: 2026-01-20 15:12:22.727 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:12:22 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:12:22 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:12:22 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:12:22.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:12:23 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:12:23 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:12:23 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:12:23.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:12:23 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e392 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:12:23 compute-1 ceph-mon[81775]: pgmap v2680: 321 pgs: 321 active+clean; 340 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 4.4 MiB/s wr, 200 op/s
Jan 20 15:12:24 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:12:24 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:12:24 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:12:24.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:12:25 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:12:25 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:12:25 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:12:25.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:12:25 compute-1 nova_compute[225855]: 2026-01-20 15:12:25.094 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:12:25 compute-1 ceph-mon[81775]: pgmap v2681: 321 pgs: 321 active+clean; 343 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 4.5 MiB/s wr, 204 op/s
Jan 20 15:12:25 compute-1 sudo[300415]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:12:25 compute-1 sudo[300415]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:12:25 compute-1 sudo[300415]: pam_unix(sudo:session): session closed for user root
Jan 20 15:12:25 compute-1 sudo[300440]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:12:25 compute-1 sudo[300440]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:12:25 compute-1 sudo[300440]: pam_unix(sudo:session): session closed for user root
Jan 20 15:12:25 compute-1 sudo[300465]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:12:25 compute-1 sudo[300465]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:12:25 compute-1 sudo[300465]: pam_unix(sudo:session): session closed for user root
Jan 20 15:12:25 compute-1 sudo[300490]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Jan 20 15:12:25 compute-1 sudo[300490]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:12:26 compute-1 podman[300587]: 2026-01-20 15:12:26.496974806 +0000 UTC m=+0.068564906 container exec 718ebba7a543e42aad7051248d2c7dc014068c35c89c5b87f27b82d4de39c009 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-crash-compute-1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 15:12:26 compute-1 podman[300587]: 2026-01-20 15:12:26.617212498 +0000 UTC m=+0.188802598 container exec_died 718ebba7a543e42aad7051248d2c7dc014068c35c89c5b87f27b82d4de39c009 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-crash-compute-1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Jan 20 15:12:26 compute-1 ceph-mon[81775]: pgmap v2682: 321 pgs: 321 active+clean; 344 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 4.5 MiB/s wr, 224 op/s
Jan 20 15:12:26 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:12:26 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:12:26 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:12:26.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:12:27 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:12:27 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:12:27 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:12:27.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:12:27 compute-1 podman[300741]: 2026-01-20 15:12:27.220725832 +0000 UTC m=+0.051941665 container exec 25e2c3387bc15944c21038272559da5fbf75910d8dd4add0faa995fb4e0f7788 (image=quay.io/ceph/haproxy:2.3, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-haproxy-rgw-default-compute-1-uyeocq)
Jan 20 15:12:27 compute-1 podman[300741]: 2026-01-20 15:12:27.233186005 +0000 UTC m=+0.064401838 container exec_died 25e2c3387bc15944c21038272559da5fbf75910d8dd4add0faa995fb4e0f7788 (image=quay.io/ceph/haproxy:2.3, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-haproxy-rgw-default-compute-1-uyeocq)
Jan 20 15:12:27 compute-1 podman[300802]: 2026-01-20 15:12:27.478216207 +0000 UTC m=+0.059670534 container exec e27b69e4cc956b06482c80498336e112a56122514cd7345d3d4b39a4d206f962 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-keepalived-rgw-default-compute-1-cevitz, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, release=1793, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.component=keepalived-container, architecture=x86_64, io.buildah.version=1.28.2, vcs-type=git, version=2.2.4, description=keepalived for Ceph, name=keepalived, vendor=Red Hat, Inc., io.openshift.tags=Ceph keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, summary=Provides keepalived on RHEL 9 for Ceph., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Keepalived on RHEL 9, io.openshift.expose-services=, build-date=2023-02-22T09:23:20)
Jan 20 15:12:27 compute-1 podman[300802]: 2026-01-20 15:12:27.489188668 +0000 UTC m=+0.070642995 container exec_died e27b69e4cc956b06482c80498336e112a56122514cd7345d3d4b39a4d206f962 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-keepalived-rgw-default-compute-1-cevitz, description=keepalived for Ceph, build-date=2023-02-22T09:23:20, com.redhat.component=keepalived-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.k8s.display-name=Keepalived on RHEL 9, version=2.2.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, name=keepalived, io.buildah.version=1.28.2, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.openshift.tags=Ceph keepalived, release=1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, summary=Provides keepalived on RHEL 9 for Ceph., vcs-type=git, vendor=Red Hat, Inc.)
Jan 20 15:12:27 compute-1 sudo[300490]: pam_unix(sudo:session): session closed for user root
Jan 20 15:12:27 compute-1 sudo[300834]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:12:27 compute-1 sudo[300834]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:12:27 compute-1 sudo[300834]: pam_unix(sudo:session): session closed for user root
Jan 20 15:12:27 compute-1 sudo[300859]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:12:27 compute-1 sudo[300859]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:12:27 compute-1 sudo[300859]: pam_unix(sudo:session): session closed for user root
Jan 20 15:12:27 compute-1 nova_compute[225855]: 2026-01-20 15:12:27.729 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:12:27 compute-1 sudo[300885]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:12:27 compute-1 sudo[300885]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:12:27 compute-1 sudo[300885]: pam_unix(sudo:session): session closed for user root
Jan 20 15:12:27 compute-1 sudo[300910]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 20 15:12:27 compute-1 sudo[300910]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:12:28 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:12:28 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:12:28 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:12:28 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:12:28 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:12:28 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:12:28 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e392 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:12:28 compute-1 sudo[300910]: pam_unix(sudo:session): session closed for user root
Jan 20 15:12:28 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:12:28 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:12:28 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:12:28.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:12:29 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:12:29 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 15:12:29 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:12:29 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 15:12:29 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 15:12:29 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:12:29 compute-1 ceph-mon[81775]: pgmap v2683: 321 pgs: 321 active+clean; 344 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 2.9 MiB/s wr, 198 op/s
Jan 20 15:12:29 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:12:29 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:12:29 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:12:29.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:12:30 compute-1 nova_compute[225855]: 2026-01-20 15:12:30.098 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:12:30 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:12:30 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:12:30 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:12:30.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:12:31 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:12:31 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:12:31 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:12:31.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:12:31 compute-1 ceph-mon[81775]: pgmap v2684: 321 pgs: 321 active+clean; 344 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 2.9 MiB/s wr, 199 op/s
Jan 20 15:12:31 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/1500870900' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:12:31 compute-1 nova_compute[225855]: 2026-01-20 15:12:31.556 225859 DEBUG nova.virt.libvirt.driver [None req-ae4beab2-80e3-49ac-a455-5af63bdede94 a73d4f13c0bf4d1c9497cd04e5db6724 8550ab6f7bdb4d9faa423c65e76a6818 - - default default] [instance: 4e7e9bf1-528e-4390-8d23-3ab48889e23c] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Jan 20 15:12:32 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:12:32 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:12:32 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:12:32.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:12:32 compute-1 nova_compute[225855]: 2026-01-20 15:12:32.756 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:12:33 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:12:33 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:12:33 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:12:33.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:12:33 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:12:33.316 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=58, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '12:bb:42', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '06:92:24:f7:15:56'}, ipsec=False) old=SB_Global(nb_cfg=57) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:12:33 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:12:33.317 140354 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 20 15:12:33 compute-1 nova_compute[225855]: 2026-01-20 15:12:33.318 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:12:33 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e392 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:12:33 compute-1 ceph-mon[81775]: pgmap v2685: 321 pgs: 321 active+clean; 348 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 1.1 MiB/s wr, 91 op/s
Jan 20 15:12:34 compute-1 systemd[1]: machine-qemu\x2d94\x2dinstance\x2d000000b4.scope: Deactivated successfully.
Jan 20 15:12:34 compute-1 systemd[1]: machine-qemu\x2d94\x2dinstance\x2d000000b4.scope: Consumed 12.581s CPU time.
Jan 20 15:12:34 compute-1 systemd-machined[194361]: Machine qemu-94-instance-000000b4 terminated.
Jan 20 15:12:34 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:12:34.320 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5ffd4ac3-9266-4927-98ad-20a17782c725, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '58'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:12:34 compute-1 nova_compute[225855]: 2026-01-20 15:12:34.572 225859 INFO nova.virt.libvirt.driver [None req-ae4beab2-80e3-49ac-a455-5af63bdede94 a73d4f13c0bf4d1c9497cd04e5db6724 8550ab6f7bdb4d9faa423c65e76a6818 - - default default] [instance: 4e7e9bf1-528e-4390-8d23-3ab48889e23c] Instance shutdown successfully after 13 seconds.
Jan 20 15:12:34 compute-1 nova_compute[225855]: 2026-01-20 15:12:34.578 225859 INFO nova.virt.libvirt.driver [-] [instance: 4e7e9bf1-528e-4390-8d23-3ab48889e23c] Instance destroyed successfully.
Jan 20 15:12:34 compute-1 nova_compute[225855]: 2026-01-20 15:12:34.582 225859 INFO nova.virt.libvirt.driver [-] [instance: 4e7e9bf1-528e-4390-8d23-3ab48889e23c] Instance destroyed successfully.
Jan 20 15:12:34 compute-1 sudo[300971]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:12:34 compute-1 sudo[300971]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:12:34 compute-1 sudo[300971]: pam_unix(sudo:session): session closed for user root
Jan 20 15:12:34 compute-1 sudo[301012]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 15:12:34 compute-1 sudo[301012]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:12:34 compute-1 sudo[301012]: pam_unix(sudo:session): session closed for user root
Jan 20 15:12:34 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:12:34 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:12:34 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:12:34.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:12:35 compute-1 nova_compute[225855]: 2026-01-20 15:12:35.101 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:12:35 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:12:35 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:12:35 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:12:35.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:12:35 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:12:35 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:12:35 compute-1 ceph-mon[81775]: pgmap v2686: 321 pgs: 321 active+clean; 351 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 936 KiB/s wr, 60 op/s
Jan 20 15:12:35 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/1655711995' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:12:36 compute-1 nova_compute[225855]: 2026-01-20 15:12:36.194 225859 INFO nova.virt.libvirt.driver [None req-ae4beab2-80e3-49ac-a455-5af63bdede94 a73d4f13c0bf4d1c9497cd04e5db6724 8550ab6f7bdb4d9faa423c65e76a6818 - - default default] [instance: 4e7e9bf1-528e-4390-8d23-3ab48889e23c] Deleting instance files /var/lib/nova/instances/4e7e9bf1-528e-4390-8d23-3ab48889e23c_del
Jan 20 15:12:36 compute-1 nova_compute[225855]: 2026-01-20 15:12:36.195 225859 INFO nova.virt.libvirt.driver [None req-ae4beab2-80e3-49ac-a455-5af63bdede94 a73d4f13c0bf4d1c9497cd04e5db6724 8550ab6f7bdb4d9faa423c65e76a6818 - - default default] [instance: 4e7e9bf1-528e-4390-8d23-3ab48889e23c] Deletion of /var/lib/nova/instances/4e7e9bf1-528e-4390-8d23-3ab48889e23c_del complete
Jan 20 15:12:36 compute-1 nova_compute[225855]: 2026-01-20 15:12:36.439 225859 DEBUG nova.virt.libvirt.driver [None req-ae4beab2-80e3-49ac-a455-5af63bdede94 a73d4f13c0bf4d1c9497cd04e5db6724 8550ab6f7bdb4d9faa423c65e76a6818 - - default default] [instance: 4e7e9bf1-528e-4390-8d23-3ab48889e23c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 20 15:12:36 compute-1 nova_compute[225855]: 2026-01-20 15:12:36.439 225859 INFO nova.virt.libvirt.driver [None req-ae4beab2-80e3-49ac-a455-5af63bdede94 a73d4f13c0bf4d1c9497cd04e5db6724 8550ab6f7bdb4d9faa423c65e76a6818 - - default default] [instance: 4e7e9bf1-528e-4390-8d23-3ab48889e23c] Creating image(s)
Jan 20 15:12:36 compute-1 nova_compute[225855]: 2026-01-20 15:12:36.473 225859 DEBUG nova.storage.rbd_utils [None req-ae4beab2-80e3-49ac-a455-5af63bdede94 a73d4f13c0bf4d1c9497cd04e5db6724 8550ab6f7bdb4d9faa423c65e76a6818 - - default default] rbd image 4e7e9bf1-528e-4390-8d23-3ab48889e23c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:12:36 compute-1 nova_compute[225855]: 2026-01-20 15:12:36.510 225859 DEBUG nova.storage.rbd_utils [None req-ae4beab2-80e3-49ac-a455-5af63bdede94 a73d4f13c0bf4d1c9497cd04e5db6724 8550ab6f7bdb4d9faa423c65e76a6818 - - default default] rbd image 4e7e9bf1-528e-4390-8d23-3ab48889e23c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:12:36 compute-1 nova_compute[225855]: 2026-01-20 15:12:36.544 225859 DEBUG nova.storage.rbd_utils [None req-ae4beab2-80e3-49ac-a455-5af63bdede94 a73d4f13c0bf4d1c9497cd04e5db6724 8550ab6f7bdb4d9faa423c65e76a6818 - - default default] rbd image 4e7e9bf1-528e-4390-8d23-3ab48889e23c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:12:36 compute-1 nova_compute[225855]: 2026-01-20 15:12:36.549 225859 DEBUG oslo_concurrency.processutils [None req-ae4beab2-80e3-49ac-a455-5af63bdede94 a73d4f13c0bf4d1c9497cd04e5db6724 8550ab6f7bdb4d9faa423c65e76a6818 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a4ed0d2b98aa460c005e878d78a49ccb6f511f7c --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:12:36 compute-1 nova_compute[225855]: 2026-01-20 15:12:36.650 225859 DEBUG oslo_concurrency.processutils [None req-ae4beab2-80e3-49ac-a455-5af63bdede94 a73d4f13c0bf4d1c9497cd04e5db6724 8550ab6f7bdb4d9faa423c65e76a6818 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a4ed0d2b98aa460c005e878d78a49ccb6f511f7c --force-share --output=json" returned: 0 in 0.101s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:12:36 compute-1 nova_compute[225855]: 2026-01-20 15:12:36.652 225859 DEBUG oslo_concurrency.lockutils [None req-ae4beab2-80e3-49ac-a455-5af63bdede94 a73d4f13c0bf4d1c9497cd04e5db6724 8550ab6f7bdb4d9faa423c65e76a6818 - - default default] Acquiring lock "a4ed0d2b98aa460c005e878d78a49ccb6f511f7c" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:12:36 compute-1 nova_compute[225855]: 2026-01-20 15:12:36.653 225859 DEBUG oslo_concurrency.lockutils [None req-ae4beab2-80e3-49ac-a455-5af63bdede94 a73d4f13c0bf4d1c9497cd04e5db6724 8550ab6f7bdb4d9faa423c65e76a6818 - - default default] Lock "a4ed0d2b98aa460c005e878d78a49ccb6f511f7c" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:12:36 compute-1 nova_compute[225855]: 2026-01-20 15:12:36.654 225859 DEBUG oslo_concurrency.lockutils [None req-ae4beab2-80e3-49ac-a455-5af63bdede94 a73d4f13c0bf4d1c9497cd04e5db6724 8550ab6f7bdb4d9faa423c65e76a6818 - - default default] Lock "a4ed0d2b98aa460c005e878d78a49ccb6f511f7c" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:12:36 compute-1 nova_compute[225855]: 2026-01-20 15:12:36.691 225859 DEBUG nova.storage.rbd_utils [None req-ae4beab2-80e3-49ac-a455-5af63bdede94 a73d4f13c0bf4d1c9497cd04e5db6724 8550ab6f7bdb4d9faa423c65e76a6818 - - default default] rbd image 4e7e9bf1-528e-4390-8d23-3ab48889e23c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:12:36 compute-1 nova_compute[225855]: 2026-01-20 15:12:36.695 225859 DEBUG oslo_concurrency.processutils [None req-ae4beab2-80e3-49ac-a455-5af63bdede94 a73d4f13c0bf4d1c9497cd04e5db6724 8550ab6f7bdb4d9faa423c65e76a6818 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a4ed0d2b98aa460c005e878d78a49ccb6f511f7c 4e7e9bf1-528e-4390-8d23-3ab48889e23c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:12:36 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:12:36 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:12:36 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:12:36.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:12:36 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/2796264318' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:12:36 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/2796264318' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:12:36 compute-1 ceph-mon[81775]: pgmap v2687: 321 pgs: 321 active+clean; 376 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 925 KiB/s rd, 2.2 MiB/s wr, 124 op/s
Jan 20 15:12:37 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:12:37 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:12:37 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:12:37.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:12:37 compute-1 nova_compute[225855]: 2026-01-20 15:12:37.158 225859 DEBUG oslo_concurrency.processutils [None req-ae4beab2-80e3-49ac-a455-5af63bdede94 a73d4f13c0bf4d1c9497cd04e5db6724 8550ab6f7bdb4d9faa423c65e76a6818 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a4ed0d2b98aa460c005e878d78a49ccb6f511f7c 4e7e9bf1-528e-4390-8d23-3ab48889e23c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:12:37 compute-1 nova_compute[225855]: 2026-01-20 15:12:37.232 225859 DEBUG nova.storage.rbd_utils [None req-ae4beab2-80e3-49ac-a455-5af63bdede94 a73d4f13c0bf4d1c9497cd04e5db6724 8550ab6f7bdb4d9faa423c65e76a6818 - - default default] resizing rbd image 4e7e9bf1-528e-4390-8d23-3ab48889e23c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 20 15:12:37 compute-1 nova_compute[225855]: 2026-01-20 15:12:37.332 225859 DEBUG nova.virt.libvirt.driver [None req-ae4beab2-80e3-49ac-a455-5af63bdede94 a73d4f13c0bf4d1c9497cd04e5db6724 8550ab6f7bdb4d9faa423c65e76a6818 - - default default] [instance: 4e7e9bf1-528e-4390-8d23-3ab48889e23c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 20 15:12:37 compute-1 nova_compute[225855]: 2026-01-20 15:12:37.333 225859 DEBUG nova.virt.libvirt.driver [None req-ae4beab2-80e3-49ac-a455-5af63bdede94 a73d4f13c0bf4d1c9497cd04e5db6724 8550ab6f7bdb4d9faa423c65e76a6818 - - default default] [instance: 4e7e9bf1-528e-4390-8d23-3ab48889e23c] Ensure instance console log exists: /var/lib/nova/instances/4e7e9bf1-528e-4390-8d23-3ab48889e23c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 20 15:12:37 compute-1 nova_compute[225855]: 2026-01-20 15:12:37.334 225859 DEBUG oslo_concurrency.lockutils [None req-ae4beab2-80e3-49ac-a455-5af63bdede94 a73d4f13c0bf4d1c9497cd04e5db6724 8550ab6f7bdb4d9faa423c65e76a6818 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:12:37 compute-1 nova_compute[225855]: 2026-01-20 15:12:37.334 225859 DEBUG oslo_concurrency.lockutils [None req-ae4beab2-80e3-49ac-a455-5af63bdede94 a73d4f13c0bf4d1c9497cd04e5db6724 8550ab6f7bdb4d9faa423c65e76a6818 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:12:37 compute-1 nova_compute[225855]: 2026-01-20 15:12:37.335 225859 DEBUG oslo_concurrency.lockutils [None req-ae4beab2-80e3-49ac-a455-5af63bdede94 a73d4f13c0bf4d1c9497cd04e5db6724 8550ab6f7bdb4d9faa423c65e76a6818 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:12:37 compute-1 nova_compute[225855]: 2026-01-20 15:12:37.337 225859 DEBUG nova.virt.libvirt.driver [None req-ae4beab2-80e3-49ac-a455-5af63bdede94 a73d4f13c0bf4d1c9497cd04e5db6724 8550ab6f7bdb4d9faa423c65e76a6818 - - default default] [instance: 4e7e9bf1-528e-4390-8d23-3ab48889e23c] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:22:02Z,direct_url=<?>,disk_format='qcow2',id=26699514-f465-4b50-98b7-36f2cfc6a308,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:04Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'device_type': 'disk', 'encryption_options': None, 'size': 0, 'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'guest_format': None, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'image_id': 'a32b3e07-16d8-46fd-9a7b-c242c432fcf9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 20 15:12:37 compute-1 nova_compute[225855]: 2026-01-20 15:12:37.341 225859 WARNING nova.virt.libvirt.driver [None req-ae4beab2-80e3-49ac-a455-5af63bdede94 a73d4f13c0bf4d1c9497cd04e5db6724 8550ab6f7bdb4d9faa423c65e76a6818 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError
Jan 20 15:12:37 compute-1 nova_compute[225855]: 2026-01-20 15:12:37.348 225859 DEBUG nova.virt.libvirt.host [None req-ae4beab2-80e3-49ac-a455-5af63bdede94 a73d4f13c0bf4d1c9497cd04e5db6724 8550ab6f7bdb4d9faa423c65e76a6818 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 20 15:12:37 compute-1 nova_compute[225855]: 2026-01-20 15:12:37.348 225859 DEBUG nova.virt.libvirt.host [None req-ae4beab2-80e3-49ac-a455-5af63bdede94 a73d4f13c0bf4d1c9497cd04e5db6724 8550ab6f7bdb4d9faa423c65e76a6818 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 20 15:12:37 compute-1 nova_compute[225855]: 2026-01-20 15:12:37.352 225859 DEBUG nova.virt.libvirt.host [None req-ae4beab2-80e3-49ac-a455-5af63bdede94 a73d4f13c0bf4d1c9497cd04e5db6724 8550ab6f7bdb4d9faa423c65e76a6818 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 20 15:12:37 compute-1 nova_compute[225855]: 2026-01-20 15:12:37.352 225859 DEBUG nova.virt.libvirt.host [None req-ae4beab2-80e3-49ac-a455-5af63bdede94 a73d4f13c0bf4d1c9497cd04e5db6724 8550ab6f7bdb4d9faa423c65e76a6818 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 20 15:12:37 compute-1 nova_compute[225855]: 2026-01-20 15:12:37.354 225859 DEBUG nova.virt.libvirt.driver [None req-ae4beab2-80e3-49ac-a455-5af63bdede94 a73d4f13c0bf4d1c9497cd04e5db6724 8550ab6f7bdb4d9faa423c65e76a6818 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 20 15:12:37 compute-1 nova_compute[225855]: 2026-01-20 15:12:37.354 225859 DEBUG nova.virt.hardware [None req-ae4beab2-80e3-49ac-a455-5af63bdede94 a73d4f13c0bf4d1c9497cd04e5db6724 8550ab6f7bdb4d9faa423c65e76a6818 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-20T14:21:55Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='522deaab-a741-4dbb-932d-d8b13a211c33',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:22:02Z,direct_url=<?>,disk_format='qcow2',id=26699514-f465-4b50-98b7-36f2cfc6a308,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:04Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 20 15:12:37 compute-1 nova_compute[225855]: 2026-01-20 15:12:37.355 225859 DEBUG nova.virt.hardware [None req-ae4beab2-80e3-49ac-a455-5af63bdede94 a73d4f13c0bf4d1c9497cd04e5db6724 8550ab6f7bdb4d9faa423c65e76a6818 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 20 15:12:37 compute-1 nova_compute[225855]: 2026-01-20 15:12:37.355 225859 DEBUG nova.virt.hardware [None req-ae4beab2-80e3-49ac-a455-5af63bdede94 a73d4f13c0bf4d1c9497cd04e5db6724 8550ab6f7bdb4d9faa423c65e76a6818 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 20 15:12:37 compute-1 nova_compute[225855]: 2026-01-20 15:12:37.356 225859 DEBUG nova.virt.hardware [None req-ae4beab2-80e3-49ac-a455-5af63bdede94 a73d4f13c0bf4d1c9497cd04e5db6724 8550ab6f7bdb4d9faa423c65e76a6818 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 20 15:12:37 compute-1 nova_compute[225855]: 2026-01-20 15:12:37.356 225859 DEBUG nova.virt.hardware [None req-ae4beab2-80e3-49ac-a455-5af63bdede94 a73d4f13c0bf4d1c9497cd04e5db6724 8550ab6f7bdb4d9faa423c65e76a6818 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 20 15:12:37 compute-1 nova_compute[225855]: 2026-01-20 15:12:37.356 225859 DEBUG nova.virt.hardware [None req-ae4beab2-80e3-49ac-a455-5af63bdede94 a73d4f13c0bf4d1c9497cd04e5db6724 8550ab6f7bdb4d9faa423c65e76a6818 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 20 15:12:37 compute-1 nova_compute[225855]: 2026-01-20 15:12:37.356 225859 DEBUG nova.virt.hardware [None req-ae4beab2-80e3-49ac-a455-5af63bdede94 a73d4f13c0bf4d1c9497cd04e5db6724 8550ab6f7bdb4d9faa423c65e76a6818 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 20 15:12:37 compute-1 nova_compute[225855]: 2026-01-20 15:12:37.357 225859 DEBUG nova.virt.hardware [None req-ae4beab2-80e3-49ac-a455-5af63bdede94 a73d4f13c0bf4d1c9497cd04e5db6724 8550ab6f7bdb4d9faa423c65e76a6818 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 20 15:12:37 compute-1 nova_compute[225855]: 2026-01-20 15:12:37.357 225859 DEBUG nova.virt.hardware [None req-ae4beab2-80e3-49ac-a455-5af63bdede94 a73d4f13c0bf4d1c9497cd04e5db6724 8550ab6f7bdb4d9faa423c65e76a6818 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 20 15:12:37 compute-1 nova_compute[225855]: 2026-01-20 15:12:37.357 225859 DEBUG nova.virt.hardware [None req-ae4beab2-80e3-49ac-a455-5af63bdede94 a73d4f13c0bf4d1c9497cd04e5db6724 8550ab6f7bdb4d9faa423c65e76a6818 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 20 15:12:37 compute-1 nova_compute[225855]: 2026-01-20 15:12:37.358 225859 DEBUG nova.virt.hardware [None req-ae4beab2-80e3-49ac-a455-5af63bdede94 a73d4f13c0bf4d1c9497cd04e5db6724 8550ab6f7bdb4d9faa423c65e76a6818 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 20 15:12:37 compute-1 nova_compute[225855]: 2026-01-20 15:12:37.358 225859 DEBUG nova.objects.instance [None req-ae4beab2-80e3-49ac-a455-5af63bdede94 a73d4f13c0bf4d1c9497cd04e5db6724 8550ab6f7bdb4d9faa423c65e76a6818 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 4e7e9bf1-528e-4390-8d23-3ab48889e23c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:12:37 compute-1 nova_compute[225855]: 2026-01-20 15:12:37.377 225859 DEBUG oslo_concurrency.processutils [None req-ae4beab2-80e3-49ac-a455-5af63bdede94 a73d4f13c0bf4d1c9497cd04e5db6724 8550ab6f7bdb4d9faa423c65e76a6818 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:12:37 compute-1 nova_compute[225855]: 2026-01-20 15:12:37.758 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:12:37 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 15:12:37 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2691945406' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:12:37 compute-1 nova_compute[225855]: 2026-01-20 15:12:37.806 225859 DEBUG oslo_concurrency.processutils [None req-ae4beab2-80e3-49ac-a455-5af63bdede94 a73d4f13c0bf4d1c9497cd04e5db6724 8550ab6f7bdb4d9faa423c65e76a6818 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:12:37 compute-1 nova_compute[225855]: 2026-01-20 15:12:37.834 225859 DEBUG nova.storage.rbd_utils [None req-ae4beab2-80e3-49ac-a455-5af63bdede94 a73d4f13c0bf4d1c9497cd04e5db6724 8550ab6f7bdb4d9faa423c65e76a6818 - - default default] rbd image 4e7e9bf1-528e-4390-8d23-3ab48889e23c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:12:37 compute-1 nova_compute[225855]: 2026-01-20 15:12:37.838 225859 DEBUG oslo_concurrency.processutils [None req-ae4beab2-80e3-49ac-a455-5af63bdede94 a73d4f13c0bf4d1c9497cd04e5db6724 8550ab6f7bdb4d9faa423c65e76a6818 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:12:37 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e393 e393: 3 total, 3 up, 3 in
Jan 20 15:12:37 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/2691945406' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:12:38 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 15:12:38 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1378670698' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:12:38 compute-1 nova_compute[225855]: 2026-01-20 15:12:38.298 225859 DEBUG oslo_concurrency.processutils [None req-ae4beab2-80e3-49ac-a455-5af63bdede94 a73d4f13c0bf4d1c9497cd04e5db6724 8550ab6f7bdb4d9faa423c65e76a6818 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:12:38 compute-1 nova_compute[225855]: 2026-01-20 15:12:38.301 225859 DEBUG nova.virt.libvirt.driver [None req-ae4beab2-80e3-49ac-a455-5af63bdede94 a73d4f13c0bf4d1c9497cd04e5db6724 8550ab6f7bdb4d9faa423c65e76a6818 - - default default] [instance: 4e7e9bf1-528e-4390-8d23-3ab48889e23c] End _get_guest_xml xml=<domain type="kvm">
Jan 20 15:12:38 compute-1 nova_compute[225855]:   <uuid>4e7e9bf1-528e-4390-8d23-3ab48889e23c</uuid>
Jan 20 15:12:38 compute-1 nova_compute[225855]:   <name>instance-000000b4</name>
Jan 20 15:12:38 compute-1 nova_compute[225855]:   <memory>131072</memory>
Jan 20 15:12:38 compute-1 nova_compute[225855]:   <vcpu>1</vcpu>
Jan 20 15:12:38 compute-1 nova_compute[225855]:   <metadata>
Jan 20 15:12:38 compute-1 nova_compute[225855]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 15:12:38 compute-1 nova_compute[225855]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 15:12:38 compute-1 nova_compute[225855]:       <nova:name>tempest-ServerShowV257Test-server-1886200183</nova:name>
Jan 20 15:12:38 compute-1 nova_compute[225855]:       <nova:creationTime>2026-01-20 15:12:37</nova:creationTime>
Jan 20 15:12:38 compute-1 nova_compute[225855]:       <nova:flavor name="m1.nano">
Jan 20 15:12:38 compute-1 nova_compute[225855]:         <nova:memory>128</nova:memory>
Jan 20 15:12:38 compute-1 nova_compute[225855]:         <nova:disk>1</nova:disk>
Jan 20 15:12:38 compute-1 nova_compute[225855]:         <nova:swap>0</nova:swap>
Jan 20 15:12:38 compute-1 nova_compute[225855]:         <nova:ephemeral>0</nova:ephemeral>
Jan 20 15:12:38 compute-1 nova_compute[225855]:         <nova:vcpus>1</nova:vcpus>
Jan 20 15:12:38 compute-1 nova_compute[225855]:       </nova:flavor>
Jan 20 15:12:38 compute-1 nova_compute[225855]:       <nova:owner>
Jan 20 15:12:38 compute-1 nova_compute[225855]:         <nova:user uuid="a73d4f13c0bf4d1c9497cd04e5db6724">tempest-ServerShowV257Test-1887808980-project-member</nova:user>
Jan 20 15:12:38 compute-1 nova_compute[225855]:         <nova:project uuid="8550ab6f7bdb4d9faa423c65e76a6818">tempest-ServerShowV257Test-1887808980</nova:project>
Jan 20 15:12:38 compute-1 nova_compute[225855]:       </nova:owner>
Jan 20 15:12:38 compute-1 nova_compute[225855]:       <nova:root type="image" uuid="26699514-f465-4b50-98b7-36f2cfc6a308"/>
Jan 20 15:12:38 compute-1 nova_compute[225855]:       <nova:ports/>
Jan 20 15:12:38 compute-1 nova_compute[225855]:     </nova:instance>
Jan 20 15:12:38 compute-1 nova_compute[225855]:   </metadata>
Jan 20 15:12:38 compute-1 nova_compute[225855]:   <sysinfo type="smbios">
Jan 20 15:12:38 compute-1 nova_compute[225855]:     <system>
Jan 20 15:12:38 compute-1 nova_compute[225855]:       <entry name="manufacturer">RDO</entry>
Jan 20 15:12:38 compute-1 nova_compute[225855]:       <entry name="product">OpenStack Compute</entry>
Jan 20 15:12:38 compute-1 nova_compute[225855]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 15:12:38 compute-1 nova_compute[225855]:       <entry name="serial">4e7e9bf1-528e-4390-8d23-3ab48889e23c</entry>
Jan 20 15:12:38 compute-1 nova_compute[225855]:       <entry name="uuid">4e7e9bf1-528e-4390-8d23-3ab48889e23c</entry>
Jan 20 15:12:38 compute-1 nova_compute[225855]:       <entry name="family">Virtual Machine</entry>
Jan 20 15:12:38 compute-1 nova_compute[225855]:     </system>
Jan 20 15:12:38 compute-1 nova_compute[225855]:   </sysinfo>
Jan 20 15:12:38 compute-1 nova_compute[225855]:   <os>
Jan 20 15:12:38 compute-1 nova_compute[225855]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 20 15:12:38 compute-1 nova_compute[225855]:     <boot dev="hd"/>
Jan 20 15:12:38 compute-1 nova_compute[225855]:     <smbios mode="sysinfo"/>
Jan 20 15:12:38 compute-1 nova_compute[225855]:   </os>
Jan 20 15:12:38 compute-1 nova_compute[225855]:   <features>
Jan 20 15:12:38 compute-1 nova_compute[225855]:     <acpi/>
Jan 20 15:12:38 compute-1 nova_compute[225855]:     <apic/>
Jan 20 15:12:38 compute-1 nova_compute[225855]:     <vmcoreinfo/>
Jan 20 15:12:38 compute-1 nova_compute[225855]:   </features>
Jan 20 15:12:38 compute-1 nova_compute[225855]:   <clock offset="utc">
Jan 20 15:12:38 compute-1 nova_compute[225855]:     <timer name="pit" tickpolicy="delay"/>
Jan 20 15:12:38 compute-1 nova_compute[225855]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 20 15:12:38 compute-1 nova_compute[225855]:     <timer name="hpet" present="no"/>
Jan 20 15:12:38 compute-1 nova_compute[225855]:   </clock>
Jan 20 15:12:38 compute-1 nova_compute[225855]:   <cpu mode="custom" match="exact">
Jan 20 15:12:38 compute-1 nova_compute[225855]:     <model>Nehalem</model>
Jan 20 15:12:38 compute-1 nova_compute[225855]:     <topology sockets="1" cores="1" threads="1"/>
Jan 20 15:12:38 compute-1 nova_compute[225855]:   </cpu>
Jan 20 15:12:38 compute-1 nova_compute[225855]:   <devices>
Jan 20 15:12:38 compute-1 nova_compute[225855]:     <disk type="network" device="disk">
Jan 20 15:12:38 compute-1 nova_compute[225855]:       <driver type="raw" cache="none"/>
Jan 20 15:12:38 compute-1 nova_compute[225855]:       <source protocol="rbd" name="vms/4e7e9bf1-528e-4390-8d23-3ab48889e23c_disk">
Jan 20 15:12:38 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 15:12:38 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 15:12:38 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 15:12:38 compute-1 nova_compute[225855]:       </source>
Jan 20 15:12:38 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 15:12:38 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 15:12:38 compute-1 nova_compute[225855]:       </auth>
Jan 20 15:12:38 compute-1 nova_compute[225855]:       <target dev="vda" bus="virtio"/>
Jan 20 15:12:38 compute-1 nova_compute[225855]:     </disk>
Jan 20 15:12:38 compute-1 nova_compute[225855]:     <disk type="network" device="cdrom">
Jan 20 15:12:38 compute-1 nova_compute[225855]:       <driver type="raw" cache="none"/>
Jan 20 15:12:38 compute-1 nova_compute[225855]:       <source protocol="rbd" name="vms/4e7e9bf1-528e-4390-8d23-3ab48889e23c_disk.config">
Jan 20 15:12:38 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 15:12:38 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 15:12:38 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 15:12:38 compute-1 nova_compute[225855]:       </source>
Jan 20 15:12:38 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 15:12:38 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 15:12:38 compute-1 nova_compute[225855]:       </auth>
Jan 20 15:12:38 compute-1 nova_compute[225855]:       <target dev="sda" bus="sata"/>
Jan 20 15:12:38 compute-1 nova_compute[225855]:     </disk>
Jan 20 15:12:38 compute-1 nova_compute[225855]:     <serial type="pty">
Jan 20 15:12:38 compute-1 nova_compute[225855]:       <log file="/var/lib/nova/instances/4e7e9bf1-528e-4390-8d23-3ab48889e23c/console.log" append="off"/>
Jan 20 15:12:38 compute-1 nova_compute[225855]:     </serial>
Jan 20 15:12:38 compute-1 nova_compute[225855]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 20 15:12:38 compute-1 nova_compute[225855]:     <video>
Jan 20 15:12:38 compute-1 nova_compute[225855]:       <model type="virtio"/>
Jan 20 15:12:38 compute-1 nova_compute[225855]:     </video>
Jan 20 15:12:38 compute-1 nova_compute[225855]:     <input type="tablet" bus="usb"/>
Jan 20 15:12:38 compute-1 nova_compute[225855]:     <rng model="virtio">
Jan 20 15:12:38 compute-1 nova_compute[225855]:       <backend model="random">/dev/urandom</backend>
Jan 20 15:12:38 compute-1 nova_compute[225855]:     </rng>
Jan 20 15:12:38 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root"/>
Jan 20 15:12:38 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:12:38 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:12:38 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:12:38 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:12:38 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:12:38 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:12:38 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:12:38 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:12:38 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:12:38 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:12:38 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:12:38 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:12:38 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:12:38 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:12:38 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:12:38 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:12:38 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:12:38 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:12:38 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:12:38 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:12:38 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:12:38 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:12:38 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:12:38 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:12:38 compute-1 nova_compute[225855]:     <controller type="usb" index="0"/>
Jan 20 15:12:38 compute-1 nova_compute[225855]:     <memballoon model="virtio">
Jan 20 15:12:38 compute-1 nova_compute[225855]:       <stats period="10"/>
Jan 20 15:12:38 compute-1 nova_compute[225855]:     </memballoon>
Jan 20 15:12:38 compute-1 nova_compute[225855]:   </devices>
Jan 20 15:12:38 compute-1 nova_compute[225855]: </domain>
Jan 20 15:12:38 compute-1 nova_compute[225855]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 20 15:12:38 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e393 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:12:38 compute-1 nova_compute[225855]: 2026-01-20 15:12:38.389 225859 DEBUG nova.virt.libvirt.driver [None req-ae4beab2-80e3-49ac-a455-5af63bdede94 a73d4f13c0bf4d1c9497cd04e5db6724 8550ab6f7bdb4d9faa423c65e76a6818 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 15:12:38 compute-1 nova_compute[225855]: 2026-01-20 15:12:38.390 225859 DEBUG nova.virt.libvirt.driver [None req-ae4beab2-80e3-49ac-a455-5af63bdede94 a73d4f13c0bf4d1c9497cd04e5db6724 8550ab6f7bdb4d9faa423c65e76a6818 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 15:12:38 compute-1 nova_compute[225855]: 2026-01-20 15:12:38.391 225859 INFO nova.virt.libvirt.driver [None req-ae4beab2-80e3-49ac-a455-5af63bdede94 a73d4f13c0bf4d1c9497cd04e5db6724 8550ab6f7bdb4d9faa423c65e76a6818 - - default default] [instance: 4e7e9bf1-528e-4390-8d23-3ab48889e23c] Using config drive
Jan 20 15:12:38 compute-1 nova_compute[225855]: 2026-01-20 15:12:38.421 225859 DEBUG nova.storage.rbd_utils [None req-ae4beab2-80e3-49ac-a455-5af63bdede94 a73d4f13c0bf4d1c9497cd04e5db6724 8550ab6f7bdb4d9faa423c65e76a6818 - - default default] rbd image 4e7e9bf1-528e-4390-8d23-3ab48889e23c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:12:38 compute-1 podman[301271]: 2026-01-20 15:12:38.486813219 +0000 UTC m=+0.142544105 container health_status 72be95c12e041eb2cd1a16ad70dfa1391d9f03a5f3180a614ead9459dcfcd46f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.vendor=CentOS)
Jan 20 15:12:38 compute-1 nova_compute[225855]: 2026-01-20 15:12:38.492 225859 DEBUG nova.objects.instance [None req-ae4beab2-80e3-49ac-a455-5af63bdede94 a73d4f13c0bf4d1c9497cd04e5db6724 8550ab6f7bdb4d9faa423c65e76a6818 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 4e7e9bf1-528e-4390-8d23-3ab48889e23c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:12:38 compute-1 nova_compute[225855]: 2026-01-20 15:12:38.526 225859 DEBUG nova.objects.instance [None req-ae4beab2-80e3-49ac-a455-5af63bdede94 a73d4f13c0bf4d1c9497cd04e5db6724 8550ab6f7bdb4d9faa423c65e76a6818 - - default default] Lazy-loading 'keypairs' on Instance uuid 4e7e9bf1-528e-4390-8d23-3ab48889e23c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:12:38 compute-1 nova_compute[225855]: 2026-01-20 15:12:38.736 225859 INFO nova.virt.libvirt.driver [None req-ae4beab2-80e3-49ac-a455-5af63bdede94 a73d4f13c0bf4d1c9497cd04e5db6724 8550ab6f7bdb4d9faa423c65e76a6818 - - default default] [instance: 4e7e9bf1-528e-4390-8d23-3ab48889e23c] Creating config drive at /var/lib/nova/instances/4e7e9bf1-528e-4390-8d23-3ab48889e23c/disk.config
Jan 20 15:12:38 compute-1 nova_compute[225855]: 2026-01-20 15:12:38.742 225859 DEBUG oslo_concurrency.processutils [None req-ae4beab2-80e3-49ac-a455-5af63bdede94 a73d4f13c0bf4d1c9497cd04e5db6724 8550ab6f7bdb4d9faa423c65e76a6818 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4e7e9bf1-528e-4390-8d23-3ab48889e23c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2iqg0srg execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:12:38 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:12:38 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:12:38 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:12:38.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:12:38 compute-1 nova_compute[225855]: 2026-01-20 15:12:38.875 225859 DEBUG oslo_concurrency.processutils [None req-ae4beab2-80e3-49ac-a455-5af63bdede94 a73d4f13c0bf4d1c9497cd04e5db6724 8550ab6f7bdb4d9faa423c65e76a6818 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4e7e9bf1-528e-4390-8d23-3ab48889e23c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2iqg0srg" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:12:38 compute-1 nova_compute[225855]: 2026-01-20 15:12:38.911 225859 DEBUG nova.storage.rbd_utils [None req-ae4beab2-80e3-49ac-a455-5af63bdede94 a73d4f13c0bf4d1c9497cd04e5db6724 8550ab6f7bdb4d9faa423c65e76a6818 - - default default] rbd image 4e7e9bf1-528e-4390-8d23-3ab48889e23c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:12:38 compute-1 nova_compute[225855]: 2026-01-20 15:12:38.915 225859 DEBUG oslo_concurrency.processutils [None req-ae4beab2-80e3-49ac-a455-5af63bdede94 a73d4f13c0bf4d1c9497cd04e5db6724 8550ab6f7bdb4d9faa423c65e76a6818 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/4e7e9bf1-528e-4390-8d23-3ab48889e23c/disk.config 4e7e9bf1-528e-4390-8d23-3ab48889e23c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:12:38 compute-1 ceph-mon[81775]: osdmap e393: 3 total, 3 up, 3 in
Jan 20 15:12:38 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/1378670698' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:12:38 compute-1 ceph-mon[81775]: pgmap v2689: 321 pgs: 321 active+clean; 376 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 423 KiB/s rd, 2.6 MiB/s wr, 125 op/s
Jan 20 15:12:39 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:12:39 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:12:39 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:12:39.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:12:39 compute-1 nova_compute[225855]: 2026-01-20 15:12:39.159 225859 DEBUG oslo_concurrency.processutils [None req-ae4beab2-80e3-49ac-a455-5af63bdede94 a73d4f13c0bf4d1c9497cd04e5db6724 8550ab6f7bdb4d9faa423c65e76a6818 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/4e7e9bf1-528e-4390-8d23-3ab48889e23c/disk.config 4e7e9bf1-528e-4390-8d23-3ab48889e23c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.244s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:12:39 compute-1 nova_compute[225855]: 2026-01-20 15:12:39.160 225859 INFO nova.virt.libvirt.driver [None req-ae4beab2-80e3-49ac-a455-5af63bdede94 a73d4f13c0bf4d1c9497cd04e5db6724 8550ab6f7bdb4d9faa423c65e76a6818 - - default default] [instance: 4e7e9bf1-528e-4390-8d23-3ab48889e23c] Deleting local config drive /var/lib/nova/instances/4e7e9bf1-528e-4390-8d23-3ab48889e23c/disk.config because it was imported into RBD.
Jan 20 15:12:39 compute-1 systemd-machined[194361]: New machine qemu-95-instance-000000b4.
Jan 20 15:12:39 compute-1 systemd[1]: Started Virtual Machine qemu-95-instance-000000b4.
Jan 20 15:12:40 compute-1 nova_compute[225855]: 2026-01-20 15:12:40.029 225859 DEBUG nova.virt.libvirt.host [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Removed pending event for 4e7e9bf1-528e-4390-8d23-3ab48889e23c due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Jan 20 15:12:40 compute-1 nova_compute[225855]: 2026-01-20 15:12:40.030 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768921960.028581, 4e7e9bf1-528e-4390-8d23-3ab48889e23c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:12:40 compute-1 nova_compute[225855]: 2026-01-20 15:12:40.030 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 4e7e9bf1-528e-4390-8d23-3ab48889e23c] VM Resumed (Lifecycle Event)
Jan 20 15:12:40 compute-1 nova_compute[225855]: 2026-01-20 15:12:40.033 225859 DEBUG nova.compute.manager [None req-ae4beab2-80e3-49ac-a455-5af63bdede94 a73d4f13c0bf4d1c9497cd04e5db6724 8550ab6f7bdb4d9faa423c65e76a6818 - - default default] [instance: 4e7e9bf1-528e-4390-8d23-3ab48889e23c] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 20 15:12:40 compute-1 nova_compute[225855]: 2026-01-20 15:12:40.033 225859 DEBUG nova.virt.libvirt.driver [None req-ae4beab2-80e3-49ac-a455-5af63bdede94 a73d4f13c0bf4d1c9497cd04e5db6724 8550ab6f7bdb4d9faa423c65e76a6818 - - default default] [instance: 4e7e9bf1-528e-4390-8d23-3ab48889e23c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 20 15:12:40 compute-1 nova_compute[225855]: 2026-01-20 15:12:40.036 225859 INFO nova.virt.libvirt.driver [-] [instance: 4e7e9bf1-528e-4390-8d23-3ab48889e23c] Instance spawned successfully.
Jan 20 15:12:40 compute-1 nova_compute[225855]: 2026-01-20 15:12:40.037 225859 DEBUG nova.virt.libvirt.driver [None req-ae4beab2-80e3-49ac-a455-5af63bdede94 a73d4f13c0bf4d1c9497cd04e5db6724 8550ab6f7bdb4d9faa423c65e76a6818 - - default default] [instance: 4e7e9bf1-528e-4390-8d23-3ab48889e23c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 20 15:12:40 compute-1 nova_compute[225855]: 2026-01-20 15:12:40.056 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 4e7e9bf1-528e-4390-8d23-3ab48889e23c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:12:40 compute-1 nova_compute[225855]: 2026-01-20 15:12:40.060 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 4e7e9bf1-528e-4390-8d23-3ab48889e23c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 15:12:40 compute-1 nova_compute[225855]: 2026-01-20 15:12:40.068 225859 DEBUG nova.virt.libvirt.driver [None req-ae4beab2-80e3-49ac-a455-5af63bdede94 a73d4f13c0bf4d1c9497cd04e5db6724 8550ab6f7bdb4d9faa423c65e76a6818 - - default default] [instance: 4e7e9bf1-528e-4390-8d23-3ab48889e23c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:12:40 compute-1 nova_compute[225855]: 2026-01-20 15:12:40.069 225859 DEBUG nova.virt.libvirt.driver [None req-ae4beab2-80e3-49ac-a455-5af63bdede94 a73d4f13c0bf4d1c9497cd04e5db6724 8550ab6f7bdb4d9faa423c65e76a6818 - - default default] [instance: 4e7e9bf1-528e-4390-8d23-3ab48889e23c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:12:40 compute-1 nova_compute[225855]: 2026-01-20 15:12:40.070 225859 DEBUG nova.virt.libvirt.driver [None req-ae4beab2-80e3-49ac-a455-5af63bdede94 a73d4f13c0bf4d1c9497cd04e5db6724 8550ab6f7bdb4d9faa423c65e76a6818 - - default default] [instance: 4e7e9bf1-528e-4390-8d23-3ab48889e23c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:12:40 compute-1 nova_compute[225855]: 2026-01-20 15:12:40.070 225859 DEBUG nova.virt.libvirt.driver [None req-ae4beab2-80e3-49ac-a455-5af63bdede94 a73d4f13c0bf4d1c9497cd04e5db6724 8550ab6f7bdb4d9faa423c65e76a6818 - - default default] [instance: 4e7e9bf1-528e-4390-8d23-3ab48889e23c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:12:40 compute-1 nova_compute[225855]: 2026-01-20 15:12:40.071 225859 DEBUG nova.virt.libvirt.driver [None req-ae4beab2-80e3-49ac-a455-5af63bdede94 a73d4f13c0bf4d1c9497cd04e5db6724 8550ab6f7bdb4d9faa423c65e76a6818 - - default default] [instance: 4e7e9bf1-528e-4390-8d23-3ab48889e23c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:12:40 compute-1 nova_compute[225855]: 2026-01-20 15:12:40.071 225859 DEBUG nova.virt.libvirt.driver [None req-ae4beab2-80e3-49ac-a455-5af63bdede94 a73d4f13c0bf4d1c9497cd04e5db6724 8550ab6f7bdb4d9faa423c65e76a6818 - - default default] [instance: 4e7e9bf1-528e-4390-8d23-3ab48889e23c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:12:40 compute-1 nova_compute[225855]: 2026-01-20 15:12:40.099 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 4e7e9bf1-528e-4390-8d23-3ab48889e23c] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Jan 20 15:12:40 compute-1 nova_compute[225855]: 2026-01-20 15:12:40.100 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768921960.0287015, 4e7e9bf1-528e-4390-8d23-3ab48889e23c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:12:40 compute-1 nova_compute[225855]: 2026-01-20 15:12:40.100 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 4e7e9bf1-528e-4390-8d23-3ab48889e23c] VM Started (Lifecycle Event)
Jan 20 15:12:40 compute-1 nova_compute[225855]: 2026-01-20 15:12:40.104 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:12:40 compute-1 sudo[301414]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:12:40 compute-1 sudo[301414]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:12:40 compute-1 sudo[301414]: pam_unix(sudo:session): session closed for user root
Jan 20 15:12:40 compute-1 nova_compute[225855]: 2026-01-20 15:12:40.174 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 4e7e9bf1-528e-4390-8d23-3ab48889e23c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:12:40 compute-1 nova_compute[225855]: 2026-01-20 15:12:40.178 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 4e7e9bf1-528e-4390-8d23-3ab48889e23c] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 15:12:40 compute-1 sudo[301439]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:12:40 compute-1 nova_compute[225855]: 2026-01-20 15:12:40.193 225859 DEBUG nova.compute.manager [None req-ae4beab2-80e3-49ac-a455-5af63bdede94 a73d4f13c0bf4d1c9497cd04e5db6724 8550ab6f7bdb4d9faa423c65e76a6818 - - default default] [instance: 4e7e9bf1-528e-4390-8d23-3ab48889e23c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:12:40 compute-1 sudo[301439]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:12:40 compute-1 sudo[301439]: pam_unix(sudo:session): session closed for user root
Jan 20 15:12:40 compute-1 nova_compute[225855]: 2026-01-20 15:12:40.208 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 4e7e9bf1-528e-4390-8d23-3ab48889e23c] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Jan 20 15:12:40 compute-1 nova_compute[225855]: 2026-01-20 15:12:40.283 225859 DEBUG oslo_concurrency.lockutils [None req-ae4beab2-80e3-49ac-a455-5af63bdede94 a73d4f13c0bf4d1c9497cd04e5db6724 8550ab6f7bdb4d9faa423c65e76a6818 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:12:40 compute-1 nova_compute[225855]: 2026-01-20 15:12:40.284 225859 DEBUG oslo_concurrency.lockutils [None req-ae4beab2-80e3-49ac-a455-5af63bdede94 a73d4f13c0bf4d1c9497cd04e5db6724 8550ab6f7bdb4d9faa423c65e76a6818 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:12:40 compute-1 nova_compute[225855]: 2026-01-20 15:12:40.284 225859 DEBUG nova.objects.instance [None req-ae4beab2-80e3-49ac-a455-5af63bdede94 a73d4f13c0bf4d1c9497cd04e5db6724 8550ab6f7bdb4d9faa423c65e76a6818 - - default default] [instance: 4e7e9bf1-528e-4390-8d23-3ab48889e23c] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Jan 20 15:12:40 compute-1 nova_compute[225855]: 2026-01-20 15:12:40.413 225859 DEBUG oslo_concurrency.lockutils [None req-ae4beab2-80e3-49ac-a455-5af63bdede94 a73d4f13c0bf4d1c9497cd04e5db6724 8550ab6f7bdb4d9faa423c65e76a6818 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.129s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:12:40 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:12:40 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:12:40 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:12:40.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:12:41 compute-1 ceph-mon[81775]: pgmap v2690: 321 pgs: 321 active+clean; 339 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 481 KiB/s rd, 4.1 MiB/s wr, 209 op/s
Jan 20 15:12:41 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:12:41 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:12:41 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:12:41.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:12:41 compute-1 ceph-mon[81775]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #130. Immutable memtables: 0.
Jan 20 15:12:41 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:12:41.786437) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 20 15:12:41 compute-1 ceph-mon[81775]: rocksdb: [db/flush_job.cc:856] [default] [JOB 81] Flushing memtable with next log file: 130
Jan 20 15:12:41 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768921961786473, "job": 81, "event": "flush_started", "num_memtables": 1, "num_entries": 540, "num_deletes": 252, "total_data_size": 825110, "memory_usage": 836160, "flush_reason": "Manual Compaction"}
Jan 20 15:12:41 compute-1 ceph-mon[81775]: rocksdb: [db/flush_job.cc:885] [default] [JOB 81] Level-0 flush table #131: started
Jan 20 15:12:41 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768921961792700, "cf_name": "default", "job": 81, "event": "table_file_creation", "file_number": 131, "file_size": 472423, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 65070, "largest_seqno": 65605, "table_properties": {"data_size": 469520, "index_size": 874, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 965, "raw_key_size": 7759, "raw_average_key_size": 21, "raw_value_size": 463546, "raw_average_value_size": 1266, "num_data_blocks": 36, "num_entries": 366, "num_filter_entries": 366, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768921942, "oldest_key_time": 1768921942, "file_creation_time": 1768921961, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1539d774-8a6f-4e48-b253-137c44586344", "db_session_id": "LFF7G2OZDOU7TKQ8MKAH", "orig_file_number": 131, "seqno_to_time_mapping": "N/A"}}
Jan 20 15:12:41 compute-1 ceph-mon[81775]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 81] Flush lasted 6297 microseconds, and 2215 cpu microseconds.
Jan 20 15:12:41 compute-1 ceph-mon[81775]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 15:12:41 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:12:41.792732) [db/flush_job.cc:967] [default] [JOB 81] Level-0 flush table #131: 472423 bytes OK
Jan 20 15:12:41 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:12:41.792749) [db/memtable_list.cc:519] [default] Level-0 commit table #131 started
Jan 20 15:12:41 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:12:41.794838) [db/memtable_list.cc:722] [default] Level-0 commit table #131: memtable #1 done
Jan 20 15:12:41 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:12:41.794897) EVENT_LOG_v1 {"time_micros": 1768921961794888, "job": 81, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 20 15:12:41 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:12:41.794922) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 20 15:12:41 compute-1 ceph-mon[81775]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 81] Try to delete WAL files size 821918, prev total WAL file size 821918, number of live WAL files 2.
Jan 20 15:12:41 compute-1 ceph-mon[81775]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000127.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 15:12:41 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:12:41.795422) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032303035' seq:72057594037927935, type:22 .. '6D6772737461740032323538' seq:0, type:0; will stop at (end)
Jan 20 15:12:41 compute-1 ceph-mon[81775]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 82] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 20 15:12:41 compute-1 ceph-mon[81775]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 81 Base level 0, inputs: [131(461KB)], [129(13MB)]
Jan 20 15:12:41 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768921961795459, "job": 82, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [131], "files_L6": [129], "score": -1, "input_data_size": 14814074, "oldest_snapshot_seqno": -1}
Jan 20 15:12:41 compute-1 ceph-mon[81775]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 82] Generated table #132: 8848 keys, 10986658 bytes, temperature: kUnknown
Jan 20 15:12:41 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768921961877963, "cf_name": "default", "job": 82, "event": "table_file_creation", "file_number": 132, "file_size": 10986658, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10929787, "index_size": 33654, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 22149, "raw_key_size": 232931, "raw_average_key_size": 26, "raw_value_size": 10774459, "raw_average_value_size": 1217, "num_data_blocks": 1287, "num_entries": 8848, "num_filter_entries": 8848, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768917474, "oldest_key_time": 0, "file_creation_time": 1768921961, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1539d774-8a6f-4e48-b253-137c44586344", "db_session_id": "LFF7G2OZDOU7TKQ8MKAH", "orig_file_number": 132, "seqno_to_time_mapping": "N/A"}}
Jan 20 15:12:41 compute-1 ceph-mon[81775]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 15:12:41 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:12:41.878274) [db/compaction/compaction_job.cc:1663] [default] [JOB 82] Compacted 1@0 + 1@6 files to L6 => 10986658 bytes
Jan 20 15:12:41 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:12:41.879769) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 179.3 rd, 133.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.5, 13.7 +0.0 blob) out(10.5 +0.0 blob), read-write-amplify(54.6) write-amplify(23.3) OK, records in: 9368, records dropped: 520 output_compression: NoCompression
Jan 20 15:12:41 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:12:41.879789) EVENT_LOG_v1 {"time_micros": 1768921961879780, "job": 82, "event": "compaction_finished", "compaction_time_micros": 82615, "compaction_time_cpu_micros": 28191, "output_level": 6, "num_output_files": 1, "total_output_size": 10986658, "num_input_records": 9368, "num_output_records": 8848, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 20 15:12:41 compute-1 ceph-mon[81775]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000131.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 15:12:41 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768921961880014, "job": 82, "event": "table_file_deletion", "file_number": 131}
Jan 20 15:12:41 compute-1 ceph-mon[81775]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000129.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 15:12:41 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768921961882679, "job": 82, "event": "table_file_deletion", "file_number": 129}
Jan 20 15:12:41 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:12:41.795347) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:12:41 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:12:41.882776) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:12:41 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:12:41.882782) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:12:41 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:12:41.882785) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:12:41 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:12:41.882787) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:12:41 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:12:41.882789) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:12:42 compute-1 nova_compute[225855]: 2026-01-20 15:12:42.227 225859 DEBUG oslo_concurrency.lockutils [None req-5048e29f-dd5f-45f2-b955-36cf3538c28a a73d4f13c0bf4d1c9497cd04e5db6724 8550ab6f7bdb4d9faa423c65e76a6818 - - default default] Acquiring lock "4e7e9bf1-528e-4390-8d23-3ab48889e23c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:12:42 compute-1 nova_compute[225855]: 2026-01-20 15:12:42.228 225859 DEBUG oslo_concurrency.lockutils [None req-5048e29f-dd5f-45f2-b955-36cf3538c28a a73d4f13c0bf4d1c9497cd04e5db6724 8550ab6f7bdb4d9faa423c65e76a6818 - - default default] Lock "4e7e9bf1-528e-4390-8d23-3ab48889e23c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:12:42 compute-1 nova_compute[225855]: 2026-01-20 15:12:42.228 225859 DEBUG oslo_concurrency.lockutils [None req-5048e29f-dd5f-45f2-b955-36cf3538c28a a73d4f13c0bf4d1c9497cd04e5db6724 8550ab6f7bdb4d9faa423c65e76a6818 - - default default] Acquiring lock "4e7e9bf1-528e-4390-8d23-3ab48889e23c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:12:42 compute-1 nova_compute[225855]: 2026-01-20 15:12:42.229 225859 DEBUG oslo_concurrency.lockutils [None req-5048e29f-dd5f-45f2-b955-36cf3538c28a a73d4f13c0bf4d1c9497cd04e5db6724 8550ab6f7bdb4d9faa423c65e76a6818 - - default default] Lock "4e7e9bf1-528e-4390-8d23-3ab48889e23c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:12:42 compute-1 nova_compute[225855]: 2026-01-20 15:12:42.229 225859 DEBUG oslo_concurrency.lockutils [None req-5048e29f-dd5f-45f2-b955-36cf3538c28a a73d4f13c0bf4d1c9497cd04e5db6724 8550ab6f7bdb4d9faa423c65e76a6818 - - default default] Lock "4e7e9bf1-528e-4390-8d23-3ab48889e23c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:12:42 compute-1 nova_compute[225855]: 2026-01-20 15:12:42.230 225859 INFO nova.compute.manager [None req-5048e29f-dd5f-45f2-b955-36cf3538c28a a73d4f13c0bf4d1c9497cd04e5db6724 8550ab6f7bdb4d9faa423c65e76a6818 - - default default] [instance: 4e7e9bf1-528e-4390-8d23-3ab48889e23c] Terminating instance
Jan 20 15:12:42 compute-1 nova_compute[225855]: 2026-01-20 15:12:42.231 225859 DEBUG oslo_concurrency.lockutils [None req-5048e29f-dd5f-45f2-b955-36cf3538c28a a73d4f13c0bf4d1c9497cd04e5db6724 8550ab6f7bdb4d9faa423c65e76a6818 - - default default] Acquiring lock "refresh_cache-4e7e9bf1-528e-4390-8d23-3ab48889e23c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:12:42 compute-1 nova_compute[225855]: 2026-01-20 15:12:42.231 225859 DEBUG oslo_concurrency.lockutils [None req-5048e29f-dd5f-45f2-b955-36cf3538c28a a73d4f13c0bf4d1c9497cd04e5db6724 8550ab6f7bdb4d9faa423c65e76a6818 - - default default] Acquired lock "refresh_cache-4e7e9bf1-528e-4390-8d23-3ab48889e23c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:12:42 compute-1 nova_compute[225855]: 2026-01-20 15:12:42.232 225859 DEBUG nova.network.neutron [None req-5048e29f-dd5f-45f2-b955-36cf3538c28a a73d4f13c0bf4d1c9497cd04e5db6724 8550ab6f7bdb4d9faa423c65e76a6818 - - default default] [instance: 4e7e9bf1-528e-4390-8d23-3ab48889e23c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 15:12:42 compute-1 nova_compute[225855]: 2026-01-20 15:12:42.702 225859 DEBUG nova.network.neutron [None req-5048e29f-dd5f-45f2-b955-36cf3538c28a a73d4f13c0bf4d1c9497cd04e5db6724 8550ab6f7bdb4d9faa423c65e76a6818 - - default default] [instance: 4e7e9bf1-528e-4390-8d23-3ab48889e23c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 20 15:12:42 compute-1 nova_compute[225855]: 2026-01-20 15:12:42.759 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:12:42 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:12:42 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:12:42 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:12:42.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:12:42 compute-1 ceph-mon[81775]: pgmap v2691: 321 pgs: 321 active+clean; 326 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 418 KiB/s rd, 4.0 MiB/s wr, 214 op/s
Jan 20 15:12:43 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:12:43 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:12:43 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:12:43.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:12:43 compute-1 nova_compute[225855]: 2026-01-20 15:12:43.167 225859 DEBUG nova.network.neutron [None req-5048e29f-dd5f-45f2-b955-36cf3538c28a a73d4f13c0bf4d1c9497cd04e5db6724 8550ab6f7bdb4d9faa423c65e76a6818 - - default default] [instance: 4e7e9bf1-528e-4390-8d23-3ab48889e23c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:12:43 compute-1 nova_compute[225855]: 2026-01-20 15:12:43.209 225859 DEBUG oslo_concurrency.lockutils [None req-5048e29f-dd5f-45f2-b955-36cf3538c28a a73d4f13c0bf4d1c9497cd04e5db6724 8550ab6f7bdb4d9faa423c65e76a6818 - - default default] Releasing lock "refresh_cache-4e7e9bf1-528e-4390-8d23-3ab48889e23c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:12:43 compute-1 nova_compute[225855]: 2026-01-20 15:12:43.210 225859 DEBUG nova.compute.manager [None req-5048e29f-dd5f-45f2-b955-36cf3538c28a a73d4f13c0bf4d1c9497cd04e5db6724 8550ab6f7bdb4d9faa423c65e76a6818 - - default default] [instance: 4e7e9bf1-528e-4390-8d23-3ab48889e23c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 20 15:12:43 compute-1 systemd[1]: machine-qemu\x2d95\x2dinstance\x2d000000b4.scope: Deactivated successfully.
Jan 20 15:12:43 compute-1 systemd[1]: machine-qemu\x2d95\x2dinstance\x2d000000b4.scope: Consumed 4.121s CPU time.
Jan 20 15:12:43 compute-1 systemd-machined[194361]: Machine qemu-95-instance-000000b4 terminated.
Jan 20 15:12:43 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e393 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:12:43 compute-1 nova_compute[225855]: 2026-01-20 15:12:43.433 225859 INFO nova.virt.libvirt.driver [-] [instance: 4e7e9bf1-528e-4390-8d23-3ab48889e23c] Instance destroyed successfully.
Jan 20 15:12:43 compute-1 nova_compute[225855]: 2026-01-20 15:12:43.433 225859 DEBUG nova.objects.instance [None req-5048e29f-dd5f-45f2-b955-36cf3538c28a a73d4f13c0bf4d1c9497cd04e5db6724 8550ab6f7bdb4d9faa423c65e76a6818 - - default default] Lazy-loading 'resources' on Instance uuid 4e7e9bf1-528e-4390-8d23-3ab48889e23c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:12:43 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 20 15:12:43 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1969015913' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:12:43 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 20 15:12:43 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1969015913' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:12:43 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/1046212052' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:12:43 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/1969015913' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:12:43 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/1969015913' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:12:44 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e394 e394: 3 total, 3 up, 3 in
Jan 20 15:12:44 compute-1 nova_compute[225855]: 2026-01-20 15:12:44.382 225859 INFO nova.virt.libvirt.driver [None req-5048e29f-dd5f-45f2-b955-36cf3538c28a a73d4f13c0bf4d1c9497cd04e5db6724 8550ab6f7bdb4d9faa423c65e76a6818 - - default default] [instance: 4e7e9bf1-528e-4390-8d23-3ab48889e23c] Deleting instance files /var/lib/nova/instances/4e7e9bf1-528e-4390-8d23-3ab48889e23c_del
Jan 20 15:12:44 compute-1 nova_compute[225855]: 2026-01-20 15:12:44.382 225859 INFO nova.virt.libvirt.driver [None req-5048e29f-dd5f-45f2-b955-36cf3538c28a a73d4f13c0bf4d1c9497cd04e5db6724 8550ab6f7bdb4d9faa423c65e76a6818 - - default default] [instance: 4e7e9bf1-528e-4390-8d23-3ab48889e23c] Deletion of /var/lib/nova/instances/4e7e9bf1-528e-4390-8d23-3ab48889e23c_del complete
Jan 20 15:12:44 compute-1 nova_compute[225855]: 2026-01-20 15:12:44.451 225859 INFO nova.compute.manager [None req-5048e29f-dd5f-45f2-b955-36cf3538c28a a73d4f13c0bf4d1c9497cd04e5db6724 8550ab6f7bdb4d9faa423c65e76a6818 - - default default] [instance: 4e7e9bf1-528e-4390-8d23-3ab48889e23c] Took 1.24 seconds to destroy the instance on the hypervisor.
Jan 20 15:12:44 compute-1 nova_compute[225855]: 2026-01-20 15:12:44.452 225859 DEBUG oslo.service.loopingcall [None req-5048e29f-dd5f-45f2-b955-36cf3538c28a a73d4f13c0bf4d1c9497cd04e5db6724 8550ab6f7bdb4d9faa423c65e76a6818 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 20 15:12:44 compute-1 nova_compute[225855]: 2026-01-20 15:12:44.452 225859 DEBUG nova.compute.manager [-] [instance: 4e7e9bf1-528e-4390-8d23-3ab48889e23c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 20 15:12:44 compute-1 nova_compute[225855]: 2026-01-20 15:12:44.452 225859 DEBUG nova.network.neutron [-] [instance: 4e7e9bf1-528e-4390-8d23-3ab48889e23c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 20 15:12:44 compute-1 nova_compute[225855]: 2026-01-20 15:12:44.591 225859 DEBUG nova.network.neutron [-] [instance: 4e7e9bf1-528e-4390-8d23-3ab48889e23c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 20 15:12:44 compute-1 nova_compute[225855]: 2026-01-20 15:12:44.619 225859 DEBUG nova.network.neutron [-] [instance: 4e7e9bf1-528e-4390-8d23-3ab48889e23c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:12:44 compute-1 nova_compute[225855]: 2026-01-20 15:12:44.634 225859 INFO nova.compute.manager [-] [instance: 4e7e9bf1-528e-4390-8d23-3ab48889e23c] Took 0.18 seconds to deallocate network for instance.
Jan 20 15:12:44 compute-1 nova_compute[225855]: 2026-01-20 15:12:44.711 225859 DEBUG oslo_concurrency.lockutils [None req-5048e29f-dd5f-45f2-b955-36cf3538c28a a73d4f13c0bf4d1c9497cd04e5db6724 8550ab6f7bdb4d9faa423c65e76a6818 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:12:44 compute-1 nova_compute[225855]: 2026-01-20 15:12:44.711 225859 DEBUG oslo_concurrency.lockutils [None req-5048e29f-dd5f-45f2-b955-36cf3538c28a a73d4f13c0bf4d1c9497cd04e5db6724 8550ab6f7bdb4d9faa423c65e76a6818 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:12:44 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:12:44 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:12:44 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:12:44.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:12:44 compute-1 nova_compute[225855]: 2026-01-20 15:12:44.798 225859 DEBUG oslo_concurrency.processutils [None req-5048e29f-dd5f-45f2-b955-36cf3538c28a a73d4f13c0bf4d1c9497cd04e5db6724 8550ab6f7bdb4d9faa423c65e76a6818 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:12:45 compute-1 nova_compute[225855]: 2026-01-20 15:12:45.107 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:12:45 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:12:45 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:12:45 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:12:45.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:12:45 compute-1 ceph-mon[81775]: osdmap e394: 3 total, 3 up, 3 in
Jan 20 15:12:45 compute-1 ceph-mon[81775]: pgmap v2693: 321 pgs: 321 active+clean; 308 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 2.7 MiB/s wr, 178 op/s
Jan 20 15:12:45 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/1692019823' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:12:45 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:12:45 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/358433427' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:12:45 compute-1 nova_compute[225855]: 2026-01-20 15:12:45.237 225859 DEBUG oslo_concurrency.processutils [None req-5048e29f-dd5f-45f2-b955-36cf3538c28a a73d4f13c0bf4d1c9497cd04e5db6724 8550ab6f7bdb4d9faa423c65e76a6818 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:12:45 compute-1 nova_compute[225855]: 2026-01-20 15:12:45.243 225859 DEBUG nova.compute.provider_tree [None req-5048e29f-dd5f-45f2-b955-36cf3538c28a a73d4f13c0bf4d1c9497cd04e5db6724 8550ab6f7bdb4d9faa423c65e76a6818 - - default default] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:12:45 compute-1 nova_compute[225855]: 2026-01-20 15:12:45.273 225859 DEBUG nova.scheduler.client.report [None req-5048e29f-dd5f-45f2-b955-36cf3538c28a a73d4f13c0bf4d1c9497cd04e5db6724 8550ab6f7bdb4d9faa423c65e76a6818 - - default default] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:12:45 compute-1 nova_compute[225855]: 2026-01-20 15:12:45.297 225859 DEBUG oslo_concurrency.lockutils [None req-5048e29f-dd5f-45f2-b955-36cf3538c28a a73d4f13c0bf4d1c9497cd04e5db6724 8550ab6f7bdb4d9faa423c65e76a6818 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.586s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:12:45 compute-1 nova_compute[225855]: 2026-01-20 15:12:45.339 225859 INFO nova.scheduler.client.report [None req-5048e29f-dd5f-45f2-b955-36cf3538c28a a73d4f13c0bf4d1c9497cd04e5db6724 8550ab6f7bdb4d9faa423c65e76a6818 - - default default] Deleted allocations for instance 4e7e9bf1-528e-4390-8d23-3ab48889e23c
Jan 20 15:12:45 compute-1 nova_compute[225855]: 2026-01-20 15:12:45.411 225859 DEBUG oslo_concurrency.lockutils [None req-5048e29f-dd5f-45f2-b955-36cf3538c28a a73d4f13c0bf4d1c9497cd04e5db6724 8550ab6f7bdb4d9faa423c65e76a6818 - - default default] Lock "4e7e9bf1-528e-4390-8d23-3ab48889e23c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.183s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:12:46 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/358433427' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:12:46 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/131253235' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:12:46 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:12:46 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:12:46 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:12:46.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:12:46 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e395 e395: 3 total, 3 up, 3 in
Jan 20 15:12:47 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:12:47 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:12:47 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:12:47.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:12:47 compute-1 ceph-mon[81775]: pgmap v2694: 321 pgs: 321 active+clean; 216 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 2.5 MiB/s wr, 286 op/s
Jan 20 15:12:47 compute-1 ceph-mon[81775]: osdmap e395: 3 total, 3 up, 3 in
Jan 20 15:12:47 compute-1 nova_compute[225855]: 2026-01-20 15:12:47.761 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:12:48 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e395 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:12:48 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:12:48 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:12:48 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:12:48.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:12:49 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:12:49 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.002000057s ======
Jan 20 15:12:49 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:12:49.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000057s
Jan 20 15:12:49 compute-1 ceph-mon[81775]: pgmap v2696: 321 pgs: 321 active+clean; 216 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 881 KiB/s wr, 199 op/s
Jan 20 15:12:50 compute-1 nova_compute[225855]: 2026-01-20 15:12:50.111 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:12:50 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e396 e396: 3 total, 3 up, 3 in
Jan 20 15:12:50 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:12:50 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:12:50 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:12:50.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:12:51 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:12:51 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:12:51 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:12:51.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:12:51 compute-1 ceph-mon[81775]: pgmap v2697: 321 pgs: 321 active+clean; 200 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 4.6 KiB/s wr, 267 op/s
Jan 20 15:12:51 compute-1 ceph-mon[81775]: osdmap e396: 3 total, 3 up, 3 in
Jan 20 15:12:51 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/57227263' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:12:52 compute-1 ceph-mon[81775]: pgmap v2699: 321 pgs: 2 active+clean+snaptrim, 319 active+clean; 200 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.9 MiB/s rd, 4.4 KiB/s wr, 273 op/s
Jan 20 15:12:52 compute-1 nova_compute[225855]: 2026-01-20 15:12:52.763 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:12:52 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:12:52 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:12:52 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:12:52.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:12:53 compute-1 podman[301514]: 2026-01-20 15:12:53.040661683 +0000 UTC m=+0.073560728 container health_status 533cd8e0c3b8b7f910f15cef49e7f879ab46c12ce646a35628c15f13725eb500 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 20 15:12:53 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:12:53 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:12:53 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:12:53.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:12:53 compute-1 nova_compute[225855]: 2026-01-20 15:12:53.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:12:53 compute-1 nova_compute[225855]: 2026-01-20 15:12:53.340 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 15:12:53 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e396 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:12:54 compute-1 nova_compute[225855]: 2026-01-20 15:12:54.340 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:12:54 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:12:54 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:12:54 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:12:54.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:12:55 compute-1 nova_compute[225855]: 2026-01-20 15:12:55.114 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:12:55 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:12:55 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:12:55 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:12:55.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:12:55 compute-1 ceph-mon[81775]: pgmap v2700: 321 pgs: 2 active+clean+snaptrim, 319 active+clean; 200 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 2.2 KiB/s wr, 146 op/s
Jan 20 15:12:56 compute-1 nova_compute[225855]: 2026-01-20 15:12:56.340 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:12:56 compute-1 nova_compute[225855]: 2026-01-20 15:12:56.341 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 15:12:56 compute-1 nova_compute[225855]: 2026-01-20 15:12:56.342 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 20 15:12:56 compute-1 nova_compute[225855]: 2026-01-20 15:12:56.373 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 20 15:12:56 compute-1 nova_compute[225855]: 2026-01-20 15:12:56.374 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:12:56 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e397 e397: 3 total, 3 up, 3 in
Jan 20 15:12:56 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:12:56 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:12:56 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:12:56.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:12:57 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:12:57 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:12:57 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:12:57.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:12:57 compute-1 nova_compute[225855]: 2026-01-20 15:12:57.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:12:57 compute-1 nova_compute[225855]: 2026-01-20 15:12:57.340 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:12:57 compute-1 ceph-mon[81775]: pgmap v2701: 321 pgs: 321 active+clean; 200 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 2.3 KiB/s wr, 130 op/s
Jan 20 15:12:57 compute-1 ceph-mon[81775]: osdmap e397: 3 total, 3 up, 3 in
Jan 20 15:12:57 compute-1 nova_compute[225855]: 2026-01-20 15:12:57.765 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:12:58 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e397 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:12:58 compute-1 nova_compute[225855]: 2026-01-20 15:12:58.432 225859 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1768921963.4311595, 4e7e9bf1-528e-4390-8d23-3ab48889e23c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:12:58 compute-1 nova_compute[225855]: 2026-01-20 15:12:58.433 225859 INFO nova.compute.manager [-] [instance: 4e7e9bf1-528e-4390-8d23-3ab48889e23c] VM Stopped (Lifecycle Event)
Jan 20 15:12:58 compute-1 nova_compute[225855]: 2026-01-20 15:12:58.468 225859 DEBUG nova.compute.manager [None req-79dc6c96-6eff-4694-bad6-b731a6583f0a - - - - - -] [instance: 4e7e9bf1-528e-4390-8d23-3ab48889e23c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:12:58 compute-1 ceph-mon[81775]: pgmap v2703: 321 pgs: 321 active+clean; 200 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 639 B/s wr, 65 op/s
Jan 20 15:12:58 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:12:58 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:12:58 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:12:58.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:12:59 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:12:59 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:12:59 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:12:59.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:12:59 compute-1 nova_compute[225855]: 2026-01-20 15:12:59.338 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:12:59 compute-1 nova_compute[225855]: 2026-01-20 15:12:59.366 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:12:59 compute-1 nova_compute[225855]: 2026-01-20 15:12:59.367 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:12:59 compute-1 nova_compute[225855]: 2026-01-20 15:12:59.367 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:12:59 compute-1 nova_compute[225855]: 2026-01-20 15:12:59.367 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 15:12:59 compute-1 nova_compute[225855]: 2026-01-20 15:12:59.368 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:12:59 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:12:59 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/270906769' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:12:59 compute-1 nova_compute[225855]: 2026-01-20 15:12:59.879 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.511s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:13:00 compute-1 nova_compute[225855]: 2026-01-20 15:13:00.036 225859 WARNING nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 15:13:00 compute-1 nova_compute[225855]: 2026-01-20 15:13:00.037 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4266MB free_disk=20.942672729492188GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 15:13:00 compute-1 nova_compute[225855]: 2026-01-20 15:13:00.037 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:13:00 compute-1 nova_compute[225855]: 2026-01-20 15:13:00.037 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:13:00 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/270906769' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:13:00 compute-1 nova_compute[225855]: 2026-01-20 15:13:00.115 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 15:13:00 compute-1 nova_compute[225855]: 2026-01-20 15:13:00.116 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 15:13:00 compute-1 nova_compute[225855]: 2026-01-20 15:13:00.121 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:13:00 compute-1 nova_compute[225855]: 2026-01-20 15:13:00.151 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:13:00 compute-1 sudo[301561]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:13:00 compute-1 sudo[301561]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:13:00 compute-1 sudo[301561]: pam_unix(sudo:session): session closed for user root
Jan 20 15:13:00 compute-1 sudo[301587]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:13:00 compute-1 sudo[301587]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:13:00 compute-1 sudo[301587]: pam_unix(sudo:session): session closed for user root
Jan 20 15:13:00 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:13:00 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3721231144' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:13:00 compute-1 nova_compute[225855]: 2026-01-20 15:13:00.599 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:13:00 compute-1 nova_compute[225855]: 2026-01-20 15:13:00.604 225859 DEBUG nova.compute.provider_tree [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:13:00 compute-1 nova_compute[225855]: 2026-01-20 15:13:00.627 225859 DEBUG nova.scheduler.client.report [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:13:00 compute-1 nova_compute[225855]: 2026-01-20 15:13:00.652 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 15:13:00 compute-1 nova_compute[225855]: 2026-01-20 15:13:00.653 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.615s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:13:00 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:13:00 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:13:00 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:13:00.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:13:01 compute-1 ceph-mon[81775]: pgmap v2704: 321 pgs: 321 active+clean; 200 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 2.3 KiB/s wr, 90 op/s
Jan 20 15:13:01 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/3721231144' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:13:01 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:13:01 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:13:01 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:13:01.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:13:02 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/2748790219' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:13:02 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/992957130' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:13:02 compute-1 nova_compute[225855]: 2026-01-20 15:13:02.769 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:13:02 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:13:02 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:13:02 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:13:02.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:13:03 compute-1 ceph-mon[81775]: pgmap v2705: 321 pgs: 321 active+clean; 200 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 15 KiB/s wr, 64 op/s
Jan 20 15:13:03 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:13:03 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:13:03 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:13:03.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:13:03 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e397 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:13:04 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/2926783721' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:13:04 compute-1 nova_compute[225855]: 2026-01-20 15:13:04.655 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:13:04 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:13:04 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:13:04 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:13:04.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:13:05 compute-1 nova_compute[225855]: 2026-01-20 15:13:05.124 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:13:05 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:13:05 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:13:05 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:13:05.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:13:05 compute-1 nova_compute[225855]: 2026-01-20 15:13:05.335 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:13:05 compute-1 ceph-mon[81775]: pgmap v2706: 321 pgs: 321 active+clean; 202 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 28 KiB/s wr, 73 op/s
Jan 20 15:13:05 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/187333900' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:13:06 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:13:06 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:13:06 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:13:06.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:13:07 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:13:07 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:13:07 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:13:07.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:13:07 compute-1 ceph-mon[81775]: pgmap v2707: 321 pgs: 321 active+clean; 227 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 1.1 MiB/s wr, 104 op/s
Jan 20 15:13:07 compute-1 nova_compute[225855]: 2026-01-20 15:13:07.771 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:13:08 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e397 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:13:08 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/2234610671' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:13:08 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:13:08 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:13:08 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:13:08.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:13:09 compute-1 podman[301636]: 2026-01-20 15:13:09.025482158 +0000 UTC m=+0.074832375 container health_status 72be95c12e041eb2cd1a16ad70dfa1391d9f03a5f3180a614ead9459dcfcd46f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 20 15:13:09 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:13:09 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:13:09 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:13:09.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:13:09 compute-1 ceph-mon[81775]: pgmap v2708: 321 pgs: 321 active+clean; 227 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 976 KiB/s wr, 89 op/s
Jan 20 15:13:09 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/221289373' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:13:10 compute-1 nova_compute[225855]: 2026-01-20 15:13:10.126 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:13:10 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/1463661681' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:13:10 compute-1 ceph-mon[81775]: pgmap v2709: 321 pgs: 321 active+clean; 198 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 1.8 MiB/s wr, 106 op/s
Jan 20 15:13:10 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:13:10 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:13:10 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:13:10.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:13:11 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:13:11 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:13:11 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:13:11.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:13:12 compute-1 nova_compute[225855]: 2026-01-20 15:13:12.774 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:13:12 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:13:12 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:13:12 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:13:12.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:13:13 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:13:13 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:13:13 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:13:13.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:13:13 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e397 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:13:13 compute-1 ceph-mon[81775]: pgmap v2710: 321 pgs: 321 active+clean; 167 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 85 op/s
Jan 20 15:13:13 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/2024585432' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:13:14 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/2794425772' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:13:14 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/2794425772' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:13:14 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:13:14 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:13:14 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:13:14.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:13:15 compute-1 nova_compute[225855]: 2026-01-20 15:13:15.129 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:13:15 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:13:15 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:13:15 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:13:15.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:13:15 compute-1 ceph-mon[81775]: pgmap v2711: 321 pgs: 321 active+clean; 167 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 52 KiB/s rd, 1.8 MiB/s wr, 75 op/s
Jan 20 15:13:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:13:16.434 140354 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:13:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:13:16.435 140354 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:13:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:13:16.435 140354 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:13:16 compute-1 ceph-mon[81775]: pgmap v2712: 321 pgs: 321 active+clean; 167 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 1.8 MiB/s wr, 94 op/s
Jan 20 15:13:16 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:13:16 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:13:16 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:13:16.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:13:17 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:13:17 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:13:17 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:13:17.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:13:17 compute-1 nova_compute[225855]: 2026-01-20 15:13:17.336 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:13:17 compute-1 nova_compute[225855]: 2026-01-20 15:13:17.831 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:13:18 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e397 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:13:18 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:13:18 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:13:18 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:13:18.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:13:19 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:13:19 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:13:19 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:13:19.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:13:19 compute-1 ceph-mon[81775]: pgmap v2713: 321 pgs: 321 active+clean; 167 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 900 KiB/s wr, 60 op/s
Jan 20 15:13:20 compute-1 nova_compute[225855]: 2026-01-20 15:13:20.132 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:13:20 compute-1 sudo[301668]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:13:20 compute-1 sudo[301668]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:13:20 compute-1 sudo[301668]: pam_unix(sudo:session): session closed for user root
Jan 20 15:13:20 compute-1 sudo[301693]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:13:20 compute-1 sudo[301693]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:13:20 compute-1 sudo[301693]: pam_unix(sudo:session): session closed for user root
Jan 20 15:13:20 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:13:20 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:13:20 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:13:20.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:13:21 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:13:21 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:13:21 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:13:21.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:13:21 compute-1 ceph-mon[81775]: pgmap v2714: 321 pgs: 321 active+clean; 189 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 1.8 MiB/s wr, 135 op/s
Jan 20 15:13:22 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:13:22 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:13:22 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:13:22.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:13:22 compute-1 nova_compute[225855]: 2026-01-20 15:13:22.835 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:13:22 compute-1 ceph-mon[81775]: pgmap v2715: 321 pgs: 321 active+clean; 213 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 1.8 MiB/s wr, 128 op/s
Jan 20 15:13:23 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:13:23 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:13:23 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:13:23.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:13:23 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e397 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:13:23 compute-1 podman[301720]: 2026-01-20 15:13:23.999305587 +0000 UTC m=+0.043723102 container health_status 533cd8e0c3b8b7f910f15cef49e7f879ab46c12ce646a35628c15f13725eb500 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 20 15:13:24 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/4043402433' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:13:24 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:13:24 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:13:24 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:13:24.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:13:25 compute-1 nova_compute[225855]: 2026-01-20 15:13:25.136 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:13:25 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:13:25 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:13:25 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:13:25.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:13:25 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 15:13:25 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3326666623' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:13:25 compute-1 ceph-mon[81775]: pgmap v2716: 321 pgs: 321 active+clean; 213 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 1.8 MiB/s wr, 119 op/s
Jan 20 15:13:26 compute-1 nova_compute[225855]: 2026-01-20 15:13:26.377 225859 DEBUG oslo_concurrency.lockutils [None req-b00a3858-a0ec-49f4-bb4d-e8705b9f8fa3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Acquiring lock "128af7d9-155f-468d-9873-98c816f0df9e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:13:26 compute-1 nova_compute[225855]: 2026-01-20 15:13:26.377 225859 DEBUG oslo_concurrency.lockutils [None req-b00a3858-a0ec-49f4-bb4d-e8705b9f8fa3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Lock "128af7d9-155f-468d-9873-98c816f0df9e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:13:26 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/3326666623' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:13:26 compute-1 nova_compute[225855]: 2026-01-20 15:13:26.396 225859 DEBUG nova.compute.manager [None req-b00a3858-a0ec-49f4-bb4d-e8705b9f8fa3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 20 15:13:26 compute-1 nova_compute[225855]: 2026-01-20 15:13:26.485 225859 DEBUG oslo_concurrency.lockutils [None req-b00a3858-a0ec-49f4-bb4d-e8705b9f8fa3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:13:26 compute-1 nova_compute[225855]: 2026-01-20 15:13:26.486 225859 DEBUG oslo_concurrency.lockutils [None req-b00a3858-a0ec-49f4-bb4d-e8705b9f8fa3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:13:26 compute-1 nova_compute[225855]: 2026-01-20 15:13:26.491 225859 DEBUG nova.virt.hardware [None req-b00a3858-a0ec-49f4-bb4d-e8705b9f8fa3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 20 15:13:26 compute-1 nova_compute[225855]: 2026-01-20 15:13:26.492 225859 INFO nova.compute.claims [None req-b00a3858-a0ec-49f4-bb4d-e8705b9f8fa3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] Claim successful on node compute-1.ctlplane.example.com
Jan 20 15:13:26 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:13:26 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:13:26 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:13:26.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:13:27 compute-1 nova_compute[225855]: 2026-01-20 15:13:27.046 225859 DEBUG oslo_concurrency.processutils [None req-b00a3858-a0ec-49f4-bb4d-e8705b9f8fa3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:13:27 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:13:27 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:13:27 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:13:27.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:13:27 compute-1 ceph-mon[81775]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #133. Immutable memtables: 0.
Jan 20 15:13:27 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:13:27.402075) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 20 15:13:27 compute-1 ceph-mon[81775]: rocksdb: [db/flush_job.cc:856] [default] [JOB 83] Flushing memtable with next log file: 133
Jan 20 15:13:27 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768922007402096, "job": 83, "event": "flush_started", "num_memtables": 1, "num_entries": 771, "num_deletes": 254, "total_data_size": 1275267, "memory_usage": 1297488, "flush_reason": "Manual Compaction"}
Jan 20 15:13:27 compute-1 ceph-mon[81775]: rocksdb: [db/flush_job.cc:885] [default] [JOB 83] Level-0 flush table #134: started
Jan 20 15:13:27 compute-1 ceph-mon[81775]: pgmap v2717: 321 pgs: 321 active+clean; 213 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 1.8 MiB/s wr, 113 op/s
Jan 20 15:13:27 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768922007409762, "cf_name": "default", "job": 83, "event": "table_file_creation", "file_number": 134, "file_size": 840017, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 65610, "largest_seqno": 66376, "table_properties": {"data_size": 836370, "index_size": 1426, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1157, "raw_key_size": 8828, "raw_average_key_size": 19, "raw_value_size": 828875, "raw_average_value_size": 1871, "num_data_blocks": 63, "num_entries": 443, "num_filter_entries": 443, "num_deletions": 254, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768921962, "oldest_key_time": 1768921962, "file_creation_time": 1768922007, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1539d774-8a6f-4e48-b253-137c44586344", "db_session_id": "LFF7G2OZDOU7TKQ8MKAH", "orig_file_number": 134, "seqno_to_time_mapping": "N/A"}}
Jan 20 15:13:27 compute-1 ceph-mon[81775]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 83] Flush lasted 7731 microseconds, and 2446 cpu microseconds.
Jan 20 15:13:27 compute-1 ceph-mon[81775]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 15:13:27 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:13:27.409804) [db/flush_job.cc:967] [default] [JOB 83] Level-0 flush table #134: 840017 bytes OK
Jan 20 15:13:27 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:13:27.409818) [db/memtable_list.cc:519] [default] Level-0 commit table #134 started
Jan 20 15:13:27 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:13:27.410934) [db/memtable_list.cc:722] [default] Level-0 commit table #134: memtable #1 done
Jan 20 15:13:27 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:13:27.410945) EVENT_LOG_v1 {"time_micros": 1768922007410941, "job": 83, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 20 15:13:27 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:13:27.410961) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 20 15:13:27 compute-1 ceph-mon[81775]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 83] Try to delete WAL files size 1271164, prev total WAL file size 1271164, number of live WAL files 2.
Jan 20 15:13:27 compute-1 ceph-mon[81775]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000130.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 15:13:27 compute-1 ceph-mgr[82135]: client.0 ms_handle_reset on v2:192.168.122.100:6800/2542147622
Jan 20 15:13:27 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:13:27.411430) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035353232' seq:72057594037927935, type:22 .. '7061786F730035373734' seq:0, type:0; will stop at (end)
Jan 20 15:13:27 compute-1 ceph-mon[81775]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 84] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 20 15:13:27 compute-1 ceph-mon[81775]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 83 Base level 0, inputs: [134(820KB)], [132(10MB)]
Jan 20 15:13:27 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768922007411525, "job": 84, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [134], "files_L6": [132], "score": -1, "input_data_size": 11826675, "oldest_snapshot_seqno": -1}
Jan 20 15:13:27 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:13:27 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1030759791' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:13:27 compute-1 nova_compute[225855]: 2026-01-20 15:13:27.478 225859 DEBUG oslo_concurrency.processutils [None req-b00a3858-a0ec-49f4-bb4d-e8705b9f8fa3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:13:27 compute-1 nova_compute[225855]: 2026-01-20 15:13:27.483 225859 DEBUG nova.compute.provider_tree [None req-b00a3858-a0ec-49f4-bb4d-e8705b9f8fa3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:13:27 compute-1 ceph-mon[81775]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 84] Generated table #135: 8769 keys, 9951772 bytes, temperature: kUnknown
Jan 20 15:13:27 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768922007497427, "cf_name": "default", "job": 84, "event": "table_file_creation", "file_number": 135, "file_size": 9951772, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9896409, "index_size": 32338, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 21957, "raw_key_size": 232044, "raw_average_key_size": 26, "raw_value_size": 9743382, "raw_average_value_size": 1111, "num_data_blocks": 1224, "num_entries": 8769, "num_filter_entries": 8769, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768917474, "oldest_key_time": 0, "file_creation_time": 1768922007, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1539d774-8a6f-4e48-b253-137c44586344", "db_session_id": "LFF7G2OZDOU7TKQ8MKAH", "orig_file_number": 135, "seqno_to_time_mapping": "N/A"}}
Jan 20 15:13:27 compute-1 ceph-mon[81775]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 15:13:27 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:13:27.497744) [db/compaction/compaction_job.cc:1663] [default] [JOB 84] Compacted 1@0 + 1@6 files to L6 => 9951772 bytes
Jan 20 15:13:27 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:13:27.499277) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 137.5 rd, 115.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.8, 10.5 +0.0 blob) out(9.5 +0.0 blob), read-write-amplify(25.9) write-amplify(11.8) OK, records in: 9291, records dropped: 522 output_compression: NoCompression
Jan 20 15:13:27 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:13:27.499304) EVENT_LOG_v1 {"time_micros": 1768922007499291, "job": 84, "event": "compaction_finished", "compaction_time_micros": 86014, "compaction_time_cpu_micros": 32812, "output_level": 6, "num_output_files": 1, "total_output_size": 9951772, "num_input_records": 9291, "num_output_records": 8769, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 20 15:13:27 compute-1 ceph-mon[81775]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000134.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 15:13:27 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768922007499853, "job": 84, "event": "table_file_deletion", "file_number": 134}
Jan 20 15:13:27 compute-1 nova_compute[225855]: 2026-01-20 15:13:27.500 225859 DEBUG nova.scheduler.client.report [None req-b00a3858-a0ec-49f4-bb4d-e8705b9f8fa3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:13:27 compute-1 ceph-mon[81775]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000132.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 15:13:27 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768922007502306, "job": 84, "event": "table_file_deletion", "file_number": 132}
Jan 20 15:13:27 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:13:27.411286) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:13:27 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:13:27.502457) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:13:27 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:13:27.502463) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:13:27 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:13:27.502464) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:13:27 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:13:27.502466) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:13:27 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:13:27.502467) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:13:27 compute-1 nova_compute[225855]: 2026-01-20 15:13:27.536 225859 DEBUG oslo_concurrency.lockutils [None req-b00a3858-a0ec-49f4-bb4d-e8705b9f8fa3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.051s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:13:27 compute-1 nova_compute[225855]: 2026-01-20 15:13:27.537 225859 DEBUG nova.compute.manager [None req-b00a3858-a0ec-49f4-bb4d-e8705b9f8fa3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 20 15:13:27 compute-1 nova_compute[225855]: 2026-01-20 15:13:27.653 225859 DEBUG nova.compute.manager [None req-b00a3858-a0ec-49f4-bb4d-e8705b9f8fa3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 20 15:13:27 compute-1 nova_compute[225855]: 2026-01-20 15:13:27.653 225859 DEBUG nova.network.neutron [None req-b00a3858-a0ec-49f4-bb4d-e8705b9f8fa3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 20 15:13:27 compute-1 nova_compute[225855]: 2026-01-20 15:13:27.830 225859 DEBUG nova.policy [None req-b00a3858-a0ec-49f4-bb4d-e8705b9f8fa3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '442a7a5cb8ea426a82be9762b262d171', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '1ed5feeeafe7448a8efb47ab975b0ead', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 20 15:13:27 compute-1 nova_compute[225855]: 2026-01-20 15:13:27.836 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:13:27 compute-1 nova_compute[225855]: 2026-01-20 15:13:27.978 225859 INFO nova.virt.libvirt.driver [None req-b00a3858-a0ec-49f4-bb4d-e8705b9f8fa3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 20 15:13:28 compute-1 nova_compute[225855]: 2026-01-20 15:13:28.017 225859 DEBUG nova.compute.manager [None req-b00a3858-a0ec-49f4-bb4d-e8705b9f8fa3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 20 15:13:28 compute-1 nova_compute[225855]: 2026-01-20 15:13:28.209 225859 DEBUG nova.compute.manager [None req-b00a3858-a0ec-49f4-bb4d-e8705b9f8fa3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 20 15:13:28 compute-1 nova_compute[225855]: 2026-01-20 15:13:28.212 225859 DEBUG nova.virt.libvirt.driver [None req-b00a3858-a0ec-49f4-bb4d-e8705b9f8fa3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 20 15:13:28 compute-1 nova_compute[225855]: 2026-01-20 15:13:28.213 225859 INFO nova.virt.libvirt.driver [None req-b00a3858-a0ec-49f4-bb4d-e8705b9f8fa3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] Creating image(s)
Jan 20 15:13:28 compute-1 nova_compute[225855]: 2026-01-20 15:13:28.252 225859 DEBUG nova.storage.rbd_utils [None req-b00a3858-a0ec-49f4-bb4d-e8705b9f8fa3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] rbd image 128af7d9-155f-468d-9873-98c816f0df9e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:13:28 compute-1 nova_compute[225855]: 2026-01-20 15:13:28.283 225859 DEBUG nova.storage.rbd_utils [None req-b00a3858-a0ec-49f4-bb4d-e8705b9f8fa3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] rbd image 128af7d9-155f-468d-9873-98c816f0df9e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:13:28 compute-1 nova_compute[225855]: 2026-01-20 15:13:28.311 225859 DEBUG nova.storage.rbd_utils [None req-b00a3858-a0ec-49f4-bb4d-e8705b9f8fa3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] rbd image 128af7d9-155f-468d-9873-98c816f0df9e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:13:28 compute-1 nova_compute[225855]: 2026-01-20 15:13:28.315 225859 DEBUG oslo_concurrency.processutils [None req-b00a3858-a0ec-49f4-bb4d-e8705b9f8fa3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:13:28 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e397 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:13:28 compute-1 nova_compute[225855]: 2026-01-20 15:13:28.394 225859 DEBUG oslo_concurrency.processutils [None req-b00a3858-a0ec-49f4-bb4d-e8705b9f8fa3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:13:28 compute-1 nova_compute[225855]: 2026-01-20 15:13:28.394 225859 DEBUG oslo_concurrency.lockutils [None req-b00a3858-a0ec-49f4-bb4d-e8705b9f8fa3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Acquiring lock "82d5c1918fd7c974214c7a48c1793a7a82560462" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:13:28 compute-1 nova_compute[225855]: 2026-01-20 15:13:28.395 225859 DEBUG oslo_concurrency.lockutils [None req-b00a3858-a0ec-49f4-bb4d-e8705b9f8fa3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:13:28 compute-1 nova_compute[225855]: 2026-01-20 15:13:28.395 225859 DEBUG oslo_concurrency.lockutils [None req-b00a3858-a0ec-49f4-bb4d-e8705b9f8fa3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:13:28 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/1030759791' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:13:28 compute-1 nova_compute[225855]: 2026-01-20 15:13:28.429 225859 DEBUG nova.storage.rbd_utils [None req-b00a3858-a0ec-49f4-bb4d-e8705b9f8fa3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] rbd image 128af7d9-155f-468d-9873-98c816f0df9e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:13:28 compute-1 nova_compute[225855]: 2026-01-20 15:13:28.433 225859 DEBUG oslo_concurrency.processutils [None req-b00a3858-a0ec-49f4-bb4d-e8705b9f8fa3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 128af7d9-155f-468d-9873-98c816f0df9e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:13:28 compute-1 nova_compute[225855]: 2026-01-20 15:13:28.601 225859 DEBUG nova.network.neutron [None req-b00a3858-a0ec-49f4-bb4d-e8705b9f8fa3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] Successfully created port: 9de5453d-b548-429c-8fc2-7b012cb8ebdf _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 20 15:13:28 compute-1 nova_compute[225855]: 2026-01-20 15:13:28.761 225859 DEBUG oslo_concurrency.processutils [None req-b00a3858-a0ec-49f4-bb4d-e8705b9f8fa3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 128af7d9-155f-468d-9873-98c816f0df9e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.329s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:13:28 compute-1 nova_compute[225855]: 2026-01-20 15:13:28.828 225859 DEBUG nova.storage.rbd_utils [None req-b00a3858-a0ec-49f4-bb4d-e8705b9f8fa3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] resizing rbd image 128af7d9-155f-468d-9873-98c816f0df9e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 20 15:13:28 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:13:28 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:13:28 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:13:28.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:13:28 compute-1 nova_compute[225855]: 2026-01-20 15:13:28.947 225859 DEBUG nova.objects.instance [None req-b00a3858-a0ec-49f4-bb4d-e8705b9f8fa3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Lazy-loading 'migration_context' on Instance uuid 128af7d9-155f-468d-9873-98c816f0df9e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:13:28 compute-1 nova_compute[225855]: 2026-01-20 15:13:28.987 225859 DEBUG nova.virt.libvirt.driver [None req-b00a3858-a0ec-49f4-bb4d-e8705b9f8fa3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 20 15:13:28 compute-1 nova_compute[225855]: 2026-01-20 15:13:28.988 225859 DEBUG nova.virt.libvirt.driver [None req-b00a3858-a0ec-49f4-bb4d-e8705b9f8fa3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] Ensure instance console log exists: /var/lib/nova/instances/128af7d9-155f-468d-9873-98c816f0df9e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 20 15:13:28 compute-1 nova_compute[225855]: 2026-01-20 15:13:28.989 225859 DEBUG oslo_concurrency.lockutils [None req-b00a3858-a0ec-49f4-bb4d-e8705b9f8fa3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:13:28 compute-1 nova_compute[225855]: 2026-01-20 15:13:28.989 225859 DEBUG oslo_concurrency.lockutils [None req-b00a3858-a0ec-49f4-bb4d-e8705b9f8fa3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:13:28 compute-1 nova_compute[225855]: 2026-01-20 15:13:28.989 225859 DEBUG oslo_concurrency.lockutils [None req-b00a3858-a0ec-49f4-bb4d-e8705b9f8fa3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:13:29 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:13:29 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:13:29 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:13:29.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:13:29 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/4028771629' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:13:29 compute-1 ceph-mon[81775]: pgmap v2718: 321 pgs: 321 active+clean; 213 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 1.8 MiB/s wr, 87 op/s
Jan 20 15:13:29 compute-1 nova_compute[225855]: 2026-01-20 15:13:29.969 225859 DEBUG nova.network.neutron [None req-b00a3858-a0ec-49f4-bb4d-e8705b9f8fa3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] Successfully updated port: 9de5453d-b548-429c-8fc2-7b012cb8ebdf _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 20 15:13:30 compute-1 nova_compute[225855]: 2026-01-20 15:13:30.047 225859 DEBUG oslo_concurrency.lockutils [None req-b00a3858-a0ec-49f4-bb4d-e8705b9f8fa3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Acquiring lock "refresh_cache-128af7d9-155f-468d-9873-98c816f0df9e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:13:30 compute-1 nova_compute[225855]: 2026-01-20 15:13:30.047 225859 DEBUG oslo_concurrency.lockutils [None req-b00a3858-a0ec-49f4-bb4d-e8705b9f8fa3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Acquired lock "refresh_cache-128af7d9-155f-468d-9873-98c816f0df9e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:13:30 compute-1 nova_compute[225855]: 2026-01-20 15:13:30.048 225859 DEBUG nova.network.neutron [None req-b00a3858-a0ec-49f4-bb4d-e8705b9f8fa3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 15:13:30 compute-1 nova_compute[225855]: 2026-01-20 15:13:30.139 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:13:30 compute-1 nova_compute[225855]: 2026-01-20 15:13:30.217 225859 DEBUG nova.network.neutron [None req-b00a3858-a0ec-49f4-bb4d-e8705b9f8fa3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 20 15:13:30 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:13:30 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:13:30 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:13:30.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:13:30 compute-1 nova_compute[225855]: 2026-01-20 15:13:30.940 225859 DEBUG nova.network.neutron [None req-b00a3858-a0ec-49f4-bb4d-e8705b9f8fa3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] Updating instance_info_cache with network_info: [{"id": "9de5453d-b548-429c-8fc2-7b012cb8ebdf", "address": "fa:16:3e:a8:1d:e9", "network": {"id": "d07527d3-7363-453c-9902-c562bab626ba", "bridge": "br-int", "label": "tempest-network-smoke--1250108698", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1ed5feeeafe7448a8efb47ab975b0ead", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9de5453d-b5", "ovs_interfaceid": "9de5453d-b548-429c-8fc2-7b012cb8ebdf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:13:31 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:13:31 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:13:31 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:13:31.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:13:31 compute-1 ceph-mon[81775]: pgmap v2719: 321 pgs: 321 active+clean; 244 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 3.4 MiB/s wr, 141 op/s
Jan 20 15:13:31 compute-1 nova_compute[225855]: 2026-01-20 15:13:31.593 225859 DEBUG oslo_concurrency.lockutils [None req-b00a3858-a0ec-49f4-bb4d-e8705b9f8fa3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Releasing lock "refresh_cache-128af7d9-155f-468d-9873-98c816f0df9e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:13:31 compute-1 nova_compute[225855]: 2026-01-20 15:13:31.593 225859 DEBUG nova.compute.manager [None req-b00a3858-a0ec-49f4-bb4d-e8705b9f8fa3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] Instance network_info: |[{"id": "9de5453d-b548-429c-8fc2-7b012cb8ebdf", "address": "fa:16:3e:a8:1d:e9", "network": {"id": "d07527d3-7363-453c-9902-c562bab626ba", "bridge": "br-int", "label": "tempest-network-smoke--1250108698", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1ed5feeeafe7448a8efb47ab975b0ead", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9de5453d-b5", "ovs_interfaceid": "9de5453d-b548-429c-8fc2-7b012cb8ebdf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 20 15:13:31 compute-1 nova_compute[225855]: 2026-01-20 15:13:31.595 225859 DEBUG nova.virt.libvirt.driver [None req-b00a3858-a0ec-49f4-bb4d-e8705b9f8fa3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] Start _get_guest_xml network_info=[{"id": "9de5453d-b548-429c-8fc2-7b012cb8ebdf", "address": "fa:16:3e:a8:1d:e9", "network": {"id": "d07527d3-7363-453c-9902-c562bab626ba", "bridge": "br-int", "label": "tempest-network-smoke--1250108698", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1ed5feeeafe7448a8efb47ab975b0ead", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9de5453d-b5", "ovs_interfaceid": "9de5453d-b548-429c-8fc2-7b012cb8ebdf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'device_type': 'disk', 'encryption_options': None, 'size': 0, 'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'guest_format': None, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'image_id': 'a32b3e07-16d8-46fd-9a7b-c242c432fcf9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 20 15:13:31 compute-1 nova_compute[225855]: 2026-01-20 15:13:31.600 225859 WARNING nova.virt.libvirt.driver [None req-b00a3858-a0ec-49f4-bb4d-e8705b9f8fa3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 15:13:31 compute-1 nova_compute[225855]: 2026-01-20 15:13:31.605 225859 DEBUG nova.virt.libvirt.host [None req-b00a3858-a0ec-49f4-bb4d-e8705b9f8fa3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 20 15:13:31 compute-1 nova_compute[225855]: 2026-01-20 15:13:31.606 225859 DEBUG nova.virt.libvirt.host [None req-b00a3858-a0ec-49f4-bb4d-e8705b9f8fa3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 20 15:13:31 compute-1 nova_compute[225855]: 2026-01-20 15:13:31.609 225859 DEBUG nova.virt.libvirt.host [None req-b00a3858-a0ec-49f4-bb4d-e8705b9f8fa3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 20 15:13:31 compute-1 nova_compute[225855]: 2026-01-20 15:13:31.610 225859 DEBUG nova.virt.libvirt.host [None req-b00a3858-a0ec-49f4-bb4d-e8705b9f8fa3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 20 15:13:31 compute-1 nova_compute[225855]: 2026-01-20 15:13:31.611 225859 DEBUG nova.virt.libvirt.driver [None req-b00a3858-a0ec-49f4-bb4d-e8705b9f8fa3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 20 15:13:31 compute-1 nova_compute[225855]: 2026-01-20 15:13:31.611 225859 DEBUG nova.virt.hardware [None req-b00a3858-a0ec-49f4-bb4d-e8705b9f8fa3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-20T14:21:55Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='522deaab-a741-4dbb-932d-d8b13a211c33',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 20 15:13:31 compute-1 nova_compute[225855]: 2026-01-20 15:13:31.612 225859 DEBUG nova.virt.hardware [None req-b00a3858-a0ec-49f4-bb4d-e8705b9f8fa3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 20 15:13:31 compute-1 nova_compute[225855]: 2026-01-20 15:13:31.612 225859 DEBUG nova.virt.hardware [None req-b00a3858-a0ec-49f4-bb4d-e8705b9f8fa3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 20 15:13:31 compute-1 nova_compute[225855]: 2026-01-20 15:13:31.613 225859 DEBUG nova.virt.hardware [None req-b00a3858-a0ec-49f4-bb4d-e8705b9f8fa3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 20 15:13:31 compute-1 nova_compute[225855]: 2026-01-20 15:13:31.613 225859 DEBUG nova.virt.hardware [None req-b00a3858-a0ec-49f4-bb4d-e8705b9f8fa3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 20 15:13:31 compute-1 nova_compute[225855]: 2026-01-20 15:13:31.613 225859 DEBUG nova.virt.hardware [None req-b00a3858-a0ec-49f4-bb4d-e8705b9f8fa3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 20 15:13:31 compute-1 nova_compute[225855]: 2026-01-20 15:13:31.613 225859 DEBUG nova.virt.hardware [None req-b00a3858-a0ec-49f4-bb4d-e8705b9f8fa3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 20 15:13:31 compute-1 nova_compute[225855]: 2026-01-20 15:13:31.614 225859 DEBUG nova.virt.hardware [None req-b00a3858-a0ec-49f4-bb4d-e8705b9f8fa3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 20 15:13:31 compute-1 nova_compute[225855]: 2026-01-20 15:13:31.614 225859 DEBUG nova.virt.hardware [None req-b00a3858-a0ec-49f4-bb4d-e8705b9f8fa3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 20 15:13:31 compute-1 nova_compute[225855]: 2026-01-20 15:13:31.614 225859 DEBUG nova.virt.hardware [None req-b00a3858-a0ec-49f4-bb4d-e8705b9f8fa3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 20 15:13:31 compute-1 nova_compute[225855]: 2026-01-20 15:13:31.614 225859 DEBUG nova.virt.hardware [None req-b00a3858-a0ec-49f4-bb4d-e8705b9f8fa3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 20 15:13:31 compute-1 nova_compute[225855]: 2026-01-20 15:13:31.617 225859 DEBUG oslo_concurrency.processutils [None req-b00a3858-a0ec-49f4-bb4d-e8705b9f8fa3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:13:32 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 15:13:32 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1603427827' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:13:32 compute-1 nova_compute[225855]: 2026-01-20 15:13:32.041 225859 DEBUG oslo_concurrency.processutils [None req-b00a3858-a0ec-49f4-bb4d-e8705b9f8fa3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.424s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:13:32 compute-1 nova_compute[225855]: 2026-01-20 15:13:32.065 225859 DEBUG nova.storage.rbd_utils [None req-b00a3858-a0ec-49f4-bb4d-e8705b9f8fa3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] rbd image 128af7d9-155f-468d-9873-98c816f0df9e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:13:32 compute-1 nova_compute[225855]: 2026-01-20 15:13:32.070 225859 DEBUG oslo_concurrency.processutils [None req-b00a3858-a0ec-49f4-bb4d-e8705b9f8fa3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:13:32 compute-1 nova_compute[225855]: 2026-01-20 15:13:32.101 225859 DEBUG nova.compute.manager [req-de2a3c0f-8b93-454c-8732-1b48901607c0 req-dfc04484-6634-4fb7-90c0-e9c764b7da19 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] Received event network-changed-9de5453d-b548-429c-8fc2-7b012cb8ebdf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:13:32 compute-1 nova_compute[225855]: 2026-01-20 15:13:32.102 225859 DEBUG nova.compute.manager [req-de2a3c0f-8b93-454c-8732-1b48901607c0 req-dfc04484-6634-4fb7-90c0-e9c764b7da19 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] Refreshing instance network info cache due to event network-changed-9de5453d-b548-429c-8fc2-7b012cb8ebdf. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 15:13:32 compute-1 nova_compute[225855]: 2026-01-20 15:13:32.103 225859 DEBUG oslo_concurrency.lockutils [req-de2a3c0f-8b93-454c-8732-1b48901607c0 req-dfc04484-6634-4fb7-90c0-e9c764b7da19 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-128af7d9-155f-468d-9873-98c816f0df9e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:13:32 compute-1 nova_compute[225855]: 2026-01-20 15:13:32.103 225859 DEBUG oslo_concurrency.lockutils [req-de2a3c0f-8b93-454c-8732-1b48901607c0 req-dfc04484-6634-4fb7-90c0-e9c764b7da19 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-128af7d9-155f-468d-9873-98c816f0df9e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:13:32 compute-1 nova_compute[225855]: 2026-01-20 15:13:32.103 225859 DEBUG nova.network.neutron [req-de2a3c0f-8b93-454c-8732-1b48901607c0 req-dfc04484-6634-4fb7-90c0-e9c764b7da19 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] Refreshing network info cache for port 9de5453d-b548-429c-8fc2-7b012cb8ebdf _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 15:13:32 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 15:13:32 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2022380678' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:13:32 compute-1 nova_compute[225855]: 2026-01-20 15:13:32.494 225859 DEBUG oslo_concurrency.processutils [None req-b00a3858-a0ec-49f4-bb4d-e8705b9f8fa3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.424s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:13:32 compute-1 nova_compute[225855]: 2026-01-20 15:13:32.496 225859 DEBUG nova.virt.libvirt.vif [None req-b00a3858-a0ec-49f4-bb4d-e8705b9f8fa3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T15:13:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1998945962',display_name='tempest-TestNetworkAdvancedServerOps-server-1998945962',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1998945962',id=183,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHQtn9WZwWHpZ18eEtTY9zdPNbJgOayUdrvVmR1brDMxwKaiJ8tf9lOFdht6GjVy3Orpnh5Z5LatI7xEKad9rNtjFmwEczk5s4CmWp5ueE54bJ73h+pph+yq2VHvIP5rgg==',key_name='tempest-TestNetworkAdvancedServerOps-1645169738',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1ed5feeeafe7448a8efb47ab975b0ead',ramdisk_id='',reservation_id='r-mvt7thmt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-175282664',owner_user_name='tempest-TestNetworkAdvancedServerOps-175282664-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T15:13:28Z,user_data=None,user_id='442a7a5cb8ea426a82be9762b262d171',uuid=128af7d9-155f-468d-9873-98c816f0df9e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9de5453d-b548-429c-8fc2-7b012cb8ebdf", "address": "fa:16:3e:a8:1d:e9", "network": {"id": "d07527d3-7363-453c-9902-c562bab626ba", "bridge": "br-int", "label": "tempest-network-smoke--1250108698", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "1ed5feeeafe7448a8efb47ab975b0ead", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9de5453d-b5", "ovs_interfaceid": "9de5453d-b548-429c-8fc2-7b012cb8ebdf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 20 15:13:32 compute-1 nova_compute[225855]: 2026-01-20 15:13:32.496 225859 DEBUG nova.network.os_vif_util [None req-b00a3858-a0ec-49f4-bb4d-e8705b9f8fa3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Converting VIF {"id": "9de5453d-b548-429c-8fc2-7b012cb8ebdf", "address": "fa:16:3e:a8:1d:e9", "network": {"id": "d07527d3-7363-453c-9902-c562bab626ba", "bridge": "br-int", "label": "tempest-network-smoke--1250108698", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1ed5feeeafe7448a8efb47ab975b0ead", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9de5453d-b5", "ovs_interfaceid": "9de5453d-b548-429c-8fc2-7b012cb8ebdf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 15:13:32 compute-1 nova_compute[225855]: 2026-01-20 15:13:32.497 225859 DEBUG nova.network.os_vif_util [None req-b00a3858-a0ec-49f4-bb4d-e8705b9f8fa3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a8:1d:e9,bridge_name='br-int',has_traffic_filtering=True,id=9de5453d-b548-429c-8fc2-7b012cb8ebdf,network=Network(d07527d3-7363-453c-9902-c562bab626ba),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9de5453d-b5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 15:13:32 compute-1 nova_compute[225855]: 2026-01-20 15:13:32.498 225859 DEBUG nova.objects.instance [None req-b00a3858-a0ec-49f4-bb4d-e8705b9f8fa3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Lazy-loading 'pci_devices' on Instance uuid 128af7d9-155f-468d-9873-98c816f0df9e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:13:32 compute-1 nova_compute[225855]: 2026-01-20 15:13:32.517 225859 DEBUG nova.virt.libvirt.driver [None req-b00a3858-a0ec-49f4-bb4d-e8705b9f8fa3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] End _get_guest_xml xml=<domain type="kvm">
Jan 20 15:13:32 compute-1 nova_compute[225855]:   <uuid>128af7d9-155f-468d-9873-98c816f0df9e</uuid>
Jan 20 15:13:32 compute-1 nova_compute[225855]:   <name>instance-000000b7</name>
Jan 20 15:13:32 compute-1 nova_compute[225855]:   <memory>131072</memory>
Jan 20 15:13:32 compute-1 nova_compute[225855]:   <vcpu>1</vcpu>
Jan 20 15:13:32 compute-1 nova_compute[225855]:   <metadata>
Jan 20 15:13:32 compute-1 nova_compute[225855]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 15:13:32 compute-1 nova_compute[225855]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 15:13:32 compute-1 nova_compute[225855]:       <nova:name>tempest-TestNetworkAdvancedServerOps-server-1998945962</nova:name>
Jan 20 15:13:32 compute-1 nova_compute[225855]:       <nova:creationTime>2026-01-20 15:13:31</nova:creationTime>
Jan 20 15:13:32 compute-1 nova_compute[225855]:       <nova:flavor name="m1.nano">
Jan 20 15:13:32 compute-1 nova_compute[225855]:         <nova:memory>128</nova:memory>
Jan 20 15:13:32 compute-1 nova_compute[225855]:         <nova:disk>1</nova:disk>
Jan 20 15:13:32 compute-1 nova_compute[225855]:         <nova:swap>0</nova:swap>
Jan 20 15:13:32 compute-1 nova_compute[225855]:         <nova:ephemeral>0</nova:ephemeral>
Jan 20 15:13:32 compute-1 nova_compute[225855]:         <nova:vcpus>1</nova:vcpus>
Jan 20 15:13:32 compute-1 nova_compute[225855]:       </nova:flavor>
Jan 20 15:13:32 compute-1 nova_compute[225855]:       <nova:owner>
Jan 20 15:13:32 compute-1 nova_compute[225855]:         <nova:user uuid="442a7a5cb8ea426a82be9762b262d171">tempest-TestNetworkAdvancedServerOps-175282664-project-member</nova:user>
Jan 20 15:13:32 compute-1 nova_compute[225855]:         <nova:project uuid="1ed5feeeafe7448a8efb47ab975b0ead">tempest-TestNetworkAdvancedServerOps-175282664</nova:project>
Jan 20 15:13:32 compute-1 nova_compute[225855]:       </nova:owner>
Jan 20 15:13:32 compute-1 nova_compute[225855]:       <nova:root type="image" uuid="a32b3e07-16d8-46fd-9a7b-c242c432fcf9"/>
Jan 20 15:13:32 compute-1 nova_compute[225855]:       <nova:ports>
Jan 20 15:13:32 compute-1 nova_compute[225855]:         <nova:port uuid="9de5453d-b548-429c-8fc2-7b012cb8ebdf">
Jan 20 15:13:32 compute-1 nova_compute[225855]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Jan 20 15:13:32 compute-1 nova_compute[225855]:         </nova:port>
Jan 20 15:13:32 compute-1 nova_compute[225855]:       </nova:ports>
Jan 20 15:13:32 compute-1 nova_compute[225855]:     </nova:instance>
Jan 20 15:13:32 compute-1 nova_compute[225855]:   </metadata>
Jan 20 15:13:32 compute-1 nova_compute[225855]:   <sysinfo type="smbios">
Jan 20 15:13:32 compute-1 nova_compute[225855]:     <system>
Jan 20 15:13:32 compute-1 nova_compute[225855]:       <entry name="manufacturer">RDO</entry>
Jan 20 15:13:32 compute-1 nova_compute[225855]:       <entry name="product">OpenStack Compute</entry>
Jan 20 15:13:32 compute-1 nova_compute[225855]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 15:13:32 compute-1 nova_compute[225855]:       <entry name="serial">128af7d9-155f-468d-9873-98c816f0df9e</entry>
Jan 20 15:13:32 compute-1 nova_compute[225855]:       <entry name="uuid">128af7d9-155f-468d-9873-98c816f0df9e</entry>
Jan 20 15:13:32 compute-1 nova_compute[225855]:       <entry name="family">Virtual Machine</entry>
Jan 20 15:13:32 compute-1 nova_compute[225855]:     </system>
Jan 20 15:13:32 compute-1 nova_compute[225855]:   </sysinfo>
Jan 20 15:13:32 compute-1 nova_compute[225855]:   <os>
Jan 20 15:13:32 compute-1 nova_compute[225855]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 20 15:13:32 compute-1 nova_compute[225855]:     <boot dev="hd"/>
Jan 20 15:13:32 compute-1 nova_compute[225855]:     <smbios mode="sysinfo"/>
Jan 20 15:13:32 compute-1 nova_compute[225855]:   </os>
Jan 20 15:13:32 compute-1 nova_compute[225855]:   <features>
Jan 20 15:13:32 compute-1 nova_compute[225855]:     <acpi/>
Jan 20 15:13:32 compute-1 nova_compute[225855]:     <apic/>
Jan 20 15:13:32 compute-1 nova_compute[225855]:     <vmcoreinfo/>
Jan 20 15:13:32 compute-1 nova_compute[225855]:   </features>
Jan 20 15:13:32 compute-1 nova_compute[225855]:   <clock offset="utc">
Jan 20 15:13:32 compute-1 nova_compute[225855]:     <timer name="pit" tickpolicy="delay"/>
Jan 20 15:13:32 compute-1 nova_compute[225855]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 20 15:13:32 compute-1 nova_compute[225855]:     <timer name="hpet" present="no"/>
Jan 20 15:13:32 compute-1 nova_compute[225855]:   </clock>
Jan 20 15:13:32 compute-1 nova_compute[225855]:   <cpu mode="custom" match="exact">
Jan 20 15:13:32 compute-1 nova_compute[225855]:     <model>Nehalem</model>
Jan 20 15:13:32 compute-1 nova_compute[225855]:     <topology sockets="1" cores="1" threads="1"/>
Jan 20 15:13:32 compute-1 nova_compute[225855]:   </cpu>
Jan 20 15:13:32 compute-1 nova_compute[225855]:   <devices>
Jan 20 15:13:32 compute-1 nova_compute[225855]:     <disk type="network" device="disk">
Jan 20 15:13:32 compute-1 nova_compute[225855]:       <driver type="raw" cache="none"/>
Jan 20 15:13:32 compute-1 nova_compute[225855]:       <source protocol="rbd" name="vms/128af7d9-155f-468d-9873-98c816f0df9e_disk">
Jan 20 15:13:32 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 15:13:32 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 15:13:32 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 15:13:32 compute-1 nova_compute[225855]:       </source>
Jan 20 15:13:32 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 15:13:32 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 15:13:32 compute-1 nova_compute[225855]:       </auth>
Jan 20 15:13:32 compute-1 nova_compute[225855]:       <target dev="vda" bus="virtio"/>
Jan 20 15:13:32 compute-1 nova_compute[225855]:     </disk>
Jan 20 15:13:32 compute-1 nova_compute[225855]:     <disk type="network" device="cdrom">
Jan 20 15:13:32 compute-1 nova_compute[225855]:       <driver type="raw" cache="none"/>
Jan 20 15:13:32 compute-1 nova_compute[225855]:       <source protocol="rbd" name="vms/128af7d9-155f-468d-9873-98c816f0df9e_disk.config">
Jan 20 15:13:32 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 15:13:32 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 15:13:32 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 15:13:32 compute-1 nova_compute[225855]:       </source>
Jan 20 15:13:32 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 15:13:32 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 15:13:32 compute-1 nova_compute[225855]:       </auth>
Jan 20 15:13:32 compute-1 nova_compute[225855]:       <target dev="sda" bus="sata"/>
Jan 20 15:13:32 compute-1 nova_compute[225855]:     </disk>
Jan 20 15:13:32 compute-1 nova_compute[225855]:     <interface type="ethernet">
Jan 20 15:13:32 compute-1 nova_compute[225855]:       <mac address="fa:16:3e:a8:1d:e9"/>
Jan 20 15:13:32 compute-1 nova_compute[225855]:       <model type="virtio"/>
Jan 20 15:13:32 compute-1 nova_compute[225855]:       <driver name="vhost" rx_queue_size="512"/>
Jan 20 15:13:32 compute-1 nova_compute[225855]:       <mtu size="1442"/>
Jan 20 15:13:32 compute-1 nova_compute[225855]:       <target dev="tap9de5453d-b5"/>
Jan 20 15:13:32 compute-1 nova_compute[225855]:     </interface>
Jan 20 15:13:32 compute-1 nova_compute[225855]:     <serial type="pty">
Jan 20 15:13:32 compute-1 nova_compute[225855]:       <log file="/var/lib/nova/instances/128af7d9-155f-468d-9873-98c816f0df9e/console.log" append="off"/>
Jan 20 15:13:32 compute-1 nova_compute[225855]:     </serial>
Jan 20 15:13:32 compute-1 nova_compute[225855]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 20 15:13:32 compute-1 nova_compute[225855]:     <video>
Jan 20 15:13:32 compute-1 nova_compute[225855]:       <model type="virtio"/>
Jan 20 15:13:32 compute-1 nova_compute[225855]:     </video>
Jan 20 15:13:32 compute-1 nova_compute[225855]:     <input type="tablet" bus="usb"/>
Jan 20 15:13:32 compute-1 nova_compute[225855]:     <rng model="virtio">
Jan 20 15:13:32 compute-1 nova_compute[225855]:       <backend model="random">/dev/urandom</backend>
Jan 20 15:13:32 compute-1 nova_compute[225855]:     </rng>
Jan 20 15:13:32 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root"/>
Jan 20 15:13:32 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:13:32 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:13:32 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:13:32 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:13:32 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:13:32 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:13:32 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:13:32 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:13:32 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:13:32 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:13:32 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:13:32 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:13:32 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:13:32 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:13:32 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:13:32 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:13:32 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:13:32 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:13:32 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:13:32 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:13:32 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:13:32 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:13:32 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:13:32 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:13:32 compute-1 nova_compute[225855]:     <controller type="usb" index="0"/>
Jan 20 15:13:32 compute-1 nova_compute[225855]:     <memballoon model="virtio">
Jan 20 15:13:32 compute-1 nova_compute[225855]:       <stats period="10"/>
Jan 20 15:13:32 compute-1 nova_compute[225855]:     </memballoon>
Jan 20 15:13:32 compute-1 nova_compute[225855]:   </devices>
Jan 20 15:13:32 compute-1 nova_compute[225855]: </domain>
Jan 20 15:13:32 compute-1 nova_compute[225855]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 20 15:13:32 compute-1 nova_compute[225855]: 2026-01-20 15:13:32.518 225859 DEBUG nova.compute.manager [None req-b00a3858-a0ec-49f4-bb4d-e8705b9f8fa3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] Preparing to wait for external event network-vif-plugged-9de5453d-b548-429c-8fc2-7b012cb8ebdf prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 20 15:13:32 compute-1 nova_compute[225855]: 2026-01-20 15:13:32.518 225859 DEBUG oslo_concurrency.lockutils [None req-b00a3858-a0ec-49f4-bb4d-e8705b9f8fa3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Acquiring lock "128af7d9-155f-468d-9873-98c816f0df9e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:13:32 compute-1 nova_compute[225855]: 2026-01-20 15:13:32.519 225859 DEBUG oslo_concurrency.lockutils [None req-b00a3858-a0ec-49f4-bb4d-e8705b9f8fa3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Lock "128af7d9-155f-468d-9873-98c816f0df9e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:13:32 compute-1 nova_compute[225855]: 2026-01-20 15:13:32.519 225859 DEBUG oslo_concurrency.lockutils [None req-b00a3858-a0ec-49f4-bb4d-e8705b9f8fa3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Lock "128af7d9-155f-468d-9873-98c816f0df9e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:13:32 compute-1 nova_compute[225855]: 2026-01-20 15:13:32.519 225859 DEBUG nova.virt.libvirt.vif [None req-b00a3858-a0ec-49f4-bb4d-e8705b9f8fa3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T15:13:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1998945962',display_name='tempest-TestNetworkAdvancedServerOps-server-1998945962',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1998945962',id=183,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHQtn9WZwWHpZ18eEtTY9zdPNbJgOayUdrvVmR1brDMxwKaiJ8tf9lOFdht6GjVy3Orpnh5Z5LatI7xEKad9rNtjFmwEczk5s4CmWp5ueE54bJ73h+pph+yq2VHvIP5rgg==',key_name='tempest-TestNetworkAdvancedServerOps-1645169738',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1ed5feeeafe7448a8efb47ab975b0ead',ramdisk_id='',reservation_id='r-mvt7thmt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-175282664',owner_user_name='tempest-TestNetworkAdvancedServerOps-175282664-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T15:13:28Z,user_data=None,user_id='442a7a5cb8ea426a82be9762b262d171',uuid=128af7d9-155f-468d-9873-98c816f0df9e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9de5453d-b548-429c-8fc2-7b012cb8ebdf", "address": "fa:16:3e:a8:1d:e9", "network": {"id": "d07527d3-7363-453c-9902-c562bab626ba", "bridge": "br-int", "label": "tempest-network-smoke--1250108698", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1ed5feeeafe7448a8efb47ab975b0ead", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9de5453d-b5", "ovs_interfaceid": "9de5453d-b548-429c-8fc2-7b012cb8ebdf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 20 15:13:32 compute-1 nova_compute[225855]: 2026-01-20 15:13:32.520 225859 DEBUG nova.network.os_vif_util [None req-b00a3858-a0ec-49f4-bb4d-e8705b9f8fa3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Converting VIF {"id": "9de5453d-b548-429c-8fc2-7b012cb8ebdf", "address": "fa:16:3e:a8:1d:e9", "network": {"id": "d07527d3-7363-453c-9902-c562bab626ba", "bridge": "br-int", "label": "tempest-network-smoke--1250108698", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1ed5feeeafe7448a8efb47ab975b0ead", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9de5453d-b5", "ovs_interfaceid": "9de5453d-b548-429c-8fc2-7b012cb8ebdf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 15:13:32 compute-1 nova_compute[225855]: 2026-01-20 15:13:32.520 225859 DEBUG nova.network.os_vif_util [None req-b00a3858-a0ec-49f4-bb4d-e8705b9f8fa3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a8:1d:e9,bridge_name='br-int',has_traffic_filtering=True,id=9de5453d-b548-429c-8fc2-7b012cb8ebdf,network=Network(d07527d3-7363-453c-9902-c562bab626ba),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9de5453d-b5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 15:13:32 compute-1 nova_compute[225855]: 2026-01-20 15:13:32.521 225859 DEBUG os_vif [None req-b00a3858-a0ec-49f4-bb4d-e8705b9f8fa3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a8:1d:e9,bridge_name='br-int',has_traffic_filtering=True,id=9de5453d-b548-429c-8fc2-7b012cb8ebdf,network=Network(d07527d3-7363-453c-9902-c562bab626ba),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9de5453d-b5') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 20 15:13:32 compute-1 nova_compute[225855]: 2026-01-20 15:13:32.521 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:13:32 compute-1 nova_compute[225855]: 2026-01-20 15:13:32.522 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:13:32 compute-1 nova_compute[225855]: 2026-01-20 15:13:32.522 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 15:13:32 compute-1 nova_compute[225855]: 2026-01-20 15:13:32.525 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:13:32 compute-1 nova_compute[225855]: 2026-01-20 15:13:32.526 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9de5453d-b5, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:13:32 compute-1 nova_compute[225855]: 2026-01-20 15:13:32.526 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap9de5453d-b5, col_values=(('external_ids', {'iface-id': '9de5453d-b548-429c-8fc2-7b012cb8ebdf', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a8:1d:e9', 'vm-uuid': '128af7d9-155f-468d-9873-98c816f0df9e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:13:32 compute-1 nova_compute[225855]: 2026-01-20 15:13:32.528 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:13:32 compute-1 NetworkManager[49104]: <info>  [1768922012.5289] manager: (tap9de5453d-b5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/332)
Jan 20 15:13:32 compute-1 nova_compute[225855]: 2026-01-20 15:13:32.530 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 15:13:32 compute-1 nova_compute[225855]: 2026-01-20 15:13:32.534 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:13:32 compute-1 nova_compute[225855]: 2026-01-20 15:13:32.535 225859 INFO os_vif [None req-b00a3858-a0ec-49f4-bb4d-e8705b9f8fa3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a8:1d:e9,bridge_name='br-int',has_traffic_filtering=True,id=9de5453d-b548-429c-8fc2-7b012cb8ebdf,network=Network(d07527d3-7363-453c-9902-c562bab626ba),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9de5453d-b5')
Jan 20 15:13:32 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/1603427827' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:13:32 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/2022380678' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:13:32 compute-1 nova_compute[225855]: 2026-01-20 15:13:32.632 225859 DEBUG nova.virt.libvirt.driver [None req-b00a3858-a0ec-49f4-bb4d-e8705b9f8fa3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 15:13:32 compute-1 nova_compute[225855]: 2026-01-20 15:13:32.633 225859 DEBUG nova.virt.libvirt.driver [None req-b00a3858-a0ec-49f4-bb4d-e8705b9f8fa3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 15:13:32 compute-1 nova_compute[225855]: 2026-01-20 15:13:32.633 225859 DEBUG nova.virt.libvirt.driver [None req-b00a3858-a0ec-49f4-bb4d-e8705b9f8fa3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] No VIF found with MAC fa:16:3e:a8:1d:e9, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 20 15:13:32 compute-1 nova_compute[225855]: 2026-01-20 15:13:32.634 225859 INFO nova.virt.libvirt.driver [None req-b00a3858-a0ec-49f4-bb4d-e8705b9f8fa3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] Using config drive
Jan 20 15:13:32 compute-1 nova_compute[225855]: 2026-01-20 15:13:32.656 225859 DEBUG nova.storage.rbd_utils [None req-b00a3858-a0ec-49f4-bb4d-e8705b9f8fa3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] rbd image 128af7d9-155f-468d-9873-98c816f0df9e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:13:32 compute-1 nova_compute[225855]: 2026-01-20 15:13:32.838 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:13:32 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:13:32 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:13:32 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:13:32.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:13:33 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:13:33 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:13:33 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:13:33.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:13:33 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e397 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:13:33 compute-1 nova_compute[225855]: 2026-01-20 15:13:33.579 225859 INFO nova.virt.libvirt.driver [None req-b00a3858-a0ec-49f4-bb4d-e8705b9f8fa3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] Creating config drive at /var/lib/nova/instances/128af7d9-155f-468d-9873-98c816f0df9e/disk.config
Jan 20 15:13:33 compute-1 nova_compute[225855]: 2026-01-20 15:13:33.588 225859 DEBUG oslo_concurrency.processutils [None req-b00a3858-a0ec-49f4-bb4d-e8705b9f8fa3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/128af7d9-155f-468d-9873-98c816f0df9e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9xwn_xtb execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:13:33 compute-1 ceph-mon[81775]: pgmap v2720: 321 pgs: 321 active+clean; 278 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 269 KiB/s rd, 4.0 MiB/s wr, 106 op/s
Jan 20 15:13:33 compute-1 nova_compute[225855]: 2026-01-20 15:13:33.727 225859 DEBUG oslo_concurrency.processutils [None req-b00a3858-a0ec-49f4-bb4d-e8705b9f8fa3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/128af7d9-155f-468d-9873-98c816f0df9e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9xwn_xtb" returned: 0 in 0.139s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:13:33 compute-1 nova_compute[225855]: 2026-01-20 15:13:33.758 225859 DEBUG nova.storage.rbd_utils [None req-b00a3858-a0ec-49f4-bb4d-e8705b9f8fa3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] rbd image 128af7d9-155f-468d-9873-98c816f0df9e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:13:33 compute-1 nova_compute[225855]: 2026-01-20 15:13:33.761 225859 DEBUG oslo_concurrency.processutils [None req-b00a3858-a0ec-49f4-bb4d-e8705b9f8fa3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/128af7d9-155f-468d-9873-98c816f0df9e/disk.config 128af7d9-155f-468d-9873-98c816f0df9e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:13:33 compute-1 nova_compute[225855]: 2026-01-20 15:13:33.924 225859 DEBUG oslo_concurrency.processutils [None req-b00a3858-a0ec-49f4-bb4d-e8705b9f8fa3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/128af7d9-155f-468d-9873-98c816f0df9e/disk.config 128af7d9-155f-468d-9873-98c816f0df9e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.163s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:13:33 compute-1 nova_compute[225855]: 2026-01-20 15:13:33.925 225859 INFO nova.virt.libvirt.driver [None req-b00a3858-a0ec-49f4-bb4d-e8705b9f8fa3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] Deleting local config drive /var/lib/nova/instances/128af7d9-155f-468d-9873-98c816f0df9e/disk.config because it was imported into RBD.
Jan 20 15:13:33 compute-1 kernel: tap9de5453d-b5: entered promiscuous mode
Jan 20 15:13:33 compute-1 NetworkManager[49104]: <info>  [1768922013.9714] manager: (tap9de5453d-b5): new Tun device (/org/freedesktop/NetworkManager/Devices/333)
Jan 20 15:13:33 compute-1 ovn_controller[130490]: 2026-01-20T15:13:33Z|00806|binding|INFO|Claiming lport 9de5453d-b548-429c-8fc2-7b012cb8ebdf for this chassis.
Jan 20 15:13:33 compute-1 ovn_controller[130490]: 2026-01-20T15:13:33Z|00807|binding|INFO|9de5453d-b548-429c-8fc2-7b012cb8ebdf: Claiming fa:16:3e:a8:1d:e9 10.100.0.4
Jan 20 15:13:33 compute-1 nova_compute[225855]: 2026-01-20 15:13:33.971 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:13:33 compute-1 nova_compute[225855]: 2026-01-20 15:13:33.978 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:13:34 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:13:34.002 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a8:1d:e9 10.100.0.4'], port_security=['fa:16:3e:a8:1d:e9 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '128af7d9-155f-468d-9873-98c816f0df9e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d07527d3-7363-453c-9902-c562bab626ba', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1ed5feeeafe7448a8efb47ab975b0ead', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b4895263-5fc5-4c5a-ab8d-547f570bc095', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cf4d49b6-1d42-4171-8055-0d823fb37e66, chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], logical_port=9de5453d-b548-429c-8fc2-7b012cb8ebdf) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:13:34 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:13:34.003 140354 INFO neutron.agent.ovn.metadata.agent [-] Port 9de5453d-b548-429c-8fc2-7b012cb8ebdf in datapath d07527d3-7363-453c-9902-c562bab626ba bound to our chassis
Jan 20 15:13:34 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:13:34.005 140354 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d07527d3-7363-453c-9902-c562bab626ba
Jan 20 15:13:34 compute-1 systemd-udevd[302066]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 15:13:34 compute-1 systemd-machined[194361]: New machine qemu-96-instance-000000b7.
Jan 20 15:13:34 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:13:34.016 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[15eb01ff-dad7-42df-94c3-efe6059dea97]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:13:34 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:13:34.016 140354 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapd07527d3-71 in ovnmeta-d07527d3-7363-453c-9902-c562bab626ba namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 20 15:13:34 compute-1 NetworkManager[49104]: <info>  [1768922014.0192] device (tap9de5453d-b5): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 20 15:13:34 compute-1 NetworkManager[49104]: <info>  [1768922014.0198] device (tap9de5453d-b5): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 20 15:13:34 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:13:34.020 229707 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapd07527d3-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 20 15:13:34 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:13:34.020 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[7a77e0bf-4ef6-4ff2-a72e-fca71ea8e8f1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:13:34 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:13:34.021 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[58b76739-b825-4cf3-b09e-9c0debf2b53d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:13:34 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:13:34.034 140466 DEBUG oslo.privsep.daemon [-] privsep: reply[365767fb-f121-4c69-995d-b09874daf622]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:13:34 compute-1 systemd[1]: Started Virtual Machine qemu-96-instance-000000b7.
Jan 20 15:13:34 compute-1 nova_compute[225855]: 2026-01-20 15:13:34.045 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:13:34 compute-1 ovn_controller[130490]: 2026-01-20T15:13:34Z|00808|binding|INFO|Setting lport 9de5453d-b548-429c-8fc2-7b012cb8ebdf ovn-installed in OVS
Jan 20 15:13:34 compute-1 ovn_controller[130490]: 2026-01-20T15:13:34Z|00809|binding|INFO|Setting lport 9de5453d-b548-429c-8fc2-7b012cb8ebdf up in Southbound
Jan 20 15:13:34 compute-1 nova_compute[225855]: 2026-01-20 15:13:34.052 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:13:34 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:13:34.051 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[d84059eb-9b5b-4f5e-9220-b3e427702163]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:13:34 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:13:34.084 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[16900d6d-9e90-4be4-a9b3-fe4ea42b97e4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:13:34 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:13:34.088 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[8b16e8e3-3ac6-4cb1-89e2-e464753ff23a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:13:34 compute-1 NetworkManager[49104]: <info>  [1768922014.0897] manager: (tapd07527d3-70): new Veth device (/org/freedesktop/NetworkManager/Devices/334)
Jan 20 15:13:34 compute-1 systemd-udevd[302070]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 15:13:34 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:13:34.120 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[830638b4-bcf1-47ae-a5e5-53a401312cd6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:13:34 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:13:34.123 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[da4b25a8-5a94-427f-9626-b72c80cb220c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:13:34 compute-1 NetworkManager[49104]: <info>  [1768922014.1434] device (tapd07527d3-70): carrier: link connected
Jan 20 15:13:34 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:13:34.148 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[42d4233e-16a0-473b-a8d7-63ebe3c94f7b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:13:34 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:13:34.164 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[c52064b1-2044-4102-838e-72d7be024d9e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd07527d3-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6f:33:a8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 227], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 705307, 'reachable_time': 39290, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 302100, 'error': None, 'target': 'ovnmeta-d07527d3-7363-453c-9902-c562bab626ba', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:13:34 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:13:34.179 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[41040360-33d4-4221-8bd4-5e343d382103]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe6f:33a8'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 705307, 'tstamp': 705307}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 302101, 'error': None, 'target': 'ovnmeta-d07527d3-7363-453c-9902-c562bab626ba', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:13:34 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:13:34.194 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[d35769dd-e888-4201-bb69-c18e06e9b6d0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd07527d3-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6f:33:a8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 227], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 705307, 'reachable_time': 39290, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 302102, 'error': None, 'target': 'ovnmeta-d07527d3-7363-453c-9902-c562bab626ba', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:13:34 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:13:34.225 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[4982a898-3f0b-4f2d-b67d-d0804b9c3ded]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:13:34 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:13:34.273 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[5a69110f-3c62-4e8b-8b6d-7b00e2073557]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:13:34 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:13:34.274 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd07527d3-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:13:34 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:13:34.274 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 15:13:34 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:13:34.275 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd07527d3-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:13:34 compute-1 NetworkManager[49104]: <info>  [1768922014.2772] manager: (tapd07527d3-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/335)
Jan 20 15:13:34 compute-1 nova_compute[225855]: 2026-01-20 15:13:34.276 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:13:34 compute-1 kernel: tapd07527d3-70: entered promiscuous mode
Jan 20 15:13:34 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:13:34.280 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd07527d3-70, col_values=(('external_ids', {'iface-id': '311d5bf2-0b44-4ce1-9ec1-e7458d5df232'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:13:34 compute-1 ovn_controller[130490]: 2026-01-20T15:13:34Z|00810|binding|INFO|Releasing lport 311d5bf2-0b44-4ce1-9ec1-e7458d5df232 from this chassis (sb_readonly=0)
Jan 20 15:13:34 compute-1 nova_compute[225855]: 2026-01-20 15:13:34.282 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:13:34 compute-1 nova_compute[225855]: 2026-01-20 15:13:34.296 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:13:34 compute-1 nova_compute[225855]: 2026-01-20 15:13:34.302 225859 DEBUG nova.compute.manager [req-b63561c7-a5c3-4289-9bb4-6935aaa8a698 req-f9ba7d1d-5f12-4ddf-9598-d782da4a5b74 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] Received event network-vif-plugged-9de5453d-b548-429c-8fc2-7b012cb8ebdf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:13:34 compute-1 nova_compute[225855]: 2026-01-20 15:13:34.303 225859 DEBUG oslo_concurrency.lockutils [req-b63561c7-a5c3-4289-9bb4-6935aaa8a698 req-f9ba7d1d-5f12-4ddf-9598-d782da4a5b74 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "128af7d9-155f-468d-9873-98c816f0df9e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:13:34 compute-1 nova_compute[225855]: 2026-01-20 15:13:34.303 225859 DEBUG oslo_concurrency.lockutils [req-b63561c7-a5c3-4289-9bb4-6935aaa8a698 req-f9ba7d1d-5f12-4ddf-9598-d782da4a5b74 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "128af7d9-155f-468d-9873-98c816f0df9e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:13:34 compute-1 nova_compute[225855]: 2026-01-20 15:13:34.303 225859 DEBUG oslo_concurrency.lockutils [req-b63561c7-a5c3-4289-9bb4-6935aaa8a698 req-f9ba7d1d-5f12-4ddf-9598-d782da4a5b74 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "128af7d9-155f-468d-9873-98c816f0df9e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:13:34 compute-1 nova_compute[225855]: 2026-01-20 15:13:34.303 225859 DEBUG nova.compute.manager [req-b63561c7-a5c3-4289-9bb4-6935aaa8a698 req-f9ba7d1d-5f12-4ddf-9598-d782da4a5b74 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] Processing event network-vif-plugged-9de5453d-b548-429c-8fc2-7b012cb8ebdf _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 20 15:13:34 compute-1 nova_compute[225855]: 2026-01-20 15:13:34.304 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:13:34 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:13:34.305 140354 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/d07527d3-7363-453c-9902-c562bab626ba.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/d07527d3-7363-453c-9902-c562bab626ba.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 20 15:13:34 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:13:34.306 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[87e6d122-3c60-43aa-b493-4d389d16466c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:13:34 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:13:34.307 140354 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 20 15:13:34 compute-1 ovn_metadata_agent[140349]: global
Jan 20 15:13:34 compute-1 ovn_metadata_agent[140349]:     log         /dev/log local0 debug
Jan 20 15:13:34 compute-1 ovn_metadata_agent[140349]:     log-tag     haproxy-metadata-proxy-d07527d3-7363-453c-9902-c562bab626ba
Jan 20 15:13:34 compute-1 ovn_metadata_agent[140349]:     user        root
Jan 20 15:13:34 compute-1 ovn_metadata_agent[140349]:     group       root
Jan 20 15:13:34 compute-1 ovn_metadata_agent[140349]:     maxconn     1024
Jan 20 15:13:34 compute-1 ovn_metadata_agent[140349]:     pidfile     /var/lib/neutron/external/pids/d07527d3-7363-453c-9902-c562bab626ba.pid.haproxy
Jan 20 15:13:34 compute-1 ovn_metadata_agent[140349]:     daemon
Jan 20 15:13:34 compute-1 ovn_metadata_agent[140349]: 
Jan 20 15:13:34 compute-1 ovn_metadata_agent[140349]: defaults
Jan 20 15:13:34 compute-1 ovn_metadata_agent[140349]:     log global
Jan 20 15:13:34 compute-1 ovn_metadata_agent[140349]:     mode http
Jan 20 15:13:34 compute-1 ovn_metadata_agent[140349]:     option httplog
Jan 20 15:13:34 compute-1 ovn_metadata_agent[140349]:     option dontlognull
Jan 20 15:13:34 compute-1 ovn_metadata_agent[140349]:     option http-server-close
Jan 20 15:13:34 compute-1 ovn_metadata_agent[140349]:     option forwardfor
Jan 20 15:13:34 compute-1 ovn_metadata_agent[140349]:     retries                 3
Jan 20 15:13:34 compute-1 ovn_metadata_agent[140349]:     timeout http-request    30s
Jan 20 15:13:34 compute-1 ovn_metadata_agent[140349]:     timeout connect         30s
Jan 20 15:13:34 compute-1 ovn_metadata_agent[140349]:     timeout client          32s
Jan 20 15:13:34 compute-1 ovn_metadata_agent[140349]:     timeout server          32s
Jan 20 15:13:34 compute-1 ovn_metadata_agent[140349]:     timeout http-keep-alive 30s
Jan 20 15:13:34 compute-1 ovn_metadata_agent[140349]: 
Jan 20 15:13:34 compute-1 ovn_metadata_agent[140349]: 
Jan 20 15:13:34 compute-1 ovn_metadata_agent[140349]: listen listener
Jan 20 15:13:34 compute-1 ovn_metadata_agent[140349]:     bind 169.254.169.254:80
Jan 20 15:13:34 compute-1 ovn_metadata_agent[140349]:     server metadata /var/lib/neutron/metadata_proxy
Jan 20 15:13:34 compute-1 ovn_metadata_agent[140349]:     http-request add-header X-OVN-Network-ID d07527d3-7363-453c-9902-c562bab626ba
Jan 20 15:13:34 compute-1 ovn_metadata_agent[140349]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 20 15:13:34 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:13:34.309 140354 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-d07527d3-7363-453c-9902-c562bab626ba', 'env', 'PROCESS_TAG=haproxy-d07527d3-7363-453c-9902-c562bab626ba', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/d07527d3-7363-453c-9902-c562bab626ba.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 20 15:13:34 compute-1 nova_compute[225855]: 2026-01-20 15:13:34.351 225859 DEBUG nova.network.neutron [req-de2a3c0f-8b93-454c-8732-1b48901607c0 req-dfc04484-6634-4fb7-90c0-e9c764b7da19 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] Updated VIF entry in instance network info cache for port 9de5453d-b548-429c-8fc2-7b012cb8ebdf. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 15:13:34 compute-1 nova_compute[225855]: 2026-01-20 15:13:34.351 225859 DEBUG nova.network.neutron [req-de2a3c0f-8b93-454c-8732-1b48901607c0 req-dfc04484-6634-4fb7-90c0-e9c764b7da19 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] Updating instance_info_cache with network_info: [{"id": "9de5453d-b548-429c-8fc2-7b012cb8ebdf", "address": "fa:16:3e:a8:1d:e9", "network": {"id": "d07527d3-7363-453c-9902-c562bab626ba", "bridge": "br-int", "label": "tempest-network-smoke--1250108698", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1ed5feeeafe7448a8efb47ab975b0ead", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9de5453d-b5", "ovs_interfaceid": "9de5453d-b548-429c-8fc2-7b012cb8ebdf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:13:34 compute-1 nova_compute[225855]: 2026-01-20 15:13:34.379 225859 DEBUG oslo_concurrency.lockutils [req-de2a3c0f-8b93-454c-8732-1b48901607c0 req-dfc04484-6634-4fb7-90c0-e9c764b7da19 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-128af7d9-155f-468d-9873-98c816f0df9e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:13:34 compute-1 podman[302132]: 2026-01-20 15:13:34.64951051 +0000 UTC m=+0.049345991 container create 9e0f985d5a417f01c41231b07aa6e097d3d4253b45c9c2dcd897f5700b809155 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d07527d3-7363-453c-9902-c562bab626ba, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Jan 20 15:13:34 compute-1 systemd[1]: Started libpod-conmon-9e0f985d5a417f01c41231b07aa6e097d3d4253b45c9c2dcd897f5700b809155.scope.
Jan 20 15:13:34 compute-1 systemd[1]: Started libcrun container.
Jan 20 15:13:34 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19af57efcf5c610cf3b2724d9be132eb303f0d0071822b6d38a243dd05cdc13e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 20 15:13:34 compute-1 podman[302132]: 2026-01-20 15:13:34.623960655 +0000 UTC m=+0.023796156 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 20 15:13:34 compute-1 podman[302132]: 2026-01-20 15:13:34.721232795 +0000 UTC m=+0.121068296 container init 9e0f985d5a417f01c41231b07aa6e097d3d4253b45c9c2dcd897f5700b809155 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d07527d3-7363-453c-9902-c562bab626ba, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 20 15:13:34 compute-1 podman[302132]: 2026-01-20 15:13:34.727033309 +0000 UTC m=+0.126868800 container start 9e0f985d5a417f01c41231b07aa6e097d3d4253b45c9c2dcd897f5700b809155 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d07527d3-7363-453c-9902-c562bab626ba, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 15:13:34 compute-1 neutron-haproxy-ovnmeta-d07527d3-7363-453c-9902-c562bab626ba[302147]: [NOTICE]   (302166) : New worker (302169) forked
Jan 20 15:13:34 compute-1 neutron-haproxy-ovnmeta-d07527d3-7363-453c-9902-c562bab626ba[302147]: [NOTICE]   (302166) : Loading success.
Jan 20 15:13:34 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:13:34 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:13:34 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:13:34.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:13:34 compute-1 sudo[302178]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:13:34 compute-1 sudo[302178]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:13:34 compute-1 sudo[302178]: pam_unix(sudo:session): session closed for user root
Jan 20 15:13:34 compute-1 sudo[302203]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:13:34 compute-1 sudo[302203]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:13:34 compute-1 sudo[302203]: pam_unix(sudo:session): session closed for user root
Jan 20 15:13:34 compute-1 sudo[302228]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:13:34 compute-1 sudo[302228]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:13:34 compute-1 sudo[302228]: pam_unix(sudo:session): session closed for user root
Jan 20 15:13:35 compute-1 sudo[302253]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 20 15:13:35 compute-1 sudo[302253]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:13:35 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:13:35 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:13:35 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:13:35.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:13:35 compute-1 ceph-mon[81775]: pgmap v2721: 321 pgs: 321 active+clean; 292 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 848 KiB/s rd, 3.9 MiB/s wr, 114 op/s
Jan 20 15:13:35 compute-1 sudo[302253]: pam_unix(sudo:session): session closed for user root
Jan 20 15:13:35 compute-1 nova_compute[225855]: 2026-01-20 15:13:35.559 225859 DEBUG nova.compute.manager [None req-b00a3858-a0ec-49f4-bb4d-e8705b9f8fa3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 20 15:13:35 compute-1 nova_compute[225855]: 2026-01-20 15:13:35.561 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768922015.5584025, 128af7d9-155f-468d-9873-98c816f0df9e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:13:35 compute-1 nova_compute[225855]: 2026-01-20 15:13:35.561 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] VM Started (Lifecycle Event)
Jan 20 15:13:35 compute-1 nova_compute[225855]: 2026-01-20 15:13:35.564 225859 DEBUG nova.virt.libvirt.driver [None req-b00a3858-a0ec-49f4-bb4d-e8705b9f8fa3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 20 15:13:35 compute-1 nova_compute[225855]: 2026-01-20 15:13:35.568 225859 INFO nova.virt.libvirt.driver [-] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] Instance spawned successfully.
Jan 20 15:13:35 compute-1 nova_compute[225855]: 2026-01-20 15:13:35.568 225859 DEBUG nova.virt.libvirt.driver [None req-b00a3858-a0ec-49f4-bb4d-e8705b9f8fa3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 20 15:13:35 compute-1 nova_compute[225855]: 2026-01-20 15:13:35.582 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:13:35 compute-1 nova_compute[225855]: 2026-01-20 15:13:35.586 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 15:13:35 compute-1 nova_compute[225855]: 2026-01-20 15:13:35.591 225859 DEBUG nova.virt.libvirt.driver [None req-b00a3858-a0ec-49f4-bb4d-e8705b9f8fa3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:13:35 compute-1 nova_compute[225855]: 2026-01-20 15:13:35.591 225859 DEBUG nova.virt.libvirt.driver [None req-b00a3858-a0ec-49f4-bb4d-e8705b9f8fa3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:13:35 compute-1 nova_compute[225855]: 2026-01-20 15:13:35.592 225859 DEBUG nova.virt.libvirt.driver [None req-b00a3858-a0ec-49f4-bb4d-e8705b9f8fa3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:13:35 compute-1 nova_compute[225855]: 2026-01-20 15:13:35.592 225859 DEBUG nova.virt.libvirt.driver [None req-b00a3858-a0ec-49f4-bb4d-e8705b9f8fa3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:13:35 compute-1 nova_compute[225855]: 2026-01-20 15:13:35.592 225859 DEBUG nova.virt.libvirt.driver [None req-b00a3858-a0ec-49f4-bb4d-e8705b9f8fa3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:13:35 compute-1 nova_compute[225855]: 2026-01-20 15:13:35.593 225859 DEBUG nova.virt.libvirt.driver [None req-b00a3858-a0ec-49f4-bb4d-e8705b9f8fa3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:13:35 compute-1 nova_compute[225855]: 2026-01-20 15:13:35.624 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 15:13:35 compute-1 nova_compute[225855]: 2026-01-20 15:13:35.624 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768922015.5586474, 128af7d9-155f-468d-9873-98c816f0df9e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:13:35 compute-1 nova_compute[225855]: 2026-01-20 15:13:35.624 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] VM Paused (Lifecycle Event)
Jan 20 15:13:35 compute-1 nova_compute[225855]: 2026-01-20 15:13:35.666 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:13:35 compute-1 nova_compute[225855]: 2026-01-20 15:13:35.670 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768922015.5640373, 128af7d9-155f-468d-9873-98c816f0df9e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:13:35 compute-1 nova_compute[225855]: 2026-01-20 15:13:35.670 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] VM Resumed (Lifecycle Event)
Jan 20 15:13:35 compute-1 nova_compute[225855]: 2026-01-20 15:13:35.684 225859 INFO nova.compute.manager [None req-b00a3858-a0ec-49f4-bb4d-e8705b9f8fa3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] Took 7.47 seconds to spawn the instance on the hypervisor.
Jan 20 15:13:35 compute-1 nova_compute[225855]: 2026-01-20 15:13:35.685 225859 DEBUG nova.compute.manager [None req-b00a3858-a0ec-49f4-bb4d-e8705b9f8fa3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:13:35 compute-1 nova_compute[225855]: 2026-01-20 15:13:35.697 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:13:35 compute-1 nova_compute[225855]: 2026-01-20 15:13:35.700 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 15:13:35 compute-1 nova_compute[225855]: 2026-01-20 15:13:35.725 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 15:13:35 compute-1 nova_compute[225855]: 2026-01-20 15:13:35.772 225859 INFO nova.compute.manager [None req-b00a3858-a0ec-49f4-bb4d-e8705b9f8fa3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] Took 9.31 seconds to build instance.
Jan 20 15:13:35 compute-1 nova_compute[225855]: 2026-01-20 15:13:35.790 225859 DEBUG oslo_concurrency.lockutils [None req-b00a3858-a0ec-49f4-bb4d-e8705b9f8fa3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Lock "128af7d9-155f-468d-9873-98c816f0df9e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.413s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:13:36 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:13:36 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 15:13:36 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:13:36 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 15:13:36 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 15:13:36 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:13:36 compute-1 nova_compute[225855]: 2026-01-20 15:13:36.458 225859 DEBUG nova.compute.manager [req-010f4833-51a2-4bc8-aa73-a8e0991b4476 req-79832a42-b5ed-4cf9-a046-684a26d4a521 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] Received event network-vif-plugged-9de5453d-b548-429c-8fc2-7b012cb8ebdf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:13:36 compute-1 nova_compute[225855]: 2026-01-20 15:13:36.459 225859 DEBUG oslo_concurrency.lockutils [req-010f4833-51a2-4bc8-aa73-a8e0991b4476 req-79832a42-b5ed-4cf9-a046-684a26d4a521 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "128af7d9-155f-468d-9873-98c816f0df9e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:13:36 compute-1 nova_compute[225855]: 2026-01-20 15:13:36.459 225859 DEBUG oslo_concurrency.lockutils [req-010f4833-51a2-4bc8-aa73-a8e0991b4476 req-79832a42-b5ed-4cf9-a046-684a26d4a521 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "128af7d9-155f-468d-9873-98c816f0df9e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:13:36 compute-1 nova_compute[225855]: 2026-01-20 15:13:36.459 225859 DEBUG oslo_concurrency.lockutils [req-010f4833-51a2-4bc8-aa73-a8e0991b4476 req-79832a42-b5ed-4cf9-a046-684a26d4a521 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "128af7d9-155f-468d-9873-98c816f0df9e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:13:36 compute-1 nova_compute[225855]: 2026-01-20 15:13:36.460 225859 DEBUG nova.compute.manager [req-010f4833-51a2-4bc8-aa73-a8e0991b4476 req-79832a42-b5ed-4cf9-a046-684a26d4a521 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] No waiting events found dispatching network-vif-plugged-9de5453d-b548-429c-8fc2-7b012cb8ebdf pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:13:36 compute-1 nova_compute[225855]: 2026-01-20 15:13:36.460 225859 WARNING nova.compute.manager [req-010f4833-51a2-4bc8-aa73-a8e0991b4476 req-79832a42-b5ed-4cf9-a046-684a26d4a521 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] Received unexpected event network-vif-plugged-9de5453d-b548-429c-8fc2-7b012cb8ebdf for instance with vm_state active and task_state None.
Jan 20 15:13:36 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:13:36 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:13:36 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:13:36.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:13:37 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:13:37 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:13:37 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:13:37.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:13:37 compute-1 ceph-mon[81775]: pgmap v2722: 321 pgs: 321 active+clean; 293 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 166 op/s
Jan 20 15:13:37 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:13:37.512 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=59, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '12:bb:42', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '06:92:24:f7:15:56'}, ipsec=False) old=SB_Global(nb_cfg=58) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:13:37 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:13:37.513 140354 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 20 15:13:37 compute-1 nova_compute[225855]: 2026-01-20 15:13:37.515 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:13:37 compute-1 nova_compute[225855]: 2026-01-20 15:13:37.527 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:13:37 compute-1 nova_compute[225855]: 2026-01-20 15:13:37.841 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:13:38 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e397 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:13:38 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/152969632' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:13:38 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:13:38 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:13:38 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:13:38.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:13:39 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:13:39 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:13:39 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:13:39.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:13:39 compute-1 ceph-mon[81775]: pgmap v2723: 321 pgs: 321 active+clean; 293 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 166 op/s
Jan 20 15:13:40 compute-1 podman[302339]: 2026-01-20 15:13:40.038327829 +0000 UTC m=+0.083820670 container health_status 72be95c12e041eb2cd1a16ad70dfa1391d9f03a5f3180a614ead9459dcfcd46f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.build-date=20251202, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 20 15:13:40 compute-1 nova_compute[225855]: 2026-01-20 15:13:40.050 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:13:40 compute-1 NetworkManager[49104]: <info>  [1768922020.0511] manager: (patch-br-int-to-provnet-b62c391b-f7a3-4a38-a0df-72ac0383ca74): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/336)
Jan 20 15:13:40 compute-1 NetworkManager[49104]: <info>  [1768922020.0523] manager: (patch-provnet-b62c391b-f7a3-4a38-a0df-72ac0383ca74-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/337)
Jan 20 15:13:40 compute-1 nova_compute[225855]: 2026-01-20 15:13:40.226 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:13:40 compute-1 ovn_controller[130490]: 2026-01-20T15:13:40Z|00811|binding|INFO|Releasing lport 311d5bf2-0b44-4ce1-9ec1-e7458d5df232 from this chassis (sb_readonly=0)
Jan 20 15:13:40 compute-1 nova_compute[225855]: 2026-01-20 15:13:40.245 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:13:40 compute-1 sudo[302368]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:13:40 compute-1 sudo[302368]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:13:40 compute-1 sudo[302368]: pam_unix(sudo:session): session closed for user root
Jan 20 15:13:40 compute-1 sudo[302393]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:13:40 compute-1 sudo[302393]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:13:40 compute-1 sudo[302393]: pam_unix(sudo:session): session closed for user root
Jan 20 15:13:40 compute-1 nova_compute[225855]: 2026-01-20 15:13:40.619 225859 DEBUG nova.compute.manager [req-0147313d-c76a-4813-a12e-531025211f20 req-4cd2ed76-adc9-4783-bacf-ab7430034c12 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] Received event network-changed-9de5453d-b548-429c-8fc2-7b012cb8ebdf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:13:40 compute-1 nova_compute[225855]: 2026-01-20 15:13:40.620 225859 DEBUG nova.compute.manager [req-0147313d-c76a-4813-a12e-531025211f20 req-4cd2ed76-adc9-4783-bacf-ab7430034c12 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] Refreshing instance network info cache due to event network-changed-9de5453d-b548-429c-8fc2-7b012cb8ebdf. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 15:13:40 compute-1 nova_compute[225855]: 2026-01-20 15:13:40.620 225859 DEBUG oslo_concurrency.lockutils [req-0147313d-c76a-4813-a12e-531025211f20 req-4cd2ed76-adc9-4783-bacf-ab7430034c12 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-128af7d9-155f-468d-9873-98c816f0df9e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:13:40 compute-1 nova_compute[225855]: 2026-01-20 15:13:40.621 225859 DEBUG oslo_concurrency.lockutils [req-0147313d-c76a-4813-a12e-531025211f20 req-4cd2ed76-adc9-4783-bacf-ab7430034c12 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-128af7d9-155f-468d-9873-98c816f0df9e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:13:40 compute-1 nova_compute[225855]: 2026-01-20 15:13:40.621 225859 DEBUG nova.network.neutron [req-0147313d-c76a-4813-a12e-531025211f20 req-4cd2ed76-adc9-4783-bacf-ab7430034c12 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] Refreshing network info cache for port 9de5453d-b548-429c-8fc2-7b012cb8ebdf _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 15:13:40 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:13:40 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:13:40 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:13:40.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:13:41 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:13:41 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:13:41 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:13:41.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:13:41 compute-1 ceph-mon[81775]: pgmap v2724: 321 pgs: 321 active+clean; 293 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 4.0 MiB/s wr, 225 op/s
Jan 20 15:13:41 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/40024272' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:13:41 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:13:41 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:13:41 compute-1 sudo[302419]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:13:41 compute-1 sudo[302419]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:13:41 compute-1 sudo[302419]: pam_unix(sudo:session): session closed for user root
Jan 20 15:13:41 compute-1 sudo[302444]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 15:13:41 compute-1 sudo[302444]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:13:41 compute-1 sudo[302444]: pam_unix(sudo:session): session closed for user root
Jan 20 15:13:42 compute-1 nova_compute[225855]: 2026-01-20 15:13:42.252 225859 DEBUG nova.network.neutron [req-0147313d-c76a-4813-a12e-531025211f20 req-4cd2ed76-adc9-4783-bacf-ab7430034c12 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] Updated VIF entry in instance network info cache for port 9de5453d-b548-429c-8fc2-7b012cb8ebdf. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 15:13:42 compute-1 nova_compute[225855]: 2026-01-20 15:13:42.253 225859 DEBUG nova.network.neutron [req-0147313d-c76a-4813-a12e-531025211f20 req-4cd2ed76-adc9-4783-bacf-ab7430034c12 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] Updating instance_info_cache with network_info: [{"id": "9de5453d-b548-429c-8fc2-7b012cb8ebdf", "address": "fa:16:3e:a8:1d:e9", "network": {"id": "d07527d3-7363-453c-9902-c562bab626ba", "bridge": "br-int", "label": "tempest-network-smoke--1250108698", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1ed5feeeafe7448a8efb47ab975b0ead", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9de5453d-b5", "ovs_interfaceid": "9de5453d-b548-429c-8fc2-7b012cb8ebdf", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:13:42 compute-1 nova_compute[225855]: 2026-01-20 15:13:42.274 225859 DEBUG oslo_concurrency.lockutils [req-0147313d-c76a-4813-a12e-531025211f20 req-4cd2ed76-adc9-4783-bacf-ab7430034c12 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-128af7d9-155f-468d-9873-98c816f0df9e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:13:42 compute-1 nova_compute[225855]: 2026-01-20 15:13:42.530 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:13:42 compute-1 nova_compute[225855]: 2026-01-20 15:13:42.843 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:13:42 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:13:42 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:13:42 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:13:42.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:13:43 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:13:43 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:13:43 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:13:43.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:13:43 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e397 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:13:43 compute-1 ceph-mon[81775]: pgmap v2725: 321 pgs: 321 active+clean; 292 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 2.3 MiB/s wr, 194 op/s
Jan 20 15:13:43 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/2719836672' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:13:44 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/418648228' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:13:44 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/3096174409' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:13:44 compute-1 ceph-mon[81775]: pgmap v2726: 321 pgs: 321 active+clean; 307 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 1.8 MiB/s wr, 183 op/s
Jan 20 15:13:44 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:13:44 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:13:44 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:13:44.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:13:45 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:13:45 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:13:45 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:13:45.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:13:45 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/1877180579' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:13:46 compute-1 ceph-mon[81775]: pgmap v2727: 321 pgs: 321 active+clean; 365 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 3.9 MiB/s wr, 212 op/s
Jan 20 15:13:46 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:13:46 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:13:46 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:13:46.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:13:47 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:13:47 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:13:47 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:13:47.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:13:47 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:13:47.516 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5ffd4ac3-9266-4927-98ad-20a17782c725, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '59'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:13:47 compute-1 nova_compute[225855]: 2026-01-20 15:13:47.534 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:13:47 compute-1 ovn_controller[130490]: 2026-01-20T15:13:47Z|00088|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:a8:1d:e9 10.100.0.4
Jan 20 15:13:47 compute-1 ovn_controller[130490]: 2026-01-20T15:13:47Z|00089|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:a8:1d:e9 10.100.0.4
Jan 20 15:13:47 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/3855612625' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:13:47 compute-1 nova_compute[225855]: 2026-01-20 15:13:47.879 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:13:48 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e397 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:13:48 compute-1 ceph-mon[81775]: pgmap v2728: 321 pgs: 321 active+clean; 365 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 160 op/s
Jan 20 15:13:48 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:13:48 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:13:48 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:13:48.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:13:49 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:13:49 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:13:49 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:13:49.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:13:50 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:13:50 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:13:50 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:13:50.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:13:51 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:13:51 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:13:51 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:13:51.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:13:51 compute-1 ceph-mon[81775]: pgmap v2729: 321 pgs: 321 active+clean; 399 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 5.4 MiB/s wr, 271 op/s
Jan 20 15:13:52 compute-1 nova_compute[225855]: 2026-01-20 15:13:52.537 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:13:52 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:13:52 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:13:52 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:13:52.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:13:52 compute-1 nova_compute[225855]: 2026-01-20 15:13:52.880 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:13:53 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:13:53 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:13:53 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:13:53.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:13:53 compute-1 nova_compute[225855]: 2026-01-20 15:13:53.340 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:13:53 compute-1 nova_compute[225855]: 2026-01-20 15:13:53.340 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 15:13:53 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e397 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:13:53 compute-1 ceph-mon[81775]: pgmap v2730: 321 pgs: 321 active+clean; 404 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 6.0 MiB/s wr, 238 op/s
Jan 20 15:13:53 compute-1 nova_compute[225855]: 2026-01-20 15:13:53.637 225859 INFO nova.compute.manager [None req-f2b68e5e-7fe4-470f-8e46-d3b7873b26fc 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] Get console output
Jan 20 15:13:53 compute-1 nova_compute[225855]: 2026-01-20 15:13:53.642 263775 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Jan 20 15:13:54 compute-1 ceph-mon[81775]: pgmap v2731: 321 pgs: 321 active+clean; 405 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 6.1 MiB/s wr, 251 op/s
Jan 20 15:13:54 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:13:54 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:13:54 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:13:54.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:13:55 compute-1 podman[302476]: 2026-01-20 15:13:55.001920929 +0000 UTC m=+0.047256612 container health_status 533cd8e0c3b8b7f910f15cef49e7f879ab46c12ce646a35628c15f13725eb500 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 20 15:13:55 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:13:55 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:13:55 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:13:55.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:13:56 compute-1 nova_compute[225855]: 2026-01-20 15:13:56.340 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:13:56 compute-1 nova_compute[225855]: 2026-01-20 15:13:56.340 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 15:13:56 compute-1 nova_compute[225855]: 2026-01-20 15:13:56.341 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 20 15:13:56 compute-1 nova_compute[225855]: 2026-01-20 15:13:56.536 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "refresh_cache-128af7d9-155f-468d-9873-98c816f0df9e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:13:56 compute-1 nova_compute[225855]: 2026-01-20 15:13:56.536 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquired lock "refresh_cache-128af7d9-155f-468d-9873-98c816f0df9e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:13:56 compute-1 nova_compute[225855]: 2026-01-20 15:13:56.537 225859 DEBUG nova.network.neutron [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 20 15:13:56 compute-1 nova_compute[225855]: 2026-01-20 15:13:56.537 225859 DEBUG nova.objects.instance [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 128af7d9-155f-468d-9873-98c816f0df9e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:13:56 compute-1 nova_compute[225855]: 2026-01-20 15:13:56.640 225859 INFO nova.compute.manager [None req-ac2d34ea-47e2-41f0-9cf1-393c287456c8 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] Get console output
Jan 20 15:13:56 compute-1 nova_compute[225855]: 2026-01-20 15:13:56.644 263775 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Jan 20 15:13:56 compute-1 ceph-mon[81775]: pgmap v2732: 321 pgs: 321 active+clean; 405 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.3 MiB/s rd, 5.0 MiB/s wr, 262 op/s
Jan 20 15:13:56 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:13:56 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:13:56 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:13:56.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:13:57 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:13:57 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:13:57 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:13:57.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:13:57 compute-1 nova_compute[225855]: 2026-01-20 15:13:57.540 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:13:57 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/3414784352' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:13:57 compute-1 nova_compute[225855]: 2026-01-20 15:13:57.887 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:13:58 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e397 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:13:58 compute-1 ceph-mon[81775]: pgmap v2733: 321 pgs: 321 active+clean; 405 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 2.2 MiB/s wr, 213 op/s
Jan 20 15:13:58 compute-1 nova_compute[225855]: 2026-01-20 15:13:58.776 225859 DEBUG nova.network.neutron [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] Updating instance_info_cache with network_info: [{"id": "9de5453d-b548-429c-8fc2-7b012cb8ebdf", "address": "fa:16:3e:a8:1d:e9", "network": {"id": "d07527d3-7363-453c-9902-c562bab626ba", "bridge": "br-int", "label": "tempest-network-smoke--1250108698", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1ed5feeeafe7448a8efb47ab975b0ead", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9de5453d-b5", "ovs_interfaceid": "9de5453d-b548-429c-8fc2-7b012cb8ebdf", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:13:58 compute-1 nova_compute[225855]: 2026-01-20 15:13:58.799 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Releasing lock "refresh_cache-128af7d9-155f-468d-9873-98c816f0df9e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:13:58 compute-1 nova_compute[225855]: 2026-01-20 15:13:58.799 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 20 15:13:58 compute-1 nova_compute[225855]: 2026-01-20 15:13:58.799 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:13:58 compute-1 nova_compute[225855]: 2026-01-20 15:13:58.800 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:13:58 compute-1 nova_compute[225855]: 2026-01-20 15:13:58.800 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:13:58 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:13:58 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.003000086s ======
Jan 20 15:13:58 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:13:58.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000086s
Jan 20 15:13:59 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:13:59 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:13:59 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:13:59.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:13:59 compute-1 nova_compute[225855]: 2026-01-20 15:13:59.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:13:59 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/2197494371' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:14:00 compute-1 nova_compute[225855]: 2026-01-20 15:14:00.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:14:00 compute-1 nova_compute[225855]: 2026-01-20 15:14:00.367 225859 DEBUG oslo_concurrency.lockutils [None req-285d398b-bc35-41f1-80eb-34f9b43ae976 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] Acquiring lock "refresh_cache-128af7d9-155f-468d-9873-98c816f0df9e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:14:00 compute-1 nova_compute[225855]: 2026-01-20 15:14:00.368 225859 DEBUG oslo_concurrency.lockutils [None req-285d398b-bc35-41f1-80eb-34f9b43ae976 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] Acquired lock "refresh_cache-128af7d9-155f-468d-9873-98c816f0df9e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:14:00 compute-1 nova_compute[225855]: 2026-01-20 15:14:00.368 225859 DEBUG nova.network.neutron [None req-285d398b-bc35-41f1-80eb-34f9b43ae976 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 15:14:00 compute-1 nova_compute[225855]: 2026-01-20 15:14:00.397 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:14:00 compute-1 nova_compute[225855]: 2026-01-20 15:14:00.397 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:14:00 compute-1 nova_compute[225855]: 2026-01-20 15:14:00.397 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:14:00 compute-1 nova_compute[225855]: 2026-01-20 15:14:00.398 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 15:14:00 compute-1 nova_compute[225855]: 2026-01-20 15:14:00.399 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:14:00 compute-1 sudo[302517]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:14:00 compute-1 sudo[302517]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:14:00 compute-1 sudo[302517]: pam_unix(sudo:session): session closed for user root
Jan 20 15:14:00 compute-1 sudo[302542]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:14:00 compute-1 sudo[302542]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:14:00 compute-1 sudo[302542]: pam_unix(sudo:session): session closed for user root
Jan 20 15:14:00 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/3764944478' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:14:00 compute-1 ceph-mon[81775]: pgmap v2734: 321 pgs: 321 active+clean; 454 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 4.6 MiB/s wr, 279 op/s
Jan 20 15:14:00 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:14:00 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/916155642' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:14:00 compute-1 nova_compute[225855]: 2026-01-20 15:14:00.862 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:14:00 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:14:00 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:14:00 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:14:00.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:14:00 compute-1 nova_compute[225855]: 2026-01-20 15:14:00.947 225859 DEBUG nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] skipping disk for instance-000000b7 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 15:14:00 compute-1 nova_compute[225855]: 2026-01-20 15:14:00.947 225859 DEBUG nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] skipping disk for instance-000000b7 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 15:14:01 compute-1 nova_compute[225855]: 2026-01-20 15:14:01.161 225859 WARNING nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 15:14:01 compute-1 nova_compute[225855]: 2026-01-20 15:14:01.163 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4069MB free_disk=20.894081115722656GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 15:14:01 compute-1 nova_compute[225855]: 2026-01-20 15:14:01.163 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:14:01 compute-1 nova_compute[225855]: 2026-01-20 15:14:01.164 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:14:01 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:14:01 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.002000057s ======
Jan 20 15:14:01 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:14:01.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000057s
Jan 20 15:14:01 compute-1 nova_compute[225855]: 2026-01-20 15:14:01.356 225859 INFO nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] Updating resource usage from migration c8ea2eca-34f1-4b31-9699-90661d5995f9
Jan 20 15:14:01 compute-1 nova_compute[225855]: 2026-01-20 15:14:01.379 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Migration c8ea2eca-34f1-4b31-9699-90661d5995f9 is active on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1640
Jan 20 15:14:01 compute-1 nova_compute[225855]: 2026-01-20 15:14:01.379 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 15:14:01 compute-1 nova_compute[225855]: 2026-01-20 15:14:01.379 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 15:14:01 compute-1 nova_compute[225855]: 2026-01-20 15:14:01.475 225859 DEBUG nova.scheduler.client.report [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Refreshing inventories for resource provider bbb02880-a710-4ac1-8b2c-5c09765848d1 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 20 15:14:01 compute-1 nova_compute[225855]: 2026-01-20 15:14:01.550 225859 DEBUG nova.scheduler.client.report [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Updating ProviderTree inventory for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 20 15:14:01 compute-1 nova_compute[225855]: 2026-01-20 15:14:01.551 225859 DEBUG nova.compute.provider_tree [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Updating inventory in ProviderTree for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 20 15:14:01 compute-1 nova_compute[225855]: 2026-01-20 15:14:01.567 225859 DEBUG nova.scheduler.client.report [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Refreshing aggregate associations for resource provider bbb02880-a710-4ac1-8b2c-5c09765848d1, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 20 15:14:01 compute-1 nova_compute[225855]: 2026-01-20 15:14:01.591 225859 DEBUG nova.scheduler.client.report [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Refreshing trait associations for resource provider bbb02880-a710-4ac1-8b2c-5c09765848d1, traits: COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_STORAGE_BUS_SATA,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_TRUSTED_CERTS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSSE3,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_VOLUME_EXTEND,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_RESCUE_BFV,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_USB,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE42,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SOCKET_PCI_NUMA_AFFINITY _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 20 15:14:01 compute-1 nova_compute[225855]: 2026-01-20 15:14:01.627 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:14:01 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/916155642' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:14:01 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/3060118312' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:14:02 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:14:02 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1147698371' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:14:02 compute-1 nova_compute[225855]: 2026-01-20 15:14:02.073 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:14:02 compute-1 nova_compute[225855]: 2026-01-20 15:14:02.076 225859 DEBUG nova.network.neutron [None req-285d398b-bc35-41f1-80eb-34f9b43ae976 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] Updating instance_info_cache with network_info: [{"id": "9de5453d-b548-429c-8fc2-7b012cb8ebdf", "address": "fa:16:3e:a8:1d:e9", "network": {"id": "d07527d3-7363-453c-9902-c562bab626ba", "bridge": "br-int", "label": "tempest-network-smoke--1250108698", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1ed5feeeafe7448a8efb47ab975b0ead", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9de5453d-b5", "ovs_interfaceid": "9de5453d-b548-429c-8fc2-7b012cb8ebdf", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:14:02 compute-1 nova_compute[225855]: 2026-01-20 15:14:02.082 225859 DEBUG nova.compute.provider_tree [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:14:02 compute-1 nova_compute[225855]: 2026-01-20 15:14:02.101 225859 DEBUG oslo_concurrency.lockutils [None req-285d398b-bc35-41f1-80eb-34f9b43ae976 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] Releasing lock "refresh_cache-128af7d9-155f-468d-9873-98c816f0df9e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:14:02 compute-1 nova_compute[225855]: 2026-01-20 15:14:02.105 225859 DEBUG nova.scheduler.client.report [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:14:02 compute-1 nova_compute[225855]: 2026-01-20 15:14:02.263 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 15:14:02 compute-1 nova_compute[225855]: 2026-01-20 15:14:02.264 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.100s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:14:02 compute-1 nova_compute[225855]: 2026-01-20 15:14:02.320 225859 DEBUG nova.virt.libvirt.driver [None req-285d398b-bc35-41f1-80eb-34f9b43ae976 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] Starting migrate_disk_and_power_off migrate_disk_and_power_off /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11511
Jan 20 15:14:02 compute-1 nova_compute[225855]: 2026-01-20 15:14:02.321 225859 DEBUG nova.virt.libvirt.volume.remotefs [None req-285d398b-bc35-41f1-80eb-34f9b43ae976 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] Creating file /var/lib/nova/instances/128af7d9-155f-468d-9873-98c816f0df9e/3cf538d9a8b949a0901af2d58a3d01eb.tmp on remote host 192.168.122.100 create_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:79
Jan 20 15:14:02 compute-1 nova_compute[225855]: 2026-01-20 15:14:02.321 225859 DEBUG oslo_concurrency.processutils [None req-285d398b-bc35-41f1-80eb-34f9b43ae976 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] Running cmd (subprocess): ssh -o BatchMode=yes 192.168.122.100 touch /var/lib/nova/instances/128af7d9-155f-468d-9873-98c816f0df9e/3cf538d9a8b949a0901af2d58a3d01eb.tmp execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:14:02 compute-1 nova_compute[225855]: 2026-01-20 15:14:02.543 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:14:02 compute-1 nova_compute[225855]: 2026-01-20 15:14:02.795 225859 DEBUG oslo_concurrency.processutils [None req-285d398b-bc35-41f1-80eb-34f9b43ae976 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] CMD "ssh -o BatchMode=yes 192.168.122.100 touch /var/lib/nova/instances/128af7d9-155f-468d-9873-98c816f0df9e/3cf538d9a8b949a0901af2d58a3d01eb.tmp" returned: 1 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:14:02 compute-1 nova_compute[225855]: 2026-01-20 15:14:02.796 225859 DEBUG oslo_concurrency.processutils [None req-285d398b-bc35-41f1-80eb-34f9b43ae976 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] 'ssh -o BatchMode=yes 192.168.122.100 touch /var/lib/nova/instances/128af7d9-155f-468d-9873-98c816f0df9e/3cf538d9a8b949a0901af2d58a3d01eb.tmp' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
Jan 20 15:14:02 compute-1 nova_compute[225855]: 2026-01-20 15:14:02.796 225859 DEBUG nova.virt.libvirt.volume.remotefs [None req-285d398b-bc35-41f1-80eb-34f9b43ae976 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] Creating directory /var/lib/nova/instances/128af7d9-155f-468d-9873-98c816f0df9e on remote host 192.168.122.100 create_dir /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:91
Jan 20 15:14:02 compute-1 nova_compute[225855]: 2026-01-20 15:14:02.797 225859 DEBUG oslo_concurrency.processutils [None req-285d398b-bc35-41f1-80eb-34f9b43ae976 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] Running cmd (subprocess): ssh -o BatchMode=yes 192.168.122.100 mkdir -p /var/lib/nova/instances/128af7d9-155f-468d-9873-98c816f0df9e execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:14:02 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/1147698371' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:14:02 compute-1 ceph-mon[81775]: pgmap v2735: 321 pgs: 321 active+clean; 479 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 4.6 MiB/s wr, 183 op/s
Jan 20 15:14:02 compute-1 nova_compute[225855]: 2026-01-20 15:14:02.884 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:14:02 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:14:02 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:14:02 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:14:02.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:14:03 compute-1 nova_compute[225855]: 2026-01-20 15:14:03.029 225859 DEBUG oslo_concurrency.processutils [None req-285d398b-bc35-41f1-80eb-34f9b43ae976 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] CMD "ssh -o BatchMode=yes 192.168.122.100 mkdir -p /var/lib/nova/instances/128af7d9-155f-468d-9873-98c816f0df9e" returned: 0 in 0.233s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:14:03 compute-1 nova_compute[225855]: 2026-01-20 15:14:03.033 225859 DEBUG nova.virt.libvirt.driver [None req-285d398b-bc35-41f1-80eb-34f9b43ae976 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Jan 20 15:14:03 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:14:03 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:14:03 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:14:03.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:14:03 compute-1 nova_compute[225855]: 2026-01-20 15:14:03.340 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:14:03 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e397 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:14:03 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/1568540187' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:14:04 compute-1 nova_compute[225855]: 2026-01-20 15:14:04.354 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:14:04 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/2257608367' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:14:04 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/1278568393' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:14:04 compute-1 ceph-mon[81775]: pgmap v2736: 321 pgs: 321 active+clean; 481 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 169 op/s
Jan 20 15:14:04 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:14:04 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:14:04 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:14:04.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:14:05 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:14:05 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:14:05 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:14:05.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:14:05 compute-1 kernel: tap9de5453d-b5 (unregistering): left promiscuous mode
Jan 20 15:14:05 compute-1 NetworkManager[49104]: <info>  [1768922045.6308] device (tap9de5453d-b5): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 20 15:14:05 compute-1 ovn_controller[130490]: 2026-01-20T15:14:05Z|00812|binding|INFO|Releasing lport 9de5453d-b548-429c-8fc2-7b012cb8ebdf from this chassis (sb_readonly=0)
Jan 20 15:14:05 compute-1 ovn_controller[130490]: 2026-01-20T15:14:05Z|00813|binding|INFO|Setting lport 9de5453d-b548-429c-8fc2-7b012cb8ebdf down in Southbound
Jan 20 15:14:05 compute-1 ovn_controller[130490]: 2026-01-20T15:14:05Z|00814|binding|INFO|Removing iface tap9de5453d-b5 ovn-installed in OVS
Jan 20 15:14:05 compute-1 nova_compute[225855]: 2026-01-20 15:14:05.638 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:14:05 compute-1 nova_compute[225855]: 2026-01-20 15:14:05.640 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:14:05 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:14:05.647 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a8:1d:e9 10.100.0.4'], port_security=['fa:16:3e:a8:1d:e9 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '128af7d9-155f-468d-9873-98c816f0df9e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d07527d3-7363-453c-9902-c562bab626ba', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1ed5feeeafe7448a8efb47ab975b0ead', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b4895263-5fc5-4c5a-ab8d-547f570bc095', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com', 'neutron:port_fip': '192.168.122.233'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cf4d49b6-1d42-4171-8055-0d823fb37e66, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], logical_port=9de5453d-b548-429c-8fc2-7b012cb8ebdf) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:14:05 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:14:05.648 140354 INFO neutron.agent.ovn.metadata.agent [-] Port 9de5453d-b548-429c-8fc2-7b012cb8ebdf in datapath d07527d3-7363-453c-9902-c562bab626ba unbound from our chassis
Jan 20 15:14:05 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:14:05.650 140354 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d07527d3-7363-453c-9902-c562bab626ba, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 20 15:14:05 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:14:05.650 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[16852e8d-702b-4036-8ad3-bbfcee7441aa]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:14:05 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:14:05.651 140354 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-d07527d3-7363-453c-9902-c562bab626ba namespace which is not needed anymore
Jan 20 15:14:05 compute-1 nova_compute[225855]: 2026-01-20 15:14:05.657 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:14:05 compute-1 systemd[1]: machine-qemu\x2d96\x2dinstance\x2d000000b7.scope: Deactivated successfully.
Jan 20 15:14:05 compute-1 systemd[1]: machine-qemu\x2d96\x2dinstance\x2d000000b7.scope: Consumed 14.404s CPU time.
Jan 20 15:14:05 compute-1 systemd-machined[194361]: Machine qemu-96-instance-000000b7 terminated.
Jan 20 15:14:05 compute-1 nova_compute[225855]: 2026-01-20 15:14:05.884 225859 DEBUG nova.compute.manager [req-a1eda262-5399-4ec9-924e-5d5597919829 req-789f59f2-89ed-49d3-b1d9-b47f31b6876e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] Received event network-vif-unplugged-9de5453d-b548-429c-8fc2-7b012cb8ebdf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:14:05 compute-1 nova_compute[225855]: 2026-01-20 15:14:05.885 225859 DEBUG oslo_concurrency.lockutils [req-a1eda262-5399-4ec9-924e-5d5597919829 req-789f59f2-89ed-49d3-b1d9-b47f31b6876e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "128af7d9-155f-468d-9873-98c816f0df9e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:14:05 compute-1 nova_compute[225855]: 2026-01-20 15:14:05.885 225859 DEBUG oslo_concurrency.lockutils [req-a1eda262-5399-4ec9-924e-5d5597919829 req-789f59f2-89ed-49d3-b1d9-b47f31b6876e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "128af7d9-155f-468d-9873-98c816f0df9e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:14:05 compute-1 nova_compute[225855]: 2026-01-20 15:14:05.886 225859 DEBUG oslo_concurrency.lockutils [req-a1eda262-5399-4ec9-924e-5d5597919829 req-789f59f2-89ed-49d3-b1d9-b47f31b6876e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "128af7d9-155f-468d-9873-98c816f0df9e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:14:05 compute-1 nova_compute[225855]: 2026-01-20 15:14:05.886 225859 DEBUG nova.compute.manager [req-a1eda262-5399-4ec9-924e-5d5597919829 req-789f59f2-89ed-49d3-b1d9-b47f31b6876e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] No waiting events found dispatching network-vif-unplugged-9de5453d-b548-429c-8fc2-7b012cb8ebdf pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:14:05 compute-1 nova_compute[225855]: 2026-01-20 15:14:05.886 225859 WARNING nova.compute.manager [req-a1eda262-5399-4ec9-924e-5d5597919829 req-789f59f2-89ed-49d3-b1d9-b47f31b6876e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] Received unexpected event network-vif-unplugged-9de5453d-b548-429c-8fc2-7b012cb8ebdf for instance with vm_state active and task_state resize_migrating.
Jan 20 15:14:05 compute-1 neutron-haproxy-ovnmeta-d07527d3-7363-453c-9902-c562bab626ba[302147]: [NOTICE]   (302166) : haproxy version is 2.8.14-c23fe91
Jan 20 15:14:05 compute-1 neutron-haproxy-ovnmeta-d07527d3-7363-453c-9902-c562bab626ba[302147]: [NOTICE]   (302166) : path to executable is /usr/sbin/haproxy
Jan 20 15:14:05 compute-1 neutron-haproxy-ovnmeta-d07527d3-7363-453c-9902-c562bab626ba[302147]: [WARNING]  (302166) : Exiting Master process...
Jan 20 15:14:05 compute-1 neutron-haproxy-ovnmeta-d07527d3-7363-453c-9902-c562bab626ba[302147]: [WARNING]  (302166) : Exiting Master process...
Jan 20 15:14:05 compute-1 neutron-haproxy-ovnmeta-d07527d3-7363-453c-9902-c562bab626ba[302147]: [ALERT]    (302166) : Current worker (302169) exited with code 143 (Terminated)
Jan 20 15:14:05 compute-1 neutron-haproxy-ovnmeta-d07527d3-7363-453c-9902-c562bab626ba[302147]: [WARNING]  (302166) : All workers exited. Exiting... (0)
Jan 20 15:14:05 compute-1 systemd[1]: libpod-9e0f985d5a417f01c41231b07aa6e097d3d4253b45c9c2dcd897f5700b809155.scope: Deactivated successfully.
Jan 20 15:14:05 compute-1 podman[302622]: 2026-01-20 15:14:05.942883822 +0000 UTC m=+0.204330339 container died 9e0f985d5a417f01c41231b07aa6e097d3d4253b45c9c2dcd897f5700b809155 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d07527d3-7363-453c-9902-c562bab626ba, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 20 15:14:05 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9e0f985d5a417f01c41231b07aa6e097d3d4253b45c9c2dcd897f5700b809155-userdata-shm.mount: Deactivated successfully.
Jan 20 15:14:05 compute-1 systemd[1]: var-lib-containers-storage-overlay-19af57efcf5c610cf3b2724d9be132eb303f0d0071822b6d38a243dd05cdc13e-merged.mount: Deactivated successfully.
Jan 20 15:14:05 compute-1 podman[302622]: 2026-01-20 15:14:05.996935825 +0000 UTC m=+0.258382332 container cleanup 9e0f985d5a417f01c41231b07aa6e097d3d4253b45c9c2dcd897f5700b809155 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d07527d3-7363-453c-9902-c562bab626ba, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 15:14:06 compute-1 systemd[1]: libpod-conmon-9e0f985d5a417f01c41231b07aa6e097d3d4253b45c9c2dcd897f5700b809155.scope: Deactivated successfully.
Jan 20 15:14:06 compute-1 nova_compute[225855]: 2026-01-20 15:14:06.049 225859 INFO nova.virt.libvirt.driver [None req-285d398b-bc35-41f1-80eb-34f9b43ae976 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] Instance shutdown successfully after 3 seconds.
Jan 20 15:14:06 compute-1 nova_compute[225855]: 2026-01-20 15:14:06.054 225859 INFO nova.virt.libvirt.driver [-] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] Instance destroyed successfully.
Jan 20 15:14:06 compute-1 nova_compute[225855]: 2026-01-20 15:14:06.055 225859 DEBUG nova.virt.libvirt.vif [None req-285d398b-bc35-41f1-80eb-34f9b43ae976 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T15:13:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1998945962',display_name='tempest-TestNetworkAdvancedServerOps-server-1998945962',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1998945962',id=183,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHQtn9WZwWHpZ18eEtTY9zdPNbJgOayUdrvVmR1brDMxwKaiJ8tf9lOFdht6GjVy3Orpnh5Z5LatI7xEKad9rNtjFmwEczk5s4CmWp5ueE54bJ73h+pph+yq2VHvIP5rgg==',key_name='tempest-TestNetworkAdvancedServerOps-1645169738',keypairs=<?>,launch_index=0,launched_at=2026-01-20T15:13:35Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=MigrationContext,new_flavor=Flavor(1),node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='1ed5feeeafe7448a8efb47ab975b0ead',ramdisk_id='',reservation_id='r-mvt7thmt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-TestNetworkAdvancedServerOps-175282664',owner_user_name='tempest-TestNetworkAdvancedServerOps-175282664-project-member'},tags=<?>,task_state='resize_migrating',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T15:13:59Z,user_data=None,user_id='442a7a5cb8ea426a82be9762b262d171',uuid=128af7d9-155f-468d-9873-98c816f0df9e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9de5453d-b548-429c-8fc2-7b012cb8ebdf", "address": "fa:16:3e:a8:1d:e9", "network": {"id": "d07527d3-7363-453c-9902-c562bab626ba", "bridge": "br-int", "label": "tempest-network-smoke--1250108698", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-network-smoke--1250108698", "vif_mac": "fa:16:3e:a8:1d:e9"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1ed5feeeafe7448a8efb47ab975b0ead", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9de5453d-b5", "ovs_interfaceid": "9de5453d-b548-429c-8fc2-7b012cb8ebdf", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 20 15:14:06 compute-1 nova_compute[225855]: 2026-01-20 15:14:06.055 225859 DEBUG nova.network.os_vif_util [None req-285d398b-bc35-41f1-80eb-34f9b43ae976 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] Converting VIF {"id": "9de5453d-b548-429c-8fc2-7b012cb8ebdf", "address": "fa:16:3e:a8:1d:e9", "network": {"id": "d07527d3-7363-453c-9902-c562bab626ba", "bridge": "br-int", "label": "tempest-network-smoke--1250108698", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-network-smoke--1250108698", "vif_mac": "fa:16:3e:a8:1d:e9"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1ed5feeeafe7448a8efb47ab975b0ead", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9de5453d-b5", "ovs_interfaceid": "9de5453d-b548-429c-8fc2-7b012cb8ebdf", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 15:14:06 compute-1 nova_compute[225855]: 2026-01-20 15:14:06.056 225859 DEBUG nova.network.os_vif_util [None req-285d398b-bc35-41f1-80eb-34f9b43ae976 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:a8:1d:e9,bridge_name='br-int',has_traffic_filtering=True,id=9de5453d-b548-429c-8fc2-7b012cb8ebdf,network=Network(d07527d3-7363-453c-9902-c562bab626ba),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9de5453d-b5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 15:14:06 compute-1 nova_compute[225855]: 2026-01-20 15:14:06.057 225859 DEBUG os_vif [None req-285d398b-bc35-41f1-80eb-34f9b43ae976 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:a8:1d:e9,bridge_name='br-int',has_traffic_filtering=True,id=9de5453d-b548-429c-8fc2-7b012cb8ebdf,network=Network(d07527d3-7363-453c-9902-c562bab626ba),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9de5453d-b5') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 20 15:14:06 compute-1 nova_compute[225855]: 2026-01-20 15:14:06.058 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:14:06 compute-1 nova_compute[225855]: 2026-01-20 15:14:06.059 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9de5453d-b5, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:14:06 compute-1 nova_compute[225855]: 2026-01-20 15:14:06.095 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:14:06 compute-1 podman[302664]: 2026-01-20 15:14:06.096148241 +0000 UTC m=+0.078499259 container remove 9e0f985d5a417f01c41231b07aa6e097d3d4253b45c9c2dcd897f5700b809155 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d07527d3-7363-453c-9902-c562bab626ba, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Jan 20 15:14:06 compute-1 nova_compute[225855]: 2026-01-20 15:14:06.098 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 15:14:06 compute-1 nova_compute[225855]: 2026-01-20 15:14:06.100 225859 INFO os_vif [None req-285d398b-bc35-41f1-80eb-34f9b43ae976 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:a8:1d:e9,bridge_name='br-int',has_traffic_filtering=True,id=9de5453d-b548-429c-8fc2-7b012cb8ebdf,network=Network(d07527d3-7363-453c-9902-c562bab626ba),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9de5453d-b5')
Jan 20 15:14:06 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:14:06.101 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[825b913a-805c-4fe6-a7c5-a3e5b6ca50aa]: (4, ('Tue Jan 20 03:14:05 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-d07527d3-7363-453c-9902-c562bab626ba (9e0f985d5a417f01c41231b07aa6e097d3d4253b45c9c2dcd897f5700b809155)\n9e0f985d5a417f01c41231b07aa6e097d3d4253b45c9c2dcd897f5700b809155\nTue Jan 20 03:14:06 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-d07527d3-7363-453c-9902-c562bab626ba (9e0f985d5a417f01c41231b07aa6e097d3d4253b45c9c2dcd897f5700b809155)\n9e0f985d5a417f01c41231b07aa6e097d3d4253b45c9c2dcd897f5700b809155\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:14:06 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:14:06.102 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[f76bce96-4d3e-43da-977d-4c6d62521cdd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:14:06 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:14:06.103 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd07527d3-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:14:06 compute-1 nova_compute[225855]: 2026-01-20 15:14:06.104 225859 DEBUG nova.virt.libvirt.driver [None req-285d398b-bc35-41f1-80eb-34f9b43ae976 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] skipping disk for instance-000000b7 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 15:14:06 compute-1 nova_compute[225855]: 2026-01-20 15:14:06.104 225859 DEBUG nova.virt.libvirt.driver [None req-285d398b-bc35-41f1-80eb-34f9b43ae976 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] skipping disk for instance-000000b7 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 15:14:06 compute-1 kernel: tapd07527d3-70: left promiscuous mode
Jan 20 15:14:06 compute-1 nova_compute[225855]: 2026-01-20 15:14:06.109 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:14:06 compute-1 nova_compute[225855]: 2026-01-20 15:14:06.118 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:14:06 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:14:06.122 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[70092df2-6b4f-4f0d-a652-01e56dca6ada]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:14:06 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:14:06.137 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[19e707b0-7574-40dd-84d2-e28f6d1eb07b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:14:06 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:14:06.138 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[04fc1234-ad2c-4ca1-b3eb-02d9c8fdb91d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:14:06 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:14:06.153 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[b0c186a9-4709-4f54-b1d1-0a426fd0050a]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 705300, 'reachable_time': 37664, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 302679, 'error': None, 'target': 'ovnmeta-d07527d3-7363-453c-9902-c562bab626ba', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:14:06 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:14:06.156 140466 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-d07527d3-7363-453c-9902-c562bab626ba deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 20 15:14:06 compute-1 systemd[1]: run-netns-ovnmeta\x2dd07527d3\x2d7363\x2d453c\x2d9902\x2dc562bab626ba.mount: Deactivated successfully.
Jan 20 15:14:06 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:14:06.156 140466 DEBUG oslo.privsep.daemon [-] privsep: reply[76c6007f-f179-4ca2-ad03-617c5876f22f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:14:06 compute-1 nova_compute[225855]: 2026-01-20 15:14:06.299 225859 DEBUG neutronclient.v2_0.client [None req-285d398b-bc35-41f1-80eb-34f9b43ae976 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] Error message: {"NeutronError": {"type": "PortBindingNotFound", "message": "Binding for port 9de5453d-b548-429c-8fc2-7b012cb8ebdf for host compute-0.ctlplane.example.com could not be found.", "detail": ""}} _handle_fault_response /usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py:262
Jan 20 15:14:06 compute-1 nova_compute[225855]: 2026-01-20 15:14:06.334 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:14:06 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/3350738824' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:14:06 compute-1 nova_compute[225855]: 2026-01-20 15:14:06.416 225859 DEBUG oslo_concurrency.lockutils [None req-285d398b-bc35-41f1-80eb-34f9b43ae976 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] Acquiring lock "128af7d9-155f-468d-9873-98c816f0df9e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:14:06 compute-1 nova_compute[225855]: 2026-01-20 15:14:06.416 225859 DEBUG oslo_concurrency.lockutils [None req-285d398b-bc35-41f1-80eb-34f9b43ae976 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] Lock "128af7d9-155f-468d-9873-98c816f0df9e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:14:06 compute-1 nova_compute[225855]: 2026-01-20 15:14:06.416 225859 DEBUG oslo_concurrency.lockutils [None req-285d398b-bc35-41f1-80eb-34f9b43ae976 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] Lock "128af7d9-155f-468d-9873-98c816f0df9e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:14:06 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:14:06 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:14:06 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:14:06.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:14:07 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:14:07 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:14:07 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:14:07.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:14:07 compute-1 ceph-mon[81775]: pgmap v2737: 321 pgs: 321 active+clean; 484 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 4.0 MiB/s wr, 168 op/s
Jan 20 15:14:07 compute-1 nova_compute[225855]: 2026-01-20 15:14:07.691 225859 DEBUG nova.compute.manager [req-6c55cbfe-8bca-4b99-b488-8c07175e349c req-10bdd897-4ba8-49bf-b9ce-254d974383ed 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] Received event network-changed-9de5453d-b548-429c-8fc2-7b012cb8ebdf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:14:07 compute-1 nova_compute[225855]: 2026-01-20 15:14:07.691 225859 DEBUG nova.compute.manager [req-6c55cbfe-8bca-4b99-b488-8c07175e349c req-10bdd897-4ba8-49bf-b9ce-254d974383ed 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] Refreshing instance network info cache due to event network-changed-9de5453d-b548-429c-8fc2-7b012cb8ebdf. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 15:14:07 compute-1 nova_compute[225855]: 2026-01-20 15:14:07.691 225859 DEBUG oslo_concurrency.lockutils [req-6c55cbfe-8bca-4b99-b488-8c07175e349c req-10bdd897-4ba8-49bf-b9ce-254d974383ed 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-128af7d9-155f-468d-9873-98c816f0df9e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:14:07 compute-1 nova_compute[225855]: 2026-01-20 15:14:07.691 225859 DEBUG oslo_concurrency.lockutils [req-6c55cbfe-8bca-4b99-b488-8c07175e349c req-10bdd897-4ba8-49bf-b9ce-254d974383ed 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-128af7d9-155f-468d-9873-98c816f0df9e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:14:07 compute-1 nova_compute[225855]: 2026-01-20 15:14:07.692 225859 DEBUG nova.network.neutron [req-6c55cbfe-8bca-4b99-b488-8c07175e349c req-10bdd897-4ba8-49bf-b9ce-254d974383ed 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] Refreshing network info cache for port 9de5453d-b548-429c-8fc2-7b012cb8ebdf _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 15:14:07 compute-1 nova_compute[225855]: 2026-01-20 15:14:07.886 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:14:08 compute-1 nova_compute[225855]: 2026-01-20 15:14:08.003 225859 DEBUG nova.compute.manager [req-061373c2-4af0-43d6-9bcd-2aa411425f8d req-dd2b9761-cc89-488c-94cf-e4aef802ec64 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] Received event network-vif-plugged-9de5453d-b548-429c-8fc2-7b012cb8ebdf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:14:08 compute-1 nova_compute[225855]: 2026-01-20 15:14:08.003 225859 DEBUG oslo_concurrency.lockutils [req-061373c2-4af0-43d6-9bcd-2aa411425f8d req-dd2b9761-cc89-488c-94cf-e4aef802ec64 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "128af7d9-155f-468d-9873-98c816f0df9e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:14:08 compute-1 nova_compute[225855]: 2026-01-20 15:14:08.004 225859 DEBUG oslo_concurrency.lockutils [req-061373c2-4af0-43d6-9bcd-2aa411425f8d req-dd2b9761-cc89-488c-94cf-e4aef802ec64 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "128af7d9-155f-468d-9873-98c816f0df9e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:14:08 compute-1 nova_compute[225855]: 2026-01-20 15:14:08.004 225859 DEBUG oslo_concurrency.lockutils [req-061373c2-4af0-43d6-9bcd-2aa411425f8d req-dd2b9761-cc89-488c-94cf-e4aef802ec64 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "128af7d9-155f-468d-9873-98c816f0df9e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:14:08 compute-1 nova_compute[225855]: 2026-01-20 15:14:08.004 225859 DEBUG nova.compute.manager [req-061373c2-4af0-43d6-9bcd-2aa411425f8d req-dd2b9761-cc89-488c-94cf-e4aef802ec64 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] No waiting events found dispatching network-vif-plugged-9de5453d-b548-429c-8fc2-7b012cb8ebdf pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:14:08 compute-1 nova_compute[225855]: 2026-01-20 15:14:08.004 225859 WARNING nova.compute.manager [req-061373c2-4af0-43d6-9bcd-2aa411425f8d req-dd2b9761-cc89-488c-94cf-e4aef802ec64 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] Received unexpected event network-vif-plugged-9de5453d-b548-429c-8fc2-7b012cb8ebdf for instance with vm_state active and task_state resize_migrated.
Jan 20 15:14:08 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e397 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:14:08 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:14:08 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:14:08 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:14:08.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:14:08 compute-1 nova_compute[225855]: 2026-01-20 15:14:08.994 225859 DEBUG nova.network.neutron [req-6c55cbfe-8bca-4b99-b488-8c07175e349c req-10bdd897-4ba8-49bf-b9ce-254d974383ed 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] Updated VIF entry in instance network info cache for port 9de5453d-b548-429c-8fc2-7b012cb8ebdf. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 15:14:08 compute-1 nova_compute[225855]: 2026-01-20 15:14:08.995 225859 DEBUG nova.network.neutron [req-6c55cbfe-8bca-4b99-b488-8c07175e349c req-10bdd897-4ba8-49bf-b9ce-254d974383ed 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] Updating instance_info_cache with network_info: [{"id": "9de5453d-b548-429c-8fc2-7b012cb8ebdf", "address": "fa:16:3e:a8:1d:e9", "network": {"id": "d07527d3-7363-453c-9902-c562bab626ba", "bridge": "br-int", "label": "tempest-network-smoke--1250108698", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1ed5feeeafe7448a8efb47ab975b0ead", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9de5453d-b5", "ovs_interfaceid": "9de5453d-b548-429c-8fc2-7b012cb8ebdf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:14:09 compute-1 nova_compute[225855]: 2026-01-20 15:14:09.013 225859 DEBUG oslo_concurrency.lockutils [req-6c55cbfe-8bca-4b99-b488-8c07175e349c req-10bdd897-4ba8-49bf-b9ce-254d974383ed 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-128af7d9-155f-468d-9873-98c816f0df9e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:14:09 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:14:09 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:14:09 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:14:09.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:14:09 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e398 e398: 3 total, 3 up, 3 in
Jan 20 15:14:09 compute-1 ceph-mon[81775]: pgmap v2738: 321 pgs: 321 active+clean; 484 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 728 KiB/s rd, 4.0 MiB/s wr, 129 op/s
Jan 20 15:14:10 compute-1 ceph-mon[81775]: osdmap e398: 3 total, 3 up, 3 in
Jan 20 15:14:10 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/469366976' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:14:10 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:14:10 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:14:10 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:14:10.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:14:11 compute-1 podman[302683]: 2026-01-20 15:14:11.061757452 +0000 UTC m=+0.115397664 container health_status 72be95c12e041eb2cd1a16ad70dfa1391d9f03a5f3180a614ead9459dcfcd46f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, tcib_managed=true, config_id=ovn_controller, 
container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Jan 20 15:14:11 compute-1 nova_compute[225855]: 2026-01-20 15:14:11.096 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:14:11 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:14:11 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:14:11 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:14:11.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:14:11 compute-1 ceph-mon[81775]: pgmap v2740: 321 pgs: 321 active+clean; 486 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.9 MiB/s wr, 161 op/s
Jan 20 15:14:11 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/3331031870' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:14:11 compute-1 nova_compute[225855]: 2026-01-20 15:14:11.928 225859 DEBUG nova.compute.manager [req-93ecd443-7114-407f-9823-5fa6636357e0 req-4295da27-e483-4903-be84-8847c1316b75 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] Received event network-vif-plugged-9de5453d-b548-429c-8fc2-7b012cb8ebdf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:14:11 compute-1 nova_compute[225855]: 2026-01-20 15:14:11.928 225859 DEBUG oslo_concurrency.lockutils [req-93ecd443-7114-407f-9823-5fa6636357e0 req-4295da27-e483-4903-be84-8847c1316b75 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "128af7d9-155f-468d-9873-98c816f0df9e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:14:11 compute-1 nova_compute[225855]: 2026-01-20 15:14:11.928 225859 DEBUG oslo_concurrency.lockutils [req-93ecd443-7114-407f-9823-5fa6636357e0 req-4295da27-e483-4903-be84-8847c1316b75 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "128af7d9-155f-468d-9873-98c816f0df9e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:14:11 compute-1 nova_compute[225855]: 2026-01-20 15:14:11.928 225859 DEBUG oslo_concurrency.lockutils [req-93ecd443-7114-407f-9823-5fa6636357e0 req-4295da27-e483-4903-be84-8847c1316b75 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "128af7d9-155f-468d-9873-98c816f0df9e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:14:11 compute-1 nova_compute[225855]: 2026-01-20 15:14:11.928 225859 DEBUG nova.compute.manager [req-93ecd443-7114-407f-9823-5fa6636357e0 req-4295da27-e483-4903-be84-8847c1316b75 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] No waiting events found dispatching network-vif-plugged-9de5453d-b548-429c-8fc2-7b012cb8ebdf pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:14:11 compute-1 nova_compute[225855]: 2026-01-20 15:14:11.929 225859 WARNING nova.compute.manager [req-93ecd443-7114-407f-9823-5fa6636357e0 req-4295da27-e483-4903-be84-8847c1316b75 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] Received unexpected event network-vif-plugged-9de5453d-b548-429c-8fc2-7b012cb8ebdf for instance with vm_state resized and task_state None.
Jan 20 15:14:12 compute-1 ceph-mon[81775]: pgmap v2741: 321 pgs: 321 active+clean; 486 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 108 KiB/s wr, 182 op/s
Jan 20 15:14:12 compute-1 nova_compute[225855]: 2026-01-20 15:14:12.888 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:14:12 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:14:12 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:14:12 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:14:12.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:14:13 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:14:13 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:14:13 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:14:13.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:14:13 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e398 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:14:13 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/1373685837' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:14:13 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/1373685837' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:14:14 compute-1 nova_compute[225855]: 2026-01-20 15:14:14.030 225859 DEBUG nova.compute.manager [req-312b01be-d4e9-4906-9088-3b048ff4cb74 req-4f600fd2-4dcf-4a37-a270-b33776175ea4 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] Received event network-vif-plugged-9de5453d-b548-429c-8fc2-7b012cb8ebdf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:14:14 compute-1 nova_compute[225855]: 2026-01-20 15:14:14.031 225859 DEBUG oslo_concurrency.lockutils [req-312b01be-d4e9-4906-9088-3b048ff4cb74 req-4f600fd2-4dcf-4a37-a270-b33776175ea4 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "128af7d9-155f-468d-9873-98c816f0df9e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:14:14 compute-1 nova_compute[225855]: 2026-01-20 15:14:14.031 225859 DEBUG oslo_concurrency.lockutils [req-312b01be-d4e9-4906-9088-3b048ff4cb74 req-4f600fd2-4dcf-4a37-a270-b33776175ea4 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "128af7d9-155f-468d-9873-98c816f0df9e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:14:14 compute-1 nova_compute[225855]: 2026-01-20 15:14:14.031 225859 DEBUG oslo_concurrency.lockutils [req-312b01be-d4e9-4906-9088-3b048ff4cb74 req-4f600fd2-4dcf-4a37-a270-b33776175ea4 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "128af7d9-155f-468d-9873-98c816f0df9e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:14:14 compute-1 nova_compute[225855]: 2026-01-20 15:14:14.031 225859 DEBUG nova.compute.manager [req-312b01be-d4e9-4906-9088-3b048ff4cb74 req-4f600fd2-4dcf-4a37-a270-b33776175ea4 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] No waiting events found dispatching network-vif-plugged-9de5453d-b548-429c-8fc2-7b012cb8ebdf pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:14:14 compute-1 nova_compute[225855]: 2026-01-20 15:14:14.031 225859 WARNING nova.compute.manager [req-312b01be-d4e9-4906-9088-3b048ff4cb74 req-4f600fd2-4dcf-4a37-a270-b33776175ea4 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] Received unexpected event network-vif-plugged-9de5453d-b548-429c-8fc2-7b012cb8ebdf for instance with vm_state resized and task_state resize_reverting.
Jan 20 15:14:14 compute-1 ceph-mon[81775]: pgmap v2742: 321 pgs: 321 active+clean; 486 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 93 KiB/s wr, 201 op/s
Jan 20 15:14:14 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/3444125027' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:14:14 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:14:14 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:14:14 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:14:14.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:14:15 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:14:15 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:14:15 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:14:15.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:14:16 compute-1 nova_compute[225855]: 2026-01-20 15:14:16.121 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:14:16 compute-1 nova_compute[225855]: 2026-01-20 15:14:16.143 225859 DEBUG nova.compute.manager [req-640f54c9-ef6f-4b56-a54a-7f6781e5e70a req-acf46d5b-f416-447d-a5de-eb042192bc60 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] Received event network-vif-unplugged-9de5453d-b548-429c-8fc2-7b012cb8ebdf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:14:16 compute-1 nova_compute[225855]: 2026-01-20 15:14:16.144 225859 DEBUG oslo_concurrency.lockutils [req-640f54c9-ef6f-4b56-a54a-7f6781e5e70a req-acf46d5b-f416-447d-a5de-eb042192bc60 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "128af7d9-155f-468d-9873-98c816f0df9e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:14:16 compute-1 nova_compute[225855]: 2026-01-20 15:14:16.144 225859 DEBUG oslo_concurrency.lockutils [req-640f54c9-ef6f-4b56-a54a-7f6781e5e70a req-acf46d5b-f416-447d-a5de-eb042192bc60 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "128af7d9-155f-468d-9873-98c816f0df9e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:14:16 compute-1 nova_compute[225855]: 2026-01-20 15:14:16.144 225859 DEBUG oslo_concurrency.lockutils [req-640f54c9-ef6f-4b56-a54a-7f6781e5e70a req-acf46d5b-f416-447d-a5de-eb042192bc60 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "128af7d9-155f-468d-9873-98c816f0df9e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:14:16 compute-1 nova_compute[225855]: 2026-01-20 15:14:16.145 225859 DEBUG nova.compute.manager [req-640f54c9-ef6f-4b56-a54a-7f6781e5e70a req-acf46d5b-f416-447d-a5de-eb042192bc60 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] No waiting events found dispatching network-vif-unplugged-9de5453d-b548-429c-8fc2-7b012cb8ebdf pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:14:16 compute-1 nova_compute[225855]: 2026-01-20 15:14:16.145 225859 WARNING nova.compute.manager [req-640f54c9-ef6f-4b56-a54a-7f6781e5e70a req-acf46d5b-f416-447d-a5de-eb042192bc60 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] Received unexpected event network-vif-unplugged-9de5453d-b548-429c-8fc2-7b012cb8ebdf for instance with vm_state resized and task_state resize_reverting.
Jan 20 15:14:16 compute-1 nova_compute[225855]: 2026-01-20 15:14:16.146 225859 DEBUG nova.compute.manager [req-640f54c9-ef6f-4b56-a54a-7f6781e5e70a req-acf46d5b-f416-447d-a5de-eb042192bc60 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] Received event network-vif-plugged-9de5453d-b548-429c-8fc2-7b012cb8ebdf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:14:16 compute-1 nova_compute[225855]: 2026-01-20 15:14:16.146 225859 DEBUG oslo_concurrency.lockutils [req-640f54c9-ef6f-4b56-a54a-7f6781e5e70a req-acf46d5b-f416-447d-a5de-eb042192bc60 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "128af7d9-155f-468d-9873-98c816f0df9e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:14:16 compute-1 nova_compute[225855]: 2026-01-20 15:14:16.146 225859 DEBUG oslo_concurrency.lockutils [req-640f54c9-ef6f-4b56-a54a-7f6781e5e70a req-acf46d5b-f416-447d-a5de-eb042192bc60 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "128af7d9-155f-468d-9873-98c816f0df9e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:14:16 compute-1 nova_compute[225855]: 2026-01-20 15:14:16.147 225859 DEBUG oslo_concurrency.lockutils [req-640f54c9-ef6f-4b56-a54a-7f6781e5e70a req-acf46d5b-f416-447d-a5de-eb042192bc60 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "128af7d9-155f-468d-9873-98c816f0df9e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:14:16 compute-1 nova_compute[225855]: 2026-01-20 15:14:16.147 225859 DEBUG nova.compute.manager [req-640f54c9-ef6f-4b56-a54a-7f6781e5e70a req-acf46d5b-f416-447d-a5de-eb042192bc60 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] No waiting events found dispatching network-vif-plugged-9de5453d-b548-429c-8fc2-7b012cb8ebdf pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:14:16 compute-1 nova_compute[225855]: 2026-01-20 15:14:16.147 225859 WARNING nova.compute.manager [req-640f54c9-ef6f-4b56-a54a-7f6781e5e70a req-acf46d5b-f416-447d-a5de-eb042192bc60 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] Received unexpected event network-vif-plugged-9de5453d-b548-429c-8fc2-7b012cb8ebdf for instance with vm_state resized and task_state resize_reverting.
Jan 20 15:14:16 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:14:16 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3848990267' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:14:16 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/3848990267' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:14:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:14:16.436 140354 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:14:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:14:16.436 140354 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:14:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:14:16.437 140354 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:14:16 compute-1 nova_compute[225855]: 2026-01-20 15:14:16.486 225859 INFO nova.compute.manager [None req-ee866e29-0f1e-4cbf-b426-74bd65a7b594 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] Swapping old allocation on dict_keys(['bbb02880-a710-4ac1-8b2c-5c09765848d1']) held by migration c8ea2eca-34f1-4b31-9699-90661d5995f9 for instance
Jan 20 15:14:16 compute-1 nova_compute[225855]: 2026-01-20 15:14:16.531 225859 DEBUG nova.scheduler.client.report [None req-ee866e29-0f1e-4cbf-b426-74bd65a7b594 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Overwriting current allocation {'allocations': {'068db7fd-4bd6-45a9-8bd6-a22cfe7596ed': {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}, 'generation': 90}}, 'project_id': '1ed5feeeafe7448a8efb47ab975b0ead', 'user_id': '442a7a5cb8ea426a82be9762b262d171', 'consumer_generation': 1} on consumer 128af7d9-155f-468d-9873-98c816f0df9e move_allocations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:2018
Jan 20 15:14:16 compute-1 nova_compute[225855]: 2026-01-20 15:14:16.710 225859 INFO nova.network.neutron [None req-ee866e29-0f1e-4cbf-b426-74bd65a7b594 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] Updating port 9de5453d-b548-429c-8fc2-7b012cb8ebdf with attributes {'binding:host_id': 'compute-1.ctlplane.example.com', 'device_owner': 'compute:nova'}
Jan 20 15:14:16 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:14:16 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:14:16 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:14:16.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:14:17 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:14:17 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:14:17 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:14:17.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:14:17 compute-1 nova_compute[225855]: 2026-01-20 15:14:17.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:14:17 compute-1 nova_compute[225855]: 2026-01-20 15:14:17.339 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 20 15:14:17 compute-1 nova_compute[225855]: 2026-01-20 15:14:17.688 225859 DEBUG oslo_concurrency.lockutils [None req-ee866e29-0f1e-4cbf-b426-74bd65a7b594 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Acquiring lock "refresh_cache-128af7d9-155f-468d-9873-98c816f0df9e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:14:17 compute-1 nova_compute[225855]: 2026-01-20 15:14:17.688 225859 DEBUG oslo_concurrency.lockutils [None req-ee866e29-0f1e-4cbf-b426-74bd65a7b594 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Acquired lock "refresh_cache-128af7d9-155f-468d-9873-98c816f0df9e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:14:17 compute-1 nova_compute[225855]: 2026-01-20 15:14:17.689 225859 DEBUG nova.network.neutron [None req-ee866e29-0f1e-4cbf-b426-74bd65a7b594 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 15:14:17 compute-1 nova_compute[225855]: 2026-01-20 15:14:17.821 225859 DEBUG nova.compute.manager [req-5c8a6c42-d363-41d1-993b-3458673b597a req-e041dbdf-95b4-4d3f-a85c-5f3c6d6d7d01 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] Received event network-changed-9de5453d-b548-429c-8fc2-7b012cb8ebdf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:14:17 compute-1 nova_compute[225855]: 2026-01-20 15:14:17.821 225859 DEBUG nova.compute.manager [req-5c8a6c42-d363-41d1-993b-3458673b597a req-e041dbdf-95b4-4d3f-a85c-5f3c6d6d7d01 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] Refreshing instance network info cache due to event network-changed-9de5453d-b548-429c-8fc2-7b012cb8ebdf. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 15:14:17 compute-1 nova_compute[225855]: 2026-01-20 15:14:17.822 225859 DEBUG oslo_concurrency.lockutils [req-5c8a6c42-d363-41d1-993b-3458673b597a req-e041dbdf-95b4-4d3f-a85c-5f3c6d6d7d01 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-128af7d9-155f-468d-9873-98c816f0df9e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:14:17 compute-1 nova_compute[225855]: 2026-01-20 15:14:17.890 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:14:18 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e398 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:14:18 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:14:18 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:14:18 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:14:18.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:14:18 compute-1 ceph-mon[81775]: pgmap v2743: 321 pgs: 321 active+clean; 486 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 4.8 MiB/s rd, 24 KiB/s wr, 217 op/s
Jan 20 15:14:19 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:14:19 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:14:19 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:14:19.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:14:20 compute-1 nova_compute[225855]: 2026-01-20 15:14:20.072 225859 DEBUG nova.network.neutron [None req-ee866e29-0f1e-4cbf-b426-74bd65a7b594 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] Updating instance_info_cache with network_info: [{"id": "9de5453d-b548-429c-8fc2-7b012cb8ebdf", "address": "fa:16:3e:a8:1d:e9", "network": {"id": "d07527d3-7363-453c-9902-c562bab626ba", "bridge": "br-int", "label": "tempest-network-smoke--1250108698", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1ed5feeeafe7448a8efb47ab975b0ead", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9de5453d-b5", "ovs_interfaceid": "9de5453d-b548-429c-8fc2-7b012cb8ebdf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:14:20 compute-1 nova_compute[225855]: 2026-01-20 15:14:20.097 225859 DEBUG oslo_concurrency.lockutils [None req-ee866e29-0f1e-4cbf-b426-74bd65a7b594 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Releasing lock "refresh_cache-128af7d9-155f-468d-9873-98c816f0df9e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:14:20 compute-1 nova_compute[225855]: 2026-01-20 15:14:20.098 225859 DEBUG nova.virt.libvirt.driver [None req-ee866e29-0f1e-4cbf-b426-74bd65a7b594 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] Starting finish_revert_migration finish_revert_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11843
Jan 20 15:14:20 compute-1 nova_compute[225855]: 2026-01-20 15:14:20.134 225859 DEBUG oslo_concurrency.lockutils [req-5c8a6c42-d363-41d1-993b-3458673b597a req-e041dbdf-95b4-4d3f-a85c-5f3c6d6d7d01 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-128af7d9-155f-468d-9873-98c816f0df9e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:14:20 compute-1 nova_compute[225855]: 2026-01-20 15:14:20.135 225859 DEBUG nova.network.neutron [req-5c8a6c42-d363-41d1-993b-3458673b597a req-e041dbdf-95b4-4d3f-a85c-5f3c6d6d7d01 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] Refreshing network info cache for port 9de5453d-b548-429c-8fc2-7b012cb8ebdf _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 15:14:20 compute-1 nova_compute[225855]: 2026-01-20 15:14:20.182 225859 DEBUG nova.storage.rbd_utils [None req-ee866e29-0f1e-4cbf-b426-74bd65a7b594 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] rolling back rbd image(128af7d9-155f-468d-9873-98c816f0df9e_disk) to snapshot(nova-resize) rollback_to_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:505
Jan 20 15:14:20 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/2888658016' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:14:20 compute-1 ceph-mon[81775]: pgmap v2744: 321 pgs: 321 active+clean; 486 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 4.8 MiB/s rd, 24 KiB/s wr, 217 op/s
Jan 20 15:14:20 compute-1 nova_compute[225855]: 2026-01-20 15:14:20.317 225859 DEBUG nova.storage.rbd_utils [None req-ee866e29-0f1e-4cbf-b426-74bd65a7b594 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] removing snapshot(nova-resize) on rbd image(128af7d9-155f-468d-9873-98c816f0df9e_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Jan 20 15:14:20 compute-1 sudo[302770]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:14:20 compute-1 sudo[302770]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:14:20 compute-1 sudo[302770]: pam_unix(sudo:session): session closed for user root
Jan 20 15:14:20 compute-1 nova_compute[225855]: 2026-01-20 15:14:20.870 225859 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1768922045.8693986, 128af7d9-155f-468d-9873-98c816f0df9e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:14:20 compute-1 nova_compute[225855]: 2026-01-20 15:14:20.871 225859 INFO nova.compute.manager [-] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] VM Stopped (Lifecycle Event)
Jan 20 15:14:20 compute-1 nova_compute[225855]: 2026-01-20 15:14:20.896 225859 DEBUG nova.compute.manager [None req-f2cf59cd-4a92-4229-92a5-223081d9574d - - - - - -] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:14:20 compute-1 sudo[302795]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:14:20 compute-1 nova_compute[225855]: 2026-01-20 15:14:20.899 225859 DEBUG nova.compute.manager [None req-f2cf59cd-4a92-4229-92a5-223081d9574d - - - - - -] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] Synchronizing instance power state after lifecycle event "Stopped"; current vm_state: resized, current task_state: resize_reverting, current DB power_state: 1, VM power_state: 4 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 15:14:20 compute-1 sudo[302795]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:14:20 compute-1 sudo[302795]: pam_unix(sudo:session): session closed for user root
Jan 20 15:14:20 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:14:20 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:14:20 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:14:20.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:14:20 compute-1 nova_compute[225855]: 2026-01-20 15:14:20.924 225859 INFO nova.compute.manager [None req-f2cf59cd-4a92-4229-92a5-223081d9574d - - - - - -] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] During sync_power_state the instance has a pending task (resize_reverting). Skip.
Jan 20 15:14:21 compute-1 nova_compute[225855]: 2026-01-20 15:14:21.123 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:14:21 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e399 e399: 3 total, 3 up, 3 in
Jan 20 15:14:21 compute-1 ceph-mon[81775]: pgmap v2745: 321 pgs: 321 active+clean; 501 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 4.3 MiB/s rd, 1.4 MiB/s wr, 212 op/s
Jan 20 15:14:21 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:14:21 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:14:21 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:14:21.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:14:21 compute-1 nova_compute[225855]: 2026-01-20 15:14:21.679 225859 DEBUG nova.network.neutron [req-5c8a6c42-d363-41d1-993b-3458673b597a req-e041dbdf-95b4-4d3f-a85c-5f3c6d6d7d01 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] Updated VIF entry in instance network info cache for port 9de5453d-b548-429c-8fc2-7b012cb8ebdf. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 15:14:21 compute-1 nova_compute[225855]: 2026-01-20 15:14:21.680 225859 DEBUG nova.network.neutron [req-5c8a6c42-d363-41d1-993b-3458673b597a req-e041dbdf-95b4-4d3f-a85c-5f3c6d6d7d01 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] Updating instance_info_cache with network_info: [{"id": "9de5453d-b548-429c-8fc2-7b012cb8ebdf", "address": "fa:16:3e:a8:1d:e9", "network": {"id": "d07527d3-7363-453c-9902-c562bab626ba", "bridge": "br-int", "label": "tempest-network-smoke--1250108698", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1ed5feeeafe7448a8efb47ab975b0ead", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9de5453d-b5", "ovs_interfaceid": "9de5453d-b548-429c-8fc2-7b012cb8ebdf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:14:21 compute-1 nova_compute[225855]: 2026-01-20 15:14:21.698 225859 DEBUG oslo_concurrency.lockutils [req-5c8a6c42-d363-41d1-993b-3458673b597a req-e041dbdf-95b4-4d3f-a85c-5f3c6d6d7d01 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-128af7d9-155f-468d-9873-98c816f0df9e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:14:21 compute-1 nova_compute[225855]: 2026-01-20 15:14:21.862 225859 DEBUG nova.virt.libvirt.driver [None req-ee866e29-0f1e-4cbf-b426-74bd65a7b594 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] Start _get_guest_xml network_info=[{"id": "9de5453d-b548-429c-8fc2-7b012cb8ebdf", "address": "fa:16:3e:a8:1d:e9", "network": {"id": "d07527d3-7363-453c-9902-c562bab626ba", "bridge": "br-int", "label": "tempest-network-smoke--1250108698", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1ed5feeeafe7448a8efb47ab975b0ead", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9de5453d-b5", "ovs_interfaceid": "9de5453d-b548-429c-8fc2-7b012cb8ebdf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'device_type': 'disk', 'encryption_options': None, 'size': 0, 'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'guest_format': None, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'image_id': 'a32b3e07-16d8-46fd-9a7b-c242c432fcf9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 20 15:14:21 compute-1 nova_compute[225855]: 2026-01-20 15:14:21.866 225859 WARNING nova.virt.libvirt.driver [None req-ee866e29-0f1e-4cbf-b426-74bd65a7b594 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 15:14:21 compute-1 nova_compute[225855]: 2026-01-20 15:14:21.872 225859 DEBUG nova.virt.libvirt.host [None req-ee866e29-0f1e-4cbf-b426-74bd65a7b594 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 20 15:14:21 compute-1 nova_compute[225855]: 2026-01-20 15:14:21.873 225859 DEBUG nova.virt.libvirt.host [None req-ee866e29-0f1e-4cbf-b426-74bd65a7b594 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 20 15:14:21 compute-1 nova_compute[225855]: 2026-01-20 15:14:21.876 225859 DEBUG nova.virt.libvirt.host [None req-ee866e29-0f1e-4cbf-b426-74bd65a7b594 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 20 15:14:21 compute-1 nova_compute[225855]: 2026-01-20 15:14:21.876 225859 DEBUG nova.virt.libvirt.host [None req-ee866e29-0f1e-4cbf-b426-74bd65a7b594 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 20 15:14:21 compute-1 nova_compute[225855]: 2026-01-20 15:14:21.878 225859 DEBUG nova.virt.libvirt.driver [None req-ee866e29-0f1e-4cbf-b426-74bd65a7b594 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 20 15:14:21 compute-1 nova_compute[225855]: 2026-01-20 15:14:21.878 225859 DEBUG nova.virt.hardware [None req-ee866e29-0f1e-4cbf-b426-74bd65a7b594 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-20T14:21:55Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='522deaab-a741-4dbb-932d-d8b13a211c33',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 20 15:14:21 compute-1 nova_compute[225855]: 2026-01-20 15:14:21.879 225859 DEBUG nova.virt.hardware [None req-ee866e29-0f1e-4cbf-b426-74bd65a7b594 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 20 15:14:21 compute-1 nova_compute[225855]: 2026-01-20 15:14:21.879 225859 DEBUG nova.virt.hardware [None req-ee866e29-0f1e-4cbf-b426-74bd65a7b594 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 20 15:14:21 compute-1 nova_compute[225855]: 2026-01-20 15:14:21.879 225859 DEBUG nova.virt.hardware [None req-ee866e29-0f1e-4cbf-b426-74bd65a7b594 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 20 15:14:21 compute-1 nova_compute[225855]: 2026-01-20 15:14:21.879 225859 DEBUG nova.virt.hardware [None req-ee866e29-0f1e-4cbf-b426-74bd65a7b594 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 20 15:14:21 compute-1 nova_compute[225855]: 2026-01-20 15:14:21.880 225859 DEBUG nova.virt.hardware [None req-ee866e29-0f1e-4cbf-b426-74bd65a7b594 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 20 15:14:21 compute-1 nova_compute[225855]: 2026-01-20 15:14:21.880 225859 DEBUG nova.virt.hardware [None req-ee866e29-0f1e-4cbf-b426-74bd65a7b594 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 20 15:14:21 compute-1 nova_compute[225855]: 2026-01-20 15:14:21.880 225859 DEBUG nova.virt.hardware [None req-ee866e29-0f1e-4cbf-b426-74bd65a7b594 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 20 15:14:21 compute-1 nova_compute[225855]: 2026-01-20 15:14:21.881 225859 DEBUG nova.virt.hardware [None req-ee866e29-0f1e-4cbf-b426-74bd65a7b594 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 20 15:14:21 compute-1 nova_compute[225855]: 2026-01-20 15:14:21.881 225859 DEBUG nova.virt.hardware [None req-ee866e29-0f1e-4cbf-b426-74bd65a7b594 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 20 15:14:21 compute-1 nova_compute[225855]: 2026-01-20 15:14:21.881 225859 DEBUG nova.virt.hardware [None req-ee866e29-0f1e-4cbf-b426-74bd65a7b594 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 20 15:14:21 compute-1 nova_compute[225855]: 2026-01-20 15:14:21.882 225859 DEBUG nova.objects.instance [None req-ee866e29-0f1e-4cbf-b426-74bd65a7b594 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Lazy-loading 'vcpu_model' on Instance uuid 128af7d9-155f-468d-9873-98c816f0df9e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:14:21 compute-1 nova_compute[225855]: 2026-01-20 15:14:21.900 225859 DEBUG oslo_concurrency.processutils [None req-ee866e29-0f1e-4cbf-b426-74bd65a7b594 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:14:22 compute-1 ceph-mon[81775]: osdmap e399: 3 total, 3 up, 3 in
Jan 20 15:14:22 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 15:14:22 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/714890073' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:14:22 compute-1 nova_compute[225855]: 2026-01-20 15:14:22.326 225859 DEBUG oslo_concurrency.processutils [None req-ee866e29-0f1e-4cbf-b426-74bd65a7b594 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:14:22 compute-1 nova_compute[225855]: 2026-01-20 15:14:22.361 225859 DEBUG oslo_concurrency.processutils [None req-ee866e29-0f1e-4cbf-b426-74bd65a7b594 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:14:22 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 15:14:22 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4287932142' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:14:22 compute-1 nova_compute[225855]: 2026-01-20 15:14:22.843 225859 DEBUG oslo_concurrency.processutils [None req-ee866e29-0f1e-4cbf-b426-74bd65a7b594 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:14:22 compute-1 nova_compute[225855]: 2026-01-20 15:14:22.847 225859 DEBUG nova.virt.libvirt.vif [None req-ee866e29-0f1e-4cbf-b426-74bd65a7b594 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T15:13:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1998945962',display_name='tempest-TestNetworkAdvancedServerOps-server-1998945962',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1998945962',id=183,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHQtn9WZwWHpZ18eEtTY9zdPNbJgOayUdrvVmR1brDMxwKaiJ8tf9lOFdht6GjVy3Orpnh5Z5LatI7xEKad9rNtjFmwEczk5s4CmWp5ueE54bJ73h+pph+yq2VHvIP5rgg==',key_name='tempest-TestNetworkAdvancedServerOps-1645169738',keypairs=<?>,launch_index=0,launched_at=2026-01-20T15:14:11Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=MigrationContext,new_flavor=Flavor(1),node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='1ed5feeeafe7448a8efb47ab975b0ead',ramdisk_id='',reservation_id='r-mvt7thmt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-TestNetworkAdvancedServerOps-175282664',owner_user_name='tempest-TestNetworkAdvancedServerOps-175282664-project-member'},tags=<?>,task_state='resize_reverting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T15:14:13Z,user_data=None,user_id='442a7a5cb8ea426a82be9762b262d171',uuid=128af7d9-155f-468d-9873-98c816f0df9e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='resized') vif={"id": "9de5453d-b548-429c-8fc2-7b012cb8ebdf", "address": "fa:16:3e:a8:1d:e9", "network": {"id": "d07527d3-7363-453c-9902-c562bab626ba", "bridge": "br-int", "label": "tempest-network-smoke--1250108698", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": 
"10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1ed5feeeafe7448a8efb47ab975b0ead", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9de5453d-b5", "ovs_interfaceid": "9de5453d-b548-429c-8fc2-7b012cb8ebdf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 20 15:14:22 compute-1 nova_compute[225855]: 2026-01-20 15:14:22.848 225859 DEBUG nova.network.os_vif_util [None req-ee866e29-0f1e-4cbf-b426-74bd65a7b594 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Converting VIF {"id": "9de5453d-b548-429c-8fc2-7b012cb8ebdf", "address": "fa:16:3e:a8:1d:e9", "network": {"id": "d07527d3-7363-453c-9902-c562bab626ba", "bridge": "br-int", "label": "tempest-network-smoke--1250108698", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1ed5feeeafe7448a8efb47ab975b0ead", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9de5453d-b5", "ovs_interfaceid": "9de5453d-b548-429c-8fc2-7b012cb8ebdf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 15:14:22 compute-1 nova_compute[225855]: 2026-01-20 15:14:22.849 225859 DEBUG nova.network.os_vif_util [None req-ee866e29-0f1e-4cbf-b426-74bd65a7b594 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a8:1d:e9,bridge_name='br-int',has_traffic_filtering=True,id=9de5453d-b548-429c-8fc2-7b012cb8ebdf,network=Network(d07527d3-7363-453c-9902-c562bab626ba),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9de5453d-b5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 15:14:22 compute-1 nova_compute[225855]: 2026-01-20 15:14:22.855 225859 DEBUG nova.virt.libvirt.driver [None req-ee866e29-0f1e-4cbf-b426-74bd65a7b594 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] End _get_guest_xml xml=<domain type="kvm">
Jan 20 15:14:22 compute-1 nova_compute[225855]:   <uuid>128af7d9-155f-468d-9873-98c816f0df9e</uuid>
Jan 20 15:14:22 compute-1 nova_compute[225855]:   <name>instance-000000b7</name>
Jan 20 15:14:22 compute-1 nova_compute[225855]:   <memory>131072</memory>
Jan 20 15:14:22 compute-1 nova_compute[225855]:   <vcpu>1</vcpu>
Jan 20 15:14:22 compute-1 nova_compute[225855]:   <metadata>
Jan 20 15:14:22 compute-1 nova_compute[225855]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 15:14:22 compute-1 nova_compute[225855]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 15:14:22 compute-1 nova_compute[225855]:       <nova:name>tempest-TestNetworkAdvancedServerOps-server-1998945962</nova:name>
Jan 20 15:14:22 compute-1 nova_compute[225855]:       <nova:creationTime>2026-01-20 15:14:21</nova:creationTime>
Jan 20 15:14:22 compute-1 nova_compute[225855]:       <nova:flavor name="m1.nano">
Jan 20 15:14:22 compute-1 nova_compute[225855]:         <nova:memory>128</nova:memory>
Jan 20 15:14:22 compute-1 nova_compute[225855]:         <nova:disk>1</nova:disk>
Jan 20 15:14:22 compute-1 nova_compute[225855]:         <nova:swap>0</nova:swap>
Jan 20 15:14:22 compute-1 nova_compute[225855]:         <nova:ephemeral>0</nova:ephemeral>
Jan 20 15:14:22 compute-1 nova_compute[225855]:         <nova:vcpus>1</nova:vcpus>
Jan 20 15:14:22 compute-1 nova_compute[225855]:       </nova:flavor>
Jan 20 15:14:22 compute-1 nova_compute[225855]:       <nova:owner>
Jan 20 15:14:22 compute-1 nova_compute[225855]:         <nova:user uuid="442a7a5cb8ea426a82be9762b262d171">tempest-TestNetworkAdvancedServerOps-175282664-project-member</nova:user>
Jan 20 15:14:22 compute-1 nova_compute[225855]:         <nova:project uuid="1ed5feeeafe7448a8efb47ab975b0ead">tempest-TestNetworkAdvancedServerOps-175282664</nova:project>
Jan 20 15:14:22 compute-1 nova_compute[225855]:       </nova:owner>
Jan 20 15:14:22 compute-1 nova_compute[225855]:       <nova:root type="image" uuid="a32b3e07-16d8-46fd-9a7b-c242c432fcf9"/>
Jan 20 15:14:22 compute-1 nova_compute[225855]:       <nova:ports>
Jan 20 15:14:22 compute-1 nova_compute[225855]:         <nova:port uuid="9de5453d-b548-429c-8fc2-7b012cb8ebdf">
Jan 20 15:14:22 compute-1 nova_compute[225855]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Jan 20 15:14:22 compute-1 nova_compute[225855]:         </nova:port>
Jan 20 15:14:22 compute-1 nova_compute[225855]:       </nova:ports>
Jan 20 15:14:22 compute-1 nova_compute[225855]:     </nova:instance>
Jan 20 15:14:22 compute-1 nova_compute[225855]:   </metadata>
Jan 20 15:14:22 compute-1 nova_compute[225855]:   <sysinfo type="smbios">
Jan 20 15:14:22 compute-1 nova_compute[225855]:     <system>
Jan 20 15:14:22 compute-1 nova_compute[225855]:       <entry name="manufacturer">RDO</entry>
Jan 20 15:14:22 compute-1 nova_compute[225855]:       <entry name="product">OpenStack Compute</entry>
Jan 20 15:14:22 compute-1 nova_compute[225855]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 15:14:22 compute-1 nova_compute[225855]:       <entry name="serial">128af7d9-155f-468d-9873-98c816f0df9e</entry>
Jan 20 15:14:22 compute-1 nova_compute[225855]:       <entry name="uuid">128af7d9-155f-468d-9873-98c816f0df9e</entry>
Jan 20 15:14:22 compute-1 nova_compute[225855]:       <entry name="family">Virtual Machine</entry>
Jan 20 15:14:22 compute-1 nova_compute[225855]:     </system>
Jan 20 15:14:22 compute-1 nova_compute[225855]:   </sysinfo>
Jan 20 15:14:22 compute-1 nova_compute[225855]:   <os>
Jan 20 15:14:22 compute-1 nova_compute[225855]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 20 15:14:22 compute-1 nova_compute[225855]:     <boot dev="hd"/>
Jan 20 15:14:22 compute-1 nova_compute[225855]:     <smbios mode="sysinfo"/>
Jan 20 15:14:22 compute-1 nova_compute[225855]:   </os>
Jan 20 15:14:22 compute-1 nova_compute[225855]:   <features>
Jan 20 15:14:22 compute-1 nova_compute[225855]:     <acpi/>
Jan 20 15:14:22 compute-1 nova_compute[225855]:     <apic/>
Jan 20 15:14:22 compute-1 nova_compute[225855]:     <vmcoreinfo/>
Jan 20 15:14:22 compute-1 nova_compute[225855]:   </features>
Jan 20 15:14:22 compute-1 nova_compute[225855]:   <clock offset="utc">
Jan 20 15:14:22 compute-1 nova_compute[225855]:     <timer name="pit" tickpolicy="delay"/>
Jan 20 15:14:22 compute-1 nova_compute[225855]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 20 15:14:22 compute-1 nova_compute[225855]:     <timer name="hpet" present="no"/>
Jan 20 15:14:22 compute-1 nova_compute[225855]:   </clock>
Jan 20 15:14:22 compute-1 nova_compute[225855]:   <cpu mode="custom" match="exact">
Jan 20 15:14:22 compute-1 nova_compute[225855]:     <model>Nehalem</model>
Jan 20 15:14:22 compute-1 nova_compute[225855]:     <topology sockets="1" cores="1" threads="1"/>
Jan 20 15:14:22 compute-1 nova_compute[225855]:   </cpu>
Jan 20 15:14:22 compute-1 nova_compute[225855]:   <devices>
Jan 20 15:14:22 compute-1 nova_compute[225855]:     <disk type="network" device="disk">
Jan 20 15:14:22 compute-1 nova_compute[225855]:       <driver type="raw" cache="none"/>
Jan 20 15:14:22 compute-1 nova_compute[225855]:       <source protocol="rbd" name="vms/128af7d9-155f-468d-9873-98c816f0df9e_disk">
Jan 20 15:14:22 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 15:14:22 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 15:14:22 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 15:14:22 compute-1 nova_compute[225855]:       </source>
Jan 20 15:14:22 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 15:14:22 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 15:14:22 compute-1 nova_compute[225855]:       </auth>
Jan 20 15:14:22 compute-1 nova_compute[225855]:       <target dev="vda" bus="virtio"/>
Jan 20 15:14:22 compute-1 nova_compute[225855]:     </disk>
Jan 20 15:14:22 compute-1 nova_compute[225855]:     <disk type="network" device="cdrom">
Jan 20 15:14:22 compute-1 nova_compute[225855]:       <driver type="raw" cache="none"/>
Jan 20 15:14:22 compute-1 nova_compute[225855]:       <source protocol="rbd" name="vms/128af7d9-155f-468d-9873-98c816f0df9e_disk.config">
Jan 20 15:14:22 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 15:14:22 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 15:14:22 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 15:14:22 compute-1 nova_compute[225855]:       </source>
Jan 20 15:14:22 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 15:14:22 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 15:14:22 compute-1 nova_compute[225855]:       </auth>
Jan 20 15:14:22 compute-1 nova_compute[225855]:       <target dev="sda" bus="sata"/>
Jan 20 15:14:22 compute-1 nova_compute[225855]:     </disk>
Jan 20 15:14:22 compute-1 nova_compute[225855]:     <interface type="ethernet">
Jan 20 15:14:22 compute-1 nova_compute[225855]:       <mac address="fa:16:3e:a8:1d:e9"/>
Jan 20 15:14:22 compute-1 nova_compute[225855]:       <model type="virtio"/>
Jan 20 15:14:22 compute-1 nova_compute[225855]:       <driver name="vhost" rx_queue_size="512"/>
Jan 20 15:14:22 compute-1 nova_compute[225855]:       <mtu size="1442"/>
Jan 20 15:14:22 compute-1 nova_compute[225855]:       <target dev="tap9de5453d-b5"/>
Jan 20 15:14:22 compute-1 nova_compute[225855]:     </interface>
Jan 20 15:14:22 compute-1 nova_compute[225855]:     <serial type="pty">
Jan 20 15:14:22 compute-1 nova_compute[225855]:       <log file="/var/lib/nova/instances/128af7d9-155f-468d-9873-98c816f0df9e/console.log" append="off"/>
Jan 20 15:14:22 compute-1 nova_compute[225855]:     </serial>
Jan 20 15:14:22 compute-1 nova_compute[225855]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 20 15:14:22 compute-1 nova_compute[225855]:     <video>
Jan 20 15:14:22 compute-1 nova_compute[225855]:       <model type="virtio"/>
Jan 20 15:14:22 compute-1 nova_compute[225855]:     </video>
Jan 20 15:14:22 compute-1 nova_compute[225855]:     <input type="tablet" bus="usb"/>
Jan 20 15:14:22 compute-1 nova_compute[225855]:     <input type="keyboard" bus="usb"/>
Jan 20 15:14:22 compute-1 nova_compute[225855]:     <rng model="virtio">
Jan 20 15:14:22 compute-1 nova_compute[225855]:       <backend model="random">/dev/urandom</backend>
Jan 20 15:14:22 compute-1 nova_compute[225855]:     </rng>
Jan 20 15:14:22 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root"/>
Jan 20 15:14:22 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:14:22 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:14:22 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:14:22 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:14:22 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:14:22 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:14:22 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:14:22 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:14:22 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:14:22 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:14:22 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:14:22 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:14:22 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:14:22 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:14:22 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:14:22 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:14:22 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:14:22 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:14:22 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:14:22 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:14:22 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:14:22 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:14:22 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:14:22 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:14:22 compute-1 nova_compute[225855]:     <controller type="usb" index="0"/>
Jan 20 15:14:22 compute-1 nova_compute[225855]:     <memballoon model="virtio">
Jan 20 15:14:22 compute-1 nova_compute[225855]:       <stats period="10"/>
Jan 20 15:14:22 compute-1 nova_compute[225855]:     </memballoon>
Jan 20 15:14:22 compute-1 nova_compute[225855]:   </devices>
Jan 20 15:14:22 compute-1 nova_compute[225855]: </domain>
Jan 20 15:14:22 compute-1 nova_compute[225855]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 20 15:14:22 compute-1 nova_compute[225855]: 2026-01-20 15:14:22.857 225859 DEBUG nova.compute.manager [None req-ee866e29-0f1e-4cbf-b426-74bd65a7b594 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] Preparing to wait for external event network-vif-plugged-9de5453d-b548-429c-8fc2-7b012cb8ebdf prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 20 15:14:22 compute-1 nova_compute[225855]: 2026-01-20 15:14:22.858 225859 DEBUG oslo_concurrency.lockutils [None req-ee866e29-0f1e-4cbf-b426-74bd65a7b594 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Acquiring lock "128af7d9-155f-468d-9873-98c816f0df9e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:14:22 compute-1 nova_compute[225855]: 2026-01-20 15:14:22.858 225859 DEBUG oslo_concurrency.lockutils [None req-ee866e29-0f1e-4cbf-b426-74bd65a7b594 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Lock "128af7d9-155f-468d-9873-98c816f0df9e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:14:22 compute-1 nova_compute[225855]: 2026-01-20 15:14:22.858 225859 DEBUG oslo_concurrency.lockutils [None req-ee866e29-0f1e-4cbf-b426-74bd65a7b594 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Lock "128af7d9-155f-468d-9873-98c816f0df9e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:14:22 compute-1 nova_compute[225855]: 2026-01-20 15:14:22.859 225859 DEBUG nova.virt.libvirt.vif [None req-ee866e29-0f1e-4cbf-b426-74bd65a7b594 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T15:13:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1998945962',display_name='tempest-TestNetworkAdvancedServerOps-server-1998945962',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1998945962',id=183,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHQtn9WZwWHpZ18eEtTY9zdPNbJgOayUdrvVmR1brDMxwKaiJ8tf9lOFdht6GjVy3Orpnh5Z5LatI7xEKad9rNtjFmwEczk5s4CmWp5ueE54bJ73h+pph+yq2VHvIP5rgg==',key_name='tempest-TestNetworkAdvancedServerOps-1645169738',keypairs=<?>,launch_index=0,launched_at=2026-01-20T15:14:11Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=MigrationContext,new_flavor=Flavor(1),node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='1ed5feeeafe7448a8efb47ab975b0ead',ramdisk_id='',reservation_id='r-mvt7thmt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-TestNetworkAdvancedServerOps-175282664',owner_user_name='tempest-TestNetworkAdvancedServerOps-175282664-project-member'},tags=<?>,task_state='resize_reverting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T15:14:13Z,user_data=None,user_id='442a7a5cb8ea426a82be9762b262d171',uuid=128af7d9-155f-468d-9873-98c816f0df9e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='resized') vif={"id": "9de5453d-b548-429c-8fc2-7b012cb8ebdf", "address": "fa:16:3e:a8:1d:e9", "network": {"id": "d07527d3-7363-453c-9902-c562bab626ba", "bridge": "br-int", "label": "tempest-network-smoke--1250108698", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": 
"10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1ed5feeeafe7448a8efb47ab975b0ead", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9de5453d-b5", "ovs_interfaceid": "9de5453d-b548-429c-8fc2-7b012cb8ebdf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 20 15:14:22 compute-1 nova_compute[225855]: 2026-01-20 15:14:22.859 225859 DEBUG nova.network.os_vif_util [None req-ee866e29-0f1e-4cbf-b426-74bd65a7b594 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Converting VIF {"id": "9de5453d-b548-429c-8fc2-7b012cb8ebdf", "address": "fa:16:3e:a8:1d:e9", "network": {"id": "d07527d3-7363-453c-9902-c562bab626ba", "bridge": "br-int", "label": "tempest-network-smoke--1250108698", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1ed5feeeafe7448a8efb47ab975b0ead", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9de5453d-b5", "ovs_interfaceid": "9de5453d-b548-429c-8fc2-7b012cb8ebdf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 15:14:22 compute-1 nova_compute[225855]: 2026-01-20 15:14:22.860 225859 DEBUG nova.network.os_vif_util [None req-ee866e29-0f1e-4cbf-b426-74bd65a7b594 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a8:1d:e9,bridge_name='br-int',has_traffic_filtering=True,id=9de5453d-b548-429c-8fc2-7b012cb8ebdf,network=Network(d07527d3-7363-453c-9902-c562bab626ba),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9de5453d-b5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 15:14:22 compute-1 nova_compute[225855]: 2026-01-20 15:14:22.861 225859 DEBUG os_vif [None req-ee866e29-0f1e-4cbf-b426-74bd65a7b594 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a8:1d:e9,bridge_name='br-int',has_traffic_filtering=True,id=9de5453d-b548-429c-8fc2-7b012cb8ebdf,network=Network(d07527d3-7363-453c-9902-c562bab626ba),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9de5453d-b5') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 20 15:14:22 compute-1 nova_compute[225855]: 2026-01-20 15:14:22.861 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:14:22 compute-1 nova_compute[225855]: 2026-01-20 15:14:22.862 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:14:22 compute-1 nova_compute[225855]: 2026-01-20 15:14:22.862 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 15:14:22 compute-1 nova_compute[225855]: 2026-01-20 15:14:22.865 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:14:22 compute-1 nova_compute[225855]: 2026-01-20 15:14:22.865 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9de5453d-b5, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:14:22 compute-1 nova_compute[225855]: 2026-01-20 15:14:22.866 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap9de5453d-b5, col_values=(('external_ids', {'iface-id': '9de5453d-b548-429c-8fc2-7b012cb8ebdf', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a8:1d:e9', 'vm-uuid': '128af7d9-155f-468d-9873-98c816f0df9e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:14:22 compute-1 nova_compute[225855]: 2026-01-20 15:14:22.867 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:14:22 compute-1 NetworkManager[49104]: <info>  [1768922062.8688] manager: (tap9de5453d-b5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/338)
Jan 20 15:14:22 compute-1 nova_compute[225855]: 2026-01-20 15:14:22.870 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 15:14:22 compute-1 nova_compute[225855]: 2026-01-20 15:14:22.874 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:14:22 compute-1 nova_compute[225855]: 2026-01-20 15:14:22.875 225859 INFO os_vif [None req-ee866e29-0f1e-4cbf-b426-74bd65a7b594 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a8:1d:e9,bridge_name='br-int',has_traffic_filtering=True,id=9de5453d-b548-429c-8fc2-7b012cb8ebdf,network=Network(d07527d3-7363-453c-9902-c562bab626ba),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9de5453d-b5')
Jan 20 15:14:22 compute-1 nova_compute[225855]: 2026-01-20 15:14:22.892 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:14:22 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:14:22 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:14:22 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:14:22.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:14:22 compute-1 kernel: tap9de5453d-b5: entered promiscuous mode
Jan 20 15:14:22 compute-1 nova_compute[225855]: 2026-01-20 15:14:22.960 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:14:22 compute-1 NetworkManager[49104]: <info>  [1768922062.9621] manager: (tap9de5453d-b5): new Tun device (/org/freedesktop/NetworkManager/Devices/339)
Jan 20 15:14:22 compute-1 ovn_controller[130490]: 2026-01-20T15:14:22Z|00815|binding|INFO|Claiming lport 9de5453d-b548-429c-8fc2-7b012cb8ebdf for this chassis.
Jan 20 15:14:22 compute-1 ovn_controller[130490]: 2026-01-20T15:14:22Z|00816|binding|INFO|9de5453d-b548-429c-8fc2-7b012cb8ebdf: Claiming fa:16:3e:a8:1d:e9 10.100.0.4
Jan 20 15:14:22 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:14:22.972 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a8:1d:e9 10.100.0.4'], port_security=['fa:16:3e:a8:1d:e9 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '128af7d9-155f-468d-9873-98c816f0df9e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d07527d3-7363-453c-9902-c562bab626ba', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1ed5feeeafe7448a8efb47ab975b0ead', 'neutron:revision_number': '10', 'neutron:security_group_ids': 'b4895263-5fc5-4c5a-ab8d-547f570bc095', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.233'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cf4d49b6-1d42-4171-8055-0d823fb37e66, chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], logical_port=9de5453d-b548-429c-8fc2-7b012cb8ebdf) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:14:22 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:14:22.973 140354 INFO neutron.agent.ovn.metadata.agent [-] Port 9de5453d-b548-429c-8fc2-7b012cb8ebdf in datapath d07527d3-7363-453c-9902-c562bab626ba bound to our chassis
Jan 20 15:14:22 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:14:22.975 140354 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d07527d3-7363-453c-9902-c562bab626ba
Jan 20 15:14:22 compute-1 systemd-udevd[302896]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 15:14:22 compute-1 ovn_controller[130490]: 2026-01-20T15:14:22Z|00817|binding|INFO|Setting lport 9de5453d-b548-429c-8fc2-7b012cb8ebdf ovn-installed in OVS
Jan 20 15:14:22 compute-1 ovn_controller[130490]: 2026-01-20T15:14:22Z|00818|binding|INFO|Setting lport 9de5453d-b548-429c-8fc2-7b012cb8ebdf up in Southbound
Jan 20 15:14:22 compute-1 nova_compute[225855]: 2026-01-20 15:14:22.988 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:14:22 compute-1 nova_compute[225855]: 2026-01-20 15:14:22.990 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:14:22 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:14:22.987 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[4ba0a4ae-117a-4940-aa10-6e15a30c274d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:14:22 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:14:22.991 140354 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapd07527d3-71 in ovnmeta-d07527d3-7363-453c-9902-c562bab626ba namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 20 15:14:22 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:14:22.993 229707 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapd07527d3-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 20 15:14:22 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:14:22.993 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[4be3eb9a-31e4-4701-9800-c056157a30f8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:14:22 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:14:22.995 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[2d8bf542-035a-45b9-a6cb-2aca81b5a439]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:14:23 compute-1 NetworkManager[49104]: <info>  [1768922063.0005] device (tap9de5453d-b5): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 20 15:14:23 compute-1 NetworkManager[49104]: <info>  [1768922063.0012] device (tap9de5453d-b5): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 20 15:14:23 compute-1 systemd-machined[194361]: New machine qemu-97-instance-000000b7.
Jan 20 15:14:23 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:14:23.009 140466 DEBUG oslo.privsep.daemon [-] privsep: reply[b3578506-587d-491b-adcd-d90e8514071f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:14:23 compute-1 systemd[1]: Started Virtual Machine qemu-97-instance-000000b7.
Jan 20 15:14:23 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:14:23.034 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[a79f83fb-bb32-4f45-a772-420547a81fe7]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:14:23 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:14:23.063 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[621d68da-4b89-4216-901a-b42184ff9f65]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:14:23 compute-1 NetworkManager[49104]: <info>  [1768922063.0685] manager: (tapd07527d3-70): new Veth device (/org/freedesktop/NetworkManager/Devices/340)
Jan 20 15:14:23 compute-1 systemd-udevd[302900]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 15:14:23 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:14:23.069 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[2d8dd71e-e502-414a-9204-25f217da1890]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:14:23 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:14:23.097 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[bd03d32e-3b00-44c9-9e11-81634d633973]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:14:23 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:14:23.100 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[8f166daf-3e31-4aad-a944-9462fd663beb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:14:23 compute-1 NetworkManager[49104]: <info>  [1768922063.1232] device (tapd07527d3-70): carrier: link connected
Jan 20 15:14:23 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:14:23.128 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[13ef60ac-615e-4436-bac4-ee8986b5af93]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:14:23 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:14:23.143 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[a46163fb-474f-422b-bbde-e41b415f8985]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd07527d3-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6f:33:a8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 230], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 710205, 'reachable_time': 44044, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 302931, 'error': None, 'target': 'ovnmeta-d07527d3-7363-453c-9902-c562bab626ba', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:14:23 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:14:23.157 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[f9dca58b-438c-4316-bd19-b08faf9e464c]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe6f:33a8'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 710205, 'tstamp': 710205}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 302932, 'error': None, 'target': 'ovnmeta-d07527d3-7363-453c-9902-c562bab626ba', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:14:23 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:14:23.173 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[6eecd527-66e0-47de-ab90-3a831be619d1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd07527d3-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6f:33:a8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 230], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 710205, 'reachable_time': 44044, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 302933, 'error': None, 'target': 'ovnmeta-d07527d3-7363-453c-9902-c562bab626ba', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:14:23 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:14:23.200 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[e801fbd1-7e7f-47e1-b0c0-4e3ed6a6c9ca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:14:23 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:14:23 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:14:23 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:14:23.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:14:23 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:14:23.260 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[4ce71dea-1e4a-41a5-ab05-e7e8f755b70c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:14:23 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:14:23.262 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd07527d3-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:14:23 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:14:23.262 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 15:14:23 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:14:23.263 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd07527d3-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:14:23 compute-1 nova_compute[225855]: 2026-01-20 15:14:23.264 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:14:23 compute-1 NetworkManager[49104]: <info>  [1768922063.2652] manager: (tapd07527d3-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/341)
Jan 20 15:14:23 compute-1 kernel: tapd07527d3-70: entered promiscuous mode
Jan 20 15:14:23 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:14:23.268 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd07527d3-70, col_values=(('external_ids', {'iface-id': '311d5bf2-0b44-4ce1-9ec1-e7458d5df232'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:14:23 compute-1 nova_compute[225855]: 2026-01-20 15:14:23.269 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:14:23 compute-1 ovn_controller[130490]: 2026-01-20T15:14:23Z|00819|binding|INFO|Releasing lport 311d5bf2-0b44-4ce1-9ec1-e7458d5df232 from this chassis (sb_readonly=0)
Jan 20 15:14:23 compute-1 nova_compute[225855]: 2026-01-20 15:14:23.269 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:14:23 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:14:23.270 140354 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/d07527d3-7363-453c-9902-c562bab626ba.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/d07527d3-7363-453c-9902-c562bab626ba.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 20 15:14:23 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:14:23.271 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[23d87075-2e18-4799-ab87-b71a39919ffb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:14:23 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:14:23.271 140354 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 20 15:14:23 compute-1 ovn_metadata_agent[140349]: global
Jan 20 15:14:23 compute-1 ovn_metadata_agent[140349]:     log         /dev/log local0 debug
Jan 20 15:14:23 compute-1 ovn_metadata_agent[140349]:     log-tag     haproxy-metadata-proxy-d07527d3-7363-453c-9902-c562bab626ba
Jan 20 15:14:23 compute-1 ovn_metadata_agent[140349]:     user        root
Jan 20 15:14:23 compute-1 ovn_metadata_agent[140349]:     group       root
Jan 20 15:14:23 compute-1 ovn_metadata_agent[140349]:     maxconn     1024
Jan 20 15:14:23 compute-1 ovn_metadata_agent[140349]:     pidfile     /var/lib/neutron/external/pids/d07527d3-7363-453c-9902-c562bab626ba.pid.haproxy
Jan 20 15:14:23 compute-1 ovn_metadata_agent[140349]:     daemon
Jan 20 15:14:23 compute-1 ovn_metadata_agent[140349]: 
Jan 20 15:14:23 compute-1 ovn_metadata_agent[140349]: defaults
Jan 20 15:14:23 compute-1 ovn_metadata_agent[140349]:     log global
Jan 20 15:14:23 compute-1 ovn_metadata_agent[140349]:     mode http
Jan 20 15:14:23 compute-1 ovn_metadata_agent[140349]:     option httplog
Jan 20 15:14:23 compute-1 ovn_metadata_agent[140349]:     option dontlognull
Jan 20 15:14:23 compute-1 ovn_metadata_agent[140349]:     option http-server-close
Jan 20 15:14:23 compute-1 ovn_metadata_agent[140349]:     option forwardfor
Jan 20 15:14:23 compute-1 ovn_metadata_agent[140349]:     retries                 3
Jan 20 15:14:23 compute-1 ovn_metadata_agent[140349]:     timeout http-request    30s
Jan 20 15:14:23 compute-1 ovn_metadata_agent[140349]:     timeout connect         30s
Jan 20 15:14:23 compute-1 ovn_metadata_agent[140349]:     timeout client          32s
Jan 20 15:14:23 compute-1 ovn_metadata_agent[140349]:     timeout server          32s
Jan 20 15:14:23 compute-1 ovn_metadata_agent[140349]:     timeout http-keep-alive 30s
Jan 20 15:14:23 compute-1 ovn_metadata_agent[140349]: 
Jan 20 15:14:23 compute-1 ovn_metadata_agent[140349]: 
Jan 20 15:14:23 compute-1 ovn_metadata_agent[140349]: listen listener
Jan 20 15:14:23 compute-1 ovn_metadata_agent[140349]:     bind 169.254.169.254:80
Jan 20 15:14:23 compute-1 ovn_metadata_agent[140349]:     server metadata /var/lib/neutron/metadata_proxy
Jan 20 15:14:23 compute-1 ovn_metadata_agent[140349]:     http-request add-header X-OVN-Network-ID d07527d3-7363-453c-9902-c562bab626ba
Jan 20 15:14:23 compute-1 ovn_metadata_agent[140349]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 20 15:14:23 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:14:23.272 140354 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-d07527d3-7363-453c-9902-c562bab626ba', 'env', 'PROCESS_TAG=haproxy-d07527d3-7363-453c-9902-c562bab626ba', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/d07527d3-7363-453c-9902-c562bab626ba.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 20 15:14:23 compute-1 nova_compute[225855]: 2026-01-20 15:14:23.282 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:14:23 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/714890073' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:14:23 compute-1 ceph-mon[81775]: pgmap v2747: 321 pgs: 321 active+clean; 505 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 1.6 MiB/s wr, 150 op/s
Jan 20 15:14:23 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/4287932142' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:14:23 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:14:23 compute-1 nova_compute[225855]: 2026-01-20 15:14:23.421 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768922063.4213808, 128af7d9-155f-468d-9873-98c816f0df9e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:14:23 compute-1 nova_compute[225855]: 2026-01-20 15:14:23.422 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] VM Started (Lifecycle Event)
Jan 20 15:14:23 compute-1 nova_compute[225855]: 2026-01-20 15:14:23.451 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:14:23 compute-1 nova_compute[225855]: 2026-01-20 15:14:23.455 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768922063.4215171, 128af7d9-155f-468d-9873-98c816f0df9e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:14:23 compute-1 nova_compute[225855]: 2026-01-20 15:14:23.455 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] VM Paused (Lifecycle Event)
Jan 20 15:14:23 compute-1 nova_compute[225855]: 2026-01-20 15:14:23.475 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:14:23 compute-1 nova_compute[225855]: 2026-01-20 15:14:23.478 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: resized, current task_state: resize_reverting, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 15:14:23 compute-1 nova_compute[225855]: 2026-01-20 15:14:23.504 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] During sync_power_state the instance has a pending task (resize_reverting). Skip.
Jan 20 15:14:23 compute-1 podman[303007]: 2026-01-20 15:14:23.586530574 +0000 UTC m=+0.022832299 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 20 15:14:23 compute-1 nova_compute[225855]: 2026-01-20 15:14:23.687 225859 DEBUG nova.compute.manager [req-0bb21d69-33bb-4488-b61e-e71fda433142 req-af923cfc-861a-47cf-8b85-26ef2f466681 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] Received event network-vif-plugged-9de5453d-b548-429c-8fc2-7b012cb8ebdf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:14:23 compute-1 nova_compute[225855]: 2026-01-20 15:14:23.687 225859 DEBUG oslo_concurrency.lockutils [req-0bb21d69-33bb-4488-b61e-e71fda433142 req-af923cfc-861a-47cf-8b85-26ef2f466681 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "128af7d9-155f-468d-9873-98c816f0df9e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:14:23 compute-1 nova_compute[225855]: 2026-01-20 15:14:23.687 225859 DEBUG oslo_concurrency.lockutils [req-0bb21d69-33bb-4488-b61e-e71fda433142 req-af923cfc-861a-47cf-8b85-26ef2f466681 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "128af7d9-155f-468d-9873-98c816f0df9e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:14:23 compute-1 nova_compute[225855]: 2026-01-20 15:14:23.687 225859 DEBUG oslo_concurrency.lockutils [req-0bb21d69-33bb-4488-b61e-e71fda433142 req-af923cfc-861a-47cf-8b85-26ef2f466681 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "128af7d9-155f-468d-9873-98c816f0df9e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:14:23 compute-1 nova_compute[225855]: 2026-01-20 15:14:23.688 225859 DEBUG nova.compute.manager [req-0bb21d69-33bb-4488-b61e-e71fda433142 req-af923cfc-861a-47cf-8b85-26ef2f466681 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] Processing event network-vif-plugged-9de5453d-b548-429c-8fc2-7b012cb8ebdf _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 20 15:14:23 compute-1 nova_compute[225855]: 2026-01-20 15:14:23.688 225859 DEBUG nova.compute.manager [None req-ee866e29-0f1e-4cbf-b426-74bd65a7b594 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 20 15:14:23 compute-1 nova_compute[225855]: 2026-01-20 15:14:23.691 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768922063.6912546, 128af7d9-155f-468d-9873-98c816f0df9e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:14:23 compute-1 nova_compute[225855]: 2026-01-20 15:14:23.691 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] VM Resumed (Lifecycle Event)
Jan 20 15:14:23 compute-1 nova_compute[225855]: 2026-01-20 15:14:23.695 225859 INFO nova.virt.libvirt.driver [-] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] Instance running successfully.
Jan 20 15:14:23 compute-1 nova_compute[225855]: 2026-01-20 15:14:23.695 225859 DEBUG nova.virt.libvirt.driver [None req-ee866e29-0f1e-4cbf-b426-74bd65a7b594 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] finish_revert_migration finished successfully. finish_revert_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11887
Jan 20 15:14:23 compute-1 nova_compute[225855]: 2026-01-20 15:14:23.733 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:14:23 compute-1 nova_compute[225855]: 2026-01-20 15:14:23.737 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: resized, current task_state: resize_reverting, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 15:14:23 compute-1 podman[303007]: 2026-01-20 15:14:23.765035419 +0000 UTC m=+0.201337124 container create b478c0ef6bae9eea7c30e6f5fef3da31b6628e87efed91da6febae4d8fa3f142 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d07527d3-7363-453c-9902-c562bab626ba, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202)
Jan 20 15:14:23 compute-1 nova_compute[225855]: 2026-01-20 15:14:23.777 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] During sync_power_state the instance has a pending task (resize_reverting). Skip.
Jan 20 15:14:23 compute-1 nova_compute[225855]: 2026-01-20 15:14:23.801 225859 INFO nova.compute.manager [None req-ee866e29-0f1e-4cbf-b426-74bd65a7b594 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] Updating instance to original state: 'active'
Jan 20 15:14:23 compute-1 systemd[1]: Started libpod-conmon-b478c0ef6bae9eea7c30e6f5fef3da31b6628e87efed91da6febae4d8fa3f142.scope.
Jan 20 15:14:23 compute-1 systemd[1]: Started libcrun container.
Jan 20 15:14:23 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a54fb807962a544aa2982518e645db6ec417e79968889ef69caab0fed7c38d3/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 20 15:14:23 compute-1 podman[303007]: 2026-01-20 15:14:23.861483306 +0000 UTC m=+0.297785021 container init b478c0ef6bae9eea7c30e6f5fef3da31b6628e87efed91da6febae4d8fa3f142 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d07527d3-7363-453c-9902-c562bab626ba, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS)
Jan 20 15:14:23 compute-1 podman[303007]: 2026-01-20 15:14:23.866905419 +0000 UTC m=+0.303207124 container start b478c0ef6bae9eea7c30e6f5fef3da31b6628e87efed91da6febae4d8fa3f142 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d07527d3-7363-453c-9902-c562bab626ba, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 20 15:14:23 compute-1 neutron-haproxy-ovnmeta-d07527d3-7363-453c-9902-c562bab626ba[303024]: [NOTICE]   (303028) : New worker (303030) forked
Jan 20 15:14:23 compute-1 neutron-haproxy-ovnmeta-d07527d3-7363-453c-9902-c562bab626ba[303024]: [NOTICE]   (303028) : Loading success.
Jan 20 15:14:24 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:14:24 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:14:24 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:14:24.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:14:25 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:14:25 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:14:25 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:14:25.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:14:25 compute-1 nova_compute[225855]: 2026-01-20 15:14:25.361 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:14:25 compute-1 nova_compute[225855]: 2026-01-20 15:14:25.361 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 20 15:14:25 compute-1 nova_compute[225855]: 2026-01-20 15:14:25.379 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 20 15:14:25 compute-1 ceph-mon[81775]: pgmap v2748: 321 pgs: 321 active+clean; 515 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.8 MiB/s rd, 2.5 MiB/s wr, 130 op/s
Jan 20 15:14:25 compute-1 nova_compute[225855]: 2026-01-20 15:14:25.795 225859 DEBUG nova.compute.manager [req-3f502ff3-ee76-489f-a057-779cfb8e42b0 req-ec796712-68a0-4732-85d5-dd79241a8bf1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] Received event network-vif-plugged-9de5453d-b548-429c-8fc2-7b012cb8ebdf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:14:25 compute-1 nova_compute[225855]: 2026-01-20 15:14:25.795 225859 DEBUG oslo_concurrency.lockutils [req-3f502ff3-ee76-489f-a057-779cfb8e42b0 req-ec796712-68a0-4732-85d5-dd79241a8bf1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "128af7d9-155f-468d-9873-98c816f0df9e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:14:25 compute-1 nova_compute[225855]: 2026-01-20 15:14:25.795 225859 DEBUG oslo_concurrency.lockutils [req-3f502ff3-ee76-489f-a057-779cfb8e42b0 req-ec796712-68a0-4732-85d5-dd79241a8bf1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "128af7d9-155f-468d-9873-98c816f0df9e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:14:25 compute-1 nova_compute[225855]: 2026-01-20 15:14:25.796 225859 DEBUG oslo_concurrency.lockutils [req-3f502ff3-ee76-489f-a057-779cfb8e42b0 req-ec796712-68a0-4732-85d5-dd79241a8bf1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "128af7d9-155f-468d-9873-98c816f0df9e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:14:25 compute-1 nova_compute[225855]: 2026-01-20 15:14:25.796 225859 DEBUG nova.compute.manager [req-3f502ff3-ee76-489f-a057-779cfb8e42b0 req-ec796712-68a0-4732-85d5-dd79241a8bf1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] No waiting events found dispatching network-vif-plugged-9de5453d-b548-429c-8fc2-7b012cb8ebdf pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:14:25 compute-1 nova_compute[225855]: 2026-01-20 15:14:25.796 225859 WARNING nova.compute.manager [req-3f502ff3-ee76-489f-a057-779cfb8e42b0 req-ec796712-68a0-4732-85d5-dd79241a8bf1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] Received unexpected event network-vif-plugged-9de5453d-b548-429c-8fc2-7b012cb8ebdf for instance with vm_state active and task_state None.
Jan 20 15:14:26 compute-1 podman[303040]: 2026-01-20 15:14:26.003653116 +0000 UTC m=+0.045250855 container health_status 533cd8e0c3b8b7f910f15cef49e7f879ab46c12ce646a35628c15f13725eb500 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 20 15:14:26 compute-1 ceph-mon[81775]: pgmap v2749: 321 pgs: 321 active+clean; 519 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.8 MiB/s rd, 2.6 MiB/s wr, 151 op/s
Jan 20 15:14:26 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e400 e400: 3 total, 3 up, 3 in
Jan 20 15:14:26 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:14:26 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:14:26 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:14:26.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:14:27 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:14:27 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:14:27 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:14:27.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:14:27 compute-1 ceph-mon[81775]: osdmap e400: 3 total, 3 up, 3 in
Jan 20 15:14:27 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/1626811342' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:14:27 compute-1 nova_compute[225855]: 2026-01-20 15:14:27.869 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:14:27 compute-1 nova_compute[225855]: 2026-01-20 15:14:27.896 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:14:28 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e400 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:14:28 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:14:28 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:14:28 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:14:28.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:14:28 compute-1 ceph-mon[81775]: pgmap v2751: 321 pgs: 321 active+clean; 519 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.4 MiB/s wr, 136 op/s
Jan 20 15:14:29 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:14:29 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:14:29 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:14:29.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:14:30 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:14:30 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:14:30 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:14:30.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:14:31 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:14:31 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:14:31 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:14:31.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:14:31 compute-1 ceph-mon[81775]: pgmap v2752: 321 pgs: 321 active+clean; 543 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.7 MiB/s rd, 2.0 MiB/s wr, 150 op/s
Jan 20 15:14:31 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e401 e401: 3 total, 3 up, 3 in
Jan 20 15:14:32 compute-1 ceph-mon[81775]: osdmap e401: 3 total, 3 up, 3 in
Jan 20 15:14:32 compute-1 nova_compute[225855]: 2026-01-20 15:14:32.872 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:14:32 compute-1 nova_compute[225855]: 2026-01-20 15:14:32.898 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:14:32 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:14:32 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:14:32 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:14:32.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:14:33 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:14:33 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:14:33 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:14:33.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:14:33 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e401 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:14:33 compute-1 ceph-mon[81775]: pgmap v2754: 321 pgs: 321 active+clean; 565 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.2 MiB/s rd, 2.8 MiB/s wr, 186 op/s
Jan 20 15:14:33 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/3869517748' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:14:33 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/2503689261' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:14:34 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:14:34 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:14:34 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:14:34.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:14:35 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:14:35 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:14:35 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:14:35.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:14:36 compute-1 ceph-mon[81775]: pgmap v2755: 321 pgs: 321 active+clean; 565 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.4 MiB/s rd, 2.7 MiB/s wr, 87 op/s
Jan 20 15:14:36 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:14:36 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:14:36 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:14:36.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:14:37 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:14:37 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:14:37 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:14:37.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:14:37 compute-1 ceph-mon[81775]: pgmap v2756: 321 pgs: 321 active+clean; 565 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.2 MiB/s rd, 2.2 MiB/s wr, 100 op/s
Jan 20 15:14:37 compute-1 nova_compute[225855]: 2026-01-20 15:14:37.754 225859 DEBUG oslo_concurrency.lockutils [None req-a239317b-94da-476d-aa13-27e5dba4e4e5 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Acquiring lock "185fbaf7-4372-4e7c-b053-df9c4022514f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:14:37 compute-1 nova_compute[225855]: 2026-01-20 15:14:37.754 225859 DEBUG oslo_concurrency.lockutils [None req-a239317b-94da-476d-aa13-27e5dba4e4e5 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Lock "185fbaf7-4372-4e7c-b053-df9c4022514f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:14:37 compute-1 nova_compute[225855]: 2026-01-20 15:14:37.773 225859 DEBUG nova.compute.manager [None req-a239317b-94da-476d-aa13-27e5dba4e4e5 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] [instance: 185fbaf7-4372-4e7c-b053-df9c4022514f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 20 15:14:37 compute-1 nova_compute[225855]: 2026-01-20 15:14:37.839 225859 DEBUG oslo_concurrency.lockutils [None req-a239317b-94da-476d-aa13-27e5dba4e4e5 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:14:37 compute-1 nova_compute[225855]: 2026-01-20 15:14:37.840 225859 DEBUG oslo_concurrency.lockutils [None req-a239317b-94da-476d-aa13-27e5dba4e4e5 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:14:37 compute-1 nova_compute[225855]: 2026-01-20 15:14:37.846 225859 DEBUG nova.virt.hardware [None req-a239317b-94da-476d-aa13-27e5dba4e4e5 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 20 15:14:37 compute-1 nova_compute[225855]: 2026-01-20 15:14:37.846 225859 INFO nova.compute.claims [None req-a239317b-94da-476d-aa13-27e5dba4e4e5 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] [instance: 185fbaf7-4372-4e7c-b053-df9c4022514f] Claim successful on node compute-1.ctlplane.example.com
Jan 20 15:14:37 compute-1 nova_compute[225855]: 2026-01-20 15:14:37.875 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:14:37 compute-1 nova_compute[225855]: 2026-01-20 15:14:37.898 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:14:37 compute-1 nova_compute[225855]: 2026-01-20 15:14:37.987 225859 DEBUG oslo_concurrency.processutils [None req-a239317b-94da-476d-aa13-27e5dba4e4e5 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:14:38 compute-1 ovn_controller[130490]: 2026-01-20T15:14:38Z|00090|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:a8:1d:e9 10.100.0.4
Jan 20 15:14:38 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e401 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:14:38 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:14:38 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3112583804' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:14:38 compute-1 nova_compute[225855]: 2026-01-20 15:14:38.512 225859 DEBUG oslo_concurrency.processutils [None req-a239317b-94da-476d-aa13-27e5dba4e4e5 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.525s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:14:38 compute-1 nova_compute[225855]: 2026-01-20 15:14:38.519 225859 DEBUG nova.compute.provider_tree [None req-a239317b-94da-476d-aa13-27e5dba4e4e5 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:14:38 compute-1 nova_compute[225855]: 2026-01-20 15:14:38.535 225859 DEBUG nova.scheduler.client.report [None req-a239317b-94da-476d-aa13-27e5dba4e4e5 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:14:38 compute-1 nova_compute[225855]: 2026-01-20 15:14:38.563 225859 DEBUG oslo_concurrency.lockutils [None req-a239317b-94da-476d-aa13-27e5dba4e4e5 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.723s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:14:38 compute-1 nova_compute[225855]: 2026-01-20 15:14:38.564 225859 DEBUG nova.compute.manager [None req-a239317b-94da-476d-aa13-27e5dba4e4e5 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] [instance: 185fbaf7-4372-4e7c-b053-df9c4022514f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 20 15:14:38 compute-1 nova_compute[225855]: 2026-01-20 15:14:38.622 225859 DEBUG nova.compute.manager [None req-a239317b-94da-476d-aa13-27e5dba4e4e5 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] [instance: 185fbaf7-4372-4e7c-b053-df9c4022514f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 20 15:14:38 compute-1 nova_compute[225855]: 2026-01-20 15:14:38.623 225859 DEBUG nova.network.neutron [None req-a239317b-94da-476d-aa13-27e5dba4e4e5 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] [instance: 185fbaf7-4372-4e7c-b053-df9c4022514f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 20 15:14:38 compute-1 nova_compute[225855]: 2026-01-20 15:14:38.644 225859 INFO nova.virt.libvirt.driver [None req-a239317b-94da-476d-aa13-27e5dba4e4e5 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] [instance: 185fbaf7-4372-4e7c-b053-df9c4022514f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 20 15:14:38 compute-1 nova_compute[225855]: 2026-01-20 15:14:38.671 225859 DEBUG nova.compute.manager [None req-a239317b-94da-476d-aa13-27e5dba4e4e5 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] [instance: 185fbaf7-4372-4e7c-b053-df9c4022514f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 20 15:14:38 compute-1 nova_compute[225855]: 2026-01-20 15:14:38.731 225859 INFO nova.virt.block_device [None req-a239317b-94da-476d-aa13-27e5dba4e4e5 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] [instance: 185fbaf7-4372-4e7c-b053-df9c4022514f] Booting with volume 4994c109-f7d8-4642-bf6a-2f796e3851ba at /dev/vda
Jan 20 15:14:38 compute-1 nova_compute[225855]: 2026-01-20 15:14:38.911 225859 DEBUG os_brick.utils [None req-a239317b-94da-476d-aa13-27e5dba4e4e5 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.101', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-1.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Jan 20 15:14:38 compute-1 nova_compute[225855]: 2026-01-20 15:14:38.912 231081 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:14:38 compute-1 nova_compute[225855]: 2026-01-20 15:14:38.926 231081 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.014s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:14:38 compute-1 nova_compute[225855]: 2026-01-20 15:14:38.926 231081 DEBUG oslo.privsep.daemon [-] privsep: reply[42f04a66-1c2c-4bdc-830f-62e978504540]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:14:38 compute-1 nova_compute[225855]: 2026-01-20 15:14:38.928 231081 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:14:38 compute-1 nova_compute[225855]: 2026-01-20 15:14:38.937 231081 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:14:38 compute-1 nova_compute[225855]: 2026-01-20 15:14:38.937 231081 DEBUG oslo.privsep.daemon [-] privsep: reply[03d58a46-de6b-478a-bb5d-ed2120511c31]: (4, ('InitiatorName=iqn.1994-05.com.redhat:1821ea3dc03d', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:14:38 compute-1 nova_compute[225855]: 2026-01-20 15:14:38.940 231081 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:14:38 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:14:38 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:14:38 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:14:38.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:14:38 compute-1 nova_compute[225855]: 2026-01-20 15:14:38.951 231081 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:14:38 compute-1 nova_compute[225855]: 2026-01-20 15:14:38.952 231081 DEBUG oslo.privsep.daemon [-] privsep: reply[8ca4cba2-a11d-4d67-8d94-d6c96858c8b3]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:14:38 compute-1 nova_compute[225855]: 2026-01-20 15:14:38.953 231081 DEBUG oslo.privsep.daemon [-] privsep: reply[bdce2a29-abaa-4c98-9bcd-eb5bdbd4f81f]: (4, '870b1f1c-f19c-477b-b282-ee6eeba50974') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:14:38 compute-1 nova_compute[225855]: 2026-01-20 15:14:38.954 225859 DEBUG oslo_concurrency.processutils [None req-a239317b-94da-476d-aa13-27e5dba4e4e5 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:14:38 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/3112583804' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:14:38 compute-1 nova_compute[225855]: 2026-01-20 15:14:38.989 225859 DEBUG oslo_concurrency.processutils [None req-a239317b-94da-476d-aa13-27e5dba4e4e5 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] CMD "nvme version" returned: 0 in 0.035s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:14:38 compute-1 nova_compute[225855]: 2026-01-20 15:14:38.992 225859 DEBUG os_brick.initiator.connectors.lightos [None req-a239317b-94da-476d-aa13-27e5dba4e4e5 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Jan 20 15:14:38 compute-1 nova_compute[225855]: 2026-01-20 15:14:38.992 225859 DEBUG os_brick.initiator.connectors.lightos [None req-a239317b-94da-476d-aa13-27e5dba4e4e5 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Jan 20 15:14:38 compute-1 nova_compute[225855]: 2026-01-20 15:14:38.992 225859 DEBUG os_brick.initiator.connectors.lightos [None req-a239317b-94da-476d-aa13-27e5dba4e4e5 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:5350774e-8b5e-4dba-80a9-92d405981c1d dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Jan 20 15:14:38 compute-1 nova_compute[225855]: 2026-01-20 15:14:38.992 225859 DEBUG os_brick.utils [None req-a239317b-94da-476d-aa13-27e5dba4e4e5 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] <== get_connector_properties: return (81ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.101', 'host': 'compute-1.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:1821ea3dc03d', 'do_local_attach': False, 'nvme_hostid': '5350774e-8b5e-4dba-80a9-92d405981c1d', 'system uuid': '870b1f1c-f19c-477b-b282-ee6eeba50974', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:5350774e-8b5e-4dba-80a9-92d405981c1d', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Jan 20 15:14:38 compute-1 nova_compute[225855]: 2026-01-20 15:14:38.993 225859 DEBUG nova.virt.block_device [None req-a239317b-94da-476d-aa13-27e5dba4e4e5 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] [instance: 185fbaf7-4372-4e7c-b053-df9c4022514f] Updating existing volume attachment record: b71943fe-2a88-4bcf-8d86-3af9f4ede56c _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Jan 20 15:14:39 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:14:39 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:14:39 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:14:39.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:14:39 compute-1 nova_compute[225855]: 2026-01-20 15:14:39.488 225859 DEBUG nova.policy [None req-a239317b-94da-476d-aa13-27e5dba4e4e5 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'bf422e55e158420cbdae75f07a3bb97a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a49638950e1543fa8e0d251af5479623', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 20 15:14:40 compute-1 nova_compute[225855]: 2026-01-20 15:14:40.014 225859 DEBUG nova.compute.manager [None req-a239317b-94da-476d-aa13-27e5dba4e4e5 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] [instance: 185fbaf7-4372-4e7c-b053-df9c4022514f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 20 15:14:40 compute-1 nova_compute[225855]: 2026-01-20 15:14:40.016 225859 DEBUG nova.virt.libvirt.driver [None req-a239317b-94da-476d-aa13-27e5dba4e4e5 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] [instance: 185fbaf7-4372-4e7c-b053-df9c4022514f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 20 15:14:40 compute-1 nova_compute[225855]: 2026-01-20 15:14:40.017 225859 INFO nova.virt.libvirt.driver [None req-a239317b-94da-476d-aa13-27e5dba4e4e5 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] [instance: 185fbaf7-4372-4e7c-b053-df9c4022514f] Creating image(s)
Jan 20 15:14:40 compute-1 nova_compute[225855]: 2026-01-20 15:14:40.017 225859 DEBUG nova.virt.libvirt.driver [None req-a239317b-94da-476d-aa13-27e5dba4e4e5 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] [instance: 185fbaf7-4372-4e7c-b053-df9c4022514f] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Jan 20 15:14:40 compute-1 nova_compute[225855]: 2026-01-20 15:14:40.018 225859 DEBUG nova.virt.libvirt.driver [None req-a239317b-94da-476d-aa13-27e5dba4e4e5 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] [instance: 185fbaf7-4372-4e7c-b053-df9c4022514f] Ensure instance console log exists: /var/lib/nova/instances/185fbaf7-4372-4e7c-b053-df9c4022514f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 20 15:14:40 compute-1 nova_compute[225855]: 2026-01-20 15:14:40.019 225859 DEBUG oslo_concurrency.lockutils [None req-a239317b-94da-476d-aa13-27e5dba4e4e5 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:14:40 compute-1 nova_compute[225855]: 2026-01-20 15:14:40.019 225859 DEBUG oslo_concurrency.lockutils [None req-a239317b-94da-476d-aa13-27e5dba4e4e5 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:14:40 compute-1 nova_compute[225855]: 2026-01-20 15:14:40.020 225859 DEBUG oslo_concurrency.lockutils [None req-a239317b-94da-476d-aa13-27e5dba4e4e5 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:14:40 compute-1 nova_compute[225855]: 2026-01-20 15:14:40.236 225859 DEBUG nova.network.neutron [None req-a239317b-94da-476d-aa13-27e5dba4e4e5 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] [instance: 185fbaf7-4372-4e7c-b053-df9c4022514f] Successfully created port: 2cfaf09f-1f9e-489f-b7d3-43166c005796 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 20 15:14:40 compute-1 ceph-mon[81775]: pgmap v2757: 321 pgs: 321 active+clean; 565 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.2 MiB/s rd, 2.2 MiB/s wr, 97 op/s
Jan 20 15:14:40 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/185856105' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:14:40 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:14:40 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:14:40 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:14:40.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:14:40 compute-1 sudo[303096]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:14:40 compute-1 sudo[303096]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:14:40 compute-1 sudo[303096]: pam_unix(sudo:session): session closed for user root
Jan 20 15:14:41 compute-1 sudo[303121]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:14:41 compute-1 sudo[303121]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:14:41 compute-1 sudo[303121]: pam_unix(sudo:session): session closed for user root
Jan 20 15:14:41 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:14:41 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:14:41 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:14:41.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:14:41 compute-1 ceph-mon[81775]: pgmap v2758: 321 pgs: 321 active+clean; 566 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.5 MiB/s rd, 1.2 MiB/s wr, 122 op/s
Jan 20 15:14:41 compute-1 nova_compute[225855]: 2026-01-20 15:14:41.386 225859 DEBUG nova.network.neutron [None req-a239317b-94da-476d-aa13-27e5dba4e4e5 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] [instance: 185fbaf7-4372-4e7c-b053-df9c4022514f] Successfully updated port: 2cfaf09f-1f9e-489f-b7d3-43166c005796 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 20 15:14:41 compute-1 nova_compute[225855]: 2026-01-20 15:14:41.411 225859 DEBUG oslo_concurrency.lockutils [None req-a239317b-94da-476d-aa13-27e5dba4e4e5 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Acquiring lock "refresh_cache-185fbaf7-4372-4e7c-b053-df9c4022514f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:14:41 compute-1 nova_compute[225855]: 2026-01-20 15:14:41.411 225859 DEBUG oslo_concurrency.lockutils [None req-a239317b-94da-476d-aa13-27e5dba4e4e5 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Acquired lock "refresh_cache-185fbaf7-4372-4e7c-b053-df9c4022514f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:14:41 compute-1 nova_compute[225855]: 2026-01-20 15:14:41.411 225859 DEBUG nova.network.neutron [None req-a239317b-94da-476d-aa13-27e5dba4e4e5 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] [instance: 185fbaf7-4372-4e7c-b053-df9c4022514f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 15:14:41 compute-1 nova_compute[225855]: 2026-01-20 15:14:41.477 225859 DEBUG nova.compute.manager [req-63d90287-066a-4018-8e15-22f333d63078 req-85c42175-22fc-44e7-a425-2b7eb594f8fa 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 185fbaf7-4372-4e7c-b053-df9c4022514f] Received event network-changed-2cfaf09f-1f9e-489f-b7d3-43166c005796 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:14:41 compute-1 nova_compute[225855]: 2026-01-20 15:14:41.478 225859 DEBUG nova.compute.manager [req-63d90287-066a-4018-8e15-22f333d63078 req-85c42175-22fc-44e7-a425-2b7eb594f8fa 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 185fbaf7-4372-4e7c-b053-df9c4022514f] Refreshing instance network info cache due to event network-changed-2cfaf09f-1f9e-489f-b7d3-43166c005796. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 15:14:41 compute-1 nova_compute[225855]: 2026-01-20 15:14:41.478 225859 DEBUG oslo_concurrency.lockutils [req-63d90287-066a-4018-8e15-22f333d63078 req-85c42175-22fc-44e7-a425-2b7eb594f8fa 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-185fbaf7-4372-4e7c-b053-df9c4022514f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:14:41 compute-1 nova_compute[225855]: 2026-01-20 15:14:41.548 225859 DEBUG nova.network.neutron [None req-a239317b-94da-476d-aa13-27e5dba4e4e5 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] [instance: 185fbaf7-4372-4e7c-b053-df9c4022514f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 20 15:14:41 compute-1 sudo[303148]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:14:41 compute-1 sudo[303148]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:14:41 compute-1 sudo[303148]: pam_unix(sudo:session): session closed for user root
Jan 20 15:14:42 compute-1 podman[303147]: 2026-01-20 15:14:42.03844322 +0000 UTC m=+0.085710073 container health_status 72be95c12e041eb2cd1a16ad70dfa1391d9f03a5f3180a614ead9459dcfcd46f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 15:14:42 compute-1 sudo[303187]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:14:42 compute-1 sudo[303187]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:14:42 compute-1 sudo[303187]: pam_unix(sudo:session): session closed for user root
Jan 20 15:14:42 compute-1 sudo[303221]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:14:42 compute-1 sudo[303221]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:14:42 compute-1 sudo[303221]: pam_unix(sudo:session): session closed for user root
Jan 20 15:14:42 compute-1 sudo[303246]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 20 15:14:42 compute-1 sudo[303246]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:14:42 compute-1 sudo[303246]: pam_unix(sudo:session): session closed for user root
Jan 20 15:14:42 compute-1 nova_compute[225855]: 2026-01-20 15:14:42.567 225859 DEBUG nova.network.neutron [None req-a239317b-94da-476d-aa13-27e5dba4e4e5 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] [instance: 185fbaf7-4372-4e7c-b053-df9c4022514f] Updating instance_info_cache with network_info: [{"id": "2cfaf09f-1f9e-489f-b7d3-43166c005796", "address": "fa:16:3e:6b:b0:3d", "network": {"id": "b677f1a9-dbaa-4373-8466-bd9ccf067b91", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-408170906-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a49638950e1543fa8e0d251af5479623", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2cfaf09f-1f", "ovs_interfaceid": "2cfaf09f-1f9e-489f-b7d3-43166c005796", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:14:42 compute-1 nova_compute[225855]: 2026-01-20 15:14:42.588 225859 DEBUG oslo_concurrency.lockutils [None req-a239317b-94da-476d-aa13-27e5dba4e4e5 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Releasing lock "refresh_cache-185fbaf7-4372-4e7c-b053-df9c4022514f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:14:42 compute-1 nova_compute[225855]: 2026-01-20 15:14:42.589 225859 DEBUG nova.compute.manager [None req-a239317b-94da-476d-aa13-27e5dba4e4e5 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] [instance: 185fbaf7-4372-4e7c-b053-df9c4022514f] Instance network_info: |[{"id": "2cfaf09f-1f9e-489f-b7d3-43166c005796", "address": "fa:16:3e:6b:b0:3d", "network": {"id": "b677f1a9-dbaa-4373-8466-bd9ccf067b91", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-408170906-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a49638950e1543fa8e0d251af5479623", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2cfaf09f-1f", "ovs_interfaceid": "2cfaf09f-1f9e-489f-b7d3-43166c005796", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 20 15:14:42 compute-1 nova_compute[225855]: 2026-01-20 15:14:42.589 225859 DEBUG oslo_concurrency.lockutils [req-63d90287-066a-4018-8e15-22f333d63078 req-85c42175-22fc-44e7-a425-2b7eb594f8fa 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-185fbaf7-4372-4e7c-b053-df9c4022514f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:14:42 compute-1 nova_compute[225855]: 2026-01-20 15:14:42.590 225859 DEBUG nova.network.neutron [req-63d90287-066a-4018-8e15-22f333d63078 req-85c42175-22fc-44e7-a425-2b7eb594f8fa 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 185fbaf7-4372-4e7c-b053-df9c4022514f] Refreshing network info cache for port 2cfaf09f-1f9e-489f-b7d3-43166c005796 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 15:14:42 compute-1 nova_compute[225855]: 2026-01-20 15:14:42.594 225859 DEBUG nova.virt.libvirt.driver [None req-a239317b-94da-476d-aa13-27e5dba4e4e5 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] [instance: 185fbaf7-4372-4e7c-b053-df9c4022514f] Start _get_guest_xml network_info=[{"id": "2cfaf09f-1f9e-489f-b7d3-43166c005796", "address": "fa:16:3e:6b:b0:3d", "network": {"id": "b677f1a9-dbaa-4373-8466-bd9ccf067b91", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-408170906-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a49638950e1543fa8e0d251af5479623", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2cfaf09f-1f", "ovs_interfaceid": "2cfaf09f-1f9e-489f-b7d3-43166c005796", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'delete_on_termination': False, 'device_type': 'disk', 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-4994c109-f7d8-4642-bf6a-2f796e3851ba', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '4994c109-f7d8-4642-bf6a-2f796e3851ba', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '185fbaf7-4372-4e7c-b053-df9c4022514f', 'attached_at': '', 'detached_at': '', 'volume_id': '4994c109-f7d8-4642-bf6a-2f796e3851ba', 'serial': '4994c109-f7d8-4642-bf6a-2f796e3851ba'}, 'guest_format': None, 'boot_index': 0, 'mount_device': '/dev/vda', 'attachment_id': 'b71943fe-2a88-4bcf-8d86-3af9f4ede56c', 'disk_bus': 'virtio', 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 20 15:14:42 compute-1 nova_compute[225855]: 2026-01-20 15:14:42.599 225859 WARNING nova.virt.libvirt.driver [None req-a239317b-94da-476d-aa13-27e5dba4e4e5 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 15:14:42 compute-1 nova_compute[225855]: 2026-01-20 15:14:42.605 225859 DEBUG nova.virt.libvirt.host [None req-a239317b-94da-476d-aa13-27e5dba4e4e5 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 20 15:14:42 compute-1 nova_compute[225855]: 2026-01-20 15:14:42.606 225859 DEBUG nova.virt.libvirt.host [None req-a239317b-94da-476d-aa13-27e5dba4e4e5 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 20 15:14:42 compute-1 nova_compute[225855]: 2026-01-20 15:14:42.615 225859 DEBUG nova.virt.libvirt.host [None req-a239317b-94da-476d-aa13-27e5dba4e4e5 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 20 15:14:42 compute-1 nova_compute[225855]: 2026-01-20 15:14:42.616 225859 DEBUG nova.virt.libvirt.host [None req-a239317b-94da-476d-aa13-27e5dba4e4e5 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 20 15:14:42 compute-1 nova_compute[225855]: 2026-01-20 15:14:42.617 225859 DEBUG nova.virt.libvirt.driver [None req-a239317b-94da-476d-aa13-27e5dba4e4e5 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 20 15:14:42 compute-1 nova_compute[225855]: 2026-01-20 15:14:42.617 225859 DEBUG nova.virt.hardware [None req-a239317b-94da-476d-aa13-27e5dba4e4e5 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-20T14:21:55Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='522deaab-a741-4dbb-932d-d8b13a211c33',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 20 15:14:42 compute-1 nova_compute[225855]: 2026-01-20 15:14:42.618 225859 DEBUG nova.virt.hardware [None req-a239317b-94da-476d-aa13-27e5dba4e4e5 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 20 15:14:42 compute-1 nova_compute[225855]: 2026-01-20 15:14:42.618 225859 DEBUG nova.virt.hardware [None req-a239317b-94da-476d-aa13-27e5dba4e4e5 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 20 15:14:42 compute-1 nova_compute[225855]: 2026-01-20 15:14:42.618 225859 DEBUG nova.virt.hardware [None req-a239317b-94da-476d-aa13-27e5dba4e4e5 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 20 15:14:42 compute-1 nova_compute[225855]: 2026-01-20 15:14:42.618 225859 DEBUG nova.virt.hardware [None req-a239317b-94da-476d-aa13-27e5dba4e4e5 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 20 15:14:42 compute-1 nova_compute[225855]: 2026-01-20 15:14:42.619 225859 DEBUG nova.virt.hardware [None req-a239317b-94da-476d-aa13-27e5dba4e4e5 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 20 15:14:42 compute-1 nova_compute[225855]: 2026-01-20 15:14:42.619 225859 DEBUG nova.virt.hardware [None req-a239317b-94da-476d-aa13-27e5dba4e4e5 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 20 15:14:42 compute-1 nova_compute[225855]: 2026-01-20 15:14:42.619 225859 DEBUG nova.virt.hardware [None req-a239317b-94da-476d-aa13-27e5dba4e4e5 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 20 15:14:42 compute-1 nova_compute[225855]: 2026-01-20 15:14:42.619 225859 DEBUG nova.virt.hardware [None req-a239317b-94da-476d-aa13-27e5dba4e4e5 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 20 15:14:42 compute-1 nova_compute[225855]: 2026-01-20 15:14:42.619 225859 DEBUG nova.virt.hardware [None req-a239317b-94da-476d-aa13-27e5dba4e4e5 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 20 15:14:42 compute-1 nova_compute[225855]: 2026-01-20 15:14:42.620 225859 DEBUG nova.virt.hardware [None req-a239317b-94da-476d-aa13-27e5dba4e4e5 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 20 15:14:42 compute-1 ceph-mon[81775]: pgmap v2759: 321 pgs: 321 active+clean; 566 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.6 MiB/s rd, 46 KiB/s wr, 151 op/s
Jan 20 15:14:42 compute-1 nova_compute[225855]: 2026-01-20 15:14:42.879 225859 DEBUG nova.storage.rbd_utils [None req-a239317b-94da-476d-aa13-27e5dba4e4e5 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] rbd image 185fbaf7-4372-4e7c-b053-df9c4022514f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:14:42 compute-1 nova_compute[225855]: 2026-01-20 15:14:42.883 225859 DEBUG oslo_concurrency.processutils [None req-a239317b-94da-476d-aa13-27e5dba4e4e5 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:14:42 compute-1 nova_compute[225855]: 2026-01-20 15:14:42.907 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:14:42 compute-1 nova_compute[225855]: 2026-01-20 15:14:42.909 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 15:14:42 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:14:42 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:14:42 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:14:42.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:14:43 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:14:43 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:14:43 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:14:43.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:14:43 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 15:14:43 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2934712307' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:14:43 compute-1 nova_compute[225855]: 2026-01-20 15:14:43.314 225859 DEBUG oslo_concurrency.processutils [None req-a239317b-94da-476d-aa13-27e5dba4e4e5 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:14:43 compute-1 nova_compute[225855]: 2026-01-20 15:14:43.341 225859 DEBUG nova.virt.libvirt.vif [None req-a239317b-94da-476d-aa13-27e5dba4e4e5 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T15:14:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-1033173342',display_name='tempest-TestVolumeBootPattern-server-1033173342',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-1033173342',id=188,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBjkv3PM31l7/LOeidHCDov4vvdGwqOT15IVVbWearXBCn3jQz2xB6ix8iz1XP+iiPXyhWuw0LpMPT9jQN2b0mvhqeZTHErGcz1VZLskRcT6iqcekmFxWykFxr44bv68XA==',key_name='tempest-TestVolumeBootPattern-474773317',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a49638950e1543fa8e0d251af5479623',ramdisk_id='',reservation_id='r-pz9crh7i',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-194644003',owner_user_name='tempest-TestVolumeBootPattern-194644003-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T15:14:38Z,user_data=None,user_id='bf422e55e158420cbdae75f07a3bb97a',uuid=185fbaf7-4372-4e7c-b053-df9c4022514f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2cfaf09f-1f9e-489f-b7d3-43166c005796", "address": "fa:16:3e:6b:b0:3d", "network": {"id": "b677f1a9-dbaa-4373-8466-bd9ccf067b91", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-408170906-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"a49638950e1543fa8e0d251af5479623", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2cfaf09f-1f", "ovs_interfaceid": "2cfaf09f-1f9e-489f-b7d3-43166c005796", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 20 15:14:43 compute-1 nova_compute[225855]: 2026-01-20 15:14:43.341 225859 DEBUG nova.network.os_vif_util [None req-a239317b-94da-476d-aa13-27e5dba4e4e5 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Converting VIF {"id": "2cfaf09f-1f9e-489f-b7d3-43166c005796", "address": "fa:16:3e:6b:b0:3d", "network": {"id": "b677f1a9-dbaa-4373-8466-bd9ccf067b91", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-408170906-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a49638950e1543fa8e0d251af5479623", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2cfaf09f-1f", "ovs_interfaceid": "2cfaf09f-1f9e-489f-b7d3-43166c005796", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 15:14:43 compute-1 nova_compute[225855]: 2026-01-20 15:14:43.342 225859 DEBUG nova.network.os_vif_util [None req-a239317b-94da-476d-aa13-27e5dba4e4e5 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6b:b0:3d,bridge_name='br-int',has_traffic_filtering=True,id=2cfaf09f-1f9e-489f-b7d3-43166c005796,network=Network(b677f1a9-dbaa-4373-8466-bd9ccf067b91),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2cfaf09f-1f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 15:14:43 compute-1 nova_compute[225855]: 2026-01-20 15:14:43.344 225859 DEBUG nova.objects.instance [None req-a239317b-94da-476d-aa13-27e5dba4e4e5 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Lazy-loading 'pci_devices' on Instance uuid 185fbaf7-4372-4e7c-b053-df9c4022514f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:14:43 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e401 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:14:43 compute-1 nova_compute[225855]: 2026-01-20 15:14:43.362 225859 DEBUG nova.virt.libvirt.driver [None req-a239317b-94da-476d-aa13-27e5dba4e4e5 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] [instance: 185fbaf7-4372-4e7c-b053-df9c4022514f] End _get_guest_xml xml=<domain type="kvm">
Jan 20 15:14:43 compute-1 nova_compute[225855]:   <uuid>185fbaf7-4372-4e7c-b053-df9c4022514f</uuid>
Jan 20 15:14:43 compute-1 nova_compute[225855]:   <name>instance-000000bc</name>
Jan 20 15:14:43 compute-1 nova_compute[225855]:   <memory>131072</memory>
Jan 20 15:14:43 compute-1 nova_compute[225855]:   <vcpu>1</vcpu>
Jan 20 15:14:43 compute-1 nova_compute[225855]:   <metadata>
Jan 20 15:14:43 compute-1 nova_compute[225855]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 15:14:43 compute-1 nova_compute[225855]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 15:14:43 compute-1 nova_compute[225855]:       <nova:name>tempest-TestVolumeBootPattern-server-1033173342</nova:name>
Jan 20 15:14:43 compute-1 nova_compute[225855]:       <nova:creationTime>2026-01-20 15:14:42</nova:creationTime>
Jan 20 15:14:43 compute-1 nova_compute[225855]:       <nova:flavor name="m1.nano">
Jan 20 15:14:43 compute-1 nova_compute[225855]:         <nova:memory>128</nova:memory>
Jan 20 15:14:43 compute-1 nova_compute[225855]:         <nova:disk>1</nova:disk>
Jan 20 15:14:43 compute-1 nova_compute[225855]:         <nova:swap>0</nova:swap>
Jan 20 15:14:43 compute-1 nova_compute[225855]:         <nova:ephemeral>0</nova:ephemeral>
Jan 20 15:14:43 compute-1 nova_compute[225855]:         <nova:vcpus>1</nova:vcpus>
Jan 20 15:14:43 compute-1 nova_compute[225855]:       </nova:flavor>
Jan 20 15:14:43 compute-1 nova_compute[225855]:       <nova:owner>
Jan 20 15:14:43 compute-1 nova_compute[225855]:         <nova:user uuid="bf422e55e158420cbdae75f07a3bb97a">tempest-TestVolumeBootPattern-194644003-project-member</nova:user>
Jan 20 15:14:43 compute-1 nova_compute[225855]:         <nova:project uuid="a49638950e1543fa8e0d251af5479623">tempest-TestVolumeBootPattern-194644003</nova:project>
Jan 20 15:14:43 compute-1 nova_compute[225855]:       </nova:owner>
Jan 20 15:14:43 compute-1 nova_compute[225855]:       <nova:ports>
Jan 20 15:14:43 compute-1 nova_compute[225855]:         <nova:port uuid="2cfaf09f-1f9e-489f-b7d3-43166c005796">
Jan 20 15:14:43 compute-1 nova_compute[225855]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Jan 20 15:14:43 compute-1 nova_compute[225855]:         </nova:port>
Jan 20 15:14:43 compute-1 nova_compute[225855]:       </nova:ports>
Jan 20 15:14:43 compute-1 nova_compute[225855]:     </nova:instance>
Jan 20 15:14:43 compute-1 nova_compute[225855]:   </metadata>
Jan 20 15:14:43 compute-1 nova_compute[225855]:   <sysinfo type="smbios">
Jan 20 15:14:43 compute-1 nova_compute[225855]:     <system>
Jan 20 15:14:43 compute-1 nova_compute[225855]:       <entry name="manufacturer">RDO</entry>
Jan 20 15:14:43 compute-1 nova_compute[225855]:       <entry name="product">OpenStack Compute</entry>
Jan 20 15:14:43 compute-1 nova_compute[225855]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 15:14:43 compute-1 nova_compute[225855]:       <entry name="serial">185fbaf7-4372-4e7c-b053-df9c4022514f</entry>
Jan 20 15:14:43 compute-1 nova_compute[225855]:       <entry name="uuid">185fbaf7-4372-4e7c-b053-df9c4022514f</entry>
Jan 20 15:14:43 compute-1 nova_compute[225855]:       <entry name="family">Virtual Machine</entry>
Jan 20 15:14:43 compute-1 nova_compute[225855]:     </system>
Jan 20 15:14:43 compute-1 nova_compute[225855]:   </sysinfo>
Jan 20 15:14:43 compute-1 nova_compute[225855]:   <os>
Jan 20 15:14:43 compute-1 nova_compute[225855]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 20 15:14:43 compute-1 nova_compute[225855]:     <boot dev="hd"/>
Jan 20 15:14:43 compute-1 nova_compute[225855]:     <smbios mode="sysinfo"/>
Jan 20 15:14:43 compute-1 nova_compute[225855]:   </os>
Jan 20 15:14:43 compute-1 nova_compute[225855]:   <features>
Jan 20 15:14:43 compute-1 nova_compute[225855]:     <acpi/>
Jan 20 15:14:43 compute-1 nova_compute[225855]:     <apic/>
Jan 20 15:14:43 compute-1 nova_compute[225855]:     <vmcoreinfo/>
Jan 20 15:14:43 compute-1 nova_compute[225855]:   </features>
Jan 20 15:14:43 compute-1 nova_compute[225855]:   <clock offset="utc">
Jan 20 15:14:43 compute-1 nova_compute[225855]:     <timer name="pit" tickpolicy="delay"/>
Jan 20 15:14:43 compute-1 nova_compute[225855]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 20 15:14:43 compute-1 nova_compute[225855]:     <timer name="hpet" present="no"/>
Jan 20 15:14:43 compute-1 nova_compute[225855]:   </clock>
Jan 20 15:14:43 compute-1 nova_compute[225855]:   <cpu mode="custom" match="exact">
Jan 20 15:14:43 compute-1 nova_compute[225855]:     <model>Nehalem</model>
Jan 20 15:14:43 compute-1 nova_compute[225855]:     <topology sockets="1" cores="1" threads="1"/>
Jan 20 15:14:43 compute-1 nova_compute[225855]:   </cpu>
Jan 20 15:14:43 compute-1 nova_compute[225855]:   <devices>
Jan 20 15:14:43 compute-1 nova_compute[225855]:     <disk type="network" device="cdrom">
Jan 20 15:14:43 compute-1 nova_compute[225855]:       <driver type="raw" cache="none"/>
Jan 20 15:14:43 compute-1 nova_compute[225855]:       <source protocol="rbd" name="vms/185fbaf7-4372-4e7c-b053-df9c4022514f_disk.config">
Jan 20 15:14:43 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 15:14:43 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 15:14:43 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 15:14:43 compute-1 nova_compute[225855]:       </source>
Jan 20 15:14:43 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 15:14:43 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 15:14:43 compute-1 nova_compute[225855]:       </auth>
Jan 20 15:14:43 compute-1 nova_compute[225855]:       <target dev="sda" bus="sata"/>
Jan 20 15:14:43 compute-1 nova_compute[225855]:     </disk>
Jan 20 15:14:43 compute-1 nova_compute[225855]:     <disk type="network" device="disk">
Jan 20 15:14:43 compute-1 nova_compute[225855]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 20 15:14:43 compute-1 nova_compute[225855]:       <source protocol="rbd" name="volumes/volume-4994c109-f7d8-4642-bf6a-2f796e3851ba">
Jan 20 15:14:43 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 15:14:43 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 15:14:43 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 15:14:43 compute-1 nova_compute[225855]:       </source>
Jan 20 15:14:43 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 15:14:43 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 15:14:43 compute-1 nova_compute[225855]:       </auth>
Jan 20 15:14:43 compute-1 nova_compute[225855]:       <target dev="vda" bus="virtio"/>
Jan 20 15:14:43 compute-1 nova_compute[225855]:       <serial>4994c109-f7d8-4642-bf6a-2f796e3851ba</serial>
Jan 20 15:14:43 compute-1 nova_compute[225855]:     </disk>
Jan 20 15:14:43 compute-1 nova_compute[225855]:     <interface type="ethernet">
Jan 20 15:14:43 compute-1 nova_compute[225855]:       <mac address="fa:16:3e:6b:b0:3d"/>
Jan 20 15:14:43 compute-1 nova_compute[225855]:       <model type="virtio"/>
Jan 20 15:14:43 compute-1 nova_compute[225855]:       <driver name="vhost" rx_queue_size="512"/>
Jan 20 15:14:43 compute-1 nova_compute[225855]:       <mtu size="1442"/>
Jan 20 15:14:43 compute-1 nova_compute[225855]:       <target dev="tap2cfaf09f-1f"/>
Jan 20 15:14:43 compute-1 nova_compute[225855]:     </interface>
Jan 20 15:14:43 compute-1 nova_compute[225855]:     <serial type="pty">
Jan 20 15:14:43 compute-1 nova_compute[225855]:       <log file="/var/lib/nova/instances/185fbaf7-4372-4e7c-b053-df9c4022514f/console.log" append="off"/>
Jan 20 15:14:43 compute-1 nova_compute[225855]:     </serial>
Jan 20 15:14:43 compute-1 nova_compute[225855]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 20 15:14:43 compute-1 nova_compute[225855]:     <video>
Jan 20 15:14:43 compute-1 nova_compute[225855]:       <model type="virtio"/>
Jan 20 15:14:43 compute-1 nova_compute[225855]:     </video>
Jan 20 15:14:43 compute-1 nova_compute[225855]:     <input type="tablet" bus="usb"/>
Jan 20 15:14:43 compute-1 nova_compute[225855]:     <rng model="virtio">
Jan 20 15:14:43 compute-1 nova_compute[225855]:       <backend model="random">/dev/urandom</backend>
Jan 20 15:14:43 compute-1 nova_compute[225855]:     </rng>
Jan 20 15:14:43 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root"/>
Jan 20 15:14:43 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:14:43 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:14:43 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:14:43 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:14:43 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:14:43 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:14:43 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:14:43 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:14:43 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:14:43 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:14:43 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:14:43 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:14:43 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:14:43 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:14:43 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:14:43 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:14:43 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:14:43 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:14:43 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:14:43 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:14:43 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:14:43 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:14:43 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:14:43 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:14:43 compute-1 nova_compute[225855]:     <controller type="usb" index="0"/>
Jan 20 15:14:43 compute-1 nova_compute[225855]:     <memballoon model="virtio">
Jan 20 15:14:43 compute-1 nova_compute[225855]:       <stats period="10"/>
Jan 20 15:14:43 compute-1 nova_compute[225855]:     </memballoon>
Jan 20 15:14:43 compute-1 nova_compute[225855]:   </devices>
Jan 20 15:14:43 compute-1 nova_compute[225855]: </domain>
Jan 20 15:14:43 compute-1 nova_compute[225855]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 20 15:14:43 compute-1 nova_compute[225855]: 2026-01-20 15:14:43.364 225859 DEBUG nova.compute.manager [None req-a239317b-94da-476d-aa13-27e5dba4e4e5 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] [instance: 185fbaf7-4372-4e7c-b053-df9c4022514f] Preparing to wait for external event network-vif-plugged-2cfaf09f-1f9e-489f-b7d3-43166c005796 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 20 15:14:43 compute-1 nova_compute[225855]: 2026-01-20 15:14:43.364 225859 DEBUG oslo_concurrency.lockutils [None req-a239317b-94da-476d-aa13-27e5dba4e4e5 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Acquiring lock "185fbaf7-4372-4e7c-b053-df9c4022514f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:14:43 compute-1 nova_compute[225855]: 2026-01-20 15:14:43.364 225859 DEBUG oslo_concurrency.lockutils [None req-a239317b-94da-476d-aa13-27e5dba4e4e5 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Lock "185fbaf7-4372-4e7c-b053-df9c4022514f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:14:43 compute-1 nova_compute[225855]: 2026-01-20 15:14:43.365 225859 DEBUG oslo_concurrency.lockutils [None req-a239317b-94da-476d-aa13-27e5dba4e4e5 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Lock "185fbaf7-4372-4e7c-b053-df9c4022514f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:14:43 compute-1 nova_compute[225855]: 2026-01-20 15:14:43.366 225859 DEBUG nova.virt.libvirt.vif [None req-a239317b-94da-476d-aa13-27e5dba4e4e5 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T15:14:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-1033173342',display_name='tempest-TestVolumeBootPattern-server-1033173342',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-1033173342',id=188,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBjkv3PM31l7/LOeidHCDov4vvdGwqOT15IVVbWearXBCn3jQz2xB6ix8iz1XP+iiPXyhWuw0LpMPT9jQN2b0mvhqeZTHErGcz1VZLskRcT6iqcekmFxWykFxr44bv68XA==',key_name='tempest-TestVolumeBootPattern-474773317',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a49638950e1543fa8e0d251af5479623',ramdisk_id='',reservation_id='r-pz9crh7i',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-194644003',owner_user_name='tempest-TestVolumeBootPattern-194644003-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T15:14:38Z,user_data=None,user_id='bf422e55e158420cbdae75f07a3bb97a',uuid=185fbaf7-4372-4e7c-b053-df9c4022514f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2cfaf09f-1f9e-489f-b7d3-43166c005796", "address": "fa:16:3e:6b:b0:3d", "network": {"id": "b677f1a9-dbaa-4373-8466-bd9ccf067b91", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-408170906-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"a49638950e1543fa8e0d251af5479623", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2cfaf09f-1f", "ovs_interfaceid": "2cfaf09f-1f9e-489f-b7d3-43166c005796", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 20 15:14:43 compute-1 nova_compute[225855]: 2026-01-20 15:14:43.366 225859 DEBUG nova.network.os_vif_util [None req-a239317b-94da-476d-aa13-27e5dba4e4e5 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Converting VIF {"id": "2cfaf09f-1f9e-489f-b7d3-43166c005796", "address": "fa:16:3e:6b:b0:3d", "network": {"id": "b677f1a9-dbaa-4373-8466-bd9ccf067b91", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-408170906-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a49638950e1543fa8e0d251af5479623", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2cfaf09f-1f", "ovs_interfaceid": "2cfaf09f-1f9e-489f-b7d3-43166c005796", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 15:14:43 compute-1 nova_compute[225855]: 2026-01-20 15:14:43.367 225859 DEBUG nova.network.os_vif_util [None req-a239317b-94da-476d-aa13-27e5dba4e4e5 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6b:b0:3d,bridge_name='br-int',has_traffic_filtering=True,id=2cfaf09f-1f9e-489f-b7d3-43166c005796,network=Network(b677f1a9-dbaa-4373-8466-bd9ccf067b91),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2cfaf09f-1f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 15:14:43 compute-1 nova_compute[225855]: 2026-01-20 15:14:43.367 225859 DEBUG os_vif [None req-a239317b-94da-476d-aa13-27e5dba4e4e5 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6b:b0:3d,bridge_name='br-int',has_traffic_filtering=True,id=2cfaf09f-1f9e-489f-b7d3-43166c005796,network=Network(b677f1a9-dbaa-4373-8466-bd9ccf067b91),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2cfaf09f-1f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 20 15:14:43 compute-1 nova_compute[225855]: 2026-01-20 15:14:43.367 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:14:43 compute-1 nova_compute[225855]: 2026-01-20 15:14:43.368 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:14:43 compute-1 nova_compute[225855]: 2026-01-20 15:14:43.368 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 15:14:43 compute-1 nova_compute[225855]: 2026-01-20 15:14:43.371 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:14:43 compute-1 nova_compute[225855]: 2026-01-20 15:14:43.371 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2cfaf09f-1f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:14:43 compute-1 nova_compute[225855]: 2026-01-20 15:14:43.371 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2cfaf09f-1f, col_values=(('external_ids', {'iface-id': '2cfaf09f-1f9e-489f-b7d3-43166c005796', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:6b:b0:3d', 'vm-uuid': '185fbaf7-4372-4e7c-b053-df9c4022514f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:14:43 compute-1 nova_compute[225855]: 2026-01-20 15:14:43.373 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:14:43 compute-1 NetworkManager[49104]: <info>  [1768922083.3741] manager: (tap2cfaf09f-1f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/342)
Jan 20 15:14:43 compute-1 nova_compute[225855]: 2026-01-20 15:14:43.375 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 15:14:43 compute-1 nova_compute[225855]: 2026-01-20 15:14:43.379 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:14:43 compute-1 nova_compute[225855]: 2026-01-20 15:14:43.380 225859 INFO os_vif [None req-a239317b-94da-476d-aa13-27e5dba4e4e5 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6b:b0:3d,bridge_name='br-int',has_traffic_filtering=True,id=2cfaf09f-1f9e-489f-b7d3-43166c005796,network=Network(b677f1a9-dbaa-4373-8466-bd9ccf067b91),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2cfaf09f-1f')
Jan 20 15:14:43 compute-1 nova_compute[225855]: 2026-01-20 15:14:43.455 225859 DEBUG nova.virt.libvirt.driver [None req-a239317b-94da-476d-aa13-27e5dba4e4e5 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 15:14:43 compute-1 nova_compute[225855]: 2026-01-20 15:14:43.455 225859 DEBUG nova.virt.libvirt.driver [None req-a239317b-94da-476d-aa13-27e5dba4e4e5 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 15:14:43 compute-1 nova_compute[225855]: 2026-01-20 15:14:43.455 225859 DEBUG nova.virt.libvirt.driver [None req-a239317b-94da-476d-aa13-27e5dba4e4e5 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] No VIF found with MAC fa:16:3e:6b:b0:3d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 20 15:14:43 compute-1 nova_compute[225855]: 2026-01-20 15:14:43.456 225859 INFO nova.virt.libvirt.driver [None req-a239317b-94da-476d-aa13-27e5dba4e4e5 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] [instance: 185fbaf7-4372-4e7c-b053-df9c4022514f] Using config drive
Jan 20 15:14:43 compute-1 nova_compute[225855]: 2026-01-20 15:14:43.480 225859 DEBUG nova.storage.rbd_utils [None req-a239317b-94da-476d-aa13-27e5dba4e4e5 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] rbd image 185fbaf7-4372-4e7c-b053-df9c4022514f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:14:43 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:14:43 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 15:14:43 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:14:43 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 15:14:43 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 15:14:43 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:14:43 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/2934712307' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:14:44 compute-1 nova_compute[225855]: 2026-01-20 15:14:44.153 225859 INFO nova.virt.libvirt.driver [None req-a239317b-94da-476d-aa13-27e5dba4e4e5 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] [instance: 185fbaf7-4372-4e7c-b053-df9c4022514f] Creating config drive at /var/lib/nova/instances/185fbaf7-4372-4e7c-b053-df9c4022514f/disk.config
Jan 20 15:14:44 compute-1 nova_compute[225855]: 2026-01-20 15:14:44.158 225859 DEBUG oslo_concurrency.processutils [None req-a239317b-94da-476d-aa13-27e5dba4e4e5 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/185fbaf7-4372-4e7c-b053-df9c4022514f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpznc8anpu execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:14:44 compute-1 nova_compute[225855]: 2026-01-20 15:14:44.294 225859 DEBUG oslo_concurrency.processutils [None req-a239317b-94da-476d-aa13-27e5dba4e4e5 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/185fbaf7-4372-4e7c-b053-df9c4022514f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpznc8anpu" returned: 0 in 0.136s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:14:44 compute-1 nova_compute[225855]: 2026-01-20 15:14:44.323 225859 DEBUG nova.storage.rbd_utils [None req-a239317b-94da-476d-aa13-27e5dba4e4e5 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] rbd image 185fbaf7-4372-4e7c-b053-df9c4022514f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:14:44 compute-1 nova_compute[225855]: 2026-01-20 15:14:44.326 225859 DEBUG oslo_concurrency.processutils [None req-a239317b-94da-476d-aa13-27e5dba4e4e5 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/185fbaf7-4372-4e7c-b053-df9c4022514f/disk.config 185fbaf7-4372-4e7c-b053-df9c4022514f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:14:44 compute-1 nova_compute[225855]: 2026-01-20 15:14:44.487 225859 DEBUG oslo_concurrency.processutils [None req-a239317b-94da-476d-aa13-27e5dba4e4e5 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/185fbaf7-4372-4e7c-b053-df9c4022514f/disk.config 185fbaf7-4372-4e7c-b053-df9c4022514f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.161s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:14:44 compute-1 nova_compute[225855]: 2026-01-20 15:14:44.488 225859 INFO nova.virt.libvirt.driver [None req-a239317b-94da-476d-aa13-27e5dba4e4e5 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] [instance: 185fbaf7-4372-4e7c-b053-df9c4022514f] Deleting local config drive /var/lib/nova/instances/185fbaf7-4372-4e7c-b053-df9c4022514f/disk.config because it was imported into RBD.
Jan 20 15:14:44 compute-1 kernel: tap2cfaf09f-1f: entered promiscuous mode
Jan 20 15:14:44 compute-1 NetworkManager[49104]: <info>  [1768922084.5334] manager: (tap2cfaf09f-1f): new Tun device (/org/freedesktop/NetworkManager/Devices/343)
Jan 20 15:14:44 compute-1 nova_compute[225855]: 2026-01-20 15:14:44.534 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:14:44 compute-1 ovn_controller[130490]: 2026-01-20T15:14:44Z|00820|binding|INFO|Claiming lport 2cfaf09f-1f9e-489f-b7d3-43166c005796 for this chassis.
Jan 20 15:14:44 compute-1 ovn_controller[130490]: 2026-01-20T15:14:44Z|00821|binding|INFO|2cfaf09f-1f9e-489f-b7d3-43166c005796: Claiming fa:16:3e:6b:b0:3d 10.100.0.3
Jan 20 15:14:44 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:14:44.543 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6b:b0:3d 10.100.0.3'], port_security=['fa:16:3e:6b:b0:3d 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '185fbaf7-4372-4e7c-b053-df9c4022514f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b677f1a9-dbaa-4373-8466-bd9ccf067b91', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a49638950e1543fa8e0d251af5479623', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'c29da5ec-6cb2-4047-ba89-70fa67a96476', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=76ec1139-009f-49fe-bfde-07c0ef9e8b12, chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], logical_port=2cfaf09f-1f9e-489f-b7d3-43166c005796) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:14:44 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:14:44.544 140354 INFO neutron.agent.ovn.metadata.agent [-] Port 2cfaf09f-1f9e-489f-b7d3-43166c005796 in datapath b677f1a9-dbaa-4373-8466-bd9ccf067b91 bound to our chassis
Jan 20 15:14:44 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:14:44.545 140354 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b677f1a9-dbaa-4373-8466-bd9ccf067b91
Jan 20 15:14:44 compute-1 ovn_controller[130490]: 2026-01-20T15:14:44Z|00822|binding|INFO|Setting lport 2cfaf09f-1f9e-489f-b7d3-43166c005796 ovn-installed in OVS
Jan 20 15:14:44 compute-1 ovn_controller[130490]: 2026-01-20T15:14:44Z|00823|binding|INFO|Setting lport 2cfaf09f-1f9e-489f-b7d3-43166c005796 up in Southbound
Jan 20 15:14:44 compute-1 nova_compute[225855]: 2026-01-20 15:14:44.550 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:14:44 compute-1 nova_compute[225855]: 2026-01-20 15:14:44.553 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:14:44 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:14:44.555 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[332f0062-fa47-4da9-a50f-5c588acccd04]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:14:44 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:14:44.556 140354 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapb677f1a9-d1 in ovnmeta-b677f1a9-dbaa-4373-8466-bd9ccf067b91 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 20 15:14:44 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:14:44.558 229707 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapb677f1a9-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 20 15:14:44 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:14:44.558 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[b99b4ee8-d743-4d7a-84b3-af594a981f6c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:14:44 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:14:44.558 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[e2c3d6c7-218b-4057-8d73-4d8141573af1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:14:44 compute-1 systemd-udevd[303418]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 15:14:44 compute-1 systemd-machined[194361]: New machine qemu-98-instance-000000bc.
Jan 20 15:14:44 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:14:44.569 140466 DEBUG oslo.privsep.daemon [-] privsep: reply[8a6a9d6c-f6b5-4295-9cd8-72d8c8e2890b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:14:44 compute-1 NetworkManager[49104]: <info>  [1768922084.5775] device (tap2cfaf09f-1f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 20 15:14:44 compute-1 NetworkManager[49104]: <info>  [1768922084.5781] device (tap2cfaf09f-1f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 20 15:14:44 compute-1 systemd[1]: Started Virtual Machine qemu-98-instance-000000bc.
Jan 20 15:14:44 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:14:44.594 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[1b2255af-eee8-40a5-a661-a3b6da2e4730]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:14:44 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:14:44.618 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[658aa737-f868-470a-945b-f8326ca78a74]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:14:44 compute-1 systemd-udevd[303421]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 15:14:44 compute-1 NetworkManager[49104]: <info>  [1768922084.6240] manager: (tapb677f1a9-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/344)
Jan 20 15:14:44 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:14:44.624 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[a809a2ea-a696-47e5-a53e-6ec8e28c5a70]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:14:44 compute-1 nova_compute[225855]: 2026-01-20 15:14:44.627 225859 INFO nova.compute.manager [None req-983ad604-747f-4a92-9aae-4e849d5f72a5 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] Get console output
Jan 20 15:14:44 compute-1 nova_compute[225855]: 2026-01-20 15:14:44.637 263775 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Jan 20 15:14:44 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:14:44.653 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[27588c71-c4f7-47b6-ab63-1abfe88bbee3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:14:44 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:14:44.655 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[5b5a6310-4607-4a26-8914-42def5f87971]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:14:44 compute-1 NetworkManager[49104]: <info>  [1768922084.6769] device (tapb677f1a9-d0): carrier: link connected
Jan 20 15:14:44 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:14:44.684 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[fcb25818-0689-40aa-8297-04d38ea65062]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:14:44 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:14:44.701 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[f1da6dc7-fb23-49d4-a447-108c2eea9ce8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb677f1a9-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a4:c8:34'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 232], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 712360, 'reachable_time': 20858, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 303449, 'error': None, 'target': 'ovnmeta-b677f1a9-dbaa-4373-8466-bd9ccf067b91', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:14:44 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:14:44.717 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[ddc33ea8-d16c-4f52-a40b-2057843d50e0]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fea4:c834'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 712360, 'tstamp': 712360}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 303450, 'error': None, 'target': 'ovnmeta-b677f1a9-dbaa-4373-8466-bd9ccf067b91', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:14:44 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:14:44.736 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[20db29bd-417e-4989-a1fc-eabe04c2188b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb677f1a9-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a4:c8:34'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 232], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 712360, 'reachable_time': 20858, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 303451, 'error': None, 'target': 'ovnmeta-b677f1a9-dbaa-4373-8466-bd9ccf067b91', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:14:44 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:14:44.764 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[ee7b70a8-afc1-432b-9e0d-9cfa4b8f3e82]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:14:44 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:14:44.822 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[45494294-a7fc-41b1-8f84-cf24f069232d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:14:44 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:14:44.823 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb677f1a9-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:14:44 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:14:44.824 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 15:14:44 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:14:44.824 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb677f1a9-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:14:44 compute-1 nova_compute[225855]: 2026-01-20 15:14:44.826 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:14:44 compute-1 kernel: tapb677f1a9-d0: entered promiscuous mode
Jan 20 15:14:44 compute-1 NetworkManager[49104]: <info>  [1768922084.8270] manager: (tapb677f1a9-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/345)
Jan 20 15:14:44 compute-1 nova_compute[225855]: 2026-01-20 15:14:44.830 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:14:44 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:14:44.831 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb677f1a9-d0, col_values=(('external_ids', {'iface-id': '1aa285ce-a9ae-4d1e-b4b9-c72f4e0b8d65'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:14:44 compute-1 nova_compute[225855]: 2026-01-20 15:14:44.832 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:14:44 compute-1 ovn_controller[130490]: 2026-01-20T15:14:44Z|00824|binding|INFO|Releasing lport 1aa285ce-a9ae-4d1e-b4b9-c72f4e0b8d65 from this chassis (sb_readonly=0)
Jan 20 15:14:44 compute-1 nova_compute[225855]: 2026-01-20 15:14:44.846 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:14:44 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:14:44.847 140354 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/b677f1a9-dbaa-4373-8466-bd9ccf067b91.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/b677f1a9-dbaa-4373-8466-bd9ccf067b91.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 20 15:14:44 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:14:44.849 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[39b99b1e-2203-4641-bf55-cfb0710d0f78]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:14:44 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:14:44.850 140354 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 20 15:14:44 compute-1 ovn_metadata_agent[140349]: global
Jan 20 15:14:44 compute-1 ovn_metadata_agent[140349]:     log         /dev/log local0 debug
Jan 20 15:14:44 compute-1 ovn_metadata_agent[140349]:     log-tag     haproxy-metadata-proxy-b677f1a9-dbaa-4373-8466-bd9ccf067b91
Jan 20 15:14:44 compute-1 ovn_metadata_agent[140349]:     user        root
Jan 20 15:14:44 compute-1 ovn_metadata_agent[140349]:     group       root
Jan 20 15:14:44 compute-1 ovn_metadata_agent[140349]:     maxconn     1024
Jan 20 15:14:44 compute-1 ovn_metadata_agent[140349]:     pidfile     /var/lib/neutron/external/pids/b677f1a9-dbaa-4373-8466-bd9ccf067b91.pid.haproxy
Jan 20 15:14:44 compute-1 ovn_metadata_agent[140349]:     daemon
Jan 20 15:14:44 compute-1 ovn_metadata_agent[140349]: 
Jan 20 15:14:44 compute-1 ovn_metadata_agent[140349]: defaults
Jan 20 15:14:44 compute-1 ovn_metadata_agent[140349]:     log global
Jan 20 15:14:44 compute-1 ovn_metadata_agent[140349]:     mode http
Jan 20 15:14:44 compute-1 ovn_metadata_agent[140349]:     option httplog
Jan 20 15:14:44 compute-1 ovn_metadata_agent[140349]:     option dontlognull
Jan 20 15:14:44 compute-1 ovn_metadata_agent[140349]:     option http-server-close
Jan 20 15:14:44 compute-1 ovn_metadata_agent[140349]:     option forwardfor
Jan 20 15:14:44 compute-1 ovn_metadata_agent[140349]:     retries                 3
Jan 20 15:14:44 compute-1 ovn_metadata_agent[140349]:     timeout http-request    30s
Jan 20 15:14:44 compute-1 ovn_metadata_agent[140349]:     timeout connect         30s
Jan 20 15:14:44 compute-1 ovn_metadata_agent[140349]:     timeout client          32s
Jan 20 15:14:44 compute-1 ovn_metadata_agent[140349]:     timeout server          32s
Jan 20 15:14:44 compute-1 ovn_metadata_agent[140349]:     timeout http-keep-alive 30s
Jan 20 15:14:44 compute-1 ovn_metadata_agent[140349]: 
Jan 20 15:14:44 compute-1 ovn_metadata_agent[140349]: 
Jan 20 15:14:44 compute-1 ovn_metadata_agent[140349]: listen listener
Jan 20 15:14:44 compute-1 ovn_metadata_agent[140349]:     bind 169.254.169.254:80
Jan 20 15:14:44 compute-1 ovn_metadata_agent[140349]:     server metadata /var/lib/neutron/metadata_proxy
Jan 20 15:14:44 compute-1 ovn_metadata_agent[140349]:     http-request add-header X-OVN-Network-ID b677f1a9-dbaa-4373-8466-bd9ccf067b91
Jan 20 15:14:44 compute-1 ovn_metadata_agent[140349]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 20 15:14:44 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:14:44.851 140354 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-b677f1a9-dbaa-4373-8466-bd9ccf067b91', 'env', 'PROCESS_TAG=haproxy-b677f1a9-dbaa-4373-8466-bd9ccf067b91', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/b677f1a9-dbaa-4373-8466-bd9ccf067b91.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 20 15:14:44 compute-1 ceph-mon[81775]: pgmap v2760: 321 pgs: 321 active+clean; 567 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.5 MiB/s rd, 44 KiB/s wr, 143 op/s
Jan 20 15:14:44 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:14:44 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:14:44 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:14:44.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:14:45 compute-1 nova_compute[225855]: 2026-01-20 15:14:45.095 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768922085.0951176, 185fbaf7-4372-4e7c-b053-df9c4022514f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:14:45 compute-1 nova_compute[225855]: 2026-01-20 15:14:45.096 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 185fbaf7-4372-4e7c-b053-df9c4022514f] VM Started (Lifecycle Event)
Jan 20 15:14:45 compute-1 nova_compute[225855]: 2026-01-20 15:14:45.118 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 185fbaf7-4372-4e7c-b053-df9c4022514f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:14:45 compute-1 nova_compute[225855]: 2026-01-20 15:14:45.122 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768922085.095241, 185fbaf7-4372-4e7c-b053-df9c4022514f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:14:45 compute-1 nova_compute[225855]: 2026-01-20 15:14:45.122 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 185fbaf7-4372-4e7c-b053-df9c4022514f] VM Paused (Lifecycle Event)
Jan 20 15:14:45 compute-1 nova_compute[225855]: 2026-01-20 15:14:45.139 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 185fbaf7-4372-4e7c-b053-df9c4022514f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:14:45 compute-1 nova_compute[225855]: 2026-01-20 15:14:45.142 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 185fbaf7-4372-4e7c-b053-df9c4022514f] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 15:14:45 compute-1 nova_compute[225855]: 2026-01-20 15:14:45.157 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 185fbaf7-4372-4e7c-b053-df9c4022514f] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 15:14:45 compute-1 podman[303523]: 2026-01-20 15:14:45.20338617 +0000 UTC m=+0.050862804 container create f51b1e2ffd0c0523ee19615079e9e512b8c099800046323e66291635c3cc77d9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b677f1a9-dbaa-4373-8466-bd9ccf067b91, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 15:14:45 compute-1 systemd[1]: Started libpod-conmon-f51b1e2ffd0c0523ee19615079e9e512b8c099800046323e66291635c3cc77d9.scope.
Jan 20 15:14:45 compute-1 systemd[1]: Started libcrun container.
Jan 20 15:14:45 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:14:45 compute-1 podman[303523]: 2026-01-20 15:14:45.17554265 +0000 UTC m=+0.023019304 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 20 15:14:45 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:14:45 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:14:45.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:14:45 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67b20a5a2fd7710889925edaaefc37d2cd36bbde3ed5e1de89395074d35e2d88/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 20 15:14:45 compute-1 podman[303523]: 2026-01-20 15:14:45.281909178 +0000 UTC m=+0.129385832 container init f51b1e2ffd0c0523ee19615079e9e512b8c099800046323e66291635c3cc77d9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b677f1a9-dbaa-4373-8466-bd9ccf067b91, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 20 15:14:45 compute-1 podman[303523]: 2026-01-20 15:14:45.289499323 +0000 UTC m=+0.136975947 container start f51b1e2ffd0c0523ee19615079e9e512b8c099800046323e66291635c3cc77d9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b677f1a9-dbaa-4373-8466-bd9ccf067b91, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3)
Jan 20 15:14:45 compute-1 neutron-haproxy-ovnmeta-b677f1a9-dbaa-4373-8466-bd9ccf067b91[303538]: [NOTICE]   (303542) : New worker (303544) forked
Jan 20 15:14:45 compute-1 neutron-haproxy-ovnmeta-b677f1a9-dbaa-4373-8466-bd9ccf067b91[303538]: [NOTICE]   (303542) : Loading success.
Jan 20 15:14:45 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:14:45.717 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=60, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '12:bb:42', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '06:92:24:f7:15:56'}, ipsec=False) old=SB_Global(nb_cfg=59) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:14:45 compute-1 nova_compute[225855]: 2026-01-20 15:14:45.718 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:14:45 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:14:45.719 140354 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 20 15:14:45 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:14:45.720 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5ffd4ac3-9266-4927-98ad-20a17782c725, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '60'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:14:45 compute-1 nova_compute[225855]: 2026-01-20 15:14:45.802 225859 DEBUG nova.network.neutron [req-63d90287-066a-4018-8e15-22f333d63078 req-85c42175-22fc-44e7-a425-2b7eb594f8fa 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 185fbaf7-4372-4e7c-b053-df9c4022514f] Updated VIF entry in instance network info cache for port 2cfaf09f-1f9e-489f-b7d3-43166c005796. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 15:14:45 compute-1 nova_compute[225855]: 2026-01-20 15:14:45.803 225859 DEBUG nova.network.neutron [req-63d90287-066a-4018-8e15-22f333d63078 req-85c42175-22fc-44e7-a425-2b7eb594f8fa 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 185fbaf7-4372-4e7c-b053-df9c4022514f] Updating instance_info_cache with network_info: [{"id": "2cfaf09f-1f9e-489f-b7d3-43166c005796", "address": "fa:16:3e:6b:b0:3d", "network": {"id": "b677f1a9-dbaa-4373-8466-bd9ccf067b91", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-408170906-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a49638950e1543fa8e0d251af5479623", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2cfaf09f-1f", "ovs_interfaceid": "2cfaf09f-1f9e-489f-b7d3-43166c005796", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:14:45 compute-1 nova_compute[225855]: 2026-01-20 15:14:45.817 225859 DEBUG oslo_concurrency.lockutils [req-63d90287-066a-4018-8e15-22f333d63078 req-85c42175-22fc-44e7-a425-2b7eb594f8fa 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-185fbaf7-4372-4e7c-b053-df9c4022514f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:14:46 compute-1 nova_compute[225855]: 2026-01-20 15:14:46.652 225859 DEBUG nova.compute.manager [req-d4c28f7f-a0b8-45b8-8a81-dbcf691e9a7e req-9c1f92a3-b6ae-4f83-a076-e866f67d8b74 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 185fbaf7-4372-4e7c-b053-df9c4022514f] Received event network-vif-plugged-2cfaf09f-1f9e-489f-b7d3-43166c005796 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:14:46 compute-1 nova_compute[225855]: 2026-01-20 15:14:46.654 225859 DEBUG oslo_concurrency.lockutils [req-d4c28f7f-a0b8-45b8-8a81-dbcf691e9a7e req-9c1f92a3-b6ae-4f83-a076-e866f67d8b74 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "185fbaf7-4372-4e7c-b053-df9c4022514f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:14:46 compute-1 nova_compute[225855]: 2026-01-20 15:14:46.654 225859 DEBUG oslo_concurrency.lockutils [req-d4c28f7f-a0b8-45b8-8a81-dbcf691e9a7e req-9c1f92a3-b6ae-4f83-a076-e866f67d8b74 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "185fbaf7-4372-4e7c-b053-df9c4022514f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:14:46 compute-1 nova_compute[225855]: 2026-01-20 15:14:46.655 225859 DEBUG oslo_concurrency.lockutils [req-d4c28f7f-a0b8-45b8-8a81-dbcf691e9a7e req-9c1f92a3-b6ae-4f83-a076-e866f67d8b74 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "185fbaf7-4372-4e7c-b053-df9c4022514f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:14:46 compute-1 nova_compute[225855]: 2026-01-20 15:14:46.655 225859 DEBUG nova.compute.manager [req-d4c28f7f-a0b8-45b8-8a81-dbcf691e9a7e req-9c1f92a3-b6ae-4f83-a076-e866f67d8b74 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 185fbaf7-4372-4e7c-b053-df9c4022514f] Processing event network-vif-plugged-2cfaf09f-1f9e-489f-b7d3-43166c005796 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 20 15:14:46 compute-1 nova_compute[225855]: 2026-01-20 15:14:46.656 225859 DEBUG nova.compute.manager [req-d4c28f7f-a0b8-45b8-8a81-dbcf691e9a7e req-9c1f92a3-b6ae-4f83-a076-e866f67d8b74 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 185fbaf7-4372-4e7c-b053-df9c4022514f] Received event network-vif-plugged-2cfaf09f-1f9e-489f-b7d3-43166c005796 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:14:46 compute-1 nova_compute[225855]: 2026-01-20 15:14:46.657 225859 DEBUG oslo_concurrency.lockutils [req-d4c28f7f-a0b8-45b8-8a81-dbcf691e9a7e req-9c1f92a3-b6ae-4f83-a076-e866f67d8b74 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "185fbaf7-4372-4e7c-b053-df9c4022514f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:14:46 compute-1 nova_compute[225855]: 2026-01-20 15:14:46.657 225859 DEBUG oslo_concurrency.lockutils [req-d4c28f7f-a0b8-45b8-8a81-dbcf691e9a7e req-9c1f92a3-b6ae-4f83-a076-e866f67d8b74 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "185fbaf7-4372-4e7c-b053-df9c4022514f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:14:46 compute-1 nova_compute[225855]: 2026-01-20 15:14:46.658 225859 DEBUG oslo_concurrency.lockutils [req-d4c28f7f-a0b8-45b8-8a81-dbcf691e9a7e req-9c1f92a3-b6ae-4f83-a076-e866f67d8b74 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "185fbaf7-4372-4e7c-b053-df9c4022514f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:14:46 compute-1 nova_compute[225855]: 2026-01-20 15:14:46.658 225859 DEBUG nova.compute.manager [req-d4c28f7f-a0b8-45b8-8a81-dbcf691e9a7e req-9c1f92a3-b6ae-4f83-a076-e866f67d8b74 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 185fbaf7-4372-4e7c-b053-df9c4022514f] No waiting events found dispatching network-vif-plugged-2cfaf09f-1f9e-489f-b7d3-43166c005796 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:14:46 compute-1 nova_compute[225855]: 2026-01-20 15:14:46.659 225859 WARNING nova.compute.manager [req-d4c28f7f-a0b8-45b8-8a81-dbcf691e9a7e req-9c1f92a3-b6ae-4f83-a076-e866f67d8b74 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 185fbaf7-4372-4e7c-b053-df9c4022514f] Received unexpected event network-vif-plugged-2cfaf09f-1f9e-489f-b7d3-43166c005796 for instance with vm_state building and task_state spawning.
Jan 20 15:14:46 compute-1 nova_compute[225855]: 2026-01-20 15:14:46.660 225859 DEBUG nova.compute.manager [None req-a239317b-94da-476d-aa13-27e5dba4e4e5 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] [instance: 185fbaf7-4372-4e7c-b053-df9c4022514f] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 20 15:14:46 compute-1 nova_compute[225855]: 2026-01-20 15:14:46.665 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768922086.6655588, 185fbaf7-4372-4e7c-b053-df9c4022514f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:14:46 compute-1 nova_compute[225855]: 2026-01-20 15:14:46.666 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 185fbaf7-4372-4e7c-b053-df9c4022514f] VM Resumed (Lifecycle Event)
Jan 20 15:14:46 compute-1 nova_compute[225855]: 2026-01-20 15:14:46.668 225859 DEBUG nova.virt.libvirt.driver [None req-a239317b-94da-476d-aa13-27e5dba4e4e5 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] [instance: 185fbaf7-4372-4e7c-b053-df9c4022514f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 20 15:14:46 compute-1 nova_compute[225855]: 2026-01-20 15:14:46.671 225859 INFO nova.virt.libvirt.driver [-] [instance: 185fbaf7-4372-4e7c-b053-df9c4022514f] Instance spawned successfully.
Jan 20 15:14:46 compute-1 nova_compute[225855]: 2026-01-20 15:14:46.672 225859 DEBUG nova.virt.libvirt.driver [None req-a239317b-94da-476d-aa13-27e5dba4e4e5 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] [instance: 185fbaf7-4372-4e7c-b053-df9c4022514f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 20 15:14:46 compute-1 nova_compute[225855]: 2026-01-20 15:14:46.690 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 185fbaf7-4372-4e7c-b053-df9c4022514f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:14:46 compute-1 nova_compute[225855]: 2026-01-20 15:14:46.700 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 185fbaf7-4372-4e7c-b053-df9c4022514f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 15:14:46 compute-1 nova_compute[225855]: 2026-01-20 15:14:46.703 225859 DEBUG nova.virt.libvirt.driver [None req-a239317b-94da-476d-aa13-27e5dba4e4e5 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] [instance: 185fbaf7-4372-4e7c-b053-df9c4022514f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:14:46 compute-1 nova_compute[225855]: 2026-01-20 15:14:46.703 225859 DEBUG nova.virt.libvirt.driver [None req-a239317b-94da-476d-aa13-27e5dba4e4e5 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] [instance: 185fbaf7-4372-4e7c-b053-df9c4022514f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:14:46 compute-1 nova_compute[225855]: 2026-01-20 15:14:46.704 225859 DEBUG nova.virt.libvirt.driver [None req-a239317b-94da-476d-aa13-27e5dba4e4e5 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] [instance: 185fbaf7-4372-4e7c-b053-df9c4022514f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:14:46 compute-1 nova_compute[225855]: 2026-01-20 15:14:46.704 225859 DEBUG nova.virt.libvirt.driver [None req-a239317b-94da-476d-aa13-27e5dba4e4e5 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] [instance: 185fbaf7-4372-4e7c-b053-df9c4022514f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:14:46 compute-1 nova_compute[225855]: 2026-01-20 15:14:46.704 225859 DEBUG nova.virt.libvirt.driver [None req-a239317b-94da-476d-aa13-27e5dba4e4e5 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] [instance: 185fbaf7-4372-4e7c-b053-df9c4022514f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:14:46 compute-1 nova_compute[225855]: 2026-01-20 15:14:46.705 225859 DEBUG nova.virt.libvirt.driver [None req-a239317b-94da-476d-aa13-27e5dba4e4e5 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] [instance: 185fbaf7-4372-4e7c-b053-df9c4022514f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:14:46 compute-1 nova_compute[225855]: 2026-01-20 15:14:46.781 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 185fbaf7-4372-4e7c-b053-df9c4022514f] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 15:14:46 compute-1 nova_compute[225855]: 2026-01-20 15:14:46.824 225859 INFO nova.compute.manager [None req-a239317b-94da-476d-aa13-27e5dba4e4e5 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] [instance: 185fbaf7-4372-4e7c-b053-df9c4022514f] Took 6.81 seconds to spawn the instance on the hypervisor.
Jan 20 15:14:46 compute-1 nova_compute[225855]: 2026-01-20 15:14:46.825 225859 DEBUG nova.compute.manager [None req-a239317b-94da-476d-aa13-27e5dba4e4e5 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] [instance: 185fbaf7-4372-4e7c-b053-df9c4022514f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:14:46 compute-1 nova_compute[225855]: 2026-01-20 15:14:46.881 225859 INFO nova.compute.manager [None req-a239317b-94da-476d-aa13-27e5dba4e4e5 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] [instance: 185fbaf7-4372-4e7c-b053-df9c4022514f] Took 9.06 seconds to build instance.
Jan 20 15:14:46 compute-1 nova_compute[225855]: 2026-01-20 15:14:46.897 225859 DEBUG oslo_concurrency.lockutils [None req-a239317b-94da-476d-aa13-27e5dba4e4e5 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Lock "185fbaf7-4372-4e7c-b053-df9c4022514f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.142s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:14:46 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:14:46 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:14:46 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:14:46.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:14:47 compute-1 nova_compute[225855]: 2026-01-20 15:14:47.180 225859 DEBUG nova.compute.manager [req-cbd2da10-50c5-41f0-b250-bac139562e5f req-0bd37af1-7b64-4bc0-a1c0-d1e0c9eeae43 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] Received event network-changed-9de5453d-b548-429c-8fc2-7b012cb8ebdf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:14:47 compute-1 nova_compute[225855]: 2026-01-20 15:14:47.180 225859 DEBUG nova.compute.manager [req-cbd2da10-50c5-41f0-b250-bac139562e5f req-0bd37af1-7b64-4bc0-a1c0-d1e0c9eeae43 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] Refreshing instance network info cache due to event network-changed-9de5453d-b548-429c-8fc2-7b012cb8ebdf. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 15:14:47 compute-1 nova_compute[225855]: 2026-01-20 15:14:47.181 225859 DEBUG oslo_concurrency.lockutils [req-cbd2da10-50c5-41f0-b250-bac139562e5f req-0bd37af1-7b64-4bc0-a1c0-d1e0c9eeae43 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-128af7d9-155f-468d-9873-98c816f0df9e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:14:47 compute-1 nova_compute[225855]: 2026-01-20 15:14:47.181 225859 DEBUG oslo_concurrency.lockutils [req-cbd2da10-50c5-41f0-b250-bac139562e5f req-0bd37af1-7b64-4bc0-a1c0-d1e0c9eeae43 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-128af7d9-155f-468d-9873-98c816f0df9e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:14:47 compute-1 nova_compute[225855]: 2026-01-20 15:14:47.181 225859 DEBUG nova.network.neutron [req-cbd2da10-50c5-41f0-b250-bac139562e5f req-0bd37af1-7b64-4bc0-a1c0-d1e0c9eeae43 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] Refreshing network info cache for port 9de5453d-b548-429c-8fc2-7b012cb8ebdf _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 15:14:47 compute-1 nova_compute[225855]: 2026-01-20 15:14:47.225 225859 DEBUG oslo_concurrency.lockutils [None req-0d7f3870-f13b-4e1a-8e57-f04035bf7dfc 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Acquiring lock "128af7d9-155f-468d-9873-98c816f0df9e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:14:47 compute-1 nova_compute[225855]: 2026-01-20 15:14:47.226 225859 DEBUG oslo_concurrency.lockutils [None req-0d7f3870-f13b-4e1a-8e57-f04035bf7dfc 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Lock "128af7d9-155f-468d-9873-98c816f0df9e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:14:47 compute-1 nova_compute[225855]: 2026-01-20 15:14:47.226 225859 DEBUG oslo_concurrency.lockutils [None req-0d7f3870-f13b-4e1a-8e57-f04035bf7dfc 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Acquiring lock "128af7d9-155f-468d-9873-98c816f0df9e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:14:47 compute-1 nova_compute[225855]: 2026-01-20 15:14:47.226 225859 DEBUG oslo_concurrency.lockutils [None req-0d7f3870-f13b-4e1a-8e57-f04035bf7dfc 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Lock "128af7d9-155f-468d-9873-98c816f0df9e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:14:47 compute-1 nova_compute[225855]: 2026-01-20 15:14:47.226 225859 DEBUG oslo_concurrency.lockutils [None req-0d7f3870-f13b-4e1a-8e57-f04035bf7dfc 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Lock "128af7d9-155f-468d-9873-98c816f0df9e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:14:47 compute-1 nova_compute[225855]: 2026-01-20 15:14:47.227 225859 INFO nova.compute.manager [None req-0d7f3870-f13b-4e1a-8e57-f04035bf7dfc 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] Terminating instance
Jan 20 15:14:47 compute-1 nova_compute[225855]: 2026-01-20 15:14:47.228 225859 DEBUG nova.compute.manager [None req-0d7f3870-f13b-4e1a-8e57-f04035bf7dfc 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 20 15:14:47 compute-1 kernel: tap9de5453d-b5 (unregistering): left promiscuous mode
Jan 20 15:14:47 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:14:47 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:14:47 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:14:47.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:14:47 compute-1 NetworkManager[49104]: <info>  [1768922087.2763] device (tap9de5453d-b5): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 20 15:14:47 compute-1 nova_compute[225855]: 2026-01-20 15:14:47.288 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:14:47 compute-1 nova_compute[225855]: 2026-01-20 15:14:47.293 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:14:47 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:14:47.296 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a8:1d:e9 10.100.0.4'], port_security=['fa:16:3e:a8:1d:e9 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '128af7d9-155f-468d-9873-98c816f0df9e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d07527d3-7363-453c-9902-c562bab626ba', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1ed5feeeafe7448a8efb47ab975b0ead', 'neutron:revision_number': '12', 'neutron:security_group_ids': 'b4895263-5fc5-4c5a-ab8d-547f570bc095', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cf4d49b6-1d42-4171-8055-0d823fb37e66, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], logical_port=9de5453d-b548-429c-8fc2-7b012cb8ebdf) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:14:47 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:14:47.297 140354 INFO neutron.agent.ovn.metadata.agent [-] Port 9de5453d-b548-429c-8fc2-7b012cb8ebdf in datapath d07527d3-7363-453c-9902-c562bab626ba unbound from our chassis
Jan 20 15:14:47 compute-1 ovn_controller[130490]: 2026-01-20T15:14:47Z|00825|binding|INFO|Releasing lport 9de5453d-b548-429c-8fc2-7b012cb8ebdf from this chassis (sb_readonly=0)
Jan 20 15:14:47 compute-1 ovn_controller[130490]: 2026-01-20T15:14:47Z|00826|binding|INFO|Setting lport 9de5453d-b548-429c-8fc2-7b012cb8ebdf down in Southbound
Jan 20 15:14:47 compute-1 ovn_controller[130490]: 2026-01-20T15:14:47Z|00827|binding|INFO|Removing iface tap9de5453d-b5 ovn-installed in OVS
Jan 20 15:14:47 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:14:47.298 140354 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d07527d3-7363-453c-9902-c562bab626ba, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 20 15:14:47 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:14:47.300 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[4ee2a644-f1a1-41a2-b368-c1f5690994fc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:14:47 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:14:47.300 140354 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-d07527d3-7363-453c-9902-c562bab626ba namespace which is not needed anymore
Jan 20 15:14:47 compute-1 nova_compute[225855]: 2026-01-20 15:14:47.345 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:14:47 compute-1 systemd[1]: machine-qemu\x2d97\x2dinstance\x2d000000b7.scope: Deactivated successfully.
Jan 20 15:14:47 compute-1 systemd[1]: machine-qemu\x2d97\x2dinstance\x2d000000b7.scope: Consumed 13.516s CPU time.
Jan 20 15:14:47 compute-1 systemd-machined[194361]: Machine qemu-97-instance-000000b7 terminated.
Jan 20 15:14:47 compute-1 neutron-haproxy-ovnmeta-d07527d3-7363-453c-9902-c562bab626ba[303024]: [NOTICE]   (303028) : haproxy version is 2.8.14-c23fe91
Jan 20 15:14:47 compute-1 neutron-haproxy-ovnmeta-d07527d3-7363-453c-9902-c562bab626ba[303024]: [NOTICE]   (303028) : path to executable is /usr/sbin/haproxy
Jan 20 15:14:47 compute-1 neutron-haproxy-ovnmeta-d07527d3-7363-453c-9902-c562bab626ba[303024]: [WARNING]  (303028) : Exiting Master process...
Jan 20 15:14:47 compute-1 neutron-haproxy-ovnmeta-d07527d3-7363-453c-9902-c562bab626ba[303024]: [ALERT]    (303028) : Current worker (303030) exited with code 143 (Terminated)
Jan 20 15:14:47 compute-1 neutron-haproxy-ovnmeta-d07527d3-7363-453c-9902-c562bab626ba[303024]: [WARNING]  (303028) : All workers exited. Exiting... (0)
Jan 20 15:14:47 compute-1 systemd[1]: libpod-b478c0ef6bae9eea7c30e6f5fef3da31b6628e87efed91da6febae4d8fa3f142.scope: Deactivated successfully.
Jan 20 15:14:47 compute-1 nova_compute[225855]: 2026-01-20 15:14:47.449 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:14:47 compute-1 podman[303575]: 2026-01-20 15:14:47.455073538 +0000 UTC m=+0.044346859 container died b478c0ef6bae9eea7c30e6f5fef3da31b6628e87efed91da6febae4d8fa3f142 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d07527d3-7363-453c-9902-c562bab626ba, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS)
Jan 20 15:14:47 compute-1 nova_compute[225855]: 2026-01-20 15:14:47.454 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:14:47 compute-1 nova_compute[225855]: 2026-01-20 15:14:47.459 225859 INFO nova.virt.libvirt.driver [-] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] Instance destroyed successfully.
Jan 20 15:14:47 compute-1 nova_compute[225855]: 2026-01-20 15:14:47.460 225859 DEBUG nova.objects.instance [None req-0d7f3870-f13b-4e1a-8e57-f04035bf7dfc 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Lazy-loading 'resources' on Instance uuid 128af7d9-155f-468d-9873-98c816f0df9e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:14:47 compute-1 nova_compute[225855]: 2026-01-20 15:14:47.475 225859 DEBUG nova.virt.libvirt.vif [None req-0d7f3870-f13b-4e1a-8e57-f04035bf7dfc 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T15:13:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1998945962',display_name='tempest-TestNetworkAdvancedServerOps-server-1998945962',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1998945962',id=183,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHQtn9WZwWHpZ18eEtTY9zdPNbJgOayUdrvVmR1brDMxwKaiJ8tf9lOFdht6GjVy3Orpnh5Z5LatI7xEKad9rNtjFmwEczk5s4CmWp5ueE54bJ73h+pph+yq2VHvIP5rgg==',key_name='tempest-TestNetworkAdvancedServerOps-1645169738',keypairs=<?>,launch_index=0,launched_at=2026-01-20T15:14:23Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='1ed5feeeafe7448a8efb47ab975b0ead',ramdisk_id='',reservation_id='r-mvt7thmt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-175282664',owner_user_name='tempest-TestNetworkAdvancedServerOps-175282664-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T15:14:23Z,user_data=None,user_id='442a7a5cb8ea426a82be9762b262d171',uuid=128af7d9-155f-468d-9873-98c816f0df9e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9de5453d-b548-429c-8fc2-7b012cb8ebdf", "address": "fa:16:3e:a8:1d:e9", "network": {"id": "d07527d3-7363-453c-9902-c562bab626ba", "bridge": "br-int", "label": "tempest-network-smoke--1250108698", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1ed5feeeafe7448a8efb47ab975b0ead", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9de5453d-b5", "ovs_interfaceid": "9de5453d-b548-429c-8fc2-7b012cb8ebdf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 20 15:14:47 compute-1 nova_compute[225855]: 2026-01-20 15:14:47.476 225859 DEBUG nova.network.os_vif_util [None req-0d7f3870-f13b-4e1a-8e57-f04035bf7dfc 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Converting VIF {"id": "9de5453d-b548-429c-8fc2-7b012cb8ebdf", "address": "fa:16:3e:a8:1d:e9", "network": {"id": "d07527d3-7363-453c-9902-c562bab626ba", "bridge": "br-int", "label": "tempest-network-smoke--1250108698", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1ed5feeeafe7448a8efb47ab975b0ead", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9de5453d-b5", "ovs_interfaceid": "9de5453d-b548-429c-8fc2-7b012cb8ebdf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 15:14:47 compute-1 nova_compute[225855]: 2026-01-20 15:14:47.478 225859 DEBUG nova.network.os_vif_util [None req-0d7f3870-f13b-4e1a-8e57-f04035bf7dfc 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a8:1d:e9,bridge_name='br-int',has_traffic_filtering=True,id=9de5453d-b548-429c-8fc2-7b012cb8ebdf,network=Network(d07527d3-7363-453c-9902-c562bab626ba),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9de5453d-b5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 15:14:47 compute-1 nova_compute[225855]: 2026-01-20 15:14:47.479 225859 DEBUG os_vif [None req-0d7f3870-f13b-4e1a-8e57-f04035bf7dfc 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a8:1d:e9,bridge_name='br-int',has_traffic_filtering=True,id=9de5453d-b548-429c-8fc2-7b012cb8ebdf,network=Network(d07527d3-7363-453c-9902-c562bab626ba),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9de5453d-b5') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 20 15:14:47 compute-1 nova_compute[225855]: 2026-01-20 15:14:47.481 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:14:47 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b478c0ef6bae9eea7c30e6f5fef3da31b6628e87efed91da6febae4d8fa3f142-userdata-shm.mount: Deactivated successfully.
Jan 20 15:14:47 compute-1 nova_compute[225855]: 2026-01-20 15:14:47.481 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9de5453d-b5, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:14:47 compute-1 systemd[1]: var-lib-containers-storage-overlay-2a54fb807962a544aa2982518e645db6ec417e79968889ef69caab0fed7c38d3-merged.mount: Deactivated successfully.
Jan 20 15:14:47 compute-1 nova_compute[225855]: 2026-01-20 15:14:47.490 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:14:47 compute-1 nova_compute[225855]: 2026-01-20 15:14:47.492 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 15:14:47 compute-1 nova_compute[225855]: 2026-01-20 15:14:47.495 225859 INFO os_vif [None req-0d7f3870-f13b-4e1a-8e57-f04035bf7dfc 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a8:1d:e9,bridge_name='br-int',has_traffic_filtering=True,id=9de5453d-b548-429c-8fc2-7b012cb8ebdf,network=Network(d07527d3-7363-453c-9902-c562bab626ba),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9de5453d-b5')
Jan 20 15:14:47 compute-1 podman[303575]: 2026-01-20 15:14:47.50201792 +0000 UTC m=+0.091291211 container cleanup b478c0ef6bae9eea7c30e6f5fef3da31b6628e87efed91da6febae4d8fa3f142 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d07527d3-7363-453c-9902-c562bab626ba, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202)
Jan 20 15:14:47 compute-1 systemd[1]: libpod-conmon-b478c0ef6bae9eea7c30e6f5fef3da31b6628e87efed91da6febae4d8fa3f142.scope: Deactivated successfully.
Jan 20 15:14:47 compute-1 podman[303633]: 2026-01-20 15:14:47.570582445 +0000 UTC m=+0.039628315 container remove b478c0ef6bae9eea7c30e6f5fef3da31b6628e87efed91da6febae4d8fa3f142 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d07527d3-7363-453c-9902-c562bab626ba, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 15:14:47 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:14:47.575 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[e73dcf67-7b08-46d2-944e-7a1ac96fa862]: (4, ('Tue Jan 20 03:14:47 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-d07527d3-7363-453c-9902-c562bab626ba (b478c0ef6bae9eea7c30e6f5fef3da31b6628e87efed91da6febae4d8fa3f142)\nb478c0ef6bae9eea7c30e6f5fef3da31b6628e87efed91da6febae4d8fa3f142\nTue Jan 20 03:14:47 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-d07527d3-7363-453c-9902-c562bab626ba (b478c0ef6bae9eea7c30e6f5fef3da31b6628e87efed91da6febae4d8fa3f142)\nb478c0ef6bae9eea7c30e6f5fef3da31b6628e87efed91da6febae4d8fa3f142\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:14:47 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:14:47.577 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[a1d4cc0d-ba4a-4c30-ac7d-546245c3cf14]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:14:47 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:14:47.578 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd07527d3-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:14:47 compute-1 kernel: tapd07527d3-70: left promiscuous mode
Jan 20 15:14:47 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:14:47.587 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[99e8ddc0-8138-4d70-ae59-12f5f6eac65a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:14:47 compute-1 nova_compute[225855]: 2026-01-20 15:14:47.586 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:14:47 compute-1 nova_compute[225855]: 2026-01-20 15:14:47.600 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:14:47 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:14:47.605 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[0c678576-28d3-49e4-8c5a-02551033c470]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:14:47 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:14:47.606 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[93c5362c-ea2d-4748-91f6-fc742b25955f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:14:47 compute-1 ceph-mon[81775]: pgmap v2761: 321 pgs: 321 active+clean; 568 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.5 MiB/s rd, 65 KiB/s wr, 155 op/s
Jan 20 15:14:47 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:14:47.621 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[6800e9d9-882d-4a1d-bf3d-70ad38317da7]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 710198, 'reachable_time': 44596, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 303649, 'error': None, 'target': 'ovnmeta-d07527d3-7363-453c-9902-c562bab626ba', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:14:47 compute-1 systemd[1]: run-netns-ovnmeta\x2dd07527d3\x2d7363\x2d453c\x2d9902\x2dc562bab626ba.mount: Deactivated successfully.
Jan 20 15:14:47 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:14:47.630 140466 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-d07527d3-7363-453c-9902-c562bab626ba deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 20 15:14:47 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:14:47.630 140466 DEBUG oslo.privsep.daemon [-] privsep: reply[9ab4fb60-7896-4556-906a-432d4ca1c1cd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:14:47 compute-1 nova_compute[225855]: 2026-01-20 15:14:47.877 225859 INFO nova.virt.libvirt.driver [None req-0d7f3870-f13b-4e1a-8e57-f04035bf7dfc 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] Deleting instance files /var/lib/nova/instances/128af7d9-155f-468d-9873-98c816f0df9e_del
Jan 20 15:14:47 compute-1 nova_compute[225855]: 2026-01-20 15:14:47.879 225859 INFO nova.virt.libvirt.driver [None req-0d7f3870-f13b-4e1a-8e57-f04035bf7dfc 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] Deletion of /var/lib/nova/instances/128af7d9-155f-468d-9873-98c816f0df9e_del complete
Jan 20 15:14:47 compute-1 nova_compute[225855]: 2026-01-20 15:14:47.902 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:14:47 compute-1 nova_compute[225855]: 2026-01-20 15:14:47.922 225859 DEBUG nova.compute.manager [req-1868f13c-2345-42d8-8d9a-6b0772297bc8 req-0d36d7ce-b979-476d-91d2-c122ffa55740 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] Received event network-vif-unplugged-9de5453d-b548-429c-8fc2-7b012cb8ebdf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:14:47 compute-1 nova_compute[225855]: 2026-01-20 15:14:47.923 225859 DEBUG oslo_concurrency.lockutils [req-1868f13c-2345-42d8-8d9a-6b0772297bc8 req-0d36d7ce-b979-476d-91d2-c122ffa55740 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "128af7d9-155f-468d-9873-98c816f0df9e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:14:47 compute-1 nova_compute[225855]: 2026-01-20 15:14:47.924 225859 DEBUG oslo_concurrency.lockutils [req-1868f13c-2345-42d8-8d9a-6b0772297bc8 req-0d36d7ce-b979-476d-91d2-c122ffa55740 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "128af7d9-155f-468d-9873-98c816f0df9e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:14:47 compute-1 nova_compute[225855]: 2026-01-20 15:14:47.924 225859 DEBUG oslo_concurrency.lockutils [req-1868f13c-2345-42d8-8d9a-6b0772297bc8 req-0d36d7ce-b979-476d-91d2-c122ffa55740 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "128af7d9-155f-468d-9873-98c816f0df9e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:14:47 compute-1 nova_compute[225855]: 2026-01-20 15:14:47.924 225859 DEBUG nova.compute.manager [req-1868f13c-2345-42d8-8d9a-6b0772297bc8 req-0d36d7ce-b979-476d-91d2-c122ffa55740 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] No waiting events found dispatching network-vif-unplugged-9de5453d-b548-429c-8fc2-7b012cb8ebdf pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:14:47 compute-1 nova_compute[225855]: 2026-01-20 15:14:47.925 225859 DEBUG nova.compute.manager [req-1868f13c-2345-42d8-8d9a-6b0772297bc8 req-0d36d7ce-b979-476d-91d2-c122ffa55740 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] Received event network-vif-unplugged-9de5453d-b548-429c-8fc2-7b012cb8ebdf for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 20 15:14:47 compute-1 nova_compute[225855]: 2026-01-20 15:14:47.936 225859 INFO nova.compute.manager [None req-0d7f3870-f13b-4e1a-8e57-f04035bf7dfc 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] Took 0.71 seconds to destroy the instance on the hypervisor.
Jan 20 15:14:47 compute-1 nova_compute[225855]: 2026-01-20 15:14:47.937 225859 DEBUG oslo.service.loopingcall [None req-0d7f3870-f13b-4e1a-8e57-f04035bf7dfc 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 20 15:14:47 compute-1 nova_compute[225855]: 2026-01-20 15:14:47.938 225859 DEBUG nova.compute.manager [-] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 20 15:14:47 compute-1 nova_compute[225855]: 2026-01-20 15:14:47.938 225859 DEBUG nova.network.neutron [-] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 20 15:14:48 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e401 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:14:48 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/1149942708' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:14:48 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:14:48 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:14:48 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:14:48.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:14:49 compute-1 sudo[303656]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:14:49 compute-1 sudo[303656]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:14:49 compute-1 sudo[303656]: pam_unix(sudo:session): session closed for user root
Jan 20 15:14:49 compute-1 sudo[303681]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 15:14:49 compute-1 sudo[303681]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:14:49 compute-1 sudo[303681]: pam_unix(sudo:session): session closed for user root
Jan 20 15:14:49 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:14:49 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:14:49 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:14:49.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:14:49 compute-1 nova_compute[225855]: 2026-01-20 15:14:49.541 225859 DEBUG nova.network.neutron [req-cbd2da10-50c5-41f0-b250-bac139562e5f req-0bd37af1-7b64-4bc0-a1c0-d1e0c9eeae43 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] Updated VIF entry in instance network info cache for port 9de5453d-b548-429c-8fc2-7b012cb8ebdf. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 15:14:49 compute-1 nova_compute[225855]: 2026-01-20 15:14:49.541 225859 DEBUG nova.network.neutron [req-cbd2da10-50c5-41f0-b250-bac139562e5f req-0bd37af1-7b64-4bc0-a1c0-d1e0c9eeae43 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] Updating instance_info_cache with network_info: [{"id": "9de5453d-b548-429c-8fc2-7b012cb8ebdf", "address": "fa:16:3e:a8:1d:e9", "network": {"id": "d07527d3-7363-453c-9902-c562bab626ba", "bridge": "br-int", "label": "tempest-network-smoke--1250108698", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1ed5feeeafe7448a8efb47ab975b0ead", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9de5453d-b5", "ovs_interfaceid": "9de5453d-b548-429c-8fc2-7b012cb8ebdf", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:14:49 compute-1 nova_compute[225855]: 2026-01-20 15:14:49.565 225859 DEBUG oslo_concurrency.lockutils [req-cbd2da10-50c5-41f0-b250-bac139562e5f req-0bd37af1-7b64-4bc0-a1c0-d1e0c9eeae43 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-128af7d9-155f-468d-9873-98c816f0df9e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:14:49 compute-1 ceph-mon[81775]: pgmap v2762: 321 pgs: 321 active+clean; 568 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.4 MiB/s rd, 54 KiB/s wr, 132 op/s
Jan 20 15:14:49 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:14:49 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:14:49 compute-1 nova_compute[225855]: 2026-01-20 15:14:49.696 225859 DEBUG nova.network.neutron [-] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:14:49 compute-1 nova_compute[225855]: 2026-01-20 15:14:49.731 225859 INFO nova.compute.manager [-] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] Took 1.79 seconds to deallocate network for instance.
Jan 20 15:14:49 compute-1 nova_compute[225855]: 2026-01-20 15:14:49.771 225859 DEBUG oslo_concurrency.lockutils [None req-0d7f3870-f13b-4e1a-8e57-f04035bf7dfc 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:14:49 compute-1 nova_compute[225855]: 2026-01-20 15:14:49.771 225859 DEBUG oslo_concurrency.lockutils [None req-0d7f3870-f13b-4e1a-8e57-f04035bf7dfc 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:14:49 compute-1 nova_compute[225855]: 2026-01-20 15:14:49.871 225859 DEBUG oslo_concurrency.processutils [None req-0d7f3870-f13b-4e1a-8e57-f04035bf7dfc 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:14:50 compute-1 nova_compute[225855]: 2026-01-20 15:14:50.017 225859 DEBUG nova.compute.manager [req-a4f3f775-f582-4a97-bfc3-0608095dbe7b req-75459025-193d-4d8c-8806-0c5ac05d5b5d 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] Received event network-vif-plugged-9de5453d-b548-429c-8fc2-7b012cb8ebdf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:14:50 compute-1 nova_compute[225855]: 2026-01-20 15:14:50.018 225859 DEBUG oslo_concurrency.lockutils [req-a4f3f775-f582-4a97-bfc3-0608095dbe7b req-75459025-193d-4d8c-8806-0c5ac05d5b5d 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "128af7d9-155f-468d-9873-98c816f0df9e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:14:50 compute-1 nova_compute[225855]: 2026-01-20 15:14:50.019 225859 DEBUG oslo_concurrency.lockutils [req-a4f3f775-f582-4a97-bfc3-0608095dbe7b req-75459025-193d-4d8c-8806-0c5ac05d5b5d 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "128af7d9-155f-468d-9873-98c816f0df9e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:14:50 compute-1 nova_compute[225855]: 2026-01-20 15:14:50.019 225859 DEBUG oslo_concurrency.lockutils [req-a4f3f775-f582-4a97-bfc3-0608095dbe7b req-75459025-193d-4d8c-8806-0c5ac05d5b5d 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "128af7d9-155f-468d-9873-98c816f0df9e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:14:50 compute-1 nova_compute[225855]: 2026-01-20 15:14:50.020 225859 DEBUG nova.compute.manager [req-a4f3f775-f582-4a97-bfc3-0608095dbe7b req-75459025-193d-4d8c-8806-0c5ac05d5b5d 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] No waiting events found dispatching network-vif-plugged-9de5453d-b548-429c-8fc2-7b012cb8ebdf pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:14:50 compute-1 nova_compute[225855]: 2026-01-20 15:14:50.020 225859 WARNING nova.compute.manager [req-a4f3f775-f582-4a97-bfc3-0608095dbe7b req-75459025-193d-4d8c-8806-0c5ac05d5b5d 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] Received unexpected event network-vif-plugged-9de5453d-b548-429c-8fc2-7b012cb8ebdf for instance with vm_state deleted and task_state None.
Jan 20 15:14:50 compute-1 nova_compute[225855]: 2026-01-20 15:14:50.020 225859 DEBUG nova.compute.manager [req-a4f3f775-f582-4a97-bfc3-0608095dbe7b req-75459025-193d-4d8c-8806-0c5ac05d5b5d 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] Received event network-vif-deleted-9de5453d-b548-429c-8fc2-7b012cb8ebdf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:14:50 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:14:50 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1825238396' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:14:50 compute-1 nova_compute[225855]: 2026-01-20 15:14:50.315 225859 DEBUG oslo_concurrency.processutils [None req-0d7f3870-f13b-4e1a-8e57-f04035bf7dfc 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:14:50 compute-1 nova_compute[225855]: 2026-01-20 15:14:50.321 225859 DEBUG nova.compute.provider_tree [None req-0d7f3870-f13b-4e1a-8e57-f04035bf7dfc 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:14:50 compute-1 nova_compute[225855]: 2026-01-20 15:14:50.344 225859 DEBUG nova.scheduler.client.report [None req-0d7f3870-f13b-4e1a-8e57-f04035bf7dfc 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:14:50 compute-1 nova_compute[225855]: 2026-01-20 15:14:50.378 225859 DEBUG oslo_concurrency.lockutils [None req-0d7f3870-f13b-4e1a-8e57-f04035bf7dfc 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.607s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:14:50 compute-1 nova_compute[225855]: 2026-01-20 15:14:50.462 225859 INFO nova.scheduler.client.report [None req-0d7f3870-f13b-4e1a-8e57-f04035bf7dfc 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Deleted allocations for instance 128af7d9-155f-468d-9873-98c816f0df9e
Jan 20 15:14:50 compute-1 nova_compute[225855]: 2026-01-20 15:14:50.530 225859 DEBUG oslo_concurrency.lockutils [None req-0d7f3870-f13b-4e1a-8e57-f04035bf7dfc 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Lock "128af7d9-155f-468d-9873-98c816f0df9e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.305s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:14:50 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/1825238396' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:14:50 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:14:50 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:14:50 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:14:50.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:14:51 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:14:51 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:14:51 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:14:51.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:14:51 compute-1 nova_compute[225855]: 2026-01-20 15:14:51.652 225859 DEBUG nova.compute.manager [req-78436652-f690-4cd1-8fa2-b27b3fcd05c7 req-0b0acd81-bb75-4631-bf60-3fe37544e669 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 185fbaf7-4372-4e7c-b053-df9c4022514f] Received event network-changed-2cfaf09f-1f9e-489f-b7d3-43166c005796 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:14:51 compute-1 nova_compute[225855]: 2026-01-20 15:14:51.653 225859 DEBUG nova.compute.manager [req-78436652-f690-4cd1-8fa2-b27b3fcd05c7 req-0b0acd81-bb75-4631-bf60-3fe37544e669 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 185fbaf7-4372-4e7c-b053-df9c4022514f] Refreshing instance network info cache due to event network-changed-2cfaf09f-1f9e-489f-b7d3-43166c005796. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 15:14:51 compute-1 nova_compute[225855]: 2026-01-20 15:14:51.653 225859 DEBUG oslo_concurrency.lockutils [req-78436652-f690-4cd1-8fa2-b27b3fcd05c7 req-0b0acd81-bb75-4631-bf60-3fe37544e669 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-185fbaf7-4372-4e7c-b053-df9c4022514f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:14:51 compute-1 nova_compute[225855]: 2026-01-20 15:14:51.654 225859 DEBUG oslo_concurrency.lockutils [req-78436652-f690-4cd1-8fa2-b27b3fcd05c7 req-0b0acd81-bb75-4631-bf60-3fe37544e669 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-185fbaf7-4372-4e7c-b053-df9c4022514f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:14:51 compute-1 nova_compute[225855]: 2026-01-20 15:14:51.654 225859 DEBUG nova.network.neutron [req-78436652-f690-4cd1-8fa2-b27b3fcd05c7 req-0b0acd81-bb75-4631-bf60-3fe37544e669 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 185fbaf7-4372-4e7c-b053-df9c4022514f] Refreshing network info cache for port 2cfaf09f-1f9e-489f-b7d3-43166c005796 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 15:14:51 compute-1 ceph-mon[81775]: pgmap v2763: 321 pgs: 321 active+clean; 532 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 4.1 MiB/s rd, 1.3 MiB/s wr, 229 op/s
Jan 20 15:14:52 compute-1 nova_compute[225855]: 2026-01-20 15:14:52.488 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:14:52 compute-1 ceph-mon[81775]: pgmap v2764: 321 pgs: 321 active+clean; 533 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.3 MiB/s rd, 1.8 MiB/s wr, 193 op/s
Jan 20 15:14:52 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/1450403844' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:14:52 compute-1 nova_compute[225855]: 2026-01-20 15:14:52.904 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:14:52 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:14:52 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:14:52 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:14:52.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:14:53 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:14:53 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:14:53 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:14:53.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:14:53 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e401 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:14:53 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/2924848263' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:14:54 compute-1 nova_compute[225855]: 2026-01-20 15:14:54.261 225859 DEBUG nova.network.neutron [req-78436652-f690-4cd1-8fa2-b27b3fcd05c7 req-0b0acd81-bb75-4631-bf60-3fe37544e669 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 185fbaf7-4372-4e7c-b053-df9c4022514f] Updated VIF entry in instance network info cache for port 2cfaf09f-1f9e-489f-b7d3-43166c005796. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 15:14:54 compute-1 nova_compute[225855]: 2026-01-20 15:14:54.262 225859 DEBUG nova.network.neutron [req-78436652-f690-4cd1-8fa2-b27b3fcd05c7 req-0b0acd81-bb75-4631-bf60-3fe37544e669 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 185fbaf7-4372-4e7c-b053-df9c4022514f] Updating instance_info_cache with network_info: [{"id": "2cfaf09f-1f9e-489f-b7d3-43166c005796", "address": "fa:16:3e:6b:b0:3d", "network": {"id": "b677f1a9-dbaa-4373-8466-bd9ccf067b91", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-408170906-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.174", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a49638950e1543fa8e0d251af5479623", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2cfaf09f-1f", "ovs_interfaceid": "2cfaf09f-1f9e-489f-b7d3-43166c005796", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:14:54 compute-1 nova_compute[225855]: 2026-01-20 15:14:54.294 225859 DEBUG oslo_concurrency.lockutils [req-78436652-f690-4cd1-8fa2-b27b3fcd05c7 req-0b0acd81-bb75-4631-bf60-3fe37544e669 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-185fbaf7-4372-4e7c-b053-df9c4022514f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:14:54 compute-1 nova_compute[225855]: 2026-01-20 15:14:54.357 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:14:54 compute-1 nova_compute[225855]: 2026-01-20 15:14:54.358 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 15:14:54 compute-1 ovn_controller[130490]: 2026-01-20T15:14:54Z|00828|binding|INFO|Releasing lport 1aa285ce-a9ae-4d1e-b4b9-c72f4e0b8d65 from this chassis (sb_readonly=0)
Jan 20 15:14:54 compute-1 nova_compute[225855]: 2026-01-20 15:14:54.524 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:14:54 compute-1 ceph-mon[81775]: pgmap v2765: 321 pgs: 321 active+clean; 547 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.6 MiB/s wr, 149 op/s
Jan 20 15:14:54 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:14:54 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:14:54 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:14:54.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:14:55 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:14:55 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:14:55 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:14:55.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:14:56 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:14:56 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:14:56 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:14:56.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:14:57 compute-1 podman[303732]: 2026-01-20 15:14:57.01511448 +0000 UTC m=+0.052318825 container health_status 533cd8e0c3b8b7f910f15cef49e7f879ab46c12ce646a35628c15f13725eb500 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 20 15:14:57 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:14:57 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:14:57 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:14:57.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:14:57 compute-1 nova_compute[225855]: 2026-01-20 15:14:57.342 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:14:57 compute-1 nova_compute[225855]: 2026-01-20 15:14:57.542 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:14:57 compute-1 ceph-mon[81775]: pgmap v2766: 321 pgs: 321 active+clean; 566 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.8 MiB/s rd, 3.9 MiB/s wr, 222 op/s
Jan 20 15:14:57 compute-1 nova_compute[225855]: 2026-01-20 15:14:57.907 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:14:58 compute-1 nova_compute[225855]: 2026-01-20 15:14:58.340 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:14:58 compute-1 nova_compute[225855]: 2026-01-20 15:14:58.341 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 15:14:58 compute-1 nova_compute[225855]: 2026-01-20 15:14:58.341 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 20 15:14:58 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e401 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:14:58 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:14:58 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:14:58 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:14:58.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:14:59 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:14:59 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:14:59 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:14:59.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:14:59 compute-1 ovn_controller[130490]: 2026-01-20T15:14:59Z|00091|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.12 does not match offer 10.100.0.3
Jan 20 15:14:59 compute-1 ovn_controller[130490]: 2026-01-20T15:14:59Z|00092|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:6b:b0:3d 10.100.0.3
Jan 20 15:14:59 compute-1 ceph-mon[81775]: pgmap v2767: 321 pgs: 321 active+clean; 566 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.8 MiB/s rd, 3.9 MiB/s wr, 209 op/s
Jan 20 15:14:59 compute-1 nova_compute[225855]: 2026-01-20 15:14:59.822 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "refresh_cache-185fbaf7-4372-4e7c-b053-df9c4022514f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:14:59 compute-1 nova_compute[225855]: 2026-01-20 15:14:59.823 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquired lock "refresh_cache-185fbaf7-4372-4e7c-b053-df9c4022514f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:14:59 compute-1 nova_compute[225855]: 2026-01-20 15:14:59.823 225859 DEBUG nova.network.neutron [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] [instance: 185fbaf7-4372-4e7c-b053-df9c4022514f] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 20 15:14:59 compute-1 nova_compute[225855]: 2026-01-20 15:14:59.823 225859 DEBUG nova.objects.instance [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 185fbaf7-4372-4e7c-b053-df9c4022514f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:15:00 compute-1 ceph-mon[81775]: pgmap v2768: 321 pgs: 321 active+clean; 568 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 4.6 MiB/s rd, 4.0 MiB/s wr, 280 op/s
Jan 20 15:15:00 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:15:00 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:15:00 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:15:00.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:15:01 compute-1 sudo[303754]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:15:01 compute-1 sudo[303754]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:15:01 compute-1 sudo[303754]: pam_unix(sudo:session): session closed for user root
Jan 20 15:15:01 compute-1 sudo[303779]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:15:01 compute-1 sudo[303779]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:15:01 compute-1 sudo[303779]: pam_unix(sudo:session): session closed for user root
Jan 20 15:15:01 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:15:01 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:15:01 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:15:01.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:15:02 compute-1 nova_compute[225855]: 2026-01-20 15:15:02.459 225859 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1768922087.4578772, 128af7d9-155f-468d-9873-98c816f0df9e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:15:02 compute-1 nova_compute[225855]: 2026-01-20 15:15:02.459 225859 INFO nova.compute.manager [-] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] VM Stopped (Lifecycle Event)
Jan 20 15:15:02 compute-1 nova_compute[225855]: 2026-01-20 15:15:02.545 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:15:02 compute-1 ovn_controller[130490]: 2026-01-20T15:15:02Z|00093|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.12 does not match offer 10.100.0.3
Jan 20 15:15:02 compute-1 ovn_controller[130490]: 2026-01-20T15:15:02Z|00094|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:6b:b0:3d 10.100.0.3
Jan 20 15:15:02 compute-1 nova_compute[225855]: 2026-01-20 15:15:02.707 225859 DEBUG nova.compute.manager [None req-96e89b3a-c676-48c2-b355-b8d154d8ef67 - - - - - -] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:15:02 compute-1 nova_compute[225855]: 2026-01-20 15:15:02.781 225859 DEBUG nova.network.neutron [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] [instance: 185fbaf7-4372-4e7c-b053-df9c4022514f] Updating instance_info_cache with network_info: [{"id": "2cfaf09f-1f9e-489f-b7d3-43166c005796", "address": "fa:16:3e:6b:b0:3d", "network": {"id": "b677f1a9-dbaa-4373-8466-bd9ccf067b91", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-408170906-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.174", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a49638950e1543fa8e0d251af5479623", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2cfaf09f-1f", "ovs_interfaceid": "2cfaf09f-1f9e-489f-b7d3-43166c005796", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:15:02 compute-1 nova_compute[225855]: 2026-01-20 15:15:02.909 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:15:02 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:15:02 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:15:02 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:15:02.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:15:03 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:15:03 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.002000056s ======
Jan 20 15:15:03 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:15:03.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000056s
Jan 20 15:15:03 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e401 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:15:03 compute-1 ceph-mon[81775]: pgmap v2769: 321 pgs: 321 active+clean; 580 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.4 MiB/s rd, 3.2 MiB/s wr, 207 op/s
Jan 20 15:15:04 compute-1 nova_compute[225855]: 2026-01-20 15:15:04.147 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Releasing lock "refresh_cache-185fbaf7-4372-4e7c-b053-df9c4022514f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:15:04 compute-1 nova_compute[225855]: 2026-01-20 15:15:04.147 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] [instance: 185fbaf7-4372-4e7c-b053-df9c4022514f] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 20 15:15:04 compute-1 nova_compute[225855]: 2026-01-20 15:15:04.148 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:15:04 compute-1 nova_compute[225855]: 2026-01-20 15:15:04.149 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:15:04 compute-1 nova_compute[225855]: 2026-01-20 15:15:04.149 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:15:04 compute-1 nova_compute[225855]: 2026-01-20 15:15:04.150 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:15:04 compute-1 ovn_controller[130490]: 2026-01-20T15:15:04Z|00095|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:6b:b0:3d 10.100.0.3
Jan 20 15:15:04 compute-1 ovn_controller[130490]: 2026-01-20T15:15:04Z|00096|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:6b:b0:3d 10.100.0.3
Jan 20 15:15:04 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/1795334721' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:15:04 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/2590486040' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:15:04 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:15:04 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:15:04 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:15:04.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:15:05 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:15:05 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:15:05 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:15:05.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:15:05 compute-1 nova_compute[225855]: 2026-01-20 15:15:05.520 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:15:05 compute-1 nova_compute[225855]: 2026-01-20 15:15:05.521 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:15:05 compute-1 nova_compute[225855]: 2026-01-20 15:15:05.521 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:15:05 compute-1 nova_compute[225855]: 2026-01-20 15:15:05.521 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 15:15:05 compute-1 nova_compute[225855]: 2026-01-20 15:15:05.522 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:15:05 compute-1 ceph-mon[81775]: pgmap v2770: 321 pgs: 321 active+clean; 580 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.1 MiB/s rd, 2.6 MiB/s wr, 185 op/s
Jan 20 15:15:05 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:15:05 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3818984052' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:15:05 compute-1 nova_compute[225855]: 2026-01-20 15:15:05.998 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:15:06 compute-1 nova_compute[225855]: 2026-01-20 15:15:06.084 225859 DEBUG nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] skipping disk for instance-000000bc as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 15:15:06 compute-1 nova_compute[225855]: 2026-01-20 15:15:06.085 225859 DEBUG nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] skipping disk for instance-000000bc as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 15:15:06 compute-1 nova_compute[225855]: 2026-01-20 15:15:06.247 225859 WARNING nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 15:15:06 compute-1 nova_compute[225855]: 2026-01-20 15:15:06.249 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4037MB free_disk=20.830204010009766GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 15:15:06 compute-1 nova_compute[225855]: 2026-01-20 15:15:06.250 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:15:06 compute-1 nova_compute[225855]: 2026-01-20 15:15:06.250 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:15:06 compute-1 nova_compute[225855]: 2026-01-20 15:15:06.472 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Instance 185fbaf7-4372-4e7c-b053-df9c4022514f actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 20 15:15:06 compute-1 nova_compute[225855]: 2026-01-20 15:15:06.473 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 15:15:06 compute-1 nova_compute[225855]: 2026-01-20 15:15:06.473 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 15:15:06 compute-1 nova_compute[225855]: 2026-01-20 15:15:06.517 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:15:06 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/3818984052' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:15:06 compute-1 ceph-osd[79119]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/lock/cls_lock.cc:291: Could not read list of current lockers off disk: (2) No such file or directory
Jan 20 15:15:06 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:15:06 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3239820126' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:15:06 compute-1 nova_compute[225855]: 2026-01-20 15:15:06.954 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:15:06 compute-1 nova_compute[225855]: 2026-01-20 15:15:06.960 225859 DEBUG nova.compute.provider_tree [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:15:06 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:15:06 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:15:06 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:15:06.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:15:07 compute-1 nova_compute[225855]: 2026-01-20 15:15:07.009 225859 DEBUG nova.scheduler.client.report [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:15:07 compute-1 nova_compute[225855]: 2026-01-20 15:15:07.042 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 15:15:07 compute-1 nova_compute[225855]: 2026-01-20 15:15:07.043 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.793s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:15:07 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:15:07 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:15:07 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:15:07.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:15:07 compute-1 nova_compute[225855]: 2026-01-20 15:15:07.549 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:15:07 compute-1 ceph-mon[81775]: pgmap v2771: 321 pgs: 321 active+clean; 584 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.1 MiB/s rd, 1.9 MiB/s wr, 181 op/s
Jan 20 15:15:07 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/3239820126' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:15:07 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/2592459286' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:15:07 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/2126827095' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:15:07 compute-1 nova_compute[225855]: 2026-01-20 15:15:07.911 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:15:08 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e401 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:15:08 compute-1 ceph-mon[81775]: pgmap v2772: 321 pgs: 321 active+clean; 584 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 598 KiB/s wr, 102 op/s
Jan 20 15:15:08 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:15:08 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:15:08 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:15:08.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:15:09 compute-1 nova_compute[225855]: 2026-01-20 15:15:09.234 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:15:09 compute-1 nova_compute[225855]: 2026-01-20 15:15:09.235 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:15:09 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:15:09 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:15:09 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:15:09.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:15:10 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:15:10 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:15:10 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:15:10.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:15:11 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:15:11 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:15:11 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:15:11.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:15:11 compute-1 ceph-mon[81775]: pgmap v2773: 321 pgs: 321 active+clean; 606 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.7 MiB/s rd, 1.8 MiB/s wr, 159 op/s
Jan 20 15:15:12 compute-1 nova_compute[225855]: 2026-01-20 15:15:12.552 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:15:12 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/4149569028' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:15:12 compute-1 nova_compute[225855]: 2026-01-20 15:15:12.914 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:15:13 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:15:13 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:15:13 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:15:13.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:15:13 compute-1 podman[303855]: 2026-01-20 15:15:13.065762473 +0000 UTC m=+0.106264337 container health_status 72be95c12e041eb2cd1a16ad70dfa1391d9f03a5f3180a614ead9459dcfcd46f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, config_id=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, managed_by=edpm_ansible)
Jan 20 15:15:13 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:15:13 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:15:13 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:15:13.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:15:13 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e401 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:15:13 compute-1 ceph-mon[81775]: pgmap v2774: 321 pgs: 321 active+clean; 617 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 912 KiB/s rd, 2.7 MiB/s wr, 100 op/s
Jan 20 15:15:13 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/1223840242' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:15:13 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/1223840242' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:15:14 compute-1 ceph-mon[81775]: pgmap v2775: 321 pgs: 321 active+clean; 617 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 428 KiB/s rd, 2.2 MiB/s wr, 76 op/s
Jan 20 15:15:15 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:15:15 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:15:15 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:15:15.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:15:15 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:15:15 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:15:15 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:15:15.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:15:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:15:16.437 140354 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:15:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:15:16.437 140354 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:15:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:15:16.438 140354 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:15:17 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:15:17 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:15:17 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:15:17.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:15:17 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:15:17 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:15:17 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:15:17.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:15:17 compute-1 nova_compute[225855]: 2026-01-20 15:15:17.335 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:15:17 compute-1 nova_compute[225855]: 2026-01-20 15:15:17.554 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:15:17 compute-1 ceph-mon[81775]: pgmap v2776: 321 pgs: 321 active+clean; 617 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 399 KiB/s rd, 2.2 MiB/s wr, 76 op/s
Jan 20 15:15:17 compute-1 nova_compute[225855]: 2026-01-20 15:15:17.915 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:15:18 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e401 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:15:19 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:15:19 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:15:19 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:15:19.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:15:19 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:15:19 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:15:19 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:15:19.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:15:19 compute-1 ceph-mon[81775]: pgmap v2777: 321 pgs: 321 active+clean; 617 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 397 KiB/s rd, 2.1 MiB/s wr, 70 op/s
Jan 20 15:15:20 compute-1 ceph-mon[81775]: pgmap v2778: 321 pgs: 321 active+clean; 617 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 436 KiB/s rd, 2.2 MiB/s wr, 74 op/s
Jan 20 15:15:20 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/1049306304' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:15:21 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:15:21 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:15:21 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:15:21.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:15:21 compute-1 sudo[303885]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:15:21 compute-1 sudo[303885]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:15:21 compute-1 sudo[303885]: pam_unix(sudo:session): session closed for user root
Jan 20 15:15:21 compute-1 sudo[303910]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:15:21 compute-1 sudo[303910]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:15:21 compute-1 sudo[303910]: pam_unix(sudo:session): session closed for user root
Jan 20 15:15:21 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:15:21 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:15:21 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:15:21.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:15:22 compute-1 nova_compute[225855]: 2026-01-20 15:15:22.590 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:15:22 compute-1 ceph-mon[81775]: pgmap v2779: 321 pgs: 321 active+clean; 620 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 82 KiB/s rd, 1.0 MiB/s wr, 19 op/s
Jan 20 15:15:22 compute-1 nova_compute[225855]: 2026-01-20 15:15:22.917 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:15:23 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:15:23 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:15:23 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:15:23.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:15:23 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:15:23 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:15:23 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:15:23.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:15:23 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e401 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:15:23 compute-1 nova_compute[225855]: 2026-01-20 15:15:23.871 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:15:25 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:15:25 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:15:25 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:15:25.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:15:25 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:15:25 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:15:25 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:15:25.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:15:25 compute-1 ceph-mon[81775]: pgmap v2780: 321 pgs: 321 active+clean; 620 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 44 KiB/s rd, 68 KiB/s wr, 8 op/s
Jan 20 15:15:26 compute-1 ceph-mon[81775]: pgmap v2781: 321 pgs: 321 active+clean; 620 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 252 KiB/s rd, 78 KiB/s wr, 17 op/s
Jan 20 15:15:27 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:15:27 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:15:27 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:15:27.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:15:27 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:15:27 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:15:27 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:15:27.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:15:27 compute-1 nova_compute[225855]: 2026-01-20 15:15:27.334 225859 DEBUG nova.compute.manager [None req-b335d546-cb29-41df-a4fc-7fc39a94104e e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] [instance: f1ded131-d9a3-4e93-ad99-53ee2695d5c8] Stashing vm_state: active _prep_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:5560
Jan 20 15:15:27 compute-1 nova_compute[225855]: 2026-01-20 15:15:27.453 225859 DEBUG oslo_concurrency.lockutils [None req-b335d546-cb29-41df-a4fc-7fc39a94104e e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:15:27 compute-1 nova_compute[225855]: 2026-01-20 15:15:27.454 225859 DEBUG oslo_concurrency.lockutils [None req-b335d546-cb29-41df-a4fc-7fc39a94104e e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:15:27 compute-1 nova_compute[225855]: 2026-01-20 15:15:27.494 225859 DEBUG nova.objects.instance [None req-b335d546-cb29-41df-a4fc-7fc39a94104e e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Lazy-loading 'pci_requests' on Instance uuid f1ded131-d9a3-4e93-ad99-53ee2695d5c8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:15:27 compute-1 nova_compute[225855]: 2026-01-20 15:15:27.541 225859 DEBUG nova.virt.hardware [None req-b335d546-cb29-41df-a4fc-7fc39a94104e e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 20 15:15:27 compute-1 nova_compute[225855]: 2026-01-20 15:15:27.541 225859 INFO nova.compute.claims [None req-b335d546-cb29-41df-a4fc-7fc39a94104e e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] [instance: f1ded131-d9a3-4e93-ad99-53ee2695d5c8] Claim successful on node compute-1.ctlplane.example.com
Jan 20 15:15:27 compute-1 nova_compute[225855]: 2026-01-20 15:15:27.542 225859 DEBUG nova.objects.instance [None req-b335d546-cb29-41df-a4fc-7fc39a94104e e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Lazy-loading 'resources' on Instance uuid f1ded131-d9a3-4e93-ad99-53ee2695d5c8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:15:27 compute-1 nova_compute[225855]: 2026-01-20 15:15:27.546 225859 DEBUG nova.compute.manager [req-a5758f26-8c43-4950-bed0-502a1521c662 req-ad93c421-dc8a-4d87-ab2b-38a939e3c537 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 185fbaf7-4372-4e7c-b053-df9c4022514f] Received event network-changed-2cfaf09f-1f9e-489f-b7d3-43166c005796 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:15:27 compute-1 nova_compute[225855]: 2026-01-20 15:15:27.546 225859 DEBUG nova.compute.manager [req-a5758f26-8c43-4950-bed0-502a1521c662 req-ad93c421-dc8a-4d87-ab2b-38a939e3c537 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 185fbaf7-4372-4e7c-b053-df9c4022514f] Refreshing instance network info cache due to event network-changed-2cfaf09f-1f9e-489f-b7d3-43166c005796. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 15:15:27 compute-1 nova_compute[225855]: 2026-01-20 15:15:27.547 225859 DEBUG oslo_concurrency.lockutils [req-a5758f26-8c43-4950-bed0-502a1521c662 req-ad93c421-dc8a-4d87-ab2b-38a939e3c537 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-185fbaf7-4372-4e7c-b053-df9c4022514f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:15:27 compute-1 nova_compute[225855]: 2026-01-20 15:15:27.547 225859 DEBUG oslo_concurrency.lockutils [req-a5758f26-8c43-4950-bed0-502a1521c662 req-ad93c421-dc8a-4d87-ab2b-38a939e3c537 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-185fbaf7-4372-4e7c-b053-df9c4022514f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:15:27 compute-1 nova_compute[225855]: 2026-01-20 15:15:27.547 225859 DEBUG nova.network.neutron [req-a5758f26-8c43-4950-bed0-502a1521c662 req-ad93c421-dc8a-4d87-ab2b-38a939e3c537 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 185fbaf7-4372-4e7c-b053-df9c4022514f] Refreshing network info cache for port 2cfaf09f-1f9e-489f-b7d3-43166c005796 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 15:15:27 compute-1 nova_compute[225855]: 2026-01-20 15:15:27.562 225859 DEBUG nova.objects.instance [None req-b335d546-cb29-41df-a4fc-7fc39a94104e e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Lazy-loading 'pci_devices' on Instance uuid f1ded131-d9a3-4e93-ad99-53ee2695d5c8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:15:27 compute-1 nova_compute[225855]: 2026-01-20 15:15:27.636 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:15:27 compute-1 nova_compute[225855]: 2026-01-20 15:15:27.657 225859 INFO nova.compute.resource_tracker [None req-b335d546-cb29-41df-a4fc-7fc39a94104e e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] [instance: f1ded131-d9a3-4e93-ad99-53ee2695d5c8] Updating resource usage from migration 57f22c5f-c3c6-4f11-afbc-5b3fc1752f60
Jan 20 15:15:27 compute-1 nova_compute[225855]: 2026-01-20 15:15:27.658 225859 DEBUG nova.compute.resource_tracker [None req-b335d546-cb29-41df-a4fc-7fc39a94104e e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] [instance: f1ded131-d9a3-4e93-ad99-53ee2695d5c8] Starting to track incoming migration 57f22c5f-c3c6-4f11-afbc-5b3fc1752f60 with flavor 30c26a27-d918-46d8-a512-4ef3b4ce5955 _update_usage_from_migration /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1431
Jan 20 15:15:27 compute-1 nova_compute[225855]: 2026-01-20 15:15:27.741 225859 DEBUG oslo_concurrency.lockutils [None req-22a2616d-74c4-4fc5-8de0-a4e568bbbb7a bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Acquiring lock "185fbaf7-4372-4e7c-b053-df9c4022514f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:15:27 compute-1 nova_compute[225855]: 2026-01-20 15:15:27.742 225859 DEBUG oslo_concurrency.lockutils [None req-22a2616d-74c4-4fc5-8de0-a4e568bbbb7a bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Lock "185fbaf7-4372-4e7c-b053-df9c4022514f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:15:27 compute-1 nova_compute[225855]: 2026-01-20 15:15:27.742 225859 DEBUG oslo_concurrency.lockutils [None req-22a2616d-74c4-4fc5-8de0-a4e568bbbb7a bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Acquiring lock "185fbaf7-4372-4e7c-b053-df9c4022514f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:15:27 compute-1 nova_compute[225855]: 2026-01-20 15:15:27.743 225859 DEBUG oslo_concurrency.lockutils [None req-22a2616d-74c4-4fc5-8de0-a4e568bbbb7a bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Lock "185fbaf7-4372-4e7c-b053-df9c4022514f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:15:27 compute-1 nova_compute[225855]: 2026-01-20 15:15:27.743 225859 DEBUG oslo_concurrency.lockutils [None req-22a2616d-74c4-4fc5-8de0-a4e568bbbb7a bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Lock "185fbaf7-4372-4e7c-b053-df9c4022514f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:15:27 compute-1 nova_compute[225855]: 2026-01-20 15:15:27.744 225859 INFO nova.compute.manager [None req-22a2616d-74c4-4fc5-8de0-a4e568bbbb7a bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] [instance: 185fbaf7-4372-4e7c-b053-df9c4022514f] Terminating instance
Jan 20 15:15:27 compute-1 nova_compute[225855]: 2026-01-20 15:15:27.745 225859 DEBUG nova.compute.manager [None req-22a2616d-74c4-4fc5-8de0-a4e568bbbb7a bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] [instance: 185fbaf7-4372-4e7c-b053-df9c4022514f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 20 15:15:27 compute-1 nova_compute[225855]: 2026-01-20 15:15:27.919 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:15:27 compute-1 kernel: tap2cfaf09f-1f (unregistering): left promiscuous mode
Jan 20 15:15:27 compute-1 NetworkManager[49104]: <info>  [1768922127.9345] device (tap2cfaf09f-1f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 20 15:15:27 compute-1 ovn_controller[130490]: 2026-01-20T15:15:27Z|00829|binding|INFO|Releasing lport 2cfaf09f-1f9e-489f-b7d3-43166c005796 from this chassis (sb_readonly=0)
Jan 20 15:15:27 compute-1 nova_compute[225855]: 2026-01-20 15:15:27.949 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:15:27 compute-1 ovn_controller[130490]: 2026-01-20T15:15:27Z|00830|binding|INFO|Setting lport 2cfaf09f-1f9e-489f-b7d3-43166c005796 down in Southbound
Jan 20 15:15:27 compute-1 ovn_controller[130490]: 2026-01-20T15:15:27Z|00831|binding|INFO|Removing iface tap2cfaf09f-1f ovn-installed in OVS
Jan 20 15:15:27 compute-1 nova_compute[225855]: 2026-01-20 15:15:27.951 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:15:27 compute-1 nova_compute[225855]: 2026-01-20 15:15:27.969 225859 DEBUG oslo_concurrency.processutils [None req-b335d546-cb29-41df-a4fc-7fc39a94104e e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:15:27 compute-1 nova_compute[225855]: 2026-01-20 15:15:27.996 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:15:28 compute-1 systemd[1]: machine-qemu\x2d98\x2dinstance\x2d000000bc.scope: Deactivated successfully.
Jan 20 15:15:28 compute-1 systemd[1]: machine-qemu\x2d98\x2dinstance\x2d000000bc.scope: Consumed 14.688s CPU time.
Jan 20 15:15:28 compute-1 systemd-machined[194361]: Machine qemu-98-instance-000000bc terminated.
Jan 20 15:15:28 compute-1 podman[303940]: 2026-01-20 15:15:28.042882936 +0000 UTC m=+0.089424528 container health_status 533cd8e0c3b8b7f910f15cef49e7f879ab46c12ce646a35628c15f13725eb500 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 20 15:15:28 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:15:28.053 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6b:b0:3d 10.100.0.3'], port_security=['fa:16:3e:6b:b0:3d 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '185fbaf7-4372-4e7c-b053-df9c4022514f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b677f1a9-dbaa-4373-8466-bd9ccf067b91', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a49638950e1543fa8e0d251af5479623', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'c29da5ec-6cb2-4047-ba89-70fa67a96476', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=76ec1139-009f-49fe-bfde-07c0ef9e8b12, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], logical_port=2cfaf09f-1f9e-489f-b7d3-43166c005796) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:15:28 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:15:28.055 140354 INFO neutron.agent.ovn.metadata.agent [-] Port 2cfaf09f-1f9e-489f-b7d3-43166c005796 in datapath b677f1a9-dbaa-4373-8466-bd9ccf067b91 unbound from our chassis
Jan 20 15:15:28 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:15:28.057 140354 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b677f1a9-dbaa-4373-8466-bd9ccf067b91, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 20 15:15:28 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:15:28.058 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[2230c423-ed9f-470f-a031-461398f19813]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:15:28 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:15:28.059 140354 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-b677f1a9-dbaa-4373-8466-bd9ccf067b91 namespace which is not needed anymore
Jan 20 15:15:28 compute-1 nova_compute[225855]: 2026-01-20 15:15:28.164 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:15:28 compute-1 nova_compute[225855]: 2026-01-20 15:15:28.168 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:15:28 compute-1 neutron-haproxy-ovnmeta-b677f1a9-dbaa-4373-8466-bd9ccf067b91[303538]: [NOTICE]   (303542) : haproxy version is 2.8.14-c23fe91
Jan 20 15:15:28 compute-1 neutron-haproxy-ovnmeta-b677f1a9-dbaa-4373-8466-bd9ccf067b91[303538]: [NOTICE]   (303542) : path to executable is /usr/sbin/haproxy
Jan 20 15:15:28 compute-1 neutron-haproxy-ovnmeta-b677f1a9-dbaa-4373-8466-bd9ccf067b91[303538]: [WARNING]  (303542) : Exiting Master process...
Jan 20 15:15:28 compute-1 neutron-haproxy-ovnmeta-b677f1a9-dbaa-4373-8466-bd9ccf067b91[303538]: [ALERT]    (303542) : Current worker (303544) exited with code 143 (Terminated)
Jan 20 15:15:28 compute-1 neutron-haproxy-ovnmeta-b677f1a9-dbaa-4373-8466-bd9ccf067b91[303538]: [WARNING]  (303542) : All workers exited. Exiting... (0)
Jan 20 15:15:28 compute-1 systemd[1]: libpod-f51b1e2ffd0c0523ee19615079e9e512b8c099800046323e66291635c3cc77d9.scope: Deactivated successfully.
Jan 20 15:15:28 compute-1 nova_compute[225855]: 2026-01-20 15:15:28.185 225859 INFO nova.virt.libvirt.driver [-] [instance: 185fbaf7-4372-4e7c-b053-df9c4022514f] Instance destroyed successfully.
Jan 20 15:15:28 compute-1 nova_compute[225855]: 2026-01-20 15:15:28.186 225859 DEBUG nova.objects.instance [None req-22a2616d-74c4-4fc5-8de0-a4e568bbbb7a bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Lazy-loading 'resources' on Instance uuid 185fbaf7-4372-4e7c-b053-df9c4022514f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:15:28 compute-1 conmon[303538]: conmon f51b1e2ffd0c0523ee19 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f51b1e2ffd0c0523ee19615079e9e512b8c099800046323e66291635c3cc77d9.scope/container/memory.events
Jan 20 15:15:28 compute-1 podman[304002]: 2026-01-20 15:15:28.192857061 +0000 UTC m=+0.052283184 container died f51b1e2ffd0c0523ee19615079e9e512b8c099800046323e66291635c3cc77d9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b677f1a9-dbaa-4373-8466-bd9ccf067b91, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 20 15:15:28 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f51b1e2ffd0c0523ee19615079e9e512b8c099800046323e66291635c3cc77d9-userdata-shm.mount: Deactivated successfully.
Jan 20 15:15:28 compute-1 nova_compute[225855]: 2026-01-20 15:15:28.219 225859 DEBUG nova.virt.libvirt.vif [None req-22a2616d-74c4-4fc5-8de0-a4e568bbbb7a bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T15:14:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-1033173342',display_name='tempest-TestVolumeBootPattern-server-1033173342',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-1033173342',id=188,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBjkv3PM31l7/LOeidHCDov4vvdGwqOT15IVVbWearXBCn3jQz2xB6ix8iz1XP+iiPXyhWuw0LpMPT9jQN2b0mvhqeZTHErGcz1VZLskRcT6iqcekmFxWykFxr44bv68XA==',key_name='tempest-TestVolumeBootPattern-474773317',keypairs=<?>,launch_index=0,launched_at=2026-01-20T15:14:46Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a49638950e1543fa8e0d251af5479623',ramdisk_id='',reservation_id='r-pz9crh7i',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TestVolumeBootPattern-194644003',owner_user_name='tempest-TestVolumeBootPattern-194644003-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T15:14:46Z,user_data=None,user_id='bf422e55e158420cbdae75f07a3bb97a',uuid=185fbaf7-4372-4e7c-b053-df9c4022514f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2cfaf09f-1f9e-489f-b7d3-43166c005796", "address": "fa:16:3e:6b:b0:3d", "network": {"id": "b677f1a9-dbaa-4373-8466-bd9ccf067b91", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-408170906-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.174", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a49638950e1543fa8e0d251af5479623", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2cfaf09f-1f", "ovs_interfaceid": "2cfaf09f-1f9e-489f-b7d3-43166c005796", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 20 15:15:28 compute-1 nova_compute[225855]: 2026-01-20 15:15:28.219 225859 DEBUG nova.network.os_vif_util [None req-22a2616d-74c4-4fc5-8de0-a4e568bbbb7a bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Converting VIF {"id": "2cfaf09f-1f9e-489f-b7d3-43166c005796", "address": "fa:16:3e:6b:b0:3d", "network": {"id": "b677f1a9-dbaa-4373-8466-bd9ccf067b91", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-408170906-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.174", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a49638950e1543fa8e0d251af5479623", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2cfaf09f-1f", "ovs_interfaceid": "2cfaf09f-1f9e-489f-b7d3-43166c005796", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 15:15:28 compute-1 nova_compute[225855]: 2026-01-20 15:15:28.220 225859 DEBUG nova.network.os_vif_util [None req-22a2616d-74c4-4fc5-8de0-a4e568bbbb7a bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:6b:b0:3d,bridge_name='br-int',has_traffic_filtering=True,id=2cfaf09f-1f9e-489f-b7d3-43166c005796,network=Network(b677f1a9-dbaa-4373-8466-bd9ccf067b91),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2cfaf09f-1f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 15:15:28 compute-1 nova_compute[225855]: 2026-01-20 15:15:28.221 225859 DEBUG os_vif [None req-22a2616d-74c4-4fc5-8de0-a4e568bbbb7a bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:6b:b0:3d,bridge_name='br-int',has_traffic_filtering=True,id=2cfaf09f-1f9e-489f-b7d3-43166c005796,network=Network(b677f1a9-dbaa-4373-8466-bd9ccf067b91),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2cfaf09f-1f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 20 15:15:28 compute-1 nova_compute[225855]: 2026-01-20 15:15:28.224 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:15:28 compute-1 nova_compute[225855]: 2026-01-20 15:15:28.224 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2cfaf09f-1f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:15:28 compute-1 systemd[1]: var-lib-containers-storage-overlay-67b20a5a2fd7710889925edaaefc37d2cd36bbde3ed5e1de89395074d35e2d88-merged.mount: Deactivated successfully.
Jan 20 15:15:28 compute-1 nova_compute[225855]: 2026-01-20 15:15:28.228 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 15:15:28 compute-1 nova_compute[225855]: 2026-01-20 15:15:28.231 225859 INFO os_vif [None req-22a2616d-74c4-4fc5-8de0-a4e568bbbb7a bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:6b:b0:3d,bridge_name='br-int',has_traffic_filtering=True,id=2cfaf09f-1f9e-489f-b7d3-43166c005796,network=Network(b677f1a9-dbaa-4373-8466-bd9ccf067b91),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2cfaf09f-1f')
Jan 20 15:15:28 compute-1 podman[304002]: 2026-01-20 15:15:28.233138504 +0000 UTC m=+0.092564627 container cleanup f51b1e2ffd0c0523ee19615079e9e512b8c099800046323e66291635c3cc77d9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b677f1a9-dbaa-4373-8466-bd9ccf067b91, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Jan 20 15:15:28 compute-1 systemd[1]: libpod-conmon-f51b1e2ffd0c0523ee19615079e9e512b8c099800046323e66291635c3cc77d9.scope: Deactivated successfully.
Jan 20 15:15:28 compute-1 podman[304042]: 2026-01-20 15:15:28.28937295 +0000 UTC m=+0.038611177 container remove f51b1e2ffd0c0523ee19615079e9e512b8c099800046323e66291635c3cc77d9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b677f1a9-dbaa-4373-8466-bd9ccf067b91, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202)
Jan 20 15:15:28 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:15:28.295 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[d2f96571-6de8-4d4d-95cc-1e3e12c633cf]: (4, ('Tue Jan 20 03:15:28 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-b677f1a9-dbaa-4373-8466-bd9ccf067b91 (f51b1e2ffd0c0523ee19615079e9e512b8c099800046323e66291635c3cc77d9)\nf51b1e2ffd0c0523ee19615079e9e512b8c099800046323e66291635c3cc77d9\nTue Jan 20 03:15:28 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-b677f1a9-dbaa-4373-8466-bd9ccf067b91 (f51b1e2ffd0c0523ee19615079e9e512b8c099800046323e66291635c3cc77d9)\nf51b1e2ffd0c0523ee19615079e9e512b8c099800046323e66291635c3cc77d9\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:15:28 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:15:28.296 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[32d993c1-a136-429d-9eae-8b57ef159865]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:15:28 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:15:28.297 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb677f1a9-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:15:28 compute-1 nova_compute[225855]: 2026-01-20 15:15:28.299 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:15:28 compute-1 kernel: tapb677f1a9-d0: left promiscuous mode
Jan 20 15:15:28 compute-1 nova_compute[225855]: 2026-01-20 15:15:28.315 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:15:28 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:15:28.318 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[80d8b198-b73d-41cb-a7e5-a50ec2fc94b4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:15:28 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:15:28.341 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[da457d9e-244f-4855-8c00-8ef788afed6c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:15:28 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:15:28.343 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[cafabbb1-23c6-410c-859e-33ce23ba55fc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:15:28 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:15:28.359 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[a121d298-58b0-4158-8cde-2181705426f4]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 712354, 'reachable_time': 39672, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 304075, 'error': None, 'target': 'ovnmeta-b677f1a9-dbaa-4373-8466-bd9ccf067b91', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:15:28 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:15:28.361 140466 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-b677f1a9-dbaa-4373-8466-bd9ccf067b91 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 20 15:15:28 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:15:28.362 140466 DEBUG oslo.privsep.daemon [-] privsep: reply[c510100a-175f-44b4-9407-b2c06a13506a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:15:28 compute-1 systemd[1]: run-netns-ovnmeta\x2db677f1a9\x2ddbaa\x2d4373\x2d8466\x2dbd9ccf067b91.mount: Deactivated successfully.
Jan 20 15:15:28 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e401 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:15:28 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:15:28 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3564192023' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:15:28 compute-1 nova_compute[225855]: 2026-01-20 15:15:28.430 225859 DEBUG oslo_concurrency.processutils [None req-b335d546-cb29-41df-a4fc-7fc39a94104e e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:15:28 compute-1 nova_compute[225855]: 2026-01-20 15:15:28.438 225859 DEBUG nova.compute.provider_tree [None req-b335d546-cb29-41df-a4fc-7fc39a94104e e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:15:28 compute-1 nova_compute[225855]: 2026-01-20 15:15:28.457 225859 DEBUG nova.scheduler.client.report [None req-b335d546-cb29-41df-a4fc-7fc39a94104e e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:15:28 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/3564192023' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:15:28 compute-1 nova_compute[225855]: 2026-01-20 15:15:28.476 225859 DEBUG oslo_concurrency.lockutils [None req-b335d546-cb29-41df-a4fc-7fc39a94104e e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: held 1.022s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:15:28 compute-1 nova_compute[225855]: 2026-01-20 15:15:28.477 225859 INFO nova.compute.manager [None req-b335d546-cb29-41df-a4fc-7fc39a94104e e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] [instance: f1ded131-d9a3-4e93-ad99-53ee2695d5c8] Migrating
Jan 20 15:15:28 compute-1 nova_compute[225855]: 2026-01-20 15:15:28.492 225859 INFO nova.virt.libvirt.driver [None req-22a2616d-74c4-4fc5-8de0-a4e568bbbb7a bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] [instance: 185fbaf7-4372-4e7c-b053-df9c4022514f] Deleting instance files /var/lib/nova/instances/185fbaf7-4372-4e7c-b053-df9c4022514f_del
Jan 20 15:15:28 compute-1 nova_compute[225855]: 2026-01-20 15:15:28.493 225859 INFO nova.virt.libvirt.driver [None req-22a2616d-74c4-4fc5-8de0-a4e568bbbb7a bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] [instance: 185fbaf7-4372-4e7c-b053-df9c4022514f] Deletion of /var/lib/nova/instances/185fbaf7-4372-4e7c-b053-df9c4022514f_del complete
Jan 20 15:15:28 compute-1 nova_compute[225855]: 2026-01-20 15:15:28.561 225859 INFO nova.compute.manager [None req-22a2616d-74c4-4fc5-8de0-a4e568bbbb7a bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] [instance: 185fbaf7-4372-4e7c-b053-df9c4022514f] Took 0.82 seconds to destroy the instance on the hypervisor.
Jan 20 15:15:28 compute-1 nova_compute[225855]: 2026-01-20 15:15:28.562 225859 DEBUG oslo.service.loopingcall [None req-22a2616d-74c4-4fc5-8de0-a4e568bbbb7a bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 20 15:15:28 compute-1 nova_compute[225855]: 2026-01-20 15:15:28.562 225859 DEBUG nova.compute.manager [-] [instance: 185fbaf7-4372-4e7c-b053-df9c4022514f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 20 15:15:28 compute-1 nova_compute[225855]: 2026-01-20 15:15:28.562 225859 DEBUG nova.network.neutron [-] [instance: 185fbaf7-4372-4e7c-b053-df9c4022514f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 20 15:15:29 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:15:29 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:15:29 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:15:29.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:15:29 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:15:29 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:15:29 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:15:29.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:15:29 compute-1 ceph-mon[81775]: pgmap v2782: 321 pgs: 321 active+clean; 620 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 252 KiB/s rd, 61 KiB/s wr, 16 op/s
Jan 20 15:15:29 compute-1 nova_compute[225855]: 2026-01-20 15:15:29.680 225859 DEBUG nova.compute.manager [req-a30b1e01-a015-4f15-b1c3-e3e7be1acc94 req-8c0c5996-d02e-4a33-ac08-ac0007df7acc 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 185fbaf7-4372-4e7c-b053-df9c4022514f] Received event network-vif-unplugged-2cfaf09f-1f9e-489f-b7d3-43166c005796 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:15:29 compute-1 nova_compute[225855]: 2026-01-20 15:15:29.681 225859 DEBUG oslo_concurrency.lockutils [req-a30b1e01-a015-4f15-b1c3-e3e7be1acc94 req-8c0c5996-d02e-4a33-ac08-ac0007df7acc 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "185fbaf7-4372-4e7c-b053-df9c4022514f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:15:29 compute-1 nova_compute[225855]: 2026-01-20 15:15:29.681 225859 DEBUG oslo_concurrency.lockutils [req-a30b1e01-a015-4f15-b1c3-e3e7be1acc94 req-8c0c5996-d02e-4a33-ac08-ac0007df7acc 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "185fbaf7-4372-4e7c-b053-df9c4022514f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:15:29 compute-1 nova_compute[225855]: 2026-01-20 15:15:29.681 225859 DEBUG oslo_concurrency.lockutils [req-a30b1e01-a015-4f15-b1c3-e3e7be1acc94 req-8c0c5996-d02e-4a33-ac08-ac0007df7acc 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "185fbaf7-4372-4e7c-b053-df9c4022514f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:15:29 compute-1 nova_compute[225855]: 2026-01-20 15:15:29.681 225859 DEBUG nova.compute.manager [req-a30b1e01-a015-4f15-b1c3-e3e7be1acc94 req-8c0c5996-d02e-4a33-ac08-ac0007df7acc 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 185fbaf7-4372-4e7c-b053-df9c4022514f] No waiting events found dispatching network-vif-unplugged-2cfaf09f-1f9e-489f-b7d3-43166c005796 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:15:29 compute-1 nova_compute[225855]: 2026-01-20 15:15:29.682 225859 DEBUG nova.compute.manager [req-a30b1e01-a015-4f15-b1c3-e3e7be1acc94 req-8c0c5996-d02e-4a33-ac08-ac0007df7acc 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 185fbaf7-4372-4e7c-b053-df9c4022514f] Received event network-vif-unplugged-2cfaf09f-1f9e-489f-b7d3-43166c005796 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 20 15:15:30 compute-1 nova_compute[225855]: 2026-01-20 15:15:30.461 225859 DEBUG nova.network.neutron [req-a5758f26-8c43-4950-bed0-502a1521c662 req-ad93c421-dc8a-4d87-ab2b-38a939e3c537 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 185fbaf7-4372-4e7c-b053-df9c4022514f] Updated VIF entry in instance network info cache for port 2cfaf09f-1f9e-489f-b7d3-43166c005796. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 15:15:30 compute-1 nova_compute[225855]: 2026-01-20 15:15:30.462 225859 DEBUG nova.network.neutron [req-a5758f26-8c43-4950-bed0-502a1521c662 req-ad93c421-dc8a-4d87-ab2b-38a939e3c537 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 185fbaf7-4372-4e7c-b053-df9c4022514f] Updating instance_info_cache with network_info: [{"id": "2cfaf09f-1f9e-489f-b7d3-43166c005796", "address": "fa:16:3e:6b:b0:3d", "network": {"id": "b677f1a9-dbaa-4373-8466-bd9ccf067b91", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-408170906-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a49638950e1543fa8e0d251af5479623", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2cfaf09f-1f", "ovs_interfaceid": "2cfaf09f-1f9e-489f-b7d3-43166c005796", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:15:30 compute-1 nova_compute[225855]: 2026-01-20 15:15:30.481 225859 DEBUG oslo_concurrency.lockutils [req-a5758f26-8c43-4950-bed0-502a1521c662 req-ad93c421-dc8a-4d87-ab2b-38a939e3c537 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-185fbaf7-4372-4e7c-b053-df9c4022514f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:15:30 compute-1 nova_compute[225855]: 2026-01-20 15:15:30.896 225859 DEBUG nova.network.neutron [-] [instance: 185fbaf7-4372-4e7c-b053-df9c4022514f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:15:30 compute-1 nova_compute[225855]: 2026-01-20 15:15:30.932 225859 INFO nova.compute.manager [-] [instance: 185fbaf7-4372-4e7c-b053-df9c4022514f] Took 2.37 seconds to deallocate network for instance.
Jan 20 15:15:31 compute-1 nova_compute[225855]: 2026-01-20 15:15:31.019 225859 DEBUG nova.compute.manager [req-1a4ac1b9-799b-490e-99dd-8771c3668b30 req-5fa23e5a-f663-4d70-9f86-d3b073bf09d5 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 185fbaf7-4372-4e7c-b053-df9c4022514f] Received event network-vif-deleted-2cfaf09f-1f9e-489f-b7d3-43166c005796 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:15:31 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:15:31 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:15:31 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:15:31.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:15:31 compute-1 sshd-session[304080]: Accepted publickey for nova from 192.168.122.100 port 48186 ssh2: ECDSA SHA256:XnPnjIKlkePRv+YAV8ktjwWUWX9aekF80jIRGfdhjRU
Jan 20 15:15:31 compute-1 systemd[1]: Created slice User Slice of UID 42436.
Jan 20 15:15:31 compute-1 systemd[1]: Starting User Runtime Directory /run/user/42436...
Jan 20 15:15:31 compute-1 systemd-logind[783]: New session 69 of user nova.
Jan 20 15:15:31 compute-1 systemd[1]: Finished User Runtime Directory /run/user/42436.
Jan 20 15:15:31 compute-1 systemd[1]: Starting User Manager for UID 42436...
Jan 20 15:15:31 compute-1 systemd[304084]: pam_unix(systemd-user:session): session opened for user nova(uid=42436) by nova(uid=0)
Jan 20 15:15:31 compute-1 nova_compute[225855]: 2026-01-20 15:15:31.205 225859 INFO nova.compute.manager [None req-22a2616d-74c4-4fc5-8de0-a4e568bbbb7a bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] [instance: 185fbaf7-4372-4e7c-b053-df9c4022514f] Took 0.27 seconds to detach 1 volumes for instance.
Jan 20 15:15:31 compute-1 nova_compute[225855]: 2026-01-20 15:15:31.253 225859 DEBUG oslo_concurrency.lockutils [None req-22a2616d-74c4-4fc5-8de0-a4e568bbbb7a bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:15:31 compute-1 nova_compute[225855]: 2026-01-20 15:15:31.253 225859 DEBUG oslo_concurrency.lockutils [None req-22a2616d-74c4-4fc5-8de0-a4e568bbbb7a bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:15:31 compute-1 systemd[304084]: Queued start job for default target Main User Target.
Jan 20 15:15:31 compute-1 systemd[304084]: Created slice User Application Slice.
Jan 20 15:15:31 compute-1 systemd[304084]: Started Mark boot as successful after the user session has run 2 minutes.
Jan 20 15:15:31 compute-1 systemd[304084]: Started Daily Cleanup of User's Temporary Directories.
Jan 20 15:15:31 compute-1 systemd[304084]: Reached target Paths.
Jan 20 15:15:31 compute-1 systemd[304084]: Reached target Timers.
Jan 20 15:15:31 compute-1 systemd[304084]: Starting D-Bus User Message Bus Socket...
Jan 20 15:15:31 compute-1 systemd[304084]: Starting Create User's Volatile Files and Directories...
Jan 20 15:15:31 compute-1 systemd[304084]: Finished Create User's Volatile Files and Directories.
Jan 20 15:15:31 compute-1 systemd[304084]: Listening on D-Bus User Message Bus Socket.
Jan 20 15:15:31 compute-1 systemd[304084]: Reached target Sockets.
Jan 20 15:15:31 compute-1 systemd[304084]: Reached target Basic System.
Jan 20 15:15:31 compute-1 systemd[304084]: Reached target Main User Target.
Jan 20 15:15:31 compute-1 systemd[304084]: Startup finished in 138ms.
Jan 20 15:15:31 compute-1 systemd[1]: Started User Manager for UID 42436.
Jan 20 15:15:31 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:15:31 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:15:31 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:15:31.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:15:31 compute-1 systemd[1]: Started Session 69 of User nova.
Jan 20 15:15:31 compute-1 sshd-session[304080]: pam_unix(sshd:session): session opened for user nova(uid=42436) by nova(uid=0)
Jan 20 15:15:31 compute-1 sshd-session[304099]: Received disconnect from 192.168.122.100 port 48186:11: disconnected by user
Jan 20 15:15:31 compute-1 sshd-session[304099]: Disconnected from user nova 192.168.122.100 port 48186
Jan 20 15:15:31 compute-1 sshd-session[304080]: pam_unix(sshd:session): session closed for user nova
Jan 20 15:15:31 compute-1 systemd[1]: session-69.scope: Deactivated successfully.
Jan 20 15:15:31 compute-1 systemd-logind[783]: Session 69 logged out. Waiting for processes to exit.
Jan 20 15:15:31 compute-1 systemd-logind[783]: Removed session 69.
Jan 20 15:15:31 compute-1 nova_compute[225855]: 2026-01-20 15:15:31.433 225859 DEBUG oslo_concurrency.processutils [None req-22a2616d-74c4-4fc5-8de0-a4e568bbbb7a bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:15:31 compute-1 sshd-session[304102]: Accepted publickey for nova from 192.168.122.100 port 48200 ssh2: ECDSA SHA256:XnPnjIKlkePRv+YAV8ktjwWUWX9aekF80jIRGfdhjRU
Jan 20 15:15:31 compute-1 systemd-logind[783]: New session 71 of user nova.
Jan 20 15:15:31 compute-1 systemd[1]: Started Session 71 of User nova.
Jan 20 15:15:31 compute-1 sshd-session[304102]: pam_unix(sshd:session): session opened for user nova(uid=42436) by nova(uid=0)
Jan 20 15:15:31 compute-1 sshd-session[304124]: Received disconnect from 192.168.122.100 port 48200:11: disconnected by user
Jan 20 15:15:31 compute-1 sshd-session[304124]: Disconnected from user nova 192.168.122.100 port 48200
Jan 20 15:15:31 compute-1 sshd-session[304102]: pam_unix(sshd:session): session closed for user nova
Jan 20 15:15:31 compute-1 ceph-mon[81775]: pgmap v2783: 321 pgs: 321 active+clean; 620 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 259 KiB/s rd, 61 KiB/s wr, 24 op/s
Jan 20 15:15:31 compute-1 systemd[1]: session-71.scope: Deactivated successfully.
Jan 20 15:15:31 compute-1 systemd-logind[783]: Session 71 logged out. Waiting for processes to exit.
Jan 20 15:15:31 compute-1 systemd-logind[783]: Removed session 71.
Jan 20 15:15:31 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:15:31 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1592438209' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:15:31 compute-1 nova_compute[225855]: 2026-01-20 15:15:31.869 225859 DEBUG nova.compute.manager [req-920ca308-7218-4c6e-9fd8-7259f8d53d90 req-08b69dc9-7e07-40cf-8719-7a697d37c122 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 185fbaf7-4372-4e7c-b053-df9c4022514f] Received event network-vif-plugged-2cfaf09f-1f9e-489f-b7d3-43166c005796 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:15:31 compute-1 nova_compute[225855]: 2026-01-20 15:15:31.870 225859 DEBUG oslo_concurrency.lockutils [req-920ca308-7218-4c6e-9fd8-7259f8d53d90 req-08b69dc9-7e07-40cf-8719-7a697d37c122 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "185fbaf7-4372-4e7c-b053-df9c4022514f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:15:31 compute-1 nova_compute[225855]: 2026-01-20 15:15:31.870 225859 DEBUG oslo_concurrency.lockutils [req-920ca308-7218-4c6e-9fd8-7259f8d53d90 req-08b69dc9-7e07-40cf-8719-7a697d37c122 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "185fbaf7-4372-4e7c-b053-df9c4022514f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:15:31 compute-1 nova_compute[225855]: 2026-01-20 15:15:31.870 225859 DEBUG oslo_concurrency.lockutils [req-920ca308-7218-4c6e-9fd8-7259f8d53d90 req-08b69dc9-7e07-40cf-8719-7a697d37c122 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "185fbaf7-4372-4e7c-b053-df9c4022514f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:15:31 compute-1 nova_compute[225855]: 2026-01-20 15:15:31.871 225859 DEBUG nova.compute.manager [req-920ca308-7218-4c6e-9fd8-7259f8d53d90 req-08b69dc9-7e07-40cf-8719-7a697d37c122 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 185fbaf7-4372-4e7c-b053-df9c4022514f] No waiting events found dispatching network-vif-plugged-2cfaf09f-1f9e-489f-b7d3-43166c005796 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:15:31 compute-1 nova_compute[225855]: 2026-01-20 15:15:31.871 225859 WARNING nova.compute.manager [req-920ca308-7218-4c6e-9fd8-7259f8d53d90 req-08b69dc9-7e07-40cf-8719-7a697d37c122 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 185fbaf7-4372-4e7c-b053-df9c4022514f] Received unexpected event network-vif-plugged-2cfaf09f-1f9e-489f-b7d3-43166c005796 for instance with vm_state deleted and task_state None.
Jan 20 15:15:31 compute-1 nova_compute[225855]: 2026-01-20 15:15:31.873 225859 DEBUG oslo_concurrency.processutils [None req-22a2616d-74c4-4fc5-8de0-a4e568bbbb7a bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:15:31 compute-1 nova_compute[225855]: 2026-01-20 15:15:31.879 225859 DEBUG nova.compute.provider_tree [None req-22a2616d-74c4-4fc5-8de0-a4e568bbbb7a bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:15:31 compute-1 nova_compute[225855]: 2026-01-20 15:15:31.922 225859 DEBUG nova.scheduler.client.report [None req-22a2616d-74c4-4fc5-8de0-a4e568bbbb7a bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:15:31 compute-1 nova_compute[225855]: 2026-01-20 15:15:31.943 225859 DEBUG oslo_concurrency.lockutils [None req-22a2616d-74c4-4fc5-8de0-a4e568bbbb7a bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.690s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:15:31 compute-1 nova_compute[225855]: 2026-01-20 15:15:31.966 225859 INFO nova.scheduler.client.report [None req-22a2616d-74c4-4fc5-8de0-a4e568bbbb7a bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Deleted allocations for instance 185fbaf7-4372-4e7c-b053-df9c4022514f
Jan 20 15:15:32 compute-1 nova_compute[225855]: 2026-01-20 15:15:32.068 225859 DEBUG oslo_concurrency.lockutils [None req-22a2616d-74c4-4fc5-8de0-a4e568bbbb7a bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Lock "185fbaf7-4372-4e7c-b053-df9c4022514f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.326s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:15:32 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/1592438209' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:15:32 compute-1 nova_compute[225855]: 2026-01-20 15:15:32.921 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:15:33 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:15:33 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:15:33 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:15:33.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:15:33 compute-1 nova_compute[225855]: 2026-01-20 15:15:33.226 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:15:33 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:15:33 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:15:33 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:15:33.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:15:33 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e401 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:15:33 compute-1 ceph-mon[81775]: pgmap v2784: 321 pgs: 321 active+clean; 620 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 222 KiB/s rd, 56 KiB/s wr, 25 op/s
Jan 20 15:15:34 compute-1 nova_compute[225855]: 2026-01-20 15:15:34.287 225859 DEBUG nova.compute.manager [req-e4c946d2-8be3-403c-961a-3a75924fbe4e req-46ac77b8-0c6a-420a-bb6c-49d1a41ef030 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f1ded131-d9a3-4e93-ad99-53ee2695d5c8] Received event network-vif-unplugged-0e93d1de-671e-4e37-8e79-44bed7981254 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:15:34 compute-1 nova_compute[225855]: 2026-01-20 15:15:34.288 225859 DEBUG oslo_concurrency.lockutils [req-e4c946d2-8be3-403c-961a-3a75924fbe4e req-46ac77b8-0c6a-420a-bb6c-49d1a41ef030 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "f1ded131-d9a3-4e93-ad99-53ee2695d5c8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:15:34 compute-1 nova_compute[225855]: 2026-01-20 15:15:34.288 225859 DEBUG oslo_concurrency.lockutils [req-e4c946d2-8be3-403c-961a-3a75924fbe4e req-46ac77b8-0c6a-420a-bb6c-49d1a41ef030 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "f1ded131-d9a3-4e93-ad99-53ee2695d5c8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:15:34 compute-1 nova_compute[225855]: 2026-01-20 15:15:34.288 225859 DEBUG oslo_concurrency.lockutils [req-e4c946d2-8be3-403c-961a-3a75924fbe4e req-46ac77b8-0c6a-420a-bb6c-49d1a41ef030 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "f1ded131-d9a3-4e93-ad99-53ee2695d5c8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:15:34 compute-1 nova_compute[225855]: 2026-01-20 15:15:34.288 225859 DEBUG nova.compute.manager [req-e4c946d2-8be3-403c-961a-3a75924fbe4e req-46ac77b8-0c6a-420a-bb6c-49d1a41ef030 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f1ded131-d9a3-4e93-ad99-53ee2695d5c8] No waiting events found dispatching network-vif-unplugged-0e93d1de-671e-4e37-8e79-44bed7981254 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:15:34 compute-1 nova_compute[225855]: 2026-01-20 15:15:34.288 225859 WARNING nova.compute.manager [req-e4c946d2-8be3-403c-961a-3a75924fbe4e req-46ac77b8-0c6a-420a-bb6c-49d1a41ef030 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f1ded131-d9a3-4e93-ad99-53ee2695d5c8] Received unexpected event network-vif-unplugged-0e93d1de-671e-4e37-8e79-44bed7981254 for instance with vm_state active and task_state resize_migrating.
Jan 20 15:15:34 compute-1 nova_compute[225855]: 2026-01-20 15:15:34.637 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:15:34 compute-1 ceph-mon[81775]: pgmap v2785: 321 pgs: 321 active+clean; 620 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 222 KiB/s rd, 13 KiB/s wr, 22 op/s
Jan 20 15:15:35 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:15:35 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:15:35 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:15:35.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:15:35 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:15:35 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:15:35 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:15:35.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:15:35 compute-1 nova_compute[225855]: 2026-01-20 15:15:35.819 225859 INFO nova.network.neutron [None req-b335d546-cb29-41df-a4fc-7fc39a94104e e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] [instance: f1ded131-d9a3-4e93-ad99-53ee2695d5c8] Updating port 0e93d1de-671e-4e37-8e79-44bed7981254 with attributes {'binding:host_id': 'compute-1.ctlplane.example.com', 'device_owner': 'compute:nova'}
Jan 20 15:15:35 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/1108637769' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:15:35 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/1108637769' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:15:36 compute-1 ceph-mon[81775]: pgmap v2786: 321 pgs: 321 active+clean; 605 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 232 KiB/s rd, 28 KiB/s wr, 43 op/s
Jan 20 15:15:37 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:15:37 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:15:37 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:15:37.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:15:37 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:15:37 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:15:37 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:15:37.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:15:37 compute-1 nova_compute[225855]: 2026-01-20 15:15:37.541 225859 DEBUG nova.compute.manager [req-1000111b-15d0-45d2-ac83-d49fd1cbbc37 req-33a57693-0eff-4d7f-8520-32926828c084 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f1ded131-d9a3-4e93-ad99-53ee2695d5c8] Received event network-vif-plugged-0e93d1de-671e-4e37-8e79-44bed7981254 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:15:37 compute-1 nova_compute[225855]: 2026-01-20 15:15:37.542 225859 DEBUG oslo_concurrency.lockutils [req-1000111b-15d0-45d2-ac83-d49fd1cbbc37 req-33a57693-0eff-4d7f-8520-32926828c084 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "f1ded131-d9a3-4e93-ad99-53ee2695d5c8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:15:37 compute-1 nova_compute[225855]: 2026-01-20 15:15:37.542 225859 DEBUG oslo_concurrency.lockutils [req-1000111b-15d0-45d2-ac83-d49fd1cbbc37 req-33a57693-0eff-4d7f-8520-32926828c084 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "f1ded131-d9a3-4e93-ad99-53ee2695d5c8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:15:37 compute-1 nova_compute[225855]: 2026-01-20 15:15:37.543 225859 DEBUG oslo_concurrency.lockutils [req-1000111b-15d0-45d2-ac83-d49fd1cbbc37 req-33a57693-0eff-4d7f-8520-32926828c084 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "f1ded131-d9a3-4e93-ad99-53ee2695d5c8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:15:37 compute-1 nova_compute[225855]: 2026-01-20 15:15:37.543 225859 DEBUG nova.compute.manager [req-1000111b-15d0-45d2-ac83-d49fd1cbbc37 req-33a57693-0eff-4d7f-8520-32926828c084 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f1ded131-d9a3-4e93-ad99-53ee2695d5c8] No waiting events found dispatching network-vif-plugged-0e93d1de-671e-4e37-8e79-44bed7981254 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:15:37 compute-1 nova_compute[225855]: 2026-01-20 15:15:37.543 225859 WARNING nova.compute.manager [req-1000111b-15d0-45d2-ac83-d49fd1cbbc37 req-33a57693-0eff-4d7f-8520-32926828c084 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f1ded131-d9a3-4e93-ad99-53ee2695d5c8] Received unexpected event network-vif-plugged-0e93d1de-671e-4e37-8e79-44bed7981254 for instance with vm_state active and task_state resize_migrated.
Jan 20 15:15:37 compute-1 nova_compute[225855]: 2026-01-20 15:15:37.923 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:15:37 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e402 e402: 3 total, 3 up, 3 in
Jan 20 15:15:38 compute-1 nova_compute[225855]: 2026-01-20 15:15:38.076 225859 DEBUG oslo_concurrency.lockutils [None req-b335d546-cb29-41df-a4fc-7fc39a94104e e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Acquiring lock "refresh_cache-f1ded131-d9a3-4e93-ad99-53ee2695d5c8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:15:38 compute-1 nova_compute[225855]: 2026-01-20 15:15:38.077 225859 DEBUG oslo_concurrency.lockutils [None req-b335d546-cb29-41df-a4fc-7fc39a94104e e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Acquired lock "refresh_cache-f1ded131-d9a3-4e93-ad99-53ee2695d5c8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:15:38 compute-1 nova_compute[225855]: 2026-01-20 15:15:38.077 225859 DEBUG nova.network.neutron [None req-b335d546-cb29-41df-a4fc-7fc39a94104e e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] [instance: f1ded131-d9a3-4e93-ad99-53ee2695d5c8] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 15:15:38 compute-1 nova_compute[225855]: 2026-01-20 15:15:38.228 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:15:38 compute-1 nova_compute[225855]: 2026-01-20 15:15:38.296 225859 DEBUG nova.compute.manager [req-87897e1d-ecd1-4510-b9de-886f9ed8c75b req-f4ce1150-6021-408f-8ead-d1dfd55983da 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f1ded131-d9a3-4e93-ad99-53ee2695d5c8] Received event network-changed-0e93d1de-671e-4e37-8e79-44bed7981254 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:15:38 compute-1 nova_compute[225855]: 2026-01-20 15:15:38.297 225859 DEBUG nova.compute.manager [req-87897e1d-ecd1-4510-b9de-886f9ed8c75b req-f4ce1150-6021-408f-8ead-d1dfd55983da 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f1ded131-d9a3-4e93-ad99-53ee2695d5c8] Refreshing instance network info cache due to event network-changed-0e93d1de-671e-4e37-8e79-44bed7981254. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 15:15:38 compute-1 nova_compute[225855]: 2026-01-20 15:15:38.297 225859 DEBUG oslo_concurrency.lockutils [req-87897e1d-ecd1-4510-b9de-886f9ed8c75b req-f4ce1150-6021-408f-8ead-d1dfd55983da 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-f1ded131-d9a3-4e93-ad99-53ee2695d5c8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:15:38 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e402 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:15:38 compute-1 ceph-mon[81775]: osdmap e402: 3 total, 3 up, 3 in
Jan 20 15:15:38 compute-1 ceph-mon[81775]: pgmap v2788: 321 pgs: 321 active+clean; 605 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 29 KiB/s rd, 21 KiB/s wr, 42 op/s
Jan 20 15:15:39 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:15:39 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:15:39 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:15:39.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:15:39 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:15:39 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:15:39 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:15:39.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:15:39 compute-1 nova_compute[225855]: 2026-01-20 15:15:39.893 225859 DEBUG oslo_concurrency.lockutils [None req-bbd8c924-2ea1-4107-b500-c19534f19b83 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Acquiring lock "b5656c1b-5ac7-4b93-a25d-420e1e294678" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:15:39 compute-1 nova_compute[225855]: 2026-01-20 15:15:39.894 225859 DEBUG oslo_concurrency.lockutils [None req-bbd8c924-2ea1-4107-b500-c19534f19b83 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Lock "b5656c1b-5ac7-4b93-a25d-420e1e294678" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:15:39 compute-1 nova_compute[225855]: 2026-01-20 15:15:39.923 225859 DEBUG nova.compute.manager [None req-bbd8c924-2ea1-4107-b500-c19534f19b83 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: b5656c1b-5ac7-4b93-a25d-420e1e294678] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 20 15:15:40 compute-1 nova_compute[225855]: 2026-01-20 15:15:40.021 225859 DEBUG oslo_concurrency.lockutils [None req-bbd8c924-2ea1-4107-b500-c19534f19b83 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:15:40 compute-1 nova_compute[225855]: 2026-01-20 15:15:40.022 225859 DEBUG oslo_concurrency.lockutils [None req-bbd8c924-2ea1-4107-b500-c19534f19b83 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:15:40 compute-1 nova_compute[225855]: 2026-01-20 15:15:40.031 225859 DEBUG nova.virt.hardware [None req-bbd8c924-2ea1-4107-b500-c19534f19b83 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 20 15:15:40 compute-1 nova_compute[225855]: 2026-01-20 15:15:40.031 225859 INFO nova.compute.claims [None req-bbd8c924-2ea1-4107-b500-c19534f19b83 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: b5656c1b-5ac7-4b93-a25d-420e1e294678] Claim successful on node compute-1.ctlplane.example.com
Jan 20 15:15:40 compute-1 nova_compute[225855]: 2026-01-20 15:15:40.171 225859 DEBUG nova.network.neutron [None req-b335d546-cb29-41df-a4fc-7fc39a94104e e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] [instance: f1ded131-d9a3-4e93-ad99-53ee2695d5c8] Updating instance_info_cache with network_info: [{"id": "0e93d1de-671e-4e37-8e79-44bed7981254", "address": "fa:16:3e:99:5e:ed", "network": {"id": "c1f4a971-0bd7-41ce-bdf6-5acb2b1b4bab", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1423306001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fff727019f86407498e83d7948d54962", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0e93d1de-67", "ovs_interfaceid": "0e93d1de-671e-4e37-8e79-44bed7981254", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:15:40 compute-1 nova_compute[225855]: 2026-01-20 15:15:40.193 225859 DEBUG oslo_concurrency.lockutils [None req-b335d546-cb29-41df-a4fc-7fc39a94104e e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Releasing lock "refresh_cache-f1ded131-d9a3-4e93-ad99-53ee2695d5c8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:15:40 compute-1 nova_compute[225855]: 2026-01-20 15:15:40.197 225859 DEBUG oslo_concurrency.lockutils [req-87897e1d-ecd1-4510-b9de-886f9ed8c75b req-f4ce1150-6021-408f-8ead-d1dfd55983da 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-f1ded131-d9a3-4e93-ad99-53ee2695d5c8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:15:40 compute-1 nova_compute[225855]: 2026-01-20 15:15:40.197 225859 DEBUG nova.network.neutron [req-87897e1d-ecd1-4510-b9de-886f9ed8c75b req-f4ce1150-6021-408f-8ead-d1dfd55983da 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f1ded131-d9a3-4e93-ad99-53ee2695d5c8] Refreshing network info cache for port 0e93d1de-671e-4e37-8e79-44bed7981254 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 15:15:40 compute-1 nova_compute[225855]: 2026-01-20 15:15:40.263 225859 DEBUG oslo_concurrency.processutils [None req-bbd8c924-2ea1-4107-b500-c19534f19b83 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:15:40 compute-1 nova_compute[225855]: 2026-01-20 15:15:40.293 225859 DEBUG os_brick.utils [None req-b335d546-cb29-41df-a4fc-7fc39a94104e e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.101', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-1.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Jan 20 15:15:40 compute-1 nova_compute[225855]: 2026-01-20 15:15:40.294 231081 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:15:40 compute-1 nova_compute[225855]: 2026-01-20 15:15:40.306 231081 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:15:40 compute-1 nova_compute[225855]: 2026-01-20 15:15:40.306 231081 DEBUG oslo.privsep.daemon [-] privsep: reply[4b4ddb58-46db-4a9a-b09d-13bc6c7c15e1]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:15:40 compute-1 nova_compute[225855]: 2026-01-20 15:15:40.308 231081 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:15:40 compute-1 nova_compute[225855]: 2026-01-20 15:15:40.315 231081 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:15:40 compute-1 nova_compute[225855]: 2026-01-20 15:15:40.315 231081 DEBUG oslo.privsep.daemon [-] privsep: reply[05f87087-9d58-45bc-bcb7-d528b0636940]: (4, ('InitiatorName=iqn.1994-05.com.redhat:1821ea3dc03d', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:15:40 compute-1 nova_compute[225855]: 2026-01-20 15:15:40.317 231081 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:15:40 compute-1 nova_compute[225855]: 2026-01-20 15:15:40.324 231081 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:15:40 compute-1 nova_compute[225855]: 2026-01-20 15:15:40.324 231081 DEBUG oslo.privsep.daemon [-] privsep: reply[d1ea9234-5873-4e73-a1ff-c871fe1322c2]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:15:40 compute-1 nova_compute[225855]: 2026-01-20 15:15:40.325 231081 DEBUG oslo.privsep.daemon [-] privsep: reply[2dbdb80e-a84a-47ae-90f3-88020214fa9e]: (4, '870b1f1c-f19c-477b-b282-ee6eeba50974') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:15:40 compute-1 nova_compute[225855]: 2026-01-20 15:15:40.326 225859 DEBUG oslo_concurrency.processutils [None req-b335d546-cb29-41df-a4fc-7fc39a94104e e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:15:40 compute-1 nova_compute[225855]: 2026-01-20 15:15:40.353 225859 DEBUG oslo_concurrency.processutils [None req-b335d546-cb29-41df-a4fc-7fc39a94104e e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] CMD "nvme version" returned: 0 in 0.027s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:15:40 compute-1 nova_compute[225855]: 2026-01-20 15:15:40.355 225859 DEBUG os_brick.initiator.connectors.lightos [None req-b335d546-cb29-41df-a4fc-7fc39a94104e e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Jan 20 15:15:40 compute-1 nova_compute[225855]: 2026-01-20 15:15:40.355 225859 DEBUG os_brick.initiator.connectors.lightos [None req-b335d546-cb29-41df-a4fc-7fc39a94104e e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Jan 20 15:15:40 compute-1 nova_compute[225855]: 2026-01-20 15:15:40.356 225859 DEBUG os_brick.initiator.connectors.lightos [None req-b335d546-cb29-41df-a4fc-7fc39a94104e e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:5350774e-8b5e-4dba-80a9-92d405981c1d dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Jan 20 15:15:40 compute-1 nova_compute[225855]: 2026-01-20 15:15:40.356 225859 DEBUG os_brick.utils [None req-b335d546-cb29-41df-a4fc-7fc39a94104e e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] <== get_connector_properties: return (61ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.101', 'host': 'compute-1.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:1821ea3dc03d', 'do_local_attach': False, 'nvme_hostid': '5350774e-8b5e-4dba-80a9-92d405981c1d', 'system uuid': '870b1f1c-f19c-477b-b282-ee6eeba50974', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:5350774e-8b5e-4dba-80a9-92d405981c1d', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Jan 20 15:15:40 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:15:40 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1691770121' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:15:40 compute-1 nova_compute[225855]: 2026-01-20 15:15:40.701 225859 DEBUG oslo_concurrency.processutils [None req-bbd8c924-2ea1-4107-b500-c19534f19b83 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:15:40 compute-1 nova_compute[225855]: 2026-01-20 15:15:40.708 225859 DEBUG nova.compute.provider_tree [None req-bbd8c924-2ea1-4107-b500-c19534f19b83 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:15:40 compute-1 nova_compute[225855]: 2026-01-20 15:15:40.730 225859 DEBUG nova.scheduler.client.report [None req-bbd8c924-2ea1-4107-b500-c19534f19b83 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:15:40 compute-1 nova_compute[225855]: 2026-01-20 15:15:40.755 225859 DEBUG oslo_concurrency.lockutils [None req-bbd8c924-2ea1-4107-b500-c19534f19b83 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.733s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:15:40 compute-1 nova_compute[225855]: 2026-01-20 15:15:40.756 225859 DEBUG nova.compute.manager [None req-bbd8c924-2ea1-4107-b500-c19534f19b83 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: b5656c1b-5ac7-4b93-a25d-420e1e294678] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 20 15:15:40 compute-1 nova_compute[225855]: 2026-01-20 15:15:40.808 225859 DEBUG nova.compute.manager [None req-bbd8c924-2ea1-4107-b500-c19534f19b83 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: b5656c1b-5ac7-4b93-a25d-420e1e294678] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 20 15:15:40 compute-1 nova_compute[225855]: 2026-01-20 15:15:40.809 225859 DEBUG nova.network.neutron [None req-bbd8c924-2ea1-4107-b500-c19534f19b83 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: b5656c1b-5ac7-4b93-a25d-420e1e294678] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 20 15:15:40 compute-1 nova_compute[225855]: 2026-01-20 15:15:40.832 225859 INFO nova.virt.libvirt.driver [None req-bbd8c924-2ea1-4107-b500-c19534f19b83 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: b5656c1b-5ac7-4b93-a25d-420e1e294678] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 20 15:15:40 compute-1 nova_compute[225855]: 2026-01-20 15:15:40.854 225859 DEBUG nova.compute.manager [None req-bbd8c924-2ea1-4107-b500-c19534f19b83 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: b5656c1b-5ac7-4b93-a25d-420e1e294678] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 20 15:15:40 compute-1 nova_compute[225855]: 2026-01-20 15:15:40.975 225859 DEBUG nova.compute.manager [None req-bbd8c924-2ea1-4107-b500-c19534f19b83 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: b5656c1b-5ac7-4b93-a25d-420e1e294678] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 20 15:15:40 compute-1 nova_compute[225855]: 2026-01-20 15:15:40.976 225859 DEBUG nova.virt.libvirt.driver [None req-bbd8c924-2ea1-4107-b500-c19534f19b83 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: b5656c1b-5ac7-4b93-a25d-420e1e294678] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 20 15:15:40 compute-1 nova_compute[225855]: 2026-01-20 15:15:40.976 225859 INFO nova.virt.libvirt.driver [None req-bbd8c924-2ea1-4107-b500-c19534f19b83 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: b5656c1b-5ac7-4b93-a25d-420e1e294678] Creating image(s)
Jan 20 15:15:41 compute-1 nova_compute[225855]: 2026-01-20 15:15:40.999 225859 DEBUG nova.storage.rbd_utils [None req-bbd8c924-2ea1-4107-b500-c19534f19b83 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] rbd image b5656c1b-5ac7-4b93-a25d-420e1e294678_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:15:41 compute-1 nova_compute[225855]: 2026-01-20 15:15:41.023 225859 DEBUG nova.storage.rbd_utils [None req-bbd8c924-2ea1-4107-b500-c19534f19b83 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] rbd image b5656c1b-5ac7-4b93-a25d-420e1e294678_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:15:41 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:15:41 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:15:41 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:15:41.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:15:41 compute-1 nova_compute[225855]: 2026-01-20 15:15:41.047 225859 DEBUG nova.storage.rbd_utils [None req-bbd8c924-2ea1-4107-b500-c19534f19b83 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] rbd image b5656c1b-5ac7-4b93-a25d-420e1e294678_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:15:41 compute-1 nova_compute[225855]: 2026-01-20 15:15:41.050 225859 DEBUG oslo_concurrency.processutils [None req-bbd8c924-2ea1-4107-b500-c19534f19b83 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:15:41 compute-1 nova_compute[225855]: 2026-01-20 15:15:41.077 225859 DEBUG nova.policy [None req-bbd8c924-2ea1-4107-b500-c19534f19b83 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '442a7a5cb8ea426a82be9762b262d171', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '1ed5feeeafe7448a8efb47ab975b0ead', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 20 15:15:41 compute-1 nova_compute[225855]: 2026-01-20 15:15:41.113 225859 DEBUG oslo_concurrency.processutils [None req-bbd8c924-2ea1-4107-b500-c19534f19b83 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:15:41 compute-1 nova_compute[225855]: 2026-01-20 15:15:41.114 225859 DEBUG oslo_concurrency.lockutils [None req-bbd8c924-2ea1-4107-b500-c19534f19b83 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Acquiring lock "82d5c1918fd7c974214c7a48c1793a7a82560462" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:15:41 compute-1 nova_compute[225855]: 2026-01-20 15:15:41.114 225859 DEBUG oslo_concurrency.lockutils [None req-bbd8c924-2ea1-4107-b500-c19534f19b83 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:15:41 compute-1 nova_compute[225855]: 2026-01-20 15:15:41.115 225859 DEBUG oslo_concurrency.lockutils [None req-bbd8c924-2ea1-4107-b500-c19534f19b83 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:15:41 compute-1 nova_compute[225855]: 2026-01-20 15:15:41.138 225859 DEBUG nova.storage.rbd_utils [None req-bbd8c924-2ea1-4107-b500-c19534f19b83 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] rbd image b5656c1b-5ac7-4b93-a25d-420e1e294678_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:15:41 compute-1 nova_compute[225855]: 2026-01-20 15:15:41.142 225859 DEBUG oslo_concurrency.processutils [None req-bbd8c924-2ea1-4107-b500-c19534f19b83 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 b5656c1b-5ac7-4b93-a25d-420e1e294678_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:15:41 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:15:41 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:15:41 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:15:41.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:15:41 compute-1 sudo[304256]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:15:41 compute-1 sudo[304256]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:15:41 compute-1 sudo[304256]: pam_unix(sudo:session): session closed for user root
Jan 20 15:15:41 compute-1 nova_compute[225855]: 2026-01-20 15:15:41.410 225859 DEBUG oslo_concurrency.processutils [None req-bbd8c924-2ea1-4107-b500-c19534f19b83 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 b5656c1b-5ac7-4b93-a25d-420e1e294678_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.268s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:15:41 compute-1 sudo[304281]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:15:41 compute-1 sudo[304281]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:15:41 compute-1 sudo[304281]: pam_unix(sudo:session): session closed for user root
Jan 20 15:15:41 compute-1 nova_compute[225855]: 2026-01-20 15:15:41.491 225859 DEBUG nova.storage.rbd_utils [None req-bbd8c924-2ea1-4107-b500-c19534f19b83 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] resizing rbd image b5656c1b-5ac7-4b93-a25d-420e1e294678_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 20 15:15:41 compute-1 nova_compute[225855]: 2026-01-20 15:15:41.608 225859 DEBUG nova.objects.instance [None req-bbd8c924-2ea1-4107-b500-c19534f19b83 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Lazy-loading 'migration_context' on Instance uuid b5656c1b-5ac7-4b93-a25d-420e1e294678 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:15:41 compute-1 nova_compute[225855]: 2026-01-20 15:15:41.633 225859 DEBUG nova.virt.libvirt.driver [None req-bbd8c924-2ea1-4107-b500-c19534f19b83 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: b5656c1b-5ac7-4b93-a25d-420e1e294678] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 20 15:15:41 compute-1 nova_compute[225855]: 2026-01-20 15:15:41.634 225859 DEBUG nova.virt.libvirt.driver [None req-bbd8c924-2ea1-4107-b500-c19534f19b83 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: b5656c1b-5ac7-4b93-a25d-420e1e294678] Ensure instance console log exists: /var/lib/nova/instances/b5656c1b-5ac7-4b93-a25d-420e1e294678/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 20 15:15:41 compute-1 nova_compute[225855]: 2026-01-20 15:15:41.635 225859 DEBUG oslo_concurrency.lockutils [None req-bbd8c924-2ea1-4107-b500-c19534f19b83 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:15:41 compute-1 nova_compute[225855]: 2026-01-20 15:15:41.635 225859 DEBUG oslo_concurrency.lockutils [None req-bbd8c924-2ea1-4107-b500-c19534f19b83 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:15:41 compute-1 nova_compute[225855]: 2026-01-20 15:15:41.635 225859 DEBUG oslo_concurrency.lockutils [None req-bbd8c924-2ea1-4107-b500-c19534f19b83 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:15:41 compute-1 ceph-mon[81775]: pgmap v2789: 321 pgs: 321 active+clean; 599 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 22 KiB/s rd, 23 KiB/s wr, 33 op/s
Jan 20 15:15:41 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/1691770121' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:15:41 compute-1 systemd[1]: Stopping User Manager for UID 42436...
Jan 20 15:15:41 compute-1 systemd[304084]: Activating special unit Exit the Session...
Jan 20 15:15:41 compute-1 systemd[304084]: Stopped target Main User Target.
Jan 20 15:15:41 compute-1 systemd[304084]: Stopped target Basic System.
Jan 20 15:15:41 compute-1 systemd[304084]: Stopped target Paths.
Jan 20 15:15:41 compute-1 systemd[304084]: Stopped target Sockets.
Jan 20 15:15:41 compute-1 systemd[304084]: Stopped target Timers.
Jan 20 15:15:41 compute-1 systemd[304084]: Stopped Mark boot as successful after the user session has run 2 minutes.
Jan 20 15:15:41 compute-1 systemd[304084]: Stopped Daily Cleanup of User's Temporary Directories.
Jan 20 15:15:41 compute-1 systemd[304084]: Closed D-Bus User Message Bus Socket.
Jan 20 15:15:41 compute-1 systemd[304084]: Stopped Create User's Volatile Files and Directories.
Jan 20 15:15:41 compute-1 systemd[304084]: Removed slice User Application Slice.
Jan 20 15:15:41 compute-1 systemd[304084]: Reached target Shutdown.
Jan 20 15:15:41 compute-1 systemd[304084]: Finished Exit the Session.
Jan 20 15:15:41 compute-1 systemd[304084]: Reached target Exit the Session.
Jan 20 15:15:41 compute-1 systemd[1]: user@42436.service: Deactivated successfully.
Jan 20 15:15:41 compute-1 systemd[1]: Stopped User Manager for UID 42436.
Jan 20 15:15:41 compute-1 systemd[1]: Stopping User Runtime Directory /run/user/42436...
Jan 20 15:15:41 compute-1 systemd[1]: run-user-42436.mount: Deactivated successfully.
Jan 20 15:15:41 compute-1 systemd[1]: user-runtime-dir@42436.service: Deactivated successfully.
Jan 20 15:15:41 compute-1 systemd[1]: Stopped User Runtime Directory /run/user/42436.
Jan 20 15:15:41 compute-1 systemd[1]: Removed slice User Slice of UID 42436.
Jan 20 15:15:42 compute-1 nova_compute[225855]: 2026-01-20 15:15:42.058 225859 DEBUG nova.network.neutron [req-87897e1d-ecd1-4510-b9de-886f9ed8c75b req-f4ce1150-6021-408f-8ead-d1dfd55983da 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f1ded131-d9a3-4e93-ad99-53ee2695d5c8] Updated VIF entry in instance network info cache for port 0e93d1de-671e-4e37-8e79-44bed7981254. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 15:15:42 compute-1 nova_compute[225855]: 2026-01-20 15:15:42.059 225859 DEBUG nova.network.neutron [req-87897e1d-ecd1-4510-b9de-886f9ed8c75b req-f4ce1150-6021-408f-8ead-d1dfd55983da 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f1ded131-d9a3-4e93-ad99-53ee2695d5c8] Updating instance_info_cache with network_info: [{"id": "0e93d1de-671e-4e37-8e79-44bed7981254", "address": "fa:16:3e:99:5e:ed", "network": {"id": "c1f4a971-0bd7-41ce-bdf6-5acb2b1b4bab", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1423306001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fff727019f86407498e83d7948d54962", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0e93d1de-67", "ovs_interfaceid": "0e93d1de-671e-4e37-8e79-44bed7981254", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:15:42 compute-1 nova_compute[225855]: 2026-01-20 15:15:42.074 225859 DEBUG oslo_concurrency.lockutils [req-87897e1d-ecd1-4510-b9de-886f9ed8c75b req-f4ce1150-6021-408f-8ead-d1dfd55983da 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-f1ded131-d9a3-4e93-ad99-53ee2695d5c8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:15:42 compute-1 nova_compute[225855]: 2026-01-20 15:15:42.273 225859 DEBUG nova.virt.libvirt.driver [None req-b335d546-cb29-41df-a4fc-7fc39a94104e e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] [instance: f1ded131-d9a3-4e93-ad99-53ee2695d5c8] Starting finish_migration finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11698
Jan 20 15:15:42 compute-1 nova_compute[225855]: 2026-01-20 15:15:42.274 225859 DEBUG nova.virt.libvirt.driver [None req-b335d546-cb29-41df-a4fc-7fc39a94104e e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] [instance: f1ded131-d9a3-4e93-ad99-53ee2695d5c8] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719
Jan 20 15:15:42 compute-1 nova_compute[225855]: 2026-01-20 15:15:42.275 225859 INFO nova.virt.libvirt.driver [None req-b335d546-cb29-41df-a4fc-7fc39a94104e e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] [instance: f1ded131-d9a3-4e93-ad99-53ee2695d5c8] Creating image(s)
Jan 20 15:15:42 compute-1 nova_compute[225855]: 2026-01-20 15:15:42.303 225859 DEBUG nova.storage.rbd_utils [None req-b335d546-cb29-41df-a4fc-7fc39a94104e e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] creating snapshot(nova-resize) on rbd image(f1ded131-d9a3-4e93-ad99-53ee2695d5c8_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 20 15:15:42 compute-1 nova_compute[225855]: 2026-01-20 15:15:42.576 225859 DEBUG nova.network.neutron [None req-bbd8c924-2ea1-4107-b500-c19534f19b83 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: b5656c1b-5ac7-4b93-a25d-420e1e294678] Successfully created port: ebbe6083-de9d-43ca-9ab2-cf306ea0be4d _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 20 15:15:42 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e403 e403: 3 total, 3 up, 3 in
Jan 20 15:15:42 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/810396698' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:15:42 compute-1 nova_compute[225855]: 2026-01-20 15:15:42.699 225859 DEBUG nova.objects.instance [None req-b335d546-cb29-41df-a4fc-7fc39a94104e e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Lazy-loading 'trusted_certs' on Instance uuid f1ded131-d9a3-4e93-ad99-53ee2695d5c8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:15:42 compute-1 nova_compute[225855]: 2026-01-20 15:15:42.809 225859 DEBUG nova.virt.libvirt.driver [None req-b335d546-cb29-41df-a4fc-7fc39a94104e e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] [instance: f1ded131-d9a3-4e93-ad99-53ee2695d5c8] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Jan 20 15:15:42 compute-1 nova_compute[225855]: 2026-01-20 15:15:42.809 225859 DEBUG nova.virt.libvirt.driver [None req-b335d546-cb29-41df-a4fc-7fc39a94104e e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] [instance: f1ded131-d9a3-4e93-ad99-53ee2695d5c8] Ensure instance console log exists: /var/lib/nova/instances/f1ded131-d9a3-4e93-ad99-53ee2695d5c8/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 20 15:15:42 compute-1 nova_compute[225855]: 2026-01-20 15:15:42.810 225859 DEBUG oslo_concurrency.lockutils [None req-b335d546-cb29-41df-a4fc-7fc39a94104e e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:15:42 compute-1 nova_compute[225855]: 2026-01-20 15:15:42.810 225859 DEBUG oslo_concurrency.lockutils [None req-b335d546-cb29-41df-a4fc-7fc39a94104e e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:15:42 compute-1 nova_compute[225855]: 2026-01-20 15:15:42.810 225859 DEBUG oslo_concurrency.lockutils [None req-b335d546-cb29-41df-a4fc-7fc39a94104e e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:15:42 compute-1 nova_compute[225855]: 2026-01-20 15:15:42.813 225859 DEBUG nova.virt.libvirt.driver [None req-b335d546-cb29-41df-a4fc-7fc39a94104e e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] [instance: f1ded131-d9a3-4e93-ad99-53ee2695d5c8] Start _get_guest_xml network_info=[{"id": "0e93d1de-671e-4e37-8e79-44bed7981254", "address": "fa:16:3e:99:5e:ed", "network": {"id": "c1f4a971-0bd7-41ce-bdf6-5acb2b1b4bab", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1423306001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-AttachVolumeMultiAttachTest-1423306001-network", "vif_mac": "fa:16:3e:99:5e:ed"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fff727019f86407498e83d7948d54962", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0e93d1de-67", "ovs_interfaceid": "0e93d1de-671e-4e37-8e79-44bed7981254", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vdb': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'device_type': 'disk', 'encryption_options': None, 'size': 0, 'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'guest_format': None, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'image_id': 'a32b3e07-16d8-46fd-9a7b-c242c432fcf9'}], 'ephemerals': [], 'block_device_mapping': [{'delete_on_termination': False, 'device_type': 'disk', 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-933c5c7a-f496-4bcc-b304-68156c235fe5', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '933c5c7a-f496-4bcc-b304-68156c235fe5', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'attaching', 'instance': 'f1ded131-d9a3-4e93-ad99-53ee2695d5c8', 'attached_at': '2026-01-20T15:15:41.000000', 'detached_at': '', 'volume_id': '933c5c7a-f496-4bcc-b304-68156c235fe5', 'multiattach': True, 'serial': '933c5c7a-f496-4bcc-b304-68156c235fe5'}, 'guest_format': None, 'boot_index': None, 'mount_device': '/dev/vdb', 'attachment_id': '2d78d259-3019-4a60-a542-4278ca487610', 'disk_bus': 'virtio', 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 20 15:15:42 compute-1 nova_compute[225855]: 2026-01-20 15:15:42.817 225859 WARNING nova.virt.libvirt.driver [None req-b335d546-cb29-41df-a4fc-7fc39a94104e e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 15:15:42 compute-1 nova_compute[225855]: 2026-01-20 15:15:42.820 225859 DEBUG nova.virt.libvirt.host [None req-b335d546-cb29-41df-a4fc-7fc39a94104e e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 20 15:15:42 compute-1 nova_compute[225855]: 2026-01-20 15:15:42.821 225859 DEBUG nova.virt.libvirt.host [None req-b335d546-cb29-41df-a4fc-7fc39a94104e e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 20 15:15:42 compute-1 nova_compute[225855]: 2026-01-20 15:15:42.827 225859 DEBUG nova.virt.libvirt.host [None req-b335d546-cb29-41df-a4fc-7fc39a94104e e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 20 15:15:42 compute-1 nova_compute[225855]: 2026-01-20 15:15:42.829 225859 DEBUG nova.virt.libvirt.host [None req-b335d546-cb29-41df-a4fc-7fc39a94104e e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 20 15:15:42 compute-1 nova_compute[225855]: 2026-01-20 15:15:42.830 225859 DEBUG nova.virt.libvirt.driver [None req-b335d546-cb29-41df-a4fc-7fc39a94104e e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 20 15:15:42 compute-1 nova_compute[225855]: 2026-01-20 15:15:42.830 225859 DEBUG nova.virt.hardware [None req-b335d546-cb29-41df-a4fc-7fc39a94104e e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-20T14:21:55Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='30c26a27-d918-46d8-a512-4ef3b4ce5955',id=2,is_public=True,memory_mb=192,name='m1.micro',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 20 15:15:42 compute-1 nova_compute[225855]: 2026-01-20 15:15:42.831 225859 DEBUG nova.virt.hardware [None req-b335d546-cb29-41df-a4fc-7fc39a94104e e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 20 15:15:42 compute-1 nova_compute[225855]: 2026-01-20 15:15:42.831 225859 DEBUG nova.virt.hardware [None req-b335d546-cb29-41df-a4fc-7fc39a94104e e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 20 15:15:42 compute-1 nova_compute[225855]: 2026-01-20 15:15:42.831 225859 DEBUG nova.virt.hardware [None req-b335d546-cb29-41df-a4fc-7fc39a94104e e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 20 15:15:42 compute-1 nova_compute[225855]: 2026-01-20 15:15:42.831 225859 DEBUG nova.virt.hardware [None req-b335d546-cb29-41df-a4fc-7fc39a94104e e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 20 15:15:42 compute-1 nova_compute[225855]: 2026-01-20 15:15:42.832 225859 DEBUG nova.virt.hardware [None req-b335d546-cb29-41df-a4fc-7fc39a94104e e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 20 15:15:42 compute-1 nova_compute[225855]: 2026-01-20 15:15:42.832 225859 DEBUG nova.virt.hardware [None req-b335d546-cb29-41df-a4fc-7fc39a94104e e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 20 15:15:42 compute-1 nova_compute[225855]: 2026-01-20 15:15:42.832 225859 DEBUG nova.virt.hardware [None req-b335d546-cb29-41df-a4fc-7fc39a94104e e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 20 15:15:42 compute-1 nova_compute[225855]: 2026-01-20 15:15:42.832 225859 DEBUG nova.virt.hardware [None req-b335d546-cb29-41df-a4fc-7fc39a94104e e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 20 15:15:42 compute-1 nova_compute[225855]: 2026-01-20 15:15:42.833 225859 DEBUG nova.virt.hardware [None req-b335d546-cb29-41df-a4fc-7fc39a94104e e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 20 15:15:42 compute-1 nova_compute[225855]: 2026-01-20 15:15:42.833 225859 DEBUG nova.virt.hardware [None req-b335d546-cb29-41df-a4fc-7fc39a94104e e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 20 15:15:42 compute-1 nova_compute[225855]: 2026-01-20 15:15:42.833 225859 DEBUG nova.objects.instance [None req-b335d546-cb29-41df-a4fc-7fc39a94104e e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Lazy-loading 'vcpu_model' on Instance uuid f1ded131-d9a3-4e93-ad99-53ee2695d5c8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:15:42 compute-1 nova_compute[225855]: 2026-01-20 15:15:42.864 225859 DEBUG oslo_concurrency.processutils [None req-b335d546-cb29-41df-a4fc-7fc39a94104e e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:15:42 compute-1 nova_compute[225855]: 2026-01-20 15:15:42.925 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:15:43 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:15:43 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:15:43 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:15:43.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:15:43 compute-1 nova_compute[225855]: 2026-01-20 15:15:43.183 225859 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1768922128.182088, 185fbaf7-4372-4e7c-b053-df9c4022514f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:15:43 compute-1 nova_compute[225855]: 2026-01-20 15:15:43.183 225859 INFO nova.compute.manager [-] [instance: 185fbaf7-4372-4e7c-b053-df9c4022514f] VM Stopped (Lifecycle Event)
Jan 20 15:15:43 compute-1 nova_compute[225855]: 2026-01-20 15:15:43.226 225859 DEBUG nova.compute.manager [None req-d3b5c919-5fc0-4576-85c1-b1c081b9ec73 - - - - - -] [instance: 185fbaf7-4372-4e7c-b053-df9c4022514f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:15:43 compute-1 nova_compute[225855]: 2026-01-20 15:15:43.229 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:15:43 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 15:15:43 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2943588033' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:15:43 compute-1 nova_compute[225855]: 2026-01-20 15:15:43.296 225859 DEBUG oslo_concurrency.processutils [None req-b335d546-cb29-41df-a4fc-7fc39a94104e e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:15:43 compute-1 nova_compute[225855]: 2026-01-20 15:15:43.324 225859 DEBUG oslo_concurrency.processutils [None req-b335d546-cb29-41df-a4fc-7fc39a94104e e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:15:43 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:15:43 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:15:43 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:15:43.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:15:43 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e403 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:15:43 compute-1 ceph-mon[81775]: pgmap v2790: 321 pgs: 321 active+clean; 599 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 30 KiB/s rd, 23 KiB/s wr, 43 op/s
Jan 20 15:15:43 compute-1 ceph-mon[81775]: osdmap e403: 3 total, 3 up, 3 in
Jan 20 15:15:43 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/856547777' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:15:43 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/2943588033' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:15:43 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 15:15:43 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1384480633' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:15:43 compute-1 nova_compute[225855]: 2026-01-20 15:15:43.736 225859 DEBUG oslo_concurrency.processutils [None req-b335d546-cb29-41df-a4fc-7fc39a94104e e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.412s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:15:43 compute-1 nova_compute[225855]: 2026-01-20 15:15:43.780 225859 DEBUG nova.virt.libvirt.vif [None req-b335d546-cb29-41df-a4fc-7fc39a94104e e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T15:14:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='multiattach-server-0',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-1.ctlplane.example.com',hostname='multiattach-server-0',id=187,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBL5L2o6o5dLcQyaIfhCZ5CKxQlecqNGmP68oHIQEsVoKIC2qfrMKjObT9GdMU8oznX9LVUwIWCShhlEJu9ZqPiutEL2afEJ1hQQamjERNcx9wWS2NfOgykA4yugQphfOtA==',key_name='tempest-keypair-1568469072',keypairs=<?>,launch_index=0,launched_at=2026-01-20T15:14:38Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='fff727019f86407498e83d7948d54962',ramdisk_id='',reservation_id='r-h927541v',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio
',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-AttachVolumeMultiAttachTest-418194625',owner_user_name='tempest-AttachVolumeMultiAttachTest-418194625-project-member'},tags=<?>,task_state='resize_finish',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T15:15:35Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='e9cc4ce3e069479ba9c789b378a68a1d',uuid=f1ded131-d9a3-4e93-ad99-53ee2695d5c8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0e93d1de-671e-4e37-8e79-44bed7981254", "address": "fa:16:3e:99:5e:ed", "network": {"id": "c1f4a971-0bd7-41ce-bdf6-5acb2b1b4bab", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1423306001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-AttachVolumeMultiAttachTest-1423306001-network", "vif_mac": "fa:16:3e:99:5e:ed"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fff727019f86407498e83d7948d54962", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0e93d1de-67", "ovs_interfaceid": "0e93d1de-671e-4e37-8e79-44bed7981254", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config 
/usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 20 15:15:43 compute-1 nova_compute[225855]: 2026-01-20 15:15:43.780 225859 DEBUG nova.network.os_vif_util [None req-b335d546-cb29-41df-a4fc-7fc39a94104e e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Converting VIF {"id": "0e93d1de-671e-4e37-8e79-44bed7981254", "address": "fa:16:3e:99:5e:ed", "network": {"id": "c1f4a971-0bd7-41ce-bdf6-5acb2b1b4bab", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1423306001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-AttachVolumeMultiAttachTest-1423306001-network", "vif_mac": "fa:16:3e:99:5e:ed"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fff727019f86407498e83d7948d54962", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0e93d1de-67", "ovs_interfaceid": "0e93d1de-671e-4e37-8e79-44bed7981254", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 15:15:43 compute-1 nova_compute[225855]: 2026-01-20 15:15:43.781 225859 DEBUG nova.network.os_vif_util [None req-b335d546-cb29-41df-a4fc-7fc39a94104e e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:99:5e:ed,bridge_name='br-int',has_traffic_filtering=True,id=0e93d1de-671e-4e37-8e79-44bed7981254,network=Network(c1f4a971-0bd7-41ce-bdf6-5acb2b1b4bab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0e93d1de-67') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 15:15:43 compute-1 nova_compute[225855]: 2026-01-20 15:15:43.783 225859 DEBUG nova.virt.libvirt.driver [None req-b335d546-cb29-41df-a4fc-7fc39a94104e e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] [instance: f1ded131-d9a3-4e93-ad99-53ee2695d5c8] End _get_guest_xml xml=<domain type="kvm">
Jan 20 15:15:43 compute-1 nova_compute[225855]:   <uuid>f1ded131-d9a3-4e93-ad99-53ee2695d5c8</uuid>
Jan 20 15:15:43 compute-1 nova_compute[225855]:   <name>instance-000000bb</name>
Jan 20 15:15:43 compute-1 nova_compute[225855]:   <memory>196608</memory>
Jan 20 15:15:43 compute-1 nova_compute[225855]:   <vcpu>1</vcpu>
Jan 20 15:15:43 compute-1 nova_compute[225855]:   <metadata>
Jan 20 15:15:43 compute-1 nova_compute[225855]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 15:15:43 compute-1 nova_compute[225855]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 15:15:43 compute-1 nova_compute[225855]:       <nova:name>multiattach-server-0</nova:name>
Jan 20 15:15:43 compute-1 nova_compute[225855]:       <nova:creationTime>2026-01-20 15:15:42</nova:creationTime>
Jan 20 15:15:43 compute-1 nova_compute[225855]:       <nova:flavor name="m1.micro">
Jan 20 15:15:43 compute-1 nova_compute[225855]:         <nova:memory>192</nova:memory>
Jan 20 15:15:43 compute-1 nova_compute[225855]:         <nova:disk>1</nova:disk>
Jan 20 15:15:43 compute-1 nova_compute[225855]:         <nova:swap>0</nova:swap>
Jan 20 15:15:43 compute-1 nova_compute[225855]:         <nova:ephemeral>0</nova:ephemeral>
Jan 20 15:15:43 compute-1 nova_compute[225855]:         <nova:vcpus>1</nova:vcpus>
Jan 20 15:15:43 compute-1 nova_compute[225855]:       </nova:flavor>
Jan 20 15:15:43 compute-1 nova_compute[225855]:       <nova:owner>
Jan 20 15:15:43 compute-1 nova_compute[225855]:         <nova:user uuid="e9cc4ce3e069479ba9c789b378a68a1d">tempest-AttachVolumeMultiAttachTest-418194625-project-member</nova:user>
Jan 20 15:15:43 compute-1 nova_compute[225855]:         <nova:project uuid="fff727019f86407498e83d7948d54962">tempest-AttachVolumeMultiAttachTest-418194625</nova:project>
Jan 20 15:15:43 compute-1 nova_compute[225855]:       </nova:owner>
Jan 20 15:15:43 compute-1 nova_compute[225855]:       <nova:root type="image" uuid="a32b3e07-16d8-46fd-9a7b-c242c432fcf9"/>
Jan 20 15:15:43 compute-1 nova_compute[225855]:       <nova:ports>
Jan 20 15:15:43 compute-1 nova_compute[225855]:         <nova:port uuid="0e93d1de-671e-4e37-8e79-44bed7981254">
Jan 20 15:15:43 compute-1 nova_compute[225855]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Jan 20 15:15:43 compute-1 nova_compute[225855]:         </nova:port>
Jan 20 15:15:43 compute-1 nova_compute[225855]:       </nova:ports>
Jan 20 15:15:43 compute-1 nova_compute[225855]:     </nova:instance>
Jan 20 15:15:43 compute-1 nova_compute[225855]:   </metadata>
Jan 20 15:15:43 compute-1 nova_compute[225855]:   <sysinfo type="smbios">
Jan 20 15:15:43 compute-1 nova_compute[225855]:     <system>
Jan 20 15:15:43 compute-1 nova_compute[225855]:       <entry name="manufacturer">RDO</entry>
Jan 20 15:15:43 compute-1 nova_compute[225855]:       <entry name="product">OpenStack Compute</entry>
Jan 20 15:15:43 compute-1 nova_compute[225855]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 15:15:43 compute-1 nova_compute[225855]:       <entry name="serial">f1ded131-d9a3-4e93-ad99-53ee2695d5c8</entry>
Jan 20 15:15:43 compute-1 nova_compute[225855]:       <entry name="uuid">f1ded131-d9a3-4e93-ad99-53ee2695d5c8</entry>
Jan 20 15:15:43 compute-1 nova_compute[225855]:       <entry name="family">Virtual Machine</entry>
Jan 20 15:15:43 compute-1 nova_compute[225855]:     </system>
Jan 20 15:15:43 compute-1 nova_compute[225855]:   </sysinfo>
Jan 20 15:15:43 compute-1 nova_compute[225855]:   <os>
Jan 20 15:15:43 compute-1 nova_compute[225855]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 20 15:15:43 compute-1 nova_compute[225855]:     <boot dev="hd"/>
Jan 20 15:15:43 compute-1 nova_compute[225855]:     <smbios mode="sysinfo"/>
Jan 20 15:15:43 compute-1 nova_compute[225855]:   </os>
Jan 20 15:15:43 compute-1 nova_compute[225855]:   <features>
Jan 20 15:15:43 compute-1 nova_compute[225855]:     <acpi/>
Jan 20 15:15:43 compute-1 nova_compute[225855]:     <apic/>
Jan 20 15:15:43 compute-1 nova_compute[225855]:     <vmcoreinfo/>
Jan 20 15:15:43 compute-1 nova_compute[225855]:   </features>
Jan 20 15:15:43 compute-1 nova_compute[225855]:   <clock offset="utc">
Jan 20 15:15:43 compute-1 nova_compute[225855]:     <timer name="pit" tickpolicy="delay"/>
Jan 20 15:15:43 compute-1 nova_compute[225855]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 20 15:15:43 compute-1 nova_compute[225855]:     <timer name="hpet" present="no"/>
Jan 20 15:15:43 compute-1 nova_compute[225855]:   </clock>
Jan 20 15:15:43 compute-1 nova_compute[225855]:   <cpu mode="custom" match="exact">
Jan 20 15:15:43 compute-1 nova_compute[225855]:     <model>Nehalem</model>
Jan 20 15:15:43 compute-1 nova_compute[225855]:     <topology sockets="1" cores="1" threads="1"/>
Jan 20 15:15:43 compute-1 nova_compute[225855]:   </cpu>
Jan 20 15:15:43 compute-1 nova_compute[225855]:   <devices>
Jan 20 15:15:43 compute-1 nova_compute[225855]:     <disk type="network" device="disk">
Jan 20 15:15:43 compute-1 nova_compute[225855]:       <driver type="raw" cache="none"/>
Jan 20 15:15:43 compute-1 nova_compute[225855]:       <source protocol="rbd" name="vms/f1ded131-d9a3-4e93-ad99-53ee2695d5c8_disk">
Jan 20 15:15:43 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 15:15:43 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 15:15:43 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 15:15:43 compute-1 nova_compute[225855]:       </source>
Jan 20 15:15:43 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 15:15:43 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 15:15:43 compute-1 nova_compute[225855]:       </auth>
Jan 20 15:15:43 compute-1 nova_compute[225855]:       <target dev="vda" bus="virtio"/>
Jan 20 15:15:43 compute-1 nova_compute[225855]:     </disk>
Jan 20 15:15:43 compute-1 nova_compute[225855]:     <disk type="network" device="cdrom">
Jan 20 15:15:43 compute-1 nova_compute[225855]:       <driver type="raw" cache="none"/>
Jan 20 15:15:43 compute-1 nova_compute[225855]:       <source protocol="rbd" name="vms/f1ded131-d9a3-4e93-ad99-53ee2695d5c8_disk.config">
Jan 20 15:15:43 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 15:15:43 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 15:15:43 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 15:15:43 compute-1 nova_compute[225855]:       </source>
Jan 20 15:15:43 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 15:15:43 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 15:15:43 compute-1 nova_compute[225855]:       </auth>
Jan 20 15:15:43 compute-1 nova_compute[225855]:       <target dev="sda" bus="sata"/>
Jan 20 15:15:43 compute-1 nova_compute[225855]:     </disk>
Jan 20 15:15:43 compute-1 nova_compute[225855]:     <disk type="network" device="disk">
Jan 20 15:15:43 compute-1 nova_compute[225855]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 20 15:15:43 compute-1 nova_compute[225855]:       <source protocol="rbd" name="volumes/volume-933c5c7a-f496-4bcc-b304-68156c235fe5">
Jan 20 15:15:43 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 15:15:43 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 15:15:43 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 15:15:43 compute-1 nova_compute[225855]:       </source>
Jan 20 15:15:43 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 15:15:43 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 15:15:43 compute-1 nova_compute[225855]:       </auth>
Jan 20 15:15:43 compute-1 nova_compute[225855]:       <target dev="vdb" bus="virtio"/>
Jan 20 15:15:43 compute-1 nova_compute[225855]:       <serial>933c5c7a-f496-4bcc-b304-68156c235fe5</serial>
Jan 20 15:15:43 compute-1 nova_compute[225855]:       <shareable/>
Jan 20 15:15:43 compute-1 nova_compute[225855]:     </disk>
Jan 20 15:15:43 compute-1 nova_compute[225855]:     <interface type="ethernet">
Jan 20 15:15:43 compute-1 nova_compute[225855]:       <mac address="fa:16:3e:99:5e:ed"/>
Jan 20 15:15:43 compute-1 nova_compute[225855]:       <model type="virtio"/>
Jan 20 15:15:43 compute-1 nova_compute[225855]:       <driver name="vhost" rx_queue_size="512"/>
Jan 20 15:15:43 compute-1 nova_compute[225855]:       <mtu size="1442"/>
Jan 20 15:15:43 compute-1 nova_compute[225855]:       <target dev="tap0e93d1de-67"/>
Jan 20 15:15:43 compute-1 nova_compute[225855]:     </interface>
Jan 20 15:15:43 compute-1 nova_compute[225855]:     <serial type="pty">
Jan 20 15:15:43 compute-1 nova_compute[225855]:       <log file="/var/lib/nova/instances/f1ded131-d9a3-4e93-ad99-53ee2695d5c8/console.log" append="off"/>
Jan 20 15:15:43 compute-1 nova_compute[225855]:     </serial>
Jan 20 15:15:43 compute-1 nova_compute[225855]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 20 15:15:43 compute-1 nova_compute[225855]:     <video>
Jan 20 15:15:43 compute-1 nova_compute[225855]:       <model type="virtio"/>
Jan 20 15:15:43 compute-1 nova_compute[225855]:     </video>
Jan 20 15:15:43 compute-1 nova_compute[225855]:     <input type="tablet" bus="usb"/>
Jan 20 15:15:43 compute-1 nova_compute[225855]:     <rng model="virtio">
Jan 20 15:15:43 compute-1 nova_compute[225855]:       <backend model="random">/dev/urandom</backend>
Jan 20 15:15:43 compute-1 nova_compute[225855]:     </rng>
Jan 20 15:15:43 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root"/>
Jan 20 15:15:43 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:15:43 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:15:43 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:15:43 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:15:43 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:15:43 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:15:43 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:15:43 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:15:43 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:15:43 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:15:43 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:15:43 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:15:43 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:15:43 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:15:43 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:15:43 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:15:43 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:15:43 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:15:43 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:15:43 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:15:43 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:15:43 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:15:43 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:15:43 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:15:43 compute-1 nova_compute[225855]:     <controller type="usb" index="0"/>
Jan 20 15:15:43 compute-1 nova_compute[225855]:     <memballoon model="virtio">
Jan 20 15:15:43 compute-1 nova_compute[225855]:       <stats period="10"/>
Jan 20 15:15:43 compute-1 nova_compute[225855]:     </memballoon>
Jan 20 15:15:43 compute-1 nova_compute[225855]:   </devices>
Jan 20 15:15:43 compute-1 nova_compute[225855]: </domain>
Jan 20 15:15:43 compute-1 nova_compute[225855]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 20 15:15:43 compute-1 nova_compute[225855]: 2026-01-20 15:15:43.785 225859 DEBUG nova.virt.libvirt.vif [None req-b335d546-cb29-41df-a4fc-7fc39a94104e e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T15:14:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='multiattach-server-0',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-1.ctlplane.example.com',hostname='multiattach-server-0',id=187,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBL5L2o6o5dLcQyaIfhCZ5CKxQlecqNGmP68oHIQEsVoKIC2qfrMKjObT9GdMU8oznX9LVUwIWCShhlEJu9ZqPiutEL2afEJ1hQQamjERNcx9wWS2NfOgykA4yugQphfOtA==',key_name='tempest-keypair-1568469072',keypairs=<?>,launch_index=0,launched_at=2026-01-20T15:14:38Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='fff727019f86407498e83d7948d54962',ramdisk_id='',reservation_id='r-h927541v',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-AttachVolumeMultiAttachTest-418194625',owner_user_name='tempest-AttachVolumeMultiAttachTest-418194625-project-member'},tags=<?>,task_state='resize_finish',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T15:15:35Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='e9cc4ce3e069479ba9c789b378a68a1d',uuid=f1ded131-d9a3-4e93-ad99-53ee2695d5c8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0e93d1de-671e-4e37-8e79-44bed7981254", "address": "fa:16:3e:99:5e:ed", "network": {"id": "c1f4a971-0bd7-41ce-bdf6-5acb2b1b4bab", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1423306001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-AttachVolumeMultiAttachTest-1423306001-network", "vif_mac": "fa:16:3e:99:5e:ed"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fff727019f86407498e83d7948d54962", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0e93d1de-67", "ovs_interfaceid": "0e93d1de-671e-4e37-8e79-44bed7981254", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 20 15:15:43 compute-1 nova_compute[225855]: 2026-01-20 15:15:43.785 225859 DEBUG nova.network.os_vif_util [None req-b335d546-cb29-41df-a4fc-7fc39a94104e e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Converting VIF {"id": "0e93d1de-671e-4e37-8e79-44bed7981254", "address": "fa:16:3e:99:5e:ed", "network": {"id": "c1f4a971-0bd7-41ce-bdf6-5acb2b1b4bab", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1423306001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-AttachVolumeMultiAttachTest-1423306001-network", "vif_mac": "fa:16:3e:99:5e:ed"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fff727019f86407498e83d7948d54962", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0e93d1de-67", "ovs_interfaceid": "0e93d1de-671e-4e37-8e79-44bed7981254", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 15:15:43 compute-1 nova_compute[225855]: 2026-01-20 15:15:43.785 225859 DEBUG nova.network.os_vif_util [None req-b335d546-cb29-41df-a4fc-7fc39a94104e e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:99:5e:ed,bridge_name='br-int',has_traffic_filtering=True,id=0e93d1de-671e-4e37-8e79-44bed7981254,network=Network(c1f4a971-0bd7-41ce-bdf6-5acb2b1b4bab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0e93d1de-67') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 15:15:43 compute-1 nova_compute[225855]: 2026-01-20 15:15:43.786 225859 DEBUG os_vif [None req-b335d546-cb29-41df-a4fc-7fc39a94104e e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:99:5e:ed,bridge_name='br-int',has_traffic_filtering=True,id=0e93d1de-671e-4e37-8e79-44bed7981254,network=Network(c1f4a971-0bd7-41ce-bdf6-5acb2b1b4bab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0e93d1de-67') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 20 15:15:43 compute-1 nova_compute[225855]: 2026-01-20 15:15:43.786 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:15:43 compute-1 nova_compute[225855]: 2026-01-20 15:15:43.787 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:15:43 compute-1 nova_compute[225855]: 2026-01-20 15:15:43.787 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 15:15:43 compute-1 nova_compute[225855]: 2026-01-20 15:15:43.790 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:15:43 compute-1 nova_compute[225855]: 2026-01-20 15:15:43.790 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0e93d1de-67, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:15:43 compute-1 nova_compute[225855]: 2026-01-20 15:15:43.790 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap0e93d1de-67, col_values=(('external_ids', {'iface-id': '0e93d1de-671e-4e37-8e79-44bed7981254', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:99:5e:ed', 'vm-uuid': 'f1ded131-d9a3-4e93-ad99-53ee2695d5c8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:15:43 compute-1 NetworkManager[49104]: <info>  [1768922143.7930] manager: (tap0e93d1de-67): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/346)
Jan 20 15:15:43 compute-1 nova_compute[225855]: 2026-01-20 15:15:43.792 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:15:43 compute-1 nova_compute[225855]: 2026-01-20 15:15:43.794 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 15:15:43 compute-1 nova_compute[225855]: 2026-01-20 15:15:43.798 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:15:43 compute-1 nova_compute[225855]: 2026-01-20 15:15:43.799 225859 INFO os_vif [None req-b335d546-cb29-41df-a4fc-7fc39a94104e e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:99:5e:ed,bridge_name='br-int',has_traffic_filtering=True,id=0e93d1de-671e-4e37-8e79-44bed7981254,network=Network(c1f4a971-0bd7-41ce-bdf6-5acb2b1b4bab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0e93d1de-67')
Jan 20 15:15:43 compute-1 nova_compute[225855]: 2026-01-20 15:15:43.885 225859 DEBUG nova.virt.libvirt.driver [None req-b335d546-cb29-41df-a4fc-7fc39a94104e e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 15:15:43 compute-1 nova_compute[225855]: 2026-01-20 15:15:43.885 225859 DEBUG nova.virt.libvirt.driver [None req-b335d546-cb29-41df-a4fc-7fc39a94104e e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 15:15:43 compute-1 nova_compute[225855]: 2026-01-20 15:15:43.885 225859 DEBUG nova.virt.libvirt.driver [None req-b335d546-cb29-41df-a4fc-7fc39a94104e e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 15:15:43 compute-1 nova_compute[225855]: 2026-01-20 15:15:43.886 225859 DEBUG nova.virt.libvirt.driver [None req-b335d546-cb29-41df-a4fc-7fc39a94104e e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] No VIF found with MAC fa:16:3e:99:5e:ed, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 20 15:15:43 compute-1 nova_compute[225855]: 2026-01-20 15:15:43.886 225859 INFO nova.virt.libvirt.driver [None req-b335d546-cb29-41df-a4fc-7fc39a94104e e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] [instance: f1ded131-d9a3-4e93-ad99-53ee2695d5c8] Using config drive
Jan 20 15:15:43 compute-1 podman[304518]: 2026-01-20 15:15:43.910843835 +0000 UTC m=+0.077927732 container health_status 72be95c12e041eb2cd1a16ad70dfa1391d9f03a5f3180a614ead9459dcfcd46f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=ovn_controller)
Jan 20 15:15:43 compute-1 kernel: tap0e93d1de-67: entered promiscuous mode
Jan 20 15:15:43 compute-1 NetworkManager[49104]: <info>  [1768922143.9633] manager: (tap0e93d1de-67): new Tun device (/org/freedesktop/NetworkManager/Devices/347)
Jan 20 15:15:43 compute-1 ovn_controller[130490]: 2026-01-20T15:15:43Z|00832|binding|INFO|Claiming lport 0e93d1de-671e-4e37-8e79-44bed7981254 for this chassis.
Jan 20 15:15:43 compute-1 ovn_controller[130490]: 2026-01-20T15:15:43Z|00833|binding|INFO|0e93d1de-671e-4e37-8e79-44bed7981254: Claiming fa:16:3e:99:5e:ed 10.100.0.3
Jan 20 15:15:43 compute-1 nova_compute[225855]: 2026-01-20 15:15:43.963 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:15:43 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:15:43.969 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:99:5e:ed 10.100.0.3'], port_security=['fa:16:3e:99:5e:ed 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'f1ded131-d9a3-4e93-ad99-53ee2695d5c8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c1f4a971-0bd7-41ce-bdf6-5acb2b1b4bab', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fff727019f86407498e83d7948d54962', 'neutron:revision_number': '6', 'neutron:security_group_ids': '5ace6a2f-56c6-4679-bb81-70ccb27ab312', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=87d69a20-7690-494a-ac16-7c600840561a, chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], logical_port=0e93d1de-671e-4e37-8e79-44bed7981254) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:15:43 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:15:43.971 140354 INFO neutron.agent.ovn.metadata.agent [-] Port 0e93d1de-671e-4e37-8e79-44bed7981254 in datapath c1f4a971-0bd7-41ce-bdf6-5acb2b1b4bab bound to our chassis
Jan 20 15:15:43 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:15:43.972 140354 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c1f4a971-0bd7-41ce-bdf6-5acb2b1b4bab
Jan 20 15:15:43 compute-1 ovn_controller[130490]: 2026-01-20T15:15:43Z|00834|binding|INFO|Setting lport 0e93d1de-671e-4e37-8e79-44bed7981254 ovn-installed in OVS
Jan 20 15:15:43 compute-1 ovn_controller[130490]: 2026-01-20T15:15:43Z|00835|binding|INFO|Setting lport 0e93d1de-671e-4e37-8e79-44bed7981254 up in Southbound
Jan 20 15:15:43 compute-1 nova_compute[225855]: 2026-01-20 15:15:43.982 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:15:43 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:15:43.984 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[cee19937-a8d1-4125-8148-5570c394880b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:15:43 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:15:43.985 140354 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapc1f4a971-01 in ovnmeta-c1f4a971-0bd7-41ce-bdf6-5acb2b1b4bab namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 20 15:15:43 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:15:43.987 229707 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapc1f4a971-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 20 15:15:43 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:15:43.987 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[9272b8d5-9544-4c5c-bce2-f47c29c01b82]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:15:43 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:15:43.987 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[c1894f2b-b0c9-489c-99b6-68e01dc0c47a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:15:43 compute-1 systemd-udevd[304576]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 15:15:43 compute-1 systemd-machined[194361]: New machine qemu-99-instance-000000bb.
Jan 20 15:15:43 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:15:43.997 140466 DEBUG oslo.privsep.daemon [-] privsep: reply[8e898592-c06b-481a-a529-ff7141035c1d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:15:44 compute-1 NetworkManager[49104]: <info>  [1768922144.0072] device (tap0e93d1de-67): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 20 15:15:44 compute-1 NetworkManager[49104]: <info>  [1768922144.0077] device (tap0e93d1de-67): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 20 15:15:44 compute-1 systemd[1]: Started Virtual Machine qemu-99-instance-000000bb.
Jan 20 15:15:44 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:15:44.010 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[bc2f34a5-a482-40bf-9b34-6b5a8928adc2]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:15:44 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:15:44.034 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[2463f21c-5c83-49f3-8b1f-dcf9eaf46e61]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:15:44 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:15:44.040 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[95fe8f50-e685-4e84-8154-cb1842bb5ac8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:15:44 compute-1 NetworkManager[49104]: <info>  [1768922144.0414] manager: (tapc1f4a971-00): new Veth device (/org/freedesktop/NetworkManager/Devices/348)
Jan 20 15:15:44 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:15:44.067 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[35dc046f-8634-4a6b-9cd0-7b2e146cc2c3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:15:44 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:15:44.070 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[1967d06c-4fa7-4e46-82d7-a603a08a5b58]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:15:44 compute-1 NetworkManager[49104]: <info>  [1768922144.0906] device (tapc1f4a971-00): carrier: link connected
Jan 20 15:15:44 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:15:44.094 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[e5c52fb6-ef6d-43a4-97d2-2382fc3c919c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:15:44 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:15:44.111 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[ac2d5c59-72bb-43b7-ac53-8b0f4e42b034]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc1f4a971-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:fc:30:f0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 236], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 718301, 'reachable_time': 36622, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 304608, 'error': None, 'target': 'ovnmeta-c1f4a971-0bd7-41ce-bdf6-5acb2b1b4bab', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:15:44 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:15:44.127 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[08a4688f-fdc2-4c84-a25f-e7f4b8a4d7cd]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fefc:30f0'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 718301, 'tstamp': 718301}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 304609, 'error': None, 'target': 'ovnmeta-c1f4a971-0bd7-41ce-bdf6-5acb2b1b4bab', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:15:44 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:15:44.144 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[7c7f2875-ce1d-45a5-a655-e0d9f09fd4dc]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc1f4a971-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:fc:30:f0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 236], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 718301, 'reachable_time': 36622, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 304610, 'error': None, 'target': 'ovnmeta-c1f4a971-0bd7-41ce-bdf6-5acb2b1b4bab', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:15:44 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:15:44.177 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[4eb85b8e-b912-4610-8145-4a186458c8d2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:15:44 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:15:44.239 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[a149067e-08d1-4fba-99e5-364c93539071]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:15:44 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:15:44.241 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc1f4a971-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:15:44 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:15:44.241 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 15:15:44 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:15:44.241 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc1f4a971-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:15:44 compute-1 nova_compute[225855]: 2026-01-20 15:15:44.243 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:15:44 compute-1 NetworkManager[49104]: <info>  [1768922144.2441] manager: (tapc1f4a971-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/349)
Jan 20 15:15:44 compute-1 kernel: tapc1f4a971-00: entered promiscuous mode
Jan 20 15:15:44 compute-1 nova_compute[225855]: 2026-01-20 15:15:44.246 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:15:44 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:15:44.249 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc1f4a971-00, col_values=(('external_ids', {'iface-id': 'b20b0e27-0b08-4316-b6df-6784416f44c0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:15:44 compute-1 ovn_controller[130490]: 2026-01-20T15:15:44Z|00836|binding|INFO|Releasing lport b20b0e27-0b08-4316-b6df-6784416f44c0 from this chassis (sb_readonly=0)
Jan 20 15:15:44 compute-1 nova_compute[225855]: 2026-01-20 15:15:44.250 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:15:44 compute-1 nova_compute[225855]: 2026-01-20 15:15:44.251 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:15:44 compute-1 nova_compute[225855]: 2026-01-20 15:15:44.254 225859 DEBUG nova.network.neutron [None req-bbd8c924-2ea1-4107-b500-c19534f19b83 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: b5656c1b-5ac7-4b93-a25d-420e1e294678] Successfully updated port: ebbe6083-de9d-43ca-9ab2-cf306ea0be4d _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 20 15:15:44 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:15:44.257 140354 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/c1f4a971-0bd7-41ce-bdf6-5acb2b1b4bab.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/c1f4a971-0bd7-41ce-bdf6-5acb2b1b4bab.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 20 15:15:44 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:15:44.258 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[00c03140-d17f-4ca2-acd2-143d5eba8e29]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:15:44 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:15:44.259 140354 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 20 15:15:44 compute-1 ovn_metadata_agent[140349]: global
Jan 20 15:15:44 compute-1 ovn_metadata_agent[140349]:     log         /dev/log local0 debug
Jan 20 15:15:44 compute-1 ovn_metadata_agent[140349]:     log-tag     haproxy-metadata-proxy-c1f4a971-0bd7-41ce-bdf6-5acb2b1b4bab
Jan 20 15:15:44 compute-1 ovn_metadata_agent[140349]:     user        root
Jan 20 15:15:44 compute-1 ovn_metadata_agent[140349]:     group       root
Jan 20 15:15:44 compute-1 ovn_metadata_agent[140349]:     maxconn     1024
Jan 20 15:15:44 compute-1 ovn_metadata_agent[140349]:     pidfile     /var/lib/neutron/external/pids/c1f4a971-0bd7-41ce-bdf6-5acb2b1b4bab.pid.haproxy
Jan 20 15:15:44 compute-1 ovn_metadata_agent[140349]:     daemon
Jan 20 15:15:44 compute-1 ovn_metadata_agent[140349]: 
Jan 20 15:15:44 compute-1 ovn_metadata_agent[140349]: defaults
Jan 20 15:15:44 compute-1 ovn_metadata_agent[140349]:     log global
Jan 20 15:15:44 compute-1 ovn_metadata_agent[140349]:     mode http
Jan 20 15:15:44 compute-1 ovn_metadata_agent[140349]:     option httplog
Jan 20 15:15:44 compute-1 ovn_metadata_agent[140349]:     option dontlognull
Jan 20 15:15:44 compute-1 ovn_metadata_agent[140349]:     option http-server-close
Jan 20 15:15:44 compute-1 ovn_metadata_agent[140349]:     option forwardfor
Jan 20 15:15:44 compute-1 ovn_metadata_agent[140349]:     retries                 3
Jan 20 15:15:44 compute-1 ovn_metadata_agent[140349]:     timeout http-request    30s
Jan 20 15:15:44 compute-1 ovn_metadata_agent[140349]:     timeout connect         30s
Jan 20 15:15:44 compute-1 ovn_metadata_agent[140349]:     timeout client          32s
Jan 20 15:15:44 compute-1 ovn_metadata_agent[140349]:     timeout server          32s
Jan 20 15:15:44 compute-1 ovn_metadata_agent[140349]:     timeout http-keep-alive 30s
Jan 20 15:15:44 compute-1 ovn_metadata_agent[140349]: 
Jan 20 15:15:44 compute-1 ovn_metadata_agent[140349]: 
Jan 20 15:15:44 compute-1 ovn_metadata_agent[140349]: listen listener
Jan 20 15:15:44 compute-1 ovn_metadata_agent[140349]:     bind 169.254.169.254:80
Jan 20 15:15:44 compute-1 ovn_metadata_agent[140349]:     server metadata /var/lib/neutron/metadata_proxy
Jan 20 15:15:44 compute-1 ovn_metadata_agent[140349]:     http-request add-header X-OVN-Network-ID c1f4a971-0bd7-41ce-bdf6-5acb2b1b4bab
Jan 20 15:15:44 compute-1 ovn_metadata_agent[140349]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 20 15:15:44 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:15:44.259 140354 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-c1f4a971-0bd7-41ce-bdf6-5acb2b1b4bab', 'env', 'PROCESS_TAG=haproxy-c1f4a971-0bd7-41ce-bdf6-5acb2b1b4bab', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/c1f4a971-0bd7-41ce-bdf6-5acb2b1b4bab.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 20 15:15:44 compute-1 nova_compute[225855]: 2026-01-20 15:15:44.265 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:15:44 compute-1 nova_compute[225855]: 2026-01-20 15:15:44.277 225859 DEBUG oslo_concurrency.lockutils [None req-bbd8c924-2ea1-4107-b500-c19534f19b83 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Acquiring lock "refresh_cache-b5656c1b-5ac7-4b93-a25d-420e1e294678" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:15:44 compute-1 nova_compute[225855]: 2026-01-20 15:15:44.277 225859 DEBUG oslo_concurrency.lockutils [None req-bbd8c924-2ea1-4107-b500-c19534f19b83 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Acquired lock "refresh_cache-b5656c1b-5ac7-4b93-a25d-420e1e294678" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:15:44 compute-1 nova_compute[225855]: 2026-01-20 15:15:44.278 225859 DEBUG nova.network.neutron [None req-bbd8c924-2ea1-4107-b500-c19534f19b83 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: b5656c1b-5ac7-4b93-a25d-420e1e294678] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 15:15:44 compute-1 nova_compute[225855]: 2026-01-20 15:15:44.467 225859 DEBUG nova.compute.manager [req-077d4813-c385-4e23-9c9b-e6f2eae66aed req-b7052b2a-aba9-4bfd-96f1-6621372e8d7c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b5656c1b-5ac7-4b93-a25d-420e1e294678] Received event network-changed-ebbe6083-de9d-43ca-9ab2-cf306ea0be4d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:15:44 compute-1 nova_compute[225855]: 2026-01-20 15:15:44.467 225859 DEBUG nova.compute.manager [req-077d4813-c385-4e23-9c9b-e6f2eae66aed req-b7052b2a-aba9-4bfd-96f1-6621372e8d7c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b5656c1b-5ac7-4b93-a25d-420e1e294678] Refreshing instance network info cache due to event network-changed-ebbe6083-de9d-43ca-9ab2-cf306ea0be4d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 15:15:44 compute-1 nova_compute[225855]: 2026-01-20 15:15:44.468 225859 DEBUG oslo_concurrency.lockutils [req-077d4813-c385-4e23-9c9b-e6f2eae66aed req-b7052b2a-aba9-4bfd-96f1-6621372e8d7c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-b5656c1b-5ac7-4b93-a25d-420e1e294678" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:15:44 compute-1 nova_compute[225855]: 2026-01-20 15:15:44.568 225859 DEBUG nova.network.neutron [None req-bbd8c924-2ea1-4107-b500-c19534f19b83 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: b5656c1b-5ac7-4b93-a25d-420e1e294678] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 20 15:15:44 compute-1 podman[304692]: 2026-01-20 15:15:44.616147967 +0000 UTC m=+0.050589987 container create a439ab3be6e54a4c6d705368f861d75da7d479824ae2fd478adfd2b64bc06259 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c1f4a971-0bd7-41ce-bdf6-5acb2b1b4bab, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 20 15:15:44 compute-1 systemd[1]: Started libpod-conmon-a439ab3be6e54a4c6d705368f861d75da7d479824ae2fd478adfd2b64bc06259.scope.
Jan 20 15:15:44 compute-1 systemd[1]: Started libcrun container.
Jan 20 15:15:44 compute-1 podman[304692]: 2026-01-20 15:15:44.586921437 +0000 UTC m=+0.021363477 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 20 15:15:44 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb008d1ba0088a00e33db5fdd52a317528e9420e9947ad1e20b1f6e8e7235013/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 20 15:15:44 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/1384480633' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:15:44 compute-1 ceph-mon[81775]: pgmap v2792: 321 pgs: 321 active+clean; 623 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 36 KiB/s rd, 1.2 MiB/s wr, 54 op/s
Jan 20 15:15:44 compute-1 nova_compute[225855]: 2026-01-20 15:15:44.695 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768922144.694481, f1ded131-d9a3-4e93-ad99-53ee2695d5c8 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:15:44 compute-1 nova_compute[225855]: 2026-01-20 15:15:44.695 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: f1ded131-d9a3-4e93-ad99-53ee2695d5c8] VM Resumed (Lifecycle Event)
Jan 20 15:15:44 compute-1 nova_compute[225855]: 2026-01-20 15:15:44.698 225859 DEBUG nova.compute.manager [None req-b335d546-cb29-41df-a4fc-7fc39a94104e e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] [instance: f1ded131-d9a3-4e93-ad99-53ee2695d5c8] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 20 15:15:44 compute-1 nova_compute[225855]: 2026-01-20 15:15:44.701 225859 INFO nova.virt.libvirt.driver [-] [instance: f1ded131-d9a3-4e93-ad99-53ee2695d5c8] Instance running successfully.
Jan 20 15:15:44 compute-1 virtqemud[225396]: argument unsupported: QEMU guest agent is not configured
Jan 20 15:15:44 compute-1 podman[304692]: 2026-01-20 15:15:44.703783823 +0000 UTC m=+0.138225873 container init a439ab3be6e54a4c6d705368f861d75da7d479824ae2fd478adfd2b64bc06259 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c1f4a971-0bd7-41ce-bdf6-5acb2b1b4bab, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 20 15:15:44 compute-1 nova_compute[225855]: 2026-01-20 15:15:44.704 225859 DEBUG nova.virt.libvirt.guest [None req-b335d546-cb29-41df-a4fc-7fc39a94104e e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] [instance: f1ded131-d9a3-4e93-ad99-53ee2695d5c8] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200
Jan 20 15:15:44 compute-1 nova_compute[225855]: 2026-01-20 15:15:44.704 225859 DEBUG nova.virt.libvirt.driver [None req-b335d546-cb29-41df-a4fc-7fc39a94104e e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] [instance: f1ded131-d9a3-4e93-ad99-53ee2695d5c8] finish_migration finished successfully. finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11793
Jan 20 15:15:44 compute-1 podman[304692]: 2026-01-20 15:15:44.710956227 +0000 UTC m=+0.145398247 container start a439ab3be6e54a4c6d705368f861d75da7d479824ae2fd478adfd2b64bc06259 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c1f4a971-0bd7-41ce-bdf6-5acb2b1b4bab, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Jan 20 15:15:44 compute-1 nova_compute[225855]: 2026-01-20 15:15:44.726 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: f1ded131-d9a3-4e93-ad99-53ee2695d5c8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:15:44 compute-1 nova_compute[225855]: 2026-01-20 15:15:44.729 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: f1ded131-d9a3-4e93-ad99-53ee2695d5c8] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 15:15:44 compute-1 neutron-haproxy-ovnmeta-c1f4a971-0bd7-41ce-bdf6-5acb2b1b4bab[304717]: [NOTICE]   (304722) : New worker (304724) forked
Jan 20 15:15:44 compute-1 neutron-haproxy-ovnmeta-c1f4a971-0bd7-41ce-bdf6-5acb2b1b4bab[304717]: [NOTICE]   (304722) : Loading success.
Jan 20 15:15:44 compute-1 nova_compute[225855]: 2026-01-20 15:15:44.775 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: f1ded131-d9a3-4e93-ad99-53ee2695d5c8] During sync_power_state the instance has a pending task (resize_finish). Skip.
Jan 20 15:15:44 compute-1 nova_compute[225855]: 2026-01-20 15:15:44.775 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768922144.695603, f1ded131-d9a3-4e93-ad99-53ee2695d5c8 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:15:44 compute-1 nova_compute[225855]: 2026-01-20 15:15:44.776 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: f1ded131-d9a3-4e93-ad99-53ee2695d5c8] VM Started (Lifecycle Event)
Jan 20 15:15:44 compute-1 nova_compute[225855]: 2026-01-20 15:15:44.835 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: f1ded131-d9a3-4e93-ad99-53ee2695d5c8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:15:44 compute-1 nova_compute[225855]: 2026-01-20 15:15:44.839 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: f1ded131-d9a3-4e93-ad99-53ee2695d5c8] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 15:15:44 compute-1 nova_compute[225855]: 2026-01-20 15:15:44.873 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: f1ded131-d9a3-4e93-ad99-53ee2695d5c8] During sync_power_state the instance has a pending task (resize_finish). Skip.
Jan 20 15:15:45 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:15:45 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:15:45 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:15:45.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:15:45 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:15:45 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:15:45 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:15:45.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:15:46 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/3184790100' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:15:46 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/3184790100' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:15:46 compute-1 nova_compute[225855]: 2026-01-20 15:15:46.646 225859 DEBUG nova.compute.manager [req-4e636554-da77-4e96-a836-a9a962da2e40 req-cf8118bb-0a23-4f14-878f-090c53e009ce 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f1ded131-d9a3-4e93-ad99-53ee2695d5c8] Received event network-vif-plugged-0e93d1de-671e-4e37-8e79-44bed7981254 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:15:46 compute-1 nova_compute[225855]: 2026-01-20 15:15:46.646 225859 DEBUG oslo_concurrency.lockutils [req-4e636554-da77-4e96-a836-a9a962da2e40 req-cf8118bb-0a23-4f14-878f-090c53e009ce 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "f1ded131-d9a3-4e93-ad99-53ee2695d5c8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:15:46 compute-1 nova_compute[225855]: 2026-01-20 15:15:46.647 225859 DEBUG oslo_concurrency.lockutils [req-4e636554-da77-4e96-a836-a9a962da2e40 req-cf8118bb-0a23-4f14-878f-090c53e009ce 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "f1ded131-d9a3-4e93-ad99-53ee2695d5c8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:15:46 compute-1 nova_compute[225855]: 2026-01-20 15:15:46.647 225859 DEBUG oslo_concurrency.lockutils [req-4e636554-da77-4e96-a836-a9a962da2e40 req-cf8118bb-0a23-4f14-878f-090c53e009ce 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "f1ded131-d9a3-4e93-ad99-53ee2695d5c8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:15:46 compute-1 nova_compute[225855]: 2026-01-20 15:15:46.647 225859 DEBUG nova.compute.manager [req-4e636554-da77-4e96-a836-a9a962da2e40 req-cf8118bb-0a23-4f14-878f-090c53e009ce 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f1ded131-d9a3-4e93-ad99-53ee2695d5c8] No waiting events found dispatching network-vif-plugged-0e93d1de-671e-4e37-8e79-44bed7981254 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:15:46 compute-1 nova_compute[225855]: 2026-01-20 15:15:46.648 225859 WARNING nova.compute.manager [req-4e636554-da77-4e96-a836-a9a962da2e40 req-cf8118bb-0a23-4f14-878f-090c53e009ce 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f1ded131-d9a3-4e93-ad99-53ee2695d5c8] Received unexpected event network-vif-plugged-0e93d1de-671e-4e37-8e79-44bed7981254 for instance with vm_state resized and task_state None.
Jan 20 15:15:46 compute-1 nova_compute[225855]: 2026-01-20 15:15:46.648 225859 DEBUG nova.compute.manager [req-4e636554-da77-4e96-a836-a9a962da2e40 req-cf8118bb-0a23-4f14-878f-090c53e009ce 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f1ded131-d9a3-4e93-ad99-53ee2695d5c8] Received event network-vif-plugged-0e93d1de-671e-4e37-8e79-44bed7981254 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:15:46 compute-1 nova_compute[225855]: 2026-01-20 15:15:46.648 225859 DEBUG oslo_concurrency.lockutils [req-4e636554-da77-4e96-a836-a9a962da2e40 req-cf8118bb-0a23-4f14-878f-090c53e009ce 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "f1ded131-d9a3-4e93-ad99-53ee2695d5c8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:15:46 compute-1 nova_compute[225855]: 2026-01-20 15:15:46.648 225859 DEBUG oslo_concurrency.lockutils [req-4e636554-da77-4e96-a836-a9a962da2e40 req-cf8118bb-0a23-4f14-878f-090c53e009ce 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "f1ded131-d9a3-4e93-ad99-53ee2695d5c8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:15:46 compute-1 nova_compute[225855]: 2026-01-20 15:15:46.649 225859 DEBUG oslo_concurrency.lockutils [req-4e636554-da77-4e96-a836-a9a962da2e40 req-cf8118bb-0a23-4f14-878f-090c53e009ce 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "f1ded131-d9a3-4e93-ad99-53ee2695d5c8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:15:46 compute-1 nova_compute[225855]: 2026-01-20 15:15:46.649 225859 DEBUG nova.compute.manager [req-4e636554-da77-4e96-a836-a9a962da2e40 req-cf8118bb-0a23-4f14-878f-090c53e009ce 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f1ded131-d9a3-4e93-ad99-53ee2695d5c8] No waiting events found dispatching network-vif-plugged-0e93d1de-671e-4e37-8e79-44bed7981254 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:15:46 compute-1 nova_compute[225855]: 2026-01-20 15:15:46.649 225859 WARNING nova.compute.manager [req-4e636554-da77-4e96-a836-a9a962da2e40 req-cf8118bb-0a23-4f14-878f-090c53e009ce 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f1ded131-d9a3-4e93-ad99-53ee2695d5c8] Received unexpected event network-vif-plugged-0e93d1de-671e-4e37-8e79-44bed7981254 for instance with vm_state resized and task_state None.
Jan 20 15:15:46 compute-1 nova_compute[225855]: 2026-01-20 15:15:46.679 225859 DEBUG nova.network.neutron [None req-bbd8c924-2ea1-4107-b500-c19534f19b83 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: b5656c1b-5ac7-4b93-a25d-420e1e294678] Updating instance_info_cache with network_info: [{"id": "ebbe6083-de9d-43ca-9ab2-cf306ea0be4d", "address": "fa:16:3e:a9:77:ea", "network": {"id": "be008398-8f36-4967-9cc8-6412553c79f3", "bridge": "br-int", "label": "tempest-network-smoke--259749923", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1ed5feeeafe7448a8efb47ab975b0ead", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapebbe6083-de", "ovs_interfaceid": "ebbe6083-de9d-43ca-9ab2-cf306ea0be4d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:15:46 compute-1 nova_compute[225855]: 2026-01-20 15:15:46.708 225859 DEBUG oslo_concurrency.lockutils [None req-bbd8c924-2ea1-4107-b500-c19534f19b83 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Releasing lock "refresh_cache-b5656c1b-5ac7-4b93-a25d-420e1e294678" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:15:46 compute-1 nova_compute[225855]: 2026-01-20 15:15:46.709 225859 DEBUG nova.compute.manager [None req-bbd8c924-2ea1-4107-b500-c19534f19b83 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: b5656c1b-5ac7-4b93-a25d-420e1e294678] Instance network_info: |[{"id": "ebbe6083-de9d-43ca-9ab2-cf306ea0be4d", "address": "fa:16:3e:a9:77:ea", "network": {"id": "be008398-8f36-4967-9cc8-6412553c79f3", "bridge": "br-int", "label": "tempest-network-smoke--259749923", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1ed5feeeafe7448a8efb47ab975b0ead", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapebbe6083-de", "ovs_interfaceid": "ebbe6083-de9d-43ca-9ab2-cf306ea0be4d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 20 15:15:46 compute-1 nova_compute[225855]: 2026-01-20 15:15:46.709 225859 DEBUG oslo_concurrency.lockutils [req-077d4813-c385-4e23-9c9b-e6f2eae66aed req-b7052b2a-aba9-4bfd-96f1-6621372e8d7c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-b5656c1b-5ac7-4b93-a25d-420e1e294678" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:15:46 compute-1 nova_compute[225855]: 2026-01-20 15:15:46.709 225859 DEBUG nova.network.neutron [req-077d4813-c385-4e23-9c9b-e6f2eae66aed req-b7052b2a-aba9-4bfd-96f1-6621372e8d7c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b5656c1b-5ac7-4b93-a25d-420e1e294678] Refreshing network info cache for port ebbe6083-de9d-43ca-9ab2-cf306ea0be4d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 15:15:46 compute-1 nova_compute[225855]: 2026-01-20 15:15:46.712 225859 DEBUG nova.virt.libvirt.driver [None req-bbd8c924-2ea1-4107-b500-c19534f19b83 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: b5656c1b-5ac7-4b93-a25d-420e1e294678] Start _get_guest_xml network_info=[{"id": "ebbe6083-de9d-43ca-9ab2-cf306ea0be4d", "address": "fa:16:3e:a9:77:ea", "network": {"id": "be008398-8f36-4967-9cc8-6412553c79f3", "bridge": "br-int", "label": "tempest-network-smoke--259749923", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1ed5feeeafe7448a8efb47ab975b0ead", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapebbe6083-de", "ovs_interfaceid": "ebbe6083-de9d-43ca-9ab2-cf306ea0be4d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'device_type': 'disk', 'encryption_options': None, 'size': 0, 'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'guest_format': None, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'image_id': 'a32b3e07-16d8-46fd-9a7b-c242c432fcf9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 20 15:15:46 compute-1 nova_compute[225855]: 2026-01-20 15:15:46.715 225859 WARNING nova.virt.libvirt.driver [None req-bbd8c924-2ea1-4107-b500-c19534f19b83 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 15:15:46 compute-1 nova_compute[225855]: 2026-01-20 15:15:46.720 225859 DEBUG nova.virt.libvirt.host [None req-bbd8c924-2ea1-4107-b500-c19534f19b83 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 20 15:15:46 compute-1 nova_compute[225855]: 2026-01-20 15:15:46.720 225859 DEBUG nova.virt.libvirt.host [None req-bbd8c924-2ea1-4107-b500-c19534f19b83 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 20 15:15:46 compute-1 nova_compute[225855]: 2026-01-20 15:15:46.724 225859 DEBUG nova.virt.libvirt.host [None req-bbd8c924-2ea1-4107-b500-c19534f19b83 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 20 15:15:46 compute-1 nova_compute[225855]: 2026-01-20 15:15:46.725 225859 DEBUG nova.virt.libvirt.host [None req-bbd8c924-2ea1-4107-b500-c19534f19b83 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 20 15:15:46 compute-1 nova_compute[225855]: 2026-01-20 15:15:46.726 225859 DEBUG nova.virt.libvirt.driver [None req-bbd8c924-2ea1-4107-b500-c19534f19b83 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 20 15:15:46 compute-1 nova_compute[225855]: 2026-01-20 15:15:46.726 225859 DEBUG nova.virt.hardware [None req-bbd8c924-2ea1-4107-b500-c19534f19b83 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-20T14:21:55Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='522deaab-a741-4dbb-932d-d8b13a211c33',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 20 15:15:46 compute-1 nova_compute[225855]: 2026-01-20 15:15:46.727 225859 DEBUG nova.virt.hardware [None req-bbd8c924-2ea1-4107-b500-c19534f19b83 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 20 15:15:46 compute-1 nova_compute[225855]: 2026-01-20 15:15:46.727 225859 DEBUG nova.virt.hardware [None req-bbd8c924-2ea1-4107-b500-c19534f19b83 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 20 15:15:46 compute-1 nova_compute[225855]: 2026-01-20 15:15:46.727 225859 DEBUG nova.virt.hardware [None req-bbd8c924-2ea1-4107-b500-c19534f19b83 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 20 15:15:46 compute-1 nova_compute[225855]: 2026-01-20 15:15:46.727 225859 DEBUG nova.virt.hardware [None req-bbd8c924-2ea1-4107-b500-c19534f19b83 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 20 15:15:46 compute-1 nova_compute[225855]: 2026-01-20 15:15:46.728 225859 DEBUG nova.virt.hardware [None req-bbd8c924-2ea1-4107-b500-c19534f19b83 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 20 15:15:46 compute-1 nova_compute[225855]: 2026-01-20 15:15:46.728 225859 DEBUG nova.virt.hardware [None req-bbd8c924-2ea1-4107-b500-c19534f19b83 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 20 15:15:46 compute-1 nova_compute[225855]: 2026-01-20 15:15:46.728 225859 DEBUG nova.virt.hardware [None req-bbd8c924-2ea1-4107-b500-c19534f19b83 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 20 15:15:46 compute-1 nova_compute[225855]: 2026-01-20 15:15:46.728 225859 DEBUG nova.virt.hardware [None req-bbd8c924-2ea1-4107-b500-c19534f19b83 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 20 15:15:46 compute-1 nova_compute[225855]: 2026-01-20 15:15:46.729 225859 DEBUG nova.virt.hardware [None req-bbd8c924-2ea1-4107-b500-c19534f19b83 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 20 15:15:46 compute-1 nova_compute[225855]: 2026-01-20 15:15:46.729 225859 DEBUG nova.virt.hardware [None req-bbd8c924-2ea1-4107-b500-c19534f19b83 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 20 15:15:46 compute-1 nova_compute[225855]: 2026-01-20 15:15:46.731 225859 DEBUG oslo_concurrency.processutils [None req-bbd8c924-2ea1-4107-b500-c19534f19b83 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:15:46 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e404 e404: 3 total, 3 up, 3 in
Jan 20 15:15:47 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:15:47 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:15:47 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:15:47.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:15:47 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 15:15:47 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3505090184' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:15:47 compute-1 nova_compute[225855]: 2026-01-20 15:15:47.199 225859 DEBUG oslo_concurrency.processutils [None req-bbd8c924-2ea1-4107-b500-c19534f19b83 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:15:47 compute-1 nova_compute[225855]: 2026-01-20 15:15:47.225 225859 DEBUG nova.storage.rbd_utils [None req-bbd8c924-2ea1-4107-b500-c19534f19b83 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] rbd image b5656c1b-5ac7-4b93-a25d-420e1e294678_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:15:47 compute-1 nova_compute[225855]: 2026-01-20 15:15:47.229 225859 DEBUG oslo_concurrency.processutils [None req-bbd8c924-2ea1-4107-b500-c19534f19b83 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:15:47 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:15:47 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:15:47 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:15:47.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:15:47 compute-1 ceph-mon[81775]: pgmap v2793: 321 pgs: 321 active+clean; 645 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.6 MiB/s rd, 2.5 MiB/s wr, 171 op/s
Jan 20 15:15:47 compute-1 ceph-mon[81775]: osdmap e404: 3 total, 3 up, 3 in
Jan 20 15:15:47 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/3505090184' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:15:47 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 15:15:47 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/257267576' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:15:47 compute-1 nova_compute[225855]: 2026-01-20 15:15:47.682 225859 DEBUG oslo_concurrency.processutils [None req-bbd8c924-2ea1-4107-b500-c19534f19b83 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:15:47 compute-1 nova_compute[225855]: 2026-01-20 15:15:47.685 225859 DEBUG nova.virt.libvirt.vif [None req-bbd8c924-2ea1-4107-b500-c19534f19b83 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T15:15:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-470752205',display_name='tempest-TestNetworkAdvancedServerOps-server-470752205',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-470752205',id=190,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAmgIhRv7SylyyDoXsoPKcXNSjcMbMCF7/IROo74ZCmTo9LbWE2Sanv271vjV+ounImSggkddfPFTxsQsqOAeGJ63UOB1CVRLYAEgvPLI8ngnO4k9hlNWAKjL0F7Yejx9A==',key_name='tempest-TestNetworkAdvancedServerOps-131135683',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1ed5feeeafe7448a8efb47ab975b0ead',ramdisk_id='',reservation_id='r-kjhk913w',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-175282664',owner_user_name='tempest-TestNetworkAdvancedServerOps-175282664-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T15:15:40Z,user_data=None,user_id='442a7a5cb8ea426a82be9762b262d171',uuid=b5656c1b-5ac7-4b93-a25d-420e1e294678,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ebbe6083-de9d-43ca-9ab2-cf306ea0be4d", "address": "fa:16:3e:a9:77:ea", "network": {"id": "be008398-8f36-4967-9cc8-6412553c79f3", "bridge": "br-int", "label": "tempest-network-smoke--259749923", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "1ed5feeeafe7448a8efb47ab975b0ead", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapebbe6083-de", "ovs_interfaceid": "ebbe6083-de9d-43ca-9ab2-cf306ea0be4d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 20 15:15:47 compute-1 nova_compute[225855]: 2026-01-20 15:15:47.686 225859 DEBUG nova.network.os_vif_util [None req-bbd8c924-2ea1-4107-b500-c19534f19b83 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Converting VIF {"id": "ebbe6083-de9d-43ca-9ab2-cf306ea0be4d", "address": "fa:16:3e:a9:77:ea", "network": {"id": "be008398-8f36-4967-9cc8-6412553c79f3", "bridge": "br-int", "label": "tempest-network-smoke--259749923", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1ed5feeeafe7448a8efb47ab975b0ead", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapebbe6083-de", "ovs_interfaceid": "ebbe6083-de9d-43ca-9ab2-cf306ea0be4d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 15:15:47 compute-1 nova_compute[225855]: 2026-01-20 15:15:47.687 225859 DEBUG nova.network.os_vif_util [None req-bbd8c924-2ea1-4107-b500-c19534f19b83 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a9:77:ea,bridge_name='br-int',has_traffic_filtering=True,id=ebbe6083-de9d-43ca-9ab2-cf306ea0be4d,network=Network(be008398-8f36-4967-9cc8-6412553c79f3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapebbe6083-de') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 15:15:47 compute-1 nova_compute[225855]: 2026-01-20 15:15:47.689 225859 DEBUG nova.objects.instance [None req-bbd8c924-2ea1-4107-b500-c19534f19b83 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Lazy-loading 'pci_devices' on Instance uuid b5656c1b-5ac7-4b93-a25d-420e1e294678 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:15:47 compute-1 nova_compute[225855]: 2026-01-20 15:15:47.728 225859 DEBUG nova.virt.libvirt.driver [None req-bbd8c924-2ea1-4107-b500-c19534f19b83 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: b5656c1b-5ac7-4b93-a25d-420e1e294678] End _get_guest_xml xml=<domain type="kvm">
Jan 20 15:15:47 compute-1 nova_compute[225855]:   <uuid>b5656c1b-5ac7-4b93-a25d-420e1e294678</uuid>
Jan 20 15:15:47 compute-1 nova_compute[225855]:   <name>instance-000000be</name>
Jan 20 15:15:47 compute-1 nova_compute[225855]:   <memory>131072</memory>
Jan 20 15:15:47 compute-1 nova_compute[225855]:   <vcpu>1</vcpu>
Jan 20 15:15:47 compute-1 nova_compute[225855]:   <metadata>
Jan 20 15:15:47 compute-1 nova_compute[225855]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 15:15:47 compute-1 nova_compute[225855]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 15:15:47 compute-1 nova_compute[225855]:       <nova:name>tempest-TestNetworkAdvancedServerOps-server-470752205</nova:name>
Jan 20 15:15:47 compute-1 nova_compute[225855]:       <nova:creationTime>2026-01-20 15:15:46</nova:creationTime>
Jan 20 15:15:47 compute-1 nova_compute[225855]:       <nova:flavor name="m1.nano">
Jan 20 15:15:47 compute-1 nova_compute[225855]:         <nova:memory>128</nova:memory>
Jan 20 15:15:47 compute-1 nova_compute[225855]:         <nova:disk>1</nova:disk>
Jan 20 15:15:47 compute-1 nova_compute[225855]:         <nova:swap>0</nova:swap>
Jan 20 15:15:47 compute-1 nova_compute[225855]:         <nova:ephemeral>0</nova:ephemeral>
Jan 20 15:15:47 compute-1 nova_compute[225855]:         <nova:vcpus>1</nova:vcpus>
Jan 20 15:15:47 compute-1 nova_compute[225855]:       </nova:flavor>
Jan 20 15:15:47 compute-1 nova_compute[225855]:       <nova:owner>
Jan 20 15:15:47 compute-1 nova_compute[225855]:         <nova:user uuid="442a7a5cb8ea426a82be9762b262d171">tempest-TestNetworkAdvancedServerOps-175282664-project-member</nova:user>
Jan 20 15:15:47 compute-1 nova_compute[225855]:         <nova:project uuid="1ed5feeeafe7448a8efb47ab975b0ead">tempest-TestNetworkAdvancedServerOps-175282664</nova:project>
Jan 20 15:15:47 compute-1 nova_compute[225855]:       </nova:owner>
Jan 20 15:15:47 compute-1 nova_compute[225855]:       <nova:root type="image" uuid="a32b3e07-16d8-46fd-9a7b-c242c432fcf9"/>
Jan 20 15:15:47 compute-1 nova_compute[225855]:       <nova:ports>
Jan 20 15:15:47 compute-1 nova_compute[225855]:         <nova:port uuid="ebbe6083-de9d-43ca-9ab2-cf306ea0be4d">
Jan 20 15:15:47 compute-1 nova_compute[225855]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Jan 20 15:15:47 compute-1 nova_compute[225855]:         </nova:port>
Jan 20 15:15:47 compute-1 nova_compute[225855]:       </nova:ports>
Jan 20 15:15:47 compute-1 nova_compute[225855]:     </nova:instance>
Jan 20 15:15:47 compute-1 nova_compute[225855]:   </metadata>
Jan 20 15:15:47 compute-1 nova_compute[225855]:   <sysinfo type="smbios">
Jan 20 15:15:47 compute-1 nova_compute[225855]:     <system>
Jan 20 15:15:47 compute-1 nova_compute[225855]:       <entry name="manufacturer">RDO</entry>
Jan 20 15:15:47 compute-1 nova_compute[225855]:       <entry name="product">OpenStack Compute</entry>
Jan 20 15:15:47 compute-1 nova_compute[225855]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 15:15:47 compute-1 nova_compute[225855]:       <entry name="serial">b5656c1b-5ac7-4b93-a25d-420e1e294678</entry>
Jan 20 15:15:47 compute-1 nova_compute[225855]:       <entry name="uuid">b5656c1b-5ac7-4b93-a25d-420e1e294678</entry>
Jan 20 15:15:47 compute-1 nova_compute[225855]:       <entry name="family">Virtual Machine</entry>
Jan 20 15:15:47 compute-1 nova_compute[225855]:     </system>
Jan 20 15:15:47 compute-1 nova_compute[225855]:   </sysinfo>
Jan 20 15:15:47 compute-1 nova_compute[225855]:   <os>
Jan 20 15:15:47 compute-1 nova_compute[225855]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 20 15:15:47 compute-1 nova_compute[225855]:     <boot dev="hd"/>
Jan 20 15:15:47 compute-1 nova_compute[225855]:     <smbios mode="sysinfo"/>
Jan 20 15:15:47 compute-1 nova_compute[225855]:   </os>
Jan 20 15:15:47 compute-1 nova_compute[225855]:   <features>
Jan 20 15:15:47 compute-1 nova_compute[225855]:     <acpi/>
Jan 20 15:15:47 compute-1 nova_compute[225855]:     <apic/>
Jan 20 15:15:47 compute-1 nova_compute[225855]:     <vmcoreinfo/>
Jan 20 15:15:47 compute-1 nova_compute[225855]:   </features>
Jan 20 15:15:47 compute-1 nova_compute[225855]:   <clock offset="utc">
Jan 20 15:15:47 compute-1 nova_compute[225855]:     <timer name="pit" tickpolicy="delay"/>
Jan 20 15:15:47 compute-1 nova_compute[225855]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 20 15:15:47 compute-1 nova_compute[225855]:     <timer name="hpet" present="no"/>
Jan 20 15:15:47 compute-1 nova_compute[225855]:   </clock>
Jan 20 15:15:47 compute-1 nova_compute[225855]:   <cpu mode="custom" match="exact">
Jan 20 15:15:47 compute-1 nova_compute[225855]:     <model>Nehalem</model>
Jan 20 15:15:47 compute-1 nova_compute[225855]:     <topology sockets="1" cores="1" threads="1"/>
Jan 20 15:15:47 compute-1 nova_compute[225855]:   </cpu>
Jan 20 15:15:47 compute-1 nova_compute[225855]:   <devices>
Jan 20 15:15:47 compute-1 nova_compute[225855]:     <disk type="network" device="disk">
Jan 20 15:15:47 compute-1 nova_compute[225855]:       <driver type="raw" cache="none"/>
Jan 20 15:15:47 compute-1 nova_compute[225855]:       <source protocol="rbd" name="vms/b5656c1b-5ac7-4b93-a25d-420e1e294678_disk">
Jan 20 15:15:47 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 15:15:47 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 15:15:47 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 15:15:47 compute-1 nova_compute[225855]:       </source>
Jan 20 15:15:47 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 15:15:47 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 15:15:47 compute-1 nova_compute[225855]:       </auth>
Jan 20 15:15:47 compute-1 nova_compute[225855]:       <target dev="vda" bus="virtio"/>
Jan 20 15:15:47 compute-1 nova_compute[225855]:     </disk>
Jan 20 15:15:47 compute-1 nova_compute[225855]:     <disk type="network" device="cdrom">
Jan 20 15:15:47 compute-1 nova_compute[225855]:       <driver type="raw" cache="none"/>
Jan 20 15:15:47 compute-1 nova_compute[225855]:       <source protocol="rbd" name="vms/b5656c1b-5ac7-4b93-a25d-420e1e294678_disk.config">
Jan 20 15:15:47 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 15:15:47 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 15:15:47 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 15:15:47 compute-1 nova_compute[225855]:       </source>
Jan 20 15:15:47 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 15:15:47 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 15:15:47 compute-1 nova_compute[225855]:       </auth>
Jan 20 15:15:47 compute-1 nova_compute[225855]:       <target dev="sda" bus="sata"/>
Jan 20 15:15:47 compute-1 nova_compute[225855]:     </disk>
Jan 20 15:15:47 compute-1 nova_compute[225855]:     <interface type="ethernet">
Jan 20 15:15:47 compute-1 nova_compute[225855]:       <mac address="fa:16:3e:a9:77:ea"/>
Jan 20 15:15:47 compute-1 nova_compute[225855]:       <model type="virtio"/>
Jan 20 15:15:47 compute-1 nova_compute[225855]:       <driver name="vhost" rx_queue_size="512"/>
Jan 20 15:15:47 compute-1 nova_compute[225855]:       <mtu size="1442"/>
Jan 20 15:15:47 compute-1 nova_compute[225855]:       <target dev="tapebbe6083-de"/>
Jan 20 15:15:47 compute-1 nova_compute[225855]:     </interface>
Jan 20 15:15:47 compute-1 nova_compute[225855]:     <serial type="pty">
Jan 20 15:15:47 compute-1 nova_compute[225855]:       <log file="/var/lib/nova/instances/b5656c1b-5ac7-4b93-a25d-420e1e294678/console.log" append="off"/>
Jan 20 15:15:47 compute-1 nova_compute[225855]:     </serial>
Jan 20 15:15:47 compute-1 nova_compute[225855]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 20 15:15:47 compute-1 nova_compute[225855]:     <video>
Jan 20 15:15:47 compute-1 nova_compute[225855]:       <model type="virtio"/>
Jan 20 15:15:47 compute-1 nova_compute[225855]:     </video>
Jan 20 15:15:47 compute-1 nova_compute[225855]:     <input type="tablet" bus="usb"/>
Jan 20 15:15:47 compute-1 nova_compute[225855]:     <rng model="virtio">
Jan 20 15:15:47 compute-1 nova_compute[225855]:       <backend model="random">/dev/urandom</backend>
Jan 20 15:15:47 compute-1 nova_compute[225855]:     </rng>
Jan 20 15:15:47 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root"/>
Jan 20 15:15:47 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:15:47 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:15:47 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:15:47 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:15:47 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:15:47 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:15:47 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:15:47 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:15:47 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:15:47 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:15:47 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:15:47 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:15:47 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:15:47 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:15:47 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:15:47 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:15:47 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:15:47 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:15:47 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:15:47 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:15:47 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:15:47 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:15:47 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:15:47 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:15:47 compute-1 nova_compute[225855]:     <controller type="usb" index="0"/>
Jan 20 15:15:47 compute-1 nova_compute[225855]:     <memballoon model="virtio">
Jan 20 15:15:47 compute-1 nova_compute[225855]:       <stats period="10"/>
Jan 20 15:15:47 compute-1 nova_compute[225855]:     </memballoon>
Jan 20 15:15:47 compute-1 nova_compute[225855]:   </devices>
Jan 20 15:15:47 compute-1 nova_compute[225855]: </domain>
Jan 20 15:15:47 compute-1 nova_compute[225855]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 20 15:15:47 compute-1 nova_compute[225855]: 2026-01-20 15:15:47.731 225859 DEBUG nova.compute.manager [None req-bbd8c924-2ea1-4107-b500-c19534f19b83 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: b5656c1b-5ac7-4b93-a25d-420e1e294678] Preparing to wait for external event network-vif-plugged-ebbe6083-de9d-43ca-9ab2-cf306ea0be4d prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 20 15:15:47 compute-1 nova_compute[225855]: 2026-01-20 15:15:47.731 225859 DEBUG oslo_concurrency.lockutils [None req-bbd8c924-2ea1-4107-b500-c19534f19b83 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Acquiring lock "b5656c1b-5ac7-4b93-a25d-420e1e294678-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:15:47 compute-1 nova_compute[225855]: 2026-01-20 15:15:47.732 225859 DEBUG oslo_concurrency.lockutils [None req-bbd8c924-2ea1-4107-b500-c19534f19b83 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Lock "b5656c1b-5ac7-4b93-a25d-420e1e294678-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:15:47 compute-1 nova_compute[225855]: 2026-01-20 15:15:47.732 225859 DEBUG oslo_concurrency.lockutils [None req-bbd8c924-2ea1-4107-b500-c19534f19b83 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Lock "b5656c1b-5ac7-4b93-a25d-420e1e294678-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:15:47 compute-1 nova_compute[225855]: 2026-01-20 15:15:47.733 225859 DEBUG nova.virt.libvirt.vif [None req-bbd8c924-2ea1-4107-b500-c19534f19b83 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T15:15:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-470752205',display_name='tempest-TestNetworkAdvancedServerOps-server-470752205',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-470752205',id=190,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAmgIhRv7SylyyDoXsoPKcXNSjcMbMCF7/IROo74ZCmTo9LbWE2Sanv271vjV+ounImSggkddfPFTxsQsqOAeGJ63UOB1CVRLYAEgvPLI8ngnO4k9hlNWAKjL0F7Yejx9A==',key_name='tempest-TestNetworkAdvancedServerOps-131135683',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1ed5feeeafe7448a8efb47ab975b0ead',ramdisk_id='',reservation_id='r-kjhk913w',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-175282664',owner_user_name='tempest-TestNetworkAdvancedServerOps-175282664-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T15:15:40Z,user_data=None,user_id='442a7a5cb8ea426a82be9762b262d171',uuid=b5656c1b-5ac7-4b93-a25d-420e1e294678,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ebbe6083-de9d-43ca-9ab2-cf306ea0be4d", "address": "fa:16:3e:a9:77:ea", "network": {"id": "be008398-8f36-4967-9cc8-6412553c79f3", "bridge": "br-int", "label": "tempest-network-smoke--259749923", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1ed5feeeafe7448a8efb47ab975b0ead", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapebbe6083-de", "ovs_interfaceid": "ebbe6083-de9d-43ca-9ab2-cf306ea0be4d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 20 15:15:47 compute-1 nova_compute[225855]: 2026-01-20 15:15:47.734 225859 DEBUG nova.network.os_vif_util [None req-bbd8c924-2ea1-4107-b500-c19534f19b83 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Converting VIF {"id": "ebbe6083-de9d-43ca-9ab2-cf306ea0be4d", "address": "fa:16:3e:a9:77:ea", "network": {"id": "be008398-8f36-4967-9cc8-6412553c79f3", "bridge": "br-int", "label": "tempest-network-smoke--259749923", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1ed5feeeafe7448a8efb47ab975b0ead", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapebbe6083-de", "ovs_interfaceid": "ebbe6083-de9d-43ca-9ab2-cf306ea0be4d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 15:15:47 compute-1 nova_compute[225855]: 2026-01-20 15:15:47.735 225859 DEBUG nova.network.os_vif_util [None req-bbd8c924-2ea1-4107-b500-c19534f19b83 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a9:77:ea,bridge_name='br-int',has_traffic_filtering=True,id=ebbe6083-de9d-43ca-9ab2-cf306ea0be4d,network=Network(be008398-8f36-4967-9cc8-6412553c79f3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapebbe6083-de') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 15:15:47 compute-1 nova_compute[225855]: 2026-01-20 15:15:47.736 225859 DEBUG os_vif [None req-bbd8c924-2ea1-4107-b500-c19534f19b83 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a9:77:ea,bridge_name='br-int',has_traffic_filtering=True,id=ebbe6083-de9d-43ca-9ab2-cf306ea0be4d,network=Network(be008398-8f36-4967-9cc8-6412553c79f3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapebbe6083-de') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 20 15:15:47 compute-1 nova_compute[225855]: 2026-01-20 15:15:47.737 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:15:47 compute-1 nova_compute[225855]: 2026-01-20 15:15:47.737 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:15:47 compute-1 nova_compute[225855]: 2026-01-20 15:15:47.738 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 15:15:47 compute-1 nova_compute[225855]: 2026-01-20 15:15:47.742 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:15:47 compute-1 nova_compute[225855]: 2026-01-20 15:15:47.743 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapebbe6083-de, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:15:47 compute-1 nova_compute[225855]: 2026-01-20 15:15:47.743 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapebbe6083-de, col_values=(('external_ids', {'iface-id': 'ebbe6083-de9d-43ca-9ab2-cf306ea0be4d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a9:77:ea', 'vm-uuid': 'b5656c1b-5ac7-4b93-a25d-420e1e294678'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:15:47 compute-1 nova_compute[225855]: 2026-01-20 15:15:47.746 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:15:47 compute-1 NetworkManager[49104]: <info>  [1768922147.7466] manager: (tapebbe6083-de): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/350)
Jan 20 15:15:47 compute-1 nova_compute[225855]: 2026-01-20 15:15:47.749 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 15:15:47 compute-1 nova_compute[225855]: 2026-01-20 15:15:47.754 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:15:47 compute-1 nova_compute[225855]: 2026-01-20 15:15:47.755 225859 INFO os_vif [None req-bbd8c924-2ea1-4107-b500-c19534f19b83 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a9:77:ea,bridge_name='br-int',has_traffic_filtering=True,id=ebbe6083-de9d-43ca-9ab2-cf306ea0be4d,network=Network(be008398-8f36-4967-9cc8-6412553c79f3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapebbe6083-de')
Jan 20 15:15:47 compute-1 nova_compute[225855]: 2026-01-20 15:15:47.827 225859 DEBUG nova.virt.libvirt.driver [None req-bbd8c924-2ea1-4107-b500-c19534f19b83 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 15:15:47 compute-1 nova_compute[225855]: 2026-01-20 15:15:47.827 225859 DEBUG nova.virt.libvirt.driver [None req-bbd8c924-2ea1-4107-b500-c19534f19b83 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 15:15:47 compute-1 nova_compute[225855]: 2026-01-20 15:15:47.828 225859 DEBUG nova.virt.libvirt.driver [None req-bbd8c924-2ea1-4107-b500-c19534f19b83 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] No VIF found with MAC fa:16:3e:a9:77:ea, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 20 15:15:47 compute-1 nova_compute[225855]: 2026-01-20 15:15:47.828 225859 INFO nova.virt.libvirt.driver [None req-bbd8c924-2ea1-4107-b500-c19534f19b83 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: b5656c1b-5ac7-4b93-a25d-420e1e294678] Using config drive
Jan 20 15:15:47 compute-1 nova_compute[225855]: 2026-01-20 15:15:47.854 225859 DEBUG nova.storage.rbd_utils [None req-bbd8c924-2ea1-4107-b500-c19534f19b83 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] rbd image b5656c1b-5ac7-4b93-a25d-420e1e294678_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:15:47 compute-1 nova_compute[225855]: 2026-01-20 15:15:47.926 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:15:48 compute-1 nova_compute[225855]: 2026-01-20 15:15:48.162 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:15:48 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:15:48.162 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=61, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '12:bb:42', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '06:92:24:f7:15:56'}, ipsec=False) old=SB_Global(nb_cfg=60) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:15:48 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:15:48.163 140354 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 20 15:15:48 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/257267576' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:15:48 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e404 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:15:48 compute-1 nova_compute[225855]: 2026-01-20 15:15:48.598 225859 INFO nova.virt.libvirt.driver [None req-bbd8c924-2ea1-4107-b500-c19534f19b83 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: b5656c1b-5ac7-4b93-a25d-420e1e294678] Creating config drive at /var/lib/nova/instances/b5656c1b-5ac7-4b93-a25d-420e1e294678/disk.config
Jan 20 15:15:48 compute-1 nova_compute[225855]: 2026-01-20 15:15:48.603 225859 DEBUG oslo_concurrency.processutils [None req-bbd8c924-2ea1-4107-b500-c19534f19b83 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b5656c1b-5ac7-4b93-a25d-420e1e294678/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmppumrw33v execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:15:48 compute-1 nova_compute[225855]: 2026-01-20 15:15:48.734 225859 DEBUG oslo_concurrency.processutils [None req-bbd8c924-2ea1-4107-b500-c19534f19b83 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b5656c1b-5ac7-4b93-a25d-420e1e294678/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmppumrw33v" returned: 0 in 0.131s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:15:48 compute-1 nova_compute[225855]: 2026-01-20 15:15:48.761 225859 DEBUG nova.storage.rbd_utils [None req-bbd8c924-2ea1-4107-b500-c19534f19b83 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] rbd image b5656c1b-5ac7-4b93-a25d-420e1e294678_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:15:48 compute-1 nova_compute[225855]: 2026-01-20 15:15:48.765 225859 DEBUG oslo_concurrency.processutils [None req-bbd8c924-2ea1-4107-b500-c19534f19b83 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b5656c1b-5ac7-4b93-a25d-420e1e294678/disk.config b5656c1b-5ac7-4b93-a25d-420e1e294678_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:15:48 compute-1 nova_compute[225855]: 2026-01-20 15:15:48.934 225859 DEBUG oslo_concurrency.processutils [None req-bbd8c924-2ea1-4107-b500-c19534f19b83 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b5656c1b-5ac7-4b93-a25d-420e1e294678/disk.config b5656c1b-5ac7-4b93-a25d-420e1e294678_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.169s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:15:48 compute-1 nova_compute[225855]: 2026-01-20 15:15:48.936 225859 INFO nova.virt.libvirt.driver [None req-bbd8c924-2ea1-4107-b500-c19534f19b83 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: b5656c1b-5ac7-4b93-a25d-420e1e294678] Deleting local config drive /var/lib/nova/instances/b5656c1b-5ac7-4b93-a25d-420e1e294678/disk.config because it was imported into RBD.
Jan 20 15:15:48 compute-1 kernel: tapebbe6083-de: entered promiscuous mode
Jan 20 15:15:48 compute-1 NetworkManager[49104]: <info>  [1768922148.9774] manager: (tapebbe6083-de): new Tun device (/org/freedesktop/NetworkManager/Devices/351)
Jan 20 15:15:48 compute-1 ovn_controller[130490]: 2026-01-20T15:15:48Z|00837|binding|INFO|Claiming lport ebbe6083-de9d-43ca-9ab2-cf306ea0be4d for this chassis.
Jan 20 15:15:48 compute-1 ovn_controller[130490]: 2026-01-20T15:15:48Z|00838|binding|INFO|ebbe6083-de9d-43ca-9ab2-cf306ea0be4d: Claiming fa:16:3e:a9:77:ea 10.100.0.5
Jan 20 15:15:48 compute-1 nova_compute[225855]: 2026-01-20 15:15:48.983 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:15:48 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:15:48.990 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a9:77:ea 10.100.0.5'], port_security=['fa:16:3e:a9:77:ea 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'b5656c1b-5ac7-4b93-a25d-420e1e294678', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-be008398-8f36-4967-9cc8-6412553c79f3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1ed5feeeafe7448a8efb47ab975b0ead', 'neutron:revision_number': '2', 'neutron:security_group_ids': '1f7c21ab-d630-47d9-a822-01d8ee3b1d55', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cdafc2c8-f418-454c-b49a-dbb24d8d2298, chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], logical_port=ebbe6083-de9d-43ca-9ab2-cf306ea0be4d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:15:48 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:15:48.991 140354 INFO neutron.agent.ovn.metadata.agent [-] Port ebbe6083-de9d-43ca-9ab2-cf306ea0be4d in datapath be008398-8f36-4967-9cc8-6412553c79f3 bound to our chassis
Jan 20 15:15:48 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:15:48.994 140354 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network be008398-8f36-4967-9cc8-6412553c79f3
Jan 20 15:15:49 compute-1 nova_compute[225855]: 2026-01-20 15:15:48.999 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:15:49 compute-1 ovn_controller[130490]: 2026-01-20T15:15:49Z|00839|binding|INFO|Setting lport ebbe6083-de9d-43ca-9ab2-cf306ea0be4d ovn-installed in OVS
Jan 20 15:15:49 compute-1 ovn_controller[130490]: 2026-01-20T15:15:49Z|00840|binding|INFO|Setting lport ebbe6083-de9d-43ca-9ab2-cf306ea0be4d up in Southbound
Jan 20 15:15:49 compute-1 nova_compute[225855]: 2026-01-20 15:15:49.001 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:15:49 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:15:49.008 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[a52f33eb-93c2-445f-b6e4-762ec4468701]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:15:49 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:15:49.009 140354 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapbe008398-81 in ovnmeta-be008398-8f36-4967-9cc8-6412553c79f3 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 20 15:15:49 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:15:49.011 229707 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapbe008398-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 20 15:15:49 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:15:49.011 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[c64956f4-a7ae-4a56-98e6-3adb7873b01c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:15:49 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:15:49.012 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[2b48f718-0e90-4d80-896a-4e2fd911541b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:15:49 compute-1 systemd-udevd[304872]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 15:15:49 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:15:49.022 140466 DEBUG oslo.privsep.daemon [-] privsep: reply[a9d90ee5-0099-45ad-b478-e7800d15ae01]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:15:49 compute-1 systemd-machined[194361]: New machine qemu-100-instance-000000be.
Jan 20 15:15:49 compute-1 NetworkManager[49104]: <info>  [1768922149.0353] device (tapebbe6083-de): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 20 15:15:49 compute-1 NetworkManager[49104]: <info>  [1768922149.0365] device (tapebbe6083-de): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 20 15:15:49 compute-1 systemd[1]: Started Virtual Machine qemu-100-instance-000000be.
Jan 20 15:15:49 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:15:49.044 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[387b4d8a-cd5f-4d6d-952a-6356619b6d11]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:15:49 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:15:49 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:15:49 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:15:49.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:15:49 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:15:49.078 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[c1f063d9-8ede-481f-a208-97467f7b8f12]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:15:49 compute-1 NetworkManager[49104]: <info>  [1768922149.0848] manager: (tapbe008398-80): new Veth device (/org/freedesktop/NetworkManager/Devices/352)
Jan 20 15:15:49 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:15:49.083 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[6c69ff28-eff0-462d-af33-833de441e937]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:15:49 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:15:49.125 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[46f42e1a-0b1e-4364-b848-71db05860a31]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:15:49 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:15:49.128 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[adee28e8-acc1-4e95-934e-e8a0b0f2797e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:15:49 compute-1 NetworkManager[49104]: <info>  [1768922149.1532] device (tapbe008398-80): carrier: link connected
Jan 20 15:15:49 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:15:49.158 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[94df023d-0229-463d-8499-eb7c6307cbe8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:15:49 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:15:49.173 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[087ceea3-6986-4012-9e24-005c061b2803]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapbe008398-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:12:1d:8a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 238], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 718808, 'reachable_time': 42207, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 304904, 'error': None, 'target': 'ovnmeta-be008398-8f36-4967-9cc8-6412553c79f3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:15:49 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:15:49.188 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[9b0d97ac-c75c-49fb-928f-21fdb45805bc]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe12:1d8a'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 718808, 'tstamp': 718808}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 304905, 'error': None, 'target': 'ovnmeta-be008398-8f36-4967-9cc8-6412553c79f3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:15:49 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:15:49.204 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[f10b351b-3f51-446f-9fdb-471769d963e4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapbe008398-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:12:1d:8a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 238], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 718808, 'reachable_time': 42207, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 304906, 'error': None, 'target': 'ovnmeta-be008398-8f36-4967-9cc8-6412553c79f3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:15:49 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:15:49.233 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[d0d820fd-e2ac-4723-80c0-698dda733e72]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:15:49 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:15:49.297 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[516ee1b3-8b60-41e3-a796-7c03c2e939c0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:15:49 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:15:49.298 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbe008398-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:15:49 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:15:49.299 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 15:15:49 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:15:49.299 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbe008398-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:15:49 compute-1 nova_compute[225855]: 2026-01-20 15:15:49.302 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:15:49 compute-1 NetworkManager[49104]: <info>  [1768922149.3033] manager: (tapbe008398-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/353)
Jan 20 15:15:49 compute-1 kernel: tapbe008398-80: entered promiscuous mode
Jan 20 15:15:49 compute-1 nova_compute[225855]: 2026-01-20 15:15:49.305 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:15:49 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:15:49.305 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapbe008398-80, col_values=(('external_ids', {'iface-id': 'f3fd8b5d-b152-40f2-b571-88de4b49c77e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:15:49 compute-1 ovn_controller[130490]: 2026-01-20T15:15:49Z|00841|binding|INFO|Releasing lport f3fd8b5d-b152-40f2-b571-88de4b49c77e from this chassis (sb_readonly=0)
Jan 20 15:15:49 compute-1 nova_compute[225855]: 2026-01-20 15:15:49.308 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:15:49 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:15:49.309 140354 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/be008398-8f36-4967-9cc8-6412553c79f3.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/be008398-8f36-4967-9cc8-6412553c79f3.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 20 15:15:49 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:15:49.310 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[28ea90c1-939b-48f2-86d0-4860afc1ff27]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:15:49 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:15:49.311 140354 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 20 15:15:49 compute-1 ovn_metadata_agent[140349]: global
Jan 20 15:15:49 compute-1 ovn_metadata_agent[140349]:     log         /dev/log local0 debug
Jan 20 15:15:49 compute-1 ovn_metadata_agent[140349]:     log-tag     haproxy-metadata-proxy-be008398-8f36-4967-9cc8-6412553c79f3
Jan 20 15:15:49 compute-1 ovn_metadata_agent[140349]:     user        root
Jan 20 15:15:49 compute-1 ovn_metadata_agent[140349]:     group       root
Jan 20 15:15:49 compute-1 ovn_metadata_agent[140349]:     maxconn     1024
Jan 20 15:15:49 compute-1 ovn_metadata_agent[140349]:     pidfile     /var/lib/neutron/external/pids/be008398-8f36-4967-9cc8-6412553c79f3.pid.haproxy
Jan 20 15:15:49 compute-1 ovn_metadata_agent[140349]:     daemon
Jan 20 15:15:49 compute-1 ovn_metadata_agent[140349]: 
Jan 20 15:15:49 compute-1 ovn_metadata_agent[140349]: defaults
Jan 20 15:15:49 compute-1 ovn_metadata_agent[140349]:     log global
Jan 20 15:15:49 compute-1 ovn_metadata_agent[140349]:     mode http
Jan 20 15:15:49 compute-1 ovn_metadata_agent[140349]:     option httplog
Jan 20 15:15:49 compute-1 ovn_metadata_agent[140349]:     option dontlognull
Jan 20 15:15:49 compute-1 ovn_metadata_agent[140349]:     option http-server-close
Jan 20 15:15:49 compute-1 ovn_metadata_agent[140349]:     option forwardfor
Jan 20 15:15:49 compute-1 ovn_metadata_agent[140349]:     retries                 3
Jan 20 15:15:49 compute-1 ovn_metadata_agent[140349]:     timeout http-request    30s
Jan 20 15:15:49 compute-1 ovn_metadata_agent[140349]:     timeout connect         30s
Jan 20 15:15:49 compute-1 ovn_metadata_agent[140349]:     timeout client          32s
Jan 20 15:15:49 compute-1 ovn_metadata_agent[140349]:     timeout server          32s
Jan 20 15:15:49 compute-1 ovn_metadata_agent[140349]:     timeout http-keep-alive 30s
Jan 20 15:15:49 compute-1 ovn_metadata_agent[140349]: 
Jan 20 15:15:49 compute-1 ovn_metadata_agent[140349]: 
Jan 20 15:15:49 compute-1 ovn_metadata_agent[140349]: listen listener
Jan 20 15:15:49 compute-1 ovn_metadata_agent[140349]:     bind 169.254.169.254:80
Jan 20 15:15:49 compute-1 ovn_metadata_agent[140349]:     server metadata /var/lib/neutron/metadata_proxy
Jan 20 15:15:49 compute-1 ovn_metadata_agent[140349]:     http-request add-header X-OVN-Network-ID be008398-8f36-4967-9cc8-6412553c79f3
Jan 20 15:15:49 compute-1 ovn_metadata_agent[140349]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 20 15:15:49 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:15:49.311 140354 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-be008398-8f36-4967-9cc8-6412553c79f3', 'env', 'PROCESS_TAG=haproxy-be008398-8f36-4967-9cc8-6412553c79f3', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/be008398-8f36-4967-9cc8-6412553c79f3.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 20 15:15:49 compute-1 nova_compute[225855]: 2026-01-20 15:15:49.322 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:15:49 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:15:49 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:15:49 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:15:49.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:15:49 compute-1 ceph-mon[81775]: pgmap v2795: 321 pgs: 321 active+clean; 645 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 2.7 MiB/s wr, 181 op/s
Jan 20 15:15:49 compute-1 sudo[304949]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:15:49 compute-1 sudo[304949]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:15:49 compute-1 sudo[304949]: pam_unix(sudo:session): session closed for user root
Jan 20 15:15:49 compute-1 nova_compute[225855]: 2026-01-20 15:15:49.456 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768922149.4554558, b5656c1b-5ac7-4b93-a25d-420e1e294678 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:15:49 compute-1 nova_compute[225855]: 2026-01-20 15:15:49.457 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: b5656c1b-5ac7-4b93-a25d-420e1e294678] VM Started (Lifecycle Event)
Jan 20 15:15:49 compute-1 sudo[304982]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:15:49 compute-1 sudo[304982]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:15:49 compute-1 sudo[304982]: pam_unix(sudo:session): session closed for user root
Jan 20 15:15:49 compute-1 nova_compute[225855]: 2026-01-20 15:15:49.476 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: b5656c1b-5ac7-4b93-a25d-420e1e294678] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:15:49 compute-1 nova_compute[225855]: 2026-01-20 15:15:49.481 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768922149.4555695, b5656c1b-5ac7-4b93-a25d-420e1e294678 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:15:49 compute-1 nova_compute[225855]: 2026-01-20 15:15:49.481 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: b5656c1b-5ac7-4b93-a25d-420e1e294678] VM Paused (Lifecycle Event)
Jan 20 15:15:49 compute-1 nova_compute[225855]: 2026-01-20 15:15:49.507 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: b5656c1b-5ac7-4b93-a25d-420e1e294678] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:15:49 compute-1 nova_compute[225855]: 2026-01-20 15:15:49.511 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: b5656c1b-5ac7-4b93-a25d-420e1e294678] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 15:15:49 compute-1 sudo[305008]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:15:49 compute-1 sudo[305008]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:15:49 compute-1 sudo[305008]: pam_unix(sudo:session): session closed for user root
Jan 20 15:15:49 compute-1 nova_compute[225855]: 2026-01-20 15:15:49.531 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: b5656c1b-5ac7-4b93-a25d-420e1e294678] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 15:15:49 compute-1 sudo[305033]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 20 15:15:49 compute-1 sudo[305033]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:15:49 compute-1 nova_compute[225855]: 2026-01-20 15:15:49.759 225859 DEBUG nova.network.neutron [req-077d4813-c385-4e23-9c9b-e6f2eae66aed req-b7052b2a-aba9-4bfd-96f1-6621372e8d7c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b5656c1b-5ac7-4b93-a25d-420e1e294678] Updated VIF entry in instance network info cache for port ebbe6083-de9d-43ca-9ab2-cf306ea0be4d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 15:15:49 compute-1 nova_compute[225855]: 2026-01-20 15:15:49.760 225859 DEBUG nova.network.neutron [req-077d4813-c385-4e23-9c9b-e6f2eae66aed req-b7052b2a-aba9-4bfd-96f1-6621372e8d7c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b5656c1b-5ac7-4b93-a25d-420e1e294678] Updating instance_info_cache with network_info: [{"id": "ebbe6083-de9d-43ca-9ab2-cf306ea0be4d", "address": "fa:16:3e:a9:77:ea", "network": {"id": "be008398-8f36-4967-9cc8-6412553c79f3", "bridge": "br-int", "label": "tempest-network-smoke--259749923", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1ed5feeeafe7448a8efb47ab975b0ead", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapebbe6083-de", "ovs_interfaceid": "ebbe6083-de9d-43ca-9ab2-cf306ea0be4d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:15:49 compute-1 podman[305081]: 2026-01-20 15:15:49.772695467 +0000 UTC m=+0.072191410 container create ab35cb6b69ea79e9a1b020ef3386e02f717ac202f0cbeea2b52a921e4f6d2e7f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-be008398-8f36-4967-9cc8-6412553c79f3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true)
Jan 20 15:15:49 compute-1 nova_compute[225855]: 2026-01-20 15:15:49.779 225859 DEBUG oslo_concurrency.lockutils [req-077d4813-c385-4e23-9c9b-e6f2eae66aed req-b7052b2a-aba9-4bfd-96f1-6621372e8d7c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-b5656c1b-5ac7-4b93-a25d-420e1e294678" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:15:49 compute-1 systemd[1]: Started libpod-conmon-ab35cb6b69ea79e9a1b020ef3386e02f717ac202f0cbeea2b52a921e4f6d2e7f.scope.
Jan 20 15:15:49 compute-1 podman[305081]: 2026-01-20 15:15:49.738560818 +0000 UTC m=+0.038056791 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 20 15:15:49 compute-1 systemd[1]: Started libcrun container.
Jan 20 15:15:49 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62df8bc671769c9f3b38059ef802cd2fd43ec5ee8411e92bf650967517928ab7/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 20 15:15:49 compute-1 podman[305081]: 2026-01-20 15:15:49.88172103 +0000 UTC m=+0.181216993 container init ab35cb6b69ea79e9a1b020ef3386e02f717ac202f0cbeea2b52a921e4f6d2e7f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-be008398-8f36-4967-9cc8-6412553c79f3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 20 15:15:49 compute-1 podman[305081]: 2026-01-20 15:15:49.889438499 +0000 UTC m=+0.188934442 container start ab35cb6b69ea79e9a1b020ef3386e02f717ac202f0cbeea2b52a921e4f6d2e7f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-be008398-8f36-4967-9cc8-6412553c79f3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS)
Jan 20 15:15:49 compute-1 neutron-haproxy-ovnmeta-be008398-8f36-4967-9cc8-6412553c79f3[305107]: [NOTICE]   (305112) : New worker (305116) forked
Jan 20 15:15:49 compute-1 neutron-haproxy-ovnmeta-be008398-8f36-4967-9cc8-6412553c79f3[305107]: [NOTICE]   (305112) : Loading success.
Jan 20 15:15:50 compute-1 sudo[305033]: pam_unix(sudo:session): session closed for user root
Jan 20 15:15:50 compute-1 nova_compute[225855]: 2026-01-20 15:15:50.246 225859 DEBUG nova.compute.manager [req-1fca2a33-5a1b-4c85-9f99-aac2b77e1378 req-f60cf546-c7ab-444d-95eb-ff92e70a6215 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b5656c1b-5ac7-4b93-a25d-420e1e294678] Received event network-vif-plugged-ebbe6083-de9d-43ca-9ab2-cf306ea0be4d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:15:50 compute-1 nova_compute[225855]: 2026-01-20 15:15:50.246 225859 DEBUG oslo_concurrency.lockutils [req-1fca2a33-5a1b-4c85-9f99-aac2b77e1378 req-f60cf546-c7ab-444d-95eb-ff92e70a6215 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "b5656c1b-5ac7-4b93-a25d-420e1e294678-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:15:50 compute-1 nova_compute[225855]: 2026-01-20 15:15:50.247 225859 DEBUG oslo_concurrency.lockutils [req-1fca2a33-5a1b-4c85-9f99-aac2b77e1378 req-f60cf546-c7ab-444d-95eb-ff92e70a6215 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "b5656c1b-5ac7-4b93-a25d-420e1e294678-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:15:50 compute-1 nova_compute[225855]: 2026-01-20 15:15:50.247 225859 DEBUG oslo_concurrency.lockutils [req-1fca2a33-5a1b-4c85-9f99-aac2b77e1378 req-f60cf546-c7ab-444d-95eb-ff92e70a6215 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "b5656c1b-5ac7-4b93-a25d-420e1e294678-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:15:50 compute-1 nova_compute[225855]: 2026-01-20 15:15:50.247 225859 DEBUG nova.compute.manager [req-1fca2a33-5a1b-4c85-9f99-aac2b77e1378 req-f60cf546-c7ab-444d-95eb-ff92e70a6215 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b5656c1b-5ac7-4b93-a25d-420e1e294678] Processing event network-vif-plugged-ebbe6083-de9d-43ca-9ab2-cf306ea0be4d _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 20 15:15:50 compute-1 nova_compute[225855]: 2026-01-20 15:15:50.248 225859 DEBUG nova.compute.manager [None req-bbd8c924-2ea1-4107-b500-c19534f19b83 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: b5656c1b-5ac7-4b93-a25d-420e1e294678] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 20 15:15:50 compute-1 nova_compute[225855]: 2026-01-20 15:15:50.253 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768922150.2529113, b5656c1b-5ac7-4b93-a25d-420e1e294678 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:15:50 compute-1 nova_compute[225855]: 2026-01-20 15:15:50.253 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: b5656c1b-5ac7-4b93-a25d-420e1e294678] VM Resumed (Lifecycle Event)
Jan 20 15:15:50 compute-1 nova_compute[225855]: 2026-01-20 15:15:50.255 225859 DEBUG nova.virt.libvirt.driver [None req-bbd8c924-2ea1-4107-b500-c19534f19b83 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: b5656c1b-5ac7-4b93-a25d-420e1e294678] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 20 15:15:50 compute-1 nova_compute[225855]: 2026-01-20 15:15:50.259 225859 INFO nova.virt.libvirt.driver [-] [instance: b5656c1b-5ac7-4b93-a25d-420e1e294678] Instance spawned successfully.
Jan 20 15:15:50 compute-1 nova_compute[225855]: 2026-01-20 15:15:50.260 225859 DEBUG nova.virt.libvirt.driver [None req-bbd8c924-2ea1-4107-b500-c19534f19b83 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: b5656c1b-5ac7-4b93-a25d-420e1e294678] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 20 15:15:50 compute-1 nova_compute[225855]: 2026-01-20 15:15:50.275 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: b5656c1b-5ac7-4b93-a25d-420e1e294678] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:15:50 compute-1 nova_compute[225855]: 2026-01-20 15:15:50.278 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: b5656c1b-5ac7-4b93-a25d-420e1e294678] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 15:15:50 compute-1 nova_compute[225855]: 2026-01-20 15:15:50.286 225859 DEBUG nova.virt.libvirt.driver [None req-bbd8c924-2ea1-4107-b500-c19534f19b83 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: b5656c1b-5ac7-4b93-a25d-420e1e294678] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:15:50 compute-1 nova_compute[225855]: 2026-01-20 15:15:50.286 225859 DEBUG nova.virt.libvirt.driver [None req-bbd8c924-2ea1-4107-b500-c19534f19b83 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: b5656c1b-5ac7-4b93-a25d-420e1e294678] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:15:50 compute-1 nova_compute[225855]: 2026-01-20 15:15:50.287 225859 DEBUG nova.virt.libvirt.driver [None req-bbd8c924-2ea1-4107-b500-c19534f19b83 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: b5656c1b-5ac7-4b93-a25d-420e1e294678] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:15:50 compute-1 nova_compute[225855]: 2026-01-20 15:15:50.287 225859 DEBUG nova.virt.libvirt.driver [None req-bbd8c924-2ea1-4107-b500-c19534f19b83 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: b5656c1b-5ac7-4b93-a25d-420e1e294678] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:15:50 compute-1 nova_compute[225855]: 2026-01-20 15:15:50.288 225859 DEBUG nova.virt.libvirt.driver [None req-bbd8c924-2ea1-4107-b500-c19534f19b83 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: b5656c1b-5ac7-4b93-a25d-420e1e294678] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:15:50 compute-1 nova_compute[225855]: 2026-01-20 15:15:50.288 225859 DEBUG nova.virt.libvirt.driver [None req-bbd8c924-2ea1-4107-b500-c19534f19b83 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: b5656c1b-5ac7-4b93-a25d-420e1e294678] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:15:50 compute-1 nova_compute[225855]: 2026-01-20 15:15:50.370 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: b5656c1b-5ac7-4b93-a25d-420e1e294678] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 15:15:50 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:15:50 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 15:15:50 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:15:50 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 15:15:50 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 15:15:50 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:15:50 compute-1 nova_compute[225855]: 2026-01-20 15:15:50.437 225859 INFO nova.compute.manager [None req-bbd8c924-2ea1-4107-b500-c19534f19b83 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: b5656c1b-5ac7-4b93-a25d-420e1e294678] Took 9.46 seconds to spawn the instance on the hypervisor.
Jan 20 15:15:50 compute-1 nova_compute[225855]: 2026-01-20 15:15:50.438 225859 DEBUG nova.compute.manager [None req-bbd8c924-2ea1-4107-b500-c19534f19b83 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: b5656c1b-5ac7-4b93-a25d-420e1e294678] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:15:50 compute-1 nova_compute[225855]: 2026-01-20 15:15:50.494 225859 INFO nova.compute.manager [None req-bbd8c924-2ea1-4107-b500-c19534f19b83 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: b5656c1b-5ac7-4b93-a25d-420e1e294678] Took 10.50 seconds to build instance.
Jan 20 15:15:50 compute-1 nova_compute[225855]: 2026-01-20 15:15:50.522 225859 DEBUG oslo_concurrency.lockutils [None req-bbd8c924-2ea1-4107-b500-c19534f19b83 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Lock "b5656c1b-5ac7-4b93-a25d-420e1e294678" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.628s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:15:51 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:15:51 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:15:51 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:15:51.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:15:51 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:15:51 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:15:51 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:15:51.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:15:51 compute-1 ceph-mon[81775]: pgmap v2796: 321 pgs: 321 active+clean; 596 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.0 MiB/s rd, 2.7 MiB/s wr, 219 op/s
Jan 20 15:15:52 compute-1 nova_compute[225855]: 2026-01-20 15:15:52.473 225859 DEBUG nova.compute.manager [req-129a25d7-9b39-4079-8a28-fcb383acee96 req-7db61976-0aa0-43d1-ba1b-60ed67796bb3 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b5656c1b-5ac7-4b93-a25d-420e1e294678] Received event network-vif-plugged-ebbe6083-de9d-43ca-9ab2-cf306ea0be4d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:15:52 compute-1 nova_compute[225855]: 2026-01-20 15:15:52.474 225859 DEBUG oslo_concurrency.lockutils [req-129a25d7-9b39-4079-8a28-fcb383acee96 req-7db61976-0aa0-43d1-ba1b-60ed67796bb3 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "b5656c1b-5ac7-4b93-a25d-420e1e294678-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:15:52 compute-1 nova_compute[225855]: 2026-01-20 15:15:52.474 225859 DEBUG oslo_concurrency.lockutils [req-129a25d7-9b39-4079-8a28-fcb383acee96 req-7db61976-0aa0-43d1-ba1b-60ed67796bb3 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "b5656c1b-5ac7-4b93-a25d-420e1e294678-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:15:52 compute-1 nova_compute[225855]: 2026-01-20 15:15:52.474 225859 DEBUG oslo_concurrency.lockutils [req-129a25d7-9b39-4079-8a28-fcb383acee96 req-7db61976-0aa0-43d1-ba1b-60ed67796bb3 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "b5656c1b-5ac7-4b93-a25d-420e1e294678-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:15:52 compute-1 nova_compute[225855]: 2026-01-20 15:15:52.474 225859 DEBUG nova.compute.manager [req-129a25d7-9b39-4079-8a28-fcb383acee96 req-7db61976-0aa0-43d1-ba1b-60ed67796bb3 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b5656c1b-5ac7-4b93-a25d-420e1e294678] No waiting events found dispatching network-vif-plugged-ebbe6083-de9d-43ca-9ab2-cf306ea0be4d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:15:52 compute-1 nova_compute[225855]: 2026-01-20 15:15:52.475 225859 WARNING nova.compute.manager [req-129a25d7-9b39-4079-8a28-fcb383acee96 req-7db61976-0aa0-43d1-ba1b-60ed67796bb3 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b5656c1b-5ac7-4b93-a25d-420e1e294678] Received unexpected event network-vif-plugged-ebbe6083-de9d-43ca-9ab2-cf306ea0be4d for instance with vm_state active and task_state None.
Jan 20 15:15:52 compute-1 nova_compute[225855]: 2026-01-20 15:15:52.747 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:15:52 compute-1 nova_compute[225855]: 2026-01-20 15:15:52.931 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:15:53 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:15:53 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:15:53 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:15:53.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:15:53 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:15:53 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:15:53 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:15:53.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:15:53 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e404 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:15:53 compute-1 ovn_controller[130490]: 2026-01-20T15:15:53Z|00842|binding|INFO|Releasing lport f3fd8b5d-b152-40f2-b571-88de4b49c77e from this chassis (sb_readonly=0)
Jan 20 15:15:53 compute-1 ovn_controller[130490]: 2026-01-20T15:15:53Z|00843|binding|INFO|Releasing lport b20b0e27-0b08-4316-b6df-6784416f44c0 from this chassis (sb_readonly=0)
Jan 20 15:15:53 compute-1 ceph-mon[81775]: pgmap v2797: 321 pgs: 321 active+clean; 564 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.4 MiB/s rd, 2.2 MiB/s wr, 187 op/s
Jan 20 15:15:53 compute-1 nova_compute[225855]: 2026-01-20 15:15:53.655 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:15:53 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e405 e405: 3 total, 3 up, 3 in
Jan 20 15:15:54 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:15:54.165 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5ffd4ac3-9266-4927-98ad-20a17782c725, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '61'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:15:54 compute-1 nova_compute[225855]: 2026-01-20 15:15:54.340 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:15:54 compute-1 nova_compute[225855]: 2026-01-20 15:15:54.340 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 15:15:54 compute-1 ceph-mon[81775]: osdmap e405: 3 total, 3 up, 3 in
Jan 20 15:15:54 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/2040095600' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:15:55 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:15:55 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:15:55 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:15:55.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:15:55 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:15:55 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:15:55 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:15:55.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:15:55 compute-1 ceph-mon[81775]: pgmap v2799: 321 pgs: 2 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 317 active+clean; 564 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.6 MiB/s rd, 20 KiB/s wr, 116 op/s
Jan 20 15:15:56 compute-1 sudo[305142]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:15:56 compute-1 sudo[305142]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:15:56 compute-1 sudo[305142]: pam_unix(sudo:session): session closed for user root
Jan 20 15:15:56 compute-1 sudo[305167]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 15:15:56 compute-1 sudo[305167]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:15:56 compute-1 sudo[305167]: pam_unix(sudo:session): session closed for user root
Jan 20 15:15:56 compute-1 ceph-osd[79119]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #54. Immutable memtables: 10.
Jan 20 15:15:57 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:15:57 compute-1 nova_compute[225855]: 2026-01-20 15:15:57.067 225859 DEBUG nova.compute.manager [req-0686ac1a-8332-48ba-8d0c-3e4a33aa6e0f req-383c81a0-ff9a-47f4-8564-e63c5d0dd20c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b5656c1b-5ac7-4b93-a25d-420e1e294678] Received event network-changed-ebbe6083-de9d-43ca-9ab2-cf306ea0be4d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:15:57 compute-1 nova_compute[225855]: 2026-01-20 15:15:57.067 225859 DEBUG nova.compute.manager [req-0686ac1a-8332-48ba-8d0c-3e4a33aa6e0f req-383c81a0-ff9a-47f4-8564-e63c5d0dd20c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b5656c1b-5ac7-4b93-a25d-420e1e294678] Refreshing instance network info cache due to event network-changed-ebbe6083-de9d-43ca-9ab2-cf306ea0be4d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 15:15:57 compute-1 nova_compute[225855]: 2026-01-20 15:15:57.068 225859 DEBUG oslo_concurrency.lockutils [req-0686ac1a-8332-48ba-8d0c-3e4a33aa6e0f req-383c81a0-ff9a-47f4-8564-e63c5d0dd20c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-b5656c1b-5ac7-4b93-a25d-420e1e294678" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:15:57 compute-1 nova_compute[225855]: 2026-01-20 15:15:57.068 225859 DEBUG oslo_concurrency.lockutils [req-0686ac1a-8332-48ba-8d0c-3e4a33aa6e0f req-383c81a0-ff9a-47f4-8564-e63c5d0dd20c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-b5656c1b-5ac7-4b93-a25d-420e1e294678" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:15:57 compute-1 nova_compute[225855]: 2026-01-20 15:15:57.068 225859 DEBUG nova.network.neutron [req-0686ac1a-8332-48ba-8d0c-3e4a33aa6e0f req-383c81a0-ff9a-47f4-8564-e63c5d0dd20c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b5656c1b-5ac7-4b93-a25d-420e1e294678] Refreshing network info cache for port ebbe6083-de9d-43ca-9ab2-cf306ea0be4d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 15:15:57 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:15:57 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:15:57.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:15:57 compute-1 ovn_controller[130490]: 2026-01-20T15:15:57Z|00097|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:99:5e:ed 10.100.0.3
Jan 20 15:15:57 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:15:57 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:15:57 compute-1 ceph-mon[81775]: pgmap v2800: 321 pgs: 2 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 317 active+clean; 564 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.4 MiB/s rd, 27 KiB/s wr, 148 op/s
Jan 20 15:15:57 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:15:57 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:15:57 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:15:57.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:15:57 compute-1 nova_compute[225855]: 2026-01-20 15:15:57.750 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:15:57 compute-1 nova_compute[225855]: 2026-01-20 15:15:57.957 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:15:58 compute-1 nova_compute[225855]: 2026-01-20 15:15:58.340 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:15:58 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e405 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:15:59 compute-1 podman[305193]: 2026-01-20 15:15:59.028196337 +0000 UTC m=+0.067241329 container health_status 533cd8e0c3b8b7f910f15cef49e7f879ab46c12ce646a35628c15f13725eb500 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 20 15:15:59 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:15:59 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:15:59 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:15:59.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:15:59 compute-1 nova_compute[225855]: 2026-01-20 15:15:59.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:15:59 compute-1 nova_compute[225855]: 2026-01-20 15:15:59.340 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 15:15:59 compute-1 nova_compute[225855]: 2026-01-20 15:15:59.340 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 20 15:15:59 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:15:59 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:15:59 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:15:59.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:15:59 compute-1 ceph-mon[81775]: pgmap v2801: 321 pgs: 2 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 317 active+clean; 564 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.3 MiB/s rd, 26 KiB/s wr, 144 op/s
Jan 20 15:15:59 compute-1 nova_compute[225855]: 2026-01-20 15:15:59.762 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "refresh_cache-f1ded131-d9a3-4e93-ad99-53ee2695d5c8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:15:59 compute-1 nova_compute[225855]: 2026-01-20 15:15:59.763 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquired lock "refresh_cache-f1ded131-d9a3-4e93-ad99-53ee2695d5c8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:15:59 compute-1 nova_compute[225855]: 2026-01-20 15:15:59.763 225859 DEBUG nova.network.neutron [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] [instance: f1ded131-d9a3-4e93-ad99-53ee2695d5c8] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 20 15:15:59 compute-1 nova_compute[225855]: 2026-01-20 15:15:59.763 225859 DEBUG nova.objects.instance [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lazy-loading 'info_cache' on Instance uuid f1ded131-d9a3-4e93-ad99-53ee2695d5c8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:16:00 compute-1 ceph-mon[81775]: pgmap v2802: 321 pgs: 321 active+clean; 564 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.7 MiB/s rd, 14 KiB/s wr, 133 op/s
Jan 20 15:16:00 compute-1 nova_compute[225855]: 2026-01-20 15:16:00.905 225859 DEBUG nova.network.neutron [req-0686ac1a-8332-48ba-8d0c-3e4a33aa6e0f req-383c81a0-ff9a-47f4-8564-e63c5d0dd20c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b5656c1b-5ac7-4b93-a25d-420e1e294678] Updated VIF entry in instance network info cache for port ebbe6083-de9d-43ca-9ab2-cf306ea0be4d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 15:16:00 compute-1 nova_compute[225855]: 2026-01-20 15:16:00.908 225859 DEBUG nova.network.neutron [req-0686ac1a-8332-48ba-8d0c-3e4a33aa6e0f req-383c81a0-ff9a-47f4-8564-e63c5d0dd20c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b5656c1b-5ac7-4b93-a25d-420e1e294678] Updating instance_info_cache with network_info: [{"id": "ebbe6083-de9d-43ca-9ab2-cf306ea0be4d", "address": "fa:16:3e:a9:77:ea", "network": {"id": "be008398-8f36-4967-9cc8-6412553c79f3", "bridge": "br-int", "label": "tempest-network-smoke--259749923", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1ed5feeeafe7448a8efb47ab975b0ead", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapebbe6083-de", "ovs_interfaceid": "ebbe6083-de9d-43ca-9ab2-cf306ea0be4d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:16:00 compute-1 nova_compute[225855]: 2026-01-20 15:16:00.947 225859 DEBUG oslo_concurrency.lockutils [req-0686ac1a-8332-48ba-8d0c-3e4a33aa6e0f req-383c81a0-ff9a-47f4-8564-e63c5d0dd20c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-b5656c1b-5ac7-4b93-a25d-420e1e294678" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:16:01 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:16:01 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:16:01 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:16:01.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:16:01 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:16:01 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:16:01 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:16:01.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:16:01 compute-1 sudo[305215]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:16:01 compute-1 sudo[305215]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:16:01 compute-1 sudo[305215]: pam_unix(sudo:session): session closed for user root
Jan 20 15:16:01 compute-1 sudo[305240]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:16:01 compute-1 sudo[305240]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:16:01 compute-1 sudo[305240]: pam_unix(sudo:session): session closed for user root
Jan 20 15:16:01 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e406 e406: 3 total, 3 up, 3 in
Jan 20 15:16:01 compute-1 ovn_controller[130490]: 2026-01-20T15:16:01Z|00844|binding|INFO|Releasing lport f3fd8b5d-b152-40f2-b571-88de4b49c77e from this chassis (sb_readonly=0)
Jan 20 15:16:01 compute-1 ovn_controller[130490]: 2026-01-20T15:16:01Z|00845|binding|INFO|Releasing lport b20b0e27-0b08-4316-b6df-6784416f44c0 from this chassis (sb_readonly=0)
Jan 20 15:16:02 compute-1 nova_compute[225855]: 2026-01-20 15:16:02.032 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:16:02 compute-1 nova_compute[225855]: 2026-01-20 15:16:02.413 225859 DEBUG nova.network.neutron [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] [instance: f1ded131-d9a3-4e93-ad99-53ee2695d5c8] Updating instance_info_cache with network_info: [{"id": "0e93d1de-671e-4e37-8e79-44bed7981254", "address": "fa:16:3e:99:5e:ed", "network": {"id": "c1f4a971-0bd7-41ce-bdf6-5acb2b1b4bab", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1423306001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.179", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fff727019f86407498e83d7948d54962", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0e93d1de-67", "ovs_interfaceid": "0e93d1de-671e-4e37-8e79-44bed7981254", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:16:02 compute-1 nova_compute[225855]: 2026-01-20 15:16:02.482 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Releasing lock "refresh_cache-f1ded131-d9a3-4e93-ad99-53ee2695d5c8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:16:02 compute-1 nova_compute[225855]: 2026-01-20 15:16:02.483 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] [instance: f1ded131-d9a3-4e93-ad99-53ee2695d5c8] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 20 15:16:02 compute-1 nova_compute[225855]: 2026-01-20 15:16:02.483 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:16:02 compute-1 nova_compute[225855]: 2026-01-20 15:16:02.484 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:16:02 compute-1 nova_compute[225855]: 2026-01-20 15:16:02.484 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:16:02 compute-1 nova_compute[225855]: 2026-01-20 15:16:02.484 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:16:02 compute-1 nova_compute[225855]: 2026-01-20 15:16:02.539 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:16:02 compute-1 nova_compute[225855]: 2026-01-20 15:16:02.540 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:16:02 compute-1 nova_compute[225855]: 2026-01-20 15:16:02.540 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:16:02 compute-1 nova_compute[225855]: 2026-01-20 15:16:02.541 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 15:16:02 compute-1 nova_compute[225855]: 2026-01-20 15:16:02.541 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:16:02 compute-1 nova_compute[225855]: 2026-01-20 15:16:02.752 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:16:02 compute-1 ceph-mon[81775]: osdmap e406: 3 total, 3 up, 3 in
Jan 20 15:16:02 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/1509921417' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:16:02 compute-1 ceph-mon[81775]: pgmap v2804: 321 pgs: 321 active+clean; 564 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.3 MiB/s rd, 28 KiB/s wr, 161 op/s
Jan 20 15:16:02 compute-1 nova_compute[225855]: 2026-01-20 15:16:02.958 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:16:03 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:16:03 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:16:03 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:16:03.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:16:03 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:16:03 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2411011844' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:16:03 compute-1 nova_compute[225855]: 2026-01-20 15:16:03.158 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.617s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:16:03 compute-1 nova_compute[225855]: 2026-01-20 15:16:03.250 225859 DEBUG nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] skipping disk for instance-000000bb as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 15:16:03 compute-1 nova_compute[225855]: 2026-01-20 15:16:03.251 225859 DEBUG nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] skipping disk for instance-000000bb as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 15:16:03 compute-1 nova_compute[225855]: 2026-01-20 15:16:03.251 225859 DEBUG nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] skipping disk for instance-000000bb as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 15:16:03 compute-1 nova_compute[225855]: 2026-01-20 15:16:03.254 225859 DEBUG nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] skipping disk for instance-000000be as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 15:16:03 compute-1 nova_compute[225855]: 2026-01-20 15:16:03.254 225859 DEBUG nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] skipping disk for instance-000000be as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 15:16:03 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:16:03 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:16:03 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:16:03.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:16:03 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e406 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:16:03 compute-1 nova_compute[225855]: 2026-01-20 15:16:03.403 225859 WARNING nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 15:16:03 compute-1 nova_compute[225855]: 2026-01-20 15:16:03.404 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=3873MB free_disk=20.78484344482422GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 15:16:03 compute-1 nova_compute[225855]: 2026-01-20 15:16:03.404 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:16:03 compute-1 nova_compute[225855]: 2026-01-20 15:16:03.404 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:16:03 compute-1 nova_compute[225855]: 2026-01-20 15:16:03.527 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Instance f1ded131-d9a3-4e93-ad99-53ee2695d5c8 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 192, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 20 15:16:03 compute-1 nova_compute[225855]: 2026-01-20 15:16:03.527 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Instance b5656c1b-5ac7-4b93-a25d-420e1e294678 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 20 15:16:03 compute-1 nova_compute[225855]: 2026-01-20 15:16:03.527 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 15:16:03 compute-1 nova_compute[225855]: 2026-01-20 15:16:03.528 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=832MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 15:16:03 compute-1 nova_compute[225855]: 2026-01-20 15:16:03.538 225859 DEBUG nova.compute.manager [req-d35e570b-3627-4498-80c7-1f9ae9e1c065 req-fbe58f85-a193-4be0-b7d3-31164e136147 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f1ded131-d9a3-4e93-ad99-53ee2695d5c8] Received event network-changed-0e93d1de-671e-4e37-8e79-44bed7981254 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:16:03 compute-1 nova_compute[225855]: 2026-01-20 15:16:03.539 225859 DEBUG nova.compute.manager [req-d35e570b-3627-4498-80c7-1f9ae9e1c065 req-fbe58f85-a193-4be0-b7d3-31164e136147 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f1ded131-d9a3-4e93-ad99-53ee2695d5c8] Refreshing instance network info cache due to event network-changed-0e93d1de-671e-4e37-8e79-44bed7981254. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 15:16:03 compute-1 nova_compute[225855]: 2026-01-20 15:16:03.539 225859 DEBUG oslo_concurrency.lockutils [req-d35e570b-3627-4498-80c7-1f9ae9e1c065 req-fbe58f85-a193-4be0-b7d3-31164e136147 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-f1ded131-d9a3-4e93-ad99-53ee2695d5c8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:16:03 compute-1 nova_compute[225855]: 2026-01-20 15:16:03.539 225859 DEBUG oslo_concurrency.lockutils [req-d35e570b-3627-4498-80c7-1f9ae9e1c065 req-fbe58f85-a193-4be0-b7d3-31164e136147 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-f1ded131-d9a3-4e93-ad99-53ee2695d5c8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:16:03 compute-1 nova_compute[225855]: 2026-01-20 15:16:03.539 225859 DEBUG nova.network.neutron [req-d35e570b-3627-4498-80c7-1f9ae9e1c065 req-fbe58f85-a193-4be0-b7d3-31164e136147 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f1ded131-d9a3-4e93-ad99-53ee2695d5c8] Refreshing network info cache for port 0e93d1de-671e-4e37-8e79-44bed7981254 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 15:16:03 compute-1 nova_compute[225855]: 2026-01-20 15:16:03.804 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:16:03 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/2411011844' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:16:03 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/3815396852' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:16:04 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:16:04 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4156676619' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:16:04 compute-1 nova_compute[225855]: 2026-01-20 15:16:04.290 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:16:04 compute-1 nova_compute[225855]: 2026-01-20 15:16:04.298 225859 DEBUG nova.compute.provider_tree [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:16:04 compute-1 nova_compute[225855]: 2026-01-20 15:16:04.335 225859 DEBUG nova.scheduler.client.report [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:16:04 compute-1 nova_compute[225855]: 2026-01-20 15:16:04.390 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 15:16:04 compute-1 nova_compute[225855]: 2026-01-20 15:16:04.390 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.986s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:16:04 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/4156676619' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:16:04 compute-1 ceph-mon[81775]: pgmap v2805: 321 pgs: 321 active+clean; 569 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 727 KiB/s wr, 135 op/s
Jan 20 15:16:05 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:16:05 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:16:05 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:16:05.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:16:05 compute-1 ovn_controller[130490]: 2026-01-20T15:16:05Z|00098|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:a9:77:ea 10.100.0.5
Jan 20 15:16:05 compute-1 ovn_controller[130490]: 2026-01-20T15:16:05Z|00099|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:a9:77:ea 10.100.0.5
Jan 20 15:16:05 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:16:05 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:16:05 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:16:05.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:16:06 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/3931726910' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:16:07 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/2847299956' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:16:07 compute-1 ceph-mon[81775]: pgmap v2806: 321 pgs: 321 active+clean; 598 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.0 MiB/s rd, 2.6 MiB/s wr, 133 op/s
Jan 20 15:16:07 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:16:07 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:16:07 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:16:07.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:16:07 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:16:07 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:16:07 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:16:07.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:16:07 compute-1 nova_compute[225855]: 2026-01-20 15:16:07.590 225859 DEBUG nova.network.neutron [req-d35e570b-3627-4498-80c7-1f9ae9e1c065 req-fbe58f85-a193-4be0-b7d3-31164e136147 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f1ded131-d9a3-4e93-ad99-53ee2695d5c8] Updated VIF entry in instance network info cache for port 0e93d1de-671e-4e37-8e79-44bed7981254. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 15:16:07 compute-1 nova_compute[225855]: 2026-01-20 15:16:07.591 225859 DEBUG nova.network.neutron [req-d35e570b-3627-4498-80c7-1f9ae9e1c065 req-fbe58f85-a193-4be0-b7d3-31164e136147 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f1ded131-d9a3-4e93-ad99-53ee2695d5c8] Updating instance_info_cache with network_info: [{"id": "0e93d1de-671e-4e37-8e79-44bed7981254", "address": "fa:16:3e:99:5e:ed", "network": {"id": "c1f4a971-0bd7-41ce-bdf6-5acb2b1b4bab", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1423306001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.179", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fff727019f86407498e83d7948d54962", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0e93d1de-67", "ovs_interfaceid": "0e93d1de-671e-4e37-8e79-44bed7981254", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:16:07 compute-1 nova_compute[225855]: 2026-01-20 15:16:07.738 225859 DEBUG oslo_concurrency.lockutils [req-d35e570b-3627-4498-80c7-1f9ae9e1c065 req-fbe58f85-a193-4be0-b7d3-31164e136147 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-f1ded131-d9a3-4e93-ad99-53ee2695d5c8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:16:07 compute-1 nova_compute[225855]: 2026-01-20 15:16:07.756 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:16:07 compute-1 nova_compute[225855]: 2026-01-20 15:16:07.960 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:16:08 compute-1 nova_compute[225855]: 2026-01-20 15:16:08.247 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:16:08 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e406 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:16:09 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:16:09 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:16:09 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:16:09.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:16:09 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:16:09 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:16:09 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:16:09.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:16:09 compute-1 ceph-mon[81775]: pgmap v2807: 321 pgs: 321 active+clean; 598 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.0 MiB/s rd, 2.6 MiB/s wr, 133 op/s
Jan 20 15:16:10 compute-1 nova_compute[225855]: 2026-01-20 15:16:10.335 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:16:11 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:16:11 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:16:11 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:16:11.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:16:11 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:16:11 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:16:11 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:16:11.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:16:11 compute-1 ceph-mon[81775]: pgmap v2808: 321 pgs: 321 active+clean; 598 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 615 KiB/s rd, 2.6 MiB/s wr, 99 op/s
Jan 20 15:16:11 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/3136970445' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:16:11 compute-1 nova_compute[225855]: 2026-01-20 15:16:11.902 225859 INFO nova.compute.manager [None req-b1256381-e841-4313-bd8e-e323e10725ac 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: b5656c1b-5ac7-4b93-a25d-420e1e294678] Get console output
Jan 20 15:16:11 compute-1 nova_compute[225855]: 2026-01-20 15:16:11.907 263775 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Jan 20 15:16:12 compute-1 ceph-mon[81775]: pgmap v2809: 321 pgs: 321 active+clean; 599 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 365 KiB/s rd, 2.4 MiB/s wr, 72 op/s
Jan 20 15:16:12 compute-1 nova_compute[225855]: 2026-01-20 15:16:12.759 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:16:12 compute-1 nova_compute[225855]: 2026-01-20 15:16:12.962 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:16:13 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:16:13 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:16:13 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:16:13.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:16:13 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e406 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:16:13 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:16:13 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:16:13 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:16:13.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:16:13 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/1846740269' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:16:13 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/1846740269' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:16:13 compute-1 nova_compute[225855]: 2026-01-20 15:16:13.898 225859 INFO nova.compute.manager [None req-98a0bc9a-d2ef-4c53-8d93-0b20dce27c54 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: b5656c1b-5ac7-4b93-a25d-420e1e294678] Get console output
Jan 20 15:16:13 compute-1 nova_compute[225855]: 2026-01-20 15:16:13.901 263775 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Jan 20 15:16:14 compute-1 podman[305317]: 2026-01-20 15:16:14.039443068 +0000 UTC m=+0.086085233 container health_status 72be95c12e041eb2cd1a16ad70dfa1391d9f03a5f3180a614ead9459dcfcd46f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 15:16:14 compute-1 ceph-mon[81775]: pgmap v2810: 321 pgs: 321 active+clean; 599 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 328 KiB/s rd, 2.2 MiB/s wr, 65 op/s
Jan 20 15:16:15 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:16:15 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:16:15 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:16:15.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:16:15 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:16:15 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:16:15 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:16:15.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:16:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:16:16.438 140354 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:16:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:16:16.438 140354 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:16:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:16:16.439 140354 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:16:17 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:16:17 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:16:17 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:16:17.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:16:17 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:16:17 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:16:17 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:16:17.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:16:17 compute-1 ceph-mon[81775]: pgmap v2811: 321 pgs: 321 active+clean; 599 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 250 KiB/s rd, 1.6 MiB/s wr, 42 op/s
Jan 20 15:16:17 compute-1 nova_compute[225855]: 2026-01-20 15:16:17.770 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:16:17 compute-1 nova_compute[225855]: 2026-01-20 15:16:17.964 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:16:18 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e406 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:16:19 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:16:19 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:16:19 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:16:19.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:16:19 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:16:19 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:16:19 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:16:19.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:16:19 compute-1 ceph-mon[81775]: pgmap v2812: 321 pgs: 321 active+clean; 599 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.3 KiB/s rd, 20 KiB/s wr, 1 op/s
Jan 20 15:16:19 compute-1 nova_compute[225855]: 2026-01-20 15:16:19.683 225859 DEBUG nova.virt.libvirt.driver [None req-e9459132-ba13-41d9-a7cf-f952f09929e6 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] [instance: b5656c1b-5ac7-4b93-a25d-420e1e294678] Check if temp file /var/lib/nova/instances/tmpwqr60y7t exists to indicate shared storage is being used for migration. Exists? False _check_shared_storage_test_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10065
Jan 20 15:16:19 compute-1 nova_compute[225855]: 2026-01-20 15:16:19.685 225859 DEBUG nova.compute.manager [None req-e9459132-ba13-41d9-a7cf-f952f09929e6 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] source check data is LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=18432,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpwqr60y7t',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='b5656c1b-5ac7-4b93-a25d-420e1e294678',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids=<?>,serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) check_can_live_migrate_source /usr/lib/python3.9/site-packages/nova/compute/manager.py:8587
Jan 20 15:16:20 compute-1 ceph-mon[81775]: pgmap v2813: 321 pgs: 321 active+clean; 599 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 4.5 KiB/s rd, 27 KiB/s wr, 3 op/s
Jan 20 15:16:21 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:16:21 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:16:21 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:16:21.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:16:21 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:16:21 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:16:21 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:16:21.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:16:21 compute-1 sudo[305347]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:16:21 compute-1 sudo[305347]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:16:21 compute-1 sudo[305347]: pam_unix(sudo:session): session closed for user root
Jan 20 15:16:21 compute-1 sudo[305372]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:16:21 compute-1 sudo[305372]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:16:21 compute-1 sudo[305372]: pam_unix(sudo:session): session closed for user root
Jan 20 15:16:22 compute-1 nova_compute[225855]: 2026-01-20 15:16:22.771 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:16:22 compute-1 nova_compute[225855]: 2026-01-20 15:16:22.966 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:16:23 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:16:23 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:16:23 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:16:23.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:16:23 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e406 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:16:23 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:16:23 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:16:23 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:16:23.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:16:23 compute-1 ceph-mon[81775]: pgmap v2814: 321 pgs: 321 active+clean; 599 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.8 KiB/s rd, 23 KiB/s wr, 2 op/s
Jan 20 15:16:24 compute-1 nova_compute[225855]: 2026-01-20 15:16:24.751 225859 DEBUG nova.compute.manager [req-baadcd49-f99d-4409-b8cb-1898bcabb6b2 req-116f8276-7526-4b07-bf86-2841ad94df00 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b5656c1b-5ac7-4b93-a25d-420e1e294678] Received event network-vif-unplugged-ebbe6083-de9d-43ca-9ab2-cf306ea0be4d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:16:24 compute-1 nova_compute[225855]: 2026-01-20 15:16:24.751 225859 DEBUG oslo_concurrency.lockutils [req-baadcd49-f99d-4409-b8cb-1898bcabb6b2 req-116f8276-7526-4b07-bf86-2841ad94df00 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "b5656c1b-5ac7-4b93-a25d-420e1e294678-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:16:24 compute-1 nova_compute[225855]: 2026-01-20 15:16:24.752 225859 DEBUG oslo_concurrency.lockutils [req-baadcd49-f99d-4409-b8cb-1898bcabb6b2 req-116f8276-7526-4b07-bf86-2841ad94df00 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "b5656c1b-5ac7-4b93-a25d-420e1e294678-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:16:24 compute-1 nova_compute[225855]: 2026-01-20 15:16:24.752 225859 DEBUG oslo_concurrency.lockutils [req-baadcd49-f99d-4409-b8cb-1898bcabb6b2 req-116f8276-7526-4b07-bf86-2841ad94df00 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "b5656c1b-5ac7-4b93-a25d-420e1e294678-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:16:24 compute-1 nova_compute[225855]: 2026-01-20 15:16:24.752 225859 DEBUG nova.compute.manager [req-baadcd49-f99d-4409-b8cb-1898bcabb6b2 req-116f8276-7526-4b07-bf86-2841ad94df00 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b5656c1b-5ac7-4b93-a25d-420e1e294678] No waiting events found dispatching network-vif-unplugged-ebbe6083-de9d-43ca-9ab2-cf306ea0be4d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:16:24 compute-1 nova_compute[225855]: 2026-01-20 15:16:24.752 225859 DEBUG nova.compute.manager [req-baadcd49-f99d-4409-b8cb-1898bcabb6b2 req-116f8276-7526-4b07-bf86-2841ad94df00 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b5656c1b-5ac7-4b93-a25d-420e1e294678] Received event network-vif-unplugged-ebbe6083-de9d-43ca-9ab2-cf306ea0be4d for instance with task_state migrating. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 20 15:16:25 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:16:25 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:16:25 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:16:25.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:16:25 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:16:25 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:16:25 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:16:25.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:16:25 compute-1 ceph-mon[81775]: pgmap v2815: 321 pgs: 321 active+clean; 599 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.8 KiB/s rd, 23 KiB/s wr, 2 op/s
Jan 20 15:16:25 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/1205900117' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:16:25 compute-1 nova_compute[225855]: 2026-01-20 15:16:25.734 225859 INFO nova.compute.manager [None req-e9459132-ba13-41d9-a7cf-f952f09929e6 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] [instance: b5656c1b-5ac7-4b93-a25d-420e1e294678] Took 5.05 seconds for pre_live_migration on destination host compute-0.ctlplane.example.com.
Jan 20 15:16:25 compute-1 nova_compute[225855]: 2026-01-20 15:16:25.735 225859 DEBUG nova.compute.manager [None req-e9459132-ba13-41d9-a7cf-f952f09929e6 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] [instance: b5656c1b-5ac7-4b93-a25d-420e1e294678] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 20 15:16:25 compute-1 nova_compute[225855]: 2026-01-20 15:16:25.752 225859 DEBUG nova.compute.manager [None req-e9459132-ba13-41d9-a7cf-f952f09929e6 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] live_migration data is LibvirtLiveMigrateData(bdms=[],block_migration=False,disk_available_mb=18432,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpwqr60y7t',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='b5656c1b-5ac7-4b93-a25d-420e1e294678',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=Migration(9b925066-c218-4b07-910d-90dd336bf952),old_vol_attachment_ids={},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=True) _do_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8939
Jan 20 15:16:25 compute-1 nova_compute[225855]: 2026-01-20 15:16:25.755 225859 DEBUG nova.objects.instance [None req-e9459132-ba13-41d9-a7cf-f952f09929e6 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] Lazy-loading 'migration_context' on Instance uuid b5656c1b-5ac7-4b93-a25d-420e1e294678 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:16:25 compute-1 nova_compute[225855]: 2026-01-20 15:16:25.757 225859 DEBUG nova.virt.libvirt.driver [None req-e9459132-ba13-41d9-a7cf-f952f09929e6 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] [instance: b5656c1b-5ac7-4b93-a25d-420e1e294678] Starting monitoring of live migration _live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10639
Jan 20 15:16:25 compute-1 nova_compute[225855]: 2026-01-20 15:16:25.759 225859 DEBUG nova.virt.libvirt.driver [None req-e9459132-ba13-41d9-a7cf-f952f09929e6 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] [instance: b5656c1b-5ac7-4b93-a25d-420e1e294678] Operation thread is still running _live_migration_monitor /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10440
Jan 20 15:16:25 compute-1 nova_compute[225855]: 2026-01-20 15:16:25.759 225859 DEBUG nova.virt.libvirt.driver [None req-e9459132-ba13-41d9-a7cf-f952f09929e6 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] [instance: b5656c1b-5ac7-4b93-a25d-420e1e294678] Migration not running yet _live_migration_monitor /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10449
Jan 20 15:16:25 compute-1 nova_compute[225855]: 2026-01-20 15:16:25.781 225859 DEBUG nova.virt.libvirt.vif [None req-e9459132-ba13-41d9-a7cf-f952f09929e6 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T15:15:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-470752205',display_name='tempest-TestNetworkAdvancedServerOps-server-470752205',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-470752205',id=190,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAmgIhRv7SylyyDoXsoPKcXNSjcMbMCF7/IROo74ZCmTo9LbWE2Sanv271vjV+ounImSggkddfPFTxsQsqOAeGJ63UOB1CVRLYAEgvPLI8ngnO4k9hlNWAKjL0F7Yejx9A==',key_name='tempest-TestNetworkAdvancedServerOps-131135683',keypairs=<?>,launch_index=0,launched_at=2026-01-20T15:15:50Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='1ed5feeeafe7448a8efb47ab975b0ead',ramdisk_id='',reservation_id='r-kjhk913w',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-175282664',owner_user_name='tempest-TestNetworkAdvancedServerOps-175282664-project-member'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T15:15:50Z,user_data=None,user_id='442a7a5cb8ea426a82be9762b262d171',uuid=b5656c1b-5ac7-4b93-a25d-420e1e294678,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ebbe6083-de9d-43ca-9ab2-cf306ea0be4d", "address": "fa:16:3e:a9:77:ea", "network": {"id": "be008398-8f36-4967-9cc8-6412553c79f3", "bridge": "br-int", "label": "tempest-network-smoke--259749923", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": 
[{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1ed5feeeafe7448a8efb47ab975b0ead", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tapebbe6083-de", "ovs_interfaceid": "ebbe6083-de9d-43ca-9ab2-cf306ea0be4d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 20 15:16:25 compute-1 nova_compute[225855]: 2026-01-20 15:16:25.782 225859 DEBUG nova.network.os_vif_util [None req-e9459132-ba13-41d9-a7cf-f952f09929e6 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] Converting VIF {"id": "ebbe6083-de9d-43ca-9ab2-cf306ea0be4d", "address": "fa:16:3e:a9:77:ea", "network": {"id": "be008398-8f36-4967-9cc8-6412553c79f3", "bridge": "br-int", "label": "tempest-network-smoke--259749923", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1ed5feeeafe7448a8efb47ab975b0ead", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tapebbe6083-de", "ovs_interfaceid": "ebbe6083-de9d-43ca-9ab2-cf306ea0be4d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 15:16:25 compute-1 nova_compute[225855]: 2026-01-20 15:16:25.783 225859 DEBUG nova.network.os_vif_util [None req-e9459132-ba13-41d9-a7cf-f952f09929e6 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:a9:77:ea,bridge_name='br-int',has_traffic_filtering=True,id=ebbe6083-de9d-43ca-9ab2-cf306ea0be4d,network=Network(be008398-8f36-4967-9cc8-6412553c79f3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapebbe6083-de') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 15:16:25 compute-1 nova_compute[225855]: 2026-01-20 15:16:25.783 225859 DEBUG nova.virt.libvirt.migration [None req-e9459132-ba13-41d9-a7cf-f952f09929e6 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] [instance: b5656c1b-5ac7-4b93-a25d-420e1e294678] Updating guest XML with vif config: <interface type="ethernet">
Jan 20 15:16:25 compute-1 nova_compute[225855]:   <mac address="fa:16:3e:a9:77:ea"/>
Jan 20 15:16:25 compute-1 nova_compute[225855]:   <model type="virtio"/>
Jan 20 15:16:25 compute-1 nova_compute[225855]:   <driver name="vhost" rx_queue_size="512"/>
Jan 20 15:16:25 compute-1 nova_compute[225855]:   <mtu size="1442"/>
Jan 20 15:16:25 compute-1 nova_compute[225855]:   <target dev="tapebbe6083-de"/>
Jan 20 15:16:25 compute-1 nova_compute[225855]: </interface>
Jan 20 15:16:25 compute-1 nova_compute[225855]:  _update_vif_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:388
Jan 20 15:16:25 compute-1 nova_compute[225855]: 2026-01-20 15:16:25.784 225859 DEBUG nova.virt.libvirt.driver [None req-e9459132-ba13-41d9-a7cf-f952f09929e6 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] [instance: b5656c1b-5ac7-4b93-a25d-420e1e294678] About to invoke the migrate API _live_migration_operation /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10272
Jan 20 15:16:26 compute-1 nova_compute[225855]: 2026-01-20 15:16:26.262 225859 DEBUG nova.virt.libvirt.migration [None req-e9459132-ba13-41d9-a7cf-f952f09929e6 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] [instance: b5656c1b-5ac7-4b93-a25d-420e1e294678] Current None elapsed 0 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512
Jan 20 15:16:26 compute-1 nova_compute[225855]: 2026-01-20 15:16:26.262 225859 INFO nova.virt.libvirt.migration [None req-e9459132-ba13-41d9-a7cf-f952f09929e6 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] [instance: b5656c1b-5ac7-4b93-a25d-420e1e294678] Increasing downtime to 50 ms after 0 sec elapsed time
Jan 20 15:16:26 compute-1 nova_compute[225855]: 2026-01-20 15:16:26.359 225859 INFO nova.virt.libvirt.driver [None req-e9459132-ba13-41d9-a7cf-f952f09929e6 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] [instance: b5656c1b-5ac7-4b93-a25d-420e1e294678] Migration running for 0 secs, memory 100% remaining (bytes processed=0, remaining=0, total=0); disk 100% remaining (bytes processed=0, remaining=0, total=0).
Jan 20 15:16:26 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e407 e407: 3 total, 3 up, 3 in
Jan 20 15:16:26 compute-1 ceph-mon[81775]: pgmap v2816: 321 pgs: 321 active+clean; 599 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.3 KiB/s rd, 11 KiB/s wr, 2 op/s
Jan 20 15:16:26 compute-1 nova_compute[225855]: 2026-01-20 15:16:26.861 225859 DEBUG nova.virt.libvirt.migration [None req-e9459132-ba13-41d9-a7cf-f952f09929e6 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] [instance: b5656c1b-5ac7-4b93-a25d-420e1e294678] Current 50 elapsed 1 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512
Jan 20 15:16:26 compute-1 nova_compute[225855]: 2026-01-20 15:16:26.862 225859 DEBUG nova.virt.libvirt.migration [None req-e9459132-ba13-41d9-a7cf-f952f09929e6 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] [instance: b5656c1b-5ac7-4b93-a25d-420e1e294678] Downtime does not need to change update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:525
Jan 20 15:16:27 compute-1 nova_compute[225855]: 2026-01-20 15:16:27.066 225859 DEBUG nova.compute.manager [req-e2371e9d-1e43-4664-ae25-506d85674ee2 req-fbc64c1e-eb84-4fd1-bc9e-1277f63d1f62 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b5656c1b-5ac7-4b93-a25d-420e1e294678] Received event network-vif-plugged-ebbe6083-de9d-43ca-9ab2-cf306ea0be4d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:16:27 compute-1 nova_compute[225855]: 2026-01-20 15:16:27.066 225859 DEBUG oslo_concurrency.lockutils [req-e2371e9d-1e43-4664-ae25-506d85674ee2 req-fbc64c1e-eb84-4fd1-bc9e-1277f63d1f62 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "b5656c1b-5ac7-4b93-a25d-420e1e294678-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:16:27 compute-1 nova_compute[225855]: 2026-01-20 15:16:27.066 225859 DEBUG oslo_concurrency.lockutils [req-e2371e9d-1e43-4664-ae25-506d85674ee2 req-fbc64c1e-eb84-4fd1-bc9e-1277f63d1f62 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "b5656c1b-5ac7-4b93-a25d-420e1e294678-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:16:27 compute-1 nova_compute[225855]: 2026-01-20 15:16:27.066 225859 DEBUG oslo_concurrency.lockutils [req-e2371e9d-1e43-4664-ae25-506d85674ee2 req-fbc64c1e-eb84-4fd1-bc9e-1277f63d1f62 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "b5656c1b-5ac7-4b93-a25d-420e1e294678-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:16:27 compute-1 nova_compute[225855]: 2026-01-20 15:16:27.067 225859 DEBUG nova.compute.manager [req-e2371e9d-1e43-4664-ae25-506d85674ee2 req-fbc64c1e-eb84-4fd1-bc9e-1277f63d1f62 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b5656c1b-5ac7-4b93-a25d-420e1e294678] No waiting events found dispatching network-vif-plugged-ebbe6083-de9d-43ca-9ab2-cf306ea0be4d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:16:27 compute-1 nova_compute[225855]: 2026-01-20 15:16:27.067 225859 WARNING nova.compute.manager [req-e2371e9d-1e43-4664-ae25-506d85674ee2 req-fbc64c1e-eb84-4fd1-bc9e-1277f63d1f62 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b5656c1b-5ac7-4b93-a25d-420e1e294678] Received unexpected event network-vif-plugged-ebbe6083-de9d-43ca-9ab2-cf306ea0be4d for instance with vm_state active and task_state migrating.
Jan 20 15:16:27 compute-1 nova_compute[225855]: 2026-01-20 15:16:27.067 225859 DEBUG nova.compute.manager [req-e2371e9d-1e43-4664-ae25-506d85674ee2 req-fbc64c1e-eb84-4fd1-bc9e-1277f63d1f62 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b5656c1b-5ac7-4b93-a25d-420e1e294678] Received event network-changed-ebbe6083-de9d-43ca-9ab2-cf306ea0be4d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:16:27 compute-1 nova_compute[225855]: 2026-01-20 15:16:27.067 225859 DEBUG nova.compute.manager [req-e2371e9d-1e43-4664-ae25-506d85674ee2 req-fbc64c1e-eb84-4fd1-bc9e-1277f63d1f62 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b5656c1b-5ac7-4b93-a25d-420e1e294678] Refreshing instance network info cache due to event network-changed-ebbe6083-de9d-43ca-9ab2-cf306ea0be4d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 15:16:27 compute-1 nova_compute[225855]: 2026-01-20 15:16:27.068 225859 DEBUG oslo_concurrency.lockutils [req-e2371e9d-1e43-4664-ae25-506d85674ee2 req-fbc64c1e-eb84-4fd1-bc9e-1277f63d1f62 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-b5656c1b-5ac7-4b93-a25d-420e1e294678" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:16:27 compute-1 nova_compute[225855]: 2026-01-20 15:16:27.068 225859 DEBUG oslo_concurrency.lockutils [req-e2371e9d-1e43-4664-ae25-506d85674ee2 req-fbc64c1e-eb84-4fd1-bc9e-1277f63d1f62 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-b5656c1b-5ac7-4b93-a25d-420e1e294678" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:16:27 compute-1 nova_compute[225855]: 2026-01-20 15:16:27.068 225859 DEBUG nova.network.neutron [req-e2371e9d-1e43-4664-ae25-506d85674ee2 req-fbc64c1e-eb84-4fd1-bc9e-1277f63d1f62 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b5656c1b-5ac7-4b93-a25d-420e1e294678] Refreshing network info cache for port ebbe6083-de9d-43ca-9ab2-cf306ea0be4d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 15:16:27 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:16:27 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:16:27 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:16:27.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:16:27 compute-1 nova_compute[225855]: 2026-01-20 15:16:27.365 225859 DEBUG nova.virt.libvirt.migration [None req-e9459132-ba13-41d9-a7cf-f952f09929e6 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] [instance: b5656c1b-5ac7-4b93-a25d-420e1e294678] Current 50 elapsed 1 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512
Jan 20 15:16:27 compute-1 nova_compute[225855]: 2026-01-20 15:16:27.365 225859 DEBUG nova.virt.libvirt.migration [None req-e9459132-ba13-41d9-a7cf-f952f09929e6 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] [instance: b5656c1b-5ac7-4b93-a25d-420e1e294678] Downtime does not need to change update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:525
Jan 20 15:16:27 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:16:27 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:16:27 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:16:27.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:16:27 compute-1 nova_compute[225855]: 2026-01-20 15:16:27.504 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768922187.503702, b5656c1b-5ac7-4b93-a25d-420e1e294678 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:16:27 compute-1 nova_compute[225855]: 2026-01-20 15:16:27.504 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: b5656c1b-5ac7-4b93-a25d-420e1e294678] VM Paused (Lifecycle Event)
Jan 20 15:16:27 compute-1 nova_compute[225855]: 2026-01-20 15:16:27.531 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: b5656c1b-5ac7-4b93-a25d-420e1e294678] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:16:27 compute-1 nova_compute[225855]: 2026-01-20 15:16:27.534 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: b5656c1b-5ac7-4b93-a25d-420e1e294678] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: migrating, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 15:16:27 compute-1 nova_compute[225855]: 2026-01-20 15:16:27.561 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: b5656c1b-5ac7-4b93-a25d-420e1e294678] During sync_power_state the instance has a pending task (migrating). Skip.
Jan 20 15:16:27 compute-1 kernel: tapebbe6083-de (unregistering): left promiscuous mode
Jan 20 15:16:27 compute-1 NetworkManager[49104]: <info>  [1768922187.6920] device (tapebbe6083-de): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 20 15:16:27 compute-1 ovn_controller[130490]: 2026-01-20T15:16:27Z|00846|binding|INFO|Releasing lport ebbe6083-de9d-43ca-9ab2-cf306ea0be4d from this chassis (sb_readonly=0)
Jan 20 15:16:27 compute-1 ovn_controller[130490]: 2026-01-20T15:16:27Z|00847|binding|INFO|Setting lport ebbe6083-de9d-43ca-9ab2-cf306ea0be4d down in Southbound
Jan 20 15:16:27 compute-1 nova_compute[225855]: 2026-01-20 15:16:27.703 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:16:27 compute-1 ovn_controller[130490]: 2026-01-20T15:16:27Z|00848|binding|INFO|Removing iface tapebbe6083-de ovn-installed in OVS
Jan 20 15:16:27 compute-1 nova_compute[225855]: 2026-01-20 15:16:27.708 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:16:27 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:16:27.710 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a9:77:ea 10.100.0.5'], port_security=['fa:16:3e:a9:77:ea 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com,compute-0.ctlplane.example.com', 'activation-strategy': 'rarp', 'additional-chassis-activated': '367c1a2c-b16a-4828-ab5a-626bb50023b4'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'b5656c1b-5ac7-4b93-a25d-420e1e294678', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-be008398-8f36-4967-9cc8-6412553c79f3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1ed5feeeafe7448a8efb47ab975b0ead', 'neutron:revision_number': '8', 'neutron:security_group_ids': '1f7c21ab-d630-47d9-a822-01d8ee3b1d55', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com', 'neutron:port_fip': '192.168.122.206'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cdafc2c8-f418-454c-b49a-dbb24d8d2298, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], logical_port=ebbe6083-de9d-43ca-9ab2-cf306ea0be4d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:16:27 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:16:27.712 140354 INFO neutron.agent.ovn.metadata.agent [-] Port ebbe6083-de9d-43ca-9ab2-cf306ea0be4d in datapath be008398-8f36-4967-9cc8-6412553c79f3 unbound from our chassis
Jan 20 15:16:27 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:16:27.713 140354 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network be008398-8f36-4967-9cc8-6412553c79f3, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 20 15:16:27 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:16:27.715 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[0526d38d-2014-4215-8682-e374b55a8733]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:16:27 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:16:27.715 140354 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-be008398-8f36-4967-9cc8-6412553c79f3 namespace which is not needed anymore
Jan 20 15:16:27 compute-1 ceph-mon[81775]: osdmap e407: 3 total, 3 up, 3 in
Jan 20 15:16:27 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/3605721892' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:16:27 compute-1 nova_compute[225855]: 2026-01-20 15:16:27.720 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:16:27 compute-1 systemd[1]: machine-qemu\x2d100\x2dinstance\x2d000000be.scope: Deactivated successfully.
Jan 20 15:16:27 compute-1 systemd[1]: machine-qemu\x2d100\x2dinstance\x2d000000be.scope: Consumed 14.856s CPU time.
Jan 20 15:16:27 compute-1 systemd-machined[194361]: Machine qemu-100-instance-000000be terminated.
Jan 20 15:16:27 compute-1 nova_compute[225855]: 2026-01-20 15:16:27.773 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:16:27 compute-1 neutron-haproxy-ovnmeta-be008398-8f36-4967-9cc8-6412553c79f3[305107]: [NOTICE]   (305112) : haproxy version is 2.8.14-c23fe91
Jan 20 15:16:27 compute-1 neutron-haproxy-ovnmeta-be008398-8f36-4967-9cc8-6412553c79f3[305107]: [NOTICE]   (305112) : path to executable is /usr/sbin/haproxy
Jan 20 15:16:27 compute-1 neutron-haproxy-ovnmeta-be008398-8f36-4967-9cc8-6412553c79f3[305107]: [WARNING]  (305112) : Exiting Master process...
Jan 20 15:16:27 compute-1 neutron-haproxy-ovnmeta-be008398-8f36-4967-9cc8-6412553c79f3[305107]: [ALERT]    (305112) : Current worker (305116) exited with code 143 (Terminated)
Jan 20 15:16:27 compute-1 neutron-haproxy-ovnmeta-be008398-8f36-4967-9cc8-6412553c79f3[305107]: [WARNING]  (305112) : All workers exited. Exiting... (0)
Jan 20 15:16:27 compute-1 systemd[1]: libpod-ab35cb6b69ea79e9a1b020ef3386e02f717ac202f0cbeea2b52a921e4f6d2e7f.scope: Deactivated successfully.
Jan 20 15:16:27 compute-1 conmon[305107]: conmon ab35cb6b69ea79e9a1b0 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ab35cb6b69ea79e9a1b020ef3386e02f717ac202f0cbeea2b52a921e4f6d2e7f.scope/container/memory.events
Jan 20 15:16:27 compute-1 virtqemud[225396]: Unable to get XATTR trusted.libvirt.security.ref_selinux on vms/b5656c1b-5ac7-4b93-a25d-420e1e294678_disk: No such file or directory
Jan 20 15:16:27 compute-1 virtqemud[225396]: Unable to get XATTR trusted.libvirt.security.ref_dac on vms/b5656c1b-5ac7-4b93-a25d-420e1e294678_disk: No such file or directory
Jan 20 15:16:27 compute-1 podman[305426]: 2026-01-20 15:16:27.850481156 +0000 UTC m=+0.043374902 container died ab35cb6b69ea79e9a1b020ef3386e02f717ac202f0cbeea2b52a921e4f6d2e7f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-be008398-8f36-4967-9cc8-6412553c79f3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202)
Jan 20 15:16:27 compute-1 nova_compute[225855]: 2026-01-20 15:16:27.858 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:16:27 compute-1 nova_compute[225855]: 2026-01-20 15:16:27.863 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:16:27 compute-1 nova_compute[225855]: 2026-01-20 15:16:27.869 225859 DEBUG nova.virt.libvirt.guest [None req-e9459132-ba13-41d9-a7cf-f952f09929e6 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] Domain has shutdown/gone away: Requested operation is not valid: domain is not running get_job_info /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:688
Jan 20 15:16:27 compute-1 nova_compute[225855]: 2026-01-20 15:16:27.870 225859 INFO nova.virt.libvirt.driver [None req-e9459132-ba13-41d9-a7cf-f952f09929e6 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] [instance: b5656c1b-5ac7-4b93-a25d-420e1e294678] Migration operation has completed
Jan 20 15:16:27 compute-1 nova_compute[225855]: 2026-01-20 15:16:27.871 225859 INFO nova.compute.manager [None req-e9459132-ba13-41d9-a7cf-f952f09929e6 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] [instance: b5656c1b-5ac7-4b93-a25d-420e1e294678] _post_live_migration() is started..
Jan 20 15:16:27 compute-1 nova_compute[225855]: 2026-01-20 15:16:27.878 225859 DEBUG nova.virt.libvirt.driver [None req-e9459132-ba13-41d9-a7cf-f952f09929e6 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] [instance: b5656c1b-5ac7-4b93-a25d-420e1e294678] Migrate API has completed _live_migration_operation /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10279
Jan 20 15:16:27 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ab35cb6b69ea79e9a1b020ef3386e02f717ac202f0cbeea2b52a921e4f6d2e7f-userdata-shm.mount: Deactivated successfully.
Jan 20 15:16:27 compute-1 systemd[1]: var-lib-containers-storage-overlay-62df8bc671769c9f3b38059ef802cd2fd43ec5ee8411e92bf650967517928ab7-merged.mount: Deactivated successfully.
Jan 20 15:16:27 compute-1 nova_compute[225855]: 2026-01-20 15:16:27.883 225859 DEBUG nova.virt.libvirt.driver [None req-e9459132-ba13-41d9-a7cf-f952f09929e6 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] [instance: b5656c1b-5ac7-4b93-a25d-420e1e294678] Migration operation thread has finished _live_migration_operation /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10327
Jan 20 15:16:27 compute-1 nova_compute[225855]: 2026-01-20 15:16:27.883 225859 DEBUG nova.virt.libvirt.driver [None req-e9459132-ba13-41d9-a7cf-f952f09929e6 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] [instance: b5656c1b-5ac7-4b93-a25d-420e1e294678] Migration operation thread notification thread_finished /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10630
Jan 20 15:16:27 compute-1 podman[305426]: 2026-01-20 15:16:27.890645895 +0000 UTC m=+0.083539641 container cleanup ab35cb6b69ea79e9a1b020ef3386e02f717ac202f0cbeea2b52a921e4f6d2e7f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-be008398-8f36-4967-9cc8-6412553c79f3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 20 15:16:27 compute-1 systemd[1]: libpod-conmon-ab35cb6b69ea79e9a1b020ef3386e02f717ac202f0cbeea2b52a921e4f6d2e7f.scope: Deactivated successfully.
Jan 20 15:16:27 compute-1 nova_compute[225855]: 2026-01-20 15:16:27.928 225859 DEBUG nova.compute.manager [req-74b297c1-99b7-4c81-a622-9d433b32d090 req-5e9c3042-3d5c-48df-89fb-5a6d6fb803fc 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b5656c1b-5ac7-4b93-a25d-420e1e294678] Received event network-vif-unplugged-ebbe6083-de9d-43ca-9ab2-cf306ea0be4d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:16:27 compute-1 nova_compute[225855]: 2026-01-20 15:16:27.928 225859 DEBUG oslo_concurrency.lockutils [req-74b297c1-99b7-4c81-a622-9d433b32d090 req-5e9c3042-3d5c-48df-89fb-5a6d6fb803fc 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "b5656c1b-5ac7-4b93-a25d-420e1e294678-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:16:27 compute-1 nova_compute[225855]: 2026-01-20 15:16:27.928 225859 DEBUG oslo_concurrency.lockutils [req-74b297c1-99b7-4c81-a622-9d433b32d090 req-5e9c3042-3d5c-48df-89fb-5a6d6fb803fc 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "b5656c1b-5ac7-4b93-a25d-420e1e294678-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:16:27 compute-1 nova_compute[225855]: 2026-01-20 15:16:27.928 225859 DEBUG oslo_concurrency.lockutils [req-74b297c1-99b7-4c81-a622-9d433b32d090 req-5e9c3042-3d5c-48df-89fb-5a6d6fb803fc 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "b5656c1b-5ac7-4b93-a25d-420e1e294678-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:16:27 compute-1 nova_compute[225855]: 2026-01-20 15:16:27.929 225859 DEBUG nova.compute.manager [req-74b297c1-99b7-4c81-a622-9d433b32d090 req-5e9c3042-3d5c-48df-89fb-5a6d6fb803fc 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b5656c1b-5ac7-4b93-a25d-420e1e294678] No waiting events found dispatching network-vif-unplugged-ebbe6083-de9d-43ca-9ab2-cf306ea0be4d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:16:27 compute-1 nova_compute[225855]: 2026-01-20 15:16:27.929 225859 DEBUG nova.compute.manager [req-74b297c1-99b7-4c81-a622-9d433b32d090 req-5e9c3042-3d5c-48df-89fb-5a6d6fb803fc 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b5656c1b-5ac7-4b93-a25d-420e1e294678] Received event network-vif-unplugged-ebbe6083-de9d-43ca-9ab2-cf306ea0be4d for instance with task_state migrating. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 20 15:16:27 compute-1 podman[305465]: 2026-01-20 15:16:27.952762148 +0000 UTC m=+0.040596983 container remove ab35cb6b69ea79e9a1b020ef3386e02f717ac202f0cbeea2b52a921e4f6d2e7f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-be008398-8f36-4967-9cc8-6412553c79f3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 20 15:16:27 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:16:27.958 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[75412e8e-d7a9-4727-85eb-638894d5e966]: (4, ('Tue Jan 20 03:16:27 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-be008398-8f36-4967-9cc8-6412553c79f3 (ab35cb6b69ea79e9a1b020ef3386e02f717ac202f0cbeea2b52a921e4f6d2e7f)\nab35cb6b69ea79e9a1b020ef3386e02f717ac202f0cbeea2b52a921e4f6d2e7f\nTue Jan 20 03:16:27 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-be008398-8f36-4967-9cc8-6412553c79f3 (ab35cb6b69ea79e9a1b020ef3386e02f717ac202f0cbeea2b52a921e4f6d2e7f)\nab35cb6b69ea79e9a1b020ef3386e02f717ac202f0cbeea2b52a921e4f6d2e7f\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:16:27 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:16:27.959 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[3af62ac2-0dcb-46cf-a460-b3becd85ee68]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:16:27 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:16:27.960 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbe008398-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:16:27 compute-1 nova_compute[225855]: 2026-01-20 15:16:27.961 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:16:27 compute-1 kernel: tapbe008398-80: left promiscuous mode
Jan 20 15:16:27 compute-1 nova_compute[225855]: 2026-01-20 15:16:27.977 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:16:27 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:16:27.981 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[3c7fbecf-842e-4eb6-a944-59e56293d655]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:16:28 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:16:28.106 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[7b97ed47-813a-487e-8c42-0022f9742a4a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:16:28 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:16:28.107 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[60e4e794-682e-4196-bb96-0ba5d5d75ec6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:16:28 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:16:28.122 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[dff27390-58ba-44d3-8ecc-eaa7be7eb18a]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 718799, 'reachable_time': 16028, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 305482, 'error': None, 'target': 'ovnmeta-be008398-8f36-4967-9cc8-6412553c79f3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:16:28 compute-1 systemd[1]: run-netns-ovnmeta\x2dbe008398\x2d8f36\x2d4967\x2d9cc8\x2d6412553c79f3.mount: Deactivated successfully.
Jan 20 15:16:28 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:16:28.125 140466 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-be008398-8f36-4967-9cc8-6412553c79f3 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 20 15:16:28 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:16:28.126 140466 DEBUG oslo.privsep.daemon [-] privsep: reply[3edcbcfd-6545-46e5-a0e8-51734aa7af2d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:16:28 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:16:28 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/1173701695' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:16:28 compute-1 ceph-mon[81775]: pgmap v2818: 321 pgs: 321 active+clean; 599 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.6 KiB/s rd, 8.5 KiB/s wr, 2 op/s
Jan 20 15:16:29 compute-1 nova_compute[225855]: 2026-01-20 15:16:29.100 225859 DEBUG nova.network.neutron [req-e2371e9d-1e43-4664-ae25-506d85674ee2 req-fbc64c1e-eb84-4fd1-bc9e-1277f63d1f62 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b5656c1b-5ac7-4b93-a25d-420e1e294678] Updated VIF entry in instance network info cache for port ebbe6083-de9d-43ca-9ab2-cf306ea0be4d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 15:16:29 compute-1 nova_compute[225855]: 2026-01-20 15:16:29.101 225859 DEBUG nova.network.neutron [req-e2371e9d-1e43-4664-ae25-506d85674ee2 req-fbc64c1e-eb84-4fd1-bc9e-1277f63d1f62 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b5656c1b-5ac7-4b93-a25d-420e1e294678] Updating instance_info_cache with network_info: [{"id": "ebbe6083-de9d-43ca-9ab2-cf306ea0be4d", "address": "fa:16:3e:a9:77:ea", "network": {"id": "be008398-8f36-4967-9cc8-6412553c79f3", "bridge": "br-int", "label": "tempest-network-smoke--259749923", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1ed5feeeafe7448a8efb47ab975b0ead", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapebbe6083-de", "ovs_interfaceid": "ebbe6083-de9d-43ca-9ab2-cf306ea0be4d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"migrating_to": "compute-0.ctlplane.example.com"}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:16:29 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:16:29 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:16:29 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:16:29.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:16:29 compute-1 nova_compute[225855]: 2026-01-20 15:16:29.131 225859 DEBUG oslo_concurrency.lockutils [req-e2371e9d-1e43-4664-ae25-506d85674ee2 req-fbc64c1e-eb84-4fd1-bc9e-1277f63d1f62 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-b5656c1b-5ac7-4b93-a25d-420e1e294678" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:16:29 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:16:29 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:16:29 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:16:29.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:16:29 compute-1 nova_compute[225855]: 2026-01-20 15:16:29.941 225859 DEBUG nova.network.neutron [None req-e9459132-ba13-41d9-a7cf-f952f09929e6 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] Activated binding for port ebbe6083-de9d-43ca-9ab2-cf306ea0be4d and host compute-0.ctlplane.example.com migrate_instance_start /usr/lib/python3.9/site-packages/nova/network/neutron.py:3181
Jan 20 15:16:29 compute-1 nova_compute[225855]: 2026-01-20 15:16:29.942 225859 DEBUG nova.compute.manager [None req-e9459132-ba13-41d9-a7cf-f952f09929e6 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] [instance: b5656c1b-5ac7-4b93-a25d-420e1e294678] Calling driver.post_live_migration_at_source with original source VIFs from migrate_data: [{"id": "ebbe6083-de9d-43ca-9ab2-cf306ea0be4d", "address": "fa:16:3e:a9:77:ea", "network": {"id": "be008398-8f36-4967-9cc8-6412553c79f3", "bridge": "br-int", "label": "tempest-network-smoke--259749923", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1ed5feeeafe7448a8efb47ab975b0ead", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapebbe6083-de", "ovs_interfaceid": "ebbe6083-de9d-43ca-9ab2-cf306ea0be4d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] _post_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:9326
Jan 20 15:16:29 compute-1 nova_compute[225855]: 2026-01-20 15:16:29.943 225859 DEBUG nova.virt.libvirt.vif [None req-e9459132-ba13-41d9-a7cf-f952f09929e6 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T15:15:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-470752205',display_name='tempest-TestNetworkAdvancedServerOps-server-470752205',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-470752205',id=190,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAmgIhRv7SylyyDoXsoPKcXNSjcMbMCF7/IROo74ZCmTo9LbWE2Sanv271vjV+ounImSggkddfPFTxsQsqOAeGJ63UOB1CVRLYAEgvPLI8ngnO4k9hlNWAKjL0F7Yejx9A==',key_name='tempest-TestNetworkAdvancedServerOps-131135683',keypairs=<?>,launch_index=0,launched_at=2026-01-20T15:15:50Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='1ed5feeeafe7448a8efb47ab975b0ead',ramdisk_id='',reservation_id='r-kjhk913w',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-175282664',owner_user_name='tempest-TestNetworkAdvancedServerOps-175282664-project-member'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T15:16:15Z,user_data=None,user_id='442a7a5cb8ea426a82be9762b262d171',uuid=b5656c1b-5ac7-4b93-a25d-420e1e294678,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ebbe6083-de9d-43ca-9ab2-cf306ea0be4d", "address": "fa:16:3e:a9:77:ea", "network": {"id": "be008398-8f36-4967-9cc8-6412553c79f3", "bridge": "br-int", "label": "tempest-network-smoke--259749923", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1ed5feeeafe7448a8efb47ab975b0ead", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapebbe6083-de", "ovs_interfaceid": "ebbe6083-de9d-43ca-9ab2-cf306ea0be4d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 20 15:16:29 compute-1 nova_compute[225855]: 2026-01-20 15:16:29.943 225859 DEBUG nova.network.os_vif_util [None req-e9459132-ba13-41d9-a7cf-f952f09929e6 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] Converting VIF {"id": "ebbe6083-de9d-43ca-9ab2-cf306ea0be4d", "address": "fa:16:3e:a9:77:ea", "network": {"id": "be008398-8f36-4967-9cc8-6412553c79f3", "bridge": "br-int", "label": "tempest-network-smoke--259749923", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1ed5feeeafe7448a8efb47ab975b0ead", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapebbe6083-de", "ovs_interfaceid": "ebbe6083-de9d-43ca-9ab2-cf306ea0be4d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 15:16:29 compute-1 nova_compute[225855]: 2026-01-20 15:16:29.943 225859 DEBUG nova.network.os_vif_util [None req-e9459132-ba13-41d9-a7cf-f952f09929e6 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:a9:77:ea,bridge_name='br-int',has_traffic_filtering=True,id=ebbe6083-de9d-43ca-9ab2-cf306ea0be4d,network=Network(be008398-8f36-4967-9cc8-6412553c79f3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapebbe6083-de') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 15:16:29 compute-1 nova_compute[225855]: 2026-01-20 15:16:29.944 225859 DEBUG os_vif [None req-e9459132-ba13-41d9-a7cf-f952f09929e6 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:a9:77:ea,bridge_name='br-int',has_traffic_filtering=True,id=ebbe6083-de9d-43ca-9ab2-cf306ea0be4d,network=Network(be008398-8f36-4967-9cc8-6412553c79f3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapebbe6083-de') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 20 15:16:29 compute-1 nova_compute[225855]: 2026-01-20 15:16:29.946 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:16:29 compute-1 nova_compute[225855]: 2026-01-20 15:16:29.946 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapebbe6083-de, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:16:29 compute-1 nova_compute[225855]: 2026-01-20 15:16:29.947 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:16:29 compute-1 nova_compute[225855]: 2026-01-20 15:16:29.950 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 15:16:29 compute-1 nova_compute[225855]: 2026-01-20 15:16:29.952 225859 INFO os_vif [None req-e9459132-ba13-41d9-a7cf-f952f09929e6 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:a9:77:ea,bridge_name='br-int',has_traffic_filtering=True,id=ebbe6083-de9d-43ca-9ab2-cf306ea0be4d,network=Network(be008398-8f36-4967-9cc8-6412553c79f3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapebbe6083-de')
Jan 20 15:16:29 compute-1 nova_compute[225855]: 2026-01-20 15:16:29.953 225859 DEBUG oslo_concurrency.lockutils [None req-e9459132-ba13-41d9-a7cf-f952f09929e6 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:16:29 compute-1 nova_compute[225855]: 2026-01-20 15:16:29.953 225859 DEBUG oslo_concurrency.lockutils [None req-e9459132-ba13-41d9-a7cf-f952f09929e6 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:16:29 compute-1 nova_compute[225855]: 2026-01-20 15:16:29.953 225859 DEBUG oslo_concurrency.lockutils [None req-e9459132-ba13-41d9-a7cf-f952f09929e6 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:16:29 compute-1 nova_compute[225855]: 2026-01-20 15:16:29.953 225859 DEBUG nova.compute.manager [None req-e9459132-ba13-41d9-a7cf-f952f09929e6 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] [instance: b5656c1b-5ac7-4b93-a25d-420e1e294678] Calling driver.cleanup from _post_live_migration _post_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:9349
Jan 20 15:16:29 compute-1 nova_compute[225855]: 2026-01-20 15:16:29.954 225859 INFO nova.virt.libvirt.driver [None req-e9459132-ba13-41d9-a7cf-f952f09929e6 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] [instance: b5656c1b-5ac7-4b93-a25d-420e1e294678] Deleting instance files /var/lib/nova/instances/b5656c1b-5ac7-4b93-a25d-420e1e294678_del
Jan 20 15:16:29 compute-1 nova_compute[225855]: 2026-01-20 15:16:29.954 225859 INFO nova.virt.libvirt.driver [None req-e9459132-ba13-41d9-a7cf-f952f09929e6 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] [instance: b5656c1b-5ac7-4b93-a25d-420e1e294678] Deletion of /var/lib/nova/instances/b5656c1b-5ac7-4b93-a25d-420e1e294678_del complete
Jan 20 15:16:30 compute-1 podman[305484]: 2026-01-20 15:16:30.010288088 +0000 UTC m=+0.055468905 container health_status 533cd8e0c3b8b7f910f15cef49e7f879ab46c12ce646a35628c15f13725eb500 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251202)
Jan 20 15:16:30 compute-1 nova_compute[225855]: 2026-01-20 15:16:30.047 225859 DEBUG nova.compute.manager [req-8d4ad645-2470-43a7-9aba-b1c97acd08b5 req-d4712228-3f09-4525-9367-f5eb27524d53 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b5656c1b-5ac7-4b93-a25d-420e1e294678] Received event network-vif-plugged-ebbe6083-de9d-43ca-9ab2-cf306ea0be4d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:16:30 compute-1 nova_compute[225855]: 2026-01-20 15:16:30.047 225859 DEBUG oslo_concurrency.lockutils [req-8d4ad645-2470-43a7-9aba-b1c97acd08b5 req-d4712228-3f09-4525-9367-f5eb27524d53 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "b5656c1b-5ac7-4b93-a25d-420e1e294678-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:16:30 compute-1 nova_compute[225855]: 2026-01-20 15:16:30.047 225859 DEBUG oslo_concurrency.lockutils [req-8d4ad645-2470-43a7-9aba-b1c97acd08b5 req-d4712228-3f09-4525-9367-f5eb27524d53 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "b5656c1b-5ac7-4b93-a25d-420e1e294678-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:16:30 compute-1 nova_compute[225855]: 2026-01-20 15:16:30.047 225859 DEBUG oslo_concurrency.lockutils [req-8d4ad645-2470-43a7-9aba-b1c97acd08b5 req-d4712228-3f09-4525-9367-f5eb27524d53 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "b5656c1b-5ac7-4b93-a25d-420e1e294678-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:16:30 compute-1 nova_compute[225855]: 2026-01-20 15:16:30.048 225859 DEBUG nova.compute.manager [req-8d4ad645-2470-43a7-9aba-b1c97acd08b5 req-d4712228-3f09-4525-9367-f5eb27524d53 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b5656c1b-5ac7-4b93-a25d-420e1e294678] No waiting events found dispatching network-vif-plugged-ebbe6083-de9d-43ca-9ab2-cf306ea0be4d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:16:30 compute-1 nova_compute[225855]: 2026-01-20 15:16:30.048 225859 WARNING nova.compute.manager [req-8d4ad645-2470-43a7-9aba-b1c97acd08b5 req-d4712228-3f09-4525-9367-f5eb27524d53 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b5656c1b-5ac7-4b93-a25d-420e1e294678] Received unexpected event network-vif-plugged-ebbe6083-de9d-43ca-9ab2-cf306ea0be4d for instance with vm_state active and task_state migrating.
Jan 20 15:16:30 compute-1 nova_compute[225855]: 2026-01-20 15:16:30.048 225859 DEBUG nova.compute.manager [req-8d4ad645-2470-43a7-9aba-b1c97acd08b5 req-d4712228-3f09-4525-9367-f5eb27524d53 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b5656c1b-5ac7-4b93-a25d-420e1e294678] Received event network-vif-plugged-ebbe6083-de9d-43ca-9ab2-cf306ea0be4d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:16:30 compute-1 nova_compute[225855]: 2026-01-20 15:16:30.048 225859 DEBUG oslo_concurrency.lockutils [req-8d4ad645-2470-43a7-9aba-b1c97acd08b5 req-d4712228-3f09-4525-9367-f5eb27524d53 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "b5656c1b-5ac7-4b93-a25d-420e1e294678-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:16:30 compute-1 nova_compute[225855]: 2026-01-20 15:16:30.048 225859 DEBUG oslo_concurrency.lockutils [req-8d4ad645-2470-43a7-9aba-b1c97acd08b5 req-d4712228-3f09-4525-9367-f5eb27524d53 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "b5656c1b-5ac7-4b93-a25d-420e1e294678-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:16:30 compute-1 nova_compute[225855]: 2026-01-20 15:16:30.048 225859 DEBUG oslo_concurrency.lockutils [req-8d4ad645-2470-43a7-9aba-b1c97acd08b5 req-d4712228-3f09-4525-9367-f5eb27524d53 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "b5656c1b-5ac7-4b93-a25d-420e1e294678-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:16:30 compute-1 nova_compute[225855]: 2026-01-20 15:16:30.049 225859 DEBUG nova.compute.manager [req-8d4ad645-2470-43a7-9aba-b1c97acd08b5 req-d4712228-3f09-4525-9367-f5eb27524d53 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b5656c1b-5ac7-4b93-a25d-420e1e294678] No waiting events found dispatching network-vif-plugged-ebbe6083-de9d-43ca-9ab2-cf306ea0be4d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:16:30 compute-1 nova_compute[225855]: 2026-01-20 15:16:30.049 225859 WARNING nova.compute.manager [req-8d4ad645-2470-43a7-9aba-b1c97acd08b5 req-d4712228-3f09-4525-9367-f5eb27524d53 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b5656c1b-5ac7-4b93-a25d-420e1e294678] Received unexpected event network-vif-plugged-ebbe6083-de9d-43ca-9ab2-cf306ea0be4d for instance with vm_state active and task_state migrating.
Jan 20 15:16:30 compute-1 nova_compute[225855]: 2026-01-20 15:16:30.049 225859 DEBUG nova.compute.manager [req-8d4ad645-2470-43a7-9aba-b1c97acd08b5 req-d4712228-3f09-4525-9367-f5eb27524d53 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b5656c1b-5ac7-4b93-a25d-420e1e294678] Received event network-vif-unplugged-ebbe6083-de9d-43ca-9ab2-cf306ea0be4d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:16:30 compute-1 nova_compute[225855]: 2026-01-20 15:16:30.049 225859 DEBUG oslo_concurrency.lockutils [req-8d4ad645-2470-43a7-9aba-b1c97acd08b5 req-d4712228-3f09-4525-9367-f5eb27524d53 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "b5656c1b-5ac7-4b93-a25d-420e1e294678-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:16:30 compute-1 nova_compute[225855]: 2026-01-20 15:16:30.049 225859 DEBUG oslo_concurrency.lockutils [req-8d4ad645-2470-43a7-9aba-b1c97acd08b5 req-d4712228-3f09-4525-9367-f5eb27524d53 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "b5656c1b-5ac7-4b93-a25d-420e1e294678-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:16:30 compute-1 nova_compute[225855]: 2026-01-20 15:16:30.050 225859 DEBUG oslo_concurrency.lockutils [req-8d4ad645-2470-43a7-9aba-b1c97acd08b5 req-d4712228-3f09-4525-9367-f5eb27524d53 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "b5656c1b-5ac7-4b93-a25d-420e1e294678-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:16:30 compute-1 nova_compute[225855]: 2026-01-20 15:16:30.050 225859 DEBUG nova.compute.manager [req-8d4ad645-2470-43a7-9aba-b1c97acd08b5 req-d4712228-3f09-4525-9367-f5eb27524d53 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b5656c1b-5ac7-4b93-a25d-420e1e294678] No waiting events found dispatching network-vif-unplugged-ebbe6083-de9d-43ca-9ab2-cf306ea0be4d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:16:30 compute-1 nova_compute[225855]: 2026-01-20 15:16:30.050 225859 DEBUG nova.compute.manager [req-8d4ad645-2470-43a7-9aba-b1c97acd08b5 req-d4712228-3f09-4525-9367-f5eb27524d53 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b5656c1b-5ac7-4b93-a25d-420e1e294678] Received event network-vif-unplugged-ebbe6083-de9d-43ca-9ab2-cf306ea0be4d for instance with task_state migrating. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 20 15:16:30 compute-1 ceph-mon[81775]: pgmap v2819: 321 pgs: 321 active+clean; 599 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 630 KiB/s rd, 3.8 KiB/s wr, 54 op/s
Jan 20 15:16:31 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:16:31 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:16:31 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:16:31.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:16:31 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:16:31 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:16:31 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:16:31.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:16:32 compute-1 nova_compute[225855]: 2026-01-20 15:16:32.221 225859 DEBUG nova.compute.manager [req-2b858c79-1f31-4de4-8073-358f2e98d753 req-0cc8db62-b365-425b-8dd1-4c407516161a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b5656c1b-5ac7-4b93-a25d-420e1e294678] Received event network-vif-plugged-ebbe6083-de9d-43ca-9ab2-cf306ea0be4d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:16:32 compute-1 nova_compute[225855]: 2026-01-20 15:16:32.221 225859 DEBUG oslo_concurrency.lockutils [req-2b858c79-1f31-4de4-8073-358f2e98d753 req-0cc8db62-b365-425b-8dd1-4c407516161a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "b5656c1b-5ac7-4b93-a25d-420e1e294678-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:16:32 compute-1 nova_compute[225855]: 2026-01-20 15:16:32.221 225859 DEBUG oslo_concurrency.lockutils [req-2b858c79-1f31-4de4-8073-358f2e98d753 req-0cc8db62-b365-425b-8dd1-4c407516161a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "b5656c1b-5ac7-4b93-a25d-420e1e294678-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:16:32 compute-1 nova_compute[225855]: 2026-01-20 15:16:32.221 225859 DEBUG oslo_concurrency.lockutils [req-2b858c79-1f31-4de4-8073-358f2e98d753 req-0cc8db62-b365-425b-8dd1-4c407516161a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "b5656c1b-5ac7-4b93-a25d-420e1e294678-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:16:32 compute-1 nova_compute[225855]: 2026-01-20 15:16:32.221 225859 DEBUG nova.compute.manager [req-2b858c79-1f31-4de4-8073-358f2e98d753 req-0cc8db62-b365-425b-8dd1-4c407516161a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b5656c1b-5ac7-4b93-a25d-420e1e294678] No waiting events found dispatching network-vif-plugged-ebbe6083-de9d-43ca-9ab2-cf306ea0be4d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:16:32 compute-1 nova_compute[225855]: 2026-01-20 15:16:32.222 225859 WARNING nova.compute.manager [req-2b858c79-1f31-4de4-8073-358f2e98d753 req-0cc8db62-b365-425b-8dd1-4c407516161a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b5656c1b-5ac7-4b93-a25d-420e1e294678] Received unexpected event network-vif-plugged-ebbe6083-de9d-43ca-9ab2-cf306ea0be4d for instance with vm_state active and task_state migrating.
Jan 20 15:16:32 compute-1 nova_compute[225855]: 2026-01-20 15:16:32.222 225859 DEBUG nova.compute.manager [req-2b858c79-1f31-4de4-8073-358f2e98d753 req-0cc8db62-b365-425b-8dd1-4c407516161a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b5656c1b-5ac7-4b93-a25d-420e1e294678] Received event network-vif-plugged-ebbe6083-de9d-43ca-9ab2-cf306ea0be4d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:16:32 compute-1 nova_compute[225855]: 2026-01-20 15:16:32.222 225859 DEBUG oslo_concurrency.lockutils [req-2b858c79-1f31-4de4-8073-358f2e98d753 req-0cc8db62-b365-425b-8dd1-4c407516161a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "b5656c1b-5ac7-4b93-a25d-420e1e294678-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:16:32 compute-1 nova_compute[225855]: 2026-01-20 15:16:32.222 225859 DEBUG oslo_concurrency.lockutils [req-2b858c79-1f31-4de4-8073-358f2e98d753 req-0cc8db62-b365-425b-8dd1-4c407516161a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "b5656c1b-5ac7-4b93-a25d-420e1e294678-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:16:32 compute-1 nova_compute[225855]: 2026-01-20 15:16:32.222 225859 DEBUG oslo_concurrency.lockutils [req-2b858c79-1f31-4de4-8073-358f2e98d753 req-0cc8db62-b365-425b-8dd1-4c407516161a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "b5656c1b-5ac7-4b93-a25d-420e1e294678-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:16:32 compute-1 nova_compute[225855]: 2026-01-20 15:16:32.222 225859 DEBUG nova.compute.manager [req-2b858c79-1f31-4de4-8073-358f2e98d753 req-0cc8db62-b365-425b-8dd1-4c407516161a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b5656c1b-5ac7-4b93-a25d-420e1e294678] No waiting events found dispatching network-vif-plugged-ebbe6083-de9d-43ca-9ab2-cf306ea0be4d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:16:32 compute-1 nova_compute[225855]: 2026-01-20 15:16:32.223 225859 WARNING nova.compute.manager [req-2b858c79-1f31-4de4-8073-358f2e98d753 req-0cc8db62-b365-425b-8dd1-4c407516161a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b5656c1b-5ac7-4b93-a25d-420e1e294678] Received unexpected event network-vif-plugged-ebbe6083-de9d-43ca-9ab2-cf306ea0be4d for instance with vm_state active and task_state migrating.
Jan 20 15:16:32 compute-1 nova_compute[225855]: 2026-01-20 15:16:32.984 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:16:33 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:16:33 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:16:33 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:16:33.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:16:33 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:16:33 compute-1 ovn_controller[130490]: 2026-01-20T15:16:33Z|00849|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Jan 20 15:16:33 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:16:33 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:16:33 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:16:33.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:16:33 compute-1 ceph-mon[81775]: pgmap v2820: 321 pgs: 321 active+clean; 599 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.4 MiB/s rd, 4.2 KiB/s wr, 82 op/s
Jan 20 15:16:34 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e408 e408: 3 total, 3 up, 3 in
Jan 20 15:16:34 compute-1 nova_compute[225855]: 2026-01-20 15:16:34.948 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:16:35 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:16:35 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:16:35 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:16:35.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:16:35 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:16:35 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:16:35 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:16:35.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:16:35 compute-1 nova_compute[225855]: 2026-01-20 15:16:35.783 225859 DEBUG oslo_concurrency.lockutils [None req-e9459132-ba13-41d9-a7cf-f952f09929e6 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] Acquiring lock "b5656c1b-5ac7-4b93-a25d-420e1e294678-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:16:35 compute-1 nova_compute[225855]: 2026-01-20 15:16:35.784 225859 DEBUG oslo_concurrency.lockutils [None req-e9459132-ba13-41d9-a7cf-f952f09929e6 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] Lock "b5656c1b-5ac7-4b93-a25d-420e1e294678-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:16:35 compute-1 nova_compute[225855]: 2026-01-20 15:16:35.784 225859 DEBUG oslo_concurrency.lockutils [None req-e9459132-ba13-41d9-a7cf-f952f09929e6 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] Lock "b5656c1b-5ac7-4b93-a25d-420e1e294678-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:16:35 compute-1 nova_compute[225855]: 2026-01-20 15:16:35.801 225859 DEBUG oslo_concurrency.lockutils [None req-e9459132-ba13-41d9-a7cf-f952f09929e6 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:16:35 compute-1 nova_compute[225855]: 2026-01-20 15:16:35.802 225859 DEBUG oslo_concurrency.lockutils [None req-e9459132-ba13-41d9-a7cf-f952f09929e6 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:16:35 compute-1 nova_compute[225855]: 2026-01-20 15:16:35.802 225859 DEBUG oslo_concurrency.lockutils [None req-e9459132-ba13-41d9-a7cf-f952f09929e6 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:16:35 compute-1 ceph-mon[81775]: pgmap v2821: 321 pgs: 321 active+clean; 599 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 4.5 KiB/s wr, 116 op/s
Jan 20 15:16:35 compute-1 ceph-mon[81775]: osdmap e408: 3 total, 3 up, 3 in
Jan 20 15:16:35 compute-1 nova_compute[225855]: 2026-01-20 15:16:35.802 225859 DEBUG nova.compute.resource_tracker [None req-e9459132-ba13-41d9-a7cf-f952f09929e6 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 15:16:35 compute-1 nova_compute[225855]: 2026-01-20 15:16:35.803 225859 DEBUG oslo_concurrency.processutils [None req-e9459132-ba13-41d9-a7cf-f952f09929e6 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:16:36 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:16:36 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2719345878' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:16:36 compute-1 nova_compute[225855]: 2026-01-20 15:16:36.246 225859 DEBUG oslo_concurrency.processutils [None req-e9459132-ba13-41d9-a7cf-f952f09929e6 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:16:36 compute-1 nova_compute[225855]: 2026-01-20 15:16:36.328 225859 DEBUG nova.virt.libvirt.driver [None req-e9459132-ba13-41d9-a7cf-f952f09929e6 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] skipping disk for instance-000000bb as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 15:16:36 compute-1 nova_compute[225855]: 2026-01-20 15:16:36.329 225859 DEBUG nova.virt.libvirt.driver [None req-e9459132-ba13-41d9-a7cf-f952f09929e6 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] skipping disk for instance-000000bb as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 15:16:36 compute-1 nova_compute[225855]: 2026-01-20 15:16:36.329 225859 DEBUG nova.virt.libvirt.driver [None req-e9459132-ba13-41d9-a7cf-f952f09929e6 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] skipping disk for instance-000000bb as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 15:16:36 compute-1 nova_compute[225855]: 2026-01-20 15:16:36.506 225859 WARNING nova.virt.libvirt.driver [None req-e9459132-ba13-41d9-a7cf-f952f09929e6 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 15:16:36 compute-1 nova_compute[225855]: 2026-01-20 15:16:36.507 225859 DEBUG nova.compute.resource_tracker [None req-e9459132-ba13-41d9-a7cf-f952f09929e6 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4011MB free_disk=20.760086059570312GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 15:16:36 compute-1 nova_compute[225855]: 2026-01-20 15:16:36.507 225859 DEBUG oslo_concurrency.lockutils [None req-e9459132-ba13-41d9-a7cf-f952f09929e6 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:16:36 compute-1 nova_compute[225855]: 2026-01-20 15:16:36.507 225859 DEBUG oslo_concurrency.lockutils [None req-e9459132-ba13-41d9-a7cf-f952f09929e6 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:16:36 compute-1 nova_compute[225855]: 2026-01-20 15:16:36.547 225859 DEBUG nova.compute.resource_tracker [None req-e9459132-ba13-41d9-a7cf-f952f09929e6 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] Migration for instance b5656c1b-5ac7-4b93-a25d-420e1e294678 refers to another host's instance! _pair_instances_to_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:903
Jan 20 15:16:36 compute-1 nova_compute[225855]: 2026-01-20 15:16:36.567 225859 DEBUG nova.compute.resource_tracker [None req-e9459132-ba13-41d9-a7cf-f952f09929e6 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] [instance: b5656c1b-5ac7-4b93-a25d-420e1e294678] Skipping migration as instance is neither resizing nor live-migrating. _update_usage_from_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1491
Jan 20 15:16:36 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/796769356' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:16:36 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/2719345878' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:16:36 compute-1 ceph-mon[81775]: pgmap v2823: 321 pgs: 2 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 317 active+clean; 599 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.4 MiB/s rd, 5.9 KiB/s wr, 124 op/s
Jan 20 15:16:36 compute-1 nova_compute[225855]: 2026-01-20 15:16:36.977 225859 DEBUG nova.compute.resource_tracker [None req-e9459132-ba13-41d9-a7cf-f952f09929e6 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] Instance f1ded131-d9a3-4e93-ad99-53ee2695d5c8 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 192, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 20 15:16:36 compute-1 nova_compute[225855]: 2026-01-20 15:16:36.977 225859 DEBUG nova.compute.resource_tracker [None req-e9459132-ba13-41d9-a7cf-f952f09929e6 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] Migration 9b925066-c218-4b07-910d-90dd336bf952 is active on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1640
Jan 20 15:16:36 compute-1 nova_compute[225855]: 2026-01-20 15:16:36.977 225859 DEBUG nova.compute.resource_tracker [None req-e9459132-ba13-41d9-a7cf-f952f09929e6 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 15:16:36 compute-1 nova_compute[225855]: 2026-01-20 15:16:36.978 225859 DEBUG nova.compute.resource_tracker [None req-e9459132-ba13-41d9-a7cf-f952f09929e6 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=704MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 15:16:37 compute-1 nova_compute[225855]: 2026-01-20 15:16:37.075 225859 DEBUG oslo_concurrency.processutils [None req-e9459132-ba13-41d9-a7cf-f952f09929e6 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:16:37 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:16:37 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:16:37 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:16:37.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:16:37 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:16:37 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:16:37 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:16:37.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:16:37 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:16:37 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/512128344' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:16:37 compute-1 nova_compute[225855]: 2026-01-20 15:16:37.519 225859 DEBUG oslo_concurrency.processutils [None req-e9459132-ba13-41d9-a7cf-f952f09929e6 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:16:37 compute-1 nova_compute[225855]: 2026-01-20 15:16:37.525 225859 DEBUG nova.compute.provider_tree [None req-e9459132-ba13-41d9-a7cf-f952f09929e6 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:16:37 compute-1 nova_compute[225855]: 2026-01-20 15:16:37.549 225859 DEBUG nova.scheduler.client.report [None req-e9459132-ba13-41d9-a7cf-f952f09929e6 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:16:37 compute-1 nova_compute[225855]: 2026-01-20 15:16:37.576 225859 DEBUG nova.compute.resource_tracker [None req-e9459132-ba13-41d9-a7cf-f952f09929e6 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 15:16:37 compute-1 nova_compute[225855]: 2026-01-20 15:16:37.577 225859 DEBUG oslo_concurrency.lockutils [None req-e9459132-ba13-41d9-a7cf-f952f09929e6 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.070s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:16:37 compute-1 nova_compute[225855]: 2026-01-20 15:16:37.584 225859 INFO nova.compute.manager [None req-e9459132-ba13-41d9-a7cf-f952f09929e6 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] [instance: b5656c1b-5ac7-4b93-a25d-420e1e294678] Migrating instance to compute-0.ctlplane.example.com finished successfully.
Jan 20 15:16:37 compute-1 nova_compute[225855]: 2026-01-20 15:16:37.684 225859 INFO nova.scheduler.client.report [None req-e9459132-ba13-41d9-a7cf-f952f09929e6 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] Deleted allocation for migration 9b925066-c218-4b07-910d-90dd336bf952
Jan 20 15:16:37 compute-1 nova_compute[225855]: 2026-01-20 15:16:37.684 225859 DEBUG nova.virt.libvirt.driver [None req-e9459132-ba13-41d9-a7cf-f952f09929e6 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] [instance: b5656c1b-5ac7-4b93-a25d-420e1e294678] Live migration monitoring is all done _live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10662
Jan 20 15:16:37 compute-1 nova_compute[225855]: 2026-01-20 15:16:37.986 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:16:37 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/512128344' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:16:38 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e408 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:16:39 compute-1 ceph-mon[81775]: pgmap v2824: 321 pgs: 2 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 317 active+clean; 599 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 5.8 KiB/s wr, 123 op/s
Jan 20 15:16:39 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:16:39 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:16:39 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:16:39.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:16:39 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:16:39 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:16:39 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:16:39.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:16:39 compute-1 nova_compute[225855]: 2026-01-20 15:16:39.960 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:16:40 compute-1 nova_compute[225855]: 2026-01-20 15:16:40.831 225859 DEBUG nova.compute.manager [req-bad45644-521c-4271-8d87-180ff0e8abd4 req-3c157dd2-3016-4cf1-8659-8e525fbba106 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f1ded131-d9a3-4e93-ad99-53ee2695d5c8] Received event network-changed-0e93d1de-671e-4e37-8e79-44bed7981254 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:16:40 compute-1 nova_compute[225855]: 2026-01-20 15:16:40.831 225859 DEBUG nova.compute.manager [req-bad45644-521c-4271-8d87-180ff0e8abd4 req-3c157dd2-3016-4cf1-8659-8e525fbba106 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f1ded131-d9a3-4e93-ad99-53ee2695d5c8] Refreshing instance network info cache due to event network-changed-0e93d1de-671e-4e37-8e79-44bed7981254. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 15:16:40 compute-1 nova_compute[225855]: 2026-01-20 15:16:40.831 225859 DEBUG oslo_concurrency.lockutils [req-bad45644-521c-4271-8d87-180ff0e8abd4 req-3c157dd2-3016-4cf1-8659-8e525fbba106 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-f1ded131-d9a3-4e93-ad99-53ee2695d5c8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:16:40 compute-1 nova_compute[225855]: 2026-01-20 15:16:40.832 225859 DEBUG oslo_concurrency.lockutils [req-bad45644-521c-4271-8d87-180ff0e8abd4 req-3c157dd2-3016-4cf1-8659-8e525fbba106 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-f1ded131-d9a3-4e93-ad99-53ee2695d5c8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:16:40 compute-1 nova_compute[225855]: 2026-01-20 15:16:40.832 225859 DEBUG nova.network.neutron [req-bad45644-521c-4271-8d87-180ff0e8abd4 req-3c157dd2-3016-4cf1-8659-8e525fbba106 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f1ded131-d9a3-4e93-ad99-53ee2695d5c8] Refreshing network info cache for port 0e93d1de-671e-4e37-8e79-44bed7981254 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 15:16:41 compute-1 ceph-mon[81775]: pgmap v2825: 321 pgs: 321 active+clean; 599 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 3.7 KiB/s wr, 70 op/s
Jan 20 15:16:41 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:16:41 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:16:41 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:16:41.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:16:41 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:16:41 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:16:41 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:16:41.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:16:41 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e409 e409: 3 total, 3 up, 3 in
Jan 20 15:16:41 compute-1 sudo[305556]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:16:41 compute-1 sudo[305556]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:16:41 compute-1 sudo[305556]: pam_unix(sudo:session): session closed for user root
Jan 20 15:16:42 compute-1 sudo[305581]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:16:42 compute-1 sudo[305581]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:16:42 compute-1 sudo[305581]: pam_unix(sudo:session): session closed for user root
Jan 20 15:16:42 compute-1 ceph-mon[81775]: osdmap e409: 3 total, 3 up, 3 in
Jan 20 15:16:42 compute-1 ceph-mon[81775]: pgmap v2827: 321 pgs: 321 active+clean; 599 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 211 KiB/s rd, 3.7 KiB/s wr, 11 op/s
Jan 20 15:16:42 compute-1 nova_compute[225855]: 2026-01-20 15:16:42.870 225859 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1768922187.8690405, b5656c1b-5ac7-4b93-a25d-420e1e294678 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:16:42 compute-1 nova_compute[225855]: 2026-01-20 15:16:42.870 225859 INFO nova.compute.manager [-] [instance: b5656c1b-5ac7-4b93-a25d-420e1e294678] VM Stopped (Lifecycle Event)
Jan 20 15:16:42 compute-1 ceph-mon[81775]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #136. Immutable memtables: 0.
Jan 20 15:16:42 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:16:42.877775) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 20 15:16:42 compute-1 ceph-mon[81775]: rocksdb: [db/flush_job.cc:856] [default] [JOB 85] Flushing memtable with next log file: 136
Jan 20 15:16:42 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768922202877834, "job": 85, "event": "flush_started", "num_memtables": 1, "num_entries": 2434, "num_deletes": 255, "total_data_size": 5483757, "memory_usage": 5560424, "flush_reason": "Manual Compaction"}
Jan 20 15:16:42 compute-1 ceph-mon[81775]: rocksdb: [db/flush_job.cc:885] [default] [JOB 85] Level-0 flush table #137: started
Jan 20 15:16:42 compute-1 nova_compute[225855]: 2026-01-20 15:16:42.893 225859 DEBUG nova.compute.manager [None req-f7e24bc1-ec39-4929-9e42-80bac89b93b5 - - - - - -] [instance: b5656c1b-5ac7-4b93-a25d-420e1e294678] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:16:42 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768922202901979, "cf_name": "default", "job": 85, "event": "table_file_creation", "file_number": 137, "file_size": 3592662, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 66381, "largest_seqno": 68810, "table_properties": {"data_size": 3582980, "index_size": 6047, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2565, "raw_key_size": 20909, "raw_average_key_size": 20, "raw_value_size": 3563296, "raw_average_value_size": 3524, "num_data_blocks": 263, "num_entries": 1011, "num_filter_entries": 1011, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768922008, "oldest_key_time": 1768922008, "file_creation_time": 1768922202, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1539d774-8a6f-4e48-b253-137c44586344", "db_session_id": "LFF7G2OZDOU7TKQ8MKAH", "orig_file_number": 137, "seqno_to_time_mapping": "N/A"}}
Jan 20 15:16:42 compute-1 ceph-mon[81775]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 85] Flush lasted 24246 microseconds, and 8853 cpu microseconds.
Jan 20 15:16:42 compute-1 ceph-mon[81775]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 15:16:42 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:16:42.902017) [db/flush_job.cc:967] [default] [JOB 85] Level-0 flush table #137: 3592662 bytes OK
Jan 20 15:16:42 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:16:42.902036) [db/memtable_list.cc:519] [default] Level-0 commit table #137 started
Jan 20 15:16:42 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:16:42.904360) [db/memtable_list.cc:722] [default] Level-0 commit table #137: memtable #1 done
Jan 20 15:16:42 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:16:42.904413) EVENT_LOG_v1 {"time_micros": 1768922202904402, "job": 85, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 20 15:16:42 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:16:42.904442) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 20 15:16:42 compute-1 ceph-mon[81775]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 85] Try to delete WAL files size 5472980, prev total WAL file size 5472980, number of live WAL files 2.
Jan 20 15:16:42 compute-1 ceph-mon[81775]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000133.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 15:16:42 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:16:42.905750) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035373733' seq:72057594037927935, type:22 .. '7061786F730036303235' seq:0, type:0; will stop at (end)
Jan 20 15:16:42 compute-1 ceph-mon[81775]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 86] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 20 15:16:42 compute-1 ceph-mon[81775]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 85 Base level 0, inputs: [137(3508KB)], [135(9718KB)]
Jan 20 15:16:42 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768922202905785, "job": 86, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [137], "files_L6": [135], "score": -1, "input_data_size": 13544434, "oldest_snapshot_seqno": -1}
Jan 20 15:16:42 compute-1 ceph-mon[81775]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 86] Generated table #138: 9255 keys, 11673229 bytes, temperature: kUnknown
Jan 20 15:16:42 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768922202994010, "cf_name": "default", "job": 86, "event": "table_file_creation", "file_number": 138, "file_size": 11673229, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11613200, "index_size": 35788, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 23173, "raw_key_size": 243101, "raw_average_key_size": 26, "raw_value_size": 11450388, "raw_average_value_size": 1237, "num_data_blocks": 1364, "num_entries": 9255, "num_filter_entries": 9255, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768917474, "oldest_key_time": 0, "file_creation_time": 1768922202, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1539d774-8a6f-4e48-b253-137c44586344", "db_session_id": "LFF7G2OZDOU7TKQ8MKAH", "orig_file_number": 138, "seqno_to_time_mapping": "N/A"}}
Jan 20 15:16:42 compute-1 ceph-mon[81775]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 15:16:42 compute-1 nova_compute[225855]: 2026-01-20 15:16:42.995 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:16:42 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:16:42.994261) [db/compaction/compaction_job.cc:1663] [default] [JOB 86] Compacted 1@0 + 1@6 files to L6 => 11673229 bytes
Jan 20 15:16:42 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:16:42.995997) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 153.4 rd, 132.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.4, 9.5 +0.0 blob) out(11.1 +0.0 blob), read-write-amplify(7.0) write-amplify(3.2) OK, records in: 9780, records dropped: 525 output_compression: NoCompression
Jan 20 15:16:42 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:16:42.996058) EVENT_LOG_v1 {"time_micros": 1768922202996036, "job": 86, "event": "compaction_finished", "compaction_time_micros": 88303, "compaction_time_cpu_micros": 27975, "output_level": 6, "num_output_files": 1, "total_output_size": 11673229, "num_input_records": 9780, "num_output_records": 9255, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 20 15:16:42 compute-1 ceph-mon[81775]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000137.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 15:16:42 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768922202996901, "job": 86, "event": "table_file_deletion", "file_number": 137}
Jan 20 15:16:42 compute-1 ceph-mon[81775]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000135.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 15:16:42 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768922202999125, "job": 86, "event": "table_file_deletion", "file_number": 135}
Jan 20 15:16:42 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:16:42.905682) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:16:42 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:16:42.999234) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:16:42 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:16:42.999241) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:16:42 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:16:42.999243) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:16:42 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:16:42.999245) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:16:42 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:16:42.999247) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:16:43 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:16:43 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:16:43 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:16:43.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:16:43 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e409 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:16:43 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:16:43 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:16:43 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:16:43.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:16:44 compute-1 nova_compute[225855]: 2026-01-20 15:16:44.191 225859 DEBUG nova.network.neutron [req-bad45644-521c-4271-8d87-180ff0e8abd4 req-3c157dd2-3016-4cf1-8659-8e525fbba106 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f1ded131-d9a3-4e93-ad99-53ee2695d5c8] Updated VIF entry in instance network info cache for port 0e93d1de-671e-4e37-8e79-44bed7981254. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 15:16:44 compute-1 nova_compute[225855]: 2026-01-20 15:16:44.192 225859 DEBUG nova.network.neutron [req-bad45644-521c-4271-8d87-180ff0e8abd4 req-3c157dd2-3016-4cf1-8659-8e525fbba106 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f1ded131-d9a3-4e93-ad99-53ee2695d5c8] Updating instance_info_cache with network_info: [{"id": "0e93d1de-671e-4e37-8e79-44bed7981254", "address": "fa:16:3e:99:5e:ed", "network": {"id": "c1f4a971-0bd7-41ce-bdf6-5acb2b1b4bab", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1423306001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fff727019f86407498e83d7948d54962", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0e93d1de-67", "ovs_interfaceid": "0e93d1de-671e-4e37-8e79-44bed7981254", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:16:44 compute-1 nova_compute[225855]: 2026-01-20 15:16:44.251 225859 DEBUG oslo_concurrency.lockutils [req-bad45644-521c-4271-8d87-180ff0e8abd4 req-3c157dd2-3016-4cf1-8659-8e525fbba106 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-f1ded131-d9a3-4e93-ad99-53ee2695d5c8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:16:44 compute-1 ceph-mon[81775]: pgmap v2828: 321 pgs: 321 active+clean; 568 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 175 KiB/s rd, 3.4 KiB/s wr, 16 op/s
Jan 20 15:16:44 compute-1 nova_compute[225855]: 2026-01-20 15:16:44.963 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:16:45 compute-1 podman[305608]: 2026-01-20 15:16:45.021990571 +0000 UTC m=+0.074473614 container health_status 72be95c12e041eb2cd1a16ad70dfa1391d9f03a5f3180a614ead9459dcfcd46f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 15:16:45 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:16:45 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:16:45 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:16:45.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:16:45 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:16:45 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:16:45 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:16:45.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:16:47 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:16:47 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:16:47 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:16:47.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:16:47 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:16:47 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:16:47 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:16:47.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:16:47 compute-1 ceph-mon[81775]: pgmap v2829: 321 pgs: 321 active+clean; 519 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 662 KiB/s rd, 17 KiB/s wr, 90 op/s
Jan 20 15:16:47 compute-1 nova_compute[225855]: 2026-01-20 15:16:47.998 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:16:48 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e409 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:16:48 compute-1 nova_compute[225855]: 2026-01-20 15:16:48.586 225859 DEBUG oslo_concurrency.lockutils [None req-1902b019-7a1b-4a94-924f-2ba27a4743a5 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Acquiring lock "f1ded131-d9a3-4e93-ad99-53ee2695d5c8" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:16:48 compute-1 nova_compute[225855]: 2026-01-20 15:16:48.587 225859 DEBUG oslo_concurrency.lockutils [None req-1902b019-7a1b-4a94-924f-2ba27a4743a5 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Lock "f1ded131-d9a3-4e93-ad99-53ee2695d5c8" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:16:48 compute-1 nova_compute[225855]: 2026-01-20 15:16:48.600 225859 INFO nova.compute.manager [None req-1902b019-7a1b-4a94-924f-2ba27a4743a5 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] [instance: f1ded131-d9a3-4e93-ad99-53ee2695d5c8] Detaching volume 933c5c7a-f496-4bcc-b304-68156c235fe5
Jan 20 15:16:48 compute-1 ovn_controller[130490]: 2026-01-20T15:16:48Z|00850|binding|INFO|Releasing lport b20b0e27-0b08-4316-b6df-6784416f44c0 from this chassis (sb_readonly=0)
Jan 20 15:16:48 compute-1 nova_compute[225855]: 2026-01-20 15:16:48.688 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:16:48 compute-1 nova_compute[225855]: 2026-01-20 15:16:48.787 225859 INFO nova.virt.block_device [None req-1902b019-7a1b-4a94-924f-2ba27a4743a5 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] [instance: f1ded131-d9a3-4e93-ad99-53ee2695d5c8] Attempting to driver detach volume 933c5c7a-f496-4bcc-b304-68156c235fe5 from mountpoint /dev/vdb
Jan 20 15:16:48 compute-1 nova_compute[225855]: 2026-01-20 15:16:48.794 225859 DEBUG nova.virt.libvirt.driver [None req-1902b019-7a1b-4a94-924f-2ba27a4743a5 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Attempting to detach device vdb from instance f1ded131-d9a3-4e93-ad99-53ee2695d5c8 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Jan 20 15:16:48 compute-1 nova_compute[225855]: 2026-01-20 15:16:48.794 225859 DEBUG nova.virt.libvirt.guest [None req-1902b019-7a1b-4a94-924f-2ba27a4743a5 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] detach device xml: <disk type="network" device="disk">
Jan 20 15:16:48 compute-1 nova_compute[225855]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 20 15:16:48 compute-1 nova_compute[225855]:   <source protocol="rbd" name="volumes/volume-933c5c7a-f496-4bcc-b304-68156c235fe5">
Jan 20 15:16:48 compute-1 nova_compute[225855]:     <host name="192.168.122.100" port="6789"/>
Jan 20 15:16:48 compute-1 nova_compute[225855]:     <host name="192.168.122.102" port="6789"/>
Jan 20 15:16:48 compute-1 nova_compute[225855]:     <host name="192.168.122.101" port="6789"/>
Jan 20 15:16:48 compute-1 nova_compute[225855]:   </source>
Jan 20 15:16:48 compute-1 nova_compute[225855]:   <target dev="vdb" bus="virtio"/>
Jan 20 15:16:48 compute-1 nova_compute[225855]:   <serial>933c5c7a-f496-4bcc-b304-68156c235fe5</serial>
Jan 20 15:16:48 compute-1 nova_compute[225855]:   <shareable/>
Jan 20 15:16:48 compute-1 nova_compute[225855]:   <address type="pci" domain="0x0000" bus="0x04" slot="0x00" function="0x0"/>
Jan 20 15:16:48 compute-1 nova_compute[225855]: </disk>
Jan 20 15:16:48 compute-1 nova_compute[225855]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Jan 20 15:16:48 compute-1 nova_compute[225855]: 2026-01-20 15:16:48.800 225859 INFO nova.virt.libvirt.driver [None req-1902b019-7a1b-4a94-924f-2ba27a4743a5 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Successfully detached device vdb from instance f1ded131-d9a3-4e93-ad99-53ee2695d5c8 from the persistent domain config.
Jan 20 15:16:48 compute-1 nova_compute[225855]: 2026-01-20 15:16:48.800 225859 DEBUG nova.virt.libvirt.driver [None req-1902b019-7a1b-4a94-924f-2ba27a4743a5 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance f1ded131-d9a3-4e93-ad99-53ee2695d5c8 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Jan 20 15:16:48 compute-1 nova_compute[225855]: 2026-01-20 15:16:48.801 225859 DEBUG nova.virt.libvirt.guest [None req-1902b019-7a1b-4a94-924f-2ba27a4743a5 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] detach device xml: <disk type="network" device="disk">
Jan 20 15:16:48 compute-1 nova_compute[225855]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 20 15:16:48 compute-1 nova_compute[225855]:   <source protocol="rbd" name="volumes/volume-933c5c7a-f496-4bcc-b304-68156c235fe5">
Jan 20 15:16:48 compute-1 nova_compute[225855]:     <host name="192.168.122.100" port="6789"/>
Jan 20 15:16:48 compute-1 nova_compute[225855]:     <host name="192.168.122.102" port="6789"/>
Jan 20 15:16:48 compute-1 nova_compute[225855]:     <host name="192.168.122.101" port="6789"/>
Jan 20 15:16:48 compute-1 nova_compute[225855]:   </source>
Jan 20 15:16:48 compute-1 nova_compute[225855]:   <target dev="vdb" bus="virtio"/>
Jan 20 15:16:48 compute-1 nova_compute[225855]:   <serial>933c5c7a-f496-4bcc-b304-68156c235fe5</serial>
Jan 20 15:16:48 compute-1 nova_compute[225855]:   <shareable/>
Jan 20 15:16:48 compute-1 nova_compute[225855]:   <address type="pci" domain="0x0000" bus="0x04" slot="0x00" function="0x0"/>
Jan 20 15:16:48 compute-1 nova_compute[225855]: </disk>
Jan 20 15:16:48 compute-1 nova_compute[225855]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Jan 20 15:16:48 compute-1 nova_compute[225855]: 2026-01-20 15:16:48.902 225859 DEBUG nova.virt.libvirt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Received event <DeviceRemovedEvent: 1768922208.9024026, f1ded131-d9a3-4e93-ad99-53ee2695d5c8 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Jan 20 15:16:48 compute-1 nova_compute[225855]: 2026-01-20 15:16:48.905 225859 DEBUG nova.virt.libvirt.driver [None req-1902b019-7a1b-4a94-924f-2ba27a4743a5 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance f1ded131-d9a3-4e93-ad99-53ee2695d5c8 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Jan 20 15:16:48 compute-1 nova_compute[225855]: 2026-01-20 15:16:48.907 225859 INFO nova.virt.libvirt.driver [None req-1902b019-7a1b-4a94-924f-2ba27a4743a5 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Successfully detached device vdb from instance f1ded131-d9a3-4e93-ad99-53ee2695d5c8 from the live domain config.
Jan 20 15:16:49 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:16:49 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:16:49 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:16:49.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:16:49 compute-1 nova_compute[225855]: 2026-01-20 15:16:49.298 225859 DEBUG nova.objects.instance [None req-1902b019-7a1b-4a94-924f-2ba27a4743a5 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Lazy-loading 'flavor' on Instance uuid f1ded131-d9a3-4e93-ad99-53ee2695d5c8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:16:49 compute-1 nova_compute[225855]: 2026-01-20 15:16:49.348 225859 DEBUG oslo_concurrency.lockutils [None req-1902b019-7a1b-4a94-924f-2ba27a4743a5 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Lock "f1ded131-d9a3-4e93-ad99-53ee2695d5c8" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.761s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:16:49 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:16:49 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:16:49 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:16:49.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:16:49 compute-1 ceph-mon[81775]: pgmap v2830: 321 pgs: 321 active+clean; 519 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 662 KiB/s rd, 17 KiB/s wr, 90 op/s
Jan 20 15:16:49 compute-1 nova_compute[225855]: 2026-01-20 15:16:49.966 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:16:50 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:16:50.536 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=62, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '12:bb:42', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '06:92:24:f7:15:56'}, ipsec=False) old=SB_Global(nb_cfg=61) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:16:50 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:16:50.537 140354 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 20 15:16:50 compute-1 nova_compute[225855]: 2026-01-20 15:16:50.536 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:16:50 compute-1 ceph-mon[81775]: pgmap v2831: 321 pgs: 321 active+clean; 521 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 664 KiB/s rd, 17 KiB/s wr, 89 op/s
Jan 20 15:16:51 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:16:51 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:16:51 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:16:51.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:16:51 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:16:51 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:16:51 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:16:51.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:16:52 compute-1 ceph-mon[81775]: pgmap v2832: 321 pgs: 321 active+clean; 521 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 616 KiB/s rd, 26 KiB/s wr, 83 op/s
Jan 20 15:16:53 compute-1 nova_compute[225855]: 2026-01-20 15:16:53.000 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:16:53 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:16:53 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:16:53 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:16:53.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:16:53 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e409 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:16:53 compute-1 nova_compute[225855]: 2026-01-20 15:16:53.386 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:16:53 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:16:53 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:16:53 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:16:53.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:16:54 compute-1 nova_compute[225855]: 2026-01-20 15:16:54.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:16:54 compute-1 nova_compute[225855]: 2026-01-20 15:16:54.340 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 15:16:54 compute-1 ceph-mon[81775]: pgmap v2833: 321 pgs: 321 active+clean; 521 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 551 KiB/s rd, 24 KiB/s wr, 74 op/s
Jan 20 15:16:54 compute-1 nova_compute[225855]: 2026-01-20 15:16:54.968 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:16:55 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:16:55 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:16:55 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:16:55.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:16:55 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:16:55 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:16:55 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:16:55.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:16:55 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:16:55.538 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5ffd4ac3-9266-4927-98ad-20a17782c725, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '62'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:16:56 compute-1 sudo[305642]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:16:56 compute-1 sudo[305642]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:16:56 compute-1 sudo[305642]: pam_unix(sudo:session): session closed for user root
Jan 20 15:16:56 compute-1 sudo[305667]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:16:56 compute-1 sudo[305667]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:16:56 compute-1 sudo[305667]: pam_unix(sudo:session): session closed for user root
Jan 20 15:16:56 compute-1 sudo[305692]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:16:56 compute-1 sudo[305692]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:16:56 compute-1 sudo[305692]: pam_unix(sudo:session): session closed for user root
Jan 20 15:16:56 compute-1 sudo[305717]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 20 15:16:56 compute-1 sudo[305717]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:16:56 compute-1 ceph-mon[81775]: pgmap v2834: 321 pgs: 321 active+clean; 521 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 27 KiB/s wr, 70 op/s
Jan 20 15:16:57 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:16:57 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:16:57 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:16:57.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:16:57 compute-1 sudo[305717]: pam_unix(sudo:session): session closed for user root
Jan 20 15:16:57 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:16:57 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:16:57 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:16:57.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:16:57 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:16:57 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 15:16:57 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:16:57 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 15:16:57 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 15:16:57 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:16:57 compute-1 nova_compute[225855]: 2026-01-20 15:16:57.826 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:16:58 compute-1 nova_compute[225855]: 2026-01-20 15:16:58.001 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:16:58 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e409 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:16:58 compute-1 ceph-mon[81775]: pgmap v2835: 321 pgs: 321 active+clean; 521 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 14 KiB/s wr, 2 op/s
Jan 20 15:16:59 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:16:59 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:16:59 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:16:59.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:16:59 compute-1 nova_compute[225855]: 2026-01-20 15:16:59.340 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:16:59 compute-1 nova_compute[225855]: 2026-01-20 15:16:59.340 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 15:16:59 compute-1 nova_compute[225855]: 2026-01-20 15:16:59.434 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 20 15:16:59 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:16:59 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:16:59 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:16:59.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:16:59 compute-1 nova_compute[225855]: 2026-01-20 15:16:59.970 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:17:00 compute-1 nova_compute[225855]: 2026-01-20 15:17:00.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:17:00 compute-1 ceph-osd[79119]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/lock/cls_lock.cc:291: Could not read list of current lockers off disk: (2) No such file or directory
Jan 20 15:17:01 compute-1 podman[305775]: 2026-01-20 15:17:01.009564655 +0000 UTC m=+0.050919995 container health_status 533cd8e0c3b8b7f910f15cef49e7f879ab46c12ce646a35628c15f13725eb500 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 20 15:17:01 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:17:01 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:17:01 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:17:01.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:17:01 compute-1 nova_compute[225855]: 2026-01-20 15:17:01.340 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:17:01 compute-1 nova_compute[225855]: 2026-01-20 15:17:01.340 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:17:01 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:17:01 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:17:01 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:17:01.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:17:01 compute-1 ceph-mon[81775]: pgmap v2836: 321 pgs: 321 active+clean; 521 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 15 KiB/s wr, 10 op/s
Jan 20 15:17:02 compute-1 sudo[305795]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:17:02 compute-1 sudo[305795]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:17:02 compute-1 sudo[305795]: pam_unix(sudo:session): session closed for user root
Jan 20 15:17:02 compute-1 sudo[305820]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:17:02 compute-1 sudo[305820]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:17:02 compute-1 sudo[305820]: pam_unix(sudo:session): session closed for user root
Jan 20 15:17:02 compute-1 nova_compute[225855]: 2026-01-20 15:17:02.340 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:17:02 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/1955683945' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:17:03 compute-1 nova_compute[225855]: 2026-01-20 15:17:03.004 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:17:03 compute-1 sudo[305845]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:17:03 compute-1 sudo[305845]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:17:03 compute-1 sudo[305845]: pam_unix(sudo:session): session closed for user root
Jan 20 15:17:03 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:17:03 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:17:03 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:17:03.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:17:03 compute-1 sudo[305870]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 15:17:03 compute-1 sudo[305870]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:17:03 compute-1 sudo[305870]: pam_unix(sudo:session): session closed for user root
Jan 20 15:17:03 compute-1 nova_compute[225855]: 2026-01-20 15:17:03.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:17:03 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e409 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:17:03 compute-1 nova_compute[225855]: 2026-01-20 15:17:03.383 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:17:03 compute-1 nova_compute[225855]: 2026-01-20 15:17:03.384 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:17:03 compute-1 nova_compute[225855]: 2026-01-20 15:17:03.384 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:17:03 compute-1 nova_compute[225855]: 2026-01-20 15:17:03.384 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 15:17:03 compute-1 nova_compute[225855]: 2026-01-20 15:17:03.384 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:17:03 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:17:03 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:17:03 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:17:03.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:17:03 compute-1 nova_compute[225855]: 2026-01-20 15:17:03.452 225859 DEBUG oslo_concurrency.lockutils [None req-18fdf58e-e22e-47f3-bcd4-3c053d6c44ca 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Acquiring lock "4f38d24a-3458-4c59-8480-8094ffcbb5aa" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:17:03 compute-1 nova_compute[225855]: 2026-01-20 15:17:03.452 225859 DEBUG oslo_concurrency.lockutils [None req-18fdf58e-e22e-47f3-bcd4-3c053d6c44ca 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Lock "4f38d24a-3458-4c59-8480-8094ffcbb5aa" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:17:03 compute-1 nova_compute[225855]: 2026-01-20 15:17:03.471 225859 DEBUG nova.compute.manager [None req-18fdf58e-e22e-47f3-bcd4-3c053d6c44ca 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 4f38d24a-3458-4c59-8480-8094ffcbb5aa] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 20 15:17:03 compute-1 nova_compute[225855]: 2026-01-20 15:17:03.552 225859 DEBUG oslo_concurrency.lockutils [None req-18fdf58e-e22e-47f3-bcd4-3c053d6c44ca 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:17:03 compute-1 nova_compute[225855]: 2026-01-20 15:17:03.553 225859 DEBUG oslo_concurrency.lockutils [None req-18fdf58e-e22e-47f3-bcd4-3c053d6c44ca 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:17:03 compute-1 nova_compute[225855]: 2026-01-20 15:17:03.562 225859 DEBUG nova.virt.hardware [None req-18fdf58e-e22e-47f3-bcd4-3c053d6c44ca 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 20 15:17:03 compute-1 nova_compute[225855]: 2026-01-20 15:17:03.563 225859 INFO nova.compute.claims [None req-18fdf58e-e22e-47f3-bcd4-3c053d6c44ca 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 4f38d24a-3458-4c59-8480-8094ffcbb5aa] Claim successful on node compute-1.ctlplane.example.com
Jan 20 15:17:03 compute-1 nova_compute[225855]: 2026-01-20 15:17:03.667 225859 DEBUG oslo_concurrency.processutils [None req-18fdf58e-e22e-47f3-bcd4-3c053d6c44ca 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:17:03 compute-1 ceph-mon[81775]: pgmap v2837: 321 pgs: 321 active+clean; 537 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 986 KiB/s wr, 35 op/s
Jan 20 15:17:03 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:17:03 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:17:03 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/548683424' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:17:03 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:17:03 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2580642773' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:17:03 compute-1 nova_compute[225855]: 2026-01-20 15:17:03.807 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.422s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:17:03 compute-1 nova_compute[225855]: 2026-01-20 15:17:03.873 225859 DEBUG nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] skipping disk for instance-000000bb as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 15:17:03 compute-1 nova_compute[225855]: 2026-01-20 15:17:03.873 225859 DEBUG nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] skipping disk for instance-000000bb as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 15:17:04 compute-1 nova_compute[225855]: 2026-01-20 15:17:04.038 225859 WARNING nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 15:17:04 compute-1 nova_compute[225855]: 2026-01-20 15:17:04.040 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=3985MB free_disk=20.805618286132812GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 15:17:04 compute-1 nova_compute[225855]: 2026-01-20 15:17:04.040 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:17:04 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:17:04 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4051330510' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:17:04 compute-1 nova_compute[225855]: 2026-01-20 15:17:04.124 225859 DEBUG oslo_concurrency.processutils [None req-18fdf58e-e22e-47f3-bcd4-3c053d6c44ca 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:17:04 compute-1 nova_compute[225855]: 2026-01-20 15:17:04.128 225859 DEBUG nova.compute.provider_tree [None req-18fdf58e-e22e-47f3-bcd4-3c053d6c44ca 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:17:04 compute-1 nova_compute[225855]: 2026-01-20 15:17:04.148 225859 DEBUG nova.scheduler.client.report [None req-18fdf58e-e22e-47f3-bcd4-3c053d6c44ca 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:17:04 compute-1 nova_compute[225855]: 2026-01-20 15:17:04.171 225859 DEBUG oslo_concurrency.lockutils [None req-18fdf58e-e22e-47f3-bcd4-3c053d6c44ca 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.618s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:17:04 compute-1 nova_compute[225855]: 2026-01-20 15:17:04.172 225859 DEBUG nova.compute.manager [None req-18fdf58e-e22e-47f3-bcd4-3c053d6c44ca 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 4f38d24a-3458-4c59-8480-8094ffcbb5aa] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 20 15:17:04 compute-1 nova_compute[225855]: 2026-01-20 15:17:04.175 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.135s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:17:04 compute-1 nova_compute[225855]: 2026-01-20 15:17:04.246 225859 DEBUG nova.compute.manager [None req-18fdf58e-e22e-47f3-bcd4-3c053d6c44ca 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 4f38d24a-3458-4c59-8480-8094ffcbb5aa] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 20 15:17:04 compute-1 nova_compute[225855]: 2026-01-20 15:17:04.247 225859 DEBUG nova.network.neutron [None req-18fdf58e-e22e-47f3-bcd4-3c053d6c44ca 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 4f38d24a-3458-4c59-8480-8094ffcbb5aa] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 20 15:17:04 compute-1 nova_compute[225855]: 2026-01-20 15:17:04.407 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Instance f1ded131-d9a3-4e93-ad99-53ee2695d5c8 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 192, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 20 15:17:04 compute-1 nova_compute[225855]: 2026-01-20 15:17:04.408 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Instance 4f38d24a-3458-4c59-8480-8094ffcbb5aa actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 20 15:17:04 compute-1 nova_compute[225855]: 2026-01-20 15:17:04.408 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 15:17:04 compute-1 nova_compute[225855]: 2026-01-20 15:17:04.409 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=832MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 15:17:04 compute-1 nova_compute[225855]: 2026-01-20 15:17:04.417 225859 INFO nova.virt.libvirt.driver [None req-18fdf58e-e22e-47f3-bcd4-3c053d6c44ca 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 4f38d24a-3458-4c59-8480-8094ffcbb5aa] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 20 15:17:04 compute-1 nova_compute[225855]: 2026-01-20 15:17:04.468 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:17:04 compute-1 nova_compute[225855]: 2026-01-20 15:17:04.540 225859 DEBUG nova.compute.manager [None req-18fdf58e-e22e-47f3-bcd4-3c053d6c44ca 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 4f38d24a-3458-4c59-8480-8094ffcbb5aa] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 20 15:17:04 compute-1 nova_compute[225855]: 2026-01-20 15:17:04.631 225859 DEBUG nova.compute.manager [None req-18fdf58e-e22e-47f3-bcd4-3c053d6c44ca 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 4f38d24a-3458-4c59-8480-8094ffcbb5aa] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 20 15:17:04 compute-1 nova_compute[225855]: 2026-01-20 15:17:04.633 225859 DEBUG nova.virt.libvirt.driver [None req-18fdf58e-e22e-47f3-bcd4-3c053d6c44ca 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 4f38d24a-3458-4c59-8480-8094ffcbb5aa] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 20 15:17:04 compute-1 nova_compute[225855]: 2026-01-20 15:17:04.634 225859 INFO nova.virt.libvirt.driver [None req-18fdf58e-e22e-47f3-bcd4-3c053d6c44ca 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 4f38d24a-3458-4c59-8480-8094ffcbb5aa] Creating image(s)
Jan 20 15:17:04 compute-1 nova_compute[225855]: 2026-01-20 15:17:04.660 225859 DEBUG nova.storage.rbd_utils [None req-18fdf58e-e22e-47f3-bcd4-3c053d6c44ca 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] rbd image 4f38d24a-3458-4c59-8480-8094ffcbb5aa_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:17:04 compute-1 nova_compute[225855]: 2026-01-20 15:17:04.685 225859 DEBUG nova.storage.rbd_utils [None req-18fdf58e-e22e-47f3-bcd4-3c053d6c44ca 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] rbd image 4f38d24a-3458-4c59-8480-8094ffcbb5aa_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:17:04 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/2580642773' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:17:04 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/4051330510' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:17:04 compute-1 ceph-mon[81775]: pgmap v2838: 321 pgs: 321 active+clean; 560 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.6 MiB/s wr, 38 op/s
Jan 20 15:17:04 compute-1 nova_compute[225855]: 2026-01-20 15:17:04.716 225859 DEBUG nova.storage.rbd_utils [None req-18fdf58e-e22e-47f3-bcd4-3c053d6c44ca 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] rbd image 4f38d24a-3458-4c59-8480-8094ffcbb5aa_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:17:04 compute-1 nova_compute[225855]: 2026-01-20 15:17:04.720 225859 DEBUG oslo_concurrency.processutils [None req-18fdf58e-e22e-47f3-bcd4-3c053d6c44ca 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:17:04 compute-1 nova_compute[225855]: 2026-01-20 15:17:04.790 225859 DEBUG oslo_concurrency.processutils [None req-18fdf58e-e22e-47f3-bcd4-3c053d6c44ca 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:17:04 compute-1 nova_compute[225855]: 2026-01-20 15:17:04.792 225859 DEBUG oslo_concurrency.lockutils [None req-18fdf58e-e22e-47f3-bcd4-3c053d6c44ca 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Acquiring lock "82d5c1918fd7c974214c7a48c1793a7a82560462" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:17:04 compute-1 nova_compute[225855]: 2026-01-20 15:17:04.792 225859 DEBUG oslo_concurrency.lockutils [None req-18fdf58e-e22e-47f3-bcd4-3c053d6c44ca 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:17:04 compute-1 nova_compute[225855]: 2026-01-20 15:17:04.793 225859 DEBUG oslo_concurrency.lockutils [None req-18fdf58e-e22e-47f3-bcd4-3c053d6c44ca 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:17:04 compute-1 nova_compute[225855]: 2026-01-20 15:17:04.818 225859 DEBUG nova.storage.rbd_utils [None req-18fdf58e-e22e-47f3-bcd4-3c053d6c44ca 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] rbd image 4f38d24a-3458-4c59-8480-8094ffcbb5aa_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:17:04 compute-1 nova_compute[225855]: 2026-01-20 15:17:04.822 225859 DEBUG oslo_concurrency.processutils [None req-18fdf58e-e22e-47f3-bcd4-3c053d6c44ca 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 4f38d24a-3458-4c59-8480-8094ffcbb5aa_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:17:04 compute-1 nova_compute[225855]: 2026-01-20 15:17:04.866 225859 DEBUG nova.policy [None req-18fdf58e-e22e-47f3-bcd4-3c053d6c44ca 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '442a7a5cb8ea426a82be9762b262d171', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '1ed5feeeafe7448a8efb47ab975b0ead', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 20 15:17:04 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:17:04 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2900254021' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:17:04 compute-1 nova_compute[225855]: 2026-01-20 15:17:04.907 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:17:04 compute-1 nova_compute[225855]: 2026-01-20 15:17:04.913 225859 DEBUG nova.compute.provider_tree [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:17:04 compute-1 nova_compute[225855]: 2026-01-20 15:17:04.928 225859 DEBUG nova.scheduler.client.report [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:17:04 compute-1 nova_compute[225855]: 2026-01-20 15:17:04.930 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 15:17:04 compute-1 nova_compute[225855]: 2026-01-20 15:17:04.930 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.755s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:17:04 compute-1 nova_compute[225855]: 2026-01-20 15:17:04.972 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:17:05 compute-1 nova_compute[225855]: 2026-01-20 15:17:05.080 225859 DEBUG oslo_concurrency.processutils [None req-18fdf58e-e22e-47f3-bcd4-3c053d6c44ca 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 4f38d24a-3458-4c59-8480-8094ffcbb5aa_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.258s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:17:05 compute-1 nova_compute[225855]: 2026-01-20 15:17:05.140 225859 DEBUG nova.storage.rbd_utils [None req-18fdf58e-e22e-47f3-bcd4-3c053d6c44ca 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] resizing rbd image 4f38d24a-3458-4c59-8480-8094ffcbb5aa_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 20 15:17:05 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:17:05 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:17:05 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:17:05.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:17:05 compute-1 nova_compute[225855]: 2026-01-20 15:17:05.239 225859 DEBUG nova.objects.instance [None req-18fdf58e-e22e-47f3-bcd4-3c053d6c44ca 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Lazy-loading 'migration_context' on Instance uuid 4f38d24a-3458-4c59-8480-8094ffcbb5aa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:17:05 compute-1 nova_compute[225855]: 2026-01-20 15:17:05.254 225859 DEBUG nova.virt.libvirt.driver [None req-18fdf58e-e22e-47f3-bcd4-3c053d6c44ca 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 4f38d24a-3458-4c59-8480-8094ffcbb5aa] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 20 15:17:05 compute-1 nova_compute[225855]: 2026-01-20 15:17:05.254 225859 DEBUG nova.virt.libvirt.driver [None req-18fdf58e-e22e-47f3-bcd4-3c053d6c44ca 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 4f38d24a-3458-4c59-8480-8094ffcbb5aa] Ensure instance console log exists: /var/lib/nova/instances/4f38d24a-3458-4c59-8480-8094ffcbb5aa/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 20 15:17:05 compute-1 nova_compute[225855]: 2026-01-20 15:17:05.254 225859 DEBUG oslo_concurrency.lockutils [None req-18fdf58e-e22e-47f3-bcd4-3c053d6c44ca 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:17:05 compute-1 nova_compute[225855]: 2026-01-20 15:17:05.255 225859 DEBUG oslo_concurrency.lockutils [None req-18fdf58e-e22e-47f3-bcd4-3c053d6c44ca 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:17:05 compute-1 nova_compute[225855]: 2026-01-20 15:17:05.255 225859 DEBUG oslo_concurrency.lockutils [None req-18fdf58e-e22e-47f3-bcd4-3c053d6c44ca 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:17:05 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:17:05 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:17:05 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:17:05.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:17:05 compute-1 nova_compute[225855]: 2026-01-20 15:17:05.661 225859 DEBUG nova.network.neutron [None req-18fdf58e-e22e-47f3-bcd4-3c053d6c44ca 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 4f38d24a-3458-4c59-8480-8094ffcbb5aa] Successfully created port: d3a1fab4-7d4e-40cd-bdbb-b337196adbc5 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 20 15:17:05 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/2900254021' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:17:05 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/2608648706' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:17:06 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/3529627429' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:17:06 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/104006290' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:17:06 compute-1 ceph-mon[81775]: pgmap v2839: 321 pgs: 321 active+clean; 601 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 3.2 MiB/s wr, 64 op/s
Jan 20 15:17:07 compute-1 nova_compute[225855]: 2026-01-20 15:17:07.059 225859 DEBUG nova.network.neutron [None req-18fdf58e-e22e-47f3-bcd4-3c053d6c44ca 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 4f38d24a-3458-4c59-8480-8094ffcbb5aa] Successfully updated port: d3a1fab4-7d4e-40cd-bdbb-b337196adbc5 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 20 15:17:07 compute-1 nova_compute[225855]: 2026-01-20 15:17:07.077 225859 DEBUG oslo_concurrency.lockutils [None req-18fdf58e-e22e-47f3-bcd4-3c053d6c44ca 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Acquiring lock "refresh_cache-4f38d24a-3458-4c59-8480-8094ffcbb5aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:17:07 compute-1 nova_compute[225855]: 2026-01-20 15:17:07.077 225859 DEBUG oslo_concurrency.lockutils [None req-18fdf58e-e22e-47f3-bcd4-3c053d6c44ca 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Acquired lock "refresh_cache-4f38d24a-3458-4c59-8480-8094ffcbb5aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:17:07 compute-1 nova_compute[225855]: 2026-01-20 15:17:07.078 225859 DEBUG nova.network.neutron [None req-18fdf58e-e22e-47f3-bcd4-3c053d6c44ca 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 4f38d24a-3458-4c59-8480-8094ffcbb5aa] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 15:17:07 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:17:07 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:17:07 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:17:07.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:17:07 compute-1 nova_compute[225855]: 2026-01-20 15:17:07.213 225859 DEBUG nova.compute.manager [req-3a457955-0c78-4aed-99d2-34e09e8a012d req-2c4a8951-cfd2-46a4-9ed0-73a56d2e357e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 4f38d24a-3458-4c59-8480-8094ffcbb5aa] Received event network-changed-d3a1fab4-7d4e-40cd-bdbb-b337196adbc5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:17:07 compute-1 nova_compute[225855]: 2026-01-20 15:17:07.213 225859 DEBUG nova.compute.manager [req-3a457955-0c78-4aed-99d2-34e09e8a012d req-2c4a8951-cfd2-46a4-9ed0-73a56d2e357e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 4f38d24a-3458-4c59-8480-8094ffcbb5aa] Refreshing instance network info cache due to event network-changed-d3a1fab4-7d4e-40cd-bdbb-b337196adbc5. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 15:17:07 compute-1 nova_compute[225855]: 2026-01-20 15:17:07.214 225859 DEBUG oslo_concurrency.lockutils [req-3a457955-0c78-4aed-99d2-34e09e8a012d req-2c4a8951-cfd2-46a4-9ed0-73a56d2e357e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-4f38d24a-3458-4c59-8480-8094ffcbb5aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:17:07 compute-1 nova_compute[225855]: 2026-01-20 15:17:07.330 225859 DEBUG nova.network.neutron [None req-18fdf58e-e22e-47f3-bcd4-3c053d6c44ca 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 4f38d24a-3458-4c59-8480-8094ffcbb5aa] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 20 15:17:07 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:17:07 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:17:07 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:17:07.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:17:07 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/3133455725' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:17:07 compute-1 nova_compute[225855]: 2026-01-20 15:17:07.931 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:17:08 compute-1 nova_compute[225855]: 2026-01-20 15:17:08.006 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:17:08 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e409 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:17:08 compute-1 nova_compute[225855]: 2026-01-20 15:17:08.533 225859 DEBUG nova.network.neutron [None req-18fdf58e-e22e-47f3-bcd4-3c053d6c44ca 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 4f38d24a-3458-4c59-8480-8094ffcbb5aa] Updating instance_info_cache with network_info: [{"id": "d3a1fab4-7d4e-40cd-bdbb-b337196adbc5", "address": "fa:16:3e:20:6b:ff", "network": {"id": "4512e7e3-1668-4e98-8240-843256180395", "bridge": "br-int", "label": "tempest-network-smoke--434600618", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1ed5feeeafe7448a8efb47ab975b0ead", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd3a1fab4-7d", "ovs_interfaceid": "d3a1fab4-7d4e-40cd-bdbb-b337196adbc5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:17:08 compute-1 nova_compute[225855]: 2026-01-20 15:17:08.553 225859 DEBUG oslo_concurrency.lockutils [None req-18fdf58e-e22e-47f3-bcd4-3c053d6c44ca 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Releasing lock "refresh_cache-4f38d24a-3458-4c59-8480-8094ffcbb5aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:17:08 compute-1 nova_compute[225855]: 2026-01-20 15:17:08.553 225859 DEBUG nova.compute.manager [None req-18fdf58e-e22e-47f3-bcd4-3c053d6c44ca 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 4f38d24a-3458-4c59-8480-8094ffcbb5aa] Instance network_info: |[{"id": "d3a1fab4-7d4e-40cd-bdbb-b337196adbc5", "address": "fa:16:3e:20:6b:ff", "network": {"id": "4512e7e3-1668-4e98-8240-843256180395", "bridge": "br-int", "label": "tempest-network-smoke--434600618", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1ed5feeeafe7448a8efb47ab975b0ead", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd3a1fab4-7d", "ovs_interfaceid": "d3a1fab4-7d4e-40cd-bdbb-b337196adbc5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 20 15:17:08 compute-1 nova_compute[225855]: 2026-01-20 15:17:08.553 225859 DEBUG oslo_concurrency.lockutils [req-3a457955-0c78-4aed-99d2-34e09e8a012d req-2c4a8951-cfd2-46a4-9ed0-73a56d2e357e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-4f38d24a-3458-4c59-8480-8094ffcbb5aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:17:08 compute-1 nova_compute[225855]: 2026-01-20 15:17:08.554 225859 DEBUG nova.network.neutron [req-3a457955-0c78-4aed-99d2-34e09e8a012d req-2c4a8951-cfd2-46a4-9ed0-73a56d2e357e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 4f38d24a-3458-4c59-8480-8094ffcbb5aa] Refreshing network info cache for port d3a1fab4-7d4e-40cd-bdbb-b337196adbc5 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 15:17:08 compute-1 nova_compute[225855]: 2026-01-20 15:17:08.556 225859 DEBUG nova.virt.libvirt.driver [None req-18fdf58e-e22e-47f3-bcd4-3c053d6c44ca 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 4f38d24a-3458-4c59-8480-8094ffcbb5aa] Start _get_guest_xml network_info=[{"id": "d3a1fab4-7d4e-40cd-bdbb-b337196adbc5", "address": "fa:16:3e:20:6b:ff", "network": {"id": "4512e7e3-1668-4e98-8240-843256180395", "bridge": "br-int", "label": "tempest-network-smoke--434600618", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1ed5feeeafe7448a8efb47ab975b0ead", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd3a1fab4-7d", "ovs_interfaceid": "d3a1fab4-7d4e-40cd-bdbb-b337196adbc5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'device_type': 'disk', 'encryption_options': None, 'size': 0, 'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'guest_format': None, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'image_id': 'a32b3e07-16d8-46fd-9a7b-c242c432fcf9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 20 15:17:08 compute-1 nova_compute[225855]: 2026-01-20 15:17:08.561 225859 WARNING nova.virt.libvirt.driver [None req-18fdf58e-e22e-47f3-bcd4-3c053d6c44ca 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 15:17:08 compute-1 nova_compute[225855]: 2026-01-20 15:17:08.565 225859 DEBUG nova.virt.libvirt.host [None req-18fdf58e-e22e-47f3-bcd4-3c053d6c44ca 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 20 15:17:08 compute-1 nova_compute[225855]: 2026-01-20 15:17:08.566 225859 DEBUG nova.virt.libvirt.host [None req-18fdf58e-e22e-47f3-bcd4-3c053d6c44ca 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 20 15:17:08 compute-1 nova_compute[225855]: 2026-01-20 15:17:08.569 225859 DEBUG nova.virt.libvirt.host [None req-18fdf58e-e22e-47f3-bcd4-3c053d6c44ca 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 20 15:17:08 compute-1 nova_compute[225855]: 2026-01-20 15:17:08.570 225859 DEBUG nova.virt.libvirt.host [None req-18fdf58e-e22e-47f3-bcd4-3c053d6c44ca 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 20 15:17:08 compute-1 nova_compute[225855]: 2026-01-20 15:17:08.572 225859 DEBUG nova.virt.libvirt.driver [None req-18fdf58e-e22e-47f3-bcd4-3c053d6c44ca 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 20 15:17:08 compute-1 nova_compute[225855]: 2026-01-20 15:17:08.572 225859 DEBUG nova.virt.hardware [None req-18fdf58e-e22e-47f3-bcd4-3c053d6c44ca 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-20T14:21:55Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='522deaab-a741-4dbb-932d-d8b13a211c33',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 20 15:17:08 compute-1 nova_compute[225855]: 2026-01-20 15:17:08.573 225859 DEBUG nova.virt.hardware [None req-18fdf58e-e22e-47f3-bcd4-3c053d6c44ca 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 20 15:17:08 compute-1 nova_compute[225855]: 2026-01-20 15:17:08.573 225859 DEBUG nova.virt.hardware [None req-18fdf58e-e22e-47f3-bcd4-3c053d6c44ca 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 20 15:17:08 compute-1 nova_compute[225855]: 2026-01-20 15:17:08.573 225859 DEBUG nova.virt.hardware [None req-18fdf58e-e22e-47f3-bcd4-3c053d6c44ca 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 20 15:17:08 compute-1 nova_compute[225855]: 2026-01-20 15:17:08.573 225859 DEBUG nova.virt.hardware [None req-18fdf58e-e22e-47f3-bcd4-3c053d6c44ca 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 20 15:17:08 compute-1 nova_compute[225855]: 2026-01-20 15:17:08.574 225859 DEBUG nova.virt.hardware [None req-18fdf58e-e22e-47f3-bcd4-3c053d6c44ca 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 20 15:17:08 compute-1 nova_compute[225855]: 2026-01-20 15:17:08.574 225859 DEBUG nova.virt.hardware [None req-18fdf58e-e22e-47f3-bcd4-3c053d6c44ca 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 20 15:17:08 compute-1 nova_compute[225855]: 2026-01-20 15:17:08.574 225859 DEBUG nova.virt.hardware [None req-18fdf58e-e22e-47f3-bcd4-3c053d6c44ca 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 20 15:17:08 compute-1 nova_compute[225855]: 2026-01-20 15:17:08.574 225859 DEBUG nova.virt.hardware [None req-18fdf58e-e22e-47f3-bcd4-3c053d6c44ca 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 20 15:17:08 compute-1 nova_compute[225855]: 2026-01-20 15:17:08.574 225859 DEBUG nova.virt.hardware [None req-18fdf58e-e22e-47f3-bcd4-3c053d6c44ca 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 20 15:17:08 compute-1 nova_compute[225855]: 2026-01-20 15:17:08.575 225859 DEBUG nova.virt.hardware [None req-18fdf58e-e22e-47f3-bcd4-3c053d6c44ca 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 20 15:17:08 compute-1 nova_compute[225855]: 2026-01-20 15:17:08.578 225859 DEBUG oslo_concurrency.processutils [None req-18fdf58e-e22e-47f3-bcd4-3c053d6c44ca 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:17:08 compute-1 ceph-mon[81775]: pgmap v2840: 321 pgs: 321 active+clean; 601 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 40 KiB/s rd, 3.2 MiB/s wr, 63 op/s
Jan 20 15:17:09 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 15:17:09 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/630892543' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:17:09 compute-1 nova_compute[225855]: 2026-01-20 15:17:09.018 225859 DEBUG oslo_concurrency.processutils [None req-18fdf58e-e22e-47f3-bcd4-3c053d6c44ca 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:17:09 compute-1 nova_compute[225855]: 2026-01-20 15:17:09.043 225859 DEBUG nova.storage.rbd_utils [None req-18fdf58e-e22e-47f3-bcd4-3c053d6c44ca 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] rbd image 4f38d24a-3458-4c59-8480-8094ffcbb5aa_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:17:09 compute-1 nova_compute[225855]: 2026-01-20 15:17:09.047 225859 DEBUG oslo_concurrency.processutils [None req-18fdf58e-e22e-47f3-bcd4-3c053d6c44ca 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:17:09 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:17:09 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:17:09 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:17:09.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:17:09 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:17:09 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:17:09 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:17:09.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:17:09 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 15:17:09 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1661905444' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:17:09 compute-1 nova_compute[225855]: 2026-01-20 15:17:09.481 225859 DEBUG oslo_concurrency.processutils [None req-18fdf58e-e22e-47f3-bcd4-3c053d6c44ca 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:17:09 compute-1 nova_compute[225855]: 2026-01-20 15:17:09.482 225859 DEBUG nova.virt.libvirt.vif [None req-18fdf58e-e22e-47f3-bcd4-3c053d6c44ca 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T15:17:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-2057447015',display_name='tempest-TestNetworkAdvancedServerOps-server-2057447015',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-2057447015',id=191,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJPwC5MyMqyTrbrSBwBOpxBpSiLPbu1nzobGp6ktmxE+oIlgwGH9ZkqYZyAjLxwv50DDSq5iaSNQxNoKrNJWo+FdRObJJTJ5JQ9hbj5JsMfLfRRZUDmDAFS5rhSXxsMyYg==',key_name='tempest-TestNetworkAdvancedServerOps-386377958',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1ed5feeeafe7448a8efb47ab975b0ead',ramdisk_id='',reservation_id='r-foa05j4u',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-175282664',owner_user_name='tempest-TestNetworkAdvancedServerOps-175282664-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T15:17:04Z,user_data=None,user_id='442a7a5cb8ea426a82be9762b262d171',uuid=4f38d24a-3458-4c59-8480-8094ffcbb5aa,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d3a1fab4-7d4e-40cd-bdbb-b337196adbc5", "address": "fa:16:3e:20:6b:ff", "network": {"id": "4512e7e3-1668-4e98-8240-843256180395", "bridge": "br-int", "label": "tempest-network-smoke--434600618", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "1ed5feeeafe7448a8efb47ab975b0ead", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd3a1fab4-7d", "ovs_interfaceid": "d3a1fab4-7d4e-40cd-bdbb-b337196adbc5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 20 15:17:09 compute-1 nova_compute[225855]: 2026-01-20 15:17:09.483 225859 DEBUG nova.network.os_vif_util [None req-18fdf58e-e22e-47f3-bcd4-3c053d6c44ca 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Converting VIF {"id": "d3a1fab4-7d4e-40cd-bdbb-b337196adbc5", "address": "fa:16:3e:20:6b:ff", "network": {"id": "4512e7e3-1668-4e98-8240-843256180395", "bridge": "br-int", "label": "tempest-network-smoke--434600618", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1ed5feeeafe7448a8efb47ab975b0ead", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd3a1fab4-7d", "ovs_interfaceid": "d3a1fab4-7d4e-40cd-bdbb-b337196adbc5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 15:17:09 compute-1 nova_compute[225855]: 2026-01-20 15:17:09.484 225859 DEBUG nova.network.os_vif_util [None req-18fdf58e-e22e-47f3-bcd4-3c053d6c44ca 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:20:6b:ff,bridge_name='br-int',has_traffic_filtering=True,id=d3a1fab4-7d4e-40cd-bdbb-b337196adbc5,network=Network(4512e7e3-1668-4e98-8240-843256180395),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd3a1fab4-7d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 15:17:09 compute-1 nova_compute[225855]: 2026-01-20 15:17:09.485 225859 DEBUG nova.objects.instance [None req-18fdf58e-e22e-47f3-bcd4-3c053d6c44ca 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Lazy-loading 'pci_devices' on Instance uuid 4f38d24a-3458-4c59-8480-8094ffcbb5aa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:17:09 compute-1 nova_compute[225855]: 2026-01-20 15:17:09.499 225859 DEBUG nova.virt.libvirt.driver [None req-18fdf58e-e22e-47f3-bcd4-3c053d6c44ca 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 4f38d24a-3458-4c59-8480-8094ffcbb5aa] End _get_guest_xml xml=<domain type="kvm">
Jan 20 15:17:09 compute-1 nova_compute[225855]:   <uuid>4f38d24a-3458-4c59-8480-8094ffcbb5aa</uuid>
Jan 20 15:17:09 compute-1 nova_compute[225855]:   <name>instance-000000bf</name>
Jan 20 15:17:09 compute-1 nova_compute[225855]:   <memory>131072</memory>
Jan 20 15:17:09 compute-1 nova_compute[225855]:   <vcpu>1</vcpu>
Jan 20 15:17:09 compute-1 nova_compute[225855]:   <metadata>
Jan 20 15:17:09 compute-1 nova_compute[225855]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 15:17:09 compute-1 nova_compute[225855]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 15:17:09 compute-1 nova_compute[225855]:       <nova:name>tempest-TestNetworkAdvancedServerOps-server-2057447015</nova:name>
Jan 20 15:17:09 compute-1 nova_compute[225855]:       <nova:creationTime>2026-01-20 15:17:08</nova:creationTime>
Jan 20 15:17:09 compute-1 nova_compute[225855]:       <nova:flavor name="m1.nano">
Jan 20 15:17:09 compute-1 nova_compute[225855]:         <nova:memory>128</nova:memory>
Jan 20 15:17:09 compute-1 nova_compute[225855]:         <nova:disk>1</nova:disk>
Jan 20 15:17:09 compute-1 nova_compute[225855]:         <nova:swap>0</nova:swap>
Jan 20 15:17:09 compute-1 nova_compute[225855]:         <nova:ephemeral>0</nova:ephemeral>
Jan 20 15:17:09 compute-1 nova_compute[225855]:         <nova:vcpus>1</nova:vcpus>
Jan 20 15:17:09 compute-1 nova_compute[225855]:       </nova:flavor>
Jan 20 15:17:09 compute-1 nova_compute[225855]:       <nova:owner>
Jan 20 15:17:09 compute-1 nova_compute[225855]:         <nova:user uuid="442a7a5cb8ea426a82be9762b262d171">tempest-TestNetworkAdvancedServerOps-175282664-project-member</nova:user>
Jan 20 15:17:09 compute-1 nova_compute[225855]:         <nova:project uuid="1ed5feeeafe7448a8efb47ab975b0ead">tempest-TestNetworkAdvancedServerOps-175282664</nova:project>
Jan 20 15:17:09 compute-1 nova_compute[225855]:       </nova:owner>
Jan 20 15:17:09 compute-1 nova_compute[225855]:       <nova:root type="image" uuid="a32b3e07-16d8-46fd-9a7b-c242c432fcf9"/>
Jan 20 15:17:09 compute-1 nova_compute[225855]:       <nova:ports>
Jan 20 15:17:09 compute-1 nova_compute[225855]:         <nova:port uuid="d3a1fab4-7d4e-40cd-bdbb-b337196adbc5">
Jan 20 15:17:09 compute-1 nova_compute[225855]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Jan 20 15:17:09 compute-1 nova_compute[225855]:         </nova:port>
Jan 20 15:17:09 compute-1 nova_compute[225855]:       </nova:ports>
Jan 20 15:17:09 compute-1 nova_compute[225855]:     </nova:instance>
Jan 20 15:17:09 compute-1 nova_compute[225855]:   </metadata>
Jan 20 15:17:09 compute-1 nova_compute[225855]:   <sysinfo type="smbios">
Jan 20 15:17:09 compute-1 nova_compute[225855]:     <system>
Jan 20 15:17:09 compute-1 nova_compute[225855]:       <entry name="manufacturer">RDO</entry>
Jan 20 15:17:09 compute-1 nova_compute[225855]:       <entry name="product">OpenStack Compute</entry>
Jan 20 15:17:09 compute-1 nova_compute[225855]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 15:17:09 compute-1 nova_compute[225855]:       <entry name="serial">4f38d24a-3458-4c59-8480-8094ffcbb5aa</entry>
Jan 20 15:17:09 compute-1 nova_compute[225855]:       <entry name="uuid">4f38d24a-3458-4c59-8480-8094ffcbb5aa</entry>
Jan 20 15:17:09 compute-1 nova_compute[225855]:       <entry name="family">Virtual Machine</entry>
Jan 20 15:17:09 compute-1 nova_compute[225855]:     </system>
Jan 20 15:17:09 compute-1 nova_compute[225855]:   </sysinfo>
Jan 20 15:17:09 compute-1 nova_compute[225855]:   <os>
Jan 20 15:17:09 compute-1 nova_compute[225855]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 20 15:17:09 compute-1 nova_compute[225855]:     <boot dev="hd"/>
Jan 20 15:17:09 compute-1 nova_compute[225855]:     <smbios mode="sysinfo"/>
Jan 20 15:17:09 compute-1 nova_compute[225855]:   </os>
Jan 20 15:17:09 compute-1 nova_compute[225855]:   <features>
Jan 20 15:17:09 compute-1 nova_compute[225855]:     <acpi/>
Jan 20 15:17:09 compute-1 nova_compute[225855]:     <apic/>
Jan 20 15:17:09 compute-1 nova_compute[225855]:     <vmcoreinfo/>
Jan 20 15:17:09 compute-1 nova_compute[225855]:   </features>
Jan 20 15:17:09 compute-1 nova_compute[225855]:   <clock offset="utc">
Jan 20 15:17:09 compute-1 nova_compute[225855]:     <timer name="pit" tickpolicy="delay"/>
Jan 20 15:17:09 compute-1 nova_compute[225855]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 20 15:17:09 compute-1 nova_compute[225855]:     <timer name="hpet" present="no"/>
Jan 20 15:17:09 compute-1 nova_compute[225855]:   </clock>
Jan 20 15:17:09 compute-1 nova_compute[225855]:   <cpu mode="custom" match="exact">
Jan 20 15:17:09 compute-1 nova_compute[225855]:     <model>Nehalem</model>
Jan 20 15:17:09 compute-1 nova_compute[225855]:     <topology sockets="1" cores="1" threads="1"/>
Jan 20 15:17:09 compute-1 nova_compute[225855]:   </cpu>
Jan 20 15:17:09 compute-1 nova_compute[225855]:   <devices>
Jan 20 15:17:09 compute-1 nova_compute[225855]:     <disk type="network" device="disk">
Jan 20 15:17:09 compute-1 nova_compute[225855]:       <driver type="raw" cache="none"/>
Jan 20 15:17:09 compute-1 nova_compute[225855]:       <source protocol="rbd" name="vms/4f38d24a-3458-4c59-8480-8094ffcbb5aa_disk">
Jan 20 15:17:09 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 15:17:09 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 15:17:09 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 15:17:09 compute-1 nova_compute[225855]:       </source>
Jan 20 15:17:09 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 15:17:09 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 15:17:09 compute-1 nova_compute[225855]:       </auth>
Jan 20 15:17:09 compute-1 nova_compute[225855]:       <target dev="vda" bus="virtio"/>
Jan 20 15:17:09 compute-1 nova_compute[225855]:     </disk>
Jan 20 15:17:09 compute-1 nova_compute[225855]:     <disk type="network" device="cdrom">
Jan 20 15:17:09 compute-1 nova_compute[225855]:       <driver type="raw" cache="none"/>
Jan 20 15:17:09 compute-1 nova_compute[225855]:       <source protocol="rbd" name="vms/4f38d24a-3458-4c59-8480-8094ffcbb5aa_disk.config">
Jan 20 15:17:09 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 15:17:09 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 15:17:09 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 15:17:09 compute-1 nova_compute[225855]:       </source>
Jan 20 15:17:09 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 15:17:09 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 15:17:09 compute-1 nova_compute[225855]:       </auth>
Jan 20 15:17:09 compute-1 nova_compute[225855]:       <target dev="sda" bus="sata"/>
Jan 20 15:17:09 compute-1 nova_compute[225855]:     </disk>
Jan 20 15:17:09 compute-1 nova_compute[225855]:     <interface type="ethernet">
Jan 20 15:17:09 compute-1 nova_compute[225855]:       <mac address="fa:16:3e:20:6b:ff"/>
Jan 20 15:17:09 compute-1 nova_compute[225855]:       <model type="virtio"/>
Jan 20 15:17:09 compute-1 nova_compute[225855]:       <driver name="vhost" rx_queue_size="512"/>
Jan 20 15:17:09 compute-1 nova_compute[225855]:       <mtu size="1442"/>
Jan 20 15:17:09 compute-1 nova_compute[225855]:       <target dev="tapd3a1fab4-7d"/>
Jan 20 15:17:09 compute-1 nova_compute[225855]:     </interface>
Jan 20 15:17:09 compute-1 nova_compute[225855]:     <serial type="pty">
Jan 20 15:17:09 compute-1 nova_compute[225855]:       <log file="/var/lib/nova/instances/4f38d24a-3458-4c59-8480-8094ffcbb5aa/console.log" append="off"/>
Jan 20 15:17:09 compute-1 nova_compute[225855]:     </serial>
Jan 20 15:17:09 compute-1 nova_compute[225855]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 20 15:17:09 compute-1 nova_compute[225855]:     <video>
Jan 20 15:17:09 compute-1 nova_compute[225855]:       <model type="virtio"/>
Jan 20 15:17:09 compute-1 nova_compute[225855]:     </video>
Jan 20 15:17:09 compute-1 nova_compute[225855]:     <input type="tablet" bus="usb"/>
Jan 20 15:17:09 compute-1 nova_compute[225855]:     <rng model="virtio">
Jan 20 15:17:09 compute-1 nova_compute[225855]:       <backend model="random">/dev/urandom</backend>
Jan 20 15:17:09 compute-1 nova_compute[225855]:     </rng>
Jan 20 15:17:09 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root"/>
Jan 20 15:17:09 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:17:09 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:17:09 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:17:09 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:17:09 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:17:09 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:17:09 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:17:09 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:17:09 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:17:09 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:17:09 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:17:09 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:17:09 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:17:09 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:17:09 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:17:09 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:17:09 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:17:09 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:17:09 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:17:09 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:17:09 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:17:09 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:17:09 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:17:09 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:17:09 compute-1 nova_compute[225855]:     <controller type="usb" index="0"/>
Jan 20 15:17:09 compute-1 nova_compute[225855]:     <memballoon model="virtio">
Jan 20 15:17:09 compute-1 nova_compute[225855]:       <stats period="10"/>
Jan 20 15:17:09 compute-1 nova_compute[225855]:     </memballoon>
Jan 20 15:17:09 compute-1 nova_compute[225855]:   </devices>
Jan 20 15:17:09 compute-1 nova_compute[225855]: </domain>
Jan 20 15:17:09 compute-1 nova_compute[225855]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 20 15:17:09 compute-1 nova_compute[225855]: 2026-01-20 15:17:09.501 225859 DEBUG nova.compute.manager [None req-18fdf58e-e22e-47f3-bcd4-3c053d6c44ca 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 4f38d24a-3458-4c59-8480-8094ffcbb5aa] Preparing to wait for external event network-vif-plugged-d3a1fab4-7d4e-40cd-bdbb-b337196adbc5 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 20 15:17:09 compute-1 nova_compute[225855]: 2026-01-20 15:17:09.501 225859 DEBUG oslo_concurrency.lockutils [None req-18fdf58e-e22e-47f3-bcd4-3c053d6c44ca 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Acquiring lock "4f38d24a-3458-4c59-8480-8094ffcbb5aa-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:17:09 compute-1 nova_compute[225855]: 2026-01-20 15:17:09.502 225859 DEBUG oslo_concurrency.lockutils [None req-18fdf58e-e22e-47f3-bcd4-3c053d6c44ca 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Lock "4f38d24a-3458-4c59-8480-8094ffcbb5aa-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:17:09 compute-1 nova_compute[225855]: 2026-01-20 15:17:09.502 225859 DEBUG oslo_concurrency.lockutils [None req-18fdf58e-e22e-47f3-bcd4-3c053d6c44ca 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Lock "4f38d24a-3458-4c59-8480-8094ffcbb5aa-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:17:09 compute-1 nova_compute[225855]: 2026-01-20 15:17:09.503 225859 DEBUG nova.virt.libvirt.vif [None req-18fdf58e-e22e-47f3-bcd4-3c053d6c44ca 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T15:17:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-2057447015',display_name='tempest-TestNetworkAdvancedServerOps-server-2057447015',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-2057447015',id=191,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJPwC5MyMqyTrbrSBwBOpxBpSiLPbu1nzobGp6ktmxE+oIlgwGH9ZkqYZyAjLxwv50DDSq5iaSNQxNoKrNJWo+FdRObJJTJ5JQ9hbj5JsMfLfRRZUDmDAFS5rhSXxsMyYg==',key_name='tempest-TestNetworkAdvancedServerOps-386377958',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1ed5feeeafe7448a8efb47ab975b0ead',ramdisk_id='',reservation_id='r-foa05j4u',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-175282664',owner_user_name='tempest-TestNetworkAdvancedServerOps-175282664-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T15:17:04Z,user_data=None,user_id='442a7a5cb8ea426a82be9762b262d171',uuid=4f38d24a-3458-4c59-8480-8094ffcbb5aa,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d3a1fab4-7d4e-40cd-bdbb-b337196adbc5", "address": "fa:16:3e:20:6b:ff", "network": {"id": "4512e7e3-1668-4e98-8240-843256180395", "bridge": "br-int", "label": "tempest-network-smoke--434600618", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1ed5feeeafe7448a8efb47ab975b0ead", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd3a1fab4-7d", "ovs_interfaceid": "d3a1fab4-7d4e-40cd-bdbb-b337196adbc5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 20 15:17:09 compute-1 nova_compute[225855]: 2026-01-20 15:17:09.503 225859 DEBUG nova.network.os_vif_util [None req-18fdf58e-e22e-47f3-bcd4-3c053d6c44ca 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Converting VIF {"id": "d3a1fab4-7d4e-40cd-bdbb-b337196adbc5", "address": "fa:16:3e:20:6b:ff", "network": {"id": "4512e7e3-1668-4e98-8240-843256180395", "bridge": "br-int", "label": "tempest-network-smoke--434600618", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1ed5feeeafe7448a8efb47ab975b0ead", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd3a1fab4-7d", "ovs_interfaceid": "d3a1fab4-7d4e-40cd-bdbb-b337196adbc5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 15:17:09 compute-1 nova_compute[225855]: 2026-01-20 15:17:09.504 225859 DEBUG nova.network.os_vif_util [None req-18fdf58e-e22e-47f3-bcd4-3c053d6c44ca 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:20:6b:ff,bridge_name='br-int',has_traffic_filtering=True,id=d3a1fab4-7d4e-40cd-bdbb-b337196adbc5,network=Network(4512e7e3-1668-4e98-8240-843256180395),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd3a1fab4-7d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 15:17:09 compute-1 nova_compute[225855]: 2026-01-20 15:17:09.504 225859 DEBUG os_vif [None req-18fdf58e-e22e-47f3-bcd4-3c053d6c44ca 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:20:6b:ff,bridge_name='br-int',has_traffic_filtering=True,id=d3a1fab4-7d4e-40cd-bdbb-b337196adbc5,network=Network(4512e7e3-1668-4e98-8240-843256180395),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd3a1fab4-7d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 20 15:17:09 compute-1 nova_compute[225855]: 2026-01-20 15:17:09.505 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:17:09 compute-1 nova_compute[225855]: 2026-01-20 15:17:09.506 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:17:09 compute-1 nova_compute[225855]: 2026-01-20 15:17:09.506 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 15:17:09 compute-1 nova_compute[225855]: 2026-01-20 15:17:09.509 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:17:09 compute-1 nova_compute[225855]: 2026-01-20 15:17:09.510 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd3a1fab4-7d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:17:09 compute-1 nova_compute[225855]: 2026-01-20 15:17:09.510 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd3a1fab4-7d, col_values=(('external_ids', {'iface-id': 'd3a1fab4-7d4e-40cd-bdbb-b337196adbc5', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:20:6b:ff', 'vm-uuid': '4f38d24a-3458-4c59-8480-8094ffcbb5aa'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:17:09 compute-1 nova_compute[225855]: 2026-01-20 15:17:09.511 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:17:09 compute-1 NetworkManager[49104]: <info>  [1768922229.5127] manager: (tapd3a1fab4-7d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/354)
Jan 20 15:17:09 compute-1 nova_compute[225855]: 2026-01-20 15:17:09.514 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 15:17:09 compute-1 nova_compute[225855]: 2026-01-20 15:17:09.520 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:17:09 compute-1 nova_compute[225855]: 2026-01-20 15:17:09.521 225859 INFO os_vif [None req-18fdf58e-e22e-47f3-bcd4-3c053d6c44ca 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:20:6b:ff,bridge_name='br-int',has_traffic_filtering=True,id=d3a1fab4-7d4e-40cd-bdbb-b337196adbc5,network=Network(4512e7e3-1668-4e98-8240-843256180395),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd3a1fab4-7d')
Jan 20 15:17:09 compute-1 nova_compute[225855]: 2026-01-20 15:17:09.567 225859 DEBUG nova.virt.libvirt.driver [None req-18fdf58e-e22e-47f3-bcd4-3c053d6c44ca 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 15:17:09 compute-1 nova_compute[225855]: 2026-01-20 15:17:09.568 225859 DEBUG nova.virt.libvirt.driver [None req-18fdf58e-e22e-47f3-bcd4-3c053d6c44ca 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 15:17:09 compute-1 nova_compute[225855]: 2026-01-20 15:17:09.568 225859 DEBUG nova.virt.libvirt.driver [None req-18fdf58e-e22e-47f3-bcd4-3c053d6c44ca 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] No VIF found with MAC fa:16:3e:20:6b:ff, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 20 15:17:09 compute-1 nova_compute[225855]: 2026-01-20 15:17:09.569 225859 INFO nova.virt.libvirt.driver [None req-18fdf58e-e22e-47f3-bcd4-3c053d6c44ca 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 4f38d24a-3458-4c59-8480-8094ffcbb5aa] Using config drive
Jan 20 15:17:09 compute-1 nova_compute[225855]: 2026-01-20 15:17:09.599 225859 DEBUG nova.storage.rbd_utils [None req-18fdf58e-e22e-47f3-bcd4-3c053d6c44ca 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] rbd image 4f38d24a-3458-4c59-8480-8094ffcbb5aa_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:17:09 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/630892543' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:17:09 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/1661905444' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:17:09 compute-1 nova_compute[225855]: 2026-01-20 15:17:09.954 225859 INFO nova.virt.libvirt.driver [None req-18fdf58e-e22e-47f3-bcd4-3c053d6c44ca 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 4f38d24a-3458-4c59-8480-8094ffcbb5aa] Creating config drive at /var/lib/nova/instances/4f38d24a-3458-4c59-8480-8094ffcbb5aa/disk.config
Jan 20 15:17:09 compute-1 nova_compute[225855]: 2026-01-20 15:17:09.959 225859 DEBUG oslo_concurrency.processutils [None req-18fdf58e-e22e-47f3-bcd4-3c053d6c44ca 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4f38d24a-3458-4c59-8480-8094ffcbb5aa/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp107hlhkm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:17:10 compute-1 nova_compute[225855]: 2026-01-20 15:17:10.095 225859 DEBUG oslo_concurrency.processutils [None req-18fdf58e-e22e-47f3-bcd4-3c053d6c44ca 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4f38d24a-3458-4c59-8480-8094ffcbb5aa/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp107hlhkm" returned: 0 in 0.136s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:17:10 compute-1 nova_compute[225855]: 2026-01-20 15:17:10.121 225859 DEBUG nova.storage.rbd_utils [None req-18fdf58e-e22e-47f3-bcd4-3c053d6c44ca 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] rbd image 4f38d24a-3458-4c59-8480-8094ffcbb5aa_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:17:10 compute-1 nova_compute[225855]: 2026-01-20 15:17:10.124 225859 DEBUG oslo_concurrency.processutils [None req-18fdf58e-e22e-47f3-bcd4-3c053d6c44ca 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/4f38d24a-3458-4c59-8480-8094ffcbb5aa/disk.config 4f38d24a-3458-4c59-8480-8094ffcbb5aa_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:17:10 compute-1 nova_compute[225855]: 2026-01-20 15:17:10.152 225859 DEBUG nova.network.neutron [req-3a457955-0c78-4aed-99d2-34e09e8a012d req-2c4a8951-cfd2-46a4-9ed0-73a56d2e357e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 4f38d24a-3458-4c59-8480-8094ffcbb5aa] Updated VIF entry in instance network info cache for port d3a1fab4-7d4e-40cd-bdbb-b337196adbc5. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 15:17:10 compute-1 nova_compute[225855]: 2026-01-20 15:17:10.153 225859 DEBUG nova.network.neutron [req-3a457955-0c78-4aed-99d2-34e09e8a012d req-2c4a8951-cfd2-46a4-9ed0-73a56d2e357e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 4f38d24a-3458-4c59-8480-8094ffcbb5aa] Updating instance_info_cache with network_info: [{"id": "d3a1fab4-7d4e-40cd-bdbb-b337196adbc5", "address": "fa:16:3e:20:6b:ff", "network": {"id": "4512e7e3-1668-4e98-8240-843256180395", "bridge": "br-int", "label": "tempest-network-smoke--434600618", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1ed5feeeafe7448a8efb47ab975b0ead", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd3a1fab4-7d", "ovs_interfaceid": "d3a1fab4-7d4e-40cd-bdbb-b337196adbc5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:17:10 compute-1 nova_compute[225855]: 2026-01-20 15:17:10.169 225859 DEBUG oslo_concurrency.lockutils [req-3a457955-0c78-4aed-99d2-34e09e8a012d req-2c4a8951-cfd2-46a4-9ed0-73a56d2e357e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-4f38d24a-3458-4c59-8480-8094ffcbb5aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:17:10 compute-1 nova_compute[225855]: 2026-01-20 15:17:10.311 225859 DEBUG oslo_concurrency.processutils [None req-18fdf58e-e22e-47f3-bcd4-3c053d6c44ca 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/4f38d24a-3458-4c59-8480-8094ffcbb5aa/disk.config 4f38d24a-3458-4c59-8480-8094ffcbb5aa_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.188s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:17:10 compute-1 nova_compute[225855]: 2026-01-20 15:17:10.312 225859 INFO nova.virt.libvirt.driver [None req-18fdf58e-e22e-47f3-bcd4-3c053d6c44ca 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 4f38d24a-3458-4c59-8480-8094ffcbb5aa] Deleting local config drive /var/lib/nova/instances/4f38d24a-3458-4c59-8480-8094ffcbb5aa/disk.config because it was imported into RBD.
Jan 20 15:17:10 compute-1 nova_compute[225855]: 2026-01-20 15:17:10.335 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:17:10 compute-1 kernel: tapd3a1fab4-7d: entered promiscuous mode
Jan 20 15:17:10 compute-1 NetworkManager[49104]: <info>  [1768922230.3616] manager: (tapd3a1fab4-7d): new Tun device (/org/freedesktop/NetworkManager/Devices/355)
Jan 20 15:17:10 compute-1 ovn_controller[130490]: 2026-01-20T15:17:10Z|00851|binding|INFO|Claiming lport d3a1fab4-7d4e-40cd-bdbb-b337196adbc5 for this chassis.
Jan 20 15:17:10 compute-1 ovn_controller[130490]: 2026-01-20T15:17:10Z|00852|binding|INFO|d3a1fab4-7d4e-40cd-bdbb-b337196adbc5: Claiming fa:16:3e:20:6b:ff 10.100.0.9
Jan 20 15:17:10 compute-1 nova_compute[225855]: 2026-01-20 15:17:10.362 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:17:10 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:17:10.369 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:20:6b:ff 10.100.0.9'], port_security=['fa:16:3e:20:6b:ff 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '4f38d24a-3458-4c59-8480-8094ffcbb5aa', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4512e7e3-1668-4e98-8240-843256180395', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1ed5feeeafe7448a8efb47ab975b0ead', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a9f2f56b-e7c7-475c-b1af-94303aad79ef', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0147113d-42da-4762-ae51-c60dbf8c0dd2, chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], logical_port=d3a1fab4-7d4e-40cd-bdbb-b337196adbc5) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:17:10 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:17:10.370 140354 INFO neutron.agent.ovn.metadata.agent [-] Port d3a1fab4-7d4e-40cd-bdbb-b337196adbc5 in datapath 4512e7e3-1668-4e98-8240-843256180395 bound to our chassis
Jan 20 15:17:10 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:17:10.371 140354 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4512e7e3-1668-4e98-8240-843256180395
Jan 20 15:17:10 compute-1 ovn_controller[130490]: 2026-01-20T15:17:10Z|00853|binding|INFO|Setting lport d3a1fab4-7d4e-40cd-bdbb-b337196adbc5 ovn-installed in OVS
Jan 20 15:17:10 compute-1 ovn_controller[130490]: 2026-01-20T15:17:10Z|00854|binding|INFO|Setting lport d3a1fab4-7d4e-40cd-bdbb-b337196adbc5 up in Southbound
Jan 20 15:17:10 compute-1 nova_compute[225855]: 2026-01-20 15:17:10.379 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:17:10 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:17:10.381 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[5dafce14-7887-4752-8a44-f01980a09709]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:17:10 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:17:10.382 140354 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap4512e7e3-11 in ovnmeta-4512e7e3-1668-4e98-8240-843256180395 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 20 15:17:10 compute-1 nova_compute[225855]: 2026-01-20 15:17:10.384 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:17:10 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:17:10.383 229707 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap4512e7e3-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 20 15:17:10 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:17:10.384 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[8690532a-4829-4164-8813-cce223c42e50]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:17:10 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:17:10.385 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[be09299e-e72b-45bf-a3d9-5b8146e23569]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:17:10 compute-1 systemd-udevd[306267]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 15:17:10 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:17:10.395 140466 DEBUG oslo.privsep.daemon [-] privsep: reply[eecd0db3-284d-4c23-8dcf-45b4d25b0b75]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:17:10 compute-1 systemd-machined[194361]: New machine qemu-101-instance-000000bf.
Jan 20 15:17:10 compute-1 NetworkManager[49104]: <info>  [1768922230.4024] device (tapd3a1fab4-7d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 20 15:17:10 compute-1 NetworkManager[49104]: <info>  [1768922230.4034] device (tapd3a1fab4-7d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 20 15:17:10 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:17:10.409 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[949a998c-7ec1-43e7-8dfb-188aaeda62c4]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:17:10 compute-1 systemd[1]: Started Virtual Machine qemu-101-instance-000000bf.
Jan 20 15:17:10 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:17:10.435 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[68f43f74-60f3-4c00-bf77-2f893d3c26de]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:17:10 compute-1 systemd-udevd[306270]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 15:17:10 compute-1 NetworkManager[49104]: <info>  [1768922230.4409] manager: (tap4512e7e3-10): new Veth device (/org/freedesktop/NetworkManager/Devices/356)
Jan 20 15:17:10 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:17:10.440 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[bff152f5-b7e7-4232-ad3c-d3b0abf82787]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:17:10 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:17:10.473 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[f08db15f-5352-461c-9f5c-d50234d4b55d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:17:10 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:17:10.475 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[d0c7f4f7-9f50-4bcb-8407-3c631579e548]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:17:10 compute-1 NetworkManager[49104]: <info>  [1768922230.4956] device (tap4512e7e3-10): carrier: link connected
Jan 20 15:17:10 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:17:10.500 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[d3ecf6bc-d47d-4cfa-ba15-cd35655c9870]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:17:10 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:17:10.516 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[15e8ec1e-4edb-4ce8-b6d9-03b4e2b2e1fa]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4512e7e3-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5f:e6:d5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 241], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 726942, 'reachable_time': 26848, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 306298, 'error': None, 'target': 'ovnmeta-4512e7e3-1668-4e98-8240-843256180395', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:17:10 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:17:10.529 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[99a4138b-44b6-4cf7-aa97-037d9ae2e65d]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe5f:e6d5'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 726942, 'tstamp': 726942}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 306299, 'error': None, 'target': 'ovnmeta-4512e7e3-1668-4e98-8240-843256180395', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:17:10 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:17:10.544 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[bc4b8fee-2d11-4514-af70-545e04c9aef6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4512e7e3-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5f:e6:d5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 241], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 726942, 'reachable_time': 26848, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 306300, 'error': None, 'target': 'ovnmeta-4512e7e3-1668-4e98-8240-843256180395', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:17:10 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:17:10.567 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[7daab9f3-1220-4d42-b820-933cb49174fc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:17:10 compute-1 nova_compute[225855]: 2026-01-20 15:17:10.601 225859 DEBUG nova.compute.manager [req-15ab5063-6d81-4fc8-bec7-e77680b454fc req-832952ca-769a-423f-a49e-b0734d99d421 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 4f38d24a-3458-4c59-8480-8094ffcbb5aa] Received event network-vif-plugged-d3a1fab4-7d4e-40cd-bdbb-b337196adbc5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:17:10 compute-1 nova_compute[225855]: 2026-01-20 15:17:10.602 225859 DEBUG oslo_concurrency.lockutils [req-15ab5063-6d81-4fc8-bec7-e77680b454fc req-832952ca-769a-423f-a49e-b0734d99d421 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "4f38d24a-3458-4c59-8480-8094ffcbb5aa-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:17:10 compute-1 nova_compute[225855]: 2026-01-20 15:17:10.602 225859 DEBUG oslo_concurrency.lockutils [req-15ab5063-6d81-4fc8-bec7-e77680b454fc req-832952ca-769a-423f-a49e-b0734d99d421 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "4f38d24a-3458-4c59-8480-8094ffcbb5aa-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:17:10 compute-1 nova_compute[225855]: 2026-01-20 15:17:10.602 225859 DEBUG oslo_concurrency.lockutils [req-15ab5063-6d81-4fc8-bec7-e77680b454fc req-832952ca-769a-423f-a49e-b0734d99d421 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "4f38d24a-3458-4c59-8480-8094ffcbb5aa-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:17:10 compute-1 nova_compute[225855]: 2026-01-20 15:17:10.602 225859 DEBUG nova.compute.manager [req-15ab5063-6d81-4fc8-bec7-e77680b454fc req-832952ca-769a-423f-a49e-b0734d99d421 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 4f38d24a-3458-4c59-8480-8094ffcbb5aa] Processing event network-vif-plugged-d3a1fab4-7d4e-40cd-bdbb-b337196adbc5 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 20 15:17:10 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:17:10.620 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[5f591dac-ec3e-483c-b9e5-eb70f60d605d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:17:10 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:17:10.621 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4512e7e3-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:17:10 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:17:10.621 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 15:17:10 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:17:10.622 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4512e7e3-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:17:10 compute-1 nova_compute[225855]: 2026-01-20 15:17:10.623 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:17:10 compute-1 NetworkManager[49104]: <info>  [1768922230.6245] manager: (tap4512e7e3-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/357)
Jan 20 15:17:10 compute-1 kernel: tap4512e7e3-10: entered promiscuous mode
Jan 20 15:17:10 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:17:10.626 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4512e7e3-10, col_values=(('external_ids', {'iface-id': '3415a30d-278b-4c15-be57-f12804d2b50c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:17:10 compute-1 ovn_controller[130490]: 2026-01-20T15:17:10Z|00855|binding|INFO|Releasing lport 3415a30d-278b-4c15-be57-f12804d2b50c from this chassis (sb_readonly=0)
Jan 20 15:17:10 compute-1 nova_compute[225855]: 2026-01-20 15:17:10.640 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:17:10 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:17:10.641 140354 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/4512e7e3-1668-4e98-8240-843256180395.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/4512e7e3-1668-4e98-8240-843256180395.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 20 15:17:10 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:17:10.643 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[af41a529-d02e-44f1-9fd6-c8e10cf07650]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:17:10 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:17:10.644 140354 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 20 15:17:10 compute-1 ovn_metadata_agent[140349]: global
Jan 20 15:17:10 compute-1 ovn_metadata_agent[140349]:     log         /dev/log local0 debug
Jan 20 15:17:10 compute-1 ovn_metadata_agent[140349]:     log-tag     haproxy-metadata-proxy-4512e7e3-1668-4e98-8240-843256180395
Jan 20 15:17:10 compute-1 ovn_metadata_agent[140349]:     user        root
Jan 20 15:17:10 compute-1 ovn_metadata_agent[140349]:     group       root
Jan 20 15:17:10 compute-1 ovn_metadata_agent[140349]:     maxconn     1024
Jan 20 15:17:10 compute-1 ovn_metadata_agent[140349]:     pidfile     /var/lib/neutron/external/pids/4512e7e3-1668-4e98-8240-843256180395.pid.haproxy
Jan 20 15:17:10 compute-1 ovn_metadata_agent[140349]:     daemon
Jan 20 15:17:10 compute-1 ovn_metadata_agent[140349]: 
Jan 20 15:17:10 compute-1 ovn_metadata_agent[140349]: defaults
Jan 20 15:17:10 compute-1 ovn_metadata_agent[140349]:     log global
Jan 20 15:17:10 compute-1 ovn_metadata_agent[140349]:     mode http
Jan 20 15:17:10 compute-1 ovn_metadata_agent[140349]:     option httplog
Jan 20 15:17:10 compute-1 ovn_metadata_agent[140349]:     option dontlognull
Jan 20 15:17:10 compute-1 ovn_metadata_agent[140349]:     option http-server-close
Jan 20 15:17:10 compute-1 ovn_metadata_agent[140349]:     option forwardfor
Jan 20 15:17:10 compute-1 ovn_metadata_agent[140349]:     retries                 3
Jan 20 15:17:10 compute-1 ovn_metadata_agent[140349]:     timeout http-request    30s
Jan 20 15:17:10 compute-1 ovn_metadata_agent[140349]:     timeout connect         30s
Jan 20 15:17:10 compute-1 ovn_metadata_agent[140349]:     timeout client          32s
Jan 20 15:17:10 compute-1 ovn_metadata_agent[140349]:     timeout server          32s
Jan 20 15:17:10 compute-1 ovn_metadata_agent[140349]:     timeout http-keep-alive 30s
Jan 20 15:17:10 compute-1 ovn_metadata_agent[140349]: 
Jan 20 15:17:10 compute-1 ovn_metadata_agent[140349]: 
Jan 20 15:17:10 compute-1 ovn_metadata_agent[140349]: listen listener
Jan 20 15:17:10 compute-1 ovn_metadata_agent[140349]:     bind 169.254.169.254:80
Jan 20 15:17:10 compute-1 ovn_metadata_agent[140349]:     server metadata /var/lib/neutron/metadata_proxy
Jan 20 15:17:10 compute-1 ovn_metadata_agent[140349]:     http-request add-header X-OVN-Network-ID 4512e7e3-1668-4e98-8240-843256180395
Jan 20 15:17:10 compute-1 ovn_metadata_agent[140349]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 20 15:17:10 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:17:10.644 140354 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-4512e7e3-1668-4e98-8240-843256180395', 'env', 'PROCESS_TAG=haproxy-4512e7e3-1668-4e98-8240-843256180395', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/4512e7e3-1668-4e98-8240-843256180395.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 20 15:17:10 compute-1 nova_compute[225855]: 2026-01-20 15:17:10.880 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768922230.8800206, 4f38d24a-3458-4c59-8480-8094ffcbb5aa => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:17:10 compute-1 nova_compute[225855]: 2026-01-20 15:17:10.880 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 4f38d24a-3458-4c59-8480-8094ffcbb5aa] VM Started (Lifecycle Event)
Jan 20 15:17:10 compute-1 nova_compute[225855]: 2026-01-20 15:17:10.883 225859 DEBUG nova.compute.manager [None req-18fdf58e-e22e-47f3-bcd4-3c053d6c44ca 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 4f38d24a-3458-4c59-8480-8094ffcbb5aa] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 20 15:17:10 compute-1 nova_compute[225855]: 2026-01-20 15:17:10.888 225859 DEBUG nova.virt.libvirt.driver [None req-18fdf58e-e22e-47f3-bcd4-3c053d6c44ca 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 4f38d24a-3458-4c59-8480-8094ffcbb5aa] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 20 15:17:10 compute-1 nova_compute[225855]: 2026-01-20 15:17:10.891 225859 INFO nova.virt.libvirt.driver [-] [instance: 4f38d24a-3458-4c59-8480-8094ffcbb5aa] Instance spawned successfully.
Jan 20 15:17:10 compute-1 nova_compute[225855]: 2026-01-20 15:17:10.891 225859 DEBUG nova.virt.libvirt.driver [None req-18fdf58e-e22e-47f3-bcd4-3c053d6c44ca 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 4f38d24a-3458-4c59-8480-8094ffcbb5aa] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 20 15:17:10 compute-1 nova_compute[225855]: 2026-01-20 15:17:10.899 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 4f38d24a-3458-4c59-8480-8094ffcbb5aa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:17:10 compute-1 nova_compute[225855]: 2026-01-20 15:17:10.901 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 4f38d24a-3458-4c59-8480-8094ffcbb5aa] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 15:17:10 compute-1 nova_compute[225855]: 2026-01-20 15:17:10.911 225859 DEBUG nova.virt.libvirt.driver [None req-18fdf58e-e22e-47f3-bcd4-3c053d6c44ca 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 4f38d24a-3458-4c59-8480-8094ffcbb5aa] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:17:10 compute-1 nova_compute[225855]: 2026-01-20 15:17:10.912 225859 DEBUG nova.virt.libvirt.driver [None req-18fdf58e-e22e-47f3-bcd4-3c053d6c44ca 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 4f38d24a-3458-4c59-8480-8094ffcbb5aa] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:17:10 compute-1 nova_compute[225855]: 2026-01-20 15:17:10.912 225859 DEBUG nova.virt.libvirt.driver [None req-18fdf58e-e22e-47f3-bcd4-3c053d6c44ca 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 4f38d24a-3458-4c59-8480-8094ffcbb5aa] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:17:10 compute-1 nova_compute[225855]: 2026-01-20 15:17:10.913 225859 DEBUG nova.virt.libvirt.driver [None req-18fdf58e-e22e-47f3-bcd4-3c053d6c44ca 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 4f38d24a-3458-4c59-8480-8094ffcbb5aa] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:17:10 compute-1 nova_compute[225855]: 2026-01-20 15:17:10.913 225859 DEBUG nova.virt.libvirt.driver [None req-18fdf58e-e22e-47f3-bcd4-3c053d6c44ca 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 4f38d24a-3458-4c59-8480-8094ffcbb5aa] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:17:10 compute-1 nova_compute[225855]: 2026-01-20 15:17:10.914 225859 DEBUG nova.virt.libvirt.driver [None req-18fdf58e-e22e-47f3-bcd4-3c053d6c44ca 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 4f38d24a-3458-4c59-8480-8094ffcbb5aa] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:17:10 compute-1 nova_compute[225855]: 2026-01-20 15:17:10.924 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 4f38d24a-3458-4c59-8480-8094ffcbb5aa] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 15:17:10 compute-1 nova_compute[225855]: 2026-01-20 15:17:10.924 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768922230.8830743, 4f38d24a-3458-4c59-8480-8094ffcbb5aa => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:17:10 compute-1 nova_compute[225855]: 2026-01-20 15:17:10.924 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 4f38d24a-3458-4c59-8480-8094ffcbb5aa] VM Paused (Lifecycle Event)
Jan 20 15:17:10 compute-1 nova_compute[225855]: 2026-01-20 15:17:10.956 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 4f38d24a-3458-4c59-8480-8094ffcbb5aa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:17:10 compute-1 nova_compute[225855]: 2026-01-20 15:17:10.960 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768922230.8855433, 4f38d24a-3458-4c59-8480-8094ffcbb5aa => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:17:10 compute-1 nova_compute[225855]: 2026-01-20 15:17:10.960 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 4f38d24a-3458-4c59-8480-8094ffcbb5aa] VM Resumed (Lifecycle Event)
Jan 20 15:17:10 compute-1 nova_compute[225855]: 2026-01-20 15:17:10.978 225859 INFO nova.compute.manager [None req-18fdf58e-e22e-47f3-bcd4-3c053d6c44ca 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 4f38d24a-3458-4c59-8480-8094ffcbb5aa] Took 6.35 seconds to spawn the instance on the hypervisor.
Jan 20 15:17:10 compute-1 nova_compute[225855]: 2026-01-20 15:17:10.979 225859 DEBUG nova.compute.manager [None req-18fdf58e-e22e-47f3-bcd4-3c053d6c44ca 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 4f38d24a-3458-4c59-8480-8094ffcbb5aa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:17:10 compute-1 ceph-mon[81775]: pgmap v2841: 321 pgs: 321 active+clean; 614 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 47 KiB/s rd, 3.6 MiB/s wr, 73 op/s
Jan 20 15:17:10 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/3979331012' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:17:10 compute-1 nova_compute[225855]: 2026-01-20 15:17:10.981 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 4f38d24a-3458-4c59-8480-8094ffcbb5aa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:17:10 compute-1 nova_compute[225855]: 2026-01-20 15:17:10.987 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 4f38d24a-3458-4c59-8480-8094ffcbb5aa] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 15:17:11 compute-1 podman[306374]: 2026-01-20 15:17:11.013330407 +0000 UTC m=+0.057116521 container create e60f633dc8d25180e6187f9fd54256e0e27093030965e6a20f3d5598fbea10ce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4512e7e3-1668-4e98-8240-843256180395, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202)
Jan 20 15:17:11 compute-1 nova_compute[225855]: 2026-01-20 15:17:11.022 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 4f38d24a-3458-4c59-8480-8094ffcbb5aa] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 15:17:11 compute-1 nova_compute[225855]: 2026-01-20 15:17:11.064 225859 INFO nova.compute.manager [None req-18fdf58e-e22e-47f3-bcd4-3c053d6c44ca 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 4f38d24a-3458-4c59-8480-8094ffcbb5aa] Took 7.54 seconds to build instance.
Jan 20 15:17:11 compute-1 nova_compute[225855]: 2026-01-20 15:17:11.079 225859 DEBUG oslo_concurrency.lockutils [None req-18fdf58e-e22e-47f3-bcd4-3c053d6c44ca 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Lock "4f38d24a-3458-4c59-8480-8094ffcbb5aa" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.626s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:17:11 compute-1 podman[306374]: 2026-01-20 15:17:10.990489779 +0000 UTC m=+0.034275913 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 20 15:17:11 compute-1 systemd[1]: Started libpod-conmon-e60f633dc8d25180e6187f9fd54256e0e27093030965e6a20f3d5598fbea10ce.scope.
Jan 20 15:17:11 compute-1 systemd[1]: Started libcrun container.
Jan 20 15:17:11 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7810c804e6835b7fafc97e2298621921e0f31c1da0eaafddfa82d1449fccd1e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 20 15:17:11 compute-1 podman[306374]: 2026-01-20 15:17:11.130103001 +0000 UTC m=+0.173889145 container init e60f633dc8d25180e6187f9fd54256e0e27093030965e6a20f3d5598fbea10ce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4512e7e3-1668-4e98-8240-843256180395, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 20 15:17:11 compute-1 podman[306374]: 2026-01-20 15:17:11.137166221 +0000 UTC m=+0.180952335 container start e60f633dc8d25180e6187f9fd54256e0e27093030965e6a20f3d5598fbea10ce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4512e7e3-1668-4e98-8240-843256180395, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 20 15:17:11 compute-1 neutron-haproxy-ovnmeta-4512e7e3-1668-4e98-8240-843256180395[306389]: [NOTICE]   (306393) : New worker (306395) forked
Jan 20 15:17:11 compute-1 neutron-haproxy-ovnmeta-4512e7e3-1668-4e98-8240-843256180395[306389]: [NOTICE]   (306393) : Loading success.
Jan 20 15:17:11 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:17:11 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:17:11 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:17:11.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:17:11 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:17:11 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:17:11 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:17:11.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:17:12 compute-1 nova_compute[225855]: 2026-01-20 15:17:12.689 225859 DEBUG nova.compute.manager [req-328b45a0-2136-40bd-93b6-bf15a9a6247c req-16a73e29-54b3-4ed3-b3d9-f147e46b61ce 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 4f38d24a-3458-4c59-8480-8094ffcbb5aa] Received event network-vif-plugged-d3a1fab4-7d4e-40cd-bdbb-b337196adbc5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:17:12 compute-1 nova_compute[225855]: 2026-01-20 15:17:12.690 225859 DEBUG oslo_concurrency.lockutils [req-328b45a0-2136-40bd-93b6-bf15a9a6247c req-16a73e29-54b3-4ed3-b3d9-f147e46b61ce 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "4f38d24a-3458-4c59-8480-8094ffcbb5aa-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:17:12 compute-1 nova_compute[225855]: 2026-01-20 15:17:12.690 225859 DEBUG oslo_concurrency.lockutils [req-328b45a0-2136-40bd-93b6-bf15a9a6247c req-16a73e29-54b3-4ed3-b3d9-f147e46b61ce 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "4f38d24a-3458-4c59-8480-8094ffcbb5aa-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:17:12 compute-1 nova_compute[225855]: 2026-01-20 15:17:12.690 225859 DEBUG oslo_concurrency.lockutils [req-328b45a0-2136-40bd-93b6-bf15a9a6247c req-16a73e29-54b3-4ed3-b3d9-f147e46b61ce 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "4f38d24a-3458-4c59-8480-8094ffcbb5aa-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:17:12 compute-1 nova_compute[225855]: 2026-01-20 15:17:12.690 225859 DEBUG nova.compute.manager [req-328b45a0-2136-40bd-93b6-bf15a9a6247c req-16a73e29-54b3-4ed3-b3d9-f147e46b61ce 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 4f38d24a-3458-4c59-8480-8094ffcbb5aa] No waiting events found dispatching network-vif-plugged-d3a1fab4-7d4e-40cd-bdbb-b337196adbc5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:17:12 compute-1 nova_compute[225855]: 2026-01-20 15:17:12.691 225859 WARNING nova.compute.manager [req-328b45a0-2136-40bd-93b6-bf15a9a6247c req-16a73e29-54b3-4ed3-b3d9-f147e46b61ce 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 4f38d24a-3458-4c59-8480-8094ffcbb5aa] Received unexpected event network-vif-plugged-d3a1fab4-7d4e-40cd-bdbb-b337196adbc5 for instance with vm_state active and task_state None.
Jan 20 15:17:13 compute-1 nova_compute[225855]: 2026-01-20 15:17:13.008 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:17:13 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:17:13 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:17:13 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:17:13.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:17:13 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e409 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:17:13 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:17:13 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:17:13 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:17:13.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:17:13 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 20 15:17:13 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1693259685' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:17:13 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 20 15:17:13 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1693259685' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:17:13 compute-1 ceph-mon[81775]: pgmap v2842: 321 pgs: 321 active+clean; 614 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 46 KiB/s rd, 3.6 MiB/s wr, 70 op/s
Jan 20 15:17:13 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/1693259685' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:17:13 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/1693259685' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:17:14 compute-1 nova_compute[225855]: 2026-01-20 15:17:14.512 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:17:14 compute-1 ceph-mon[81775]: pgmap v2843: 321 pgs: 321 active+clean; 614 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 782 KiB/s rd, 2.6 MiB/s wr, 77 op/s
Jan 20 15:17:14 compute-1 nova_compute[225855]: 2026-01-20 15:17:14.849 225859 DEBUG nova.compute.manager [req-048d33d9-813e-4b04-ad2d-2ff2f9ebbd77 req-d742de8c-8460-4452-90a4-d1c37bd20db4 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 4f38d24a-3458-4c59-8480-8094ffcbb5aa] Received event network-changed-d3a1fab4-7d4e-40cd-bdbb-b337196adbc5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:17:14 compute-1 nova_compute[225855]: 2026-01-20 15:17:14.849 225859 DEBUG nova.compute.manager [req-048d33d9-813e-4b04-ad2d-2ff2f9ebbd77 req-d742de8c-8460-4452-90a4-d1c37bd20db4 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 4f38d24a-3458-4c59-8480-8094ffcbb5aa] Refreshing instance network info cache due to event network-changed-d3a1fab4-7d4e-40cd-bdbb-b337196adbc5. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 15:17:14 compute-1 nova_compute[225855]: 2026-01-20 15:17:14.849 225859 DEBUG oslo_concurrency.lockutils [req-048d33d9-813e-4b04-ad2d-2ff2f9ebbd77 req-d742de8c-8460-4452-90a4-d1c37bd20db4 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-4f38d24a-3458-4c59-8480-8094ffcbb5aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:17:14 compute-1 nova_compute[225855]: 2026-01-20 15:17:14.849 225859 DEBUG oslo_concurrency.lockutils [req-048d33d9-813e-4b04-ad2d-2ff2f9ebbd77 req-d742de8c-8460-4452-90a4-d1c37bd20db4 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-4f38d24a-3458-4c59-8480-8094ffcbb5aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:17:14 compute-1 nova_compute[225855]: 2026-01-20 15:17:14.849 225859 DEBUG nova.network.neutron [req-048d33d9-813e-4b04-ad2d-2ff2f9ebbd77 req-d742de8c-8460-4452-90a4-d1c37bd20db4 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 4f38d24a-3458-4c59-8480-8094ffcbb5aa] Refreshing network info cache for port d3a1fab4-7d4e-40cd-bdbb-b337196adbc5 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 15:17:15 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:17:15 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:17:15 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:17:15.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:17:15 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:17:15 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:17:15 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:17:15.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:17:15 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e410 e410: 3 total, 3 up, 3 in
Jan 20 15:17:16 compute-1 podman[306407]: 2026-01-20 15:17:16.036480472 +0000 UTC m=+0.084963812 container health_status 72be95c12e041eb2cd1a16ad70dfa1391d9f03a5f3180a614ead9459dcfcd46f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3)
Jan 20 15:17:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:17:16.439 140354 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:17:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:17:16.439 140354 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:17:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:17:16.440 140354 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:17:16 compute-1 ceph-mon[81775]: osdmap e410: 3 total, 3 up, 3 in
Jan 20 15:17:16 compute-1 ceph-mon[81775]: pgmap v2845: 321 pgs: 321 active+clean; 614 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 4.3 MiB/s rd, 447 KiB/s wr, 182 op/s
Jan 20 15:17:16 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e411 e411: 3 total, 3 up, 3 in
Jan 20 15:17:17 compute-1 nova_compute[225855]: 2026-01-20 15:17:17.069 225859 DEBUG nova.network.neutron [req-048d33d9-813e-4b04-ad2d-2ff2f9ebbd77 req-d742de8c-8460-4452-90a4-d1c37bd20db4 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 4f38d24a-3458-4c59-8480-8094ffcbb5aa] Updated VIF entry in instance network info cache for port d3a1fab4-7d4e-40cd-bdbb-b337196adbc5. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 15:17:17 compute-1 nova_compute[225855]: 2026-01-20 15:17:17.070 225859 DEBUG nova.network.neutron [req-048d33d9-813e-4b04-ad2d-2ff2f9ebbd77 req-d742de8c-8460-4452-90a4-d1c37bd20db4 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 4f38d24a-3458-4c59-8480-8094ffcbb5aa] Updating instance_info_cache with network_info: [{"id": "d3a1fab4-7d4e-40cd-bdbb-b337196adbc5", "address": "fa:16:3e:20:6b:ff", "network": {"id": "4512e7e3-1668-4e98-8240-843256180395", "bridge": "br-int", "label": "tempest-network-smoke--434600618", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.227", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1ed5feeeafe7448a8efb47ab975b0ead", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd3a1fab4-7d", "ovs_interfaceid": "d3a1fab4-7d4e-40cd-bdbb-b337196adbc5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:17:17 compute-1 nova_compute[225855]: 2026-01-20 15:17:17.088 225859 DEBUG oslo_concurrency.lockutils [req-048d33d9-813e-4b04-ad2d-2ff2f9ebbd77 req-d742de8c-8460-4452-90a4-d1c37bd20db4 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-4f38d24a-3458-4c59-8480-8094ffcbb5aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:17:17 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:17:17 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:17:17 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:17:17.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:17:17 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:17:17 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:17:17 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:17:17.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:17:17 compute-1 ceph-mon[81775]: osdmap e411: 3 total, 3 up, 3 in
Jan 20 15:17:18 compute-1 nova_compute[225855]: 2026-01-20 15:17:18.010 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:17:18 compute-1 nova_compute[225855]: 2026-01-20 15:17:18.335 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:17:18 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e411 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:17:18 compute-1 ceph-mon[81775]: pgmap v2847: 321 pgs: 321 active+clean; 614 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 5.4 MiB/s rd, 42 KiB/s wr, 213 op/s
Jan 20 15:17:19 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:17:19 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:17:19 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:17:19.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:17:19 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:17:19 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:17:19 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:17:19.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:17:19 compute-1 nova_compute[225855]: 2026-01-20 15:17:19.562 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:17:20 compute-1 ceph-mon[81775]: pgmap v2848: 321 pgs: 321 active+clean; 614 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 5.7 MiB/s rd, 43 KiB/s wr, 231 op/s
Jan 20 15:17:21 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:17:21 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:17:21 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:17:21.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:17:21 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:17:21 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:17:21 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:17:21.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:17:22 compute-1 sudo[306436]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:17:22 compute-1 sudo[306436]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:17:22 compute-1 sudo[306436]: pam_unix(sudo:session): session closed for user root
Jan 20 15:17:22 compute-1 ceph-osd[79119]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 20 15:17:22 compute-1 ceph-osd[79119]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 4800.1 total, 600.0 interval
                                           Cumulative writes: 60K writes, 233K keys, 60K commit groups, 1.0 writes per commit group, ingest: 0.22 GB, 0.05 MB/s
                                           Cumulative WAL: 60K writes, 22K syncs, 2.65 writes per sync, written: 0.22 GB, 0.05 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 8255 writes, 31K keys, 8255 commit groups, 1.0 writes per commit group, ingest: 29.33 MB, 0.05 MB/s
                                           Interval WAL: 8255 writes, 3312 syncs, 2.49 writes per sync, written: 0.03 GB, 0.05 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 20 15:17:22 compute-1 sudo[306461]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:17:22 compute-1 sudo[306461]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:17:22 compute-1 sudo[306461]: pam_unix(sudo:session): session closed for user root
Jan 20 15:17:22 compute-1 ceph-mon[81775]: pgmap v2849: 321 pgs: 321 active+clean; 614 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 4.6 MiB/s rd, 42 KiB/s wr, 184 op/s
Jan 20 15:17:23 compute-1 nova_compute[225855]: 2026-01-20 15:17:23.013 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:17:23 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:17:23 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:17:23 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:17:23.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:17:23 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e411 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:17:23 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:17:23 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:17:23 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:17:23.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:17:24 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/3656409958' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:17:24 compute-1 ovn_controller[130490]: 2026-01-20T15:17:24Z|00100|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:20:6b:ff 10.100.0.9
Jan 20 15:17:24 compute-1 ovn_controller[130490]: 2026-01-20T15:17:24Z|00101|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:20:6b:ff 10.100.0.9
Jan 20 15:17:24 compute-1 nova_compute[225855]: 2026-01-20 15:17:24.564 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:17:25 compute-1 ceph-mon[81775]: pgmap v2850: 321 pgs: 321 active+clean; 618 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.1 MiB/s rd, 365 KiB/s wr, 138 op/s
Jan 20 15:17:25 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e412 e412: 3 total, 3 up, 3 in
Jan 20 15:17:25 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:17:25 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:17:25 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:17:25.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:17:25 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:17:25 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:17:25 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:17:25.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:17:26 compute-1 ceph-mon[81775]: osdmap e412: 3 total, 3 up, 3 in
Jan 20 15:17:27 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:17:27 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:17:27 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:17:27.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:17:27 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/2594788816' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:17:27 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/2594788816' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:17:27 compute-1 ceph-mon[81775]: pgmap v2852: 321 pgs: 321 active+clean; 630 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 651 KiB/s rd, 2.6 MiB/s wr, 119 op/s
Jan 20 15:17:27 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e413 e413: 3 total, 3 up, 3 in
Jan 20 15:17:27 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:17:27 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:17:27 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:17:27.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:17:28 compute-1 nova_compute[225855]: 2026-01-20 15:17:28.016 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:17:28 compute-1 ceph-mon[81775]: osdmap e413: 3 total, 3 up, 3 in
Jan 20 15:17:28 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:17:29 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:17:29 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:17:29 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:17:29.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:17:29 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/1785409203' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:17:29 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/1785409203' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:17:29 compute-1 ceph-mon[81775]: pgmap v2854: 321 pgs: 321 active+clean; 630 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 401 KiB/s rd, 3.2 MiB/s wr, 120 op/s
Jan 20 15:17:29 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:17:29 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:17:29 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:17:29.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:17:29 compute-1 nova_compute[225855]: 2026-01-20 15:17:29.607 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:17:30 compute-1 nova_compute[225855]: 2026-01-20 15:17:30.658 225859 INFO nova.compute.manager [None req-11b20fbe-a70c-479b-baae-7120a8121549 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 4f38d24a-3458-4c59-8480-8094ffcbb5aa] Get console output
Jan 20 15:17:30 compute-1 nova_compute[225855]: 2026-01-20 15:17:30.664 263775 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Jan 20 15:17:30 compute-1 nova_compute[225855]: 2026-01-20 15:17:30.950 225859 INFO nova.compute.manager [None req-4014658b-d089-43f2-9dc5-9d47a60b1e0a 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 4f38d24a-3458-4c59-8480-8094ffcbb5aa] Pausing
Jan 20 15:17:30 compute-1 nova_compute[225855]: 2026-01-20 15:17:30.951 225859 DEBUG nova.objects.instance [None req-4014658b-d089-43f2-9dc5-9d47a60b1e0a 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Lazy-loading 'flavor' on Instance uuid 4f38d24a-3458-4c59-8480-8094ffcbb5aa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:17:30 compute-1 nova_compute[225855]: 2026-01-20 15:17:30.988 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768922250.9882522, 4f38d24a-3458-4c59-8480-8094ffcbb5aa => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:17:30 compute-1 nova_compute[225855]: 2026-01-20 15:17:30.989 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 4f38d24a-3458-4c59-8480-8094ffcbb5aa] VM Paused (Lifecycle Event)
Jan 20 15:17:30 compute-1 nova_compute[225855]: 2026-01-20 15:17:30.990 225859 DEBUG nova.compute.manager [None req-4014658b-d089-43f2-9dc5-9d47a60b1e0a 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 4f38d24a-3458-4c59-8480-8094ffcbb5aa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:17:30 compute-1 nova_compute[225855]: 2026-01-20 15:17:30.993 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:17:30 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:17:30.993 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=63, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '12:bb:42', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '06:92:24:f7:15:56'}, ipsec=False) old=SB_Global(nb_cfg=62) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:17:30 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:17:30.995 140354 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 20 15:17:31 compute-1 nova_compute[225855]: 2026-01-20 15:17:31.026 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 4f38d24a-3458-4c59-8480-8094ffcbb5aa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:17:31 compute-1 nova_compute[225855]: 2026-01-20 15:17:31.028 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 4f38d24a-3458-4c59-8480-8094ffcbb5aa] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: pausing, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 15:17:31 compute-1 nova_compute[225855]: 2026-01-20 15:17:31.071 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 4f38d24a-3458-4c59-8480-8094ffcbb5aa] During sync_power_state the instance has a pending task (pausing). Skip.
Jan 20 15:17:31 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:17:31 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:17:31 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:17:31.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:17:31 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:17:31 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:17:31 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:17:31.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:17:31 compute-1 ceph-mon[81775]: pgmap v2855: 321 pgs: 321 active+clean; 565 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 490 KiB/s rd, 3.2 MiB/s wr, 195 op/s
Jan 20 15:17:31 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/13352825' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:17:31 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e414 e414: 3 total, 3 up, 3 in
Jan 20 15:17:32 compute-1 podman[306491]: 2026-01-20 15:17:32.01648817 +0000 UTC m=+0.055536697 container health_status 533cd8e0c3b8b7f910f15cef49e7f879ab46c12ce646a35628c15f13725eb500 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent)
Jan 20 15:17:32 compute-1 nova_compute[225855]: 2026-01-20 15:17:32.627 225859 DEBUG oslo_concurrency.lockutils [None req-8a54e417-8d3c-4bf9-889d-182b61853b5d e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Acquiring lock "f1ded131-d9a3-4e93-ad99-53ee2695d5c8" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:17:32 compute-1 nova_compute[225855]: 2026-01-20 15:17:32.628 225859 DEBUG oslo_concurrency.lockutils [None req-8a54e417-8d3c-4bf9-889d-182b61853b5d e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Lock "f1ded131-d9a3-4e93-ad99-53ee2695d5c8" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:17:32 compute-1 nova_compute[225855]: 2026-01-20 15:17:32.628 225859 DEBUG oslo_concurrency.lockutils [None req-8a54e417-8d3c-4bf9-889d-182b61853b5d e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Acquiring lock "f1ded131-d9a3-4e93-ad99-53ee2695d5c8-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:17:32 compute-1 nova_compute[225855]: 2026-01-20 15:17:32.628 225859 DEBUG oslo_concurrency.lockutils [None req-8a54e417-8d3c-4bf9-889d-182b61853b5d e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Lock "f1ded131-d9a3-4e93-ad99-53ee2695d5c8-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:17:32 compute-1 nova_compute[225855]: 2026-01-20 15:17:32.628 225859 DEBUG oslo_concurrency.lockutils [None req-8a54e417-8d3c-4bf9-889d-182b61853b5d e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Lock "f1ded131-d9a3-4e93-ad99-53ee2695d5c8-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:17:32 compute-1 nova_compute[225855]: 2026-01-20 15:17:32.629 225859 INFO nova.compute.manager [None req-8a54e417-8d3c-4bf9-889d-182b61853b5d e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] [instance: f1ded131-d9a3-4e93-ad99-53ee2695d5c8] Terminating instance
Jan 20 15:17:32 compute-1 nova_compute[225855]: 2026-01-20 15:17:32.630 225859 DEBUG nova.compute.manager [None req-8a54e417-8d3c-4bf9-889d-182b61853b5d e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] [instance: f1ded131-d9a3-4e93-ad99-53ee2695d5c8] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 20 15:17:32 compute-1 kernel: tap0e93d1de-67 (unregistering): left promiscuous mode
Jan 20 15:17:32 compute-1 NetworkManager[49104]: <info>  [1768922252.6747] device (tap0e93d1de-67): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 20 15:17:32 compute-1 ovn_controller[130490]: 2026-01-20T15:17:32Z|00856|binding|INFO|Releasing lport 0e93d1de-671e-4e37-8e79-44bed7981254 from this chassis (sb_readonly=0)
Jan 20 15:17:32 compute-1 ovn_controller[130490]: 2026-01-20T15:17:32Z|00857|binding|INFO|Setting lport 0e93d1de-671e-4e37-8e79-44bed7981254 down in Southbound
Jan 20 15:17:32 compute-1 ovn_controller[130490]: 2026-01-20T15:17:32Z|00858|binding|INFO|Removing iface tap0e93d1de-67 ovn-installed in OVS
Jan 20 15:17:32 compute-1 nova_compute[225855]: 2026-01-20 15:17:32.683 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:17:32 compute-1 nova_compute[225855]: 2026-01-20 15:17:32.685 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:17:32 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:17:32.692 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:99:5e:ed 10.100.0.3'], port_security=['fa:16:3e:99:5e:ed 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'f1ded131-d9a3-4e93-ad99-53ee2695d5c8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c1f4a971-0bd7-41ce-bdf6-5acb2b1b4bab', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fff727019f86407498e83d7948d54962', 'neutron:revision_number': '8', 'neutron:security_group_ids': '5ace6a2f-56c6-4679-bb81-70ccb27ab312', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=87d69a20-7690-494a-ac16-7c600840561a, chassis=[], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], logical_port=0e93d1de-671e-4e37-8e79-44bed7981254) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:17:32 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:17:32.693 140354 INFO neutron.agent.ovn.metadata.agent [-] Port 0e93d1de-671e-4e37-8e79-44bed7981254 in datapath c1f4a971-0bd7-41ce-bdf6-5acb2b1b4bab unbound from our chassis
Jan 20 15:17:32 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:17:32.694 140354 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c1f4a971-0bd7-41ce-bdf6-5acb2b1b4bab, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 20 15:17:32 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:17:32.695 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[7abbcf0b-8193-4a4e-be8f-d1a5770ed658]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:17:32 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:17:32.696 140354 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-c1f4a971-0bd7-41ce-bdf6-5acb2b1b4bab namespace which is not needed anymore
Jan 20 15:17:32 compute-1 nova_compute[225855]: 2026-01-20 15:17:32.699 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:17:32 compute-1 systemd[1]: machine-qemu\x2d99\x2dinstance\x2d000000bb.scope: Deactivated successfully.
Jan 20 15:17:32 compute-1 systemd[1]: machine-qemu\x2d99\x2dinstance\x2d000000bb.scope: Consumed 17.614s CPU time.
Jan 20 15:17:32 compute-1 systemd-machined[194361]: Machine qemu-99-instance-000000bb terminated.
Jan 20 15:17:32 compute-1 neutron-haproxy-ovnmeta-c1f4a971-0bd7-41ce-bdf6-5acb2b1b4bab[304717]: [NOTICE]   (304722) : haproxy version is 2.8.14-c23fe91
Jan 20 15:17:32 compute-1 neutron-haproxy-ovnmeta-c1f4a971-0bd7-41ce-bdf6-5acb2b1b4bab[304717]: [NOTICE]   (304722) : path to executable is /usr/sbin/haproxy
Jan 20 15:17:32 compute-1 neutron-haproxy-ovnmeta-c1f4a971-0bd7-41ce-bdf6-5acb2b1b4bab[304717]: [WARNING]  (304722) : Exiting Master process...
Jan 20 15:17:32 compute-1 neutron-haproxy-ovnmeta-c1f4a971-0bd7-41ce-bdf6-5acb2b1b4bab[304717]: [ALERT]    (304722) : Current worker (304724) exited with code 143 (Terminated)
Jan 20 15:17:32 compute-1 neutron-haproxy-ovnmeta-c1f4a971-0bd7-41ce-bdf6-5acb2b1b4bab[304717]: [WARNING]  (304722) : All workers exited. Exiting... (0)
Jan 20 15:17:32 compute-1 systemd[1]: libpod-a439ab3be6e54a4c6d705368f861d75da7d479824ae2fd478adfd2b64bc06259.scope: Deactivated successfully.
Jan 20 15:17:32 compute-1 podman[306533]: 2026-01-20 15:17:32.843278479 +0000 UTC m=+0.050865094 container died a439ab3be6e54a4c6d705368f861d75da7d479824ae2fd478adfd2b64bc06259 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c1f4a971-0bd7-41ce-bdf6-5acb2b1b4bab, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 20 15:17:32 compute-1 nova_compute[225855]: 2026-01-20 15:17:32.865 225859 INFO nova.virt.libvirt.driver [-] [instance: f1ded131-d9a3-4e93-ad99-53ee2695d5c8] Instance destroyed successfully.
Jan 20 15:17:32 compute-1 nova_compute[225855]: 2026-01-20 15:17:32.866 225859 DEBUG nova.objects.instance [None req-8a54e417-8d3c-4bf9-889d-182b61853b5d e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Lazy-loading 'resources' on Instance uuid f1ded131-d9a3-4e93-ad99-53ee2695d5c8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:17:32 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a439ab3be6e54a4c6d705368f861d75da7d479824ae2fd478adfd2b64bc06259-userdata-shm.mount: Deactivated successfully.
Jan 20 15:17:32 compute-1 systemd[1]: var-lib-containers-storage-overlay-cb008d1ba0088a00e33db5fdd52a317528e9420e9947ad1e20b1f6e8e7235013-merged.mount: Deactivated successfully.
Jan 20 15:17:32 compute-1 ceph-mon[81775]: osdmap e414: 3 total, 3 up, 3 in
Jan 20 15:17:32 compute-1 ceph-mon[81775]: pgmap v2857: 321 pgs: 321 active+clean; 532 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 226 KiB/s rd, 982 KiB/s wr, 142 op/s
Jan 20 15:17:32 compute-1 podman[306533]: 2026-01-20 15:17:32.89088399 +0000 UTC m=+0.098470615 container cleanup a439ab3be6e54a4c6d705368f861d75da7d479824ae2fd478adfd2b64bc06259 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c1f4a971-0bd7-41ce-bdf6-5acb2b1b4bab, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Jan 20 15:17:32 compute-1 nova_compute[225855]: 2026-01-20 15:17:32.891 225859 DEBUG nova.virt.libvirt.vif [None req-8a54e417-8d3c-4bf9-889d-182b61853b5d e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T15:14:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='multiattach-server-0',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-1.ctlplane.example.com',hostname='multiattach-server-0',id=187,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBL5L2o6o5dLcQyaIfhCZ5CKxQlecqNGmP68oHIQEsVoKIC2qfrMKjObT9GdMU8oznX9LVUwIWCShhlEJu9ZqPiutEL2afEJ1hQQamjERNcx9wWS2NfOgykA4yugQphfOtA==',key_name='tempest-keypair-1568469072',keypairs=<?>,launch_index=0,launched_at=2026-01-20T15:15:45Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='fff727019f86407498e83d7948d54962',ramdisk_id='',reservation_id='r-h927541v',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',im
age_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachVolumeMultiAttachTest-418194625',owner_user_name='tempest-AttachVolumeMultiAttachTest-418194625-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T15:15:54Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='e9cc4ce3e069479ba9c789b378a68a1d',uuid=f1ded131-d9a3-4e93-ad99-53ee2695d5c8,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0e93d1de-671e-4e37-8e79-44bed7981254", "address": "fa:16:3e:99:5e:ed", "network": {"id": "c1f4a971-0bd7-41ce-bdf6-5acb2b1b4bab", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1423306001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fff727019f86407498e83d7948d54962", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0e93d1de-67", "ovs_interfaceid": "0e93d1de-671e-4e37-8e79-44bed7981254", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 20 15:17:32 compute-1 nova_compute[225855]: 2026-01-20 15:17:32.892 225859 DEBUG nova.network.os_vif_util [None req-8a54e417-8d3c-4bf9-889d-182b61853b5d e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Converting VIF {"id": "0e93d1de-671e-4e37-8e79-44bed7981254", "address": "fa:16:3e:99:5e:ed", "network": {"id": "c1f4a971-0bd7-41ce-bdf6-5acb2b1b4bab", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1423306001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fff727019f86407498e83d7948d54962", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0e93d1de-67", "ovs_interfaceid": "0e93d1de-671e-4e37-8e79-44bed7981254", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 15:17:32 compute-1 nova_compute[225855]: 2026-01-20 15:17:32.893 225859 DEBUG nova.network.os_vif_util [None req-8a54e417-8d3c-4bf9-889d-182b61853b5d e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:99:5e:ed,bridge_name='br-int',has_traffic_filtering=True,id=0e93d1de-671e-4e37-8e79-44bed7981254,network=Network(c1f4a971-0bd7-41ce-bdf6-5acb2b1b4bab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0e93d1de-67') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 15:17:32 compute-1 nova_compute[225855]: 2026-01-20 15:17:32.894 225859 DEBUG os_vif [None req-8a54e417-8d3c-4bf9-889d-182b61853b5d e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:99:5e:ed,bridge_name='br-int',has_traffic_filtering=True,id=0e93d1de-671e-4e37-8e79-44bed7981254,network=Network(c1f4a971-0bd7-41ce-bdf6-5acb2b1b4bab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0e93d1de-67') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 20 15:17:32 compute-1 nova_compute[225855]: 2026-01-20 15:17:32.896 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:17:32 compute-1 nova_compute[225855]: 2026-01-20 15:17:32.896 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0e93d1de-67, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:17:32 compute-1 systemd[1]: libpod-conmon-a439ab3be6e54a4c6d705368f861d75da7d479824ae2fd478adfd2b64bc06259.scope: Deactivated successfully.
Jan 20 15:17:32 compute-1 nova_compute[225855]: 2026-01-20 15:17:32.898 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:17:32 compute-1 nova_compute[225855]: 2026-01-20 15:17:32.900 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:17:32 compute-1 nova_compute[225855]: 2026-01-20 15:17:32.904 225859 INFO os_vif [None req-8a54e417-8d3c-4bf9-889d-182b61853b5d e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:99:5e:ed,bridge_name='br-int',has_traffic_filtering=True,id=0e93d1de-671e-4e37-8e79-44bed7981254,network=Network(c1f4a971-0bd7-41ce-bdf6-5acb2b1b4bab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0e93d1de-67')
Jan 20 15:17:32 compute-1 podman[306575]: 2026-01-20 15:17:32.95574982 +0000 UTC m=+0.042950549 container remove a439ab3be6e54a4c6d705368f861d75da7d479824ae2fd478adfd2b64bc06259 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c1f4a971-0bd7-41ce-bdf6-5acb2b1b4bab, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 20 15:17:32 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:17:32.963 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[7159670f-b479-4da7-92cb-b3d8d6c35832]: (4, ('Tue Jan 20 03:17:32 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-c1f4a971-0bd7-41ce-bdf6-5acb2b1b4bab (a439ab3be6e54a4c6d705368f861d75da7d479824ae2fd478adfd2b64bc06259)\na439ab3be6e54a4c6d705368f861d75da7d479824ae2fd478adfd2b64bc06259\nTue Jan 20 03:17:32 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-c1f4a971-0bd7-41ce-bdf6-5acb2b1b4bab (a439ab3be6e54a4c6d705368f861d75da7d479824ae2fd478adfd2b64bc06259)\na439ab3be6e54a4c6d705368f861d75da7d479824ae2fd478adfd2b64bc06259\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:17:32 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:17:32.965 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[a8cfb054-6c31-40f1-b0b9-cfcca559c481]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:17:32 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:17:32.966 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc1f4a971-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:17:32 compute-1 nova_compute[225855]: 2026-01-20 15:17:32.968 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:17:32 compute-1 kernel: tapc1f4a971-00: left promiscuous mode
Jan 20 15:17:32 compute-1 nova_compute[225855]: 2026-01-20 15:17:32.981 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:17:32 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:17:32.984 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[25a74703-86be-4da1-9708-a1f61edc7eab]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:17:32 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:17:32.996 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5ffd4ac3-9266-4927-98ad-20a17782c725, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '63'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:17:32 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:17:32.997 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[dbe210d1-8c90-47c9-8373-7587d9fd83cb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:17:32 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:17:32.998 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[a8b19bbf-5509-4a20-949e-6dc29609c6f0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:17:33 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:17:33.015 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[2bb05141-9a37-48b0-b29f-79c46d7e6ed0]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 718295, 'reachable_time': 26716, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 306609, 'error': None, 'target': 'ovnmeta-c1f4a971-0bd7-41ce-bdf6-5acb2b1b4bab', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:17:33 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:17:33.019 140466 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-c1f4a971-0bd7-41ce-bdf6-5acb2b1b4bab deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 20 15:17:33 compute-1 nova_compute[225855]: 2026-01-20 15:17:33.019 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:17:33 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:17:33.019 140466 DEBUG oslo.privsep.daemon [-] privsep: reply[17d577be-bea8-436e-8b20-ad6f77447e0f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:17:33 compute-1 systemd[1]: run-netns-ovnmeta\x2dc1f4a971\x2d0bd7\x2d41ce\x2dbdf6\x2d5acb2b1b4bab.mount: Deactivated successfully.
Jan 20 15:17:33 compute-1 nova_compute[225855]: 2026-01-20 15:17:33.043 225859 DEBUG nova.compute.manager [req-eeb9de47-d613-4b05-9cec-e42ecc6a7fbd req-ccb5514f-a2de-48bd-93f3-a3f5d62fd5b4 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f1ded131-d9a3-4e93-ad99-53ee2695d5c8] Received event network-vif-unplugged-0e93d1de-671e-4e37-8e79-44bed7981254 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:17:33 compute-1 nova_compute[225855]: 2026-01-20 15:17:33.043 225859 DEBUG oslo_concurrency.lockutils [req-eeb9de47-d613-4b05-9cec-e42ecc6a7fbd req-ccb5514f-a2de-48bd-93f3-a3f5d62fd5b4 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "f1ded131-d9a3-4e93-ad99-53ee2695d5c8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:17:33 compute-1 nova_compute[225855]: 2026-01-20 15:17:33.045 225859 DEBUG oslo_concurrency.lockutils [req-eeb9de47-d613-4b05-9cec-e42ecc6a7fbd req-ccb5514f-a2de-48bd-93f3-a3f5d62fd5b4 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "f1ded131-d9a3-4e93-ad99-53ee2695d5c8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:17:33 compute-1 nova_compute[225855]: 2026-01-20 15:17:33.045 225859 DEBUG oslo_concurrency.lockutils [req-eeb9de47-d613-4b05-9cec-e42ecc6a7fbd req-ccb5514f-a2de-48bd-93f3-a3f5d62fd5b4 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "f1ded131-d9a3-4e93-ad99-53ee2695d5c8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:17:33 compute-1 nova_compute[225855]: 2026-01-20 15:17:33.045 225859 DEBUG nova.compute.manager [req-eeb9de47-d613-4b05-9cec-e42ecc6a7fbd req-ccb5514f-a2de-48bd-93f3-a3f5d62fd5b4 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f1ded131-d9a3-4e93-ad99-53ee2695d5c8] No waiting events found dispatching network-vif-unplugged-0e93d1de-671e-4e37-8e79-44bed7981254 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:17:33 compute-1 nova_compute[225855]: 2026-01-20 15:17:33.045 225859 DEBUG nova.compute.manager [req-eeb9de47-d613-4b05-9cec-e42ecc6a7fbd req-ccb5514f-a2de-48bd-93f3-a3f5d62fd5b4 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f1ded131-d9a3-4e93-ad99-53ee2695d5c8] Received event network-vif-unplugged-0e93d1de-671e-4e37-8e79-44bed7981254 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 20 15:17:33 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:17:33 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:17:33 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:17:33.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:17:33 compute-1 nova_compute[225855]: 2026-01-20 15:17:33.320 225859 INFO nova.virt.libvirt.driver [None req-8a54e417-8d3c-4bf9-889d-182b61853b5d e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] [instance: f1ded131-d9a3-4e93-ad99-53ee2695d5c8] Deleting instance files /var/lib/nova/instances/f1ded131-d9a3-4e93-ad99-53ee2695d5c8_del
Jan 20 15:17:33 compute-1 nova_compute[225855]: 2026-01-20 15:17:33.322 225859 INFO nova.virt.libvirt.driver [None req-8a54e417-8d3c-4bf9-889d-182b61853b5d e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] [instance: f1ded131-d9a3-4e93-ad99-53ee2695d5c8] Deletion of /var/lib/nova/instances/f1ded131-d9a3-4e93-ad99-53ee2695d5c8_del complete
Jan 20 15:17:33 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:17:33 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:17:33 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:17:33 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:17:33.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:17:33 compute-1 nova_compute[225855]: 2026-01-20 15:17:33.815 225859 INFO nova.compute.manager [None req-8a54e417-8d3c-4bf9-889d-182b61853b5d e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] [instance: f1ded131-d9a3-4e93-ad99-53ee2695d5c8] Took 1.18 seconds to destroy the instance on the hypervisor.
Jan 20 15:17:33 compute-1 nova_compute[225855]: 2026-01-20 15:17:33.816 225859 DEBUG oslo.service.loopingcall [None req-8a54e417-8d3c-4bf9-889d-182b61853b5d e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 20 15:17:33 compute-1 nova_compute[225855]: 2026-01-20 15:17:33.816 225859 DEBUG nova.compute.manager [-] [instance: f1ded131-d9a3-4e93-ad99-53ee2695d5c8] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 20 15:17:33 compute-1 nova_compute[225855]: 2026-01-20 15:17:33.817 225859 DEBUG nova.network.neutron [-] [instance: f1ded131-d9a3-4e93-ad99-53ee2695d5c8] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 20 15:17:34 compute-1 nova_compute[225855]: 2026-01-20 15:17:34.294 225859 INFO nova.compute.manager [None req-504840e6-6cce-4d9f-9319-c681f6ed5fc8 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 4f38d24a-3458-4c59-8480-8094ffcbb5aa] Get console output
Jan 20 15:17:34 compute-1 nova_compute[225855]: 2026-01-20 15:17:34.302 263775 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Jan 20 15:17:34 compute-1 nova_compute[225855]: 2026-01-20 15:17:34.482 225859 INFO nova.compute.manager [None req-cae5a840-e40e-499d-812b-c11e11f159ae 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 4f38d24a-3458-4c59-8480-8094ffcbb5aa] Unpausing
Jan 20 15:17:34 compute-1 nova_compute[225855]: 2026-01-20 15:17:34.484 225859 DEBUG nova.objects.instance [None req-cae5a840-e40e-499d-812b-c11e11f159ae 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Lazy-loading 'flavor' on Instance uuid 4f38d24a-3458-4c59-8480-8094ffcbb5aa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:17:34 compute-1 nova_compute[225855]: 2026-01-20 15:17:34.514 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768922254.5136197, 4f38d24a-3458-4c59-8480-8094ffcbb5aa => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:17:34 compute-1 nova_compute[225855]: 2026-01-20 15:17:34.514 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 4f38d24a-3458-4c59-8480-8094ffcbb5aa] VM Resumed (Lifecycle Event)
Jan 20 15:17:34 compute-1 virtqemud[225396]: argument unsupported: QEMU guest agent is not configured
Jan 20 15:17:34 compute-1 nova_compute[225855]: 2026-01-20 15:17:34.519 225859 DEBUG nova.virt.libvirt.guest [None req-cae5a840-e40e-499d-812b-c11e11f159ae 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 4f38d24a-3458-4c59-8480-8094ffcbb5aa] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200
Jan 20 15:17:34 compute-1 nova_compute[225855]: 2026-01-20 15:17:34.519 225859 DEBUG nova.compute.manager [None req-cae5a840-e40e-499d-812b-c11e11f159ae 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 4f38d24a-3458-4c59-8480-8094ffcbb5aa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:17:34 compute-1 nova_compute[225855]: 2026-01-20 15:17:34.546 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 4f38d24a-3458-4c59-8480-8094ffcbb5aa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:17:34 compute-1 nova_compute[225855]: 2026-01-20 15:17:34.550 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 4f38d24a-3458-4c59-8480-8094ffcbb5aa] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: paused, current task_state: unpausing, current DB power_state: 3, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 15:17:34 compute-1 nova_compute[225855]: 2026-01-20 15:17:34.584 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 4f38d24a-3458-4c59-8480-8094ffcbb5aa] During sync_power_state the instance has a pending task (unpausing). Skip.
Jan 20 15:17:34 compute-1 nova_compute[225855]: 2026-01-20 15:17:34.661 225859 DEBUG nova.network.neutron [-] [instance: f1ded131-d9a3-4e93-ad99-53ee2695d5c8] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:17:34 compute-1 nova_compute[225855]: 2026-01-20 15:17:34.677 225859 INFO nova.compute.manager [-] [instance: f1ded131-d9a3-4e93-ad99-53ee2695d5c8] Took 0.86 seconds to deallocate network for instance.
Jan 20 15:17:34 compute-1 nova_compute[225855]: 2026-01-20 15:17:34.727 225859 DEBUG oslo_concurrency.lockutils [None req-8a54e417-8d3c-4bf9-889d-182b61853b5d e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:17:34 compute-1 nova_compute[225855]: 2026-01-20 15:17:34.728 225859 DEBUG oslo_concurrency.lockutils [None req-8a54e417-8d3c-4bf9-889d-182b61853b5d e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:17:34 compute-1 nova_compute[225855]: 2026-01-20 15:17:34.743 225859 DEBUG nova.compute.manager [req-96087b76-7c97-4fac-bfa4-8059dafbac9c req-919d7204-fa65-45c7-bd9e-3e0bf8e85106 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f1ded131-d9a3-4e93-ad99-53ee2695d5c8] Received event network-vif-deleted-0e93d1de-671e-4e37-8e79-44bed7981254 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:17:34 compute-1 nova_compute[225855]: 2026-01-20 15:17:34.800 225859 DEBUG oslo_concurrency.processutils [None req-8a54e417-8d3c-4bf9-889d-182b61853b5d e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:17:35 compute-1 nova_compute[225855]: 2026-01-20 15:17:35.153 225859 DEBUG nova.compute.manager [req-987b788c-5af9-45c8-bc34-f16a7311b8a8 req-fcbca577-c859-4281-81a7-13ef4f764aa6 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f1ded131-d9a3-4e93-ad99-53ee2695d5c8] Received event network-vif-plugged-0e93d1de-671e-4e37-8e79-44bed7981254 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:17:35 compute-1 nova_compute[225855]: 2026-01-20 15:17:35.154 225859 DEBUG oslo_concurrency.lockutils [req-987b788c-5af9-45c8-bc34-f16a7311b8a8 req-fcbca577-c859-4281-81a7-13ef4f764aa6 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "f1ded131-d9a3-4e93-ad99-53ee2695d5c8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:17:35 compute-1 nova_compute[225855]: 2026-01-20 15:17:35.154 225859 DEBUG oslo_concurrency.lockutils [req-987b788c-5af9-45c8-bc34-f16a7311b8a8 req-fcbca577-c859-4281-81a7-13ef4f764aa6 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "f1ded131-d9a3-4e93-ad99-53ee2695d5c8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:17:35 compute-1 nova_compute[225855]: 2026-01-20 15:17:35.154 225859 DEBUG oslo_concurrency.lockutils [req-987b788c-5af9-45c8-bc34-f16a7311b8a8 req-fcbca577-c859-4281-81a7-13ef4f764aa6 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "f1ded131-d9a3-4e93-ad99-53ee2695d5c8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:17:35 compute-1 nova_compute[225855]: 2026-01-20 15:17:35.154 225859 DEBUG nova.compute.manager [req-987b788c-5af9-45c8-bc34-f16a7311b8a8 req-fcbca577-c859-4281-81a7-13ef4f764aa6 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f1ded131-d9a3-4e93-ad99-53ee2695d5c8] No waiting events found dispatching network-vif-plugged-0e93d1de-671e-4e37-8e79-44bed7981254 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:17:35 compute-1 nova_compute[225855]: 2026-01-20 15:17:35.155 225859 WARNING nova.compute.manager [req-987b788c-5af9-45c8-bc34-f16a7311b8a8 req-fcbca577-c859-4281-81a7-13ef4f764aa6 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f1ded131-d9a3-4e93-ad99-53ee2695d5c8] Received unexpected event network-vif-plugged-0e93d1de-671e-4e37-8e79-44bed7981254 for instance with vm_state deleted and task_state None.
Jan 20 15:17:35 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:17:35 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:17:35 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:17:35.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:17:35 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:17:35 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3058422568' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:17:35 compute-1 nova_compute[225855]: 2026-01-20 15:17:35.253 225859 DEBUG oslo_concurrency.processutils [None req-8a54e417-8d3c-4bf9-889d-182b61853b5d e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:17:35 compute-1 nova_compute[225855]: 2026-01-20 15:17:35.260 225859 DEBUG nova.compute.provider_tree [None req-8a54e417-8d3c-4bf9-889d-182b61853b5d e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:17:35 compute-1 nova_compute[225855]: 2026-01-20 15:17:35.285 225859 DEBUG nova.scheduler.client.report [None req-8a54e417-8d3c-4bf9-889d-182b61853b5d e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:17:35 compute-1 nova_compute[225855]: 2026-01-20 15:17:35.336 225859 DEBUG oslo_concurrency.lockutils [None req-8a54e417-8d3c-4bf9-889d-182b61853b5d e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.608s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:17:35 compute-1 nova_compute[225855]: 2026-01-20 15:17:35.382 225859 INFO nova.scheduler.client.report [None req-8a54e417-8d3c-4bf9-889d-182b61853b5d e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Deleted allocations for instance f1ded131-d9a3-4e93-ad99-53ee2695d5c8
Jan 20 15:17:35 compute-1 nova_compute[225855]: 2026-01-20 15:17:35.462 225859 DEBUG oslo_concurrency.lockutils [None req-8a54e417-8d3c-4bf9-889d-182b61853b5d e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Lock "f1ded131-d9a3-4e93-ad99-53ee2695d5c8" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.834s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:17:35 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:17:35 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:17:35 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:17:35.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:17:35 compute-1 ceph-mon[81775]: pgmap v2858: 321 pgs: 321 active+clean; 500 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 109 KiB/s rd, 42 KiB/s wr, 106 op/s
Jan 20 15:17:35 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/3058422568' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:17:36 compute-1 ceph-mon[81775]: pgmap v2859: 321 pgs: 321 active+clean; 438 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 118 KiB/s rd, 53 KiB/s wr, 127 op/s
Jan 20 15:17:36 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 20 15:17:36 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1154417224' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:17:36 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 20 15:17:36 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1154417224' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:17:36 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e415 e415: 3 total, 3 up, 3 in
Jan 20 15:17:37 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:17:37 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:17:37 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:17:37.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:17:37 compute-1 nova_compute[225855]: 2026-01-20 15:17:37.285 225859 INFO nova.compute.manager [None req-e8dfab0f-c800-4f16-a2ef-bbcce3513308 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 4f38d24a-3458-4c59-8480-8094ffcbb5aa] Get console output
Jan 20 15:17:37 compute-1 nova_compute[225855]: 2026-01-20 15:17:37.291 263775 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Jan 20 15:17:37 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:17:37 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:17:37 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:17:37.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:17:37 compute-1 nova_compute[225855]: 2026-01-20 15:17:37.899 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:17:37 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/1154417224' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:17:37 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/1154417224' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:17:37 compute-1 ceph-mon[81775]: osdmap e415: 3 total, 3 up, 3 in
Jan 20 15:17:38 compute-1 nova_compute[225855]: 2026-01-20 15:17:38.022 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:17:38 compute-1 nova_compute[225855]: 2026-01-20 15:17:38.332 225859 DEBUG nova.compute.manager [req-072b75bc-eee6-4b63-b059-6894187d9313 req-aad19ad7-7ce0-402e-b026-17f3b61374c2 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 4f38d24a-3458-4c59-8480-8094ffcbb5aa] Received event network-changed-d3a1fab4-7d4e-40cd-bdbb-b337196adbc5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:17:38 compute-1 nova_compute[225855]: 2026-01-20 15:17:38.333 225859 DEBUG nova.compute.manager [req-072b75bc-eee6-4b63-b059-6894187d9313 req-aad19ad7-7ce0-402e-b026-17f3b61374c2 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 4f38d24a-3458-4c59-8480-8094ffcbb5aa] Refreshing instance network info cache due to event network-changed-d3a1fab4-7d4e-40cd-bdbb-b337196adbc5. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 15:17:38 compute-1 nova_compute[225855]: 2026-01-20 15:17:38.333 225859 DEBUG oslo_concurrency.lockutils [req-072b75bc-eee6-4b63-b059-6894187d9313 req-aad19ad7-7ce0-402e-b026-17f3b61374c2 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-4f38d24a-3458-4c59-8480-8094ffcbb5aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:17:38 compute-1 nova_compute[225855]: 2026-01-20 15:17:38.333 225859 DEBUG oslo_concurrency.lockutils [req-072b75bc-eee6-4b63-b059-6894187d9313 req-aad19ad7-7ce0-402e-b026-17f3b61374c2 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-4f38d24a-3458-4c59-8480-8094ffcbb5aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:17:38 compute-1 nova_compute[225855]: 2026-01-20 15:17:38.334 225859 DEBUG nova.network.neutron [req-072b75bc-eee6-4b63-b059-6894187d9313 req-aad19ad7-7ce0-402e-b026-17f3b61374c2 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 4f38d24a-3458-4c59-8480-8094ffcbb5aa] Refreshing network info cache for port d3a1fab4-7d4e-40cd-bdbb-b337196adbc5 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 15:17:38 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e415 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:17:38 compute-1 nova_compute[225855]: 2026-01-20 15:17:38.413 225859 DEBUG oslo_concurrency.lockutils [None req-e22fce45-8ff1-4e21-ae11-6ec524f9b68d 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Acquiring lock "4f38d24a-3458-4c59-8480-8094ffcbb5aa" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:17:38 compute-1 nova_compute[225855]: 2026-01-20 15:17:38.413 225859 DEBUG oslo_concurrency.lockutils [None req-e22fce45-8ff1-4e21-ae11-6ec524f9b68d 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Lock "4f38d24a-3458-4c59-8480-8094ffcbb5aa" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:17:38 compute-1 nova_compute[225855]: 2026-01-20 15:17:38.413 225859 DEBUG oslo_concurrency.lockutils [None req-e22fce45-8ff1-4e21-ae11-6ec524f9b68d 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Acquiring lock "4f38d24a-3458-4c59-8480-8094ffcbb5aa-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:17:38 compute-1 nova_compute[225855]: 2026-01-20 15:17:38.413 225859 DEBUG oslo_concurrency.lockutils [None req-e22fce45-8ff1-4e21-ae11-6ec524f9b68d 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Lock "4f38d24a-3458-4c59-8480-8094ffcbb5aa-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:17:38 compute-1 nova_compute[225855]: 2026-01-20 15:17:38.414 225859 DEBUG oslo_concurrency.lockutils [None req-e22fce45-8ff1-4e21-ae11-6ec524f9b68d 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Lock "4f38d24a-3458-4c59-8480-8094ffcbb5aa-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:17:38 compute-1 nova_compute[225855]: 2026-01-20 15:17:38.415 225859 INFO nova.compute.manager [None req-e22fce45-8ff1-4e21-ae11-6ec524f9b68d 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 4f38d24a-3458-4c59-8480-8094ffcbb5aa] Terminating instance
Jan 20 15:17:38 compute-1 nova_compute[225855]: 2026-01-20 15:17:38.416 225859 DEBUG nova.compute.manager [None req-e22fce45-8ff1-4e21-ae11-6ec524f9b68d 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 4f38d24a-3458-4c59-8480-8094ffcbb5aa] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 20 15:17:38 compute-1 kernel: tapd3a1fab4-7d (unregistering): left promiscuous mode
Jan 20 15:17:38 compute-1 NetworkManager[49104]: <info>  [1768922258.4858] device (tapd3a1fab4-7d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 20 15:17:38 compute-1 ovn_controller[130490]: 2026-01-20T15:17:38Z|00859|binding|INFO|Releasing lport d3a1fab4-7d4e-40cd-bdbb-b337196adbc5 from this chassis (sb_readonly=0)
Jan 20 15:17:38 compute-1 ovn_controller[130490]: 2026-01-20T15:17:38Z|00860|binding|INFO|Setting lport d3a1fab4-7d4e-40cd-bdbb-b337196adbc5 down in Southbound
Jan 20 15:17:38 compute-1 nova_compute[225855]: 2026-01-20 15:17:38.491 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:17:38 compute-1 ovn_controller[130490]: 2026-01-20T15:17:38Z|00861|binding|INFO|Removing iface tapd3a1fab4-7d ovn-installed in OVS
Jan 20 15:17:38 compute-1 nova_compute[225855]: 2026-01-20 15:17:38.493 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:17:38 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:17:38.498 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:20:6b:ff 10.100.0.9'], port_security=['fa:16:3e:20:6b:ff 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '4f38d24a-3458-4c59-8480-8094ffcbb5aa', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4512e7e3-1668-4e98-8240-843256180395', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1ed5feeeafe7448a8efb47ab975b0ead', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a9f2f56b-e7c7-475c-b1af-94303aad79ef', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0147113d-42da-4762-ae51-c60dbf8c0dd2, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], logical_port=d3a1fab4-7d4e-40cd-bdbb-b337196adbc5) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:17:38 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:17:38.499 140354 INFO neutron.agent.ovn.metadata.agent [-] Port d3a1fab4-7d4e-40cd-bdbb-b337196adbc5 in datapath 4512e7e3-1668-4e98-8240-843256180395 unbound from our chassis
Jan 20 15:17:38 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:17:38.500 140354 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4512e7e3-1668-4e98-8240-843256180395, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 20 15:17:38 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:17:38.501 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[a5997726-2c09-4d4a-9285-b7b32e0657f5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:17:38 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:17:38.502 140354 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-4512e7e3-1668-4e98-8240-843256180395 namespace which is not needed anymore
Jan 20 15:17:38 compute-1 nova_compute[225855]: 2026-01-20 15:17:38.512 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:17:38 compute-1 systemd[1]: machine-qemu\x2d101\x2dinstance\x2d000000bf.scope: Deactivated successfully.
Jan 20 15:17:38 compute-1 systemd[1]: machine-qemu\x2d101\x2dinstance\x2d000000bf.scope: Consumed 13.944s CPU time.
Jan 20 15:17:38 compute-1 systemd-machined[194361]: Machine qemu-101-instance-000000bf terminated.
Jan 20 15:17:38 compute-1 neutron-haproxy-ovnmeta-4512e7e3-1668-4e98-8240-843256180395[306389]: [NOTICE]   (306393) : haproxy version is 2.8.14-c23fe91
Jan 20 15:17:38 compute-1 neutron-haproxy-ovnmeta-4512e7e3-1668-4e98-8240-843256180395[306389]: [NOTICE]   (306393) : path to executable is /usr/sbin/haproxy
Jan 20 15:17:38 compute-1 neutron-haproxy-ovnmeta-4512e7e3-1668-4e98-8240-843256180395[306389]: [WARNING]  (306393) : Exiting Master process...
Jan 20 15:17:38 compute-1 neutron-haproxy-ovnmeta-4512e7e3-1668-4e98-8240-843256180395[306389]: [ALERT]    (306393) : Current worker (306395) exited with code 143 (Terminated)
Jan 20 15:17:38 compute-1 neutron-haproxy-ovnmeta-4512e7e3-1668-4e98-8240-843256180395[306389]: [WARNING]  (306393) : All workers exited. Exiting... (0)
Jan 20 15:17:38 compute-1 systemd[1]: libpod-e60f633dc8d25180e6187f9fd54256e0e27093030965e6a20f3d5598fbea10ce.scope: Deactivated successfully.
Jan 20 15:17:38 compute-1 podman[306661]: 2026-01-20 15:17:38.635754383 +0000 UTC m=+0.046683676 container died e60f633dc8d25180e6187f9fd54256e0e27093030965e6a20f3d5598fbea10ce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4512e7e3-1668-4e98-8240-843256180395, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 20 15:17:38 compute-1 nova_compute[225855]: 2026-01-20 15:17:38.651 225859 INFO nova.virt.libvirt.driver [-] [instance: 4f38d24a-3458-4c59-8480-8094ffcbb5aa] Instance destroyed successfully.
Jan 20 15:17:38 compute-1 nova_compute[225855]: 2026-01-20 15:17:38.651 225859 DEBUG nova.objects.instance [None req-e22fce45-8ff1-4e21-ae11-6ec524f9b68d 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Lazy-loading 'resources' on Instance uuid 4f38d24a-3458-4c59-8480-8094ffcbb5aa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:17:38 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e60f633dc8d25180e6187f9fd54256e0e27093030965e6a20f3d5598fbea10ce-userdata-shm.mount: Deactivated successfully.
Jan 20 15:17:38 compute-1 systemd[1]: var-lib-containers-storage-overlay-e7810c804e6835b7fafc97e2298621921e0f31c1da0eaafddfa82d1449fccd1e-merged.mount: Deactivated successfully.
Jan 20 15:17:38 compute-1 nova_compute[225855]: 2026-01-20 15:17:38.672 225859 DEBUG nova.virt.libvirt.vif [None req-e22fce45-8ff1-4e21-ae11-6ec524f9b68d 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T15:17:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-2057447015',display_name='tempest-TestNetworkAdvancedServerOps-server-2057447015',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-2057447015',id=191,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJPwC5MyMqyTrbrSBwBOpxBpSiLPbu1nzobGp6ktmxE+oIlgwGH9ZkqYZyAjLxwv50DDSq5iaSNQxNoKrNJWo+FdRObJJTJ5JQ9hbj5JsMfLfRRZUDmDAFS5rhSXxsMyYg==',key_name='tempest-TestNetworkAdvancedServerOps-386377958',keypairs=<?>,launch_index=0,launched_at=2026-01-20T15:17:10Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='1ed5feeeafe7448a8efb47ab975b0ead',ramdisk_id='',reservation_id='r-foa05j4u',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-175282664',owner_user_name='tempest-TestNetworkAdvancedServerOps-175282664-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T15:17:34Z,user_data=None,user_id='442a7a5cb8ea426a82be9762b262d171',uuid=4f38d24a-3458-4c59-8480-8094ffcbb5aa,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d3a1fab4-7d4e-40cd-bdbb-b337196adbc5", "address": "fa:16:3e:20:6b:ff", "network": {"id": "4512e7e3-1668-4e98-8240-843256180395", "bridge": "br-int", "label": "tempest-network-smoke--434600618", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.227", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1ed5feeeafe7448a8efb47ab975b0ead", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd3a1fab4-7d", "ovs_interfaceid": "d3a1fab4-7d4e-40cd-bdbb-b337196adbc5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 20 15:17:38 compute-1 nova_compute[225855]: 2026-01-20 15:17:38.673 225859 DEBUG nova.network.os_vif_util [None req-e22fce45-8ff1-4e21-ae11-6ec524f9b68d 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Converting VIF {"id": "d3a1fab4-7d4e-40cd-bdbb-b337196adbc5", "address": "fa:16:3e:20:6b:ff", "network": {"id": "4512e7e3-1668-4e98-8240-843256180395", "bridge": "br-int", "label": "tempest-network-smoke--434600618", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.227", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1ed5feeeafe7448a8efb47ab975b0ead", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd3a1fab4-7d", "ovs_interfaceid": "d3a1fab4-7d4e-40cd-bdbb-b337196adbc5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 15:17:38 compute-1 nova_compute[225855]: 2026-01-20 15:17:38.673 225859 DEBUG nova.network.os_vif_util [None req-e22fce45-8ff1-4e21-ae11-6ec524f9b68d 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:20:6b:ff,bridge_name='br-int',has_traffic_filtering=True,id=d3a1fab4-7d4e-40cd-bdbb-b337196adbc5,network=Network(4512e7e3-1668-4e98-8240-843256180395),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd3a1fab4-7d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 15:17:38 compute-1 nova_compute[225855]: 2026-01-20 15:17:38.674 225859 DEBUG os_vif [None req-e22fce45-8ff1-4e21-ae11-6ec524f9b68d 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:20:6b:ff,bridge_name='br-int',has_traffic_filtering=True,id=d3a1fab4-7d4e-40cd-bdbb-b337196adbc5,network=Network(4512e7e3-1668-4e98-8240-843256180395),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd3a1fab4-7d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 20 15:17:38 compute-1 podman[306661]: 2026-01-20 15:17:38.676120897 +0000 UTC m=+0.087050190 container cleanup e60f633dc8d25180e6187f9fd54256e0e27093030965e6a20f3d5598fbea10ce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4512e7e3-1668-4e98-8240-843256180395, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS)
Jan 20 15:17:38 compute-1 nova_compute[225855]: 2026-01-20 15:17:38.676 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:17:38 compute-1 nova_compute[225855]: 2026-01-20 15:17:38.677 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd3a1fab4-7d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:17:38 compute-1 nova_compute[225855]: 2026-01-20 15:17:38.722 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:17:38 compute-1 systemd[1]: libpod-conmon-e60f633dc8d25180e6187f9fd54256e0e27093030965e6a20f3d5598fbea10ce.scope: Deactivated successfully.
Jan 20 15:17:38 compute-1 nova_compute[225855]: 2026-01-20 15:17:38.725 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 15:17:38 compute-1 nova_compute[225855]: 2026-01-20 15:17:38.727 225859 INFO os_vif [None req-e22fce45-8ff1-4e21-ae11-6ec524f9b68d 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:20:6b:ff,bridge_name='br-int',has_traffic_filtering=True,id=d3a1fab4-7d4e-40cd-bdbb-b337196adbc5,network=Network(4512e7e3-1668-4e98-8240-843256180395),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd3a1fab4-7d')
Jan 20 15:17:38 compute-1 podman[306701]: 2026-01-20 15:17:38.73613803 +0000 UTC m=+0.040652214 container remove e60f633dc8d25180e6187f9fd54256e0e27093030965e6a20f3d5598fbea10ce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4512e7e3-1668-4e98-8240-843256180395, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 20 15:17:38 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:17:38.738 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[1ae5c318-1c66-4e48-9ce1-a5fac5791f65]: (4, ('Tue Jan 20 03:17:38 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-4512e7e3-1668-4e98-8240-843256180395 (e60f633dc8d25180e6187f9fd54256e0e27093030965e6a20f3d5598fbea10ce)\ne60f633dc8d25180e6187f9fd54256e0e27093030965e6a20f3d5598fbea10ce\nTue Jan 20 03:17:38 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-4512e7e3-1668-4e98-8240-843256180395 (e60f633dc8d25180e6187f9fd54256e0e27093030965e6a20f3d5598fbea10ce)\ne60f633dc8d25180e6187f9fd54256e0e27093030965e6a20f3d5598fbea10ce\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:17:38 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:17:38.740 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[23075dba-bf29-4b74-8718-3d7fdf869613]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:17:38 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:17:38.741 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4512e7e3-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:17:38 compute-1 kernel: tap4512e7e3-10: left promiscuous mode
Jan 20 15:17:38 compute-1 nova_compute[225855]: 2026-01-20 15:17:38.746 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:17:38 compute-1 nova_compute[225855]: 2026-01-20 15:17:38.757 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:17:38 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:17:38.760 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[ed5d8bec-a388-4f43-be91-b8b51cfcbbee]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:17:38 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:17:38.775 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[ff35ea24-931d-4e20-a90b-4b36910c2bde]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:17:38 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:17:38.776 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[ac41e65e-5619-4ca8-bc86-57b07c42f708]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:17:38 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:17:38.792 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[bbfabdc8-d2de-4794-a0be-e3802e131ef1]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 726935, 'reachable_time': 34330, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 306734, 'error': None, 'target': 'ovnmeta-4512e7e3-1668-4e98-8240-843256180395', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:17:38 compute-1 systemd[1]: run-netns-ovnmeta\x2d4512e7e3\x2d1668\x2d4e98\x2d8240\x2d843256180395.mount: Deactivated successfully.
Jan 20 15:17:38 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:17:38.794 140466 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-4512e7e3-1668-4e98-8240-843256180395 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 20 15:17:38 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:17:38.795 140466 DEBUG oslo.privsep.daemon [-] privsep: reply[ba6a8b0f-adb3-4b19-a5d3-870cfd83275a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:17:39 compute-1 ceph-mon[81775]: pgmap v2861: 321 pgs: 321 active+clean; 438 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 47 KiB/s rd, 22 KiB/s wr, 71 op/s
Jan 20 15:17:39 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:17:39 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:17:39 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:17:39.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:17:39 compute-1 nova_compute[225855]: 2026-01-20 15:17:39.462 225859 INFO nova.virt.libvirt.driver [None req-e22fce45-8ff1-4e21-ae11-6ec524f9b68d 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 4f38d24a-3458-4c59-8480-8094ffcbb5aa] Deleting instance files /var/lib/nova/instances/4f38d24a-3458-4c59-8480-8094ffcbb5aa_del
Jan 20 15:17:39 compute-1 nova_compute[225855]: 2026-01-20 15:17:39.463 225859 INFO nova.virt.libvirt.driver [None req-e22fce45-8ff1-4e21-ae11-6ec524f9b68d 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 4f38d24a-3458-4c59-8480-8094ffcbb5aa] Deletion of /var/lib/nova/instances/4f38d24a-3458-4c59-8480-8094ffcbb5aa_del complete
Jan 20 15:17:39 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:17:39 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:17:39 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:17:39.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:17:39 compute-1 nova_compute[225855]: 2026-01-20 15:17:39.521 225859 INFO nova.compute.manager [None req-e22fce45-8ff1-4e21-ae11-6ec524f9b68d 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 4f38d24a-3458-4c59-8480-8094ffcbb5aa] Took 1.11 seconds to destroy the instance on the hypervisor.
Jan 20 15:17:39 compute-1 nova_compute[225855]: 2026-01-20 15:17:39.522 225859 DEBUG oslo.service.loopingcall [None req-e22fce45-8ff1-4e21-ae11-6ec524f9b68d 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 20 15:17:39 compute-1 nova_compute[225855]: 2026-01-20 15:17:39.522 225859 DEBUG nova.compute.manager [-] [instance: 4f38d24a-3458-4c59-8480-8094ffcbb5aa] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 20 15:17:39 compute-1 nova_compute[225855]: 2026-01-20 15:17:39.522 225859 DEBUG nova.network.neutron [-] [instance: 4f38d24a-3458-4c59-8480-8094ffcbb5aa] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 20 15:17:39 compute-1 nova_compute[225855]: 2026-01-20 15:17:39.728 225859 DEBUG nova.network.neutron [req-072b75bc-eee6-4b63-b059-6894187d9313 req-aad19ad7-7ce0-402e-b026-17f3b61374c2 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 4f38d24a-3458-4c59-8480-8094ffcbb5aa] Updated VIF entry in instance network info cache for port d3a1fab4-7d4e-40cd-bdbb-b337196adbc5. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 15:17:39 compute-1 nova_compute[225855]: 2026-01-20 15:17:39.729 225859 DEBUG nova.network.neutron [req-072b75bc-eee6-4b63-b059-6894187d9313 req-aad19ad7-7ce0-402e-b026-17f3b61374c2 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 4f38d24a-3458-4c59-8480-8094ffcbb5aa] Updating instance_info_cache with network_info: [{"id": "d3a1fab4-7d4e-40cd-bdbb-b337196adbc5", "address": "fa:16:3e:20:6b:ff", "network": {"id": "4512e7e3-1668-4e98-8240-843256180395", "bridge": "br-int", "label": "tempest-network-smoke--434600618", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1ed5feeeafe7448a8efb47ab975b0ead", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd3a1fab4-7d", "ovs_interfaceid": "d3a1fab4-7d4e-40cd-bdbb-b337196adbc5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:17:39 compute-1 nova_compute[225855]: 2026-01-20 15:17:39.751 225859 DEBUG oslo_concurrency.lockutils [req-072b75bc-eee6-4b63-b059-6894187d9313 req-aad19ad7-7ce0-402e-b026-17f3b61374c2 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-4f38d24a-3458-4c59-8480-8094ffcbb5aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:17:40 compute-1 nova_compute[225855]: 2026-01-20 15:17:40.062 225859 DEBUG nova.network.neutron [-] [instance: 4f38d24a-3458-4c59-8480-8094ffcbb5aa] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:17:40 compute-1 nova_compute[225855]: 2026-01-20 15:17:40.084 225859 INFO nova.compute.manager [-] [instance: 4f38d24a-3458-4c59-8480-8094ffcbb5aa] Took 0.56 seconds to deallocate network for instance.
Jan 20 15:17:40 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/1237641148' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:17:40 compute-1 nova_compute[225855]: 2026-01-20 15:17:40.156 225859 DEBUG oslo_concurrency.lockutils [None req-e22fce45-8ff1-4e21-ae11-6ec524f9b68d 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:17:40 compute-1 nova_compute[225855]: 2026-01-20 15:17:40.156 225859 DEBUG oslo_concurrency.lockutils [None req-e22fce45-8ff1-4e21-ae11-6ec524f9b68d 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:17:40 compute-1 nova_compute[225855]: 2026-01-20 15:17:40.200 225859 DEBUG oslo_concurrency.processutils [None req-e22fce45-8ff1-4e21-ae11-6ec524f9b68d 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:17:40 compute-1 nova_compute[225855]: 2026-01-20 15:17:40.565 225859 DEBUG nova.compute.manager [req-8c67f0ba-14cf-46e8-8818-c41feb18a543 req-cf345f5f-0ad5-4654-877e-3240e6bcab88 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 4f38d24a-3458-4c59-8480-8094ffcbb5aa] Received event network-vif-unplugged-d3a1fab4-7d4e-40cd-bdbb-b337196adbc5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:17:40 compute-1 nova_compute[225855]: 2026-01-20 15:17:40.566 225859 DEBUG oslo_concurrency.lockutils [req-8c67f0ba-14cf-46e8-8818-c41feb18a543 req-cf345f5f-0ad5-4654-877e-3240e6bcab88 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "4f38d24a-3458-4c59-8480-8094ffcbb5aa-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:17:40 compute-1 nova_compute[225855]: 2026-01-20 15:17:40.566 225859 DEBUG oslo_concurrency.lockutils [req-8c67f0ba-14cf-46e8-8818-c41feb18a543 req-cf345f5f-0ad5-4654-877e-3240e6bcab88 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "4f38d24a-3458-4c59-8480-8094ffcbb5aa-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:17:40 compute-1 nova_compute[225855]: 2026-01-20 15:17:40.566 225859 DEBUG oslo_concurrency.lockutils [req-8c67f0ba-14cf-46e8-8818-c41feb18a543 req-cf345f5f-0ad5-4654-877e-3240e6bcab88 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "4f38d24a-3458-4c59-8480-8094ffcbb5aa-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:17:40 compute-1 nova_compute[225855]: 2026-01-20 15:17:40.566 225859 DEBUG nova.compute.manager [req-8c67f0ba-14cf-46e8-8818-c41feb18a543 req-cf345f5f-0ad5-4654-877e-3240e6bcab88 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 4f38d24a-3458-4c59-8480-8094ffcbb5aa] No waiting events found dispatching network-vif-unplugged-d3a1fab4-7d4e-40cd-bdbb-b337196adbc5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:17:40 compute-1 nova_compute[225855]: 2026-01-20 15:17:40.566 225859 WARNING nova.compute.manager [req-8c67f0ba-14cf-46e8-8818-c41feb18a543 req-cf345f5f-0ad5-4654-877e-3240e6bcab88 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 4f38d24a-3458-4c59-8480-8094ffcbb5aa] Received unexpected event network-vif-unplugged-d3a1fab4-7d4e-40cd-bdbb-b337196adbc5 for instance with vm_state deleted and task_state None.
Jan 20 15:17:40 compute-1 nova_compute[225855]: 2026-01-20 15:17:40.567 225859 DEBUG nova.compute.manager [req-8c67f0ba-14cf-46e8-8818-c41feb18a543 req-cf345f5f-0ad5-4654-877e-3240e6bcab88 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 4f38d24a-3458-4c59-8480-8094ffcbb5aa] Received event network-vif-plugged-d3a1fab4-7d4e-40cd-bdbb-b337196adbc5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:17:40 compute-1 nova_compute[225855]: 2026-01-20 15:17:40.567 225859 DEBUG oslo_concurrency.lockutils [req-8c67f0ba-14cf-46e8-8818-c41feb18a543 req-cf345f5f-0ad5-4654-877e-3240e6bcab88 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "4f38d24a-3458-4c59-8480-8094ffcbb5aa-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:17:40 compute-1 nova_compute[225855]: 2026-01-20 15:17:40.567 225859 DEBUG oslo_concurrency.lockutils [req-8c67f0ba-14cf-46e8-8818-c41feb18a543 req-cf345f5f-0ad5-4654-877e-3240e6bcab88 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "4f38d24a-3458-4c59-8480-8094ffcbb5aa-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:17:40 compute-1 nova_compute[225855]: 2026-01-20 15:17:40.567 225859 DEBUG oslo_concurrency.lockutils [req-8c67f0ba-14cf-46e8-8818-c41feb18a543 req-cf345f5f-0ad5-4654-877e-3240e6bcab88 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "4f38d24a-3458-4c59-8480-8094ffcbb5aa-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:17:40 compute-1 nova_compute[225855]: 2026-01-20 15:17:40.567 225859 DEBUG nova.compute.manager [req-8c67f0ba-14cf-46e8-8818-c41feb18a543 req-cf345f5f-0ad5-4654-877e-3240e6bcab88 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 4f38d24a-3458-4c59-8480-8094ffcbb5aa] No waiting events found dispatching network-vif-plugged-d3a1fab4-7d4e-40cd-bdbb-b337196adbc5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:17:40 compute-1 nova_compute[225855]: 2026-01-20 15:17:40.567 225859 WARNING nova.compute.manager [req-8c67f0ba-14cf-46e8-8818-c41feb18a543 req-cf345f5f-0ad5-4654-877e-3240e6bcab88 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 4f38d24a-3458-4c59-8480-8094ffcbb5aa] Received unexpected event network-vif-plugged-d3a1fab4-7d4e-40cd-bdbb-b337196adbc5 for instance with vm_state deleted and task_state None.
Jan 20 15:17:40 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:17:40 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3071866721' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:17:40 compute-1 nova_compute[225855]: 2026-01-20 15:17:40.676 225859 DEBUG oslo_concurrency.processutils [None req-e22fce45-8ff1-4e21-ae11-6ec524f9b68d 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:17:40 compute-1 nova_compute[225855]: 2026-01-20 15:17:40.684 225859 DEBUG nova.compute.provider_tree [None req-e22fce45-8ff1-4e21-ae11-6ec524f9b68d 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:17:40 compute-1 nova_compute[225855]: 2026-01-20 15:17:40.706 225859 DEBUG nova.scheduler.client.report [None req-e22fce45-8ff1-4e21-ae11-6ec524f9b68d 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:17:40 compute-1 nova_compute[225855]: 2026-01-20 15:17:40.729 225859 DEBUG oslo_concurrency.lockutils [None req-e22fce45-8ff1-4e21-ae11-6ec524f9b68d 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.573s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:17:40 compute-1 nova_compute[225855]: 2026-01-20 15:17:40.930 225859 INFO nova.scheduler.client.report [None req-e22fce45-8ff1-4e21-ae11-6ec524f9b68d 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Deleted allocations for instance 4f38d24a-3458-4c59-8480-8094ffcbb5aa
Jan 20 15:17:40 compute-1 nova_compute[225855]: 2026-01-20 15:17:40.997 225859 DEBUG oslo_concurrency.lockutils [None req-e22fce45-8ff1-4e21-ae11-6ec524f9b68d 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Lock "4f38d24a-3458-4c59-8480-8094ffcbb5aa" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.584s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:17:41 compute-1 ceph-mon[81775]: pgmap v2862: 321 pgs: 321 active+clean; 337 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 69 KiB/s rd, 19 KiB/s wr, 99 op/s
Jan 20 15:17:41 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/3071866721' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:17:41 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:17:41 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:17:41 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:17:41.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:17:41 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:17:41 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:17:41 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:17:41.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:17:41 compute-1 nova_compute[225855]: 2026-01-20 15:17:41.770 225859 DEBUG nova.compute.manager [req-03de410d-9cad-41ed-b307-4538a94ce33d req-069102a6-e658-4613-a128-87e1a562f364 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 4f38d24a-3458-4c59-8480-8094ffcbb5aa] Received event network-vif-deleted-d3a1fab4-7d4e-40cd-bdbb-b337196adbc5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:17:42 compute-1 sudo[306760]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:17:42 compute-1 sudo[306760]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:17:42 compute-1 sudo[306760]: pam_unix(sudo:session): session closed for user root
Jan 20 15:17:42 compute-1 sudo[306785]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:17:42 compute-1 sudo[306785]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:17:42 compute-1 sudo[306785]: pam_unix(sudo:session): session closed for user root
Jan 20 15:17:43 compute-1 nova_compute[225855]: 2026-01-20 15:17:43.024 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:17:43 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:17:43 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:17:43 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:17:43.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:17:43 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e415 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:17:43 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:17:43 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:17:43 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:17:43.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:17:43 compute-1 nova_compute[225855]: 2026-01-20 15:17:43.722 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:17:43 compute-1 ceph-mon[81775]: pgmap v2863: 321 pgs: 321 active+clean; 289 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 85 KiB/s rd, 18 KiB/s wr, 119 op/s
Jan 20 15:17:44 compute-1 ceph-mon[81775]: pgmap v2864: 321 pgs: 321 active+clean; 262 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 86 KiB/s rd, 19 KiB/s wr, 123 op/s
Jan 20 15:17:44 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/3826035582' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:17:45 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:17:45 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:17:45 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:17:45.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:17:45 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:17:45 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:17:45 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:17:45.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:17:46 compute-1 ceph-mon[81775]: pgmap v2865: 321 pgs: 321 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 78 KiB/s rd, 4.5 KiB/s wr, 112 op/s
Jan 20 15:17:46 compute-1 nova_compute[225855]: 2026-01-20 15:17:46.980 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:17:47 compute-1 podman[306813]: 2026-01-20 15:17:47.103962794 +0000 UTC m=+0.147610019 container health_status 72be95c12e041eb2cd1a16ad70dfa1391d9f03a5f3180a614ead9459dcfcd46f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 20 15:17:47 compute-1 nova_compute[225855]: 2026-01-20 15:17:47.105 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:17:47 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:17:47 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:17:47 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:17:47.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:17:47 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:17:47 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:17:47 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:17:47.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:17:47 compute-1 nova_compute[225855]: 2026-01-20 15:17:47.863 225859 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1768922252.8621945, f1ded131-d9a3-4e93-ad99-53ee2695d5c8 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:17:47 compute-1 nova_compute[225855]: 2026-01-20 15:17:47.864 225859 INFO nova.compute.manager [-] [instance: f1ded131-d9a3-4e93-ad99-53ee2695d5c8] VM Stopped (Lifecycle Event)
Jan 20 15:17:47 compute-1 nova_compute[225855]: 2026-01-20 15:17:47.895 225859 DEBUG nova.compute.manager [None req-6b3f2232-0298-4c86-98cf-d5f63d88e1a3 - - - - - -] [instance: f1ded131-d9a3-4e93-ad99-53ee2695d5c8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:17:48 compute-1 nova_compute[225855]: 2026-01-20 15:17:48.025 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:17:48 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e415 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:17:48 compute-1 nova_compute[225855]: 2026-01-20 15:17:48.725 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:17:48 compute-1 ceph-mon[81775]: pgmap v2866: 321 pgs: 321 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 68 KiB/s rd, 3.9 KiB/s wr, 96 op/s
Jan 20 15:17:49 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:17:49 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:17:49 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:17:49.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:17:49 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:17:49 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:17:49 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:17:49.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:17:50 compute-1 ceph-mon[81775]: pgmap v2867: 321 pgs: 321 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 66 KiB/s rd, 3.7 KiB/s wr, 94 op/s
Jan 20 15:17:51 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:17:51 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:17:51 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:17:51.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:17:51 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:17:51 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:17:51 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:17:51.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:17:52 compute-1 ceph-mon[81775]: pgmap v2868: 321 pgs: 321 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 50 KiB/s rd, 3.7 KiB/s wr, 70 op/s
Jan 20 15:17:53 compute-1 nova_compute[225855]: 2026-01-20 15:17:53.029 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:17:53 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:17:53 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:17:53 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:17:53.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:17:53 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e415 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:17:53 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:17:53 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:17:53 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:17:53.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:17:53 compute-1 nova_compute[225855]: 2026-01-20 15:17:53.649 225859 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1768922258.648655, 4f38d24a-3458-4c59-8480-8094ffcbb5aa => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:17:53 compute-1 nova_compute[225855]: 2026-01-20 15:17:53.650 225859 INFO nova.compute.manager [-] [instance: 4f38d24a-3458-4c59-8480-8094ffcbb5aa] VM Stopped (Lifecycle Event)
Jan 20 15:17:53 compute-1 nova_compute[225855]: 2026-01-20 15:17:53.667 225859 DEBUG nova.compute.manager [None req-5d729218-ed0d-4f2d-993e-61f024a9f9fa - - - - - -] [instance: 4f38d24a-3458-4c59-8480-8094ffcbb5aa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:17:53 compute-1 nova_compute[225855]: 2026-01-20 15:17:53.748 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:17:53 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/1775843392' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:17:54 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 20 15:17:54 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1456547554' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:17:54 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 20 15:17:54 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1456547554' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:17:54 compute-1 ceph-mon[81775]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 20 15:17:54 compute-1 ceph-mon[81775]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 4800.0 total, 600.0 interval
                                           Cumulative writes: 13K writes, 69K keys, 13K commit groups, 1.0 writes per commit group, ingest: 0.13 GB, 0.03 MB/s
                                           Cumulative WAL: 13K writes, 13K syncs, 1.00 writes per sync, written: 0.13 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1715 writes, 8442 keys, 1715 commit groups, 1.0 writes per commit group, ingest: 16.52 MB, 0.03 MB/s
                                           Interval WAL: 1715 writes, 1715 syncs, 1.00 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     79.4      1.05              0.28        43    0.024       0      0       0.0       0.0
                                             L6      1/0   11.13 MB   0.0      0.5     0.1      0.4       0.4      0.0       0.0   4.9    104.2     88.6      4.65              1.34        42    0.111    293K    22K       0.0       0.0
                                            Sum      1/0   11.13 MB   0.0      0.5     0.1      0.4       0.5      0.1       0.0   5.9     84.9     86.9      5.70              1.62        85    0.067    293K    22K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   7.6     98.5     98.6      0.79              0.26        12    0.066     56K   3149       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.5     0.1      0.4       0.4      0.0       0.0   0.0    104.2     88.6      4.65              1.34        42    0.111    293K    22K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     79.6      1.05              0.28        42    0.025       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.7      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 4800.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.082, interval 0.010
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.48 GB write, 0.10 MB/s write, 0.47 GB read, 0.10 MB/s read, 5.7 seconds
                                           Interval compaction: 0.08 GB write, 0.13 MB/s write, 0.08 GB read, 0.13 MB/s read, 0.8 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564d515a71f0#2 capacity: 304.00 MB usage: 54.47 MB table_size: 0 occupancy: 18446744073709551615 collections: 9 last_copies: 0 last_secs: 0.000531 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3135,52.31 MB,17.2083%) FilterBlock(85,821.23 KB,0.263811%) IndexBlock(85,1.35 MB,0.444026%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Jan 20 15:17:55 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/1456547554' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:17:55 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/1456547554' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:17:55 compute-1 ceph-mon[81775]: pgmap v2869: 321 pgs: 321 active+clean; 176 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 30 KiB/s rd, 2.1 KiB/s wr, 44 op/s
Jan 20 15:17:55 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:17:55 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:17:55 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:17:55.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:17:55 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:17:55 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:17:55 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:17:55.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:17:56 compute-1 nova_compute[225855]: 2026-01-20 15:17:56.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:17:56 compute-1 nova_compute[225855]: 2026-01-20 15:17:56.340 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 15:17:57 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:17:57 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:17:57 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:17:57.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:17:57 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:17:57 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:17:57 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:17:57.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:17:57 compute-1 ceph-mon[81775]: pgmap v2870: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 31 KiB/s rd, 2.0 KiB/s wr, 44 op/s
Jan 20 15:17:58 compute-1 nova_compute[225855]: 2026-01-20 15:17:58.032 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:17:58 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e415 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:17:58 compute-1 nova_compute[225855]: 2026-01-20 15:17:58.749 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:17:58 compute-1 ceph-mon[81775]: pgmap v2871: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 18 KiB/s rd, 852 B/s wr, 25 op/s
Jan 20 15:17:59 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:17:59 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:17:59 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:17:59.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:17:59 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:17:59 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:17:59 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:17:59.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:18:00 compute-1 nova_compute[225855]: 2026-01-20 15:18:00.340 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:18:00 compute-1 nova_compute[225855]: 2026-01-20 15:18:00.340 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 15:18:00 compute-1 nova_compute[225855]: 2026-01-20 15:18:00.341 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 20 15:18:00 compute-1 nova_compute[225855]: 2026-01-20 15:18:00.508 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 20 15:18:01 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:18:01 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:18:01 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:18:01.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:18:01 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:18:01 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:18:01 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:18:01.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:18:01 compute-1 ceph-mon[81775]: pgmap v2872: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 18 KiB/s rd, 852 B/s wr, 25 op/s
Jan 20 15:18:02 compute-1 nova_compute[225855]: 2026-01-20 15:18:02.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:18:02 compute-1 nova_compute[225855]: 2026-01-20 15:18:02.340 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:18:02 compute-1 sudo[306845]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:18:02 compute-1 sudo[306845]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:18:02 compute-1 sudo[306845]: pam_unix(sudo:session): session closed for user root
Jan 20 15:18:02 compute-1 sudo[306876]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:18:02 compute-1 sudo[306876]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:18:02 compute-1 sudo[306876]: pam_unix(sudo:session): session closed for user root
Jan 20 15:18:02 compute-1 podman[306869]: 2026-01-20 15:18:02.639908503 +0000 UTC m=+0.055378382 container health_status 533cd8e0c3b8b7f910f15cef49e7f879ab46c12ce646a35628c15f13725eb500 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 20 15:18:02 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/2471681010' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:18:02 compute-1 ceph-mon[81775]: pgmap v2873: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 18 KiB/s rd, 852 B/s wr, 24 op/s
Jan 20 15:18:03 compute-1 nova_compute[225855]: 2026-01-20 15:18:03.032 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:18:03 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:18:03 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:18:03 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:18:03.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:18:03 compute-1 nova_compute[225855]: 2026-01-20 15:18:03.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:18:03 compute-1 sudo[306913]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:18:03 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e415 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:18:03 compute-1 sudo[306913]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:18:03 compute-1 sudo[306913]: pam_unix(sudo:session): session closed for user root
Jan 20 15:18:03 compute-1 sudo[306938]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:18:03 compute-1 sudo[306938]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:18:03 compute-1 sudo[306938]: pam_unix(sudo:session): session closed for user root
Jan 20 15:18:03 compute-1 sudo[306963]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:18:03 compute-1 sudo[306963]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:18:03 compute-1 sudo[306963]: pam_unix(sudo:session): session closed for user root
Jan 20 15:18:03 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:18:03 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:18:03 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:18:03.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:18:03 compute-1 sudo[306988]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 20 15:18:03 compute-1 sudo[306988]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:18:03 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/1995102232' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:18:03 compute-1 nova_compute[225855]: 2026-01-20 15:18:03.751 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:18:03 compute-1 sudo[306988]: pam_unix(sudo:session): session closed for user root
Jan 20 15:18:04 compute-1 nova_compute[225855]: 2026-01-20 15:18:04.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:18:04 compute-1 nova_compute[225855]: 2026-01-20 15:18:04.340 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:18:04 compute-1 nova_compute[225855]: 2026-01-20 15:18:04.375 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:18:04 compute-1 nova_compute[225855]: 2026-01-20 15:18:04.376 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:18:04 compute-1 nova_compute[225855]: 2026-01-20 15:18:04.376 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:18:04 compute-1 nova_compute[225855]: 2026-01-20 15:18:04.376 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 15:18:04 compute-1 nova_compute[225855]: 2026-01-20 15:18:04.376 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:18:04 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:18:04 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 15:18:04 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:18:04 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 15:18:04 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 15:18:04 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:18:04 compute-1 ceph-mon[81775]: pgmap v2874: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 8.9 KiB/s rd, 255 B/s wr, 12 op/s
Jan 20 15:18:04 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:18:04 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/593386575' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:18:04 compute-1 nova_compute[225855]: 2026-01-20 15:18:04.841 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:18:04 compute-1 nova_compute[225855]: 2026-01-20 15:18:04.992 225859 WARNING nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 15:18:04 compute-1 nova_compute[225855]: 2026-01-20 15:18:04.994 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4249MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 15:18:04 compute-1 nova_compute[225855]: 2026-01-20 15:18:04.994 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:18:04 compute-1 nova_compute[225855]: 2026-01-20 15:18:04.994 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:18:05 compute-1 nova_compute[225855]: 2026-01-20 15:18:05.050 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 15:18:05 compute-1 nova_compute[225855]: 2026-01-20 15:18:05.050 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 15:18:05 compute-1 nova_compute[225855]: 2026-01-20 15:18:05.069 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:18:05 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:18:05 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:18:05 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:18:05.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:18:05 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:18:05 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:18:05 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:18:05.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:18:05 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:18:05 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3326096982' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:18:05 compute-1 nova_compute[225855]: 2026-01-20 15:18:05.582 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.513s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:18:05 compute-1 nova_compute[225855]: 2026-01-20 15:18:05.588 225859 DEBUG nova.compute.provider_tree [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:18:05 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/593386575' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:18:05 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/2904258732' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:18:05 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/3326096982' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:18:06 compute-1 ceph-mon[81775]: pgmap v2875: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 8.9 KiB/s rd, 255 B/s wr, 11 op/s
Jan 20 15:18:07 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:18:07 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:18:07 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:18:07.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:18:07 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:18:07 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:18:07 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:18:07.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:18:07 compute-1 nova_compute[225855]: 2026-01-20 15:18:07.571 225859 DEBUG nova.scheduler.client.report [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:18:07 compute-1 nova_compute[225855]: 2026-01-20 15:18:07.617 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 15:18:07 compute-1 nova_compute[225855]: 2026-01-20 15:18:07.618 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.624s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:18:08 compute-1 nova_compute[225855]: 2026-01-20 15:18:08.034 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:18:08 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/563328679' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:18:08 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e415 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:18:08 compute-1 nova_compute[225855]: 2026-01-20 15:18:08.788 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:18:09 compute-1 ceph-mon[81775]: pgmap v2876: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:18:09 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/83561485' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:18:09 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:18:09 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:18:09 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:18:09.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:18:09 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:18:09 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:18:09 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:18:09.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:18:09 compute-1 nova_compute[225855]: 2026-01-20 15:18:09.618 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:18:09 compute-1 nova_compute[225855]: 2026-01-20 15:18:09.619 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_shelved_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:18:10 compute-1 sudo[307093]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:18:10 compute-1 sudo[307093]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:18:10 compute-1 sudo[307093]: pam_unix(sudo:session): session closed for user root
Jan 20 15:18:10 compute-1 sudo[307118]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 15:18:10 compute-1 sudo[307118]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:18:10 compute-1 sudo[307118]: pam_unix(sudo:session): session closed for user root
Jan 20 15:18:10 compute-1 nova_compute[225855]: 2026-01-20 15:18:10.334 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:18:10 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:18:10 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:18:10 compute-1 ceph-mon[81775]: pgmap v2877: 321 pgs: 321 active+clean; 155 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.7 MiB/s wr, 25 op/s
Jan 20 15:18:11 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:18:11 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:18:11 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:18:11.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:18:11 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:18:11 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:18:11 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:18:11.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:18:12 compute-1 ceph-mon[81775]: pgmap v2878: 321 pgs: 321 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 20 15:18:13 compute-1 nova_compute[225855]: 2026-01-20 15:18:13.035 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:18:13 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:18:13 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:18:13 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:18:13.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:18:13 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e415 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:18:13 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:18:13 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:18:13 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:18:13.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:18:13 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/149369571' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:18:13 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/149369571' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:18:13 compute-1 nova_compute[225855]: 2026-01-20 15:18:13.789 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:18:14 compute-1 ceph-mon[81775]: pgmap v2879: 321 pgs: 321 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 20 15:18:15 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:18:15 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:18:15 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:18:15.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:18:15 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:18:15 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:18:15 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:18:15.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:18:15 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/2456508678' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:18:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:18:16.439 140354 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:18:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:18:16.440 140354 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:18:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:18:16.440 140354 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:18:16 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/594431206' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:18:16 compute-1 ceph-mon[81775]: pgmap v2880: 321 pgs: 321 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 20 15:18:17 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:18:17 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:18:17 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:18:17.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:18:17 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:18:17 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:18:17 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:18:17.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:18:18 compute-1 nova_compute[225855]: 2026-01-20 15:18:18.036 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:18:18 compute-1 podman[307147]: 2026-01-20 15:18:18.040717497 +0000 UTC m=+0.088863021 container health_status 72be95c12e041eb2cd1a16ad70dfa1391d9f03a5f3180a614ead9459dcfcd46f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller)
Jan 20 15:18:18 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e415 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:18:18 compute-1 ceph-mon[81775]: pgmap v2881: 321 pgs: 321 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 20 15:18:18 compute-1 nova_compute[225855]: 2026-01-20 15:18:18.791 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:18:19 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:18:19 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:18:19 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:18:19.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:18:19 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:18:19 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:18:19 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:18:19.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:18:20 compute-1 ceph-mon[81775]: pgmap v2882: 321 pgs: 321 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 924 KiB/s rd, 1.8 MiB/s wr, 59 op/s
Jan 20 15:18:21 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:18:21 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:18:21 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:18:21.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:18:21 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:18:21 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:18:21 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:18:21.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:18:22 compute-1 sudo[307175]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:18:22 compute-1 sudo[307175]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:18:22 compute-1 sudo[307175]: pam_unix(sudo:session): session closed for user root
Jan 20 15:18:22 compute-1 sudo[307200]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:18:22 compute-1 sudo[307200]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:18:22 compute-1 sudo[307200]: pam_unix(sudo:session): session closed for user root
Jan 20 15:18:22 compute-1 ceph-mon[81775]: pgmap v2883: 321 pgs: 321 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 92 KiB/s wr, 48 op/s
Jan 20 15:18:23 compute-1 nova_compute[225855]: 2026-01-20 15:18:23.038 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:18:23 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:18:23 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:18:23 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:18:23.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:18:23 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e415 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:18:23 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:18:23 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:18:23 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:18:23.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:18:23 compute-1 nova_compute[225855]: 2026-01-20 15:18:23.793 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:18:24 compute-1 ceph-mon[81775]: pgmap v2884: 321 pgs: 321 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 20 15:18:25 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:18:25 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:18:25 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:18:25.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:18:25 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:18:25 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:18:25 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:18:25.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:18:26 compute-1 ceph-mon[81775]: pgmap v2885: 321 pgs: 321 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 20 15:18:27 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:18:27 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:18:27 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:18:27.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:18:27 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:18:27 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:18:27 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:18:27.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:18:28 compute-1 nova_compute[225855]: 2026-01-20 15:18:28.040 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:18:28 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e415 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:18:28 compute-1 nova_compute[225855]: 2026-01-20 15:18:28.796 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:18:29 compute-1 ovn_controller[130490]: 2026-01-20T15:18:29Z|00862|memory_trim|INFO|Detected inactivity (last active 30004 ms ago): trimming memory
Jan 20 15:18:29 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:18:29 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:18:29 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:18:29.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:18:29 compute-1 ceph-mon[81775]: pgmap v2886: 321 pgs: 321 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 20 15:18:29 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:18:29 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:18:29 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:18:29.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:18:30 compute-1 ceph-mon[81775]: pgmap v2887: 321 pgs: 321 active+clean; 176 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1024 KiB/s wr, 84 op/s
Jan 20 15:18:31 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:18:31 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:18:31 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:18:31.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:18:31 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:18:31 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:18:31 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:18:31.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:18:31 compute-1 ceph-mon[81775]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #139. Immutable memtables: 0.
Jan 20 15:18:31 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:18:31.911669) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 20 15:18:31 compute-1 ceph-mon[81775]: rocksdb: [db/flush_job.cc:856] [default] [JOB 87] Flushing memtable with next log file: 139
Jan 20 15:18:31 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768922311911771, "job": 87, "event": "flush_started", "num_memtables": 1, "num_entries": 1434, "num_deletes": 258, "total_data_size": 3019222, "memory_usage": 3059816, "flush_reason": "Manual Compaction"}
Jan 20 15:18:31 compute-1 ceph-mon[81775]: rocksdb: [db/flush_job.cc:885] [default] [JOB 87] Level-0 flush table #140: started
Jan 20 15:18:31 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768922311923473, "cf_name": "default", "job": 87, "event": "table_file_creation", "file_number": 140, "file_size": 1979813, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 68816, "largest_seqno": 70244, "table_properties": {"data_size": 1973758, "index_size": 3257, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1733, "raw_key_size": 13517, "raw_average_key_size": 20, "raw_value_size": 1961240, "raw_average_value_size": 2905, "num_data_blocks": 144, "num_entries": 675, "num_filter_entries": 675, "num_deletions": 258, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768922204, "oldest_key_time": 1768922204, "file_creation_time": 1768922311, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1539d774-8a6f-4e48-b253-137c44586344", "db_session_id": "LFF7G2OZDOU7TKQ8MKAH", "orig_file_number": 140, "seqno_to_time_mapping": "N/A"}}
Jan 20 15:18:31 compute-1 ceph-mon[81775]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 87] Flush lasted 11835 microseconds, and 4697 cpu microseconds.
Jan 20 15:18:31 compute-1 ceph-mon[81775]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 15:18:31 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:18:31.923503) [db/flush_job.cc:967] [default] [JOB 87] Level-0 flush table #140: 1979813 bytes OK
Jan 20 15:18:31 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:18:31.923519) [db/memtable_list.cc:519] [default] Level-0 commit table #140 started
Jan 20 15:18:31 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:18:31.924423) [db/memtable_list.cc:722] [default] Level-0 commit table #140: memtable #1 done
Jan 20 15:18:31 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:18:31.924434) EVENT_LOG_v1 {"time_micros": 1768922311924430, "job": 87, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 20 15:18:31 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:18:31.924450) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 20 15:18:31 compute-1 ceph-mon[81775]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 87] Try to delete WAL files size 3012430, prev total WAL file size 3012430, number of live WAL files 2.
Jan 20 15:18:31 compute-1 ceph-mon[81775]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000136.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 15:18:31 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:18:31.925191) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032353230' seq:72057594037927935, type:22 .. '6C6F676D0032373732' seq:0, type:0; will stop at (end)
Jan 20 15:18:31 compute-1 ceph-mon[81775]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 88] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 20 15:18:31 compute-1 ceph-mon[81775]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 87 Base level 0, inputs: [140(1933KB)], [138(11MB)]
Jan 20 15:18:31 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768922311925221, "job": 88, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [140], "files_L6": [138], "score": -1, "input_data_size": 13653042, "oldest_snapshot_seqno": -1}
Jan 20 15:18:31 compute-1 ceph-mon[81775]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 88] Generated table #141: 9397 keys, 13514620 bytes, temperature: kUnknown
Jan 20 15:18:31 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768922311995958, "cf_name": "default", "job": 88, "event": "table_file_creation", "file_number": 141, "file_size": 13514620, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13451444, "index_size": 38551, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 23557, "raw_key_size": 247143, "raw_average_key_size": 26, "raw_value_size": 13284059, "raw_average_value_size": 1413, "num_data_blocks": 1478, "num_entries": 9397, "num_filter_entries": 9397, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768917474, "oldest_key_time": 0, "file_creation_time": 1768922311, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1539d774-8a6f-4e48-b253-137c44586344", "db_session_id": "LFF7G2OZDOU7TKQ8MKAH", "orig_file_number": 141, "seqno_to_time_mapping": "N/A"}}
Jan 20 15:18:31 compute-1 ceph-mon[81775]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 15:18:32 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:18:31.996227) [db/compaction/compaction_job.cc:1663] [default] [JOB 88] Compacted 1@0 + 1@6 files to L6 => 13514620 bytes
Jan 20 15:18:32 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:18:32.002105) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 192.8 rd, 190.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.9, 11.1 +0.0 blob) out(12.9 +0.0 blob), read-write-amplify(13.7) write-amplify(6.8) OK, records in: 9930, records dropped: 533 output_compression: NoCompression
Jan 20 15:18:32 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:18:32.002145) EVENT_LOG_v1 {"time_micros": 1768922312002132, "job": 88, "event": "compaction_finished", "compaction_time_micros": 70816, "compaction_time_cpu_micros": 30981, "output_level": 6, "num_output_files": 1, "total_output_size": 13514620, "num_input_records": 9930, "num_output_records": 9397, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 20 15:18:32 compute-1 ceph-mon[81775]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000140.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 15:18:32 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768922312002780, "job": 88, "event": "table_file_deletion", "file_number": 140}
Jan 20 15:18:32 compute-1 ceph-mon[81775]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000138.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 15:18:32 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768922312005203, "job": 88, "event": "table_file_deletion", "file_number": 138}
Jan 20 15:18:32 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:18:31.925123) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:18:32 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:18:32.005233) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:18:32 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:18:32.005237) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:18:32 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:18:32.005238) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:18:32 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:18:32.005240) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:18:32 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:18:32.005241) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:18:32 compute-1 ceph-mon[81775]: pgmap v2888: 321 pgs: 321 active+clean; 182 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 1.4 MiB/s wr, 65 op/s
Jan 20 15:18:33 compute-1 podman[307230]: 2026-01-20 15:18:33.003636927 +0000 UTC m=+0.051494672 container health_status 533cd8e0c3b8b7f910f15cef49e7f879ab46c12ce646a35628c15f13725eb500 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Jan 20 15:18:33 compute-1 nova_compute[225855]: 2026-01-20 15:18:33.042 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:18:33 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:18:33 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:18:33 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:18:33.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:18:33 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e415 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:18:33 compute-1 nova_compute[225855]: 2026-01-20 15:18:33.416 225859 DEBUG oslo_concurrency.lockutils [None req-bf228ae6-1931-4ebb-8ef0-3778a1908505 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Acquiring lock "2b31f3d7-81bd-4712-bcb1-98afd2dc0f44" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:18:33 compute-1 nova_compute[225855]: 2026-01-20 15:18:33.417 225859 DEBUG oslo_concurrency.lockutils [None req-bf228ae6-1931-4ebb-8ef0-3778a1908505 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Lock "2b31f3d7-81bd-4712-bcb1-98afd2dc0f44" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:18:33 compute-1 nova_compute[225855]: 2026-01-20 15:18:33.436 225859 DEBUG nova.compute.manager [None req-bf228ae6-1931-4ebb-8ef0-3778a1908505 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] [instance: 2b31f3d7-81bd-4712-bcb1-98afd2dc0f44] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 20 15:18:33 compute-1 nova_compute[225855]: 2026-01-20 15:18:33.512 225859 DEBUG oslo_concurrency.lockutils [None req-bf228ae6-1931-4ebb-8ef0-3778a1908505 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:18:33 compute-1 nova_compute[225855]: 2026-01-20 15:18:33.513 225859 DEBUG oslo_concurrency.lockutils [None req-bf228ae6-1931-4ebb-8ef0-3778a1908505 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:18:33 compute-1 nova_compute[225855]: 2026-01-20 15:18:33.533 225859 DEBUG nova.virt.hardware [None req-bf228ae6-1931-4ebb-8ef0-3778a1908505 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 20 15:18:33 compute-1 nova_compute[225855]: 2026-01-20 15:18:33.534 225859 INFO nova.compute.claims [None req-bf228ae6-1931-4ebb-8ef0-3778a1908505 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] [instance: 2b31f3d7-81bd-4712-bcb1-98afd2dc0f44] Claim successful on node compute-1.ctlplane.example.com
Jan 20 15:18:33 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:18:33 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:18:33 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:18:33.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:18:33 compute-1 nova_compute[225855]: 2026-01-20 15:18:33.670 225859 DEBUG oslo_concurrency.processutils [None req-bf228ae6-1931-4ebb-8ef0-3778a1908505 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:18:33 compute-1 nova_compute[225855]: 2026-01-20 15:18:33.797 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:18:34 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:18:34 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2134992538' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:18:34 compute-1 nova_compute[225855]: 2026-01-20 15:18:34.096 225859 DEBUG oslo_concurrency.processutils [None req-bf228ae6-1931-4ebb-8ef0-3778a1908505 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:18:34 compute-1 nova_compute[225855]: 2026-01-20 15:18:34.101 225859 DEBUG nova.compute.provider_tree [None req-bf228ae6-1931-4ebb-8ef0-3778a1908505 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:18:34 compute-1 nova_compute[225855]: 2026-01-20 15:18:34.126 225859 DEBUG nova.scheduler.client.report [None req-bf228ae6-1931-4ebb-8ef0-3778a1908505 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:18:34 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/2134992538' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:18:34 compute-1 nova_compute[225855]: 2026-01-20 15:18:34.147 225859 DEBUG oslo_concurrency.lockutils [None req-bf228ae6-1931-4ebb-8ef0-3778a1908505 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.634s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:18:34 compute-1 nova_compute[225855]: 2026-01-20 15:18:34.148 225859 DEBUG nova.compute.manager [None req-bf228ae6-1931-4ebb-8ef0-3778a1908505 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] [instance: 2b31f3d7-81bd-4712-bcb1-98afd2dc0f44] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 20 15:18:34 compute-1 nova_compute[225855]: 2026-01-20 15:18:34.218 225859 DEBUG nova.compute.manager [None req-bf228ae6-1931-4ebb-8ef0-3778a1908505 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] [instance: 2b31f3d7-81bd-4712-bcb1-98afd2dc0f44] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 20 15:18:34 compute-1 nova_compute[225855]: 2026-01-20 15:18:34.219 225859 DEBUG nova.network.neutron [None req-bf228ae6-1931-4ebb-8ef0-3778a1908505 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] [instance: 2b31f3d7-81bd-4712-bcb1-98afd2dc0f44] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 20 15:18:34 compute-1 nova_compute[225855]: 2026-01-20 15:18:34.268 225859 INFO nova.virt.libvirt.driver [None req-bf228ae6-1931-4ebb-8ef0-3778a1908505 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] [instance: 2b31f3d7-81bd-4712-bcb1-98afd2dc0f44] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 20 15:18:34 compute-1 nova_compute[225855]: 2026-01-20 15:18:34.289 225859 DEBUG nova.compute.manager [None req-bf228ae6-1931-4ebb-8ef0-3778a1908505 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] [instance: 2b31f3d7-81bd-4712-bcb1-98afd2dc0f44] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 20 15:18:34 compute-1 nova_compute[225855]: 2026-01-20 15:18:34.434 225859 DEBUG nova.compute.manager [None req-bf228ae6-1931-4ebb-8ef0-3778a1908505 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] [instance: 2b31f3d7-81bd-4712-bcb1-98afd2dc0f44] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 20 15:18:34 compute-1 nova_compute[225855]: 2026-01-20 15:18:34.436 225859 DEBUG nova.virt.libvirt.driver [None req-bf228ae6-1931-4ebb-8ef0-3778a1908505 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] [instance: 2b31f3d7-81bd-4712-bcb1-98afd2dc0f44] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 20 15:18:34 compute-1 nova_compute[225855]: 2026-01-20 15:18:34.436 225859 INFO nova.virt.libvirt.driver [None req-bf228ae6-1931-4ebb-8ef0-3778a1908505 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] [instance: 2b31f3d7-81bd-4712-bcb1-98afd2dc0f44] Creating image(s)
Jan 20 15:18:34 compute-1 nova_compute[225855]: 2026-01-20 15:18:34.731 225859 DEBUG nova.storage.rbd_utils [None req-bf228ae6-1931-4ebb-8ef0-3778a1908505 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] rbd image 2b31f3d7-81bd-4712-bcb1-98afd2dc0f44_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:18:34 compute-1 nova_compute[225855]: 2026-01-20 15:18:34.758 225859 DEBUG nova.storage.rbd_utils [None req-bf228ae6-1931-4ebb-8ef0-3778a1908505 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] rbd image 2b31f3d7-81bd-4712-bcb1-98afd2dc0f44_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:18:34 compute-1 nova_compute[225855]: 2026-01-20 15:18:34.786 225859 DEBUG nova.storage.rbd_utils [None req-bf228ae6-1931-4ebb-8ef0-3778a1908505 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] rbd image 2b31f3d7-81bd-4712-bcb1-98afd2dc0f44_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:18:34 compute-1 nova_compute[225855]: 2026-01-20 15:18:34.789 225859 DEBUG oslo_concurrency.processutils [None req-bf228ae6-1931-4ebb-8ef0-3778a1908505 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:18:34 compute-1 nova_compute[225855]: 2026-01-20 15:18:34.821 225859 DEBUG nova.policy [None req-bf228ae6-1931-4ebb-8ef0-3778a1908505 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'cd9a8f26b71f4631a387e555e6b18428', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '9156c0a9920c4721843416b9a44404f9', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 20 15:18:34 compute-1 nova_compute[225855]: 2026-01-20 15:18:34.857 225859 DEBUG oslo_concurrency.processutils [None req-bf228ae6-1931-4ebb-8ef0-3778a1908505 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:18:34 compute-1 nova_compute[225855]: 2026-01-20 15:18:34.858 225859 DEBUG oslo_concurrency.lockutils [None req-bf228ae6-1931-4ebb-8ef0-3778a1908505 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Acquiring lock "82d5c1918fd7c974214c7a48c1793a7a82560462" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:18:34 compute-1 nova_compute[225855]: 2026-01-20 15:18:34.859 225859 DEBUG oslo_concurrency.lockutils [None req-bf228ae6-1931-4ebb-8ef0-3778a1908505 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:18:34 compute-1 nova_compute[225855]: 2026-01-20 15:18:34.859 225859 DEBUG oslo_concurrency.lockutils [None req-bf228ae6-1931-4ebb-8ef0-3778a1908505 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:18:34 compute-1 nova_compute[225855]: 2026-01-20 15:18:34.887 225859 DEBUG nova.storage.rbd_utils [None req-bf228ae6-1931-4ebb-8ef0-3778a1908505 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] rbd image 2b31f3d7-81bd-4712-bcb1-98afd2dc0f44_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:18:35 compute-1 nova_compute[225855]: 2026-01-20 15:18:34.999 225859 DEBUG oslo_concurrency.processutils [None req-bf228ae6-1931-4ebb-8ef0-3778a1908505 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 2b31f3d7-81bd-4712-bcb1-98afd2dc0f44_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:18:35 compute-1 ceph-mon[81775]: pgmap v2889: 321 pgs: 321 active+clean; 190 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 2.1 MiB/s wr, 72 op/s
Jan 20 15:18:35 compute-1 nova_compute[225855]: 2026-01-20 15:18:35.261 225859 DEBUG oslo_concurrency.processutils [None req-bf228ae6-1931-4ebb-8ef0-3778a1908505 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 2b31f3d7-81bd-4712-bcb1-98afd2dc0f44_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.262s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:18:35 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:18:35 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:18:35 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:18:35.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:18:35 compute-1 nova_compute[225855]: 2026-01-20 15:18:35.316 225859 DEBUG nova.storage.rbd_utils [None req-bf228ae6-1931-4ebb-8ef0-3778a1908505 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] resizing rbd image 2b31f3d7-81bd-4712-bcb1-98afd2dc0f44_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 20 15:18:35 compute-1 nova_compute[225855]: 2026-01-20 15:18:35.395 225859 DEBUG nova.objects.instance [None req-bf228ae6-1931-4ebb-8ef0-3778a1908505 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Lazy-loading 'migration_context' on Instance uuid 2b31f3d7-81bd-4712-bcb1-98afd2dc0f44 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:18:35 compute-1 nova_compute[225855]: 2026-01-20 15:18:35.419 225859 DEBUG nova.virt.libvirt.driver [None req-bf228ae6-1931-4ebb-8ef0-3778a1908505 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] [instance: 2b31f3d7-81bd-4712-bcb1-98afd2dc0f44] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 20 15:18:35 compute-1 nova_compute[225855]: 2026-01-20 15:18:35.419 225859 DEBUG nova.virt.libvirt.driver [None req-bf228ae6-1931-4ebb-8ef0-3778a1908505 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] [instance: 2b31f3d7-81bd-4712-bcb1-98afd2dc0f44] Ensure instance console log exists: /var/lib/nova/instances/2b31f3d7-81bd-4712-bcb1-98afd2dc0f44/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 20 15:18:35 compute-1 nova_compute[225855]: 2026-01-20 15:18:35.420 225859 DEBUG oslo_concurrency.lockutils [None req-bf228ae6-1931-4ebb-8ef0-3778a1908505 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:18:35 compute-1 nova_compute[225855]: 2026-01-20 15:18:35.420 225859 DEBUG oslo_concurrency.lockutils [None req-bf228ae6-1931-4ebb-8ef0-3778a1908505 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:18:35 compute-1 nova_compute[225855]: 2026-01-20 15:18:35.420 225859 DEBUG oslo_concurrency.lockutils [None req-bf228ae6-1931-4ebb-8ef0-3778a1908505 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:18:35 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:18:35 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:18:35 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:18:35.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:18:36 compute-1 ceph-mon[81775]: pgmap v2890: 321 pgs: 321 active+clean; 208 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 343 KiB/s rd, 2.3 MiB/s wr, 86 op/s
Jan 20 15:18:36 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:18:36.916 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=64, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '12:bb:42', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '06:92:24:f7:15:56'}, ipsec=False) old=SB_Global(nb_cfg=63) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:18:36 compute-1 nova_compute[225855]: 2026-01-20 15:18:36.916 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:18:36 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:18:36.917 140354 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 20 15:18:37 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:18:37 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:18:37 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:18:37.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:18:37 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:18:37 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:18:37 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:18:37.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:18:37 compute-1 nova_compute[225855]: 2026-01-20 15:18:37.886 225859 DEBUG nova.network.neutron [None req-bf228ae6-1931-4ebb-8ef0-3778a1908505 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] [instance: 2b31f3d7-81bd-4712-bcb1-98afd2dc0f44] Successfully created port: bd99d3a5-54e0-4e70-9a02-3543631281a6 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 20 15:18:37 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:18:37.919 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5ffd4ac3-9266-4927-98ad-20a17782c725, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '64'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:18:38 compute-1 nova_compute[225855]: 2026-01-20 15:18:38.044 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:18:38 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e415 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:18:38 compute-1 ceph-mon[81775]: pgmap v2891: 321 pgs: 321 active+clean; 208 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 343 KiB/s rd, 2.3 MiB/s wr, 86 op/s
Jan 20 15:18:38 compute-1 nova_compute[225855]: 2026-01-20 15:18:38.799 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:18:39 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:18:39 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:18:39 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:18:39.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:18:39 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:18:39 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:18:39 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:18:39.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:18:39 compute-1 nova_compute[225855]: 2026-01-20 15:18:39.591 225859 DEBUG nova.network.neutron [None req-bf228ae6-1931-4ebb-8ef0-3778a1908505 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] [instance: 2b31f3d7-81bd-4712-bcb1-98afd2dc0f44] Successfully updated port: bd99d3a5-54e0-4e70-9a02-3543631281a6 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 20 15:18:39 compute-1 nova_compute[225855]: 2026-01-20 15:18:39.615 225859 DEBUG oslo_concurrency.lockutils [None req-bf228ae6-1931-4ebb-8ef0-3778a1908505 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Acquiring lock "refresh_cache-2b31f3d7-81bd-4712-bcb1-98afd2dc0f44" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:18:39 compute-1 nova_compute[225855]: 2026-01-20 15:18:39.615 225859 DEBUG oslo_concurrency.lockutils [None req-bf228ae6-1931-4ebb-8ef0-3778a1908505 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Acquired lock "refresh_cache-2b31f3d7-81bd-4712-bcb1-98afd2dc0f44" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:18:39 compute-1 nova_compute[225855]: 2026-01-20 15:18:39.615 225859 DEBUG nova.network.neutron [None req-bf228ae6-1931-4ebb-8ef0-3778a1908505 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] [instance: 2b31f3d7-81bd-4712-bcb1-98afd2dc0f44] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 15:18:39 compute-1 nova_compute[225855]: 2026-01-20 15:18:39.705 225859 DEBUG nova.compute.manager [req-51605533-b7f0-4242-9d13-054db74dd614 req-24ae14e8-fdfd-494a-9899-0c18fd537a5c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 2b31f3d7-81bd-4712-bcb1-98afd2dc0f44] Received event network-changed-bd99d3a5-54e0-4e70-9a02-3543631281a6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:18:39 compute-1 nova_compute[225855]: 2026-01-20 15:18:39.706 225859 DEBUG nova.compute.manager [req-51605533-b7f0-4242-9d13-054db74dd614 req-24ae14e8-fdfd-494a-9899-0c18fd537a5c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 2b31f3d7-81bd-4712-bcb1-98afd2dc0f44] Refreshing instance network info cache due to event network-changed-bd99d3a5-54e0-4e70-9a02-3543631281a6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 15:18:39 compute-1 nova_compute[225855]: 2026-01-20 15:18:39.706 225859 DEBUG oslo_concurrency.lockutils [req-51605533-b7f0-4242-9d13-054db74dd614 req-24ae14e8-fdfd-494a-9899-0c18fd537a5c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-2b31f3d7-81bd-4712-bcb1-98afd2dc0f44" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:18:39 compute-1 nova_compute[225855]: 2026-01-20 15:18:39.928 225859 DEBUG nova.network.neutron [None req-bf228ae6-1931-4ebb-8ef0-3778a1908505 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] [instance: 2b31f3d7-81bd-4712-bcb1-98afd2dc0f44] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 20 15:18:40 compute-1 ceph-mon[81775]: pgmap v2892: 321 pgs: 321 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 344 KiB/s rd, 3.9 MiB/s wr, 90 op/s
Jan 20 15:18:41 compute-1 nova_compute[225855]: 2026-01-20 15:18:41.123 225859 DEBUG nova.network.neutron [None req-bf228ae6-1931-4ebb-8ef0-3778a1908505 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] [instance: 2b31f3d7-81bd-4712-bcb1-98afd2dc0f44] Updating instance_info_cache with network_info: [{"id": "bd99d3a5-54e0-4e70-9a02-3543631281a6", "address": "fa:16:3e:44:23:fd", "network": {"id": "76c2d716-7d14-4bc1-b83b-a3290ee99d9a", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-782760714-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9156c0a9920c4721843416b9a44404f9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbd99d3a5-54", "ovs_interfaceid": "bd99d3a5-54e0-4e70-9a02-3543631281a6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:18:41 compute-1 nova_compute[225855]: 2026-01-20 15:18:41.153 225859 DEBUG oslo_concurrency.lockutils [None req-bf228ae6-1931-4ebb-8ef0-3778a1908505 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Releasing lock "refresh_cache-2b31f3d7-81bd-4712-bcb1-98afd2dc0f44" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:18:41 compute-1 nova_compute[225855]: 2026-01-20 15:18:41.153 225859 DEBUG nova.compute.manager [None req-bf228ae6-1931-4ebb-8ef0-3778a1908505 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] [instance: 2b31f3d7-81bd-4712-bcb1-98afd2dc0f44] Instance network_info: |[{"id": "bd99d3a5-54e0-4e70-9a02-3543631281a6", "address": "fa:16:3e:44:23:fd", "network": {"id": "76c2d716-7d14-4bc1-b83b-a3290ee99d9a", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-782760714-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9156c0a9920c4721843416b9a44404f9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbd99d3a5-54", "ovs_interfaceid": "bd99d3a5-54e0-4e70-9a02-3543631281a6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 20 15:18:41 compute-1 nova_compute[225855]: 2026-01-20 15:18:41.153 225859 DEBUG oslo_concurrency.lockutils [req-51605533-b7f0-4242-9d13-054db74dd614 req-24ae14e8-fdfd-494a-9899-0c18fd537a5c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-2b31f3d7-81bd-4712-bcb1-98afd2dc0f44" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:18:41 compute-1 nova_compute[225855]: 2026-01-20 15:18:41.154 225859 DEBUG nova.network.neutron [req-51605533-b7f0-4242-9d13-054db74dd614 req-24ae14e8-fdfd-494a-9899-0c18fd537a5c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 2b31f3d7-81bd-4712-bcb1-98afd2dc0f44] Refreshing network info cache for port bd99d3a5-54e0-4e70-9a02-3543631281a6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 15:18:41 compute-1 nova_compute[225855]: 2026-01-20 15:18:41.156 225859 DEBUG nova.virt.libvirt.driver [None req-bf228ae6-1931-4ebb-8ef0-3778a1908505 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] [instance: 2b31f3d7-81bd-4712-bcb1-98afd2dc0f44] Start _get_guest_xml network_info=[{"id": "bd99d3a5-54e0-4e70-9a02-3543631281a6", "address": "fa:16:3e:44:23:fd", "network": {"id": "76c2d716-7d14-4bc1-b83b-a3290ee99d9a", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-782760714-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9156c0a9920c4721843416b9a44404f9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbd99d3a5-54", "ovs_interfaceid": "bd99d3a5-54e0-4e70-9a02-3543631281a6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'device_type': 'disk', 'encryption_options': None, 'size': 0, 'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'guest_format': None, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'image_id': 'a32b3e07-16d8-46fd-9a7b-c242c432fcf9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 20 15:18:41 compute-1 nova_compute[225855]: 2026-01-20 15:18:41.161 225859 WARNING nova.virt.libvirt.driver [None req-bf228ae6-1931-4ebb-8ef0-3778a1908505 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 15:18:41 compute-1 nova_compute[225855]: 2026-01-20 15:18:41.166 225859 DEBUG nova.virt.libvirt.host [None req-bf228ae6-1931-4ebb-8ef0-3778a1908505 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 20 15:18:41 compute-1 nova_compute[225855]: 2026-01-20 15:18:41.166 225859 DEBUG nova.virt.libvirt.host [None req-bf228ae6-1931-4ebb-8ef0-3778a1908505 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 20 15:18:41 compute-1 nova_compute[225855]: 2026-01-20 15:18:41.169 225859 DEBUG nova.virt.libvirt.host [None req-bf228ae6-1931-4ebb-8ef0-3778a1908505 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 20 15:18:41 compute-1 nova_compute[225855]: 2026-01-20 15:18:41.170 225859 DEBUG nova.virt.libvirt.host [None req-bf228ae6-1931-4ebb-8ef0-3778a1908505 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 20 15:18:41 compute-1 nova_compute[225855]: 2026-01-20 15:18:41.171 225859 DEBUG nova.virt.libvirt.driver [None req-bf228ae6-1931-4ebb-8ef0-3778a1908505 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 20 15:18:41 compute-1 nova_compute[225855]: 2026-01-20 15:18:41.172 225859 DEBUG nova.virt.hardware [None req-bf228ae6-1931-4ebb-8ef0-3778a1908505 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-20T14:21:55Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='522deaab-a741-4dbb-932d-d8b13a211c33',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 20 15:18:41 compute-1 nova_compute[225855]: 2026-01-20 15:18:41.172 225859 DEBUG nova.virt.hardware [None req-bf228ae6-1931-4ebb-8ef0-3778a1908505 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 20 15:18:41 compute-1 nova_compute[225855]: 2026-01-20 15:18:41.172 225859 DEBUG nova.virt.hardware [None req-bf228ae6-1931-4ebb-8ef0-3778a1908505 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 20 15:18:41 compute-1 nova_compute[225855]: 2026-01-20 15:18:41.172 225859 DEBUG nova.virt.hardware [None req-bf228ae6-1931-4ebb-8ef0-3778a1908505 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 20 15:18:41 compute-1 nova_compute[225855]: 2026-01-20 15:18:41.173 225859 DEBUG nova.virt.hardware [None req-bf228ae6-1931-4ebb-8ef0-3778a1908505 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 20 15:18:41 compute-1 nova_compute[225855]: 2026-01-20 15:18:41.173 225859 DEBUG nova.virt.hardware [None req-bf228ae6-1931-4ebb-8ef0-3778a1908505 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 20 15:18:41 compute-1 nova_compute[225855]: 2026-01-20 15:18:41.173 225859 DEBUG nova.virt.hardware [None req-bf228ae6-1931-4ebb-8ef0-3778a1908505 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 20 15:18:41 compute-1 nova_compute[225855]: 2026-01-20 15:18:41.173 225859 DEBUG nova.virt.hardware [None req-bf228ae6-1931-4ebb-8ef0-3778a1908505 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 20 15:18:41 compute-1 nova_compute[225855]: 2026-01-20 15:18:41.174 225859 DEBUG nova.virt.hardware [None req-bf228ae6-1931-4ebb-8ef0-3778a1908505 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 20 15:18:41 compute-1 nova_compute[225855]: 2026-01-20 15:18:41.174 225859 DEBUG nova.virt.hardware [None req-bf228ae6-1931-4ebb-8ef0-3778a1908505 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 20 15:18:41 compute-1 nova_compute[225855]: 2026-01-20 15:18:41.174 225859 DEBUG nova.virt.hardware [None req-bf228ae6-1931-4ebb-8ef0-3778a1908505 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 20 15:18:41 compute-1 nova_compute[225855]: 2026-01-20 15:18:41.177 225859 DEBUG oslo_concurrency.processutils [None req-bf228ae6-1931-4ebb-8ef0-3778a1908505 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:18:41 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:18:41 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:18:41 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:18:41.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:18:41 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:18:41 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:18:41 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:18:41.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:18:41 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 15:18:41 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2986932180' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:18:41 compute-1 nova_compute[225855]: 2026-01-20 15:18:41.591 225859 DEBUG oslo_concurrency.processutils [None req-bf228ae6-1931-4ebb-8ef0-3778a1908505 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.414s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:18:41 compute-1 nova_compute[225855]: 2026-01-20 15:18:41.618 225859 DEBUG nova.storage.rbd_utils [None req-bf228ae6-1931-4ebb-8ef0-3778a1908505 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] rbd image 2b31f3d7-81bd-4712-bcb1-98afd2dc0f44_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:18:41 compute-1 nova_compute[225855]: 2026-01-20 15:18:41.623 225859 DEBUG oslo_concurrency.processutils [None req-bf228ae6-1931-4ebb-8ef0-3778a1908505 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:18:41 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/2986932180' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:18:42 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 15:18:42 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2338561239' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:18:42 compute-1 nova_compute[225855]: 2026-01-20 15:18:42.050 225859 DEBUG oslo_concurrency.processutils [None req-bf228ae6-1931-4ebb-8ef0-3778a1908505 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:18:42 compute-1 nova_compute[225855]: 2026-01-20 15:18:42.052 225859 DEBUG nova.virt.libvirt.vif [None req-bf228ae6-1931-4ebb-8ef0-3778a1908505 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T15:18:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachVolumeNegativeTest-server-1816292644',display_name='tempest-AttachVolumeNegativeTest-server-1816292644',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-attachvolumenegativetest-server-1816292644',id=194,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGw8qLSULQliDvwKbeTpbt4J6LHSzE0FcgbAkXI3mB449DNtZV5vtYZtKqW3qflHvvMvcmL7nd1rBiXHEUgRgW71fE/QnzR597lXioriSvOlFWdxXwkMYduhCAWqw/sG6A==',key_name='tempest-keypair-235858913',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9156c0a9920c4721843416b9a44404f9',ramdisk_id='',reservation_id='r-acjnzwzw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachVolumeNegativeTest-1505789262',owner_user_name='tempest-AttachVolumeNegativeTest-1505789262-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T15:18:34Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='cd9a8f26b71f4631a387e555e6b18428',uuid=2b31f3d7-81bd-4712-bcb1-98afd2dc0f44,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bd99d3a5-54e0-4e70-9a02-3543631281a6", "address": "fa:16:3e:44:23:fd", "network": {"id": "76c2d716-7d14-4bc1-b83b-a3290ee99d9a", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-782760714-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9156c0a9920c4721843416b9a44404f9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbd99d3a5-54", "ovs_interfaceid": "bd99d3a5-54e0-4e70-9a02-3543631281a6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 20 15:18:42 compute-1 nova_compute[225855]: 2026-01-20 15:18:42.052 225859 DEBUG nova.network.os_vif_util [None req-bf228ae6-1931-4ebb-8ef0-3778a1908505 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Converting VIF {"id": "bd99d3a5-54e0-4e70-9a02-3543631281a6", "address": "fa:16:3e:44:23:fd", "network": {"id": "76c2d716-7d14-4bc1-b83b-a3290ee99d9a", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-782760714-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9156c0a9920c4721843416b9a44404f9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbd99d3a5-54", "ovs_interfaceid": "bd99d3a5-54e0-4e70-9a02-3543631281a6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 15:18:42 compute-1 nova_compute[225855]: 2026-01-20 15:18:42.053 225859 DEBUG nova.network.os_vif_util [None req-bf228ae6-1931-4ebb-8ef0-3778a1908505 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:44:23:fd,bridge_name='br-int',has_traffic_filtering=True,id=bd99d3a5-54e0-4e70-9a02-3543631281a6,network=Network(76c2d716-7d14-4bc1-b83b-a3290ee99d9a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbd99d3a5-54') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 15:18:42 compute-1 nova_compute[225855]: 2026-01-20 15:18:42.054 225859 DEBUG nova.objects.instance [None req-bf228ae6-1931-4ebb-8ef0-3778a1908505 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Lazy-loading 'pci_devices' on Instance uuid 2b31f3d7-81bd-4712-bcb1-98afd2dc0f44 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:18:42 compute-1 nova_compute[225855]: 2026-01-20 15:18:42.072 225859 DEBUG nova.virt.libvirt.driver [None req-bf228ae6-1931-4ebb-8ef0-3778a1908505 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] [instance: 2b31f3d7-81bd-4712-bcb1-98afd2dc0f44] End _get_guest_xml xml=<domain type="kvm">
Jan 20 15:18:42 compute-1 nova_compute[225855]:   <uuid>2b31f3d7-81bd-4712-bcb1-98afd2dc0f44</uuid>
Jan 20 15:18:42 compute-1 nova_compute[225855]:   <name>instance-000000c2</name>
Jan 20 15:18:42 compute-1 nova_compute[225855]:   <memory>131072</memory>
Jan 20 15:18:42 compute-1 nova_compute[225855]:   <vcpu>1</vcpu>
Jan 20 15:18:42 compute-1 nova_compute[225855]:   <metadata>
Jan 20 15:18:42 compute-1 nova_compute[225855]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 15:18:42 compute-1 nova_compute[225855]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 15:18:42 compute-1 nova_compute[225855]:       <nova:name>tempest-AttachVolumeNegativeTest-server-1816292644</nova:name>
Jan 20 15:18:42 compute-1 nova_compute[225855]:       <nova:creationTime>2026-01-20 15:18:41</nova:creationTime>
Jan 20 15:18:42 compute-1 nova_compute[225855]:       <nova:flavor name="m1.nano">
Jan 20 15:18:42 compute-1 nova_compute[225855]:         <nova:memory>128</nova:memory>
Jan 20 15:18:42 compute-1 nova_compute[225855]:         <nova:disk>1</nova:disk>
Jan 20 15:18:42 compute-1 nova_compute[225855]:         <nova:swap>0</nova:swap>
Jan 20 15:18:42 compute-1 nova_compute[225855]:         <nova:ephemeral>0</nova:ephemeral>
Jan 20 15:18:42 compute-1 nova_compute[225855]:         <nova:vcpus>1</nova:vcpus>
Jan 20 15:18:42 compute-1 nova_compute[225855]:       </nova:flavor>
Jan 20 15:18:42 compute-1 nova_compute[225855]:       <nova:owner>
Jan 20 15:18:42 compute-1 nova_compute[225855]:         <nova:user uuid="cd9a8f26b71f4631a387e555e6b18428">tempest-AttachVolumeNegativeTest-1505789262-project-member</nova:user>
Jan 20 15:18:42 compute-1 nova_compute[225855]:         <nova:project uuid="9156c0a9920c4721843416b9a44404f9">tempest-AttachVolumeNegativeTest-1505789262</nova:project>
Jan 20 15:18:42 compute-1 nova_compute[225855]:       </nova:owner>
Jan 20 15:18:42 compute-1 nova_compute[225855]:       <nova:root type="image" uuid="a32b3e07-16d8-46fd-9a7b-c242c432fcf9"/>
Jan 20 15:18:42 compute-1 nova_compute[225855]:       <nova:ports>
Jan 20 15:18:42 compute-1 nova_compute[225855]:         <nova:port uuid="bd99d3a5-54e0-4e70-9a02-3543631281a6">
Jan 20 15:18:42 compute-1 nova_compute[225855]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Jan 20 15:18:42 compute-1 nova_compute[225855]:         </nova:port>
Jan 20 15:18:42 compute-1 nova_compute[225855]:       </nova:ports>
Jan 20 15:18:42 compute-1 nova_compute[225855]:     </nova:instance>
Jan 20 15:18:42 compute-1 nova_compute[225855]:   </metadata>
Jan 20 15:18:42 compute-1 nova_compute[225855]:   <sysinfo type="smbios">
Jan 20 15:18:42 compute-1 nova_compute[225855]:     <system>
Jan 20 15:18:42 compute-1 nova_compute[225855]:       <entry name="manufacturer">RDO</entry>
Jan 20 15:18:42 compute-1 nova_compute[225855]:       <entry name="product">OpenStack Compute</entry>
Jan 20 15:18:42 compute-1 nova_compute[225855]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 15:18:42 compute-1 nova_compute[225855]:       <entry name="serial">2b31f3d7-81bd-4712-bcb1-98afd2dc0f44</entry>
Jan 20 15:18:42 compute-1 nova_compute[225855]:       <entry name="uuid">2b31f3d7-81bd-4712-bcb1-98afd2dc0f44</entry>
Jan 20 15:18:42 compute-1 nova_compute[225855]:       <entry name="family">Virtual Machine</entry>
Jan 20 15:18:42 compute-1 nova_compute[225855]:     </system>
Jan 20 15:18:42 compute-1 nova_compute[225855]:   </sysinfo>
Jan 20 15:18:42 compute-1 nova_compute[225855]:   <os>
Jan 20 15:18:42 compute-1 nova_compute[225855]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 20 15:18:42 compute-1 nova_compute[225855]:     <boot dev="hd"/>
Jan 20 15:18:42 compute-1 nova_compute[225855]:     <smbios mode="sysinfo"/>
Jan 20 15:18:42 compute-1 nova_compute[225855]:   </os>
Jan 20 15:18:42 compute-1 nova_compute[225855]:   <features>
Jan 20 15:18:42 compute-1 nova_compute[225855]:     <acpi/>
Jan 20 15:18:42 compute-1 nova_compute[225855]:     <apic/>
Jan 20 15:18:42 compute-1 nova_compute[225855]:     <vmcoreinfo/>
Jan 20 15:18:42 compute-1 nova_compute[225855]:   </features>
Jan 20 15:18:42 compute-1 nova_compute[225855]:   <clock offset="utc">
Jan 20 15:18:42 compute-1 nova_compute[225855]:     <timer name="pit" tickpolicy="delay"/>
Jan 20 15:18:42 compute-1 nova_compute[225855]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 20 15:18:42 compute-1 nova_compute[225855]:     <timer name="hpet" present="no"/>
Jan 20 15:18:42 compute-1 nova_compute[225855]:   </clock>
Jan 20 15:18:42 compute-1 nova_compute[225855]:   <cpu mode="custom" match="exact">
Jan 20 15:18:42 compute-1 nova_compute[225855]:     <model>Nehalem</model>
Jan 20 15:18:42 compute-1 nova_compute[225855]:     <topology sockets="1" cores="1" threads="1"/>
Jan 20 15:18:42 compute-1 nova_compute[225855]:   </cpu>
Jan 20 15:18:42 compute-1 nova_compute[225855]:   <devices>
Jan 20 15:18:42 compute-1 nova_compute[225855]:     <disk type="network" device="disk">
Jan 20 15:18:42 compute-1 nova_compute[225855]:       <driver type="raw" cache="none"/>
Jan 20 15:18:42 compute-1 nova_compute[225855]:       <source protocol="rbd" name="vms/2b31f3d7-81bd-4712-bcb1-98afd2dc0f44_disk">
Jan 20 15:18:42 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 15:18:42 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 15:18:42 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 15:18:42 compute-1 nova_compute[225855]:       </source>
Jan 20 15:18:42 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 15:18:42 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 15:18:42 compute-1 nova_compute[225855]:       </auth>
Jan 20 15:18:42 compute-1 nova_compute[225855]:       <target dev="vda" bus="virtio"/>
Jan 20 15:18:42 compute-1 nova_compute[225855]:     </disk>
Jan 20 15:18:42 compute-1 nova_compute[225855]:     <disk type="network" device="cdrom">
Jan 20 15:18:42 compute-1 nova_compute[225855]:       <driver type="raw" cache="none"/>
Jan 20 15:18:42 compute-1 nova_compute[225855]:       <source protocol="rbd" name="vms/2b31f3d7-81bd-4712-bcb1-98afd2dc0f44_disk.config">
Jan 20 15:18:42 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 15:18:42 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 15:18:42 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 15:18:42 compute-1 nova_compute[225855]:       </source>
Jan 20 15:18:42 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 15:18:42 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 15:18:42 compute-1 nova_compute[225855]:       </auth>
Jan 20 15:18:42 compute-1 nova_compute[225855]:       <target dev="sda" bus="sata"/>
Jan 20 15:18:42 compute-1 nova_compute[225855]:     </disk>
Jan 20 15:18:42 compute-1 nova_compute[225855]:     <interface type="ethernet">
Jan 20 15:18:42 compute-1 nova_compute[225855]:       <mac address="fa:16:3e:44:23:fd"/>
Jan 20 15:18:42 compute-1 nova_compute[225855]:       <model type="virtio"/>
Jan 20 15:18:42 compute-1 nova_compute[225855]:       <driver name="vhost" rx_queue_size="512"/>
Jan 20 15:18:42 compute-1 nova_compute[225855]:       <mtu size="1442"/>
Jan 20 15:18:42 compute-1 nova_compute[225855]:       <target dev="tapbd99d3a5-54"/>
Jan 20 15:18:42 compute-1 nova_compute[225855]:     </interface>
Jan 20 15:18:42 compute-1 nova_compute[225855]:     <serial type="pty">
Jan 20 15:18:42 compute-1 nova_compute[225855]:       <log file="/var/lib/nova/instances/2b31f3d7-81bd-4712-bcb1-98afd2dc0f44/console.log" append="off"/>
Jan 20 15:18:42 compute-1 nova_compute[225855]:     </serial>
Jan 20 15:18:42 compute-1 nova_compute[225855]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 20 15:18:42 compute-1 nova_compute[225855]:     <video>
Jan 20 15:18:42 compute-1 nova_compute[225855]:       <model type="virtio"/>
Jan 20 15:18:42 compute-1 nova_compute[225855]:     </video>
Jan 20 15:18:42 compute-1 nova_compute[225855]:     <input type="tablet" bus="usb"/>
Jan 20 15:18:42 compute-1 nova_compute[225855]:     <rng model="virtio">
Jan 20 15:18:42 compute-1 nova_compute[225855]:       <backend model="random">/dev/urandom</backend>
Jan 20 15:18:42 compute-1 nova_compute[225855]:     </rng>
Jan 20 15:18:42 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root"/>
Jan 20 15:18:42 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:18:42 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:18:42 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:18:42 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:18:42 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:18:42 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:18:42 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:18:42 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:18:42 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:18:42 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:18:42 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:18:42 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:18:42 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:18:42 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:18:42 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:18:42 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:18:42 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:18:42 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:18:42 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:18:42 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:18:42 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:18:42 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:18:42 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:18:42 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:18:42 compute-1 nova_compute[225855]:     <controller type="usb" index="0"/>
Jan 20 15:18:42 compute-1 nova_compute[225855]:     <memballoon model="virtio">
Jan 20 15:18:42 compute-1 nova_compute[225855]:       <stats period="10"/>
Jan 20 15:18:42 compute-1 nova_compute[225855]:     </memballoon>
Jan 20 15:18:42 compute-1 nova_compute[225855]:   </devices>
Jan 20 15:18:42 compute-1 nova_compute[225855]: </domain>
Jan 20 15:18:42 compute-1 nova_compute[225855]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 20 15:18:42 compute-1 nova_compute[225855]: 2026-01-20 15:18:42.074 225859 DEBUG nova.compute.manager [None req-bf228ae6-1931-4ebb-8ef0-3778a1908505 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] [instance: 2b31f3d7-81bd-4712-bcb1-98afd2dc0f44] Preparing to wait for external event network-vif-plugged-bd99d3a5-54e0-4e70-9a02-3543631281a6 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 20 15:18:42 compute-1 nova_compute[225855]: 2026-01-20 15:18:42.074 225859 DEBUG oslo_concurrency.lockutils [None req-bf228ae6-1931-4ebb-8ef0-3778a1908505 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Acquiring lock "2b31f3d7-81bd-4712-bcb1-98afd2dc0f44-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:18:42 compute-1 nova_compute[225855]: 2026-01-20 15:18:42.074 225859 DEBUG oslo_concurrency.lockutils [None req-bf228ae6-1931-4ebb-8ef0-3778a1908505 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Lock "2b31f3d7-81bd-4712-bcb1-98afd2dc0f44-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:18:42 compute-1 nova_compute[225855]: 2026-01-20 15:18:42.074 225859 DEBUG oslo_concurrency.lockutils [None req-bf228ae6-1931-4ebb-8ef0-3778a1908505 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Lock "2b31f3d7-81bd-4712-bcb1-98afd2dc0f44-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:18:42 compute-1 nova_compute[225855]: 2026-01-20 15:18:42.075 225859 DEBUG nova.virt.libvirt.vif [None req-bf228ae6-1931-4ebb-8ef0-3778a1908505 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T15:18:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachVolumeNegativeTest-server-1816292644',display_name='tempest-AttachVolumeNegativeTest-server-1816292644',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-attachvolumenegativetest-server-1816292644',id=194,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGw8qLSULQliDvwKbeTpbt4J6LHSzE0FcgbAkXI3mB449DNtZV5vtYZtKqW3qflHvvMvcmL7nd1rBiXHEUgRgW71fE/QnzR597lXioriSvOlFWdxXwkMYduhCAWqw/sG6A==',key_name='tempest-keypair-235858913',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9156c0a9920c4721843416b9a44404f9',ramdisk_id='',reservation_id='r-acjnzwzw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachVolumeNegativeTest-1505789262',owner_user_name='tempest-AttachVolumeNegativeTest-1505789262-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T15:18:34Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='cd9a8f26b71f4631a387e555e6b18428',uuid=2b31f3d7-81bd-4712-bcb1-98afd2dc0f44,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bd99d3a5-54e0-4e70-9a02-3543631281a6", "address": "fa:16:3e:44:23:fd", "network": {"id": "76c2d716-7d14-4bc1-b83b-a3290ee99d9a", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-782760714-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9156c0a9920c4721843416b9a44404f9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbd99d3a5-54", "ovs_interfaceid": "bd99d3a5-54e0-4e70-9a02-3543631281a6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 20 15:18:42 compute-1 nova_compute[225855]: 2026-01-20 15:18:42.075 225859 DEBUG nova.network.os_vif_util [None req-bf228ae6-1931-4ebb-8ef0-3778a1908505 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Converting VIF {"id": "bd99d3a5-54e0-4e70-9a02-3543631281a6", "address": "fa:16:3e:44:23:fd", "network": {"id": "76c2d716-7d14-4bc1-b83b-a3290ee99d9a", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-782760714-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9156c0a9920c4721843416b9a44404f9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbd99d3a5-54", "ovs_interfaceid": "bd99d3a5-54e0-4e70-9a02-3543631281a6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 15:18:42 compute-1 nova_compute[225855]: 2026-01-20 15:18:42.076 225859 DEBUG nova.network.os_vif_util [None req-bf228ae6-1931-4ebb-8ef0-3778a1908505 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:44:23:fd,bridge_name='br-int',has_traffic_filtering=True,id=bd99d3a5-54e0-4e70-9a02-3543631281a6,network=Network(76c2d716-7d14-4bc1-b83b-a3290ee99d9a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbd99d3a5-54') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 15:18:42 compute-1 nova_compute[225855]: 2026-01-20 15:18:42.076 225859 DEBUG os_vif [None req-bf228ae6-1931-4ebb-8ef0-3778a1908505 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:44:23:fd,bridge_name='br-int',has_traffic_filtering=True,id=bd99d3a5-54e0-4e70-9a02-3543631281a6,network=Network(76c2d716-7d14-4bc1-b83b-a3290ee99d9a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbd99d3a5-54') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 20 15:18:42 compute-1 nova_compute[225855]: 2026-01-20 15:18:42.076 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:18:42 compute-1 nova_compute[225855]: 2026-01-20 15:18:42.077 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:18:42 compute-1 nova_compute[225855]: 2026-01-20 15:18:42.077 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 15:18:42 compute-1 nova_compute[225855]: 2026-01-20 15:18:42.080 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:18:42 compute-1 nova_compute[225855]: 2026-01-20 15:18:42.080 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbd99d3a5-54, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:18:42 compute-1 nova_compute[225855]: 2026-01-20 15:18:42.081 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapbd99d3a5-54, col_values=(('external_ids', {'iface-id': 'bd99d3a5-54e0-4e70-9a02-3543631281a6', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:44:23:fd', 'vm-uuid': '2b31f3d7-81bd-4712-bcb1-98afd2dc0f44'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:18:42 compute-1 nova_compute[225855]: 2026-01-20 15:18:42.082 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:18:42 compute-1 NetworkManager[49104]: <info>  [1768922322.0850] manager: (tapbd99d3a5-54): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/358)
Jan 20 15:18:42 compute-1 nova_compute[225855]: 2026-01-20 15:18:42.085 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 15:18:42 compute-1 nova_compute[225855]: 2026-01-20 15:18:42.090 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:18:42 compute-1 nova_compute[225855]: 2026-01-20 15:18:42.091 225859 INFO os_vif [None req-bf228ae6-1931-4ebb-8ef0-3778a1908505 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:44:23:fd,bridge_name='br-int',has_traffic_filtering=True,id=bd99d3a5-54e0-4e70-9a02-3543631281a6,network=Network(76c2d716-7d14-4bc1-b83b-a3290ee99d9a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbd99d3a5-54')
Jan 20 15:18:42 compute-1 nova_compute[225855]: 2026-01-20 15:18:42.153 225859 DEBUG nova.virt.libvirt.driver [None req-bf228ae6-1931-4ebb-8ef0-3778a1908505 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 15:18:42 compute-1 nova_compute[225855]: 2026-01-20 15:18:42.153 225859 DEBUG nova.virt.libvirt.driver [None req-bf228ae6-1931-4ebb-8ef0-3778a1908505 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 15:18:42 compute-1 nova_compute[225855]: 2026-01-20 15:18:42.153 225859 DEBUG nova.virt.libvirt.driver [None req-bf228ae6-1931-4ebb-8ef0-3778a1908505 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] No VIF found with MAC fa:16:3e:44:23:fd, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 20 15:18:42 compute-1 nova_compute[225855]: 2026-01-20 15:18:42.154 225859 INFO nova.virt.libvirt.driver [None req-bf228ae6-1931-4ebb-8ef0-3778a1908505 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] [instance: 2b31f3d7-81bd-4712-bcb1-98afd2dc0f44] Using config drive
Jan 20 15:18:42 compute-1 nova_compute[225855]: 2026-01-20 15:18:42.182 225859 DEBUG nova.storage.rbd_utils [None req-bf228ae6-1931-4ebb-8ef0-3778a1908505 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] rbd image 2b31f3d7-81bd-4712-bcb1-98afd2dc0f44_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:18:42 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/2338561239' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:18:42 compute-1 ceph-mon[81775]: pgmap v2893: 321 pgs: 321 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 338 KiB/s rd, 2.9 MiB/s wr, 82 op/s
Jan 20 15:18:42 compute-1 sudo[307524]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:18:42 compute-1 sudo[307524]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:18:42 compute-1 sudo[307524]: pam_unix(sudo:session): session closed for user root
Jan 20 15:18:42 compute-1 sudo[307549]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:18:42 compute-1 sudo[307549]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:18:42 compute-1 sudo[307549]: pam_unix(sudo:session): session closed for user root
Jan 20 15:18:43 compute-1 nova_compute[225855]: 2026-01-20 15:18:43.046 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:18:43 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:18:43 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:18:43 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:18:43.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:18:43 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e415 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:18:43 compute-1 nova_compute[225855]: 2026-01-20 15:18:43.564 225859 INFO nova.virt.libvirt.driver [None req-bf228ae6-1931-4ebb-8ef0-3778a1908505 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] [instance: 2b31f3d7-81bd-4712-bcb1-98afd2dc0f44] Creating config drive at /var/lib/nova/instances/2b31f3d7-81bd-4712-bcb1-98afd2dc0f44/disk.config
Jan 20 15:18:43 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:18:43 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:18:43 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:18:43.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:18:43 compute-1 nova_compute[225855]: 2026-01-20 15:18:43.569 225859 DEBUG oslo_concurrency.processutils [None req-bf228ae6-1931-4ebb-8ef0-3778a1908505 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/2b31f3d7-81bd-4712-bcb1-98afd2dc0f44/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpp9vnxc9g execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:18:43 compute-1 nova_compute[225855]: 2026-01-20 15:18:43.708 225859 DEBUG oslo_concurrency.processutils [None req-bf228ae6-1931-4ebb-8ef0-3778a1908505 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/2b31f3d7-81bd-4712-bcb1-98afd2dc0f44/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpp9vnxc9g" returned: 0 in 0.139s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:18:43 compute-1 nova_compute[225855]: 2026-01-20 15:18:43.736 225859 DEBUG nova.storage.rbd_utils [None req-bf228ae6-1931-4ebb-8ef0-3778a1908505 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] rbd image 2b31f3d7-81bd-4712-bcb1-98afd2dc0f44_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:18:43 compute-1 nova_compute[225855]: 2026-01-20 15:18:43.740 225859 DEBUG oslo_concurrency.processutils [None req-bf228ae6-1931-4ebb-8ef0-3778a1908505 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/2b31f3d7-81bd-4712-bcb1-98afd2dc0f44/disk.config 2b31f3d7-81bd-4712-bcb1-98afd2dc0f44_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:18:43 compute-1 nova_compute[225855]: 2026-01-20 15:18:43.913 225859 DEBUG oslo_concurrency.processutils [None req-bf228ae6-1931-4ebb-8ef0-3778a1908505 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/2b31f3d7-81bd-4712-bcb1-98afd2dc0f44/disk.config 2b31f3d7-81bd-4712-bcb1-98afd2dc0f44_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.174s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:18:43 compute-1 nova_compute[225855]: 2026-01-20 15:18:43.914 225859 INFO nova.virt.libvirt.driver [None req-bf228ae6-1931-4ebb-8ef0-3778a1908505 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] [instance: 2b31f3d7-81bd-4712-bcb1-98afd2dc0f44] Deleting local config drive /var/lib/nova/instances/2b31f3d7-81bd-4712-bcb1-98afd2dc0f44/disk.config because it was imported into RBD.
Jan 20 15:18:43 compute-1 kernel: tapbd99d3a5-54: entered promiscuous mode
Jan 20 15:18:43 compute-1 NetworkManager[49104]: <info>  [1768922323.9634] manager: (tapbd99d3a5-54): new Tun device (/org/freedesktop/NetworkManager/Devices/359)
Jan 20 15:18:43 compute-1 ovn_controller[130490]: 2026-01-20T15:18:43Z|00863|binding|INFO|Claiming lport bd99d3a5-54e0-4e70-9a02-3543631281a6 for this chassis.
Jan 20 15:18:43 compute-1 ovn_controller[130490]: 2026-01-20T15:18:43Z|00864|binding|INFO|bd99d3a5-54e0-4e70-9a02-3543631281a6: Claiming fa:16:3e:44:23:fd 10.100.0.13
Jan 20 15:18:43 compute-1 nova_compute[225855]: 2026-01-20 15:18:43.964 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:18:43 compute-1 nova_compute[225855]: 2026-01-20 15:18:43.969 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:18:43 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:18:43.980 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:44:23:fd 10.100.0.13'], port_security=['fa:16:3e:44:23:fd 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '2b31f3d7-81bd-4712-bcb1-98afd2dc0f44', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-76c2d716-7d14-4bc1-b83b-a3290ee99d9a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9156c0a9920c4721843416b9a44404f9', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd945b282-623c-4da9-a940-ac04c971b57b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2bfc4e2a-eeed-480e-aa18-68fc6c8f2cc2, chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], logical_port=bd99d3a5-54e0-4e70-9a02-3543631281a6) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:18:43 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:18:43.981 140354 INFO neutron.agent.ovn.metadata.agent [-] Port bd99d3a5-54e0-4e70-9a02-3543631281a6 in datapath 76c2d716-7d14-4bc1-b83b-a3290ee99d9a bound to our chassis
Jan 20 15:18:43 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:18:43.982 140354 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 76c2d716-7d14-4bc1-b83b-a3290ee99d9a
Jan 20 15:18:43 compute-1 systemd-machined[194361]: New machine qemu-102-instance-000000c2.
Jan 20 15:18:43 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:18:43.994 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[5b08f621-da96-4265-a683-a0bc73825283]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:18:43 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:18:43.995 140354 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap76c2d716-71 in ovnmeta-76c2d716-7d14-4bc1-b83b-a3290ee99d9a namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 20 15:18:43 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:18:43.996 229707 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap76c2d716-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 20 15:18:43 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:18:43.996 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[faecfa86-f32b-4e39-b900-8ee036c9c549]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:18:43 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:18:43.997 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[3bb1e272-1c56-4123-8dca-da3dc141cd39]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:18:44 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:18:44.008 140466 DEBUG oslo.privsep.daemon [-] privsep: reply[e243bb0f-26dc-4fde-8041-26c4003f91fe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:18:44 compute-1 nova_compute[225855]: 2026-01-20 15:18:44.028 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:18:44 compute-1 systemd[1]: Started Virtual Machine qemu-102-instance-000000c2.
Jan 20 15:18:44 compute-1 nova_compute[225855]: 2026-01-20 15:18:44.033 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:18:44 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:18:44.034 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[d33b9485-dfbf-4cae-abfd-04c8305ef411]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:18:44 compute-1 ovn_controller[130490]: 2026-01-20T15:18:44Z|00865|binding|INFO|Setting lport bd99d3a5-54e0-4e70-9a02-3543631281a6 ovn-installed in OVS
Jan 20 15:18:44 compute-1 ovn_controller[130490]: 2026-01-20T15:18:44Z|00866|binding|INFO|Setting lport bd99d3a5-54e0-4e70-9a02-3543631281a6 up in Southbound
Jan 20 15:18:44 compute-1 nova_compute[225855]: 2026-01-20 15:18:44.039 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:18:44 compute-1 systemd-udevd[307631]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 15:18:44 compute-1 NetworkManager[49104]: <info>  [1768922324.0543] device (tapbd99d3a5-54): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 20 15:18:44 compute-1 NetworkManager[49104]: <info>  [1768922324.0551] device (tapbd99d3a5-54): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 20 15:18:44 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:18:44.065 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[5e96c39b-5fff-4ec0-b163-54b1370a2a65]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:18:44 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:18:44.070 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[cfc456f5-0cdd-4d90-9822-8bd33c59e038]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:18:44 compute-1 NetworkManager[49104]: <info>  [1768922324.0714] manager: (tap76c2d716-70): new Veth device (/org/freedesktop/NetworkManager/Devices/360)
Jan 20 15:18:44 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:18:44.102 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[2c807fa5-1e4c-472f-a8c6-c887f38df508]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:18:44 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:18:44.105 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[e1d4ed49-ee05-4850-ae4e-b2c2665322ea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:18:44 compute-1 NetworkManager[49104]: <info>  [1768922324.1247] device (tap76c2d716-70): carrier: link connected
Jan 20 15:18:44 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:18:44.129 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[c8ea39e7-b3df-4bde-b60c-d1d4ac562725]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:18:44 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:18:44.145 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[4c4f9515-9fc7-41fb-a205-7659e1a579a0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap76c2d716-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2e:44:ab'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 245], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 736305, 'reachable_time': 19770, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 307661, 'error': None, 'target': 'ovnmeta-76c2d716-7d14-4bc1-b83b-a3290ee99d9a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:18:44 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:18:44.164 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[e789b55f-105a-4590-9c88-adbfc6c3ba47]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe2e:44ab'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 736305, 'tstamp': 736305}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 307662, 'error': None, 'target': 'ovnmeta-76c2d716-7d14-4bc1-b83b-a3290ee99d9a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:18:44 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:18:44.179 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[d125f67e-d8f1-429f-a105-53927c91b7dd]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap76c2d716-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2e:44:ab'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 245], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 736305, 'reachable_time': 19770, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 307663, 'error': None, 'target': 'ovnmeta-76c2d716-7d14-4bc1-b83b-a3290ee99d9a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:18:44 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:18:44.217 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[94834a92-1cc9-45fd-bddb-2502b491ab58]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:18:44 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:18:44.272 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[9fd648a2-7d3a-4c04-bf01-6f8e093b5afc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:18:44 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:18:44.273 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap76c2d716-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:18:44 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:18:44.273 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 15:18:44 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:18:44.274 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap76c2d716-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:18:44 compute-1 nova_compute[225855]: 2026-01-20 15:18:44.275 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:18:44 compute-1 NetworkManager[49104]: <info>  [1768922324.2761] manager: (tap76c2d716-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/361)
Jan 20 15:18:44 compute-1 kernel: tap76c2d716-70: entered promiscuous mode
Jan 20 15:18:44 compute-1 nova_compute[225855]: 2026-01-20 15:18:44.277 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:18:44 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:18:44.278 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap76c2d716-70, col_values=(('external_ids', {'iface-id': '2c0bba0e-e9b6-4ece-8349-62642b94d91d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:18:44 compute-1 nova_compute[225855]: 2026-01-20 15:18:44.279 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:18:44 compute-1 ovn_controller[130490]: 2026-01-20T15:18:44Z|00867|binding|INFO|Releasing lport 2c0bba0e-e9b6-4ece-8349-62642b94d91d from this chassis (sb_readonly=0)
Jan 20 15:18:44 compute-1 nova_compute[225855]: 2026-01-20 15:18:44.295 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:18:44 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:18:44.296 140354 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/76c2d716-7d14-4bc1-b83b-a3290ee99d9a.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/76c2d716-7d14-4bc1-b83b-a3290ee99d9a.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 20 15:18:44 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:18:44.297 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[57704dc8-1030-488e-874f-5829e3d2407a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:18:44 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:18:44.298 140354 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 20 15:18:44 compute-1 ovn_metadata_agent[140349]: global
Jan 20 15:18:44 compute-1 ovn_metadata_agent[140349]:     log         /dev/log local0 debug
Jan 20 15:18:44 compute-1 ovn_metadata_agent[140349]:     log-tag     haproxy-metadata-proxy-76c2d716-7d14-4bc1-b83b-a3290ee99d9a
Jan 20 15:18:44 compute-1 ovn_metadata_agent[140349]:     user        root
Jan 20 15:18:44 compute-1 ovn_metadata_agent[140349]:     group       root
Jan 20 15:18:44 compute-1 ovn_metadata_agent[140349]:     maxconn     1024
Jan 20 15:18:44 compute-1 ovn_metadata_agent[140349]:     pidfile     /var/lib/neutron/external/pids/76c2d716-7d14-4bc1-b83b-a3290ee99d9a.pid.haproxy
Jan 20 15:18:44 compute-1 ovn_metadata_agent[140349]:     daemon
Jan 20 15:18:44 compute-1 ovn_metadata_agent[140349]: 
Jan 20 15:18:44 compute-1 ovn_metadata_agent[140349]: defaults
Jan 20 15:18:44 compute-1 ovn_metadata_agent[140349]:     log global
Jan 20 15:18:44 compute-1 ovn_metadata_agent[140349]:     mode http
Jan 20 15:18:44 compute-1 ovn_metadata_agent[140349]:     option httplog
Jan 20 15:18:44 compute-1 ovn_metadata_agent[140349]:     option dontlognull
Jan 20 15:18:44 compute-1 ovn_metadata_agent[140349]:     option http-server-close
Jan 20 15:18:44 compute-1 ovn_metadata_agent[140349]:     option forwardfor
Jan 20 15:18:44 compute-1 ovn_metadata_agent[140349]:     retries                 3
Jan 20 15:18:44 compute-1 ovn_metadata_agent[140349]:     timeout http-request    30s
Jan 20 15:18:44 compute-1 ovn_metadata_agent[140349]:     timeout connect         30s
Jan 20 15:18:44 compute-1 ovn_metadata_agent[140349]:     timeout client          32s
Jan 20 15:18:44 compute-1 ovn_metadata_agent[140349]:     timeout server          32s
Jan 20 15:18:44 compute-1 ovn_metadata_agent[140349]:     timeout http-keep-alive 30s
Jan 20 15:18:44 compute-1 ovn_metadata_agent[140349]: 
Jan 20 15:18:44 compute-1 ovn_metadata_agent[140349]: 
Jan 20 15:18:44 compute-1 ovn_metadata_agent[140349]: listen listener
Jan 20 15:18:44 compute-1 ovn_metadata_agent[140349]:     bind 169.254.169.254:80
Jan 20 15:18:44 compute-1 ovn_metadata_agent[140349]:     server metadata /var/lib/neutron/metadata_proxy
Jan 20 15:18:44 compute-1 ovn_metadata_agent[140349]:     http-request add-header X-OVN-Network-ID 76c2d716-7d14-4bc1-b83b-a3290ee99d9a
Jan 20 15:18:44 compute-1 ovn_metadata_agent[140349]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 20 15:18:44 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:18:44.300 140354 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-76c2d716-7d14-4bc1-b83b-a3290ee99d9a', 'env', 'PROCESS_TAG=haproxy-76c2d716-7d14-4bc1-b83b-a3290ee99d9a', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/76c2d716-7d14-4bc1-b83b-a3290ee99d9a.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 20 15:18:44 compute-1 nova_compute[225855]: 2026-01-20 15:18:44.369 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768922324.368992, 2b31f3d7-81bd-4712-bcb1-98afd2dc0f44 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:18:44 compute-1 nova_compute[225855]: 2026-01-20 15:18:44.369 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 2b31f3d7-81bd-4712-bcb1-98afd2dc0f44] VM Started (Lifecycle Event)
Jan 20 15:18:44 compute-1 nova_compute[225855]: 2026-01-20 15:18:44.466 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 2b31f3d7-81bd-4712-bcb1-98afd2dc0f44] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:18:44 compute-1 nova_compute[225855]: 2026-01-20 15:18:44.471 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768922324.3691351, 2b31f3d7-81bd-4712-bcb1-98afd2dc0f44 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:18:44 compute-1 nova_compute[225855]: 2026-01-20 15:18:44.471 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 2b31f3d7-81bd-4712-bcb1-98afd2dc0f44] VM Paused (Lifecycle Event)
Jan 20 15:18:44 compute-1 nova_compute[225855]: 2026-01-20 15:18:44.538 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 2b31f3d7-81bd-4712-bcb1-98afd2dc0f44] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:18:44 compute-1 nova_compute[225855]: 2026-01-20 15:18:44.542 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 2b31f3d7-81bd-4712-bcb1-98afd2dc0f44] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 15:18:44 compute-1 nova_compute[225855]: 2026-01-20 15:18:44.586 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 2b31f3d7-81bd-4712-bcb1-98afd2dc0f44] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 15:18:44 compute-1 podman[307737]: 2026-01-20 15:18:44.63535801 +0000 UTC m=+0.050022173 container create 5e4fd4628195f875a51769944f5674e810c100ec290dc3180351b0db60def7c7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-76c2d716-7d14-4bc1-b83b-a3290ee99d9a, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 20 15:18:44 compute-1 systemd[1]: Started libpod-conmon-5e4fd4628195f875a51769944f5674e810c100ec290dc3180351b0db60def7c7.scope.
Jan 20 15:18:44 compute-1 systemd[1]: Started libcrun container.
Jan 20 15:18:44 compute-1 podman[307737]: 2026-01-20 15:18:44.610623411 +0000 UTC m=+0.025287374 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 20 15:18:44 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7b4fcd771e27f8b153744804f05a30b7615c256cc9d63d2f67498c5cbee4f4e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 20 15:18:44 compute-1 podman[307737]: 2026-01-20 15:18:44.720240615 +0000 UTC m=+0.134904548 container init 5e4fd4628195f875a51769944f5674e810c100ec290dc3180351b0db60def7c7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-76c2d716-7d14-4bc1-b83b-a3290ee99d9a, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 20 15:18:44 compute-1 podman[307737]: 2026-01-20 15:18:44.725963726 +0000 UTC m=+0.140627659 container start 5e4fd4628195f875a51769944f5674e810c100ec290dc3180351b0db60def7c7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-76c2d716-7d14-4bc1-b83b-a3290ee99d9a, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 20 15:18:44 compute-1 ceph-mon[81775]: pgmap v2894: 321 pgs: 321 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 270 KiB/s rd, 2.6 MiB/s wr, 71 op/s
Jan 20 15:18:44 compute-1 neutron-haproxy-ovnmeta-76c2d716-7d14-4bc1-b83b-a3290ee99d9a[307752]: [NOTICE]   (307756) : New worker (307758) forked
Jan 20 15:18:44 compute-1 neutron-haproxy-ovnmeta-76c2d716-7d14-4bc1-b83b-a3290ee99d9a[307752]: [NOTICE]   (307756) : Loading success.
Jan 20 15:18:45 compute-1 nova_compute[225855]: 2026-01-20 15:18:44.999 225859 DEBUG nova.compute.manager [req-dcea4326-d029-43cc-a973-ac713a700454 req-e0d8c5ff-8711-4796-b8b1-c559c8c0e4ef 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 2b31f3d7-81bd-4712-bcb1-98afd2dc0f44] Received event network-vif-plugged-bd99d3a5-54e0-4e70-9a02-3543631281a6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:18:45 compute-1 nova_compute[225855]: 2026-01-20 15:18:44.999 225859 DEBUG oslo_concurrency.lockutils [req-dcea4326-d029-43cc-a973-ac713a700454 req-e0d8c5ff-8711-4796-b8b1-c559c8c0e4ef 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "2b31f3d7-81bd-4712-bcb1-98afd2dc0f44-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:18:45 compute-1 nova_compute[225855]: 2026-01-20 15:18:45.000 225859 DEBUG oslo_concurrency.lockutils [req-dcea4326-d029-43cc-a973-ac713a700454 req-e0d8c5ff-8711-4796-b8b1-c559c8c0e4ef 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "2b31f3d7-81bd-4712-bcb1-98afd2dc0f44-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:18:45 compute-1 nova_compute[225855]: 2026-01-20 15:18:45.000 225859 DEBUG oslo_concurrency.lockutils [req-dcea4326-d029-43cc-a973-ac713a700454 req-e0d8c5ff-8711-4796-b8b1-c559c8c0e4ef 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "2b31f3d7-81bd-4712-bcb1-98afd2dc0f44-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:18:45 compute-1 nova_compute[225855]: 2026-01-20 15:18:45.000 225859 DEBUG nova.compute.manager [req-dcea4326-d029-43cc-a973-ac713a700454 req-e0d8c5ff-8711-4796-b8b1-c559c8c0e4ef 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 2b31f3d7-81bd-4712-bcb1-98afd2dc0f44] Processing event network-vif-plugged-bd99d3a5-54e0-4e70-9a02-3543631281a6 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 20 15:18:45 compute-1 nova_compute[225855]: 2026-01-20 15:18:45.001 225859 DEBUG nova.compute.manager [None req-bf228ae6-1931-4ebb-8ef0-3778a1908505 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] [instance: 2b31f3d7-81bd-4712-bcb1-98afd2dc0f44] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 20 15:18:45 compute-1 nova_compute[225855]: 2026-01-20 15:18:45.004 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768922325.00469, 2b31f3d7-81bd-4712-bcb1-98afd2dc0f44 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:18:45 compute-1 nova_compute[225855]: 2026-01-20 15:18:45.005 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 2b31f3d7-81bd-4712-bcb1-98afd2dc0f44] VM Resumed (Lifecycle Event)
Jan 20 15:18:45 compute-1 nova_compute[225855]: 2026-01-20 15:18:45.006 225859 DEBUG nova.virt.libvirt.driver [None req-bf228ae6-1931-4ebb-8ef0-3778a1908505 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] [instance: 2b31f3d7-81bd-4712-bcb1-98afd2dc0f44] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 20 15:18:45 compute-1 nova_compute[225855]: 2026-01-20 15:18:45.010 225859 INFO nova.virt.libvirt.driver [-] [instance: 2b31f3d7-81bd-4712-bcb1-98afd2dc0f44] Instance spawned successfully.
Jan 20 15:18:45 compute-1 nova_compute[225855]: 2026-01-20 15:18:45.010 225859 DEBUG nova.virt.libvirt.driver [None req-bf228ae6-1931-4ebb-8ef0-3778a1908505 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] [instance: 2b31f3d7-81bd-4712-bcb1-98afd2dc0f44] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 20 15:18:45 compute-1 nova_compute[225855]: 2026-01-20 15:18:45.036 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 2b31f3d7-81bd-4712-bcb1-98afd2dc0f44] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:18:45 compute-1 nova_compute[225855]: 2026-01-20 15:18:45.038 225859 DEBUG nova.virt.libvirt.driver [None req-bf228ae6-1931-4ebb-8ef0-3778a1908505 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] [instance: 2b31f3d7-81bd-4712-bcb1-98afd2dc0f44] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:18:45 compute-1 nova_compute[225855]: 2026-01-20 15:18:45.038 225859 DEBUG nova.virt.libvirt.driver [None req-bf228ae6-1931-4ebb-8ef0-3778a1908505 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] [instance: 2b31f3d7-81bd-4712-bcb1-98afd2dc0f44] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:18:45 compute-1 nova_compute[225855]: 2026-01-20 15:18:45.038 225859 DEBUG nova.virt.libvirt.driver [None req-bf228ae6-1931-4ebb-8ef0-3778a1908505 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] [instance: 2b31f3d7-81bd-4712-bcb1-98afd2dc0f44] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:18:45 compute-1 nova_compute[225855]: 2026-01-20 15:18:45.039 225859 DEBUG nova.virt.libvirt.driver [None req-bf228ae6-1931-4ebb-8ef0-3778a1908505 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] [instance: 2b31f3d7-81bd-4712-bcb1-98afd2dc0f44] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:18:45 compute-1 nova_compute[225855]: 2026-01-20 15:18:45.039 225859 DEBUG nova.virt.libvirt.driver [None req-bf228ae6-1931-4ebb-8ef0-3778a1908505 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] [instance: 2b31f3d7-81bd-4712-bcb1-98afd2dc0f44] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:18:45 compute-1 nova_compute[225855]: 2026-01-20 15:18:45.040 225859 DEBUG nova.virt.libvirt.driver [None req-bf228ae6-1931-4ebb-8ef0-3778a1908505 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] [instance: 2b31f3d7-81bd-4712-bcb1-98afd2dc0f44] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:18:45 compute-1 nova_compute[225855]: 2026-01-20 15:18:45.045 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 2b31f3d7-81bd-4712-bcb1-98afd2dc0f44] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 15:18:45 compute-1 nova_compute[225855]: 2026-01-20 15:18:45.087 225859 DEBUG nova.network.neutron [req-51605533-b7f0-4242-9d13-054db74dd614 req-24ae14e8-fdfd-494a-9899-0c18fd537a5c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 2b31f3d7-81bd-4712-bcb1-98afd2dc0f44] Updated VIF entry in instance network info cache for port bd99d3a5-54e0-4e70-9a02-3543631281a6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 15:18:45 compute-1 nova_compute[225855]: 2026-01-20 15:18:45.087 225859 DEBUG nova.network.neutron [req-51605533-b7f0-4242-9d13-054db74dd614 req-24ae14e8-fdfd-494a-9899-0c18fd537a5c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 2b31f3d7-81bd-4712-bcb1-98afd2dc0f44] Updating instance_info_cache with network_info: [{"id": "bd99d3a5-54e0-4e70-9a02-3543631281a6", "address": "fa:16:3e:44:23:fd", "network": {"id": "76c2d716-7d14-4bc1-b83b-a3290ee99d9a", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-782760714-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9156c0a9920c4721843416b9a44404f9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbd99d3a5-54", "ovs_interfaceid": "bd99d3a5-54e0-4e70-9a02-3543631281a6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:18:45 compute-1 nova_compute[225855]: 2026-01-20 15:18:45.123 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 2b31f3d7-81bd-4712-bcb1-98afd2dc0f44] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 15:18:45 compute-1 nova_compute[225855]: 2026-01-20 15:18:45.169 225859 DEBUG oslo_concurrency.lockutils [req-51605533-b7f0-4242-9d13-054db74dd614 req-24ae14e8-fdfd-494a-9899-0c18fd537a5c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-2b31f3d7-81bd-4712-bcb1-98afd2dc0f44" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:18:45 compute-1 nova_compute[225855]: 2026-01-20 15:18:45.198 225859 INFO nova.compute.manager [None req-bf228ae6-1931-4ebb-8ef0-3778a1908505 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] [instance: 2b31f3d7-81bd-4712-bcb1-98afd2dc0f44] Took 10.76 seconds to spawn the instance on the hypervisor.
Jan 20 15:18:45 compute-1 nova_compute[225855]: 2026-01-20 15:18:45.199 225859 DEBUG nova.compute.manager [None req-bf228ae6-1931-4ebb-8ef0-3778a1908505 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] [instance: 2b31f3d7-81bd-4712-bcb1-98afd2dc0f44] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:18:45 compute-1 nova_compute[225855]: 2026-01-20 15:18:45.277 225859 INFO nova.compute.manager [None req-bf228ae6-1931-4ebb-8ef0-3778a1908505 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] [instance: 2b31f3d7-81bd-4712-bcb1-98afd2dc0f44] Took 11.79 seconds to build instance.
Jan 20 15:18:45 compute-1 nova_compute[225855]: 2026-01-20 15:18:45.309 225859 DEBUG oslo_concurrency.lockutils [None req-bf228ae6-1931-4ebb-8ef0-3778a1908505 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Lock "2b31f3d7-81bd-4712-bcb1-98afd2dc0f44" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.892s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:18:45 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:18:45 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:18:45 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:18:45.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:18:45 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:18:45 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:18:45 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:18:45.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:18:46 compute-1 ceph-mon[81775]: pgmap v2895: 321 pgs: 321 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 542 KiB/s rd, 1.9 MiB/s wr, 75 op/s
Jan 20 15:18:47 compute-1 nova_compute[225855]: 2026-01-20 15:18:47.082 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:18:47 compute-1 nova_compute[225855]: 2026-01-20 15:18:47.144 225859 DEBUG nova.compute.manager [req-3f702505-207e-4d05-ab7c-5ef6075e758c req-e77c0c8a-a2a5-42ed-bc11-a5836889e298 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 2b31f3d7-81bd-4712-bcb1-98afd2dc0f44] Received event network-vif-plugged-bd99d3a5-54e0-4e70-9a02-3543631281a6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:18:47 compute-1 nova_compute[225855]: 2026-01-20 15:18:47.145 225859 DEBUG oslo_concurrency.lockutils [req-3f702505-207e-4d05-ab7c-5ef6075e758c req-e77c0c8a-a2a5-42ed-bc11-a5836889e298 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "2b31f3d7-81bd-4712-bcb1-98afd2dc0f44-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:18:47 compute-1 nova_compute[225855]: 2026-01-20 15:18:47.145 225859 DEBUG oslo_concurrency.lockutils [req-3f702505-207e-4d05-ab7c-5ef6075e758c req-e77c0c8a-a2a5-42ed-bc11-a5836889e298 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "2b31f3d7-81bd-4712-bcb1-98afd2dc0f44-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:18:47 compute-1 nova_compute[225855]: 2026-01-20 15:18:47.145 225859 DEBUG oslo_concurrency.lockutils [req-3f702505-207e-4d05-ab7c-5ef6075e758c req-e77c0c8a-a2a5-42ed-bc11-a5836889e298 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "2b31f3d7-81bd-4712-bcb1-98afd2dc0f44-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:18:47 compute-1 nova_compute[225855]: 2026-01-20 15:18:47.145 225859 DEBUG nova.compute.manager [req-3f702505-207e-4d05-ab7c-5ef6075e758c req-e77c0c8a-a2a5-42ed-bc11-a5836889e298 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 2b31f3d7-81bd-4712-bcb1-98afd2dc0f44] No waiting events found dispatching network-vif-plugged-bd99d3a5-54e0-4e70-9a02-3543631281a6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:18:47 compute-1 nova_compute[225855]: 2026-01-20 15:18:47.146 225859 WARNING nova.compute.manager [req-3f702505-207e-4d05-ab7c-5ef6075e758c req-e77c0c8a-a2a5-42ed-bc11-a5836889e298 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 2b31f3d7-81bd-4712-bcb1-98afd2dc0f44] Received unexpected event network-vif-plugged-bd99d3a5-54e0-4e70-9a02-3543631281a6 for instance with vm_state active and task_state None.
Jan 20 15:18:47 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:18:47 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:18:47 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:18:47.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:18:47 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:18:47 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:18:47 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:18:47.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:18:48 compute-1 nova_compute[225855]: 2026-01-20 15:18:48.048 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:18:48 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e415 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:18:48 compute-1 NetworkManager[49104]: <info>  [1768922328.4408] manager: (patch-provnet-b62c391b-f7a3-4a38-a0df-72ac0383ca74-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/362)
Jan 20 15:18:48 compute-1 NetworkManager[49104]: <info>  [1768922328.4415] manager: (patch-br-int-to-provnet-b62c391b-f7a3-4a38-a0df-72ac0383ca74): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/363)
Jan 20 15:18:48 compute-1 nova_compute[225855]: 2026-01-20 15:18:48.440 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:18:48 compute-1 nova_compute[225855]: 2026-01-20 15:18:48.516 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:18:48 compute-1 ovn_controller[130490]: 2026-01-20T15:18:48Z|00868|binding|INFO|Releasing lport 2c0bba0e-e9b6-4ece-8349-62642b94d91d from this chassis (sb_readonly=0)
Jan 20 15:18:48 compute-1 nova_compute[225855]: 2026-01-20 15:18:48.529 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:18:48 compute-1 ceph-mon[81775]: pgmap v2896: 321 pgs: 321 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 469 KiB/s rd, 1.6 MiB/s wr, 34 op/s
Jan 20 15:18:49 compute-1 podman[307770]: 2026-01-20 15:18:49.029637174 +0000 UTC m=+0.077473887 container health_status 72be95c12e041eb2cd1a16ad70dfa1391d9f03a5f3180a614ead9459dcfcd46f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 15:18:49 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:18:49 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:18:49 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:18:49.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:18:49 compute-1 nova_compute[225855]: 2026-01-20 15:18:49.561 225859 DEBUG nova.compute.manager [req-1f9601fb-1d10-4b0a-b38d-3f9aebe84f58 req-5284322e-f98e-4fa9-bb30-355eff4d1fc8 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 2b31f3d7-81bd-4712-bcb1-98afd2dc0f44] Received event network-changed-bd99d3a5-54e0-4e70-9a02-3543631281a6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:18:49 compute-1 nova_compute[225855]: 2026-01-20 15:18:49.562 225859 DEBUG nova.compute.manager [req-1f9601fb-1d10-4b0a-b38d-3f9aebe84f58 req-5284322e-f98e-4fa9-bb30-355eff4d1fc8 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 2b31f3d7-81bd-4712-bcb1-98afd2dc0f44] Refreshing instance network info cache due to event network-changed-bd99d3a5-54e0-4e70-9a02-3543631281a6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 15:18:49 compute-1 nova_compute[225855]: 2026-01-20 15:18:49.562 225859 DEBUG oslo_concurrency.lockutils [req-1f9601fb-1d10-4b0a-b38d-3f9aebe84f58 req-5284322e-f98e-4fa9-bb30-355eff4d1fc8 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-2b31f3d7-81bd-4712-bcb1-98afd2dc0f44" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:18:49 compute-1 nova_compute[225855]: 2026-01-20 15:18:49.562 225859 DEBUG oslo_concurrency.lockutils [req-1f9601fb-1d10-4b0a-b38d-3f9aebe84f58 req-5284322e-f98e-4fa9-bb30-355eff4d1fc8 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-2b31f3d7-81bd-4712-bcb1-98afd2dc0f44" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:18:49 compute-1 nova_compute[225855]: 2026-01-20 15:18:49.562 225859 DEBUG nova.network.neutron [req-1f9601fb-1d10-4b0a-b38d-3f9aebe84f58 req-5284322e-f98e-4fa9-bb30-355eff4d1fc8 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 2b31f3d7-81bd-4712-bcb1-98afd2dc0f44] Refreshing network info cache for port bd99d3a5-54e0-4e70-9a02-3543631281a6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 15:18:49 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:18:49 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:18:49 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:18:49.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:18:49 compute-1 radosgw[83787]: INFO: RGWReshardLock::lock found lock on reshard.0000000000 to be held by another RGW process; skipping for now
Jan 20 15:18:49 compute-1 radosgw[83787]: INFO: RGWReshardLock::lock found lock on reshard.0000000002 to be held by another RGW process; skipping for now
Jan 20 15:18:49 compute-1 radosgw[83787]: INFO: RGWReshardLock::lock found lock on reshard.0000000005 to be held by another RGW process; skipping for now
Jan 20 15:18:49 compute-1 radosgw[83787]: INFO: RGWReshardLock::lock found lock on reshard.0000000008 to be held by another RGW process; skipping for now
Jan 20 15:18:49 compute-1 radosgw[83787]: INFO: RGWReshardLock::lock found lock on reshard.0000000010 to be held by another RGW process; skipping for now
Jan 20 15:18:49 compute-1 radosgw[83787]: INFO: RGWReshardLock::lock found lock on reshard.0000000012 to be held by another RGW process; skipping for now
Jan 20 15:18:49 compute-1 radosgw[83787]: INFO: RGWReshardLock::lock found lock on reshard.0000000014 to be held by another RGW process; skipping for now
Jan 20 15:18:50 compute-1 ceph-mon[81775]: pgmap v2897: 321 pgs: 321 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 1.6 MiB/s wr, 153 op/s
Jan 20 15:18:51 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:18:51 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:18:51 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:18:51.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:18:51 compute-1 nova_compute[225855]: 2026-01-20 15:18:51.353 225859 DEBUG nova.network.neutron [req-1f9601fb-1d10-4b0a-b38d-3f9aebe84f58 req-5284322e-f98e-4fa9-bb30-355eff4d1fc8 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 2b31f3d7-81bd-4712-bcb1-98afd2dc0f44] Updated VIF entry in instance network info cache for port bd99d3a5-54e0-4e70-9a02-3543631281a6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 15:18:51 compute-1 nova_compute[225855]: 2026-01-20 15:18:51.353 225859 DEBUG nova.network.neutron [req-1f9601fb-1d10-4b0a-b38d-3f9aebe84f58 req-5284322e-f98e-4fa9-bb30-355eff4d1fc8 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 2b31f3d7-81bd-4712-bcb1-98afd2dc0f44] Updating instance_info_cache with network_info: [{"id": "bd99d3a5-54e0-4e70-9a02-3543631281a6", "address": "fa:16:3e:44:23:fd", "network": {"id": "76c2d716-7d14-4bc1-b83b-a3290ee99d9a", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-782760714-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.245", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9156c0a9920c4721843416b9a44404f9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbd99d3a5-54", "ovs_interfaceid": "bd99d3a5-54e0-4e70-9a02-3543631281a6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:18:51 compute-1 nova_compute[225855]: 2026-01-20 15:18:51.394 225859 DEBUG oslo_concurrency.lockutils [req-1f9601fb-1d10-4b0a-b38d-3f9aebe84f58 req-5284322e-f98e-4fa9-bb30-355eff4d1fc8 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-2b31f3d7-81bd-4712-bcb1-98afd2dc0f44" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:18:51 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:18:51 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:18:51 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:18:51.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:18:52 compute-1 nova_compute[225855]: 2026-01-20 15:18:52.083 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:18:52 compute-1 ceph-mon[81775]: pgmap v2898: 321 pgs: 321 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 45 KiB/s wr, 157 op/s
Jan 20 15:18:53 compute-1 nova_compute[225855]: 2026-01-20 15:18:53.050 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:18:53 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:18:53 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:18:53 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:18:53.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:18:53 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e415 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:18:53 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:18:53 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:18:53 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:18:53.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:18:54 compute-1 ceph-mon[81775]: pgmap v2899: 321 pgs: 321 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 22 KiB/s wr, 223 op/s
Jan 20 15:18:55 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:18:55 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:18:55 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:18:55.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:18:55 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:18:55 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:18:55 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:18:55.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:18:56 compute-1 ceph-mon[81775]: pgmap v2900: 321 pgs: 321 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 22 KiB/s wr, 302 op/s
Jan 20 15:18:57 compute-1 nova_compute[225855]: 2026-01-20 15:18:57.085 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:18:57 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:18:57 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:18:57 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:18:57.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:18:57 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:18:57 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:18:57 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:18:57.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:18:57 compute-1 ovn_controller[130490]: 2026-01-20T15:18:57Z|00102|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:44:23:fd 10.100.0.13
Jan 20 15:18:57 compute-1 ovn_controller[130490]: 2026-01-20T15:18:57Z|00103|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:44:23:fd 10.100.0.13
Jan 20 15:18:58 compute-1 nova_compute[225855]: 2026-01-20 15:18:58.052 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:18:58 compute-1 nova_compute[225855]: 2026-01-20 15:18:58.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:18:58 compute-1 nova_compute[225855]: 2026-01-20 15:18:58.340 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 15:18:58 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e415 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:18:58 compute-1 ceph-mon[81775]: pgmap v2901: 321 pgs: 321 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 0 B/s wr, 277 op/s
Jan 20 15:18:59 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:18:59 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:18:59 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:18:59.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:18:59 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:18:59 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:18:59 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:18:59.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:19:00 compute-1 ceph-mon[81775]: pgmap v2902: 321 pgs: 321 active+clean; 271 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 1.5 MiB/s wr, 343 op/s
Jan 20 15:19:01 compute-1 nova_compute[225855]: 2026-01-20 15:19:01.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:19:01 compute-1 nova_compute[225855]: 2026-01-20 15:19:01.340 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 15:19:01 compute-1 nova_compute[225855]: 2026-01-20 15:19:01.340 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 20 15:19:01 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:19:01 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:19:01 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:19:01.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:19:01 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:19:01 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:19:01 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:19:01.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:19:01 compute-1 nova_compute[225855]: 2026-01-20 15:19:01.887 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "refresh_cache-2b31f3d7-81bd-4712-bcb1-98afd2dc0f44" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:19:01 compute-1 nova_compute[225855]: 2026-01-20 15:19:01.888 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquired lock "refresh_cache-2b31f3d7-81bd-4712-bcb1-98afd2dc0f44" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:19:01 compute-1 nova_compute[225855]: 2026-01-20 15:19:01.888 225859 DEBUG nova.network.neutron [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] [instance: 2b31f3d7-81bd-4712-bcb1-98afd2dc0f44] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 20 15:19:01 compute-1 nova_compute[225855]: 2026-01-20 15:19:01.888 225859 DEBUG nova.objects.instance [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 2b31f3d7-81bd-4712-bcb1-98afd2dc0f44 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:19:02 compute-1 anacron[89046]: Job `cron.monthly' started
Jan 20 15:19:02 compute-1 anacron[89046]: Job `cron.monthly' terminated
Jan 20 15:19:02 compute-1 anacron[89046]: Normal exit (3 jobs run)
Jan 20 15:19:02 compute-1 nova_compute[225855]: 2026-01-20 15:19:02.086 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:19:02 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/412186716' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:19:03 compute-1 sudo[307805]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:19:03 compute-1 sudo[307805]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:19:03 compute-1 sudo[307805]: pam_unix(sudo:session): session closed for user root
Jan 20 15:19:03 compute-1 nova_compute[225855]: 2026-01-20 15:19:03.054 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:19:03 compute-1 sudo[307830]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:19:03 compute-1 sudo[307830]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:19:03 compute-1 sudo[307830]: pam_unix(sudo:session): session closed for user root
Jan 20 15:19:03 compute-1 podman[307854]: 2026-01-20 15:19:03.165625213 +0000 UTC m=+0.052380249 container health_status 533cd8e0c3b8b7f910f15cef49e7f879ab46c12ce646a35628c15f13725eb500 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true)
Jan 20 15:19:03 compute-1 ceph-mon[81775]: pgmap v2903: 321 pgs: 321 active+clean; 279 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.0 MiB/s rd, 2.1 MiB/s wr, 266 op/s
Jan 20 15:19:03 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/817734252' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:19:03 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:19:03 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:19:03 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:19:03.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:19:03 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e415 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:19:03 compute-1 nova_compute[225855]: 2026-01-20 15:19:03.460 225859 DEBUG nova.network.neutron [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] [instance: 2b31f3d7-81bd-4712-bcb1-98afd2dc0f44] Updating instance_info_cache with network_info: [{"id": "bd99d3a5-54e0-4e70-9a02-3543631281a6", "address": "fa:16:3e:44:23:fd", "network": {"id": "76c2d716-7d14-4bc1-b83b-a3290ee99d9a", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-782760714-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.245", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9156c0a9920c4721843416b9a44404f9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbd99d3a5-54", "ovs_interfaceid": "bd99d3a5-54e0-4e70-9a02-3543631281a6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:19:03 compute-1 nova_compute[225855]: 2026-01-20 15:19:03.482 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Releasing lock "refresh_cache-2b31f3d7-81bd-4712-bcb1-98afd2dc0f44" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:19:03 compute-1 nova_compute[225855]: 2026-01-20 15:19:03.483 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] [instance: 2b31f3d7-81bd-4712-bcb1-98afd2dc0f44] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 20 15:19:03 compute-1 nova_compute[225855]: 2026-01-20 15:19:03.483 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:19:03 compute-1 nova_compute[225855]: 2026-01-20 15:19:03.484 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:19:03 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:19:03 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:19:03 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:19:03.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:19:04 compute-1 ceph-mon[81775]: pgmap v2904: 321 pgs: 321 active+clean; 279 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 950 KiB/s rd, 2.1 MiB/s wr, 258 op/s
Jan 20 15:19:05 compute-1 nova_compute[225855]: 2026-01-20 15:19:05.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:19:05 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:19:05 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:19:05 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:19:05.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:19:05 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:19:05 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:19:05 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:19:05.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:19:06 compute-1 nova_compute[225855]: 2026-01-20 15:19:06.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:19:06 compute-1 nova_compute[225855]: 2026-01-20 15:19:06.340 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:19:06 compute-1 nova_compute[225855]: 2026-01-20 15:19:06.340 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:19:06 compute-1 nova_compute[225855]: 2026-01-20 15:19:06.391 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:19:06 compute-1 nova_compute[225855]: 2026-01-20 15:19:06.391 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:19:06 compute-1 nova_compute[225855]: 2026-01-20 15:19:06.391 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:19:06 compute-1 nova_compute[225855]: 2026-01-20 15:19:06.391 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 15:19:06 compute-1 nova_compute[225855]: 2026-01-20 15:19:06.391 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:19:06 compute-1 ceph-mon[81775]: pgmap v2905: 321 pgs: 321 active+clean; 281 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 910 KiB/s rd, 2.2 MiB/s wr, 192 op/s
Jan 20 15:19:06 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:19:06 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3146294823' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:19:06 compute-1 nova_compute[225855]: 2026-01-20 15:19:06.820 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:19:06 compute-1 nova_compute[225855]: 2026-01-20 15:19:06.894 225859 DEBUG nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] skipping disk for instance-000000c2 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 15:19:06 compute-1 nova_compute[225855]: 2026-01-20 15:19:06.895 225859 DEBUG nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] skipping disk for instance-000000c2 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 15:19:07 compute-1 nova_compute[225855]: 2026-01-20 15:19:07.038 225859 DEBUG oslo_concurrency.lockutils [None req-1efd3742-3065-40d0-a299-80c5de58551f cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Acquiring lock "2b31f3d7-81bd-4712-bcb1-98afd2dc0f44" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:19:07 compute-1 nova_compute[225855]: 2026-01-20 15:19:07.039 225859 DEBUG oslo_concurrency.lockutils [None req-1efd3742-3065-40d0-a299-80c5de58551f cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Lock "2b31f3d7-81bd-4712-bcb1-98afd2dc0f44" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:19:07 compute-1 nova_compute[225855]: 2026-01-20 15:19:07.043 225859 WARNING nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 15:19:07 compute-1 nova_compute[225855]: 2026-01-20 15:19:07.045 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4077MB free_disk=20.897098541259766GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 15:19:07 compute-1 nova_compute[225855]: 2026-01-20 15:19:07.045 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:19:07 compute-1 nova_compute[225855]: 2026-01-20 15:19:07.045 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:19:07 compute-1 nova_compute[225855]: 2026-01-20 15:19:07.053 225859 DEBUG nova.objects.instance [None req-1efd3742-3065-40d0-a299-80c5de58551f cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Lazy-loading 'flavor' on Instance uuid 2b31f3d7-81bd-4712-bcb1-98afd2dc0f44 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:19:07 compute-1 nova_compute[225855]: 2026-01-20 15:19:07.089 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:19:07 compute-1 nova_compute[225855]: 2026-01-20 15:19:07.112 225859 DEBUG oslo_concurrency.lockutils [None req-1efd3742-3065-40d0-a299-80c5de58551f cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Lock "2b31f3d7-81bd-4712-bcb1-98afd2dc0f44" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.073s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:19:07 compute-1 nova_compute[225855]: 2026-01-20 15:19:07.169 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Instance 2b31f3d7-81bd-4712-bcb1-98afd2dc0f44 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 20 15:19:07 compute-1 nova_compute[225855]: 2026-01-20 15:19:07.170 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 15:19:07 compute-1 nova_compute[225855]: 2026-01-20 15:19:07.170 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 15:19:07 compute-1 nova_compute[225855]: 2026-01-20 15:19:07.196 225859 DEBUG nova.scheduler.client.report [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Refreshing inventories for resource provider bbb02880-a710-4ac1-8b2c-5c09765848d1 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 20 15:19:07 compute-1 nova_compute[225855]: 2026-01-20 15:19:07.240 225859 DEBUG nova.scheduler.client.report [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Updating ProviderTree inventory for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 20 15:19:07 compute-1 nova_compute[225855]: 2026-01-20 15:19:07.241 225859 DEBUG nova.compute.provider_tree [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Updating inventory in ProviderTree for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 20 15:19:07 compute-1 nova_compute[225855]: 2026-01-20 15:19:07.281 225859 DEBUG nova.scheduler.client.report [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Refreshing aggregate associations for resource provider bbb02880-a710-4ac1-8b2c-5c09765848d1, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 20 15:19:07 compute-1 nova_compute[225855]: 2026-01-20 15:19:07.334 225859 DEBUG nova.scheduler.client.report [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Refreshing trait associations for resource provider bbb02880-a710-4ac1-8b2c-5c09765848d1, traits: COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_STORAGE_BUS_SATA,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_TRUSTED_CERTS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSSE3,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_VOLUME_EXTEND,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_RESCUE_BFV,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_USB,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE42,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SOCKET_PCI_NUMA_AFFINITY _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 20 15:19:07 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:19:07 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:19:07 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:19:07.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:19:07 compute-1 nova_compute[225855]: 2026-01-20 15:19:07.417 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:19:07 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:19:07 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:19:07 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:19:07.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:19:07 compute-1 nova_compute[225855]: 2026-01-20 15:19:07.607 225859 DEBUG oslo_concurrency.lockutils [None req-1efd3742-3065-40d0-a299-80c5de58551f cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Acquiring lock "2b31f3d7-81bd-4712-bcb1-98afd2dc0f44" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:19:07 compute-1 nova_compute[225855]: 2026-01-20 15:19:07.608 225859 DEBUG oslo_concurrency.lockutils [None req-1efd3742-3065-40d0-a299-80c5de58551f cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Lock "2b31f3d7-81bd-4712-bcb1-98afd2dc0f44" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:19:07 compute-1 nova_compute[225855]: 2026-01-20 15:19:07.609 225859 INFO nova.compute.manager [None req-1efd3742-3065-40d0-a299-80c5de58551f cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] [instance: 2b31f3d7-81bd-4712-bcb1-98afd2dc0f44] Attaching volume 9f3cabb8-d51f-4db9-97b3-c5b764893ee2 to /dev/vdb
Jan 20 15:19:07 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/3146294823' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:19:07 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/3414621009' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:19:07 compute-1 nova_compute[225855]: 2026-01-20 15:19:07.840 225859 DEBUG os_brick.utils [None req-1efd3742-3065-40d0-a299-80c5de58551f cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.101', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-1.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Jan 20 15:19:07 compute-1 nova_compute[225855]: 2026-01-20 15:19:07.842 231081 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:19:07 compute-1 nova_compute[225855]: 2026-01-20 15:19:07.853 231081 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:19:07 compute-1 nova_compute[225855]: 2026-01-20 15:19:07.853 231081 DEBUG oslo.privsep.daemon [-] privsep: reply[508418f6-bd0c-4f63-9106-0a2b71094171]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:19:07 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:19:07 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/222608440' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:19:07 compute-1 nova_compute[225855]: 2026-01-20 15:19:07.855 231081 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:19:07 compute-1 nova_compute[225855]: 2026-01-20 15:19:07.863 231081 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:19:07 compute-1 nova_compute[225855]: 2026-01-20 15:19:07.863 231081 DEBUG oslo.privsep.daemon [-] privsep: reply[1c2f6834-4335-4dda-b7cb-09424e226a92]: (4, ('InitiatorName=iqn.1994-05.com.redhat:1821ea3dc03d', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:19:07 compute-1 nova_compute[225855]: 2026-01-20 15:19:07.865 231081 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:19:07 compute-1 nova_compute[225855]: 2026-01-20 15:19:07.874 231081 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:19:07 compute-1 nova_compute[225855]: 2026-01-20 15:19:07.874 231081 DEBUG oslo.privsep.daemon [-] privsep: reply[17759ec5-00cc-49fc-8bab-7dd2d493eaf7]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:19:07 compute-1 nova_compute[225855]: 2026-01-20 15:19:07.876 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:19:07 compute-1 nova_compute[225855]: 2026-01-20 15:19:07.876 231081 DEBUG oslo.privsep.daemon [-] privsep: reply[0e775201-dea4-41eb-a5ef-6b1709ec86dd]: (4, '870b1f1c-f19c-477b-b282-ee6eeba50974') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:19:07 compute-1 nova_compute[225855]: 2026-01-20 15:19:07.877 225859 DEBUG oslo_concurrency.processutils [None req-1efd3742-3065-40d0-a299-80c5de58551f cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:19:07 compute-1 nova_compute[225855]: 2026-01-20 15:19:07.906 225859 DEBUG oslo_concurrency.processutils [None req-1efd3742-3065-40d0-a299-80c5de58551f cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] CMD "nvme version" returned: 0 in 0.029s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:19:07 compute-1 nova_compute[225855]: 2026-01-20 15:19:07.910 225859 DEBUG os_brick.initiator.connectors.lightos [None req-1efd3742-3065-40d0-a299-80c5de58551f cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Jan 20 15:19:07 compute-1 nova_compute[225855]: 2026-01-20 15:19:07.910 225859 DEBUG os_brick.initiator.connectors.lightos [None req-1efd3742-3065-40d0-a299-80c5de58551f cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Jan 20 15:19:07 compute-1 nova_compute[225855]: 2026-01-20 15:19:07.911 225859 DEBUG os_brick.initiator.connectors.lightos [None req-1efd3742-3065-40d0-a299-80c5de58551f cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:5350774e-8b5e-4dba-80a9-92d405981c1d dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Jan 20 15:19:07 compute-1 nova_compute[225855]: 2026-01-20 15:19:07.911 225859 DEBUG os_brick.utils [None req-1efd3742-3065-40d0-a299-80c5de58551f cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] <== get_connector_properties: return (69ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.101', 'host': 'compute-1.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:1821ea3dc03d', 'do_local_attach': False, 'nvme_hostid': '5350774e-8b5e-4dba-80a9-92d405981c1d', 'system uuid': '870b1f1c-f19c-477b-b282-ee6eeba50974', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:5350774e-8b5e-4dba-80a9-92d405981c1d', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Jan 20 15:19:07 compute-1 nova_compute[225855]: 2026-01-20 15:19:07.911 225859 DEBUG nova.virt.block_device [None req-1efd3742-3065-40d0-a299-80c5de58551f cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] [instance: 2b31f3d7-81bd-4712-bcb1-98afd2dc0f44] Updating existing volume attachment record: 42dce34d-a725-459d-9faf-f052d4783cbb _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Jan 20 15:19:07 compute-1 nova_compute[225855]: 2026-01-20 15:19:07.917 225859 DEBUG nova.compute.provider_tree [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:19:07 compute-1 nova_compute[225855]: 2026-01-20 15:19:07.950 225859 DEBUG nova.scheduler.client.report [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:19:08 compute-1 nova_compute[225855]: 2026-01-20 15:19:08.013 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 15:19:08 compute-1 nova_compute[225855]: 2026-01-20 15:19:08.013 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.968s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:19:08 compute-1 nova_compute[225855]: 2026-01-20 15:19:08.014 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:19:08 compute-1 nova_compute[225855]: 2026-01-20 15:19:08.057 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:19:08 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e415 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:19:08 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/222608440' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:19:08 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/750879127' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:19:08 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/3736625337' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:19:08 compute-1 ceph-mon[81775]: pgmap v2906: 321 pgs: 321 active+clean; 281 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 861 KiB/s rd, 2.2 MiB/s wr, 110 op/s
Jan 20 15:19:08 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/3200555813' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:19:08 compute-1 nova_compute[225855]: 2026-01-20 15:19:08.876 225859 DEBUG nova.objects.instance [None req-1efd3742-3065-40d0-a299-80c5de58551f cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Lazy-loading 'flavor' on Instance uuid 2b31f3d7-81bd-4712-bcb1-98afd2dc0f44 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:19:08 compute-1 nova_compute[225855]: 2026-01-20 15:19:08.899 225859 DEBUG nova.virt.libvirt.driver [None req-1efd3742-3065-40d0-a299-80c5de58551f cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] [instance: 2b31f3d7-81bd-4712-bcb1-98afd2dc0f44] Attempting to attach volume 9f3cabb8-d51f-4db9-97b3-c5b764893ee2 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
Jan 20 15:19:08 compute-1 nova_compute[225855]: 2026-01-20 15:19:08.902 225859 DEBUG nova.virt.libvirt.guest [None req-1efd3742-3065-40d0-a299-80c5de58551f cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] attach device xml: <disk type="network" device="disk">
Jan 20 15:19:08 compute-1 nova_compute[225855]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 20 15:19:08 compute-1 nova_compute[225855]:   <source protocol="rbd" name="volumes/volume-9f3cabb8-d51f-4db9-97b3-c5b764893ee2">
Jan 20 15:19:08 compute-1 nova_compute[225855]:     <host name="192.168.122.100" port="6789"/>
Jan 20 15:19:08 compute-1 nova_compute[225855]:     <host name="192.168.122.102" port="6789"/>
Jan 20 15:19:08 compute-1 nova_compute[225855]:     <host name="192.168.122.101" port="6789"/>
Jan 20 15:19:08 compute-1 nova_compute[225855]:   </source>
Jan 20 15:19:08 compute-1 nova_compute[225855]:   <auth username="openstack">
Jan 20 15:19:08 compute-1 nova_compute[225855]:     <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 15:19:08 compute-1 nova_compute[225855]:   </auth>
Jan 20 15:19:08 compute-1 nova_compute[225855]:   <target dev="vdb" bus="virtio"/>
Jan 20 15:19:08 compute-1 nova_compute[225855]:   <serial>9f3cabb8-d51f-4db9-97b3-c5b764893ee2</serial>
Jan 20 15:19:08 compute-1 nova_compute[225855]: </disk>
Jan 20 15:19:08 compute-1 nova_compute[225855]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Jan 20 15:19:09 compute-1 nova_compute[225855]: 2026-01-20 15:19:09.026 225859 DEBUG nova.virt.libvirt.driver [None req-1efd3742-3065-40d0-a299-80c5de58551f cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 15:19:09 compute-1 nova_compute[225855]: 2026-01-20 15:19:09.026 225859 DEBUG nova.virt.libvirt.driver [None req-1efd3742-3065-40d0-a299-80c5de58551f cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 15:19:09 compute-1 nova_compute[225855]: 2026-01-20 15:19:09.027 225859 DEBUG nova.virt.libvirt.driver [None req-1efd3742-3065-40d0-a299-80c5de58551f cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 15:19:09 compute-1 nova_compute[225855]: 2026-01-20 15:19:09.027 225859 DEBUG nova.virt.libvirt.driver [None req-1efd3742-3065-40d0-a299-80c5de58551f cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] No VIF found with MAC fa:16:3e:44:23:fd, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 20 15:19:09 compute-1 nova_compute[225855]: 2026-01-20 15:19:09.303 225859 DEBUG oslo_concurrency.lockutils [None req-1efd3742-3065-40d0-a299-80c5de58551f cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Lock "2b31f3d7-81bd-4712-bcb1-98afd2dc0f44" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 1.694s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:19:09 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:19:09 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:19:09 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:19:09.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:19:09 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:19:09 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:19:09 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:19:09.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:19:10 compute-1 sudo[307952]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:19:10 compute-1 sudo[307952]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:19:10 compute-1 sudo[307952]: pam_unix(sudo:session): session closed for user root
Jan 20 15:19:10 compute-1 sudo[307977]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:19:10 compute-1 sudo[307977]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:19:10 compute-1 sudo[307977]: pam_unix(sudo:session): session closed for user root
Jan 20 15:19:10 compute-1 sudo[308002]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:19:10 compute-1 sudo[308002]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:19:10 compute-1 sudo[308002]: pam_unix(sudo:session): session closed for user root
Jan 20 15:19:10 compute-1 sudo[308027]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 20 15:19:10 compute-1 sudo[308027]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:19:10 compute-1 ceph-mon[81775]: pgmap v2907: 321 pgs: 321 active+clean; 222 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 874 KiB/s rd, 2.2 MiB/s wr, 128 op/s
Jan 20 15:19:10 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:19:10 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:19:11 compute-1 sudo[308027]: pam_unix(sudo:session): session closed for user root
Jan 20 15:19:11 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:19:11 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:19:11 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:19:11.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:19:11 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:19:11 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:19:11 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:19:11.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:19:11 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:19:11 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 15:19:11 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:19:11 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 15:19:11 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 15:19:11 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:19:12 compute-1 nova_compute[225855]: 2026-01-20 15:19:12.091 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:19:12 compute-1 ceph-mon[81775]: pgmap v2908: 321 pgs: 321 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 268 KiB/s rd, 654 KiB/s wr, 75 op/s
Jan 20 15:19:13 compute-1 nova_compute[225855]: 2026-01-20 15:19:13.038 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:19:13 compute-1 nova_compute[225855]: 2026-01-20 15:19:13.058 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:19:13 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:19:13 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:19:13 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:19:13.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:19:13 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e415 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:19:13 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:19:13 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:19:13 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:19:13.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:19:13 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/2789919308' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:19:13 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/2789919308' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:19:14 compute-1 ceph-mon[81775]: pgmap v2909: 321 pgs: 321 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 24 KiB/s rd, 24 KiB/s wr, 32 op/s
Jan 20 15:19:15 compute-1 ovn_controller[130490]: 2026-01-20T15:19:15Z|00869|binding|INFO|Releasing lport 2c0bba0e-e9b6-4ece-8349-62642b94d91d from this chassis (sb_readonly=0)
Jan 20 15:19:15 compute-1 nova_compute[225855]: 2026-01-20 15:19:15.102 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:19:15 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:19:15 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:19:15 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:19:15.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:19:15 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:19:15 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:19:15 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:19:15.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:19:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:19:16.440 140354 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:19:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:19:16.440 140354 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:19:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:19:16.441 140354 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:19:17 compute-1 ceph-mon[81775]: pgmap v2910: 321 pgs: 321 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 24 KiB/s rd, 26 KiB/s wr, 32 op/s
Jan 20 15:19:17 compute-1 nova_compute[225855]: 2026-01-20 15:19:17.119 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:19:17 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:19:17 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:19:17 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:19:17.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:19:17 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:19:17 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:19:17 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:19:17.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:19:17 compute-1 sudo[308085]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:19:17 compute-1 sudo[308085]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:19:17 compute-1 sudo[308085]: pam_unix(sudo:session): session closed for user root
Jan 20 15:19:17 compute-1 sudo[308111]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 15:19:17 compute-1 sudo[308111]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:19:17 compute-1 sudo[308111]: pam_unix(sudo:session): session closed for user root
Jan 20 15:19:18 compute-1 nova_compute[225855]: 2026-01-20 15:19:18.060 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:19:18 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e415 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:19:18 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:19:18 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:19:19 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:19:19 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:19:19 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:19:19.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:19:19 compute-1 ceph-mon[81775]: pgmap v2911: 321 pgs: 321 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 23 KiB/s rd, 2.9 KiB/s wr, 30 op/s
Jan 20 15:19:19 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/2879300125' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:19:19 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:19:19 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:19:19 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:19:19.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:19:20 compute-1 podman[308137]: 2026-01-20 15:19:20.050845247 +0000 UTC m=+0.085896795 container health_status 72be95c12e041eb2cd1a16ad70dfa1391d9f03a5f3180a614ead9459dcfcd46f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 20 15:19:20 compute-1 ceph-mon[81775]: pgmap v2912: 321 pgs: 321 active+clean; 214 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 23 KiB/s rd, 699 KiB/s wr, 33 op/s
Jan 20 15:19:21 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:19:21 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:19:21 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:19:21.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:19:21 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:19:21 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:19:21 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:19:21.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:19:22 compute-1 nova_compute[225855]: 2026-01-20 15:19:22.121 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:19:22 compute-1 nova_compute[225855]: 2026-01-20 15:19:22.335 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:19:22 compute-1 ceph-mon[81775]: pgmap v2913: 321 pgs: 321 active+clean; 221 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 20 KiB/s rd, 948 KiB/s wr, 27 op/s
Jan 20 15:19:23 compute-1 nova_compute[225855]: 2026-01-20 15:19:23.062 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:19:23 compute-1 sudo[308165]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:19:23 compute-1 sudo[308165]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:19:23 compute-1 sudo[308165]: pam_unix(sudo:session): session closed for user root
Jan 20 15:19:23 compute-1 sudo[308190]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:19:23 compute-1 sudo[308190]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:19:23 compute-1 sudo[308190]: pam_unix(sudo:session): session closed for user root
Jan 20 15:19:23 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:19:23 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:19:23 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:19:23.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:19:23 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e415 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:19:23 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:19:23 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:19:23 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:19:23.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:19:25 compute-1 ceph-mon[81775]: pgmap v2914: 321 pgs: 321 active+clean; 241 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.3 MiB/s wr, 26 op/s
Jan 20 15:19:25 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:19:25 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:19:25 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:19:25.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:19:25 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:19:25 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:19:25 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:19:25.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:19:26 compute-1 nova_compute[225855]: 2026-01-20 15:19:26.484 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:19:26 compute-1 ceph-mon[81775]: pgmap v2915: 321 pgs: 321 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 20 15:19:27 compute-1 nova_compute[225855]: 2026-01-20 15:19:27.122 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:19:27 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:19:27 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:19:27 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:19:27.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:19:27 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:19:27 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:19:27 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:19:27.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:19:27 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/774464195' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:19:28 compute-1 nova_compute[225855]: 2026-01-20 15:19:28.063 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:19:28 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e415 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:19:28 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/3252354878' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:19:28 compute-1 ceph-mon[81775]: pgmap v2916: 321 pgs: 321 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 20 15:19:29 compute-1 nova_compute[225855]: 2026-01-20 15:19:29.340 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:19:29 compute-1 nova_compute[225855]: 2026-01-20 15:19:29.341 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 20 15:19:29 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:19:29 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:19:29 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:19:29.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:19:29 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:19:29 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:19:29 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:19:29.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:19:30 compute-1 ceph-mon[81775]: pgmap v2917: 321 pgs: 321 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 20 15:19:31 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:19:31 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:19:31 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:19:31.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:19:31 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:19:31 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:19:31 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:19:31.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:19:32 compute-1 nova_compute[225855]: 2026-01-20 15:19:32.124 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:19:32 compute-1 ceph-mon[81775]: pgmap v2918: 321 pgs: 321 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.1 MiB/s wr, 26 op/s
Jan 20 15:19:33 compute-1 nova_compute[225855]: 2026-01-20 15:19:33.105 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:19:33 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e415 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:19:33 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:19:33 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:19:33 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:19:33.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:19:33 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:19:33 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:19:33 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:19:33.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:19:33 compute-1 systemd[1]: Starting dnf makecache...
Jan 20 15:19:34 compute-1 podman[308221]: 2026-01-20 15:19:34.043145881 +0000 UTC m=+0.082358245 container health_status 533cd8e0c3b8b7f910f15cef49e7f879ab46c12ce646a35628c15f13725eb500 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS)
Jan 20 15:19:34 compute-1 dnf[308222]: Metadata cache refreshed recently.
Jan 20 15:19:34 compute-1 systemd[1]: dnf-makecache.service: Deactivated successfully.
Jan 20 15:19:34 compute-1 systemd[1]: Finished dnf makecache.
Jan 20 15:19:34 compute-1 ceph-mon[81775]: pgmap v2919: 321 pgs: 321 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 7.2 KiB/s rd, 887 KiB/s wr, 14 op/s
Jan 20 15:19:35 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:19:35 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:19:35 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:19:35.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:19:35 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:19:35 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:19:35 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:19:35.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:19:36 compute-1 ceph-mon[81775]: pgmap v2920: 321 pgs: 321 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 389 KiB/s rd, 489 KiB/s wr, 24 op/s
Jan 20 15:19:37 compute-1 nova_compute[225855]: 2026-01-20 15:19:37.100 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:19:37 compute-1 nova_compute[225855]: 2026-01-20 15:19:37.125 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:19:37 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:19:37 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:19:37 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:19:37.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:19:37 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:19:37 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:19:37 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:19:37.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:19:37 compute-1 nova_compute[225855]: 2026-01-20 15:19:37.979 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:19:38 compute-1 nova_compute[225855]: 2026-01-20 15:19:38.108 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:19:38 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e415 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:19:38 compute-1 ceph-mon[81775]: pgmap v2921: 321 pgs: 321 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 389 KiB/s rd, 17 KiB/s wr, 23 op/s
Jan 20 15:19:39 compute-1 nova_compute[225855]: 2026-01-20 15:19:39.340 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:19:39 compute-1 nova_compute[225855]: 2026-01-20 15:19:39.340 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 20 15:19:39 compute-1 nova_compute[225855]: 2026-01-20 15:19:39.399 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 20 15:19:39 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:19:39 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:19:39 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:19:39.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:19:39 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:19:39 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:19:39 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:19:39.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:19:40 compute-1 ceph-mon[81775]: pgmap v2922: 321 pgs: 321 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 17 KiB/s wr, 74 op/s
Jan 20 15:19:41 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:19:41 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:19:41 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:19:41.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:19:41 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:19:41 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:19:41 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:19:41.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:19:42 compute-1 nova_compute[225855]: 2026-01-20 15:19:42.126 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:19:42 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/2107288519' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:19:43 compute-1 nova_compute[225855]: 2026-01-20 15:19:43.112 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:19:43 compute-1 sudo[308245]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:19:43 compute-1 sudo[308245]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:19:43 compute-1 sudo[308245]: pam_unix(sudo:session): session closed for user root
Jan 20 15:19:43 compute-1 sudo[308270]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:19:43 compute-1 sudo[308270]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:19:43 compute-1 sudo[308270]: pam_unix(sudo:session): session closed for user root
Jan 20 15:19:43 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e415 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:19:43 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:19:43 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:19:43 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:19:43.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:19:43 compute-1 ceph-mon[81775]: pgmap v2923: 321 pgs: 321 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 73 op/s
Jan 20 15:19:43 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:19:43 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:19:43 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:19:43.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:19:44 compute-1 ceph-mon[81775]: pgmap v2924: 321 pgs: 321 active+clean; 254 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 68 KiB/s wr, 84 op/s
Jan 20 15:19:45 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:19:45.197 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=65, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '12:bb:42', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '06:92:24:f7:15:56'}, ipsec=False) old=SB_Global(nb_cfg=64) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:19:45 compute-1 nova_compute[225855]: 2026-01-20 15:19:45.197 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:19:45 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:19:45.198 140354 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 20 15:19:45 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:19:45 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:19:45 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:19:45.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:19:45 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:19:45 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:19:45 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:19:45.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:19:45 compute-1 nova_compute[225855]: 2026-01-20 15:19:45.947 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:19:45 compute-1 nova_compute[225855]: 2026-01-20 15:19:45.974 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Triggering sync for uuid 2b31f3d7-81bd-4712-bcb1-98afd2dc0f44 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Jan 20 15:19:45 compute-1 nova_compute[225855]: 2026-01-20 15:19:45.974 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "2b31f3d7-81bd-4712-bcb1-98afd2dc0f44" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:19:45 compute-1 nova_compute[225855]: 2026-01-20 15:19:45.974 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "2b31f3d7-81bd-4712-bcb1-98afd2dc0f44" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:19:46 compute-1 nova_compute[225855]: 2026-01-20 15:19:46.170 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "2b31f3d7-81bd-4712-bcb1-98afd2dc0f44" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.196s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:19:47 compute-1 nova_compute[225855]: 2026-01-20 15:19:47.128 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:19:47 compute-1 ceph-mon[81775]: pgmap v2925: 321 pgs: 321 active+clean; 293 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 99 op/s
Jan 20 15:19:47 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:19:47 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:19:47 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:19:47.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:19:47 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:19:47 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:19:47 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:19:47.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:19:48 compute-1 nova_compute[225855]: 2026-01-20 15:19:48.115 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:19:48 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e415 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:19:48 compute-1 ceph-mon[81775]: pgmap v2926: 321 pgs: 321 active+clean; 293 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 1.8 MiB/s wr, 77 op/s
Jan 20 15:19:49 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:19:49 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:19:49 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:19:49.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:19:49 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/3438431235' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:19:49 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:19:49 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:19:49 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:19:49.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:19:50 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/3654454720' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:19:50 compute-1 ceph-mon[81775]: pgmap v2927: 321 pgs: 321 active+clean; 312 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 3.4 MiB/s wr, 124 op/s
Jan 20 15:19:51 compute-1 podman[308300]: 2026-01-20 15:19:51.039705312 +0000 UTC m=+0.083154038 container health_status 72be95c12e041eb2cd1a16ad70dfa1391d9f03a5f3180a614ead9459dcfcd46f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Jan 20 15:19:51 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:19:51.199 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5ffd4ac3-9266-4927-98ad-20a17782c725, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '65'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:19:51 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:19:51 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:19:51 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:19:51.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:19:51 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:19:51 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:19:51 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:19:51.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:19:52 compute-1 nova_compute[225855]: 2026-01-20 15:19:52.129 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:19:53 compute-1 nova_compute[225855]: 2026-01-20 15:19:53.122 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:19:53 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e415 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:19:53 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:19:53 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:19:53 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:19:53.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:19:53 compute-1 ceph-mon[81775]: pgmap v2928: 321 pgs: 321 active+clean; 325 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 346 KiB/s rd, 3.9 MiB/s wr, 92 op/s
Jan 20 15:19:53 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:19:53 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:19:53 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:19:53.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:19:54 compute-1 ceph-mon[81775]: pgmap v2929: 321 pgs: 321 active+clean; 325 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 346 KiB/s rd, 3.9 MiB/s wr, 92 op/s
Jan 20 15:19:55 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:19:55 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:19:55 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:19:55.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:19:55 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:19:55 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:19:55 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:19:55.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:19:57 compute-1 nova_compute[225855]: 2026-01-20 15:19:57.131 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:19:57 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:19:57 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:19:57 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:19:57.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:19:57 compute-1 ceph-mon[81775]: pgmap v2930: 321 pgs: 321 active+clean; 326 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 393 KiB/s rd, 3.9 MiB/s wr, 91 op/s
Jan 20 15:19:57 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:19:57 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:19:57 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:19:57.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:19:58 compute-1 nova_compute[225855]: 2026-01-20 15:19:58.125 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:19:58 compute-1 nova_compute[225855]: 2026-01-20 15:19:58.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:19:58 compute-1 nova_compute[225855]: 2026-01-20 15:19:58.339 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 15:19:58 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e415 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:19:58 compute-1 ceph-mon[81775]: pgmap v2931: 321 pgs: 321 active+clean; 326 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 385 KiB/s rd, 2.2 MiB/s wr, 75 op/s
Jan 20 15:19:59 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:19:59 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:19:59 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:19:59.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:19:59 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:19:59 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:19:59 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:19:59.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:20:00 compute-1 ceph-mon[81775]: overall HEALTH_OK
Jan 20 15:20:01 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:20:01 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:20:01 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:20:01.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:20:01 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/18900664' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:20:01 compute-1 ceph-mon[81775]: pgmap v2932: 321 pgs: 321 active+clean; 267 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 2.2 MiB/s wr, 125 op/s
Jan 20 15:20:01 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:20:01 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:20:01 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:20:01.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:20:02 compute-1 nova_compute[225855]: 2026-01-20 15:20:02.133 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:20:03 compute-1 nova_compute[225855]: 2026-01-20 15:20:03.127 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:20:03 compute-1 ceph-mon[81775]: pgmap v2933: 321 pgs: 321 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 593 KiB/s wr, 121 op/s
Jan 20 15:20:03 compute-1 nova_compute[225855]: 2026-01-20 15:20:03.231 225859 DEBUG oslo_concurrency.lockutils [None req-5ff466c2-d750-4ab2-aa61-5cf4fd377c32 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Acquiring lock "2b31f3d7-81bd-4712-bcb1-98afd2dc0f44" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:20:03 compute-1 nova_compute[225855]: 2026-01-20 15:20:03.232 225859 DEBUG oslo_concurrency.lockutils [None req-5ff466c2-d750-4ab2-aa61-5cf4fd377c32 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Lock "2b31f3d7-81bd-4712-bcb1-98afd2dc0f44" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:20:03 compute-1 nova_compute[225855]: 2026-01-20 15:20:03.246 225859 INFO nova.compute.manager [None req-5ff466c2-d750-4ab2-aa61-5cf4fd377c32 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] [instance: 2b31f3d7-81bd-4712-bcb1-98afd2dc0f44] Detaching volume 9f3cabb8-d51f-4db9-97b3-c5b764893ee2
Jan 20 15:20:03 compute-1 nova_compute[225855]: 2026-01-20 15:20:03.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:20:03 compute-1 nova_compute[225855]: 2026-01-20 15:20:03.340 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 15:20:03 compute-1 nova_compute[225855]: 2026-01-20 15:20:03.340 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 20 15:20:03 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e415 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:20:03 compute-1 nova_compute[225855]: 2026-01-20 15:20:03.418 225859 INFO nova.virt.block_device [None req-5ff466c2-d750-4ab2-aa61-5cf4fd377c32 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] [instance: 2b31f3d7-81bd-4712-bcb1-98afd2dc0f44] Attempting to driver detach volume 9f3cabb8-d51f-4db9-97b3-c5b764893ee2 from mountpoint /dev/vdb
Jan 20 15:20:03 compute-1 nova_compute[225855]: 2026-01-20 15:20:03.427 225859 DEBUG nova.virt.libvirt.driver [None req-5ff466c2-d750-4ab2-aa61-5cf4fd377c32 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Attempting to detach device vdb from instance 2b31f3d7-81bd-4712-bcb1-98afd2dc0f44 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Jan 20 15:20:03 compute-1 nova_compute[225855]: 2026-01-20 15:20:03.427 225859 DEBUG nova.virt.libvirt.guest [None req-5ff466c2-d750-4ab2-aa61-5cf4fd377c32 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] detach device xml: <disk type="network" device="disk">
Jan 20 15:20:03 compute-1 nova_compute[225855]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 20 15:20:03 compute-1 nova_compute[225855]:   <source protocol="rbd" name="volumes/volume-9f3cabb8-d51f-4db9-97b3-c5b764893ee2">
Jan 20 15:20:03 compute-1 nova_compute[225855]:     <host name="192.168.122.100" port="6789"/>
Jan 20 15:20:03 compute-1 nova_compute[225855]:     <host name="192.168.122.102" port="6789"/>
Jan 20 15:20:03 compute-1 nova_compute[225855]:     <host name="192.168.122.101" port="6789"/>
Jan 20 15:20:03 compute-1 nova_compute[225855]:   </source>
Jan 20 15:20:03 compute-1 nova_compute[225855]:   <target dev="vdb" bus="virtio"/>
Jan 20 15:20:03 compute-1 nova_compute[225855]:   <serial>9f3cabb8-d51f-4db9-97b3-c5b764893ee2</serial>
Jan 20 15:20:03 compute-1 nova_compute[225855]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Jan 20 15:20:03 compute-1 nova_compute[225855]: </disk>
Jan 20 15:20:03 compute-1 nova_compute[225855]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Jan 20 15:20:03 compute-1 nova_compute[225855]: 2026-01-20 15:20:03.434 225859 INFO nova.virt.libvirt.driver [None req-5ff466c2-d750-4ab2-aa61-5cf4fd377c32 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Successfully detached device vdb from instance 2b31f3d7-81bd-4712-bcb1-98afd2dc0f44 from the persistent domain config.
Jan 20 15:20:03 compute-1 nova_compute[225855]: 2026-01-20 15:20:03.435 225859 DEBUG nova.virt.libvirt.driver [None req-5ff466c2-d750-4ab2-aa61-5cf4fd377c32 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance 2b31f3d7-81bd-4712-bcb1-98afd2dc0f44 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Jan 20 15:20:03 compute-1 nova_compute[225855]: 2026-01-20 15:20:03.435 225859 DEBUG nova.virt.libvirt.guest [None req-5ff466c2-d750-4ab2-aa61-5cf4fd377c32 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] detach device xml: <disk type="network" device="disk">
Jan 20 15:20:03 compute-1 nova_compute[225855]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 20 15:20:03 compute-1 nova_compute[225855]:   <source protocol="rbd" name="volumes/volume-9f3cabb8-d51f-4db9-97b3-c5b764893ee2">
Jan 20 15:20:03 compute-1 nova_compute[225855]:     <host name="192.168.122.100" port="6789"/>
Jan 20 15:20:03 compute-1 nova_compute[225855]:     <host name="192.168.122.102" port="6789"/>
Jan 20 15:20:03 compute-1 nova_compute[225855]:     <host name="192.168.122.101" port="6789"/>
Jan 20 15:20:03 compute-1 nova_compute[225855]:   </source>
Jan 20 15:20:03 compute-1 nova_compute[225855]:   <target dev="vdb" bus="virtio"/>
Jan 20 15:20:03 compute-1 nova_compute[225855]:   <serial>9f3cabb8-d51f-4db9-97b3-c5b764893ee2</serial>
Jan 20 15:20:03 compute-1 nova_compute[225855]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Jan 20 15:20:03 compute-1 nova_compute[225855]: </disk>
Jan 20 15:20:03 compute-1 nova_compute[225855]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Jan 20 15:20:03 compute-1 sudo[308332]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:20:03 compute-1 sudo[308332]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:20:03 compute-1 sudo[308332]: pam_unix(sudo:session): session closed for user root
Jan 20 15:20:03 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:20:03 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:20:03 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:20:03.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:20:03 compute-1 nova_compute[225855]: 2026-01-20 15:20:03.485 225859 DEBUG nova.virt.libvirt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Received event <DeviceRemovedEvent: 1768922403.4855456, 2b31f3d7-81bd-4712-bcb1-98afd2dc0f44 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Jan 20 15:20:03 compute-1 nova_compute[225855]: 2026-01-20 15:20:03.488 225859 DEBUG nova.virt.libvirt.driver [None req-5ff466c2-d750-4ab2-aa61-5cf4fd377c32 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance 2b31f3d7-81bd-4712-bcb1-98afd2dc0f44 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Jan 20 15:20:03 compute-1 nova_compute[225855]: 2026-01-20 15:20:03.490 225859 INFO nova.virt.libvirt.driver [None req-5ff466c2-d750-4ab2-aa61-5cf4fd377c32 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Successfully detached device vdb from instance 2b31f3d7-81bd-4712-bcb1-98afd2dc0f44 from the live domain config.
Jan 20 15:20:03 compute-1 sudo[308357]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:20:03 compute-1 sudo[308357]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:20:03 compute-1 sudo[308357]: pam_unix(sudo:session): session closed for user root
Jan 20 15:20:03 compute-1 nova_compute[225855]: 2026-01-20 15:20:03.527 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "refresh_cache-2b31f3d7-81bd-4712-bcb1-98afd2dc0f44" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:20:03 compute-1 nova_compute[225855]: 2026-01-20 15:20:03.527 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquired lock "refresh_cache-2b31f3d7-81bd-4712-bcb1-98afd2dc0f44" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:20:03 compute-1 nova_compute[225855]: 2026-01-20 15:20:03.528 225859 DEBUG nova.network.neutron [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] [instance: 2b31f3d7-81bd-4712-bcb1-98afd2dc0f44] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 20 15:20:03 compute-1 nova_compute[225855]: 2026-01-20 15:20:03.528 225859 DEBUG nova.objects.instance [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 2b31f3d7-81bd-4712-bcb1-98afd2dc0f44 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:20:03 compute-1 nova_compute[225855]: 2026-01-20 15:20:03.642 225859 DEBUG nova.objects.instance [None req-5ff466c2-d750-4ab2-aa61-5cf4fd377c32 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Lazy-loading 'flavor' on Instance uuid 2b31f3d7-81bd-4712-bcb1-98afd2dc0f44 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:20:03 compute-1 nova_compute[225855]: 2026-01-20 15:20:03.686 225859 DEBUG oslo_concurrency.lockutils [None req-5ff466c2-d750-4ab2-aa61-5cf4fd377c32 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Lock "2b31f3d7-81bd-4712-bcb1-98afd2dc0f44" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.454s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:20:03 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:20:03 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:20:03 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:20:03.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:20:04 compute-1 nova_compute[225855]: 2026-01-20 15:20:04.680 225859 DEBUG oslo_concurrency.lockutils [None req-77153b7a-08dd-4f41-958b-db394c309e07 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Acquiring lock "2b31f3d7-81bd-4712-bcb1-98afd2dc0f44" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:20:04 compute-1 nova_compute[225855]: 2026-01-20 15:20:04.681 225859 DEBUG oslo_concurrency.lockutils [None req-77153b7a-08dd-4f41-958b-db394c309e07 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Lock "2b31f3d7-81bd-4712-bcb1-98afd2dc0f44" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:20:04 compute-1 nova_compute[225855]: 2026-01-20 15:20:04.681 225859 DEBUG oslo_concurrency.lockutils [None req-77153b7a-08dd-4f41-958b-db394c309e07 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Acquiring lock "2b31f3d7-81bd-4712-bcb1-98afd2dc0f44-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:20:04 compute-1 nova_compute[225855]: 2026-01-20 15:20:04.682 225859 DEBUG oslo_concurrency.lockutils [None req-77153b7a-08dd-4f41-958b-db394c309e07 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Lock "2b31f3d7-81bd-4712-bcb1-98afd2dc0f44-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:20:04 compute-1 nova_compute[225855]: 2026-01-20 15:20:04.682 225859 DEBUG oslo_concurrency.lockutils [None req-77153b7a-08dd-4f41-958b-db394c309e07 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Lock "2b31f3d7-81bd-4712-bcb1-98afd2dc0f44-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:20:04 compute-1 nova_compute[225855]: 2026-01-20 15:20:04.683 225859 INFO nova.compute.manager [None req-77153b7a-08dd-4f41-958b-db394c309e07 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] [instance: 2b31f3d7-81bd-4712-bcb1-98afd2dc0f44] Terminating instance
Jan 20 15:20:04 compute-1 nova_compute[225855]: 2026-01-20 15:20:04.684 225859 DEBUG nova.compute.manager [None req-77153b7a-08dd-4f41-958b-db394c309e07 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] [instance: 2b31f3d7-81bd-4712-bcb1-98afd2dc0f44] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 20 15:20:04 compute-1 nova_compute[225855]: 2026-01-20 15:20:04.727 225859 DEBUG nova.network.neutron [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] [instance: 2b31f3d7-81bd-4712-bcb1-98afd2dc0f44] Updating instance_info_cache with network_info: [{"id": "bd99d3a5-54e0-4e70-9a02-3543631281a6", "address": "fa:16:3e:44:23:fd", "network": {"id": "76c2d716-7d14-4bc1-b83b-a3290ee99d9a", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-782760714-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.245", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9156c0a9920c4721843416b9a44404f9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbd99d3a5-54", "ovs_interfaceid": "bd99d3a5-54e0-4e70-9a02-3543631281a6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:20:04 compute-1 kernel: tapbd99d3a5-54 (unregistering): left promiscuous mode
Jan 20 15:20:04 compute-1 NetworkManager[49104]: <info>  [1768922404.7457] device (tapbd99d3a5-54): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 20 15:20:04 compute-1 nova_compute[225855]: 2026-01-20 15:20:04.746 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Releasing lock "refresh_cache-2b31f3d7-81bd-4712-bcb1-98afd2dc0f44" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:20:04 compute-1 nova_compute[225855]: 2026-01-20 15:20:04.747 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] [instance: 2b31f3d7-81bd-4712-bcb1-98afd2dc0f44] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 20 15:20:04 compute-1 nova_compute[225855]: 2026-01-20 15:20:04.747 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:20:04 compute-1 nova_compute[225855]: 2026-01-20 15:20:04.753 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:20:04 compute-1 ovn_controller[130490]: 2026-01-20T15:20:04Z|00870|binding|INFO|Releasing lport bd99d3a5-54e0-4e70-9a02-3543631281a6 from this chassis (sb_readonly=0)
Jan 20 15:20:04 compute-1 ovn_controller[130490]: 2026-01-20T15:20:04Z|00871|binding|INFO|Setting lport bd99d3a5-54e0-4e70-9a02-3543631281a6 down in Southbound
Jan 20 15:20:04 compute-1 ovn_controller[130490]: 2026-01-20T15:20:04Z|00872|binding|INFO|Removing iface tapbd99d3a5-54 ovn-installed in OVS
Jan 20 15:20:04 compute-1 nova_compute[225855]: 2026-01-20 15:20:04.755 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:20:04 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:20:04.761 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:44:23:fd 10.100.0.13'], port_security=['fa:16:3e:44:23:fd 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '2b31f3d7-81bd-4712-bcb1-98afd2dc0f44', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-76c2d716-7d14-4bc1-b83b-a3290ee99d9a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9156c0a9920c4721843416b9a44404f9', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd945b282-623c-4da9-a940-ac04c971b57b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com', 'neutron:port_fip': '192.168.122.245'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2bfc4e2a-eeed-480e-aa18-68fc6c8f2cc2, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], logical_port=bd99d3a5-54e0-4e70-9a02-3543631281a6) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:20:04 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:20:04.762 140354 INFO neutron.agent.ovn.metadata.agent [-] Port bd99d3a5-54e0-4e70-9a02-3543631281a6 in datapath 76c2d716-7d14-4bc1-b83b-a3290ee99d9a unbound from our chassis
Jan 20 15:20:04 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:20:04.763 140354 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 76c2d716-7d14-4bc1-b83b-a3290ee99d9a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 20 15:20:04 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:20:04.764 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[d0a9904c-4e6c-4841-b192-6be83a8b83b8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:20:04 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:20:04.764 140354 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-76c2d716-7d14-4bc1-b83b-a3290ee99d9a namespace which is not needed anymore
Jan 20 15:20:04 compute-1 nova_compute[225855]: 2026-01-20 15:20:04.777 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:20:04 compute-1 systemd[1]: machine-qemu\x2d102\x2dinstance\x2d000000c2.scope: Deactivated successfully.
Jan 20 15:20:04 compute-1 systemd[1]: machine-qemu\x2d102\x2dinstance\x2d000000c2.scope: Consumed 15.860s CPU time.
Jan 20 15:20:04 compute-1 systemd-machined[194361]: Machine qemu-102-instance-000000c2 terminated.
Jan 20 15:20:04 compute-1 ceph-mon[81775]: pgmap v2934: 321 pgs: 321 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 35 KiB/s wr, 103 op/s
Jan 20 15:20:04 compute-1 podman[308385]: 2026-01-20 15:20:04.831701021 +0000 UTC m=+0.062384312 container health_status 533cd8e0c3b8b7f910f15cef49e7f879ab46c12ce646a35628c15f13725eb500 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0)
Jan 20 15:20:04 compute-1 neutron-haproxy-ovnmeta-76c2d716-7d14-4bc1-b83b-a3290ee99d9a[307752]: [NOTICE]   (307756) : haproxy version is 2.8.14-c23fe91
Jan 20 15:20:04 compute-1 neutron-haproxy-ovnmeta-76c2d716-7d14-4bc1-b83b-a3290ee99d9a[307752]: [NOTICE]   (307756) : path to executable is /usr/sbin/haproxy
Jan 20 15:20:04 compute-1 rsyslogd[1002]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 20 15:20:04 compute-1 neutron-haproxy-ovnmeta-76c2d716-7d14-4bc1-b83b-a3290ee99d9a[307752]: [WARNING]  (307756) : Exiting Master process...
Jan 20 15:20:04 compute-1 neutron-haproxy-ovnmeta-76c2d716-7d14-4bc1-b83b-a3290ee99d9a[307752]: [WARNING]  (307756) : Exiting Master process...
Jan 20 15:20:04 compute-1 neutron-haproxy-ovnmeta-76c2d716-7d14-4bc1-b83b-a3290ee99d9a[307752]: [ALERT]    (307756) : Current worker (307758) exited with code 143 (Terminated)
Jan 20 15:20:04 compute-1 neutron-haproxy-ovnmeta-76c2d716-7d14-4bc1-b83b-a3290ee99d9a[307752]: [WARNING]  (307756) : All workers exited. Exiting... (0)
Jan 20 15:20:04 compute-1 systemd[1]: libpod-5e4fd4628195f875a51769944f5674e810c100ec290dc3180351b0db60def7c7.scope: Deactivated successfully.
Jan 20 15:20:04 compute-1 podman[308428]: 2026-01-20 15:20:04.9213396 +0000 UTC m=+0.056716101 container died 5e4fd4628195f875a51769944f5674e810c100ec290dc3180351b0db60def7c7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-76c2d716-7d14-4bc1-b83b-a3290ee99d9a, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 20 15:20:04 compute-1 nova_compute[225855]: 2026-01-20 15:20:04.923 225859 INFO nova.virt.libvirt.driver [-] [instance: 2b31f3d7-81bd-4712-bcb1-98afd2dc0f44] Instance destroyed successfully.
Jan 20 15:20:04 compute-1 nova_compute[225855]: 2026-01-20 15:20:04.925 225859 DEBUG nova.objects.instance [None req-77153b7a-08dd-4f41-958b-db394c309e07 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Lazy-loading 'resources' on Instance uuid 2b31f3d7-81bd-4712-bcb1-98afd2dc0f44 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:20:04 compute-1 nova_compute[225855]: 2026-01-20 15:20:04.940 225859 DEBUG nova.virt.libvirt.vif [None req-77153b7a-08dd-4f41-958b-db394c309e07 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T15:18:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachVolumeNegativeTest-server-1816292644',display_name='tempest-AttachVolumeNegativeTest-server-1816292644',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-attachvolumenegativetest-server-1816292644',id=194,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGw8qLSULQliDvwKbeTpbt4J6LHSzE0FcgbAkXI3mB449DNtZV5vtYZtKqW3qflHvvMvcmL7nd1rBiXHEUgRgW71fE/QnzR597lXioriSvOlFWdxXwkMYduhCAWqw/sG6A==',key_name='tempest-keypair-235858913',keypairs=<?>,launch_index=0,launched_at=2026-01-20T15:18:45Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='9156c0a9920c4721843416b9a44404f9',ramdisk_id='',reservation_id='r-acjnzwzw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachVolumeNegativeTest-1505789262',owner_user_name='tempest-AttachVolumeNegativeTest-1505789262-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T15:18:45Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='cd9a8f26b71f4631a387e555e6b18428',uuid=2b31f3d7-81bd-4712-bcb1-98afd2dc0f44,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "bd99d3a5-54e0-4e70-9a02-3543631281a6", "address": "fa:16:3e:44:23:fd", "network": {"id": "76c2d716-7d14-4bc1-b83b-a3290ee99d9a", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-782760714-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.245", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9156c0a9920c4721843416b9a44404f9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbd99d3a5-54", "ovs_interfaceid": "bd99d3a5-54e0-4e70-9a02-3543631281a6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 20 15:20:04 compute-1 nova_compute[225855]: 2026-01-20 15:20:04.941 225859 DEBUG nova.network.os_vif_util [None req-77153b7a-08dd-4f41-958b-db394c309e07 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Converting VIF {"id": "bd99d3a5-54e0-4e70-9a02-3543631281a6", "address": "fa:16:3e:44:23:fd", "network": {"id": "76c2d716-7d14-4bc1-b83b-a3290ee99d9a", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-782760714-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.245", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9156c0a9920c4721843416b9a44404f9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbd99d3a5-54", "ovs_interfaceid": "bd99d3a5-54e0-4e70-9a02-3543631281a6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 15:20:04 compute-1 nova_compute[225855]: 2026-01-20 15:20:04.941 225859 DEBUG nova.network.os_vif_util [None req-77153b7a-08dd-4f41-958b-db394c309e07 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:44:23:fd,bridge_name='br-int',has_traffic_filtering=True,id=bd99d3a5-54e0-4e70-9a02-3543631281a6,network=Network(76c2d716-7d14-4bc1-b83b-a3290ee99d9a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbd99d3a5-54') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 15:20:04 compute-1 nova_compute[225855]: 2026-01-20 15:20:04.942 225859 DEBUG os_vif [None req-77153b7a-08dd-4f41-958b-db394c309e07 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:44:23:fd,bridge_name='br-int',has_traffic_filtering=True,id=bd99d3a5-54e0-4e70-9a02-3543631281a6,network=Network(76c2d716-7d14-4bc1-b83b-a3290ee99d9a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbd99d3a5-54') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 20 15:20:04 compute-1 nova_compute[225855]: 2026-01-20 15:20:04.943 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:20:04 compute-1 nova_compute[225855]: 2026-01-20 15:20:04.944 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbd99d3a5-54, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:20:04 compute-1 nova_compute[225855]: 2026-01-20 15:20:04.946 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:20:04 compute-1 nova_compute[225855]: 2026-01-20 15:20:04.949 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 15:20:04 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5e4fd4628195f875a51769944f5674e810c100ec290dc3180351b0db60def7c7-userdata-shm.mount: Deactivated successfully.
Jan 20 15:20:04 compute-1 nova_compute[225855]: 2026-01-20 15:20:04.956 225859 INFO os_vif [None req-77153b7a-08dd-4f41-958b-db394c309e07 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:44:23:fd,bridge_name='br-int',has_traffic_filtering=True,id=bd99d3a5-54e0-4e70-9a02-3543631281a6,network=Network(76c2d716-7d14-4bc1-b83b-a3290ee99d9a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbd99d3a5-54')
Jan 20 15:20:04 compute-1 systemd[1]: var-lib-containers-storage-overlay-c7b4fcd771e27f8b153744804f05a30b7615c256cc9d63d2f67498c5cbee4f4e-merged.mount: Deactivated successfully.
Jan 20 15:20:04 compute-1 podman[308428]: 2026-01-20 15:20:04.970750555 +0000 UTC m=+0.106127046 container cleanup 5e4fd4628195f875a51769944f5674e810c100ec290dc3180351b0db60def7c7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-76c2d716-7d14-4bc1-b83b-a3290ee99d9a, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 20 15:20:04 compute-1 systemd[1]: libpod-conmon-5e4fd4628195f875a51769944f5674e810c100ec290dc3180351b0db60def7c7.scope: Deactivated successfully.
Jan 20 15:20:04 compute-1 nova_compute[225855]: 2026-01-20 15:20:04.991 225859 DEBUG nova.compute.manager [req-ba1685a9-2c37-4c38-99e8-04a85ac49bbb req-18920763-1f55-4f81-bd55-bef55d14de7d 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 2b31f3d7-81bd-4712-bcb1-98afd2dc0f44] Received event network-vif-unplugged-bd99d3a5-54e0-4e70-9a02-3543631281a6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:20:04 compute-1 nova_compute[225855]: 2026-01-20 15:20:04.993 225859 DEBUG oslo_concurrency.lockutils [req-ba1685a9-2c37-4c38-99e8-04a85ac49bbb req-18920763-1f55-4f81-bd55-bef55d14de7d 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "2b31f3d7-81bd-4712-bcb1-98afd2dc0f44-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:20:04 compute-1 nova_compute[225855]: 2026-01-20 15:20:04.993 225859 DEBUG oslo_concurrency.lockutils [req-ba1685a9-2c37-4c38-99e8-04a85ac49bbb req-18920763-1f55-4f81-bd55-bef55d14de7d 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "2b31f3d7-81bd-4712-bcb1-98afd2dc0f44-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:20:04 compute-1 nova_compute[225855]: 2026-01-20 15:20:04.993 225859 DEBUG oslo_concurrency.lockutils [req-ba1685a9-2c37-4c38-99e8-04a85ac49bbb req-18920763-1f55-4f81-bd55-bef55d14de7d 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "2b31f3d7-81bd-4712-bcb1-98afd2dc0f44-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:20:04 compute-1 nova_compute[225855]: 2026-01-20 15:20:04.994 225859 DEBUG nova.compute.manager [req-ba1685a9-2c37-4c38-99e8-04a85ac49bbb req-18920763-1f55-4f81-bd55-bef55d14de7d 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 2b31f3d7-81bd-4712-bcb1-98afd2dc0f44] No waiting events found dispatching network-vif-unplugged-bd99d3a5-54e0-4e70-9a02-3543631281a6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:20:04 compute-1 nova_compute[225855]: 2026-01-20 15:20:04.994 225859 DEBUG nova.compute.manager [req-ba1685a9-2c37-4c38-99e8-04a85ac49bbb req-18920763-1f55-4f81-bd55-bef55d14de7d 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 2b31f3d7-81bd-4712-bcb1-98afd2dc0f44] Received event network-vif-unplugged-bd99d3a5-54e0-4e70-9a02-3543631281a6 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 20 15:20:05 compute-1 podman[308483]: 2026-01-20 15:20:05.05100571 +0000 UTC m=+0.052069841 container remove 5e4fd4628195f875a51769944f5674e810c100ec290dc3180351b0db60def7c7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-76c2d716-7d14-4bc1-b83b-a3290ee99d9a, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 20 15:20:05 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:20:05.058 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[70f8056e-6ac5-4b54-8257-b48293301d39]: (4, ('Tue Jan 20 03:20:04 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-76c2d716-7d14-4bc1-b83b-a3290ee99d9a (5e4fd4628195f875a51769944f5674e810c100ec290dc3180351b0db60def7c7)\n5e4fd4628195f875a51769944f5674e810c100ec290dc3180351b0db60def7c7\nTue Jan 20 03:20:04 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-76c2d716-7d14-4bc1-b83b-a3290ee99d9a (5e4fd4628195f875a51769944f5674e810c100ec290dc3180351b0db60def7c7)\n5e4fd4628195f875a51769944f5674e810c100ec290dc3180351b0db60def7c7\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:20:05 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:20:05.061 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[9b99d362-a7da-4f38-b19a-0f2dec007acc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:20:05 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:20:05.062 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap76c2d716-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:20:05 compute-1 nova_compute[225855]: 2026-01-20 15:20:05.063 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:20:05 compute-1 kernel: tap76c2d716-70: left promiscuous mode
Jan 20 15:20:05 compute-1 nova_compute[225855]: 2026-01-20 15:20:05.081 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:20:05 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:20:05.084 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[5ce970db-1b12-486e-9076-f990cfaf89eb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:20:05 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:20:05.103 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[b147ebf6-bb11-42dd-9dbf-ab6513831d05]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:20:05 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:20:05.105 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[75402418-e903-4422-9ccd-34aedf082408]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:20:05 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:20:05.126 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[7bcaba9d-c8ef-4cd9-91a5-171964438c27]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 736298, 'reachable_time': 24698, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 308501, 'error': None, 'target': 'ovnmeta-76c2d716-7d14-4bc1-b83b-a3290ee99d9a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:20:05 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:20:05.130 140466 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-76c2d716-7d14-4bc1-b83b-a3290ee99d9a deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 20 15:20:05 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:20:05.131 140466 DEBUG oslo.privsep.daemon [-] privsep: reply[49da201e-5157-4b60-91e5-9543a63d5fe5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:20:05 compute-1 systemd[1]: run-netns-ovnmeta\x2d76c2d716\x2d7d14\x2d4bc1\x2db83b\x2da3290ee99d9a.mount: Deactivated successfully.
Jan 20 15:20:05 compute-1 nova_compute[225855]: 2026-01-20 15:20:05.340 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:20:05 compute-1 nova_compute[225855]: 2026-01-20 15:20:05.384 225859 INFO nova.virt.libvirt.driver [None req-77153b7a-08dd-4f41-958b-db394c309e07 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] [instance: 2b31f3d7-81bd-4712-bcb1-98afd2dc0f44] Deleting instance files /var/lib/nova/instances/2b31f3d7-81bd-4712-bcb1-98afd2dc0f44_del
Jan 20 15:20:05 compute-1 nova_compute[225855]: 2026-01-20 15:20:05.385 225859 INFO nova.virt.libvirt.driver [None req-77153b7a-08dd-4f41-958b-db394c309e07 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] [instance: 2b31f3d7-81bd-4712-bcb1-98afd2dc0f44] Deletion of /var/lib/nova/instances/2b31f3d7-81bd-4712-bcb1-98afd2dc0f44_del complete
Jan 20 15:20:05 compute-1 nova_compute[225855]: 2026-01-20 15:20:05.433 225859 INFO nova.compute.manager [None req-77153b7a-08dd-4f41-958b-db394c309e07 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] [instance: 2b31f3d7-81bd-4712-bcb1-98afd2dc0f44] Took 0.75 seconds to destroy the instance on the hypervisor.
Jan 20 15:20:05 compute-1 nova_compute[225855]: 2026-01-20 15:20:05.434 225859 DEBUG oslo.service.loopingcall [None req-77153b7a-08dd-4f41-958b-db394c309e07 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 20 15:20:05 compute-1 nova_compute[225855]: 2026-01-20 15:20:05.435 225859 DEBUG nova.compute.manager [-] [instance: 2b31f3d7-81bd-4712-bcb1-98afd2dc0f44] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 20 15:20:05 compute-1 nova_compute[225855]: 2026-01-20 15:20:05.435 225859 DEBUG nova.network.neutron [-] [instance: 2b31f3d7-81bd-4712-bcb1-98afd2dc0f44] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 20 15:20:05 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:20:05 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:20:05 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:20:05.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:20:05 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/3916580390' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:20:05 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:20:05 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:20:05 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:20:05.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:20:06 compute-1 nova_compute[225855]: 2026-01-20 15:20:06.340 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:20:06 compute-1 nova_compute[225855]: 2026-01-20 15:20:06.341 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:20:06 compute-1 nova_compute[225855]: 2026-01-20 15:20:06.341 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:20:06 compute-1 nova_compute[225855]: 2026-01-20 15:20:06.370 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:20:06 compute-1 nova_compute[225855]: 2026-01-20 15:20:06.371 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:20:06 compute-1 nova_compute[225855]: 2026-01-20 15:20:06.371 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:20:06 compute-1 nova_compute[225855]: 2026-01-20 15:20:06.372 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 15:20:06 compute-1 nova_compute[225855]: 2026-01-20 15:20:06.372 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:20:06 compute-1 nova_compute[225855]: 2026-01-20 15:20:06.582 225859 DEBUG nova.network.neutron [-] [instance: 2b31f3d7-81bd-4712-bcb1-98afd2dc0f44] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:20:06 compute-1 nova_compute[225855]: 2026-01-20 15:20:06.619 225859 INFO nova.compute.manager [-] [instance: 2b31f3d7-81bd-4712-bcb1-98afd2dc0f44] Took 1.18 seconds to deallocate network for instance.
Jan 20 15:20:06 compute-1 nova_compute[225855]: 2026-01-20 15:20:06.671 225859 DEBUG oslo_concurrency.lockutils [None req-77153b7a-08dd-4f41-958b-db394c309e07 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:20:06 compute-1 nova_compute[225855]: 2026-01-20 15:20:06.672 225859 DEBUG oslo_concurrency.lockutils [None req-77153b7a-08dd-4f41-958b-db394c309e07 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:20:06 compute-1 nova_compute[225855]: 2026-01-20 15:20:06.735 225859 DEBUG oslo_concurrency.processutils [None req-77153b7a-08dd-4f41-958b-db394c309e07 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:20:06 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:20:06 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/35866287' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:20:06 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/102669052' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:20:06 compute-1 ceph-mon[81775]: pgmap v2935: 321 pgs: 321 active+clean; 230 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 36 KiB/s wr, 120 op/s
Jan 20 15:20:06 compute-1 nova_compute[225855]: 2026-01-20 15:20:06.856 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:20:07 compute-1 nova_compute[225855]: 2026-01-20 15:20:07.037 225859 WARNING nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 15:20:07 compute-1 nova_compute[225855]: 2026-01-20 15:20:07.038 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4254MB free_disk=20.931278228759766GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 15:20:07 compute-1 nova_compute[225855]: 2026-01-20 15:20:07.038 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:20:07 compute-1 nova_compute[225855]: 2026-01-20 15:20:07.076 225859 DEBUG nova.compute.manager [req-c8e349d5-1f0d-4dd9-9c94-2189fd37d746 req-313ac06f-a95a-42bf-8416-0fa87d3ccc00 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 2b31f3d7-81bd-4712-bcb1-98afd2dc0f44] Received event network-vif-plugged-bd99d3a5-54e0-4e70-9a02-3543631281a6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:20:07 compute-1 nova_compute[225855]: 2026-01-20 15:20:07.076 225859 DEBUG oslo_concurrency.lockutils [req-c8e349d5-1f0d-4dd9-9c94-2189fd37d746 req-313ac06f-a95a-42bf-8416-0fa87d3ccc00 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "2b31f3d7-81bd-4712-bcb1-98afd2dc0f44-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:20:07 compute-1 nova_compute[225855]: 2026-01-20 15:20:07.077 225859 DEBUG oslo_concurrency.lockutils [req-c8e349d5-1f0d-4dd9-9c94-2189fd37d746 req-313ac06f-a95a-42bf-8416-0fa87d3ccc00 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "2b31f3d7-81bd-4712-bcb1-98afd2dc0f44-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:20:07 compute-1 nova_compute[225855]: 2026-01-20 15:20:07.077 225859 DEBUG oslo_concurrency.lockutils [req-c8e349d5-1f0d-4dd9-9c94-2189fd37d746 req-313ac06f-a95a-42bf-8416-0fa87d3ccc00 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "2b31f3d7-81bd-4712-bcb1-98afd2dc0f44-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:20:07 compute-1 nova_compute[225855]: 2026-01-20 15:20:07.077 225859 DEBUG nova.compute.manager [req-c8e349d5-1f0d-4dd9-9c94-2189fd37d746 req-313ac06f-a95a-42bf-8416-0fa87d3ccc00 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 2b31f3d7-81bd-4712-bcb1-98afd2dc0f44] No waiting events found dispatching network-vif-plugged-bd99d3a5-54e0-4e70-9a02-3543631281a6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:20:07 compute-1 nova_compute[225855]: 2026-01-20 15:20:07.077 225859 WARNING nova.compute.manager [req-c8e349d5-1f0d-4dd9-9c94-2189fd37d746 req-313ac06f-a95a-42bf-8416-0fa87d3ccc00 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 2b31f3d7-81bd-4712-bcb1-98afd2dc0f44] Received unexpected event network-vif-plugged-bd99d3a5-54e0-4e70-9a02-3543631281a6 for instance with vm_state deleted and task_state None.
Jan 20 15:20:07 compute-1 nova_compute[225855]: 2026-01-20 15:20:07.078 225859 DEBUG nova.compute.manager [req-c8e349d5-1f0d-4dd9-9c94-2189fd37d746 req-313ac06f-a95a-42bf-8416-0fa87d3ccc00 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 2b31f3d7-81bd-4712-bcb1-98afd2dc0f44] Received event network-vif-deleted-bd99d3a5-54e0-4e70-9a02-3543631281a6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:20:07 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:20:07 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3825334878' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:20:07 compute-1 nova_compute[225855]: 2026-01-20 15:20:07.230 225859 DEBUG oslo_concurrency.processutils [None req-77153b7a-08dd-4f41-958b-db394c309e07 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.495s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:20:07 compute-1 nova_compute[225855]: 2026-01-20 15:20:07.237 225859 DEBUG nova.compute.provider_tree [None req-77153b7a-08dd-4f41-958b-db394c309e07 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:20:07 compute-1 nova_compute[225855]: 2026-01-20 15:20:07.256 225859 DEBUG nova.scheduler.client.report [None req-77153b7a-08dd-4f41-958b-db394c309e07 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:20:07 compute-1 nova_compute[225855]: 2026-01-20 15:20:07.285 225859 DEBUG oslo_concurrency.lockutils [None req-77153b7a-08dd-4f41-958b-db394c309e07 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.613s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:20:07 compute-1 nova_compute[225855]: 2026-01-20 15:20:07.289 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.250s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:20:07 compute-1 nova_compute[225855]: 2026-01-20 15:20:07.327 225859 INFO nova.scheduler.client.report [None req-77153b7a-08dd-4f41-958b-db394c309e07 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Deleted allocations for instance 2b31f3d7-81bd-4712-bcb1-98afd2dc0f44
Jan 20 15:20:07 compute-1 nova_compute[225855]: 2026-01-20 15:20:07.361 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 15:20:07 compute-1 nova_compute[225855]: 2026-01-20 15:20:07.362 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 15:20:07 compute-1 nova_compute[225855]: 2026-01-20 15:20:07.381 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:20:07 compute-1 nova_compute[225855]: 2026-01-20 15:20:07.411 225859 DEBUG oslo_concurrency.lockutils [None req-77153b7a-08dd-4f41-958b-db394c309e07 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Lock "2b31f3d7-81bd-4712-bcb1-98afd2dc0f44" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.730s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:20:07 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:20:07 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:20:07 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:20:07.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:20:07 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:20:07 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/703266583' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:20:07 compute-1 nova_compute[225855]: 2026-01-20 15:20:07.833 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:20:07 compute-1 nova_compute[225855]: 2026-01-20 15:20:07.838 225859 DEBUG nova.compute.provider_tree [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:20:07 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/35866287' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:20:07 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/3825334878' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:20:07 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/703266583' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:20:07 compute-1 nova_compute[225855]: 2026-01-20 15:20:07.855 225859 DEBUG nova.scheduler.client.report [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:20:07 compute-1 nova_compute[225855]: 2026-01-20 15:20:07.877 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 15:20:07 compute-1 nova_compute[225855]: 2026-01-20 15:20:07.878 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.589s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:20:07 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:20:07 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:20:07 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:20:07.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:20:08 compute-1 nova_compute[225855]: 2026-01-20 15:20:08.173 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:20:08 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e415 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:20:08 compute-1 ceph-mon[81775]: pgmap v2936: 321 pgs: 321 active+clean; 230 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 11 KiB/s wr, 110 op/s
Jan 20 15:20:08 compute-1 ceph-mon[81775]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #142. Immutable memtables: 0.
Jan 20 15:20:08 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:20:08.866512) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 20 15:20:08 compute-1 ceph-mon[81775]: rocksdb: [db/flush_job.cc:856] [default] [JOB 89] Flushing memtable with next log file: 142
Jan 20 15:20:08 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768922408866643, "job": 89, "event": "flush_started", "num_memtables": 1, "num_entries": 1220, "num_deletes": 251, "total_data_size": 2591421, "memory_usage": 2620800, "flush_reason": "Manual Compaction"}
Jan 20 15:20:08 compute-1 ceph-mon[81775]: rocksdb: [db/flush_job.cc:885] [default] [JOB 89] Level-0 flush table #143: started
Jan 20 15:20:08 compute-1 nova_compute[225855]: 2026-01-20 15:20:08.876 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:20:08 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768922408896088, "cf_name": "default", "job": 89, "event": "table_file_creation", "file_number": 143, "file_size": 1698431, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 70250, "largest_seqno": 71464, "table_properties": {"data_size": 1693229, "index_size": 2661, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1477, "raw_key_size": 11553, "raw_average_key_size": 19, "raw_value_size": 1682677, "raw_average_value_size": 2891, "num_data_blocks": 118, "num_entries": 582, "num_filter_entries": 582, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768922312, "oldest_key_time": 1768922312, "file_creation_time": 1768922408, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1539d774-8a6f-4e48-b253-137c44586344", "db_session_id": "LFF7G2OZDOU7TKQ8MKAH", "orig_file_number": 143, "seqno_to_time_mapping": "N/A"}}
Jan 20 15:20:08 compute-1 ceph-mon[81775]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 89] Flush lasted 29735 microseconds, and 6477 cpu microseconds.
Jan 20 15:20:08 compute-1 ceph-mon[81775]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 15:20:08 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:20:08.896254) [db/flush_job.cc:967] [default] [JOB 89] Level-0 flush table #143: 1698431 bytes OK
Jan 20 15:20:08 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:20:08.896284) [db/memtable_list.cc:519] [default] Level-0 commit table #143 started
Jan 20 15:20:08 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:20:08.902794) [db/memtable_list.cc:722] [default] Level-0 commit table #143: memtable #1 done
Jan 20 15:20:08 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:20:08.902824) EVENT_LOG_v1 {"time_micros": 1768922408902814, "job": 89, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 20 15:20:08 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:20:08.902853) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 20 15:20:08 compute-1 ceph-mon[81775]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 89] Try to delete WAL files size 2585594, prev total WAL file size 2585594, number of live WAL files 2.
Jan 20 15:20:08 compute-1 ceph-mon[81775]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000139.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 15:20:08 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:20:08.904272) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730036303234' seq:72057594037927935, type:22 .. '7061786F730036323736' seq:0, type:0; will stop at (end)
Jan 20 15:20:08 compute-1 ceph-mon[81775]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 90] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 20 15:20:08 compute-1 ceph-mon[81775]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 89 Base level 0, inputs: [143(1658KB)], [141(12MB)]
Jan 20 15:20:08 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768922408904326, "job": 90, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [143], "files_L6": [141], "score": -1, "input_data_size": 15213051, "oldest_snapshot_seqno": -1}
Jan 20 15:20:09 compute-1 ceph-mon[81775]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 90] Generated table #144: 9464 keys, 13363564 bytes, temperature: kUnknown
Jan 20 15:20:09 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768922409199265, "cf_name": "default", "job": 90, "event": "table_file_creation", "file_number": 144, "file_size": 13363564, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13300247, "index_size": 38564, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 23685, "raw_key_size": 249234, "raw_average_key_size": 26, "raw_value_size": 13131911, "raw_average_value_size": 1387, "num_data_blocks": 1475, "num_entries": 9464, "num_filter_entries": 9464, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768917474, "oldest_key_time": 0, "file_creation_time": 1768922408, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1539d774-8a6f-4e48-b253-137c44586344", "db_session_id": "LFF7G2OZDOU7TKQ8MKAH", "orig_file_number": 144, "seqno_to_time_mapping": "N/A"}}
Jan 20 15:20:09 compute-1 ceph-mon[81775]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 15:20:09 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:20:09.199600) [db/compaction/compaction_job.cc:1663] [default] [JOB 90] Compacted 1@0 + 1@6 files to L6 => 13363564 bytes
Jan 20 15:20:09 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:20:09.341208) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 51.6 rd, 45.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.6, 12.9 +0.0 blob) out(12.7 +0.0 blob), read-write-amplify(16.8) write-amplify(7.9) OK, records in: 9979, records dropped: 515 output_compression: NoCompression
Jan 20 15:20:09 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:20:09.341275) EVENT_LOG_v1 {"time_micros": 1768922409341249, "job": 90, "event": "compaction_finished", "compaction_time_micros": 295025, "compaction_time_cpu_micros": 33737, "output_level": 6, "num_output_files": 1, "total_output_size": 13363564, "num_input_records": 9979, "num_output_records": 9464, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 20 15:20:09 compute-1 ceph-mon[81775]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000143.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 15:20:09 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768922409342141, "job": 90, "event": "table_file_deletion", "file_number": 143}
Jan 20 15:20:09 compute-1 ceph-mon[81775]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000141.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 15:20:09 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768922409347473, "job": 90, "event": "table_file_deletion", "file_number": 141}
Jan 20 15:20:09 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:20:08.904193) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:20:09 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:20:09.347624) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:20:09 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:20:09.347631) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:20:09 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:20:09.347632) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:20:09 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:20:09.347634) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:20:09 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:20:09.347636) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:20:09 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:20:09 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:20:09 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:20:09.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:20:09 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/3480512486' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:20:09 compute-1 nova_compute[225855]: 2026-01-20 15:20:09.946 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:20:09 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:20:09 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:20:09 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:20:09.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:20:10 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/3688265108' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:20:10 compute-1 ceph-mon[81775]: pgmap v2937: 321 pgs: 321 active+clean; 190 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 1.4 MiB/s wr, 171 op/s
Jan 20 15:20:11 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:20:11 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:20:11 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:20:11.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:20:11 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:20:11 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:20:11 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:20:11.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:20:13 compute-1 nova_compute[225855]: 2026-01-20 15:20:13.107 225859 DEBUG oslo_concurrency.lockutils [None req-dc9a0afd-b1a5-4c8b-b4fc-1789229ae1df cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Acquiring lock "7a53c9b1-e64b-4a31-897a-bbe7d964cf45" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:20:13 compute-1 nova_compute[225855]: 2026-01-20 15:20:13.107 225859 DEBUG oslo_concurrency.lockutils [None req-dc9a0afd-b1a5-4c8b-b4fc-1789229ae1df cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Lock "7a53c9b1-e64b-4a31-897a-bbe7d964cf45" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:20:13 compute-1 nova_compute[225855]: 2026-01-20 15:20:13.125 225859 DEBUG nova.compute.manager [None req-dc9a0afd-b1a5-4c8b-b4fc-1789229ae1df cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] [instance: 7a53c9b1-e64b-4a31-897a-bbe7d964cf45] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 20 15:20:13 compute-1 nova_compute[225855]: 2026-01-20 15:20:13.177 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:20:13 compute-1 nova_compute[225855]: 2026-01-20 15:20:13.195 225859 DEBUG oslo_concurrency.lockutils [None req-dc9a0afd-b1a5-4c8b-b4fc-1789229ae1df cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:20:13 compute-1 nova_compute[225855]: 2026-01-20 15:20:13.196 225859 DEBUG oslo_concurrency.lockutils [None req-dc9a0afd-b1a5-4c8b-b4fc-1789229ae1df cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:20:13 compute-1 nova_compute[225855]: 2026-01-20 15:20:13.200 225859 DEBUG nova.virt.hardware [None req-dc9a0afd-b1a5-4c8b-b4fc-1789229ae1df cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 20 15:20:13 compute-1 nova_compute[225855]: 2026-01-20 15:20:13.200 225859 INFO nova.compute.claims [None req-dc9a0afd-b1a5-4c8b-b4fc-1789229ae1df cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] [instance: 7a53c9b1-e64b-4a31-897a-bbe7d964cf45] Claim successful on node compute-1.ctlplane.example.com
Jan 20 15:20:13 compute-1 nova_compute[225855]: 2026-01-20 15:20:13.297 225859 DEBUG oslo_concurrency.processutils [None req-dc9a0afd-b1a5-4c8b-b4fc-1789229ae1df cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:20:13 compute-1 nova_compute[225855]: 2026-01-20 15:20:13.335 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:20:13 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e415 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:20:13 compute-1 ceph-mon[81775]: pgmap v2938: 321 pgs: 321 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 2.1 MiB/s wr, 135 op/s
Jan 20 15:20:13 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:20:13 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:20:13 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:20:13.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:20:13 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:20:13 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2383700053' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:20:13 compute-1 nova_compute[225855]: 2026-01-20 15:20:13.729 225859 DEBUG oslo_concurrency.processutils [None req-dc9a0afd-b1a5-4c8b-b4fc-1789229ae1df cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:20:13 compute-1 nova_compute[225855]: 2026-01-20 15:20:13.736 225859 DEBUG nova.compute.provider_tree [None req-dc9a0afd-b1a5-4c8b-b4fc-1789229ae1df cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:20:13 compute-1 nova_compute[225855]: 2026-01-20 15:20:13.755 225859 DEBUG nova.scheduler.client.report [None req-dc9a0afd-b1a5-4c8b-b4fc-1789229ae1df cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:20:13 compute-1 nova_compute[225855]: 2026-01-20 15:20:13.778 225859 DEBUG oslo_concurrency.lockutils [None req-dc9a0afd-b1a5-4c8b-b4fc-1789229ae1df cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.582s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:20:13 compute-1 nova_compute[225855]: 2026-01-20 15:20:13.779 225859 DEBUG nova.compute.manager [None req-dc9a0afd-b1a5-4c8b-b4fc-1789229ae1df cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] [instance: 7a53c9b1-e64b-4a31-897a-bbe7d964cf45] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 20 15:20:13 compute-1 nova_compute[225855]: 2026-01-20 15:20:13.833 225859 DEBUG nova.compute.manager [None req-dc9a0afd-b1a5-4c8b-b4fc-1789229ae1df cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] [instance: 7a53c9b1-e64b-4a31-897a-bbe7d964cf45] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 20 15:20:13 compute-1 nova_compute[225855]: 2026-01-20 15:20:13.834 225859 DEBUG nova.network.neutron [None req-dc9a0afd-b1a5-4c8b-b4fc-1789229ae1df cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] [instance: 7a53c9b1-e64b-4a31-897a-bbe7d964cf45] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 20 15:20:13 compute-1 nova_compute[225855]: 2026-01-20 15:20:13.856 225859 INFO nova.virt.libvirt.driver [None req-dc9a0afd-b1a5-4c8b-b4fc-1789229ae1df cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] [instance: 7a53c9b1-e64b-4a31-897a-bbe7d964cf45] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 20 15:20:13 compute-1 nova_compute[225855]: 2026-01-20 15:20:13.883 225859 DEBUG nova.compute.manager [None req-dc9a0afd-b1a5-4c8b-b4fc-1789229ae1df cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] [instance: 7a53c9b1-e64b-4a31-897a-bbe7d964cf45] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 20 15:20:13 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:20:13 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:20:13 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:20:13.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:20:14 compute-1 nova_compute[225855]: 2026-01-20 15:20:14.030 225859 DEBUG nova.compute.manager [None req-dc9a0afd-b1a5-4c8b-b4fc-1789229ae1df cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] [instance: 7a53c9b1-e64b-4a31-897a-bbe7d964cf45] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 20 15:20:14 compute-1 nova_compute[225855]: 2026-01-20 15:20:14.031 225859 DEBUG nova.virt.libvirt.driver [None req-dc9a0afd-b1a5-4c8b-b4fc-1789229ae1df cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] [instance: 7a53c9b1-e64b-4a31-897a-bbe7d964cf45] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 20 15:20:14 compute-1 nova_compute[225855]: 2026-01-20 15:20:14.031 225859 INFO nova.virt.libvirt.driver [None req-dc9a0afd-b1a5-4c8b-b4fc-1789229ae1df cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] [instance: 7a53c9b1-e64b-4a31-897a-bbe7d964cf45] Creating image(s)
Jan 20 15:20:14 compute-1 nova_compute[225855]: 2026-01-20 15:20:14.056 225859 DEBUG nova.storage.rbd_utils [None req-dc9a0afd-b1a5-4c8b-b4fc-1789229ae1df cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] rbd image 7a53c9b1-e64b-4a31-897a-bbe7d964cf45_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:20:14 compute-1 nova_compute[225855]: 2026-01-20 15:20:14.086 225859 DEBUG nova.storage.rbd_utils [None req-dc9a0afd-b1a5-4c8b-b4fc-1789229ae1df cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] rbd image 7a53c9b1-e64b-4a31-897a-bbe7d964cf45_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:20:14 compute-1 nova_compute[225855]: 2026-01-20 15:20:14.112 225859 DEBUG nova.storage.rbd_utils [None req-dc9a0afd-b1a5-4c8b-b4fc-1789229ae1df cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] rbd image 7a53c9b1-e64b-4a31-897a-bbe7d964cf45_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:20:14 compute-1 nova_compute[225855]: 2026-01-20 15:20:14.117 225859 DEBUG oslo_concurrency.processutils [None req-dc9a0afd-b1a5-4c8b-b4fc-1789229ae1df cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:20:14 compute-1 nova_compute[225855]: 2026-01-20 15:20:14.182 225859 DEBUG oslo_concurrency.processutils [None req-dc9a0afd-b1a5-4c8b-b4fc-1789229ae1df cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:20:14 compute-1 nova_compute[225855]: 2026-01-20 15:20:14.184 225859 DEBUG oslo_concurrency.lockutils [None req-dc9a0afd-b1a5-4c8b-b4fc-1789229ae1df cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Acquiring lock "82d5c1918fd7c974214c7a48c1793a7a82560462" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:20:14 compute-1 nova_compute[225855]: 2026-01-20 15:20:14.185 225859 DEBUG oslo_concurrency.lockutils [None req-dc9a0afd-b1a5-4c8b-b4fc-1789229ae1df cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:20:14 compute-1 nova_compute[225855]: 2026-01-20 15:20:14.185 225859 DEBUG oslo_concurrency.lockutils [None req-dc9a0afd-b1a5-4c8b-b4fc-1789229ae1df cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:20:14 compute-1 nova_compute[225855]: 2026-01-20 15:20:14.219 225859 DEBUG nova.storage.rbd_utils [None req-dc9a0afd-b1a5-4c8b-b4fc-1789229ae1df cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] rbd image 7a53c9b1-e64b-4a31-897a-bbe7d964cf45_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:20:14 compute-1 nova_compute[225855]: 2026-01-20 15:20:14.223 225859 DEBUG oslo_concurrency.processutils [None req-dc9a0afd-b1a5-4c8b-b4fc-1789229ae1df cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 7a53c9b1-e64b-4a31-897a-bbe7d964cf45_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:20:14 compute-1 nova_compute[225855]: 2026-01-20 15:20:14.384 225859 DEBUG nova.policy [None req-dc9a0afd-b1a5-4c8b-b4fc-1789229ae1df cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'cd9a8f26b71f4631a387e555e6b18428', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '9156c0a9920c4721843416b9a44404f9', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 20 15:20:14 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/3382626955' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:20:14 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/3382626955' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:20:14 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/2383700053' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:20:14 compute-1 nova_compute[225855]: 2026-01-20 15:20:14.505 225859 DEBUG oslo_concurrency.processutils [None req-dc9a0afd-b1a5-4c8b-b4fc-1789229ae1df cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 7a53c9b1-e64b-4a31-897a-bbe7d964cf45_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.282s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:20:14 compute-1 nova_compute[225855]: 2026-01-20 15:20:14.585 225859 DEBUG nova.storage.rbd_utils [None req-dc9a0afd-b1a5-4c8b-b4fc-1789229ae1df cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] resizing rbd image 7a53c9b1-e64b-4a31-897a-bbe7d964cf45_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 20 15:20:14 compute-1 nova_compute[225855]: 2026-01-20 15:20:14.685 225859 DEBUG nova.objects.instance [None req-dc9a0afd-b1a5-4c8b-b4fc-1789229ae1df cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Lazy-loading 'migration_context' on Instance uuid 7a53c9b1-e64b-4a31-897a-bbe7d964cf45 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:20:14 compute-1 nova_compute[225855]: 2026-01-20 15:20:14.703 225859 DEBUG nova.virt.libvirt.driver [None req-dc9a0afd-b1a5-4c8b-b4fc-1789229ae1df cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] [instance: 7a53c9b1-e64b-4a31-897a-bbe7d964cf45] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 20 15:20:14 compute-1 nova_compute[225855]: 2026-01-20 15:20:14.704 225859 DEBUG nova.virt.libvirt.driver [None req-dc9a0afd-b1a5-4c8b-b4fc-1789229ae1df cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] [instance: 7a53c9b1-e64b-4a31-897a-bbe7d964cf45] Ensure instance console log exists: /var/lib/nova/instances/7a53c9b1-e64b-4a31-897a-bbe7d964cf45/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 20 15:20:14 compute-1 nova_compute[225855]: 2026-01-20 15:20:14.704 225859 DEBUG oslo_concurrency.lockutils [None req-dc9a0afd-b1a5-4c8b-b4fc-1789229ae1df cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:20:14 compute-1 nova_compute[225855]: 2026-01-20 15:20:14.705 225859 DEBUG oslo_concurrency.lockutils [None req-dc9a0afd-b1a5-4c8b-b4fc-1789229ae1df cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:20:14 compute-1 nova_compute[225855]: 2026-01-20 15:20:14.705 225859 DEBUG oslo_concurrency.lockutils [None req-dc9a0afd-b1a5-4c8b-b4fc-1789229ae1df cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:20:14 compute-1 nova_compute[225855]: 2026-01-20 15:20:14.949 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:20:15 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:20:15 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:20:15 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:20:15.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:20:15 compute-1 ceph-mon[81775]: pgmap v2939: 321 pgs: 321 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 345 KiB/s rd, 2.1 MiB/s wr, 91 op/s
Jan 20 15:20:15 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:20:15 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:20:15 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:20:15.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:20:15 compute-1 nova_compute[225855]: 2026-01-20 15:20:15.957 225859 DEBUG nova.network.neutron [None req-dc9a0afd-b1a5-4c8b-b4fc-1789229ae1df cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] [instance: 7a53c9b1-e64b-4a31-897a-bbe7d964cf45] Successfully created port: ab5264b7-ec64-46dd-b30d-981799387571 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 20 15:20:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:20:16.441 140354 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:20:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:20:16.441 140354 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:20:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:20:16.442 140354 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:20:16 compute-1 ceph-mon[81775]: pgmap v2940: 321 pgs: 321 active+clean; 240 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 363 KiB/s rd, 3.6 MiB/s wr, 119 op/s
Jan 20 15:20:17 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:20:17 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:20:17 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:20:17.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:20:17 compute-1 nova_compute[225855]: 2026-01-20 15:20:17.740 225859 DEBUG nova.network.neutron [None req-dc9a0afd-b1a5-4c8b-b4fc-1789229ae1df cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] [instance: 7a53c9b1-e64b-4a31-897a-bbe7d964cf45] Successfully updated port: ab5264b7-ec64-46dd-b30d-981799387571 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 20 15:20:17 compute-1 nova_compute[225855]: 2026-01-20 15:20:17.755 225859 DEBUG oslo_concurrency.lockutils [None req-dc9a0afd-b1a5-4c8b-b4fc-1789229ae1df cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Acquiring lock "refresh_cache-7a53c9b1-e64b-4a31-897a-bbe7d964cf45" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:20:17 compute-1 nova_compute[225855]: 2026-01-20 15:20:17.756 225859 DEBUG oslo_concurrency.lockutils [None req-dc9a0afd-b1a5-4c8b-b4fc-1789229ae1df cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Acquired lock "refresh_cache-7a53c9b1-e64b-4a31-897a-bbe7d964cf45" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:20:17 compute-1 nova_compute[225855]: 2026-01-20 15:20:17.756 225859 DEBUG nova.network.neutron [None req-dc9a0afd-b1a5-4c8b-b4fc-1789229ae1df cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] [instance: 7a53c9b1-e64b-4a31-897a-bbe7d964cf45] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 15:20:17 compute-1 nova_compute[225855]: 2026-01-20 15:20:17.891 225859 DEBUG nova.compute.manager [req-5a11617e-8c11-4005-9cf8-213fb71664d2 req-0f4ffa8f-cb27-4e7c-a4af-33152b5cf64c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 7a53c9b1-e64b-4a31-897a-bbe7d964cf45] Received event network-changed-ab5264b7-ec64-46dd-b30d-981799387571 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:20:17 compute-1 nova_compute[225855]: 2026-01-20 15:20:17.891 225859 DEBUG nova.compute.manager [req-5a11617e-8c11-4005-9cf8-213fb71664d2 req-0f4ffa8f-cb27-4e7c-a4af-33152b5cf64c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 7a53c9b1-e64b-4a31-897a-bbe7d964cf45] Refreshing instance network info cache due to event network-changed-ab5264b7-ec64-46dd-b30d-981799387571. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 15:20:17 compute-1 nova_compute[225855]: 2026-01-20 15:20:17.891 225859 DEBUG oslo_concurrency.lockutils [req-5a11617e-8c11-4005-9cf8-213fb71664d2 req-0f4ffa8f-cb27-4e7c-a4af-33152b5cf64c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-7a53c9b1-e64b-4a31-897a-bbe7d964cf45" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:20:17 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:20:17 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:20:17 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:20:17.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:20:17 compute-1 nova_compute[225855]: 2026-01-20 15:20:17.959 225859 DEBUG nova.network.neutron [None req-dc9a0afd-b1a5-4c8b-b4fc-1789229ae1df cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] [instance: 7a53c9b1-e64b-4a31-897a-bbe7d964cf45] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 20 15:20:17 compute-1 sudo[308766]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:20:17 compute-1 sudo[308766]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:20:17 compute-1 sudo[308766]: pam_unix(sudo:session): session closed for user root
Jan 20 15:20:18 compute-1 sudo[308791]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:20:18 compute-1 sudo[308791]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:20:18 compute-1 sudo[308791]: pam_unix(sudo:session): session closed for user root
Jan 20 15:20:18 compute-1 sudo[308816]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:20:18 compute-1 sudo[308816]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:20:18 compute-1 sudo[308816]: pam_unix(sudo:session): session closed for user root
Jan 20 15:20:18 compute-1 sudo[308841]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Jan 20 15:20:18 compute-1 sudo[308841]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:20:18 compute-1 nova_compute[225855]: 2026-01-20 15:20:18.210 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:20:18 compute-1 sudo[308841]: pam_unix(sudo:session): session closed for user root
Jan 20 15:20:18 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e415 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:20:19 compute-1 sudo[308883]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:20:19 compute-1 sudo[308883]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:20:19 compute-1 sudo[308883]: pam_unix(sudo:session): session closed for user root
Jan 20 15:20:19 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:20:19 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:20:19 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:20:19.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:20:19 compute-1 sudo[308908]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:20:19 compute-1 sudo[308908]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:20:19 compute-1 sudo[308908]: pam_unix(sudo:session): session closed for user root
Jan 20 15:20:19 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:20:19 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 20 15:20:19 compute-1 ceph-mon[81775]: pgmap v2941: 321 pgs: 321 active+clean; 240 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 351 KiB/s rd, 3.6 MiB/s wr, 101 op/s
Jan 20 15:20:19 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:20:19 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:20:19 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:20:19 compute-1 sudo[308933]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:20:19 compute-1 sudo[308933]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:20:19 compute-1 sudo[308933]: pam_unix(sudo:session): session closed for user root
Jan 20 15:20:19 compute-1 sudo[308958]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 20 15:20:19 compute-1 sudo[308958]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:20:19 compute-1 nova_compute[225855]: 2026-01-20 15:20:19.918 225859 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1768922404.9171386, 2b31f3d7-81bd-4712-bcb1-98afd2dc0f44 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:20:19 compute-1 nova_compute[225855]: 2026-01-20 15:20:19.919 225859 INFO nova.compute.manager [-] [instance: 2b31f3d7-81bd-4712-bcb1-98afd2dc0f44] VM Stopped (Lifecycle Event)
Jan 20 15:20:19 compute-1 nova_compute[225855]: 2026-01-20 15:20:19.951 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:20:19 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:20:19 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:20:19 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:20:19.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:20:19 compute-1 nova_compute[225855]: 2026-01-20 15:20:19.993 225859 DEBUG nova.compute.manager [None req-dc2bf381-1c1c-486d-bcc0-446eaf6c8ccc - - - - - -] [instance: 2b31f3d7-81bd-4712-bcb1-98afd2dc0f44] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:20:20 compute-1 sudo[308958]: pam_unix(sudo:session): session closed for user root
Jan 20 15:20:20 compute-1 nova_compute[225855]: 2026-01-20 15:20:20.217 225859 DEBUG nova.network.neutron [None req-dc9a0afd-b1a5-4c8b-b4fc-1789229ae1df cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] [instance: 7a53c9b1-e64b-4a31-897a-bbe7d964cf45] Updating instance_info_cache with network_info: [{"id": "ab5264b7-ec64-46dd-b30d-981799387571", "address": "fa:16:3e:3c:70:df", "network": {"id": "76c2d716-7d14-4bc1-b83b-a3290ee99d9a", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-782760714-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9156c0a9920c4721843416b9a44404f9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapab5264b7-ec", "ovs_interfaceid": "ab5264b7-ec64-46dd-b30d-981799387571", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:20:20 compute-1 nova_compute[225855]: 2026-01-20 15:20:20.239 225859 DEBUG oslo_concurrency.lockutils [None req-dc9a0afd-b1a5-4c8b-b4fc-1789229ae1df cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Releasing lock "refresh_cache-7a53c9b1-e64b-4a31-897a-bbe7d964cf45" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:20:20 compute-1 nova_compute[225855]: 2026-01-20 15:20:20.240 225859 DEBUG nova.compute.manager [None req-dc9a0afd-b1a5-4c8b-b4fc-1789229ae1df cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] [instance: 7a53c9b1-e64b-4a31-897a-bbe7d964cf45] Instance network_info: |[{"id": "ab5264b7-ec64-46dd-b30d-981799387571", "address": "fa:16:3e:3c:70:df", "network": {"id": "76c2d716-7d14-4bc1-b83b-a3290ee99d9a", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-782760714-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9156c0a9920c4721843416b9a44404f9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapab5264b7-ec", "ovs_interfaceid": "ab5264b7-ec64-46dd-b30d-981799387571", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 20 15:20:20 compute-1 nova_compute[225855]: 2026-01-20 15:20:20.240 225859 DEBUG oslo_concurrency.lockutils [req-5a11617e-8c11-4005-9cf8-213fb71664d2 req-0f4ffa8f-cb27-4e7c-a4af-33152b5cf64c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-7a53c9b1-e64b-4a31-897a-bbe7d964cf45" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:20:20 compute-1 nova_compute[225855]: 2026-01-20 15:20:20.241 225859 DEBUG nova.network.neutron [req-5a11617e-8c11-4005-9cf8-213fb71664d2 req-0f4ffa8f-cb27-4e7c-a4af-33152b5cf64c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 7a53c9b1-e64b-4a31-897a-bbe7d964cf45] Refreshing network info cache for port ab5264b7-ec64-46dd-b30d-981799387571 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 15:20:20 compute-1 nova_compute[225855]: 2026-01-20 15:20:20.244 225859 DEBUG nova.virt.libvirt.driver [None req-dc9a0afd-b1a5-4c8b-b4fc-1789229ae1df cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] [instance: 7a53c9b1-e64b-4a31-897a-bbe7d964cf45] Start _get_guest_xml network_info=[{"id": "ab5264b7-ec64-46dd-b30d-981799387571", "address": "fa:16:3e:3c:70:df", "network": {"id": "76c2d716-7d14-4bc1-b83b-a3290ee99d9a", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-782760714-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9156c0a9920c4721843416b9a44404f9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapab5264b7-ec", "ovs_interfaceid": "ab5264b7-ec64-46dd-b30d-981799387571", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'device_type': 'disk', 'encryption_options': None, 'size': 0, 'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'guest_format': None, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'image_id': 'a32b3e07-16d8-46fd-9a7b-c242c432fcf9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 20 15:20:20 compute-1 nova_compute[225855]: 2026-01-20 15:20:20.247 225859 WARNING nova.virt.libvirt.driver [None req-dc9a0afd-b1a5-4c8b-b4fc-1789229ae1df cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 15:20:20 compute-1 nova_compute[225855]: 2026-01-20 15:20:20.252 225859 DEBUG nova.virt.libvirt.host [None req-dc9a0afd-b1a5-4c8b-b4fc-1789229ae1df cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 20 15:20:20 compute-1 nova_compute[225855]: 2026-01-20 15:20:20.253 225859 DEBUG nova.virt.libvirt.host [None req-dc9a0afd-b1a5-4c8b-b4fc-1789229ae1df cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 20 15:20:20 compute-1 nova_compute[225855]: 2026-01-20 15:20:20.256 225859 DEBUG nova.virt.libvirt.host [None req-dc9a0afd-b1a5-4c8b-b4fc-1789229ae1df cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 20 15:20:20 compute-1 nova_compute[225855]: 2026-01-20 15:20:20.257 225859 DEBUG nova.virt.libvirt.host [None req-dc9a0afd-b1a5-4c8b-b4fc-1789229ae1df cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 20 15:20:20 compute-1 nova_compute[225855]: 2026-01-20 15:20:20.258 225859 DEBUG nova.virt.libvirt.driver [None req-dc9a0afd-b1a5-4c8b-b4fc-1789229ae1df cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 20 15:20:20 compute-1 nova_compute[225855]: 2026-01-20 15:20:20.258 225859 DEBUG nova.virt.hardware [None req-dc9a0afd-b1a5-4c8b-b4fc-1789229ae1df cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-20T14:21:55Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='522deaab-a741-4dbb-932d-d8b13a211c33',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 20 15:20:20 compute-1 nova_compute[225855]: 2026-01-20 15:20:20.258 225859 DEBUG nova.virt.hardware [None req-dc9a0afd-b1a5-4c8b-b4fc-1789229ae1df cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 20 15:20:20 compute-1 nova_compute[225855]: 2026-01-20 15:20:20.259 225859 DEBUG nova.virt.hardware [None req-dc9a0afd-b1a5-4c8b-b4fc-1789229ae1df cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 20 15:20:20 compute-1 nova_compute[225855]: 2026-01-20 15:20:20.259 225859 DEBUG nova.virt.hardware [None req-dc9a0afd-b1a5-4c8b-b4fc-1789229ae1df cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 20 15:20:20 compute-1 nova_compute[225855]: 2026-01-20 15:20:20.259 225859 DEBUG nova.virt.hardware [None req-dc9a0afd-b1a5-4c8b-b4fc-1789229ae1df cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 20 15:20:20 compute-1 nova_compute[225855]: 2026-01-20 15:20:20.259 225859 DEBUG nova.virt.hardware [None req-dc9a0afd-b1a5-4c8b-b4fc-1789229ae1df cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 20 15:20:20 compute-1 nova_compute[225855]: 2026-01-20 15:20:20.259 225859 DEBUG nova.virt.hardware [None req-dc9a0afd-b1a5-4c8b-b4fc-1789229ae1df cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 20 15:20:20 compute-1 nova_compute[225855]: 2026-01-20 15:20:20.260 225859 DEBUG nova.virt.hardware [None req-dc9a0afd-b1a5-4c8b-b4fc-1789229ae1df cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 20 15:20:20 compute-1 nova_compute[225855]: 2026-01-20 15:20:20.260 225859 DEBUG nova.virt.hardware [None req-dc9a0afd-b1a5-4c8b-b4fc-1789229ae1df cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 20 15:20:20 compute-1 nova_compute[225855]: 2026-01-20 15:20:20.260 225859 DEBUG nova.virt.hardware [None req-dc9a0afd-b1a5-4c8b-b4fc-1789229ae1df cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 20 15:20:20 compute-1 nova_compute[225855]: 2026-01-20 15:20:20.260 225859 DEBUG nova.virt.hardware [None req-dc9a0afd-b1a5-4c8b-b4fc-1789229ae1df cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 20 15:20:20 compute-1 nova_compute[225855]: 2026-01-20 15:20:20.263 225859 DEBUG oslo_concurrency.processutils [None req-dc9a0afd-b1a5-4c8b-b4fc-1789229ae1df cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:20:20 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 20 15:20:20 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:20:20 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 15:20:20 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:20:20 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 15:20:20 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 15:20:20 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:20:20 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 15:20:20 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3496344543' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:20:20 compute-1 nova_compute[225855]: 2026-01-20 15:20:20.742 225859 DEBUG oslo_concurrency.processutils [None req-dc9a0afd-b1a5-4c8b-b4fc-1789229ae1df cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:20:20 compute-1 nova_compute[225855]: 2026-01-20 15:20:20.770 225859 DEBUG nova.storage.rbd_utils [None req-dc9a0afd-b1a5-4c8b-b4fc-1789229ae1df cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] rbd image 7a53c9b1-e64b-4a31-897a-bbe7d964cf45_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:20:20 compute-1 nova_compute[225855]: 2026-01-20 15:20:20.774 225859 DEBUG oslo_concurrency.processutils [None req-dc9a0afd-b1a5-4c8b-b4fc-1789229ae1df cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:20:21 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 15:20:21 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/243034538' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:20:21 compute-1 nova_compute[225855]: 2026-01-20 15:20:21.233 225859 DEBUG oslo_concurrency.processutils [None req-dc9a0afd-b1a5-4c8b-b4fc-1789229ae1df cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:20:21 compute-1 nova_compute[225855]: 2026-01-20 15:20:21.235 225859 DEBUG nova.virt.libvirt.vif [None req-dc9a0afd-b1a5-4c8b-b4fc-1789229ae1df cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T15:20:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachVolumeNegativeTest-server-512186019',display_name='tempest-AttachVolumeNegativeTest-server-512186019',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-attachvolumenegativetest-server-512186019',id=197,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBI8d6pfZzu0QN0+ud36hsYGEa2fue/k/EBJ/5AAbAw966Nprd6b6gecK+XPS3vJw5O7JCevyXRxpx1xed28ouQO1W8vY3Q7SPAOn3X0ewiZY79+ulj2hj305nyB4SNFMjQ==',key_name='tempest-keypair-2087250418',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9156c0a9920c4721843416b9a44404f9',ramdisk_id='',reservation_id='r-1dq7a0u0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachVolumeNegativeTest-1505789262',owner_user_name='tempest-AttachVolumeNegativeTest-1505789262-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T15:20:13Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='cd9a8f26b71f4631a387e555e6b18428',uuid=7a53c9b1-e64b-4a31-897a-bbe7d964cf45,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ab5264b7-ec64-46dd-b30d-981799387571", "address": "fa:16:3e:3c:70:df", "network": {"id": "76c2d716-7d14-4bc1-b83b-a3290ee99d9a", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-782760714-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9156c0a9920c4721843416b9a44404f9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapab5264b7-ec", "ovs_interfaceid": "ab5264b7-ec64-46dd-b30d-981799387571", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 20 15:20:21 compute-1 nova_compute[225855]: 2026-01-20 15:20:21.236 225859 DEBUG nova.network.os_vif_util [None req-dc9a0afd-b1a5-4c8b-b4fc-1789229ae1df cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Converting VIF {"id": "ab5264b7-ec64-46dd-b30d-981799387571", "address": "fa:16:3e:3c:70:df", "network": {"id": "76c2d716-7d14-4bc1-b83b-a3290ee99d9a", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-782760714-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9156c0a9920c4721843416b9a44404f9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapab5264b7-ec", "ovs_interfaceid": "ab5264b7-ec64-46dd-b30d-981799387571", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 15:20:21 compute-1 nova_compute[225855]: 2026-01-20 15:20:21.237 225859 DEBUG nova.network.os_vif_util [None req-dc9a0afd-b1a5-4c8b-b4fc-1789229ae1df cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3c:70:df,bridge_name='br-int',has_traffic_filtering=True,id=ab5264b7-ec64-46dd-b30d-981799387571,network=Network(76c2d716-7d14-4bc1-b83b-a3290ee99d9a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapab5264b7-ec') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 15:20:21 compute-1 nova_compute[225855]: 2026-01-20 15:20:21.238 225859 DEBUG nova.objects.instance [None req-dc9a0afd-b1a5-4c8b-b4fc-1789229ae1df cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Lazy-loading 'pci_devices' on Instance uuid 7a53c9b1-e64b-4a31-897a-bbe7d964cf45 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:20:21 compute-1 nova_compute[225855]: 2026-01-20 15:20:21.268 225859 DEBUG nova.virt.libvirt.driver [None req-dc9a0afd-b1a5-4c8b-b4fc-1789229ae1df cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] [instance: 7a53c9b1-e64b-4a31-897a-bbe7d964cf45] End _get_guest_xml xml=<domain type="kvm">
Jan 20 15:20:21 compute-1 nova_compute[225855]:   <uuid>7a53c9b1-e64b-4a31-897a-bbe7d964cf45</uuid>
Jan 20 15:20:21 compute-1 nova_compute[225855]:   <name>instance-000000c5</name>
Jan 20 15:20:21 compute-1 nova_compute[225855]:   <memory>131072</memory>
Jan 20 15:20:21 compute-1 nova_compute[225855]:   <vcpu>1</vcpu>
Jan 20 15:20:21 compute-1 nova_compute[225855]:   <metadata>
Jan 20 15:20:21 compute-1 nova_compute[225855]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 15:20:21 compute-1 nova_compute[225855]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 15:20:21 compute-1 nova_compute[225855]:       <nova:name>tempest-AttachVolumeNegativeTest-server-512186019</nova:name>
Jan 20 15:20:21 compute-1 nova_compute[225855]:       <nova:creationTime>2026-01-20 15:20:20</nova:creationTime>
Jan 20 15:20:21 compute-1 nova_compute[225855]:       <nova:flavor name="m1.nano">
Jan 20 15:20:21 compute-1 nova_compute[225855]:         <nova:memory>128</nova:memory>
Jan 20 15:20:21 compute-1 nova_compute[225855]:         <nova:disk>1</nova:disk>
Jan 20 15:20:21 compute-1 nova_compute[225855]:         <nova:swap>0</nova:swap>
Jan 20 15:20:21 compute-1 nova_compute[225855]:         <nova:ephemeral>0</nova:ephemeral>
Jan 20 15:20:21 compute-1 nova_compute[225855]:         <nova:vcpus>1</nova:vcpus>
Jan 20 15:20:21 compute-1 nova_compute[225855]:       </nova:flavor>
Jan 20 15:20:21 compute-1 nova_compute[225855]:       <nova:owner>
Jan 20 15:20:21 compute-1 nova_compute[225855]:         <nova:user uuid="cd9a8f26b71f4631a387e555e6b18428">tempest-AttachVolumeNegativeTest-1505789262-project-member</nova:user>
Jan 20 15:20:21 compute-1 nova_compute[225855]:         <nova:project uuid="9156c0a9920c4721843416b9a44404f9">tempest-AttachVolumeNegativeTest-1505789262</nova:project>
Jan 20 15:20:21 compute-1 nova_compute[225855]:       </nova:owner>
Jan 20 15:20:21 compute-1 nova_compute[225855]:       <nova:root type="image" uuid="a32b3e07-16d8-46fd-9a7b-c242c432fcf9"/>
Jan 20 15:20:21 compute-1 nova_compute[225855]:       <nova:ports>
Jan 20 15:20:21 compute-1 nova_compute[225855]:         <nova:port uuid="ab5264b7-ec64-46dd-b30d-981799387571">
Jan 20 15:20:21 compute-1 nova_compute[225855]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Jan 20 15:20:21 compute-1 nova_compute[225855]:         </nova:port>
Jan 20 15:20:21 compute-1 nova_compute[225855]:       </nova:ports>
Jan 20 15:20:21 compute-1 nova_compute[225855]:     </nova:instance>
Jan 20 15:20:21 compute-1 nova_compute[225855]:   </metadata>
Jan 20 15:20:21 compute-1 nova_compute[225855]:   <sysinfo type="smbios">
Jan 20 15:20:21 compute-1 nova_compute[225855]:     <system>
Jan 20 15:20:21 compute-1 nova_compute[225855]:       <entry name="manufacturer">RDO</entry>
Jan 20 15:20:21 compute-1 nova_compute[225855]:       <entry name="product">OpenStack Compute</entry>
Jan 20 15:20:21 compute-1 nova_compute[225855]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 15:20:21 compute-1 nova_compute[225855]:       <entry name="serial">7a53c9b1-e64b-4a31-897a-bbe7d964cf45</entry>
Jan 20 15:20:21 compute-1 nova_compute[225855]:       <entry name="uuid">7a53c9b1-e64b-4a31-897a-bbe7d964cf45</entry>
Jan 20 15:20:21 compute-1 nova_compute[225855]:       <entry name="family">Virtual Machine</entry>
Jan 20 15:20:21 compute-1 nova_compute[225855]:     </system>
Jan 20 15:20:21 compute-1 nova_compute[225855]:   </sysinfo>
Jan 20 15:20:21 compute-1 nova_compute[225855]:   <os>
Jan 20 15:20:21 compute-1 nova_compute[225855]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 20 15:20:21 compute-1 nova_compute[225855]:     <boot dev="hd"/>
Jan 20 15:20:21 compute-1 nova_compute[225855]:     <smbios mode="sysinfo"/>
Jan 20 15:20:21 compute-1 nova_compute[225855]:   </os>
Jan 20 15:20:21 compute-1 nova_compute[225855]:   <features>
Jan 20 15:20:21 compute-1 nova_compute[225855]:     <acpi/>
Jan 20 15:20:21 compute-1 nova_compute[225855]:     <apic/>
Jan 20 15:20:21 compute-1 nova_compute[225855]:     <vmcoreinfo/>
Jan 20 15:20:21 compute-1 nova_compute[225855]:   </features>
Jan 20 15:20:21 compute-1 nova_compute[225855]:   <clock offset="utc">
Jan 20 15:20:21 compute-1 nova_compute[225855]:     <timer name="pit" tickpolicy="delay"/>
Jan 20 15:20:21 compute-1 nova_compute[225855]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 20 15:20:21 compute-1 nova_compute[225855]:     <timer name="hpet" present="no"/>
Jan 20 15:20:21 compute-1 nova_compute[225855]:   </clock>
Jan 20 15:20:21 compute-1 nova_compute[225855]:   <cpu mode="custom" match="exact">
Jan 20 15:20:21 compute-1 nova_compute[225855]:     <model>Nehalem</model>
Jan 20 15:20:21 compute-1 nova_compute[225855]:     <topology sockets="1" cores="1" threads="1"/>
Jan 20 15:20:21 compute-1 nova_compute[225855]:   </cpu>
Jan 20 15:20:21 compute-1 nova_compute[225855]:   <devices>
Jan 20 15:20:21 compute-1 nova_compute[225855]:     <disk type="network" device="disk">
Jan 20 15:20:21 compute-1 nova_compute[225855]:       <driver type="raw" cache="none"/>
Jan 20 15:20:21 compute-1 nova_compute[225855]:       <source protocol="rbd" name="vms/7a53c9b1-e64b-4a31-897a-bbe7d964cf45_disk">
Jan 20 15:20:21 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 15:20:21 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 15:20:21 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 15:20:21 compute-1 nova_compute[225855]:       </source>
Jan 20 15:20:21 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 15:20:21 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 15:20:21 compute-1 nova_compute[225855]:       </auth>
Jan 20 15:20:21 compute-1 nova_compute[225855]:       <target dev="vda" bus="virtio"/>
Jan 20 15:20:21 compute-1 nova_compute[225855]:     </disk>
Jan 20 15:20:21 compute-1 nova_compute[225855]:     <disk type="network" device="cdrom">
Jan 20 15:20:21 compute-1 nova_compute[225855]:       <driver type="raw" cache="none"/>
Jan 20 15:20:21 compute-1 nova_compute[225855]:       <source protocol="rbd" name="vms/7a53c9b1-e64b-4a31-897a-bbe7d964cf45_disk.config">
Jan 20 15:20:21 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 15:20:21 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 15:20:21 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 15:20:21 compute-1 nova_compute[225855]:       </source>
Jan 20 15:20:21 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 15:20:21 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 15:20:21 compute-1 nova_compute[225855]:       </auth>
Jan 20 15:20:21 compute-1 nova_compute[225855]:       <target dev="sda" bus="sata"/>
Jan 20 15:20:21 compute-1 nova_compute[225855]:     </disk>
Jan 20 15:20:21 compute-1 nova_compute[225855]:     <interface type="ethernet">
Jan 20 15:20:21 compute-1 nova_compute[225855]:       <mac address="fa:16:3e:3c:70:df"/>
Jan 20 15:20:21 compute-1 nova_compute[225855]:       <model type="virtio"/>
Jan 20 15:20:21 compute-1 nova_compute[225855]:       <driver name="vhost" rx_queue_size="512"/>
Jan 20 15:20:21 compute-1 nova_compute[225855]:       <mtu size="1442"/>
Jan 20 15:20:21 compute-1 nova_compute[225855]:       <target dev="tapab5264b7-ec"/>
Jan 20 15:20:21 compute-1 nova_compute[225855]:     </interface>
Jan 20 15:20:21 compute-1 nova_compute[225855]:     <serial type="pty">
Jan 20 15:20:21 compute-1 nova_compute[225855]:       <log file="/var/lib/nova/instances/7a53c9b1-e64b-4a31-897a-bbe7d964cf45/console.log" append="off"/>
Jan 20 15:20:21 compute-1 nova_compute[225855]:     </serial>
Jan 20 15:20:21 compute-1 nova_compute[225855]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 20 15:20:21 compute-1 nova_compute[225855]:     <video>
Jan 20 15:20:21 compute-1 nova_compute[225855]:       <model type="virtio"/>
Jan 20 15:20:21 compute-1 nova_compute[225855]:     </video>
Jan 20 15:20:21 compute-1 nova_compute[225855]:     <input type="tablet" bus="usb"/>
Jan 20 15:20:21 compute-1 nova_compute[225855]:     <rng model="virtio">
Jan 20 15:20:21 compute-1 nova_compute[225855]:       <backend model="random">/dev/urandom</backend>
Jan 20 15:20:21 compute-1 nova_compute[225855]:     </rng>
Jan 20 15:20:21 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root"/>
Jan 20 15:20:21 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:20:21 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:20:21 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:20:21 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:20:21 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:20:21 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:20:21 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:20:21 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:20:21 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:20:21 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:20:21 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:20:21 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:20:21 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:20:21 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:20:21 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:20:21 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:20:21 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:20:21 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:20:21 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:20:21 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:20:21 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:20:21 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:20:21 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:20:21 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:20:21 compute-1 nova_compute[225855]:     <controller type="usb" index="0"/>
Jan 20 15:20:21 compute-1 nova_compute[225855]:     <memballoon model="virtio">
Jan 20 15:20:21 compute-1 nova_compute[225855]:       <stats period="10"/>
Jan 20 15:20:21 compute-1 nova_compute[225855]:     </memballoon>
Jan 20 15:20:21 compute-1 nova_compute[225855]:   </devices>
Jan 20 15:20:21 compute-1 nova_compute[225855]: </domain>
Jan 20 15:20:21 compute-1 nova_compute[225855]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 20 15:20:21 compute-1 nova_compute[225855]: 2026-01-20 15:20:21.269 225859 DEBUG nova.compute.manager [None req-dc9a0afd-b1a5-4c8b-b4fc-1789229ae1df cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] [instance: 7a53c9b1-e64b-4a31-897a-bbe7d964cf45] Preparing to wait for external event network-vif-plugged-ab5264b7-ec64-46dd-b30d-981799387571 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 20 15:20:21 compute-1 nova_compute[225855]: 2026-01-20 15:20:21.270 225859 DEBUG oslo_concurrency.lockutils [None req-dc9a0afd-b1a5-4c8b-b4fc-1789229ae1df cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Acquiring lock "7a53c9b1-e64b-4a31-897a-bbe7d964cf45-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:20:21 compute-1 nova_compute[225855]: 2026-01-20 15:20:21.270 225859 DEBUG oslo_concurrency.lockutils [None req-dc9a0afd-b1a5-4c8b-b4fc-1789229ae1df cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Lock "7a53c9b1-e64b-4a31-897a-bbe7d964cf45-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:20:21 compute-1 nova_compute[225855]: 2026-01-20 15:20:21.270 225859 DEBUG oslo_concurrency.lockutils [None req-dc9a0afd-b1a5-4c8b-b4fc-1789229ae1df cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Lock "7a53c9b1-e64b-4a31-897a-bbe7d964cf45-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:20:21 compute-1 nova_compute[225855]: 2026-01-20 15:20:21.271 225859 DEBUG nova.virt.libvirt.vif [None req-dc9a0afd-b1a5-4c8b-b4fc-1789229ae1df cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T15:20:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachVolumeNegativeTest-server-512186019',display_name='tempest-AttachVolumeNegativeTest-server-512186019',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-attachvolumenegativetest-server-512186019',id=197,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBI8d6pfZzu0QN0+ud36hsYGEa2fue/k/EBJ/5AAbAw966Nprd6b6gecK+XPS3vJw5O7JCevyXRxpx1xed28ouQO1W8vY3Q7SPAOn3X0ewiZY79+ulj2hj305nyB4SNFMjQ==',key_name='tempest-keypair-2087250418',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9156c0a9920c4721843416b9a44404f9',ramdisk_id='',reservation_id='r-1dq7a0u0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachVolumeNegativeTest-1505789262',owner_user_name='tempest-AttachVolumeNegativeTest-1505789262-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T15:20:13Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='cd9a8f26b71f4631a387e555e6b18428',uuid=7a53c9b1-e64b-4a31-897a-bbe7d964cf45,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ab5264b7-ec64-46dd-b30d-981799387571", "address": "fa:16:3e:3c:70:df", "network": {"id": "76c2d716-7d14-4bc1-b83b-a3290ee99d9a", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-782760714-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9156c0a9920c4721843416b9a44404f9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapab5264b7-ec", "ovs_interfaceid": "ab5264b7-ec64-46dd-b30d-981799387571", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 20 15:20:21 compute-1 nova_compute[225855]: 2026-01-20 15:20:21.271 225859 DEBUG nova.network.os_vif_util [None req-dc9a0afd-b1a5-4c8b-b4fc-1789229ae1df cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Converting VIF {"id": "ab5264b7-ec64-46dd-b30d-981799387571", "address": "fa:16:3e:3c:70:df", "network": {"id": "76c2d716-7d14-4bc1-b83b-a3290ee99d9a", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-782760714-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9156c0a9920c4721843416b9a44404f9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapab5264b7-ec", "ovs_interfaceid": "ab5264b7-ec64-46dd-b30d-981799387571", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 15:20:21 compute-1 nova_compute[225855]: 2026-01-20 15:20:21.272 225859 DEBUG nova.network.os_vif_util [None req-dc9a0afd-b1a5-4c8b-b4fc-1789229ae1df cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3c:70:df,bridge_name='br-int',has_traffic_filtering=True,id=ab5264b7-ec64-46dd-b30d-981799387571,network=Network(76c2d716-7d14-4bc1-b83b-a3290ee99d9a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapab5264b7-ec') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 15:20:21 compute-1 nova_compute[225855]: 2026-01-20 15:20:21.272 225859 DEBUG os_vif [None req-dc9a0afd-b1a5-4c8b-b4fc-1789229ae1df cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3c:70:df,bridge_name='br-int',has_traffic_filtering=True,id=ab5264b7-ec64-46dd-b30d-981799387571,network=Network(76c2d716-7d14-4bc1-b83b-a3290ee99d9a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapab5264b7-ec') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 20 15:20:21 compute-1 nova_compute[225855]: 2026-01-20 15:20:21.273 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:20:21 compute-1 nova_compute[225855]: 2026-01-20 15:20:21.273 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:20:21 compute-1 nova_compute[225855]: 2026-01-20 15:20:21.274 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 15:20:21 compute-1 nova_compute[225855]: 2026-01-20 15:20:21.278 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:20:21 compute-1 nova_compute[225855]: 2026-01-20 15:20:21.278 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapab5264b7-ec, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:20:21 compute-1 nova_compute[225855]: 2026-01-20 15:20:21.278 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapab5264b7-ec, col_values=(('external_ids', {'iface-id': 'ab5264b7-ec64-46dd-b30d-981799387571', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:3c:70:df', 'vm-uuid': '7a53c9b1-e64b-4a31-897a-bbe7d964cf45'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:20:21 compute-1 NetworkManager[49104]: <info>  [1768922421.2811] manager: (tapab5264b7-ec): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/364)
Jan 20 15:20:21 compute-1 nova_compute[225855]: 2026-01-20 15:20:21.283 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 15:20:21 compute-1 nova_compute[225855]: 2026-01-20 15:20:21.287 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:20:21 compute-1 nova_compute[225855]: 2026-01-20 15:20:21.288 225859 INFO os_vif [None req-dc9a0afd-b1a5-4c8b-b4fc-1789229ae1df cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3c:70:df,bridge_name='br-int',has_traffic_filtering=True,id=ab5264b7-ec64-46dd-b30d-981799387571,network=Network(76c2d716-7d14-4bc1-b83b-a3290ee99d9a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapab5264b7-ec')
Jan 20 15:20:21 compute-1 nova_compute[225855]: 2026-01-20 15:20:21.440 225859 DEBUG nova.virt.libvirt.driver [None req-dc9a0afd-b1a5-4c8b-b4fc-1789229ae1df cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 15:20:21 compute-1 nova_compute[225855]: 2026-01-20 15:20:21.441 225859 DEBUG nova.virt.libvirt.driver [None req-dc9a0afd-b1a5-4c8b-b4fc-1789229ae1df cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 15:20:21 compute-1 nova_compute[225855]: 2026-01-20 15:20:21.441 225859 DEBUG nova.virt.libvirt.driver [None req-dc9a0afd-b1a5-4c8b-b4fc-1789229ae1df cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] No VIF found with MAC fa:16:3e:3c:70:df, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 20 15:20:21 compute-1 nova_compute[225855]: 2026-01-20 15:20:21.442 225859 INFO nova.virt.libvirt.driver [None req-dc9a0afd-b1a5-4c8b-b4fc-1789229ae1df cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] [instance: 7a53c9b1-e64b-4a31-897a-bbe7d964cf45] Using config drive
Jan 20 15:20:21 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:20:21 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:20:21 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:20:21.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:20:21 compute-1 nova_compute[225855]: 2026-01-20 15:20:21.493 225859 DEBUG nova.storage.rbd_utils [None req-dc9a0afd-b1a5-4c8b-b4fc-1789229ae1df cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] rbd image 7a53c9b1-e64b-4a31-897a-bbe7d964cf45_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:20:21 compute-1 ceph-mon[81775]: pgmap v2942: 321 pgs: 321 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 352 KiB/s rd, 3.9 MiB/s wr, 105 op/s
Jan 20 15:20:21 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/3496344543' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:20:21 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/243034538' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:20:21 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:20:21 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:20:21 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:20:21.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:20:22 compute-1 nova_compute[225855]: 2026-01-20 15:20:22.083 225859 INFO nova.virt.libvirt.driver [None req-dc9a0afd-b1a5-4c8b-b4fc-1789229ae1df cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] [instance: 7a53c9b1-e64b-4a31-897a-bbe7d964cf45] Creating config drive at /var/lib/nova/instances/7a53c9b1-e64b-4a31-897a-bbe7d964cf45/disk.config
Jan 20 15:20:22 compute-1 nova_compute[225855]: 2026-01-20 15:20:22.094 225859 DEBUG oslo_concurrency.processutils [None req-dc9a0afd-b1a5-4c8b-b4fc-1789229ae1df cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7a53c9b1-e64b-4a31-897a-bbe7d964cf45/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp0_t7c0s3 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:20:22 compute-1 podman[309097]: 2026-01-20 15:20:22.126775497 +0000 UTC m=+0.154069308 container health_status 72be95c12e041eb2cd1a16ad70dfa1391d9f03a5f3180a614ead9459dcfcd46f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, 
org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 20 15:20:22 compute-1 nova_compute[225855]: 2026-01-20 15:20:22.247 225859 DEBUG oslo_concurrency.processutils [None req-dc9a0afd-b1a5-4c8b-b4fc-1789229ae1df cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7a53c9b1-e64b-4a31-897a-bbe7d964cf45/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp0_t7c0s3" returned: 0 in 0.152s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:20:22 compute-1 nova_compute[225855]: 2026-01-20 15:20:22.274 225859 DEBUG nova.storage.rbd_utils [None req-dc9a0afd-b1a5-4c8b-b4fc-1789229ae1df cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] rbd image 7a53c9b1-e64b-4a31-897a-bbe7d964cf45_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:20:22 compute-1 nova_compute[225855]: 2026-01-20 15:20:22.278 225859 DEBUG oslo_concurrency.processutils [None req-dc9a0afd-b1a5-4c8b-b4fc-1789229ae1df cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/7a53c9b1-e64b-4a31-897a-bbe7d964cf45/disk.config 7a53c9b1-e64b-4a31-897a-bbe7d964cf45_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:20:22 compute-1 nova_compute[225855]: 2026-01-20 15:20:22.427 225859 DEBUG oslo_concurrency.processutils [None req-dc9a0afd-b1a5-4c8b-b4fc-1789229ae1df cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/7a53c9b1-e64b-4a31-897a-bbe7d964cf45/disk.config 7a53c9b1-e64b-4a31-897a-bbe7d964cf45_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.149s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:20:22 compute-1 nova_compute[225855]: 2026-01-20 15:20:22.427 225859 INFO nova.virt.libvirt.driver [None req-dc9a0afd-b1a5-4c8b-b4fc-1789229ae1df cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] [instance: 7a53c9b1-e64b-4a31-897a-bbe7d964cf45] Deleting local config drive /var/lib/nova/instances/7a53c9b1-e64b-4a31-897a-bbe7d964cf45/disk.config because it was imported into RBD.
Jan 20 15:20:22 compute-1 kernel: tapab5264b7-ec: entered promiscuous mode
Jan 20 15:20:22 compute-1 ovn_controller[130490]: 2026-01-20T15:20:22Z|00873|binding|INFO|Claiming lport ab5264b7-ec64-46dd-b30d-981799387571 for this chassis.
Jan 20 15:20:22 compute-1 ovn_controller[130490]: 2026-01-20T15:20:22Z|00874|binding|INFO|ab5264b7-ec64-46dd-b30d-981799387571: Claiming fa:16:3e:3c:70:df 10.100.0.6
Jan 20 15:20:22 compute-1 nova_compute[225855]: 2026-01-20 15:20:22.481 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:20:22 compute-1 NetworkManager[49104]: <info>  [1768922422.4826] manager: (tapab5264b7-ec): new Tun device (/org/freedesktop/NetworkManager/Devices/365)
Jan 20 15:20:22 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:20:22.494 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3c:70:df 10.100.0.6'], port_security=['fa:16:3e:3c:70:df 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '7a53c9b1-e64b-4a31-897a-bbe7d964cf45', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-76c2d716-7d14-4bc1-b83b-a3290ee99d9a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9156c0a9920c4721843416b9a44404f9', 'neutron:revision_number': '2', 'neutron:security_group_ids': '77f773a6-dc7f-4790-9c9b-d69f30c72eb2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2bfc4e2a-eeed-480e-aa18-68fc6c8f2cc2, chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], logical_port=ab5264b7-ec64-46dd-b30d-981799387571) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:20:22 compute-1 ovn_controller[130490]: 2026-01-20T15:20:22Z|00875|binding|INFO|Setting lport ab5264b7-ec64-46dd-b30d-981799387571 ovn-installed in OVS
Jan 20 15:20:22 compute-1 ovn_controller[130490]: 2026-01-20T15:20:22Z|00876|binding|INFO|Setting lport ab5264b7-ec64-46dd-b30d-981799387571 up in Southbound
Jan 20 15:20:22 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:20:22.495 140354 INFO neutron.agent.ovn.metadata.agent [-] Port ab5264b7-ec64-46dd-b30d-981799387571 in datapath 76c2d716-7d14-4bc1-b83b-a3290ee99d9a bound to our chassis
Jan 20 15:20:22 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:20:22.496 140354 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 76c2d716-7d14-4bc1-b83b-a3290ee99d9a
Jan 20 15:20:22 compute-1 nova_compute[225855]: 2026-01-20 15:20:22.496 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:20:22 compute-1 nova_compute[225855]: 2026-01-20 15:20:22.499 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:20:22 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:20:22.507 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[9f18d1c1-9f3f-48c4-8d43-aa858af68c70]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:20:22 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:20:22.508 140354 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap76c2d716-71 in ovnmeta-76c2d716-7d14-4bc1-b83b-a3290ee99d9a namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 20 15:20:22 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:20:22.510 229707 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap76c2d716-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 20 15:20:22 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:20:22.510 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[e4f22ee9-0d74-4731-8778-f1f4114c5276]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:20:22 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:20:22.511 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[6cd28157-74ac-4a48-a3ad-436b3822fa35]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:20:22 compute-1 systemd-udevd[309178]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 15:20:22 compute-1 systemd-machined[194361]: New machine qemu-103-instance-000000c5.
Jan 20 15:20:22 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:20:22.522 140466 DEBUG oslo.privsep.daemon [-] privsep: reply[2e99f971-bd02-4233-9190-22280d65da73]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:20:22 compute-1 NetworkManager[49104]: <info>  [1768922422.5269] device (tapab5264b7-ec): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 20 15:20:22 compute-1 NetworkManager[49104]: <info>  [1768922422.5280] device (tapab5264b7-ec): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 20 15:20:22 compute-1 systemd[1]: Started Virtual Machine qemu-103-instance-000000c5.
Jan 20 15:20:22 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:20:22.546 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[5e4905c6-d8e3-48e8-96a3-0b41e671eb71]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:20:22 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:20:22.575 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[cdaa1cb6-ef52-45ae-970a-30fd1c7782aa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:20:22 compute-1 NetworkManager[49104]: <info>  [1768922422.5812] manager: (tap76c2d716-70): new Veth device (/org/freedesktop/NetworkManager/Devices/366)
Jan 20 15:20:22 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:20:22.580 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[fc1e74cc-6806-4b57-ba4e-1f0b455020e6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:20:22 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:20:22.610 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[6bc9986b-3e25-4151-8c0f-af3f88629015]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:20:22 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:20:22.614 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[66a304e7-3cee-4330-83d8-6e70eb78a425]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:20:22 compute-1 NetworkManager[49104]: <info>  [1768922422.6367] device (tap76c2d716-70): carrier: link connected
Jan 20 15:20:22 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:20:22.642 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[ada54852-843a-48a0-b26a-ce3a849f12e4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:20:22 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:20:22.660 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[c4aa291b-4a90-40d8-abee-eacae454bc63]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap76c2d716-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2e:44:ab'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 248], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 746156, 'reachable_time': 15011, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 309210, 'error': None, 'target': 'ovnmeta-76c2d716-7d14-4bc1-b83b-a3290ee99d9a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:20:22 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/409901699' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:20:22 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:20:22.677 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[6a01f89a-5c2b-4d25-b86c-ef13ce03f4d7]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe2e:44ab'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 746156, 'tstamp': 746156}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 309211, 'error': None, 'target': 'ovnmeta-76c2d716-7d14-4bc1-b83b-a3290ee99d9a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:20:22 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:20:22.696 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[c83a8de1-305f-4f57-8458-5139d042997e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap76c2d716-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2e:44:ab'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 248], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 746156, 'reachable_time': 15011, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 309212, 'error': None, 'target': 'ovnmeta-76c2d716-7d14-4bc1-b83b-a3290ee99d9a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:20:22 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:20:22.724 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[62883f93-6583-48b8-ab03-f996adaea3bd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:20:22 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:20:22.781 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[62160ad2-ea8e-46a4-9b88-28cae7061c68]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:20:22 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:20:22.782 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap76c2d716-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:20:22 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:20:22.783 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 15:20:22 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:20:22.783 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap76c2d716-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:20:22 compute-1 nova_compute[225855]: 2026-01-20 15:20:22.785 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:20:22 compute-1 NetworkManager[49104]: <info>  [1768922422.7861] manager: (tap76c2d716-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/367)
Jan 20 15:20:22 compute-1 kernel: tap76c2d716-70: entered promiscuous mode
Jan 20 15:20:22 compute-1 nova_compute[225855]: 2026-01-20 15:20:22.789 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:20:22 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:20:22.791 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap76c2d716-70, col_values=(('external_ids', {'iface-id': '2c0bba0e-e9b6-4ece-8349-62642b94d91d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:20:22 compute-1 nova_compute[225855]: 2026-01-20 15:20:22.792 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:20:22 compute-1 ovn_controller[130490]: 2026-01-20T15:20:22Z|00877|binding|INFO|Releasing lport 2c0bba0e-e9b6-4ece-8349-62642b94d91d from this chassis (sb_readonly=0)
Jan 20 15:20:22 compute-1 nova_compute[225855]: 2026-01-20 15:20:22.792 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:20:22 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:20:22.793 140354 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/76c2d716-7d14-4bc1-b83b-a3290ee99d9a.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/76c2d716-7d14-4bc1-b83b-a3290ee99d9a.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 20 15:20:22 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:20:22.794 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[2b922365-37e2-4d99-8007-0d96abbf9e32]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:20:22 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:20:22.795 140354 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 20 15:20:22 compute-1 ovn_metadata_agent[140349]: global
Jan 20 15:20:22 compute-1 ovn_metadata_agent[140349]:     log         /dev/log local0 debug
Jan 20 15:20:22 compute-1 ovn_metadata_agent[140349]:     log-tag     haproxy-metadata-proxy-76c2d716-7d14-4bc1-b83b-a3290ee99d9a
Jan 20 15:20:22 compute-1 ovn_metadata_agent[140349]:     user        root
Jan 20 15:20:22 compute-1 ovn_metadata_agent[140349]:     group       root
Jan 20 15:20:22 compute-1 ovn_metadata_agent[140349]:     maxconn     1024
Jan 20 15:20:22 compute-1 ovn_metadata_agent[140349]:     pidfile     /var/lib/neutron/external/pids/76c2d716-7d14-4bc1-b83b-a3290ee99d9a.pid.haproxy
Jan 20 15:20:22 compute-1 ovn_metadata_agent[140349]:     daemon
Jan 20 15:20:22 compute-1 ovn_metadata_agent[140349]: 
Jan 20 15:20:22 compute-1 ovn_metadata_agent[140349]: defaults
Jan 20 15:20:22 compute-1 ovn_metadata_agent[140349]:     log global
Jan 20 15:20:22 compute-1 ovn_metadata_agent[140349]:     mode http
Jan 20 15:20:22 compute-1 ovn_metadata_agent[140349]:     option httplog
Jan 20 15:20:22 compute-1 ovn_metadata_agent[140349]:     option dontlognull
Jan 20 15:20:22 compute-1 ovn_metadata_agent[140349]:     option http-server-close
Jan 20 15:20:22 compute-1 ovn_metadata_agent[140349]:     option forwardfor
Jan 20 15:20:22 compute-1 ovn_metadata_agent[140349]:     retries                 3
Jan 20 15:20:22 compute-1 ovn_metadata_agent[140349]:     timeout http-request    30s
Jan 20 15:20:22 compute-1 ovn_metadata_agent[140349]:     timeout connect         30s
Jan 20 15:20:22 compute-1 ovn_metadata_agent[140349]:     timeout client          32s
Jan 20 15:20:22 compute-1 ovn_metadata_agent[140349]:     timeout server          32s
Jan 20 15:20:22 compute-1 ovn_metadata_agent[140349]:     timeout http-keep-alive 30s
Jan 20 15:20:22 compute-1 ovn_metadata_agent[140349]: 
Jan 20 15:20:22 compute-1 ovn_metadata_agent[140349]: 
Jan 20 15:20:22 compute-1 ovn_metadata_agent[140349]: listen listener
Jan 20 15:20:22 compute-1 ovn_metadata_agent[140349]:     bind 169.254.169.254:80
Jan 20 15:20:22 compute-1 ovn_metadata_agent[140349]:     server metadata /var/lib/neutron/metadata_proxy
Jan 20 15:20:22 compute-1 ovn_metadata_agent[140349]:     http-request add-header X-OVN-Network-ID 76c2d716-7d14-4bc1-b83b-a3290ee99d9a
Jan 20 15:20:22 compute-1 ovn_metadata_agent[140349]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 20 15:20:22 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:20:22.795 140354 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-76c2d716-7d14-4bc1-b83b-a3290ee99d9a', 'env', 'PROCESS_TAG=haproxy-76c2d716-7d14-4bc1-b83b-a3290ee99d9a', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/76c2d716-7d14-4bc1-b83b-a3290ee99d9a.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 20 15:20:22 compute-1 nova_compute[225855]: 2026-01-20 15:20:22.806 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:20:22 compute-1 nova_compute[225855]: 2026-01-20 15:20:22.960 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768922422.9596643, 7a53c9b1-e64b-4a31-897a-bbe7d964cf45 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:20:22 compute-1 nova_compute[225855]: 2026-01-20 15:20:22.961 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 7a53c9b1-e64b-4a31-897a-bbe7d964cf45] VM Started (Lifecycle Event)
Jan 20 15:20:22 compute-1 nova_compute[225855]: 2026-01-20 15:20:22.991 225859 DEBUG nova.compute.manager [req-f5ff359d-800f-4750-b361-4c2db76cfb9e req-a3f10dba-d01a-4290-a0b0-2a98eff7ad84 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 7a53c9b1-e64b-4a31-897a-bbe7d964cf45] Received event network-vif-plugged-ab5264b7-ec64-46dd-b30d-981799387571 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:20:22 compute-1 nova_compute[225855]: 2026-01-20 15:20:22.992 225859 DEBUG oslo_concurrency.lockutils [req-f5ff359d-800f-4750-b361-4c2db76cfb9e req-a3f10dba-d01a-4290-a0b0-2a98eff7ad84 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "7a53c9b1-e64b-4a31-897a-bbe7d964cf45-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:20:22 compute-1 nova_compute[225855]: 2026-01-20 15:20:22.992 225859 DEBUG oslo_concurrency.lockutils [req-f5ff359d-800f-4750-b361-4c2db76cfb9e req-a3f10dba-d01a-4290-a0b0-2a98eff7ad84 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "7a53c9b1-e64b-4a31-897a-bbe7d964cf45-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:20:22 compute-1 nova_compute[225855]: 2026-01-20 15:20:22.992 225859 DEBUG oslo_concurrency.lockutils [req-f5ff359d-800f-4750-b361-4c2db76cfb9e req-a3f10dba-d01a-4290-a0b0-2a98eff7ad84 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "7a53c9b1-e64b-4a31-897a-bbe7d964cf45-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:20:22 compute-1 nova_compute[225855]: 2026-01-20 15:20:22.993 225859 DEBUG nova.compute.manager [req-f5ff359d-800f-4750-b361-4c2db76cfb9e req-a3f10dba-d01a-4290-a0b0-2a98eff7ad84 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 7a53c9b1-e64b-4a31-897a-bbe7d964cf45] Processing event network-vif-plugged-ab5264b7-ec64-46dd-b30d-981799387571 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 20 15:20:22 compute-1 nova_compute[225855]: 2026-01-20 15:20:22.993 225859 DEBUG nova.compute.manager [None req-dc9a0afd-b1a5-4c8b-b4fc-1789229ae1df cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] [instance: 7a53c9b1-e64b-4a31-897a-bbe7d964cf45] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 20 15:20:22 compute-1 nova_compute[225855]: 2026-01-20 15:20:22.997 225859 DEBUG nova.virt.libvirt.driver [None req-dc9a0afd-b1a5-4c8b-b4fc-1789229ae1df cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] [instance: 7a53c9b1-e64b-4a31-897a-bbe7d964cf45] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 20 15:20:23 compute-1 nova_compute[225855]: 2026-01-20 15:20:23.002 225859 INFO nova.virt.libvirt.driver [-] [instance: 7a53c9b1-e64b-4a31-897a-bbe7d964cf45] Instance spawned successfully.
Jan 20 15:20:23 compute-1 nova_compute[225855]: 2026-01-20 15:20:23.002 225859 DEBUG nova.virt.libvirt.driver [None req-dc9a0afd-b1a5-4c8b-b4fc-1789229ae1df cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] [instance: 7a53c9b1-e64b-4a31-897a-bbe7d964cf45] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 20 15:20:23 compute-1 nova_compute[225855]: 2026-01-20 15:20:23.006 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 7a53c9b1-e64b-4a31-897a-bbe7d964cf45] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:20:23 compute-1 nova_compute[225855]: 2026-01-20 15:20:23.009 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 7a53c9b1-e64b-4a31-897a-bbe7d964cf45] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 15:20:23 compute-1 nova_compute[225855]: 2026-01-20 15:20:23.039 225859 DEBUG nova.virt.libvirt.driver [None req-dc9a0afd-b1a5-4c8b-b4fc-1789229ae1df cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] [instance: 7a53c9b1-e64b-4a31-897a-bbe7d964cf45] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:20:23 compute-1 nova_compute[225855]: 2026-01-20 15:20:23.041 225859 DEBUG nova.virt.libvirt.driver [None req-dc9a0afd-b1a5-4c8b-b4fc-1789229ae1df cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] [instance: 7a53c9b1-e64b-4a31-897a-bbe7d964cf45] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:20:23 compute-1 nova_compute[225855]: 2026-01-20 15:20:23.041 225859 DEBUG nova.virt.libvirt.driver [None req-dc9a0afd-b1a5-4c8b-b4fc-1789229ae1df cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] [instance: 7a53c9b1-e64b-4a31-897a-bbe7d964cf45] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:20:23 compute-1 nova_compute[225855]: 2026-01-20 15:20:23.042 225859 DEBUG nova.virt.libvirt.driver [None req-dc9a0afd-b1a5-4c8b-b4fc-1789229ae1df cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] [instance: 7a53c9b1-e64b-4a31-897a-bbe7d964cf45] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:20:23 compute-1 nova_compute[225855]: 2026-01-20 15:20:23.043 225859 DEBUG nova.virt.libvirt.driver [None req-dc9a0afd-b1a5-4c8b-b4fc-1789229ae1df cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] [instance: 7a53c9b1-e64b-4a31-897a-bbe7d964cf45] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:20:23 compute-1 nova_compute[225855]: 2026-01-20 15:20:23.043 225859 DEBUG nova.virt.libvirt.driver [None req-dc9a0afd-b1a5-4c8b-b4fc-1789229ae1df cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] [instance: 7a53c9b1-e64b-4a31-897a-bbe7d964cf45] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:20:23 compute-1 nova_compute[225855]: 2026-01-20 15:20:23.047 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 7a53c9b1-e64b-4a31-897a-bbe7d964cf45] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 15:20:23 compute-1 nova_compute[225855]: 2026-01-20 15:20:23.047 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768922422.9599028, 7a53c9b1-e64b-4a31-897a-bbe7d964cf45 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:20:23 compute-1 nova_compute[225855]: 2026-01-20 15:20:23.048 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 7a53c9b1-e64b-4a31-897a-bbe7d964cf45] VM Paused (Lifecycle Event)
Jan 20 15:20:23 compute-1 nova_compute[225855]: 2026-01-20 15:20:23.102 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 7a53c9b1-e64b-4a31-897a-bbe7d964cf45] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:20:23 compute-1 nova_compute[225855]: 2026-01-20 15:20:23.107 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768922422.997031, 7a53c9b1-e64b-4a31-897a-bbe7d964cf45 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:20:23 compute-1 nova_compute[225855]: 2026-01-20 15:20:23.108 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 7a53c9b1-e64b-4a31-897a-bbe7d964cf45] VM Resumed (Lifecycle Event)
Jan 20 15:20:23 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:20:23.124 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=66, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '12:bb:42', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '06:92:24:f7:15:56'}, ipsec=False) old=SB_Global(nb_cfg=65) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:20:23 compute-1 nova_compute[225855]: 2026-01-20 15:20:23.139 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 7a53c9b1-e64b-4a31-897a-bbe7d964cf45] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:20:23 compute-1 nova_compute[225855]: 2026-01-20 15:20:23.143 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 7a53c9b1-e64b-4a31-897a-bbe7d964cf45] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 15:20:23 compute-1 nova_compute[225855]: 2026-01-20 15:20:23.147 225859 INFO nova.compute.manager [None req-dc9a0afd-b1a5-4c8b-b4fc-1789229ae1df cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] [instance: 7a53c9b1-e64b-4a31-897a-bbe7d964cf45] Took 9.12 seconds to spawn the instance on the hypervisor.
Jan 20 15:20:23 compute-1 nova_compute[225855]: 2026-01-20 15:20:23.147 225859 DEBUG nova.compute.manager [None req-dc9a0afd-b1a5-4c8b-b4fc-1789229ae1df cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] [instance: 7a53c9b1-e64b-4a31-897a-bbe7d964cf45] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:20:23 compute-1 nova_compute[225855]: 2026-01-20 15:20:23.174 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:20:23 compute-1 nova_compute[225855]: 2026-01-20 15:20:23.178 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 7a53c9b1-e64b-4a31-897a-bbe7d964cf45] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 15:20:23 compute-1 nova_compute[225855]: 2026-01-20 15:20:23.179 225859 DEBUG nova.network.neutron [req-5a11617e-8c11-4005-9cf8-213fb71664d2 req-0f4ffa8f-cb27-4e7c-a4af-33152b5cf64c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 7a53c9b1-e64b-4a31-897a-bbe7d964cf45] Updated VIF entry in instance network info cache for port ab5264b7-ec64-46dd-b30d-981799387571. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 15:20:23 compute-1 nova_compute[225855]: 2026-01-20 15:20:23.180 225859 DEBUG nova.network.neutron [req-5a11617e-8c11-4005-9cf8-213fb71664d2 req-0f4ffa8f-cb27-4e7c-a4af-33152b5cf64c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 7a53c9b1-e64b-4a31-897a-bbe7d964cf45] Updating instance_info_cache with network_info: [{"id": "ab5264b7-ec64-46dd-b30d-981799387571", "address": "fa:16:3e:3c:70:df", "network": {"id": "76c2d716-7d14-4bc1-b83b-a3290ee99d9a", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-782760714-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9156c0a9920c4721843416b9a44404f9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapab5264b7-ec", "ovs_interfaceid": "ab5264b7-ec64-46dd-b30d-981799387571", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:20:23 compute-1 nova_compute[225855]: 2026-01-20 15:20:23.199 225859 DEBUG oslo_concurrency.lockutils [req-5a11617e-8c11-4005-9cf8-213fb71664d2 req-0f4ffa8f-cb27-4e7c-a4af-33152b5cf64c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-7a53c9b1-e64b-4a31-897a-bbe7d964cf45" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:20:23 compute-1 nova_compute[225855]: 2026-01-20 15:20:23.210 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:20:23 compute-1 nova_compute[225855]: 2026-01-20 15:20:23.234 225859 INFO nova.compute.manager [None req-dc9a0afd-b1a5-4c8b-b4fc-1789229ae1df cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] [instance: 7a53c9b1-e64b-4a31-897a-bbe7d964cf45] Took 10.06 seconds to build instance.
Jan 20 15:20:23 compute-1 nova_compute[225855]: 2026-01-20 15:20:23.259 225859 DEBUG oslo_concurrency.lockutils [None req-dc9a0afd-b1a5-4c8b-b4fc-1789229ae1df cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Lock "7a53c9b1-e64b-4a31-897a-bbe7d964cf45" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.152s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:20:23 compute-1 podman[309286]: 2026-01-20 15:20:23.262084045 +0000 UTC m=+0.058306026 container create ff71acd4802f5ef48f8b4d2bd8d4bad2170671407bdfd29bae692881ac810c51 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-76c2d716-7d14-4bc1-b83b-a3290ee99d9a, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 20 15:20:23 compute-1 systemd[1]: Started libpod-conmon-ff71acd4802f5ef48f8b4d2bd8d4bad2170671407bdfd29bae692881ac810c51.scope.
Jan 20 15:20:23 compute-1 podman[309286]: 2026-01-20 15:20:23.228681893 +0000 UTC m=+0.024903884 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 20 15:20:23 compute-1 systemd[1]: Started libcrun container.
Jan 20 15:20:23 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98994e09dc4916b2814fcd1aedf4c05e63267fe55d9759fece6cec7531df8589/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 20 15:20:23 compute-1 podman[309286]: 2026-01-20 15:20:23.34979318 +0000 UTC m=+0.146015171 container init ff71acd4802f5ef48f8b4d2bd8d4bad2170671407bdfd29bae692881ac810c51 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-76c2d716-7d14-4bc1-b83b-a3290ee99d9a, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 20 15:20:23 compute-1 podman[309286]: 2026-01-20 15:20:23.357808766 +0000 UTC m=+0.154030747 container start ff71acd4802f5ef48f8b4d2bd8d4bad2170671407bdfd29bae692881ac810c51 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-76c2d716-7d14-4bc1-b83b-a3290ee99d9a, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0)
Jan 20 15:20:23 compute-1 neutron-haproxy-ovnmeta-76c2d716-7d14-4bc1-b83b-a3290ee99d9a[309301]: [NOTICE]   (309305) : New worker (309307) forked
Jan 20 15:20:23 compute-1 neutron-haproxy-ovnmeta-76c2d716-7d14-4bc1-b83b-a3290ee99d9a[309301]: [NOTICE]   (309305) : Loading success.
Jan 20 15:20:23 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e415 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:20:23 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:20:23.416 140354 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 20 15:20:23 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:20:23 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:20:23 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:20:23.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:20:23 compute-1 sudo[309316]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:20:23 compute-1 sudo[309316]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:20:23 compute-1 sudo[309316]: pam_unix(sudo:session): session closed for user root
Jan 20 15:20:23 compute-1 sudo[309341]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:20:23 compute-1 sudo[309341]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:20:23 compute-1 sudo[309341]: pam_unix(sudo:session): session closed for user root
Jan 20 15:20:23 compute-1 ceph-mon[81775]: pgmap v2943: 321 pgs: 321 active+clean; 215 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 105 KiB/s rd, 2.5 MiB/s wr, 58 op/s
Jan 20 15:20:23 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/3883154021' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:20:23 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:20:23 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:20:23 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:20:23.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:20:24 compute-1 ceph-mon[81775]: pgmap v2944: 321 pgs: 321 active+clean; 215 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 318 KiB/s rd, 2.5 MiB/s wr, 66 op/s
Jan 20 15:20:24 compute-1 nova_compute[225855]: 2026-01-20 15:20:24.842 225859 DEBUG nova.compute.manager [req-d86ea6cb-fab0-41be-9624-427a3a4a8836 req-3fe23a3a-e437-4c8b-89de-e227d8d5dc1c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 7a53c9b1-e64b-4a31-897a-bbe7d964cf45] Received event network-changed-ab5264b7-ec64-46dd-b30d-981799387571 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:20:24 compute-1 nova_compute[225855]: 2026-01-20 15:20:24.842 225859 DEBUG nova.compute.manager [req-d86ea6cb-fab0-41be-9624-427a3a4a8836 req-3fe23a3a-e437-4c8b-89de-e227d8d5dc1c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 7a53c9b1-e64b-4a31-897a-bbe7d964cf45] Refreshing instance network info cache due to event network-changed-ab5264b7-ec64-46dd-b30d-981799387571. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 15:20:24 compute-1 nova_compute[225855]: 2026-01-20 15:20:24.843 225859 DEBUG oslo_concurrency.lockutils [req-d86ea6cb-fab0-41be-9624-427a3a4a8836 req-3fe23a3a-e437-4c8b-89de-e227d8d5dc1c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-7a53c9b1-e64b-4a31-897a-bbe7d964cf45" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:20:24 compute-1 nova_compute[225855]: 2026-01-20 15:20:24.843 225859 DEBUG oslo_concurrency.lockutils [req-d86ea6cb-fab0-41be-9624-427a3a4a8836 req-3fe23a3a-e437-4c8b-89de-e227d8d5dc1c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-7a53c9b1-e64b-4a31-897a-bbe7d964cf45" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:20:24 compute-1 nova_compute[225855]: 2026-01-20 15:20:24.844 225859 DEBUG nova.network.neutron [req-d86ea6cb-fab0-41be-9624-427a3a4a8836 req-3fe23a3a-e437-4c8b-89de-e227d8d5dc1c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 7a53c9b1-e64b-4a31-897a-bbe7d964cf45] Refreshing network info cache for port ab5264b7-ec64-46dd-b30d-981799387571 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 15:20:25 compute-1 nova_compute[225855]: 2026-01-20 15:20:25.108 225859 DEBUG nova.compute.manager [req-116f9fc1-2df7-43ea-937a-9f5c96459625 req-4ce11330-6966-4750-a9fd-7d5a0752afbb 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 7a53c9b1-e64b-4a31-897a-bbe7d964cf45] Received event network-vif-plugged-ab5264b7-ec64-46dd-b30d-981799387571 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:20:25 compute-1 nova_compute[225855]: 2026-01-20 15:20:25.108 225859 DEBUG oslo_concurrency.lockutils [req-116f9fc1-2df7-43ea-937a-9f5c96459625 req-4ce11330-6966-4750-a9fd-7d5a0752afbb 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "7a53c9b1-e64b-4a31-897a-bbe7d964cf45-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:20:25 compute-1 nova_compute[225855]: 2026-01-20 15:20:25.109 225859 DEBUG oslo_concurrency.lockutils [req-116f9fc1-2df7-43ea-937a-9f5c96459625 req-4ce11330-6966-4750-a9fd-7d5a0752afbb 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "7a53c9b1-e64b-4a31-897a-bbe7d964cf45-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:20:25 compute-1 nova_compute[225855]: 2026-01-20 15:20:25.109 225859 DEBUG oslo_concurrency.lockutils [req-116f9fc1-2df7-43ea-937a-9f5c96459625 req-4ce11330-6966-4750-a9fd-7d5a0752afbb 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "7a53c9b1-e64b-4a31-897a-bbe7d964cf45-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:20:25 compute-1 nova_compute[225855]: 2026-01-20 15:20:25.109 225859 DEBUG nova.compute.manager [req-116f9fc1-2df7-43ea-937a-9f5c96459625 req-4ce11330-6966-4750-a9fd-7d5a0752afbb 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 7a53c9b1-e64b-4a31-897a-bbe7d964cf45] No waiting events found dispatching network-vif-plugged-ab5264b7-ec64-46dd-b30d-981799387571 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:20:25 compute-1 nova_compute[225855]: 2026-01-20 15:20:25.109 225859 WARNING nova.compute.manager [req-116f9fc1-2df7-43ea-937a-9f5c96459625 req-4ce11330-6966-4750-a9fd-7d5a0752afbb 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 7a53c9b1-e64b-4a31-897a-bbe7d964cf45] Received unexpected event network-vif-plugged-ab5264b7-ec64-46dd-b30d-981799387571 for instance with vm_state active and task_state None.
Jan 20 15:20:25 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:20:25 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:20:25 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:20:25.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:20:25 compute-1 sudo[309367]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:20:25 compute-1 sudo[309367]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:20:25 compute-1 sudo[309367]: pam_unix(sudo:session): session closed for user root
Jan 20 15:20:25 compute-1 sudo[309392]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 15:20:25 compute-1 sudo[309392]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:20:25 compute-1 sudo[309392]: pam_unix(sudo:session): session closed for user root
Jan 20 15:20:25 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:20:25 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:20:25 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:20:25.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:20:26 compute-1 nova_compute[225855]: 2026-01-20 15:20:26.282 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:20:26 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:20:26 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:20:26 compute-1 nova_compute[225855]: 2026-01-20 15:20:26.960 225859 DEBUG nova.network.neutron [req-d86ea6cb-fab0-41be-9624-427a3a4a8836 req-3fe23a3a-e437-4c8b-89de-e227d8d5dc1c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 7a53c9b1-e64b-4a31-897a-bbe7d964cf45] Updated VIF entry in instance network info cache for port ab5264b7-ec64-46dd-b30d-981799387571. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 15:20:26 compute-1 nova_compute[225855]: 2026-01-20 15:20:26.961 225859 DEBUG nova.network.neutron [req-d86ea6cb-fab0-41be-9624-427a3a4a8836 req-3fe23a3a-e437-4c8b-89de-e227d8d5dc1c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 7a53c9b1-e64b-4a31-897a-bbe7d964cf45] Updating instance_info_cache with network_info: [{"id": "ab5264b7-ec64-46dd-b30d-981799387571", "address": "fa:16:3e:3c:70:df", "network": {"id": "76c2d716-7d14-4bc1-b83b-a3290ee99d9a", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-782760714-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.203", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9156c0a9920c4721843416b9a44404f9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapab5264b7-ec", "ovs_interfaceid": "ab5264b7-ec64-46dd-b30d-981799387571", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:20:26 compute-1 nova_compute[225855]: 2026-01-20 15:20:26.986 225859 DEBUG oslo_concurrency.lockutils [req-d86ea6cb-fab0-41be-9624-427a3a4a8836 req-3fe23a3a-e437-4c8b-89de-e227d8d5dc1c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-7a53c9b1-e64b-4a31-897a-bbe7d964cf45" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:20:27 compute-1 ceph-mon[81775]: pgmap v2945: 321 pgs: 321 active+clean; 213 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 167 op/s
Jan 20 15:20:27 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:20:27 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:20:27 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:20:27.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:20:27 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:20:27 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:20:27 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:20:27.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:20:28 compute-1 nova_compute[225855]: 2026-01-20 15:20:28.211 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:20:28 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e415 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:20:28 compute-1 ceph-mon[81775]: pgmap v2946: 321 pgs: 321 active+clean; 213 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.2 MiB/s wr, 140 op/s
Jan 20 15:20:29 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:20:29.418 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5ffd4ac3-9266-4927-98ad-20a17782c725, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '66'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:20:29 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:20:29 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:20:29 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:20:29.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:20:29 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:20:29 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:20:29 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:20:29.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:20:30 compute-1 ceph-mon[81775]: pgmap v2947: 321 pgs: 321 active+clean; 213 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 2.2 MiB/s wr, 206 op/s
Jan 20 15:20:31 compute-1 nova_compute[225855]: 2026-01-20 15:20:31.285 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:20:31 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:20:31 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:20:31 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:20:31.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:20:31 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:20:31 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:20:31 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:20:31.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:20:32 compute-1 ceph-mon[81775]: pgmap v2948: 321 pgs: 321 active+clean; 213 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 202 op/s
Jan 20 15:20:33 compute-1 nova_compute[225855]: 2026-01-20 15:20:33.214 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:20:33 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e415 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:20:33 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:20:33 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:20:33 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:20:33.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:20:33 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:20:33 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:20:33 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:20:33.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:20:34 compute-1 ceph-mon[81775]: pgmap v2949: 321 pgs: 321 active+clean; 213 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 188 op/s
Jan 20 15:20:35 compute-1 podman[309422]: 2026-01-20 15:20:35.047774239 +0000 UTC m=+0.081182782 container health_status 533cd8e0c3b8b7f910f15cef49e7f879ab46c12ce646a35628c15f13725eb500 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Jan 20 15:20:35 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:20:35 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:20:35 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:20:35.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:20:35 compute-1 ovn_controller[130490]: 2026-01-20T15:20:35Z|00104|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:3c:70:df 10.100.0.6
Jan 20 15:20:35 compute-1 ovn_controller[130490]: 2026-01-20T15:20:35Z|00105|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:3c:70:df 10.100.0.6
Jan 20 15:20:35 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:20:35 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:20:35 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:20:35.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:20:36 compute-1 nova_compute[225855]: 2026-01-20 15:20:36.289 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:20:36 compute-1 ceph-mon[81775]: pgmap v2950: 321 pgs: 321 active+clean; 235 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 2.8 MiB/s wr, 192 op/s
Jan 20 15:20:37 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:20:37 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:20:37 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:20:37.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:20:38 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:20:38 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:20:38 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:20:38.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:20:38 compute-1 nova_compute[225855]: 2026-01-20 15:20:38.216 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:20:38 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e415 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:20:38 compute-1 ceph-mon[81775]: pgmap v2951: 321 pgs: 321 active+clean; 235 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.7 MiB/s wr, 90 op/s
Jan 20 15:20:39 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:20:39 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:20:39 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:20:39.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:20:40 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:20:40 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:20:40 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:20:40.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:20:40 compute-1 ceph-mon[81775]: pgmap v2952: 321 pgs: 321 active+clean; 267 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 3.9 MiB/s wr, 178 op/s
Jan 20 15:20:41 compute-1 nova_compute[225855]: 2026-01-20 15:20:41.292 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:20:41 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:20:41 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:20:41 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:20:41.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:20:42 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:20:42 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:20:42 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:20:42.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:20:42 compute-1 ceph-mon[81775]: pgmap v2953: 321 pgs: 321 active+clean; 279 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 654 KiB/s rd, 4.3 MiB/s wr, 128 op/s
Jan 20 15:20:43 compute-1 nova_compute[225855]: 2026-01-20 15:20:43.218 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:20:43 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e415 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:20:43 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:20:43 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:20:43 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:20:43.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:20:43 compute-1 sudo[309444]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:20:43 compute-1 sudo[309444]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:20:43 compute-1 sudo[309444]: pam_unix(sudo:session): session closed for user root
Jan 20 15:20:43 compute-1 sudo[309470]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:20:43 compute-1 sudo[309470]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:20:43 compute-1 sudo[309470]: pam_unix(sudo:session): session closed for user root
Jan 20 15:20:44 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:20:44 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:20:44 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:20:44.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:20:44 compute-1 ceph-mon[81775]: pgmap v2954: 321 pgs: 321 active+clean; 279 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 654 KiB/s rd, 4.3 MiB/s wr, 128 op/s
Jan 20 15:20:45 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:20:45 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:20:45 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:20:45.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:20:46 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:20:46 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:20:46 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:20:46.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:20:46 compute-1 nova_compute[225855]: 2026-01-20 15:20:46.295 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:20:46 compute-1 ceph-mon[81775]: pgmap v2955: 321 pgs: 321 active+clean; 279 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 654 KiB/s rd, 4.3 MiB/s wr, 129 op/s
Jan 20 15:20:47 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:20:47 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:20:47 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:20:47.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:20:48 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:20:48 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:20:48 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:20:48.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:20:48 compute-1 nova_compute[225855]: 2026-01-20 15:20:48.221 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:20:48 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e415 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:20:48 compute-1 ceph-mon[81775]: pgmap v2956: 321 pgs: 321 active+clean; 279 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 589 KiB/s rd, 2.6 MiB/s wr, 104 op/s
Jan 20 15:20:49 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:20:49 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:20:49 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:20:49.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:20:49 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/2647159744' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:20:50 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:20:50 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:20:50 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:20:50.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:20:50 compute-1 ceph-mon[81775]: pgmap v2957: 321 pgs: 321 active+clean; 213 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 598 KiB/s rd, 2.6 MiB/s wr, 118 op/s
Jan 20 15:20:51 compute-1 nova_compute[225855]: 2026-01-20 15:20:51.298 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:20:51 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:20:51 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:20:51 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:20:51.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:20:52 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:20:52 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:20:52 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:20:52.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:20:52 compute-1 ceph-mon[81775]: pgmap v2958: 321 pgs: 321 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 66 KiB/s rd, 373 KiB/s wr, 43 op/s
Jan 20 15:20:53 compute-1 podman[309500]: 2026-01-20 15:20:53.034655219 +0000 UTC m=+0.074405240 container health_status 72be95c12e041eb2cd1a16ad70dfa1391d9f03a5f3180a614ead9459dcfcd46f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 20 15:20:53 compute-1 nova_compute[225855]: 2026-01-20 15:20:53.222 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:20:53 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e415 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:20:53 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:20:53 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:20:53 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:20:53.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:20:54 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:20:54 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:20:54 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:20:54.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:20:54 compute-1 ceph-mon[81775]: pgmap v2959: 321 pgs: 321 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 26 KiB/s wr, 28 op/s
Jan 20 15:20:55 compute-1 ovn_controller[130490]: 2026-01-20T15:20:55Z|00878|binding|INFO|Releasing lport 2c0bba0e-e9b6-4ece-8349-62642b94d91d from this chassis (sb_readonly=0)
Jan 20 15:20:55 compute-1 nova_compute[225855]: 2026-01-20 15:20:55.395 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:20:55 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:20:55 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:20:55 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:20:55.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:20:56 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:20:56 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:20:56 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:20:56.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:20:56 compute-1 nova_compute[225855]: 2026-01-20 15:20:56.300 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:20:56 compute-1 ceph-mon[81775]: pgmap v2960: 321 pgs: 321 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 27 KiB/s wr, 28 op/s
Jan 20 15:20:57 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:20:57 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:20:57 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:20:57.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:20:58 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:20:58 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:20:58 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:20:58.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:20:58 compute-1 nova_compute[225855]: 2026-01-20 15:20:58.225 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:20:58 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e415 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:20:58 compute-1 ceph-mon[81775]: pgmap v2961: 321 pgs: 321 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 3.2 KiB/s wr, 28 op/s
Jan 20 15:20:59 compute-1 nova_compute[225855]: 2026-01-20 15:20:59.314 225859 DEBUG oslo_concurrency.lockutils [None req-4ceac083-89a3-4964-9803-2c53185bb57f cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Acquiring lock "7a53c9b1-e64b-4a31-897a-bbe7d964cf45" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:20:59 compute-1 nova_compute[225855]: 2026-01-20 15:20:59.314 225859 DEBUG oslo_concurrency.lockutils [None req-4ceac083-89a3-4964-9803-2c53185bb57f cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Lock "7a53c9b1-e64b-4a31-897a-bbe7d964cf45" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:20:59 compute-1 nova_compute[225855]: 2026-01-20 15:20:59.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:20:59 compute-1 nova_compute[225855]: 2026-01-20 15:20:59.340 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 15:20:59 compute-1 nova_compute[225855]: 2026-01-20 15:20:59.369 225859 DEBUG nova.objects.instance [None req-4ceac083-89a3-4964-9803-2c53185bb57f cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Lazy-loading 'flavor' on Instance uuid 7a53c9b1-e64b-4a31-897a-bbe7d964cf45 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:20:59 compute-1 nova_compute[225855]: 2026-01-20 15:20:59.482 225859 DEBUG oslo_concurrency.lockutils [None req-4ceac083-89a3-4964-9803-2c53185bb57f cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Lock "7a53c9b1-e64b-4a31-897a-bbe7d964cf45" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.167s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:20:59 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:20:59 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:20:59 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:20:59.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:20:59 compute-1 nova_compute[225855]: 2026-01-20 15:20:59.892 225859 DEBUG oslo_concurrency.lockutils [None req-4ceac083-89a3-4964-9803-2c53185bb57f cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Acquiring lock "7a53c9b1-e64b-4a31-897a-bbe7d964cf45" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:20:59 compute-1 nova_compute[225855]: 2026-01-20 15:20:59.893 225859 DEBUG oslo_concurrency.lockutils [None req-4ceac083-89a3-4964-9803-2c53185bb57f cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Lock "7a53c9b1-e64b-4a31-897a-bbe7d964cf45" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:20:59 compute-1 nova_compute[225855]: 2026-01-20 15:20:59.893 225859 INFO nova.compute.manager [None req-4ceac083-89a3-4964-9803-2c53185bb57f cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] [instance: 7a53c9b1-e64b-4a31-897a-bbe7d964cf45] Attaching volume f458dd1d-0a83-4853-b1f9-6b4923a44988 to /dev/vdb
Jan 20 15:21:00 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:21:00 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:21:00 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:21:00.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:21:00 compute-1 nova_compute[225855]: 2026-01-20 15:21:00.364 225859 DEBUG os_brick.utils [None req-4ceac083-89a3-4964-9803-2c53185bb57f cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.101', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-1.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Jan 20 15:21:00 compute-1 nova_compute[225855]: 2026-01-20 15:21:00.366 231081 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:21:00 compute-1 nova_compute[225855]: 2026-01-20 15:21:00.376 231081 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:21:00 compute-1 nova_compute[225855]: 2026-01-20 15:21:00.376 231081 DEBUG oslo.privsep.daemon [-] privsep: reply[a0e70ca2-fa21-4e9d-8f1c-33102cab0b8e]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:21:00 compute-1 nova_compute[225855]: 2026-01-20 15:21:00.377 231081 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:21:00 compute-1 nova_compute[225855]: 2026-01-20 15:21:00.383 231081 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.006s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:21:00 compute-1 nova_compute[225855]: 2026-01-20 15:21:00.384 231081 DEBUG oslo.privsep.daemon [-] privsep: reply[2a2951ed-ef28-432c-bc11-577ede3ded13]: (4, ('InitiatorName=iqn.1994-05.com.redhat:1821ea3dc03d', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:21:00 compute-1 nova_compute[225855]: 2026-01-20 15:21:00.385 231081 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:21:00 compute-1 nova_compute[225855]: 2026-01-20 15:21:00.391 231081 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.006s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:21:00 compute-1 nova_compute[225855]: 2026-01-20 15:21:00.391 231081 DEBUG oslo.privsep.daemon [-] privsep: reply[9c0d8e1c-6b42-434d-a553-b6afc9246238]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:21:00 compute-1 nova_compute[225855]: 2026-01-20 15:21:00.392 231081 DEBUG oslo.privsep.daemon [-] privsep: reply[0c0fdec7-7af7-45b9-a8e2-80eb0db3d9d5]: (4, '870b1f1c-f19c-477b-b282-ee6eeba50974') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:21:00 compute-1 nova_compute[225855]: 2026-01-20 15:21:00.393 225859 DEBUG oslo_concurrency.processutils [None req-4ceac083-89a3-4964-9803-2c53185bb57f cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:21:00 compute-1 nova_compute[225855]: 2026-01-20 15:21:00.419 225859 DEBUG oslo_concurrency.processutils [None req-4ceac083-89a3-4964-9803-2c53185bb57f cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] CMD "nvme version" returned: 0 in 0.026s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:21:00 compute-1 nova_compute[225855]: 2026-01-20 15:21:00.422 225859 DEBUG os_brick.initiator.connectors.lightos [None req-4ceac083-89a3-4964-9803-2c53185bb57f cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Jan 20 15:21:00 compute-1 nova_compute[225855]: 2026-01-20 15:21:00.422 225859 DEBUG os_brick.initiator.connectors.lightos [None req-4ceac083-89a3-4964-9803-2c53185bb57f cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Jan 20 15:21:00 compute-1 nova_compute[225855]: 2026-01-20 15:21:00.423 225859 DEBUG os_brick.initiator.connectors.lightos [None req-4ceac083-89a3-4964-9803-2c53185bb57f cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:5350774e-8b5e-4dba-80a9-92d405981c1d dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Jan 20 15:21:00 compute-1 nova_compute[225855]: 2026-01-20 15:21:00.423 225859 DEBUG os_brick.utils [None req-4ceac083-89a3-4964-9803-2c53185bb57f cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] <== get_connector_properties: return (57ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.101', 'host': 'compute-1.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:1821ea3dc03d', 'do_local_attach': False, 'nvme_hostid': '5350774e-8b5e-4dba-80a9-92d405981c1d', 'system uuid': '870b1f1c-f19c-477b-b282-ee6eeba50974', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:5350774e-8b5e-4dba-80a9-92d405981c1d', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Jan 20 15:21:00 compute-1 nova_compute[225855]: 2026-01-20 15:21:00.423 225859 DEBUG nova.virt.block_device [None req-4ceac083-89a3-4964-9803-2c53185bb57f cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] [instance: 7a53c9b1-e64b-4a31-897a-bbe7d964cf45] Updating existing volume attachment record: dc7fe8a9-25f2-4eb7-8845-f59506394b39 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Jan 20 15:21:00 compute-1 ceph-mon[81775]: pgmap v2962: 321 pgs: 321 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 3.3 KiB/s wr, 28 op/s
Jan 20 15:21:01 compute-1 nova_compute[225855]: 2026-01-20 15:21:01.141 225859 DEBUG nova.objects.instance [None req-4ceac083-89a3-4964-9803-2c53185bb57f cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Lazy-loading 'flavor' on Instance uuid 7a53c9b1-e64b-4a31-897a-bbe7d964cf45 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:21:01 compute-1 nova_compute[225855]: 2026-01-20 15:21:01.186 225859 DEBUG nova.virt.libvirt.driver [None req-4ceac083-89a3-4964-9803-2c53185bb57f cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] [instance: 7a53c9b1-e64b-4a31-897a-bbe7d964cf45] Attempting to attach volume f458dd1d-0a83-4853-b1f9-6b4923a44988 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
Jan 20 15:21:01 compute-1 nova_compute[225855]: 2026-01-20 15:21:01.191 225859 DEBUG nova.virt.libvirt.guest [None req-4ceac083-89a3-4964-9803-2c53185bb57f cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] attach device xml: <disk type="network" device="disk">
Jan 20 15:21:01 compute-1 nova_compute[225855]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 20 15:21:01 compute-1 nova_compute[225855]:   <source protocol="rbd" name="volumes/volume-f458dd1d-0a83-4853-b1f9-6b4923a44988">
Jan 20 15:21:01 compute-1 nova_compute[225855]:     <host name="192.168.122.100" port="6789"/>
Jan 20 15:21:01 compute-1 nova_compute[225855]:     <host name="192.168.122.102" port="6789"/>
Jan 20 15:21:01 compute-1 nova_compute[225855]:     <host name="192.168.122.101" port="6789"/>
Jan 20 15:21:01 compute-1 nova_compute[225855]:   </source>
Jan 20 15:21:01 compute-1 nova_compute[225855]:   <auth username="openstack">
Jan 20 15:21:01 compute-1 nova_compute[225855]:     <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 15:21:01 compute-1 nova_compute[225855]:   </auth>
Jan 20 15:21:01 compute-1 nova_compute[225855]:   <target dev="vdb" bus="virtio"/>
Jan 20 15:21:01 compute-1 nova_compute[225855]:   <serial>f458dd1d-0a83-4853-b1f9-6b4923a44988</serial>
Jan 20 15:21:01 compute-1 nova_compute[225855]: </disk>
Jan 20 15:21:01 compute-1 nova_compute[225855]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Jan 20 15:21:01 compute-1 nova_compute[225855]: 2026-01-20 15:21:01.301 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:21:01 compute-1 nova_compute[225855]: 2026-01-20 15:21:01.315 225859 DEBUG nova.virt.libvirt.driver [None req-4ceac083-89a3-4964-9803-2c53185bb57f cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 15:21:01 compute-1 nova_compute[225855]: 2026-01-20 15:21:01.316 225859 DEBUG nova.virt.libvirt.driver [None req-4ceac083-89a3-4964-9803-2c53185bb57f cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 15:21:01 compute-1 nova_compute[225855]: 2026-01-20 15:21:01.316 225859 DEBUG nova.virt.libvirt.driver [None req-4ceac083-89a3-4964-9803-2c53185bb57f cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 15:21:01 compute-1 nova_compute[225855]: 2026-01-20 15:21:01.316 225859 DEBUG nova.virt.libvirt.driver [None req-4ceac083-89a3-4964-9803-2c53185bb57f cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] No VIF found with MAC fa:16:3e:3c:70:df, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 20 15:21:01 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:21:01 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:21:01 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:21:01.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:21:01 compute-1 nova_compute[225855]: 2026-01-20 15:21:01.543 225859 DEBUG oslo_concurrency.lockutils [None req-4ceac083-89a3-4964-9803-2c53185bb57f cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Lock "7a53c9b1-e64b-4a31-897a-bbe7d964cf45" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 1.650s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:21:01 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/54920815' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:21:01 compute-1 ceph-mon[81775]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #145. Immutable memtables: 0.
Jan 20 15:21:01 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:21:01.953633) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 20 15:21:01 compute-1 ceph-mon[81775]: rocksdb: [db/flush_job.cc:856] [default] [JOB 91] Flushing memtable with next log file: 145
Jan 20 15:21:01 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768922461953753, "job": 91, "event": "flush_started", "num_memtables": 1, "num_entries": 812, "num_deletes": 250, "total_data_size": 1520801, "memory_usage": 1541048, "flush_reason": "Manual Compaction"}
Jan 20 15:21:01 compute-1 ceph-mon[81775]: rocksdb: [db/flush_job.cc:885] [default] [JOB 91] Level-0 flush table #146: started
Jan 20 15:21:01 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768922461963734, "cf_name": "default", "job": 91, "event": "table_file_creation", "file_number": 146, "file_size": 697690, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 71469, "largest_seqno": 72276, "table_properties": {"data_size": 694314, "index_size": 1219, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1093, "raw_key_size": 9080, "raw_average_key_size": 21, "raw_value_size": 687182, "raw_average_value_size": 1590, "num_data_blocks": 52, "num_entries": 432, "num_filter_entries": 432, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768922409, "oldest_key_time": 1768922409, "file_creation_time": 1768922461, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1539d774-8a6f-4e48-b253-137c44586344", "db_session_id": "LFF7G2OZDOU7TKQ8MKAH", "orig_file_number": 146, "seqno_to_time_mapping": "N/A"}}
Jan 20 15:21:01 compute-1 ceph-mon[81775]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 91] Flush lasted 10148 microseconds, and 5797 cpu microseconds.
Jan 20 15:21:01 compute-1 ceph-mon[81775]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 15:21:01 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:21:01.963792) [db/flush_job.cc:967] [default] [JOB 91] Level-0 flush table #146: 697690 bytes OK
Jan 20 15:21:01 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:21:01.963819) [db/memtable_list.cc:519] [default] Level-0 commit table #146 started
Jan 20 15:21:01 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:21:01.965164) [db/memtable_list.cc:722] [default] Level-0 commit table #146: memtable #1 done
Jan 20 15:21:01 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:21:01.965187) EVENT_LOG_v1 {"time_micros": 1768922461965179, "job": 91, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 20 15:21:01 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:21:01.965211) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 20 15:21:01 compute-1 ceph-mon[81775]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 91] Try to delete WAL files size 1516571, prev total WAL file size 1516571, number of live WAL files 2.
Jan 20 15:21:01 compute-1 ceph-mon[81775]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000142.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 15:21:01 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:21:01.966058) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032323537' seq:72057594037927935, type:22 .. '6D6772737461740032353038' seq:0, type:0; will stop at (end)
Jan 20 15:21:01 compute-1 ceph-mon[81775]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 92] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 20 15:21:01 compute-1 ceph-mon[81775]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 91 Base level 0, inputs: [146(681KB)], [144(12MB)]
Jan 20 15:21:01 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768922461966142, "job": 92, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [146], "files_L6": [144], "score": -1, "input_data_size": 14061254, "oldest_snapshot_seqno": -1}
Jan 20 15:21:02 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:21:02 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:21:02 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:21:02.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:21:02 compute-1 ceph-mon[81775]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 92] Generated table #147: 9399 keys, 10497953 bytes, temperature: kUnknown
Jan 20 15:21:02 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768922462068752, "cf_name": "default", "job": 92, "event": "table_file_creation", "file_number": 147, "file_size": 10497953, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10439140, "index_size": 34188, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 23557, "raw_key_size": 248070, "raw_average_key_size": 26, "raw_value_size": 10276073, "raw_average_value_size": 1093, "num_data_blocks": 1293, "num_entries": 9399, "num_filter_entries": 9399, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768917474, "oldest_key_time": 0, "file_creation_time": 1768922461, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1539d774-8a6f-4e48-b253-137c44586344", "db_session_id": "LFF7G2OZDOU7TKQ8MKAH", "orig_file_number": 147, "seqno_to_time_mapping": "N/A"}}
Jan 20 15:21:02 compute-1 ceph-mon[81775]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 15:21:02 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:21:02.069057) [db/compaction/compaction_job.cc:1663] [default] [JOB 92] Compacted 1@0 + 1@6 files to L6 => 10497953 bytes
Jan 20 15:21:02 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:21:02.070972) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 136.9 rd, 102.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.7, 12.7 +0.0 blob) out(10.0 +0.0 blob), read-write-amplify(35.2) write-amplify(15.0) OK, records in: 9896, records dropped: 497 output_compression: NoCompression
Jan 20 15:21:02 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:21:02.071031) EVENT_LOG_v1 {"time_micros": 1768922462071008, "job": 92, "event": "compaction_finished", "compaction_time_micros": 102679, "compaction_time_cpu_micros": 29955, "output_level": 6, "num_output_files": 1, "total_output_size": 10497953, "num_input_records": 9896, "num_output_records": 9399, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 20 15:21:02 compute-1 ceph-mon[81775]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000146.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 15:21:02 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768922462071506, "job": 92, "event": "table_file_deletion", "file_number": 146}
Jan 20 15:21:02 compute-1 ceph-mon[81775]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000144.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 15:21:02 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768922462074412, "job": 92, "event": "table_file_deletion", "file_number": 144}
Jan 20 15:21:02 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:21:01.965940) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:21:02 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:21:02.074540) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:21:02 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:21:02.074546) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:21:02 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:21:02.074548) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:21:02 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:21:02.074550) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:21:02 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:21:02.074551) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:21:02 compute-1 ceph-mon[81775]: pgmap v2963: 321 pgs: 321 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 11 KiB/s rd, 2.2 KiB/s wr, 14 op/s
Jan 20 15:21:03 compute-1 nova_compute[225855]: 2026-01-20 15:21:03.227 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:21:03 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e415 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:21:03 compute-1 nova_compute[225855]: 2026-01-20 15:21:03.482 225859 DEBUG oslo_concurrency.lockutils [None req-5264b3a9-63fa-429b-bb09-d354dc08e9e9 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Acquiring lock "7a53c9b1-e64b-4a31-897a-bbe7d964cf45" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:21:03 compute-1 nova_compute[225855]: 2026-01-20 15:21:03.484 225859 DEBUG oslo_concurrency.lockutils [None req-5264b3a9-63fa-429b-bb09-d354dc08e9e9 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Lock "7a53c9b1-e64b-4a31-897a-bbe7d964cf45" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:21:03 compute-1 nova_compute[225855]: 2026-01-20 15:21:03.496 225859 INFO nova.compute.manager [None req-5264b3a9-63fa-429b-bb09-d354dc08e9e9 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] [instance: 7a53c9b1-e64b-4a31-897a-bbe7d964cf45] Detaching volume f458dd1d-0a83-4853-b1f9-6b4923a44988
Jan 20 15:21:03 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:21:03 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:21:03 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:21:03.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:21:03 compute-1 nova_compute[225855]: 2026-01-20 15:21:03.616 225859 INFO nova.virt.block_device [None req-5264b3a9-63fa-429b-bb09-d354dc08e9e9 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] [instance: 7a53c9b1-e64b-4a31-897a-bbe7d964cf45] Attempting to driver detach volume f458dd1d-0a83-4853-b1f9-6b4923a44988 from mountpoint /dev/vdb
Jan 20 15:21:03 compute-1 nova_compute[225855]: 2026-01-20 15:21:03.624 225859 DEBUG nova.virt.libvirt.driver [None req-5264b3a9-63fa-429b-bb09-d354dc08e9e9 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Attempting to detach device vdb from instance 7a53c9b1-e64b-4a31-897a-bbe7d964cf45 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Jan 20 15:21:03 compute-1 nova_compute[225855]: 2026-01-20 15:21:03.625 225859 DEBUG nova.virt.libvirt.guest [None req-5264b3a9-63fa-429b-bb09-d354dc08e9e9 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] detach device xml: <disk type="network" device="disk">
Jan 20 15:21:03 compute-1 nova_compute[225855]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 20 15:21:03 compute-1 nova_compute[225855]:   <source protocol="rbd" name="volumes/volume-f458dd1d-0a83-4853-b1f9-6b4923a44988">
Jan 20 15:21:03 compute-1 nova_compute[225855]:     <host name="192.168.122.100" port="6789"/>
Jan 20 15:21:03 compute-1 nova_compute[225855]:     <host name="192.168.122.102" port="6789"/>
Jan 20 15:21:03 compute-1 nova_compute[225855]:     <host name="192.168.122.101" port="6789"/>
Jan 20 15:21:03 compute-1 nova_compute[225855]:   </source>
Jan 20 15:21:03 compute-1 nova_compute[225855]:   <target dev="vdb" bus="virtio"/>
Jan 20 15:21:03 compute-1 nova_compute[225855]:   <serial>f458dd1d-0a83-4853-b1f9-6b4923a44988</serial>
Jan 20 15:21:03 compute-1 nova_compute[225855]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Jan 20 15:21:03 compute-1 nova_compute[225855]: </disk>
Jan 20 15:21:03 compute-1 nova_compute[225855]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Jan 20 15:21:03 compute-1 nova_compute[225855]: 2026-01-20 15:21:03.634 225859 INFO nova.virt.libvirt.driver [None req-5264b3a9-63fa-429b-bb09-d354dc08e9e9 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Successfully detached device vdb from instance 7a53c9b1-e64b-4a31-897a-bbe7d964cf45 from the persistent domain config.
Jan 20 15:21:03 compute-1 nova_compute[225855]: 2026-01-20 15:21:03.635 225859 DEBUG nova.virt.libvirt.driver [None req-5264b3a9-63fa-429b-bb09-d354dc08e9e9 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance 7a53c9b1-e64b-4a31-897a-bbe7d964cf45 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Jan 20 15:21:03 compute-1 nova_compute[225855]: 2026-01-20 15:21:03.635 225859 DEBUG nova.virt.libvirt.guest [None req-5264b3a9-63fa-429b-bb09-d354dc08e9e9 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] detach device xml: <disk type="network" device="disk">
Jan 20 15:21:03 compute-1 nova_compute[225855]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 20 15:21:03 compute-1 nova_compute[225855]:   <source protocol="rbd" name="volumes/volume-f458dd1d-0a83-4853-b1f9-6b4923a44988">
Jan 20 15:21:03 compute-1 nova_compute[225855]:     <host name="192.168.122.100" port="6789"/>
Jan 20 15:21:03 compute-1 nova_compute[225855]:     <host name="192.168.122.102" port="6789"/>
Jan 20 15:21:03 compute-1 nova_compute[225855]:     <host name="192.168.122.101" port="6789"/>
Jan 20 15:21:03 compute-1 nova_compute[225855]:   </source>
Jan 20 15:21:03 compute-1 nova_compute[225855]:   <target dev="vdb" bus="virtio"/>
Jan 20 15:21:03 compute-1 nova_compute[225855]:   <serial>f458dd1d-0a83-4853-b1f9-6b4923a44988</serial>
Jan 20 15:21:03 compute-1 nova_compute[225855]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Jan 20 15:21:03 compute-1 nova_compute[225855]: </disk>
Jan 20 15:21:03 compute-1 nova_compute[225855]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Jan 20 15:21:03 compute-1 nova_compute[225855]: 2026-01-20 15:21:03.694 225859 DEBUG nova.virt.libvirt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Received event <DeviceRemovedEvent: 1768922463.6939726, 7a53c9b1-e64b-4a31-897a-bbe7d964cf45 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Jan 20 15:21:03 compute-1 nova_compute[225855]: 2026-01-20 15:21:03.696 225859 DEBUG nova.virt.libvirt.driver [None req-5264b3a9-63fa-429b-bb09-d354dc08e9e9 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance 7a53c9b1-e64b-4a31-897a-bbe7d964cf45 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Jan 20 15:21:03 compute-1 nova_compute[225855]: 2026-01-20 15:21:03.699 225859 INFO nova.virt.libvirt.driver [None req-5264b3a9-63fa-429b-bb09-d354dc08e9e9 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Successfully detached device vdb from instance 7a53c9b1-e64b-4a31-897a-bbe7d964cf45 from the live domain config.
Jan 20 15:21:03 compute-1 sudo[309561]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:21:03 compute-1 sudo[309561]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:21:03 compute-1 sudo[309561]: pam_unix(sudo:session): session closed for user root
Jan 20 15:21:03 compute-1 sudo[309586]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:21:03 compute-1 sudo[309586]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:21:03 compute-1 sudo[309586]: pam_unix(sudo:session): session closed for user root
Jan 20 15:21:03 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/373551683' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:21:04 compute-1 nova_compute[225855]: 2026-01-20 15:21:04.006 225859 DEBUG nova.objects.instance [None req-5264b3a9-63fa-429b-bb09-d354dc08e9e9 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Lazy-loading 'flavor' on Instance uuid 7a53c9b1-e64b-4a31-897a-bbe7d964cf45 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:21:04 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:21:04 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:21:04 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:21:04.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:21:04 compute-1 nova_compute[225855]: 2026-01-20 15:21:04.341 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:21:04 compute-1 nova_compute[225855]: 2026-01-20 15:21:04.341 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 15:21:04 compute-1 nova_compute[225855]: 2026-01-20 15:21:04.342 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 20 15:21:04 compute-1 nova_compute[225855]: 2026-01-20 15:21:04.345 225859 DEBUG oslo_concurrency.lockutils [None req-5264b3a9-63fa-429b-bb09-d354dc08e9e9 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Lock "7a53c9b1-e64b-4a31-897a-bbe7d964cf45" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.862s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:21:04 compute-1 nova_compute[225855]: 2026-01-20 15:21:04.835 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:21:05 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/3237900193' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:21:05 compute-1 ceph-mon[81775]: pgmap v2964: 321 pgs: 321 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.2 KiB/s rd, 1.3 KiB/s wr, 1 op/s
Jan 20 15:21:05 compute-1 nova_compute[225855]: 2026-01-20 15:21:05.095 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "refresh_cache-7a53c9b1-e64b-4a31-897a-bbe7d964cf45" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:21:05 compute-1 nova_compute[225855]: 2026-01-20 15:21:05.096 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquired lock "refresh_cache-7a53c9b1-e64b-4a31-897a-bbe7d964cf45" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:21:05 compute-1 nova_compute[225855]: 2026-01-20 15:21:05.096 225859 DEBUG nova.network.neutron [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] [instance: 7a53c9b1-e64b-4a31-897a-bbe7d964cf45] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 20 15:21:05 compute-1 nova_compute[225855]: 2026-01-20 15:21:05.096 225859 DEBUG nova.objects.instance [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 7a53c9b1-e64b-4a31-897a-bbe7d964cf45 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:21:05 compute-1 nova_compute[225855]: 2026-01-20 15:21:05.446 225859 DEBUG oslo_concurrency.lockutils [None req-8c4a9634-9fb3-4152-a31f-a9607378737e cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Acquiring lock "7a53c9b1-e64b-4a31-897a-bbe7d964cf45" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:21:05 compute-1 nova_compute[225855]: 2026-01-20 15:21:05.446 225859 DEBUG oslo_concurrency.lockutils [None req-8c4a9634-9fb3-4152-a31f-a9607378737e cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Lock "7a53c9b1-e64b-4a31-897a-bbe7d964cf45" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:21:05 compute-1 nova_compute[225855]: 2026-01-20 15:21:05.447 225859 DEBUG oslo_concurrency.lockutils [None req-8c4a9634-9fb3-4152-a31f-a9607378737e cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Acquiring lock "7a53c9b1-e64b-4a31-897a-bbe7d964cf45-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:21:05 compute-1 nova_compute[225855]: 2026-01-20 15:21:05.447 225859 DEBUG oslo_concurrency.lockutils [None req-8c4a9634-9fb3-4152-a31f-a9607378737e cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Lock "7a53c9b1-e64b-4a31-897a-bbe7d964cf45-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:21:05 compute-1 nova_compute[225855]: 2026-01-20 15:21:05.447 225859 DEBUG oslo_concurrency.lockutils [None req-8c4a9634-9fb3-4152-a31f-a9607378737e cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Lock "7a53c9b1-e64b-4a31-897a-bbe7d964cf45-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:21:05 compute-1 nova_compute[225855]: 2026-01-20 15:21:05.448 225859 INFO nova.compute.manager [None req-8c4a9634-9fb3-4152-a31f-a9607378737e cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] [instance: 7a53c9b1-e64b-4a31-897a-bbe7d964cf45] Terminating instance
Jan 20 15:21:05 compute-1 nova_compute[225855]: 2026-01-20 15:21:05.449 225859 DEBUG nova.compute.manager [None req-8c4a9634-9fb3-4152-a31f-a9607378737e cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] [instance: 7a53c9b1-e64b-4a31-897a-bbe7d964cf45] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 20 15:21:05 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:21:05 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:21:05 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:21:05.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:21:05 compute-1 kernel: tapab5264b7-ec (unregistering): left promiscuous mode
Jan 20 15:21:05 compute-1 NetworkManager[49104]: <info>  [1768922465.6530] device (tapab5264b7-ec): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 20 15:21:05 compute-1 nova_compute[225855]: 2026-01-20 15:21:05.663 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:21:05 compute-1 ovn_controller[130490]: 2026-01-20T15:21:05Z|00879|binding|INFO|Releasing lport ab5264b7-ec64-46dd-b30d-981799387571 from this chassis (sb_readonly=0)
Jan 20 15:21:05 compute-1 ovn_controller[130490]: 2026-01-20T15:21:05Z|00880|binding|INFO|Setting lport ab5264b7-ec64-46dd-b30d-981799387571 down in Southbound
Jan 20 15:21:05 compute-1 ovn_controller[130490]: 2026-01-20T15:21:05Z|00881|binding|INFO|Removing iface tapab5264b7-ec ovn-installed in OVS
Jan 20 15:21:05 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:21:05.694 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3c:70:df 10.100.0.6'], port_security=['fa:16:3e:3c:70:df 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '7a53c9b1-e64b-4a31-897a-bbe7d964cf45', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-76c2d716-7d14-4bc1-b83b-a3290ee99d9a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9156c0a9920c4721843416b9a44404f9', 'neutron:revision_number': '4', 'neutron:security_group_ids': '77f773a6-dc7f-4790-9c9b-d69f30c72eb2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com', 'neutron:port_fip': '192.168.122.203'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2bfc4e2a-eeed-480e-aa18-68fc6c8f2cc2, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], logical_port=ab5264b7-ec64-46dd-b30d-981799387571) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:21:05 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:21:05.695 140354 INFO neutron.agent.ovn.metadata.agent [-] Port ab5264b7-ec64-46dd-b30d-981799387571 in datapath 76c2d716-7d14-4bc1-b83b-a3290ee99d9a unbound from our chassis
Jan 20 15:21:05 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:21:05.696 140354 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 76c2d716-7d14-4bc1-b83b-a3290ee99d9a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 20 15:21:05 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:21:05.698 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[e4c72d84-6ff5-4ab4-94aa-903432808496]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:21:05 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:21:05.698 140354 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-76c2d716-7d14-4bc1-b83b-a3290ee99d9a namespace which is not needed anymore
Jan 20 15:21:05 compute-1 nova_compute[225855]: 2026-01-20 15:21:05.718 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:21:05 compute-1 systemd[1]: machine-qemu\x2d103\x2dinstance\x2d000000c5.scope: Deactivated successfully.
Jan 20 15:21:05 compute-1 systemd[1]: machine-qemu\x2d103\x2dinstance\x2d000000c5.scope: Consumed 14.588s CPU time.
Jan 20 15:21:05 compute-1 systemd-machined[194361]: Machine qemu-103-instance-000000c5 terminated.
Jan 20 15:21:05 compute-1 podman[309611]: 2026-01-20 15:21:05.762791099 +0000 UTC m=+0.089780555 container health_status 533cd8e0c3b8b7f910f15cef49e7f879ab46c12ce646a35628c15f13725eb500 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible)
Jan 20 15:21:05 compute-1 neutron-haproxy-ovnmeta-76c2d716-7d14-4bc1-b83b-a3290ee99d9a[309301]: [NOTICE]   (309305) : haproxy version is 2.8.14-c23fe91
Jan 20 15:21:05 compute-1 neutron-haproxy-ovnmeta-76c2d716-7d14-4bc1-b83b-a3290ee99d9a[309301]: [NOTICE]   (309305) : path to executable is /usr/sbin/haproxy
Jan 20 15:21:05 compute-1 neutron-haproxy-ovnmeta-76c2d716-7d14-4bc1-b83b-a3290ee99d9a[309301]: [WARNING]  (309305) : Exiting Master process...
Jan 20 15:21:05 compute-1 neutron-haproxy-ovnmeta-76c2d716-7d14-4bc1-b83b-a3290ee99d9a[309301]: [ALERT]    (309305) : Current worker (309307) exited with code 143 (Terminated)
Jan 20 15:21:05 compute-1 neutron-haproxy-ovnmeta-76c2d716-7d14-4bc1-b83b-a3290ee99d9a[309301]: [WARNING]  (309305) : All workers exited. Exiting... (0)
Jan 20 15:21:05 compute-1 systemd[1]: libpod-ff71acd4802f5ef48f8b4d2bd8d4bad2170671407bdfd29bae692881ac810c51.scope: Deactivated successfully.
Jan 20 15:21:05 compute-1 podman[309656]: 2026-01-20 15:21:05.843957289 +0000 UTC m=+0.048160090 container died ff71acd4802f5ef48f8b4d2bd8d4bad2170671407bdfd29bae692881ac810c51 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-76c2d716-7d14-4bc1-b83b-a3290ee99d9a, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 15:21:05 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ff71acd4802f5ef48f8b4d2bd8d4bad2170671407bdfd29bae692881ac810c51-userdata-shm.mount: Deactivated successfully.
Jan 20 15:21:05 compute-1 systemd[1]: var-lib-containers-storage-overlay-98994e09dc4916b2814fcd1aedf4c05e63267fe55d9759fece6cec7531df8589-merged.mount: Deactivated successfully.
Jan 20 15:21:05 compute-1 nova_compute[225855]: 2026-01-20 15:21:05.896 225859 INFO nova.virt.libvirt.driver [-] [instance: 7a53c9b1-e64b-4a31-897a-bbe7d964cf45] Instance destroyed successfully.
Jan 20 15:21:05 compute-1 nova_compute[225855]: 2026-01-20 15:21:05.897 225859 DEBUG nova.objects.instance [None req-8c4a9634-9fb3-4152-a31f-a9607378737e cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Lazy-loading 'resources' on Instance uuid 7a53c9b1-e64b-4a31-897a-bbe7d964cf45 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:21:05 compute-1 podman[309656]: 2026-01-20 15:21:05.897405977 +0000 UTC m=+0.101608728 container cleanup ff71acd4802f5ef48f8b4d2bd8d4bad2170671407bdfd29bae692881ac810c51 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-76c2d716-7d14-4bc1-b83b-a3290ee99d9a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3)
Jan 20 15:21:05 compute-1 systemd[1]: libpod-conmon-ff71acd4802f5ef48f8b4d2bd8d4bad2170671407bdfd29bae692881ac810c51.scope: Deactivated successfully.
Jan 20 15:21:05 compute-1 nova_compute[225855]: 2026-01-20 15:21:05.970 225859 DEBUG nova.virt.libvirt.vif [None req-8c4a9634-9fb3-4152-a31f-a9607378737e cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T15:20:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachVolumeNegativeTest-server-512186019',display_name='tempest-AttachVolumeNegativeTest-server-512186019',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-attachvolumenegativetest-server-512186019',id=197,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBI8d6pfZzu0QN0+ud36hsYGEa2fue/k/EBJ/5AAbAw966Nprd6b6gecK+XPS3vJw5O7JCevyXRxpx1xed28ouQO1W8vY3Q7SPAOn3X0ewiZY79+ulj2hj305nyB4SNFMjQ==',key_name='tempest-keypair-2087250418',keypairs=<?>,launch_index=0,launched_at=2026-01-20T15:20:23Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='9156c0a9920c4721843416b9a44404f9',ramdisk_id='',reservation_id='r-1dq7a0u0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachVolumeNegativeTest-1505789262',owner_user_name='tempest-AttachVolumeNegativeTest-1505789262-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T15:20:23Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='cd9a8f26b71f4631a387e555e6b18428',uuid=7a53c9b1-e64b-4a31-897a-bbe7d964cf45,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ab5264b7-ec64-46dd-b30d-981799387571", "address": "fa:16:3e:3c:70:df", "network": {"id": "76c2d716-7d14-4bc1-b83b-a3290ee99d9a", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-782760714-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.203", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9156c0a9920c4721843416b9a44404f9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapab5264b7-ec", "ovs_interfaceid": "ab5264b7-ec64-46dd-b30d-981799387571", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 20 15:21:05 compute-1 nova_compute[225855]: 2026-01-20 15:21:05.971 225859 DEBUG nova.network.os_vif_util [None req-8c4a9634-9fb3-4152-a31f-a9607378737e cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Converting VIF {"id": "ab5264b7-ec64-46dd-b30d-981799387571", "address": "fa:16:3e:3c:70:df", "network": {"id": "76c2d716-7d14-4bc1-b83b-a3290ee99d9a", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-782760714-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.203", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9156c0a9920c4721843416b9a44404f9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapab5264b7-ec", "ovs_interfaceid": "ab5264b7-ec64-46dd-b30d-981799387571", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 15:21:05 compute-1 nova_compute[225855]: 2026-01-20 15:21:05.972 225859 DEBUG nova.network.os_vif_util [None req-8c4a9634-9fb3-4152-a31f-a9607378737e cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:3c:70:df,bridge_name='br-int',has_traffic_filtering=True,id=ab5264b7-ec64-46dd-b30d-981799387571,network=Network(76c2d716-7d14-4bc1-b83b-a3290ee99d9a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapab5264b7-ec') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 15:21:05 compute-1 nova_compute[225855]: 2026-01-20 15:21:05.973 225859 DEBUG os_vif [None req-8c4a9634-9fb3-4152-a31f-a9607378737e cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:3c:70:df,bridge_name='br-int',has_traffic_filtering=True,id=ab5264b7-ec64-46dd-b30d-981799387571,network=Network(76c2d716-7d14-4bc1-b83b-a3290ee99d9a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapab5264b7-ec') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 20 15:21:05 compute-1 nova_compute[225855]: 2026-01-20 15:21:05.975 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:21:05 compute-1 nova_compute[225855]: 2026-01-20 15:21:05.975 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapab5264b7-ec, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:21:05 compute-1 nova_compute[225855]: 2026-01-20 15:21:05.977 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:21:05 compute-1 nova_compute[225855]: 2026-01-20 15:21:05.978 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:21:05 compute-1 nova_compute[225855]: 2026-01-20 15:21:05.981 225859 INFO os_vif [None req-8c4a9634-9fb3-4152-a31f-a9607378737e cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:3c:70:df,bridge_name='br-int',has_traffic_filtering=True,id=ab5264b7-ec64-46dd-b30d-981799387571,network=Network(76c2d716-7d14-4bc1-b83b-a3290ee99d9a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapab5264b7-ec')
Jan 20 15:21:06 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:21:06 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:21:06 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:21:06.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:21:06 compute-1 podman[309702]: 2026-01-20 15:21:06.080546945 +0000 UTC m=+0.158215975 container remove ff71acd4802f5ef48f8b4d2bd8d4bad2170671407bdfd29bae692881ac810c51 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-76c2d716-7d14-4bc1-b83b-a3290ee99d9a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Jan 20 15:21:06 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:21:06.086 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[d5cf35bd-3d1d-46c8-b96f-b4d396d078ad]: (4, ('Tue Jan 20 03:21:05 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-76c2d716-7d14-4bc1-b83b-a3290ee99d9a (ff71acd4802f5ef48f8b4d2bd8d4bad2170671407bdfd29bae692881ac810c51)\nff71acd4802f5ef48f8b4d2bd8d4bad2170671407bdfd29bae692881ac810c51\nTue Jan 20 03:21:05 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-76c2d716-7d14-4bc1-b83b-a3290ee99d9a (ff71acd4802f5ef48f8b4d2bd8d4bad2170671407bdfd29bae692881ac810c51)\nff71acd4802f5ef48f8b4d2bd8d4bad2170671407bdfd29bae692881ac810c51\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:21:06 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:21:06.089 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[04401659-6060-4205-9c42-4fd9cd2a1b62]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:21:06 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:21:06.090 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap76c2d716-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:21:06 compute-1 nova_compute[225855]: 2026-01-20 15:21:06.092 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:21:06 compute-1 kernel: tap76c2d716-70: left promiscuous mode
Jan 20 15:21:06 compute-1 nova_compute[225855]: 2026-01-20 15:21:06.106 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:21:06 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:21:06.109 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[d65e16c2-e2cf-47d7-b0bb-dd1bdcb36089]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:21:06 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:21:06.123 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[a2eddc05-88d9-4c31-9d98-5d8fe501923a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:21:06 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:21:06.125 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[41345da5-a637-4a5b-a64f-8bc178087c8b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:21:06 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:21:06.141 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[78cb73ca-fa75-4f97-8496-ca5956755e1c]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 746149, 'reachable_time': 15955, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 309733, 'error': None, 'target': 'ovnmeta-76c2d716-7d14-4bc1-b83b-a3290ee99d9a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:21:06 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:21:06.145 140466 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-76c2d716-7d14-4bc1-b83b-a3290ee99d9a deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 20 15:21:06 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:21:06.145 140466 DEBUG oslo.privsep.daemon [-] privsep: reply[e065bab1-4bde-4f36-a65d-4d65c73c6da3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:21:06 compute-1 systemd[1]: run-netns-ovnmeta\x2d76c2d716\x2d7d14\x2d4bc1\x2db83b\x2da3290ee99d9a.mount: Deactivated successfully.
Jan 20 15:21:06 compute-1 nova_compute[225855]: 2026-01-20 15:21:06.344 225859 DEBUG nova.compute.manager [req-07b90bbf-dcfd-4e60-bc52-f5a1ecb9a4cf req-6a109a83-a898-40d6-ad16-1ebf56c2c436 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 7a53c9b1-e64b-4a31-897a-bbe7d964cf45] Received event network-vif-unplugged-ab5264b7-ec64-46dd-b30d-981799387571 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:21:06 compute-1 nova_compute[225855]: 2026-01-20 15:21:06.345 225859 DEBUG oslo_concurrency.lockutils [req-07b90bbf-dcfd-4e60-bc52-f5a1ecb9a4cf req-6a109a83-a898-40d6-ad16-1ebf56c2c436 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "7a53c9b1-e64b-4a31-897a-bbe7d964cf45-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:21:06 compute-1 nova_compute[225855]: 2026-01-20 15:21:06.345 225859 DEBUG oslo_concurrency.lockutils [req-07b90bbf-dcfd-4e60-bc52-f5a1ecb9a4cf req-6a109a83-a898-40d6-ad16-1ebf56c2c436 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "7a53c9b1-e64b-4a31-897a-bbe7d964cf45-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:21:06 compute-1 nova_compute[225855]: 2026-01-20 15:21:06.345 225859 DEBUG oslo_concurrency.lockutils [req-07b90bbf-dcfd-4e60-bc52-f5a1ecb9a4cf req-6a109a83-a898-40d6-ad16-1ebf56c2c436 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "7a53c9b1-e64b-4a31-897a-bbe7d964cf45-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:21:06 compute-1 nova_compute[225855]: 2026-01-20 15:21:06.345 225859 DEBUG nova.compute.manager [req-07b90bbf-dcfd-4e60-bc52-f5a1ecb9a4cf req-6a109a83-a898-40d6-ad16-1ebf56c2c436 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 7a53c9b1-e64b-4a31-897a-bbe7d964cf45] No waiting events found dispatching network-vif-unplugged-ab5264b7-ec64-46dd-b30d-981799387571 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:21:06 compute-1 nova_compute[225855]: 2026-01-20 15:21:06.345 225859 DEBUG nova.compute.manager [req-07b90bbf-dcfd-4e60-bc52-f5a1ecb9a4cf req-6a109a83-a898-40d6-ad16-1ebf56c2c436 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 7a53c9b1-e64b-4a31-897a-bbe7d964cf45] Received event network-vif-unplugged-ab5264b7-ec64-46dd-b30d-981799387571 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 20 15:21:06 compute-1 nova_compute[225855]: 2026-01-20 15:21:06.655 225859 INFO nova.virt.libvirt.driver [None req-8c4a9634-9fb3-4152-a31f-a9607378737e cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] [instance: 7a53c9b1-e64b-4a31-897a-bbe7d964cf45] Deleting instance files /var/lib/nova/instances/7a53c9b1-e64b-4a31-897a-bbe7d964cf45_del
Jan 20 15:21:06 compute-1 nova_compute[225855]: 2026-01-20 15:21:06.656 225859 INFO nova.virt.libvirt.driver [None req-8c4a9634-9fb3-4152-a31f-a9607378737e cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] [instance: 7a53c9b1-e64b-4a31-897a-bbe7d964cf45] Deletion of /var/lib/nova/instances/7a53c9b1-e64b-4a31-897a-bbe7d964cf45_del complete
Jan 20 15:21:06 compute-1 ceph-mon[81775]: pgmap v2965: 321 pgs: 321 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.0 KiB/s rd, 3.0 KiB/s wr, 3 op/s
Jan 20 15:21:06 compute-1 nova_compute[225855]: 2026-01-20 15:21:06.976 225859 INFO nova.compute.manager [None req-8c4a9634-9fb3-4152-a31f-a9607378737e cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] [instance: 7a53c9b1-e64b-4a31-897a-bbe7d964cf45] Took 1.53 seconds to destroy the instance on the hypervisor.
Jan 20 15:21:06 compute-1 nova_compute[225855]: 2026-01-20 15:21:06.977 225859 DEBUG oslo.service.loopingcall [None req-8c4a9634-9fb3-4152-a31f-a9607378737e cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 20 15:21:06 compute-1 nova_compute[225855]: 2026-01-20 15:21:06.978 225859 DEBUG nova.compute.manager [-] [instance: 7a53c9b1-e64b-4a31-897a-bbe7d964cf45] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 20 15:21:06 compute-1 nova_compute[225855]: 2026-01-20 15:21:06.978 225859 DEBUG nova.network.neutron [-] [instance: 7a53c9b1-e64b-4a31-897a-bbe7d964cf45] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 20 15:21:07 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:21:07 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:21:07 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:21:07.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:21:07 compute-1 nova_compute[225855]: 2026-01-20 15:21:07.757 225859 DEBUG nova.network.neutron [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] [instance: 7a53c9b1-e64b-4a31-897a-bbe7d964cf45] Updating instance_info_cache with network_info: [{"id": "ab5264b7-ec64-46dd-b30d-981799387571", "address": "fa:16:3e:3c:70:df", "network": {"id": "76c2d716-7d14-4bc1-b83b-a3290ee99d9a", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-782760714-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.203", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9156c0a9920c4721843416b9a44404f9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapab5264b7-ec", "ovs_interfaceid": "ab5264b7-ec64-46dd-b30d-981799387571", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:21:07 compute-1 nova_compute[225855]: 2026-01-20 15:21:07.845 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Releasing lock "refresh_cache-7a53c9b1-e64b-4a31-897a-bbe7d964cf45" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:21:07 compute-1 nova_compute[225855]: 2026-01-20 15:21:07.846 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] [instance: 7a53c9b1-e64b-4a31-897a-bbe7d964cf45] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 20 15:21:07 compute-1 nova_compute[225855]: 2026-01-20 15:21:07.846 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:21:07 compute-1 nova_compute[225855]: 2026-01-20 15:21:07.847 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:21:07 compute-1 nova_compute[225855]: 2026-01-20 15:21:07.847 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:21:07 compute-1 nova_compute[225855]: 2026-01-20 15:21:07.923 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:21:07 compute-1 nova_compute[225855]: 2026-01-20 15:21:07.924 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:21:07 compute-1 nova_compute[225855]: 2026-01-20 15:21:07.924 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:21:07 compute-1 nova_compute[225855]: 2026-01-20 15:21:07.924 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 15:21:07 compute-1 nova_compute[225855]: 2026-01-20 15:21:07.924 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:21:08 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:21:08 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:21:08 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:21:08.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:21:08 compute-1 nova_compute[225855]: 2026-01-20 15:21:08.135 225859 DEBUG nova.network.neutron [-] [instance: 7a53c9b1-e64b-4a31-897a-bbe7d964cf45] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:21:08 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:21:08.142 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=67, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '12:bb:42', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '06:92:24:f7:15:56'}, ipsec=False) old=SB_Global(nb_cfg=66) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:21:08 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:21:08.144 140354 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 20 15:21:08 compute-1 nova_compute[225855]: 2026-01-20 15:21:08.145 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:21:08 compute-1 nova_compute[225855]: 2026-01-20 15:21:08.231 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:21:08 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:21:08 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1320144358' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:21:08 compute-1 nova_compute[225855]: 2026-01-20 15:21:08.370 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:21:08 compute-1 nova_compute[225855]: 2026-01-20 15:21:08.389 225859 INFO nova.compute.manager [-] [instance: 7a53c9b1-e64b-4a31-897a-bbe7d964cf45] Took 1.41 seconds to deallocate network for instance.
Jan 20 15:21:08 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/1320144358' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:21:08 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e415 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:21:08 compute-1 nova_compute[225855]: 2026-01-20 15:21:08.449 225859 DEBUG nova.compute.manager [req-31348548-716d-4350-af2c-6d66a5381f6a req-7d1cc436-8615-4104-8cbf-89c11823449d 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 7a53c9b1-e64b-4a31-897a-bbe7d964cf45] Received event network-vif-deleted-ab5264b7-ec64-46dd-b30d-981799387571 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:21:08 compute-1 nova_compute[225855]: 2026-01-20 15:21:08.471 225859 DEBUG oslo_concurrency.lockutils [None req-8c4a9634-9fb3-4152-a31f-a9607378737e cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:21:08 compute-1 nova_compute[225855]: 2026-01-20 15:21:08.471 225859 DEBUG oslo_concurrency.lockutils [None req-8c4a9634-9fb3-4152-a31f-a9607378737e cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:21:08 compute-1 nova_compute[225855]: 2026-01-20 15:21:08.549 225859 WARNING nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 15:21:08 compute-1 nova_compute[225855]: 2026-01-20 15:21:08.550 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4267MB free_disk=20.94268798828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 15:21:08 compute-1 nova_compute[225855]: 2026-01-20 15:21:08.550 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:21:08 compute-1 nova_compute[225855]: 2026-01-20 15:21:08.659 225859 DEBUG oslo_concurrency.processutils [None req-8c4a9634-9fb3-4152-a31f-a9607378737e cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:21:08 compute-1 nova_compute[225855]: 2026-01-20 15:21:08.800 225859 DEBUG nova.compute.manager [req-8ed1ca59-68fc-4e81-b72b-6b18b3c3876f req-49e253f9-6af9-4564-a9a4-4f987af1b460 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 7a53c9b1-e64b-4a31-897a-bbe7d964cf45] Received event network-vif-plugged-ab5264b7-ec64-46dd-b30d-981799387571 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:21:08 compute-1 nova_compute[225855]: 2026-01-20 15:21:08.801 225859 DEBUG oslo_concurrency.lockutils [req-8ed1ca59-68fc-4e81-b72b-6b18b3c3876f req-49e253f9-6af9-4564-a9a4-4f987af1b460 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "7a53c9b1-e64b-4a31-897a-bbe7d964cf45-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:21:08 compute-1 nova_compute[225855]: 2026-01-20 15:21:08.801 225859 DEBUG oslo_concurrency.lockutils [req-8ed1ca59-68fc-4e81-b72b-6b18b3c3876f req-49e253f9-6af9-4564-a9a4-4f987af1b460 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "7a53c9b1-e64b-4a31-897a-bbe7d964cf45-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:21:08 compute-1 nova_compute[225855]: 2026-01-20 15:21:08.802 225859 DEBUG oslo_concurrency.lockutils [req-8ed1ca59-68fc-4e81-b72b-6b18b3c3876f req-49e253f9-6af9-4564-a9a4-4f987af1b460 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "7a53c9b1-e64b-4a31-897a-bbe7d964cf45-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:21:08 compute-1 nova_compute[225855]: 2026-01-20 15:21:08.802 225859 DEBUG nova.compute.manager [req-8ed1ca59-68fc-4e81-b72b-6b18b3c3876f req-49e253f9-6af9-4564-a9a4-4f987af1b460 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 7a53c9b1-e64b-4a31-897a-bbe7d964cf45] No waiting events found dispatching network-vif-plugged-ab5264b7-ec64-46dd-b30d-981799387571 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:21:08 compute-1 nova_compute[225855]: 2026-01-20 15:21:08.803 225859 WARNING nova.compute.manager [req-8ed1ca59-68fc-4e81-b72b-6b18b3c3876f req-49e253f9-6af9-4564-a9a4-4f987af1b460 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 7a53c9b1-e64b-4a31-897a-bbe7d964cf45] Received unexpected event network-vif-plugged-ab5264b7-ec64-46dd-b30d-981799387571 for instance with vm_state deleted and task_state None.
Jan 20 15:21:09 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:21:09 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2027985971' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:21:09 compute-1 nova_compute[225855]: 2026-01-20 15:21:09.066 225859 DEBUG oslo_concurrency.processutils [None req-8c4a9634-9fb3-4152-a31f-a9607378737e cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.407s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:21:09 compute-1 nova_compute[225855]: 2026-01-20 15:21:09.071 225859 DEBUG nova.compute.provider_tree [None req-8c4a9634-9fb3-4152-a31f-a9607378737e cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:21:09 compute-1 nova_compute[225855]: 2026-01-20 15:21:09.180 225859 DEBUG nova.scheduler.client.report [None req-8c4a9634-9fb3-4152-a31f-a9607378737e cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:21:09 compute-1 nova_compute[225855]: 2026-01-20 15:21:09.224 225859 DEBUG oslo_concurrency.lockutils [None req-8c4a9634-9fb3-4152-a31f-a9607378737e cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.753s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:21:09 compute-1 nova_compute[225855]: 2026-01-20 15:21:09.227 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.677s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:21:09 compute-1 nova_compute[225855]: 2026-01-20 15:21:09.277 225859 INFO nova.scheduler.client.report [None req-8c4a9634-9fb3-4152-a31f-a9607378737e cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Deleted allocations for instance 7a53c9b1-e64b-4a31-897a-bbe7d964cf45
Jan 20 15:21:09 compute-1 nova_compute[225855]: 2026-01-20 15:21:09.346 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 15:21:09 compute-1 nova_compute[225855]: 2026-01-20 15:21:09.346 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 15:21:09 compute-1 nova_compute[225855]: 2026-01-20 15:21:09.369 225859 DEBUG oslo_concurrency.lockutils [None req-8c4a9634-9fb3-4152-a31f-a9607378737e cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Lock "7a53c9b1-e64b-4a31-897a-bbe7d964cf45" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.922s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:21:09 compute-1 nova_compute[225855]: 2026-01-20 15:21:09.371 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:21:09 compute-1 ceph-mon[81775]: pgmap v2966: 321 pgs: 321 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.0 KiB/s rd, 2.0 KiB/s wr, 3 op/s
Jan 20 15:21:09 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/2027985971' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:21:09 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:21:09 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:21:09 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:21:09.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:21:09 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:21:09 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3327566445' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:21:09 compute-1 nova_compute[225855]: 2026-01-20 15:21:09.838 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:21:09 compute-1 nova_compute[225855]: 2026-01-20 15:21:09.843 225859 DEBUG nova.compute.provider_tree [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:21:09 compute-1 nova_compute[225855]: 2026-01-20 15:21:09.888 225859 DEBUG nova.scheduler.client.report [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:21:09 compute-1 nova_compute[225855]: 2026-01-20 15:21:09.919 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 15:21:09 compute-1 nova_compute[225855]: 2026-01-20 15:21:09.920 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.692s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:21:10 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:21:10 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:21:10 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:21:10.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:21:10 compute-1 nova_compute[225855]: 2026-01-20 15:21:10.413 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:21:10 compute-1 nova_compute[225855]: 2026-01-20 15:21:10.413 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:21:10 compute-1 nova_compute[225855]: 2026-01-20 15:21:10.414 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:21:10 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/1519889462' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:21:10 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/3327566445' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:21:10 compute-1 nova_compute[225855]: 2026-01-20 15:21:10.978 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:21:11 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/212363485' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:21:11 compute-1 ceph-mon[81775]: pgmap v2967: 321 pgs: 321 active+clean; 142 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 14 KiB/s rd, 2.7 KiB/s wr, 20 op/s
Jan 20 15:21:11 compute-1 nova_compute[225855]: 2026-01-20 15:21:11.543 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:21:11 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:21:11 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:21:11 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:21:11.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:21:12 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:21:12 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:21:12 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:21:12.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:21:12 compute-1 ceph-mon[81775]: pgmap v2968: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 22 KiB/s rd, 3.0 KiB/s wr, 30 op/s
Jan 20 15:21:13 compute-1 nova_compute[225855]: 2026-01-20 15:21:13.231 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:21:13 compute-1 nova_compute[225855]: 2026-01-20 15:21:13.336 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:21:13 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e415 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:21:13 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:21:13 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:21:13 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:21:13.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:21:13 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/204930984' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:21:13 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/204930984' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:21:14 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:21:14 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:21:14 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:21:14.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:21:14 compute-1 ceph-mon[81775]: pgmap v2969: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 21 KiB/s rd, 2.8 KiB/s wr, 30 op/s
Jan 20 15:21:15 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:21:15 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:21:15 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:21:15.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:21:15 compute-1 nova_compute[225855]: 2026-01-20 15:21:15.981 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:21:16 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:21:16 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:21:16 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:21:16.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:21:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:21:16.442 140354 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:21:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:21:16.442 140354 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:21:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:21:16.442 140354 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:21:16 compute-1 ceph-mon[81775]: pgmap v2970: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 21 KiB/s rd, 2.8 KiB/s wr, 29 op/s
Jan 20 15:21:17 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:21:17.147 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5ffd4ac3-9266-4927-98ad-20a17782c725, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '67'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:21:17 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:21:17 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:21:17 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:21:17.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:21:17 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/3995444439' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:21:18 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:21:18 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:21:18 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:21:18.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:21:18 compute-1 nova_compute[225855]: 2026-01-20 15:21:18.233 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:21:18 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e415 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:21:18 compute-1 ceph-mon[81775]: pgmap v2971: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 20 15:21:19 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:21:19 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:21:19 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:21:19.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:21:20 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:21:20 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:21:20 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:21:20.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:21:20 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/1618258914' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:21:20 compute-1 nova_compute[225855]: 2026-01-20 15:21:20.895 225859 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1768922465.8945441, 7a53c9b1-e64b-4a31-897a-bbe7d964cf45 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:21:20 compute-1 nova_compute[225855]: 2026-01-20 15:21:20.896 225859 INFO nova.compute.manager [-] [instance: 7a53c9b1-e64b-4a31-897a-bbe7d964cf45] VM Stopped (Lifecycle Event)
Jan 20 15:21:20 compute-1 nova_compute[225855]: 2026-01-20 15:21:20.924 225859 DEBUG nova.compute.manager [None req-f951f18e-5dc0-473c-b07d-bb8ad696e646 - - - - - -] [instance: 7a53c9b1-e64b-4a31-897a-bbe7d964cf45] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:21:21 compute-1 nova_compute[225855]: 2026-01-20 15:21:21.007 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:21:21 compute-1 ceph-mon[81775]: pgmap v2972: 321 pgs: 321 active+clean; 161 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 35 KiB/s rd, 1.6 MiB/s wr, 52 op/s
Jan 20 15:21:21 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:21:21 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:21:21 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:21:21.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:21:22 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:21:22 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:21:22 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:21:22.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:21:22 compute-1 nova_compute[225855]: 2026-01-20 15:21:22.334 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:21:22 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/2770881495' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:21:23 compute-1 nova_compute[225855]: 2026-01-20 15:21:23.235 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:21:23 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e415 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:21:23 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:21:23 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:21:23 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:21:23.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:21:23 compute-1 ceph-mon[81775]: pgmap v2973: 321 pgs: 321 active+clean; 170 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 30 KiB/s rd, 1.8 MiB/s wr, 43 op/s
Jan 20 15:21:23 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/3554796984' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:21:23 compute-1 sudo[309811]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:21:23 compute-1 sudo[309811]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:21:23 compute-1 sudo[309811]: pam_unix(sudo:session): session closed for user root
Jan 20 15:21:24 compute-1 sudo[309846]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:21:24 compute-1 sudo[309846]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:21:24 compute-1 sudo[309846]: pam_unix(sudo:session): session closed for user root
Jan 20 15:21:24 compute-1 podman[309812]: 2026-01-20 15:21:24.05877668 +0000 UTC m=+0.093192070 container health_status 72be95c12e041eb2cd1a16ad70dfa1391d9f03a5f3180a614ead9459dcfcd46f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible)
Jan 20 15:21:24 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:21:24 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:21:24 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:21:24.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:21:24 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/300643478' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:21:25 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:21:25 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:21:25 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:21:25.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:21:25 compute-1 ceph-mon[81775]: pgmap v2974: 321 pgs: 321 active+clean; 189 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 21 KiB/s rd, 2.8 MiB/s wr, 34 op/s
Jan 20 15:21:25 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/3508581801' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:21:25 compute-1 sudo[309890]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:21:25 compute-1 sudo[309890]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:21:25 compute-1 sudo[309890]: pam_unix(sudo:session): session closed for user root
Jan 20 15:21:25 compute-1 sudo[309915]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:21:25 compute-1 sudo[309915]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:21:25 compute-1 sudo[309915]: pam_unix(sudo:session): session closed for user root
Jan 20 15:21:25 compute-1 sudo[309940]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:21:25 compute-1 sudo[309940]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:21:25 compute-1 sudo[309940]: pam_unix(sudo:session): session closed for user root
Jan 20 15:21:25 compute-1 sudo[309965]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 20 15:21:26 compute-1 sudo[309965]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:21:26 compute-1 nova_compute[225855]: 2026-01-20 15:21:26.010 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:21:26 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:21:26 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:21:26 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:21:26.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:21:26 compute-1 sudo[309965]: pam_unix(sudo:session): session closed for user root
Jan 20 15:21:26 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 20 15:21:26 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:21:26 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 15:21:26 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:21:26 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 15:21:26 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 15:21:26 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:21:27 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:21:27 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:21:27 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:21:27.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:21:27 compute-1 ceph-mon[81775]: pgmap v2975: 321 pgs: 321 active+clean; 213 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 90 KiB/s rd, 3.6 MiB/s wr, 66 op/s
Jan 20 15:21:28 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:21:28 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:21:28 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:21:28.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:21:28 compute-1 nova_compute[225855]: 2026-01-20 15:21:28.235 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:21:28 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e415 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:21:28 compute-1 ceph-mon[81775]: pgmap v2976: 321 pgs: 321 active+clean; 213 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 90 KiB/s rd, 3.6 MiB/s wr, 66 op/s
Jan 20 15:21:29 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:21:29 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:21:29 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:21:29.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:21:30 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:21:30 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:21:30 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:21:30.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:21:30 compute-1 ceph-mon[81775]: pgmap v2977: 321 pgs: 321 active+clean; 213 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 3.6 MiB/s wr, 188 op/s
Jan 20 15:21:31 compute-1 nova_compute[225855]: 2026-01-20 15:21:31.080 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:21:31 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:21:31 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:21:31 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:21:31.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:21:32 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:21:32 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:21:32 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:21:32.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:21:32 compute-1 sudo[310023]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:21:32 compute-1 sudo[310023]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:21:32 compute-1 sudo[310023]: pam_unix(sudo:session): session closed for user root
Jan 20 15:21:32 compute-1 sudo[310048]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 15:21:32 compute-1 sudo[310048]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:21:32 compute-1 sudo[310048]: pam_unix(sudo:session): session closed for user root
Jan 20 15:21:33 compute-1 nova_compute[225855]: 2026-01-20 15:21:33.237 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:21:33 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e415 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:21:33 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:21:33 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:21:33 compute-1 ceph-mon[81775]: pgmap v2978: 321 pgs: 321 active+clean; 213 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 2.0 MiB/s wr, 176 op/s
Jan 20 15:21:33 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:21:33 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:21:33 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:21:33.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:21:34 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:21:34 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:21:34 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:21:34.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:21:34 compute-1 ceph-mon[81775]: pgmap v2979: 321 pgs: 321 active+clean; 213 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 1.8 MiB/s wr, 168 op/s
Jan 20 15:21:35 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:21:35 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:21:35 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:21:35.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:21:36 compute-1 podman[310075]: 2026-01-20 15:21:36.001703144 +0000 UTC m=+0.051589007 container health_status 533cd8e0c3b8b7f910f15cef49e7f879ab46c12ce646a35628c15f13725eb500 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Jan 20 15:21:36 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:21:36 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:21:36 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:21:36.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:21:36 compute-1 nova_compute[225855]: 2026-01-20 15:21:36.119 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:21:36 compute-1 ceph-mon[81775]: pgmap v2980: 321 pgs: 321 active+clean; 213 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 762 KiB/s wr, 166 op/s
Jan 20 15:21:37 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:21:37 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:21:37 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:21:37.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:21:38 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:21:38 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:21:38 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:21:38.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:21:38 compute-1 nova_compute[225855]: 2026-01-20 15:21:38.288 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:21:38 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e415 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:21:38 compute-1 ceph-mon[81775]: pgmap v2981: 321 pgs: 321 active+clean; 213 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 14 KiB/s wr, 134 op/s
Jan 20 15:21:39 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:21:39 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:21:39 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:21:39.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:21:40 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:21:40 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:21:40 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:21:40.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:21:40 compute-1 ceph-mon[81775]: pgmap v2982: 321 pgs: 321 active+clean; 258 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 4.3 MiB/s rd, 3.3 MiB/s wr, 243 op/s
Jan 20 15:21:41 compute-1 nova_compute[225855]: 2026-01-20 15:21:41.121 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:21:41 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:21:41 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:21:41 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:21:41.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:21:42 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:21:42 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:21:42 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:21:42.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:21:42 compute-1 ceph-mon[81775]: pgmap v2983: 321 pgs: 321 active+clean; 277 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 854 KiB/s rd, 4.2 MiB/s wr, 137 op/s
Jan 20 15:21:43 compute-1 nova_compute[225855]: 2026-01-20 15:21:43.289 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:21:43 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e415 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:21:43 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:21:43 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:21:43 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:21:43.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:21:44 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:21:44 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:21:44 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:21:44.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:21:44 compute-1 sudo[310100]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:21:44 compute-1 sudo[310100]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:21:44 compute-1 sudo[310100]: pam_unix(sudo:session): session closed for user root
Jan 20 15:21:44 compute-1 sudo[310125]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:21:44 compute-1 sudo[310125]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:21:44 compute-1 sudo[310125]: pam_unix(sudo:session): session closed for user root
Jan 20 15:21:44 compute-1 ceph-mon[81775]: pgmap v2984: 321 pgs: 321 active+clean; 279 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 654 KiB/s rd, 4.3 MiB/s wr, 129 op/s
Jan 20 15:21:45 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:21:45 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:21:45 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:21:45.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:21:46 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:21:46 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:21:46 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:21:46.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:21:46 compute-1 nova_compute[225855]: 2026-01-20 15:21:46.123 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:21:46 compute-1 ceph-mon[81775]: pgmap v2985: 321 pgs: 321 active+clean; 279 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 654 KiB/s rd, 4.3 MiB/s wr, 130 op/s
Jan 20 15:21:47 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:21:47 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:21:47 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:21:47.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:21:48 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:21:48 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:21:48 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:21:48.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:21:48 compute-1 nova_compute[225855]: 2026-01-20 15:21:48.292 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:21:48 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e415 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:21:48 compute-1 nova_compute[225855]: 2026-01-20 15:21:48.606 225859 DEBUG nova.compute.manager [None req-2dfa6b97-6b8c-4dc9-a6b3-7f0c65eddc84 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 65aa2157-f058-4e5c-b448-64cf956310ba] Stashing vm_state: active _prep_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:5560
Jan 20 15:21:48 compute-1 ceph-mon[81775]: pgmap v2986: 321 pgs: 321 active+clean; 279 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 654 KiB/s rd, 4.3 MiB/s wr, 130 op/s
Jan 20 15:21:48 compute-1 nova_compute[225855]: 2026-01-20 15:21:48.820 225859 DEBUG oslo_concurrency.lockutils [None req-2dfa6b97-6b8c-4dc9-a6b3-7f0c65eddc84 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:21:48 compute-1 nova_compute[225855]: 2026-01-20 15:21:48.821 225859 DEBUG oslo_concurrency.lockutils [None req-2dfa6b97-6b8c-4dc9-a6b3-7f0c65eddc84 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:21:48 compute-1 nova_compute[225855]: 2026-01-20 15:21:48.840 225859 DEBUG nova.objects.instance [None req-2dfa6b97-6b8c-4dc9-a6b3-7f0c65eddc84 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Lazy-loading 'pci_requests' on Instance uuid 65aa2157-f058-4e5c-b448-64cf956310ba obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:21:48 compute-1 nova_compute[225855]: 2026-01-20 15:21:48.854 225859 DEBUG nova.virt.hardware [None req-2dfa6b97-6b8c-4dc9-a6b3-7f0c65eddc84 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 20 15:21:48 compute-1 nova_compute[225855]: 2026-01-20 15:21:48.854 225859 INFO nova.compute.claims [None req-2dfa6b97-6b8c-4dc9-a6b3-7f0c65eddc84 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 65aa2157-f058-4e5c-b448-64cf956310ba] Claim successful on node compute-1.ctlplane.example.com
Jan 20 15:21:48 compute-1 nova_compute[225855]: 2026-01-20 15:21:48.855 225859 DEBUG nova.objects.instance [None req-2dfa6b97-6b8c-4dc9-a6b3-7f0c65eddc84 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Lazy-loading 'resources' on Instance uuid 65aa2157-f058-4e5c-b448-64cf956310ba obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:21:48 compute-1 nova_compute[225855]: 2026-01-20 15:21:48.863 225859 DEBUG nova.objects.instance [None req-2dfa6b97-6b8c-4dc9-a6b3-7f0c65eddc84 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Lazy-loading 'pci_devices' on Instance uuid 65aa2157-f058-4e5c-b448-64cf956310ba obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:21:48 compute-1 nova_compute[225855]: 2026-01-20 15:21:48.938 225859 INFO nova.compute.resource_tracker [None req-2dfa6b97-6b8c-4dc9-a6b3-7f0c65eddc84 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 65aa2157-f058-4e5c-b448-64cf956310ba] Updating resource usage from migration 9b9a1c84-0aa6-44a5-9e94-a94a60dd6646
Jan 20 15:21:48 compute-1 nova_compute[225855]: 2026-01-20 15:21:48.939 225859 DEBUG nova.compute.resource_tracker [None req-2dfa6b97-6b8c-4dc9-a6b3-7f0c65eddc84 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 65aa2157-f058-4e5c-b448-64cf956310ba] Starting to track incoming migration 9b9a1c84-0aa6-44a5-9e94-a94a60dd6646 with flavor 30c26a27-d918-46d8-a512-4ef3b4ce5955 _update_usage_from_migration /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1431
Jan 20 15:21:49 compute-1 nova_compute[225855]: 2026-01-20 15:21:49.061 225859 DEBUG oslo_concurrency.processutils [None req-2dfa6b97-6b8c-4dc9-a6b3-7f0c65eddc84 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:21:49 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:21:49 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3610823315' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:21:49 compute-1 nova_compute[225855]: 2026-01-20 15:21:49.511 225859 DEBUG oslo_concurrency.processutils [None req-2dfa6b97-6b8c-4dc9-a6b3-7f0c65eddc84 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:21:49 compute-1 nova_compute[225855]: 2026-01-20 15:21:49.516 225859 DEBUG nova.compute.provider_tree [None req-2dfa6b97-6b8c-4dc9-a6b3-7f0c65eddc84 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:21:49 compute-1 nova_compute[225855]: 2026-01-20 15:21:49.543 225859 DEBUG nova.scheduler.client.report [None req-2dfa6b97-6b8c-4dc9-a6b3-7f0c65eddc84 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:21:49 compute-1 nova_compute[225855]: 2026-01-20 15:21:49.575 225859 DEBUG oslo_concurrency.lockutils [None req-2dfa6b97-6b8c-4dc9-a6b3-7f0c65eddc84 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: held 0.754s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:21:49 compute-1 nova_compute[225855]: 2026-01-20 15:21:49.575 225859 INFO nova.compute.manager [None req-2dfa6b97-6b8c-4dc9-a6b3-7f0c65eddc84 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 65aa2157-f058-4e5c-b448-64cf956310ba] Migrating
Jan 20 15:21:49 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:21:49 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:21:49 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:21:49.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:21:49 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/3610823315' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:21:50 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:21:50 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:21:50 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:21:50.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:21:50 compute-1 sshd-session[310175]: Accepted publickey for nova from 192.168.122.100 port 47006 ssh2: ECDSA SHA256:XnPnjIKlkePRv+YAV8ktjwWUWX9aekF80jIRGfdhjRU
Jan 20 15:21:50 compute-1 ceph-mon[81775]: pgmap v2987: 321 pgs: 321 active+clean; 279 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 654 KiB/s rd, 4.3 MiB/s wr, 130 op/s
Jan 20 15:21:50 compute-1 systemd-logind[783]: New session 72 of user nova.
Jan 20 15:21:50 compute-1 systemd[1]: Created slice User Slice of UID 42436.
Jan 20 15:21:50 compute-1 systemd[1]: Starting User Runtime Directory /run/user/42436...
Jan 20 15:21:50 compute-1 systemd[1]: Finished User Runtime Directory /run/user/42436.
Jan 20 15:21:50 compute-1 systemd[1]: Starting User Manager for UID 42436...
Jan 20 15:21:50 compute-1 systemd[310179]: pam_unix(systemd-user:session): session opened for user nova(uid=42436) by nova(uid=0)
Jan 20 15:21:50 compute-1 systemd[310179]: Queued start job for default target Main User Target.
Jan 20 15:21:51 compute-1 systemd[310179]: Created slice User Application Slice.
Jan 20 15:21:51 compute-1 systemd[310179]: Started Mark boot as successful after the user session has run 2 minutes.
Jan 20 15:21:51 compute-1 systemd[310179]: Started Daily Cleanup of User's Temporary Directories.
Jan 20 15:21:51 compute-1 systemd[310179]: Reached target Paths.
Jan 20 15:21:51 compute-1 systemd[310179]: Reached target Timers.
Jan 20 15:21:51 compute-1 systemd[310179]: Starting D-Bus User Message Bus Socket...
Jan 20 15:21:51 compute-1 systemd[310179]: Starting Create User's Volatile Files and Directories...
Jan 20 15:21:51 compute-1 systemd[310179]: Finished Create User's Volatile Files and Directories.
Jan 20 15:21:51 compute-1 systemd[310179]: Listening on D-Bus User Message Bus Socket.
Jan 20 15:21:51 compute-1 systemd[310179]: Reached target Sockets.
Jan 20 15:21:51 compute-1 systemd[310179]: Reached target Basic System.
Jan 20 15:21:51 compute-1 systemd[310179]: Reached target Main User Target.
Jan 20 15:21:51 compute-1 systemd[310179]: Startup finished in 131ms.
Jan 20 15:21:51 compute-1 systemd[1]: Started User Manager for UID 42436.
Jan 20 15:21:51 compute-1 systemd[1]: Started Session 72 of User nova.
Jan 20 15:21:51 compute-1 sshd-session[310175]: pam_unix(sshd:session): session opened for user nova(uid=42436) by nova(uid=0)
Jan 20 15:21:51 compute-1 sshd-session[310194]: Received disconnect from 192.168.122.100 port 47006:11: disconnected by user
Jan 20 15:21:51 compute-1 sshd-session[310194]: Disconnected from user nova 192.168.122.100 port 47006
Jan 20 15:21:51 compute-1 sshd-session[310175]: pam_unix(sshd:session): session closed for user nova
Jan 20 15:21:51 compute-1 systemd[1]: session-72.scope: Deactivated successfully.
Jan 20 15:21:51 compute-1 systemd-logind[783]: Session 72 logged out. Waiting for processes to exit.
Jan 20 15:21:51 compute-1 systemd-logind[783]: Removed session 72.
Jan 20 15:21:51 compute-1 nova_compute[225855]: 2026-01-20 15:21:51.124 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:21:51 compute-1 sshd-session[310196]: Accepted publickey for nova from 192.168.122.100 port 47010 ssh2: ECDSA SHA256:XnPnjIKlkePRv+YAV8ktjwWUWX9aekF80jIRGfdhjRU
Jan 20 15:21:51 compute-1 systemd-logind[783]: New session 74 of user nova.
Jan 20 15:21:51 compute-1 systemd[1]: Started Session 74 of User nova.
Jan 20 15:21:51 compute-1 sshd-session[310196]: pam_unix(sshd:session): session opened for user nova(uid=42436) by nova(uid=0)
Jan 20 15:21:51 compute-1 sshd-session[310199]: Received disconnect from 192.168.122.100 port 47010:11: disconnected by user
Jan 20 15:21:51 compute-1 sshd-session[310199]: Disconnected from user nova 192.168.122.100 port 47010
Jan 20 15:21:51 compute-1 sshd-session[310196]: pam_unix(sshd:session): session closed for user nova
Jan 20 15:21:51 compute-1 systemd[1]: session-74.scope: Deactivated successfully.
Jan 20 15:21:51 compute-1 systemd-logind[783]: Session 74 logged out. Waiting for processes to exit.
Jan 20 15:21:51 compute-1 systemd-logind[783]: Removed session 74.
Jan 20 15:21:51 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:21:51 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:21:51 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:21:51.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:21:52 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:21:52 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:21:52 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:21:52.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:21:52 compute-1 ceph-mon[81775]: pgmap v2988: 321 pgs: 321 active+clean; 279 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 96 KiB/s rd, 1001 KiB/s wr, 21 op/s
Jan 20 15:21:53 compute-1 nova_compute[225855]: 2026-01-20 15:21:53.297 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:21:53 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e415 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:21:53 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:21:53 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:21:53 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:21:53.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:21:54 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:21:54 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:21:54 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:21:54.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:21:54 compute-1 nova_compute[225855]: 2026-01-20 15:21:54.245 225859 DEBUG nova.compute.manager [req-384c106f-c8da-4c7d-9edd-b7cf32361a1d req-5b8050a1-452d-45ac-bafa-e98f6c2fa345 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 65aa2157-f058-4e5c-b448-64cf956310ba] Received event network-vif-unplugged-2eefbfcb-7c22-4c45-bb7b-75319242796c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:21:54 compute-1 nova_compute[225855]: 2026-01-20 15:21:54.245 225859 DEBUG oslo_concurrency.lockutils [req-384c106f-c8da-4c7d-9edd-b7cf32361a1d req-5b8050a1-452d-45ac-bafa-e98f6c2fa345 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "65aa2157-f058-4e5c-b448-64cf956310ba-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:21:54 compute-1 nova_compute[225855]: 2026-01-20 15:21:54.245 225859 DEBUG oslo_concurrency.lockutils [req-384c106f-c8da-4c7d-9edd-b7cf32361a1d req-5b8050a1-452d-45ac-bafa-e98f6c2fa345 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "65aa2157-f058-4e5c-b448-64cf956310ba-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:21:54 compute-1 nova_compute[225855]: 2026-01-20 15:21:54.246 225859 DEBUG oslo_concurrency.lockutils [req-384c106f-c8da-4c7d-9edd-b7cf32361a1d req-5b8050a1-452d-45ac-bafa-e98f6c2fa345 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "65aa2157-f058-4e5c-b448-64cf956310ba-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:21:54 compute-1 nova_compute[225855]: 2026-01-20 15:21:54.246 225859 DEBUG nova.compute.manager [req-384c106f-c8da-4c7d-9edd-b7cf32361a1d req-5b8050a1-452d-45ac-bafa-e98f6c2fa345 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 65aa2157-f058-4e5c-b448-64cf956310ba] No waiting events found dispatching network-vif-unplugged-2eefbfcb-7c22-4c45-bb7b-75319242796c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:21:54 compute-1 nova_compute[225855]: 2026-01-20 15:21:54.246 225859 WARNING nova.compute.manager [req-384c106f-c8da-4c7d-9edd-b7cf32361a1d req-5b8050a1-452d-45ac-bafa-e98f6c2fa345 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 65aa2157-f058-4e5c-b448-64cf956310ba] Received unexpected event network-vif-unplugged-2eefbfcb-7c22-4c45-bb7b-75319242796c for instance with vm_state active and task_state resize_migrating.
Jan 20 15:21:54 compute-1 ceph-mon[81775]: pgmap v2989: 321 pgs: 321 active+clean; 279 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 12 KiB/s rd, 78 KiB/s wr, 6 op/s
Jan 20 15:21:55 compute-1 podman[310203]: 2026-01-20 15:21:55.072602802 +0000 UTC m=+0.120346137 container health_status 72be95c12e041eb2cd1a16ad70dfa1391d9f03a5f3180a614ead9459dcfcd46f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Jan 20 15:21:55 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:21:55 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:21:55 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:21:55.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:21:56 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:21:56 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:21:56 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:21:56.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:21:56 compute-1 nova_compute[225855]: 2026-01-20 15:21:56.177 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:21:56 compute-1 nova_compute[225855]: 2026-01-20 15:21:56.411 225859 DEBUG nova.compute.manager [req-c0dd9026-9437-4203-a4c8-b0b1f917be34 req-116c3512-6ea9-4060-b99e-5eeb18df2d6b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 65aa2157-f058-4e5c-b448-64cf956310ba] Received event network-vif-plugged-2eefbfcb-7c22-4c45-bb7b-75319242796c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:21:56 compute-1 nova_compute[225855]: 2026-01-20 15:21:56.412 225859 DEBUG oslo_concurrency.lockutils [req-c0dd9026-9437-4203-a4c8-b0b1f917be34 req-116c3512-6ea9-4060-b99e-5eeb18df2d6b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "65aa2157-f058-4e5c-b448-64cf956310ba-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:21:56 compute-1 nova_compute[225855]: 2026-01-20 15:21:56.413 225859 DEBUG oslo_concurrency.lockutils [req-c0dd9026-9437-4203-a4c8-b0b1f917be34 req-116c3512-6ea9-4060-b99e-5eeb18df2d6b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "65aa2157-f058-4e5c-b448-64cf956310ba-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:21:56 compute-1 nova_compute[225855]: 2026-01-20 15:21:56.413 225859 DEBUG oslo_concurrency.lockutils [req-c0dd9026-9437-4203-a4c8-b0b1f917be34 req-116c3512-6ea9-4060-b99e-5eeb18df2d6b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "65aa2157-f058-4e5c-b448-64cf956310ba-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:21:56 compute-1 nova_compute[225855]: 2026-01-20 15:21:56.413 225859 DEBUG nova.compute.manager [req-c0dd9026-9437-4203-a4c8-b0b1f917be34 req-116c3512-6ea9-4060-b99e-5eeb18df2d6b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 65aa2157-f058-4e5c-b448-64cf956310ba] No waiting events found dispatching network-vif-plugged-2eefbfcb-7c22-4c45-bb7b-75319242796c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:21:56 compute-1 nova_compute[225855]: 2026-01-20 15:21:56.413 225859 WARNING nova.compute.manager [req-c0dd9026-9437-4203-a4c8-b0b1f917be34 req-116c3512-6ea9-4060-b99e-5eeb18df2d6b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 65aa2157-f058-4e5c-b448-64cf956310ba] Received unexpected event network-vif-plugged-2eefbfcb-7c22-4c45-bb7b-75319242796c for instance with vm_state active and task_state resize_migrated.
Jan 20 15:21:56 compute-1 ceph-mon[81775]: pgmap v2990: 321 pgs: 321 active+clean; 279 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.8 KiB/s rd, 44 KiB/s wr, 4 op/s
Jan 20 15:21:57 compute-1 nova_compute[225855]: 2026-01-20 15:21:57.053 225859 INFO nova.network.neutron [None req-2dfa6b97-6b8c-4dc9-a6b3-7f0c65eddc84 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 65aa2157-f058-4e5c-b448-64cf956310ba] Updating port 2eefbfcb-7c22-4c45-bb7b-75319242796c with attributes {'binding:host_id': 'compute-1.ctlplane.example.com', 'device_owner': 'compute:nova'}
Jan 20 15:21:57 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:21:57 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.002000057s ======
Jan 20 15:21:57 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:21:57.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000057s
Jan 20 15:21:58 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:21:58 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:21:58 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:21:58.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:21:58 compute-1 nova_compute[225855]: 2026-01-20 15:21:58.275 225859 DEBUG oslo_concurrency.lockutils [None req-2dfa6b97-6b8c-4dc9-a6b3-7f0c65eddc84 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Acquiring lock "refresh_cache-65aa2157-f058-4e5c-b448-64cf956310ba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:21:58 compute-1 nova_compute[225855]: 2026-01-20 15:21:58.275 225859 DEBUG oslo_concurrency.lockutils [None req-2dfa6b97-6b8c-4dc9-a6b3-7f0c65eddc84 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Acquired lock "refresh_cache-65aa2157-f058-4e5c-b448-64cf956310ba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:21:58 compute-1 nova_compute[225855]: 2026-01-20 15:21:58.275 225859 DEBUG nova.network.neutron [None req-2dfa6b97-6b8c-4dc9-a6b3-7f0c65eddc84 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 65aa2157-f058-4e5c-b448-64cf956310ba] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 15:21:58 compute-1 nova_compute[225855]: 2026-01-20 15:21:58.300 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:21:58 compute-1 nova_compute[225855]: 2026-01-20 15:21:58.421 225859 DEBUG nova.compute.manager [req-77276c5d-236f-4ee6-af46-35521532af4d req-d09a6e41-cd28-413f-ae3a-b5d4adcb2c3c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 65aa2157-f058-4e5c-b448-64cf956310ba] Received event network-changed-2eefbfcb-7c22-4c45-bb7b-75319242796c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:21:58 compute-1 nova_compute[225855]: 2026-01-20 15:21:58.421 225859 DEBUG nova.compute.manager [req-77276c5d-236f-4ee6-af46-35521532af4d req-d09a6e41-cd28-413f-ae3a-b5d4adcb2c3c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 65aa2157-f058-4e5c-b448-64cf956310ba] Refreshing instance network info cache due to event network-changed-2eefbfcb-7c22-4c45-bb7b-75319242796c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 15:21:58 compute-1 nova_compute[225855]: 2026-01-20 15:21:58.422 225859 DEBUG oslo_concurrency.lockutils [req-77276c5d-236f-4ee6-af46-35521532af4d req-d09a6e41-cd28-413f-ae3a-b5d4adcb2c3c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-65aa2157-f058-4e5c-b448-64cf956310ba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:21:58 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e415 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:21:58 compute-1 ceph-mon[81775]: pgmap v2991: 321 pgs: 321 active+clean; 279 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.8 KiB/s rd, 30 KiB/s wr, 4 op/s
Jan 20 15:21:58 compute-1 ovn_controller[130490]: 2026-01-20T15:21:58Z|00882|memory_trim|INFO|Detected inactivity (last active 30002 ms ago): trimming memory
Jan 20 15:21:59 compute-1 nova_compute[225855]: 2026-01-20 15:21:59.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:21:59 compute-1 nova_compute[225855]: 2026-01-20 15:21:59.339 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 15:21:59 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:21:59 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:21:59 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:21:59.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:22:00 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:22:00 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:22:00 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:22:00.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:22:00 compute-1 ceph-mon[81775]: pgmap v2992: 321 pgs: 321 active+clean; 279 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.8 KiB/s rd, 30 KiB/s wr, 4 op/s
Jan 20 15:22:01 compute-1 nova_compute[225855]: 2026-01-20 15:22:01.179 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:22:01 compute-1 systemd[1]: Stopping User Manager for UID 42436...
Jan 20 15:22:01 compute-1 systemd[310179]: Activating special unit Exit the Session...
Jan 20 15:22:01 compute-1 systemd[310179]: Stopped target Main User Target.
Jan 20 15:22:01 compute-1 systemd[310179]: Stopped target Basic System.
Jan 20 15:22:01 compute-1 systemd[310179]: Stopped target Paths.
Jan 20 15:22:01 compute-1 systemd[310179]: Stopped target Sockets.
Jan 20 15:22:01 compute-1 systemd[310179]: Stopped target Timers.
Jan 20 15:22:01 compute-1 systemd[310179]: Stopped Mark boot as successful after the user session has run 2 minutes.
Jan 20 15:22:01 compute-1 systemd[310179]: Stopped Daily Cleanup of User's Temporary Directories.
Jan 20 15:22:01 compute-1 systemd[310179]: Closed D-Bus User Message Bus Socket.
Jan 20 15:22:01 compute-1 systemd[310179]: Stopped Create User's Volatile Files and Directories.
Jan 20 15:22:01 compute-1 systemd[310179]: Removed slice User Application Slice.
Jan 20 15:22:01 compute-1 systemd[310179]: Reached target Shutdown.
Jan 20 15:22:01 compute-1 systemd[310179]: Finished Exit the Session.
Jan 20 15:22:01 compute-1 systemd[310179]: Reached target Exit the Session.
Jan 20 15:22:01 compute-1 systemd[1]: user@42436.service: Deactivated successfully.
Jan 20 15:22:01 compute-1 systemd[1]: Stopped User Manager for UID 42436.
Jan 20 15:22:01 compute-1 systemd[1]: Stopping User Runtime Directory /run/user/42436...
Jan 20 15:22:01 compute-1 systemd[1]: run-user-42436.mount: Deactivated successfully.
Jan 20 15:22:01 compute-1 systemd[1]: user-runtime-dir@42436.service: Deactivated successfully.
Jan 20 15:22:01 compute-1 systemd[1]: Stopped User Runtime Directory /run/user/42436.
Jan 20 15:22:01 compute-1 systemd[1]: Removed slice User Slice of UID 42436.
Jan 20 15:22:01 compute-1 nova_compute[225855]: 2026-01-20 15:22:01.480 225859 DEBUG nova.network.neutron [None req-2dfa6b97-6b8c-4dc9-a6b3-7f0c65eddc84 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 65aa2157-f058-4e5c-b448-64cf956310ba] Updating instance_info_cache with network_info: [{"id": "2eefbfcb-7c22-4c45-bb7b-75319242796c", "address": "fa:16:3e:7a:99:f8", "network": {"id": "6eb3ab38-e480-46b8-ae2d-d286fe61de3c", "bridge": "br-int", "label": "tempest-network-smoke--1810657086", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1ed5feeeafe7448a8efb47ab975b0ead", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2eefbfcb-7c", "ovs_interfaceid": "2eefbfcb-7c22-4c45-bb7b-75319242796c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:22:01 compute-1 nova_compute[225855]: 2026-01-20 15:22:01.517 225859 DEBUG oslo_concurrency.lockutils [None req-2dfa6b97-6b8c-4dc9-a6b3-7f0c65eddc84 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Releasing lock "refresh_cache-65aa2157-f058-4e5c-b448-64cf956310ba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:22:01 compute-1 nova_compute[225855]: 2026-01-20 15:22:01.521 225859 DEBUG oslo_concurrency.lockutils [req-77276c5d-236f-4ee6-af46-35521532af4d req-d09a6e41-cd28-413f-ae3a-b5d4adcb2c3c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-65aa2157-f058-4e5c-b448-64cf956310ba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:22:01 compute-1 nova_compute[225855]: 2026-01-20 15:22:01.521 225859 DEBUG nova.network.neutron [req-77276c5d-236f-4ee6-af46-35521532af4d req-d09a6e41-cd28-413f-ae3a-b5d4adcb2c3c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 65aa2157-f058-4e5c-b448-64cf956310ba] Refreshing network info cache for port 2eefbfcb-7c22-4c45-bb7b-75319242796c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 15:22:01 compute-1 nova_compute[225855]: 2026-01-20 15:22:01.658 225859 DEBUG nova.virt.libvirt.driver [None req-2dfa6b97-6b8c-4dc9-a6b3-7f0c65eddc84 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 65aa2157-f058-4e5c-b448-64cf956310ba] Starting finish_migration finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11698
Jan 20 15:22:01 compute-1 nova_compute[225855]: 2026-01-20 15:22:01.660 225859 DEBUG nova.virt.libvirt.driver [None req-2dfa6b97-6b8c-4dc9-a6b3-7f0c65eddc84 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 65aa2157-f058-4e5c-b448-64cf956310ba] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719
Jan 20 15:22:01 compute-1 nova_compute[225855]: 2026-01-20 15:22:01.660 225859 INFO nova.virt.libvirt.driver [None req-2dfa6b97-6b8c-4dc9-a6b3-7f0c65eddc84 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 65aa2157-f058-4e5c-b448-64cf956310ba] Creating image(s)
Jan 20 15:22:01 compute-1 nova_compute[225855]: 2026-01-20 15:22:01.695 225859 DEBUG nova.storage.rbd_utils [None req-2dfa6b97-6b8c-4dc9-a6b3-7f0c65eddc84 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] creating snapshot(nova-resize) on rbd image(65aa2157-f058-4e5c-b448-64cf956310ba_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 20 15:22:01 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:22:01 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:22:01 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:22:01.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:22:01 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e416 e416: 3 total, 3 up, 3 in
Jan 20 15:22:01 compute-1 nova_compute[225855]: 2026-01-20 15:22:01.855 225859 DEBUG nova.objects.instance [None req-2dfa6b97-6b8c-4dc9-a6b3-7f0c65eddc84 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Lazy-loading 'trusted_certs' on Instance uuid 65aa2157-f058-4e5c-b448-64cf956310ba obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:22:01 compute-1 nova_compute[225855]: 2026-01-20 15:22:01.954 225859 DEBUG nova.virt.libvirt.driver [None req-2dfa6b97-6b8c-4dc9-a6b3-7f0c65eddc84 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 65aa2157-f058-4e5c-b448-64cf956310ba] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Jan 20 15:22:01 compute-1 nova_compute[225855]: 2026-01-20 15:22:01.954 225859 DEBUG nova.virt.libvirt.driver [None req-2dfa6b97-6b8c-4dc9-a6b3-7f0c65eddc84 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 65aa2157-f058-4e5c-b448-64cf956310ba] Ensure instance console log exists: /var/lib/nova/instances/65aa2157-f058-4e5c-b448-64cf956310ba/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 20 15:22:01 compute-1 nova_compute[225855]: 2026-01-20 15:22:01.955 225859 DEBUG oslo_concurrency.lockutils [None req-2dfa6b97-6b8c-4dc9-a6b3-7f0c65eddc84 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:22:01 compute-1 nova_compute[225855]: 2026-01-20 15:22:01.955 225859 DEBUG oslo_concurrency.lockutils [None req-2dfa6b97-6b8c-4dc9-a6b3-7f0c65eddc84 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:22:01 compute-1 nova_compute[225855]: 2026-01-20 15:22:01.955 225859 DEBUG oslo_concurrency.lockutils [None req-2dfa6b97-6b8c-4dc9-a6b3-7f0c65eddc84 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:22:01 compute-1 nova_compute[225855]: 2026-01-20 15:22:01.958 225859 DEBUG nova.virt.libvirt.driver [None req-2dfa6b97-6b8c-4dc9-a6b3-7f0c65eddc84 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 65aa2157-f058-4e5c-b448-64cf956310ba] Start _get_guest_xml network_info=[{"id": "2eefbfcb-7c22-4c45-bb7b-75319242796c", "address": "fa:16:3e:7a:99:f8", "network": {"id": "6eb3ab38-e480-46b8-ae2d-d286fe61de3c", "bridge": "br-int", "label": "tempest-network-smoke--1810657086", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-network-smoke--1810657086", "vif_mac": "fa:16:3e:7a:99:f8"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1ed5feeeafe7448a8efb47ab975b0ead", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2eefbfcb-7c", "ovs_interfaceid": "2eefbfcb-7c22-4c45-bb7b-75319242796c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'device_type': 'disk', 'encryption_options': None, 'size': 0, 'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'guest_format': None, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'image_id': 'a32b3e07-16d8-46fd-9a7b-c242c432fcf9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 20 15:22:01 compute-1 nova_compute[225855]: 2026-01-20 15:22:01.963 225859 WARNING nova.virt.libvirt.driver [None req-2dfa6b97-6b8c-4dc9-a6b3-7f0c65eddc84 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 15:22:01 compute-1 nova_compute[225855]: 2026-01-20 15:22:01.971 225859 DEBUG nova.virt.libvirt.host [None req-2dfa6b97-6b8c-4dc9-a6b3-7f0c65eddc84 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 20 15:22:01 compute-1 nova_compute[225855]: 2026-01-20 15:22:01.971 225859 DEBUG nova.virt.libvirt.host [None req-2dfa6b97-6b8c-4dc9-a6b3-7f0c65eddc84 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 20 15:22:01 compute-1 nova_compute[225855]: 2026-01-20 15:22:01.974 225859 DEBUG nova.virt.libvirt.host [None req-2dfa6b97-6b8c-4dc9-a6b3-7f0c65eddc84 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 20 15:22:01 compute-1 nova_compute[225855]: 2026-01-20 15:22:01.975 225859 DEBUG nova.virt.libvirt.host [None req-2dfa6b97-6b8c-4dc9-a6b3-7f0c65eddc84 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 20 15:22:01 compute-1 nova_compute[225855]: 2026-01-20 15:22:01.976 225859 DEBUG nova.virt.libvirt.driver [None req-2dfa6b97-6b8c-4dc9-a6b3-7f0c65eddc84 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 20 15:22:01 compute-1 nova_compute[225855]: 2026-01-20 15:22:01.976 225859 DEBUG nova.virt.hardware [None req-2dfa6b97-6b8c-4dc9-a6b3-7f0c65eddc84 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-20T14:21:55Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='30c26a27-d918-46d8-a512-4ef3b4ce5955',id=2,is_public=True,memory_mb=192,name='m1.micro',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 20 15:22:01 compute-1 nova_compute[225855]: 2026-01-20 15:22:01.976 225859 DEBUG nova.virt.hardware [None req-2dfa6b97-6b8c-4dc9-a6b3-7f0c65eddc84 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 20 15:22:01 compute-1 nova_compute[225855]: 2026-01-20 15:22:01.976 225859 DEBUG nova.virt.hardware [None req-2dfa6b97-6b8c-4dc9-a6b3-7f0c65eddc84 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 20 15:22:01 compute-1 nova_compute[225855]: 2026-01-20 15:22:01.977 225859 DEBUG nova.virt.hardware [None req-2dfa6b97-6b8c-4dc9-a6b3-7f0c65eddc84 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 20 15:22:01 compute-1 nova_compute[225855]: 2026-01-20 15:22:01.977 225859 DEBUG nova.virt.hardware [None req-2dfa6b97-6b8c-4dc9-a6b3-7f0c65eddc84 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 20 15:22:01 compute-1 nova_compute[225855]: 2026-01-20 15:22:01.977 225859 DEBUG nova.virt.hardware [None req-2dfa6b97-6b8c-4dc9-a6b3-7f0c65eddc84 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 20 15:22:01 compute-1 nova_compute[225855]: 2026-01-20 15:22:01.977 225859 DEBUG nova.virt.hardware [None req-2dfa6b97-6b8c-4dc9-a6b3-7f0c65eddc84 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 20 15:22:01 compute-1 nova_compute[225855]: 2026-01-20 15:22:01.977 225859 DEBUG nova.virt.hardware [None req-2dfa6b97-6b8c-4dc9-a6b3-7f0c65eddc84 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 20 15:22:01 compute-1 nova_compute[225855]: 2026-01-20 15:22:01.978 225859 DEBUG nova.virt.hardware [None req-2dfa6b97-6b8c-4dc9-a6b3-7f0c65eddc84 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 20 15:22:01 compute-1 nova_compute[225855]: 2026-01-20 15:22:01.978 225859 DEBUG nova.virt.hardware [None req-2dfa6b97-6b8c-4dc9-a6b3-7f0c65eddc84 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 20 15:22:01 compute-1 nova_compute[225855]: 2026-01-20 15:22:01.978 225859 DEBUG nova.virt.hardware [None req-2dfa6b97-6b8c-4dc9-a6b3-7f0c65eddc84 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 20 15:22:01 compute-1 nova_compute[225855]: 2026-01-20 15:22:01.978 225859 DEBUG nova.objects.instance [None req-2dfa6b97-6b8c-4dc9-a6b3-7f0c65eddc84 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Lazy-loading 'vcpu_model' on Instance uuid 65aa2157-f058-4e5c-b448-64cf956310ba obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:22:01 compute-1 nova_compute[225855]: 2026-01-20 15:22:01.993 225859 DEBUG oslo_concurrency.processutils [None req-2dfa6b97-6b8c-4dc9-a6b3-7f0c65eddc84 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:22:02 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:22:02 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:22:02 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:22:02.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:22:02 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 15:22:02 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1804081147' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:22:02 compute-1 nova_compute[225855]: 2026-01-20 15:22:02.433 225859 DEBUG oslo_concurrency.processutils [None req-2dfa6b97-6b8c-4dc9-a6b3-7f0c65eddc84 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:22:02 compute-1 nova_compute[225855]: 2026-01-20 15:22:02.469 225859 DEBUG oslo_concurrency.processutils [None req-2dfa6b97-6b8c-4dc9-a6b3-7f0c65eddc84 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:22:02 compute-1 nova_compute[225855]: 2026-01-20 15:22:02.881 225859 DEBUG nova.network.neutron [req-77276c5d-236f-4ee6-af46-35521532af4d req-d09a6e41-cd28-413f-ae3a-b5d4adcb2c3c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 65aa2157-f058-4e5c-b448-64cf956310ba] Updated VIF entry in instance network info cache for port 2eefbfcb-7c22-4c45-bb7b-75319242796c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 15:22:02 compute-1 nova_compute[225855]: 2026-01-20 15:22:02.882 225859 DEBUG nova.network.neutron [req-77276c5d-236f-4ee6-af46-35521532af4d req-d09a6e41-cd28-413f-ae3a-b5d4adcb2c3c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 65aa2157-f058-4e5c-b448-64cf956310ba] Updating instance_info_cache with network_info: [{"id": "2eefbfcb-7c22-4c45-bb7b-75319242796c", "address": "fa:16:3e:7a:99:f8", "network": {"id": "6eb3ab38-e480-46b8-ae2d-d286fe61de3c", "bridge": "br-int", "label": "tempest-network-smoke--1810657086", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1ed5feeeafe7448a8efb47ab975b0ead", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2eefbfcb-7c", "ovs_interfaceid": "2eefbfcb-7c22-4c45-bb7b-75319242796c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:22:02 compute-1 ceph-mon[81775]: osdmap e416: 3 total, 3 up, 3 in
Jan 20 15:22:02 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/1804081147' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:22:02 compute-1 ceph-mon[81775]: pgmap v2994: 321 pgs: 321 active+clean; 279 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.4 KiB/s rd, 24 KiB/s wr, 4 op/s
Jan 20 15:22:02 compute-1 nova_compute[225855]: 2026-01-20 15:22:02.905 225859 DEBUG oslo_concurrency.lockutils [req-77276c5d-236f-4ee6-af46-35521532af4d req-d09a6e41-cd28-413f-ae3a-b5d4adcb2c3c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-65aa2157-f058-4e5c-b448-64cf956310ba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:22:02 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 15:22:02 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1822320099' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:22:02 compute-1 nova_compute[225855]: 2026-01-20 15:22:02.988 225859 DEBUG oslo_concurrency.processutils [None req-2dfa6b97-6b8c-4dc9-a6b3-7f0c65eddc84 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.519s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:22:02 compute-1 nova_compute[225855]: 2026-01-20 15:22:02.989 225859 DEBUG nova.virt.libvirt.vif [None req-2dfa6b97-6b8c-4dc9-a6b3-7f0c65eddc84 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T15:21:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-890684112',display_name='tempest-TestNetworkAdvancedServerOps-server-890684112',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-890684112',id=198,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBESV7XYmzz1neUUH7k/g2EXDk6RAN24jF19myyoRv6wDjFXd5E2VXPhzcf3Q2CFmKA+oZARXh9ZLZnZRzD1iPeEGFbgLb8nt50MGrmQlAcYMGRSCqrzrniFYSfPnybQWNg==',key_name='tempest-TestNetworkAdvancedServerOps-1160843308',keypairs=<?>,launch_index=0,launched_at=2026-01-20T15:21:25Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='1ed5feeeafe7448a8efb47ab975b0ead',ramdisk_id='',reservation_id='r-n64n905g',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-TestNetworkAdvancedServerOps-175282664',owner_user_name='tempest-TestNetworkAdvancedServerOps-175282664-project-member'},tags=<?>,task_state='resize_finish',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T15:21:55Z,user_data=None,user_id='442a7a5cb8ea426a82be9762b262d171',uuid=65aa2157-f058-4e5c-b448-64cf956310ba,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2eefbfcb-7c22-4c45-bb7b-75319242796c", "address": "fa:16:3e:7a:99:f8", "network": {"id": "6eb3ab38-e480-46b8-ae2d-d286fe61de3c", "bridge": "br-int", "label": "tempest-network-smoke--1810657086", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-network-smoke--1810657086", "vif_mac": "fa:16:3e:7a:99:f8"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1ed5feeeafe7448a8efb47ab975b0ead", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2eefbfcb-7c", "ovs_interfaceid": "2eefbfcb-7c22-4c45-bb7b-75319242796c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 20 15:22:02 compute-1 nova_compute[225855]: 2026-01-20 15:22:02.990 225859 DEBUG nova.network.os_vif_util [None req-2dfa6b97-6b8c-4dc9-a6b3-7f0c65eddc84 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Converting VIF {"id": "2eefbfcb-7c22-4c45-bb7b-75319242796c", "address": "fa:16:3e:7a:99:f8", "network": {"id": "6eb3ab38-e480-46b8-ae2d-d286fe61de3c", "bridge": "br-int", "label": "tempest-network-smoke--1810657086", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-network-smoke--1810657086", "vif_mac": "fa:16:3e:7a:99:f8"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1ed5feeeafe7448a8efb47ab975b0ead", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2eefbfcb-7c", "ovs_interfaceid": "2eefbfcb-7c22-4c45-bb7b-75319242796c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 15:22:02 compute-1 nova_compute[225855]: 2026-01-20 15:22:02.991 225859 DEBUG nova.network.os_vif_util [None req-2dfa6b97-6b8c-4dc9-a6b3-7f0c65eddc84 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7a:99:f8,bridge_name='br-int',has_traffic_filtering=True,id=2eefbfcb-7c22-4c45-bb7b-75319242796c,network=Network(6eb3ab38-e480-46b8-ae2d-d286fe61de3c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2eefbfcb-7c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 15:22:02 compute-1 nova_compute[225855]: 2026-01-20 15:22:02.994 225859 DEBUG nova.virt.libvirt.driver [None req-2dfa6b97-6b8c-4dc9-a6b3-7f0c65eddc84 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 65aa2157-f058-4e5c-b448-64cf956310ba] End _get_guest_xml xml=<domain type="kvm">
Jan 20 15:22:02 compute-1 nova_compute[225855]:   <uuid>65aa2157-f058-4e5c-b448-64cf956310ba</uuid>
Jan 20 15:22:02 compute-1 nova_compute[225855]:   <name>instance-000000c6</name>
Jan 20 15:22:02 compute-1 nova_compute[225855]:   <memory>196608</memory>
Jan 20 15:22:02 compute-1 nova_compute[225855]:   <vcpu>1</vcpu>
Jan 20 15:22:02 compute-1 nova_compute[225855]:   <metadata>
Jan 20 15:22:02 compute-1 nova_compute[225855]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 15:22:02 compute-1 nova_compute[225855]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 15:22:02 compute-1 nova_compute[225855]:       <nova:name>tempest-TestNetworkAdvancedServerOps-server-890684112</nova:name>
Jan 20 15:22:02 compute-1 nova_compute[225855]:       <nova:creationTime>2026-01-20 15:22:01</nova:creationTime>
Jan 20 15:22:02 compute-1 nova_compute[225855]:       <nova:flavor name="m1.micro">
Jan 20 15:22:02 compute-1 nova_compute[225855]:         <nova:memory>192</nova:memory>
Jan 20 15:22:02 compute-1 nova_compute[225855]:         <nova:disk>1</nova:disk>
Jan 20 15:22:02 compute-1 nova_compute[225855]:         <nova:swap>0</nova:swap>
Jan 20 15:22:02 compute-1 nova_compute[225855]:         <nova:ephemeral>0</nova:ephemeral>
Jan 20 15:22:02 compute-1 nova_compute[225855]:         <nova:vcpus>1</nova:vcpus>
Jan 20 15:22:02 compute-1 nova_compute[225855]:       </nova:flavor>
Jan 20 15:22:02 compute-1 nova_compute[225855]:       <nova:owner>
Jan 20 15:22:02 compute-1 nova_compute[225855]:         <nova:user uuid="442a7a5cb8ea426a82be9762b262d171">tempest-TestNetworkAdvancedServerOps-175282664-project-member</nova:user>
Jan 20 15:22:02 compute-1 nova_compute[225855]:         <nova:project uuid="1ed5feeeafe7448a8efb47ab975b0ead">tempest-TestNetworkAdvancedServerOps-175282664</nova:project>
Jan 20 15:22:02 compute-1 nova_compute[225855]:       </nova:owner>
Jan 20 15:22:02 compute-1 nova_compute[225855]:       <nova:root type="image" uuid="a32b3e07-16d8-46fd-9a7b-c242c432fcf9"/>
Jan 20 15:22:02 compute-1 nova_compute[225855]:       <nova:ports>
Jan 20 15:22:02 compute-1 nova_compute[225855]:         <nova:port uuid="2eefbfcb-7c22-4c45-bb7b-75319242796c">
Jan 20 15:22:02 compute-1 nova_compute[225855]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Jan 20 15:22:02 compute-1 nova_compute[225855]:         </nova:port>
Jan 20 15:22:02 compute-1 nova_compute[225855]:       </nova:ports>
Jan 20 15:22:02 compute-1 nova_compute[225855]:     </nova:instance>
Jan 20 15:22:02 compute-1 nova_compute[225855]:   </metadata>
Jan 20 15:22:02 compute-1 nova_compute[225855]:   <sysinfo type="smbios">
Jan 20 15:22:02 compute-1 nova_compute[225855]:     <system>
Jan 20 15:22:02 compute-1 nova_compute[225855]:       <entry name="manufacturer">RDO</entry>
Jan 20 15:22:02 compute-1 nova_compute[225855]:       <entry name="product">OpenStack Compute</entry>
Jan 20 15:22:02 compute-1 nova_compute[225855]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 15:22:02 compute-1 nova_compute[225855]:       <entry name="serial">65aa2157-f058-4e5c-b448-64cf956310ba</entry>
Jan 20 15:22:02 compute-1 nova_compute[225855]:       <entry name="uuid">65aa2157-f058-4e5c-b448-64cf956310ba</entry>
Jan 20 15:22:02 compute-1 nova_compute[225855]:       <entry name="family">Virtual Machine</entry>
Jan 20 15:22:02 compute-1 nova_compute[225855]:     </system>
Jan 20 15:22:02 compute-1 nova_compute[225855]:   </sysinfo>
Jan 20 15:22:02 compute-1 nova_compute[225855]:   <os>
Jan 20 15:22:02 compute-1 nova_compute[225855]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 20 15:22:02 compute-1 nova_compute[225855]:     <boot dev="hd"/>
Jan 20 15:22:02 compute-1 nova_compute[225855]:     <smbios mode="sysinfo"/>
Jan 20 15:22:02 compute-1 nova_compute[225855]:   </os>
Jan 20 15:22:02 compute-1 nova_compute[225855]:   <features>
Jan 20 15:22:02 compute-1 nova_compute[225855]:     <acpi/>
Jan 20 15:22:02 compute-1 nova_compute[225855]:     <apic/>
Jan 20 15:22:02 compute-1 nova_compute[225855]:     <vmcoreinfo/>
Jan 20 15:22:02 compute-1 nova_compute[225855]:   </features>
Jan 20 15:22:02 compute-1 nova_compute[225855]:   <clock offset="utc">
Jan 20 15:22:02 compute-1 nova_compute[225855]:     <timer name="pit" tickpolicy="delay"/>
Jan 20 15:22:02 compute-1 nova_compute[225855]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 20 15:22:02 compute-1 nova_compute[225855]:     <timer name="hpet" present="no"/>
Jan 20 15:22:02 compute-1 nova_compute[225855]:   </clock>
Jan 20 15:22:02 compute-1 nova_compute[225855]:   <cpu mode="custom" match="exact">
Jan 20 15:22:02 compute-1 nova_compute[225855]:     <model>Nehalem</model>
Jan 20 15:22:02 compute-1 nova_compute[225855]:     <topology sockets="1" cores="1" threads="1"/>
Jan 20 15:22:02 compute-1 nova_compute[225855]:   </cpu>
Jan 20 15:22:02 compute-1 nova_compute[225855]:   <devices>
Jan 20 15:22:02 compute-1 nova_compute[225855]:     <disk type="network" device="disk">
Jan 20 15:22:02 compute-1 nova_compute[225855]:       <driver type="raw" cache="none"/>
Jan 20 15:22:02 compute-1 nova_compute[225855]:       <source protocol="rbd" name="vms/65aa2157-f058-4e5c-b448-64cf956310ba_disk">
Jan 20 15:22:02 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 15:22:02 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 15:22:02 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 15:22:02 compute-1 nova_compute[225855]:       </source>
Jan 20 15:22:02 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 15:22:02 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 15:22:02 compute-1 nova_compute[225855]:       </auth>
Jan 20 15:22:02 compute-1 nova_compute[225855]:       <target dev="vda" bus="virtio"/>
Jan 20 15:22:02 compute-1 nova_compute[225855]:     </disk>
Jan 20 15:22:02 compute-1 nova_compute[225855]:     <disk type="network" device="cdrom">
Jan 20 15:22:02 compute-1 nova_compute[225855]:       <driver type="raw" cache="none"/>
Jan 20 15:22:02 compute-1 nova_compute[225855]:       <source protocol="rbd" name="vms/65aa2157-f058-4e5c-b448-64cf956310ba_disk.config">
Jan 20 15:22:02 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 15:22:02 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 15:22:02 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 15:22:02 compute-1 nova_compute[225855]:       </source>
Jan 20 15:22:02 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 15:22:02 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 15:22:02 compute-1 nova_compute[225855]:       </auth>
Jan 20 15:22:02 compute-1 nova_compute[225855]:       <target dev="sda" bus="sata"/>
Jan 20 15:22:02 compute-1 nova_compute[225855]:     </disk>
Jan 20 15:22:02 compute-1 nova_compute[225855]:     <interface type="ethernet">
Jan 20 15:22:02 compute-1 nova_compute[225855]:       <mac address="fa:16:3e:7a:99:f8"/>
Jan 20 15:22:02 compute-1 nova_compute[225855]:       <model type="virtio"/>
Jan 20 15:22:02 compute-1 nova_compute[225855]:       <driver name="vhost" rx_queue_size="512"/>
Jan 20 15:22:02 compute-1 nova_compute[225855]:       <mtu size="1442"/>
Jan 20 15:22:02 compute-1 nova_compute[225855]:       <target dev="tap2eefbfcb-7c"/>
Jan 20 15:22:02 compute-1 nova_compute[225855]:     </interface>
Jan 20 15:22:02 compute-1 nova_compute[225855]:     <serial type="pty">
Jan 20 15:22:02 compute-1 nova_compute[225855]:       <log file="/var/lib/nova/instances/65aa2157-f058-4e5c-b448-64cf956310ba/console.log" append="off"/>
Jan 20 15:22:02 compute-1 nova_compute[225855]:     </serial>
Jan 20 15:22:02 compute-1 nova_compute[225855]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 20 15:22:02 compute-1 nova_compute[225855]:     <video>
Jan 20 15:22:02 compute-1 nova_compute[225855]:       <model type="virtio"/>
Jan 20 15:22:02 compute-1 nova_compute[225855]:     </video>
Jan 20 15:22:02 compute-1 nova_compute[225855]:     <input type="tablet" bus="usb"/>
Jan 20 15:22:02 compute-1 nova_compute[225855]:     <rng model="virtio">
Jan 20 15:22:02 compute-1 nova_compute[225855]:       <backend model="random">/dev/urandom</backend>
Jan 20 15:22:02 compute-1 nova_compute[225855]:     </rng>
Jan 20 15:22:02 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root"/>
Jan 20 15:22:02 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:22:02 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:22:02 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:22:02 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:22:02 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:22:02 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:22:02 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:22:02 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:22:02 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:22:02 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:22:02 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:22:02 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:22:02 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:22:02 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:22:02 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:22:02 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:22:02 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:22:02 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:22:02 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:22:02 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:22:02 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:22:02 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:22:02 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:22:02 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:22:02 compute-1 nova_compute[225855]:     <controller type="usb" index="0"/>
Jan 20 15:22:02 compute-1 nova_compute[225855]:     <memballoon model="virtio">
Jan 20 15:22:02 compute-1 nova_compute[225855]:       <stats period="10"/>
Jan 20 15:22:02 compute-1 nova_compute[225855]:     </memballoon>
Jan 20 15:22:02 compute-1 nova_compute[225855]:   </devices>
Jan 20 15:22:02 compute-1 nova_compute[225855]: </domain>
Jan 20 15:22:02 compute-1 nova_compute[225855]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 20 15:22:02 compute-1 nova_compute[225855]: 2026-01-20 15:22:02.995 225859 DEBUG nova.virt.libvirt.vif [None req-2dfa6b97-6b8c-4dc9-a6b3-7f0c65eddc84 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T15:21:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-890684112',display_name='tempest-TestNetworkAdvancedServerOps-server-890684112',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-890684112',id=198,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBESV7XYmzz1neUUH7k/g2EXDk6RAN24jF19myyoRv6wDjFXd5E2VXPhzcf3Q2CFmKA+oZARXh9ZLZnZRzD1iPeEGFbgLb8nt50MGrmQlAcYMGRSCqrzrniFYSfPnybQWNg==',key_name='tempest-TestNetworkAdvancedServerOps-1160843308',keypairs=<?>,launch_index=0,launched_at=2026-01-20T15:21:25Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='1ed5feeeafe7448a8efb47ab975b0ead',ramdisk_id='',reservation_id='r-n64n905g',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-TestNetworkAdvancedServerOps-175282664',owner_user_name='tempest-TestNetworkAdvancedServerOps-175282664-project-member'},tags=<?>,task_state='resize_finish',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T15:21:55Z,user_data=None,user_id='442a7a5cb8ea426a82be9762b262d171',uuid=65aa2157-f058-4e5c-b448-64cf956310ba,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2eefbfcb-7c22-4c45-bb7b-75319242796c", "address": "fa:16:3e:7a:99:f8", "network": {"id": "6eb3ab38-e480-46b8-ae2d-d286fe61de3c", "bridge": "br-int", "label": "tempest-network-smoke--1810657086", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-network-smoke--1810657086", "vif_mac": "fa:16:3e:7a:99:f8"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1ed5feeeafe7448a8efb47ab975b0ead", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2eefbfcb-7c", "ovs_interfaceid": "2eefbfcb-7c22-4c45-bb7b-75319242796c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 20 15:22:02 compute-1 nova_compute[225855]: 2026-01-20 15:22:02.996 225859 DEBUG nova.network.os_vif_util [None req-2dfa6b97-6b8c-4dc9-a6b3-7f0c65eddc84 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Converting VIF {"id": "2eefbfcb-7c22-4c45-bb7b-75319242796c", "address": "fa:16:3e:7a:99:f8", "network": {"id": "6eb3ab38-e480-46b8-ae2d-d286fe61de3c", "bridge": "br-int", "label": "tempest-network-smoke--1810657086", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-network-smoke--1810657086", "vif_mac": "fa:16:3e:7a:99:f8"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1ed5feeeafe7448a8efb47ab975b0ead", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2eefbfcb-7c", "ovs_interfaceid": "2eefbfcb-7c22-4c45-bb7b-75319242796c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 15:22:02 compute-1 nova_compute[225855]: 2026-01-20 15:22:02.996 225859 DEBUG nova.network.os_vif_util [None req-2dfa6b97-6b8c-4dc9-a6b3-7f0c65eddc84 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7a:99:f8,bridge_name='br-int',has_traffic_filtering=True,id=2eefbfcb-7c22-4c45-bb7b-75319242796c,network=Network(6eb3ab38-e480-46b8-ae2d-d286fe61de3c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2eefbfcb-7c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 15:22:02 compute-1 nova_compute[225855]: 2026-01-20 15:22:02.997 225859 DEBUG os_vif [None req-2dfa6b97-6b8c-4dc9-a6b3-7f0c65eddc84 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7a:99:f8,bridge_name='br-int',has_traffic_filtering=True,id=2eefbfcb-7c22-4c45-bb7b-75319242796c,network=Network(6eb3ab38-e480-46b8-ae2d-d286fe61de3c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2eefbfcb-7c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 20 15:22:02 compute-1 nova_compute[225855]: 2026-01-20 15:22:02.997 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:22:02 compute-1 nova_compute[225855]: 2026-01-20 15:22:02.998 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:22:02 compute-1 nova_compute[225855]: 2026-01-20 15:22:02.998 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 15:22:03 compute-1 nova_compute[225855]: 2026-01-20 15:22:03.001 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:22:03 compute-1 nova_compute[225855]: 2026-01-20 15:22:03.001 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2eefbfcb-7c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:22:03 compute-1 nova_compute[225855]: 2026-01-20 15:22:03.002 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2eefbfcb-7c, col_values=(('external_ids', {'iface-id': '2eefbfcb-7c22-4c45-bb7b-75319242796c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:7a:99:f8', 'vm-uuid': '65aa2157-f058-4e5c-b448-64cf956310ba'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:22:03 compute-1 nova_compute[225855]: 2026-01-20 15:22:03.003 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:22:03 compute-1 NetworkManager[49104]: <info>  [1768922523.0048] manager: (tap2eefbfcb-7c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/368)
Jan 20 15:22:03 compute-1 nova_compute[225855]: 2026-01-20 15:22:03.008 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 15:22:03 compute-1 nova_compute[225855]: 2026-01-20 15:22:03.010 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:22:03 compute-1 nova_compute[225855]: 2026-01-20 15:22:03.011 225859 INFO os_vif [None req-2dfa6b97-6b8c-4dc9-a6b3-7f0c65eddc84 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7a:99:f8,bridge_name='br-int',has_traffic_filtering=True,id=2eefbfcb-7c22-4c45-bb7b-75319242796c,network=Network(6eb3ab38-e480-46b8-ae2d-d286fe61de3c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2eefbfcb-7c')
Jan 20 15:22:03 compute-1 nova_compute[225855]: 2026-01-20 15:22:03.067 225859 DEBUG nova.virt.libvirt.driver [None req-2dfa6b97-6b8c-4dc9-a6b3-7f0c65eddc84 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 15:22:03 compute-1 nova_compute[225855]: 2026-01-20 15:22:03.068 225859 DEBUG nova.virt.libvirt.driver [None req-2dfa6b97-6b8c-4dc9-a6b3-7f0c65eddc84 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 15:22:03 compute-1 nova_compute[225855]: 2026-01-20 15:22:03.068 225859 DEBUG nova.virt.libvirt.driver [None req-2dfa6b97-6b8c-4dc9-a6b3-7f0c65eddc84 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] No VIF found with MAC fa:16:3e:7a:99:f8, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 20 15:22:03 compute-1 nova_compute[225855]: 2026-01-20 15:22:03.069 225859 INFO nova.virt.libvirt.driver [None req-2dfa6b97-6b8c-4dc9-a6b3-7f0c65eddc84 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 65aa2157-f058-4e5c-b448-64cf956310ba] Using config drive
Jan 20 15:22:03 compute-1 kernel: tap2eefbfcb-7c: entered promiscuous mode
Jan 20 15:22:03 compute-1 nova_compute[225855]: 2026-01-20 15:22:03.160 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:22:03 compute-1 NetworkManager[49104]: <info>  [1768922523.1617] manager: (tap2eefbfcb-7c): new Tun device (/org/freedesktop/NetworkManager/Devices/369)
Jan 20 15:22:03 compute-1 ovn_controller[130490]: 2026-01-20T15:22:03Z|00883|binding|INFO|Claiming lport 2eefbfcb-7c22-4c45-bb7b-75319242796c for this chassis.
Jan 20 15:22:03 compute-1 ovn_controller[130490]: 2026-01-20T15:22:03Z|00884|binding|INFO|2eefbfcb-7c22-4c45-bb7b-75319242796c: Claiming fa:16:3e:7a:99:f8 10.100.0.4
Jan 20 15:22:03 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:22:03.170 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7a:99:f8 10.100.0.4'], port_security=['fa:16:3e:7a:99:f8 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '65aa2157-f058-4e5c-b448-64cf956310ba', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6eb3ab38-e480-46b8-ae2d-d286fe61de3c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1ed5feeeafe7448a8efb47ab975b0ead', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'a9217303-0a2c-4a19-a65b-396cb455c1f3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.219'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=93c7ac88-5c28-4609-8d16-8949ae99e457, chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], logical_port=2eefbfcb-7c22-4c45-bb7b-75319242796c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:22:03 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:22:03.171 140354 INFO neutron.agent.ovn.metadata.agent [-] Port 2eefbfcb-7c22-4c45-bb7b-75319242796c in datapath 6eb3ab38-e480-46b8-ae2d-d286fe61de3c bound to our chassis
Jan 20 15:22:03 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:22:03.172 140354 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6eb3ab38-e480-46b8-ae2d-d286fe61de3c
Jan 20 15:22:03 compute-1 ovn_controller[130490]: 2026-01-20T15:22:03Z|00885|binding|INFO|Setting lport 2eefbfcb-7c22-4c45-bb7b-75319242796c ovn-installed in OVS
Jan 20 15:22:03 compute-1 ovn_controller[130490]: 2026-01-20T15:22:03Z|00886|binding|INFO|Setting lport 2eefbfcb-7c22-4c45-bb7b-75319242796c up in Southbound
Jan 20 15:22:03 compute-1 nova_compute[225855]: 2026-01-20 15:22:03.180 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:22:03 compute-1 nova_compute[225855]: 2026-01-20 15:22:03.184 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:22:03 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:22:03.184 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[ab4db45d-49c6-4f88-889c-af6de445e85c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:22:03 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:22:03.185 140354 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap6eb3ab38-e1 in ovnmeta-6eb3ab38-e480-46b8-ae2d-d286fe61de3c namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 20 15:22:03 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:22:03.187 229707 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap6eb3ab38-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 20 15:22:03 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:22:03.187 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[c19228ae-1cd3-4fa9-86cd-bffeeae55b4b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:22:03 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:22:03.189 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[9fd1bf0b-c912-4ee1-9d46-8a9388981a26]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:22:03 compute-1 systemd-machined[194361]: New machine qemu-104-instance-000000c6.
Jan 20 15:22:03 compute-1 systemd-udevd[310402]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 15:22:03 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:22:03.201 140466 DEBUG oslo.privsep.daemon [-] privsep: reply[72659f65-4d5e-4940-92fb-d71b3e4ee0a9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:22:03 compute-1 systemd[1]: Started Virtual Machine qemu-104-instance-000000c6.
Jan 20 15:22:03 compute-1 NetworkManager[49104]: <info>  [1768922523.2079] device (tap2eefbfcb-7c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 20 15:22:03 compute-1 NetworkManager[49104]: <info>  [1768922523.2085] device (tap2eefbfcb-7c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 20 15:22:03 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:22:03.216 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[3f3d24cb-28fb-44e0-9772-01f11bc9581e]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:22:03 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:22:03.246 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[ced301ae-070e-4cae-bcd9-10b855f3de33]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:22:03 compute-1 NetworkManager[49104]: <info>  [1768922523.2517] manager: (tap6eb3ab38-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/370)
Jan 20 15:22:03 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:22:03.251 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[b06ed79d-700c-4ccc-ab94-799a34b01705]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:22:03 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:22:03.285 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[cdd049be-4d21-40a7-a8df-c614f27d6a38]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:22:03 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:22:03.288 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[abd9236a-7e8b-4fe1-9bde-af467dc6b3c7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:22:03 compute-1 nova_compute[225855]: 2026-01-20 15:22:03.300 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:22:03 compute-1 NetworkManager[49104]: <info>  [1768922523.3104] device (tap6eb3ab38-e0): carrier: link connected
Jan 20 15:22:03 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:22:03.316 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[4d01f879-28c5-4fad-8ba3-4a87c80cd384]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:22:03 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:22:03.334 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[dab38f46-c3df-4561-ac8d-6da6cde3e169]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6eb3ab38-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:de:42:8f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 251], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 756223, 'reachable_time': 15392, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 310434, 'error': None, 'target': 'ovnmeta-6eb3ab38-e480-46b8-ae2d-d286fe61de3c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:22:03 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:22:03.351 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[a4018002-b985-4747-af29-cb29ddc078db]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fede:428f'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 756223, 'tstamp': 756223}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 310435, 'error': None, 'target': 'ovnmeta-6eb3ab38-e480-46b8-ae2d-d286fe61de3c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:22:03 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:22:03.372 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[f74302e1-f2ad-47bf-af40-44e4330e6b42]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6eb3ab38-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:de:42:8f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 251], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 756223, 'reachable_time': 15392, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 152, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 152, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 310436, 'error': None, 'target': 'ovnmeta-6eb3ab38-e480-46b8-ae2d-d286fe61de3c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:22:03 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:22:03.403 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[2a6e0e21-0b59-44cd-b26e-83f1d4e9827e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:22:03 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:22:03.457 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[e5bb1b0e-4e56-4d61-81a6-c07dbbb5a004]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:22:03 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:22:03.459 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6eb3ab38-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:22:03 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:22:03.459 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 15:22:03 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:22:03.459 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6eb3ab38-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:22:03 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:22:03 compute-1 kernel: tap6eb3ab38-e0: entered promiscuous mode
Jan 20 15:22:03 compute-1 nova_compute[225855]: 2026-01-20 15:22:03.461 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:22:03 compute-1 NetworkManager[49104]: <info>  [1768922523.4617] manager: (tap6eb3ab38-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/371)
Jan 20 15:22:03 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:22:03.465 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6eb3ab38-e0, col_values=(('external_ids', {'iface-id': 'f6896e14-17f7-4c25-9eea-77cd7f8fe02c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:22:03 compute-1 nova_compute[225855]: 2026-01-20 15:22:03.466 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:22:03 compute-1 ovn_controller[130490]: 2026-01-20T15:22:03Z|00887|binding|INFO|Releasing lport f6896e14-17f7-4c25-9eea-77cd7f8fe02c from this chassis (sb_readonly=0)
Jan 20 15:22:03 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:22:03.468 140354 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/6eb3ab38-e480-46b8-ae2d-d286fe61de3c.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/6eb3ab38-e480-46b8-ae2d-d286fe61de3c.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 20 15:22:03 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:22:03.479 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[4af1baba-b958-462b-8ea5-dc64b5fbc000]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:22:03 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:22:03.480 140354 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 20 15:22:03 compute-1 ovn_metadata_agent[140349]: global
Jan 20 15:22:03 compute-1 ovn_metadata_agent[140349]:     log         /dev/log local0 debug
Jan 20 15:22:03 compute-1 ovn_metadata_agent[140349]:     log-tag     haproxy-metadata-proxy-6eb3ab38-e480-46b8-ae2d-d286fe61de3c
Jan 20 15:22:03 compute-1 ovn_metadata_agent[140349]:     user        root
Jan 20 15:22:03 compute-1 ovn_metadata_agent[140349]:     group       root
Jan 20 15:22:03 compute-1 ovn_metadata_agent[140349]:     maxconn     1024
Jan 20 15:22:03 compute-1 ovn_metadata_agent[140349]:     pidfile     /var/lib/neutron/external/pids/6eb3ab38-e480-46b8-ae2d-d286fe61de3c.pid.haproxy
Jan 20 15:22:03 compute-1 ovn_metadata_agent[140349]:     daemon
Jan 20 15:22:03 compute-1 ovn_metadata_agent[140349]: 
Jan 20 15:22:03 compute-1 ovn_metadata_agent[140349]: defaults
Jan 20 15:22:03 compute-1 ovn_metadata_agent[140349]:     log global
Jan 20 15:22:03 compute-1 ovn_metadata_agent[140349]:     mode http
Jan 20 15:22:03 compute-1 ovn_metadata_agent[140349]:     option httplog
Jan 20 15:22:03 compute-1 ovn_metadata_agent[140349]:     option dontlognull
Jan 20 15:22:03 compute-1 ovn_metadata_agent[140349]:     option http-server-close
Jan 20 15:22:03 compute-1 ovn_metadata_agent[140349]:     option forwardfor
Jan 20 15:22:03 compute-1 ovn_metadata_agent[140349]:     retries                 3
Jan 20 15:22:03 compute-1 ovn_metadata_agent[140349]:     timeout http-request    30s
Jan 20 15:22:03 compute-1 ovn_metadata_agent[140349]:     timeout connect         30s
Jan 20 15:22:03 compute-1 ovn_metadata_agent[140349]:     timeout client          32s
Jan 20 15:22:03 compute-1 ovn_metadata_agent[140349]:     timeout server          32s
Jan 20 15:22:03 compute-1 ovn_metadata_agent[140349]:     timeout http-keep-alive 30s
Jan 20 15:22:03 compute-1 ovn_metadata_agent[140349]: 
Jan 20 15:22:03 compute-1 ovn_metadata_agent[140349]: 
Jan 20 15:22:03 compute-1 ovn_metadata_agent[140349]: listen listener
Jan 20 15:22:03 compute-1 ovn_metadata_agent[140349]:     bind 169.254.169.254:80
Jan 20 15:22:03 compute-1 ovn_metadata_agent[140349]:     server metadata /var/lib/neutron/metadata_proxy
Jan 20 15:22:03 compute-1 ovn_metadata_agent[140349]:     http-request add-header X-OVN-Network-ID 6eb3ab38-e480-46b8-ae2d-d286fe61de3c
Jan 20 15:22:03 compute-1 ovn_metadata_agent[140349]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 20 15:22:03 compute-1 nova_compute[225855]: 2026-01-20 15:22:03.480 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:22:03 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:22:03.480 140354 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-6eb3ab38-e480-46b8-ae2d-d286fe61de3c', 'env', 'PROCESS_TAG=haproxy-6eb3ab38-e480-46b8-ae2d-d286fe61de3c', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/6eb3ab38-e480-46b8-ae2d-d286fe61de3c.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 20 15:22:03 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:22:03 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:22:03 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:22:03.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:22:03 compute-1 nova_compute[225855]: 2026-01-20 15:22:03.779 225859 DEBUG nova.compute.manager [req-83a6fbbf-3571-4f8e-88b6-d8bc4a976c99 req-b7ac87d6-24d0-4a16-b5ee-50c2882cb493 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 65aa2157-f058-4e5c-b448-64cf956310ba] Received event network-vif-plugged-2eefbfcb-7c22-4c45-bb7b-75319242796c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:22:03 compute-1 nova_compute[225855]: 2026-01-20 15:22:03.780 225859 DEBUG oslo_concurrency.lockutils [req-83a6fbbf-3571-4f8e-88b6-d8bc4a976c99 req-b7ac87d6-24d0-4a16-b5ee-50c2882cb493 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "65aa2157-f058-4e5c-b448-64cf956310ba-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:22:03 compute-1 nova_compute[225855]: 2026-01-20 15:22:03.780 225859 DEBUG oslo_concurrency.lockutils [req-83a6fbbf-3571-4f8e-88b6-d8bc4a976c99 req-b7ac87d6-24d0-4a16-b5ee-50c2882cb493 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "65aa2157-f058-4e5c-b448-64cf956310ba-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:22:03 compute-1 nova_compute[225855]: 2026-01-20 15:22:03.780 225859 DEBUG oslo_concurrency.lockutils [req-83a6fbbf-3571-4f8e-88b6-d8bc4a976c99 req-b7ac87d6-24d0-4a16-b5ee-50c2882cb493 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "65aa2157-f058-4e5c-b448-64cf956310ba-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:22:03 compute-1 nova_compute[225855]: 2026-01-20 15:22:03.780 225859 DEBUG nova.compute.manager [req-83a6fbbf-3571-4f8e-88b6-d8bc4a976c99 req-b7ac87d6-24d0-4a16-b5ee-50c2882cb493 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 65aa2157-f058-4e5c-b448-64cf956310ba] No waiting events found dispatching network-vif-plugged-2eefbfcb-7c22-4c45-bb7b-75319242796c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:22:03 compute-1 nova_compute[225855]: 2026-01-20 15:22:03.781 225859 WARNING nova.compute.manager [req-83a6fbbf-3571-4f8e-88b6-d8bc4a976c99 req-b7ac87d6-24d0-4a16-b5ee-50c2882cb493 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 65aa2157-f058-4e5c-b448-64cf956310ba] Received unexpected event network-vif-plugged-2eefbfcb-7c22-4c45-bb7b-75319242796c for instance with vm_state active and task_state resize_finish.
Jan 20 15:22:03 compute-1 podman[310468]: 2026-01-20 15:22:03.802645608 +0000 UTC m=+0.024427660 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 20 15:22:04 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/1822320099' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:22:04 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/1944454760' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:22:04 compute-1 podman[310468]: 2026-01-20 15:22:04.043832775 +0000 UTC m=+0.265614827 container create c7fab9924b227159e9767e068e64a7972c66e5c4ddb35306b896a5152ff90354 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6eb3ab38-e480-46b8-ae2d-d286fe61de3c, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 20 15:22:04 compute-1 systemd[1]: Started libpod-conmon-c7fab9924b227159e9767e068e64a7972c66e5c4ddb35306b896a5152ff90354.scope.
Jan 20 15:22:04 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:22:04 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:22:04 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:22:04.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:22:04 compute-1 systemd[1]: Started libcrun container.
Jan 20 15:22:04 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9a8ddd055a857b793a3cbb612eba64ac8fa8b721d637f2000dc6e7f650142a8/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 20 15:22:04 compute-1 podman[310468]: 2026-01-20 15:22:04.170626013 +0000 UTC m=+0.392408065 container init c7fab9924b227159e9767e068e64a7972c66e5c4ddb35306b896a5152ff90354 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6eb3ab38-e480-46b8-ae2d-d286fe61de3c, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 20 15:22:04 compute-1 podman[310468]: 2026-01-20 15:22:04.177059464 +0000 UTC m=+0.398841506 container start c7fab9924b227159e9767e068e64a7972c66e5c4ddb35306b896a5152ff90354 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6eb3ab38-e480-46b8-ae2d-d286fe61de3c, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team)
Jan 20 15:22:04 compute-1 neutron-haproxy-ovnmeta-6eb3ab38-e480-46b8-ae2d-d286fe61de3c[310482]: [NOTICE]   (310511) : New worker (310522) forked
Jan 20 15:22:04 compute-1 neutron-haproxy-ovnmeta-6eb3ab38-e480-46b8-ae2d-d286fe61de3c[310482]: [NOTICE]   (310511) : Loading success.
Jan 20 15:22:04 compute-1 sudo[310533]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:22:04 compute-1 sudo[310533]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:22:04 compute-1 sudo[310533]: pam_unix(sudo:session): session closed for user root
Jan 20 15:22:04 compute-1 sudo[310559]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:22:04 compute-1 sudo[310559]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:22:04 compute-1 sudo[310559]: pam_unix(sudo:session): session closed for user root
Jan 20 15:22:04 compute-1 nova_compute[225855]: 2026-01-20 15:22:04.340 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:22:04 compute-1 nova_compute[225855]: 2026-01-20 15:22:04.341 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 15:22:04 compute-1 nova_compute[225855]: 2026-01-20 15:22:04.341 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 20 15:22:04 compute-1 nova_compute[225855]: 2026-01-20 15:22:04.360 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768922524.3597896, 65aa2157-f058-4e5c-b448-64cf956310ba => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:22:04 compute-1 nova_compute[225855]: 2026-01-20 15:22:04.360 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 65aa2157-f058-4e5c-b448-64cf956310ba] VM Resumed (Lifecycle Event)
Jan 20 15:22:04 compute-1 nova_compute[225855]: 2026-01-20 15:22:04.362 225859 DEBUG nova.compute.manager [None req-2dfa6b97-6b8c-4dc9-a6b3-7f0c65eddc84 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 65aa2157-f058-4e5c-b448-64cf956310ba] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 20 15:22:04 compute-1 nova_compute[225855]: 2026-01-20 15:22:04.365 225859 INFO nova.virt.libvirt.driver [-] [instance: 65aa2157-f058-4e5c-b448-64cf956310ba] Instance running successfully.
Jan 20 15:22:04 compute-1 virtqemud[225396]: argument unsupported: QEMU guest agent is not configured
Jan 20 15:22:04 compute-1 nova_compute[225855]: 2026-01-20 15:22:04.368 225859 DEBUG nova.virt.libvirt.guest [None req-2dfa6b97-6b8c-4dc9-a6b3-7f0c65eddc84 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 65aa2157-f058-4e5c-b448-64cf956310ba] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200
Jan 20 15:22:04 compute-1 nova_compute[225855]: 2026-01-20 15:22:04.368 225859 DEBUG nova.virt.libvirt.driver [None req-2dfa6b97-6b8c-4dc9-a6b3-7f0c65eddc84 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 65aa2157-f058-4e5c-b448-64cf956310ba] finish_migration finished successfully. finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11793
Jan 20 15:22:04 compute-1 nova_compute[225855]: 2026-01-20 15:22:04.375 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "refresh_cache-65aa2157-f058-4e5c-b448-64cf956310ba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:22:04 compute-1 nova_compute[225855]: 2026-01-20 15:22:04.375 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquired lock "refresh_cache-65aa2157-f058-4e5c-b448-64cf956310ba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:22:04 compute-1 nova_compute[225855]: 2026-01-20 15:22:04.375 225859 DEBUG nova.network.neutron [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] [instance: 65aa2157-f058-4e5c-b448-64cf956310ba] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 20 15:22:04 compute-1 nova_compute[225855]: 2026-01-20 15:22:04.375 225859 DEBUG nova.objects.instance [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 65aa2157-f058-4e5c-b448-64cf956310ba obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:22:04 compute-1 nova_compute[225855]: 2026-01-20 15:22:04.393 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 65aa2157-f058-4e5c-b448-64cf956310ba] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:22:04 compute-1 nova_compute[225855]: 2026-01-20 15:22:04.399 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 65aa2157-f058-4e5c-b448-64cf956310ba] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 15:22:04 compute-1 nova_compute[225855]: 2026-01-20 15:22:04.453 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 65aa2157-f058-4e5c-b448-64cf956310ba] During sync_power_state the instance has a pending task (resize_finish). Skip.
Jan 20 15:22:04 compute-1 nova_compute[225855]: 2026-01-20 15:22:04.453 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768922524.361839, 65aa2157-f058-4e5c-b448-64cf956310ba => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:22:04 compute-1 nova_compute[225855]: 2026-01-20 15:22:04.454 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 65aa2157-f058-4e5c-b448-64cf956310ba] VM Started (Lifecycle Event)
Jan 20 15:22:04 compute-1 nova_compute[225855]: 2026-01-20 15:22:04.476 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 65aa2157-f058-4e5c-b448-64cf956310ba] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:22:04 compute-1 nova_compute[225855]: 2026-01-20 15:22:04.480 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 65aa2157-f058-4e5c-b448-64cf956310ba] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 15:22:05 compute-1 ceph-mon[81775]: pgmap v2995: 321 pgs: 321 active+clean; 279 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 12 KiB/s rd, 15 KiB/s wr, 16 op/s
Jan 20 15:22:05 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:22:05 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:22:05 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:22:05.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:22:06 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:22:06 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:22:06 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:22:06.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:22:06 compute-1 nova_compute[225855]: 2026-01-20 15:22:06.142 225859 DEBUG nova.compute.manager [req-f7e34805-2e49-4fd7-98e5-6b9b544da6bd req-9a930593-e911-4620-9244-ae9145ec9e0f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 65aa2157-f058-4e5c-b448-64cf956310ba] Received event network-vif-plugged-2eefbfcb-7c22-4c45-bb7b-75319242796c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:22:06 compute-1 nova_compute[225855]: 2026-01-20 15:22:06.143 225859 DEBUG oslo_concurrency.lockutils [req-f7e34805-2e49-4fd7-98e5-6b9b544da6bd req-9a930593-e911-4620-9244-ae9145ec9e0f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "65aa2157-f058-4e5c-b448-64cf956310ba-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:22:06 compute-1 nova_compute[225855]: 2026-01-20 15:22:06.143 225859 DEBUG oslo_concurrency.lockutils [req-f7e34805-2e49-4fd7-98e5-6b9b544da6bd req-9a930593-e911-4620-9244-ae9145ec9e0f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "65aa2157-f058-4e5c-b448-64cf956310ba-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:22:06 compute-1 nova_compute[225855]: 2026-01-20 15:22:06.143 225859 DEBUG oslo_concurrency.lockutils [req-f7e34805-2e49-4fd7-98e5-6b9b544da6bd req-9a930593-e911-4620-9244-ae9145ec9e0f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "65aa2157-f058-4e5c-b448-64cf956310ba-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:22:06 compute-1 nova_compute[225855]: 2026-01-20 15:22:06.143 225859 DEBUG nova.compute.manager [req-f7e34805-2e49-4fd7-98e5-6b9b544da6bd req-9a930593-e911-4620-9244-ae9145ec9e0f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 65aa2157-f058-4e5c-b448-64cf956310ba] No waiting events found dispatching network-vif-plugged-2eefbfcb-7c22-4c45-bb7b-75319242796c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:22:06 compute-1 nova_compute[225855]: 2026-01-20 15:22:06.144 225859 WARNING nova.compute.manager [req-f7e34805-2e49-4fd7-98e5-6b9b544da6bd req-9a930593-e911-4620-9244-ae9145ec9e0f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 65aa2157-f058-4e5c-b448-64cf956310ba] Received unexpected event network-vif-plugged-2eefbfcb-7c22-4c45-bb7b-75319242796c for instance with vm_state resized and task_state None.
Jan 20 15:22:06 compute-1 podman[310590]: 2026-01-20 15:22:06.218790961 +0000 UTC m=+0.058604094 container health_status 533cd8e0c3b8b7f910f15cef49e7f879ab46c12ce646a35628c15f13725eb500 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Jan 20 15:22:06 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/4076641553' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:22:07 compute-1 nova_compute[225855]: 2026-01-20 15:22:07.363 225859 DEBUG nova.network.neutron [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] [instance: 65aa2157-f058-4e5c-b448-64cf956310ba] Updating instance_info_cache with network_info: [{"id": "2eefbfcb-7c22-4c45-bb7b-75319242796c", "address": "fa:16:3e:7a:99:f8", "network": {"id": "6eb3ab38-e480-46b8-ae2d-d286fe61de3c", "bridge": "br-int", "label": "tempest-network-smoke--1810657086", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1ed5feeeafe7448a8efb47ab975b0ead", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2eefbfcb-7c", "ovs_interfaceid": "2eefbfcb-7c22-4c45-bb7b-75319242796c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:22:07 compute-1 nova_compute[225855]: 2026-01-20 15:22:07.507 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Releasing lock "refresh_cache-65aa2157-f058-4e5c-b448-64cf956310ba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:22:07 compute-1 nova_compute[225855]: 2026-01-20 15:22:07.507 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] [instance: 65aa2157-f058-4e5c-b448-64cf956310ba] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 20 15:22:07 compute-1 nova_compute[225855]: 2026-01-20 15:22:07.508 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:22:07 compute-1 nova_compute[225855]: 2026-01-20 15:22:07.508 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:22:07 compute-1 nova_compute[225855]: 2026-01-20 15:22:07.508 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:22:07 compute-1 nova_compute[225855]: 2026-01-20 15:22:07.600 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:22:07 compute-1 nova_compute[225855]: 2026-01-20 15:22:07.601 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:22:07 compute-1 nova_compute[225855]: 2026-01-20 15:22:07.602 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:22:07 compute-1 nova_compute[225855]: 2026-01-20 15:22:07.602 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 15:22:07 compute-1 nova_compute[225855]: 2026-01-20 15:22:07.603 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:22:07 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:22:07 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:22:07 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:22:07.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:22:07 compute-1 ceph-mon[81775]: pgmap v2996: 321 pgs: 321 active+clean; 279 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 749 KiB/s rd, 2.0 KiB/s wr, 51 op/s
Jan 20 15:22:08 compute-1 nova_compute[225855]: 2026-01-20 15:22:08.004 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:22:08 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:22:08 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3733158156' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:22:08 compute-1 nova_compute[225855]: 2026-01-20 15:22:08.051 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:22:08 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:22:08 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:22:08 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:22:08.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:22:08 compute-1 nova_compute[225855]: 2026-01-20 15:22:08.275 225859 DEBUG nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] skipping disk for instance-000000c6 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 15:22:08 compute-1 nova_compute[225855]: 2026-01-20 15:22:08.276 225859 DEBUG nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] skipping disk for instance-000000c6 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 15:22:08 compute-1 nova_compute[225855]: 2026-01-20 15:22:08.302 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:22:08 compute-1 nova_compute[225855]: 2026-01-20 15:22:08.435 225859 WARNING nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 15:22:08 compute-1 nova_compute[225855]: 2026-01-20 15:22:08.436 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4104MB free_disk=20.897098541259766GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 15:22:08 compute-1 nova_compute[225855]: 2026-01-20 15:22:08.436 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:22:08 compute-1 nova_compute[225855]: 2026-01-20 15:22:08.437 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:22:08 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:22:08 compute-1 nova_compute[225855]: 2026-01-20 15:22:08.535 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Applying migration context for instance 65aa2157-f058-4e5c-b448-64cf956310ba as it has an incoming, in-progress migration 9b9a1c84-0aa6-44a5-9e94-a94a60dd6646. Migration status is confirming _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:950
Jan 20 15:22:08 compute-1 nova_compute[225855]: 2026-01-20 15:22:08.536 225859 INFO nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] [instance: 65aa2157-f058-4e5c-b448-64cf956310ba] Updating resource usage from migration 9b9a1c84-0aa6-44a5-9e94-a94a60dd6646
Jan 20 15:22:08 compute-1 nova_compute[225855]: 2026-01-20 15:22:08.573 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Instance 65aa2157-f058-4e5c-b448-64cf956310ba actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 192, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 20 15:22:08 compute-1 nova_compute[225855]: 2026-01-20 15:22:08.573 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 15:22:08 compute-1 nova_compute[225855]: 2026-01-20 15:22:08.573 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=704MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 15:22:08 compute-1 nova_compute[225855]: 2026-01-20 15:22:08.609 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:22:08 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/3036926870' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:22:08 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/3733158156' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:22:08 compute-1 ceph-mon[81775]: pgmap v2997: 321 pgs: 321 active+clean; 279 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 749 KiB/s rd, 2.0 KiB/s wr, 51 op/s
Jan 20 15:22:09 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:22:09 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4092149460' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:22:09 compute-1 nova_compute[225855]: 2026-01-20 15:22:09.030 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.421s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:22:09 compute-1 nova_compute[225855]: 2026-01-20 15:22:09.035 225859 DEBUG nova.compute.provider_tree [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:22:09 compute-1 nova_compute[225855]: 2026-01-20 15:22:09.090 225859 DEBUG nova.scheduler.client.report [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:22:09 compute-1 nova_compute[225855]: 2026-01-20 15:22:09.114 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 15:22:09 compute-1 nova_compute[225855]: 2026-01-20 15:22:09.115 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.678s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:22:09 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:22:09 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:22:09 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:22:09.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:22:09 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/4092149460' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:22:09 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/797713413' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:22:09 compute-1 nova_compute[225855]: 2026-01-20 15:22:09.946 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:22:09 compute-1 nova_compute[225855]: 2026-01-20 15:22:09.947 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:22:10 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:22:10 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:22:10 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:22:10.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:22:10 compute-1 nova_compute[225855]: 2026-01-20 15:22:10.340 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:22:10 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:22:10.593 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=68, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '12:bb:42', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '06:92:24:f7:15:56'}, ipsec=False) old=SB_Global(nb_cfg=67) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:22:10 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:22:10.594 140354 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 20 15:22:10 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:22:10.595 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5ffd4ac3-9266-4927-98ad-20a17782c725, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '68'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:22:10 compute-1 nova_compute[225855]: 2026-01-20 15:22:10.596 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:22:10 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/2090225516' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:22:10 compute-1 ceph-mon[81775]: pgmap v2998: 321 pgs: 321 active+clean; 208 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.8 KiB/s wr, 119 op/s
Jan 20 15:22:10 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e417 e417: 3 total, 3 up, 3 in
Jan 20 15:22:11 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:22:11 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:22:11 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:22:11.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:22:11 compute-1 ceph-mon[81775]: osdmap e417: 3 total, 3 up, 3 in
Jan 20 15:22:11 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/971058922' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:22:11 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/891151784' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:22:12 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:22:12 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:22:12 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:22:12.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:22:12 compute-1 ceph-mon[81775]: pgmap v3000: 321 pgs: 321 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 4.2 KiB/s wr, 142 op/s
Jan 20 15:22:13 compute-1 nova_compute[225855]: 2026-01-20 15:22:13.007 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:22:13 compute-1 nova_compute[225855]: 2026-01-20 15:22:13.306 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:22:13 compute-1 nova_compute[225855]: 2026-01-20 15:22:13.335 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:22:13 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 20 15:22:13 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1403667597' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:22:13 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 20 15:22:13 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1403667597' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:22:13 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e417 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:22:13 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:22:13 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:22:13 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:22:13.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:22:13 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/1403667597' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:22:13 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/1403667597' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:22:13 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/3059706914' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:22:13 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/3059706914' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:22:14 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:22:14 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:22:14 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:22:14.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:22:15 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/2030652367' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:22:15 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/2030652367' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:22:15 compute-1 ceph-mon[81775]: pgmap v3001: 321 pgs: 321 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 4.3 KiB/s wr, 137 op/s
Jan 20 15:22:15 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:22:15 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:22:15 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:22:15.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:22:16 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:22:16 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:22:16 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:22:16.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:22:16 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 20 15:22:16 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/170002193' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:22:16 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 20 15:22:16 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/170002193' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:22:16 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/170002193' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:22:16 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/170002193' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:22:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:22:16.443 140354 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:22:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:22:16.444 140354 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:22:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:22:16.444 140354 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:22:17 compute-1 ceph-mon[81775]: pgmap v3002: 321 pgs: 321 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 4.5 KiB/s wr, 141 op/s
Jan 20 15:22:17 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:22:17 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:22:17 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:22:17.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:22:18 compute-1 ovn_controller[130490]: 2026-01-20T15:22:18Z|00106|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:7a:99:f8 10.100.0.4
Jan 20 15:22:18 compute-1 nova_compute[225855]: 2026-01-20 15:22:18.009 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:22:18 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:22:18 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:22:18 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:22:18.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:22:18 compute-1 nova_compute[225855]: 2026-01-20 15:22:18.305 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:22:18 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e417 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:22:18 compute-1 ceph-mon[81775]: pgmap v3003: 321 pgs: 321 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 4.5 KiB/s wr, 141 op/s
Jan 20 15:22:19 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:22:19 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:22:19 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:22:19.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:22:20 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:22:20 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:22:20 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:22:20.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:22:20 compute-1 ceph-mon[81775]: pgmap v3004: 321 pgs: 321 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 531 KiB/s rd, 18 KiB/s wr, 122 op/s
Jan 20 15:22:21 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:22:21 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:22:21 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:22:21.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:22:22 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:22:22 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:22:22 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:22:22.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:22:22 compute-1 ovn_controller[130490]: 2026-01-20T15:22:22Z|00888|binding|INFO|Releasing lport f6896e14-17f7-4c25-9eea-77cd7f8fe02c from this chassis (sb_readonly=0)
Jan 20 15:22:22 compute-1 nova_compute[225855]: 2026-01-20 15:22:22.434 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:22:22 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 e418: 3 total, 3 up, 3 in
Jan 20 15:22:23 compute-1 nova_compute[225855]: 2026-01-20 15:22:23.012 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:22:23 compute-1 nova_compute[225855]: 2026-01-20 15:22:23.307 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:22:23 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:22:23 compute-1 ceph-mon[81775]: osdmap e418: 3 total, 3 up, 3 in
Jan 20 15:22:23 compute-1 ceph-mon[81775]: pgmap v3006: 321 pgs: 321 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 679 KiB/s rd, 16 KiB/s wr, 112 op/s
Jan 20 15:22:23 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:22:23 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:22:23 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:22:23.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:22:24 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:22:24 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:22:24 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:22:24.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:22:24 compute-1 nova_compute[225855]: 2026-01-20 15:22:24.282 225859 INFO nova.compute.manager [None req-502f0124-d88b-4c6c-a015-c2d226e8e549 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 65aa2157-f058-4e5c-b448-64cf956310ba] Get console output
Jan 20 15:22:24 compute-1 nova_compute[225855]: 2026-01-20 15:22:24.288 263775 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Jan 20 15:22:24 compute-1 sudo[310663]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:22:24 compute-1 sudo[310663]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:22:24 compute-1 sudo[310663]: pam_unix(sudo:session): session closed for user root
Jan 20 15:22:24 compute-1 sudo[310688]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:22:24 compute-1 sudo[310688]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:22:24 compute-1 sudo[310688]: pam_unix(sudo:session): session closed for user root
Jan 20 15:22:24 compute-1 ceph-mon[81775]: pgmap v3007: 321 pgs: 321 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 672 KiB/s rd, 15 KiB/s wr, 103 op/s
Jan 20 15:22:25 compute-1 nova_compute[225855]: 2026-01-20 15:22:25.606 225859 DEBUG nova.compute.manager [req-8d8c6560-d5dc-4b7f-9423-393401c984e3 req-3c2bd7cd-16a2-4fd4-bed4-6df2d05542f1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 65aa2157-f058-4e5c-b448-64cf956310ba] Received event network-changed-2eefbfcb-7c22-4c45-bb7b-75319242796c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:22:25 compute-1 nova_compute[225855]: 2026-01-20 15:22:25.607 225859 DEBUG nova.compute.manager [req-8d8c6560-d5dc-4b7f-9423-393401c984e3 req-3c2bd7cd-16a2-4fd4-bed4-6df2d05542f1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 65aa2157-f058-4e5c-b448-64cf956310ba] Refreshing instance network info cache due to event network-changed-2eefbfcb-7c22-4c45-bb7b-75319242796c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 15:22:25 compute-1 nova_compute[225855]: 2026-01-20 15:22:25.607 225859 DEBUG oslo_concurrency.lockutils [req-8d8c6560-d5dc-4b7f-9423-393401c984e3 req-3c2bd7cd-16a2-4fd4-bed4-6df2d05542f1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-65aa2157-f058-4e5c-b448-64cf956310ba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:22:25 compute-1 nova_compute[225855]: 2026-01-20 15:22:25.607 225859 DEBUG oslo_concurrency.lockutils [req-8d8c6560-d5dc-4b7f-9423-393401c984e3 req-3c2bd7cd-16a2-4fd4-bed4-6df2d05542f1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-65aa2157-f058-4e5c-b448-64cf956310ba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:22:25 compute-1 nova_compute[225855]: 2026-01-20 15:22:25.608 225859 DEBUG nova.network.neutron [req-8d8c6560-d5dc-4b7f-9423-393401c984e3 req-3c2bd7cd-16a2-4fd4-bed4-6df2d05542f1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 65aa2157-f058-4e5c-b448-64cf956310ba] Refreshing network info cache for port 2eefbfcb-7c22-4c45-bb7b-75319242796c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 15:22:25 compute-1 nova_compute[225855]: 2026-01-20 15:22:25.675 225859 DEBUG oslo_concurrency.lockutils [None req-08218b91-1d94-42a2-a507-86dc3e555f8f 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Acquiring lock "65aa2157-f058-4e5c-b448-64cf956310ba" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:22:25 compute-1 nova_compute[225855]: 2026-01-20 15:22:25.675 225859 DEBUG oslo_concurrency.lockutils [None req-08218b91-1d94-42a2-a507-86dc3e555f8f 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Lock "65aa2157-f058-4e5c-b448-64cf956310ba" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:22:25 compute-1 nova_compute[225855]: 2026-01-20 15:22:25.676 225859 DEBUG oslo_concurrency.lockutils [None req-08218b91-1d94-42a2-a507-86dc3e555f8f 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Acquiring lock "65aa2157-f058-4e5c-b448-64cf956310ba-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:22:25 compute-1 nova_compute[225855]: 2026-01-20 15:22:25.676 225859 DEBUG oslo_concurrency.lockutils [None req-08218b91-1d94-42a2-a507-86dc3e555f8f 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Lock "65aa2157-f058-4e5c-b448-64cf956310ba-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:22:25 compute-1 nova_compute[225855]: 2026-01-20 15:22:25.676 225859 DEBUG oslo_concurrency.lockutils [None req-08218b91-1d94-42a2-a507-86dc3e555f8f 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Lock "65aa2157-f058-4e5c-b448-64cf956310ba-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:22:25 compute-1 nova_compute[225855]: 2026-01-20 15:22:25.678 225859 INFO nova.compute.manager [None req-08218b91-1d94-42a2-a507-86dc3e555f8f 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 65aa2157-f058-4e5c-b448-64cf956310ba] Terminating instance
Jan 20 15:22:25 compute-1 nova_compute[225855]: 2026-01-20 15:22:25.679 225859 DEBUG nova.compute.manager [None req-08218b91-1d94-42a2-a507-86dc3e555f8f 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 65aa2157-f058-4e5c-b448-64cf956310ba] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 20 15:22:25 compute-1 kernel: tap2eefbfcb-7c (unregistering): left promiscuous mode
Jan 20 15:22:25 compute-1 NetworkManager[49104]: <info>  [1768922545.7242] device (tap2eefbfcb-7c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 20 15:22:25 compute-1 ovn_controller[130490]: 2026-01-20T15:22:25Z|00889|binding|INFO|Releasing lport 2eefbfcb-7c22-4c45-bb7b-75319242796c from this chassis (sb_readonly=0)
Jan 20 15:22:25 compute-1 ovn_controller[130490]: 2026-01-20T15:22:25Z|00890|binding|INFO|Setting lport 2eefbfcb-7c22-4c45-bb7b-75319242796c down in Southbound
Jan 20 15:22:25 compute-1 ovn_controller[130490]: 2026-01-20T15:22:25Z|00891|binding|INFO|Removing iface tap2eefbfcb-7c ovn-installed in OVS
Jan 20 15:22:25 compute-1 nova_compute[225855]: 2026-01-20 15:22:25.734 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:22:25 compute-1 nova_compute[225855]: 2026-01-20 15:22:25.736 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:22:25 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:22:25.740 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7a:99:f8 10.100.0.4'], port_security=['fa:16:3e:7a:99:f8 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '65aa2157-f058-4e5c-b448-64cf956310ba', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6eb3ab38-e480-46b8-ae2d-d286fe61de3c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1ed5feeeafe7448a8efb47ab975b0ead', 'neutron:revision_number': '8', 'neutron:security_group_ids': 'a9217303-0a2c-4a19-a65b-396cb455c1f3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=93c7ac88-5c28-4609-8d16-8949ae99e457, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], logical_port=2eefbfcb-7c22-4c45-bb7b-75319242796c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:22:25 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:22:25.742 140354 INFO neutron.agent.ovn.metadata.agent [-] Port 2eefbfcb-7c22-4c45-bb7b-75319242796c in datapath 6eb3ab38-e480-46b8-ae2d-d286fe61de3c unbound from our chassis
Jan 20 15:22:25 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:22:25.742 140354 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6eb3ab38-e480-46b8-ae2d-d286fe61de3c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 20 15:22:25 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:22:25.744 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[8d350159-204a-4b6a-a521-4caed85b4913]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:22:25 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:22:25.745 140354 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-6eb3ab38-e480-46b8-ae2d-d286fe61de3c namespace which is not needed anymore
Jan 20 15:22:25 compute-1 nova_compute[225855]: 2026-01-20 15:22:25.755 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:22:25 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:22:25 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:22:25 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:22:25.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:22:25 compute-1 systemd[1]: machine-qemu\x2d104\x2dinstance\x2d000000c6.scope: Deactivated successfully.
Jan 20 15:22:25 compute-1 systemd[1]: machine-qemu\x2d104\x2dinstance\x2d000000c6.scope: Consumed 14.694s CPU time.
Jan 20 15:22:25 compute-1 systemd-machined[194361]: Machine qemu-104-instance-000000c6 terminated.
Jan 20 15:22:25 compute-1 nova_compute[225855]: 2026-01-20 15:22:25.905 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:22:25 compute-1 neutron-haproxy-ovnmeta-6eb3ab38-e480-46b8-ae2d-d286fe61de3c[310482]: [NOTICE]   (310511) : haproxy version is 2.8.14-c23fe91
Jan 20 15:22:25 compute-1 neutron-haproxy-ovnmeta-6eb3ab38-e480-46b8-ae2d-d286fe61de3c[310482]: [NOTICE]   (310511) : path to executable is /usr/sbin/haproxy
Jan 20 15:22:25 compute-1 neutron-haproxy-ovnmeta-6eb3ab38-e480-46b8-ae2d-d286fe61de3c[310482]: [WARNING]  (310511) : Exiting Master process...
Jan 20 15:22:25 compute-1 neutron-haproxy-ovnmeta-6eb3ab38-e480-46b8-ae2d-d286fe61de3c[310482]: [ALERT]    (310511) : Current worker (310522) exited with code 143 (Terminated)
Jan 20 15:22:25 compute-1 neutron-haproxy-ovnmeta-6eb3ab38-e480-46b8-ae2d-d286fe61de3c[310482]: [WARNING]  (310511) : All workers exited. Exiting... (0)
Jan 20 15:22:25 compute-1 nova_compute[225855]: 2026-01-20 15:22:25.911 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:22:25 compute-1 systemd[1]: libpod-c7fab9924b227159e9767e068e64a7972c66e5c4ddb35306b896a5152ff90354.scope: Deactivated successfully.
Jan 20 15:22:25 compute-1 podman[310714]: 2026-01-20 15:22:25.918146927 +0000 UTC m=+0.165582709 container health_status 72be95c12e041eb2cd1a16ad70dfa1391d9f03a5f3180a614ead9459dcfcd46f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, managed_by=edpm_ansible)
Jan 20 15:22:25 compute-1 podman[310758]: 2026-01-20 15:22:25.921550594 +0000 UTC m=+0.082466275 container died c7fab9924b227159e9767e068e64a7972c66e5c4ddb35306b896a5152ff90354 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6eb3ab38-e480-46b8-ae2d-d286fe61de3c, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3)
Jan 20 15:22:25 compute-1 nova_compute[225855]: 2026-01-20 15:22:25.922 225859 INFO nova.virt.libvirt.driver [-] [instance: 65aa2157-f058-4e5c-b448-64cf956310ba] Instance destroyed successfully.
Jan 20 15:22:25 compute-1 nova_compute[225855]: 2026-01-20 15:22:25.923 225859 DEBUG nova.objects.instance [None req-08218b91-1d94-42a2-a507-86dc3e555f8f 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Lazy-loading 'resources' on Instance uuid 65aa2157-f058-4e5c-b448-64cf956310ba obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:22:25 compute-1 nova_compute[225855]: 2026-01-20 15:22:25.945 225859 DEBUG nova.virt.libvirt.vif [None req-08218b91-1d94-42a2-a507-86dc3e555f8f 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T15:21:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-890684112',display_name='tempest-TestNetworkAdvancedServerOps-server-890684112',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-890684112',id=198,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBESV7XYmzz1neUUH7k/g2EXDk6RAN24jF19myyoRv6wDjFXd5E2VXPhzcf3Q2CFmKA+oZARXh9ZLZnZRzD1iPeEGFbgLb8nt50MGrmQlAcYMGRSCqrzrniFYSfPnybQWNg==',key_name='tempest-TestNetworkAdvancedServerOps-1160843308',keypairs=<?>,launch_index=0,launched_at=2026-01-20T15:22:04Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='1ed5feeeafe7448a8efb47ab975b0ead',ramdisk_id='',reservation_id='r-n64n905g',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-175282664',owner_user_name='tempest-TestNetworkAdvancedServerOps-175282664-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T15:22:11Z,user_data=None,user_id='442a7a5cb8ea426a82be9762b262d171',uuid=65aa2157-f058-4e5c-b448-64cf956310ba,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2eefbfcb-7c22-4c45-bb7b-75319242796c", "address": "fa:16:3e:7a:99:f8", "network": {"id": "6eb3ab38-e480-46b8-ae2d-d286fe61de3c", "bridge": "br-int", "label": "tempest-network-smoke--1810657086", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1ed5feeeafe7448a8efb47ab975b0ead", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2eefbfcb-7c", "ovs_interfaceid": "2eefbfcb-7c22-4c45-bb7b-75319242796c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 20 15:22:25 compute-1 nova_compute[225855]: 2026-01-20 15:22:25.947 225859 DEBUG nova.network.os_vif_util [None req-08218b91-1d94-42a2-a507-86dc3e555f8f 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Converting VIF {"id": "2eefbfcb-7c22-4c45-bb7b-75319242796c", "address": "fa:16:3e:7a:99:f8", "network": {"id": "6eb3ab38-e480-46b8-ae2d-d286fe61de3c", "bridge": "br-int", "label": "tempest-network-smoke--1810657086", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1ed5feeeafe7448a8efb47ab975b0ead", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2eefbfcb-7c", "ovs_interfaceid": "2eefbfcb-7c22-4c45-bb7b-75319242796c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 15:22:25 compute-1 nova_compute[225855]: 2026-01-20 15:22:25.949 225859 DEBUG nova.network.os_vif_util [None req-08218b91-1d94-42a2-a507-86dc3e555f8f 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:7a:99:f8,bridge_name='br-int',has_traffic_filtering=True,id=2eefbfcb-7c22-4c45-bb7b-75319242796c,network=Network(6eb3ab38-e480-46b8-ae2d-d286fe61de3c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2eefbfcb-7c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 15:22:25 compute-1 nova_compute[225855]: 2026-01-20 15:22:25.950 225859 DEBUG os_vif [None req-08218b91-1d94-42a2-a507-86dc3e555f8f 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:7a:99:f8,bridge_name='br-int',has_traffic_filtering=True,id=2eefbfcb-7c22-4c45-bb7b-75319242796c,network=Network(6eb3ab38-e480-46b8-ae2d-d286fe61de3c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2eefbfcb-7c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 20 15:22:25 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c7fab9924b227159e9767e068e64a7972c66e5c4ddb35306b896a5152ff90354-userdata-shm.mount: Deactivated successfully.
Jan 20 15:22:25 compute-1 systemd[1]: var-lib-containers-storage-overlay-a9a8ddd055a857b793a3cbb612eba64ac8fa8b721d637f2000dc6e7f650142a8-merged.mount: Deactivated successfully.
Jan 20 15:22:25 compute-1 nova_compute[225855]: 2026-01-20 15:22:25.953 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:22:25 compute-1 nova_compute[225855]: 2026-01-20 15:22:25.954 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2eefbfcb-7c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:22:25 compute-1 nova_compute[225855]: 2026-01-20 15:22:25.956 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:22:25 compute-1 nova_compute[225855]: 2026-01-20 15:22:25.959 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 15:22:25 compute-1 nova_compute[225855]: 2026-01-20 15:22:25.964 225859 INFO os_vif [None req-08218b91-1d94-42a2-a507-86dc3e555f8f 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:7a:99:f8,bridge_name='br-int',has_traffic_filtering=True,id=2eefbfcb-7c22-4c45-bb7b-75319242796c,network=Network(6eb3ab38-e480-46b8-ae2d-d286fe61de3c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2eefbfcb-7c')
Jan 20 15:22:25 compute-1 podman[310758]: 2026-01-20 15:22:25.966435445 +0000 UTC m=+0.127351126 container cleanup c7fab9924b227159e9767e068e64a7972c66e5c4ddb35306b896a5152ff90354 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6eb3ab38-e480-46b8-ae2d-d286fe61de3c, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true)
Jan 20 15:22:25 compute-1 systemd[1]: libpod-conmon-c7fab9924b227159e9767e068e64a7972c66e5c4ddb35306b896a5152ff90354.scope: Deactivated successfully.
Jan 20 15:22:26 compute-1 nova_compute[225855]: 2026-01-20 15:22:26.019 225859 DEBUG nova.compute.manager [req-7920d1d3-9b2b-473c-9a1d-9f1d9f85e556 req-b9a4e964-25bc-46f9-9236-6d6e5ea9db20 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 65aa2157-f058-4e5c-b448-64cf956310ba] Received event network-vif-unplugged-2eefbfcb-7c22-4c45-bb7b-75319242796c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:22:26 compute-1 nova_compute[225855]: 2026-01-20 15:22:26.020 225859 DEBUG oslo_concurrency.lockutils [req-7920d1d3-9b2b-473c-9a1d-9f1d9f85e556 req-b9a4e964-25bc-46f9-9236-6d6e5ea9db20 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "65aa2157-f058-4e5c-b448-64cf956310ba-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:22:26 compute-1 nova_compute[225855]: 2026-01-20 15:22:26.020 225859 DEBUG oslo_concurrency.lockutils [req-7920d1d3-9b2b-473c-9a1d-9f1d9f85e556 req-b9a4e964-25bc-46f9-9236-6d6e5ea9db20 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "65aa2157-f058-4e5c-b448-64cf956310ba-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:22:26 compute-1 nova_compute[225855]: 2026-01-20 15:22:26.020 225859 DEBUG oslo_concurrency.lockutils [req-7920d1d3-9b2b-473c-9a1d-9f1d9f85e556 req-b9a4e964-25bc-46f9-9236-6d6e5ea9db20 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "65aa2157-f058-4e5c-b448-64cf956310ba-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:22:26 compute-1 nova_compute[225855]: 2026-01-20 15:22:26.020 225859 DEBUG nova.compute.manager [req-7920d1d3-9b2b-473c-9a1d-9f1d9f85e556 req-b9a4e964-25bc-46f9-9236-6d6e5ea9db20 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 65aa2157-f058-4e5c-b448-64cf956310ba] No waiting events found dispatching network-vif-unplugged-2eefbfcb-7c22-4c45-bb7b-75319242796c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:22:26 compute-1 nova_compute[225855]: 2026-01-20 15:22:26.021 225859 DEBUG nova.compute.manager [req-7920d1d3-9b2b-473c-9a1d-9f1d9f85e556 req-b9a4e964-25bc-46f9-9236-6d6e5ea9db20 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 65aa2157-f058-4e5c-b448-64cf956310ba] Received event network-vif-unplugged-2eefbfcb-7c22-4c45-bb7b-75319242796c for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 20 15:22:26 compute-1 podman[310810]: 2026-01-20 15:22:26.040037497 +0000 UTC m=+0.050422131 container remove c7fab9924b227159e9767e068e64a7972c66e5c4ddb35306b896a5152ff90354 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6eb3ab38-e480-46b8-ae2d-d286fe61de3c, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Jan 20 15:22:26 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:22:26.047 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[0998a0ea-0c0f-4d0a-b08a-741e45441e92]: (4, ('Tue Jan 20 03:22:25 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-6eb3ab38-e480-46b8-ae2d-d286fe61de3c (c7fab9924b227159e9767e068e64a7972c66e5c4ddb35306b896a5152ff90354)\nc7fab9924b227159e9767e068e64a7972c66e5c4ddb35306b896a5152ff90354\nTue Jan 20 03:22:25 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-6eb3ab38-e480-46b8-ae2d-d286fe61de3c (c7fab9924b227159e9767e068e64a7972c66e5c4ddb35306b896a5152ff90354)\nc7fab9924b227159e9767e068e64a7972c66e5c4ddb35306b896a5152ff90354\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:22:26 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:22:26.049 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[eb3ea942-8ece-4236-a5f4-db271b3f1d04]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:22:26 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:22:26.051 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6eb3ab38-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:22:26 compute-1 nova_compute[225855]: 2026-01-20 15:22:26.053 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:22:26 compute-1 kernel: tap6eb3ab38-e0: left promiscuous mode
Jan 20 15:22:26 compute-1 nova_compute[225855]: 2026-01-20 15:22:26.068 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:22:26 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:22:26.072 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[e0f40fae-3d2c-405f-b249-4fa8653d5fd3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:22:26 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:22:26.091 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[a9c7cfeb-dbfa-4508-afb9-4f2d040e0043]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:22:26 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:22:26.093 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[7599edd6-032c-4134-b2d3-a9dc1f52b1b4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:22:26 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:22:26.110 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[a47af88e-731c-467c-b662-66dc0eeb74d3]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 756216, 'reachable_time': 19066, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 310836, 'error': None, 'target': 'ovnmeta-6eb3ab38-e480-46b8-ae2d-d286fe61de3c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:22:26 compute-1 systemd[1]: run-netns-ovnmeta\x2d6eb3ab38\x2de480\x2d46b8\x2dae2d\x2dd286fe61de3c.mount: Deactivated successfully.
Jan 20 15:22:26 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:22:26.115 140466 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-6eb3ab38-e480-46b8-ae2d-d286fe61de3c deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 20 15:22:26 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:22:26.115 140466 DEBUG oslo.privsep.daemon [-] privsep: reply[199eba97-a2fa-4a81-b735-92a34a2691a3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:22:26 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:22:26 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:22:26 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:22:26.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:22:26 compute-1 nova_compute[225855]: 2026-01-20 15:22:26.526 225859 INFO nova.virt.libvirt.driver [None req-08218b91-1d94-42a2-a507-86dc3e555f8f 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 65aa2157-f058-4e5c-b448-64cf956310ba] Deleting instance files /var/lib/nova/instances/65aa2157-f058-4e5c-b448-64cf956310ba_del
Jan 20 15:22:26 compute-1 nova_compute[225855]: 2026-01-20 15:22:26.527 225859 INFO nova.virt.libvirt.driver [None req-08218b91-1d94-42a2-a507-86dc3e555f8f 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 65aa2157-f058-4e5c-b448-64cf956310ba] Deletion of /var/lib/nova/instances/65aa2157-f058-4e5c-b448-64cf956310ba_del complete
Jan 20 15:22:26 compute-1 nova_compute[225855]: 2026-01-20 15:22:26.576 225859 INFO nova.compute.manager [None req-08218b91-1d94-42a2-a507-86dc3e555f8f 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 65aa2157-f058-4e5c-b448-64cf956310ba] Took 0.90 seconds to destroy the instance on the hypervisor.
Jan 20 15:22:26 compute-1 nova_compute[225855]: 2026-01-20 15:22:26.577 225859 DEBUG oslo.service.loopingcall [None req-08218b91-1d94-42a2-a507-86dc3e555f8f 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 20 15:22:26 compute-1 nova_compute[225855]: 2026-01-20 15:22:26.577 225859 DEBUG nova.compute.manager [-] [instance: 65aa2157-f058-4e5c-b448-64cf956310ba] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 20 15:22:26 compute-1 nova_compute[225855]: 2026-01-20 15:22:26.578 225859 DEBUG nova.network.neutron [-] [instance: 65aa2157-f058-4e5c-b448-64cf956310ba] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 20 15:22:26 compute-1 ceph-mon[81775]: pgmap v3008: 321 pgs: 321 active+clean; 202 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 603 KiB/s rd, 27 KiB/s wr, 63 op/s
Jan 20 15:22:26 compute-1 nova_compute[225855]: 2026-01-20 15:22:26.856 225859 DEBUG nova.network.neutron [req-8d8c6560-d5dc-4b7f-9423-393401c984e3 req-3c2bd7cd-16a2-4fd4-bed4-6df2d05542f1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 65aa2157-f058-4e5c-b448-64cf956310ba] Updated VIF entry in instance network info cache for port 2eefbfcb-7c22-4c45-bb7b-75319242796c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 15:22:26 compute-1 nova_compute[225855]: 2026-01-20 15:22:26.857 225859 DEBUG nova.network.neutron [req-8d8c6560-d5dc-4b7f-9423-393401c984e3 req-3c2bd7cd-16a2-4fd4-bed4-6df2d05542f1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 65aa2157-f058-4e5c-b448-64cf956310ba] Updating instance_info_cache with network_info: [{"id": "2eefbfcb-7c22-4c45-bb7b-75319242796c", "address": "fa:16:3e:7a:99:f8", "network": {"id": "6eb3ab38-e480-46b8-ae2d-d286fe61de3c", "bridge": "br-int", "label": "tempest-network-smoke--1810657086", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1ed5feeeafe7448a8efb47ab975b0ead", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2eefbfcb-7c", "ovs_interfaceid": "2eefbfcb-7c22-4c45-bb7b-75319242796c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:22:26 compute-1 nova_compute[225855]: 2026-01-20 15:22:26.890 225859 DEBUG oslo_concurrency.lockutils [req-8d8c6560-d5dc-4b7f-9423-393401c984e3 req-3c2bd7cd-16a2-4fd4-bed4-6df2d05542f1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-65aa2157-f058-4e5c-b448-64cf956310ba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:22:27 compute-1 nova_compute[225855]: 2026-01-20 15:22:27.078 225859 DEBUG nova.network.neutron [-] [instance: 65aa2157-f058-4e5c-b448-64cf956310ba] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:22:27 compute-1 nova_compute[225855]: 2026-01-20 15:22:27.100 225859 INFO nova.compute.manager [-] [instance: 65aa2157-f058-4e5c-b448-64cf956310ba] Took 0.52 seconds to deallocate network for instance.
Jan 20 15:22:27 compute-1 nova_compute[225855]: 2026-01-20 15:22:27.151 225859 DEBUG oslo_concurrency.lockutils [None req-08218b91-1d94-42a2-a507-86dc3e555f8f 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:22:27 compute-1 nova_compute[225855]: 2026-01-20 15:22:27.151 225859 DEBUG oslo_concurrency.lockutils [None req-08218b91-1d94-42a2-a507-86dc3e555f8f 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:22:27 compute-1 nova_compute[225855]: 2026-01-20 15:22:27.185 225859 DEBUG nova.compute.manager [req-7210697d-abfa-4ffe-afb5-0829aea6713c req-df827cb6-1886-450c-b69f-b92a76ed0a7f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 65aa2157-f058-4e5c-b448-64cf956310ba] Received event network-vif-deleted-2eefbfcb-7c22-4c45-bb7b-75319242796c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:22:27 compute-1 nova_compute[225855]: 2026-01-20 15:22:27.216 225859 DEBUG oslo_concurrency.processutils [None req-08218b91-1d94-42a2-a507-86dc3e555f8f 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:22:27 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:22:27 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1554924812' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:22:27 compute-1 nova_compute[225855]: 2026-01-20 15:22:27.670 225859 DEBUG oslo_concurrency.processutils [None req-08218b91-1d94-42a2-a507-86dc3e555f8f 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:22:27 compute-1 nova_compute[225855]: 2026-01-20 15:22:27.678 225859 DEBUG nova.compute.provider_tree [None req-08218b91-1d94-42a2-a507-86dc3e555f8f 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:22:27 compute-1 nova_compute[225855]: 2026-01-20 15:22:27.698 225859 DEBUG nova.scheduler.client.report [None req-08218b91-1d94-42a2-a507-86dc3e555f8f 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:22:27 compute-1 nova_compute[225855]: 2026-01-20 15:22:27.720 225859 DEBUG oslo_concurrency.lockutils [None req-08218b91-1d94-42a2-a507-86dc3e555f8f 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.569s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:22:27 compute-1 nova_compute[225855]: 2026-01-20 15:22:27.745 225859 INFO nova.scheduler.client.report [None req-08218b91-1d94-42a2-a507-86dc3e555f8f 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Deleted allocations for instance 65aa2157-f058-4e5c-b448-64cf956310ba
Jan 20 15:22:27 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:22:27 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:22:27 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:22:27.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:22:27 compute-1 nova_compute[225855]: 2026-01-20 15:22:27.816 225859 DEBUG oslo_concurrency.lockutils [None req-08218b91-1d94-42a2-a507-86dc3e555f8f 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Lock "65aa2157-f058-4e5c-b448-64cf956310ba" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.141s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:22:27 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/1554924812' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:22:28 compute-1 nova_compute[225855]: 2026-01-20 15:22:28.107 225859 DEBUG nova.compute.manager [req-b5c83043-4b3c-4fff-b193-470c405dd90f req-e464383b-dda6-4085-a4e0-233ae7868e36 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 65aa2157-f058-4e5c-b448-64cf956310ba] Received event network-vif-plugged-2eefbfcb-7c22-4c45-bb7b-75319242796c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:22:28 compute-1 nova_compute[225855]: 2026-01-20 15:22:28.107 225859 DEBUG oslo_concurrency.lockutils [req-b5c83043-4b3c-4fff-b193-470c405dd90f req-e464383b-dda6-4085-a4e0-233ae7868e36 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "65aa2157-f058-4e5c-b448-64cf956310ba-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:22:28 compute-1 nova_compute[225855]: 2026-01-20 15:22:28.108 225859 DEBUG oslo_concurrency.lockutils [req-b5c83043-4b3c-4fff-b193-470c405dd90f req-e464383b-dda6-4085-a4e0-233ae7868e36 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "65aa2157-f058-4e5c-b448-64cf956310ba-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:22:28 compute-1 nova_compute[225855]: 2026-01-20 15:22:28.108 225859 DEBUG oslo_concurrency.lockutils [req-b5c83043-4b3c-4fff-b193-470c405dd90f req-e464383b-dda6-4085-a4e0-233ae7868e36 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "65aa2157-f058-4e5c-b448-64cf956310ba-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:22:28 compute-1 nova_compute[225855]: 2026-01-20 15:22:28.108 225859 DEBUG nova.compute.manager [req-b5c83043-4b3c-4fff-b193-470c405dd90f req-e464383b-dda6-4085-a4e0-233ae7868e36 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 65aa2157-f058-4e5c-b448-64cf956310ba] No waiting events found dispatching network-vif-plugged-2eefbfcb-7c22-4c45-bb7b-75319242796c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:22:28 compute-1 nova_compute[225855]: 2026-01-20 15:22:28.108 225859 WARNING nova.compute.manager [req-b5c83043-4b3c-4fff-b193-470c405dd90f req-e464383b-dda6-4085-a4e0-233ae7868e36 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 65aa2157-f058-4e5c-b448-64cf956310ba] Received unexpected event network-vif-plugged-2eefbfcb-7c22-4c45-bb7b-75319242796c for instance with vm_state deleted and task_state None.
Jan 20 15:22:28 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:22:28 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:22:28 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:22:28.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:22:28 compute-1 nova_compute[225855]: 2026-01-20 15:22:28.309 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:22:28 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:22:28 compute-1 ceph-mon[81775]: pgmap v3009: 321 pgs: 321 active+clean; 202 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 603 KiB/s rd, 27 KiB/s wr, 63 op/s
Jan 20 15:22:29 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:22:29 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:22:29 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:22:29.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:22:30 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:22:30 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:22:30 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:22:30.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:22:30 compute-1 ceph-mon[81775]: pgmap v3010: 321 pgs: 321 active+clean; 142 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 189 KiB/s rd, 14 KiB/s wr, 44 op/s
Jan 20 15:22:30 compute-1 nova_compute[225855]: 2026-01-20 15:22:30.957 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:22:31 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:22:31 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:22:31 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:22:31.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:22:32 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:22:32 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:22:32 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:22:32.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:22:32 compute-1 nova_compute[225855]: 2026-01-20 15:22:32.187 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:22:32 compute-1 nova_compute[225855]: 2026-01-20 15:22:32.266 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:22:32 compute-1 sudo[310865]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:22:32 compute-1 sudo[310865]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:22:32 compute-1 sudo[310865]: pam_unix(sudo:session): session closed for user root
Jan 20 15:22:32 compute-1 sudo[310890]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:22:32 compute-1 sudo[310890]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:22:32 compute-1 sudo[310890]: pam_unix(sudo:session): session closed for user root
Jan 20 15:22:32 compute-1 ceph-mon[81775]: pgmap v3011: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 25 KiB/s rd, 14 KiB/s wr, 34 op/s
Jan 20 15:22:32 compute-1 sudo[310915]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:22:32 compute-1 sudo[310915]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:22:32 compute-1 sudo[310915]: pam_unix(sudo:session): session closed for user root
Jan 20 15:22:32 compute-1 sudo[310940]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Jan 20 15:22:32 compute-1 sudo[310940]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:22:33 compute-1 nova_compute[225855]: 2026-01-20 15:22:33.310 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:22:33 compute-1 podman[311040]: 2026-01-20 15:22:33.364362974 +0000 UTC m=+0.059869770 container exec 718ebba7a543e42aad7051248d2c7dc014068c35c89c5b87f27b82d4de39c009 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-crash-compute-1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 20 15:22:33 compute-1 podman[311040]: 2026-01-20 15:22:33.455344712 +0000 UTC m=+0.150851528 container exec_died 718ebba7a543e42aad7051248d2c7dc014068c35c89c5b87f27b82d4de39c009 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-crash-compute-1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 15:22:33 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:22:33 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:22:33 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:22:33 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:22:33.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:22:34 compute-1 podman[311195]: 2026-01-20 15:22:34.018136191 +0000 UTC m=+0.050450732 container exec 25e2c3387bc15944c21038272559da5fbf75910d8dd4add0faa995fb4e0f7788 (image=quay.io/ceph/haproxy:2.3, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-haproxy-rgw-default-compute-1-uyeocq)
Jan 20 15:22:34 compute-1 podman[311195]: 2026-01-20 15:22:34.023603437 +0000 UTC m=+0.055917958 container exec_died 25e2c3387bc15944c21038272559da5fbf75910d8dd4add0faa995fb4e0f7788 (image=quay.io/ceph/haproxy:2.3, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-haproxy-rgw-default-compute-1-uyeocq)
Jan 20 15:22:34 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:22:34 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:22:34 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:22:34.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:22:34 compute-1 podman[311261]: 2026-01-20 15:22:34.235467046 +0000 UTC m=+0.067205680 container exec e27b69e4cc956b06482c80498336e112a56122514cd7345d3d4b39a4d206f962 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-keepalived-rgw-default-compute-1-cevitz, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vendor=Red Hat, Inc., io.openshift.expose-services=, name=keepalived, com.redhat.component=keepalived-container, io.buildah.version=1.28.2, architecture=x86_64, summary=Provides keepalived on RHEL 9 for Ceph., build-date=2023-02-22T09:23:20, io.openshift.tags=Ceph keepalived, distribution-scope=public, description=keepalived for Ceph, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1793, io.k8s.display-name=Keepalived on RHEL 9, version=2.2.4, maintainer=Guillaume Abrioux <gabrioux@redhat.com>)
Jan 20 15:22:35 compute-1 ceph-mon[81775]: pgmap v3012: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 21 KiB/s rd, 12 KiB/s wr, 29 op/s
Jan 20 15:22:35 compute-1 podman[311261]: 2026-01-20 15:22:35.459468663 +0000 UTC m=+1.291207267 container exec_died e27b69e4cc956b06482c80498336e112a56122514cd7345d3d4b39a4d206f962 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-keepalived-rgw-default-compute-1-cevitz, version=2.2.4, io.openshift.expose-services=, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.component=keepalived-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., build-date=2023-02-22T09:23:20, io.buildah.version=1.28.2, vcs-type=git, release=1793, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Keepalived on RHEL 9, name=keepalived, description=keepalived for Ceph, architecture=x86_64, summary=Provides keepalived on RHEL 9 for Ceph., io.openshift.tags=Ceph keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, distribution-scope=public)
Jan 20 15:22:35 compute-1 sudo[310940]: pam_unix(sudo:session): session closed for user root
Jan 20 15:22:35 compute-1 sudo[311294]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:22:35 compute-1 sudo[311294]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:22:35 compute-1 sudo[311294]: pam_unix(sudo:session): session closed for user root
Jan 20 15:22:35 compute-1 sudo[311319]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:22:35 compute-1 sudo[311319]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:22:35 compute-1 sudo[311319]: pam_unix(sudo:session): session closed for user root
Jan 20 15:22:35 compute-1 sudo[311344]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:22:35 compute-1 sudo[311344]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:22:35 compute-1 sudo[311344]: pam_unix(sudo:session): session closed for user root
Jan 20 15:22:35 compute-1 sudo[311370]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 20 15:22:35 compute-1 sudo[311370]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:22:35 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:22:35 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:22:35 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:22:35.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:22:35 compute-1 nova_compute[225855]: 2026-01-20 15:22:35.959 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:22:36 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:22:36 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:22:36 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:22:36.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:22:36 compute-1 sudo[311370]: pam_unix(sudo:session): session closed for user root
Jan 20 15:22:36 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:22:36 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:22:36 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:22:36 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:22:36 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:22:36 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:22:36 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:22:36 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 15:22:36 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:22:36 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 15:22:36 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 15:22:36 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:22:37 compute-1 podman[311426]: 2026-01-20 15:22:37.006728458 +0000 UTC m=+0.051685216 container health_status 533cd8e0c3b8b7f910f15cef49e7f879ab46c12ce646a35628c15f13725eb500 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 20 15:22:37 compute-1 ceph-mon[81775]: pgmap v3013: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 21 KiB/s rd, 12 KiB/s wr, 29 op/s
Jan 20 15:22:37 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:22:37 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:22:37 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:22:37.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:22:38 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:22:38 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:22:38 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:22:38.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:22:38 compute-1 nova_compute[225855]: 2026-01-20 15:22:38.313 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:22:38 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:22:38 compute-1 ceph-mon[81775]: pgmap v3014: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 20 15:22:39 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:22:39 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:22:39 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:22:39.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:22:40 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:22:40 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:22:40 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:22:40.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:22:40 compute-1 ceph-mon[81775]: pgmap v3015: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 20 15:22:40 compute-1 nova_compute[225855]: 2026-01-20 15:22:40.920 225859 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1768922545.9189079, 65aa2157-f058-4e5c-b448-64cf956310ba => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:22:40 compute-1 nova_compute[225855]: 2026-01-20 15:22:40.921 225859 INFO nova.compute.manager [-] [instance: 65aa2157-f058-4e5c-b448-64cf956310ba] VM Stopped (Lifecycle Event)
Jan 20 15:22:40 compute-1 nova_compute[225855]: 2026-01-20 15:22:40.945 225859 DEBUG nova.compute.manager [None req-ffaf5845-51ae-44d5-a996-05b0d3b8fdfe - - - - - -] [instance: 65aa2157-f058-4e5c-b448-64cf956310ba] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:22:40 compute-1 nova_compute[225855]: 2026-01-20 15:22:40.961 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:22:41 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:22:41 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:22:41 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:22:41.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:22:41 compute-1 sudo[311450]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:22:41 compute-1 sudo[311450]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:22:41 compute-1 sudo[311450]: pam_unix(sudo:session): session closed for user root
Jan 20 15:22:42 compute-1 sudo[311475]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 15:22:42 compute-1 sudo[311475]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:22:42 compute-1 sudo[311475]: pam_unix(sudo:session): session closed for user root
Jan 20 15:22:42 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:22:42 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:22:42 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:22:42.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:22:42 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:22:42 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:22:43 compute-1 nova_compute[225855]: 2026-01-20 15:22:43.315 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:22:43 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:22:43 compute-1 ceph-mon[81775]: pgmap v3016: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.7 KiB/s rd, 511 B/s wr, 3 op/s
Jan 20 15:22:43 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:22:43 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:22:43 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:22:43.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:22:44 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:22:44 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:22:44 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:22:44.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:22:44 compute-1 sudo[311502]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:22:44 compute-1 sudo[311502]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:22:44 compute-1 sudo[311502]: pam_unix(sudo:session): session closed for user root
Jan 20 15:22:44 compute-1 sudo[311527]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:22:44 compute-1 sudo[311527]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:22:44 compute-1 sudo[311527]: pam_unix(sudo:session): session closed for user root
Jan 20 15:22:44 compute-1 ceph-mon[81775]: pgmap v3017: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:22:45 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:22:45 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:22:45 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:22:45.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:22:46 compute-1 nova_compute[225855]: 2026-01-20 15:22:46.019 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:22:46 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:22:46 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:22:46 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:22:46.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:22:46 compute-1 ceph-mon[81775]: pgmap v3018: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:22:47 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:22:47 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:22:47 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:22:47.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:22:48 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:22:48 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:22:48 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:22:48.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:22:48 compute-1 nova_compute[225855]: 2026-01-20 15:22:48.318 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:22:48 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:22:49 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:22:49 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:22:49 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:22:49.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:22:49 compute-1 ceph-mon[81775]: pgmap v3019: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:22:50 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:22:50 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:22:50 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:22:50.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:22:51 compute-1 nova_compute[225855]: 2026-01-20 15:22:51.058 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:22:51 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:22:51 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:22:51 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:22:51.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:22:51 compute-1 ceph-mon[81775]: pgmap v3020: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:22:52 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:22:52 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:22:52 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:22:52.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:22:53 compute-1 nova_compute[225855]: 2026-01-20 15:22:53.243 225859 DEBUG oslo_concurrency.lockutils [None req-1b898df7-ad12-4947-8ab8-99ba6eee88f5 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Acquiring lock "4c926c1a-d5cf-4865-aa57-66b439d115f8" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:22:53 compute-1 nova_compute[225855]: 2026-01-20 15:22:53.243 225859 DEBUG oslo_concurrency.lockutils [None req-1b898df7-ad12-4947-8ab8-99ba6eee88f5 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Lock "4c926c1a-d5cf-4865-aa57-66b439d115f8" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:22:53 compute-1 nova_compute[225855]: 2026-01-20 15:22:53.257 225859 DEBUG nova.compute.manager [None req-1b898df7-ad12-4947-8ab8-99ba6eee88f5 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 4c926c1a-d5cf-4865-aa57-66b439d115f8] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 20 15:22:53 compute-1 nova_compute[225855]: 2026-01-20 15:22:53.319 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:22:53 compute-1 nova_compute[225855]: 2026-01-20 15:22:53.349 225859 DEBUG oslo_concurrency.lockutils [None req-1b898df7-ad12-4947-8ab8-99ba6eee88f5 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:22:53 compute-1 nova_compute[225855]: 2026-01-20 15:22:53.350 225859 DEBUG oslo_concurrency.lockutils [None req-1b898df7-ad12-4947-8ab8-99ba6eee88f5 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:22:53 compute-1 nova_compute[225855]: 2026-01-20 15:22:53.359 225859 DEBUG nova.virt.hardware [None req-1b898df7-ad12-4947-8ab8-99ba6eee88f5 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 20 15:22:53 compute-1 nova_compute[225855]: 2026-01-20 15:22:53.359 225859 INFO nova.compute.claims [None req-1b898df7-ad12-4947-8ab8-99ba6eee88f5 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 4c926c1a-d5cf-4865-aa57-66b439d115f8] Claim successful on node compute-1.ctlplane.example.com
Jan 20 15:22:53 compute-1 nova_compute[225855]: 2026-01-20 15:22:53.449 225859 DEBUG oslo_concurrency.processutils [None req-1b898df7-ad12-4947-8ab8-99ba6eee88f5 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:22:53 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:22:53 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:22:53 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:22:53 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:22:53.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:22:53 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:22:53 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3700219704' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:22:53 compute-1 nova_compute[225855]: 2026-01-20 15:22:53.905 225859 DEBUG oslo_concurrency.processutils [None req-1b898df7-ad12-4947-8ab8-99ba6eee88f5 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:22:53 compute-1 nova_compute[225855]: 2026-01-20 15:22:53.911 225859 DEBUG nova.compute.provider_tree [None req-1b898df7-ad12-4947-8ab8-99ba6eee88f5 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:22:53 compute-1 nova_compute[225855]: 2026-01-20 15:22:53.928 225859 DEBUG nova.scheduler.client.report [None req-1b898df7-ad12-4947-8ab8-99ba6eee88f5 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:22:53 compute-1 nova_compute[225855]: 2026-01-20 15:22:53.952 225859 DEBUG oslo_concurrency.lockutils [None req-1b898df7-ad12-4947-8ab8-99ba6eee88f5 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.602s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:22:53 compute-1 nova_compute[225855]: 2026-01-20 15:22:53.953 225859 DEBUG nova.compute.manager [None req-1b898df7-ad12-4947-8ab8-99ba6eee88f5 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 4c926c1a-d5cf-4865-aa57-66b439d115f8] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 20 15:22:53 compute-1 ceph-mon[81775]: pgmap v3021: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:22:53 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/3700219704' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:22:53 compute-1 nova_compute[225855]: 2026-01-20 15:22:53.994 225859 DEBUG nova.compute.manager [None req-1b898df7-ad12-4947-8ab8-99ba6eee88f5 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 4c926c1a-d5cf-4865-aa57-66b439d115f8] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 20 15:22:53 compute-1 nova_compute[225855]: 2026-01-20 15:22:53.995 225859 DEBUG nova.network.neutron [None req-1b898df7-ad12-4947-8ab8-99ba6eee88f5 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 4c926c1a-d5cf-4865-aa57-66b439d115f8] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 20 15:22:54 compute-1 nova_compute[225855]: 2026-01-20 15:22:54.015 225859 INFO nova.virt.libvirt.driver [None req-1b898df7-ad12-4947-8ab8-99ba6eee88f5 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 4c926c1a-d5cf-4865-aa57-66b439d115f8] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 20 15:22:54 compute-1 nova_compute[225855]: 2026-01-20 15:22:54.031 225859 DEBUG nova.compute.manager [None req-1b898df7-ad12-4947-8ab8-99ba6eee88f5 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 4c926c1a-d5cf-4865-aa57-66b439d115f8] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 20 15:22:54 compute-1 nova_compute[225855]: 2026-01-20 15:22:54.123 225859 DEBUG nova.compute.manager [None req-1b898df7-ad12-4947-8ab8-99ba6eee88f5 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 4c926c1a-d5cf-4865-aa57-66b439d115f8] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 20 15:22:54 compute-1 nova_compute[225855]: 2026-01-20 15:22:54.124 225859 DEBUG nova.virt.libvirt.driver [None req-1b898df7-ad12-4947-8ab8-99ba6eee88f5 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 4c926c1a-d5cf-4865-aa57-66b439d115f8] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 20 15:22:54 compute-1 nova_compute[225855]: 2026-01-20 15:22:54.125 225859 INFO nova.virt.libvirt.driver [None req-1b898df7-ad12-4947-8ab8-99ba6eee88f5 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 4c926c1a-d5cf-4865-aa57-66b439d115f8] Creating image(s)
Jan 20 15:22:54 compute-1 nova_compute[225855]: 2026-01-20 15:22:54.149 225859 DEBUG nova.storage.rbd_utils [None req-1b898df7-ad12-4947-8ab8-99ba6eee88f5 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] rbd image 4c926c1a-d5cf-4865-aa57-66b439d115f8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:22:54 compute-1 nova_compute[225855]: 2026-01-20 15:22:54.172 225859 DEBUG nova.storage.rbd_utils [None req-1b898df7-ad12-4947-8ab8-99ba6eee88f5 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] rbd image 4c926c1a-d5cf-4865-aa57-66b439d115f8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:22:54 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:22:54 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:22:54 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:22:54.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:22:54 compute-1 nova_compute[225855]: 2026-01-20 15:22:54.199 225859 DEBUG nova.storage.rbd_utils [None req-1b898df7-ad12-4947-8ab8-99ba6eee88f5 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] rbd image 4c926c1a-d5cf-4865-aa57-66b439d115f8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:22:54 compute-1 nova_compute[225855]: 2026-01-20 15:22:54.204 225859 DEBUG oslo_concurrency.processutils [None req-1b898df7-ad12-4947-8ab8-99ba6eee88f5 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:22:54 compute-1 nova_compute[225855]: 2026-01-20 15:22:54.235 225859 DEBUG nova.policy [None req-1b898df7-ad12-4947-8ab8-99ba6eee88f5 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '442a7a5cb8ea426a82be9762b262d171', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '1ed5feeeafe7448a8efb47ab975b0ead', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 20 15:22:54 compute-1 nova_compute[225855]: 2026-01-20 15:22:54.275 225859 DEBUG oslo_concurrency.processutils [None req-1b898df7-ad12-4947-8ab8-99ba6eee88f5 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:22:54 compute-1 nova_compute[225855]: 2026-01-20 15:22:54.276 225859 DEBUG oslo_concurrency.lockutils [None req-1b898df7-ad12-4947-8ab8-99ba6eee88f5 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Acquiring lock "82d5c1918fd7c974214c7a48c1793a7a82560462" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:22:54 compute-1 nova_compute[225855]: 2026-01-20 15:22:54.276 225859 DEBUG oslo_concurrency.lockutils [None req-1b898df7-ad12-4947-8ab8-99ba6eee88f5 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:22:54 compute-1 nova_compute[225855]: 2026-01-20 15:22:54.277 225859 DEBUG oslo_concurrency.lockutils [None req-1b898df7-ad12-4947-8ab8-99ba6eee88f5 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:22:54 compute-1 nova_compute[225855]: 2026-01-20 15:22:54.299 225859 DEBUG nova.storage.rbd_utils [None req-1b898df7-ad12-4947-8ab8-99ba6eee88f5 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] rbd image 4c926c1a-d5cf-4865-aa57-66b439d115f8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:22:54 compute-1 nova_compute[225855]: 2026-01-20 15:22:54.302 225859 DEBUG oslo_concurrency.processutils [None req-1b898df7-ad12-4947-8ab8-99ba6eee88f5 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 4c926c1a-d5cf-4865-aa57-66b439d115f8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:22:54 compute-1 nova_compute[225855]: 2026-01-20 15:22:54.598 225859 DEBUG oslo_concurrency.processutils [None req-1b898df7-ad12-4947-8ab8-99ba6eee88f5 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 4c926c1a-d5cf-4865-aa57-66b439d115f8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.296s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:22:54 compute-1 nova_compute[225855]: 2026-01-20 15:22:54.649 225859 DEBUG nova.storage.rbd_utils [None req-1b898df7-ad12-4947-8ab8-99ba6eee88f5 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] resizing rbd image 4c926c1a-d5cf-4865-aa57-66b439d115f8_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 20 15:22:54 compute-1 nova_compute[225855]: 2026-01-20 15:22:54.732 225859 DEBUG nova.objects.instance [None req-1b898df7-ad12-4947-8ab8-99ba6eee88f5 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Lazy-loading 'migration_context' on Instance uuid 4c926c1a-d5cf-4865-aa57-66b439d115f8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:22:54 compute-1 nova_compute[225855]: 2026-01-20 15:22:54.747 225859 DEBUG nova.virt.libvirt.driver [None req-1b898df7-ad12-4947-8ab8-99ba6eee88f5 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 4c926c1a-d5cf-4865-aa57-66b439d115f8] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 20 15:22:54 compute-1 nova_compute[225855]: 2026-01-20 15:22:54.748 225859 DEBUG nova.virt.libvirt.driver [None req-1b898df7-ad12-4947-8ab8-99ba6eee88f5 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 4c926c1a-d5cf-4865-aa57-66b439d115f8] Ensure instance console log exists: /var/lib/nova/instances/4c926c1a-d5cf-4865-aa57-66b439d115f8/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 20 15:22:54 compute-1 nova_compute[225855]: 2026-01-20 15:22:54.748 225859 DEBUG oslo_concurrency.lockutils [None req-1b898df7-ad12-4947-8ab8-99ba6eee88f5 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:22:54 compute-1 nova_compute[225855]: 2026-01-20 15:22:54.748 225859 DEBUG oslo_concurrency.lockutils [None req-1b898df7-ad12-4947-8ab8-99ba6eee88f5 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:22:54 compute-1 nova_compute[225855]: 2026-01-20 15:22:54.749 225859 DEBUG oslo_concurrency.lockutils [None req-1b898df7-ad12-4947-8ab8-99ba6eee88f5 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:22:54 compute-1 nova_compute[225855]: 2026-01-20 15:22:54.957 225859 DEBUG nova.network.neutron [None req-1b898df7-ad12-4947-8ab8-99ba6eee88f5 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 4c926c1a-d5cf-4865-aa57-66b439d115f8] Successfully created port: 5ee850cb-507f-4288-993f-a7892e9285c9 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 20 15:22:55 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:22:55 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:22:55 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:22:55.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:22:56 compute-1 nova_compute[225855]: 2026-01-20 15:22:56.061 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:22:56 compute-1 ceph-mon[81775]: pgmap v3022: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:22:56 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:22:56 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:22:56 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:22:56.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:22:57 compute-1 podman[311746]: 2026-01-20 15:22:57.033118985 +0000 UTC m=+0.079018327 container health_status 72be95c12e041eb2cd1a16ad70dfa1391d9f03a5f3180a614ead9459dcfcd46f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 20 15:22:57 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:22:57 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:22:57 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:22:57.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:22:57 compute-1 nova_compute[225855]: 2026-01-20 15:22:57.832 225859 DEBUG nova.network.neutron [None req-1b898df7-ad12-4947-8ab8-99ba6eee88f5 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 4c926c1a-d5cf-4865-aa57-66b439d115f8] Successfully updated port: 5ee850cb-507f-4288-993f-a7892e9285c9 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 20 15:22:57 compute-1 nova_compute[225855]: 2026-01-20 15:22:57.847 225859 DEBUG oslo_concurrency.lockutils [None req-1b898df7-ad12-4947-8ab8-99ba6eee88f5 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Acquiring lock "refresh_cache-4c926c1a-d5cf-4865-aa57-66b439d115f8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:22:57 compute-1 nova_compute[225855]: 2026-01-20 15:22:57.848 225859 DEBUG oslo_concurrency.lockutils [None req-1b898df7-ad12-4947-8ab8-99ba6eee88f5 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Acquired lock "refresh_cache-4c926c1a-d5cf-4865-aa57-66b439d115f8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:22:57 compute-1 nova_compute[225855]: 2026-01-20 15:22:57.848 225859 DEBUG nova.network.neutron [None req-1b898df7-ad12-4947-8ab8-99ba6eee88f5 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 4c926c1a-d5cf-4865-aa57-66b439d115f8] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 15:22:57 compute-1 nova_compute[225855]: 2026-01-20 15:22:57.946 225859 DEBUG nova.compute.manager [req-8316f45a-60ef-47bf-8a7c-ae73d249895a req-282130c6-2182-480d-ac22-06c9330ecd05 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 4c926c1a-d5cf-4865-aa57-66b439d115f8] Received event network-changed-5ee850cb-507f-4288-993f-a7892e9285c9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:22:57 compute-1 nova_compute[225855]: 2026-01-20 15:22:57.947 225859 DEBUG nova.compute.manager [req-8316f45a-60ef-47bf-8a7c-ae73d249895a req-282130c6-2182-480d-ac22-06c9330ecd05 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 4c926c1a-d5cf-4865-aa57-66b439d115f8] Refreshing instance network info cache due to event network-changed-5ee850cb-507f-4288-993f-a7892e9285c9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 15:22:57 compute-1 nova_compute[225855]: 2026-01-20 15:22:57.947 225859 DEBUG oslo_concurrency.lockutils [req-8316f45a-60ef-47bf-8a7c-ae73d249895a req-282130c6-2182-480d-ac22-06c9330ecd05 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-4c926c1a-d5cf-4865-aa57-66b439d115f8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:22:57 compute-1 nova_compute[225855]: 2026-01-20 15:22:57.995 225859 DEBUG nova.network.neutron [None req-1b898df7-ad12-4947-8ab8-99ba6eee88f5 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 4c926c1a-d5cf-4865-aa57-66b439d115f8] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 20 15:22:58 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:22:58 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:22:58 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:22:58.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:22:58 compute-1 ceph-mon[81775]: pgmap v3023: 321 pgs: 321 active+clean; 151 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 11 KiB/s rd, 1.3 MiB/s wr, 15 op/s
Jan 20 15:22:58 compute-1 nova_compute[225855]: 2026-01-20 15:22:58.323 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:22:58 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:22:59 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:22:59 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:22:59 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:22:59.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:23:00 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:23:00 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:23:00 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:23:00.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:23:00 compute-1 ceph-mon[81775]: pgmap v3024: 321 pgs: 321 active+clean; 151 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 11 KiB/s rd, 1.3 MiB/s wr, 15 op/s
Jan 20 15:23:01 compute-1 nova_compute[225855]: 2026-01-20 15:23:01.090 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:23:01 compute-1 nova_compute[225855]: 2026-01-20 15:23:01.208 225859 DEBUG nova.network.neutron [None req-1b898df7-ad12-4947-8ab8-99ba6eee88f5 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 4c926c1a-d5cf-4865-aa57-66b439d115f8] Updating instance_info_cache with network_info: [{"id": "5ee850cb-507f-4288-993f-a7892e9285c9", "address": "fa:16:3e:cc:0a:92", "network": {"id": "6a8ba0d6-68f0-4a25-84ff-3685d5b259a6", "bridge": "br-int", "label": "tempest-network-smoke--805773012", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1ed5feeeafe7448a8efb47ab975b0ead", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5ee850cb-50", "ovs_interfaceid": "5ee850cb-507f-4288-993f-a7892e9285c9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:23:01 compute-1 nova_compute[225855]: 2026-01-20 15:23:01.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:23:01 compute-1 nova_compute[225855]: 2026-01-20 15:23:01.340 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 15:23:01 compute-1 nova_compute[225855]: 2026-01-20 15:23:01.815 225859 DEBUG oslo_concurrency.lockutils [None req-1b898df7-ad12-4947-8ab8-99ba6eee88f5 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Releasing lock "refresh_cache-4c926c1a-d5cf-4865-aa57-66b439d115f8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:23:01 compute-1 nova_compute[225855]: 2026-01-20 15:23:01.815 225859 DEBUG nova.compute.manager [None req-1b898df7-ad12-4947-8ab8-99ba6eee88f5 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 4c926c1a-d5cf-4865-aa57-66b439d115f8] Instance network_info: |[{"id": "5ee850cb-507f-4288-993f-a7892e9285c9", "address": "fa:16:3e:cc:0a:92", "network": {"id": "6a8ba0d6-68f0-4a25-84ff-3685d5b259a6", "bridge": "br-int", "label": "tempest-network-smoke--805773012", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1ed5feeeafe7448a8efb47ab975b0ead", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5ee850cb-50", "ovs_interfaceid": "5ee850cb-507f-4288-993f-a7892e9285c9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 20 15:23:01 compute-1 nova_compute[225855]: 2026-01-20 15:23:01.816 225859 DEBUG oslo_concurrency.lockutils [req-8316f45a-60ef-47bf-8a7c-ae73d249895a req-282130c6-2182-480d-ac22-06c9330ecd05 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-4c926c1a-d5cf-4865-aa57-66b439d115f8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:23:01 compute-1 nova_compute[225855]: 2026-01-20 15:23:01.816 225859 DEBUG nova.network.neutron [req-8316f45a-60ef-47bf-8a7c-ae73d249895a req-282130c6-2182-480d-ac22-06c9330ecd05 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 4c926c1a-d5cf-4865-aa57-66b439d115f8] Refreshing network info cache for port 5ee850cb-507f-4288-993f-a7892e9285c9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 15:23:01 compute-1 nova_compute[225855]: 2026-01-20 15:23:01.819 225859 DEBUG nova.virt.libvirt.driver [None req-1b898df7-ad12-4947-8ab8-99ba6eee88f5 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 4c926c1a-d5cf-4865-aa57-66b439d115f8] Start _get_guest_xml network_info=[{"id": "5ee850cb-507f-4288-993f-a7892e9285c9", "address": "fa:16:3e:cc:0a:92", "network": {"id": "6a8ba0d6-68f0-4a25-84ff-3685d5b259a6", "bridge": "br-int", "label": "tempest-network-smoke--805773012", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1ed5feeeafe7448a8efb47ab975b0ead", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5ee850cb-50", "ovs_interfaceid": "5ee850cb-507f-4288-993f-a7892e9285c9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'device_type': 'disk', 'encryption_options': None, 'size': 0, 'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'guest_format': None, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'image_id': 'a32b3e07-16d8-46fd-9a7b-c242c432fcf9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 20 15:23:01 compute-1 nova_compute[225855]: 2026-01-20 15:23:01.822 225859 WARNING nova.virt.libvirt.driver [None req-1b898df7-ad12-4947-8ab8-99ba6eee88f5 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 15:23:01 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:23:01 compute-1 nova_compute[225855]: 2026-01-20 15:23:01.827 225859 DEBUG nova.virt.libvirt.host [None req-1b898df7-ad12-4947-8ab8-99ba6eee88f5 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 20 15:23:01 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:23:01 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:23:01.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:23:01 compute-1 nova_compute[225855]: 2026-01-20 15:23:01.828 225859 DEBUG nova.virt.libvirt.host [None req-1b898df7-ad12-4947-8ab8-99ba6eee88f5 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 20 15:23:01 compute-1 nova_compute[225855]: 2026-01-20 15:23:01.833 225859 DEBUG nova.virt.libvirt.host [None req-1b898df7-ad12-4947-8ab8-99ba6eee88f5 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 20 15:23:01 compute-1 nova_compute[225855]: 2026-01-20 15:23:01.834 225859 DEBUG nova.virt.libvirt.host [None req-1b898df7-ad12-4947-8ab8-99ba6eee88f5 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 20 15:23:01 compute-1 nova_compute[225855]: 2026-01-20 15:23:01.835 225859 DEBUG nova.virt.libvirt.driver [None req-1b898df7-ad12-4947-8ab8-99ba6eee88f5 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 20 15:23:01 compute-1 nova_compute[225855]: 2026-01-20 15:23:01.835 225859 DEBUG nova.virt.hardware [None req-1b898df7-ad12-4947-8ab8-99ba6eee88f5 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-20T14:21:55Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='522deaab-a741-4dbb-932d-d8b13a211c33',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 20 15:23:01 compute-1 nova_compute[225855]: 2026-01-20 15:23:01.836 225859 DEBUG nova.virt.hardware [None req-1b898df7-ad12-4947-8ab8-99ba6eee88f5 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 20 15:23:01 compute-1 nova_compute[225855]: 2026-01-20 15:23:01.836 225859 DEBUG nova.virt.hardware [None req-1b898df7-ad12-4947-8ab8-99ba6eee88f5 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 20 15:23:01 compute-1 nova_compute[225855]: 2026-01-20 15:23:01.836 225859 DEBUG nova.virt.hardware [None req-1b898df7-ad12-4947-8ab8-99ba6eee88f5 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 20 15:23:01 compute-1 nova_compute[225855]: 2026-01-20 15:23:01.836 225859 DEBUG nova.virt.hardware [None req-1b898df7-ad12-4947-8ab8-99ba6eee88f5 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 20 15:23:01 compute-1 nova_compute[225855]: 2026-01-20 15:23:01.836 225859 DEBUG nova.virt.hardware [None req-1b898df7-ad12-4947-8ab8-99ba6eee88f5 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 20 15:23:01 compute-1 nova_compute[225855]: 2026-01-20 15:23:01.837 225859 DEBUG nova.virt.hardware [None req-1b898df7-ad12-4947-8ab8-99ba6eee88f5 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 20 15:23:01 compute-1 nova_compute[225855]: 2026-01-20 15:23:01.837 225859 DEBUG nova.virt.hardware [None req-1b898df7-ad12-4947-8ab8-99ba6eee88f5 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 20 15:23:01 compute-1 nova_compute[225855]: 2026-01-20 15:23:01.837 225859 DEBUG nova.virt.hardware [None req-1b898df7-ad12-4947-8ab8-99ba6eee88f5 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 20 15:23:01 compute-1 nova_compute[225855]: 2026-01-20 15:23:01.837 225859 DEBUG nova.virt.hardware [None req-1b898df7-ad12-4947-8ab8-99ba6eee88f5 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 20 15:23:01 compute-1 nova_compute[225855]: 2026-01-20 15:23:01.838 225859 DEBUG nova.virt.hardware [None req-1b898df7-ad12-4947-8ab8-99ba6eee88f5 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 20 15:23:01 compute-1 nova_compute[225855]: 2026-01-20 15:23:01.840 225859 DEBUG oslo_concurrency.processutils [None req-1b898df7-ad12-4947-8ab8-99ba6eee88f5 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:23:02 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:23:02 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:23:02 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:23:02.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:23:02 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 15:23:02 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3095742531' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:23:02 compute-1 nova_compute[225855]: 2026-01-20 15:23:02.287 225859 DEBUG oslo_concurrency.processutils [None req-1b898df7-ad12-4947-8ab8-99ba6eee88f5 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:23:02 compute-1 nova_compute[225855]: 2026-01-20 15:23:02.309 225859 DEBUG nova.storage.rbd_utils [None req-1b898df7-ad12-4947-8ab8-99ba6eee88f5 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] rbd image 4c926c1a-d5cf-4865-aa57-66b439d115f8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:23:02 compute-1 nova_compute[225855]: 2026-01-20 15:23:02.313 225859 DEBUG oslo_concurrency.processutils [None req-1b898df7-ad12-4947-8ab8-99ba6eee88f5 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:23:02 compute-1 ceph-mon[81775]: pgmap v3025: 321 pgs: 321 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 20 15:23:02 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/3095742531' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:23:02 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 15:23:02 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1782031866' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:23:02 compute-1 nova_compute[225855]: 2026-01-20 15:23:02.757 225859 DEBUG oslo_concurrency.processutils [None req-1b898df7-ad12-4947-8ab8-99ba6eee88f5 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:23:02 compute-1 nova_compute[225855]: 2026-01-20 15:23:02.759 225859 DEBUG nova.virt.libvirt.vif [None req-1b898df7-ad12-4947-8ab8-99ba6eee88f5 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T15:22:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1485210936',display_name='tempest-TestNetworkAdvancedServerOps-server-1485210936',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1485210936',id=200,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFcP4UZM8O63AIoOVMDQxqy+5NyOff2+4sn+/fo/i9F+a5f/348af0mVz9/8dEzIolLAAsu+/bRAcsRarrSFS+xLm/g8/3nv4fTZhMf6io52heWTDMNl7eXefIdDoA+XVw==',key_name='tempest-TestNetworkAdvancedServerOps-577536853',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1ed5feeeafe7448a8efb47ab975b0ead',ramdisk_id='',reservation_id='r-y87mmd4a',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-175282664',owner_user_name='tempest-TestNetworkAdvancedServerOps-175282664-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T15:22:54Z,user_data=None,user_id='442a7a5cb8ea426a82be9762b262d171',uuid=4c926c1a-d5cf-4865-aa57-66b439d115f8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5ee850cb-507f-4288-993f-a7892e9285c9", "address": "fa:16:3e:cc:0a:92", "network": {"id": "6a8ba0d6-68f0-4a25-84ff-3685d5b259a6", "bridge": "br-int", "label": "tempest-network-smoke--805773012", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1ed5feeeafe7448a8efb47ab975b0ead", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5ee850cb-50", "ovs_interfaceid": "5ee850cb-507f-4288-993f-a7892e9285c9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 20 15:23:02 compute-1 nova_compute[225855]: 2026-01-20 15:23:02.759 225859 DEBUG nova.network.os_vif_util [None req-1b898df7-ad12-4947-8ab8-99ba6eee88f5 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Converting VIF {"id": "5ee850cb-507f-4288-993f-a7892e9285c9", "address": "fa:16:3e:cc:0a:92", "network": {"id": "6a8ba0d6-68f0-4a25-84ff-3685d5b259a6", "bridge": "br-int", "label": "tempest-network-smoke--805773012", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1ed5feeeafe7448a8efb47ab975b0ead", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5ee850cb-50", "ovs_interfaceid": "5ee850cb-507f-4288-993f-a7892e9285c9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 15:23:02 compute-1 nova_compute[225855]: 2026-01-20 15:23:02.760 225859 DEBUG nova.network.os_vif_util [None req-1b898df7-ad12-4947-8ab8-99ba6eee88f5 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cc:0a:92,bridge_name='br-int',has_traffic_filtering=True,id=5ee850cb-507f-4288-993f-a7892e9285c9,network=Network(6a8ba0d6-68f0-4a25-84ff-3685d5b259a6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5ee850cb-50') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 15:23:02 compute-1 nova_compute[225855]: 2026-01-20 15:23:02.761 225859 DEBUG nova.objects.instance [None req-1b898df7-ad12-4947-8ab8-99ba6eee88f5 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Lazy-loading 'pci_devices' on Instance uuid 4c926c1a-d5cf-4865-aa57-66b439d115f8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:23:02 compute-1 nova_compute[225855]: 2026-01-20 15:23:02.774 225859 DEBUG nova.virt.libvirt.driver [None req-1b898df7-ad12-4947-8ab8-99ba6eee88f5 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 4c926c1a-d5cf-4865-aa57-66b439d115f8] End _get_guest_xml xml=<domain type="kvm">
Jan 20 15:23:02 compute-1 nova_compute[225855]:   <uuid>4c926c1a-d5cf-4865-aa57-66b439d115f8</uuid>
Jan 20 15:23:02 compute-1 nova_compute[225855]:   <name>instance-000000c8</name>
Jan 20 15:23:02 compute-1 nova_compute[225855]:   <memory>131072</memory>
Jan 20 15:23:02 compute-1 nova_compute[225855]:   <vcpu>1</vcpu>
Jan 20 15:23:02 compute-1 nova_compute[225855]:   <metadata>
Jan 20 15:23:02 compute-1 nova_compute[225855]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 15:23:02 compute-1 nova_compute[225855]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 15:23:02 compute-1 nova_compute[225855]:       <nova:name>tempest-TestNetworkAdvancedServerOps-server-1485210936</nova:name>
Jan 20 15:23:02 compute-1 nova_compute[225855]:       <nova:creationTime>2026-01-20 15:23:01</nova:creationTime>
Jan 20 15:23:02 compute-1 nova_compute[225855]:       <nova:flavor name="m1.nano">
Jan 20 15:23:02 compute-1 nova_compute[225855]:         <nova:memory>128</nova:memory>
Jan 20 15:23:02 compute-1 nova_compute[225855]:         <nova:disk>1</nova:disk>
Jan 20 15:23:02 compute-1 nova_compute[225855]:         <nova:swap>0</nova:swap>
Jan 20 15:23:02 compute-1 nova_compute[225855]:         <nova:ephemeral>0</nova:ephemeral>
Jan 20 15:23:02 compute-1 nova_compute[225855]:         <nova:vcpus>1</nova:vcpus>
Jan 20 15:23:02 compute-1 nova_compute[225855]:       </nova:flavor>
Jan 20 15:23:02 compute-1 nova_compute[225855]:       <nova:owner>
Jan 20 15:23:02 compute-1 nova_compute[225855]:         <nova:user uuid="442a7a5cb8ea426a82be9762b262d171">tempest-TestNetworkAdvancedServerOps-175282664-project-member</nova:user>
Jan 20 15:23:02 compute-1 nova_compute[225855]:         <nova:project uuid="1ed5feeeafe7448a8efb47ab975b0ead">tempest-TestNetworkAdvancedServerOps-175282664</nova:project>
Jan 20 15:23:02 compute-1 nova_compute[225855]:       </nova:owner>
Jan 20 15:23:02 compute-1 nova_compute[225855]:       <nova:root type="image" uuid="a32b3e07-16d8-46fd-9a7b-c242c432fcf9"/>
Jan 20 15:23:02 compute-1 nova_compute[225855]:       <nova:ports>
Jan 20 15:23:02 compute-1 nova_compute[225855]:         <nova:port uuid="5ee850cb-507f-4288-993f-a7892e9285c9">
Jan 20 15:23:02 compute-1 nova_compute[225855]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Jan 20 15:23:02 compute-1 nova_compute[225855]:         </nova:port>
Jan 20 15:23:02 compute-1 nova_compute[225855]:       </nova:ports>
Jan 20 15:23:02 compute-1 nova_compute[225855]:     </nova:instance>
Jan 20 15:23:02 compute-1 nova_compute[225855]:   </metadata>
Jan 20 15:23:02 compute-1 nova_compute[225855]:   <sysinfo type="smbios">
Jan 20 15:23:02 compute-1 nova_compute[225855]:     <system>
Jan 20 15:23:02 compute-1 nova_compute[225855]:       <entry name="manufacturer">RDO</entry>
Jan 20 15:23:02 compute-1 nova_compute[225855]:       <entry name="product">OpenStack Compute</entry>
Jan 20 15:23:02 compute-1 nova_compute[225855]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 15:23:02 compute-1 nova_compute[225855]:       <entry name="serial">4c926c1a-d5cf-4865-aa57-66b439d115f8</entry>
Jan 20 15:23:02 compute-1 nova_compute[225855]:       <entry name="uuid">4c926c1a-d5cf-4865-aa57-66b439d115f8</entry>
Jan 20 15:23:02 compute-1 nova_compute[225855]:       <entry name="family">Virtual Machine</entry>
Jan 20 15:23:02 compute-1 nova_compute[225855]:     </system>
Jan 20 15:23:02 compute-1 nova_compute[225855]:   </sysinfo>
Jan 20 15:23:02 compute-1 nova_compute[225855]:   <os>
Jan 20 15:23:02 compute-1 nova_compute[225855]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 20 15:23:02 compute-1 nova_compute[225855]:     <boot dev="hd"/>
Jan 20 15:23:02 compute-1 nova_compute[225855]:     <smbios mode="sysinfo"/>
Jan 20 15:23:02 compute-1 nova_compute[225855]:   </os>
Jan 20 15:23:02 compute-1 nova_compute[225855]:   <features>
Jan 20 15:23:02 compute-1 nova_compute[225855]:     <acpi/>
Jan 20 15:23:02 compute-1 nova_compute[225855]:     <apic/>
Jan 20 15:23:02 compute-1 nova_compute[225855]:     <vmcoreinfo/>
Jan 20 15:23:02 compute-1 nova_compute[225855]:   </features>
Jan 20 15:23:02 compute-1 nova_compute[225855]:   <clock offset="utc">
Jan 20 15:23:02 compute-1 nova_compute[225855]:     <timer name="pit" tickpolicy="delay"/>
Jan 20 15:23:02 compute-1 nova_compute[225855]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 20 15:23:02 compute-1 nova_compute[225855]:     <timer name="hpet" present="no"/>
Jan 20 15:23:02 compute-1 nova_compute[225855]:   </clock>
Jan 20 15:23:02 compute-1 nova_compute[225855]:   <cpu mode="custom" match="exact">
Jan 20 15:23:02 compute-1 nova_compute[225855]:     <model>Nehalem</model>
Jan 20 15:23:02 compute-1 nova_compute[225855]:     <topology sockets="1" cores="1" threads="1"/>
Jan 20 15:23:02 compute-1 nova_compute[225855]:   </cpu>
Jan 20 15:23:02 compute-1 nova_compute[225855]:   <devices>
Jan 20 15:23:02 compute-1 nova_compute[225855]:     <disk type="network" device="disk">
Jan 20 15:23:02 compute-1 nova_compute[225855]:       <driver type="raw" cache="none"/>
Jan 20 15:23:02 compute-1 nova_compute[225855]:       <source protocol="rbd" name="vms/4c926c1a-d5cf-4865-aa57-66b439d115f8_disk">
Jan 20 15:23:02 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 15:23:02 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 15:23:02 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 15:23:02 compute-1 nova_compute[225855]:       </source>
Jan 20 15:23:02 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 15:23:02 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 15:23:02 compute-1 nova_compute[225855]:       </auth>
Jan 20 15:23:02 compute-1 nova_compute[225855]:       <target dev="vda" bus="virtio"/>
Jan 20 15:23:02 compute-1 nova_compute[225855]:     </disk>
Jan 20 15:23:02 compute-1 nova_compute[225855]:     <disk type="network" device="cdrom">
Jan 20 15:23:02 compute-1 nova_compute[225855]:       <driver type="raw" cache="none"/>
Jan 20 15:23:02 compute-1 nova_compute[225855]:       <source protocol="rbd" name="vms/4c926c1a-d5cf-4865-aa57-66b439d115f8_disk.config">
Jan 20 15:23:02 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 15:23:02 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 15:23:02 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 15:23:02 compute-1 nova_compute[225855]:       </source>
Jan 20 15:23:02 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 15:23:02 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 15:23:02 compute-1 nova_compute[225855]:       </auth>
Jan 20 15:23:02 compute-1 nova_compute[225855]:       <target dev="sda" bus="sata"/>
Jan 20 15:23:02 compute-1 nova_compute[225855]:     </disk>
Jan 20 15:23:02 compute-1 nova_compute[225855]:     <interface type="ethernet">
Jan 20 15:23:02 compute-1 nova_compute[225855]:       <mac address="fa:16:3e:cc:0a:92"/>
Jan 20 15:23:02 compute-1 nova_compute[225855]:       <model type="virtio"/>
Jan 20 15:23:02 compute-1 nova_compute[225855]:       <driver name="vhost" rx_queue_size="512"/>
Jan 20 15:23:02 compute-1 nova_compute[225855]:       <mtu size="1442"/>
Jan 20 15:23:02 compute-1 nova_compute[225855]:       <target dev="tap5ee850cb-50"/>
Jan 20 15:23:02 compute-1 nova_compute[225855]:     </interface>
Jan 20 15:23:02 compute-1 nova_compute[225855]:     <serial type="pty">
Jan 20 15:23:02 compute-1 nova_compute[225855]:       <log file="/var/lib/nova/instances/4c926c1a-d5cf-4865-aa57-66b439d115f8/console.log" append="off"/>
Jan 20 15:23:02 compute-1 nova_compute[225855]:     </serial>
Jan 20 15:23:02 compute-1 nova_compute[225855]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 20 15:23:02 compute-1 nova_compute[225855]:     <video>
Jan 20 15:23:02 compute-1 nova_compute[225855]:       <model type="virtio"/>
Jan 20 15:23:02 compute-1 nova_compute[225855]:     </video>
Jan 20 15:23:02 compute-1 nova_compute[225855]:     <input type="tablet" bus="usb"/>
Jan 20 15:23:02 compute-1 nova_compute[225855]:     <rng model="virtio">
Jan 20 15:23:02 compute-1 nova_compute[225855]:       <backend model="random">/dev/urandom</backend>
Jan 20 15:23:02 compute-1 nova_compute[225855]:     </rng>
Jan 20 15:23:02 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root"/>
Jan 20 15:23:02 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:23:02 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:23:02 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:23:02 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:23:02 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:23:02 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:23:02 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:23:02 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:23:02 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:23:02 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:23:02 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:23:02 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:23:02 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:23:02 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:23:02 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:23:02 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:23:02 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:23:02 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:23:02 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:23:02 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:23:02 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:23:02 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:23:02 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:23:02 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:23:02 compute-1 nova_compute[225855]:     <controller type="usb" index="0"/>
Jan 20 15:23:02 compute-1 nova_compute[225855]:     <memballoon model="virtio">
Jan 20 15:23:02 compute-1 nova_compute[225855]:       <stats period="10"/>
Jan 20 15:23:02 compute-1 nova_compute[225855]:     </memballoon>
Jan 20 15:23:02 compute-1 nova_compute[225855]:   </devices>
Jan 20 15:23:02 compute-1 nova_compute[225855]: </domain>
Jan 20 15:23:02 compute-1 nova_compute[225855]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 20 15:23:02 compute-1 nova_compute[225855]: 2026-01-20 15:23:02.776 225859 DEBUG nova.compute.manager [None req-1b898df7-ad12-4947-8ab8-99ba6eee88f5 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 4c926c1a-d5cf-4865-aa57-66b439d115f8] Preparing to wait for external event network-vif-plugged-5ee850cb-507f-4288-993f-a7892e9285c9 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 20 15:23:02 compute-1 nova_compute[225855]: 2026-01-20 15:23:02.776 225859 DEBUG oslo_concurrency.lockutils [None req-1b898df7-ad12-4947-8ab8-99ba6eee88f5 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Acquiring lock "4c926c1a-d5cf-4865-aa57-66b439d115f8-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:23:02 compute-1 nova_compute[225855]: 2026-01-20 15:23:02.777 225859 DEBUG oslo_concurrency.lockutils [None req-1b898df7-ad12-4947-8ab8-99ba6eee88f5 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Lock "4c926c1a-d5cf-4865-aa57-66b439d115f8-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:23:02 compute-1 nova_compute[225855]: 2026-01-20 15:23:02.777 225859 DEBUG oslo_concurrency.lockutils [None req-1b898df7-ad12-4947-8ab8-99ba6eee88f5 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Lock "4c926c1a-d5cf-4865-aa57-66b439d115f8-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:23:02 compute-1 nova_compute[225855]: 2026-01-20 15:23:02.778 225859 DEBUG nova.virt.libvirt.vif [None req-1b898df7-ad12-4947-8ab8-99ba6eee88f5 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T15:22:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1485210936',display_name='tempest-TestNetworkAdvancedServerOps-server-1485210936',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1485210936',id=200,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFcP4UZM8O63AIoOVMDQxqy+5NyOff2+4sn+/fo/i9F+a5f/348af0mVz9/8dEzIolLAAsu+/bRAcsRarrSFS+xLm/g8/3nv4fTZhMf6io52heWTDMNl7eXefIdDoA+XVw==',key_name='tempest-TestNetworkAdvancedServerOps-577536853',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1ed5feeeafe7448a8efb47ab975b0ead',ramdisk_id='',reservation_id='r-y87mmd4a',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-175282664',owner_user_name='tempest-TestNetworkAdvancedServerOps-175282664-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T15:22:54Z,user_data=None,user_id='442a7a5cb8ea426a82be9762b262d171',uuid=4c926c1a-d5cf-4865-aa57-66b439d115f8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5ee850cb-507f-4288-993f-a7892e9285c9", "address": "fa:16:3e:cc:0a:92", "network": {"id": "6a8ba0d6-68f0-4a25-84ff-3685d5b259a6", "bridge": "br-int", "label": "tempest-network-smoke--805773012", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1ed5feeeafe7448a8efb47ab975b0ead", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5ee850cb-50", "ovs_interfaceid": "5ee850cb-507f-4288-993f-a7892e9285c9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 20 15:23:02 compute-1 nova_compute[225855]: 2026-01-20 15:23:02.778 225859 DEBUG nova.network.os_vif_util [None req-1b898df7-ad12-4947-8ab8-99ba6eee88f5 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Converting VIF {"id": "5ee850cb-507f-4288-993f-a7892e9285c9", "address": "fa:16:3e:cc:0a:92", "network": {"id": "6a8ba0d6-68f0-4a25-84ff-3685d5b259a6", "bridge": "br-int", "label": "tempest-network-smoke--805773012", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1ed5feeeafe7448a8efb47ab975b0ead", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5ee850cb-50", "ovs_interfaceid": "5ee850cb-507f-4288-993f-a7892e9285c9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 15:23:02 compute-1 nova_compute[225855]: 2026-01-20 15:23:02.779 225859 DEBUG nova.network.os_vif_util [None req-1b898df7-ad12-4947-8ab8-99ba6eee88f5 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cc:0a:92,bridge_name='br-int',has_traffic_filtering=True,id=5ee850cb-507f-4288-993f-a7892e9285c9,network=Network(6a8ba0d6-68f0-4a25-84ff-3685d5b259a6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5ee850cb-50') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 15:23:02 compute-1 nova_compute[225855]: 2026-01-20 15:23:02.779 225859 DEBUG os_vif [None req-1b898df7-ad12-4947-8ab8-99ba6eee88f5 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:cc:0a:92,bridge_name='br-int',has_traffic_filtering=True,id=5ee850cb-507f-4288-993f-a7892e9285c9,network=Network(6a8ba0d6-68f0-4a25-84ff-3685d5b259a6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5ee850cb-50') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 20 15:23:02 compute-1 nova_compute[225855]: 2026-01-20 15:23:02.780 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:23:02 compute-1 nova_compute[225855]: 2026-01-20 15:23:02.780 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:23:02 compute-1 nova_compute[225855]: 2026-01-20 15:23:02.781 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 15:23:02 compute-1 nova_compute[225855]: 2026-01-20 15:23:02.783 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:23:02 compute-1 nova_compute[225855]: 2026-01-20 15:23:02.783 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5ee850cb-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:23:02 compute-1 nova_compute[225855]: 2026-01-20 15:23:02.784 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap5ee850cb-50, col_values=(('external_ids', {'iface-id': '5ee850cb-507f-4288-993f-a7892e9285c9', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:cc:0a:92', 'vm-uuid': '4c926c1a-d5cf-4865-aa57-66b439d115f8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:23:02 compute-1 nova_compute[225855]: 2026-01-20 15:23:02.785 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:23:02 compute-1 NetworkManager[49104]: <info>  [1768922582.7871] manager: (tap5ee850cb-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/372)
Jan 20 15:23:02 compute-1 nova_compute[225855]: 2026-01-20 15:23:02.792 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:23:02 compute-1 nova_compute[225855]: 2026-01-20 15:23:02.794 225859 INFO os_vif [None req-1b898df7-ad12-4947-8ab8-99ba6eee88f5 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:cc:0a:92,bridge_name='br-int',has_traffic_filtering=True,id=5ee850cb-507f-4288-993f-a7892e9285c9,network=Network(6a8ba0d6-68f0-4a25-84ff-3685d5b259a6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5ee850cb-50')
Jan 20 15:23:02 compute-1 nova_compute[225855]: 2026-01-20 15:23:02.843 225859 DEBUG nova.virt.libvirt.driver [None req-1b898df7-ad12-4947-8ab8-99ba6eee88f5 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 15:23:02 compute-1 nova_compute[225855]: 2026-01-20 15:23:02.844 225859 DEBUG nova.virt.libvirt.driver [None req-1b898df7-ad12-4947-8ab8-99ba6eee88f5 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 15:23:02 compute-1 nova_compute[225855]: 2026-01-20 15:23:02.844 225859 DEBUG nova.virt.libvirt.driver [None req-1b898df7-ad12-4947-8ab8-99ba6eee88f5 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] No VIF found with MAC fa:16:3e:cc:0a:92, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 20 15:23:02 compute-1 nova_compute[225855]: 2026-01-20 15:23:02.845 225859 INFO nova.virt.libvirt.driver [None req-1b898df7-ad12-4947-8ab8-99ba6eee88f5 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 4c926c1a-d5cf-4865-aa57-66b439d115f8] Using config drive
Jan 20 15:23:02 compute-1 nova_compute[225855]: 2026-01-20 15:23:02.868 225859 DEBUG nova.storage.rbd_utils [None req-1b898df7-ad12-4947-8ab8-99ba6eee88f5 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] rbd image 4c926c1a-d5cf-4865-aa57-66b439d115f8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:23:03 compute-1 nova_compute[225855]: 2026-01-20 15:23:03.171 225859 DEBUG nova.network.neutron [req-8316f45a-60ef-47bf-8a7c-ae73d249895a req-282130c6-2182-480d-ac22-06c9330ecd05 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 4c926c1a-d5cf-4865-aa57-66b439d115f8] Updated VIF entry in instance network info cache for port 5ee850cb-507f-4288-993f-a7892e9285c9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 15:23:03 compute-1 nova_compute[225855]: 2026-01-20 15:23:03.171 225859 DEBUG nova.network.neutron [req-8316f45a-60ef-47bf-8a7c-ae73d249895a req-282130c6-2182-480d-ac22-06c9330ecd05 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 4c926c1a-d5cf-4865-aa57-66b439d115f8] Updating instance_info_cache with network_info: [{"id": "5ee850cb-507f-4288-993f-a7892e9285c9", "address": "fa:16:3e:cc:0a:92", "network": {"id": "6a8ba0d6-68f0-4a25-84ff-3685d5b259a6", "bridge": "br-int", "label": "tempest-network-smoke--805773012", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1ed5feeeafe7448a8efb47ab975b0ead", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5ee850cb-50", "ovs_interfaceid": "5ee850cb-507f-4288-993f-a7892e9285c9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:23:03 compute-1 nova_compute[225855]: 2026-01-20 15:23:03.187 225859 DEBUG oslo_concurrency.lockutils [req-8316f45a-60ef-47bf-8a7c-ae73d249895a req-282130c6-2182-480d-ac22-06c9330ecd05 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-4c926c1a-d5cf-4865-aa57-66b439d115f8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:23:03 compute-1 nova_compute[225855]: 2026-01-20 15:23:03.282 225859 INFO nova.virt.libvirt.driver [None req-1b898df7-ad12-4947-8ab8-99ba6eee88f5 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 4c926c1a-d5cf-4865-aa57-66b439d115f8] Creating config drive at /var/lib/nova/instances/4c926c1a-d5cf-4865-aa57-66b439d115f8/disk.config
Jan 20 15:23:03 compute-1 nova_compute[225855]: 2026-01-20 15:23:03.287 225859 DEBUG oslo_concurrency.processutils [None req-1b898df7-ad12-4947-8ab8-99ba6eee88f5 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4c926c1a-d5cf-4865-aa57-66b439d115f8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpx91emwzl execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:23:03 compute-1 nova_compute[225855]: 2026-01-20 15:23:03.324 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:23:03 compute-1 nova_compute[225855]: 2026-01-20 15:23:03.418 225859 DEBUG oslo_concurrency.processutils [None req-1b898df7-ad12-4947-8ab8-99ba6eee88f5 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4c926c1a-d5cf-4865-aa57-66b439d115f8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpx91emwzl" returned: 0 in 0.131s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:23:03 compute-1 nova_compute[225855]: 2026-01-20 15:23:03.443 225859 DEBUG nova.storage.rbd_utils [None req-1b898df7-ad12-4947-8ab8-99ba6eee88f5 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] rbd image 4c926c1a-d5cf-4865-aa57-66b439d115f8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:23:03 compute-1 nova_compute[225855]: 2026-01-20 15:23:03.447 225859 DEBUG oslo_concurrency.processutils [None req-1b898df7-ad12-4947-8ab8-99ba6eee88f5 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/4c926c1a-d5cf-4865-aa57-66b439d115f8/disk.config 4c926c1a-d5cf-4865-aa57-66b439d115f8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:23:03 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:23:03 compute-1 nova_compute[225855]: 2026-01-20 15:23:03.715 225859 DEBUG oslo_concurrency.processutils [None req-1b898df7-ad12-4947-8ab8-99ba6eee88f5 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/4c926c1a-d5cf-4865-aa57-66b439d115f8/disk.config 4c926c1a-d5cf-4865-aa57-66b439d115f8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.268s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:23:03 compute-1 nova_compute[225855]: 2026-01-20 15:23:03.716 225859 INFO nova.virt.libvirt.driver [None req-1b898df7-ad12-4947-8ab8-99ba6eee88f5 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 4c926c1a-d5cf-4865-aa57-66b439d115f8] Deleting local config drive /var/lib/nova/instances/4c926c1a-d5cf-4865-aa57-66b439d115f8/disk.config because it was imported into RBD.
Jan 20 15:23:03 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/1782031866' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:23:03 compute-1 kernel: tap5ee850cb-50: entered promiscuous mode
Jan 20 15:23:03 compute-1 ovn_controller[130490]: 2026-01-20T15:23:03Z|00892|binding|INFO|Claiming lport 5ee850cb-507f-4288-993f-a7892e9285c9 for this chassis.
Jan 20 15:23:03 compute-1 ovn_controller[130490]: 2026-01-20T15:23:03Z|00893|binding|INFO|5ee850cb-507f-4288-993f-a7892e9285c9: Claiming fa:16:3e:cc:0a:92 10.100.0.12
Jan 20 15:23:03 compute-1 NetworkManager[49104]: <info>  [1768922583.7776] manager: (tap5ee850cb-50): new Tun device (/org/freedesktop/NetworkManager/Devices/373)
Jan 20 15:23:03 compute-1 nova_compute[225855]: 2026-01-20 15:23:03.777 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:23:03 compute-1 nova_compute[225855]: 2026-01-20 15:23:03.780 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:23:03 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:23:03.792 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cc:0a:92 10.100.0.12'], port_security=['fa:16:3e:cc:0a:92 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '4c926c1a-d5cf-4865-aa57-66b439d115f8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6a8ba0d6-68f0-4a25-84ff-3685d5b259a6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1ed5feeeafe7448a8efb47ab975b0ead', 'neutron:revision_number': '2', 'neutron:security_group_ids': '6f91a697-00ef-4c2f-a4cd-47aa600b459f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a867ed68-782d-4d05-8077-be0278c87405, chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], logical_port=5ee850cb-507f-4288-993f-a7892e9285c9) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:23:03 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:23:03.793 140354 INFO neutron.agent.ovn.metadata.agent [-] Port 5ee850cb-507f-4288-993f-a7892e9285c9 in datapath 6a8ba0d6-68f0-4a25-84ff-3685d5b259a6 bound to our chassis
Jan 20 15:23:03 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:23:03.794 140354 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6a8ba0d6-68f0-4a25-84ff-3685d5b259a6
Jan 20 15:23:03 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:23:03.806 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[d9027045-a733-44ec-a371-45ad6863f37f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:23:03 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:23:03.807 140354 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap6a8ba0d6-61 in ovnmeta-6a8ba0d6-68f0-4a25-84ff-3685d5b259a6 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 20 15:23:03 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:23:03.808 229707 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap6a8ba0d6-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 20 15:23:03 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:23:03.809 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[daf755e3-95c8-49d2-bd8c-58b2475437b3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:23:03 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:23:03.809 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[bc526f16-e043-4112-989e-645e2a1ea0e2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:23:03 compute-1 systemd-udevd[311913]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 15:23:03 compute-1 systemd-machined[194361]: New machine qemu-105-instance-000000c8.
Jan 20 15:23:03 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:23:03.820 140466 DEBUG oslo.privsep.daemon [-] privsep: reply[79499408-e694-4e66-b7bf-3c77bc7095df]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:23:03 compute-1 NetworkManager[49104]: <info>  [1768922583.8320] device (tap5ee850cb-50): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 20 15:23:03 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:23:03 compute-1 NetworkManager[49104]: <info>  [1768922583.8336] device (tap5ee850cb-50): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 20 15:23:03 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:23:03 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:23:03.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:23:03 compute-1 systemd[1]: Started Virtual Machine qemu-105-instance-000000c8.
Jan 20 15:23:03 compute-1 nova_compute[225855]: 2026-01-20 15:23:03.841 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:23:03 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:23:03.846 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[c8bfa168-75d0-43c7-8b8c-15b2e7346823]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:23:03 compute-1 ovn_controller[130490]: 2026-01-20T15:23:03Z|00894|binding|INFO|Setting lport 5ee850cb-507f-4288-993f-a7892e9285c9 ovn-installed in OVS
Jan 20 15:23:03 compute-1 ovn_controller[130490]: 2026-01-20T15:23:03Z|00895|binding|INFO|Setting lport 5ee850cb-507f-4288-993f-a7892e9285c9 up in Southbound
Jan 20 15:23:03 compute-1 nova_compute[225855]: 2026-01-20 15:23:03.848 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:23:03 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:23:03.879 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[4e08fcae-0b4a-47c7-b555-0f4e9a1bdff9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:23:03 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:23:03.884 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[6cb23797-a09b-42af-a8c3-b45617b2d655]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:23:03 compute-1 systemd-udevd[311917]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 15:23:03 compute-1 NetworkManager[49104]: <info>  [1768922583.8860] manager: (tap6a8ba0d6-60): new Veth device (/org/freedesktop/NetworkManager/Devices/374)
Jan 20 15:23:03 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:23:03.916 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[f6a73669-44df-42e3-9bf1-56e9ffce9c48]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:23:03 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:23:03.919 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[960157e7-012c-48c2-8ecb-23c3bfa2f63f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:23:03 compute-1 NetworkManager[49104]: <info>  [1768922583.9420] device (tap6a8ba0d6-60): carrier: link connected
Jan 20 15:23:03 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:23:03.946 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[7e09d2ea-82f5-45a7-9b05-3220fa398e8c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:23:03 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:23:03.965 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[0ab25d17-fe1d-439f-9cab-2d3cdf52f383]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6a8ba0d6-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8f:82:88'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 254], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 762286, 'reachable_time': 35060, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 311945, 'error': None, 'target': 'ovnmeta-6a8ba0d6-68f0-4a25-84ff-3685d5b259a6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:23:03 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:23:03.983 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[5542ba19-21e8-4b60-a03f-2316725df1ad]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe8f:8288'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 762286, 'tstamp': 762286}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 311946, 'error': None, 'target': 'ovnmeta-6a8ba0d6-68f0-4a25-84ff-3685d5b259a6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:23:03 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:23:03.998 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[e051d4e1-11a7-49ae-860b-35fe647a7837]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6a8ba0d6-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8f:82:88'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 254], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 762286, 'reachable_time': 35060, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 311947, 'error': None, 'target': 'ovnmeta-6a8ba0d6-68f0-4a25-84ff-3685d5b259a6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:23:04 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:23:04.033 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[5fc855b3-9296-4fa3-83e8-a9a4a009af0b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:23:04 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:23:04.118 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[79ab5e1c-2d4d-47ff-9bbd-b10265ff9e27]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:23:04 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:23:04.120 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6a8ba0d6-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:23:04 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:23:04.120 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 15:23:04 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:23:04.121 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6a8ba0d6-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:23:04 compute-1 NetworkManager[49104]: <info>  [1768922584.1235] manager: (tap6a8ba0d6-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/375)
Jan 20 15:23:04 compute-1 kernel: tap6a8ba0d6-60: entered promiscuous mode
Jan 20 15:23:04 compute-1 nova_compute[225855]: 2026-01-20 15:23:04.123 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:23:04 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:23:04.125 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6a8ba0d6-60, col_values=(('external_ids', {'iface-id': '37682451-9139-425b-b6d7-1ea83a2306c3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:23:04 compute-1 nova_compute[225855]: 2026-01-20 15:23:04.127 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:23:04 compute-1 ovn_controller[130490]: 2026-01-20T15:23:04Z|00896|binding|INFO|Releasing lport 37682451-9139-425b-b6d7-1ea83a2306c3 from this chassis (sb_readonly=1)
Jan 20 15:23:04 compute-1 nova_compute[225855]: 2026-01-20 15:23:04.141 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:23:04 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:23:04.142 140354 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/6a8ba0d6-68f0-4a25-84ff-3685d5b259a6.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/6a8ba0d6-68f0-4a25-84ff-3685d5b259a6.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 20 15:23:04 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:23:04.143 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[b03ff4ea-b033-444b-9579-764f9a6997eb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:23:04 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:23:04.144 140354 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 20 15:23:04 compute-1 ovn_metadata_agent[140349]: global
Jan 20 15:23:04 compute-1 ovn_metadata_agent[140349]:     log         /dev/log local0 debug
Jan 20 15:23:04 compute-1 ovn_metadata_agent[140349]:     log-tag     haproxy-metadata-proxy-6a8ba0d6-68f0-4a25-84ff-3685d5b259a6
Jan 20 15:23:04 compute-1 ovn_metadata_agent[140349]:     user        root
Jan 20 15:23:04 compute-1 ovn_metadata_agent[140349]:     group       root
Jan 20 15:23:04 compute-1 ovn_metadata_agent[140349]:     maxconn     1024
Jan 20 15:23:04 compute-1 ovn_metadata_agent[140349]:     pidfile     /var/lib/neutron/external/pids/6a8ba0d6-68f0-4a25-84ff-3685d5b259a6.pid.haproxy
Jan 20 15:23:04 compute-1 ovn_metadata_agent[140349]:     daemon
Jan 20 15:23:04 compute-1 ovn_metadata_agent[140349]: 
Jan 20 15:23:04 compute-1 ovn_metadata_agent[140349]: defaults
Jan 20 15:23:04 compute-1 ovn_metadata_agent[140349]:     log global
Jan 20 15:23:04 compute-1 ovn_metadata_agent[140349]:     mode http
Jan 20 15:23:04 compute-1 ovn_metadata_agent[140349]:     option httplog
Jan 20 15:23:04 compute-1 ovn_metadata_agent[140349]:     option dontlognull
Jan 20 15:23:04 compute-1 ovn_metadata_agent[140349]:     option http-server-close
Jan 20 15:23:04 compute-1 ovn_metadata_agent[140349]:     option forwardfor
Jan 20 15:23:04 compute-1 ovn_metadata_agent[140349]:     retries                 3
Jan 20 15:23:04 compute-1 ovn_metadata_agent[140349]:     timeout http-request    30s
Jan 20 15:23:04 compute-1 ovn_metadata_agent[140349]:     timeout connect         30s
Jan 20 15:23:04 compute-1 ovn_metadata_agent[140349]:     timeout client          32s
Jan 20 15:23:04 compute-1 ovn_metadata_agent[140349]:     timeout server          32s
Jan 20 15:23:04 compute-1 ovn_metadata_agent[140349]:     timeout http-keep-alive 30s
Jan 20 15:23:04 compute-1 ovn_metadata_agent[140349]: 
Jan 20 15:23:04 compute-1 ovn_metadata_agent[140349]: 
Jan 20 15:23:04 compute-1 ovn_metadata_agent[140349]: listen listener
Jan 20 15:23:04 compute-1 ovn_metadata_agent[140349]:     bind 169.254.169.254:80
Jan 20 15:23:04 compute-1 ovn_metadata_agent[140349]:     server metadata /var/lib/neutron/metadata_proxy
Jan 20 15:23:04 compute-1 ovn_metadata_agent[140349]:     http-request add-header X-OVN-Network-ID 6a8ba0d6-68f0-4a25-84ff-3685d5b259a6
Jan 20 15:23:04 compute-1 ovn_metadata_agent[140349]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 20 15:23:04 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:23:04.144 140354 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-6a8ba0d6-68f0-4a25-84ff-3685d5b259a6', 'env', 'PROCESS_TAG=haproxy-6a8ba0d6-68f0-4a25-84ff-3685d5b259a6', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/6a8ba0d6-68f0-4a25-84ff-3685d5b259a6.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 20 15:23:04 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:23:04 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:23:04 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:23:04.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:23:04 compute-1 nova_compute[225855]: 2026-01-20 15:23:04.373 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768922584.373234, 4c926c1a-d5cf-4865-aa57-66b439d115f8 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:23:04 compute-1 nova_compute[225855]: 2026-01-20 15:23:04.374 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 4c926c1a-d5cf-4865-aa57-66b439d115f8] VM Started (Lifecycle Event)
Jan 20 15:23:04 compute-1 podman[312021]: 2026-01-20 15:23:04.508595777 +0000 UTC m=+0.045015026 container create 3d2e532eac9473a89ae3cd09c7dc89ba78b5562e33899e920a0ad5ee290fae30 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6a8ba0d6-68f0-4a25-84ff-3685d5b259a6, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Jan 20 15:23:04 compute-1 systemd[1]: Started libpod-conmon-3d2e532eac9473a89ae3cd09c7dc89ba78b5562e33899e920a0ad5ee290fae30.scope.
Jan 20 15:23:04 compute-1 systemd[1]: Started libcrun container.
Jan 20 15:23:04 compute-1 podman[312021]: 2026-01-20 15:23:04.485333123 +0000 UTC m=+0.021752402 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 20 15:23:04 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba41683a800cd9238c1b5a90bf229c56f6a98cdf524e45addc7193533096210c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 20 15:23:04 compute-1 podman[312021]: 2026-01-20 15:23:04.595376645 +0000 UTC m=+0.131795914 container init 3d2e532eac9473a89ae3cd09c7dc89ba78b5562e33899e920a0ad5ee290fae30 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6a8ba0d6-68f0-4a25-84ff-3685d5b259a6, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 20 15:23:04 compute-1 podman[312021]: 2026-01-20 15:23:04.60081542 +0000 UTC m=+0.137234669 container start 3d2e532eac9473a89ae3cd09c7dc89ba78b5562e33899e920a0ad5ee290fae30 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6a8ba0d6-68f0-4a25-84ff-3685d5b259a6, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 20 15:23:04 compute-1 neutron-haproxy-ovnmeta-6a8ba0d6-68f0-4a25-84ff-3685d5b259a6[312037]: [NOTICE]   (312043) : New worker (312066) forked
Jan 20 15:23:04 compute-1 neutron-haproxy-ovnmeta-6a8ba0d6-68f0-4a25-84ff-3685d5b259a6[312037]: [NOTICE]   (312043) : Loading success.
Jan 20 15:23:04 compute-1 sudo[312041]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:23:04 compute-1 sudo[312041]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:23:04 compute-1 sudo[312041]: pam_unix(sudo:session): session closed for user root
Jan 20 15:23:04 compute-1 sudo[312077]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:23:04 compute-1 sudo[312077]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:23:04 compute-1 sudo[312077]: pam_unix(sudo:session): session closed for user root
Jan 20 15:23:04 compute-1 ceph-mon[81775]: pgmap v3026: 321 pgs: 321 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 20 15:23:05 compute-1 nova_compute[225855]: 2026-01-20 15:23:05.080 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 4c926c1a-d5cf-4865-aa57-66b439d115f8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:23:05 compute-1 nova_compute[225855]: 2026-01-20 15:23:05.085 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768922584.373421, 4c926c1a-d5cf-4865-aa57-66b439d115f8 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:23:05 compute-1 nova_compute[225855]: 2026-01-20 15:23:05.085 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 4c926c1a-d5cf-4865-aa57-66b439d115f8] VM Paused (Lifecycle Event)
Jan 20 15:23:05 compute-1 nova_compute[225855]: 2026-01-20 15:23:05.681 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 4c926c1a-d5cf-4865-aa57-66b439d115f8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:23:05 compute-1 nova_compute[225855]: 2026-01-20 15:23:05.684 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 4c926c1a-d5cf-4865-aa57-66b439d115f8] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 15:23:05 compute-1 nova_compute[225855]: 2026-01-20 15:23:05.704 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 4c926c1a-d5cf-4865-aa57-66b439d115f8] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 15:23:05 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:23:05 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:23:05 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:23:05.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:23:05 compute-1 ceph-mon[81775]: pgmap v3027: 321 pgs: 321 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 20 15:23:05 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/147687748' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:23:06 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:23:06 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:23:06 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:23:06.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:23:06 compute-1 nova_compute[225855]: 2026-01-20 15:23:06.270 225859 DEBUG nova.compute.manager [req-55862ba5-8d88-4a02-aac0-1afc73eb884f req-659bcf91-6e7c-4eda-8b02-15b7c8c5f50e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 4c926c1a-d5cf-4865-aa57-66b439d115f8] Received event network-vif-plugged-5ee850cb-507f-4288-993f-a7892e9285c9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:23:06 compute-1 nova_compute[225855]: 2026-01-20 15:23:06.270 225859 DEBUG oslo_concurrency.lockutils [req-55862ba5-8d88-4a02-aac0-1afc73eb884f req-659bcf91-6e7c-4eda-8b02-15b7c8c5f50e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "4c926c1a-d5cf-4865-aa57-66b439d115f8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:23:06 compute-1 nova_compute[225855]: 2026-01-20 15:23:06.270 225859 DEBUG oslo_concurrency.lockutils [req-55862ba5-8d88-4a02-aac0-1afc73eb884f req-659bcf91-6e7c-4eda-8b02-15b7c8c5f50e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "4c926c1a-d5cf-4865-aa57-66b439d115f8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:23:06 compute-1 nova_compute[225855]: 2026-01-20 15:23:06.270 225859 DEBUG oslo_concurrency.lockutils [req-55862ba5-8d88-4a02-aac0-1afc73eb884f req-659bcf91-6e7c-4eda-8b02-15b7c8c5f50e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "4c926c1a-d5cf-4865-aa57-66b439d115f8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:23:06 compute-1 nova_compute[225855]: 2026-01-20 15:23:06.270 225859 DEBUG nova.compute.manager [req-55862ba5-8d88-4a02-aac0-1afc73eb884f req-659bcf91-6e7c-4eda-8b02-15b7c8c5f50e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 4c926c1a-d5cf-4865-aa57-66b439d115f8] Processing event network-vif-plugged-5ee850cb-507f-4288-993f-a7892e9285c9 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 20 15:23:06 compute-1 nova_compute[225855]: 2026-01-20 15:23:06.271 225859 DEBUG nova.compute.manager [None req-1b898df7-ad12-4947-8ab8-99ba6eee88f5 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 4c926c1a-d5cf-4865-aa57-66b439d115f8] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 20 15:23:06 compute-1 nova_compute[225855]: 2026-01-20 15:23:06.274 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768922586.2745192, 4c926c1a-d5cf-4865-aa57-66b439d115f8 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:23:06 compute-1 nova_compute[225855]: 2026-01-20 15:23:06.275 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 4c926c1a-d5cf-4865-aa57-66b439d115f8] VM Resumed (Lifecycle Event)
Jan 20 15:23:06 compute-1 nova_compute[225855]: 2026-01-20 15:23:06.277 225859 DEBUG nova.virt.libvirt.driver [None req-1b898df7-ad12-4947-8ab8-99ba6eee88f5 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 4c926c1a-d5cf-4865-aa57-66b439d115f8] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 20 15:23:06 compute-1 nova_compute[225855]: 2026-01-20 15:23:06.280 225859 INFO nova.virt.libvirt.driver [-] [instance: 4c926c1a-d5cf-4865-aa57-66b439d115f8] Instance spawned successfully.
Jan 20 15:23:06 compute-1 nova_compute[225855]: 2026-01-20 15:23:06.280 225859 DEBUG nova.virt.libvirt.driver [None req-1b898df7-ad12-4947-8ab8-99ba6eee88f5 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 4c926c1a-d5cf-4865-aa57-66b439d115f8] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 20 15:23:06 compute-1 nova_compute[225855]: 2026-01-20 15:23:06.318 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 4c926c1a-d5cf-4865-aa57-66b439d115f8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:23:06 compute-1 nova_compute[225855]: 2026-01-20 15:23:06.321 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 4c926c1a-d5cf-4865-aa57-66b439d115f8] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 15:23:06 compute-1 nova_compute[225855]: 2026-01-20 15:23:06.328 225859 DEBUG nova.virt.libvirt.driver [None req-1b898df7-ad12-4947-8ab8-99ba6eee88f5 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 4c926c1a-d5cf-4865-aa57-66b439d115f8] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:23:06 compute-1 nova_compute[225855]: 2026-01-20 15:23:06.328 225859 DEBUG nova.virt.libvirt.driver [None req-1b898df7-ad12-4947-8ab8-99ba6eee88f5 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 4c926c1a-d5cf-4865-aa57-66b439d115f8] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:23:06 compute-1 nova_compute[225855]: 2026-01-20 15:23:06.328 225859 DEBUG nova.virt.libvirt.driver [None req-1b898df7-ad12-4947-8ab8-99ba6eee88f5 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 4c926c1a-d5cf-4865-aa57-66b439d115f8] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:23:06 compute-1 nova_compute[225855]: 2026-01-20 15:23:06.329 225859 DEBUG nova.virt.libvirt.driver [None req-1b898df7-ad12-4947-8ab8-99ba6eee88f5 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 4c926c1a-d5cf-4865-aa57-66b439d115f8] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:23:06 compute-1 nova_compute[225855]: 2026-01-20 15:23:06.329 225859 DEBUG nova.virt.libvirt.driver [None req-1b898df7-ad12-4947-8ab8-99ba6eee88f5 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 4c926c1a-d5cf-4865-aa57-66b439d115f8] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:23:06 compute-1 nova_compute[225855]: 2026-01-20 15:23:06.329 225859 DEBUG nova.virt.libvirt.driver [None req-1b898df7-ad12-4947-8ab8-99ba6eee88f5 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 4c926c1a-d5cf-4865-aa57-66b439d115f8] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:23:06 compute-1 nova_compute[225855]: 2026-01-20 15:23:06.340 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:23:06 compute-1 nova_compute[225855]: 2026-01-20 15:23:06.341 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 15:23:06 compute-1 nova_compute[225855]: 2026-01-20 15:23:06.341 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 20 15:23:06 compute-1 nova_compute[225855]: 2026-01-20 15:23:06.363 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 4c926c1a-d5cf-4865-aa57-66b439d115f8] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 15:23:06 compute-1 nova_compute[225855]: 2026-01-20 15:23:06.392 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] [instance: 4c926c1a-d5cf-4865-aa57-66b439d115f8] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Jan 20 15:23:06 compute-1 nova_compute[225855]: 2026-01-20 15:23:06.392 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 20 15:23:06 compute-1 nova_compute[225855]: 2026-01-20 15:23:06.398 225859 INFO nova.compute.manager [None req-1b898df7-ad12-4947-8ab8-99ba6eee88f5 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 4c926c1a-d5cf-4865-aa57-66b439d115f8] Took 12.27 seconds to spawn the instance on the hypervisor.
Jan 20 15:23:06 compute-1 nova_compute[225855]: 2026-01-20 15:23:06.398 225859 DEBUG nova.compute.manager [None req-1b898df7-ad12-4947-8ab8-99ba6eee88f5 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 4c926c1a-d5cf-4865-aa57-66b439d115f8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:23:06 compute-1 nova_compute[225855]: 2026-01-20 15:23:06.464 225859 INFO nova.compute.manager [None req-1b898df7-ad12-4947-8ab8-99ba6eee88f5 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 4c926c1a-d5cf-4865-aa57-66b439d115f8] Took 13.15 seconds to build instance.
Jan 20 15:23:06 compute-1 nova_compute[225855]: 2026-01-20 15:23:06.480 225859 DEBUG oslo_concurrency.lockutils [None req-1b898df7-ad12-4947-8ab8-99ba6eee88f5 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Lock "4c926c1a-d5cf-4865-aa57-66b439d115f8" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.237s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:23:06 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/2287325465' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:23:07 compute-1 nova_compute[225855]: 2026-01-20 15:23:07.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:23:07 compute-1 nova_compute[225855]: 2026-01-20 15:23:07.359 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:23:07 compute-1 nova_compute[225855]: 2026-01-20 15:23:07.360 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:23:07 compute-1 nova_compute[225855]: 2026-01-20 15:23:07.360 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:23:07 compute-1 nova_compute[225855]: 2026-01-20 15:23:07.361 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 15:23:07 compute-1 nova_compute[225855]: 2026-01-20 15:23:07.361 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:23:07 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:23:07 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3144324644' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:23:07 compute-1 nova_compute[225855]: 2026-01-20 15:23:07.786 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:23:07 compute-1 nova_compute[225855]: 2026-01-20 15:23:07.801 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:23:07 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:23:07 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:23:07 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:23:07.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:23:07 compute-1 nova_compute[225855]: 2026-01-20 15:23:07.867 225859 DEBUG nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] skipping disk for instance-000000c8 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 15:23:07 compute-1 nova_compute[225855]: 2026-01-20 15:23:07.867 225859 DEBUG nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] skipping disk for instance-000000c8 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 15:23:07 compute-1 podman[312127]: 2026-01-20 15:23:07.893995095 +0000 UTC m=+0.055974629 container health_status 533cd8e0c3b8b7f910f15cef49e7f879ab46c12ce646a35628c15f13725eb500 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 20 15:23:08 compute-1 ceph-mon[81775]: pgmap v3028: 321 pgs: 321 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 36 op/s
Jan 20 15:23:08 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/3144324644' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:23:08 compute-1 nova_compute[225855]: 2026-01-20 15:23:08.018 225859 WARNING nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 15:23:08 compute-1 nova_compute[225855]: 2026-01-20 15:23:08.019 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4113MB free_disk=20.96738052368164GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 15:23:08 compute-1 nova_compute[225855]: 2026-01-20 15:23:08.019 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:23:08 compute-1 nova_compute[225855]: 2026-01-20 15:23:08.019 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:23:08 compute-1 nova_compute[225855]: 2026-01-20 15:23:08.082 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Instance 4c926c1a-d5cf-4865-aa57-66b439d115f8 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 20 15:23:08 compute-1 nova_compute[225855]: 2026-01-20 15:23:08.082 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 15:23:08 compute-1 nova_compute[225855]: 2026-01-20 15:23:08.082 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 15:23:08 compute-1 nova_compute[225855]: 2026-01-20 15:23:08.147 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:23:08 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:23:08 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:23:08 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:23:08.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:23:08 compute-1 nova_compute[225855]: 2026-01-20 15:23:08.325 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:23:08 compute-1 nova_compute[225855]: 2026-01-20 15:23:08.383 225859 DEBUG nova.compute.manager [req-7044eb28-560b-4743-855d-ca1cf2f625a1 req-411c6d42-381a-4572-afc2-19e70fa4f61e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 4c926c1a-d5cf-4865-aa57-66b439d115f8] Received event network-vif-plugged-5ee850cb-507f-4288-993f-a7892e9285c9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:23:08 compute-1 nova_compute[225855]: 2026-01-20 15:23:08.384 225859 DEBUG oslo_concurrency.lockutils [req-7044eb28-560b-4743-855d-ca1cf2f625a1 req-411c6d42-381a-4572-afc2-19e70fa4f61e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "4c926c1a-d5cf-4865-aa57-66b439d115f8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:23:08 compute-1 nova_compute[225855]: 2026-01-20 15:23:08.384 225859 DEBUG oslo_concurrency.lockutils [req-7044eb28-560b-4743-855d-ca1cf2f625a1 req-411c6d42-381a-4572-afc2-19e70fa4f61e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "4c926c1a-d5cf-4865-aa57-66b439d115f8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:23:08 compute-1 nova_compute[225855]: 2026-01-20 15:23:08.384 225859 DEBUG oslo_concurrency.lockutils [req-7044eb28-560b-4743-855d-ca1cf2f625a1 req-411c6d42-381a-4572-afc2-19e70fa4f61e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "4c926c1a-d5cf-4865-aa57-66b439d115f8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:23:08 compute-1 nova_compute[225855]: 2026-01-20 15:23:08.385 225859 DEBUG nova.compute.manager [req-7044eb28-560b-4743-855d-ca1cf2f625a1 req-411c6d42-381a-4572-afc2-19e70fa4f61e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 4c926c1a-d5cf-4865-aa57-66b439d115f8] No waiting events found dispatching network-vif-plugged-5ee850cb-507f-4288-993f-a7892e9285c9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:23:08 compute-1 nova_compute[225855]: 2026-01-20 15:23:08.385 225859 WARNING nova.compute.manager [req-7044eb28-560b-4743-855d-ca1cf2f625a1 req-411c6d42-381a-4572-afc2-19e70fa4f61e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 4c926c1a-d5cf-4865-aa57-66b439d115f8] Received unexpected event network-vif-plugged-5ee850cb-507f-4288-993f-a7892e9285c9 for instance with vm_state active and task_state None.
Jan 20 15:23:08 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:23:08 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:23:08 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2587878333' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:23:08 compute-1 nova_compute[225855]: 2026-01-20 15:23:08.592 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:23:08 compute-1 nova_compute[225855]: 2026-01-20 15:23:08.599 225859 DEBUG nova.compute.provider_tree [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:23:08 compute-1 nova_compute[225855]: 2026-01-20 15:23:08.617 225859 DEBUG nova.scheduler.client.report [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:23:08 compute-1 nova_compute[225855]: 2026-01-20 15:23:08.642 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 15:23:08 compute-1 nova_compute[225855]: 2026-01-20 15:23:08.643 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.624s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:23:09 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/2587878333' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:23:09 compute-1 nova_compute[225855]: 2026-01-20 15:23:09.644 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:23:09 compute-1 nova_compute[225855]: 2026-01-20 15:23:09.645 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:23:09 compute-1 nova_compute[225855]: 2026-01-20 15:23:09.645 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:23:09 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:23:09 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:23:09 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:23:09.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:23:10 compute-1 ceph-mon[81775]: pgmap v3029: 321 pgs: 321 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 14 KiB/s rd, 507 KiB/s wr, 21 op/s
Jan 20 15:23:10 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/467097017' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:23:10 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:23:10 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:23:10 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:23:10.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:23:10 compute-1 nova_compute[225855]: 2026-01-20 15:23:10.340 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:23:11 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/619901193' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:23:11 compute-1 nova_compute[225855]: 2026-01-20 15:23:11.202 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:23:11 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:23:11.202 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=69, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '12:bb:42', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '06:92:24:f7:15:56'}, ipsec=False) old=SB_Global(nb_cfg=68) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:23:11 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:23:11.204 140354 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 20 15:23:11 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:23:11.205 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5ffd4ac3-9266-4927-98ad-20a17782c725, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '69'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:23:11 compute-1 nova_compute[225855]: 2026-01-20 15:23:11.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:23:11 compute-1 ovn_controller[130490]: 2026-01-20T15:23:11Z|00897|binding|INFO|Releasing lport 37682451-9139-425b-b6d7-1ea83a2306c3 from this chassis (sb_readonly=0)
Jan 20 15:23:11 compute-1 NetworkManager[49104]: <info>  [1768922591.3587] manager: (patch-provnet-b62c391b-f7a3-4a38-a0df-72ac0383ca74-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/376)
Jan 20 15:23:11 compute-1 nova_compute[225855]: 2026-01-20 15:23:11.358 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:23:11 compute-1 NetworkManager[49104]: <info>  [1768922591.3598] manager: (patch-br-int-to-provnet-b62c391b-f7a3-4a38-a0df-72ac0383ca74): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/377)
Jan 20 15:23:11 compute-1 ovn_controller[130490]: 2026-01-20T15:23:11Z|00898|binding|INFO|Releasing lport 37682451-9139-425b-b6d7-1ea83a2306c3 from this chassis (sb_readonly=0)
Jan 20 15:23:11 compute-1 nova_compute[225855]: 2026-01-20 15:23:11.390 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:23:11 compute-1 nova_compute[225855]: 2026-01-20 15:23:11.394 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:23:11 compute-1 nova_compute[225855]: 2026-01-20 15:23:11.737 225859 DEBUG nova.compute.manager [req-34831086-4be9-463c-8b2c-d358eb7dfd74 req-0f080f90-3086-44d8-a754-11d3e045bf35 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 4c926c1a-d5cf-4865-aa57-66b439d115f8] Received event network-changed-5ee850cb-507f-4288-993f-a7892e9285c9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:23:11 compute-1 nova_compute[225855]: 2026-01-20 15:23:11.737 225859 DEBUG nova.compute.manager [req-34831086-4be9-463c-8b2c-d358eb7dfd74 req-0f080f90-3086-44d8-a754-11d3e045bf35 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 4c926c1a-d5cf-4865-aa57-66b439d115f8] Refreshing instance network info cache due to event network-changed-5ee850cb-507f-4288-993f-a7892e9285c9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 15:23:11 compute-1 nova_compute[225855]: 2026-01-20 15:23:11.737 225859 DEBUG oslo_concurrency.lockutils [req-34831086-4be9-463c-8b2c-d358eb7dfd74 req-0f080f90-3086-44d8-a754-11d3e045bf35 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-4c926c1a-d5cf-4865-aa57-66b439d115f8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:23:11 compute-1 nova_compute[225855]: 2026-01-20 15:23:11.738 225859 DEBUG oslo_concurrency.lockutils [req-34831086-4be9-463c-8b2c-d358eb7dfd74 req-0f080f90-3086-44d8-a754-11d3e045bf35 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-4c926c1a-d5cf-4865-aa57-66b439d115f8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:23:11 compute-1 nova_compute[225855]: 2026-01-20 15:23:11.738 225859 DEBUG nova.network.neutron [req-34831086-4be9-463c-8b2c-d358eb7dfd74 req-0f080f90-3086-44d8-a754-11d3e045bf35 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 4c926c1a-d5cf-4865-aa57-66b439d115f8] Refreshing network info cache for port 5ee850cb-507f-4288-993f-a7892e9285c9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 15:23:11 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:23:11 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:23:11 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:23:11.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:23:12 compute-1 ceph-mon[81775]: pgmap v3030: 321 pgs: 321 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 507 KiB/s wr, 74 op/s
Jan 20 15:23:12 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:23:12 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:23:12 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:23:12.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:23:12 compute-1 nova_compute[225855]: 2026-01-20 15:23:12.789 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:23:13 compute-1 nova_compute[225855]: 2026-01-20 15:23:13.058 225859 DEBUG nova.network.neutron [req-34831086-4be9-463c-8b2c-d358eb7dfd74 req-0f080f90-3086-44d8-a754-11d3e045bf35 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 4c926c1a-d5cf-4865-aa57-66b439d115f8] Updated VIF entry in instance network info cache for port 5ee850cb-507f-4288-993f-a7892e9285c9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 15:23:13 compute-1 nova_compute[225855]: 2026-01-20 15:23:13.059 225859 DEBUG nova.network.neutron [req-34831086-4be9-463c-8b2c-d358eb7dfd74 req-0f080f90-3086-44d8-a754-11d3e045bf35 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 4c926c1a-d5cf-4865-aa57-66b439d115f8] Updating instance_info_cache with network_info: [{"id": "5ee850cb-507f-4288-993f-a7892e9285c9", "address": "fa:16:3e:cc:0a:92", "network": {"id": "6a8ba0d6-68f0-4a25-84ff-3685d5b259a6", "bridge": "br-int", "label": "tempest-network-smoke--805773012", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.176", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1ed5feeeafe7448a8efb47ab975b0ead", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5ee850cb-50", "ovs_interfaceid": "5ee850cb-507f-4288-993f-a7892e9285c9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:23:13 compute-1 nova_compute[225855]: 2026-01-20 15:23:13.081 225859 DEBUG oslo_concurrency.lockutils [req-34831086-4be9-463c-8b2c-d358eb7dfd74 req-0f080f90-3086-44d8-a754-11d3e045bf35 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-4c926c1a-d5cf-4865-aa57-66b439d115f8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:23:13 compute-1 nova_compute[225855]: 2026-01-20 15:23:13.328 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:23:13 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:23:13 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 20 15:23:13 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3891755442' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:23:13 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 20 15:23:13 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3891755442' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:23:13 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:23:13 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:23:13 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:23:13.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:23:14 compute-1 ceph-mon[81775]: pgmap v3031: 321 pgs: 321 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 20 15:23:14 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/3891755442' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:23:14 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/3891755442' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:23:14 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:23:14 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:23:14 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:23:14.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:23:14 compute-1 nova_compute[225855]: 2026-01-20 15:23:14.335 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:23:15 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:23:15 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:23:15 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:23:15.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:23:16 compute-1 ceph-mon[81775]: pgmap v3032: 321 pgs: 321 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 20 15:23:16 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:23:16 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:23:16 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:23:16.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:23:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:23:16.444 140354 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:23:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:23:16.444 140354 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:23:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:23:16.445 140354 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:23:17 compute-1 nova_compute[225855]: 2026-01-20 15:23:17.790 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:23:17 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:23:17 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:23:17 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:23:17.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:23:18 compute-1 ceph-mon[81775]: pgmap v3033: 321 pgs: 321 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 20 15:23:18 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:23:18 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:23:18 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:23:18.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:23:18 compute-1 nova_compute[225855]: 2026-01-20 15:23:18.329 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:23:18 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:23:19 compute-1 sshd-session[312173]: Connection closed by authenticating user root 121.36.32.179 port 42594 [preauth]
Jan 20 15:23:19 compute-1 ovn_controller[130490]: 2026-01-20T15:23:19Z|00107|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:cc:0a:92 10.100.0.12
Jan 20 15:23:19 compute-1 ovn_controller[130490]: 2026-01-20T15:23:19Z|00108|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:cc:0a:92 10.100.0.12
Jan 20 15:23:19 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:23:19 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:23:19 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:23:19.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:23:20 compute-1 ceph-mon[81775]: pgmap v3034: 321 pgs: 321 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 64 op/s
Jan 20 15:23:20 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:23:20 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:23:20 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:23:20.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:23:21 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:23:21 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:23:21 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:23:21.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:23:22 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:23:22 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:23:22 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:23:22.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:23:22 compute-1 ceph-mon[81775]: pgmap v3035: 321 pgs: 321 active+clean; 180 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 1.2 MiB/s wr, 104 op/s
Jan 20 15:23:22 compute-1 nova_compute[225855]: 2026-01-20 15:23:22.793 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:23:23 compute-1 nova_compute[225855]: 2026-01-20 15:23:23.332 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:23:23 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:23:23 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:23:23 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:23:23 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:23:23.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:23:24 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:23:24 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:23:24 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:23:24.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:23:24 compute-1 ceph-mon[81775]: pgmap v3036: 321 pgs: 321 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 680 KiB/s rd, 2.1 MiB/s wr, 74 op/s
Jan 20 15:23:24 compute-1 sudo[312180]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:23:24 compute-1 sudo[312180]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:23:24 compute-1 sudo[312180]: pam_unix(sudo:session): session closed for user root
Jan 20 15:23:24 compute-1 sudo[312205]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:23:24 compute-1 sudo[312205]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:23:24 compute-1 sudo[312205]: pam_unix(sudo:session): session closed for user root
Jan 20 15:23:25 compute-1 nova_compute[225855]: 2026-01-20 15:23:25.053 225859 INFO nova.compute.manager [None req-801be12f-ea4e-427e-afe5-a7bda78ad809 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 4c926c1a-d5cf-4865-aa57-66b439d115f8] Get console output
Jan 20 15:23:25 compute-1 nova_compute[225855]: 2026-01-20 15:23:25.058 263775 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Jan 20 15:23:25 compute-1 nova_compute[225855]: 2026-01-20 15:23:25.334 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:23:25 compute-1 nova_compute[225855]: 2026-01-20 15:23:25.346 225859 DEBUG oslo_concurrency.lockutils [None req-9074b4a9-08a3-48d3-a541-d80622072f80 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Acquiring lock "4c926c1a-d5cf-4865-aa57-66b439d115f8" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:23:25 compute-1 nova_compute[225855]: 2026-01-20 15:23:25.347 225859 DEBUG oslo_concurrency.lockutils [None req-9074b4a9-08a3-48d3-a541-d80622072f80 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Lock "4c926c1a-d5cf-4865-aa57-66b439d115f8" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:23:25 compute-1 nova_compute[225855]: 2026-01-20 15:23:25.347 225859 DEBUG nova.compute.manager [None req-9074b4a9-08a3-48d3-a541-d80622072f80 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 4c926c1a-d5cf-4865-aa57-66b439d115f8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:23:25 compute-1 nova_compute[225855]: 2026-01-20 15:23:25.351 225859 DEBUG nova.compute.manager [None req-9074b4a9-08a3-48d3-a541-d80622072f80 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 4c926c1a-d5cf-4865-aa57-66b439d115f8] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338
Jan 20 15:23:25 compute-1 nova_compute[225855]: 2026-01-20 15:23:25.352 225859 DEBUG nova.objects.instance [None req-9074b4a9-08a3-48d3-a541-d80622072f80 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Lazy-loading 'flavor' on Instance uuid 4c926c1a-d5cf-4865-aa57-66b439d115f8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:23:25 compute-1 nova_compute[225855]: 2026-01-20 15:23:25.377 225859 DEBUG nova.virt.libvirt.driver [None req-9074b4a9-08a3-48d3-a541-d80622072f80 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 4c926c1a-d5cf-4865-aa57-66b439d115f8] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Jan 20 15:23:25 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:23:25 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:23:25 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:23:25.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:23:26 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:23:26 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:23:26 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:23:26.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:23:26 compute-1 ceph-mon[81775]: pgmap v3037: 321 pgs: 321 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 381 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Jan 20 15:23:27 compute-1 nova_compute[225855]: 2026-01-20 15:23:27.794 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:23:27 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:23:27 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:23:27 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:23:27.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:23:27 compute-1 kernel: tap5ee850cb-50 (unregistering): left promiscuous mode
Jan 20 15:23:27 compute-1 NetworkManager[49104]: <info>  [1768922607.9866] device (tap5ee850cb-50): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 20 15:23:27 compute-1 nova_compute[225855]: 2026-01-20 15:23:27.993 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:23:27 compute-1 ovn_controller[130490]: 2026-01-20T15:23:27Z|00899|binding|INFO|Releasing lport 5ee850cb-507f-4288-993f-a7892e9285c9 from this chassis (sb_readonly=0)
Jan 20 15:23:27 compute-1 ovn_controller[130490]: 2026-01-20T15:23:27Z|00900|binding|INFO|Setting lport 5ee850cb-507f-4288-993f-a7892e9285c9 down in Southbound
Jan 20 15:23:27 compute-1 ovn_controller[130490]: 2026-01-20T15:23:27Z|00901|binding|INFO|Removing iface tap5ee850cb-50 ovn-installed in OVS
Jan 20 15:23:27 compute-1 nova_compute[225855]: 2026-01-20 15:23:27.998 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:23:28 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:23:28.004 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cc:0a:92 10.100.0.12'], port_security=['fa:16:3e:cc:0a:92 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '4c926c1a-d5cf-4865-aa57-66b439d115f8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6a8ba0d6-68f0-4a25-84ff-3685d5b259a6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1ed5feeeafe7448a8efb47ab975b0ead', 'neutron:revision_number': '4', 'neutron:security_group_ids': '6f91a697-00ef-4c2f-a4cd-47aa600b459f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com', 'neutron:port_fip': '192.168.122.176'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a867ed68-782d-4d05-8077-be0278c87405, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], logical_port=5ee850cb-507f-4288-993f-a7892e9285c9) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:23:28 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:23:28.005 140354 INFO neutron.agent.ovn.metadata.agent [-] Port 5ee850cb-507f-4288-993f-a7892e9285c9 in datapath 6a8ba0d6-68f0-4a25-84ff-3685d5b259a6 unbound from our chassis
Jan 20 15:23:28 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:23:28.006 140354 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6a8ba0d6-68f0-4a25-84ff-3685d5b259a6, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 20 15:23:28 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:23:28.007 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[7aed325c-e852-4e2c-9a0a-c811e9241873]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:23:28 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:23:28.008 140354 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-6a8ba0d6-68f0-4a25-84ff-3685d5b259a6 namespace which is not needed anymore
Jan 20 15:23:28 compute-1 nova_compute[225855]: 2026-01-20 15:23:28.025 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:23:28 compute-1 systemd[1]: machine-qemu\x2d105\x2dinstance\x2d000000c8.scope: Deactivated successfully.
Jan 20 15:23:28 compute-1 systemd[1]: machine-qemu\x2d105\x2dinstance\x2d000000c8.scope: Consumed 13.599s CPU time.
Jan 20 15:23:28 compute-1 systemd-machined[194361]: Machine qemu-105-instance-000000c8 terminated.
Jan 20 15:23:28 compute-1 podman[312232]: 2026-01-20 15:23:28.065914497 +0000 UTC m=+0.106021498 container health_status 72be95c12e041eb2cd1a16ad70dfa1391d9f03a5f3180a614ead9459dcfcd46f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202)
Jan 20 15:23:28 compute-1 neutron-haproxy-ovnmeta-6a8ba0d6-68f0-4a25-84ff-3685d5b259a6[312037]: [NOTICE]   (312043) : haproxy version is 2.8.14-c23fe91
Jan 20 15:23:28 compute-1 neutron-haproxy-ovnmeta-6a8ba0d6-68f0-4a25-84ff-3685d5b259a6[312037]: [NOTICE]   (312043) : path to executable is /usr/sbin/haproxy
Jan 20 15:23:28 compute-1 neutron-haproxy-ovnmeta-6a8ba0d6-68f0-4a25-84ff-3685d5b259a6[312037]: [WARNING]  (312043) : Exiting Master process...
Jan 20 15:23:28 compute-1 neutron-haproxy-ovnmeta-6a8ba0d6-68f0-4a25-84ff-3685d5b259a6[312037]: [ALERT]    (312043) : Current worker (312066) exited with code 143 (Terminated)
Jan 20 15:23:28 compute-1 neutron-haproxy-ovnmeta-6a8ba0d6-68f0-4a25-84ff-3685d5b259a6[312037]: [WARNING]  (312043) : All workers exited. Exiting... (0)
Jan 20 15:23:28 compute-1 systemd[1]: libpod-3d2e532eac9473a89ae3cd09c7dc89ba78b5562e33899e920a0ad5ee290fae30.scope: Deactivated successfully.
Jan 20 15:23:28 compute-1 podman[312280]: 2026-01-20 15:23:28.14515284 +0000 UTC m=+0.045850251 container died 3d2e532eac9473a89ae3cd09c7dc89ba78b5562e33899e920a0ad5ee290fae30 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6a8ba0d6-68f0-4a25-84ff-3685d5b259a6, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 20 15:23:28 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:23:28 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:23:28 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:23:28.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:23:28 compute-1 nova_compute[225855]: 2026-01-20 15:23:28.253 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:23:28 compute-1 nova_compute[225855]: 2026-01-20 15:23:28.260 225859 DEBUG nova.compute.manager [req-738d96d4-787d-4743-b27e-df0ec24b392b req-5d632645-c5ee-4c0e-8bf2-313b05968bdd 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 4c926c1a-d5cf-4865-aa57-66b439d115f8] Received event network-vif-unplugged-5ee850cb-507f-4288-993f-a7892e9285c9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:23:28 compute-1 nova_compute[225855]: 2026-01-20 15:23:28.261 225859 DEBUG oslo_concurrency.lockutils [req-738d96d4-787d-4743-b27e-df0ec24b392b req-5d632645-c5ee-4c0e-8bf2-313b05968bdd 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "4c926c1a-d5cf-4865-aa57-66b439d115f8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:23:28 compute-1 nova_compute[225855]: 2026-01-20 15:23:28.261 225859 DEBUG oslo_concurrency.lockutils [req-738d96d4-787d-4743-b27e-df0ec24b392b req-5d632645-c5ee-4c0e-8bf2-313b05968bdd 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "4c926c1a-d5cf-4865-aa57-66b439d115f8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:23:28 compute-1 nova_compute[225855]: 2026-01-20 15:23:28.261 225859 DEBUG oslo_concurrency.lockutils [req-738d96d4-787d-4743-b27e-df0ec24b392b req-5d632645-c5ee-4c0e-8bf2-313b05968bdd 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "4c926c1a-d5cf-4865-aa57-66b439d115f8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:23:28 compute-1 nova_compute[225855]: 2026-01-20 15:23:28.261 225859 DEBUG nova.compute.manager [req-738d96d4-787d-4743-b27e-df0ec24b392b req-5d632645-c5ee-4c0e-8bf2-313b05968bdd 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 4c926c1a-d5cf-4865-aa57-66b439d115f8] No waiting events found dispatching network-vif-unplugged-5ee850cb-507f-4288-993f-a7892e9285c9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:23:28 compute-1 nova_compute[225855]: 2026-01-20 15:23:28.262 225859 WARNING nova.compute.manager [req-738d96d4-787d-4743-b27e-df0ec24b392b req-5d632645-c5ee-4c0e-8bf2-313b05968bdd 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 4c926c1a-d5cf-4865-aa57-66b439d115f8] Received unexpected event network-vif-unplugged-5ee850cb-507f-4288-993f-a7892e9285c9 for instance with vm_state active and task_state powering-off.
Jan 20 15:23:28 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-3d2e532eac9473a89ae3cd09c7dc89ba78b5562e33899e920a0ad5ee290fae30-userdata-shm.mount: Deactivated successfully.
Jan 20 15:23:28 compute-1 systemd[1]: var-lib-containers-storage-overlay-ba41683a800cd9238c1b5a90bf229c56f6a98cdf524e45addc7193533096210c-merged.mount: Deactivated successfully.
Jan 20 15:23:28 compute-1 podman[312280]: 2026-01-20 15:23:28.288342138 +0000 UTC m=+0.189039529 container cleanup 3d2e532eac9473a89ae3cd09c7dc89ba78b5562e33899e920a0ad5ee290fae30 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6a8ba0d6-68f0-4a25-84ff-3685d5b259a6, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 20 15:23:28 compute-1 systemd[1]: libpod-conmon-3d2e532eac9473a89ae3cd09c7dc89ba78b5562e33899e920a0ad5ee290fae30.scope: Deactivated successfully.
Jan 20 15:23:28 compute-1 nova_compute[225855]: 2026-01-20 15:23:28.333 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:23:28 compute-1 podman[312319]: 2026-01-20 15:23:28.364519843 +0000 UTC m=+0.051734158 container remove 3d2e532eac9473a89ae3cd09c7dc89ba78b5562e33899e920a0ad5ee290fae30 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6a8ba0d6-68f0-4a25-84ff-3685d5b259a6, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Jan 20 15:23:28 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:23:28.370 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[f941d24d-78e7-4b68-a4d1-db1ff88cdec4]: (4, ('Tue Jan 20 03:23:28 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-6a8ba0d6-68f0-4a25-84ff-3685d5b259a6 (3d2e532eac9473a89ae3cd09c7dc89ba78b5562e33899e920a0ad5ee290fae30)\n3d2e532eac9473a89ae3cd09c7dc89ba78b5562e33899e920a0ad5ee290fae30\nTue Jan 20 03:23:28 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-6a8ba0d6-68f0-4a25-84ff-3685d5b259a6 (3d2e532eac9473a89ae3cd09c7dc89ba78b5562e33899e920a0ad5ee290fae30)\n3d2e532eac9473a89ae3cd09c7dc89ba78b5562e33899e920a0ad5ee290fae30\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:23:28 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:23:28.372 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[ce7ea85a-50b4-4fe3-ad93-f279ca7fa76e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:23:28 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:23:28.373 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6a8ba0d6-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:23:28 compute-1 nova_compute[225855]: 2026-01-20 15:23:28.375 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:23:28 compute-1 kernel: tap6a8ba0d6-60: left promiscuous mode
Jan 20 15:23:28 compute-1 nova_compute[225855]: 2026-01-20 15:23:28.388 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:23:28 compute-1 nova_compute[225855]: 2026-01-20 15:23:28.392 225859 INFO nova.virt.libvirt.driver [None req-9074b4a9-08a3-48d3-a541-d80622072f80 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 4c926c1a-d5cf-4865-aa57-66b439d115f8] Instance shutdown successfully after 3 seconds.
Jan 20 15:23:28 compute-1 nova_compute[225855]: 2026-01-20 15:23:28.394 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:23:28 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:23:28.397 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[fb966007-836b-4fb4-9394-2859080276e8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:23:28 compute-1 nova_compute[225855]: 2026-01-20 15:23:28.399 225859 INFO nova.virt.libvirt.driver [-] [instance: 4c926c1a-d5cf-4865-aa57-66b439d115f8] Instance destroyed successfully.
Jan 20 15:23:28 compute-1 nova_compute[225855]: 2026-01-20 15:23:28.399 225859 DEBUG nova.objects.instance [None req-9074b4a9-08a3-48d3-a541-d80622072f80 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Lazy-loading 'numa_topology' on Instance uuid 4c926c1a-d5cf-4865-aa57-66b439d115f8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:23:28 compute-1 nova_compute[225855]: 2026-01-20 15:23:28.412 225859 DEBUG nova.compute.manager [None req-9074b4a9-08a3-48d3-a541-d80622072f80 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 4c926c1a-d5cf-4865-aa57-66b439d115f8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:23:28 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:23:28.415 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[45f61ebc-f0f5-4b90-ba76-fffc4c66e063]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:23:28 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:23:28.417 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[246d7cab-b7f9-45df-bb5c-6270675618ac]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:23:28 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:23:28.433 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[b109bb4d-292e-40a0-bc52-2bbec4ef7b65]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 762280, 'reachable_time': 28195, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 312339, 'error': None, 'target': 'ovnmeta-6a8ba0d6-68f0-4a25-84ff-3685d5b259a6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:23:28 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:23:28.436 140466 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-6a8ba0d6-68f0-4a25-84ff-3685d5b259a6 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 20 15:23:28 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:23:28.436 140466 DEBUG oslo.privsep.daemon [-] privsep: reply[e457509e-fbb4-4093-9327-0eb3937ac225]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:23:28 compute-1 systemd[1]: run-netns-ovnmeta\x2d6a8ba0d6\x2d68f0\x2d4a25\x2d84ff\x2d3685d5b259a6.mount: Deactivated successfully.
Jan 20 15:23:28 compute-1 nova_compute[225855]: 2026-01-20 15:23:28.450 225859 DEBUG oslo_concurrency.lockutils [None req-9074b4a9-08a3-48d3-a541-d80622072f80 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Lock "4c926c1a-d5cf-4865-aa57-66b439d115f8" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 3.103s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:23:28 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:23:28 compute-1 ceph-mon[81775]: pgmap v3038: 321 pgs: 321 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 381 KiB/s rd, 2.1 MiB/s wr, 67 op/s
Jan 20 15:23:29 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:23:29 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:23:29 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:23:29.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:23:30 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:23:30 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:23:30 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:23:30.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:23:30 compute-1 nova_compute[225855]: 2026-01-20 15:23:30.398 225859 DEBUG nova.compute.manager [req-54ab5347-995d-4995-8745-6c09279f64c9 req-c331f25d-bab8-45e7-bf15-4868cd7beeaa 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 4c926c1a-d5cf-4865-aa57-66b439d115f8] Received event network-vif-plugged-5ee850cb-507f-4288-993f-a7892e9285c9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:23:30 compute-1 nova_compute[225855]: 2026-01-20 15:23:30.398 225859 DEBUG oslo_concurrency.lockutils [req-54ab5347-995d-4995-8745-6c09279f64c9 req-c331f25d-bab8-45e7-bf15-4868cd7beeaa 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "4c926c1a-d5cf-4865-aa57-66b439d115f8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:23:30 compute-1 nova_compute[225855]: 2026-01-20 15:23:30.398 225859 DEBUG oslo_concurrency.lockutils [req-54ab5347-995d-4995-8745-6c09279f64c9 req-c331f25d-bab8-45e7-bf15-4868cd7beeaa 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "4c926c1a-d5cf-4865-aa57-66b439d115f8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:23:30 compute-1 nova_compute[225855]: 2026-01-20 15:23:30.399 225859 DEBUG oslo_concurrency.lockutils [req-54ab5347-995d-4995-8745-6c09279f64c9 req-c331f25d-bab8-45e7-bf15-4868cd7beeaa 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "4c926c1a-d5cf-4865-aa57-66b439d115f8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:23:30 compute-1 nova_compute[225855]: 2026-01-20 15:23:30.399 225859 DEBUG nova.compute.manager [req-54ab5347-995d-4995-8745-6c09279f64c9 req-c331f25d-bab8-45e7-bf15-4868cd7beeaa 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 4c926c1a-d5cf-4865-aa57-66b439d115f8] No waiting events found dispatching network-vif-plugged-5ee850cb-507f-4288-993f-a7892e9285c9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:23:30 compute-1 nova_compute[225855]: 2026-01-20 15:23:30.399 225859 WARNING nova.compute.manager [req-54ab5347-995d-4995-8745-6c09279f64c9 req-c331f25d-bab8-45e7-bf15-4868cd7beeaa 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 4c926c1a-d5cf-4865-aa57-66b439d115f8] Received unexpected event network-vif-plugged-5ee850cb-507f-4288-993f-a7892e9285c9 for instance with vm_state stopped and task_state None.
Jan 20 15:23:30 compute-1 ceph-mon[81775]: pgmap v3039: 321 pgs: 321 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 381 KiB/s rd, 2.1 MiB/s wr, 67 op/s
Jan 20 15:23:30 compute-1 ceph-mon[81775]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #148. Immutable memtables: 0.
Jan 20 15:23:30 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:23:30.525782) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 20 15:23:30 compute-1 ceph-mon[81775]: rocksdb: [db/flush_job.cc:856] [default] [JOB 93] Flushing memtable with next log file: 148
Jan 20 15:23:30 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768922610525923, "job": 93, "event": "flush_started", "num_memtables": 1, "num_entries": 1817, "num_deletes": 252, "total_data_size": 4124514, "memory_usage": 4181888, "flush_reason": "Manual Compaction"}
Jan 20 15:23:30 compute-1 ceph-mon[81775]: rocksdb: [db/flush_job.cc:885] [default] [JOB 93] Level-0 flush table #149: started
Jan 20 15:23:30 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768922610556318, "cf_name": "default", "job": 93, "event": "table_file_creation", "file_number": 149, "file_size": 2698868, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 72281, "largest_seqno": 74093, "table_properties": {"data_size": 2691378, "index_size": 4368, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2053, "raw_key_size": 16277, "raw_average_key_size": 20, "raw_value_size": 2676151, "raw_average_value_size": 3361, "num_data_blocks": 190, "num_entries": 796, "num_filter_entries": 796, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768922462, "oldest_key_time": 1768922462, "file_creation_time": 1768922610, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1539d774-8a6f-4e48-b253-137c44586344", "db_session_id": "LFF7G2OZDOU7TKQ8MKAH", "orig_file_number": 149, "seqno_to_time_mapping": "N/A"}}
Jan 20 15:23:30 compute-1 ceph-mon[81775]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 93] Flush lasted 30572 microseconds, and 7033 cpu microseconds.
Jan 20 15:23:30 compute-1 ceph-mon[81775]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 15:23:30 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:23:30.556365) [db/flush_job.cc:967] [default] [JOB 93] Level-0 flush table #149: 2698868 bytes OK
Jan 20 15:23:30 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:23:30.556386) [db/memtable_list.cc:519] [default] Level-0 commit table #149 started
Jan 20 15:23:30 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:23:30.558714) [db/memtable_list.cc:722] [default] Level-0 commit table #149: memtable #1 done
Jan 20 15:23:30 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:23:30.558727) EVENT_LOG_v1 {"time_micros": 1768922610558723, "job": 93, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 20 15:23:30 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:23:30.558747) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 20 15:23:30 compute-1 ceph-mon[81775]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 93] Try to delete WAL files size 4116259, prev total WAL file size 4116259, number of live WAL files 2.
Jan 20 15:23:30 compute-1 ceph-mon[81775]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000145.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 15:23:30 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:23:30.559946) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730036323735' seq:72057594037927935, type:22 .. '7061786F730036353237' seq:0, type:0; will stop at (end)
Jan 20 15:23:30 compute-1 ceph-mon[81775]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 94] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 20 15:23:30 compute-1 ceph-mon[81775]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 93 Base level 0, inputs: [149(2635KB)], [147(10MB)]
Jan 20 15:23:30 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768922610560032, "job": 94, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [149], "files_L6": [147], "score": -1, "input_data_size": 13196821, "oldest_snapshot_seqno": -1}
Jan 20 15:23:30 compute-1 ceph-mon[81775]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 94] Generated table #150: 9670 keys, 11330445 bytes, temperature: kUnknown
Jan 20 15:23:30 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768922610712594, "cf_name": "default", "job": 94, "event": "table_file_creation", "file_number": 150, "file_size": 11330445, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11269239, "index_size": 35941, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 24197, "raw_key_size": 254467, "raw_average_key_size": 26, "raw_value_size": 11100844, "raw_average_value_size": 1147, "num_data_blocks": 1363, "num_entries": 9670, "num_filter_entries": 9670, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768917474, "oldest_key_time": 0, "file_creation_time": 1768922610, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1539d774-8a6f-4e48-b253-137c44586344", "db_session_id": "LFF7G2OZDOU7TKQ8MKAH", "orig_file_number": 150, "seqno_to_time_mapping": "N/A"}}
Jan 20 15:23:30 compute-1 ceph-mon[81775]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 15:23:30 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:23:30.712853) [db/compaction/compaction_job.cc:1663] [default] [JOB 94] Compacted 1@0 + 1@6 files to L6 => 11330445 bytes
Jan 20 15:23:30 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:23:30.714256) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 86.5 rd, 74.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.6, 10.0 +0.0 blob) out(10.8 +0.0 blob), read-write-amplify(9.1) write-amplify(4.2) OK, records in: 10195, records dropped: 525 output_compression: NoCompression
Jan 20 15:23:30 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:23:30.714276) EVENT_LOG_v1 {"time_micros": 1768922610714266, "job": 94, "event": "compaction_finished", "compaction_time_micros": 152648, "compaction_time_cpu_micros": 30961, "output_level": 6, "num_output_files": 1, "total_output_size": 11330445, "num_input_records": 10195, "num_output_records": 9670, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 20 15:23:30 compute-1 ceph-mon[81775]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000149.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 15:23:30 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768922610714752, "job": 94, "event": "table_file_deletion", "file_number": 149}
Jan 20 15:23:30 compute-1 ceph-mon[81775]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000147.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 15:23:30 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768922610716453, "job": 94, "event": "table_file_deletion", "file_number": 147}
Jan 20 15:23:30 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:23:30.559761) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:23:30 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:23:30.716511) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:23:30 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:23:30.716515) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:23:30 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:23:30.716517) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:23:30 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:23:30.716519) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:23:30 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:23:30.716520) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:23:31 compute-1 nova_compute[225855]: 2026-01-20 15:23:31.495 225859 INFO nova.compute.manager [None req-91115158-c844-4da1-8cae-4da48e6711ee 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 4c926c1a-d5cf-4865-aa57-66b439d115f8] Get console output
Jan 20 15:23:31 compute-1 nova_compute[225855]: 2026-01-20 15:23:31.657 225859 DEBUG nova.objects.instance [None req-17bcfc80-7c1a-4369-9a16-6423cc2819e8 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Lazy-loading 'flavor' on Instance uuid 4c926c1a-d5cf-4865-aa57-66b439d115f8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:23:31 compute-1 nova_compute[225855]: 2026-01-20 15:23:31.678 225859 DEBUG oslo_concurrency.lockutils [None req-17bcfc80-7c1a-4369-9a16-6423cc2819e8 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Acquiring lock "refresh_cache-4c926c1a-d5cf-4865-aa57-66b439d115f8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:23:31 compute-1 nova_compute[225855]: 2026-01-20 15:23:31.679 225859 DEBUG oslo_concurrency.lockutils [None req-17bcfc80-7c1a-4369-9a16-6423cc2819e8 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Acquired lock "refresh_cache-4c926c1a-d5cf-4865-aa57-66b439d115f8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:23:31 compute-1 nova_compute[225855]: 2026-01-20 15:23:31.679 225859 DEBUG nova.network.neutron [None req-17bcfc80-7c1a-4369-9a16-6423cc2819e8 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 4c926c1a-d5cf-4865-aa57-66b439d115f8] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 15:23:31 compute-1 nova_compute[225855]: 2026-01-20 15:23:31.679 225859 DEBUG nova.objects.instance [None req-17bcfc80-7c1a-4369-9a16-6423cc2819e8 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Lazy-loading 'info_cache' on Instance uuid 4c926c1a-d5cf-4865-aa57-66b439d115f8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:23:31 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:23:31 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:23:31 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:23:31.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:23:32 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:23:32 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:23:32 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:23:32.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:23:32 compute-1 ceph-mon[81775]: pgmap v3040: 321 pgs: 321 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 383 KiB/s rd, 2.2 MiB/s wr, 68 op/s
Jan 20 15:23:32 compute-1 nova_compute[225855]: 2026-01-20 15:23:32.796 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:23:33 compute-1 nova_compute[225855]: 2026-01-20 15:23:33.134 225859 DEBUG nova.network.neutron [None req-17bcfc80-7c1a-4369-9a16-6423cc2819e8 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 4c926c1a-d5cf-4865-aa57-66b439d115f8] Updating instance_info_cache with network_info: [{"id": "5ee850cb-507f-4288-993f-a7892e9285c9", "address": "fa:16:3e:cc:0a:92", "network": {"id": "6a8ba0d6-68f0-4a25-84ff-3685d5b259a6", "bridge": "br-int", "label": "tempest-network-smoke--805773012", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.176", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1ed5feeeafe7448a8efb47ab975b0ead", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5ee850cb-50", "ovs_interfaceid": "5ee850cb-507f-4288-993f-a7892e9285c9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:23:33 compute-1 nova_compute[225855]: 2026-01-20 15:23:33.229 225859 DEBUG oslo_concurrency.lockutils [None req-17bcfc80-7c1a-4369-9a16-6423cc2819e8 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Releasing lock "refresh_cache-4c926c1a-d5cf-4865-aa57-66b439d115f8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:23:33 compute-1 nova_compute[225855]: 2026-01-20 15:23:33.250 225859 INFO nova.virt.libvirt.driver [-] [instance: 4c926c1a-d5cf-4865-aa57-66b439d115f8] Instance destroyed successfully.
Jan 20 15:23:33 compute-1 nova_compute[225855]: 2026-01-20 15:23:33.250 225859 DEBUG nova.objects.instance [None req-17bcfc80-7c1a-4369-9a16-6423cc2819e8 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Lazy-loading 'numa_topology' on Instance uuid 4c926c1a-d5cf-4865-aa57-66b439d115f8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:23:33 compute-1 nova_compute[225855]: 2026-01-20 15:23:33.283 225859 DEBUG nova.objects.instance [None req-17bcfc80-7c1a-4369-9a16-6423cc2819e8 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Lazy-loading 'resources' on Instance uuid 4c926c1a-d5cf-4865-aa57-66b439d115f8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:23:33 compute-1 nova_compute[225855]: 2026-01-20 15:23:33.335 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:23:33 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:23:33 compute-1 nova_compute[225855]: 2026-01-20 15:23:33.699 225859 DEBUG nova.virt.libvirt.vif [None req-17bcfc80-7c1a-4369-9a16-6423cc2819e8 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T15:22:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1485210936',display_name='tempest-TestNetworkAdvancedServerOps-server-1485210936',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1485210936',id=200,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFcP4UZM8O63AIoOVMDQxqy+5NyOff2+4sn+/fo/i9F+a5f/348af0mVz9/8dEzIolLAAsu+/bRAcsRarrSFS+xLm/g8/3nv4fTZhMf6io52heWTDMNl7eXefIdDoA+XVw==',key_name='tempest-TestNetworkAdvancedServerOps-577536853',keypairs=<?>,launch_index=0,launched_at=2026-01-20T15:23:06Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='1ed5feeeafe7448a8efb47ab975b0ead',ramdisk_id='',reservation_id='r-y87mmd4a',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-175282664',owner_user_name='tempest-TestNetworkAdvancedServerOps-175282664-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T15:23:28Z,user_data=None,user_id='442a7a5cb8ea426a82be9762b262d171',uuid=4c926c1a-d5cf-4865-aa57-66b439d115f8,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "5ee850cb-507f-4288-993f-a7892e9285c9", "address": "fa:16:3e:cc:0a:92", "network": {"id": "6a8ba0d6-68f0-4a25-84ff-3685d5b259a6", "bridge": "br-int", "label": "tempest-network-smoke--805773012", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.176", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1ed5feeeafe7448a8efb47ab975b0ead", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5ee850cb-50", "ovs_interfaceid": "5ee850cb-507f-4288-993f-a7892e9285c9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 20 15:23:33 compute-1 nova_compute[225855]: 2026-01-20 15:23:33.700 225859 DEBUG nova.network.os_vif_util [None req-17bcfc80-7c1a-4369-9a16-6423cc2819e8 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Converting VIF {"id": "5ee850cb-507f-4288-993f-a7892e9285c9", "address": "fa:16:3e:cc:0a:92", "network": {"id": "6a8ba0d6-68f0-4a25-84ff-3685d5b259a6", "bridge": "br-int", "label": "tempest-network-smoke--805773012", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.176", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1ed5feeeafe7448a8efb47ab975b0ead", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5ee850cb-50", "ovs_interfaceid": "5ee850cb-507f-4288-993f-a7892e9285c9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 15:23:33 compute-1 nova_compute[225855]: 2026-01-20 15:23:33.701 225859 DEBUG nova.network.os_vif_util [None req-17bcfc80-7c1a-4369-9a16-6423cc2819e8 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cc:0a:92,bridge_name='br-int',has_traffic_filtering=True,id=5ee850cb-507f-4288-993f-a7892e9285c9,network=Network(6a8ba0d6-68f0-4a25-84ff-3685d5b259a6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5ee850cb-50') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 15:23:33 compute-1 nova_compute[225855]: 2026-01-20 15:23:33.701 225859 DEBUG os_vif [None req-17bcfc80-7c1a-4369-9a16-6423cc2819e8 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:cc:0a:92,bridge_name='br-int',has_traffic_filtering=True,id=5ee850cb-507f-4288-993f-a7892e9285c9,network=Network(6a8ba0d6-68f0-4a25-84ff-3685d5b259a6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5ee850cb-50') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 20 15:23:33 compute-1 nova_compute[225855]: 2026-01-20 15:23:33.702 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:23:33 compute-1 nova_compute[225855]: 2026-01-20 15:23:33.703 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5ee850cb-50, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:23:33 compute-1 nova_compute[225855]: 2026-01-20 15:23:33.704 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:23:33 compute-1 nova_compute[225855]: 2026-01-20 15:23:33.706 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 15:23:33 compute-1 nova_compute[225855]: 2026-01-20 15:23:33.709 225859 INFO os_vif [None req-17bcfc80-7c1a-4369-9a16-6423cc2819e8 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:cc:0a:92,bridge_name='br-int',has_traffic_filtering=True,id=5ee850cb-507f-4288-993f-a7892e9285c9,network=Network(6a8ba0d6-68f0-4a25-84ff-3685d5b259a6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5ee850cb-50')
Jan 20 15:23:33 compute-1 nova_compute[225855]: 2026-01-20 15:23:33.715 225859 DEBUG nova.virt.libvirt.driver [None req-17bcfc80-7c1a-4369-9a16-6423cc2819e8 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 4c926c1a-d5cf-4865-aa57-66b439d115f8] Start _get_guest_xml network_info=[{"id": "5ee850cb-507f-4288-993f-a7892e9285c9", "address": "fa:16:3e:cc:0a:92", "network": {"id": "6a8ba0d6-68f0-4a25-84ff-3685d5b259a6", "bridge": "br-int", "label": "tempest-network-smoke--805773012", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.176", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1ed5feeeafe7448a8efb47ab975b0ead", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5ee850cb-50", "ovs_interfaceid": "5ee850cb-507f-4288-993f-a7892e9285c9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'device_type': 'disk', 'encryption_options': None, 'size': 0, 'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'guest_format': None, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'image_id': 'a32b3e07-16d8-46fd-9a7b-c242c432fcf9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 20 15:23:33 compute-1 nova_compute[225855]: 2026-01-20 15:23:33.718 225859 WARNING nova.virt.libvirt.driver [None req-17bcfc80-7c1a-4369-9a16-6423cc2819e8 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 15:23:33 compute-1 nova_compute[225855]: 2026-01-20 15:23:33.726 225859 DEBUG nova.virt.libvirt.host [None req-17bcfc80-7c1a-4369-9a16-6423cc2819e8 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 20 15:23:33 compute-1 nova_compute[225855]: 2026-01-20 15:23:33.727 225859 DEBUG nova.virt.libvirt.host [None req-17bcfc80-7c1a-4369-9a16-6423cc2819e8 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 20 15:23:33 compute-1 nova_compute[225855]: 2026-01-20 15:23:33.731 225859 DEBUG nova.virt.libvirt.host [None req-17bcfc80-7c1a-4369-9a16-6423cc2819e8 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 20 15:23:33 compute-1 nova_compute[225855]: 2026-01-20 15:23:33.732 225859 DEBUG nova.virt.libvirt.host [None req-17bcfc80-7c1a-4369-9a16-6423cc2819e8 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 20 15:23:33 compute-1 nova_compute[225855]: 2026-01-20 15:23:33.733 225859 DEBUG nova.virt.libvirt.driver [None req-17bcfc80-7c1a-4369-9a16-6423cc2819e8 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 20 15:23:33 compute-1 nova_compute[225855]: 2026-01-20 15:23:33.733 225859 DEBUG nova.virt.hardware [None req-17bcfc80-7c1a-4369-9a16-6423cc2819e8 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-20T14:21:55Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='522deaab-a741-4dbb-932d-d8b13a211c33',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 20 15:23:33 compute-1 nova_compute[225855]: 2026-01-20 15:23:33.734 225859 DEBUG nova.virt.hardware [None req-17bcfc80-7c1a-4369-9a16-6423cc2819e8 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 20 15:23:33 compute-1 nova_compute[225855]: 2026-01-20 15:23:33.734 225859 DEBUG nova.virt.hardware [None req-17bcfc80-7c1a-4369-9a16-6423cc2819e8 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 20 15:23:33 compute-1 nova_compute[225855]: 2026-01-20 15:23:33.734 225859 DEBUG nova.virt.hardware [None req-17bcfc80-7c1a-4369-9a16-6423cc2819e8 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 20 15:23:33 compute-1 nova_compute[225855]: 2026-01-20 15:23:33.734 225859 DEBUG nova.virt.hardware [None req-17bcfc80-7c1a-4369-9a16-6423cc2819e8 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 20 15:23:33 compute-1 nova_compute[225855]: 2026-01-20 15:23:33.734 225859 DEBUG nova.virt.hardware [None req-17bcfc80-7c1a-4369-9a16-6423cc2819e8 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 20 15:23:33 compute-1 nova_compute[225855]: 2026-01-20 15:23:33.734 225859 DEBUG nova.virt.hardware [None req-17bcfc80-7c1a-4369-9a16-6423cc2819e8 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 20 15:23:33 compute-1 nova_compute[225855]: 2026-01-20 15:23:33.735 225859 DEBUG nova.virt.hardware [None req-17bcfc80-7c1a-4369-9a16-6423cc2819e8 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 20 15:23:33 compute-1 nova_compute[225855]: 2026-01-20 15:23:33.735 225859 DEBUG nova.virt.hardware [None req-17bcfc80-7c1a-4369-9a16-6423cc2819e8 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 20 15:23:33 compute-1 nova_compute[225855]: 2026-01-20 15:23:33.735 225859 DEBUG nova.virt.hardware [None req-17bcfc80-7c1a-4369-9a16-6423cc2819e8 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 20 15:23:33 compute-1 nova_compute[225855]: 2026-01-20 15:23:33.735 225859 DEBUG nova.virt.hardware [None req-17bcfc80-7c1a-4369-9a16-6423cc2819e8 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 20 15:23:33 compute-1 nova_compute[225855]: 2026-01-20 15:23:33.735 225859 DEBUG nova.objects.instance [None req-17bcfc80-7c1a-4369-9a16-6423cc2819e8 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Lazy-loading 'vcpu_model' on Instance uuid 4c926c1a-d5cf-4865-aa57-66b439d115f8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:23:33 compute-1 nova_compute[225855]: 2026-01-20 15:23:33.767 225859 DEBUG oslo_concurrency.processutils [None req-17bcfc80-7c1a-4369-9a16-6423cc2819e8 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:23:33 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:23:33 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:23:33 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:23:33.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:23:34 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 15:23:34 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/156660179' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:23:34 compute-1 nova_compute[225855]: 2026-01-20 15:23:34.210 225859 DEBUG oslo_concurrency.processutils [None req-17bcfc80-7c1a-4369-9a16-6423cc2819e8 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:23:34 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:23:34 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:23:34 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:23:34.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:23:34 compute-1 nova_compute[225855]: 2026-01-20 15:23:34.260 225859 DEBUG oslo_concurrency.processutils [None req-17bcfc80-7c1a-4369-9a16-6423cc2819e8 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:23:34 compute-1 ceph-mon[81775]: pgmap v3041: 321 pgs: 321 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 150 KiB/s rd, 950 KiB/s wr, 28 op/s
Jan 20 15:23:34 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/156660179' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:23:34 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 15:23:34 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/316765263' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:23:34 compute-1 nova_compute[225855]: 2026-01-20 15:23:34.716 225859 DEBUG oslo_concurrency.processutils [None req-17bcfc80-7c1a-4369-9a16-6423cc2819e8 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:23:34 compute-1 nova_compute[225855]: 2026-01-20 15:23:34.718 225859 DEBUG nova.virt.libvirt.vif [None req-17bcfc80-7c1a-4369-9a16-6423cc2819e8 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T15:22:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1485210936',display_name='tempest-TestNetworkAdvancedServerOps-server-1485210936',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1485210936',id=200,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFcP4UZM8O63AIoOVMDQxqy+5NyOff2+4sn+/fo/i9F+a5f/348af0mVz9/8dEzIolLAAsu+/bRAcsRarrSFS+xLm/g8/3nv4fTZhMf6io52heWTDMNl7eXefIdDoA+XVw==',key_name='tempest-TestNetworkAdvancedServerOps-577536853',keypairs=<?>,launch_index=0,launched_at=2026-01-20T15:23:06Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='1ed5feeeafe7448a8efb47ab975b0ead',ramdisk_id='',reservation_id='r-y87mmd4a',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-175282664',owner_user_name='tempest-TestNetworkAdvancedServerOps-175282664-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T15:23:28Z,user_data=None,user_id='442a7a5cb8ea426a82be9762b262d171',uuid=4c926c1a-d5cf-4865-aa57-66b439d115f8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "5ee850cb-507f-4288-993f-a7892e9285c9", "address": "fa:16:3e:cc:0a:92", "network": {"id": "6a8ba0d6-68f0-4a25-84ff-3685d5b259a6", "bridge": "br-int", "label": "tempest-network-smoke--805773012", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.176", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1ed5feeeafe7448a8efb47ab975b0ead", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5ee850cb-50", "ovs_interfaceid": "5ee850cb-507f-4288-993f-a7892e9285c9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 20 15:23:34 compute-1 nova_compute[225855]: 2026-01-20 15:23:34.718 225859 DEBUG nova.network.os_vif_util [None req-17bcfc80-7c1a-4369-9a16-6423cc2819e8 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Converting VIF {"id": "5ee850cb-507f-4288-993f-a7892e9285c9", "address": "fa:16:3e:cc:0a:92", "network": {"id": "6a8ba0d6-68f0-4a25-84ff-3685d5b259a6", "bridge": "br-int", "label": "tempest-network-smoke--805773012", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.176", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1ed5feeeafe7448a8efb47ab975b0ead", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5ee850cb-50", "ovs_interfaceid": "5ee850cb-507f-4288-993f-a7892e9285c9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 15:23:34 compute-1 nova_compute[225855]: 2026-01-20 15:23:34.719 225859 DEBUG nova.network.os_vif_util [None req-17bcfc80-7c1a-4369-9a16-6423cc2819e8 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cc:0a:92,bridge_name='br-int',has_traffic_filtering=True,id=5ee850cb-507f-4288-993f-a7892e9285c9,network=Network(6a8ba0d6-68f0-4a25-84ff-3685d5b259a6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5ee850cb-50') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 15:23:34 compute-1 nova_compute[225855]: 2026-01-20 15:23:34.720 225859 DEBUG nova.objects.instance [None req-17bcfc80-7c1a-4369-9a16-6423cc2819e8 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Lazy-loading 'pci_devices' on Instance uuid 4c926c1a-d5cf-4865-aa57-66b439d115f8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:23:35 compute-1 nova_compute[225855]: 2026-01-20 15:23:35.055 225859 DEBUG nova.virt.libvirt.driver [None req-17bcfc80-7c1a-4369-9a16-6423cc2819e8 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 4c926c1a-d5cf-4865-aa57-66b439d115f8] End _get_guest_xml xml=<domain type="kvm">
Jan 20 15:23:35 compute-1 nova_compute[225855]:   <uuid>4c926c1a-d5cf-4865-aa57-66b439d115f8</uuid>
Jan 20 15:23:35 compute-1 nova_compute[225855]:   <name>instance-000000c8</name>
Jan 20 15:23:35 compute-1 nova_compute[225855]:   <memory>131072</memory>
Jan 20 15:23:35 compute-1 nova_compute[225855]:   <vcpu>1</vcpu>
Jan 20 15:23:35 compute-1 nova_compute[225855]:   <metadata>
Jan 20 15:23:35 compute-1 nova_compute[225855]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 15:23:35 compute-1 nova_compute[225855]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 15:23:35 compute-1 nova_compute[225855]:       <nova:name>tempest-TestNetworkAdvancedServerOps-server-1485210936</nova:name>
Jan 20 15:23:35 compute-1 nova_compute[225855]:       <nova:creationTime>2026-01-20 15:23:33</nova:creationTime>
Jan 20 15:23:35 compute-1 nova_compute[225855]:       <nova:flavor name="m1.nano">
Jan 20 15:23:35 compute-1 nova_compute[225855]:         <nova:memory>128</nova:memory>
Jan 20 15:23:35 compute-1 nova_compute[225855]:         <nova:disk>1</nova:disk>
Jan 20 15:23:35 compute-1 nova_compute[225855]:         <nova:swap>0</nova:swap>
Jan 20 15:23:35 compute-1 nova_compute[225855]:         <nova:ephemeral>0</nova:ephemeral>
Jan 20 15:23:35 compute-1 nova_compute[225855]:         <nova:vcpus>1</nova:vcpus>
Jan 20 15:23:35 compute-1 nova_compute[225855]:       </nova:flavor>
Jan 20 15:23:35 compute-1 nova_compute[225855]:       <nova:owner>
Jan 20 15:23:35 compute-1 nova_compute[225855]:         <nova:user uuid="442a7a5cb8ea426a82be9762b262d171">tempest-TestNetworkAdvancedServerOps-175282664-project-member</nova:user>
Jan 20 15:23:35 compute-1 nova_compute[225855]:         <nova:project uuid="1ed5feeeafe7448a8efb47ab975b0ead">tempest-TestNetworkAdvancedServerOps-175282664</nova:project>
Jan 20 15:23:35 compute-1 nova_compute[225855]:       </nova:owner>
Jan 20 15:23:35 compute-1 nova_compute[225855]:       <nova:root type="image" uuid="a32b3e07-16d8-46fd-9a7b-c242c432fcf9"/>
Jan 20 15:23:35 compute-1 nova_compute[225855]:       <nova:ports>
Jan 20 15:23:35 compute-1 nova_compute[225855]:         <nova:port uuid="5ee850cb-507f-4288-993f-a7892e9285c9">
Jan 20 15:23:35 compute-1 nova_compute[225855]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Jan 20 15:23:35 compute-1 nova_compute[225855]:         </nova:port>
Jan 20 15:23:35 compute-1 nova_compute[225855]:       </nova:ports>
Jan 20 15:23:35 compute-1 nova_compute[225855]:     </nova:instance>
Jan 20 15:23:35 compute-1 nova_compute[225855]:   </metadata>
Jan 20 15:23:35 compute-1 nova_compute[225855]:   <sysinfo type="smbios">
Jan 20 15:23:35 compute-1 nova_compute[225855]:     <system>
Jan 20 15:23:35 compute-1 nova_compute[225855]:       <entry name="manufacturer">RDO</entry>
Jan 20 15:23:35 compute-1 nova_compute[225855]:       <entry name="product">OpenStack Compute</entry>
Jan 20 15:23:35 compute-1 nova_compute[225855]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 15:23:35 compute-1 nova_compute[225855]:       <entry name="serial">4c926c1a-d5cf-4865-aa57-66b439d115f8</entry>
Jan 20 15:23:35 compute-1 nova_compute[225855]:       <entry name="uuid">4c926c1a-d5cf-4865-aa57-66b439d115f8</entry>
Jan 20 15:23:35 compute-1 nova_compute[225855]:       <entry name="family">Virtual Machine</entry>
Jan 20 15:23:35 compute-1 nova_compute[225855]:     </system>
Jan 20 15:23:35 compute-1 nova_compute[225855]:   </sysinfo>
Jan 20 15:23:35 compute-1 nova_compute[225855]:   <os>
Jan 20 15:23:35 compute-1 nova_compute[225855]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 20 15:23:35 compute-1 nova_compute[225855]:     <boot dev="hd"/>
Jan 20 15:23:35 compute-1 nova_compute[225855]:     <smbios mode="sysinfo"/>
Jan 20 15:23:35 compute-1 nova_compute[225855]:   </os>
Jan 20 15:23:35 compute-1 nova_compute[225855]:   <features>
Jan 20 15:23:35 compute-1 nova_compute[225855]:     <acpi/>
Jan 20 15:23:35 compute-1 nova_compute[225855]:     <apic/>
Jan 20 15:23:35 compute-1 nova_compute[225855]:     <vmcoreinfo/>
Jan 20 15:23:35 compute-1 nova_compute[225855]:   </features>
Jan 20 15:23:35 compute-1 nova_compute[225855]:   <clock offset="utc">
Jan 20 15:23:35 compute-1 nova_compute[225855]:     <timer name="pit" tickpolicy="delay"/>
Jan 20 15:23:35 compute-1 nova_compute[225855]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 20 15:23:35 compute-1 nova_compute[225855]:     <timer name="hpet" present="no"/>
Jan 20 15:23:35 compute-1 nova_compute[225855]:   </clock>
Jan 20 15:23:35 compute-1 nova_compute[225855]:   <cpu mode="custom" match="exact">
Jan 20 15:23:35 compute-1 nova_compute[225855]:     <model>Nehalem</model>
Jan 20 15:23:35 compute-1 nova_compute[225855]:     <topology sockets="1" cores="1" threads="1"/>
Jan 20 15:23:35 compute-1 nova_compute[225855]:   </cpu>
Jan 20 15:23:35 compute-1 nova_compute[225855]:   <devices>
Jan 20 15:23:35 compute-1 nova_compute[225855]:     <disk type="network" device="disk">
Jan 20 15:23:35 compute-1 nova_compute[225855]:       <driver type="raw" cache="none"/>
Jan 20 15:23:35 compute-1 nova_compute[225855]:       <source protocol="rbd" name="vms/4c926c1a-d5cf-4865-aa57-66b439d115f8_disk">
Jan 20 15:23:35 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 15:23:35 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 15:23:35 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 15:23:35 compute-1 nova_compute[225855]:       </source>
Jan 20 15:23:35 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 15:23:35 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 15:23:35 compute-1 nova_compute[225855]:       </auth>
Jan 20 15:23:35 compute-1 nova_compute[225855]:       <target dev="vda" bus="virtio"/>
Jan 20 15:23:35 compute-1 nova_compute[225855]:     </disk>
Jan 20 15:23:35 compute-1 nova_compute[225855]:     <disk type="network" device="cdrom">
Jan 20 15:23:35 compute-1 nova_compute[225855]:       <driver type="raw" cache="none"/>
Jan 20 15:23:35 compute-1 nova_compute[225855]:       <source protocol="rbd" name="vms/4c926c1a-d5cf-4865-aa57-66b439d115f8_disk.config">
Jan 20 15:23:35 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 15:23:35 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 15:23:35 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 15:23:35 compute-1 nova_compute[225855]:       </source>
Jan 20 15:23:35 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 15:23:35 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 15:23:35 compute-1 nova_compute[225855]:       </auth>
Jan 20 15:23:35 compute-1 nova_compute[225855]:       <target dev="sda" bus="sata"/>
Jan 20 15:23:35 compute-1 nova_compute[225855]:     </disk>
Jan 20 15:23:35 compute-1 nova_compute[225855]:     <interface type="ethernet">
Jan 20 15:23:35 compute-1 nova_compute[225855]:       <mac address="fa:16:3e:cc:0a:92"/>
Jan 20 15:23:35 compute-1 nova_compute[225855]:       <model type="virtio"/>
Jan 20 15:23:35 compute-1 nova_compute[225855]:       <driver name="vhost" rx_queue_size="512"/>
Jan 20 15:23:35 compute-1 nova_compute[225855]:       <mtu size="1442"/>
Jan 20 15:23:35 compute-1 nova_compute[225855]:       <target dev="tap5ee850cb-50"/>
Jan 20 15:23:35 compute-1 nova_compute[225855]:     </interface>
Jan 20 15:23:35 compute-1 nova_compute[225855]:     <serial type="pty">
Jan 20 15:23:35 compute-1 nova_compute[225855]:       <log file="/var/lib/nova/instances/4c926c1a-d5cf-4865-aa57-66b439d115f8/console.log" append="off"/>
Jan 20 15:23:35 compute-1 nova_compute[225855]:     </serial>
Jan 20 15:23:35 compute-1 nova_compute[225855]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 20 15:23:35 compute-1 nova_compute[225855]:     <video>
Jan 20 15:23:35 compute-1 nova_compute[225855]:       <model type="virtio"/>
Jan 20 15:23:35 compute-1 nova_compute[225855]:     </video>
Jan 20 15:23:35 compute-1 nova_compute[225855]:     <input type="tablet" bus="usb"/>
Jan 20 15:23:35 compute-1 nova_compute[225855]:     <input type="keyboard" bus="usb"/>
Jan 20 15:23:35 compute-1 nova_compute[225855]:     <rng model="virtio">
Jan 20 15:23:35 compute-1 nova_compute[225855]:       <backend model="random">/dev/urandom</backend>
Jan 20 15:23:35 compute-1 nova_compute[225855]:     </rng>
Jan 20 15:23:35 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root"/>
Jan 20 15:23:35 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:23:35 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:23:35 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:23:35 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:23:35 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:23:35 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:23:35 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:23:35 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:23:35 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:23:35 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:23:35 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:23:35 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:23:35 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:23:35 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:23:35 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:23:35 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:23:35 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:23:35 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:23:35 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:23:35 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:23:35 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:23:35 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:23:35 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:23:35 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:23:35 compute-1 nova_compute[225855]:     <controller type="usb" index="0"/>
Jan 20 15:23:35 compute-1 nova_compute[225855]:     <memballoon model="virtio">
Jan 20 15:23:35 compute-1 nova_compute[225855]:       <stats period="10"/>
Jan 20 15:23:35 compute-1 nova_compute[225855]:     </memballoon>
Jan 20 15:23:35 compute-1 nova_compute[225855]:   </devices>
Jan 20 15:23:35 compute-1 nova_compute[225855]: </domain>
Jan 20 15:23:35 compute-1 nova_compute[225855]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 20 15:23:35 compute-1 nova_compute[225855]: 2026-01-20 15:23:35.058 225859 DEBUG nova.virt.libvirt.driver [None req-17bcfc80-7c1a-4369-9a16-6423cc2819e8 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] skipping disk for instance-000000c8 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 15:23:35 compute-1 nova_compute[225855]: 2026-01-20 15:23:35.058 225859 DEBUG nova.virt.libvirt.driver [None req-17bcfc80-7c1a-4369-9a16-6423cc2819e8 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] skipping disk for instance-000000c8 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 15:23:35 compute-1 nova_compute[225855]: 2026-01-20 15:23:35.059 225859 DEBUG nova.virt.libvirt.vif [None req-17bcfc80-7c1a-4369-9a16-6423cc2819e8 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T15:22:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1485210936',display_name='tempest-TestNetworkAdvancedServerOps-server-1485210936',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1485210936',id=200,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFcP4UZM8O63AIoOVMDQxqy+5NyOff2+4sn+/fo/i9F+a5f/348af0mVz9/8dEzIolLAAsu+/bRAcsRarrSFS+xLm/g8/3nv4fTZhMf6io52heWTDMNl7eXefIdDoA+XVw==',key_name='tempest-TestNetworkAdvancedServerOps-577536853',keypairs=<?>,launch_index=0,launched_at=2026-01-20T15:23:06Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=4,progress=0,project_id='1ed5feeeafe7448a8efb47ab975b0ead',ramdisk_id='',reservation_id='r-y87mmd4a',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-175282664',owner_user_name='tempest-TestNetworkAdvancedServerOps-175282664-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T15:23:28Z,user_data=None,user_id='442a7a5cb8ea426a82be9762b262d171',uuid=4c926c1a-d5cf-4865-aa57-66b439d115f8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "5ee850cb-507f-4288-993f-a7892e9285c9", "address": "fa:16:3e:cc:0a:92", "network": {"id": "6a8ba0d6-68f0-4a25-84ff-3685d5b259a6", "bridge": "br-int", "label": "tempest-network-smoke--805773012", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.176", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1ed5feeeafe7448a8efb47ab975b0ead", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5ee850cb-50", "ovs_interfaceid": "5ee850cb-507f-4288-993f-a7892e9285c9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 20 15:23:35 compute-1 nova_compute[225855]: 2026-01-20 15:23:35.059 225859 DEBUG nova.network.os_vif_util [None req-17bcfc80-7c1a-4369-9a16-6423cc2819e8 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Converting VIF {"id": "5ee850cb-507f-4288-993f-a7892e9285c9", "address": "fa:16:3e:cc:0a:92", "network": {"id": "6a8ba0d6-68f0-4a25-84ff-3685d5b259a6", "bridge": "br-int", "label": "tempest-network-smoke--805773012", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.176", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1ed5feeeafe7448a8efb47ab975b0ead", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5ee850cb-50", "ovs_interfaceid": "5ee850cb-507f-4288-993f-a7892e9285c9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 15:23:35 compute-1 nova_compute[225855]: 2026-01-20 15:23:35.060 225859 DEBUG nova.network.os_vif_util [None req-17bcfc80-7c1a-4369-9a16-6423cc2819e8 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cc:0a:92,bridge_name='br-int',has_traffic_filtering=True,id=5ee850cb-507f-4288-993f-a7892e9285c9,network=Network(6a8ba0d6-68f0-4a25-84ff-3685d5b259a6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5ee850cb-50') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 15:23:35 compute-1 nova_compute[225855]: 2026-01-20 15:23:35.060 225859 DEBUG os_vif [None req-17bcfc80-7c1a-4369-9a16-6423cc2819e8 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:cc:0a:92,bridge_name='br-int',has_traffic_filtering=True,id=5ee850cb-507f-4288-993f-a7892e9285c9,network=Network(6a8ba0d6-68f0-4a25-84ff-3685d5b259a6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5ee850cb-50') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 20 15:23:35 compute-1 nova_compute[225855]: 2026-01-20 15:23:35.061 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:23:35 compute-1 nova_compute[225855]: 2026-01-20 15:23:35.061 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:23:35 compute-1 nova_compute[225855]: 2026-01-20 15:23:35.062 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 15:23:35 compute-1 nova_compute[225855]: 2026-01-20 15:23:35.064 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:23:35 compute-1 nova_compute[225855]: 2026-01-20 15:23:35.064 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5ee850cb-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:23:35 compute-1 nova_compute[225855]: 2026-01-20 15:23:35.065 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap5ee850cb-50, col_values=(('external_ids', {'iface-id': '5ee850cb-507f-4288-993f-a7892e9285c9', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:cc:0a:92', 'vm-uuid': '4c926c1a-d5cf-4865-aa57-66b439d115f8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:23:35 compute-1 nova_compute[225855]: 2026-01-20 15:23:35.067 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:23:35 compute-1 NetworkManager[49104]: <info>  [1768922615.0679] manager: (tap5ee850cb-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/378)
Jan 20 15:23:35 compute-1 nova_compute[225855]: 2026-01-20 15:23:35.069 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 15:23:35 compute-1 nova_compute[225855]: 2026-01-20 15:23:35.071 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:23:35 compute-1 nova_compute[225855]: 2026-01-20 15:23:35.072 225859 INFO os_vif [None req-17bcfc80-7c1a-4369-9a16-6423cc2819e8 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:cc:0a:92,bridge_name='br-int',has_traffic_filtering=True,id=5ee850cb-507f-4288-993f-a7892e9285c9,network=Network(6a8ba0d6-68f0-4a25-84ff-3685d5b259a6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5ee850cb-50')
Jan 20 15:23:35 compute-1 kernel: tap5ee850cb-50: entered promiscuous mode
Jan 20 15:23:35 compute-1 ovn_controller[130490]: 2026-01-20T15:23:35Z|00902|binding|INFO|Claiming lport 5ee850cb-507f-4288-993f-a7892e9285c9 for this chassis.
Jan 20 15:23:35 compute-1 nova_compute[225855]: 2026-01-20 15:23:35.137 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:23:35 compute-1 ovn_controller[130490]: 2026-01-20T15:23:35Z|00903|binding|INFO|5ee850cb-507f-4288-993f-a7892e9285c9: Claiming fa:16:3e:cc:0a:92 10.100.0.12
Jan 20 15:23:35 compute-1 NetworkManager[49104]: <info>  [1768922615.1385] manager: (tap5ee850cb-50): new Tun device (/org/freedesktop/NetworkManager/Devices/379)
Jan 20 15:23:35 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:23:35.145 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cc:0a:92 10.100.0.12'], port_security=['fa:16:3e:cc:0a:92 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '4c926c1a-d5cf-4865-aa57-66b439d115f8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6a8ba0d6-68f0-4a25-84ff-3685d5b259a6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1ed5feeeafe7448a8efb47ab975b0ead', 'neutron:revision_number': '5', 'neutron:security_group_ids': '6f91a697-00ef-4c2f-a4cd-47aa600b459f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.176'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a867ed68-782d-4d05-8077-be0278c87405, chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], logical_port=5ee850cb-507f-4288-993f-a7892e9285c9) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:23:35 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:23:35.146 140354 INFO neutron.agent.ovn.metadata.agent [-] Port 5ee850cb-507f-4288-993f-a7892e9285c9 in datapath 6a8ba0d6-68f0-4a25-84ff-3685d5b259a6 bound to our chassis
Jan 20 15:23:35 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:23:35.147 140354 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6a8ba0d6-68f0-4a25-84ff-3685d5b259a6
Jan 20 15:23:35 compute-1 ovn_controller[130490]: 2026-01-20T15:23:35Z|00904|binding|INFO|Setting lport 5ee850cb-507f-4288-993f-a7892e9285c9 ovn-installed in OVS
Jan 20 15:23:35 compute-1 ovn_controller[130490]: 2026-01-20T15:23:35Z|00905|binding|INFO|Setting lport 5ee850cb-507f-4288-993f-a7892e9285c9 up in Southbound
Jan 20 15:23:35 compute-1 nova_compute[225855]: 2026-01-20 15:23:35.157 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:23:35 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:23:35.157 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[8f9ca245-6d71-4ace-9595-662246f53765]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:23:35 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:23:35.158 140354 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap6a8ba0d6-61 in ovnmeta-6a8ba0d6-68f0-4a25-84ff-3685d5b259a6 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 20 15:23:35 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:23:35.160 229707 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap6a8ba0d6-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 20 15:23:35 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:23:35.160 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[344387be-9605-43e9-a731-7320b0ab2e56]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:23:35 compute-1 nova_compute[225855]: 2026-01-20 15:23:35.161 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:23:35 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:23:35.162 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[bce71c9f-00e7-412d-b583-8dae9ff91e9d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:23:35 compute-1 systemd-udevd[312420]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 15:23:35 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:23:35.173 140466 DEBUG oslo.privsep.daemon [-] privsep: reply[abd9cd3c-fbc7-4615-ace5-8bee91c27537]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:23:35 compute-1 systemd-machined[194361]: New machine qemu-106-instance-000000c8.
Jan 20 15:23:35 compute-1 NetworkManager[49104]: <info>  [1768922615.1823] device (tap5ee850cb-50): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 20 15:23:35 compute-1 NetworkManager[49104]: <info>  [1768922615.1834] device (tap5ee850cb-50): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 20 15:23:35 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:23:35.186 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[5fe5a3db-7da4-46b6-b9a3-a9d054cf0062]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:23:35 compute-1 systemd[1]: Started Virtual Machine qemu-106-instance-000000c8.
Jan 20 15:23:35 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:23:35.215 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[53a47acd-a1c2-4bee-854c-5a16137a39ae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:23:35 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:23:35.221 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[ed9afe23-1dff-4e9b-9690-26c69d4a24d7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:23:35 compute-1 systemd-udevd[312425]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 15:23:35 compute-1 NetworkManager[49104]: <info>  [1768922615.2229] manager: (tap6a8ba0d6-60): new Veth device (/org/freedesktop/NetworkManager/Devices/380)
Jan 20 15:23:35 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:23:35.251 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[ec56fe73-75f5-4261-af7b-f94108af4762]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:23:35 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:23:35.254 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[c4117e10-f131-4fb0-9d4d-6fa779169099]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:23:35 compute-1 NetworkManager[49104]: <info>  [1768922615.2757] device (tap6a8ba0d6-60): carrier: link connected
Jan 20 15:23:35 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:23:35.281 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[ac13f920-520c-45f8-bacd-ebaa02b91a91]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:23:35 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:23:35.301 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[9b781802-f3f1-4fb3-ae6e-ee5a3937933c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6a8ba0d6-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8f:82:88'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 257], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 765420, 'reachable_time': 20334, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 312453, 'error': None, 'target': 'ovnmeta-6a8ba0d6-68f0-4a25-84ff-3685d5b259a6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:23:35 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:23:35.314 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[ad5c481f-2939-40dc-bcbf-32f3ecbb9440]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe8f:8288'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 765420, 'tstamp': 765420}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 312454, 'error': None, 'target': 'ovnmeta-6a8ba0d6-68f0-4a25-84ff-3685d5b259a6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:23:35 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:23:35.330 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[1dcf7e0b-e39b-4f25-9e1f-555642cd7378]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6a8ba0d6-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8f:82:88'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 257], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 765420, 'reachable_time': 20334, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 312455, 'error': None, 'target': 'ovnmeta-6a8ba0d6-68f0-4a25-84ff-3685d5b259a6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:23:35 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:23:35.364 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[55a65508-7cf4-4ebe-9c2b-7756ecbf4dda]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:23:35 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:23:35.433 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[0e53202f-3b68-4a49-a81b-50d9fb73da21]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:23:35 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:23:35.435 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6a8ba0d6-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:23:35 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:23:35.435 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 15:23:35 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:23:35.436 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6a8ba0d6-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:23:35 compute-1 NetworkManager[49104]: <info>  [1768922615.4382] manager: (tap6a8ba0d6-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/381)
Jan 20 15:23:35 compute-1 nova_compute[225855]: 2026-01-20 15:23:35.437 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:23:35 compute-1 kernel: tap6a8ba0d6-60: entered promiscuous mode
Jan 20 15:23:35 compute-1 nova_compute[225855]: 2026-01-20 15:23:35.440 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:23:35 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:23:35.441 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6a8ba0d6-60, col_values=(('external_ids', {'iface-id': '37682451-9139-425b-b6d7-1ea83a2306c3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:23:35 compute-1 nova_compute[225855]: 2026-01-20 15:23:35.442 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:23:35 compute-1 ovn_controller[130490]: 2026-01-20T15:23:35Z|00906|binding|INFO|Releasing lport 37682451-9139-425b-b6d7-1ea83a2306c3 from this chassis (sb_readonly=0)
Jan 20 15:23:35 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:23:35.444 140354 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/6a8ba0d6-68f0-4a25-84ff-3685d5b259a6.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/6a8ba0d6-68f0-4a25-84ff-3685d5b259a6.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 20 15:23:35 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:23:35.445 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[12a735f1-3714-4f7f-b232-f7acb1f82b9a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:23:35 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:23:35.445 140354 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 20 15:23:35 compute-1 ovn_metadata_agent[140349]: global
Jan 20 15:23:35 compute-1 ovn_metadata_agent[140349]:     log         /dev/log local0 debug
Jan 20 15:23:35 compute-1 ovn_metadata_agent[140349]:     log-tag     haproxy-metadata-proxy-6a8ba0d6-68f0-4a25-84ff-3685d5b259a6
Jan 20 15:23:35 compute-1 ovn_metadata_agent[140349]:     user        root
Jan 20 15:23:35 compute-1 ovn_metadata_agent[140349]:     group       root
Jan 20 15:23:35 compute-1 ovn_metadata_agent[140349]:     maxconn     1024
Jan 20 15:23:35 compute-1 ovn_metadata_agent[140349]:     pidfile     /var/lib/neutron/external/pids/6a8ba0d6-68f0-4a25-84ff-3685d5b259a6.pid.haproxy
Jan 20 15:23:35 compute-1 ovn_metadata_agent[140349]:     daemon
Jan 20 15:23:35 compute-1 ovn_metadata_agent[140349]: 
Jan 20 15:23:35 compute-1 ovn_metadata_agent[140349]: defaults
Jan 20 15:23:35 compute-1 ovn_metadata_agent[140349]:     log global
Jan 20 15:23:35 compute-1 ovn_metadata_agent[140349]:     mode http
Jan 20 15:23:35 compute-1 ovn_metadata_agent[140349]:     option httplog
Jan 20 15:23:35 compute-1 ovn_metadata_agent[140349]:     option dontlognull
Jan 20 15:23:35 compute-1 ovn_metadata_agent[140349]:     option http-server-close
Jan 20 15:23:35 compute-1 ovn_metadata_agent[140349]:     option forwardfor
Jan 20 15:23:35 compute-1 ovn_metadata_agent[140349]:     retries                 3
Jan 20 15:23:35 compute-1 ovn_metadata_agent[140349]:     timeout http-request    30s
Jan 20 15:23:35 compute-1 ovn_metadata_agent[140349]:     timeout connect         30s
Jan 20 15:23:35 compute-1 ovn_metadata_agent[140349]:     timeout client          32s
Jan 20 15:23:35 compute-1 ovn_metadata_agent[140349]:     timeout server          32s
Jan 20 15:23:35 compute-1 ovn_metadata_agent[140349]:     timeout http-keep-alive 30s
Jan 20 15:23:35 compute-1 ovn_metadata_agent[140349]: 
Jan 20 15:23:35 compute-1 ovn_metadata_agent[140349]: 
Jan 20 15:23:35 compute-1 ovn_metadata_agent[140349]: listen listener
Jan 20 15:23:35 compute-1 ovn_metadata_agent[140349]:     bind 169.254.169.254:80
Jan 20 15:23:35 compute-1 ovn_metadata_agent[140349]:     server metadata /var/lib/neutron/metadata_proxy
Jan 20 15:23:35 compute-1 ovn_metadata_agent[140349]:     http-request add-header X-OVN-Network-ID 6a8ba0d6-68f0-4a25-84ff-3685d5b259a6
Jan 20 15:23:35 compute-1 ovn_metadata_agent[140349]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 20 15:23:35 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:23:35.446 140354 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-6a8ba0d6-68f0-4a25-84ff-3685d5b259a6', 'env', 'PROCESS_TAG=haproxy-6a8ba0d6-68f0-4a25-84ff-3685d5b259a6', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/6a8ba0d6-68f0-4a25-84ff-3685d5b259a6.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 20 15:23:35 compute-1 nova_compute[225855]: 2026-01-20 15:23:35.460 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:23:35 compute-1 nova_compute[225855]: 2026-01-20 15:23:35.641 225859 DEBUG nova.virt.libvirt.host [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Removed pending event for 4c926c1a-d5cf-4865-aa57-66b439d115f8 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Jan 20 15:23:35 compute-1 nova_compute[225855]: 2026-01-20 15:23:35.642 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768922615.640725, 4c926c1a-d5cf-4865-aa57-66b439d115f8 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:23:35 compute-1 nova_compute[225855]: 2026-01-20 15:23:35.642 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 4c926c1a-d5cf-4865-aa57-66b439d115f8] VM Resumed (Lifecycle Event)
Jan 20 15:23:35 compute-1 nova_compute[225855]: 2026-01-20 15:23:35.645 225859 DEBUG nova.compute.manager [None req-17bcfc80-7c1a-4369-9a16-6423cc2819e8 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 4c926c1a-d5cf-4865-aa57-66b439d115f8] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 20 15:23:35 compute-1 nova_compute[225855]: 2026-01-20 15:23:35.649 225859 INFO nova.virt.libvirt.driver [-] [instance: 4c926c1a-d5cf-4865-aa57-66b439d115f8] Instance rebooted successfully.
Jan 20 15:23:35 compute-1 nova_compute[225855]: 2026-01-20 15:23:35.649 225859 DEBUG nova.compute.manager [None req-17bcfc80-7c1a-4369-9a16-6423cc2819e8 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 4c926c1a-d5cf-4865-aa57-66b439d115f8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:23:35 compute-1 nova_compute[225855]: 2026-01-20 15:23:35.673 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 4c926c1a-d5cf-4865-aa57-66b439d115f8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:23:35 compute-1 nova_compute[225855]: 2026-01-20 15:23:35.676 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 4c926c1a-d5cf-4865-aa57-66b439d115f8] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: stopped, current task_state: powering-on, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 15:23:35 compute-1 nova_compute[225855]: 2026-01-20 15:23:35.704 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 4c926c1a-d5cf-4865-aa57-66b439d115f8] During sync_power_state the instance has a pending task (powering-on). Skip.
Jan 20 15:23:35 compute-1 nova_compute[225855]: 2026-01-20 15:23:35.704 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768922615.6417022, 4c926c1a-d5cf-4865-aa57-66b439d115f8 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:23:35 compute-1 nova_compute[225855]: 2026-01-20 15:23:35.704 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 4c926c1a-d5cf-4865-aa57-66b439d115f8] VM Started (Lifecycle Event)
Jan 20 15:23:35 compute-1 nova_compute[225855]: 2026-01-20 15:23:35.724 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 4c926c1a-d5cf-4865-aa57-66b439d115f8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:23:35 compute-1 nova_compute[225855]: 2026-01-20 15:23:35.728 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 4c926c1a-d5cf-4865-aa57-66b439d115f8] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 15:23:35 compute-1 podman[312530]: 2026-01-20 15:23:35.827382955 +0000 UTC m=+0.050105901 container create 3e180e523263b2efe6b366dda5dbd7316202839bfa973cebc4d886e9b759bb3c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6a8ba0d6-68f0-4a25-84ff-3685d5b259a6, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 20 15:23:35 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/316765263' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:23:35 compute-1 systemd[1]: Started libpod-conmon-3e180e523263b2efe6b366dda5dbd7316202839bfa973cebc4d886e9b759bb3c.scope.
Jan 20 15:23:35 compute-1 systemd[1]: Started libcrun container.
Jan 20 15:23:35 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:23:35 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:23:35 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:23:35.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:23:35 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/260d7576a6afe1e83d647994bef54bb9257457e8b2f09eff78519d9b0206d51a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 20 15:23:35 compute-1 podman[312530]: 2026-01-20 15:23:35.799827919 +0000 UTC m=+0.022550885 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 20 15:23:35 compute-1 podman[312530]: 2026-01-20 15:23:35.897567969 +0000 UTC m=+0.120290935 container init 3e180e523263b2efe6b366dda5dbd7316202839bfa973cebc4d886e9b759bb3c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6a8ba0d6-68f0-4a25-84ff-3685d5b259a6, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 20 15:23:35 compute-1 podman[312530]: 2026-01-20 15:23:35.902414118 +0000 UTC m=+0.125137064 container start 3e180e523263b2efe6b366dda5dbd7316202839bfa973cebc4d886e9b759bb3c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6a8ba0d6-68f0-4a25-84ff-3685d5b259a6, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS)
Jan 20 15:23:35 compute-1 neutron-haproxy-ovnmeta-6a8ba0d6-68f0-4a25-84ff-3685d5b259a6[312545]: [NOTICE]   (312549) : New worker (312551) forked
Jan 20 15:23:35 compute-1 neutron-haproxy-ovnmeta-6a8ba0d6-68f0-4a25-84ff-3685d5b259a6[312545]: [NOTICE]   (312549) : Loading success.
Jan 20 15:23:36 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:23:36 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:23:36 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:23:36.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:23:37 compute-1 ceph-mon[81775]: pgmap v3042: 321 pgs: 321 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 37 KiB/s rd, 30 KiB/s wr, 5 op/s
Jan 20 15:23:37 compute-1 nova_compute[225855]: 2026-01-20 15:23:37.469 225859 DEBUG nova.compute.manager [req-d4617dd3-5b54-4a10-a5cd-6fdc9988b37c req-6c9856fc-352d-4360-a66e-2ba109d7c451 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 4c926c1a-d5cf-4865-aa57-66b439d115f8] Received event network-vif-plugged-5ee850cb-507f-4288-993f-a7892e9285c9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:23:37 compute-1 nova_compute[225855]: 2026-01-20 15:23:37.470 225859 DEBUG oslo_concurrency.lockutils [req-d4617dd3-5b54-4a10-a5cd-6fdc9988b37c req-6c9856fc-352d-4360-a66e-2ba109d7c451 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "4c926c1a-d5cf-4865-aa57-66b439d115f8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:23:37 compute-1 nova_compute[225855]: 2026-01-20 15:23:37.471 225859 DEBUG oslo_concurrency.lockutils [req-d4617dd3-5b54-4a10-a5cd-6fdc9988b37c req-6c9856fc-352d-4360-a66e-2ba109d7c451 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "4c926c1a-d5cf-4865-aa57-66b439d115f8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:23:37 compute-1 nova_compute[225855]: 2026-01-20 15:23:37.471 225859 DEBUG oslo_concurrency.lockutils [req-d4617dd3-5b54-4a10-a5cd-6fdc9988b37c req-6c9856fc-352d-4360-a66e-2ba109d7c451 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "4c926c1a-d5cf-4865-aa57-66b439d115f8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:23:37 compute-1 nova_compute[225855]: 2026-01-20 15:23:37.471 225859 DEBUG nova.compute.manager [req-d4617dd3-5b54-4a10-a5cd-6fdc9988b37c req-6c9856fc-352d-4360-a66e-2ba109d7c451 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 4c926c1a-d5cf-4865-aa57-66b439d115f8] No waiting events found dispatching network-vif-plugged-5ee850cb-507f-4288-993f-a7892e9285c9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:23:37 compute-1 nova_compute[225855]: 2026-01-20 15:23:37.472 225859 WARNING nova.compute.manager [req-d4617dd3-5b54-4a10-a5cd-6fdc9988b37c req-6c9856fc-352d-4360-a66e-2ba109d7c451 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 4c926c1a-d5cf-4865-aa57-66b439d115f8] Received unexpected event network-vif-plugged-5ee850cb-507f-4288-993f-a7892e9285c9 for instance with vm_state active and task_state None.
Jan 20 15:23:37 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:23:37 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:23:37 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:23:37.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:23:38 compute-1 podman[312561]: 2026-01-20 15:23:38.014125099 +0000 UTC m=+0.056390641 container health_status 533cd8e0c3b8b7f910f15cef49e7f879ab46c12ce646a35628c15f13725eb500 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 15:23:38 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:23:38 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:23:38 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:23:38.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:23:38 compute-1 nova_compute[225855]: 2026-01-20 15:23:38.338 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:23:38 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:23:38 compute-1 ceph-mon[81775]: pgmap v3043: 321 pgs: 321 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 6.0 KiB/s rd, 30 KiB/s wr, 8 op/s
Jan 20 15:23:39 compute-1 nova_compute[225855]: 2026-01-20 15:23:39.579 225859 DEBUG nova.compute.manager [req-8fa989ab-d9f3-4367-b0c0-a878f94dfda0 req-f98e48e4-dfd5-485c-94a7-df76fc3140b0 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 4c926c1a-d5cf-4865-aa57-66b439d115f8] Received event network-vif-plugged-5ee850cb-507f-4288-993f-a7892e9285c9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:23:39 compute-1 nova_compute[225855]: 2026-01-20 15:23:39.580 225859 DEBUG oslo_concurrency.lockutils [req-8fa989ab-d9f3-4367-b0c0-a878f94dfda0 req-f98e48e4-dfd5-485c-94a7-df76fc3140b0 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "4c926c1a-d5cf-4865-aa57-66b439d115f8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:23:39 compute-1 nova_compute[225855]: 2026-01-20 15:23:39.580 225859 DEBUG oslo_concurrency.lockutils [req-8fa989ab-d9f3-4367-b0c0-a878f94dfda0 req-f98e48e4-dfd5-485c-94a7-df76fc3140b0 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "4c926c1a-d5cf-4865-aa57-66b439d115f8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:23:39 compute-1 nova_compute[225855]: 2026-01-20 15:23:39.580 225859 DEBUG oslo_concurrency.lockutils [req-8fa989ab-d9f3-4367-b0c0-a878f94dfda0 req-f98e48e4-dfd5-485c-94a7-df76fc3140b0 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "4c926c1a-d5cf-4865-aa57-66b439d115f8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:23:39 compute-1 nova_compute[225855]: 2026-01-20 15:23:39.581 225859 DEBUG nova.compute.manager [req-8fa989ab-d9f3-4367-b0c0-a878f94dfda0 req-f98e48e4-dfd5-485c-94a7-df76fc3140b0 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 4c926c1a-d5cf-4865-aa57-66b439d115f8] No waiting events found dispatching network-vif-plugged-5ee850cb-507f-4288-993f-a7892e9285c9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:23:39 compute-1 nova_compute[225855]: 2026-01-20 15:23:39.581 225859 WARNING nova.compute.manager [req-8fa989ab-d9f3-4367-b0c0-a878f94dfda0 req-f98e48e4-dfd5-485c-94a7-df76fc3140b0 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 4c926c1a-d5cf-4865-aa57-66b439d115f8] Received unexpected event network-vif-plugged-5ee850cb-507f-4288-993f-a7892e9285c9 for instance with vm_state active and task_state None.
Jan 20 15:23:39 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:23:39 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:23:39 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:23:39.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:23:40 compute-1 nova_compute[225855]: 2026-01-20 15:23:40.069 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:23:40 compute-1 ceph-mon[81775]: pgmap v3044: 321 pgs: 321 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 6.0 KiB/s rd, 18 KiB/s wr, 6 op/s
Jan 20 15:23:40 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:23:40 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:23:40 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:23:40.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:23:41 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:23:41 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:23:41 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:23:41.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:23:42 compute-1 ceph-mon[81775]: pgmap v3045: 321 pgs: 321 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 18 KiB/s wr, 61 op/s
Jan 20 15:23:42 compute-1 sudo[312582]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:23:42 compute-1 sudo[312582]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:23:42 compute-1 sudo[312582]: pam_unix(sudo:session): session closed for user root
Jan 20 15:23:42 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:23:42 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:23:42 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:23:42.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:23:42 compute-1 sudo[312607]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:23:42 compute-1 sudo[312607]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:23:42 compute-1 sudo[312607]: pam_unix(sudo:session): session closed for user root
Jan 20 15:23:42 compute-1 sudo[312632]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:23:42 compute-1 sudo[312632]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:23:42 compute-1 sudo[312632]: pam_unix(sudo:session): session closed for user root
Jan 20 15:23:42 compute-1 sudo[312657]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 20 15:23:42 compute-1 sudo[312657]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:23:42 compute-1 sudo[312657]: pam_unix(sudo:session): session closed for user root
Jan 20 15:23:43 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:23:43 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 15:23:43 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:23:43 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 15:23:43 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 15:23:43 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:23:43 compute-1 nova_compute[225855]: 2026-01-20 15:23:43.340 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:23:43 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:23:43 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:23:43 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:23:43 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:23:43.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:23:44 compute-1 ceph-mon[81775]: pgmap v3046: 321 pgs: 321 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 0 B/s wr, 71 op/s
Jan 20 15:23:44 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:23:44 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:23:44 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:23:44.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:23:44 compute-1 sudo[312715]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:23:44 compute-1 sudo[312715]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:23:44 compute-1 sudo[312715]: pam_unix(sudo:session): session closed for user root
Jan 20 15:23:44 compute-1 sudo[312740]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:23:44 compute-1 sudo[312740]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:23:44 compute-1 sudo[312740]: pam_unix(sudo:session): session closed for user root
Jan 20 15:23:45 compute-1 nova_compute[225855]: 2026-01-20 15:23:45.071 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:23:45 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:23:45 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:23:45 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:23:45.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:23:46 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:23:46 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:23:46 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:23:46.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:23:46 compute-1 ceph-mon[81775]: pgmap v3047: 321 pgs: 321 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 71 op/s
Jan 20 15:23:47 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:23:47 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:23:47 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:23:47.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:23:48 compute-1 ovn_controller[130490]: 2026-01-20T15:23:48Z|00109|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:cc:0a:92 10.100.0.12
Jan 20 15:23:48 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:23:48 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:23:48 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:23:48.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:23:48 compute-1 nova_compute[225855]: 2026-01-20 15:23:48.342 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:23:48 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:23:48 compute-1 ceph-mon[81775]: pgmap v3048: 321 pgs: 321 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 71 op/s
Jan 20 15:23:49 compute-1 sudo[312767]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:23:49 compute-1 sudo[312767]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:23:49 compute-1 sudo[312767]: pam_unix(sudo:session): session closed for user root
Jan 20 15:23:49 compute-1 sudo[312792]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 15:23:49 compute-1 sudo[312792]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:23:49 compute-1 sudo[312792]: pam_unix(sudo:session): session closed for user root
Jan 20 15:23:49 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:23:49 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:23:49 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:23:49.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:23:50 compute-1 nova_compute[225855]: 2026-01-20 15:23:50.074 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:23:50 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:23:50 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:23:50 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:23:50.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:23:50 compute-1 ceph-mon[81775]: pgmap v3049: 321 pgs: 321 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 66 op/s
Jan 20 15:23:50 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:23:50 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:23:51 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:23:51 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:23:51 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:23:51.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:23:52 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:23:52 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:23:52 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:23:52.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:23:52 compute-1 ceph-mon[81775]: pgmap v3050: 321 pgs: 321 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.0 KiB/s wr, 91 op/s
Jan 20 15:23:53 compute-1 nova_compute[225855]: 2026-01-20 15:23:53.345 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:23:53 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:23:53 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:23:53 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:23:53 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:23:53.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:23:54 compute-1 nova_compute[225855]: 2026-01-20 15:23:54.061 225859 INFO nova.compute.manager [None req-c9455216-e9f0-46e5-bfc1-801dfa3d54cd 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 4c926c1a-d5cf-4865-aa57-66b439d115f8] Get console output
Jan 20 15:23:54 compute-1 nova_compute[225855]: 2026-01-20 15:23:54.069 263775 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Jan 20 15:23:54 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:23:54 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:23:54 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:23:54.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:23:55 compute-1 nova_compute[225855]: 2026-01-20 15:23:55.078 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:23:55 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:23:55 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:23:55 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:23:55.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:23:56 compute-1 nova_compute[225855]: 2026-01-20 15:23:56.155 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:23:56 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:23:56.155 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=70, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '12:bb:42', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '06:92:24:f7:15:56'}, ipsec=False) old=SB_Global(nb_cfg=69) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:23:56 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:23:56.157 140354 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 20 15:23:56 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:23:56 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:23:56 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:23:56.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:23:56 compute-1 nova_compute[225855]: 2026-01-20 15:23:56.508 225859 DEBUG nova.compute.manager [req-07c3a016-d282-40fa-a837-6e129ccf6ed6 req-44ba922c-40f4-45c1-ac90-364d73b9a8f7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 4c926c1a-d5cf-4865-aa57-66b439d115f8] Received event network-changed-5ee850cb-507f-4288-993f-a7892e9285c9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:23:56 compute-1 nova_compute[225855]: 2026-01-20 15:23:56.508 225859 DEBUG nova.compute.manager [req-07c3a016-d282-40fa-a837-6e129ccf6ed6 req-44ba922c-40f4-45c1-ac90-364d73b9a8f7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 4c926c1a-d5cf-4865-aa57-66b439d115f8] Refreshing instance network info cache due to event network-changed-5ee850cb-507f-4288-993f-a7892e9285c9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 15:23:56 compute-1 nova_compute[225855]: 2026-01-20 15:23:56.509 225859 DEBUG oslo_concurrency.lockutils [req-07c3a016-d282-40fa-a837-6e129ccf6ed6 req-44ba922c-40f4-45c1-ac90-364d73b9a8f7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-4c926c1a-d5cf-4865-aa57-66b439d115f8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:23:56 compute-1 nova_compute[225855]: 2026-01-20 15:23:56.509 225859 DEBUG oslo_concurrency.lockutils [req-07c3a016-d282-40fa-a837-6e129ccf6ed6 req-44ba922c-40f4-45c1-ac90-364d73b9a8f7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-4c926c1a-d5cf-4865-aa57-66b439d115f8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:23:56 compute-1 nova_compute[225855]: 2026-01-20 15:23:56.509 225859 DEBUG nova.network.neutron [req-07c3a016-d282-40fa-a837-6e129ccf6ed6 req-44ba922c-40f4-45c1-ac90-364d73b9a8f7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 4c926c1a-d5cf-4865-aa57-66b439d115f8] Refreshing network info cache for port 5ee850cb-507f-4288-993f-a7892e9285c9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 15:23:56 compute-1 ceph-mon[81775]: pgmap v3051: 321 pgs: 321 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 871 KiB/s rd, 12 KiB/s wr, 53 op/s
Jan 20 15:23:56 compute-1 ceph-mon[81775]: pgmap v3052: 321 pgs: 321 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 529 KiB/s rd, 12 KiB/s wr, 42 op/s
Jan 20 15:23:56 compute-1 nova_compute[225855]: 2026-01-20 15:23:56.642 225859 DEBUG oslo_concurrency.lockutils [None req-397d1304-7c92-4fb0-b90e-607b141a4cf2 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Acquiring lock "4c926c1a-d5cf-4865-aa57-66b439d115f8" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:23:56 compute-1 nova_compute[225855]: 2026-01-20 15:23:56.643 225859 DEBUG oslo_concurrency.lockutils [None req-397d1304-7c92-4fb0-b90e-607b141a4cf2 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Lock "4c926c1a-d5cf-4865-aa57-66b439d115f8" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:23:56 compute-1 nova_compute[225855]: 2026-01-20 15:23:56.643 225859 DEBUG oslo_concurrency.lockutils [None req-397d1304-7c92-4fb0-b90e-607b141a4cf2 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Acquiring lock "4c926c1a-d5cf-4865-aa57-66b439d115f8-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:23:56 compute-1 nova_compute[225855]: 2026-01-20 15:23:56.644 225859 DEBUG oslo_concurrency.lockutils [None req-397d1304-7c92-4fb0-b90e-607b141a4cf2 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Lock "4c926c1a-d5cf-4865-aa57-66b439d115f8-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:23:56 compute-1 nova_compute[225855]: 2026-01-20 15:23:56.644 225859 DEBUG oslo_concurrency.lockutils [None req-397d1304-7c92-4fb0-b90e-607b141a4cf2 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Lock "4c926c1a-d5cf-4865-aa57-66b439d115f8-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:23:56 compute-1 nova_compute[225855]: 2026-01-20 15:23:56.645 225859 INFO nova.compute.manager [None req-397d1304-7c92-4fb0-b90e-607b141a4cf2 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 4c926c1a-d5cf-4865-aa57-66b439d115f8] Terminating instance
Jan 20 15:23:56 compute-1 nova_compute[225855]: 2026-01-20 15:23:56.647 225859 DEBUG nova.compute.manager [None req-397d1304-7c92-4fb0-b90e-607b141a4cf2 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 4c926c1a-d5cf-4865-aa57-66b439d115f8] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 20 15:23:56 compute-1 kernel: tap5ee850cb-50 (unregistering): left promiscuous mode
Jan 20 15:23:56 compute-1 NetworkManager[49104]: <info>  [1768922636.9752] device (tap5ee850cb-50): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 20 15:23:57 compute-1 nova_compute[225855]: 2026-01-20 15:23:57.030 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:23:57 compute-1 ovn_controller[130490]: 2026-01-20T15:23:57Z|00907|binding|INFO|Releasing lport 5ee850cb-507f-4288-993f-a7892e9285c9 from this chassis (sb_readonly=0)
Jan 20 15:23:57 compute-1 ovn_controller[130490]: 2026-01-20T15:23:57Z|00908|binding|INFO|Setting lport 5ee850cb-507f-4288-993f-a7892e9285c9 down in Southbound
Jan 20 15:23:57 compute-1 ovn_controller[130490]: 2026-01-20T15:23:57Z|00909|binding|INFO|Removing iface tap5ee850cb-50 ovn-installed in OVS
Jan 20 15:23:57 compute-1 nova_compute[225855]: 2026-01-20 15:23:57.032 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:23:57 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:23:57.037 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cc:0a:92 10.100.0.12'], port_security=['fa:16:3e:cc:0a:92 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '4c926c1a-d5cf-4865-aa57-66b439d115f8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6a8ba0d6-68f0-4a25-84ff-3685d5b259a6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1ed5feeeafe7448a8efb47ab975b0ead', 'neutron:revision_number': '6', 'neutron:security_group_ids': '6f91a697-00ef-4c2f-a4cd-47aa600b459f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a867ed68-782d-4d05-8077-be0278c87405, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], logical_port=5ee850cb-507f-4288-993f-a7892e9285c9) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:23:57 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:23:57.038 140354 INFO neutron.agent.ovn.metadata.agent [-] Port 5ee850cb-507f-4288-993f-a7892e9285c9 in datapath 6a8ba0d6-68f0-4a25-84ff-3685d5b259a6 unbound from our chassis
Jan 20 15:23:57 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:23:57.040 140354 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6a8ba0d6-68f0-4a25-84ff-3685d5b259a6, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 20 15:23:57 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:23:57.041 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[ab9b6210-77c0-4672-8127-1f56bf31bf33]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:23:57 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:23:57.042 140354 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-6a8ba0d6-68f0-4a25-84ff-3685d5b259a6 namespace which is not needed anymore
Jan 20 15:23:57 compute-1 nova_compute[225855]: 2026-01-20 15:23:57.046 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:23:57 compute-1 systemd[1]: machine-qemu\x2d106\x2dinstance\x2d000000c8.scope: Deactivated successfully.
Jan 20 15:23:57 compute-1 systemd[1]: machine-qemu\x2d106\x2dinstance\x2d000000c8.scope: Consumed 13.566s CPU time.
Jan 20 15:23:57 compute-1 systemd-machined[194361]: Machine qemu-106-instance-000000c8 terminated.
Jan 20 15:23:57 compute-1 neutron-haproxy-ovnmeta-6a8ba0d6-68f0-4a25-84ff-3685d5b259a6[312545]: [NOTICE]   (312549) : haproxy version is 2.8.14-c23fe91
Jan 20 15:23:57 compute-1 neutron-haproxy-ovnmeta-6a8ba0d6-68f0-4a25-84ff-3685d5b259a6[312545]: [NOTICE]   (312549) : path to executable is /usr/sbin/haproxy
Jan 20 15:23:57 compute-1 neutron-haproxy-ovnmeta-6a8ba0d6-68f0-4a25-84ff-3685d5b259a6[312545]: [WARNING]  (312549) : Exiting Master process...
Jan 20 15:23:57 compute-1 neutron-haproxy-ovnmeta-6a8ba0d6-68f0-4a25-84ff-3685d5b259a6[312545]: [ALERT]    (312549) : Current worker (312551) exited with code 143 (Terminated)
Jan 20 15:23:57 compute-1 neutron-haproxy-ovnmeta-6a8ba0d6-68f0-4a25-84ff-3685d5b259a6[312545]: [WARNING]  (312549) : All workers exited. Exiting... (0)
Jan 20 15:23:57 compute-1 systemd[1]: libpod-3e180e523263b2efe6b366dda5dbd7316202839bfa973cebc4d886e9b759bb3c.scope: Deactivated successfully.
Jan 20 15:23:57 compute-1 podman[312845]: 2026-01-20 15:23:57.206068342 +0000 UTC m=+0.047138597 container died 3e180e523263b2efe6b366dda5dbd7316202839bfa973cebc4d886e9b759bb3c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6a8ba0d6-68f0-4a25-84ff-3685d5b259a6, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true)
Jan 20 15:23:57 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-3e180e523263b2efe6b366dda5dbd7316202839bfa973cebc4d886e9b759bb3c-userdata-shm.mount: Deactivated successfully.
Jan 20 15:23:57 compute-1 systemd[1]: var-lib-containers-storage-overlay-260d7576a6afe1e83d647994bef54bb9257457e8b2f09eff78519d9b0206d51a-merged.mount: Deactivated successfully.
Jan 20 15:23:57 compute-1 podman[312845]: 2026-01-20 15:23:57.246311471 +0000 UTC m=+0.087381706 container cleanup 3e180e523263b2efe6b366dda5dbd7316202839bfa973cebc4d886e9b759bb3c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6a8ba0d6-68f0-4a25-84ff-3685d5b259a6, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 20 15:23:57 compute-1 systemd[1]: libpod-conmon-3e180e523263b2efe6b366dda5dbd7316202839bfa973cebc4d886e9b759bb3c.scope: Deactivated successfully.
Jan 20 15:23:57 compute-1 nova_compute[225855]: 2026-01-20 15:23:57.283 225859 INFO nova.virt.libvirt.driver [-] [instance: 4c926c1a-d5cf-4865-aa57-66b439d115f8] Instance destroyed successfully.
Jan 20 15:23:57 compute-1 nova_compute[225855]: 2026-01-20 15:23:57.284 225859 DEBUG nova.objects.instance [None req-397d1304-7c92-4fb0-b90e-607b141a4cf2 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Lazy-loading 'resources' on Instance uuid 4c926c1a-d5cf-4865-aa57-66b439d115f8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:23:57 compute-1 nova_compute[225855]: 2026-01-20 15:23:57.319 225859 DEBUG nova.virt.libvirt.vif [None req-397d1304-7c92-4fb0-b90e-607b141a4cf2 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T15:22:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1485210936',display_name='tempest-TestNetworkAdvancedServerOps-server-1485210936',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1485210936',id=200,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFcP4UZM8O63AIoOVMDQxqy+5NyOff2+4sn+/fo/i9F+a5f/348af0mVz9/8dEzIolLAAsu+/bRAcsRarrSFS+xLm/g8/3nv4fTZhMf6io52heWTDMNl7eXefIdDoA+XVw==',key_name='tempest-TestNetworkAdvancedServerOps-577536853',keypairs=<?>,launch_index=0,launched_at=2026-01-20T15:23:06Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='1ed5feeeafe7448a8efb47ab975b0ead',ramdisk_id='',reservation_id='r-y87mmd4a',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-175282664',owner_user_name='tempest-TestNetworkAdvancedServerOps-175282664-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T15:23:35Z,user_data=None,user_id='442a7a5cb8ea426a82be9762b262d171',uuid=4c926c1a-d5cf-4865-aa57-66b439d115f8,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5ee850cb-507f-4288-993f-a7892e9285c9", "address": "fa:16:3e:cc:0a:92", "network": {"id": "6a8ba0d6-68f0-4a25-84ff-3685d5b259a6", "bridge": "br-int", "label": "tempest-network-smoke--805773012", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.176", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1ed5feeeafe7448a8efb47ab975b0ead", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5ee850cb-50", "ovs_interfaceid": "5ee850cb-507f-4288-993f-a7892e9285c9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 20 15:23:57 compute-1 nova_compute[225855]: 2026-01-20 15:23:57.320 225859 DEBUG nova.network.os_vif_util [None req-397d1304-7c92-4fb0-b90e-607b141a4cf2 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Converting VIF {"id": "5ee850cb-507f-4288-993f-a7892e9285c9", "address": "fa:16:3e:cc:0a:92", "network": {"id": "6a8ba0d6-68f0-4a25-84ff-3685d5b259a6", "bridge": "br-int", "label": "tempest-network-smoke--805773012", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.176", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1ed5feeeafe7448a8efb47ab975b0ead", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5ee850cb-50", "ovs_interfaceid": "5ee850cb-507f-4288-993f-a7892e9285c9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 15:23:57 compute-1 nova_compute[225855]: 2026-01-20 15:23:57.321 225859 DEBUG nova.network.os_vif_util [None req-397d1304-7c92-4fb0-b90e-607b141a4cf2 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cc:0a:92,bridge_name='br-int',has_traffic_filtering=True,id=5ee850cb-507f-4288-993f-a7892e9285c9,network=Network(6a8ba0d6-68f0-4a25-84ff-3685d5b259a6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5ee850cb-50') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 15:23:57 compute-1 nova_compute[225855]: 2026-01-20 15:23:57.322 225859 DEBUG os_vif [None req-397d1304-7c92-4fb0-b90e-607b141a4cf2 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:cc:0a:92,bridge_name='br-int',has_traffic_filtering=True,id=5ee850cb-507f-4288-993f-a7892e9285c9,network=Network(6a8ba0d6-68f0-4a25-84ff-3685d5b259a6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5ee850cb-50') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 20 15:23:57 compute-1 nova_compute[225855]: 2026-01-20 15:23:57.324 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:23:57 compute-1 nova_compute[225855]: 2026-01-20 15:23:57.324 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5ee850cb-50, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:23:57 compute-1 nova_compute[225855]: 2026-01-20 15:23:57.326 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:23:57 compute-1 nova_compute[225855]: 2026-01-20 15:23:57.328 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:23:57 compute-1 nova_compute[225855]: 2026-01-20 15:23:57.331 225859 INFO os_vif [None req-397d1304-7c92-4fb0-b90e-607b141a4cf2 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:cc:0a:92,bridge_name='br-int',has_traffic_filtering=True,id=5ee850cb-507f-4288-993f-a7892e9285c9,network=Network(6a8ba0d6-68f0-4a25-84ff-3685d5b259a6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5ee850cb-50')
Jan 20 15:23:57 compute-1 nova_compute[225855]: 2026-01-20 15:23:57.408 225859 DEBUG nova.compute.manager [req-4224c930-8fbc-41be-b7ce-efe96bba867f req-c2b2ea51-9e40-41ba-a605-a47c2fe1c36e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 4c926c1a-d5cf-4865-aa57-66b439d115f8] Received event network-vif-unplugged-5ee850cb-507f-4288-993f-a7892e9285c9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:23:57 compute-1 nova_compute[225855]: 2026-01-20 15:23:57.408 225859 DEBUG oslo_concurrency.lockutils [req-4224c930-8fbc-41be-b7ce-efe96bba867f req-c2b2ea51-9e40-41ba-a605-a47c2fe1c36e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "4c926c1a-d5cf-4865-aa57-66b439d115f8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:23:57 compute-1 nova_compute[225855]: 2026-01-20 15:23:57.409 225859 DEBUG oslo_concurrency.lockutils [req-4224c930-8fbc-41be-b7ce-efe96bba867f req-c2b2ea51-9e40-41ba-a605-a47c2fe1c36e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "4c926c1a-d5cf-4865-aa57-66b439d115f8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:23:57 compute-1 nova_compute[225855]: 2026-01-20 15:23:57.409 225859 DEBUG oslo_concurrency.lockutils [req-4224c930-8fbc-41be-b7ce-efe96bba867f req-c2b2ea51-9e40-41ba-a605-a47c2fe1c36e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "4c926c1a-d5cf-4865-aa57-66b439d115f8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:23:57 compute-1 nova_compute[225855]: 2026-01-20 15:23:57.410 225859 DEBUG nova.compute.manager [req-4224c930-8fbc-41be-b7ce-efe96bba867f req-c2b2ea51-9e40-41ba-a605-a47c2fe1c36e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 4c926c1a-d5cf-4865-aa57-66b439d115f8] No waiting events found dispatching network-vif-unplugged-5ee850cb-507f-4288-993f-a7892e9285c9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:23:57 compute-1 nova_compute[225855]: 2026-01-20 15:23:57.410 225859 DEBUG nova.compute.manager [req-4224c930-8fbc-41be-b7ce-efe96bba867f req-c2b2ea51-9e40-41ba-a605-a47c2fe1c36e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 4c926c1a-d5cf-4865-aa57-66b439d115f8] Received event network-vif-unplugged-5ee850cb-507f-4288-993f-a7892e9285c9 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 20 15:23:57 compute-1 podman[312875]: 2026-01-20 15:23:57.795003617 +0000 UTC m=+0.522480789 container remove 3e180e523263b2efe6b366dda5dbd7316202839bfa973cebc4d886e9b759bb3c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6a8ba0d6-68f0-4a25-84ff-3685d5b259a6, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Jan 20 15:23:57 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:23:57.802 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[62470e61-6a4c-4769-be37-3896431c6c4d]: (4, ('Tue Jan 20 03:23:57 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-6a8ba0d6-68f0-4a25-84ff-3685d5b259a6 (3e180e523263b2efe6b366dda5dbd7316202839bfa973cebc4d886e9b759bb3c)\n3e180e523263b2efe6b366dda5dbd7316202839bfa973cebc4d886e9b759bb3c\nTue Jan 20 03:23:57 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-6a8ba0d6-68f0-4a25-84ff-3685d5b259a6 (3e180e523263b2efe6b366dda5dbd7316202839bfa973cebc4d886e9b759bb3c)\n3e180e523263b2efe6b366dda5dbd7316202839bfa973cebc4d886e9b759bb3c\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:23:57 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:23:57.804 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[fce504d5-9841-4511-9a5a-216c8404978f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:23:57 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:23:57.805 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6a8ba0d6-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:23:57 compute-1 nova_compute[225855]: 2026-01-20 15:23:57.807 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:23:57 compute-1 kernel: tap6a8ba0d6-60: left promiscuous mode
Jan 20 15:23:57 compute-1 nova_compute[225855]: 2026-01-20 15:23:57.823 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:23:57 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:23:57.827 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[c8f2ecdf-82c2-4705-9029-59acedd65da1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:23:57 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:23:57.844 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[63af6880-c62a-4fe7-91a9-f7c037ada8aa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:23:57 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:23:57.845 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[a5918890-bb72-440a-8ae8-c6462a49af31]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:23:57 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:23:57.860 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[d4ab5146-c2e0-4bb1-bf9e-0cbb611bec92]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 765413, 'reachable_time': 26586, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 312920, 'error': None, 'target': 'ovnmeta-6a8ba0d6-68f0-4a25-84ff-3685d5b259a6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:23:57 compute-1 systemd[1]: run-netns-ovnmeta\x2d6a8ba0d6\x2d68f0\x2d4a25\x2d84ff\x2d3685d5b259a6.mount: Deactivated successfully.
Jan 20 15:23:57 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:23:57.863 140466 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-6a8ba0d6-68f0-4a25-84ff-3685d5b259a6 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 20 15:23:57 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:23:57.864 140466 DEBUG oslo.privsep.daemon [-] privsep: reply[d096177c-bfbe-4ca4-8a36-f6a190c9ff6e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:23:57 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:23:57 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:23:57 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:23:57.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:23:58 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:23:58 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:23:58 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:23:58.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:23:58 compute-1 ceph-mon[81775]: pgmap v3053: 321 pgs: 321 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 529 KiB/s rd, 12 KiB/s wr, 42 op/s
Jan 20 15:23:58 compute-1 nova_compute[225855]: 2026-01-20 15:23:58.345 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:23:58 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:23:59 compute-1 podman[312921]: 2026-01-20 15:23:59.028336079 +0000 UTC m=+0.074652062 container health_status 72be95c12e041eb2cd1a16ad70dfa1391d9f03a5f3180a614ead9459dcfcd46f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, 
org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible)
Jan 20 15:23:59 compute-1 nova_compute[225855]: 2026-01-20 15:23:59.549 225859 DEBUG nova.compute.manager [req-47eb6ccc-e7ad-4fb4-bdc1-b25c3095929e req-cfc6fe84-6dd7-4d28-8ac1-b4053358ddbe 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 4c926c1a-d5cf-4865-aa57-66b439d115f8] Received event network-vif-plugged-5ee850cb-507f-4288-993f-a7892e9285c9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:23:59 compute-1 nova_compute[225855]: 2026-01-20 15:23:59.549 225859 DEBUG oslo_concurrency.lockutils [req-47eb6ccc-e7ad-4fb4-bdc1-b25c3095929e req-cfc6fe84-6dd7-4d28-8ac1-b4053358ddbe 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "4c926c1a-d5cf-4865-aa57-66b439d115f8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:23:59 compute-1 nova_compute[225855]: 2026-01-20 15:23:59.550 225859 DEBUG oslo_concurrency.lockutils [req-47eb6ccc-e7ad-4fb4-bdc1-b25c3095929e req-cfc6fe84-6dd7-4d28-8ac1-b4053358ddbe 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "4c926c1a-d5cf-4865-aa57-66b439d115f8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:23:59 compute-1 nova_compute[225855]: 2026-01-20 15:23:59.550 225859 DEBUG oslo_concurrency.lockutils [req-47eb6ccc-e7ad-4fb4-bdc1-b25c3095929e req-cfc6fe84-6dd7-4d28-8ac1-b4053358ddbe 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "4c926c1a-d5cf-4865-aa57-66b439d115f8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:23:59 compute-1 nova_compute[225855]: 2026-01-20 15:23:59.550 225859 DEBUG nova.compute.manager [req-47eb6ccc-e7ad-4fb4-bdc1-b25c3095929e req-cfc6fe84-6dd7-4d28-8ac1-b4053358ddbe 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 4c926c1a-d5cf-4865-aa57-66b439d115f8] No waiting events found dispatching network-vif-plugged-5ee850cb-507f-4288-993f-a7892e9285c9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:23:59 compute-1 nova_compute[225855]: 2026-01-20 15:23:59.550 225859 WARNING nova.compute.manager [req-47eb6ccc-e7ad-4fb4-bdc1-b25c3095929e req-cfc6fe84-6dd7-4d28-8ac1-b4053358ddbe 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 4c926c1a-d5cf-4865-aa57-66b439d115f8] Received unexpected event network-vif-plugged-5ee850cb-507f-4288-993f-a7892e9285c9 for instance with vm_state active and task_state deleting.
Jan 20 15:23:59 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:23:59 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:23:59 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:23:59.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:24:00 compute-1 nova_compute[225855]: 2026-01-20 15:24:00.074 225859 DEBUG nova.network.neutron [req-07c3a016-d282-40fa-a837-6e129ccf6ed6 req-44ba922c-40f4-45c1-ac90-364d73b9a8f7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 4c926c1a-d5cf-4865-aa57-66b439d115f8] Updated VIF entry in instance network info cache for port 5ee850cb-507f-4288-993f-a7892e9285c9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 15:24:00 compute-1 nova_compute[225855]: 2026-01-20 15:24:00.075 225859 DEBUG nova.network.neutron [req-07c3a016-d282-40fa-a837-6e129ccf6ed6 req-44ba922c-40f4-45c1-ac90-364d73b9a8f7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 4c926c1a-d5cf-4865-aa57-66b439d115f8] Updating instance_info_cache with network_info: [{"id": "5ee850cb-507f-4288-993f-a7892e9285c9", "address": "fa:16:3e:cc:0a:92", "network": {"id": "6a8ba0d6-68f0-4a25-84ff-3685d5b259a6", "bridge": "br-int", "label": "tempest-network-smoke--805773012", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1ed5feeeafe7448a8efb47ab975b0ead", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5ee850cb-50", "ovs_interfaceid": "5ee850cb-507f-4288-993f-a7892e9285c9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:24:00 compute-1 nova_compute[225855]: 2026-01-20 15:24:00.123 225859 INFO nova.virt.libvirt.driver [None req-397d1304-7c92-4fb0-b90e-607b141a4cf2 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 4c926c1a-d5cf-4865-aa57-66b439d115f8] Deleting instance files /var/lib/nova/instances/4c926c1a-d5cf-4865-aa57-66b439d115f8_del
Jan 20 15:24:00 compute-1 nova_compute[225855]: 2026-01-20 15:24:00.124 225859 INFO nova.virt.libvirt.driver [None req-397d1304-7c92-4fb0-b90e-607b141a4cf2 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 4c926c1a-d5cf-4865-aa57-66b439d115f8] Deletion of /var/lib/nova/instances/4c926c1a-d5cf-4865-aa57-66b439d115f8_del complete
Jan 20 15:24:00 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:24:00 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:24:00 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:24:00.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:24:00 compute-1 nova_compute[225855]: 2026-01-20 15:24:00.278 225859 DEBUG oslo_concurrency.lockutils [req-07c3a016-d282-40fa-a837-6e129ccf6ed6 req-44ba922c-40f4-45c1-ac90-364d73b9a8f7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-4c926c1a-d5cf-4865-aa57-66b439d115f8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:24:00 compute-1 nova_compute[225855]: 2026-01-20 15:24:00.287 225859 INFO nova.compute.manager [None req-397d1304-7c92-4fb0-b90e-607b141a4cf2 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 4c926c1a-d5cf-4865-aa57-66b439d115f8] Took 3.64 seconds to destroy the instance on the hypervisor.
Jan 20 15:24:00 compute-1 nova_compute[225855]: 2026-01-20 15:24:00.287 225859 DEBUG oslo.service.loopingcall [None req-397d1304-7c92-4fb0-b90e-607b141a4cf2 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 20 15:24:00 compute-1 nova_compute[225855]: 2026-01-20 15:24:00.288 225859 DEBUG nova.compute.manager [-] [instance: 4c926c1a-d5cf-4865-aa57-66b439d115f8] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 20 15:24:00 compute-1 nova_compute[225855]: 2026-01-20 15:24:00.288 225859 DEBUG nova.network.neutron [-] [instance: 4c926c1a-d5cf-4865-aa57-66b439d115f8] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 20 15:24:00 compute-1 ceph-mon[81775]: pgmap v3054: 321 pgs: 321 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 529 KiB/s rd, 12 KiB/s wr, 42 op/s
Jan 20 15:24:01 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:24:01.158 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5ffd4ac3-9266-4927-98ad-20a17782c725, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '70'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:24:01 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:24:01 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:24:01 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:24:01.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:24:02 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:24:02 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:24:02 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:24:02.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:24:02 compute-1 nova_compute[225855]: 2026-01-20 15:24:02.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:24:02 compute-1 nova_compute[225855]: 2026-01-20 15:24:02.340 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 15:24:02 compute-1 nova_compute[225855]: 2026-01-20 15:24:02.380 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:24:02 compute-1 nova_compute[225855]: 2026-01-20 15:24:02.420 225859 DEBUG nova.network.neutron [-] [instance: 4c926c1a-d5cf-4865-aa57-66b439d115f8] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:24:02 compute-1 nova_compute[225855]: 2026-01-20 15:24:02.435 225859 INFO nova.compute.manager [-] [instance: 4c926c1a-d5cf-4865-aa57-66b439d115f8] Took 2.15 seconds to deallocate network for instance.
Jan 20 15:24:02 compute-1 nova_compute[225855]: 2026-01-20 15:24:02.478 225859 DEBUG oslo_concurrency.lockutils [None req-397d1304-7c92-4fb0-b90e-607b141a4cf2 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:24:02 compute-1 nova_compute[225855]: 2026-01-20 15:24:02.478 225859 DEBUG oslo_concurrency.lockutils [None req-397d1304-7c92-4fb0-b90e-607b141a4cf2 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:24:02 compute-1 nova_compute[225855]: 2026-01-20 15:24:02.504 225859 DEBUG nova.compute.manager [req-c6ce2d9a-27e3-480a-868d-fbcdd2877fe2 req-b0327b2a-433b-4fde-afad-5a2318e8125c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 4c926c1a-d5cf-4865-aa57-66b439d115f8] Received event network-vif-deleted-5ee850cb-507f-4288-993f-a7892e9285c9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:24:02 compute-1 nova_compute[225855]: 2026-01-20 15:24:02.535 225859 DEBUG oslo_concurrency.processutils [None req-397d1304-7c92-4fb0-b90e-607b141a4cf2 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:24:02 compute-1 ceph-mon[81775]: pgmap v3055: 321 pgs: 321 active+clean; 175 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 540 KiB/s rd, 21 KiB/s wr, 57 op/s
Jan 20 15:24:02 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:24:02 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2239924417' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:24:02 compute-1 nova_compute[225855]: 2026-01-20 15:24:02.966 225859 DEBUG oslo_concurrency.processutils [None req-397d1304-7c92-4fb0-b90e-607b141a4cf2 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:24:02 compute-1 nova_compute[225855]: 2026-01-20 15:24:02.971 225859 DEBUG nova.compute.provider_tree [None req-397d1304-7c92-4fb0-b90e-607b141a4cf2 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:24:03 compute-1 nova_compute[225855]: 2026-01-20 15:24:02.999 225859 DEBUG nova.scheduler.client.report [None req-397d1304-7c92-4fb0-b90e-607b141a4cf2 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:24:03 compute-1 nova_compute[225855]: 2026-01-20 15:24:03.042 225859 DEBUG oslo_concurrency.lockutils [None req-397d1304-7c92-4fb0-b90e-607b141a4cf2 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.564s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:24:03 compute-1 nova_compute[225855]: 2026-01-20 15:24:03.075 225859 INFO nova.scheduler.client.report [None req-397d1304-7c92-4fb0-b90e-607b141a4cf2 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Deleted allocations for instance 4c926c1a-d5cf-4865-aa57-66b439d115f8
Jan 20 15:24:03 compute-1 nova_compute[225855]: 2026-01-20 15:24:03.157 225859 DEBUG oslo_concurrency.lockutils [None req-397d1304-7c92-4fb0-b90e-607b141a4cf2 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Lock "4c926c1a-d5cf-4865-aa57-66b439d115f8" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.514s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:24:03 compute-1 nova_compute[225855]: 2026-01-20 15:24:03.347 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:24:03 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:24:03 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/2239924417' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:24:03 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:24:03 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:24:03 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:24:03.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:24:04 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:24:04 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:24:04 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:24:04.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:24:04 compute-1 ceph-mon[81775]: pgmap v3056: 321 pgs: 321 active+clean; 144 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 210 KiB/s rd, 22 KiB/s wr, 44 op/s
Jan 20 15:24:05 compute-1 sudo[312975]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:24:05 compute-1 sudo[312975]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:24:05 compute-1 sudo[312975]: pam_unix(sudo:session): session closed for user root
Jan 20 15:24:05 compute-1 sudo[313000]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:24:05 compute-1 sudo[313000]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:24:05 compute-1 sudo[313000]: pam_unix(sudo:session): session closed for user root
Jan 20 15:24:05 compute-1 nova_compute[225855]: 2026-01-20 15:24:05.769 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:24:05 compute-1 nova_compute[225855]: 2026-01-20 15:24:05.886 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:24:05 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:24:05 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:24:05 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:24:05.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:24:06 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:24:06 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:24:06 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:24:06.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:24:06 compute-1 nova_compute[225855]: 2026-01-20 15:24:06.340 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:24:06 compute-1 nova_compute[225855]: 2026-01-20 15:24:06.341 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 15:24:06 compute-1 nova_compute[225855]: 2026-01-20 15:24:06.342 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 20 15:24:06 compute-1 nova_compute[225855]: 2026-01-20 15:24:06.427 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 20 15:24:06 compute-1 ceph-mon[81775]: pgmap v3057: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 21 KiB/s rd, 12 KiB/s wr, 29 op/s
Jan 20 15:24:06 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/861187482' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:24:07 compute-1 nova_compute[225855]: 2026-01-20 15:24:07.430 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:24:07 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/95525792' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:24:07 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:24:07 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:24:07 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:24:07.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:24:08 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:24:08 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:24:08 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:24:08.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:24:08 compute-1 nova_compute[225855]: 2026-01-20 15:24:08.340 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:24:08 compute-1 nova_compute[225855]: 2026-01-20 15:24:08.349 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:24:08 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:24:08 compute-1 ceph-mon[81775]: pgmap v3058: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 21 KiB/s rd, 12 KiB/s wr, 29 op/s
Jan 20 15:24:09 compute-1 podman[313028]: 2026-01-20 15:24:09.008581746 +0000 UTC m=+0.055075553 container health_status 533cd8e0c3b8b7f910f15cef49e7f879ab46c12ce646a35628c15f13725eb500 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Jan 20 15:24:09 compute-1 nova_compute[225855]: 2026-01-20 15:24:09.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:24:09 compute-1 nova_compute[225855]: 2026-01-20 15:24:09.340 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:24:09 compute-1 nova_compute[225855]: 2026-01-20 15:24:09.376 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:24:09 compute-1 nova_compute[225855]: 2026-01-20 15:24:09.376 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:24:09 compute-1 nova_compute[225855]: 2026-01-20 15:24:09.376 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:24:09 compute-1 nova_compute[225855]: 2026-01-20 15:24:09.376 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 15:24:09 compute-1 nova_compute[225855]: 2026-01-20 15:24:09.377 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:24:09 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:24:09 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/960585230' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:24:09 compute-1 ceph-mon[81775]: pgmap v3059: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 20 KiB/s rd, 12 KiB/s wr, 28 op/s
Jan 20 15:24:09 compute-1 nova_compute[225855]: 2026-01-20 15:24:09.806 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:24:09 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:24:09 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:24:09 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:24:09.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:24:09 compute-1 nova_compute[225855]: 2026-01-20 15:24:09.948 225859 WARNING nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 15:24:09 compute-1 nova_compute[225855]: 2026-01-20 15:24:09.949 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4250MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 15:24:09 compute-1 nova_compute[225855]: 2026-01-20 15:24:09.949 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:24:09 compute-1 nova_compute[225855]: 2026-01-20 15:24:09.950 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:24:10 compute-1 nova_compute[225855]: 2026-01-20 15:24:10.017 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 15:24:10 compute-1 nova_compute[225855]: 2026-01-20 15:24:10.018 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 15:24:10 compute-1 nova_compute[225855]: 2026-01-20 15:24:10.033 225859 DEBUG nova.scheduler.client.report [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Refreshing inventories for resource provider bbb02880-a710-4ac1-8b2c-5c09765848d1 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 20 15:24:10 compute-1 nova_compute[225855]: 2026-01-20 15:24:10.050 225859 DEBUG nova.scheduler.client.report [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Updating ProviderTree inventory for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 20 15:24:10 compute-1 nova_compute[225855]: 2026-01-20 15:24:10.051 225859 DEBUG nova.compute.provider_tree [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Updating inventory in ProviderTree for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 20 15:24:10 compute-1 nova_compute[225855]: 2026-01-20 15:24:10.078 225859 DEBUG nova.scheduler.client.report [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Refreshing aggregate associations for resource provider bbb02880-a710-4ac1-8b2c-5c09765848d1, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 20 15:24:10 compute-1 nova_compute[225855]: 2026-01-20 15:24:10.103 225859 DEBUG nova.scheduler.client.report [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Refreshing trait associations for resource provider bbb02880-a710-4ac1-8b2c-5c09765848d1, traits: COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_STORAGE_BUS_SATA,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_TRUSTED_CERTS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSSE3,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_VOLUME_EXTEND,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_RESCUE_BFV,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_USB,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE42,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SOCKET_PCI_NUMA_AFFINITY _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 20 15:24:10 compute-1 nova_compute[225855]: 2026-01-20 15:24:10.121 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:24:10 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:24:10 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:24:10 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:24:10.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:24:10 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:24:10 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/110819138' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:24:10 compute-1 nova_compute[225855]: 2026-01-20 15:24:10.567 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:24:10 compute-1 nova_compute[225855]: 2026-01-20 15:24:10.572 225859 DEBUG nova.compute.provider_tree [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:24:10 compute-1 nova_compute[225855]: 2026-01-20 15:24:10.597 225859 DEBUG nova.scheduler.client.report [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:24:10 compute-1 nova_compute[225855]: 2026-01-20 15:24:10.617 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 15:24:10 compute-1 nova_compute[225855]: 2026-01-20 15:24:10.618 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.668s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:24:10 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/960585230' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:24:10 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/110819138' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:24:11 compute-1 nova_compute[225855]: 2026-01-20 15:24:11.618 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:24:11 compute-1 nova_compute[225855]: 2026-01-20 15:24:11.619 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:24:11 compute-1 ceph-mon[81775]: pgmap v3060: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 20 KiB/s rd, 12 KiB/s wr, 28 op/s
Jan 20 15:24:11 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/2015578960' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:24:11 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:24:11 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:24:11 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:24:11.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:24:12 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:24:12 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:24:12 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:24:12.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:24:12 compute-1 nova_compute[225855]: 2026-01-20 15:24:12.282 225859 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1768922637.2804992, 4c926c1a-d5cf-4865-aa57-66b439d115f8 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:24:12 compute-1 nova_compute[225855]: 2026-01-20 15:24:12.282 225859 INFO nova.compute.manager [-] [instance: 4c926c1a-d5cf-4865-aa57-66b439d115f8] VM Stopped (Lifecycle Event)
Jan 20 15:24:12 compute-1 nova_compute[225855]: 2026-01-20 15:24:12.328 225859 DEBUG nova.compute.manager [None req-fdbb8f95-eee1-4bb6-9e85-e0150eda8ccd - - - - - -] [instance: 4c926c1a-d5cf-4865-aa57-66b439d115f8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:24:12 compute-1 nova_compute[225855]: 2026-01-20 15:24:12.435 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:24:12 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/171582387' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:24:13 compute-1 nova_compute[225855]: 2026-01-20 15:24:13.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:24:13 compute-1 nova_compute[225855]: 2026-01-20 15:24:13.340 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:24:13 compute-1 nova_compute[225855]: 2026-01-20 15:24:13.352 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:24:13 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:24:13 compute-1 ceph-mon[81775]: pgmap v3061: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 9.2 KiB/s rd, 2.9 KiB/s wr, 14 op/s
Jan 20 15:24:13 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/2614624376' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:24:13 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/2614624376' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:24:13 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:24:13 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:24:13 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:24:13.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:24:14 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:24:14 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:24:14 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:24:14.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:24:14 compute-1 nova_compute[225855]: 2026-01-20 15:24:14.349 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:24:15 compute-1 ceph-mon[81775]: pgmap v3062: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 938 B/s rd, 341 B/s wr, 2 op/s
Jan 20 15:24:15 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:24:15 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:24:15 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:24:15.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:24:16 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:24:16 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:24:16 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:24:16.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:24:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:24:16.445 140354 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:24:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:24:16.445 140354 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:24:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:24:16.445 140354 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:24:17 compute-1 nova_compute[225855]: 2026-01-20 15:24:17.439 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:24:17 compute-1 ceph-mon[81775]: pgmap v3063: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:24:17 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:24:17 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:24:17 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:24:17.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:24:18 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:24:18 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:24:18 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:24:18.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:24:18 compute-1 nova_compute[225855]: 2026-01-20 15:24:18.353 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:24:18 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:24:19 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:24:19 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:24:19 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:24:19.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:24:20 compute-1 ceph-mon[81775]: pgmap v3064: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:24:20 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:24:20 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:24:20 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:24:20.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:24:21 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:24:21 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:24:21 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:24:21.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:24:22 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:24:22 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:24:22 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:24:22.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:24:22 compute-1 nova_compute[225855]: 2026-01-20 15:24:22.494 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:24:22 compute-1 ceph-mon[81775]: pgmap v3065: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:24:23 compute-1 nova_compute[225855]: 2026-01-20 15:24:23.355 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:24:23 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:24:23 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:24:23 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:24:23 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:24:23.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:24:24 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:24:24 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:24:24 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:24:24.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:24:24 compute-1 ceph-mon[81775]: pgmap v3066: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:24:24 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/1399840892' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:24:25 compute-1 sudo[313100]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:24:25 compute-1 sudo[313100]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:24:25 compute-1 sudo[313100]: pam_unix(sudo:session): session closed for user root
Jan 20 15:24:25 compute-1 sudo[313125]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:24:25 compute-1 sudo[313125]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:24:25 compute-1 sudo[313125]: pam_unix(sudo:session): session closed for user root
Jan 20 15:24:25 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:24:25 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:24:25 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:24:25.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:24:26 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:24:26 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:24:26 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:24:26.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:24:27 compute-1 ceph-mon[81775]: pgmap v3067: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:24:27 compute-1 nova_compute[225855]: 2026-01-20 15:24:27.499 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:24:27 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:24:27 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:24:27 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:24:27.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:24:28 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:24:28 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:24:28 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:24:28.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:24:28 compute-1 nova_compute[225855]: 2026-01-20 15:24:28.357 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:24:28 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:24:29 compute-1 ceph-mon[81775]: pgmap v3068: 321 pgs: 321 active+clean; 161 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 7.0 KiB/s rd, 1.3 MiB/s wr, 14 op/s
Jan 20 15:24:29 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:24:29 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:24:29 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:24:29.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:24:30 compute-1 podman[313153]: 2026-01-20 15:24:30.036832188 +0000 UTC m=+0.087874340 container health_status 72be95c12e041eb2cd1a16ad70dfa1391d9f03a5f3180a614ead9459dcfcd46f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 20 15:24:30 compute-1 ceph-mon[81775]: pgmap v3069: 321 pgs: 321 active+clean; 161 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 7.0 KiB/s rd, 1.3 MiB/s wr, 14 op/s
Jan 20 15:24:30 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:24:30 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:24:30 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:24:30.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:24:30 compute-1 nova_compute[225855]: 2026-01-20 15:24:30.340 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:24:30 compute-1 nova_compute[225855]: 2026-01-20 15:24:30.341 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 20 15:24:31 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/758575018' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:24:31 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:24:31 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:24:31 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:24:31.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:24:32 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:24:32 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:24:32 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:24:32.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:24:32 compute-1 ceph-mon[81775]: pgmap v3070: 321 pgs: 321 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 20 15:24:32 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/592825777' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:24:32 compute-1 nova_compute[225855]: 2026-01-20 15:24:32.501 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:24:33 compute-1 nova_compute[225855]: 2026-01-20 15:24:33.360 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:24:33 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:24:33 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:24:33 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:24:33 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:24:33.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:24:34 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:24:34 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:24:34 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:24:34.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:24:34 compute-1 ceph-mon[81775]: pgmap v3071: 321 pgs: 321 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 20 15:24:35 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:24:35 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:24:35 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:24:35.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:24:36 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:24:36 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:24:36 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:24:36.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:24:36 compute-1 ceph-mon[81775]: pgmap v3072: 321 pgs: 321 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 29 op/s
Jan 20 15:24:37 compute-1 nova_compute[225855]: 2026-01-20 15:24:37.505 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:24:37 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:24:37 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:24:37 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:24:37.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:24:38 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:24:38 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:24:38 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:24:38.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:24:38 compute-1 nova_compute[225855]: 2026-01-20 15:24:38.362 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:24:38 compute-1 ceph-mon[81775]: pgmap v3073: 321 pgs: 321 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 1.8 MiB/s wr, 85 op/s
Jan 20 15:24:38 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:24:39 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:24:39 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:24:39 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:24:39.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:24:40 compute-1 podman[313184]: 2026-01-20 15:24:40.012695001 +0000 UTC m=+0.056291929 container health_status 533cd8e0c3b8b7f910f15cef49e7f879ab46c12ce646a35628c15f13725eb500 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Jan 20 15:24:40 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:24:40 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:24:40 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:24:40.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:24:40 compute-1 ceph-mon[81775]: pgmap v3074: 321 pgs: 321 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 467 KiB/s wr, 71 op/s
Jan 20 15:24:41 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:24:41 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:24:41 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:24:41.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:24:42 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:24:42 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:24:42 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:24:42.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:24:42 compute-1 nova_compute[225855]: 2026-01-20 15:24:42.352 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:24:42 compute-1 nova_compute[225855]: 2026-01-20 15:24:42.352 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 20 15:24:42 compute-1 nova_compute[225855]: 2026-01-20 15:24:42.369 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 20 15:24:42 compute-1 ceph-mon[81775]: pgmap v3075: 321 pgs: 321 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 467 KiB/s wr, 86 op/s
Jan 20 15:24:42 compute-1 nova_compute[225855]: 2026-01-20 15:24:42.508 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:24:43 compute-1 nova_compute[225855]: 2026-01-20 15:24:43.364 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:24:43 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:24:44 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:24:44 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:24:44 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:24:43.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:24:44 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:24:44 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:24:44 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:24:44.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:24:44 compute-1 ceph-mon[81775]: pgmap v3076: 321 pgs: 321 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 20 15:24:45 compute-1 sudo[313207]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:24:45 compute-1 sudo[313207]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:24:45 compute-1 sudo[313207]: pam_unix(sudo:session): session closed for user root
Jan 20 15:24:45 compute-1 sudo[313232]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:24:45 compute-1 sudo[313232]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:24:45 compute-1 sudo[313232]: pam_unix(sudo:session): session closed for user root
Jan 20 15:24:46 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:24:46 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:24:46 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:24:46.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:24:46 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:24:46 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:24:46 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:24:46.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:24:46 compute-1 ceph-mon[81775]: pgmap v3077: 321 pgs: 321 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 20 15:24:46 compute-1 ovn_controller[130490]: 2026-01-20T15:24:46Z|00910|memory_trim|INFO|Detected inactivity (last active 30011 ms ago): trimming memory
Jan 20 15:24:47 compute-1 nova_compute[225855]: 2026-01-20 15:24:47.512 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:24:48 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:24:48 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:24:48 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:24:48.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:24:48 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:24:48 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:24:48 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:24:48.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:24:48 compute-1 nova_compute[225855]: 2026-01-20 15:24:48.365 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:24:48 compute-1 ceph-mon[81775]: pgmap v3078: 321 pgs: 321 active+clean; 171 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 577 KiB/s wr, 84 op/s
Jan 20 15:24:48 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:24:49 compute-1 sudo[313260]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:24:49 compute-1 sudo[313260]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:24:49 compute-1 sudo[313260]: pam_unix(sudo:session): session closed for user root
Jan 20 15:24:49 compute-1 sudo[313285]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:24:49 compute-1 sudo[313285]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:24:49 compute-1 sudo[313285]: pam_unix(sudo:session): session closed for user root
Jan 20 15:24:49 compute-1 sudo[313310]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:24:49 compute-1 sudo[313310]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:24:49 compute-1 sudo[313310]: pam_unix(sudo:session): session closed for user root
Jan 20 15:24:50 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:24:50 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:24:50 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:24:50.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:24:50 compute-1 sudo[313335]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 20 15:24:50 compute-1 sudo[313335]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:24:50 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:24:50 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:24:50 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:24:50.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:24:50 compute-1 ceph-mon[81775]: pgmap v3079: 321 pgs: 321 active+clean; 171 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 545 KiB/s rd, 576 KiB/s wr, 27 op/s
Jan 20 15:24:50 compute-1 sudo[313335]: pam_unix(sudo:session): session closed for user root
Jan 20 15:24:51 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:24:51 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 15:24:51 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:24:51 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 15:24:51 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 15:24:51 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:24:52 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:24:52 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:24:52 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:24:52.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:24:52 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:24:52 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:24:52 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:24:52.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:24:52 compute-1 nova_compute[225855]: 2026-01-20 15:24:52.516 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:24:52 compute-1 ceph-mon[81775]: pgmap v3080: 321 pgs: 321 active+clean; 198 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 754 KiB/s rd, 2.1 MiB/s wr, 70 op/s
Jan 20 15:24:53 compute-1 nova_compute[225855]: 2026-01-20 15:24:53.367 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:24:53 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:24:54 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:24:54 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:24:54 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:24:54.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:24:54 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:24:54 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:24:54 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:24:54.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:24:54 compute-1 ceph-mon[81775]: pgmap v3081: 321 pgs: 321 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Jan 20 15:24:56 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:24:56 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:24:56 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:24:56.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:24:56 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:24:56 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:24:56 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:24:56.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:24:56 compute-1 ceph-mon[81775]: pgmap v3082: 321 pgs: 321 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Jan 20 15:24:56 compute-1 sudo[313394]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:24:56 compute-1 sudo[313394]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:24:57 compute-1 sudo[313394]: pam_unix(sudo:session): session closed for user root
Jan 20 15:24:57 compute-1 sudo[313419]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 15:24:57 compute-1 sudo[313419]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:24:57 compute-1 sudo[313419]: pam_unix(sudo:session): session closed for user root
Jan 20 15:24:57 compute-1 nova_compute[225855]: 2026-01-20 15:24:57.519 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:24:57 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:24:57 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:24:57 compute-1 ceph-mon[81775]: pgmap v3083: 321 pgs: 321 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Jan 20 15:24:58 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:24:58 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:24:58 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:24:58.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:24:58 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:24:58 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:24:58 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:24:58.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:24:58 compute-1 nova_compute[225855]: 2026-01-20 15:24:58.368 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:24:58 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:24:59 compute-1 ceph-mon[81775]: pgmap v3084: 321 pgs: 321 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 259 KiB/s rd, 1.6 MiB/s wr, 51 op/s
Jan 20 15:25:00 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:25:00 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:25:00 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:25:00.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:25:00 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:25:00 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:25:00 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:25:00.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:25:01 compute-1 podman[313446]: 2026-01-20 15:25:01.066573512 +0000 UTC m=+0.116492036 container health_status 72be95c12e041eb2cd1a16ad70dfa1391d9f03a5f3180a614ead9459dcfcd46f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, 
org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 15:25:01 compute-1 ceph-mon[81775]: pgmap v3085: 321 pgs: 321 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 259 KiB/s rd, 1.6 MiB/s wr, 52 op/s
Jan 20 15:25:02 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:25:02 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:25:02 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:25:02.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:25:02 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:25:02 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:25:02 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:25:02.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:25:02 compute-1 nova_compute[225855]: 2026-01-20 15:25:02.521 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:25:03 compute-1 nova_compute[225855]: 2026-01-20 15:25:03.357 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:25:03 compute-1 nova_compute[225855]: 2026-01-20 15:25:03.357 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 15:25:03 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:25:03.390 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=71, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '12:bb:42', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '06:92:24:f7:15:56'}, ipsec=False) old=SB_Global(nb_cfg=70) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:25:03 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:25:03.391 140354 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 20 15:25:03 compute-1 nova_compute[225855]: 2026-01-20 15:25:03.414 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:25:03 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:25:03 compute-1 ceph-mon[81775]: pgmap v3086: 321 pgs: 321 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 50 KiB/s rd, 42 KiB/s wr, 9 op/s
Jan 20 15:25:04 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:25:04 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:25:04 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:25:04.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:25:04 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:25:04 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:25:04 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:25:04.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:25:05 compute-1 sudo[313475]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:25:05 compute-1 sudo[313475]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:25:05 compute-1 sudo[313475]: pam_unix(sudo:session): session closed for user root
Jan 20 15:25:05 compute-1 sudo[313500]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:25:05 compute-1 sudo[313500]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:25:05 compute-1 sudo[313500]: pam_unix(sudo:session): session closed for user root
Jan 20 15:25:05 compute-1 ceph-mon[81775]: pgmap v3087: 321 pgs: 321 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 7.4 KiB/s rd, 12 KiB/s wr, 8 op/s
Jan 20 15:25:06 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:25:06 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:25:06 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:25:06.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:25:06 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:25:06 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:25:06 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:25:06.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:25:06 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/3941704266' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:25:07 compute-1 nova_compute[225855]: 2026-01-20 15:25:07.525 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:25:07 compute-1 ceph-mon[81775]: pgmap v3088: 321 pgs: 321 active+clean; 134 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 21 KiB/s rd, 13 KiB/s wr, 29 op/s
Jan 20 15:25:07 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/1643207792' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:25:07 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/216630041' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:25:08 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:25:08 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:25:08 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:25:08.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:25:08 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:25:08 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:25:08 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:25:08.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:25:08 compute-1 nova_compute[225855]: 2026-01-20 15:25:08.340 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:25:08 compute-1 nova_compute[225855]: 2026-01-20 15:25:08.341 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 15:25:08 compute-1 nova_compute[225855]: 2026-01-20 15:25:08.341 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 20 15:25:08 compute-1 nova_compute[225855]: 2026-01-20 15:25:08.416 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:25:08 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:25:09 compute-1 nova_compute[225855]: 2026-01-20 15:25:09.343 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 20 15:25:09 compute-1 nova_compute[225855]: 2026-01-20 15:25:09.343 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:25:09 compute-1 nova_compute[225855]: 2026-01-20 15:25:09.343 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:25:09 compute-1 nova_compute[225855]: 2026-01-20 15:25:09.344 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:25:09 compute-1 nova_compute[225855]: 2026-01-20 15:25:09.369 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:25:09 compute-1 nova_compute[225855]: 2026-01-20 15:25:09.369 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:25:09 compute-1 nova_compute[225855]: 2026-01-20 15:25:09.369 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:25:09 compute-1 nova_compute[225855]: 2026-01-20 15:25:09.370 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 15:25:09 compute-1 nova_compute[225855]: 2026-01-20 15:25:09.370 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:25:09 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:25:09 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4144926388' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:25:09 compute-1 nova_compute[225855]: 2026-01-20 15:25:09.819 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:25:09 compute-1 nova_compute[225855]: 2026-01-20 15:25:09.964 225859 WARNING nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 15:25:09 compute-1 nova_compute[225855]: 2026-01-20 15:25:09.966 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4274MB free_disk=20.977684020996094GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 15:25:09 compute-1 nova_compute[225855]: 2026-01-20 15:25:09.966 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:25:09 compute-1 nova_compute[225855]: 2026-01-20 15:25:09.966 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:25:09 compute-1 ceph-mon[81775]: pgmap v3089: 321 pgs: 321 active+clean; 134 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 21 KiB/s rd, 1.2 KiB/s wr, 29 op/s
Jan 20 15:25:09 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/4144926388' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:25:10 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:25:10 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:25:10 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:25:10.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:25:10 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:25:10 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:25:10 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:25:10.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:25:10 compute-1 podman[313550]: 2026-01-20 15:25:10.996379378 +0000 UTC m=+0.045110459 container health_status 533cd8e0c3b8b7f910f15cef49e7f879ab46c12ce646a35628c15f13725eb500 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 20 15:25:11 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:25:11.392 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5ffd4ac3-9266-4927-98ad-20a17782c725, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '71'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:25:11 compute-1 nova_compute[225855]: 2026-01-20 15:25:11.576 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 15:25:11 compute-1 nova_compute[225855]: 2026-01-20 15:25:11.576 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 15:25:11 compute-1 nova_compute[225855]: 2026-01-20 15:25:11.620 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:25:11 compute-1 ceph-mon[81775]: pgmap v3090: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 23 KiB/s rd, 1.2 KiB/s wr, 32 op/s
Jan 20 15:25:11 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/100076280' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:25:12 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:25:12 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:25:12 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:25:12.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:25:12 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:25:12 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1344433681' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:25:12 compute-1 nova_compute[225855]: 2026-01-20 15:25:12.066 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:25:12 compute-1 nova_compute[225855]: 2026-01-20 15:25:12.072 225859 DEBUG nova.compute.provider_tree [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:25:12 compute-1 nova_compute[225855]: 2026-01-20 15:25:12.119 225859 DEBUG nova.scheduler.client.report [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:25:12 compute-1 nova_compute[225855]: 2026-01-20 15:25:12.121 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 15:25:12 compute-1 nova_compute[225855]: 2026-01-20 15:25:12.121 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.155s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:25:12 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:25:12 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:25:12 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:25:12.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:25:12 compute-1 nova_compute[225855]: 2026-01-20 15:25:12.528 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:25:12 compute-1 ceph-mon[81775]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #151. Immutable memtables: 0.
Jan 20 15:25:12 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:25:12.992007) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 20 15:25:12 compute-1 ceph-mon[81775]: rocksdb: [db/flush_job.cc:856] [default] [JOB 95] Flushing memtable with next log file: 151
Jan 20 15:25:12 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768922712992077, "job": 95, "event": "flush_started", "num_memtables": 1, "num_entries": 1228, "num_deletes": 256, "total_data_size": 2642387, "memory_usage": 2679248, "flush_reason": "Manual Compaction"}
Jan 20 15:25:12 compute-1 ceph-mon[81775]: rocksdb: [db/flush_job.cc:885] [default] [JOB 95] Level-0 flush table #152: started
Jan 20 15:25:13 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768922713007486, "cf_name": "default", "job": 95, "event": "table_file_creation", "file_number": 152, "file_size": 1732615, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 74098, "largest_seqno": 75321, "table_properties": {"data_size": 1727336, "index_size": 2738, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1477, "raw_key_size": 11445, "raw_average_key_size": 19, "raw_value_size": 1716618, "raw_average_value_size": 2919, "num_data_blocks": 122, "num_entries": 588, "num_filter_entries": 588, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768922611, "oldest_key_time": 1768922611, "file_creation_time": 1768922712, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1539d774-8a6f-4e48-b253-137c44586344", "db_session_id": "LFF7G2OZDOU7TKQ8MKAH", "orig_file_number": 152, "seqno_to_time_mapping": "N/A"}}
Jan 20 15:25:13 compute-1 ceph-mon[81775]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 95] Flush lasted 15564 microseconds, and 4276 cpu microseconds.
Jan 20 15:25:13 compute-1 ceph-mon[81775]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 15:25:13 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:25:13.007571) [db/flush_job.cc:967] [default] [JOB 95] Level-0 flush table #152: 1732615 bytes OK
Jan 20 15:25:13 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:25:13.007588) [db/memtable_list.cc:519] [default] Level-0 commit table #152 started
Jan 20 15:25:13 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:25:13.010502) [db/memtable_list.cc:722] [default] Level-0 commit table #152: memtable #1 done
Jan 20 15:25:13 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:25:13.010555) EVENT_LOG_v1 {"time_micros": 1768922713010544, "job": 95, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 20 15:25:13 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:25:13.010588) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 20 15:25:13 compute-1 ceph-mon[81775]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 95] Try to delete WAL files size 2636499, prev total WAL file size 2653148, number of live WAL files 2.
Jan 20 15:25:13 compute-1 ceph-mon[81775]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000148.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 15:25:13 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:25:13.011589) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032373731' seq:72057594037927935, type:22 .. '6C6F676D0033303233' seq:0, type:0; will stop at (end)
Jan 20 15:25:13 compute-1 ceph-mon[81775]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 96] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 20 15:25:13 compute-1 ceph-mon[81775]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 95 Base level 0, inputs: [152(1692KB)], [150(10MB)]
Jan 20 15:25:13 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768922713011634, "job": 96, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [152], "files_L6": [150], "score": -1, "input_data_size": 13063060, "oldest_snapshot_seqno": -1}
Jan 20 15:25:13 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/1344433681' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:25:13 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/955133676' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:25:13 compute-1 ceph-mon[81775]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 96] Generated table #153: 9733 keys, 12927504 bytes, temperature: kUnknown
Jan 20 15:25:13 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768922713160188, "cf_name": "default", "job": 96, "event": "table_file_creation", "file_number": 153, "file_size": 12927504, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12863962, "index_size": 38085, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 24389, "raw_key_size": 256733, "raw_average_key_size": 26, "raw_value_size": 12692578, "raw_average_value_size": 1304, "num_data_blocks": 1454, "num_entries": 9733, "num_filter_entries": 9733, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768917474, "oldest_key_time": 0, "file_creation_time": 1768922713, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1539d774-8a6f-4e48-b253-137c44586344", "db_session_id": "LFF7G2OZDOU7TKQ8MKAH", "orig_file_number": 153, "seqno_to_time_mapping": "N/A"}}
Jan 20 15:25:13 compute-1 ceph-mon[81775]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 15:25:13 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:25:13.160782) [db/compaction/compaction_job.cc:1663] [default] [JOB 96] Compacted 1@0 + 1@6 files to L6 => 12927504 bytes
Jan 20 15:25:13 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:25:13.163287) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 87.8 rd, 86.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.7, 10.8 +0.0 blob) out(12.3 +0.0 blob), read-write-amplify(15.0) write-amplify(7.5) OK, records in: 10258, records dropped: 525 output_compression: NoCompression
Jan 20 15:25:13 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:25:13.163322) EVENT_LOG_v1 {"time_micros": 1768922713163306, "job": 96, "event": "compaction_finished", "compaction_time_micros": 148830, "compaction_time_cpu_micros": 34784, "output_level": 6, "num_output_files": 1, "total_output_size": 12927504, "num_input_records": 10258, "num_output_records": 9733, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 20 15:25:13 compute-1 ceph-mon[81775]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000152.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 15:25:13 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768922713164169, "job": 96, "event": "table_file_deletion", "file_number": 152}
Jan 20 15:25:13 compute-1 ceph-mon[81775]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000150.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 15:25:13 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768922713170562, "job": 96, "event": "table_file_deletion", "file_number": 150}
Jan 20 15:25:13 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:25:13.011493) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:25:13 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:25:13.170644) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:25:13 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:25:13.170654) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:25:13 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:25:13.170659) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:25:13 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:25:13.170664) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:25:13 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:25:13.170668) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:25:13 compute-1 nova_compute[225855]: 2026-01-20 15:25:13.419 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:25:13 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:25:14 compute-1 ceph-mon[81775]: pgmap v3091: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 23 KiB/s rd, 1.2 KiB/s wr, 32 op/s
Jan 20 15:25:14 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/3963659316' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:25:14 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/3963659316' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:25:14 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:25:14 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:25:14 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:25:14.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:25:14 compute-1 nova_compute[225855]: 2026-01-20 15:25:14.117 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:25:14 compute-1 nova_compute[225855]: 2026-01-20 15:25:14.117 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:25:14 compute-1 nova_compute[225855]: 2026-01-20 15:25:14.117 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:25:14 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:25:14 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:25:14 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:25:14.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:25:16 compute-1 ceph-mon[81775]: pgmap v3092: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 22 KiB/s rd, 1.2 KiB/s wr, 31 op/s
Jan 20 15:25:16 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:25:16 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:25:16 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:25:16.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:25:16 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:25:16 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:25:16 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:25:16.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:25:16 compute-1 nova_compute[225855]: 2026-01-20 15:25:16.335 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:25:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:25:16.445 140354 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:25:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:25:16.446 140354 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:25:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:25:16.446 140354 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:25:17 compute-1 nova_compute[225855]: 2026-01-20 15:25:17.577 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:25:18 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:25:18 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:25:18 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:25:18.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:25:18 compute-1 ceph-mon[81775]: pgmap v3093: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 16 KiB/s rd, 1.2 KiB/s wr, 24 op/s
Jan 20 15:25:18 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:25:18 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:25:18 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:25:18.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:25:18 compute-1 nova_compute[225855]: 2026-01-20 15:25:18.420 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:25:18 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:25:20 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:25:20 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:25:20 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:25:20.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:25:20 compute-1 ceph-mon[81775]: pgmap v3094: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.2 KiB/s rd, 0 B/s wr, 3 op/s
Jan 20 15:25:20 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:25:20 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:25:20 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:25:20.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:25:22 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:25:22 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:25:22 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:25:22.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:25:22 compute-1 ceph-mon[81775]: pgmap v3095: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.2 KiB/s rd, 0 B/s wr, 3 op/s
Jan 20 15:25:22 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:25:22 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:25:22 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:25:22.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:25:22 compute-1 nova_compute[225855]: 2026-01-20 15:25:22.579 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:25:23 compute-1 nova_compute[225855]: 2026-01-20 15:25:23.422 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:25:23 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:25:24 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:25:24 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:25:24 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:25:24.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:25:24 compute-1 ceph-mon[81775]: pgmap v3096: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:25:24 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:25:24 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:25:24 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:25:24.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:25:25 compute-1 sudo[313598]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:25:25 compute-1 sudo[313598]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:25:25 compute-1 sudo[313598]: pam_unix(sudo:session): session closed for user root
Jan 20 15:25:25 compute-1 sudo[313623]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:25:25 compute-1 sudo[313623]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:25:25 compute-1 sudo[313623]: pam_unix(sudo:session): session closed for user root
Jan 20 15:25:26 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:25:26 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:25:26 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:25:26.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:25:26 compute-1 ceph-mon[81775]: pgmap v3097: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:25:26 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:25:26 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:25:26 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:25:26.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:25:27 compute-1 nova_compute[225855]: 2026-01-20 15:25:27.582 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:25:27 compute-1 sshd-session[313649]: Connection closed by 95.215.0.144 port 45394
Jan 20 15:25:27 compute-1 sshd-session[313650]: Unable to negotiate with 95.215.0.144 port 45396: no matching host key type found. Their offer: ssh-rsa,ssh-dss [preauth]
Jan 20 15:25:28 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:25:28 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:25:28 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:25:28.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:25:28 compute-1 ceph-mon[81775]: pgmap v3098: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:25:28 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:25:28 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:25:28 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:25:28.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:25:28 compute-1 nova_compute[225855]: 2026-01-20 15:25:28.423 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:25:28 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:25:30 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:25:30 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:25:30 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:25:30.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:25:30 compute-1 ceph-mon[81775]: pgmap v3099: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:25:30 compute-1 nova_compute[225855]: 2026-01-20 15:25:30.335 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:25:30 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:25:30 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:25:30 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:25:30.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:25:32 compute-1 podman[313655]: 2026-01-20 15:25:32.01734076 +0000 UTC m=+0.064767920 container health_status 72be95c12e041eb2cd1a16ad70dfa1391d9f03a5f3180a614ead9459dcfcd46f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251202)
Jan 20 15:25:32 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:25:32 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:25:32 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:25:32.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:25:32 compute-1 ceph-mon[81775]: pgmap v3100: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:25:32 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:25:32 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:25:32 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:25:32.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:25:32 compute-1 nova_compute[225855]: 2026-01-20 15:25:32.583 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:25:33 compute-1 nova_compute[225855]: 2026-01-20 15:25:33.424 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:25:33 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:25:34 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:25:34 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:25:34 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:25:34.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:25:34 compute-1 ceph-mon[81775]: pgmap v3101: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:25:34 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:25:34 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:25:34 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:25:34.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:25:36 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:25:36 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:25:36 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:25:36.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:25:36 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:25:36 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:25:36 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:25:36.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:25:36 compute-1 ceph-mon[81775]: pgmap v3102: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:25:37 compute-1 nova_compute[225855]: 2026-01-20 15:25:37.587 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:25:38 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:25:38 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:25:38 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:25:38.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:25:38 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:25:38 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:25:38 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:25:38.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:25:38 compute-1 ceph-mon[81775]: pgmap v3103: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:25:38 compute-1 nova_compute[225855]: 2026-01-20 15:25:38.427 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:25:38 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:25:40 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:25:40 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:25:40 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:25:40.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:25:40 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:25:40 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:25:40 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:25:40.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:25:40 compute-1 ceph-mon[81775]: pgmap v3104: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:25:42 compute-1 podman[313686]: 2026-01-20 15:25:42.003632012 +0000 UTC m=+0.045190282 container health_status 533cd8e0c3b8b7f910f15cef49e7f879ab46c12ce646a35628c15f13725eb500 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 20 15:25:42 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:25:42 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:25:42 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:25:42.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:25:42 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:25:42 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:25:42 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:25:42.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:25:42 compute-1 ceph-mon[81775]: pgmap v3105: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:25:42 compute-1 nova_compute[225855]: 2026-01-20 15:25:42.589 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:25:43 compute-1 nova_compute[225855]: 2026-01-20 15:25:43.428 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:25:43 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:25:44 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:25:44 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:25:44 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:25:44.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:25:44 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:25:44 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:25:44 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:25:44.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:25:44 compute-1 ceph-mon[81775]: pgmap v3106: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:25:45 compute-1 sudo[313706]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:25:45 compute-1 sudo[313706]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:25:45 compute-1 sudo[313706]: pam_unix(sudo:session): session closed for user root
Jan 20 15:25:45 compute-1 sudo[313732]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:25:45 compute-1 sudo[313732]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:25:45 compute-1 sudo[313732]: pam_unix(sudo:session): session closed for user root
Jan 20 15:25:45 compute-1 nova_compute[225855]: 2026-01-20 15:25:45.859 225859 DEBUG oslo_concurrency.lockutils [None req-431cd767-f2b3-4d69-b7d3-f2145bbc2389 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Acquiring lock "eb6ef384-2891-42d0-9059-42b89009b14c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:25:45 compute-1 nova_compute[225855]: 2026-01-20 15:25:45.860 225859 DEBUG oslo_concurrency.lockutils [None req-431cd767-f2b3-4d69-b7d3-f2145bbc2389 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Lock "eb6ef384-2891-42d0-9059-42b89009b14c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:25:45 compute-1 nova_compute[225855]: 2026-01-20 15:25:45.874 225859 DEBUG nova.compute.manager [None req-431cd767-f2b3-4d69-b7d3-f2145bbc2389 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: eb6ef384-2891-42d0-9059-42b89009b14c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 20 15:25:45 compute-1 nova_compute[225855]: 2026-01-20 15:25:45.957 225859 DEBUG oslo_concurrency.lockutils [None req-431cd767-f2b3-4d69-b7d3-f2145bbc2389 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:25:45 compute-1 nova_compute[225855]: 2026-01-20 15:25:45.958 225859 DEBUG oslo_concurrency.lockutils [None req-431cd767-f2b3-4d69-b7d3-f2145bbc2389 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:25:45 compute-1 nova_compute[225855]: 2026-01-20 15:25:45.967 225859 DEBUG nova.virt.hardware [None req-431cd767-f2b3-4d69-b7d3-f2145bbc2389 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 20 15:25:45 compute-1 nova_compute[225855]: 2026-01-20 15:25:45.968 225859 INFO nova.compute.claims [None req-431cd767-f2b3-4d69-b7d3-f2145bbc2389 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: eb6ef384-2891-42d0-9059-42b89009b14c] Claim successful on node compute-1.ctlplane.example.com
Jan 20 15:25:46 compute-1 nova_compute[225855]: 2026-01-20 15:25:46.063 225859 DEBUG oslo_concurrency.processutils [None req-431cd767-f2b3-4d69-b7d3-f2145bbc2389 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:25:46 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:25:46 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:25:46 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:25:46.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:25:46 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:25:46 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:25:46 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:25:46.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:25:46 compute-1 ceph-mon[81775]: pgmap v3107: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:25:46 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:25:46 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3360934547' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:25:46 compute-1 nova_compute[225855]: 2026-01-20 15:25:46.512 225859 DEBUG oslo_concurrency.processutils [None req-431cd767-f2b3-4d69-b7d3-f2145bbc2389 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:25:46 compute-1 nova_compute[225855]: 2026-01-20 15:25:46.518 225859 DEBUG nova.compute.provider_tree [None req-431cd767-f2b3-4d69-b7d3-f2145bbc2389 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:25:46 compute-1 nova_compute[225855]: 2026-01-20 15:25:46.611 225859 DEBUG nova.scheduler.client.report [None req-431cd767-f2b3-4d69-b7d3-f2145bbc2389 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:25:46 compute-1 nova_compute[225855]: 2026-01-20 15:25:46.635 225859 DEBUG oslo_concurrency.lockutils [None req-431cd767-f2b3-4d69-b7d3-f2145bbc2389 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.677s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:25:46 compute-1 nova_compute[225855]: 2026-01-20 15:25:46.636 225859 DEBUG nova.compute.manager [None req-431cd767-f2b3-4d69-b7d3-f2145bbc2389 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: eb6ef384-2891-42d0-9059-42b89009b14c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 20 15:25:46 compute-1 nova_compute[225855]: 2026-01-20 15:25:46.680 225859 DEBUG nova.compute.manager [None req-431cd767-f2b3-4d69-b7d3-f2145bbc2389 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: eb6ef384-2891-42d0-9059-42b89009b14c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 20 15:25:46 compute-1 nova_compute[225855]: 2026-01-20 15:25:46.681 225859 DEBUG nova.network.neutron [None req-431cd767-f2b3-4d69-b7d3-f2145bbc2389 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: eb6ef384-2891-42d0-9059-42b89009b14c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 20 15:25:46 compute-1 nova_compute[225855]: 2026-01-20 15:25:46.703 225859 INFO nova.virt.libvirt.driver [None req-431cd767-f2b3-4d69-b7d3-f2145bbc2389 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: eb6ef384-2891-42d0-9059-42b89009b14c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 20 15:25:46 compute-1 nova_compute[225855]: 2026-01-20 15:25:46.723 225859 DEBUG nova.compute.manager [None req-431cd767-f2b3-4d69-b7d3-f2145bbc2389 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: eb6ef384-2891-42d0-9059-42b89009b14c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 20 15:25:46 compute-1 nova_compute[225855]: 2026-01-20 15:25:46.808 225859 DEBUG nova.compute.manager [None req-431cd767-f2b3-4d69-b7d3-f2145bbc2389 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: eb6ef384-2891-42d0-9059-42b89009b14c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 20 15:25:46 compute-1 nova_compute[225855]: 2026-01-20 15:25:46.810 225859 DEBUG nova.virt.libvirt.driver [None req-431cd767-f2b3-4d69-b7d3-f2145bbc2389 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: eb6ef384-2891-42d0-9059-42b89009b14c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 20 15:25:46 compute-1 nova_compute[225855]: 2026-01-20 15:25:46.811 225859 INFO nova.virt.libvirt.driver [None req-431cd767-f2b3-4d69-b7d3-f2145bbc2389 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: eb6ef384-2891-42d0-9059-42b89009b14c] Creating image(s)
Jan 20 15:25:46 compute-1 nova_compute[225855]: 2026-01-20 15:25:46.845 225859 DEBUG nova.storage.rbd_utils [None req-431cd767-f2b3-4d69-b7d3-f2145bbc2389 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] rbd image eb6ef384-2891-42d0-9059-42b89009b14c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:25:46 compute-1 nova_compute[225855]: 2026-01-20 15:25:46.878 225859 DEBUG nova.storage.rbd_utils [None req-431cd767-f2b3-4d69-b7d3-f2145bbc2389 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] rbd image eb6ef384-2891-42d0-9059-42b89009b14c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:25:46 compute-1 nova_compute[225855]: 2026-01-20 15:25:46.904 225859 DEBUG nova.storage.rbd_utils [None req-431cd767-f2b3-4d69-b7d3-f2145bbc2389 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] rbd image eb6ef384-2891-42d0-9059-42b89009b14c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:25:46 compute-1 nova_compute[225855]: 2026-01-20 15:25:46.907 225859 DEBUG oslo_concurrency.processutils [None req-431cd767-f2b3-4d69-b7d3-f2145bbc2389 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:25:46 compute-1 nova_compute[225855]: 2026-01-20 15:25:46.985 225859 DEBUG oslo_concurrency.processutils [None req-431cd767-f2b3-4d69-b7d3-f2145bbc2389 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:25:46 compute-1 nova_compute[225855]: 2026-01-20 15:25:46.986 225859 DEBUG oslo_concurrency.lockutils [None req-431cd767-f2b3-4d69-b7d3-f2145bbc2389 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Acquiring lock "82d5c1918fd7c974214c7a48c1793a7a82560462" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:25:46 compute-1 nova_compute[225855]: 2026-01-20 15:25:46.986 225859 DEBUG oslo_concurrency.lockutils [None req-431cd767-f2b3-4d69-b7d3-f2145bbc2389 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:25:46 compute-1 nova_compute[225855]: 2026-01-20 15:25:46.986 225859 DEBUG oslo_concurrency.lockutils [None req-431cd767-f2b3-4d69-b7d3-f2145bbc2389 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:25:47 compute-1 nova_compute[225855]: 2026-01-20 15:25:47.010 225859 DEBUG nova.storage.rbd_utils [None req-431cd767-f2b3-4d69-b7d3-f2145bbc2389 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] rbd image eb6ef384-2891-42d0-9059-42b89009b14c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:25:47 compute-1 nova_compute[225855]: 2026-01-20 15:25:47.014 225859 DEBUG oslo_concurrency.processutils [None req-431cd767-f2b3-4d69-b7d3-f2145bbc2389 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 eb6ef384-2891-42d0-9059-42b89009b14c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:25:47 compute-1 nova_compute[225855]: 2026-01-20 15:25:47.287 225859 DEBUG oslo_concurrency.processutils [None req-431cd767-f2b3-4d69-b7d3-f2145bbc2389 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 eb6ef384-2891-42d0-9059-42b89009b14c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.273s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:25:47 compute-1 nova_compute[225855]: 2026-01-20 15:25:47.321 225859 DEBUG nova.policy [None req-431cd767-f2b3-4d69-b7d3-f2145bbc2389 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '5338aa65dc0e4326a66ce79053787f14', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '3168f57421fb49bfb94b85daedd1fe7d', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 20 15:25:47 compute-1 nova_compute[225855]: 2026-01-20 15:25:47.359 225859 DEBUG nova.storage.rbd_utils [None req-431cd767-f2b3-4d69-b7d3-f2145bbc2389 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] resizing rbd image eb6ef384-2891-42d0-9059-42b89009b14c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 20 15:25:47 compute-1 nova_compute[225855]: 2026-01-20 15:25:47.463 225859 DEBUG nova.objects.instance [None req-431cd767-f2b3-4d69-b7d3-f2145bbc2389 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Lazy-loading 'migration_context' on Instance uuid eb6ef384-2891-42d0-9059-42b89009b14c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:25:47 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/3360934547' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:25:47 compute-1 nova_compute[225855]: 2026-01-20 15:25:47.495 225859 DEBUG nova.virt.libvirt.driver [None req-431cd767-f2b3-4d69-b7d3-f2145bbc2389 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: eb6ef384-2891-42d0-9059-42b89009b14c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 20 15:25:47 compute-1 nova_compute[225855]: 2026-01-20 15:25:47.495 225859 DEBUG nova.virt.libvirt.driver [None req-431cd767-f2b3-4d69-b7d3-f2145bbc2389 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: eb6ef384-2891-42d0-9059-42b89009b14c] Ensure instance console log exists: /var/lib/nova/instances/eb6ef384-2891-42d0-9059-42b89009b14c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 20 15:25:47 compute-1 nova_compute[225855]: 2026-01-20 15:25:47.496 225859 DEBUG oslo_concurrency.lockutils [None req-431cd767-f2b3-4d69-b7d3-f2145bbc2389 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:25:47 compute-1 nova_compute[225855]: 2026-01-20 15:25:47.496 225859 DEBUG oslo_concurrency.lockutils [None req-431cd767-f2b3-4d69-b7d3-f2145bbc2389 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:25:47 compute-1 nova_compute[225855]: 2026-01-20 15:25:47.496 225859 DEBUG oslo_concurrency.lockutils [None req-431cd767-f2b3-4d69-b7d3-f2145bbc2389 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:25:47 compute-1 nova_compute[225855]: 2026-01-20 15:25:47.592 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:25:48 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:25:48 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:25:48 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:25:48.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:25:48 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:25:48 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:25:48 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:25:48.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:25:48 compute-1 nova_compute[225855]: 2026-01-20 15:25:48.471 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:25:48 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:25:48 compute-1 ceph-mon[81775]: pgmap v3108: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:25:48 compute-1 nova_compute[225855]: 2026-01-20 15:25:48.611 225859 DEBUG nova.network.neutron [None req-431cd767-f2b3-4d69-b7d3-f2145bbc2389 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: eb6ef384-2891-42d0-9059-42b89009b14c] Successfully created port: 423d10be-bf78-43ff-8ae2-812d375ccef8 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 20 15:25:50 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:25:50 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:25:50 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:25:50.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:25:50 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:25:50 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:25:50 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:25:50.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:25:50 compute-1 nova_compute[225855]: 2026-01-20 15:25:50.356 225859 DEBUG nova.network.neutron [None req-431cd767-f2b3-4d69-b7d3-f2145bbc2389 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: eb6ef384-2891-42d0-9059-42b89009b14c] Successfully updated port: 423d10be-bf78-43ff-8ae2-812d375ccef8 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 20 15:25:50 compute-1 nova_compute[225855]: 2026-01-20 15:25:50.382 225859 DEBUG oslo_concurrency.lockutils [None req-431cd767-f2b3-4d69-b7d3-f2145bbc2389 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Acquiring lock "refresh_cache-eb6ef384-2891-42d0-9059-42b89009b14c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:25:50 compute-1 nova_compute[225855]: 2026-01-20 15:25:50.383 225859 DEBUG oslo_concurrency.lockutils [None req-431cd767-f2b3-4d69-b7d3-f2145bbc2389 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Acquired lock "refresh_cache-eb6ef384-2891-42d0-9059-42b89009b14c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:25:50 compute-1 nova_compute[225855]: 2026-01-20 15:25:50.383 225859 DEBUG nova.network.neutron [None req-431cd767-f2b3-4d69-b7d3-f2145bbc2389 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: eb6ef384-2891-42d0-9059-42b89009b14c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 15:25:50 compute-1 nova_compute[225855]: 2026-01-20 15:25:50.489 225859 DEBUG nova.compute.manager [req-9c585c66-3c7a-41a4-b473-d96ab5dfb75f req-7d510cfd-28ba-4e9a-bdd8-45782f43353c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: eb6ef384-2891-42d0-9059-42b89009b14c] Received event network-changed-423d10be-bf78-43ff-8ae2-812d375ccef8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:25:50 compute-1 nova_compute[225855]: 2026-01-20 15:25:50.489 225859 DEBUG nova.compute.manager [req-9c585c66-3c7a-41a4-b473-d96ab5dfb75f req-7d510cfd-28ba-4e9a-bdd8-45782f43353c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: eb6ef384-2891-42d0-9059-42b89009b14c] Refreshing instance network info cache due to event network-changed-423d10be-bf78-43ff-8ae2-812d375ccef8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 15:25:50 compute-1 nova_compute[225855]: 2026-01-20 15:25:50.490 225859 DEBUG oslo_concurrency.lockutils [req-9c585c66-3c7a-41a4-b473-d96ab5dfb75f req-7d510cfd-28ba-4e9a-bdd8-45782f43353c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-eb6ef384-2891-42d0-9059-42b89009b14c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:25:50 compute-1 ceph-mon[81775]: pgmap v3109: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:25:50 compute-1 nova_compute[225855]: 2026-01-20 15:25:50.584 225859 DEBUG nova.network.neutron [None req-431cd767-f2b3-4d69-b7d3-f2145bbc2389 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: eb6ef384-2891-42d0-9059-42b89009b14c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 20 15:25:52 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:25:52 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:25:52 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:25:52.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:25:52 compute-1 nova_compute[225855]: 2026-01-20 15:25:52.231 225859 DEBUG nova.network.neutron [None req-431cd767-f2b3-4d69-b7d3-f2145bbc2389 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: eb6ef384-2891-42d0-9059-42b89009b14c] Updating instance_info_cache with network_info: [{"id": "423d10be-bf78-43ff-8ae2-812d375ccef8", "address": "fa:16:3e:56:58:d6", "network": {"id": "3b3aa186-38a4-4cc6-9399-f535503e9791", "bridge": "br-int", "label": "tempest-network-smoke--458751349", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3168f57421fb49bfb94b85daedd1fe7d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap423d10be-bf", "ovs_interfaceid": "423d10be-bf78-43ff-8ae2-812d375ccef8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:25:52 compute-1 nova_compute[225855]: 2026-01-20 15:25:52.262 225859 DEBUG oslo_concurrency.lockutils [None req-431cd767-f2b3-4d69-b7d3-f2145bbc2389 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Releasing lock "refresh_cache-eb6ef384-2891-42d0-9059-42b89009b14c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:25:52 compute-1 nova_compute[225855]: 2026-01-20 15:25:52.263 225859 DEBUG nova.compute.manager [None req-431cd767-f2b3-4d69-b7d3-f2145bbc2389 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: eb6ef384-2891-42d0-9059-42b89009b14c] Instance network_info: |[{"id": "423d10be-bf78-43ff-8ae2-812d375ccef8", "address": "fa:16:3e:56:58:d6", "network": {"id": "3b3aa186-38a4-4cc6-9399-f535503e9791", "bridge": "br-int", "label": "tempest-network-smoke--458751349", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3168f57421fb49bfb94b85daedd1fe7d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap423d10be-bf", "ovs_interfaceid": "423d10be-bf78-43ff-8ae2-812d375ccef8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 20 15:25:52 compute-1 nova_compute[225855]: 2026-01-20 15:25:52.263 225859 DEBUG oslo_concurrency.lockutils [req-9c585c66-3c7a-41a4-b473-d96ab5dfb75f req-7d510cfd-28ba-4e9a-bdd8-45782f43353c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-eb6ef384-2891-42d0-9059-42b89009b14c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:25:52 compute-1 nova_compute[225855]: 2026-01-20 15:25:52.263 225859 DEBUG nova.network.neutron [req-9c585c66-3c7a-41a4-b473-d96ab5dfb75f req-7d510cfd-28ba-4e9a-bdd8-45782f43353c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: eb6ef384-2891-42d0-9059-42b89009b14c] Refreshing network info cache for port 423d10be-bf78-43ff-8ae2-812d375ccef8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 15:25:52 compute-1 nova_compute[225855]: 2026-01-20 15:25:52.266 225859 DEBUG nova.virt.libvirt.driver [None req-431cd767-f2b3-4d69-b7d3-f2145bbc2389 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: eb6ef384-2891-42d0-9059-42b89009b14c] Start _get_guest_xml network_info=[{"id": "423d10be-bf78-43ff-8ae2-812d375ccef8", "address": "fa:16:3e:56:58:d6", "network": {"id": "3b3aa186-38a4-4cc6-9399-f535503e9791", "bridge": "br-int", "label": "tempest-network-smoke--458751349", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3168f57421fb49bfb94b85daedd1fe7d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap423d10be-bf", "ovs_interfaceid": "423d10be-bf78-43ff-8ae2-812d375ccef8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'device_type': 'disk', 'encryption_options': None, 'size': 0, 'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'guest_format': None, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'image_id': 'a32b3e07-16d8-46fd-9a7b-c242c432fcf9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 20 15:25:52 compute-1 nova_compute[225855]: 2026-01-20 15:25:52.272 225859 WARNING nova.virt.libvirt.driver [None req-431cd767-f2b3-4d69-b7d3-f2145bbc2389 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 15:25:52 compute-1 nova_compute[225855]: 2026-01-20 15:25:52.278 225859 DEBUG nova.virt.libvirt.host [None req-431cd767-f2b3-4d69-b7d3-f2145bbc2389 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 20 15:25:52 compute-1 nova_compute[225855]: 2026-01-20 15:25:52.278 225859 DEBUG nova.virt.libvirt.host [None req-431cd767-f2b3-4d69-b7d3-f2145bbc2389 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 20 15:25:52 compute-1 nova_compute[225855]: 2026-01-20 15:25:52.281 225859 DEBUG nova.virt.libvirt.host [None req-431cd767-f2b3-4d69-b7d3-f2145bbc2389 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 20 15:25:52 compute-1 nova_compute[225855]: 2026-01-20 15:25:52.281 225859 DEBUG nova.virt.libvirt.host [None req-431cd767-f2b3-4d69-b7d3-f2145bbc2389 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 20 15:25:52 compute-1 nova_compute[225855]: 2026-01-20 15:25:52.283 225859 DEBUG nova.virt.libvirt.driver [None req-431cd767-f2b3-4d69-b7d3-f2145bbc2389 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 20 15:25:52 compute-1 nova_compute[225855]: 2026-01-20 15:25:52.283 225859 DEBUG nova.virt.hardware [None req-431cd767-f2b3-4d69-b7d3-f2145bbc2389 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-20T14:21:55Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='522deaab-a741-4dbb-932d-d8b13a211c33',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 20 15:25:52 compute-1 nova_compute[225855]: 2026-01-20 15:25:52.284 225859 DEBUG nova.virt.hardware [None req-431cd767-f2b3-4d69-b7d3-f2145bbc2389 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 20 15:25:52 compute-1 nova_compute[225855]: 2026-01-20 15:25:52.284 225859 DEBUG nova.virt.hardware [None req-431cd767-f2b3-4d69-b7d3-f2145bbc2389 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 20 15:25:52 compute-1 nova_compute[225855]: 2026-01-20 15:25:52.284 225859 DEBUG nova.virt.hardware [None req-431cd767-f2b3-4d69-b7d3-f2145bbc2389 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 20 15:25:52 compute-1 nova_compute[225855]: 2026-01-20 15:25:52.284 225859 DEBUG nova.virt.hardware [None req-431cd767-f2b3-4d69-b7d3-f2145bbc2389 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 20 15:25:52 compute-1 nova_compute[225855]: 2026-01-20 15:25:52.284 225859 DEBUG nova.virt.hardware [None req-431cd767-f2b3-4d69-b7d3-f2145bbc2389 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 20 15:25:52 compute-1 nova_compute[225855]: 2026-01-20 15:25:52.285 225859 DEBUG nova.virt.hardware [None req-431cd767-f2b3-4d69-b7d3-f2145bbc2389 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 20 15:25:52 compute-1 nova_compute[225855]: 2026-01-20 15:25:52.285 225859 DEBUG nova.virt.hardware [None req-431cd767-f2b3-4d69-b7d3-f2145bbc2389 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 20 15:25:52 compute-1 nova_compute[225855]: 2026-01-20 15:25:52.285 225859 DEBUG nova.virt.hardware [None req-431cd767-f2b3-4d69-b7d3-f2145bbc2389 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 20 15:25:52 compute-1 nova_compute[225855]: 2026-01-20 15:25:52.285 225859 DEBUG nova.virt.hardware [None req-431cd767-f2b3-4d69-b7d3-f2145bbc2389 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 20 15:25:52 compute-1 nova_compute[225855]: 2026-01-20 15:25:52.286 225859 DEBUG nova.virt.hardware [None req-431cd767-f2b3-4d69-b7d3-f2145bbc2389 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 20 15:25:52 compute-1 nova_compute[225855]: 2026-01-20 15:25:52.289 225859 DEBUG oslo_concurrency.processutils [None req-431cd767-f2b3-4d69-b7d3-f2145bbc2389 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:25:52 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:25:52 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:25:52 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:25:52.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:25:52 compute-1 ceph-mon[81775]: pgmap v3110: 321 pgs: 321 active+clean; 154 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 10 KiB/s rd, 1.5 MiB/s wr, 14 op/s
Jan 20 15:25:52 compute-1 nova_compute[225855]: 2026-01-20 15:25:52.596 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:25:52 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 15:25:52 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3613554405' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:25:52 compute-1 nova_compute[225855]: 2026-01-20 15:25:52.750 225859 DEBUG oslo_concurrency.processutils [None req-431cd767-f2b3-4d69-b7d3-f2145bbc2389 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:25:52 compute-1 nova_compute[225855]: 2026-01-20 15:25:52.775 225859 DEBUG nova.storage.rbd_utils [None req-431cd767-f2b3-4d69-b7d3-f2145bbc2389 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] rbd image eb6ef384-2891-42d0-9059-42b89009b14c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:25:52 compute-1 nova_compute[225855]: 2026-01-20 15:25:52.779 225859 DEBUG oslo_concurrency.processutils [None req-431cd767-f2b3-4d69-b7d3-f2145bbc2389 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:25:53 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 15:25:53 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4237637953' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:25:53 compute-1 nova_compute[225855]: 2026-01-20 15:25:53.271 225859 DEBUG oslo_concurrency.processutils [None req-431cd767-f2b3-4d69-b7d3-f2145bbc2389 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.492s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:25:53 compute-1 nova_compute[225855]: 2026-01-20 15:25:53.274 225859 DEBUG nova.virt.libvirt.vif [None req-431cd767-f2b3-4d69-b7d3-f2145bbc2389 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T15:25:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1635932386',display_name='tempest-TestNetworkBasicOps-server-1635932386',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1635932386',id=202,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOxLu/uVu4uuXkZBckB0Jue8mA2XpnPI63IpB2BGooiySZuLgddUiCiwQ3/YqBeUzNGbEuiI4/oWiiYa4zQrQHAa9idheznhVw0kdlFQsBm1hL1vB4bH09utur5br8iaiQ==',key_name='tempest-TestNetworkBasicOps-305263708',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3168f57421fb49bfb94b85daedd1fe7d',ramdisk_id='',reservation_id='r-1wrkal35',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-807695970',owner_user_name='tempest-TestNetworkBasicOps-807695970-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T15:25:46Z,user_data=None,user_id='5338aa65dc0e4326a66ce79053787f14',uuid=eb6ef384-2891-42d0-9059-42b89009b14c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "423d10be-bf78-43ff-8ae2-812d375ccef8", "address": "fa:16:3e:56:58:d6", "network": {"id": "3b3aa186-38a4-4cc6-9399-f535503e9791", "bridge": "br-int", "label": "tempest-network-smoke--458751349", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "3168f57421fb49bfb94b85daedd1fe7d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap423d10be-bf", "ovs_interfaceid": "423d10be-bf78-43ff-8ae2-812d375ccef8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 20 15:25:53 compute-1 nova_compute[225855]: 2026-01-20 15:25:53.275 225859 DEBUG nova.network.os_vif_util [None req-431cd767-f2b3-4d69-b7d3-f2145bbc2389 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Converting VIF {"id": "423d10be-bf78-43ff-8ae2-812d375ccef8", "address": "fa:16:3e:56:58:d6", "network": {"id": "3b3aa186-38a4-4cc6-9399-f535503e9791", "bridge": "br-int", "label": "tempest-network-smoke--458751349", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3168f57421fb49bfb94b85daedd1fe7d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap423d10be-bf", "ovs_interfaceid": "423d10be-bf78-43ff-8ae2-812d375ccef8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 15:25:53 compute-1 nova_compute[225855]: 2026-01-20 15:25:53.276 225859 DEBUG nova.network.os_vif_util [None req-431cd767-f2b3-4d69-b7d3-f2145bbc2389 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:56:58:d6,bridge_name='br-int',has_traffic_filtering=True,id=423d10be-bf78-43ff-8ae2-812d375ccef8,network=Network(3b3aa186-38a4-4cc6-9399-f535503e9791),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap423d10be-bf') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 15:25:53 compute-1 nova_compute[225855]: 2026-01-20 15:25:53.278 225859 DEBUG nova.objects.instance [None req-431cd767-f2b3-4d69-b7d3-f2145bbc2389 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Lazy-loading 'pci_devices' on Instance uuid eb6ef384-2891-42d0-9059-42b89009b14c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:25:53 compute-1 nova_compute[225855]: 2026-01-20 15:25:53.295 225859 DEBUG nova.virt.libvirt.driver [None req-431cd767-f2b3-4d69-b7d3-f2145bbc2389 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: eb6ef384-2891-42d0-9059-42b89009b14c] End _get_guest_xml xml=<domain type="kvm">
Jan 20 15:25:53 compute-1 nova_compute[225855]:   <uuid>eb6ef384-2891-42d0-9059-42b89009b14c</uuid>
Jan 20 15:25:53 compute-1 nova_compute[225855]:   <name>instance-000000ca</name>
Jan 20 15:25:53 compute-1 nova_compute[225855]:   <memory>131072</memory>
Jan 20 15:25:53 compute-1 nova_compute[225855]:   <vcpu>1</vcpu>
Jan 20 15:25:53 compute-1 nova_compute[225855]:   <metadata>
Jan 20 15:25:53 compute-1 nova_compute[225855]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 15:25:53 compute-1 nova_compute[225855]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 15:25:53 compute-1 nova_compute[225855]:       <nova:name>tempest-TestNetworkBasicOps-server-1635932386</nova:name>
Jan 20 15:25:53 compute-1 nova_compute[225855]:       <nova:creationTime>2026-01-20 15:25:52</nova:creationTime>
Jan 20 15:25:53 compute-1 nova_compute[225855]:       <nova:flavor name="m1.nano">
Jan 20 15:25:53 compute-1 nova_compute[225855]:         <nova:memory>128</nova:memory>
Jan 20 15:25:53 compute-1 nova_compute[225855]:         <nova:disk>1</nova:disk>
Jan 20 15:25:53 compute-1 nova_compute[225855]:         <nova:swap>0</nova:swap>
Jan 20 15:25:53 compute-1 nova_compute[225855]:         <nova:ephemeral>0</nova:ephemeral>
Jan 20 15:25:53 compute-1 nova_compute[225855]:         <nova:vcpus>1</nova:vcpus>
Jan 20 15:25:53 compute-1 nova_compute[225855]:       </nova:flavor>
Jan 20 15:25:53 compute-1 nova_compute[225855]:       <nova:owner>
Jan 20 15:25:53 compute-1 nova_compute[225855]:         <nova:user uuid="5338aa65dc0e4326a66ce79053787f14">tempest-TestNetworkBasicOps-807695970-project-member</nova:user>
Jan 20 15:25:53 compute-1 nova_compute[225855]:         <nova:project uuid="3168f57421fb49bfb94b85daedd1fe7d">tempest-TestNetworkBasicOps-807695970</nova:project>
Jan 20 15:25:53 compute-1 nova_compute[225855]:       </nova:owner>
Jan 20 15:25:53 compute-1 nova_compute[225855]:       <nova:root type="image" uuid="a32b3e07-16d8-46fd-9a7b-c242c432fcf9"/>
Jan 20 15:25:53 compute-1 nova_compute[225855]:       <nova:ports>
Jan 20 15:25:53 compute-1 nova_compute[225855]:         <nova:port uuid="423d10be-bf78-43ff-8ae2-812d375ccef8">
Jan 20 15:25:53 compute-1 nova_compute[225855]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Jan 20 15:25:53 compute-1 nova_compute[225855]:         </nova:port>
Jan 20 15:25:53 compute-1 nova_compute[225855]:       </nova:ports>
Jan 20 15:25:53 compute-1 nova_compute[225855]:     </nova:instance>
Jan 20 15:25:53 compute-1 nova_compute[225855]:   </metadata>
Jan 20 15:25:53 compute-1 nova_compute[225855]:   <sysinfo type="smbios">
Jan 20 15:25:53 compute-1 nova_compute[225855]:     <system>
Jan 20 15:25:53 compute-1 nova_compute[225855]:       <entry name="manufacturer">RDO</entry>
Jan 20 15:25:53 compute-1 nova_compute[225855]:       <entry name="product">OpenStack Compute</entry>
Jan 20 15:25:53 compute-1 nova_compute[225855]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 15:25:53 compute-1 nova_compute[225855]:       <entry name="serial">eb6ef384-2891-42d0-9059-42b89009b14c</entry>
Jan 20 15:25:53 compute-1 nova_compute[225855]:       <entry name="uuid">eb6ef384-2891-42d0-9059-42b89009b14c</entry>
Jan 20 15:25:53 compute-1 nova_compute[225855]:       <entry name="family">Virtual Machine</entry>
Jan 20 15:25:53 compute-1 nova_compute[225855]:     </system>
Jan 20 15:25:53 compute-1 nova_compute[225855]:   </sysinfo>
Jan 20 15:25:53 compute-1 nova_compute[225855]:   <os>
Jan 20 15:25:53 compute-1 nova_compute[225855]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 20 15:25:53 compute-1 nova_compute[225855]:     <boot dev="hd"/>
Jan 20 15:25:53 compute-1 nova_compute[225855]:     <smbios mode="sysinfo"/>
Jan 20 15:25:53 compute-1 nova_compute[225855]:   </os>
Jan 20 15:25:53 compute-1 nova_compute[225855]:   <features>
Jan 20 15:25:53 compute-1 nova_compute[225855]:     <acpi/>
Jan 20 15:25:53 compute-1 nova_compute[225855]:     <apic/>
Jan 20 15:25:53 compute-1 nova_compute[225855]:     <vmcoreinfo/>
Jan 20 15:25:53 compute-1 nova_compute[225855]:   </features>
Jan 20 15:25:53 compute-1 nova_compute[225855]:   <clock offset="utc">
Jan 20 15:25:53 compute-1 nova_compute[225855]:     <timer name="pit" tickpolicy="delay"/>
Jan 20 15:25:53 compute-1 nova_compute[225855]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 20 15:25:53 compute-1 nova_compute[225855]:     <timer name="hpet" present="no"/>
Jan 20 15:25:53 compute-1 nova_compute[225855]:   </clock>
Jan 20 15:25:53 compute-1 nova_compute[225855]:   <cpu mode="custom" match="exact">
Jan 20 15:25:53 compute-1 nova_compute[225855]:     <model>Nehalem</model>
Jan 20 15:25:53 compute-1 nova_compute[225855]:     <topology sockets="1" cores="1" threads="1"/>
Jan 20 15:25:53 compute-1 nova_compute[225855]:   </cpu>
Jan 20 15:25:53 compute-1 nova_compute[225855]:   <devices>
Jan 20 15:25:53 compute-1 nova_compute[225855]:     <disk type="network" device="disk">
Jan 20 15:25:53 compute-1 nova_compute[225855]:       <driver type="raw" cache="none"/>
Jan 20 15:25:53 compute-1 nova_compute[225855]:       <source protocol="rbd" name="vms/eb6ef384-2891-42d0-9059-42b89009b14c_disk">
Jan 20 15:25:53 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 15:25:53 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 15:25:53 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 15:25:53 compute-1 nova_compute[225855]:       </source>
Jan 20 15:25:53 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 15:25:53 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 15:25:53 compute-1 nova_compute[225855]:       </auth>
Jan 20 15:25:53 compute-1 nova_compute[225855]:       <target dev="vda" bus="virtio"/>
Jan 20 15:25:53 compute-1 nova_compute[225855]:     </disk>
Jan 20 15:25:53 compute-1 nova_compute[225855]:     <disk type="network" device="cdrom">
Jan 20 15:25:53 compute-1 nova_compute[225855]:       <driver type="raw" cache="none"/>
Jan 20 15:25:53 compute-1 nova_compute[225855]:       <source protocol="rbd" name="vms/eb6ef384-2891-42d0-9059-42b89009b14c_disk.config">
Jan 20 15:25:53 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 15:25:53 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 15:25:53 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 15:25:53 compute-1 nova_compute[225855]:       </source>
Jan 20 15:25:53 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 15:25:53 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 15:25:53 compute-1 nova_compute[225855]:       </auth>
Jan 20 15:25:53 compute-1 nova_compute[225855]:       <target dev="sda" bus="sata"/>
Jan 20 15:25:53 compute-1 nova_compute[225855]:     </disk>
Jan 20 15:25:53 compute-1 nova_compute[225855]:     <interface type="ethernet">
Jan 20 15:25:53 compute-1 nova_compute[225855]:       <mac address="fa:16:3e:56:58:d6"/>
Jan 20 15:25:53 compute-1 nova_compute[225855]:       <model type="virtio"/>
Jan 20 15:25:53 compute-1 nova_compute[225855]:       <driver name="vhost" rx_queue_size="512"/>
Jan 20 15:25:53 compute-1 nova_compute[225855]:       <mtu size="1442"/>
Jan 20 15:25:53 compute-1 nova_compute[225855]:       <target dev="tap423d10be-bf"/>
Jan 20 15:25:53 compute-1 nova_compute[225855]:     </interface>
Jan 20 15:25:53 compute-1 nova_compute[225855]:     <serial type="pty">
Jan 20 15:25:53 compute-1 nova_compute[225855]:       <log file="/var/lib/nova/instances/eb6ef384-2891-42d0-9059-42b89009b14c/console.log" append="off"/>
Jan 20 15:25:53 compute-1 nova_compute[225855]:     </serial>
Jan 20 15:25:53 compute-1 nova_compute[225855]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 20 15:25:53 compute-1 nova_compute[225855]:     <video>
Jan 20 15:25:53 compute-1 nova_compute[225855]:       <model type="virtio"/>
Jan 20 15:25:53 compute-1 nova_compute[225855]:     </video>
Jan 20 15:25:53 compute-1 nova_compute[225855]:     <input type="tablet" bus="usb"/>
Jan 20 15:25:53 compute-1 nova_compute[225855]:     <rng model="virtio">
Jan 20 15:25:53 compute-1 nova_compute[225855]:       <backend model="random">/dev/urandom</backend>
Jan 20 15:25:53 compute-1 nova_compute[225855]:     </rng>
Jan 20 15:25:53 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root"/>
Jan 20 15:25:53 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:25:53 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:25:53 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:25:53 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:25:53 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:25:53 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:25:53 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:25:53 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:25:53 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:25:53 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:25:53 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:25:53 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:25:53 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:25:53 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:25:53 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:25:53 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:25:53 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:25:53 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:25:53 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:25:53 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:25:53 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:25:53 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:25:53 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:25:53 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:25:53 compute-1 nova_compute[225855]:     <controller type="usb" index="0"/>
Jan 20 15:25:53 compute-1 nova_compute[225855]:     <memballoon model="virtio">
Jan 20 15:25:53 compute-1 nova_compute[225855]:       <stats period="10"/>
Jan 20 15:25:53 compute-1 nova_compute[225855]:     </memballoon>
Jan 20 15:25:53 compute-1 nova_compute[225855]:   </devices>
Jan 20 15:25:53 compute-1 nova_compute[225855]: </domain>
Jan 20 15:25:53 compute-1 nova_compute[225855]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 20 15:25:53 compute-1 nova_compute[225855]: 2026-01-20 15:25:53.297 225859 DEBUG nova.compute.manager [None req-431cd767-f2b3-4d69-b7d3-f2145bbc2389 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: eb6ef384-2891-42d0-9059-42b89009b14c] Preparing to wait for external event network-vif-plugged-423d10be-bf78-43ff-8ae2-812d375ccef8 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 20 15:25:53 compute-1 nova_compute[225855]: 2026-01-20 15:25:53.297 225859 DEBUG oslo_concurrency.lockutils [None req-431cd767-f2b3-4d69-b7d3-f2145bbc2389 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Acquiring lock "eb6ef384-2891-42d0-9059-42b89009b14c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:25:53 compute-1 nova_compute[225855]: 2026-01-20 15:25:53.297 225859 DEBUG oslo_concurrency.lockutils [None req-431cd767-f2b3-4d69-b7d3-f2145bbc2389 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Lock "eb6ef384-2891-42d0-9059-42b89009b14c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:25:53 compute-1 nova_compute[225855]: 2026-01-20 15:25:53.298 225859 DEBUG oslo_concurrency.lockutils [None req-431cd767-f2b3-4d69-b7d3-f2145bbc2389 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Lock "eb6ef384-2891-42d0-9059-42b89009b14c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:25:53 compute-1 nova_compute[225855]: 2026-01-20 15:25:53.298 225859 DEBUG nova.virt.libvirt.vif [None req-431cd767-f2b3-4d69-b7d3-f2145bbc2389 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T15:25:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1635932386',display_name='tempest-TestNetworkBasicOps-server-1635932386',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1635932386',id=202,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOxLu/uVu4uuXkZBckB0Jue8mA2XpnPI63IpB2BGooiySZuLgddUiCiwQ3/YqBeUzNGbEuiI4/oWiiYa4zQrQHAa9idheznhVw0kdlFQsBm1hL1vB4bH09utur5br8iaiQ==',key_name='tempest-TestNetworkBasicOps-305263708',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3168f57421fb49bfb94b85daedd1fe7d',ramdisk_id='',reservation_id='r-1wrkal35',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-807695970',owner_user_name='tempest-TestNetworkBasicOps-807695970-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T15:25:46Z,user_data=None,user_id='5338aa65dc0e4326a66ce79053787f14',uuid=eb6ef384-2891-42d0-9059-42b89009b14c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "423d10be-bf78-43ff-8ae2-812d375ccef8", "address": "fa:16:3e:56:58:d6", "network": {"id": "3b3aa186-38a4-4cc6-9399-f535503e9791", "bridge": "br-int", "label": "tempest-network-smoke--458751349", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "3168f57421fb49bfb94b85daedd1fe7d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap423d10be-bf", "ovs_interfaceid": "423d10be-bf78-43ff-8ae2-812d375ccef8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 20 15:25:53 compute-1 nova_compute[225855]: 2026-01-20 15:25:53.299 225859 DEBUG nova.network.os_vif_util [None req-431cd767-f2b3-4d69-b7d3-f2145bbc2389 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Converting VIF {"id": "423d10be-bf78-43ff-8ae2-812d375ccef8", "address": "fa:16:3e:56:58:d6", "network": {"id": "3b3aa186-38a4-4cc6-9399-f535503e9791", "bridge": "br-int", "label": "tempest-network-smoke--458751349", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3168f57421fb49bfb94b85daedd1fe7d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap423d10be-bf", "ovs_interfaceid": "423d10be-bf78-43ff-8ae2-812d375ccef8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 15:25:53 compute-1 nova_compute[225855]: 2026-01-20 15:25:53.300 225859 DEBUG nova.network.os_vif_util [None req-431cd767-f2b3-4d69-b7d3-f2145bbc2389 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:56:58:d6,bridge_name='br-int',has_traffic_filtering=True,id=423d10be-bf78-43ff-8ae2-812d375ccef8,network=Network(3b3aa186-38a4-4cc6-9399-f535503e9791),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap423d10be-bf') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 15:25:53 compute-1 nova_compute[225855]: 2026-01-20 15:25:53.300 225859 DEBUG os_vif [None req-431cd767-f2b3-4d69-b7d3-f2145bbc2389 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:56:58:d6,bridge_name='br-int',has_traffic_filtering=True,id=423d10be-bf78-43ff-8ae2-812d375ccef8,network=Network(3b3aa186-38a4-4cc6-9399-f535503e9791),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap423d10be-bf') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 20 15:25:53 compute-1 nova_compute[225855]: 2026-01-20 15:25:53.301 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:25:53 compute-1 nova_compute[225855]: 2026-01-20 15:25:53.301 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:25:53 compute-1 nova_compute[225855]: 2026-01-20 15:25:53.302 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 15:25:53 compute-1 nova_compute[225855]: 2026-01-20 15:25:53.306 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:25:53 compute-1 nova_compute[225855]: 2026-01-20 15:25:53.306 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap423d10be-bf, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:25:53 compute-1 nova_compute[225855]: 2026-01-20 15:25:53.307 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap423d10be-bf, col_values=(('external_ids', {'iface-id': '423d10be-bf78-43ff-8ae2-812d375ccef8', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:56:58:d6', 'vm-uuid': 'eb6ef384-2891-42d0-9059-42b89009b14c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:25:53 compute-1 nova_compute[225855]: 2026-01-20 15:25:53.308 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:25:53 compute-1 NetworkManager[49104]: <info>  [1768922753.3099] manager: (tap423d10be-bf): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/382)
Jan 20 15:25:53 compute-1 nova_compute[225855]: 2026-01-20 15:25:53.310 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 15:25:53 compute-1 nova_compute[225855]: 2026-01-20 15:25:53.315 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:25:53 compute-1 nova_compute[225855]: 2026-01-20 15:25:53.316 225859 INFO os_vif [None req-431cd767-f2b3-4d69-b7d3-f2145bbc2389 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:56:58:d6,bridge_name='br-int',has_traffic_filtering=True,id=423d10be-bf78-43ff-8ae2-812d375ccef8,network=Network(3b3aa186-38a4-4cc6-9399-f535503e9791),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap423d10be-bf')
Jan 20 15:25:53 compute-1 nova_compute[225855]: 2026-01-20 15:25:53.366 225859 DEBUG nova.virt.libvirt.driver [None req-431cd767-f2b3-4d69-b7d3-f2145bbc2389 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 15:25:53 compute-1 nova_compute[225855]: 2026-01-20 15:25:53.366 225859 DEBUG nova.virt.libvirt.driver [None req-431cd767-f2b3-4d69-b7d3-f2145bbc2389 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 15:25:53 compute-1 nova_compute[225855]: 2026-01-20 15:25:53.367 225859 DEBUG nova.virt.libvirt.driver [None req-431cd767-f2b3-4d69-b7d3-f2145bbc2389 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] No VIF found with MAC fa:16:3e:56:58:d6, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 20 15:25:53 compute-1 nova_compute[225855]: 2026-01-20 15:25:53.367 225859 INFO nova.virt.libvirt.driver [None req-431cd767-f2b3-4d69-b7d3-f2145bbc2389 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: eb6ef384-2891-42d0-9059-42b89009b14c] Using config drive
Jan 20 15:25:53 compute-1 nova_compute[225855]: 2026-01-20 15:25:53.391 225859 DEBUG nova.storage.rbd_utils [None req-431cd767-f2b3-4d69-b7d3-f2145bbc2389 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] rbd image eb6ef384-2891-42d0-9059-42b89009b14c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:25:53 compute-1 nova_compute[225855]: 2026-01-20 15:25:53.473 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:25:53 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:25:53 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/3613554405' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:25:53 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/4237637953' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:25:53 compute-1 nova_compute[225855]: 2026-01-20 15:25:53.699 225859 INFO nova.virt.libvirt.driver [None req-431cd767-f2b3-4d69-b7d3-f2145bbc2389 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: eb6ef384-2891-42d0-9059-42b89009b14c] Creating config drive at /var/lib/nova/instances/eb6ef384-2891-42d0-9059-42b89009b14c/disk.config
Jan 20 15:25:53 compute-1 nova_compute[225855]: 2026-01-20 15:25:53.704 225859 DEBUG oslo_concurrency.processutils [None req-431cd767-f2b3-4d69-b7d3-f2145bbc2389 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/eb6ef384-2891-42d0-9059-42b89009b14c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp5mk0q491 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:25:53 compute-1 nova_compute[225855]: 2026-01-20 15:25:53.779 225859 DEBUG nova.network.neutron [req-9c585c66-3c7a-41a4-b473-d96ab5dfb75f req-7d510cfd-28ba-4e9a-bdd8-45782f43353c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: eb6ef384-2891-42d0-9059-42b89009b14c] Updated VIF entry in instance network info cache for port 423d10be-bf78-43ff-8ae2-812d375ccef8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 15:25:53 compute-1 nova_compute[225855]: 2026-01-20 15:25:53.780 225859 DEBUG nova.network.neutron [req-9c585c66-3c7a-41a4-b473-d96ab5dfb75f req-7d510cfd-28ba-4e9a-bdd8-45782f43353c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: eb6ef384-2891-42d0-9059-42b89009b14c] Updating instance_info_cache with network_info: [{"id": "423d10be-bf78-43ff-8ae2-812d375ccef8", "address": "fa:16:3e:56:58:d6", "network": {"id": "3b3aa186-38a4-4cc6-9399-f535503e9791", "bridge": "br-int", "label": "tempest-network-smoke--458751349", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3168f57421fb49bfb94b85daedd1fe7d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap423d10be-bf", "ovs_interfaceid": "423d10be-bf78-43ff-8ae2-812d375ccef8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:25:53 compute-1 nova_compute[225855]: 2026-01-20 15:25:53.795 225859 DEBUG oslo_concurrency.lockutils [req-9c585c66-3c7a-41a4-b473-d96ab5dfb75f req-7d510cfd-28ba-4e9a-bdd8-45782f43353c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-eb6ef384-2891-42d0-9059-42b89009b14c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:25:53 compute-1 nova_compute[225855]: 2026-01-20 15:25:53.848 225859 DEBUG oslo_concurrency.processutils [None req-431cd767-f2b3-4d69-b7d3-f2145bbc2389 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/eb6ef384-2891-42d0-9059-42b89009b14c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp5mk0q491" returned: 0 in 0.145s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:25:53 compute-1 nova_compute[225855]: 2026-01-20 15:25:53.877 225859 DEBUG nova.storage.rbd_utils [None req-431cd767-f2b3-4d69-b7d3-f2145bbc2389 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] rbd image eb6ef384-2891-42d0-9059-42b89009b14c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:25:53 compute-1 nova_compute[225855]: 2026-01-20 15:25:53.881 225859 DEBUG oslo_concurrency.processutils [None req-431cd767-f2b3-4d69-b7d3-f2145bbc2389 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/eb6ef384-2891-42d0-9059-42b89009b14c/disk.config eb6ef384-2891-42d0-9059-42b89009b14c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:25:54 compute-1 nova_compute[225855]: 2026-01-20 15:25:54.062 225859 DEBUG oslo_concurrency.processutils [None req-431cd767-f2b3-4d69-b7d3-f2145bbc2389 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/eb6ef384-2891-42d0-9059-42b89009b14c/disk.config eb6ef384-2891-42d0-9059-42b89009b14c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.181s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:25:54 compute-1 nova_compute[225855]: 2026-01-20 15:25:54.063 225859 INFO nova.virt.libvirt.driver [None req-431cd767-f2b3-4d69-b7d3-f2145bbc2389 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: eb6ef384-2891-42d0-9059-42b89009b14c] Deleting local config drive /var/lib/nova/instances/eb6ef384-2891-42d0-9059-42b89009b14c/disk.config because it was imported into RBD.
Jan 20 15:25:54 compute-1 NetworkManager[49104]: <info>  [1768922754.1126] manager: (tap423d10be-bf): new Tun device (/org/freedesktop/NetworkManager/Devices/383)
Jan 20 15:25:54 compute-1 kernel: tap423d10be-bf: entered promiscuous mode
Jan 20 15:25:54 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:25:54 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:25:54 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:25:54.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:25:54 compute-1 nova_compute[225855]: 2026-01-20 15:25:54.114 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:25:54 compute-1 ovn_controller[130490]: 2026-01-20T15:25:54Z|00911|binding|INFO|Claiming lport 423d10be-bf78-43ff-8ae2-812d375ccef8 for this chassis.
Jan 20 15:25:54 compute-1 ovn_controller[130490]: 2026-01-20T15:25:54Z|00912|binding|INFO|423d10be-bf78-43ff-8ae2-812d375ccef8: Claiming fa:16:3e:56:58:d6 10.100.0.6
Jan 20 15:25:54 compute-1 nova_compute[225855]: 2026-01-20 15:25:54.120 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:25:54 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:25:54.134 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:56:58:d6 10.100.0.6'], port_security=['fa:16:3e:56:58:d6 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'eb6ef384-2891-42d0-9059-42b89009b14c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3b3aa186-38a4-4cc6-9399-f535503e9791', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3168f57421fb49bfb94b85daedd1fe7d', 'neutron:revision_number': '2', 'neutron:security_group_ids': '046cc664-f8d9-4379-b46e-95218c363faa', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bbfdde0f-6f5b-476d-8d21-7557a37ecad6, chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], logical_port=423d10be-bf78-43ff-8ae2-812d375ccef8) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:25:54 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:25:54.135 140354 INFO neutron.agent.ovn.metadata.agent [-] Port 423d10be-bf78-43ff-8ae2-812d375ccef8 in datapath 3b3aa186-38a4-4cc6-9399-f535503e9791 bound to our chassis
Jan 20 15:25:54 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:25:54.136 140354 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3b3aa186-38a4-4cc6-9399-f535503e9791
Jan 20 15:25:54 compute-1 systemd-machined[194361]: New machine qemu-107-instance-000000ca.
Jan 20 15:25:54 compute-1 systemd-udevd[314084]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 15:25:54 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:25:54.149 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[14decc33-432c-4697-9001-900a7950ec61]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:25:54 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:25:54.150 140354 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap3b3aa186-31 in ovnmeta-3b3aa186-38a4-4cc6-9399-f535503e9791 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 20 15:25:54 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:25:54.152 229707 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap3b3aa186-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 20 15:25:54 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:25:54.152 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[18893a02-d7fe-4f2d-a1f9-53c5b99babf9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:25:54 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:25:54.153 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[34794a76-7224-4979-b49e-c2fcc7fd7235]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:25:54 compute-1 NetworkManager[49104]: <info>  [1768922754.1648] device (tap423d10be-bf): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 20 15:25:54 compute-1 NetworkManager[49104]: <info>  [1768922754.1654] device (tap423d10be-bf): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 20 15:25:54 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:25:54.165 140466 DEBUG oslo.privsep.daemon [-] privsep: reply[b9af6e6c-873b-4ea8-b5be-22e06abb9940]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:25:54 compute-1 nova_compute[225855]: 2026-01-20 15:25:54.177 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:25:54 compute-1 systemd[1]: Started Virtual Machine qemu-107-instance-000000ca.
Jan 20 15:25:54 compute-1 ovn_controller[130490]: 2026-01-20T15:25:54Z|00913|binding|INFO|Setting lport 423d10be-bf78-43ff-8ae2-812d375ccef8 ovn-installed in OVS
Jan 20 15:25:54 compute-1 ovn_controller[130490]: 2026-01-20T15:25:54Z|00914|binding|INFO|Setting lport 423d10be-bf78-43ff-8ae2-812d375ccef8 up in Southbound
Jan 20 15:25:54 compute-1 nova_compute[225855]: 2026-01-20 15:25:54.185 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:25:54 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:25:54.188 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[40187f24-2d1b-44fd-ad30-be25da512eaf]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:25:54 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:25:54.215 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[db80be7c-b7d4-4990-9014-e67db0136e67]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:25:54 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:25:54.220 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[8cb4839f-3ddd-45f9-be87-bf2748c80a3b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:25:54 compute-1 NetworkManager[49104]: <info>  [1768922754.2211] manager: (tap3b3aa186-30): new Veth device (/org/freedesktop/NetworkManager/Devices/384)
Jan 20 15:25:54 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:25:54.252 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[98e5a7fa-2183-45aa-93cf-11cd1df6a4ab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:25:54 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:25:54.254 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[4ae56565-5fc7-4add-a5ad-b362895a3415]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:25:54 compute-1 NetworkManager[49104]: <info>  [1768922754.2760] device (tap3b3aa186-30): carrier: link connected
Jan 20 15:25:54 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:25:54.282 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[9a4e2ba4-5f7a-4c71-97ae-17c02b88e442]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:25:54 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:25:54.299 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[36ab1ed8-96a2-4c41-841e-60275277f987]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3b3aa186-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5c:01:9f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 260], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 779320, 'reachable_time': 26479, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 314116, 'error': None, 'target': 'ovnmeta-3b3aa186-38a4-4cc6-9399-f535503e9791', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:25:54 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:25:54.313 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[ec2ef450-82ef-42e9-8cdf-eb74f94ddd11]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe5c:19f'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 779320, 'tstamp': 779320}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 314117, 'error': None, 'target': 'ovnmeta-3b3aa186-38a4-4cc6-9399-f535503e9791', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:25:54 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:25:54.327 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[e9889c67-73ed-477e-a3e1-5eaf9eeaa47c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3b3aa186-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5c:01:9f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 260], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 779320, 'reachable_time': 26479, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 314118, 'error': None, 'target': 'ovnmeta-3b3aa186-38a4-4cc6-9399-f535503e9791', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:25:54 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:25:54.356 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[800826ce-ef11-481c-8de1-b4349fe53480]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:25:54 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:25:54 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:25:54 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:25:54.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:25:54 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:25:54.411 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[2e433b82-293d-4d35-bbe0-f6c27c4176a3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:25:54 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:25:54.412 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3b3aa186-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:25:54 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:25:54.413 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 15:25:54 compute-1 kernel: tap3b3aa186-30: entered promiscuous mode
Jan 20 15:25:54 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:25:54.413 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3b3aa186-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:25:54 compute-1 NetworkManager[49104]: <info>  [1768922754.4167] manager: (tap3b3aa186-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/385)
Jan 20 15:25:54 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:25:54.417 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3b3aa186-30, col_values=(('external_ids', {'iface-id': 'f92b8596-5a2a-495f-b715-08c8aa7181a3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:25:54 compute-1 ovn_controller[130490]: 2026-01-20T15:25:54Z|00915|binding|INFO|Releasing lport f92b8596-5a2a-495f-b715-08c8aa7181a3 from this chassis (sb_readonly=0)
Jan 20 15:25:54 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:25:54.419 140354 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/3b3aa186-38a4-4cc6-9399-f535503e9791.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/3b3aa186-38a4-4cc6-9399-f535503e9791.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 20 15:25:54 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:25:54.420 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[7f490e6e-1fcc-4535-a207-a05020e25e19]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:25:54 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:25:54.420 140354 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 20 15:25:54 compute-1 ovn_metadata_agent[140349]: global
Jan 20 15:25:54 compute-1 ovn_metadata_agent[140349]:     log         /dev/log local0 debug
Jan 20 15:25:54 compute-1 ovn_metadata_agent[140349]:     log-tag     haproxy-metadata-proxy-3b3aa186-38a4-4cc6-9399-f535503e9791
Jan 20 15:25:54 compute-1 ovn_metadata_agent[140349]:     user        root
Jan 20 15:25:54 compute-1 ovn_metadata_agent[140349]:     group       root
Jan 20 15:25:54 compute-1 ovn_metadata_agent[140349]:     maxconn     1024
Jan 20 15:25:54 compute-1 ovn_metadata_agent[140349]:     pidfile     /var/lib/neutron/external/pids/3b3aa186-38a4-4cc6-9399-f535503e9791.pid.haproxy
Jan 20 15:25:54 compute-1 ovn_metadata_agent[140349]:     daemon
Jan 20 15:25:54 compute-1 ovn_metadata_agent[140349]: 
Jan 20 15:25:54 compute-1 ovn_metadata_agent[140349]: defaults
Jan 20 15:25:54 compute-1 ovn_metadata_agent[140349]:     log global
Jan 20 15:25:54 compute-1 ovn_metadata_agent[140349]:     mode http
Jan 20 15:25:54 compute-1 ovn_metadata_agent[140349]:     option httplog
Jan 20 15:25:54 compute-1 ovn_metadata_agent[140349]:     option dontlognull
Jan 20 15:25:54 compute-1 ovn_metadata_agent[140349]:     option http-server-close
Jan 20 15:25:54 compute-1 ovn_metadata_agent[140349]:     option forwardfor
Jan 20 15:25:54 compute-1 ovn_metadata_agent[140349]:     retries                 3
Jan 20 15:25:54 compute-1 ovn_metadata_agent[140349]:     timeout http-request    30s
Jan 20 15:25:54 compute-1 ovn_metadata_agent[140349]:     timeout connect         30s
Jan 20 15:25:54 compute-1 ovn_metadata_agent[140349]:     timeout client          32s
Jan 20 15:25:54 compute-1 ovn_metadata_agent[140349]:     timeout server          32s
Jan 20 15:25:54 compute-1 ovn_metadata_agent[140349]:     timeout http-keep-alive 30s
Jan 20 15:25:54 compute-1 ovn_metadata_agent[140349]: 
Jan 20 15:25:54 compute-1 ovn_metadata_agent[140349]: 
Jan 20 15:25:54 compute-1 ovn_metadata_agent[140349]: listen listener
Jan 20 15:25:54 compute-1 ovn_metadata_agent[140349]:     bind 169.254.169.254:80
Jan 20 15:25:54 compute-1 ovn_metadata_agent[140349]:     server metadata /var/lib/neutron/metadata_proxy
Jan 20 15:25:54 compute-1 ovn_metadata_agent[140349]:     http-request add-header X-OVN-Network-ID 3b3aa186-38a4-4cc6-9399-f535503e9791
Jan 20 15:25:54 compute-1 ovn_metadata_agent[140349]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 20 15:25:54 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:25:54.421 140354 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-3b3aa186-38a4-4cc6-9399-f535503e9791', 'env', 'PROCESS_TAG=haproxy-3b3aa186-38a4-4cc6-9399-f535503e9791', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/3b3aa186-38a4-4cc6-9399-f535503e9791.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 20 15:25:54 compute-1 nova_compute[225855]: 2026-01-20 15:25:54.431 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:25:54 compute-1 ceph-mon[81775]: pgmap v3111: 321 pgs: 321 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 20 15:25:54 compute-1 nova_compute[225855]: 2026-01-20 15:25:54.724 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768922754.7236478, eb6ef384-2891-42d0-9059-42b89009b14c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:25:54 compute-1 nova_compute[225855]: 2026-01-20 15:25:54.724 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: eb6ef384-2891-42d0-9059-42b89009b14c] VM Started (Lifecycle Event)
Jan 20 15:25:54 compute-1 nova_compute[225855]: 2026-01-20 15:25:54.756 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: eb6ef384-2891-42d0-9059-42b89009b14c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:25:54 compute-1 nova_compute[225855]: 2026-01-20 15:25:54.760 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768922754.7238882, eb6ef384-2891-42d0-9059-42b89009b14c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:25:54 compute-1 nova_compute[225855]: 2026-01-20 15:25:54.760 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: eb6ef384-2891-42d0-9059-42b89009b14c] VM Paused (Lifecycle Event)
Jan 20 15:25:54 compute-1 nova_compute[225855]: 2026-01-20 15:25:54.777 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: eb6ef384-2891-42d0-9059-42b89009b14c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:25:54 compute-1 nova_compute[225855]: 2026-01-20 15:25:54.780 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: eb6ef384-2891-42d0-9059-42b89009b14c] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 15:25:54 compute-1 nova_compute[225855]: 2026-01-20 15:25:54.796 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: eb6ef384-2891-42d0-9059-42b89009b14c] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 15:25:54 compute-1 podman[314191]: 2026-01-20 15:25:54.832127981 +0000 UTC m=+0.044839922 container create c23b6ee47ebd8837b0723000e8dd8b5f34925d463ae436fc56e1762a0bc434e1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3b3aa186-38a4-4cc6-9399-f535503e9791, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 20 15:25:54 compute-1 systemd[1]: Started libpod-conmon-c23b6ee47ebd8837b0723000e8dd8b5f34925d463ae436fc56e1762a0bc434e1.scope.
Jan 20 15:25:54 compute-1 systemd[1]: Started libcrun container.
Jan 20 15:25:54 compute-1 podman[314191]: 2026-01-20 15:25:54.80865495 +0000 UTC m=+0.021366891 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 20 15:25:54 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42f6bc1236f352acdb0936adef4a9dad7a0a958fe3cb77609aa75d071a1c9c47/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 20 15:25:54 compute-1 podman[314191]: 2026-01-20 15:25:54.915088999 +0000 UTC m=+0.127800950 container init c23b6ee47ebd8837b0723000e8dd8b5f34925d463ae436fc56e1762a0bc434e1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3b3aa186-38a4-4cc6-9399-f535503e9791, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 20 15:25:54 compute-1 podman[314191]: 2026-01-20 15:25:54.920442022 +0000 UTC m=+0.133153963 container start c23b6ee47ebd8837b0723000e8dd8b5f34925d463ae436fc56e1762a0bc434e1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3b3aa186-38a4-4cc6-9399-f535503e9791, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202)
Jan 20 15:25:54 compute-1 neutron-haproxy-ovnmeta-3b3aa186-38a4-4cc6-9399-f535503e9791[314207]: [NOTICE]   (314211) : New worker (314213) forked
Jan 20 15:25:54 compute-1 neutron-haproxy-ovnmeta-3b3aa186-38a4-4cc6-9399-f535503e9791[314207]: [NOTICE]   (314211) : Loading success.
Jan 20 15:25:55 compute-1 nova_compute[225855]: 2026-01-20 15:25:55.394 225859 DEBUG nova.compute.manager [req-3737f890-f120-404f-add9-c00713588786 req-1f8d0ccc-c8c4-472f-9b09-a8a9f181c1d7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: eb6ef384-2891-42d0-9059-42b89009b14c] Received event network-vif-plugged-423d10be-bf78-43ff-8ae2-812d375ccef8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:25:55 compute-1 nova_compute[225855]: 2026-01-20 15:25:55.395 225859 DEBUG oslo_concurrency.lockutils [req-3737f890-f120-404f-add9-c00713588786 req-1f8d0ccc-c8c4-472f-9b09-a8a9f181c1d7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "eb6ef384-2891-42d0-9059-42b89009b14c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:25:55 compute-1 nova_compute[225855]: 2026-01-20 15:25:55.395 225859 DEBUG oslo_concurrency.lockutils [req-3737f890-f120-404f-add9-c00713588786 req-1f8d0ccc-c8c4-472f-9b09-a8a9f181c1d7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "eb6ef384-2891-42d0-9059-42b89009b14c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:25:55 compute-1 nova_compute[225855]: 2026-01-20 15:25:55.396 225859 DEBUG oslo_concurrency.lockutils [req-3737f890-f120-404f-add9-c00713588786 req-1f8d0ccc-c8c4-472f-9b09-a8a9f181c1d7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "eb6ef384-2891-42d0-9059-42b89009b14c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:25:55 compute-1 nova_compute[225855]: 2026-01-20 15:25:55.396 225859 DEBUG nova.compute.manager [req-3737f890-f120-404f-add9-c00713588786 req-1f8d0ccc-c8c4-472f-9b09-a8a9f181c1d7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: eb6ef384-2891-42d0-9059-42b89009b14c] Processing event network-vif-plugged-423d10be-bf78-43ff-8ae2-812d375ccef8 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 20 15:25:55 compute-1 nova_compute[225855]: 2026-01-20 15:25:55.397 225859 DEBUG nova.compute.manager [None req-431cd767-f2b3-4d69-b7d3-f2145bbc2389 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: eb6ef384-2891-42d0-9059-42b89009b14c] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 20 15:25:55 compute-1 nova_compute[225855]: 2026-01-20 15:25:55.400 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768922755.4000478, eb6ef384-2891-42d0-9059-42b89009b14c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:25:55 compute-1 nova_compute[225855]: 2026-01-20 15:25:55.400 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: eb6ef384-2891-42d0-9059-42b89009b14c] VM Resumed (Lifecycle Event)
Jan 20 15:25:55 compute-1 nova_compute[225855]: 2026-01-20 15:25:55.401 225859 DEBUG nova.virt.libvirt.driver [None req-431cd767-f2b3-4d69-b7d3-f2145bbc2389 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: eb6ef384-2891-42d0-9059-42b89009b14c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 20 15:25:55 compute-1 nova_compute[225855]: 2026-01-20 15:25:55.405 225859 INFO nova.virt.libvirt.driver [-] [instance: eb6ef384-2891-42d0-9059-42b89009b14c] Instance spawned successfully.
Jan 20 15:25:55 compute-1 nova_compute[225855]: 2026-01-20 15:25:55.405 225859 DEBUG nova.virt.libvirt.driver [None req-431cd767-f2b3-4d69-b7d3-f2145bbc2389 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: eb6ef384-2891-42d0-9059-42b89009b14c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 20 15:25:55 compute-1 nova_compute[225855]: 2026-01-20 15:25:55.419 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: eb6ef384-2891-42d0-9059-42b89009b14c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:25:55 compute-1 nova_compute[225855]: 2026-01-20 15:25:55.424 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: eb6ef384-2891-42d0-9059-42b89009b14c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 15:25:55 compute-1 nova_compute[225855]: 2026-01-20 15:25:55.427 225859 DEBUG nova.virt.libvirt.driver [None req-431cd767-f2b3-4d69-b7d3-f2145bbc2389 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: eb6ef384-2891-42d0-9059-42b89009b14c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:25:55 compute-1 nova_compute[225855]: 2026-01-20 15:25:55.427 225859 DEBUG nova.virt.libvirt.driver [None req-431cd767-f2b3-4d69-b7d3-f2145bbc2389 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: eb6ef384-2891-42d0-9059-42b89009b14c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:25:55 compute-1 nova_compute[225855]: 2026-01-20 15:25:55.428 225859 DEBUG nova.virt.libvirt.driver [None req-431cd767-f2b3-4d69-b7d3-f2145bbc2389 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: eb6ef384-2891-42d0-9059-42b89009b14c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:25:55 compute-1 nova_compute[225855]: 2026-01-20 15:25:55.428 225859 DEBUG nova.virt.libvirt.driver [None req-431cd767-f2b3-4d69-b7d3-f2145bbc2389 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: eb6ef384-2891-42d0-9059-42b89009b14c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:25:55 compute-1 nova_compute[225855]: 2026-01-20 15:25:55.429 225859 DEBUG nova.virt.libvirt.driver [None req-431cd767-f2b3-4d69-b7d3-f2145bbc2389 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: eb6ef384-2891-42d0-9059-42b89009b14c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:25:55 compute-1 nova_compute[225855]: 2026-01-20 15:25:55.429 225859 DEBUG nova.virt.libvirt.driver [None req-431cd767-f2b3-4d69-b7d3-f2145bbc2389 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: eb6ef384-2891-42d0-9059-42b89009b14c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:25:55 compute-1 nova_compute[225855]: 2026-01-20 15:25:55.451 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: eb6ef384-2891-42d0-9059-42b89009b14c] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 15:25:55 compute-1 nova_compute[225855]: 2026-01-20 15:25:55.492 225859 INFO nova.compute.manager [None req-431cd767-f2b3-4d69-b7d3-f2145bbc2389 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: eb6ef384-2891-42d0-9059-42b89009b14c] Took 8.68 seconds to spawn the instance on the hypervisor.
Jan 20 15:25:55 compute-1 nova_compute[225855]: 2026-01-20 15:25:55.492 225859 DEBUG nova.compute.manager [None req-431cd767-f2b3-4d69-b7d3-f2145bbc2389 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: eb6ef384-2891-42d0-9059-42b89009b14c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:25:55 compute-1 nova_compute[225855]: 2026-01-20 15:25:55.563 225859 INFO nova.compute.manager [None req-431cd767-f2b3-4d69-b7d3-f2145bbc2389 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: eb6ef384-2891-42d0-9059-42b89009b14c] Took 9.65 seconds to build instance.
Jan 20 15:25:55 compute-1 nova_compute[225855]: 2026-01-20 15:25:55.583 225859 DEBUG oslo_concurrency.lockutils [None req-431cd767-f2b3-4d69-b7d3-f2145bbc2389 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Lock "eb6ef384-2891-42d0-9059-42b89009b14c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.723s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:25:56 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:25:56 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:25:56 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:25:56.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:25:56 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:25:56 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:25:56 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:25:56.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:25:56 compute-1 ceph-mon[81775]: pgmap v3112: 321 pgs: 321 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 20 15:25:57 compute-1 sudo[314223]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:25:57 compute-1 sudo[314223]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:25:57 compute-1 sudo[314223]: pam_unix(sudo:session): session closed for user root
Jan 20 15:25:57 compute-1 sudo[314248]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:25:57 compute-1 sudo[314248]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:25:57 compute-1 sudo[314248]: pam_unix(sudo:session): session closed for user root
Jan 20 15:25:57 compute-1 sudo[314273]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:25:57 compute-1 sudo[314273]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:25:57 compute-1 sudo[314273]: pam_unix(sudo:session): session closed for user root
Jan 20 15:25:57 compute-1 sudo[314298]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 20 15:25:57 compute-1 sudo[314298]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:25:57 compute-1 nova_compute[225855]: 2026-01-20 15:25:57.499 225859 DEBUG nova.compute.manager [req-13dcadbb-f48e-4007-a694-fb33422f2209 req-922e1ce6-2666-4db9-901e-ce6c8b7cb1d2 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: eb6ef384-2891-42d0-9059-42b89009b14c] Received event network-vif-plugged-423d10be-bf78-43ff-8ae2-812d375ccef8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:25:57 compute-1 nova_compute[225855]: 2026-01-20 15:25:57.501 225859 DEBUG oslo_concurrency.lockutils [req-13dcadbb-f48e-4007-a694-fb33422f2209 req-922e1ce6-2666-4db9-901e-ce6c8b7cb1d2 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "eb6ef384-2891-42d0-9059-42b89009b14c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:25:57 compute-1 nova_compute[225855]: 2026-01-20 15:25:57.502 225859 DEBUG oslo_concurrency.lockutils [req-13dcadbb-f48e-4007-a694-fb33422f2209 req-922e1ce6-2666-4db9-901e-ce6c8b7cb1d2 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "eb6ef384-2891-42d0-9059-42b89009b14c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:25:57 compute-1 nova_compute[225855]: 2026-01-20 15:25:57.503 225859 DEBUG oslo_concurrency.lockutils [req-13dcadbb-f48e-4007-a694-fb33422f2209 req-922e1ce6-2666-4db9-901e-ce6c8b7cb1d2 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "eb6ef384-2891-42d0-9059-42b89009b14c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:25:57 compute-1 nova_compute[225855]: 2026-01-20 15:25:57.503 225859 DEBUG nova.compute.manager [req-13dcadbb-f48e-4007-a694-fb33422f2209 req-922e1ce6-2666-4db9-901e-ce6c8b7cb1d2 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: eb6ef384-2891-42d0-9059-42b89009b14c] No waiting events found dispatching network-vif-plugged-423d10be-bf78-43ff-8ae2-812d375ccef8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:25:57 compute-1 nova_compute[225855]: 2026-01-20 15:25:57.503 225859 WARNING nova.compute.manager [req-13dcadbb-f48e-4007-a694-fb33422f2209 req-922e1ce6-2666-4db9-901e-ce6c8b7cb1d2 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: eb6ef384-2891-42d0-9059-42b89009b14c] Received unexpected event network-vif-plugged-423d10be-bf78-43ff-8ae2-812d375ccef8 for instance with vm_state active and task_state None.
Jan 20 15:25:57 compute-1 sudo[314298]: pam_unix(sudo:session): session closed for user root
Jan 20 15:25:58 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:25:58 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:25:58 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:25:58.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:25:58 compute-1 nova_compute[225855]: 2026-01-20 15:25:58.314 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:25:58 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:25:58 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:25:58 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:25:58.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:25:58 compute-1 nova_compute[225855]: 2026-01-20 15:25:58.476 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:25:58 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:25:58 compute-1 ceph-mon[81775]: pgmap v3113: 321 pgs: 321 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 149 KiB/s rd, 1.8 MiB/s wr, 41 op/s
Jan 20 15:25:58 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:25:58 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 15:25:58 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:25:58 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 15:25:58 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 15:25:58 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:25:59 compute-1 ovn_controller[130490]: 2026-01-20T15:25:59Z|00916|binding|INFO|Releasing lport f92b8596-5a2a-495f-b715-08c8aa7181a3 from this chassis (sb_readonly=0)
Jan 20 15:25:59 compute-1 NetworkManager[49104]: <info>  [1768922759.5439] manager: (patch-br-int-to-provnet-b62c391b-f7a3-4a38-a0df-72ac0383ca74): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/386)
Jan 20 15:25:59 compute-1 nova_compute[225855]: 2026-01-20 15:25:59.544 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:25:59 compute-1 NetworkManager[49104]: <info>  [1768922759.5447] manager: (patch-provnet-b62c391b-f7a3-4a38-a0df-72ac0383ca74-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/387)
Jan 20 15:25:59 compute-1 ovn_controller[130490]: 2026-01-20T15:25:59Z|00917|binding|INFO|Releasing lport f92b8596-5a2a-495f-b715-08c8aa7181a3 from this chassis (sb_readonly=0)
Jan 20 15:25:59 compute-1 nova_compute[225855]: 2026-01-20 15:25:59.576 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:25:59 compute-1 nova_compute[225855]: 2026-01-20 15:25:59.581 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:25:59 compute-1 nova_compute[225855]: 2026-01-20 15:25:59.946 225859 DEBUG nova.compute.manager [req-3f03e048-0bb2-4dae-969e-a466a179cce7 req-8308b5cb-762c-4cf1-a507-a7c0659e9adc 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: eb6ef384-2891-42d0-9059-42b89009b14c] Received event network-changed-423d10be-bf78-43ff-8ae2-812d375ccef8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:25:59 compute-1 nova_compute[225855]: 2026-01-20 15:25:59.947 225859 DEBUG nova.compute.manager [req-3f03e048-0bb2-4dae-969e-a466a179cce7 req-8308b5cb-762c-4cf1-a507-a7c0659e9adc 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: eb6ef384-2891-42d0-9059-42b89009b14c] Refreshing instance network info cache due to event network-changed-423d10be-bf78-43ff-8ae2-812d375ccef8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 15:25:59 compute-1 nova_compute[225855]: 2026-01-20 15:25:59.947 225859 DEBUG oslo_concurrency.lockutils [req-3f03e048-0bb2-4dae-969e-a466a179cce7 req-8308b5cb-762c-4cf1-a507-a7c0659e9adc 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-eb6ef384-2891-42d0-9059-42b89009b14c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:25:59 compute-1 nova_compute[225855]: 2026-01-20 15:25:59.947 225859 DEBUG oslo_concurrency.lockutils [req-3f03e048-0bb2-4dae-969e-a466a179cce7 req-8308b5cb-762c-4cf1-a507-a7c0659e9adc 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-eb6ef384-2891-42d0-9059-42b89009b14c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:25:59 compute-1 nova_compute[225855]: 2026-01-20 15:25:59.947 225859 DEBUG nova.network.neutron [req-3f03e048-0bb2-4dae-969e-a466a179cce7 req-8308b5cb-762c-4cf1-a507-a7c0659e9adc 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: eb6ef384-2891-42d0-9059-42b89009b14c] Refreshing network info cache for port 423d10be-bf78-43ff-8ae2-812d375ccef8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 15:26:00 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:26:00 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:26:00 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:26:00.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:26:00 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:26:00 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:26:00 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:26:00.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:26:00 compute-1 ceph-mon[81775]: pgmap v3114: 321 pgs: 321 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 149 KiB/s rd, 1.8 MiB/s wr, 41 op/s
Jan 20 15:26:01 compute-1 nova_compute[225855]: 2026-01-20 15:26:01.223 225859 DEBUG nova.network.neutron [req-3f03e048-0bb2-4dae-969e-a466a179cce7 req-8308b5cb-762c-4cf1-a507-a7c0659e9adc 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: eb6ef384-2891-42d0-9059-42b89009b14c] Updated VIF entry in instance network info cache for port 423d10be-bf78-43ff-8ae2-812d375ccef8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 15:26:01 compute-1 nova_compute[225855]: 2026-01-20 15:26:01.224 225859 DEBUG nova.network.neutron [req-3f03e048-0bb2-4dae-969e-a466a179cce7 req-8308b5cb-762c-4cf1-a507-a7c0659e9adc 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: eb6ef384-2891-42d0-9059-42b89009b14c] Updating instance_info_cache with network_info: [{"id": "423d10be-bf78-43ff-8ae2-812d375ccef8", "address": "fa:16:3e:56:58:d6", "network": {"id": "3b3aa186-38a4-4cc6-9399-f535503e9791", "bridge": "br-int", "label": "tempest-network-smoke--458751349", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3168f57421fb49bfb94b85daedd1fe7d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap423d10be-bf", "ovs_interfaceid": "423d10be-bf78-43ff-8ae2-812d375ccef8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:26:01 compute-1 nova_compute[225855]: 2026-01-20 15:26:01.271 225859 DEBUG oslo_concurrency.lockutils [req-3f03e048-0bb2-4dae-969e-a466a179cce7 req-8308b5cb-762c-4cf1-a507-a7c0659e9adc 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-eb6ef384-2891-42d0-9059-42b89009b14c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:26:02 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:26:02 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:26:02 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:26:02.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:26:02 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:26:02 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:26:02 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:26:02.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:26:02 compute-1 ceph-mon[81775]: pgmap v3115: 321 pgs: 321 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 1.8 MiB/s wr, 89 op/s
Jan 20 15:26:03 compute-1 podman[314358]: 2026-01-20 15:26:03.070763264 +0000 UTC m=+0.111240497 container health_status 72be95c12e041eb2cd1a16ad70dfa1391d9f03a5f3180a614ead9459dcfcd46f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, managed_by=edpm_ansible)
Jan 20 15:26:03 compute-1 nova_compute[225855]: 2026-01-20 15:26:03.316 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:26:03 compute-1 nova_compute[225855]: 2026-01-20 15:26:03.476 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:26:03 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:26:03 compute-1 sudo[314381]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:26:03 compute-1 sudo[314381]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:26:03 compute-1 sudo[314381]: pam_unix(sudo:session): session closed for user root
Jan 20 15:26:03 compute-1 sudo[314406]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 15:26:03 compute-1 sudo[314406]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:26:03 compute-1 sudo[314406]: pam_unix(sudo:session): session closed for user root
Jan 20 15:26:04 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:26:04 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:26:04 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:26:04.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:26:04 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:26:04 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:26:04 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:26:04.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:26:04 compute-1 ceph-mon[81775]: pgmap v3116: 321 pgs: 321 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 340 KiB/s wr, 85 op/s
Jan 20 15:26:04 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:26:04 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:26:05 compute-1 nova_compute[225855]: 2026-01-20 15:26:05.340 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:26:05 compute-1 nova_compute[225855]: 2026-01-20 15:26:05.341 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 15:26:05 compute-1 sudo[314434]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:26:05 compute-1 sudo[314434]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:26:05 compute-1 sudo[314434]: pam_unix(sudo:session): session closed for user root
Jan 20 15:26:05 compute-1 sudo[314459]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:26:05 compute-1 sudo[314459]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:26:05 compute-1 sudo[314459]: pam_unix(sudo:session): session closed for user root
Jan 20 15:26:06 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:26:06 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:26:06 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:26:06.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:26:06 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:26:06 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:26:06 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:26:06.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:26:06 compute-1 ceph-mon[81775]: pgmap v3117: 321 pgs: 321 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 20 15:26:07 compute-1 ovn_controller[130490]: 2026-01-20T15:26:07Z|00110|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:56:58:d6 10.100.0.6
Jan 20 15:26:07 compute-1 ovn_controller[130490]: 2026-01-20T15:26:07Z|00111|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:56:58:d6 10.100.0.6
Jan 20 15:26:08 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:26:08 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:26:08 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:26:08.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:26:08 compute-1 nova_compute[225855]: 2026-01-20 15:26:08.318 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:26:08 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:26:08 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:26:08 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:26:08.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:26:08 compute-1 nova_compute[225855]: 2026-01-20 15:26:08.478 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:26:08 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:26:08 compute-1 ceph-mon[81775]: pgmap v3118: 321 pgs: 321 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 511 B/s wr, 73 op/s
Jan 20 15:26:08 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/2608940846' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:26:09 compute-1 nova_compute[225855]: 2026-01-20 15:26:09.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:26:09 compute-1 nova_compute[225855]: 2026-01-20 15:26:09.339 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 15:26:09 compute-1 nova_compute[225855]: 2026-01-20 15:26:09.339 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 20 15:26:09 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/1694225511' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:26:10 compute-1 nova_compute[225855]: 2026-01-20 15:26:10.091 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "refresh_cache-eb6ef384-2891-42d0-9059-42b89009b14c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:26:10 compute-1 nova_compute[225855]: 2026-01-20 15:26:10.091 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquired lock "refresh_cache-eb6ef384-2891-42d0-9059-42b89009b14c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:26:10 compute-1 nova_compute[225855]: 2026-01-20 15:26:10.092 225859 DEBUG nova.network.neutron [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] [instance: eb6ef384-2891-42d0-9059-42b89009b14c] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 20 15:26:10 compute-1 nova_compute[225855]: 2026-01-20 15:26:10.092 225859 DEBUG nova.objects.instance [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lazy-loading 'info_cache' on Instance uuid eb6ef384-2891-42d0-9059-42b89009b14c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:26:10 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:26:10 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:26:10 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:26:10.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:26:10 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:26:10 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:26:10 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:26:10.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:26:10 compute-1 ceph-mon[81775]: pgmap v3119: 321 pgs: 321 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 58 op/s
Jan 20 15:26:11 compute-1 nova_compute[225855]: 2026-01-20 15:26:11.892 225859 DEBUG nova.network.neutron [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] [instance: eb6ef384-2891-42d0-9059-42b89009b14c] Updating instance_info_cache with network_info: [{"id": "423d10be-bf78-43ff-8ae2-812d375ccef8", "address": "fa:16:3e:56:58:d6", "network": {"id": "3b3aa186-38a4-4cc6-9399-f535503e9791", "bridge": "br-int", "label": "tempest-network-smoke--458751349", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3168f57421fb49bfb94b85daedd1fe7d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap423d10be-bf", "ovs_interfaceid": "423d10be-bf78-43ff-8ae2-812d375ccef8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:26:11 compute-1 nova_compute[225855]: 2026-01-20 15:26:11.909 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Releasing lock "refresh_cache-eb6ef384-2891-42d0-9059-42b89009b14c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:26:11 compute-1 nova_compute[225855]: 2026-01-20 15:26:11.910 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] [instance: eb6ef384-2891-42d0-9059-42b89009b14c] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 20 15:26:11 compute-1 nova_compute[225855]: 2026-01-20 15:26:11.910 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:26:11 compute-1 nova_compute[225855]: 2026-01-20 15:26:11.910 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:26:11 compute-1 nova_compute[225855]: 2026-01-20 15:26:11.911 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:26:11 compute-1 nova_compute[225855]: 2026-01-20 15:26:11.911 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:26:11 compute-1 nova_compute[225855]: 2026-01-20 15:26:11.930 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:26:11 compute-1 nova_compute[225855]: 2026-01-20 15:26:11.930 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:26:11 compute-1 nova_compute[225855]: 2026-01-20 15:26:11.931 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:26:11 compute-1 nova_compute[225855]: 2026-01-20 15:26:11.931 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 15:26:11 compute-1 nova_compute[225855]: 2026-01-20 15:26:11.931 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:26:12 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:26:12 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:26:12 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:26:12.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:26:12 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:26:12 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1555682087' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:26:12 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:26:12 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:26:12 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:26:12.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:26:12 compute-1 nova_compute[225855]: 2026-01-20 15:26:12.384 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:26:12 compute-1 nova_compute[225855]: 2026-01-20 15:26:12.461 225859 DEBUG nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] skipping disk for instance-000000ca as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 15:26:12 compute-1 nova_compute[225855]: 2026-01-20 15:26:12.462 225859 DEBUG nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] skipping disk for instance-000000ca as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 15:26:12 compute-1 podman[314510]: 2026-01-20 15:26:12.537836701 +0000 UTC m=+0.093130130 container health_status 533cd8e0c3b8b7f910f15cef49e7f879ab46c12ce646a35628c15f13725eb500 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent)
Jan 20 15:26:12 compute-1 nova_compute[225855]: 2026-01-20 15:26:12.600 225859 WARNING nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 15:26:12 compute-1 nova_compute[225855]: 2026-01-20 15:26:12.601 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4062MB free_disk=20.95288848876953GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 15:26:12 compute-1 nova_compute[225855]: 2026-01-20 15:26:12.602 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:26:12 compute-1 nova_compute[225855]: 2026-01-20 15:26:12.602 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:26:12 compute-1 nova_compute[225855]: 2026-01-20 15:26:12.671 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Instance eb6ef384-2891-42d0-9059-42b89009b14c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 20 15:26:12 compute-1 nova_compute[225855]: 2026-01-20 15:26:12.671 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 15:26:12 compute-1 nova_compute[225855]: 2026-01-20 15:26:12.672 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 15:26:12 compute-1 ceph-mon[81775]: pgmap v3120: 321 pgs: 321 active+clean; 184 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.2 MiB/s wr, 95 op/s
Jan 20 15:26:12 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/1555682087' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:26:12 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/2255426292' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:26:12 compute-1 nova_compute[225855]: 2026-01-20 15:26:12.731 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:26:13 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:26:13 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1246091011' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:26:13 compute-1 nova_compute[225855]: 2026-01-20 15:26:13.156 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.425s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:26:13 compute-1 nova_compute[225855]: 2026-01-20 15:26:13.162 225859 DEBUG nova.compute.provider_tree [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:26:13 compute-1 nova_compute[225855]: 2026-01-20 15:26:13.184 225859 DEBUG nova.scheduler.client.report [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:26:13 compute-1 nova_compute[225855]: 2026-01-20 15:26:13.209 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 15:26:13 compute-1 nova_compute[225855]: 2026-01-20 15:26:13.210 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.608s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:26:13 compute-1 nova_compute[225855]: 2026-01-20 15:26:13.320 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:26:13 compute-1 nova_compute[225855]: 2026-01-20 15:26:13.480 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:26:13 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:26:13 compute-1 nova_compute[225855]: 2026-01-20 15:26:13.640 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:26:13 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/1246091011' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:26:13 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/1911327537' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:26:13 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/2660355829' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:26:13 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/2660355829' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:26:14 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:26:14 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:26:14 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:26:14.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:26:14 compute-1 nova_compute[225855]: 2026-01-20 15:26:14.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:26:14 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:26:14 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:26:14 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:26:14.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:26:14 compute-1 ceph-mon[81775]: pgmap v3121: 321 pgs: 321 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 668 KiB/s rd, 2.1 MiB/s wr, 73 op/s
Jan 20 15:26:15 compute-1 nova_compute[225855]: 2026-01-20 15:26:15.176 225859 INFO nova.compute.manager [None req-67609110-5db5-4752-b445-88e91749cfc7 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: eb6ef384-2891-42d0-9059-42b89009b14c] Get console output
Jan 20 15:26:15 compute-1 nova_compute[225855]: 2026-01-20 15:26:15.182 263775 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Jan 20 15:26:16 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:26:16 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:26:16 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:26:16.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:26:16 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:26:16 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:26:16 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:26:16.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:26:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:26:16.446 140354 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:26:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:26:16.447 140354 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:26:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:26:16.447 140354 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:26:16 compute-1 ceph-mon[81775]: pgmap v3122: 321 pgs: 321 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 322 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Jan 20 15:26:18 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:26:18 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:26:18 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:26:18.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:26:18 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:26:18.275 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=72, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '12:bb:42', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '06:92:24:f7:15:56'}, ipsec=False) old=SB_Global(nb_cfg=71) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:26:18 compute-1 nova_compute[225855]: 2026-01-20 15:26:18.276 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:26:18 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:26:18.277 140354 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 20 15:26:18 compute-1 nova_compute[225855]: 2026-01-20 15:26:18.322 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:26:18 compute-1 nova_compute[225855]: 2026-01-20 15:26:18.335 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:26:18 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:26:18 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:26:18 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:26:18.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:26:18 compute-1 nova_compute[225855]: 2026-01-20 15:26:18.483 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:26:18 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:26:18 compute-1 ceph-mon[81775]: pgmap v3123: 321 pgs: 321 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 322 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Jan 20 15:26:20 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:26:20 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:26:20 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:26:20.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:26:20 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:26:20 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:26:20 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:26:20.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:26:20 compute-1 ceph-mon[81775]: pgmap v3124: 321 pgs: 321 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 322 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Jan 20 15:26:22 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:26:22 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:26:22 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:26:22.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:26:22 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:26:22 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:26:22 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:26:22.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:26:22 compute-1 ceph-mon[81775]: pgmap v3125: 321 pgs: 321 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Jan 20 15:26:23 compute-1 nova_compute[225855]: 2026-01-20 15:26:23.324 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:26:23 compute-1 nova_compute[225855]: 2026-01-20 15:26:23.485 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:26:23 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:26:24 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:26:24 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:26:24 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:26:24.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:26:24 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:26:24.278 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5ffd4ac3-9266-4927-98ad-20a17782c725, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '72'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:26:24 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:26:24 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:26:24 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:26:24.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:26:24 compute-1 ceph-mon[81775]: pgmap v3126: 321 pgs: 321 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 91 KiB/s rd, 916 KiB/s wr, 26 op/s
Jan 20 15:26:25 compute-1 sudo[314558]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:26:25 compute-1 sudo[314558]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:26:25 compute-1 sudo[314558]: pam_unix(sudo:session): session closed for user root
Jan 20 15:26:26 compute-1 sudo[314583]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:26:26 compute-1 sudo[314583]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:26:26 compute-1 sudo[314583]: pam_unix(sudo:session): session closed for user root
Jan 20 15:26:26 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:26:26 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:26:26 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:26:26.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:26:26 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:26:26 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:26:26 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:26:26.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:26:26 compute-1 ceph-mon[81775]: pgmap v3127: 321 pgs: 321 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 5.3 KiB/s rd, 15 KiB/s wr, 0 op/s
Jan 20 15:26:28 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:26:28 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:26:28 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:26:28.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:26:28 compute-1 nova_compute[225855]: 2026-01-20 15:26:28.326 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:26:28 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:26:28 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:26:28 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:26:28.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:26:28 compute-1 nova_compute[225855]: 2026-01-20 15:26:28.487 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:26:28 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:26:28 compute-1 ceph-mon[81775]: pgmap v3128: 321 pgs: 321 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 5.3 KiB/s rd, 15 KiB/s wr, 0 op/s
Jan 20 15:26:30 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:26:30 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:26:30 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:26:30.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:26:30 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:26:30 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:26:30 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:26:30.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:26:30 compute-1 ceph-mon[81775]: pgmap v3129: 321 pgs: 321 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 5.3 KiB/s rd, 3.3 KiB/s wr, 0 op/s
Jan 20 15:26:32 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:26:32 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:26:32 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:26:32.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:26:32 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:26:32 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:26:32 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:26:32.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:26:32 compute-1 ceph-mon[81775]: pgmap v3130: 321 pgs: 321 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 5.3 KiB/s rd, 3.3 KiB/s wr, 0 op/s
Jan 20 15:26:33 compute-1 nova_compute[225855]: 2026-01-20 15:26:33.328 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:26:33 compute-1 nova_compute[225855]: 2026-01-20 15:26:33.490 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:26:33 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:26:34 compute-1 podman[314612]: 2026-01-20 15:26:34.022676279 +0000 UTC m=+0.069516426 container health_status 72be95c12e041eb2cd1a16ad70dfa1391d9f03a5f3180a614ead9459dcfcd46f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller)
Jan 20 15:26:34 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:26:34 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:26:34 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:26:34.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:26:34 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:26:34 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:26:34 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:26:34.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:26:34 compute-1 ceph-mon[81775]: pgmap v3131: 321 pgs: 321 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.3 KiB/s wr, 0 op/s
Jan 20 15:26:36 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:26:36 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:26:36 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:26:36.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:26:36 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:26:36 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:26:36 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:26:36.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:26:36 compute-1 ceph-mon[81775]: pgmap v3132: 321 pgs: 321 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 341 B/s wr, 0 op/s
Jan 20 15:26:37 compute-1 ceph-mon[81775]: pgmap v3133: 321 pgs: 321 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 341 B/s wr, 0 op/s
Jan 20 15:26:38 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:26:38 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:26:38 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:26:38.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:26:38 compute-1 nova_compute[225855]: 2026-01-20 15:26:38.331 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:26:38 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:26:38 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:26:38 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:26:38.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:26:38 compute-1 nova_compute[225855]: 2026-01-20 15:26:38.491 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:26:38 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:26:39 compute-1 ceph-mon[81775]: pgmap v3134: 321 pgs: 321 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:26:40 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:26:40 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:26:40 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:26:40.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:26:40 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:26:40 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:26:40 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:26:40.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:26:42 compute-1 ceph-mon[81775]: pgmap v3135: 321 pgs: 321 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.7 KiB/s wr, 0 op/s
Jan 20 15:26:42 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:26:42 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:26:42 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:26:42.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:26:42 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:26:42 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:26:42 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:26:42.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:26:42 compute-1 podman[314645]: 2026-01-20 15:26:42.997716586 +0000 UTC m=+0.049629628 container health_status 533cd8e0c3b8b7f910f15cef49e7f879ab46c12ce646a35628c15f13725eb500 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 20 15:26:43 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/3507037067' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:26:43 compute-1 nova_compute[225855]: 2026-01-20 15:26:43.333 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:26:43 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:26:43 compute-1 nova_compute[225855]: 2026-01-20 15:26:43.494 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:26:44 compute-1 ceph-mon[81775]: pgmap v3136: 321 pgs: 321 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.3 KiB/s wr, 0 op/s
Jan 20 15:26:44 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:26:44 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:26:44 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:26:44.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:26:44 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:26:44 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:26:44 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:26:44.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:26:46 compute-1 sudo[314667]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:26:46 compute-1 sudo[314667]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:26:46 compute-1 sudo[314667]: pam_unix(sudo:session): session closed for user root
Jan 20 15:26:46 compute-1 sudo[314692]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:26:46 compute-1 sudo[314692]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:26:46 compute-1 sudo[314692]: pam_unix(sudo:session): session closed for user root
Jan 20 15:26:46 compute-1 ceph-mon[81775]: pgmap v3137: 321 pgs: 321 active+clean; 217 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 426 B/s rd, 817 KiB/s wr, 2 op/s
Jan 20 15:26:46 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:26:46 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:26:46 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:26:46.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:26:46 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:26:46 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:26:46 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:26:46.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:26:47 compute-1 ovn_controller[130490]: 2026-01-20T15:26:47Z|00918|memory_trim|INFO|Detected inactivity (last active 30014 ms ago): trimming memory
Jan 20 15:26:48 compute-1 ceph-mon[81775]: pgmap v3138: 321 pgs: 321 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 20 15:26:48 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:26:48 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:26:48 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:26:48.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:26:48 compute-1 nova_compute[225855]: 2026-01-20 15:26:48.334 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:26:48 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:26:48 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:26:48 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:26:48.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:26:48 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:26:48 compute-1 nova_compute[225855]: 2026-01-20 15:26:48.496 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:26:50 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:26:50 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:26:50 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:26:50.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:26:50 compute-1 ceph-mon[81775]: pgmap v3139: 321 pgs: 321 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 20 15:26:50 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:26:50 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:26:50 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:26:50.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:26:52 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:26:52 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:26:52 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:26:52.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:26:52 compute-1 ceph-mon[81775]: pgmap v3140: 321 pgs: 321 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 20 15:26:52 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:26:52 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:26:52 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:26:52.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:26:53 compute-1 nova_compute[225855]: 2026-01-20 15:26:53.334 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:26:53 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:26:53 compute-1 nova_compute[225855]: 2026-01-20 15:26:53.497 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:26:54 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:26:54 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:26:54 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:26:54.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:26:54 compute-1 ceph-mon[81775]: pgmap v3141: 321 pgs: 321 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 20 15:26:54 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/1693400051' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:26:54 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:26:54 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:26:54 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:26:54.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:26:55 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/252485601' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:26:56 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:26:56 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:26:56 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:26:56.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:26:56 compute-1 ceph-mon[81775]: pgmap v3142: 321 pgs: 321 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 20 15:26:56 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:26:56 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:26:56 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:26:56.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:26:58 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:26:58 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:26:58 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:26:58.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:26:58 compute-1 ceph-mon[81775]: pgmap v3143: 321 pgs: 321 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1000 KiB/s wr, 24 op/s
Jan 20 15:26:58 compute-1 nova_compute[225855]: 2026-01-20 15:26:58.337 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:26:58 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:26:58 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:26:58 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:26:58.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:26:58 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:26:58 compute-1 nova_compute[225855]: 2026-01-20 15:26:58.499 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:27:00 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:27:00 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:27:00 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:27:00.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:27:00 compute-1 ceph-mon[81775]: pgmap v3144: 321 pgs: 321 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:27:00 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:27:00 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:27:00 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:27:00.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:27:02 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:27:02 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:27:02 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:27:02.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:27:02 compute-1 ceph-mon[81775]: pgmap v3145: 321 pgs: 321 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 661 KiB/s rd, 12 KiB/s wr, 31 op/s
Jan 20 15:27:02 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:27:02 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:27:02 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:27:02.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:27:03 compute-1 nova_compute[225855]: 2026-01-20 15:27:03.338 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:27:03 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:27:03 compute-1 nova_compute[225855]: 2026-01-20 15:27:03.501 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:27:03 compute-1 sudo[314726]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:27:03 compute-1 sudo[314726]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:27:03 compute-1 sudo[314726]: pam_unix(sudo:session): session closed for user root
Jan 20 15:27:03 compute-1 sudo[314751]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:27:03 compute-1 sudo[314751]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:27:03 compute-1 sudo[314751]: pam_unix(sudo:session): session closed for user root
Jan 20 15:27:03 compute-1 sudo[314776]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:27:03 compute-1 sudo[314776]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:27:03 compute-1 sudo[314776]: pam_unix(sudo:session): session closed for user root
Jan 20 15:27:04 compute-1 sudo[314801]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 20 15:27:04 compute-1 sudo[314801]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:27:04 compute-1 podman[314825]: 2026-01-20 15:27:04.127143056 +0000 UTC m=+0.072959984 container health_status 72be95c12e041eb2cd1a16ad70dfa1391d9f03a5f3180a614ead9459dcfcd46f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 20 15:27:04 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:27:04 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:27:04 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:27:04.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:27:04 compute-1 ceph-mon[81775]: pgmap v3146: 321 pgs: 321 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 844 KiB/s rd, 12 KiB/s wr, 38 op/s
Jan 20 15:27:04 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:27:04 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:27:04 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:27:04.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:27:04 compute-1 sudo[314801]: pam_unix(sudo:session): session closed for user root
Jan 20 15:27:05 compute-1 nova_compute[225855]: 2026-01-20 15:27:05.340 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:27:05 compute-1 nova_compute[225855]: 2026-01-20 15:27:05.341 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 15:27:05 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:27:05 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 15:27:05 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:27:05 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 15:27:05 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 15:27:05 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:27:06 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:27:06 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:27:06 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:27:06.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:27:06 compute-1 sudo[314885]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:27:06 compute-1 sudo[314885]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:27:06 compute-1 sudo[314885]: pam_unix(sudo:session): session closed for user root
Jan 20 15:27:06 compute-1 sudo[314910]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:27:06 compute-1 sudo[314910]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:27:06 compute-1 sudo[314910]: pam_unix(sudo:session): session closed for user root
Jan 20 15:27:06 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:27:06 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:27:06 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:27:06.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:27:06 compute-1 ceph-mon[81775]: pgmap v3147: 321 pgs: 321 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 12 KiB/s wr, 65 op/s
Jan 20 15:27:07 compute-1 nova_compute[225855]: 2026-01-20 15:27:07.829 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:27:08 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:27:08 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:27:08 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:27:08.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:27:08 compute-1 nova_compute[225855]: 2026-01-20 15:27:08.340 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:27:08 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:27:08 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:27:08 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:27:08.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:27:08 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:27:08 compute-1 nova_compute[225855]: 2026-01-20 15:27:08.503 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:27:08 compute-1 ceph-mon[81775]: pgmap v3148: 321 pgs: 321 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 20 15:27:09 compute-1 nova_compute[225855]: 2026-01-20 15:27:09.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:27:09 compute-1 nova_compute[225855]: 2026-01-20 15:27:09.340 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 15:27:09 compute-1 nova_compute[225855]: 2026-01-20 15:27:09.340 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 20 15:27:09 compute-1 nova_compute[225855]: 2026-01-20 15:27:09.638 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "refresh_cache-eb6ef384-2891-42d0-9059-42b89009b14c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:27:09 compute-1 nova_compute[225855]: 2026-01-20 15:27:09.639 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquired lock "refresh_cache-eb6ef384-2891-42d0-9059-42b89009b14c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:27:09 compute-1 nova_compute[225855]: 2026-01-20 15:27:09.639 225859 DEBUG nova.network.neutron [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] [instance: eb6ef384-2891-42d0-9059-42b89009b14c] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 20 15:27:09 compute-1 nova_compute[225855]: 2026-01-20 15:27:09.639 225859 DEBUG nova.objects.instance [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lazy-loading 'info_cache' on Instance uuid eb6ef384-2891-42d0-9059-42b89009b14c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:27:10 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:27:10 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:27:10 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:27:10.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:27:10 compute-1 sudo[314937]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:27:10 compute-1 sudo[314937]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:27:10 compute-1 sudo[314937]: pam_unix(sudo:session): session closed for user root
Jan 20 15:27:10 compute-1 sudo[314962]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 15:27:10 compute-1 sudo[314962]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:27:10 compute-1 sudo[314962]: pam_unix(sudo:session): session closed for user root
Jan 20 15:27:10 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:27:10 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:27:10 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:27:10.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:27:10 compute-1 ceph-mon[81775]: pgmap v3149: 321 pgs: 321 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 20 15:27:10 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:27:10 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:27:11 compute-1 ceph-mon[81775]: pgmap v3150: 321 pgs: 321 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 73 op/s
Jan 20 15:27:11 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/1964989034' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:27:12 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:27:12 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:27:12 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:27:12.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:27:12 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:27:12 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:27:12 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:27:12.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:27:12 compute-1 ceph-mon[81775]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #154. Immutable memtables: 0.
Jan 20 15:27:12 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:27:12.906455) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 20 15:27:12 compute-1 ceph-mon[81775]: rocksdb: [db/flush_job.cc:856] [default] [JOB 97] Flushing memtable with next log file: 154
Jan 20 15:27:12 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768922832906550, "job": 97, "event": "flush_started", "num_memtables": 1, "num_entries": 1383, "num_deletes": 251, "total_data_size": 3165251, "memory_usage": 3223488, "flush_reason": "Manual Compaction"}
Jan 20 15:27:12 compute-1 ceph-mon[81775]: rocksdb: [db/flush_job.cc:885] [default] [JOB 97] Level-0 flush table #155: started
Jan 20 15:27:12 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/3229525351' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:27:12 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/1263708301' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:27:12 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768922832921506, "cf_name": "default", "job": 97, "event": "table_file_creation", "file_number": 155, "file_size": 2078124, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 75326, "largest_seqno": 76704, "table_properties": {"data_size": 2072208, "index_size": 3246, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1605, "raw_key_size": 12612, "raw_average_key_size": 19, "raw_value_size": 2060353, "raw_average_value_size": 3254, "num_data_blocks": 145, "num_entries": 633, "num_filter_entries": 633, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768922713, "oldest_key_time": 1768922713, "file_creation_time": 1768922832, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1539d774-8a6f-4e48-b253-137c44586344", "db_session_id": "LFF7G2OZDOU7TKQ8MKAH", "orig_file_number": 155, "seqno_to_time_mapping": "N/A"}}
Jan 20 15:27:12 compute-1 ceph-mon[81775]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 97] Flush lasted 15094 microseconds, and 5423 cpu microseconds.
Jan 20 15:27:12 compute-1 ceph-mon[81775]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 15:27:12 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:27:12.921545) [db/flush_job.cc:967] [default] [JOB 97] Level-0 flush table #155: 2078124 bytes OK
Jan 20 15:27:12 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:27:12.921564) [db/memtable_list.cc:519] [default] Level-0 commit table #155 started
Jan 20 15:27:12 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:27:12.923371) [db/memtable_list.cc:722] [default] Level-0 commit table #155: memtable #1 done
Jan 20 15:27:12 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:27:12.923383) EVENT_LOG_v1 {"time_micros": 1768922832923379, "job": 97, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 20 15:27:12 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:27:12.923402) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 20 15:27:12 compute-1 ceph-mon[81775]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 97] Try to delete WAL files size 3158775, prev total WAL file size 3158775, number of live WAL files 2.
Jan 20 15:27:12 compute-1 ceph-mon[81775]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000151.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 15:27:12 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:27:12.924053) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730036353236' seq:72057594037927935, type:22 .. '7061786F730036373738' seq:0, type:0; will stop at (end)
Jan 20 15:27:12 compute-1 ceph-mon[81775]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 98] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 20 15:27:12 compute-1 ceph-mon[81775]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 97 Base level 0, inputs: [155(2029KB)], [153(12MB)]
Jan 20 15:27:12 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768922832924076, "job": 98, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [155], "files_L6": [153], "score": -1, "input_data_size": 15005628, "oldest_snapshot_seqno": -1}
Jan 20 15:27:13 compute-1 ceph-mon[81775]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 98] Generated table #156: 9851 keys, 13117279 bytes, temperature: kUnknown
Jan 20 15:27:13 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768922833067517, "cf_name": "default", "job": 98, "event": "table_file_creation", "file_number": 156, "file_size": 13117279, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13052724, "index_size": 38842, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 24645, "raw_key_size": 259883, "raw_average_key_size": 26, "raw_value_size": 12879052, "raw_average_value_size": 1307, "num_data_blocks": 1480, "num_entries": 9851, "num_filter_entries": 9851, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768917474, "oldest_key_time": 0, "file_creation_time": 1768922832, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1539d774-8a6f-4e48-b253-137c44586344", "db_session_id": "LFF7G2OZDOU7TKQ8MKAH", "orig_file_number": 156, "seqno_to_time_mapping": "N/A"}}
Jan 20 15:27:13 compute-1 ceph-mon[81775]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 15:27:13 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:27:13.067942) [db/compaction/compaction_job.cc:1663] [default] [JOB 98] Compacted 1@0 + 1@6 files to L6 => 13117279 bytes
Jan 20 15:27:13 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:27:13.070287) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 104.5 rd, 91.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.0, 12.3 +0.0 blob) out(12.5 +0.0 blob), read-write-amplify(13.5) write-amplify(6.3) OK, records in: 10366, records dropped: 515 output_compression: NoCompression
Jan 20 15:27:13 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:27:13.070307) EVENT_LOG_v1 {"time_micros": 1768922833070299, "job": 98, "event": "compaction_finished", "compaction_time_micros": 143581, "compaction_time_cpu_micros": 31236, "output_level": 6, "num_output_files": 1, "total_output_size": 13117279, "num_input_records": 10366, "num_output_records": 9851, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 20 15:27:13 compute-1 ceph-mon[81775]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000155.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 15:27:13 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768922833071030, "job": 98, "event": "table_file_deletion", "file_number": 155}
Jan 20 15:27:13 compute-1 ceph-mon[81775]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000153.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 15:27:13 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768922833073098, "job": 98, "event": "table_file_deletion", "file_number": 153}
Jan 20 15:27:13 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:27:12.924014) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:27:13 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:27:13.073190) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:27:13 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:27:13.073197) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:27:13 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:27:13.073199) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:27:13 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:27:13.073201) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:27:13 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:27:13.073204) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:27:13 compute-1 nova_compute[225855]: 2026-01-20 15:27:13.114 225859 DEBUG nova.network.neutron [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] [instance: eb6ef384-2891-42d0-9059-42b89009b14c] Updating instance_info_cache with network_info: [{"id": "423d10be-bf78-43ff-8ae2-812d375ccef8", "address": "fa:16:3e:56:58:d6", "network": {"id": "3b3aa186-38a4-4cc6-9399-f535503e9791", "bridge": "br-int", "label": "tempest-network-smoke--458751349", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3168f57421fb49bfb94b85daedd1fe7d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap423d10be-bf", "ovs_interfaceid": "423d10be-bf78-43ff-8ae2-812d375ccef8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:27:13 compute-1 nova_compute[225855]: 2026-01-20 15:27:13.143 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Releasing lock "refresh_cache-eb6ef384-2891-42d0-9059-42b89009b14c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:27:13 compute-1 nova_compute[225855]: 2026-01-20 15:27:13.143 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] [instance: eb6ef384-2891-42d0-9059-42b89009b14c] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 20 15:27:13 compute-1 nova_compute[225855]: 2026-01-20 15:27:13.144 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:27:13 compute-1 nova_compute[225855]: 2026-01-20 15:27:13.144 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:27:13 compute-1 nova_compute[225855]: 2026-01-20 15:27:13.144 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:27:13 compute-1 nova_compute[225855]: 2026-01-20 15:27:13.172 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:27:13 compute-1 nova_compute[225855]: 2026-01-20 15:27:13.172 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:27:13 compute-1 nova_compute[225855]: 2026-01-20 15:27:13.173 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:27:13 compute-1 nova_compute[225855]: 2026-01-20 15:27:13.173 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 15:27:13 compute-1 nova_compute[225855]: 2026-01-20 15:27:13.173 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:27:13 compute-1 nova_compute[225855]: 2026-01-20 15:27:13.342 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:27:13 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:27:13 compute-1 nova_compute[225855]: 2026-01-20 15:27:13.504 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:27:13 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:27:13 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2380053491' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:27:13 compute-1 nova_compute[225855]: 2026-01-20 15:27:13.647 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:27:13 compute-1 nova_compute[225855]: 2026-01-20 15:27:13.711 225859 DEBUG nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] skipping disk for instance-000000ca as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 15:27:13 compute-1 nova_compute[225855]: 2026-01-20 15:27:13.711 225859 DEBUG nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] skipping disk for instance-000000ca as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 15:27:13 compute-1 nova_compute[225855]: 2026-01-20 15:27:13.858 225859 WARNING nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 15:27:13 compute-1 nova_compute[225855]: 2026-01-20 15:27:13.859 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4075MB free_disk=20.91363525390625GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 15:27:13 compute-1 nova_compute[225855]: 2026-01-20 15:27:13.860 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:27:13 compute-1 nova_compute[225855]: 2026-01-20 15:27:13.860 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:27:13 compute-1 ceph-mon[81775]: pgmap v3151: 321 pgs: 321 active+clean; 254 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 725 KiB/s wr, 53 op/s
Jan 20 15:27:13 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/2737404840' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:27:13 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/2380053491' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:27:13 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/523948789' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:27:13 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/523948789' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:27:14 compute-1 podman[315011]: 2026-01-20 15:27:14.00675643 +0000 UTC m=+0.048463764 container health_status 533cd8e0c3b8b7f910f15cef49e7f879ab46c12ce646a35628c15f13725eb500 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Jan 20 15:27:14 compute-1 nova_compute[225855]: 2026-01-20 15:27:14.037 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Instance eb6ef384-2891-42d0-9059-42b89009b14c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 20 15:27:14 compute-1 nova_compute[225855]: 2026-01-20 15:27:14.038 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 15:27:14 compute-1 nova_compute[225855]: 2026-01-20 15:27:14.038 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 15:27:14 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:27:14 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:27:14 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:27:14.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:27:14 compute-1 nova_compute[225855]: 2026-01-20 15:27:14.253 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:27:14 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:27:14 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:27:14 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:27:14.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:27:14 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:27:14 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1376628842' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:27:14 compute-1 nova_compute[225855]: 2026-01-20 15:27:14.702 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:27:14 compute-1 nova_compute[225855]: 2026-01-20 15:27:14.708 225859 DEBUG nova.compute.provider_tree [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:27:14 compute-1 nova_compute[225855]: 2026-01-20 15:27:14.740 225859 DEBUG nova.scheduler.client.report [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:27:14 compute-1 nova_compute[225855]: 2026-01-20 15:27:14.742 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 15:27:14 compute-1 nova_compute[225855]: 2026-01-20 15:27:14.743 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.882s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:27:14 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/1376628842' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:27:14 compute-1 nova_compute[225855]: 2026-01-20 15:27:14.941 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:27:14 compute-1 nova_compute[225855]: 2026-01-20 15:27:14.941 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:27:15 compute-1 nova_compute[225855]: 2026-01-20 15:27:15.340 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:27:16 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:27:16 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:27:16 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:27:16.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:27:16 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:27:16 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:27:16 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:27:16.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:27:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:27:16.447 140354 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:27:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:27:16.448 140354 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:27:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:27:16.449 140354 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:27:16 compute-1 ceph-mon[81775]: pgmap v3152: 321 pgs: 321 active+clean; 264 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 1.3 MiB/s wr, 58 op/s
Jan 20 15:27:18 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:27:18 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:27:18 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:27:18.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:27:18 compute-1 nova_compute[225855]: 2026-01-20 15:27:18.359 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:27:18 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:27:18 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:27:18 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:27:18.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:27:18 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:27:18 compute-1 nova_compute[225855]: 2026-01-20 15:27:18.507 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:27:18 compute-1 ceph-mon[81775]: pgmap v3153: 321 pgs: 321 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 598 KiB/s rd, 2.1 MiB/s wr, 73 op/s
Jan 20 15:27:19 compute-1 nova_compute[225855]: 2026-01-20 15:27:19.335 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:27:20 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:27:20 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:27:20 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:27:20.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:27:20 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:27:20 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:27:20 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:27:20.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:27:20 compute-1 ceph-mon[81775]: pgmap v3154: 321 pgs: 321 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Jan 20 15:27:22 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:27:22 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:27:22 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:27:22.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:27:22 compute-1 ceph-osd[79119]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 20 15:27:22 compute-1 ceph-osd[79119]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 5400.1 total, 600.0 interval
                                           Cumulative writes: 64K writes, 249K keys, 64K commit groups, 1.0 writes per commit group, ingest: 0.24 GB, 0.04 MB/s
                                           Cumulative WAL: 64K writes, 24K syncs, 2.64 writes per sync, written: 0.24 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 4629 writes, 15K keys, 4629 commit groups, 1.0 writes per commit group, ingest: 15.94 MB, 0.03 MB/s
                                           Interval WAL: 4629 writes, 1847 syncs, 2.51 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 20 15:27:22 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:27:22 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:27:22 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:27:22.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:27:23 compute-1 ceph-mon[81775]: pgmap v3155: 321 pgs: 321 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 325 KiB/s rd, 2.2 MiB/s wr, 65 op/s
Jan 20 15:27:23 compute-1 nova_compute[225855]: 2026-01-20 15:27:23.360 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:27:23 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:27:23 compute-1 nova_compute[225855]: 2026-01-20 15:27:23.510 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:27:24 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:27:24 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:27:24 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:27:24.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:27:24 compute-1 ceph-mon[81775]: pgmap v3156: 321 pgs: 321 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 325 KiB/s rd, 2.2 MiB/s wr, 65 op/s
Jan 20 15:27:24 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:27:24 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:27:24 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:27:24.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:27:26 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:27:26 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:27:26 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:27:26.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:27:26 compute-1 ceph-mon[81775]: pgmap v3157: 321 pgs: 321 active+clean; 252 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 240 KiB/s rd, 1.4 MiB/s wr, 54 op/s
Jan 20 15:27:26 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/3885040948' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:27:26 compute-1 sudo[315059]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:27:26 compute-1 sudo[315059]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:27:26 compute-1 sudo[315059]: pam_unix(sudo:session): session closed for user root
Jan 20 15:27:26 compute-1 sudo[315084]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:27:26 compute-1 sudo[315084]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:27:26 compute-1 sudo[315084]: pam_unix(sudo:session): session closed for user root
Jan 20 15:27:26 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:27:26 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:27:26 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:27:26.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:27:28 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:27:28 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:27:28 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:27:28.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:27:28 compute-1 ceph-mon[81775]: pgmap v3158: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 193 KiB/s rd, 885 KiB/s wr, 69 op/s
Jan 20 15:27:28 compute-1 nova_compute[225855]: 2026-01-20 15:27:28.362 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:27:28 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:27:28 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:27:28 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:27:28.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:27:28 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:27:28 compute-1 nova_compute[225855]: 2026-01-20 15:27:28.512 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:27:30 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:27:30.090 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=73, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '12:bb:42', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '06:92:24:f7:15:56'}, ipsec=False) old=SB_Global(nb_cfg=72) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:27:30 compute-1 nova_compute[225855]: 2026-01-20 15:27:30.090 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:27:30 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:27:30.091 140354 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 20 15:27:30 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:27:30 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:27:30 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:27:30.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:27:30 compute-1 ceph-mon[81775]: pgmap v3159: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 14 KiB/s wr, 28 op/s
Jan 20 15:27:30 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:27:30 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:27:30 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:27:30.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:27:32 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:27:32 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:27:32 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:27:32.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:27:32 compute-1 ceph-mon[81775]: pgmap v3160: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 14 KiB/s wr, 28 op/s
Jan 20 15:27:32 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:27:32 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:27:32 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:27:32.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:27:33 compute-1 nova_compute[225855]: 2026-01-20 15:27:33.335 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:27:33 compute-1 ovn_controller[130490]: 2026-01-20T15:27:33Z|00919|binding|INFO|Releasing lport f92b8596-5a2a-495f-b715-08c8aa7181a3 from this chassis (sb_readonly=0)
Jan 20 15:27:33 compute-1 nova_compute[225855]: 2026-01-20 15:27:33.364 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:27:33 compute-1 nova_compute[225855]: 2026-01-20 15:27:33.376 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:27:33 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:27:33 compute-1 nova_compute[225855]: 2026-01-20 15:27:33.512 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:27:34 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:27:34 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:27:34 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:27:34.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:27:34 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:27:34 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:27:34 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:27:34.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:27:34 compute-1 ceph-mon[81775]: pgmap v3161: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 2.2 KiB/s wr, 27 op/s
Jan 20 15:27:34 compute-1 nova_compute[225855]: 2026-01-20 15:27:34.482 225859 DEBUG nova.compute.manager [req-0ee3aff6-fcab-40bb-ad29-b3eb5bffb864 req-0521a83f-5d85-4b6a-8a7b-aa7b4aa2d7c5 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: eb6ef384-2891-42d0-9059-42b89009b14c] Received event network-changed-423d10be-bf78-43ff-8ae2-812d375ccef8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:27:34 compute-1 nova_compute[225855]: 2026-01-20 15:27:34.482 225859 DEBUG nova.compute.manager [req-0ee3aff6-fcab-40bb-ad29-b3eb5bffb864 req-0521a83f-5d85-4b6a-8a7b-aa7b4aa2d7c5 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: eb6ef384-2891-42d0-9059-42b89009b14c] Refreshing instance network info cache due to event network-changed-423d10be-bf78-43ff-8ae2-812d375ccef8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 15:27:34 compute-1 nova_compute[225855]: 2026-01-20 15:27:34.483 225859 DEBUG oslo_concurrency.lockutils [req-0ee3aff6-fcab-40bb-ad29-b3eb5bffb864 req-0521a83f-5d85-4b6a-8a7b-aa7b4aa2d7c5 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-eb6ef384-2891-42d0-9059-42b89009b14c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:27:34 compute-1 nova_compute[225855]: 2026-01-20 15:27:34.483 225859 DEBUG oslo_concurrency.lockutils [req-0ee3aff6-fcab-40bb-ad29-b3eb5bffb864 req-0521a83f-5d85-4b6a-8a7b-aa7b4aa2d7c5 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-eb6ef384-2891-42d0-9059-42b89009b14c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:27:34 compute-1 nova_compute[225855]: 2026-01-20 15:27:34.483 225859 DEBUG nova.network.neutron [req-0ee3aff6-fcab-40bb-ad29-b3eb5bffb864 req-0521a83f-5d85-4b6a-8a7b-aa7b4aa2d7c5 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: eb6ef384-2891-42d0-9059-42b89009b14c] Refreshing network info cache for port 423d10be-bf78-43ff-8ae2-812d375ccef8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 15:27:34 compute-1 nova_compute[225855]: 2026-01-20 15:27:34.547 225859 DEBUG oslo_concurrency.lockutils [None req-a7e80228-0114-4046-9376-2462bdc6520d 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Acquiring lock "eb6ef384-2891-42d0-9059-42b89009b14c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:27:34 compute-1 nova_compute[225855]: 2026-01-20 15:27:34.548 225859 DEBUG oslo_concurrency.lockutils [None req-a7e80228-0114-4046-9376-2462bdc6520d 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Lock "eb6ef384-2891-42d0-9059-42b89009b14c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:27:34 compute-1 nova_compute[225855]: 2026-01-20 15:27:34.548 225859 DEBUG oslo_concurrency.lockutils [None req-a7e80228-0114-4046-9376-2462bdc6520d 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Acquiring lock "eb6ef384-2891-42d0-9059-42b89009b14c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:27:34 compute-1 nova_compute[225855]: 2026-01-20 15:27:34.548 225859 DEBUG oslo_concurrency.lockutils [None req-a7e80228-0114-4046-9376-2462bdc6520d 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Lock "eb6ef384-2891-42d0-9059-42b89009b14c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:27:34 compute-1 nova_compute[225855]: 2026-01-20 15:27:34.548 225859 DEBUG oslo_concurrency.lockutils [None req-a7e80228-0114-4046-9376-2462bdc6520d 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Lock "eb6ef384-2891-42d0-9059-42b89009b14c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:27:34 compute-1 nova_compute[225855]: 2026-01-20 15:27:34.550 225859 INFO nova.compute.manager [None req-a7e80228-0114-4046-9376-2462bdc6520d 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: eb6ef384-2891-42d0-9059-42b89009b14c] Terminating instance
Jan 20 15:27:34 compute-1 nova_compute[225855]: 2026-01-20 15:27:34.551 225859 DEBUG nova.compute.manager [None req-a7e80228-0114-4046-9376-2462bdc6520d 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: eb6ef384-2891-42d0-9059-42b89009b14c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 20 15:27:35 compute-1 podman[315113]: 2026-01-20 15:27:35.036570079 +0000 UTC m=+0.080114518 container health_status 72be95c12e041eb2cd1a16ad70dfa1391d9f03a5f3180a614ead9459dcfcd46f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 20 15:27:35 compute-1 kernel: tap423d10be-bf (unregistering): left promiscuous mode
Jan 20 15:27:35 compute-1 NetworkManager[49104]: <info>  [1768922855.3631] device (tap423d10be-bf): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 20 15:27:35 compute-1 nova_compute[225855]: 2026-01-20 15:27:35.371 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:27:35 compute-1 ovn_controller[130490]: 2026-01-20T15:27:35Z|00920|binding|INFO|Releasing lport 423d10be-bf78-43ff-8ae2-812d375ccef8 from this chassis (sb_readonly=0)
Jan 20 15:27:35 compute-1 ovn_controller[130490]: 2026-01-20T15:27:35Z|00921|binding|INFO|Setting lport 423d10be-bf78-43ff-8ae2-812d375ccef8 down in Southbound
Jan 20 15:27:35 compute-1 ovn_controller[130490]: 2026-01-20T15:27:35Z|00922|binding|INFO|Removing iface tap423d10be-bf ovn-installed in OVS
Jan 20 15:27:35 compute-1 nova_compute[225855]: 2026-01-20 15:27:35.373 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:27:35 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:27:35.380 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:56:58:d6 10.100.0.6'], port_security=['fa:16:3e:56:58:d6 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'eb6ef384-2891-42d0-9059-42b89009b14c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3b3aa186-38a4-4cc6-9399-f535503e9791', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3168f57421fb49bfb94b85daedd1fe7d', 'neutron:revision_number': '4', 'neutron:security_group_ids': '046cc664-f8d9-4379-b46e-95218c363faa', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bbfdde0f-6f5b-476d-8d21-7557a37ecad6, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], logical_port=423d10be-bf78-43ff-8ae2-812d375ccef8) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:27:35 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:27:35.381 140354 INFO neutron.agent.ovn.metadata.agent [-] Port 423d10be-bf78-43ff-8ae2-812d375ccef8 in datapath 3b3aa186-38a4-4cc6-9399-f535503e9791 unbound from our chassis
Jan 20 15:27:35 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:27:35.382 140354 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3b3aa186-38a4-4cc6-9399-f535503e9791, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 20 15:27:35 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:27:35.384 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[83682e27-6ea2-43d5-8ab9-bde3b88cb6a2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:27:35 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:27:35.385 140354 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-3b3aa186-38a4-4cc6-9399-f535503e9791 namespace which is not needed anymore
Jan 20 15:27:35 compute-1 nova_compute[225855]: 2026-01-20 15:27:35.391 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:27:35 compute-1 systemd[1]: machine-qemu\x2d107\x2dinstance\x2d000000ca.scope: Deactivated successfully.
Jan 20 15:27:35 compute-1 systemd[1]: machine-qemu\x2d107\x2dinstance\x2d000000ca.scope: Consumed 16.773s CPU time.
Jan 20 15:27:35 compute-1 systemd-machined[194361]: Machine qemu-107-instance-000000ca terminated.
Jan 20 15:27:35 compute-1 neutron-haproxy-ovnmeta-3b3aa186-38a4-4cc6-9399-f535503e9791[314207]: [NOTICE]   (314211) : haproxy version is 2.8.14-c23fe91
Jan 20 15:27:35 compute-1 neutron-haproxy-ovnmeta-3b3aa186-38a4-4cc6-9399-f535503e9791[314207]: [NOTICE]   (314211) : path to executable is /usr/sbin/haproxy
Jan 20 15:27:35 compute-1 neutron-haproxy-ovnmeta-3b3aa186-38a4-4cc6-9399-f535503e9791[314207]: [WARNING]  (314211) : Exiting Master process...
Jan 20 15:27:35 compute-1 neutron-haproxy-ovnmeta-3b3aa186-38a4-4cc6-9399-f535503e9791[314207]: [WARNING]  (314211) : Exiting Master process...
Jan 20 15:27:35 compute-1 neutron-haproxy-ovnmeta-3b3aa186-38a4-4cc6-9399-f535503e9791[314207]: [ALERT]    (314211) : Current worker (314213) exited with code 143 (Terminated)
Jan 20 15:27:35 compute-1 neutron-haproxy-ovnmeta-3b3aa186-38a4-4cc6-9399-f535503e9791[314207]: [WARNING]  (314211) : All workers exited. Exiting... (0)
Jan 20 15:27:35 compute-1 systemd[1]: libpod-c23b6ee47ebd8837b0723000e8dd8b5f34925d463ae436fc56e1762a0bc434e1.scope: Deactivated successfully.
Jan 20 15:27:35 compute-1 podman[315165]: 2026-01-20 15:27:35.51330587 +0000 UTC m=+0.043912564 container died c23b6ee47ebd8837b0723000e8dd8b5f34925d463ae436fc56e1762a0bc434e1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3b3aa186-38a4-4cc6-9399-f535503e9791, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3)
Jan 20 15:27:35 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c23b6ee47ebd8837b0723000e8dd8b5f34925d463ae436fc56e1762a0bc434e1-userdata-shm.mount: Deactivated successfully.
Jan 20 15:27:35 compute-1 systemd[1]: var-lib-containers-storage-overlay-42f6bc1236f352acdb0936adef4a9dad7a0a958fe3cb77609aa75d071a1c9c47-merged.mount: Deactivated successfully.
Jan 20 15:27:35 compute-1 podman[315165]: 2026-01-20 15:27:35.550300647 +0000 UTC m=+0.080907341 container cleanup c23b6ee47ebd8837b0723000e8dd8b5f34925d463ae436fc56e1762a0bc434e1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3b3aa186-38a4-4cc6-9399-f535503e9791, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 20 15:27:35 compute-1 systemd[1]: libpod-conmon-c23b6ee47ebd8837b0723000e8dd8b5f34925d463ae436fc56e1762a0bc434e1.scope: Deactivated successfully.
Jan 20 15:27:35 compute-1 nova_compute[225855]: 2026-01-20 15:27:35.590 225859 INFO nova.virt.libvirt.driver [-] [instance: eb6ef384-2891-42d0-9059-42b89009b14c] Instance destroyed successfully.
Jan 20 15:27:35 compute-1 nova_compute[225855]: 2026-01-20 15:27:35.591 225859 DEBUG nova.objects.instance [None req-a7e80228-0114-4046-9376-2462bdc6520d 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Lazy-loading 'resources' on Instance uuid eb6ef384-2891-42d0-9059-42b89009b14c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:27:35 compute-1 nova_compute[225855]: 2026-01-20 15:27:35.611 225859 DEBUG nova.virt.libvirt.vif [None req-a7e80228-0114-4046-9376-2462bdc6520d 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T15:25:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1635932386',display_name='tempest-TestNetworkBasicOps-server-1635932386',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1635932386',id=202,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOxLu/uVu4uuXkZBckB0Jue8mA2XpnPI63IpB2BGooiySZuLgddUiCiwQ3/YqBeUzNGbEuiI4/oWiiYa4zQrQHAa9idheznhVw0kdlFQsBm1hL1vB4bH09utur5br8iaiQ==',key_name='tempest-TestNetworkBasicOps-305263708',keypairs=<?>,launch_index=0,launched_at=2026-01-20T15:25:55Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3168f57421fb49bfb94b85daedd1fe7d',ramdisk_id='',reservation_id='r-1wrkal35',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-807695970',owner_user_name='tempest-TestNetworkBasicOps-807695970-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T15:25:55Z,user_data=None,user_id='5338aa65dc0e4326a66ce79053787f14',uuid=eb6ef384-2891-42d0-9059-42b89009b14c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "423d10be-bf78-43ff-8ae2-812d375ccef8", "address": "fa:16:3e:56:58:d6", "network": {"id": "3b3aa186-38a4-4cc6-9399-f535503e9791", "bridge": "br-int", "label": "tempest-network-smoke--458751349", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3168f57421fb49bfb94b85daedd1fe7d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap423d10be-bf", "ovs_interfaceid": "423d10be-bf78-43ff-8ae2-812d375ccef8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 20 15:27:35 compute-1 nova_compute[225855]: 2026-01-20 15:27:35.612 225859 DEBUG nova.network.os_vif_util [None req-a7e80228-0114-4046-9376-2462bdc6520d 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Converting VIF {"id": "423d10be-bf78-43ff-8ae2-812d375ccef8", "address": "fa:16:3e:56:58:d6", "network": {"id": "3b3aa186-38a4-4cc6-9399-f535503e9791", "bridge": "br-int", "label": "tempest-network-smoke--458751349", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3168f57421fb49bfb94b85daedd1fe7d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap423d10be-bf", "ovs_interfaceid": "423d10be-bf78-43ff-8ae2-812d375ccef8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 15:27:35 compute-1 nova_compute[225855]: 2026-01-20 15:27:35.613 225859 DEBUG nova.network.os_vif_util [None req-a7e80228-0114-4046-9376-2462bdc6520d 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:56:58:d6,bridge_name='br-int',has_traffic_filtering=True,id=423d10be-bf78-43ff-8ae2-812d375ccef8,network=Network(3b3aa186-38a4-4cc6-9399-f535503e9791),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap423d10be-bf') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 15:27:35 compute-1 nova_compute[225855]: 2026-01-20 15:27:35.613 225859 DEBUG os_vif [None req-a7e80228-0114-4046-9376-2462bdc6520d 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:56:58:d6,bridge_name='br-int',has_traffic_filtering=True,id=423d10be-bf78-43ff-8ae2-812d375ccef8,network=Network(3b3aa186-38a4-4cc6-9399-f535503e9791),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap423d10be-bf') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 20 15:27:35 compute-1 nova_compute[225855]: 2026-01-20 15:27:35.615 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:27:35 compute-1 nova_compute[225855]: 2026-01-20 15:27:35.617 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap423d10be-bf, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:27:35 compute-1 podman[315194]: 2026-01-20 15:27:35.619559254 +0000 UTC m=+0.049935317 container remove c23b6ee47ebd8837b0723000e8dd8b5f34925d463ae436fc56e1762a0bc434e1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3b3aa186-38a4-4cc6-9399-f535503e9791, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2)
Jan 20 15:27:35 compute-1 nova_compute[225855]: 2026-01-20 15:27:35.622 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 15:27:35 compute-1 nova_compute[225855]: 2026-01-20 15:27:35.625 225859 INFO os_vif [None req-a7e80228-0114-4046-9376-2462bdc6520d 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:56:58:d6,bridge_name='br-int',has_traffic_filtering=True,id=423d10be-bf78-43ff-8ae2-812d375ccef8,network=Network(3b3aa186-38a4-4cc6-9399-f535503e9791),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap423d10be-bf')
Jan 20 15:27:35 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:27:35.627 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[e478625b-a3cc-46ab-9395-982493ef048c]: (4, ('Tue Jan 20 03:27:35 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-3b3aa186-38a4-4cc6-9399-f535503e9791 (c23b6ee47ebd8837b0723000e8dd8b5f34925d463ae436fc56e1762a0bc434e1)\nc23b6ee47ebd8837b0723000e8dd8b5f34925d463ae436fc56e1762a0bc434e1\nTue Jan 20 03:27:35 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-3b3aa186-38a4-4cc6-9399-f535503e9791 (c23b6ee47ebd8837b0723000e8dd8b5f34925d463ae436fc56e1762a0bc434e1)\nc23b6ee47ebd8837b0723000e8dd8b5f34925d463ae436fc56e1762a0bc434e1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:27:35 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:27:35.629 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[23392637-bd9b-4b91-b859-60ab8962d589]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:27:35 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:27:35.630 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3b3aa186-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:27:35 compute-1 kernel: tap3b3aa186-30: left promiscuous mode
Jan 20 15:27:35 compute-1 nova_compute[225855]: 2026-01-20 15:27:35.643 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:27:35 compute-1 nova_compute[225855]: 2026-01-20 15:27:35.646 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:27:35 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:27:35.649 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[1c8ee5ee-5ee7-4d1d-9682-0cc850c73d33]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:27:35 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:27:35.662 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[b0b25f3c-8499-4d5a-9272-bb0e974a3f44]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:27:35 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:27:35.664 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[bf77e96b-851f-4d08-b1d2-57ac22fc8200]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:27:35 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:27:35.683 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[eb14d818-ce6f-4ee2-92e2-1722af9cd7f6]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 779313, 'reachable_time': 25571, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 315238, 'error': None, 'target': 'ovnmeta-3b3aa186-38a4-4cc6-9399-f535503e9791', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:27:35 compute-1 systemd[1]: run-netns-ovnmeta\x2d3b3aa186\x2d38a4\x2d4cc6\x2d9399\x2df535503e9791.mount: Deactivated successfully.
Jan 20 15:27:35 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:27:35.687 140466 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-3b3aa186-38a4-4cc6-9399-f535503e9791 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 20 15:27:35 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:27:35.687 140466 DEBUG oslo.privsep.daemon [-] privsep: reply[d706d4cc-6f6f-49b1-a843-07e4f6237cab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:27:36 compute-1 nova_compute[225855]: 2026-01-20 15:27:36.040 225859 INFO nova.virt.libvirt.driver [None req-a7e80228-0114-4046-9376-2462bdc6520d 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: eb6ef384-2891-42d0-9059-42b89009b14c] Deleting instance files /var/lib/nova/instances/eb6ef384-2891-42d0-9059-42b89009b14c_del
Jan 20 15:27:36 compute-1 nova_compute[225855]: 2026-01-20 15:27:36.041 225859 INFO nova.virt.libvirt.driver [None req-a7e80228-0114-4046-9376-2462bdc6520d 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: eb6ef384-2891-42d0-9059-42b89009b14c] Deletion of /var/lib/nova/instances/eb6ef384-2891-42d0-9059-42b89009b14c_del complete
Jan 20 15:27:36 compute-1 nova_compute[225855]: 2026-01-20 15:27:36.114 225859 INFO nova.compute.manager [None req-a7e80228-0114-4046-9376-2462bdc6520d 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: eb6ef384-2891-42d0-9059-42b89009b14c] Took 1.56 seconds to destroy the instance on the hypervisor.
Jan 20 15:27:36 compute-1 nova_compute[225855]: 2026-01-20 15:27:36.115 225859 DEBUG oslo.service.loopingcall [None req-a7e80228-0114-4046-9376-2462bdc6520d 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 20 15:27:36 compute-1 nova_compute[225855]: 2026-01-20 15:27:36.115 225859 DEBUG nova.compute.manager [-] [instance: eb6ef384-2891-42d0-9059-42b89009b14c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 20 15:27:36 compute-1 nova_compute[225855]: 2026-01-20 15:27:36.116 225859 DEBUG nova.network.neutron [-] [instance: eb6ef384-2891-42d0-9059-42b89009b14c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 20 15:27:36 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:27:36 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:27:36 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:27:36.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:27:36 compute-1 ceph-mon[81775]: pgmap v3162: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 20 15:27:36 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:27:36 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:27:36 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:27:36.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:27:36 compute-1 nova_compute[225855]: 2026-01-20 15:27:36.573 225859 DEBUG nova.compute.manager [req-f0ca6ea0-ec60-4d75-8cdf-217a7948a4dd req-b6d3295f-46ce-49bd-9588-8bab400b4594 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: eb6ef384-2891-42d0-9059-42b89009b14c] Received event network-vif-unplugged-423d10be-bf78-43ff-8ae2-812d375ccef8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:27:36 compute-1 nova_compute[225855]: 2026-01-20 15:27:36.573 225859 DEBUG oslo_concurrency.lockutils [req-f0ca6ea0-ec60-4d75-8cdf-217a7948a4dd req-b6d3295f-46ce-49bd-9588-8bab400b4594 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "eb6ef384-2891-42d0-9059-42b89009b14c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:27:36 compute-1 nova_compute[225855]: 2026-01-20 15:27:36.574 225859 DEBUG oslo_concurrency.lockutils [req-f0ca6ea0-ec60-4d75-8cdf-217a7948a4dd req-b6d3295f-46ce-49bd-9588-8bab400b4594 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "eb6ef384-2891-42d0-9059-42b89009b14c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:27:36 compute-1 nova_compute[225855]: 2026-01-20 15:27:36.574 225859 DEBUG oslo_concurrency.lockutils [req-f0ca6ea0-ec60-4d75-8cdf-217a7948a4dd req-b6d3295f-46ce-49bd-9588-8bab400b4594 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "eb6ef384-2891-42d0-9059-42b89009b14c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:27:36 compute-1 nova_compute[225855]: 2026-01-20 15:27:36.574 225859 DEBUG nova.compute.manager [req-f0ca6ea0-ec60-4d75-8cdf-217a7948a4dd req-b6d3295f-46ce-49bd-9588-8bab400b4594 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: eb6ef384-2891-42d0-9059-42b89009b14c] No waiting events found dispatching network-vif-unplugged-423d10be-bf78-43ff-8ae2-812d375ccef8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:27:36 compute-1 nova_compute[225855]: 2026-01-20 15:27:36.574 225859 DEBUG nova.compute.manager [req-f0ca6ea0-ec60-4d75-8cdf-217a7948a4dd req-b6d3295f-46ce-49bd-9588-8bab400b4594 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: eb6ef384-2891-42d0-9059-42b89009b14c] Received event network-vif-unplugged-423d10be-bf78-43ff-8ae2-812d375ccef8 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 20 15:27:37 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:27:37.092 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5ffd4ac3-9266-4927-98ad-20a17782c725, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '73'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:27:38 compute-1 ceph-mon[81775]: pgmap v3163: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 20 15:27:38 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:27:38 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:27:38 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:27:38.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:27:38 compute-1 nova_compute[225855]: 2026-01-20 15:27:38.347 225859 DEBUG nova.network.neutron [-] [instance: eb6ef384-2891-42d0-9059-42b89009b14c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:27:38 compute-1 nova_compute[225855]: 2026-01-20 15:27:38.369 225859 INFO nova.compute.manager [-] [instance: eb6ef384-2891-42d0-9059-42b89009b14c] Took 2.25 seconds to deallocate network for instance.
Jan 20 15:27:38 compute-1 nova_compute[225855]: 2026-01-20 15:27:38.414 225859 DEBUG oslo_concurrency.lockutils [None req-a7e80228-0114-4046-9376-2462bdc6520d 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:27:38 compute-1 nova_compute[225855]: 2026-01-20 15:27:38.415 225859 DEBUG oslo_concurrency.lockutils [None req-a7e80228-0114-4046-9376-2462bdc6520d 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:27:38 compute-1 nova_compute[225855]: 2026-01-20 15:27:38.449 225859 DEBUG nova.compute.manager [req-2dbe7723-39ae-477c-9557-c468cd59fe5a req-7b66b752-0f8d-40b8-b903-c1077d1960fb 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: eb6ef384-2891-42d0-9059-42b89009b14c] Received event network-vif-deleted-423d10be-bf78-43ff-8ae2-812d375ccef8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:27:38 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:27:38 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:27:38 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:27:38.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:27:38 compute-1 nova_compute[225855]: 2026-01-20 15:27:38.466 225859 DEBUG oslo_concurrency.processutils [None req-a7e80228-0114-4046-9376-2462bdc6520d 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:27:38 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:27:38 compute-1 nova_compute[225855]: 2026-01-20 15:27:38.514 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:27:38 compute-1 nova_compute[225855]: 2026-01-20 15:27:38.686 225859 DEBUG nova.compute.manager [req-10b9b0d8-19d5-4431-aed8-6dff23102258 req-e25f0678-7139-49fa-b647-f6c386ff8cc6 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: eb6ef384-2891-42d0-9059-42b89009b14c] Received event network-vif-plugged-423d10be-bf78-43ff-8ae2-812d375ccef8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:27:38 compute-1 nova_compute[225855]: 2026-01-20 15:27:38.687 225859 DEBUG oslo_concurrency.lockutils [req-10b9b0d8-19d5-4431-aed8-6dff23102258 req-e25f0678-7139-49fa-b647-f6c386ff8cc6 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "eb6ef384-2891-42d0-9059-42b89009b14c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:27:38 compute-1 nova_compute[225855]: 2026-01-20 15:27:38.687 225859 DEBUG oslo_concurrency.lockutils [req-10b9b0d8-19d5-4431-aed8-6dff23102258 req-e25f0678-7139-49fa-b647-f6c386ff8cc6 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "eb6ef384-2891-42d0-9059-42b89009b14c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:27:38 compute-1 nova_compute[225855]: 2026-01-20 15:27:38.687 225859 DEBUG oslo_concurrency.lockutils [req-10b9b0d8-19d5-4431-aed8-6dff23102258 req-e25f0678-7139-49fa-b647-f6c386ff8cc6 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "eb6ef384-2891-42d0-9059-42b89009b14c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:27:38 compute-1 nova_compute[225855]: 2026-01-20 15:27:38.687 225859 DEBUG nova.compute.manager [req-10b9b0d8-19d5-4431-aed8-6dff23102258 req-e25f0678-7139-49fa-b647-f6c386ff8cc6 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: eb6ef384-2891-42d0-9059-42b89009b14c] No waiting events found dispatching network-vif-plugged-423d10be-bf78-43ff-8ae2-812d375ccef8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:27:38 compute-1 nova_compute[225855]: 2026-01-20 15:27:38.688 225859 WARNING nova.compute.manager [req-10b9b0d8-19d5-4431-aed8-6dff23102258 req-e25f0678-7139-49fa-b647-f6c386ff8cc6 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: eb6ef384-2891-42d0-9059-42b89009b14c] Received unexpected event network-vif-plugged-423d10be-bf78-43ff-8ae2-812d375ccef8 for instance with vm_state deleted and task_state None.
Jan 20 15:27:38 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:27:38 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2202569858' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:27:38 compute-1 nova_compute[225855]: 2026-01-20 15:27:38.920 225859 DEBUG nova.network.neutron [req-0ee3aff6-fcab-40bb-ad29-b3eb5bffb864 req-0521a83f-5d85-4b6a-8a7b-aa7b4aa2d7c5 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: eb6ef384-2891-42d0-9059-42b89009b14c] Updated VIF entry in instance network info cache for port 423d10be-bf78-43ff-8ae2-812d375ccef8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 15:27:38 compute-1 nova_compute[225855]: 2026-01-20 15:27:38.921 225859 DEBUG nova.network.neutron [req-0ee3aff6-fcab-40bb-ad29-b3eb5bffb864 req-0521a83f-5d85-4b6a-8a7b-aa7b4aa2d7c5 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: eb6ef384-2891-42d0-9059-42b89009b14c] Updating instance_info_cache with network_info: [{"id": "423d10be-bf78-43ff-8ae2-812d375ccef8", "address": "fa:16:3e:56:58:d6", "network": {"id": "3b3aa186-38a4-4cc6-9399-f535503e9791", "bridge": "br-int", "label": "tempest-network-smoke--458751349", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3168f57421fb49bfb94b85daedd1fe7d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap423d10be-bf", "ovs_interfaceid": "423d10be-bf78-43ff-8ae2-812d375ccef8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:27:38 compute-1 nova_compute[225855]: 2026-01-20 15:27:38.923 225859 DEBUG oslo_concurrency.processutils [None req-a7e80228-0114-4046-9376-2462bdc6520d 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:27:38 compute-1 nova_compute[225855]: 2026-01-20 15:27:38.929 225859 DEBUG nova.compute.provider_tree [None req-a7e80228-0114-4046-9376-2462bdc6520d 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:27:38 compute-1 nova_compute[225855]: 2026-01-20 15:27:38.945 225859 DEBUG oslo_concurrency.lockutils [req-0ee3aff6-fcab-40bb-ad29-b3eb5bffb864 req-0521a83f-5d85-4b6a-8a7b-aa7b4aa2d7c5 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-eb6ef384-2891-42d0-9059-42b89009b14c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:27:38 compute-1 nova_compute[225855]: 2026-01-20 15:27:38.948 225859 DEBUG nova.scheduler.client.report [None req-a7e80228-0114-4046-9376-2462bdc6520d 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:27:38 compute-1 nova_compute[225855]: 2026-01-20 15:27:38.969 225859 DEBUG oslo_concurrency.lockutils [None req-a7e80228-0114-4046-9376-2462bdc6520d 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.554s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:27:39 compute-1 nova_compute[225855]: 2026-01-20 15:27:39.001 225859 INFO nova.scheduler.client.report [None req-a7e80228-0114-4046-9376-2462bdc6520d 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Deleted allocations for instance eb6ef384-2891-42d0-9059-42b89009b14c
Jan 20 15:27:39 compute-1 nova_compute[225855]: 2026-01-20 15:27:39.064 225859 DEBUG oslo_concurrency.lockutils [None req-a7e80228-0114-4046-9376-2462bdc6520d 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Lock "eb6ef384-2891-42d0-9059-42b89009b14c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.516s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:27:39 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/2202569858' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:27:40 compute-1 ceph-mon[81775]: pgmap v3164: 321 pgs: 321 active+clean; 177 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 426 B/s rd, 0 B/s wr, 1 op/s
Jan 20 15:27:40 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:27:40 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:27:40 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:27:40.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:27:40 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:27:40 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:27:40 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:27:40.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:27:40 compute-1 nova_compute[225855]: 2026-01-20 15:27:40.620 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:27:42 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:27:42 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:27:42 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:27:42.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:27:42 compute-1 ceph-mon[81775]: pgmap v3165: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 20 15:27:42 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:27:42 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:27:42 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:27:42.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:27:43 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:27:43 compute-1 nova_compute[225855]: 2026-01-20 15:27:43.515 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:27:44 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:27:44 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:27:44 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:27:44.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:27:44 compute-1 ceph-mon[81775]: pgmap v3166: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 20 15:27:44 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:27:44 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:27:44 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:27:44.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:27:44 compute-1 nova_compute[225855]: 2026-01-20 15:27:44.467 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:27:44 compute-1 nova_compute[225855]: 2026-01-20 15:27:44.554 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:27:44 compute-1 podman[315268]: 2026-01-20 15:27:44.999227774 +0000 UTC m=+0.047074505 container health_status 533cd8e0c3b8b7f910f15cef49e7f879ab46c12ce646a35628c15f13725eb500 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 20 15:27:45 compute-1 nova_compute[225855]: 2026-01-20 15:27:45.622 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:27:46 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:27:46 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:27:46 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:27:46.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:27:46 compute-1 ceph-mon[81775]: pgmap v3167: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 20 15:27:46 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:27:46 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:27:46 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:27:46.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:27:46 compute-1 sudo[315290]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:27:46 compute-1 sudo[315290]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:27:46 compute-1 sudo[315290]: pam_unix(sudo:session): session closed for user root
Jan 20 15:27:46 compute-1 sudo[315315]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:27:46 compute-1 sudo[315315]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:27:46 compute-1 sudo[315315]: pam_unix(sudo:session): session closed for user root
Jan 20 15:27:48 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:27:48 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:27:48 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:27:48.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:27:48 compute-1 ceph-mon[81775]: pgmap v3168: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 20 15:27:48 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:27:48 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:27:48 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:27:48.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:27:48 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:27:48 compute-1 nova_compute[225855]: 2026-01-20 15:27:48.517 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:27:50 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:27:50 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:27:50 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:27:50.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:27:50 compute-1 ceph-mon[81775]: pgmap v3169: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 18 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 20 15:27:50 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:27:50 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:27:50 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:27:50.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:27:50 compute-1 nova_compute[225855]: 2026-01-20 15:27:50.588 225859 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1768922855.5870845, eb6ef384-2891-42d0-9059-42b89009b14c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:27:50 compute-1 nova_compute[225855]: 2026-01-20 15:27:50.588 225859 INFO nova.compute.manager [-] [instance: eb6ef384-2891-42d0-9059-42b89009b14c] VM Stopped (Lifecycle Event)
Jan 20 15:27:50 compute-1 nova_compute[225855]: 2026-01-20 15:27:50.616 225859 DEBUG nova.compute.manager [None req-e6cf17ca-7728-4417-8402-3f280ac4bb16 - - - - - -] [instance: eb6ef384-2891-42d0-9059-42b89009b14c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:27:50 compute-1 nova_compute[225855]: 2026-01-20 15:27:50.624 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:27:52 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:27:52 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:27:52 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:27:52.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:27:52 compute-1 ceph-mon[81775]: pgmap v3170: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 18 KiB/s rd, 1.2 KiB/s wr, 26 op/s
Jan 20 15:27:52 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:27:52 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:27:52 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:27:52.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:27:53 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:27:53 compute-1 nova_compute[225855]: 2026-01-20 15:27:53.518 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:27:54 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:27:54 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:27:54 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:27:54.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:27:54 compute-1 ceph-mon[81775]: pgmap v3171: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:27:54 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:27:54 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:27:54 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:27:54.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:27:54 compute-1 ceph-mon[81775]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 20 15:27:54 compute-1 ceph-mon[81775]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 5400.0 total, 600.0 interval
                                           Cumulative writes: 15K writes, 77K keys, 15K commit groups, 1.0 writes per commit group, ingest: 0.15 GB, 0.03 MB/s
                                           Cumulative WAL: 15K writes, 15K syncs, 1.00 writes per sync, written: 0.15 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1509 writes, 7469 keys, 1509 commit groups, 1.0 writes per commit group, ingest: 15.29 MB, 0.03 MB/s
                                           Interval WAL: 1509 writes, 1509 syncs, 1.00 writes per sync, written: 0.01 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     80.6      1.16              0.32        49    0.024       0      0       0.0       0.0
                                             L6      1/0   12.51 MB   0.0      0.6     0.1      0.5       0.5      0.0       0.0   5.1    101.5     86.8      5.56              1.53        48    0.116    353K    26K       0.0       0.0
                                            Sum      1/0   12.51 MB   0.0      0.6     0.1      0.5       0.6      0.1       0.0   6.1     83.9     85.8      6.72              1.85        97    0.069    353K    26K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   7.9     78.2     79.6      1.03              0.23        12    0.086     60K   3110       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.6     0.1      0.5       0.5      0.0       0.0   0.0    101.5     86.8      5.56              1.53        48    0.116    353K    26K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     80.8      1.16              0.32        48    0.024       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.7      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 5400.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.092, interval 0.010
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.56 GB write, 0.11 MB/s write, 0.55 GB read, 0.10 MB/s read, 6.7 seconds
                                           Interval compaction: 0.08 GB write, 0.14 MB/s write, 0.08 GB read, 0.13 MB/s read, 1.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564d515a71f0#2 capacity: 304.00 MB usage: 62.47 MB table_size: 0 occupancy: 18446744073709551615 collections: 10 last_copies: 0 last_secs: 0.000375 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3577,59.91 MB,19.7072%) FilterBlock(97,977.48 KB,0.314005%) IndexBlock(97,1.60 MB,0.527126%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Jan 20 15:27:55 compute-1 nova_compute[225855]: 2026-01-20 15:27:55.627 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:27:56 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:27:56 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:27:56 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:27:56.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:27:56 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:27:56 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:27:56 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:27:56.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:27:56 compute-1 ceph-mon[81775]: pgmap v3172: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:27:58 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:27:58 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:27:58 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:27:58.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:27:58 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:27:58 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:27:58 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:27:58.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:27:58 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:27:58 compute-1 nova_compute[225855]: 2026-01-20 15:27:58.520 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:27:58 compute-1 ceph-mon[81775]: pgmap v3173: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:28:00 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:28:00 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:28:00 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:28:00.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:28:00 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:28:00 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:28:00 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:28:00.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:28:00 compute-1 ceph-mon[81775]: pgmap v3174: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:28:00 compute-1 nova_compute[225855]: 2026-01-20 15:28:00.631 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:28:02 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:28:02 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:28:02 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:28:02.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:28:02 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:28:02 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:28:02 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:28:02.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:28:02 compute-1 ceph-mon[81775]: pgmap v3175: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:28:03 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:28:03 compute-1 nova_compute[225855]: 2026-01-20 15:28:03.522 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:28:04 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:28:04 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:28:04 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:28:04.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:28:04 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:28:04 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:28:04 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:28:04.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:28:04 compute-1 ceph-mon[81775]: pgmap v3176: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:28:05 compute-1 nova_compute[225855]: 2026-01-20 15:28:05.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:28:05 compute-1 nova_compute[225855]: 2026-01-20 15:28:05.339 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 15:28:05 compute-1 nova_compute[225855]: 2026-01-20 15:28:05.633 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:28:06 compute-1 podman[315351]: 2026-01-20 15:28:06.028958078 +0000 UTC m=+0.076063223 container health_status 72be95c12e041eb2cd1a16ad70dfa1391d9f03a5f3180a614ead9459dcfcd46f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, 
tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 20 15:28:06 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:28:06 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:28:06 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:28:06.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:28:06 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:28:06 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:28:06 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:28:06.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:28:06 compute-1 sudo[315377]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:28:06 compute-1 sudo[315377]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:28:06 compute-1 ceph-mon[81775]: pgmap v3177: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:28:06 compute-1 sudo[315377]: pam_unix(sudo:session): session closed for user root
Jan 20 15:28:06 compute-1 sudo[315402]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:28:06 compute-1 sudo[315402]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:28:06 compute-1 sudo[315402]: pam_unix(sudo:session): session closed for user root
Jan 20 15:28:07 compute-1 nova_compute[225855]: 2026-01-20 15:28:07.066 225859 DEBUG oslo_concurrency.lockutils [None req-363dc4c9-30e3-49d5-9b76-6537f931887c 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Acquiring lock "770605b0-4686-4d97-9f82-7ed299482f50" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:28:07 compute-1 nova_compute[225855]: 2026-01-20 15:28:07.066 225859 DEBUG oslo_concurrency.lockutils [None req-363dc4c9-30e3-49d5-9b76-6537f931887c 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Lock "770605b0-4686-4d97-9f82-7ed299482f50" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:28:07 compute-1 nova_compute[225855]: 2026-01-20 15:28:07.091 225859 DEBUG nova.compute.manager [None req-363dc4c9-30e3-49d5-9b76-6537f931887c 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 770605b0-4686-4d97-9f82-7ed299482f50] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 20 15:28:07 compute-1 nova_compute[225855]: 2026-01-20 15:28:07.219 225859 DEBUG oslo_concurrency.lockutils [None req-363dc4c9-30e3-49d5-9b76-6537f931887c 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:28:07 compute-1 nova_compute[225855]: 2026-01-20 15:28:07.220 225859 DEBUG oslo_concurrency.lockutils [None req-363dc4c9-30e3-49d5-9b76-6537f931887c 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:28:07 compute-1 nova_compute[225855]: 2026-01-20 15:28:07.231 225859 DEBUG nova.virt.hardware [None req-363dc4c9-30e3-49d5-9b76-6537f931887c 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 20 15:28:07 compute-1 nova_compute[225855]: 2026-01-20 15:28:07.231 225859 INFO nova.compute.claims [None req-363dc4c9-30e3-49d5-9b76-6537f931887c 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 770605b0-4686-4d97-9f82-7ed299482f50] Claim successful on node compute-1.ctlplane.example.com
Jan 20 15:28:07 compute-1 nova_compute[225855]: 2026-01-20 15:28:07.338 225859 DEBUG oslo_concurrency.processutils [None req-363dc4c9-30e3-49d5-9b76-6537f931887c 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:28:07 compute-1 nova_compute[225855]: 2026-01-20 15:28:07.809 225859 DEBUG oslo_concurrency.processutils [None req-363dc4c9-30e3-49d5-9b76-6537f931887c 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:28:07 compute-1 nova_compute[225855]: 2026-01-20 15:28:07.817 225859 DEBUG nova.compute.provider_tree [None req-363dc4c9-30e3-49d5-9b76-6537f931887c 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:28:07 compute-1 nova_compute[225855]: 2026-01-20 15:28:07.839 225859 DEBUG nova.scheduler.client.report [None req-363dc4c9-30e3-49d5-9b76-6537f931887c 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:28:07 compute-1 nova_compute[225855]: 2026-01-20 15:28:07.873 225859 DEBUG oslo_concurrency.lockutils [None req-363dc4c9-30e3-49d5-9b76-6537f931887c 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.653s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:28:07 compute-1 nova_compute[225855]: 2026-01-20 15:28:07.874 225859 DEBUG nova.compute.manager [None req-363dc4c9-30e3-49d5-9b76-6537f931887c 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 770605b0-4686-4d97-9f82-7ed299482f50] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 20 15:28:07 compute-1 nova_compute[225855]: 2026-01-20 15:28:07.918 225859 DEBUG nova.compute.manager [None req-363dc4c9-30e3-49d5-9b76-6537f931887c 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 770605b0-4686-4d97-9f82-7ed299482f50] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 20 15:28:07 compute-1 nova_compute[225855]: 2026-01-20 15:28:07.919 225859 DEBUG nova.network.neutron [None req-363dc4c9-30e3-49d5-9b76-6537f931887c 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 770605b0-4686-4d97-9f82-7ed299482f50] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 20 15:28:07 compute-1 nova_compute[225855]: 2026-01-20 15:28:07.939 225859 INFO nova.virt.libvirt.driver [None req-363dc4c9-30e3-49d5-9b76-6537f931887c 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 770605b0-4686-4d97-9f82-7ed299482f50] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 20 15:28:07 compute-1 nova_compute[225855]: 2026-01-20 15:28:07.957 225859 DEBUG nova.compute.manager [None req-363dc4c9-30e3-49d5-9b76-6537f931887c 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 770605b0-4686-4d97-9f82-7ed299482f50] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 20 15:28:08 compute-1 nova_compute[225855]: 2026-01-20 15:28:08.071 225859 DEBUG nova.compute.manager [None req-363dc4c9-30e3-49d5-9b76-6537f931887c 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 770605b0-4686-4d97-9f82-7ed299482f50] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 20 15:28:08 compute-1 nova_compute[225855]: 2026-01-20 15:28:08.073 225859 DEBUG nova.virt.libvirt.driver [None req-363dc4c9-30e3-49d5-9b76-6537f931887c 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 770605b0-4686-4d97-9f82-7ed299482f50] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 20 15:28:08 compute-1 nova_compute[225855]: 2026-01-20 15:28:08.073 225859 INFO nova.virt.libvirt.driver [None req-363dc4c9-30e3-49d5-9b76-6537f931887c 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 770605b0-4686-4d97-9f82-7ed299482f50] Creating image(s)
Jan 20 15:28:08 compute-1 nova_compute[225855]: 2026-01-20 15:28:08.095 225859 DEBUG nova.storage.rbd_utils [None req-363dc4c9-30e3-49d5-9b76-6537f931887c 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] rbd image 770605b0-4686-4d97-9f82-7ed299482f50_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:28:08 compute-1 nova_compute[225855]: 2026-01-20 15:28:08.121 225859 DEBUG nova.storage.rbd_utils [None req-363dc4c9-30e3-49d5-9b76-6537f931887c 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] rbd image 770605b0-4686-4d97-9f82-7ed299482f50_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:28:08 compute-1 nova_compute[225855]: 2026-01-20 15:28:08.144 225859 DEBUG nova.storage.rbd_utils [None req-363dc4c9-30e3-49d5-9b76-6537f931887c 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] rbd image 770605b0-4686-4d97-9f82-7ed299482f50_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:28:08 compute-1 nova_compute[225855]: 2026-01-20 15:28:08.148 225859 DEBUG oslo_concurrency.processutils [None req-363dc4c9-30e3-49d5-9b76-6537f931887c 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:28:08 compute-1 nova_compute[225855]: 2026-01-20 15:28:08.215 225859 DEBUG oslo_concurrency.processutils [None req-363dc4c9-30e3-49d5-9b76-6537f931887c 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:28:08 compute-1 nova_compute[225855]: 2026-01-20 15:28:08.216 225859 DEBUG oslo_concurrency.lockutils [None req-363dc4c9-30e3-49d5-9b76-6537f931887c 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Acquiring lock "82d5c1918fd7c974214c7a48c1793a7a82560462" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:28:08 compute-1 nova_compute[225855]: 2026-01-20 15:28:08.217 225859 DEBUG oslo_concurrency.lockutils [None req-363dc4c9-30e3-49d5-9b76-6537f931887c 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:28:08 compute-1 nova_compute[225855]: 2026-01-20 15:28:08.217 225859 DEBUG oslo_concurrency.lockutils [None req-363dc4c9-30e3-49d5-9b76-6537f931887c 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:28:08 compute-1 nova_compute[225855]: 2026-01-20 15:28:08.242 225859 DEBUG nova.storage.rbd_utils [None req-363dc4c9-30e3-49d5-9b76-6537f931887c 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] rbd image 770605b0-4686-4d97-9f82-7ed299482f50_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:28:08 compute-1 nova_compute[225855]: 2026-01-20 15:28:08.247 225859 DEBUG oslo_concurrency.processutils [None req-363dc4c9-30e3-49d5-9b76-6537f931887c 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 770605b0-4686-4d97-9f82-7ed299482f50_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:28:08 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:28:08 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:28:08 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:28:08.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:28:08 compute-1 nova_compute[225855]: 2026-01-20 15:28:08.339 225859 DEBUG nova.policy [None req-363dc4c9-30e3-49d5-9b76-6537f931887c 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '5338aa65dc0e4326a66ce79053787f14', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '3168f57421fb49bfb94b85daedd1fe7d', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 20 15:28:08 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:28:08 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:28:08 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:28:08.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:28:08 compute-1 nova_compute[225855]: 2026-01-20 15:28:08.503 225859 DEBUG oslo_concurrency.processutils [None req-363dc4c9-30e3-49d5-9b76-6537f931887c 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 770605b0-4686-4d97-9f82-7ed299482f50_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.256s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:28:08 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:28:08 compute-1 nova_compute[225855]: 2026-01-20 15:28:08.534 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:28:08 compute-1 nova_compute[225855]: 2026-01-20 15:28:08.570 225859 DEBUG nova.storage.rbd_utils [None req-363dc4c9-30e3-49d5-9b76-6537f931887c 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] resizing rbd image 770605b0-4686-4d97-9f82-7ed299482f50_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 20 15:28:08 compute-1 nova_compute[225855]: 2026-01-20 15:28:08.657 225859 DEBUG nova.objects.instance [None req-363dc4c9-30e3-49d5-9b76-6537f931887c 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Lazy-loading 'migration_context' on Instance uuid 770605b0-4686-4d97-9f82-7ed299482f50 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:28:08 compute-1 nova_compute[225855]: 2026-01-20 15:28:08.673 225859 DEBUG nova.virt.libvirt.driver [None req-363dc4c9-30e3-49d5-9b76-6537f931887c 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 770605b0-4686-4d97-9f82-7ed299482f50] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 20 15:28:08 compute-1 nova_compute[225855]: 2026-01-20 15:28:08.673 225859 DEBUG nova.virt.libvirt.driver [None req-363dc4c9-30e3-49d5-9b76-6537f931887c 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 770605b0-4686-4d97-9f82-7ed299482f50] Ensure instance console log exists: /var/lib/nova/instances/770605b0-4686-4d97-9f82-7ed299482f50/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 20 15:28:08 compute-1 nova_compute[225855]: 2026-01-20 15:28:08.674 225859 DEBUG oslo_concurrency.lockutils [None req-363dc4c9-30e3-49d5-9b76-6537f931887c 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:28:08 compute-1 nova_compute[225855]: 2026-01-20 15:28:08.674 225859 DEBUG oslo_concurrency.lockutils [None req-363dc4c9-30e3-49d5-9b76-6537f931887c 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:28:08 compute-1 nova_compute[225855]: 2026-01-20 15:28:08.674 225859 DEBUG oslo_concurrency.lockutils [None req-363dc4c9-30e3-49d5-9b76-6537f931887c 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:28:08 compute-1 ceph-mon[81775]: pgmap v3178: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:28:08 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/185543081' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:28:09 compute-1 nova_compute[225855]: 2026-01-20 15:28:09.127 225859 DEBUG nova.network.neutron [None req-363dc4c9-30e3-49d5-9b76-6537f931887c 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 770605b0-4686-4d97-9f82-7ed299482f50] Successfully created port: 52fb2315-9ec5-47a4-af4a-e0ed5e4caf21 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 20 15:28:09 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/566218198' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:28:09 compute-1 nova_compute[225855]: 2026-01-20 15:28:09.934 225859 DEBUG nova.network.neutron [None req-363dc4c9-30e3-49d5-9b76-6537f931887c 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 770605b0-4686-4d97-9f82-7ed299482f50] Successfully updated port: 52fb2315-9ec5-47a4-af4a-e0ed5e4caf21 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 20 15:28:09 compute-1 nova_compute[225855]: 2026-01-20 15:28:09.953 225859 DEBUG oslo_concurrency.lockutils [None req-363dc4c9-30e3-49d5-9b76-6537f931887c 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Acquiring lock "refresh_cache-770605b0-4686-4d97-9f82-7ed299482f50" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:28:09 compute-1 nova_compute[225855]: 2026-01-20 15:28:09.953 225859 DEBUG oslo_concurrency.lockutils [None req-363dc4c9-30e3-49d5-9b76-6537f931887c 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Acquired lock "refresh_cache-770605b0-4686-4d97-9f82-7ed299482f50" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:28:09 compute-1 nova_compute[225855]: 2026-01-20 15:28:09.953 225859 DEBUG nova.network.neutron [None req-363dc4c9-30e3-49d5-9b76-6537f931887c 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 770605b0-4686-4d97-9f82-7ed299482f50] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 15:28:10 compute-1 nova_compute[225855]: 2026-01-20 15:28:10.025 225859 DEBUG nova.compute.manager [req-496fb9cd-060b-41a6-a2d4-4d8e4d0a388c req-0056ab06-d98d-423c-ae8c-39f40b4d0bd7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 770605b0-4686-4d97-9f82-7ed299482f50] Received event network-changed-52fb2315-9ec5-47a4-af4a-e0ed5e4caf21 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:28:10 compute-1 nova_compute[225855]: 2026-01-20 15:28:10.026 225859 DEBUG nova.compute.manager [req-496fb9cd-060b-41a6-a2d4-4d8e4d0a388c req-0056ab06-d98d-423c-ae8c-39f40b4d0bd7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 770605b0-4686-4d97-9f82-7ed299482f50] Refreshing instance network info cache due to event network-changed-52fb2315-9ec5-47a4-af4a-e0ed5e4caf21. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 15:28:10 compute-1 nova_compute[225855]: 2026-01-20 15:28:10.026 225859 DEBUG oslo_concurrency.lockutils [req-496fb9cd-060b-41a6-a2d4-4d8e4d0a388c req-0056ab06-d98d-423c-ae8c-39f40b4d0bd7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-770605b0-4686-4d97-9f82-7ed299482f50" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:28:10 compute-1 nova_compute[225855]: 2026-01-20 15:28:10.317 225859 DEBUG nova.network.neutron [None req-363dc4c9-30e3-49d5-9b76-6537f931887c 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 770605b0-4686-4d97-9f82-7ed299482f50] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 20 15:28:10 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:28:10 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:28:10 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:28:10.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:28:10 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:28:10 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:28:10 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:28:10.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:28:10 compute-1 sudo[315617]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:28:10 compute-1 sudo[315617]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:28:10 compute-1 sudo[315617]: pam_unix(sudo:session): session closed for user root
Jan 20 15:28:10 compute-1 sudo[315642]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:28:10 compute-1 sudo[315642]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:28:10 compute-1 sudo[315642]: pam_unix(sudo:session): session closed for user root
Jan 20 15:28:10 compute-1 sudo[315667]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:28:10 compute-1 sudo[315667]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:28:10 compute-1 sudo[315667]: pam_unix(sudo:session): session closed for user root
Jan 20 15:28:10 compute-1 nova_compute[225855]: 2026-01-20 15:28:10.636 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:28:10 compute-1 sudo[315692]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 20 15:28:10 compute-1 sudo[315692]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:28:10 compute-1 ceph-mon[81775]: pgmap v3179: 321 pgs: 321 active+clean; 140 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 7.6 KiB/s rd, 821 KiB/s wr, 9 op/s
Jan 20 15:28:10 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/1912177260' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:28:11 compute-1 nova_compute[225855]: 2026-01-20 15:28:11.095 225859 DEBUG nova.network.neutron [None req-363dc4c9-30e3-49d5-9b76-6537f931887c 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 770605b0-4686-4d97-9f82-7ed299482f50] Updating instance_info_cache with network_info: [{"id": "52fb2315-9ec5-47a4-af4a-e0ed5e4caf21", "address": "fa:16:3e:f9:08:60", "network": {"id": "f19fb67c-6bab-4253-851e-ede5bb26f589", "bridge": "br-int", "label": "tempest-network-smoke--2116756150", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3168f57421fb49bfb94b85daedd1fe7d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52fb2315-9e", "ovs_interfaceid": "52fb2315-9ec5-47a4-af4a-e0ed5e4caf21", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:28:11 compute-1 nova_compute[225855]: 2026-01-20 15:28:11.127 225859 DEBUG oslo_concurrency.lockutils [None req-363dc4c9-30e3-49d5-9b76-6537f931887c 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Releasing lock "refresh_cache-770605b0-4686-4d97-9f82-7ed299482f50" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:28:11 compute-1 nova_compute[225855]: 2026-01-20 15:28:11.128 225859 DEBUG nova.compute.manager [None req-363dc4c9-30e3-49d5-9b76-6537f931887c 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 770605b0-4686-4d97-9f82-7ed299482f50] Instance network_info: |[{"id": "52fb2315-9ec5-47a4-af4a-e0ed5e4caf21", "address": "fa:16:3e:f9:08:60", "network": {"id": "f19fb67c-6bab-4253-851e-ede5bb26f589", "bridge": "br-int", "label": "tempest-network-smoke--2116756150", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3168f57421fb49bfb94b85daedd1fe7d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52fb2315-9e", "ovs_interfaceid": "52fb2315-9ec5-47a4-af4a-e0ed5e4caf21", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 20 15:28:11 compute-1 nova_compute[225855]: 2026-01-20 15:28:11.128 225859 DEBUG oslo_concurrency.lockutils [req-496fb9cd-060b-41a6-a2d4-4d8e4d0a388c req-0056ab06-d98d-423c-ae8c-39f40b4d0bd7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-770605b0-4686-4d97-9f82-7ed299482f50" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:28:11 compute-1 nova_compute[225855]: 2026-01-20 15:28:11.129 225859 DEBUG nova.network.neutron [req-496fb9cd-060b-41a6-a2d4-4d8e4d0a388c req-0056ab06-d98d-423c-ae8c-39f40b4d0bd7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 770605b0-4686-4d97-9f82-7ed299482f50] Refreshing network info cache for port 52fb2315-9ec5-47a4-af4a-e0ed5e4caf21 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 15:28:11 compute-1 nova_compute[225855]: 2026-01-20 15:28:11.132 225859 DEBUG nova.virt.libvirt.driver [None req-363dc4c9-30e3-49d5-9b76-6537f931887c 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 770605b0-4686-4d97-9f82-7ed299482f50] Start _get_guest_xml network_info=[{"id": "52fb2315-9ec5-47a4-af4a-e0ed5e4caf21", "address": "fa:16:3e:f9:08:60", "network": {"id": "f19fb67c-6bab-4253-851e-ede5bb26f589", "bridge": "br-int", "label": "tempest-network-smoke--2116756150", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3168f57421fb49bfb94b85daedd1fe7d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52fb2315-9e", "ovs_interfaceid": "52fb2315-9ec5-47a4-af4a-e0ed5e4caf21", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'device_type': 'disk', 'encryption_options': None, 'size': 0, 'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'guest_format': None, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'image_id': 'a32b3e07-16d8-46fd-9a7b-c242c432fcf9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 20 15:28:11 compute-1 nova_compute[225855]: 2026-01-20 15:28:11.136 225859 WARNING nova.virt.libvirt.driver [None req-363dc4c9-30e3-49d5-9b76-6537f931887c 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 15:28:11 compute-1 sudo[315692]: pam_unix(sudo:session): session closed for user root
Jan 20 15:28:11 compute-1 nova_compute[225855]: 2026-01-20 15:28:11.149 225859 DEBUG nova.virt.libvirt.host [None req-363dc4c9-30e3-49d5-9b76-6537f931887c 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 20 15:28:11 compute-1 nova_compute[225855]: 2026-01-20 15:28:11.151 225859 DEBUG nova.virt.libvirt.host [None req-363dc4c9-30e3-49d5-9b76-6537f931887c 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 20 15:28:11 compute-1 nova_compute[225855]: 2026-01-20 15:28:11.156 225859 DEBUG nova.virt.libvirt.host [None req-363dc4c9-30e3-49d5-9b76-6537f931887c 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 20 15:28:11 compute-1 nova_compute[225855]: 2026-01-20 15:28:11.156 225859 DEBUG nova.virt.libvirt.host [None req-363dc4c9-30e3-49d5-9b76-6537f931887c 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 20 15:28:11 compute-1 nova_compute[225855]: 2026-01-20 15:28:11.157 225859 DEBUG nova.virt.libvirt.driver [None req-363dc4c9-30e3-49d5-9b76-6537f931887c 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 20 15:28:11 compute-1 nova_compute[225855]: 2026-01-20 15:28:11.157 225859 DEBUG nova.virt.hardware [None req-363dc4c9-30e3-49d5-9b76-6537f931887c 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-20T14:21:55Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='522deaab-a741-4dbb-932d-d8b13a211c33',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 20 15:28:11 compute-1 nova_compute[225855]: 2026-01-20 15:28:11.158 225859 DEBUG nova.virt.hardware [None req-363dc4c9-30e3-49d5-9b76-6537f931887c 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 20 15:28:11 compute-1 nova_compute[225855]: 2026-01-20 15:28:11.158 225859 DEBUG nova.virt.hardware [None req-363dc4c9-30e3-49d5-9b76-6537f931887c 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 20 15:28:11 compute-1 nova_compute[225855]: 2026-01-20 15:28:11.158 225859 DEBUG nova.virt.hardware [None req-363dc4c9-30e3-49d5-9b76-6537f931887c 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 20 15:28:11 compute-1 nova_compute[225855]: 2026-01-20 15:28:11.159 225859 DEBUG nova.virt.hardware [None req-363dc4c9-30e3-49d5-9b76-6537f931887c 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 20 15:28:11 compute-1 nova_compute[225855]: 2026-01-20 15:28:11.159 225859 DEBUG nova.virt.hardware [None req-363dc4c9-30e3-49d5-9b76-6537f931887c 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 20 15:28:11 compute-1 nova_compute[225855]: 2026-01-20 15:28:11.159 225859 DEBUG nova.virt.hardware [None req-363dc4c9-30e3-49d5-9b76-6537f931887c 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 20 15:28:11 compute-1 nova_compute[225855]: 2026-01-20 15:28:11.159 225859 DEBUG nova.virt.hardware [None req-363dc4c9-30e3-49d5-9b76-6537f931887c 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 20 15:28:11 compute-1 nova_compute[225855]: 2026-01-20 15:28:11.159 225859 DEBUG nova.virt.hardware [None req-363dc4c9-30e3-49d5-9b76-6537f931887c 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 20 15:28:11 compute-1 nova_compute[225855]: 2026-01-20 15:28:11.160 225859 DEBUG nova.virt.hardware [None req-363dc4c9-30e3-49d5-9b76-6537f931887c 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 20 15:28:11 compute-1 nova_compute[225855]: 2026-01-20 15:28:11.160 225859 DEBUG nova.virt.hardware [None req-363dc4c9-30e3-49d5-9b76-6537f931887c 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 20 15:28:11 compute-1 nova_compute[225855]: 2026-01-20 15:28:11.163 225859 DEBUG oslo_concurrency.processutils [None req-363dc4c9-30e3-49d5-9b76-6537f931887c 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:28:11 compute-1 sudo[315751]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:28:11 compute-1 sudo[315751]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:28:11 compute-1 sudo[315751]: pam_unix(sudo:session): session closed for user root
Jan 20 15:28:11 compute-1 nova_compute[225855]: 2026-01-20 15:28:11.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:28:11 compute-1 nova_compute[225855]: 2026-01-20 15:28:11.340 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 15:28:11 compute-1 nova_compute[225855]: 2026-01-20 15:28:11.340 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 20 15:28:11 compute-1 sudo[315795]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:28:11 compute-1 sudo[315795]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:28:11 compute-1 sudo[315795]: pam_unix(sudo:session): session closed for user root
Jan 20 15:28:11 compute-1 nova_compute[225855]: 2026-01-20 15:28:11.358 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] [instance: 770605b0-4686-4d97-9f82-7ed299482f50] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Jan 20 15:28:11 compute-1 nova_compute[225855]: 2026-01-20 15:28:11.359 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 20 15:28:11 compute-1 nova_compute[225855]: 2026-01-20 15:28:11.360 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:28:11 compute-1 nova_compute[225855]: 2026-01-20 15:28:11.379 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:28:11 compute-1 nova_compute[225855]: 2026-01-20 15:28:11.379 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:28:11 compute-1 nova_compute[225855]: 2026-01-20 15:28:11.380 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:28:11 compute-1 nova_compute[225855]: 2026-01-20 15:28:11.380 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 15:28:11 compute-1 nova_compute[225855]: 2026-01-20 15:28:11.380 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:28:11 compute-1 sudo[315820]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:28:11 compute-1 sudo[315820]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:28:11 compute-1 sudo[315820]: pam_unix(sudo:session): session closed for user root
Jan 20 15:28:11 compute-1 sudo[315846]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 list-networks
Jan 20 15:28:11 compute-1 sudo[315846]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:28:11 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 15:28:11 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/864390893' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:28:11 compute-1 nova_compute[225855]: 2026-01-20 15:28:11.619 225859 DEBUG oslo_concurrency.processutils [None req-363dc4c9-30e3-49d5-9b76-6537f931887c 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:28:11 compute-1 nova_compute[225855]: 2026-01-20 15:28:11.642 225859 DEBUG nova.storage.rbd_utils [None req-363dc4c9-30e3-49d5-9b76-6537f931887c 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] rbd image 770605b0-4686-4d97-9f82-7ed299482f50_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:28:11 compute-1 nova_compute[225855]: 2026-01-20 15:28:11.648 225859 DEBUG oslo_concurrency.processutils [None req-363dc4c9-30e3-49d5-9b76-6537f931887c 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:28:11 compute-1 sudo[315846]: pam_unix(sudo:session): session closed for user root
Jan 20 15:28:11 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:28:11 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3409456068' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:28:11 compute-1 nova_compute[225855]: 2026-01-20 15:28:11.880 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.500s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:28:11 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/864390893' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:28:12 compute-1 nova_compute[225855]: 2026-01-20 15:28:12.067 225859 WARNING nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 15:28:12 compute-1 nova_compute[225855]: 2026-01-20 15:28:12.069 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4232MB free_disk=20.97887420654297GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 15:28:12 compute-1 nova_compute[225855]: 2026-01-20 15:28:12.069 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:28:12 compute-1 nova_compute[225855]: 2026-01-20 15:28:12.070 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:28:12 compute-1 nova_compute[225855]: 2026-01-20 15:28:12.160 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Instance 770605b0-4686-4d97-9f82-7ed299482f50 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 20 15:28:12 compute-1 nova_compute[225855]: 2026-01-20 15:28:12.161 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 15:28:12 compute-1 nova_compute[225855]: 2026-01-20 15:28:12.161 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 15:28:12 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 15:28:12 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1982473906' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:28:12 compute-1 nova_compute[225855]: 2026-01-20 15:28:12.206 225859 DEBUG oslo_concurrency.processutils [None req-363dc4c9-30e3-49d5-9b76-6537f931887c 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.558s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:28:12 compute-1 nova_compute[225855]: 2026-01-20 15:28:12.209 225859 DEBUG nova.virt.libvirt.vif [None req-363dc4c9-30e3-49d5-9b76-6537f931887c 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T15:28:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-774862138',display_name='tempest-TestNetworkBasicOps-server-774862138',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-774862138',id=204,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHV4gRYgjRGDnaHxkSMJQMAVAwkk8jLZAdWo/fxENkiAFcbr1CXMhNYzs2hTTM5NoLjR4u2qMt5dKH7C9b4LHMK/3O49DJNSLMAUfk3HkTe/ulPxPFwHn3vfQCPGwBrM5A==',key_name='tempest-TestNetworkBasicOps-1250991011',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3168f57421fb49bfb94b85daedd1fe7d',ramdisk_id='',reservation_id='r-u6z01i8j',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-807695970',owner_user_name='tempest-TestNetworkBasicOps-807695970-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T15:28:08Z,user_data=None,user_id='5338aa65dc0e4326a66ce79053787f14',uuid=770605b0-4686-4d97-9f82-7ed299482f50,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "52fb2315-9ec5-47a4-af4a-e0ed5e4caf21", "address": "fa:16:3e:f9:08:60", "network": {"id": "f19fb67c-6bab-4253-851e-ede5bb26f589", "bridge": "br-int", "label": "tempest-network-smoke--2116756150", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3168f57421fb49bfb94b85daedd1fe7d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52fb2315-9e", "ovs_interfaceid": "52fb2315-9ec5-47a4-af4a-e0ed5e4caf21", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 20 15:28:12 compute-1 nova_compute[225855]: 2026-01-20 15:28:12.209 225859 DEBUG nova.network.os_vif_util [None req-363dc4c9-30e3-49d5-9b76-6537f931887c 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Converting VIF {"id": "52fb2315-9ec5-47a4-af4a-e0ed5e4caf21", "address": "fa:16:3e:f9:08:60", "network": {"id": "f19fb67c-6bab-4253-851e-ede5bb26f589", "bridge": "br-int", "label": "tempest-network-smoke--2116756150", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3168f57421fb49bfb94b85daedd1fe7d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52fb2315-9e", "ovs_interfaceid": "52fb2315-9ec5-47a4-af4a-e0ed5e4caf21", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 15:28:12 compute-1 nova_compute[225855]: 2026-01-20 15:28:12.210 225859 DEBUG nova.network.os_vif_util [None req-363dc4c9-30e3-49d5-9b76-6537f931887c 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f9:08:60,bridge_name='br-int',has_traffic_filtering=True,id=52fb2315-9ec5-47a4-af4a-e0ed5e4caf21,network=Network(f19fb67c-6bab-4253-851e-ede5bb26f589),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap52fb2315-9e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 15:28:12 compute-1 nova_compute[225855]: 2026-01-20 15:28:12.212 225859 DEBUG nova.objects.instance [None req-363dc4c9-30e3-49d5-9b76-6537f931887c 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Lazy-loading 'pci_devices' on Instance uuid 770605b0-4686-4d97-9f82-7ed299482f50 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:28:12 compute-1 nova_compute[225855]: 2026-01-20 15:28:12.214 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:28:12 compute-1 nova_compute[225855]: 2026-01-20 15:28:12.259 225859 DEBUG nova.virt.libvirt.driver [None req-363dc4c9-30e3-49d5-9b76-6537f931887c 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 770605b0-4686-4d97-9f82-7ed299482f50] End _get_guest_xml xml=<domain type="kvm">
Jan 20 15:28:12 compute-1 nova_compute[225855]:   <uuid>770605b0-4686-4d97-9f82-7ed299482f50</uuid>
Jan 20 15:28:12 compute-1 nova_compute[225855]:   <name>instance-000000cc</name>
Jan 20 15:28:12 compute-1 nova_compute[225855]:   <memory>131072</memory>
Jan 20 15:28:12 compute-1 nova_compute[225855]:   <vcpu>1</vcpu>
Jan 20 15:28:12 compute-1 nova_compute[225855]:   <metadata>
Jan 20 15:28:12 compute-1 nova_compute[225855]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 15:28:12 compute-1 nova_compute[225855]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 15:28:12 compute-1 nova_compute[225855]:       <nova:name>tempest-TestNetworkBasicOps-server-774862138</nova:name>
Jan 20 15:28:12 compute-1 nova_compute[225855]:       <nova:creationTime>2026-01-20 15:28:11</nova:creationTime>
Jan 20 15:28:12 compute-1 nova_compute[225855]:       <nova:flavor name="m1.nano">
Jan 20 15:28:12 compute-1 nova_compute[225855]:         <nova:memory>128</nova:memory>
Jan 20 15:28:12 compute-1 nova_compute[225855]:         <nova:disk>1</nova:disk>
Jan 20 15:28:12 compute-1 nova_compute[225855]:         <nova:swap>0</nova:swap>
Jan 20 15:28:12 compute-1 nova_compute[225855]:         <nova:ephemeral>0</nova:ephemeral>
Jan 20 15:28:12 compute-1 nova_compute[225855]:         <nova:vcpus>1</nova:vcpus>
Jan 20 15:28:12 compute-1 nova_compute[225855]:       </nova:flavor>
Jan 20 15:28:12 compute-1 nova_compute[225855]:       <nova:owner>
Jan 20 15:28:12 compute-1 nova_compute[225855]:         <nova:user uuid="5338aa65dc0e4326a66ce79053787f14">tempest-TestNetworkBasicOps-807695970-project-member</nova:user>
Jan 20 15:28:12 compute-1 nova_compute[225855]:         <nova:project uuid="3168f57421fb49bfb94b85daedd1fe7d">tempest-TestNetworkBasicOps-807695970</nova:project>
Jan 20 15:28:12 compute-1 nova_compute[225855]:       </nova:owner>
Jan 20 15:28:12 compute-1 nova_compute[225855]:       <nova:root type="image" uuid="a32b3e07-16d8-46fd-9a7b-c242c432fcf9"/>
Jan 20 15:28:12 compute-1 nova_compute[225855]:       <nova:ports>
Jan 20 15:28:12 compute-1 nova_compute[225855]:         <nova:port uuid="52fb2315-9ec5-47a4-af4a-e0ed5e4caf21">
Jan 20 15:28:12 compute-1 nova_compute[225855]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Jan 20 15:28:12 compute-1 nova_compute[225855]:         </nova:port>
Jan 20 15:28:12 compute-1 nova_compute[225855]:       </nova:ports>
Jan 20 15:28:12 compute-1 nova_compute[225855]:     </nova:instance>
Jan 20 15:28:12 compute-1 nova_compute[225855]:   </metadata>
Jan 20 15:28:12 compute-1 nova_compute[225855]:   <sysinfo type="smbios">
Jan 20 15:28:12 compute-1 nova_compute[225855]:     <system>
Jan 20 15:28:12 compute-1 nova_compute[225855]:       <entry name="manufacturer">RDO</entry>
Jan 20 15:28:12 compute-1 nova_compute[225855]:       <entry name="product">OpenStack Compute</entry>
Jan 20 15:28:12 compute-1 nova_compute[225855]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 15:28:12 compute-1 nova_compute[225855]:       <entry name="serial">770605b0-4686-4d97-9f82-7ed299482f50</entry>
Jan 20 15:28:12 compute-1 nova_compute[225855]:       <entry name="uuid">770605b0-4686-4d97-9f82-7ed299482f50</entry>
Jan 20 15:28:12 compute-1 nova_compute[225855]:       <entry name="family">Virtual Machine</entry>
Jan 20 15:28:12 compute-1 nova_compute[225855]:     </system>
Jan 20 15:28:12 compute-1 nova_compute[225855]:   </sysinfo>
Jan 20 15:28:12 compute-1 nova_compute[225855]:   <os>
Jan 20 15:28:12 compute-1 nova_compute[225855]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 20 15:28:12 compute-1 nova_compute[225855]:     <boot dev="hd"/>
Jan 20 15:28:12 compute-1 nova_compute[225855]:     <smbios mode="sysinfo"/>
Jan 20 15:28:12 compute-1 nova_compute[225855]:   </os>
Jan 20 15:28:12 compute-1 nova_compute[225855]:   <features>
Jan 20 15:28:12 compute-1 nova_compute[225855]:     <acpi/>
Jan 20 15:28:12 compute-1 nova_compute[225855]:     <apic/>
Jan 20 15:28:12 compute-1 nova_compute[225855]:     <vmcoreinfo/>
Jan 20 15:28:12 compute-1 nova_compute[225855]:   </features>
Jan 20 15:28:12 compute-1 nova_compute[225855]:   <clock offset="utc">
Jan 20 15:28:12 compute-1 nova_compute[225855]:     <timer name="pit" tickpolicy="delay"/>
Jan 20 15:28:12 compute-1 nova_compute[225855]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 20 15:28:12 compute-1 nova_compute[225855]:     <timer name="hpet" present="no"/>
Jan 20 15:28:12 compute-1 nova_compute[225855]:   </clock>
Jan 20 15:28:12 compute-1 nova_compute[225855]:   <cpu mode="custom" match="exact">
Jan 20 15:28:12 compute-1 nova_compute[225855]:     <model>Nehalem</model>
Jan 20 15:28:12 compute-1 nova_compute[225855]:     <topology sockets="1" cores="1" threads="1"/>
Jan 20 15:28:12 compute-1 nova_compute[225855]:   </cpu>
Jan 20 15:28:12 compute-1 nova_compute[225855]:   <devices>
Jan 20 15:28:12 compute-1 nova_compute[225855]:     <disk type="network" device="disk">
Jan 20 15:28:12 compute-1 nova_compute[225855]:       <driver type="raw" cache="none"/>
Jan 20 15:28:12 compute-1 nova_compute[225855]:       <source protocol="rbd" name="vms/770605b0-4686-4d97-9f82-7ed299482f50_disk">
Jan 20 15:28:12 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 15:28:12 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 15:28:12 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 15:28:12 compute-1 nova_compute[225855]:       </source>
Jan 20 15:28:12 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 15:28:12 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 15:28:12 compute-1 nova_compute[225855]:       </auth>
Jan 20 15:28:12 compute-1 nova_compute[225855]:       <target dev="vda" bus="virtio"/>
Jan 20 15:28:12 compute-1 nova_compute[225855]:     </disk>
Jan 20 15:28:12 compute-1 nova_compute[225855]:     <disk type="network" device="cdrom">
Jan 20 15:28:12 compute-1 nova_compute[225855]:       <driver type="raw" cache="none"/>
Jan 20 15:28:12 compute-1 nova_compute[225855]:       <source protocol="rbd" name="vms/770605b0-4686-4d97-9f82-7ed299482f50_disk.config">
Jan 20 15:28:12 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 15:28:12 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 15:28:12 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 15:28:12 compute-1 nova_compute[225855]:       </source>
Jan 20 15:28:12 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 15:28:12 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 15:28:12 compute-1 nova_compute[225855]:       </auth>
Jan 20 15:28:12 compute-1 nova_compute[225855]:       <target dev="sda" bus="sata"/>
Jan 20 15:28:12 compute-1 nova_compute[225855]:     </disk>
Jan 20 15:28:12 compute-1 nova_compute[225855]:     <interface type="ethernet">
Jan 20 15:28:12 compute-1 nova_compute[225855]:       <mac address="fa:16:3e:f9:08:60"/>
Jan 20 15:28:12 compute-1 nova_compute[225855]:       <model type="virtio"/>
Jan 20 15:28:12 compute-1 nova_compute[225855]:       <driver name="vhost" rx_queue_size="512"/>
Jan 20 15:28:12 compute-1 nova_compute[225855]:       <mtu size="1442"/>
Jan 20 15:28:12 compute-1 nova_compute[225855]:       <target dev="tap52fb2315-9e"/>
Jan 20 15:28:12 compute-1 nova_compute[225855]:     </interface>
Jan 20 15:28:12 compute-1 nova_compute[225855]:     <serial type="pty">
Jan 20 15:28:12 compute-1 nova_compute[225855]:       <log file="/var/lib/nova/instances/770605b0-4686-4d97-9f82-7ed299482f50/console.log" append="off"/>
Jan 20 15:28:12 compute-1 nova_compute[225855]:     </serial>
Jan 20 15:28:12 compute-1 nova_compute[225855]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 20 15:28:12 compute-1 nova_compute[225855]:     <video>
Jan 20 15:28:12 compute-1 nova_compute[225855]:       <model type="virtio"/>
Jan 20 15:28:12 compute-1 nova_compute[225855]:     </video>
Jan 20 15:28:12 compute-1 nova_compute[225855]:     <input type="tablet" bus="usb"/>
Jan 20 15:28:12 compute-1 nova_compute[225855]:     <rng model="virtio">
Jan 20 15:28:12 compute-1 nova_compute[225855]:       <backend model="random">/dev/urandom</backend>
Jan 20 15:28:12 compute-1 nova_compute[225855]:     </rng>
Jan 20 15:28:12 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root"/>
Jan 20 15:28:12 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:28:12 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:28:12 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:28:12 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:28:12 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:28:12 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:28:12 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:28:12 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:28:12 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:28:12 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:28:12 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:28:12 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:28:12 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:28:12 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:28:12 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:28:12 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:28:12 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:28:12 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:28:12 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:28:12 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:28:12 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:28:12 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:28:12 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:28:12 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:28:12 compute-1 nova_compute[225855]:     <controller type="usb" index="0"/>
Jan 20 15:28:12 compute-1 nova_compute[225855]:     <memballoon model="virtio">
Jan 20 15:28:12 compute-1 nova_compute[225855]:       <stats period="10"/>
Jan 20 15:28:12 compute-1 nova_compute[225855]:     </memballoon>
Jan 20 15:28:12 compute-1 nova_compute[225855]:   </devices>
Jan 20 15:28:12 compute-1 nova_compute[225855]: </domain>
Jan 20 15:28:12 compute-1 nova_compute[225855]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 20 15:28:12 compute-1 nova_compute[225855]: 2026-01-20 15:28:12.261 225859 DEBUG nova.compute.manager [None req-363dc4c9-30e3-49d5-9b76-6537f931887c 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 770605b0-4686-4d97-9f82-7ed299482f50] Preparing to wait for external event network-vif-plugged-52fb2315-9ec5-47a4-af4a-e0ed5e4caf21 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 20 15:28:12 compute-1 nova_compute[225855]: 2026-01-20 15:28:12.262 225859 DEBUG oslo_concurrency.lockutils [None req-363dc4c9-30e3-49d5-9b76-6537f931887c 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Acquiring lock "770605b0-4686-4d97-9f82-7ed299482f50-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:28:12 compute-1 nova_compute[225855]: 2026-01-20 15:28:12.262 225859 DEBUG oslo_concurrency.lockutils [None req-363dc4c9-30e3-49d5-9b76-6537f931887c 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Lock "770605b0-4686-4d97-9f82-7ed299482f50-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:28:12 compute-1 nova_compute[225855]: 2026-01-20 15:28:12.262 225859 DEBUG oslo_concurrency.lockutils [None req-363dc4c9-30e3-49d5-9b76-6537f931887c 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Lock "770605b0-4686-4d97-9f82-7ed299482f50-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:28:12 compute-1 nova_compute[225855]: 2026-01-20 15:28:12.263 225859 DEBUG nova.virt.libvirt.vif [None req-363dc4c9-30e3-49d5-9b76-6537f931887c 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T15:28:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-774862138',display_name='tempest-TestNetworkBasicOps-server-774862138',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-774862138',id=204,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHV4gRYgjRGDnaHxkSMJQMAVAwkk8jLZAdWo/fxENkiAFcbr1CXMhNYzs2hTTM5NoLjR4u2qMt5dKH7C9b4LHMK/3O49DJNSLMAUfk3HkTe/ulPxPFwHn3vfQCPGwBrM5A==',key_name='tempest-TestNetworkBasicOps-1250991011',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3168f57421fb49bfb94b85daedd1fe7d',ramdisk_id='',reservation_id='r-u6z01i8j',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-807695970',owner_user_name='tempest-TestNetworkBasicOps-807695970-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T15:28:08Z,user_data=None,user_id='5338aa65dc0e4326a66ce79053787f14',uuid=770605b0-4686-4d97-9f82-7ed299482f50,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "52fb2315-9ec5-47a4-af4a-e0ed5e4caf21", "address": "fa:16:3e:f9:08:60", "network": {"id": "f19fb67c-6bab-4253-851e-ede5bb26f589", "bridge": "br-int", "label": "tempest-network-smoke--2116756150", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "3168f57421fb49bfb94b85daedd1fe7d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52fb2315-9e", "ovs_interfaceid": "52fb2315-9ec5-47a4-af4a-e0ed5e4caf21", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 20 15:28:12 compute-1 nova_compute[225855]: 2026-01-20 15:28:12.264 225859 DEBUG nova.network.os_vif_util [None req-363dc4c9-30e3-49d5-9b76-6537f931887c 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Converting VIF {"id": "52fb2315-9ec5-47a4-af4a-e0ed5e4caf21", "address": "fa:16:3e:f9:08:60", "network": {"id": "f19fb67c-6bab-4253-851e-ede5bb26f589", "bridge": "br-int", "label": "tempest-network-smoke--2116756150", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3168f57421fb49bfb94b85daedd1fe7d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52fb2315-9e", "ovs_interfaceid": "52fb2315-9ec5-47a4-af4a-e0ed5e4caf21", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 15:28:12 compute-1 nova_compute[225855]: 2026-01-20 15:28:12.265 225859 DEBUG nova.network.os_vif_util [None req-363dc4c9-30e3-49d5-9b76-6537f931887c 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f9:08:60,bridge_name='br-int',has_traffic_filtering=True,id=52fb2315-9ec5-47a4-af4a-e0ed5e4caf21,network=Network(f19fb67c-6bab-4253-851e-ede5bb26f589),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap52fb2315-9e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 15:28:12 compute-1 nova_compute[225855]: 2026-01-20 15:28:12.265 225859 DEBUG os_vif [None req-363dc4c9-30e3-49d5-9b76-6537f931887c 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f9:08:60,bridge_name='br-int',has_traffic_filtering=True,id=52fb2315-9ec5-47a4-af4a-e0ed5e4caf21,network=Network(f19fb67c-6bab-4253-851e-ede5bb26f589),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap52fb2315-9e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 20 15:28:12 compute-1 nova_compute[225855]: 2026-01-20 15:28:12.266 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:28:12 compute-1 nova_compute[225855]: 2026-01-20 15:28:12.266 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:28:12 compute-1 nova_compute[225855]: 2026-01-20 15:28:12.267 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 15:28:12 compute-1 nova_compute[225855]: 2026-01-20 15:28:12.271 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:28:12 compute-1 nova_compute[225855]: 2026-01-20 15:28:12.271 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap52fb2315-9e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:28:12 compute-1 nova_compute[225855]: 2026-01-20 15:28:12.272 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap52fb2315-9e, col_values=(('external_ids', {'iface-id': '52fb2315-9ec5-47a4-af4a-e0ed5e4caf21', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f9:08:60', 'vm-uuid': '770605b0-4686-4d97-9f82-7ed299482f50'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:28:12 compute-1 nova_compute[225855]: 2026-01-20 15:28:12.273 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:28:12 compute-1 NetworkManager[49104]: <info>  [1768922892.2747] manager: (tap52fb2315-9e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/388)
Jan 20 15:28:12 compute-1 nova_compute[225855]: 2026-01-20 15:28:12.276 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 15:28:12 compute-1 nova_compute[225855]: 2026-01-20 15:28:12.280 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:28:12 compute-1 nova_compute[225855]: 2026-01-20 15:28:12.281 225859 INFO os_vif [None req-363dc4c9-30e3-49d5-9b76-6537f931887c 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f9:08:60,bridge_name='br-int',has_traffic_filtering=True,id=52fb2315-9ec5-47a4-af4a-e0ed5e4caf21,network=Network(f19fb67c-6bab-4253-851e-ede5bb26f589),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap52fb2315-9e')
Jan 20 15:28:12 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:28:12 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:28:12 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:28:12.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:28:12 compute-1 nova_compute[225855]: 2026-01-20 15:28:12.325 225859 DEBUG nova.virt.libvirt.driver [None req-363dc4c9-30e3-49d5-9b76-6537f931887c 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 15:28:12 compute-1 nova_compute[225855]: 2026-01-20 15:28:12.325 225859 DEBUG nova.virt.libvirt.driver [None req-363dc4c9-30e3-49d5-9b76-6537f931887c 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 15:28:12 compute-1 nova_compute[225855]: 2026-01-20 15:28:12.326 225859 DEBUG nova.virt.libvirt.driver [None req-363dc4c9-30e3-49d5-9b76-6537f931887c 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] No VIF found with MAC fa:16:3e:f9:08:60, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 20 15:28:12 compute-1 nova_compute[225855]: 2026-01-20 15:28:12.326 225859 INFO nova.virt.libvirt.driver [None req-363dc4c9-30e3-49d5-9b76-6537f931887c 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 770605b0-4686-4d97-9f82-7ed299482f50] Using config drive
Jan 20 15:28:12 compute-1 nova_compute[225855]: 2026-01-20 15:28:12.355 225859 DEBUG nova.storage.rbd_utils [None req-363dc4c9-30e3-49d5-9b76-6537f931887c 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] rbd image 770605b0-4686-4d97-9f82-7ed299482f50_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:28:12 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:28:12 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:28:12 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:28:12.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:28:12 compute-1 nova_compute[225855]: 2026-01-20 15:28:12.551 225859 DEBUG nova.network.neutron [req-496fb9cd-060b-41a6-a2d4-4d8e4d0a388c req-0056ab06-d98d-423c-ae8c-39f40b4d0bd7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 770605b0-4686-4d97-9f82-7ed299482f50] Updated VIF entry in instance network info cache for port 52fb2315-9ec5-47a4-af4a-e0ed5e4caf21. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 15:28:12 compute-1 nova_compute[225855]: 2026-01-20 15:28:12.551 225859 DEBUG nova.network.neutron [req-496fb9cd-060b-41a6-a2d4-4d8e4d0a388c req-0056ab06-d98d-423c-ae8c-39f40b4d0bd7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 770605b0-4686-4d97-9f82-7ed299482f50] Updating instance_info_cache with network_info: [{"id": "52fb2315-9ec5-47a4-af4a-e0ed5e4caf21", "address": "fa:16:3e:f9:08:60", "network": {"id": "f19fb67c-6bab-4253-851e-ede5bb26f589", "bridge": "br-int", "label": "tempest-network-smoke--2116756150", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3168f57421fb49bfb94b85daedd1fe7d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52fb2315-9e", "ovs_interfaceid": "52fb2315-9ec5-47a4-af4a-e0ed5e4caf21", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:28:12 compute-1 nova_compute[225855]: 2026-01-20 15:28:12.573 225859 DEBUG oslo_concurrency.lockutils [req-496fb9cd-060b-41a6-a2d4-4d8e4d0a388c req-0056ab06-d98d-423c-ae8c-39f40b4d0bd7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-770605b0-4686-4d97-9f82-7ed299482f50" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:28:12 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:28:12 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1083406231' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:28:12 compute-1 nova_compute[225855]: 2026-01-20 15:28:12.681 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:28:12 compute-1 nova_compute[225855]: 2026-01-20 15:28:12.686 225859 DEBUG nova.compute.provider_tree [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:28:12 compute-1 nova_compute[225855]: 2026-01-20 15:28:12.698 225859 INFO nova.virt.libvirt.driver [None req-363dc4c9-30e3-49d5-9b76-6537f931887c 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 770605b0-4686-4d97-9f82-7ed299482f50] Creating config drive at /var/lib/nova/instances/770605b0-4686-4d97-9f82-7ed299482f50/disk.config
Jan 20 15:28:12 compute-1 nova_compute[225855]: 2026-01-20 15:28:12.703 225859 DEBUG oslo_concurrency.processutils [None req-363dc4c9-30e3-49d5-9b76-6537f931887c 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/770605b0-4686-4d97-9f82-7ed299482f50/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp4twi5ho1 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:28:12 compute-1 nova_compute[225855]: 2026-01-20 15:28:12.729 225859 DEBUG nova.scheduler.client.report [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:28:12 compute-1 nova_compute[225855]: 2026-01-20 15:28:12.765 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 15:28:12 compute-1 nova_compute[225855]: 2026-01-20 15:28:12.766 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.696s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:28:12 compute-1 nova_compute[225855]: 2026-01-20 15:28:12.834 225859 DEBUG oslo_concurrency.processutils [None req-363dc4c9-30e3-49d5-9b76-6537f931887c 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/770605b0-4686-4d97-9f82-7ed299482f50/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp4twi5ho1" returned: 0 in 0.131s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:28:12 compute-1 nova_compute[225855]: 2026-01-20 15:28:12.861 225859 DEBUG nova.storage.rbd_utils [None req-363dc4c9-30e3-49d5-9b76-6537f931887c 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] rbd image 770605b0-4686-4d97-9f82-7ed299482f50_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:28:12 compute-1 nova_compute[225855]: 2026-01-20 15:28:12.864 225859 DEBUG oslo_concurrency.processutils [None req-363dc4c9-30e3-49d5-9b76-6537f931887c 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/770605b0-4686-4d97-9f82-7ed299482f50/disk.config 770605b0-4686-4d97-9f82-7ed299482f50_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:28:12 compute-1 ceph-mon[81775]: pgmap v3180: 321 pgs: 321 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 15 KiB/s rd, 1.8 MiB/s wr, 24 op/s
Jan 20 15:28:12 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/3409456068' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:28:12 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:28:12 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:28:12 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:28:12 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:28:12 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:28:12 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 15:28:12 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:28:12 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 15:28:12 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 15:28:12 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:28:12 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/1982473906' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:28:12 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/323984659' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:28:12 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/1083406231' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:28:13 compute-1 nova_compute[225855]: 2026-01-20 15:28:13.032 225859 DEBUG oslo_concurrency.processutils [None req-363dc4c9-30e3-49d5-9b76-6537f931887c 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/770605b0-4686-4d97-9f82-7ed299482f50/disk.config 770605b0-4686-4d97-9f82-7ed299482f50_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.168s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:28:13 compute-1 nova_compute[225855]: 2026-01-20 15:28:13.033 225859 INFO nova.virt.libvirt.driver [None req-363dc4c9-30e3-49d5-9b76-6537f931887c 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 770605b0-4686-4d97-9f82-7ed299482f50] Deleting local config drive /var/lib/nova/instances/770605b0-4686-4d97-9f82-7ed299482f50/disk.config because it was imported into RBD.
Jan 20 15:28:13 compute-1 kernel: tap52fb2315-9e: entered promiscuous mode
Jan 20 15:28:13 compute-1 NetworkManager[49104]: <info>  [1768922893.0887] manager: (tap52fb2315-9e): new Tun device (/org/freedesktop/NetworkManager/Devices/389)
Jan 20 15:28:13 compute-1 nova_compute[225855]: 2026-01-20 15:28:13.087 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:28:13 compute-1 ovn_controller[130490]: 2026-01-20T15:28:13Z|00923|binding|INFO|Claiming lport 52fb2315-9ec5-47a4-af4a-e0ed5e4caf21 for this chassis.
Jan 20 15:28:13 compute-1 ovn_controller[130490]: 2026-01-20T15:28:13Z|00924|binding|INFO|52fb2315-9ec5-47a4-af4a-e0ed5e4caf21: Claiming fa:16:3e:f9:08:60 10.100.0.14
Jan 20 15:28:13 compute-1 nova_compute[225855]: 2026-01-20 15:28:13.093 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:28:13 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:28:13.102 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f9:08:60 10.100.0.14'], port_security=['fa:16:3e:f9:08:60 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '770605b0-4686-4d97-9f82-7ed299482f50', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f19fb67c-6bab-4253-851e-ede5bb26f589', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3168f57421fb49bfb94b85daedd1fe7d', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b57a9b16-bf8b-47b4-a097-c9a9044c2225', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4c4c5a39-3036-4b6a-873b-b8673f881902, chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], logical_port=52fb2315-9ec5-47a4-af4a-e0ed5e4caf21) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:28:13 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:28:13.103 140354 INFO neutron.agent.ovn.metadata.agent [-] Port 52fb2315-9ec5-47a4-af4a-e0ed5e4caf21 in datapath f19fb67c-6bab-4253-851e-ede5bb26f589 bound to our chassis
Jan 20 15:28:13 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:28:13.104 140354 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f19fb67c-6bab-4253-851e-ede5bb26f589
Jan 20 15:28:13 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:28:13.120 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[72025e0a-2d8e-4106-8603-ad594130e4e6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:28:13 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:28:13.121 140354 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapf19fb67c-61 in ovnmeta-f19fb67c-6bab-4253-851e-ede5bb26f589 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 20 15:28:13 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:28:13.124 229707 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapf19fb67c-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 20 15:28:13 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:28:13.124 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[f9e5fa97-ed40-4b7f-8ebf-b3112dcc34bf]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:28:13 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:28:13.125 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[c21cb94b-ec17-49c2-828f-870cea99727d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:28:13 compute-1 systemd-udevd[316048]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 15:28:13 compute-1 systemd-machined[194361]: New machine qemu-108-instance-000000cc.
Jan 20 15:28:13 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:28:13.145 140466 DEBUG oslo.privsep.daemon [-] privsep: reply[ab72129c-a5f2-4cb2-80b0-b6e5fb1527f8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:28:13 compute-1 NetworkManager[49104]: <info>  [1768922893.1493] device (tap52fb2315-9e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 20 15:28:13 compute-1 NetworkManager[49104]: <info>  [1768922893.1500] device (tap52fb2315-9e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 20 15:28:13 compute-1 nova_compute[225855]: 2026-01-20 15:28:13.152 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:28:13 compute-1 systemd[1]: Started Virtual Machine qemu-108-instance-000000cc.
Jan 20 15:28:13 compute-1 ovn_controller[130490]: 2026-01-20T15:28:13Z|00925|binding|INFO|Setting lport 52fb2315-9ec5-47a4-af4a-e0ed5e4caf21 ovn-installed in OVS
Jan 20 15:28:13 compute-1 ovn_controller[130490]: 2026-01-20T15:28:13Z|00926|binding|INFO|Setting lport 52fb2315-9ec5-47a4-af4a-e0ed5e4caf21 up in Southbound
Jan 20 15:28:13 compute-1 nova_compute[225855]: 2026-01-20 15:28:13.160 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:28:13 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:28:13.160 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[7664a407-5836-473a-94a6-5daa99433b70]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:28:13 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:28:13.192 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[4083ea9e-8ae7-4350-a40e-4353b71863d9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:28:13 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:28:13.198 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[7f896c0f-9970-4131-813c-dc1900e68660]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:28:13 compute-1 NetworkManager[49104]: <info>  [1768922893.1992] manager: (tapf19fb67c-60): new Veth device (/org/freedesktop/NetworkManager/Devices/390)
Jan 20 15:28:13 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:28:13.236 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[e20d5f3d-cffe-4160-8139-f9df010718da]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:28:13 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:28:13.240 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[9e88f065-cfbb-4728-a6f3-e1a68d636850]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:28:13 compute-1 NetworkManager[49104]: <info>  [1768922893.2649] device (tapf19fb67c-60): carrier: link connected
Jan 20 15:28:13 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:28:13.269 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[8a39ac85-17d3-4a0b-8f9f-222b34a96edd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:28:13 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:28:13.286 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[c010b914-a23c-43fa-b562-70a25225290c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf19fb67c-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5d:c0:c7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 263], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 793219, 'reachable_time': 44098, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 316081, 'error': None, 'target': 'ovnmeta-f19fb67c-6bab-4253-851e-ede5bb26f589', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:28:13 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:28:13.302 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[45cfcf99-6480-49a3-9586-d3c728795db3]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe5d:c0c7'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 793219, 'tstamp': 793219}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 316082, 'error': None, 'target': 'ovnmeta-f19fb67c-6bab-4253-851e-ede5bb26f589', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:28:13 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:28:13.322 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[e7f37416-d0da-4381-ae82-10d35f7f02b0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf19fb67c-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5d:c0:c7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 263], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 793219, 'reachable_time': 44098, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 316083, 'error': None, 'target': 'ovnmeta-f19fb67c-6bab-4253-851e-ede5bb26f589', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:28:13 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:28:13.358 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[620d2215-ec19-4868-9eaa-5d46c9e42a9a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:28:13 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:28:13.417 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[36bf33b2-52b9-4489-9a81-01d5b9b14dcf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:28:13 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:28:13.419 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf19fb67c-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:28:13 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:28:13.419 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 15:28:13 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:28:13.420 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf19fb67c-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:28:13 compute-1 nova_compute[225855]: 2026-01-20 15:28:13.422 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:28:13 compute-1 kernel: tapf19fb67c-60: entered promiscuous mode
Jan 20 15:28:13 compute-1 NetworkManager[49104]: <info>  [1768922893.4227] manager: (tapf19fb67c-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/391)
Jan 20 15:28:13 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:28:13.424 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf19fb67c-60, col_values=(('external_ids', {'iface-id': 'd4b30c77-23b1-48b0-a6c8-4cf53f9840de'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:28:13 compute-1 ovn_controller[130490]: 2026-01-20T15:28:13Z|00927|binding|INFO|Releasing lport d4b30c77-23b1-48b0-a6c8-4cf53f9840de from this chassis (sb_readonly=0)
Jan 20 15:28:13 compute-1 nova_compute[225855]: 2026-01-20 15:28:13.426 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:28:13 compute-1 nova_compute[225855]: 2026-01-20 15:28:13.438 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:28:13 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:28:13.440 140354 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/f19fb67c-6bab-4253-851e-ede5bb26f589.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/f19fb67c-6bab-4253-851e-ede5bb26f589.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 20 15:28:13 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:28:13.441 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[f20e27c5-435d-4e1f-83dd-387ffd5e80b5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:28:13 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:28:13.441 140354 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 20 15:28:13 compute-1 ovn_metadata_agent[140349]: global
Jan 20 15:28:13 compute-1 ovn_metadata_agent[140349]:     log         /dev/log local0 debug
Jan 20 15:28:13 compute-1 ovn_metadata_agent[140349]:     log-tag     haproxy-metadata-proxy-f19fb67c-6bab-4253-851e-ede5bb26f589
Jan 20 15:28:13 compute-1 ovn_metadata_agent[140349]:     user        root
Jan 20 15:28:13 compute-1 ovn_metadata_agent[140349]:     group       root
Jan 20 15:28:13 compute-1 ovn_metadata_agent[140349]:     maxconn     1024
Jan 20 15:28:13 compute-1 ovn_metadata_agent[140349]:     pidfile     /var/lib/neutron/external/pids/f19fb67c-6bab-4253-851e-ede5bb26f589.pid.haproxy
Jan 20 15:28:13 compute-1 ovn_metadata_agent[140349]:     daemon
Jan 20 15:28:13 compute-1 ovn_metadata_agent[140349]: 
Jan 20 15:28:13 compute-1 ovn_metadata_agent[140349]: defaults
Jan 20 15:28:13 compute-1 ovn_metadata_agent[140349]:     log global
Jan 20 15:28:13 compute-1 ovn_metadata_agent[140349]:     mode http
Jan 20 15:28:13 compute-1 ovn_metadata_agent[140349]:     option httplog
Jan 20 15:28:13 compute-1 ovn_metadata_agent[140349]:     option dontlognull
Jan 20 15:28:13 compute-1 ovn_metadata_agent[140349]:     option http-server-close
Jan 20 15:28:13 compute-1 ovn_metadata_agent[140349]:     option forwardfor
Jan 20 15:28:13 compute-1 ovn_metadata_agent[140349]:     retries                 3
Jan 20 15:28:13 compute-1 ovn_metadata_agent[140349]:     timeout http-request    30s
Jan 20 15:28:13 compute-1 ovn_metadata_agent[140349]:     timeout connect         30s
Jan 20 15:28:13 compute-1 ovn_metadata_agent[140349]:     timeout client          32s
Jan 20 15:28:13 compute-1 ovn_metadata_agent[140349]:     timeout server          32s
Jan 20 15:28:13 compute-1 ovn_metadata_agent[140349]:     timeout http-keep-alive 30s
Jan 20 15:28:13 compute-1 ovn_metadata_agent[140349]: 
Jan 20 15:28:13 compute-1 ovn_metadata_agent[140349]: 
Jan 20 15:28:13 compute-1 ovn_metadata_agent[140349]: listen listener
Jan 20 15:28:13 compute-1 ovn_metadata_agent[140349]:     bind 169.254.169.254:80
Jan 20 15:28:13 compute-1 ovn_metadata_agent[140349]:     server metadata /var/lib/neutron/metadata_proxy
Jan 20 15:28:13 compute-1 ovn_metadata_agent[140349]:     http-request add-header X-OVN-Network-ID f19fb67c-6bab-4253-851e-ede5bb26f589
Jan 20 15:28:13 compute-1 ovn_metadata_agent[140349]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 20 15:28:13 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:28:13.442 140354 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-f19fb67c-6bab-4253-851e-ede5bb26f589', 'env', 'PROCESS_TAG=haproxy-f19fb67c-6bab-4253-851e-ede5bb26f589', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/f19fb67c-6bab-4253-851e-ede5bb26f589.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 20 15:28:13 compute-1 nova_compute[225855]: 2026-01-20 15:28:13.457 225859 DEBUG nova.compute.manager [req-d7cbdf50-7af4-4d46-89ee-4a30bdc27cb0 req-0327bc7f-2d0e-4135-9d93-7016cede58c5 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 770605b0-4686-4d97-9f82-7ed299482f50] Received event network-vif-plugged-52fb2315-9ec5-47a4-af4a-e0ed5e4caf21 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:28:13 compute-1 nova_compute[225855]: 2026-01-20 15:28:13.458 225859 DEBUG oslo_concurrency.lockutils [req-d7cbdf50-7af4-4d46-89ee-4a30bdc27cb0 req-0327bc7f-2d0e-4135-9d93-7016cede58c5 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "770605b0-4686-4d97-9f82-7ed299482f50-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:28:13 compute-1 nova_compute[225855]: 2026-01-20 15:28:13.458 225859 DEBUG oslo_concurrency.lockutils [req-d7cbdf50-7af4-4d46-89ee-4a30bdc27cb0 req-0327bc7f-2d0e-4135-9d93-7016cede58c5 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "770605b0-4686-4d97-9f82-7ed299482f50-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:28:13 compute-1 nova_compute[225855]: 2026-01-20 15:28:13.458 225859 DEBUG oslo_concurrency.lockutils [req-d7cbdf50-7af4-4d46-89ee-4a30bdc27cb0 req-0327bc7f-2d0e-4135-9d93-7016cede58c5 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "770605b0-4686-4d97-9f82-7ed299482f50-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:28:13 compute-1 nova_compute[225855]: 2026-01-20 15:28:13.459 225859 DEBUG nova.compute.manager [req-d7cbdf50-7af4-4d46-89ee-4a30bdc27cb0 req-0327bc7f-2d0e-4135-9d93-7016cede58c5 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 770605b0-4686-4d97-9f82-7ed299482f50] Processing event network-vif-plugged-52fb2315-9ec5-47a4-af4a-e0ed5e4caf21 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 20 15:28:13 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:28:13 compute-1 nova_compute[225855]: 2026-01-20 15:28:13.565 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:28:13 compute-1 nova_compute[225855]: 2026-01-20 15:28:13.589 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768922893.5886781, 770605b0-4686-4d97-9f82-7ed299482f50 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:28:13 compute-1 nova_compute[225855]: 2026-01-20 15:28:13.590 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 770605b0-4686-4d97-9f82-7ed299482f50] VM Started (Lifecycle Event)
Jan 20 15:28:13 compute-1 nova_compute[225855]: 2026-01-20 15:28:13.593 225859 DEBUG nova.compute.manager [None req-363dc4c9-30e3-49d5-9b76-6537f931887c 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 770605b0-4686-4d97-9f82-7ed299482f50] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 20 15:28:13 compute-1 nova_compute[225855]: 2026-01-20 15:28:13.597 225859 DEBUG nova.virt.libvirt.driver [None req-363dc4c9-30e3-49d5-9b76-6537f931887c 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 770605b0-4686-4d97-9f82-7ed299482f50] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 20 15:28:13 compute-1 nova_compute[225855]: 2026-01-20 15:28:13.600 225859 INFO nova.virt.libvirt.driver [-] [instance: 770605b0-4686-4d97-9f82-7ed299482f50] Instance spawned successfully.
Jan 20 15:28:13 compute-1 nova_compute[225855]: 2026-01-20 15:28:13.601 225859 DEBUG nova.virt.libvirt.driver [None req-363dc4c9-30e3-49d5-9b76-6537f931887c 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 770605b0-4686-4d97-9f82-7ed299482f50] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 20 15:28:13 compute-1 nova_compute[225855]: 2026-01-20 15:28:13.615 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 770605b0-4686-4d97-9f82-7ed299482f50] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:28:13 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 20 15:28:13 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3549410671' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:28:13 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 20 15:28:13 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3549410671' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:28:13 compute-1 nova_compute[225855]: 2026-01-20 15:28:13.621 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 770605b0-4686-4d97-9f82-7ed299482f50] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 15:28:13 compute-1 nova_compute[225855]: 2026-01-20 15:28:13.624 225859 DEBUG nova.virt.libvirt.driver [None req-363dc4c9-30e3-49d5-9b76-6537f931887c 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 770605b0-4686-4d97-9f82-7ed299482f50] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:28:13 compute-1 nova_compute[225855]: 2026-01-20 15:28:13.624 225859 DEBUG nova.virt.libvirt.driver [None req-363dc4c9-30e3-49d5-9b76-6537f931887c 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 770605b0-4686-4d97-9f82-7ed299482f50] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:28:13 compute-1 nova_compute[225855]: 2026-01-20 15:28:13.625 225859 DEBUG nova.virt.libvirt.driver [None req-363dc4c9-30e3-49d5-9b76-6537f931887c 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 770605b0-4686-4d97-9f82-7ed299482f50] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:28:13 compute-1 nova_compute[225855]: 2026-01-20 15:28:13.625 225859 DEBUG nova.virt.libvirt.driver [None req-363dc4c9-30e3-49d5-9b76-6537f931887c 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 770605b0-4686-4d97-9f82-7ed299482f50] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:28:13 compute-1 nova_compute[225855]: 2026-01-20 15:28:13.625 225859 DEBUG nova.virt.libvirt.driver [None req-363dc4c9-30e3-49d5-9b76-6537f931887c 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 770605b0-4686-4d97-9f82-7ed299482f50] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:28:13 compute-1 nova_compute[225855]: 2026-01-20 15:28:13.626 225859 DEBUG nova.virt.libvirt.driver [None req-363dc4c9-30e3-49d5-9b76-6537f931887c 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 770605b0-4686-4d97-9f82-7ed299482f50] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:28:13 compute-1 nova_compute[225855]: 2026-01-20 15:28:13.656 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 770605b0-4686-4d97-9f82-7ed299482f50] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 15:28:13 compute-1 nova_compute[225855]: 2026-01-20 15:28:13.657 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768922893.5890388, 770605b0-4686-4d97-9f82-7ed299482f50 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:28:13 compute-1 nova_compute[225855]: 2026-01-20 15:28:13.657 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 770605b0-4686-4d97-9f82-7ed299482f50] VM Paused (Lifecycle Event)
Jan 20 15:28:13 compute-1 nova_compute[225855]: 2026-01-20 15:28:13.690 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 770605b0-4686-4d97-9f82-7ed299482f50] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:28:13 compute-1 nova_compute[225855]: 2026-01-20 15:28:13.697 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768922893.5960007, 770605b0-4686-4d97-9f82-7ed299482f50 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:28:13 compute-1 nova_compute[225855]: 2026-01-20 15:28:13.698 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 770605b0-4686-4d97-9f82-7ed299482f50] VM Resumed (Lifecycle Event)
Jan 20 15:28:13 compute-1 nova_compute[225855]: 2026-01-20 15:28:13.701 225859 INFO nova.compute.manager [None req-363dc4c9-30e3-49d5-9b76-6537f931887c 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 770605b0-4686-4d97-9f82-7ed299482f50] Took 5.63 seconds to spawn the instance on the hypervisor.
Jan 20 15:28:13 compute-1 nova_compute[225855]: 2026-01-20 15:28:13.702 225859 DEBUG nova.compute.manager [None req-363dc4c9-30e3-49d5-9b76-6537f931887c 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 770605b0-4686-4d97-9f82-7ed299482f50] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:28:13 compute-1 nova_compute[225855]: 2026-01-20 15:28:13.712 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 770605b0-4686-4d97-9f82-7ed299482f50] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:28:13 compute-1 nova_compute[225855]: 2026-01-20 15:28:13.717 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 770605b0-4686-4d97-9f82-7ed299482f50] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 15:28:13 compute-1 nova_compute[225855]: 2026-01-20 15:28:13.737 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 770605b0-4686-4d97-9f82-7ed299482f50] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 15:28:13 compute-1 nova_compute[225855]: 2026-01-20 15:28:13.745 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:28:13 compute-1 nova_compute[225855]: 2026-01-20 15:28:13.746 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:28:13 compute-1 nova_compute[225855]: 2026-01-20 15:28:13.750 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:28:13 compute-1 nova_compute[225855]: 2026-01-20 15:28:13.750 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:28:13 compute-1 nova_compute[225855]: 2026-01-20 15:28:13.760 225859 INFO nova.compute.manager [None req-363dc4c9-30e3-49d5-9b76-6537f931887c 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 770605b0-4686-4d97-9f82-7ed299482f50] Took 6.58 seconds to build instance.
Jan 20 15:28:13 compute-1 nova_compute[225855]: 2026-01-20 15:28:13.775 225859 DEBUG oslo_concurrency.lockutils [None req-363dc4c9-30e3-49d5-9b76-6537f931887c 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Lock "770605b0-4686-4d97-9f82-7ed299482f50" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 6.709s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:28:13 compute-1 podman[316157]: 2026-01-20 15:28:13.828944475 +0000 UTC m=+0.049165044 container create e68935dd090d02c7d09ee6eaa42b1b2f1beffe055ed6b7bb3242ce69225c8b7f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f19fb67c-6bab-4253-851e-ede5bb26f589, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 20 15:28:13 compute-1 systemd[1]: Started libpod-conmon-e68935dd090d02c7d09ee6eaa42b1b2f1beffe055ed6b7bb3242ce69225c8b7f.scope.
Jan 20 15:28:13 compute-1 systemd[1]: Started libcrun container.
Jan 20 15:28:13 compute-1 podman[316157]: 2026-01-20 15:28:13.803398536 +0000 UTC m=+0.023619125 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 20 15:28:13 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/987c681a0ffce47fad96bd241a3b13e2bd7d14a562c7f3c13eeeccc3550197bb/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 20 15:28:13 compute-1 podman[316157]: 2026-01-20 15:28:13.913841889 +0000 UTC m=+0.134062488 container init e68935dd090d02c7d09ee6eaa42b1b2f1beffe055ed6b7bb3242ce69225c8b7f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f19fb67c-6bab-4253-851e-ede5bb26f589, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Jan 20 15:28:13 compute-1 podman[316157]: 2026-01-20 15:28:13.919032778 +0000 UTC m=+0.139253337 container start e68935dd090d02c7d09ee6eaa42b1b2f1beffe055ed6b7bb3242ce69225c8b7f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f19fb67c-6bab-4253-851e-ede5bb26f589, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Jan 20 15:28:13 compute-1 neutron-haproxy-ovnmeta-f19fb67c-6bab-4253-851e-ede5bb26f589[316172]: [NOTICE]   (316176) : New worker (316178) forked
Jan 20 15:28:13 compute-1 neutron-haproxy-ovnmeta-f19fb67c-6bab-4253-851e-ede5bb26f589[316172]: [NOTICE]   (316176) : Loading success.
Jan 20 15:28:14 compute-1 ceph-mon[81775]: pgmap v3181: 321 pgs: 321 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 15 KiB/s rd, 1.8 MiB/s wr, 24 op/s
Jan 20 15:28:14 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/2276571353' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:28:14 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/3549410671' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:28:14 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/3549410671' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:28:14 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:28:14 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:28:14 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:28:14.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:28:14 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:28:14 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:28:14 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:28:14.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:28:15 compute-1 nova_compute[225855]: 2026-01-20 15:28:15.543 225859 DEBUG nova.compute.manager [req-2993ec33-9fad-4747-824b-99d38fac6c43 req-e488644d-2645-43d9-9564-8e9afdbb106f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 770605b0-4686-4d97-9f82-7ed299482f50] Received event network-vif-plugged-52fb2315-9ec5-47a4-af4a-e0ed5e4caf21 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:28:15 compute-1 nova_compute[225855]: 2026-01-20 15:28:15.544 225859 DEBUG oslo_concurrency.lockutils [req-2993ec33-9fad-4747-824b-99d38fac6c43 req-e488644d-2645-43d9-9564-8e9afdbb106f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "770605b0-4686-4d97-9f82-7ed299482f50-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:28:15 compute-1 nova_compute[225855]: 2026-01-20 15:28:15.544 225859 DEBUG oslo_concurrency.lockutils [req-2993ec33-9fad-4747-824b-99d38fac6c43 req-e488644d-2645-43d9-9564-8e9afdbb106f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "770605b0-4686-4d97-9f82-7ed299482f50-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:28:15 compute-1 nova_compute[225855]: 2026-01-20 15:28:15.544 225859 DEBUG oslo_concurrency.lockutils [req-2993ec33-9fad-4747-824b-99d38fac6c43 req-e488644d-2645-43d9-9564-8e9afdbb106f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "770605b0-4686-4d97-9f82-7ed299482f50-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:28:15 compute-1 nova_compute[225855]: 2026-01-20 15:28:15.544 225859 DEBUG nova.compute.manager [req-2993ec33-9fad-4747-824b-99d38fac6c43 req-e488644d-2645-43d9-9564-8e9afdbb106f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 770605b0-4686-4d97-9f82-7ed299482f50] No waiting events found dispatching network-vif-plugged-52fb2315-9ec5-47a4-af4a-e0ed5e4caf21 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:28:15 compute-1 nova_compute[225855]: 2026-01-20 15:28:15.545 225859 WARNING nova.compute.manager [req-2993ec33-9fad-4747-824b-99d38fac6c43 req-e488644d-2645-43d9-9564-8e9afdbb106f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 770605b0-4686-4d97-9f82-7ed299482f50] Received unexpected event network-vif-plugged-52fb2315-9ec5-47a4-af4a-e0ed5e4caf21 for instance with vm_state active and task_state None.
Jan 20 15:28:16 compute-1 podman[316188]: 2026-01-20 15:28:16.009898484 +0000 UTC m=+0.054780545 container health_status 533cd8e0c3b8b7f910f15cef49e7f879ab46c12ce646a35628c15f13725eb500 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Jan 20 15:28:16 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:28:16 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:28:16 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:28:16.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:28:16 compute-1 nova_compute[225855]: 2026-01-20 15:28:16.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:28:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:28:16.449 140354 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:28:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:28:16.449 140354 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:28:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:28:16.450 140354 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:28:16 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:28:16 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:28:16 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:28:16.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:28:16 compute-1 ceph-mon[81775]: pgmap v3182: 321 pgs: 321 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 29 op/s
Jan 20 15:28:17 compute-1 nova_compute[225855]: 2026-01-20 15:28:17.275 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:28:17 compute-1 ovn_controller[130490]: 2026-01-20T15:28:17Z|00928|binding|INFO|Releasing lport d4b30c77-23b1-48b0-a6c8-4cf53f9840de from this chassis (sb_readonly=0)
Jan 20 15:28:17 compute-1 nova_compute[225855]: 2026-01-20 15:28:17.455 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:28:17 compute-1 NetworkManager[49104]: <info>  [1768922897.4572] manager: (patch-br-int-to-provnet-b62c391b-f7a3-4a38-a0df-72ac0383ca74): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/392)
Jan 20 15:28:17 compute-1 NetworkManager[49104]: <info>  [1768922897.4583] manager: (patch-provnet-b62c391b-f7a3-4a38-a0df-72ac0383ca74-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/393)
Jan 20 15:28:17 compute-1 ovn_controller[130490]: 2026-01-20T15:28:17Z|00929|binding|INFO|Releasing lport d4b30c77-23b1-48b0-a6c8-4cf53f9840de from this chassis (sb_readonly=0)
Jan 20 15:28:17 compute-1 nova_compute[225855]: 2026-01-20 15:28:17.491 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:28:17 compute-1 nova_compute[225855]: 2026-01-20 15:28:17.496 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:28:17 compute-1 sudo[316209]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:28:17 compute-1 sudo[316209]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:28:17 compute-1 sudo[316209]: pam_unix(sudo:session): session closed for user root
Jan 20 15:28:18 compute-1 sudo[316234]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 15:28:18 compute-1 sudo[316234]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:28:18 compute-1 sudo[316234]: pam_unix(sudo:session): session closed for user root
Jan 20 15:28:18 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:28:18 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:28:18 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:28:18.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:28:18 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:28:18 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:28:18 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:28:18.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:28:18 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:28:18 compute-1 ceph-mon[81775]: pgmap v3183: 321 pgs: 321 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 1.8 MiB/s wr, 73 op/s
Jan 20 15:28:18 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:28:18 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:28:18 compute-1 nova_compute[225855]: 2026-01-20 15:28:18.566 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:28:18 compute-1 nova_compute[225855]: 2026-01-20 15:28:18.596 225859 DEBUG nova.compute.manager [req-ad52a640-ea27-483d-86f1-e208ba45eedc req-90001ff5-4262-4c4d-9aae-f2132a73eb32 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 770605b0-4686-4d97-9f82-7ed299482f50] Received event network-changed-52fb2315-9ec5-47a4-af4a-e0ed5e4caf21 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:28:18 compute-1 nova_compute[225855]: 2026-01-20 15:28:18.596 225859 DEBUG nova.compute.manager [req-ad52a640-ea27-483d-86f1-e208ba45eedc req-90001ff5-4262-4c4d-9aae-f2132a73eb32 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 770605b0-4686-4d97-9f82-7ed299482f50] Refreshing instance network info cache due to event network-changed-52fb2315-9ec5-47a4-af4a-e0ed5e4caf21. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 15:28:18 compute-1 nova_compute[225855]: 2026-01-20 15:28:18.597 225859 DEBUG oslo_concurrency.lockutils [req-ad52a640-ea27-483d-86f1-e208ba45eedc req-90001ff5-4262-4c4d-9aae-f2132a73eb32 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-770605b0-4686-4d97-9f82-7ed299482f50" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:28:18 compute-1 nova_compute[225855]: 2026-01-20 15:28:18.597 225859 DEBUG oslo_concurrency.lockutils [req-ad52a640-ea27-483d-86f1-e208ba45eedc req-90001ff5-4262-4c4d-9aae-f2132a73eb32 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-770605b0-4686-4d97-9f82-7ed299482f50" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:28:18 compute-1 nova_compute[225855]: 2026-01-20 15:28:18.597 225859 DEBUG nova.network.neutron [req-ad52a640-ea27-483d-86f1-e208ba45eedc req-90001ff5-4262-4c4d-9aae-f2132a73eb32 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 770605b0-4686-4d97-9f82-7ed299482f50] Refreshing network info cache for port 52fb2315-9ec5-47a4-af4a-e0ed5e4caf21 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 15:28:20 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:28:20 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:28:20 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:28:20.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:28:20 compute-1 nova_compute[225855]: 2026-01-20 15:28:20.335 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:28:20 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:28:20 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:28:20 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:28:20.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:28:20 compute-1 ceph-mon[81775]: pgmap v3184: 321 pgs: 321 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Jan 20 15:28:21 compute-1 nova_compute[225855]: 2026-01-20 15:28:21.560 225859 DEBUG nova.network.neutron [req-ad52a640-ea27-483d-86f1-e208ba45eedc req-90001ff5-4262-4c4d-9aae-f2132a73eb32 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 770605b0-4686-4d97-9f82-7ed299482f50] Updated VIF entry in instance network info cache for port 52fb2315-9ec5-47a4-af4a-e0ed5e4caf21. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 15:28:21 compute-1 nova_compute[225855]: 2026-01-20 15:28:21.561 225859 DEBUG nova.network.neutron [req-ad52a640-ea27-483d-86f1-e208ba45eedc req-90001ff5-4262-4c4d-9aae-f2132a73eb32 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 770605b0-4686-4d97-9f82-7ed299482f50] Updating instance_info_cache with network_info: [{"id": "52fb2315-9ec5-47a4-af4a-e0ed5e4caf21", "address": "fa:16:3e:f9:08:60", "network": {"id": "f19fb67c-6bab-4253-851e-ede5bb26f589", "bridge": "br-int", "label": "tempest-network-smoke--2116756150", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.209", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3168f57421fb49bfb94b85daedd1fe7d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52fb2315-9e", "ovs_interfaceid": "52fb2315-9ec5-47a4-af4a-e0ed5e4caf21", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:28:21 compute-1 nova_compute[225855]: 2026-01-20 15:28:21.585 225859 DEBUG oslo_concurrency.lockutils [req-ad52a640-ea27-483d-86f1-e208ba45eedc req-90001ff5-4262-4c4d-9aae-f2132a73eb32 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-770605b0-4686-4d97-9f82-7ed299482f50" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:28:21 compute-1 ceph-mon[81775]: pgmap v3185: 321 pgs: 321 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1006 KiB/s wr, 90 op/s
Jan 20 15:28:22 compute-1 nova_compute[225855]: 2026-01-20 15:28:22.278 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:28:22 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:28:22 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:28:22 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:28:22.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:28:22 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:28:22 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:28:22 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:28:22.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:28:23 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:28:23 compute-1 nova_compute[225855]: 2026-01-20 15:28:23.633 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:28:24 compute-1 ceph-mon[81775]: pgmap v3186: 321 pgs: 321 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 76 op/s
Jan 20 15:28:24 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:28:24 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:28:24 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:28:24.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:28:24 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:28:24 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:28:24 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:28:24.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:28:26 compute-1 ceph-mon[81775]: pgmap v3187: 321 pgs: 321 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 76 op/s
Jan 20 15:28:26 compute-1 ovn_controller[130490]: 2026-01-20T15:28:26Z|00112|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:f9:08:60 10.100.0.14
Jan 20 15:28:26 compute-1 ovn_controller[130490]: 2026-01-20T15:28:26Z|00113|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:f9:08:60 10.100.0.14
Jan 20 15:28:26 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:28:26 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:28:26 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:28:26.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:28:26 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:28:26 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:28:26 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:28:26.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:28:26 compute-1 sudo[316263]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:28:26 compute-1 sudo[316263]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:28:26 compute-1 sudo[316263]: pam_unix(sudo:session): session closed for user root
Jan 20 15:28:26 compute-1 sudo[316288]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:28:26 compute-1 sudo[316288]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:28:26 compute-1 sudo[316288]: pam_unix(sudo:session): session closed for user root
Jan 20 15:28:27 compute-1 nova_compute[225855]: 2026-01-20 15:28:27.281 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:28:28 compute-1 ceph-mon[81775]: pgmap v3188: 321 pgs: 321 active+clean; 174 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 550 KiB/s wr, 87 op/s
Jan 20 15:28:28 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:28:28 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:28:28 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:28:28.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:28:28 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:28:28 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:28:28 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:28:28.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:28:28 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:28:28 compute-1 nova_compute[225855]: 2026-01-20 15:28:28.636 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:28:30 compute-1 ceph-mon[81775]: pgmap v3189: 321 pgs: 321 active+clean; 176 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.0 MiB/s rd, 1.1 MiB/s wr, 60 op/s
Jan 20 15:28:30 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:28:30 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:28:30 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:28:30.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:28:30 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:28:30 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:28:30 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:28:30.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:28:31 compute-1 nova_compute[225855]: 2026-01-20 15:28:31.895 225859 INFO nova.compute.manager [None req-4885eae6-f1d4-421f-8053-7401edab03ac 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 770605b0-4686-4d97-9f82-7ed299482f50] Get console output
Jan 20 15:28:31 compute-1 nova_compute[225855]: 2026-01-20 15:28:31.903 263775 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Jan 20 15:28:32 compute-1 ceph-mon[81775]: pgmap v3190: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Jan 20 15:28:32 compute-1 nova_compute[225855]: 2026-01-20 15:28:32.283 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:28:32 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:28:32 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:28:32 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:28:32.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:28:32 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:28:32 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:28:32 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:28:32.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:28:32 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:28:32.527 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=74, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '12:bb:42', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '06:92:24:f7:15:56'}, ipsec=False) old=SB_Global(nb_cfg=73) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:28:32 compute-1 nova_compute[225855]: 2026-01-20 15:28:32.527 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:28:32 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:28:32.529 140354 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 20 15:28:33 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:28:33 compute-1 nova_compute[225855]: 2026-01-20 15:28:33.638 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:28:34 compute-1 ceph-mon[81775]: pgmap v3191: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Jan 20 15:28:34 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:28:34 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:28:34 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:28:34.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:28:34 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:28:34 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:28:34 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:28:34.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:28:34 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:28:34.531 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5ffd4ac3-9266-4927-98ad-20a17782c725, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '74'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:28:34 compute-1 nova_compute[225855]: 2026-01-20 15:28:34.951 225859 DEBUG oslo_concurrency.lockutils [None req-8e31895a-3545-448a-a53c-ead2de407a86 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Acquiring lock "interface-770605b0-4686-4d97-9f82-7ed299482f50-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:28:34 compute-1 nova_compute[225855]: 2026-01-20 15:28:34.952 225859 DEBUG oslo_concurrency.lockutils [None req-8e31895a-3545-448a-a53c-ead2de407a86 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Lock "interface-770605b0-4686-4d97-9f82-7ed299482f50-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:28:34 compute-1 nova_compute[225855]: 2026-01-20 15:28:34.952 225859 DEBUG nova.objects.instance [None req-8e31895a-3545-448a-a53c-ead2de407a86 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Lazy-loading 'flavor' on Instance uuid 770605b0-4686-4d97-9f82-7ed299482f50 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:28:35 compute-1 nova_compute[225855]: 2026-01-20 15:28:35.505 225859 DEBUG nova.objects.instance [None req-8e31895a-3545-448a-a53c-ead2de407a86 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Lazy-loading 'pci_requests' on Instance uuid 770605b0-4686-4d97-9f82-7ed299482f50 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:28:35 compute-1 nova_compute[225855]: 2026-01-20 15:28:35.519 225859 DEBUG nova.network.neutron [None req-8e31895a-3545-448a-a53c-ead2de407a86 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 770605b0-4686-4d97-9f82-7ed299482f50] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 20 15:28:35 compute-1 nova_compute[225855]: 2026-01-20 15:28:35.722 225859 DEBUG nova.policy [None req-8e31895a-3545-448a-a53c-ead2de407a86 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '5338aa65dc0e4326a66ce79053787f14', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '3168f57421fb49bfb94b85daedd1fe7d', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 20 15:28:36 compute-1 ceph-mon[81775]: pgmap v3192: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Jan 20 15:28:36 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:28:36 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:28:36 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:28:36.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:28:36 compute-1 nova_compute[225855]: 2026-01-20 15:28:36.443 225859 DEBUG nova.network.neutron [None req-8e31895a-3545-448a-a53c-ead2de407a86 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 770605b0-4686-4d97-9f82-7ed299482f50] Successfully created port: 77056a83-f3ee-44a1-8cd0-fac2b5327a1e _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 20 15:28:36 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:28:36 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:28:36 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:28:36.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:28:37 compute-1 podman[316318]: 2026-01-20 15:28:37.056322516 +0000 UTC m=+0.087697344 container health_status 72be95c12e041eb2cd1a16ad70dfa1391d9f03a5f3180a614ead9459dcfcd46f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3)
Jan 20 15:28:37 compute-1 nova_compute[225855]: 2026-01-20 15:28:37.062 225859 DEBUG nova.network.neutron [None req-8e31895a-3545-448a-a53c-ead2de407a86 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 770605b0-4686-4d97-9f82-7ed299482f50] Successfully updated port: 77056a83-f3ee-44a1-8cd0-fac2b5327a1e _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 20 15:28:37 compute-1 nova_compute[225855]: 2026-01-20 15:28:37.084 225859 DEBUG oslo_concurrency.lockutils [None req-8e31895a-3545-448a-a53c-ead2de407a86 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Acquiring lock "refresh_cache-770605b0-4686-4d97-9f82-7ed299482f50" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:28:37 compute-1 nova_compute[225855]: 2026-01-20 15:28:37.084 225859 DEBUG oslo_concurrency.lockutils [None req-8e31895a-3545-448a-a53c-ead2de407a86 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Acquired lock "refresh_cache-770605b0-4686-4d97-9f82-7ed299482f50" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:28:37 compute-1 nova_compute[225855]: 2026-01-20 15:28:37.084 225859 DEBUG nova.network.neutron [None req-8e31895a-3545-448a-a53c-ead2de407a86 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 770605b0-4686-4d97-9f82-7ed299482f50] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 15:28:37 compute-1 nova_compute[225855]: 2026-01-20 15:28:37.221 225859 DEBUG nova.compute.manager [req-6a9af8bc-ec26-41c4-ad07-087e9e24e499 req-337a11f2-6747-4b8d-b5d3-3ec4cdbe76ae 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 770605b0-4686-4d97-9f82-7ed299482f50] Received event network-changed-77056a83-f3ee-44a1-8cd0-fac2b5327a1e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:28:37 compute-1 nova_compute[225855]: 2026-01-20 15:28:37.222 225859 DEBUG nova.compute.manager [req-6a9af8bc-ec26-41c4-ad07-087e9e24e499 req-337a11f2-6747-4b8d-b5d3-3ec4cdbe76ae 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 770605b0-4686-4d97-9f82-7ed299482f50] Refreshing instance network info cache due to event network-changed-77056a83-f3ee-44a1-8cd0-fac2b5327a1e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 15:28:37 compute-1 nova_compute[225855]: 2026-01-20 15:28:37.222 225859 DEBUG oslo_concurrency.lockutils [req-6a9af8bc-ec26-41c4-ad07-087e9e24e499 req-337a11f2-6747-4b8d-b5d3-3ec4cdbe76ae 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-770605b0-4686-4d97-9f82-7ed299482f50" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:28:37 compute-1 nova_compute[225855]: 2026-01-20 15:28:37.285 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:28:38 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:28:38 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:28:38 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:28:38.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:28:38 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:28:38 compute-1 ceph-mon[81775]: pgmap v3193: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Jan 20 15:28:38 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:28:38 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:28:38 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:28:38.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:28:38 compute-1 nova_compute[225855]: 2026-01-20 15:28:38.640 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:28:40 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:28:40 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:28:40 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:28:40.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:28:40 compute-1 nova_compute[225855]: 2026-01-20 15:28:40.382 225859 DEBUG nova.network.neutron [None req-8e31895a-3545-448a-a53c-ead2de407a86 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 770605b0-4686-4d97-9f82-7ed299482f50] Updating instance_info_cache with network_info: [{"id": "52fb2315-9ec5-47a4-af4a-e0ed5e4caf21", "address": "fa:16:3e:f9:08:60", "network": {"id": "f19fb67c-6bab-4253-851e-ede5bb26f589", "bridge": "br-int", "label": "tempest-network-smoke--2116756150", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.209", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3168f57421fb49bfb94b85daedd1fe7d", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52fb2315-9e", "ovs_interfaceid": "52fb2315-9ec5-47a4-af4a-e0ed5e4caf21", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "77056a83-f3ee-44a1-8cd0-fac2b5327a1e", "address": "fa:16:3e:1e:e5:d8", "network": {"id": "d8ab95ce-159e-451b-baf0-5271f6a3160b", "bridge": "br-int", "label": "tempest-network-smoke--1027589119", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.28", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3168f57421fb49bfb94b85daedd1fe7d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap77056a83-f3", "ovs_interfaceid": "77056a83-f3ee-44a1-8cd0-fac2b5327a1e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:28:40 compute-1 nova_compute[225855]: 2026-01-20 15:28:40.406 225859 DEBUG oslo_concurrency.lockutils [None req-8e31895a-3545-448a-a53c-ead2de407a86 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Releasing lock "refresh_cache-770605b0-4686-4d97-9f82-7ed299482f50" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:28:40 compute-1 nova_compute[225855]: 2026-01-20 15:28:40.407 225859 DEBUG oslo_concurrency.lockutils [req-6a9af8bc-ec26-41c4-ad07-087e9e24e499 req-337a11f2-6747-4b8d-b5d3-3ec4cdbe76ae 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-770605b0-4686-4d97-9f82-7ed299482f50" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:28:40 compute-1 nova_compute[225855]: 2026-01-20 15:28:40.408 225859 DEBUG nova.network.neutron [req-6a9af8bc-ec26-41c4-ad07-087e9e24e499 req-337a11f2-6747-4b8d-b5d3-3ec4cdbe76ae 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 770605b0-4686-4d97-9f82-7ed299482f50] Refreshing network info cache for port 77056a83-f3ee-44a1-8cd0-fac2b5327a1e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 15:28:40 compute-1 nova_compute[225855]: 2026-01-20 15:28:40.410 225859 DEBUG nova.virt.libvirt.vif [None req-8e31895a-3545-448a-a53c-ead2de407a86 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T15:28:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-774862138',display_name='tempest-TestNetworkBasicOps-server-774862138',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-774862138',id=204,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHV4gRYgjRGDnaHxkSMJQMAVAwkk8jLZAdWo/fxENkiAFcbr1CXMhNYzs2hTTM5NoLjR4u2qMt5dKH7C9b4LHMK/3O49DJNSLMAUfk3HkTe/ulPxPFwHn3vfQCPGwBrM5A==',key_name='tempest-TestNetworkBasicOps-1250991011',keypairs=<?>,launch_index=0,launched_at=2026-01-20T15:28:13Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='3168f57421fb49bfb94b85daedd1fe7d',ramdisk_id='',reservation_id='r-u6z01i8j',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-807695970',owner_user_name='tempest-TestNetworkBasicOps-807695970-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T15:28:13Z,user_data=None,user_id='5338aa65dc0e4326a66ce79053787f14',uuid=770605b0-4686-4d97-9f82-7ed299482f50,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "77056a83-f3ee-44a1-8cd0-fac2b5327a1e", "address": "fa:16:3e:1e:e5:d8", "network": {"id": "d8ab95ce-159e-451b-baf0-5271f6a3160b", "bridge": "br-int", "label": "tempest-network-smoke--1027589119", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.28", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3168f57421fb49bfb94b85daedd1fe7d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap77056a83-f3", "ovs_interfaceid": "77056a83-f3ee-44a1-8cd0-fac2b5327a1e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 20 15:28:40 compute-1 nova_compute[225855]: 2026-01-20 15:28:40.411 225859 DEBUG nova.network.os_vif_util [None req-8e31895a-3545-448a-a53c-ead2de407a86 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Converting VIF {"id": "77056a83-f3ee-44a1-8cd0-fac2b5327a1e", "address": "fa:16:3e:1e:e5:d8", "network": {"id": "d8ab95ce-159e-451b-baf0-5271f6a3160b", "bridge": "br-int", "label": "tempest-network-smoke--1027589119", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.28", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3168f57421fb49bfb94b85daedd1fe7d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap77056a83-f3", "ovs_interfaceid": "77056a83-f3ee-44a1-8cd0-fac2b5327a1e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 15:28:40 compute-1 nova_compute[225855]: 2026-01-20 15:28:40.411 225859 DEBUG nova.network.os_vif_util [None req-8e31895a-3545-448a-a53c-ead2de407a86 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1e:e5:d8,bridge_name='br-int',has_traffic_filtering=True,id=77056a83-f3ee-44a1-8cd0-fac2b5327a1e,network=Network(d8ab95ce-159e-451b-baf0-5271f6a3160b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap77056a83-f3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 15:28:40 compute-1 nova_compute[225855]: 2026-01-20 15:28:40.412 225859 DEBUG os_vif [None req-8e31895a-3545-448a-a53c-ead2de407a86 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1e:e5:d8,bridge_name='br-int',has_traffic_filtering=True,id=77056a83-f3ee-44a1-8cd0-fac2b5327a1e,network=Network(d8ab95ce-159e-451b-baf0-5271f6a3160b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap77056a83-f3') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 20 15:28:40 compute-1 nova_compute[225855]: 2026-01-20 15:28:40.412 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:28:40 compute-1 nova_compute[225855]: 2026-01-20 15:28:40.413 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:28:40 compute-1 nova_compute[225855]: 2026-01-20 15:28:40.413 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 15:28:40 compute-1 nova_compute[225855]: 2026-01-20 15:28:40.416 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:28:40 compute-1 nova_compute[225855]: 2026-01-20 15:28:40.416 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap77056a83-f3, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:28:40 compute-1 nova_compute[225855]: 2026-01-20 15:28:40.416 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap77056a83-f3, col_values=(('external_ids', {'iface-id': '77056a83-f3ee-44a1-8cd0-fac2b5327a1e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:1e:e5:d8', 'vm-uuid': '770605b0-4686-4d97-9f82-7ed299482f50'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:28:40 compute-1 nova_compute[225855]: 2026-01-20 15:28:40.417 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:28:40 compute-1 NetworkManager[49104]: <info>  [1768922920.4187] manager: (tap77056a83-f3): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/394)
Jan 20 15:28:40 compute-1 nova_compute[225855]: 2026-01-20 15:28:40.419 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 15:28:40 compute-1 nova_compute[225855]: 2026-01-20 15:28:40.426 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:28:40 compute-1 nova_compute[225855]: 2026-01-20 15:28:40.427 225859 INFO os_vif [None req-8e31895a-3545-448a-a53c-ead2de407a86 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1e:e5:d8,bridge_name='br-int',has_traffic_filtering=True,id=77056a83-f3ee-44a1-8cd0-fac2b5327a1e,network=Network(d8ab95ce-159e-451b-baf0-5271f6a3160b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap77056a83-f3')
Jan 20 15:28:40 compute-1 nova_compute[225855]: 2026-01-20 15:28:40.428 225859 DEBUG nova.virt.libvirt.vif [None req-8e31895a-3545-448a-a53c-ead2de407a86 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T15:28:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-774862138',display_name='tempest-TestNetworkBasicOps-server-774862138',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-774862138',id=204,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHV4gRYgjRGDnaHxkSMJQMAVAwkk8jLZAdWo/fxENkiAFcbr1CXMhNYzs2hTTM5NoLjR4u2qMt5dKH7C9b4LHMK/3O49DJNSLMAUfk3HkTe/ulPxPFwHn3vfQCPGwBrM5A==',key_name='tempest-TestNetworkBasicOps-1250991011',keypairs=<?>,launch_index=0,launched_at=2026-01-20T15:28:13Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='3168f57421fb49bfb94b85daedd1fe7d',ramdisk_id='',reservation_id='r-u6z01i8j',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-807695970',owner_user_name='tempest-TestNetworkBasicOps-807695970-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T15:28:13Z,user_data=None,user_id='5338aa65dc0e4326a66ce79053787f14',uuid=770605b0-4686-4d97-9f82-7ed299482f50,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "77056a83-f3ee-44a1-8cd0-fac2b5327a1e", "address": "fa:16:3e:1e:e5:d8", "network": {"id": "d8ab95ce-159e-451b-baf0-5271f6a3160b", "bridge": "br-int", "label": "tempest-network-smoke--1027589119", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.28", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3168f57421fb49bfb94b85daedd1fe7d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap77056a83-f3", "ovs_interfaceid": "77056a83-f3ee-44a1-8cd0-fac2b5327a1e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 20 15:28:40 compute-1 nova_compute[225855]: 2026-01-20 15:28:40.429 225859 DEBUG nova.network.os_vif_util [None req-8e31895a-3545-448a-a53c-ead2de407a86 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Converting VIF {"id": "77056a83-f3ee-44a1-8cd0-fac2b5327a1e", "address": "fa:16:3e:1e:e5:d8", "network": {"id": "d8ab95ce-159e-451b-baf0-5271f6a3160b", "bridge": "br-int", "label": "tempest-network-smoke--1027589119", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.28", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3168f57421fb49bfb94b85daedd1fe7d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap77056a83-f3", "ovs_interfaceid": "77056a83-f3ee-44a1-8cd0-fac2b5327a1e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 15:28:40 compute-1 nova_compute[225855]: 2026-01-20 15:28:40.430 225859 DEBUG nova.network.os_vif_util [None req-8e31895a-3545-448a-a53c-ead2de407a86 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1e:e5:d8,bridge_name='br-int',has_traffic_filtering=True,id=77056a83-f3ee-44a1-8cd0-fac2b5327a1e,network=Network(d8ab95ce-159e-451b-baf0-5271f6a3160b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap77056a83-f3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 15:28:40 compute-1 nova_compute[225855]: 2026-01-20 15:28:40.434 225859 DEBUG nova.virt.libvirt.guest [None req-8e31895a-3545-448a-a53c-ead2de407a86 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] attach device xml: <interface type="ethernet">
Jan 20 15:28:40 compute-1 nova_compute[225855]:   <mac address="fa:16:3e:1e:e5:d8"/>
Jan 20 15:28:40 compute-1 nova_compute[225855]:   <model type="virtio"/>
Jan 20 15:28:40 compute-1 nova_compute[225855]:   <driver name="vhost" rx_queue_size="512"/>
Jan 20 15:28:40 compute-1 nova_compute[225855]:   <mtu size="1442"/>
Jan 20 15:28:40 compute-1 nova_compute[225855]:   <target dev="tap77056a83-f3"/>
Jan 20 15:28:40 compute-1 nova_compute[225855]: </interface>
Jan 20 15:28:40 compute-1 nova_compute[225855]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Jan 20 15:28:40 compute-1 kernel: tap77056a83-f3: entered promiscuous mode
Jan 20 15:28:40 compute-1 NetworkManager[49104]: <info>  [1768922920.4491] manager: (tap77056a83-f3): new Tun device (/org/freedesktop/NetworkManager/Devices/395)
Jan 20 15:28:40 compute-1 ovn_controller[130490]: 2026-01-20T15:28:40Z|00930|binding|INFO|Claiming lport 77056a83-f3ee-44a1-8cd0-fac2b5327a1e for this chassis.
Jan 20 15:28:40 compute-1 ovn_controller[130490]: 2026-01-20T15:28:40Z|00931|binding|INFO|77056a83-f3ee-44a1-8cd0-fac2b5327a1e: Claiming fa:16:3e:1e:e5:d8 10.100.0.28
Jan 20 15:28:40 compute-1 nova_compute[225855]: 2026-01-20 15:28:40.452 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:28:40 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:28:40.460 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1e:e5:d8 10.100.0.28'], port_security=['fa:16:3e:1e:e5:d8 10.100.0.28'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.28/28', 'neutron:device_id': '770605b0-4686-4d97-9f82-7ed299482f50', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d8ab95ce-159e-451b-baf0-5271f6a3160b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3168f57421fb49bfb94b85daedd1fe7d', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'aaa69ba6-9a27-441e-877e-2cd188322a42', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4ab7f4cc-800b-4c21-91a0-e2fd29d04e91, chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], logical_port=77056a83-f3ee-44a1-8cd0-fac2b5327a1e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:28:40 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:28:40.461 140354 INFO neutron.agent.ovn.metadata.agent [-] Port 77056a83-f3ee-44a1-8cd0-fac2b5327a1e in datapath d8ab95ce-159e-451b-baf0-5271f6a3160b bound to our chassis
Jan 20 15:28:40 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:28:40.462 140354 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d8ab95ce-159e-451b-baf0-5271f6a3160b
Jan 20 15:28:40 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:28:40.476 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[8d5094a5-d4d5-433e-9f4f-ca1b04a7164a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:28:40 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:28:40.477 140354 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapd8ab95ce-11 in ovnmeta-d8ab95ce-159e-451b-baf0-5271f6a3160b namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 20 15:28:40 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:28:40.478 229707 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapd8ab95ce-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 20 15:28:40 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:28:40.478 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[5616fd40-e93f-496e-a033-20c2a728aba6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:28:40 compute-1 systemd-udevd[316354]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 15:28:40 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:28:40.479 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[bab29d36-d4e2-4db3-9778-5e54bf2562fd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:28:40 compute-1 NetworkManager[49104]: <info>  [1768922920.4923] device (tap77056a83-f3): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 20 15:28:40 compute-1 NetworkManager[49104]: <info>  [1768922920.4928] device (tap77056a83-f3): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 20 15:28:40 compute-1 nova_compute[225855]: 2026-01-20 15:28:40.494 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:28:40 compute-1 ovn_controller[130490]: 2026-01-20T15:28:40Z|00932|binding|INFO|Setting lport 77056a83-f3ee-44a1-8cd0-fac2b5327a1e ovn-installed in OVS
Jan 20 15:28:40 compute-1 ovn_controller[130490]: 2026-01-20T15:28:40Z|00933|binding|INFO|Setting lport 77056a83-f3ee-44a1-8cd0-fac2b5327a1e up in Southbound
Jan 20 15:28:40 compute-1 nova_compute[225855]: 2026-01-20 15:28:40.496 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:28:40 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:28:40.495 140466 DEBUG oslo.privsep.daemon [-] privsep: reply[0edead4d-1217-4562-b767-1a62f4a00872]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:28:40 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:28:40 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:28:40 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:28:40.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:28:40 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:28:40.519 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[4db80bc0-4f7b-40ea-918f-119d6163f8f8]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:28:40 compute-1 ceph-mon[81775]: pgmap v3194: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 266 KiB/s rd, 1.6 MiB/s wr, 48 op/s
Jan 20 15:28:40 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:28:40.545 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[63a25ad9-cb73-4794-8af5-2155bc4d67ac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:28:40 compute-1 nova_compute[225855]: 2026-01-20 15:28:40.545 225859 DEBUG nova.virt.libvirt.driver [None req-8e31895a-3545-448a-a53c-ead2de407a86 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 15:28:40 compute-1 nova_compute[225855]: 2026-01-20 15:28:40.546 225859 DEBUG nova.virt.libvirt.driver [None req-8e31895a-3545-448a-a53c-ead2de407a86 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 15:28:40 compute-1 nova_compute[225855]: 2026-01-20 15:28:40.546 225859 DEBUG nova.virt.libvirt.driver [None req-8e31895a-3545-448a-a53c-ead2de407a86 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] No VIF found with MAC fa:16:3e:f9:08:60, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 20 15:28:40 compute-1 nova_compute[225855]: 2026-01-20 15:28:40.547 225859 DEBUG nova.virt.libvirt.driver [None req-8e31895a-3545-448a-a53c-ead2de407a86 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] No VIF found with MAC fa:16:3e:1e:e5:d8, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 20 15:28:40 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:28:40.550 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[8f74f23f-ebde-4f98-bd9b-0d1c7858f8ac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:28:40 compute-1 NetworkManager[49104]: <info>  [1768922920.5516] manager: (tapd8ab95ce-10): new Veth device (/org/freedesktop/NetworkManager/Devices/396)
Jan 20 15:28:40 compute-1 nova_compute[225855]: 2026-01-20 15:28:40.581 225859 DEBUG nova.virt.libvirt.guest [None req-8e31895a-3545-448a-a53c-ead2de407a86 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 15:28:40 compute-1 nova_compute[225855]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 15:28:40 compute-1 nova_compute[225855]:   <nova:name>tempest-TestNetworkBasicOps-server-774862138</nova:name>
Jan 20 15:28:40 compute-1 nova_compute[225855]:   <nova:creationTime>2026-01-20 15:28:40</nova:creationTime>
Jan 20 15:28:40 compute-1 nova_compute[225855]:   <nova:flavor name="m1.nano">
Jan 20 15:28:40 compute-1 nova_compute[225855]:     <nova:memory>128</nova:memory>
Jan 20 15:28:40 compute-1 nova_compute[225855]:     <nova:disk>1</nova:disk>
Jan 20 15:28:40 compute-1 nova_compute[225855]:     <nova:swap>0</nova:swap>
Jan 20 15:28:40 compute-1 nova_compute[225855]:     <nova:ephemeral>0</nova:ephemeral>
Jan 20 15:28:40 compute-1 nova_compute[225855]:     <nova:vcpus>1</nova:vcpus>
Jan 20 15:28:40 compute-1 nova_compute[225855]:   </nova:flavor>
Jan 20 15:28:40 compute-1 nova_compute[225855]:   <nova:owner>
Jan 20 15:28:40 compute-1 nova_compute[225855]:     <nova:user uuid="5338aa65dc0e4326a66ce79053787f14">tempest-TestNetworkBasicOps-807695970-project-member</nova:user>
Jan 20 15:28:40 compute-1 nova_compute[225855]:     <nova:project uuid="3168f57421fb49bfb94b85daedd1fe7d">tempest-TestNetworkBasicOps-807695970</nova:project>
Jan 20 15:28:40 compute-1 nova_compute[225855]:   </nova:owner>
Jan 20 15:28:40 compute-1 nova_compute[225855]:   <nova:root type="image" uuid="a32b3e07-16d8-46fd-9a7b-c242c432fcf9"/>
Jan 20 15:28:40 compute-1 nova_compute[225855]:   <nova:ports>
Jan 20 15:28:40 compute-1 nova_compute[225855]:     <nova:port uuid="52fb2315-9ec5-47a4-af4a-e0ed5e4caf21">
Jan 20 15:28:40 compute-1 nova_compute[225855]:       <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Jan 20 15:28:40 compute-1 nova_compute[225855]:     </nova:port>
Jan 20 15:28:40 compute-1 nova_compute[225855]:     <nova:port uuid="77056a83-f3ee-44a1-8cd0-fac2b5327a1e">
Jan 20 15:28:40 compute-1 nova_compute[225855]:       <nova:ip type="fixed" address="10.100.0.28" ipVersion="4"/>
Jan 20 15:28:40 compute-1 nova_compute[225855]:     </nova:port>
Jan 20 15:28:40 compute-1 nova_compute[225855]:   </nova:ports>
Jan 20 15:28:40 compute-1 nova_compute[225855]: </nova:instance>
Jan 20 15:28:40 compute-1 nova_compute[225855]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Jan 20 15:28:40 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:28:40.583 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[e062ba43-8d76-4c35-a16e-736fedf5688f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:28:40 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:28:40.586 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[de11bd56-a477-4942-9f40-768361154258]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:28:40 compute-1 NetworkManager[49104]: <info>  [1768922920.6097] device (tapd8ab95ce-10): carrier: link connected
Jan 20 15:28:40 compute-1 nova_compute[225855]: 2026-01-20 15:28:40.611 225859 DEBUG oslo_concurrency.lockutils [None req-8e31895a-3545-448a-a53c-ead2de407a86 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Lock "interface-770605b0-4686-4d97-9f82-7ed299482f50-None" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 5.659s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:28:40 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:28:40.614 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[9ad96f8f-3c2f-4ab2-94bb-1bee3f1509dd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:28:40 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:28:40.629 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[0cc2c0e2-8713-4bb6-a01d-7a63b1aafa57]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd8ab95ce-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:82:e9:46'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 265], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 795953, 'reachable_time': 34661, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 316380, 'error': None, 'target': 'ovnmeta-d8ab95ce-159e-451b-baf0-5271f6a3160b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:28:40 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:28:40.646 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[15bbf5d4-1ec5-4976-9f23-df3fbecd77f2]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe82:e946'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 795953, 'tstamp': 795953}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 316381, 'error': None, 'target': 'ovnmeta-d8ab95ce-159e-451b-baf0-5271f6a3160b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:28:40 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:28:40.662 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[a4d53efd-2eae-4666-bb20-2fc67eec268b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd8ab95ce-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:82:e9:46'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 265], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 795953, 'reachable_time': 34661, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 316382, 'error': None, 'target': 'ovnmeta-d8ab95ce-159e-451b-baf0-5271f6a3160b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:28:40 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:28:40.694 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[4ae5c2d4-8321-47c2-9698-3d0ff957f13b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:28:40 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:28:40.745 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[d07e3037-de68-42c5-954a-62c63eba4db7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:28:40 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:28:40.747 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd8ab95ce-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:28:40 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:28:40.747 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 15:28:40 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:28:40.748 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd8ab95ce-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:28:40 compute-1 kernel: tapd8ab95ce-10: entered promiscuous mode
Jan 20 15:28:40 compute-1 nova_compute[225855]: 2026-01-20 15:28:40.750 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:28:40 compute-1 NetworkManager[49104]: <info>  [1768922920.7506] manager: (tapd8ab95ce-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/397)
Jan 20 15:28:40 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:28:40.752 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd8ab95ce-10, col_values=(('external_ids', {'iface-id': '09021fcc-5f8e-43f5-85a0-9ce682d692a0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:28:40 compute-1 nova_compute[225855]: 2026-01-20 15:28:40.753 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:28:40 compute-1 ovn_controller[130490]: 2026-01-20T15:28:40Z|00934|binding|INFO|Releasing lport 09021fcc-5f8e-43f5-85a0-9ce682d692a0 from this chassis (sb_readonly=0)
Jan 20 15:28:40 compute-1 nova_compute[225855]: 2026-01-20 15:28:40.765 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:28:40 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:28:40.766 140354 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/d8ab95ce-159e-451b-baf0-5271f6a3160b.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/d8ab95ce-159e-451b-baf0-5271f6a3160b.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 20 15:28:40 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:28:40.767 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[393598a7-f93a-443d-8372-5745e3431acf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:28:40 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:28:40.768 140354 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 20 15:28:40 compute-1 ovn_metadata_agent[140349]: global
Jan 20 15:28:40 compute-1 ovn_metadata_agent[140349]:     log         /dev/log local0 debug
Jan 20 15:28:40 compute-1 ovn_metadata_agent[140349]:     log-tag     haproxy-metadata-proxy-d8ab95ce-159e-451b-baf0-5271f6a3160b
Jan 20 15:28:40 compute-1 ovn_metadata_agent[140349]:     user        root
Jan 20 15:28:40 compute-1 ovn_metadata_agent[140349]:     group       root
Jan 20 15:28:40 compute-1 ovn_metadata_agent[140349]:     maxconn     1024
Jan 20 15:28:40 compute-1 ovn_metadata_agent[140349]:     pidfile     /var/lib/neutron/external/pids/d8ab95ce-159e-451b-baf0-5271f6a3160b.pid.haproxy
Jan 20 15:28:40 compute-1 ovn_metadata_agent[140349]:     daemon
Jan 20 15:28:40 compute-1 ovn_metadata_agent[140349]: 
Jan 20 15:28:40 compute-1 ovn_metadata_agent[140349]: defaults
Jan 20 15:28:40 compute-1 ovn_metadata_agent[140349]:     log global
Jan 20 15:28:40 compute-1 ovn_metadata_agent[140349]:     mode http
Jan 20 15:28:40 compute-1 ovn_metadata_agent[140349]:     option httplog
Jan 20 15:28:40 compute-1 ovn_metadata_agent[140349]:     option dontlognull
Jan 20 15:28:40 compute-1 ovn_metadata_agent[140349]:     option http-server-close
Jan 20 15:28:40 compute-1 ovn_metadata_agent[140349]:     option forwardfor
Jan 20 15:28:40 compute-1 ovn_metadata_agent[140349]:     retries                 3
Jan 20 15:28:40 compute-1 ovn_metadata_agent[140349]:     timeout http-request    30s
Jan 20 15:28:40 compute-1 ovn_metadata_agent[140349]:     timeout connect         30s
Jan 20 15:28:40 compute-1 ovn_metadata_agent[140349]:     timeout client          32s
Jan 20 15:28:40 compute-1 ovn_metadata_agent[140349]:     timeout server          32s
Jan 20 15:28:40 compute-1 ovn_metadata_agent[140349]:     timeout http-keep-alive 30s
Jan 20 15:28:40 compute-1 ovn_metadata_agent[140349]: 
Jan 20 15:28:40 compute-1 ovn_metadata_agent[140349]: 
Jan 20 15:28:40 compute-1 ovn_metadata_agent[140349]: listen listener
Jan 20 15:28:40 compute-1 ovn_metadata_agent[140349]:     bind 169.254.169.254:80
Jan 20 15:28:40 compute-1 ovn_metadata_agent[140349]:     server metadata /var/lib/neutron/metadata_proxy
Jan 20 15:28:40 compute-1 ovn_metadata_agent[140349]:     http-request add-header X-OVN-Network-ID d8ab95ce-159e-451b-baf0-5271f6a3160b
Jan 20 15:28:40 compute-1 ovn_metadata_agent[140349]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 20 15:28:40 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:28:40.768 140354 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-d8ab95ce-159e-451b-baf0-5271f6a3160b', 'env', 'PROCESS_TAG=haproxy-d8ab95ce-159e-451b-baf0-5271f6a3160b', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/d8ab95ce-159e-451b-baf0-5271f6a3160b.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 20 15:28:41 compute-1 podman[316414]: 2026-01-20 15:28:41.122345996 +0000 UTC m=+0.048758173 container create 62b8365da44e9b0899fd7c930b462a3d53b7c0b3ab06aa3a347b5ba7ee813dbc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d8ab95ce-159e-451b-baf0-5271f6a3160b, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 20 15:28:41 compute-1 systemd[1]: Started libpod-conmon-62b8365da44e9b0899fd7c930b462a3d53b7c0b3ab06aa3a347b5ba7ee813dbc.scope.
Jan 20 15:28:41 compute-1 systemd[1]: Started libcrun container.
Jan 20 15:28:41 compute-1 podman[316414]: 2026-01-20 15:28:41.096726324 +0000 UTC m=+0.023138521 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 20 15:28:41 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec07ec1121543b297cde9c315ec4473068cc96eb4066eb377bd1883358661b91/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 20 15:28:41 compute-1 podman[316414]: 2026-01-20 15:28:41.204992156 +0000 UTC m=+0.131404353 container init 62b8365da44e9b0899fd7c930b462a3d53b7c0b3ab06aa3a347b5ba7ee813dbc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d8ab95ce-159e-451b-baf0-5271f6a3160b, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 15:28:41 compute-1 podman[316414]: 2026-01-20 15:28:41.212602403 +0000 UTC m=+0.139014580 container start 62b8365da44e9b0899fd7c930b462a3d53b7c0b3ab06aa3a347b5ba7ee813dbc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d8ab95ce-159e-451b-baf0-5271f6a3160b, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 20 15:28:41 compute-1 neutron-haproxy-ovnmeta-d8ab95ce-159e-451b-baf0-5271f6a3160b[316429]: [NOTICE]   (316433) : New worker (316435) forked
Jan 20 15:28:41 compute-1 neutron-haproxy-ovnmeta-d8ab95ce-159e-451b-baf0-5271f6a3160b[316429]: [NOTICE]   (316433) : Loading success.
Jan 20 15:28:41 compute-1 nova_compute[225855]: 2026-01-20 15:28:41.588 225859 DEBUG nova.compute.manager [req-d3dc5542-df92-4a45-81cf-a358e92eb7b8 req-576f3a99-7f1c-48d1-b47f-7e8927686517 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 770605b0-4686-4d97-9f82-7ed299482f50] Received event network-vif-plugged-77056a83-f3ee-44a1-8cd0-fac2b5327a1e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:28:41 compute-1 nova_compute[225855]: 2026-01-20 15:28:41.588 225859 DEBUG oslo_concurrency.lockutils [req-d3dc5542-df92-4a45-81cf-a358e92eb7b8 req-576f3a99-7f1c-48d1-b47f-7e8927686517 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "770605b0-4686-4d97-9f82-7ed299482f50-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:28:41 compute-1 nova_compute[225855]: 2026-01-20 15:28:41.589 225859 DEBUG oslo_concurrency.lockutils [req-d3dc5542-df92-4a45-81cf-a358e92eb7b8 req-576f3a99-7f1c-48d1-b47f-7e8927686517 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "770605b0-4686-4d97-9f82-7ed299482f50-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:28:41 compute-1 nova_compute[225855]: 2026-01-20 15:28:41.589 225859 DEBUG oslo_concurrency.lockutils [req-d3dc5542-df92-4a45-81cf-a358e92eb7b8 req-576f3a99-7f1c-48d1-b47f-7e8927686517 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "770605b0-4686-4d97-9f82-7ed299482f50-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:28:41 compute-1 nova_compute[225855]: 2026-01-20 15:28:41.589 225859 DEBUG nova.compute.manager [req-d3dc5542-df92-4a45-81cf-a358e92eb7b8 req-576f3a99-7f1c-48d1-b47f-7e8927686517 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 770605b0-4686-4d97-9f82-7ed299482f50] No waiting events found dispatching network-vif-plugged-77056a83-f3ee-44a1-8cd0-fac2b5327a1e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:28:41 compute-1 nova_compute[225855]: 2026-01-20 15:28:41.589 225859 WARNING nova.compute.manager [req-d3dc5542-df92-4a45-81cf-a358e92eb7b8 req-576f3a99-7f1c-48d1-b47f-7e8927686517 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 770605b0-4686-4d97-9f82-7ed299482f50] Received unexpected event network-vif-plugged-77056a83-f3ee-44a1-8cd0-fac2b5327a1e for instance with vm_state active and task_state None.
Jan 20 15:28:42 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:28:42 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:28:42 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:28:42.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:28:42 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:28:42 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:28:42 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:28:42.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:28:42 compute-1 ceph-mon[81775]: pgmap v3195: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 131 KiB/s rd, 1.1 MiB/s wr, 31 op/s
Jan 20 15:28:43 compute-1 nova_compute[225855]: 2026-01-20 15:28:43.336 225859 DEBUG oslo_concurrency.lockutils [None req-8a5274d3-ca85-4f1c-baeb-511d7dc60fdf 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Acquiring lock "interface-770605b0-4686-4d97-9f82-7ed299482f50-77056a83-f3ee-44a1-8cd0-fac2b5327a1e" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:28:43 compute-1 nova_compute[225855]: 2026-01-20 15:28:43.337 225859 DEBUG oslo_concurrency.lockutils [None req-8a5274d3-ca85-4f1c-baeb-511d7dc60fdf 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Lock "interface-770605b0-4686-4d97-9f82-7ed299482f50-77056a83-f3ee-44a1-8cd0-fac2b5327a1e" acquired by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:28:43 compute-1 nova_compute[225855]: 2026-01-20 15:28:43.358 225859 DEBUG nova.objects.instance [None req-8a5274d3-ca85-4f1c-baeb-511d7dc60fdf 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Lazy-loading 'flavor' on Instance uuid 770605b0-4686-4d97-9f82-7ed299482f50 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:28:43 compute-1 nova_compute[225855]: 2026-01-20 15:28:43.381 225859 DEBUG nova.virt.libvirt.vif [None req-8a5274d3-ca85-4f1c-baeb-511d7dc60fdf 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T15:28:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-774862138',display_name='tempest-TestNetworkBasicOps-server-774862138',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-774862138',id=204,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHV4gRYgjRGDnaHxkSMJQMAVAwkk8jLZAdWo/fxENkiAFcbr1CXMhNYzs2hTTM5NoLjR4u2qMt5dKH7C9b4LHMK/3O49DJNSLMAUfk3HkTe/ulPxPFwHn3vfQCPGwBrM5A==',key_name='tempest-TestNetworkBasicOps-1250991011',keypairs=<?>,launch_index=0,launched_at=2026-01-20T15:28:13Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3168f57421fb49bfb94b85daedd1fe7d',ramdisk_id='',reservation_id='r-u6z01i8j',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-807695970',owner_user_name='tempest-TestNetworkBasicOps-807695970-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T15:28:13Z,user_data=None,user_id='5338aa65dc0e4326a66ce79053787f14',uuid=770605b0-4686-4d97-9f82-7ed299482f50,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "77056a83-f3ee-44a1-8cd0-fac2b5327a1e", "address": "fa:16:3e:1e:e5:d8", "network": {"id": "d8ab95ce-159e-451b-baf0-5271f6a3160b", "bridge": "br-int", "label": "tempest-network-smoke--1027589119", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.28", "type": "fixed", "version": 4, 
"meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3168f57421fb49bfb94b85daedd1fe7d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap77056a83-f3", "ovs_interfaceid": "77056a83-f3ee-44a1-8cd0-fac2b5327a1e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 20 15:28:43 compute-1 nova_compute[225855]: 2026-01-20 15:28:43.382 225859 DEBUG nova.network.os_vif_util [None req-8a5274d3-ca85-4f1c-baeb-511d7dc60fdf 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Converting VIF {"id": "77056a83-f3ee-44a1-8cd0-fac2b5327a1e", "address": "fa:16:3e:1e:e5:d8", "network": {"id": "d8ab95ce-159e-451b-baf0-5271f6a3160b", "bridge": "br-int", "label": "tempest-network-smoke--1027589119", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.28", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3168f57421fb49bfb94b85daedd1fe7d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap77056a83-f3", "ovs_interfaceid": "77056a83-f3ee-44a1-8cd0-fac2b5327a1e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 15:28:43 compute-1 nova_compute[225855]: 2026-01-20 15:28:43.383 225859 DEBUG nova.network.os_vif_util [None req-8a5274d3-ca85-4f1c-baeb-511d7dc60fdf 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1e:e5:d8,bridge_name='br-int',has_traffic_filtering=True,id=77056a83-f3ee-44a1-8cd0-fac2b5327a1e,network=Network(d8ab95ce-159e-451b-baf0-5271f6a3160b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap77056a83-f3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 15:28:43 compute-1 nova_compute[225855]: 2026-01-20 15:28:43.386 225859 DEBUG nova.virt.libvirt.guest [None req-8a5274d3-ca85-4f1c-baeb-511d7dc60fdf 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:1e:e5:d8"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap77056a83-f3"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Jan 20 15:28:43 compute-1 nova_compute[225855]: 2026-01-20 15:28:43.389 225859 DEBUG nova.virt.libvirt.guest [None req-8a5274d3-ca85-4f1c-baeb-511d7dc60fdf 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:1e:e5:d8"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap77056a83-f3"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Jan 20 15:28:43 compute-1 nova_compute[225855]: 2026-01-20 15:28:43.391 225859 DEBUG nova.virt.libvirt.driver [None req-8a5274d3-ca85-4f1c-baeb-511d7dc60fdf 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Attempting to detach device tap77056a83-f3 from instance 770605b0-4686-4d97-9f82-7ed299482f50 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Jan 20 15:28:43 compute-1 nova_compute[225855]: 2026-01-20 15:28:43.392 225859 DEBUG nova.virt.libvirt.guest [None req-8a5274d3-ca85-4f1c-baeb-511d7dc60fdf 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] detach device xml: <interface type="ethernet">
Jan 20 15:28:43 compute-1 nova_compute[225855]:   <mac address="fa:16:3e:1e:e5:d8"/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:   <model type="virtio"/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:   <driver name="vhost" rx_queue_size="512"/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:   <mtu size="1442"/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:   <target dev="tap77056a83-f3"/>
Jan 20 15:28:43 compute-1 nova_compute[225855]: </interface>
Jan 20 15:28:43 compute-1 nova_compute[225855]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Jan 20 15:28:43 compute-1 nova_compute[225855]: 2026-01-20 15:28:43.399 225859 DEBUG nova.virt.libvirt.guest [None req-8a5274d3-ca85-4f1c-baeb-511d7dc60fdf 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:1e:e5:d8"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap77056a83-f3"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Jan 20 15:28:43 compute-1 nova_compute[225855]: 2026-01-20 15:28:43.402 225859 DEBUG nova.virt.libvirt.guest [None req-8a5274d3-ca85-4f1c-baeb-511d7dc60fdf 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:1e:e5:d8"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap77056a83-f3"/></interface>not found in domain: <domain type='kvm' id='108'>
Jan 20 15:28:43 compute-1 nova_compute[225855]:   <name>instance-000000cc</name>
Jan 20 15:28:43 compute-1 nova_compute[225855]:   <uuid>770605b0-4686-4d97-9f82-7ed299482f50</uuid>
Jan 20 15:28:43 compute-1 nova_compute[225855]:   <metadata>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 15:28:43 compute-1 nova_compute[225855]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:   <nova:name>tempest-TestNetworkBasicOps-server-774862138</nova:name>
Jan 20 15:28:43 compute-1 nova_compute[225855]:   <nova:creationTime>2026-01-20 15:28:40</nova:creationTime>
Jan 20 15:28:43 compute-1 nova_compute[225855]:   <nova:flavor name="m1.nano">
Jan 20 15:28:43 compute-1 nova_compute[225855]:     <nova:memory>128</nova:memory>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     <nova:disk>1</nova:disk>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     <nova:swap>0</nova:swap>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     <nova:ephemeral>0</nova:ephemeral>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     <nova:vcpus>1</nova:vcpus>
Jan 20 15:28:43 compute-1 nova_compute[225855]:   </nova:flavor>
Jan 20 15:28:43 compute-1 nova_compute[225855]:   <nova:owner>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     <nova:user uuid="5338aa65dc0e4326a66ce79053787f14">tempest-TestNetworkBasicOps-807695970-project-member</nova:user>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     <nova:project uuid="3168f57421fb49bfb94b85daedd1fe7d">tempest-TestNetworkBasicOps-807695970</nova:project>
Jan 20 15:28:43 compute-1 nova_compute[225855]:   </nova:owner>
Jan 20 15:28:43 compute-1 nova_compute[225855]:   <nova:root type="image" uuid="a32b3e07-16d8-46fd-9a7b-c242c432fcf9"/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:   <nova:ports>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     <nova:port uuid="52fb2315-9ec5-47a4-af4a-e0ed5e4caf21">
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     </nova:port>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     <nova:port uuid="77056a83-f3ee-44a1-8cd0-fac2b5327a1e">
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <nova:ip type="fixed" address="10.100.0.28" ipVersion="4"/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     </nova:port>
Jan 20 15:28:43 compute-1 nova_compute[225855]:   </nova:ports>
Jan 20 15:28:43 compute-1 nova_compute[225855]: </nova:instance>
Jan 20 15:28:43 compute-1 nova_compute[225855]:   </metadata>
Jan 20 15:28:43 compute-1 nova_compute[225855]:   <memory unit='KiB'>131072</memory>
Jan 20 15:28:43 compute-1 nova_compute[225855]:   <currentMemory unit='KiB'>131072</currentMemory>
Jan 20 15:28:43 compute-1 nova_compute[225855]:   <vcpu placement='static'>1</vcpu>
Jan 20 15:28:43 compute-1 nova_compute[225855]:   <resource>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     <partition>/machine</partition>
Jan 20 15:28:43 compute-1 nova_compute[225855]:   </resource>
Jan 20 15:28:43 compute-1 nova_compute[225855]:   <sysinfo type='smbios'>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     <system>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <entry name='manufacturer'>RDO</entry>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <entry name='product'>OpenStack Compute</entry>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <entry name='serial'>770605b0-4686-4d97-9f82-7ed299482f50</entry>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <entry name='uuid'>770605b0-4686-4d97-9f82-7ed299482f50</entry>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <entry name='family'>Virtual Machine</entry>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     </system>
Jan 20 15:28:43 compute-1 nova_compute[225855]:   </sysinfo>
Jan 20 15:28:43 compute-1 nova_compute[225855]:   <os>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     <boot dev='hd'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     <smbios mode='sysinfo'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:   </os>
Jan 20 15:28:43 compute-1 nova_compute[225855]:   <features>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     <acpi/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     <apic/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     <vmcoreinfo state='on'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:   </features>
Jan 20 15:28:43 compute-1 nova_compute[225855]:   <cpu mode='custom' match='exact' check='full'>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     <model fallback='forbid'>Nehalem</model>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     <feature policy='require' name='x2apic'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     <feature policy='require' name='hypervisor'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     <feature policy='require' name='vme'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:   </cpu>
Jan 20 15:28:43 compute-1 nova_compute[225855]:   <clock offset='utc'>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     <timer name='pit' tickpolicy='delay'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     <timer name='rtc' tickpolicy='catchup'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     <timer name='hpet' present='no'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:   </clock>
Jan 20 15:28:43 compute-1 nova_compute[225855]:   <on_poweroff>destroy</on_poweroff>
Jan 20 15:28:43 compute-1 nova_compute[225855]:   <on_reboot>restart</on_reboot>
Jan 20 15:28:43 compute-1 nova_compute[225855]:   <on_crash>destroy</on_crash>
Jan 20 15:28:43 compute-1 nova_compute[225855]:   <devices>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     <disk type='network' device='disk'>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <driver name='qemu' type='raw' cache='none'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <auth username='openstack'>
Jan 20 15:28:43 compute-1 nova_compute[225855]:         <secret type='ceph' uuid='e399cf45-e6b6-5393-99f1-75c601d3f188'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       </auth>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <source protocol='rbd' name='vms/770605b0-4686-4d97-9f82-7ed299482f50_disk' index='2'>
Jan 20 15:28:43 compute-1 nova_compute[225855]:         <host name='192.168.122.100' port='6789'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:         <host name='192.168.122.102' port='6789'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:         <host name='192.168.122.101' port='6789'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       </source>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <target dev='vda' bus='virtio'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <alias name='virtio-disk0'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     </disk>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     <disk type='network' device='cdrom'>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <driver name='qemu' type='raw' cache='none'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <auth username='openstack'>
Jan 20 15:28:43 compute-1 nova_compute[225855]:         <secret type='ceph' uuid='e399cf45-e6b6-5393-99f1-75c601d3f188'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       </auth>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <source protocol='rbd' name='vms/770605b0-4686-4d97-9f82-7ed299482f50_disk.config' index='1'>
Jan 20 15:28:43 compute-1 nova_compute[225855]:         <host name='192.168.122.100' port='6789'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:         <host name='192.168.122.102' port='6789'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:         <host name='192.168.122.101' port='6789'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       </source>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <target dev='sda' bus='sata'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <readonly/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <alias name='sata0-0-0'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     </disk>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     <controller type='pci' index='0' model='pcie-root'>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <alias name='pcie.0'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     </controller>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     <controller type='pci' index='1' model='pcie-root-port'>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <target chassis='1' port='0x10'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <alias name='pci.1'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     </controller>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     <controller type='pci' index='2' model='pcie-root-port'>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <target chassis='2' port='0x11'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <alias name='pci.2'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     </controller>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     <controller type='pci' index='3' model='pcie-root-port'>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <target chassis='3' port='0x12'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <alias name='pci.3'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     </controller>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     <controller type='pci' index='4' model='pcie-root-port'>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <target chassis='4' port='0x13'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <alias name='pci.4'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     </controller>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     <controller type='pci' index='5' model='pcie-root-port'>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <target chassis='5' port='0x14'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <alias name='pci.5'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     </controller>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     <controller type='pci' index='6' model='pcie-root-port'>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <target chassis='6' port='0x15'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <alias name='pci.6'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     </controller>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     <controller type='pci' index='7' model='pcie-root-port'>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <target chassis='7' port='0x16'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <alias name='pci.7'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     </controller>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     <controller type='pci' index='8' model='pcie-root-port'>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <target chassis='8' port='0x17'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <alias name='pci.8'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     </controller>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     <controller type='pci' index='9' model='pcie-root-port'>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <target chassis='9' port='0x18'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <alias name='pci.9'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     </controller>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     <controller type='pci' index='10' model='pcie-root-port'>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <target chassis='10' port='0x19'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <alias name='pci.10'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     </controller>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     <controller type='pci' index='11' model='pcie-root-port'>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <target chassis='11' port='0x1a'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <alias name='pci.11'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     </controller>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     <controller type='pci' index='12' model='pcie-root-port'>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <target chassis='12' port='0x1b'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <alias name='pci.12'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     </controller>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     <controller type='pci' index='13' model='pcie-root-port'>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <target chassis='13' port='0x1c'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <alias name='pci.13'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     </controller>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     <controller type='pci' index='14' model='pcie-root-port'>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <target chassis='14' port='0x1d'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <alias name='pci.14'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     </controller>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     <controller type='pci' index='15' model='pcie-root-port'>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <target chassis='15' port='0x1e'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <alias name='pci.15'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     </controller>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     <controller type='pci' index='16' model='pcie-root-port'>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <target chassis='16' port='0x1f'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <alias name='pci.16'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     </controller>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     <controller type='pci' index='17' model='pcie-root-port'>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <target chassis='17' port='0x20'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <alias name='pci.17'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     </controller>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     <controller type='pci' index='18' model='pcie-root-port'>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <target chassis='18' port='0x21'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <alias name='pci.18'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     </controller>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     <controller type='pci' index='19' model='pcie-root-port'>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <target chassis='19' port='0x22'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <alias name='pci.19'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     </controller>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     <controller type='pci' index='20' model='pcie-root-port'>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <target chassis='20' port='0x23'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <alias name='pci.20'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     </controller>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     <controller type='pci' index='21' model='pcie-root-port'>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <target chassis='21' port='0x24'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <alias name='pci.21'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     </controller>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     <controller type='pci' index='22' model='pcie-root-port'>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <target chassis='22' port='0x25'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <alias name='pci.22'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     </controller>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     <controller type='pci' index='23' model='pcie-root-port'>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <target chassis='23' port='0x26'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <alias name='pci.23'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     </controller>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     <controller type='pci' index='24' model='pcie-root-port'>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <target chassis='24' port='0x27'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <alias name='pci.24'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     </controller>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     <controller type='pci' index='25' model='pcie-root-port'>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <target chassis='25' port='0x28'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <alias name='pci.25'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     </controller>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <model name='pcie-pci-bridge'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <alias name='pci.26'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     </controller>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     <controller type='usb' index='0' model='piix3-uhci'>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <alias name='usb'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     </controller>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     <controller type='sata' index='0'>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <alias name='ide'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     </controller>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     <interface type='ethernet'>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <mac address='fa:16:3e:f9:08:60'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <target dev='tap52fb2315-9e'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <model type='virtio'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <driver name='vhost' rx_queue_size='512'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <mtu size='1442'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <alias name='net0'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     </interface>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     <interface type='ethernet'>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <mac address='fa:16:3e:1e:e5:d8'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <target dev='tap77056a83-f3'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <model type='virtio'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <driver name='vhost' rx_queue_size='512'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <mtu size='1442'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <alias name='net1'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     </interface>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     <serial type='pty'>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <source path='/dev/pts/0'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <log file='/var/lib/nova/instances/770605b0-4686-4d97-9f82-7ed299482f50/console.log' append='off'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <target type='isa-serial' port='0'>
Jan 20 15:28:43 compute-1 nova_compute[225855]:         <model name='isa-serial'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       </target>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <alias name='serial0'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     </serial>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     <console type='pty' tty='/dev/pts/0'>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <source path='/dev/pts/0'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <log file='/var/lib/nova/instances/770605b0-4686-4d97-9f82-7ed299482f50/console.log' append='off'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <target type='serial' port='0'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <alias name='serial0'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     </console>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     <input type='tablet' bus='usb'>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <alias name='input0'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <address type='usb' bus='0' port='1'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     </input>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     <input type='mouse' bus='ps2'>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <alias name='input1'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     </input>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     <input type='keyboard' bus='ps2'>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <alias name='input2'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     </input>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <listen type='address' address='::0'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     </graphics>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     <audio id='1' type='none'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     <video>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <model type='virtio' heads='1' primary='yes'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <alias name='video0'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     </video>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     <watchdog model='itco' action='reset'>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <alias name='watchdog0'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     </watchdog>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     <memballoon model='virtio'>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <stats period='10'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <alias name='balloon0'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     </memballoon>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     <rng model='virtio'>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <backend model='random'>/dev/urandom</backend>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <alias name='rng0'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     </rng>
Jan 20 15:28:43 compute-1 nova_compute[225855]:   </devices>
Jan 20 15:28:43 compute-1 nova_compute[225855]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     <label>system_u:system_r:svirt_t:s0:c126,c530</label>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c126,c530</imagelabel>
Jan 20 15:28:43 compute-1 nova_compute[225855]:   </seclabel>
Jan 20 15:28:43 compute-1 nova_compute[225855]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     <label>+107:+107</label>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     <imagelabel>+107:+107</imagelabel>
Jan 20 15:28:43 compute-1 nova_compute[225855]:   </seclabel>
Jan 20 15:28:43 compute-1 nova_compute[225855]: </domain>
Jan 20 15:28:43 compute-1 nova_compute[225855]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Jan 20 15:28:43 compute-1 nova_compute[225855]: 2026-01-20 15:28:43.404 225859 INFO nova.virt.libvirt.driver [None req-8a5274d3-ca85-4f1c-baeb-511d7dc60fdf 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Successfully detached device tap77056a83-f3 from instance 770605b0-4686-4d97-9f82-7ed299482f50 from the persistent domain config.
Jan 20 15:28:43 compute-1 nova_compute[225855]: 2026-01-20 15:28:43.405 225859 DEBUG nova.virt.libvirt.driver [None req-8a5274d3-ca85-4f1c-baeb-511d7dc60fdf 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] (1/8): Attempting to detach device tap77056a83-f3 with device alias net1 from instance 770605b0-4686-4d97-9f82-7ed299482f50 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Jan 20 15:28:43 compute-1 nova_compute[225855]: 2026-01-20 15:28:43.405 225859 DEBUG nova.virt.libvirt.guest [None req-8a5274d3-ca85-4f1c-baeb-511d7dc60fdf 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] detach device xml: <interface type="ethernet">
Jan 20 15:28:43 compute-1 nova_compute[225855]:   <mac address="fa:16:3e:1e:e5:d8"/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:   <model type="virtio"/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:   <driver name="vhost" rx_queue_size="512"/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:   <mtu size="1442"/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:   <target dev="tap77056a83-f3"/>
Jan 20 15:28:43 compute-1 nova_compute[225855]: </interface>
Jan 20 15:28:43 compute-1 nova_compute[225855]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Jan 20 15:28:43 compute-1 nova_compute[225855]: 2026-01-20 15:28:43.410 225859 DEBUG nova.network.neutron [req-6a9af8bc-ec26-41c4-ad07-087e9e24e499 req-337a11f2-6747-4b8d-b5d3-3ec4cdbe76ae 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 770605b0-4686-4d97-9f82-7ed299482f50] Updated VIF entry in instance network info cache for port 77056a83-f3ee-44a1-8cd0-fac2b5327a1e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 15:28:43 compute-1 nova_compute[225855]: 2026-01-20 15:28:43.411 225859 DEBUG nova.network.neutron [req-6a9af8bc-ec26-41c4-ad07-087e9e24e499 req-337a11f2-6747-4b8d-b5d3-3ec4cdbe76ae 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 770605b0-4686-4d97-9f82-7ed299482f50] Updating instance_info_cache with network_info: [{"id": "52fb2315-9ec5-47a4-af4a-e0ed5e4caf21", "address": "fa:16:3e:f9:08:60", "network": {"id": "f19fb67c-6bab-4253-851e-ede5bb26f589", "bridge": "br-int", "label": "tempest-network-smoke--2116756150", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.209", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3168f57421fb49bfb94b85daedd1fe7d", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52fb2315-9e", "ovs_interfaceid": "52fb2315-9ec5-47a4-af4a-e0ed5e4caf21", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "77056a83-f3ee-44a1-8cd0-fac2b5327a1e", "address": "fa:16:3e:1e:e5:d8", "network": {"id": "d8ab95ce-159e-451b-baf0-5271f6a3160b", "bridge": "br-int", "label": "tempest-network-smoke--1027589119", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.28", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "3168f57421fb49bfb94b85daedd1fe7d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap77056a83-f3", "ovs_interfaceid": "77056a83-f3ee-44a1-8cd0-fac2b5327a1e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:28:43 compute-1 nova_compute[225855]: 2026-01-20 15:28:43.436 225859 DEBUG oslo_concurrency.lockutils [req-6a9af8bc-ec26-41c4-ad07-087e9e24e499 req-337a11f2-6747-4b8d-b5d3-3ec4cdbe76ae 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-770605b0-4686-4d97-9f82-7ed299482f50" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:28:43 compute-1 kernel: tap77056a83-f3 (unregistering): left promiscuous mode
Jan 20 15:28:43 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:28:43 compute-1 NetworkManager[49104]: <info>  [1768922923.5130] device (tap77056a83-f3): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 20 15:28:43 compute-1 nova_compute[225855]: 2026-01-20 15:28:43.524 225859 DEBUG nova.virt.libvirt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Received event <DeviceRemovedEvent: 1768922923.5232997, 770605b0-4686-4d97-9f82-7ed299482f50 => net1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Jan 20 15:28:43 compute-1 ovn_controller[130490]: 2026-01-20T15:28:43Z|00935|binding|INFO|Releasing lport 77056a83-f3ee-44a1-8cd0-fac2b5327a1e from this chassis (sb_readonly=0)
Jan 20 15:28:43 compute-1 ovn_controller[130490]: 2026-01-20T15:28:43Z|00936|binding|INFO|Setting lport 77056a83-f3ee-44a1-8cd0-fac2b5327a1e down in Southbound
Jan 20 15:28:43 compute-1 ovn_controller[130490]: 2026-01-20T15:28:43Z|00937|binding|INFO|Removing iface tap77056a83-f3 ovn-installed in OVS
Jan 20 15:28:43 compute-1 nova_compute[225855]: 2026-01-20 15:28:43.579 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:28:43 compute-1 nova_compute[225855]: 2026-01-20 15:28:43.581 225859 DEBUG nova.virt.libvirt.driver [None req-8a5274d3-ca85-4f1c-baeb-511d7dc60fdf 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Start waiting for the detach event from libvirt for device tap77056a83-f3 with device alias net1 for instance 770605b0-4686-4d97-9f82-7ed299482f50 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Jan 20 15:28:43 compute-1 nova_compute[225855]: 2026-01-20 15:28:43.581 225859 DEBUG nova.virt.libvirt.guest [None req-8a5274d3-ca85-4f1c-baeb-511d7dc60fdf 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:1e:e5:d8"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap77056a83-f3"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Jan 20 15:28:43 compute-1 nova_compute[225855]: 2026-01-20 15:28:43.582 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:28:43 compute-1 nova_compute[225855]: 2026-01-20 15:28:43.585 225859 DEBUG nova.virt.libvirt.guest [None req-8a5274d3-ca85-4f1c-baeb-511d7dc60fdf 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:1e:e5:d8"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap77056a83-f3"/></interface>not found in domain: <domain type='kvm' id='108'>
Jan 20 15:28:43 compute-1 nova_compute[225855]:   <name>instance-000000cc</name>
Jan 20 15:28:43 compute-1 nova_compute[225855]:   <uuid>770605b0-4686-4d97-9f82-7ed299482f50</uuid>
Jan 20 15:28:43 compute-1 nova_compute[225855]:   <metadata>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 15:28:43 compute-1 nova_compute[225855]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:   <nova:name>tempest-TestNetworkBasicOps-server-774862138</nova:name>
Jan 20 15:28:43 compute-1 nova_compute[225855]:   <nova:creationTime>2026-01-20 15:28:40</nova:creationTime>
Jan 20 15:28:43 compute-1 nova_compute[225855]:   <nova:flavor name="m1.nano">
Jan 20 15:28:43 compute-1 nova_compute[225855]:     <nova:memory>128</nova:memory>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     <nova:disk>1</nova:disk>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     <nova:swap>0</nova:swap>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     <nova:ephemeral>0</nova:ephemeral>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     <nova:vcpus>1</nova:vcpus>
Jan 20 15:28:43 compute-1 nova_compute[225855]:   </nova:flavor>
Jan 20 15:28:43 compute-1 nova_compute[225855]:   <nova:owner>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     <nova:user uuid="5338aa65dc0e4326a66ce79053787f14">tempest-TestNetworkBasicOps-807695970-project-member</nova:user>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     <nova:project uuid="3168f57421fb49bfb94b85daedd1fe7d">tempest-TestNetworkBasicOps-807695970</nova:project>
Jan 20 15:28:43 compute-1 nova_compute[225855]:   </nova:owner>
Jan 20 15:28:43 compute-1 nova_compute[225855]:   <nova:root type="image" uuid="a32b3e07-16d8-46fd-9a7b-c242c432fcf9"/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:   <nova:ports>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     <nova:port uuid="52fb2315-9ec5-47a4-af4a-e0ed5e4caf21">
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     </nova:port>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     <nova:port uuid="77056a83-f3ee-44a1-8cd0-fac2b5327a1e">
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <nova:ip type="fixed" address="10.100.0.28" ipVersion="4"/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     </nova:port>
Jan 20 15:28:43 compute-1 nova_compute[225855]:   </nova:ports>
Jan 20 15:28:43 compute-1 nova_compute[225855]: </nova:instance>
Jan 20 15:28:43 compute-1 nova_compute[225855]:   </metadata>
Jan 20 15:28:43 compute-1 nova_compute[225855]:   <memory unit='KiB'>131072</memory>
Jan 20 15:28:43 compute-1 nova_compute[225855]:   <currentMemory unit='KiB'>131072</currentMemory>
Jan 20 15:28:43 compute-1 nova_compute[225855]:   <vcpu placement='static'>1</vcpu>
Jan 20 15:28:43 compute-1 nova_compute[225855]:   <resource>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     <partition>/machine</partition>
Jan 20 15:28:43 compute-1 nova_compute[225855]:   </resource>
Jan 20 15:28:43 compute-1 nova_compute[225855]:   <sysinfo type='smbios'>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     <system>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <entry name='manufacturer'>RDO</entry>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <entry name='product'>OpenStack Compute</entry>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <entry name='serial'>770605b0-4686-4d97-9f82-7ed299482f50</entry>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <entry name='uuid'>770605b0-4686-4d97-9f82-7ed299482f50</entry>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <entry name='family'>Virtual Machine</entry>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     </system>
Jan 20 15:28:43 compute-1 nova_compute[225855]:   </sysinfo>
Jan 20 15:28:43 compute-1 nova_compute[225855]:   <os>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     <boot dev='hd'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     <smbios mode='sysinfo'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:   </os>
Jan 20 15:28:43 compute-1 nova_compute[225855]:   <features>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     <acpi/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     <apic/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     <vmcoreinfo state='on'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:   </features>
Jan 20 15:28:43 compute-1 nova_compute[225855]:   <cpu mode='custom' match='exact' check='full'>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     <model fallback='forbid'>Nehalem</model>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     <feature policy='require' name='x2apic'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     <feature policy='require' name='hypervisor'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     <feature policy='require' name='vme'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:   </cpu>
Jan 20 15:28:43 compute-1 nova_compute[225855]:   <clock offset='utc'>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     <timer name='pit' tickpolicy='delay'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     <timer name='rtc' tickpolicy='catchup'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     <timer name='hpet' present='no'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:   </clock>
Jan 20 15:28:43 compute-1 nova_compute[225855]:   <on_poweroff>destroy</on_poweroff>
Jan 20 15:28:43 compute-1 nova_compute[225855]:   <on_reboot>restart</on_reboot>
Jan 20 15:28:43 compute-1 nova_compute[225855]:   <on_crash>destroy</on_crash>
Jan 20 15:28:43 compute-1 nova_compute[225855]:   <devices>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     <disk type='network' device='disk'>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <driver name='qemu' type='raw' cache='none'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <auth username='openstack'>
Jan 20 15:28:43 compute-1 nova_compute[225855]:         <secret type='ceph' uuid='e399cf45-e6b6-5393-99f1-75c601d3f188'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       </auth>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <source protocol='rbd' name='vms/770605b0-4686-4d97-9f82-7ed299482f50_disk' index='2'>
Jan 20 15:28:43 compute-1 nova_compute[225855]:         <host name='192.168.122.100' port='6789'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:         <host name='192.168.122.102' port='6789'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:         <host name='192.168.122.101' port='6789'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       </source>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <target dev='vda' bus='virtio'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <alias name='virtio-disk0'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     </disk>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     <disk type='network' device='cdrom'>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <driver name='qemu' type='raw' cache='none'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <auth username='openstack'>
Jan 20 15:28:43 compute-1 nova_compute[225855]:         <secret type='ceph' uuid='e399cf45-e6b6-5393-99f1-75c601d3f188'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       </auth>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <source protocol='rbd' name='vms/770605b0-4686-4d97-9f82-7ed299482f50_disk.config' index='1'>
Jan 20 15:28:43 compute-1 nova_compute[225855]:         <host name='192.168.122.100' port='6789'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:         <host name='192.168.122.102' port='6789'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:         <host name='192.168.122.101' port='6789'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       </source>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <target dev='sda' bus='sata'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <readonly/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <alias name='sata0-0-0'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     </disk>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     <controller type='pci' index='0' model='pcie-root'>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <alias name='pcie.0'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     </controller>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     <controller type='pci' index='1' model='pcie-root-port'>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <target chassis='1' port='0x10'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <alias name='pci.1'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     </controller>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     <controller type='pci' index='2' model='pcie-root-port'>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <target chassis='2' port='0x11'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <alias name='pci.2'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     </controller>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     <controller type='pci' index='3' model='pcie-root-port'>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <target chassis='3' port='0x12'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <alias name='pci.3'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     </controller>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     <controller type='pci' index='4' model='pcie-root-port'>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <target chassis='4' port='0x13'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <alias name='pci.4'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     </controller>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     <controller type='pci' index='5' model='pcie-root-port'>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <target chassis='5' port='0x14'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <alias name='pci.5'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     </controller>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     <controller type='pci' index='6' model='pcie-root-port'>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <target chassis='6' port='0x15'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <alias name='pci.6'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     </controller>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     <controller type='pci' index='7' model='pcie-root-port'>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <target chassis='7' port='0x16'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <alias name='pci.7'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     </controller>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     <controller type='pci' index='8' model='pcie-root-port'>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <target chassis='8' port='0x17'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <alias name='pci.8'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     </controller>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     <controller type='pci' index='9' model='pcie-root-port'>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <target chassis='9' port='0x18'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <alias name='pci.9'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     </controller>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     <controller type='pci' index='10' model='pcie-root-port'>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <target chassis='10' port='0x19'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <alias name='pci.10'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     </controller>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     <controller type='pci' index='11' model='pcie-root-port'>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <target chassis='11' port='0x1a'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <alias name='pci.11'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     </controller>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     <controller type='pci' index='12' model='pcie-root-port'>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <target chassis='12' port='0x1b'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <alias name='pci.12'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     </controller>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     <controller type='pci' index='13' model='pcie-root-port'>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <target chassis='13' port='0x1c'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <alias name='pci.13'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     </controller>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     <controller type='pci' index='14' model='pcie-root-port'>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <target chassis='14' port='0x1d'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <alias name='pci.14'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     </controller>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     <controller type='pci' index='15' model='pcie-root-port'>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <target chassis='15' port='0x1e'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <alias name='pci.15'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     </controller>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     <controller type='pci' index='16' model='pcie-root-port'>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <target chassis='16' port='0x1f'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <alias name='pci.16'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     </controller>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     <controller type='pci' index='17' model='pcie-root-port'>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <target chassis='17' port='0x20'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <alias name='pci.17'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     </controller>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     <controller type='pci' index='18' model='pcie-root-port'>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <target chassis='18' port='0x21'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <alias name='pci.18'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     </controller>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     <controller type='pci' index='19' model='pcie-root-port'>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <target chassis='19' port='0x22'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <alias name='pci.19'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     </controller>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     <controller type='pci' index='20' model='pcie-root-port'>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <target chassis='20' port='0x23'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <alias name='pci.20'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     </controller>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     <controller type='pci' index='21' model='pcie-root-port'>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <target chassis='21' port='0x24'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <alias name='pci.21'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     </controller>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     <controller type='pci' index='22' model='pcie-root-port'>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <target chassis='22' port='0x25'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <alias name='pci.22'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     </controller>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     <controller type='pci' index='23' model='pcie-root-port'>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <target chassis='23' port='0x26'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <alias name='pci.23'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     </controller>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     <controller type='pci' index='24' model='pcie-root-port'>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <target chassis='24' port='0x27'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <alias name='pci.24'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     </controller>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     <controller type='pci' index='25' model='pcie-root-port'>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <target chassis='25' port='0x28'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <alias name='pci.25'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     </controller>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <model name='pcie-pci-bridge'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <alias name='pci.26'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     </controller>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     <controller type='usb' index='0' model='piix3-uhci'>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <alias name='usb'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     </controller>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     <controller type='sata' index='0'>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <alias name='ide'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     </controller>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     <interface type='ethernet'>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <mac address='fa:16:3e:f9:08:60'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <target dev='tap52fb2315-9e'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <model type='virtio'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <driver name='vhost' rx_queue_size='512'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <mtu size='1442'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <alias name='net0'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     </interface>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     <serial type='pty'>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <source path='/dev/pts/0'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <log file='/var/lib/nova/instances/770605b0-4686-4d97-9f82-7ed299482f50/console.log' append='off'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <target type='isa-serial' port='0'>
Jan 20 15:28:43 compute-1 nova_compute[225855]:         <model name='isa-serial'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       </target>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <alias name='serial0'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     </serial>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     <console type='pty' tty='/dev/pts/0'>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <source path='/dev/pts/0'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <log file='/var/lib/nova/instances/770605b0-4686-4d97-9f82-7ed299482f50/console.log' append='off'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <target type='serial' port='0'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <alias name='serial0'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     </console>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     <input type='tablet' bus='usb'>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <alias name='input0'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <address type='usb' bus='0' port='1'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     </input>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     <input type='mouse' bus='ps2'>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <alias name='input1'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     </input>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     <input type='keyboard' bus='ps2'>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <alias name='input2'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     </input>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <listen type='address' address='::0'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     </graphics>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     <audio id='1' type='none'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     <video>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <model type='virtio' heads='1' primary='yes'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <alias name='video0'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     </video>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     <watchdog model='itco' action='reset'>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <alias name='watchdog0'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     </watchdog>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     <memballoon model='virtio'>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <stats period='10'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <alias name='balloon0'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     </memballoon>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     <rng model='virtio'>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <backend model='random'>/dev/urandom</backend>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <alias name='rng0'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     </rng>
Jan 20 15:28:43 compute-1 nova_compute[225855]:   </devices>
Jan 20 15:28:43 compute-1 nova_compute[225855]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     <label>system_u:system_r:svirt_t:s0:c126,c530</label>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c126,c530</imagelabel>
Jan 20 15:28:43 compute-1 nova_compute[225855]:   </seclabel>
Jan 20 15:28:43 compute-1 nova_compute[225855]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     <label>+107:+107</label>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     <imagelabel>+107:+107</imagelabel>
Jan 20 15:28:43 compute-1 nova_compute[225855]:   </seclabel>
Jan 20 15:28:43 compute-1 nova_compute[225855]: </domain>
Jan 20 15:28:43 compute-1 nova_compute[225855]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Jan 20 15:28:43 compute-1 nova_compute[225855]: 2026-01-20 15:28:43.585 225859 INFO nova.virt.libvirt.driver [None req-8a5274d3-ca85-4f1c-baeb-511d7dc60fdf 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Successfully detached device tap77056a83-f3 from instance 770605b0-4686-4d97-9f82-7ed299482f50 from the live domain config.
Jan 20 15:28:43 compute-1 nova_compute[225855]: 2026-01-20 15:28:43.586 225859 DEBUG nova.virt.libvirt.vif [None req-8a5274d3-ca85-4f1c-baeb-511d7dc60fdf 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T15:28:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-774862138',display_name='tempest-TestNetworkBasicOps-server-774862138',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-774862138',id=204,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHV4gRYgjRGDnaHxkSMJQMAVAwkk8jLZAdWo/fxENkiAFcbr1CXMhNYzs2hTTM5NoLjR4u2qMt5dKH7C9b4LHMK/3O49DJNSLMAUfk3HkTe/ulPxPFwHn3vfQCPGwBrM5A==',key_name='tempest-TestNetworkBasicOps-1250991011',keypairs=<?>,launch_index=0,launched_at=2026-01-20T15:28:13Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3168f57421fb49bfb94b85daedd1fe7d',ramdisk_id='',reservation_id='r-u6z01i8j',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-807695970',owner_user_name='tempest-TestNetworkBasicOps-807695970-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T15:28:13Z,user_data=None,user_id='5338aa65dc0e4326a66ce79053787f14',uuid=770605b0-4686-4d97-9f82-7ed299482f50,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "77056a83-f3ee-44a1-8cd0-fac2b5327a1e", "address": "fa:16:3e:1e:e5:d8", "network": {"id": "d8ab95ce-159e-451b-baf0-5271f6a3160b", "bridge": "br-int", "label": "tempest-network-smoke--1027589119", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.28", "type": "fixed", "version": 4, 
"meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3168f57421fb49bfb94b85daedd1fe7d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap77056a83-f3", "ovs_interfaceid": "77056a83-f3ee-44a1-8cd0-fac2b5327a1e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 20 15:28:43 compute-1 nova_compute[225855]: 2026-01-20 15:28:43.586 225859 DEBUG nova.network.os_vif_util [None req-8a5274d3-ca85-4f1c-baeb-511d7dc60fdf 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Converting VIF {"id": "77056a83-f3ee-44a1-8cd0-fac2b5327a1e", "address": "fa:16:3e:1e:e5:d8", "network": {"id": "d8ab95ce-159e-451b-baf0-5271f6a3160b", "bridge": "br-int", "label": "tempest-network-smoke--1027589119", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.28", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3168f57421fb49bfb94b85daedd1fe7d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap77056a83-f3", "ovs_interfaceid": "77056a83-f3ee-44a1-8cd0-fac2b5327a1e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 15:28:43 compute-1 nova_compute[225855]: 2026-01-20 15:28:43.587 225859 DEBUG nova.network.os_vif_util [None req-8a5274d3-ca85-4f1c-baeb-511d7dc60fdf 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1e:e5:d8,bridge_name='br-int',has_traffic_filtering=True,id=77056a83-f3ee-44a1-8cd0-fac2b5327a1e,network=Network(d8ab95ce-159e-451b-baf0-5271f6a3160b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap77056a83-f3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 15:28:43 compute-1 nova_compute[225855]: 2026-01-20 15:28:43.587 225859 DEBUG os_vif [None req-8a5274d3-ca85-4f1c-baeb-511d7dc60fdf 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1e:e5:d8,bridge_name='br-int',has_traffic_filtering=True,id=77056a83-f3ee-44a1-8cd0-fac2b5327a1e,network=Network(d8ab95ce-159e-451b-baf0-5271f6a3160b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap77056a83-f3') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 20 15:28:43 compute-1 nova_compute[225855]: 2026-01-20 15:28:43.589 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:28:43 compute-1 nova_compute[225855]: 2026-01-20 15:28:43.589 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap77056a83-f3, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:28:43 compute-1 nova_compute[225855]: 2026-01-20 15:28:43.590 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:28:43 compute-1 nova_compute[225855]: 2026-01-20 15:28:43.592 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:28:43 compute-1 nova_compute[225855]: 2026-01-20 15:28:43.595 225859 INFO os_vif [None req-8a5274d3-ca85-4f1c-baeb-511d7dc60fdf 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1e:e5:d8,bridge_name='br-int',has_traffic_filtering=True,id=77056a83-f3ee-44a1-8cd0-fac2b5327a1e,network=Network(d8ab95ce-159e-451b-baf0-5271f6a3160b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap77056a83-f3')
Jan 20 15:28:43 compute-1 nova_compute[225855]: 2026-01-20 15:28:43.595 225859 DEBUG nova.virt.libvirt.guest [None req-8a5274d3-ca85-4f1c-baeb-511d7dc60fdf 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 15:28:43 compute-1 nova_compute[225855]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:   <nova:name>tempest-TestNetworkBasicOps-server-774862138</nova:name>
Jan 20 15:28:43 compute-1 nova_compute[225855]:   <nova:creationTime>2026-01-20 15:28:43</nova:creationTime>
Jan 20 15:28:43 compute-1 nova_compute[225855]:   <nova:flavor name="m1.nano">
Jan 20 15:28:43 compute-1 nova_compute[225855]:     <nova:memory>128</nova:memory>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     <nova:disk>1</nova:disk>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     <nova:swap>0</nova:swap>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     <nova:ephemeral>0</nova:ephemeral>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     <nova:vcpus>1</nova:vcpus>
Jan 20 15:28:43 compute-1 nova_compute[225855]:   </nova:flavor>
Jan 20 15:28:43 compute-1 nova_compute[225855]:   <nova:owner>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     <nova:user uuid="5338aa65dc0e4326a66ce79053787f14">tempest-TestNetworkBasicOps-807695970-project-member</nova:user>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     <nova:project uuid="3168f57421fb49bfb94b85daedd1fe7d">tempest-TestNetworkBasicOps-807695970</nova:project>
Jan 20 15:28:43 compute-1 nova_compute[225855]:   </nova:owner>
Jan 20 15:28:43 compute-1 nova_compute[225855]:   <nova:root type="image" uuid="a32b3e07-16d8-46fd-9a7b-c242c432fcf9"/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:   <nova:ports>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     <nova:port uuid="52fb2315-9ec5-47a4-af4a-e0ed5e4caf21">
Jan 20 15:28:43 compute-1 nova_compute[225855]:       <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Jan 20 15:28:43 compute-1 nova_compute[225855]:     </nova:port>
Jan 20 15:28:43 compute-1 nova_compute[225855]:   </nova:ports>
Jan 20 15:28:43 compute-1 nova_compute[225855]: </nova:instance>
Jan 20 15:28:43 compute-1 nova_compute[225855]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Jan 20 15:28:43 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:28:43.595 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1e:e5:d8 10.100.0.28'], port_security=['fa:16:3e:1e:e5:d8 10.100.0.28'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.28/28', 'neutron:device_id': '770605b0-4686-4d97-9f82-7ed299482f50', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d8ab95ce-159e-451b-baf0-5271f6a3160b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3168f57421fb49bfb94b85daedd1fe7d', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'aaa69ba6-9a27-441e-877e-2cd188322a42', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4ab7f4cc-800b-4c21-91a0-e2fd29d04e91, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], logical_port=77056a83-f3ee-44a1-8cd0-fac2b5327a1e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:28:43 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:28:43.596 140354 INFO neutron.agent.ovn.metadata.agent [-] Port 77056a83-f3ee-44a1-8cd0-fac2b5327a1e in datapath d8ab95ce-159e-451b-baf0-5271f6a3160b unbound from our chassis
Jan 20 15:28:43 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:28:43.597 140354 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d8ab95ce-159e-451b-baf0-5271f6a3160b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 20 15:28:43 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:28:43.598 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[0ebf6446-90d9-4b57-90df-ff608afb8b20]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:28:43 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:28:43.598 140354 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-d8ab95ce-159e-451b-baf0-5271f6a3160b namespace which is not needed anymore
Jan 20 15:28:43 compute-1 nova_compute[225855]: 2026-01-20 15:28:43.642 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:28:43 compute-1 nova_compute[225855]: 2026-01-20 15:28:43.696 225859 DEBUG nova.compute.manager [req-7f6b7969-a575-4c67-816d-46a3ae8f2044 req-c6a518c5-ffb6-4594-abd4-48fdfeba6652 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 770605b0-4686-4d97-9f82-7ed299482f50] Received event network-vif-plugged-77056a83-f3ee-44a1-8cd0-fac2b5327a1e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:28:43 compute-1 nova_compute[225855]: 2026-01-20 15:28:43.697 225859 DEBUG oslo_concurrency.lockutils [req-7f6b7969-a575-4c67-816d-46a3ae8f2044 req-c6a518c5-ffb6-4594-abd4-48fdfeba6652 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "770605b0-4686-4d97-9f82-7ed299482f50-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:28:43 compute-1 nova_compute[225855]: 2026-01-20 15:28:43.697 225859 DEBUG oslo_concurrency.lockutils [req-7f6b7969-a575-4c67-816d-46a3ae8f2044 req-c6a518c5-ffb6-4594-abd4-48fdfeba6652 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "770605b0-4686-4d97-9f82-7ed299482f50-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:28:43 compute-1 nova_compute[225855]: 2026-01-20 15:28:43.697 225859 DEBUG oslo_concurrency.lockutils [req-7f6b7969-a575-4c67-816d-46a3ae8f2044 req-c6a518c5-ffb6-4594-abd4-48fdfeba6652 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "770605b0-4686-4d97-9f82-7ed299482f50-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:28:43 compute-1 nova_compute[225855]: 2026-01-20 15:28:43.697 225859 DEBUG nova.compute.manager [req-7f6b7969-a575-4c67-816d-46a3ae8f2044 req-c6a518c5-ffb6-4594-abd4-48fdfeba6652 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 770605b0-4686-4d97-9f82-7ed299482f50] No waiting events found dispatching network-vif-plugged-77056a83-f3ee-44a1-8cd0-fac2b5327a1e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:28:43 compute-1 nova_compute[225855]: 2026-01-20 15:28:43.698 225859 WARNING nova.compute.manager [req-7f6b7969-a575-4c67-816d-46a3ae8f2044 req-c6a518c5-ffb6-4594-abd4-48fdfeba6652 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 770605b0-4686-4d97-9f82-7ed299482f50] Received unexpected event network-vif-plugged-77056a83-f3ee-44a1-8cd0-fac2b5327a1e for instance with vm_state active and task_state None.
Jan 20 15:28:43 compute-1 neutron-haproxy-ovnmeta-d8ab95ce-159e-451b-baf0-5271f6a3160b[316429]: [NOTICE]   (316433) : haproxy version is 2.8.14-c23fe91
Jan 20 15:28:43 compute-1 neutron-haproxy-ovnmeta-d8ab95ce-159e-451b-baf0-5271f6a3160b[316429]: [NOTICE]   (316433) : path to executable is /usr/sbin/haproxy
Jan 20 15:28:43 compute-1 neutron-haproxy-ovnmeta-d8ab95ce-159e-451b-baf0-5271f6a3160b[316429]: [WARNING]  (316433) : Exiting Master process...
Jan 20 15:28:43 compute-1 neutron-haproxy-ovnmeta-d8ab95ce-159e-451b-baf0-5271f6a3160b[316429]: [WARNING]  (316433) : Exiting Master process...
Jan 20 15:28:43 compute-1 neutron-haproxy-ovnmeta-d8ab95ce-159e-451b-baf0-5271f6a3160b[316429]: [ALERT]    (316433) : Current worker (316435) exited with code 143 (Terminated)
Jan 20 15:28:43 compute-1 neutron-haproxy-ovnmeta-d8ab95ce-159e-451b-baf0-5271f6a3160b[316429]: [WARNING]  (316433) : All workers exited. Exiting... (0)
Jan 20 15:28:43 compute-1 systemd[1]: libpod-62b8365da44e9b0899fd7c930b462a3d53b7c0b3ab06aa3a347b5ba7ee813dbc.scope: Deactivated successfully.
Jan 20 15:28:43 compute-1 podman[316469]: 2026-01-20 15:28:43.733240821 +0000 UTC m=+0.044737449 container died 62b8365da44e9b0899fd7c930b462a3d53b7c0b3ab06aa3a347b5ba7ee813dbc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d8ab95ce-159e-451b-baf0-5271f6a3160b, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202)
Jan 20 15:28:43 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-62b8365da44e9b0899fd7c930b462a3d53b7c0b3ab06aa3a347b5ba7ee813dbc-userdata-shm.mount: Deactivated successfully.
Jan 20 15:28:43 compute-1 systemd[1]: var-lib-containers-storage-overlay-ec07ec1121543b297cde9c315ec4473068cc96eb4066eb377bd1883358661b91-merged.mount: Deactivated successfully.
Jan 20 15:28:43 compute-1 podman[316469]: 2026-01-20 15:28:43.773277344 +0000 UTC m=+0.084773972 container cleanup 62b8365da44e9b0899fd7c930b462a3d53b7c0b3ab06aa3a347b5ba7ee813dbc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d8ab95ce-159e-451b-baf0-5271f6a3160b, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 20 15:28:43 compute-1 systemd[1]: libpod-conmon-62b8365da44e9b0899fd7c930b462a3d53b7c0b3ab06aa3a347b5ba7ee813dbc.scope: Deactivated successfully.
Jan 20 15:28:43 compute-1 podman[316504]: 2026-01-20 15:28:43.833449832 +0000 UTC m=+0.039018075 container remove 62b8365da44e9b0899fd7c930b462a3d53b7c0b3ab06aa3a347b5ba7ee813dbc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d8ab95ce-159e-451b-baf0-5271f6a3160b, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 20 15:28:43 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:28:43.838 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[df4a7a0b-7f57-4b24-956d-26ee25e740d8]: (4, ('Tue Jan 20 03:28:43 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-d8ab95ce-159e-451b-baf0-5271f6a3160b (62b8365da44e9b0899fd7c930b462a3d53b7c0b3ab06aa3a347b5ba7ee813dbc)\n62b8365da44e9b0899fd7c930b462a3d53b7c0b3ab06aa3a347b5ba7ee813dbc\nTue Jan 20 03:28:43 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-d8ab95ce-159e-451b-baf0-5271f6a3160b (62b8365da44e9b0899fd7c930b462a3d53b7c0b3ab06aa3a347b5ba7ee813dbc)\n62b8365da44e9b0899fd7c930b462a3d53b7c0b3ab06aa3a347b5ba7ee813dbc\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:28:43 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:28:43.840 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[8665d42c-bf53-4666-94ef-b9a1af30562b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:28:43 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:28:43.841 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd8ab95ce-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:28:43 compute-1 nova_compute[225855]: 2026-01-20 15:28:43.843 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:28:43 compute-1 kernel: tapd8ab95ce-10: left promiscuous mode
Jan 20 15:28:43 compute-1 nova_compute[225855]: 2026-01-20 15:28:43.845 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:28:43 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:28:43.847 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[0a6ca87c-a3f4-43ea-bd0b-1c8eab7e945f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:28:43 compute-1 nova_compute[225855]: 2026-01-20 15:28:43.857 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:28:43 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:28:43.870 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[6de378a1-9863-4905-a5b8-a7c4b1d8c778]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:28:43 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:28:43.871 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[7004bc9b-9bc9-44ca-8d84-55fb2adfdcbe]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:28:43 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:28:43.884 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[3f42b828-b5a7-4922-a123-4a37c65a5ad0]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 795946, 'reachable_time': 32719, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 316520, 'error': None, 'target': 'ovnmeta-d8ab95ce-159e-451b-baf0-5271f6a3160b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:28:43 compute-1 systemd[1]: run-netns-ovnmeta\x2dd8ab95ce\x2d159e\x2d451b\x2dbaf0\x2d5271f6a3160b.mount: Deactivated successfully.
Jan 20 15:28:43 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:28:43.888 140466 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-d8ab95ce-159e-451b-baf0-5271f6a3160b deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 20 15:28:43 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:28:43.888 140466 DEBUG oslo.privsep.daemon [-] privsep: reply[6fa7837d-6414-47ad-97b1-24b755609a8c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:28:44 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:28:44 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:28:44 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:28:44.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:28:44 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:28:44 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:28:44 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:28:44.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:28:44 compute-1 ceph-mon[81775]: pgmap v3196: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 682 B/s rd, 14 KiB/s wr, 0 op/s
Jan 20 15:28:45 compute-1 nova_compute[225855]: 2026-01-20 15:28:45.580 225859 DEBUG oslo_concurrency.lockutils [None req-8a5274d3-ca85-4f1c-baeb-511d7dc60fdf 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Acquiring lock "refresh_cache-770605b0-4686-4d97-9f82-7ed299482f50" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:28:45 compute-1 nova_compute[225855]: 2026-01-20 15:28:45.580 225859 DEBUG oslo_concurrency.lockutils [None req-8a5274d3-ca85-4f1c-baeb-511d7dc60fdf 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Acquired lock "refresh_cache-770605b0-4686-4d97-9f82-7ed299482f50" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:28:45 compute-1 nova_compute[225855]: 2026-01-20 15:28:45.580 225859 DEBUG nova.network.neutron [None req-8a5274d3-ca85-4f1c-baeb-511d7dc60fdf 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 770605b0-4686-4d97-9f82-7ed299482f50] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 15:28:45 compute-1 nova_compute[225855]: 2026-01-20 15:28:45.683 225859 DEBUG nova.compute.manager [req-72e41404-7980-41eb-ad27-c96cfd6fabc7 req-880a0b72-256e-40e0-aae8-c6805c21012b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 770605b0-4686-4d97-9f82-7ed299482f50] Received event network-vif-deleted-77056a83-f3ee-44a1-8cd0-fac2b5327a1e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:28:45 compute-1 nova_compute[225855]: 2026-01-20 15:28:45.683 225859 INFO nova.compute.manager [req-72e41404-7980-41eb-ad27-c96cfd6fabc7 req-880a0b72-256e-40e0-aae8-c6805c21012b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 770605b0-4686-4d97-9f82-7ed299482f50] Neutron deleted interface 77056a83-f3ee-44a1-8cd0-fac2b5327a1e; detaching it from the instance and deleting it from the info cache
Jan 20 15:28:45 compute-1 nova_compute[225855]: 2026-01-20 15:28:45.684 225859 DEBUG nova.network.neutron [req-72e41404-7980-41eb-ad27-c96cfd6fabc7 req-880a0b72-256e-40e0-aae8-c6805c21012b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 770605b0-4686-4d97-9f82-7ed299482f50] Updating instance_info_cache with network_info: [{"id": "52fb2315-9ec5-47a4-af4a-e0ed5e4caf21", "address": "fa:16:3e:f9:08:60", "network": {"id": "f19fb67c-6bab-4253-851e-ede5bb26f589", "bridge": "br-int", "label": "tempest-network-smoke--2116756150", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.209", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3168f57421fb49bfb94b85daedd1fe7d", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52fb2315-9e", "ovs_interfaceid": "52fb2315-9ec5-47a4-af4a-e0ed5e4caf21", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:28:45 compute-1 nova_compute[225855]: 2026-01-20 15:28:45.725 225859 DEBUG nova.objects.instance [req-72e41404-7980-41eb-ad27-c96cfd6fabc7 req-880a0b72-256e-40e0-aae8-c6805c21012b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lazy-loading 'system_metadata' on Instance uuid 770605b0-4686-4d97-9f82-7ed299482f50 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:28:45 compute-1 nova_compute[225855]: 2026-01-20 15:28:45.758 225859 DEBUG nova.objects.instance [req-72e41404-7980-41eb-ad27-c96cfd6fabc7 req-880a0b72-256e-40e0-aae8-c6805c21012b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lazy-loading 'flavor' on Instance uuid 770605b0-4686-4d97-9f82-7ed299482f50 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:28:45 compute-1 nova_compute[225855]: 2026-01-20 15:28:45.793 225859 DEBUG nova.virt.libvirt.vif [req-72e41404-7980-41eb-ad27-c96cfd6fabc7 req-880a0b72-256e-40e0-aae8-c6805c21012b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T15:28:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-774862138',display_name='tempest-TestNetworkBasicOps-server-774862138',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-774862138',id=204,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHV4gRYgjRGDnaHxkSMJQMAVAwkk8jLZAdWo/fxENkiAFcbr1CXMhNYzs2hTTM5NoLjR4u2qMt5dKH7C9b4LHMK/3O49DJNSLMAUfk3HkTe/ulPxPFwHn3vfQCPGwBrM5A==',key_name='tempest-TestNetworkBasicOps-1250991011',keypairs=<?>,launch_index=0,launched_at=2026-01-20T15:28:13Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3168f57421fb49bfb94b85daedd1fe7d',ramdisk_id='',reservation_id='r-u6z01i8j',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-807695970',owner_user_name='tempest-TestNetworkBasicOps-807695970-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T15:28:13Z,user_data=None,user_id='5338aa65dc0e4326a66ce79053787f14',uuid=770605b0-4686-4d97-9f82-7ed299482f50,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "77056a83-f3ee-44a1-8cd0-fac2b5327a1e", "address": "fa:16:3e:1e:e5:d8", "network": {"id": "d8ab95ce-159e-451b-baf0-5271f6a3160b", "bridge": "br-int", "label": "tempest-network-smoke--1027589119", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.28", "type": "fixed", "version": 4, "meta": {}, 
"floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3168f57421fb49bfb94b85daedd1fe7d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap77056a83-f3", "ovs_interfaceid": "77056a83-f3ee-44a1-8cd0-fac2b5327a1e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 20 15:28:45 compute-1 nova_compute[225855]: 2026-01-20 15:28:45.793 225859 DEBUG nova.network.os_vif_util [req-72e41404-7980-41eb-ad27-c96cfd6fabc7 req-880a0b72-256e-40e0-aae8-c6805c21012b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Converting VIF {"id": "77056a83-f3ee-44a1-8cd0-fac2b5327a1e", "address": "fa:16:3e:1e:e5:d8", "network": {"id": "d8ab95ce-159e-451b-baf0-5271f6a3160b", "bridge": "br-int", "label": "tempest-network-smoke--1027589119", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.28", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3168f57421fb49bfb94b85daedd1fe7d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap77056a83-f3", "ovs_interfaceid": "77056a83-f3ee-44a1-8cd0-fac2b5327a1e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 15:28:45 compute-1 nova_compute[225855]: 2026-01-20 15:28:45.794 225859 DEBUG nova.network.os_vif_util [req-72e41404-7980-41eb-ad27-c96cfd6fabc7 req-880a0b72-256e-40e0-aae8-c6805c21012b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1e:e5:d8,bridge_name='br-int',has_traffic_filtering=True,id=77056a83-f3ee-44a1-8cd0-fac2b5327a1e,network=Network(d8ab95ce-159e-451b-baf0-5271f6a3160b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap77056a83-f3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 15:28:45 compute-1 nova_compute[225855]: 2026-01-20 15:28:45.797 225859 DEBUG nova.virt.libvirt.guest [req-72e41404-7980-41eb-ad27-c96cfd6fabc7 req-880a0b72-256e-40e0-aae8-c6805c21012b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:1e:e5:d8"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap77056a83-f3"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Jan 20 15:28:45 compute-1 nova_compute[225855]: 2026-01-20 15:28:45.800 225859 DEBUG nova.virt.libvirt.guest [req-72e41404-7980-41eb-ad27-c96cfd6fabc7 req-880a0b72-256e-40e0-aae8-c6805c21012b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:1e:e5:d8"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap77056a83-f3"/></interface>not found in domain: <domain type='kvm' id='108'>
Jan 20 15:28:45 compute-1 nova_compute[225855]:   <name>instance-000000cc</name>
Jan 20 15:28:45 compute-1 nova_compute[225855]:   <uuid>770605b0-4686-4d97-9f82-7ed299482f50</uuid>
Jan 20 15:28:45 compute-1 nova_compute[225855]:   <metadata>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 15:28:45 compute-1 nova_compute[225855]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:   <nova:name>tempest-TestNetworkBasicOps-server-774862138</nova:name>
Jan 20 15:28:45 compute-1 nova_compute[225855]:   <nova:creationTime>2026-01-20 15:28:43</nova:creationTime>
Jan 20 15:28:45 compute-1 nova_compute[225855]:   <nova:flavor name="m1.nano">
Jan 20 15:28:45 compute-1 nova_compute[225855]:     <nova:memory>128</nova:memory>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     <nova:disk>1</nova:disk>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     <nova:swap>0</nova:swap>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     <nova:ephemeral>0</nova:ephemeral>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     <nova:vcpus>1</nova:vcpus>
Jan 20 15:28:45 compute-1 nova_compute[225855]:   </nova:flavor>
Jan 20 15:28:45 compute-1 nova_compute[225855]:   <nova:owner>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     <nova:user uuid="5338aa65dc0e4326a66ce79053787f14">tempest-TestNetworkBasicOps-807695970-project-member</nova:user>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     <nova:project uuid="3168f57421fb49bfb94b85daedd1fe7d">tempest-TestNetworkBasicOps-807695970</nova:project>
Jan 20 15:28:45 compute-1 nova_compute[225855]:   </nova:owner>
Jan 20 15:28:45 compute-1 nova_compute[225855]:   <nova:root type="image" uuid="a32b3e07-16d8-46fd-9a7b-c242c432fcf9"/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:   <nova:ports>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     <nova:port uuid="52fb2315-9ec5-47a4-af4a-e0ed5e4caf21">
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     </nova:port>
Jan 20 15:28:45 compute-1 nova_compute[225855]:   </nova:ports>
Jan 20 15:28:45 compute-1 nova_compute[225855]: </nova:instance>
Jan 20 15:28:45 compute-1 nova_compute[225855]:   </metadata>
Jan 20 15:28:45 compute-1 nova_compute[225855]:   <memory unit='KiB'>131072</memory>
Jan 20 15:28:45 compute-1 nova_compute[225855]:   <currentMemory unit='KiB'>131072</currentMemory>
Jan 20 15:28:45 compute-1 nova_compute[225855]:   <vcpu placement='static'>1</vcpu>
Jan 20 15:28:45 compute-1 nova_compute[225855]:   <resource>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     <partition>/machine</partition>
Jan 20 15:28:45 compute-1 nova_compute[225855]:   </resource>
Jan 20 15:28:45 compute-1 nova_compute[225855]:   <sysinfo type='smbios'>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     <system>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <entry name='manufacturer'>RDO</entry>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <entry name='product'>OpenStack Compute</entry>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <entry name='serial'>770605b0-4686-4d97-9f82-7ed299482f50</entry>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <entry name='uuid'>770605b0-4686-4d97-9f82-7ed299482f50</entry>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <entry name='family'>Virtual Machine</entry>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     </system>
Jan 20 15:28:45 compute-1 nova_compute[225855]:   </sysinfo>
Jan 20 15:28:45 compute-1 nova_compute[225855]:   <os>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     <boot dev='hd'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     <smbios mode='sysinfo'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:   </os>
Jan 20 15:28:45 compute-1 nova_compute[225855]:   <features>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     <acpi/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     <apic/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     <vmcoreinfo state='on'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:   </features>
Jan 20 15:28:45 compute-1 nova_compute[225855]:   <cpu mode='custom' match='exact' check='full'>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     <model fallback='forbid'>Nehalem</model>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     <feature policy='require' name='x2apic'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     <feature policy='require' name='hypervisor'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     <feature policy='require' name='vme'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:   </cpu>
Jan 20 15:28:45 compute-1 nova_compute[225855]:   <clock offset='utc'>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     <timer name='pit' tickpolicy='delay'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     <timer name='rtc' tickpolicy='catchup'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     <timer name='hpet' present='no'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:   </clock>
Jan 20 15:28:45 compute-1 nova_compute[225855]:   <on_poweroff>destroy</on_poweroff>
Jan 20 15:28:45 compute-1 nova_compute[225855]:   <on_reboot>restart</on_reboot>
Jan 20 15:28:45 compute-1 nova_compute[225855]:   <on_crash>destroy</on_crash>
Jan 20 15:28:45 compute-1 nova_compute[225855]:   <devices>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     <disk type='network' device='disk'>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <driver name='qemu' type='raw' cache='none'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <auth username='openstack'>
Jan 20 15:28:45 compute-1 nova_compute[225855]:         <secret type='ceph' uuid='e399cf45-e6b6-5393-99f1-75c601d3f188'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       </auth>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <source protocol='rbd' name='vms/770605b0-4686-4d97-9f82-7ed299482f50_disk' index='2'>
Jan 20 15:28:45 compute-1 nova_compute[225855]:         <host name='192.168.122.100' port='6789'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:         <host name='192.168.122.102' port='6789'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:         <host name='192.168.122.101' port='6789'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       </source>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <target dev='vda' bus='virtio'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <alias name='virtio-disk0'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     </disk>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     <disk type='network' device='cdrom'>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <driver name='qemu' type='raw' cache='none'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <auth username='openstack'>
Jan 20 15:28:45 compute-1 nova_compute[225855]:         <secret type='ceph' uuid='e399cf45-e6b6-5393-99f1-75c601d3f188'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       </auth>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <source protocol='rbd' name='vms/770605b0-4686-4d97-9f82-7ed299482f50_disk.config' index='1'>
Jan 20 15:28:45 compute-1 nova_compute[225855]:         <host name='192.168.122.100' port='6789'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:         <host name='192.168.122.102' port='6789'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:         <host name='192.168.122.101' port='6789'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       </source>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <target dev='sda' bus='sata'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <readonly/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <alias name='sata0-0-0'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     </disk>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     <controller type='pci' index='0' model='pcie-root'>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <alias name='pcie.0'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     </controller>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     <controller type='pci' index='1' model='pcie-root-port'>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <target chassis='1' port='0x10'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <alias name='pci.1'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     </controller>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     <controller type='pci' index='2' model='pcie-root-port'>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <target chassis='2' port='0x11'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <alias name='pci.2'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     </controller>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     <controller type='pci' index='3' model='pcie-root-port'>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <target chassis='3' port='0x12'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <alias name='pci.3'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     </controller>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     <controller type='pci' index='4' model='pcie-root-port'>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <target chassis='4' port='0x13'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <alias name='pci.4'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     </controller>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     <controller type='pci' index='5' model='pcie-root-port'>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <target chassis='5' port='0x14'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <alias name='pci.5'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     </controller>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     <controller type='pci' index='6' model='pcie-root-port'>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <target chassis='6' port='0x15'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <alias name='pci.6'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     </controller>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     <controller type='pci' index='7' model='pcie-root-port'>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <target chassis='7' port='0x16'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <alias name='pci.7'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     </controller>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     <controller type='pci' index='8' model='pcie-root-port'>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <target chassis='8' port='0x17'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <alias name='pci.8'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     </controller>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     <controller type='pci' index='9' model='pcie-root-port'>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <target chassis='9' port='0x18'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <alias name='pci.9'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     </controller>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     <controller type='pci' index='10' model='pcie-root-port'>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <target chassis='10' port='0x19'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <alias name='pci.10'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     </controller>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     <controller type='pci' index='11' model='pcie-root-port'>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <target chassis='11' port='0x1a'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <alias name='pci.11'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     </controller>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     <controller type='pci' index='12' model='pcie-root-port'>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <target chassis='12' port='0x1b'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <alias name='pci.12'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     </controller>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     <controller type='pci' index='13' model='pcie-root-port'>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <target chassis='13' port='0x1c'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <alias name='pci.13'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     </controller>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     <controller type='pci' index='14' model='pcie-root-port'>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <target chassis='14' port='0x1d'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <alias name='pci.14'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     </controller>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     <controller type='pci' index='15' model='pcie-root-port'>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <target chassis='15' port='0x1e'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <alias name='pci.15'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     </controller>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     <controller type='pci' index='16' model='pcie-root-port'>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <target chassis='16' port='0x1f'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <alias name='pci.16'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     </controller>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     <controller type='pci' index='17' model='pcie-root-port'>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <target chassis='17' port='0x20'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <alias name='pci.17'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     </controller>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     <controller type='pci' index='18' model='pcie-root-port'>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <target chassis='18' port='0x21'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <alias name='pci.18'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     </controller>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     <controller type='pci' index='19' model='pcie-root-port'>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <target chassis='19' port='0x22'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <alias name='pci.19'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     </controller>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     <controller type='pci' index='20' model='pcie-root-port'>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <target chassis='20' port='0x23'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <alias name='pci.20'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     </controller>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     <controller type='pci' index='21' model='pcie-root-port'>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <target chassis='21' port='0x24'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <alias name='pci.21'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     </controller>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     <controller type='pci' index='22' model='pcie-root-port'>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <target chassis='22' port='0x25'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <alias name='pci.22'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     </controller>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     <controller type='pci' index='23' model='pcie-root-port'>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <target chassis='23' port='0x26'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <alias name='pci.23'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     </controller>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     <controller type='pci' index='24' model='pcie-root-port'>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <target chassis='24' port='0x27'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <alias name='pci.24'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     </controller>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     <controller type='pci' index='25' model='pcie-root-port'>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <target chassis='25' port='0x28'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <alias name='pci.25'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     </controller>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <model name='pcie-pci-bridge'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <alias name='pci.26'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     </controller>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     <controller type='usb' index='0' model='piix3-uhci'>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <alias name='usb'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     </controller>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     <controller type='sata' index='0'>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <alias name='ide'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     </controller>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     <interface type='ethernet'>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <mac address='fa:16:3e:f9:08:60'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <target dev='tap52fb2315-9e'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <model type='virtio'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <driver name='vhost' rx_queue_size='512'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <mtu size='1442'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <alias name='net0'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     </interface>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     <serial type='pty'>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <source path='/dev/pts/0'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <log file='/var/lib/nova/instances/770605b0-4686-4d97-9f82-7ed299482f50/console.log' append='off'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <target type='isa-serial' port='0'>
Jan 20 15:28:45 compute-1 nova_compute[225855]:         <model name='isa-serial'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       </target>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <alias name='serial0'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     </serial>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     <console type='pty' tty='/dev/pts/0'>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <source path='/dev/pts/0'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <log file='/var/lib/nova/instances/770605b0-4686-4d97-9f82-7ed299482f50/console.log' append='off'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <target type='serial' port='0'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <alias name='serial0'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     </console>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     <input type='tablet' bus='usb'>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <alias name='input0'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <address type='usb' bus='0' port='1'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     </input>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     <input type='mouse' bus='ps2'>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <alias name='input1'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     </input>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     <input type='keyboard' bus='ps2'>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <alias name='input2'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     </input>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <listen type='address' address='::0'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     </graphics>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     <audio id='1' type='none'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     <video>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <model type='virtio' heads='1' primary='yes'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <alias name='video0'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     </video>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     <watchdog model='itco' action='reset'>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <alias name='watchdog0'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     </watchdog>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     <memballoon model='virtio'>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <stats period='10'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <alias name='balloon0'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     </memballoon>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     <rng model='virtio'>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <backend model='random'>/dev/urandom</backend>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <alias name='rng0'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     </rng>
Jan 20 15:28:45 compute-1 nova_compute[225855]:   </devices>
Jan 20 15:28:45 compute-1 nova_compute[225855]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     <label>system_u:system_r:svirt_t:s0:c126,c530</label>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c126,c530</imagelabel>
Jan 20 15:28:45 compute-1 nova_compute[225855]:   </seclabel>
Jan 20 15:28:45 compute-1 nova_compute[225855]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     <label>+107:+107</label>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     <imagelabel>+107:+107</imagelabel>
Jan 20 15:28:45 compute-1 nova_compute[225855]:   </seclabel>
Jan 20 15:28:45 compute-1 nova_compute[225855]: </domain>
Jan 20 15:28:45 compute-1 nova_compute[225855]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Jan 20 15:28:45 compute-1 nova_compute[225855]: 2026-01-20 15:28:45.800 225859 DEBUG nova.virt.libvirt.guest [req-72e41404-7980-41eb-ad27-c96cfd6fabc7 req-880a0b72-256e-40e0-aae8-c6805c21012b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:1e:e5:d8"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap77056a83-f3"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Jan 20 15:28:45 compute-1 nova_compute[225855]: 2026-01-20 15:28:45.804 225859 DEBUG nova.compute.manager [req-d27a989b-3052-4ace-82cf-75d5a82a9d84 req-bc1d4cc0-04e7-4c7b-af2a-5a50cd8e24be 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 770605b0-4686-4d97-9f82-7ed299482f50] Received event network-vif-unplugged-77056a83-f3ee-44a1-8cd0-fac2b5327a1e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:28:45 compute-1 nova_compute[225855]: 2026-01-20 15:28:45.804 225859 DEBUG oslo_concurrency.lockutils [req-d27a989b-3052-4ace-82cf-75d5a82a9d84 req-bc1d4cc0-04e7-4c7b-af2a-5a50cd8e24be 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "770605b0-4686-4d97-9f82-7ed299482f50-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:28:45 compute-1 nova_compute[225855]: 2026-01-20 15:28:45.804 225859 DEBUG oslo_concurrency.lockutils [req-d27a989b-3052-4ace-82cf-75d5a82a9d84 req-bc1d4cc0-04e7-4c7b-af2a-5a50cd8e24be 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "770605b0-4686-4d97-9f82-7ed299482f50-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:28:45 compute-1 nova_compute[225855]: 2026-01-20 15:28:45.804 225859 DEBUG oslo_concurrency.lockutils [req-d27a989b-3052-4ace-82cf-75d5a82a9d84 req-bc1d4cc0-04e7-4c7b-af2a-5a50cd8e24be 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "770605b0-4686-4d97-9f82-7ed299482f50-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:28:45 compute-1 nova_compute[225855]: 2026-01-20 15:28:45.805 225859 DEBUG nova.compute.manager [req-d27a989b-3052-4ace-82cf-75d5a82a9d84 req-bc1d4cc0-04e7-4c7b-af2a-5a50cd8e24be 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 770605b0-4686-4d97-9f82-7ed299482f50] No waiting events found dispatching network-vif-unplugged-77056a83-f3ee-44a1-8cd0-fac2b5327a1e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:28:45 compute-1 nova_compute[225855]: 2026-01-20 15:28:45.805 225859 WARNING nova.compute.manager [req-d27a989b-3052-4ace-82cf-75d5a82a9d84 req-bc1d4cc0-04e7-4c7b-af2a-5a50cd8e24be 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 770605b0-4686-4d97-9f82-7ed299482f50] Received unexpected event network-vif-unplugged-77056a83-f3ee-44a1-8cd0-fac2b5327a1e for instance with vm_state active and task_state None.
Jan 20 15:28:45 compute-1 nova_compute[225855]: 2026-01-20 15:28:45.805 225859 DEBUG nova.compute.manager [req-d27a989b-3052-4ace-82cf-75d5a82a9d84 req-bc1d4cc0-04e7-4c7b-af2a-5a50cd8e24be 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 770605b0-4686-4d97-9f82-7ed299482f50] Received event network-vif-plugged-77056a83-f3ee-44a1-8cd0-fac2b5327a1e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:28:45 compute-1 nova_compute[225855]: 2026-01-20 15:28:45.805 225859 DEBUG oslo_concurrency.lockutils [req-d27a989b-3052-4ace-82cf-75d5a82a9d84 req-bc1d4cc0-04e7-4c7b-af2a-5a50cd8e24be 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "770605b0-4686-4d97-9f82-7ed299482f50-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:28:45 compute-1 nova_compute[225855]: 2026-01-20 15:28:45.805 225859 DEBUG oslo_concurrency.lockutils [req-d27a989b-3052-4ace-82cf-75d5a82a9d84 req-bc1d4cc0-04e7-4c7b-af2a-5a50cd8e24be 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "770605b0-4686-4d97-9f82-7ed299482f50-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:28:45 compute-1 nova_compute[225855]: 2026-01-20 15:28:45.805 225859 DEBUG oslo_concurrency.lockutils [req-d27a989b-3052-4ace-82cf-75d5a82a9d84 req-bc1d4cc0-04e7-4c7b-af2a-5a50cd8e24be 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "770605b0-4686-4d97-9f82-7ed299482f50-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:28:45 compute-1 nova_compute[225855]: 2026-01-20 15:28:45.806 225859 DEBUG nova.compute.manager [req-d27a989b-3052-4ace-82cf-75d5a82a9d84 req-bc1d4cc0-04e7-4c7b-af2a-5a50cd8e24be 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 770605b0-4686-4d97-9f82-7ed299482f50] No waiting events found dispatching network-vif-plugged-77056a83-f3ee-44a1-8cd0-fac2b5327a1e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:28:45 compute-1 nova_compute[225855]: 2026-01-20 15:28:45.806 225859 WARNING nova.compute.manager [req-d27a989b-3052-4ace-82cf-75d5a82a9d84 req-bc1d4cc0-04e7-4c7b-af2a-5a50cd8e24be 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 770605b0-4686-4d97-9f82-7ed299482f50] Received unexpected event network-vif-plugged-77056a83-f3ee-44a1-8cd0-fac2b5327a1e for instance with vm_state active and task_state None.
Jan 20 15:28:45 compute-1 nova_compute[225855]: 2026-01-20 15:28:45.806 225859 DEBUG nova.virt.libvirt.guest [req-72e41404-7980-41eb-ad27-c96cfd6fabc7 req-880a0b72-256e-40e0-aae8-c6805c21012b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:1e:e5:d8"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap77056a83-f3"/></interface>not found in domain: <domain type='kvm' id='108'>
Jan 20 15:28:45 compute-1 nova_compute[225855]:   <name>instance-000000cc</name>
Jan 20 15:28:45 compute-1 nova_compute[225855]:   <uuid>770605b0-4686-4d97-9f82-7ed299482f50</uuid>
Jan 20 15:28:45 compute-1 nova_compute[225855]:   <metadata>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 15:28:45 compute-1 nova_compute[225855]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:   <nova:name>tempest-TestNetworkBasicOps-server-774862138</nova:name>
Jan 20 15:28:45 compute-1 nova_compute[225855]:   <nova:creationTime>2026-01-20 15:28:43</nova:creationTime>
Jan 20 15:28:45 compute-1 nova_compute[225855]:   <nova:flavor name="m1.nano">
Jan 20 15:28:45 compute-1 nova_compute[225855]:     <nova:memory>128</nova:memory>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     <nova:disk>1</nova:disk>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     <nova:swap>0</nova:swap>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     <nova:ephemeral>0</nova:ephemeral>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     <nova:vcpus>1</nova:vcpus>
Jan 20 15:28:45 compute-1 nova_compute[225855]:   </nova:flavor>
Jan 20 15:28:45 compute-1 nova_compute[225855]:   <nova:owner>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     <nova:user uuid="5338aa65dc0e4326a66ce79053787f14">tempest-TestNetworkBasicOps-807695970-project-member</nova:user>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     <nova:project uuid="3168f57421fb49bfb94b85daedd1fe7d">tempest-TestNetworkBasicOps-807695970</nova:project>
Jan 20 15:28:45 compute-1 nova_compute[225855]:   </nova:owner>
Jan 20 15:28:45 compute-1 nova_compute[225855]:   <nova:root type="image" uuid="a32b3e07-16d8-46fd-9a7b-c242c432fcf9"/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:   <nova:ports>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     <nova:port uuid="52fb2315-9ec5-47a4-af4a-e0ed5e4caf21">
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     </nova:port>
Jan 20 15:28:45 compute-1 nova_compute[225855]:   </nova:ports>
Jan 20 15:28:45 compute-1 nova_compute[225855]: </nova:instance>
Jan 20 15:28:45 compute-1 nova_compute[225855]:   </metadata>
Jan 20 15:28:45 compute-1 nova_compute[225855]:   <memory unit='KiB'>131072</memory>
Jan 20 15:28:45 compute-1 nova_compute[225855]:   <currentMemory unit='KiB'>131072</currentMemory>
Jan 20 15:28:45 compute-1 nova_compute[225855]:   <vcpu placement='static'>1</vcpu>
Jan 20 15:28:45 compute-1 nova_compute[225855]:   <resource>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     <partition>/machine</partition>
Jan 20 15:28:45 compute-1 nova_compute[225855]:   </resource>
Jan 20 15:28:45 compute-1 nova_compute[225855]:   <sysinfo type='smbios'>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     <system>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <entry name='manufacturer'>RDO</entry>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <entry name='product'>OpenStack Compute</entry>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <entry name='serial'>770605b0-4686-4d97-9f82-7ed299482f50</entry>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <entry name='uuid'>770605b0-4686-4d97-9f82-7ed299482f50</entry>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <entry name='family'>Virtual Machine</entry>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     </system>
Jan 20 15:28:45 compute-1 nova_compute[225855]:   </sysinfo>
Jan 20 15:28:45 compute-1 nova_compute[225855]:   <os>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     <boot dev='hd'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     <smbios mode='sysinfo'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:   </os>
Jan 20 15:28:45 compute-1 nova_compute[225855]:   <features>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     <acpi/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     <apic/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     <vmcoreinfo state='on'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:   </features>
Jan 20 15:28:45 compute-1 nova_compute[225855]:   <cpu mode='custom' match='exact' check='full'>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     <model fallback='forbid'>Nehalem</model>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     <feature policy='require' name='x2apic'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     <feature policy='require' name='hypervisor'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     <feature policy='require' name='vme'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:   </cpu>
Jan 20 15:28:45 compute-1 nova_compute[225855]:   <clock offset='utc'>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     <timer name='pit' tickpolicy='delay'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     <timer name='rtc' tickpolicy='catchup'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     <timer name='hpet' present='no'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:   </clock>
Jan 20 15:28:45 compute-1 nova_compute[225855]:   <on_poweroff>destroy</on_poweroff>
Jan 20 15:28:45 compute-1 nova_compute[225855]:   <on_reboot>restart</on_reboot>
Jan 20 15:28:45 compute-1 nova_compute[225855]:   <on_crash>destroy</on_crash>
Jan 20 15:28:45 compute-1 nova_compute[225855]:   <devices>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     <disk type='network' device='disk'>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <driver name='qemu' type='raw' cache='none'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <auth username='openstack'>
Jan 20 15:28:45 compute-1 nova_compute[225855]:         <secret type='ceph' uuid='e399cf45-e6b6-5393-99f1-75c601d3f188'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       </auth>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <source protocol='rbd' name='vms/770605b0-4686-4d97-9f82-7ed299482f50_disk' index='2'>
Jan 20 15:28:45 compute-1 nova_compute[225855]:         <host name='192.168.122.100' port='6789'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:         <host name='192.168.122.102' port='6789'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:         <host name='192.168.122.101' port='6789'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       </source>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <target dev='vda' bus='virtio'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <alias name='virtio-disk0'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     </disk>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     <disk type='network' device='cdrom'>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <driver name='qemu' type='raw' cache='none'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <auth username='openstack'>
Jan 20 15:28:45 compute-1 nova_compute[225855]:         <secret type='ceph' uuid='e399cf45-e6b6-5393-99f1-75c601d3f188'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       </auth>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <source protocol='rbd' name='vms/770605b0-4686-4d97-9f82-7ed299482f50_disk.config' index='1'>
Jan 20 15:28:45 compute-1 nova_compute[225855]:         <host name='192.168.122.100' port='6789'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:         <host name='192.168.122.102' port='6789'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:         <host name='192.168.122.101' port='6789'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       </source>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <target dev='sda' bus='sata'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <readonly/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <alias name='sata0-0-0'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     </disk>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     <controller type='pci' index='0' model='pcie-root'>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <alias name='pcie.0'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     </controller>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     <controller type='pci' index='1' model='pcie-root-port'>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <target chassis='1' port='0x10'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <alias name='pci.1'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     </controller>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     <controller type='pci' index='2' model='pcie-root-port'>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <target chassis='2' port='0x11'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <alias name='pci.2'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     </controller>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     <controller type='pci' index='3' model='pcie-root-port'>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <target chassis='3' port='0x12'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <alias name='pci.3'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     </controller>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     <controller type='pci' index='4' model='pcie-root-port'>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <target chassis='4' port='0x13'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <alias name='pci.4'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     </controller>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     <controller type='pci' index='5' model='pcie-root-port'>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <target chassis='5' port='0x14'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <alias name='pci.5'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     </controller>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     <controller type='pci' index='6' model='pcie-root-port'>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <target chassis='6' port='0x15'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <alias name='pci.6'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     </controller>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     <controller type='pci' index='7' model='pcie-root-port'>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <target chassis='7' port='0x16'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <alias name='pci.7'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     </controller>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     <controller type='pci' index='8' model='pcie-root-port'>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <target chassis='8' port='0x17'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <alias name='pci.8'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     </controller>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     <controller type='pci' index='9' model='pcie-root-port'>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <target chassis='9' port='0x18'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <alias name='pci.9'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     </controller>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     <controller type='pci' index='10' model='pcie-root-port'>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <target chassis='10' port='0x19'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <alias name='pci.10'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     </controller>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     <controller type='pci' index='11' model='pcie-root-port'>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <target chassis='11' port='0x1a'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <alias name='pci.11'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     </controller>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     <controller type='pci' index='12' model='pcie-root-port'>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <target chassis='12' port='0x1b'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <alias name='pci.12'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     </controller>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     <controller type='pci' index='13' model='pcie-root-port'>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <target chassis='13' port='0x1c'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <alias name='pci.13'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     </controller>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     <controller type='pci' index='14' model='pcie-root-port'>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <target chassis='14' port='0x1d'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <alias name='pci.14'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     </controller>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     <controller type='pci' index='15' model='pcie-root-port'>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <target chassis='15' port='0x1e'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <alias name='pci.15'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     </controller>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     <controller type='pci' index='16' model='pcie-root-port'>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <target chassis='16' port='0x1f'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <alias name='pci.16'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     </controller>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     <controller type='pci' index='17' model='pcie-root-port'>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <target chassis='17' port='0x20'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <alias name='pci.17'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     </controller>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     <controller type='pci' index='18' model='pcie-root-port'>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <target chassis='18' port='0x21'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <alias name='pci.18'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     </controller>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     <controller type='pci' index='19' model='pcie-root-port'>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <target chassis='19' port='0x22'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <alias name='pci.19'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     </controller>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     <controller type='pci' index='20' model='pcie-root-port'>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <target chassis='20' port='0x23'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <alias name='pci.20'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     </controller>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     <controller type='pci' index='21' model='pcie-root-port'>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <target chassis='21' port='0x24'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <alias name='pci.21'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     </controller>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     <controller type='pci' index='22' model='pcie-root-port'>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <target chassis='22' port='0x25'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <alias name='pci.22'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     </controller>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     <controller type='pci' index='23' model='pcie-root-port'>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <target chassis='23' port='0x26'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <alias name='pci.23'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     </controller>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     <controller type='pci' index='24' model='pcie-root-port'>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <target chassis='24' port='0x27'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <alias name='pci.24'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     </controller>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     <controller type='pci' index='25' model='pcie-root-port'>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <model name='pcie-root-port'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <target chassis='25' port='0x28'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <alias name='pci.25'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     </controller>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <model name='pcie-pci-bridge'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <alias name='pci.26'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     </controller>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     <controller type='usb' index='0' model='piix3-uhci'>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <alias name='usb'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     </controller>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     <controller type='sata' index='0'>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <alias name='ide'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     </controller>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     <interface type='ethernet'>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <mac address='fa:16:3e:f9:08:60'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <target dev='tap52fb2315-9e'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <model type='virtio'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <driver name='vhost' rx_queue_size='512'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <mtu size='1442'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <alias name='net0'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     </interface>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     <serial type='pty'>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <source path='/dev/pts/0'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <log file='/var/lib/nova/instances/770605b0-4686-4d97-9f82-7ed299482f50/console.log' append='off'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <target type='isa-serial' port='0'>
Jan 20 15:28:45 compute-1 nova_compute[225855]:         <model name='isa-serial'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       </target>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <alias name='serial0'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     </serial>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     <console type='pty' tty='/dev/pts/0'>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <source path='/dev/pts/0'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <log file='/var/lib/nova/instances/770605b0-4686-4d97-9f82-7ed299482f50/console.log' append='off'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <target type='serial' port='0'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <alias name='serial0'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     </console>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     <input type='tablet' bus='usb'>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <alias name='input0'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <address type='usb' bus='0' port='1'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     </input>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     <input type='mouse' bus='ps2'>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <alias name='input1'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     </input>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     <input type='keyboard' bus='ps2'>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <alias name='input2'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     </input>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <listen type='address' address='::0'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     </graphics>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     <audio id='1' type='none'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     <video>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <model type='virtio' heads='1' primary='yes'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <alias name='video0'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     </video>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     <watchdog model='itco' action='reset'>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <alias name='watchdog0'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     </watchdog>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     <memballoon model='virtio'>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <stats period='10'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <alias name='balloon0'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     </memballoon>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     <rng model='virtio'>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <backend model='random'>/dev/urandom</backend>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <alias name='rng0'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     </rng>
Jan 20 15:28:45 compute-1 nova_compute[225855]:   </devices>
Jan 20 15:28:45 compute-1 nova_compute[225855]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     <label>system_u:system_r:svirt_t:s0:c126,c530</label>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c126,c530</imagelabel>
Jan 20 15:28:45 compute-1 nova_compute[225855]:   </seclabel>
Jan 20 15:28:45 compute-1 nova_compute[225855]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     <label>+107:+107</label>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     <imagelabel>+107:+107</imagelabel>
Jan 20 15:28:45 compute-1 nova_compute[225855]:   </seclabel>
Jan 20 15:28:45 compute-1 nova_compute[225855]: </domain>
Jan 20 15:28:45 compute-1 nova_compute[225855]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Jan 20 15:28:45 compute-1 nova_compute[225855]: 2026-01-20 15:28:45.807 225859 WARNING nova.virt.libvirt.driver [req-72e41404-7980-41eb-ad27-c96cfd6fabc7 req-880a0b72-256e-40e0-aae8-c6805c21012b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 770605b0-4686-4d97-9f82-7ed299482f50] Detaching interface fa:16:3e:1e:e5:d8 failed because the device is no longer found on the guest.: nova.exception.DeviceNotFound: Device 'tap77056a83-f3' not found.
Jan 20 15:28:45 compute-1 nova_compute[225855]: 2026-01-20 15:28:45.807 225859 DEBUG nova.virt.libvirt.vif [req-72e41404-7980-41eb-ad27-c96cfd6fabc7 req-880a0b72-256e-40e0-aae8-c6805c21012b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T15:28:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-774862138',display_name='tempest-TestNetworkBasicOps-server-774862138',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-774862138',id=204,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHV4gRYgjRGDnaHxkSMJQMAVAwkk8jLZAdWo/fxENkiAFcbr1CXMhNYzs2hTTM5NoLjR4u2qMt5dKH7C9b4LHMK/3O49DJNSLMAUfk3HkTe/ulPxPFwHn3vfQCPGwBrM5A==',key_name='tempest-TestNetworkBasicOps-1250991011',keypairs=<?>,launch_index=0,launched_at=2026-01-20T15:28:13Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3168f57421fb49bfb94b85daedd1fe7d',ramdisk_id='',reservation_id='r-u6z01i8j',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-807695970',owner_user_name='tempest-TestNetworkBasicOps-807695970-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T15:28:13Z,user_data=None,user_id='5338aa65dc0e4326a66ce79053787f14',uuid=770605b0-4686-4d97-9f82-7ed299482f50,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "77056a83-f3ee-44a1-8cd0-fac2b5327a1e", "address": "fa:16:3e:1e:e5:d8", "network": {"id": "d8ab95ce-159e-451b-baf0-5271f6a3160b", "bridge": "br-int", "label": "tempest-network-smoke--1027589119", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.28", "type": "fixed", "version": 4, "meta": {}, 
"floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3168f57421fb49bfb94b85daedd1fe7d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap77056a83-f3", "ovs_interfaceid": "77056a83-f3ee-44a1-8cd0-fac2b5327a1e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 20 15:28:45 compute-1 nova_compute[225855]: 2026-01-20 15:28:45.808 225859 DEBUG nova.network.os_vif_util [req-72e41404-7980-41eb-ad27-c96cfd6fabc7 req-880a0b72-256e-40e0-aae8-c6805c21012b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Converting VIF {"id": "77056a83-f3ee-44a1-8cd0-fac2b5327a1e", "address": "fa:16:3e:1e:e5:d8", "network": {"id": "d8ab95ce-159e-451b-baf0-5271f6a3160b", "bridge": "br-int", "label": "tempest-network-smoke--1027589119", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.28", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3168f57421fb49bfb94b85daedd1fe7d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap77056a83-f3", "ovs_interfaceid": "77056a83-f3ee-44a1-8cd0-fac2b5327a1e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 15:28:45 compute-1 nova_compute[225855]: 2026-01-20 15:28:45.808 225859 DEBUG nova.network.os_vif_util [req-72e41404-7980-41eb-ad27-c96cfd6fabc7 req-880a0b72-256e-40e0-aae8-c6805c21012b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1e:e5:d8,bridge_name='br-int',has_traffic_filtering=True,id=77056a83-f3ee-44a1-8cd0-fac2b5327a1e,network=Network(d8ab95ce-159e-451b-baf0-5271f6a3160b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap77056a83-f3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 15:28:45 compute-1 nova_compute[225855]: 2026-01-20 15:28:45.809 225859 DEBUG os_vif [req-72e41404-7980-41eb-ad27-c96cfd6fabc7 req-880a0b72-256e-40e0-aae8-c6805c21012b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1e:e5:d8,bridge_name='br-int',has_traffic_filtering=True,id=77056a83-f3ee-44a1-8cd0-fac2b5327a1e,network=Network(d8ab95ce-159e-451b-baf0-5271f6a3160b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap77056a83-f3') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 20 15:28:45 compute-1 nova_compute[225855]: 2026-01-20 15:28:45.811 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:28:45 compute-1 nova_compute[225855]: 2026-01-20 15:28:45.811 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap77056a83-f3, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:28:45 compute-1 nova_compute[225855]: 2026-01-20 15:28:45.811 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 15:28:45 compute-1 nova_compute[225855]: 2026-01-20 15:28:45.813 225859 INFO os_vif [req-72e41404-7980-41eb-ad27-c96cfd6fabc7 req-880a0b72-256e-40e0-aae8-c6805c21012b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1e:e5:d8,bridge_name='br-int',has_traffic_filtering=True,id=77056a83-f3ee-44a1-8cd0-fac2b5327a1e,network=Network(d8ab95ce-159e-451b-baf0-5271f6a3160b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap77056a83-f3')
Jan 20 15:28:45 compute-1 nova_compute[225855]: 2026-01-20 15:28:45.814 225859 DEBUG nova.virt.libvirt.guest [req-72e41404-7980-41eb-ad27-c96cfd6fabc7 req-880a0b72-256e-40e0-aae8-c6805c21012b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 15:28:45 compute-1 nova_compute[225855]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:   <nova:name>tempest-TestNetworkBasicOps-server-774862138</nova:name>
Jan 20 15:28:45 compute-1 nova_compute[225855]:   <nova:creationTime>2026-01-20 15:28:45</nova:creationTime>
Jan 20 15:28:45 compute-1 nova_compute[225855]:   <nova:flavor name="m1.nano">
Jan 20 15:28:45 compute-1 nova_compute[225855]:     <nova:memory>128</nova:memory>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     <nova:disk>1</nova:disk>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     <nova:swap>0</nova:swap>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     <nova:ephemeral>0</nova:ephemeral>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     <nova:vcpus>1</nova:vcpus>
Jan 20 15:28:45 compute-1 nova_compute[225855]:   </nova:flavor>
Jan 20 15:28:45 compute-1 nova_compute[225855]:   <nova:owner>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     <nova:user uuid="5338aa65dc0e4326a66ce79053787f14">tempest-TestNetworkBasicOps-807695970-project-member</nova:user>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     <nova:project uuid="3168f57421fb49bfb94b85daedd1fe7d">tempest-TestNetworkBasicOps-807695970</nova:project>
Jan 20 15:28:45 compute-1 nova_compute[225855]:   </nova:owner>
Jan 20 15:28:45 compute-1 nova_compute[225855]:   <nova:root type="image" uuid="a32b3e07-16d8-46fd-9a7b-c242c432fcf9"/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:   <nova:ports>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     <nova:port uuid="52fb2315-9ec5-47a4-af4a-e0ed5e4caf21">
Jan 20 15:28:45 compute-1 nova_compute[225855]:       <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Jan 20 15:28:45 compute-1 nova_compute[225855]:     </nova:port>
Jan 20 15:28:45 compute-1 nova_compute[225855]:   </nova:ports>
Jan 20 15:28:45 compute-1 nova_compute[225855]: </nova:instance>
Jan 20 15:28:45 compute-1 nova_compute[225855]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Jan 20 15:28:46 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:28:46 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:28:46 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:28:46.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:28:46 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:28:46 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:28:46 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:28:46.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:28:46 compute-1 ceph-mon[81775]: pgmap v3197: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 8.0 KiB/s rd, 14 KiB/s wr, 1 op/s
Jan 20 15:28:46 compute-1 sudo[316522]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:28:46 compute-1 sudo[316522]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:28:46 compute-1 sudo[316522]: pam_unix(sudo:session): session closed for user root
Jan 20 15:28:47 compute-1 podman[316545]: 2026-01-20 15:28:47.019998362 +0000 UTC m=+0.059382007 container health_status 533cd8e0c3b8b7f910f15cef49e7f879ab46c12ce646a35628c15f13725eb500 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Jan 20 15:28:47 compute-1 sudo[316558]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:28:47 compute-1 sudo[316558]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:28:47 compute-1 sudo[316558]: pam_unix(sudo:session): session closed for user root
Jan 20 15:28:47 compute-1 ovn_controller[130490]: 2026-01-20T15:28:47Z|00938|binding|INFO|Releasing lport d4b30c77-23b1-48b0-a6c8-4cf53f9840de from this chassis (sb_readonly=0)
Jan 20 15:28:47 compute-1 nova_compute[225855]: 2026-01-20 15:28:47.727 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:28:48 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:28:48 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:28:48 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:28:48.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:28:48 compute-1 nova_compute[225855]: 2026-01-20 15:28:48.401 225859 INFO nova.network.neutron [None req-8a5274d3-ca85-4f1c-baeb-511d7dc60fdf 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 770605b0-4686-4d97-9f82-7ed299482f50] Port 77056a83-f3ee-44a1-8cd0-fac2b5327a1e from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.
Jan 20 15:28:48 compute-1 nova_compute[225855]: 2026-01-20 15:28:48.401 225859 DEBUG nova.network.neutron [None req-8a5274d3-ca85-4f1c-baeb-511d7dc60fdf 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 770605b0-4686-4d97-9f82-7ed299482f50] Updating instance_info_cache with network_info: [{"id": "52fb2315-9ec5-47a4-af4a-e0ed5e4caf21", "address": "fa:16:3e:f9:08:60", "network": {"id": "f19fb67c-6bab-4253-851e-ede5bb26f589", "bridge": "br-int", "label": "tempest-network-smoke--2116756150", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.209", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3168f57421fb49bfb94b85daedd1fe7d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52fb2315-9e", "ovs_interfaceid": "52fb2315-9ec5-47a4-af4a-e0ed5e4caf21", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:28:48 compute-1 nova_compute[225855]: 2026-01-20 15:28:48.420 225859 DEBUG oslo_concurrency.lockutils [None req-8a5274d3-ca85-4f1c-baeb-511d7dc60fdf 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Releasing lock "refresh_cache-770605b0-4686-4d97-9f82-7ed299482f50" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:28:48 compute-1 nova_compute[225855]: 2026-01-20 15:28:48.440 225859 DEBUG oslo_concurrency.lockutils [None req-8a5274d3-ca85-4f1c-baeb-511d7dc60fdf 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Lock "interface-770605b0-4686-4d97-9f82-7ed299482f50-77056a83-f3ee-44a1-8cd0-fac2b5327a1e" "released" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: held 5.103s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:28:48 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:28:48 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:28:48 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:28:48 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:28:48.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:28:48 compute-1 nova_compute[225855]: 2026-01-20 15:28:48.591 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:28:48 compute-1 nova_compute[225855]: 2026-01-20 15:28:48.644 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:28:48 compute-1 ceph-mon[81775]: pgmap v3198: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 7.3 KiB/s rd, 2.7 KiB/s wr, 1 op/s
Jan 20 15:28:48 compute-1 nova_compute[225855]: 2026-01-20 15:28:48.864 225859 DEBUG nova.compute.manager [req-83d551fb-3df3-4a8b-b9bf-713402802e9b req-379ec33f-9859-4069-baa6-6765adb999ff 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 770605b0-4686-4d97-9f82-7ed299482f50] Received event network-changed-52fb2315-9ec5-47a4-af4a-e0ed5e4caf21 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:28:48 compute-1 nova_compute[225855]: 2026-01-20 15:28:48.864 225859 DEBUG nova.compute.manager [req-83d551fb-3df3-4a8b-b9bf-713402802e9b req-379ec33f-9859-4069-baa6-6765adb999ff 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 770605b0-4686-4d97-9f82-7ed299482f50] Refreshing instance network info cache due to event network-changed-52fb2315-9ec5-47a4-af4a-e0ed5e4caf21. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 15:28:48 compute-1 nova_compute[225855]: 2026-01-20 15:28:48.864 225859 DEBUG oslo_concurrency.lockutils [req-83d551fb-3df3-4a8b-b9bf-713402802e9b req-379ec33f-9859-4069-baa6-6765adb999ff 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-770605b0-4686-4d97-9f82-7ed299482f50" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:28:48 compute-1 nova_compute[225855]: 2026-01-20 15:28:48.865 225859 DEBUG oslo_concurrency.lockutils [req-83d551fb-3df3-4a8b-b9bf-713402802e9b req-379ec33f-9859-4069-baa6-6765adb999ff 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-770605b0-4686-4d97-9f82-7ed299482f50" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:28:48 compute-1 nova_compute[225855]: 2026-01-20 15:28:48.865 225859 DEBUG nova.network.neutron [req-83d551fb-3df3-4a8b-b9bf-713402802e9b req-379ec33f-9859-4069-baa6-6765adb999ff 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 770605b0-4686-4d97-9f82-7ed299482f50] Refreshing network info cache for port 52fb2315-9ec5-47a4-af4a-e0ed5e4caf21 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 15:28:48 compute-1 nova_compute[225855]: 2026-01-20 15:28:48.928 225859 DEBUG oslo_concurrency.lockutils [None req-58120d51-a644-4f05-98dd-40aa8321427d 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Acquiring lock "770605b0-4686-4d97-9f82-7ed299482f50" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:28:48 compute-1 nova_compute[225855]: 2026-01-20 15:28:48.929 225859 DEBUG oslo_concurrency.lockutils [None req-58120d51-a644-4f05-98dd-40aa8321427d 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Lock "770605b0-4686-4d97-9f82-7ed299482f50" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:28:48 compute-1 nova_compute[225855]: 2026-01-20 15:28:48.929 225859 DEBUG oslo_concurrency.lockutils [None req-58120d51-a644-4f05-98dd-40aa8321427d 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Acquiring lock "770605b0-4686-4d97-9f82-7ed299482f50-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:28:48 compute-1 nova_compute[225855]: 2026-01-20 15:28:48.929 225859 DEBUG oslo_concurrency.lockutils [None req-58120d51-a644-4f05-98dd-40aa8321427d 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Lock "770605b0-4686-4d97-9f82-7ed299482f50-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:28:48 compute-1 nova_compute[225855]: 2026-01-20 15:28:48.929 225859 DEBUG oslo_concurrency.lockutils [None req-58120d51-a644-4f05-98dd-40aa8321427d 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Lock "770605b0-4686-4d97-9f82-7ed299482f50-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:28:48 compute-1 nova_compute[225855]: 2026-01-20 15:28:48.930 225859 INFO nova.compute.manager [None req-58120d51-a644-4f05-98dd-40aa8321427d 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 770605b0-4686-4d97-9f82-7ed299482f50] Terminating instance
Jan 20 15:28:48 compute-1 nova_compute[225855]: 2026-01-20 15:28:48.931 225859 DEBUG nova.compute.manager [None req-58120d51-a644-4f05-98dd-40aa8321427d 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 770605b0-4686-4d97-9f82-7ed299482f50] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 20 15:28:48 compute-1 kernel: tap52fb2315-9e (unregistering): left promiscuous mode
Jan 20 15:28:48 compute-1 NetworkManager[49104]: <info>  [1768922928.9849] device (tap52fb2315-9e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 20 15:28:48 compute-1 ovn_controller[130490]: 2026-01-20T15:28:48Z|00939|binding|INFO|Releasing lport 52fb2315-9ec5-47a4-af4a-e0ed5e4caf21 from this chassis (sb_readonly=0)
Jan 20 15:28:48 compute-1 ovn_controller[130490]: 2026-01-20T15:28:48Z|00940|binding|INFO|Setting lport 52fb2315-9ec5-47a4-af4a-e0ed5e4caf21 down in Southbound
Jan 20 15:28:48 compute-1 ovn_controller[130490]: 2026-01-20T15:28:48Z|00941|binding|INFO|Removing iface tap52fb2315-9e ovn-installed in OVS
Jan 20 15:28:48 compute-1 nova_compute[225855]: 2026-01-20 15:28:48.991 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:28:48 compute-1 nova_compute[225855]: 2026-01-20 15:28:48.993 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:28:49 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:28:48.998 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f9:08:60 10.100.0.14'], port_security=['fa:16:3e:f9:08:60 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '770605b0-4686-4d97-9f82-7ed299482f50', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f19fb67c-6bab-4253-851e-ede5bb26f589', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3168f57421fb49bfb94b85daedd1fe7d', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b57a9b16-bf8b-47b4-a097-c9a9044c2225', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4c4c5a39-3036-4b6a-873b-b8673f881902, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], logical_port=52fb2315-9ec5-47a4-af4a-e0ed5e4caf21) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:28:49 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:28:48.999 140354 INFO neutron.agent.ovn.metadata.agent [-] Port 52fb2315-9ec5-47a4-af4a-e0ed5e4caf21 in datapath f19fb67c-6bab-4253-851e-ede5bb26f589 unbound from our chassis
Jan 20 15:28:49 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:28:49.000 140354 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f19fb67c-6bab-4253-851e-ede5bb26f589, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 20 15:28:49 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:28:49.001 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[612cf865-0d16-4a07-b603-58dc7876232a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:28:49 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:28:49.001 140354 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-f19fb67c-6bab-4253-851e-ede5bb26f589 namespace which is not needed anymore
Jan 20 15:28:49 compute-1 nova_compute[225855]: 2026-01-20 15:28:49.019 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:28:49 compute-1 systemd[1]: machine-qemu\x2d108\x2dinstance\x2d000000cc.scope: Deactivated successfully.
Jan 20 15:28:49 compute-1 systemd[1]: machine-qemu\x2d108\x2dinstance\x2d000000cc.scope: Consumed 14.760s CPU time.
Jan 20 15:28:49 compute-1 systemd-machined[194361]: Machine qemu-108-instance-000000cc terminated.
Jan 20 15:28:49 compute-1 neutron-haproxy-ovnmeta-f19fb67c-6bab-4253-851e-ede5bb26f589[316172]: [NOTICE]   (316176) : haproxy version is 2.8.14-c23fe91
Jan 20 15:28:49 compute-1 neutron-haproxy-ovnmeta-f19fb67c-6bab-4253-851e-ede5bb26f589[316172]: [NOTICE]   (316176) : path to executable is /usr/sbin/haproxy
Jan 20 15:28:49 compute-1 neutron-haproxy-ovnmeta-f19fb67c-6bab-4253-851e-ede5bb26f589[316172]: [WARNING]  (316176) : Exiting Master process...
Jan 20 15:28:49 compute-1 neutron-haproxy-ovnmeta-f19fb67c-6bab-4253-851e-ede5bb26f589[316172]: [ALERT]    (316176) : Current worker (316178) exited with code 143 (Terminated)
Jan 20 15:28:49 compute-1 neutron-haproxy-ovnmeta-f19fb67c-6bab-4253-851e-ede5bb26f589[316172]: [WARNING]  (316176) : All workers exited. Exiting... (0)
Jan 20 15:28:49 compute-1 systemd[1]: libpod-e68935dd090d02c7d09ee6eaa42b1b2f1beffe055ed6b7bb3242ce69225c8b7f.scope: Deactivated successfully.
Jan 20 15:28:49 compute-1 podman[316618]: 2026-01-20 15:28:49.134880284 +0000 UTC m=+0.047848957 container died e68935dd090d02c7d09ee6eaa42b1b2f1beffe055ed6b7bb3242ce69225c8b7f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f19fb67c-6bab-4253-851e-ede5bb26f589, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 20 15:28:49 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e68935dd090d02c7d09ee6eaa42b1b2f1beffe055ed6b7bb3242ce69225c8b7f-userdata-shm.mount: Deactivated successfully.
Jan 20 15:28:49 compute-1 nova_compute[225855]: 2026-01-20 15:28:49.173 225859 INFO nova.virt.libvirt.driver [-] [instance: 770605b0-4686-4d97-9f82-7ed299482f50] Instance destroyed successfully.
Jan 20 15:28:49 compute-1 systemd[1]: var-lib-containers-storage-overlay-987c681a0ffce47fad96bd241a3b13e2bd7d14a562c7f3c13eeeccc3550197bb-merged.mount: Deactivated successfully.
Jan 20 15:28:49 compute-1 nova_compute[225855]: 2026-01-20 15:28:49.174 225859 DEBUG nova.objects.instance [None req-58120d51-a644-4f05-98dd-40aa8321427d 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Lazy-loading 'resources' on Instance uuid 770605b0-4686-4d97-9f82-7ed299482f50 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:28:49 compute-1 podman[316618]: 2026-01-20 15:28:49.186598171 +0000 UTC m=+0.099566854 container cleanup e68935dd090d02c7d09ee6eaa42b1b2f1beffe055ed6b7bb3242ce69225c8b7f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f19fb67c-6bab-4253-851e-ede5bb26f589, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Jan 20 15:28:49 compute-1 nova_compute[225855]: 2026-01-20 15:28:49.191 225859 DEBUG nova.virt.libvirt.vif [None req-58120d51-a644-4f05-98dd-40aa8321427d 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T15:28:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-774862138',display_name='tempest-TestNetworkBasicOps-server-774862138',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-774862138',id=204,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHV4gRYgjRGDnaHxkSMJQMAVAwkk8jLZAdWo/fxENkiAFcbr1CXMhNYzs2hTTM5NoLjR4u2qMt5dKH7C9b4LHMK/3O49DJNSLMAUfk3HkTe/ulPxPFwHn3vfQCPGwBrM5A==',key_name='tempest-TestNetworkBasicOps-1250991011',keypairs=<?>,launch_index=0,launched_at=2026-01-20T15:28:13Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3168f57421fb49bfb94b85daedd1fe7d',ramdisk_id='',reservation_id='r-u6z01i8j',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-807695970',owner_user_name='tempest-TestNetworkBasicOps-807695970-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T15:28:13Z,user_data=None,user_id='5338aa65dc0e4326a66ce79053787f14',uuid=770605b0-4686-4d97-9f82-7ed299482f50,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "52fb2315-9ec5-47a4-af4a-e0ed5e4caf21", "address": "fa:16:3e:f9:08:60", "network": {"id": "f19fb67c-6bab-4253-851e-ede5bb26f589", "bridge": "br-int", "label": "tempest-network-smoke--2116756150", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.209", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3168f57421fb49bfb94b85daedd1fe7d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52fb2315-9e", "ovs_interfaceid": "52fb2315-9ec5-47a4-af4a-e0ed5e4caf21", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 20 15:28:49 compute-1 nova_compute[225855]: 2026-01-20 15:28:49.191 225859 DEBUG nova.network.os_vif_util [None req-58120d51-a644-4f05-98dd-40aa8321427d 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Converting VIF {"id": "52fb2315-9ec5-47a4-af4a-e0ed5e4caf21", "address": "fa:16:3e:f9:08:60", "network": {"id": "f19fb67c-6bab-4253-851e-ede5bb26f589", "bridge": "br-int", "label": "tempest-network-smoke--2116756150", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.209", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3168f57421fb49bfb94b85daedd1fe7d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52fb2315-9e", "ovs_interfaceid": "52fb2315-9ec5-47a4-af4a-e0ed5e4caf21", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 15:28:49 compute-1 nova_compute[225855]: 2026-01-20 15:28:49.192 225859 DEBUG nova.network.os_vif_util [None req-58120d51-a644-4f05-98dd-40aa8321427d 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:f9:08:60,bridge_name='br-int',has_traffic_filtering=True,id=52fb2315-9ec5-47a4-af4a-e0ed5e4caf21,network=Network(f19fb67c-6bab-4253-851e-ede5bb26f589),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap52fb2315-9e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 15:28:49 compute-1 nova_compute[225855]: 2026-01-20 15:28:49.192 225859 DEBUG os_vif [None req-58120d51-a644-4f05-98dd-40aa8321427d 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:f9:08:60,bridge_name='br-int',has_traffic_filtering=True,id=52fb2315-9ec5-47a4-af4a-e0ed5e4caf21,network=Network(f19fb67c-6bab-4253-851e-ede5bb26f589),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap52fb2315-9e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 20 15:28:49 compute-1 systemd[1]: libpod-conmon-e68935dd090d02c7d09ee6eaa42b1b2f1beffe055ed6b7bb3242ce69225c8b7f.scope: Deactivated successfully.
Jan 20 15:28:49 compute-1 nova_compute[225855]: 2026-01-20 15:28:49.194 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:28:49 compute-1 nova_compute[225855]: 2026-01-20 15:28:49.195 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap52fb2315-9e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:28:49 compute-1 nova_compute[225855]: 2026-01-20 15:28:49.238 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:28:49 compute-1 nova_compute[225855]: 2026-01-20 15:28:49.242 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 15:28:49 compute-1 nova_compute[225855]: 2026-01-20 15:28:49.244 225859 INFO os_vif [None req-58120d51-a644-4f05-98dd-40aa8321427d 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:f9:08:60,bridge_name='br-int',has_traffic_filtering=True,id=52fb2315-9ec5-47a4-af4a-e0ed5e4caf21,network=Network(f19fb67c-6bab-4253-851e-ede5bb26f589),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap52fb2315-9e')
Jan 20 15:28:49 compute-1 podman[316660]: 2026-01-20 15:28:49.254482979 +0000 UTC m=+0.047434405 container remove e68935dd090d02c7d09ee6eaa42b1b2f1beffe055ed6b7bb3242ce69225c8b7f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f19fb67c-6bab-4253-851e-ede5bb26f589, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 20 15:28:49 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:28:49.256 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[3435e13c-19e3-4ba8-b72b-fc73d31169aa]: (4, ('Tue Jan 20 03:28:49 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-f19fb67c-6bab-4253-851e-ede5bb26f589 (e68935dd090d02c7d09ee6eaa42b1b2f1beffe055ed6b7bb3242ce69225c8b7f)\ne68935dd090d02c7d09ee6eaa42b1b2f1beffe055ed6b7bb3242ce69225c8b7f\nTue Jan 20 03:28:49 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-f19fb67c-6bab-4253-851e-ede5bb26f589 (e68935dd090d02c7d09ee6eaa42b1b2f1beffe055ed6b7bb3242ce69225c8b7f)\ne68935dd090d02c7d09ee6eaa42b1b2f1beffe055ed6b7bb3242ce69225c8b7f\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:28:49 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:28:49.259 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[f41917d5-ed05-41f2-8d9e-7e3616671404]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:28:49 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:28:49.260 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf19fb67c-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:28:49 compute-1 kernel: tapf19fb67c-60: left promiscuous mode
Jan 20 15:28:49 compute-1 nova_compute[225855]: 2026-01-20 15:28:49.266 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:28:49 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:28:49.267 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[105f5e90-81ca-4495-a436-73536b1ad110]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:28:49 compute-1 nova_compute[225855]: 2026-01-20 15:28:49.278 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:28:49 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:28:49.285 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[3aa78ac9-a716-42c7-8141-350bee313e2d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:28:49 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:28:49.286 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[790153df-4f54-4024-b577-ad01b9f93383]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:28:49 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:28:49.299 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[78faf0de-5559-4de3-a251-a073ba4d4fcf]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 793211, 'reachable_time': 15583, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 316693, 'error': None, 'target': 'ovnmeta-f19fb67c-6bab-4253-851e-ede5bb26f589', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:28:49 compute-1 systemd[1]: run-netns-ovnmeta\x2df19fb67c\x2d6bab\x2d4253\x2d851e\x2dede5bb26f589.mount: Deactivated successfully.
Jan 20 15:28:49 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:28:49.303 140466 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-f19fb67c-6bab-4253-851e-ede5bb26f589 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 20 15:28:49 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:28:49.303 140466 DEBUG oslo.privsep.daemon [-] privsep: reply[e65f2c15-4629-44fb-8f84-cd1e6ee0ff81]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:28:49 compute-1 nova_compute[225855]: 2026-01-20 15:28:49.734 225859 INFO nova.virt.libvirt.driver [None req-58120d51-a644-4f05-98dd-40aa8321427d 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 770605b0-4686-4d97-9f82-7ed299482f50] Deleting instance files /var/lib/nova/instances/770605b0-4686-4d97-9f82-7ed299482f50_del
Jan 20 15:28:49 compute-1 nova_compute[225855]: 2026-01-20 15:28:49.734 225859 INFO nova.virt.libvirt.driver [None req-58120d51-a644-4f05-98dd-40aa8321427d 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 770605b0-4686-4d97-9f82-7ed299482f50] Deletion of /var/lib/nova/instances/770605b0-4686-4d97-9f82-7ed299482f50_del complete
Jan 20 15:28:49 compute-1 nova_compute[225855]: 2026-01-20 15:28:49.793 225859 INFO nova.compute.manager [None req-58120d51-a644-4f05-98dd-40aa8321427d 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 770605b0-4686-4d97-9f82-7ed299482f50] Took 0.86 seconds to destroy the instance on the hypervisor.
Jan 20 15:28:49 compute-1 nova_compute[225855]: 2026-01-20 15:28:49.794 225859 DEBUG oslo.service.loopingcall [None req-58120d51-a644-4f05-98dd-40aa8321427d 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 20 15:28:49 compute-1 nova_compute[225855]: 2026-01-20 15:28:49.795 225859 DEBUG nova.compute.manager [-] [instance: 770605b0-4686-4d97-9f82-7ed299482f50] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 20 15:28:49 compute-1 nova_compute[225855]: 2026-01-20 15:28:49.795 225859 DEBUG nova.network.neutron [-] [instance: 770605b0-4686-4d97-9f82-7ed299482f50] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 20 15:28:49 compute-1 nova_compute[225855]: 2026-01-20 15:28:49.867 225859 DEBUG nova.network.neutron [req-83d551fb-3df3-4a8b-b9bf-713402802e9b req-379ec33f-9859-4069-baa6-6765adb999ff 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 770605b0-4686-4d97-9f82-7ed299482f50] Updated VIF entry in instance network info cache for port 52fb2315-9ec5-47a4-af4a-e0ed5e4caf21. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 15:28:49 compute-1 nova_compute[225855]: 2026-01-20 15:28:49.868 225859 DEBUG nova.network.neutron [req-83d551fb-3df3-4a8b-b9bf-713402802e9b req-379ec33f-9859-4069-baa6-6765adb999ff 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 770605b0-4686-4d97-9f82-7ed299482f50] Updating instance_info_cache with network_info: [{"id": "52fb2315-9ec5-47a4-af4a-e0ed5e4caf21", "address": "fa:16:3e:f9:08:60", "network": {"id": "f19fb67c-6bab-4253-851e-ede5bb26f589", "bridge": "br-int", "label": "tempest-network-smoke--2116756150", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3168f57421fb49bfb94b85daedd1fe7d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52fb2315-9e", "ovs_interfaceid": "52fb2315-9ec5-47a4-af4a-e0ed5e4caf21", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:28:49 compute-1 nova_compute[225855]: 2026-01-20 15:28:49.892 225859 DEBUG oslo_concurrency.lockutils [req-83d551fb-3df3-4a8b-b9bf-713402802e9b req-379ec33f-9859-4069-baa6-6765adb999ff 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-770605b0-4686-4d97-9f82-7ed299482f50" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:28:50 compute-1 radosgw[83787]: INFO: RGWReshardLock::lock found lock on reshard.0000000000 to be held by another RGW process; skipping for now
Jan 20 15:28:50 compute-1 radosgw[83787]: INFO: RGWReshardLock::lock found lock on reshard.0000000001 to be held by another RGW process; skipping for now
Jan 20 15:28:50 compute-1 radosgw[83787]: INFO: RGWReshardLock::lock found lock on reshard.0000000002 to be held by another RGW process; skipping for now
Jan 20 15:28:50 compute-1 radosgw[83787]: INFO: RGWReshardLock::lock found lock on reshard.0000000004 to be held by another RGW process; skipping for now
Jan 20 15:28:50 compute-1 radosgw[83787]: INFO: RGWReshardLock::lock found lock on reshard.0000000005 to be held by another RGW process; skipping for now
Jan 20 15:28:50 compute-1 radosgw[83787]: INFO: RGWReshardLock::lock found lock on reshard.0000000007 to be held by another RGW process; skipping for now
Jan 20 15:28:50 compute-1 radosgw[83787]: INFO: RGWReshardLock::lock found lock on reshard.0000000008 to be held by another RGW process; skipping for now
Jan 20 15:28:50 compute-1 radosgw[83787]: INFO: RGWReshardLock::lock found lock on reshard.0000000010 to be held by another RGW process; skipping for now
Jan 20 15:28:50 compute-1 radosgw[83787]: INFO: RGWReshardLock::lock found lock on reshard.0000000011 to be held by another RGW process; skipping for now
Jan 20 15:28:50 compute-1 radosgw[83787]: INFO: RGWReshardLock::lock found lock on reshard.0000000013 to be held by another RGW process; skipping for now
Jan 20 15:28:50 compute-1 radosgw[83787]: INFO: RGWReshardLock::lock found lock on reshard.0000000015 to be held by another RGW process; skipping for now
Jan 20 15:28:50 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:28:50 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:28:50 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:28:50.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:28:50 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:28:50 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:28:50 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:28:50.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:28:50 compute-1 ceph-mon[81775]: pgmap v3199: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 7.3 KiB/s rd, 1.3 KiB/s wr, 0 op/s
Jan 20 15:28:50 compute-1 nova_compute[225855]: 2026-01-20 15:28:50.852 225859 DEBUG nova.network.neutron [-] [instance: 770605b0-4686-4d97-9f82-7ed299482f50] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:28:50 compute-1 nova_compute[225855]: 2026-01-20 15:28:50.872 225859 INFO nova.compute.manager [-] [instance: 770605b0-4686-4d97-9f82-7ed299482f50] Took 1.08 seconds to deallocate network for instance.
Jan 20 15:28:50 compute-1 nova_compute[225855]: 2026-01-20 15:28:50.925 225859 DEBUG oslo_concurrency.lockutils [None req-58120d51-a644-4f05-98dd-40aa8321427d 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:28:50 compute-1 nova_compute[225855]: 2026-01-20 15:28:50.926 225859 DEBUG oslo_concurrency.lockutils [None req-58120d51-a644-4f05-98dd-40aa8321427d 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:28:50 compute-1 nova_compute[225855]: 2026-01-20 15:28:50.981 225859 DEBUG nova.compute.manager [req-2433cd5c-77e2-4b69-afa4-6720f7012c54 req-76336cc7-02e5-4eec-9a66-6abbdcde6260 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 770605b0-4686-4d97-9f82-7ed299482f50] Received event network-vif-unplugged-52fb2315-9ec5-47a4-af4a-e0ed5e4caf21 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:28:50 compute-1 nova_compute[225855]: 2026-01-20 15:28:50.982 225859 DEBUG oslo_concurrency.lockutils [req-2433cd5c-77e2-4b69-afa4-6720f7012c54 req-76336cc7-02e5-4eec-9a66-6abbdcde6260 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "770605b0-4686-4d97-9f82-7ed299482f50-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:28:50 compute-1 nova_compute[225855]: 2026-01-20 15:28:50.982 225859 DEBUG oslo_concurrency.lockutils [req-2433cd5c-77e2-4b69-afa4-6720f7012c54 req-76336cc7-02e5-4eec-9a66-6abbdcde6260 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "770605b0-4686-4d97-9f82-7ed299482f50-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:28:50 compute-1 nova_compute[225855]: 2026-01-20 15:28:50.982 225859 DEBUG oslo_concurrency.lockutils [req-2433cd5c-77e2-4b69-afa4-6720f7012c54 req-76336cc7-02e5-4eec-9a66-6abbdcde6260 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "770605b0-4686-4d97-9f82-7ed299482f50-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:28:50 compute-1 nova_compute[225855]: 2026-01-20 15:28:50.983 225859 DEBUG nova.compute.manager [req-2433cd5c-77e2-4b69-afa4-6720f7012c54 req-76336cc7-02e5-4eec-9a66-6abbdcde6260 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 770605b0-4686-4d97-9f82-7ed299482f50] No waiting events found dispatching network-vif-unplugged-52fb2315-9ec5-47a4-af4a-e0ed5e4caf21 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:28:50 compute-1 nova_compute[225855]: 2026-01-20 15:28:50.983 225859 WARNING nova.compute.manager [req-2433cd5c-77e2-4b69-afa4-6720f7012c54 req-76336cc7-02e5-4eec-9a66-6abbdcde6260 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 770605b0-4686-4d97-9f82-7ed299482f50] Received unexpected event network-vif-unplugged-52fb2315-9ec5-47a4-af4a-e0ed5e4caf21 for instance with vm_state deleted and task_state None.
Jan 20 15:28:50 compute-1 nova_compute[225855]: 2026-01-20 15:28:50.983 225859 DEBUG nova.compute.manager [req-2433cd5c-77e2-4b69-afa4-6720f7012c54 req-76336cc7-02e5-4eec-9a66-6abbdcde6260 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 770605b0-4686-4d97-9f82-7ed299482f50] Received event network-vif-plugged-52fb2315-9ec5-47a4-af4a-e0ed5e4caf21 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:28:50 compute-1 nova_compute[225855]: 2026-01-20 15:28:50.984 225859 DEBUG oslo_concurrency.lockutils [req-2433cd5c-77e2-4b69-afa4-6720f7012c54 req-76336cc7-02e5-4eec-9a66-6abbdcde6260 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "770605b0-4686-4d97-9f82-7ed299482f50-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:28:50 compute-1 nova_compute[225855]: 2026-01-20 15:28:50.984 225859 DEBUG oslo_concurrency.lockutils [req-2433cd5c-77e2-4b69-afa4-6720f7012c54 req-76336cc7-02e5-4eec-9a66-6abbdcde6260 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "770605b0-4686-4d97-9f82-7ed299482f50-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:28:50 compute-1 nova_compute[225855]: 2026-01-20 15:28:50.984 225859 DEBUG oslo_concurrency.lockutils [req-2433cd5c-77e2-4b69-afa4-6720f7012c54 req-76336cc7-02e5-4eec-9a66-6abbdcde6260 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "770605b0-4686-4d97-9f82-7ed299482f50-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:28:50 compute-1 nova_compute[225855]: 2026-01-20 15:28:50.984 225859 DEBUG nova.compute.manager [req-2433cd5c-77e2-4b69-afa4-6720f7012c54 req-76336cc7-02e5-4eec-9a66-6abbdcde6260 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 770605b0-4686-4d97-9f82-7ed299482f50] No waiting events found dispatching network-vif-plugged-52fb2315-9ec5-47a4-af4a-e0ed5e4caf21 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:28:50 compute-1 nova_compute[225855]: 2026-01-20 15:28:50.985 225859 WARNING nova.compute.manager [req-2433cd5c-77e2-4b69-afa4-6720f7012c54 req-76336cc7-02e5-4eec-9a66-6abbdcde6260 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 770605b0-4686-4d97-9f82-7ed299482f50] Received unexpected event network-vif-plugged-52fb2315-9ec5-47a4-af4a-e0ed5e4caf21 for instance with vm_state deleted and task_state None.
Jan 20 15:28:50 compute-1 nova_compute[225855]: 2026-01-20 15:28:50.985 225859 DEBUG nova.compute.manager [req-2433cd5c-77e2-4b69-afa4-6720f7012c54 req-76336cc7-02e5-4eec-9a66-6abbdcde6260 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 770605b0-4686-4d97-9f82-7ed299482f50] Received event network-vif-deleted-52fb2315-9ec5-47a4-af4a-e0ed5e4caf21 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:28:51 compute-1 nova_compute[225855]: 2026-01-20 15:28:51.002 225859 DEBUG oslo_concurrency.processutils [None req-58120d51-a644-4f05-98dd-40aa8321427d 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:28:51 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:28:51 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/194072997' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:28:51 compute-1 nova_compute[225855]: 2026-01-20 15:28:51.470 225859 DEBUG oslo_concurrency.processutils [None req-58120d51-a644-4f05-98dd-40aa8321427d 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:28:51 compute-1 nova_compute[225855]: 2026-01-20 15:28:51.476 225859 DEBUG nova.compute.provider_tree [None req-58120d51-a644-4f05-98dd-40aa8321427d 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:28:51 compute-1 nova_compute[225855]: 2026-01-20 15:28:51.496 225859 DEBUG nova.scheduler.client.report [None req-58120d51-a644-4f05-98dd-40aa8321427d 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:28:51 compute-1 nova_compute[225855]: 2026-01-20 15:28:51.522 225859 DEBUG oslo_concurrency.lockutils [None req-58120d51-a644-4f05-98dd-40aa8321427d 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.597s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:28:51 compute-1 nova_compute[225855]: 2026-01-20 15:28:51.565 225859 INFO nova.scheduler.client.report [None req-58120d51-a644-4f05-98dd-40aa8321427d 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Deleted allocations for instance 770605b0-4686-4d97-9f82-7ed299482f50
Jan 20 15:28:51 compute-1 nova_compute[225855]: 2026-01-20 15:28:51.673 225859 DEBUG oslo_concurrency.lockutils [None req-58120d51-a644-4f05-98dd-40aa8321427d 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Lock "770605b0-4686-4d97-9f82-7ed299482f50" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.744s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:28:51 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/194072997' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:28:52 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:28:52 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:28:52 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:28:52.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:28:52 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:28:52 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:28:52 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:28:52.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:28:52 compute-1 ceph-mon[81775]: pgmap v3200: 321 pgs: 321 active+clean; 142 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 20 KiB/s rd, 6.2 KiB/s wr, 21 op/s
Jan 20 15:28:53 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:28:53 compute-1 nova_compute[225855]: 2026-01-20 15:28:53.648 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:28:54 compute-1 nova_compute[225855]: 2026-01-20 15:28:54.239 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:28:54 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:28:54 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:28:54 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:28:54.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:28:54 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:28:54 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:28:54 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:28:54.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:28:54 compute-1 ceph-mon[81775]: pgmap v3201: 321 pgs: 321 active+clean; 142 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 20 KiB/s rd, 5.2 KiB/s wr, 21 op/s
Jan 20 15:28:55 compute-1 nova_compute[225855]: 2026-01-20 15:28:55.526 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:28:55 compute-1 nova_compute[225855]: 2026-01-20 15:28:55.602 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:28:55 compute-1 ceph-mon[81775]: pgmap v3202: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 85 KiB/s rd, 5.2 KiB/s wr, 126 op/s
Jan 20 15:28:56 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:28:56 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:28:56 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:28:56.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:28:56 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:28:56 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:28:56 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:28:56.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:28:58 compute-1 ceph-mon[81775]: pgmap v3203: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 104 KiB/s rd, 5.2 KiB/s wr, 169 op/s
Jan 20 15:28:58 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:28:58 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:28:58 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:28:58.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:28:58 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:28:58 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:28:58 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:28:58 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:28:58.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:28:58 compute-1 nova_compute[225855]: 2026-01-20 15:28:58.692 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:28:59 compute-1 nova_compute[225855]: 2026-01-20 15:28:59.241 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:29:00 compute-1 ceph-mon[81775]: pgmap v3204: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 104 KiB/s rd, 4.8 KiB/s wr, 169 op/s
Jan 20 15:29:00 compute-1 sshd-session[316724]: banner exchange: Connection from 3.134.148.59 port 45902: invalid format
Jan 20 15:29:00 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:29:00 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.002000057s ======
Jan 20 15:29:00 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:29:00.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000057s
Jan 20 15:29:00 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:29:00 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:29:00 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:29:00.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:29:02 compute-1 ceph-mon[81775]: pgmap v3205: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 104 KiB/s rd, 4.8 KiB/s wr, 169 op/s
Jan 20 15:29:02 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:29:02 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:29:02 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:29:02.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:29:02 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:29:02 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:29:02 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:29:02.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:29:03 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:29:03 compute-1 nova_compute[225855]: 2026-01-20 15:29:03.694 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:29:04 compute-1 ceph-mon[81775]: pgmap v3206: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 90 KiB/s rd, 0 B/s wr, 148 op/s
Jan 20 15:29:04 compute-1 nova_compute[225855]: 2026-01-20 15:29:04.168 225859 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1768922929.1668205, 770605b0-4686-4d97-9f82-7ed299482f50 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:29:04 compute-1 nova_compute[225855]: 2026-01-20 15:29:04.169 225859 INFO nova.compute.manager [-] [instance: 770605b0-4686-4d97-9f82-7ed299482f50] VM Stopped (Lifecycle Event)
Jan 20 15:29:04 compute-1 nova_compute[225855]: 2026-01-20 15:29:04.192 225859 DEBUG nova.compute.manager [None req-3f3cbd01-11b9-4ce9-9d8d-37904102a05f - - - - - -] [instance: 770605b0-4686-4d97-9f82-7ed299482f50] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:29:04 compute-1 nova_compute[225855]: 2026-01-20 15:29:04.282 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:29:04 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:29:04 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:29:04 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:29:04.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:29:04 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:29:04 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:29:04 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:29:04.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:29:06 compute-1 ceph-mon[81775]: pgmap v3207: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 90 KiB/s rd, 0 B/s wr, 148 op/s
Jan 20 15:29:06 compute-1 nova_compute[225855]: 2026-01-20 15:29:06.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:29:06 compute-1 nova_compute[225855]: 2026-01-20 15:29:06.340 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 15:29:06 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:29:06 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:29:06 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:29:06.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:29:06 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:29:06 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:29:06 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:29:06.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:29:07 compute-1 sudo[316729]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:29:07 compute-1 sudo[316729]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:29:07 compute-1 sudo[316729]: pam_unix(sudo:session): session closed for user root
Jan 20 15:29:07 compute-1 sudo[316760]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:29:07 compute-1 sudo[316760]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:29:07 compute-1 sudo[316760]: pam_unix(sudo:session): session closed for user root
Jan 20 15:29:07 compute-1 podman[316753]: 2026-01-20 15:29:07.1990885 +0000 UTC m=+0.076629019 container health_status 72be95c12e041eb2cd1a16ad70dfa1391d9f03a5f3180a614ead9459dcfcd46f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Jan 20 15:29:08 compute-1 ceph-mon[81775]: pgmap v3208: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 26 KiB/s rd, 0 B/s wr, 43 op/s
Jan 20 15:29:08 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:29:08 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:29:08 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:29:08.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:29:08 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:29:08 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:29:08 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:29:08 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:29:08.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:29:08 compute-1 nova_compute[225855]: 2026-01-20 15:29:08.745 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:29:09 compute-1 nova_compute[225855]: 2026-01-20 15:29:09.284 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:29:09 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/4223010889' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:29:10 compute-1 ceph-mon[81775]: pgmap v3209: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:29:10 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/4249767932' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:29:10 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:29:10 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:29:10 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:29:10.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:29:10 compute-1 sshd-session[316808]: banner exchange: Connection from 3.134.148.59 port 52756: invalid format
Jan 20 15:29:10 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:29:10 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:29:10 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:29:10.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:29:10 compute-1 sshd-session[316809]: banner exchange: Connection from 3.134.148.59 port 52764: invalid format
Jan 20 15:29:11 compute-1 nova_compute[225855]: 2026-01-20 15:29:11.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:29:11 compute-1 nova_compute[225855]: 2026-01-20 15:29:11.372 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:29:11 compute-1 nova_compute[225855]: 2026-01-20 15:29:11.372 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:29:11 compute-1 nova_compute[225855]: 2026-01-20 15:29:11.372 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:29:11 compute-1 nova_compute[225855]: 2026-01-20 15:29:11.372 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 15:29:11 compute-1 nova_compute[225855]: 2026-01-20 15:29:11.373 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:29:11 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:29:11 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1352108626' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:29:11 compute-1 nova_compute[225855]: 2026-01-20 15:29:11.800 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:29:11 compute-1 nova_compute[225855]: 2026-01-20 15:29:11.975 225859 WARNING nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 15:29:11 compute-1 nova_compute[225855]: 2026-01-20 15:29:11.977 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4251MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 15:29:11 compute-1 nova_compute[225855]: 2026-01-20 15:29:11.978 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:29:11 compute-1 nova_compute[225855]: 2026-01-20 15:29:11.978 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:29:12 compute-1 nova_compute[225855]: 2026-01-20 15:29:12.066 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 15:29:12 compute-1 nova_compute[225855]: 2026-01-20 15:29:12.067 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 15:29:12 compute-1 nova_compute[225855]: 2026-01-20 15:29:12.094 225859 DEBUG nova.scheduler.client.report [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Refreshing inventories for resource provider bbb02880-a710-4ac1-8b2c-5c09765848d1 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 20 15:29:12 compute-1 nova_compute[225855]: 2026-01-20 15:29:12.126 225859 DEBUG nova.scheduler.client.report [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Updating ProviderTree inventory for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 20 15:29:12 compute-1 nova_compute[225855]: 2026-01-20 15:29:12.127 225859 DEBUG nova.compute.provider_tree [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Updating inventory in ProviderTree for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 20 15:29:12 compute-1 nova_compute[225855]: 2026-01-20 15:29:12.176 225859 DEBUG nova.scheduler.client.report [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Refreshing aggregate associations for resource provider bbb02880-a710-4ac1-8b2c-5c09765848d1, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 20 15:29:12 compute-1 nova_compute[225855]: 2026-01-20 15:29:12.209 225859 DEBUG nova.scheduler.client.report [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Refreshing trait associations for resource provider bbb02880-a710-4ac1-8b2c-5c09765848d1, traits: COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_STORAGE_BUS_SATA,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_TRUSTED_CERTS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSSE3,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_VOLUME_EXTEND,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_RESCUE_BFV,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_USB,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE42,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SOCKET_PCI_NUMA_AFFINITY _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 20 15:29:12 compute-1 nova_compute[225855]: 2026-01-20 15:29:12.237 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:29:12 compute-1 ceph-mon[81775]: pgmap v3210: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:29:12 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/1352108626' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:29:12 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:29:12 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:29:12 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:29:12.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:29:12 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:29:12 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:29:12 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:29:12.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:29:12 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:29:12 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/439750638' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:29:12 compute-1 nova_compute[225855]: 2026-01-20 15:29:12.692 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:29:12 compute-1 nova_compute[225855]: 2026-01-20 15:29:12.697 225859 DEBUG nova.compute.provider_tree [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:29:12 compute-1 nova_compute[225855]: 2026-01-20 15:29:12.714 225859 DEBUG nova.scheduler.client.report [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:29:12 compute-1 nova_compute[225855]: 2026-01-20 15:29:12.752 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 15:29:12 compute-1 nova_compute[225855]: 2026-01-20 15:29:12.753 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.775s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:29:13 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/439750638' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:29:13 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:29:13 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 20 15:29:13 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/640351222' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:29:13 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 20 15:29:13 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/640351222' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:29:13 compute-1 nova_compute[225855]: 2026-01-20 15:29:13.747 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:29:13 compute-1 nova_compute[225855]: 2026-01-20 15:29:13.752 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:29:13 compute-1 nova_compute[225855]: 2026-01-20 15:29:13.753 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 15:29:13 compute-1 nova_compute[225855]: 2026-01-20 15:29:13.753 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 20 15:29:13 compute-1 nova_compute[225855]: 2026-01-20 15:29:13.770 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 20 15:29:13 compute-1 nova_compute[225855]: 2026-01-20 15:29:13.770 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:29:14 compute-1 nova_compute[225855]: 2026-01-20 15:29:14.286 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:29:14 compute-1 nova_compute[225855]: 2026-01-20 15:29:14.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:29:14 compute-1 nova_compute[225855]: 2026-01-20 15:29:14.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:29:14 compute-1 ceph-mon[81775]: pgmap v3211: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:29:14 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/640351222' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:29:14 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/640351222' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:29:14 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:29:14 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:29:14 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:29:14.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:29:14 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:29:14 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:29:14 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:29:14.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:29:15 compute-1 nova_compute[225855]: 2026-01-20 15:29:15.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:29:15 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/4191349382' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:29:15 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/953815649' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:29:16 compute-1 ceph-mon[81775]: pgmap v3212: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:29:16 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:29:16 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:29:16 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:29:16.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:29:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:29:16.450 140354 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:29:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:29:16.451 140354 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:29:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:29:16.451 140354 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:29:16 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:29:16 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:29:16 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:29:16.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:29:17 compute-1 nova_compute[225855]: 2026-01-20 15:29:17.340 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:29:18 compute-1 podman[316859]: 2026-01-20 15:29:18.028830254 +0000 UTC m=+0.071786241 container health_status 533cd8e0c3b8b7f910f15cef49e7f879ab46c12ce646a35628c15f13725eb500 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS)
Jan 20 15:29:18 compute-1 sudo[316879]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:29:18 compute-1 sudo[316879]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:29:18 compute-1 sudo[316879]: pam_unix(sudo:session): session closed for user root
Jan 20 15:29:18 compute-1 sudo[316904]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:29:18 compute-1 sudo[316904]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:29:18 compute-1 sudo[316904]: pam_unix(sudo:session): session closed for user root
Jan 20 15:29:18 compute-1 sudo[316929]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:29:18 compute-1 sudo[316929]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:29:18 compute-1 sudo[316929]: pam_unix(sudo:session): session closed for user root
Jan 20 15:29:18 compute-1 sudo[316954]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 20 15:29:18 compute-1 sudo[316954]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:29:18 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:29:18 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:29:18 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:29:18.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:29:18 compute-1 ceph-mon[81775]: pgmap v3213: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:29:18 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:29:18 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:29:18 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:29:18 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:29:18.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:29:18 compute-1 sudo[316954]: pam_unix(sudo:session): session closed for user root
Jan 20 15:29:18 compute-1 nova_compute[225855]: 2026-01-20 15:29:18.749 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:29:18 compute-1 sudo[317009]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:29:18 compute-1 sudo[317009]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:29:18 compute-1 sudo[317009]: pam_unix(sudo:session): session closed for user root
Jan 20 15:29:18 compute-1 sudo[317034]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:29:18 compute-1 sudo[317034]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:29:18 compute-1 sudo[317034]: pam_unix(sudo:session): session closed for user root
Jan 20 15:29:19 compute-1 sudo[317059]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:29:19 compute-1 sudo[317059]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:29:19 compute-1 sudo[317059]: pam_unix(sudo:session): session closed for user root
Jan 20 15:29:19 compute-1 sudo[317084]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- inventory --format=json-pretty --filter-for-batch
Jan 20 15:29:19 compute-1 sudo[317084]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:29:19 compute-1 nova_compute[225855]: 2026-01-20 15:29:19.288 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:29:19 compute-1 podman[317151]: 2026-01-20 15:29:19.367246338 +0000 UTC m=+0.039433247 container create f1155693df52ee011de31a833a91bcd28894e3162c32382144f1a0e27c6c73e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_khayyam, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 15:29:19 compute-1 systemd[1]: Started libpod-conmon-f1155693df52ee011de31a833a91bcd28894e3162c32382144f1a0e27c6c73e6.scope.
Jan 20 15:29:19 compute-1 systemd[1]: Started libcrun container.
Jan 20 15:29:19 compute-1 podman[317151]: 2026-01-20 15:29:19.351434426 +0000 UTC m=+0.023621355 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:29:19 compute-1 podman[317151]: 2026-01-20 15:29:19.447091937 +0000 UTC m=+0.119278866 container init f1155693df52ee011de31a833a91bcd28894e3162c32382144f1a0e27c6c73e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_khayyam, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Jan 20 15:29:19 compute-1 podman[317151]: 2026-01-20 15:29:19.454746776 +0000 UTC m=+0.126933685 container start f1155693df52ee011de31a833a91bcd28894e3162c32382144f1a0e27c6c73e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_khayyam, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 15:29:19 compute-1 podman[317151]: 2026-01-20 15:29:19.45768212 +0000 UTC m=+0.129869029 container attach f1155693df52ee011de31a833a91bcd28894e3162c32382144f1a0e27c6c73e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_khayyam, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Jan 20 15:29:19 compute-1 lucid_khayyam[317168]: 167 167
Jan 20 15:29:19 compute-1 systemd[1]: libpod-f1155693df52ee011de31a833a91bcd28894e3162c32382144f1a0e27c6c73e6.scope: Deactivated successfully.
Jan 20 15:29:19 compute-1 podman[317173]: 2026-01-20 15:29:19.502570151 +0000 UTC m=+0.027107535 container died f1155693df52ee011de31a833a91bcd28894e3162c32382144f1a0e27c6c73e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_khayyam, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 15:29:19 compute-1 systemd[1]: var-lib-containers-storage-overlay-9a6874d1ab88d11d4755ca9acc94712eab8ba09638c6ed2633bdbf6aa513bb3c-merged.mount: Deactivated successfully.
Jan 20 15:29:19 compute-1 podman[317173]: 2026-01-20 15:29:19.539032462 +0000 UTC m=+0.063569816 container remove f1155693df52ee011de31a833a91bcd28894e3162c32382144f1a0e27c6c73e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_khayyam, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 15:29:19 compute-1 systemd[1]: libpod-conmon-f1155693df52ee011de31a833a91bcd28894e3162c32382144f1a0e27c6c73e6.scope: Deactivated successfully.
Jan 20 15:29:19 compute-1 podman[317195]: 2026-01-20 15:29:19.701592614 +0000 UTC m=+0.041215468 container create 356f49d97ebffae8ce8f45012baed55fe3ac2576de3034d947434e5874995d2b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_ishizaka, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3)
Jan 20 15:29:19 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:29:19 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:29:19 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:29:19 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:29:19 compute-1 systemd[1]: Started libpod-conmon-356f49d97ebffae8ce8f45012baed55fe3ac2576de3034d947434e5874995d2b.scope.
Jan 20 15:29:19 compute-1 systemd[1]: Started libcrun container.
Jan 20 15:29:19 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc8fa87a1a4194b0ea37b9c4e76bfc535b3cf7f0ab2a24755828dcb087a59a39/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 15:29:19 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc8fa87a1a4194b0ea37b9c4e76bfc535b3cf7f0ab2a24755828dcb087a59a39/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 15:29:19 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc8fa87a1a4194b0ea37b9c4e76bfc535b3cf7f0ab2a24755828dcb087a59a39/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 15:29:19 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc8fa87a1a4194b0ea37b9c4e76bfc535b3cf7f0ab2a24755828dcb087a59a39/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 15:29:19 compute-1 podman[317195]: 2026-01-20 15:29:19.684526086 +0000 UTC m=+0.024148970 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:29:19 compute-1 podman[317195]: 2026-01-20 15:29:19.782702069 +0000 UTC m=+0.122324953 container init 356f49d97ebffae8ce8f45012baed55fe3ac2576de3034d947434e5874995d2b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_ishizaka, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 15:29:19 compute-1 nova_compute[225855]: 2026-01-20 15:29:19.784 225859 DEBUG oslo_concurrency.lockutils [None req-be45cfa3-d78e-4f5c-8d98-77fe199d6f8b 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Acquiring lock "af78e376-a9fb-4854-9c34-fd8c6f63390a" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:29:19 compute-1 nova_compute[225855]: 2026-01-20 15:29:19.786 225859 DEBUG oslo_concurrency.lockutils [None req-be45cfa3-d78e-4f5c-8d98-77fe199d6f8b 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Lock "af78e376-a9fb-4854-9c34-fd8c6f63390a" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:29:19 compute-1 podman[317195]: 2026-01-20 15:29:19.789602157 +0000 UTC m=+0.129225011 container start 356f49d97ebffae8ce8f45012baed55fe3ac2576de3034d947434e5874995d2b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_ishizaka, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Jan 20 15:29:19 compute-1 podman[317195]: 2026-01-20 15:29:19.792768577 +0000 UTC m=+0.132391431 container attach 356f49d97ebffae8ce8f45012baed55fe3ac2576de3034d947434e5874995d2b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_ishizaka, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Jan 20 15:29:19 compute-1 nova_compute[225855]: 2026-01-20 15:29:19.805 225859 DEBUG nova.compute.manager [None req-be45cfa3-d78e-4f5c-8d98-77fe199d6f8b 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: af78e376-a9fb-4854-9c34-fd8c6f63390a] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 20 15:29:19 compute-1 nova_compute[225855]: 2026-01-20 15:29:19.895 225859 DEBUG oslo_concurrency.lockutils [None req-be45cfa3-d78e-4f5c-8d98-77fe199d6f8b 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:29:19 compute-1 nova_compute[225855]: 2026-01-20 15:29:19.896 225859 DEBUG oslo_concurrency.lockutils [None req-be45cfa3-d78e-4f5c-8d98-77fe199d6f8b 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:29:19 compute-1 nova_compute[225855]: 2026-01-20 15:29:19.904 225859 DEBUG nova.virt.hardware [None req-be45cfa3-d78e-4f5c-8d98-77fe199d6f8b 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 20 15:29:19 compute-1 nova_compute[225855]: 2026-01-20 15:29:19.904 225859 INFO nova.compute.claims [None req-be45cfa3-d78e-4f5c-8d98-77fe199d6f8b 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: af78e376-a9fb-4854-9c34-fd8c6f63390a] Claim successful on node compute-1.ctlplane.example.com
Jan 20 15:29:20 compute-1 nova_compute[225855]: 2026-01-20 15:29:20.043 225859 DEBUG oslo_concurrency.processutils [None req-be45cfa3-d78e-4f5c-8d98-77fe199d6f8b 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:29:20 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:29:20 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:29:20 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:29:20.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:29:20 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:29:20 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4253199867' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:29:20 compute-1 nova_compute[225855]: 2026-01-20 15:29:20.489 225859 DEBUG oslo_concurrency.processutils [None req-be45cfa3-d78e-4f5c-8d98-77fe199d6f8b 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:29:20 compute-1 nova_compute[225855]: 2026-01-20 15:29:20.496 225859 DEBUG nova.compute.provider_tree [None req-be45cfa3-d78e-4f5c-8d98-77fe199d6f8b 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:29:20 compute-1 nova_compute[225855]: 2026-01-20 15:29:20.517 225859 DEBUG nova.scheduler.client.report [None req-be45cfa3-d78e-4f5c-8d98-77fe199d6f8b 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:29:20 compute-1 nova_compute[225855]: 2026-01-20 15:29:20.553 225859 DEBUG oslo_concurrency.lockutils [None req-be45cfa3-d78e-4f5c-8d98-77fe199d6f8b 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.658s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:29:20 compute-1 nova_compute[225855]: 2026-01-20 15:29:20.554 225859 DEBUG nova.compute.manager [None req-be45cfa3-d78e-4f5c-8d98-77fe199d6f8b 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: af78e376-a9fb-4854-9c34-fd8c6f63390a] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 20 15:29:20 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:29:20 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:29:20 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:29:20.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:29:20 compute-1 nova_compute[225855]: 2026-01-20 15:29:20.613 225859 DEBUG nova.compute.manager [None req-be45cfa3-d78e-4f5c-8d98-77fe199d6f8b 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: af78e376-a9fb-4854-9c34-fd8c6f63390a] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 20 15:29:20 compute-1 nova_compute[225855]: 2026-01-20 15:29:20.614 225859 DEBUG nova.network.neutron [None req-be45cfa3-d78e-4f5c-8d98-77fe199d6f8b 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: af78e376-a9fb-4854-9c34-fd8c6f63390a] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 20 15:29:20 compute-1 nova_compute[225855]: 2026-01-20 15:29:20.634 225859 INFO nova.virt.libvirt.driver [None req-be45cfa3-d78e-4f5c-8d98-77fe199d6f8b 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: af78e376-a9fb-4854-9c34-fd8c6f63390a] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 20 15:29:20 compute-1 nova_compute[225855]: 2026-01-20 15:29:20.651 225859 DEBUG nova.compute.manager [None req-be45cfa3-d78e-4f5c-8d98-77fe199d6f8b 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: af78e376-a9fb-4854-9c34-fd8c6f63390a] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 20 15:29:20 compute-1 ceph-mon[81775]: pgmap v3214: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:29:20 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/4253199867' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:29:20 compute-1 nova_compute[225855]: 2026-01-20 15:29:20.781 225859 DEBUG nova.compute.manager [None req-be45cfa3-d78e-4f5c-8d98-77fe199d6f8b 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: af78e376-a9fb-4854-9c34-fd8c6f63390a] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 20 15:29:20 compute-1 nova_compute[225855]: 2026-01-20 15:29:20.783 225859 DEBUG nova.virt.libvirt.driver [None req-be45cfa3-d78e-4f5c-8d98-77fe199d6f8b 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: af78e376-a9fb-4854-9c34-fd8c6f63390a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 20 15:29:20 compute-1 nova_compute[225855]: 2026-01-20 15:29:20.783 225859 INFO nova.virt.libvirt.driver [None req-be45cfa3-d78e-4f5c-8d98-77fe199d6f8b 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: af78e376-a9fb-4854-9c34-fd8c6f63390a] Creating image(s)
Jan 20 15:29:20 compute-1 nova_compute[225855]: 2026-01-20 15:29:20.807 225859 DEBUG nova.storage.rbd_utils [None req-be45cfa3-d78e-4f5c-8d98-77fe199d6f8b 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] rbd image af78e376-a9fb-4854-9c34-fd8c6f63390a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:29:20 compute-1 nova_compute[225855]: 2026-01-20 15:29:20.837 225859 DEBUG nova.storage.rbd_utils [None req-be45cfa3-d78e-4f5c-8d98-77fe199d6f8b 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] rbd image af78e376-a9fb-4854-9c34-fd8c6f63390a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:29:20 compute-1 nova_compute[225855]: 2026-01-20 15:29:20.866 225859 DEBUG nova.storage.rbd_utils [None req-be45cfa3-d78e-4f5c-8d98-77fe199d6f8b 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] rbd image af78e376-a9fb-4854-9c34-fd8c6f63390a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:29:20 compute-1 nova_compute[225855]: 2026-01-20 15:29:20.872 225859 DEBUG oslo_concurrency.processutils [None req-be45cfa3-d78e-4f5c-8d98-77fe199d6f8b 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:29:20 compute-1 nova_compute[225855]: 2026-01-20 15:29:20.941 225859 DEBUG oslo_concurrency.processutils [None req-be45cfa3-d78e-4f5c-8d98-77fe199d6f8b 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:29:20 compute-1 nova_compute[225855]: 2026-01-20 15:29:20.943 225859 DEBUG oslo_concurrency.lockutils [None req-be45cfa3-d78e-4f5c-8d98-77fe199d6f8b 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Acquiring lock "82d5c1918fd7c974214c7a48c1793a7a82560462" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:29:20 compute-1 nova_compute[225855]: 2026-01-20 15:29:20.944 225859 DEBUG oslo_concurrency.lockutils [None req-be45cfa3-d78e-4f5c-8d98-77fe199d6f8b 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:29:20 compute-1 nova_compute[225855]: 2026-01-20 15:29:20.944 225859 DEBUG oslo_concurrency.lockutils [None req-be45cfa3-d78e-4f5c-8d98-77fe199d6f8b 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:29:20 compute-1 nova_compute[225855]: 2026-01-20 15:29:20.975 225859 DEBUG nova.storage.rbd_utils [None req-be45cfa3-d78e-4f5c-8d98-77fe199d6f8b 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] rbd image af78e376-a9fb-4854-9c34-fd8c6f63390a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:29:20 compute-1 nova_compute[225855]: 2026-01-20 15:29:20.980 225859 DEBUG oslo_concurrency.processutils [None req-be45cfa3-d78e-4f5c-8d98-77fe199d6f8b 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 af78e376-a9fb-4854-9c34-fd8c6f63390a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:29:21 compute-1 reverent_ishizaka[317211]: [
Jan 20 15:29:21 compute-1 reverent_ishizaka[317211]:     {
Jan 20 15:29:21 compute-1 reverent_ishizaka[317211]:         "available": false,
Jan 20 15:29:21 compute-1 reverent_ishizaka[317211]:         "ceph_device": false,
Jan 20 15:29:21 compute-1 reverent_ishizaka[317211]:         "device_id": "QEMU_DVD-ROM_QM00001",
Jan 20 15:29:21 compute-1 reverent_ishizaka[317211]:         "lsm_data": {},
Jan 20 15:29:21 compute-1 reverent_ishizaka[317211]:         "lvs": [],
Jan 20 15:29:21 compute-1 reverent_ishizaka[317211]:         "path": "/dev/sr0",
Jan 20 15:29:21 compute-1 reverent_ishizaka[317211]:         "rejected_reasons": [
Jan 20 15:29:21 compute-1 reverent_ishizaka[317211]:             "Has a FileSystem",
Jan 20 15:29:21 compute-1 reverent_ishizaka[317211]:             "Insufficient space (<5GB)"
Jan 20 15:29:21 compute-1 reverent_ishizaka[317211]:         ],
Jan 20 15:29:21 compute-1 reverent_ishizaka[317211]:         "sys_api": {
Jan 20 15:29:21 compute-1 reverent_ishizaka[317211]:             "actuators": null,
Jan 20 15:29:21 compute-1 reverent_ishizaka[317211]:             "device_nodes": "sr0",
Jan 20 15:29:21 compute-1 reverent_ishizaka[317211]:             "devname": "sr0",
Jan 20 15:29:21 compute-1 reverent_ishizaka[317211]:             "human_readable_size": "482.00 KB",
Jan 20 15:29:21 compute-1 reverent_ishizaka[317211]:             "id_bus": "ata",
Jan 20 15:29:21 compute-1 reverent_ishizaka[317211]:             "model": "QEMU DVD-ROM",
Jan 20 15:29:21 compute-1 reverent_ishizaka[317211]:             "nr_requests": "2",
Jan 20 15:29:21 compute-1 reverent_ishizaka[317211]:             "parent": "/dev/sr0",
Jan 20 15:29:21 compute-1 reverent_ishizaka[317211]:             "partitions": {},
Jan 20 15:29:21 compute-1 reverent_ishizaka[317211]:             "path": "/dev/sr0",
Jan 20 15:29:21 compute-1 reverent_ishizaka[317211]:             "removable": "1",
Jan 20 15:29:21 compute-1 reverent_ishizaka[317211]:             "rev": "2.5+",
Jan 20 15:29:21 compute-1 reverent_ishizaka[317211]:             "ro": "0",
Jan 20 15:29:21 compute-1 reverent_ishizaka[317211]:             "rotational": "1",
Jan 20 15:29:21 compute-1 reverent_ishizaka[317211]:             "sas_address": "",
Jan 20 15:29:21 compute-1 reverent_ishizaka[317211]:             "sas_device_handle": "",
Jan 20 15:29:21 compute-1 reverent_ishizaka[317211]:             "scheduler_mode": "mq-deadline",
Jan 20 15:29:21 compute-1 reverent_ishizaka[317211]:             "sectors": 0,
Jan 20 15:29:21 compute-1 reverent_ishizaka[317211]:             "sectorsize": "2048",
Jan 20 15:29:21 compute-1 reverent_ishizaka[317211]:             "size": 493568.0,
Jan 20 15:29:21 compute-1 reverent_ishizaka[317211]:             "support_discard": "2048",
Jan 20 15:29:21 compute-1 reverent_ishizaka[317211]:             "type": "disk",
Jan 20 15:29:21 compute-1 reverent_ishizaka[317211]:             "vendor": "QEMU"
Jan 20 15:29:21 compute-1 reverent_ishizaka[317211]:         }
Jan 20 15:29:21 compute-1 reverent_ishizaka[317211]:     }
Jan 20 15:29:21 compute-1 reverent_ishizaka[317211]: ]
Jan 20 15:29:21 compute-1 systemd[1]: libpod-356f49d97ebffae8ce8f45012baed55fe3ac2576de3034d947434e5874995d2b.scope: Deactivated successfully.
Jan 20 15:29:21 compute-1 podman[317195]: 2026-01-20 15:29:21.071445584 +0000 UTC m=+1.411068458 container died 356f49d97ebffae8ce8f45012baed55fe3ac2576de3034d947434e5874995d2b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_ishizaka, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 15:29:21 compute-1 systemd[1]: libpod-356f49d97ebffae8ce8f45012baed55fe3ac2576de3034d947434e5874995d2b.scope: Consumed 1.274s CPU time.
Jan 20 15:29:21 compute-1 systemd[1]: var-lib-containers-storage-overlay-bc8fa87a1a4194b0ea37b9c4e76bfc535b3cf7f0ab2a24755828dcb087a59a39-merged.mount: Deactivated successfully.
Jan 20 15:29:21 compute-1 podman[317195]: 2026-01-20 15:29:21.138145808 +0000 UTC m=+1.477768662 container remove 356f49d97ebffae8ce8f45012baed55fe3ac2576de3034d947434e5874995d2b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_ishizaka, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 15:29:21 compute-1 systemd[1]: libpod-conmon-356f49d97ebffae8ce8f45012baed55fe3ac2576de3034d947434e5874995d2b.scope: Deactivated successfully.
Jan 20 15:29:21 compute-1 sudo[317084]: pam_unix(sudo:session): session closed for user root
Jan 20 15:29:21 compute-1 nova_compute[225855]: 2026-01-20 15:29:21.248 225859 DEBUG oslo_concurrency.processutils [None req-be45cfa3-d78e-4f5c-8d98-77fe199d6f8b 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 af78e376-a9fb-4854-9c34-fd8c6f63390a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.267s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:29:21 compute-1 nova_compute[225855]: 2026-01-20 15:29:21.301 225859 DEBUG nova.storage.rbd_utils [None req-be45cfa3-d78e-4f5c-8d98-77fe199d6f8b 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] resizing rbd image af78e376-a9fb-4854-9c34-fd8c6f63390a_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 20 15:29:21 compute-1 nova_compute[225855]: 2026-01-20 15:29:21.384 225859 DEBUG nova.objects.instance [None req-be45cfa3-d78e-4f5c-8d98-77fe199d6f8b 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Lazy-loading 'migration_context' on Instance uuid af78e376-a9fb-4854-9c34-fd8c6f63390a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:29:21 compute-1 nova_compute[225855]: 2026-01-20 15:29:21.399 225859 DEBUG nova.virt.libvirt.driver [None req-be45cfa3-d78e-4f5c-8d98-77fe199d6f8b 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: af78e376-a9fb-4854-9c34-fd8c6f63390a] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 20 15:29:21 compute-1 nova_compute[225855]: 2026-01-20 15:29:21.399 225859 DEBUG nova.virt.libvirt.driver [None req-be45cfa3-d78e-4f5c-8d98-77fe199d6f8b 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: af78e376-a9fb-4854-9c34-fd8c6f63390a] Ensure instance console log exists: /var/lib/nova/instances/af78e376-a9fb-4854-9c34-fd8c6f63390a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 20 15:29:21 compute-1 nova_compute[225855]: 2026-01-20 15:29:21.400 225859 DEBUG oslo_concurrency.lockutils [None req-be45cfa3-d78e-4f5c-8d98-77fe199d6f8b 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:29:21 compute-1 nova_compute[225855]: 2026-01-20 15:29:21.400 225859 DEBUG oslo_concurrency.lockutils [None req-be45cfa3-d78e-4f5c-8d98-77fe199d6f8b 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:29:21 compute-1 nova_compute[225855]: 2026-01-20 15:29:21.400 225859 DEBUG oslo_concurrency.lockutils [None req-be45cfa3-d78e-4f5c-8d98-77fe199d6f8b 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:29:21 compute-1 nova_compute[225855]: 2026-01-20 15:29:21.428 225859 DEBUG nova.policy [None req-be45cfa3-d78e-4f5c-8d98-77fe199d6f8b 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '5338aa65dc0e4326a66ce79053787f14', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '3168f57421fb49bfb94b85daedd1fe7d', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 20 15:29:22 compute-1 ceph-mon[81775]: pgmap v3215: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:29:22 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:29:22 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:29:22 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:29:22 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:29:22 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:29:22 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 15:29:22 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:29:22 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 15:29:22 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 15:29:22 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:29:22 compute-1 nova_compute[225855]: 2026-01-20 15:29:22.335 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:29:22 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:29:22 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:29:22 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:29:22.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:29:22 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:29:22 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:29:22 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:29:22.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:29:22 compute-1 nova_compute[225855]: 2026-01-20 15:29:22.938 225859 DEBUG nova.network.neutron [None req-be45cfa3-d78e-4f5c-8d98-77fe199d6f8b 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: af78e376-a9fb-4854-9c34-fd8c6f63390a] Successfully created port: 31d3cb4c-b75a-468e-8a31-1fad4e27eb6e _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 20 15:29:23 compute-1 ceph-mon[81775]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #157. Immutable memtables: 0.
Jan 20 15:29:23 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:29:23.261239) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 20 15:29:23 compute-1 ceph-mon[81775]: rocksdb: [db/flush_job.cc:856] [default] [JOB 99] Flushing memtable with next log file: 157
Jan 20 15:29:23 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768922963261310, "job": 99, "event": "flush_started", "num_memtables": 1, "num_entries": 1553, "num_deletes": 250, "total_data_size": 3741755, "memory_usage": 3797104, "flush_reason": "Manual Compaction"}
Jan 20 15:29:23 compute-1 ceph-mon[81775]: rocksdb: [db/flush_job.cc:885] [default] [JOB 99] Level-0 flush table #158: started
Jan 20 15:29:23 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768922963273290, "cf_name": "default", "job": 99, "event": "table_file_creation", "file_number": 158, "file_size": 1517641, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 76710, "largest_seqno": 78257, "table_properties": {"data_size": 1512548, "index_size": 2424, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1605, "raw_key_size": 13383, "raw_average_key_size": 21, "raw_value_size": 1501453, "raw_average_value_size": 2357, "num_data_blocks": 108, "num_entries": 637, "num_filter_entries": 637, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768922833, "oldest_key_time": 1768922833, "file_creation_time": 1768922963, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1539d774-8a6f-4e48-b253-137c44586344", "db_session_id": "LFF7G2OZDOU7TKQ8MKAH", "orig_file_number": 158, "seqno_to_time_mapping": "N/A"}}
Jan 20 15:29:23 compute-1 ceph-mon[81775]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 99] Flush lasted 12095 microseconds, and 4587 cpu microseconds.
Jan 20 15:29:23 compute-1 ceph-mon[81775]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 15:29:23 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:29:23.273342) [db/flush_job.cc:967] [default] [JOB 99] Level-0 flush table #158: 1517641 bytes OK
Jan 20 15:29:23 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:29:23.273364) [db/memtable_list.cc:519] [default] Level-0 commit table #158 started
Jan 20 15:29:23 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:29:23.275926) [db/memtable_list.cc:722] [default] Level-0 commit table #158: memtable #1 done
Jan 20 15:29:23 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:29:23.275946) EVENT_LOG_v1 {"time_micros": 1768922963275940, "job": 99, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 20 15:29:23 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:29:23.275970) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 20 15:29:23 compute-1 ceph-mon[81775]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 99] Try to delete WAL files size 3734570, prev total WAL file size 3734570, number of live WAL files 2.
Jan 20 15:29:23 compute-1 ceph-mon[81775]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000154.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 15:29:23 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:29:23.277264) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032353037' seq:72057594037927935, type:22 .. '6D6772737461740032373538' seq:0, type:0; will stop at (end)
Jan 20 15:29:23 compute-1 ceph-mon[81775]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 100] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 20 15:29:23 compute-1 ceph-mon[81775]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 99 Base level 0, inputs: [158(1482KB)], [156(12MB)]
Jan 20 15:29:23 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768922963277328, "job": 100, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [158], "files_L6": [156], "score": -1, "input_data_size": 14634920, "oldest_snapshot_seqno": -1}
Jan 20 15:29:23 compute-1 ceph-mon[81775]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 100] Generated table #159: 10030 keys, 11680406 bytes, temperature: kUnknown
Jan 20 15:29:23 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768922963357147, "cf_name": "default", "job": 100, "event": "table_file_creation", "file_number": 159, "file_size": 11680406, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11617661, "index_size": 36584, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 25093, "raw_key_size": 263802, "raw_average_key_size": 26, "raw_value_size": 11443861, "raw_average_value_size": 1140, "num_data_blocks": 1392, "num_entries": 10030, "num_filter_entries": 10030, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768917474, "oldest_key_time": 0, "file_creation_time": 1768922963, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1539d774-8a6f-4e48-b253-137c44586344", "db_session_id": "LFF7G2OZDOU7TKQ8MKAH", "orig_file_number": 159, "seqno_to_time_mapping": "N/A"}}
Jan 20 15:29:23 compute-1 ceph-mon[81775]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 15:29:23 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:29:23.357387) [db/compaction/compaction_job.cc:1663] [default] [JOB 100] Compacted 1@0 + 1@6 files to L6 => 11680406 bytes
Jan 20 15:29:23 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:29:23.359472) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 183.2 rd, 146.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.4, 12.5 +0.0 blob) out(11.1 +0.0 blob), read-write-amplify(17.3) write-amplify(7.7) OK, records in: 10488, records dropped: 458 output_compression: NoCompression
Jan 20 15:29:23 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:29:23.359490) EVENT_LOG_v1 {"time_micros": 1768922963359482, "job": 100, "event": "compaction_finished", "compaction_time_micros": 79894, "compaction_time_cpu_micros": 33303, "output_level": 6, "num_output_files": 1, "total_output_size": 11680406, "num_input_records": 10488, "num_output_records": 10030, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 20 15:29:23 compute-1 ceph-mon[81775]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000158.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 15:29:23 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768922963359921, "job": 100, "event": "table_file_deletion", "file_number": 158}
Jan 20 15:29:23 compute-1 ceph-mon[81775]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000156.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 15:29:23 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768922963362037, "job": 100, "event": "table_file_deletion", "file_number": 156}
Jan 20 15:29:23 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:29:23.277117) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:29:23 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:29:23.362209) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:29:23 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:29:23.362216) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:29:23 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:29:23.362218) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:29:23 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:29:23.362220) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:29:23 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:29:23.362222) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:29:23 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:29:23 compute-1 nova_compute[225855]: 2026-01-20 15:29:23.677 225859 DEBUG nova.network.neutron [None req-be45cfa3-d78e-4f5c-8d98-77fe199d6f8b 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: af78e376-a9fb-4854-9c34-fd8c6f63390a] Successfully updated port: 31d3cb4c-b75a-468e-8a31-1fad4e27eb6e _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 20 15:29:23 compute-1 nova_compute[225855]: 2026-01-20 15:29:23.697 225859 DEBUG oslo_concurrency.lockutils [None req-be45cfa3-d78e-4f5c-8d98-77fe199d6f8b 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Acquiring lock "refresh_cache-af78e376-a9fb-4854-9c34-fd8c6f63390a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:29:23 compute-1 nova_compute[225855]: 2026-01-20 15:29:23.697 225859 DEBUG oslo_concurrency.lockutils [None req-be45cfa3-d78e-4f5c-8d98-77fe199d6f8b 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Acquired lock "refresh_cache-af78e376-a9fb-4854-9c34-fd8c6f63390a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:29:23 compute-1 nova_compute[225855]: 2026-01-20 15:29:23.697 225859 DEBUG nova.network.neutron [None req-be45cfa3-d78e-4f5c-8d98-77fe199d6f8b 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: af78e376-a9fb-4854-9c34-fd8c6f63390a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 15:29:23 compute-1 nova_compute[225855]: 2026-01-20 15:29:23.751 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:29:23 compute-1 nova_compute[225855]: 2026-01-20 15:29:23.807 225859 DEBUG nova.compute.manager [req-a4ecdd67-9f29-43cf-b52a-122317914962 req-3b75d416-57e2-4bb2-9484-49b87fec81e9 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: af78e376-a9fb-4854-9c34-fd8c6f63390a] Received event network-changed-31d3cb4c-b75a-468e-8a31-1fad4e27eb6e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:29:23 compute-1 nova_compute[225855]: 2026-01-20 15:29:23.808 225859 DEBUG nova.compute.manager [req-a4ecdd67-9f29-43cf-b52a-122317914962 req-3b75d416-57e2-4bb2-9484-49b87fec81e9 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: af78e376-a9fb-4854-9c34-fd8c6f63390a] Refreshing instance network info cache due to event network-changed-31d3cb4c-b75a-468e-8a31-1fad4e27eb6e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 15:29:23 compute-1 nova_compute[225855]: 2026-01-20 15:29:23.808 225859 DEBUG oslo_concurrency.lockutils [req-a4ecdd67-9f29-43cf-b52a-122317914962 req-3b75d416-57e2-4bb2-9484-49b87fec81e9 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-af78e376-a9fb-4854-9c34-fd8c6f63390a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:29:24 compute-1 ceph-mon[81775]: pgmap v3216: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:29:24 compute-1 nova_compute[225855]: 2026-01-20 15:29:24.305 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:29:24 compute-1 nova_compute[225855]: 2026-01-20 15:29:24.403 225859 DEBUG nova.network.neutron [None req-be45cfa3-d78e-4f5c-8d98-77fe199d6f8b 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: af78e376-a9fb-4854-9c34-fd8c6f63390a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 20 15:29:24 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:29:24 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:29:24 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:29:24.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:29:24 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:29:24 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:29:24 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:29:24.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:29:25 compute-1 nova_compute[225855]: 2026-01-20 15:29:25.340 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:29:25 compute-1 nova_compute[225855]: 2026-01-20 15:29:25.429 225859 DEBUG nova.network.neutron [None req-be45cfa3-d78e-4f5c-8d98-77fe199d6f8b 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: af78e376-a9fb-4854-9c34-fd8c6f63390a] Updating instance_info_cache with network_info: [{"id": "31d3cb4c-b75a-468e-8a31-1fad4e27eb6e", "address": "fa:16:3e:d9:69:f6", "network": {"id": "a2b49a8c-7446-4445-ae6e-d2870040582f", "bridge": "br-int", "label": "tempest-network-smoke--338330285", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3168f57421fb49bfb94b85daedd1fe7d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31d3cb4c-b7", "ovs_interfaceid": "31d3cb4c-b75a-468e-8a31-1fad4e27eb6e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:29:25 compute-1 nova_compute[225855]: 2026-01-20 15:29:25.452 225859 DEBUG oslo_concurrency.lockutils [None req-be45cfa3-d78e-4f5c-8d98-77fe199d6f8b 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Releasing lock "refresh_cache-af78e376-a9fb-4854-9c34-fd8c6f63390a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:29:25 compute-1 nova_compute[225855]: 2026-01-20 15:29:25.453 225859 DEBUG nova.compute.manager [None req-be45cfa3-d78e-4f5c-8d98-77fe199d6f8b 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: af78e376-a9fb-4854-9c34-fd8c6f63390a] Instance network_info: |[{"id": "31d3cb4c-b75a-468e-8a31-1fad4e27eb6e", "address": "fa:16:3e:d9:69:f6", "network": {"id": "a2b49a8c-7446-4445-ae6e-d2870040582f", "bridge": "br-int", "label": "tempest-network-smoke--338330285", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3168f57421fb49bfb94b85daedd1fe7d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31d3cb4c-b7", "ovs_interfaceid": "31d3cb4c-b75a-468e-8a31-1fad4e27eb6e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 20 15:29:25 compute-1 nova_compute[225855]: 2026-01-20 15:29:25.454 225859 DEBUG oslo_concurrency.lockutils [req-a4ecdd67-9f29-43cf-b52a-122317914962 req-3b75d416-57e2-4bb2-9484-49b87fec81e9 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-af78e376-a9fb-4854-9c34-fd8c6f63390a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:29:25 compute-1 nova_compute[225855]: 2026-01-20 15:29:25.454 225859 DEBUG nova.network.neutron [req-a4ecdd67-9f29-43cf-b52a-122317914962 req-3b75d416-57e2-4bb2-9484-49b87fec81e9 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: af78e376-a9fb-4854-9c34-fd8c6f63390a] Refreshing network info cache for port 31d3cb4c-b75a-468e-8a31-1fad4e27eb6e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 15:29:25 compute-1 nova_compute[225855]: 2026-01-20 15:29:25.458 225859 DEBUG nova.virt.libvirt.driver [None req-be45cfa3-d78e-4f5c-8d98-77fe199d6f8b 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: af78e376-a9fb-4854-9c34-fd8c6f63390a] Start _get_guest_xml network_info=[{"id": "31d3cb4c-b75a-468e-8a31-1fad4e27eb6e", "address": "fa:16:3e:d9:69:f6", "network": {"id": "a2b49a8c-7446-4445-ae6e-d2870040582f", "bridge": "br-int", "label": "tempest-network-smoke--338330285", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3168f57421fb49bfb94b85daedd1fe7d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31d3cb4c-b7", "ovs_interfaceid": "31d3cb4c-b75a-468e-8a31-1fad4e27eb6e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'device_type': 'disk', 'encryption_options': None, 'size': 0, 'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'guest_format': None, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'image_id': 'a32b3e07-16d8-46fd-9a7b-c242c432fcf9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 20 15:29:25 compute-1 nova_compute[225855]: 2026-01-20 15:29:25.466 225859 WARNING nova.virt.libvirt.driver [None req-be45cfa3-d78e-4f5c-8d98-77fe199d6f8b 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 15:29:25 compute-1 nova_compute[225855]: 2026-01-20 15:29:25.473 225859 DEBUG nova.virt.libvirt.host [None req-be45cfa3-d78e-4f5c-8d98-77fe199d6f8b 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 20 15:29:25 compute-1 nova_compute[225855]: 2026-01-20 15:29:25.474 225859 DEBUG nova.virt.libvirt.host [None req-be45cfa3-d78e-4f5c-8d98-77fe199d6f8b 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 20 15:29:25 compute-1 nova_compute[225855]: 2026-01-20 15:29:25.477 225859 DEBUG nova.virt.libvirt.host [None req-be45cfa3-d78e-4f5c-8d98-77fe199d6f8b 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 20 15:29:25 compute-1 nova_compute[225855]: 2026-01-20 15:29:25.478 225859 DEBUG nova.virt.libvirt.host [None req-be45cfa3-d78e-4f5c-8d98-77fe199d6f8b 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 20 15:29:25 compute-1 nova_compute[225855]: 2026-01-20 15:29:25.479 225859 DEBUG nova.virt.libvirt.driver [None req-be45cfa3-d78e-4f5c-8d98-77fe199d6f8b 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 20 15:29:25 compute-1 nova_compute[225855]: 2026-01-20 15:29:25.480 225859 DEBUG nova.virt.hardware [None req-be45cfa3-d78e-4f5c-8d98-77fe199d6f8b 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-20T14:21:55Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='522deaab-a741-4dbb-932d-d8b13a211c33',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 20 15:29:25 compute-1 nova_compute[225855]: 2026-01-20 15:29:25.480 225859 DEBUG nova.virt.hardware [None req-be45cfa3-d78e-4f5c-8d98-77fe199d6f8b 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 20 15:29:25 compute-1 nova_compute[225855]: 2026-01-20 15:29:25.481 225859 DEBUG nova.virt.hardware [None req-be45cfa3-d78e-4f5c-8d98-77fe199d6f8b 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 20 15:29:25 compute-1 nova_compute[225855]: 2026-01-20 15:29:25.481 225859 DEBUG nova.virt.hardware [None req-be45cfa3-d78e-4f5c-8d98-77fe199d6f8b 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 20 15:29:25 compute-1 nova_compute[225855]: 2026-01-20 15:29:25.481 225859 DEBUG nova.virt.hardware [None req-be45cfa3-d78e-4f5c-8d98-77fe199d6f8b 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 20 15:29:25 compute-1 nova_compute[225855]: 2026-01-20 15:29:25.481 225859 DEBUG nova.virt.hardware [None req-be45cfa3-d78e-4f5c-8d98-77fe199d6f8b 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 20 15:29:25 compute-1 nova_compute[225855]: 2026-01-20 15:29:25.482 225859 DEBUG nova.virt.hardware [None req-be45cfa3-d78e-4f5c-8d98-77fe199d6f8b 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 20 15:29:25 compute-1 nova_compute[225855]: 2026-01-20 15:29:25.482 225859 DEBUG nova.virt.hardware [None req-be45cfa3-d78e-4f5c-8d98-77fe199d6f8b 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 20 15:29:25 compute-1 nova_compute[225855]: 2026-01-20 15:29:25.482 225859 DEBUG nova.virt.hardware [None req-be45cfa3-d78e-4f5c-8d98-77fe199d6f8b 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 20 15:29:25 compute-1 nova_compute[225855]: 2026-01-20 15:29:25.482 225859 DEBUG nova.virt.hardware [None req-be45cfa3-d78e-4f5c-8d98-77fe199d6f8b 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 20 15:29:25 compute-1 nova_compute[225855]: 2026-01-20 15:29:25.483 225859 DEBUG nova.virt.hardware [None req-be45cfa3-d78e-4f5c-8d98-77fe199d6f8b 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 20 15:29:25 compute-1 nova_compute[225855]: 2026-01-20 15:29:25.485 225859 DEBUG oslo_concurrency.processutils [None req-be45cfa3-d78e-4f5c-8d98-77fe199d6f8b 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:29:25 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 15:29:25 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1146710376' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:29:25 compute-1 nova_compute[225855]: 2026-01-20 15:29:25.967 225859 DEBUG oslo_concurrency.processutils [None req-be45cfa3-d78e-4f5c-8d98-77fe199d6f8b 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:29:25 compute-1 nova_compute[225855]: 2026-01-20 15:29:25.993 225859 DEBUG nova.storage.rbd_utils [None req-be45cfa3-d78e-4f5c-8d98-77fe199d6f8b 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] rbd image af78e376-a9fb-4854-9c34-fd8c6f63390a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:29:25 compute-1 nova_compute[225855]: 2026-01-20 15:29:25.997 225859 DEBUG oslo_concurrency.processutils [None req-be45cfa3-d78e-4f5c-8d98-77fe199d6f8b 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:29:26 compute-1 ceph-mon[81775]: pgmap v3217: 321 pgs: 321 active+clean; 142 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 7.0 KiB/s rd, 904 KiB/s wr, 13 op/s
Jan 20 15:29:26 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/1146710376' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:29:26 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 15:29:26 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/475366063' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:29:26 compute-1 nova_compute[225855]: 2026-01-20 15:29:26.423 225859 DEBUG oslo_concurrency.processutils [None req-be45cfa3-d78e-4f5c-8d98-77fe199d6f8b 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:29:26 compute-1 nova_compute[225855]: 2026-01-20 15:29:26.425 225859 DEBUG nova.virt.libvirt.vif [None req-be45cfa3-d78e-4f5c-8d98-77fe199d6f8b 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T15:29:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-866255871',display_name='tempest-TestNetworkBasicOps-server-866255871',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-866255871',id=205,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNr82C8ft9iJVaxlnvB559FQY0a18+ddVyVQXOWpubG2vvEKlWos0eribBsrr0XJYAl5WSj1IuEMfruIIC3taSryOm9K9DYcrj57monaPm1w9c08Woz8HGduEiXhkpNC2g==',key_name='tempest-TestNetworkBasicOps-780774492',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3168f57421fb49bfb94b85daedd1fe7d',ramdisk_id='',reservation_id='r-73qpy651',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-807695970',owner_user_name='tempest-TestNetworkBasicOps-807695970-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T15:29:20Z,user_data=None,user_id='5338aa65dc0e4326a66ce79053787f14',uuid=af78e376-a9fb-4854-9c34-fd8c6f63390a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "31d3cb4c-b75a-468e-8a31-1fad4e27eb6e", "address": "fa:16:3e:d9:69:f6", "network": {"id": "a2b49a8c-7446-4445-ae6e-d2870040582f", "bridge": "br-int", "label": "tempest-network-smoke--338330285", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "3168f57421fb49bfb94b85daedd1fe7d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31d3cb4c-b7", "ovs_interfaceid": "31d3cb4c-b75a-468e-8a31-1fad4e27eb6e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 20 15:29:26 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:29:26 compute-1 nova_compute[225855]: 2026-01-20 15:29:26.425 225859 DEBUG nova.network.os_vif_util [None req-be45cfa3-d78e-4f5c-8d98-77fe199d6f8b 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Converting VIF {"id": "31d3cb4c-b75a-468e-8a31-1fad4e27eb6e", "address": "fa:16:3e:d9:69:f6", "network": {"id": "a2b49a8c-7446-4445-ae6e-d2870040582f", "bridge": "br-int", "label": "tempest-network-smoke--338330285", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3168f57421fb49bfb94b85daedd1fe7d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31d3cb4c-b7", "ovs_interfaceid": "31d3cb4c-b75a-468e-8a31-1fad4e27eb6e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 15:29:26 compute-1 nova_compute[225855]: 2026-01-20 15:29:26.426 225859 DEBUG nova.network.os_vif_util [None req-be45cfa3-d78e-4f5c-8d98-77fe199d6f8b 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d9:69:f6,bridge_name='br-int',has_traffic_filtering=True,id=31d3cb4c-b75a-468e-8a31-1fad4e27eb6e,network=Network(a2b49a8c-7446-4445-ae6e-d2870040582f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap31d3cb4c-b7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 15:29:26 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:29:26 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:29:26.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:29:26 compute-1 nova_compute[225855]: 2026-01-20 15:29:26.427 225859 DEBUG nova.objects.instance [None req-be45cfa3-d78e-4f5c-8d98-77fe199d6f8b 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Lazy-loading 'pci_devices' on Instance uuid af78e376-a9fb-4854-9c34-fd8c6f63390a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:29:26 compute-1 nova_compute[225855]: 2026-01-20 15:29:26.438 225859 DEBUG nova.virt.libvirt.driver [None req-be45cfa3-d78e-4f5c-8d98-77fe199d6f8b 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: af78e376-a9fb-4854-9c34-fd8c6f63390a] End _get_guest_xml xml=<domain type="kvm">
Jan 20 15:29:26 compute-1 nova_compute[225855]:   <uuid>af78e376-a9fb-4854-9c34-fd8c6f63390a</uuid>
Jan 20 15:29:26 compute-1 nova_compute[225855]:   <name>instance-000000cd</name>
Jan 20 15:29:26 compute-1 nova_compute[225855]:   <memory>131072</memory>
Jan 20 15:29:26 compute-1 nova_compute[225855]:   <vcpu>1</vcpu>
Jan 20 15:29:26 compute-1 nova_compute[225855]:   <metadata>
Jan 20 15:29:26 compute-1 nova_compute[225855]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 15:29:26 compute-1 nova_compute[225855]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 15:29:26 compute-1 nova_compute[225855]:       <nova:name>tempest-TestNetworkBasicOps-server-866255871</nova:name>
Jan 20 15:29:26 compute-1 nova_compute[225855]:       <nova:creationTime>2026-01-20 15:29:25</nova:creationTime>
Jan 20 15:29:26 compute-1 nova_compute[225855]:       <nova:flavor name="m1.nano">
Jan 20 15:29:26 compute-1 nova_compute[225855]:         <nova:memory>128</nova:memory>
Jan 20 15:29:26 compute-1 nova_compute[225855]:         <nova:disk>1</nova:disk>
Jan 20 15:29:26 compute-1 nova_compute[225855]:         <nova:swap>0</nova:swap>
Jan 20 15:29:26 compute-1 nova_compute[225855]:         <nova:ephemeral>0</nova:ephemeral>
Jan 20 15:29:26 compute-1 nova_compute[225855]:         <nova:vcpus>1</nova:vcpus>
Jan 20 15:29:26 compute-1 nova_compute[225855]:       </nova:flavor>
Jan 20 15:29:26 compute-1 nova_compute[225855]:       <nova:owner>
Jan 20 15:29:26 compute-1 nova_compute[225855]:         <nova:user uuid="5338aa65dc0e4326a66ce79053787f14">tempest-TestNetworkBasicOps-807695970-project-member</nova:user>
Jan 20 15:29:26 compute-1 nova_compute[225855]:         <nova:project uuid="3168f57421fb49bfb94b85daedd1fe7d">tempest-TestNetworkBasicOps-807695970</nova:project>
Jan 20 15:29:26 compute-1 nova_compute[225855]:       </nova:owner>
Jan 20 15:29:26 compute-1 nova_compute[225855]:       <nova:root type="image" uuid="a32b3e07-16d8-46fd-9a7b-c242c432fcf9"/>
Jan 20 15:29:26 compute-1 nova_compute[225855]:       <nova:ports>
Jan 20 15:29:26 compute-1 nova_compute[225855]:         <nova:port uuid="31d3cb4c-b75a-468e-8a31-1fad4e27eb6e">
Jan 20 15:29:26 compute-1 nova_compute[225855]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Jan 20 15:29:26 compute-1 nova_compute[225855]:         </nova:port>
Jan 20 15:29:26 compute-1 nova_compute[225855]:       </nova:ports>
Jan 20 15:29:26 compute-1 nova_compute[225855]:     </nova:instance>
Jan 20 15:29:26 compute-1 nova_compute[225855]:   </metadata>
Jan 20 15:29:26 compute-1 nova_compute[225855]:   <sysinfo type="smbios">
Jan 20 15:29:26 compute-1 nova_compute[225855]:     <system>
Jan 20 15:29:26 compute-1 nova_compute[225855]:       <entry name="manufacturer">RDO</entry>
Jan 20 15:29:26 compute-1 nova_compute[225855]:       <entry name="product">OpenStack Compute</entry>
Jan 20 15:29:26 compute-1 nova_compute[225855]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 15:29:26 compute-1 nova_compute[225855]:       <entry name="serial">af78e376-a9fb-4854-9c34-fd8c6f63390a</entry>
Jan 20 15:29:26 compute-1 nova_compute[225855]:       <entry name="uuid">af78e376-a9fb-4854-9c34-fd8c6f63390a</entry>
Jan 20 15:29:26 compute-1 nova_compute[225855]:       <entry name="family">Virtual Machine</entry>
Jan 20 15:29:26 compute-1 nova_compute[225855]:     </system>
Jan 20 15:29:26 compute-1 nova_compute[225855]:   </sysinfo>
Jan 20 15:29:26 compute-1 nova_compute[225855]:   <os>
Jan 20 15:29:26 compute-1 nova_compute[225855]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 20 15:29:26 compute-1 nova_compute[225855]:     <boot dev="hd"/>
Jan 20 15:29:26 compute-1 nova_compute[225855]:     <smbios mode="sysinfo"/>
Jan 20 15:29:26 compute-1 nova_compute[225855]:   </os>
Jan 20 15:29:26 compute-1 nova_compute[225855]:   <features>
Jan 20 15:29:26 compute-1 nova_compute[225855]:     <acpi/>
Jan 20 15:29:26 compute-1 nova_compute[225855]:     <apic/>
Jan 20 15:29:26 compute-1 nova_compute[225855]:     <vmcoreinfo/>
Jan 20 15:29:26 compute-1 nova_compute[225855]:   </features>
Jan 20 15:29:26 compute-1 nova_compute[225855]:   <clock offset="utc">
Jan 20 15:29:26 compute-1 nova_compute[225855]:     <timer name="pit" tickpolicy="delay"/>
Jan 20 15:29:26 compute-1 nova_compute[225855]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 20 15:29:26 compute-1 nova_compute[225855]:     <timer name="hpet" present="no"/>
Jan 20 15:29:26 compute-1 nova_compute[225855]:   </clock>
Jan 20 15:29:26 compute-1 nova_compute[225855]:   <cpu mode="custom" match="exact">
Jan 20 15:29:26 compute-1 nova_compute[225855]:     <model>Nehalem</model>
Jan 20 15:29:26 compute-1 nova_compute[225855]:     <topology sockets="1" cores="1" threads="1"/>
Jan 20 15:29:26 compute-1 nova_compute[225855]:   </cpu>
Jan 20 15:29:26 compute-1 nova_compute[225855]:   <devices>
Jan 20 15:29:26 compute-1 nova_compute[225855]:     <disk type="network" device="disk">
Jan 20 15:29:26 compute-1 nova_compute[225855]:       <driver type="raw" cache="none"/>
Jan 20 15:29:26 compute-1 nova_compute[225855]:       <source protocol="rbd" name="vms/af78e376-a9fb-4854-9c34-fd8c6f63390a_disk">
Jan 20 15:29:26 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 15:29:26 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 15:29:26 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 15:29:26 compute-1 nova_compute[225855]:       </source>
Jan 20 15:29:26 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 15:29:26 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 15:29:26 compute-1 nova_compute[225855]:       </auth>
Jan 20 15:29:26 compute-1 nova_compute[225855]:       <target dev="vda" bus="virtio"/>
Jan 20 15:29:26 compute-1 nova_compute[225855]:     </disk>
Jan 20 15:29:26 compute-1 nova_compute[225855]:     <disk type="network" device="cdrom">
Jan 20 15:29:26 compute-1 nova_compute[225855]:       <driver type="raw" cache="none"/>
Jan 20 15:29:26 compute-1 nova_compute[225855]:       <source protocol="rbd" name="vms/af78e376-a9fb-4854-9c34-fd8c6f63390a_disk.config">
Jan 20 15:29:26 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 15:29:26 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 15:29:26 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 15:29:26 compute-1 nova_compute[225855]:       </source>
Jan 20 15:29:26 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 15:29:26 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 15:29:26 compute-1 nova_compute[225855]:       </auth>
Jan 20 15:29:26 compute-1 nova_compute[225855]:       <target dev="sda" bus="sata"/>
Jan 20 15:29:26 compute-1 nova_compute[225855]:     </disk>
Jan 20 15:29:26 compute-1 nova_compute[225855]:     <interface type="ethernet">
Jan 20 15:29:26 compute-1 nova_compute[225855]:       <mac address="fa:16:3e:d9:69:f6"/>
Jan 20 15:29:26 compute-1 nova_compute[225855]:       <model type="virtio"/>
Jan 20 15:29:26 compute-1 nova_compute[225855]:       <driver name="vhost" rx_queue_size="512"/>
Jan 20 15:29:26 compute-1 nova_compute[225855]:       <mtu size="1442"/>
Jan 20 15:29:26 compute-1 nova_compute[225855]:       <target dev="tap31d3cb4c-b7"/>
Jan 20 15:29:26 compute-1 nova_compute[225855]:     </interface>
Jan 20 15:29:26 compute-1 nova_compute[225855]:     <serial type="pty">
Jan 20 15:29:26 compute-1 nova_compute[225855]:       <log file="/var/lib/nova/instances/af78e376-a9fb-4854-9c34-fd8c6f63390a/console.log" append="off"/>
Jan 20 15:29:26 compute-1 nova_compute[225855]:     </serial>
Jan 20 15:29:26 compute-1 nova_compute[225855]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 20 15:29:26 compute-1 nova_compute[225855]:     <video>
Jan 20 15:29:26 compute-1 nova_compute[225855]:       <model type="virtio"/>
Jan 20 15:29:26 compute-1 nova_compute[225855]:     </video>
Jan 20 15:29:26 compute-1 nova_compute[225855]:     <input type="tablet" bus="usb"/>
Jan 20 15:29:26 compute-1 nova_compute[225855]:     <rng model="virtio">
Jan 20 15:29:26 compute-1 nova_compute[225855]:       <backend model="random">/dev/urandom</backend>
Jan 20 15:29:26 compute-1 nova_compute[225855]:     </rng>
Jan 20 15:29:26 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root"/>
Jan 20 15:29:26 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:29:26 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:29:26 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:29:26 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:29:26 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:29:26 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:29:26 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:29:26 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:29:26 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:29:26 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:29:26 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:29:26 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:29:26 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:29:26 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:29:26 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:29:26 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:29:26 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:29:26 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:29:26 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:29:26 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:29:26 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:29:26 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:29:26 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:29:26 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:29:26 compute-1 nova_compute[225855]:     <controller type="usb" index="0"/>
Jan 20 15:29:26 compute-1 nova_compute[225855]:     <memballoon model="virtio">
Jan 20 15:29:26 compute-1 nova_compute[225855]:       <stats period="10"/>
Jan 20 15:29:26 compute-1 nova_compute[225855]:     </memballoon>
Jan 20 15:29:26 compute-1 nova_compute[225855]:   </devices>
Jan 20 15:29:26 compute-1 nova_compute[225855]: </domain>
Jan 20 15:29:26 compute-1 nova_compute[225855]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 20 15:29:26 compute-1 nova_compute[225855]: 2026-01-20 15:29:26.440 225859 DEBUG nova.compute.manager [None req-be45cfa3-d78e-4f5c-8d98-77fe199d6f8b 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: af78e376-a9fb-4854-9c34-fd8c6f63390a] Preparing to wait for external event network-vif-plugged-31d3cb4c-b75a-468e-8a31-1fad4e27eb6e prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 20 15:29:26 compute-1 nova_compute[225855]: 2026-01-20 15:29:26.441 225859 DEBUG oslo_concurrency.lockutils [None req-be45cfa3-d78e-4f5c-8d98-77fe199d6f8b 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Acquiring lock "af78e376-a9fb-4854-9c34-fd8c6f63390a-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:29:26 compute-1 nova_compute[225855]: 2026-01-20 15:29:26.441 225859 DEBUG oslo_concurrency.lockutils [None req-be45cfa3-d78e-4f5c-8d98-77fe199d6f8b 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Lock "af78e376-a9fb-4854-9c34-fd8c6f63390a-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:29:26 compute-1 nova_compute[225855]: 2026-01-20 15:29:26.441 225859 DEBUG oslo_concurrency.lockutils [None req-be45cfa3-d78e-4f5c-8d98-77fe199d6f8b 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Lock "af78e376-a9fb-4854-9c34-fd8c6f63390a-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:29:26 compute-1 nova_compute[225855]: 2026-01-20 15:29:26.442 225859 DEBUG nova.virt.libvirt.vif [None req-be45cfa3-d78e-4f5c-8d98-77fe199d6f8b 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T15:29:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-866255871',display_name='tempest-TestNetworkBasicOps-server-866255871',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-866255871',id=205,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNr82C8ft9iJVaxlnvB559FQY0a18+ddVyVQXOWpubG2vvEKlWos0eribBsrr0XJYAl5WSj1IuEMfruIIC3taSryOm9K9DYcrj57monaPm1w9c08Woz8HGduEiXhkpNC2g==',key_name='tempest-TestNetworkBasicOps-780774492',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3168f57421fb49bfb94b85daedd1fe7d',ramdisk_id='',reservation_id='r-73qpy651',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-807695970',owner_user_name='tempest-TestNetworkBasicOps-807695970-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T15:29:20Z,user_data=None,user_id='5338aa65dc0e4326a66ce79053787f14',uuid=af78e376-a9fb-4854-9c34-fd8c6f63390a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "31d3cb4c-b75a-468e-8a31-1fad4e27eb6e", "address": "fa:16:3e:d9:69:f6", "network": {"id": "a2b49a8c-7446-4445-ae6e-d2870040582f", "bridge": "br-int", "label": "tempest-network-smoke--338330285", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "3168f57421fb49bfb94b85daedd1fe7d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31d3cb4c-b7", "ovs_interfaceid": "31d3cb4c-b75a-468e-8a31-1fad4e27eb6e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 20 15:29:26 compute-1 nova_compute[225855]: 2026-01-20 15:29:26.442 225859 DEBUG nova.network.os_vif_util [None req-be45cfa3-d78e-4f5c-8d98-77fe199d6f8b 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Converting VIF {"id": "31d3cb4c-b75a-468e-8a31-1fad4e27eb6e", "address": "fa:16:3e:d9:69:f6", "network": {"id": "a2b49a8c-7446-4445-ae6e-d2870040582f", "bridge": "br-int", "label": "tempest-network-smoke--338330285", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3168f57421fb49bfb94b85daedd1fe7d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31d3cb4c-b7", "ovs_interfaceid": "31d3cb4c-b75a-468e-8a31-1fad4e27eb6e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 15:29:26 compute-1 nova_compute[225855]: 2026-01-20 15:29:26.443 225859 DEBUG nova.network.os_vif_util [None req-be45cfa3-d78e-4f5c-8d98-77fe199d6f8b 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d9:69:f6,bridge_name='br-int',has_traffic_filtering=True,id=31d3cb4c-b75a-468e-8a31-1fad4e27eb6e,network=Network(a2b49a8c-7446-4445-ae6e-d2870040582f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap31d3cb4c-b7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 15:29:26 compute-1 nova_compute[225855]: 2026-01-20 15:29:26.443 225859 DEBUG os_vif [None req-be45cfa3-d78e-4f5c-8d98-77fe199d6f8b 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d9:69:f6,bridge_name='br-int',has_traffic_filtering=True,id=31d3cb4c-b75a-468e-8a31-1fad4e27eb6e,network=Network(a2b49a8c-7446-4445-ae6e-d2870040582f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap31d3cb4c-b7') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 20 15:29:26 compute-1 nova_compute[225855]: 2026-01-20 15:29:26.444 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:29:26 compute-1 nova_compute[225855]: 2026-01-20 15:29:26.444 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:29:26 compute-1 nova_compute[225855]: 2026-01-20 15:29:26.445 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 15:29:26 compute-1 nova_compute[225855]: 2026-01-20 15:29:26.449 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:29:26 compute-1 nova_compute[225855]: 2026-01-20 15:29:26.449 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap31d3cb4c-b7, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:29:26 compute-1 nova_compute[225855]: 2026-01-20 15:29:26.450 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap31d3cb4c-b7, col_values=(('external_ids', {'iface-id': '31d3cb4c-b75a-468e-8a31-1fad4e27eb6e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d9:69:f6', 'vm-uuid': 'af78e376-a9fb-4854-9c34-fd8c6f63390a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:29:26 compute-1 nova_compute[225855]: 2026-01-20 15:29:26.451 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:29:26 compute-1 NetworkManager[49104]: <info>  [1768922966.4525] manager: (tap31d3cb4c-b7): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/398)
Jan 20 15:29:26 compute-1 nova_compute[225855]: 2026-01-20 15:29:26.453 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 15:29:26 compute-1 nova_compute[225855]: 2026-01-20 15:29:26.459 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:29:26 compute-1 nova_compute[225855]: 2026-01-20 15:29:26.460 225859 INFO os_vif [None req-be45cfa3-d78e-4f5c-8d98-77fe199d6f8b 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d9:69:f6,bridge_name='br-int',has_traffic_filtering=True,id=31d3cb4c-b75a-468e-8a31-1fad4e27eb6e,network=Network(a2b49a8c-7446-4445-ae6e-d2870040582f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap31d3cb4c-b7')
Jan 20 15:29:26 compute-1 nova_compute[225855]: 2026-01-20 15:29:26.518 225859 DEBUG nova.virt.libvirt.driver [None req-be45cfa3-d78e-4f5c-8d98-77fe199d6f8b 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 15:29:26 compute-1 nova_compute[225855]: 2026-01-20 15:29:26.518 225859 DEBUG nova.virt.libvirt.driver [None req-be45cfa3-d78e-4f5c-8d98-77fe199d6f8b 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 15:29:26 compute-1 nova_compute[225855]: 2026-01-20 15:29:26.519 225859 DEBUG nova.virt.libvirt.driver [None req-be45cfa3-d78e-4f5c-8d98-77fe199d6f8b 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] No VIF found with MAC fa:16:3e:d9:69:f6, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 20 15:29:26 compute-1 nova_compute[225855]: 2026-01-20 15:29:26.519 225859 INFO nova.virt.libvirt.driver [None req-be45cfa3-d78e-4f5c-8d98-77fe199d6f8b 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: af78e376-a9fb-4854-9c34-fd8c6f63390a] Using config drive
Jan 20 15:29:26 compute-1 nova_compute[225855]: 2026-01-20 15:29:26.544 225859 DEBUG nova.storage.rbd_utils [None req-be45cfa3-d78e-4f5c-8d98-77fe199d6f8b 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] rbd image af78e376-a9fb-4854-9c34-fd8c6f63390a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:29:26 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:29:26 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:29:26 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:29:26.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:29:26 compute-1 nova_compute[225855]: 2026-01-20 15:29:26.786 225859 DEBUG nova.network.neutron [req-a4ecdd67-9f29-43cf-b52a-122317914962 req-3b75d416-57e2-4bb2-9484-49b87fec81e9 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: af78e376-a9fb-4854-9c34-fd8c6f63390a] Updated VIF entry in instance network info cache for port 31d3cb4c-b75a-468e-8a31-1fad4e27eb6e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 15:29:26 compute-1 nova_compute[225855]: 2026-01-20 15:29:26.787 225859 DEBUG nova.network.neutron [req-a4ecdd67-9f29-43cf-b52a-122317914962 req-3b75d416-57e2-4bb2-9484-49b87fec81e9 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: af78e376-a9fb-4854-9c34-fd8c6f63390a] Updating instance_info_cache with network_info: [{"id": "31d3cb4c-b75a-468e-8a31-1fad4e27eb6e", "address": "fa:16:3e:d9:69:f6", "network": {"id": "a2b49a8c-7446-4445-ae6e-d2870040582f", "bridge": "br-int", "label": "tempest-network-smoke--338330285", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3168f57421fb49bfb94b85daedd1fe7d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31d3cb4c-b7", "ovs_interfaceid": "31d3cb4c-b75a-468e-8a31-1fad4e27eb6e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:29:26 compute-1 nova_compute[225855]: 2026-01-20 15:29:26.805 225859 DEBUG oslo_concurrency.lockutils [req-a4ecdd67-9f29-43cf-b52a-122317914962 req-3b75d416-57e2-4bb2-9484-49b87fec81e9 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-af78e376-a9fb-4854-9c34-fd8c6f63390a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:29:26 compute-1 nova_compute[225855]: 2026-01-20 15:29:26.888 225859 INFO nova.virt.libvirt.driver [None req-be45cfa3-d78e-4f5c-8d98-77fe199d6f8b 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: af78e376-a9fb-4854-9c34-fd8c6f63390a] Creating config drive at /var/lib/nova/instances/af78e376-a9fb-4854-9c34-fd8c6f63390a/disk.config
Jan 20 15:29:26 compute-1 nova_compute[225855]: 2026-01-20 15:29:26.893 225859 DEBUG oslo_concurrency.processutils [None req-be45cfa3-d78e-4f5c-8d98-77fe199d6f8b 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/af78e376-a9fb-4854-9c34-fd8c6f63390a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmph5ayjv58 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:29:27 compute-1 nova_compute[225855]: 2026-01-20 15:29:27.025 225859 DEBUG oslo_concurrency.processutils [None req-be45cfa3-d78e-4f5c-8d98-77fe199d6f8b 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/af78e376-a9fb-4854-9c34-fd8c6f63390a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmph5ayjv58" returned: 0 in 0.132s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:29:27 compute-1 nova_compute[225855]: 2026-01-20 15:29:27.061 225859 DEBUG nova.storage.rbd_utils [None req-be45cfa3-d78e-4f5c-8d98-77fe199d6f8b 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] rbd image af78e376-a9fb-4854-9c34-fd8c6f63390a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:29:27 compute-1 nova_compute[225855]: 2026-01-20 15:29:27.065 225859 DEBUG oslo_concurrency.processutils [None req-be45cfa3-d78e-4f5c-8d98-77fe199d6f8b 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/af78e376-a9fb-4854-9c34-fd8c6f63390a/disk.config af78e376-a9fb-4854-9c34-fd8c6f63390a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:29:27 compute-1 sudo[318736]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:29:27 compute-1 sudo[318736]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:29:27 compute-1 sudo[318736]: pam_unix(sudo:session): session closed for user root
Jan 20 15:29:27 compute-1 sudo[318761]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:29:27 compute-1 sudo[318761]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:29:27 compute-1 sudo[318761]: pam_unix(sudo:session): session closed for user root
Jan 20 15:29:27 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/475366063' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:29:27 compute-1 nova_compute[225855]: 2026-01-20 15:29:27.509 225859 DEBUG oslo_concurrency.processutils [None req-be45cfa3-d78e-4f5c-8d98-77fe199d6f8b 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/af78e376-a9fb-4854-9c34-fd8c6f63390a/disk.config af78e376-a9fb-4854-9c34-fd8c6f63390a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:29:27 compute-1 nova_compute[225855]: 2026-01-20 15:29:27.510 225859 INFO nova.virt.libvirt.driver [None req-be45cfa3-d78e-4f5c-8d98-77fe199d6f8b 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: af78e376-a9fb-4854-9c34-fd8c6f63390a] Deleting local config drive /var/lib/nova/instances/af78e376-a9fb-4854-9c34-fd8c6f63390a/disk.config because it was imported into RBD.
Jan 20 15:29:27 compute-1 kernel: tap31d3cb4c-b7: entered promiscuous mode
Jan 20 15:29:27 compute-1 NetworkManager[49104]: <info>  [1768922967.5723] manager: (tap31d3cb4c-b7): new Tun device (/org/freedesktop/NetworkManager/Devices/399)
Jan 20 15:29:27 compute-1 ovn_controller[130490]: 2026-01-20T15:29:27Z|00942|binding|INFO|Claiming lport 31d3cb4c-b75a-468e-8a31-1fad4e27eb6e for this chassis.
Jan 20 15:29:27 compute-1 ovn_controller[130490]: 2026-01-20T15:29:27Z|00943|binding|INFO|31d3cb4c-b75a-468e-8a31-1fad4e27eb6e: Claiming fa:16:3e:d9:69:f6 10.100.0.10
Jan 20 15:29:27 compute-1 nova_compute[225855]: 2026-01-20 15:29:27.573 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:29:27 compute-1 nova_compute[225855]: 2026-01-20 15:29:27.578 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:29:27 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:29:27.587 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d9:69:f6 10.100.0.10'], port_security=['fa:16:3e:d9:69:f6 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'af78e376-a9fb-4854-9c34-fd8c6f63390a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a2b49a8c-7446-4445-ae6e-d2870040582f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3168f57421fb49bfb94b85daedd1fe7d', 'neutron:revision_number': '2', 'neutron:security_group_ids': '8d4ade01-8f6f-48ff-bd8a-0af514e9e9ce', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=92632684-a601-487e-937b-036b3fd0bb35, chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], logical_port=31d3cb4c-b75a-468e-8a31-1fad4e27eb6e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:29:27 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:29:27.589 140354 INFO neutron.agent.ovn.metadata.agent [-] Port 31d3cb4c-b75a-468e-8a31-1fad4e27eb6e in datapath a2b49a8c-7446-4445-ae6e-d2870040582f bound to our chassis
Jan 20 15:29:27 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:29:27.589 140354 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a2b49a8c-7446-4445-ae6e-d2870040582f
Jan 20 15:29:27 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:29:27.600 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[e4a949fd-6b22-47ab-8e92-9d2071ee7fff]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:29:27 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:29:27.601 140354 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapa2b49a8c-71 in ovnmeta-a2b49a8c-7446-4445-ae6e-d2870040582f namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 20 15:29:27 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:29:27.602 229707 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapa2b49a8c-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 20 15:29:27 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:29:27.602 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[2371b732-4f09-4963-bc1e-d8d64c5da540]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:29:27 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:29:27.603 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[fcb799c5-2311-4974-a441-992e22bc8a69]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:29:27 compute-1 systemd-udevd[318799]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 15:29:27 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:29:27.614 140466 DEBUG oslo.privsep.daemon [-] privsep: reply[1767c1c4-0a05-4132-9272-cd1feae6fbd9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:29:27 compute-1 NetworkManager[49104]: <info>  [1768922967.6222] device (tap31d3cb4c-b7): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 20 15:29:27 compute-1 NetworkManager[49104]: <info>  [1768922967.6227] device (tap31d3cb4c-b7): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 20 15:29:27 compute-1 systemd-machined[194361]: New machine qemu-109-instance-000000cd.
Jan 20 15:29:27 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:29:27.638 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[bb3b0b65-98ba-4568-887f-cacfe3aecd74]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:29:27 compute-1 nova_compute[225855]: 2026-01-20 15:29:27.645 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:29:27 compute-1 systemd[1]: Started Virtual Machine qemu-109-instance-000000cd.
Jan 20 15:29:27 compute-1 ovn_controller[130490]: 2026-01-20T15:29:27Z|00944|binding|INFO|Setting lport 31d3cb4c-b75a-468e-8a31-1fad4e27eb6e ovn-installed in OVS
Jan 20 15:29:27 compute-1 ovn_controller[130490]: 2026-01-20T15:29:27Z|00945|binding|INFO|Setting lport 31d3cb4c-b75a-468e-8a31-1fad4e27eb6e up in Southbound
Jan 20 15:29:27 compute-1 nova_compute[225855]: 2026-01-20 15:29:27.653 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:29:27 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:29:27.668 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[940424ec-6d0b-4951-b1a5-0e65b1fd5fc1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:29:27 compute-1 sudo[318802]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:29:27 compute-1 NetworkManager[49104]: <info>  [1768922967.6765] manager: (tapa2b49a8c-70): new Veth device (/org/freedesktop/NetworkManager/Devices/400)
Jan 20 15:29:27 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:29:27.675 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[99b1fcf4-450c-496e-ac85-b2dbdfd73d72]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:29:27 compute-1 systemd-udevd[318808]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 15:29:27 compute-1 sudo[318802]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:29:27 compute-1 sudo[318802]: pam_unix(sudo:session): session closed for user root
Jan 20 15:29:27 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:29:27.709 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[adfbeca7-74b7-4d88-b9bc-bea146668438]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:29:27 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:29:27.711 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[d73e3fc2-8b68-4f3d-ae1d-de8770cb49f0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:29:27 compute-1 NetworkManager[49104]: <info>  [1768922967.7348] device (tapa2b49a8c-70): carrier: link connected
Jan 20 15:29:27 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:29:27.740 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[367aff37-a7f9-47cd-919d-0598db4ea8b3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:29:27 compute-1 sudo[318838]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 15:29:27 compute-1 sudo[318838]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:29:27 compute-1 sudo[318838]: pam_unix(sudo:session): session closed for user root
Jan 20 15:29:27 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:29:27.757 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[880ee89c-8bc3-414f-adb7-1478068a77f8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa2b49a8c-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:33:53:37'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 268], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 800666, 'reachable_time': 23706, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 318882, 'error': None, 'target': 'ovnmeta-a2b49a8c-7446-4445-ae6e-d2870040582f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:29:27 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:29:27.773 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[daddaeb5-c8f2-4545-9e04-bbdcb7de8b44]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe33:5337'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 800666, 'tstamp': 800666}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 318884, 'error': None, 'target': 'ovnmeta-a2b49a8c-7446-4445-ae6e-d2870040582f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:29:27 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:29:27.790 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[1fab6ee8-6c73-4ceb-a348-2a942ce5da59]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa2b49a8c-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:33:53:37'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 268], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 800666, 'reachable_time': 23706, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 318885, 'error': None, 'target': 'ovnmeta-a2b49a8c-7446-4445-ae6e-d2870040582f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:29:27 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:29:27.821 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[618c34a4-8566-4d12-96f0-81cb4effd289]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:29:27 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:29:27.878 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[40f0ae2e-dc9b-4a07-a575-3164a17ab1f1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:29:27 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:29:27.879 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa2b49a8c-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:29:27 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:29:27.880 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 15:29:27 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:29:27.881 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa2b49a8c-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:29:27 compute-1 kernel: tapa2b49a8c-70: entered promiscuous mode
Jan 20 15:29:27 compute-1 nova_compute[225855]: 2026-01-20 15:29:27.882 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:29:27 compute-1 NetworkManager[49104]: <info>  [1768922967.8834] manager: (tapa2b49a8c-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/401)
Jan 20 15:29:27 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:29:27.885 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa2b49a8c-70, col_values=(('external_ids', {'iface-id': '83da6236-f092-462b-85f8-aab29a73a3b5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:29:27 compute-1 nova_compute[225855]: 2026-01-20 15:29:27.886 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:29:27 compute-1 ovn_controller[130490]: 2026-01-20T15:29:27Z|00946|binding|INFO|Releasing lport 83da6236-f092-462b-85f8-aab29a73a3b5 from this chassis (sb_readonly=0)
Jan 20 15:29:27 compute-1 nova_compute[225855]: 2026-01-20 15:29:27.900 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:29:27 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:29:27.902 140354 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/a2b49a8c-7446-4445-ae6e-d2870040582f.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/a2b49a8c-7446-4445-ae6e-d2870040582f.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 20 15:29:27 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:29:27.903 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[82cecca3-dcb2-4b43-b94d-6f367e8ea3f5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:29:27 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:29:27.904 140354 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 20 15:29:27 compute-1 ovn_metadata_agent[140349]: global
Jan 20 15:29:27 compute-1 ovn_metadata_agent[140349]:     log         /dev/log local0 debug
Jan 20 15:29:27 compute-1 ovn_metadata_agent[140349]:     log-tag     haproxy-metadata-proxy-a2b49a8c-7446-4445-ae6e-d2870040582f
Jan 20 15:29:27 compute-1 ovn_metadata_agent[140349]:     user        root
Jan 20 15:29:27 compute-1 ovn_metadata_agent[140349]:     group       root
Jan 20 15:29:27 compute-1 ovn_metadata_agent[140349]:     maxconn     1024
Jan 20 15:29:27 compute-1 ovn_metadata_agent[140349]:     pidfile     /var/lib/neutron/external/pids/a2b49a8c-7446-4445-ae6e-d2870040582f.pid.haproxy
Jan 20 15:29:27 compute-1 ovn_metadata_agent[140349]:     daemon
Jan 20 15:29:27 compute-1 ovn_metadata_agent[140349]: 
Jan 20 15:29:27 compute-1 ovn_metadata_agent[140349]: defaults
Jan 20 15:29:27 compute-1 ovn_metadata_agent[140349]:     log global
Jan 20 15:29:27 compute-1 ovn_metadata_agent[140349]:     mode http
Jan 20 15:29:27 compute-1 ovn_metadata_agent[140349]:     option httplog
Jan 20 15:29:27 compute-1 ovn_metadata_agent[140349]:     option dontlognull
Jan 20 15:29:27 compute-1 ovn_metadata_agent[140349]:     option http-server-close
Jan 20 15:29:27 compute-1 ovn_metadata_agent[140349]:     option forwardfor
Jan 20 15:29:27 compute-1 ovn_metadata_agent[140349]:     retries                 3
Jan 20 15:29:27 compute-1 ovn_metadata_agent[140349]:     timeout http-request    30s
Jan 20 15:29:27 compute-1 ovn_metadata_agent[140349]:     timeout connect         30s
Jan 20 15:29:27 compute-1 ovn_metadata_agent[140349]:     timeout client          32s
Jan 20 15:29:27 compute-1 ovn_metadata_agent[140349]:     timeout server          32s
Jan 20 15:29:27 compute-1 ovn_metadata_agent[140349]:     timeout http-keep-alive 30s
Jan 20 15:29:27 compute-1 ovn_metadata_agent[140349]: 
Jan 20 15:29:27 compute-1 ovn_metadata_agent[140349]: 
Jan 20 15:29:27 compute-1 ovn_metadata_agent[140349]: listen listener
Jan 20 15:29:27 compute-1 ovn_metadata_agent[140349]:     bind 169.254.169.254:80
Jan 20 15:29:27 compute-1 ovn_metadata_agent[140349]:     server metadata /var/lib/neutron/metadata_proxy
Jan 20 15:29:27 compute-1 ovn_metadata_agent[140349]:     http-request add-header X-OVN-Network-ID a2b49a8c-7446-4445-ae6e-d2870040582f
Jan 20 15:29:27 compute-1 ovn_metadata_agent[140349]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 20 15:29:27 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:29:27.904 140354 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-a2b49a8c-7446-4445-ae6e-d2870040582f', 'env', 'PROCESS_TAG=haproxy-a2b49a8c-7446-4445-ae6e-d2870040582f', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/a2b49a8c-7446-4445-ae6e-d2870040582f.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 20 15:29:27 compute-1 nova_compute[225855]: 2026-01-20 15:29:27.932 225859 DEBUG nova.compute.manager [req-5c67f1d9-02a7-435e-9cb6-8d60cf10bbfd req-0088dba6-0404-44d7-8fa1-757b2e088460 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: af78e376-a9fb-4854-9c34-fd8c6f63390a] Received event network-vif-plugged-31d3cb4c-b75a-468e-8a31-1fad4e27eb6e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:29:27 compute-1 nova_compute[225855]: 2026-01-20 15:29:27.932 225859 DEBUG oslo_concurrency.lockutils [req-5c67f1d9-02a7-435e-9cb6-8d60cf10bbfd req-0088dba6-0404-44d7-8fa1-757b2e088460 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "af78e376-a9fb-4854-9c34-fd8c6f63390a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:29:27 compute-1 nova_compute[225855]: 2026-01-20 15:29:27.933 225859 DEBUG oslo_concurrency.lockutils [req-5c67f1d9-02a7-435e-9cb6-8d60cf10bbfd req-0088dba6-0404-44d7-8fa1-757b2e088460 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "af78e376-a9fb-4854-9c34-fd8c6f63390a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:29:27 compute-1 nova_compute[225855]: 2026-01-20 15:29:27.933 225859 DEBUG oslo_concurrency.lockutils [req-5c67f1d9-02a7-435e-9cb6-8d60cf10bbfd req-0088dba6-0404-44d7-8fa1-757b2e088460 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "af78e376-a9fb-4854-9c34-fd8c6f63390a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:29:27 compute-1 nova_compute[225855]: 2026-01-20 15:29:27.933 225859 DEBUG nova.compute.manager [req-5c67f1d9-02a7-435e-9cb6-8d60cf10bbfd req-0088dba6-0404-44d7-8fa1-757b2e088460 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: af78e376-a9fb-4854-9c34-fd8c6f63390a] Processing event network-vif-plugged-31d3cb4c-b75a-468e-8a31-1fad4e27eb6e _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 20 15:29:28 compute-1 nova_compute[225855]: 2026-01-20 15:29:28.100 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768922968.1003408, af78e376-a9fb-4854-9c34-fd8c6f63390a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:29:28 compute-1 nova_compute[225855]: 2026-01-20 15:29:28.106 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: af78e376-a9fb-4854-9c34-fd8c6f63390a] VM Started (Lifecycle Event)
Jan 20 15:29:28 compute-1 nova_compute[225855]: 2026-01-20 15:29:28.108 225859 DEBUG nova.compute.manager [None req-be45cfa3-d78e-4f5c-8d98-77fe199d6f8b 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: af78e376-a9fb-4854-9c34-fd8c6f63390a] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 20 15:29:28 compute-1 nova_compute[225855]: 2026-01-20 15:29:28.111 225859 DEBUG nova.virt.libvirt.driver [None req-be45cfa3-d78e-4f5c-8d98-77fe199d6f8b 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: af78e376-a9fb-4854-9c34-fd8c6f63390a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 20 15:29:28 compute-1 nova_compute[225855]: 2026-01-20 15:29:28.115 225859 INFO nova.virt.libvirt.driver [-] [instance: af78e376-a9fb-4854-9c34-fd8c6f63390a] Instance spawned successfully.
Jan 20 15:29:28 compute-1 nova_compute[225855]: 2026-01-20 15:29:28.116 225859 DEBUG nova.virt.libvirt.driver [None req-be45cfa3-d78e-4f5c-8d98-77fe199d6f8b 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: af78e376-a9fb-4854-9c34-fd8c6f63390a] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 20 15:29:28 compute-1 nova_compute[225855]: 2026-01-20 15:29:28.129 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: af78e376-a9fb-4854-9c34-fd8c6f63390a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:29:28 compute-1 nova_compute[225855]: 2026-01-20 15:29:28.133 225859 DEBUG nova.virt.libvirt.driver [None req-be45cfa3-d78e-4f5c-8d98-77fe199d6f8b 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: af78e376-a9fb-4854-9c34-fd8c6f63390a] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:29:28 compute-1 nova_compute[225855]: 2026-01-20 15:29:28.134 225859 DEBUG nova.virt.libvirt.driver [None req-be45cfa3-d78e-4f5c-8d98-77fe199d6f8b 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: af78e376-a9fb-4854-9c34-fd8c6f63390a] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:29:28 compute-1 nova_compute[225855]: 2026-01-20 15:29:28.135 225859 DEBUG nova.virt.libvirt.driver [None req-be45cfa3-d78e-4f5c-8d98-77fe199d6f8b 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: af78e376-a9fb-4854-9c34-fd8c6f63390a] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:29:28 compute-1 nova_compute[225855]: 2026-01-20 15:29:28.135 225859 DEBUG nova.virt.libvirt.driver [None req-be45cfa3-d78e-4f5c-8d98-77fe199d6f8b 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: af78e376-a9fb-4854-9c34-fd8c6f63390a] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:29:28 compute-1 nova_compute[225855]: 2026-01-20 15:29:28.136 225859 DEBUG nova.virt.libvirt.driver [None req-be45cfa3-d78e-4f5c-8d98-77fe199d6f8b 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: af78e376-a9fb-4854-9c34-fd8c6f63390a] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:29:28 compute-1 nova_compute[225855]: 2026-01-20 15:29:28.136 225859 DEBUG nova.virt.libvirt.driver [None req-be45cfa3-d78e-4f5c-8d98-77fe199d6f8b 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: af78e376-a9fb-4854-9c34-fd8c6f63390a] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:29:28 compute-1 nova_compute[225855]: 2026-01-20 15:29:28.142 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: af78e376-a9fb-4854-9c34-fd8c6f63390a] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 15:29:28 compute-1 nova_compute[225855]: 2026-01-20 15:29:28.175 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: af78e376-a9fb-4854-9c34-fd8c6f63390a] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 15:29:28 compute-1 nova_compute[225855]: 2026-01-20 15:29:28.176 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768922968.1013553, af78e376-a9fb-4854-9c34-fd8c6f63390a => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:29:28 compute-1 nova_compute[225855]: 2026-01-20 15:29:28.176 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: af78e376-a9fb-4854-9c34-fd8c6f63390a] VM Paused (Lifecycle Event)
Jan 20 15:29:28 compute-1 nova_compute[225855]: 2026-01-20 15:29:28.200 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: af78e376-a9fb-4854-9c34-fd8c6f63390a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:29:28 compute-1 nova_compute[225855]: 2026-01-20 15:29:28.204 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768922968.1111882, af78e376-a9fb-4854-9c34-fd8c6f63390a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:29:28 compute-1 nova_compute[225855]: 2026-01-20 15:29:28.204 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: af78e376-a9fb-4854-9c34-fd8c6f63390a] VM Resumed (Lifecycle Event)
Jan 20 15:29:28 compute-1 nova_compute[225855]: 2026-01-20 15:29:28.211 225859 INFO nova.compute.manager [None req-be45cfa3-d78e-4f5c-8d98-77fe199d6f8b 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: af78e376-a9fb-4854-9c34-fd8c6f63390a] Took 7.43 seconds to spawn the instance on the hypervisor.
Jan 20 15:29:28 compute-1 nova_compute[225855]: 2026-01-20 15:29:28.212 225859 DEBUG nova.compute.manager [None req-be45cfa3-d78e-4f5c-8d98-77fe199d6f8b 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: af78e376-a9fb-4854-9c34-fd8c6f63390a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:29:28 compute-1 nova_compute[225855]: 2026-01-20 15:29:28.220 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: af78e376-a9fb-4854-9c34-fd8c6f63390a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:29:28 compute-1 nova_compute[225855]: 2026-01-20 15:29:28.224 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: af78e376-a9fb-4854-9c34-fd8c6f63390a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 15:29:28 compute-1 podman[318955]: 2026-01-20 15:29:28.279353067 +0000 UTC m=+0.048540277 container create 915250c82fb88665cde5846eb4724025c3659abf7c6170479e5702a9e6426c08 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a2b49a8c-7446-4445-ae6e-d2870040582f, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Jan 20 15:29:28 compute-1 nova_compute[225855]: 2026-01-20 15:29:28.295 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: af78e376-a9fb-4854-9c34-fd8c6f63390a] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 15:29:28 compute-1 systemd[1]: Started libpod-conmon-915250c82fb88665cde5846eb4724025c3659abf7c6170479e5702a9e6426c08.scope.
Jan 20 15:29:28 compute-1 nova_compute[225855]: 2026-01-20 15:29:28.319 225859 INFO nova.compute.manager [None req-be45cfa3-d78e-4f5c-8d98-77fe199d6f8b 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: af78e376-a9fb-4854-9c34-fd8c6f63390a] Took 8.46 seconds to build instance.
Jan 20 15:29:28 compute-1 nova_compute[225855]: 2026-01-20 15:29:28.338 225859 DEBUG oslo_concurrency.lockutils [None req-be45cfa3-d78e-4f5c-8d98-77fe199d6f8b 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Lock "af78e376-a9fb-4854-9c34-fd8c6f63390a" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.551s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:29:28 compute-1 systemd[1]: Started libcrun container.
Jan 20 15:29:28 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fbfcd70bae3d1e8aa8f666a8626fbab06db564cbad7eb171c298ad570d143684/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 20 15:29:28 compute-1 podman[318955]: 2026-01-20 15:29:28.254768126 +0000 UTC m=+0.023955346 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 20 15:29:28 compute-1 podman[318955]: 2026-01-20 15:29:28.358304892 +0000 UTC m=+0.127492142 container init 915250c82fb88665cde5846eb4724025c3659abf7c6170479e5702a9e6426c08 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a2b49a8c-7446-4445-ae6e-d2870040582f, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 20 15:29:28 compute-1 podman[318955]: 2026-01-20 15:29:28.365497467 +0000 UTC m=+0.134684687 container start 915250c82fb88665cde5846eb4724025c3659abf7c6170479e5702a9e6426c08 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a2b49a8c-7446-4445-ae6e-d2870040582f, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 20 15:29:28 compute-1 neutron-haproxy-ovnmeta-a2b49a8c-7446-4445-ae6e-d2870040582f[318971]: [NOTICE]   (318975) : New worker (318977) forked
Jan 20 15:29:28 compute-1 neutron-haproxy-ovnmeta-a2b49a8c-7446-4445-ae6e-d2870040582f[318971]: [NOTICE]   (318975) : Loading success.
Jan 20 15:29:28 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:29:28 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:29:28 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:29:28.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:29:28 compute-1 ceph-mon[81775]: pgmap v3218: 321 pgs: 321 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 20 15:29:28 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:29:28 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:29:28 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:29:28 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:29:28 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:29:28 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:29:28.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:29:28 compute-1 nova_compute[225855]: 2026-01-20 15:29:28.870 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:29:30 compute-1 nova_compute[225855]: 2026-01-20 15:29:30.028 225859 DEBUG nova.compute.manager [req-17701128-912e-4c7d-b82b-5d0e0fcd35ff req-7403f88e-5b11-42b9-bb55-0ba0839ec774 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: af78e376-a9fb-4854-9c34-fd8c6f63390a] Received event network-vif-plugged-31d3cb4c-b75a-468e-8a31-1fad4e27eb6e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:29:30 compute-1 nova_compute[225855]: 2026-01-20 15:29:30.028 225859 DEBUG oslo_concurrency.lockutils [req-17701128-912e-4c7d-b82b-5d0e0fcd35ff req-7403f88e-5b11-42b9-bb55-0ba0839ec774 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "af78e376-a9fb-4854-9c34-fd8c6f63390a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:29:30 compute-1 nova_compute[225855]: 2026-01-20 15:29:30.029 225859 DEBUG oslo_concurrency.lockutils [req-17701128-912e-4c7d-b82b-5d0e0fcd35ff req-7403f88e-5b11-42b9-bb55-0ba0839ec774 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "af78e376-a9fb-4854-9c34-fd8c6f63390a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:29:30 compute-1 nova_compute[225855]: 2026-01-20 15:29:30.029 225859 DEBUG oslo_concurrency.lockutils [req-17701128-912e-4c7d-b82b-5d0e0fcd35ff req-7403f88e-5b11-42b9-bb55-0ba0839ec774 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "af78e376-a9fb-4854-9c34-fd8c6f63390a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:29:30 compute-1 nova_compute[225855]: 2026-01-20 15:29:30.029 225859 DEBUG nova.compute.manager [req-17701128-912e-4c7d-b82b-5d0e0fcd35ff req-7403f88e-5b11-42b9-bb55-0ba0839ec774 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: af78e376-a9fb-4854-9c34-fd8c6f63390a] No waiting events found dispatching network-vif-plugged-31d3cb4c-b75a-468e-8a31-1fad4e27eb6e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:29:30 compute-1 nova_compute[225855]: 2026-01-20 15:29:30.029 225859 WARNING nova.compute.manager [req-17701128-912e-4c7d-b82b-5d0e0fcd35ff req-7403f88e-5b11-42b9-bb55-0ba0839ec774 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: af78e376-a9fb-4854-9c34-fd8c6f63390a] Received unexpected event network-vif-plugged-31d3cb4c-b75a-468e-8a31-1fad4e27eb6e for instance with vm_state active and task_state None.
Jan 20 15:29:30 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:29:30 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:29:30 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:29:30.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:29:30 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:29:30 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:29:30 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:29:30.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:29:30 compute-1 ceph-mon[81775]: pgmap v3219: 321 pgs: 321 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 20 15:29:31 compute-1 nova_compute[225855]: 2026-01-20 15:29:31.452 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:29:31 compute-1 nova_compute[225855]: 2026-01-20 15:29:31.610 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:29:31 compute-1 ovn_controller[130490]: 2026-01-20T15:29:31Z|00947|binding|INFO|Releasing lport 83da6236-f092-462b-85f8-aab29a73a3b5 from this chassis (sb_readonly=0)
Jan 20 15:29:31 compute-1 NetworkManager[49104]: <info>  [1768922971.6138] manager: (patch-br-int-to-provnet-b62c391b-f7a3-4a38-a0df-72ac0383ca74): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/402)
Jan 20 15:29:31 compute-1 NetworkManager[49104]: <info>  [1768922971.6147] manager: (patch-provnet-b62c391b-f7a3-4a38-a0df-72ac0383ca74-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/403)
Jan 20 15:29:31 compute-1 nova_compute[225855]: 2026-01-20 15:29:31.644 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:29:31 compute-1 ovn_controller[130490]: 2026-01-20T15:29:31Z|00948|binding|INFO|Releasing lport 83da6236-f092-462b-85f8-aab29a73a3b5 from this chassis (sb_readonly=0)
Jan 20 15:29:31 compute-1 nova_compute[225855]: 2026-01-20 15:29:31.649 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:29:31 compute-1 nova_compute[225855]: 2026-01-20 15:29:31.904 225859 DEBUG nova.compute.manager [req-dcdbea0a-b4f6-4b85-8454-80c7c0cc24ce req-2a525e4c-1e17-49ad-813f-15647973df12 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: af78e376-a9fb-4854-9c34-fd8c6f63390a] Received event network-changed-31d3cb4c-b75a-468e-8a31-1fad4e27eb6e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:29:31 compute-1 nova_compute[225855]: 2026-01-20 15:29:31.905 225859 DEBUG nova.compute.manager [req-dcdbea0a-b4f6-4b85-8454-80c7c0cc24ce req-2a525e4c-1e17-49ad-813f-15647973df12 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: af78e376-a9fb-4854-9c34-fd8c6f63390a] Refreshing instance network info cache due to event network-changed-31d3cb4c-b75a-468e-8a31-1fad4e27eb6e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 15:29:31 compute-1 nova_compute[225855]: 2026-01-20 15:29:31.905 225859 DEBUG oslo_concurrency.lockutils [req-dcdbea0a-b4f6-4b85-8454-80c7c0cc24ce req-2a525e4c-1e17-49ad-813f-15647973df12 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-af78e376-a9fb-4854-9c34-fd8c6f63390a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:29:31 compute-1 nova_compute[225855]: 2026-01-20 15:29:31.905 225859 DEBUG oslo_concurrency.lockutils [req-dcdbea0a-b4f6-4b85-8454-80c7c0cc24ce req-2a525e4c-1e17-49ad-813f-15647973df12 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-af78e376-a9fb-4854-9c34-fd8c6f63390a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:29:31 compute-1 nova_compute[225855]: 2026-01-20 15:29:31.906 225859 DEBUG nova.network.neutron [req-dcdbea0a-b4f6-4b85-8454-80c7c0cc24ce req-2a525e4c-1e17-49ad-813f-15647973df12 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: af78e376-a9fb-4854-9c34-fd8c6f63390a] Refreshing network info cache for port 31d3cb4c-b75a-468e-8a31-1fad4e27eb6e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 15:29:32 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:29:32 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:29:32 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:29:32.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:29:32 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:29:32 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:29:32 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:29:32.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:29:32 compute-1 ceph-mon[81775]: pgmap v3220: 321 pgs: 321 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 91 op/s
Jan 20 15:29:33 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:29:33 compute-1 nova_compute[225855]: 2026-01-20 15:29:33.872 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:29:33 compute-1 nova_compute[225855]: 2026-01-20 15:29:33.930 225859 DEBUG nova.network.neutron [req-dcdbea0a-b4f6-4b85-8454-80c7c0cc24ce req-2a525e4c-1e17-49ad-813f-15647973df12 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: af78e376-a9fb-4854-9c34-fd8c6f63390a] Updated VIF entry in instance network info cache for port 31d3cb4c-b75a-468e-8a31-1fad4e27eb6e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 15:29:33 compute-1 nova_compute[225855]: 2026-01-20 15:29:33.931 225859 DEBUG nova.network.neutron [req-dcdbea0a-b4f6-4b85-8454-80c7c0cc24ce req-2a525e4c-1e17-49ad-813f-15647973df12 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: af78e376-a9fb-4854-9c34-fd8c6f63390a] Updating instance_info_cache with network_info: [{"id": "31d3cb4c-b75a-468e-8a31-1fad4e27eb6e", "address": "fa:16:3e:d9:69:f6", "network": {"id": "a2b49a8c-7446-4445-ae6e-d2870040582f", "bridge": "br-int", "label": "tempest-network-smoke--338330285", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.173", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3168f57421fb49bfb94b85daedd1fe7d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31d3cb4c-b7", "ovs_interfaceid": "31d3cb4c-b75a-468e-8a31-1fad4e27eb6e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:29:33 compute-1 nova_compute[225855]: 2026-01-20 15:29:33.955 225859 DEBUG oslo_concurrency.lockutils [req-dcdbea0a-b4f6-4b85-8454-80c7c0cc24ce req-2a525e4c-1e17-49ad-813f-15647973df12 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-af78e376-a9fb-4854-9c34-fd8c6f63390a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:29:34 compute-1 nova_compute[225855]: 2026-01-20 15:29:34.361 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:29:34 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:29:34 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:29:34 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:29:34.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:29:34 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:29:34 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:29:34 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:29:34.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:29:34 compute-1 ceph-mon[81775]: pgmap v3221: 321 pgs: 321 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 91 op/s
Jan 20 15:29:35 compute-1 nova_compute[225855]: 2026-01-20 15:29:35.340 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:29:35 compute-1 nova_compute[225855]: 2026-01-20 15:29:35.341 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 20 15:29:36 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:29:36 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:29:36 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:29:36.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:29:36 compute-1 nova_compute[225855]: 2026-01-20 15:29:36.454 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:29:36 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:29:36 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:29:36 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:29:36.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:29:36 compute-1 ceph-mon[81775]: pgmap v3222: 321 pgs: 321 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Jan 20 15:29:38 compute-1 podman[318992]: 2026-01-20 15:29:38.070689232 +0000 UTC m=+0.119192134 container health_status 72be95c12e041eb2cd1a16ad70dfa1391d9f03a5f3180a614ead9459dcfcd46f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 20 15:29:38 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:29:38 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:29:38 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:29:38.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:29:38 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:29:38 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:29:38 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:29:38 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:29:38.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:29:38 compute-1 ceph-mon[81775]: pgmap v3223: 321 pgs: 321 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 924 KiB/s wr, 86 op/s
Jan 20 15:29:38 compute-1 nova_compute[225855]: 2026-01-20 15:29:38.874 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:29:40 compute-1 ceph-osd[79119]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #55. Immutable memtables: 11.
Jan 20 15:29:40 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:29:40 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:29:40 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:29:40.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:29:40 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:29:40 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:29:40 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:29:40.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:29:40 compute-1 ceph-mon[81775]: pgmap v3224: 321 pgs: 321 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 20 15:29:41 compute-1 nova_compute[225855]: 2026-01-20 15:29:41.503 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:29:42 compute-1 ovn_controller[130490]: 2026-01-20T15:29:42Z|00114|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:d9:69:f6 10.100.0.10
Jan 20 15:29:42 compute-1 ovn_controller[130490]: 2026-01-20T15:29:42Z|00115|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:d9:69:f6 10.100.0.10
Jan 20 15:29:42 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:29:42 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:29:42 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:29:42.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:29:42 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:29:42 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:29:42 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:29:42.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:29:42 compute-1 ceph-mon[81775]: pgmap v3225: 321 pgs: 321 active+clean; 175 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 1.1 MiB/s wr, 97 op/s
Jan 20 15:29:43 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:29:43 compute-1 nova_compute[225855]: 2026-01-20 15:29:43.924 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:29:44 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:29:44 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:29:44 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:29:44.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:29:44 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:29:44 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:29:44 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:29:44.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:29:44 compute-1 ceph-mon[81775]: pgmap v3226: 321 pgs: 321 active+clean; 175 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 418 KiB/s rd, 1.1 MiB/s wr, 32 op/s
Jan 20 15:29:45 compute-1 ceph-mon[81775]: pgmap v3227: 321 pgs: 321 active+clean; 198 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 545 KiB/s rd, 2.1 MiB/s wr, 68 op/s
Jan 20 15:29:46 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:29:46 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:29:46 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:29:46.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:29:46 compute-1 nova_compute[225855]: 2026-01-20 15:29:46.505 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:29:46 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:29:46 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:29:46 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:29:46.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:29:47 compute-1 sudo[319023]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:29:47 compute-1 sudo[319023]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:29:47 compute-1 sudo[319023]: pam_unix(sudo:session): session closed for user root
Jan 20 15:29:47 compute-1 sudo[319048]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:29:47 compute-1 sudo[319048]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:29:47 compute-1 sudo[319048]: pam_unix(sudo:session): session closed for user root
Jan 20 15:29:48 compute-1 ceph-mon[81775]: pgmap v3228: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 359 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Jan 20 15:29:48 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:29:48 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:29:48 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:29:48.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:29:48 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:29:48 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:29:48 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:29:48 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:29:48.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:29:48 compute-1 nova_compute[225855]: 2026-01-20 15:29:48.647 225859 INFO nova.compute.manager [None req-8d4a16ca-c04e-4973-b930-2615e689c44d 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: af78e376-a9fb-4854-9c34-fd8c6f63390a] Get console output
Jan 20 15:29:48 compute-1 nova_compute[225855]: 2026-01-20 15:29:48.652 263775 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Jan 20 15:29:48 compute-1 nova_compute[225855]: 2026-01-20 15:29:48.926 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:29:49 compute-1 podman[319074]: 2026-01-20 15:29:49.01171139 +0000 UTC m=+0.052401357 container health_status 533cd8e0c3b8b7f910f15cef49e7f879ab46c12ce646a35628c15f13725eb500 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent)
Jan 20 15:29:49 compute-1 nova_compute[225855]: 2026-01-20 15:29:49.622 225859 INFO nova.compute.manager [None req-eead650f-1964-4b99-98dc-d03510545f6a 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: af78e376-a9fb-4854-9c34-fd8c6f63390a] Get console output
Jan 20 15:29:49 compute-1 nova_compute[225855]: 2026-01-20 15:29:49.626 263775 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Jan 20 15:29:50 compute-1 ceph-mon[81775]: pgmap v3229: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 359 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Jan 20 15:29:50 compute-1 nova_compute[225855]: 2026-01-20 15:29:50.362 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:29:50 compute-1 nova_compute[225855]: 2026-01-20 15:29:50.363 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 20 15:29:50 compute-1 nova_compute[225855]: 2026-01-20 15:29:50.381 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 20 15:29:50 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:29:50.439 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=75, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '12:bb:42', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '06:92:24:f7:15:56'}, ipsec=False) old=SB_Global(nb_cfg=74) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:29:50 compute-1 nova_compute[225855]: 2026-01-20 15:29:50.439 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:29:50 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:29:50.440 140354 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 20 15:29:50 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:29:50 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:29:50 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:29:50.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:29:50 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:29:50 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:29:50 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:29:50.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:29:50 compute-1 nova_compute[225855]: 2026-01-20 15:29:50.699 225859 DEBUG nova.compute.manager [req-ddc73362-1fdd-47fd-883f-53ae0e138efb req-d1b0c495-3541-4df6-9a1d-8c0ba5786bd7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: af78e376-a9fb-4854-9c34-fd8c6f63390a] Received event network-changed-31d3cb4c-b75a-468e-8a31-1fad4e27eb6e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:29:50 compute-1 nova_compute[225855]: 2026-01-20 15:29:50.700 225859 DEBUG nova.compute.manager [req-ddc73362-1fdd-47fd-883f-53ae0e138efb req-d1b0c495-3541-4df6-9a1d-8c0ba5786bd7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: af78e376-a9fb-4854-9c34-fd8c6f63390a] Refreshing instance network info cache due to event network-changed-31d3cb4c-b75a-468e-8a31-1fad4e27eb6e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 15:29:50 compute-1 nova_compute[225855]: 2026-01-20 15:29:50.700 225859 DEBUG oslo_concurrency.lockutils [req-ddc73362-1fdd-47fd-883f-53ae0e138efb req-d1b0c495-3541-4df6-9a1d-8c0ba5786bd7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-af78e376-a9fb-4854-9c34-fd8c6f63390a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:29:50 compute-1 nova_compute[225855]: 2026-01-20 15:29:50.700 225859 DEBUG oslo_concurrency.lockutils [req-ddc73362-1fdd-47fd-883f-53ae0e138efb req-d1b0c495-3541-4df6-9a1d-8c0ba5786bd7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-af78e376-a9fb-4854-9c34-fd8c6f63390a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:29:50 compute-1 nova_compute[225855]: 2026-01-20 15:29:50.700 225859 DEBUG nova.network.neutron [req-ddc73362-1fdd-47fd-883f-53ae0e138efb req-d1b0c495-3541-4df6-9a1d-8c0ba5786bd7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: af78e376-a9fb-4854-9c34-fd8c6f63390a] Refreshing network info cache for port 31d3cb4c-b75a-468e-8a31-1fad4e27eb6e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 15:29:50 compute-1 nova_compute[225855]: 2026-01-20 15:29:50.790 225859 DEBUG oslo_concurrency.lockutils [None req-be7a7344-e8aa-43c5-ad00-ad230acbd8fc 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Acquiring lock "af78e376-a9fb-4854-9c34-fd8c6f63390a" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:29:50 compute-1 nova_compute[225855]: 2026-01-20 15:29:50.791 225859 DEBUG oslo_concurrency.lockutils [None req-be7a7344-e8aa-43c5-ad00-ad230acbd8fc 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Lock "af78e376-a9fb-4854-9c34-fd8c6f63390a" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:29:50 compute-1 nova_compute[225855]: 2026-01-20 15:29:50.791 225859 DEBUG oslo_concurrency.lockutils [None req-be7a7344-e8aa-43c5-ad00-ad230acbd8fc 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Acquiring lock "af78e376-a9fb-4854-9c34-fd8c6f63390a-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:29:50 compute-1 nova_compute[225855]: 2026-01-20 15:29:50.792 225859 DEBUG oslo_concurrency.lockutils [None req-be7a7344-e8aa-43c5-ad00-ad230acbd8fc 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Lock "af78e376-a9fb-4854-9c34-fd8c6f63390a-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:29:50 compute-1 nova_compute[225855]: 2026-01-20 15:29:50.792 225859 DEBUG oslo_concurrency.lockutils [None req-be7a7344-e8aa-43c5-ad00-ad230acbd8fc 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Lock "af78e376-a9fb-4854-9c34-fd8c6f63390a-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:29:50 compute-1 nova_compute[225855]: 2026-01-20 15:29:50.793 225859 INFO nova.compute.manager [None req-be7a7344-e8aa-43c5-ad00-ad230acbd8fc 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: af78e376-a9fb-4854-9c34-fd8c6f63390a] Terminating instance
Jan 20 15:29:50 compute-1 nova_compute[225855]: 2026-01-20 15:29:50.795 225859 DEBUG nova.compute.manager [None req-be7a7344-e8aa-43c5-ad00-ad230acbd8fc 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: af78e376-a9fb-4854-9c34-fd8c6f63390a] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 20 15:29:50 compute-1 kernel: tap31d3cb4c-b7 (unregistering): left promiscuous mode
Jan 20 15:29:50 compute-1 NetworkManager[49104]: <info>  [1768922990.8508] device (tap31d3cb4c-b7): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 20 15:29:50 compute-1 nova_compute[225855]: 2026-01-20 15:29:50.860 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:29:50 compute-1 ovn_controller[130490]: 2026-01-20T15:29:50Z|00949|binding|INFO|Releasing lport 31d3cb4c-b75a-468e-8a31-1fad4e27eb6e from this chassis (sb_readonly=0)
Jan 20 15:29:50 compute-1 ovn_controller[130490]: 2026-01-20T15:29:50Z|00950|binding|INFO|Setting lport 31d3cb4c-b75a-468e-8a31-1fad4e27eb6e down in Southbound
Jan 20 15:29:50 compute-1 ovn_controller[130490]: 2026-01-20T15:29:50Z|00951|binding|INFO|Removing iface tap31d3cb4c-b7 ovn-installed in OVS
Jan 20 15:29:50 compute-1 nova_compute[225855]: 2026-01-20 15:29:50.863 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:29:50 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:29:50.869 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d9:69:f6 10.100.0.10'], port_security=['fa:16:3e:d9:69:f6 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'af78e376-a9fb-4854-9c34-fd8c6f63390a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a2b49a8c-7446-4445-ae6e-d2870040582f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3168f57421fb49bfb94b85daedd1fe7d', 'neutron:revision_number': '4', 'neutron:security_group_ids': '8d4ade01-8f6f-48ff-bd8a-0af514e9e9ce', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=92632684-a601-487e-937b-036b3fd0bb35, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], logical_port=31d3cb4c-b75a-468e-8a31-1fad4e27eb6e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:29:50 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:29:50.871 140354 INFO neutron.agent.ovn.metadata.agent [-] Port 31d3cb4c-b75a-468e-8a31-1fad4e27eb6e in datapath a2b49a8c-7446-4445-ae6e-d2870040582f unbound from our chassis
Jan 20 15:29:50 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:29:50.872 140354 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a2b49a8c-7446-4445-ae6e-d2870040582f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 20 15:29:50 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:29:50.873 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[b0d14cb7-39ae-4f63-8e3f-12770c9e0ce2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:29:50 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:29:50.873 140354 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-a2b49a8c-7446-4445-ae6e-d2870040582f namespace which is not needed anymore
Jan 20 15:29:50 compute-1 nova_compute[225855]: 2026-01-20 15:29:50.880 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:29:50 compute-1 systemd[1]: machine-qemu\x2d109\x2dinstance\x2d000000cd.scope: Deactivated successfully.
Jan 20 15:29:50 compute-1 systemd[1]: machine-qemu\x2d109\x2dinstance\x2d000000cd.scope: Consumed 13.716s CPU time.
Jan 20 15:29:50 compute-1 systemd-machined[194361]: Machine qemu-109-instance-000000cd terminated.
Jan 20 15:29:51 compute-1 nova_compute[225855]: 2026-01-20 15:29:51.033 225859 INFO nova.virt.libvirt.driver [-] [instance: af78e376-a9fb-4854-9c34-fd8c6f63390a] Instance destroyed successfully.
Jan 20 15:29:51 compute-1 nova_compute[225855]: 2026-01-20 15:29:51.035 225859 DEBUG nova.objects.instance [None req-be7a7344-e8aa-43c5-ad00-ad230acbd8fc 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Lazy-loading 'resources' on Instance uuid af78e376-a9fb-4854-9c34-fd8c6f63390a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:29:51 compute-1 nova_compute[225855]: 2026-01-20 15:29:51.051 225859 DEBUG nova.virt.libvirt.vif [None req-be7a7344-e8aa-43c5-ad00-ad230acbd8fc 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T15:29:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-866255871',display_name='tempest-TestNetworkBasicOps-server-866255871',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-866255871',id=205,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNr82C8ft9iJVaxlnvB559FQY0a18+ddVyVQXOWpubG2vvEKlWos0eribBsrr0XJYAl5WSj1IuEMfruIIC3taSryOm9K9DYcrj57monaPm1w9c08Woz8HGduEiXhkpNC2g==',key_name='tempest-TestNetworkBasicOps-780774492',keypairs=<?>,launch_index=0,launched_at=2026-01-20T15:29:28Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3168f57421fb49bfb94b85daedd1fe7d',ramdisk_id='',reservation_id='r-73qpy651',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-807695970',owner_user_name='tempest-TestNetworkBasicOps-807695970-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T15:29:28Z,user_data=None,user_id='5338aa65dc0e4326a66ce79053787f14',uuid=af78e376-a9fb-4854-9c34-fd8c6f63390a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "31d3cb4c-b75a-468e-8a31-1fad4e27eb6e", "address": "fa:16:3e:d9:69:f6", "network": {"id": "a2b49a8c-7446-4445-ae6e-d2870040582f", "bridge": "br-int", "label": "tempest-network-smoke--338330285", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.173", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3168f57421fb49bfb94b85daedd1fe7d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31d3cb4c-b7", "ovs_interfaceid": "31d3cb4c-b75a-468e-8a31-1fad4e27eb6e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 20 15:29:51 compute-1 nova_compute[225855]: 2026-01-20 15:29:51.051 225859 DEBUG nova.network.os_vif_util [None req-be7a7344-e8aa-43c5-ad00-ad230acbd8fc 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Converting VIF {"id": "31d3cb4c-b75a-468e-8a31-1fad4e27eb6e", "address": "fa:16:3e:d9:69:f6", "network": {"id": "a2b49a8c-7446-4445-ae6e-d2870040582f", "bridge": "br-int", "label": "tempest-network-smoke--338330285", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.173", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3168f57421fb49bfb94b85daedd1fe7d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31d3cb4c-b7", "ovs_interfaceid": "31d3cb4c-b75a-468e-8a31-1fad4e27eb6e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 15:29:51 compute-1 nova_compute[225855]: 2026-01-20 15:29:51.052 225859 DEBUG nova.network.os_vif_util [None req-be7a7344-e8aa-43c5-ad00-ad230acbd8fc 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:d9:69:f6,bridge_name='br-int',has_traffic_filtering=True,id=31d3cb4c-b75a-468e-8a31-1fad4e27eb6e,network=Network(a2b49a8c-7446-4445-ae6e-d2870040582f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap31d3cb4c-b7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 15:29:51 compute-1 nova_compute[225855]: 2026-01-20 15:29:51.052 225859 DEBUG os_vif [None req-be7a7344-e8aa-43c5-ad00-ad230acbd8fc 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:d9:69:f6,bridge_name='br-int',has_traffic_filtering=True,id=31d3cb4c-b75a-468e-8a31-1fad4e27eb6e,network=Network(a2b49a8c-7446-4445-ae6e-d2870040582f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap31d3cb4c-b7') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 20 15:29:51 compute-1 nova_compute[225855]: 2026-01-20 15:29:51.053 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:29:51 compute-1 nova_compute[225855]: 2026-01-20 15:29:51.054 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap31d3cb4c-b7, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:29:51 compute-1 nova_compute[225855]: 2026-01-20 15:29:51.055 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:29:51 compute-1 nova_compute[225855]: 2026-01-20 15:29:51.056 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 15:29:51 compute-1 nova_compute[225855]: 2026-01-20 15:29:51.059 225859 INFO os_vif [None req-be7a7344-e8aa-43c5-ad00-ad230acbd8fc 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:d9:69:f6,bridge_name='br-int',has_traffic_filtering=True,id=31d3cb4c-b75a-468e-8a31-1fad4e27eb6e,network=Network(a2b49a8c-7446-4445-ae6e-d2870040582f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap31d3cb4c-b7')
Jan 20 15:29:51 compute-1 neutron-haproxy-ovnmeta-a2b49a8c-7446-4445-ae6e-d2870040582f[318971]: [NOTICE]   (318975) : haproxy version is 2.8.14-c23fe91
Jan 20 15:29:51 compute-1 neutron-haproxy-ovnmeta-a2b49a8c-7446-4445-ae6e-d2870040582f[318971]: [NOTICE]   (318975) : path to executable is /usr/sbin/haproxy
Jan 20 15:29:51 compute-1 neutron-haproxy-ovnmeta-a2b49a8c-7446-4445-ae6e-d2870040582f[318971]: [WARNING]  (318975) : Exiting Master process...
Jan 20 15:29:51 compute-1 neutron-haproxy-ovnmeta-a2b49a8c-7446-4445-ae6e-d2870040582f[318971]: [ALERT]    (318975) : Current worker (318977) exited with code 143 (Terminated)
Jan 20 15:29:51 compute-1 neutron-haproxy-ovnmeta-a2b49a8c-7446-4445-ae6e-d2870040582f[318971]: [WARNING]  (318975) : All workers exited. Exiting... (0)
Jan 20 15:29:51 compute-1 systemd[1]: libpod-915250c82fb88665cde5846eb4724025c3659abf7c6170479e5702a9e6426c08.scope: Deactivated successfully.
Jan 20 15:29:51 compute-1 podman[319114]: 2026-01-20 15:29:51.177105624 +0000 UTC m=+0.217167672 container died 915250c82fb88665cde5846eb4724025c3659abf7c6170479e5702a9e6426c08 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a2b49a8c-7446-4445-ae6e-d2870040582f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 20 15:29:51 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-915250c82fb88665cde5846eb4724025c3659abf7c6170479e5702a9e6426c08-userdata-shm.mount: Deactivated successfully.
Jan 20 15:29:51 compute-1 systemd[1]: var-lib-containers-storage-overlay-fbfcd70bae3d1e8aa8f666a8626fbab06db564cbad7eb171c298ad570d143684-merged.mount: Deactivated successfully.
Jan 20 15:29:51 compute-1 podman[319114]: 2026-01-20 15:29:51.6848169 +0000 UTC m=+0.724878988 container cleanup 915250c82fb88665cde5846eb4724025c3659abf7c6170479e5702a9e6426c08 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a2b49a8c-7446-4445-ae6e-d2870040582f, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 20 15:29:51 compute-1 podman[319172]: 2026-01-20 15:29:51.945277307 +0000 UTC m=+0.237131462 container remove 915250c82fb88665cde5846eb4724025c3659abf7c6170479e5702a9e6426c08 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a2b49a8c-7446-4445-ae6e-d2870040582f, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 20 15:29:51 compute-1 systemd[1]: libpod-conmon-915250c82fb88665cde5846eb4724025c3659abf7c6170479e5702a9e6426c08.scope: Deactivated successfully.
Jan 20 15:29:51 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:29:51.951 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[bb1d5113-0bd4-4ea7-b698-d14329868e5a]: (4, ('Tue Jan 20 03:29:50 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-a2b49a8c-7446-4445-ae6e-d2870040582f (915250c82fb88665cde5846eb4724025c3659abf7c6170479e5702a9e6426c08)\n915250c82fb88665cde5846eb4724025c3659abf7c6170479e5702a9e6426c08\nTue Jan 20 03:29:51 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-a2b49a8c-7446-4445-ae6e-d2870040582f (915250c82fb88665cde5846eb4724025c3659abf7c6170479e5702a9e6426c08)\n915250c82fb88665cde5846eb4724025c3659abf7c6170479e5702a9e6426c08\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:29:51 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:29:51.953 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[16523bb2-54b1-4d92-835b-d8f7d8124b4f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:29:51 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:29:51.954 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa2b49a8c-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:29:51 compute-1 nova_compute[225855]: 2026-01-20 15:29:51.956 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:29:51 compute-1 kernel: tapa2b49a8c-70: left promiscuous mode
Jan 20 15:29:51 compute-1 nova_compute[225855]: 2026-01-20 15:29:51.969 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:29:51 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:29:51.973 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[e930ae52-1890-448c-adaf-7c162d26bda0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:29:51 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:29:51.989 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[d8c24beb-d864-4908-b2a5-75a625c1fa25]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:29:51 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:29:51.990 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[c5bdfa3e-9110-4a2f-bf8f-60f7cc3f8f9e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:29:52 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:29:52.006 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[0f743c32-f459-4e48-a0ff-cf4811497988]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 800659, 'reachable_time': 35451, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 319191, 'error': None, 'target': 'ovnmeta-a2b49a8c-7446-4445-ae6e-d2870040582f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:29:52 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:29:52.010 140466 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-a2b49a8c-7446-4445-ae6e-d2870040582f deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 20 15:29:52 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:29:52.010 140466 DEBUG oslo.privsep.daemon [-] privsep: reply[8bdcfcac-4423-461e-814d-8ca007162627]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:29:52 compute-1 systemd[1]: run-netns-ovnmeta\x2da2b49a8c\x2d7446\x2d4445\x2dae6e\x2dd2870040582f.mount: Deactivated successfully.
Jan 20 15:29:52 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:29:52 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:29:52 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:29:52.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:29:52 compute-1 ceph-mon[81775]: pgmap v3230: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 360 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Jan 20 15:29:52 compute-1 nova_compute[225855]: 2026-01-20 15:29:52.512 225859 INFO nova.virt.libvirt.driver [None req-be7a7344-e8aa-43c5-ad00-ad230acbd8fc 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: af78e376-a9fb-4854-9c34-fd8c6f63390a] Deleting instance files /var/lib/nova/instances/af78e376-a9fb-4854-9c34-fd8c6f63390a_del
Jan 20 15:29:52 compute-1 nova_compute[225855]: 2026-01-20 15:29:52.513 225859 INFO nova.virt.libvirt.driver [None req-be7a7344-e8aa-43c5-ad00-ad230acbd8fc 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: af78e376-a9fb-4854-9c34-fd8c6f63390a] Deletion of /var/lib/nova/instances/af78e376-a9fb-4854-9c34-fd8c6f63390a_del complete
Jan 20 15:29:52 compute-1 nova_compute[225855]: 2026-01-20 15:29:52.583 225859 INFO nova.compute.manager [None req-be7a7344-e8aa-43c5-ad00-ad230acbd8fc 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: af78e376-a9fb-4854-9c34-fd8c6f63390a] Took 1.79 seconds to destroy the instance on the hypervisor.
Jan 20 15:29:52 compute-1 nova_compute[225855]: 2026-01-20 15:29:52.583 225859 DEBUG oslo.service.loopingcall [None req-be7a7344-e8aa-43c5-ad00-ad230acbd8fc 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 20 15:29:52 compute-1 nova_compute[225855]: 2026-01-20 15:29:52.584 225859 DEBUG nova.compute.manager [-] [instance: af78e376-a9fb-4854-9c34-fd8c6f63390a] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 20 15:29:52 compute-1 nova_compute[225855]: 2026-01-20 15:29:52.584 225859 DEBUG nova.network.neutron [-] [instance: af78e376-a9fb-4854-9c34-fd8c6f63390a] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 20 15:29:52 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:29:52 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:29:52 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:29:52.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:29:52 compute-1 nova_compute[225855]: 2026-01-20 15:29:52.797 225859 DEBUG nova.compute.manager [req-50d574f9-bd10-4996-9b23-30b8ffc88df2 req-57c2b035-8abf-4800-a510-e2f818b7f616 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: af78e376-a9fb-4854-9c34-fd8c6f63390a] Received event network-vif-unplugged-31d3cb4c-b75a-468e-8a31-1fad4e27eb6e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:29:52 compute-1 nova_compute[225855]: 2026-01-20 15:29:52.798 225859 DEBUG oslo_concurrency.lockutils [req-50d574f9-bd10-4996-9b23-30b8ffc88df2 req-57c2b035-8abf-4800-a510-e2f818b7f616 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "af78e376-a9fb-4854-9c34-fd8c6f63390a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:29:52 compute-1 nova_compute[225855]: 2026-01-20 15:29:52.798 225859 DEBUG oslo_concurrency.lockutils [req-50d574f9-bd10-4996-9b23-30b8ffc88df2 req-57c2b035-8abf-4800-a510-e2f818b7f616 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "af78e376-a9fb-4854-9c34-fd8c6f63390a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:29:52 compute-1 nova_compute[225855]: 2026-01-20 15:29:52.798 225859 DEBUG oslo_concurrency.lockutils [req-50d574f9-bd10-4996-9b23-30b8ffc88df2 req-57c2b035-8abf-4800-a510-e2f818b7f616 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "af78e376-a9fb-4854-9c34-fd8c6f63390a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:29:52 compute-1 nova_compute[225855]: 2026-01-20 15:29:52.798 225859 DEBUG nova.compute.manager [req-50d574f9-bd10-4996-9b23-30b8ffc88df2 req-57c2b035-8abf-4800-a510-e2f818b7f616 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: af78e376-a9fb-4854-9c34-fd8c6f63390a] No waiting events found dispatching network-vif-unplugged-31d3cb4c-b75a-468e-8a31-1fad4e27eb6e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:29:52 compute-1 nova_compute[225855]: 2026-01-20 15:29:52.799 225859 DEBUG nova.compute.manager [req-50d574f9-bd10-4996-9b23-30b8ffc88df2 req-57c2b035-8abf-4800-a510-e2f818b7f616 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: af78e376-a9fb-4854-9c34-fd8c6f63390a] Received event network-vif-unplugged-31d3cb4c-b75a-468e-8a31-1fad4e27eb6e for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 20 15:29:52 compute-1 nova_compute[225855]: 2026-01-20 15:29:52.799 225859 DEBUG nova.compute.manager [req-50d574f9-bd10-4996-9b23-30b8ffc88df2 req-57c2b035-8abf-4800-a510-e2f818b7f616 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: af78e376-a9fb-4854-9c34-fd8c6f63390a] Received event network-vif-plugged-31d3cb4c-b75a-468e-8a31-1fad4e27eb6e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:29:52 compute-1 nova_compute[225855]: 2026-01-20 15:29:52.799 225859 DEBUG oslo_concurrency.lockutils [req-50d574f9-bd10-4996-9b23-30b8ffc88df2 req-57c2b035-8abf-4800-a510-e2f818b7f616 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "af78e376-a9fb-4854-9c34-fd8c6f63390a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:29:52 compute-1 nova_compute[225855]: 2026-01-20 15:29:52.799 225859 DEBUG oslo_concurrency.lockutils [req-50d574f9-bd10-4996-9b23-30b8ffc88df2 req-57c2b035-8abf-4800-a510-e2f818b7f616 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "af78e376-a9fb-4854-9c34-fd8c6f63390a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:29:52 compute-1 nova_compute[225855]: 2026-01-20 15:29:52.799 225859 DEBUG oslo_concurrency.lockutils [req-50d574f9-bd10-4996-9b23-30b8ffc88df2 req-57c2b035-8abf-4800-a510-e2f818b7f616 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "af78e376-a9fb-4854-9c34-fd8c6f63390a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:29:52 compute-1 nova_compute[225855]: 2026-01-20 15:29:52.800 225859 DEBUG nova.compute.manager [req-50d574f9-bd10-4996-9b23-30b8ffc88df2 req-57c2b035-8abf-4800-a510-e2f818b7f616 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: af78e376-a9fb-4854-9c34-fd8c6f63390a] No waiting events found dispatching network-vif-plugged-31d3cb4c-b75a-468e-8a31-1fad4e27eb6e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:29:52 compute-1 nova_compute[225855]: 2026-01-20 15:29:52.800 225859 WARNING nova.compute.manager [req-50d574f9-bd10-4996-9b23-30b8ffc88df2 req-57c2b035-8abf-4800-a510-e2f818b7f616 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: af78e376-a9fb-4854-9c34-fd8c6f63390a] Received unexpected event network-vif-plugged-31d3cb4c-b75a-468e-8a31-1fad4e27eb6e for instance with vm_state active and task_state deleting.
Jan 20 15:29:52 compute-1 nova_compute[225855]: 2026-01-20 15:29:52.843 225859 DEBUG nova.network.neutron [req-ddc73362-1fdd-47fd-883f-53ae0e138efb req-d1b0c495-3541-4df6-9a1d-8c0ba5786bd7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: af78e376-a9fb-4854-9c34-fd8c6f63390a] Updated VIF entry in instance network info cache for port 31d3cb4c-b75a-468e-8a31-1fad4e27eb6e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 15:29:52 compute-1 nova_compute[225855]: 2026-01-20 15:29:52.844 225859 DEBUG nova.network.neutron [req-ddc73362-1fdd-47fd-883f-53ae0e138efb req-d1b0c495-3541-4df6-9a1d-8c0ba5786bd7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: af78e376-a9fb-4854-9c34-fd8c6f63390a] Updating instance_info_cache with network_info: [{"id": "31d3cb4c-b75a-468e-8a31-1fad4e27eb6e", "address": "fa:16:3e:d9:69:f6", "network": {"id": "a2b49a8c-7446-4445-ae6e-d2870040582f", "bridge": "br-int", "label": "tempest-network-smoke--338330285", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3168f57421fb49bfb94b85daedd1fe7d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31d3cb4c-b7", "ovs_interfaceid": "31d3cb4c-b75a-468e-8a31-1fad4e27eb6e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:29:52 compute-1 nova_compute[225855]: 2026-01-20 15:29:52.866 225859 DEBUG oslo_concurrency.lockutils [req-ddc73362-1fdd-47fd-883f-53ae0e138efb req-d1b0c495-3541-4df6-9a1d-8c0ba5786bd7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-af78e376-a9fb-4854-9c34-fd8c6f63390a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:29:53 compute-1 nova_compute[225855]: 2026-01-20 15:29:53.176 225859 DEBUG nova.network.neutron [-] [instance: af78e376-a9fb-4854-9c34-fd8c6f63390a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:29:53 compute-1 nova_compute[225855]: 2026-01-20 15:29:53.194 225859 INFO nova.compute.manager [-] [instance: af78e376-a9fb-4854-9c34-fd8c6f63390a] Took 0.61 seconds to deallocate network for instance.
Jan 20 15:29:53 compute-1 nova_compute[225855]: 2026-01-20 15:29:53.275 225859 DEBUG oslo_concurrency.lockutils [None req-be7a7344-e8aa-43c5-ad00-ad230acbd8fc 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:29:53 compute-1 nova_compute[225855]: 2026-01-20 15:29:53.276 225859 DEBUG oslo_concurrency.lockutils [None req-be7a7344-e8aa-43c5-ad00-ad230acbd8fc 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:29:53 compute-1 nova_compute[225855]: 2026-01-20 15:29:53.284 225859 DEBUG nova.compute.manager [req-42e94f68-0bcc-42b5-928a-7cc11bc028bd req-e32af194-96c2-42bc-bebd-48e4ea18a59c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: af78e376-a9fb-4854-9c34-fd8c6f63390a] Received event network-vif-deleted-31d3cb4c-b75a-468e-8a31-1fad4e27eb6e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:29:53 compute-1 nova_compute[225855]: 2026-01-20 15:29:53.324 225859 DEBUG oslo_concurrency.processutils [None req-be7a7344-e8aa-43c5-ad00-ad230acbd8fc 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:29:53 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:29:53 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:29:53 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2846343753' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:29:53 compute-1 nova_compute[225855]: 2026-01-20 15:29:53.771 225859 DEBUG oslo_concurrency.processutils [None req-be7a7344-e8aa-43c5-ad00-ad230acbd8fc 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:29:53 compute-1 nova_compute[225855]: 2026-01-20 15:29:53.778 225859 DEBUG nova.compute.provider_tree [None req-be7a7344-e8aa-43c5-ad00-ad230acbd8fc 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:29:53 compute-1 nova_compute[225855]: 2026-01-20 15:29:53.804 225859 DEBUG nova.scheduler.client.report [None req-be7a7344-e8aa-43c5-ad00-ad230acbd8fc 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:29:53 compute-1 nova_compute[225855]: 2026-01-20 15:29:53.830 225859 DEBUG oslo_concurrency.lockutils [None req-be7a7344-e8aa-43c5-ad00-ad230acbd8fc 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.555s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:29:53 compute-1 nova_compute[225855]: 2026-01-20 15:29:53.874 225859 INFO nova.scheduler.client.report [None req-be7a7344-e8aa-43c5-ad00-ad230acbd8fc 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Deleted allocations for instance af78e376-a9fb-4854-9c34-fd8c6f63390a
Jan 20 15:29:53 compute-1 nova_compute[225855]: 2026-01-20 15:29:53.929 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:29:53 compute-1 nova_compute[225855]: 2026-01-20 15:29:53.963 225859 DEBUG oslo_concurrency.lockutils [None req-be7a7344-e8aa-43c5-ad00-ad230acbd8fc 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Lock "af78e376-a9fb-4854-9c34-fd8c6f63390a" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.172s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:29:54 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:29:54 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:29:54 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:29:54.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:29:54 compute-1 ceph-mon[81775]: pgmap v3231: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 199 KiB/s rd, 1.1 MiB/s wr, 41 op/s
Jan 20 15:29:54 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/2846343753' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:29:54 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:29:54 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:29:54 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:29:54.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:29:56 compute-1 nova_compute[225855]: 2026-01-20 15:29:56.056 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:29:56 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:29:56.442 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5ffd4ac3-9266-4927-98ad-20a17782c725, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '75'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:29:56 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:29:56 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:29:56 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:29:56.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:29:56 compute-1 ceph-mon[81775]: pgmap v3232: 321 pgs: 321 active+clean; 156 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 210 KiB/s rd, 1.1 MiB/s wr, 58 op/s
Jan 20 15:29:56 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:29:56 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:29:56 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:29:56.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:29:58 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:29:58 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:29:58 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:29:58.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:29:58 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:29:58 compute-1 ceph-mon[81775]: pgmap v3233: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 91 KiB/s rd, 18 KiB/s wr, 34 op/s
Jan 20 15:29:58 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:29:58 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:29:58 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:29:58.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:29:58 compute-1 nova_compute[225855]: 2026-01-20 15:29:58.932 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:29:59 compute-1 nova_compute[225855]: 2026-01-20 15:29:59.909 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:29:59 compute-1 nova_compute[225855]: 2026-01-20 15:29:59.989 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:30:00 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:30:00 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:30:00 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:30:00.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:30:00 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:30:00 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:30:00 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:30:00.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:30:00 compute-1 ceph-mon[81775]: pgmap v3234: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 12 KiB/s wr, 28 op/s
Jan 20 15:30:00 compute-1 ceph-mon[81775]: overall HEALTH_OK
Jan 20 15:30:01 compute-1 nova_compute[225855]: 2026-01-20 15:30:01.058 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:30:02 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:30:02 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:30:02 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:30:02.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:30:02 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:30:02 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:30:02 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:30:02.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:30:02 compute-1 ceph-mon[81775]: pgmap v3235: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 1.5 KiB/s wr, 27 op/s
Jan 20 15:30:03 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:30:03 compute-1 nova_compute[225855]: 2026-01-20 15:30:03.934 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:30:04 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:30:04 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:30:04 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:30:04.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:30:04 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:30:04 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:30:04 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:30:04.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:30:04 compute-1 ceph-mon[81775]: pgmap v3236: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 20 15:30:06 compute-1 nova_compute[225855]: 2026-01-20 15:30:06.033 225859 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1768922991.0306811, af78e376-a9fb-4854-9c34-fd8c6f63390a => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:30:06 compute-1 nova_compute[225855]: 2026-01-20 15:30:06.034 225859 INFO nova.compute.manager [-] [instance: af78e376-a9fb-4854-9c34-fd8c6f63390a] VM Stopped (Lifecycle Event)
Jan 20 15:30:06 compute-1 nova_compute[225855]: 2026-01-20 15:30:06.061 225859 DEBUG nova.compute.manager [None req-04926d27-d190-4abc-8744-3cdfa7cf4ed1 - - - - - -] [instance: af78e376-a9fb-4854-9c34-fd8c6f63390a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:30:06 compute-1 nova_compute[225855]: 2026-01-20 15:30:06.062 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:30:06 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:30:06 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:30:06 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:30:06.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:30:06 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:30:06 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:30:06 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:30:06.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:30:06 compute-1 ceph-mon[81775]: pgmap v3237: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 20 15:30:07 compute-1 sudo[319224]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:30:07 compute-1 sudo[319224]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:30:07 compute-1 sudo[319224]: pam_unix(sudo:session): session closed for user root
Jan 20 15:30:07 compute-1 sudo[319249]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:30:07 compute-1 sudo[319249]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:30:07 compute-1 sudo[319249]: pam_unix(sudo:session): session closed for user root
Jan 20 15:30:08 compute-1 nova_compute[225855]: 2026-01-20 15:30:08.359 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:30:08 compute-1 nova_compute[225855]: 2026-01-20 15:30:08.360 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 15:30:08 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:30:08 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:30:08 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:30:08.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:30:08 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:30:08 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:30:08 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:30:08 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:30:08.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:30:08 compute-1 ceph-mon[81775]: pgmap v3238: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 8.2 KiB/s rd, 511 B/s wr, 11 op/s
Jan 20 15:30:08 compute-1 nova_compute[225855]: 2026-01-20 15:30:08.935 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:30:09 compute-1 podman[319275]: 2026-01-20 15:30:09.032539368 +0000 UTC m=+0.078861022 container health_status 72be95c12e041eb2cd1a16ad70dfa1391d9f03a5f3180a614ead9459dcfcd46f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3)
Jan 20 15:30:10 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:30:10 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:30:10 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:30:10.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:30:10 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:30:10 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:30:10 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:30:10.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:30:10 compute-1 ceph-mon[81775]: pgmap v3239: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:30:10 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/3785469698' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:30:11 compute-1 nova_compute[225855]: 2026-01-20 15:30:11.063 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:30:11 compute-1 nova_compute[225855]: 2026-01-20 15:30:11.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:30:11 compute-1 nova_compute[225855]: 2026-01-20 15:30:11.373 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:30:11 compute-1 nova_compute[225855]: 2026-01-20 15:30:11.374 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:30:11 compute-1 nova_compute[225855]: 2026-01-20 15:30:11.374 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:30:11 compute-1 nova_compute[225855]: 2026-01-20 15:30:11.374 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 15:30:11 compute-1 nova_compute[225855]: 2026-01-20 15:30:11.374 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:30:11 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:30:11 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3373613097' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:30:11 compute-1 nova_compute[225855]: 2026-01-20 15:30:11.816 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:30:11 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/1052730623' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:30:11 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/3373613097' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:30:11 compute-1 nova_compute[225855]: 2026-01-20 15:30:11.956 225859 WARNING nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 15:30:11 compute-1 nova_compute[225855]: 2026-01-20 15:30:11.957 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4267MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 15:30:11 compute-1 nova_compute[225855]: 2026-01-20 15:30:11.957 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:30:11 compute-1 nova_compute[225855]: 2026-01-20 15:30:11.958 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:30:12 compute-1 nova_compute[225855]: 2026-01-20 15:30:12.027 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 15:30:12 compute-1 nova_compute[225855]: 2026-01-20 15:30:12.027 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 15:30:12 compute-1 nova_compute[225855]: 2026-01-20 15:30:12.042 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:30:12 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:30:12 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1559663311' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:30:12 compute-1 nova_compute[225855]: 2026-01-20 15:30:12.475 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:30:12 compute-1 nova_compute[225855]: 2026-01-20 15:30:12.480 225859 DEBUG nova.compute.provider_tree [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:30:12 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:30:12 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:30:12 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:30:12.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:30:12 compute-1 nova_compute[225855]: 2026-01-20 15:30:12.504 225859 DEBUG nova.scheduler.client.report [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:30:12 compute-1 nova_compute[225855]: 2026-01-20 15:30:12.547 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 15:30:12 compute-1 nova_compute[225855]: 2026-01-20 15:30:12.547 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.589s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:30:12 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:30:12 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:30:12 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:30:12.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:30:12 compute-1 ceph-mon[81775]: pgmap v3240: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:30:12 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/1559663311' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:30:13 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:30:13 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 20 15:30:13 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/705460975' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:30:13 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 20 15:30:13 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/705460975' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:30:13 compute-1 ceph-mon[81775]: pgmap v3241: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:30:13 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/705460975' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:30:13 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/705460975' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:30:13 compute-1 nova_compute[225855]: 2026-01-20 15:30:13.937 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:30:14 compute-1 nova_compute[225855]: 2026-01-20 15:30:14.097 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:30:14 compute-1 nova_compute[225855]: 2026-01-20 15:30:14.097 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 15:30:14 compute-1 nova_compute[225855]: 2026-01-20 15:30:14.097 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 20 15:30:14 compute-1 nova_compute[225855]: 2026-01-20 15:30:14.116 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 20 15:30:14 compute-1 nova_compute[225855]: 2026-01-20 15:30:14.117 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:30:14 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:30:14 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:30:14 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:30:14.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:30:14 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:30:14 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:30:14 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:30:14.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:30:14 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/1980788337' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:30:15 compute-1 nova_compute[225855]: 2026-01-20 15:30:15.353 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:30:15 compute-1 nova_compute[225855]: 2026-01-20 15:30:15.354 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:30:15 compute-1 nova_compute[225855]: 2026-01-20 15:30:15.354 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:30:15 compute-1 nova_compute[225855]: 2026-01-20 15:30:15.354 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:30:15 compute-1 ceph-mon[81775]: pgmap v3242: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:30:15 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/23335784' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:30:16 compute-1 nova_compute[225855]: 2026-01-20 15:30:16.065 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:30:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:30:16.452 140354 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:30:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:30:16.452 140354 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:30:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:30:16.452 140354 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:30:16 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:30:16 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:30:16 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:30:16.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:30:16 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:30:16 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:30:16 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:30:16.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:30:17 compute-1 nova_compute[225855]: 2026-01-20 15:30:17.340 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:30:18 compute-1 ceph-mon[81775]: pgmap v3243: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:30:18 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:30:18 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:30:18 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:30:18.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:30:18 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:30:18 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:30:18 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:30:18 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:30:18.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:30:18 compute-1 nova_compute[225855]: 2026-01-20 15:30:18.939 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:30:20 compute-1 podman[319353]: 2026-01-20 15:30:20.008796412 +0000 UTC m=+0.048846116 container health_status 533cd8e0c3b8b7f910f15cef49e7f879ab46c12ce646a35628c15f13725eb500 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent)
Jan 20 15:30:20 compute-1 ceph-mon[81775]: pgmap v3244: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:30:20 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:30:20 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:30:20 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:30:20.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:30:20 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:30:20 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:30:20 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:30:20.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:30:21 compute-1 nova_compute[225855]: 2026-01-20 15:30:21.087 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:30:21 compute-1 nova_compute[225855]: 2026-01-20 15:30:21.379 225859 DEBUG oslo_concurrency.lockutils [None req-7a7d2c37-e6b7-40f0-9877-6a6ac3ea8422 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Acquiring lock "c2074d47-58a3-49e8-82fd-6bc6145a1ea7" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:30:21 compute-1 nova_compute[225855]: 2026-01-20 15:30:21.379 225859 DEBUG oslo_concurrency.lockutils [None req-7a7d2c37-e6b7-40f0-9877-6a6ac3ea8422 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Lock "c2074d47-58a3-49e8-82fd-6bc6145a1ea7" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:30:21 compute-1 nova_compute[225855]: 2026-01-20 15:30:21.399 225859 DEBUG nova.compute.manager [None req-7a7d2c37-e6b7-40f0-9877-6a6ac3ea8422 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: c2074d47-58a3-49e8-82fd-6bc6145a1ea7] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 20 15:30:21 compute-1 nova_compute[225855]: 2026-01-20 15:30:21.486 225859 DEBUG oslo_concurrency.lockutils [None req-7a7d2c37-e6b7-40f0-9877-6a6ac3ea8422 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:30:21 compute-1 nova_compute[225855]: 2026-01-20 15:30:21.487 225859 DEBUG oslo_concurrency.lockutils [None req-7a7d2c37-e6b7-40f0-9877-6a6ac3ea8422 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:30:21 compute-1 nova_compute[225855]: 2026-01-20 15:30:21.495 225859 DEBUG nova.virt.hardware [None req-7a7d2c37-e6b7-40f0-9877-6a6ac3ea8422 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 20 15:30:21 compute-1 nova_compute[225855]: 2026-01-20 15:30:21.495 225859 INFO nova.compute.claims [None req-7a7d2c37-e6b7-40f0-9877-6a6ac3ea8422 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: c2074d47-58a3-49e8-82fd-6bc6145a1ea7] Claim successful on node compute-1.ctlplane.example.com
Jan 20 15:30:21 compute-1 nova_compute[225855]: 2026-01-20 15:30:21.684 225859 DEBUG oslo_concurrency.processutils [None req-7a7d2c37-e6b7-40f0-9877-6a6ac3ea8422 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:30:22 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:30:22 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1059682167' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:30:22 compute-1 nova_compute[225855]: 2026-01-20 15:30:22.105 225859 DEBUG oslo_concurrency.processutils [None req-7a7d2c37-e6b7-40f0-9877-6a6ac3ea8422 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.421s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:30:22 compute-1 nova_compute[225855]: 2026-01-20 15:30:22.111 225859 DEBUG nova.compute.provider_tree [None req-7a7d2c37-e6b7-40f0-9877-6a6ac3ea8422 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:30:22 compute-1 nova_compute[225855]: 2026-01-20 15:30:22.135 225859 DEBUG nova.scheduler.client.report [None req-7a7d2c37-e6b7-40f0-9877-6a6ac3ea8422 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:30:22 compute-1 nova_compute[225855]: 2026-01-20 15:30:22.175 225859 DEBUG oslo_concurrency.lockutils [None req-7a7d2c37-e6b7-40f0-9877-6a6ac3ea8422 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.688s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:30:22 compute-1 nova_compute[225855]: 2026-01-20 15:30:22.176 225859 DEBUG nova.compute.manager [None req-7a7d2c37-e6b7-40f0-9877-6a6ac3ea8422 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: c2074d47-58a3-49e8-82fd-6bc6145a1ea7] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 20 15:30:22 compute-1 nova_compute[225855]: 2026-01-20 15:30:22.247 225859 DEBUG nova.compute.manager [None req-7a7d2c37-e6b7-40f0-9877-6a6ac3ea8422 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: c2074d47-58a3-49e8-82fd-6bc6145a1ea7] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 20 15:30:22 compute-1 nova_compute[225855]: 2026-01-20 15:30:22.247 225859 DEBUG nova.network.neutron [None req-7a7d2c37-e6b7-40f0-9877-6a6ac3ea8422 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: c2074d47-58a3-49e8-82fd-6bc6145a1ea7] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 20 15:30:22 compute-1 nova_compute[225855]: 2026-01-20 15:30:22.273 225859 INFO nova.virt.libvirt.driver [None req-7a7d2c37-e6b7-40f0-9877-6a6ac3ea8422 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: c2074d47-58a3-49e8-82fd-6bc6145a1ea7] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 20 15:30:22 compute-1 nova_compute[225855]: 2026-01-20 15:30:22.290 225859 DEBUG nova.compute.manager [None req-7a7d2c37-e6b7-40f0-9877-6a6ac3ea8422 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: c2074d47-58a3-49e8-82fd-6bc6145a1ea7] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 20 15:30:22 compute-1 ceph-mon[81775]: pgmap v3245: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:30:22 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/1059682167' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:30:22 compute-1 nova_compute[225855]: 2026-01-20 15:30:22.386 225859 DEBUG nova.compute.manager [None req-7a7d2c37-e6b7-40f0-9877-6a6ac3ea8422 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: c2074d47-58a3-49e8-82fd-6bc6145a1ea7] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 20 15:30:22 compute-1 nova_compute[225855]: 2026-01-20 15:30:22.387 225859 DEBUG nova.virt.libvirt.driver [None req-7a7d2c37-e6b7-40f0-9877-6a6ac3ea8422 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: c2074d47-58a3-49e8-82fd-6bc6145a1ea7] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 20 15:30:22 compute-1 nova_compute[225855]: 2026-01-20 15:30:22.387 225859 INFO nova.virt.libvirt.driver [None req-7a7d2c37-e6b7-40f0-9877-6a6ac3ea8422 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: c2074d47-58a3-49e8-82fd-6bc6145a1ea7] Creating image(s)
Jan 20 15:30:22 compute-1 nova_compute[225855]: 2026-01-20 15:30:22.413 225859 DEBUG nova.storage.rbd_utils [None req-7a7d2c37-e6b7-40f0-9877-6a6ac3ea8422 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] rbd image c2074d47-58a3-49e8-82fd-6bc6145a1ea7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:30:22 compute-1 nova_compute[225855]: 2026-01-20 15:30:22.438 225859 DEBUG nova.storage.rbd_utils [None req-7a7d2c37-e6b7-40f0-9877-6a6ac3ea8422 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] rbd image c2074d47-58a3-49e8-82fd-6bc6145a1ea7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:30:22 compute-1 nova_compute[225855]: 2026-01-20 15:30:22.464 225859 DEBUG nova.storage.rbd_utils [None req-7a7d2c37-e6b7-40f0-9877-6a6ac3ea8422 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] rbd image c2074d47-58a3-49e8-82fd-6bc6145a1ea7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:30:22 compute-1 nova_compute[225855]: 2026-01-20 15:30:22.468 225859 DEBUG oslo_concurrency.processutils [None req-7a7d2c37-e6b7-40f0-9877-6a6ac3ea8422 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:30:22 compute-1 nova_compute[225855]: 2026-01-20 15:30:22.496 225859 DEBUG nova.policy [None req-7a7d2c37-e6b7-40f0-9877-6a6ac3ea8422 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '5338aa65dc0e4326a66ce79053787f14', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '3168f57421fb49bfb94b85daedd1fe7d', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 20 15:30:22 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:30:22 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:30:22 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:30:22.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:30:22 compute-1 nova_compute[225855]: 2026-01-20 15:30:22.535 225859 DEBUG oslo_concurrency.processutils [None req-7a7d2c37-e6b7-40f0-9877-6a6ac3ea8422 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:30:22 compute-1 nova_compute[225855]: 2026-01-20 15:30:22.536 225859 DEBUG oslo_concurrency.lockutils [None req-7a7d2c37-e6b7-40f0-9877-6a6ac3ea8422 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Acquiring lock "82d5c1918fd7c974214c7a48c1793a7a82560462" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:30:22 compute-1 nova_compute[225855]: 2026-01-20 15:30:22.536 225859 DEBUG oslo_concurrency.lockutils [None req-7a7d2c37-e6b7-40f0-9877-6a6ac3ea8422 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:30:22 compute-1 nova_compute[225855]: 2026-01-20 15:30:22.537 225859 DEBUG oslo_concurrency.lockutils [None req-7a7d2c37-e6b7-40f0-9877-6a6ac3ea8422 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:30:22 compute-1 nova_compute[225855]: 2026-01-20 15:30:22.563 225859 DEBUG nova.storage.rbd_utils [None req-7a7d2c37-e6b7-40f0-9877-6a6ac3ea8422 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] rbd image c2074d47-58a3-49e8-82fd-6bc6145a1ea7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:30:22 compute-1 nova_compute[225855]: 2026-01-20 15:30:22.567 225859 DEBUG oslo_concurrency.processutils [None req-7a7d2c37-e6b7-40f0-9877-6a6ac3ea8422 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 c2074d47-58a3-49e8-82fd-6bc6145a1ea7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:30:22 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:30:22 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:30:22 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:30:22.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:30:22 compute-1 nova_compute[225855]: 2026-01-20 15:30:22.920 225859 DEBUG oslo_concurrency.processutils [None req-7a7d2c37-e6b7-40f0-9877-6a6ac3ea8422 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 c2074d47-58a3-49e8-82fd-6bc6145a1ea7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.353s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:30:23 compute-1 nova_compute[225855]: 2026-01-20 15:30:23.005 225859 DEBUG nova.storage.rbd_utils [None req-7a7d2c37-e6b7-40f0-9877-6a6ac3ea8422 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] resizing rbd image c2074d47-58a3-49e8-82fd-6bc6145a1ea7_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 20 15:30:23 compute-1 nova_compute[225855]: 2026-01-20 15:30:23.125 225859 DEBUG nova.objects.instance [None req-7a7d2c37-e6b7-40f0-9877-6a6ac3ea8422 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Lazy-loading 'migration_context' on Instance uuid c2074d47-58a3-49e8-82fd-6bc6145a1ea7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:30:23 compute-1 nova_compute[225855]: 2026-01-20 15:30:23.153 225859 DEBUG nova.virt.libvirt.driver [None req-7a7d2c37-e6b7-40f0-9877-6a6ac3ea8422 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: c2074d47-58a3-49e8-82fd-6bc6145a1ea7] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 20 15:30:23 compute-1 nova_compute[225855]: 2026-01-20 15:30:23.154 225859 DEBUG nova.virt.libvirt.driver [None req-7a7d2c37-e6b7-40f0-9877-6a6ac3ea8422 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: c2074d47-58a3-49e8-82fd-6bc6145a1ea7] Ensure instance console log exists: /var/lib/nova/instances/c2074d47-58a3-49e8-82fd-6bc6145a1ea7/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 20 15:30:23 compute-1 nova_compute[225855]: 2026-01-20 15:30:23.155 225859 DEBUG oslo_concurrency.lockutils [None req-7a7d2c37-e6b7-40f0-9877-6a6ac3ea8422 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:30:23 compute-1 nova_compute[225855]: 2026-01-20 15:30:23.155 225859 DEBUG oslo_concurrency.lockutils [None req-7a7d2c37-e6b7-40f0-9877-6a6ac3ea8422 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:30:23 compute-1 nova_compute[225855]: 2026-01-20 15:30:23.155 225859 DEBUG oslo_concurrency.lockutils [None req-7a7d2c37-e6b7-40f0-9877-6a6ac3ea8422 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:30:23 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:30:23 compute-1 nova_compute[225855]: 2026-01-20 15:30:23.881 225859 DEBUG nova.network.neutron [None req-7a7d2c37-e6b7-40f0-9877-6a6ac3ea8422 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: c2074d47-58a3-49e8-82fd-6bc6145a1ea7] Successfully created port: 60202d18-26b2-493b-a427-211cda112a80 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 20 15:30:23 compute-1 nova_compute[225855]: 2026-01-20 15:30:23.941 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:30:24 compute-1 nova_compute[225855]: 2026-01-20 15:30:24.334 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:30:24 compute-1 ceph-mon[81775]: pgmap v3246: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:30:24 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:30:24 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:30:24 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:30:24.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:30:24 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:30:24 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:30:24 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:30:24.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:30:24 compute-1 nova_compute[225855]: 2026-01-20 15:30:24.695 225859 DEBUG nova.network.neutron [None req-7a7d2c37-e6b7-40f0-9877-6a6ac3ea8422 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: c2074d47-58a3-49e8-82fd-6bc6145a1ea7] Successfully updated port: 60202d18-26b2-493b-a427-211cda112a80 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 20 15:30:24 compute-1 nova_compute[225855]: 2026-01-20 15:30:24.716 225859 DEBUG oslo_concurrency.lockutils [None req-7a7d2c37-e6b7-40f0-9877-6a6ac3ea8422 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Acquiring lock "refresh_cache-c2074d47-58a3-49e8-82fd-6bc6145a1ea7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:30:24 compute-1 nova_compute[225855]: 2026-01-20 15:30:24.716 225859 DEBUG oslo_concurrency.lockutils [None req-7a7d2c37-e6b7-40f0-9877-6a6ac3ea8422 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Acquired lock "refresh_cache-c2074d47-58a3-49e8-82fd-6bc6145a1ea7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:30:24 compute-1 nova_compute[225855]: 2026-01-20 15:30:24.716 225859 DEBUG nova.network.neutron [None req-7a7d2c37-e6b7-40f0-9877-6a6ac3ea8422 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: c2074d47-58a3-49e8-82fd-6bc6145a1ea7] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 15:30:24 compute-1 nova_compute[225855]: 2026-01-20 15:30:24.813 225859 DEBUG nova.compute.manager [req-1b80bc76-6b49-49d2-bbca-9eef0c3893a0 req-4d7a5a9d-73dc-4561-8a95-d6c3b56d899d 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: c2074d47-58a3-49e8-82fd-6bc6145a1ea7] Received event network-changed-60202d18-26b2-493b-a427-211cda112a80 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:30:24 compute-1 nova_compute[225855]: 2026-01-20 15:30:24.814 225859 DEBUG nova.compute.manager [req-1b80bc76-6b49-49d2-bbca-9eef0c3893a0 req-4d7a5a9d-73dc-4561-8a95-d6c3b56d899d 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: c2074d47-58a3-49e8-82fd-6bc6145a1ea7] Refreshing instance network info cache due to event network-changed-60202d18-26b2-493b-a427-211cda112a80. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 15:30:24 compute-1 nova_compute[225855]: 2026-01-20 15:30:24.814 225859 DEBUG oslo_concurrency.lockutils [req-1b80bc76-6b49-49d2-bbca-9eef0c3893a0 req-4d7a5a9d-73dc-4561-8a95-d6c3b56d899d 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-c2074d47-58a3-49e8-82fd-6bc6145a1ea7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:30:24 compute-1 nova_compute[225855]: 2026-01-20 15:30:24.857 225859 DEBUG nova.network.neutron [None req-7a7d2c37-e6b7-40f0-9877-6a6ac3ea8422 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: c2074d47-58a3-49e8-82fd-6bc6145a1ea7] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 20 15:30:25 compute-1 nova_compute[225855]: 2026-01-20 15:30:25.717 225859 DEBUG nova.network.neutron [None req-7a7d2c37-e6b7-40f0-9877-6a6ac3ea8422 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: c2074d47-58a3-49e8-82fd-6bc6145a1ea7] Updating instance_info_cache with network_info: [{"id": "60202d18-26b2-493b-a427-211cda112a80", "address": "fa:16:3e:36:f9:3a", "network": {"id": "0228362f-0ced-4cac-bb89-96bd472df47f", "bridge": "br-int", "label": "tempest-network-smoke--197661202", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3168f57421fb49bfb94b85daedd1fe7d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap60202d18-26", "ovs_interfaceid": "60202d18-26b2-493b-a427-211cda112a80", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:30:25 compute-1 nova_compute[225855]: 2026-01-20 15:30:25.737 225859 DEBUG oslo_concurrency.lockutils [None req-7a7d2c37-e6b7-40f0-9877-6a6ac3ea8422 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Releasing lock "refresh_cache-c2074d47-58a3-49e8-82fd-6bc6145a1ea7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:30:25 compute-1 nova_compute[225855]: 2026-01-20 15:30:25.737 225859 DEBUG nova.compute.manager [None req-7a7d2c37-e6b7-40f0-9877-6a6ac3ea8422 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: c2074d47-58a3-49e8-82fd-6bc6145a1ea7] Instance network_info: |[{"id": "60202d18-26b2-493b-a427-211cda112a80", "address": "fa:16:3e:36:f9:3a", "network": {"id": "0228362f-0ced-4cac-bb89-96bd472df47f", "bridge": "br-int", "label": "tempest-network-smoke--197661202", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3168f57421fb49bfb94b85daedd1fe7d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap60202d18-26", "ovs_interfaceid": "60202d18-26b2-493b-a427-211cda112a80", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 20 15:30:25 compute-1 nova_compute[225855]: 2026-01-20 15:30:25.738 225859 DEBUG oslo_concurrency.lockutils [req-1b80bc76-6b49-49d2-bbca-9eef0c3893a0 req-4d7a5a9d-73dc-4561-8a95-d6c3b56d899d 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-c2074d47-58a3-49e8-82fd-6bc6145a1ea7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:30:25 compute-1 nova_compute[225855]: 2026-01-20 15:30:25.738 225859 DEBUG nova.network.neutron [req-1b80bc76-6b49-49d2-bbca-9eef0c3893a0 req-4d7a5a9d-73dc-4561-8a95-d6c3b56d899d 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: c2074d47-58a3-49e8-82fd-6bc6145a1ea7] Refreshing network info cache for port 60202d18-26b2-493b-a427-211cda112a80 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 15:30:25 compute-1 nova_compute[225855]: 2026-01-20 15:30:25.740 225859 DEBUG nova.virt.libvirt.driver [None req-7a7d2c37-e6b7-40f0-9877-6a6ac3ea8422 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: c2074d47-58a3-49e8-82fd-6bc6145a1ea7] Start _get_guest_xml network_info=[{"id": "60202d18-26b2-493b-a427-211cda112a80", "address": "fa:16:3e:36:f9:3a", "network": {"id": "0228362f-0ced-4cac-bb89-96bd472df47f", "bridge": "br-int", "label": "tempest-network-smoke--197661202", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3168f57421fb49bfb94b85daedd1fe7d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap60202d18-26", "ovs_interfaceid": "60202d18-26b2-493b-a427-211cda112a80", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'device_type': 'disk', 'encryption_options': None, 'size': 0, 'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'guest_format': None, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'image_id': 'a32b3e07-16d8-46fd-9a7b-c242c432fcf9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 20 15:30:25 compute-1 nova_compute[225855]: 2026-01-20 15:30:25.745 225859 WARNING nova.virt.libvirt.driver [None req-7a7d2c37-e6b7-40f0-9877-6a6ac3ea8422 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 15:30:25 compute-1 nova_compute[225855]: 2026-01-20 15:30:25.755 225859 DEBUG nova.virt.libvirt.host [None req-7a7d2c37-e6b7-40f0-9877-6a6ac3ea8422 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 20 15:30:25 compute-1 nova_compute[225855]: 2026-01-20 15:30:25.756 225859 DEBUG nova.virt.libvirt.host [None req-7a7d2c37-e6b7-40f0-9877-6a6ac3ea8422 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 20 15:30:25 compute-1 nova_compute[225855]: 2026-01-20 15:30:25.759 225859 DEBUG nova.virt.libvirt.host [None req-7a7d2c37-e6b7-40f0-9877-6a6ac3ea8422 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 20 15:30:25 compute-1 nova_compute[225855]: 2026-01-20 15:30:25.759 225859 DEBUG nova.virt.libvirt.host [None req-7a7d2c37-e6b7-40f0-9877-6a6ac3ea8422 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 20 15:30:25 compute-1 nova_compute[225855]: 2026-01-20 15:30:25.760 225859 DEBUG nova.virt.libvirt.driver [None req-7a7d2c37-e6b7-40f0-9877-6a6ac3ea8422 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 20 15:30:25 compute-1 nova_compute[225855]: 2026-01-20 15:30:25.761 225859 DEBUG nova.virt.hardware [None req-7a7d2c37-e6b7-40f0-9877-6a6ac3ea8422 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-20T14:21:55Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='522deaab-a741-4dbb-932d-d8b13a211c33',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 20 15:30:25 compute-1 nova_compute[225855]: 2026-01-20 15:30:25.761 225859 DEBUG nova.virt.hardware [None req-7a7d2c37-e6b7-40f0-9877-6a6ac3ea8422 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 20 15:30:25 compute-1 nova_compute[225855]: 2026-01-20 15:30:25.761 225859 DEBUG nova.virt.hardware [None req-7a7d2c37-e6b7-40f0-9877-6a6ac3ea8422 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 20 15:30:25 compute-1 nova_compute[225855]: 2026-01-20 15:30:25.762 225859 DEBUG nova.virt.hardware [None req-7a7d2c37-e6b7-40f0-9877-6a6ac3ea8422 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 20 15:30:25 compute-1 nova_compute[225855]: 2026-01-20 15:30:25.762 225859 DEBUG nova.virt.hardware [None req-7a7d2c37-e6b7-40f0-9877-6a6ac3ea8422 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 20 15:30:25 compute-1 nova_compute[225855]: 2026-01-20 15:30:25.762 225859 DEBUG nova.virt.hardware [None req-7a7d2c37-e6b7-40f0-9877-6a6ac3ea8422 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 20 15:30:25 compute-1 nova_compute[225855]: 2026-01-20 15:30:25.762 225859 DEBUG nova.virt.hardware [None req-7a7d2c37-e6b7-40f0-9877-6a6ac3ea8422 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 20 15:30:25 compute-1 nova_compute[225855]: 2026-01-20 15:30:25.763 225859 DEBUG nova.virt.hardware [None req-7a7d2c37-e6b7-40f0-9877-6a6ac3ea8422 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 20 15:30:25 compute-1 nova_compute[225855]: 2026-01-20 15:30:25.763 225859 DEBUG nova.virt.hardware [None req-7a7d2c37-e6b7-40f0-9877-6a6ac3ea8422 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 20 15:30:25 compute-1 nova_compute[225855]: 2026-01-20 15:30:25.763 225859 DEBUG nova.virt.hardware [None req-7a7d2c37-e6b7-40f0-9877-6a6ac3ea8422 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 20 15:30:25 compute-1 nova_compute[225855]: 2026-01-20 15:30:25.763 225859 DEBUG nova.virt.hardware [None req-7a7d2c37-e6b7-40f0-9877-6a6ac3ea8422 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 20 15:30:25 compute-1 nova_compute[225855]: 2026-01-20 15:30:25.766 225859 DEBUG oslo_concurrency.processutils [None req-7a7d2c37-e6b7-40f0-9877-6a6ac3ea8422 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:30:26 compute-1 nova_compute[225855]: 2026-01-20 15:30:26.089 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:30:26 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 15:30:26 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2182023721' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:30:26 compute-1 nova_compute[225855]: 2026-01-20 15:30:26.211 225859 DEBUG oslo_concurrency.processutils [None req-7a7d2c37-e6b7-40f0-9877-6a6ac3ea8422 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:30:26 compute-1 nova_compute[225855]: 2026-01-20 15:30:26.236 225859 DEBUG nova.storage.rbd_utils [None req-7a7d2c37-e6b7-40f0-9877-6a6ac3ea8422 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] rbd image c2074d47-58a3-49e8-82fd-6bc6145a1ea7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:30:26 compute-1 nova_compute[225855]: 2026-01-20 15:30:26.239 225859 DEBUG oslo_concurrency.processutils [None req-7a7d2c37-e6b7-40f0-9877-6a6ac3ea8422 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:30:26 compute-1 ceph-mon[81775]: pgmap v3247: 321 pgs: 321 active+clean; 145 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 957 KiB/s wr, 25 op/s
Jan 20 15:30:26 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/2182023721' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:30:26 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:30:26 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:30:26 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:30:26.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:30:26 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:30:26 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:30:26 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:30:26.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:30:26 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 15:30:26 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3549875571' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:30:26 compute-1 nova_compute[225855]: 2026-01-20 15:30:26.696 225859 DEBUG oslo_concurrency.processutils [None req-7a7d2c37-e6b7-40f0-9877-6a6ac3ea8422 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:30:26 compute-1 nova_compute[225855]: 2026-01-20 15:30:26.698 225859 DEBUG nova.virt.libvirt.vif [None req-7a7d2c37-e6b7-40f0-9877-6a6ac3ea8422 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T15:30:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-761395495',display_name='tempest-TestNetworkBasicOps-server-761395495',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-761395495',id=206,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBA6PcyU5b6KJgECZvP75RVISUjV8spB81h3nAjsUONZi4KISBeJ3H+m9LFQCp72IhdPL4TNE6iitZI83oIzTSr0WLM1hF9NfU7ED77LiXjCqrZKn4HPslanwlp/Qjc+bCQ==',key_name='tempest-TestNetworkBasicOps-1783413232',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3168f57421fb49bfb94b85daedd1fe7d',ramdisk_id='',reservation_id='r-mwqa18l0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-807695970',owner_user_name='tempest-TestNetworkBasicOps-807695970-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T15:30:22Z,user_data=None,user_id='5338aa65dc0e4326a66ce79053787f14',uuid=c2074d47-58a3-49e8-82fd-6bc6145a1ea7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "60202d18-26b2-493b-a427-211cda112a80", "address": "fa:16:3e:36:f9:3a", "network": {"id": "0228362f-0ced-4cac-bb89-96bd472df47f", "bridge": "br-int", "label": "tempest-network-smoke--197661202", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "3168f57421fb49bfb94b85daedd1fe7d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap60202d18-26", "ovs_interfaceid": "60202d18-26b2-493b-a427-211cda112a80", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 20 15:30:26 compute-1 nova_compute[225855]: 2026-01-20 15:30:26.699 225859 DEBUG nova.network.os_vif_util [None req-7a7d2c37-e6b7-40f0-9877-6a6ac3ea8422 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Converting VIF {"id": "60202d18-26b2-493b-a427-211cda112a80", "address": "fa:16:3e:36:f9:3a", "network": {"id": "0228362f-0ced-4cac-bb89-96bd472df47f", "bridge": "br-int", "label": "tempest-network-smoke--197661202", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3168f57421fb49bfb94b85daedd1fe7d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap60202d18-26", "ovs_interfaceid": "60202d18-26b2-493b-a427-211cda112a80", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 15:30:26 compute-1 nova_compute[225855]: 2026-01-20 15:30:26.700 225859 DEBUG nova.network.os_vif_util [None req-7a7d2c37-e6b7-40f0-9877-6a6ac3ea8422 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:36:f9:3a,bridge_name='br-int',has_traffic_filtering=True,id=60202d18-26b2-493b-a427-211cda112a80,network=Network(0228362f-0ced-4cac-bb89-96bd472df47f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap60202d18-26') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 15:30:26 compute-1 nova_compute[225855]: 2026-01-20 15:30:26.701 225859 DEBUG nova.objects.instance [None req-7a7d2c37-e6b7-40f0-9877-6a6ac3ea8422 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Lazy-loading 'pci_devices' on Instance uuid c2074d47-58a3-49e8-82fd-6bc6145a1ea7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:30:26 compute-1 nova_compute[225855]: 2026-01-20 15:30:26.723 225859 DEBUG nova.virt.libvirt.driver [None req-7a7d2c37-e6b7-40f0-9877-6a6ac3ea8422 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: c2074d47-58a3-49e8-82fd-6bc6145a1ea7] End _get_guest_xml xml=<domain type="kvm">
Jan 20 15:30:26 compute-1 nova_compute[225855]:   <uuid>c2074d47-58a3-49e8-82fd-6bc6145a1ea7</uuid>
Jan 20 15:30:26 compute-1 nova_compute[225855]:   <name>instance-000000ce</name>
Jan 20 15:30:26 compute-1 nova_compute[225855]:   <memory>131072</memory>
Jan 20 15:30:26 compute-1 nova_compute[225855]:   <vcpu>1</vcpu>
Jan 20 15:30:26 compute-1 nova_compute[225855]:   <metadata>
Jan 20 15:30:26 compute-1 nova_compute[225855]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 15:30:26 compute-1 nova_compute[225855]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 15:30:26 compute-1 nova_compute[225855]:       <nova:name>tempest-TestNetworkBasicOps-server-761395495</nova:name>
Jan 20 15:30:26 compute-1 nova_compute[225855]:       <nova:creationTime>2026-01-20 15:30:25</nova:creationTime>
Jan 20 15:30:26 compute-1 nova_compute[225855]:       <nova:flavor name="m1.nano">
Jan 20 15:30:26 compute-1 nova_compute[225855]:         <nova:memory>128</nova:memory>
Jan 20 15:30:26 compute-1 nova_compute[225855]:         <nova:disk>1</nova:disk>
Jan 20 15:30:26 compute-1 nova_compute[225855]:         <nova:swap>0</nova:swap>
Jan 20 15:30:26 compute-1 nova_compute[225855]:         <nova:ephemeral>0</nova:ephemeral>
Jan 20 15:30:26 compute-1 nova_compute[225855]:         <nova:vcpus>1</nova:vcpus>
Jan 20 15:30:26 compute-1 nova_compute[225855]:       </nova:flavor>
Jan 20 15:30:26 compute-1 nova_compute[225855]:       <nova:owner>
Jan 20 15:30:26 compute-1 nova_compute[225855]:         <nova:user uuid="5338aa65dc0e4326a66ce79053787f14">tempest-TestNetworkBasicOps-807695970-project-member</nova:user>
Jan 20 15:30:26 compute-1 nova_compute[225855]:         <nova:project uuid="3168f57421fb49bfb94b85daedd1fe7d">tempest-TestNetworkBasicOps-807695970</nova:project>
Jan 20 15:30:26 compute-1 nova_compute[225855]:       </nova:owner>
Jan 20 15:30:26 compute-1 nova_compute[225855]:       <nova:root type="image" uuid="a32b3e07-16d8-46fd-9a7b-c242c432fcf9"/>
Jan 20 15:30:26 compute-1 nova_compute[225855]:       <nova:ports>
Jan 20 15:30:26 compute-1 nova_compute[225855]:         <nova:port uuid="60202d18-26b2-493b-a427-211cda112a80">
Jan 20 15:30:26 compute-1 nova_compute[225855]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Jan 20 15:30:26 compute-1 nova_compute[225855]:         </nova:port>
Jan 20 15:30:26 compute-1 nova_compute[225855]:       </nova:ports>
Jan 20 15:30:26 compute-1 nova_compute[225855]:     </nova:instance>
Jan 20 15:30:26 compute-1 nova_compute[225855]:   </metadata>
Jan 20 15:30:26 compute-1 nova_compute[225855]:   <sysinfo type="smbios">
Jan 20 15:30:26 compute-1 nova_compute[225855]:     <system>
Jan 20 15:30:26 compute-1 nova_compute[225855]:       <entry name="manufacturer">RDO</entry>
Jan 20 15:30:26 compute-1 nova_compute[225855]:       <entry name="product">OpenStack Compute</entry>
Jan 20 15:30:26 compute-1 nova_compute[225855]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 15:30:26 compute-1 nova_compute[225855]:       <entry name="serial">c2074d47-58a3-49e8-82fd-6bc6145a1ea7</entry>
Jan 20 15:30:26 compute-1 nova_compute[225855]:       <entry name="uuid">c2074d47-58a3-49e8-82fd-6bc6145a1ea7</entry>
Jan 20 15:30:26 compute-1 nova_compute[225855]:       <entry name="family">Virtual Machine</entry>
Jan 20 15:30:26 compute-1 nova_compute[225855]:     </system>
Jan 20 15:30:26 compute-1 nova_compute[225855]:   </sysinfo>
Jan 20 15:30:26 compute-1 nova_compute[225855]:   <os>
Jan 20 15:30:26 compute-1 nova_compute[225855]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 20 15:30:26 compute-1 nova_compute[225855]:     <boot dev="hd"/>
Jan 20 15:30:26 compute-1 nova_compute[225855]:     <smbios mode="sysinfo"/>
Jan 20 15:30:26 compute-1 nova_compute[225855]:   </os>
Jan 20 15:30:26 compute-1 nova_compute[225855]:   <features>
Jan 20 15:30:26 compute-1 nova_compute[225855]:     <acpi/>
Jan 20 15:30:26 compute-1 nova_compute[225855]:     <apic/>
Jan 20 15:30:26 compute-1 nova_compute[225855]:     <vmcoreinfo/>
Jan 20 15:30:26 compute-1 nova_compute[225855]:   </features>
Jan 20 15:30:26 compute-1 nova_compute[225855]:   <clock offset="utc">
Jan 20 15:30:26 compute-1 nova_compute[225855]:     <timer name="pit" tickpolicy="delay"/>
Jan 20 15:30:26 compute-1 nova_compute[225855]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 20 15:30:26 compute-1 nova_compute[225855]:     <timer name="hpet" present="no"/>
Jan 20 15:30:26 compute-1 nova_compute[225855]:   </clock>
Jan 20 15:30:26 compute-1 nova_compute[225855]:   <cpu mode="custom" match="exact">
Jan 20 15:30:26 compute-1 nova_compute[225855]:     <model>Nehalem</model>
Jan 20 15:30:26 compute-1 nova_compute[225855]:     <topology sockets="1" cores="1" threads="1"/>
Jan 20 15:30:26 compute-1 nova_compute[225855]:   </cpu>
Jan 20 15:30:26 compute-1 nova_compute[225855]:   <devices>
Jan 20 15:30:26 compute-1 nova_compute[225855]:     <disk type="network" device="disk">
Jan 20 15:30:26 compute-1 nova_compute[225855]:       <driver type="raw" cache="none"/>
Jan 20 15:30:26 compute-1 nova_compute[225855]:       <source protocol="rbd" name="vms/c2074d47-58a3-49e8-82fd-6bc6145a1ea7_disk">
Jan 20 15:30:26 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 15:30:26 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 15:30:26 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 15:30:26 compute-1 nova_compute[225855]:       </source>
Jan 20 15:30:26 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 15:30:26 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 15:30:26 compute-1 nova_compute[225855]:       </auth>
Jan 20 15:30:26 compute-1 nova_compute[225855]:       <target dev="vda" bus="virtio"/>
Jan 20 15:30:26 compute-1 nova_compute[225855]:     </disk>
Jan 20 15:30:26 compute-1 nova_compute[225855]:     <disk type="network" device="cdrom">
Jan 20 15:30:26 compute-1 nova_compute[225855]:       <driver type="raw" cache="none"/>
Jan 20 15:30:26 compute-1 nova_compute[225855]:       <source protocol="rbd" name="vms/c2074d47-58a3-49e8-82fd-6bc6145a1ea7_disk.config">
Jan 20 15:30:26 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 15:30:26 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 15:30:26 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 15:30:26 compute-1 nova_compute[225855]:       </source>
Jan 20 15:30:26 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 15:30:26 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 15:30:26 compute-1 nova_compute[225855]:       </auth>
Jan 20 15:30:26 compute-1 nova_compute[225855]:       <target dev="sda" bus="sata"/>
Jan 20 15:30:26 compute-1 nova_compute[225855]:     </disk>
Jan 20 15:30:26 compute-1 nova_compute[225855]:     <interface type="ethernet">
Jan 20 15:30:26 compute-1 nova_compute[225855]:       <mac address="fa:16:3e:36:f9:3a"/>
Jan 20 15:30:26 compute-1 nova_compute[225855]:       <model type="virtio"/>
Jan 20 15:30:26 compute-1 nova_compute[225855]:       <driver name="vhost" rx_queue_size="512"/>
Jan 20 15:30:26 compute-1 nova_compute[225855]:       <mtu size="1442"/>
Jan 20 15:30:26 compute-1 nova_compute[225855]:       <target dev="tap60202d18-26"/>
Jan 20 15:30:26 compute-1 nova_compute[225855]:     </interface>
Jan 20 15:30:26 compute-1 nova_compute[225855]:     <serial type="pty">
Jan 20 15:30:26 compute-1 nova_compute[225855]:       <log file="/var/lib/nova/instances/c2074d47-58a3-49e8-82fd-6bc6145a1ea7/console.log" append="off"/>
Jan 20 15:30:26 compute-1 nova_compute[225855]:     </serial>
Jan 20 15:30:26 compute-1 nova_compute[225855]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 20 15:30:26 compute-1 nova_compute[225855]:     <video>
Jan 20 15:30:26 compute-1 nova_compute[225855]:       <model type="virtio"/>
Jan 20 15:30:26 compute-1 nova_compute[225855]:     </video>
Jan 20 15:30:26 compute-1 nova_compute[225855]:     <input type="tablet" bus="usb"/>
Jan 20 15:30:26 compute-1 nova_compute[225855]:     <rng model="virtio">
Jan 20 15:30:26 compute-1 nova_compute[225855]:       <backend model="random">/dev/urandom</backend>
Jan 20 15:30:26 compute-1 nova_compute[225855]:     </rng>
Jan 20 15:30:26 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root"/>
Jan 20 15:30:26 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:30:26 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:30:26 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:30:26 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:30:26 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:30:26 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:30:26 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:30:26 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:30:26 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:30:26 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:30:26 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:30:26 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:30:26 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:30:26 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:30:26 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:30:26 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:30:26 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:30:26 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:30:26 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:30:26 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:30:26 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:30:26 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:30:26 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:30:26 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:30:26 compute-1 nova_compute[225855]:     <controller type="usb" index="0"/>
Jan 20 15:30:26 compute-1 nova_compute[225855]:     <memballoon model="virtio">
Jan 20 15:30:26 compute-1 nova_compute[225855]:       <stats period="10"/>
Jan 20 15:30:26 compute-1 nova_compute[225855]:     </memballoon>
Jan 20 15:30:26 compute-1 nova_compute[225855]:   </devices>
Jan 20 15:30:26 compute-1 nova_compute[225855]: </domain>
Jan 20 15:30:26 compute-1 nova_compute[225855]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 20 15:30:26 compute-1 nova_compute[225855]: 2026-01-20 15:30:26.725 225859 DEBUG nova.compute.manager [None req-7a7d2c37-e6b7-40f0-9877-6a6ac3ea8422 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: c2074d47-58a3-49e8-82fd-6bc6145a1ea7] Preparing to wait for external event network-vif-plugged-60202d18-26b2-493b-a427-211cda112a80 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 20 15:30:26 compute-1 nova_compute[225855]: 2026-01-20 15:30:26.726 225859 DEBUG oslo_concurrency.lockutils [None req-7a7d2c37-e6b7-40f0-9877-6a6ac3ea8422 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Acquiring lock "c2074d47-58a3-49e8-82fd-6bc6145a1ea7-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:30:26 compute-1 nova_compute[225855]: 2026-01-20 15:30:26.727 225859 DEBUG oslo_concurrency.lockutils [None req-7a7d2c37-e6b7-40f0-9877-6a6ac3ea8422 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Lock "c2074d47-58a3-49e8-82fd-6bc6145a1ea7-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:30:26 compute-1 nova_compute[225855]: 2026-01-20 15:30:26.727 225859 DEBUG oslo_concurrency.lockutils [None req-7a7d2c37-e6b7-40f0-9877-6a6ac3ea8422 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Lock "c2074d47-58a3-49e8-82fd-6bc6145a1ea7-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:30:26 compute-1 nova_compute[225855]: 2026-01-20 15:30:26.728 225859 DEBUG nova.virt.libvirt.vif [None req-7a7d2c37-e6b7-40f0-9877-6a6ac3ea8422 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T15:30:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-761395495',display_name='tempest-TestNetworkBasicOps-server-761395495',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-761395495',id=206,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBA6PcyU5b6KJgECZvP75RVISUjV8spB81h3nAjsUONZi4KISBeJ3H+m9LFQCp72IhdPL4TNE6iitZI83oIzTSr0WLM1hF9NfU7ED77LiXjCqrZKn4HPslanwlp/Qjc+bCQ==',key_name='tempest-TestNetworkBasicOps-1783413232',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3168f57421fb49bfb94b85daedd1fe7d',ramdisk_id='',reservation_id='r-mwqa18l0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-807695970',owner_user_name='tempest-TestNetworkBasicOps-807695970-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T15:30:22Z,user_data=None,user_id='5338aa65dc0e4326a66ce79053787f14',uuid=c2074d47-58a3-49e8-82fd-6bc6145a1ea7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "60202d18-26b2-493b-a427-211cda112a80", "address": "fa:16:3e:36:f9:3a", "network": {"id": "0228362f-0ced-4cac-bb89-96bd472df47f", "bridge": "br-int", "label": "tempest-network-smoke--197661202", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "3168f57421fb49bfb94b85daedd1fe7d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap60202d18-26", "ovs_interfaceid": "60202d18-26b2-493b-a427-211cda112a80", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 20 15:30:26 compute-1 nova_compute[225855]: 2026-01-20 15:30:26.729 225859 DEBUG nova.network.os_vif_util [None req-7a7d2c37-e6b7-40f0-9877-6a6ac3ea8422 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Converting VIF {"id": "60202d18-26b2-493b-a427-211cda112a80", "address": "fa:16:3e:36:f9:3a", "network": {"id": "0228362f-0ced-4cac-bb89-96bd472df47f", "bridge": "br-int", "label": "tempest-network-smoke--197661202", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3168f57421fb49bfb94b85daedd1fe7d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap60202d18-26", "ovs_interfaceid": "60202d18-26b2-493b-a427-211cda112a80", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 15:30:26 compute-1 nova_compute[225855]: 2026-01-20 15:30:26.730 225859 DEBUG nova.network.os_vif_util [None req-7a7d2c37-e6b7-40f0-9877-6a6ac3ea8422 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:36:f9:3a,bridge_name='br-int',has_traffic_filtering=True,id=60202d18-26b2-493b-a427-211cda112a80,network=Network(0228362f-0ced-4cac-bb89-96bd472df47f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap60202d18-26') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 15:30:26 compute-1 nova_compute[225855]: 2026-01-20 15:30:26.731 225859 DEBUG os_vif [None req-7a7d2c37-e6b7-40f0-9877-6a6ac3ea8422 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:36:f9:3a,bridge_name='br-int',has_traffic_filtering=True,id=60202d18-26b2-493b-a427-211cda112a80,network=Network(0228362f-0ced-4cac-bb89-96bd472df47f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap60202d18-26') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 20 15:30:26 compute-1 nova_compute[225855]: 2026-01-20 15:30:26.731 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:30:26 compute-1 nova_compute[225855]: 2026-01-20 15:30:26.732 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:30:26 compute-1 nova_compute[225855]: 2026-01-20 15:30:26.733 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 15:30:26 compute-1 nova_compute[225855]: 2026-01-20 15:30:26.738 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:30:26 compute-1 nova_compute[225855]: 2026-01-20 15:30:26.739 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap60202d18-26, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:30:26 compute-1 nova_compute[225855]: 2026-01-20 15:30:26.739 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap60202d18-26, col_values=(('external_ids', {'iface-id': '60202d18-26b2-493b-a427-211cda112a80', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:36:f9:3a', 'vm-uuid': 'c2074d47-58a3-49e8-82fd-6bc6145a1ea7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:30:26 compute-1 nova_compute[225855]: 2026-01-20 15:30:26.741 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:30:26 compute-1 NetworkManager[49104]: <info>  [1768923026.7426] manager: (tap60202d18-26): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/404)
Jan 20 15:30:26 compute-1 nova_compute[225855]: 2026-01-20 15:30:26.745 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 15:30:26 compute-1 nova_compute[225855]: 2026-01-20 15:30:26.749 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:30:26 compute-1 nova_compute[225855]: 2026-01-20 15:30:26.751 225859 INFO os_vif [None req-7a7d2c37-e6b7-40f0-9877-6a6ac3ea8422 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:36:f9:3a,bridge_name='br-int',has_traffic_filtering=True,id=60202d18-26b2-493b-a427-211cda112a80,network=Network(0228362f-0ced-4cac-bb89-96bd472df47f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap60202d18-26')
Jan 20 15:30:26 compute-1 nova_compute[225855]: 2026-01-20 15:30:26.809 225859 DEBUG nova.virt.libvirt.driver [None req-7a7d2c37-e6b7-40f0-9877-6a6ac3ea8422 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 15:30:26 compute-1 nova_compute[225855]: 2026-01-20 15:30:26.809 225859 DEBUG nova.virt.libvirt.driver [None req-7a7d2c37-e6b7-40f0-9877-6a6ac3ea8422 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 15:30:26 compute-1 nova_compute[225855]: 2026-01-20 15:30:26.810 225859 DEBUG nova.virt.libvirt.driver [None req-7a7d2c37-e6b7-40f0-9877-6a6ac3ea8422 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] No VIF found with MAC fa:16:3e:36:f9:3a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 20 15:30:26 compute-1 nova_compute[225855]: 2026-01-20 15:30:26.810 225859 INFO nova.virt.libvirt.driver [None req-7a7d2c37-e6b7-40f0-9877-6a6ac3ea8422 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: c2074d47-58a3-49e8-82fd-6bc6145a1ea7] Using config drive
Jan 20 15:30:26 compute-1 nova_compute[225855]: 2026-01-20 15:30:26.836 225859 DEBUG nova.storage.rbd_utils [None req-7a7d2c37-e6b7-40f0-9877-6a6ac3ea8422 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] rbd image c2074d47-58a3-49e8-82fd-6bc6145a1ea7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:30:27 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/3549875571' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:30:27 compute-1 sudo[319645]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:30:27 compute-1 sudo[319645]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:30:27 compute-1 sudo[319645]: pam_unix(sudo:session): session closed for user root
Jan 20 15:30:27 compute-1 sudo[319670]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:30:27 compute-1 sudo[319670]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:30:27 compute-1 sudo[319670]: pam_unix(sudo:session): session closed for user root
Jan 20 15:30:27 compute-1 sudo[319696]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:30:27 compute-1 sudo[319696]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:30:27 compute-1 sudo[319696]: pam_unix(sudo:session): session closed for user root
Jan 20 15:30:27 compute-1 sudo[319721]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:30:27 compute-1 sudo[319721]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:30:27 compute-1 sudo[319721]: pam_unix(sudo:session): session closed for user root
Jan 20 15:30:28 compute-1 sudo[319746]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:30:28 compute-1 sudo[319746]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:30:28 compute-1 sudo[319746]: pam_unix(sudo:session): session closed for user root
Jan 20 15:30:28 compute-1 sudo[319771]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Jan 20 15:30:28 compute-1 sudo[319771]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:30:28 compute-1 sudo[319771]: pam_unix(sudo:session): session closed for user root
Jan 20 15:30:28 compute-1 sudo[319816]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:30:28 compute-1 sudo[319816]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:30:28 compute-1 sudo[319816]: pam_unix(sudo:session): session closed for user root
Jan 20 15:30:28 compute-1 sudo[319841]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:30:28 compute-1 sudo[319841]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:30:28 compute-1 sudo[319841]: pam_unix(sudo:session): session closed for user root
Jan 20 15:30:28 compute-1 ceph-mon[81775]: pgmap v3248: 321 pgs: 321 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 20 15:30:28 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:30:28 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:30:28 compute-1 nova_compute[225855]: 2026-01-20 15:30:28.468 225859 INFO nova.virt.libvirt.driver [None req-7a7d2c37-e6b7-40f0-9877-6a6ac3ea8422 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: c2074d47-58a3-49e8-82fd-6bc6145a1ea7] Creating config drive at /var/lib/nova/instances/c2074d47-58a3-49e8-82fd-6bc6145a1ea7/disk.config
Jan 20 15:30:28 compute-1 nova_compute[225855]: 2026-01-20 15:30:28.474 225859 DEBUG oslo_concurrency.processutils [None req-7a7d2c37-e6b7-40f0-9877-6a6ac3ea8422 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c2074d47-58a3-49e8-82fd-6bc6145a1ea7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpkcze09k3 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:30:28 compute-1 sudo[319866]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:30:28 compute-1 sudo[319866]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:30:28 compute-1 sudo[319866]: pam_unix(sudo:session): session closed for user root
Jan 20 15:30:28 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:30:28 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:30:28 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:30:28.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:30:28 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:30:28 compute-1 sudo[319894]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 20 15:30:28 compute-1 sudo[319894]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:30:28 compute-1 nova_compute[225855]: 2026-01-20 15:30:28.615 225859 DEBUG oslo_concurrency.processutils [None req-7a7d2c37-e6b7-40f0-9877-6a6ac3ea8422 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c2074d47-58a3-49e8-82fd-6bc6145a1ea7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpkcze09k3" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:30:28 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:30:28 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:30:28 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:30:28.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:30:28 compute-1 nova_compute[225855]: 2026-01-20 15:30:28.641 225859 DEBUG nova.storage.rbd_utils [None req-7a7d2c37-e6b7-40f0-9877-6a6ac3ea8422 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] rbd image c2074d47-58a3-49e8-82fd-6bc6145a1ea7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:30:28 compute-1 nova_compute[225855]: 2026-01-20 15:30:28.645 225859 DEBUG oslo_concurrency.processutils [None req-7a7d2c37-e6b7-40f0-9877-6a6ac3ea8422 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c2074d47-58a3-49e8-82fd-6bc6145a1ea7/disk.config c2074d47-58a3-49e8-82fd-6bc6145a1ea7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:30:28 compute-1 nova_compute[225855]: 2026-01-20 15:30:28.792 225859 DEBUG oslo_concurrency.processutils [None req-7a7d2c37-e6b7-40f0-9877-6a6ac3ea8422 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c2074d47-58a3-49e8-82fd-6bc6145a1ea7/disk.config c2074d47-58a3-49e8-82fd-6bc6145a1ea7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.147s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:30:28 compute-1 nova_compute[225855]: 2026-01-20 15:30:28.793 225859 INFO nova.virt.libvirt.driver [None req-7a7d2c37-e6b7-40f0-9877-6a6ac3ea8422 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: c2074d47-58a3-49e8-82fd-6bc6145a1ea7] Deleting local config drive /var/lib/nova/instances/c2074d47-58a3-49e8-82fd-6bc6145a1ea7/disk.config because it was imported into RBD.
Jan 20 15:30:28 compute-1 kernel: tap60202d18-26: entered promiscuous mode
Jan 20 15:30:28 compute-1 NetworkManager[49104]: <info>  [1768923028.8440] manager: (tap60202d18-26): new Tun device (/org/freedesktop/NetworkManager/Devices/405)
Jan 20 15:30:28 compute-1 systemd-udevd[319982]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 15:30:28 compute-1 ovn_controller[130490]: 2026-01-20T15:30:28Z|00952|binding|INFO|Claiming lport 60202d18-26b2-493b-a427-211cda112a80 for this chassis.
Jan 20 15:30:28 compute-1 ovn_controller[130490]: 2026-01-20T15:30:28Z|00953|binding|INFO|60202d18-26b2-493b-a427-211cda112a80: Claiming fa:16:3e:36:f9:3a 10.100.0.11
Jan 20 15:30:28 compute-1 nova_compute[225855]: 2026-01-20 15:30:28.907 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:30:28 compute-1 nova_compute[225855]: 2026-01-20 15:30:28.913 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:30:28 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:30:28.919 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:36:f9:3a 10.100.0.11'], port_security=['fa:16:3e:36:f9:3a 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'c2074d47-58a3-49e8-82fd-6bc6145a1ea7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0228362f-0ced-4cac-bb89-96bd472df47f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3168f57421fb49bfb94b85daedd1fe7d', 'neutron:revision_number': '2', 'neutron:security_group_ids': '6458a221-63f1-42cc-b15d-f9334e60cb66', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=306abd7d-c001-4e00-b2a1-8a251fd6a022, chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], logical_port=60202d18-26b2-493b-a427-211cda112a80) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:30:28 compute-1 NetworkManager[49104]: <info>  [1768923028.9212] device (tap60202d18-26): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 20 15:30:28 compute-1 NetworkManager[49104]: <info>  [1768923028.9221] device (tap60202d18-26): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 20 15:30:28 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:30:28.921 140354 INFO neutron.agent.ovn.metadata.agent [-] Port 60202d18-26b2-493b-a427-211cda112a80 in datapath 0228362f-0ced-4cac-bb89-96bd472df47f bound to our chassis
Jan 20 15:30:28 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:30:28.922 140354 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0228362f-0ced-4cac-bb89-96bd472df47f
Jan 20 15:30:28 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:30:28.934 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[e777e4bd-74f5-4d33-a6f8-df6bcee95bd5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:30:28 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:30:28.936 140354 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap0228362f-01 in ovnmeta-0228362f-0ced-4cac-bb89-96bd472df47f namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 20 15:30:28 compute-1 systemd-machined[194361]: New machine qemu-110-instance-000000ce.
Jan 20 15:30:28 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:30:28.939 229707 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap0228362f-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 20 15:30:28 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:30:28.939 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[d084c6bf-f9d2-4007-af31-a3d73ef3c12f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:30:28 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:30:28.941 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[cd5f2d92-cc24-4f39-80c5-1fa896f00631]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:30:28 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:30:28.952 140466 DEBUG oslo.privsep.daemon [-] privsep: reply[00ca2826-f65f-4b49-bb4c-e167b8a9f39c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:30:28 compute-1 systemd[1]: Started Virtual Machine qemu-110-instance-000000ce.
Jan 20 15:30:28 compute-1 nova_compute[225855]: 2026-01-20 15:30:28.976 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:30:28 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:30:28.976 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[26aa4ddd-83cc-45dd-b735-563637cdf729]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:30:28 compute-1 ovn_controller[130490]: 2026-01-20T15:30:28Z|00954|binding|INFO|Setting lport 60202d18-26b2-493b-a427-211cda112a80 ovn-installed in OVS
Jan 20 15:30:28 compute-1 ovn_controller[130490]: 2026-01-20T15:30:28Z|00955|binding|INFO|Setting lport 60202d18-26b2-493b-a427-211cda112a80 up in Southbound
Jan 20 15:30:28 compute-1 nova_compute[225855]: 2026-01-20 15:30:28.982 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:30:29 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:30:29.004 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[8ed21a24-252d-4be7-93da-e7fec78b60fe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:30:29 compute-1 NetworkManager[49104]: <info>  [1768923029.0099] manager: (tap0228362f-00): new Veth device (/org/freedesktop/NetworkManager/Devices/406)
Jan 20 15:30:29 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:30:29.010 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[5eab8e15-4c09-4eed-a653-94646c80d377]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:30:29 compute-1 systemd-udevd[319985]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 15:30:29 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:30:29.040 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[10711aec-8c7b-4257-8fe8-1a35f01ae979]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:30:29 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:30:29.044 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[5246c544-e486-4e67-8782-232f8fca113d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:30:29 compute-1 sudo[319894]: pam_unix(sudo:session): session closed for user root
Jan 20 15:30:29 compute-1 NetworkManager[49104]: <info>  [1768923029.0663] device (tap0228362f-00): carrier: link connected
Jan 20 15:30:29 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:30:29.071 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[91e55b80-80f3-4292-bc75-75d2b3e0d3fc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:30:29 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:30:29.089 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[d108989f-ab76-46e5-a438-ff0a9766209b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0228362f-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c9:13:71'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 271], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 806799, 'reachable_time': 22382, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 320032, 'error': None, 'target': 'ovnmeta-0228362f-0ced-4cac-bb89-96bd472df47f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:30:29 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:30:29.105 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[28b5778b-3ce3-44a0-9ad4-8d3a01955b68]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fec9:1371'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 806799, 'tstamp': 806799}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 320033, 'error': None, 'target': 'ovnmeta-0228362f-0ced-4cac-bb89-96bd472df47f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:30:29 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:30:29.123 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[ae1fba40-8842-46e5-9813-75e73c06d9fe]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0228362f-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c9:13:71'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 271], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 806799, 'reachable_time': 22382, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 320034, 'error': None, 'target': 'ovnmeta-0228362f-0ced-4cac-bb89-96bd472df47f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:30:29 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:30:29.156 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[987b5d2f-1aa3-4e96-bea3-a0796dd7cc36]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:30:29 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:30:29.217 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[224dd069-30a9-4cef-a287-4b0764017f2d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:30:29 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:30:29.219 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0228362f-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:30:29 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:30:29.219 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 15:30:29 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:30:29.219 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0228362f-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:30:29 compute-1 NetworkManager[49104]: <info>  [1768923029.2224] manager: (tap0228362f-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/407)
Jan 20 15:30:29 compute-1 nova_compute[225855]: 2026-01-20 15:30:29.221 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:30:29 compute-1 kernel: tap0228362f-00: entered promiscuous mode
Jan 20 15:30:29 compute-1 nova_compute[225855]: 2026-01-20 15:30:29.225 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:30:29 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:30:29.226 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0228362f-00, col_values=(('external_ids', {'iface-id': 'cd551c37-a4a7-45aa-9507-04cb570a94af'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:30:29 compute-1 ovn_controller[130490]: 2026-01-20T15:30:29Z|00956|binding|INFO|Releasing lport cd551c37-a4a7-45aa-9507-04cb570a94af from this chassis (sb_readonly=0)
Jan 20 15:30:29 compute-1 nova_compute[225855]: 2026-01-20 15:30:29.227 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:30:29 compute-1 nova_compute[225855]: 2026-01-20 15:30:29.240 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:30:29 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:30:29.241 140354 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/0228362f-0ced-4cac-bb89-96bd472df47f.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/0228362f-0ced-4cac-bb89-96bd472df47f.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 20 15:30:29 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:30:29.242 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[aa51eede-28b4-4166-bc5e-492f796560e5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:30:29 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:30:29.243 140354 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 20 15:30:29 compute-1 ovn_metadata_agent[140349]: global
Jan 20 15:30:29 compute-1 ovn_metadata_agent[140349]:     log         /dev/log local0 debug
Jan 20 15:30:29 compute-1 ovn_metadata_agent[140349]:     log-tag     haproxy-metadata-proxy-0228362f-0ced-4cac-bb89-96bd472df47f
Jan 20 15:30:29 compute-1 ovn_metadata_agent[140349]:     user        root
Jan 20 15:30:29 compute-1 ovn_metadata_agent[140349]:     group       root
Jan 20 15:30:29 compute-1 ovn_metadata_agent[140349]:     maxconn     1024
Jan 20 15:30:29 compute-1 ovn_metadata_agent[140349]:     pidfile     /var/lib/neutron/external/pids/0228362f-0ced-4cac-bb89-96bd472df47f.pid.haproxy
Jan 20 15:30:29 compute-1 ovn_metadata_agent[140349]:     daemon
Jan 20 15:30:29 compute-1 ovn_metadata_agent[140349]: 
Jan 20 15:30:29 compute-1 ovn_metadata_agent[140349]: defaults
Jan 20 15:30:29 compute-1 ovn_metadata_agent[140349]:     log global
Jan 20 15:30:29 compute-1 ovn_metadata_agent[140349]:     mode http
Jan 20 15:30:29 compute-1 ovn_metadata_agent[140349]:     option httplog
Jan 20 15:30:29 compute-1 ovn_metadata_agent[140349]:     option dontlognull
Jan 20 15:30:29 compute-1 ovn_metadata_agent[140349]:     option http-server-close
Jan 20 15:30:29 compute-1 ovn_metadata_agent[140349]:     option forwardfor
Jan 20 15:30:29 compute-1 ovn_metadata_agent[140349]:     retries                 3
Jan 20 15:30:29 compute-1 ovn_metadata_agent[140349]:     timeout http-request    30s
Jan 20 15:30:29 compute-1 ovn_metadata_agent[140349]:     timeout connect         30s
Jan 20 15:30:29 compute-1 ovn_metadata_agent[140349]:     timeout client          32s
Jan 20 15:30:29 compute-1 ovn_metadata_agent[140349]:     timeout server          32s
Jan 20 15:30:29 compute-1 ovn_metadata_agent[140349]:     timeout http-keep-alive 30s
Jan 20 15:30:29 compute-1 ovn_metadata_agent[140349]: 
Jan 20 15:30:29 compute-1 ovn_metadata_agent[140349]: 
Jan 20 15:30:29 compute-1 ovn_metadata_agent[140349]: listen listener
Jan 20 15:30:29 compute-1 ovn_metadata_agent[140349]:     bind 169.254.169.254:80
Jan 20 15:30:29 compute-1 ovn_metadata_agent[140349]:     server metadata /var/lib/neutron/metadata_proxy
Jan 20 15:30:29 compute-1 ovn_metadata_agent[140349]:     http-request add-header X-OVN-Network-ID 0228362f-0ced-4cac-bb89-96bd472df47f
Jan 20 15:30:29 compute-1 ovn_metadata_agent[140349]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 20 15:30:29 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:30:29.243 140354 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-0228362f-0ced-4cac-bb89-96bd472df47f', 'env', 'PROCESS_TAG=haproxy-0228362f-0ced-4cac-bb89-96bd472df47f', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/0228362f-0ced-4cac-bb89-96bd472df47f.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 20 15:30:29 compute-1 nova_compute[225855]: 2026-01-20 15:30:29.386 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768923029.3858116, c2074d47-58a3-49e8-82fd-6bc6145a1ea7 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:30:29 compute-1 nova_compute[225855]: 2026-01-20 15:30:29.386 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: c2074d47-58a3-49e8-82fd-6bc6145a1ea7] VM Started (Lifecycle Event)
Jan 20 15:30:29 compute-1 nova_compute[225855]: 2026-01-20 15:30:29.404 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: c2074d47-58a3-49e8-82fd-6bc6145a1ea7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:30:29 compute-1 nova_compute[225855]: 2026-01-20 15:30:29.408 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768923029.3886147, c2074d47-58a3-49e8-82fd-6bc6145a1ea7 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:30:29 compute-1 nova_compute[225855]: 2026-01-20 15:30:29.408 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: c2074d47-58a3-49e8-82fd-6bc6145a1ea7] VM Paused (Lifecycle Event)
Jan 20 15:30:29 compute-1 nova_compute[225855]: 2026-01-20 15:30:29.421 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: c2074d47-58a3-49e8-82fd-6bc6145a1ea7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:30:29 compute-1 nova_compute[225855]: 2026-01-20 15:30:29.423 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: c2074d47-58a3-49e8-82fd-6bc6145a1ea7] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 15:30:29 compute-1 nova_compute[225855]: 2026-01-20 15:30:29.449 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: c2074d47-58a3-49e8-82fd-6bc6145a1ea7] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 15:30:29 compute-1 nova_compute[225855]: 2026-01-20 15:30:29.496 225859 DEBUG nova.network.neutron [req-1b80bc76-6b49-49d2-bbca-9eef0c3893a0 req-4d7a5a9d-73dc-4561-8a95-d6c3b56d899d 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: c2074d47-58a3-49e8-82fd-6bc6145a1ea7] Updated VIF entry in instance network info cache for port 60202d18-26b2-493b-a427-211cda112a80. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 15:30:29 compute-1 nova_compute[225855]: 2026-01-20 15:30:29.498 225859 DEBUG nova.network.neutron [req-1b80bc76-6b49-49d2-bbca-9eef0c3893a0 req-4d7a5a9d-73dc-4561-8a95-d6c3b56d899d 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: c2074d47-58a3-49e8-82fd-6bc6145a1ea7] Updating instance_info_cache with network_info: [{"id": "60202d18-26b2-493b-a427-211cda112a80", "address": "fa:16:3e:36:f9:3a", "network": {"id": "0228362f-0ced-4cac-bb89-96bd472df47f", "bridge": "br-int", "label": "tempest-network-smoke--197661202", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3168f57421fb49bfb94b85daedd1fe7d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap60202d18-26", "ovs_interfaceid": "60202d18-26b2-493b-a427-211cda112a80", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:30:29 compute-1 nova_compute[225855]: 2026-01-20 15:30:29.519 225859 DEBUG oslo_concurrency.lockutils [req-1b80bc76-6b49-49d2-bbca-9eef0c3893a0 req-4d7a5a9d-73dc-4561-8a95-d6c3b56d899d 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-c2074d47-58a3-49e8-82fd-6bc6145a1ea7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:30:29 compute-1 nova_compute[225855]: 2026-01-20 15:30:29.627 225859 DEBUG nova.compute.manager [req-9e269883-a09d-4e62-ac7a-ddc280c986d4 req-0493deaa-8ae2-42e6-8b5c-8d842fa2b686 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: c2074d47-58a3-49e8-82fd-6bc6145a1ea7] Received event network-vif-plugged-60202d18-26b2-493b-a427-211cda112a80 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:30:29 compute-1 nova_compute[225855]: 2026-01-20 15:30:29.627 225859 DEBUG oslo_concurrency.lockutils [req-9e269883-a09d-4e62-ac7a-ddc280c986d4 req-0493deaa-8ae2-42e6-8b5c-8d842fa2b686 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "c2074d47-58a3-49e8-82fd-6bc6145a1ea7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:30:29 compute-1 nova_compute[225855]: 2026-01-20 15:30:29.627 225859 DEBUG oslo_concurrency.lockutils [req-9e269883-a09d-4e62-ac7a-ddc280c986d4 req-0493deaa-8ae2-42e6-8b5c-8d842fa2b686 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "c2074d47-58a3-49e8-82fd-6bc6145a1ea7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:30:29 compute-1 nova_compute[225855]: 2026-01-20 15:30:29.627 225859 DEBUG oslo_concurrency.lockutils [req-9e269883-a09d-4e62-ac7a-ddc280c986d4 req-0493deaa-8ae2-42e6-8b5c-8d842fa2b686 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "c2074d47-58a3-49e8-82fd-6bc6145a1ea7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:30:29 compute-1 nova_compute[225855]: 2026-01-20 15:30:29.628 225859 DEBUG nova.compute.manager [req-9e269883-a09d-4e62-ac7a-ddc280c986d4 req-0493deaa-8ae2-42e6-8b5c-8d842fa2b686 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: c2074d47-58a3-49e8-82fd-6bc6145a1ea7] Processing event network-vif-plugged-60202d18-26b2-493b-a427-211cda112a80 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 20 15:30:29 compute-1 nova_compute[225855]: 2026-01-20 15:30:29.628 225859 DEBUG nova.compute.manager [None req-7a7d2c37-e6b7-40f0-9877-6a6ac3ea8422 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: c2074d47-58a3-49e8-82fd-6bc6145a1ea7] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 20 15:30:29 compute-1 nova_compute[225855]: 2026-01-20 15:30:29.631 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768923029.6315727, c2074d47-58a3-49e8-82fd-6bc6145a1ea7 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:30:29 compute-1 nova_compute[225855]: 2026-01-20 15:30:29.632 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: c2074d47-58a3-49e8-82fd-6bc6145a1ea7] VM Resumed (Lifecycle Event)
Jan 20 15:30:29 compute-1 nova_compute[225855]: 2026-01-20 15:30:29.633 225859 DEBUG nova.virt.libvirt.driver [None req-7a7d2c37-e6b7-40f0-9877-6a6ac3ea8422 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: c2074d47-58a3-49e8-82fd-6bc6145a1ea7] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 20 15:30:29 compute-1 nova_compute[225855]: 2026-01-20 15:30:29.636 225859 INFO nova.virt.libvirt.driver [-] [instance: c2074d47-58a3-49e8-82fd-6bc6145a1ea7] Instance spawned successfully.
Jan 20 15:30:29 compute-1 nova_compute[225855]: 2026-01-20 15:30:29.637 225859 DEBUG nova.virt.libvirt.driver [None req-7a7d2c37-e6b7-40f0-9877-6a6ac3ea8422 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: c2074d47-58a3-49e8-82fd-6bc6145a1ea7] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 20 15:30:29 compute-1 nova_compute[225855]: 2026-01-20 15:30:29.654 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: c2074d47-58a3-49e8-82fd-6bc6145a1ea7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:30:29 compute-1 nova_compute[225855]: 2026-01-20 15:30:29.659 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: c2074d47-58a3-49e8-82fd-6bc6145a1ea7] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 15:30:29 compute-1 podman[320105]: 2026-01-20 15:30:29.566105305 +0000 UTC m=+0.022763971 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 20 15:30:29 compute-1 nova_compute[225855]: 2026-01-20 15:30:29.663 225859 DEBUG nova.virt.libvirt.driver [None req-7a7d2c37-e6b7-40f0-9877-6a6ac3ea8422 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: c2074d47-58a3-49e8-82fd-6bc6145a1ea7] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:30:29 compute-1 nova_compute[225855]: 2026-01-20 15:30:29.663 225859 DEBUG nova.virt.libvirt.driver [None req-7a7d2c37-e6b7-40f0-9877-6a6ac3ea8422 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: c2074d47-58a3-49e8-82fd-6bc6145a1ea7] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:30:29 compute-1 nova_compute[225855]: 2026-01-20 15:30:29.663 225859 DEBUG nova.virt.libvirt.driver [None req-7a7d2c37-e6b7-40f0-9877-6a6ac3ea8422 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: c2074d47-58a3-49e8-82fd-6bc6145a1ea7] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:30:29 compute-1 nova_compute[225855]: 2026-01-20 15:30:29.664 225859 DEBUG nova.virt.libvirt.driver [None req-7a7d2c37-e6b7-40f0-9877-6a6ac3ea8422 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: c2074d47-58a3-49e8-82fd-6bc6145a1ea7] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:30:29 compute-1 nova_compute[225855]: 2026-01-20 15:30:29.664 225859 DEBUG nova.virt.libvirt.driver [None req-7a7d2c37-e6b7-40f0-9877-6a6ac3ea8422 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: c2074d47-58a3-49e8-82fd-6bc6145a1ea7] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:30:29 compute-1 nova_compute[225855]: 2026-01-20 15:30:29.665 225859 DEBUG nova.virt.libvirt.driver [None req-7a7d2c37-e6b7-40f0-9877-6a6ac3ea8422 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: c2074d47-58a3-49e8-82fd-6bc6145a1ea7] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:30:29 compute-1 nova_compute[225855]: 2026-01-20 15:30:29.697 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: c2074d47-58a3-49e8-82fd-6bc6145a1ea7] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 15:30:29 compute-1 nova_compute[225855]: 2026-01-20 15:30:29.737 225859 INFO nova.compute.manager [None req-7a7d2c37-e6b7-40f0-9877-6a6ac3ea8422 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: c2074d47-58a3-49e8-82fd-6bc6145a1ea7] Took 7.35 seconds to spawn the instance on the hypervisor.
Jan 20 15:30:29 compute-1 nova_compute[225855]: 2026-01-20 15:30:29.737 225859 DEBUG nova.compute.manager [None req-7a7d2c37-e6b7-40f0-9877-6a6ac3ea8422 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: c2074d47-58a3-49e8-82fd-6bc6145a1ea7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:30:29 compute-1 nova_compute[225855]: 2026-01-20 15:30:29.801 225859 INFO nova.compute.manager [None req-7a7d2c37-e6b7-40f0-9877-6a6ac3ea8422 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: c2074d47-58a3-49e8-82fd-6bc6145a1ea7] Took 8.35 seconds to build instance.
Jan 20 15:30:29 compute-1 nova_compute[225855]: 2026-01-20 15:30:29.814 225859 DEBUG oslo_concurrency.lockutils [None req-7a7d2c37-e6b7-40f0-9877-6a6ac3ea8422 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Lock "c2074d47-58a3-49e8-82fd-6bc6145a1ea7" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.435s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:30:29 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:30:29 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:30:29 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 20 15:30:29 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 20 15:30:30 compute-1 podman[320105]: 2026-01-20 15:30:30.044856104 +0000 UTC m=+0.501514750 container create 220fb88226bb4adc08a6d6e8c007dceb68fb2fe39546e4b37d2a422566ff7b43 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0228362f-0ced-4cac-bb89-96bd472df47f, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0)
Jan 20 15:30:30 compute-1 systemd[1]: Started libpod-conmon-220fb88226bb4adc08a6d6e8c007dceb68fb2fe39546e4b37d2a422566ff7b43.scope.
Jan 20 15:30:30 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:30:30 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:30:30 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:30:30.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:30:30 compute-1 systemd[1]: Started libcrun container.
Jan 20 15:30:30 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5492aad4f22230a780618b9ba63a9c548a97b7ff7061366af2c207314fa31229/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 20 15:30:30 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:30:30 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:30:30 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:30:30.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:30:30 compute-1 podman[320105]: 2026-01-20 15:30:30.676743845 +0000 UTC m=+1.133402511 container init 220fb88226bb4adc08a6d6e8c007dceb68fb2fe39546e4b37d2a422566ff7b43 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0228362f-0ced-4cac-bb89-96bd472df47f, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0)
Jan 20 15:30:30 compute-1 podman[320105]: 2026-01-20 15:30:30.68320003 +0000 UTC m=+1.139858676 container start 220fb88226bb4adc08a6d6e8c007dceb68fb2fe39546e4b37d2a422566ff7b43 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0228362f-0ced-4cac-bb89-96bd472df47f, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 15:30:30 compute-1 neutron-haproxy-ovnmeta-0228362f-0ced-4cac-bb89-96bd472df47f[320122]: [NOTICE]   (320126) : New worker (320128) forked
Jan 20 15:30:30 compute-1 neutron-haproxy-ovnmeta-0228362f-0ced-4cac-bb89-96bd472df47f[320122]: [NOTICE]   (320126) : Loading success.
Jan 20 15:30:31 compute-1 ceph-mon[81775]: pgmap v3249: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 20 15:30:31 compute-1 nova_compute[225855]: 2026-01-20 15:30:31.721 225859 DEBUG nova.compute.manager [req-726f0f16-0571-471f-896e-f046fcdf8f33 req-dc19859c-4a73-41ee-b99f-e7f335223ae0 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: c2074d47-58a3-49e8-82fd-6bc6145a1ea7] Received event network-vif-plugged-60202d18-26b2-493b-a427-211cda112a80 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:30:31 compute-1 nova_compute[225855]: 2026-01-20 15:30:31.721 225859 DEBUG oslo_concurrency.lockutils [req-726f0f16-0571-471f-896e-f046fcdf8f33 req-dc19859c-4a73-41ee-b99f-e7f335223ae0 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "c2074d47-58a3-49e8-82fd-6bc6145a1ea7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:30:31 compute-1 nova_compute[225855]: 2026-01-20 15:30:31.721 225859 DEBUG oslo_concurrency.lockutils [req-726f0f16-0571-471f-896e-f046fcdf8f33 req-dc19859c-4a73-41ee-b99f-e7f335223ae0 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "c2074d47-58a3-49e8-82fd-6bc6145a1ea7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:30:31 compute-1 nova_compute[225855]: 2026-01-20 15:30:31.722 225859 DEBUG oslo_concurrency.lockutils [req-726f0f16-0571-471f-896e-f046fcdf8f33 req-dc19859c-4a73-41ee-b99f-e7f335223ae0 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "c2074d47-58a3-49e8-82fd-6bc6145a1ea7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:30:31 compute-1 nova_compute[225855]: 2026-01-20 15:30:31.722 225859 DEBUG nova.compute.manager [req-726f0f16-0571-471f-896e-f046fcdf8f33 req-dc19859c-4a73-41ee-b99f-e7f335223ae0 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: c2074d47-58a3-49e8-82fd-6bc6145a1ea7] No waiting events found dispatching network-vif-plugged-60202d18-26b2-493b-a427-211cda112a80 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:30:31 compute-1 nova_compute[225855]: 2026-01-20 15:30:31.722 225859 WARNING nova.compute.manager [req-726f0f16-0571-471f-896e-f046fcdf8f33 req-dc19859c-4a73-41ee-b99f-e7f335223ae0 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: c2074d47-58a3-49e8-82fd-6bc6145a1ea7] Received unexpected event network-vif-plugged-60202d18-26b2-493b-a427-211cda112a80 for instance with vm_state active and task_state None.
Jan 20 15:30:31 compute-1 nova_compute[225855]: 2026-01-20 15:30:31.742 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:30:32 compute-1 ceph-mon[81775]: pgmap v3250: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 777 KiB/s rd, 1.8 MiB/s wr, 57 op/s
Jan 20 15:30:32 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:30:32 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:30:32 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:30:32.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:30:32 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:30:32 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:30:32 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:30:32.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:30:33 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:30:33 compute-1 nova_compute[225855]: 2026-01-20 15:30:33.978 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:30:34 compute-1 ceph-mon[81775]: pgmap v3251: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 777 KiB/s rd, 1.8 MiB/s wr, 57 op/s
Jan 20 15:30:34 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:30:34 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:30:34 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:30:34 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 15:30:34 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:30:34 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 15:30:34 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 15:30:34 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:30:34 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:30:34 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:30:34 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:30:34.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:30:34 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:30:34 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:30:34 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:30:34.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:30:34 compute-1 ovn_controller[130490]: 2026-01-20T15:30:34Z|00957|binding|INFO|Releasing lport cd551c37-a4a7-45aa-9507-04cb570a94af from this chassis (sb_readonly=0)
Jan 20 15:30:34 compute-1 nova_compute[225855]: 2026-01-20 15:30:34.919 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:30:34 compute-1 NetworkManager[49104]: <info>  [1768923034.9238] manager: (patch-br-int-to-provnet-b62c391b-f7a3-4a38-a0df-72ac0383ca74): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/408)
Jan 20 15:30:34 compute-1 NetworkManager[49104]: <info>  [1768923034.9247] manager: (patch-provnet-b62c391b-f7a3-4a38-a0df-72ac0383ca74-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/409)
Jan 20 15:30:34 compute-1 ovn_controller[130490]: 2026-01-20T15:30:34Z|00958|binding|INFO|Releasing lport cd551c37-a4a7-45aa-9507-04cb570a94af from this chassis (sb_readonly=0)
Jan 20 15:30:34 compute-1 nova_compute[225855]: 2026-01-20 15:30:34.954 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:30:34 compute-1 nova_compute[225855]: 2026-01-20 15:30:34.958 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:30:35 compute-1 nova_compute[225855]: 2026-01-20 15:30:35.915 225859 DEBUG nova.compute.manager [req-9d25b58a-3146-46dc-86ed-535e078a311e req-27928c24-da8a-4dbe-9574-63e5daba4373 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: c2074d47-58a3-49e8-82fd-6bc6145a1ea7] Received event network-changed-60202d18-26b2-493b-a427-211cda112a80 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:30:35 compute-1 nova_compute[225855]: 2026-01-20 15:30:35.915 225859 DEBUG nova.compute.manager [req-9d25b58a-3146-46dc-86ed-535e078a311e req-27928c24-da8a-4dbe-9574-63e5daba4373 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: c2074d47-58a3-49e8-82fd-6bc6145a1ea7] Refreshing instance network info cache due to event network-changed-60202d18-26b2-493b-a427-211cda112a80. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 15:30:35 compute-1 nova_compute[225855]: 2026-01-20 15:30:35.915 225859 DEBUG oslo_concurrency.lockutils [req-9d25b58a-3146-46dc-86ed-535e078a311e req-27928c24-da8a-4dbe-9574-63e5daba4373 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-c2074d47-58a3-49e8-82fd-6bc6145a1ea7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:30:35 compute-1 nova_compute[225855]: 2026-01-20 15:30:35.916 225859 DEBUG oslo_concurrency.lockutils [req-9d25b58a-3146-46dc-86ed-535e078a311e req-27928c24-da8a-4dbe-9574-63e5daba4373 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-c2074d47-58a3-49e8-82fd-6bc6145a1ea7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:30:35 compute-1 nova_compute[225855]: 2026-01-20 15:30:35.916 225859 DEBUG nova.network.neutron [req-9d25b58a-3146-46dc-86ed-535e078a311e req-27928c24-da8a-4dbe-9574-63e5daba4373 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: c2074d47-58a3-49e8-82fd-6bc6145a1ea7] Refreshing network info cache for port 60202d18-26b2-493b-a427-211cda112a80 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 15:30:36 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:30:36 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:30:36 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:30:36.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:30:36 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:30:36 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:30:36 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:30:36.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:30:36 compute-1 ceph-mon[81775]: pgmap v3252: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 1.8 MiB/s wr, 96 op/s
Jan 20 15:30:36 compute-1 nova_compute[225855]: 2026-01-20 15:30:36.745 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:30:37 compute-1 nova_compute[225855]: 2026-01-20 15:30:37.508 225859 DEBUG nova.network.neutron [req-9d25b58a-3146-46dc-86ed-535e078a311e req-27928c24-da8a-4dbe-9574-63e5daba4373 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: c2074d47-58a3-49e8-82fd-6bc6145a1ea7] Updated VIF entry in instance network info cache for port 60202d18-26b2-493b-a427-211cda112a80. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 15:30:37 compute-1 nova_compute[225855]: 2026-01-20 15:30:37.509 225859 DEBUG nova.network.neutron [req-9d25b58a-3146-46dc-86ed-535e078a311e req-27928c24-da8a-4dbe-9574-63e5daba4373 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: c2074d47-58a3-49e8-82fd-6bc6145a1ea7] Updating instance_info_cache with network_info: [{"id": "60202d18-26b2-493b-a427-211cda112a80", "address": "fa:16:3e:36:f9:3a", "network": {"id": "0228362f-0ced-4cac-bb89-96bd472df47f", "bridge": "br-int", "label": "tempest-network-smoke--197661202", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3168f57421fb49bfb94b85daedd1fe7d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap60202d18-26", "ovs_interfaceid": "60202d18-26b2-493b-a427-211cda112a80", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:30:37 compute-1 nova_compute[225855]: 2026-01-20 15:30:37.535 225859 DEBUG oslo_concurrency.lockutils [req-9d25b58a-3146-46dc-86ed-535e078a311e req-27928c24-da8a-4dbe-9574-63e5daba4373 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-c2074d47-58a3-49e8-82fd-6bc6145a1ea7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:30:38 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:30:38 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:30:38 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:30:38 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:30:38.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:30:38 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:30:38 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:30:38 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:30:38.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:30:38 compute-1 ceph-mon[81775]: pgmap v3253: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 871 KiB/s wr, 75 op/s
Jan 20 15:30:38 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:30:38 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:30:38 compute-1 sudo[320142]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:30:38 compute-1 sudo[320142]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:30:38 compute-1 sudo[320142]: pam_unix(sudo:session): session closed for user root
Jan 20 15:30:38 compute-1 sudo[320167]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 15:30:38 compute-1 sudo[320167]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:30:38 compute-1 sudo[320167]: pam_unix(sudo:session): session closed for user root
Jan 20 15:30:38 compute-1 nova_compute[225855]: 2026-01-20 15:30:38.980 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:30:40 compute-1 podman[320193]: 2026-01-20 15:30:40.041040777 +0000 UTC m=+0.082785895 container health_status 72be95c12e041eb2cd1a16ad70dfa1391d9f03a5f3180a614ead9459dcfcd46f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 20 15:30:40 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:30:40 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:30:40 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:30:40.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:30:40 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:30:40 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:30:40 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:30:40.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:30:40 compute-1 ceph-mon[81775]: pgmap v3254: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 20 15:30:41 compute-1 nova_compute[225855]: 2026-01-20 15:30:41.780 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:30:41 compute-1 ovn_controller[130490]: 2026-01-20T15:30:41Z|00116|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:36:f9:3a 10.100.0.11
Jan 20 15:30:41 compute-1 ovn_controller[130490]: 2026-01-20T15:30:41Z|00117|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:36:f9:3a 10.100.0.11
Jan 20 15:30:42 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:30:42 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:30:42 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:30:42.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:30:42 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:30:42 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:30:42 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:30:42.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:30:42 compute-1 ceph-mon[81775]: pgmap v3255: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Jan 20 15:30:43 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:30:43 compute-1 nova_compute[225855]: 2026-01-20 15:30:43.986 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:30:44 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:30:44 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:30:44 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:30:44.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:30:44 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:30:44 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:30:44 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:30:44.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:30:44 compute-1 ceph-mon[81775]: pgmap v3256: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 85 B/s wr, 43 op/s
Jan 20 15:30:44 compute-1 ceph-mon[81775]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #160. Immutable memtables: 0.
Jan 20 15:30:44 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:30:44.736922) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 20 15:30:44 compute-1 ceph-mon[81775]: rocksdb: [db/flush_job.cc:856] [default] [JOB 101] Flushing memtable with next log file: 160
Jan 20 15:30:44 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768923044737043, "job": 101, "event": "flush_started", "num_memtables": 1, "num_entries": 1070, "num_deletes": 251, "total_data_size": 2334552, "memory_usage": 2374080, "flush_reason": "Manual Compaction"}
Jan 20 15:30:44 compute-1 ceph-mon[81775]: rocksdb: [db/flush_job.cc:885] [default] [JOB 101] Level-0 flush table #161: started
Jan 20 15:30:44 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768923044764217, "cf_name": "default", "job": 101, "event": "table_file_creation", "file_number": 161, "file_size": 1521192, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 78262, "largest_seqno": 79327, "table_properties": {"data_size": 1516323, "index_size": 2456, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1349, "raw_key_size": 10746, "raw_average_key_size": 20, "raw_value_size": 1506527, "raw_average_value_size": 2805, "num_data_blocks": 107, "num_entries": 537, "num_filter_entries": 537, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768922964, "oldest_key_time": 1768922964, "file_creation_time": 1768923044, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1539d774-8a6f-4e48-b253-137c44586344", "db_session_id": "LFF7G2OZDOU7TKQ8MKAH", "orig_file_number": 161, "seqno_to_time_mapping": "N/A"}}
Jan 20 15:30:44 compute-1 ceph-mon[81775]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 101] Flush lasted 27344 microseconds, and 6707 cpu microseconds.
Jan 20 15:30:44 compute-1 ceph-mon[81775]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 15:30:44 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:30:44.764281) [db/flush_job.cc:967] [default] [JOB 101] Level-0 flush table #161: 1521192 bytes OK
Jan 20 15:30:44 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:30:44.764302) [db/memtable_list.cc:519] [default] Level-0 commit table #161 started
Jan 20 15:30:44 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:30:44.765808) [db/memtable_list.cc:722] [default] Level-0 commit table #161: memtable #1 done
Jan 20 15:30:44 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:30:44.765826) EVENT_LOG_v1 {"time_micros": 1768923044765820, "job": 101, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 20 15:30:44 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:30:44.765849) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 20 15:30:44 compute-1 ceph-mon[81775]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 101] Try to delete WAL files size 2329283, prev total WAL file size 2329283, number of live WAL files 2.
Jan 20 15:30:44 compute-1 ceph-mon[81775]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000157.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 15:30:44 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:30:44.766641) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730036373737' seq:72057594037927935, type:22 .. '7061786F730037303239' seq:0, type:0; will stop at (end)
Jan 20 15:30:44 compute-1 ceph-mon[81775]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 102] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 20 15:30:44 compute-1 ceph-mon[81775]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 101 Base level 0, inputs: [161(1485KB)], [159(11MB)]
Jan 20 15:30:44 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768923044766702, "job": 102, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [161], "files_L6": [159], "score": -1, "input_data_size": 13201598, "oldest_snapshot_seqno": -1}
Jan 20 15:30:44 compute-1 ceph-mon[81775]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 102] Generated table #162: 10048 keys, 11281013 bytes, temperature: kUnknown
Jan 20 15:30:44 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768923044894169, "cf_name": "default", "job": 102, "event": "table_file_creation", "file_number": 162, "file_size": 11281013, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11218443, "index_size": 36345, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 25157, "raw_key_size": 264870, "raw_average_key_size": 26, "raw_value_size": 11044649, "raw_average_value_size": 1099, "num_data_blocks": 1377, "num_entries": 10048, "num_filter_entries": 10048, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768917474, "oldest_key_time": 0, "file_creation_time": 1768923044, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1539d774-8a6f-4e48-b253-137c44586344", "db_session_id": "LFF7G2OZDOU7TKQ8MKAH", "orig_file_number": 162, "seqno_to_time_mapping": "N/A"}}
Jan 20 15:30:44 compute-1 ceph-mon[81775]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 15:30:44 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:30:44.894472) [db/compaction/compaction_job.cc:1663] [default] [JOB 102] Compacted 1@0 + 1@6 files to L6 => 11281013 bytes
Jan 20 15:30:44 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:30:44.911669) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 103.5 rd, 88.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.5, 11.1 +0.0 blob) out(10.8 +0.0 blob), read-write-amplify(16.1) write-amplify(7.4) OK, records in: 10567, records dropped: 519 output_compression: NoCompression
Jan 20 15:30:44 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:30:44.911717) EVENT_LOG_v1 {"time_micros": 1768923044911701, "job": 102, "event": "compaction_finished", "compaction_time_micros": 127575, "compaction_time_cpu_micros": 67581, "output_level": 6, "num_output_files": 1, "total_output_size": 11281013, "num_input_records": 10567, "num_output_records": 10048, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 20 15:30:44 compute-1 ceph-mon[81775]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000161.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 15:30:44 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768923044912229, "job": 102, "event": "table_file_deletion", "file_number": 161}
Jan 20 15:30:44 compute-1 ceph-mon[81775]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000159.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 15:30:44 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768923044914437, "job": 102, "event": "table_file_deletion", "file_number": 159}
Jan 20 15:30:44 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:30:44.766535) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:30:44 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:30:44.914595) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:30:44 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:30:44.914602) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:30:44 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:30:44.914605) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:30:44 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:30:44.914606) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:30:44 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:30:44.914608) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:30:46 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:30:46 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:30:46 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:30:46.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:30:46 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:30:46 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:30:46 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:30:46.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:30:46 compute-1 ceph-mon[81775]: pgmap v3257: 321 pgs: 321 active+clean; 191 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 1.4 MiB/s wr, 93 op/s
Jan 20 15:30:46 compute-1 nova_compute[225855]: 2026-01-20 15:30:46.782 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:30:47 compute-1 sudo[320222]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:30:47 compute-1 sudo[320222]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:30:47 compute-1 sudo[320222]: pam_unix(sudo:session): session closed for user root
Jan 20 15:30:47 compute-1 sudo[320248]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:30:47 compute-1 sudo[320248]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:30:47 compute-1 sudo[320248]: pam_unix(sudo:session): session closed for user root
Jan 20 15:30:48 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:30:48 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:30:48 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:30:48 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:30:48.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:30:48 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:30:48 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:30:48 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:30:48.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:30:48 compute-1 ceph-mon[81775]: pgmap v3258: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 461 KiB/s rd, 2.1 MiB/s wr, 67 op/s
Jan 20 15:30:48 compute-1 nova_compute[225855]: 2026-01-20 15:30:48.988 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:30:49 compute-1 nova_compute[225855]: 2026-01-20 15:30:49.062 225859 INFO nova.compute.manager [None req-b7c0f359-6fea-4f3c-b9a6-227e347dbb63 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: c2074d47-58a3-49e8-82fd-6bc6145a1ea7] Get console output
Jan 20 15:30:49 compute-1 nova_compute[225855]: 2026-01-20 15:30:49.068 263775 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Jan 20 15:30:50 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:30:50 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:30:50 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:30:50.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:30:50 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:30:50 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:30:50 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:30:50.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:30:50 compute-1 ceph-mon[81775]: pgmap v3259: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Jan 20 15:30:50 compute-1 nova_compute[225855]: 2026-01-20 15:30:50.869 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:30:50 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:30:50.870 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=76, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '12:bb:42', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '06:92:24:f7:15:56'}, ipsec=False) old=SB_Global(nb_cfg=75) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:30:50 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:30:50.871 140354 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 20 15:30:50 compute-1 nova_compute[225855]: 2026-01-20 15:30:50.908 225859 DEBUG nova.compute.manager [req-4cfb5cd0-6dd6-4e37-a494-d2a93c433147 req-b28bd661-8e4c-422b-88b3-3d6303a03183 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: c2074d47-58a3-49e8-82fd-6bc6145a1ea7] Received event network-changed-60202d18-26b2-493b-a427-211cda112a80 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:30:50 compute-1 nova_compute[225855]: 2026-01-20 15:30:50.908 225859 DEBUG nova.compute.manager [req-4cfb5cd0-6dd6-4e37-a494-d2a93c433147 req-b28bd661-8e4c-422b-88b3-3d6303a03183 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: c2074d47-58a3-49e8-82fd-6bc6145a1ea7] Refreshing instance network info cache due to event network-changed-60202d18-26b2-493b-a427-211cda112a80. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 15:30:50 compute-1 nova_compute[225855]: 2026-01-20 15:30:50.908 225859 DEBUG oslo_concurrency.lockutils [req-4cfb5cd0-6dd6-4e37-a494-d2a93c433147 req-b28bd661-8e4c-422b-88b3-3d6303a03183 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-c2074d47-58a3-49e8-82fd-6bc6145a1ea7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:30:50 compute-1 nova_compute[225855]: 2026-01-20 15:30:50.908 225859 DEBUG oslo_concurrency.lockutils [req-4cfb5cd0-6dd6-4e37-a494-d2a93c433147 req-b28bd661-8e4c-422b-88b3-3d6303a03183 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-c2074d47-58a3-49e8-82fd-6bc6145a1ea7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:30:50 compute-1 nova_compute[225855]: 2026-01-20 15:30:50.909 225859 DEBUG nova.network.neutron [req-4cfb5cd0-6dd6-4e37-a494-d2a93c433147 req-b28bd661-8e4c-422b-88b3-3d6303a03183 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: c2074d47-58a3-49e8-82fd-6bc6145a1ea7] Refreshing network info cache for port 60202d18-26b2-493b-a427-211cda112a80 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 15:30:51 compute-1 podman[320274]: 2026-01-20 15:30:51.002671385 +0000 UTC m=+0.045250053 container health_status 533cd8e0c3b8b7f910f15cef49e7f879ab46c12ce646a35628c15f13725eb500 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 20 15:30:51 compute-1 nova_compute[225855]: 2026-01-20 15:30:51.784 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:30:52 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:30:52 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:30:52 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:30:52.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:30:52 compute-1 nova_compute[225855]: 2026-01-20 15:30:52.579 225859 DEBUG nova.network.neutron [req-4cfb5cd0-6dd6-4e37-a494-d2a93c433147 req-b28bd661-8e4c-422b-88b3-3d6303a03183 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: c2074d47-58a3-49e8-82fd-6bc6145a1ea7] Updated VIF entry in instance network info cache for port 60202d18-26b2-493b-a427-211cda112a80. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 15:30:52 compute-1 nova_compute[225855]: 2026-01-20 15:30:52.579 225859 DEBUG nova.network.neutron [req-4cfb5cd0-6dd6-4e37-a494-d2a93c433147 req-b28bd661-8e4c-422b-88b3-3d6303a03183 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: c2074d47-58a3-49e8-82fd-6bc6145a1ea7] Updating instance_info_cache with network_info: [{"id": "60202d18-26b2-493b-a427-211cda112a80", "address": "fa:16:3e:36:f9:3a", "network": {"id": "0228362f-0ced-4cac-bb89-96bd472df47f", "bridge": "br-int", "label": "tempest-network-smoke--197661202", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3168f57421fb49bfb94b85daedd1fe7d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap60202d18-26", "ovs_interfaceid": "60202d18-26b2-493b-a427-211cda112a80", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:30:52 compute-1 nova_compute[225855]: 2026-01-20 15:30:52.603 225859 DEBUG oslo_concurrency.lockutils [req-4cfb5cd0-6dd6-4e37-a494-d2a93c433147 req-b28bd661-8e4c-422b-88b3-3d6303a03183 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-c2074d47-58a3-49e8-82fd-6bc6145a1ea7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:30:52 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:30:52 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:30:52 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:30:52.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:30:52 compute-1 ceph-mon[81775]: pgmap v3260: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 331 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Jan 20 15:30:53 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:30:53 compute-1 nova_compute[225855]: 2026-01-20 15:30:53.991 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:30:54 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:30:54 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:30:54 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:30:54.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:30:54 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:30:54 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:30:54 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:30:54.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:30:54 compute-1 ceph-mon[81775]: pgmap v3261: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 329 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Jan 20 15:30:56 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:30:56 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:30:56 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:30:56.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:30:56 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:30:56 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:30:56 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:30:56.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:30:56 compute-1 nova_compute[225855]: 2026-01-20 15:30:56.787 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:30:56 compute-1 ceph-mon[81775]: pgmap v3262: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 329 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Jan 20 15:30:57 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:30:57.873 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5ffd4ac3-9266-4927-98ad-20a17782c725, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '76'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:30:58 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:30:58 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:30:58 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:30:58 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:30:58.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:30:58 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:30:58 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:30:58 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:30:58.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:30:58 compute-1 ceph-mon[81775]: pgmap v3263: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 85 KiB/s rd, 776 KiB/s wr, 14 op/s
Jan 20 15:30:58 compute-1 nova_compute[225855]: 2026-01-20 15:30:58.992 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:31:00 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:31:00 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:31:00 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:31:00.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:31:00 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:31:00 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:31:00 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:31:00.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:31:00 compute-1 ceph-mon[81775]: pgmap v3264: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 5.7 KiB/s rd, 13 KiB/s wr, 0 op/s
Jan 20 15:31:00 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/450668054' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:31:01 compute-1 nova_compute[225855]: 2026-01-20 15:31:01.789 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:31:01 compute-1 ceph-mon[81775]: pgmap v3265: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 5.7 KiB/s rd, 16 KiB/s wr, 1 op/s
Jan 20 15:31:02 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:31:02 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:31:02 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:31:02.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:31:02 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:31:02 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:31:02 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:31:02.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:31:03 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:31:03 compute-1 nova_compute[225855]: 2026-01-20 15:31:03.994 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:31:04 compute-1 ceph-mon[81775]: pgmap v3266: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 3.3 KiB/s wr, 0 op/s
Jan 20 15:31:04 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:31:04 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:31:04 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:31:04.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:31:04 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:31:04 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:31:04 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:31:04.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:31:06 compute-1 ceph-mon[81775]: pgmap v3267: 321 pgs: 321 active+clean; 228 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 992 KiB/s wr, 25 op/s
Jan 20 15:31:06 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:31:06 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:31:06 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:31:06.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:31:06 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:31:06 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:31:06 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:31:06.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:31:06 compute-1 nova_compute[225855]: 2026-01-20 15:31:06.791 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:31:07 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/3621508670' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:31:07 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/393004237' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:31:07 compute-1 sudo[320303]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:31:07 compute-1 sudo[320303]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:31:07 compute-1 sudo[320303]: pam_unix(sudo:session): session closed for user root
Jan 20 15:31:08 compute-1 sudo[320328]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:31:08 compute-1 sudo[320328]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:31:08 compute-1 sudo[320328]: pam_unix(sudo:session): session closed for user root
Jan 20 15:31:08 compute-1 ceph-mon[81775]: pgmap v3268: 321 pgs: 321 active+clean; 246 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 20 15:31:08 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:31:08 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:31:08 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:31:08 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:31:08.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:31:08 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:31:08 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:31:08 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:31:08.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:31:08 compute-1 nova_compute[225855]: 2026-01-20 15:31:08.996 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:31:10 compute-1 ceph-mon[81775]: pgmap v3269: 321 pgs: 321 active+clean; 246 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 23 KiB/s rd, 1.8 MiB/s wr, 35 op/s
Jan 20 15:31:10 compute-1 nova_compute[225855]: 2026-01-20 15:31:10.340 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:31:10 compute-1 nova_compute[225855]: 2026-01-20 15:31:10.340 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 15:31:10 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:31:10 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:31:10 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:31:10.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:31:10 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:31:10 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:31:10 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:31:10.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:31:11 compute-1 podman[320354]: 2026-01-20 15:31:11.090431942 +0000 UTC m=+0.105734320 container health_status 72be95c12e041eb2cd1a16ad70dfa1391d9f03a5f3180a614ead9459dcfcd46f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Jan 20 15:31:11 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/165161131' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:31:11 compute-1 nova_compute[225855]: 2026-01-20 15:31:11.793 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:31:12 compute-1 ceph-mon[81775]: pgmap v3270: 321 pgs: 321 active+clean; 246 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.4 MiB/s rd, 1.8 MiB/s wr, 82 op/s
Jan 20 15:31:12 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/3566983214' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:31:12 compute-1 nova_compute[225855]: 2026-01-20 15:31:12.340 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:31:12 compute-1 nova_compute[225855]: 2026-01-20 15:31:12.368 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:31:12 compute-1 nova_compute[225855]: 2026-01-20 15:31:12.368 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:31:12 compute-1 nova_compute[225855]: 2026-01-20 15:31:12.368 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:31:12 compute-1 nova_compute[225855]: 2026-01-20 15:31:12.369 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 15:31:12 compute-1 nova_compute[225855]: 2026-01-20 15:31:12.369 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:31:12 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:31:12 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:31:12 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:31:12.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:31:12 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:31:12 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:31:12 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:31:12.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:31:12 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:31:12 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/46401669' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:31:12 compute-1 nova_compute[225855]: 2026-01-20 15:31:12.809 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:31:12 compute-1 nova_compute[225855]: 2026-01-20 15:31:12.885 225859 DEBUG nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] skipping disk for instance-000000ce as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 15:31:12 compute-1 nova_compute[225855]: 2026-01-20 15:31:12.886 225859 DEBUG nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] skipping disk for instance-000000ce as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 15:31:13 compute-1 nova_compute[225855]: 2026-01-20 15:31:13.023 225859 WARNING nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 15:31:13 compute-1 nova_compute[225855]: 2026-01-20 15:31:13.024 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4034MB free_disk=20.921817779541016GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 15:31:13 compute-1 nova_compute[225855]: 2026-01-20 15:31:13.025 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:31:13 compute-1 nova_compute[225855]: 2026-01-20 15:31:13.025 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:31:13 compute-1 nova_compute[225855]: 2026-01-20 15:31:13.103 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Instance c2074d47-58a3-49e8-82fd-6bc6145a1ea7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 20 15:31:13 compute-1 nova_compute[225855]: 2026-01-20 15:31:13.103 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 15:31:13 compute-1 nova_compute[225855]: 2026-01-20 15:31:13.104 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 15:31:13 compute-1 nova_compute[225855]: 2026-01-20 15:31:13.155 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:31:13 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/46401669' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:31:13 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:31:13 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:31:13 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3338679589' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:31:13 compute-1 nova_compute[225855]: 2026-01-20 15:31:13.589 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:31:13 compute-1 nova_compute[225855]: 2026-01-20 15:31:13.596 225859 DEBUG nova.compute.provider_tree [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:31:13 compute-1 nova_compute[225855]: 2026-01-20 15:31:13.611 225859 DEBUG nova.scheduler.client.report [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:31:13 compute-1 nova_compute[225855]: 2026-01-20 15:31:13.633 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 15:31:13 compute-1 nova_compute[225855]: 2026-01-20 15:31:13.634 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.609s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:31:14 compute-1 nova_compute[225855]: 2026-01-20 15:31:13.999 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:31:14 compute-1 ceph-mon[81775]: pgmap v3271: 321 pgs: 321 active+clean; 246 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.4 MiB/s rd, 1.8 MiB/s wr, 82 op/s
Jan 20 15:31:14 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/3338679589' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:31:14 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/1528520629' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:31:14 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/1528520629' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:31:14 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:31:14 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:31:14 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:31:14.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:31:14 compute-1 nova_compute[225855]: 2026-01-20 15:31:14.634 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:31:14 compute-1 nova_compute[225855]: 2026-01-20 15:31:14.634 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 15:31:14 compute-1 nova_compute[225855]: 2026-01-20 15:31:14.635 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 20 15:31:14 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:31:14 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:31:14 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:31:14.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:31:14 compute-1 nova_compute[225855]: 2026-01-20 15:31:14.822 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "refresh_cache-c2074d47-58a3-49e8-82fd-6bc6145a1ea7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:31:14 compute-1 nova_compute[225855]: 2026-01-20 15:31:14.823 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquired lock "refresh_cache-c2074d47-58a3-49e8-82fd-6bc6145a1ea7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:31:14 compute-1 nova_compute[225855]: 2026-01-20 15:31:14.823 225859 DEBUG nova.network.neutron [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] [instance: c2074d47-58a3-49e8-82fd-6bc6145a1ea7] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 20 15:31:14 compute-1 nova_compute[225855]: 2026-01-20 15:31:14.824 225859 DEBUG nova.objects.instance [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lazy-loading 'info_cache' on Instance uuid c2074d47-58a3-49e8-82fd-6bc6145a1ea7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:31:15 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:31:15 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2264811836' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:31:16 compute-1 ceph-mon[81775]: pgmap v3272: 321 pgs: 321 active+clean; 246 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Jan 20 15:31:16 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/2264811836' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:31:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:31:16.454 140354 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:31:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:31:16.454 140354 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:31:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:31:16.455 140354 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:31:16 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:31:16 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:31:16 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:31:16.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:31:16 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:31:16 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:31:16 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:31:16.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:31:16 compute-1 nova_compute[225855]: 2026-01-20 15:31:16.795 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:31:16 compute-1 nova_compute[225855]: 2026-01-20 15:31:16.971 225859 DEBUG nova.network.neutron [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] [instance: c2074d47-58a3-49e8-82fd-6bc6145a1ea7] Updating instance_info_cache with network_info: [{"id": "60202d18-26b2-493b-a427-211cda112a80", "address": "fa:16:3e:36:f9:3a", "network": {"id": "0228362f-0ced-4cac-bb89-96bd472df47f", "bridge": "br-int", "label": "tempest-network-smoke--197661202", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3168f57421fb49bfb94b85daedd1fe7d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap60202d18-26", "ovs_interfaceid": "60202d18-26b2-493b-a427-211cda112a80", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:31:16 compute-1 nova_compute[225855]: 2026-01-20 15:31:16.984 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Releasing lock "refresh_cache-c2074d47-58a3-49e8-82fd-6bc6145a1ea7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:31:16 compute-1 nova_compute[225855]: 2026-01-20 15:31:16.984 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] [instance: c2074d47-58a3-49e8-82fd-6bc6145a1ea7] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 20 15:31:16 compute-1 nova_compute[225855]: 2026-01-20 15:31:16.985 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:31:16 compute-1 nova_compute[225855]: 2026-01-20 15:31:16.985 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:31:17 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/1451184599' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:31:17 compute-1 nova_compute[225855]: 2026-01-20 15:31:17.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:31:17 compute-1 nova_compute[225855]: 2026-01-20 15:31:17.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:31:17 compute-1 nova_compute[225855]: 2026-01-20 15:31:17.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:31:18 compute-1 ceph-mon[81775]: pgmap v3273: 321 pgs: 321 active+clean; 246 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 841 KiB/s wr, 75 op/s
Jan 20 15:31:18 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:31:18 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:31:18 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:31:18 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:31:18.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:31:18 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:31:18 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:31:18 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:31:18.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:31:19 compute-1 nova_compute[225855]: 2026-01-20 15:31:19.001 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:31:20 compute-1 ceph-mon[81775]: pgmap v3274: 321 pgs: 321 active+clean; 246 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 74 op/s
Jan 20 15:31:20 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:31:20 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:31:20 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:31:20.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:31:20 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:31:20 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:31:20 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:31:20.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:31:21 compute-1 nova_compute[225855]: 2026-01-20 15:31:21.797 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:31:22 compute-1 podman[320432]: 2026-01-20 15:31:22.016796882 +0000 UTC m=+0.059242902 container health_status 533cd8e0c3b8b7f910f15cef49e7f879ab46c12ce646a35628c15f13725eb500 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 20 15:31:22 compute-1 ceph-mon[81775]: pgmap v3275: 321 pgs: 321 active+clean; 246 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 2.5 KiB/s wr, 65 op/s
Jan 20 15:31:22 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:31:22 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:31:22 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:31:22.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:31:22 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:31:22 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:31:22 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:31:22.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:31:23 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:31:24 compute-1 nova_compute[225855]: 2026-01-20 15:31:24.005 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:31:24 compute-1 nova_compute[225855]: 2026-01-20 15:31:24.335 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:31:24 compute-1 ceph-mon[81775]: pgmap v3276: 321 pgs: 321 active+clean; 246 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 576 KiB/s rd, 2.3 KiB/s wr, 18 op/s
Jan 20 15:31:24 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:31:24 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:31:24 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:31:24.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:31:24 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:31:24 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:31:24 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:31:24.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:31:26 compute-1 ceph-mon[81775]: pgmap v3277: 321 pgs: 321 active+clean; 269 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 687 KiB/s rd, 1.3 MiB/s wr, 57 op/s
Jan 20 15:31:26 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:31:26 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:31:26 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:31:26.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:31:26 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:31:26 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:31:26 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:31:26.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:31:26 compute-1 nova_compute[225855]: 2026-01-20 15:31:26.800 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:31:28 compute-1 sudo[320454]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:31:28 compute-1 sudo[320454]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:31:28 compute-1 sudo[320454]: pam_unix(sudo:session): session closed for user root
Jan 20 15:31:28 compute-1 sudo[320479]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:31:28 compute-1 sudo[320479]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:31:28 compute-1 sudo[320479]: pam_unix(sudo:session): session closed for user root
Jan 20 15:31:28 compute-1 ceph-mon[81775]: pgmap v3278: 321 pgs: 321 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 323 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Jan 20 15:31:28 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:31:28 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:31:28 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:31:28 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:31:28.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:31:28 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:31:28 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:31:28 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:31:28.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:31:29 compute-1 nova_compute[225855]: 2026-01-20 15:31:29.007 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:31:30 compute-1 ceph-mon[81775]: pgmap v3279: 321 pgs: 321 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 323 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Jan 20 15:31:30 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:31:30 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:31:30 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:31:30.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:31:30 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:31:30 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:31:30 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:31:30.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:31:31 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/4116014989' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:31:31 compute-1 nova_compute[225855]: 2026-01-20 15:31:31.802 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:31:32 compute-1 ceph-mon[81775]: pgmap v3280: 321 pgs: 321 active+clean; 215 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 328 KiB/s rd, 2.1 MiB/s wr, 72 op/s
Jan 20 15:31:32 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:31:32 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:31:32 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:31:32.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:31:32 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:31:32 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:31:32 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:31:32.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:31:33 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:31:33 compute-1 nova_compute[225855]: 2026-01-20 15:31:33.841 225859 DEBUG oslo_concurrency.lockutils [None req-97a779a1-cb81-4dd3-846a-094c02b4513d 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Acquiring lock "c2074d47-58a3-49e8-82fd-6bc6145a1ea7" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:31:33 compute-1 nova_compute[225855]: 2026-01-20 15:31:33.841 225859 DEBUG oslo_concurrency.lockutils [None req-97a779a1-cb81-4dd3-846a-094c02b4513d 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Lock "c2074d47-58a3-49e8-82fd-6bc6145a1ea7" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:31:33 compute-1 nova_compute[225855]: 2026-01-20 15:31:33.841 225859 DEBUG oslo_concurrency.lockutils [None req-97a779a1-cb81-4dd3-846a-094c02b4513d 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Acquiring lock "c2074d47-58a3-49e8-82fd-6bc6145a1ea7-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:31:33 compute-1 nova_compute[225855]: 2026-01-20 15:31:33.842 225859 DEBUG oslo_concurrency.lockutils [None req-97a779a1-cb81-4dd3-846a-094c02b4513d 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Lock "c2074d47-58a3-49e8-82fd-6bc6145a1ea7-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:31:33 compute-1 nova_compute[225855]: 2026-01-20 15:31:33.842 225859 DEBUG oslo_concurrency.lockutils [None req-97a779a1-cb81-4dd3-846a-094c02b4513d 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Lock "c2074d47-58a3-49e8-82fd-6bc6145a1ea7-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:31:33 compute-1 nova_compute[225855]: 2026-01-20 15:31:33.843 225859 INFO nova.compute.manager [None req-97a779a1-cb81-4dd3-846a-094c02b4513d 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: c2074d47-58a3-49e8-82fd-6bc6145a1ea7] Terminating instance
Jan 20 15:31:33 compute-1 nova_compute[225855]: 2026-01-20 15:31:33.844 225859 DEBUG nova.compute.manager [None req-97a779a1-cb81-4dd3-846a-094c02b4513d 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: c2074d47-58a3-49e8-82fd-6bc6145a1ea7] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 20 15:31:33 compute-1 kernel: tap60202d18-26 (unregistering): left promiscuous mode
Jan 20 15:31:33 compute-1 NetworkManager[49104]: <info>  [1768923093.9103] device (tap60202d18-26): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 20 15:31:33 compute-1 ovn_controller[130490]: 2026-01-20T15:31:33Z|00959|binding|INFO|Releasing lport 60202d18-26b2-493b-a427-211cda112a80 from this chassis (sb_readonly=0)
Jan 20 15:31:33 compute-1 ovn_controller[130490]: 2026-01-20T15:31:33Z|00960|binding|INFO|Setting lport 60202d18-26b2-493b-a427-211cda112a80 down in Southbound
Jan 20 15:31:33 compute-1 nova_compute[225855]: 2026-01-20 15:31:33.917 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:31:33 compute-1 ovn_controller[130490]: 2026-01-20T15:31:33Z|00961|binding|INFO|Removing iface tap60202d18-26 ovn-installed in OVS
Jan 20 15:31:33 compute-1 nova_compute[225855]: 2026-01-20 15:31:33.920 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:31:33 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:31:33.933 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:36:f9:3a 10.100.0.11'], port_security=['fa:16:3e:36:f9:3a 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'c2074d47-58a3-49e8-82fd-6bc6145a1ea7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0228362f-0ced-4cac-bb89-96bd472df47f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3168f57421fb49bfb94b85daedd1fe7d', 'neutron:revision_number': '4', 'neutron:security_group_ids': '6458a221-63f1-42cc-b15d-f9334e60cb66', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=306abd7d-c001-4e00-b2a1-8a251fd6a022, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], logical_port=60202d18-26b2-493b-a427-211cda112a80) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:31:33 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:31:33.935 140354 INFO neutron.agent.ovn.metadata.agent [-] Port 60202d18-26b2-493b-a427-211cda112a80 in datapath 0228362f-0ced-4cac-bb89-96bd472df47f unbound from our chassis
Jan 20 15:31:33 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:31:33.936 140354 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0228362f-0ced-4cac-bb89-96bd472df47f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 20 15:31:33 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:31:33.938 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[51c1519d-13d9-4dda-b968-80b1f1a9981c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:31:33 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:31:33.938 140354 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-0228362f-0ced-4cac-bb89-96bd472df47f namespace which is not needed anymore
Jan 20 15:31:33 compute-1 nova_compute[225855]: 2026-01-20 15:31:33.940 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:31:33 compute-1 systemd[1]: machine-qemu\x2d110\x2dinstance\x2d000000ce.scope: Deactivated successfully.
Jan 20 15:31:33 compute-1 systemd[1]: machine-qemu\x2d110\x2dinstance\x2d000000ce.scope: Consumed 15.218s CPU time.
Jan 20 15:31:33 compute-1 systemd-machined[194361]: Machine qemu-110-instance-000000ce terminated.
Jan 20 15:31:34 compute-1 nova_compute[225855]: 2026-01-20 15:31:34.008 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:31:34 compute-1 neutron-haproxy-ovnmeta-0228362f-0ced-4cac-bb89-96bd472df47f[320122]: [NOTICE]   (320126) : haproxy version is 2.8.14-c23fe91
Jan 20 15:31:34 compute-1 neutron-haproxy-ovnmeta-0228362f-0ced-4cac-bb89-96bd472df47f[320122]: [NOTICE]   (320126) : path to executable is /usr/sbin/haproxy
Jan 20 15:31:34 compute-1 neutron-haproxy-ovnmeta-0228362f-0ced-4cac-bb89-96bd472df47f[320122]: [WARNING]  (320126) : Exiting Master process...
Jan 20 15:31:34 compute-1 neutron-haproxy-ovnmeta-0228362f-0ced-4cac-bb89-96bd472df47f[320122]: [ALERT]    (320126) : Current worker (320128) exited with code 143 (Terminated)
Jan 20 15:31:34 compute-1 neutron-haproxy-ovnmeta-0228362f-0ced-4cac-bb89-96bd472df47f[320122]: [WARNING]  (320126) : All workers exited. Exiting... (0)
Jan 20 15:31:34 compute-1 systemd[1]: libpod-220fb88226bb4adc08a6d6e8c007dceb68fb2fe39546e4b37d2a422566ff7b43.scope: Deactivated successfully.
Jan 20 15:31:34 compute-1 podman[320532]: 2026-01-20 15:31:34.05796173 +0000 UTC m=+0.042626288 container died 220fb88226bb4adc08a6d6e8c007dceb68fb2fe39546e4b37d2a422566ff7b43 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0228362f-0ced-4cac-bb89-96bd472df47f, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 20 15:31:34 compute-1 nova_compute[225855]: 2026-01-20 15:31:34.062 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:31:34 compute-1 nova_compute[225855]: 2026-01-20 15:31:34.069 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:31:34 compute-1 nova_compute[225855]: 2026-01-20 15:31:34.076 225859 INFO nova.virt.libvirt.driver [-] [instance: c2074d47-58a3-49e8-82fd-6bc6145a1ea7] Instance destroyed successfully.
Jan 20 15:31:34 compute-1 nova_compute[225855]: 2026-01-20 15:31:34.076 225859 DEBUG nova.objects.instance [None req-97a779a1-cb81-4dd3-846a-094c02b4513d 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Lazy-loading 'resources' on Instance uuid c2074d47-58a3-49e8-82fd-6bc6145a1ea7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:31:34 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-220fb88226bb4adc08a6d6e8c007dceb68fb2fe39546e4b37d2a422566ff7b43-userdata-shm.mount: Deactivated successfully.
Jan 20 15:31:34 compute-1 systemd[1]: var-lib-containers-storage-overlay-5492aad4f22230a780618b9ba63a9c548a97b7ff7061366af2c207314fa31229-merged.mount: Deactivated successfully.
Jan 20 15:31:34 compute-1 podman[320532]: 2026-01-20 15:31:34.096705376 +0000 UTC m=+0.081369924 container cleanup 220fb88226bb4adc08a6d6e8c007dceb68fb2fe39546e4b37d2a422566ff7b43 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0228362f-0ced-4cac-bb89-96bd472df47f, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Jan 20 15:31:34 compute-1 nova_compute[225855]: 2026-01-20 15:31:34.099 225859 DEBUG nova.virt.libvirt.vif [None req-97a779a1-cb81-4dd3-846a-094c02b4513d 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T15:30:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-761395495',display_name='tempest-TestNetworkBasicOps-server-761395495',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-761395495',id=206,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBA6PcyU5b6KJgECZvP75RVISUjV8spB81h3nAjsUONZi4KISBeJ3H+m9LFQCp72IhdPL4TNE6iitZI83oIzTSr0WLM1hF9NfU7ED77LiXjCqrZKn4HPslanwlp/Qjc+bCQ==',key_name='tempest-TestNetworkBasicOps-1783413232',keypairs=<?>,launch_index=0,launched_at=2026-01-20T15:30:29Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3168f57421fb49bfb94b85daedd1fe7d',ramdisk_id='',reservation_id='r-mwqa18l0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-807695970',owner_user_name='tempest-TestNetworkBasicOps-807695970-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T15:30:29Z,user_data=None,user_id='5338aa65dc0e4326a66ce79053787f14',uuid=c2074d47-58a3-49e8-82fd-6bc6145a1ea7,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "60202d18-26b2-493b-a427-211cda112a80", "address": "fa:16:3e:36:f9:3a", "network": {"id": "0228362f-0ced-4cac-bb89-96bd472df47f", "bridge": "br-int", "label": "tempest-network-smoke--197661202", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3168f57421fb49bfb94b85daedd1fe7d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap60202d18-26", "ovs_interfaceid": "60202d18-26b2-493b-a427-211cda112a80", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 20 15:31:34 compute-1 nova_compute[225855]: 2026-01-20 15:31:34.099 225859 DEBUG nova.network.os_vif_util [None req-97a779a1-cb81-4dd3-846a-094c02b4513d 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Converting VIF {"id": "60202d18-26b2-493b-a427-211cda112a80", "address": "fa:16:3e:36:f9:3a", "network": {"id": "0228362f-0ced-4cac-bb89-96bd472df47f", "bridge": "br-int", "label": "tempest-network-smoke--197661202", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3168f57421fb49bfb94b85daedd1fe7d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap60202d18-26", "ovs_interfaceid": "60202d18-26b2-493b-a427-211cda112a80", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 15:31:34 compute-1 nova_compute[225855]: 2026-01-20 15:31:34.100 225859 DEBUG nova.network.os_vif_util [None req-97a779a1-cb81-4dd3-846a-094c02b4513d 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:36:f9:3a,bridge_name='br-int',has_traffic_filtering=True,id=60202d18-26b2-493b-a427-211cda112a80,network=Network(0228362f-0ced-4cac-bb89-96bd472df47f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap60202d18-26') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 15:31:34 compute-1 nova_compute[225855]: 2026-01-20 15:31:34.101 225859 DEBUG os_vif [None req-97a779a1-cb81-4dd3-846a-094c02b4513d 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:36:f9:3a,bridge_name='br-int',has_traffic_filtering=True,id=60202d18-26b2-493b-a427-211cda112a80,network=Network(0228362f-0ced-4cac-bb89-96bd472df47f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap60202d18-26') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 20 15:31:34 compute-1 nova_compute[225855]: 2026-01-20 15:31:34.103 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:31:34 compute-1 nova_compute[225855]: 2026-01-20 15:31:34.103 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap60202d18-26, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:31:34 compute-1 systemd[1]: libpod-conmon-220fb88226bb4adc08a6d6e8c007dceb68fb2fe39546e4b37d2a422566ff7b43.scope: Deactivated successfully.
Jan 20 15:31:34 compute-1 nova_compute[225855]: 2026-01-20 15:31:34.105 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:31:34 compute-1 nova_compute[225855]: 2026-01-20 15:31:34.106 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:31:34 compute-1 nova_compute[225855]: 2026-01-20 15:31:34.109 225859 INFO os_vif [None req-97a779a1-cb81-4dd3-846a-094c02b4513d 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:36:f9:3a,bridge_name='br-int',has_traffic_filtering=True,id=60202d18-26b2-493b-a427-211cda112a80,network=Network(0228362f-0ced-4cac-bb89-96bd472df47f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap60202d18-26')
Jan 20 15:31:34 compute-1 podman[320573]: 2026-01-20 15:31:34.1542926 +0000 UTC m=+0.036551824 container remove 220fb88226bb4adc08a6d6e8c007dceb68fb2fe39546e4b37d2a422566ff7b43 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0228362f-0ced-4cac-bb89-96bd472df47f, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 20 15:31:34 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:31:34.158 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[36a4dc8d-bbc2-4e25-b931-a4f043eafbd1]: (4, ('Tue Jan 20 03:31:34 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-0228362f-0ced-4cac-bb89-96bd472df47f (220fb88226bb4adc08a6d6e8c007dceb68fb2fe39546e4b37d2a422566ff7b43)\n220fb88226bb4adc08a6d6e8c007dceb68fb2fe39546e4b37d2a422566ff7b43\nTue Jan 20 03:31:34 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-0228362f-0ced-4cac-bb89-96bd472df47f (220fb88226bb4adc08a6d6e8c007dceb68fb2fe39546e4b37d2a422566ff7b43)\n220fb88226bb4adc08a6d6e8c007dceb68fb2fe39546e4b37d2a422566ff7b43\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:31:34 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:31:34.160 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[0d80f30e-aa83-442a-af55-6d0ae3291714]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:31:34 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:31:34.161 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0228362f-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:31:34 compute-1 nova_compute[225855]: 2026-01-20 15:31:34.162 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:31:34 compute-1 kernel: tap0228362f-00: left promiscuous mode
Jan 20 15:31:34 compute-1 nova_compute[225855]: 2026-01-20 15:31:34.174 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:31:34 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:31:34.176 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[5e67beb3-26a6-4229-9222-369609682e27]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:31:34 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:31:34.188 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[fcbfd4fd-c729-424c-a0a1-7fc5d42be9f3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:31:34 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:31:34.189 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[98e0856f-ca3d-411a-895e-dc88459ccb57]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:31:34 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:31:34.205 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[67abcb9b-ac55-4ad9-9ba5-9bf789037656]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 806792, 'reachable_time': 36443, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 320606, 'error': None, 'target': 'ovnmeta-0228362f-0ced-4cac-bb89-96bd472df47f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:31:34 compute-1 systemd[1]: run-netns-ovnmeta\x2d0228362f\x2d0ced\x2d4cac\x2dbb89\x2d96bd472df47f.mount: Deactivated successfully.
Jan 20 15:31:34 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:31:34.209 140466 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-0228362f-0ced-4cac-bb89-96bd472df47f deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 20 15:31:34 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:31:34.209 140466 DEBUG oslo.privsep.daemon [-] privsep: reply[08532f09-7406-4251-a509-7af40de05aba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:31:34 compute-1 nova_compute[225855]: 2026-01-20 15:31:34.470 225859 INFO nova.virt.libvirt.driver [None req-97a779a1-cb81-4dd3-846a-094c02b4513d 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: c2074d47-58a3-49e8-82fd-6bc6145a1ea7] Deleting instance files /var/lib/nova/instances/c2074d47-58a3-49e8-82fd-6bc6145a1ea7_del
Jan 20 15:31:34 compute-1 nova_compute[225855]: 2026-01-20 15:31:34.471 225859 INFO nova.virt.libvirt.driver [None req-97a779a1-cb81-4dd3-846a-094c02b4513d 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: c2074d47-58a3-49e8-82fd-6bc6145a1ea7] Deletion of /var/lib/nova/instances/c2074d47-58a3-49e8-82fd-6bc6145a1ea7_del complete
Jan 20 15:31:34 compute-1 ceph-mon[81775]: pgmap v3281: 321 pgs: 321 active+clean; 215 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 331 KiB/s rd, 2.2 MiB/s wr, 73 op/s
Jan 20 15:31:34 compute-1 nova_compute[225855]: 2026-01-20 15:31:34.560 225859 INFO nova.compute.manager [None req-97a779a1-cb81-4dd3-846a-094c02b4513d 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: c2074d47-58a3-49e8-82fd-6bc6145a1ea7] Took 0.72 seconds to destroy the instance on the hypervisor.
Jan 20 15:31:34 compute-1 nova_compute[225855]: 2026-01-20 15:31:34.561 225859 DEBUG oslo.service.loopingcall [None req-97a779a1-cb81-4dd3-846a-094c02b4513d 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 20 15:31:34 compute-1 nova_compute[225855]: 2026-01-20 15:31:34.561 225859 DEBUG nova.compute.manager [-] [instance: c2074d47-58a3-49e8-82fd-6bc6145a1ea7] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 20 15:31:34 compute-1 nova_compute[225855]: 2026-01-20 15:31:34.561 225859 DEBUG nova.network.neutron [-] [instance: c2074d47-58a3-49e8-82fd-6bc6145a1ea7] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 20 15:31:34 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:31:34 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:31:34 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:31:34.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:31:34 compute-1 nova_compute[225855]: 2026-01-20 15:31:34.643 225859 DEBUG nova.compute.manager [req-6aa51df5-ffdf-436b-b095-c10f9d525a39 req-e33871ab-e95e-4e57-b70c-f4225524f920 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: c2074d47-58a3-49e8-82fd-6bc6145a1ea7] Received event network-vif-unplugged-60202d18-26b2-493b-a427-211cda112a80 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:31:34 compute-1 nova_compute[225855]: 2026-01-20 15:31:34.644 225859 DEBUG oslo_concurrency.lockutils [req-6aa51df5-ffdf-436b-b095-c10f9d525a39 req-e33871ab-e95e-4e57-b70c-f4225524f920 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "c2074d47-58a3-49e8-82fd-6bc6145a1ea7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:31:34 compute-1 nova_compute[225855]: 2026-01-20 15:31:34.644 225859 DEBUG oslo_concurrency.lockutils [req-6aa51df5-ffdf-436b-b095-c10f9d525a39 req-e33871ab-e95e-4e57-b70c-f4225524f920 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "c2074d47-58a3-49e8-82fd-6bc6145a1ea7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:31:34 compute-1 nova_compute[225855]: 2026-01-20 15:31:34.644 225859 DEBUG oslo_concurrency.lockutils [req-6aa51df5-ffdf-436b-b095-c10f9d525a39 req-e33871ab-e95e-4e57-b70c-f4225524f920 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "c2074d47-58a3-49e8-82fd-6bc6145a1ea7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:31:34 compute-1 nova_compute[225855]: 2026-01-20 15:31:34.645 225859 DEBUG nova.compute.manager [req-6aa51df5-ffdf-436b-b095-c10f9d525a39 req-e33871ab-e95e-4e57-b70c-f4225524f920 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: c2074d47-58a3-49e8-82fd-6bc6145a1ea7] No waiting events found dispatching network-vif-unplugged-60202d18-26b2-493b-a427-211cda112a80 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:31:34 compute-1 nova_compute[225855]: 2026-01-20 15:31:34.645 225859 DEBUG nova.compute.manager [req-6aa51df5-ffdf-436b-b095-c10f9d525a39 req-e33871ab-e95e-4e57-b70c-f4225524f920 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: c2074d47-58a3-49e8-82fd-6bc6145a1ea7] Received event network-vif-unplugged-60202d18-26b2-493b-a427-211cda112a80 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 20 15:31:34 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:31:34 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:31:34 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:31:34.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:31:35 compute-1 nova_compute[225855]: 2026-01-20 15:31:35.729 225859 DEBUG nova.network.neutron [-] [instance: c2074d47-58a3-49e8-82fd-6bc6145a1ea7] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:31:35 compute-1 nova_compute[225855]: 2026-01-20 15:31:35.765 225859 INFO nova.compute.manager [-] [instance: c2074d47-58a3-49e8-82fd-6bc6145a1ea7] Took 1.20 seconds to deallocate network for instance.
Jan 20 15:31:35 compute-1 nova_compute[225855]: 2026-01-20 15:31:35.830 225859 DEBUG oslo_concurrency.lockutils [None req-97a779a1-cb81-4dd3-846a-094c02b4513d 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:31:35 compute-1 nova_compute[225855]: 2026-01-20 15:31:35.831 225859 DEBUG oslo_concurrency.lockutils [None req-97a779a1-cb81-4dd3-846a-094c02b4513d 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:31:35 compute-1 nova_compute[225855]: 2026-01-20 15:31:35.848 225859 DEBUG nova.compute.manager [req-c4f4b203-3d85-4adc-9d31-425cc8af382c req-b1a0bc6e-518b-480a-a0fc-0f8e5633c5e0 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: c2074d47-58a3-49e8-82fd-6bc6145a1ea7] Received event network-vif-deleted-60202d18-26b2-493b-a427-211cda112a80 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:31:35 compute-1 nova_compute[225855]: 2026-01-20 15:31:35.888 225859 DEBUG oslo_concurrency.processutils [None req-97a779a1-cb81-4dd3-846a-094c02b4513d 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:31:36 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:31:36 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/768340541' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:31:36 compute-1 nova_compute[225855]: 2026-01-20 15:31:36.330 225859 DEBUG oslo_concurrency.processutils [None req-97a779a1-cb81-4dd3-846a-094c02b4513d 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:31:36 compute-1 nova_compute[225855]: 2026-01-20 15:31:36.336 225859 DEBUG nova.compute.provider_tree [None req-97a779a1-cb81-4dd3-846a-094c02b4513d 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:31:36 compute-1 nova_compute[225855]: 2026-01-20 15:31:36.365 225859 DEBUG nova.scheduler.client.report [None req-97a779a1-cb81-4dd3-846a-094c02b4513d 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:31:36 compute-1 nova_compute[225855]: 2026-01-20 15:31:36.392 225859 DEBUG oslo_concurrency.lockutils [None req-97a779a1-cb81-4dd3-846a-094c02b4513d 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.561s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:31:36 compute-1 nova_compute[225855]: 2026-01-20 15:31:36.426 225859 INFO nova.scheduler.client.report [None req-97a779a1-cb81-4dd3-846a-094c02b4513d 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Deleted allocations for instance c2074d47-58a3-49e8-82fd-6bc6145a1ea7
Jan 20 15:31:36 compute-1 ceph-mon[81775]: pgmap v3282: 321 pgs: 321 active+clean; 169 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 347 KiB/s rd, 2.2 MiB/s wr, 95 op/s
Jan 20 15:31:36 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/768340541' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:31:36 compute-1 nova_compute[225855]: 2026-01-20 15:31:36.528 225859 DEBUG oslo_concurrency.lockutils [None req-97a779a1-cb81-4dd3-846a-094c02b4513d 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Lock "c2074d47-58a3-49e8-82fd-6bc6145a1ea7" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.687s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:31:36 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:31:36 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:31:36 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:31:36.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:31:36 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:31:36 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:31:36 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:31:36.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:31:36 compute-1 nova_compute[225855]: 2026-01-20 15:31:36.764 225859 DEBUG nova.compute.manager [req-7068a6eb-8ffe-49e5-950a-8c2a848a2bd4 req-0d9c7789-b765-47d3-bcba-62eae64c725e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: c2074d47-58a3-49e8-82fd-6bc6145a1ea7] Received event network-vif-plugged-60202d18-26b2-493b-a427-211cda112a80 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:31:36 compute-1 nova_compute[225855]: 2026-01-20 15:31:36.765 225859 DEBUG oslo_concurrency.lockutils [req-7068a6eb-8ffe-49e5-950a-8c2a848a2bd4 req-0d9c7789-b765-47d3-bcba-62eae64c725e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "c2074d47-58a3-49e8-82fd-6bc6145a1ea7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:31:36 compute-1 nova_compute[225855]: 2026-01-20 15:31:36.765 225859 DEBUG oslo_concurrency.lockutils [req-7068a6eb-8ffe-49e5-950a-8c2a848a2bd4 req-0d9c7789-b765-47d3-bcba-62eae64c725e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "c2074d47-58a3-49e8-82fd-6bc6145a1ea7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:31:36 compute-1 nova_compute[225855]: 2026-01-20 15:31:36.765 225859 DEBUG oslo_concurrency.lockutils [req-7068a6eb-8ffe-49e5-950a-8c2a848a2bd4 req-0d9c7789-b765-47d3-bcba-62eae64c725e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "c2074d47-58a3-49e8-82fd-6bc6145a1ea7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:31:36 compute-1 nova_compute[225855]: 2026-01-20 15:31:36.765 225859 DEBUG nova.compute.manager [req-7068a6eb-8ffe-49e5-950a-8c2a848a2bd4 req-0d9c7789-b765-47d3-bcba-62eae64c725e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: c2074d47-58a3-49e8-82fd-6bc6145a1ea7] No waiting events found dispatching network-vif-plugged-60202d18-26b2-493b-a427-211cda112a80 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:31:36 compute-1 nova_compute[225855]: 2026-01-20 15:31:36.765 225859 WARNING nova.compute.manager [req-7068a6eb-8ffe-49e5-950a-8c2a848a2bd4 req-0d9c7789-b765-47d3-bcba-62eae64c725e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: c2074d47-58a3-49e8-82fd-6bc6145a1ea7] Received unexpected event network-vif-plugged-60202d18-26b2-493b-a427-211cda112a80 for instance with vm_state deleted and task_state None.
Jan 20 15:31:37 compute-1 nova_compute[225855]: 2026-01-20 15:31:37.334 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:31:38 compute-1 ceph-mon[81775]: pgmap v3283: 321 pgs: 321 active+clean; 139 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 245 KiB/s rd, 881 KiB/s wr, 69 op/s
Jan 20 15:31:38 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:31:38 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:31:38 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:31:38 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:31:38.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:31:38 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:31:38 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:31:38 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:31:38.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:31:39 compute-1 nova_compute[225855]: 2026-01-20 15:31:39.010 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:31:39 compute-1 sudo[320632]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:31:39 compute-1 sudo[320632]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:31:39 compute-1 sudo[320632]: pam_unix(sudo:session): session closed for user root
Jan 20 15:31:39 compute-1 nova_compute[225855]: 2026-01-20 15:31:39.104 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:31:39 compute-1 sudo[320657]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:31:39 compute-1 sudo[320657]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:31:39 compute-1 sudo[320657]: pam_unix(sudo:session): session closed for user root
Jan 20 15:31:39 compute-1 sudo[320682]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:31:39 compute-1 sudo[320682]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:31:39 compute-1 sudo[320682]: pam_unix(sudo:session): session closed for user root
Jan 20 15:31:39 compute-1 sudo[320707]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 20 15:31:39 compute-1 sudo[320707]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:31:39 compute-1 sudo[320707]: pam_unix(sudo:session): session closed for user root
Jan 20 15:31:39 compute-1 sshd-session[320766]: banner exchange: Connection from 3.134.148.59 port 47086: invalid format
Jan 20 15:31:40 compute-1 ceph-mon[81775]: pgmap v3284: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 38 KiB/s rd, 23 KiB/s wr, 57 op/s
Jan 20 15:31:40 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 20 15:31:40 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:31:40 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 15:31:40 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:31:40 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 15:31:40 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 15:31:40 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:31:40 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:31:40 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:31:40 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:31:40.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:31:40 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:31:40 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:31:40 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:31:40.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:31:40 compute-1 nova_compute[225855]: 2026-01-20 15:31:40.938 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:31:41 compute-1 nova_compute[225855]: 2026-01-20 15:31:41.049 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:31:41 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:31:41.687 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=77, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '12:bb:42', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '06:92:24:f7:15:56'}, ipsec=False) old=SB_Global(nb_cfg=76) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:31:41 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:31:41.688 140354 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 20 15:31:41 compute-1 nova_compute[225855]: 2026-01-20 15:31:41.746 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:31:42 compute-1 podman[320769]: 2026-01-20 15:31:42.069942663 +0000 UTC m=+0.112416131 container health_status 72be95c12e041eb2cd1a16ad70dfa1391d9f03a5f3180a614ead9459dcfcd46f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 20 15:31:42 compute-1 ceph-mon[81775]: pgmap v3285: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 38 KiB/s rd, 23 KiB/s wr, 57 op/s
Jan 20 15:31:42 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:31:42 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:31:42 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:31:42.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:31:42 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:31:42 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:31:42 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:31:42.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:31:43 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:31:44 compute-1 nova_compute[225855]: 2026-01-20 15:31:44.067 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:31:44 compute-1 nova_compute[225855]: 2026-01-20 15:31:44.105 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:31:44 compute-1 ceph-mon[81775]: pgmap v3286: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 33 KiB/s rd, 1.6 KiB/s wr, 46 op/s
Jan 20 15:31:44 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:31:44 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:31:44 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:31:44.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:31:44 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:31:44 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:31:44 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:31:44.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:31:45 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:31:45.689 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5ffd4ac3-9266-4927-98ad-20a17782c725, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '77'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:31:46 compute-1 sudo[320797]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:31:46 compute-1 sudo[320797]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:31:46 compute-1 sudo[320797]: pam_unix(sudo:session): session closed for user root
Jan 20 15:31:46 compute-1 sudo[320822]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 15:31:46 compute-1 sudo[320822]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:31:46 compute-1 sudo[320822]: pam_unix(sudo:session): session closed for user root
Jan 20 15:31:46 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:31:46 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:31:46 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:31:46.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:31:46 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:31:46 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:31:46 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:31:46.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:31:46 compute-1 ceph-mon[81775]: pgmap v3287: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 33 KiB/s rd, 1.6 KiB/s wr, 46 op/s
Jan 20 15:31:46 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:31:46 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:31:48 compute-1 ceph-mon[81775]: pgmap v3288: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 18 KiB/s rd, 1.2 KiB/s wr, 25 op/s
Jan 20 15:31:48 compute-1 sudo[320848]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:31:48 compute-1 sudo[320848]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:31:48 compute-1 sudo[320848]: pam_unix(sudo:session): session closed for user root
Jan 20 15:31:48 compute-1 sudo[320873]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:31:48 compute-1 sudo[320873]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:31:48 compute-1 sudo[320873]: pam_unix(sudo:session): session closed for user root
Jan 20 15:31:48 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:31:48 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:31:48 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:31:48 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:31:48.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:31:48 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:31:48 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:31:48 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:31:48.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:31:49 compute-1 nova_compute[225855]: 2026-01-20 15:31:49.070 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:31:49 compute-1 nova_compute[225855]: 2026-01-20 15:31:49.075 225859 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1768923094.0744, c2074d47-58a3-49e8-82fd-6bc6145a1ea7 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:31:49 compute-1 nova_compute[225855]: 2026-01-20 15:31:49.076 225859 INFO nova.compute.manager [-] [instance: c2074d47-58a3-49e8-82fd-6bc6145a1ea7] VM Stopped (Lifecycle Event)
Jan 20 15:31:49 compute-1 nova_compute[225855]: 2026-01-20 15:31:49.103 225859 DEBUG nova.compute.manager [None req-c44a54da-2c96-4712-b243-b3e332e10686 - - - - - -] [instance: c2074d47-58a3-49e8-82fd-6bc6145a1ea7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:31:49 compute-1 nova_compute[225855]: 2026-01-20 15:31:49.108 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:31:50 compute-1 ceph-mon[81775]: pgmap v3289: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 8.2 KiB/s rd, 341 B/s wr, 11 op/s
Jan 20 15:31:50 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:31:50 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:31:50 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:31:50.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:31:50 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:31:50 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:31:50 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:31:50.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:31:52 compute-1 ceph-mon[81775]: pgmap v3290: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:31:52 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:31:52 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:31:52 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:31:52.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:31:52 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:31:52 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:31:52 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:31:52.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:31:52 compute-1 podman[320900]: 2026-01-20 15:31:52.993737111 +0000 UTC m=+0.045500320 container health_status 533cd8e0c3b8b7f910f15cef49e7f879ab46c12ce646a35628c15f13725eb500 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Jan 20 15:31:53 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:31:54 compute-1 nova_compute[225855]: 2026-01-20 15:31:54.072 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:31:54 compute-1 nova_compute[225855]: 2026-01-20 15:31:54.110 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:31:54 compute-1 ceph-mon[81775]: pgmap v3291: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:31:54 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:31:54 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:31:54 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:31:54.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:31:54 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:31:54 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:31:54 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:31:54.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:31:56 compute-1 ceph-mon[81775]: pgmap v3292: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:31:56 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:31:56 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:31:56 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:31:56.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:31:56 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:31:56 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:31:56 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:31:56.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:31:58 compute-1 ceph-mon[81775]: pgmap v3293: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:31:58 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/1442367220' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:31:58 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:31:58 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:31:58 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:31:58 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:31:58.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:31:58 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:31:58 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:31:58 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:31:58.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:31:59 compute-1 nova_compute[225855]: 2026-01-20 15:31:59.074 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:31:59 compute-1 nova_compute[225855]: 2026-01-20 15:31:59.112 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:32:00 compute-1 ceph-mon[81775]: pgmap v3294: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:32:00 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:32:00 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:32:00 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:32:00.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:32:00 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:32:00 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:32:00 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:32:00.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:32:02 compute-1 ceph-mon[81775]: pgmap v3295: 321 pgs: 321 active+clean; 155 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.6 MiB/s wr, 26 op/s
Jan 20 15:32:02 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:32:02 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:32:02 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:32:02.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:32:02 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:32:02 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:32:02 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:32:02.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:32:03 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:32:04 compute-1 nova_compute[225855]: 2026-01-20 15:32:04.075 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:32:04 compute-1 nova_compute[225855]: 2026-01-20 15:32:04.113 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:32:04 compute-1 ceph-mon[81775]: pgmap v3296: 321 pgs: 321 active+clean; 155 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.6 MiB/s wr, 26 op/s
Jan 20 15:32:04 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:32:04 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:32:04 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:32:04.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:32:04 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:32:04 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:32:04 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:32:04.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:32:05 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/3767820426' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:32:06 compute-1 ceph-mon[81775]: pgmap v3297: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 20 15:32:06 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/3040056216' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:32:06 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:32:06 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:32:06 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:32:06.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:32:06 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:32:06 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:32:06 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:32:06.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:32:08 compute-1 sudo[320929]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:32:08 compute-1 sudo[320929]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:32:08 compute-1 sudo[320929]: pam_unix(sudo:session): session closed for user root
Jan 20 15:32:08 compute-1 sudo[320954]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:32:08 compute-1 sudo[320954]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:32:08 compute-1 sudo[320954]: pam_unix(sudo:session): session closed for user root
Jan 20 15:32:08 compute-1 ceph-mon[81775]: pgmap v3298: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 20 15:32:08 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:32:08 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:32:08 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:32:08 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:32:08.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:32:08 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:32:08 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:32:08 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:32:08.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:32:09 compute-1 nova_compute[225855]: 2026-01-20 15:32:09.114 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 15:32:09 compute-1 nova_compute[225855]: 2026-01-20 15:32:09.116 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 15:32:09 compute-1 nova_compute[225855]: 2026-01-20 15:32:09.116 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Jan 20 15:32:09 compute-1 nova_compute[225855]: 2026-01-20 15:32:09.116 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Jan 20 15:32:09 compute-1 nova_compute[225855]: 2026-01-20 15:32:09.133 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:32:09 compute-1 nova_compute[225855]: 2026-01-20 15:32:09.134 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Jan 20 15:32:10 compute-1 ceph-mon[81775]: pgmap v3299: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 49 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Jan 20 15:32:10 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:32:10 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:32:10 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:32:10.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:32:10 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:32:10 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:32:10 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:32:10.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:32:12 compute-1 nova_compute[225855]: 2026-01-20 15:32:12.340 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:32:12 compute-1 nova_compute[225855]: 2026-01-20 15:32:12.341 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 15:32:12 compute-1 ceph-mon[81775]: pgmap v3300: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 1.8 MiB/s wr, 94 op/s
Jan 20 15:32:12 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:32:12 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:32:12 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:32:12.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:32:12 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:32:12 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:32:12 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:32:12.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:32:13 compute-1 podman[320981]: 2026-01-20 15:32:13.036727712 +0000 UTC m=+0.081614281 container health_status 72be95c12e041eb2cd1a16ad70dfa1391d9f03a5f3180a614ead9459dcfcd46f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, 
container_name=ovn_controller, org.label-schema.build-date=20251202)
Jan 20 15:32:13 compute-1 nova_compute[225855]: 2026-01-20 15:32:13.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:32:13 compute-1 nova_compute[225855]: 2026-01-20 15:32:13.367 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:32:13 compute-1 nova_compute[225855]: 2026-01-20 15:32:13.367 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:32:13 compute-1 nova_compute[225855]: 2026-01-20 15:32:13.368 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:32:13 compute-1 nova_compute[225855]: 2026-01-20 15:32:13.368 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 15:32:13 compute-1 nova_compute[225855]: 2026-01-20 15:32:13.368 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:32:13 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:32:13 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/779206332' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:32:13 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:32:13 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3001522942' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:32:13 compute-1 nova_compute[225855]: 2026-01-20 15:32:13.795 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:32:13 compute-1 nova_compute[225855]: 2026-01-20 15:32:13.940 225859 WARNING nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 15:32:13 compute-1 nova_compute[225855]: 2026-01-20 15:32:13.942 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4266MB free_disk=20.96738052368164GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 15:32:13 compute-1 nova_compute[225855]: 2026-01-20 15:32:13.942 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:32:13 compute-1 nova_compute[225855]: 2026-01-20 15:32:13.942 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:32:14 compute-1 nova_compute[225855]: 2026-01-20 15:32:14.060 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 15:32:14 compute-1 nova_compute[225855]: 2026-01-20 15:32:14.060 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 15:32:14 compute-1 nova_compute[225855]: 2026-01-20 15:32:14.123 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:32:14 compute-1 nova_compute[225855]: 2026-01-20 15:32:14.146 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:32:14 compute-1 nova_compute[225855]: 2026-01-20 15:32:14.149 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 15:32:14 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:32:14 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3050195879' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:32:14 compute-1 nova_compute[225855]: 2026-01-20 15:32:14.563 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:32:14 compute-1 nova_compute[225855]: 2026-01-20 15:32:14.568 225859 DEBUG nova.compute.provider_tree [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:32:14 compute-1 ceph-mon[81775]: pgmap v3301: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 151 KiB/s wr, 68 op/s
Jan 20 15:32:14 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/2050766060' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:32:14 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/2050766060' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:32:14 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/3001522942' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:32:14 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/1817760916' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:32:14 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/3050195879' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:32:14 compute-1 nova_compute[225855]: 2026-01-20 15:32:14.595 225859 DEBUG nova.scheduler.client.report [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:32:14 compute-1 nova_compute[225855]: 2026-01-20 15:32:14.618 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 15:32:14 compute-1 nova_compute[225855]: 2026-01-20 15:32:14.618 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.676s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:32:14 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:32:14 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:32:14 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:32:14.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:32:14 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:32:14 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:32:14 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:32:14.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:32:16 compute-1 ovn_controller[130490]: 2026-01-20T15:32:16Z|00962|memory_trim|INFO|Detected inactivity (last active 30004 ms ago): trimming memory
Jan 20 15:32:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:32:16.455 140354 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:32:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:32:16.455 140354 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:32:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:32:16.455 140354 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:32:16 compute-1 nova_compute[225855]: 2026-01-20 15:32:16.619 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:32:16 compute-1 nova_compute[225855]: 2026-01-20 15:32:16.620 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 15:32:16 compute-1 nova_compute[225855]: 2026-01-20 15:32:16.620 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 20 15:32:16 compute-1 nova_compute[225855]: 2026-01-20 15:32:16.633 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 20 15:32:16 compute-1 nova_compute[225855]: 2026-01-20 15:32:16.633 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:32:16 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:32:16 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:32:16 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:32:16.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:32:16 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:32:16 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:32:16 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:32:16.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:32:16 compute-1 ceph-mon[81775]: pgmap v3302: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 151 KiB/s wr, 74 op/s
Jan 20 15:32:16 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/2230532106' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:32:17 compute-1 nova_compute[225855]: 2026-01-20 15:32:17.340 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:32:17 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/1791023127' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:32:18 compute-1 nova_compute[225855]: 2026-01-20 15:32:18.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:32:18 compute-1 nova_compute[225855]: 2026-01-20 15:32:18.340 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:32:18 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:32:18 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:32:18 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:32:18 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:32:18.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:32:18 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:32:18 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:32:18 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:32:18.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:32:18 compute-1 ceph-mon[81775]: pgmap v3303: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 20 15:32:19 compute-1 nova_compute[225855]: 2026-01-20 15:32:19.137 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:32:19 compute-1 nova_compute[225855]: 2026-01-20 15:32:19.148 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:32:19 compute-1 nova_compute[225855]: 2026-01-20 15:32:19.340 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:32:19 compute-1 ceph-mon[81775]: pgmap v3304: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 20 15:32:20 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:32:20 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:32:20 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:32:20.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:32:20 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:32:20 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:32:20 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:32:20.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:32:22 compute-1 ceph-mon[81775]: pgmap v3305: 321 pgs: 321 active+clean; 168 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 488 KiB/s wr, 84 op/s
Jan 20 15:32:22 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:32:22 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:32:22 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:32:22.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:32:22 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:32:22 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:32:22 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:32:22.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:32:23 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:32:24 compute-1 podman[321058]: 2026-01-20 15:32:24.004411673 +0000 UTC m=+0.053422736 container health_status 533cd8e0c3b8b7f910f15cef49e7f879ab46c12ce646a35628c15f13725eb500 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 20 15:32:24 compute-1 nova_compute[225855]: 2026-01-20 15:32:24.139 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:32:24 compute-1 nova_compute[225855]: 2026-01-20 15:32:24.149 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:32:24 compute-1 ceph-mon[81775]: pgmap v3306: 321 pgs: 321 active+clean; 168 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 203 KiB/s rd, 475 KiB/s wr, 18 op/s
Jan 20 15:32:24 compute-1 nova_compute[225855]: 2026-01-20 15:32:24.334 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:32:24 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:32:24 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:32:24 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:32:24.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:32:24 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:32:24 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:32:24 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:32:24.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:32:26 compute-1 ceph-mon[81775]: pgmap v3307: 321 pgs: 321 active+clean; 199 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 528 KiB/s rd, 2.1 MiB/s wr, 68 op/s
Jan 20 15:32:26 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:32:26 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:32:26 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:32:26.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:32:26 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:32:26 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:32:26 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:32:26.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:32:28 compute-1 ceph-mon[81775]: pgmap v3308: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 390 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Jan 20 15:32:28 compute-1 sudo[321080]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:32:28 compute-1 sudo[321080]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:32:28 compute-1 sudo[321080]: pam_unix(sudo:session): session closed for user root
Jan 20 15:32:28 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:32:28 compute-1 sudo[321105]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:32:28 compute-1 sudo[321105]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:32:28 compute-1 sudo[321105]: pam_unix(sudo:session): session closed for user root
Jan 20 15:32:28 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:32:28 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:32:28 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:32:28.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:32:28 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:32:28 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:32:28 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:32:28.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:32:29 compute-1 nova_compute[225855]: 2026-01-20 15:32:29.141 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:32:29 compute-1 nova_compute[225855]: 2026-01-20 15:32:29.151 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:32:30 compute-1 ceph-mon[81775]: pgmap v3309: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 390 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Jan 20 15:32:30 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:32:30 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:32:30 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:32:30.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:32:30 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:32:30 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:32:30 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:32:30.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:32:32 compute-1 ceph-mon[81775]: pgmap v3310: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 390 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Jan 20 15:32:32 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:32:32.510 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=78, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '12:bb:42', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '06:92:24:f7:15:56'}, ipsec=False) old=SB_Global(nb_cfg=77) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:32:32 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:32:32.511 140354 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 20 15:32:32 compute-1 nova_compute[225855]: 2026-01-20 15:32:32.510 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:32:32 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:32:32 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:32:32 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:32:32.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:32:32 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:32:32 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:32:32 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:32:32.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:32:33 compute-1 ceph-mon[81775]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #163. Immutable memtables: 0.
Jan 20 15:32:33 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:32:33.314553) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 20 15:32:33 compute-1 ceph-mon[81775]: rocksdb: [db/flush_job.cc:856] [default] [JOB 103] Flushing memtable with next log file: 163
Jan 20 15:32:33 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768923153314611, "job": 103, "event": "flush_started", "num_memtables": 1, "num_entries": 1257, "num_deletes": 256, "total_data_size": 2756920, "memory_usage": 2790464, "flush_reason": "Manual Compaction"}
Jan 20 15:32:33 compute-1 ceph-mon[81775]: rocksdb: [db/flush_job.cc:885] [default] [JOB 103] Level-0 flush table #164: started
Jan 20 15:32:33 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768923153331816, "cf_name": "default", "job": 103, "event": "table_file_creation", "file_number": 164, "file_size": 1819587, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 79332, "largest_seqno": 80584, "table_properties": {"data_size": 1814121, "index_size": 2861, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1541, "raw_key_size": 11617, "raw_average_key_size": 19, "raw_value_size": 1803194, "raw_average_value_size": 3025, "num_data_blocks": 127, "num_entries": 596, "num_filter_entries": 596, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768923045, "oldest_key_time": 1768923045, "file_creation_time": 1768923153, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1539d774-8a6f-4e48-b253-137c44586344", "db_session_id": "LFF7G2OZDOU7TKQ8MKAH", "orig_file_number": 164, "seqno_to_time_mapping": "N/A"}}
Jan 20 15:32:33 compute-1 ceph-mon[81775]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 103] Flush lasted 17344 microseconds, and 4721 cpu microseconds.
Jan 20 15:32:33 compute-1 ceph-mon[81775]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 15:32:33 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:32:33.331889) [db/flush_job.cc:967] [default] [JOB 103] Level-0 flush table #164: 1819587 bytes OK
Jan 20 15:32:33 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:32:33.331914) [db/memtable_list.cc:519] [default] Level-0 commit table #164 started
Jan 20 15:32:33 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:32:33.332902) [db/memtable_list.cc:722] [default] Level-0 commit table #164: memtable #1 done
Jan 20 15:32:33 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:32:33.332916) EVENT_LOG_v1 {"time_micros": 1768923153332912, "job": 103, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 20 15:32:33 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:32:33.332937) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 20 15:32:33 compute-1 ceph-mon[81775]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 103] Try to delete WAL files size 2750937, prev total WAL file size 2750937, number of live WAL files 2.
Jan 20 15:32:33 compute-1 ceph-mon[81775]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000160.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 15:32:33 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:32:33.333586) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0033303232' seq:72057594037927935, type:22 .. '6C6F676D0033323734' seq:0, type:0; will stop at (end)
Jan 20 15:32:33 compute-1 ceph-mon[81775]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 104] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 20 15:32:33 compute-1 ceph-mon[81775]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 103 Base level 0, inputs: [164(1776KB)], [162(10MB)]
Jan 20 15:32:33 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768923153333662, "job": 104, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [164], "files_L6": [162], "score": -1, "input_data_size": 13100600, "oldest_snapshot_seqno": -1}
Jan 20 15:32:33 compute-1 ceph-mon[81775]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 104] Generated table #165: 10119 keys, 12969330 bytes, temperature: kUnknown
Jan 20 15:32:33 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768923153484960, "cf_name": "default", "job": 104, "event": "table_file_creation", "file_number": 165, "file_size": 12969330, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12904237, "index_size": 38676, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 25349, "raw_key_size": 267308, "raw_average_key_size": 26, "raw_value_size": 12727179, "raw_average_value_size": 1257, "num_data_blocks": 1473, "num_entries": 10119, "num_filter_entries": 10119, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768917474, "oldest_key_time": 0, "file_creation_time": 1768923153, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1539d774-8a6f-4e48-b253-137c44586344", "db_session_id": "LFF7G2OZDOU7TKQ8MKAH", "orig_file_number": 165, "seqno_to_time_mapping": "N/A"}}
Jan 20 15:32:33 compute-1 ceph-mon[81775]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 15:32:33 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:32:33.485624) [db/compaction/compaction_job.cc:1663] [default] [JOB 104] Compacted 1@0 + 1@6 files to L6 => 12969330 bytes
Jan 20 15:32:33 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:32:33.487146) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 86.5 rd, 85.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.7, 10.8 +0.0 blob) out(12.4 +0.0 blob), read-write-amplify(14.3) write-amplify(7.1) OK, records in: 10644, records dropped: 525 output_compression: NoCompression
Jan 20 15:32:33 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:32:33.487180) EVENT_LOG_v1 {"time_micros": 1768923153487167, "job": 104, "event": "compaction_finished", "compaction_time_micros": 151418, "compaction_time_cpu_micros": 32496, "output_level": 6, "num_output_files": 1, "total_output_size": 12969330, "num_input_records": 10644, "num_output_records": 10119, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 20 15:32:33 compute-1 ceph-mon[81775]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000164.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 15:32:33 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768923153487623, "job": 104, "event": "table_file_deletion", "file_number": 164}
Jan 20 15:32:33 compute-1 ceph-mon[81775]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000162.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 15:32:33 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768923153489520, "job": 104, "event": "table_file_deletion", "file_number": 162}
Jan 20 15:32:33 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:32:33.333504) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:32:33 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:32:33.489719) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:32:33 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:32:33.489728) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:32:33 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:32:33.489730) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:32:33 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:32:33.489732) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:32:33 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:32:33.489735) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:32:33 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:32:34 compute-1 nova_compute[225855]: 2026-01-20 15:32:34.143 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:32:34 compute-1 nova_compute[225855]: 2026-01-20 15:32:34.152 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:32:34 compute-1 ceph-mon[81775]: pgmap v3311: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 352 KiB/s rd, 1.7 MiB/s wr, 53 op/s
Jan 20 15:32:34 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:32:34 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:32:34 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:32:34.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:32:34 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:32:34 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:32:34 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:32:34.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:32:36 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:32:36 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:32:36 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:32:36.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:32:36 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:32:36 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:32:36 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:32:36.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:32:36 compute-1 ceph-mon[81775]: pgmap v3312: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 352 KiB/s rd, 1.7 MiB/s wr, 54 op/s
Jan 20 15:32:37 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:32:37.512 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5ffd4ac3-9266-4927-98ad-20a17782c725, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '78'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:32:37 compute-1 ceph-mon[81775]: pgmap v3313: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 27 KiB/s rd, 17 KiB/s wr, 3 op/s
Jan 20 15:32:38 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:32:38 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:32:38 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:32:38 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:32:38.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:32:38 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:32:38 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:32:38 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:32:38.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:32:39 compute-1 nova_compute[225855]: 2026-01-20 15:32:39.144 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:32:39 compute-1 nova_compute[225855]: 2026-01-20 15:32:39.152 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:32:40 compute-1 ceph-mon[81775]: pgmap v3314: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 682 B/s rd, 14 KiB/s wr, 0 op/s
Jan 20 15:32:40 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:32:40 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:32:40 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:32:40.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:32:40 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:32:40 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:32:40 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:32:40.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:32:42 compute-1 ceph-mon[81775]: pgmap v3315: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 682 B/s rd, 3.3 KiB/s wr, 0 op/s
Jan 20 15:32:42 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:32:42 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:32:42 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:32:42.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:32:42 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:32:42 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:32:42 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:32:42.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:32:43 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:32:44 compute-1 podman[321138]: 2026-01-20 15:32:44.034650752 +0000 UTC m=+0.079469570 container health_status 72be95c12e041eb2cd1a16ad70dfa1391d9f03a5f3180a614ead9459dcfcd46f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 20 15:32:44 compute-1 nova_compute[225855]: 2026-01-20 15:32:44.146 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:32:44 compute-1 nova_compute[225855]: 2026-01-20 15:32:44.153 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:32:44 compute-1 ceph-mon[81775]: pgmap v3316: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.0 KiB/s wr, 0 op/s
Jan 20 15:32:44 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:32:44 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:32:44 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:32:44.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:32:44 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:32:44 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:32:44 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:32:44.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:32:46 compute-1 ceph-mon[81775]: pgmap v3317: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.3 KiB/s rd, 3.0 KiB/s wr, 0 op/s
Jan 20 15:32:46 compute-1 sudo[321165]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:32:46 compute-1 sudo[321165]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:32:46 compute-1 sudo[321165]: pam_unix(sudo:session): session closed for user root
Jan 20 15:32:46 compute-1 sudo[321190]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:32:46 compute-1 sudo[321190]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:32:46 compute-1 sudo[321190]: pam_unix(sudo:session): session closed for user root
Jan 20 15:32:46 compute-1 sudo[321215]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:32:46 compute-1 sudo[321215]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:32:46 compute-1 sudo[321215]: pam_unix(sudo:session): session closed for user root
Jan 20 15:32:46 compute-1 sudo[321240]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Jan 20 15:32:46 compute-1 sudo[321240]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:32:46 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:32:46 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:32:46 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:32:46.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:32:46 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:32:46 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:32:46 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:32:46.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:32:47 compute-1 podman[321336]: 2026-01-20 15:32:47.187789278 +0000 UTC m=+0.070241707 container exec 718ebba7a543e42aad7051248d2c7dc014068c35c89c5b87f27b82d4de39c009 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-crash-compute-1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 15:32:47 compute-1 podman[321336]: 2026-01-20 15:32:47.326298733 +0000 UTC m=+0.208751122 container exec_died 718ebba7a543e42aad7051248d2c7dc014068c35c89c5b87f27b82d4de39c009 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-crash-compute-1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Jan 20 15:32:47 compute-1 nova_compute[225855]: 2026-01-20 15:32:47.407 225859 DEBUG oslo_concurrency.lockutils [None req-f04d4685-41d2-430c-9a80-84505e728853 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Acquiring lock "054e01d8-c9d1-4fb3-99e1-d417718d48c9" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:32:47 compute-1 nova_compute[225855]: 2026-01-20 15:32:47.409 225859 DEBUG oslo_concurrency.lockutils [None req-f04d4685-41d2-430c-9a80-84505e728853 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Lock "054e01d8-c9d1-4fb3-99e1-d417718d48c9" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:32:47 compute-1 nova_compute[225855]: 2026-01-20 15:32:47.428 225859 DEBUG nova.compute.manager [None req-f04d4685-41d2-430c-9a80-84505e728853 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 054e01d8-c9d1-4fb3-99e1-d417718d48c9] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 20 15:32:47 compute-1 nova_compute[225855]: 2026-01-20 15:32:47.512 225859 DEBUG oslo_concurrency.lockutils [None req-f04d4685-41d2-430c-9a80-84505e728853 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:32:47 compute-1 nova_compute[225855]: 2026-01-20 15:32:47.513 225859 DEBUG oslo_concurrency.lockutils [None req-f04d4685-41d2-430c-9a80-84505e728853 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:32:47 compute-1 nova_compute[225855]: 2026-01-20 15:32:47.537 225859 DEBUG nova.virt.hardware [None req-f04d4685-41d2-430c-9a80-84505e728853 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 20 15:32:47 compute-1 nova_compute[225855]: 2026-01-20 15:32:47.537 225859 INFO nova.compute.claims [None req-f04d4685-41d2-430c-9a80-84505e728853 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 054e01d8-c9d1-4fb3-99e1-d417718d48c9] Claim successful on node compute-1.ctlplane.example.com
Jan 20 15:32:47 compute-1 nova_compute[225855]: 2026-01-20 15:32:47.690 225859 DEBUG oslo_concurrency.processutils [None req-f04d4685-41d2-430c-9a80-84505e728853 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:32:47 compute-1 podman[321496]: 2026-01-20 15:32:47.859586738 +0000 UTC m=+0.052781228 container exec 25e2c3387bc15944c21038272559da5fbf75910d8dd4add0faa995fb4e0f7788 (image=quay.io/ceph/haproxy:2.3, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-haproxy-rgw-default-compute-1-uyeocq)
Jan 20 15:32:47 compute-1 podman[321496]: 2026-01-20 15:32:47.869347316 +0000 UTC m=+0.062541796 container exec_died 25e2c3387bc15944c21038272559da5fbf75910d8dd4add0faa995fb4e0f7788 (image=quay.io/ceph/haproxy:2.3, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-haproxy-rgw-default-compute-1-uyeocq)
Jan 20 15:32:48 compute-1 podman[321580]: 2026-01-20 15:32:48.074320149 +0000 UTC m=+0.063443763 container exec e27b69e4cc956b06482c80498336e112a56122514cd7345d3d4b39a4d206f962 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-keepalived-rgw-default-compute-1-cevitz, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.openshift.expose-services=, com.redhat.component=keepalived-container, name=keepalived, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.buildah.version=1.28.2, release=1793, io.openshift.tags=Ceph keepalived, summary=Provides keepalived on RHEL 9 for Ceph., build-date=2023-02-22T09:23:20, version=2.2.4, description=keepalived for Ceph, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Keepalived on RHEL 9, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9)
Jan 20 15:32:48 compute-1 podman[321580]: 2026-01-20 15:32:48.08733003 +0000 UTC m=+0.076453664 container exec_died e27b69e4cc956b06482c80498336e112a56122514cd7345d3d4b39a4d206f962 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-keepalived-rgw-default-compute-1-cevitz, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1793, architecture=x86_64, io.openshift.expose-services=, summary=Provides keepalived on RHEL 9 for Ceph., io.k8s.display-name=Keepalived on RHEL 9, description=keepalived for Ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.28.2, version=2.2.4, com.redhat.component=keepalived-container, io.openshift.tags=Ceph keepalived, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2023-02-22T09:23:20, name=keepalived, vcs-type=git, vendor=Red Hat, Inc.)
Jan 20 15:32:48 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:32:48 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1085350766' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:32:48 compute-1 nova_compute[225855]: 2026-01-20 15:32:48.114 225859 DEBUG oslo_concurrency.processutils [None req-f04d4685-41d2-430c-9a80-84505e728853 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.424s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:32:48 compute-1 nova_compute[225855]: 2026-01-20 15:32:48.120 225859 DEBUG nova.compute.provider_tree [None req-f04d4685-41d2-430c-9a80-84505e728853 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:32:48 compute-1 sudo[321240]: pam_unix(sudo:session): session closed for user root
Jan 20 15:32:48 compute-1 nova_compute[225855]: 2026-01-20 15:32:48.152 225859 DEBUG nova.scheduler.client.report [None req-f04d4685-41d2-430c-9a80-84505e728853 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:32:48 compute-1 nova_compute[225855]: 2026-01-20 15:32:48.191 225859 DEBUG oslo_concurrency.lockutils [None req-f04d4685-41d2-430c-9a80-84505e728853 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.678s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:32:48 compute-1 nova_compute[225855]: 2026-01-20 15:32:48.192 225859 DEBUG nova.compute.manager [None req-f04d4685-41d2-430c-9a80-84505e728853 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 054e01d8-c9d1-4fb3-99e1-d417718d48c9] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 20 15:32:48 compute-1 sudo[321615]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:32:48 compute-1 sudo[321615]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:32:48 compute-1 sudo[321615]: pam_unix(sudo:session): session closed for user root
Jan 20 15:32:48 compute-1 nova_compute[225855]: 2026-01-20 15:32:48.246 225859 DEBUG nova.compute.manager [None req-f04d4685-41d2-430c-9a80-84505e728853 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 054e01d8-c9d1-4fb3-99e1-d417718d48c9] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 20 15:32:48 compute-1 nova_compute[225855]: 2026-01-20 15:32:48.247 225859 DEBUG nova.network.neutron [None req-f04d4685-41d2-430c-9a80-84505e728853 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 054e01d8-c9d1-4fb3-99e1-d417718d48c9] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 20 15:32:48 compute-1 sudo[321640]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:32:48 compute-1 sudo[321640]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:32:48 compute-1 sudo[321640]: pam_unix(sudo:session): session closed for user root
Jan 20 15:32:48 compute-1 nova_compute[225855]: 2026-01-20 15:32:48.287 225859 INFO nova.virt.libvirt.driver [None req-f04d4685-41d2-430c-9a80-84505e728853 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 054e01d8-c9d1-4fb3-99e1-d417718d48c9] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 20 15:32:48 compute-1 nova_compute[225855]: 2026-01-20 15:32:48.314 225859 DEBUG nova.compute.manager [None req-f04d4685-41d2-430c-9a80-84505e728853 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 054e01d8-c9d1-4fb3-99e1-d417718d48c9] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 20 15:32:48 compute-1 sudo[321665]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:32:48 compute-1 sudo[321665]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:32:48 compute-1 sudo[321665]: pam_unix(sudo:session): session closed for user root
Jan 20 15:32:48 compute-1 sudo[321690]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 20 15:32:48 compute-1 sudo[321690]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:32:48 compute-1 nova_compute[225855]: 2026-01-20 15:32:48.396 225859 DEBUG nova.compute.manager [None req-f04d4685-41d2-430c-9a80-84505e728853 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 054e01d8-c9d1-4fb3-99e1-d417718d48c9] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 20 15:32:48 compute-1 nova_compute[225855]: 2026-01-20 15:32:48.397 225859 DEBUG nova.virt.libvirt.driver [None req-f04d4685-41d2-430c-9a80-84505e728853 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 054e01d8-c9d1-4fb3-99e1-d417718d48c9] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 20 15:32:48 compute-1 nova_compute[225855]: 2026-01-20 15:32:48.398 225859 INFO nova.virt.libvirt.driver [None req-f04d4685-41d2-430c-9a80-84505e728853 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 054e01d8-c9d1-4fb3-99e1-d417718d48c9] Creating image(s)
Jan 20 15:32:48 compute-1 nova_compute[225855]: 2026-01-20 15:32:48.427 225859 DEBUG nova.storage.rbd_utils [None req-f04d4685-41d2-430c-9a80-84505e728853 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] rbd image 054e01d8-c9d1-4fb3-99e1-d417718d48c9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:32:48 compute-1 nova_compute[225855]: 2026-01-20 15:32:48.457 225859 DEBUG nova.storage.rbd_utils [None req-f04d4685-41d2-430c-9a80-84505e728853 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] rbd image 054e01d8-c9d1-4fb3-99e1-d417718d48c9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:32:48 compute-1 nova_compute[225855]: 2026-01-20 15:32:48.483 225859 DEBUG nova.storage.rbd_utils [None req-f04d4685-41d2-430c-9a80-84505e728853 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] rbd image 054e01d8-c9d1-4fb3-99e1-d417718d48c9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:32:48 compute-1 nova_compute[225855]: 2026-01-20 15:32:48.487 225859 DEBUG oslo_concurrency.processutils [None req-f04d4685-41d2-430c-9a80-84505e728853 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:32:48 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:32:48 compute-1 nova_compute[225855]: 2026-01-20 15:32:48.552 225859 DEBUG oslo_concurrency.processutils [None req-f04d4685-41d2-430c-9a80-84505e728853 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:32:48 compute-1 nova_compute[225855]: 2026-01-20 15:32:48.553 225859 DEBUG oslo_concurrency.lockutils [None req-f04d4685-41d2-430c-9a80-84505e728853 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Acquiring lock "82d5c1918fd7c974214c7a48c1793a7a82560462" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:32:48 compute-1 nova_compute[225855]: 2026-01-20 15:32:48.554 225859 DEBUG oslo_concurrency.lockutils [None req-f04d4685-41d2-430c-9a80-84505e728853 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:32:48 compute-1 nova_compute[225855]: 2026-01-20 15:32:48.554 225859 DEBUG oslo_concurrency.lockutils [None req-f04d4685-41d2-430c-9a80-84505e728853 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:32:48 compute-1 nova_compute[225855]: 2026-01-20 15:32:48.582 225859 DEBUG nova.storage.rbd_utils [None req-f04d4685-41d2-430c-9a80-84505e728853 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] rbd image 054e01d8-c9d1-4fb3-99e1-d417718d48c9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:32:48 compute-1 nova_compute[225855]: 2026-01-20 15:32:48.586 225859 DEBUG oslo_concurrency.processutils [None req-f04d4685-41d2-430c-9a80-84505e728853 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 054e01d8-c9d1-4fb3-99e1-d417718d48c9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:32:48 compute-1 nova_compute[225855]: 2026-01-20 15:32:48.613 225859 DEBUG nova.policy [None req-f04d4685-41d2-430c-9a80-84505e728853 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '5338aa65dc0e4326a66ce79053787f14', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '3168f57421fb49bfb94b85daedd1fe7d', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 20 15:32:48 compute-1 sudo[321806]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:32:48 compute-1 sudo[321806]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:32:48 compute-1 sudo[321806]: pam_unix(sudo:session): session closed for user root
Jan 20 15:32:48 compute-1 sudo[321848]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:32:48 compute-1 sudo[321848]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:32:48 compute-1 sudo[321848]: pam_unix(sudo:session): session closed for user root
Jan 20 15:32:48 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:32:48 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:32:48 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:32:48.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:32:48 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:32:48 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:32:48 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:32:48.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:32:48 compute-1 sudo[321690]: pam_unix(sudo:session): session closed for user root
Jan 20 15:32:48 compute-1 ceph-mon[81775]: pgmap v3318: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.3 KiB/s rd, 1023 B/s wr, 0 op/s
Jan 20 15:32:48 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:32:48 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:32:48 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/1085350766' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:32:48 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:32:48 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:32:48 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:32:48 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:32:48 compute-1 nova_compute[225855]: 2026-01-20 15:32:48.866 225859 DEBUG oslo_concurrency.processutils [None req-f04d4685-41d2-430c-9a80-84505e728853 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 054e01d8-c9d1-4fb3-99e1-d417718d48c9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.280s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:32:48 compute-1 nova_compute[225855]: 2026-01-20 15:32:48.940 225859 DEBUG nova.storage.rbd_utils [None req-f04d4685-41d2-430c-9a80-84505e728853 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] resizing rbd image 054e01d8-c9d1-4fb3-99e1-d417718d48c9_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 20 15:32:49 compute-1 nova_compute[225855]: 2026-01-20 15:32:49.047 225859 DEBUG nova.objects.instance [None req-f04d4685-41d2-430c-9a80-84505e728853 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Lazy-loading 'migration_context' on Instance uuid 054e01d8-c9d1-4fb3-99e1-d417718d48c9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:32:49 compute-1 nova_compute[225855]: 2026-01-20 15:32:49.064 225859 DEBUG nova.virt.libvirt.driver [None req-f04d4685-41d2-430c-9a80-84505e728853 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 054e01d8-c9d1-4fb3-99e1-d417718d48c9] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 20 15:32:49 compute-1 nova_compute[225855]: 2026-01-20 15:32:49.065 225859 DEBUG nova.virt.libvirt.driver [None req-f04d4685-41d2-430c-9a80-84505e728853 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 054e01d8-c9d1-4fb3-99e1-d417718d48c9] Ensure instance console log exists: /var/lib/nova/instances/054e01d8-c9d1-4fb3-99e1-d417718d48c9/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 20 15:32:49 compute-1 nova_compute[225855]: 2026-01-20 15:32:49.066 225859 DEBUG oslo_concurrency.lockutils [None req-f04d4685-41d2-430c-9a80-84505e728853 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:32:49 compute-1 nova_compute[225855]: 2026-01-20 15:32:49.066 225859 DEBUG oslo_concurrency.lockutils [None req-f04d4685-41d2-430c-9a80-84505e728853 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:32:49 compute-1 nova_compute[225855]: 2026-01-20 15:32:49.067 225859 DEBUG oslo_concurrency.lockutils [None req-f04d4685-41d2-430c-9a80-84505e728853 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:32:49 compute-1 nova_compute[225855]: 2026-01-20 15:32:49.148 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:32:49 compute-1 nova_compute[225855]: 2026-01-20 15:32:49.154 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:32:49 compute-1 nova_compute[225855]: 2026-01-20 15:32:49.737 225859 DEBUG nova.network.neutron [None req-f04d4685-41d2-430c-9a80-84505e728853 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 054e01d8-c9d1-4fb3-99e1-d417718d48c9] Successfully created port: 71bbd457-6ff9-4170-b4f0-18fb471606d4 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 20 15:32:49 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:32:49 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 15:32:49 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:32:49 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 15:32:49 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 15:32:49 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:32:50 compute-1 nova_compute[225855]: 2026-01-20 15:32:50.664 225859 DEBUG nova.network.neutron [None req-f04d4685-41d2-430c-9a80-84505e728853 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 054e01d8-c9d1-4fb3-99e1-d417718d48c9] Successfully updated port: 71bbd457-6ff9-4170-b4f0-18fb471606d4 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 20 15:32:50 compute-1 nova_compute[225855]: 2026-01-20 15:32:50.681 225859 DEBUG oslo_concurrency.lockutils [None req-f04d4685-41d2-430c-9a80-84505e728853 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Acquiring lock "refresh_cache-054e01d8-c9d1-4fb3-99e1-d417718d48c9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:32:50 compute-1 nova_compute[225855]: 2026-01-20 15:32:50.681 225859 DEBUG oslo_concurrency.lockutils [None req-f04d4685-41d2-430c-9a80-84505e728853 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Acquired lock "refresh_cache-054e01d8-c9d1-4fb3-99e1-d417718d48c9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:32:50 compute-1 nova_compute[225855]: 2026-01-20 15:32:50.681 225859 DEBUG nova.network.neutron [None req-f04d4685-41d2-430c-9a80-84505e728853 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 054e01d8-c9d1-4fb3-99e1-d417718d48c9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 15:32:50 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:32:50 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:32:50 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:32:50.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:32:50 compute-1 nova_compute[225855]: 2026-01-20 15:32:50.761 225859 DEBUG nova.compute.manager [req-8ddf2c85-37d3-46ae-be05-39ce818013fc req-83ace2c5-1087-4649-b9fa-54d0ca7cfcea 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 054e01d8-c9d1-4fb3-99e1-d417718d48c9] Received event network-changed-71bbd457-6ff9-4170-b4f0-18fb471606d4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:32:50 compute-1 nova_compute[225855]: 2026-01-20 15:32:50.761 225859 DEBUG nova.compute.manager [req-8ddf2c85-37d3-46ae-be05-39ce818013fc req-83ace2c5-1087-4649-b9fa-54d0ca7cfcea 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 054e01d8-c9d1-4fb3-99e1-d417718d48c9] Refreshing instance network info cache due to event network-changed-71bbd457-6ff9-4170-b4f0-18fb471606d4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 15:32:50 compute-1 nova_compute[225855]: 2026-01-20 15:32:50.761 225859 DEBUG oslo_concurrency.lockutils [req-8ddf2c85-37d3-46ae-be05-39ce818013fc req-83ace2c5-1087-4649-b9fa-54d0ca7cfcea 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-054e01d8-c9d1-4fb3-99e1-d417718d48c9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:32:50 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:32:50 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:32:50 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:32:50.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:32:50 compute-1 nova_compute[225855]: 2026-01-20 15:32:50.820 225859 DEBUG nova.network.neutron [None req-f04d4685-41d2-430c-9a80-84505e728853 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 054e01d8-c9d1-4fb3-99e1-d417718d48c9] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 20 15:32:50 compute-1 ceph-mon[81775]: pgmap v3319: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.3 KiB/s rd, 5.7 KiB/s wr, 0 op/s
Jan 20 15:32:51 compute-1 nova_compute[225855]: 2026-01-20 15:32:51.746 225859 DEBUG nova.network.neutron [None req-f04d4685-41d2-430c-9a80-84505e728853 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 054e01d8-c9d1-4fb3-99e1-d417718d48c9] Updating instance_info_cache with network_info: [{"id": "71bbd457-6ff9-4170-b4f0-18fb471606d4", "address": "fa:16:3e:65:ea:56", "network": {"id": "e1610a22-2f29-4495-85e7-ab2081f73701", "bridge": "br-int", "label": "tempest-network-smoke--313389391", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.20", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3168f57421fb49bfb94b85daedd1fe7d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap71bbd457-6f", "ovs_interfaceid": "71bbd457-6ff9-4170-b4f0-18fb471606d4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:32:51 compute-1 nova_compute[225855]: 2026-01-20 15:32:51.769 225859 DEBUG oslo_concurrency.lockutils [None req-f04d4685-41d2-430c-9a80-84505e728853 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Releasing lock "refresh_cache-054e01d8-c9d1-4fb3-99e1-d417718d48c9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:32:51 compute-1 nova_compute[225855]: 2026-01-20 15:32:51.770 225859 DEBUG nova.compute.manager [None req-f04d4685-41d2-430c-9a80-84505e728853 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 054e01d8-c9d1-4fb3-99e1-d417718d48c9] Instance network_info: |[{"id": "71bbd457-6ff9-4170-b4f0-18fb471606d4", "address": "fa:16:3e:65:ea:56", "network": {"id": "e1610a22-2f29-4495-85e7-ab2081f73701", "bridge": "br-int", "label": "tempest-network-smoke--313389391", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.20", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3168f57421fb49bfb94b85daedd1fe7d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap71bbd457-6f", "ovs_interfaceid": "71bbd457-6ff9-4170-b4f0-18fb471606d4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 20 15:32:51 compute-1 nova_compute[225855]: 2026-01-20 15:32:51.771 225859 DEBUG oslo_concurrency.lockutils [req-8ddf2c85-37d3-46ae-be05-39ce818013fc req-83ace2c5-1087-4649-b9fa-54d0ca7cfcea 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-054e01d8-c9d1-4fb3-99e1-d417718d48c9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:32:51 compute-1 nova_compute[225855]: 2026-01-20 15:32:51.772 225859 DEBUG nova.network.neutron [req-8ddf2c85-37d3-46ae-be05-39ce818013fc req-83ace2c5-1087-4649-b9fa-54d0ca7cfcea 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 054e01d8-c9d1-4fb3-99e1-d417718d48c9] Refreshing network info cache for port 71bbd457-6ff9-4170-b4f0-18fb471606d4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 15:32:51 compute-1 nova_compute[225855]: 2026-01-20 15:32:51.777 225859 DEBUG nova.virt.libvirt.driver [None req-f04d4685-41d2-430c-9a80-84505e728853 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 054e01d8-c9d1-4fb3-99e1-d417718d48c9] Start _get_guest_xml network_info=[{"id": "71bbd457-6ff9-4170-b4f0-18fb471606d4", "address": "fa:16:3e:65:ea:56", "network": {"id": "e1610a22-2f29-4495-85e7-ab2081f73701", "bridge": "br-int", "label": "tempest-network-smoke--313389391", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.20", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3168f57421fb49bfb94b85daedd1fe7d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap71bbd457-6f", "ovs_interfaceid": "71bbd457-6ff9-4170-b4f0-18fb471606d4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'device_type': 'disk', 'encryption_options': None, 'size': 0, 'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'guest_format': None, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'image_id': 'a32b3e07-16d8-46fd-9a7b-c242c432fcf9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 20 15:32:51 compute-1 nova_compute[225855]: 2026-01-20 15:32:51.784 225859 WARNING nova.virt.libvirt.driver [None req-f04d4685-41d2-430c-9a80-84505e728853 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 15:32:51 compute-1 nova_compute[225855]: 2026-01-20 15:32:51.791 225859 DEBUG nova.virt.libvirt.host [None req-f04d4685-41d2-430c-9a80-84505e728853 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 20 15:32:51 compute-1 nova_compute[225855]: 2026-01-20 15:32:51.792 225859 DEBUG nova.virt.libvirt.host [None req-f04d4685-41d2-430c-9a80-84505e728853 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 20 15:32:51 compute-1 nova_compute[225855]: 2026-01-20 15:32:51.802 225859 DEBUG nova.virt.libvirt.host [None req-f04d4685-41d2-430c-9a80-84505e728853 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 20 15:32:51 compute-1 nova_compute[225855]: 2026-01-20 15:32:51.803 225859 DEBUG nova.virt.libvirt.host [None req-f04d4685-41d2-430c-9a80-84505e728853 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 20 15:32:51 compute-1 nova_compute[225855]: 2026-01-20 15:32:51.805 225859 DEBUG nova.virt.libvirt.driver [None req-f04d4685-41d2-430c-9a80-84505e728853 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 20 15:32:51 compute-1 nova_compute[225855]: 2026-01-20 15:32:51.805 225859 DEBUG nova.virt.hardware [None req-f04d4685-41d2-430c-9a80-84505e728853 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-20T14:21:55Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='522deaab-a741-4dbb-932d-d8b13a211c33',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 20 15:32:51 compute-1 nova_compute[225855]: 2026-01-20 15:32:51.806 225859 DEBUG nova.virt.hardware [None req-f04d4685-41d2-430c-9a80-84505e728853 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 20 15:32:51 compute-1 nova_compute[225855]: 2026-01-20 15:32:51.807 225859 DEBUG nova.virt.hardware [None req-f04d4685-41d2-430c-9a80-84505e728853 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 20 15:32:51 compute-1 nova_compute[225855]: 2026-01-20 15:32:51.807 225859 DEBUG nova.virt.hardware [None req-f04d4685-41d2-430c-9a80-84505e728853 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 20 15:32:51 compute-1 nova_compute[225855]: 2026-01-20 15:32:51.808 225859 DEBUG nova.virt.hardware [None req-f04d4685-41d2-430c-9a80-84505e728853 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 20 15:32:51 compute-1 nova_compute[225855]: 2026-01-20 15:32:51.808 225859 DEBUG nova.virt.hardware [None req-f04d4685-41d2-430c-9a80-84505e728853 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 20 15:32:51 compute-1 nova_compute[225855]: 2026-01-20 15:32:51.809 225859 DEBUG nova.virt.hardware [None req-f04d4685-41d2-430c-9a80-84505e728853 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 20 15:32:51 compute-1 nova_compute[225855]: 2026-01-20 15:32:51.809 225859 DEBUG nova.virt.hardware [None req-f04d4685-41d2-430c-9a80-84505e728853 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 20 15:32:51 compute-1 nova_compute[225855]: 2026-01-20 15:32:51.810 225859 DEBUG nova.virt.hardware [None req-f04d4685-41d2-430c-9a80-84505e728853 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 20 15:32:51 compute-1 nova_compute[225855]: 2026-01-20 15:32:51.811 225859 DEBUG nova.virt.hardware [None req-f04d4685-41d2-430c-9a80-84505e728853 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 20 15:32:51 compute-1 nova_compute[225855]: 2026-01-20 15:32:51.811 225859 DEBUG nova.virt.hardware [None req-f04d4685-41d2-430c-9a80-84505e728853 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 20 15:32:51 compute-1 nova_compute[225855]: 2026-01-20 15:32:51.816 225859 DEBUG oslo_concurrency.processutils [None req-f04d4685-41d2-430c-9a80-84505e728853 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:32:52 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 15:32:52 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/62047383' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:32:52 compute-1 nova_compute[225855]: 2026-01-20 15:32:52.279 225859 DEBUG oslo_concurrency.processutils [None req-f04d4685-41d2-430c-9a80-84505e728853 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:32:52 compute-1 nova_compute[225855]: 2026-01-20 15:32:52.304 225859 DEBUG nova.storage.rbd_utils [None req-f04d4685-41d2-430c-9a80-84505e728853 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] rbd image 054e01d8-c9d1-4fb3-99e1-d417718d48c9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:32:52 compute-1 nova_compute[225855]: 2026-01-20 15:32:52.308 225859 DEBUG oslo_concurrency.processutils [None req-f04d4685-41d2-430c-9a80-84505e728853 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:32:52 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 15:32:52 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1025644885' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:32:52 compute-1 nova_compute[225855]: 2026-01-20 15:32:52.725 225859 DEBUG oslo_concurrency.processutils [None req-f04d4685-41d2-430c-9a80-84505e728853 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.417s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:32:52 compute-1 nova_compute[225855]: 2026-01-20 15:32:52.727 225859 DEBUG nova.virt.libvirt.vif [None req-f04d4685-41d2-430c-9a80-84505e728853 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T15:32:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1975990603',display_name='tempest-TestNetworkBasicOps-server-1975990603',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1975990603',id=209,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBIIq5p1Z8aKbdSJMUPSMnjWUaTZorIMa+mXmK10gXmX/oHg+Z5q1Rmf+/0TauJDUZqczNGvwDzE8yxRK1lxgnRI2fdz8rl+BuPz+yhlF83YWDX8Jzvo5YEkj80ZkenoXA==',key_name='tempest-TestNetworkBasicOps-1869687346',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3168f57421fb49bfb94b85daedd1fe7d',ramdisk_id='',reservation_id='r-ox7roysq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-807695970',owner_user_name='tempest-TestNetworkBasicOps-807695970-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T15:32:48Z,user_data=None,user_id='5338aa65dc0e4326a66ce79053787f14',uuid=054e01d8-c9d1-4fb3-99e1-d417718d48c9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "71bbd457-6ff9-4170-b4f0-18fb471606d4", "address": "fa:16:3e:65:ea:56", "network": {"id": "e1610a22-2f29-4495-85e7-ab2081f73701", "bridge": "br-int", "label": "tempest-network-smoke--313389391", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.20", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "3168f57421fb49bfb94b85daedd1fe7d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap71bbd457-6f", "ovs_interfaceid": "71bbd457-6ff9-4170-b4f0-18fb471606d4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 20 15:32:52 compute-1 nova_compute[225855]: 2026-01-20 15:32:52.727 225859 DEBUG nova.network.os_vif_util [None req-f04d4685-41d2-430c-9a80-84505e728853 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Converting VIF {"id": "71bbd457-6ff9-4170-b4f0-18fb471606d4", "address": "fa:16:3e:65:ea:56", "network": {"id": "e1610a22-2f29-4495-85e7-ab2081f73701", "bridge": "br-int", "label": "tempest-network-smoke--313389391", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.20", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3168f57421fb49bfb94b85daedd1fe7d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap71bbd457-6f", "ovs_interfaceid": "71bbd457-6ff9-4170-b4f0-18fb471606d4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 15:32:52 compute-1 nova_compute[225855]: 2026-01-20 15:32:52.728 225859 DEBUG nova.network.os_vif_util [None req-f04d4685-41d2-430c-9a80-84505e728853 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:65:ea:56,bridge_name='br-int',has_traffic_filtering=True,id=71bbd457-6ff9-4170-b4f0-18fb471606d4,network=Network(e1610a22-2f29-4495-85e7-ab2081f73701),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap71bbd457-6f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 15:32:52 compute-1 nova_compute[225855]: 2026-01-20 15:32:52.729 225859 DEBUG nova.objects.instance [None req-f04d4685-41d2-430c-9a80-84505e728853 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Lazy-loading 'pci_devices' on Instance uuid 054e01d8-c9d1-4fb3-99e1-d417718d48c9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:32:52 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:32:52 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:32:52 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:32:52.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:32:52 compute-1 nova_compute[225855]: 2026-01-20 15:32:52.745 225859 DEBUG nova.virt.libvirt.driver [None req-f04d4685-41d2-430c-9a80-84505e728853 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 054e01d8-c9d1-4fb3-99e1-d417718d48c9] End _get_guest_xml xml=<domain type="kvm">
Jan 20 15:32:52 compute-1 nova_compute[225855]:   <uuid>054e01d8-c9d1-4fb3-99e1-d417718d48c9</uuid>
Jan 20 15:32:52 compute-1 nova_compute[225855]:   <name>instance-000000d1</name>
Jan 20 15:32:52 compute-1 nova_compute[225855]:   <memory>131072</memory>
Jan 20 15:32:52 compute-1 nova_compute[225855]:   <vcpu>1</vcpu>
Jan 20 15:32:52 compute-1 nova_compute[225855]:   <metadata>
Jan 20 15:32:52 compute-1 nova_compute[225855]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 15:32:52 compute-1 nova_compute[225855]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 15:32:52 compute-1 nova_compute[225855]:       <nova:name>tempest-TestNetworkBasicOps-server-1975990603</nova:name>
Jan 20 15:32:52 compute-1 nova_compute[225855]:       <nova:creationTime>2026-01-20 15:32:51</nova:creationTime>
Jan 20 15:32:52 compute-1 nova_compute[225855]:       <nova:flavor name="m1.nano">
Jan 20 15:32:52 compute-1 nova_compute[225855]:         <nova:memory>128</nova:memory>
Jan 20 15:32:52 compute-1 nova_compute[225855]:         <nova:disk>1</nova:disk>
Jan 20 15:32:52 compute-1 nova_compute[225855]:         <nova:swap>0</nova:swap>
Jan 20 15:32:52 compute-1 nova_compute[225855]:         <nova:ephemeral>0</nova:ephemeral>
Jan 20 15:32:52 compute-1 nova_compute[225855]:         <nova:vcpus>1</nova:vcpus>
Jan 20 15:32:52 compute-1 nova_compute[225855]:       </nova:flavor>
Jan 20 15:32:52 compute-1 nova_compute[225855]:       <nova:owner>
Jan 20 15:32:52 compute-1 nova_compute[225855]:         <nova:user uuid="5338aa65dc0e4326a66ce79053787f14">tempest-TestNetworkBasicOps-807695970-project-member</nova:user>
Jan 20 15:32:52 compute-1 nova_compute[225855]:         <nova:project uuid="3168f57421fb49bfb94b85daedd1fe7d">tempest-TestNetworkBasicOps-807695970</nova:project>
Jan 20 15:32:52 compute-1 nova_compute[225855]:       </nova:owner>
Jan 20 15:32:52 compute-1 nova_compute[225855]:       <nova:root type="image" uuid="a32b3e07-16d8-46fd-9a7b-c242c432fcf9"/>
Jan 20 15:32:52 compute-1 nova_compute[225855]:       <nova:ports>
Jan 20 15:32:52 compute-1 nova_compute[225855]:         <nova:port uuid="71bbd457-6ff9-4170-b4f0-18fb471606d4">
Jan 20 15:32:52 compute-1 nova_compute[225855]:           <nova:ip type="fixed" address="10.100.0.20" ipVersion="4"/>
Jan 20 15:32:52 compute-1 nova_compute[225855]:         </nova:port>
Jan 20 15:32:52 compute-1 nova_compute[225855]:       </nova:ports>
Jan 20 15:32:52 compute-1 nova_compute[225855]:     </nova:instance>
Jan 20 15:32:52 compute-1 nova_compute[225855]:   </metadata>
Jan 20 15:32:52 compute-1 nova_compute[225855]:   <sysinfo type="smbios">
Jan 20 15:32:52 compute-1 nova_compute[225855]:     <system>
Jan 20 15:32:52 compute-1 nova_compute[225855]:       <entry name="manufacturer">RDO</entry>
Jan 20 15:32:52 compute-1 nova_compute[225855]:       <entry name="product">OpenStack Compute</entry>
Jan 20 15:32:52 compute-1 nova_compute[225855]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 15:32:52 compute-1 nova_compute[225855]:       <entry name="serial">054e01d8-c9d1-4fb3-99e1-d417718d48c9</entry>
Jan 20 15:32:52 compute-1 nova_compute[225855]:       <entry name="uuid">054e01d8-c9d1-4fb3-99e1-d417718d48c9</entry>
Jan 20 15:32:52 compute-1 nova_compute[225855]:       <entry name="family">Virtual Machine</entry>
Jan 20 15:32:52 compute-1 nova_compute[225855]:     </system>
Jan 20 15:32:52 compute-1 nova_compute[225855]:   </sysinfo>
Jan 20 15:32:52 compute-1 nova_compute[225855]:   <os>
Jan 20 15:32:52 compute-1 nova_compute[225855]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 20 15:32:52 compute-1 nova_compute[225855]:     <boot dev="hd"/>
Jan 20 15:32:52 compute-1 nova_compute[225855]:     <smbios mode="sysinfo"/>
Jan 20 15:32:52 compute-1 nova_compute[225855]:   </os>
Jan 20 15:32:52 compute-1 nova_compute[225855]:   <features>
Jan 20 15:32:52 compute-1 nova_compute[225855]:     <acpi/>
Jan 20 15:32:52 compute-1 nova_compute[225855]:     <apic/>
Jan 20 15:32:52 compute-1 nova_compute[225855]:     <vmcoreinfo/>
Jan 20 15:32:52 compute-1 nova_compute[225855]:   </features>
Jan 20 15:32:52 compute-1 nova_compute[225855]:   <clock offset="utc">
Jan 20 15:32:52 compute-1 nova_compute[225855]:     <timer name="pit" tickpolicy="delay"/>
Jan 20 15:32:52 compute-1 nova_compute[225855]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 20 15:32:52 compute-1 nova_compute[225855]:     <timer name="hpet" present="no"/>
Jan 20 15:32:52 compute-1 nova_compute[225855]:   </clock>
Jan 20 15:32:52 compute-1 nova_compute[225855]:   <cpu mode="custom" match="exact">
Jan 20 15:32:52 compute-1 nova_compute[225855]:     <model>Nehalem</model>
Jan 20 15:32:52 compute-1 nova_compute[225855]:     <topology sockets="1" cores="1" threads="1"/>
Jan 20 15:32:52 compute-1 nova_compute[225855]:   </cpu>
Jan 20 15:32:52 compute-1 nova_compute[225855]:   <devices>
Jan 20 15:32:52 compute-1 nova_compute[225855]:     <disk type="network" device="disk">
Jan 20 15:32:52 compute-1 nova_compute[225855]:       <driver type="raw" cache="none"/>
Jan 20 15:32:52 compute-1 nova_compute[225855]:       <source protocol="rbd" name="vms/054e01d8-c9d1-4fb3-99e1-d417718d48c9_disk">
Jan 20 15:32:52 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 15:32:52 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 15:32:52 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 15:32:52 compute-1 nova_compute[225855]:       </source>
Jan 20 15:32:52 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 15:32:52 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 15:32:52 compute-1 nova_compute[225855]:       </auth>
Jan 20 15:32:52 compute-1 nova_compute[225855]:       <target dev="vda" bus="virtio"/>
Jan 20 15:32:52 compute-1 nova_compute[225855]:     </disk>
Jan 20 15:32:52 compute-1 nova_compute[225855]:     <disk type="network" device="cdrom">
Jan 20 15:32:52 compute-1 nova_compute[225855]:       <driver type="raw" cache="none"/>
Jan 20 15:32:52 compute-1 nova_compute[225855]:       <source protocol="rbd" name="vms/054e01d8-c9d1-4fb3-99e1-d417718d48c9_disk.config">
Jan 20 15:32:52 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 15:32:52 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 15:32:52 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 15:32:52 compute-1 nova_compute[225855]:       </source>
Jan 20 15:32:52 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 15:32:52 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 15:32:52 compute-1 nova_compute[225855]:       </auth>
Jan 20 15:32:52 compute-1 nova_compute[225855]:       <target dev="sda" bus="sata"/>
Jan 20 15:32:52 compute-1 nova_compute[225855]:     </disk>
Jan 20 15:32:52 compute-1 nova_compute[225855]:     <interface type="ethernet">
Jan 20 15:32:52 compute-1 nova_compute[225855]:       <mac address="fa:16:3e:65:ea:56"/>
Jan 20 15:32:52 compute-1 nova_compute[225855]:       <model type="virtio"/>
Jan 20 15:32:52 compute-1 nova_compute[225855]:       <driver name="vhost" rx_queue_size="512"/>
Jan 20 15:32:52 compute-1 nova_compute[225855]:       <mtu size="1442"/>
Jan 20 15:32:52 compute-1 nova_compute[225855]:       <target dev="tap71bbd457-6f"/>
Jan 20 15:32:52 compute-1 nova_compute[225855]:     </interface>
Jan 20 15:32:52 compute-1 nova_compute[225855]:     <serial type="pty">
Jan 20 15:32:52 compute-1 nova_compute[225855]:       <log file="/var/lib/nova/instances/054e01d8-c9d1-4fb3-99e1-d417718d48c9/console.log" append="off"/>
Jan 20 15:32:52 compute-1 nova_compute[225855]:     </serial>
Jan 20 15:32:52 compute-1 nova_compute[225855]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 20 15:32:52 compute-1 nova_compute[225855]:     <video>
Jan 20 15:32:52 compute-1 nova_compute[225855]:       <model type="virtio"/>
Jan 20 15:32:52 compute-1 nova_compute[225855]:     </video>
Jan 20 15:32:52 compute-1 nova_compute[225855]:     <input type="tablet" bus="usb"/>
Jan 20 15:32:52 compute-1 nova_compute[225855]:     <rng model="virtio">
Jan 20 15:32:52 compute-1 nova_compute[225855]:       <backend model="random">/dev/urandom</backend>
Jan 20 15:32:52 compute-1 nova_compute[225855]:     </rng>
Jan 20 15:32:52 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root"/>
Jan 20 15:32:52 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:32:52 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:32:52 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:32:52 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:32:52 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:32:52 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:32:52 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:32:52 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:32:52 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:32:52 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:32:52 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:32:52 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:32:52 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:32:52 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:32:52 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:32:52 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:32:52 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:32:52 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:32:52 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:32:52 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:32:52 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:32:52 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:32:52 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:32:52 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:32:52 compute-1 nova_compute[225855]:     <controller type="usb" index="0"/>
Jan 20 15:32:52 compute-1 nova_compute[225855]:     <memballoon model="virtio">
Jan 20 15:32:52 compute-1 nova_compute[225855]:       <stats period="10"/>
Jan 20 15:32:52 compute-1 nova_compute[225855]:     </memballoon>
Jan 20 15:32:52 compute-1 nova_compute[225855]:   </devices>
Jan 20 15:32:52 compute-1 nova_compute[225855]: </domain>
Jan 20 15:32:52 compute-1 nova_compute[225855]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 20 15:32:52 compute-1 nova_compute[225855]: 2026-01-20 15:32:52.747 225859 DEBUG nova.compute.manager [None req-f04d4685-41d2-430c-9a80-84505e728853 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 054e01d8-c9d1-4fb3-99e1-d417718d48c9] Preparing to wait for external event network-vif-plugged-71bbd457-6ff9-4170-b4f0-18fb471606d4 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 20 15:32:52 compute-1 nova_compute[225855]: 2026-01-20 15:32:52.747 225859 DEBUG oslo_concurrency.lockutils [None req-f04d4685-41d2-430c-9a80-84505e728853 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Acquiring lock "054e01d8-c9d1-4fb3-99e1-d417718d48c9-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:32:52 compute-1 nova_compute[225855]: 2026-01-20 15:32:52.747 225859 DEBUG oslo_concurrency.lockutils [None req-f04d4685-41d2-430c-9a80-84505e728853 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Lock "054e01d8-c9d1-4fb3-99e1-d417718d48c9-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:32:52 compute-1 nova_compute[225855]: 2026-01-20 15:32:52.747 225859 DEBUG oslo_concurrency.lockutils [None req-f04d4685-41d2-430c-9a80-84505e728853 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Lock "054e01d8-c9d1-4fb3-99e1-d417718d48c9-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:32:52 compute-1 nova_compute[225855]: 2026-01-20 15:32:52.748 225859 DEBUG nova.virt.libvirt.vif [None req-f04d4685-41d2-430c-9a80-84505e728853 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T15:32:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1975990603',display_name='tempest-TestNetworkBasicOps-server-1975990603',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1975990603',id=209,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBIIq5p1Z8aKbdSJMUPSMnjWUaTZorIMa+mXmK10gXmX/oHg+Z5q1Rmf+/0TauJDUZqczNGvwDzE8yxRK1lxgnRI2fdz8rl+BuPz+yhlF83YWDX8Jzvo5YEkj80ZkenoXA==',key_name='tempest-TestNetworkBasicOps-1869687346',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3168f57421fb49bfb94b85daedd1fe7d',ramdisk_id='',reservation_id='r-ox7roysq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-807695970',owner_user_name='tempest-TestNetworkBasicOps-807695970-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T15:32:48Z,user_data=None,user_id='5338aa65dc0e4326a66ce79053787f14',uuid=054e01d8-c9d1-4fb3-99e1-d417718d48c9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "71bbd457-6ff9-4170-b4f0-18fb471606d4", "address": "fa:16:3e:65:ea:56", "network": {"id": "e1610a22-2f29-4495-85e7-ab2081f73701", "bridge": "br-int", "label": "tempest-network-smoke--313389391", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.20", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3168f57421fb49bfb94b85daedd1fe7d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap71bbd457-6f", "ovs_interfaceid": "71bbd457-6ff9-4170-b4f0-18fb471606d4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 20 15:32:52 compute-1 nova_compute[225855]: 2026-01-20 15:32:52.748 225859 DEBUG nova.network.os_vif_util [None req-f04d4685-41d2-430c-9a80-84505e728853 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Converting VIF {"id": "71bbd457-6ff9-4170-b4f0-18fb471606d4", "address": "fa:16:3e:65:ea:56", "network": {"id": "e1610a22-2f29-4495-85e7-ab2081f73701", "bridge": "br-int", "label": "tempest-network-smoke--313389391", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.20", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3168f57421fb49bfb94b85daedd1fe7d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap71bbd457-6f", "ovs_interfaceid": "71bbd457-6ff9-4170-b4f0-18fb471606d4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 15:32:52 compute-1 nova_compute[225855]: 2026-01-20 15:32:52.749 225859 DEBUG nova.network.os_vif_util [None req-f04d4685-41d2-430c-9a80-84505e728853 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:65:ea:56,bridge_name='br-int',has_traffic_filtering=True,id=71bbd457-6ff9-4170-b4f0-18fb471606d4,network=Network(e1610a22-2f29-4495-85e7-ab2081f73701),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap71bbd457-6f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 15:32:52 compute-1 nova_compute[225855]: 2026-01-20 15:32:52.749 225859 DEBUG os_vif [None req-f04d4685-41d2-430c-9a80-84505e728853 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:65:ea:56,bridge_name='br-int',has_traffic_filtering=True,id=71bbd457-6ff9-4170-b4f0-18fb471606d4,network=Network(e1610a22-2f29-4495-85e7-ab2081f73701),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap71bbd457-6f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 20 15:32:52 compute-1 nova_compute[225855]: 2026-01-20 15:32:52.750 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:32:52 compute-1 nova_compute[225855]: 2026-01-20 15:32:52.751 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:32:52 compute-1 nova_compute[225855]: 2026-01-20 15:32:52.751 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 15:32:52 compute-1 nova_compute[225855]: 2026-01-20 15:32:52.754 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:32:52 compute-1 nova_compute[225855]: 2026-01-20 15:32:52.754 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap71bbd457-6f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:32:52 compute-1 nova_compute[225855]: 2026-01-20 15:32:52.755 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap71bbd457-6f, col_values=(('external_ids', {'iface-id': '71bbd457-6ff9-4170-b4f0-18fb471606d4', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:65:ea:56', 'vm-uuid': '054e01d8-c9d1-4fb3-99e1-d417718d48c9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:32:52 compute-1 nova_compute[225855]: 2026-01-20 15:32:52.758 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 15:32:52 compute-1 NetworkManager[49104]: <info>  [1768923172.7582] manager: (tap71bbd457-6f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/410)
Jan 20 15:32:52 compute-1 nova_compute[225855]: 2026-01-20 15:32:52.762 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:32:52 compute-1 nova_compute[225855]: 2026-01-20 15:32:52.764 225859 INFO os_vif [None req-f04d4685-41d2-430c-9a80-84505e728853 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:65:ea:56,bridge_name='br-int',has_traffic_filtering=True,id=71bbd457-6ff9-4170-b4f0-18fb471606d4,network=Network(e1610a22-2f29-4495-85e7-ab2081f73701),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap71bbd457-6f')
Jan 20 15:32:52 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:32:52 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:32:52 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:32:52.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:32:52 compute-1 nova_compute[225855]: 2026-01-20 15:32:52.808 225859 DEBUG nova.virt.libvirt.driver [None req-f04d4685-41d2-430c-9a80-84505e728853 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 15:32:52 compute-1 nova_compute[225855]: 2026-01-20 15:32:52.808 225859 DEBUG nova.virt.libvirt.driver [None req-f04d4685-41d2-430c-9a80-84505e728853 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 15:32:52 compute-1 nova_compute[225855]: 2026-01-20 15:32:52.809 225859 DEBUG nova.virt.libvirt.driver [None req-f04d4685-41d2-430c-9a80-84505e728853 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] No VIF found with MAC fa:16:3e:65:ea:56, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 20 15:32:52 compute-1 nova_compute[225855]: 2026-01-20 15:32:52.809 225859 INFO nova.virt.libvirt.driver [None req-f04d4685-41d2-430c-9a80-84505e728853 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 054e01d8-c9d1-4fb3-99e1-d417718d48c9] Using config drive
Jan 20 15:32:52 compute-1 nova_compute[225855]: 2026-01-20 15:32:52.831 225859 DEBUG nova.storage.rbd_utils [None req-f04d4685-41d2-430c-9a80-84505e728853 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] rbd image 054e01d8-c9d1-4fb3-99e1-d417718d48c9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:32:52 compute-1 ceph-mon[81775]: pgmap v3320: 321 pgs: 321 active+clean; 239 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 1.7 MiB/s wr, 27 op/s
Jan 20 15:32:52 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/62047383' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:32:52 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/1025644885' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:32:52 compute-1 nova_compute[225855]: 2026-01-20 15:32:52.961 225859 DEBUG nova.network.neutron [req-8ddf2c85-37d3-46ae-be05-39ce818013fc req-83ace2c5-1087-4649-b9fa-54d0ca7cfcea 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 054e01d8-c9d1-4fb3-99e1-d417718d48c9] Updated VIF entry in instance network info cache for port 71bbd457-6ff9-4170-b4f0-18fb471606d4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 15:32:52 compute-1 nova_compute[225855]: 2026-01-20 15:32:52.962 225859 DEBUG nova.network.neutron [req-8ddf2c85-37d3-46ae-be05-39ce818013fc req-83ace2c5-1087-4649-b9fa-54d0ca7cfcea 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 054e01d8-c9d1-4fb3-99e1-d417718d48c9] Updating instance_info_cache with network_info: [{"id": "71bbd457-6ff9-4170-b4f0-18fb471606d4", "address": "fa:16:3e:65:ea:56", "network": {"id": "e1610a22-2f29-4495-85e7-ab2081f73701", "bridge": "br-int", "label": "tempest-network-smoke--313389391", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.20", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3168f57421fb49bfb94b85daedd1fe7d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap71bbd457-6f", "ovs_interfaceid": "71bbd457-6ff9-4170-b4f0-18fb471606d4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:32:52 compute-1 nova_compute[225855]: 2026-01-20 15:32:52.994 225859 DEBUG oslo_concurrency.lockutils [req-8ddf2c85-37d3-46ae-be05-39ce818013fc req-83ace2c5-1087-4649-b9fa-54d0ca7cfcea 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-054e01d8-c9d1-4fb3-99e1-d417718d48c9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:32:53 compute-1 nova_compute[225855]: 2026-01-20 15:32:53.232 225859 INFO nova.virt.libvirt.driver [None req-f04d4685-41d2-430c-9a80-84505e728853 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 054e01d8-c9d1-4fb3-99e1-d417718d48c9] Creating config drive at /var/lib/nova/instances/054e01d8-c9d1-4fb3-99e1-d417718d48c9/disk.config
Jan 20 15:32:53 compute-1 nova_compute[225855]: 2026-01-20 15:32:53.240 225859 DEBUG oslo_concurrency.processutils [None req-f04d4685-41d2-430c-9a80-84505e728853 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/054e01d8-c9d1-4fb3-99e1-d417718d48c9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmphke05hdm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:32:53 compute-1 nova_compute[225855]: 2026-01-20 15:32:53.375 225859 DEBUG oslo_concurrency.processutils [None req-f04d4685-41d2-430c-9a80-84505e728853 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/054e01d8-c9d1-4fb3-99e1-d417718d48c9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmphke05hdm" returned: 0 in 0.135s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:32:53 compute-1 nova_compute[225855]: 2026-01-20 15:32:53.401 225859 DEBUG nova.storage.rbd_utils [None req-f04d4685-41d2-430c-9a80-84505e728853 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] rbd image 054e01d8-c9d1-4fb3-99e1-d417718d48c9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:32:53 compute-1 nova_compute[225855]: 2026-01-20 15:32:53.405 225859 DEBUG oslo_concurrency.processutils [None req-f04d4685-41d2-430c-9a80-84505e728853 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/054e01d8-c9d1-4fb3-99e1-d417718d48c9/disk.config 054e01d8-c9d1-4fb3-99e1-d417718d48c9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:32:53 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:32:53 compute-1 nova_compute[225855]: 2026-01-20 15:32:53.549 225859 DEBUG oslo_concurrency.processutils [None req-f04d4685-41d2-430c-9a80-84505e728853 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/054e01d8-c9d1-4fb3-99e1-d417718d48c9/disk.config 054e01d8-c9d1-4fb3-99e1-d417718d48c9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:32:53 compute-1 nova_compute[225855]: 2026-01-20 15:32:53.550 225859 INFO nova.virt.libvirt.driver [None req-f04d4685-41d2-430c-9a80-84505e728853 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 054e01d8-c9d1-4fb3-99e1-d417718d48c9] Deleting local config drive /var/lib/nova/instances/054e01d8-c9d1-4fb3-99e1-d417718d48c9/disk.config because it was imported into RBD.
Jan 20 15:32:53 compute-1 kernel: tap71bbd457-6f: entered promiscuous mode
Jan 20 15:32:53 compute-1 NetworkManager[49104]: <info>  [1768923173.6015] manager: (tap71bbd457-6f): new Tun device (/org/freedesktop/NetworkManager/Devices/411)
Jan 20 15:32:53 compute-1 ovn_controller[130490]: 2026-01-20T15:32:53Z|00963|binding|INFO|Claiming lport 71bbd457-6ff9-4170-b4f0-18fb471606d4 for this chassis.
Jan 20 15:32:53 compute-1 ovn_controller[130490]: 2026-01-20T15:32:53Z|00964|binding|INFO|71bbd457-6ff9-4170-b4f0-18fb471606d4: Claiming fa:16:3e:65:ea:56 10.100.0.20
Jan 20 15:32:53 compute-1 nova_compute[225855]: 2026-01-20 15:32:53.645 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:32:53 compute-1 nova_compute[225855]: 2026-01-20 15:32:53.651 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:32:53 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:32:53.657 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:65:ea:56 10.100.0.20'], port_security=['fa:16:3e:65:ea:56 10.100.0.20'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.20/28', 'neutron:device_id': '054e01d8-c9d1-4fb3-99e1-d417718d48c9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e1610a22-2f29-4495-85e7-ab2081f73701', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3168f57421fb49bfb94b85daedd1fe7d', 'neutron:revision_number': '2', 'neutron:security_group_ids': '436420ae-5ad2-462b-90ca-5a96acbe39fc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=733f71a0-4d98-4c07-b692-f20cf2a632ed, chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], logical_port=71bbd457-6ff9-4170-b4f0-18fb471606d4) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:32:53 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:32:53.658 140354 INFO neutron.agent.ovn.metadata.agent [-] Port 71bbd457-6ff9-4170-b4f0-18fb471606d4 in datapath e1610a22-2f29-4495-85e7-ab2081f73701 bound to our chassis
Jan 20 15:32:53 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:32:53.659 140354 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e1610a22-2f29-4495-85e7-ab2081f73701
Jan 20 15:32:53 compute-1 systemd-udevd[322098]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 15:32:53 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:32:53.670 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[d9381682-eb95-4fde-bebd-847671125bd3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:32:53 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:32:53.672 140354 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tape1610a22-21 in ovnmeta-e1610a22-2f29-4495-85e7-ab2081f73701 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 20 15:32:53 compute-1 systemd-machined[194361]: New machine qemu-111-instance-000000d1.
Jan 20 15:32:53 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:32:53.673 229707 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tape1610a22-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 20 15:32:53 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:32:53.673 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[cfefeb7c-df45-42ba-800a-fd678ae3e1ff]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:32:53 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:32:53.674 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[d2cf90d8-5e56-4d7d-9f64-0de3777f3530]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:32:53 compute-1 NetworkManager[49104]: <info>  [1768923173.6808] device (tap71bbd457-6f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 20 15:32:53 compute-1 NetworkManager[49104]: <info>  [1768923173.6814] device (tap71bbd457-6f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 20 15:32:53 compute-1 ovn_controller[130490]: 2026-01-20T15:32:53Z|00965|binding|INFO|Setting lport 71bbd457-6ff9-4170-b4f0-18fb471606d4 ovn-installed in OVS
Jan 20 15:32:53 compute-1 ovn_controller[130490]: 2026-01-20T15:32:53Z|00966|binding|INFO|Setting lport 71bbd457-6ff9-4170-b4f0-18fb471606d4 up in Southbound
Jan 20 15:32:53 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:32:53.687 140466 DEBUG oslo.privsep.daemon [-] privsep: reply[6f35cfb8-b8f4-43ae-9a58-2f072325c39d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:32:53 compute-1 nova_compute[225855]: 2026-01-20 15:32:53.688 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:32:53 compute-1 systemd[1]: Started Virtual Machine qemu-111-instance-000000d1.
Jan 20 15:32:53 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:32:53.701 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[e5c1c04e-ec40-4656-81f0-6b6c6616b294]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:32:53 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:32:53.726 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[6f4a4b69-6df7-4d93-bf30-d9ada83b5767]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:32:53 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:32:53.731 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[b3f8bb43-39ff-45d8-af2b-41fe705d20e4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:32:53 compute-1 NetworkManager[49104]: <info>  [1768923173.7339] manager: (tape1610a22-20): new Veth device (/org/freedesktop/NetworkManager/Devices/412)
Jan 20 15:32:53 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:32:53.763 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[42b0b8c1-da54-4c8a-8fb1-372af8de7f3d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:32:53 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:32:53.766 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[111d5739-b32c-4c58-8551-5d999fdfe785]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:32:53 compute-1 NetworkManager[49104]: <info>  [1768923173.7864] device (tape1610a22-20): carrier: link connected
Jan 20 15:32:53 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:32:53.792 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[ba136932-5c24-461c-9199-fe74d0e116cb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:32:53 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:32:53.812 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[c87b00a6-038d-4279-a316-5a55339ac340]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape1610a22-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d5:09:61'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 274], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 821271, 'reachable_time': 16693, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 322132, 'error': None, 'target': 'ovnmeta-e1610a22-2f29-4495-85e7-ab2081f73701', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:32:53 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:32:53.825 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[febe7bf7-59d9-44f5-90ec-4889aeca7298]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fed5:961'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 821271, 'tstamp': 821271}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 322133, 'error': None, 'target': 'ovnmeta-e1610a22-2f29-4495-85e7-ab2081f73701', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:32:53 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:32:53.841 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[b8763b90-c8cc-4330-bcef-326da40e3d33]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape1610a22-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d5:09:61'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 274], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 821271, 'reachable_time': 16693, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 322134, 'error': None, 'target': 'ovnmeta-e1610a22-2f29-4495-85e7-ab2081f73701', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:32:53 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:32:53.876 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[647ff6a8-5c39-456d-a99b-6ae1c626fa96]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:32:53 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:32:53.927 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[71b1efb6-970b-4e02-bfea-208ce73e9903]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:32:53 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:32:53.929 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape1610a22-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:32:53 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:32:53.929 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 15:32:53 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:32:53.930 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape1610a22-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:32:53 compute-1 nova_compute[225855]: 2026-01-20 15:32:53.931 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:32:53 compute-1 kernel: tape1610a22-20: entered promiscuous mode
Jan 20 15:32:53 compute-1 NetworkManager[49104]: <info>  [1768923173.9325] manager: (tape1610a22-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/413)
Jan 20 15:32:53 compute-1 nova_compute[225855]: 2026-01-20 15:32:53.934 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:32:53 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:32:53.939 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape1610a22-20, col_values=(('external_ids', {'iface-id': '6d7499a4-3049-4825-9ec9-301fdceff3a8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:32:53 compute-1 nova_compute[225855]: 2026-01-20 15:32:53.940 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:32:53 compute-1 ovn_controller[130490]: 2026-01-20T15:32:53Z|00967|binding|INFO|Releasing lport 6d7499a4-3049-4825-9ec9-301fdceff3a8 from this chassis (sb_readonly=0)
Jan 20 15:32:53 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:32:53.942 140354 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/e1610a22-2f29-4495-85e7-ab2081f73701.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/e1610a22-2f29-4495-85e7-ab2081f73701.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 20 15:32:53 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:32:53.943 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[cec34c11-32b6-44f9-9c5f-935fe56ed34b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:32:53 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:32:53.943 140354 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 20 15:32:53 compute-1 ovn_metadata_agent[140349]: global
Jan 20 15:32:53 compute-1 ovn_metadata_agent[140349]:     log         /dev/log local0 debug
Jan 20 15:32:53 compute-1 ovn_metadata_agent[140349]:     log-tag     haproxy-metadata-proxy-e1610a22-2f29-4495-85e7-ab2081f73701
Jan 20 15:32:53 compute-1 ovn_metadata_agent[140349]:     user        root
Jan 20 15:32:53 compute-1 ovn_metadata_agent[140349]:     group       root
Jan 20 15:32:53 compute-1 ovn_metadata_agent[140349]:     maxconn     1024
Jan 20 15:32:53 compute-1 ovn_metadata_agent[140349]:     pidfile     /var/lib/neutron/external/pids/e1610a22-2f29-4495-85e7-ab2081f73701.pid.haproxy
Jan 20 15:32:53 compute-1 ovn_metadata_agent[140349]:     daemon
Jan 20 15:32:53 compute-1 ovn_metadata_agent[140349]: 
Jan 20 15:32:53 compute-1 ovn_metadata_agent[140349]: defaults
Jan 20 15:32:53 compute-1 ovn_metadata_agent[140349]:     log global
Jan 20 15:32:53 compute-1 ovn_metadata_agent[140349]:     mode http
Jan 20 15:32:53 compute-1 ovn_metadata_agent[140349]:     option httplog
Jan 20 15:32:53 compute-1 ovn_metadata_agent[140349]:     option dontlognull
Jan 20 15:32:53 compute-1 ovn_metadata_agent[140349]:     option http-server-close
Jan 20 15:32:53 compute-1 ovn_metadata_agent[140349]:     option forwardfor
Jan 20 15:32:53 compute-1 ovn_metadata_agent[140349]:     retries                 3
Jan 20 15:32:53 compute-1 ovn_metadata_agent[140349]:     timeout http-request    30s
Jan 20 15:32:53 compute-1 ovn_metadata_agent[140349]:     timeout connect         30s
Jan 20 15:32:53 compute-1 ovn_metadata_agent[140349]:     timeout client          32s
Jan 20 15:32:53 compute-1 ovn_metadata_agent[140349]:     timeout server          32s
Jan 20 15:32:53 compute-1 ovn_metadata_agent[140349]:     timeout http-keep-alive 30s
Jan 20 15:32:53 compute-1 ovn_metadata_agent[140349]: 
Jan 20 15:32:53 compute-1 ovn_metadata_agent[140349]: 
Jan 20 15:32:53 compute-1 ovn_metadata_agent[140349]: listen listener
Jan 20 15:32:53 compute-1 ovn_metadata_agent[140349]:     bind 169.254.169.254:80
Jan 20 15:32:53 compute-1 ovn_metadata_agent[140349]:     server metadata /var/lib/neutron/metadata_proxy
Jan 20 15:32:53 compute-1 ovn_metadata_agent[140349]:     http-request add-header X-OVN-Network-ID e1610a22-2f29-4495-85e7-ab2081f73701
Jan 20 15:32:53 compute-1 ovn_metadata_agent[140349]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 20 15:32:53 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:32:53.944 140354 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-e1610a22-2f29-4495-85e7-ab2081f73701', 'env', 'PROCESS_TAG=haproxy-e1610a22-2f29-4495-85e7-ab2081f73701', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/e1610a22-2f29-4495-85e7-ab2081f73701.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 20 15:32:53 compute-1 ceph-mon[81775]: pgmap v3321: 321 pgs: 321 active+clean; 239 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 1.7 MiB/s wr, 27 op/s
Jan 20 15:32:53 compute-1 nova_compute[225855]: 2026-01-20 15:32:53.955 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:32:54 compute-1 nova_compute[225855]: 2026-01-20 15:32:54.035 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768923174.034733, 054e01d8-c9d1-4fb3-99e1-d417718d48c9 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:32:54 compute-1 nova_compute[225855]: 2026-01-20 15:32:54.035 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 054e01d8-c9d1-4fb3-99e1-d417718d48c9] VM Started (Lifecycle Event)
Jan 20 15:32:54 compute-1 nova_compute[225855]: 2026-01-20 15:32:54.062 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 054e01d8-c9d1-4fb3-99e1-d417718d48c9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:32:54 compute-1 nova_compute[225855]: 2026-01-20 15:32:54.065 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768923174.0348258, 054e01d8-c9d1-4fb3-99e1-d417718d48c9 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:32:54 compute-1 nova_compute[225855]: 2026-01-20 15:32:54.066 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 054e01d8-c9d1-4fb3-99e1-d417718d48c9] VM Paused (Lifecycle Event)
Jan 20 15:32:54 compute-1 nova_compute[225855]: 2026-01-20 15:32:54.096 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 054e01d8-c9d1-4fb3-99e1-d417718d48c9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:32:54 compute-1 nova_compute[225855]: 2026-01-20 15:32:54.100 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 054e01d8-c9d1-4fb3-99e1-d417718d48c9] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 15:32:54 compute-1 nova_compute[225855]: 2026-01-20 15:32:54.129 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 054e01d8-c9d1-4fb3-99e1-d417718d48c9] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 15:32:54 compute-1 nova_compute[225855]: 2026-01-20 15:32:54.150 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:32:54 compute-1 podman[322208]: 2026-01-20 15:32:54.299724672 +0000 UTC m=+0.066069868 container create 137c6f3d853ad68f70cddfd1e2ee96838bb27d94b52a9a2af830bfd96ffb6162 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e1610a22-2f29-4495-85e7-ab2081f73701, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202)
Jan 20 15:32:54 compute-1 systemd[1]: Started libpod-conmon-137c6f3d853ad68f70cddfd1e2ee96838bb27d94b52a9a2af830bfd96ffb6162.scope.
Jan 20 15:32:54 compute-1 systemd[1]: Started libcrun container.
Jan 20 15:32:54 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c24838984a067b8901522e09bee329bb5043ee4222b2b8328f8d5882b1349f3b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 20 15:32:54 compute-1 podman[322208]: 2026-01-20 15:32:54.274365908 +0000 UTC m=+0.040711124 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 20 15:32:54 compute-1 podman[322208]: 2026-01-20 15:32:54.378317046 +0000 UTC m=+0.144662272 container init 137c6f3d853ad68f70cddfd1e2ee96838bb27d94b52a9a2af830bfd96ffb6162 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e1610a22-2f29-4495-85e7-ab2081f73701, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 20 15:32:54 compute-1 podman[322208]: 2026-01-20 15:32:54.384998386 +0000 UTC m=+0.151343582 container start 137c6f3d853ad68f70cddfd1e2ee96838bb27d94b52a9a2af830bfd96ffb6162 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e1610a22-2f29-4495-85e7-ab2081f73701, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 20 15:32:54 compute-1 podman[322219]: 2026-01-20 15:32:54.396052142 +0000 UTC m=+0.058496001 container health_status 533cd8e0c3b8b7f910f15cef49e7f879ab46c12ce646a35628c15f13725eb500 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent)
Jan 20 15:32:54 compute-1 neutron-haproxy-ovnmeta-e1610a22-2f29-4495-85e7-ab2081f73701[322222]: [NOTICE]   (322243) : New worker (322247) forked
Jan 20 15:32:54 compute-1 neutron-haproxy-ovnmeta-e1610a22-2f29-4495-85e7-ab2081f73701[322222]: [NOTICE]   (322243) : Loading success.
Jan 20 15:32:54 compute-1 nova_compute[225855]: 2026-01-20 15:32:54.703 225859 DEBUG nova.compute.manager [req-3f493aaf-4bab-4659-8e93-c1b3d88511a0 req-9556ff14-6f9b-4d92-a9e2-aa06810e326e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 054e01d8-c9d1-4fb3-99e1-d417718d48c9] Received event network-vif-plugged-71bbd457-6ff9-4170-b4f0-18fb471606d4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:32:54 compute-1 nova_compute[225855]: 2026-01-20 15:32:54.703 225859 DEBUG oslo_concurrency.lockutils [req-3f493aaf-4bab-4659-8e93-c1b3d88511a0 req-9556ff14-6f9b-4d92-a9e2-aa06810e326e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "054e01d8-c9d1-4fb3-99e1-d417718d48c9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:32:54 compute-1 nova_compute[225855]: 2026-01-20 15:32:54.704 225859 DEBUG oslo_concurrency.lockutils [req-3f493aaf-4bab-4659-8e93-c1b3d88511a0 req-9556ff14-6f9b-4d92-a9e2-aa06810e326e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "054e01d8-c9d1-4fb3-99e1-d417718d48c9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:32:54 compute-1 nova_compute[225855]: 2026-01-20 15:32:54.704 225859 DEBUG oslo_concurrency.lockutils [req-3f493aaf-4bab-4659-8e93-c1b3d88511a0 req-9556ff14-6f9b-4d92-a9e2-aa06810e326e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "054e01d8-c9d1-4fb3-99e1-d417718d48c9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:32:54 compute-1 nova_compute[225855]: 2026-01-20 15:32:54.704 225859 DEBUG nova.compute.manager [req-3f493aaf-4bab-4659-8e93-c1b3d88511a0 req-9556ff14-6f9b-4d92-a9e2-aa06810e326e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 054e01d8-c9d1-4fb3-99e1-d417718d48c9] Processing event network-vif-plugged-71bbd457-6ff9-4170-b4f0-18fb471606d4 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 20 15:32:54 compute-1 nova_compute[225855]: 2026-01-20 15:32:54.705 225859 DEBUG nova.compute.manager [None req-f04d4685-41d2-430c-9a80-84505e728853 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 054e01d8-c9d1-4fb3-99e1-d417718d48c9] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 20 15:32:54 compute-1 nova_compute[225855]: 2026-01-20 15:32:54.710 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768923174.710074, 054e01d8-c9d1-4fb3-99e1-d417718d48c9 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:32:54 compute-1 nova_compute[225855]: 2026-01-20 15:32:54.711 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 054e01d8-c9d1-4fb3-99e1-d417718d48c9] VM Resumed (Lifecycle Event)
Jan 20 15:32:54 compute-1 nova_compute[225855]: 2026-01-20 15:32:54.712 225859 DEBUG nova.virt.libvirt.driver [None req-f04d4685-41d2-430c-9a80-84505e728853 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 054e01d8-c9d1-4fb3-99e1-d417718d48c9] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 20 15:32:54 compute-1 nova_compute[225855]: 2026-01-20 15:32:54.717 225859 INFO nova.virt.libvirt.driver [-] [instance: 054e01d8-c9d1-4fb3-99e1-d417718d48c9] Instance spawned successfully.
Jan 20 15:32:54 compute-1 nova_compute[225855]: 2026-01-20 15:32:54.718 225859 DEBUG nova.virt.libvirt.driver [None req-f04d4685-41d2-430c-9a80-84505e728853 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 054e01d8-c9d1-4fb3-99e1-d417718d48c9] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 20 15:32:54 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:32:54 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:32:54 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:32:54.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:32:54 compute-1 nova_compute[225855]: 2026-01-20 15:32:54.739 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 054e01d8-c9d1-4fb3-99e1-d417718d48c9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:32:54 compute-1 nova_compute[225855]: 2026-01-20 15:32:54.743 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 054e01d8-c9d1-4fb3-99e1-d417718d48c9] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 15:32:54 compute-1 nova_compute[225855]: 2026-01-20 15:32:54.750 225859 DEBUG nova.virt.libvirt.driver [None req-f04d4685-41d2-430c-9a80-84505e728853 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 054e01d8-c9d1-4fb3-99e1-d417718d48c9] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:32:54 compute-1 nova_compute[225855]: 2026-01-20 15:32:54.751 225859 DEBUG nova.virt.libvirt.driver [None req-f04d4685-41d2-430c-9a80-84505e728853 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 054e01d8-c9d1-4fb3-99e1-d417718d48c9] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:32:54 compute-1 nova_compute[225855]: 2026-01-20 15:32:54.751 225859 DEBUG nova.virt.libvirt.driver [None req-f04d4685-41d2-430c-9a80-84505e728853 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 054e01d8-c9d1-4fb3-99e1-d417718d48c9] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:32:54 compute-1 nova_compute[225855]: 2026-01-20 15:32:54.752 225859 DEBUG nova.virt.libvirt.driver [None req-f04d4685-41d2-430c-9a80-84505e728853 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 054e01d8-c9d1-4fb3-99e1-d417718d48c9] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:32:54 compute-1 nova_compute[225855]: 2026-01-20 15:32:54.752 225859 DEBUG nova.virt.libvirt.driver [None req-f04d4685-41d2-430c-9a80-84505e728853 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 054e01d8-c9d1-4fb3-99e1-d417718d48c9] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:32:54 compute-1 nova_compute[225855]: 2026-01-20 15:32:54.752 225859 DEBUG nova.virt.libvirt.driver [None req-f04d4685-41d2-430c-9a80-84505e728853 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 054e01d8-c9d1-4fb3-99e1-d417718d48c9] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:32:54 compute-1 nova_compute[225855]: 2026-01-20 15:32:54.759 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 054e01d8-c9d1-4fb3-99e1-d417718d48c9] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 15:32:54 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:32:54 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:32:54 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:32:54.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:32:54 compute-1 nova_compute[225855]: 2026-01-20 15:32:54.798 225859 INFO nova.compute.manager [None req-f04d4685-41d2-430c-9a80-84505e728853 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 054e01d8-c9d1-4fb3-99e1-d417718d48c9] Took 6.40 seconds to spawn the instance on the hypervisor.
Jan 20 15:32:54 compute-1 nova_compute[225855]: 2026-01-20 15:32:54.799 225859 DEBUG nova.compute.manager [None req-f04d4685-41d2-430c-9a80-84505e728853 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 054e01d8-c9d1-4fb3-99e1-d417718d48c9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:32:54 compute-1 nova_compute[225855]: 2026-01-20 15:32:54.879 225859 INFO nova.compute.manager [None req-f04d4685-41d2-430c-9a80-84505e728853 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 054e01d8-c9d1-4fb3-99e1-d417718d48c9] Took 7.40 seconds to build instance.
Jan 20 15:32:54 compute-1 nova_compute[225855]: 2026-01-20 15:32:54.896 225859 DEBUG oslo_concurrency.lockutils [None req-f04d4685-41d2-430c-9a80-84505e728853 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Lock "054e01d8-c9d1-4fb3-99e1-d417718d48c9" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.486s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:32:56 compute-1 sudo[322258]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:32:56 compute-1 sudo[322258]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:32:56 compute-1 sudo[322258]: pam_unix(sudo:session): session closed for user root
Jan 20 15:32:56 compute-1 sudo[322283]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 15:32:56 compute-1 sudo[322283]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:32:56 compute-1 sudo[322283]: pam_unix(sudo:session): session closed for user root
Jan 20 15:32:56 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:32:56 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:32:56 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:32:56.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:32:56 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:32:56 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:32:56 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:32:56.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:32:56 compute-1 nova_compute[225855]: 2026-01-20 15:32:56.856 225859 DEBUG nova.compute.manager [req-44bbdca5-9374-48fe-98dc-127b6934573c req-a6559dbc-75e8-4408-82f1-ef7bf1829420 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 054e01d8-c9d1-4fb3-99e1-d417718d48c9] Received event network-vif-plugged-71bbd457-6ff9-4170-b4f0-18fb471606d4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:32:56 compute-1 nova_compute[225855]: 2026-01-20 15:32:56.857 225859 DEBUG oslo_concurrency.lockutils [req-44bbdca5-9374-48fe-98dc-127b6934573c req-a6559dbc-75e8-4408-82f1-ef7bf1829420 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "054e01d8-c9d1-4fb3-99e1-d417718d48c9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:32:56 compute-1 nova_compute[225855]: 2026-01-20 15:32:56.857 225859 DEBUG oslo_concurrency.lockutils [req-44bbdca5-9374-48fe-98dc-127b6934573c req-a6559dbc-75e8-4408-82f1-ef7bf1829420 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "054e01d8-c9d1-4fb3-99e1-d417718d48c9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:32:56 compute-1 nova_compute[225855]: 2026-01-20 15:32:56.858 225859 DEBUG oslo_concurrency.lockutils [req-44bbdca5-9374-48fe-98dc-127b6934573c req-a6559dbc-75e8-4408-82f1-ef7bf1829420 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "054e01d8-c9d1-4fb3-99e1-d417718d48c9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:32:56 compute-1 nova_compute[225855]: 2026-01-20 15:32:56.858 225859 DEBUG nova.compute.manager [req-44bbdca5-9374-48fe-98dc-127b6934573c req-a6559dbc-75e8-4408-82f1-ef7bf1829420 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 054e01d8-c9d1-4fb3-99e1-d417718d48c9] No waiting events found dispatching network-vif-plugged-71bbd457-6ff9-4170-b4f0-18fb471606d4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:32:56 compute-1 nova_compute[225855]: 2026-01-20 15:32:56.858 225859 WARNING nova.compute.manager [req-44bbdca5-9374-48fe-98dc-127b6934573c req-a6559dbc-75e8-4408-82f1-ef7bf1829420 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 054e01d8-c9d1-4fb3-99e1-d417718d48c9] Received unexpected event network-vif-plugged-71bbd457-6ff9-4170-b4f0-18fb471606d4 for instance with vm_state active and task_state None.
Jan 20 15:32:56 compute-1 ceph-mon[81775]: pgmap v3322: 321 pgs: 321 active+clean; 246 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 30 op/s
Jan 20 15:32:56 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:32:56 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:32:57 compute-1 nova_compute[225855]: 2026-01-20 15:32:57.757 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:32:57 compute-1 ceph-mon[81775]: pgmap v3323: 321 pgs: 321 active+clean; 246 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 334 KiB/s rd, 1.8 MiB/s wr, 47 op/s
Jan 20 15:32:58 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:32:58 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:32:58 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:32:58 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:32:58.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:32:58 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:32:58 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:32:58 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:32:58.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:32:59 compute-1 nova_compute[225855]: 2026-01-20 15:32:59.153 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:33:00 compute-1 ceph-mon[81775]: pgmap v3324: 321 pgs: 321 active+clean; 246 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 508 KiB/s rd, 1.8 MiB/s wr, 55 op/s
Jan 20 15:33:00 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:33:00 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:33:00 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:33:00.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:33:00 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:33:00 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:33:00 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:33:00.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:33:02 compute-1 ceph-mon[81775]: pgmap v3325: 321 pgs: 321 active+clean; 246 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Jan 20 15:33:02 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:33:02 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:33:02 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:33:02.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:33:02 compute-1 nova_compute[225855]: 2026-01-20 15:33:02.761 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:33:02 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:33:02 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:33:02 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:33:02.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:33:03 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:33:04 compute-1 nova_compute[225855]: 2026-01-20 15:33:04.155 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:33:04 compute-1 ceph-mon[81775]: pgmap v3326: 321 pgs: 321 active+clean; 246 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 133 KiB/s wr, 75 op/s
Jan 20 15:33:04 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:33:04 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:33:04 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:33:04.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:33:04 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:33:04 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:33:04 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:33:04.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:33:06 compute-1 ceph-mon[81775]: pgmap v3327: 321 pgs: 321 active+clean; 246 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 135 KiB/s wr, 75 op/s
Jan 20 15:33:06 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:33:06 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:33:06 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:33:06.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:33:06 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:33:06 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:33:06 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:33:06.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:33:07 compute-1 nova_compute[225855]: 2026-01-20 15:33:07.764 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:33:08 compute-1 ceph-mon[81775]: pgmap v3328: 321 pgs: 321 active+clean; 246 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 2.2 KiB/s wr, 72 op/s
Jan 20 15:33:08 compute-1 ovn_controller[130490]: 2026-01-20T15:33:08Z|00118|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:65:ea:56 10.100.0.20
Jan 20 15:33:08 compute-1 ovn_controller[130490]: 2026-01-20T15:33:08Z|00119|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:65:ea:56 10.100.0.20
Jan 20 15:33:08 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:33:08 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:33:08 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:33:08 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:33:08.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:33:08 compute-1 sudo[322315]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:33:08 compute-1 sudo[322315]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:33:08 compute-1 sudo[322315]: pam_unix(sudo:session): session closed for user root
Jan 20 15:33:08 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:33:08 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:33:08 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:33:08.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:33:08 compute-1 sudo[322340]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:33:08 compute-1 sudo[322340]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:33:08 compute-1 sudo[322340]: pam_unix(sudo:session): session closed for user root
Jan 20 15:33:09 compute-1 nova_compute[225855]: 2026-01-20 15:33:09.156 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:33:10 compute-1 ceph-mon[81775]: pgmap v3329: 321 pgs: 321 active+clean; 253 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 363 KiB/s wr, 82 op/s
Jan 20 15:33:10 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:33:10 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:33:10 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:33:10.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:33:10 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:33:10 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:33:10 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:33:10.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:33:12 compute-1 nova_compute[225855]: 2026-01-20 15:33:12.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:33:12 compute-1 nova_compute[225855]: 2026-01-20 15:33:12.340 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 15:33:12 compute-1 ceph-mon[81775]: pgmap v3330: 321 pgs: 321 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.8 MiB/s rd, 2.1 MiB/s wr, 110 op/s
Jan 20 15:33:12 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:33:12 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:33:12 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:33:12.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:33:12 compute-1 nova_compute[225855]: 2026-01-20 15:33:12.767 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:33:12 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:33:12 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:33:12 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:33:12.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:33:13 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:33:13 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/1396469670' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:33:13 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/1396469670' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:33:14 compute-1 nova_compute[225855]: 2026-01-20 15:33:14.159 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:33:14 compute-1 ceph-mon[81775]: pgmap v3331: 321 pgs: 321 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Jan 20 15:33:14 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/2273594203' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:33:14 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:33:14 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:33:14 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:33:14.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:33:14 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:33:14 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:33:14 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:33:14.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:33:15 compute-1 podman[322370]: 2026-01-20 15:33:15.042602513 +0000 UTC m=+0.077405061 container health_status 72be95c12e041eb2cd1a16ad70dfa1391d9f03a5f3180a614ead9459dcfcd46f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_controller)
Jan 20 15:33:15 compute-1 nova_compute[225855]: 2026-01-20 15:33:15.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:33:15 compute-1 nova_compute[225855]: 2026-01-20 15:33:15.340 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 15:33:15 compute-1 nova_compute[225855]: 2026-01-20 15:33:15.340 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 20 15:33:15 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/959369576' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:33:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:33:16.456 140354 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:33:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:33:16.457 140354 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:33:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:33:16.457 140354 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:33:16 compute-1 nova_compute[225855]: 2026-01-20 15:33:16.549 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "refresh_cache-054e01d8-c9d1-4fb3-99e1-d417718d48c9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:33:16 compute-1 nova_compute[225855]: 2026-01-20 15:33:16.550 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquired lock "refresh_cache-054e01d8-c9d1-4fb3-99e1-d417718d48c9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:33:16 compute-1 nova_compute[225855]: 2026-01-20 15:33:16.550 225859 DEBUG nova.network.neutron [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] [instance: 054e01d8-c9d1-4fb3-99e1-d417718d48c9] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 20 15:33:16 compute-1 nova_compute[225855]: 2026-01-20 15:33:16.550 225859 DEBUG nova.objects.instance [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 054e01d8-c9d1-4fb3-99e1-d417718d48c9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:33:16 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:33:16 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:33:16 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:33:16.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:33:16 compute-1 ceph-mon[81775]: pgmap v3332: 321 pgs: 321 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Jan 20 15:33:16 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:33:16 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:33:16 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:33:16.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:33:17 compute-1 nova_compute[225855]: 2026-01-20 15:33:17.769 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:33:17 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/4217607316' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:33:18 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:33:18 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:33:18 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:33:18 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:33:18.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:33:18 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:33:18 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:33:18 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:33:18.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:33:18 compute-1 ceph-mon[81775]: pgmap v3333: 321 pgs: 321 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Jan 20 15:33:18 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/700645636' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:33:19 compute-1 nova_compute[225855]: 2026-01-20 15:33:19.161 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:33:19 compute-1 nova_compute[225855]: 2026-01-20 15:33:19.537 225859 DEBUG nova.network.neutron [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] [instance: 054e01d8-c9d1-4fb3-99e1-d417718d48c9] Updating instance_info_cache with network_info: [{"id": "71bbd457-6ff9-4170-b4f0-18fb471606d4", "address": "fa:16:3e:65:ea:56", "network": {"id": "e1610a22-2f29-4495-85e7-ab2081f73701", "bridge": "br-int", "label": "tempest-network-smoke--313389391", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.20", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3168f57421fb49bfb94b85daedd1fe7d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap71bbd457-6f", "ovs_interfaceid": "71bbd457-6ff9-4170-b4f0-18fb471606d4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:33:19 compute-1 nova_compute[225855]: 2026-01-20 15:33:19.553 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Releasing lock "refresh_cache-054e01d8-c9d1-4fb3-99e1-d417718d48c9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:33:19 compute-1 nova_compute[225855]: 2026-01-20 15:33:19.554 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] [instance: 054e01d8-c9d1-4fb3-99e1-d417718d48c9] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 20 15:33:19 compute-1 nova_compute[225855]: 2026-01-20 15:33:19.554 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:33:19 compute-1 nova_compute[225855]: 2026-01-20 15:33:19.555 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:33:19 compute-1 nova_compute[225855]: 2026-01-20 15:33:19.555 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:33:19 compute-1 nova_compute[225855]: 2026-01-20 15:33:19.555 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:33:19 compute-1 nova_compute[225855]: 2026-01-20 15:33:19.555 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:33:19 compute-1 nova_compute[225855]: 2026-01-20 15:33:19.575 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:33:19 compute-1 nova_compute[225855]: 2026-01-20 15:33:19.576 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:33:19 compute-1 nova_compute[225855]: 2026-01-20 15:33:19.576 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:33:19 compute-1 nova_compute[225855]: 2026-01-20 15:33:19.576 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 15:33:19 compute-1 nova_compute[225855]: 2026-01-20 15:33:19.577 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:33:19 compute-1 ceph-mon[81775]: pgmap v3334: 321 pgs: 321 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Jan 20 15:33:20 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:33:20 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1180419469' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:33:20 compute-1 nova_compute[225855]: 2026-01-20 15:33:20.048 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:33:20 compute-1 nova_compute[225855]: 2026-01-20 15:33:20.119 225859 DEBUG nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] skipping disk for instance-000000d1 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 15:33:20 compute-1 nova_compute[225855]: 2026-01-20 15:33:20.119 225859 DEBUG nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] skipping disk for instance-000000d1 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 15:33:20 compute-1 nova_compute[225855]: 2026-01-20 15:33:20.264 225859 WARNING nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 15:33:20 compute-1 nova_compute[225855]: 2026-01-20 15:33:20.266 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4052MB free_disk=20.897071838378906GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 15:33:20 compute-1 nova_compute[225855]: 2026-01-20 15:33:20.266 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:33:20 compute-1 nova_compute[225855]: 2026-01-20 15:33:20.267 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:33:20 compute-1 nova_compute[225855]: 2026-01-20 15:33:20.367 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Instance 054e01d8-c9d1-4fb3-99e1-d417718d48c9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 20 15:33:20 compute-1 nova_compute[225855]: 2026-01-20 15:33:20.368 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 15:33:20 compute-1 nova_compute[225855]: 2026-01-20 15:33:20.368 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 15:33:20 compute-1 nova_compute[225855]: 2026-01-20 15:33:20.419 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:33:20 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:33:20 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:33:20 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:33:20.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:33:20 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:33:20 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:33:20 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:33:20.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:33:20 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:33:20 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3427014143' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:33:20 compute-1 nova_compute[225855]: 2026-01-20 15:33:20.864 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:33:20 compute-1 nova_compute[225855]: 2026-01-20 15:33:20.870 225859 DEBUG nova.compute.provider_tree [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:33:20 compute-1 nova_compute[225855]: 2026-01-20 15:33:20.883 225859 DEBUG nova.scheduler.client.report [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:33:20 compute-1 nova_compute[225855]: 2026-01-20 15:33:20.905 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 15:33:20 compute-1 nova_compute[225855]: 2026-01-20 15:33:20.906 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.639s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:33:20 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/1180419469' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:33:20 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/3427014143' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:33:21 compute-1 sshd-session[322445]: banner exchange: Connection from 3.134.148.59 port 36310: invalid format
Jan 20 15:33:21 compute-1 nova_compute[225855]: 2026-01-20 15:33:21.691 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:33:21 compute-1 ceph-mon[81775]: pgmap v3335: 321 pgs: 321 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 206 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Jan 20 15:33:22 compute-1 sshd-session[322367]: Connection closed by 3.134.148.59 port 37774 [preauth]
Jan 20 15:33:22 compute-1 nova_compute[225855]: 2026-01-20 15:33:22.558 225859 DEBUG oslo_concurrency.lockutils [None req-14700326-29a8-4b3d-b34a-acbcbf3acbaa 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Acquiring lock "054e01d8-c9d1-4fb3-99e1-d417718d48c9" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:33:22 compute-1 nova_compute[225855]: 2026-01-20 15:33:22.558 225859 DEBUG oslo_concurrency.lockutils [None req-14700326-29a8-4b3d-b34a-acbcbf3acbaa 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Lock "054e01d8-c9d1-4fb3-99e1-d417718d48c9" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:33:22 compute-1 nova_compute[225855]: 2026-01-20 15:33:22.559 225859 DEBUG oslo_concurrency.lockutils [None req-14700326-29a8-4b3d-b34a-acbcbf3acbaa 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Acquiring lock "054e01d8-c9d1-4fb3-99e1-d417718d48c9-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:33:22 compute-1 nova_compute[225855]: 2026-01-20 15:33:22.559 225859 DEBUG oslo_concurrency.lockutils [None req-14700326-29a8-4b3d-b34a-acbcbf3acbaa 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Lock "054e01d8-c9d1-4fb3-99e1-d417718d48c9-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:33:22 compute-1 nova_compute[225855]: 2026-01-20 15:33:22.559 225859 DEBUG oslo_concurrency.lockutils [None req-14700326-29a8-4b3d-b34a-acbcbf3acbaa 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Lock "054e01d8-c9d1-4fb3-99e1-d417718d48c9-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:33:22 compute-1 nova_compute[225855]: 2026-01-20 15:33:22.560 225859 INFO nova.compute.manager [None req-14700326-29a8-4b3d-b34a-acbcbf3acbaa 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 054e01d8-c9d1-4fb3-99e1-d417718d48c9] Terminating instance
Jan 20 15:33:22 compute-1 nova_compute[225855]: 2026-01-20 15:33:22.561 225859 DEBUG nova.compute.manager [None req-14700326-29a8-4b3d-b34a-acbcbf3acbaa 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 054e01d8-c9d1-4fb3-99e1-d417718d48c9] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 20 15:33:22 compute-1 kernel: tap71bbd457-6f (unregistering): left promiscuous mode
Jan 20 15:33:22 compute-1 NetworkManager[49104]: <info>  [1768923202.6144] device (tap71bbd457-6f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 20 15:33:22 compute-1 ovn_controller[130490]: 2026-01-20T15:33:22Z|00968|binding|INFO|Releasing lport 71bbd457-6ff9-4170-b4f0-18fb471606d4 from this chassis (sb_readonly=0)
Jan 20 15:33:22 compute-1 ovn_controller[130490]: 2026-01-20T15:33:22Z|00969|binding|INFO|Setting lport 71bbd457-6ff9-4170-b4f0-18fb471606d4 down in Southbound
Jan 20 15:33:22 compute-1 ovn_controller[130490]: 2026-01-20T15:33:22Z|00970|binding|INFO|Removing iface tap71bbd457-6f ovn-installed in OVS
Jan 20 15:33:22 compute-1 nova_compute[225855]: 2026-01-20 15:33:22.621 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:33:22 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:33:22.628 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:65:ea:56 10.100.0.20'], port_security=['fa:16:3e:65:ea:56 10.100.0.20'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.20/28', 'neutron:device_id': '054e01d8-c9d1-4fb3-99e1-d417718d48c9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e1610a22-2f29-4495-85e7-ab2081f73701', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3168f57421fb49bfb94b85daedd1fe7d', 'neutron:revision_number': '4', 'neutron:security_group_ids': '436420ae-5ad2-462b-90ca-5a96acbe39fc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=733f71a0-4d98-4c07-b692-f20cf2a632ed, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], logical_port=71bbd457-6ff9-4170-b4f0-18fb471606d4) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:33:22 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:33:22.629 140354 INFO neutron.agent.ovn.metadata.agent [-] Port 71bbd457-6ff9-4170-b4f0-18fb471606d4 in datapath e1610a22-2f29-4495-85e7-ab2081f73701 unbound from our chassis
Jan 20 15:33:22 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:33:22.630 140354 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e1610a22-2f29-4495-85e7-ab2081f73701, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 20 15:33:22 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:33:22.631 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[883d867b-9b80-4fac-a1ee-f769419dc773]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:33:22 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:33:22.632 140354 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-e1610a22-2f29-4495-85e7-ab2081f73701 namespace which is not needed anymore
Jan 20 15:33:22 compute-1 nova_compute[225855]: 2026-01-20 15:33:22.643 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:33:22 compute-1 systemd[1]: machine-qemu\x2d111\x2dinstance\x2d000000d1.scope: Deactivated successfully.
Jan 20 15:33:22 compute-1 systemd[1]: machine-qemu\x2d111\x2dinstance\x2d000000d1.scope: Consumed 13.789s CPU time.
Jan 20 15:33:22 compute-1 systemd-machined[194361]: Machine qemu-111-instance-000000d1 terminated.
Jan 20 15:33:22 compute-1 neutron-haproxy-ovnmeta-e1610a22-2f29-4495-85e7-ab2081f73701[322222]: [NOTICE]   (322243) : haproxy version is 2.8.14-c23fe91
Jan 20 15:33:22 compute-1 neutron-haproxy-ovnmeta-e1610a22-2f29-4495-85e7-ab2081f73701[322222]: [NOTICE]   (322243) : path to executable is /usr/sbin/haproxy
Jan 20 15:33:22 compute-1 neutron-haproxy-ovnmeta-e1610a22-2f29-4495-85e7-ab2081f73701[322222]: [WARNING]  (322243) : Exiting Master process...
Jan 20 15:33:22 compute-1 neutron-haproxy-ovnmeta-e1610a22-2f29-4495-85e7-ab2081f73701[322222]: [ALERT]    (322243) : Current worker (322247) exited with code 143 (Terminated)
Jan 20 15:33:22 compute-1 neutron-haproxy-ovnmeta-e1610a22-2f29-4495-85e7-ab2081f73701[322222]: [WARNING]  (322243) : All workers exited. Exiting... (0)
Jan 20 15:33:22 compute-1 systemd[1]: libpod-137c6f3d853ad68f70cddfd1e2ee96838bb27d94b52a9a2af830bfd96ffb6162.scope: Deactivated successfully.
Jan 20 15:33:22 compute-1 podman[322471]: 2026-01-20 15:33:22.76197491 +0000 UTC m=+0.047137297 container died 137c6f3d853ad68f70cddfd1e2ee96838bb27d94b52a9a2af830bfd96ffb6162 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e1610a22-2f29-4495-85e7-ab2081f73701, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 20 15:33:22 compute-1 nova_compute[225855]: 2026-01-20 15:33:22.770 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:33:22 compute-1 nova_compute[225855]: 2026-01-20 15:33:22.780 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:33:22 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:33:22 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:33:22 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:33:22.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:33:22 compute-1 nova_compute[225855]: 2026-01-20 15:33:22.785 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:33:22 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-137c6f3d853ad68f70cddfd1e2ee96838bb27d94b52a9a2af830bfd96ffb6162-userdata-shm.mount: Deactivated successfully.
Jan 20 15:33:22 compute-1 systemd[1]: var-lib-containers-storage-overlay-c24838984a067b8901522e09bee329bb5043ee4222b2b8328f8d5882b1349f3b-merged.mount: Deactivated successfully.
Jan 20 15:33:22 compute-1 nova_compute[225855]: 2026-01-20 15:33:22.795 225859 INFO nova.virt.libvirt.driver [-] [instance: 054e01d8-c9d1-4fb3-99e1-d417718d48c9] Instance destroyed successfully.
Jan 20 15:33:22 compute-1 nova_compute[225855]: 2026-01-20 15:33:22.796 225859 DEBUG nova.objects.instance [None req-14700326-29a8-4b3d-b34a-acbcbf3acbaa 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Lazy-loading 'resources' on Instance uuid 054e01d8-c9d1-4fb3-99e1-d417718d48c9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:33:22 compute-1 podman[322471]: 2026-01-20 15:33:22.800677585 +0000 UTC m=+0.085839962 container cleanup 137c6f3d853ad68f70cddfd1e2ee96838bb27d94b52a9a2af830bfd96ffb6162 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e1610a22-2f29-4495-85e7-ab2081f73701, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 20 15:33:22 compute-1 nova_compute[225855]: 2026-01-20 15:33:22.811 225859 DEBUG nova.virt.libvirt.vif [None req-14700326-29a8-4b3d-b34a-acbcbf3acbaa 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T15:32:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1975990603',display_name='tempest-TestNetworkBasicOps-server-1975990603',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1975990603',id=209,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBIIq5p1Z8aKbdSJMUPSMnjWUaTZorIMa+mXmK10gXmX/oHg+Z5q1Rmf+/0TauJDUZqczNGvwDzE8yxRK1lxgnRI2fdz8rl+BuPz+yhlF83YWDX8Jzvo5YEkj80ZkenoXA==',key_name='tempest-TestNetworkBasicOps-1869687346',keypairs=<?>,launch_index=0,launched_at=2026-01-20T15:32:54Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3168f57421fb49bfb94b85daedd1fe7d',ramdisk_id='',reservation_id='r-ox7roysq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-807695970',owner_user_name='tempest-TestNetworkBasicOps-807695970-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T15:32:54Z,user_data=None,user_id='5338aa65dc0e4326a66ce79053787f14',uuid=054e01d8-c9d1-4fb3-99e1-d417718d48c9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "71bbd457-6ff9-4170-b4f0-18fb471606d4", "address": "fa:16:3e:65:ea:56", "network": {"id": "e1610a22-2f29-4495-85e7-ab2081f73701", "bridge": "br-int", "label": "tempest-network-smoke--313389391", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.20", "type": "fixed", "version": 
4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3168f57421fb49bfb94b85daedd1fe7d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap71bbd457-6f", "ovs_interfaceid": "71bbd457-6ff9-4170-b4f0-18fb471606d4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 20 15:33:22 compute-1 nova_compute[225855]: 2026-01-20 15:33:22.811 225859 DEBUG nova.network.os_vif_util [None req-14700326-29a8-4b3d-b34a-acbcbf3acbaa 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Converting VIF {"id": "71bbd457-6ff9-4170-b4f0-18fb471606d4", "address": "fa:16:3e:65:ea:56", "network": {"id": "e1610a22-2f29-4495-85e7-ab2081f73701", "bridge": "br-int", "label": "tempest-network-smoke--313389391", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.20", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3168f57421fb49bfb94b85daedd1fe7d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap71bbd457-6f", "ovs_interfaceid": "71bbd457-6ff9-4170-b4f0-18fb471606d4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 15:33:22 compute-1 nova_compute[225855]: 2026-01-20 15:33:22.812 225859 DEBUG nova.network.os_vif_util [None req-14700326-29a8-4b3d-b34a-acbcbf3acbaa 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:65:ea:56,bridge_name='br-int',has_traffic_filtering=True,id=71bbd457-6ff9-4170-b4f0-18fb471606d4,network=Network(e1610a22-2f29-4495-85e7-ab2081f73701),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap71bbd457-6f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 15:33:22 compute-1 nova_compute[225855]: 2026-01-20 15:33:22.812 225859 DEBUG os_vif [None req-14700326-29a8-4b3d-b34a-acbcbf3acbaa 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:65:ea:56,bridge_name='br-int',has_traffic_filtering=True,id=71bbd457-6ff9-4170-b4f0-18fb471606d4,network=Network(e1610a22-2f29-4495-85e7-ab2081f73701),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap71bbd457-6f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 20 15:33:22 compute-1 nova_compute[225855]: 2026-01-20 15:33:22.814 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:33:22 compute-1 nova_compute[225855]: 2026-01-20 15:33:22.815 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap71bbd457-6f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:33:22 compute-1 nova_compute[225855]: 2026-01-20 15:33:22.816 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:33:22 compute-1 nova_compute[225855]: 2026-01-20 15:33:22.817 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:33:22 compute-1 systemd[1]: libpod-conmon-137c6f3d853ad68f70cddfd1e2ee96838bb27d94b52a9a2af830bfd96ffb6162.scope: Deactivated successfully.
Jan 20 15:33:22 compute-1 nova_compute[225855]: 2026-01-20 15:33:22.821 225859 INFO os_vif [None req-14700326-29a8-4b3d-b34a-acbcbf3acbaa 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:65:ea:56,bridge_name='br-int',has_traffic_filtering=True,id=71bbd457-6ff9-4170-b4f0-18fb471606d4,network=Network(e1610a22-2f29-4495-85e7-ab2081f73701),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap71bbd457-6f')
Jan 20 15:33:22 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:33:22 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:33:22 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:33:22.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:33:22 compute-1 podman[322509]: 2026-01-20 15:33:22.866274248 +0000 UTC m=+0.044298326 container remove 137c6f3d853ad68f70cddfd1e2ee96838bb27d94b52a9a2af830bfd96ffb6162 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e1610a22-2f29-4495-85e7-ab2081f73701, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 20 15:33:22 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:33:22.871 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[aeea1e19-56c6-4dc2-ab27-1dbf1093693d]: (4, ('Tue Jan 20 03:33:22 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-e1610a22-2f29-4495-85e7-ab2081f73701 (137c6f3d853ad68f70cddfd1e2ee96838bb27d94b52a9a2af830bfd96ffb6162)\n137c6f3d853ad68f70cddfd1e2ee96838bb27d94b52a9a2af830bfd96ffb6162\nTue Jan 20 03:33:22 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-e1610a22-2f29-4495-85e7-ab2081f73701 (137c6f3d853ad68f70cddfd1e2ee96838bb27d94b52a9a2af830bfd96ffb6162)\n137c6f3d853ad68f70cddfd1e2ee96838bb27d94b52a9a2af830bfd96ffb6162\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:33:22 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:33:22.872 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[f6270262-9c6d-459b-b7f2-120fd0deede4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:33:22 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:33:22.874 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape1610a22-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:33:22 compute-1 nova_compute[225855]: 2026-01-20 15:33:22.876 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:33:22 compute-1 kernel: tape1610a22-20: left promiscuous mode
Jan 20 15:33:22 compute-1 nova_compute[225855]: 2026-01-20 15:33:22.878 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:33:22 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:33:22.880 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[ed939647-7ca1-4bf2-ad76-9284ca861e37]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:33:22 compute-1 nova_compute[225855]: 2026-01-20 15:33:22.891 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:33:22 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:33:22.898 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[305ffadd-d82f-4901-bfd4-e8e6487759fc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:33:22 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:33:22.900 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[0562e063-6067-4aab-9f22-2bbe0f7f2efe]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:33:22 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:33:22.913 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[73358e28-b6ea-40f2-83f9-f3367f5830dd]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 821265, 'reachable_time': 40240, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 322542, 'error': None, 'target': 'ovnmeta-e1610a22-2f29-4495-85e7-ab2081f73701', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:33:22 compute-1 systemd[1]: run-netns-ovnmeta\x2de1610a22\x2d2f29\x2d4495\x2d85e7\x2dab2081f73701.mount: Deactivated successfully.
Jan 20 15:33:22 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:33:22.917 140466 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-e1610a22-2f29-4495-85e7-ab2081f73701 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 20 15:33:22 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:33:22.917 140466 DEBUG oslo.privsep.daemon [-] privsep: reply[6820b3f4-9b89-41c4-b2c7-67822c7dfda1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:33:23 compute-1 nova_compute[225855]: 2026-01-20 15:33:23.067 225859 DEBUG nova.compute.manager [req-823a2485-fb95-4e01-9de9-ffbf725eec05 req-23f1469c-47e8-4be0-8e41-31082ec38813 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 054e01d8-c9d1-4fb3-99e1-d417718d48c9] Received event network-vif-unplugged-71bbd457-6ff9-4170-b4f0-18fb471606d4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:33:23 compute-1 nova_compute[225855]: 2026-01-20 15:33:23.068 225859 DEBUG oslo_concurrency.lockutils [req-823a2485-fb95-4e01-9de9-ffbf725eec05 req-23f1469c-47e8-4be0-8e41-31082ec38813 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "054e01d8-c9d1-4fb3-99e1-d417718d48c9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:33:23 compute-1 nova_compute[225855]: 2026-01-20 15:33:23.068 225859 DEBUG oslo_concurrency.lockutils [req-823a2485-fb95-4e01-9de9-ffbf725eec05 req-23f1469c-47e8-4be0-8e41-31082ec38813 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "054e01d8-c9d1-4fb3-99e1-d417718d48c9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:33:23 compute-1 nova_compute[225855]: 2026-01-20 15:33:23.068 225859 DEBUG oslo_concurrency.lockutils [req-823a2485-fb95-4e01-9de9-ffbf725eec05 req-23f1469c-47e8-4be0-8e41-31082ec38813 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "054e01d8-c9d1-4fb3-99e1-d417718d48c9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:33:23 compute-1 nova_compute[225855]: 2026-01-20 15:33:23.069 225859 DEBUG nova.compute.manager [req-823a2485-fb95-4e01-9de9-ffbf725eec05 req-23f1469c-47e8-4be0-8e41-31082ec38813 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 054e01d8-c9d1-4fb3-99e1-d417718d48c9] No waiting events found dispatching network-vif-unplugged-71bbd457-6ff9-4170-b4f0-18fb471606d4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:33:23 compute-1 nova_compute[225855]: 2026-01-20 15:33:23.069 225859 DEBUG nova.compute.manager [req-823a2485-fb95-4e01-9de9-ffbf725eec05 req-23f1469c-47e8-4be0-8e41-31082ec38813 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 054e01d8-c9d1-4fb3-99e1-d417718d48c9] Received event network-vif-unplugged-71bbd457-6ff9-4170-b4f0-18fb471606d4 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 20 15:33:23 compute-1 nova_compute[225855]: 2026-01-20 15:33:23.155 225859 INFO nova.virt.libvirt.driver [None req-14700326-29a8-4b3d-b34a-acbcbf3acbaa 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 054e01d8-c9d1-4fb3-99e1-d417718d48c9] Deleting instance files /var/lib/nova/instances/054e01d8-c9d1-4fb3-99e1-d417718d48c9_del
Jan 20 15:33:23 compute-1 nova_compute[225855]: 2026-01-20 15:33:23.156 225859 INFO nova.virt.libvirt.driver [None req-14700326-29a8-4b3d-b34a-acbcbf3acbaa 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 054e01d8-c9d1-4fb3-99e1-d417718d48c9] Deletion of /var/lib/nova/instances/054e01d8-c9d1-4fb3-99e1-d417718d48c9_del complete
Jan 20 15:33:23 compute-1 nova_compute[225855]: 2026-01-20 15:33:23.222 225859 INFO nova.compute.manager [None req-14700326-29a8-4b3d-b34a-acbcbf3acbaa 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 054e01d8-c9d1-4fb3-99e1-d417718d48c9] Took 0.66 seconds to destroy the instance on the hypervisor.
Jan 20 15:33:23 compute-1 nova_compute[225855]: 2026-01-20 15:33:23.223 225859 DEBUG oslo.service.loopingcall [None req-14700326-29a8-4b3d-b34a-acbcbf3acbaa 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 20 15:33:23 compute-1 nova_compute[225855]: 2026-01-20 15:33:23.223 225859 DEBUG nova.compute.manager [-] [instance: 054e01d8-c9d1-4fb3-99e1-d417718d48c9] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 20 15:33:23 compute-1 nova_compute[225855]: 2026-01-20 15:33:23.223 225859 DEBUG nova.network.neutron [-] [instance: 054e01d8-c9d1-4fb3-99e1-d417718d48c9] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 20 15:33:23 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:33:24 compute-1 nova_compute[225855]: 2026-01-20 15:33:24.163 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:33:24 compute-1 ceph-mon[81775]: pgmap v3336: 321 pgs: 321 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 15 KiB/s wr, 0 op/s
Jan 20 15:33:24 compute-1 nova_compute[225855]: 2026-01-20 15:33:24.479 225859 DEBUG nova.network.neutron [-] [instance: 054e01d8-c9d1-4fb3-99e1-d417718d48c9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:33:24 compute-1 nova_compute[225855]: 2026-01-20 15:33:24.499 225859 INFO nova.compute.manager [-] [instance: 054e01d8-c9d1-4fb3-99e1-d417718d48c9] Took 1.28 seconds to deallocate network for instance.
Jan 20 15:33:24 compute-1 nova_compute[225855]: 2026-01-20 15:33:24.555 225859 DEBUG oslo_concurrency.lockutils [None req-14700326-29a8-4b3d-b34a-acbcbf3acbaa 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:33:24 compute-1 nova_compute[225855]: 2026-01-20 15:33:24.555 225859 DEBUG oslo_concurrency.lockutils [None req-14700326-29a8-4b3d-b34a-acbcbf3acbaa 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:33:24 compute-1 nova_compute[225855]: 2026-01-20 15:33:24.565 225859 DEBUG nova.compute.manager [req-eb6c2f2a-20fe-45b0-8efb-4fd4d1615165 req-e00c9aa2-fe15-4b77-b0f0-7d5a0229b768 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 054e01d8-c9d1-4fb3-99e1-d417718d48c9] Received event network-vif-deleted-71bbd457-6ff9-4170-b4f0-18fb471606d4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:33:24 compute-1 nova_compute[225855]: 2026-01-20 15:33:24.600 225859 DEBUG oslo_concurrency.processutils [None req-14700326-29a8-4b3d-b34a-acbcbf3acbaa 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:33:24 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:33:24 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:33:24 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:33:24.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:33:24 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:33:24 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:33:24 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:33:24.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:33:25 compute-1 podman[322565]: 2026-01-20 15:33:25.019964478 +0000 UTC m=+0.060832678 container health_status 533cd8e0c3b8b7f910f15cef49e7f879ab46c12ce646a35628c15f13725eb500 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Jan 20 15:33:25 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:33:25 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2267802654' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:33:25 compute-1 nova_compute[225855]: 2026-01-20 15:33:25.053 225859 DEBUG oslo_concurrency.processutils [None req-14700326-29a8-4b3d-b34a-acbcbf3acbaa 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:33:25 compute-1 nova_compute[225855]: 2026-01-20 15:33:25.060 225859 DEBUG nova.compute.provider_tree [None req-14700326-29a8-4b3d-b34a-acbcbf3acbaa 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:33:25 compute-1 nova_compute[225855]: 2026-01-20 15:33:25.101 225859 DEBUG nova.scheduler.client.report [None req-14700326-29a8-4b3d-b34a-acbcbf3acbaa 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:33:25 compute-1 nova_compute[225855]: 2026-01-20 15:33:25.168 225859 DEBUG oslo_concurrency.lockutils [None req-14700326-29a8-4b3d-b34a-acbcbf3acbaa 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.613s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:33:25 compute-1 nova_compute[225855]: 2026-01-20 15:33:25.197 225859 INFO nova.scheduler.client.report [None req-14700326-29a8-4b3d-b34a-acbcbf3acbaa 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Deleted allocations for instance 054e01d8-c9d1-4fb3-99e1-d417718d48c9
Jan 20 15:33:25 compute-1 nova_compute[225855]: 2026-01-20 15:33:25.253 225859 DEBUG nova.compute.manager [req-acca13a9-b37c-440e-a2af-6d207ff4ce61 req-56250356-8b17-467f-b81b-ad9be27f6109 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 054e01d8-c9d1-4fb3-99e1-d417718d48c9] Received event network-vif-plugged-71bbd457-6ff9-4170-b4f0-18fb471606d4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:33:25 compute-1 nova_compute[225855]: 2026-01-20 15:33:25.254 225859 DEBUG oslo_concurrency.lockutils [req-acca13a9-b37c-440e-a2af-6d207ff4ce61 req-56250356-8b17-467f-b81b-ad9be27f6109 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "054e01d8-c9d1-4fb3-99e1-d417718d48c9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:33:25 compute-1 nova_compute[225855]: 2026-01-20 15:33:25.254 225859 DEBUG oslo_concurrency.lockutils [req-acca13a9-b37c-440e-a2af-6d207ff4ce61 req-56250356-8b17-467f-b81b-ad9be27f6109 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "054e01d8-c9d1-4fb3-99e1-d417718d48c9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:33:25 compute-1 nova_compute[225855]: 2026-01-20 15:33:25.255 225859 DEBUG oslo_concurrency.lockutils [req-acca13a9-b37c-440e-a2af-6d207ff4ce61 req-56250356-8b17-467f-b81b-ad9be27f6109 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "054e01d8-c9d1-4fb3-99e1-d417718d48c9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:33:25 compute-1 nova_compute[225855]: 2026-01-20 15:33:25.255 225859 DEBUG nova.compute.manager [req-acca13a9-b37c-440e-a2af-6d207ff4ce61 req-56250356-8b17-467f-b81b-ad9be27f6109 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 054e01d8-c9d1-4fb3-99e1-d417718d48c9] No waiting events found dispatching network-vif-plugged-71bbd457-6ff9-4170-b4f0-18fb471606d4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:33:25 compute-1 nova_compute[225855]: 2026-01-20 15:33:25.256 225859 WARNING nova.compute.manager [req-acca13a9-b37c-440e-a2af-6d207ff4ce61 req-56250356-8b17-467f-b81b-ad9be27f6109 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 054e01d8-c9d1-4fb3-99e1-d417718d48c9] Received unexpected event network-vif-plugged-71bbd457-6ff9-4170-b4f0-18fb471606d4 for instance with vm_state deleted and task_state None.
Jan 20 15:33:25 compute-1 nova_compute[225855]: 2026-01-20 15:33:25.278 225859 DEBUG oslo_concurrency.lockutils [None req-14700326-29a8-4b3d-b34a-acbcbf3acbaa 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Lock "054e01d8-c9d1-4fb3-99e1-d417718d48c9" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.720s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:33:25 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/2267802654' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:33:25 compute-1 nova_compute[225855]: 2026-01-20 15:33:25.335 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:33:26 compute-1 ceph-mon[81775]: pgmap v3337: 321 pgs: 321 active+clean; 217 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.5 KiB/s rd, 27 KiB/s wr, 8 op/s
Jan 20 15:33:26 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:33:26 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:33:26 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:33:26.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:33:26 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:33:26 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:33:26 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:33:26.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:33:27 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:33:27.747 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=79, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '12:bb:42', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '06:92:24:f7:15:56'}, ipsec=False) old=SB_Global(nb_cfg=78) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:33:27 compute-1 nova_compute[225855]: 2026-01-20 15:33:27.748 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:33:27 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:33:27.748 140354 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 20 15:33:27 compute-1 nova_compute[225855]: 2026-01-20 15:33:27.816 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:33:28 compute-1 ceph-mon[81775]: pgmap v3338: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 26 KiB/s wr, 29 op/s
Jan 20 15:33:28 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:33:28 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:33:28 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:33:28 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:33:28.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:33:28 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:33:28 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:33:28 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:33:28.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:33:28 compute-1 sudo[322588]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:33:28 compute-1 sudo[322588]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:33:28 compute-1 sudo[322588]: pam_unix(sudo:session): session closed for user root
Jan 20 15:33:29 compute-1 sudo[322613]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:33:29 compute-1 sudo[322613]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:33:29 compute-1 sudo[322613]: pam_unix(sudo:session): session closed for user root
Jan 20 15:33:29 compute-1 nova_compute[225855]: 2026-01-20 15:33:29.166 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:33:30 compute-1 nova_compute[225855]: 2026-01-20 15:33:30.225 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:33:30 compute-1 ceph-mon[81775]: pgmap v3339: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 27 KiB/s wr, 29 op/s
Jan 20 15:33:30 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:33:30 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:33:30 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:33:30.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:33:30 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:33:30 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:33:30 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:33:30.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:33:32 compute-1 ceph-mon[81775]: pgmap v3340: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 16 KiB/s wr, 29 op/s
Jan 20 15:33:32 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:33:32 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:33:32 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:33:32.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:33:32 compute-1 nova_compute[225855]: 2026-01-20 15:33:32.820 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:33:32 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:33:32 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:33:32 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:33:32.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:33:33 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:33:34 compute-1 nova_compute[225855]: 2026-01-20 15:33:34.168 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:33:34 compute-1 ceph-mon[81775]: pgmap v3341: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 15 KiB/s wr, 29 op/s
Jan 20 15:33:34 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:33:34.751 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5ffd4ac3-9266-4927-98ad-20a17782c725, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '79'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:33:34 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:33:34 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:33:34 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:33:34.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:33:34 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:33:34 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:33:34 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:33:34.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:33:36 compute-1 ceph-mon[81775]: pgmap v3342: 321 pgs: 321 active+clean; 138 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 29 KiB/s rd, 16 KiB/s wr, 46 op/s
Jan 20 15:33:36 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/717147166' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:33:36 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:33:36 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:33:36 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:33:36.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:33:36 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:33:36 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:33:36 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:33:36.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:33:37 compute-1 nova_compute[225855]: 2026-01-20 15:33:37.795 225859 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1768923202.794088, 054e01d8-c9d1-4fb3-99e1-d417718d48c9 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:33:37 compute-1 nova_compute[225855]: 2026-01-20 15:33:37.796 225859 INFO nova.compute.manager [-] [instance: 054e01d8-c9d1-4fb3-99e1-d417718d48c9] VM Stopped (Lifecycle Event)
Jan 20 15:33:37 compute-1 nova_compute[225855]: 2026-01-20 15:33:37.823 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:33:37 compute-1 nova_compute[225855]: 2026-01-20 15:33:37.836 225859 DEBUG nova.compute.manager [None req-f7441665-7223-4c0b-ae47-2f33138da596 - - - - - -] [instance: 054e01d8-c9d1-4fb3-99e1-d417718d48c9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:33:38 compute-1 ceph-mon[81775]: pgmap v3343: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 35 KiB/s rd, 4.0 KiB/s wr, 49 op/s
Jan 20 15:33:38 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:33:38 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:33:38 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:33:38 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:33:38.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:33:38 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:33:38 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:33:38 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:33:38.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:33:39 compute-1 nova_compute[225855]: 2026-01-20 15:33:39.169 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:33:39 compute-1 nova_compute[225855]: 2026-01-20 15:33:39.334 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:33:40 compute-1 ceph-mon[81775]: pgmap v3344: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 3.2 KiB/s wr, 28 op/s
Jan 20 15:33:40 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:33:40 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:33:40 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:33:40.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:33:40 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:33:40 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:33:40 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:33:40.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:33:42 compute-1 ceph-mon[81775]: pgmap v3345: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 2.2 KiB/s wr, 27 op/s
Jan 20 15:33:42 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:33:42 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:33:42 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:33:42.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:33:42 compute-1 nova_compute[225855]: 2026-01-20 15:33:42.826 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:33:42 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:33:42 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:33:42 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:33:42.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:33:43 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:33:44 compute-1 nova_compute[225855]: 2026-01-20 15:33:44.169 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:33:44 compute-1 ceph-mon[81775]: pgmap v3346: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 20 15:33:44 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:33:44 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:33:44 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:33:44.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:33:44 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:33:44 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:33:44 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:33:44.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:33:46 compute-1 podman[322647]: 2026-01-20 15:33:46.033415199 +0000 UTC m=+0.083682721 container health_status 72be95c12e041eb2cd1a16ad70dfa1391d9f03a5f3180a614ead9459dcfcd46f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Jan 20 15:33:46 compute-1 ceph-mon[81775]: pgmap v3347: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 20 15:33:46 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:33:46 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:33:46 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:33:46.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:33:46 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:33:46 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:33:46 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:33:46.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:33:47 compute-1 nova_compute[225855]: 2026-01-20 15:33:47.876 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:33:48 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:33:48 compute-1 ceph-mon[81775]: pgmap v3348: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 8.2 KiB/s rd, 511 B/s wr, 10 op/s
Jan 20 15:33:48 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:33:48 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:33:48 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:33:48.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:33:48 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:33:48 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:33:48 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:33:48.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:33:49 compute-1 sudo[322674]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:33:49 compute-1 sudo[322674]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:33:49 compute-1 sudo[322674]: pam_unix(sudo:session): session closed for user root
Jan 20 15:33:49 compute-1 nova_compute[225855]: 2026-01-20 15:33:49.170 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:33:49 compute-1 sudo[322699]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:33:49 compute-1 sudo[322699]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:33:49 compute-1 sudo[322699]: pam_unix(sudo:session): session closed for user root
Jan 20 15:33:50 compute-1 ceph-mon[81775]: pgmap v3349: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:33:50 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:33:50 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:33:50 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:33:50.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:33:50 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:33:50 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:33:50 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:33:50.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:33:52 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:33:52 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:33:52 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:33:52.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:33:52 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:33:52 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:33:52 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:33:52.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:33:52 compute-1 nova_compute[225855]: 2026-01-20 15:33:52.879 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:33:53 compute-1 ceph-mon[81775]: pgmap v3350: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:33:53 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:33:54 compute-1 nova_compute[225855]: 2026-01-20 15:33:54.172 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:33:54 compute-1 ceph-mon[81775]: pgmap v3351: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:33:54 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:33:54 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:33:54 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:33:54.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:33:54 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:33:54 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:33:54 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:33:54.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:33:56 compute-1 podman[322728]: 2026-01-20 15:33:56.009088045 +0000 UTC m=+0.055894677 container health_status 533cd8e0c3b8b7f910f15cef49e7f879ab46c12ce646a35628c15f13725eb500 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 20 15:33:56 compute-1 sudo[322747]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:33:56 compute-1 sudo[322747]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:33:56 compute-1 sudo[322747]: pam_unix(sudo:session): session closed for user root
Jan 20 15:33:56 compute-1 sudo[322772]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:33:56 compute-1 sudo[322772]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:33:56 compute-1 sudo[322772]: pam_unix(sudo:session): session closed for user root
Jan 20 15:33:56 compute-1 sudo[322797]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:33:56 compute-1 sudo[322797]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:33:56 compute-1 sudo[322797]: pam_unix(sudo:session): session closed for user root
Jan 20 15:33:56 compute-1 sudo[322822]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 20 15:33:56 compute-1 sudo[322822]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:33:56 compute-1 ceph-mon[81775]: pgmap v3352: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:33:56 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:33:56 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:33:56 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:33:56.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:33:56 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:33:56 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:33:56 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:33:56.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:33:57 compute-1 sudo[322822]: pam_unix(sudo:session): session closed for user root
Jan 20 15:33:57 compute-1 nova_compute[225855]: 2026-01-20 15:33:57.882 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:33:57 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:33:57 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 15:33:57 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:33:57 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 15:33:57 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 15:33:57 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:33:58 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:33:58 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:33:58 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:33:58 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:33:58.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:33:58 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:33:58 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:33:58 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:33:58.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:33:58 compute-1 ceph-mon[81775]: pgmap v3353: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:33:59 compute-1 nova_compute[225855]: 2026-01-20 15:33:59.174 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:33:59 compute-1 ceph-mon[81775]: pgmap v3354: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:34:00 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:34:00 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:34:00 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:34:00.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:34:00 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:34:00 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:34:00 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:34:00.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:34:02 compute-1 ceph-mon[81775]: pgmap v3355: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:34:02 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:34:02 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:34:02 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:34:02.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:34:02 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:34:02 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:34:02 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:34:02.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:34:02 compute-1 nova_compute[225855]: 2026-01-20 15:34:02.884 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:34:03 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:34:03 compute-1 sudo[322880]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:34:03 compute-1 sudo[322880]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:34:03 compute-1 sudo[322880]: pam_unix(sudo:session): session closed for user root
Jan 20 15:34:03 compute-1 sudo[322905]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 15:34:03 compute-1 sudo[322905]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:34:03 compute-1 sudo[322905]: pam_unix(sudo:session): session closed for user root
Jan 20 15:34:04 compute-1 nova_compute[225855]: 2026-01-20 15:34:04.176 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:34:04 compute-1 ceph-mon[81775]: pgmap v3356: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:34:04 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:34:04 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:34:04 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:34:04 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:34:04 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:34:04.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:34:04 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:34:04 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:34:04 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:34:04.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:34:06 compute-1 ceph-mon[81775]: pgmap v3357: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:34:06 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:34:06 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:34:06 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:34:06.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:34:06 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:34:06 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:34:06 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:34:06.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:34:07 compute-1 nova_compute[225855]: 2026-01-20 15:34:07.886 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:34:08 compute-1 ceph-mon[81775]: pgmap v3358: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:34:08 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:34:08 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:34:08 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:34:08 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:34:08.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:34:08 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:34:08 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:34:08 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:34:08.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:34:09 compute-1 nova_compute[225855]: 2026-01-20 15:34:09.178 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:34:09 compute-1 sudo[322934]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:34:09 compute-1 sudo[322934]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:34:09 compute-1 sudo[322934]: pam_unix(sudo:session): session closed for user root
Jan 20 15:34:09 compute-1 sudo[322959]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:34:09 compute-1 sudo[322959]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:34:09 compute-1 sudo[322959]: pam_unix(sudo:session): session closed for user root
Jan 20 15:34:09 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/1657604232' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:34:10 compute-1 ceph-mon[81775]: pgmap v3359: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:34:10 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:34:10 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:34:10 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:34:10.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:34:10 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:34:10 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:34:10 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:34:10.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:34:12 compute-1 ceph-mon[81775]: pgmap v3360: 321 pgs: 321 active+clean; 158 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.7 KiB/s rd, 1.5 MiB/s wr, 5 op/s
Jan 20 15:34:12 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:34:12 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:34:12 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:34:12.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:34:12 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:34:12 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:34:12 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:34:12.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:34:12 compute-1 nova_compute[225855]: 2026-01-20 15:34:12.890 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:34:13 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:34:13 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/4278523668' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:34:13 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/634534325' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:34:13 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/634534325' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:34:14 compute-1 nova_compute[225855]: 2026-01-20 15:34:14.179 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:34:14 compute-1 nova_compute[225855]: 2026-01-20 15:34:14.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:34:14 compute-1 nova_compute[225855]: 2026-01-20 15:34:14.339 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 15:34:14 compute-1 ceph-mon[81775]: pgmap v3361: 321 pgs: 321 active+clean; 158 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.7 KiB/s rd, 1.5 MiB/s wr, 5 op/s
Jan 20 15:34:14 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/2140012008' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:34:14 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/650000176' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:34:14 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:34:14 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:34:14 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:34:14.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:34:14 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:34:14 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:34:14 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:34:14.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:34:15 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/1244062530' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:34:16 compute-1 nova_compute[225855]: 2026-01-20 15:34:16.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:34:16 compute-1 nova_compute[225855]: 2026-01-20 15:34:16.399 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:34:16 compute-1 nova_compute[225855]: 2026-01-20 15:34:16.400 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:34:16 compute-1 nova_compute[225855]: 2026-01-20 15:34:16.400 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:34:16 compute-1 nova_compute[225855]: 2026-01-20 15:34:16.400 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 15:34:16 compute-1 nova_compute[225855]: 2026-01-20 15:34:16.401 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:34:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:34:16.457 140354 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:34:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:34:16.457 140354 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:34:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:34:16.457 140354 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:34:16 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:34:16 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2281346832' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:34:16 compute-1 ceph-mon[81775]: pgmap v3362: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 20 15:34:16 compute-1 nova_compute[225855]: 2026-01-20 15:34:16.833 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:34:16 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:34:16 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:34:16 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:34:16.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:34:16 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:34:16 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:34:16 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:34:16.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:34:16 compute-1 podman[323011]: 2026-01-20 15:34:16.955660875 +0000 UTC m=+0.084055791 container health_status 72be95c12e041eb2cd1a16ad70dfa1391d9f03a5f3180a614ead9459dcfcd46f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, 
org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Jan 20 15:34:17 compute-1 nova_compute[225855]: 2026-01-20 15:34:17.044 225859 WARNING nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 15:34:17 compute-1 nova_compute[225855]: 2026-01-20 15:34:17.046 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4268MB free_disk=20.967525482177734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 15:34:17 compute-1 nova_compute[225855]: 2026-01-20 15:34:17.047 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:34:17 compute-1 nova_compute[225855]: 2026-01-20 15:34:17.048 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:34:17 compute-1 nova_compute[225855]: 2026-01-20 15:34:17.135 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 15:34:17 compute-1 nova_compute[225855]: 2026-01-20 15:34:17.136 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 15:34:17 compute-1 nova_compute[225855]: 2026-01-20 15:34:17.177 225859 DEBUG nova.scheduler.client.report [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Refreshing inventories for resource provider bbb02880-a710-4ac1-8b2c-5c09765848d1 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 20 15:34:17 compute-1 nova_compute[225855]: 2026-01-20 15:34:17.212 225859 DEBUG nova.scheduler.client.report [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Updating ProviderTree inventory for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 20 15:34:17 compute-1 nova_compute[225855]: 2026-01-20 15:34:17.213 225859 DEBUG nova.compute.provider_tree [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Updating inventory in ProviderTree for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 20 15:34:17 compute-1 nova_compute[225855]: 2026-01-20 15:34:17.258 225859 DEBUG nova.scheduler.client.report [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Refreshing aggregate associations for resource provider bbb02880-a710-4ac1-8b2c-5c09765848d1, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 20 15:34:17 compute-1 nova_compute[225855]: 2026-01-20 15:34:17.281 225859 DEBUG nova.scheduler.client.report [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Refreshing trait associations for resource provider bbb02880-a710-4ac1-8b2c-5c09765848d1, traits: COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_STORAGE_BUS_SATA,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_TRUSTED_CERTS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSSE3,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_VOLUME_EXTEND,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_RESCUE_BFV,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_USB,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE42,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SOCKET_PCI_NUMA_AFFINITY _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 20 15:34:17 compute-1 nova_compute[225855]: 2026-01-20 15:34:17.304 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:34:17 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:34:17 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/431993206' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:34:17 compute-1 nova_compute[225855]: 2026-01-20 15:34:17.770 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:34:17 compute-1 nova_compute[225855]: 2026-01-20 15:34:17.776 225859 DEBUG nova.compute.provider_tree [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:34:17 compute-1 nova_compute[225855]: 2026-01-20 15:34:17.809 225859 DEBUG nova.scheduler.client.report [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:34:17 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/2281346832' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:34:17 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/4205653312' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:34:17 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/431993206' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:34:17 compute-1 nova_compute[225855]: 2026-01-20 15:34:17.860 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 15:34:17 compute-1 nova_compute[225855]: 2026-01-20 15:34:17.861 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.813s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:34:17 compute-1 nova_compute[225855]: 2026-01-20 15:34:17.892 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:34:18 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:34:18 compute-1 nova_compute[225855]: 2026-01-20 15:34:18.861 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:34:18 compute-1 nova_compute[225855]: 2026-01-20 15:34:18.862 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 15:34:18 compute-1 nova_compute[225855]: 2026-01-20 15:34:18.862 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 20 15:34:18 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:34:18 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:34:18 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:34:18.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:34:18 compute-1 nova_compute[225855]: 2026-01-20 15:34:18.876 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 20 15:34:18 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:34:18 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:34:18 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:34:18.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:34:19 compute-1 nova_compute[225855]: 2026-01-20 15:34:19.181 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:34:19 compute-1 ceph-mon[81775]: pgmap v3363: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 20 15:34:19 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/651347568' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:34:19 compute-1 nova_compute[225855]: 2026-01-20 15:34:19.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:34:19 compute-1 ceph-mon[81775]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #166. Immutable memtables: 0.
Jan 20 15:34:19 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:34:19.611994) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 20 15:34:19 compute-1 ceph-mon[81775]: rocksdb: [db/flush_job.cc:856] [default] [JOB 105] Flushing memtable with next log file: 166
Jan 20 15:34:19 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768923259612038, "job": 105, "event": "flush_started", "num_memtables": 1, "num_entries": 1355, "num_deletes": 251, "total_data_size": 3006932, "memory_usage": 3053952, "flush_reason": "Manual Compaction"}
Jan 20 15:34:19 compute-1 ceph-mon[81775]: rocksdb: [db/flush_job.cc:885] [default] [JOB 105] Level-0 flush table #167: started
Jan 20 15:34:19 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768923259739527, "cf_name": "default", "job": 105, "event": "table_file_creation", "file_number": 167, "file_size": 1962624, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 80589, "largest_seqno": 81939, "table_properties": {"data_size": 1956773, "index_size": 3181, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1605, "raw_key_size": 12789, "raw_average_key_size": 20, "raw_value_size": 1944902, "raw_average_value_size": 3062, "num_data_blocks": 140, "num_entries": 635, "num_filter_entries": 635, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768923153, "oldest_key_time": 1768923153, "file_creation_time": 1768923259, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1539d774-8a6f-4e48-b253-137c44586344", "db_session_id": "LFF7G2OZDOU7TKQ8MKAH", "orig_file_number": 167, "seqno_to_time_mapping": "N/A"}}
Jan 20 15:34:19 compute-1 ceph-mon[81775]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 105] Flush lasted 127617 microseconds, and 6022 cpu microseconds.
Jan 20 15:34:19 compute-1 ceph-mon[81775]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 15:34:19 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:34:19.739603) [db/flush_job.cc:967] [default] [JOB 105] Level-0 flush table #167: 1962624 bytes OK
Jan 20 15:34:19 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:34:19.739636) [db/memtable_list.cc:519] [default] Level-0 commit table #167 started
Jan 20 15:34:19 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:34:19.817032) [db/memtable_list.cc:722] [default] Level-0 commit table #167: memtable #1 done
Jan 20 15:34:19 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:34:19.817108) EVENT_LOG_v1 {"time_micros": 1768923259817093, "job": 105, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 20 15:34:19 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:34:19.817143) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 20 15:34:19 compute-1 ceph-mon[81775]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 105] Try to delete WAL files size 3000532, prev total WAL file size 3000532, number of live WAL files 2.
Jan 20 15:34:19 compute-1 ceph-mon[81775]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000163.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 15:34:19 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:34:19.818345) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730037303238' seq:72057594037927935, type:22 .. '7061786F730037323830' seq:0, type:0; will stop at (end)
Jan 20 15:34:19 compute-1 ceph-mon[81775]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 106] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 20 15:34:19 compute-1 ceph-mon[81775]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 105 Base level 0, inputs: [167(1916KB)], [165(12MB)]
Jan 20 15:34:19 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768923259818418, "job": 106, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [167], "files_L6": [165], "score": -1, "input_data_size": 14931954, "oldest_snapshot_seqno": -1}
Jan 20 15:34:20 compute-1 ceph-mon[81775]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 106] Generated table #168: 10231 keys, 12957264 bytes, temperature: kUnknown
Jan 20 15:34:20 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768923260302451, "cf_name": "default", "job": 106, "event": "table_file_creation", "file_number": 168, "file_size": 12957264, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12891458, "index_size": 39133, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 25605, "raw_key_size": 270322, "raw_average_key_size": 26, "raw_value_size": 12712502, "raw_average_value_size": 1242, "num_data_blocks": 1487, "num_entries": 10231, "num_filter_entries": 10231, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768917474, "oldest_key_time": 0, "file_creation_time": 1768923259, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1539d774-8a6f-4e48-b253-137c44586344", "db_session_id": "LFF7G2OZDOU7TKQ8MKAH", "orig_file_number": 168, "seqno_to_time_mapping": "N/A"}}
Jan 20 15:34:20 compute-1 ceph-mon[81775]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 15:34:20 compute-1 nova_compute[225855]: 2026-01-20 15:34:20.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:34:20 compute-1 nova_compute[225855]: 2026-01-20 15:34:20.340 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:34:20 compute-1 nova_compute[225855]: 2026-01-20 15:34:20.340 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:34:20 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:34:20.302965) [db/compaction/compaction_job.cc:1663] [default] [JOB 106] Compacted 1@0 + 1@6 files to L6 => 12957264 bytes
Jan 20 15:34:20 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:34:20.457712) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 30.8 rd, 26.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.9, 12.4 +0.0 blob) out(12.4 +0.0 blob), read-write-amplify(14.2) write-amplify(6.6) OK, records in: 10754, records dropped: 523 output_compression: NoCompression
Jan 20 15:34:20 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:34:20.457755) EVENT_LOG_v1 {"time_micros": 1768923260457740, "job": 106, "event": "compaction_finished", "compaction_time_micros": 484170, "compaction_time_cpu_micros": 47587, "output_level": 6, "num_output_files": 1, "total_output_size": 12957264, "num_input_records": 10754, "num_output_records": 10231, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 20 15:34:20 compute-1 ceph-mon[81775]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000167.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 15:34:20 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768923260458324, "job": 106, "event": "table_file_deletion", "file_number": 167}
Jan 20 15:34:20 compute-1 ceph-mon[81775]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000165.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 15:34:20 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768923260460899, "job": 106, "event": "table_file_deletion", "file_number": 165}
Jan 20 15:34:20 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:34:19.818228) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:34:20 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:34:20.461013) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:34:20 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:34:20.461021) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:34:20 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:34:20.461025) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:34:20 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:34:20.461029) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:34:20 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:34:20.461033) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:34:20 compute-1 ceph-mon[81775]: pgmap v3364: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 120 KiB/s rd, 1.8 MiB/s wr, 31 op/s
Jan 20 15:34:20 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:34:20 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:34:20 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:34:20.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:34:20 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:34:20 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:34:20 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:34:20.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:34:22 compute-1 nova_compute[225855]: 2026-01-20 15:34:22.341 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:34:22 compute-1 ceph-mon[81775]: pgmap v3365: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Jan 20 15:34:22 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:34:22 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:34:22 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:34:22.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:34:22 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:34:22 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:34:22 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:34:22.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:34:22 compute-1 nova_compute[225855]: 2026-01-20 15:34:22.895 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:34:23 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:34:23 compute-1 ovn_controller[130490]: 2026-01-20T15:34:23Z|00971|memory_trim|INFO|Detected inactivity (last active 30004 ms ago): trimming memory
Jan 20 15:34:24 compute-1 nova_compute[225855]: 2026-01-20 15:34:24.183 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:34:24 compute-1 ceph-mon[81775]: pgmap v3366: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 284 KiB/s wr, 95 op/s
Jan 20 15:34:24 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:34:24 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:34:24 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:34:24.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:34:24 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:34:24 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:34:24 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:34:24.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:34:25 compute-1 nova_compute[225855]: 2026-01-20 15:34:25.334 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:34:25 compute-1 ceph-mon[81775]: pgmap v3367: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 284 KiB/s wr, 95 op/s
Jan 20 15:34:26 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:34:26 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:34:26 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:34:26.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:34:26 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:34:26 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:34:26 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:34:26.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:34:27 compute-1 podman[323065]: 2026-01-20 15:34:27.00583491 +0000 UTC m=+0.055862436 container health_status 533cd8e0c3b8b7f910f15cef49e7f879ab46c12ce646a35628c15f13725eb500 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 20 15:34:27 compute-1 nova_compute[225855]: 2026-01-20 15:34:27.898 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:34:28 compute-1 ceph-mon[81775]: pgmap v3368: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 20 15:34:28 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:34:28 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:34:28 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:34:28 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:34:28.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:34:28 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:34:28 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:34:28 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:34:28.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:34:29 compute-1 nova_compute[225855]: 2026-01-20 15:34:29.186 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:34:29 compute-1 sudo[323085]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:34:29 compute-1 sudo[323085]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:34:29 compute-1 sudo[323085]: pam_unix(sudo:session): session closed for user root
Jan 20 15:34:29 compute-1 sudo[323110]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:34:29 compute-1 sudo[323110]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:34:29 compute-1 sudo[323110]: pam_unix(sudo:session): session closed for user root
Jan 20 15:34:30 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:34:30.215 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=80, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '12:bb:42', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '06:92:24:f7:15:56'}, ipsec=False) old=SB_Global(nb_cfg=79) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:34:30 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:34:30.216 140354 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 20 15:34:30 compute-1 nova_compute[225855]: 2026-01-20 15:34:30.216 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:34:30 compute-1 ceph-mon[81775]: pgmap v3369: 321 pgs: 321 active+clean; 164 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 767 B/s wr, 75 op/s
Jan 20 15:34:30 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:34:30 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:34:30 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:34:30.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:34:30 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:34:30 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:34:30 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:34:30.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:34:31 compute-1 nova_compute[225855]: 2026-01-20 15:34:31.340 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:34:31 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/22013695' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:34:31 compute-1 ceph-mon[81775]: pgmap v3370: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 1.6 KiB/s wr, 95 op/s
Jan 20 15:34:32 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:34:32 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:34:32 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:34:32.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:34:32 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:34:32 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:34:32 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:34:32.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:34:32 compute-1 nova_compute[225855]: 2026-01-20 15:34:32.900 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:34:33 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:34:33 compute-1 ceph-mon[81775]: pgmap v3371: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 26 op/s
Jan 20 15:34:34 compute-1 nova_compute[225855]: 2026-01-20 15:34:34.187 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:34:34 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:34:34 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:34:34 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:34:34.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:34:34 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:34:34 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:34:34 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:34:34.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:34:36 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:34:36.218 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5ffd4ac3-9266-4927-98ad-20a17782c725, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '80'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:34:36 compute-1 ceph-mon[81775]: pgmap v3372: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 26 op/s
Jan 20 15:34:36 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:34:36 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:34:36 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:34:36.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:34:36 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:34:36 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:34:36 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:34:36.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:34:37 compute-1 nova_compute[225855]: 2026-01-20 15:34:37.359 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:34:37 compute-1 nova_compute[225855]: 2026-01-20 15:34:37.360 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 20 15:34:37 compute-1 ceph-mon[81775]: pgmap v3373: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 26 op/s
Jan 20 15:34:37 compute-1 nova_compute[225855]: 2026-01-20 15:34:37.903 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:34:38 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:34:38 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:34:38 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:34:38 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:34:38.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:34:38 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:34:38 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:34:38 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:34:38.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:34:39 compute-1 nova_compute[225855]: 2026-01-20 15:34:39.189 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:34:40 compute-1 ceph-mon[81775]: pgmap v3374: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 26 op/s
Jan 20 15:34:40 compute-1 nova_compute[225855]: 2026-01-20 15:34:40.677 225859 DEBUG oslo_concurrency.lockutils [None req-73ba7bc4-81d4-4e3d-b438-3c908544fde7 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Acquiring lock "2003d484-9afb-4f49-8410-6e8c6aa813d0" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:34:40 compute-1 nova_compute[225855]: 2026-01-20 15:34:40.677 225859 DEBUG oslo_concurrency.lockutils [None req-73ba7bc4-81d4-4e3d-b438-3c908544fde7 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Lock "2003d484-9afb-4f49-8410-6e8c6aa813d0" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:34:40 compute-1 nova_compute[225855]: 2026-01-20 15:34:40.703 225859 DEBUG nova.compute.manager [None req-73ba7bc4-81d4-4e3d-b438-3c908544fde7 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 2003d484-9afb-4f49-8410-6e8c6aa813d0] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 20 15:34:40 compute-1 nova_compute[225855]: 2026-01-20 15:34:40.811 225859 DEBUG oslo_concurrency.lockutils [None req-73ba7bc4-81d4-4e3d-b438-3c908544fde7 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:34:40 compute-1 nova_compute[225855]: 2026-01-20 15:34:40.812 225859 DEBUG oslo_concurrency.lockutils [None req-73ba7bc4-81d4-4e3d-b438-3c908544fde7 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:34:40 compute-1 nova_compute[225855]: 2026-01-20 15:34:40.819 225859 DEBUG nova.virt.hardware [None req-73ba7bc4-81d4-4e3d-b438-3c908544fde7 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 20 15:34:40 compute-1 nova_compute[225855]: 2026-01-20 15:34:40.819 225859 INFO nova.compute.claims [None req-73ba7bc4-81d4-4e3d-b438-3c908544fde7 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 2003d484-9afb-4f49-8410-6e8c6aa813d0] Claim successful on node compute-1.ctlplane.example.com
Jan 20 15:34:40 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:34:40 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:34:40 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:34:40.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:34:40 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6e4d6f0 =====
Jan 20 15:34:40 compute-1 radosgw[83787]: ====== req done req=0x7f09c6e4d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:34:40 compute-1 radosgw[83787]: beast: 0x7f09c6e4d6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:34:40.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:34:40 compute-1 nova_compute[225855]: 2026-01-20 15:34:40.916 225859 DEBUG oslo_concurrency.processutils [None req-73ba7bc4-81d4-4e3d-b438-3c908544fde7 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:34:41 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:34:41 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3059552772' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:34:41 compute-1 nova_compute[225855]: 2026-01-20 15:34:41.365 225859 DEBUG oslo_concurrency.processutils [None req-73ba7bc4-81d4-4e3d-b438-3c908544fde7 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:34:41 compute-1 nova_compute[225855]: 2026-01-20 15:34:41.371 225859 DEBUG nova.compute.provider_tree [None req-73ba7bc4-81d4-4e3d-b438-3c908544fde7 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:34:41 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/3059552772' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:34:41 compute-1 nova_compute[225855]: 2026-01-20 15:34:41.555 225859 DEBUG nova.scheduler.client.report [None req-73ba7bc4-81d4-4e3d-b438-3c908544fde7 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:34:41 compute-1 nova_compute[225855]: 2026-01-20 15:34:41.843 225859 DEBUG oslo_concurrency.lockutils [None req-73ba7bc4-81d4-4e3d-b438-3c908544fde7 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.031s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:34:41 compute-1 nova_compute[225855]: 2026-01-20 15:34:41.844 225859 DEBUG nova.compute.manager [None req-73ba7bc4-81d4-4e3d-b438-3c908544fde7 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 2003d484-9afb-4f49-8410-6e8c6aa813d0] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 20 15:34:41 compute-1 nova_compute[225855]: 2026-01-20 15:34:41.946 225859 DEBUG nova.compute.manager [None req-73ba7bc4-81d4-4e3d-b438-3c908544fde7 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 2003d484-9afb-4f49-8410-6e8c6aa813d0] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 20 15:34:41 compute-1 nova_compute[225855]: 2026-01-20 15:34:41.947 225859 DEBUG nova.network.neutron [None req-73ba7bc4-81d4-4e3d-b438-3c908544fde7 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 2003d484-9afb-4f49-8410-6e8c6aa813d0] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 20 15:34:42 compute-1 nova_compute[225855]: 2026-01-20 15:34:42.035 225859 INFO nova.virt.libvirt.driver [None req-73ba7bc4-81d4-4e3d-b438-3c908544fde7 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 2003d484-9afb-4f49-8410-6e8c6aa813d0] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 20 15:34:42 compute-1 nova_compute[225855]: 2026-01-20 15:34:42.321 225859 DEBUG nova.policy [None req-73ba7bc4-81d4-4e3d-b438-3c908544fde7 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '5338aa65dc0e4326a66ce79053787f14', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '3168f57421fb49bfb94b85daedd1fe7d', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 20 15:34:42 compute-1 ceph-mon[81775]: pgmap v3375: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 852 B/s wr, 23 op/s
Jan 20 15:34:42 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:34:42 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6e4d6f0 =====
Jan 20 15:34:42 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:34:42 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:34:42.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:34:42 compute-1 radosgw[83787]: ====== req done req=0x7f09c6e4d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:34:42 compute-1 radosgw[83787]: beast: 0x7f09c6e4d6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:34:42.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:34:42 compute-1 nova_compute[225855]: 2026-01-20 15:34:42.906 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:34:43 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:34:44 compute-1 nova_compute[225855]: 2026-01-20 15:34:44.190 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:34:44 compute-1 nova_compute[225855]: 2026-01-20 15:34:44.400 225859 DEBUG nova.compute.manager [None req-73ba7bc4-81d4-4e3d-b438-3c908544fde7 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 2003d484-9afb-4f49-8410-6e8c6aa813d0] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 20 15:34:44 compute-1 ceph-mon[81775]: pgmap v3376: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:34:44 compute-1 nova_compute[225855]: 2026-01-20 15:34:44.693 225859 DEBUG nova.compute.manager [None req-73ba7bc4-81d4-4e3d-b438-3c908544fde7 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 2003d484-9afb-4f49-8410-6e8c6aa813d0] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 20 15:34:44 compute-1 nova_compute[225855]: 2026-01-20 15:34:44.694 225859 DEBUG nova.virt.libvirt.driver [None req-73ba7bc4-81d4-4e3d-b438-3c908544fde7 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 2003d484-9afb-4f49-8410-6e8c6aa813d0] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 20 15:34:44 compute-1 nova_compute[225855]: 2026-01-20 15:34:44.695 225859 INFO nova.virt.libvirt.driver [None req-73ba7bc4-81d4-4e3d-b438-3c908544fde7 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 2003d484-9afb-4f49-8410-6e8c6aa813d0] Creating image(s)
Jan 20 15:34:44 compute-1 nova_compute[225855]: 2026-01-20 15:34:44.723 225859 DEBUG nova.storage.rbd_utils [None req-73ba7bc4-81d4-4e3d-b438-3c908544fde7 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] rbd image 2003d484-9afb-4f49-8410-6e8c6aa813d0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:34:44 compute-1 nova_compute[225855]: 2026-01-20 15:34:44.753 225859 DEBUG nova.storage.rbd_utils [None req-73ba7bc4-81d4-4e3d-b438-3c908544fde7 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] rbd image 2003d484-9afb-4f49-8410-6e8c6aa813d0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:34:44 compute-1 nova_compute[225855]: 2026-01-20 15:34:44.778 225859 DEBUG nova.storage.rbd_utils [None req-73ba7bc4-81d4-4e3d-b438-3c908544fde7 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] rbd image 2003d484-9afb-4f49-8410-6e8c6aa813d0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:34:44 compute-1 nova_compute[225855]: 2026-01-20 15:34:44.782 225859 DEBUG oslo_concurrency.processutils [None req-73ba7bc4-81d4-4e3d-b438-3c908544fde7 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:34:44 compute-1 nova_compute[225855]: 2026-01-20 15:34:44.852 225859 DEBUG oslo_concurrency.processutils [None req-73ba7bc4-81d4-4e3d-b438-3c908544fde7 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:34:44 compute-1 nova_compute[225855]: 2026-01-20 15:34:44.854 225859 DEBUG oslo_concurrency.lockutils [None req-73ba7bc4-81d4-4e3d-b438-3c908544fde7 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Acquiring lock "82d5c1918fd7c974214c7a48c1793a7a82560462" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:34:44 compute-1 nova_compute[225855]: 2026-01-20 15:34:44.855 225859 DEBUG oslo_concurrency.lockutils [None req-73ba7bc4-81d4-4e3d-b438-3c908544fde7 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:34:44 compute-1 nova_compute[225855]: 2026-01-20 15:34:44.855 225859 DEBUG oslo_concurrency.lockutils [None req-73ba7bc4-81d4-4e3d-b438-3c908544fde7 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:34:44 compute-1 nova_compute[225855]: 2026-01-20 15:34:44.880 225859 DEBUG nova.storage.rbd_utils [None req-73ba7bc4-81d4-4e3d-b438-3c908544fde7 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] rbd image 2003d484-9afb-4f49-8410-6e8c6aa813d0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:34:44 compute-1 nova_compute[225855]: 2026-01-20 15:34:44.884 225859 DEBUG oslo_concurrency.processutils [None req-73ba7bc4-81d4-4e3d-b438-3c908544fde7 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 2003d484-9afb-4f49-8410-6e8c6aa813d0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:34:44 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:34:44 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:34:44 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:34:44.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:34:44 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:34:44 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:34:44 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:34:44.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:34:45 compute-1 nova_compute[225855]: 2026-01-20 15:34:45.621 225859 DEBUG nova.network.neutron [None req-73ba7bc4-81d4-4e3d-b438-3c908544fde7 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 2003d484-9afb-4f49-8410-6e8c6aa813d0] Successfully updated port: 1dee9c67-fb01-4fcd-8f35-805a326ee235 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 20 15:34:45 compute-1 nova_compute[225855]: 2026-01-20 15:34:45.643 225859 DEBUG oslo_concurrency.lockutils [None req-73ba7bc4-81d4-4e3d-b438-3c908544fde7 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Acquiring lock "refresh_cache-2003d484-9afb-4f49-8410-6e8c6aa813d0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:34:45 compute-1 nova_compute[225855]: 2026-01-20 15:34:45.644 225859 DEBUG oslo_concurrency.lockutils [None req-73ba7bc4-81d4-4e3d-b438-3c908544fde7 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Acquired lock "refresh_cache-2003d484-9afb-4f49-8410-6e8c6aa813d0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:34:45 compute-1 nova_compute[225855]: 2026-01-20 15:34:45.644 225859 DEBUG nova.network.neutron [None req-73ba7bc4-81d4-4e3d-b438-3c908544fde7 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 2003d484-9afb-4f49-8410-6e8c6aa813d0] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 15:34:45 compute-1 nova_compute[225855]: 2026-01-20 15:34:45.849 225859 DEBUG nova.compute.manager [req-8eb53864-9c37-4448-b563-cc75612a40ab req-243503e8-488f-4f11-8943-44df69856886 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 2003d484-9afb-4f49-8410-6e8c6aa813d0] Received event network-changed-1dee9c67-fb01-4fcd-8f35-805a326ee235 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:34:45 compute-1 nova_compute[225855]: 2026-01-20 15:34:45.850 225859 DEBUG nova.compute.manager [req-8eb53864-9c37-4448-b563-cc75612a40ab req-243503e8-488f-4f11-8943-44df69856886 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 2003d484-9afb-4f49-8410-6e8c6aa813d0] Refreshing instance network info cache due to event network-changed-1dee9c67-fb01-4fcd-8f35-805a326ee235. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 15:34:45 compute-1 nova_compute[225855]: 2026-01-20 15:34:45.850 225859 DEBUG oslo_concurrency.lockutils [req-8eb53864-9c37-4448-b563-cc75612a40ab req-243503e8-488f-4f11-8943-44df69856886 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-2003d484-9afb-4f49-8410-6e8c6aa813d0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:34:45 compute-1 ceph-mon[81775]: pgmap v3377: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:34:45 compute-1 nova_compute[225855]: 2026-01-20 15:34:45.913 225859 DEBUG nova.network.neutron [None req-73ba7bc4-81d4-4e3d-b438-3c908544fde7 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 2003d484-9afb-4f49-8410-6e8c6aa813d0] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 20 15:34:45 compute-1 nova_compute[225855]: 2026-01-20 15:34:45.955 225859 DEBUG oslo_concurrency.processutils [None req-73ba7bc4-81d4-4e3d-b438-3c908544fde7 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 2003d484-9afb-4f49-8410-6e8c6aa813d0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:34:46 compute-1 nova_compute[225855]: 2026-01-20 15:34:46.054 225859 DEBUG nova.storage.rbd_utils [None req-73ba7bc4-81d4-4e3d-b438-3c908544fde7 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] resizing rbd image 2003d484-9afb-4f49-8410-6e8c6aa813d0_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 20 15:34:46 compute-1 nova_compute[225855]: 2026-01-20 15:34:46.648 225859 DEBUG nova.objects.instance [None req-73ba7bc4-81d4-4e3d-b438-3c908544fde7 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Lazy-loading 'migration_context' on Instance uuid 2003d484-9afb-4f49-8410-6e8c6aa813d0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:34:46 compute-1 nova_compute[225855]: 2026-01-20 15:34:46.667 225859 DEBUG nova.virt.libvirt.driver [None req-73ba7bc4-81d4-4e3d-b438-3c908544fde7 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 2003d484-9afb-4f49-8410-6e8c6aa813d0] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 20 15:34:46 compute-1 nova_compute[225855]: 2026-01-20 15:34:46.668 225859 DEBUG nova.virt.libvirt.driver [None req-73ba7bc4-81d4-4e3d-b438-3c908544fde7 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 2003d484-9afb-4f49-8410-6e8c6aa813d0] Ensure instance console log exists: /var/lib/nova/instances/2003d484-9afb-4f49-8410-6e8c6aa813d0/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 20 15:34:46 compute-1 nova_compute[225855]: 2026-01-20 15:34:46.668 225859 DEBUG oslo_concurrency.lockutils [None req-73ba7bc4-81d4-4e3d-b438-3c908544fde7 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:34:46 compute-1 nova_compute[225855]: 2026-01-20 15:34:46.669 225859 DEBUG oslo_concurrency.lockutils [None req-73ba7bc4-81d4-4e3d-b438-3c908544fde7 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:34:46 compute-1 nova_compute[225855]: 2026-01-20 15:34:46.669 225859 DEBUG oslo_concurrency.lockutils [None req-73ba7bc4-81d4-4e3d-b438-3c908544fde7 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:34:46 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:34:46 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:34:46 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:34:46.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:34:46 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:34:46 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:34:46 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:34:46.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:34:46 compute-1 nova_compute[225855]: 2026-01-20 15:34:46.963 225859 DEBUG nova.network.neutron [None req-73ba7bc4-81d4-4e3d-b438-3c908544fde7 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 2003d484-9afb-4f49-8410-6e8c6aa813d0] Updating instance_info_cache with network_info: [{"id": "1dee9c67-fb01-4fcd-8f35-805a326ee235", "address": "fa:16:3e:ea:3a:75", "network": {"id": "99dd5684-1685-443e-9373-f548d80784f6", "bridge": "br-int", "label": "tempest-network-smoke--802350272", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.217", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3168f57421fb49bfb94b85daedd1fe7d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1dee9c67-fb", "ovs_interfaceid": "1dee9c67-fb01-4fcd-8f35-805a326ee235", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:34:46 compute-1 nova_compute[225855]: 2026-01-20 15:34:46.994 225859 DEBUG oslo_concurrency.lockutils [None req-73ba7bc4-81d4-4e3d-b438-3c908544fde7 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Releasing lock "refresh_cache-2003d484-9afb-4f49-8410-6e8c6aa813d0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:34:46 compute-1 nova_compute[225855]: 2026-01-20 15:34:46.994 225859 DEBUG nova.compute.manager [None req-73ba7bc4-81d4-4e3d-b438-3c908544fde7 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 2003d484-9afb-4f49-8410-6e8c6aa813d0] Instance network_info: |[{"id": "1dee9c67-fb01-4fcd-8f35-805a326ee235", "address": "fa:16:3e:ea:3a:75", "network": {"id": "99dd5684-1685-443e-9373-f548d80784f6", "bridge": "br-int", "label": "tempest-network-smoke--802350272", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.217", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3168f57421fb49bfb94b85daedd1fe7d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1dee9c67-fb", "ovs_interfaceid": "1dee9c67-fb01-4fcd-8f35-805a326ee235", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 20 15:34:46 compute-1 nova_compute[225855]: 2026-01-20 15:34:46.995 225859 DEBUG oslo_concurrency.lockutils [req-8eb53864-9c37-4448-b563-cc75612a40ab req-243503e8-488f-4f11-8943-44df69856886 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-2003d484-9afb-4f49-8410-6e8c6aa813d0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:34:46 compute-1 nova_compute[225855]: 2026-01-20 15:34:46.995 225859 DEBUG nova.network.neutron [req-8eb53864-9c37-4448-b563-cc75612a40ab req-243503e8-488f-4f11-8943-44df69856886 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 2003d484-9afb-4f49-8410-6e8c6aa813d0] Refreshing network info cache for port 1dee9c67-fb01-4fcd-8f35-805a326ee235 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 15:34:46 compute-1 nova_compute[225855]: 2026-01-20 15:34:46.997 225859 DEBUG nova.virt.libvirt.driver [None req-73ba7bc4-81d4-4e3d-b438-3c908544fde7 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 2003d484-9afb-4f49-8410-6e8c6aa813d0] Start _get_guest_xml network_info=[{"id": "1dee9c67-fb01-4fcd-8f35-805a326ee235", "address": "fa:16:3e:ea:3a:75", "network": {"id": "99dd5684-1685-443e-9373-f548d80784f6", "bridge": "br-int", "label": "tempest-network-smoke--802350272", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.217", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3168f57421fb49bfb94b85daedd1fe7d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1dee9c67-fb", "ovs_interfaceid": "1dee9c67-fb01-4fcd-8f35-805a326ee235", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'device_type': 'disk', 'encryption_options': None, 'size': 0, 'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'guest_format': None, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'image_id': 'a32b3e07-16d8-46fd-9a7b-c242c432fcf9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 20 15:34:47 compute-1 nova_compute[225855]: 2026-01-20 15:34:47.002 225859 WARNING nova.virt.libvirt.driver [None req-73ba7bc4-81d4-4e3d-b438-3c908544fde7 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 15:34:47 compute-1 nova_compute[225855]: 2026-01-20 15:34:47.006 225859 DEBUG nova.virt.libvirt.host [None req-73ba7bc4-81d4-4e3d-b438-3c908544fde7 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 20 15:34:47 compute-1 nova_compute[225855]: 2026-01-20 15:34:47.006 225859 DEBUG nova.virt.libvirt.host [None req-73ba7bc4-81d4-4e3d-b438-3c908544fde7 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 20 15:34:47 compute-1 nova_compute[225855]: 2026-01-20 15:34:47.011 225859 DEBUG nova.virt.libvirt.host [None req-73ba7bc4-81d4-4e3d-b438-3c908544fde7 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 20 15:34:47 compute-1 nova_compute[225855]: 2026-01-20 15:34:47.011 225859 DEBUG nova.virt.libvirt.host [None req-73ba7bc4-81d4-4e3d-b438-3c908544fde7 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 20 15:34:47 compute-1 nova_compute[225855]: 2026-01-20 15:34:47.012 225859 DEBUG nova.virt.libvirt.driver [None req-73ba7bc4-81d4-4e3d-b438-3c908544fde7 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 20 15:34:47 compute-1 nova_compute[225855]: 2026-01-20 15:34:47.012 225859 DEBUG nova.virt.hardware [None req-73ba7bc4-81d4-4e3d-b438-3c908544fde7 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-20T14:21:55Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='522deaab-a741-4dbb-932d-d8b13a211c33',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 20 15:34:47 compute-1 nova_compute[225855]: 2026-01-20 15:34:47.012 225859 DEBUG nova.virt.hardware [None req-73ba7bc4-81d4-4e3d-b438-3c908544fde7 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 20 15:34:47 compute-1 nova_compute[225855]: 2026-01-20 15:34:47.012 225859 DEBUG nova.virt.hardware [None req-73ba7bc4-81d4-4e3d-b438-3c908544fde7 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 20 15:34:47 compute-1 nova_compute[225855]: 2026-01-20 15:34:47.013 225859 DEBUG nova.virt.hardware [None req-73ba7bc4-81d4-4e3d-b438-3c908544fde7 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 20 15:34:47 compute-1 nova_compute[225855]: 2026-01-20 15:34:47.013 225859 DEBUG nova.virt.hardware [None req-73ba7bc4-81d4-4e3d-b438-3c908544fde7 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 20 15:34:47 compute-1 nova_compute[225855]: 2026-01-20 15:34:47.013 225859 DEBUG nova.virt.hardware [None req-73ba7bc4-81d4-4e3d-b438-3c908544fde7 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 20 15:34:47 compute-1 nova_compute[225855]: 2026-01-20 15:34:47.013 225859 DEBUG nova.virt.hardware [None req-73ba7bc4-81d4-4e3d-b438-3c908544fde7 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 20 15:34:47 compute-1 nova_compute[225855]: 2026-01-20 15:34:47.013 225859 DEBUG nova.virt.hardware [None req-73ba7bc4-81d4-4e3d-b438-3c908544fde7 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 20 15:34:47 compute-1 nova_compute[225855]: 2026-01-20 15:34:47.013 225859 DEBUG nova.virt.hardware [None req-73ba7bc4-81d4-4e3d-b438-3c908544fde7 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 20 15:34:47 compute-1 nova_compute[225855]: 2026-01-20 15:34:47.014 225859 DEBUG nova.virt.hardware [None req-73ba7bc4-81d4-4e3d-b438-3c908544fde7 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 20 15:34:47 compute-1 nova_compute[225855]: 2026-01-20 15:34:47.014 225859 DEBUG nova.virt.hardware [None req-73ba7bc4-81d4-4e3d-b438-3c908544fde7 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 20 15:34:47 compute-1 nova_compute[225855]: 2026-01-20 15:34:47.017 225859 DEBUG oslo_concurrency.processutils [None req-73ba7bc4-81d4-4e3d-b438-3c908544fde7 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:34:47 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 15:34:47 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3471853459' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:34:47 compute-1 nova_compute[225855]: 2026-01-20 15:34:47.471 225859 DEBUG oslo_concurrency.processutils [None req-73ba7bc4-81d4-4e3d-b438-3c908544fde7 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:34:47 compute-1 nova_compute[225855]: 2026-01-20 15:34:47.500 225859 DEBUG nova.storage.rbd_utils [None req-73ba7bc4-81d4-4e3d-b438-3c908544fde7 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] rbd image 2003d484-9afb-4f49-8410-6e8c6aa813d0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:34:47 compute-1 nova_compute[225855]: 2026-01-20 15:34:47.505 225859 DEBUG oslo_concurrency.processutils [None req-73ba7bc4-81d4-4e3d-b438-3c908544fde7 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:34:47 compute-1 nova_compute[225855]: 2026-01-20 15:34:47.908 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:34:47 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 15:34:47 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3172465850' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:34:47 compute-1 nova_compute[225855]: 2026-01-20 15:34:47.937 225859 DEBUG oslo_concurrency.processutils [None req-73ba7bc4-81d4-4e3d-b438-3c908544fde7 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:34:47 compute-1 nova_compute[225855]: 2026-01-20 15:34:47.939 225859 DEBUG nova.virt.libvirt.vif [None req-73ba7bc4-81d4-4e3d-b438-3c908544fde7 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T15:34:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1912465458',display_name='tempest-TestNetworkBasicOps-server-1912465458',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1912465458',id=211,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKCclivZajv/oNiXd0J0tpc9M442c8dbYXCbsYeHEo3g2nh4Rcq6ISUBBO6XIX8RmCdEtQzJtRlazxR/MdQkZGMMo5bsdyOhXnm5vgMIIsHetJR9AEpVwxFDAVbRX9E2EQ==',key_name='tempest-TestNetworkBasicOps-204665299',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3168f57421fb49bfb94b85daedd1fe7d',ramdisk_id='',reservation_id='r-g29h7bs4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-807695970',owner_user_name='tempest-TestNetworkBasicOps-807695970-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T15:34:44Z,user_data=None,user_id='5338aa65dc0e4326a66ce79053787f14',uuid=2003d484-9afb-4f49-8410-6e8c6aa813d0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1dee9c67-fb01-4fcd-8f35-805a326ee235", "address": "fa:16:3e:ea:3a:75", "network": {"id": "99dd5684-1685-443e-9373-f548d80784f6", "bridge": "br-int", "label": "tempest-network-smoke--802350272", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.217", "type": "floating", "version": 4, "meta": {}}]}], 
"routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3168f57421fb49bfb94b85daedd1fe7d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1dee9c67-fb", "ovs_interfaceid": "1dee9c67-fb01-4fcd-8f35-805a326ee235", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 20 15:34:47 compute-1 nova_compute[225855]: 2026-01-20 15:34:47.940 225859 DEBUG nova.network.os_vif_util [None req-73ba7bc4-81d4-4e3d-b438-3c908544fde7 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Converting VIF {"id": "1dee9c67-fb01-4fcd-8f35-805a326ee235", "address": "fa:16:3e:ea:3a:75", "network": {"id": "99dd5684-1685-443e-9373-f548d80784f6", "bridge": "br-int", "label": "tempest-network-smoke--802350272", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.217", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3168f57421fb49bfb94b85daedd1fe7d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1dee9c67-fb", "ovs_interfaceid": "1dee9c67-fb01-4fcd-8f35-805a326ee235", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 15:34:47 compute-1 nova_compute[225855]: 2026-01-20 15:34:47.940 225859 DEBUG nova.network.os_vif_util [None req-73ba7bc4-81d4-4e3d-b438-3c908544fde7 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ea:3a:75,bridge_name='br-int',has_traffic_filtering=True,id=1dee9c67-fb01-4fcd-8f35-805a326ee235,network=Network(99dd5684-1685-443e-9373-f548d80784f6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap1dee9c67-fb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 15:34:47 compute-1 nova_compute[225855]: 2026-01-20 15:34:47.941 225859 DEBUG nova.objects.instance [None req-73ba7bc4-81d4-4e3d-b438-3c908544fde7 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Lazy-loading 'pci_devices' on Instance uuid 2003d484-9afb-4f49-8410-6e8c6aa813d0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:34:47 compute-1 nova_compute[225855]: 2026-01-20 15:34:47.959 225859 DEBUG nova.virt.libvirt.driver [None req-73ba7bc4-81d4-4e3d-b438-3c908544fde7 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 2003d484-9afb-4f49-8410-6e8c6aa813d0] End _get_guest_xml xml=<domain type="kvm">
Jan 20 15:34:47 compute-1 nova_compute[225855]:   <uuid>2003d484-9afb-4f49-8410-6e8c6aa813d0</uuid>
Jan 20 15:34:47 compute-1 nova_compute[225855]:   <name>instance-000000d3</name>
Jan 20 15:34:47 compute-1 nova_compute[225855]:   <memory>131072</memory>
Jan 20 15:34:47 compute-1 nova_compute[225855]:   <vcpu>1</vcpu>
Jan 20 15:34:47 compute-1 nova_compute[225855]:   <metadata>
Jan 20 15:34:47 compute-1 nova_compute[225855]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 15:34:47 compute-1 nova_compute[225855]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 15:34:47 compute-1 nova_compute[225855]:       <nova:name>tempest-TestNetworkBasicOps-server-1912465458</nova:name>
Jan 20 15:34:47 compute-1 nova_compute[225855]:       <nova:creationTime>2026-01-20 15:34:47</nova:creationTime>
Jan 20 15:34:47 compute-1 nova_compute[225855]:       <nova:flavor name="m1.nano">
Jan 20 15:34:47 compute-1 nova_compute[225855]:         <nova:memory>128</nova:memory>
Jan 20 15:34:47 compute-1 nova_compute[225855]:         <nova:disk>1</nova:disk>
Jan 20 15:34:47 compute-1 nova_compute[225855]:         <nova:swap>0</nova:swap>
Jan 20 15:34:47 compute-1 nova_compute[225855]:         <nova:ephemeral>0</nova:ephemeral>
Jan 20 15:34:47 compute-1 nova_compute[225855]:         <nova:vcpus>1</nova:vcpus>
Jan 20 15:34:47 compute-1 nova_compute[225855]:       </nova:flavor>
Jan 20 15:34:47 compute-1 nova_compute[225855]:       <nova:owner>
Jan 20 15:34:47 compute-1 nova_compute[225855]:         <nova:user uuid="5338aa65dc0e4326a66ce79053787f14">tempest-TestNetworkBasicOps-807695970-project-member</nova:user>
Jan 20 15:34:47 compute-1 nova_compute[225855]:         <nova:project uuid="3168f57421fb49bfb94b85daedd1fe7d">tempest-TestNetworkBasicOps-807695970</nova:project>
Jan 20 15:34:47 compute-1 nova_compute[225855]:       </nova:owner>
Jan 20 15:34:47 compute-1 nova_compute[225855]:       <nova:root type="image" uuid="a32b3e07-16d8-46fd-9a7b-c242c432fcf9"/>
Jan 20 15:34:47 compute-1 nova_compute[225855]:       <nova:ports>
Jan 20 15:34:47 compute-1 nova_compute[225855]:         <nova:port uuid="1dee9c67-fb01-4fcd-8f35-805a326ee235">
Jan 20 15:34:47 compute-1 nova_compute[225855]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Jan 20 15:34:47 compute-1 nova_compute[225855]:         </nova:port>
Jan 20 15:34:47 compute-1 nova_compute[225855]:       </nova:ports>
Jan 20 15:34:47 compute-1 nova_compute[225855]:     </nova:instance>
Jan 20 15:34:47 compute-1 nova_compute[225855]:   </metadata>
Jan 20 15:34:47 compute-1 nova_compute[225855]:   <sysinfo type="smbios">
Jan 20 15:34:47 compute-1 nova_compute[225855]:     <system>
Jan 20 15:34:47 compute-1 nova_compute[225855]:       <entry name="manufacturer">RDO</entry>
Jan 20 15:34:47 compute-1 nova_compute[225855]:       <entry name="product">OpenStack Compute</entry>
Jan 20 15:34:47 compute-1 nova_compute[225855]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 15:34:47 compute-1 nova_compute[225855]:       <entry name="serial">2003d484-9afb-4f49-8410-6e8c6aa813d0</entry>
Jan 20 15:34:47 compute-1 nova_compute[225855]:       <entry name="uuid">2003d484-9afb-4f49-8410-6e8c6aa813d0</entry>
Jan 20 15:34:47 compute-1 nova_compute[225855]:       <entry name="family">Virtual Machine</entry>
Jan 20 15:34:47 compute-1 nova_compute[225855]:     </system>
Jan 20 15:34:47 compute-1 nova_compute[225855]:   </sysinfo>
Jan 20 15:34:47 compute-1 nova_compute[225855]:   <os>
Jan 20 15:34:47 compute-1 nova_compute[225855]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 20 15:34:47 compute-1 nova_compute[225855]:     <boot dev="hd"/>
Jan 20 15:34:47 compute-1 nova_compute[225855]:     <smbios mode="sysinfo"/>
Jan 20 15:34:47 compute-1 nova_compute[225855]:   </os>
Jan 20 15:34:47 compute-1 nova_compute[225855]:   <features>
Jan 20 15:34:47 compute-1 nova_compute[225855]:     <acpi/>
Jan 20 15:34:47 compute-1 nova_compute[225855]:     <apic/>
Jan 20 15:34:47 compute-1 nova_compute[225855]:     <vmcoreinfo/>
Jan 20 15:34:47 compute-1 nova_compute[225855]:   </features>
Jan 20 15:34:47 compute-1 nova_compute[225855]:   <clock offset="utc">
Jan 20 15:34:47 compute-1 nova_compute[225855]:     <timer name="pit" tickpolicy="delay"/>
Jan 20 15:34:47 compute-1 nova_compute[225855]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 20 15:34:47 compute-1 nova_compute[225855]:     <timer name="hpet" present="no"/>
Jan 20 15:34:47 compute-1 nova_compute[225855]:   </clock>
Jan 20 15:34:47 compute-1 nova_compute[225855]:   <cpu mode="custom" match="exact">
Jan 20 15:34:47 compute-1 nova_compute[225855]:     <model>Nehalem</model>
Jan 20 15:34:47 compute-1 nova_compute[225855]:     <topology sockets="1" cores="1" threads="1"/>
Jan 20 15:34:47 compute-1 nova_compute[225855]:   </cpu>
Jan 20 15:34:47 compute-1 nova_compute[225855]:   <devices>
Jan 20 15:34:47 compute-1 nova_compute[225855]:     <disk type="network" device="disk">
Jan 20 15:34:47 compute-1 nova_compute[225855]:       <driver type="raw" cache="none"/>
Jan 20 15:34:47 compute-1 nova_compute[225855]:       <source protocol="rbd" name="vms/2003d484-9afb-4f49-8410-6e8c6aa813d0_disk">
Jan 20 15:34:47 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 15:34:47 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 15:34:47 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 15:34:47 compute-1 nova_compute[225855]:       </source>
Jan 20 15:34:47 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 15:34:47 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 15:34:47 compute-1 nova_compute[225855]:       </auth>
Jan 20 15:34:47 compute-1 nova_compute[225855]:       <target dev="vda" bus="virtio"/>
Jan 20 15:34:47 compute-1 nova_compute[225855]:     </disk>
Jan 20 15:34:47 compute-1 nova_compute[225855]:     <disk type="network" device="cdrom">
Jan 20 15:34:47 compute-1 nova_compute[225855]:       <driver type="raw" cache="none"/>
Jan 20 15:34:47 compute-1 nova_compute[225855]:       <source protocol="rbd" name="vms/2003d484-9afb-4f49-8410-6e8c6aa813d0_disk.config">
Jan 20 15:34:47 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 15:34:47 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 15:34:47 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 15:34:47 compute-1 nova_compute[225855]:       </source>
Jan 20 15:34:47 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 15:34:47 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 15:34:47 compute-1 nova_compute[225855]:       </auth>
Jan 20 15:34:47 compute-1 nova_compute[225855]:       <target dev="sda" bus="sata"/>
Jan 20 15:34:47 compute-1 nova_compute[225855]:     </disk>
Jan 20 15:34:47 compute-1 nova_compute[225855]:     <interface type="ethernet">
Jan 20 15:34:47 compute-1 nova_compute[225855]:       <mac address="fa:16:3e:ea:3a:75"/>
Jan 20 15:34:47 compute-1 nova_compute[225855]:       <model type="virtio"/>
Jan 20 15:34:47 compute-1 nova_compute[225855]:       <driver name="vhost" rx_queue_size="512"/>
Jan 20 15:34:47 compute-1 nova_compute[225855]:       <mtu size="1442"/>
Jan 20 15:34:47 compute-1 nova_compute[225855]:       <target dev="tap1dee9c67-fb"/>
Jan 20 15:34:47 compute-1 nova_compute[225855]:     </interface>
Jan 20 15:34:47 compute-1 nova_compute[225855]:     <serial type="pty">
Jan 20 15:34:47 compute-1 nova_compute[225855]:       <log file="/var/lib/nova/instances/2003d484-9afb-4f49-8410-6e8c6aa813d0/console.log" append="off"/>
Jan 20 15:34:47 compute-1 nova_compute[225855]:     </serial>
Jan 20 15:34:47 compute-1 nova_compute[225855]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 20 15:34:47 compute-1 nova_compute[225855]:     <video>
Jan 20 15:34:47 compute-1 nova_compute[225855]:       <model type="virtio"/>
Jan 20 15:34:47 compute-1 nova_compute[225855]:     </video>
Jan 20 15:34:47 compute-1 nova_compute[225855]:     <input type="tablet" bus="usb"/>
Jan 20 15:34:47 compute-1 nova_compute[225855]:     <rng model="virtio">
Jan 20 15:34:47 compute-1 nova_compute[225855]:       <backend model="random">/dev/urandom</backend>
Jan 20 15:34:47 compute-1 nova_compute[225855]:     </rng>
Jan 20 15:34:47 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root"/>
Jan 20 15:34:47 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:34:47 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:34:47 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:34:47 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:34:47 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:34:47 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:34:47 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:34:47 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:34:47 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:34:47 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:34:47 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:34:47 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:34:47 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:34:47 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:34:47 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:34:47 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:34:47 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:34:47 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:34:47 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:34:47 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:34:47 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:34:47 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:34:47 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:34:47 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:34:47 compute-1 nova_compute[225855]:     <controller type="usb" index="0"/>
Jan 20 15:34:47 compute-1 nova_compute[225855]:     <memballoon model="virtio">
Jan 20 15:34:47 compute-1 nova_compute[225855]:       <stats period="10"/>
Jan 20 15:34:47 compute-1 nova_compute[225855]:     </memballoon>
Jan 20 15:34:47 compute-1 nova_compute[225855]:   </devices>
Jan 20 15:34:47 compute-1 nova_compute[225855]: </domain>
Jan 20 15:34:47 compute-1 nova_compute[225855]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 20 15:34:47 compute-1 nova_compute[225855]: 2026-01-20 15:34:47.961 225859 DEBUG nova.compute.manager [None req-73ba7bc4-81d4-4e3d-b438-3c908544fde7 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 2003d484-9afb-4f49-8410-6e8c6aa813d0] Preparing to wait for external event network-vif-plugged-1dee9c67-fb01-4fcd-8f35-805a326ee235 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 20 15:34:47 compute-1 nova_compute[225855]: 2026-01-20 15:34:47.962 225859 DEBUG oslo_concurrency.lockutils [None req-73ba7bc4-81d4-4e3d-b438-3c908544fde7 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Acquiring lock "2003d484-9afb-4f49-8410-6e8c6aa813d0-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:34:47 compute-1 nova_compute[225855]: 2026-01-20 15:34:47.962 225859 DEBUG oslo_concurrency.lockutils [None req-73ba7bc4-81d4-4e3d-b438-3c908544fde7 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Lock "2003d484-9afb-4f49-8410-6e8c6aa813d0-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:34:47 compute-1 nova_compute[225855]: 2026-01-20 15:34:47.962 225859 DEBUG oslo_concurrency.lockutils [None req-73ba7bc4-81d4-4e3d-b438-3c908544fde7 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Lock "2003d484-9afb-4f49-8410-6e8c6aa813d0-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:34:47 compute-1 nova_compute[225855]: 2026-01-20 15:34:47.963 225859 DEBUG nova.virt.libvirt.vif [None req-73ba7bc4-81d4-4e3d-b438-3c908544fde7 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T15:34:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1912465458',display_name='tempest-TestNetworkBasicOps-server-1912465458',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1912465458',id=211,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKCclivZajv/oNiXd0J0tpc9M442c8dbYXCbsYeHEo3g2nh4Rcq6ISUBBO6XIX8RmCdEtQzJtRlazxR/MdQkZGMMo5bsdyOhXnm5vgMIIsHetJR9AEpVwxFDAVbRX9E2EQ==',key_name='tempest-TestNetworkBasicOps-204665299',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3168f57421fb49bfb94b85daedd1fe7d',ramdisk_id='',reservation_id='r-g29h7bs4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-807695970',owner_user_name='tempest-TestNetworkBasicOps-807695970-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T15:34:44Z,user_data=None,user_id='5338aa65dc0e4326a66ce79053787f14',uuid=2003d484-9afb-4f49-8410-6e8c6aa813d0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1dee9c67-fb01-4fcd-8f35-805a326ee235", "address": "fa:16:3e:ea:3a:75", "network": {"id": "99dd5684-1685-443e-9373-f548d80784f6", "bridge": "br-int", "label": "tempest-network-smoke--802350272", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.217", "type": "floating", "version": 4, "meta": 
{}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3168f57421fb49bfb94b85daedd1fe7d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1dee9c67-fb", "ovs_interfaceid": "1dee9c67-fb01-4fcd-8f35-805a326ee235", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 20 15:34:47 compute-1 nova_compute[225855]: 2026-01-20 15:34:47.963 225859 DEBUG nova.network.os_vif_util [None req-73ba7bc4-81d4-4e3d-b438-3c908544fde7 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Converting VIF {"id": "1dee9c67-fb01-4fcd-8f35-805a326ee235", "address": "fa:16:3e:ea:3a:75", "network": {"id": "99dd5684-1685-443e-9373-f548d80784f6", "bridge": "br-int", "label": "tempest-network-smoke--802350272", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.217", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3168f57421fb49bfb94b85daedd1fe7d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1dee9c67-fb", "ovs_interfaceid": "1dee9c67-fb01-4fcd-8f35-805a326ee235", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 15:34:47 compute-1 nova_compute[225855]: 2026-01-20 15:34:47.964 225859 DEBUG nova.network.os_vif_util [None req-73ba7bc4-81d4-4e3d-b438-3c908544fde7 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ea:3a:75,bridge_name='br-int',has_traffic_filtering=True,id=1dee9c67-fb01-4fcd-8f35-805a326ee235,network=Network(99dd5684-1685-443e-9373-f548d80784f6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap1dee9c67-fb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 15:34:47 compute-1 nova_compute[225855]: 2026-01-20 15:34:47.965 225859 DEBUG os_vif [None req-73ba7bc4-81d4-4e3d-b438-3c908544fde7 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ea:3a:75,bridge_name='br-int',has_traffic_filtering=True,id=1dee9c67-fb01-4fcd-8f35-805a326ee235,network=Network(99dd5684-1685-443e-9373-f548d80784f6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap1dee9c67-fb') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 20 15:34:47 compute-1 nova_compute[225855]: 2026-01-20 15:34:47.965 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:34:47 compute-1 nova_compute[225855]: 2026-01-20 15:34:47.966 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:34:47 compute-1 nova_compute[225855]: 2026-01-20 15:34:47.966 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 15:34:47 compute-1 nova_compute[225855]: 2026-01-20 15:34:47.969 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:34:47 compute-1 nova_compute[225855]: 2026-01-20 15:34:47.970 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1dee9c67-fb, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:34:47 compute-1 nova_compute[225855]: 2026-01-20 15:34:47.970 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1dee9c67-fb, col_values=(('external_ids', {'iface-id': '1dee9c67-fb01-4fcd-8f35-805a326ee235', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ea:3a:75', 'vm-uuid': '2003d484-9afb-4f49-8410-6e8c6aa813d0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:34:47 compute-1 nova_compute[225855]: 2026-01-20 15:34:47.972 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:34:47 compute-1 NetworkManager[49104]: <info>  [1768923287.9727] manager: (tap1dee9c67-fb): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/414)
Jan 20 15:34:47 compute-1 nova_compute[225855]: 2026-01-20 15:34:47.974 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 15:34:47 compute-1 nova_compute[225855]: 2026-01-20 15:34:47.981 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:34:47 compute-1 nova_compute[225855]: 2026-01-20 15:34:47.982 225859 INFO os_vif [None req-73ba7bc4-81d4-4e3d-b438-3c908544fde7 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ea:3a:75,bridge_name='br-int',has_traffic_filtering=True,id=1dee9c67-fb01-4fcd-8f35-805a326ee235,network=Network(99dd5684-1685-443e-9373-f548d80784f6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap1dee9c67-fb')
Jan 20 15:34:48 compute-1 podman[323395]: 2026-01-20 15:34:48.056031891 +0000 UTC m=+0.103904297 container health_status 72be95c12e041eb2cd1a16ad70dfa1391d9f03a5f3180a614ead9459dcfcd46f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Jan 20 15:34:48 compute-1 nova_compute[225855]: 2026-01-20 15:34:48.080 225859 DEBUG nova.virt.libvirt.driver [None req-73ba7bc4-81d4-4e3d-b438-3c908544fde7 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 15:34:48 compute-1 nova_compute[225855]: 2026-01-20 15:34:48.081 225859 DEBUG nova.virt.libvirt.driver [None req-73ba7bc4-81d4-4e3d-b438-3c908544fde7 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 15:34:48 compute-1 nova_compute[225855]: 2026-01-20 15:34:48.081 225859 DEBUG nova.virt.libvirt.driver [None req-73ba7bc4-81d4-4e3d-b438-3c908544fde7 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] No VIF found with MAC fa:16:3e:ea:3a:75, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 20 15:34:48 compute-1 nova_compute[225855]: 2026-01-20 15:34:48.081 225859 INFO nova.virt.libvirt.driver [None req-73ba7bc4-81d4-4e3d-b438-3c908544fde7 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 2003d484-9afb-4f49-8410-6e8c6aa813d0] Using config drive
Jan 20 15:34:48 compute-1 nova_compute[225855]: 2026-01-20 15:34:48.106 225859 DEBUG nova.storage.rbd_utils [None req-73ba7bc4-81d4-4e3d-b438-3c908544fde7 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] rbd image 2003d484-9afb-4f49-8410-6e8c6aa813d0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:34:48 compute-1 ceph-mon[81775]: pgmap v3378: 321 pgs: 321 active+clean; 139 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 6.2 KiB/s rd, 492 KiB/s wr, 11 op/s
Jan 20 15:34:48 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/3471853459' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:34:48 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/3172465850' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:34:48 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:34:48 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:34:48 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:34:48 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:34:48.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:34:48 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:34:48 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:34:48 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:34:48.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:34:49 compute-1 nova_compute[225855]: 2026-01-20 15:34:49.192 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:34:49 compute-1 sudo[323439]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:34:49 compute-1 sudo[323439]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:34:49 compute-1 sudo[323439]: pam_unix(sudo:session): session closed for user root
Jan 20 15:34:49 compute-1 sudo[323464]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:34:49 compute-1 sudo[323464]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:34:49 compute-1 sudo[323464]: pam_unix(sudo:session): session closed for user root
Jan 20 15:34:49 compute-1 nova_compute[225855]: 2026-01-20 15:34:49.791 225859 INFO nova.virt.libvirt.driver [None req-73ba7bc4-81d4-4e3d-b438-3c908544fde7 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 2003d484-9afb-4f49-8410-6e8c6aa813d0] Creating config drive at /var/lib/nova/instances/2003d484-9afb-4f49-8410-6e8c6aa813d0/disk.config
Jan 20 15:34:49 compute-1 nova_compute[225855]: 2026-01-20 15:34:49.795 225859 DEBUG oslo_concurrency.processutils [None req-73ba7bc4-81d4-4e3d-b438-3c908544fde7 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/2003d484-9afb-4f49-8410-6e8c6aa813d0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpvw16danw execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:34:49 compute-1 nova_compute[225855]: 2026-01-20 15:34:49.927 225859 DEBUG oslo_concurrency.processutils [None req-73ba7bc4-81d4-4e3d-b438-3c908544fde7 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/2003d484-9afb-4f49-8410-6e8c6aa813d0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpvw16danw" returned: 0 in 0.132s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:34:49 compute-1 nova_compute[225855]: 2026-01-20 15:34:49.960 225859 DEBUG nova.storage.rbd_utils [None req-73ba7bc4-81d4-4e3d-b438-3c908544fde7 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] rbd image 2003d484-9afb-4f49-8410-6e8c6aa813d0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:34:49 compute-1 nova_compute[225855]: 2026-01-20 15:34:49.963 225859 DEBUG oslo_concurrency.processutils [None req-73ba7bc4-81d4-4e3d-b438-3c908544fde7 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/2003d484-9afb-4f49-8410-6e8c6aa813d0/disk.config 2003d484-9afb-4f49-8410-6e8c6aa813d0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:34:50 compute-1 nova_compute[225855]: 2026-01-20 15:34:50.092 225859 DEBUG oslo_concurrency.processutils [None req-73ba7bc4-81d4-4e3d-b438-3c908544fde7 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/2003d484-9afb-4f49-8410-6e8c6aa813d0/disk.config 2003d484-9afb-4f49-8410-6e8c6aa813d0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.128s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:34:50 compute-1 nova_compute[225855]: 2026-01-20 15:34:50.093 225859 INFO nova.virt.libvirt.driver [None req-73ba7bc4-81d4-4e3d-b438-3c908544fde7 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 2003d484-9afb-4f49-8410-6e8c6aa813d0] Deleting local config drive /var/lib/nova/instances/2003d484-9afb-4f49-8410-6e8c6aa813d0/disk.config because it was imported into RBD.
Jan 20 15:34:50 compute-1 kernel: tap1dee9c67-fb: entered promiscuous mode
Jan 20 15:34:50 compute-1 NetworkManager[49104]: <info>  [1768923290.1465] manager: (tap1dee9c67-fb): new Tun device (/org/freedesktop/NetworkManager/Devices/415)
Jan 20 15:34:50 compute-1 nova_compute[225855]: 2026-01-20 15:34:50.146 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:34:50 compute-1 ovn_controller[130490]: 2026-01-20T15:34:50Z|00972|binding|INFO|Claiming lport 1dee9c67-fb01-4fcd-8f35-805a326ee235 for this chassis.
Jan 20 15:34:50 compute-1 ovn_controller[130490]: 2026-01-20T15:34:50Z|00973|binding|INFO|1dee9c67-fb01-4fcd-8f35-805a326ee235: Claiming fa:16:3e:ea:3a:75 10.100.0.12
Jan 20 15:34:50 compute-1 nova_compute[225855]: 2026-01-20 15:34:50.151 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:34:50 compute-1 nova_compute[225855]: 2026-01-20 15:34:50.154 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:34:50 compute-1 NetworkManager[49104]: <info>  [1768923290.1599] manager: (patch-br-int-to-provnet-b62c391b-f7a3-4a38-a0df-72ac0383ca74): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/416)
Jan 20 15:34:50 compute-1 nova_compute[225855]: 2026-01-20 15:34:50.159 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:34:50 compute-1 NetworkManager[49104]: <info>  [1768923290.1613] manager: (patch-provnet-b62c391b-f7a3-4a38-a0df-72ac0383ca74-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/417)
Jan 20 15:34:50 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:34:50.164 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ea:3a:75 10.100.0.12'], port_security=['fa:16:3e:ea:3a:75 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TestNetworkBasicOps-703015767', 'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '2003d484-9afb-4f49-8410-6e8c6aa813d0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-99dd5684-1685-443e-9373-f548d80784f6', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TestNetworkBasicOps-703015767', 'neutron:project_id': '3168f57421fb49bfb94b85daedd1fe7d', 'neutron:revision_number': '7', 'neutron:security_group_ids': 'aaa69ba6-9a27-441e-877e-2cd188322a42', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.217'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=44d2010b-16ff-4152-8c6b-d6e8ffb1b3ca, chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], logical_port=1dee9c67-fb01-4fcd-8f35-805a326ee235) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:34:50 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:34:50.165 140354 INFO neutron.agent.ovn.metadata.agent [-] Port 1dee9c67-fb01-4fcd-8f35-805a326ee235 in datapath 99dd5684-1685-443e-9373-f548d80784f6 bound to our chassis
Jan 20 15:34:50 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:34:50.166 140354 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 99dd5684-1685-443e-9373-f548d80784f6
Jan 20 15:34:50 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:34:50.179 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[a1626301-7cca-4e44-b928-d7261722b327]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:34:50 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:34:50.180 140354 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap99dd5684-11 in ovnmeta-99dd5684-1685-443e-9373-f548d80784f6 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 20 15:34:50 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:34:50.182 229707 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap99dd5684-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 20 15:34:50 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:34:50.183 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[5b33ad76-597b-4281-8120-4d15816757b1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:34:50 compute-1 systemd-udevd[323544]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 15:34:50 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:34:50.184 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[a0390ff4-b446-459b-9da8-c5d2e9d15f9c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:34:50 compute-1 systemd-machined[194361]: New machine qemu-112-instance-000000d3.
Jan 20 15:34:50 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:34:50.195 140466 DEBUG oslo.privsep.daemon [-] privsep: reply[43159564-1b2d-4949-8569-2b62ecb0404f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:34:50 compute-1 NetworkManager[49104]: <info>  [1768923290.1995] device (tap1dee9c67-fb): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 20 15:34:50 compute-1 NetworkManager[49104]: <info>  [1768923290.2000] device (tap1dee9c67-fb): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 20 15:34:50 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:34:50.219 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[05219e07-e736-48e5-96ad-d5a7493a67e4]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:34:50 compute-1 systemd[1]: Started Virtual Machine qemu-112-instance-000000d3.
Jan 20 15:34:50 compute-1 nova_compute[225855]: 2026-01-20 15:34:50.235 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:34:50 compute-1 nova_compute[225855]: 2026-01-20 15:34:50.246 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:34:50 compute-1 ovn_controller[130490]: 2026-01-20T15:34:50Z|00974|binding|INFO|Setting lport 1dee9c67-fb01-4fcd-8f35-805a326ee235 ovn-installed in OVS
Jan 20 15:34:50 compute-1 ovn_controller[130490]: 2026-01-20T15:34:50Z|00975|binding|INFO|Setting lport 1dee9c67-fb01-4fcd-8f35-805a326ee235 up in Southbound
Jan 20 15:34:50 compute-1 nova_compute[225855]: 2026-01-20 15:34:50.258 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:34:50 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:34:50.258 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[060fce39-a7ba-4bee-ace7-aae44275fcf6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:34:50 compute-1 NetworkManager[49104]: <info>  [1768923290.2650] manager: (tap99dd5684-10): new Veth device (/org/freedesktop/NetworkManager/Devices/418)
Jan 20 15:34:50 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:34:50.264 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[763688d4-fc26-4824-af80-52fe132b68c7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:34:50 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:34:50.298 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[6a61aa99-d7f9-4cff-87a6-e54e14889afe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:34:50 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:34:50.301 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[febd096e-0140-4532-9df3-2c22803dd2be]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:34:50 compute-1 NetworkManager[49104]: <info>  [1768923290.3244] device (tap99dd5684-10): carrier: link connected
Jan 20 15:34:50 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:34:50.330 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[06989b20-ae79-485d-952f-e34de152a4b2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:34:50 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:34:50.349 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[c88d7e28-1d10-402a-9d13-303f5ba0f16d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap99dd5684-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b4:88:96'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 277], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 832925, 'reachable_time': 26050, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 323576, 'error': None, 'target': 'ovnmeta-99dd5684-1685-443e-9373-f548d80784f6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:34:50 compute-1 ceph-mon[81775]: pgmap v3379: 321 pgs: 321 active+clean; 156 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 6.6 KiB/s rd, 1.1 MiB/s wr, 13 op/s
Jan 20 15:34:50 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:34:50.367 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[a57d040f-109a-4fa5-a5bf-e178eedb4e5d]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feb4:8896'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 832925, 'tstamp': 832925}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 323577, 'error': None, 'target': 'ovnmeta-99dd5684-1685-443e-9373-f548d80784f6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:34:50 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:34:50.383 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[4d60f09d-753e-4660-876b-7ede5a0335c1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap99dd5684-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b4:88:96'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 277], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 832925, 'reachable_time': 26050, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 323578, 'error': None, 'target': 'ovnmeta-99dd5684-1685-443e-9373-f548d80784f6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:34:50 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:34:50.411 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[2ec7091a-c009-4e82-bd14-4cd3a45c09aa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:34:50 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:34:50.469 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[f8b4ef91-b5e9-42c3-bcf8-e64b0cc42428]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:34:50 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:34:50.471 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap99dd5684-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:34:50 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:34:50.471 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 15:34:50 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:34:50.471 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap99dd5684-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:34:50 compute-1 nova_compute[225855]: 2026-01-20 15:34:50.473 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:34:50 compute-1 NetworkManager[49104]: <info>  [1768923290.4738] manager: (tap99dd5684-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/419)
Jan 20 15:34:50 compute-1 kernel: tap99dd5684-10: entered promiscuous mode
Jan 20 15:34:50 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:34:50.477 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap99dd5684-10, col_values=(('external_ids', {'iface-id': 'b36be382-7937-4c5c-b0f7-fc4a6e68a050'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:34:50 compute-1 ovn_controller[130490]: 2026-01-20T15:34:50Z|00976|binding|INFO|Releasing lport b36be382-7937-4c5c-b0f7-fc4a6e68a050 from this chassis (sb_readonly=0)
Jan 20 15:34:50 compute-1 nova_compute[225855]: 2026-01-20 15:34:50.478 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:34:50 compute-1 nova_compute[225855]: 2026-01-20 15:34:50.480 225859 DEBUG nova.network.neutron [req-8eb53864-9c37-4448-b563-cc75612a40ab req-243503e8-488f-4f11-8943-44df69856886 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 2003d484-9afb-4f49-8410-6e8c6aa813d0] Updated VIF entry in instance network info cache for port 1dee9c67-fb01-4fcd-8f35-805a326ee235. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 15:34:50 compute-1 nova_compute[225855]: 2026-01-20 15:34:50.480 225859 DEBUG nova.network.neutron [req-8eb53864-9c37-4448-b563-cc75612a40ab req-243503e8-488f-4f11-8943-44df69856886 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 2003d484-9afb-4f49-8410-6e8c6aa813d0] Updating instance_info_cache with network_info: [{"id": "1dee9c67-fb01-4fcd-8f35-805a326ee235", "address": "fa:16:3e:ea:3a:75", "network": {"id": "99dd5684-1685-443e-9373-f548d80784f6", "bridge": "br-int", "label": "tempest-network-smoke--802350272", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.217", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3168f57421fb49bfb94b85daedd1fe7d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1dee9c67-fb", "ovs_interfaceid": "1dee9c67-fb01-4fcd-8f35-805a326ee235", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:34:50 compute-1 nova_compute[225855]: 2026-01-20 15:34:50.493 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:34:50 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:34:50.494 140354 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/99dd5684-1685-443e-9373-f548d80784f6.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/99dd5684-1685-443e-9373-f548d80784f6.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 20 15:34:50 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:34:50.494 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[333cb462-ca21-49b7-a4b5-0de0ad3343d8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:34:50 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:34:50.495 140354 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 20 15:34:50 compute-1 ovn_metadata_agent[140349]: global
Jan 20 15:34:50 compute-1 ovn_metadata_agent[140349]:     log         /dev/log local0 debug
Jan 20 15:34:50 compute-1 ovn_metadata_agent[140349]:     log-tag     haproxy-metadata-proxy-99dd5684-1685-443e-9373-f548d80784f6
Jan 20 15:34:50 compute-1 ovn_metadata_agent[140349]:     user        root
Jan 20 15:34:50 compute-1 ovn_metadata_agent[140349]:     group       root
Jan 20 15:34:50 compute-1 ovn_metadata_agent[140349]:     maxconn     1024
Jan 20 15:34:50 compute-1 ovn_metadata_agent[140349]:     pidfile     /var/lib/neutron/external/pids/99dd5684-1685-443e-9373-f548d80784f6.pid.haproxy
Jan 20 15:34:50 compute-1 ovn_metadata_agent[140349]:     daemon
Jan 20 15:34:50 compute-1 ovn_metadata_agent[140349]: 
Jan 20 15:34:50 compute-1 ovn_metadata_agent[140349]: defaults
Jan 20 15:34:50 compute-1 ovn_metadata_agent[140349]:     log global
Jan 20 15:34:50 compute-1 ovn_metadata_agent[140349]:     mode http
Jan 20 15:34:50 compute-1 ovn_metadata_agent[140349]:     option httplog
Jan 20 15:34:50 compute-1 ovn_metadata_agent[140349]:     option dontlognull
Jan 20 15:34:50 compute-1 ovn_metadata_agent[140349]:     option http-server-close
Jan 20 15:34:50 compute-1 ovn_metadata_agent[140349]:     option forwardfor
Jan 20 15:34:50 compute-1 ovn_metadata_agent[140349]:     retries                 3
Jan 20 15:34:50 compute-1 ovn_metadata_agent[140349]:     timeout http-request    30s
Jan 20 15:34:50 compute-1 ovn_metadata_agent[140349]:     timeout connect         30s
Jan 20 15:34:50 compute-1 ovn_metadata_agent[140349]:     timeout client          32s
Jan 20 15:34:50 compute-1 ovn_metadata_agent[140349]:     timeout server          32s
Jan 20 15:34:50 compute-1 ovn_metadata_agent[140349]:     timeout http-keep-alive 30s
Jan 20 15:34:50 compute-1 ovn_metadata_agent[140349]: 
Jan 20 15:34:50 compute-1 ovn_metadata_agent[140349]: 
Jan 20 15:34:50 compute-1 ovn_metadata_agent[140349]: listen listener
Jan 20 15:34:50 compute-1 ovn_metadata_agent[140349]:     bind 169.254.169.254:80
Jan 20 15:34:50 compute-1 ovn_metadata_agent[140349]:     server metadata /var/lib/neutron/metadata_proxy
Jan 20 15:34:50 compute-1 ovn_metadata_agent[140349]:     http-request add-header X-OVN-Network-ID 99dd5684-1685-443e-9373-f548d80784f6
Jan 20 15:34:50 compute-1 ovn_metadata_agent[140349]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 20 15:34:50 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:34:50.496 140354 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-99dd5684-1685-443e-9373-f548d80784f6', 'env', 'PROCESS_TAG=haproxy-99dd5684-1685-443e-9373-f548d80784f6', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/99dd5684-1685-443e-9373-f548d80784f6.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 20 15:34:50 compute-1 nova_compute[225855]: 2026-01-20 15:34:50.502 225859 DEBUG oslo_concurrency.lockutils [req-8eb53864-9c37-4448-b563-cc75612a40ab req-243503e8-488f-4f11-8943-44df69856886 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-2003d484-9afb-4f49-8410-6e8c6aa813d0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:34:50 compute-1 nova_compute[225855]: 2026-01-20 15:34:50.569 225859 DEBUG nova.compute.manager [req-51a38df6-e2c9-4651-ad85-cc9a6579735d req-5fda12b5-aeea-4d19-b4d8-c74f482f92a8 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 2003d484-9afb-4f49-8410-6e8c6aa813d0] Received event network-vif-plugged-1dee9c67-fb01-4fcd-8f35-805a326ee235 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:34:50 compute-1 nova_compute[225855]: 2026-01-20 15:34:50.570 225859 DEBUG oslo_concurrency.lockutils [req-51a38df6-e2c9-4651-ad85-cc9a6579735d req-5fda12b5-aeea-4d19-b4d8-c74f482f92a8 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "2003d484-9afb-4f49-8410-6e8c6aa813d0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:34:50 compute-1 nova_compute[225855]: 2026-01-20 15:34:50.570 225859 DEBUG oslo_concurrency.lockutils [req-51a38df6-e2c9-4651-ad85-cc9a6579735d req-5fda12b5-aeea-4d19-b4d8-c74f482f92a8 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "2003d484-9afb-4f49-8410-6e8c6aa813d0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:34:50 compute-1 nova_compute[225855]: 2026-01-20 15:34:50.570 225859 DEBUG oslo_concurrency.lockutils [req-51a38df6-e2c9-4651-ad85-cc9a6579735d req-5fda12b5-aeea-4d19-b4d8-c74f482f92a8 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "2003d484-9afb-4f49-8410-6e8c6aa813d0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:34:50 compute-1 nova_compute[225855]: 2026-01-20 15:34:50.571 225859 DEBUG nova.compute.manager [req-51a38df6-e2c9-4651-ad85-cc9a6579735d req-5fda12b5-aeea-4d19-b4d8-c74f482f92a8 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 2003d484-9afb-4f49-8410-6e8c6aa813d0] Processing event network-vif-plugged-1dee9c67-fb01-4fcd-8f35-805a326ee235 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 20 15:34:50 compute-1 nova_compute[225855]: 2026-01-20 15:34:50.639 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768923290.6387074, 2003d484-9afb-4f49-8410-6e8c6aa813d0 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:34:50 compute-1 nova_compute[225855]: 2026-01-20 15:34:50.639 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 2003d484-9afb-4f49-8410-6e8c6aa813d0] VM Started (Lifecycle Event)
Jan 20 15:34:50 compute-1 nova_compute[225855]: 2026-01-20 15:34:50.642 225859 DEBUG nova.compute.manager [None req-73ba7bc4-81d4-4e3d-b438-3c908544fde7 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 2003d484-9afb-4f49-8410-6e8c6aa813d0] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 20 15:34:50 compute-1 nova_compute[225855]: 2026-01-20 15:34:50.647 225859 DEBUG nova.virt.libvirt.driver [None req-73ba7bc4-81d4-4e3d-b438-3c908544fde7 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 2003d484-9afb-4f49-8410-6e8c6aa813d0] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 20 15:34:50 compute-1 nova_compute[225855]: 2026-01-20 15:34:50.650 225859 INFO nova.virt.libvirt.driver [-] [instance: 2003d484-9afb-4f49-8410-6e8c6aa813d0] Instance spawned successfully.
Jan 20 15:34:50 compute-1 nova_compute[225855]: 2026-01-20 15:34:50.650 225859 DEBUG nova.virt.libvirt.driver [None req-73ba7bc4-81d4-4e3d-b438-3c908544fde7 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 2003d484-9afb-4f49-8410-6e8c6aa813d0] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 20 15:34:50 compute-1 nova_compute[225855]: 2026-01-20 15:34:50.662 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 2003d484-9afb-4f49-8410-6e8c6aa813d0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:34:50 compute-1 nova_compute[225855]: 2026-01-20 15:34:50.667 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 2003d484-9afb-4f49-8410-6e8c6aa813d0] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 15:34:50 compute-1 nova_compute[225855]: 2026-01-20 15:34:50.672 225859 DEBUG nova.virt.libvirt.driver [None req-73ba7bc4-81d4-4e3d-b438-3c908544fde7 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 2003d484-9afb-4f49-8410-6e8c6aa813d0] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:34:50 compute-1 nova_compute[225855]: 2026-01-20 15:34:50.672 225859 DEBUG nova.virt.libvirt.driver [None req-73ba7bc4-81d4-4e3d-b438-3c908544fde7 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 2003d484-9afb-4f49-8410-6e8c6aa813d0] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:34:50 compute-1 nova_compute[225855]: 2026-01-20 15:34:50.673 225859 DEBUG nova.virt.libvirt.driver [None req-73ba7bc4-81d4-4e3d-b438-3c908544fde7 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 2003d484-9afb-4f49-8410-6e8c6aa813d0] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:34:50 compute-1 nova_compute[225855]: 2026-01-20 15:34:50.673 225859 DEBUG nova.virt.libvirt.driver [None req-73ba7bc4-81d4-4e3d-b438-3c908544fde7 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 2003d484-9afb-4f49-8410-6e8c6aa813d0] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:34:50 compute-1 nova_compute[225855]: 2026-01-20 15:34:50.674 225859 DEBUG nova.virt.libvirt.driver [None req-73ba7bc4-81d4-4e3d-b438-3c908544fde7 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 2003d484-9afb-4f49-8410-6e8c6aa813d0] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:34:50 compute-1 nova_compute[225855]: 2026-01-20 15:34:50.674 225859 DEBUG nova.virt.libvirt.driver [None req-73ba7bc4-81d4-4e3d-b438-3c908544fde7 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 2003d484-9afb-4f49-8410-6e8c6aa813d0] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:34:50 compute-1 nova_compute[225855]: 2026-01-20 15:34:50.723 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 2003d484-9afb-4f49-8410-6e8c6aa813d0] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 15:34:50 compute-1 nova_compute[225855]: 2026-01-20 15:34:50.724 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768923290.6388218, 2003d484-9afb-4f49-8410-6e8c6aa813d0 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:34:50 compute-1 nova_compute[225855]: 2026-01-20 15:34:50.724 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 2003d484-9afb-4f49-8410-6e8c6aa813d0] VM Paused (Lifecycle Event)
Jan 20 15:34:50 compute-1 nova_compute[225855]: 2026-01-20 15:34:50.755 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 2003d484-9afb-4f49-8410-6e8c6aa813d0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:34:50 compute-1 nova_compute[225855]: 2026-01-20 15:34:50.759 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768923290.64678, 2003d484-9afb-4f49-8410-6e8c6aa813d0 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:34:50 compute-1 nova_compute[225855]: 2026-01-20 15:34:50.760 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 2003d484-9afb-4f49-8410-6e8c6aa813d0] VM Resumed (Lifecycle Event)
Jan 20 15:34:50 compute-1 nova_compute[225855]: 2026-01-20 15:34:50.768 225859 INFO nova.compute.manager [None req-73ba7bc4-81d4-4e3d-b438-3c908544fde7 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 2003d484-9afb-4f49-8410-6e8c6aa813d0] Took 6.07 seconds to spawn the instance on the hypervisor.
Jan 20 15:34:50 compute-1 nova_compute[225855]: 2026-01-20 15:34:50.770 225859 DEBUG nova.compute.manager [None req-73ba7bc4-81d4-4e3d-b438-3c908544fde7 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 2003d484-9afb-4f49-8410-6e8c6aa813d0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:34:50 compute-1 nova_compute[225855]: 2026-01-20 15:34:50.778 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 2003d484-9afb-4f49-8410-6e8c6aa813d0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:34:50 compute-1 nova_compute[225855]: 2026-01-20 15:34:50.782 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 2003d484-9afb-4f49-8410-6e8c6aa813d0] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 15:34:50 compute-1 nova_compute[225855]: 2026-01-20 15:34:50.805 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 2003d484-9afb-4f49-8410-6e8c6aa813d0] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 15:34:50 compute-1 nova_compute[225855]: 2026-01-20 15:34:50.839 225859 INFO nova.compute.manager [None req-73ba7bc4-81d4-4e3d-b438-3c908544fde7 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 2003d484-9afb-4f49-8410-6e8c6aa813d0] Took 10.07 seconds to build instance.
Jan 20 15:34:50 compute-1 nova_compute[225855]: 2026-01-20 15:34:50.865 225859 DEBUG oslo_concurrency.lockutils [None req-73ba7bc4-81d4-4e3d-b438-3c908544fde7 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Lock "2003d484-9afb-4f49-8410-6e8c6aa813d0" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.188s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:34:50 compute-1 podman[323652]: 2026-01-20 15:34:50.875519031 +0000 UTC m=+0.048556637 container create eafc6272744e7f80e260b39d508cf6343a4ca7b82c03acb636647827481af26f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-99dd5684-1685-443e-9373-f548d80784f6, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 20 15:34:50 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:34:50 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:34:50 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6e4d6f0 =====
Jan 20 15:34:50 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:34:50.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:34:50 compute-1 radosgw[83787]: ====== req done req=0x7f09c6e4d6f0 op status=0 http_status=200 latency=0.002000058s ======
Jan 20 15:34:50 compute-1 radosgw[83787]: beast: 0x7f09c6e4d6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:34:50.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000058s
Jan 20 15:34:50 compute-1 systemd[1]: Started libpod-conmon-eafc6272744e7f80e260b39d508cf6343a4ca7b82c03acb636647827481af26f.scope.
Jan 20 15:34:50 compute-1 podman[323652]: 2026-01-20 15:34:50.849629252 +0000 UTC m=+0.022666878 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 20 15:34:50 compute-1 systemd[1]: Started libcrun container.
Jan 20 15:34:50 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90f8910fb7a7fc3ab8ba620a83841e990aceea1fa270b9dab15095ed60da470a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 20 15:34:50 compute-1 podman[323652]: 2026-01-20 15:34:50.965747807 +0000 UTC m=+0.138785433 container init eafc6272744e7f80e260b39d508cf6343a4ca7b82c03acb636647827481af26f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-99dd5684-1685-443e-9373-f548d80784f6, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202)
Jan 20 15:34:50 compute-1 podman[323652]: 2026-01-20 15:34:50.971819131 +0000 UTC m=+0.144856737 container start eafc6272744e7f80e260b39d508cf6343a4ca7b82c03acb636647827481af26f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-99dd5684-1685-443e-9373-f548d80784f6, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Jan 20 15:34:50 compute-1 neutron-haproxy-ovnmeta-99dd5684-1685-443e-9373-f548d80784f6[323667]: [NOTICE]   (323671) : New worker (323673) forked
Jan 20 15:34:50 compute-1 neutron-haproxy-ovnmeta-99dd5684-1685-443e-9373-f548d80784f6[323667]: [NOTICE]   (323671) : Loading success.
Jan 20 15:34:52 compute-1 ceph-mon[81775]: pgmap v3380: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 20 15:34:52 compute-1 nova_compute[225855]: 2026-01-20 15:34:52.697 225859 DEBUG nova.compute.manager [req-2fc235bd-29a9-4a82-bae2-a3fd1dd5e952 req-6933f12d-e588-4c27-a5fa-0fe3063737ee 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 2003d484-9afb-4f49-8410-6e8c6aa813d0] Received event network-vif-plugged-1dee9c67-fb01-4fcd-8f35-805a326ee235 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:34:52 compute-1 nova_compute[225855]: 2026-01-20 15:34:52.697 225859 DEBUG oslo_concurrency.lockutils [req-2fc235bd-29a9-4a82-bae2-a3fd1dd5e952 req-6933f12d-e588-4c27-a5fa-0fe3063737ee 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "2003d484-9afb-4f49-8410-6e8c6aa813d0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:34:52 compute-1 nova_compute[225855]: 2026-01-20 15:34:52.697 225859 DEBUG oslo_concurrency.lockutils [req-2fc235bd-29a9-4a82-bae2-a3fd1dd5e952 req-6933f12d-e588-4c27-a5fa-0fe3063737ee 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "2003d484-9afb-4f49-8410-6e8c6aa813d0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:34:52 compute-1 nova_compute[225855]: 2026-01-20 15:34:52.698 225859 DEBUG oslo_concurrency.lockutils [req-2fc235bd-29a9-4a82-bae2-a3fd1dd5e952 req-6933f12d-e588-4c27-a5fa-0fe3063737ee 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "2003d484-9afb-4f49-8410-6e8c6aa813d0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:34:52 compute-1 nova_compute[225855]: 2026-01-20 15:34:52.698 225859 DEBUG nova.compute.manager [req-2fc235bd-29a9-4a82-bae2-a3fd1dd5e952 req-6933f12d-e588-4c27-a5fa-0fe3063737ee 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 2003d484-9afb-4f49-8410-6e8c6aa813d0] No waiting events found dispatching network-vif-plugged-1dee9c67-fb01-4fcd-8f35-805a326ee235 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:34:52 compute-1 nova_compute[225855]: 2026-01-20 15:34:52.698 225859 WARNING nova.compute.manager [req-2fc235bd-29a9-4a82-bae2-a3fd1dd5e952 req-6933f12d-e588-4c27-a5fa-0fe3063737ee 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 2003d484-9afb-4f49-8410-6e8c6aa813d0] Received unexpected event network-vif-plugged-1dee9c67-fb01-4fcd-8f35-805a326ee235 for instance with vm_state active and task_state None.
Jan 20 15:34:52 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:34:52 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:34:52 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:34:52.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:34:52 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:34:52 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:34:52 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:34:52.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:34:52 compute-1 nova_compute[225855]: 2026-01-20 15:34:52.973 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:34:53 compute-1 nova_compute[225855]: 2026-01-20 15:34:53.242 225859 DEBUG oslo_concurrency.lockutils [None req-b63da538-d914-4572-80f0-4968c7623ce0 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Acquiring lock "2003d484-9afb-4f49-8410-6e8c6aa813d0" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:34:53 compute-1 nova_compute[225855]: 2026-01-20 15:34:53.243 225859 DEBUG oslo_concurrency.lockutils [None req-b63da538-d914-4572-80f0-4968c7623ce0 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Lock "2003d484-9afb-4f49-8410-6e8c6aa813d0" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:34:53 compute-1 nova_compute[225855]: 2026-01-20 15:34:53.243 225859 DEBUG oslo_concurrency.lockutils [None req-b63da538-d914-4572-80f0-4968c7623ce0 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Acquiring lock "2003d484-9afb-4f49-8410-6e8c6aa813d0-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:34:53 compute-1 nova_compute[225855]: 2026-01-20 15:34:53.244 225859 DEBUG oslo_concurrency.lockutils [None req-b63da538-d914-4572-80f0-4968c7623ce0 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Lock "2003d484-9afb-4f49-8410-6e8c6aa813d0-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:34:53 compute-1 nova_compute[225855]: 2026-01-20 15:34:53.244 225859 DEBUG oslo_concurrency.lockutils [None req-b63da538-d914-4572-80f0-4968c7623ce0 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Lock "2003d484-9afb-4f49-8410-6e8c6aa813d0-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:34:53 compute-1 nova_compute[225855]: 2026-01-20 15:34:53.246 225859 INFO nova.compute.manager [None req-b63da538-d914-4572-80f0-4968c7623ce0 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 2003d484-9afb-4f49-8410-6e8c6aa813d0] Terminating instance
Jan 20 15:34:53 compute-1 nova_compute[225855]: 2026-01-20 15:34:53.247 225859 DEBUG nova.compute.manager [None req-b63da538-d914-4572-80f0-4968c7623ce0 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 2003d484-9afb-4f49-8410-6e8c6aa813d0] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 20 15:34:53 compute-1 kernel: tap1dee9c67-fb (unregistering): left promiscuous mode
Jan 20 15:34:53 compute-1 NetworkManager[49104]: <info>  [1768923293.2867] device (tap1dee9c67-fb): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 20 15:34:53 compute-1 ovn_controller[130490]: 2026-01-20T15:34:53Z|00977|binding|INFO|Releasing lport 1dee9c67-fb01-4fcd-8f35-805a326ee235 from this chassis (sb_readonly=0)
Jan 20 15:34:53 compute-1 ovn_controller[130490]: 2026-01-20T15:34:53Z|00978|binding|INFO|Setting lport 1dee9c67-fb01-4fcd-8f35-805a326ee235 down in Southbound
Jan 20 15:34:53 compute-1 nova_compute[225855]: 2026-01-20 15:34:53.298 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:34:53 compute-1 ovn_controller[130490]: 2026-01-20T15:34:53Z|00979|binding|INFO|Removing iface tap1dee9c67-fb ovn-installed in OVS
Jan 20 15:34:53 compute-1 nova_compute[225855]: 2026-01-20 15:34:53.302 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:34:53 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:34:53.306 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ea:3a:75 10.100.0.12'], port_security=['fa:16:3e:ea:3a:75 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TestNetworkBasicOps-703015767', 'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '2003d484-9afb-4f49-8410-6e8c6aa813d0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-99dd5684-1685-443e-9373-f548d80784f6', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TestNetworkBasicOps-703015767', 'neutron:project_id': '3168f57421fb49bfb94b85daedd1fe7d', 'neutron:revision_number': '9', 'neutron:security_group_ids': 'aaa69ba6-9a27-441e-877e-2cd188322a42', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.217', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=44d2010b-16ff-4152-8c6b-d6e8ffb1b3ca, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], logical_port=1dee9c67-fb01-4fcd-8f35-805a326ee235) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:34:53 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:34:53.308 140354 INFO neutron.agent.ovn.metadata.agent [-] Port 1dee9c67-fb01-4fcd-8f35-805a326ee235 in datapath 99dd5684-1685-443e-9373-f548d80784f6 unbound from our chassis
Jan 20 15:34:53 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:34:53.309 140354 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 99dd5684-1685-443e-9373-f548d80784f6, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 20 15:34:53 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:34:53.310 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[ae15e720-870f-41a2-9706-4bcc74474b53]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:34:53 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:34:53.310 140354 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-99dd5684-1685-443e-9373-f548d80784f6 namespace which is not needed anymore
Jan 20 15:34:53 compute-1 nova_compute[225855]: 2026-01-20 15:34:53.320 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:34:53 compute-1 systemd[1]: machine-qemu\x2d112\x2dinstance\x2d000000d3.scope: Deactivated successfully.
Jan 20 15:34:53 compute-1 systemd[1]: machine-qemu\x2d112\x2dinstance\x2d000000d3.scope: Consumed 3.055s CPU time.
Jan 20 15:34:53 compute-1 systemd-machined[194361]: Machine qemu-112-instance-000000d3 terminated.
Jan 20 15:34:53 compute-1 neutron-haproxy-ovnmeta-99dd5684-1685-443e-9373-f548d80784f6[323667]: [NOTICE]   (323671) : haproxy version is 2.8.14-c23fe91
Jan 20 15:34:53 compute-1 neutron-haproxy-ovnmeta-99dd5684-1685-443e-9373-f548d80784f6[323667]: [NOTICE]   (323671) : path to executable is /usr/sbin/haproxy
Jan 20 15:34:53 compute-1 neutron-haproxy-ovnmeta-99dd5684-1685-443e-9373-f548d80784f6[323667]: [WARNING]  (323671) : Exiting Master process...
Jan 20 15:34:53 compute-1 neutron-haproxy-ovnmeta-99dd5684-1685-443e-9373-f548d80784f6[323667]: [WARNING]  (323671) : Exiting Master process...
Jan 20 15:34:53 compute-1 neutron-haproxy-ovnmeta-99dd5684-1685-443e-9373-f548d80784f6[323667]: [ALERT]    (323671) : Current worker (323673) exited with code 143 (Terminated)
Jan 20 15:34:53 compute-1 neutron-haproxy-ovnmeta-99dd5684-1685-443e-9373-f548d80784f6[323667]: [WARNING]  (323671) : All workers exited. Exiting... (0)
Jan 20 15:34:53 compute-1 systemd[1]: libpod-eafc6272744e7f80e260b39d508cf6343a4ca7b82c03acb636647827481af26f.scope: Deactivated successfully.
Jan 20 15:34:53 compute-1 podman[323707]: 2026-01-20 15:34:53.440667379 +0000 UTC m=+0.045963993 container died eafc6272744e7f80e260b39d508cf6343a4ca7b82c03acb636647827481af26f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-99dd5684-1685-443e-9373-f548d80784f6, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team)
Jan 20 15:34:53 compute-1 nova_compute[225855]: 2026-01-20 15:34:53.467 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:34:53 compute-1 nova_compute[225855]: 2026-01-20 15:34:53.472 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:34:53 compute-1 nova_compute[225855]: 2026-01-20 15:34:53.481 225859 INFO nova.virt.libvirt.driver [-] [instance: 2003d484-9afb-4f49-8410-6e8c6aa813d0] Instance destroyed successfully.
Jan 20 15:34:53 compute-1 nova_compute[225855]: 2026-01-20 15:34:53.482 225859 DEBUG nova.objects.instance [None req-b63da538-d914-4572-80f0-4968c7623ce0 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Lazy-loading 'resources' on Instance uuid 2003d484-9afb-4f49-8410-6e8c6aa813d0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:34:53 compute-1 nova_compute[225855]: 2026-01-20 15:34:53.500 225859 DEBUG nova.virt.libvirt.vif [None req-b63da538-d914-4572-80f0-4968c7623ce0 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T15:34:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1912465458',display_name='tempest-TestNetworkBasicOps-server-1912465458',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1912465458',id=211,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKCclivZajv/oNiXd0J0tpc9M442c8dbYXCbsYeHEo3g2nh4Rcq6ISUBBO6XIX8RmCdEtQzJtRlazxR/MdQkZGMMo5bsdyOhXnm5vgMIIsHetJR9AEpVwxFDAVbRX9E2EQ==',key_name='tempest-TestNetworkBasicOps-204665299',keypairs=<?>,launch_index=0,launched_at=2026-01-20T15:34:50Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3168f57421fb49bfb94b85daedd1fe7d',ramdisk_id='',reservation_id='r-g29h7bs4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-807695970',owner_user_name='tempest-TestNetworkBasicOps-807695970-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T15:34:50Z,user_data=None,user_id='5338aa65dc0e4326a66ce79053787f14',uuid=2003d484-9afb-4f49-8410-6e8c6aa813d0,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1dee9c67-fb01-4fcd-8f35-805a326ee235", "address": "fa:16:3e:ea:3a:75", "network": {"id": "99dd5684-1685-443e-9373-f548d80784f6", "bridge": "br-int", "label": "tempest-network-smoke--802350272", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.217", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3168f57421fb49bfb94b85daedd1fe7d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1dee9c67-fb", "ovs_interfaceid": "1dee9c67-fb01-4fcd-8f35-805a326ee235", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 20 15:34:53 compute-1 nova_compute[225855]: 2026-01-20 15:34:53.501 225859 DEBUG nova.network.os_vif_util [None req-b63da538-d914-4572-80f0-4968c7623ce0 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Converting VIF {"id": "1dee9c67-fb01-4fcd-8f35-805a326ee235", "address": "fa:16:3e:ea:3a:75", "network": {"id": "99dd5684-1685-443e-9373-f548d80784f6", "bridge": "br-int", "label": "tempest-network-smoke--802350272", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.217", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3168f57421fb49bfb94b85daedd1fe7d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1dee9c67-fb", "ovs_interfaceid": "1dee9c67-fb01-4fcd-8f35-805a326ee235", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 15:34:53 compute-1 nova_compute[225855]: 2026-01-20 15:34:53.501 225859 DEBUG nova.network.os_vif_util [None req-b63da538-d914-4572-80f0-4968c7623ce0 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ea:3a:75,bridge_name='br-int',has_traffic_filtering=True,id=1dee9c67-fb01-4fcd-8f35-805a326ee235,network=Network(99dd5684-1685-443e-9373-f548d80784f6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap1dee9c67-fb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 15:34:53 compute-1 nova_compute[225855]: 2026-01-20 15:34:53.502 225859 DEBUG os_vif [None req-b63da538-d914-4572-80f0-4968c7623ce0 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ea:3a:75,bridge_name='br-int',has_traffic_filtering=True,id=1dee9c67-fb01-4fcd-8f35-805a326ee235,network=Network(99dd5684-1685-443e-9373-f548d80784f6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap1dee9c67-fb') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 20 15:34:53 compute-1 nova_compute[225855]: 2026-01-20 15:34:53.503 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:34:53 compute-1 nova_compute[225855]: 2026-01-20 15:34:53.503 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1dee9c67-fb, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:34:53 compute-1 nova_compute[225855]: 2026-01-20 15:34:53.504 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:34:53 compute-1 nova_compute[225855]: 2026-01-20 15:34:53.505 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:34:53 compute-1 nova_compute[225855]: 2026-01-20 15:34:53.507 225859 INFO os_vif [None req-b63da538-d914-4572-80f0-4968c7623ce0 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ea:3a:75,bridge_name='br-int',has_traffic_filtering=True,id=1dee9c67-fb01-4fcd-8f35-805a326ee235,network=Network(99dd5684-1685-443e-9373-f548d80784f6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap1dee9c67-fb')
Jan 20 15:34:53 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-eafc6272744e7f80e260b39d508cf6343a4ca7b82c03acb636647827481af26f-userdata-shm.mount: Deactivated successfully.
Jan 20 15:34:53 compute-1 systemd[1]: var-lib-containers-storage-overlay-90f8910fb7a7fc3ab8ba620a83841e990aceea1fa270b9dab15095ed60da470a-merged.mount: Deactivated successfully.
Jan 20 15:34:53 compute-1 podman[323707]: 2026-01-20 15:34:53.625146186 +0000 UTC m=+0.230442780 container cleanup eafc6272744e7f80e260b39d508cf6343a4ca7b82c03acb636647827481af26f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-99dd5684-1685-443e-9373-f548d80784f6, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 20 15:34:53 compute-1 systemd[1]: libpod-conmon-eafc6272744e7f80e260b39d508cf6343a4ca7b82c03acb636647827481af26f.scope: Deactivated successfully.
Jan 20 15:34:53 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:34:53 compute-1 podman[323767]: 2026-01-20 15:34:53.831259211 +0000 UTC m=+0.185017584 container remove eafc6272744e7f80e260b39d508cf6343a4ca7b82c03acb636647827481af26f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-99dd5684-1685-443e-9373-f548d80784f6, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 15:34:53 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:34:53.839 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[0bd58055-2978-49c5-bd30-b1dd4d1979a9]: (4, ('Tue Jan 20 03:34:53 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-99dd5684-1685-443e-9373-f548d80784f6 (eafc6272744e7f80e260b39d508cf6343a4ca7b82c03acb636647827481af26f)\neafc6272744e7f80e260b39d508cf6343a4ca7b82c03acb636647827481af26f\nTue Jan 20 03:34:53 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-99dd5684-1685-443e-9373-f548d80784f6 (eafc6272744e7f80e260b39d508cf6343a4ca7b82c03acb636647827481af26f)\neafc6272744e7f80e260b39d508cf6343a4ca7b82c03acb636647827481af26f\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:34:53 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:34:53.841 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[fb4ce807-8ec8-4577-af88-b103afe6ea66]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:34:53 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:34:53.842 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap99dd5684-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:34:53 compute-1 nova_compute[225855]: 2026-01-20 15:34:53.844 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:34:53 compute-1 kernel: tap99dd5684-10: left promiscuous mode
Jan 20 15:34:53 compute-1 nova_compute[225855]: 2026-01-20 15:34:53.857 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:34:53 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:34:53.861 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[ddf0a75f-3201-488c-b29d-9c49f7308b58]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:34:53 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:34:53.876 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[35994666-406c-433e-8375-2ba56397bdde]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:34:53 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:34:53.877 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[9b078761-6b2c-47d6-bf48-ec314da0bc6a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:34:53 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:34:53.894 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[d0490458-4665-48eb-b15f-628c7d2c66a4]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 832917, 'reachable_time': 24456, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 323783, 'error': None, 'target': 'ovnmeta-99dd5684-1685-443e-9373-f548d80784f6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:34:53 compute-1 systemd[1]: run-netns-ovnmeta\x2d99dd5684\x2d1685\x2d443e\x2d9373\x2df548d80784f6.mount: Deactivated successfully.
Jan 20 15:34:53 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:34:53.896 140466 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-99dd5684-1685-443e-9373-f548d80784f6 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 20 15:34:53 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:34:53.897 140466 DEBUG oslo.privsep.daemon [-] privsep: reply[3abcade1-e462-42c8-b562-37e667bc455e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:34:54 compute-1 nova_compute[225855]: 2026-01-20 15:34:54.194 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:34:54 compute-1 nova_compute[225855]: 2026-01-20 15:34:54.202 225859 INFO nova.virt.libvirt.driver [None req-b63da538-d914-4572-80f0-4968c7623ce0 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 2003d484-9afb-4f49-8410-6e8c6aa813d0] Deleting instance files /var/lib/nova/instances/2003d484-9afb-4f49-8410-6e8c6aa813d0_del
Jan 20 15:34:54 compute-1 nova_compute[225855]: 2026-01-20 15:34:54.203 225859 INFO nova.virt.libvirt.driver [None req-b63da538-d914-4572-80f0-4968c7623ce0 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 2003d484-9afb-4f49-8410-6e8c6aa813d0] Deletion of /var/lib/nova/instances/2003d484-9afb-4f49-8410-6e8c6aa813d0_del complete
Jan 20 15:34:54 compute-1 nova_compute[225855]: 2026-01-20 15:34:54.292 225859 INFO nova.compute.manager [None req-b63da538-d914-4572-80f0-4968c7623ce0 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 2003d484-9afb-4f49-8410-6e8c6aa813d0] Took 1.04 seconds to destroy the instance on the hypervisor.
Jan 20 15:34:54 compute-1 nova_compute[225855]: 2026-01-20 15:34:54.292 225859 DEBUG oslo.service.loopingcall [None req-b63da538-d914-4572-80f0-4968c7623ce0 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 20 15:34:54 compute-1 nova_compute[225855]: 2026-01-20 15:34:54.292 225859 DEBUG nova.compute.manager [-] [instance: 2003d484-9afb-4f49-8410-6e8c6aa813d0] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 20 15:34:54 compute-1 nova_compute[225855]: 2026-01-20 15:34:54.293 225859 DEBUG nova.network.neutron [-] [instance: 2003d484-9afb-4f49-8410-6e8c6aa813d0] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 20 15:34:54 compute-1 ceph-mon[81775]: pgmap v3381: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 20 15:34:54 compute-1 nova_compute[225855]: 2026-01-20 15:34:54.828 225859 DEBUG nova.compute.manager [req-7d8ffcc1-67dd-4ff1-b833-05f5010007de req-a0e0c5f4-35f6-4b66-8668-40e688531957 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 2003d484-9afb-4f49-8410-6e8c6aa813d0] Received event network-vif-unplugged-1dee9c67-fb01-4fcd-8f35-805a326ee235 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:34:54 compute-1 nova_compute[225855]: 2026-01-20 15:34:54.828 225859 DEBUG oslo_concurrency.lockutils [req-7d8ffcc1-67dd-4ff1-b833-05f5010007de req-a0e0c5f4-35f6-4b66-8668-40e688531957 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "2003d484-9afb-4f49-8410-6e8c6aa813d0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:34:54 compute-1 nova_compute[225855]: 2026-01-20 15:34:54.829 225859 DEBUG oslo_concurrency.lockutils [req-7d8ffcc1-67dd-4ff1-b833-05f5010007de req-a0e0c5f4-35f6-4b66-8668-40e688531957 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "2003d484-9afb-4f49-8410-6e8c6aa813d0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:34:54 compute-1 nova_compute[225855]: 2026-01-20 15:34:54.829 225859 DEBUG oslo_concurrency.lockutils [req-7d8ffcc1-67dd-4ff1-b833-05f5010007de req-a0e0c5f4-35f6-4b66-8668-40e688531957 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "2003d484-9afb-4f49-8410-6e8c6aa813d0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:34:54 compute-1 nova_compute[225855]: 2026-01-20 15:34:54.829 225859 DEBUG nova.compute.manager [req-7d8ffcc1-67dd-4ff1-b833-05f5010007de req-a0e0c5f4-35f6-4b66-8668-40e688531957 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 2003d484-9afb-4f49-8410-6e8c6aa813d0] No waiting events found dispatching network-vif-unplugged-1dee9c67-fb01-4fcd-8f35-805a326ee235 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:34:54 compute-1 nova_compute[225855]: 2026-01-20 15:34:54.829 225859 DEBUG nova.compute.manager [req-7d8ffcc1-67dd-4ff1-b833-05f5010007de req-a0e0c5f4-35f6-4b66-8668-40e688531957 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 2003d484-9afb-4f49-8410-6e8c6aa813d0] Received event network-vif-unplugged-1dee9c67-fb01-4fcd-8f35-805a326ee235 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 20 15:34:54 compute-1 nova_compute[225855]: 2026-01-20 15:34:54.830 225859 DEBUG nova.compute.manager [req-7d8ffcc1-67dd-4ff1-b833-05f5010007de req-a0e0c5f4-35f6-4b66-8668-40e688531957 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 2003d484-9afb-4f49-8410-6e8c6aa813d0] Received event network-vif-plugged-1dee9c67-fb01-4fcd-8f35-805a326ee235 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:34:54 compute-1 nova_compute[225855]: 2026-01-20 15:34:54.830 225859 DEBUG oslo_concurrency.lockutils [req-7d8ffcc1-67dd-4ff1-b833-05f5010007de req-a0e0c5f4-35f6-4b66-8668-40e688531957 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "2003d484-9afb-4f49-8410-6e8c6aa813d0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:34:54 compute-1 nova_compute[225855]: 2026-01-20 15:34:54.830 225859 DEBUG oslo_concurrency.lockutils [req-7d8ffcc1-67dd-4ff1-b833-05f5010007de req-a0e0c5f4-35f6-4b66-8668-40e688531957 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "2003d484-9afb-4f49-8410-6e8c6aa813d0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:34:54 compute-1 nova_compute[225855]: 2026-01-20 15:34:54.830 225859 DEBUG oslo_concurrency.lockutils [req-7d8ffcc1-67dd-4ff1-b833-05f5010007de req-a0e0c5f4-35f6-4b66-8668-40e688531957 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "2003d484-9afb-4f49-8410-6e8c6aa813d0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:34:54 compute-1 nova_compute[225855]: 2026-01-20 15:34:54.830 225859 DEBUG nova.compute.manager [req-7d8ffcc1-67dd-4ff1-b833-05f5010007de req-a0e0c5f4-35f6-4b66-8668-40e688531957 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 2003d484-9afb-4f49-8410-6e8c6aa813d0] No waiting events found dispatching network-vif-plugged-1dee9c67-fb01-4fcd-8f35-805a326ee235 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:34:54 compute-1 nova_compute[225855]: 2026-01-20 15:34:54.831 225859 WARNING nova.compute.manager [req-7d8ffcc1-67dd-4ff1-b833-05f5010007de req-a0e0c5f4-35f6-4b66-8668-40e688531957 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 2003d484-9afb-4f49-8410-6e8c6aa813d0] Received unexpected event network-vif-plugged-1dee9c67-fb01-4fcd-8f35-805a326ee235 for instance with vm_state active and task_state deleting.
Jan 20 15:34:54 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:34:54 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:34:54 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:34:54.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:34:54 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:34:54 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:34:54 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:34:54.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:34:56 compute-1 ceph-mon[81775]: pgmap v3382: 321 pgs: 321 active+clean; 162 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 947 KiB/s rd, 1.8 MiB/s wr, 86 op/s
Jan 20 15:34:56 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:34:56 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:34:56 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:34:56.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:34:56 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:34:56 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:34:56 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:34:56.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:34:57 compute-1 nova_compute[225855]: 2026-01-20 15:34:57.549 225859 DEBUG nova.network.neutron [-] [instance: 2003d484-9afb-4f49-8410-6e8c6aa813d0] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:34:57 compute-1 nova_compute[225855]: 2026-01-20 15:34:57.569 225859 INFO nova.compute.manager [-] [instance: 2003d484-9afb-4f49-8410-6e8c6aa813d0] Took 3.28 seconds to deallocate network for instance.
Jan 20 15:34:57 compute-1 nova_compute[225855]: 2026-01-20 15:34:57.618 225859 DEBUG oslo_concurrency.lockutils [None req-b63da538-d914-4572-80f0-4968c7623ce0 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:34:57 compute-1 nova_compute[225855]: 2026-01-20 15:34:57.618 225859 DEBUG oslo_concurrency.lockutils [None req-b63da538-d914-4572-80f0-4968c7623ce0 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:34:57 compute-1 nova_compute[225855]: 2026-01-20 15:34:57.685 225859 DEBUG oslo_concurrency.processutils [None req-b63da538-d914-4572-80f0-4968c7623ce0 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:34:58 compute-1 podman[323808]: 2026-01-20 15:34:58.00583234 +0000 UTC m=+0.050412040 container health_status 533cd8e0c3b8b7f910f15cef49e7f879ab46c12ce646a35628c15f13725eb500 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 20 15:34:58 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:34:58 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1237257214' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:34:58 compute-1 nova_compute[225855]: 2026-01-20 15:34:58.118 225859 DEBUG oslo_concurrency.processutils [None req-b63da538-d914-4572-80f0-4968c7623ce0 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:34:58 compute-1 nova_compute[225855]: 2026-01-20 15:34:58.124 225859 DEBUG nova.compute.provider_tree [None req-b63da538-d914-4572-80f0-4968c7623ce0 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:34:58 compute-1 nova_compute[225855]: 2026-01-20 15:34:58.167 225859 DEBUG nova.scheduler.client.report [None req-b63da538-d914-4572-80f0-4968c7623ce0 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:34:58 compute-1 nova_compute[225855]: 2026-01-20 15:34:58.209 225859 DEBUG oslo_concurrency.lockutils [None req-b63da538-d914-4572-80f0-4968c7623ce0 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.590s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:34:58 compute-1 nova_compute[225855]: 2026-01-20 15:34:58.259 225859 INFO nova.scheduler.client.report [None req-b63da538-d914-4572-80f0-4968c7623ce0 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Deleted allocations for instance 2003d484-9afb-4f49-8410-6e8c6aa813d0
Jan 20 15:34:58 compute-1 nova_compute[225855]: 2026-01-20 15:34:58.346 225859 DEBUG oslo_concurrency.lockutils [None req-b63da538-d914-4572-80f0-4968c7623ce0 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Lock "2003d484-9afb-4f49-8410-6e8c6aa813d0" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.103s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:34:58 compute-1 nova_compute[225855]: 2026-01-20 15:34:58.506 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:34:58 compute-1 ceph-mon[81775]: pgmap v3383: 321 pgs: 321 active+clean; 138 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 126 op/s
Jan 20 15:34:58 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/1237257214' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:34:58 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:34:58 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:34:58 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:34:58 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:34:58.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:34:58 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:34:58 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:34:58 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:34:58.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:34:59 compute-1 nova_compute[225855]: 2026-01-20 15:34:59.196 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:34:59 compute-1 ceph-mon[81775]: pgmap v3384: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.3 MiB/s wr, 115 op/s
Jan 20 15:35:00 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:35:00 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:35:00 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:35:00.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:35:00 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:35:00 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:35:00 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:35:00.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:35:01 compute-1 nova_compute[225855]: 2026-01-20 15:35:01.358 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:35:01 compute-1 nova_compute[225855]: 2026-01-20 15:35:01.358 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 20 15:35:01 compute-1 nova_compute[225855]: 2026-01-20 15:35:01.382 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 20 15:35:02 compute-1 ceph-mon[81775]: pgmap v3385: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 726 KiB/s wr, 113 op/s
Jan 20 15:35:02 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:35:02 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:35:02 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:35:02.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:35:02 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:35:02 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:35:02 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:35:02.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:35:03 compute-1 nova_compute[225855]: 2026-01-20 15:35:03.511 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:35:03 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:35:03 compute-1 sudo[323835]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:35:03 compute-1 sudo[323835]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:35:03 compute-1 sudo[323835]: pam_unix(sudo:session): session closed for user root
Jan 20 15:35:03 compute-1 sudo[323860]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:35:03 compute-1 sudo[323860]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:35:03 compute-1 sudo[323860]: pam_unix(sudo:session): session closed for user root
Jan 20 15:35:03 compute-1 sudo[323885]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:35:03 compute-1 sudo[323885]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:35:03 compute-1 sudo[323885]: pam_unix(sudo:session): session closed for user root
Jan 20 15:35:04 compute-1 sudo[323910]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 20 15:35:04 compute-1 sudo[323910]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:35:04 compute-1 nova_compute[225855]: 2026-01-20 15:35:04.197 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:35:04 compute-1 ceph-mon[81775]: pgmap v3386: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 99 op/s
Jan 20 15:35:04 compute-1 sudo[323910]: pam_unix(sudo:session): session closed for user root
Jan 20 15:35:04 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:35:04 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:35:04 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:35:04.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:35:04 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:35:04 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:35:04 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:35:04.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:35:05 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:35:05 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 15:35:05 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:35:05 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 15:35:05 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 15:35:05 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:35:06 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:35:06 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:35:06 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:35:06.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:35:06 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:35:06 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:35:06 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:35:06.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:35:06 compute-1 ceph-mon[81775]: pgmap v3387: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 99 op/s
Jan 20 15:35:07 compute-1 ceph-mon[81775]: pgmap v3388: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.0 MiB/s rd, 852 B/s wr, 40 op/s
Jan 20 15:35:08 compute-1 nova_compute[225855]: 2026-01-20 15:35:08.480 225859 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1768923293.4797463, 2003d484-9afb-4f49-8410-6e8c6aa813d0 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:35:08 compute-1 nova_compute[225855]: 2026-01-20 15:35:08.480 225859 INFO nova.compute.manager [-] [instance: 2003d484-9afb-4f49-8410-6e8c6aa813d0] VM Stopped (Lifecycle Event)
Jan 20 15:35:08 compute-1 nova_compute[225855]: 2026-01-20 15:35:08.515 225859 DEBUG nova.compute.manager [None req-fce46557-cd3f-47a5-a448-3729a39e0469 - - - - - -] [instance: 2003d484-9afb-4f49-8410-6e8c6aa813d0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:35:08 compute-1 nova_compute[225855]: 2026-01-20 15:35:08.516 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:35:08 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:35:08 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:35:08 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:35:08 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:35:08.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:35:08 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:35:08 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:35:08 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:35:08.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:35:09 compute-1 nova_compute[225855]: 2026-01-20 15:35:09.198 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:35:09 compute-1 sudo[323970]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:35:09 compute-1 sudo[323970]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:35:09 compute-1 sudo[323970]: pam_unix(sudo:session): session closed for user root
Jan 20 15:35:09 compute-1 sudo[323995]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:35:09 compute-1 sudo[323995]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:35:09 compute-1 sudo[323995]: pam_unix(sudo:session): session closed for user root
Jan 20 15:35:10 compute-1 ceph-mon[81775]: pgmap v3389: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 0 B/s wr, 0 op/s
Jan 20 15:35:10 compute-1 nova_compute[225855]: 2026-01-20 15:35:10.526 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:35:10 compute-1 nova_compute[225855]: 2026-01-20 15:35:10.602 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:35:10 compute-1 sudo[324022]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:35:10 compute-1 sudo[324022]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:35:10 compute-1 sudo[324022]: pam_unix(sudo:session): session closed for user root
Jan 20 15:35:10 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:35:10 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:35:10 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:35:10.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:35:10 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:35:10 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:35:10 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:35:10.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:35:10 compute-1 sudo[324047]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 15:35:10 compute-1 sudo[324047]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:35:10 compute-1 sudo[324047]: pam_unix(sudo:session): session closed for user root
Jan 20 15:35:11 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:35:11 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:35:11 compute-1 ceph-mon[81775]: pgmap v3390: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:35:12 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:35:12 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:35:12 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:35:12.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:35:12 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:35:12 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:35:12 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:35:12.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:35:13 compute-1 nova_compute[225855]: 2026-01-20 15:35:13.519 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:35:13 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 20 15:35:13 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3805897643' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:35:13 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 20 15:35:13 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3805897643' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:35:13 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:35:14 compute-1 nova_compute[225855]: 2026-01-20 15:35:14.200 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:35:14 compute-1 ceph-mon[81775]: pgmap v3391: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:35:14 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/3805897643' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:35:14 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/3805897643' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:35:14 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:35:14 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:35:14 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:35:14.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:35:14 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:35:14 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:35:14 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:35:14.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:35:15 compute-1 nova_compute[225855]: 2026-01-20 15:35:15.364 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:35:15 compute-1 nova_compute[225855]: 2026-01-20 15:35:15.365 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 15:35:15 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/858350277' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:35:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:35:16.457 140354 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:35:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:35:16.458 140354 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:35:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:35:16.458 140354 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:35:16 compute-1 ceph-mon[81775]: pgmap v3392: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:35:16 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/1905477531' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:35:16 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:35:16 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:35:16 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:35:16.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:35:16 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:35:16 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:35:16 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:35:16.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:35:17 compute-1 nova_compute[225855]: 2026-01-20 15:35:17.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:35:17 compute-1 nova_compute[225855]: 2026-01-20 15:35:17.340 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 15:35:17 compute-1 nova_compute[225855]: 2026-01-20 15:35:17.340 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 20 15:35:17 compute-1 nova_compute[225855]: 2026-01-20 15:35:17.401 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 20 15:35:17 compute-1 nova_compute[225855]: 2026-01-20 15:35:17.402 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:35:17 compute-1 nova_compute[225855]: 2026-01-20 15:35:17.422 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:35:17 compute-1 nova_compute[225855]: 2026-01-20 15:35:17.423 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:35:17 compute-1 nova_compute[225855]: 2026-01-20 15:35:17.423 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:35:17 compute-1 nova_compute[225855]: 2026-01-20 15:35:17.423 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 15:35:17 compute-1 nova_compute[225855]: 2026-01-20 15:35:17.423 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:35:17 compute-1 ceph-mon[81775]: pgmap v3393: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:35:17 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/71798964' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:35:17 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:35:17 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3132927951' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:35:17 compute-1 nova_compute[225855]: 2026-01-20 15:35:17.954 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.530s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:35:18 compute-1 nova_compute[225855]: 2026-01-20 15:35:18.128 225859 WARNING nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 15:35:18 compute-1 nova_compute[225855]: 2026-01-20 15:35:18.129 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4248MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 15:35:18 compute-1 nova_compute[225855]: 2026-01-20 15:35:18.129 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:35:18 compute-1 nova_compute[225855]: 2026-01-20 15:35:18.129 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:35:18 compute-1 nova_compute[225855]: 2026-01-20 15:35:18.264 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 15:35:18 compute-1 nova_compute[225855]: 2026-01-20 15:35:18.264 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 15:35:18 compute-1 nova_compute[225855]: 2026-01-20 15:35:18.315 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:35:18 compute-1 nova_compute[225855]: 2026-01-20 15:35:18.523 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:35:18 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/3132927951' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:35:18 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/3346544430' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:35:18 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:35:18 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:35:18 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/962082559' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:35:18 compute-1 nova_compute[225855]: 2026-01-20 15:35:18.752 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:35:18 compute-1 nova_compute[225855]: 2026-01-20 15:35:18.758 225859 DEBUG nova.compute.provider_tree [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:35:18 compute-1 nova_compute[225855]: 2026-01-20 15:35:18.772 225859 DEBUG nova.scheduler.client.report [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:35:18 compute-1 nova_compute[225855]: 2026-01-20 15:35:18.795 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 15:35:18 compute-1 nova_compute[225855]: 2026-01-20 15:35:18.796 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.666s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:35:18 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:35:18 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:35:18 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:35:18.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:35:18 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:35:18 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:35:18 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:35:18.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:35:19 compute-1 podman[324121]: 2026-01-20 15:35:19.03153554 +0000 UTC m=+0.078522193 container health_status 72be95c12e041eb2cd1a16ad70dfa1391d9f03a5f3180a614ead9459dcfcd46f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.build-date=20251202, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 20 15:35:19 compute-1 nova_compute[225855]: 2026-01-20 15:35:19.200 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:35:19 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/962082559' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:35:19 compute-1 ceph-mon[81775]: pgmap v3394: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:35:20 compute-1 nova_compute[225855]: 2026-01-20 15:35:20.734 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:35:20 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:35:20 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:35:20 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:35:20.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:35:20 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:35:20 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:35:20 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:35:20.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:35:21 compute-1 nova_compute[225855]: 2026-01-20 15:35:21.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:35:22 compute-1 nova_compute[225855]: 2026-01-20 15:35:22.340 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:35:22 compute-1 nova_compute[225855]: 2026-01-20 15:35:22.341 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:35:22 compute-1 ceph-mon[81775]: pgmap v3395: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:35:22 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:35:22 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:35:22 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:35:22.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:35:22 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:35:22 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:35:22 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:35:22.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:35:23 compute-1 nova_compute[225855]: 2026-01-20 15:35:23.526 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:35:23 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:35:24 compute-1 nova_compute[225855]: 2026-01-20 15:35:24.202 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:35:24 compute-1 nova_compute[225855]: 2026-01-20 15:35:24.340 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:35:24 compute-1 ceph-mon[81775]: pgmap v3396: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:35:24 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:35:24 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:35:24 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:35:24.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:35:24 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:35:24 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:35:24 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:35:24.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:35:26 compute-1 ceph-mon[81775]: pgmap v3397: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:35:26 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:35:26 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:35:26 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:35:26.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:35:26 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:35:26 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:35:26 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:35:26.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:35:27 compute-1 nova_compute[225855]: 2026-01-20 15:35:27.335 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:35:28 compute-1 ceph-mon[81775]: pgmap v3398: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:35:28 compute-1 nova_compute[225855]: 2026-01-20 15:35:28.530 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:35:28 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:35:28 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:35:28 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:35:28 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:35:28.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:35:28 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:35:28 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:35:28 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:35:28.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:35:29 compute-1 podman[324152]: 2026-01-20 15:35:29.005649691 +0000 UTC m=+0.054521398 container health_status 533cd8e0c3b8b7f910f15cef49e7f879ab46c12ce646a35628c15f13725eb500 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 20 15:35:29 compute-1 nova_compute[225855]: 2026-01-20 15:35:29.203 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:35:29 compute-1 sudo[324172]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:35:29 compute-1 sudo[324172]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:35:29 compute-1 sudo[324172]: pam_unix(sudo:session): session closed for user root
Jan 20 15:35:29 compute-1 sudo[324197]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:35:29 compute-1 sudo[324197]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:35:29 compute-1 sudo[324197]: pam_unix(sudo:session): session closed for user root
Jan 20 15:35:30 compute-1 ceph-mon[81775]: pgmap v3399: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:35:30 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:35:30 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:35:30 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:35:30.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:35:30 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:35:30 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:35:30 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:35:30.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:35:32 compute-1 ceph-mon[81775]: pgmap v3400: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:35:32 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:35:32 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:35:32 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:35:32.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:35:32 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:35:32 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:35:32 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:35:32.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:35:33 compute-1 nova_compute[225855]: 2026-01-20 15:35:33.577 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:35:33 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:35:33 compute-1 ceph-mon[81775]: pgmap v3401: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:35:34 compute-1 nova_compute[225855]: 2026-01-20 15:35:34.204 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:35:34 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:35:34 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:35:34 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:35:34.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:35:34 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:35:34 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:35:34 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:35:34.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:35:36 compute-1 ceph-mon[81775]: pgmap v3402: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:35:36 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:35:36 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:35:36 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:35:36.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:35:36 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:35:36 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:35:36 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:35:36.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:35:38 compute-1 ceph-mon[81775]: pgmap v3403: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:35:38 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/2682010573' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:35:38 compute-1 nova_compute[225855]: 2026-01-20 15:35:38.580 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:35:38 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:35:38 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:35:38 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:35:38 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:35:38.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:35:38 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:35:38 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:35:38 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:35:38.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:35:39 compute-1 nova_compute[225855]: 2026-01-20 15:35:39.206 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:35:40 compute-1 ceph-mon[81775]: pgmap v3404: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:35:40 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:35:40 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:35:40 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:35:40.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:35:40 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:35:40 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:35:40 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:35:40.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:35:41 compute-1 ceph-mon[81775]: pgmap v3405: 321 pgs: 321 active+clean; 146 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 778 KiB/s wr, 25 op/s
Jan 20 15:35:42 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:35:42 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:35:42 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:35:42.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:35:42 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:35:42 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:35:42 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:35:42.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:35:43 compute-1 nova_compute[225855]: 2026-01-20 15:35:43.335 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:35:43 compute-1 nova_compute[225855]: 2026-01-20 15:35:43.584 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:35:43 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:35:44 compute-1 nova_compute[225855]: 2026-01-20 15:35:44.241 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:35:44 compute-1 ceph-mon[81775]: pgmap v3406: 321 pgs: 321 active+clean; 146 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 778 KiB/s wr, 25 op/s
Jan 20 15:35:44 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/3378826245' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:35:44 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:35:44 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:35:44 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:35:44.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:35:44 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:35:44 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:35:44 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:35:44.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:35:45 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/3772005823' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:35:46 compute-1 ceph-mon[81775]: pgmap v3407: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 20 15:35:46 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:35:46 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:35:46 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:35:46.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:35:46 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:35:46 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:35:46 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:35:46.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:35:48 compute-1 ceph-mon[81775]: pgmap v3408: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 20 15:35:48 compute-1 nova_compute[225855]: 2026-01-20 15:35:48.587 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:35:48 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:35:48.747 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=81, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '12:bb:42', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '06:92:24:f7:15:56'}, ipsec=False) old=SB_Global(nb_cfg=80) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:35:48 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:35:48 compute-1 nova_compute[225855]: 2026-01-20 15:35:48.747 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:35:48 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:35:48.749 140354 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 20 15:35:48 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:35:48 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:35:48 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:35:48.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:35:48 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:35:48 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:35:48 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:35:48.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:35:49 compute-1 nova_compute[225855]: 2026-01-20 15:35:49.242 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:35:49 compute-1 ovn_controller[130490]: 2026-01-20T15:35:49Z|00980|memory_trim|INFO|Detected inactivity (last active 30002 ms ago): trimming memory
Jan 20 15:35:49 compute-1 sudo[324232]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:35:49 compute-1 sudo[324232]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:35:49 compute-1 sudo[324232]: pam_unix(sudo:session): session closed for user root
Jan 20 15:35:50 compute-1 sudo[324263]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:35:50 compute-1 sudo[324263]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:35:50 compute-1 sudo[324263]: pam_unix(sudo:session): session closed for user root
Jan 20 15:35:50 compute-1 podman[324252]: 2026-01-20 15:35:50.085636499 +0000 UTC m=+0.128509501 container health_status 72be95c12e041eb2cd1a16ad70dfa1391d9f03a5f3180a614ead9459dcfcd46f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, 
config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Jan 20 15:35:50 compute-1 ceph-mon[81775]: pgmap v3409: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 606 KiB/s rd, 1.8 MiB/s wr, 51 op/s
Jan 20 15:35:50 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:35:50.751 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5ffd4ac3-9266-4927-98ad-20a17782c725, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '81'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:35:50 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:35:50 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:35:50 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:35:50.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:35:50 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:35:50 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:35:50 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:35:50.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:35:52 compute-1 ceph-mon[81775]: pgmap v3410: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 1.8 MiB/s wr, 82 op/s
Jan 20 15:35:52 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:35:52 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:35:52 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:35:52.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:35:52 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:35:52 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:35:52 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:35:52.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:35:53 compute-1 nova_compute[225855]: 2026-01-20 15:35:53.634 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:35:53 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:35:54 compute-1 nova_compute[225855]: 2026-01-20 15:35:54.245 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:35:54 compute-1 ceph-mon[81775]: pgmap v3411: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 1.0 MiB/s wr, 56 op/s
Jan 20 15:35:54 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:35:54 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:35:54 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:35:54.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:35:54 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:35:54 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:35:54 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:35:54.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:35:55 compute-1 ceph-mon[81775]: pgmap v3412: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.0 MiB/s wr, 75 op/s
Jan 20 15:35:56 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:35:56 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:35:56 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:35:56.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:35:56 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:35:56 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:35:56 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:35:56.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:35:58 compute-1 ceph-mon[81775]: pgmap v3413: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 20 15:35:58 compute-1 nova_compute[225855]: 2026-01-20 15:35:58.638 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:35:58 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:35:58 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:35:58 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:35:58 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:35:58.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:35:58 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:35:58 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:35:58 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:35:58.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:35:59 compute-1 nova_compute[225855]: 2026-01-20 15:35:59.247 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:36:00 compute-1 podman[324315]: 2026-01-20 15:35:59.999928443 +0000 UTC m=+0.048089694 container health_status 533cd8e0c3b8b7f910f15cef49e7f879ab46c12ce646a35628c15f13725eb500 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 20 15:36:00 compute-1 ceph-mon[81775]: pgmap v3414: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 511 B/s wr, 73 op/s
Jan 20 15:36:00 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:36:00 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:36:00 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:36:00.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:36:00 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:36:00 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:36:00 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:36:00.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:36:02 compute-1 ceph-mon[81775]: pgmap v3415: 321 pgs: 321 active+clean; 184 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.4 MiB/s rd, 1.2 MiB/s wr, 74 op/s
Jan 20 15:36:02 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:36:02 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:36:02 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:36:02.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:36:02 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:36:03 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:36:03 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:36:02.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:36:03 compute-1 nova_compute[225855]: 2026-01-20 15:36:03.642 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:36:03 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:36:04 compute-1 nova_compute[225855]: 2026-01-20 15:36:04.249 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:36:04 compute-1 ceph-mon[81775]: pgmap v3416: 321 pgs: 321 active+clean; 184 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 669 KiB/s rd, 1.2 MiB/s wr, 43 op/s
Jan 20 15:36:04 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:36:04 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:36:04 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:36:04.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:36:05 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:36:05 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:36:05 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:36:05.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:36:06 compute-1 ceph-mon[81775]: pgmap v3417: 321 pgs: 321 active+clean; 196 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 903 KiB/s rd, 2.1 MiB/s wr, 77 op/s
Jan 20 15:36:06 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:36:06 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:36:06 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:36:06.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:36:07 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:36:07 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:36:07 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:36:07.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:36:08 compute-1 ceph-mon[81775]: pgmap v3418: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Jan 20 15:36:08 compute-1 nova_compute[225855]: 2026-01-20 15:36:08.673 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:36:08 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:36:08 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:36:08 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:36:08 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:36:08.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:36:09 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:36:09 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:36:09 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:36:09.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:36:09 compute-1 nova_compute[225855]: 2026-01-20 15:36:09.251 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:36:10 compute-1 sudo[324340]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:36:10 compute-1 sudo[324340]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:36:10 compute-1 sudo[324340]: pam_unix(sudo:session): session closed for user root
Jan 20 15:36:10 compute-1 sudo[324365]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:36:10 compute-1 sudo[324365]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:36:10 compute-1 sudo[324365]: pam_unix(sudo:session): session closed for user root
Jan 20 15:36:10 compute-1 ceph-mon[81775]: pgmap v3419: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Jan 20 15:36:10 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:36:10 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:36:10 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:36:10.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:36:11 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:36:11 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:36:11 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:36:11.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:36:11 compute-1 sudo[324390]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:36:11 compute-1 sudo[324390]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:36:11 compute-1 sudo[324390]: pam_unix(sudo:session): session closed for user root
Jan 20 15:36:11 compute-1 sudo[324415]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:36:11 compute-1 sudo[324415]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:36:11 compute-1 sudo[324415]: pam_unix(sudo:session): session closed for user root
Jan 20 15:36:11 compute-1 sudo[324440]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:36:11 compute-1 sudo[324440]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:36:11 compute-1 sudo[324440]: pam_unix(sudo:session): session closed for user root
Jan 20 15:36:11 compute-1 sudo[324465]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 20 15:36:11 compute-1 sudo[324465]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:36:11 compute-1 sudo[324465]: pam_unix(sudo:session): session closed for user root
Jan 20 15:36:12 compute-1 ceph-mon[81775]: pgmap v3420: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 329 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Jan 20 15:36:12 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:36:12 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 15:36:12 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:36:12 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 15:36:12 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 15:36:12 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:36:13 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:36:13 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:36:13 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:36:12.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:36:13 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:36:13 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:36:13 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:36:13.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:36:13 compute-1 nova_compute[225855]: 2026-01-20 15:36:13.718 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:36:13 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:36:14 compute-1 nova_compute[225855]: 2026-01-20 15:36:14.251 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:36:14 compute-1 ceph-mon[81775]: pgmap v3421: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 247 KiB/s rd, 983 KiB/s wr, 38 op/s
Jan 20 15:36:14 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/239173477' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:36:14 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/239173477' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:36:15 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:36:15 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:36:15 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:36:15.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:36:15 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:36:15 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:36:15 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:36:15.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:36:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:36:16.458 140354 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:36:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:36:16.458 140354 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:36:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:36:16.458 140354 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:36:16 compute-1 ceph-mon[81775]: pgmap v3422: 321 pgs: 321 active+clean; 177 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 252 KiB/s rd, 989 KiB/s wr, 46 op/s
Jan 20 15:36:17 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:36:17 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:36:17 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:36:17.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:36:17 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:36:17 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:36:17 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:36:17.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:36:17 compute-1 nova_compute[225855]: 2026-01-20 15:36:17.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:36:17 compute-1 nova_compute[225855]: 2026-01-20 15:36:17.340 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 15:36:17 compute-1 nova_compute[225855]: 2026-01-20 15:36:17.340 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 20 15:36:17 compute-1 nova_compute[225855]: 2026-01-20 15:36:17.358 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 20 15:36:17 compute-1 nova_compute[225855]: 2026-01-20 15:36:17.359 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:36:17 compute-1 nova_compute[225855]: 2026-01-20 15:36:17.359 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 15:36:17 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/3107279431' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:36:17 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/1875694725' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:36:17 compute-1 sudo[324525]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:36:17 compute-1 sudo[324525]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:36:17 compute-1 sudo[324525]: pam_unix(sudo:session): session closed for user root
Jan 20 15:36:18 compute-1 sudo[324550]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 15:36:18 compute-1 sudo[324550]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:36:18 compute-1 sudo[324550]: pam_unix(sudo:session): session closed for user root
Jan 20 15:36:18 compute-1 ceph-mon[81775]: pgmap v3423: 321 pgs: 321 active+clean; 151 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 26 KiB/s rd, 23 KiB/s wr, 23 op/s
Jan 20 15:36:18 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:36:18 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:36:18 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/1689596933' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:36:18 compute-1 nova_compute[225855]: 2026-01-20 15:36:18.723 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:36:18 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:36:19 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:36:19 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:36:19 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:36:19.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:36:19 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:36:19 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:36:19 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:36:19.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:36:19 compute-1 nova_compute[225855]: 2026-01-20 15:36:19.253 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:36:19 compute-1 nova_compute[225855]: 2026-01-20 15:36:19.340 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:36:19 compute-1 nova_compute[225855]: 2026-01-20 15:36:19.375 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:36:19 compute-1 nova_compute[225855]: 2026-01-20 15:36:19.375 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:36:19 compute-1 nova_compute[225855]: 2026-01-20 15:36:19.376 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:36:19 compute-1 nova_compute[225855]: 2026-01-20 15:36:19.376 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 15:36:19 compute-1 nova_compute[225855]: 2026-01-20 15:36:19.377 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:36:19 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/4142365457' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:36:19 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:36:19 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1836750772' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:36:19 compute-1 nova_compute[225855]: 2026-01-20 15:36:19.822 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:36:19 compute-1 nova_compute[225855]: 2026-01-20 15:36:19.988 225859 WARNING nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 15:36:19 compute-1 nova_compute[225855]: 2026-01-20 15:36:19.989 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4262MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 15:36:19 compute-1 nova_compute[225855]: 2026-01-20 15:36:19.990 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:36:19 compute-1 nova_compute[225855]: 2026-01-20 15:36:19.990 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:36:20 compute-1 nova_compute[225855]: 2026-01-20 15:36:20.057 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 15:36:20 compute-1 nova_compute[225855]: 2026-01-20 15:36:20.057 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 15:36:20 compute-1 nova_compute[225855]: 2026-01-20 15:36:20.073 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:36:20 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:36:20 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2691880414' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:36:20 compute-1 nova_compute[225855]: 2026-01-20 15:36:20.529 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:36:20 compute-1 nova_compute[225855]: 2026-01-20 15:36:20.535 225859 DEBUG nova.compute.provider_tree [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:36:20 compute-1 nova_compute[225855]: 2026-01-20 15:36:20.551 225859 DEBUG nova.scheduler.client.report [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:36:20 compute-1 nova_compute[225855]: 2026-01-20 15:36:20.553 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 15:36:20 compute-1 nova_compute[225855]: 2026-01-20 15:36:20.553 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.563s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:36:20 compute-1 ceph-mon[81775]: pgmap v3424: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 22 KiB/s rd, 19 KiB/s wr, 29 op/s
Jan 20 15:36:20 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/1836750772' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:36:20 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/810772915' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:36:20 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/2691880414' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:36:21 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:36:21 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:36:21 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:36:21.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:36:21 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:36:21 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:36:21 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:36:21.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:36:21 compute-1 podman[324620]: 2026-01-20 15:36:21.072805297 +0000 UTC m=+0.119596006 container health_status 72be95c12e041eb2cd1a16ad70dfa1391d9f03a5f3180a614ead9459dcfcd46f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 20 15:36:21 compute-1 nova_compute[225855]: 2026-01-20 15:36:21.554 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:36:21 compute-1 ceph-mon[81775]: pgmap v3425: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 22 KiB/s rd, 18 KiB/s wr, 28 op/s
Jan 20 15:36:22 compute-1 nova_compute[225855]: 2026-01-20 15:36:22.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:36:23 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:36:23 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:36:23 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:36:23.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:36:23 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:36:23 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:36:23 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:36:23.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:36:23 compute-1 nova_compute[225855]: 2026-01-20 15:36:23.340 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:36:23 compute-1 nova_compute[225855]: 2026-01-20 15:36:23.728 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:36:23 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:36:24 compute-1 nova_compute[225855]: 2026-01-20 15:36:24.254 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:36:24 compute-1 nova_compute[225855]: 2026-01-20 15:36:24.340 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:36:24 compute-1 nova_compute[225855]: 2026-01-20 15:36:24.340 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:36:24 compute-1 ceph-mon[81775]: pgmap v3426: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 6.5 KiB/s wr, 28 op/s
Jan 20 15:36:25 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:36:25 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:36:25 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:36:25.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:36:25 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:36:25 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:36:25 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:36:25.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:36:26 compute-1 ceph-mon[81775]: pgmap v3427: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 6.5 KiB/s wr, 28 op/s
Jan 20 15:36:27 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:36:27 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:36:27 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:36:27.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:36:27 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:36:27 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:36:27 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:36:27.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:36:28 compute-1 nova_compute[225855]: 2026-01-20 15:36:28.335 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:36:28 compute-1 ceph-mon[81775]: pgmap v3428: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 15 KiB/s rd, 511 B/s wr, 19 op/s
Jan 20 15:36:28 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:36:28 compute-1 nova_compute[225855]: 2026-01-20 15:36:28.776 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:36:29 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:36:29 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:36:29 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:36:29.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:36:29 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:36:29 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:36:29 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:36:29.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:36:29 compute-1 nova_compute[225855]: 2026-01-20 15:36:29.257 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:36:30 compute-1 sudo[324652]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:36:30 compute-1 sudo[324652]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:36:30 compute-1 sudo[324652]: pam_unix(sudo:session): session closed for user root
Jan 20 15:36:30 compute-1 sudo[324683]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:36:30 compute-1 podman[324676]: 2026-01-20 15:36:30.314286741 +0000 UTC m=+0.052503880 container health_status 533cd8e0c3b8b7f910f15cef49e7f879ab46c12ce646a35628c15f13725eb500 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 20 15:36:30 compute-1 sudo[324683]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:36:30 compute-1 sudo[324683]: pam_unix(sudo:session): session closed for user root
Jan 20 15:36:30 compute-1 ceph-mon[81775]: pgmap v3429: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 6.4 KiB/s rd, 0 B/s wr, 8 op/s
Jan 20 15:36:31 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:36:31 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:36:31 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:36:31.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:36:31 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:36:31 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:36:31 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:36:31.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:36:32 compute-1 ceph-mon[81775]: pgmap v3430: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:36:33 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:36:33 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:36:33 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:36:33.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:36:33 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:36:33 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:36:33 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:36:33.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:36:33 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:36:33 compute-1 nova_compute[225855]: 2026-01-20 15:36:33.779 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:36:34 compute-1 nova_compute[225855]: 2026-01-20 15:36:34.259 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:36:34 compute-1 ceph-mon[81775]: pgmap v3431: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:36:35 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:36:35 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:36:35 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:36:35.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:36:35 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:36:35 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:36:35 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:36:35.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:36:36 compute-1 ceph-mon[81775]: pgmap v3432: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:36:37 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:36:37 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:36:37 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:36:37.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:36:37 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:36:37 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:36:37 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:36:37.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:36:38 compute-1 ceph-mon[81775]: pgmap v3433: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:36:38 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:36:38 compute-1 nova_compute[225855]: 2026-01-20 15:36:38.806 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:36:39 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:36:39 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:36:39 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:36:39.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:36:39 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:36:39 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:36:39 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:36:39.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:36:39 compute-1 nova_compute[225855]: 2026-01-20 15:36:39.261 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:36:40 compute-1 ceph-mon[81775]: pgmap v3434: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:36:41 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:36:41 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:36:41 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:36:41.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:36:41 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:36:41 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:36:41 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:36:41.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:36:42 compute-1 ceph-mon[81775]: pgmap v3435: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:36:43 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:36:43 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:36:43 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:36:43.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:36:43 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:36:43 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:36:43 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:36:43.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:36:43 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:36:43 compute-1 nova_compute[225855]: 2026-01-20 15:36:43.810 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:36:44 compute-1 nova_compute[225855]: 2026-01-20 15:36:44.261 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:36:44 compute-1 ceph-mon[81775]: pgmap v3436: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:36:44 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/2033270823' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:36:45 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:36:45 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:36:45 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:36:45.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:36:45 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:36:45 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:36:45 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:36:45.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:36:45 compute-1 ceph-mon[81775]: pgmap v3437: 321 pgs: 321 active+clean; 145 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 255 B/s rd, 1.1 MiB/s wr, 2 op/s
Jan 20 15:36:47 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:36:47 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:36:47 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:36:47.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:36:47 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:36:47 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:36:47 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:36:47.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:36:48 compute-1 ceph-mon[81775]: pgmap v3438: 321 pgs: 321 active+clean; 155 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 852 B/s rd, 1.4 MiB/s wr, 3 op/s
Jan 20 15:36:48 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:36:48 compute-1 nova_compute[225855]: 2026-01-20 15:36:48.814 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:36:49 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:36:49 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:36:49 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:36:49.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:36:49 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:36:49 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:36:49 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:36:49.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:36:49 compute-1 nova_compute[225855]: 2026-01-20 15:36:49.264 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:36:50 compute-1 sudo[324731]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:36:50 compute-1 sudo[324731]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:36:50 compute-1 sudo[324731]: pam_unix(sudo:session): session closed for user root
Jan 20 15:36:50 compute-1 rsyslogd[1002]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 20 15:36:50 compute-1 rsyslogd[1002]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 20 15:36:50 compute-1 sudo[324757]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:36:50 compute-1 sudo[324757]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:36:50 compute-1 ceph-mon[81775]: pgmap v3439: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 26 op/s
Jan 20 15:36:50 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/521345695' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:36:50 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/940201291' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:36:50 compute-1 sudo[324757]: pam_unix(sudo:session): session closed for user root
Jan 20 15:36:51 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:36:51 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:36:51 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:36:51.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:36:51 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:36:51 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:36:51 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:36:51.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:36:52 compute-1 podman[324783]: 2026-01-20 15:36:52.029003556 +0000 UTC m=+0.079664514 container health_status 72be95c12e041eb2cd1a16ad70dfa1391d9f03a5f3180a614ead9459dcfcd46f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 20 15:36:52 compute-1 ceph-mon[81775]: pgmap v3440: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 20 15:36:53 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:36:53 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:36:53 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:36:53.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:36:53 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:36:53 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:36:53 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:36:53.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:36:53 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:36:53 compute-1 nova_compute[225855]: 2026-01-20 15:36:53.818 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:36:54 compute-1 nova_compute[225855]: 2026-01-20 15:36:54.265 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:36:54 compute-1 ceph-mon[81775]: pgmap v3441: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 20 15:36:55 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:36:55 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:36:55 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:36:55.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:36:55 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:36:55 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:36:55 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:36:55.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:36:55 compute-1 ceph-mon[81775]: pgmap v3442: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 1.8 MiB/s wr, 89 op/s
Jan 20 15:36:57 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:36:57 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:36:57 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:36:57.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:36:57 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:36:57 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:36:57 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:36:57.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:36:58 compute-1 ceph-mon[81775]: pgmap v3443: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 748 KiB/s wr, 98 op/s
Jan 20 15:36:58 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:36:58 compute-1 nova_compute[225855]: 2026-01-20 15:36:58.822 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:36:59 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:36:59 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:36:59 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:36:59.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:36:59 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:36:59 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:36:59 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:36:59.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:36:59 compute-1 nova_compute[225855]: 2026-01-20 15:36:59.267 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:37:00 compute-1 ceph-mon[81775]: pgmap v3444: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 358 KiB/s wr, 97 op/s
Jan 20 15:37:00 compute-1 podman[324813]: 2026-01-20 15:37:00.994832098 +0000 UTC m=+0.047383170 container health_status 533cd8e0c3b8b7f910f15cef49e7f879ab46c12ce646a35628c15f13725eb500 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 20 15:37:01 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:37:01 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:37:01 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:37:01.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:37:01 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:37:01 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:37:01 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:37:01.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:37:02 compute-1 ceph-mon[81775]: pgmap v3445: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Jan 20 15:37:03 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:37:03 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:37:03 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:37:03.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:37:03 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:37:03 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:37:03 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:37:03.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:37:03 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/2682962310' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:37:03 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:37:03 compute-1 nova_compute[225855]: 2026-01-20 15:37:03.876 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:37:04 compute-1 nova_compute[225855]: 2026-01-20 15:37:04.270 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:37:04 compute-1 ceph-mon[81775]: pgmap v3446: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 20 15:37:05 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:37:05 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:37:05 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:37:05.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:37:05 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:37:05 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:37:05 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:37:05.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:37:06 compute-1 ceph-mon[81775]: pgmap v3447: 321 pgs: 321 active+clean; 226 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.8 MiB/s wr, 129 op/s
Jan 20 15:37:07 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:37:07 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:37:07 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:37:07.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:37:07 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:37:07 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:37:07 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:37:07.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:37:08 compute-1 ceph-mon[81775]: pgmap v3448: 321 pgs: 321 active+clean; 244 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 694 KiB/s rd, 3.9 MiB/s wr, 97 op/s
Jan 20 15:37:08 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/547492813' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:37:08 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:37:08 compute-1 nova_compute[225855]: 2026-01-20 15:37:08.879 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:37:09 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:37:09 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:37:09 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:37:09.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:37:09 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:37:09 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:37:09 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:37:09.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:37:09 compute-1 nova_compute[225855]: 2026-01-20 15:37:09.272 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:37:09 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/2914974930' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:37:10 compute-1 ceph-mon[81775]: pgmap v3449: 321 pgs: 321 active+clean; 246 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 406 KiB/s rd, 3.9 MiB/s wr, 92 op/s
Jan 20 15:37:10 compute-1 sudo[324839]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:37:10 compute-1 sudo[324839]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:37:10 compute-1 sudo[324839]: pam_unix(sudo:session): session closed for user root
Jan 20 15:37:10 compute-1 sudo[324864]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:37:10 compute-1 sudo[324864]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:37:10 compute-1 sudo[324864]: pam_unix(sudo:session): session closed for user root
Jan 20 15:37:11 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:37:11 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:37:11 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:37:11.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:37:11 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:37:11 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:37:11 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:37:11.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:37:12 compute-1 ceph-mon[81775]: pgmap v3450: 321 pgs: 321 active+clean; 246 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 407 KiB/s rd, 3.9 MiB/s wr, 92 op/s
Jan 20 15:37:13 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:37:13 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:37:13 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:37:13.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:37:13 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:37:13 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:37:13 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:37:13.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:37:13 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 20 15:37:13 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3673359103' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:37:13 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 20 15:37:13 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3673359103' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:37:13 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:37:13 compute-1 nova_compute[225855]: 2026-01-20 15:37:13.883 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:37:14 compute-1 nova_compute[225855]: 2026-01-20 15:37:14.274 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:37:14 compute-1 ceph-mon[81775]: pgmap v3451: 321 pgs: 321 active+clean; 246 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 407 KiB/s rd, 3.9 MiB/s wr, 92 op/s
Jan 20 15:37:14 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/3673359103' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:37:14 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/3673359103' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:37:15 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:37:15 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:37:15 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:37:15.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:37:15 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:37:15 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:37:15 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:37:15.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:37:15 compute-1 ceph-mon[81775]: pgmap v3452: 321 pgs: 321 active+clean; 246 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 3.9 MiB/s wr, 145 op/s
Jan 20 15:37:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:37:16.459 140354 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:37:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:37:16.459 140354 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:37:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:37:16.459 140354 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:37:17 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:37:17 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:37:17 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:37:17.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:37:17 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:37:17 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:37:17 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:37:17.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:37:17 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/2535838283' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:37:17 compute-1 nova_compute[225855]: 2026-01-20 15:37:17.340 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:37:17 compute-1 nova_compute[225855]: 2026-01-20 15:37:17.341 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 15:37:18 compute-1 ceph-mon[81775]: pgmap v3453: 321 pgs: 321 active+clean; 246 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.1 MiB/s wr, 110 op/s
Jan 20 15:37:18 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/1014966892' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:37:18 compute-1 sudo[324893]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:37:18 compute-1 sudo[324893]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:37:18 compute-1 sudo[324893]: pam_unix(sudo:session): session closed for user root
Jan 20 15:37:18 compute-1 nova_compute[225855]: 2026-01-20 15:37:18.341 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:37:18 compute-1 nova_compute[225855]: 2026-01-20 15:37:18.341 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 15:37:18 compute-1 nova_compute[225855]: 2026-01-20 15:37:18.342 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 20 15:37:18 compute-1 nova_compute[225855]: 2026-01-20 15:37:18.360 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 20 15:37:18 compute-1 sudo[324918]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:37:18 compute-1 sudo[324918]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:37:18 compute-1 sudo[324918]: pam_unix(sudo:session): session closed for user root
Jan 20 15:37:18 compute-1 sudo[324943]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:37:18 compute-1 sudo[324943]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:37:18 compute-1 sudo[324943]: pam_unix(sudo:session): session closed for user root
Jan 20 15:37:18 compute-1 sudo[324968]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 20 15:37:18 compute-1 sudo[324968]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:37:18 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:37:18 compute-1 nova_compute[225855]: 2026-01-20 15:37:18.887 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:37:18 compute-1 sudo[324968]: pam_unix(sudo:session): session closed for user root
Jan 20 15:37:19 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:37:19 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:37:19 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:37:19.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:37:19 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:37:19 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:37:19 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:37:19.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:37:19 compute-1 nova_compute[225855]: 2026-01-20 15:37:19.276 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:37:19 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:37:19 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 15:37:19 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:37:19 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 15:37:19 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 15:37:19 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:37:20 compute-1 ceph-mon[81775]: pgmap v3454: 321 pgs: 321 active+clean; 246 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 29 KiB/s wr, 80 op/s
Jan 20 15:37:20 compute-1 nova_compute[225855]: 2026-01-20 15:37:20.340 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:37:20 compute-1 nova_compute[225855]: 2026-01-20 15:37:20.367 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:37:20 compute-1 nova_compute[225855]: 2026-01-20 15:37:20.368 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:37:20 compute-1 nova_compute[225855]: 2026-01-20 15:37:20.368 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:37:20 compute-1 nova_compute[225855]: 2026-01-20 15:37:20.368 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 15:37:20 compute-1 nova_compute[225855]: 2026-01-20 15:37:20.368 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:37:20 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:37:20 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1298775489' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:37:20 compute-1 nova_compute[225855]: 2026-01-20 15:37:20.797 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:37:20 compute-1 nova_compute[225855]: 2026-01-20 15:37:20.951 225859 WARNING nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 15:37:20 compute-1 nova_compute[225855]: 2026-01-20 15:37:20.952 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4274MB free_disk=20.921852111816406GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 15:37:20 compute-1 nova_compute[225855]: 2026-01-20 15:37:20.952 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:37:20 compute-1 nova_compute[225855]: 2026-01-20 15:37:20.953 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:37:21 compute-1 nova_compute[225855]: 2026-01-20 15:37:21.064 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 15:37:21 compute-1 nova_compute[225855]: 2026-01-20 15:37:21.064 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 15:37:21 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:37:21 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:37:21 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:37:21.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:37:21 compute-1 nova_compute[225855]: 2026-01-20 15:37:21.081 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:37:21 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:37:21 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:37:21 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:37:21.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:37:21 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/1298775489' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:37:21 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:37:21 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/351549409' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:37:21 compute-1 nova_compute[225855]: 2026-01-20 15:37:21.518 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:37:21 compute-1 nova_compute[225855]: 2026-01-20 15:37:21.524 225859 DEBUG nova.compute.provider_tree [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:37:21 compute-1 nova_compute[225855]: 2026-01-20 15:37:21.538 225859 DEBUG nova.scheduler.client.report [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:37:21 compute-1 nova_compute[225855]: 2026-01-20 15:37:21.541 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 15:37:21 compute-1 nova_compute[225855]: 2026-01-20 15:37:21.541 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.589s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:37:22 compute-1 ceph-osd[79119]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 20 15:37:22 compute-1 ceph-osd[79119]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 6000.1 total, 600.0 interval
                                           Cumulative writes: 67K writes, 258K keys, 67K commit groups, 1.0 writes per commit group, ingest: 0.24 GB, 0.04 MB/s
                                           Cumulative WAL: 67K writes, 25K syncs, 2.63 writes per sync, written: 0.24 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 2415 writes, 9201 keys, 2415 commit groups, 1.0 writes per commit group, ingest: 8.90 MB, 0.01 MB/s
                                           Interval WAL: 2415 writes, 999 syncs, 2.42 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.008       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.008       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.008       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557dbecdef30#2 capacity: 1.11 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 0.000216 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000122231%) FilterBlock(3,0.33 KB,2.82073e-05%) IndexBlock(3,0.34 KB,2.95505e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557dbecdef30#2 capacity: 1.11 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 0.000216 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000122231%) FilterBlock(3,0.33 KB,2.82073e-05%) IndexBlock(3,0.34 KB,2.95505e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557dbecdef30#2 capacity: 1.11 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 0.000216 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000122231%) FilterBlock(3,0.33 KB,2.82073e-05%) IndexBlock(3,0.34 KB,2.95505e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557dbecdef30#2 capacity: 1.11 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 0.000216 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000122231%) FilterBlock(3,0.33 KB,2.82073e-05%) IndexBlock(3,0.34 KB,2.95505e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557dbecdef30#2 capacity: 1.11 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 0.000216 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000122231%) FilterBlock(3,0.33 KB,2.82073e-05%) IndexBlock(3,0.34 KB,2.95505e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557dbecdef30#2 capacity: 1.11 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 0.000216 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000122231%) FilterBlock(3,0.33 KB,2.82073e-05%) IndexBlock(3,0.34 KB,2.95505e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557dbecdef30#2 capacity: 1.11 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 0.000216 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000122231%) FilterBlock(3,0.33 KB,2.82073e-05%) IndexBlock(3,0.34 KB,2.95505e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557dbecdf610#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557dbecdf610#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557dbecdf610#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557dbecdef30#2 capacity: 1.11 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 0.000216 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000122231%) FilterBlock(3,0.33 KB,2.82073e-05%) IndexBlock(3,0.34 KB,2.95505e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557dbecdef30#2 capacity: 1.11 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 0.000216 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000122231%) FilterBlock(3,0.33 KB,2.82073e-05%) IndexBlock(3,0.34 KB,2.95505e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Jan 20 15:37:22 compute-1 ceph-mon[81775]: pgmap v3455: 321 pgs: 321 active+clean; 246 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 25 KiB/s wr, 74 op/s
Jan 20 15:37:22 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/351549409' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:37:22 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/1599222156' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:37:23 compute-1 podman[325071]: 2026-01-20 15:37:23.07542054 +0000 UTC m=+0.120249190 container health_status 72be95c12e041eb2cd1a16ad70dfa1391d9f03a5f3180a614ead9459dcfcd46f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 20 15:37:23 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:37:23 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:37:23 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:37:23.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:37:23 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:37:23 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:37:23 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:37:23.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:37:23 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/1011709830' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:37:23 compute-1 nova_compute[225855]: 2026-01-20 15:37:23.541 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:37:23 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:37:23 compute-1 nova_compute[225855]: 2026-01-20 15:37:23.890 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:37:24 compute-1 nova_compute[225855]: 2026-01-20 15:37:24.279 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:37:24 compute-1 nova_compute[225855]: 2026-01-20 15:37:24.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:37:24 compute-1 ceph-mon[81775]: pgmap v3456: 321 pgs: 321 active+clean; 246 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 25 KiB/s wr, 73 op/s
Jan 20 15:37:25 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:37:25 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:37:25 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:37:25.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:37:25 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:37:25 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:37:25 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:37:25.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:37:25 compute-1 nova_compute[225855]: 2026-01-20 15:37:25.340 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:37:25 compute-1 nova_compute[225855]: 2026-01-20 15:37:25.340 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:37:25 compute-1 sudo[325098]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:37:25 compute-1 sudo[325098]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:37:25 compute-1 sudo[325098]: pam_unix(sudo:session): session closed for user root
Jan 20 15:37:25 compute-1 sudo[325123]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 15:37:25 compute-1 sudo[325123]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:37:25 compute-1 sudo[325123]: pam_unix(sudo:session): session closed for user root
Jan 20 15:37:26 compute-1 nova_compute[225855]: 2026-01-20 15:37:26.340 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:37:26 compute-1 ceph-mon[81775]: pgmap v3457: 321 pgs: 321 active+clean; 264 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.1 MiB/s rd, 1.5 MiB/s wr, 112 op/s
Jan 20 15:37:26 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:37:26 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:37:27 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:37:27 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:37:27 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:37:27.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:37:27 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:37:27 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:37:27 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:37:27.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:37:28 compute-1 ceph-mon[81775]: pgmap v3458: 321 pgs: 321 active+clean; 278 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 985 KiB/s rd, 2.1 MiB/s wr, 83 op/s
Jan 20 15:37:28 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:37:28 compute-1 nova_compute[225855]: 2026-01-20 15:37:28.893 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:37:29 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:37:29 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:37:29 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:37:29.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:37:29 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:37:29 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:37:29 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:37:29.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:37:29 compute-1 nova_compute[225855]: 2026-01-20 15:37:29.283 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:37:29 compute-1 nova_compute[225855]: 2026-01-20 15:37:29.334 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:37:30 compute-1 ceph-mon[81775]: pgmap v3459: 321 pgs: 321 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 324 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Jan 20 15:37:30 compute-1 sudo[325151]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:37:30 compute-1 sudo[325151]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:37:30 compute-1 sudo[325151]: pam_unix(sudo:session): session closed for user root
Jan 20 15:37:30 compute-1 sudo[325176]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:37:30 compute-1 sudo[325176]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:37:30 compute-1 sudo[325176]: pam_unix(sudo:session): session closed for user root
Jan 20 15:37:31 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:37:31 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:37:31 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:37:31.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:37:31 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:37:31 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:37:31 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:37:31.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:37:31 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:37:31.896 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=82, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '12:bb:42', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '06:92:24:f7:15:56'}, ipsec=False) old=SB_Global(nb_cfg=81) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:37:31 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:37:31.897 140354 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 20 15:37:31 compute-1 nova_compute[225855]: 2026-01-20 15:37:31.898 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:37:32 compute-1 podman[325202]: 2026-01-20 15:37:32.022367538 +0000 UTC m=+0.063624890 container health_status 533cd8e0c3b8b7f910f15cef49e7f879ab46c12ce646a35628c15f13725eb500 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202)
Jan 20 15:37:32 compute-1 ceph-mon[81775]: pgmap v3460: 321 pgs: 321 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Jan 20 15:37:33 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:37:33 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:37:33 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:37:33.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:37:33 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:37:33 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:37:33 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:37:33.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:37:33 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:37:33 compute-1 nova_compute[225855]: 2026-01-20 15:37:33.929 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:37:34 compute-1 nova_compute[225855]: 2026-01-20 15:37:34.283 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:37:34 compute-1 ceph-mon[81775]: pgmap v3461: 321 pgs: 321 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Jan 20 15:37:35 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:37:35 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:37:35 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:37:35.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:37:35 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:37:35 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:37:35 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:37:35.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:37:35 compute-1 ceph-mon[81775]: pgmap v3462: 321 pgs: 321 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Jan 20 15:37:37 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:37:37 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:37:37 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:37:37.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:37:37 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:37:37 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:37:37 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:37:37.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:37:38 compute-1 ceph-mon[81775]: pgmap v3463: 321 pgs: 321 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 106 KiB/s rd, 708 KiB/s wr, 25 op/s
Jan 20 15:37:38 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:37:38 compute-1 nova_compute[225855]: 2026-01-20 15:37:38.933 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:37:39 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:37:39 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:37:39 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:37:39.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:37:39 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:37:39 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:37:39 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:37:39.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:37:39 compute-1 nova_compute[225855]: 2026-01-20 15:37:39.283 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:37:40 compute-1 ceph-mon[81775]: pgmap v3464: 321 pgs: 321 active+clean; 266 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 7.7 KiB/s rd, 58 KiB/s wr, 2 op/s
Jan 20 15:37:41 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:37:41 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:37:41 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:37:41.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:37:41 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:37:41 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:37:41 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:37:41.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:37:41 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/1559593881' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:37:41 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:37:41.899 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5ffd4ac3-9266-4927-98ad-20a17782c725, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '82'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:37:42 compute-1 ceph-mon[81775]: pgmap v3465: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 26 KiB/s rd, 20 KiB/s wr, 29 op/s
Jan 20 15:37:43 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:37:43 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:37:43 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:37:43.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:37:43 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:37:43 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:37:43 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:37:43.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:37:43 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:37:43 compute-1 nova_compute[225855]: 2026-01-20 15:37:43.936 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:37:44 compute-1 nova_compute[225855]: 2026-01-20 15:37:44.285 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:37:44 compute-1 ceph-mon[81775]: pgmap v3466: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 25 KiB/s rd, 20 KiB/s wr, 29 op/s
Jan 20 15:37:45 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:37:45 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:37:45 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:37:45.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:37:45 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:37:45 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:37:45 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:37:45.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:37:46 compute-1 ceph-mon[81775]: pgmap v3467: 321 pgs: 321 active+clean; 151 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 38 KiB/s rd, 20 KiB/s wr, 48 op/s
Jan 20 15:37:46 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/496827110' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:37:47 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:37:47 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:37:47 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:37:47.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:37:47 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:37:47 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:37:47 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:37:47.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:37:48 compute-1 nova_compute[225855]: 2026-01-20 15:37:48.334 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:37:48 compute-1 ceph-mon[81775]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #169. Immutable memtables: 0.
Jan 20 15:37:48 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:37:48.421330) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 20 15:37:48 compute-1 ceph-mon[81775]: rocksdb: [db/flush_job.cc:856] [default] [JOB 107] Flushing memtable with next log file: 169
Jan 20 15:37:48 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768923468421360, "job": 107, "event": "flush_started", "num_memtables": 1, "num_entries": 2200, "num_deletes": 252, "total_data_size": 5409694, "memory_usage": 5485344, "flush_reason": "Manual Compaction"}
Jan 20 15:37:48 compute-1 ceph-mon[81775]: rocksdb: [db/flush_job.cc:885] [default] [JOB 107] Level-0 flush table #170: started
Jan 20 15:37:48 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768923468444022, "cf_name": "default", "job": 107, "event": "table_file_creation", "file_number": 170, "file_size": 2036286, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 81944, "largest_seqno": 84139, "table_properties": {"data_size": 2029966, "index_size": 3201, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2053, "raw_key_size": 16729, "raw_average_key_size": 20, "raw_value_size": 2015882, "raw_average_value_size": 2513, "num_data_blocks": 145, "num_entries": 802, "num_filter_entries": 802, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768923260, "oldest_key_time": 1768923260, "file_creation_time": 1768923468, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1539d774-8a6f-4e48-b253-137c44586344", "db_session_id": "LFF7G2OZDOU7TKQ8MKAH", "orig_file_number": 170, "seqno_to_time_mapping": "N/A"}}
Jan 20 15:37:48 compute-1 ceph-mon[81775]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 107] Flush lasted 22764 microseconds, and 5032 cpu microseconds.
Jan 20 15:37:48 compute-1 ceph-mon[81775]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 15:37:48 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:37:48.444088) [db/flush_job.cc:967] [default] [JOB 107] Level-0 flush table #170: 2036286 bytes OK
Jan 20 15:37:48 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:37:48.444109) [db/memtable_list.cc:519] [default] Level-0 commit table #170 started
Jan 20 15:37:48 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:37:48.446024) [db/memtable_list.cc:722] [default] Level-0 commit table #170: memtable #1 done
Jan 20 15:37:48 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:37:48.446036) EVENT_LOG_v1 {"time_micros": 1768923468446033, "job": 107, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 20 15:37:48 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:37:48.446055) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 20 15:37:48 compute-1 ceph-mon[81775]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 107] Try to delete WAL files size 5400006, prev total WAL file size 5400006, number of live WAL files 2.
Jan 20 15:37:48 compute-1 ceph-mon[81775]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000166.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 15:37:48 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:37:48.447347) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032373537' seq:72057594037927935, type:22 .. '6D6772737461740033303130' seq:0, type:0; will stop at (end)
Jan 20 15:37:48 compute-1 ceph-mon[81775]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 108] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 20 15:37:48 compute-1 ceph-mon[81775]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 107 Base level 0, inputs: [170(1988KB)], [168(12MB)]
Jan 20 15:37:48 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768923468447425, "job": 108, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [170], "files_L6": [168], "score": -1, "input_data_size": 14993550, "oldest_snapshot_seqno": -1}
Jan 20 15:37:48 compute-1 ceph-mon[81775]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 108] Generated table #171: 10615 keys, 12636913 bytes, temperature: kUnknown
Jan 20 15:37:48 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768923468593370, "cf_name": "default", "job": 108, "event": "table_file_creation", "file_number": 171, "file_size": 12636913, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12570509, "index_size": 38771, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 26565, "raw_key_size": 278523, "raw_average_key_size": 26, "raw_value_size": 12386874, "raw_average_value_size": 1166, "num_data_blocks": 1478, "num_entries": 10615, "num_filter_entries": 10615, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768917474, "oldest_key_time": 0, "file_creation_time": 1768923468, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1539d774-8a6f-4e48-b253-137c44586344", "db_session_id": "LFF7G2OZDOU7TKQ8MKAH", "orig_file_number": 171, "seqno_to_time_mapping": "N/A"}}
Jan 20 15:37:48 compute-1 ceph-mon[81775]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 15:37:48 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:37:48.593691) [db/compaction/compaction_job.cc:1663] [default] [JOB 108] Compacted 1@0 + 1@6 files to L6 => 12636913 bytes
Jan 20 15:37:48 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:37:48.595469) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 102.7 rd, 86.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.9, 12.4 +0.0 blob) out(12.1 +0.0 blob), read-write-amplify(13.6) write-amplify(6.2) OK, records in: 11033, records dropped: 418 output_compression: NoCompression
Jan 20 15:37:48 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:37:48.595493) EVENT_LOG_v1 {"time_micros": 1768923468595482, "job": 108, "event": "compaction_finished", "compaction_time_micros": 146025, "compaction_time_cpu_micros": 32018, "output_level": 6, "num_output_files": 1, "total_output_size": 12636913, "num_input_records": 11033, "num_output_records": 10615, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 20 15:37:48 compute-1 ceph-mon[81775]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000170.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 15:37:48 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768923468596197, "job": 108, "event": "table_file_deletion", "file_number": 170}
Jan 20 15:37:48 compute-1 ceph-mon[81775]: pgmap v3468: 321 pgs: 321 active+clean; 135 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 40 KiB/s rd, 10 KiB/s wr, 51 op/s
Jan 20 15:37:48 compute-1 ceph-mon[81775]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000168.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 15:37:48 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768923468599238, "job": 108, "event": "table_file_deletion", "file_number": 168}
Jan 20 15:37:48 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:37:48.447207) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:37:48 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:37:48.599352) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:37:48 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:37:48.599360) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:37:48 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:37:48.599364) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:37:48 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:37:48.599368) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:37:48 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:37:48.599371) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:37:48 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:37:48 compute-1 nova_compute[225855]: 2026-01-20 15:37:48.940 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:37:49 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:37:49 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:37:49 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:37:49.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:37:49 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:37:49 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:37:49 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:37:49.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:37:49 compute-1 nova_compute[225855]: 2026-01-20 15:37:49.286 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:37:50 compute-1 ceph-mon[81775]: pgmap v3469: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 38 KiB/s rd, 9.0 KiB/s wr, 56 op/s
Jan 20 15:37:50 compute-1 sudo[325228]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:37:50 compute-1 sudo[325228]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:37:50 compute-1 sudo[325228]: pam_unix(sudo:session): session closed for user root
Jan 20 15:37:50 compute-1 sudo[325253]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:37:50 compute-1 sudo[325253]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:37:50 compute-1 sudo[325253]: pam_unix(sudo:session): session closed for user root
Jan 20 15:37:51 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:37:51 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:37:51 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:37:51.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:37:51 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:37:51 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:37:51 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:37:51.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:37:52 compute-1 ceph-mon[81775]: pgmap v3470: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 38 KiB/s rd, 6.0 KiB/s wr, 55 op/s
Jan 20 15:37:53 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:37:53 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:37:53 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:37:53.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:37:53 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:37:53 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:37:53 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:37:53.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:37:53 compute-1 ceph-mon[81775]: pgmap v3471: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 20 15:37:53 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:37:53 compute-1 nova_compute[225855]: 2026-01-20 15:37:53.943 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:37:54 compute-1 podman[325280]: 2026-01-20 15:37:54.019970533 +0000 UTC m=+0.061793118 container health_status 72be95c12e041eb2cd1a16ad70dfa1391d9f03a5f3180a614ead9459dcfcd46f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 20 15:37:54 compute-1 nova_compute[225855]: 2026-01-20 15:37:54.288 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:37:54 compute-1 ceph-mon[81775]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 20 15:37:54 compute-1 ceph-mon[81775]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 6000.0 total, 600.0 interval
                                           Cumulative writes: 16K writes, 84K keys, 16K commit groups, 1.0 writes per commit group, ingest: 0.16 GB, 0.03 MB/s
                                           Cumulative WAL: 16K writes, 16K syncs, 1.00 writes per sync, written: 0.16 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1487 writes, 7136 keys, 1487 commit groups, 1.0 writes per commit group, ingest: 15.62 MB, 0.03 MB/s
                                           Interval WAL: 1487 writes, 1487 syncs, 1.00 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     74.6      1.37              0.35        54    0.025       0      0       0.0       0.0
                                             L6      1/0   12.05 MB   0.0      0.6     0.1      0.5       0.5      0.0       0.0   5.3     96.5     82.7      6.55              1.74        53    0.124    407K    28K       0.0       0.0
                                            Sum      1/0   12.05 MB   0.0      0.6     0.1      0.5       0.6      0.1       0.0   6.3     79.8     81.3      7.92              2.09       107    0.074    407K    28K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   7.9     56.5     56.1      1.20              0.24        10    0.120     53K   2443       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.6     0.1      0.5       0.5      0.0       0.0   0.0     96.5     82.7      6.55              1.74        53    0.124    407K    28K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     74.7      1.37              0.35        53    0.026       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.7      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.100, interval 0.008
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.63 GB write, 0.11 MB/s write, 0.62 GB read, 0.11 MB/s read, 7.9 seconds
                                           Interval compaction: 0.07 GB write, 0.11 MB/s write, 0.07 GB read, 0.11 MB/s read, 1.2 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564d515a71f0#2 capacity: 304.00 MB usage: 68.19 MB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 0 last_secs: 0.000434 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3896,65.29 MB,21.4756%) FilterBlock(107,1.09 MB,0.358717%) IndexBlock(107,1.81 MB,0.596282%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Jan 20 15:37:55 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:37:55 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:37:55 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:37:55.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:37:55 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:37:55 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:37:55 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:37:55.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:37:56 compute-1 ceph-mon[81775]: pgmap v3472: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 20 15:37:57 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:37:57 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:37:57 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:37:57.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:37:57 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:37:57 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:37:57 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:37:57.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:37:58 compute-1 ceph-mon[81775]: pgmap v3473: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 5.5 KiB/s rd, 852 B/s wr, 8 op/s
Jan 20 15:37:58 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:37:59 compute-1 nova_compute[225855]: 2026-01-20 15:37:59.006 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:37:59 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:37:59 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:37:59 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:37:59.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:37:59 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:37:59 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:37:59 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:37:59.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:37:59 compute-1 nova_compute[225855]: 2026-01-20 15:37:59.292 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:38:00 compute-1 ceph-mon[81775]: pgmap v3474: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.7 KiB/s rd, 341 B/s wr, 5 op/s
Jan 20 15:38:01 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:38:01 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:38:01 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:38:01.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:38:01 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:38:01 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:38:01 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:38:01.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:38:02 compute-1 ceph-mon[81775]: pgmap v3475: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:38:03 compute-1 podman[325312]: 2026-01-20 15:38:03.006578301 +0000 UTC m=+0.049965214 container health_status 533cd8e0c3b8b7f910f15cef49e7f879ab46c12ce646a35628c15f13725eb500 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 20 15:38:03 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:38:03 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:38:03 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:38:03.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:38:03 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:38:03 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:38:03 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:38:03.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:38:03 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:38:04 compute-1 nova_compute[225855]: 2026-01-20 15:38:04.011 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:38:04 compute-1 nova_compute[225855]: 2026-01-20 15:38:04.293 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:38:04 compute-1 ceph-mon[81775]: pgmap v3476: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:38:04 compute-1 ceph-mon[81775]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #172. Immutable memtables: 0.
Jan 20 15:38:04 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:38:04.686599) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 20 15:38:04 compute-1 ceph-mon[81775]: rocksdb: [db/flush_job.cc:856] [default] [JOB 109] Flushing memtable with next log file: 172
Jan 20 15:38:04 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768923484686646, "job": 109, "event": "flush_started", "num_memtables": 1, "num_entries": 396, "num_deletes": 251, "total_data_size": 409157, "memory_usage": 416728, "flush_reason": "Manual Compaction"}
Jan 20 15:38:04 compute-1 ceph-mon[81775]: rocksdb: [db/flush_job.cc:885] [default] [JOB 109] Level-0 flush table #173: started
Jan 20 15:38:04 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768923484736998, "cf_name": "default", "job": 109, "event": "table_file_creation", "file_number": 173, "file_size": 269559, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 84145, "largest_seqno": 84535, "table_properties": {"data_size": 267271, "index_size": 451, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 773, "raw_key_size": 5634, "raw_average_key_size": 18, "raw_value_size": 262744, "raw_average_value_size": 867, "num_data_blocks": 20, "num_entries": 303, "num_filter_entries": 303, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768923468, "oldest_key_time": 1768923468, "file_creation_time": 1768923484, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1539d774-8a6f-4e48-b253-137c44586344", "db_session_id": "LFF7G2OZDOU7TKQ8MKAH", "orig_file_number": 173, "seqno_to_time_mapping": "N/A"}}
Jan 20 15:38:04 compute-1 ceph-mon[81775]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 109] Flush lasted 50442 microseconds, and 1914 cpu microseconds.
Jan 20 15:38:04 compute-1 ceph-mon[81775]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 15:38:04 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:38:04.737045) [db/flush_job.cc:967] [default] [JOB 109] Level-0 flush table #173: 269559 bytes OK
Jan 20 15:38:04 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:38:04.737064) [db/memtable_list.cc:519] [default] Level-0 commit table #173 started
Jan 20 15:38:04 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:38:04.810983) [db/memtable_list.cc:722] [default] Level-0 commit table #173: memtable #1 done
Jan 20 15:38:04 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:38:04.811054) EVENT_LOG_v1 {"time_micros": 1768923484811037, "job": 109, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 20 15:38:04 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:38:04.811093) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 20 15:38:04 compute-1 ceph-mon[81775]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 109] Try to delete WAL files size 406602, prev total WAL file size 406602, number of live WAL files 2.
Jan 20 15:38:04 compute-1 ceph-mon[81775]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000169.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 15:38:04 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:38:04.811927) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730037323739' seq:72057594037927935, type:22 .. '7061786F730037353331' seq:0, type:0; will stop at (end)
Jan 20 15:38:04 compute-1 ceph-mon[81775]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 110] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 20 15:38:04 compute-1 ceph-mon[81775]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 109 Base level 0, inputs: [173(263KB)], [171(12MB)]
Jan 20 15:38:04 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768923484812093, "job": 110, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [173], "files_L6": [171], "score": -1, "input_data_size": 12906472, "oldest_snapshot_seqno": -1}
Jan 20 15:38:05 compute-1 ceph-mon[81775]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 110] Generated table #174: 10408 keys, 10873904 bytes, temperature: kUnknown
Jan 20 15:38:05 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768923485110497, "cf_name": "default", "job": 110, "event": "table_file_creation", "file_number": 174, "file_size": 10873904, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10810369, "index_size": 36414, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 26053, "raw_key_size": 274903, "raw_average_key_size": 26, "raw_value_size": 10631734, "raw_average_value_size": 1021, "num_data_blocks": 1371, "num_entries": 10408, "num_filter_entries": 10408, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768917474, "oldest_key_time": 0, "file_creation_time": 1768923484, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1539d774-8a6f-4e48-b253-137c44586344", "db_session_id": "LFF7G2OZDOU7TKQ8MKAH", "orig_file_number": 174, "seqno_to_time_mapping": "N/A"}}
Jan 20 15:38:05 compute-1 ceph-mon[81775]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 15:38:05 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:38:05.110726) [db/compaction/compaction_job.cc:1663] [default] [JOB 110] Compacted 1@0 + 1@6 files to L6 => 10873904 bytes
Jan 20 15:38:05 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:38:05.112258) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 43.2 rd, 36.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.3, 12.1 +0.0 blob) out(10.4 +0.0 blob), read-write-amplify(88.2) write-amplify(40.3) OK, records in: 10918, records dropped: 510 output_compression: NoCompression
Jan 20 15:38:05 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:38:05.112306) EVENT_LOG_v1 {"time_micros": 1768923485112290, "job": 110, "event": "compaction_finished", "compaction_time_micros": 298432, "compaction_time_cpu_micros": 26590, "output_level": 6, "num_output_files": 1, "total_output_size": 10873904, "num_input_records": 10918, "num_output_records": 10408, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 20 15:38:05 compute-1 ceph-mon[81775]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000173.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 15:38:05 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768923485112590, "job": 110, "event": "table_file_deletion", "file_number": 173}
Jan 20 15:38:05 compute-1 ceph-mon[81775]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000171.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 15:38:05 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768923485115420, "job": 110, "event": "table_file_deletion", "file_number": 171}
Jan 20 15:38:05 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:38:04.811780) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:38:05 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:38:05.115602) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:38:05 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:38:05.115613) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:38:05 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:38:05.115616) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:38:05 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:38:05.115620) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:38:05 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:38:05.115624) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:38:05 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:38:05 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:38:05 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:38:05.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:38:05 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:38:05 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:38:05 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:38:05.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:38:05 compute-1 ceph-mon[81775]: pgmap v3477: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:38:07 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:38:07 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:38:07 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:38:07.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:38:07 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:38:07 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:38:07 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:38:07.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:38:08 compute-1 ceph-mon[81775]: pgmap v3478: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:38:08 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:38:09 compute-1 nova_compute[225855]: 2026-01-20 15:38:09.016 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:38:09 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:38:09 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:38:09 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:38:09.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:38:09 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:38:09 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:38:09 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:38:09.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:38:09 compute-1 nova_compute[225855]: 2026-01-20 15:38:09.296 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:38:10 compute-1 ceph-mon[81775]: pgmap v3479: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:38:11 compute-1 sudo[325336]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:38:11 compute-1 sudo[325336]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:38:11 compute-1 sudo[325336]: pam_unix(sudo:session): session closed for user root
Jan 20 15:38:11 compute-1 sudo[325361]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:38:11 compute-1 sudo[325361]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:38:11 compute-1 sudo[325361]: pam_unix(sudo:session): session closed for user root
Jan 20 15:38:11 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:38:11 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:38:11 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:38:11.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:38:11 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:38:11 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:38:11 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:38:11.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:38:12 compute-1 ceph-mon[81775]: pgmap v3480: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:38:13 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:38:13 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:38:13 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:38:13.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:38:13 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:38:13 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:38:13 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:38:13.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:38:13 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:38:14 compute-1 nova_compute[225855]: 2026-01-20 15:38:14.019 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:38:14 compute-1 nova_compute[225855]: 2026-01-20 15:38:14.298 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:38:14 compute-1 ceph-mon[81775]: pgmap v3481: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:38:14 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/650535647' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:38:14 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/650535647' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:38:15 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:38:15 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:38:15 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:38:15.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:38:15 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:38:15 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:38:15 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:38:15.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:38:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:38:16.459 140354 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:38:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:38:16.460 140354 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:38:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:38:16.461 140354 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:38:16 compute-1 ceph-mon[81775]: pgmap v3482: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:38:17 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:38:17 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:38:17 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:38:17.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:38:17 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:38:17 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:38:17 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:38:17.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:38:18 compute-1 nova_compute[225855]: 2026-01-20 15:38:18.340 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:38:18 compute-1 nova_compute[225855]: 2026-01-20 15:38:18.340 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 15:38:18 compute-1 nova_compute[225855]: 2026-01-20 15:38:18.341 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 20 15:38:18 compute-1 nova_compute[225855]: 2026-01-20 15:38:18.574 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 20 15:38:18 compute-1 ceph-mon[81775]: pgmap v3483: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:38:18 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:38:19 compute-1 nova_compute[225855]: 2026-01-20 15:38:19.022 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:38:19 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:38:19 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:38:19 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:38:19.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:38:19 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:38:19 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:38:19 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:38:19.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:38:19 compute-1 nova_compute[225855]: 2026-01-20 15:38:19.300 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:38:19 compute-1 nova_compute[225855]: 2026-01-20 15:38:19.340 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:38:19 compute-1 nova_compute[225855]: 2026-01-20 15:38:19.340 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 15:38:19 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/420021600' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:38:20 compute-1 ceph-mon[81775]: pgmap v3484: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:38:20 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/2588081493' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:38:21 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:38:21 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:38:21 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:38:21.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:38:21 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:38:21 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:38:21 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:38:21.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:38:22 compute-1 ceph-mon[81775]: pgmap v3485: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:38:22 compute-1 nova_compute[225855]: 2026-01-20 15:38:22.340 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:38:22 compute-1 nova_compute[225855]: 2026-01-20 15:38:22.399 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:38:22 compute-1 nova_compute[225855]: 2026-01-20 15:38:22.399 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:38:22 compute-1 nova_compute[225855]: 2026-01-20 15:38:22.400 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:38:22 compute-1 nova_compute[225855]: 2026-01-20 15:38:22.400 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 15:38:22 compute-1 nova_compute[225855]: 2026-01-20 15:38:22.400 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:38:22 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:38:22 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3824174375' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:38:22 compute-1 nova_compute[225855]: 2026-01-20 15:38:22.847 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:38:23 compute-1 nova_compute[225855]: 2026-01-20 15:38:23.017 225859 WARNING nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 15:38:23 compute-1 nova_compute[225855]: 2026-01-20 15:38:23.019 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4292MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 15:38:23 compute-1 nova_compute[225855]: 2026-01-20 15:38:23.019 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:38:23 compute-1 nova_compute[225855]: 2026-01-20 15:38:23.020 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:38:23 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:38:23 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:38:23 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:38:23.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:38:23 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/3824174375' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:38:23 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:38:23 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:38:23 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:38:23.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:38:23 compute-1 nova_compute[225855]: 2026-01-20 15:38:23.272 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 15:38:23 compute-1 nova_compute[225855]: 2026-01-20 15:38:23.273 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 15:38:23 compute-1 nova_compute[225855]: 2026-01-20 15:38:23.329 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:38:23 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:38:23 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:38:23 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/374964423' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:38:23 compute-1 nova_compute[225855]: 2026-01-20 15:38:23.862 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.533s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:38:23 compute-1 nova_compute[225855]: 2026-01-20 15:38:23.867 225859 DEBUG nova.compute.provider_tree [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:38:23 compute-1 nova_compute[225855]: 2026-01-20 15:38:23.958 225859 DEBUG nova.scheduler.client.report [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:38:23 compute-1 nova_compute[225855]: 2026-01-20 15:38:23.959 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 15:38:23 compute-1 nova_compute[225855]: 2026-01-20 15:38:23.959 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.940s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:38:24 compute-1 nova_compute[225855]: 2026-01-20 15:38:24.024 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:38:24 compute-1 ceph-mon[81775]: pgmap v3486: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:38:24 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/1130345173' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:38:24 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/374964423' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:38:24 compute-1 nova_compute[225855]: 2026-01-20 15:38:24.302 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:38:24 compute-1 nova_compute[225855]: 2026-01-20 15:38:24.959 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:38:25 compute-1 podman[325437]: 2026-01-20 15:38:25.081470104 +0000 UTC m=+0.118173652 container health_status 72be95c12e041eb2cd1a16ad70dfa1391d9f03a5f3180a614ead9459dcfcd46f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 20 15:38:25 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:38:25 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:38:25 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:38:25.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:38:25 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:38:25 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:38:25 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:38:25.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:38:25 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/1078152141' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:38:25 compute-1 sudo[325464]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:38:25 compute-1 sudo[325464]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:38:25 compute-1 sudo[325464]: pam_unix(sudo:session): session closed for user root
Jan 20 15:38:25 compute-1 sudo[325489]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:38:25 compute-1 sudo[325489]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:38:25 compute-1 sudo[325489]: pam_unix(sudo:session): session closed for user root
Jan 20 15:38:26 compute-1 sudo[325514]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:38:26 compute-1 sudo[325514]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:38:26 compute-1 sudo[325514]: pam_unix(sudo:session): session closed for user root
Jan 20 15:38:26 compute-1 sudo[325539]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 20 15:38:26 compute-1 sudo[325539]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:38:26 compute-1 ceph-mon[81775]: pgmap v3487: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:38:26 compute-1 nova_compute[225855]: 2026-01-20 15:38:26.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:38:26 compute-1 nova_compute[225855]: 2026-01-20 15:38:26.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:38:26 compute-1 nova_compute[225855]: 2026-01-20 15:38:26.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:38:26 compute-1 nova_compute[225855]: 2026-01-20 15:38:26.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:38:26 compute-1 sudo[325539]: pam_unix(sudo:session): session closed for user root
Jan 20 15:38:27 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:38:27 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:38:27 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:38:27.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:38:27 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:38:27 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:38:27 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:38:27.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:38:27 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:38:27 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 15:38:27 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:38:27 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 15:38:27 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 15:38:27 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:38:28 compute-1 ceph-mon[81775]: pgmap v3488: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:38:28 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:38:29 compute-1 nova_compute[225855]: 2026-01-20 15:38:29.027 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:38:29 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:38:29 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:38:29 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:38:29.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:38:29 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:38:29 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:38:29 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:38:29.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:38:29 compute-1 nova_compute[225855]: 2026-01-20 15:38:29.304 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:38:30 compute-1 ceph-mon[81775]: pgmap v3489: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:38:31 compute-1 sudo[325597]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:38:31 compute-1 sudo[325597]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:38:31 compute-1 sudo[325597]: pam_unix(sudo:session): session closed for user root
Jan 20 15:38:31 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:38:31 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:38:31 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:38:31.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:38:31 compute-1 sudo[325622]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:38:31 compute-1 sudo[325622]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:38:31 compute-1 sudo[325622]: pam_unix(sudo:session): session closed for user root
Jan 20 15:38:31 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:38:31 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:38:31 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:38:31.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:38:31 compute-1 nova_compute[225855]: 2026-01-20 15:38:31.335 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:38:32 compute-1 ceph-mon[81775]: pgmap v3490: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:38:32 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:38:32 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:38:32 compute-1 sudo[325648]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:38:32 compute-1 sudo[325648]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:38:32 compute-1 sudo[325648]: pam_unix(sudo:session): session closed for user root
Jan 20 15:38:32 compute-1 sudo[325673]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 15:38:32 compute-1 sudo[325673]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:38:32 compute-1 sudo[325673]: pam_unix(sudo:session): session closed for user root
Jan 20 15:38:33 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:38:33 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:38:33 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:38:33.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:38:33 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:38:33 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:38:33 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:38:33.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:38:33 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:38:34 compute-1 podman[325699]: 2026-01-20 15:38:34.030680716 +0000 UTC m=+0.069098565 container health_status 533cd8e0c3b8b7f910f15cef49e7f879ab46c12ce646a35628c15f13725eb500 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3)
Jan 20 15:38:34 compute-1 nova_compute[225855]: 2026-01-20 15:38:34.030 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:38:34 compute-1 nova_compute[225855]: 2026-01-20 15:38:34.334 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:38:34 compute-1 ceph-mon[81775]: pgmap v3491: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:38:35 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:38:35 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:38:35 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:38:35.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:38:35 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:38:35 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:38:35 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:38:35.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:38:35 compute-1 ceph-mon[81775]: pgmap v3492: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:38:37 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:38:37 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:38:37 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:38:37.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:38:37 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:38:37 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:38:37 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:38:37.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:38:37 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/1147255199' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:38:38 compute-1 ceph-mon[81775]: pgmap v3493: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:38:38 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:38:39 compute-1 nova_compute[225855]: 2026-01-20 15:38:39.036 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:38:39 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:38:39 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:38:39 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:38:39.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:38:39 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:38:39 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:38:39 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:38:39.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:38:39 compute-1 nova_compute[225855]: 2026-01-20 15:38:39.335 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:38:39 compute-1 nova_compute[225855]: 2026-01-20 15:38:39.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._run_image_cache_manager_pass run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:38:39 compute-1 nova_compute[225855]: 2026-01-20 15:38:39.339 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:38:39 compute-1 nova_compute[225855]: 2026-01-20 15:38:39.340 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:38:39 compute-1 nova_compute[225855]: 2026-01-20 15:38:39.341 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:38:39 compute-1 nova_compute[225855]: 2026-01-20 15:38:39.341 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:38:39 compute-1 nova_compute[225855]: 2026-01-20 15:38:39.342 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:38:39 compute-1 nova_compute[225855]: 2026-01-20 15:38:39.342 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:38:39 compute-1 nova_compute[225855]: 2026-01-20 15:38:39.382 225859 DEBUG nova.virt.libvirt.imagecache [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Verify base images _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:314
Jan 20 15:38:39 compute-1 nova_compute[225855]: 2026-01-20 15:38:39.382 225859 WARNING nova.virt.libvirt.imagecache [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Unknown base file: /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462
Jan 20 15:38:39 compute-1 nova_compute[225855]: 2026-01-20 15:38:39.383 225859 WARNING nova.virt.libvirt.imagecache [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Unknown base file: /var/lib/nova/instances/_base/a4ed0d2b98aa460c005e878d78a49ccb6f511f7c
Jan 20 15:38:39 compute-1 nova_compute[225855]: 2026-01-20 15:38:39.383 225859 WARNING nova.virt.libvirt.imagecache [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Unknown base file: /var/lib/nova/instances/_base/a522636f3423dd1eea3b834dfd08917146e09c47
Jan 20 15:38:39 compute-1 nova_compute[225855]: 2026-01-20 15:38:39.383 225859 INFO nova.virt.libvirt.imagecache [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Removable base files: /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 /var/lib/nova/instances/_base/a4ed0d2b98aa460c005e878d78a49ccb6f511f7c /var/lib/nova/instances/_base/a522636f3423dd1eea3b834dfd08917146e09c47
Jan 20 15:38:39 compute-1 nova_compute[225855]: 2026-01-20 15:38:39.383 225859 INFO nova.virt.libvirt.imagecache [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462
Jan 20 15:38:39 compute-1 nova_compute[225855]: 2026-01-20 15:38:39.383 225859 INFO nova.virt.libvirt.imagecache [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/a4ed0d2b98aa460c005e878d78a49ccb6f511f7c
Jan 20 15:38:39 compute-1 nova_compute[225855]: 2026-01-20 15:38:39.384 225859 INFO nova.virt.libvirt.imagecache [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/a522636f3423dd1eea3b834dfd08917146e09c47
Jan 20 15:38:39 compute-1 nova_compute[225855]: 2026-01-20 15:38:39.384 225859 DEBUG nova.virt.libvirt.imagecache [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Verification complete _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:350
Jan 20 15:38:39 compute-1 nova_compute[225855]: 2026-01-20 15:38:39.384 225859 DEBUG nova.virt.libvirt.imagecache [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Verify swap images _age_and_verify_swap_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:299
Jan 20 15:38:39 compute-1 nova_compute[225855]: 2026-01-20 15:38:39.384 225859 DEBUG nova.virt.libvirt.imagecache [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Verify ephemeral images _age_and_verify_ephemeral_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:284
Jan 20 15:38:40 compute-1 ceph-mon[81775]: pgmap v3494: 321 pgs: 321 active+clean; 130 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 9.6 KiB/s rd, 280 KiB/s wr, 11 op/s
Jan 20 15:38:41 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:38:41 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:38:41 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:38:41.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:38:41 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:38:41 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:38:41 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:38:41.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:38:42 compute-1 ceph-mon[81775]: pgmap v3495: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 20 15:38:43 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:38:43 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:38:43 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:38:43.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:38:43 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:38:43 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:38:43 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:38:43.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:38:43 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/2392674960' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:38:43 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/1024218012' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:38:43 compute-1 ceph-mon[81775]: pgmap v3496: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 20 15:38:43 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:38:44 compute-1 nova_compute[225855]: 2026-01-20 15:38:44.039 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:38:44 compute-1 nova_compute[225855]: 2026-01-20 15:38:44.336 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:38:45 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:38:45.041 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=83, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '12:bb:42', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '06:92:24:f7:15:56'}, ipsec=False) old=SB_Global(nb_cfg=82) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:38:45 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:38:45.042 140354 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 20 15:38:45 compute-1 nova_compute[225855]: 2026-01-20 15:38:45.042 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:38:45 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:38:45 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:38:45 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:38:45.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:38:45 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:38:45 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:38:45 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:38:45.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:38:46 compute-1 ceph-mon[81775]: pgmap v3497: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 31 op/s
Jan 20 15:38:47 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:38:47 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:38:47 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:38:47.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:38:47 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:38:47 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:38:47 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:38:47.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:38:48 compute-1 ceph-mon[81775]: pgmap v3498: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 275 KiB/s rd, 1.8 MiB/s wr, 40 op/s
Jan 20 15:38:48 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:38:49 compute-1 nova_compute[225855]: 2026-01-20 15:38:49.042 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:38:49 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:38:49 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:38:49 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:38:49.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:38:49 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:38:49 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:38:49 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:38:49.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:38:49 compute-1 nova_compute[225855]: 2026-01-20 15:38:49.336 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:38:50 compute-1 radosgw[83787]: INFO: RGWReshardLock::lock found lock on reshard.0000000000 to be held by another RGW process; skipping for now
Jan 20 15:38:50 compute-1 radosgw[83787]: INFO: RGWReshardLock::lock found lock on reshard.0000000001 to be held by another RGW process; skipping for now
Jan 20 15:38:50 compute-1 radosgw[83787]: INFO: RGWReshardLock::lock found lock on reshard.0000000003 to be held by another RGW process; skipping for now
Jan 20 15:38:50 compute-1 radosgw[83787]: INFO: RGWReshardLock::lock found lock on reshard.0000000004 to be held by another RGW process; skipping for now
Jan 20 15:38:50 compute-1 radosgw[83787]: INFO: RGWReshardLock::lock found lock on reshard.0000000005 to be held by another RGW process; skipping for now
Jan 20 15:38:50 compute-1 radosgw[83787]: INFO: RGWReshardLock::lock found lock on reshard.0000000007 to be held by another RGW process; skipping for now
Jan 20 15:38:50 compute-1 radosgw[83787]: INFO: RGWReshardLock::lock found lock on reshard.0000000008 to be held by another RGW process; skipping for now
Jan 20 15:38:50 compute-1 ceph-mon[81775]: pgmap v3499: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 582 KiB/s rd, 1.8 MiB/s wr, 55 op/s
Jan 20 15:38:51 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:38:51 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:38:51 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:38:51.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:38:51 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:38:51 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:38:51 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:38:51.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:38:51 compute-1 sudo[325727]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:38:51 compute-1 sudo[325727]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:38:51 compute-1 sudo[325727]: pam_unix(sudo:session): session closed for user root
Jan 20 15:38:51 compute-1 sudo[325752]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:38:51 compute-1 sudo[325752]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:38:51 compute-1 sudo[325752]: pam_unix(sudo:session): session closed for user root
Jan 20 15:38:52 compute-1 ceph-mon[81775]: pgmap v3500: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.5 MiB/s wr, 90 op/s
Jan 20 15:38:53 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:38:53.044 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5ffd4ac3-9266-4927-98ad-20a17782c725, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '83'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:38:53 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:38:53 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:38:53 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:38:53.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:38:53 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:38:53 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:38:53 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:38:53.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:38:53 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:38:54 compute-1 nova_compute[225855]: 2026-01-20 15:38:54.046 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:38:54 compute-1 nova_compute[225855]: 2026-01-20 15:38:54.338 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:38:54 compute-1 ceph-mon[81775]: pgmap v3501: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 75 op/s
Jan 20 15:38:55 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:38:55 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:38:55 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:38:55.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:38:55 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:38:55 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:38:55 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:38:55.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:38:55 compute-1 ceph-mon[81775]: pgmap v3502: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 12 KiB/s wr, 167 op/s
Jan 20 15:38:56 compute-1 podman[325780]: 2026-01-20 15:38:56.031626296 +0000 UTC m=+0.082442861 container health_status 72be95c12e041eb2cd1a16ad70dfa1391d9f03a5f3180a614ead9459dcfcd46f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Jan 20 15:38:57 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:38:57 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:38:57 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:38:57.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:38:57 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:38:57 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:38:57 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:38:57.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:38:58 compute-1 ceph-mon[81775]: pgmap v3503: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 12 KiB/s wr, 202 op/s
Jan 20 15:38:58 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:38:59 compute-1 nova_compute[225855]: 2026-01-20 15:38:59.050 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:38:59 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:38:59 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:38:59 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:38:59.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:38:59 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:38:59 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:38:59 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:38:59.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:38:59 compute-1 nova_compute[225855]: 2026-01-20 15:38:59.382 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:39:00 compute-1 ceph-mon[81775]: pgmap v3504: 321 pgs: 321 active+clean; 177 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.8 MiB/s rd, 545 KiB/s wr, 213 op/s
Jan 20 15:39:01 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:39:01 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:39:01 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:39:01.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:39:01 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:39:01 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:39:01 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:39:01.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:39:02 compute-1 ceph-mon[81775]: pgmap v3505: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.8 MiB/s rd, 2.1 MiB/s wr, 240 op/s
Jan 20 15:39:03 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:39:03 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:39:03 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:39:03.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:39:03 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:39:03 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:39:03 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:39:03.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:39:03 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:39:03 compute-1 ceph-mon[81775]: pgmap v3506: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 402 KiB/s rd, 2.1 MiB/s wr, 193 op/s
Jan 20 15:39:04 compute-1 nova_compute[225855]: 2026-01-20 15:39:04.054 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:39:04 compute-1 nova_compute[225855]: 2026-01-20 15:39:04.384 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:39:05 compute-1 podman[325810]: 2026-01-20 15:39:05.010379643 +0000 UTC m=+0.058121924 container health_status 533cd8e0c3b8b7f910f15cef49e7f879ab46c12ce646a35628c15f13725eb500 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202)
Jan 20 15:39:05 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:39:05 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:39:05 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:39:05.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:39:05 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:39:05 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:39:05 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:39:05.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:39:06 compute-1 ceph-mon[81775]: pgmap v3507: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 402 KiB/s rd, 2.1 MiB/s wr, 194 op/s
Jan 20 15:39:07 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:39:07 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:39:07 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:39:07.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:39:07 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:39:07 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:39:07 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:39:07.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:39:08 compute-1 ceph-mon[81775]: pgmap v3508: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 346 KiB/s rd, 2.1 MiB/s wr, 101 op/s
Jan 20 15:39:08 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:39:09 compute-1 nova_compute[225855]: 2026-01-20 15:39:09.058 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:39:09 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:39:09 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:39:09 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:39:09.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:39:09 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:39:09 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:39:09 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:39:09.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:39:09 compute-1 nova_compute[225855]: 2026-01-20 15:39:09.385 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:39:10 compute-1 ceph-mon[81775]: pgmap v3509: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 322 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Jan 20 15:39:11 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:39:11 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:39:11 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:39:11.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:39:11 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:39:11 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:39:11 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:39:11.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:39:11 compute-1 sudo[325832]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:39:11 compute-1 sudo[325832]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:39:11 compute-1 sudo[325832]: pam_unix(sudo:session): session closed for user root
Jan 20 15:39:11 compute-1 sudo[325857]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:39:11 compute-1 sudo[325857]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:39:11 compute-1 sudo[325857]: pam_unix(sudo:session): session closed for user root
Jan 20 15:39:11 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/39756524' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:39:12 compute-1 ceph-mon[81775]: pgmap v3510: 321 pgs: 321 active+clean; 140 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 235 KiB/s rd, 1.6 MiB/s wr, 58 op/s
Jan 20 15:39:13 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:39:13 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:39:13 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:39:13.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:39:13 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:39:13 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:39:13 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:39:13.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:39:13 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:39:14 compute-1 nova_compute[225855]: 2026-01-20 15:39:14.061 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:39:14 compute-1 nova_compute[225855]: 2026-01-20 15:39:14.388 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:39:14 compute-1 ceph-mon[81775]: pgmap v3511: 321 pgs: 321 active+clean; 140 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 11 KiB/s rd, 13 KiB/s wr, 16 op/s
Jan 20 15:39:14 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/474579767' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:39:14 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/474579767' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:39:15 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:39:15 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:39:15 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:39:15.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:39:15 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:39:15 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:39:15 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:39:15.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:39:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:39:16.460 140354 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:39:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:39:16.461 140354 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:39:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:39:16.461 140354 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:39:16 compute-1 ceph-mon[81775]: pgmap v3512: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 13 KiB/s wr, 28 op/s
Jan 20 15:39:17 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:39:17 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:39:17 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:39:17.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:39:17 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:39:17 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:39:17 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:39:17.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:39:18 compute-1 ceph-mon[81775]: pgmap v3513: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 1.5 KiB/s wr, 27 op/s
Jan 20 15:39:18 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:39:19 compute-1 nova_compute[225855]: 2026-01-20 15:39:19.064 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:39:19 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:39:19 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:39:19 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:39:19.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:39:19 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:39:19 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:39:19 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:39:19.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:39:19 compute-1 nova_compute[225855]: 2026-01-20 15:39:19.385 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:39:19 compute-1 nova_compute[225855]: 2026-01-20 15:39:19.386 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 15:39:19 compute-1 nova_compute[225855]: 2026-01-20 15:39:19.386 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 20 15:39:19 compute-1 nova_compute[225855]: 2026-01-20 15:39:19.390 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:39:19 compute-1 nova_compute[225855]: 2026-01-20 15:39:19.411 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 20 15:39:20 compute-1 ceph-mon[81775]: pgmap v3514: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 1.5 KiB/s wr, 27 op/s
Jan 20 15:39:20 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/366126376' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:39:21 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:39:21 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:39:21 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:39:21.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:39:21 compute-1 nova_compute[225855]: 2026-01-20 15:39:21.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:39:21 compute-1 nova_compute[225855]: 2026-01-20 15:39:21.339 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 15:39:21 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:39:21 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:39:21 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:39:21.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:39:21 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/3640117576' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:39:22 compute-1 ceph-mon[81775]: pgmap v3515: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 20 15:39:23 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:39:23 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:39:23 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:39:23.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:39:23 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:39:23 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:39:23 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:39:23.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:39:23 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:39:24 compute-1 nova_compute[225855]: 2026-01-20 15:39:24.068 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:39:24 compute-1 nova_compute[225855]: 2026-01-20 15:39:24.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:39:24 compute-1 nova_compute[225855]: 2026-01-20 15:39:24.360 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:39:24 compute-1 nova_compute[225855]: 2026-01-20 15:39:24.360 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:39:24 compute-1 nova_compute[225855]: 2026-01-20 15:39:24.360 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:39:24 compute-1 nova_compute[225855]: 2026-01-20 15:39:24.361 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 15:39:24 compute-1 nova_compute[225855]: 2026-01-20 15:39:24.361 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:39:24 compute-1 nova_compute[225855]: 2026-01-20 15:39:24.392 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:39:24 compute-1 ceph-mon[81775]: pgmap v3516: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 8.0 KiB/s rd, 341 B/s wr, 11 op/s
Jan 20 15:39:24 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/2197890005' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:39:24 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:39:24 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1540194721' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:39:24 compute-1 nova_compute[225855]: 2026-01-20 15:39:24.791 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:39:24 compute-1 nova_compute[225855]: 2026-01-20 15:39:24.931 225859 WARNING nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 15:39:24 compute-1 nova_compute[225855]: 2026-01-20 15:39:24.932 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4289MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 15:39:24 compute-1 nova_compute[225855]: 2026-01-20 15:39:24.932 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:39:24 compute-1 nova_compute[225855]: 2026-01-20 15:39:24.933 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:39:24 compute-1 nova_compute[225855]: 2026-01-20 15:39:24.991 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 15:39:24 compute-1 nova_compute[225855]: 2026-01-20 15:39:24.991 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 15:39:25 compute-1 nova_compute[225855]: 2026-01-20 15:39:25.011 225859 DEBUG nova.scheduler.client.report [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Refreshing inventories for resource provider bbb02880-a710-4ac1-8b2c-5c09765848d1 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 20 15:39:25 compute-1 nova_compute[225855]: 2026-01-20 15:39:25.042 225859 DEBUG nova.scheduler.client.report [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Updating ProviderTree inventory for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 20 15:39:25 compute-1 nova_compute[225855]: 2026-01-20 15:39:25.042 225859 DEBUG nova.compute.provider_tree [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Updating inventory in ProviderTree for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 20 15:39:25 compute-1 nova_compute[225855]: 2026-01-20 15:39:25.061 225859 DEBUG nova.scheduler.client.report [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Refreshing aggregate associations for resource provider bbb02880-a710-4ac1-8b2c-5c09765848d1, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 20 15:39:25 compute-1 nova_compute[225855]: 2026-01-20 15:39:25.105 225859 DEBUG nova.scheduler.client.report [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Refreshing trait associations for resource provider bbb02880-a710-4ac1-8b2c-5c09765848d1, traits: COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_STORAGE_BUS_SATA,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_TRUSTED_CERTS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSSE3,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_VOLUME_EXTEND,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_RESCUE_BFV,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_USB,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE42,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SOCKET_PCI_NUMA_AFFINITY _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 20 15:39:25 compute-1 nova_compute[225855]: 2026-01-20 15:39:25.135 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:39:25 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:39:25 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:39:25 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:39:25.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:39:25 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:39:25 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:39:25 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:39:25.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:39:25 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:39:25 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/312578530' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:39:25 compute-1 nova_compute[225855]: 2026-01-20 15:39:25.571 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:39:25 compute-1 nova_compute[225855]: 2026-01-20 15:39:25.577 225859 DEBUG nova.compute.provider_tree [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:39:25 compute-1 nova_compute[225855]: 2026-01-20 15:39:25.594 225859 DEBUG nova.scheduler.client.report [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:39:25 compute-1 nova_compute[225855]: 2026-01-20 15:39:25.596 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 15:39:25 compute-1 nova_compute[225855]: 2026-01-20 15:39:25.596 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.663s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:39:25 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/1540194721' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:39:25 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/3411632472' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:39:25 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/312578530' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:39:26 compute-1 nova_compute[225855]: 2026-01-20 15:39:26.596 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:39:26 compute-1 nova_compute[225855]: 2026-01-20 15:39:26.597 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:39:26 compute-1 nova_compute[225855]: 2026-01-20 15:39:26.597 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:39:26 compute-1 ceph-mon[81775]: pgmap v3517: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 8.0 KiB/s rd, 341 B/s wr, 11 op/s
Jan 20 15:39:27 compute-1 podman[325934]: 2026-01-20 15:39:27.015636116 +0000 UTC m=+0.068701313 container health_status 72be95c12e041eb2cd1a16ad70dfa1391d9f03a5f3180a614ead9459dcfcd46f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true)
Jan 20 15:39:27 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:39:27 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:39:27 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:39:27.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:39:27 compute-1 nova_compute[225855]: 2026-01-20 15:39:27.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:39:27 compute-1 nova_compute[225855]: 2026-01-20 15:39:27.340 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:39:27 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:39:27 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:39:27 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:39:27.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:39:27 compute-1 ceph-mon[81775]: pgmap v3518: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:39:28 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:39:29 compute-1 nova_compute[225855]: 2026-01-20 15:39:29.071 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:39:29 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:39:29 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:39:29 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:39:29.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:39:29 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:39:29 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:39:29 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:39:29.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:39:29 compute-1 nova_compute[225855]: 2026-01-20 15:39:29.394 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:39:30 compute-1 ceph-mon[81775]: pgmap v3519: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:39:31 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:39:31 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:39:31 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:39:31.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:39:31 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:39:31 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:39:31 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:39:31.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:39:31 compute-1 sudo[325963]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:39:31 compute-1 sudo[325963]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:39:31 compute-1 sudo[325963]: pam_unix(sudo:session): session closed for user root
Jan 20 15:39:31 compute-1 sudo[325988]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:39:31 compute-1 sudo[325988]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:39:31 compute-1 sudo[325988]: pam_unix(sudo:session): session closed for user root
Jan 20 15:39:32 compute-1 ceph-mon[81775]: pgmap v3520: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:39:32 compute-1 sudo[326014]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:39:32 compute-1 sudo[326014]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:39:32 compute-1 sudo[326014]: pam_unix(sudo:session): session closed for user root
Jan 20 15:39:33 compute-1 sudo[326039]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:39:33 compute-1 sudo[326039]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:39:33 compute-1 sudo[326039]: pam_unix(sudo:session): session closed for user root
Jan 20 15:39:33 compute-1 sudo[326064]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:39:33 compute-1 sudo[326064]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:39:33 compute-1 sudo[326064]: pam_unix(sudo:session): session closed for user root
Jan 20 15:39:33 compute-1 sudo[326089]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 20 15:39:33 compute-1 sudo[326089]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:39:33 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:39:33 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:39:33 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:39:33.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:39:33 compute-1 nova_compute[225855]: 2026-01-20 15:39:33.335 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:39:33 compute-1 nova_compute[225855]: 2026-01-20 15:39:33.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:39:33 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:39:33 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:39:33 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:39:33.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:39:33 compute-1 sudo[326089]: pam_unix(sudo:session): session closed for user root
Jan 20 15:39:33 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:39:34 compute-1 nova_compute[225855]: 2026-01-20 15:39:34.074 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:39:34 compute-1 nova_compute[225855]: 2026-01-20 15:39:34.409 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:39:34 compute-1 ceph-mon[81775]: pgmap v3521: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:39:34 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:39:34 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:39:34 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:39:34 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 15:39:34 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:39:34 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 15:39:34 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 15:39:34 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:39:35 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:39:35 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:39:35 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:39:35.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:39:35 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:39:35 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:39:35 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:39:35.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:39:36 compute-1 podman[326148]: 2026-01-20 15:39:36.001634977 +0000 UTC m=+0.048736538 container health_status 533cd8e0c3b8b7f910f15cef49e7f879ab46c12ce646a35628c15f13725eb500 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Jan 20 15:39:36 compute-1 ceph-mon[81775]: pgmap v3522: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:39:37 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:39:37 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:39:37 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:39:37.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:39:37 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:39:37 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:39:37 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:39:37.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:39:38 compute-1 ceph-mon[81775]: pgmap v3523: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:39:38 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:39:39 compute-1 nova_compute[225855]: 2026-01-20 15:39:39.102 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:39:39 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:39:39 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:39:39 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:39:39.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:39:39 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:39:39 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:39:39 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:39:39.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:39:39 compute-1 nova_compute[225855]: 2026-01-20 15:39:39.410 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:39:40 compute-1 sudo[326169]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:39:40 compute-1 sudo[326169]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:39:40 compute-1 sudo[326169]: pam_unix(sudo:session): session closed for user root
Jan 20 15:39:40 compute-1 sudo[326194]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 15:39:40 compute-1 sudo[326194]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:39:40 compute-1 sudo[326194]: pam_unix(sudo:session): session closed for user root
Jan 20 15:39:40 compute-1 ceph-mon[81775]: pgmap v3524: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:39:40 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:39:40 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:39:41 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:39:41 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:39:41 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:39:41.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:39:41 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:39:41 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:39:41 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:39:41.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:39:42 compute-1 ceph-mon[81775]: pgmap v3525: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:39:43 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:39:43 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:39:43 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:39:43.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:39:43 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:39:43 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:39:43 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:39:43.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:39:43 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:39:44 compute-1 nova_compute[225855]: 2026-01-20 15:39:44.144 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:39:44 compute-1 nova_compute[225855]: 2026-01-20 15:39:44.413 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:39:44 compute-1 ceph-mon[81775]: pgmap v3526: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:39:44 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/3766277954' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:39:45 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:39:45 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:39:45 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:39:45.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:39:45 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:39:45 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:39:45 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:39:45.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:39:46 compute-1 ceph-mon[81775]: pgmap v3527: 321 pgs: 321 active+clean; 138 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 597 B/s rd, 242 KiB/s wr, 3 op/s
Jan 20 15:39:46 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:39:46.957 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=84, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '12:bb:42', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '06:92:24:f7:15:56'}, ipsec=False) old=SB_Global(nb_cfg=83) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:39:46 compute-1 nova_compute[225855]: 2026-01-20 15:39:46.959 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:39:46 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:39:46.959 140354 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 20 15:39:47 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:39:47 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:39:47 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:39:47.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:39:47 compute-1 nova_compute[225855]: 2026-01-20 15:39:47.389 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:39:47 compute-1 nova_compute[225855]: 2026-01-20 15:39:47.389 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 20 15:39:47 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:39:47 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:39:47 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:39:47.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:39:48 compute-1 nova_compute[225855]: 2026-01-20 15:39:48.350 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:39:48 compute-1 ceph-mon[81775]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #175. Immutable memtables: 0.
Jan 20 15:39:48 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:39:48.475593) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 20 15:39:48 compute-1 ceph-mon[81775]: rocksdb: [db/flush_job.cc:856] [default] [JOB 111] Flushing memtable with next log file: 175
Jan 20 15:39:48 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768923588475631, "job": 111, "event": "flush_started", "num_memtables": 1, "num_entries": 1261, "num_deletes": 255, "total_data_size": 2798984, "memory_usage": 2841408, "flush_reason": "Manual Compaction"}
Jan 20 15:39:48 compute-1 ceph-mon[81775]: rocksdb: [db/flush_job.cc:885] [default] [JOB 111] Level-0 flush table #176: started
Jan 20 15:39:48 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768923588513978, "cf_name": "default", "job": 111, "event": "table_file_creation", "file_number": 176, "file_size": 1825659, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 84540, "largest_seqno": 85796, "table_properties": {"data_size": 1820203, "index_size": 2851, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1541, "raw_key_size": 11654, "raw_average_key_size": 19, "raw_value_size": 1809208, "raw_average_value_size": 3025, "num_data_blocks": 127, "num_entries": 598, "num_filter_entries": 598, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768923485, "oldest_key_time": 1768923485, "file_creation_time": 1768923588, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1539d774-8a6f-4e48-b253-137c44586344", "db_session_id": "LFF7G2OZDOU7TKQ8MKAH", "orig_file_number": 176, "seqno_to_time_mapping": "N/A"}}
Jan 20 15:39:48 compute-1 ceph-mon[81775]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 111] Flush lasted 38428 microseconds, and 4449 cpu microseconds.
Jan 20 15:39:48 compute-1 ceph-mon[81775]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 15:39:48 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:39:48.514018) [db/flush_job.cc:967] [default] [JOB 111] Level-0 flush table #176: 1825659 bytes OK
Jan 20 15:39:48 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:39:48.514038) [db/memtable_list.cc:519] [default] Level-0 commit table #176 started
Jan 20 15:39:48 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:39:48.516518) [db/memtable_list.cc:722] [default] Level-0 commit table #176: memtable #1 done
Jan 20 15:39:48 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:39:48.516533) EVENT_LOG_v1 {"time_micros": 1768923588516528, "job": 111, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 20 15:39:48 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:39:48.516551) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 20 15:39:48 compute-1 ceph-mon[81775]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 111] Try to delete WAL files size 2792962, prev total WAL file size 2792962, number of live WAL files 2.
Jan 20 15:39:48 compute-1 ceph-mon[81775]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000172.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 15:39:48 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:39:48.517282) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0033323733' seq:72057594037927935, type:22 .. '6C6F676D0033353234' seq:0, type:0; will stop at (end)
Jan 20 15:39:48 compute-1 ceph-mon[81775]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 112] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 20 15:39:48 compute-1 ceph-mon[81775]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 111 Base level 0, inputs: [176(1782KB)], [174(10MB)]
Jan 20 15:39:48 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768923588517309, "job": 112, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [176], "files_L6": [174], "score": -1, "input_data_size": 12699563, "oldest_snapshot_seqno": -1}
Jan 20 15:39:48 compute-1 ceph-mon[81775]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 112] Generated table #177: 10483 keys, 12578541 bytes, temperature: kUnknown
Jan 20 15:39:48 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768923588640020, "cf_name": "default", "job": 112, "event": "table_file_creation", "file_number": 177, "file_size": 12578541, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12512471, "index_size": 38757, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 26245, "raw_key_size": 277412, "raw_average_key_size": 26, "raw_value_size": 12330508, "raw_average_value_size": 1176, "num_data_blocks": 1471, "num_entries": 10483, "num_filter_entries": 10483, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768917474, "oldest_key_time": 0, "file_creation_time": 1768923588, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1539d774-8a6f-4e48-b253-137c44586344", "db_session_id": "LFF7G2OZDOU7TKQ8MKAH", "orig_file_number": 177, "seqno_to_time_mapping": "N/A"}}
Jan 20 15:39:48 compute-1 ceph-mon[81775]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 15:39:48 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:39:48.640385) [db/compaction/compaction_job.cc:1663] [default] [JOB 112] Compacted 1@0 + 1@6 files to L6 => 12578541 bytes
Jan 20 15:39:48 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:39:48.641842) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 103.4 rd, 102.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.7, 10.4 +0.0 blob) out(12.0 +0.0 blob), read-write-amplify(13.8) write-amplify(6.9) OK, records in: 11006, records dropped: 523 output_compression: NoCompression
Jan 20 15:39:48 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:39:48.641897) EVENT_LOG_v1 {"time_micros": 1768923588641856, "job": 112, "event": "compaction_finished", "compaction_time_micros": 122811, "compaction_time_cpu_micros": 29671, "output_level": 6, "num_output_files": 1, "total_output_size": 12578541, "num_input_records": 11006, "num_output_records": 10483, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 20 15:39:48 compute-1 ceph-mon[81775]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000176.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 15:39:48 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768923588642518, "job": 112, "event": "table_file_deletion", "file_number": 176}
Jan 20 15:39:48 compute-1 ceph-mon[81775]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000174.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 15:39:48 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768923588645201, "job": 112, "event": "table_file_deletion", "file_number": 174}
Jan 20 15:39:48 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:39:48.517237) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:39:48 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:39:48.645284) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:39:48 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:39:48.645290) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:39:48 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:39:48.645292) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:39:48 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:39:48.645294) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:39:48 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:39:48.645295) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:39:48 compute-1 ceph-mon[81775]: pgmap v3528: 321 pgs: 321 active+clean; 159 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.5 MiB/s wr, 26 op/s
Jan 20 15:39:48 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:39:49 compute-1 nova_compute[225855]: 2026-01-20 15:39:49.147 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:39:49 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:39:49 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:39:49 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:39:49.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:39:49 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:39:49 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:39:49 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:39:49.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:39:49 compute-1 nova_compute[225855]: 2026-01-20 15:39:49.415 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:39:49 compute-1 ceph-mon[81775]: pgmap v3529: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 20 15:39:51 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:39:51 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:39:51 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:39:51.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:39:51 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:39:51 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:39:51 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:39:51.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:39:51 compute-1 sudo[326224]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:39:51 compute-1 sudo[326224]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:39:51 compute-1 sudo[326224]: pam_unix(sudo:session): session closed for user root
Jan 20 15:39:51 compute-1 ceph-mon[81775]: pgmap v3530: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 20 15:39:51 compute-1 sudo[326249]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:39:51 compute-1 sudo[326249]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:39:51 compute-1 sudo[326249]: pam_unix(sudo:session): session closed for user root
Jan 20 15:39:52 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/2582616422' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:39:53 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:39:53 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:39:53 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:39:53.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:39:53 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:39:53 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:39:53 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:39:53.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:39:53 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:39:54 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/2245512469' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:39:54 compute-1 ceph-mon[81775]: pgmap v3531: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 20 15:39:54 compute-1 nova_compute[225855]: 2026-01-20 15:39:54.150 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:39:54 compute-1 nova_compute[225855]: 2026-01-20 15:39:54.417 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:39:55 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:39:55 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:39:55 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:39:55.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:39:55 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:39:55 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:39:55 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:39:55.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:39:55 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:39:55.961 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5ffd4ac3-9266-4927-98ad-20a17782c725, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '84'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:39:56 compute-1 ceph-mon[81775]: pgmap v3532: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 33 op/s
Jan 20 15:39:57 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:39:57 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:39:57 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:39:57.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:39:57 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:39:57 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:39:57 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:39:57.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:39:58 compute-1 podman[326278]: 2026-01-20 15:39:58.111817522 +0000 UTC m=+0.150243426 container health_status 72be95c12e041eb2cd1a16ad70dfa1391d9f03a5f3180a614ead9459dcfcd46f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.build-date=20251202, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Jan 20 15:39:58 compute-1 ceph-mon[81775]: pgmap v3533: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 24 KiB/s rd, 1.5 MiB/s wr, 33 op/s
Jan 20 15:39:58 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:39:59 compute-1 nova_compute[225855]: 2026-01-20 15:39:59.192 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:39:59 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:39:59 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:39:59 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:39:59.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:39:59 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:39:59 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:39:59 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:39:59.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:39:59 compute-1 nova_compute[225855]: 2026-01-20 15:39:59.419 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:39:59 compute-1 ceph-mon[81775]: pgmap v3534: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 317 KiB/s rd, 312 KiB/s wr, 20 op/s
Jan 20 15:40:00 compute-1 ceph-mon[81775]: overall HEALTH_OK
Jan 20 15:40:01 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:40:01 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:40:01 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:40:01.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:40:01 compute-1 nova_compute[225855]: 2026-01-20 15:40:01.340 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:40:01 compute-1 nova_compute[225855]: 2026-01-20 15:40:01.341 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 20 15:40:01 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:40:01 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:40:01 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:40:01.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:40:01 compute-1 nova_compute[225855]: 2026-01-20 15:40:01.466 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 20 15:40:01 compute-1 ceph-mon[81775]: pgmap v3535: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 20 15:40:03 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:40:03 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:40:03 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:40:03.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:40:03 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:40:03 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:40:03 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:40:03.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:40:03 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:40:04 compute-1 nova_compute[225855]: 2026-01-20 15:40:04.241 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:40:04 compute-1 nova_compute[225855]: 2026-01-20 15:40:04.422 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:40:04 compute-1 ceph-mon[81775]: pgmap v3536: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 20 15:40:05 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:40:05 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:40:05 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:40:05.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:40:05 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:40:05 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:40:05 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:40:05.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:40:06 compute-1 ceph-mon[81775]: pgmap v3537: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 20 15:40:06 compute-1 podman[326308]: 2026-01-20 15:40:06.601792832 +0000 UTC m=+0.070345329 container health_status 533cd8e0c3b8b7f910f15cef49e7f879ab46c12ce646a35628c15f13725eb500 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Jan 20 15:40:07 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:40:07 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:40:07 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:40:07.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:40:07 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:40:07 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:40:07 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:40:07.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:40:08 compute-1 ceph-mon[81775]: pgmap v3538: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 67 op/s
Jan 20 15:40:08 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:40:09 compute-1 nova_compute[225855]: 2026-01-20 15:40:09.243 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:40:09 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:40:09 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:40:09 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:40:09.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:40:09 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:40:09 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:40:09 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:40:09.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:40:09 compute-1 nova_compute[225855]: 2026-01-20 15:40:09.423 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:40:10 compute-1 ceph-mon[81775]: pgmap v3539: 321 pgs: 321 active+clean; 174 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 444 KiB/s wr, 67 op/s
Jan 20 15:40:11 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:40:11 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:40:11 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:40:11.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:40:11 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:40:11 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:40:11 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:40:11.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:40:11 compute-1 sudo[326332]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:40:11 compute-1 sudo[326332]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:40:11 compute-1 sudo[326332]: pam_unix(sudo:session): session closed for user root
Jan 20 15:40:11 compute-1 sudo[326357]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:40:11 compute-1 sudo[326357]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:40:11 compute-1 sudo[326357]: pam_unix(sudo:session): session closed for user root
Jan 20 15:40:12 compute-1 ceph-mon[81775]: pgmap v3540: 321 pgs: 321 active+clean; 193 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 2.1 MiB/s wr, 115 op/s
Jan 20 15:40:13 compute-1 nova_compute[225855]: 2026-01-20 15:40:13.015 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:40:13 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:40:13 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:40:13 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:40:13.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:40:13 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:40:13 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:40:13 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:40:13.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:40:13 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:40:14 compute-1 nova_compute[225855]: 2026-01-20 15:40:14.293 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:40:14 compute-1 nova_compute[225855]: 2026-01-20 15:40:14.426 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:40:14 compute-1 ceph-mon[81775]: pgmap v3541: 321 pgs: 321 active+clean; 193 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 324 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Jan 20 15:40:14 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/4116331290' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:40:14 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/4116331290' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:40:15 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:40:15 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:40:15 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:40:15.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:40:15 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:40:15 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:40:15 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:40:15.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:40:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:40:16.461 140354 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:40:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:40:16.462 140354 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:40:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:40:16.462 140354 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:40:16 compute-1 ceph-mon[81775]: pgmap v3542: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Jan 20 15:40:17 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:40:17 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:40:17 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:40:17.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:40:17 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:40:17 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:40:17 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:40:17.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:40:17 compute-1 ceph-mon[81775]: pgmap v3543: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 328 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Jan 20 15:40:18 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:40:19 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:40:19 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:40:19 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:40:19.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:40:19 compute-1 nova_compute[225855]: 2026-01-20 15:40:19.296 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:40:19 compute-1 nova_compute[225855]: 2026-01-20 15:40:19.428 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:40:19 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:40:19 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:40:19 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:40:19.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:40:20 compute-1 ceph-mon[81775]: pgmap v3544: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 328 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Jan 20 15:40:21 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:40:21 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:40:21 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:40:21.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:40:21 compute-1 nova_compute[225855]: 2026-01-20 15:40:21.340 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:40:21 compute-1 nova_compute[225855]: 2026-01-20 15:40:21.340 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 15:40:21 compute-1 nova_compute[225855]: 2026-01-20 15:40:21.341 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 20 15:40:21 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:40:21 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:40:21 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:40:21.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:40:21 compute-1 nova_compute[225855]: 2026-01-20 15:40:21.582 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 20 15:40:21 compute-1 nova_compute[225855]: 2026-01-20 15:40:21.582 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:40:21 compute-1 nova_compute[225855]: 2026-01-20 15:40:21.582 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 15:40:21 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/3019188951' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:40:21 compute-1 ceph-mon[81775]: pgmap v3545: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 323 KiB/s rd, 1.7 MiB/s wr, 60 op/s
Jan 20 15:40:22 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/4221299174' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:40:23 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:40:23 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:40:23 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:40:23.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:40:23 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:40:23 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:40:23 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:40:23.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:40:23 compute-1 ceph-mon[81775]: pgmap v3546: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 4.0 KiB/s rd, 27 KiB/s wr, 2 op/s
Jan 20 15:40:23 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:40:24 compute-1 nova_compute[225855]: 2026-01-20 15:40:24.336 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:40:24 compute-1 nova_compute[225855]: 2026-01-20 15:40:24.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:40:24 compute-1 nova_compute[225855]: 2026-01-20 15:40:24.360 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:40:24 compute-1 nova_compute[225855]: 2026-01-20 15:40:24.360 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:40:24 compute-1 nova_compute[225855]: 2026-01-20 15:40:24.360 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:40:24 compute-1 nova_compute[225855]: 2026-01-20 15:40:24.360 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 15:40:24 compute-1 nova_compute[225855]: 2026-01-20 15:40:24.361 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:40:24 compute-1 nova_compute[225855]: 2026-01-20 15:40:24.430 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:40:24 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:40:24 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/415358669' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:40:24 compute-1 nova_compute[225855]: 2026-01-20 15:40:24.832 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:40:24 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/3621488171' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:40:24 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/415358669' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:40:24 compute-1 nova_compute[225855]: 2026-01-20 15:40:24.983 225859 WARNING nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 15:40:24 compute-1 nova_compute[225855]: 2026-01-20 15:40:24.984 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4287MB free_disk=20.942752838134766GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 15:40:24 compute-1 nova_compute[225855]: 2026-01-20 15:40:24.985 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:40:24 compute-1 nova_compute[225855]: 2026-01-20 15:40:24.985 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:40:25 compute-1 nova_compute[225855]: 2026-01-20 15:40:25.055 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 15:40:25 compute-1 nova_compute[225855]: 2026-01-20 15:40:25.056 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 15:40:25 compute-1 nova_compute[225855]: 2026-01-20 15:40:25.075 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:40:25 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:40:25 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:40:25 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:40:25.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:40:25 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:40:25 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:40:25 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:40:25.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:40:25 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:40:25 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2103467019' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:40:25 compute-1 nova_compute[225855]: 2026-01-20 15:40:25.555 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:40:25 compute-1 nova_compute[225855]: 2026-01-20 15:40:25.562 225859 DEBUG nova.compute.provider_tree [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:40:25 compute-1 nova_compute[225855]: 2026-01-20 15:40:25.760 225859 DEBUG nova.scheduler.client.report [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:40:25 compute-1 nova_compute[225855]: 2026-01-20 15:40:25.763 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 15:40:25 compute-1 nova_compute[225855]: 2026-01-20 15:40:25.763 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.778s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:40:25 compute-1 ceph-mon[81775]: pgmap v3547: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 4.0 KiB/s rd, 29 KiB/s wr, 2 op/s
Jan 20 15:40:25 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/3854088840' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:40:25 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/2103467019' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:40:26 compute-1 nova_compute[225855]: 2026-01-20 15:40:26.764 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:40:27 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:40:27 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:40:27 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:40:27.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:40:27 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:40:27 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:40:27 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:40:27.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:40:28 compute-1 nova_compute[225855]: 2026-01-20 15:40:28.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:40:28 compute-1 nova_compute[225855]: 2026-01-20 15:40:28.340 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:40:28 compute-1 nova_compute[225855]: 2026-01-20 15:40:28.340 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:40:28 compute-1 nova_compute[225855]: 2026-01-20 15:40:28.340 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:40:28 compute-1 ceph-mon[81775]: pgmap v3548: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 682 B/s rd, 14 KiB/s wr, 0 op/s
Jan 20 15:40:28 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:40:29 compute-1 podman[326434]: 2026-01-20 15:40:29.08117587 +0000 UTC m=+0.113388135 container health_status 72be95c12e041eb2cd1a16ad70dfa1391d9f03a5f3180a614ead9459dcfcd46f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, 
container_name=ovn_controller, managed_by=edpm_ansible)
Jan 20 15:40:29 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:40:29 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:40:29 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:40:29.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:40:29 compute-1 nova_compute[225855]: 2026-01-20 15:40:29.338 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:40:29 compute-1 nova_compute[225855]: 2026-01-20 15:40:29.431 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:40:29 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:40:29 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:40:29 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:40:29.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:40:30 compute-1 ceph-mon[81775]: pgmap v3549: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 14 KiB/s wr, 0 op/s
Jan 20 15:40:31 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:40:31 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:40:31 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:40:31.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:40:31 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:40:31 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:40:31 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:40:31.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:40:31 compute-1 ceph-mon[81775]: pgmap v3550: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 14 KiB/s wr, 0 op/s
Jan 20 15:40:32 compute-1 sudo[326461]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:40:32 compute-1 sudo[326461]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:40:32 compute-1 sudo[326461]: pam_unix(sudo:session): session closed for user root
Jan 20 15:40:32 compute-1 sudo[326486]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:40:32 compute-1 sudo[326486]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:40:32 compute-1 sudo[326486]: pam_unix(sudo:session): session closed for user root
Jan 20 15:40:32 compute-1 nova_compute[225855]: 2026-01-20 15:40:32.378 225859 DEBUG oslo_concurrency.lockutils [None req-e3aa03ba-f6bc-4ddf-96ec-477e46733c14 a8b010c120d8488bb889b23fb6abfc7f 124217db76ec4d598d94591670b51957 - - default default] Acquiring lock "eae4f8ad-34d0-4893-b039-a371c87ba22e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:40:32 compute-1 nova_compute[225855]: 2026-01-20 15:40:32.379 225859 DEBUG oslo_concurrency.lockutils [None req-e3aa03ba-f6bc-4ddf-96ec-477e46733c14 a8b010c120d8488bb889b23fb6abfc7f 124217db76ec4d598d94591670b51957 - - default default] Lock "eae4f8ad-34d0-4893-b039-a371c87ba22e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:40:32 compute-1 nova_compute[225855]: 2026-01-20 15:40:32.399 225859 DEBUG nova.compute.manager [None req-e3aa03ba-f6bc-4ddf-96ec-477e46733c14 a8b010c120d8488bb889b23fb6abfc7f 124217db76ec4d598d94591670b51957 - - default default] [instance: eae4f8ad-34d0-4893-b039-a371c87ba22e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 20 15:40:32 compute-1 nova_compute[225855]: 2026-01-20 15:40:32.476 225859 DEBUG oslo_concurrency.lockutils [None req-e3aa03ba-f6bc-4ddf-96ec-477e46733c14 a8b010c120d8488bb889b23fb6abfc7f 124217db76ec4d598d94591670b51957 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:40:32 compute-1 nova_compute[225855]: 2026-01-20 15:40:32.476 225859 DEBUG oslo_concurrency.lockutils [None req-e3aa03ba-f6bc-4ddf-96ec-477e46733c14 a8b010c120d8488bb889b23fb6abfc7f 124217db76ec4d598d94591670b51957 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:40:32 compute-1 nova_compute[225855]: 2026-01-20 15:40:32.482 225859 DEBUG nova.virt.hardware [None req-e3aa03ba-f6bc-4ddf-96ec-477e46733c14 a8b010c120d8488bb889b23fb6abfc7f 124217db76ec4d598d94591670b51957 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 20 15:40:32 compute-1 nova_compute[225855]: 2026-01-20 15:40:32.483 225859 INFO nova.compute.claims [None req-e3aa03ba-f6bc-4ddf-96ec-477e46733c14 a8b010c120d8488bb889b23fb6abfc7f 124217db76ec4d598d94591670b51957 - - default default] [instance: eae4f8ad-34d0-4893-b039-a371c87ba22e] Claim successful on node compute-1.ctlplane.example.com
Jan 20 15:40:32 compute-1 sshd-session[326511]: Connection closed by 134.199.199.245 port 50812
Jan 20 15:40:32 compute-1 nova_compute[225855]: 2026-01-20 15:40:32.719 225859 DEBUG oslo_concurrency.processutils [None req-e3aa03ba-f6bc-4ddf-96ec-477e46733c14 a8b010c120d8488bb889b23fb6abfc7f 124217db76ec4d598d94591670b51957 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:40:33 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:40:33 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/522043476' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:40:33 compute-1 nova_compute[225855]: 2026-01-20 15:40:33.189 225859 DEBUG oslo_concurrency.processutils [None req-e3aa03ba-f6bc-4ddf-96ec-477e46733c14 a8b010c120d8488bb889b23fb6abfc7f 124217db76ec4d598d94591670b51957 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:40:33 compute-1 nova_compute[225855]: 2026-01-20 15:40:33.195 225859 DEBUG nova.compute.provider_tree [None req-e3aa03ba-f6bc-4ddf-96ec-477e46733c14 a8b010c120d8488bb889b23fb6abfc7f 124217db76ec4d598d94591670b51957 - - default default] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:40:33 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/522043476' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:40:33 compute-1 nova_compute[225855]: 2026-01-20 15:40:33.251 225859 DEBUG nova.scheduler.client.report [None req-e3aa03ba-f6bc-4ddf-96ec-477e46733c14 a8b010c120d8488bb889b23fb6abfc7f 124217db76ec4d598d94591670b51957 - - default default] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:40:33 compute-1 nova_compute[225855]: 2026-01-20 15:40:33.277 225859 DEBUG oslo_concurrency.lockutils [None req-e3aa03ba-f6bc-4ddf-96ec-477e46733c14 a8b010c120d8488bb889b23fb6abfc7f 124217db76ec4d598d94591670b51957 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.801s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:40:33 compute-1 nova_compute[225855]: 2026-01-20 15:40:33.278 225859 DEBUG nova.compute.manager [None req-e3aa03ba-f6bc-4ddf-96ec-477e46733c14 a8b010c120d8488bb889b23fb6abfc7f 124217db76ec4d598d94591670b51957 - - default default] [instance: eae4f8ad-34d0-4893-b039-a371c87ba22e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 20 15:40:33 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:40:33 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:40:33 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:40:33.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:40:33 compute-1 nova_compute[225855]: 2026-01-20 15:40:33.335 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:40:33 compute-1 nova_compute[225855]: 2026-01-20 15:40:33.376 225859 DEBUG nova.compute.manager [None req-e3aa03ba-f6bc-4ddf-96ec-477e46733c14 a8b010c120d8488bb889b23fb6abfc7f 124217db76ec4d598d94591670b51957 - - default default] [instance: eae4f8ad-34d0-4893-b039-a371c87ba22e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 20 15:40:33 compute-1 nova_compute[225855]: 2026-01-20 15:40:33.376 225859 DEBUG nova.network.neutron [None req-e3aa03ba-f6bc-4ddf-96ec-477e46733c14 a8b010c120d8488bb889b23fb6abfc7f 124217db76ec4d598d94591670b51957 - - default default] [instance: eae4f8ad-34d0-4893-b039-a371c87ba22e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 20 15:40:33 compute-1 nova_compute[225855]: 2026-01-20 15:40:33.398 225859 INFO nova.virt.libvirt.driver [None req-e3aa03ba-f6bc-4ddf-96ec-477e46733c14 a8b010c120d8488bb889b23fb6abfc7f 124217db76ec4d598d94591670b51957 - - default default] [instance: eae4f8ad-34d0-4893-b039-a371c87ba22e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 20 15:40:33 compute-1 nova_compute[225855]: 2026-01-20 15:40:33.421 225859 DEBUG nova.compute.manager [None req-e3aa03ba-f6bc-4ddf-96ec-477e46733c14 a8b010c120d8488bb889b23fb6abfc7f 124217db76ec4d598d94591670b51957 - - default default] [instance: eae4f8ad-34d0-4893-b039-a371c87ba22e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 20 15:40:33 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:40:33 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:40:33 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:40:33.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:40:33 compute-1 nova_compute[225855]: 2026-01-20 15:40:33.531 225859 DEBUG nova.compute.manager [None req-e3aa03ba-f6bc-4ddf-96ec-477e46733c14 a8b010c120d8488bb889b23fb6abfc7f 124217db76ec4d598d94591670b51957 - - default default] [instance: eae4f8ad-34d0-4893-b039-a371c87ba22e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 20 15:40:33 compute-1 nova_compute[225855]: 2026-01-20 15:40:33.533 225859 DEBUG nova.virt.libvirt.driver [None req-e3aa03ba-f6bc-4ddf-96ec-477e46733c14 a8b010c120d8488bb889b23fb6abfc7f 124217db76ec4d598d94591670b51957 - - default default] [instance: eae4f8ad-34d0-4893-b039-a371c87ba22e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 20 15:40:33 compute-1 nova_compute[225855]: 2026-01-20 15:40:33.533 225859 INFO nova.virt.libvirt.driver [None req-e3aa03ba-f6bc-4ddf-96ec-477e46733c14 a8b010c120d8488bb889b23fb6abfc7f 124217db76ec4d598d94591670b51957 - - default default] [instance: eae4f8ad-34d0-4893-b039-a371c87ba22e] Creating image(s)
Jan 20 15:40:33 compute-1 nova_compute[225855]: 2026-01-20 15:40:33.561 225859 DEBUG nova.storage.rbd_utils [None req-e3aa03ba-f6bc-4ddf-96ec-477e46733c14 a8b010c120d8488bb889b23fb6abfc7f 124217db76ec4d598d94591670b51957 - - default default] rbd image eae4f8ad-34d0-4893-b039-a371c87ba22e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:40:33 compute-1 nova_compute[225855]: 2026-01-20 15:40:33.592 225859 DEBUG nova.storage.rbd_utils [None req-e3aa03ba-f6bc-4ddf-96ec-477e46733c14 a8b010c120d8488bb889b23fb6abfc7f 124217db76ec4d598d94591670b51957 - - default default] rbd image eae4f8ad-34d0-4893-b039-a371c87ba22e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:40:33 compute-1 nova_compute[225855]: 2026-01-20 15:40:33.621 225859 DEBUG nova.storage.rbd_utils [None req-e3aa03ba-f6bc-4ddf-96ec-477e46733c14 a8b010c120d8488bb889b23fb6abfc7f 124217db76ec4d598d94591670b51957 - - default default] rbd image eae4f8ad-34d0-4893-b039-a371c87ba22e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:40:33 compute-1 nova_compute[225855]: 2026-01-20 15:40:33.625 225859 DEBUG oslo_concurrency.processutils [None req-e3aa03ba-f6bc-4ddf-96ec-477e46733c14 a8b010c120d8488bb889b23fb6abfc7f 124217db76ec4d598d94591670b51957 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:40:33 compute-1 nova_compute[225855]: 2026-01-20 15:40:33.688 225859 DEBUG oslo_concurrency.processutils [None req-e3aa03ba-f6bc-4ddf-96ec-477e46733c14 a8b010c120d8488bb889b23fb6abfc7f 124217db76ec4d598d94591670b51957 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:40:33 compute-1 nova_compute[225855]: 2026-01-20 15:40:33.689 225859 DEBUG oslo_concurrency.lockutils [None req-e3aa03ba-f6bc-4ddf-96ec-477e46733c14 a8b010c120d8488bb889b23fb6abfc7f 124217db76ec4d598d94591670b51957 - - default default] Acquiring lock "82d5c1918fd7c974214c7a48c1793a7a82560462" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:40:33 compute-1 nova_compute[225855]: 2026-01-20 15:40:33.690 225859 DEBUG oslo_concurrency.lockutils [None req-e3aa03ba-f6bc-4ddf-96ec-477e46733c14 a8b010c120d8488bb889b23fb6abfc7f 124217db76ec4d598d94591670b51957 - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:40:33 compute-1 nova_compute[225855]: 2026-01-20 15:40:33.690 225859 DEBUG oslo_concurrency.lockutils [None req-e3aa03ba-f6bc-4ddf-96ec-477e46733c14 a8b010c120d8488bb889b23fb6abfc7f 124217db76ec4d598d94591670b51957 - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:40:33 compute-1 nova_compute[225855]: 2026-01-20 15:40:33.720 225859 DEBUG nova.storage.rbd_utils [None req-e3aa03ba-f6bc-4ddf-96ec-477e46733c14 a8b010c120d8488bb889b23fb6abfc7f 124217db76ec4d598d94591670b51957 - - default default] rbd image eae4f8ad-34d0-4893-b039-a371c87ba22e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:40:33 compute-1 nova_compute[225855]: 2026-01-20 15:40:33.727 225859 DEBUG oslo_concurrency.processutils [None req-e3aa03ba-f6bc-4ddf-96ec-477e46733c14 a8b010c120d8488bb889b23fb6abfc7f 124217db76ec4d598d94591670b51957 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 eae4f8ad-34d0-4893-b039-a371c87ba22e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:40:33 compute-1 nova_compute[225855]: 2026-01-20 15:40:33.764 225859 DEBUG nova.policy [None req-e3aa03ba-f6bc-4ddf-96ec-477e46733c14 a8b010c120d8488bb889b23fb6abfc7f 124217db76ec4d598d94591670b51957 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a8b010c120d8488bb889b23fb6abfc7f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '124217db76ec4d598d94591670b51957', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 20 15:40:33 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:40:34 compute-1 nova_compute[225855]: 2026-01-20 15:40:34.026 225859 DEBUG oslo_concurrency.processutils [None req-e3aa03ba-f6bc-4ddf-96ec-477e46733c14 a8b010c120d8488bb889b23fb6abfc7f 124217db76ec4d598d94591670b51957 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 eae4f8ad-34d0-4893-b039-a371c87ba22e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.298s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:40:34 compute-1 nova_compute[225855]: 2026-01-20 15:40:34.099 225859 DEBUG nova.storage.rbd_utils [None req-e3aa03ba-f6bc-4ddf-96ec-477e46733c14 a8b010c120d8488bb889b23fb6abfc7f 124217db76ec4d598d94591670b51957 - - default default] resizing rbd image eae4f8ad-34d0-4893-b039-a371c87ba22e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 20 15:40:34 compute-1 nova_compute[225855]: 2026-01-20 15:40:34.201 225859 DEBUG nova.objects.instance [None req-e3aa03ba-f6bc-4ddf-96ec-477e46733c14 a8b010c120d8488bb889b23fb6abfc7f 124217db76ec4d598d94591670b51957 - - default default] Lazy-loading 'migration_context' on Instance uuid eae4f8ad-34d0-4893-b039-a371c87ba22e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:40:34 compute-1 nova_compute[225855]: 2026-01-20 15:40:34.267 225859 DEBUG nova.virt.libvirt.driver [None req-e3aa03ba-f6bc-4ddf-96ec-477e46733c14 a8b010c120d8488bb889b23fb6abfc7f 124217db76ec4d598d94591670b51957 - - default default] [instance: eae4f8ad-34d0-4893-b039-a371c87ba22e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 20 15:40:34 compute-1 nova_compute[225855]: 2026-01-20 15:40:34.267 225859 DEBUG nova.virt.libvirt.driver [None req-e3aa03ba-f6bc-4ddf-96ec-477e46733c14 a8b010c120d8488bb889b23fb6abfc7f 124217db76ec4d598d94591670b51957 - - default default] [instance: eae4f8ad-34d0-4893-b039-a371c87ba22e] Ensure instance console log exists: /var/lib/nova/instances/eae4f8ad-34d0-4893-b039-a371c87ba22e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 20 15:40:34 compute-1 nova_compute[225855]: 2026-01-20 15:40:34.267 225859 DEBUG oslo_concurrency.lockutils [None req-e3aa03ba-f6bc-4ddf-96ec-477e46733c14 a8b010c120d8488bb889b23fb6abfc7f 124217db76ec4d598d94591670b51957 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:40:34 compute-1 nova_compute[225855]: 2026-01-20 15:40:34.268 225859 DEBUG oslo_concurrency.lockutils [None req-e3aa03ba-f6bc-4ddf-96ec-477e46733c14 a8b010c120d8488bb889b23fb6abfc7f 124217db76ec4d598d94591670b51957 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:40:34 compute-1 nova_compute[225855]: 2026-01-20 15:40:34.268 225859 DEBUG oslo_concurrency.lockutils [None req-e3aa03ba-f6bc-4ddf-96ec-477e46733c14 a8b010c120d8488bb889b23fb6abfc7f 124217db76ec4d598d94591670b51957 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:40:34 compute-1 nova_compute[225855]: 2026-01-20 15:40:34.341 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:40:34 compute-1 nova_compute[225855]: 2026-01-20 15:40:34.434 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:40:35 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:40:35 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:40:35 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:40:35.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:40:35 compute-1 nova_compute[225855]: 2026-01-20 15:40:35.364 225859 DEBUG nova.network.neutron [None req-e3aa03ba-f6bc-4ddf-96ec-477e46733c14 a8b010c120d8488bb889b23fb6abfc7f 124217db76ec4d598d94591670b51957 - - default default] [instance: eae4f8ad-34d0-4893-b039-a371c87ba22e] Successfully created port: 2f6f66d9-264a-4c11-ba21-8cef740517bf _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 20 15:40:35 compute-1 ceph-mon[81775]: pgmap v3551: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.0 KiB/s wr, 0 op/s
Jan 20 15:40:35 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:40:35 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:40:35 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:40:35.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:40:36 compute-1 ceph-mon[81775]: pgmap v3552: 321 pgs: 321 active+clean; 230 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 7.0 KiB/s rd, 1.2 MiB/s wr, 14 op/s
Jan 20 15:40:37 compute-1 podman[326702]: 2026-01-20 15:40:37.052152101 +0000 UTC m=+0.091878797 container health_status 533cd8e0c3b8b7f910f15cef49e7f879ab46c12ce646a35628c15f13725eb500 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 20 15:40:37 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:40:37 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:40:37 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:40:37.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:40:37 compute-1 nova_compute[225855]: 2026-01-20 15:40:37.412 225859 DEBUG nova.network.neutron [None req-e3aa03ba-f6bc-4ddf-96ec-477e46733c14 a8b010c120d8488bb889b23fb6abfc7f 124217db76ec4d598d94591670b51957 - - default default] [instance: eae4f8ad-34d0-4893-b039-a371c87ba22e] Successfully updated port: 2f6f66d9-264a-4c11-ba21-8cef740517bf _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 20 15:40:37 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:40:37 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:40:37 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:40:37.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:40:37 compute-1 nova_compute[225855]: 2026-01-20 15:40:37.914 225859 DEBUG oslo_concurrency.lockutils [None req-e3aa03ba-f6bc-4ddf-96ec-477e46733c14 a8b010c120d8488bb889b23fb6abfc7f 124217db76ec4d598d94591670b51957 - - default default] Acquiring lock "refresh_cache-eae4f8ad-34d0-4893-b039-a371c87ba22e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:40:37 compute-1 nova_compute[225855]: 2026-01-20 15:40:37.914 225859 DEBUG oslo_concurrency.lockutils [None req-e3aa03ba-f6bc-4ddf-96ec-477e46733c14 a8b010c120d8488bb889b23fb6abfc7f 124217db76ec4d598d94591670b51957 - - default default] Acquired lock "refresh_cache-eae4f8ad-34d0-4893-b039-a371c87ba22e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:40:37 compute-1 nova_compute[225855]: 2026-01-20 15:40:37.914 225859 DEBUG nova.network.neutron [None req-e3aa03ba-f6bc-4ddf-96ec-477e46733c14 a8b010c120d8488bb889b23fb6abfc7f 124217db76ec4d598d94591670b51957 - - default default] [instance: eae4f8ad-34d0-4893-b039-a371c87ba22e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 15:40:38 compute-1 nova_compute[225855]: 2026-01-20 15:40:38.162 225859 DEBUG nova.compute.manager [req-4eb5c8a7-e00c-406e-94a8-7c73a07aecb1 req-745af3b2-0e18-40f7-be22-1d50f7b3dcac 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: eae4f8ad-34d0-4893-b039-a371c87ba22e] Received event network-changed-2f6f66d9-264a-4c11-ba21-8cef740517bf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:40:38 compute-1 nova_compute[225855]: 2026-01-20 15:40:38.162 225859 DEBUG nova.compute.manager [req-4eb5c8a7-e00c-406e-94a8-7c73a07aecb1 req-745af3b2-0e18-40f7-be22-1d50f7b3dcac 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: eae4f8ad-34d0-4893-b039-a371c87ba22e] Refreshing instance network info cache due to event network-changed-2f6f66d9-264a-4c11-ba21-8cef740517bf. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 15:40:38 compute-1 nova_compute[225855]: 2026-01-20 15:40:38.163 225859 DEBUG oslo_concurrency.lockutils [req-4eb5c8a7-e00c-406e-94a8-7c73a07aecb1 req-745af3b2-0e18-40f7-be22-1d50f7b3dcac 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-eae4f8ad-34d0-4893-b039-a371c87ba22e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:40:38 compute-1 ceph-mon[81775]: pgmap v3553: 321 pgs: 321 active+clean; 235 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.7 MiB/s wr, 26 op/s
Jan 20 15:40:38 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:40:38 compute-1 nova_compute[225855]: 2026-01-20 15:40:38.998 225859 DEBUG nova.network.neutron [None req-e3aa03ba-f6bc-4ddf-96ec-477e46733c14 a8b010c120d8488bb889b23fb6abfc7f 124217db76ec4d598d94591670b51957 - - default default] [instance: eae4f8ad-34d0-4893-b039-a371c87ba22e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 20 15:40:39 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:40:39 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:40:39 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:40:39.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:40:39 compute-1 nova_compute[225855]: 2026-01-20 15:40:39.345 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:40:39 compute-1 nova_compute[225855]: 2026-01-20 15:40:39.436 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:40:39 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:40:39 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:40:39 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:40:39.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:40:40 compute-1 sudo[326723]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:40:40 compute-1 sudo[326723]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:40:40 compute-1 sudo[326723]: pam_unix(sudo:session): session closed for user root
Jan 20 15:40:40 compute-1 sudo[326748]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:40:40 compute-1 sudo[326748]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:40:40 compute-1 ceph-mon[81775]: pgmap v3554: 321 pgs: 321 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 20 15:40:40 compute-1 sudo[326748]: pam_unix(sudo:session): session closed for user root
Jan 20 15:40:40 compute-1 sudo[326773]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:40:40 compute-1 sudo[326773]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:40:40 compute-1 sudo[326773]: pam_unix(sudo:session): session closed for user root
Jan 20 15:40:40 compute-1 sudo[326798]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Jan 20 15:40:40 compute-1 sudo[326798]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:40:40 compute-1 sudo[326798]: pam_unix(sudo:session): session closed for user root
Jan 20 15:40:41 compute-1 sudo[326843]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:40:41 compute-1 sudo[326843]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:40:41 compute-1 sudo[326843]: pam_unix(sudo:session): session closed for user root
Jan 20 15:40:41 compute-1 sudo[326868]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:40:41 compute-1 sudo[326868]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:40:41 compute-1 sudo[326868]: pam_unix(sudo:session): session closed for user root
Jan 20 15:40:41 compute-1 sudo[326893]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:40:41 compute-1 sudo[326893]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:40:41 compute-1 sudo[326893]: pam_unix(sudo:session): session closed for user root
Jan 20 15:40:41 compute-1 sudo[326918]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 20 15:40:41 compute-1 sudo[326918]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:40:41 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:40:41 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:40:41 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:40:41.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:40:41 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:40:41 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:40:41 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:40:41.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:40:41 compute-1 sudo[326918]: pam_unix(sudo:session): session closed for user root
Jan 20 15:40:41 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:40:41 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:40:41 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:40:41 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:40:41 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 20 15:40:41 compute-1 ceph-mon[81775]: pgmap v3555: 321 pgs: 321 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 20 15:40:41 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 20 15:40:41 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:40:41 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 15:40:41 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:40:41 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 15:40:41 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 15:40:41 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:40:42 compute-1 nova_compute[225855]: 2026-01-20 15:40:42.048 225859 DEBUG nova.network.neutron [None req-e3aa03ba-f6bc-4ddf-96ec-477e46733c14 a8b010c120d8488bb889b23fb6abfc7f 124217db76ec4d598d94591670b51957 - - default default] [instance: eae4f8ad-34d0-4893-b039-a371c87ba22e] Updating instance_info_cache with network_info: [{"id": "2f6f66d9-264a-4c11-ba21-8cef740517bf", "address": "fa:16:3e:e0:ea:47", "network": {"id": "7887623c-0aac-4bfc-b122-61e1bb0418eb", "bridge": "br-int", "label": "tempest-network-smoke--1885521634", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "124217db76ec4d598d94591670b51957", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2f6f66d9-26", "ovs_interfaceid": "2f6f66d9-264a-4c11-ba21-8cef740517bf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:40:42 compute-1 nova_compute[225855]: 2026-01-20 15:40:42.090 225859 DEBUG oslo_concurrency.lockutils [None req-e3aa03ba-f6bc-4ddf-96ec-477e46733c14 a8b010c120d8488bb889b23fb6abfc7f 124217db76ec4d598d94591670b51957 - - default default] Releasing lock "refresh_cache-eae4f8ad-34d0-4893-b039-a371c87ba22e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:40:42 compute-1 nova_compute[225855]: 2026-01-20 15:40:42.091 225859 DEBUG nova.compute.manager [None req-e3aa03ba-f6bc-4ddf-96ec-477e46733c14 a8b010c120d8488bb889b23fb6abfc7f 124217db76ec4d598d94591670b51957 - - default default] [instance: eae4f8ad-34d0-4893-b039-a371c87ba22e] Instance network_info: |[{"id": "2f6f66d9-264a-4c11-ba21-8cef740517bf", "address": "fa:16:3e:e0:ea:47", "network": {"id": "7887623c-0aac-4bfc-b122-61e1bb0418eb", "bridge": "br-int", "label": "tempest-network-smoke--1885521634", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "124217db76ec4d598d94591670b51957", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2f6f66d9-26", "ovs_interfaceid": "2f6f66d9-264a-4c11-ba21-8cef740517bf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 20 15:40:42 compute-1 nova_compute[225855]: 2026-01-20 15:40:42.091 225859 DEBUG oslo_concurrency.lockutils [req-4eb5c8a7-e00c-406e-94a8-7c73a07aecb1 req-745af3b2-0e18-40f7-be22-1d50f7b3dcac 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-eae4f8ad-34d0-4893-b039-a371c87ba22e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:40:42 compute-1 nova_compute[225855]: 2026-01-20 15:40:42.091 225859 DEBUG nova.network.neutron [req-4eb5c8a7-e00c-406e-94a8-7c73a07aecb1 req-745af3b2-0e18-40f7-be22-1d50f7b3dcac 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: eae4f8ad-34d0-4893-b039-a371c87ba22e] Refreshing network info cache for port 2f6f66d9-264a-4c11-ba21-8cef740517bf _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 15:40:42 compute-1 nova_compute[225855]: 2026-01-20 15:40:42.094 225859 DEBUG nova.virt.libvirt.driver [None req-e3aa03ba-f6bc-4ddf-96ec-477e46733c14 a8b010c120d8488bb889b23fb6abfc7f 124217db76ec4d598d94591670b51957 - - default default] [instance: eae4f8ad-34d0-4893-b039-a371c87ba22e] Start _get_guest_xml network_info=[{"id": "2f6f66d9-264a-4c11-ba21-8cef740517bf", "address": "fa:16:3e:e0:ea:47", "network": {"id": "7887623c-0aac-4bfc-b122-61e1bb0418eb", "bridge": "br-int", "label": "tempest-network-smoke--1885521634", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "124217db76ec4d598d94591670b51957", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2f6f66d9-26", "ovs_interfaceid": "2f6f66d9-264a-4c11-ba21-8cef740517bf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'device_type': 'disk', 'encryption_options': None, 'size': 0, 'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'guest_format': None, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'image_id': 'a32b3e07-16d8-46fd-9a7b-c242c432fcf9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 20 15:40:42 compute-1 nova_compute[225855]: 2026-01-20 15:40:42.098 225859 WARNING nova.virt.libvirt.driver [None req-e3aa03ba-f6bc-4ddf-96ec-477e46733c14 a8b010c120d8488bb889b23fb6abfc7f 124217db76ec4d598d94591670b51957 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 15:40:42 compute-1 nova_compute[225855]: 2026-01-20 15:40:42.123 225859 DEBUG nova.virt.libvirt.host [None req-e3aa03ba-f6bc-4ddf-96ec-477e46733c14 a8b010c120d8488bb889b23fb6abfc7f 124217db76ec4d598d94591670b51957 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 20 15:40:42 compute-1 nova_compute[225855]: 2026-01-20 15:40:42.124 225859 DEBUG nova.virt.libvirt.host [None req-e3aa03ba-f6bc-4ddf-96ec-477e46733c14 a8b010c120d8488bb889b23fb6abfc7f 124217db76ec4d598d94591670b51957 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 20 15:40:42 compute-1 nova_compute[225855]: 2026-01-20 15:40:42.128 225859 DEBUG nova.virt.libvirt.host [None req-e3aa03ba-f6bc-4ddf-96ec-477e46733c14 a8b010c120d8488bb889b23fb6abfc7f 124217db76ec4d598d94591670b51957 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 20 15:40:42 compute-1 nova_compute[225855]: 2026-01-20 15:40:42.128 225859 DEBUG nova.virt.libvirt.host [None req-e3aa03ba-f6bc-4ddf-96ec-477e46733c14 a8b010c120d8488bb889b23fb6abfc7f 124217db76ec4d598d94591670b51957 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 20 15:40:42 compute-1 nova_compute[225855]: 2026-01-20 15:40:42.130 225859 DEBUG nova.virt.libvirt.driver [None req-e3aa03ba-f6bc-4ddf-96ec-477e46733c14 a8b010c120d8488bb889b23fb6abfc7f 124217db76ec4d598d94591670b51957 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 20 15:40:42 compute-1 nova_compute[225855]: 2026-01-20 15:40:42.130 225859 DEBUG nova.virt.hardware [None req-e3aa03ba-f6bc-4ddf-96ec-477e46733c14 a8b010c120d8488bb889b23fb6abfc7f 124217db76ec4d598d94591670b51957 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-20T14:21:55Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='522deaab-a741-4dbb-932d-d8b13a211c33',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 20 15:40:42 compute-1 nova_compute[225855]: 2026-01-20 15:40:42.130 225859 DEBUG nova.virt.hardware [None req-e3aa03ba-f6bc-4ddf-96ec-477e46733c14 a8b010c120d8488bb889b23fb6abfc7f 124217db76ec4d598d94591670b51957 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 20 15:40:42 compute-1 nova_compute[225855]: 2026-01-20 15:40:42.131 225859 DEBUG nova.virt.hardware [None req-e3aa03ba-f6bc-4ddf-96ec-477e46733c14 a8b010c120d8488bb889b23fb6abfc7f 124217db76ec4d598d94591670b51957 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 20 15:40:42 compute-1 nova_compute[225855]: 2026-01-20 15:40:42.131 225859 DEBUG nova.virt.hardware [None req-e3aa03ba-f6bc-4ddf-96ec-477e46733c14 a8b010c120d8488bb889b23fb6abfc7f 124217db76ec4d598d94591670b51957 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 20 15:40:42 compute-1 nova_compute[225855]: 2026-01-20 15:40:42.131 225859 DEBUG nova.virt.hardware [None req-e3aa03ba-f6bc-4ddf-96ec-477e46733c14 a8b010c120d8488bb889b23fb6abfc7f 124217db76ec4d598d94591670b51957 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 20 15:40:42 compute-1 nova_compute[225855]: 2026-01-20 15:40:42.131 225859 DEBUG nova.virt.hardware [None req-e3aa03ba-f6bc-4ddf-96ec-477e46733c14 a8b010c120d8488bb889b23fb6abfc7f 124217db76ec4d598d94591670b51957 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 20 15:40:42 compute-1 nova_compute[225855]: 2026-01-20 15:40:42.132 225859 DEBUG nova.virt.hardware [None req-e3aa03ba-f6bc-4ddf-96ec-477e46733c14 a8b010c120d8488bb889b23fb6abfc7f 124217db76ec4d598d94591670b51957 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 20 15:40:42 compute-1 nova_compute[225855]: 2026-01-20 15:40:42.132 225859 DEBUG nova.virt.hardware [None req-e3aa03ba-f6bc-4ddf-96ec-477e46733c14 a8b010c120d8488bb889b23fb6abfc7f 124217db76ec4d598d94591670b51957 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 20 15:40:42 compute-1 nova_compute[225855]: 2026-01-20 15:40:42.132 225859 DEBUG nova.virt.hardware [None req-e3aa03ba-f6bc-4ddf-96ec-477e46733c14 a8b010c120d8488bb889b23fb6abfc7f 124217db76ec4d598d94591670b51957 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 20 15:40:42 compute-1 nova_compute[225855]: 2026-01-20 15:40:42.132 225859 DEBUG nova.virt.hardware [None req-e3aa03ba-f6bc-4ddf-96ec-477e46733c14 a8b010c120d8488bb889b23fb6abfc7f 124217db76ec4d598d94591670b51957 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 20 15:40:42 compute-1 nova_compute[225855]: 2026-01-20 15:40:42.133 225859 DEBUG nova.virt.hardware [None req-e3aa03ba-f6bc-4ddf-96ec-477e46733c14 a8b010c120d8488bb889b23fb6abfc7f 124217db76ec4d598d94591670b51957 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 20 15:40:42 compute-1 nova_compute[225855]: 2026-01-20 15:40:42.136 225859 DEBUG oslo_concurrency.processutils [None req-e3aa03ba-f6bc-4ddf-96ec-477e46733c14 a8b010c120d8488bb889b23fb6abfc7f 124217db76ec4d598d94591670b51957 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:40:42 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 15:40:42 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4232303621' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:40:42 compute-1 nova_compute[225855]: 2026-01-20 15:40:42.577 225859 DEBUG oslo_concurrency.processutils [None req-e3aa03ba-f6bc-4ddf-96ec-477e46733c14 a8b010c120d8488bb889b23fb6abfc7f 124217db76ec4d598d94591670b51957 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:40:42 compute-1 nova_compute[225855]: 2026-01-20 15:40:42.613 225859 DEBUG nova.storage.rbd_utils [None req-e3aa03ba-f6bc-4ddf-96ec-477e46733c14 a8b010c120d8488bb889b23fb6abfc7f 124217db76ec4d598d94591670b51957 - - default default] rbd image eae4f8ad-34d0-4893-b039-a371c87ba22e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:40:42 compute-1 nova_compute[225855]: 2026-01-20 15:40:42.618 225859 DEBUG oslo_concurrency.processutils [None req-e3aa03ba-f6bc-4ddf-96ec-477e46733c14 a8b010c120d8488bb889b23fb6abfc7f 124217db76ec4d598d94591670b51957 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:40:43 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/4232303621' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:40:43 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 15:40:43 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3309998384' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:40:43 compute-1 nova_compute[225855]: 2026-01-20 15:40:43.089 225859 DEBUG oslo_concurrency.processutils [None req-e3aa03ba-f6bc-4ddf-96ec-477e46733c14 a8b010c120d8488bb889b23fb6abfc7f 124217db76ec4d598d94591670b51957 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:40:43 compute-1 nova_compute[225855]: 2026-01-20 15:40:43.090 225859 DEBUG nova.virt.libvirt.vif [None req-e3aa03ba-f6bc-4ddf-96ec-477e46733c14 a8b010c120d8488bb889b23fb6abfc7f 124217db76ec4d598d94591670b51957 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T15:40:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1131874044-access_point-243412062',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1131874044-access_point-243412062',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1131874044-ac',id=217,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBI8BHGvvVMk8YauPGCgxR3A4v1f0rWghzxaRGrntFPvXTTSIDvApqOQBjhgH6T1AnKeJJSHdUCD1AT2JA7XK7b8l6gUg2nLAeLoR1LsiUcTGAlR1hX0RRwuXqUe5lWZMg==',key_name='tempest-TestSecurityGroupsBasicOps-32974910',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='124217db76ec4d598d94591670b51957',ramdisk_id='',reservation_id='r-egebxdzr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1131874044',owner_user_name='tempest-TestSecurityGroupsBasicOps-1131874044-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T15:40:33Z,user_data=None,user_id='a8b010c120d8488bb889b23fb6abfc7f',uuid=eae4f8ad-34d0-4893-b039-a371c87ba22e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2f6f66d9-264a-4c11-ba21-8cef740517bf", "address": "fa:16:3e:e0:ea:47", "network": {"id": "7887623c-0aac-4bfc-b122-61e1bb0418eb", "bridge": "br-int", "label": "tempest-network-smoke--1885521634", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], 
"meta": {"injected": false, "tenant_id": "124217db76ec4d598d94591670b51957", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2f6f66d9-26", "ovs_interfaceid": "2f6f66d9-264a-4c11-ba21-8cef740517bf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 20 15:40:43 compute-1 nova_compute[225855]: 2026-01-20 15:40:43.091 225859 DEBUG nova.network.os_vif_util [None req-e3aa03ba-f6bc-4ddf-96ec-477e46733c14 a8b010c120d8488bb889b23fb6abfc7f 124217db76ec4d598d94591670b51957 - - default default] Converting VIF {"id": "2f6f66d9-264a-4c11-ba21-8cef740517bf", "address": "fa:16:3e:e0:ea:47", "network": {"id": "7887623c-0aac-4bfc-b122-61e1bb0418eb", "bridge": "br-int", "label": "tempest-network-smoke--1885521634", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "124217db76ec4d598d94591670b51957", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2f6f66d9-26", "ovs_interfaceid": "2f6f66d9-264a-4c11-ba21-8cef740517bf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 15:40:43 compute-1 nova_compute[225855]: 2026-01-20 15:40:43.092 225859 DEBUG nova.network.os_vif_util [None req-e3aa03ba-f6bc-4ddf-96ec-477e46733c14 a8b010c120d8488bb889b23fb6abfc7f 124217db76ec4d598d94591670b51957 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e0:ea:47,bridge_name='br-int',has_traffic_filtering=True,id=2f6f66d9-264a-4c11-ba21-8cef740517bf,network=Network(7887623c-0aac-4bfc-b122-61e1bb0418eb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2f6f66d9-26') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 15:40:43 compute-1 nova_compute[225855]: 2026-01-20 15:40:43.093 225859 DEBUG nova.objects.instance [None req-e3aa03ba-f6bc-4ddf-96ec-477e46733c14 a8b010c120d8488bb889b23fb6abfc7f 124217db76ec4d598d94591670b51957 - - default default] Lazy-loading 'pci_devices' on Instance uuid eae4f8ad-34d0-4893-b039-a371c87ba22e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:40:43 compute-1 nova_compute[225855]: 2026-01-20 15:40:43.122 225859 DEBUG nova.virt.libvirt.driver [None req-e3aa03ba-f6bc-4ddf-96ec-477e46733c14 a8b010c120d8488bb889b23fb6abfc7f 124217db76ec4d598d94591670b51957 - - default default] [instance: eae4f8ad-34d0-4893-b039-a371c87ba22e] End _get_guest_xml xml=<domain type="kvm">
Jan 20 15:40:43 compute-1 nova_compute[225855]:   <uuid>eae4f8ad-34d0-4893-b039-a371c87ba22e</uuid>
Jan 20 15:40:43 compute-1 nova_compute[225855]:   <name>instance-000000d9</name>
Jan 20 15:40:43 compute-1 nova_compute[225855]:   <memory>131072</memory>
Jan 20 15:40:43 compute-1 nova_compute[225855]:   <vcpu>1</vcpu>
Jan 20 15:40:43 compute-1 nova_compute[225855]:   <metadata>
Jan 20 15:40:43 compute-1 nova_compute[225855]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 15:40:43 compute-1 nova_compute[225855]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 15:40:43 compute-1 nova_compute[225855]:       <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-1131874044-access_point-243412062</nova:name>
Jan 20 15:40:43 compute-1 nova_compute[225855]:       <nova:creationTime>2026-01-20 15:40:42</nova:creationTime>
Jan 20 15:40:43 compute-1 nova_compute[225855]:       <nova:flavor name="m1.nano">
Jan 20 15:40:43 compute-1 nova_compute[225855]:         <nova:memory>128</nova:memory>
Jan 20 15:40:43 compute-1 nova_compute[225855]:         <nova:disk>1</nova:disk>
Jan 20 15:40:43 compute-1 nova_compute[225855]:         <nova:swap>0</nova:swap>
Jan 20 15:40:43 compute-1 nova_compute[225855]:         <nova:ephemeral>0</nova:ephemeral>
Jan 20 15:40:43 compute-1 nova_compute[225855]:         <nova:vcpus>1</nova:vcpus>
Jan 20 15:40:43 compute-1 nova_compute[225855]:       </nova:flavor>
Jan 20 15:40:43 compute-1 nova_compute[225855]:       <nova:owner>
Jan 20 15:40:43 compute-1 nova_compute[225855]:         <nova:user uuid="a8b010c120d8488bb889b23fb6abfc7f">tempest-TestSecurityGroupsBasicOps-1131874044-project-member</nova:user>
Jan 20 15:40:43 compute-1 nova_compute[225855]:         <nova:project uuid="124217db76ec4d598d94591670b51957">tempest-TestSecurityGroupsBasicOps-1131874044</nova:project>
Jan 20 15:40:43 compute-1 nova_compute[225855]:       </nova:owner>
Jan 20 15:40:43 compute-1 nova_compute[225855]:       <nova:root type="image" uuid="a32b3e07-16d8-46fd-9a7b-c242c432fcf9"/>
Jan 20 15:40:43 compute-1 nova_compute[225855]:       <nova:ports>
Jan 20 15:40:43 compute-1 nova_compute[225855]:         <nova:port uuid="2f6f66d9-264a-4c11-ba21-8cef740517bf">
Jan 20 15:40:43 compute-1 nova_compute[225855]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Jan 20 15:40:43 compute-1 nova_compute[225855]:         </nova:port>
Jan 20 15:40:43 compute-1 nova_compute[225855]:       </nova:ports>
Jan 20 15:40:43 compute-1 nova_compute[225855]:     </nova:instance>
Jan 20 15:40:43 compute-1 nova_compute[225855]:   </metadata>
Jan 20 15:40:43 compute-1 nova_compute[225855]:   <sysinfo type="smbios">
Jan 20 15:40:43 compute-1 nova_compute[225855]:     <system>
Jan 20 15:40:43 compute-1 nova_compute[225855]:       <entry name="manufacturer">RDO</entry>
Jan 20 15:40:43 compute-1 nova_compute[225855]:       <entry name="product">OpenStack Compute</entry>
Jan 20 15:40:43 compute-1 nova_compute[225855]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 15:40:43 compute-1 nova_compute[225855]:       <entry name="serial">eae4f8ad-34d0-4893-b039-a371c87ba22e</entry>
Jan 20 15:40:43 compute-1 nova_compute[225855]:       <entry name="uuid">eae4f8ad-34d0-4893-b039-a371c87ba22e</entry>
Jan 20 15:40:43 compute-1 nova_compute[225855]:       <entry name="family">Virtual Machine</entry>
Jan 20 15:40:43 compute-1 nova_compute[225855]:     </system>
Jan 20 15:40:43 compute-1 nova_compute[225855]:   </sysinfo>
Jan 20 15:40:43 compute-1 nova_compute[225855]:   <os>
Jan 20 15:40:43 compute-1 nova_compute[225855]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 20 15:40:43 compute-1 nova_compute[225855]:     <boot dev="hd"/>
Jan 20 15:40:43 compute-1 nova_compute[225855]:     <smbios mode="sysinfo"/>
Jan 20 15:40:43 compute-1 nova_compute[225855]:   </os>
Jan 20 15:40:43 compute-1 nova_compute[225855]:   <features>
Jan 20 15:40:43 compute-1 nova_compute[225855]:     <acpi/>
Jan 20 15:40:43 compute-1 nova_compute[225855]:     <apic/>
Jan 20 15:40:43 compute-1 nova_compute[225855]:     <vmcoreinfo/>
Jan 20 15:40:43 compute-1 nova_compute[225855]:   </features>
Jan 20 15:40:43 compute-1 nova_compute[225855]:   <clock offset="utc">
Jan 20 15:40:43 compute-1 nova_compute[225855]:     <timer name="pit" tickpolicy="delay"/>
Jan 20 15:40:43 compute-1 nova_compute[225855]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 20 15:40:43 compute-1 nova_compute[225855]:     <timer name="hpet" present="no"/>
Jan 20 15:40:43 compute-1 nova_compute[225855]:   </clock>
Jan 20 15:40:43 compute-1 nova_compute[225855]:   <cpu mode="custom" match="exact">
Jan 20 15:40:43 compute-1 nova_compute[225855]:     <model>Nehalem</model>
Jan 20 15:40:43 compute-1 nova_compute[225855]:     <topology sockets="1" cores="1" threads="1"/>
Jan 20 15:40:43 compute-1 nova_compute[225855]:   </cpu>
Jan 20 15:40:43 compute-1 nova_compute[225855]:   <devices>
Jan 20 15:40:43 compute-1 nova_compute[225855]:     <disk type="network" device="disk">
Jan 20 15:40:43 compute-1 nova_compute[225855]:       <driver type="raw" cache="none"/>
Jan 20 15:40:43 compute-1 nova_compute[225855]:       <source protocol="rbd" name="vms/eae4f8ad-34d0-4893-b039-a371c87ba22e_disk">
Jan 20 15:40:43 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 15:40:43 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 15:40:43 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 15:40:43 compute-1 nova_compute[225855]:       </source>
Jan 20 15:40:43 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 15:40:43 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 15:40:43 compute-1 nova_compute[225855]:       </auth>
Jan 20 15:40:43 compute-1 nova_compute[225855]:       <target dev="vda" bus="virtio"/>
Jan 20 15:40:43 compute-1 nova_compute[225855]:     </disk>
Jan 20 15:40:43 compute-1 nova_compute[225855]:     <disk type="network" device="cdrom">
Jan 20 15:40:43 compute-1 nova_compute[225855]:       <driver type="raw" cache="none"/>
Jan 20 15:40:43 compute-1 nova_compute[225855]:       <source protocol="rbd" name="vms/eae4f8ad-34d0-4893-b039-a371c87ba22e_disk.config">
Jan 20 15:40:43 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 15:40:43 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 15:40:43 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 15:40:43 compute-1 nova_compute[225855]:       </source>
Jan 20 15:40:43 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 15:40:43 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 15:40:43 compute-1 nova_compute[225855]:       </auth>
Jan 20 15:40:43 compute-1 nova_compute[225855]:       <target dev="sda" bus="sata"/>
Jan 20 15:40:43 compute-1 nova_compute[225855]:     </disk>
Jan 20 15:40:43 compute-1 nova_compute[225855]:     <interface type="ethernet">
Jan 20 15:40:43 compute-1 nova_compute[225855]:       <mac address="fa:16:3e:e0:ea:47"/>
Jan 20 15:40:43 compute-1 nova_compute[225855]:       <model type="virtio"/>
Jan 20 15:40:43 compute-1 nova_compute[225855]:       <driver name="vhost" rx_queue_size="512"/>
Jan 20 15:40:43 compute-1 nova_compute[225855]:       <mtu size="1442"/>
Jan 20 15:40:43 compute-1 nova_compute[225855]:       <target dev="tap2f6f66d9-26"/>
Jan 20 15:40:43 compute-1 nova_compute[225855]:     </interface>
Jan 20 15:40:43 compute-1 nova_compute[225855]:     <serial type="pty">
Jan 20 15:40:43 compute-1 nova_compute[225855]:       <log file="/var/lib/nova/instances/eae4f8ad-34d0-4893-b039-a371c87ba22e/console.log" append="off"/>
Jan 20 15:40:43 compute-1 nova_compute[225855]:     </serial>
Jan 20 15:40:43 compute-1 nova_compute[225855]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 20 15:40:43 compute-1 nova_compute[225855]:     <video>
Jan 20 15:40:43 compute-1 nova_compute[225855]:       <model type="virtio"/>
Jan 20 15:40:43 compute-1 nova_compute[225855]:     </video>
Jan 20 15:40:43 compute-1 nova_compute[225855]:     <input type="tablet" bus="usb"/>
Jan 20 15:40:43 compute-1 nova_compute[225855]:     <rng model="virtio">
Jan 20 15:40:43 compute-1 nova_compute[225855]:       <backend model="random">/dev/urandom</backend>
Jan 20 15:40:43 compute-1 nova_compute[225855]:     </rng>
Jan 20 15:40:43 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root"/>
Jan 20 15:40:43 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:40:43 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:40:43 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:40:43 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:40:43 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:40:43 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:40:43 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:40:43 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:40:43 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:40:43 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:40:43 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:40:43 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:40:43 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:40:43 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:40:43 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:40:43 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:40:43 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:40:43 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:40:43 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:40:43 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:40:43 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:40:43 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:40:43 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:40:43 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:40:43 compute-1 nova_compute[225855]:     <controller type="usb" index="0"/>
Jan 20 15:40:43 compute-1 nova_compute[225855]:     <memballoon model="virtio">
Jan 20 15:40:43 compute-1 nova_compute[225855]:       <stats period="10"/>
Jan 20 15:40:43 compute-1 nova_compute[225855]:     </memballoon>
Jan 20 15:40:43 compute-1 nova_compute[225855]:   </devices>
Jan 20 15:40:43 compute-1 nova_compute[225855]: </domain>
Jan 20 15:40:43 compute-1 nova_compute[225855]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 20 15:40:43 compute-1 nova_compute[225855]: 2026-01-20 15:40:43.124 225859 DEBUG nova.compute.manager [None req-e3aa03ba-f6bc-4ddf-96ec-477e46733c14 a8b010c120d8488bb889b23fb6abfc7f 124217db76ec4d598d94591670b51957 - - default default] [instance: eae4f8ad-34d0-4893-b039-a371c87ba22e] Preparing to wait for external event network-vif-plugged-2f6f66d9-264a-4c11-ba21-8cef740517bf prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 20 15:40:43 compute-1 nova_compute[225855]: 2026-01-20 15:40:43.125 225859 DEBUG oslo_concurrency.lockutils [None req-e3aa03ba-f6bc-4ddf-96ec-477e46733c14 a8b010c120d8488bb889b23fb6abfc7f 124217db76ec4d598d94591670b51957 - - default default] Acquiring lock "eae4f8ad-34d0-4893-b039-a371c87ba22e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:40:43 compute-1 nova_compute[225855]: 2026-01-20 15:40:43.125 225859 DEBUG oslo_concurrency.lockutils [None req-e3aa03ba-f6bc-4ddf-96ec-477e46733c14 a8b010c120d8488bb889b23fb6abfc7f 124217db76ec4d598d94591670b51957 - - default default] Lock "eae4f8ad-34d0-4893-b039-a371c87ba22e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:40:43 compute-1 nova_compute[225855]: 2026-01-20 15:40:43.125 225859 DEBUG oslo_concurrency.lockutils [None req-e3aa03ba-f6bc-4ddf-96ec-477e46733c14 a8b010c120d8488bb889b23fb6abfc7f 124217db76ec4d598d94591670b51957 - - default default] Lock "eae4f8ad-34d0-4893-b039-a371c87ba22e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:40:43 compute-1 nova_compute[225855]: 2026-01-20 15:40:43.126 225859 DEBUG nova.virt.libvirt.vif [None req-e3aa03ba-f6bc-4ddf-96ec-477e46733c14 a8b010c120d8488bb889b23fb6abfc7f 124217db76ec4d598d94591670b51957 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T15:40:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1131874044-access_point-243412062',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1131874044-access_point-243412062',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1131874044-ac',id=217,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBI8BHGvvVMk8YauPGCgxR3A4v1f0rWghzxaRGrntFPvXTTSIDvApqOQBjhgH6T1AnKeJJSHdUCD1AT2JA7XK7b8l6gUg2nLAeLoR1LsiUcTGAlR1hX0RRwuXqUe5lWZMg==',key_name='tempest-TestSecurityGroupsBasicOps-32974910',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='124217db76ec4d598d94591670b51957',ramdisk_id='',reservation_id='r-egebxdzr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1131874044',owner_user_name='tempest-TestSecurityGroupsBasicOps-1131874044-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T15:40:33Z,user_data=None,user_id='a8b010c120d8488bb889b23fb6abfc7f',uuid=eae4f8ad-34d0-4893-b039-a371c87ba22e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2f6f66d9-264a-4c11-ba21-8cef740517bf", "address": "fa:16:3e:e0:ea:47", "network": {"id": "7887623c-0aac-4bfc-b122-61e1bb0418eb", "bridge": "br-int", "label": "tempest-network-smoke--1885521634", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "124217db76ec4d598d94591670b51957", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2f6f66d9-26", "ovs_interfaceid": "2f6f66d9-264a-4c11-ba21-8cef740517bf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 20 15:40:43 compute-1 nova_compute[225855]: 2026-01-20 15:40:43.127 225859 DEBUG nova.network.os_vif_util [None req-e3aa03ba-f6bc-4ddf-96ec-477e46733c14 a8b010c120d8488bb889b23fb6abfc7f 124217db76ec4d598d94591670b51957 - - default default] Converting VIF {"id": "2f6f66d9-264a-4c11-ba21-8cef740517bf", "address": "fa:16:3e:e0:ea:47", "network": {"id": "7887623c-0aac-4bfc-b122-61e1bb0418eb", "bridge": "br-int", "label": "tempest-network-smoke--1885521634", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "124217db76ec4d598d94591670b51957", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2f6f66d9-26", "ovs_interfaceid": "2f6f66d9-264a-4c11-ba21-8cef740517bf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 15:40:43 compute-1 nova_compute[225855]: 2026-01-20 15:40:43.127 225859 DEBUG nova.network.os_vif_util [None req-e3aa03ba-f6bc-4ddf-96ec-477e46733c14 a8b010c120d8488bb889b23fb6abfc7f 124217db76ec4d598d94591670b51957 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e0:ea:47,bridge_name='br-int',has_traffic_filtering=True,id=2f6f66d9-264a-4c11-ba21-8cef740517bf,network=Network(7887623c-0aac-4bfc-b122-61e1bb0418eb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2f6f66d9-26') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 15:40:43 compute-1 nova_compute[225855]: 2026-01-20 15:40:43.128 225859 DEBUG os_vif [None req-e3aa03ba-f6bc-4ddf-96ec-477e46733c14 a8b010c120d8488bb889b23fb6abfc7f 124217db76ec4d598d94591670b51957 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e0:ea:47,bridge_name='br-int',has_traffic_filtering=True,id=2f6f66d9-264a-4c11-ba21-8cef740517bf,network=Network(7887623c-0aac-4bfc-b122-61e1bb0418eb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2f6f66d9-26') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 20 15:40:43 compute-1 nova_compute[225855]: 2026-01-20 15:40:43.129 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:40:43 compute-1 nova_compute[225855]: 2026-01-20 15:40:43.129 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:40:43 compute-1 nova_compute[225855]: 2026-01-20 15:40:43.130 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 15:40:43 compute-1 nova_compute[225855]: 2026-01-20 15:40:43.134 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:40:43 compute-1 nova_compute[225855]: 2026-01-20 15:40:43.134 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2f6f66d9-26, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:40:43 compute-1 nova_compute[225855]: 2026-01-20 15:40:43.135 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2f6f66d9-26, col_values=(('external_ids', {'iface-id': '2f6f66d9-264a-4c11-ba21-8cef740517bf', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e0:ea:47', 'vm-uuid': 'eae4f8ad-34d0-4893-b039-a371c87ba22e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:40:43 compute-1 nova_compute[225855]: 2026-01-20 15:40:43.136 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:40:43 compute-1 NetworkManager[49104]: <info>  [1768923643.1378] manager: (tap2f6f66d9-26): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/420)
Jan 20 15:40:43 compute-1 nova_compute[225855]: 2026-01-20 15:40:43.139 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 15:40:43 compute-1 nova_compute[225855]: 2026-01-20 15:40:43.144 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:40:43 compute-1 nova_compute[225855]: 2026-01-20 15:40:43.145 225859 INFO os_vif [None req-e3aa03ba-f6bc-4ddf-96ec-477e46733c14 a8b010c120d8488bb889b23fb6abfc7f 124217db76ec4d598d94591670b51957 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e0:ea:47,bridge_name='br-int',has_traffic_filtering=True,id=2f6f66d9-264a-4c11-ba21-8cef740517bf,network=Network(7887623c-0aac-4bfc-b122-61e1bb0418eb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2f6f66d9-26')
Jan 20 15:40:43 compute-1 nova_compute[225855]: 2026-01-20 15:40:43.272 225859 DEBUG nova.virt.libvirt.driver [None req-e3aa03ba-f6bc-4ddf-96ec-477e46733c14 a8b010c120d8488bb889b23fb6abfc7f 124217db76ec4d598d94591670b51957 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 15:40:43 compute-1 nova_compute[225855]: 2026-01-20 15:40:43.273 225859 DEBUG nova.virt.libvirt.driver [None req-e3aa03ba-f6bc-4ddf-96ec-477e46733c14 a8b010c120d8488bb889b23fb6abfc7f 124217db76ec4d598d94591670b51957 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 15:40:43 compute-1 nova_compute[225855]: 2026-01-20 15:40:43.273 225859 DEBUG nova.virt.libvirt.driver [None req-e3aa03ba-f6bc-4ddf-96ec-477e46733c14 a8b010c120d8488bb889b23fb6abfc7f 124217db76ec4d598d94591670b51957 - - default default] No VIF found with MAC fa:16:3e:e0:ea:47, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 20 15:40:43 compute-1 nova_compute[225855]: 2026-01-20 15:40:43.274 225859 INFO nova.virt.libvirt.driver [None req-e3aa03ba-f6bc-4ddf-96ec-477e46733c14 a8b010c120d8488bb889b23fb6abfc7f 124217db76ec4d598d94591670b51957 - - default default] [instance: eae4f8ad-34d0-4893-b039-a371c87ba22e] Using config drive
Jan 20 15:40:43 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:40:43 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:40:43 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:40:43.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:40:43 compute-1 nova_compute[225855]: 2026-01-20 15:40:43.303 225859 DEBUG nova.storage.rbd_utils [None req-e3aa03ba-f6bc-4ddf-96ec-477e46733c14 a8b010c120d8488bb889b23fb6abfc7f 124217db76ec4d598d94591670b51957 - - default default] rbd image eae4f8ad-34d0-4893-b039-a371c87ba22e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:40:43 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:40:43 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:40:43 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:40:43.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:40:43 compute-1 nova_compute[225855]: 2026-01-20 15:40:43.735 225859 INFO nova.virt.libvirt.driver [None req-e3aa03ba-f6bc-4ddf-96ec-477e46733c14 a8b010c120d8488bb889b23fb6abfc7f 124217db76ec4d598d94591670b51957 - - default default] [instance: eae4f8ad-34d0-4893-b039-a371c87ba22e] Creating config drive at /var/lib/nova/instances/eae4f8ad-34d0-4893-b039-a371c87ba22e/disk.config
Jan 20 15:40:43 compute-1 nova_compute[225855]: 2026-01-20 15:40:43.745 225859 DEBUG oslo_concurrency.processutils [None req-e3aa03ba-f6bc-4ddf-96ec-477e46733c14 a8b010c120d8488bb889b23fb6abfc7f 124217db76ec4d598d94591670b51957 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/eae4f8ad-34d0-4893-b039-a371c87ba22e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp489gev6m execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:40:43 compute-1 nova_compute[225855]: 2026-01-20 15:40:43.775 225859 DEBUG nova.network.neutron [req-4eb5c8a7-e00c-406e-94a8-7c73a07aecb1 req-745af3b2-0e18-40f7-be22-1d50f7b3dcac 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: eae4f8ad-34d0-4893-b039-a371c87ba22e] Updated VIF entry in instance network info cache for port 2f6f66d9-264a-4c11-ba21-8cef740517bf. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 15:40:43 compute-1 nova_compute[225855]: 2026-01-20 15:40:43.776 225859 DEBUG nova.network.neutron [req-4eb5c8a7-e00c-406e-94a8-7c73a07aecb1 req-745af3b2-0e18-40f7-be22-1d50f7b3dcac 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: eae4f8ad-34d0-4893-b039-a371c87ba22e] Updating instance_info_cache with network_info: [{"id": "2f6f66d9-264a-4c11-ba21-8cef740517bf", "address": "fa:16:3e:e0:ea:47", "network": {"id": "7887623c-0aac-4bfc-b122-61e1bb0418eb", "bridge": "br-int", "label": "tempest-network-smoke--1885521634", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "124217db76ec4d598d94591670b51957", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2f6f66d9-26", "ovs_interfaceid": "2f6f66d9-264a-4c11-ba21-8cef740517bf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:40:43 compute-1 nova_compute[225855]: 2026-01-20 15:40:43.812 225859 DEBUG oslo_concurrency.lockutils [req-4eb5c8a7-e00c-406e-94a8-7c73a07aecb1 req-745af3b2-0e18-40f7-be22-1d50f7b3dcac 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-eae4f8ad-34d0-4893-b039-a371c87ba22e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:40:43 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:40:43 compute-1 nova_compute[225855]: 2026-01-20 15:40:43.883 225859 DEBUG oslo_concurrency.processutils [None req-e3aa03ba-f6bc-4ddf-96ec-477e46733c14 a8b010c120d8488bb889b23fb6abfc7f 124217db76ec4d598d94591670b51957 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/eae4f8ad-34d0-4893-b039-a371c87ba22e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp489gev6m" returned: 0 in 0.139s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:40:43 compute-1 nova_compute[225855]: 2026-01-20 15:40:43.912 225859 DEBUG nova.storage.rbd_utils [None req-e3aa03ba-f6bc-4ddf-96ec-477e46733c14 a8b010c120d8488bb889b23fb6abfc7f 124217db76ec4d598d94591670b51957 - - default default] rbd image eae4f8ad-34d0-4893-b039-a371c87ba22e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:40:43 compute-1 nova_compute[225855]: 2026-01-20 15:40:43.916 225859 DEBUG oslo_concurrency.processutils [None req-e3aa03ba-f6bc-4ddf-96ec-477e46733c14 a8b010c120d8488bb889b23fb6abfc7f 124217db76ec4d598d94591670b51957 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/eae4f8ad-34d0-4893-b039-a371c87ba22e/disk.config eae4f8ad-34d0-4893-b039-a371c87ba22e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:40:44 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/3309998384' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:40:44 compute-1 ceph-mon[81775]: pgmap v3556: 321 pgs: 321 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 20 15:40:44 compute-1 nova_compute[225855]: 2026-01-20 15:40:44.070 225859 DEBUG oslo_concurrency.processutils [None req-e3aa03ba-f6bc-4ddf-96ec-477e46733c14 a8b010c120d8488bb889b23fb6abfc7f 124217db76ec4d598d94591670b51957 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/eae4f8ad-34d0-4893-b039-a371c87ba22e/disk.config eae4f8ad-34d0-4893-b039-a371c87ba22e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.154s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:40:44 compute-1 nova_compute[225855]: 2026-01-20 15:40:44.071 225859 INFO nova.virt.libvirt.driver [None req-e3aa03ba-f6bc-4ddf-96ec-477e46733c14 a8b010c120d8488bb889b23fb6abfc7f 124217db76ec4d598d94591670b51957 - - default default] [instance: eae4f8ad-34d0-4893-b039-a371c87ba22e] Deleting local config drive /var/lib/nova/instances/eae4f8ad-34d0-4893-b039-a371c87ba22e/disk.config because it was imported into RBD.
Jan 20 15:40:44 compute-1 kernel: tap2f6f66d9-26: entered promiscuous mode
Jan 20 15:40:44 compute-1 NetworkManager[49104]: <info>  [1768923644.1251] manager: (tap2f6f66d9-26): new Tun device (/org/freedesktop/NetworkManager/Devices/421)
Jan 20 15:40:44 compute-1 ovn_controller[130490]: 2026-01-20T15:40:44Z|00981|binding|INFO|Claiming lport 2f6f66d9-264a-4c11-ba21-8cef740517bf for this chassis.
Jan 20 15:40:44 compute-1 nova_compute[225855]: 2026-01-20 15:40:44.124 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:40:44 compute-1 ovn_controller[130490]: 2026-01-20T15:40:44Z|00982|binding|INFO|2f6f66d9-264a-4c11-ba21-8cef740517bf: Claiming fa:16:3e:e0:ea:47 10.100.0.6
Jan 20 15:40:44 compute-1 nova_compute[225855]: 2026-01-20 15:40:44.128 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:40:44 compute-1 nova_compute[225855]: 2026-01-20 15:40:44.133 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:40:44 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:40:44.145 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e0:ea:47 10.100.0.6'], port_security=['fa:16:3e:e0:ea:47 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'eae4f8ad-34d0-4893-b039-a371c87ba22e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7887623c-0aac-4bfc-b122-61e1bb0418eb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '124217db76ec4d598d94591670b51957', 'neutron:revision_number': '2', 'neutron:security_group_ids': '16a3d061-4fac-44b6-8559-a0d83ddd3ce6 9098e587-0497-4bd0-abe2-7bb17bf96b42', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1ca3022e-756e-490f-aed0-0aadecee6965, chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], logical_port=2f6f66d9-264a-4c11-ba21-8cef740517bf) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:40:44 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:40:44.147 140354 INFO neutron.agent.ovn.metadata.agent [-] Port 2f6f66d9-264a-4c11-ba21-8cef740517bf in datapath 7887623c-0aac-4bfc-b122-61e1bb0418eb bound to our chassis
Jan 20 15:40:44 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:40:44.148 140354 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7887623c-0aac-4bfc-b122-61e1bb0418eb
Jan 20 15:40:44 compute-1 systemd-udevd[327110]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 15:40:44 compute-1 systemd-machined[194361]: New machine qemu-113-instance-000000d9.
Jan 20 15:40:44 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:40:44.160 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[1cb2349d-8248-4fd8-81ad-c690a7515c07]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:40:44 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:40:44.161 140354 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap7887623c-01 in ovnmeta-7887623c-0aac-4bfc-b122-61e1bb0418eb namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 20 15:40:44 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:40:44.163 229707 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap7887623c-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 20 15:40:44 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:40:44.163 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[48131b4e-061f-43aa-a7d3-96b7059feaca]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:40:44 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:40:44.164 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[bb7c424f-b0f4-4ffd-92d0-e784bae4edc6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:40:44 compute-1 NetworkManager[49104]: <info>  [1768923644.1706] device (tap2f6f66d9-26): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 20 15:40:44 compute-1 NetworkManager[49104]: <info>  [1768923644.1722] device (tap2f6f66d9-26): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 20 15:40:44 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:40:44.176 140466 DEBUG oslo.privsep.daemon [-] privsep: reply[f43074c4-2401-464e-9fd2-6bcf6fb5b270]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:40:44 compute-1 systemd[1]: Started Virtual Machine qemu-113-instance-000000d9.
Jan 20 15:40:44 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:40:44.201 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[84869386-35e3-4e29-ac2a-90e935e442f3]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:40:44 compute-1 nova_compute[225855]: 2026-01-20 15:40:44.209 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:40:44 compute-1 ovn_controller[130490]: 2026-01-20T15:40:44Z|00983|binding|INFO|Setting lport 2f6f66d9-264a-4c11-ba21-8cef740517bf ovn-installed in OVS
Jan 20 15:40:44 compute-1 ovn_controller[130490]: 2026-01-20T15:40:44Z|00984|binding|INFO|Setting lport 2f6f66d9-264a-4c11-ba21-8cef740517bf up in Southbound
Jan 20 15:40:44 compute-1 nova_compute[225855]: 2026-01-20 15:40:44.213 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:40:44 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:40:44.230 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[d212b9c6-87a6-4945-a328-ac074d0f743c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:40:44 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:40:44.235 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[8b806365-fdb1-486e-8080-d800ec133d04]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:40:44 compute-1 systemd-udevd[327114]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 15:40:44 compute-1 NetworkManager[49104]: <info>  [1768923644.2374] manager: (tap7887623c-00): new Veth device (/org/freedesktop/NetworkManager/Devices/422)
Jan 20 15:40:44 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:40:44.266 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[35203a60-984d-4e0c-aff4-dea7a83fc2c2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:40:44 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:40:44.269 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[3d3dd9db-53cf-45d8-afcf-866173a3624d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:40:44 compute-1 NetworkManager[49104]: <info>  [1768923644.2911] device (tap7887623c-00): carrier: link connected
Jan 20 15:40:44 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:40:44.294 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[c312275a-2e55-4d7d-bd61-5c715b725d80]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:40:44 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:40:44.310 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[57cb103c-b22f-41bc-87cf-fe8ee9337041]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7887623c-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0c:56:95'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 280], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 868321, 'reachable_time': 23841, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 327143, 'error': None, 'target': 'ovnmeta-7887623c-0aac-4bfc-b122-61e1bb0418eb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:40:44 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:40:44.328 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[fbf64a52-359f-4377-8d2b-d9a4a37f8fb6]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe0c:5695'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 868321, 'tstamp': 868321}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 327144, 'error': None, 'target': 'ovnmeta-7887623c-0aac-4bfc-b122-61e1bb0418eb', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:40:44 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:40:44.346 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[6e5ee92d-dc5f-47ef-9e19-ed7d56966776]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7887623c-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0c:56:95'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 280], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 868321, 'reachable_time': 23841, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 327145, 'error': None, 'target': 'ovnmeta-7887623c-0aac-4bfc-b122-61e1bb0418eb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:40:44 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:40:44.378 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[5aa2838f-d8b3-4911-99c6-2d5b9ea114c8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:40:44 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:40:44.435 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[a00a56ba-fc6b-4297-97e4-446c9171a955]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:40:44 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:40:44.436 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7887623c-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:40:44 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:40:44.437 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 15:40:44 compute-1 nova_compute[225855]: 2026-01-20 15:40:44.438 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:40:44 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:40:44.438 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7887623c-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:40:44 compute-1 kernel: tap7887623c-00: entered promiscuous mode
Jan 20 15:40:44 compute-1 NetworkManager[49104]: <info>  [1768923644.4405] manager: (tap7887623c-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/423)
Jan 20 15:40:44 compute-1 nova_compute[225855]: 2026-01-20 15:40:44.441 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:40:44 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:40:44.442 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7887623c-00, col_values=(('external_ids', {'iface-id': 'b64ac449-7e2b-4185-951f-151c787a165d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:40:44 compute-1 nova_compute[225855]: 2026-01-20 15:40:44.443 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:40:44 compute-1 ovn_controller[130490]: 2026-01-20T15:40:44Z|00985|binding|INFO|Releasing lport b64ac449-7e2b-4185-951f-151c787a165d from this chassis (sb_readonly=0)
Jan 20 15:40:44 compute-1 nova_compute[225855]: 2026-01-20 15:40:44.457 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:40:44 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:40:44.458 140354 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/7887623c-0aac-4bfc-b122-61e1bb0418eb.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/7887623c-0aac-4bfc-b122-61e1bb0418eb.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 20 15:40:44 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:40:44.459 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[8e185db9-8a44-45cc-8342-fe0c410a9751]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:40:44 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:40:44.460 140354 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 20 15:40:44 compute-1 ovn_metadata_agent[140349]: global
Jan 20 15:40:44 compute-1 ovn_metadata_agent[140349]:     log         /dev/log local0 debug
Jan 20 15:40:44 compute-1 ovn_metadata_agent[140349]:     log-tag     haproxy-metadata-proxy-7887623c-0aac-4bfc-b122-61e1bb0418eb
Jan 20 15:40:44 compute-1 ovn_metadata_agent[140349]:     user        root
Jan 20 15:40:44 compute-1 ovn_metadata_agent[140349]:     group       root
Jan 20 15:40:44 compute-1 ovn_metadata_agent[140349]:     maxconn     1024
Jan 20 15:40:44 compute-1 ovn_metadata_agent[140349]:     pidfile     /var/lib/neutron/external/pids/7887623c-0aac-4bfc-b122-61e1bb0418eb.pid.haproxy
Jan 20 15:40:44 compute-1 ovn_metadata_agent[140349]:     daemon
Jan 20 15:40:44 compute-1 ovn_metadata_agent[140349]: 
Jan 20 15:40:44 compute-1 ovn_metadata_agent[140349]: defaults
Jan 20 15:40:44 compute-1 ovn_metadata_agent[140349]:     log global
Jan 20 15:40:44 compute-1 ovn_metadata_agent[140349]:     mode http
Jan 20 15:40:44 compute-1 ovn_metadata_agent[140349]:     option httplog
Jan 20 15:40:44 compute-1 ovn_metadata_agent[140349]:     option dontlognull
Jan 20 15:40:44 compute-1 ovn_metadata_agent[140349]:     option http-server-close
Jan 20 15:40:44 compute-1 ovn_metadata_agent[140349]:     option forwardfor
Jan 20 15:40:44 compute-1 ovn_metadata_agent[140349]:     retries                 3
Jan 20 15:40:44 compute-1 ovn_metadata_agent[140349]:     timeout http-request    30s
Jan 20 15:40:44 compute-1 ovn_metadata_agent[140349]:     timeout connect         30s
Jan 20 15:40:44 compute-1 ovn_metadata_agent[140349]:     timeout client          32s
Jan 20 15:40:44 compute-1 ovn_metadata_agent[140349]:     timeout server          32s
Jan 20 15:40:44 compute-1 ovn_metadata_agent[140349]:     timeout http-keep-alive 30s
Jan 20 15:40:44 compute-1 ovn_metadata_agent[140349]: 
Jan 20 15:40:44 compute-1 ovn_metadata_agent[140349]: 
Jan 20 15:40:44 compute-1 ovn_metadata_agent[140349]: listen listener
Jan 20 15:40:44 compute-1 ovn_metadata_agent[140349]:     bind 169.254.169.254:80
Jan 20 15:40:44 compute-1 ovn_metadata_agent[140349]:     server metadata /var/lib/neutron/metadata_proxy
Jan 20 15:40:44 compute-1 ovn_metadata_agent[140349]:     http-request add-header X-OVN-Network-ID 7887623c-0aac-4bfc-b122-61e1bb0418eb
Jan 20 15:40:44 compute-1 ovn_metadata_agent[140349]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 20 15:40:44 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:40:44.461 140354 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-7887623c-0aac-4bfc-b122-61e1bb0418eb', 'env', 'PROCESS_TAG=haproxy-7887623c-0aac-4bfc-b122-61e1bb0418eb', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/7887623c-0aac-4bfc-b122-61e1bb0418eb.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 20 15:40:44 compute-1 nova_compute[225855]: 2026-01-20 15:40:44.618 225859 DEBUG nova.compute.manager [req-cca5b4c8-df7a-45ee-a915-1916b6efab6c req-a3c16476-6d51-4e35-ba27-c24f467678cb 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: eae4f8ad-34d0-4893-b039-a371c87ba22e] Received event network-vif-plugged-2f6f66d9-264a-4c11-ba21-8cef740517bf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:40:44 compute-1 nova_compute[225855]: 2026-01-20 15:40:44.619 225859 DEBUG oslo_concurrency.lockutils [req-cca5b4c8-df7a-45ee-a915-1916b6efab6c req-a3c16476-6d51-4e35-ba27-c24f467678cb 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "eae4f8ad-34d0-4893-b039-a371c87ba22e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:40:44 compute-1 nova_compute[225855]: 2026-01-20 15:40:44.619 225859 DEBUG oslo_concurrency.lockutils [req-cca5b4c8-df7a-45ee-a915-1916b6efab6c req-a3c16476-6d51-4e35-ba27-c24f467678cb 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "eae4f8ad-34d0-4893-b039-a371c87ba22e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:40:44 compute-1 nova_compute[225855]: 2026-01-20 15:40:44.619 225859 DEBUG oslo_concurrency.lockutils [req-cca5b4c8-df7a-45ee-a915-1916b6efab6c req-a3c16476-6d51-4e35-ba27-c24f467678cb 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "eae4f8ad-34d0-4893-b039-a371c87ba22e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:40:44 compute-1 nova_compute[225855]: 2026-01-20 15:40:44.620 225859 DEBUG nova.compute.manager [req-cca5b4c8-df7a-45ee-a915-1916b6efab6c req-a3c16476-6d51-4e35-ba27-c24f467678cb 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: eae4f8ad-34d0-4893-b039-a371c87ba22e] Processing event network-vif-plugged-2f6f66d9-264a-4c11-ba21-8cef740517bf _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 20 15:40:44 compute-1 podman[327177]: 2026-01-20 15:40:44.849335531 +0000 UTC m=+0.078720665 container create e50f16a5fb9f5df17d9e12509c39a365ee2814f233d00bebc285b4f3f143706d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7887623c-0aac-4bfc-b122-61e1bb0418eb, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Jan 20 15:40:44 compute-1 systemd[1]: Started libpod-conmon-e50f16a5fb9f5df17d9e12509c39a365ee2814f233d00bebc285b4f3f143706d.scope.
Jan 20 15:40:44 compute-1 podman[327177]: 2026-01-20 15:40:44.80081618 +0000 UTC m=+0.030201344 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 20 15:40:44 compute-1 systemd[1]: Started libcrun container.
Jan 20 15:40:44 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a41ca13b23ec89cab9c006d19667afe76dbef3f5eb560ea287f9b1085cdff420/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 20 15:40:44 compute-1 podman[327177]: 2026-01-20 15:40:44.934367065 +0000 UTC m=+0.163752219 container init e50f16a5fb9f5df17d9e12509c39a365ee2814f233d00bebc285b4f3f143706d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7887623c-0aac-4bfc-b122-61e1bb0418eb, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2)
Jan 20 15:40:44 compute-1 podman[327177]: 2026-01-20 15:40:44.940191029 +0000 UTC m=+0.169576173 container start e50f16a5fb9f5df17d9e12509c39a365ee2814f233d00bebc285b4f3f143706d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7887623c-0aac-4bfc-b122-61e1bb0418eb, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 15:40:44 compute-1 neutron-haproxy-ovnmeta-7887623c-0aac-4bfc-b122-61e1bb0418eb[327193]: [NOTICE]   (327197) : New worker (327199) forked
Jan 20 15:40:44 compute-1 neutron-haproxy-ovnmeta-7887623c-0aac-4bfc-b122-61e1bb0418eb[327193]: [NOTICE]   (327197) : Loading success.
Jan 20 15:40:45 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:40:45 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:40:45 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:40:45.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:40:45 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:40:45 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:40:45 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:40:45.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:40:46 compute-1 nova_compute[225855]: 2026-01-20 15:40:46.066 225859 DEBUG nova.compute.manager [None req-e3aa03ba-f6bc-4ddf-96ec-477e46733c14 a8b010c120d8488bb889b23fb6abfc7f 124217db76ec4d598d94591670b51957 - - default default] [instance: eae4f8ad-34d0-4893-b039-a371c87ba22e] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 20 15:40:46 compute-1 nova_compute[225855]: 2026-01-20 15:40:46.067 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768923646.0656903, eae4f8ad-34d0-4893-b039-a371c87ba22e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:40:46 compute-1 nova_compute[225855]: 2026-01-20 15:40:46.067 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: eae4f8ad-34d0-4893-b039-a371c87ba22e] VM Started (Lifecycle Event)
Jan 20 15:40:46 compute-1 nova_compute[225855]: 2026-01-20 15:40:46.070 225859 DEBUG nova.virt.libvirt.driver [None req-e3aa03ba-f6bc-4ddf-96ec-477e46733c14 a8b010c120d8488bb889b23fb6abfc7f 124217db76ec4d598d94591670b51957 - - default default] [instance: eae4f8ad-34d0-4893-b039-a371c87ba22e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 20 15:40:46 compute-1 nova_compute[225855]: 2026-01-20 15:40:46.075 225859 INFO nova.virt.libvirt.driver [-] [instance: eae4f8ad-34d0-4893-b039-a371c87ba22e] Instance spawned successfully.
Jan 20 15:40:46 compute-1 nova_compute[225855]: 2026-01-20 15:40:46.076 225859 DEBUG nova.virt.libvirt.driver [None req-e3aa03ba-f6bc-4ddf-96ec-477e46733c14 a8b010c120d8488bb889b23fb6abfc7f 124217db76ec4d598d94591670b51957 - - default default] [instance: eae4f8ad-34d0-4893-b039-a371c87ba22e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 20 15:40:46 compute-1 nova_compute[225855]: 2026-01-20 15:40:46.100 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: eae4f8ad-34d0-4893-b039-a371c87ba22e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:40:46 compute-1 nova_compute[225855]: 2026-01-20 15:40:46.104 225859 DEBUG nova.virt.libvirt.driver [None req-e3aa03ba-f6bc-4ddf-96ec-477e46733c14 a8b010c120d8488bb889b23fb6abfc7f 124217db76ec4d598d94591670b51957 - - default default] [instance: eae4f8ad-34d0-4893-b039-a371c87ba22e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:40:46 compute-1 nova_compute[225855]: 2026-01-20 15:40:46.104 225859 DEBUG nova.virt.libvirt.driver [None req-e3aa03ba-f6bc-4ddf-96ec-477e46733c14 a8b010c120d8488bb889b23fb6abfc7f 124217db76ec4d598d94591670b51957 - - default default] [instance: eae4f8ad-34d0-4893-b039-a371c87ba22e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:40:46 compute-1 nova_compute[225855]: 2026-01-20 15:40:46.105 225859 DEBUG nova.virt.libvirt.driver [None req-e3aa03ba-f6bc-4ddf-96ec-477e46733c14 a8b010c120d8488bb889b23fb6abfc7f 124217db76ec4d598d94591670b51957 - - default default] [instance: eae4f8ad-34d0-4893-b039-a371c87ba22e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:40:46 compute-1 nova_compute[225855]: 2026-01-20 15:40:46.105 225859 DEBUG nova.virt.libvirt.driver [None req-e3aa03ba-f6bc-4ddf-96ec-477e46733c14 a8b010c120d8488bb889b23fb6abfc7f 124217db76ec4d598d94591670b51957 - - default default] [instance: eae4f8ad-34d0-4893-b039-a371c87ba22e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:40:46 compute-1 nova_compute[225855]: 2026-01-20 15:40:46.106 225859 DEBUG nova.virt.libvirt.driver [None req-e3aa03ba-f6bc-4ddf-96ec-477e46733c14 a8b010c120d8488bb889b23fb6abfc7f 124217db76ec4d598d94591670b51957 - - default default] [instance: eae4f8ad-34d0-4893-b039-a371c87ba22e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:40:46 compute-1 nova_compute[225855]: 2026-01-20 15:40:46.106 225859 DEBUG nova.virt.libvirt.driver [None req-e3aa03ba-f6bc-4ddf-96ec-477e46733c14 a8b010c120d8488bb889b23fb6abfc7f 124217db76ec4d598d94591670b51957 - - default default] [instance: eae4f8ad-34d0-4893-b039-a371c87ba22e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:40:46 compute-1 nova_compute[225855]: 2026-01-20 15:40:46.112 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: eae4f8ad-34d0-4893-b039-a371c87ba22e] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 15:40:46 compute-1 nova_compute[225855]: 2026-01-20 15:40:46.174 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: eae4f8ad-34d0-4893-b039-a371c87ba22e] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 15:40:46 compute-1 nova_compute[225855]: 2026-01-20 15:40:46.175 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768923646.06819, eae4f8ad-34d0-4893-b039-a371c87ba22e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:40:46 compute-1 nova_compute[225855]: 2026-01-20 15:40:46.175 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: eae4f8ad-34d0-4893-b039-a371c87ba22e] VM Paused (Lifecycle Event)
Jan 20 15:40:46 compute-1 nova_compute[225855]: 2026-01-20 15:40:46.225 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: eae4f8ad-34d0-4893-b039-a371c87ba22e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:40:46 compute-1 nova_compute[225855]: 2026-01-20 15:40:46.230 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768923646.0700994, eae4f8ad-34d0-4893-b039-a371c87ba22e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:40:46 compute-1 nova_compute[225855]: 2026-01-20 15:40:46.230 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: eae4f8ad-34d0-4893-b039-a371c87ba22e] VM Resumed (Lifecycle Event)
Jan 20 15:40:46 compute-1 nova_compute[225855]: 2026-01-20 15:40:46.250 225859 INFO nova.compute.manager [None req-e3aa03ba-f6bc-4ddf-96ec-477e46733c14 a8b010c120d8488bb889b23fb6abfc7f 124217db76ec4d598d94591670b51957 - - default default] [instance: eae4f8ad-34d0-4893-b039-a371c87ba22e] Took 12.72 seconds to spawn the instance on the hypervisor.
Jan 20 15:40:46 compute-1 nova_compute[225855]: 2026-01-20 15:40:46.251 225859 DEBUG nova.compute.manager [None req-e3aa03ba-f6bc-4ddf-96ec-477e46733c14 a8b010c120d8488bb889b23fb6abfc7f 124217db76ec4d598d94591670b51957 - - default default] [instance: eae4f8ad-34d0-4893-b039-a371c87ba22e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:40:46 compute-1 nova_compute[225855]: 2026-01-20 15:40:46.262 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: eae4f8ad-34d0-4893-b039-a371c87ba22e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:40:46 compute-1 nova_compute[225855]: 2026-01-20 15:40:46.266 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: eae4f8ad-34d0-4893-b039-a371c87ba22e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 15:40:46 compute-1 nova_compute[225855]: 2026-01-20 15:40:46.295 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: eae4f8ad-34d0-4893-b039-a371c87ba22e] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 15:40:46 compute-1 nova_compute[225855]: 2026-01-20 15:40:46.331 225859 INFO nova.compute.manager [None req-e3aa03ba-f6bc-4ddf-96ec-477e46733c14 a8b010c120d8488bb889b23fb6abfc7f 124217db76ec4d598d94591670b51957 - - default default] [instance: eae4f8ad-34d0-4893-b039-a371c87ba22e] Took 13.89 seconds to build instance.
Jan 20 15:40:46 compute-1 nova_compute[225855]: 2026-01-20 15:40:46.388 225859 DEBUG oslo_concurrency.lockutils [None req-e3aa03ba-f6bc-4ddf-96ec-477e46733c14 a8b010c120d8488bb889b23fb6abfc7f 124217db76ec4d598d94591670b51957 - - default default] Lock "eae4f8ad-34d0-4893-b039-a371c87ba22e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.009s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:40:46 compute-1 ceph-mon[81775]: pgmap v3557: 321 pgs: 321 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 29 op/s
Jan 20 15:40:46 compute-1 nova_compute[225855]: 2026-01-20 15:40:46.716 225859 DEBUG nova.compute.manager [req-abd76668-deab-48eb-93c6-22b3006e1ffc req-4aa7251c-a823-4108-99e4-b4079e02c1a9 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: eae4f8ad-34d0-4893-b039-a371c87ba22e] Received event network-vif-plugged-2f6f66d9-264a-4c11-ba21-8cef740517bf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:40:46 compute-1 nova_compute[225855]: 2026-01-20 15:40:46.716 225859 DEBUG oslo_concurrency.lockutils [req-abd76668-deab-48eb-93c6-22b3006e1ffc req-4aa7251c-a823-4108-99e4-b4079e02c1a9 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "eae4f8ad-34d0-4893-b039-a371c87ba22e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:40:46 compute-1 nova_compute[225855]: 2026-01-20 15:40:46.717 225859 DEBUG oslo_concurrency.lockutils [req-abd76668-deab-48eb-93c6-22b3006e1ffc req-4aa7251c-a823-4108-99e4-b4079e02c1a9 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "eae4f8ad-34d0-4893-b039-a371c87ba22e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:40:46 compute-1 nova_compute[225855]: 2026-01-20 15:40:46.717 225859 DEBUG oslo_concurrency.lockutils [req-abd76668-deab-48eb-93c6-22b3006e1ffc req-4aa7251c-a823-4108-99e4-b4079e02c1a9 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "eae4f8ad-34d0-4893-b039-a371c87ba22e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:40:46 compute-1 nova_compute[225855]: 2026-01-20 15:40:46.717 225859 DEBUG nova.compute.manager [req-abd76668-deab-48eb-93c6-22b3006e1ffc req-4aa7251c-a823-4108-99e4-b4079e02c1a9 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: eae4f8ad-34d0-4893-b039-a371c87ba22e] No waiting events found dispatching network-vif-plugged-2f6f66d9-264a-4c11-ba21-8cef740517bf pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:40:46 compute-1 nova_compute[225855]: 2026-01-20 15:40:46.717 225859 WARNING nova.compute.manager [req-abd76668-deab-48eb-93c6-22b3006e1ffc req-4aa7251c-a823-4108-99e4-b4079e02c1a9 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: eae4f8ad-34d0-4893-b039-a371c87ba22e] Received unexpected event network-vif-plugged-2f6f66d9-264a-4c11-ba21-8cef740517bf for instance with vm_state active and task_state None.
Jan 20 15:40:47 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:40:47 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:40:47 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:40:47.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:40:47 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:40:47 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:40:47 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:40:47.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:40:47 compute-1 sudo[327251]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:40:47 compute-1 sudo[327251]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:40:47 compute-1 sudo[327251]: pam_unix(sudo:session): session closed for user root
Jan 20 15:40:47 compute-1 sudo[327276]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 15:40:47 compute-1 sudo[327276]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:40:47 compute-1 sudo[327276]: pam_unix(sudo:session): session closed for user root
Jan 20 15:40:48 compute-1 nova_compute[225855]: 2026-01-20 15:40:48.138 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:40:48 compute-1 ceph-mon[81775]: pgmap v3558: 321 pgs: 321 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 13 KiB/s rd, 574 KiB/s wr, 17 op/s
Jan 20 15:40:48 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:40:48 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:40:48 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:40:49 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:40:49 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:40:49 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:40:49.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:40:49 compute-1 nova_compute[225855]: 2026-01-20 15:40:49.441 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:40:49 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:40:49 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:40:49 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:40:49.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:40:50 compute-1 ceph-mon[81775]: pgmap v3559: 321 pgs: 321 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 175 KiB/s rd, 120 KiB/s wr, 17 op/s
Jan 20 15:40:51 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:40:51 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:40:51 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:40:51.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:40:51 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:40:51.349 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=85, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '12:bb:42', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '06:92:24:f7:15:56'}, ipsec=False) old=SB_Global(nb_cfg=84) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:40:51 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:40:51.350 140354 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 20 15:40:51 compute-1 nova_compute[225855]: 2026-01-20 15:40:51.349 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:40:51 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:40:51 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:40:51 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:40:51.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:40:52 compute-1 sudo[327305]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:40:52 compute-1 sudo[327305]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:40:52 compute-1 sudo[327305]: pam_unix(sudo:session): session closed for user root
Jan 20 15:40:52 compute-1 sudo[327330]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:40:52 compute-1 sudo[327330]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:40:52 compute-1 sudo[327330]: pam_unix(sudo:session): session closed for user root
Jan 20 15:40:52 compute-1 NetworkManager[49104]: <info>  [1768923652.3095] manager: (patch-br-int-to-provnet-b62c391b-f7a3-4a38-a0df-72ac0383ca74): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/424)
Jan 20 15:40:52 compute-1 NetworkManager[49104]: <info>  [1768923652.3114] manager: (patch-provnet-b62c391b-f7a3-4a38-a0df-72ac0383ca74-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/425)
Jan 20 15:40:52 compute-1 nova_compute[225855]: 2026-01-20 15:40:52.308 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:40:52 compute-1 nova_compute[225855]: 2026-01-20 15:40:52.380 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:40:52 compute-1 ovn_controller[130490]: 2026-01-20T15:40:52Z|00986|binding|INFO|Releasing lport b64ac449-7e2b-4185-951f-151c787a165d from this chassis (sb_readonly=0)
Jan 20 15:40:52 compute-1 nova_compute[225855]: 2026-01-20 15:40:52.391 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:40:52 compute-1 ceph-mon[81775]: pgmap v3560: 321 pgs: 321 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 74 op/s
Jan 20 15:40:53 compute-1 nova_compute[225855]: 2026-01-20 15:40:53.105 225859 DEBUG nova.compute.manager [req-7110c713-14c1-492b-b483-520999b9649c req-a44ce692-5bee-4068-a331-73120313c60e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: eae4f8ad-34d0-4893-b039-a371c87ba22e] Received event network-changed-2f6f66d9-264a-4c11-ba21-8cef740517bf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:40:53 compute-1 nova_compute[225855]: 2026-01-20 15:40:53.106 225859 DEBUG nova.compute.manager [req-7110c713-14c1-492b-b483-520999b9649c req-a44ce692-5bee-4068-a331-73120313c60e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: eae4f8ad-34d0-4893-b039-a371c87ba22e] Refreshing instance network info cache due to event network-changed-2f6f66d9-264a-4c11-ba21-8cef740517bf. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 15:40:53 compute-1 nova_compute[225855]: 2026-01-20 15:40:53.106 225859 DEBUG oslo_concurrency.lockutils [req-7110c713-14c1-492b-b483-520999b9649c req-a44ce692-5bee-4068-a331-73120313c60e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-eae4f8ad-34d0-4893-b039-a371c87ba22e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:40:53 compute-1 nova_compute[225855]: 2026-01-20 15:40:53.106 225859 DEBUG oslo_concurrency.lockutils [req-7110c713-14c1-492b-b483-520999b9649c req-a44ce692-5bee-4068-a331-73120313c60e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-eae4f8ad-34d0-4893-b039-a371c87ba22e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:40:53 compute-1 nova_compute[225855]: 2026-01-20 15:40:53.106 225859 DEBUG nova.network.neutron [req-7110c713-14c1-492b-b483-520999b9649c req-a44ce692-5bee-4068-a331-73120313c60e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: eae4f8ad-34d0-4893-b039-a371c87ba22e] Refreshing network info cache for port 2f6f66d9-264a-4c11-ba21-8cef740517bf _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 15:40:53 compute-1 nova_compute[225855]: 2026-01-20 15:40:53.182 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:40:53 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:40:53 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:40:53 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:40:53.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:40:53 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:40:53 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:40:53 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:40:53.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:40:53 compute-1 ceph-mon[81775]: pgmap v3561: 321 pgs: 321 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 74 op/s
Jan 20 15:40:53 compute-1 sshd-session[327304]: Connection closed by authenticating user root 134.199.199.245 port 54248 [preauth]
Jan 20 15:40:53 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:40:54 compute-1 nova_compute[225855]: 2026-01-20 15:40:54.444 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:40:55 compute-1 nova_compute[225855]: 2026-01-20 15:40:55.025 225859 DEBUG nova.network.neutron [req-7110c713-14c1-492b-b483-520999b9649c req-a44ce692-5bee-4068-a331-73120313c60e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: eae4f8ad-34d0-4893-b039-a371c87ba22e] Updated VIF entry in instance network info cache for port 2f6f66d9-264a-4c11-ba21-8cef740517bf. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 15:40:55 compute-1 nova_compute[225855]: 2026-01-20 15:40:55.026 225859 DEBUG nova.network.neutron [req-7110c713-14c1-492b-b483-520999b9649c req-a44ce692-5bee-4068-a331-73120313c60e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: eae4f8ad-34d0-4893-b039-a371c87ba22e] Updating instance_info_cache with network_info: [{"id": "2f6f66d9-264a-4c11-ba21-8cef740517bf", "address": "fa:16:3e:e0:ea:47", "network": {"id": "7887623c-0aac-4bfc-b122-61e1bb0418eb", "bridge": "br-int", "label": "tempest-network-smoke--1885521634", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "124217db76ec4d598d94591670b51957", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2f6f66d9-26", "ovs_interfaceid": "2f6f66d9-264a-4c11-ba21-8cef740517bf", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:40:55 compute-1 nova_compute[225855]: 2026-01-20 15:40:55.126 225859 DEBUG oslo_concurrency.lockutils [req-7110c713-14c1-492b-b483-520999b9649c req-a44ce692-5bee-4068-a331-73120313c60e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-eae4f8ad-34d0-4893-b039-a371c87ba22e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:40:55 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:40:55 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:40:55 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:40:55.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:40:55 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:40:55 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:40:55 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:40:55.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:40:56 compute-1 ceph-mon[81775]: pgmap v3562: 321 pgs: 321 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 74 op/s
Jan 20 15:40:57 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:40:57 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:40:57 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:40:57.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:40:57 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:40:57 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:40:57 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:40:57.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:40:58 compute-1 nova_compute[225855]: 2026-01-20 15:40:58.185 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:40:58 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:40:58.353 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5ffd4ac3-9266-4927-98ad-20a17782c725, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '85'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:40:58 compute-1 ceph-mon[81775]: pgmap v3563: 321 pgs: 321 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 72 op/s
Jan 20 15:40:58 compute-1 ovn_controller[130490]: 2026-01-20T15:40:58Z|00120|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:e0:ea:47 10.100.0.6
Jan 20 15:40:58 compute-1 ovn_controller[130490]: 2026-01-20T15:40:58Z|00121|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:e0:ea:47 10.100.0.6
Jan 20 15:40:58 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:40:59 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:40:59 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:40:59 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:40:59.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:40:59 compute-1 nova_compute[225855]: 2026-01-20 15:40:59.467 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:40:59 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:40:59 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:40:59 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:40:59.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:41:00 compute-1 podman[327362]: 2026-01-20 15:41:00.093954085 +0000 UTC m=+0.135219573 container health_status 72be95c12e041eb2cd1a16ad70dfa1391d9f03a5f3180a614ead9459dcfcd46f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 20 15:41:00 compute-1 ceph-mon[81775]: pgmap v3564: 321 pgs: 321 active+clean; 255 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 908 KiB/s wr, 81 op/s
Jan 20 15:41:01 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:41:01 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:41:01 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:41:01.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:41:01 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:41:01 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:41:01 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:41:01.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:41:02 compute-1 ceph-mon[81775]: pgmap v3565: 321 pgs: 321 active+clean; 278 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 120 op/s
Jan 20 15:41:03 compute-1 nova_compute[225855]: 2026-01-20 15:41:03.188 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:41:03 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:41:03 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:41:03 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:41:03.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:41:03 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:41:03 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:41:03 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:41:03.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:41:03 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:41:04 compute-1 nova_compute[225855]: 2026-01-20 15:41:04.468 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:41:04 compute-1 ceph-mon[81775]: pgmap v3566: 321 pgs: 321 active+clean; 278 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 318 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Jan 20 15:41:05 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:41:05 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:41:05 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:41:05.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:41:05 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:41:05 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:41:05 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:41:05.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:41:06 compute-1 ceph-mon[81775]: pgmap v3567: 321 pgs: 321 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Jan 20 15:41:07 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:41:07 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:41:07 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:41:07.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:41:07 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:41:07 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:41:07 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:41:07.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:41:08 compute-1 podman[327392]: 2026-01-20 15:41:08.021737297 +0000 UTC m=+0.061803728 container health_status 533cd8e0c3b8b7f910f15cef49e7f879ab46c12ce646a35628c15f13725eb500 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 20 15:41:08 compute-1 nova_compute[225855]: 2026-01-20 15:41:08.249 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:41:08 compute-1 ceph-mon[81775]: pgmap v3568: 321 pgs: 321 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Jan 20 15:41:08 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:41:09 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:41:09 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:41:09 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:41:09.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:41:09 compute-1 nova_compute[225855]: 2026-01-20 15:41:09.470 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:41:09 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:41:09 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:41:09 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:41:09.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:41:10 compute-1 ceph-mon[81775]: pgmap v3569: 321 pgs: 321 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Jan 20 15:41:11 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:41:11 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:41:11 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:41:11.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:41:11 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:41:11 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:41:11 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:41:11.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:41:12 compute-1 sudo[327416]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:41:12 compute-1 sudo[327416]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:41:12 compute-1 sudo[327416]: pam_unix(sudo:session): session closed for user root
Jan 20 15:41:12 compute-1 sudo[327441]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:41:12 compute-1 sudo[327441]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:41:12 compute-1 sudo[327441]: pam_unix(sudo:session): session closed for user root
Jan 20 15:41:12 compute-1 ceph-mon[81775]: pgmap v3570: 321 pgs: 321 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 228 KiB/s rd, 1.3 MiB/s wr, 54 op/s
Jan 20 15:41:13 compute-1 nova_compute[225855]: 2026-01-20 15:41:13.253 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:41:13 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:41:13 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:41:13 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:41:13.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:41:13 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:41:13 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:41:13 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:41:13.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:41:13 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 20 15:41:13 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3873927125' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:41:13 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 20 15:41:13 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3873927125' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:41:13 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:41:14 compute-1 nova_compute[225855]: 2026-01-20 15:41:14.471 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:41:14 compute-1 ceph-mon[81775]: pgmap v3571: 321 pgs: 321 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 13 KiB/s rd, 59 KiB/s wr, 3 op/s
Jan 20 15:41:14 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/3873927125' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:41:14 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/3873927125' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:41:15 compute-1 nova_compute[225855]: 2026-01-20 15:41:15.277 225859 DEBUG nova.compute.manager [req-8caac3aa-6d96-4dc6-bcce-12672339a92d req-f75162f6-4773-4072-8fd5-455289e8f644 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: eae4f8ad-34d0-4893-b039-a371c87ba22e] Received event network-changed-2f6f66d9-264a-4c11-ba21-8cef740517bf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:41:15 compute-1 nova_compute[225855]: 2026-01-20 15:41:15.278 225859 DEBUG nova.compute.manager [req-8caac3aa-6d96-4dc6-bcce-12672339a92d req-f75162f6-4773-4072-8fd5-455289e8f644 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: eae4f8ad-34d0-4893-b039-a371c87ba22e] Refreshing instance network info cache due to event network-changed-2f6f66d9-264a-4c11-ba21-8cef740517bf. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 15:41:15 compute-1 nova_compute[225855]: 2026-01-20 15:41:15.278 225859 DEBUG oslo_concurrency.lockutils [req-8caac3aa-6d96-4dc6-bcce-12672339a92d req-f75162f6-4773-4072-8fd5-455289e8f644 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-eae4f8ad-34d0-4893-b039-a371c87ba22e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:41:15 compute-1 nova_compute[225855]: 2026-01-20 15:41:15.278 225859 DEBUG oslo_concurrency.lockutils [req-8caac3aa-6d96-4dc6-bcce-12672339a92d req-f75162f6-4773-4072-8fd5-455289e8f644 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-eae4f8ad-34d0-4893-b039-a371c87ba22e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:41:15 compute-1 nova_compute[225855]: 2026-01-20 15:41:15.279 225859 DEBUG nova.network.neutron [req-8caac3aa-6d96-4dc6-bcce-12672339a92d req-f75162f6-4773-4072-8fd5-455289e8f644 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: eae4f8ad-34d0-4893-b039-a371c87ba22e] Refreshing network info cache for port 2f6f66d9-264a-4c11-ba21-8cef740517bf _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 15:41:15 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:41:15 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:41:15 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:41:15.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:41:15 compute-1 nova_compute[225855]: 2026-01-20 15:41:15.341 225859 DEBUG oslo_concurrency.lockutils [None req-2d8dd99a-9c66-4120-8ec7-3958f0ca8179 a8b010c120d8488bb889b23fb6abfc7f 124217db76ec4d598d94591670b51957 - - default default] Acquiring lock "eae4f8ad-34d0-4893-b039-a371c87ba22e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:41:15 compute-1 nova_compute[225855]: 2026-01-20 15:41:15.342 225859 DEBUG oslo_concurrency.lockutils [None req-2d8dd99a-9c66-4120-8ec7-3958f0ca8179 a8b010c120d8488bb889b23fb6abfc7f 124217db76ec4d598d94591670b51957 - - default default] Lock "eae4f8ad-34d0-4893-b039-a371c87ba22e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:41:15 compute-1 nova_compute[225855]: 2026-01-20 15:41:15.342 225859 DEBUG oslo_concurrency.lockutils [None req-2d8dd99a-9c66-4120-8ec7-3958f0ca8179 a8b010c120d8488bb889b23fb6abfc7f 124217db76ec4d598d94591670b51957 - - default default] Acquiring lock "eae4f8ad-34d0-4893-b039-a371c87ba22e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:41:15 compute-1 nova_compute[225855]: 2026-01-20 15:41:15.342 225859 DEBUG oslo_concurrency.lockutils [None req-2d8dd99a-9c66-4120-8ec7-3958f0ca8179 a8b010c120d8488bb889b23fb6abfc7f 124217db76ec4d598d94591670b51957 - - default default] Lock "eae4f8ad-34d0-4893-b039-a371c87ba22e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:41:15 compute-1 nova_compute[225855]: 2026-01-20 15:41:15.343 225859 DEBUG oslo_concurrency.lockutils [None req-2d8dd99a-9c66-4120-8ec7-3958f0ca8179 a8b010c120d8488bb889b23fb6abfc7f 124217db76ec4d598d94591670b51957 - - default default] Lock "eae4f8ad-34d0-4893-b039-a371c87ba22e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:41:15 compute-1 nova_compute[225855]: 2026-01-20 15:41:15.344 225859 INFO nova.compute.manager [None req-2d8dd99a-9c66-4120-8ec7-3958f0ca8179 a8b010c120d8488bb889b23fb6abfc7f 124217db76ec4d598d94591670b51957 - - default default] [instance: eae4f8ad-34d0-4893-b039-a371c87ba22e] Terminating instance
Jan 20 15:41:15 compute-1 nova_compute[225855]: 2026-01-20 15:41:15.345 225859 DEBUG nova.compute.manager [None req-2d8dd99a-9c66-4120-8ec7-3958f0ca8179 a8b010c120d8488bb889b23fb6abfc7f 124217db76ec4d598d94591670b51957 - - default default] [instance: eae4f8ad-34d0-4893-b039-a371c87ba22e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 20 15:41:15 compute-1 kernel: tap2f6f66d9-26 (unregistering): left promiscuous mode
Jan 20 15:41:15 compute-1 NetworkManager[49104]: <info>  [1768923675.3903] device (tap2f6f66d9-26): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 20 15:41:15 compute-1 ovn_controller[130490]: 2026-01-20T15:41:15Z|00987|binding|INFO|Releasing lport 2f6f66d9-264a-4c11-ba21-8cef740517bf from this chassis (sb_readonly=0)
Jan 20 15:41:15 compute-1 ovn_controller[130490]: 2026-01-20T15:41:15Z|00988|binding|INFO|Setting lport 2f6f66d9-264a-4c11-ba21-8cef740517bf down in Southbound
Jan 20 15:41:15 compute-1 ovn_controller[130490]: 2026-01-20T15:41:15Z|00989|binding|INFO|Removing iface tap2f6f66d9-26 ovn-installed in OVS
Jan 20 15:41:15 compute-1 nova_compute[225855]: 2026-01-20 15:41:15.399 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:41:15 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:41:15.406 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e0:ea:47 10.100.0.6'], port_security=['fa:16:3e:e0:ea:47 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'eae4f8ad-34d0-4893-b039-a371c87ba22e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7887623c-0aac-4bfc-b122-61e1bb0418eb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '124217db76ec4d598d94591670b51957', 'neutron:revision_number': '4', 'neutron:security_group_ids': '16a3d061-4fac-44b6-8559-a0d83ddd3ce6 9098e587-0497-4bd0-abe2-7bb17bf96b42', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1ca3022e-756e-490f-aed0-0aadecee6965, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], logical_port=2f6f66d9-264a-4c11-ba21-8cef740517bf) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:41:15 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:41:15.408 140354 INFO neutron.agent.ovn.metadata.agent [-] Port 2f6f66d9-264a-4c11-ba21-8cef740517bf in datapath 7887623c-0aac-4bfc-b122-61e1bb0418eb unbound from our chassis
Jan 20 15:41:15 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:41:15.409 140354 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7887623c-0aac-4bfc-b122-61e1bb0418eb, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 20 15:41:15 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:41:15.410 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[15c3a541-9ab5-4154-bba5-225d74ebc3a9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:41:15 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:41:15.410 140354 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-7887623c-0aac-4bfc-b122-61e1bb0418eb namespace which is not needed anymore
Jan 20 15:41:15 compute-1 nova_compute[225855]: 2026-01-20 15:41:15.416 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:41:15 compute-1 systemd[1]: machine-qemu\x2d113\x2dinstance\x2d000000d9.scope: Deactivated successfully.
Jan 20 15:41:15 compute-1 systemd[1]: machine-qemu\x2d113\x2dinstance\x2d000000d9.scope: Consumed 14.941s CPU time.
Jan 20 15:41:15 compute-1 systemd-machined[194361]: Machine qemu-113-instance-000000d9 terminated.
Jan 20 15:41:15 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:41:15 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:41:15 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:41:15.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:41:15 compute-1 neutron-haproxy-ovnmeta-7887623c-0aac-4bfc-b122-61e1bb0418eb[327193]: [NOTICE]   (327197) : haproxy version is 2.8.14-c23fe91
Jan 20 15:41:15 compute-1 neutron-haproxy-ovnmeta-7887623c-0aac-4bfc-b122-61e1bb0418eb[327193]: [NOTICE]   (327197) : path to executable is /usr/sbin/haproxy
Jan 20 15:41:15 compute-1 neutron-haproxy-ovnmeta-7887623c-0aac-4bfc-b122-61e1bb0418eb[327193]: [WARNING]  (327197) : Exiting Master process...
Jan 20 15:41:15 compute-1 neutron-haproxy-ovnmeta-7887623c-0aac-4bfc-b122-61e1bb0418eb[327193]: [ALERT]    (327197) : Current worker (327199) exited with code 143 (Terminated)
Jan 20 15:41:15 compute-1 neutron-haproxy-ovnmeta-7887623c-0aac-4bfc-b122-61e1bb0418eb[327193]: [WARNING]  (327197) : All workers exited. Exiting... (0)
Jan 20 15:41:15 compute-1 systemd[1]: libpod-e50f16a5fb9f5df17d9e12509c39a365ee2814f233d00bebc285b4f3f143706d.scope: Deactivated successfully.
Jan 20 15:41:15 compute-1 podman[327487]: 2026-01-20 15:41:15.540979271 +0000 UTC m=+0.048560523 container died e50f16a5fb9f5df17d9e12509c39a365ee2814f233d00bebc285b4f3f143706d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7887623c-0aac-4bfc-b122-61e1bb0418eb, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Jan 20 15:41:15 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e50f16a5fb9f5df17d9e12509c39a365ee2814f233d00bebc285b4f3f143706d-userdata-shm.mount: Deactivated successfully.
Jan 20 15:41:15 compute-1 systemd[1]: var-lib-containers-storage-overlay-a41ca13b23ec89cab9c006d19667afe76dbef3f5eb560ea287f9b1085cdff420-merged.mount: Deactivated successfully.
Jan 20 15:41:15 compute-1 podman[327487]: 2026-01-20 15:41:15.594288988 +0000 UTC m=+0.101870220 container cleanup e50f16a5fb9f5df17d9e12509c39a365ee2814f233d00bebc285b4f3f143706d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7887623c-0aac-4bfc-b122-61e1bb0418eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 20 15:41:15 compute-1 nova_compute[225855]: 2026-01-20 15:41:15.592 225859 INFO nova.virt.libvirt.driver [-] [instance: eae4f8ad-34d0-4893-b039-a371c87ba22e] Instance destroyed successfully.
Jan 20 15:41:15 compute-1 nova_compute[225855]: 2026-01-20 15:41:15.593 225859 DEBUG nova.objects.instance [None req-2d8dd99a-9c66-4120-8ec7-3958f0ca8179 a8b010c120d8488bb889b23fb6abfc7f 124217db76ec4d598d94591670b51957 - - default default] Lazy-loading 'resources' on Instance uuid eae4f8ad-34d0-4893-b039-a371c87ba22e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:41:15 compute-1 systemd[1]: libpod-conmon-e50f16a5fb9f5df17d9e12509c39a365ee2814f233d00bebc285b4f3f143706d.scope: Deactivated successfully.
Jan 20 15:41:15 compute-1 nova_compute[225855]: 2026-01-20 15:41:15.612 225859 DEBUG nova.virt.libvirt.vif [None req-2d8dd99a-9c66-4120-8ec7-3958f0ca8179 a8b010c120d8488bb889b23fb6abfc7f 124217db76ec4d598d94591670b51957 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T15:40:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1131874044-access_point-243412062',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1131874044-access_point-243412062',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1131874044-ac',id=217,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBI8BHGvvVMk8YauPGCgxR3A4v1f0rWghzxaRGrntFPvXTTSIDvApqOQBjhgH6T1AnKeJJSHdUCD1AT2JA7XK7b8l6gUg2nLAeLoR1LsiUcTGAlR1hX0RRwuXqUe5lWZMg==',key_name='tempest-TestSecurityGroupsBasicOps-32974910',keypairs=<?>,launch_index=0,launched_at=2026-01-20T15:40:46Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='124217db76ec4d598d94591670b51957',ramdisk_id='',reservation_id='r-egebxdzr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-1131874044',owner_user_name='tempest-TestSecurityGroupsBasicOps-1131874044-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T15:40:46Z,user_data=None,user_id='a8b010c120d8488bb889b23fb6abfc7f',uuid=eae4f8ad-34d0-4893-b039-a371c87ba22e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2f6f66d9-264a-4c11-ba21-8cef740517bf", "address": "fa:16:3e:e0:ea:47", "network": {"id": "7887623c-0aac-4bfc-b122-61e1bb0418eb", "bridge": "br-int", "label": "tempest-network-smoke--1885521634", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "124217db76ec4d598d94591670b51957", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2f6f66d9-26", "ovs_interfaceid": "2f6f66d9-264a-4c11-ba21-8cef740517bf", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 20 15:41:15 compute-1 nova_compute[225855]: 2026-01-20 15:41:15.613 225859 DEBUG nova.network.os_vif_util [None req-2d8dd99a-9c66-4120-8ec7-3958f0ca8179 a8b010c120d8488bb889b23fb6abfc7f 124217db76ec4d598d94591670b51957 - - default default] Converting VIF {"id": "2f6f66d9-264a-4c11-ba21-8cef740517bf", "address": "fa:16:3e:e0:ea:47", "network": {"id": "7887623c-0aac-4bfc-b122-61e1bb0418eb", "bridge": "br-int", "label": "tempest-network-smoke--1885521634", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "124217db76ec4d598d94591670b51957", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2f6f66d9-26", "ovs_interfaceid": "2f6f66d9-264a-4c11-ba21-8cef740517bf", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 15:41:15 compute-1 nova_compute[225855]: 2026-01-20 15:41:15.613 225859 DEBUG nova.network.os_vif_util [None req-2d8dd99a-9c66-4120-8ec7-3958f0ca8179 a8b010c120d8488bb889b23fb6abfc7f 124217db76ec4d598d94591670b51957 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:e0:ea:47,bridge_name='br-int',has_traffic_filtering=True,id=2f6f66d9-264a-4c11-ba21-8cef740517bf,network=Network(7887623c-0aac-4bfc-b122-61e1bb0418eb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2f6f66d9-26') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 15:41:15 compute-1 nova_compute[225855]: 2026-01-20 15:41:15.614 225859 DEBUG os_vif [None req-2d8dd99a-9c66-4120-8ec7-3958f0ca8179 a8b010c120d8488bb889b23fb6abfc7f 124217db76ec4d598d94591670b51957 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:e0:ea:47,bridge_name='br-int',has_traffic_filtering=True,id=2f6f66d9-264a-4c11-ba21-8cef740517bf,network=Network(7887623c-0aac-4bfc-b122-61e1bb0418eb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2f6f66d9-26') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 20 15:41:15 compute-1 nova_compute[225855]: 2026-01-20 15:41:15.615 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:41:15 compute-1 nova_compute[225855]: 2026-01-20 15:41:15.616 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2f6f66d9-26, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:41:15 compute-1 nova_compute[225855]: 2026-01-20 15:41:15.617 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:41:15 compute-1 nova_compute[225855]: 2026-01-20 15:41:15.620 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:41:15 compute-1 nova_compute[225855]: 2026-01-20 15:41:15.623 225859 INFO os_vif [None req-2d8dd99a-9c66-4120-8ec7-3958f0ca8179 a8b010c120d8488bb889b23fb6abfc7f 124217db76ec4d598d94591670b51957 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:e0:ea:47,bridge_name='br-int',has_traffic_filtering=True,id=2f6f66d9-264a-4c11-ba21-8cef740517bf,network=Network(7887623c-0aac-4bfc-b122-61e1bb0418eb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2f6f66d9-26')
Jan 20 15:41:15 compute-1 podman[327526]: 2026-01-20 15:41:15.666409086 +0000 UTC m=+0.045435225 container remove e50f16a5fb9f5df17d9e12509c39a365ee2814f233d00bebc285b4f3f143706d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7887623c-0aac-4bfc-b122-61e1bb0418eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.vendor=CentOS)
Jan 20 15:41:15 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:41:15.676 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[22d578ef-9b0b-4725-bfe1-58fd0bbaa213]: (4, ('Tue Jan 20 03:41:15 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-7887623c-0aac-4bfc-b122-61e1bb0418eb (e50f16a5fb9f5df17d9e12509c39a365ee2814f233d00bebc285b4f3f143706d)\ne50f16a5fb9f5df17d9e12509c39a365ee2814f233d00bebc285b4f3f143706d\nTue Jan 20 03:41:15 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-7887623c-0aac-4bfc-b122-61e1bb0418eb (e50f16a5fb9f5df17d9e12509c39a365ee2814f233d00bebc285b4f3f143706d)\ne50f16a5fb9f5df17d9e12509c39a365ee2814f233d00bebc285b4f3f143706d\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:41:15 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:41:15.678 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[e5db5f8c-ab71-4043-858a-5431aad116b3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:41:15 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:41:15.679 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7887623c-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:41:15 compute-1 kernel: tap7887623c-00: left promiscuous mode
Jan 20 15:41:15 compute-1 nova_compute[225855]: 2026-01-20 15:41:15.681 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:41:15 compute-1 nova_compute[225855]: 2026-01-20 15:41:15.683 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:41:15 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:41:15.686 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[bf9018f9-ae31-452f-9953-6ef34e5c7893]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:41:15 compute-1 nova_compute[225855]: 2026-01-20 15:41:15.696 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:41:15 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:41:15.702 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[b66335df-0bc2-4956-a1bf-3f6d929f82fe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:41:15 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:41:15.703 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[f86b4785-917a-4de0-a803-d1093c8315c8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:41:15 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:41:15.719 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[93df383c-28a0-4359-a55b-ed0b67f6f73b]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 868315, 'reachable_time': 38139, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 327557, 'error': None, 'target': 'ovnmeta-7887623c-0aac-4bfc-b122-61e1bb0418eb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:41:15 compute-1 systemd[1]: run-netns-ovnmeta\x2d7887623c\x2d0aac\x2d4bfc\x2db122\x2d61e1bb0418eb.mount: Deactivated successfully.
Jan 20 15:41:15 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:41:15.724 140466 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-7887623c-0aac-4bfc-b122-61e1bb0418eb deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 20 15:41:15 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:41:15.724 140466 DEBUG oslo.privsep.daemon [-] privsep: reply[564e33be-ad80-4d63-8a9a-88684a79cb0c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:41:16 compute-1 nova_compute[225855]: 2026-01-20 15:41:16.018 225859 INFO nova.virt.libvirt.driver [None req-2d8dd99a-9c66-4120-8ec7-3958f0ca8179 a8b010c120d8488bb889b23fb6abfc7f 124217db76ec4d598d94591670b51957 - - default default] [instance: eae4f8ad-34d0-4893-b039-a371c87ba22e] Deleting instance files /var/lib/nova/instances/eae4f8ad-34d0-4893-b039-a371c87ba22e_del
Jan 20 15:41:16 compute-1 nova_compute[225855]: 2026-01-20 15:41:16.019 225859 INFO nova.virt.libvirt.driver [None req-2d8dd99a-9c66-4120-8ec7-3958f0ca8179 a8b010c120d8488bb889b23fb6abfc7f 124217db76ec4d598d94591670b51957 - - default default] [instance: eae4f8ad-34d0-4893-b039-a371c87ba22e] Deletion of /var/lib/nova/instances/eae4f8ad-34d0-4893-b039-a371c87ba22e_del complete
Jan 20 15:41:16 compute-1 nova_compute[225855]: 2026-01-20 15:41:16.103 225859 INFO nova.compute.manager [None req-2d8dd99a-9c66-4120-8ec7-3958f0ca8179 a8b010c120d8488bb889b23fb6abfc7f 124217db76ec4d598d94591670b51957 - - default default] [instance: eae4f8ad-34d0-4893-b039-a371c87ba22e] Took 0.76 seconds to destroy the instance on the hypervisor.
Jan 20 15:41:16 compute-1 nova_compute[225855]: 2026-01-20 15:41:16.103 225859 DEBUG oslo.service.loopingcall [None req-2d8dd99a-9c66-4120-8ec7-3958f0ca8179 a8b010c120d8488bb889b23fb6abfc7f 124217db76ec4d598d94591670b51957 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 20 15:41:16 compute-1 nova_compute[225855]: 2026-01-20 15:41:16.104 225859 DEBUG nova.compute.manager [-] [instance: eae4f8ad-34d0-4893-b039-a371c87ba22e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 20 15:41:16 compute-1 nova_compute[225855]: 2026-01-20 15:41:16.104 225859 DEBUG nova.network.neutron [-] [instance: eae4f8ad-34d0-4893-b039-a371c87ba22e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 20 15:41:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:41:16.462 140354 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:41:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:41:16.463 140354 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:41:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:41:16.463 140354 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:41:16 compute-1 ceph-mon[81775]: pgmap v3572: 321 pgs: 321 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 13 KiB/s rd, 60 KiB/s wr, 3 op/s
Jan 20 15:41:17 compute-1 nova_compute[225855]: 2026-01-20 15:41:17.279 225859 DEBUG nova.network.neutron [-] [instance: eae4f8ad-34d0-4893-b039-a371c87ba22e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:41:17 compute-1 nova_compute[225855]: 2026-01-20 15:41:17.304 225859 INFO nova.compute.manager [-] [instance: eae4f8ad-34d0-4893-b039-a371c87ba22e] Took 1.20 seconds to deallocate network for instance.
Jan 20 15:41:17 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:41:17 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:41:17 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:41:17.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:41:17 compute-1 nova_compute[225855]: 2026-01-20 15:41:17.358 225859 DEBUG oslo_concurrency.lockutils [None req-2d8dd99a-9c66-4120-8ec7-3958f0ca8179 a8b010c120d8488bb889b23fb6abfc7f 124217db76ec4d598d94591670b51957 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:41:17 compute-1 nova_compute[225855]: 2026-01-20 15:41:17.358 225859 DEBUG oslo_concurrency.lockutils [None req-2d8dd99a-9c66-4120-8ec7-3958f0ca8179 a8b010c120d8488bb889b23fb6abfc7f 124217db76ec4d598d94591670b51957 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:41:17 compute-1 nova_compute[225855]: 2026-01-20 15:41:17.409 225859 DEBUG nova.compute.manager [req-4bf7d939-cd60-4a30-8dc0-9e368e129012 req-1c929245-1c97-4143-bcbc-807eb7ff56a9 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: eae4f8ad-34d0-4893-b039-a371c87ba22e] Received event network-vif-unplugged-2f6f66d9-264a-4c11-ba21-8cef740517bf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:41:17 compute-1 nova_compute[225855]: 2026-01-20 15:41:17.409 225859 DEBUG oslo_concurrency.lockutils [req-4bf7d939-cd60-4a30-8dc0-9e368e129012 req-1c929245-1c97-4143-bcbc-807eb7ff56a9 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "eae4f8ad-34d0-4893-b039-a371c87ba22e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:41:17 compute-1 nova_compute[225855]: 2026-01-20 15:41:17.409 225859 DEBUG oslo_concurrency.lockutils [req-4bf7d939-cd60-4a30-8dc0-9e368e129012 req-1c929245-1c97-4143-bcbc-807eb7ff56a9 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "eae4f8ad-34d0-4893-b039-a371c87ba22e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:41:17 compute-1 nova_compute[225855]: 2026-01-20 15:41:17.410 225859 DEBUG oslo_concurrency.lockutils [req-4bf7d939-cd60-4a30-8dc0-9e368e129012 req-1c929245-1c97-4143-bcbc-807eb7ff56a9 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "eae4f8ad-34d0-4893-b039-a371c87ba22e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:41:17 compute-1 nova_compute[225855]: 2026-01-20 15:41:17.410 225859 DEBUG nova.compute.manager [req-4bf7d939-cd60-4a30-8dc0-9e368e129012 req-1c929245-1c97-4143-bcbc-807eb7ff56a9 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: eae4f8ad-34d0-4893-b039-a371c87ba22e] No waiting events found dispatching network-vif-unplugged-2f6f66d9-264a-4c11-ba21-8cef740517bf pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:41:17 compute-1 nova_compute[225855]: 2026-01-20 15:41:17.410 225859 WARNING nova.compute.manager [req-4bf7d939-cd60-4a30-8dc0-9e368e129012 req-1c929245-1c97-4143-bcbc-807eb7ff56a9 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: eae4f8ad-34d0-4893-b039-a371c87ba22e] Received unexpected event network-vif-unplugged-2f6f66d9-264a-4c11-ba21-8cef740517bf for instance with vm_state deleted and task_state None.
Jan 20 15:41:17 compute-1 nova_compute[225855]: 2026-01-20 15:41:17.411 225859 DEBUG nova.compute.manager [req-4bf7d939-cd60-4a30-8dc0-9e368e129012 req-1c929245-1c97-4143-bcbc-807eb7ff56a9 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: eae4f8ad-34d0-4893-b039-a371c87ba22e] Received event network-vif-plugged-2f6f66d9-264a-4c11-ba21-8cef740517bf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:41:17 compute-1 nova_compute[225855]: 2026-01-20 15:41:17.411 225859 DEBUG oslo_concurrency.lockutils [req-4bf7d939-cd60-4a30-8dc0-9e368e129012 req-1c929245-1c97-4143-bcbc-807eb7ff56a9 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "eae4f8ad-34d0-4893-b039-a371c87ba22e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:41:17 compute-1 nova_compute[225855]: 2026-01-20 15:41:17.411 225859 DEBUG oslo_concurrency.lockutils [req-4bf7d939-cd60-4a30-8dc0-9e368e129012 req-1c929245-1c97-4143-bcbc-807eb7ff56a9 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "eae4f8ad-34d0-4893-b039-a371c87ba22e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:41:17 compute-1 nova_compute[225855]: 2026-01-20 15:41:17.411 225859 DEBUG oslo_concurrency.lockutils [req-4bf7d939-cd60-4a30-8dc0-9e368e129012 req-1c929245-1c97-4143-bcbc-807eb7ff56a9 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "eae4f8ad-34d0-4893-b039-a371c87ba22e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:41:17 compute-1 nova_compute[225855]: 2026-01-20 15:41:17.412 225859 DEBUG nova.compute.manager [req-4bf7d939-cd60-4a30-8dc0-9e368e129012 req-1c929245-1c97-4143-bcbc-807eb7ff56a9 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: eae4f8ad-34d0-4893-b039-a371c87ba22e] No waiting events found dispatching network-vif-plugged-2f6f66d9-264a-4c11-ba21-8cef740517bf pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:41:17 compute-1 nova_compute[225855]: 2026-01-20 15:41:17.412 225859 WARNING nova.compute.manager [req-4bf7d939-cd60-4a30-8dc0-9e368e129012 req-1c929245-1c97-4143-bcbc-807eb7ff56a9 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: eae4f8ad-34d0-4893-b039-a371c87ba22e] Received unexpected event network-vif-plugged-2f6f66d9-264a-4c11-ba21-8cef740517bf for instance with vm_state deleted and task_state None.
Jan 20 15:41:17 compute-1 nova_compute[225855]: 2026-01-20 15:41:17.443 225859 DEBUG nova.compute.manager [req-1b3fc75f-dcfb-4dc0-b35c-83dc81ea76fe req-bbbd344c-bc9b-4b70-9729-21d0a7b62aac 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: eae4f8ad-34d0-4893-b039-a371c87ba22e] Received event network-vif-deleted-2f6f66d9-264a-4c11-ba21-8cef740517bf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:41:17 compute-1 nova_compute[225855]: 2026-01-20 15:41:17.465 225859 DEBUG oslo_concurrency.processutils [None req-2d8dd99a-9c66-4120-8ec7-3958f0ca8179 a8b010c120d8488bb889b23fb6abfc7f 124217db76ec4d598d94591670b51957 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:41:17 compute-1 nova_compute[225855]: 2026-01-20 15:41:17.498 225859 DEBUG nova.network.neutron [req-8caac3aa-6d96-4dc6-bcce-12672339a92d req-f75162f6-4773-4072-8fd5-455289e8f644 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: eae4f8ad-34d0-4893-b039-a371c87ba22e] Updated VIF entry in instance network info cache for port 2f6f66d9-264a-4c11-ba21-8cef740517bf. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 15:41:17 compute-1 nova_compute[225855]: 2026-01-20 15:41:17.500 225859 DEBUG nova.network.neutron [req-8caac3aa-6d96-4dc6-bcce-12672339a92d req-f75162f6-4773-4072-8fd5-455289e8f644 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: eae4f8ad-34d0-4893-b039-a371c87ba22e] Updating instance_info_cache with network_info: [{"id": "2f6f66d9-264a-4c11-ba21-8cef740517bf", "address": "fa:16:3e:e0:ea:47", "network": {"id": "7887623c-0aac-4bfc-b122-61e1bb0418eb", "bridge": "br-int", "label": "tempest-network-smoke--1885521634", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "124217db76ec4d598d94591670b51957", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2f6f66d9-26", "ovs_interfaceid": "2f6f66d9-264a-4c11-ba21-8cef740517bf", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:41:17 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:41:17 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:41:17 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:41:17.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:41:17 compute-1 nova_compute[225855]: 2026-01-20 15:41:17.525 225859 DEBUG oslo_concurrency.lockutils [req-8caac3aa-6d96-4dc6-bcce-12672339a92d req-f75162f6-4773-4072-8fd5-455289e8f644 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-eae4f8ad-34d0-4893-b039-a371c87ba22e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:41:17 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:41:17 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1600213521' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:41:17 compute-1 nova_compute[225855]: 2026-01-20 15:41:17.938 225859 DEBUG oslo_concurrency.processutils [None req-2d8dd99a-9c66-4120-8ec7-3958f0ca8179 a8b010c120d8488bb889b23fb6abfc7f 124217db76ec4d598d94591670b51957 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:41:17 compute-1 nova_compute[225855]: 2026-01-20 15:41:17.945 225859 DEBUG nova.compute.provider_tree [None req-2d8dd99a-9c66-4120-8ec7-3958f0ca8179 a8b010c120d8488bb889b23fb6abfc7f 124217db76ec4d598d94591670b51957 - - default default] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:41:17 compute-1 nova_compute[225855]: 2026-01-20 15:41:17.968 225859 DEBUG nova.scheduler.client.report [None req-2d8dd99a-9c66-4120-8ec7-3958f0ca8179 a8b010c120d8488bb889b23fb6abfc7f 124217db76ec4d598d94591670b51957 - - default default] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:41:17 compute-1 nova_compute[225855]: 2026-01-20 15:41:17.993 225859 DEBUG oslo_concurrency.lockutils [None req-2d8dd99a-9c66-4120-8ec7-3958f0ca8179 a8b010c120d8488bb889b23fb6abfc7f 124217db76ec4d598d94591670b51957 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.634s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:41:18 compute-1 nova_compute[225855]: 2026-01-20 15:41:18.026 225859 INFO nova.scheduler.client.report [None req-2d8dd99a-9c66-4120-8ec7-3958f0ca8179 a8b010c120d8488bb889b23fb6abfc7f 124217db76ec4d598d94591670b51957 - - default default] Deleted allocations for instance eae4f8ad-34d0-4893-b039-a371c87ba22e
Jan 20 15:41:18 compute-1 nova_compute[225855]: 2026-01-20 15:41:18.106 225859 DEBUG oslo_concurrency.lockutils [None req-2d8dd99a-9c66-4120-8ec7-3958f0ca8179 a8b010c120d8488bb889b23fb6abfc7f 124217db76ec4d598d94591670b51957 - - default default] Lock "eae4f8ad-34d0-4893-b039-a371c87ba22e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.764s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:41:18 compute-1 ceph-mon[81775]: pgmap v3573: 321 pgs: 321 active+clean; 263 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 18 KiB/s rd, 12 KiB/s wr, 10 op/s
Jan 20 15:41:18 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/1600213521' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:41:18 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:41:19 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:41:19 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:41:19 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:41:19.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:41:19 compute-1 nova_compute[225855]: 2026-01-20 15:41:19.473 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:41:19 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:41:19 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:41:19 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:41:19.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:41:20 compute-1 nova_compute[225855]: 2026-01-20 15:41:20.619 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:41:20 compute-1 ceph-mon[81775]: pgmap v3574: 321 pgs: 321 active+clean; 242 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 25 KiB/s rd, 15 KiB/s wr, 19 op/s
Jan 20 15:41:21 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:41:21 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:41:21 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:41:21.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:41:21 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:41:21 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:41:21 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:41:21.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:41:21 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/552745251' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:41:22 compute-1 nova_compute[225855]: 2026-01-20 15:41:22.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:41:22 compute-1 nova_compute[225855]: 2026-01-20 15:41:22.340 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 15:41:22 compute-1 nova_compute[225855]: 2026-01-20 15:41:22.340 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 20 15:41:22 compute-1 nova_compute[225855]: 2026-01-20 15:41:22.363 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 20 15:41:22 compute-1 nova_compute[225855]: 2026-01-20 15:41:22.365 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:41:22 compute-1 nova_compute[225855]: 2026-01-20 15:41:22.365 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 15:41:22 compute-1 ceph-mon[81775]: pgmap v3575: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 30 KiB/s rd, 6.9 KiB/s wr, 29 op/s
Jan 20 15:41:22 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/1289347087' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:41:23 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:41:23 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:41:23 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:41:23.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:41:23 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:41:23 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:41:23 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:41:23.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:41:23 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:41:24 compute-1 nova_compute[225855]: 2026-01-20 15:41:24.475 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:41:24 compute-1 nova_compute[225855]: 2026-01-20 15:41:24.535 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:41:24 compute-1 nova_compute[225855]: 2026-01-20 15:41:24.650 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:41:24 compute-1 ceph-mon[81775]: pgmap v3576: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 24 KiB/s rd, 6.8 KiB/s wr, 28 op/s
Jan 20 15:41:25 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:41:25 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:41:25 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:41:25.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:41:25 compute-1 nova_compute[225855]: 2026-01-20 15:41:25.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:41:25 compute-1 nova_compute[225855]: 2026-01-20 15:41:25.375 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:41:25 compute-1 nova_compute[225855]: 2026-01-20 15:41:25.375 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:41:25 compute-1 nova_compute[225855]: 2026-01-20 15:41:25.376 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:41:25 compute-1 nova_compute[225855]: 2026-01-20 15:41:25.376 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 15:41:25 compute-1 nova_compute[225855]: 2026-01-20 15:41:25.376 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:41:25 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:41:25 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:41:25 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:41:25.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:41:25 compute-1 nova_compute[225855]: 2026-01-20 15:41:25.622 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:41:25 compute-1 ceph-mon[81775]: pgmap v3577: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 24 KiB/s rd, 6.8 KiB/s wr, 28 op/s
Jan 20 15:41:25 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:41:25 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3545405961' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:41:25 compute-1 nova_compute[225855]: 2026-01-20 15:41:25.798 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.422s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:41:25 compute-1 nova_compute[225855]: 2026-01-20 15:41:25.960 225859 WARNING nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 15:41:25 compute-1 nova_compute[225855]: 2026-01-20 15:41:25.961 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4227MB free_disk=20.94268798828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 15:41:25 compute-1 nova_compute[225855]: 2026-01-20 15:41:25.961 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:41:25 compute-1 nova_compute[225855]: 2026-01-20 15:41:25.962 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:41:26 compute-1 nova_compute[225855]: 2026-01-20 15:41:26.062 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 15:41:26 compute-1 nova_compute[225855]: 2026-01-20 15:41:26.062 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 15:41:26 compute-1 nova_compute[225855]: 2026-01-20 15:41:26.118 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:41:26 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:41:26 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3284217090' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:41:26 compute-1 nova_compute[225855]: 2026-01-20 15:41:26.554 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:41:26 compute-1 nova_compute[225855]: 2026-01-20 15:41:26.560 225859 DEBUG nova.compute.provider_tree [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:41:26 compute-1 nova_compute[225855]: 2026-01-20 15:41:26.577 225859 DEBUG nova.scheduler.client.report [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:41:26 compute-1 nova_compute[225855]: 2026-01-20 15:41:26.605 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 15:41:26 compute-1 nova_compute[225855]: 2026-01-20 15:41:26.605 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.643s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:41:26 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/1427972615' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:41:26 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/3545405961' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:41:26 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/3284217090' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:41:26 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/1981697531' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:41:27 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:41:27 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:41:27 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:41:27.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:41:27 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:41:27 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:41:27 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:41:27.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:41:27 compute-1 nova_compute[225855]: 2026-01-20 15:41:27.606 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:41:27 compute-1 ceph-mon[81775]: pgmap v3578: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 24 KiB/s rd, 6.5 KiB/s wr, 28 op/s
Jan 20 15:41:28 compute-1 nova_compute[225855]: 2026-01-20 15:41:28.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:41:28 compute-1 nova_compute[225855]: 2026-01-20 15:41:28.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:41:28 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:41:29 compute-1 nova_compute[225855]: 2026-01-20 15:41:29.340 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:41:29 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:41:29 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:41:29 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:41:29.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:41:29 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/3928735627' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:41:29 compute-1 nova_compute[225855]: 2026-01-20 15:41:29.476 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:41:29 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:41:29 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:41:29 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:41:29.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:41:30 compute-1 nova_compute[225855]: 2026-01-20 15:41:30.340 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:41:30 compute-1 ceph-mon[81775]: pgmap v3579: 321 pgs: 321 active+clean; 178 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 11 KiB/s rd, 6.2 KiB/s wr, 19 op/s
Jan 20 15:41:30 compute-1 nova_compute[225855]: 2026-01-20 15:41:30.589 225859 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1768923675.5871499, eae4f8ad-34d0-4893-b039-a371c87ba22e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:41:30 compute-1 nova_compute[225855]: 2026-01-20 15:41:30.589 225859 INFO nova.compute.manager [-] [instance: eae4f8ad-34d0-4893-b039-a371c87ba22e] VM Stopped (Lifecycle Event)
Jan 20 15:41:30 compute-1 nova_compute[225855]: 2026-01-20 15:41:30.615 225859 DEBUG nova.compute.manager [None req-82719537-6f0b-47e1-849b-fd851d759790 - - - - - -] [instance: eae4f8ad-34d0-4893-b039-a371c87ba22e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:41:30 compute-1 nova_compute[225855]: 2026-01-20 15:41:30.624 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:41:31 compute-1 podman[327637]: 2026-01-20 15:41:31.125010726 +0000 UTC m=+0.156690710 container health_status 72be95c12e041eb2cd1a16ad70dfa1391d9f03a5f3180a614ead9459dcfcd46f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.vendor=CentOS)
Jan 20 15:41:31 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:41:31 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:41:31 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:41:31.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:41:31 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:41:31 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:41:31 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:41:31.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:41:32 compute-1 ceph-mon[81775]: pgmap v3580: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 24 KiB/s rd, 4.1 KiB/s wr, 37 op/s
Jan 20 15:41:32 compute-1 sudo[327664]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:41:32 compute-1 sudo[327664]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:41:32 compute-1 sudo[327664]: pam_unix(sudo:session): session closed for user root
Jan 20 15:41:32 compute-1 sudo[327689]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:41:32 compute-1 sudo[327689]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:41:32 compute-1 sudo[327689]: pam_unix(sudo:session): session closed for user root
Jan 20 15:41:33 compute-1 nova_compute[225855]: 2026-01-20 15:41:33.335 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:41:33 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:41:33 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:41:33 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:41:33.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:41:33 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:41:33 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:41:33 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:41:33.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:41:33 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:41:34 compute-1 nova_compute[225855]: 2026-01-20 15:41:34.513 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:41:34 compute-1 ceph-mon[81775]: pgmap v3581: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 20 15:41:35 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:41:35 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:41:35 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:41:35.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:41:35 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:41:35 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:41:35 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:41:35.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:41:35 compute-1 nova_compute[225855]: 2026-01-20 15:41:35.673 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:41:36 compute-1 ceph-mon[81775]: pgmap v3582: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 20 15:41:37 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:41:37 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:41:37 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:41:37.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:41:37 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:41:37 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:41:37 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:41:37.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:41:39 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:41:39 compute-1 podman[327717]: 2026-01-20 15:41:39.155074147 +0000 UTC m=+0.204287544 container health_status 533cd8e0c3b8b7f910f15cef49e7f879ab46c12ce646a35628c15f13725eb500 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Jan 20 15:41:39 compute-1 ceph-mon[81775]: pgmap v3583: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 20 15:41:39 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:41:39 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:41:39 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:41:39.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:41:39 compute-1 nova_compute[225855]: 2026-01-20 15:41:39.516 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:41:39 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:41:39 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:41:39 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:41:39.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:41:40 compute-1 ceph-mon[81775]: pgmap v3584: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 20 15:41:40 compute-1 nova_compute[225855]: 2026-01-20 15:41:40.677 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:41:41 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:41:41 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:41:41 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:41:41.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:41:41 compute-1 ceph-mon[81775]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #178. Immutable memtables: 0.
Jan 20 15:41:41 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:41:41.521088) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 20 15:41:41 compute-1 ceph-mon[81775]: rocksdb: [db/flush_job.cc:856] [default] [JOB 113] Flushing memtable with next log file: 178
Jan 20 15:41:41 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768923701521251, "job": 113, "event": "flush_started", "num_memtables": 1, "num_entries": 1347, "num_deletes": 251, "total_data_size": 3039612, "memory_usage": 3068128, "flush_reason": "Manual Compaction"}
Jan 20 15:41:41 compute-1 ceph-mon[81775]: rocksdb: [db/flush_job.cc:885] [default] [JOB 113] Level-0 flush table #179: started
Jan 20 15:41:41 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768923701539809, "cf_name": "default", "job": 113, "event": "table_file_creation", "file_number": 179, "file_size": 2006340, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 85801, "largest_seqno": 87143, "table_properties": {"data_size": 2000478, "index_size": 3192, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1605, "raw_key_size": 12603, "raw_average_key_size": 20, "raw_value_size": 1988788, "raw_average_value_size": 3176, "num_data_blocks": 139, "num_entries": 626, "num_filter_entries": 626, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768923588, "oldest_key_time": 1768923588, "file_creation_time": 1768923701, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1539d774-8a6f-4e48-b253-137c44586344", "db_session_id": "LFF7G2OZDOU7TKQ8MKAH", "orig_file_number": 179, "seqno_to_time_mapping": "N/A"}}
Jan 20 15:41:41 compute-1 ceph-mon[81775]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 113] Flush lasted 18839 microseconds, and 10893 cpu microseconds.
Jan 20 15:41:41 compute-1 ceph-mon[81775]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 15:41:41 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:41:41.539927) [db/flush_job.cc:967] [default] [JOB 113] Level-0 flush table #179: 2006340 bytes OK
Jan 20 15:41:41 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:41:41.539962) [db/memtable_list.cc:519] [default] Level-0 commit table #179 started
Jan 20 15:41:41 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:41:41.541336) [db/memtable_list.cc:722] [default] Level-0 commit table #179: memtable #1 done
Jan 20 15:41:41 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:41:41.541353) EVENT_LOG_v1 {"time_micros": 1768923701541347, "job": 113, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 20 15:41:41 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:41:41.541397) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 20 15:41:41 compute-1 ceph-mon[81775]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 113] Try to delete WAL files size 3033280, prev total WAL file size 3033280, number of live WAL files 2.
Jan 20 15:41:41 compute-1 ceph-mon[81775]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000175.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 15:41:41 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:41:41.542535) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730037353330' seq:72057594037927935, type:22 .. '7061786F730037373832' seq:0, type:0; will stop at (end)
Jan 20 15:41:41 compute-1 ceph-mon[81775]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 114] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 20 15:41:41 compute-1 ceph-mon[81775]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 113 Base level 0, inputs: [179(1959KB)], [177(11MB)]
Jan 20 15:41:41 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768923701542578, "job": 114, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [179], "files_L6": [177], "score": -1, "input_data_size": 14584881, "oldest_snapshot_seqno": -1}
Jan 20 15:41:41 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:41:41 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:41:41 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:41:41.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:41:41 compute-1 ceph-mon[81775]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 114] Generated table #180: 10590 keys, 12624895 bytes, temperature: kUnknown
Jan 20 15:41:41 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768923701617448, "cf_name": "default", "job": 114, "event": "table_file_creation", "file_number": 180, "file_size": 12624895, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12558089, "index_size": 39237, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 26501, "raw_key_size": 280337, "raw_average_key_size": 26, "raw_value_size": 12374248, "raw_average_value_size": 1168, "num_data_blocks": 1485, "num_entries": 10590, "num_filter_entries": 10590, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768917474, "oldest_key_time": 0, "file_creation_time": 1768923701, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1539d774-8a6f-4e48-b253-137c44586344", "db_session_id": "LFF7G2OZDOU7TKQ8MKAH", "orig_file_number": 180, "seqno_to_time_mapping": "N/A"}}
Jan 20 15:41:41 compute-1 ceph-mon[81775]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 15:41:41 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:41:41.617859) [db/compaction/compaction_job.cc:1663] [default] [JOB 114] Compacted 1@0 + 1@6 files to L6 => 12624895 bytes
Jan 20 15:41:41 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:41:41.619939) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 194.5 rd, 168.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.9, 12.0 +0.0 blob) out(12.0 +0.0 blob), read-write-amplify(13.6) write-amplify(6.3) OK, records in: 11109, records dropped: 519 output_compression: NoCompression
Jan 20 15:41:41 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:41:41.619974) EVENT_LOG_v1 {"time_micros": 1768923701619958, "job": 114, "event": "compaction_finished", "compaction_time_micros": 74994, "compaction_time_cpu_micros": 42643, "output_level": 6, "num_output_files": 1, "total_output_size": 12624895, "num_input_records": 11109, "num_output_records": 10590, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 20 15:41:41 compute-1 ceph-mon[81775]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000179.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 15:41:41 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768923701620789, "job": 114, "event": "table_file_deletion", "file_number": 179}
Jan 20 15:41:41 compute-1 ceph-mon[81775]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000177.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 15:41:41 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768923701625140, "job": 114, "event": "table_file_deletion", "file_number": 177}
Jan 20 15:41:41 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:41:41.542420) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:41:41 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:41:41.625239) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:41:41 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:41:41.625245) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:41:41 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:41:41.625247) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:41:41 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:41:41.625248) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:41:41 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:41:41.625250) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:41:42 compute-1 ceph-mon[81775]: pgmap v3585: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 20 15:41:43 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:41:43 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:41:43 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:41:43.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:41:43 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:41:43 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:41:43 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:41:43.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:41:44 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:41:44 compute-1 nova_compute[225855]: 2026-01-20 15:41:44.518 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:41:44 compute-1 ceph-mon[81775]: pgmap v3586: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:41:45 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:41:45 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:41:45 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:41:45.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:41:45 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:41:45 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:41:45 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:41:45.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:41:45 compute-1 nova_compute[225855]: 2026-01-20 15:41:45.680 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:41:46 compute-1 ceph-mon[81775]: pgmap v3587: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:41:47 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:41:47 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:41:47 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:41:47.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:41:47 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:41:47 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:41:47 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:41:47.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:41:48 compute-1 sudo[327742]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:41:48 compute-1 sudo[327742]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:41:48 compute-1 sudo[327742]: pam_unix(sudo:session): session closed for user root
Jan 20 15:41:48 compute-1 sudo[327767]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:41:48 compute-1 sudo[327767]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:41:48 compute-1 sudo[327767]: pam_unix(sudo:session): session closed for user root
Jan 20 15:41:48 compute-1 sudo[327792]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:41:48 compute-1 sudo[327792]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:41:48 compute-1 sudo[327792]: pam_unix(sudo:session): session closed for user root
Jan 20 15:41:48 compute-1 sudo[327817]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 20 15:41:48 compute-1 sudo[327817]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:41:48 compute-1 ceph-mon[81775]: pgmap v3588: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:41:48 compute-1 sudo[327817]: pam_unix(sudo:session): session closed for user root
Jan 20 15:41:49 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:41:49 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:41:49 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:41:49 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:41:49.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:41:49 compute-1 nova_compute[225855]: 2026-01-20 15:41:49.521 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:41:49 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:41:49 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:41:49 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:41:49.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:41:49 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 20 15:41:49 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:41:49 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 15:41:49 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:41:49 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 15:41:49 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 15:41:49 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:41:50 compute-1 ceph-mon[81775]: pgmap v3589: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:41:50 compute-1 nova_compute[225855]: 2026-01-20 15:41:50.683 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:41:51 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:41:51 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:41:51 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:41:51.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:41:51 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:41:51 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:41:51 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:41:51.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:41:52 compute-1 ceph-mon[81775]: pgmap v3590: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:41:52 compute-1 sudo[327875]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:41:52 compute-1 sudo[327875]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:41:52 compute-1 sudo[327875]: pam_unix(sudo:session): session closed for user root
Jan 20 15:41:52 compute-1 sudo[327900]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:41:52 compute-1 sudo[327900]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:41:52 compute-1 sudo[327900]: pam_unix(sudo:session): session closed for user root
Jan 20 15:41:53 compute-1 nova_compute[225855]: 2026-01-20 15:41:53.334 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:41:53 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:41:53 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:41:53 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:41:53.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:41:53 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:41:53 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:41:53 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:41:53.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:41:54 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:41:54 compute-1 nova_compute[225855]: 2026-01-20 15:41:54.523 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:41:54 compute-1 ceph-mon[81775]: pgmap v3591: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:41:54 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/2109221212' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:41:55 compute-1 sudo[327928]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:41:55 compute-1 sudo[327928]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:41:55 compute-1 sudo[327928]: pam_unix(sudo:session): session closed for user root
Jan 20 15:41:55 compute-1 sudo[327953]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 15:41:55 compute-1 sudo[327953]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:41:55 compute-1 sudo[327953]: pam_unix(sudo:session): session closed for user root
Jan 20 15:41:55 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:41:55 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:41:55 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:41:55.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:41:55 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:41:55 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:41:55 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:41:55.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:41:55 compute-1 nova_compute[225855]: 2026-01-20 15:41:55.685 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:41:55 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:41:55 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:41:55 compute-1 ceph-mon[81775]: pgmap v3592: 321 pgs: 321 active+clean; 137 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 597 B/s rd, 481 KiB/s wr, 2 op/s
Jan 20 15:41:57 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:41:57 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:41:57 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:41:57.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:41:57 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:41:57.542 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=86, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '12:bb:42', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '06:92:24:f7:15:56'}, ipsec=False) old=SB_Global(nb_cfg=85) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:41:57 compute-1 nova_compute[225855]: 2026-01-20 15:41:57.543 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:41:57 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:41:57.543 140354 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 20 15:41:57 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:41:57 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:41:57 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:41:57.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:41:58 compute-1 ceph-mon[81775]: pgmap v3593: 321 pgs: 321 active+clean; 158 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 7.6 KiB/s rd, 1.3 MiB/s wr, 15 op/s
Jan 20 15:41:59 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:41:59 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:41:59 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:41:59 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:41:59.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:41:59 compute-1 nova_compute[225855]: 2026-01-20 15:41:59.525 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:41:59 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:41:59.545 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5ffd4ac3-9266-4927-98ad-20a17782c725, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '86'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:41:59 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:41:59 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:41:59 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:41:59.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:42:00 compute-1 ceph-mon[81775]: pgmap v3594: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 20 15:42:00 compute-1 nova_compute[225855]: 2026-01-20 15:42:00.687 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:42:01 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:42:01 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:42:01 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:42:01.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:42:01 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:42:01 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:42:01 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:42:01.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:42:02 compute-1 podman[327982]: 2026-01-20 15:42:02.051690118 +0000 UTC m=+0.103135155 container health_status 72be95c12e041eb2cd1a16ad70dfa1391d9f03a5f3180a614ead9459dcfcd46f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible)
Jan 20 15:42:02 compute-1 ceph-mon[81775]: pgmap v3595: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 20 15:42:03 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:42:03 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:42:03 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:42:03.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:42:03 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:42:03 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:42:03 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:42:03.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:42:03 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/3473706580' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:42:04 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:42:04 compute-1 nova_compute[225855]: 2026-01-20 15:42:04.558 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:42:04 compute-1 ceph-mon[81775]: pgmap v3596: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 20 15:42:04 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/2723949721' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:42:05 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:42:05 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:42:05 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:42:05.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:42:05 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:42:05 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:42:05 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:42:05.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:42:05 compute-1 nova_compute[225855]: 2026-01-20 15:42:05.726 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:42:06 compute-1 ceph-mon[81775]: pgmap v3597: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 29 op/s
Jan 20 15:42:07 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:42:07 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:42:07 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:42:07.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:42:07 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:42:07 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:42:07 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:42:07.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:42:08 compute-1 ceph-mon[81775]: pgmap v3598: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 1.3 MiB/s wr, 27 op/s
Jan 20 15:42:09 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:42:09 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:42:09 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:42:09 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:42:09.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:42:09 compute-1 nova_compute[225855]: 2026-01-20 15:42:09.559 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:42:09 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:42:09 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:42:09 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:42:09.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:42:10 compute-1 podman[328012]: 2026-01-20 15:42:10.027135195 +0000 UTC m=+0.062284782 container health_status 533cd8e0c3b8b7f910f15cef49e7f879ab46c12ce646a35628c15f13725eb500 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible)
Jan 20 15:42:10 compute-1 ceph-mon[81775]: pgmap v3599: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 508 KiB/s rd, 477 KiB/s wr, 33 op/s
Jan 20 15:42:10 compute-1 nova_compute[225855]: 2026-01-20 15:42:10.728 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:42:11 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:42:11 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:42:11 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:42:11.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:42:11 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:42:11 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:42:11 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:42:11.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:42:12 compute-1 ceph-mon[81775]: pgmap v3600: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 20 15:42:12 compute-1 sudo[328034]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:42:12 compute-1 sudo[328034]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:42:12 compute-1 sudo[328034]: pam_unix(sudo:session): session closed for user root
Jan 20 15:42:12 compute-1 ovn_controller[130490]: 2026-01-20T15:42:12Z|00990|memory_trim|INFO|Detected inactivity (last active 30007 ms ago): trimming memory
Jan 20 15:42:12 compute-1 sudo[328059]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:42:12 compute-1 sudo[328059]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:42:12 compute-1 sudo[328059]: pam_unix(sudo:session): session closed for user root
Jan 20 15:42:13 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:42:13 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:42:13 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:42:13.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:42:13 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:42:13 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:42:13 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:42:13.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:42:13 compute-1 ceph-mon[81775]: pgmap v3601: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 20 15:42:13 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/2189209461' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:42:13 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/2189209461' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:42:14 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:42:14 compute-1 nova_compute[225855]: 2026-01-20 15:42:14.562 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:42:15 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:42:15 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:42:15 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:42:15.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:42:15 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:42:15 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:42:15 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:42:15.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:42:15 compute-1 nova_compute[225855]: 2026-01-20 15:42:15.730 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:42:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:42:16.463 140354 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:42:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:42:16.463 140354 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:42:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:42:16.463 140354 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:42:16 compute-1 ceph-mon[81775]: pgmap v3602: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 20 15:42:17 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:42:17 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:42:17 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:42:17.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:42:17 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:42:17 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:42:17 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:42:17.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:42:18 compute-1 ceph-mon[81775]: pgmap v3603: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 71 op/s
Jan 20 15:42:19 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:42:19 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:42:19 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:42:19 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:42:19.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:42:19 compute-1 nova_compute[225855]: 2026-01-20 15:42:19.564 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:42:19 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:42:19 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:42:19 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:42:19.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:42:20 compute-1 ceph-mon[81775]: pgmap v3604: 321 pgs: 321 active+clean; 170 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 206 KiB/s wr, 75 op/s
Jan 20 15:42:20 compute-1 nova_compute[225855]: 2026-01-20 15:42:20.732 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:42:21 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:42:21 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:42:21 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:42:21.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:42:21 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:42:21 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:42:21 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:42:21.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:42:22 compute-1 ceph-mon[81775]: pgmap v3605: 321 pgs: 321 active+clean; 198 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.8 MiB/s rd, 2.1 MiB/s wr, 111 op/s
Jan 20 15:42:23 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:42:23 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:42:23 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:42:23.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:42:23 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/1871014464' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:42:23 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:42:23 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:42:23 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:42:23.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:42:24 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:42:24 compute-1 nova_compute[225855]: 2026-01-20 15:42:24.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:42:24 compute-1 nova_compute[225855]: 2026-01-20 15:42:24.340 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 15:42:24 compute-1 nova_compute[225855]: 2026-01-20 15:42:24.340 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 20 15:42:24 compute-1 nova_compute[225855]: 2026-01-20 15:42:24.352 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 20 15:42:24 compute-1 nova_compute[225855]: 2026-01-20 15:42:24.353 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:42:24 compute-1 nova_compute[225855]: 2026-01-20 15:42:24.353 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 15:42:24 compute-1 nova_compute[225855]: 2026-01-20 15:42:24.564 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:42:24 compute-1 ceph-mon[81775]: pgmap v3606: 321 pgs: 321 active+clean; 198 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 347 KiB/s rd, 2.1 MiB/s wr, 60 op/s
Jan 20 15:42:24 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/1603594978' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:42:25 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:42:25 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:42:25 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:42:25.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:42:25 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:42:25 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:42:25 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:42:25.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:42:25 compute-1 nova_compute[225855]: 2026-01-20 15:42:25.734 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:42:26 compute-1 ceph-mon[81775]: pgmap v3607: 321 pgs: 321 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 390 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Jan 20 15:42:27 compute-1 nova_compute[225855]: 2026-01-20 15:42:27.340 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:42:27 compute-1 nova_compute[225855]: 2026-01-20 15:42:27.375 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:42:27 compute-1 nova_compute[225855]: 2026-01-20 15:42:27.376 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:42:27 compute-1 nova_compute[225855]: 2026-01-20 15:42:27.376 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:42:27 compute-1 nova_compute[225855]: 2026-01-20 15:42:27.376 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 15:42:27 compute-1 nova_compute[225855]: 2026-01-20 15:42:27.376 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:42:27 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:42:27 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:42:27 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:42:27.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:42:27 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:42:27 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:42:27 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:42:27.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:42:27 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/2416527839' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:42:27 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:42:27 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1965861902' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:42:27 compute-1 nova_compute[225855]: 2026-01-20 15:42:27.799 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.422s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:42:27 compute-1 nova_compute[225855]: 2026-01-20 15:42:27.964 225859 WARNING nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 15:42:27 compute-1 nova_compute[225855]: 2026-01-20 15:42:27.966 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4269MB free_disk=20.942886352539062GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 15:42:27 compute-1 nova_compute[225855]: 2026-01-20 15:42:27.966 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:42:27 compute-1 nova_compute[225855]: 2026-01-20 15:42:27.967 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:42:28 compute-1 nova_compute[225855]: 2026-01-20 15:42:28.031 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 15:42:28 compute-1 nova_compute[225855]: 2026-01-20 15:42:28.031 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 15:42:28 compute-1 nova_compute[225855]: 2026-01-20 15:42:28.122 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:42:28 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:42:28 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3529525207' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:42:28 compute-1 nova_compute[225855]: 2026-01-20 15:42:28.552 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:42:28 compute-1 nova_compute[225855]: 2026-01-20 15:42:28.559 225859 DEBUG nova.compute.provider_tree [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:42:28 compute-1 nova_compute[225855]: 2026-01-20 15:42:28.587 225859 DEBUG nova.scheduler.client.report [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:42:28 compute-1 nova_compute[225855]: 2026-01-20 15:42:28.588 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 15:42:28 compute-1 nova_compute[225855]: 2026-01-20 15:42:28.588 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.622s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:42:28 compute-1 ceph-mon[81775]: pgmap v3608: 321 pgs: 321 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 390 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Jan 20 15:42:28 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/1965861902' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:42:28 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/607743711' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:42:28 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/3529525207' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:42:29 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:42:29 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:42:29 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:42:29 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:42:29.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:42:29 compute-1 nova_compute[225855]: 2026-01-20 15:42:29.566 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:42:29 compute-1 nova_compute[225855]: 2026-01-20 15:42:29.587 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:42:29 compute-1 nova_compute[225855]: 2026-01-20 15:42:29.588 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:42:29 compute-1 nova_compute[225855]: 2026-01-20 15:42:29.588 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:42:29 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:42:29 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:42:29 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:42:29.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:42:30 compute-1 ceph-mon[81775]: pgmap v3609: 321 pgs: 321 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 390 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Jan 20 15:42:30 compute-1 nova_compute[225855]: 2026-01-20 15:42:30.737 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:42:31 compute-1 nova_compute[225855]: 2026-01-20 15:42:31.340 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:42:31 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:42:31 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:42:31 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:42:31.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:42:31 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:42:31 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:42:31 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:42:31.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:42:32 compute-1 nova_compute[225855]: 2026-01-20 15:42:32.340 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:42:32 compute-1 ceph-mon[81775]: pgmap v3610: 321 pgs: 321 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 356 KiB/s rd, 1.9 MiB/s wr, 60 op/s
Jan 20 15:42:33 compute-1 podman[328138]: 2026-01-20 15:42:33.054383798 +0000 UTC m=+0.098752682 container health_status 72be95c12e041eb2cd1a16ad70dfa1391d9f03a5f3180a614ead9459dcfcd46f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_id=ovn_controller, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 20 15:42:33 compute-1 sudo[328155]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:42:33 compute-1 sudo[328155]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:42:33 compute-1 sudo[328155]: pam_unix(sudo:session): session closed for user root
Jan 20 15:42:33 compute-1 sudo[328189]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:42:33 compute-1 sudo[328189]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:42:33 compute-1 sudo[328189]: pam_unix(sudo:session): session closed for user root
Jan 20 15:42:33 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:42:33 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:42:33 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:42:33.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:42:33 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:42:33 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:42:33 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:42:33.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:42:34 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:42:34 compute-1 nova_compute[225855]: 2026-01-20 15:42:34.335 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:42:34 compute-1 nova_compute[225855]: 2026-01-20 15:42:34.567 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:42:35 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:42:35 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:42:35 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:42:35.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:42:35 compute-1 ceph-mon[81775]: pgmap v3611: 321 pgs: 321 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 43 KiB/s rd, 16 KiB/s wr, 5 op/s
Jan 20 15:42:35 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:42:35 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:42:35 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:42:35.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:42:35 compute-1 nova_compute[225855]: 2026-01-20 15:42:35.739 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:42:35 compute-1 ceph-mon[81775]: pgmap v3612: 321 pgs: 321 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 44 KiB/s rd, 19 KiB/s wr, 5 op/s
Jan 20 15:42:37 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:42:37 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:42:37 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:42:37.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:42:37 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:42:37 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:42:37 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:42:37.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:42:37 compute-1 nova_compute[225855]: 2026-01-20 15:42:37.761 225859 DEBUG oslo_concurrency.lockutils [None req-7aa3daef-dee4-4751-afd5-b672c4cbb8af 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Acquiring lock "6c6a79bf-04d3-4839-84cc-ab8b383d602c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:42:37 compute-1 nova_compute[225855]: 2026-01-20 15:42:37.762 225859 DEBUG oslo_concurrency.lockutils [None req-7aa3daef-dee4-4751-afd5-b672c4cbb8af 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Lock "6c6a79bf-04d3-4839-84cc-ab8b383d602c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:42:37 compute-1 nova_compute[225855]: 2026-01-20 15:42:37.784 225859 DEBUG nova.compute.manager [None req-7aa3daef-dee4-4751-afd5-b672c4cbb8af 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] [instance: 6c6a79bf-04d3-4839-84cc-ab8b383d602c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 20 15:42:37 compute-1 nova_compute[225855]: 2026-01-20 15:42:37.902 225859 DEBUG oslo_concurrency.lockutils [None req-7aa3daef-dee4-4751-afd5-b672c4cbb8af 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:42:37 compute-1 nova_compute[225855]: 2026-01-20 15:42:37.903 225859 DEBUG oslo_concurrency.lockutils [None req-7aa3daef-dee4-4751-afd5-b672c4cbb8af 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:42:37 compute-1 nova_compute[225855]: 2026-01-20 15:42:37.914 225859 DEBUG nova.virt.hardware [None req-7aa3daef-dee4-4751-afd5-b672c4cbb8af 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 20 15:42:37 compute-1 nova_compute[225855]: 2026-01-20 15:42:37.914 225859 INFO nova.compute.claims [None req-7aa3daef-dee4-4751-afd5-b672c4cbb8af 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] [instance: 6c6a79bf-04d3-4839-84cc-ab8b383d602c] Claim successful on node compute-1.ctlplane.example.com
Jan 20 15:42:38 compute-1 nova_compute[225855]: 2026-01-20 15:42:38.068 225859 DEBUG oslo_concurrency.processutils [None req-7aa3daef-dee4-4751-afd5-b672c4cbb8af 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:42:38 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:42:38 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2809331689' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:42:38 compute-1 nova_compute[225855]: 2026-01-20 15:42:38.523 225859 DEBUG oslo_concurrency.processutils [None req-7aa3daef-dee4-4751-afd5-b672c4cbb8af 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:42:38 compute-1 nova_compute[225855]: 2026-01-20 15:42:38.531 225859 DEBUG nova.compute.provider_tree [None req-7aa3daef-dee4-4751-afd5-b672c4cbb8af 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:42:38 compute-1 ceph-mon[81775]: pgmap v3613: 321 pgs: 321 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 682 B/s rd, 15 KiB/s wr, 0 op/s
Jan 20 15:42:38 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/2809331689' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:42:38 compute-1 nova_compute[225855]: 2026-01-20 15:42:38.554 225859 DEBUG nova.scheduler.client.report [None req-7aa3daef-dee4-4751-afd5-b672c4cbb8af 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:42:38 compute-1 nova_compute[225855]: 2026-01-20 15:42:38.587 225859 DEBUG oslo_concurrency.lockutils [None req-7aa3daef-dee4-4751-afd5-b672c4cbb8af 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.683s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:42:38 compute-1 nova_compute[225855]: 2026-01-20 15:42:38.587 225859 DEBUG nova.compute.manager [None req-7aa3daef-dee4-4751-afd5-b672c4cbb8af 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] [instance: 6c6a79bf-04d3-4839-84cc-ab8b383d602c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 20 15:42:38 compute-1 nova_compute[225855]: 2026-01-20 15:42:38.636 225859 DEBUG nova.compute.manager [None req-7aa3daef-dee4-4751-afd5-b672c4cbb8af 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] [instance: 6c6a79bf-04d3-4839-84cc-ab8b383d602c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 20 15:42:38 compute-1 nova_compute[225855]: 2026-01-20 15:42:38.637 225859 DEBUG nova.network.neutron [None req-7aa3daef-dee4-4751-afd5-b672c4cbb8af 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] [instance: 6c6a79bf-04d3-4839-84cc-ab8b383d602c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 20 15:42:38 compute-1 nova_compute[225855]: 2026-01-20 15:42:38.658 225859 INFO nova.virt.libvirt.driver [None req-7aa3daef-dee4-4751-afd5-b672c4cbb8af 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] [instance: 6c6a79bf-04d3-4839-84cc-ab8b383d602c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 20 15:42:38 compute-1 nova_compute[225855]: 2026-01-20 15:42:38.678 225859 DEBUG nova.compute.manager [None req-7aa3daef-dee4-4751-afd5-b672c4cbb8af 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] [instance: 6c6a79bf-04d3-4839-84cc-ab8b383d602c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 20 15:42:38 compute-1 nova_compute[225855]: 2026-01-20 15:42:38.770 225859 DEBUG nova.compute.manager [None req-7aa3daef-dee4-4751-afd5-b672c4cbb8af 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] [instance: 6c6a79bf-04d3-4839-84cc-ab8b383d602c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 20 15:42:38 compute-1 nova_compute[225855]: 2026-01-20 15:42:38.772 225859 DEBUG nova.virt.libvirt.driver [None req-7aa3daef-dee4-4751-afd5-b672c4cbb8af 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] [instance: 6c6a79bf-04d3-4839-84cc-ab8b383d602c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 20 15:42:38 compute-1 nova_compute[225855]: 2026-01-20 15:42:38.772 225859 INFO nova.virt.libvirt.driver [None req-7aa3daef-dee4-4751-afd5-b672c4cbb8af 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] [instance: 6c6a79bf-04d3-4839-84cc-ab8b383d602c] Creating image(s)
Jan 20 15:42:38 compute-1 nova_compute[225855]: 2026-01-20 15:42:38.796 225859 DEBUG nova.storage.rbd_utils [None req-7aa3daef-dee4-4751-afd5-b672c4cbb8af 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] rbd image 6c6a79bf-04d3-4839-84cc-ab8b383d602c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:42:38 compute-1 nova_compute[225855]: 2026-01-20 15:42:38.822 225859 DEBUG nova.storage.rbd_utils [None req-7aa3daef-dee4-4751-afd5-b672c4cbb8af 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] rbd image 6c6a79bf-04d3-4839-84cc-ab8b383d602c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:42:38 compute-1 nova_compute[225855]: 2026-01-20 15:42:38.845 225859 DEBUG nova.storage.rbd_utils [None req-7aa3daef-dee4-4751-afd5-b672c4cbb8af 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] rbd image 6c6a79bf-04d3-4839-84cc-ab8b383d602c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:42:38 compute-1 nova_compute[225855]: 2026-01-20 15:42:38.849 225859 DEBUG oslo_concurrency.processutils [None req-7aa3daef-dee4-4751-afd5-b672c4cbb8af 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:42:38 compute-1 nova_compute[225855]: 2026-01-20 15:42:38.915 225859 DEBUG oslo_concurrency.processutils [None req-7aa3daef-dee4-4751-afd5-b672c4cbb8af 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:42:38 compute-1 nova_compute[225855]: 2026-01-20 15:42:38.916 225859 DEBUG oslo_concurrency.lockutils [None req-7aa3daef-dee4-4751-afd5-b672c4cbb8af 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Acquiring lock "82d5c1918fd7c974214c7a48c1793a7a82560462" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:42:38 compute-1 nova_compute[225855]: 2026-01-20 15:42:38.917 225859 DEBUG oslo_concurrency.lockutils [None req-7aa3daef-dee4-4751-afd5-b672c4cbb8af 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:42:38 compute-1 nova_compute[225855]: 2026-01-20 15:42:38.917 225859 DEBUG oslo_concurrency.lockutils [None req-7aa3daef-dee4-4751-afd5-b672c4cbb8af 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:42:38 compute-1 nova_compute[225855]: 2026-01-20 15:42:38.943 225859 DEBUG nova.storage.rbd_utils [None req-7aa3daef-dee4-4751-afd5-b672c4cbb8af 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] rbd image 6c6a79bf-04d3-4839-84cc-ab8b383d602c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:42:38 compute-1 nova_compute[225855]: 2026-01-20 15:42:38.947 225859 DEBUG oslo_concurrency.processutils [None req-7aa3daef-dee4-4751-afd5-b672c4cbb8af 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 6c6a79bf-04d3-4839-84cc-ab8b383d602c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:42:39 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:42:39 compute-1 nova_compute[225855]: 2026-01-20 15:42:39.153 225859 DEBUG nova.policy [None req-7aa3daef-dee4-4751-afd5-b672c4cbb8af 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '5985ef736503499a9f1d734cabc33ce5', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '728662ec7f654a3fb2e53a90b8707d7e', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 20 15:42:39 compute-1 nova_compute[225855]: 2026-01-20 15:42:39.241 225859 DEBUG oslo_concurrency.processutils [None req-7aa3daef-dee4-4751-afd5-b672c4cbb8af 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 6c6a79bf-04d3-4839-84cc-ab8b383d602c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.293s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:42:39 compute-1 nova_compute[225855]: 2026-01-20 15:42:39.321 225859 DEBUG nova.storage.rbd_utils [None req-7aa3daef-dee4-4751-afd5-b672c4cbb8af 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] resizing rbd image 6c6a79bf-04d3-4839-84cc-ab8b383d602c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 20 15:42:39 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:42:39 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:42:39 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:42:39.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:42:39 compute-1 nova_compute[225855]: 2026-01-20 15:42:39.426 225859 DEBUG nova.objects.instance [None req-7aa3daef-dee4-4751-afd5-b672c4cbb8af 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Lazy-loading 'migration_context' on Instance uuid 6c6a79bf-04d3-4839-84cc-ab8b383d602c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:42:39 compute-1 nova_compute[225855]: 2026-01-20 15:42:39.440 225859 DEBUG nova.virt.libvirt.driver [None req-7aa3daef-dee4-4751-afd5-b672c4cbb8af 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] [instance: 6c6a79bf-04d3-4839-84cc-ab8b383d602c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 20 15:42:39 compute-1 nova_compute[225855]: 2026-01-20 15:42:39.441 225859 DEBUG nova.virt.libvirt.driver [None req-7aa3daef-dee4-4751-afd5-b672c4cbb8af 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] [instance: 6c6a79bf-04d3-4839-84cc-ab8b383d602c] Ensure instance console log exists: /var/lib/nova/instances/6c6a79bf-04d3-4839-84cc-ab8b383d602c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 20 15:42:39 compute-1 nova_compute[225855]: 2026-01-20 15:42:39.441 225859 DEBUG oslo_concurrency.lockutils [None req-7aa3daef-dee4-4751-afd5-b672c4cbb8af 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:42:39 compute-1 nova_compute[225855]: 2026-01-20 15:42:39.442 225859 DEBUG oslo_concurrency.lockutils [None req-7aa3daef-dee4-4751-afd5-b672c4cbb8af 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:42:39 compute-1 nova_compute[225855]: 2026-01-20 15:42:39.442 225859 DEBUG oslo_concurrency.lockutils [None req-7aa3daef-dee4-4751-afd5-b672c4cbb8af 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:42:39 compute-1 nova_compute[225855]: 2026-01-20 15:42:39.569 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:42:39 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:42:39 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:42:39 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:42:39.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:42:39 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:42:39.792 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=87, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '12:bb:42', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '06:92:24:f7:15:56'}, ipsec=False) old=SB_Global(nb_cfg=86) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:42:39 compute-1 nova_compute[225855]: 2026-01-20 15:42:39.792 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:42:39 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:42:39.793 140354 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 20 15:42:40 compute-1 nova_compute[225855]: 2026-01-20 15:42:40.199 225859 DEBUG nova.network.neutron [None req-7aa3daef-dee4-4751-afd5-b672c4cbb8af 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] [instance: 6c6a79bf-04d3-4839-84cc-ab8b383d602c] Successfully created port: 9af895a3-cca7-495f-ab5a-68e04355f005 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 20 15:42:40 compute-1 ceph-mon[81775]: pgmap v3614: 321 pgs: 321 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 682 B/s rd, 15 KiB/s wr, 0 op/s
Jan 20 15:42:40 compute-1 nova_compute[225855]: 2026-01-20 15:42:40.741 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:42:41 compute-1 podman[328406]: 2026-01-20 15:42:41.004658794 +0000 UTC m=+0.056672233 container health_status 533cd8e0c3b8b7f910f15cef49e7f879ab46c12ce646a35628c15f13725eb500 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 15:42:41 compute-1 nova_compute[225855]: 2026-01-20 15:42:41.233 225859 DEBUG nova.network.neutron [None req-7aa3daef-dee4-4751-afd5-b672c4cbb8af 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] [instance: 6c6a79bf-04d3-4839-84cc-ab8b383d602c] Successfully updated port: 9af895a3-cca7-495f-ab5a-68e04355f005 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 20 15:42:41 compute-1 nova_compute[225855]: 2026-01-20 15:42:41.254 225859 DEBUG oslo_concurrency.lockutils [None req-7aa3daef-dee4-4751-afd5-b672c4cbb8af 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Acquiring lock "refresh_cache-6c6a79bf-04d3-4839-84cc-ab8b383d602c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:42:41 compute-1 nova_compute[225855]: 2026-01-20 15:42:41.254 225859 DEBUG oslo_concurrency.lockutils [None req-7aa3daef-dee4-4751-afd5-b672c4cbb8af 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Acquired lock "refresh_cache-6c6a79bf-04d3-4839-84cc-ab8b383d602c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:42:41 compute-1 nova_compute[225855]: 2026-01-20 15:42:41.254 225859 DEBUG nova.network.neutron [None req-7aa3daef-dee4-4751-afd5-b672c4cbb8af 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] [instance: 6c6a79bf-04d3-4839-84cc-ab8b383d602c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 15:42:41 compute-1 nova_compute[225855]: 2026-01-20 15:42:41.374 225859 DEBUG nova.compute.manager [req-fe27a064-2531-409e-b4ea-10a7562a43fa req-9349ded2-faf5-4942-89ae-d68358c868cb 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 6c6a79bf-04d3-4839-84cc-ab8b383d602c] Received event network-changed-9af895a3-cca7-495f-ab5a-68e04355f005 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:42:41 compute-1 nova_compute[225855]: 2026-01-20 15:42:41.375 225859 DEBUG nova.compute.manager [req-fe27a064-2531-409e-b4ea-10a7562a43fa req-9349ded2-faf5-4942-89ae-d68358c868cb 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 6c6a79bf-04d3-4839-84cc-ab8b383d602c] Refreshing instance network info cache due to event network-changed-9af895a3-cca7-495f-ab5a-68e04355f005. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 15:42:41 compute-1 nova_compute[225855]: 2026-01-20 15:42:41.375 225859 DEBUG oslo_concurrency.lockutils [req-fe27a064-2531-409e-b4ea-10a7562a43fa req-9349ded2-faf5-4942-89ae-d68358c868cb 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-6c6a79bf-04d3-4839-84cc-ab8b383d602c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:42:41 compute-1 nova_compute[225855]: 2026-01-20 15:42:41.407 225859 DEBUG nova.network.neutron [None req-7aa3daef-dee4-4751-afd5-b672c4cbb8af 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] [instance: 6c6a79bf-04d3-4839-84cc-ab8b383d602c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 20 15:42:41 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:42:41 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:42:41 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:42:41.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:42:41 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:42:41 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:42:41 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:42:41.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:42:42 compute-1 nova_compute[225855]: 2026-01-20 15:42:42.321 225859 DEBUG nova.network.neutron [None req-7aa3daef-dee4-4751-afd5-b672c4cbb8af 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] [instance: 6c6a79bf-04d3-4839-84cc-ab8b383d602c] Updating instance_info_cache with network_info: [{"id": "9af895a3-cca7-495f-ab5a-68e04355f005", "address": "fa:16:3e:48:2c:f1", "network": {"id": "b5fb4ee9-fa45-4797-871a-53247ebaf43e", "bridge": "br-int", "label": "tempest-network-smoke--639703376", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "728662ec7f654a3fb2e53a90b8707d7e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9af895a3-cc", "ovs_interfaceid": "9af895a3-cca7-495f-ab5a-68e04355f005", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:42:42 compute-1 nova_compute[225855]: 2026-01-20 15:42:42.340 225859 DEBUG oslo_concurrency.lockutils [None req-7aa3daef-dee4-4751-afd5-b672c4cbb8af 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Releasing lock "refresh_cache-6c6a79bf-04d3-4839-84cc-ab8b383d602c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:42:42 compute-1 nova_compute[225855]: 2026-01-20 15:42:42.341 225859 DEBUG nova.compute.manager [None req-7aa3daef-dee4-4751-afd5-b672c4cbb8af 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] [instance: 6c6a79bf-04d3-4839-84cc-ab8b383d602c] Instance network_info: |[{"id": "9af895a3-cca7-495f-ab5a-68e04355f005", "address": "fa:16:3e:48:2c:f1", "network": {"id": "b5fb4ee9-fa45-4797-871a-53247ebaf43e", "bridge": "br-int", "label": "tempest-network-smoke--639703376", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "728662ec7f654a3fb2e53a90b8707d7e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9af895a3-cc", "ovs_interfaceid": "9af895a3-cca7-495f-ab5a-68e04355f005", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 20 15:42:42 compute-1 nova_compute[225855]: 2026-01-20 15:42:42.342 225859 DEBUG oslo_concurrency.lockutils [req-fe27a064-2531-409e-b4ea-10a7562a43fa req-9349ded2-faf5-4942-89ae-d68358c868cb 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-6c6a79bf-04d3-4839-84cc-ab8b383d602c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:42:42 compute-1 nova_compute[225855]: 2026-01-20 15:42:42.342 225859 DEBUG nova.network.neutron [req-fe27a064-2531-409e-b4ea-10a7562a43fa req-9349ded2-faf5-4942-89ae-d68358c868cb 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 6c6a79bf-04d3-4839-84cc-ab8b383d602c] Refreshing network info cache for port 9af895a3-cca7-495f-ab5a-68e04355f005 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 15:42:42 compute-1 nova_compute[225855]: 2026-01-20 15:42:42.346 225859 DEBUG nova.virt.libvirt.driver [None req-7aa3daef-dee4-4751-afd5-b672c4cbb8af 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] [instance: 6c6a79bf-04d3-4839-84cc-ab8b383d602c] Start _get_guest_xml network_info=[{"id": "9af895a3-cca7-495f-ab5a-68e04355f005", "address": "fa:16:3e:48:2c:f1", "network": {"id": "b5fb4ee9-fa45-4797-871a-53247ebaf43e", "bridge": "br-int", "label": "tempest-network-smoke--639703376", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "728662ec7f654a3fb2e53a90b8707d7e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9af895a3-cc", "ovs_interfaceid": "9af895a3-cca7-495f-ab5a-68e04355f005", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'device_type': 'disk', 'encryption_options': None, 'size': 0, 'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'guest_format': None, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'image_id': 'a32b3e07-16d8-46fd-9a7b-c242c432fcf9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 20 15:42:42 compute-1 nova_compute[225855]: 2026-01-20 15:42:42.350 225859 WARNING nova.virt.libvirt.driver [None req-7aa3daef-dee4-4751-afd5-b672c4cbb8af 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 15:42:42 compute-1 nova_compute[225855]: 2026-01-20 15:42:42.355 225859 DEBUG nova.virt.libvirt.host [None req-7aa3daef-dee4-4751-afd5-b672c4cbb8af 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 20 15:42:42 compute-1 nova_compute[225855]: 2026-01-20 15:42:42.355 225859 DEBUG nova.virt.libvirt.host [None req-7aa3daef-dee4-4751-afd5-b672c4cbb8af 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 20 15:42:42 compute-1 nova_compute[225855]: 2026-01-20 15:42:42.362 225859 DEBUG nova.virt.libvirt.host [None req-7aa3daef-dee4-4751-afd5-b672c4cbb8af 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 20 15:42:42 compute-1 nova_compute[225855]: 2026-01-20 15:42:42.363 225859 DEBUG nova.virt.libvirt.host [None req-7aa3daef-dee4-4751-afd5-b672c4cbb8af 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 20 15:42:42 compute-1 nova_compute[225855]: 2026-01-20 15:42:42.364 225859 DEBUG nova.virt.libvirt.driver [None req-7aa3daef-dee4-4751-afd5-b672c4cbb8af 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 20 15:42:42 compute-1 nova_compute[225855]: 2026-01-20 15:42:42.365 225859 DEBUG nova.virt.hardware [None req-7aa3daef-dee4-4751-afd5-b672c4cbb8af 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-20T14:21:55Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='522deaab-a741-4dbb-932d-d8b13a211c33',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 20 15:42:42 compute-1 nova_compute[225855]: 2026-01-20 15:42:42.365 225859 DEBUG nova.virt.hardware [None req-7aa3daef-dee4-4751-afd5-b672c4cbb8af 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 20 15:42:42 compute-1 nova_compute[225855]: 2026-01-20 15:42:42.366 225859 DEBUG nova.virt.hardware [None req-7aa3daef-dee4-4751-afd5-b672c4cbb8af 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 20 15:42:42 compute-1 nova_compute[225855]: 2026-01-20 15:42:42.366 225859 DEBUG nova.virt.hardware [None req-7aa3daef-dee4-4751-afd5-b672c4cbb8af 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 20 15:42:42 compute-1 nova_compute[225855]: 2026-01-20 15:42:42.366 225859 DEBUG nova.virt.hardware [None req-7aa3daef-dee4-4751-afd5-b672c4cbb8af 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 20 15:42:42 compute-1 nova_compute[225855]: 2026-01-20 15:42:42.367 225859 DEBUG nova.virt.hardware [None req-7aa3daef-dee4-4751-afd5-b672c4cbb8af 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 20 15:42:42 compute-1 nova_compute[225855]: 2026-01-20 15:42:42.367 225859 DEBUG nova.virt.hardware [None req-7aa3daef-dee4-4751-afd5-b672c4cbb8af 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 20 15:42:42 compute-1 nova_compute[225855]: 2026-01-20 15:42:42.367 225859 DEBUG nova.virt.hardware [None req-7aa3daef-dee4-4751-afd5-b672c4cbb8af 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 20 15:42:42 compute-1 nova_compute[225855]: 2026-01-20 15:42:42.368 225859 DEBUG nova.virt.hardware [None req-7aa3daef-dee4-4751-afd5-b672c4cbb8af 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 20 15:42:42 compute-1 nova_compute[225855]: 2026-01-20 15:42:42.368 225859 DEBUG nova.virt.hardware [None req-7aa3daef-dee4-4751-afd5-b672c4cbb8af 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 20 15:42:42 compute-1 nova_compute[225855]: 2026-01-20 15:42:42.369 225859 DEBUG nova.virt.hardware [None req-7aa3daef-dee4-4751-afd5-b672c4cbb8af 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 20 15:42:42 compute-1 nova_compute[225855]: 2026-01-20 15:42:42.372 225859 DEBUG oslo_concurrency.processutils [None req-7aa3daef-dee4-4751-afd5-b672c4cbb8af 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:42:42 compute-1 ceph-mon[81775]: pgmap v3615: 321 pgs: 321 active+clean; 229 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 11 KiB/s rd, 1004 KiB/s wr, 14 op/s
Jan 20 15:42:42 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 15:42:42 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2863498301' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:42:42 compute-1 nova_compute[225855]: 2026-01-20 15:42:42.824 225859 DEBUG oslo_concurrency.processutils [None req-7aa3daef-dee4-4751-afd5-b672c4cbb8af 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:42:42 compute-1 nova_compute[225855]: 2026-01-20 15:42:42.859 225859 DEBUG nova.storage.rbd_utils [None req-7aa3daef-dee4-4751-afd5-b672c4cbb8af 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] rbd image 6c6a79bf-04d3-4839-84cc-ab8b383d602c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:42:42 compute-1 nova_compute[225855]: 2026-01-20 15:42:42.864 225859 DEBUG oslo_concurrency.processutils [None req-7aa3daef-dee4-4751-afd5-b672c4cbb8af 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:42:43 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 15:42:43 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2419632345' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:42:43 compute-1 nova_compute[225855]: 2026-01-20 15:42:43.359 225859 DEBUG oslo_concurrency.processutils [None req-7aa3daef-dee4-4751-afd5-b672c4cbb8af 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.495s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:42:43 compute-1 nova_compute[225855]: 2026-01-20 15:42:43.362 225859 DEBUG nova.virt.libvirt.vif [None req-7aa3daef-dee4-4751-afd5-b672c4cbb8af 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T15:42:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-342561427-gen-0-248124310',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-342561427-gen-0-248124310',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-342561427-gen',id=219,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNPrC26E5zjpds8PmYXeLNQKBwLdgsc+VcubrdKnriEXDiMjUXGvx1Qk1D9X7eLck7XYpiSHt4U9t1SsZB3lsAeahV1YqeLst2/p8UQkxJjHaCXNOlF5uwsraAqiSop7uA==',key_name='tempest-TestSecurityGroupsBasicOps-681797586',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='728662ec7f654a3fb2e53a90b8707d7e',ramdisk_id='',reservation_id='r-s0krodc6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-342561427',owner_user_name='tempest-TestSecurityGroupsBasicOps-342561427-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T15:42:38Z,user_data=None,user_id='5985ef736503499a9f1d734cabc33ce5',uuid=6c6a79bf-04d3-4839-84cc-ab8b383d602c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9af895a3-cca7-495f-ab5a-68e04355f005", "address": "fa:16:3e:48:2c:f1", "network": {"id": "b5fb4ee9-fa45-4797-871a-53247ebaf43e", "bridge": "br-int", "label": "tempest-network-smoke--639703376", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "728662ec7f654a3fb2e53a90b8707d7e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9af895a3-cc", "ovs_interfaceid": "9af895a3-cca7-495f-ab5a-68e04355f005", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 20 15:42:43 compute-1 nova_compute[225855]: 2026-01-20 15:42:43.362 225859 DEBUG nova.network.os_vif_util [None req-7aa3daef-dee4-4751-afd5-b672c4cbb8af 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Converting VIF {"id": "9af895a3-cca7-495f-ab5a-68e04355f005", "address": "fa:16:3e:48:2c:f1", "network": {"id": "b5fb4ee9-fa45-4797-871a-53247ebaf43e", "bridge": "br-int", "label": "tempest-network-smoke--639703376", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "728662ec7f654a3fb2e53a90b8707d7e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9af895a3-cc", "ovs_interfaceid": "9af895a3-cca7-495f-ab5a-68e04355f005", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 15:42:43 compute-1 nova_compute[225855]: 2026-01-20 15:42:43.363 225859 DEBUG nova.network.os_vif_util [None req-7aa3daef-dee4-4751-afd5-b672c4cbb8af 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:48:2c:f1,bridge_name='br-int',has_traffic_filtering=True,id=9af895a3-cca7-495f-ab5a-68e04355f005,network=Network(b5fb4ee9-fa45-4797-871a-53247ebaf43e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9af895a3-cc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 15:42:43 compute-1 nova_compute[225855]: 2026-01-20 15:42:43.365 225859 DEBUG nova.objects.instance [None req-7aa3daef-dee4-4751-afd5-b672c4cbb8af 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Lazy-loading 'pci_devices' on Instance uuid 6c6a79bf-04d3-4839-84cc-ab8b383d602c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:42:43 compute-1 nova_compute[225855]: 2026-01-20 15:42:43.382 225859 DEBUG nova.virt.libvirt.driver [None req-7aa3daef-dee4-4751-afd5-b672c4cbb8af 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] [instance: 6c6a79bf-04d3-4839-84cc-ab8b383d602c] End _get_guest_xml xml=<domain type="kvm">
Jan 20 15:42:43 compute-1 nova_compute[225855]:   <uuid>6c6a79bf-04d3-4839-84cc-ab8b383d602c</uuid>
Jan 20 15:42:43 compute-1 nova_compute[225855]:   <name>instance-000000db</name>
Jan 20 15:42:43 compute-1 nova_compute[225855]:   <memory>131072</memory>
Jan 20 15:42:43 compute-1 nova_compute[225855]:   <vcpu>1</vcpu>
Jan 20 15:42:43 compute-1 nova_compute[225855]:   <metadata>
Jan 20 15:42:43 compute-1 nova_compute[225855]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 15:42:43 compute-1 nova_compute[225855]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 15:42:43 compute-1 nova_compute[225855]:       <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-342561427-gen-0-248124310</nova:name>
Jan 20 15:42:43 compute-1 nova_compute[225855]:       <nova:creationTime>2026-01-20 15:42:42</nova:creationTime>
Jan 20 15:42:43 compute-1 nova_compute[225855]:       <nova:flavor name="m1.nano">
Jan 20 15:42:43 compute-1 nova_compute[225855]:         <nova:memory>128</nova:memory>
Jan 20 15:42:43 compute-1 nova_compute[225855]:         <nova:disk>1</nova:disk>
Jan 20 15:42:43 compute-1 nova_compute[225855]:         <nova:swap>0</nova:swap>
Jan 20 15:42:43 compute-1 nova_compute[225855]:         <nova:ephemeral>0</nova:ephemeral>
Jan 20 15:42:43 compute-1 nova_compute[225855]:         <nova:vcpus>1</nova:vcpus>
Jan 20 15:42:43 compute-1 nova_compute[225855]:       </nova:flavor>
Jan 20 15:42:43 compute-1 nova_compute[225855]:       <nova:owner>
Jan 20 15:42:43 compute-1 nova_compute[225855]:         <nova:user uuid="5985ef736503499a9f1d734cabc33ce5">tempest-TestSecurityGroupsBasicOps-342561427-project-member</nova:user>
Jan 20 15:42:43 compute-1 nova_compute[225855]:         <nova:project uuid="728662ec7f654a3fb2e53a90b8707d7e">tempest-TestSecurityGroupsBasicOps-342561427</nova:project>
Jan 20 15:42:43 compute-1 nova_compute[225855]:       </nova:owner>
Jan 20 15:42:43 compute-1 nova_compute[225855]:       <nova:root type="image" uuid="a32b3e07-16d8-46fd-9a7b-c242c432fcf9"/>
Jan 20 15:42:43 compute-1 nova_compute[225855]:       <nova:ports>
Jan 20 15:42:43 compute-1 nova_compute[225855]:         <nova:port uuid="9af895a3-cca7-495f-ab5a-68e04355f005">
Jan 20 15:42:43 compute-1 nova_compute[225855]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Jan 20 15:42:43 compute-1 nova_compute[225855]:         </nova:port>
Jan 20 15:42:43 compute-1 nova_compute[225855]:       </nova:ports>
Jan 20 15:42:43 compute-1 nova_compute[225855]:     </nova:instance>
Jan 20 15:42:43 compute-1 nova_compute[225855]:   </metadata>
Jan 20 15:42:43 compute-1 nova_compute[225855]:   <sysinfo type="smbios">
Jan 20 15:42:43 compute-1 nova_compute[225855]:     <system>
Jan 20 15:42:43 compute-1 nova_compute[225855]:       <entry name="manufacturer">RDO</entry>
Jan 20 15:42:43 compute-1 nova_compute[225855]:       <entry name="product">OpenStack Compute</entry>
Jan 20 15:42:43 compute-1 nova_compute[225855]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 15:42:43 compute-1 nova_compute[225855]:       <entry name="serial">6c6a79bf-04d3-4839-84cc-ab8b383d602c</entry>
Jan 20 15:42:43 compute-1 nova_compute[225855]:       <entry name="uuid">6c6a79bf-04d3-4839-84cc-ab8b383d602c</entry>
Jan 20 15:42:43 compute-1 nova_compute[225855]:       <entry name="family">Virtual Machine</entry>
Jan 20 15:42:43 compute-1 nova_compute[225855]:     </system>
Jan 20 15:42:43 compute-1 nova_compute[225855]:   </sysinfo>
Jan 20 15:42:43 compute-1 nova_compute[225855]:   <os>
Jan 20 15:42:43 compute-1 nova_compute[225855]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 20 15:42:43 compute-1 nova_compute[225855]:     <boot dev="hd"/>
Jan 20 15:42:43 compute-1 nova_compute[225855]:     <smbios mode="sysinfo"/>
Jan 20 15:42:43 compute-1 nova_compute[225855]:   </os>
Jan 20 15:42:43 compute-1 nova_compute[225855]:   <features>
Jan 20 15:42:43 compute-1 nova_compute[225855]:     <acpi/>
Jan 20 15:42:43 compute-1 nova_compute[225855]:     <apic/>
Jan 20 15:42:43 compute-1 nova_compute[225855]:     <vmcoreinfo/>
Jan 20 15:42:43 compute-1 nova_compute[225855]:   </features>
Jan 20 15:42:43 compute-1 nova_compute[225855]:   <clock offset="utc">
Jan 20 15:42:43 compute-1 nova_compute[225855]:     <timer name="pit" tickpolicy="delay"/>
Jan 20 15:42:43 compute-1 nova_compute[225855]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 20 15:42:43 compute-1 nova_compute[225855]:     <timer name="hpet" present="no"/>
Jan 20 15:42:43 compute-1 nova_compute[225855]:   </clock>
Jan 20 15:42:43 compute-1 nova_compute[225855]:   <cpu mode="custom" match="exact">
Jan 20 15:42:43 compute-1 nova_compute[225855]:     <model>Nehalem</model>
Jan 20 15:42:43 compute-1 nova_compute[225855]:     <topology sockets="1" cores="1" threads="1"/>
Jan 20 15:42:43 compute-1 nova_compute[225855]:   </cpu>
Jan 20 15:42:43 compute-1 nova_compute[225855]:   <devices>
Jan 20 15:42:43 compute-1 nova_compute[225855]:     <disk type="network" device="disk">
Jan 20 15:42:43 compute-1 nova_compute[225855]:       <driver type="raw" cache="none"/>
Jan 20 15:42:43 compute-1 nova_compute[225855]:       <source protocol="rbd" name="vms/6c6a79bf-04d3-4839-84cc-ab8b383d602c_disk">
Jan 20 15:42:43 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 15:42:43 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 15:42:43 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 15:42:43 compute-1 nova_compute[225855]:       </source>
Jan 20 15:42:43 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 15:42:43 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 15:42:43 compute-1 nova_compute[225855]:       </auth>
Jan 20 15:42:43 compute-1 nova_compute[225855]:       <target dev="vda" bus="virtio"/>
Jan 20 15:42:43 compute-1 nova_compute[225855]:     </disk>
Jan 20 15:42:43 compute-1 nova_compute[225855]:     <disk type="network" device="cdrom">
Jan 20 15:42:43 compute-1 nova_compute[225855]:       <driver type="raw" cache="none"/>
Jan 20 15:42:43 compute-1 nova_compute[225855]:       <source protocol="rbd" name="vms/6c6a79bf-04d3-4839-84cc-ab8b383d602c_disk.config">
Jan 20 15:42:43 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 15:42:43 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 15:42:43 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 15:42:43 compute-1 nova_compute[225855]:       </source>
Jan 20 15:42:43 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 15:42:43 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 15:42:43 compute-1 nova_compute[225855]:       </auth>
Jan 20 15:42:43 compute-1 nova_compute[225855]:       <target dev="sda" bus="sata"/>
Jan 20 15:42:43 compute-1 nova_compute[225855]:     </disk>
Jan 20 15:42:43 compute-1 nova_compute[225855]:     <interface type="ethernet">
Jan 20 15:42:43 compute-1 nova_compute[225855]:       <mac address="fa:16:3e:48:2c:f1"/>
Jan 20 15:42:43 compute-1 nova_compute[225855]:       <model type="virtio"/>
Jan 20 15:42:43 compute-1 nova_compute[225855]:       <driver name="vhost" rx_queue_size="512"/>
Jan 20 15:42:43 compute-1 nova_compute[225855]:       <mtu size="1442"/>
Jan 20 15:42:43 compute-1 nova_compute[225855]:       <target dev="tap9af895a3-cc"/>
Jan 20 15:42:43 compute-1 nova_compute[225855]:     </interface>
Jan 20 15:42:43 compute-1 nova_compute[225855]:     <serial type="pty">
Jan 20 15:42:43 compute-1 nova_compute[225855]:       <log file="/var/lib/nova/instances/6c6a79bf-04d3-4839-84cc-ab8b383d602c/console.log" append="off"/>
Jan 20 15:42:43 compute-1 nova_compute[225855]:     </serial>
Jan 20 15:42:43 compute-1 nova_compute[225855]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 20 15:42:43 compute-1 nova_compute[225855]:     <video>
Jan 20 15:42:43 compute-1 nova_compute[225855]:       <model type="virtio"/>
Jan 20 15:42:43 compute-1 nova_compute[225855]:     </video>
Jan 20 15:42:43 compute-1 nova_compute[225855]:     <input type="tablet" bus="usb"/>
Jan 20 15:42:43 compute-1 nova_compute[225855]:     <rng model="virtio">
Jan 20 15:42:43 compute-1 nova_compute[225855]:       <backend model="random">/dev/urandom</backend>
Jan 20 15:42:43 compute-1 nova_compute[225855]:     </rng>
Jan 20 15:42:43 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root"/>
Jan 20 15:42:43 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:42:43 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:42:43 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:42:43 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:42:43 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:42:43 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:42:43 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:42:43 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:42:43 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:42:43 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:42:43 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:42:43 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:42:43 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:42:43 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:42:43 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:42:43 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:42:43 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:42:43 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:42:43 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:42:43 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:42:43 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:42:43 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:42:43 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:42:43 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:42:43 compute-1 nova_compute[225855]:     <controller type="usb" index="0"/>
Jan 20 15:42:43 compute-1 nova_compute[225855]:     <memballoon model="virtio">
Jan 20 15:42:43 compute-1 nova_compute[225855]:       <stats period="10"/>
Jan 20 15:42:43 compute-1 nova_compute[225855]:     </memballoon>
Jan 20 15:42:43 compute-1 nova_compute[225855]:   </devices>
Jan 20 15:42:43 compute-1 nova_compute[225855]: </domain>
Jan 20 15:42:43 compute-1 nova_compute[225855]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 20 15:42:43 compute-1 nova_compute[225855]: 2026-01-20 15:42:43.384 225859 DEBUG nova.compute.manager [None req-7aa3daef-dee4-4751-afd5-b672c4cbb8af 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] [instance: 6c6a79bf-04d3-4839-84cc-ab8b383d602c] Preparing to wait for external event network-vif-plugged-9af895a3-cca7-495f-ab5a-68e04355f005 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 20 15:42:43 compute-1 nova_compute[225855]: 2026-01-20 15:42:43.385 225859 DEBUG oslo_concurrency.lockutils [None req-7aa3daef-dee4-4751-afd5-b672c4cbb8af 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Acquiring lock "6c6a79bf-04d3-4839-84cc-ab8b383d602c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:42:43 compute-1 nova_compute[225855]: 2026-01-20 15:42:43.385 225859 DEBUG oslo_concurrency.lockutils [None req-7aa3daef-dee4-4751-afd5-b672c4cbb8af 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Lock "6c6a79bf-04d3-4839-84cc-ab8b383d602c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:42:43 compute-1 nova_compute[225855]: 2026-01-20 15:42:43.385 225859 DEBUG oslo_concurrency.lockutils [None req-7aa3daef-dee4-4751-afd5-b672c4cbb8af 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Lock "6c6a79bf-04d3-4839-84cc-ab8b383d602c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:42:43 compute-1 nova_compute[225855]: 2026-01-20 15:42:43.386 225859 DEBUG nova.virt.libvirt.vif [None req-7aa3daef-dee4-4751-afd5-b672c4cbb8af 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T15:42:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-342561427-gen-0-248124310',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-342561427-gen-0-248124310',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-342561427-gen',id=219,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNPrC26E5zjpds8PmYXeLNQKBwLdgsc+VcubrdKnriEXDiMjUXGvx1Qk1D9X7eLck7XYpiSHt4U9t1SsZB3lsAeahV1YqeLst2/p8UQkxJjHaCXNOlF5uwsraAqiSop7uA==',key_name='tempest-TestSecurityGroupsBasicOps-681797586',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='728662ec7f654a3fb2e53a90b8707d7e',ramdisk_id='',reservation_id='r-s0krodc6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-342561427',owner_user_name='tempest-TestSecurityGroupsBasicOps-342561427-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T15:42:38Z,user_data=None,user_id='5985ef736503499a9f1d734cabc33ce5',uuid=6c6a79bf-04d3-4839-84cc-ab8b383d602c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9af895a3-cca7-495f-ab5a-68e04355f005", "address": "fa:16:3e:48:2c:f1", "network": {"id": "b5fb4ee9-fa45-4797-871a-53247ebaf43e", "bridge": "br-int", "label": "tempest-network-smoke--639703376", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "728662ec7f654a3fb2e53a90b8707d7e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9af895a3-cc", "ovs_interfaceid": "9af895a3-cca7-495f-ab5a-68e04355f005", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 20 15:42:43 compute-1 nova_compute[225855]: 2026-01-20 15:42:43.386 225859 DEBUG nova.network.os_vif_util [None req-7aa3daef-dee4-4751-afd5-b672c4cbb8af 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Converting VIF {"id": "9af895a3-cca7-495f-ab5a-68e04355f005", "address": "fa:16:3e:48:2c:f1", "network": {"id": "b5fb4ee9-fa45-4797-871a-53247ebaf43e", "bridge": "br-int", "label": "tempest-network-smoke--639703376", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "728662ec7f654a3fb2e53a90b8707d7e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9af895a3-cc", "ovs_interfaceid": "9af895a3-cca7-495f-ab5a-68e04355f005", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 15:42:43 compute-1 nova_compute[225855]: 2026-01-20 15:42:43.387 225859 DEBUG nova.network.os_vif_util [None req-7aa3daef-dee4-4751-afd5-b672c4cbb8af 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:48:2c:f1,bridge_name='br-int',has_traffic_filtering=True,id=9af895a3-cca7-495f-ab5a-68e04355f005,network=Network(b5fb4ee9-fa45-4797-871a-53247ebaf43e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9af895a3-cc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 15:42:43 compute-1 nova_compute[225855]: 2026-01-20 15:42:43.388 225859 DEBUG os_vif [None req-7aa3daef-dee4-4751-afd5-b672c4cbb8af 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:48:2c:f1,bridge_name='br-int',has_traffic_filtering=True,id=9af895a3-cca7-495f-ab5a-68e04355f005,network=Network(b5fb4ee9-fa45-4797-871a-53247ebaf43e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9af895a3-cc') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 20 15:42:43 compute-1 nova_compute[225855]: 2026-01-20 15:42:43.389 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:42:43 compute-1 nova_compute[225855]: 2026-01-20 15:42:43.389 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:42:43 compute-1 nova_compute[225855]: 2026-01-20 15:42:43.390 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 15:42:43 compute-1 nova_compute[225855]: 2026-01-20 15:42:43.393 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:42:43 compute-1 nova_compute[225855]: 2026-01-20 15:42:43.394 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9af895a3-cc, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:42:43 compute-1 nova_compute[225855]: 2026-01-20 15:42:43.395 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap9af895a3-cc, col_values=(('external_ids', {'iface-id': '9af895a3-cca7-495f-ab5a-68e04355f005', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:48:2c:f1', 'vm-uuid': '6c6a79bf-04d3-4839-84cc-ab8b383d602c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:42:43 compute-1 nova_compute[225855]: 2026-01-20 15:42:43.396 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:42:43 compute-1 NetworkManager[49104]: <info>  [1768923763.3977] manager: (tap9af895a3-cc): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/426)
Jan 20 15:42:43 compute-1 nova_compute[225855]: 2026-01-20 15:42:43.400 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 15:42:43 compute-1 nova_compute[225855]: 2026-01-20 15:42:43.403 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:42:43 compute-1 nova_compute[225855]: 2026-01-20 15:42:43.404 225859 INFO os_vif [None req-7aa3daef-dee4-4751-afd5-b672c4cbb8af 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:48:2c:f1,bridge_name='br-int',has_traffic_filtering=True,id=9af895a3-cca7-495f-ab5a-68e04355f005,network=Network(b5fb4ee9-fa45-4797-871a-53247ebaf43e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9af895a3-cc')
Jan 20 15:42:43 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:42:43 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:42:43 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:42:43.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:42:43 compute-1 nova_compute[225855]: 2026-01-20 15:42:43.456 225859 DEBUG nova.virt.libvirt.driver [None req-7aa3daef-dee4-4751-afd5-b672c4cbb8af 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 15:42:43 compute-1 nova_compute[225855]: 2026-01-20 15:42:43.457 225859 DEBUG nova.virt.libvirt.driver [None req-7aa3daef-dee4-4751-afd5-b672c4cbb8af 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 15:42:43 compute-1 nova_compute[225855]: 2026-01-20 15:42:43.457 225859 DEBUG nova.virt.libvirt.driver [None req-7aa3daef-dee4-4751-afd5-b672c4cbb8af 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] No VIF found with MAC fa:16:3e:48:2c:f1, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 20 15:42:43 compute-1 nova_compute[225855]: 2026-01-20 15:42:43.457 225859 INFO nova.virt.libvirt.driver [None req-7aa3daef-dee4-4751-afd5-b672c4cbb8af 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] [instance: 6c6a79bf-04d3-4839-84cc-ab8b383d602c] Using config drive
Jan 20 15:42:43 compute-1 nova_compute[225855]: 2026-01-20 15:42:43.487 225859 DEBUG nova.storage.rbd_utils [None req-7aa3daef-dee4-4751-afd5-b672c4cbb8af 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] rbd image 6c6a79bf-04d3-4839-84cc-ab8b383d602c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:42:43 compute-1 nova_compute[225855]: 2026-01-20 15:42:43.507 225859 DEBUG nova.network.neutron [req-fe27a064-2531-409e-b4ea-10a7562a43fa req-9349ded2-faf5-4942-89ae-d68358c868cb 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 6c6a79bf-04d3-4839-84cc-ab8b383d602c] Updated VIF entry in instance network info cache for port 9af895a3-cca7-495f-ab5a-68e04355f005. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 15:42:43 compute-1 nova_compute[225855]: 2026-01-20 15:42:43.507 225859 DEBUG nova.network.neutron [req-fe27a064-2531-409e-b4ea-10a7562a43fa req-9349ded2-faf5-4942-89ae-d68358c868cb 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 6c6a79bf-04d3-4839-84cc-ab8b383d602c] Updating instance_info_cache with network_info: [{"id": "9af895a3-cca7-495f-ab5a-68e04355f005", "address": "fa:16:3e:48:2c:f1", "network": {"id": "b5fb4ee9-fa45-4797-871a-53247ebaf43e", "bridge": "br-int", "label": "tempest-network-smoke--639703376", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "728662ec7f654a3fb2e53a90b8707d7e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9af895a3-cc", "ovs_interfaceid": "9af895a3-cca7-495f-ab5a-68e04355f005", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:42:43 compute-1 nova_compute[225855]: 2026-01-20 15:42:43.530 225859 DEBUG oslo_concurrency.lockutils [req-fe27a064-2531-409e-b4ea-10a7562a43fa req-9349ded2-faf5-4942-89ae-d68358c868cb 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-6c6a79bf-04d3-4839-84cc-ab8b383d602c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:42:43 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/2863498301' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:42:43 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/2419632345' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:42:43 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:42:43 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:42:43 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:42:43.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:42:43 compute-1 nova_compute[225855]: 2026-01-20 15:42:43.810 225859 INFO nova.virt.libvirt.driver [None req-7aa3daef-dee4-4751-afd5-b672c4cbb8af 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] [instance: 6c6a79bf-04d3-4839-84cc-ab8b383d602c] Creating config drive at /var/lib/nova/instances/6c6a79bf-04d3-4839-84cc-ab8b383d602c/disk.config
Jan 20 15:42:43 compute-1 nova_compute[225855]: 2026-01-20 15:42:43.816 225859 DEBUG oslo_concurrency.processutils [None req-7aa3daef-dee4-4751-afd5-b672c4cbb8af 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/6c6a79bf-04d3-4839-84cc-ab8b383d602c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpfcg06clv execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:42:43 compute-1 nova_compute[225855]: 2026-01-20 15:42:43.968 225859 DEBUG oslo_concurrency.processutils [None req-7aa3daef-dee4-4751-afd5-b672c4cbb8af 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/6c6a79bf-04d3-4839-84cc-ab8b383d602c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpfcg06clv" returned: 0 in 0.152s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:42:44 compute-1 nova_compute[225855]: 2026-01-20 15:42:44.002 225859 DEBUG nova.storage.rbd_utils [None req-7aa3daef-dee4-4751-afd5-b672c4cbb8af 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] rbd image 6c6a79bf-04d3-4839-84cc-ab8b383d602c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:42:44 compute-1 nova_compute[225855]: 2026-01-20 15:42:44.008 225859 DEBUG oslo_concurrency.processutils [None req-7aa3daef-dee4-4751-afd5-b672c4cbb8af 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/6c6a79bf-04d3-4839-84cc-ab8b383d602c/disk.config 6c6a79bf-04d3-4839-84cc-ab8b383d602c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:42:44 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:42:44 compute-1 nova_compute[225855]: 2026-01-20 15:42:44.200 225859 DEBUG oslo_concurrency.processutils [None req-7aa3daef-dee4-4751-afd5-b672c4cbb8af 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/6c6a79bf-04d3-4839-84cc-ab8b383d602c/disk.config 6c6a79bf-04d3-4839-84cc-ab8b383d602c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.193s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:42:44 compute-1 nova_compute[225855]: 2026-01-20 15:42:44.201 225859 INFO nova.virt.libvirt.driver [None req-7aa3daef-dee4-4751-afd5-b672c4cbb8af 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] [instance: 6c6a79bf-04d3-4839-84cc-ab8b383d602c] Deleting local config drive /var/lib/nova/instances/6c6a79bf-04d3-4839-84cc-ab8b383d602c/disk.config because it was imported into RBD.
Jan 20 15:42:44 compute-1 kernel: tap9af895a3-cc: entered promiscuous mode
Jan 20 15:42:44 compute-1 NetworkManager[49104]: <info>  [1768923764.2592] manager: (tap9af895a3-cc): new Tun device (/org/freedesktop/NetworkManager/Devices/427)
Jan 20 15:42:44 compute-1 ovn_controller[130490]: 2026-01-20T15:42:44Z|00991|binding|INFO|Claiming lport 9af895a3-cca7-495f-ab5a-68e04355f005 for this chassis.
Jan 20 15:42:44 compute-1 ovn_controller[130490]: 2026-01-20T15:42:44Z|00992|binding|INFO|9af895a3-cca7-495f-ab5a-68e04355f005: Claiming fa:16:3e:48:2c:f1 10.100.0.3
Jan 20 15:42:44 compute-1 nova_compute[225855]: 2026-01-20 15:42:44.259 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:42:44 compute-1 nova_compute[225855]: 2026-01-20 15:42:44.264 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:42:44 compute-1 nova_compute[225855]: 2026-01-20 15:42:44.269 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:42:44 compute-1 NetworkManager[49104]: <info>  [1768923764.2705] manager: (patch-provnet-b62c391b-f7a3-4a38-a0df-72ac0383ca74-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/428)
Jan 20 15:42:44 compute-1 NetworkManager[49104]: <info>  [1768923764.2713] manager: (patch-br-int-to-provnet-b62c391b-f7a3-4a38-a0df-72ac0383ca74): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/429)
Jan 20 15:42:44 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:42:44.282 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:48:2c:f1 10.100.0.3'], port_security=['fa:16:3e:48:2c:f1 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '6c6a79bf-04d3-4839-84cc-ab8b383d602c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b5fb4ee9-fa45-4797-871a-53247ebaf43e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '728662ec7f654a3fb2e53a90b8707d7e', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b7424c3a-5aee-4d68-a5d7-51752094553b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=909829b9-c0dd-4f89-9095-7f817ccefae3, chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], logical_port=9af895a3-cca7-495f-ab5a-68e04355f005) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:42:44 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:42:44.288 140354 INFO neutron.agent.ovn.metadata.agent [-] Port 9af895a3-cca7-495f-ab5a-68e04355f005 in datapath b5fb4ee9-fa45-4797-871a-53247ebaf43e bound to our chassis
Jan 20 15:42:44 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:42:44.289 140354 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b5fb4ee9-fa45-4797-871a-53247ebaf43e
Jan 20 15:42:44 compute-1 systemd-udevd[328564]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 15:42:44 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:42:44.305 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[0d5a253a-5f03-4aa0-8eae-b1e32862bc37]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:42:44 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:42:44.307 140354 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapb5fb4ee9-f1 in ovnmeta-b5fb4ee9-fa45-4797-871a-53247ebaf43e namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 20 15:42:44 compute-1 systemd-machined[194361]: New machine qemu-114-instance-000000db.
Jan 20 15:42:44 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:42:44.309 229707 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapb5fb4ee9-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 20 15:42:44 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:42:44.309 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[35ff0543-6b70-4eda-9bf0-08e4cf246e15]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:42:44 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:42:44.310 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[0ff5be4a-f9c6-4152-a5ff-7933df0adef9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:42:44 compute-1 NetworkManager[49104]: <info>  [1768923764.3259] device (tap9af895a3-cc): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 20 15:42:44 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:42:44.324 140466 DEBUG oslo.privsep.daemon [-] privsep: reply[35fefe0c-7af3-4f27-8567-60921b972a64]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:42:44 compute-1 NetworkManager[49104]: <info>  [1768923764.3265] device (tap9af895a3-cc): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 20 15:42:44 compute-1 systemd[1]: Started Virtual Machine qemu-114-instance-000000db.
Jan 20 15:42:44 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:42:44.350 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[5d26acf2-dd72-4653-929b-a76c12af15b5]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:42:44 compute-1 nova_compute[225855]: 2026-01-20 15:42:44.364 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:42:44 compute-1 nova_compute[225855]: 2026-01-20 15:42:44.374 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:42:44 compute-1 ovn_controller[130490]: 2026-01-20T15:42:44Z|00993|binding|INFO|Setting lport 9af895a3-cca7-495f-ab5a-68e04355f005 ovn-installed in OVS
Jan 20 15:42:44 compute-1 ovn_controller[130490]: 2026-01-20T15:42:44Z|00994|binding|INFO|Setting lport 9af895a3-cca7-495f-ab5a-68e04355f005 up in Southbound
Jan 20 15:42:44 compute-1 nova_compute[225855]: 2026-01-20 15:42:44.389 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:42:44 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:42:44.397 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[1acdaeec-ccb0-43fc-a5b5-7f12d6266128]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:42:44 compute-1 NetworkManager[49104]: <info>  [1768923764.4036] manager: (tapb5fb4ee9-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/430)
Jan 20 15:42:44 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:42:44.402 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[c15397cf-691f-4355-9122-2a4d302f81b9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:42:44 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:42:44.439 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[812d3161-0afb-4dd9-a279-5d9604fdde49]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:42:44 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:42:44.443 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[177202a8-4b9e-433f-91b4-4e2b2958c2c3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:42:44 compute-1 NetworkManager[49104]: <info>  [1768923764.4700] device (tapb5fb4ee9-f0): carrier: link connected
Jan 20 15:42:44 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:42:44.476 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[e7df1dcb-0f80-4913-8d97-482340ff741e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:42:44 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:42:44.496 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[4994cfa7-33d1-4dc3-a12f-1c4de87818e8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb5fb4ee9-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:42:58:5a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 283], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 880339, 'reachable_time': 34549, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 328596, 'error': None, 'target': 'ovnmeta-b5fb4ee9-fa45-4797-871a-53247ebaf43e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:42:44 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:42:44.524 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[6d2b1a45-e0e2-4e35-bf39-58f8be0030c1]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe42:585a'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 880339, 'tstamp': 880339}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 328597, 'error': None, 'target': 'ovnmeta-b5fb4ee9-fa45-4797-871a-53247ebaf43e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:42:44 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:42:44.557 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[efdf8cc8-0ce3-4693-92d0-bd1dec40fd6c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb5fb4ee9-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:42:58:5a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 283], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 880339, 'reachable_time': 34549, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 328598, 'error': None, 'target': 'ovnmeta-b5fb4ee9-fa45-4797-871a-53247ebaf43e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:42:44 compute-1 nova_compute[225855]: 2026-01-20 15:42:44.570 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:42:44 compute-1 ceph-mon[81775]: pgmap v3616: 321 pgs: 321 active+clean; 229 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 11 KiB/s rd, 993 KiB/s wr, 14 op/s
Jan 20 15:42:44 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:42:44.607 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[e697b7e9-db74-4219-8f4b-a1409365fd28]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:42:44 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:42:44.691 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[6de1a2aa-067c-49f7-87aa-28e8a423ae1a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:42:44 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:42:44.694 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb5fb4ee9-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:42:44 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:42:44.694 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 15:42:44 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:42:44.695 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb5fb4ee9-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:42:44 compute-1 nova_compute[225855]: 2026-01-20 15:42:44.697 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:42:44 compute-1 NetworkManager[49104]: <info>  [1768923764.6985] manager: (tapb5fb4ee9-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/431)
Jan 20 15:42:44 compute-1 kernel: tapb5fb4ee9-f0: entered promiscuous mode
Jan 20 15:42:44 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:42:44.702 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb5fb4ee9-f0, col_values=(('external_ids', {'iface-id': '92b999b0-5595-47ff-ac54-cb52d2ba58ba'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:42:44 compute-1 ovn_controller[130490]: 2026-01-20T15:42:44Z|00995|binding|INFO|Releasing lport 92b999b0-5595-47ff-ac54-cb52d2ba58ba from this chassis (sb_readonly=0)
Jan 20 15:42:44 compute-1 nova_compute[225855]: 2026-01-20 15:42:44.704 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:42:44 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:42:44.706 140354 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/b5fb4ee9-fa45-4797-871a-53247ebaf43e.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/b5fb4ee9-fa45-4797-871a-53247ebaf43e.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 20 15:42:44 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:42:44.708 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[bd889d86-9e88-435e-8628-634de0ca366a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:42:44 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:42:44.709 140354 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 20 15:42:44 compute-1 ovn_metadata_agent[140349]: global
Jan 20 15:42:44 compute-1 ovn_metadata_agent[140349]:     log         /dev/log local0 debug
Jan 20 15:42:44 compute-1 ovn_metadata_agent[140349]:     log-tag     haproxy-metadata-proxy-b5fb4ee9-fa45-4797-871a-53247ebaf43e
Jan 20 15:42:44 compute-1 ovn_metadata_agent[140349]:     user        root
Jan 20 15:42:44 compute-1 ovn_metadata_agent[140349]:     group       root
Jan 20 15:42:44 compute-1 ovn_metadata_agent[140349]:     maxconn     1024
Jan 20 15:42:44 compute-1 ovn_metadata_agent[140349]:     pidfile     /var/lib/neutron/external/pids/b5fb4ee9-fa45-4797-871a-53247ebaf43e.pid.haproxy
Jan 20 15:42:44 compute-1 ovn_metadata_agent[140349]:     daemon
Jan 20 15:42:44 compute-1 ovn_metadata_agent[140349]: 
Jan 20 15:42:44 compute-1 ovn_metadata_agent[140349]: defaults
Jan 20 15:42:44 compute-1 ovn_metadata_agent[140349]:     log global
Jan 20 15:42:44 compute-1 ovn_metadata_agent[140349]:     mode http
Jan 20 15:42:44 compute-1 ovn_metadata_agent[140349]:     option httplog
Jan 20 15:42:44 compute-1 ovn_metadata_agent[140349]:     option dontlognull
Jan 20 15:42:44 compute-1 ovn_metadata_agent[140349]:     option http-server-close
Jan 20 15:42:44 compute-1 ovn_metadata_agent[140349]:     option forwardfor
Jan 20 15:42:44 compute-1 ovn_metadata_agent[140349]:     retries                 3
Jan 20 15:42:44 compute-1 ovn_metadata_agent[140349]:     timeout http-request    30s
Jan 20 15:42:44 compute-1 ovn_metadata_agent[140349]:     timeout connect         30s
Jan 20 15:42:44 compute-1 ovn_metadata_agent[140349]:     timeout client          32s
Jan 20 15:42:44 compute-1 ovn_metadata_agent[140349]:     timeout server          32s
Jan 20 15:42:44 compute-1 ovn_metadata_agent[140349]:     timeout http-keep-alive 30s
Jan 20 15:42:44 compute-1 ovn_metadata_agent[140349]: 
Jan 20 15:42:44 compute-1 ovn_metadata_agent[140349]: 
Jan 20 15:42:44 compute-1 ovn_metadata_agent[140349]: listen listener
Jan 20 15:42:44 compute-1 ovn_metadata_agent[140349]:     bind 169.254.169.254:80
Jan 20 15:42:44 compute-1 ovn_metadata_agent[140349]:     server metadata /var/lib/neutron/metadata_proxy
Jan 20 15:42:44 compute-1 ovn_metadata_agent[140349]:     http-request add-header X-OVN-Network-ID b5fb4ee9-fa45-4797-871a-53247ebaf43e
Jan 20 15:42:44 compute-1 ovn_metadata_agent[140349]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 20 15:42:44 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:42:44.712 140354 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-b5fb4ee9-fa45-4797-871a-53247ebaf43e', 'env', 'PROCESS_TAG=haproxy-b5fb4ee9-fa45-4797-871a-53247ebaf43e', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/b5fb4ee9-fa45-4797-871a-53247ebaf43e.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 20 15:42:44 compute-1 nova_compute[225855]: 2026-01-20 15:42:44.717 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:42:44 compute-1 nova_compute[225855]: 2026-01-20 15:42:44.792 225859 DEBUG nova.compute.manager [req-4fb91efe-8ee7-41fa-ba82-f31139ae9958 req-123402e4-aabd-41cf-9f52-a2a1f1e6709a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 6c6a79bf-04d3-4839-84cc-ab8b383d602c] Received event network-vif-plugged-9af895a3-cca7-495f-ab5a-68e04355f005 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:42:44 compute-1 nova_compute[225855]: 2026-01-20 15:42:44.793 225859 DEBUG oslo_concurrency.lockutils [req-4fb91efe-8ee7-41fa-ba82-f31139ae9958 req-123402e4-aabd-41cf-9f52-a2a1f1e6709a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "6c6a79bf-04d3-4839-84cc-ab8b383d602c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:42:44 compute-1 nova_compute[225855]: 2026-01-20 15:42:44.793 225859 DEBUG oslo_concurrency.lockutils [req-4fb91efe-8ee7-41fa-ba82-f31139ae9958 req-123402e4-aabd-41cf-9f52-a2a1f1e6709a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "6c6a79bf-04d3-4839-84cc-ab8b383d602c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:42:44 compute-1 nova_compute[225855]: 2026-01-20 15:42:44.793 225859 DEBUG oslo_concurrency.lockutils [req-4fb91efe-8ee7-41fa-ba82-f31139ae9958 req-123402e4-aabd-41cf-9f52-a2a1f1e6709a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "6c6a79bf-04d3-4839-84cc-ab8b383d602c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:42:44 compute-1 nova_compute[225855]: 2026-01-20 15:42:44.793 225859 DEBUG nova.compute.manager [req-4fb91efe-8ee7-41fa-ba82-f31139ae9958 req-123402e4-aabd-41cf-9f52-a2a1f1e6709a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 6c6a79bf-04d3-4839-84cc-ab8b383d602c] Processing event network-vif-plugged-9af895a3-cca7-495f-ab5a-68e04355f005 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 20 15:42:44 compute-1 nova_compute[225855]: 2026-01-20 15:42:44.859 225859 DEBUG nova.compute.manager [None req-7aa3daef-dee4-4751-afd5-b672c4cbb8af 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] [instance: 6c6a79bf-04d3-4839-84cc-ab8b383d602c] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 20 15:42:44 compute-1 nova_compute[225855]: 2026-01-20 15:42:44.860 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768923764.8585937, 6c6a79bf-04d3-4839-84cc-ab8b383d602c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:42:44 compute-1 nova_compute[225855]: 2026-01-20 15:42:44.860 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 6c6a79bf-04d3-4839-84cc-ab8b383d602c] VM Started (Lifecycle Event)
Jan 20 15:42:44 compute-1 nova_compute[225855]: 2026-01-20 15:42:44.865 225859 DEBUG nova.virt.libvirt.driver [None req-7aa3daef-dee4-4751-afd5-b672c4cbb8af 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] [instance: 6c6a79bf-04d3-4839-84cc-ab8b383d602c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 20 15:42:44 compute-1 nova_compute[225855]: 2026-01-20 15:42:44.869 225859 INFO nova.virt.libvirt.driver [-] [instance: 6c6a79bf-04d3-4839-84cc-ab8b383d602c] Instance spawned successfully.
Jan 20 15:42:44 compute-1 nova_compute[225855]: 2026-01-20 15:42:44.869 225859 DEBUG nova.virt.libvirt.driver [None req-7aa3daef-dee4-4751-afd5-b672c4cbb8af 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] [instance: 6c6a79bf-04d3-4839-84cc-ab8b383d602c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 20 15:42:44 compute-1 nova_compute[225855]: 2026-01-20 15:42:44.882 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 6c6a79bf-04d3-4839-84cc-ab8b383d602c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:42:44 compute-1 nova_compute[225855]: 2026-01-20 15:42:44.888 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 6c6a79bf-04d3-4839-84cc-ab8b383d602c] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 15:42:44 compute-1 nova_compute[225855]: 2026-01-20 15:42:44.893 225859 DEBUG nova.virt.libvirt.driver [None req-7aa3daef-dee4-4751-afd5-b672c4cbb8af 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] [instance: 6c6a79bf-04d3-4839-84cc-ab8b383d602c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:42:44 compute-1 nova_compute[225855]: 2026-01-20 15:42:44.893 225859 DEBUG nova.virt.libvirt.driver [None req-7aa3daef-dee4-4751-afd5-b672c4cbb8af 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] [instance: 6c6a79bf-04d3-4839-84cc-ab8b383d602c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:42:44 compute-1 nova_compute[225855]: 2026-01-20 15:42:44.894 225859 DEBUG nova.virt.libvirt.driver [None req-7aa3daef-dee4-4751-afd5-b672c4cbb8af 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] [instance: 6c6a79bf-04d3-4839-84cc-ab8b383d602c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:42:44 compute-1 nova_compute[225855]: 2026-01-20 15:42:44.894 225859 DEBUG nova.virt.libvirt.driver [None req-7aa3daef-dee4-4751-afd5-b672c4cbb8af 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] [instance: 6c6a79bf-04d3-4839-84cc-ab8b383d602c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:42:44 compute-1 nova_compute[225855]: 2026-01-20 15:42:44.894 225859 DEBUG nova.virt.libvirt.driver [None req-7aa3daef-dee4-4751-afd5-b672c4cbb8af 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] [instance: 6c6a79bf-04d3-4839-84cc-ab8b383d602c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:42:44 compute-1 nova_compute[225855]: 2026-01-20 15:42:44.895 225859 DEBUG nova.virt.libvirt.driver [None req-7aa3daef-dee4-4751-afd5-b672c4cbb8af 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] [instance: 6c6a79bf-04d3-4839-84cc-ab8b383d602c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:42:44 compute-1 nova_compute[225855]: 2026-01-20 15:42:44.936 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 6c6a79bf-04d3-4839-84cc-ab8b383d602c] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 15:42:44 compute-1 nova_compute[225855]: 2026-01-20 15:42:44.936 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768923764.858916, 6c6a79bf-04d3-4839-84cc-ab8b383d602c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:42:44 compute-1 nova_compute[225855]: 2026-01-20 15:42:44.936 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 6c6a79bf-04d3-4839-84cc-ab8b383d602c] VM Paused (Lifecycle Event)
Jan 20 15:42:44 compute-1 nova_compute[225855]: 2026-01-20 15:42:44.961 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 6c6a79bf-04d3-4839-84cc-ab8b383d602c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:42:44 compute-1 nova_compute[225855]: 2026-01-20 15:42:44.964 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768923764.8642828, 6c6a79bf-04d3-4839-84cc-ab8b383d602c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:42:44 compute-1 nova_compute[225855]: 2026-01-20 15:42:44.964 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 6c6a79bf-04d3-4839-84cc-ab8b383d602c] VM Resumed (Lifecycle Event)
Jan 20 15:42:44 compute-1 nova_compute[225855]: 2026-01-20 15:42:44.970 225859 INFO nova.compute.manager [None req-7aa3daef-dee4-4751-afd5-b672c4cbb8af 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] [instance: 6c6a79bf-04d3-4839-84cc-ab8b383d602c] Took 6.20 seconds to spawn the instance on the hypervisor.
Jan 20 15:42:44 compute-1 nova_compute[225855]: 2026-01-20 15:42:44.971 225859 DEBUG nova.compute.manager [None req-7aa3daef-dee4-4751-afd5-b672c4cbb8af 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] [instance: 6c6a79bf-04d3-4839-84cc-ab8b383d602c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:42:44 compute-1 nova_compute[225855]: 2026-01-20 15:42:44.980 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 6c6a79bf-04d3-4839-84cc-ab8b383d602c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:42:44 compute-1 nova_compute[225855]: 2026-01-20 15:42:44.983 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 6c6a79bf-04d3-4839-84cc-ab8b383d602c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 15:42:45 compute-1 nova_compute[225855]: 2026-01-20 15:42:45.011 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 6c6a79bf-04d3-4839-84cc-ab8b383d602c] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 15:42:45 compute-1 podman[328672]: 2026-01-20 15:42:45.069714728 +0000 UTC m=+0.056724054 container create 03cd23c47de70dcff18cd70c75122fd429c6ff5e41750880fef510538ae5ef75 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b5fb4ee9-fa45-4797-871a-53247ebaf43e, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 20 15:42:45 compute-1 systemd[1]: Started libpod-conmon-03cd23c47de70dcff18cd70c75122fd429c6ff5e41750880fef510538ae5ef75.scope.
Jan 20 15:42:45 compute-1 podman[328672]: 2026-01-20 15:42:45.036194171 +0000 UTC m=+0.023203527 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 20 15:42:45 compute-1 systemd[1]: Started libcrun container.
Jan 20 15:42:45 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b5e8f4f94618ce44091fb433750cac7f4bda279532d391abb9a2cd7dfef8589/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 20 15:42:45 compute-1 podman[328672]: 2026-01-20 15:42:45.165439114 +0000 UTC m=+0.152448520 container init 03cd23c47de70dcff18cd70c75122fd429c6ff5e41750880fef510538ae5ef75 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b5fb4ee9-fa45-4797-871a-53247ebaf43e, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 20 15:42:45 compute-1 podman[328672]: 2026-01-20 15:42:45.173421769 +0000 UTC m=+0.160431125 container start 03cd23c47de70dcff18cd70c75122fd429c6ff5e41750880fef510538ae5ef75 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b5fb4ee9-fa45-4797-871a-53247ebaf43e, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Jan 20 15:42:45 compute-1 neutron-haproxy-ovnmeta-b5fb4ee9-fa45-4797-871a-53247ebaf43e[328687]: [NOTICE]   (328691) : New worker (328693) forked
Jan 20 15:42:45 compute-1 neutron-haproxy-ovnmeta-b5fb4ee9-fa45-4797-871a-53247ebaf43e[328687]: [NOTICE]   (328691) : Loading success.
Jan 20 15:42:45 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:42:45 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:42:45 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:42:45.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:42:45 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:42:45 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:42:45 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:42:45.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:42:45 compute-1 nova_compute[225855]: 2026-01-20 15:42:45.713 225859 INFO nova.compute.manager [None req-7aa3daef-dee4-4751-afd5-b672c4cbb8af 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] [instance: 6c6a79bf-04d3-4839-84cc-ab8b383d602c] Took 7.85 seconds to build instance.
Jan 20 15:42:46 compute-1 nova_compute[225855]: 2026-01-20 15:42:46.485 225859 DEBUG oslo_concurrency.lockutils [None req-7aa3daef-dee4-4751-afd5-b672c4cbb8af 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Lock "6c6a79bf-04d3-4839-84cc-ab8b383d602c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.724s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:42:46 compute-1 ceph-mon[81775]: pgmap v3617: 321 pgs: 321 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 30 op/s
Jan 20 15:42:46 compute-1 nova_compute[225855]: 2026-01-20 15:42:46.883 225859 DEBUG nova.compute.manager [req-95cc77aa-584b-4791-bdb8-970c47fd8621 req-c1fe608a-c649-4f5e-aa71-7c13b714f6f7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 6c6a79bf-04d3-4839-84cc-ab8b383d602c] Received event network-vif-plugged-9af895a3-cca7-495f-ab5a-68e04355f005 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:42:46 compute-1 nova_compute[225855]: 2026-01-20 15:42:46.884 225859 DEBUG oslo_concurrency.lockutils [req-95cc77aa-584b-4791-bdb8-970c47fd8621 req-c1fe608a-c649-4f5e-aa71-7c13b714f6f7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "6c6a79bf-04d3-4839-84cc-ab8b383d602c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:42:46 compute-1 nova_compute[225855]: 2026-01-20 15:42:46.884 225859 DEBUG oslo_concurrency.lockutils [req-95cc77aa-584b-4791-bdb8-970c47fd8621 req-c1fe608a-c649-4f5e-aa71-7c13b714f6f7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "6c6a79bf-04d3-4839-84cc-ab8b383d602c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:42:46 compute-1 nova_compute[225855]: 2026-01-20 15:42:46.885 225859 DEBUG oslo_concurrency.lockutils [req-95cc77aa-584b-4791-bdb8-970c47fd8621 req-c1fe608a-c649-4f5e-aa71-7c13b714f6f7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "6c6a79bf-04d3-4839-84cc-ab8b383d602c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:42:46 compute-1 nova_compute[225855]: 2026-01-20 15:42:46.885 225859 DEBUG nova.compute.manager [req-95cc77aa-584b-4791-bdb8-970c47fd8621 req-c1fe608a-c649-4f5e-aa71-7c13b714f6f7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 6c6a79bf-04d3-4839-84cc-ab8b383d602c] No waiting events found dispatching network-vif-plugged-9af895a3-cca7-495f-ab5a-68e04355f005 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:42:46 compute-1 nova_compute[225855]: 2026-01-20 15:42:46.886 225859 WARNING nova.compute.manager [req-95cc77aa-584b-4791-bdb8-970c47fd8621 req-c1fe608a-c649-4f5e-aa71-7c13b714f6f7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 6c6a79bf-04d3-4839-84cc-ab8b383d602c] Received unexpected event network-vif-plugged-9af895a3-cca7-495f-ab5a-68e04355f005 for instance with vm_state active and task_state None.
Jan 20 15:42:47 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:42:47 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:42:47 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:42:47.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:42:47 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:42:47 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:42:47 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:42:47.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:42:48 compute-1 nova_compute[225855]: 2026-01-20 15:42:48.398 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:42:48 compute-1 ceph-mon[81775]: pgmap v3618: 321 pgs: 321 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 231 KiB/s rd, 1.8 MiB/s wr, 43 op/s
Jan 20 15:42:49 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:42:49 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:42:49 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:42:49 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:42:49.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:42:49 compute-1 nova_compute[225855]: 2026-01-20 15:42:49.572 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:42:49 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:42:49 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:42:49 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:42:49.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:42:49 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:42:49.795 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5ffd4ac3-9266-4927-98ad-20a17782c725, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '87'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:42:50 compute-1 ceph-mon[81775]: pgmap v3619: 321 pgs: 321 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.1 MiB/s rd, 1.8 MiB/s wr, 71 op/s
Jan 20 15:42:51 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:42:51 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:42:51 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:42:51.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:42:51 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:42:51 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:42:51 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:42:51.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:42:52 compute-1 ceph-mon[81775]: pgmap v3620: 321 pgs: 321 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Jan 20 15:42:53 compute-1 sudo[328706]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:42:53 compute-1 sudo[328706]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:42:53 compute-1 sudo[328706]: pam_unix(sudo:session): session closed for user root
Jan 20 15:42:53 compute-1 sudo[328731]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:42:53 compute-1 sudo[328731]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:42:53 compute-1 sudo[328731]: pam_unix(sudo:session): session closed for user root
Jan 20 15:42:53 compute-1 nova_compute[225855]: 2026-01-20 15:42:53.401 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:42:53 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:42:53 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:42:53 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:42:53.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:42:53 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:42:53 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.002000056s ======
Jan 20 15:42:53 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:42:53.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000056s
Jan 20 15:42:54 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:42:54 compute-1 nova_compute[225855]: 2026-01-20 15:42:54.574 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:42:54 compute-1 ceph-mon[81775]: pgmap v3621: 321 pgs: 321 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 840 KiB/s wr, 87 op/s
Jan 20 15:42:55 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:42:55 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:42:55 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:42:55.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:42:55 compute-1 sudo[328758]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:42:55 compute-1 sudo[328758]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:42:55 compute-1 sudo[328758]: pam_unix(sudo:session): session closed for user root
Jan 20 15:42:55 compute-1 sudo[328783]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:42:55 compute-1 sudo[328783]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:42:55 compute-1 sudo[328783]: pam_unix(sudo:session): session closed for user root
Jan 20 15:42:55 compute-1 sudo[328808]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:42:55 compute-1 sudo[328808]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:42:55 compute-1 sudo[328808]: pam_unix(sudo:session): session closed for user root
Jan 20 15:42:55 compute-1 sudo[328833]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Jan 20 15:42:55 compute-1 sudo[328833]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:42:55 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:42:55 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:42:55 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:42:55.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:42:56 compute-1 podman[328933]: 2026-01-20 15:42:56.110569118 +0000 UTC m=+0.067455087 container exec 718ebba7a543e42aad7051248d2c7dc014068c35c89c5b87f27b82d4de39c009 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-crash-compute-1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default)
Jan 20 15:42:56 compute-1 podman[328933]: 2026-01-20 15:42:56.207254591 +0000 UTC m=+0.164140540 container exec_died 718ebba7a543e42aad7051248d2c7dc014068c35c89c5b87f27b82d4de39c009 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-crash-compute-1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 15:42:56 compute-1 ceph-mon[81775]: pgmap v3622: 321 pgs: 321 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 844 KiB/s wr, 88 op/s
Jan 20 15:42:56 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:42:56 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:42:56 compute-1 podman[329084]: 2026-01-20 15:42:56.833607171 +0000 UTC m=+0.052843744 container exec 25e2c3387bc15944c21038272559da5fbf75910d8dd4add0faa995fb4e0f7788 (image=quay.io/ceph/haproxy:2.3, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-haproxy-rgw-default-compute-1-uyeocq)
Jan 20 15:42:56 compute-1 podman[329084]: 2026-01-20 15:42:56.844272323 +0000 UTC m=+0.063508896 container exec_died 25e2c3387bc15944c21038272559da5fbf75910d8dd4add0faa995fb4e0f7788 (image=quay.io/ceph/haproxy:2.3, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-haproxy-rgw-default-compute-1-uyeocq)
Jan 20 15:42:57 compute-1 podman[329150]: 2026-01-20 15:42:57.03629394 +0000 UTC m=+0.051503547 container exec e27b69e4cc956b06482c80498336e112a56122514cd7345d3d4b39a4d206f962 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-keepalived-rgw-default-compute-1-cevitz, version=2.2.4, name=keepalived, release=1793, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Keepalived on RHEL 9, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64, distribution-scope=public, io.openshift.tags=Ceph keepalived, vendor=Red Hat, Inc., build-date=2023-02-22T09:23:20, description=keepalived for Ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.buildah.version=1.28.2, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=keepalived-container, summary=Provides keepalived on RHEL 9 for Ceph., vcs-type=git)
Jan 20 15:42:57 compute-1 podman[329150]: 2026-01-20 15:42:57.049231065 +0000 UTC m=+0.064440682 container exec_died e27b69e4cc956b06482c80498336e112a56122514cd7345d3d4b39a4d206f962 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-keepalived-rgw-default-compute-1-cevitz, summary=Provides keepalived on RHEL 9 for Ceph., io.openshift.expose-services=, io.openshift.tags=Ceph keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vcs-type=git, io.k8s.display-name=Keepalived on RHEL 9, com.redhat.component=keepalived-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=2.2.4, distribution-scope=public, name=keepalived, release=1793, vendor=Red Hat, Inc., build-date=2023-02-22T09:23:20, description=keepalived for Ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.28.2)
Jan 20 15:42:57 compute-1 sudo[328833]: pam_unix(sudo:session): session closed for user root
Jan 20 15:42:57 compute-1 sudo[329180]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:42:57 compute-1 sudo[329180]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:42:57 compute-1 sudo[329180]: pam_unix(sudo:session): session closed for user root
Jan 20 15:42:57 compute-1 sudo[329205]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:42:57 compute-1 sudo[329205]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:42:57 compute-1 sudo[329205]: pam_unix(sudo:session): session closed for user root
Jan 20 15:42:57 compute-1 sudo[329230]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:42:57 compute-1 sudo[329230]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:42:57 compute-1 sudo[329230]: pam_unix(sudo:session): session closed for user root
Jan 20 15:42:57 compute-1 sudo[329255]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 20 15:42:57 compute-1 sudo[329255]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:42:57 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:42:57 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:42:57 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:42:57.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:42:57 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:42:57 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:42:57 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:42:57.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:42:57 compute-1 sudo[329255]: pam_unix(sudo:session): session closed for user root
Jan 20 15:42:58 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:42:58 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:42:58 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:42:58 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:42:58 compute-1 ceph-mon[81775]: pgmap v3623: 321 pgs: 321 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 5.1 KiB/s wr, 72 op/s
Jan 20 15:42:58 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:42:58 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 15:42:58 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:42:58 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 15:42:58 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 15:42:58 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:42:58 compute-1 nova_compute[225855]: 2026-01-20 15:42:58.405 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:42:58 compute-1 ovn_controller[130490]: 2026-01-20T15:42:58Z|00122|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:48:2c:f1 10.100.0.3
Jan 20 15:42:58 compute-1 ovn_controller[130490]: 2026-01-20T15:42:58Z|00123|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:48:2c:f1 10.100.0.3
Jan 20 15:42:59 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:42:59 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:42:59 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:42:59 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:42:59.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:42:59 compute-1 nova_compute[225855]: 2026-01-20 15:42:59.576 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:42:59 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:42:59 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:42:59 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:42:59.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:43:00 compute-1 ceph-mon[81775]: pgmap v3624: 321 pgs: 321 active+clean; 260 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.1 MiB/s wr, 85 op/s
Jan 20 15:43:01 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:43:01 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:43:01 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:43:01.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:43:01 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:43:01 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:43:01 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:43:01.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:43:02 compute-1 ceph-mon[81775]: pgmap v3625: 321 pgs: 321 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.2 MiB/s rd, 2.1 MiB/s wr, 92 op/s
Jan 20 15:43:03 compute-1 nova_compute[225855]: 2026-01-20 15:43:03.411 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:43:03 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:43:03 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.002000056s ======
Jan 20 15:43:03 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:43:03.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000056s
Jan 20 15:43:03 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:43:03 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:43:03 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:43:03.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:43:04 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:43:04 compute-1 sudo[329330]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:43:04 compute-1 podman[329317]: 2026-01-20 15:43:04.168501836 +0000 UTC m=+0.194534389 container health_status 72be95c12e041eb2cd1a16ad70dfa1391d9f03a5f3180a614ead9459dcfcd46f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true)
Jan 20 15:43:04 compute-1 sudo[329330]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:43:04 compute-1 sudo[329330]: pam_unix(sudo:session): session closed for user root
Jan 20 15:43:04 compute-1 sudo[329368]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 15:43:04 compute-1 sudo[329368]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:43:04 compute-1 sudo[329368]: pam_unix(sudo:session): session closed for user root
Jan 20 15:43:04 compute-1 nova_compute[225855]: 2026-01-20 15:43:04.580 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:43:04 compute-1 ceph-mon[81775]: pgmap v3626: 321 pgs: 321 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 308 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Jan 20 15:43:04 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:43:04 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:43:05 compute-1 nova_compute[225855]: 2026-01-20 15:43:05.439 225859 DEBUG oslo_concurrency.lockutils [None req-011eca6b-40b4-4a38-bba5-ce034d3c98eb 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Acquiring lock "6c6a79bf-04d3-4839-84cc-ab8b383d602c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:43:05 compute-1 nova_compute[225855]: 2026-01-20 15:43:05.440 225859 DEBUG oslo_concurrency.lockutils [None req-011eca6b-40b4-4a38-bba5-ce034d3c98eb 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Lock "6c6a79bf-04d3-4839-84cc-ab8b383d602c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:43:05 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:43:05 compute-1 nova_compute[225855]: 2026-01-20 15:43:05.440 225859 DEBUG oslo_concurrency.lockutils [None req-011eca6b-40b4-4a38-bba5-ce034d3c98eb 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Acquiring lock "6c6a79bf-04d3-4839-84cc-ab8b383d602c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:43:05 compute-1 nova_compute[225855]: 2026-01-20 15:43:05.440 225859 DEBUG oslo_concurrency.lockutils [None req-011eca6b-40b4-4a38-bba5-ce034d3c98eb 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Lock "6c6a79bf-04d3-4839-84cc-ab8b383d602c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:43:05 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:43:05 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:43:05.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:43:05 compute-1 nova_compute[225855]: 2026-01-20 15:43:05.441 225859 DEBUG oslo_concurrency.lockutils [None req-011eca6b-40b4-4a38-bba5-ce034d3c98eb 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Lock "6c6a79bf-04d3-4839-84cc-ab8b383d602c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:43:05 compute-1 nova_compute[225855]: 2026-01-20 15:43:05.442 225859 INFO nova.compute.manager [None req-011eca6b-40b4-4a38-bba5-ce034d3c98eb 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] [instance: 6c6a79bf-04d3-4839-84cc-ab8b383d602c] Terminating instance
Jan 20 15:43:05 compute-1 nova_compute[225855]: 2026-01-20 15:43:05.443 225859 DEBUG nova.compute.manager [None req-011eca6b-40b4-4a38-bba5-ce034d3c98eb 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] [instance: 6c6a79bf-04d3-4839-84cc-ab8b383d602c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 20 15:43:05 compute-1 kernel: tap9af895a3-cc (unregistering): left promiscuous mode
Jan 20 15:43:05 compute-1 NetworkManager[49104]: <info>  [1768923785.5031] device (tap9af895a3-cc): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 20 15:43:05 compute-1 nova_compute[225855]: 2026-01-20 15:43:05.518 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:43:05 compute-1 ovn_controller[130490]: 2026-01-20T15:43:05Z|00996|binding|INFO|Releasing lport 9af895a3-cca7-495f-ab5a-68e04355f005 from this chassis (sb_readonly=0)
Jan 20 15:43:05 compute-1 ovn_controller[130490]: 2026-01-20T15:43:05Z|00997|binding|INFO|Setting lport 9af895a3-cca7-495f-ab5a-68e04355f005 down in Southbound
Jan 20 15:43:05 compute-1 ovn_controller[130490]: 2026-01-20T15:43:05Z|00998|binding|INFO|Removing iface tap9af895a3-cc ovn-installed in OVS
Jan 20 15:43:05 compute-1 nova_compute[225855]: 2026-01-20 15:43:05.521 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:43:05 compute-1 nova_compute[225855]: 2026-01-20 15:43:05.558 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:43:05 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:43:05.563 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:48:2c:f1 10.100.0.3'], port_security=['fa:16:3e:48:2c:f1 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '6c6a79bf-04d3-4839-84cc-ab8b383d602c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b5fb4ee9-fa45-4797-871a-53247ebaf43e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '728662ec7f654a3fb2e53a90b8707d7e', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b7424c3a-5aee-4d68-a5d7-51752094553b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=909829b9-c0dd-4f89-9095-7f817ccefae3, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], logical_port=9af895a3-cca7-495f-ab5a-68e04355f005) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:43:05 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:43:05.565 140354 INFO neutron.agent.ovn.metadata.agent [-] Port 9af895a3-cca7-495f-ab5a-68e04355f005 in datapath b5fb4ee9-fa45-4797-871a-53247ebaf43e unbound from our chassis
Jan 20 15:43:05 compute-1 systemd[1]: machine-qemu\x2d114\x2dinstance\x2d000000db.scope: Deactivated successfully.
Jan 20 15:43:05 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:43:05.568 140354 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b5fb4ee9-fa45-4797-871a-53247ebaf43e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 20 15:43:05 compute-1 systemd[1]: machine-qemu\x2d114\x2dinstance\x2d000000db.scope: Consumed 14.573s CPU time.
Jan 20 15:43:05 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:43:05.570 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[1f888135-93ff-4f2b-9988-cb3084c2d1b3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:43:05 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:43:05.571 140354 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-b5fb4ee9-fa45-4797-871a-53247ebaf43e namespace which is not needed anymore
Jan 20 15:43:05 compute-1 systemd-machined[194361]: Machine qemu-114-instance-000000db terminated.
Jan 20 15:43:05 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:43:05 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:43:05 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:43:05.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:43:05 compute-1 nova_compute[225855]: 2026-01-20 15:43:05.687 225859 INFO nova.virt.libvirt.driver [-] [instance: 6c6a79bf-04d3-4839-84cc-ab8b383d602c] Instance destroyed successfully.
Jan 20 15:43:05 compute-1 nova_compute[225855]: 2026-01-20 15:43:05.688 225859 DEBUG nova.objects.instance [None req-011eca6b-40b4-4a38-bba5-ce034d3c98eb 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Lazy-loading 'resources' on Instance uuid 6c6a79bf-04d3-4839-84cc-ab8b383d602c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:43:05 compute-1 nova_compute[225855]: 2026-01-20 15:43:05.705 225859 DEBUG nova.virt.libvirt.vif [None req-011eca6b-40b4-4a38-bba5-ce034d3c98eb 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T15:42:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-342561427-gen-0-248124310',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-342561427-gen-0-248124310',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-342561427-gen',id=219,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNPrC26E5zjpds8PmYXeLNQKBwLdgsc+VcubrdKnriEXDiMjUXGvx1Qk1D9X7eLck7XYpiSHt4U9t1SsZB3lsAeahV1YqeLst2/p8UQkxJjHaCXNOlF5uwsraAqiSop7uA==',key_name='tempest-TestSecurityGroupsBasicOps-681797586',keypairs=<?>,launch_index=0,launched_at=2026-01-20T15:42:44Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='728662ec7f654a3fb2e53a90b8707d7e',ramdisk_id='',reservation_id='r-s0krodc6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-342561427',owner_user_name='tempest-TestSecurityGroupsBasicOps-342561427-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T15:42:45Z,user_data=None,user_id='5985ef736503499a9f1d734cabc33ce5',uuid=6c6a79bf-04d3-4839-84cc-ab8b383d602c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9af895a3-cca7-495f-ab5a-68e04355f005", "address": "fa:16:3e:48:2c:f1", "network": {"id": "b5fb4ee9-fa45-4797-871a-53247ebaf43e", "bridge": "br-int", "label": "tempest-network-smoke--639703376", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "728662ec7f654a3fb2e53a90b8707d7e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9af895a3-cc", "ovs_interfaceid": "9af895a3-cca7-495f-ab5a-68e04355f005", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 20 15:43:05 compute-1 nova_compute[225855]: 2026-01-20 15:43:05.706 225859 DEBUG nova.network.os_vif_util [None req-011eca6b-40b4-4a38-bba5-ce034d3c98eb 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Converting VIF {"id": "9af895a3-cca7-495f-ab5a-68e04355f005", "address": "fa:16:3e:48:2c:f1", "network": {"id": "b5fb4ee9-fa45-4797-871a-53247ebaf43e", "bridge": "br-int", "label": "tempest-network-smoke--639703376", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "728662ec7f654a3fb2e53a90b8707d7e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9af895a3-cc", "ovs_interfaceid": "9af895a3-cca7-495f-ab5a-68e04355f005", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 15:43:05 compute-1 nova_compute[225855]: 2026-01-20 15:43:05.707 225859 DEBUG nova.network.os_vif_util [None req-011eca6b-40b4-4a38-bba5-ce034d3c98eb 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:48:2c:f1,bridge_name='br-int',has_traffic_filtering=True,id=9af895a3-cca7-495f-ab5a-68e04355f005,network=Network(b5fb4ee9-fa45-4797-871a-53247ebaf43e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9af895a3-cc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 15:43:05 compute-1 nova_compute[225855]: 2026-01-20 15:43:05.708 225859 DEBUG os_vif [None req-011eca6b-40b4-4a38-bba5-ce034d3c98eb 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:48:2c:f1,bridge_name='br-int',has_traffic_filtering=True,id=9af895a3-cca7-495f-ab5a-68e04355f005,network=Network(b5fb4ee9-fa45-4797-871a-53247ebaf43e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9af895a3-cc') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 20 15:43:05 compute-1 nova_compute[225855]: 2026-01-20 15:43:05.710 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:43:05 compute-1 nova_compute[225855]: 2026-01-20 15:43:05.710 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9af895a3-cc, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:43:05 compute-1 nova_compute[225855]: 2026-01-20 15:43:05.713 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:43:05 compute-1 nova_compute[225855]: 2026-01-20 15:43:05.716 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 15:43:05 compute-1 nova_compute[225855]: 2026-01-20 15:43:05.718 225859 INFO os_vif [None req-011eca6b-40b4-4a38-bba5-ce034d3c98eb 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:48:2c:f1,bridge_name='br-int',has_traffic_filtering=True,id=9af895a3-cca7-495f-ab5a-68e04355f005,network=Network(b5fb4ee9-fa45-4797-871a-53247ebaf43e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9af895a3-cc')
Jan 20 15:43:05 compute-1 neutron-haproxy-ovnmeta-b5fb4ee9-fa45-4797-871a-53247ebaf43e[328687]: [NOTICE]   (328691) : haproxy version is 2.8.14-c23fe91
Jan 20 15:43:05 compute-1 neutron-haproxy-ovnmeta-b5fb4ee9-fa45-4797-871a-53247ebaf43e[328687]: [NOTICE]   (328691) : path to executable is /usr/sbin/haproxy
Jan 20 15:43:05 compute-1 neutron-haproxy-ovnmeta-b5fb4ee9-fa45-4797-871a-53247ebaf43e[328687]: [WARNING]  (328691) : Exiting Master process...
Jan 20 15:43:05 compute-1 neutron-haproxy-ovnmeta-b5fb4ee9-fa45-4797-871a-53247ebaf43e[328687]: [ALERT]    (328691) : Current worker (328693) exited with code 143 (Terminated)
Jan 20 15:43:05 compute-1 neutron-haproxy-ovnmeta-b5fb4ee9-fa45-4797-871a-53247ebaf43e[328687]: [WARNING]  (328691) : All workers exited. Exiting... (0)
Jan 20 15:43:05 compute-1 systemd[1]: libpod-03cd23c47de70dcff18cd70c75122fd429c6ff5e41750880fef510538ae5ef75.scope: Deactivated successfully.
Jan 20 15:43:05 compute-1 podman[329419]: 2026-01-20 15:43:05.72977551 +0000 UTC m=+0.052532966 container died 03cd23c47de70dcff18cd70c75122fd429c6ff5e41750880fef510538ae5ef75 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b5fb4ee9-fa45-4797-871a-53247ebaf43e, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 15:43:05 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-03cd23c47de70dcff18cd70c75122fd429c6ff5e41750880fef510538ae5ef75-userdata-shm.mount: Deactivated successfully.
Jan 20 15:43:05 compute-1 systemd[1]: var-lib-containers-storage-overlay-4b5e8f4f94618ce44091fb433750cac7f4bda279532d391abb9a2cd7dfef8589-merged.mount: Deactivated successfully.
Jan 20 15:43:05 compute-1 podman[329419]: 2026-01-20 15:43:05.76801655 +0000 UTC m=+0.090773996 container cleanup 03cd23c47de70dcff18cd70c75122fd429c6ff5e41750880fef510538ae5ef75 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b5fb4ee9-fa45-4797-871a-53247ebaf43e, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 15:43:05 compute-1 systemd[1]: libpod-conmon-03cd23c47de70dcff18cd70c75122fd429c6ff5e41750880fef510538ae5ef75.scope: Deactivated successfully.
Jan 20 15:43:05 compute-1 podman[329474]: 2026-01-20 15:43:05.821635726 +0000 UTC m=+0.035097353 container remove 03cd23c47de70dcff18cd70c75122fd429c6ff5e41750880fef510538ae5ef75 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b5fb4ee9-fa45-4797-871a-53247ebaf43e, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 20 15:43:05 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:43:05.827 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[d5604fbd-240e-4516-b55c-e977d1c758d2]: (4, ('Tue Jan 20 03:43:05 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-b5fb4ee9-fa45-4797-871a-53247ebaf43e (03cd23c47de70dcff18cd70c75122fd429c6ff5e41750880fef510538ae5ef75)\n03cd23c47de70dcff18cd70c75122fd429c6ff5e41750880fef510538ae5ef75\nTue Jan 20 03:43:05 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-b5fb4ee9-fa45-4797-871a-53247ebaf43e (03cd23c47de70dcff18cd70c75122fd429c6ff5e41750880fef510538ae5ef75)\n03cd23c47de70dcff18cd70c75122fd429c6ff5e41750880fef510538ae5ef75\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:43:05 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:43:05.828 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[ba94d232-79a2-4ae4-a26a-f004fde9e135]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:43:05 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:43:05.829 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb5fb4ee9-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:43:05 compute-1 kernel: tapb5fb4ee9-f0: left promiscuous mode
Jan 20 15:43:05 compute-1 nova_compute[225855]: 2026-01-20 15:43:05.833 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:43:05 compute-1 nova_compute[225855]: 2026-01-20 15:43:05.843 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:43:05 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:43:05.847 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[a7d8f79c-2b06-4550-aca5-0d81c4d1f4fe]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:43:05 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:43:05.864 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[9a159ce1-3b23-4a2e-b9a1-be6eb7bfc0fc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:43:05 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:43:05.865 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[38098c07-1784-4e95-946b-185fc0ef5d12]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:43:05 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:43:05.882 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[795445f3-9453-4cd2-abb6-ff297bce20e2]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 880331, 'reachable_time': 44384, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 329490, 'error': None, 'target': 'ovnmeta-b5fb4ee9-fa45-4797-871a-53247ebaf43e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:43:05 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:43:05.885 140466 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-b5fb4ee9-fa45-4797-871a-53247ebaf43e deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 20 15:43:05 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:43:05.886 140466 DEBUG oslo.privsep.daemon [-] privsep: reply[16dc837e-0af1-4db2-8c3b-fe64673e7dc7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:43:05 compute-1 systemd[1]: run-netns-ovnmeta\x2db5fb4ee9\x2dfa45\x2d4797\x2d871a\x2d53247ebaf43e.mount: Deactivated successfully.
Jan 20 15:43:06 compute-1 nova_compute[225855]: 2026-01-20 15:43:06.078 225859 INFO nova.virt.libvirt.driver [None req-011eca6b-40b4-4a38-bba5-ce034d3c98eb 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] [instance: 6c6a79bf-04d3-4839-84cc-ab8b383d602c] Deleting instance files /var/lib/nova/instances/6c6a79bf-04d3-4839-84cc-ab8b383d602c_del
Jan 20 15:43:06 compute-1 nova_compute[225855]: 2026-01-20 15:43:06.079 225859 INFO nova.virt.libvirt.driver [None req-011eca6b-40b4-4a38-bba5-ce034d3c98eb 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] [instance: 6c6a79bf-04d3-4839-84cc-ab8b383d602c] Deletion of /var/lib/nova/instances/6c6a79bf-04d3-4839-84cc-ab8b383d602c_del complete
Jan 20 15:43:06 compute-1 nova_compute[225855]: 2026-01-20 15:43:06.352 225859 INFO nova.compute.manager [None req-011eca6b-40b4-4a38-bba5-ce034d3c98eb 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] [instance: 6c6a79bf-04d3-4839-84cc-ab8b383d602c] Took 0.91 seconds to destroy the instance on the hypervisor.
Jan 20 15:43:06 compute-1 nova_compute[225855]: 2026-01-20 15:43:06.353 225859 DEBUG oslo.service.loopingcall [None req-011eca6b-40b4-4a38-bba5-ce034d3c98eb 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 20 15:43:06 compute-1 nova_compute[225855]: 2026-01-20 15:43:06.354 225859 DEBUG nova.compute.manager [-] [instance: 6c6a79bf-04d3-4839-84cc-ab8b383d602c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 20 15:43:06 compute-1 nova_compute[225855]: 2026-01-20 15:43:06.354 225859 DEBUG nova.network.neutron [-] [instance: 6c6a79bf-04d3-4839-84cc-ab8b383d602c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 20 15:43:06 compute-1 nova_compute[225855]: 2026-01-20 15:43:06.442 225859 DEBUG nova.compute.manager [req-83a5d9a7-fcc0-45de-bee9-e3eac7b07931 req-4a5775bf-736b-4def-8e67-7e9d4f4d2886 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 6c6a79bf-04d3-4839-84cc-ab8b383d602c] Received event network-vif-unplugged-9af895a3-cca7-495f-ab5a-68e04355f005 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:43:06 compute-1 nova_compute[225855]: 2026-01-20 15:43:06.442 225859 DEBUG oslo_concurrency.lockutils [req-83a5d9a7-fcc0-45de-bee9-e3eac7b07931 req-4a5775bf-736b-4def-8e67-7e9d4f4d2886 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "6c6a79bf-04d3-4839-84cc-ab8b383d602c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:43:06 compute-1 nova_compute[225855]: 2026-01-20 15:43:06.443 225859 DEBUG oslo_concurrency.lockutils [req-83a5d9a7-fcc0-45de-bee9-e3eac7b07931 req-4a5775bf-736b-4def-8e67-7e9d4f4d2886 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "6c6a79bf-04d3-4839-84cc-ab8b383d602c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:43:06 compute-1 nova_compute[225855]: 2026-01-20 15:43:06.443 225859 DEBUG oslo_concurrency.lockutils [req-83a5d9a7-fcc0-45de-bee9-e3eac7b07931 req-4a5775bf-736b-4def-8e67-7e9d4f4d2886 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "6c6a79bf-04d3-4839-84cc-ab8b383d602c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:43:06 compute-1 nova_compute[225855]: 2026-01-20 15:43:06.443 225859 DEBUG nova.compute.manager [req-83a5d9a7-fcc0-45de-bee9-e3eac7b07931 req-4a5775bf-736b-4def-8e67-7e9d4f4d2886 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 6c6a79bf-04d3-4839-84cc-ab8b383d602c] No waiting events found dispatching network-vif-unplugged-9af895a3-cca7-495f-ab5a-68e04355f005 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:43:06 compute-1 nova_compute[225855]: 2026-01-20 15:43:06.443 225859 DEBUG nova.compute.manager [req-83a5d9a7-fcc0-45de-bee9-e3eac7b07931 req-4a5775bf-736b-4def-8e67-7e9d4f4d2886 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 6c6a79bf-04d3-4839-84cc-ab8b383d602c] Received event network-vif-unplugged-9af895a3-cca7-495f-ab5a-68e04355f005 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 20 15:43:06 compute-1 ceph-mon[81775]: pgmap v3627: 321 pgs: 321 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Jan 20 15:43:07 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:43:07 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:43:07 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:43:07.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:43:07 compute-1 nova_compute[225855]: 2026-01-20 15:43:07.448 225859 DEBUG nova.network.neutron [-] [instance: 6c6a79bf-04d3-4839-84cc-ab8b383d602c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:43:07 compute-1 nova_compute[225855]: 2026-01-20 15:43:07.463 225859 INFO nova.compute.manager [-] [instance: 6c6a79bf-04d3-4839-84cc-ab8b383d602c] Took 1.11 seconds to deallocate network for instance.
Jan 20 15:43:07 compute-1 nova_compute[225855]: 2026-01-20 15:43:07.515 225859 DEBUG oslo_concurrency.lockutils [None req-011eca6b-40b4-4a38-bba5-ce034d3c98eb 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:43:07 compute-1 nova_compute[225855]: 2026-01-20 15:43:07.516 225859 DEBUG oslo_concurrency.lockutils [None req-011eca6b-40b4-4a38-bba5-ce034d3c98eb 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:43:07 compute-1 nova_compute[225855]: 2026-01-20 15:43:07.581 225859 DEBUG oslo_concurrency.processutils [None req-011eca6b-40b4-4a38-bba5-ce034d3c98eb 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:43:07 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:43:07 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:43:07 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:43:07.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:43:08 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:43:08 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4178163900' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:43:08 compute-1 nova_compute[225855]: 2026-01-20 15:43:08.068 225859 DEBUG oslo_concurrency.processutils [None req-011eca6b-40b4-4a38-bba5-ce034d3c98eb 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:43:08 compute-1 nova_compute[225855]: 2026-01-20 15:43:08.075 225859 DEBUG nova.compute.provider_tree [None req-011eca6b-40b4-4a38-bba5-ce034d3c98eb 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:43:08 compute-1 nova_compute[225855]: 2026-01-20 15:43:08.137 225859 DEBUG nova.scheduler.client.report [None req-011eca6b-40b4-4a38-bba5-ce034d3c98eb 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:43:08 compute-1 nova_compute[225855]: 2026-01-20 15:43:08.160 225859 DEBUG oslo_concurrency.lockutils [None req-011eca6b-40b4-4a38-bba5-ce034d3c98eb 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.644s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:43:08 compute-1 nova_compute[225855]: 2026-01-20 15:43:08.189 225859 INFO nova.scheduler.client.report [None req-011eca6b-40b4-4a38-bba5-ce034d3c98eb 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Deleted allocations for instance 6c6a79bf-04d3-4839-84cc-ab8b383d602c
Jan 20 15:43:08 compute-1 nova_compute[225855]: 2026-01-20 15:43:08.257 225859 DEBUG oslo_concurrency.lockutils [None req-011eca6b-40b4-4a38-bba5-ce034d3c98eb 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Lock "6c6a79bf-04d3-4839-84cc-ab8b383d602c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.817s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:43:08 compute-1 nova_compute[225855]: 2026-01-20 15:43:08.524 225859 DEBUG nova.compute.manager [req-7b4f883c-3014-4002-a0b2-08c791dafe63 req-c6584fb1-9b0b-404f-90de-19f41af7fc94 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 6c6a79bf-04d3-4839-84cc-ab8b383d602c] Received event network-vif-plugged-9af895a3-cca7-495f-ab5a-68e04355f005 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:43:08 compute-1 nova_compute[225855]: 2026-01-20 15:43:08.525 225859 DEBUG oslo_concurrency.lockutils [req-7b4f883c-3014-4002-a0b2-08c791dafe63 req-c6584fb1-9b0b-404f-90de-19f41af7fc94 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "6c6a79bf-04d3-4839-84cc-ab8b383d602c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:43:08 compute-1 nova_compute[225855]: 2026-01-20 15:43:08.525 225859 DEBUG oslo_concurrency.lockutils [req-7b4f883c-3014-4002-a0b2-08c791dafe63 req-c6584fb1-9b0b-404f-90de-19f41af7fc94 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "6c6a79bf-04d3-4839-84cc-ab8b383d602c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:43:08 compute-1 nova_compute[225855]: 2026-01-20 15:43:08.525 225859 DEBUG oslo_concurrency.lockutils [req-7b4f883c-3014-4002-a0b2-08c791dafe63 req-c6584fb1-9b0b-404f-90de-19f41af7fc94 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "6c6a79bf-04d3-4839-84cc-ab8b383d602c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:43:08 compute-1 nova_compute[225855]: 2026-01-20 15:43:08.526 225859 DEBUG nova.compute.manager [req-7b4f883c-3014-4002-a0b2-08c791dafe63 req-c6584fb1-9b0b-404f-90de-19f41af7fc94 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 6c6a79bf-04d3-4839-84cc-ab8b383d602c] No waiting events found dispatching network-vif-plugged-9af895a3-cca7-495f-ab5a-68e04355f005 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:43:08 compute-1 nova_compute[225855]: 2026-01-20 15:43:08.526 225859 WARNING nova.compute.manager [req-7b4f883c-3014-4002-a0b2-08c791dafe63 req-c6584fb1-9b0b-404f-90de-19f41af7fc94 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 6c6a79bf-04d3-4839-84cc-ab8b383d602c] Received unexpected event network-vif-plugged-9af895a3-cca7-495f-ab5a-68e04355f005 for instance with vm_state deleted and task_state None.
Jan 20 15:43:08 compute-1 nova_compute[225855]: 2026-01-20 15:43:08.527 225859 DEBUG nova.compute.manager [req-7b4f883c-3014-4002-a0b2-08c791dafe63 req-c6584fb1-9b0b-404f-90de-19f41af7fc94 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 6c6a79bf-04d3-4839-84cc-ab8b383d602c] Received event network-vif-deleted-9af895a3-cca7-495f-ab5a-68e04355f005 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:43:08 compute-1 ceph-mon[81775]: pgmap v3628: 321 pgs: 321 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Jan 20 15:43:08 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/4178163900' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:43:09 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:43:09 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:43:09 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:43:09 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:43:09.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:43:09 compute-1 nova_compute[225855]: 2026-01-20 15:43:09.583 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:43:09 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:43:09 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:43:09 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:43:09.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:43:10 compute-1 ceph-mon[81775]: pgmap v3629: 321 pgs: 321 active+clean; 248 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 329 KiB/s rd, 2.1 MiB/s wr, 69 op/s
Jan 20 15:43:10 compute-1 nova_compute[225855]: 2026-01-20 15:43:10.713 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:43:11 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:43:11 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:43:11 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:43:11.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:43:11 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:43:11 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:43:11 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:43:11.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:43:12 compute-1 podman[329517]: 2026-01-20 15:43:12.003680759 +0000 UTC m=+0.052261518 container health_status 533cd8e0c3b8b7f910f15cef49e7f879ab46c12ce646a35628c15f13725eb500 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 20 15:43:12 compute-1 ceph-mon[81775]: pgmap v3630: 321 pgs: 321 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 186 KiB/s rd, 1.1 MiB/s wr, 65 op/s
Jan 20 15:43:13 compute-1 sudo[329537]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:43:13 compute-1 sudo[329537]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:43:13 compute-1 sudo[329537]: pam_unix(sudo:session): session closed for user root
Jan 20 15:43:13 compute-1 sudo[329562]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:43:13 compute-1 sudo[329562]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:43:13 compute-1 sudo[329562]: pam_unix(sudo:session): session closed for user root
Jan 20 15:43:13 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:43:13 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:43:13 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:43:13.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:43:13 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 20 15:43:13 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/454088694' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:43:13 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 20 15:43:13 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/454088694' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:43:13 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/454088694' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:43:13 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/454088694' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:43:13 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:43:13 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:43:13 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:43:13.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:43:14 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:43:14 compute-1 nova_compute[225855]: 2026-01-20 15:43:14.583 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:43:14 compute-1 ceph-mon[81775]: pgmap v3631: 321 pgs: 321 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 37 KiB/s rd, 14 KiB/s wr, 29 op/s
Jan 20 15:43:14 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/1694918340' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:43:15 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:43:15 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:43:15 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:43:15.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:43:15 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:43:15 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:43:15 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:43:15.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:43:15 compute-1 nova_compute[225855]: 2026-01-20 15:43:15.716 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:43:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:43:16.465 140354 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:43:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:43:16.465 140354 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:43:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:43:16.465 140354 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:43:16 compute-1 ceph-mon[81775]: pgmap v3632: 321 pgs: 321 active+clean; 159 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 53 KiB/s rd, 15 KiB/s wr, 51 op/s
Jan 20 15:43:17 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:43:17 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:43:17 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:43:17.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:43:17 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:43:17 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:43:17 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:43:17.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:43:18 compute-1 ceph-mon[81775]: pgmap v3633: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 38 KiB/s rd, 5.3 KiB/s wr, 56 op/s
Jan 20 15:43:19 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:43:19 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:43:19 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:43:19 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:43:19.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:43:19 compute-1 nova_compute[225855]: 2026-01-20 15:43:19.585 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:43:19 compute-1 nova_compute[225855]: 2026-01-20 15:43:19.694 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:43:19 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:43:19 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:43:19 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:43:19.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:43:19 compute-1 ceph-mon[81775]: pgmap v3634: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 37 KiB/s rd, 4.7 KiB/s wr, 54 op/s
Jan 20 15:43:19 compute-1 nova_compute[225855]: 2026-01-20 15:43:19.768 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:43:20 compute-1 nova_compute[225855]: 2026-01-20 15:43:20.684 225859 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1768923785.6833277, 6c6a79bf-04d3-4839-84cc-ab8b383d602c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:43:20 compute-1 nova_compute[225855]: 2026-01-20 15:43:20.684 225859 INFO nova.compute.manager [-] [instance: 6c6a79bf-04d3-4839-84cc-ab8b383d602c] VM Stopped (Lifecycle Event)
Jan 20 15:43:20 compute-1 nova_compute[225855]: 2026-01-20 15:43:20.718 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:43:20 compute-1 nova_compute[225855]: 2026-01-20 15:43:20.983 225859 DEBUG nova.compute.manager [None req-c5a9e3e5-fc1b-467a-b510-071df4d8b152 - - - - - -] [instance: 6c6a79bf-04d3-4839-84cc-ab8b383d602c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:43:21 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:43:21 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:43:21 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:43:21.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:43:21 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:43:21 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:43:21 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:43:21.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:43:22 compute-1 ceph-mon[81775]: pgmap v3635: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 35 KiB/s rd, 3.7 KiB/s wr, 50 op/s
Jan 20 15:43:23 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:43:23 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:43:23 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:43:23.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:43:23 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:43:23 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:43:23 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:43:23.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:43:24 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:43:24 compute-1 nova_compute[225855]: 2026-01-20 15:43:24.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:43:24 compute-1 nova_compute[225855]: 2026-01-20 15:43:24.340 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 15:43:24 compute-1 nova_compute[225855]: 2026-01-20 15:43:24.340 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 20 15:43:24 compute-1 nova_compute[225855]: 2026-01-20 15:43:24.407 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 20 15:43:24 compute-1 nova_compute[225855]: 2026-01-20 15:43:24.588 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:43:24 compute-1 ceph-mon[81775]: pgmap v3636: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 2.2 KiB/s wr, 27 op/s
Jan 20 15:43:24 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/1142949718' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:43:25 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:43:25 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:43:25 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:43:25.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:43:25 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/578300199' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:43:25 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:43:25 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:43:25 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:43:25.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:43:25 compute-1 nova_compute[225855]: 2026-01-20 15:43:25.721 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:43:26 compute-1 nova_compute[225855]: 2026-01-20 15:43:26.340 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:43:26 compute-1 nova_compute[225855]: 2026-01-20 15:43:26.340 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 15:43:26 compute-1 ceph-mon[81775]: pgmap v3637: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 2.2 KiB/s wr, 27 op/s
Jan 20 15:43:27 compute-1 ceph-mgr[82135]: client.0 ms_handle_reset on v2:192.168.122.100:6800/2542147622
Jan 20 15:43:27 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:43:27 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:43:27 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:43:27.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:43:27 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:43:27 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:43:27 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:43:27.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:43:28 compute-1 nova_compute[225855]: 2026-01-20 15:43:28.340 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:43:28 compute-1 nova_compute[225855]: 2026-01-20 15:43:28.340 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:43:28 compute-1 nova_compute[225855]: 2026-01-20 15:43:28.370 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:43:28 compute-1 nova_compute[225855]: 2026-01-20 15:43:28.371 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:43:28 compute-1 nova_compute[225855]: 2026-01-20 15:43:28.371 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:43:28 compute-1 nova_compute[225855]: 2026-01-20 15:43:28.372 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 15:43:28 compute-1 nova_compute[225855]: 2026-01-20 15:43:28.372 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:43:28 compute-1 ceph-mon[81775]: pgmap v3638: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.4 KiB/s rd, 852 B/s wr, 6 op/s
Jan 20 15:43:28 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/3618259457' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:43:28 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:43:28 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3199149310' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:43:28 compute-1 nova_compute[225855]: 2026-01-20 15:43:28.831 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:43:28 compute-1 nova_compute[225855]: 2026-01-20 15:43:28.990 225859 WARNING nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 15:43:28 compute-1 nova_compute[225855]: 2026-01-20 15:43:28.991 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4230MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 15:43:28 compute-1 nova_compute[225855]: 2026-01-20 15:43:28.991 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:43:28 compute-1 nova_compute[225855]: 2026-01-20 15:43:28.991 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:43:29 compute-1 nova_compute[225855]: 2026-01-20 15:43:29.150 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 15:43:29 compute-1 nova_compute[225855]: 2026-01-20 15:43:29.151 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 15:43:29 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:43:29 compute-1 nova_compute[225855]: 2026-01-20 15:43:29.291 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:43:29 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:43:29 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:43:29 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:43:29.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:43:29 compute-1 nova_compute[225855]: 2026-01-20 15:43:29.589 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:43:29 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:43:29 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/204372452' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:43:29 compute-1 nova_compute[225855]: 2026-01-20 15:43:29.706 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.415s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:43:29 compute-1 nova_compute[225855]: 2026-01-20 15:43:29.711 225859 DEBUG nova.compute.provider_tree [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:43:29 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/3199149310' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:43:29 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/2347421671' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:43:29 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/204372452' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:43:29 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:43:29 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:43:29 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:43:29.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:43:29 compute-1 nova_compute[225855]: 2026-01-20 15:43:29.727 225859 DEBUG nova.scheduler.client.report [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:43:29 compute-1 nova_compute[225855]: 2026-01-20 15:43:29.753 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 15:43:29 compute-1 nova_compute[225855]: 2026-01-20 15:43:29.753 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.762s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:43:30 compute-1 ceph-mon[81775]: pgmap v3639: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:43:30 compute-1 nova_compute[225855]: 2026-01-20 15:43:30.722 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:43:30 compute-1 nova_compute[225855]: 2026-01-20 15:43:30.754 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:43:30 compute-1 nova_compute[225855]: 2026-01-20 15:43:30.754 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:43:31 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:43:31 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:43:31 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:43:31.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:43:31 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:43:31 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:43:31 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:43:31.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:43:31 compute-1 ceph-mon[81775]: pgmap v3640: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:43:33 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:43:33.221 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=88, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '12:bb:42', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '06:92:24:f7:15:56'}, ipsec=False) old=SB_Global(nb_cfg=87) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:43:33 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:43:33.222 140354 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 20 15:43:33 compute-1 nova_compute[225855]: 2026-01-20 15:43:33.223 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:43:33 compute-1 nova_compute[225855]: 2026-01-20 15:43:33.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:43:33 compute-1 nova_compute[225855]: 2026-01-20 15:43:33.340 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:43:33 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:43:33 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:43:33 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:43:33.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:43:33 compute-1 sudo[329643]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:43:33 compute-1 sudo[329643]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:43:33 compute-1 sudo[329643]: pam_unix(sudo:session): session closed for user root
Jan 20 15:43:33 compute-1 sudo[329668]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:43:33 compute-1 sudo[329668]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:43:33 compute-1 sudo[329668]: pam_unix(sudo:session): session closed for user root
Jan 20 15:43:33 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:43:33 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:43:33 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:43:33.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:43:34 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:43:34 compute-1 ceph-mon[81775]: pgmap v3641: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:43:34 compute-1 nova_compute[225855]: 2026-01-20 15:43:34.592 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:43:35 compute-1 podman[329694]: 2026-01-20 15:43:35.033258866 +0000 UTC m=+0.079032205 container health_status 72be95c12e041eb2cd1a16ad70dfa1391d9f03a5f3180a614ead9459dcfcd46f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller)
Jan 20 15:43:35 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:43:35 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:43:35 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:43:35.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:43:35 compute-1 nova_compute[225855]: 2026-01-20 15:43:35.725 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:43:35 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:43:35 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:43:35 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:43:35.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:43:36 compute-1 nova_compute[225855]: 2026-01-20 15:43:36.335 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:43:36 compute-1 ceph-mon[81775]: pgmap v3642: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:43:37 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:43:37 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:43:37 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:43:37.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:43:37 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:43:37 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:43:37 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:43:37.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:43:37 compute-1 ceph-mon[81775]: pgmap v3643: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:43:38 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/1438658562' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:43:39 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:43:39 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:43:39 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:43:39 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:43:39.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:43:39 compute-1 nova_compute[225855]: 2026-01-20 15:43:39.593 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:43:39 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:43:39 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:43:39 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:43:39.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:43:39 compute-1 ceph-mon[81775]: pgmap v3644: 321 pgs: 321 active+clean; 127 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.8 KiB/s rd, 219 KiB/s wr, 2 op/s
Jan 20 15:43:40 compute-1 nova_compute[225855]: 2026-01-20 15:43:40.727 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:43:41 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:43:41 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:43:41 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:43:41.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:43:41 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:43:41 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:43:41 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:43:41.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:43:42 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:43:42.224 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5ffd4ac3-9266-4927-98ad-20a17782c725, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '88'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:43:42 compute-1 ceph-mon[81775]: pgmap v3645: 321 pgs: 321 active+clean; 163 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 9.4 KiB/s rd, 1.7 MiB/s wr, 18 op/s
Jan 20 15:43:43 compute-1 podman[329727]: 2026-01-20 15:43:43.003900977 +0000 UTC m=+0.043115559 container health_status 533cd8e0c3b8b7f910f15cef49e7f879ab46c12ce646a35628c15f13725eb500 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 20 15:43:43 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:43:43 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:43:43 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:43:43.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:43:43 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:43:43 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:43:43 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:43:43.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:43:44 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:43:44 compute-1 nova_compute[225855]: 2026-01-20 15:43:44.595 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:43:44 compute-1 ceph-mon[81775]: pgmap v3646: 321 pgs: 321 active+clean; 163 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 9.4 KiB/s rd, 1.7 MiB/s wr, 18 op/s
Jan 20 15:43:45 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:43:45 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:43:45 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:43:45.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:43:45 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:43:45 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:43:45 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:43:45.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:43:45 compute-1 nova_compute[225855]: 2026-01-20 15:43:45.771 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:43:46 compute-1 ceph-mon[81775]: pgmap v3647: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 20 15:43:46 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/697914505' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:43:47 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:43:47 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:43:47 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:43:47.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:43:47 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:43:47 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:43:47 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:43:47.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:43:47 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/3464045524' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:43:48 compute-1 ceph-mon[81775]: pgmap v3648: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 20 15:43:49 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:43:49 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:43:49 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:43:49 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:43:49.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:43:49 compute-1 nova_compute[225855]: 2026-01-20 15:43:49.642 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:43:49 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:43:49 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:43:49 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:43:49.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:43:49 compute-1 ceph-mon[81775]: pgmap v3649: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 29 op/s
Jan 20 15:43:50 compute-1 nova_compute[225855]: 2026-01-20 15:43:50.813 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:43:51 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:43:51 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:43:51 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:43:51.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:43:51 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:43:51 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:43:51 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:43:51.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:43:52 compute-1 ceph-mon[81775]: pgmap v3650: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 285 KiB/s rd, 1.6 MiB/s wr, 44 op/s
Jan 20 15:43:53 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:43:53 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:43:53 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:43:53.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:43:53 compute-1 sudo[329752]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:43:53 compute-1 sudo[329752]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:43:53 compute-1 sudo[329752]: pam_unix(sudo:session): session closed for user root
Jan 20 15:43:53 compute-1 sudo[329777]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:43:53 compute-1 sudo[329777]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:43:53 compute-1 sudo[329777]: pam_unix(sudo:session): session closed for user root
Jan 20 15:43:53 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:43:53 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:43:53 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:43:53.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:43:54 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:43:54 compute-1 nova_compute[225855]: 2026-01-20 15:43:54.644 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:43:54 compute-1 ceph-mon[81775]: pgmap v3651: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 277 KiB/s rd, 56 KiB/s wr, 29 op/s
Jan 20 15:43:55 compute-1 nova_compute[225855]: 2026-01-20 15:43:55.335 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:43:55 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:43:55 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:43:55 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:43:55.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:43:55 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:43:55 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:43:55 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:43:55.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:43:55 compute-1 nova_compute[225855]: 2026-01-20 15:43:55.860 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:43:56 compute-1 ceph-mon[81775]: pgmap v3652: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 56 KiB/s wr, 82 op/s
Jan 20 15:43:57 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:43:57 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:43:57 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:43:57.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:43:57 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:43:57 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:43:57 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:43:57.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:43:58 compute-1 ceph-mon[81775]: pgmap v3653: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 20 15:43:59 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:43:59 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:43:59 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:43:59 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:43:59.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:43:59 compute-1 nova_compute[225855]: 2026-01-20 15:43:59.691 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:43:59 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:43:59 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:43:59 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:43:59.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:43:59 compute-1 ceph-mon[81775]: pgmap v3654: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 20 15:44:00 compute-1 nova_compute[225855]: 2026-01-20 15:44:00.899 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:44:01 compute-1 ovn_controller[130490]: 2026-01-20T15:44:01Z|00999|memory_trim|INFO|Detected inactivity (last active 30009 ms ago): trimming memory
Jan 20 15:44:01 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:44:01 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.004000113s ======
Jan 20 15:44:01 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:44:01.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.004000113s
Jan 20 15:44:01 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:44:01 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:44:01 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:44:01.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:44:02 compute-1 ceph-mon[81775]: pgmap v3655: 321 pgs: 321 active+clean; 179 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 973 KiB/s wr, 83 op/s
Jan 20 15:44:03 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:44:03 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:44:03 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:44:03.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:44:03 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:44:03 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:44:03 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:44:03.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:44:04 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:44:04 compute-1 sudo[329809]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:44:04 compute-1 sudo[329809]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:44:04 compute-1 sudo[329809]: pam_unix(sudo:session): session closed for user root
Jan 20 15:44:04 compute-1 sudo[329834]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:44:04 compute-1 sudo[329834]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:44:04 compute-1 sudo[329834]: pam_unix(sudo:session): session closed for user root
Jan 20 15:44:04 compute-1 sudo[329859]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:44:04 compute-1 sudo[329859]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:44:04 compute-1 sudo[329859]: pam_unix(sudo:session): session closed for user root
Jan 20 15:44:04 compute-1 sudo[329884]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 20 15:44:04 compute-1 sudo[329884]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:44:04 compute-1 ceph-mon[81775]: pgmap v3656: 321 pgs: 321 active+clean; 179 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 960 KiB/s wr, 65 op/s
Jan 20 15:44:04 compute-1 nova_compute[225855]: 2026-01-20 15:44:04.730 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:44:04 compute-1 sudo[329884]: pam_unix(sudo:session): session closed for user root
Jan 20 15:44:05 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:44:05 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:44:05 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:44:05.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:44:05 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:44:05 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 15:44:05 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:44:05 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 15:44:05 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 15:44:05 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:44:05 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:44:05 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:44:05 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:44:05.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:44:05 compute-1 nova_compute[225855]: 2026-01-20 15:44:05.943 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:44:06 compute-1 podman[329941]: 2026-01-20 15:44:06.055941768 +0000 UTC m=+0.088184572 container health_status 72be95c12e041eb2cd1a16ad70dfa1391d9f03a5f3180a614ead9459dcfcd46f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 20 15:44:06 compute-1 ceph-mon[81775]: pgmap v3657: 321 pgs: 321 active+clean; 196 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 2.0 MiB/s wr, 99 op/s
Jan 20 15:44:07 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:44:07 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:44:07 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:44:07.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:44:07 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:44:07 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:44:07 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:44:07.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:44:08 compute-1 ceph-mon[81775]: pgmap v3658: 321 pgs: 321 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 305 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Jan 20 15:44:09 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:44:09 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:44:09 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:44:09 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:44:09.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:44:09 compute-1 nova_compute[225855]: 2026-01-20 15:44:09.733 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:44:09 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:44:09 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:44:09 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:44:09.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:44:10 compute-1 ceph-mon[81775]: pgmap v3659: 321 pgs: 321 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 305 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Jan 20 15:44:10 compute-1 nova_compute[225855]: 2026-01-20 15:44:10.946 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:44:11 compute-1 sudo[329971]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:44:11 compute-1 sudo[329971]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:44:11 compute-1 sudo[329971]: pam_unix(sudo:session): session closed for user root
Jan 20 15:44:11 compute-1 sudo[329996]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 15:44:11 compute-1 sudo[329996]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:44:11 compute-1 sudo[329996]: pam_unix(sudo:session): session closed for user root
Jan 20 15:44:11 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:44:11 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:44:11 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:44:11.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:44:11 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:44:11 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:44:11 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:44:11.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:44:11 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:44:11 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:44:11 compute-1 ceph-mon[81775]: pgmap v3660: 321 pgs: 321 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 305 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Jan 20 15:44:13 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:44:13 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:44:13 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:44:13.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:44:13 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 20 15:44:13 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/889081625' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:44:13 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 20 15:44:13 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/889081625' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:44:13 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:44:13 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:44:13 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:44:13.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:44:13 compute-1 sudo[330022]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:44:13 compute-1 sudo[330022]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:44:13 compute-1 sudo[330022]: pam_unix(sudo:session): session closed for user root
Jan 20 15:44:13 compute-1 sudo[330049]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:44:13 compute-1 sudo[330049]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:44:13 compute-1 sudo[330049]: pam_unix(sudo:session): session closed for user root
Jan 20 15:44:13 compute-1 podman[330046]: 2026-01-20 15:44:13.850882754 +0000 UTC m=+0.046476774 container health_status 533cd8e0c3b8b7f910f15cef49e7f879ab46c12ce646a35628c15f13725eb500 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Jan 20 15:44:14 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:44:14 compute-1 ceph-mon[81775]: pgmap v3661: 321 pgs: 321 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 258 KiB/s rd, 1.2 MiB/s wr, 50 op/s
Jan 20 15:44:14 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/889081625' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:44:14 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/889081625' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:44:14 compute-1 nova_compute[225855]: 2026-01-20 15:44:14.735 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:44:15 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:44:15 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:44:15 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:44:15.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:44:15 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:44:15 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:44:15 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:44:15.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:44:15 compute-1 nova_compute[225855]: 2026-01-20 15:44:15.986 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:44:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:44:16.466 140354 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:44:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:44:16.466 140354 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:44:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:44:16.466 140354 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:44:16 compute-1 ceph-mon[81775]: pgmap v3662: 321 pgs: 321 active+clean; 158 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 260 KiB/s rd, 1.2 MiB/s wr, 55 op/s
Jan 20 15:44:17 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:44:17.177 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=89, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '12:bb:42', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '06:92:24:f7:15:56'}, ipsec=False) old=SB_Global(nb_cfg=88) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:44:17 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:44:17.178 140354 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 20 15:44:17 compute-1 nova_compute[225855]: 2026-01-20 15:44:17.187 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:44:17 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:44:17 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:44:17 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:44:17.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:44:17 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:44:17 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:44:17 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:44:17.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:44:18 compute-1 ceph-mon[81775]: pgmap v3663: 321 pgs: 321 active+clean; 135 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 68 KiB/s rd, 100 KiB/s wr, 28 op/s
Jan 20 15:44:18 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/3037305259' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:44:19 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:44:19 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:44:19 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:44:19 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:44:19.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:44:19 compute-1 nova_compute[225855]: 2026-01-20 15:44:19.736 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:44:19 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:44:19 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:44:19 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:44:19.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:44:20 compute-1 ceph-mon[81775]: pgmap v3664: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 12 KiB/s rd, 17 KiB/s wr, 20 op/s
Jan 20 15:44:20 compute-1 nova_compute[225855]: 2026-01-20 15:44:20.988 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:44:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:44:21.180 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5ffd4ac3-9266-4927-98ad-20a17782c725, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '89'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:44:21 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:44:21 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:44:21 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:44:21.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:44:21 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:44:21 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:44:21 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:44:21.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:44:22 compute-1 ceph-mon[81775]: pgmap v3665: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 17 KiB/s wr, 28 op/s
Jan 20 15:44:23 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:44:23 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:44:23 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:44:23.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:44:23 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:44:23 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:44:23 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:44:23.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:44:24 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:44:24 compute-1 ceph-mon[81775]: pgmap v3666: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 5.8 KiB/s wr, 27 op/s
Jan 20 15:44:24 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/2793886615' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:44:24 compute-1 nova_compute[225855]: 2026-01-20 15:44:24.738 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:44:25 compute-1 nova_compute[225855]: 2026-01-20 15:44:25.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:44:25 compute-1 nova_compute[225855]: 2026-01-20 15:44:25.339 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 15:44:25 compute-1 nova_compute[225855]: 2026-01-20 15:44:25.340 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 20 15:44:25 compute-1 nova_compute[225855]: 2026-01-20 15:44:25.358 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 20 15:44:25 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:44:25 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:44:25 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:44:25.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:44:25 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/2855771777' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:44:25 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:44:25 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:44:25 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:44:25.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:44:25 compute-1 nova_compute[225855]: 2026-01-20 15:44:25.990 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:44:26 compute-1 ceph-mon[81775]: pgmap v3667: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 5.8 KiB/s wr, 27 op/s
Jan 20 15:44:27 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:44:27 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:44:27 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:44:27.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:44:27 compute-1 ceph-mon[81775]: pgmap v3668: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 5.5 KiB/s wr, 22 op/s
Jan 20 15:44:27 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:44:27 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:44:27 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:44:27.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:44:28 compute-1 nova_compute[225855]: 2026-01-20 15:44:28.340 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:44:28 compute-1 nova_compute[225855]: 2026-01-20 15:44:28.340 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 15:44:29 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:44:29 compute-1 nova_compute[225855]: 2026-01-20 15:44:29.340 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:44:29 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:44:29 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:44:29 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:44:29.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:44:29 compute-1 nova_compute[225855]: 2026-01-20 15:44:29.740 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:44:29 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:44:29 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:44:29 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:44:29.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:44:30 compute-1 nova_compute[225855]: 2026-01-20 15:44:30.340 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:44:30 compute-1 nova_compute[225855]: 2026-01-20 15:44:30.364 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:44:30 compute-1 nova_compute[225855]: 2026-01-20 15:44:30.365 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:44:30 compute-1 nova_compute[225855]: 2026-01-20 15:44:30.365 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:44:30 compute-1 nova_compute[225855]: 2026-01-20 15:44:30.365 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 15:44:30 compute-1 nova_compute[225855]: 2026-01-20 15:44:30.366 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:44:30 compute-1 ceph-mon[81775]: pgmap v3669: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 13 KiB/s rd, 0 B/s wr, 16 op/s
Jan 20 15:44:30 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/1016780540' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:44:30 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:44:30 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2706026692' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:44:30 compute-1 nova_compute[225855]: 2026-01-20 15:44:30.804 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:44:30 compute-1 nova_compute[225855]: 2026-01-20 15:44:30.953 225859 WARNING nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 15:44:30 compute-1 nova_compute[225855]: 2026-01-20 15:44:30.954 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4253MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 15:44:30 compute-1 nova_compute[225855]: 2026-01-20 15:44:30.954 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:44:30 compute-1 nova_compute[225855]: 2026-01-20 15:44:30.955 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:44:30 compute-1 nova_compute[225855]: 2026-01-20 15:44:30.991 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:44:31 compute-1 nova_compute[225855]: 2026-01-20 15:44:31.025 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 15:44:31 compute-1 nova_compute[225855]: 2026-01-20 15:44:31.026 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 15:44:31 compute-1 nova_compute[225855]: 2026-01-20 15:44:31.043 225859 DEBUG nova.scheduler.client.report [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Refreshing inventories for resource provider bbb02880-a710-4ac1-8b2c-5c09765848d1 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 20 15:44:31 compute-1 nova_compute[225855]: 2026-01-20 15:44:31.067 225859 DEBUG nova.scheduler.client.report [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Updating ProviderTree inventory for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 20 15:44:31 compute-1 nova_compute[225855]: 2026-01-20 15:44:31.067 225859 DEBUG nova.compute.provider_tree [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Updating inventory in ProviderTree for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 20 15:44:31 compute-1 nova_compute[225855]: 2026-01-20 15:44:31.097 225859 DEBUG nova.scheduler.client.report [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Refreshing aggregate associations for resource provider bbb02880-a710-4ac1-8b2c-5c09765848d1, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 20 15:44:31 compute-1 nova_compute[225855]: 2026-01-20 15:44:31.128 225859 DEBUG nova.scheduler.client.report [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Refreshing trait associations for resource provider bbb02880-a710-4ac1-8b2c-5c09765848d1, traits: COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_STORAGE_BUS_SATA,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_TRUSTED_CERTS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSSE3,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_VOLUME_EXTEND,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_RESCUE_BFV,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_USB,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE42,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SOCKET_PCI_NUMA_AFFINITY _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 20 15:44:31 compute-1 nova_compute[225855]: 2026-01-20 15:44:31.176 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:44:31 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:44:31 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:44:31 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:44:31.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:44:31 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:44:31 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1480609939' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:44:31 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/2706026692' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:44:31 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/3067520434' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:44:31 compute-1 nova_compute[225855]: 2026-01-20 15:44:31.616 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:44:31 compute-1 nova_compute[225855]: 2026-01-20 15:44:31.621 225859 DEBUG nova.compute.provider_tree [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:44:31 compute-1 nova_compute[225855]: 2026-01-20 15:44:31.636 225859 DEBUG nova.scheduler.client.report [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:44:31 compute-1 nova_compute[225855]: 2026-01-20 15:44:31.638 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 15:44:31 compute-1 nova_compute[225855]: 2026-01-20 15:44:31.638 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.683s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:44:31 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:44:31 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:44:31 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:44:31.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:44:32 compute-1 ceph-mon[81775]: pgmap v3670: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 6.8 KiB/s rd, 0 B/s wr, 8 op/s
Jan 20 15:44:32 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/1480609939' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:44:32 compute-1 nova_compute[225855]: 2026-01-20 15:44:32.639 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:44:32 compute-1 nova_compute[225855]: 2026-01-20 15:44:32.639 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:44:33 compute-1 nova_compute[225855]: 2026-01-20 15:44:33.340 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:44:33 compute-1 nova_compute[225855]: 2026-01-20 15:44:33.340 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:44:33 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:44:33 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:44:33 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:44:33.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:44:33 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:44:33 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:44:33 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:44:33.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:44:33 compute-1 sudo[330146]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:44:33 compute-1 sudo[330146]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:44:33 compute-1 sudo[330146]: pam_unix(sudo:session): session closed for user root
Jan 20 15:44:33 compute-1 sudo[330171]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:44:33 compute-1 sudo[330171]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:44:34 compute-1 sudo[330171]: pam_unix(sudo:session): session closed for user root
Jan 20 15:44:34 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:44:34 compute-1 nova_compute[225855]: 2026-01-20 15:44:34.782 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:44:34 compute-1 ceph-mon[81775]: pgmap v3671: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:44:35 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:44:35 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:44:35 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:44:35.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:44:35 compute-1 ceph-mon[81775]: pgmap v3672: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:44:35 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:44:35 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:44:35 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:44:35.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:44:35 compute-1 nova_compute[225855]: 2026-01-20 15:44:35.993 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:44:36 compute-1 podman[330197]: 2026-01-20 15:44:36.297634409 +0000 UTC m=+0.086106824 container health_status 72be95c12e041eb2cd1a16ad70dfa1391d9f03a5f3180a614ead9459dcfcd46f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 20 15:44:37 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/3380038031' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:44:37 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:44:37 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:44:37 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:44:37.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:44:37 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:44:37 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:44:37 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:44:37.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:44:38 compute-1 ceph-mon[81775]: pgmap v3673: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:44:38 compute-1 nova_compute[225855]: 2026-01-20 15:44:38.335 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:44:39 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:44:39 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:44:39 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:44:39 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:44:39.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:44:39 compute-1 nova_compute[225855]: 2026-01-20 15:44:39.784 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:44:39 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:44:39 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:44:39 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:44:39.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:44:40 compute-1 ceph-mon[81775]: pgmap v3674: 321 pgs: 321 active+clean; 126 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 6.4 KiB/s rd, 270 KiB/s wr, 11 op/s
Jan 20 15:44:40 compute-1 nova_compute[225855]: 2026-01-20 15:44:40.995 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:44:41 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:44:41 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:44:41 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:44:41.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:44:41 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:44:41 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:44:41 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:44:41.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:44:42 compute-1 ceph-mon[81775]: pgmap v3675: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 20 15:44:43 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:44:43 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:44:43 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:44:43.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:44:43 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:44:43 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:44:43 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:44:43.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:44:43 compute-1 ceph-mon[81775]: pgmap v3676: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 20 15:44:44 compute-1 podman[330228]: 2026-01-20 15:44:44.068205876 +0000 UTC m=+0.097430505 container health_status 533cd8e0c3b8b7f910f15cef49e7f879ab46c12ce646a35628c15f13725eb500 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 20 15:44:44 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:44:44 compute-1 nova_compute[225855]: 2026-01-20 15:44:44.787 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:44:44 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/2712563561' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:44:45 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:44:45 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:44:45 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:44:45.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:44:45 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:44:45 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:44:45 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:44:45.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:44:45 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/3294861214' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:44:45 compute-1 ceph-mon[81775]: pgmap v3677: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 20 15:44:46 compute-1 nova_compute[225855]: 2026-01-20 15:44:46.034 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:44:46 compute-1 nova_compute[225855]: 2026-01-20 15:44:46.340 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:44:47 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:44:47 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:44:47 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:44:47.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:44:47 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:44:47 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:44:47 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:44:47.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:44:48 compute-1 ceph-mon[81775]: pgmap v3678: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 20 15:44:49 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:44:49 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:44:49 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:44:49 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:44:49.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:44:49 compute-1 nova_compute[225855]: 2026-01-20 15:44:49.789 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:44:49 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:44:49 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:44:49 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:44:49.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:44:50 compute-1 ceph-mon[81775]: pgmap v3679: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Jan 20 15:44:51 compute-1 nova_compute[225855]: 2026-01-20 15:44:51.036 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:44:51 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:44:51 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:44:51 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:44:51.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:44:51 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:44:51 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:44:51 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:44:51.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:44:52 compute-1 ceph-mon[81775]: pgmap v3680: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 590 KiB/s rd, 1.5 MiB/s wr, 45 op/s
Jan 20 15:44:53 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:44:53 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:44:53 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:44:53.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:44:53 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:44:53 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:44:53 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:44:53.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:44:54 compute-1 sudo[330252]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:44:54 compute-1 sudo[330252]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:44:54 compute-1 sudo[330252]: pam_unix(sudo:session): session closed for user root
Jan 20 15:44:54 compute-1 sudo[330277]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:44:54 compute-1 sudo[330277]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:44:54 compute-1 sudo[330277]: pam_unix(sudo:session): session closed for user root
Jan 20 15:44:54 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:44:54 compute-1 ceph-mon[81775]: pgmap v3681: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 579 KiB/s rd, 12 KiB/s wr, 29 op/s
Jan 20 15:44:54 compute-1 nova_compute[225855]: 2026-01-20 15:44:54.789 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:44:55 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:44:55 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:44:55 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:44:55.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:44:55 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:44:55 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:44:55 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:44:55.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:44:56 compute-1 nova_compute[225855]: 2026-01-20 15:44:56.070 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:44:56 compute-1 ceph-mon[81775]: pgmap v3682: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.3 MiB/s rd, 12 KiB/s wr, 52 op/s
Jan 20 15:44:57 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:44:57 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:44:57 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:44:57.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:44:57 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:44:57 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:44:57 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:44:57.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:44:58 compute-1 ceph-mon[81775]: pgmap v3683: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 20 15:44:59 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:44:59 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:44:59 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:44:59 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:44:59.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:44:59 compute-1 nova_compute[225855]: 2026-01-20 15:44:59.790 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:44:59 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:44:59 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:44:59 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:44:59.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:45:00 compute-1 nova_compute[225855]: 2026-01-20 15:45:00.356 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:45:00 compute-1 nova_compute[225855]: 2026-01-20 15:45:00.356 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 20 15:45:00 compute-1 ceph-mon[81775]: pgmap v3684: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 20 15:45:01 compute-1 nova_compute[225855]: 2026-01-20 15:45:01.072 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:45:01 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:45:01 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:45:01 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:45:01.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:45:01 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:45:01 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:45:01 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:45:01.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:45:02 compute-1 ceph-mon[81775]: pgmap v3685: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 68 op/s
Jan 20 15:45:03 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:45:03 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:45:03 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:45:03.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:45:03 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:45:03 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:45:03 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:45:03.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:45:04 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:45:04 compute-1 ceph-mon[81775]: pgmap v3686: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.3 MiB/s rd, 44 op/s
Jan 20 15:45:04 compute-1 nova_compute[225855]: 2026-01-20 15:45:04.793 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:45:05 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:45:05 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:45:05 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:45:05.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:45:05 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:45:05 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:45:05 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:45:05.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:45:06 compute-1 nova_compute[225855]: 2026-01-20 15:45:06.101 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:45:06 compute-1 ceph-mon[81775]: pgmap v3687: 321 pgs: 321 active+clean; 195 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.5 MiB/s rd, 1.2 MiB/s wr, 88 op/s
Jan 20 15:45:07 compute-1 podman[330308]: 2026-01-20 15:45:07.070333368 +0000 UTC m=+0.107331544 container health_status 72be95c12e041eb2cd1a16ad70dfa1391d9f03a5f3180a614ead9459dcfcd46f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ovn_controller)
Jan 20 15:45:07 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:45:07 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:45:07 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:45:07.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:45:07 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:45:07 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:45:07 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:45:07.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:45:08 compute-1 ceph-mon[81775]: pgmap v3688: 321 pgs: 321 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1004 KiB/s rd, 2.1 MiB/s wr, 85 op/s
Jan 20 15:45:09 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:45:09 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:45:09 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:45:09 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:45:09.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:45:09 compute-1 ceph-mon[81775]: pgmap v3689: 321 pgs: 321 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Jan 20 15:45:09 compute-1 nova_compute[225855]: 2026-01-20 15:45:09.795 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:45:09 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:45:09 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:45:09 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:45:09.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:45:11 compute-1 nova_compute[225855]: 2026-01-20 15:45:11.139 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:45:11 compute-1 sudo[330337]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:45:11 compute-1 sudo[330337]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:45:11 compute-1 sudo[330337]: pam_unix(sudo:session): session closed for user root
Jan 20 15:45:11 compute-1 sudo[330362]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:45:11 compute-1 sudo[330362]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:45:11 compute-1 sudo[330362]: pam_unix(sudo:session): session closed for user root
Jan 20 15:45:11 compute-1 sudo[330387]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:45:11 compute-1 sudo[330387]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:45:11 compute-1 sudo[330387]: pam_unix(sudo:session): session closed for user root
Jan 20 15:45:11 compute-1 sudo[330412]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 20 15:45:11 compute-1 sudo[330412]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:45:11 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:45:11 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:45:11 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:45:11.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:45:11 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:45:11 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:45:11 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:45:11.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:45:11 compute-1 sudo[330412]: pam_unix(sudo:session): session closed for user root
Jan 20 15:45:12 compute-1 nova_compute[225855]: 2026-01-20 15:45:12.361 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:45:12 compute-1 nova_compute[225855]: 2026-01-20 15:45:12.362 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 20 15:45:12 compute-1 nova_compute[225855]: 2026-01-20 15:45:12.378 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 20 15:45:12 compute-1 ceph-mon[81775]: pgmap v3690: 321 pgs: 321 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Jan 20 15:45:12 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:45:12 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 15:45:12 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:45:12 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 15:45:12 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 15:45:12 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:45:13 compute-1 nova_compute[225855]: 2026-01-20 15:45:13.047 225859 DEBUG oslo_concurrency.lockutils [None req-240e01b4-a824-470a-a8e7-b30c4e45ae3f 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Acquiring lock "464c0661-0ddb-4794-8959-db066827326c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:45:13 compute-1 nova_compute[225855]: 2026-01-20 15:45:13.048 225859 DEBUG oslo_concurrency.lockutils [None req-240e01b4-a824-470a-a8e7-b30c4e45ae3f 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Lock "464c0661-0ddb-4794-8959-db066827326c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:45:13 compute-1 nova_compute[225855]: 2026-01-20 15:45:13.067 225859 DEBUG nova.compute.manager [None req-240e01b4-a824-470a-a8e7-b30c4e45ae3f 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] [instance: 464c0661-0ddb-4794-8959-db066827326c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 20 15:45:13 compute-1 nova_compute[225855]: 2026-01-20 15:45:13.159 225859 DEBUG oslo_concurrency.lockutils [None req-240e01b4-a824-470a-a8e7-b30c4e45ae3f 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:45:13 compute-1 nova_compute[225855]: 2026-01-20 15:45:13.160 225859 DEBUG oslo_concurrency.lockutils [None req-240e01b4-a824-470a-a8e7-b30c4e45ae3f 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:45:13 compute-1 nova_compute[225855]: 2026-01-20 15:45:13.167 225859 DEBUG nova.virt.hardware [None req-240e01b4-a824-470a-a8e7-b30c4e45ae3f 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 20 15:45:13 compute-1 nova_compute[225855]: 2026-01-20 15:45:13.167 225859 INFO nova.compute.claims [None req-240e01b4-a824-470a-a8e7-b30c4e45ae3f 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] [instance: 464c0661-0ddb-4794-8959-db066827326c] Claim successful on node compute-1.ctlplane.example.com
Jan 20 15:45:13 compute-1 nova_compute[225855]: 2026-01-20 15:45:13.282 225859 DEBUG oslo_concurrency.processutils [None req-240e01b4-a824-470a-a8e7-b30c4e45ae3f 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:45:13 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:45:13 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:45:13 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:45:13.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:45:13 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:45:13 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4231837537' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:45:13 compute-1 nova_compute[225855]: 2026-01-20 15:45:13.718 225859 DEBUG oslo_concurrency.processutils [None req-240e01b4-a824-470a-a8e7-b30c4e45ae3f 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:45:13 compute-1 nova_compute[225855]: 2026-01-20 15:45:13.728 225859 DEBUG nova.compute.provider_tree [None req-240e01b4-a824-470a-a8e7-b30c4e45ae3f 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:45:13 compute-1 nova_compute[225855]: 2026-01-20 15:45:13.747 225859 DEBUG nova.scheduler.client.report [None req-240e01b4-a824-470a-a8e7-b30c4e45ae3f 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:45:13 compute-1 nova_compute[225855]: 2026-01-20 15:45:13.771 225859 DEBUG oslo_concurrency.lockutils [None req-240e01b4-a824-470a-a8e7-b30c4e45ae3f 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.612s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:45:13 compute-1 nova_compute[225855]: 2026-01-20 15:45:13.772 225859 DEBUG nova.compute.manager [None req-240e01b4-a824-470a-a8e7-b30c4e45ae3f 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] [instance: 464c0661-0ddb-4794-8959-db066827326c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 20 15:45:13 compute-1 nova_compute[225855]: 2026-01-20 15:45:13.877 225859 DEBUG nova.compute.manager [None req-240e01b4-a824-470a-a8e7-b30c4e45ae3f 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] [instance: 464c0661-0ddb-4794-8959-db066827326c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 20 15:45:13 compute-1 nova_compute[225855]: 2026-01-20 15:45:13.878 225859 DEBUG nova.network.neutron [None req-240e01b4-a824-470a-a8e7-b30c4e45ae3f 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] [instance: 464c0661-0ddb-4794-8959-db066827326c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 20 15:45:13 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:45:13 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:45:13 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:45:13.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:45:13 compute-1 nova_compute[225855]: 2026-01-20 15:45:13.925 225859 INFO nova.virt.libvirt.driver [None req-240e01b4-a824-470a-a8e7-b30c4e45ae3f 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] [instance: 464c0661-0ddb-4794-8959-db066827326c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 20 15:45:13 compute-1 nova_compute[225855]: 2026-01-20 15:45:13.977 225859 DEBUG nova.compute.manager [None req-240e01b4-a824-470a-a8e7-b30c4e45ae3f 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] [instance: 464c0661-0ddb-4794-8959-db066827326c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 20 15:45:14 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:45:14 compute-1 nova_compute[225855]: 2026-01-20 15:45:14.216 225859 DEBUG nova.policy [None req-240e01b4-a824-470a-a8e7-b30c4e45ae3f 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '5985ef736503499a9f1d734cabc33ce5', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '728662ec7f654a3fb2e53a90b8707d7e', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 20 15:45:14 compute-1 nova_compute[225855]: 2026-01-20 15:45:14.300 225859 DEBUG nova.compute.manager [None req-240e01b4-a824-470a-a8e7-b30c4e45ae3f 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] [instance: 464c0661-0ddb-4794-8959-db066827326c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 20 15:45:14 compute-1 nova_compute[225855]: 2026-01-20 15:45:14.301 225859 DEBUG nova.virt.libvirt.driver [None req-240e01b4-a824-470a-a8e7-b30c4e45ae3f 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] [instance: 464c0661-0ddb-4794-8959-db066827326c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 20 15:45:14 compute-1 nova_compute[225855]: 2026-01-20 15:45:14.302 225859 INFO nova.virt.libvirt.driver [None req-240e01b4-a824-470a-a8e7-b30c4e45ae3f 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] [instance: 464c0661-0ddb-4794-8959-db066827326c] Creating image(s)
Jan 20 15:45:14 compute-1 sudo[330491]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:45:14 compute-1 sudo[330491]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:45:14 compute-1 sudo[330491]: pam_unix(sudo:session): session closed for user root
Jan 20 15:45:14 compute-1 nova_compute[225855]: 2026-01-20 15:45:14.330 225859 DEBUG nova.storage.rbd_utils [None req-240e01b4-a824-470a-a8e7-b30c4e45ae3f 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] rbd image 464c0661-0ddb-4794-8959-db066827326c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:45:14 compute-1 nova_compute[225855]: 2026-01-20 15:45:14.364 225859 DEBUG nova.storage.rbd_utils [None req-240e01b4-a824-470a-a8e7-b30c4e45ae3f 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] rbd image 464c0661-0ddb-4794-8959-db066827326c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:45:14 compute-1 sudo[330535]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:45:14 compute-1 sudo[330535]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:45:14 compute-1 sudo[330535]: pam_unix(sudo:session): session closed for user root
Jan 20 15:45:14 compute-1 nova_compute[225855]: 2026-01-20 15:45:14.399 225859 DEBUG nova.storage.rbd_utils [None req-240e01b4-a824-470a-a8e7-b30c4e45ae3f 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] rbd image 464c0661-0ddb-4794-8959-db066827326c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:45:14 compute-1 nova_compute[225855]: 2026-01-20 15:45:14.403 225859 DEBUG oslo_concurrency.processutils [None req-240e01b4-a824-470a-a8e7-b30c4e45ae3f 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:45:14 compute-1 podman[330515]: 2026-01-20 15:45:14.423183709 +0000 UTC m=+0.081457173 container health_status 533cd8e0c3b8b7f910f15cef49e7f879ab46c12ce646a35628c15f13725eb500 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Jan 20 15:45:14 compute-1 nova_compute[225855]: 2026-01-20 15:45:14.465 225859 DEBUG oslo_concurrency.processutils [None req-240e01b4-a824-470a-a8e7-b30c4e45ae3f 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:45:14 compute-1 nova_compute[225855]: 2026-01-20 15:45:14.466 225859 DEBUG oslo_concurrency.lockutils [None req-240e01b4-a824-470a-a8e7-b30c4e45ae3f 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Acquiring lock "82d5c1918fd7c974214c7a48c1793a7a82560462" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:45:14 compute-1 nova_compute[225855]: 2026-01-20 15:45:14.467 225859 DEBUG oslo_concurrency.lockutils [None req-240e01b4-a824-470a-a8e7-b30c4e45ae3f 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:45:14 compute-1 nova_compute[225855]: 2026-01-20 15:45:14.467 225859 DEBUG oslo_concurrency.lockutils [None req-240e01b4-a824-470a-a8e7-b30c4e45ae3f 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:45:14 compute-1 nova_compute[225855]: 2026-01-20 15:45:14.491 225859 DEBUG nova.storage.rbd_utils [None req-240e01b4-a824-470a-a8e7-b30c4e45ae3f 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] rbd image 464c0661-0ddb-4794-8959-db066827326c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:45:14 compute-1 nova_compute[225855]: 2026-01-20 15:45:14.495 225859 DEBUG oslo_concurrency.processutils [None req-240e01b4-a824-470a-a8e7-b30c4e45ae3f 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 464c0661-0ddb-4794-8959-db066827326c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:45:14 compute-1 ceph-mon[81775]: pgmap v3691: 321 pgs: 321 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Jan 20 15:45:14 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/1310749465' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:45:14 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/1310749465' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:45:14 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/4231837537' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:45:14 compute-1 nova_compute[225855]: 2026-01-20 15:45:14.754 225859 DEBUG oslo_concurrency.processutils [None req-240e01b4-a824-470a-a8e7-b30c4e45ae3f 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 464c0661-0ddb-4794-8959-db066827326c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.258s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:45:14 compute-1 nova_compute[225855]: 2026-01-20 15:45:14.823 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:45:14 compute-1 nova_compute[225855]: 2026-01-20 15:45:14.831 225859 DEBUG nova.storage.rbd_utils [None req-240e01b4-a824-470a-a8e7-b30c4e45ae3f 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] resizing rbd image 464c0661-0ddb-4794-8959-db066827326c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 20 15:45:14 compute-1 nova_compute[225855]: 2026-01-20 15:45:14.940 225859 DEBUG nova.objects.instance [None req-240e01b4-a824-470a-a8e7-b30c4e45ae3f 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Lazy-loading 'migration_context' on Instance uuid 464c0661-0ddb-4794-8959-db066827326c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:45:14 compute-1 nova_compute[225855]: 2026-01-20 15:45:14.955 225859 DEBUG nova.virt.libvirt.driver [None req-240e01b4-a824-470a-a8e7-b30c4e45ae3f 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] [instance: 464c0661-0ddb-4794-8959-db066827326c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 20 15:45:14 compute-1 nova_compute[225855]: 2026-01-20 15:45:14.956 225859 DEBUG nova.virt.libvirt.driver [None req-240e01b4-a824-470a-a8e7-b30c4e45ae3f 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] [instance: 464c0661-0ddb-4794-8959-db066827326c] Ensure instance console log exists: /var/lib/nova/instances/464c0661-0ddb-4794-8959-db066827326c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 20 15:45:14 compute-1 nova_compute[225855]: 2026-01-20 15:45:14.956 225859 DEBUG oslo_concurrency.lockutils [None req-240e01b4-a824-470a-a8e7-b30c4e45ae3f 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:45:14 compute-1 nova_compute[225855]: 2026-01-20 15:45:14.956 225859 DEBUG oslo_concurrency.lockutils [None req-240e01b4-a824-470a-a8e7-b30c4e45ae3f 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:45:14 compute-1 nova_compute[225855]: 2026-01-20 15:45:14.957 225859 DEBUG oslo_concurrency.lockutils [None req-240e01b4-a824-470a-a8e7-b30c4e45ae3f 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:45:15 compute-1 nova_compute[225855]: 2026-01-20 15:45:15.586 225859 DEBUG nova.network.neutron [None req-240e01b4-a824-470a-a8e7-b30c4e45ae3f 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] [instance: 464c0661-0ddb-4794-8959-db066827326c] Successfully created port: 2d94aa0d-ed38-41aa-9f34-5ed2a83a7304 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 20 15:45:15 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:45:15 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:45:15 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:45:15.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:45:15 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:45:15 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:45:15 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:45:15.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:45:16 compute-1 nova_compute[225855]: 2026-01-20 15:45:16.182 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:45:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:45:16.467 140354 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:45:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:45:16.468 140354 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:45:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:45:16.469 140354 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:45:16 compute-1 nova_compute[225855]: 2026-01-20 15:45:16.543 225859 DEBUG nova.network.neutron [None req-240e01b4-a824-470a-a8e7-b30c4e45ae3f 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] [instance: 464c0661-0ddb-4794-8959-db066827326c] Successfully updated port: 2d94aa0d-ed38-41aa-9f34-5ed2a83a7304 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 20 15:45:16 compute-1 nova_compute[225855]: 2026-01-20 15:45:16.558 225859 DEBUG oslo_concurrency.lockutils [None req-240e01b4-a824-470a-a8e7-b30c4e45ae3f 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Acquiring lock "refresh_cache-464c0661-0ddb-4794-8959-db066827326c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:45:16 compute-1 nova_compute[225855]: 2026-01-20 15:45:16.559 225859 DEBUG oslo_concurrency.lockutils [None req-240e01b4-a824-470a-a8e7-b30c4e45ae3f 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Acquired lock "refresh_cache-464c0661-0ddb-4794-8959-db066827326c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:45:16 compute-1 nova_compute[225855]: 2026-01-20 15:45:16.559 225859 DEBUG nova.network.neutron [None req-240e01b4-a824-470a-a8e7-b30c4e45ae3f 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] [instance: 464c0661-0ddb-4794-8959-db066827326c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 15:45:16 compute-1 nova_compute[225855]: 2026-01-20 15:45:16.639 225859 DEBUG nova.compute.manager [req-41268f81-84af-4d68-b440-89abcbfe12ee req-d5b71cc0-eeae-4bf0-89d6-f004f7f62779 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 464c0661-0ddb-4794-8959-db066827326c] Received event network-changed-2d94aa0d-ed38-41aa-9f34-5ed2a83a7304 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:45:16 compute-1 nova_compute[225855]: 2026-01-20 15:45:16.639 225859 DEBUG nova.compute.manager [req-41268f81-84af-4d68-b440-89abcbfe12ee req-d5b71cc0-eeae-4bf0-89d6-f004f7f62779 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 464c0661-0ddb-4794-8959-db066827326c] Refreshing instance network info cache due to event network-changed-2d94aa0d-ed38-41aa-9f34-5ed2a83a7304. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 15:45:16 compute-1 nova_compute[225855]: 2026-01-20 15:45:16.640 225859 DEBUG oslo_concurrency.lockutils [req-41268f81-84af-4d68-b440-89abcbfe12ee req-d5b71cc0-eeae-4bf0-89d6-f004f7f62779 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-464c0661-0ddb-4794-8959-db066827326c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:45:16 compute-1 ceph-mon[81775]: pgmap v3692: 321 pgs: 321 active+clean; 215 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 342 KiB/s rd, 2.8 MiB/s wr, 87 op/s
Jan 20 15:45:16 compute-1 nova_compute[225855]: 2026-01-20 15:45:16.717 225859 DEBUG nova.network.neutron [None req-240e01b4-a824-470a-a8e7-b30c4e45ae3f 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] [instance: 464c0661-0ddb-4794-8959-db066827326c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 20 15:45:17 compute-1 nova_compute[225855]: 2026-01-20 15:45:17.553 225859 DEBUG nova.network.neutron [None req-240e01b4-a824-470a-a8e7-b30c4e45ae3f 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] [instance: 464c0661-0ddb-4794-8959-db066827326c] Updating instance_info_cache with network_info: [{"id": "2d94aa0d-ed38-41aa-9f34-5ed2a83a7304", "address": "fa:16:3e:87:b1:99", "network": {"id": "6567de92-725d-4dcc-97c2-0fec6d9bda84", "bridge": "br-int", "label": "tempest-network-smoke--623656421", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "728662ec7f654a3fb2e53a90b8707d7e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2d94aa0d-ed", "ovs_interfaceid": "2d94aa0d-ed38-41aa-9f34-5ed2a83a7304", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:45:17 compute-1 nova_compute[225855]: 2026-01-20 15:45:17.573 225859 DEBUG oslo_concurrency.lockutils [None req-240e01b4-a824-470a-a8e7-b30c4e45ae3f 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Releasing lock "refresh_cache-464c0661-0ddb-4794-8959-db066827326c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:45:17 compute-1 nova_compute[225855]: 2026-01-20 15:45:17.573 225859 DEBUG nova.compute.manager [None req-240e01b4-a824-470a-a8e7-b30c4e45ae3f 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] [instance: 464c0661-0ddb-4794-8959-db066827326c] Instance network_info: |[{"id": "2d94aa0d-ed38-41aa-9f34-5ed2a83a7304", "address": "fa:16:3e:87:b1:99", "network": {"id": "6567de92-725d-4dcc-97c2-0fec6d9bda84", "bridge": "br-int", "label": "tempest-network-smoke--623656421", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "728662ec7f654a3fb2e53a90b8707d7e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2d94aa0d-ed", "ovs_interfaceid": "2d94aa0d-ed38-41aa-9f34-5ed2a83a7304", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 20 15:45:17 compute-1 nova_compute[225855]: 2026-01-20 15:45:17.574 225859 DEBUG oslo_concurrency.lockutils [req-41268f81-84af-4d68-b440-89abcbfe12ee req-d5b71cc0-eeae-4bf0-89d6-f004f7f62779 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-464c0661-0ddb-4794-8959-db066827326c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:45:17 compute-1 nova_compute[225855]: 2026-01-20 15:45:17.574 225859 DEBUG nova.network.neutron [req-41268f81-84af-4d68-b440-89abcbfe12ee req-d5b71cc0-eeae-4bf0-89d6-f004f7f62779 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 464c0661-0ddb-4794-8959-db066827326c] Refreshing network info cache for port 2d94aa0d-ed38-41aa-9f34-5ed2a83a7304 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 15:45:17 compute-1 nova_compute[225855]: 2026-01-20 15:45:17.576 225859 DEBUG nova.virt.libvirt.driver [None req-240e01b4-a824-470a-a8e7-b30c4e45ae3f 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] [instance: 464c0661-0ddb-4794-8959-db066827326c] Start _get_guest_xml network_info=[{"id": "2d94aa0d-ed38-41aa-9f34-5ed2a83a7304", "address": "fa:16:3e:87:b1:99", "network": {"id": "6567de92-725d-4dcc-97c2-0fec6d9bda84", "bridge": "br-int", "label": "tempest-network-smoke--623656421", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "728662ec7f654a3fb2e53a90b8707d7e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2d94aa0d-ed", "ovs_interfaceid": "2d94aa0d-ed38-41aa-9f34-5ed2a83a7304", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'device_type': 'disk', 'encryption_options': None, 'size': 0, 'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'guest_format': None, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'image_id': 'a32b3e07-16d8-46fd-9a7b-c242c432fcf9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 20 15:45:17 compute-1 nova_compute[225855]: 2026-01-20 15:45:17.583 225859 WARNING nova.virt.libvirt.driver [None req-240e01b4-a824-470a-a8e7-b30c4e45ae3f 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 15:45:17 compute-1 nova_compute[225855]: 2026-01-20 15:45:17.591 225859 DEBUG nova.virt.libvirt.host [None req-240e01b4-a824-470a-a8e7-b30c4e45ae3f 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 20 15:45:17 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:45:17 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:45:17 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:45:17.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:45:17 compute-1 nova_compute[225855]: 2026-01-20 15:45:17.593 225859 DEBUG nova.virt.libvirt.host [None req-240e01b4-a824-470a-a8e7-b30c4e45ae3f 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 20 15:45:17 compute-1 nova_compute[225855]: 2026-01-20 15:45:17.597 225859 DEBUG nova.virt.libvirt.host [None req-240e01b4-a824-470a-a8e7-b30c4e45ae3f 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 20 15:45:17 compute-1 nova_compute[225855]: 2026-01-20 15:45:17.598 225859 DEBUG nova.virt.libvirt.host [None req-240e01b4-a824-470a-a8e7-b30c4e45ae3f 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 20 15:45:17 compute-1 nova_compute[225855]: 2026-01-20 15:45:17.599 225859 DEBUG nova.virt.libvirt.driver [None req-240e01b4-a824-470a-a8e7-b30c4e45ae3f 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 20 15:45:17 compute-1 nova_compute[225855]: 2026-01-20 15:45:17.600 225859 DEBUG nova.virt.hardware [None req-240e01b4-a824-470a-a8e7-b30c4e45ae3f 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-20T14:21:55Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='522deaab-a741-4dbb-932d-d8b13a211c33',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 20 15:45:17 compute-1 nova_compute[225855]: 2026-01-20 15:45:17.601 225859 DEBUG nova.virt.hardware [None req-240e01b4-a824-470a-a8e7-b30c4e45ae3f 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 20 15:45:17 compute-1 nova_compute[225855]: 2026-01-20 15:45:17.601 225859 DEBUG nova.virt.hardware [None req-240e01b4-a824-470a-a8e7-b30c4e45ae3f 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 20 15:45:17 compute-1 nova_compute[225855]: 2026-01-20 15:45:17.602 225859 DEBUG nova.virt.hardware [None req-240e01b4-a824-470a-a8e7-b30c4e45ae3f 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 20 15:45:17 compute-1 nova_compute[225855]: 2026-01-20 15:45:17.602 225859 DEBUG nova.virt.hardware [None req-240e01b4-a824-470a-a8e7-b30c4e45ae3f 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 20 15:45:17 compute-1 nova_compute[225855]: 2026-01-20 15:45:17.602 225859 DEBUG nova.virt.hardware [None req-240e01b4-a824-470a-a8e7-b30c4e45ae3f 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 20 15:45:17 compute-1 nova_compute[225855]: 2026-01-20 15:45:17.603 225859 DEBUG nova.virt.hardware [None req-240e01b4-a824-470a-a8e7-b30c4e45ae3f 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 20 15:45:17 compute-1 nova_compute[225855]: 2026-01-20 15:45:17.603 225859 DEBUG nova.virt.hardware [None req-240e01b4-a824-470a-a8e7-b30c4e45ae3f 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 20 15:45:17 compute-1 nova_compute[225855]: 2026-01-20 15:45:17.604 225859 DEBUG nova.virt.hardware [None req-240e01b4-a824-470a-a8e7-b30c4e45ae3f 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 20 15:45:17 compute-1 nova_compute[225855]: 2026-01-20 15:45:17.604 225859 DEBUG nova.virt.hardware [None req-240e01b4-a824-470a-a8e7-b30c4e45ae3f 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 20 15:45:17 compute-1 nova_compute[225855]: 2026-01-20 15:45:17.604 225859 DEBUG nova.virt.hardware [None req-240e01b4-a824-470a-a8e7-b30c4e45ae3f 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 20 15:45:17 compute-1 nova_compute[225855]: 2026-01-20 15:45:17.609 225859 DEBUG oslo_concurrency.processutils [None req-240e01b4-a824-470a-a8e7-b30c4e45ae3f 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:45:17 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:45:17 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:45:17 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:45:17.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:45:18 compute-1 sudo[330747]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:45:18 compute-1 sudo[330747]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:45:18 compute-1 sudo[330747]: pam_unix(sudo:session): session closed for user root
Jan 20 15:45:18 compute-1 sudo[330772]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 15:45:18 compute-1 sudo[330772]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:45:18 compute-1 sudo[330772]: pam_unix(sudo:session): session closed for user root
Jan 20 15:45:18 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 15:45:18 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1748637888' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:45:18 compute-1 nova_compute[225855]: 2026-01-20 15:45:18.101 225859 DEBUG oslo_concurrency.processutils [None req-240e01b4-a824-470a-a8e7-b30c4e45ae3f 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.492s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:45:18 compute-1 nova_compute[225855]: 2026-01-20 15:45:18.129 225859 DEBUG nova.storage.rbd_utils [None req-240e01b4-a824-470a-a8e7-b30c4e45ae3f 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] rbd image 464c0661-0ddb-4794-8959-db066827326c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:45:18 compute-1 nova_compute[225855]: 2026-01-20 15:45:18.134 225859 DEBUG oslo_concurrency.processutils [None req-240e01b4-a824-470a-a8e7-b30c4e45ae3f 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:45:18 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 15:45:18 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/739713049' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:45:18 compute-1 nova_compute[225855]: 2026-01-20 15:45:18.561 225859 DEBUG oslo_concurrency.processutils [None req-240e01b4-a824-470a-a8e7-b30c4e45ae3f 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:45:18 compute-1 nova_compute[225855]: 2026-01-20 15:45:18.563 225859 DEBUG nova.virt.libvirt.vif [None req-240e01b4-a824-470a-a8e7-b30c4e45ae3f 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T15:45:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-342561427-gen-1-623022678',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-342561427-gen-1-623022678',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-342561427-gen',id=222,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJW0RDNBQ7KP8Mqzxdg2i8X8upMhqABnnonEiTjmMv4W9RTdXxd1b3Z8QL9swZ0e0+6po4+8oM5PFrC0tn+WmJ7twYzqOI2QMeaFZC9+Q35AVwNQsKxl3WWPGvw1iSa1jA==',key_name='tempest-TestSecurityGroupsBasicOps-1661471182',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='728662ec7f654a3fb2e53a90b8707d7e',ramdisk_id='',reservation_id='r-5t8zkczs',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-342561427',owner_user_name='tempest-TestSecurityGroupsBasicOps-342561427-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T15:45:14Z,user_data=None,user_id='5985ef736503499a9f1d734cabc33ce5',uuid=464c0661-0ddb-4794-8959-db066827326c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2d94aa0d-ed38-41aa-9f34-5ed2a83a7304", "address": "fa:16:3e:87:b1:99", "network": {"id": "6567de92-725d-4dcc-97c2-0fec6d9bda84", "bridge": "br-int", "label": "tempest-network-smoke--623656421", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], 
"meta": {"injected": false, "tenant_id": "728662ec7f654a3fb2e53a90b8707d7e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2d94aa0d-ed", "ovs_interfaceid": "2d94aa0d-ed38-41aa-9f34-5ed2a83a7304", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 20 15:45:18 compute-1 nova_compute[225855]: 2026-01-20 15:45:18.564 225859 DEBUG nova.network.os_vif_util [None req-240e01b4-a824-470a-a8e7-b30c4e45ae3f 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Converting VIF {"id": "2d94aa0d-ed38-41aa-9f34-5ed2a83a7304", "address": "fa:16:3e:87:b1:99", "network": {"id": "6567de92-725d-4dcc-97c2-0fec6d9bda84", "bridge": "br-int", "label": "tempest-network-smoke--623656421", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "728662ec7f654a3fb2e53a90b8707d7e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2d94aa0d-ed", "ovs_interfaceid": "2d94aa0d-ed38-41aa-9f34-5ed2a83a7304", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 15:45:18 compute-1 nova_compute[225855]: 2026-01-20 15:45:18.565 225859 DEBUG nova.network.os_vif_util [None req-240e01b4-a824-470a-a8e7-b30c4e45ae3f 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:87:b1:99,bridge_name='br-int',has_traffic_filtering=True,id=2d94aa0d-ed38-41aa-9f34-5ed2a83a7304,network=Network(6567de92-725d-4dcc-97c2-0fec6d9bda84),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2d94aa0d-ed') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 15:45:18 compute-1 nova_compute[225855]: 2026-01-20 15:45:18.568 225859 DEBUG nova.objects.instance [None req-240e01b4-a824-470a-a8e7-b30c4e45ae3f 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Lazy-loading 'pci_devices' on Instance uuid 464c0661-0ddb-4794-8959-db066827326c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:45:18 compute-1 nova_compute[225855]: 2026-01-20 15:45:18.593 225859 DEBUG nova.virt.libvirt.driver [None req-240e01b4-a824-470a-a8e7-b30c4e45ae3f 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] [instance: 464c0661-0ddb-4794-8959-db066827326c] End _get_guest_xml xml=<domain type="kvm">
Jan 20 15:45:18 compute-1 nova_compute[225855]:   <uuid>464c0661-0ddb-4794-8959-db066827326c</uuid>
Jan 20 15:45:18 compute-1 nova_compute[225855]:   <name>instance-000000de</name>
Jan 20 15:45:18 compute-1 nova_compute[225855]:   <memory>131072</memory>
Jan 20 15:45:18 compute-1 nova_compute[225855]:   <vcpu>1</vcpu>
Jan 20 15:45:18 compute-1 nova_compute[225855]:   <metadata>
Jan 20 15:45:18 compute-1 nova_compute[225855]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 15:45:18 compute-1 nova_compute[225855]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 15:45:18 compute-1 nova_compute[225855]:       <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-342561427-gen-1-623022678</nova:name>
Jan 20 15:45:18 compute-1 nova_compute[225855]:       <nova:creationTime>2026-01-20 15:45:17</nova:creationTime>
Jan 20 15:45:18 compute-1 nova_compute[225855]:       <nova:flavor name="m1.nano">
Jan 20 15:45:18 compute-1 nova_compute[225855]:         <nova:memory>128</nova:memory>
Jan 20 15:45:18 compute-1 nova_compute[225855]:         <nova:disk>1</nova:disk>
Jan 20 15:45:18 compute-1 nova_compute[225855]:         <nova:swap>0</nova:swap>
Jan 20 15:45:18 compute-1 nova_compute[225855]:         <nova:ephemeral>0</nova:ephemeral>
Jan 20 15:45:18 compute-1 nova_compute[225855]:         <nova:vcpus>1</nova:vcpus>
Jan 20 15:45:18 compute-1 nova_compute[225855]:       </nova:flavor>
Jan 20 15:45:18 compute-1 nova_compute[225855]:       <nova:owner>
Jan 20 15:45:18 compute-1 nova_compute[225855]:         <nova:user uuid="5985ef736503499a9f1d734cabc33ce5">tempest-TestSecurityGroupsBasicOps-342561427-project-member</nova:user>
Jan 20 15:45:18 compute-1 nova_compute[225855]:         <nova:project uuid="728662ec7f654a3fb2e53a90b8707d7e">tempest-TestSecurityGroupsBasicOps-342561427</nova:project>
Jan 20 15:45:18 compute-1 nova_compute[225855]:       </nova:owner>
Jan 20 15:45:18 compute-1 nova_compute[225855]:       <nova:root type="image" uuid="a32b3e07-16d8-46fd-9a7b-c242c432fcf9"/>
Jan 20 15:45:18 compute-1 nova_compute[225855]:       <nova:ports>
Jan 20 15:45:18 compute-1 nova_compute[225855]:         <nova:port uuid="2d94aa0d-ed38-41aa-9f34-5ed2a83a7304">
Jan 20 15:45:18 compute-1 nova_compute[225855]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Jan 20 15:45:18 compute-1 nova_compute[225855]:         </nova:port>
Jan 20 15:45:18 compute-1 nova_compute[225855]:       </nova:ports>
Jan 20 15:45:18 compute-1 nova_compute[225855]:     </nova:instance>
Jan 20 15:45:18 compute-1 nova_compute[225855]:   </metadata>
Jan 20 15:45:18 compute-1 nova_compute[225855]:   <sysinfo type="smbios">
Jan 20 15:45:18 compute-1 nova_compute[225855]:     <system>
Jan 20 15:45:18 compute-1 nova_compute[225855]:       <entry name="manufacturer">RDO</entry>
Jan 20 15:45:18 compute-1 nova_compute[225855]:       <entry name="product">OpenStack Compute</entry>
Jan 20 15:45:18 compute-1 nova_compute[225855]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 15:45:18 compute-1 nova_compute[225855]:       <entry name="serial">464c0661-0ddb-4794-8959-db066827326c</entry>
Jan 20 15:45:18 compute-1 nova_compute[225855]:       <entry name="uuid">464c0661-0ddb-4794-8959-db066827326c</entry>
Jan 20 15:45:18 compute-1 nova_compute[225855]:       <entry name="family">Virtual Machine</entry>
Jan 20 15:45:18 compute-1 nova_compute[225855]:     </system>
Jan 20 15:45:18 compute-1 nova_compute[225855]:   </sysinfo>
Jan 20 15:45:18 compute-1 nova_compute[225855]:   <os>
Jan 20 15:45:18 compute-1 nova_compute[225855]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 20 15:45:18 compute-1 nova_compute[225855]:     <boot dev="hd"/>
Jan 20 15:45:18 compute-1 nova_compute[225855]:     <smbios mode="sysinfo"/>
Jan 20 15:45:18 compute-1 nova_compute[225855]:   </os>
Jan 20 15:45:18 compute-1 nova_compute[225855]:   <features>
Jan 20 15:45:18 compute-1 nova_compute[225855]:     <acpi/>
Jan 20 15:45:18 compute-1 nova_compute[225855]:     <apic/>
Jan 20 15:45:18 compute-1 nova_compute[225855]:     <vmcoreinfo/>
Jan 20 15:45:18 compute-1 nova_compute[225855]:   </features>
Jan 20 15:45:18 compute-1 nova_compute[225855]:   <clock offset="utc">
Jan 20 15:45:18 compute-1 nova_compute[225855]:     <timer name="pit" tickpolicy="delay"/>
Jan 20 15:45:18 compute-1 nova_compute[225855]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 20 15:45:18 compute-1 nova_compute[225855]:     <timer name="hpet" present="no"/>
Jan 20 15:45:18 compute-1 nova_compute[225855]:   </clock>
Jan 20 15:45:18 compute-1 nova_compute[225855]:   <cpu mode="custom" match="exact">
Jan 20 15:45:18 compute-1 nova_compute[225855]:     <model>Nehalem</model>
Jan 20 15:45:18 compute-1 nova_compute[225855]:     <topology sockets="1" cores="1" threads="1"/>
Jan 20 15:45:18 compute-1 nova_compute[225855]:   </cpu>
Jan 20 15:45:18 compute-1 nova_compute[225855]:   <devices>
Jan 20 15:45:18 compute-1 nova_compute[225855]:     <disk type="network" device="disk">
Jan 20 15:45:18 compute-1 nova_compute[225855]:       <driver type="raw" cache="none"/>
Jan 20 15:45:18 compute-1 nova_compute[225855]:       <source protocol="rbd" name="vms/464c0661-0ddb-4794-8959-db066827326c_disk">
Jan 20 15:45:18 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 15:45:18 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 15:45:18 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 15:45:18 compute-1 nova_compute[225855]:       </source>
Jan 20 15:45:18 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 15:45:18 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 15:45:18 compute-1 nova_compute[225855]:       </auth>
Jan 20 15:45:18 compute-1 nova_compute[225855]:       <target dev="vda" bus="virtio"/>
Jan 20 15:45:18 compute-1 nova_compute[225855]:     </disk>
Jan 20 15:45:18 compute-1 nova_compute[225855]:     <disk type="network" device="cdrom">
Jan 20 15:45:18 compute-1 nova_compute[225855]:       <driver type="raw" cache="none"/>
Jan 20 15:45:18 compute-1 nova_compute[225855]:       <source protocol="rbd" name="vms/464c0661-0ddb-4794-8959-db066827326c_disk.config">
Jan 20 15:45:18 compute-1 nova_compute[225855]:         <host name="192.168.122.100" port="6789"/>
Jan 20 15:45:18 compute-1 nova_compute[225855]:         <host name="192.168.122.102" port="6789"/>
Jan 20 15:45:18 compute-1 nova_compute[225855]:         <host name="192.168.122.101" port="6789"/>
Jan 20 15:45:18 compute-1 nova_compute[225855]:       </source>
Jan 20 15:45:18 compute-1 nova_compute[225855]:       <auth username="openstack">
Jan 20 15:45:18 compute-1 nova_compute[225855]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 15:45:18 compute-1 nova_compute[225855]:       </auth>
Jan 20 15:45:18 compute-1 nova_compute[225855]:       <target dev="sda" bus="sata"/>
Jan 20 15:45:18 compute-1 nova_compute[225855]:     </disk>
Jan 20 15:45:18 compute-1 nova_compute[225855]:     <interface type="ethernet">
Jan 20 15:45:18 compute-1 nova_compute[225855]:       <mac address="fa:16:3e:87:b1:99"/>
Jan 20 15:45:18 compute-1 nova_compute[225855]:       <model type="virtio"/>
Jan 20 15:45:18 compute-1 nova_compute[225855]:       <driver name="vhost" rx_queue_size="512"/>
Jan 20 15:45:18 compute-1 nova_compute[225855]:       <mtu size="1442"/>
Jan 20 15:45:18 compute-1 nova_compute[225855]:       <target dev="tap2d94aa0d-ed"/>
Jan 20 15:45:18 compute-1 nova_compute[225855]:     </interface>
Jan 20 15:45:18 compute-1 nova_compute[225855]:     <serial type="pty">
Jan 20 15:45:18 compute-1 nova_compute[225855]:       <log file="/var/lib/nova/instances/464c0661-0ddb-4794-8959-db066827326c/console.log" append="off"/>
Jan 20 15:45:18 compute-1 nova_compute[225855]:     </serial>
Jan 20 15:45:18 compute-1 nova_compute[225855]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 20 15:45:18 compute-1 nova_compute[225855]:     <video>
Jan 20 15:45:18 compute-1 nova_compute[225855]:       <model type="virtio"/>
Jan 20 15:45:18 compute-1 nova_compute[225855]:     </video>
Jan 20 15:45:18 compute-1 nova_compute[225855]:     <input type="tablet" bus="usb"/>
Jan 20 15:45:18 compute-1 nova_compute[225855]:     <rng model="virtio">
Jan 20 15:45:18 compute-1 nova_compute[225855]:       <backend model="random">/dev/urandom</backend>
Jan 20 15:45:18 compute-1 nova_compute[225855]:     </rng>
Jan 20 15:45:18 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root"/>
Jan 20 15:45:18 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:45:18 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:45:18 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:45:18 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:45:18 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:45:18 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:45:18 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:45:18 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:45:18 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:45:18 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:45:18 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:45:18 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:45:18 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:45:18 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:45:18 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:45:18 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:45:18 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:45:18 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:45:18 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:45:18 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:45:18 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:45:18 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:45:18 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:45:18 compute-1 nova_compute[225855]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:45:18 compute-1 nova_compute[225855]:     <controller type="usb" index="0"/>
Jan 20 15:45:18 compute-1 nova_compute[225855]:     <memballoon model="virtio">
Jan 20 15:45:18 compute-1 nova_compute[225855]:       <stats period="10"/>
Jan 20 15:45:18 compute-1 nova_compute[225855]:     </memballoon>
Jan 20 15:45:18 compute-1 nova_compute[225855]:   </devices>
Jan 20 15:45:18 compute-1 nova_compute[225855]: </domain>
Jan 20 15:45:18 compute-1 nova_compute[225855]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 20 15:45:18 compute-1 nova_compute[225855]: 2026-01-20 15:45:18.595 225859 DEBUG nova.compute.manager [None req-240e01b4-a824-470a-a8e7-b30c4e45ae3f 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] [instance: 464c0661-0ddb-4794-8959-db066827326c] Preparing to wait for external event network-vif-plugged-2d94aa0d-ed38-41aa-9f34-5ed2a83a7304 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 20 15:45:18 compute-1 nova_compute[225855]: 2026-01-20 15:45:18.595 225859 DEBUG oslo_concurrency.lockutils [None req-240e01b4-a824-470a-a8e7-b30c4e45ae3f 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Acquiring lock "464c0661-0ddb-4794-8959-db066827326c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:45:18 compute-1 nova_compute[225855]: 2026-01-20 15:45:18.595 225859 DEBUG oslo_concurrency.lockutils [None req-240e01b4-a824-470a-a8e7-b30c4e45ae3f 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Lock "464c0661-0ddb-4794-8959-db066827326c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:45:18 compute-1 nova_compute[225855]: 2026-01-20 15:45:18.596 225859 DEBUG oslo_concurrency.lockutils [None req-240e01b4-a824-470a-a8e7-b30c4e45ae3f 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Lock "464c0661-0ddb-4794-8959-db066827326c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:45:18 compute-1 nova_compute[225855]: 2026-01-20 15:45:18.596 225859 DEBUG nova.virt.libvirt.vif [None req-240e01b4-a824-470a-a8e7-b30c4e45ae3f 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T15:45:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-342561427-gen-1-623022678',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-342561427-gen-1-623022678',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-342561427-gen',id=222,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJW0RDNBQ7KP8Mqzxdg2i8X8upMhqABnnonEiTjmMv4W9RTdXxd1b3Z8QL9swZ0e0+6po4+8oM5PFrC0tn+WmJ7twYzqOI2QMeaFZC9+Q35AVwNQsKxl3WWPGvw1iSa1jA==',key_name='tempest-TestSecurityGroupsBasicOps-1661471182',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='728662ec7f654a3fb2e53a90b8707d7e',ramdisk_id='',reservation_id='r-5t8zkczs',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-342561427',owner_user_name='tempest-TestSecurityGroupsBasicOps-342561427-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T15:45:14Z,user_data=None,user_id='5985ef736503499a9f1d734cabc33ce5',uuid=464c0661-0ddb-4794-8959-db066827326c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2d94aa0d-ed38-41aa-9f34-5ed2a83a7304", "address": "fa:16:3e:87:b1:99", "network": {"id": "6567de92-725d-4dcc-97c2-0fec6d9bda84", "bridge": "br-int", "label": "tempest-network-smoke--623656421", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "728662ec7f654a3fb2e53a90b8707d7e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2d94aa0d-ed", "ovs_interfaceid": "2d94aa0d-ed38-41aa-9f34-5ed2a83a7304", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 20 15:45:18 compute-1 nova_compute[225855]: 2026-01-20 15:45:18.597 225859 DEBUG nova.network.os_vif_util [None req-240e01b4-a824-470a-a8e7-b30c4e45ae3f 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Converting VIF {"id": "2d94aa0d-ed38-41aa-9f34-5ed2a83a7304", "address": "fa:16:3e:87:b1:99", "network": {"id": "6567de92-725d-4dcc-97c2-0fec6d9bda84", "bridge": "br-int", "label": "tempest-network-smoke--623656421", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "728662ec7f654a3fb2e53a90b8707d7e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2d94aa0d-ed", "ovs_interfaceid": "2d94aa0d-ed38-41aa-9f34-5ed2a83a7304", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 15:45:18 compute-1 nova_compute[225855]: 2026-01-20 15:45:18.598 225859 DEBUG nova.network.os_vif_util [None req-240e01b4-a824-470a-a8e7-b30c4e45ae3f 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:87:b1:99,bridge_name='br-int',has_traffic_filtering=True,id=2d94aa0d-ed38-41aa-9f34-5ed2a83a7304,network=Network(6567de92-725d-4dcc-97c2-0fec6d9bda84),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2d94aa0d-ed') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 15:45:18 compute-1 nova_compute[225855]: 2026-01-20 15:45:18.598 225859 DEBUG os_vif [None req-240e01b4-a824-470a-a8e7-b30c4e45ae3f 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:87:b1:99,bridge_name='br-int',has_traffic_filtering=True,id=2d94aa0d-ed38-41aa-9f34-5ed2a83a7304,network=Network(6567de92-725d-4dcc-97c2-0fec6d9bda84),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2d94aa0d-ed') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 20 15:45:18 compute-1 nova_compute[225855]: 2026-01-20 15:45:18.598 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:45:18 compute-1 nova_compute[225855]: 2026-01-20 15:45:18.599 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:45:18 compute-1 nova_compute[225855]: 2026-01-20 15:45:18.599 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 15:45:18 compute-1 nova_compute[225855]: 2026-01-20 15:45:18.603 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:45:18 compute-1 nova_compute[225855]: 2026-01-20 15:45:18.603 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2d94aa0d-ed, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:45:18 compute-1 nova_compute[225855]: 2026-01-20 15:45:18.603 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2d94aa0d-ed, col_values=(('external_ids', {'iface-id': '2d94aa0d-ed38-41aa-9f34-5ed2a83a7304', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:87:b1:99', 'vm-uuid': '464c0661-0ddb-4794-8959-db066827326c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:45:18 compute-1 nova_compute[225855]: 2026-01-20 15:45:18.605 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:45:18 compute-1 NetworkManager[49104]: <info>  [1768923918.6060] manager: (tap2d94aa0d-ed): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/432)
Jan 20 15:45:18 compute-1 nova_compute[225855]: 2026-01-20 15:45:18.607 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 15:45:18 compute-1 nova_compute[225855]: 2026-01-20 15:45:18.614 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:45:18 compute-1 nova_compute[225855]: 2026-01-20 15:45:18.615 225859 INFO os_vif [None req-240e01b4-a824-470a-a8e7-b30c4e45ae3f 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:87:b1:99,bridge_name='br-int',has_traffic_filtering=True,id=2d94aa0d-ed38-41aa-9f34-5ed2a83a7304,network=Network(6567de92-725d-4dcc-97c2-0fec6d9bda84),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2d94aa0d-ed')
Jan 20 15:45:18 compute-1 ceph-mon[81775]: pgmap v3693: 321 pgs: 321 active+clean; 240 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 193 KiB/s rd, 2.5 MiB/s wr, 45 op/s
Jan 20 15:45:18 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:45:18 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:45:18 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/1748637888' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:45:18 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/739713049' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:45:18 compute-1 ceph-mon[81775]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #181. Immutable memtables: 0.
Jan 20 15:45:18 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:45:18.676396) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 20 15:45:18 compute-1 ceph-mon[81775]: rocksdb: [db/flush_job.cc:856] [default] [JOB 115] Flushing memtable with next log file: 181
Jan 20 15:45:18 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768923918676519, "job": 115, "event": "flush_started", "num_memtables": 1, "num_entries": 2362, "num_deletes": 251, "total_data_size": 5877047, "memory_usage": 5943728, "flush_reason": "Manual Compaction"}
Jan 20 15:45:18 compute-1 ceph-mon[81775]: rocksdb: [db/flush_job.cc:885] [default] [JOB 115] Level-0 flush table #182: started
Jan 20 15:45:18 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768923918705311, "cf_name": "default", "job": 115, "event": "table_file_creation", "file_number": 182, "file_size": 3835390, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 87149, "largest_seqno": 89505, "table_properties": {"data_size": 3825792, "index_size": 6091, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2437, "raw_key_size": 19644, "raw_average_key_size": 20, "raw_value_size": 3806783, "raw_average_value_size": 3953, "num_data_blocks": 265, "num_entries": 963, "num_filter_entries": 963, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768923702, "oldest_key_time": 1768923702, "file_creation_time": 1768923918, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1539d774-8a6f-4e48-b253-137c44586344", "db_session_id": "LFF7G2OZDOU7TKQ8MKAH", "orig_file_number": 182, "seqno_to_time_mapping": "N/A"}}
Jan 20 15:45:18 compute-1 ceph-mon[81775]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 115] Flush lasted 28974 microseconds, and 8272 cpu microseconds.
Jan 20 15:45:18 compute-1 ceph-mon[81775]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 15:45:18 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:45:18.705377) [db/flush_job.cc:967] [default] [JOB 115] Level-0 flush table #182: 3835390 bytes OK
Jan 20 15:45:18 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:45:18.705406) [db/memtable_list.cc:519] [default] Level-0 commit table #182 started
Jan 20 15:45:18 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:45:18.707404) [db/memtable_list.cc:722] [default] Level-0 commit table #182: memtable #1 done
Jan 20 15:45:18 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:45:18.707420) EVENT_LOG_v1 {"time_micros": 1768923918707415, "job": 115, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 20 15:45:18 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:45:18.707443) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 20 15:45:18 compute-1 ceph-mon[81775]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 115] Try to delete WAL files size 5866682, prev total WAL file size 5866682, number of live WAL files 2.
Jan 20 15:45:18 compute-1 ceph-mon[81775]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000178.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 15:45:18 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:45:18.709081) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730037373831' seq:72057594037927935, type:22 .. '7061786F730038303333' seq:0, type:0; will stop at (end)
Jan 20 15:45:18 compute-1 ceph-mon[81775]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 116] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 20 15:45:18 compute-1 ceph-mon[81775]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 115 Base level 0, inputs: [182(3745KB)], [180(12MB)]
Jan 20 15:45:18 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768923918709122, "job": 116, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [182], "files_L6": [180], "score": -1, "input_data_size": 16460285, "oldest_snapshot_seqno": -1}
Jan 20 15:45:18 compute-1 nova_compute[225855]: 2026-01-20 15:45:18.746 225859 DEBUG nova.virt.libvirt.driver [None req-240e01b4-a824-470a-a8e7-b30c4e45ae3f 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 15:45:18 compute-1 nova_compute[225855]: 2026-01-20 15:45:18.746 225859 DEBUG nova.virt.libvirt.driver [None req-240e01b4-a824-470a-a8e7-b30c4e45ae3f 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 15:45:18 compute-1 nova_compute[225855]: 2026-01-20 15:45:18.747 225859 DEBUG nova.virt.libvirt.driver [None req-240e01b4-a824-470a-a8e7-b30c4e45ae3f 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] No VIF found with MAC fa:16:3e:87:b1:99, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 20 15:45:18 compute-1 nova_compute[225855]: 2026-01-20 15:45:18.747 225859 INFO nova.virt.libvirt.driver [None req-240e01b4-a824-470a-a8e7-b30c4e45ae3f 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] [instance: 464c0661-0ddb-4794-8959-db066827326c] Using config drive
Jan 20 15:45:18 compute-1 nova_compute[225855]: 2026-01-20 15:45:18.774 225859 DEBUG nova.storage.rbd_utils [None req-240e01b4-a824-470a-a8e7-b30c4e45ae3f 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] rbd image 464c0661-0ddb-4794-8959-db066827326c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:45:18 compute-1 ceph-mon[81775]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 116] Generated table #183: 11034 keys, 14482290 bytes, temperature: kUnknown
Jan 20 15:45:18 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768923918829995, "cf_name": "default", "job": 116, "event": "table_file_creation", "file_number": 183, "file_size": 14482290, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14410941, "index_size": 42628, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 27653, "raw_key_size": 290303, "raw_average_key_size": 26, "raw_value_size": 14217898, "raw_average_value_size": 1288, "num_data_blocks": 1625, "num_entries": 11034, "num_filter_entries": 11034, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768917474, "oldest_key_time": 0, "file_creation_time": 1768923918, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1539d774-8a6f-4e48-b253-137c44586344", "db_session_id": "LFF7G2OZDOU7TKQ8MKAH", "orig_file_number": 183, "seqno_to_time_mapping": "N/A"}}
Jan 20 15:45:18 compute-1 ceph-mon[81775]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 15:45:18 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:45:18.830242) [db/compaction/compaction_job.cc:1663] [default] [JOB 116] Compacted 1@0 + 1@6 files to L6 => 14482290 bytes
Jan 20 15:45:18 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:45:18.831416) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 136.1 rd, 119.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.7, 12.0 +0.0 blob) out(13.8 +0.0 blob), read-write-amplify(8.1) write-amplify(3.8) OK, records in: 11553, records dropped: 519 output_compression: NoCompression
Jan 20 15:45:18 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:45:18.831432) EVENT_LOG_v1 {"time_micros": 1768923918831424, "job": 116, "event": "compaction_finished", "compaction_time_micros": 120961, "compaction_time_cpu_micros": 37750, "output_level": 6, "num_output_files": 1, "total_output_size": 14482290, "num_input_records": 11553, "num_output_records": 11034, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 20 15:45:18 compute-1 ceph-mon[81775]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000182.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 15:45:18 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768923918832317, "job": 116, "event": "table_file_deletion", "file_number": 182}
Jan 20 15:45:18 compute-1 ceph-mon[81775]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000180.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 15:45:18 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768923918834561, "job": 116, "event": "table_file_deletion", "file_number": 180}
Jan 20 15:45:18 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:45:18.708992) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:45:18 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:45:18.834742) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:45:18 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:45:18.834749) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:45:18 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:45:18.834751) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:45:18 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:45:18.834753) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:45:18 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:45:18.834754) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:45:19 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:45:19 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:45:19 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:45:19 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:45:19.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:45:19 compute-1 nova_compute[225855]: 2026-01-20 15:45:19.838 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:45:19 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:45:19 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:45:19 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:45:19.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:45:20 compute-1 nova_compute[225855]: 2026-01-20 15:45:20.208 225859 INFO nova.virt.libvirt.driver [None req-240e01b4-a824-470a-a8e7-b30c4e45ae3f 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] [instance: 464c0661-0ddb-4794-8959-db066827326c] Creating config drive at /var/lib/nova/instances/464c0661-0ddb-4794-8959-db066827326c/disk.config
Jan 20 15:45:20 compute-1 nova_compute[225855]: 2026-01-20 15:45:20.213 225859 DEBUG oslo_concurrency.processutils [None req-240e01b4-a824-470a-a8e7-b30c4e45ae3f 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/464c0661-0ddb-4794-8959-db066827326c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp0yndlp77 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:45:20 compute-1 nova_compute[225855]: 2026-01-20 15:45:20.344 225859 DEBUG oslo_concurrency.processutils [None req-240e01b4-a824-470a-a8e7-b30c4e45ae3f 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/464c0661-0ddb-4794-8959-db066827326c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp0yndlp77" returned: 0 in 0.131s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:45:20 compute-1 nova_compute[225855]: 2026-01-20 15:45:20.381 225859 DEBUG nova.storage.rbd_utils [None req-240e01b4-a824-470a-a8e7-b30c4e45ae3f 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] rbd image 464c0661-0ddb-4794-8959-db066827326c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:45:20 compute-1 nova_compute[225855]: 2026-01-20 15:45:20.385 225859 DEBUG oslo_concurrency.processutils [None req-240e01b4-a824-470a-a8e7-b30c4e45ae3f 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/464c0661-0ddb-4794-8959-db066827326c/disk.config 464c0661-0ddb-4794-8959-db066827326c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:45:20 compute-1 nova_compute[225855]: 2026-01-20 15:45:20.572 225859 DEBUG oslo_concurrency.processutils [None req-240e01b4-a824-470a-a8e7-b30c4e45ae3f 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/464c0661-0ddb-4794-8959-db066827326c/disk.config 464c0661-0ddb-4794-8959-db066827326c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.187s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:45:20 compute-1 nova_compute[225855]: 2026-01-20 15:45:20.573 225859 INFO nova.virt.libvirt.driver [None req-240e01b4-a824-470a-a8e7-b30c4e45ae3f 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] [instance: 464c0661-0ddb-4794-8959-db066827326c] Deleting local config drive /var/lib/nova/instances/464c0661-0ddb-4794-8959-db066827326c/disk.config because it was imported into RBD.
Jan 20 15:45:20 compute-1 kernel: tap2d94aa0d-ed: entered promiscuous mode
Jan 20 15:45:20 compute-1 NetworkManager[49104]: <info>  [1768923920.6297] manager: (tap2d94aa0d-ed): new Tun device (/org/freedesktop/NetworkManager/Devices/433)
Jan 20 15:45:20 compute-1 nova_compute[225855]: 2026-01-20 15:45:20.631 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:45:20 compute-1 ovn_controller[130490]: 2026-01-20T15:45:20Z|01000|binding|INFO|Claiming lport 2d94aa0d-ed38-41aa-9f34-5ed2a83a7304 for this chassis.
Jan 20 15:45:20 compute-1 ovn_controller[130490]: 2026-01-20T15:45:20Z|01001|binding|INFO|2d94aa0d-ed38-41aa-9f34-5ed2a83a7304: Claiming fa:16:3e:87:b1:99 10.100.0.7
Jan 20 15:45:20 compute-1 nova_compute[225855]: 2026-01-20 15:45:20.635 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:45:20 compute-1 nova_compute[225855]: 2026-01-20 15:45:20.642 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:45:20 compute-1 NetworkManager[49104]: <info>  [1768923920.6432] manager: (patch-provnet-b62c391b-f7a3-4a38-a0df-72ac0383ca74-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/434)
Jan 20 15:45:20 compute-1 NetworkManager[49104]: <info>  [1768923920.6441] manager: (patch-br-int-to-provnet-b62c391b-f7a3-4a38-a0df-72ac0383ca74): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/435)
Jan 20 15:45:20 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:45:20.649 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:87:b1:99 10.100.0.7'], port_security=['fa:16:3e:87:b1:99 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '464c0661-0ddb-4794-8959-db066827326c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6567de92-725d-4dcc-97c2-0fec6d9bda84', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '728662ec7f654a3fb2e53a90b8707d7e', 'neutron:revision_number': '2', 'neutron:security_group_ids': '133c0593-3211-4540-bb4e-2efa6f05d67f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=92670689-434c-4ed8-a2e4-6278a7d19616, chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], logical_port=2d94aa0d-ed38-41aa-9f34-5ed2a83a7304) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:45:20 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:45:20.651 140354 INFO neutron.agent.ovn.metadata.agent [-] Port 2d94aa0d-ed38-41aa-9f34-5ed2a83a7304 in datapath 6567de92-725d-4dcc-97c2-0fec6d9bda84 bound to our chassis
Jan 20 15:45:20 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:45:20.653 140354 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6567de92-725d-4dcc-97c2-0fec6d9bda84
Jan 20 15:45:20 compute-1 systemd-udevd[330913]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 15:45:20 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:45:20.668 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[ed8c52e5-16ee-4f9d-9db7-a38b5a8d72a0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:45:20 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:45:20.669 140354 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap6567de92-71 in ovnmeta-6567de92-725d-4dcc-97c2-0fec6d9bda84 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 20 15:45:20 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:45:20.670 229707 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap6567de92-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 20 15:45:20 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:45:20.670 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[229fe9df-f40c-4ec2-a379-54a410698243]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:45:20 compute-1 systemd-machined[194361]: New machine qemu-115-instance-000000de.
Jan 20 15:45:20 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:45:20.671 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[e7bc705c-55c5-44f7-a90f-cadc2be9aae1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:45:20 compute-1 NetworkManager[49104]: <info>  [1768923920.6861] device (tap2d94aa0d-ed): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 20 15:45:20 compute-1 NetworkManager[49104]: <info>  [1768923920.6874] device (tap2d94aa0d-ed): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 20 15:45:20 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:45:20.689 140466 DEBUG oslo.privsep.daemon [-] privsep: reply[374d504f-26c6-4593-ab86-c81d863833d0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:45:20 compute-1 ceph-mon[81775]: pgmap v3694: 321 pgs: 321 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 20 15:45:20 compute-1 systemd[1]: Started Virtual Machine qemu-115-instance-000000de.
Jan 20 15:45:20 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:45:20.717 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[22192256-50e4-475f-a715-56ae870367a8]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:45:20 compute-1 nova_compute[225855]: 2026-01-20 15:45:20.723 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:45:20 compute-1 nova_compute[225855]: 2026-01-20 15:45:20.734 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:45:20 compute-1 ovn_controller[130490]: 2026-01-20T15:45:20Z|01002|binding|INFO|Setting lport 2d94aa0d-ed38-41aa-9f34-5ed2a83a7304 ovn-installed in OVS
Jan 20 15:45:20 compute-1 ovn_controller[130490]: 2026-01-20T15:45:20Z|01003|binding|INFO|Setting lport 2d94aa0d-ed38-41aa-9f34-5ed2a83a7304 up in Southbound
Jan 20 15:45:20 compute-1 nova_compute[225855]: 2026-01-20 15:45:20.746 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:45:20 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:45:20.755 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[853e0913-8921-47d1-9ff8-ebbe00aa4f10]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:45:20 compute-1 NetworkManager[49104]: <info>  [1768923920.7614] manager: (tap6567de92-70): new Veth device (/org/freedesktop/NetworkManager/Devices/436)
Jan 20 15:45:20 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:45:20.762 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[da3bcb1f-2c16-4eab-b7ac-843c8fa87237]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:45:20 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:45:20.804 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[9a176663-3340-440a-bc6a-b316be82439d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:45:20 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:45:20.807 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[8792c5de-ca16-4d18-aac3-7725ae1b3ee3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:45:20 compute-1 NetworkManager[49104]: <info>  [1768923920.8300] device (tap6567de92-70): carrier: link connected
Jan 20 15:45:20 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:45:20.835 229764 DEBUG oslo.privsep.daemon [-] privsep: reply[4ccf7397-1a91-40be-805d-69b81a254dd7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:45:20 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:45:20.850 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[3528d836-2fac-48b9-9de8-ae4f778728b4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6567de92-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0d:26:65'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 286], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 895975, 'reachable_time': 24803, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 330946, 'error': None, 'target': 'ovnmeta-6567de92-725d-4dcc-97c2-0fec6d9bda84', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:45:20 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:45:20.868 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[0ed846bf-6c81-4399-a22e-f92b582915dd]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe0d:2665'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 895975, 'tstamp': 895975}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 330947, 'error': None, 'target': 'ovnmeta-6567de92-725d-4dcc-97c2-0fec6d9bda84', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:45:20 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:45:20.884 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[ad1f2273-3c87-4bd5-bf24-fb9604fd58df]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6567de92-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0d:26:65'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 286], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 895975, 'reachable_time': 24803, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 330948, 'error': None, 'target': 'ovnmeta-6567de92-725d-4dcc-97c2-0fec6d9bda84', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:45:20 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:45:20.918 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[1ded0c26-4f0e-43d5-82b4-11a283461098]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:45:20 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:45:20.981 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[faa0c7bc-55be-4dd8-93a6-a126970744fc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:45:20 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:45:20.983 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6567de92-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:45:20 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:45:20.983 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 15:45:20 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:45:20.984 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6567de92-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:45:21 compute-1 nova_compute[225855]: 2026-01-20 15:45:21.030 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:45:21 compute-1 NetworkManager[49104]: <info>  [1768923921.0304] manager: (tap6567de92-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/437)
Jan 20 15:45:21 compute-1 kernel: tap6567de92-70: entered promiscuous mode
Jan 20 15:45:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:45:21.034 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6567de92-70, col_values=(('external_ids', {'iface-id': '49b3575c-9e2a-4ac6-bd85-e7d639dfd6e3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:45:21 compute-1 nova_compute[225855]: 2026-01-20 15:45:21.035 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:45:21 compute-1 ovn_controller[130490]: 2026-01-20T15:45:21Z|01004|binding|INFO|Releasing lport 49b3575c-9e2a-4ac6-bd85-e7d639dfd6e3 from this chassis (sb_readonly=0)
Jan 20 15:45:21 compute-1 nova_compute[225855]: 2026-01-20 15:45:21.049 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:45:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:45:21.049 140354 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/6567de92-725d-4dcc-97c2-0fec6d9bda84.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/6567de92-725d-4dcc-97c2-0fec6d9bda84.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 20 15:45:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:45:21.050 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[ecc632bf-6794-4fb0-837a-d0ff810e8c4d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:45:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:45:21.051 140354 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 20 15:45:21 compute-1 ovn_metadata_agent[140349]: global
Jan 20 15:45:21 compute-1 ovn_metadata_agent[140349]:     log         /dev/log local0 debug
Jan 20 15:45:21 compute-1 ovn_metadata_agent[140349]:     log-tag     haproxy-metadata-proxy-6567de92-725d-4dcc-97c2-0fec6d9bda84
Jan 20 15:45:21 compute-1 ovn_metadata_agent[140349]:     user        root
Jan 20 15:45:21 compute-1 ovn_metadata_agent[140349]:     group       root
Jan 20 15:45:21 compute-1 ovn_metadata_agent[140349]:     maxconn     1024
Jan 20 15:45:21 compute-1 ovn_metadata_agent[140349]:     pidfile     /var/lib/neutron/external/pids/6567de92-725d-4dcc-97c2-0fec6d9bda84.pid.haproxy
Jan 20 15:45:21 compute-1 ovn_metadata_agent[140349]:     daemon
Jan 20 15:45:21 compute-1 ovn_metadata_agent[140349]: 
Jan 20 15:45:21 compute-1 ovn_metadata_agent[140349]: defaults
Jan 20 15:45:21 compute-1 ovn_metadata_agent[140349]:     log global
Jan 20 15:45:21 compute-1 ovn_metadata_agent[140349]:     mode http
Jan 20 15:45:21 compute-1 ovn_metadata_agent[140349]:     option httplog
Jan 20 15:45:21 compute-1 ovn_metadata_agent[140349]:     option dontlognull
Jan 20 15:45:21 compute-1 ovn_metadata_agent[140349]:     option http-server-close
Jan 20 15:45:21 compute-1 ovn_metadata_agent[140349]:     option forwardfor
Jan 20 15:45:21 compute-1 ovn_metadata_agent[140349]:     retries                 3
Jan 20 15:45:21 compute-1 ovn_metadata_agent[140349]:     timeout http-request    30s
Jan 20 15:45:21 compute-1 ovn_metadata_agent[140349]:     timeout connect         30s
Jan 20 15:45:21 compute-1 ovn_metadata_agent[140349]:     timeout client          32s
Jan 20 15:45:21 compute-1 ovn_metadata_agent[140349]:     timeout server          32s
Jan 20 15:45:21 compute-1 ovn_metadata_agent[140349]:     timeout http-keep-alive 30s
Jan 20 15:45:21 compute-1 ovn_metadata_agent[140349]: 
Jan 20 15:45:21 compute-1 ovn_metadata_agent[140349]: 
Jan 20 15:45:21 compute-1 ovn_metadata_agent[140349]: listen listener
Jan 20 15:45:21 compute-1 ovn_metadata_agent[140349]:     bind 169.254.169.254:80
Jan 20 15:45:21 compute-1 ovn_metadata_agent[140349]:     server metadata /var/lib/neutron/metadata_proxy
Jan 20 15:45:21 compute-1 ovn_metadata_agent[140349]:     http-request add-header X-OVN-Network-ID 6567de92-725d-4dcc-97c2-0fec6d9bda84
Jan 20 15:45:21 compute-1 ovn_metadata_agent[140349]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 20 15:45:21 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:45:21.051 140354 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-6567de92-725d-4dcc-97c2-0fec6d9bda84', 'env', 'PROCESS_TAG=haproxy-6567de92-725d-4dcc-97c2-0fec6d9bda84', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/6567de92-725d-4dcc-97c2-0fec6d9bda84.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 20 15:45:21 compute-1 nova_compute[225855]: 2026-01-20 15:45:21.151 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768923921.150042, 464c0661-0ddb-4794-8959-db066827326c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:45:21 compute-1 nova_compute[225855]: 2026-01-20 15:45:21.151 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 464c0661-0ddb-4794-8959-db066827326c] VM Started (Lifecycle Event)
Jan 20 15:45:21 compute-1 nova_compute[225855]: 2026-01-20 15:45:21.182 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 464c0661-0ddb-4794-8959-db066827326c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:45:21 compute-1 nova_compute[225855]: 2026-01-20 15:45:21.186 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768923921.1507576, 464c0661-0ddb-4794-8959-db066827326c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:45:21 compute-1 nova_compute[225855]: 2026-01-20 15:45:21.186 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 464c0661-0ddb-4794-8959-db066827326c] VM Paused (Lifecycle Event)
Jan 20 15:45:21 compute-1 nova_compute[225855]: 2026-01-20 15:45:21.210 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 464c0661-0ddb-4794-8959-db066827326c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:45:21 compute-1 nova_compute[225855]: 2026-01-20 15:45:21.213 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 464c0661-0ddb-4794-8959-db066827326c] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 15:45:21 compute-1 nova_compute[225855]: 2026-01-20 15:45:21.234 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 464c0661-0ddb-4794-8959-db066827326c] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 15:45:21 compute-1 nova_compute[225855]: 2026-01-20 15:45:21.249 225859 DEBUG nova.network.neutron [req-41268f81-84af-4d68-b440-89abcbfe12ee req-d5b71cc0-eeae-4bf0-89d6-f004f7f62779 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 464c0661-0ddb-4794-8959-db066827326c] Updated VIF entry in instance network info cache for port 2d94aa0d-ed38-41aa-9f34-5ed2a83a7304. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 15:45:21 compute-1 nova_compute[225855]: 2026-01-20 15:45:21.249 225859 DEBUG nova.network.neutron [req-41268f81-84af-4d68-b440-89abcbfe12ee req-d5b71cc0-eeae-4bf0-89d6-f004f7f62779 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 464c0661-0ddb-4794-8959-db066827326c] Updating instance_info_cache with network_info: [{"id": "2d94aa0d-ed38-41aa-9f34-5ed2a83a7304", "address": "fa:16:3e:87:b1:99", "network": {"id": "6567de92-725d-4dcc-97c2-0fec6d9bda84", "bridge": "br-int", "label": "tempest-network-smoke--623656421", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "728662ec7f654a3fb2e53a90b8707d7e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2d94aa0d-ed", "ovs_interfaceid": "2d94aa0d-ed38-41aa-9f34-5ed2a83a7304", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:45:21 compute-1 nova_compute[225855]: 2026-01-20 15:45:21.274 225859 DEBUG oslo_concurrency.lockutils [req-41268f81-84af-4d68-b440-89abcbfe12ee req-d5b71cc0-eeae-4bf0-89d6-f004f7f62779 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-464c0661-0ddb-4794-8959-db066827326c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:45:21 compute-1 podman[331022]: 2026-01-20 15:45:21.387781278 +0000 UTC m=+0.049535240 container create 492a6f3905f2e7ce4d211c13d3bef1132fb971d16c227b376e5f4514c80ff23f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6567de92-725d-4dcc-97c2-0fec6d9bda84, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 20 15:45:21 compute-1 systemd[1]: Started libpod-conmon-492a6f3905f2e7ce4d211c13d3bef1132fb971d16c227b376e5f4514c80ff23f.scope.
Jan 20 15:45:21 compute-1 systemd[1]: Started libcrun container.
Jan 20 15:45:21 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4351c11d91150c48202686c6f66728d5abd4164b417977ca124c1d87a5582683/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 20 15:45:21 compute-1 podman[331022]: 2026-01-20 15:45:21.358962604 +0000 UTC m=+0.020716606 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 20 15:45:21 compute-1 podman[331022]: 2026-01-20 15:45:21.46709406 +0000 UTC m=+0.128848042 container init 492a6f3905f2e7ce4d211c13d3bef1132fb971d16c227b376e5f4514c80ff23f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6567de92-725d-4dcc-97c2-0fec6d9bda84, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 20 15:45:21 compute-1 podman[331022]: 2026-01-20 15:45:21.475179988 +0000 UTC m=+0.136933950 container start 492a6f3905f2e7ce4d211c13d3bef1132fb971d16c227b376e5f4514c80ff23f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6567de92-725d-4dcc-97c2-0fec6d9bda84, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202)
Jan 20 15:45:21 compute-1 nova_compute[225855]: 2026-01-20 15:45:21.493 225859 DEBUG nova.compute.manager [req-9a985f7a-329d-410f-a407-b7fa3b65256e req-c9cc979e-3a06-48c5-86d9-9728ab71172b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 464c0661-0ddb-4794-8959-db066827326c] Received event network-vif-plugged-2d94aa0d-ed38-41aa-9f34-5ed2a83a7304 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:45:21 compute-1 nova_compute[225855]: 2026-01-20 15:45:21.493 225859 DEBUG oslo_concurrency.lockutils [req-9a985f7a-329d-410f-a407-b7fa3b65256e req-c9cc979e-3a06-48c5-86d9-9728ab71172b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "464c0661-0ddb-4794-8959-db066827326c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:45:21 compute-1 nova_compute[225855]: 2026-01-20 15:45:21.494 225859 DEBUG oslo_concurrency.lockutils [req-9a985f7a-329d-410f-a407-b7fa3b65256e req-c9cc979e-3a06-48c5-86d9-9728ab71172b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "464c0661-0ddb-4794-8959-db066827326c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:45:21 compute-1 nova_compute[225855]: 2026-01-20 15:45:21.494 225859 DEBUG oslo_concurrency.lockutils [req-9a985f7a-329d-410f-a407-b7fa3b65256e req-c9cc979e-3a06-48c5-86d9-9728ab71172b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "464c0661-0ddb-4794-8959-db066827326c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:45:21 compute-1 nova_compute[225855]: 2026-01-20 15:45:21.495 225859 DEBUG nova.compute.manager [req-9a985f7a-329d-410f-a407-b7fa3b65256e req-c9cc979e-3a06-48c5-86d9-9728ab71172b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 464c0661-0ddb-4794-8959-db066827326c] Processing event network-vif-plugged-2d94aa0d-ed38-41aa-9f34-5ed2a83a7304 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 20 15:45:21 compute-1 nova_compute[225855]: 2026-01-20 15:45:21.495 225859 DEBUG nova.compute.manager [None req-240e01b4-a824-470a-a8e7-b30c4e45ae3f 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] [instance: 464c0661-0ddb-4794-8959-db066827326c] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 20 15:45:21 compute-1 nova_compute[225855]: 2026-01-20 15:45:21.500 225859 DEBUG nova.virt.libvirt.driver [None req-240e01b4-a824-470a-a8e7-b30c4e45ae3f 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] [instance: 464c0661-0ddb-4794-8959-db066827326c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 20 15:45:21 compute-1 neutron-haproxy-ovnmeta-6567de92-725d-4dcc-97c2-0fec6d9bda84[331037]: [NOTICE]   (331041) : New worker (331043) forked
Jan 20 15:45:21 compute-1 neutron-haproxy-ovnmeta-6567de92-725d-4dcc-97c2-0fec6d9bda84[331037]: [NOTICE]   (331041) : Loading success.
Jan 20 15:45:21 compute-1 nova_compute[225855]: 2026-01-20 15:45:21.511 225859 DEBUG nova.virt.driver [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] Emitting event <LifecycleEvent: 1768923921.5107138, 464c0661-0ddb-4794-8959-db066827326c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:45:21 compute-1 nova_compute[225855]: 2026-01-20 15:45:21.512 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 464c0661-0ddb-4794-8959-db066827326c] VM Resumed (Lifecycle Event)
Jan 20 15:45:21 compute-1 nova_compute[225855]: 2026-01-20 15:45:21.516 225859 INFO nova.virt.libvirt.driver [-] [instance: 464c0661-0ddb-4794-8959-db066827326c] Instance spawned successfully.
Jan 20 15:45:21 compute-1 nova_compute[225855]: 2026-01-20 15:45:21.517 225859 DEBUG nova.virt.libvirt.driver [None req-240e01b4-a824-470a-a8e7-b30c4e45ae3f 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] [instance: 464c0661-0ddb-4794-8959-db066827326c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 20 15:45:21 compute-1 nova_compute[225855]: 2026-01-20 15:45:21.540 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 464c0661-0ddb-4794-8959-db066827326c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:45:21 compute-1 nova_compute[225855]: 2026-01-20 15:45:21.546 225859 DEBUG nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 464c0661-0ddb-4794-8959-db066827326c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 15:45:21 compute-1 nova_compute[225855]: 2026-01-20 15:45:21.549 225859 DEBUG nova.virt.libvirt.driver [None req-240e01b4-a824-470a-a8e7-b30c4e45ae3f 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] [instance: 464c0661-0ddb-4794-8959-db066827326c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:45:21 compute-1 nova_compute[225855]: 2026-01-20 15:45:21.549 225859 DEBUG nova.virt.libvirt.driver [None req-240e01b4-a824-470a-a8e7-b30c4e45ae3f 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] [instance: 464c0661-0ddb-4794-8959-db066827326c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:45:21 compute-1 nova_compute[225855]: 2026-01-20 15:45:21.550 225859 DEBUG nova.virt.libvirt.driver [None req-240e01b4-a824-470a-a8e7-b30c4e45ae3f 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] [instance: 464c0661-0ddb-4794-8959-db066827326c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:45:21 compute-1 nova_compute[225855]: 2026-01-20 15:45:21.550 225859 DEBUG nova.virt.libvirt.driver [None req-240e01b4-a824-470a-a8e7-b30c4e45ae3f 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] [instance: 464c0661-0ddb-4794-8959-db066827326c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:45:21 compute-1 nova_compute[225855]: 2026-01-20 15:45:21.551 225859 DEBUG nova.virt.libvirt.driver [None req-240e01b4-a824-470a-a8e7-b30c4e45ae3f 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] [instance: 464c0661-0ddb-4794-8959-db066827326c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:45:21 compute-1 nova_compute[225855]: 2026-01-20 15:45:21.551 225859 DEBUG nova.virt.libvirt.driver [None req-240e01b4-a824-470a-a8e7-b30c4e45ae3f 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] [instance: 464c0661-0ddb-4794-8959-db066827326c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:45:21 compute-1 nova_compute[225855]: 2026-01-20 15:45:21.581 225859 INFO nova.compute.manager [None req-400328df-8b8f-4375-8546-3040b3d2f21e - - - - - -] [instance: 464c0661-0ddb-4794-8959-db066827326c] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 15:45:21 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:45:21 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:45:21 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:45:21.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:45:21 compute-1 nova_compute[225855]: 2026-01-20 15:45:21.623 225859 INFO nova.compute.manager [None req-240e01b4-a824-470a-a8e7-b30c4e45ae3f 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] [instance: 464c0661-0ddb-4794-8959-db066827326c] Took 7.32 seconds to spawn the instance on the hypervisor.
Jan 20 15:45:21 compute-1 nova_compute[225855]: 2026-01-20 15:45:21.624 225859 DEBUG nova.compute.manager [None req-240e01b4-a824-470a-a8e7-b30c4e45ae3f 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] [instance: 464c0661-0ddb-4794-8959-db066827326c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:45:21 compute-1 nova_compute[225855]: 2026-01-20 15:45:21.684 225859 INFO nova.compute.manager [None req-240e01b4-a824-470a-a8e7-b30c4e45ae3f 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] [instance: 464c0661-0ddb-4794-8959-db066827326c] Took 8.55 seconds to build instance.
Jan 20 15:45:21 compute-1 nova_compute[225855]: 2026-01-20 15:45:21.715 225859 DEBUG oslo_concurrency.lockutils [None req-240e01b4-a824-470a-a8e7-b30c4e45ae3f 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Lock "464c0661-0ddb-4794-8959-db066827326c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.667s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:45:21 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:45:21 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:45:21 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:45:21.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:45:22 compute-1 nova_compute[225855]: 2026-01-20 15:45:22.506 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:45:22 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:45:22.505 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=90, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '12:bb:42', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '06:92:24:f7:15:56'}, ipsec=False) old=SB_Global(nb_cfg=89) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:45:22 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:45:22.507 140354 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 20 15:45:23 compute-1 ceph-mon[81775]: pgmap v3695: 321 pgs: 321 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 20 15:45:23 compute-1 nova_compute[225855]: 2026-01-20 15:45:23.573 225859 DEBUG nova.compute.manager [req-1b8e6d7f-fd58-475c-8f92-a8c609105f82 req-a714cb57-d29b-4c95-a226-0d8e89884282 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 464c0661-0ddb-4794-8959-db066827326c] Received event network-vif-plugged-2d94aa0d-ed38-41aa-9f34-5ed2a83a7304 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:45:23 compute-1 nova_compute[225855]: 2026-01-20 15:45:23.573 225859 DEBUG oslo_concurrency.lockutils [req-1b8e6d7f-fd58-475c-8f92-a8c609105f82 req-a714cb57-d29b-4c95-a226-0d8e89884282 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "464c0661-0ddb-4794-8959-db066827326c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:45:23 compute-1 nova_compute[225855]: 2026-01-20 15:45:23.573 225859 DEBUG oslo_concurrency.lockutils [req-1b8e6d7f-fd58-475c-8f92-a8c609105f82 req-a714cb57-d29b-4c95-a226-0d8e89884282 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "464c0661-0ddb-4794-8959-db066827326c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:45:23 compute-1 nova_compute[225855]: 2026-01-20 15:45:23.573 225859 DEBUG oslo_concurrency.lockutils [req-1b8e6d7f-fd58-475c-8f92-a8c609105f82 req-a714cb57-d29b-4c95-a226-0d8e89884282 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "464c0661-0ddb-4794-8959-db066827326c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:45:23 compute-1 nova_compute[225855]: 2026-01-20 15:45:23.574 225859 DEBUG nova.compute.manager [req-1b8e6d7f-fd58-475c-8f92-a8c609105f82 req-a714cb57-d29b-4c95-a226-0d8e89884282 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 464c0661-0ddb-4794-8959-db066827326c] No waiting events found dispatching network-vif-plugged-2d94aa0d-ed38-41aa-9f34-5ed2a83a7304 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:45:23 compute-1 nova_compute[225855]: 2026-01-20 15:45:23.574 225859 WARNING nova.compute.manager [req-1b8e6d7f-fd58-475c-8f92-a8c609105f82 req-a714cb57-d29b-4c95-a226-0d8e89884282 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 464c0661-0ddb-4794-8959-db066827326c] Received unexpected event network-vif-plugged-2d94aa0d-ed38-41aa-9f34-5ed2a83a7304 for instance with vm_state active and task_state None.
Jan 20 15:45:23 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:45:23 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:45:23 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:45:23.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:45:23 compute-1 nova_compute[225855]: 2026-01-20 15:45:23.606 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:45:23 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:45:23 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:45:23 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:45:23.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:45:24 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:45:24 compute-1 ceph-mon[81775]: pgmap v3696: 321 pgs: 321 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 20 15:45:24 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/3839987835' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:45:24 compute-1 nova_compute[225855]: 2026-01-20 15:45:24.900 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:45:25 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/1092852698' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:45:25 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:45:25 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:45:25 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:45:25.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:45:25 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:45:25 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:45:25 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:45:25.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:45:26 compute-1 nova_compute[225855]: 2026-01-20 15:45:26.357 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:45:26 compute-1 nova_compute[225855]: 2026-01-20 15:45:26.357 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 15:45:26 compute-1 nova_compute[225855]: 2026-01-20 15:45:26.357 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 20 15:45:26 compute-1 ceph-mon[81775]: pgmap v3697: 321 pgs: 321 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 958 KiB/s rd, 1.8 MiB/s wr, 67 op/s
Jan 20 15:45:26 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:45:26.509 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5ffd4ac3-9266-4927-98ad-20a17782c725, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '90'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:45:26 compute-1 nova_compute[225855]: 2026-01-20 15:45:26.583 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "refresh_cache-464c0661-0ddb-4794-8959-db066827326c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:45:26 compute-1 nova_compute[225855]: 2026-01-20 15:45:26.584 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquired lock "refresh_cache-464c0661-0ddb-4794-8959-db066827326c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:45:26 compute-1 nova_compute[225855]: 2026-01-20 15:45:26.584 225859 DEBUG nova.network.neutron [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] [instance: 464c0661-0ddb-4794-8959-db066827326c] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 20 15:45:26 compute-1 nova_compute[225855]: 2026-01-20 15:45:26.584 225859 DEBUG nova.objects.instance [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 464c0661-0ddb-4794-8959-db066827326c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:45:27 compute-1 nova_compute[225855]: 2026-01-20 15:45:27.374 225859 DEBUG nova.compute.manager [req-40492224-b9c5-40f2-bfe7-4ce6d817ce37 req-e0f43cad-c23e-4d4a-803b-e112bbc2f7c2 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 464c0661-0ddb-4794-8959-db066827326c] Received event network-changed-2d94aa0d-ed38-41aa-9f34-5ed2a83a7304 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:45:27 compute-1 nova_compute[225855]: 2026-01-20 15:45:27.374 225859 DEBUG nova.compute.manager [req-40492224-b9c5-40f2-bfe7-4ce6d817ce37 req-e0f43cad-c23e-4d4a-803b-e112bbc2f7c2 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 464c0661-0ddb-4794-8959-db066827326c] Refreshing instance network info cache due to event network-changed-2d94aa0d-ed38-41aa-9f34-5ed2a83a7304. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 15:45:27 compute-1 nova_compute[225855]: 2026-01-20 15:45:27.375 225859 DEBUG oslo_concurrency.lockutils [req-40492224-b9c5-40f2-bfe7-4ce6d817ce37 req-e0f43cad-c23e-4d4a-803b-e112bbc2f7c2 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-464c0661-0ddb-4794-8959-db066827326c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:45:27 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:45:27 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:45:27 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:45:27.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:45:27 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:45:27 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:45:27 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:45:27.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:45:28 compute-1 nova_compute[225855]: 2026-01-20 15:45:28.608 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:45:28 compute-1 ceph-mon[81775]: pgmap v3698: 321 pgs: 321 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.1 MiB/s wr, 77 op/s
Jan 20 15:45:28 compute-1 nova_compute[225855]: 2026-01-20 15:45:28.714 225859 DEBUG nova.network.neutron [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] [instance: 464c0661-0ddb-4794-8959-db066827326c] Updating instance_info_cache with network_info: [{"id": "2d94aa0d-ed38-41aa-9f34-5ed2a83a7304", "address": "fa:16:3e:87:b1:99", "network": {"id": "6567de92-725d-4dcc-97c2-0fec6d9bda84", "bridge": "br-int", "label": "tempest-network-smoke--623656421", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "728662ec7f654a3fb2e53a90b8707d7e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2d94aa0d-ed", "ovs_interfaceid": "2d94aa0d-ed38-41aa-9f34-5ed2a83a7304", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:45:28 compute-1 nova_compute[225855]: 2026-01-20 15:45:28.737 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Releasing lock "refresh_cache-464c0661-0ddb-4794-8959-db066827326c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:45:28 compute-1 nova_compute[225855]: 2026-01-20 15:45:28.738 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] [instance: 464c0661-0ddb-4794-8959-db066827326c] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 20 15:45:28 compute-1 nova_compute[225855]: 2026-01-20 15:45:28.739 225859 DEBUG oslo_concurrency.lockutils [req-40492224-b9c5-40f2-bfe7-4ce6d817ce37 req-e0f43cad-c23e-4d4a-803b-e112bbc2f7c2 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-464c0661-0ddb-4794-8959-db066827326c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:45:28 compute-1 nova_compute[225855]: 2026-01-20 15:45:28.739 225859 DEBUG nova.network.neutron [req-40492224-b9c5-40f2-bfe7-4ce6d817ce37 req-e0f43cad-c23e-4d4a-803b-e112bbc2f7c2 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 464c0661-0ddb-4794-8959-db066827326c] Refreshing network info cache for port 2d94aa0d-ed38-41aa-9f34-5ed2a83a7304 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 15:45:29 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:45:29 compute-1 nova_compute[225855]: 2026-01-20 15:45:29.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:45:29 compute-1 nova_compute[225855]: 2026-01-20 15:45:29.340 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 15:45:29 compute-1 nova_compute[225855]: 2026-01-20 15:45:29.456 225859 DEBUG nova.compute.manager [req-50306de1-5986-4c5d-865b-29e100f9905d req-a18d3ac1-e6b8-4177-8688-93a87c57bc15 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 464c0661-0ddb-4794-8959-db066827326c] Received event network-changed-2d94aa0d-ed38-41aa-9f34-5ed2a83a7304 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:45:29 compute-1 nova_compute[225855]: 2026-01-20 15:45:29.457 225859 DEBUG nova.compute.manager [req-50306de1-5986-4c5d-865b-29e100f9905d req-a18d3ac1-e6b8-4177-8688-93a87c57bc15 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 464c0661-0ddb-4794-8959-db066827326c] Refreshing instance network info cache due to event network-changed-2d94aa0d-ed38-41aa-9f34-5ed2a83a7304. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 15:45:29 compute-1 nova_compute[225855]: 2026-01-20 15:45:29.457 225859 DEBUG oslo_concurrency.lockutils [req-50306de1-5986-4c5d-865b-29e100f9905d req-a18d3ac1-e6b8-4177-8688-93a87c57bc15 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-464c0661-0ddb-4794-8959-db066827326c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:45:29 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:45:29 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:45:29 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:45:29.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:45:29 compute-1 nova_compute[225855]: 2026-01-20 15:45:29.903 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:45:29 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:45:29 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:45:29 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:45:29.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:45:30 compute-1 ceph-mon[81775]: pgmap v3699: 321 pgs: 321 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 188 KiB/s wr, 75 op/s
Jan 20 15:45:30 compute-1 nova_compute[225855]: 2026-01-20 15:45:30.643 225859 DEBUG nova.network.neutron [req-40492224-b9c5-40f2-bfe7-4ce6d817ce37 req-e0f43cad-c23e-4d4a-803b-e112bbc2f7c2 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 464c0661-0ddb-4794-8959-db066827326c] Updated VIF entry in instance network info cache for port 2d94aa0d-ed38-41aa-9f34-5ed2a83a7304. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 15:45:30 compute-1 nova_compute[225855]: 2026-01-20 15:45:30.644 225859 DEBUG nova.network.neutron [req-40492224-b9c5-40f2-bfe7-4ce6d817ce37 req-e0f43cad-c23e-4d4a-803b-e112bbc2f7c2 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 464c0661-0ddb-4794-8959-db066827326c] Updating instance_info_cache with network_info: [{"id": "2d94aa0d-ed38-41aa-9f34-5ed2a83a7304", "address": "fa:16:3e:87:b1:99", "network": {"id": "6567de92-725d-4dcc-97c2-0fec6d9bda84", "bridge": "br-int", "label": "tempest-network-smoke--623656421", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "728662ec7f654a3fb2e53a90b8707d7e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2d94aa0d-ed", "ovs_interfaceid": "2d94aa0d-ed38-41aa-9f34-5ed2a83a7304", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:45:30 compute-1 nova_compute[225855]: 2026-01-20 15:45:30.661 225859 DEBUG oslo_concurrency.lockutils [req-40492224-b9c5-40f2-bfe7-4ce6d817ce37 req-e0f43cad-c23e-4d4a-803b-e112bbc2f7c2 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-464c0661-0ddb-4794-8959-db066827326c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:45:30 compute-1 nova_compute[225855]: 2026-01-20 15:45:30.663 225859 DEBUG oslo_concurrency.lockutils [req-50306de1-5986-4c5d-865b-29e100f9905d req-a18d3ac1-e6b8-4177-8688-93a87c57bc15 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-464c0661-0ddb-4794-8959-db066827326c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:45:30 compute-1 nova_compute[225855]: 2026-01-20 15:45:30.663 225859 DEBUG nova.network.neutron [req-50306de1-5986-4c5d-865b-29e100f9905d req-a18d3ac1-e6b8-4177-8688-93a87c57bc15 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 464c0661-0ddb-4794-8959-db066827326c] Refreshing network info cache for port 2d94aa0d-ed38-41aa-9f34-5ed2a83a7304 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 15:45:31 compute-1 nova_compute[225855]: 2026-01-20 15:45:31.341 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:45:31 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:45:31 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:45:31 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:45:31.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:45:31 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/1867368074' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:45:31 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:45:31 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:45:31 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:45:31.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:45:32 compute-1 nova_compute[225855]: 2026-01-20 15:45:32.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:45:32 compute-1 nova_compute[225855]: 2026-01-20 15:45:32.340 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:45:32 compute-1 nova_compute[225855]: 2026-01-20 15:45:32.364 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:45:32 compute-1 nova_compute[225855]: 2026-01-20 15:45:32.365 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:45:32 compute-1 nova_compute[225855]: 2026-01-20 15:45:32.365 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:45:32 compute-1 nova_compute[225855]: 2026-01-20 15:45:32.365 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 15:45:32 compute-1 nova_compute[225855]: 2026-01-20 15:45:32.366 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:45:32 compute-1 nova_compute[225855]: 2026-01-20 15:45:32.609 225859 DEBUG nova.network.neutron [req-50306de1-5986-4c5d-865b-29e100f9905d req-a18d3ac1-e6b8-4177-8688-93a87c57bc15 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 464c0661-0ddb-4794-8959-db066827326c] Updated VIF entry in instance network info cache for port 2d94aa0d-ed38-41aa-9f34-5ed2a83a7304. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 15:45:32 compute-1 nova_compute[225855]: 2026-01-20 15:45:32.610 225859 DEBUG nova.network.neutron [req-50306de1-5986-4c5d-865b-29e100f9905d req-a18d3ac1-e6b8-4177-8688-93a87c57bc15 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 464c0661-0ddb-4794-8959-db066827326c] Updating instance_info_cache with network_info: [{"id": "2d94aa0d-ed38-41aa-9f34-5ed2a83a7304", "address": "fa:16:3e:87:b1:99", "network": {"id": "6567de92-725d-4dcc-97c2-0fec6d9bda84", "bridge": "br-int", "label": "tempest-network-smoke--623656421", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "728662ec7f654a3fb2e53a90b8707d7e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2d94aa0d-ed", "ovs_interfaceid": "2d94aa0d-ed38-41aa-9f34-5ed2a83a7304", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:45:32 compute-1 nova_compute[225855]: 2026-01-20 15:45:32.626 225859 DEBUG oslo_concurrency.lockutils [req-50306de1-5986-4c5d-865b-29e100f9905d req-a18d3ac1-e6b8-4177-8688-93a87c57bc15 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-464c0661-0ddb-4794-8959-db066827326c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:45:32 compute-1 ceph-mon[81775]: pgmap v3700: 321 pgs: 321 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 74 op/s
Jan 20 15:45:32 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/254292647' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:45:32 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:45:32 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2225520846' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:45:32 compute-1 nova_compute[225855]: 2026-01-20 15:45:32.845 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:45:32 compute-1 nova_compute[225855]: 2026-01-20 15:45:32.932 225859 DEBUG nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] skipping disk for instance-000000de as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 15:45:32 compute-1 nova_compute[225855]: 2026-01-20 15:45:32.933 225859 DEBUG nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] skipping disk for instance-000000de as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 15:45:33 compute-1 nova_compute[225855]: 2026-01-20 15:45:33.102 225859 WARNING nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 15:45:33 compute-1 nova_compute[225855]: 2026-01-20 15:45:33.103 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=3984MB free_disk=20.921802520751953GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 15:45:33 compute-1 nova_compute[225855]: 2026-01-20 15:45:33.104 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:45:33 compute-1 nova_compute[225855]: 2026-01-20 15:45:33.104 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:45:33 compute-1 nova_compute[225855]: 2026-01-20 15:45:33.206 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Instance 464c0661-0ddb-4794-8959-db066827326c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 20 15:45:33 compute-1 nova_compute[225855]: 2026-01-20 15:45:33.207 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 15:45:33 compute-1 nova_compute[225855]: 2026-01-20 15:45:33.207 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 15:45:33 compute-1 nova_compute[225855]: 2026-01-20 15:45:33.238 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:45:33 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:45:33 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:45:33 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:45:33.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:45:33 compute-1 nova_compute[225855]: 2026-01-20 15:45:33.611 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:45:33 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/2225520846' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:45:33 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:45:33 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/740262342' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:45:33 compute-1 nova_compute[225855]: 2026-01-20 15:45:33.705 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:45:33 compute-1 nova_compute[225855]: 2026-01-20 15:45:33.711 225859 DEBUG nova.compute.provider_tree [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:45:33 compute-1 nova_compute[225855]: 2026-01-20 15:45:33.743 225859 DEBUG nova.scheduler.client.report [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:45:33 compute-1 nova_compute[225855]: 2026-01-20 15:45:33.761 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 15:45:33 compute-1 nova_compute[225855]: 2026-01-20 15:45:33.761 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.657s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:45:33 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:45:33 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:45:33 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:45:33.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:45:34 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:45:34 compute-1 sudo[331105]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:45:34 compute-1 sudo[331105]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:45:34 compute-1 sudo[331105]: pam_unix(sudo:session): session closed for user root
Jan 20 15:45:34 compute-1 sudo[331130]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:45:34 compute-1 sudo[331130]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:45:34 compute-1 sudo[331130]: pam_unix(sudo:session): session closed for user root
Jan 20 15:45:34 compute-1 nova_compute[225855]: 2026-01-20 15:45:34.761 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:45:34 compute-1 nova_compute[225855]: 2026-01-20 15:45:34.761 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:45:34 compute-1 nova_compute[225855]: 2026-01-20 15:45:34.761 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:45:34 compute-1 nova_compute[225855]: 2026-01-20 15:45:34.942 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:45:35 compute-1 ovn_controller[130490]: 2026-01-20T15:45:35Z|00124|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:87:b1:99 10.100.0.7
Jan 20 15:45:35 compute-1 ceph-mon[81775]: pgmap v3701: 321 pgs: 321 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 74 op/s
Jan 20 15:45:35 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/740262342' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:45:35 compute-1 ovn_controller[130490]: 2026-01-20T15:45:35Z|00125|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:87:b1:99 10.100.0.7
Jan 20 15:45:35 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:45:35 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:45:35 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:45:35.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:45:35 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:45:35 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:45:35 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:45:35.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:45:36 compute-1 ceph-mon[81775]: pgmap v3702: 321 pgs: 321 active+clean; 269 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.1 MiB/s rd, 1.4 MiB/s wr, 111 op/s
Jan 20 15:45:37 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:45:37 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:45:37 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:45:37.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:45:37 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:45:37 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:45:37 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:45:37.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:45:38 compute-1 podman[331157]: 2026-01-20 15:45:38.060253885 +0000 UTC m=+0.092208347 container health_status 72be95c12e041eb2cd1a16ad70dfa1391d9f03a5f3180a614ead9459dcfcd46f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 20 15:45:38 compute-1 nova_compute[225855]: 2026-01-20 15:45:38.669 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:45:38 compute-1 ceph-mon[81775]: pgmap v3703: 321 pgs: 321 active+clean; 276 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.3 MiB/s rd, 2.1 MiB/s wr, 87 op/s
Jan 20 15:45:39 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:45:39 compute-1 nova_compute[225855]: 2026-01-20 15:45:39.335 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:45:39 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:45:39 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:45:39 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:45:39.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:45:39 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:45:39 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:45:39 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:45:39.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:45:39 compute-1 nova_compute[225855]: 2026-01-20 15:45:39.949 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:45:40 compute-1 ceph-mon[81775]: pgmap v3704: 321 pgs: 321 active+clean; 276 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 319 KiB/s rd, 2.1 MiB/s wr, 55 op/s
Jan 20 15:45:41 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:45:41 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:45:41 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:45:41.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:45:41 compute-1 ceph-mon[81775]: pgmap v3705: 321 pgs: 321 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 330 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Jan 20 15:45:41 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:45:41 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:45:41 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:45:41.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:45:42 compute-1 nova_compute[225855]: 2026-01-20 15:45:42.332 225859 DEBUG oslo_concurrency.lockutils [None req-adaa13a9-ed09-47ae-af7e-ee4814c03d82 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Acquiring lock "464c0661-0ddb-4794-8959-db066827326c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:45:42 compute-1 nova_compute[225855]: 2026-01-20 15:45:42.332 225859 DEBUG oslo_concurrency.lockutils [None req-adaa13a9-ed09-47ae-af7e-ee4814c03d82 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Lock "464c0661-0ddb-4794-8959-db066827326c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:45:42 compute-1 nova_compute[225855]: 2026-01-20 15:45:42.332 225859 DEBUG oslo_concurrency.lockutils [None req-adaa13a9-ed09-47ae-af7e-ee4814c03d82 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Acquiring lock "464c0661-0ddb-4794-8959-db066827326c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:45:42 compute-1 nova_compute[225855]: 2026-01-20 15:45:42.333 225859 DEBUG oslo_concurrency.lockutils [None req-adaa13a9-ed09-47ae-af7e-ee4814c03d82 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Lock "464c0661-0ddb-4794-8959-db066827326c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:45:42 compute-1 nova_compute[225855]: 2026-01-20 15:45:42.333 225859 DEBUG oslo_concurrency.lockutils [None req-adaa13a9-ed09-47ae-af7e-ee4814c03d82 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Lock "464c0661-0ddb-4794-8959-db066827326c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:45:42 compute-1 nova_compute[225855]: 2026-01-20 15:45:42.334 225859 INFO nova.compute.manager [None req-adaa13a9-ed09-47ae-af7e-ee4814c03d82 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] [instance: 464c0661-0ddb-4794-8959-db066827326c] Terminating instance
Jan 20 15:45:42 compute-1 nova_compute[225855]: 2026-01-20 15:45:42.335 225859 DEBUG nova.compute.manager [None req-adaa13a9-ed09-47ae-af7e-ee4814c03d82 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] [instance: 464c0661-0ddb-4794-8959-db066827326c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 20 15:45:42 compute-1 kernel: tap2d94aa0d-ed (unregistering): left promiscuous mode
Jan 20 15:45:42 compute-1 NetworkManager[49104]: <info>  [1768923942.3917] device (tap2d94aa0d-ed): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 20 15:45:42 compute-1 ovn_controller[130490]: 2026-01-20T15:45:42Z|01005|binding|INFO|Releasing lport 2d94aa0d-ed38-41aa-9f34-5ed2a83a7304 from this chassis (sb_readonly=0)
Jan 20 15:45:42 compute-1 ovn_controller[130490]: 2026-01-20T15:45:42Z|01006|binding|INFO|Setting lport 2d94aa0d-ed38-41aa-9f34-5ed2a83a7304 down in Southbound
Jan 20 15:45:42 compute-1 nova_compute[225855]: 2026-01-20 15:45:42.402 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:45:42 compute-1 ovn_controller[130490]: 2026-01-20T15:45:42Z|01007|binding|INFO|Removing iface tap2d94aa0d-ed ovn-installed in OVS
Jan 20 15:45:42 compute-1 nova_compute[225855]: 2026-01-20 15:45:42.404 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:45:42 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:45:42.414 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:87:b1:99 10.100.0.7', 'unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '464c0661-0ddb-4794-8959-db066827326c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6567de92-725d-4dcc-97c2-0fec6d9bda84', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '728662ec7f654a3fb2e53a90b8707d7e', 'neutron:revision_number': '6', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=92670689-434c-4ed8-a2e4-6278a7d19616, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>], logical_port=2d94aa0d-ed38-41aa-9f34-5ed2a83a7304) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb671582ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:45:42 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:45:42.415 140354 INFO neutron.agent.ovn.metadata.agent [-] Port 2d94aa0d-ed38-41aa-9f34-5ed2a83a7304 in datapath 6567de92-725d-4dcc-97c2-0fec6d9bda84 unbound from our chassis
Jan 20 15:45:42 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:45:42.416 140354 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6567de92-725d-4dcc-97c2-0fec6d9bda84, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 20 15:45:42 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:45:42.417 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[cba7414d-7022-44f4-9538-5296240846ea]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:45:42 compute-1 nova_compute[225855]: 2026-01-20 15:45:42.419 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:45:42 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:45:42.418 140354 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-6567de92-725d-4dcc-97c2-0fec6d9bda84 namespace which is not needed anymore
Jan 20 15:45:42 compute-1 systemd[1]: machine-qemu\x2d115\x2dinstance\x2d000000de.scope: Deactivated successfully.
Jan 20 15:45:42 compute-1 systemd[1]: machine-qemu\x2d115\x2dinstance\x2d000000de.scope: Consumed 13.524s CPU time.
Jan 20 15:45:42 compute-1 systemd-machined[194361]: Machine qemu-115-instance-000000de terminated.
Jan 20 15:45:42 compute-1 neutron-haproxy-ovnmeta-6567de92-725d-4dcc-97c2-0fec6d9bda84[331037]: [NOTICE]   (331041) : haproxy version is 2.8.14-c23fe91
Jan 20 15:45:42 compute-1 neutron-haproxy-ovnmeta-6567de92-725d-4dcc-97c2-0fec6d9bda84[331037]: [NOTICE]   (331041) : path to executable is /usr/sbin/haproxy
Jan 20 15:45:42 compute-1 neutron-haproxy-ovnmeta-6567de92-725d-4dcc-97c2-0fec6d9bda84[331037]: [WARNING]  (331041) : Exiting Master process...
Jan 20 15:45:42 compute-1 neutron-haproxy-ovnmeta-6567de92-725d-4dcc-97c2-0fec6d9bda84[331037]: [ALERT]    (331041) : Current worker (331043) exited with code 143 (Terminated)
Jan 20 15:45:42 compute-1 neutron-haproxy-ovnmeta-6567de92-725d-4dcc-97c2-0fec6d9bda84[331037]: [WARNING]  (331041) : All workers exited. Exiting... (0)
Jan 20 15:45:42 compute-1 systemd[1]: libpod-492a6f3905f2e7ce4d211c13d3bef1132fb971d16c227b376e5f4514c80ff23f.scope: Deactivated successfully.
Jan 20 15:45:42 compute-1 podman[331209]: 2026-01-20 15:45:42.547083229 +0000 UTC m=+0.045061704 container died 492a6f3905f2e7ce4d211c13d3bef1132fb971d16c227b376e5f4514c80ff23f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6567de92-725d-4dcc-97c2-0fec6d9bda84, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 20 15:45:42 compute-1 nova_compute[225855]: 2026-01-20 15:45:42.557 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:45:42 compute-1 nova_compute[225855]: 2026-01-20 15:45:42.563 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:45:42 compute-1 nova_compute[225855]: 2026-01-20 15:45:42.569 225859 INFO nova.virt.libvirt.driver [-] [instance: 464c0661-0ddb-4794-8959-db066827326c] Instance destroyed successfully.
Jan 20 15:45:42 compute-1 nova_compute[225855]: 2026-01-20 15:45:42.570 225859 DEBUG nova.objects.instance [None req-adaa13a9-ed09-47ae-af7e-ee4814c03d82 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Lazy-loading 'resources' on Instance uuid 464c0661-0ddb-4794-8959-db066827326c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:45:42 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-492a6f3905f2e7ce4d211c13d3bef1132fb971d16c227b376e5f4514c80ff23f-userdata-shm.mount: Deactivated successfully.
Jan 20 15:45:42 compute-1 systemd[1]: var-lib-containers-storage-overlay-4351c11d91150c48202686c6f66728d5abd4164b417977ca124c1d87a5582683-merged.mount: Deactivated successfully.
Jan 20 15:45:42 compute-1 nova_compute[225855]: 2026-01-20 15:45:42.592 225859 DEBUG nova.virt.libvirt.vif [None req-adaa13a9-ed09-47ae-af7e-ee4814c03d82 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T15:45:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-342561427-gen-1-623022678',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-342561427-gen-1-623022678',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-342561427-gen',id=222,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJW0RDNBQ7KP8Mqzxdg2i8X8upMhqABnnonEiTjmMv4W9RTdXxd1b3Z8QL9swZ0e0+6po4+8oM5PFrC0tn+WmJ7twYzqOI2QMeaFZC9+Q35AVwNQsKxl3WWPGvw1iSa1jA==',key_name='tempest-TestSecurityGroupsBasicOps-1661471182',keypairs=<?>,launch_index=0,launched_at=2026-01-20T15:45:21Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='728662ec7f654a3fb2e53a90b8707d7e',ramdisk_id='',reservation_id='r-5t8zkczs',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-342561427',owner_user_name='tempest-TestSecurityGroupsBasicOps-342561427-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T15:45:21Z,user_data=None,user_id='5985ef736503499a9f1d734cabc33ce5',uuid=464c0661-0ddb-4794-8959-db066827326c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2d94aa0d-ed38-41aa-9f34-5ed2a83a7304", "address": "fa:16:3e:87:b1:99", "network": {"id": "6567de92-725d-4dcc-97c2-0fec6d9bda84", "bridge": "br-int", "label": "tempest-network-smoke--623656421", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "728662ec7f654a3fb2e53a90b8707d7e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2d94aa0d-ed", "ovs_interfaceid": "2d94aa0d-ed38-41aa-9f34-5ed2a83a7304", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 20 15:45:42 compute-1 podman[331209]: 2026-01-20 15:45:42.593075879 +0000 UTC m=+0.091054344 container cleanup 492a6f3905f2e7ce4d211c13d3bef1132fb971d16c227b376e5f4514c80ff23f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6567de92-725d-4dcc-97c2-0fec6d9bda84, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 20 15:45:42 compute-1 nova_compute[225855]: 2026-01-20 15:45:42.593 225859 DEBUG nova.network.os_vif_util [None req-adaa13a9-ed09-47ae-af7e-ee4814c03d82 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Converting VIF {"id": "2d94aa0d-ed38-41aa-9f34-5ed2a83a7304", "address": "fa:16:3e:87:b1:99", "network": {"id": "6567de92-725d-4dcc-97c2-0fec6d9bda84", "bridge": "br-int", "label": "tempest-network-smoke--623656421", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "728662ec7f654a3fb2e53a90b8707d7e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2d94aa0d-ed", "ovs_interfaceid": "2d94aa0d-ed38-41aa-9f34-5ed2a83a7304", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 15:45:42 compute-1 nova_compute[225855]: 2026-01-20 15:45:42.594 225859 DEBUG nova.network.os_vif_util [None req-adaa13a9-ed09-47ae-af7e-ee4814c03d82 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:87:b1:99,bridge_name='br-int',has_traffic_filtering=True,id=2d94aa0d-ed38-41aa-9f34-5ed2a83a7304,network=Network(6567de92-725d-4dcc-97c2-0fec6d9bda84),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2d94aa0d-ed') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 15:45:42 compute-1 nova_compute[225855]: 2026-01-20 15:45:42.594 225859 DEBUG os_vif [None req-adaa13a9-ed09-47ae-af7e-ee4814c03d82 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:87:b1:99,bridge_name='br-int',has_traffic_filtering=True,id=2d94aa0d-ed38-41aa-9f34-5ed2a83a7304,network=Network(6567de92-725d-4dcc-97c2-0fec6d9bda84),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2d94aa0d-ed') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 20 15:45:42 compute-1 nova_compute[225855]: 2026-01-20 15:45:42.596 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:45:42 compute-1 nova_compute[225855]: 2026-01-20 15:45:42.597 225859 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2d94aa0d-ed, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:45:42 compute-1 nova_compute[225855]: 2026-01-20 15:45:42.600 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:45:42 compute-1 systemd[1]: libpod-conmon-492a6f3905f2e7ce4d211c13d3bef1132fb971d16c227b376e5f4514c80ff23f.scope: Deactivated successfully.
Jan 20 15:45:42 compute-1 nova_compute[225855]: 2026-01-20 15:45:42.603 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 15:45:42 compute-1 nova_compute[225855]: 2026-01-20 15:45:42.606 225859 INFO os_vif [None req-adaa13a9-ed09-47ae-af7e-ee4814c03d82 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:87:b1:99,bridge_name='br-int',has_traffic_filtering=True,id=2d94aa0d-ed38-41aa-9f34-5ed2a83a7304,network=Network(6567de92-725d-4dcc-97c2-0fec6d9bda84),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2d94aa0d-ed')
Jan 20 15:45:42 compute-1 podman[331250]: 2026-01-20 15:45:42.670314902 +0000 UTC m=+0.049929252 container remove 492a6f3905f2e7ce4d211c13d3bef1132fb971d16c227b376e5f4514c80ff23f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6567de92-725d-4dcc-97c2-0fec6d9bda84, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 20 15:45:42 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:45:42.678 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[617dfa11-9e94-47a2-bfe6-90be4ae56dc1]: (4, ('Tue Jan 20 03:45:42 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-6567de92-725d-4dcc-97c2-0fec6d9bda84 (492a6f3905f2e7ce4d211c13d3bef1132fb971d16c227b376e5f4514c80ff23f)\n492a6f3905f2e7ce4d211c13d3bef1132fb971d16c227b376e5f4514c80ff23f\nTue Jan 20 03:45:42 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-6567de92-725d-4dcc-97c2-0fec6d9bda84 (492a6f3905f2e7ce4d211c13d3bef1132fb971d16c227b376e5f4514c80ff23f)\n492a6f3905f2e7ce4d211c13d3bef1132fb971d16c227b376e5f4514c80ff23f\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:45:42 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:45:42.680 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[5e4cf084-741e-49f3-bde6-fbcbee11a88d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:45:42 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:45:42.681 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6567de92-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:45:42 compute-1 nova_compute[225855]: 2026-01-20 15:45:42.682 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:45:42 compute-1 kernel: tap6567de92-70: left promiscuous mode
Jan 20 15:45:42 compute-1 nova_compute[225855]: 2026-01-20 15:45:42.700 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:45:42 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:45:42.703 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[93f86377-8053-4e16-b7f2-c5b18706a24d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:45:42 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:45:42.717 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[a4cac1ba-84c0-435b-badc-dccd54b528c3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:45:42 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:45:42.719 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[cbe5e427-e67f-436f-bb3d-8e23e86fd94a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:45:42 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:45:42.735 229707 DEBUG oslo.privsep.daemon [-] privsep: reply[e62ab6cd-0fe5-4adc-ae34-d60648b93021]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 895967, 'reachable_time': 41472, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 331283, 'error': None, 'target': 'ovnmeta-6567de92-725d-4dcc-97c2-0fec6d9bda84', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:45:42 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:45:42.738 140466 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-6567de92-725d-4dcc-97c2-0fec6d9bda84 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 20 15:45:42 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:45:42.738 140466 DEBUG oslo.privsep.daemon [-] privsep: reply[05697ccb-2a63-487b-8502-1f611ce134d7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:45:42 compute-1 systemd[1]: run-netns-ovnmeta\x2d6567de92\x2d725d\x2d4dcc\x2d97c2\x2d0fec6d9bda84.mount: Deactivated successfully.
Jan 20 15:45:42 compute-1 nova_compute[225855]: 2026-01-20 15:45:42.986 225859 INFO nova.virt.libvirt.driver [None req-adaa13a9-ed09-47ae-af7e-ee4814c03d82 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] [instance: 464c0661-0ddb-4794-8959-db066827326c] Deleting instance files /var/lib/nova/instances/464c0661-0ddb-4794-8959-db066827326c_del
Jan 20 15:45:42 compute-1 nova_compute[225855]: 2026-01-20 15:45:42.987 225859 INFO nova.virt.libvirt.driver [None req-adaa13a9-ed09-47ae-af7e-ee4814c03d82 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] [instance: 464c0661-0ddb-4794-8959-db066827326c] Deletion of /var/lib/nova/instances/464c0661-0ddb-4794-8959-db066827326c_del complete
Jan 20 15:45:43 compute-1 nova_compute[225855]: 2026-01-20 15:45:43.059 225859 INFO nova.compute.manager [None req-adaa13a9-ed09-47ae-af7e-ee4814c03d82 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] [instance: 464c0661-0ddb-4794-8959-db066827326c] Took 0.72 seconds to destroy the instance on the hypervisor.
Jan 20 15:45:43 compute-1 nova_compute[225855]: 2026-01-20 15:45:43.060 225859 DEBUG oslo.service.loopingcall [None req-adaa13a9-ed09-47ae-af7e-ee4814c03d82 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 20 15:45:43 compute-1 nova_compute[225855]: 2026-01-20 15:45:43.061 225859 DEBUG nova.compute.manager [-] [instance: 464c0661-0ddb-4794-8959-db066827326c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 20 15:45:43 compute-1 nova_compute[225855]: 2026-01-20 15:45:43.061 225859 DEBUG nova.network.neutron [-] [instance: 464c0661-0ddb-4794-8959-db066827326c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 20 15:45:43 compute-1 nova_compute[225855]: 2026-01-20 15:45:43.214 225859 DEBUG nova.compute.manager [req-d9604e77-d39d-40f5-8a23-ad474dc757ae req-8bdb0172-e6db-4fa6-b308-1e01b04074e1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 464c0661-0ddb-4794-8959-db066827326c] Received event network-vif-unplugged-2d94aa0d-ed38-41aa-9f34-5ed2a83a7304 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:45:43 compute-1 nova_compute[225855]: 2026-01-20 15:45:43.214 225859 DEBUG oslo_concurrency.lockutils [req-d9604e77-d39d-40f5-8a23-ad474dc757ae req-8bdb0172-e6db-4fa6-b308-1e01b04074e1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "464c0661-0ddb-4794-8959-db066827326c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:45:43 compute-1 nova_compute[225855]: 2026-01-20 15:45:43.215 225859 DEBUG oslo_concurrency.lockutils [req-d9604e77-d39d-40f5-8a23-ad474dc757ae req-8bdb0172-e6db-4fa6-b308-1e01b04074e1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "464c0661-0ddb-4794-8959-db066827326c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:45:43 compute-1 nova_compute[225855]: 2026-01-20 15:45:43.215 225859 DEBUG oslo_concurrency.lockutils [req-d9604e77-d39d-40f5-8a23-ad474dc757ae req-8bdb0172-e6db-4fa6-b308-1e01b04074e1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "464c0661-0ddb-4794-8959-db066827326c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:45:43 compute-1 nova_compute[225855]: 2026-01-20 15:45:43.215 225859 DEBUG nova.compute.manager [req-d9604e77-d39d-40f5-8a23-ad474dc757ae req-8bdb0172-e6db-4fa6-b308-1e01b04074e1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 464c0661-0ddb-4794-8959-db066827326c] No waiting events found dispatching network-vif-unplugged-2d94aa0d-ed38-41aa-9f34-5ed2a83a7304 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:45:43 compute-1 nova_compute[225855]: 2026-01-20 15:45:43.215 225859 DEBUG nova.compute.manager [req-d9604e77-d39d-40f5-8a23-ad474dc757ae req-8bdb0172-e6db-4fa6-b308-1e01b04074e1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 464c0661-0ddb-4794-8959-db066827326c] Received event network-vif-unplugged-2d94aa0d-ed38-41aa-9f34-5ed2a83a7304 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 20 15:45:43 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:45:43 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:45:43 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:45:43.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:45:43 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:45:43 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:45:43 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:45:43.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:45:44 compute-1 nova_compute[225855]: 2026-01-20 15:45:44.036 225859 DEBUG nova.network.neutron [-] [instance: 464c0661-0ddb-4794-8959-db066827326c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:45:44 compute-1 nova_compute[225855]: 2026-01-20 15:45:44.053 225859 INFO nova.compute.manager [-] [instance: 464c0661-0ddb-4794-8959-db066827326c] Took 0.99 seconds to deallocate network for instance.
Jan 20 15:45:44 compute-1 nova_compute[225855]: 2026-01-20 15:45:44.108 225859 DEBUG oslo_concurrency.lockutils [None req-adaa13a9-ed09-47ae-af7e-ee4814c03d82 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:45:44 compute-1 nova_compute[225855]: 2026-01-20 15:45:44.108 225859 DEBUG oslo_concurrency.lockutils [None req-adaa13a9-ed09-47ae-af7e-ee4814c03d82 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:45:44 compute-1 nova_compute[225855]: 2026-01-20 15:45:44.111 225859 DEBUG nova.compute.manager [req-357eff8c-6f1f-4608-9502-70278042b0cb req-d92e821c-676d-46d2-ad19-a6dfba903a85 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 464c0661-0ddb-4794-8959-db066827326c] Received event network-vif-deleted-2d94aa0d-ed38-41aa-9f34-5ed2a83a7304 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:45:44 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:45:44 compute-1 nova_compute[225855]: 2026-01-20 15:45:44.172 225859 DEBUG oslo_concurrency.processutils [None req-adaa13a9-ed09-47ae-af7e-ee4814c03d82 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:45:44 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:45:44 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1327197156' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:45:44 compute-1 nova_compute[225855]: 2026-01-20 15:45:44.614 225859 DEBUG oslo_concurrency.processutils [None req-adaa13a9-ed09-47ae-af7e-ee4814c03d82 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:45:44 compute-1 nova_compute[225855]: 2026-01-20 15:45:44.620 225859 DEBUG nova.compute.provider_tree [None req-adaa13a9-ed09-47ae-af7e-ee4814c03d82 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:45:44 compute-1 ceph-mon[81775]: pgmap v3706: 321 pgs: 321 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Jan 20 15:45:44 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/1327197156' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:45:44 compute-1 nova_compute[225855]: 2026-01-20 15:45:44.644 225859 DEBUG nova.scheduler.client.report [None req-adaa13a9-ed09-47ae-af7e-ee4814c03d82 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:45:44 compute-1 nova_compute[225855]: 2026-01-20 15:45:44.673 225859 DEBUG oslo_concurrency.lockutils [None req-adaa13a9-ed09-47ae-af7e-ee4814c03d82 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.564s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:45:44 compute-1 nova_compute[225855]: 2026-01-20 15:45:44.711 225859 INFO nova.scheduler.client.report [None req-adaa13a9-ed09-47ae-af7e-ee4814c03d82 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Deleted allocations for instance 464c0661-0ddb-4794-8959-db066827326c
Jan 20 15:45:44 compute-1 nova_compute[225855]: 2026-01-20 15:45:44.785 225859 DEBUG oslo_concurrency.lockutils [None req-adaa13a9-ed09-47ae-af7e-ee4814c03d82 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Lock "464c0661-0ddb-4794-8959-db066827326c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.453s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:45:44 compute-1 nova_compute[225855]: 2026-01-20 15:45:44.950 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:45:45 compute-1 podman[331308]: 2026-01-20 15:45:45.01549313 +0000 UTC m=+0.053085331 container health_status 533cd8e0c3b8b7f910f15cef49e7f879ab46c12ce646a35628c15f13725eb500 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Jan 20 15:45:45 compute-1 nova_compute[225855]: 2026-01-20 15:45:45.322 225859 DEBUG nova.compute.manager [req-13b38445-d8db-4fb9-b18b-7e315ab564d9 req-49bed99b-bb7d-4b86-a6d6-1d9aa9175568 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 464c0661-0ddb-4794-8959-db066827326c] Received event network-vif-plugged-2d94aa0d-ed38-41aa-9f34-5ed2a83a7304 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:45:45 compute-1 nova_compute[225855]: 2026-01-20 15:45:45.322 225859 DEBUG oslo_concurrency.lockutils [req-13b38445-d8db-4fb9-b18b-7e315ab564d9 req-49bed99b-bb7d-4b86-a6d6-1d9aa9175568 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "464c0661-0ddb-4794-8959-db066827326c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:45:45 compute-1 nova_compute[225855]: 2026-01-20 15:45:45.322 225859 DEBUG oslo_concurrency.lockutils [req-13b38445-d8db-4fb9-b18b-7e315ab564d9 req-49bed99b-bb7d-4b86-a6d6-1d9aa9175568 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "464c0661-0ddb-4794-8959-db066827326c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:45:45 compute-1 nova_compute[225855]: 2026-01-20 15:45:45.323 225859 DEBUG oslo_concurrency.lockutils [req-13b38445-d8db-4fb9-b18b-7e315ab564d9 req-49bed99b-bb7d-4b86-a6d6-1d9aa9175568 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "464c0661-0ddb-4794-8959-db066827326c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:45:45 compute-1 nova_compute[225855]: 2026-01-20 15:45:45.323 225859 DEBUG nova.compute.manager [req-13b38445-d8db-4fb9-b18b-7e315ab564d9 req-49bed99b-bb7d-4b86-a6d6-1d9aa9175568 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 464c0661-0ddb-4794-8959-db066827326c] No waiting events found dispatching network-vif-plugged-2d94aa0d-ed38-41aa-9f34-5ed2a83a7304 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:45:45 compute-1 nova_compute[225855]: 2026-01-20 15:45:45.323 225859 WARNING nova.compute.manager [req-13b38445-d8db-4fb9-b18b-7e315ab564d9 req-49bed99b-bb7d-4b86-a6d6-1d9aa9175568 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 464c0661-0ddb-4794-8959-db066827326c] Received unexpected event network-vif-plugged-2d94aa0d-ed38-41aa-9f34-5ed2a83a7304 for instance with vm_state deleted and task_state None.
Jan 20 15:45:45 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:45:45 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:45:45 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:45:45.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:45:45 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:45:45 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:45:45 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:45:45.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:45:46 compute-1 ceph-mon[81775]: pgmap v3707: 321 pgs: 321 active+clean; 232 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 342 KiB/s rd, 2.1 MiB/s wr, 89 op/s
Jan 20 15:45:47 compute-1 nova_compute[225855]: 2026-01-20 15:45:47.600 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:45:47 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:45:47 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:45:47 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:45:47.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:45:47 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:45:47 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:45:47 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:45:47.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:45:48 compute-1 ceph-mon[81775]: pgmap v3708: 321 pgs: 321 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 183 KiB/s rd, 762 KiB/s wr, 54 op/s
Jan 20 15:45:49 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:45:49 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:45:49 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:45:49 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:45:49.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:45:49 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:45:49 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:45:49 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:45:49.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:45:49 compute-1 nova_compute[225855]: 2026-01-20 15:45:49.952 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:45:50 compute-1 ceph-mon[81775]: pgmap v3709: 321 pgs: 321 active+clean; 170 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 60 KiB/s rd, 47 KiB/s wr, 51 op/s
Jan 20 15:45:50 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/3862295260' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:45:51 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:45:51 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:45:51 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:45:51.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:45:51 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:45:51 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:45:51 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:45:51.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:45:52 compute-1 nova_compute[225855]: 2026-01-20 15:45:52.601 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:45:52 compute-1 ceph-mon[81775]: pgmap v3710: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 49 KiB/s rd, 48 KiB/s wr, 65 op/s
Jan 20 15:45:53 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:45:53 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:45:53 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:45:53.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:45:53 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:45:53 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:45:53 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:45:53.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:45:54 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:45:54 compute-1 sudo[331333]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:45:54 compute-1 sudo[331333]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:45:54 compute-1 sudo[331333]: pam_unix(sudo:session): session closed for user root
Jan 20 15:45:54 compute-1 sudo[331358]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:45:54 compute-1 sudo[331358]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:45:54 compute-1 sudo[331358]: pam_unix(sudo:session): session closed for user root
Jan 20 15:45:54 compute-1 ceph-mon[81775]: pgmap v3711: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 38 KiB/s rd, 4.4 KiB/s wr, 55 op/s
Jan 20 15:45:54 compute-1 nova_compute[225855]: 2026-01-20 15:45:54.954 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:45:55 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:45:55 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:45:55 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:45:55.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:45:55 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:45:55 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:45:55 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:45:55.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:45:55 compute-1 nova_compute[225855]: 2026-01-20 15:45:55.968 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:45:55 compute-1 nova_compute[225855]: 2026-01-20 15:45:55.985 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:45:56 compute-1 ceph-mon[81775]: pgmap v3712: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 38 KiB/s rd, 4.4 KiB/s wr, 55 op/s
Jan 20 15:45:57 compute-1 nova_compute[225855]: 2026-01-20 15:45:57.568 225859 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1768923942.5672007, 464c0661-0ddb-4794-8959-db066827326c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:45:57 compute-1 nova_compute[225855]: 2026-01-20 15:45:57.568 225859 INFO nova.compute.manager [-] [instance: 464c0661-0ddb-4794-8959-db066827326c] VM Stopped (Lifecycle Event)
Jan 20 15:45:57 compute-1 nova_compute[225855]: 2026-01-20 15:45:57.602 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:45:57 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:45:57 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:45:57 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:45:57.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:45:57 compute-1 nova_compute[225855]: 2026-01-20 15:45:57.852 225859 DEBUG nova.compute.manager [None req-6e006aaa-8dc1-4de1-be25-613d4a75da6a - - - - - -] [instance: 464c0661-0ddb-4794-8959-db066827326c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:45:57 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:45:57 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:45:57 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:45:57.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:45:58 compute-1 nova_compute[225855]: 2026-01-20 15:45:58.335 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:45:58 compute-1 ceph-mon[81775]: pgmap v3713: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 21 KiB/s rd, 2.7 KiB/s wr, 31 op/s
Jan 20 15:45:59 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:45:59 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:45:59 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:45:59 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:45:59.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:45:59 compute-1 nova_compute[225855]: 2026-01-20 15:45:59.955 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:45:59 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:45:59 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:45:59 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:45:59.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:46:00 compute-1 ceph-mon[81775]: pgmap v3714: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 20 15:46:01 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:46:01 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:46:01 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:46:01.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:46:01 compute-1 ceph-mon[81775]: pgmap v3715: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 11 KiB/s rd, 852 B/s wr, 15 op/s
Jan 20 15:46:01 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:46:01 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:46:01 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:46:01.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:46:02 compute-1 nova_compute[225855]: 2026-01-20 15:46:02.604 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:46:03 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:46:03 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:46:03 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:46:03.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:46:03 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:46:03 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:46:03 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:46:03.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:46:04 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:46:04 compute-1 ceph-mon[81775]: pgmap v3716: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:46:04 compute-1 nova_compute[225855]: 2026-01-20 15:46:04.957 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:46:05 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:46:05 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:46:05 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:46:05.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:46:05 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:46:05 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:46:05 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:46:05.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:46:06 compute-1 ceph-mon[81775]: pgmap v3717: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:46:07 compute-1 nova_compute[225855]: 2026-01-20 15:46:07.606 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:46:07 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:46:07 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:46:07 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:46:07.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:46:07 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:46:07 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:46:07 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:46:07.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:46:08 compute-1 ceph-mon[81775]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #184. Immutable memtables: 0.
Jan 20 15:46:08 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:46:08.556955) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 20 15:46:08 compute-1 ceph-mon[81775]: rocksdb: [db/flush_job.cc:856] [default] [JOB 117] Flushing memtable with next log file: 184
Jan 20 15:46:08 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768923968557075, "job": 117, "event": "flush_started", "num_memtables": 1, "num_entries": 686, "num_deletes": 250, "total_data_size": 1245577, "memory_usage": 1266536, "flush_reason": "Manual Compaction"}
Jan 20 15:46:08 compute-1 ceph-mon[81775]: rocksdb: [db/flush_job.cc:885] [default] [JOB 117] Level-0 flush table #185: started
Jan 20 15:46:08 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768923968566236, "cf_name": "default", "job": 117, "event": "table_file_creation", "file_number": 185, "file_size": 531405, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 89510, "largest_seqno": 90191, "table_properties": {"data_size": 528539, "index_size": 837, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 965, "raw_key_size": 7711, "raw_average_key_size": 20, "raw_value_size": 522605, "raw_average_value_size": 1382, "num_data_blocks": 38, "num_entries": 378, "num_filter_entries": 378, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768923919, "oldest_key_time": 1768923919, "file_creation_time": 1768923968, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1539d774-8a6f-4e48-b253-137c44586344", "db_session_id": "LFF7G2OZDOU7TKQ8MKAH", "orig_file_number": 185, "seqno_to_time_mapping": "N/A"}}
Jan 20 15:46:08 compute-1 ceph-mon[81775]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 117] Flush lasted 9336 microseconds, and 5253 cpu microseconds.
Jan 20 15:46:08 compute-1 ceph-mon[81775]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 15:46:08 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:46:08.566289) [db/flush_job.cc:967] [default] [JOB 117] Level-0 flush table #185: 531405 bytes OK
Jan 20 15:46:08 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:46:08.566316) [db/memtable_list.cc:519] [default] Level-0 commit table #185 started
Jan 20 15:46:08 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:46:08.567930) [db/memtable_list.cc:722] [default] Level-0 commit table #185: memtable #1 done
Jan 20 15:46:08 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:46:08.567951) EVENT_LOG_v1 {"time_micros": 1768923968567944, "job": 117, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 20 15:46:08 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:46:08.567975) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 20 15:46:08 compute-1 ceph-mon[81775]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 117] Try to delete WAL files size 1241881, prev total WAL file size 1241881, number of live WAL files 2.
Jan 20 15:46:08 compute-1 ceph-mon[81775]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000181.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 15:46:08 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:46:08.568855) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740033303039' seq:72057594037927935, type:22 .. '6D6772737461740033323630' seq:0, type:0; will stop at (end)
Jan 20 15:46:08 compute-1 ceph-mon[81775]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 118] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 20 15:46:08 compute-1 ceph-mon[81775]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 117 Base level 0, inputs: [185(518KB)], [183(13MB)]
Jan 20 15:46:08 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768923968568941, "job": 118, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [185], "files_L6": [183], "score": -1, "input_data_size": 15013695, "oldest_snapshot_seqno": -1}
Jan 20 15:46:08 compute-1 ceph-mon[81775]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 118] Generated table #186: 10922 keys, 11470784 bytes, temperature: kUnknown
Jan 20 15:46:08 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768923968702480, "cf_name": "default", "job": 118, "event": "table_file_creation", "file_number": 186, "file_size": 11470784, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11404453, "index_size": 37930, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 27333, "raw_key_size": 288150, "raw_average_key_size": 26, "raw_value_size": 11217508, "raw_average_value_size": 1027, "num_data_blocks": 1430, "num_entries": 10922, "num_filter_entries": 10922, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768917474, "oldest_key_time": 0, "file_creation_time": 1768923968, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1539d774-8a6f-4e48-b253-137c44586344", "db_session_id": "LFF7G2OZDOU7TKQ8MKAH", "orig_file_number": 186, "seqno_to_time_mapping": "N/A"}}
Jan 20 15:46:08 compute-1 ceph-mon[81775]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 15:46:08 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:46:08.702999) [db/compaction/compaction_job.cc:1663] [default] [JOB 118] Compacted 1@0 + 1@6 files to L6 => 11470784 bytes
Jan 20 15:46:08 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:46:08.704464) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 112.3 rd, 85.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.5, 13.8 +0.0 blob) out(10.9 +0.0 blob), read-write-amplify(49.8) write-amplify(21.6) OK, records in: 11412, records dropped: 490 output_compression: NoCompression
Jan 20 15:46:08 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:46:08.704498) EVENT_LOG_v1 {"time_micros": 1768923968704484, "job": 118, "event": "compaction_finished", "compaction_time_micros": 133686, "compaction_time_cpu_micros": 54301, "output_level": 6, "num_output_files": 1, "total_output_size": 11470784, "num_input_records": 11412, "num_output_records": 10922, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 20 15:46:08 compute-1 ceph-mon[81775]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000185.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 15:46:08 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768923968705162, "job": 118, "event": "table_file_deletion", "file_number": 185}
Jan 20 15:46:08 compute-1 ceph-mon[81775]: pgmap v3718: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:46:08 compute-1 ceph-mon[81775]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000183.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 15:46:08 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768923968710695, "job": 118, "event": "table_file_deletion", "file_number": 183}
Jan 20 15:46:08 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:46:08.568758) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:46:08 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:46:08.710858) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:46:08 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:46:08.710882) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:46:08 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:46:08.710884) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:46:08 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:46:08.710887) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:46:08 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:46:08.710889) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:46:09 compute-1 podman[331391]: 2026-01-20 15:46:09.074997435 +0000 UTC m=+0.120098775 container health_status 72be95c12e041eb2cd1a16ad70dfa1391d9f03a5f3180a614ead9459dcfcd46f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 20 15:46:09 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:46:09 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:46:09 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:46:09 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:46:09.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:46:09 compute-1 nova_compute[225855]: 2026-01-20 15:46:09.960 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:46:09 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:46:09 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:46:09 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:46:09.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:46:10 compute-1 ceph-mon[81775]: pgmap v3719: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:46:11 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:46:11 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:46:11 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:46:11.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:46:11 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:46:11 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:46:11 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:46:11.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:46:12 compute-1 nova_compute[225855]: 2026-01-20 15:46:12.609 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:46:12 compute-1 ceph-mon[81775]: pgmap v3720: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:46:13 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:46:13 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:46:13 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:46:13.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:46:13 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/2810297471' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:46:13 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/2810297471' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:46:13 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:46:13 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:46:13 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:46:13.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:46:14 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:46:14 compute-1 ceph-mon[81775]: pgmap v3721: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:46:14 compute-1 sudo[331420]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:46:14 compute-1 sudo[331420]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:46:14 compute-1 sudo[331420]: pam_unix(sudo:session): session closed for user root
Jan 20 15:46:14 compute-1 sudo[331445]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:46:14 compute-1 sudo[331445]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:46:14 compute-1 sudo[331445]: pam_unix(sudo:session): session closed for user root
Jan 20 15:46:14 compute-1 nova_compute[225855]: 2026-01-20 15:46:14.961 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:46:15 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:46:15 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:46:15 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:46:15.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:46:15 compute-1 ceph-mon[81775]: pgmap v3722: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:46:15 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:46:15 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:46:15 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:46:15.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:46:16 compute-1 podman[331471]: 2026-01-20 15:46:16.011084678 +0000 UTC m=+0.058614598 container health_status 533cd8e0c3b8b7f910f15cef49e7f879ab46c12ce646a35628c15f13725eb500 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible)
Jan 20 15:46:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:46:16.467 140354 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:46:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:46:16.468 140354 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:46:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:46:16.468 140354 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:46:17 compute-1 nova_compute[225855]: 2026-01-20 15:46:17.610 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:46:17 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:46:17 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:46:17 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:46:17.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:46:17 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:46:17 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:46:17 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:46:17.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:46:18 compute-1 sudo[331493]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:46:18 compute-1 sudo[331493]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:46:18 compute-1 sudo[331493]: pam_unix(sudo:session): session closed for user root
Jan 20 15:46:18 compute-1 sudo[331518]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:46:18 compute-1 sudo[331518]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:46:18 compute-1 sudo[331518]: pam_unix(sudo:session): session closed for user root
Jan 20 15:46:18 compute-1 sudo[331543]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:46:18 compute-1 sudo[331543]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:46:18 compute-1 sudo[331543]: pam_unix(sudo:session): session closed for user root
Jan 20 15:46:18 compute-1 sudo[331568]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 20 15:46:18 compute-1 sudo[331568]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:46:18 compute-1 ceph-mon[81775]: pgmap v3723: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:46:18 compute-1 sudo[331568]: pam_unix(sudo:session): session closed for user root
Jan 20 15:46:19 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:46:19 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:46:19 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 15:46:19 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:46:19 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 15:46:19 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 15:46:19 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:46:19 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:46:19 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:46:19 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:46:19.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:46:19 compute-1 nova_compute[225855]: 2026-01-20 15:46:19.964 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:46:20 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:46:20 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:46:20 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:46:20.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:46:20 compute-1 ceph-mon[81775]: pgmap v3724: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:46:21 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:46:21 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:46:21 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:46:21.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:46:22 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:46:22 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:46:22 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:46:22.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:46:22 compute-1 nova_compute[225855]: 2026-01-20 15:46:22.611 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:46:22 compute-1 ceph-mon[81775]: pgmap v3725: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:46:23 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:46:23 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:46:23 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:46:23.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:46:24 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:46:24 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:46:24 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:46:24.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:46:24 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:46:24 compute-1 ceph-mon[81775]: pgmap v3726: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:46:24 compute-1 nova_compute[225855]: 2026-01-20 15:46:24.966 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:46:25 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:46:25 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:46:25 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:46:25.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:46:25 compute-1 sudo[331627]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:46:25 compute-1 sudo[331627]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:46:25 compute-1 sudo[331627]: pam_unix(sudo:session): session closed for user root
Jan 20 15:46:25 compute-1 sudo[331652]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 15:46:25 compute-1 sudo[331652]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:46:25 compute-1 sudo[331652]: pam_unix(sudo:session): session closed for user root
Jan 20 15:46:26 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:46:26 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:46:26 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:46:26.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:46:26 compute-1 nova_compute[225855]: 2026-01-20 15:46:26.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:46:26 compute-1 nova_compute[225855]: 2026-01-20 15:46:26.339 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 15:46:26 compute-1 nova_compute[225855]: 2026-01-20 15:46:26.339 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 20 15:46:26 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:46:26 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:46:26 compute-1 ceph-mon[81775]: pgmap v3727: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:46:27 compute-1 ovn_controller[130490]: 2026-01-20T15:46:27Z|01008|memory_trim|INFO|Detected inactivity (last active 30019 ms ago): trimming memory
Jan 20 15:46:27 compute-1 nova_compute[225855]: 2026-01-20 15:46:27.645 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:46:27 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:46:27 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:46:27 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:46:27.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:46:28 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:46:28 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:46:28 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:46:28.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:46:28 compute-1 ceph-mon[81775]: pgmap v3728: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:46:29 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:46:29 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:46:29 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:46:29 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:46:29.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:46:29 compute-1 nova_compute[225855]: 2026-01-20 15:46:29.969 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:46:30 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:46:30 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:46:30 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:46:30.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:46:30 compute-1 ceph-mon[81775]: pgmap v3729: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:46:31 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:46:31 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:46:31 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:46:31.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:46:32 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:46:32 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:46:32 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:46:32.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:46:32 compute-1 nova_compute[225855]: 2026-01-20 15:46:32.708 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:46:32 compute-1 ceph-mon[81775]: pgmap v3730: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:46:33 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:46:33 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:46:33 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:46:33.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:46:34 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:46:34 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:46:34 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:46:34.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:46:34 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:46:34 compute-1 ceph-mon[81775]: pgmap v3731: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:46:34 compute-1 nova_compute[225855]: 2026-01-20 15:46:34.971 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:46:34 compute-1 sudo[331683]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:46:34 compute-1 sudo[331683]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:46:34 compute-1 sudo[331683]: pam_unix(sudo:session): session closed for user root
Jan 20 15:46:35 compute-1 sudo[331708]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:46:35 compute-1 sudo[331708]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:46:35 compute-1 sudo[331708]: pam_unix(sudo:session): session closed for user root
Jan 20 15:46:35 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:46:35 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:46:35 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:46:35.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:46:35 compute-1 ceph-mon[81775]: pgmap v3732: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:46:36 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:46:36 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:46:36 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:46:36.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:46:37 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:46:37 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:46:37 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:46:37.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:46:37 compute-1 nova_compute[225855]: 2026-01-20 15:46:37.712 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:46:38 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:46:38 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:46:38 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:46:38.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:46:38 compute-1 ceph-mon[81775]: pgmap v3733: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:46:39 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:46:39 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:46:39 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:46:39 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:46:39.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:46:40 compute-1 nova_compute[225855]: 2026-01-20 15:46:40.014 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:46:40 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:46:40 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:46:40 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:46:40.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:46:40 compute-1 podman[331736]: 2026-01-20 15:46:40.085812584 +0000 UTC m=+0.132421603 container health_status 72be95c12e041eb2cd1a16ad70dfa1391d9f03a5f3180a614ead9459dcfcd46f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team)
Jan 20 15:46:40 compute-1 ceph-mon[81775]: pgmap v3734: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:46:41 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:46:41 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:46:41 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:46:41.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:46:42 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:46:42 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:46:42 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:46:42.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:46:42 compute-1 ceph-mon[81775]: pgmap v3735: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:46:42 compute-1 nova_compute[225855]: 2026-01-20 15:46:42.717 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:46:43 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:46:43 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:46:43 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:46:43.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:46:44 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:46:44 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:46:44 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:46:44.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:46:44 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:46:44 compute-1 ceph-mon[81775]: pgmap v3736: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:46:45 compute-1 nova_compute[225855]: 2026-01-20 15:46:45.016 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:46:45 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:46:45 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:46:45 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:46:45.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:46:46 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:46:46 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:46:46 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:46:46.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:46:46 compute-1 ceph-mon[81775]: pgmap v3737: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:46:47 compute-1 podman[331766]: 2026-01-20 15:46:47.068652588 +0000 UTC m=+0.106014997 container health_status 533cd8e0c3b8b7f910f15cef49e7f879ab46c12ce646a35628c15f13725eb500 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Jan 20 15:46:47 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:46:47 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:46:47 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:46:47.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:46:47 compute-1 nova_compute[225855]: 2026-01-20 15:46:47.763 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:46:48 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:46:48 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:46:48 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:46:48.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:46:48 compute-1 ceph-mon[81775]: pgmap v3738: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:46:49 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:46:49 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:46:49 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:46:49 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:46:49.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:46:50 compute-1 nova_compute[225855]: 2026-01-20 15:46:50.017 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:46:50 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:46:50 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:46:50 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:46:50.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:46:50 compute-1 ceph-mon[81775]: pgmap v3739: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:46:51 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:46:51 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:46:51 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:46:51.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:46:51 compute-1 ceph-mon[81775]: pgmap v3740: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:46:52 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:46:52 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:46:52 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:46:52.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:46:52 compute-1 nova_compute[225855]: 2026-01-20 15:46:52.813 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:46:53 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:46:53 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:46:53 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:46:53.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:46:54 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:46:54 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:46:54 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:46:54.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:46:54 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:46:54 compute-1 ceph-mon[81775]: pgmap v3741: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:46:55 compute-1 nova_compute[225855]: 2026-01-20 15:46:55.020 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:46:55 compute-1 sudo[331789]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:46:55 compute-1 sudo[331789]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:46:55 compute-1 sudo[331789]: pam_unix(sudo:session): session closed for user root
Jan 20 15:46:55 compute-1 sudo[331814]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:46:55 compute-1 sudo[331814]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:46:55 compute-1 sudo[331814]: pam_unix(sudo:session): session closed for user root
Jan 20 15:46:55 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:46:55 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:46:55 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:46:55.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:46:56 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:46:56 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:46:56 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:46:56.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:46:56 compute-1 ceph-mon[81775]: pgmap v3742: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:46:57 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:46:57 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:46:57 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:46:57.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:46:57 compute-1 nova_compute[225855]: 2026-01-20 15:46:57.815 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:46:58 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:46:58 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:46:58 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:46:58.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:46:58 compute-1 ceph-mon[81775]: pgmap v3743: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:46:59 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:46:59 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:46:59 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:46:59 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:46:59.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:47:00 compute-1 nova_compute[225855]: 2026-01-20 15:47:00.021 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:47:00 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:47:00 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:47:00 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:47:00.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:47:00 compute-1 ceph-mon[81775]: pgmap v3744: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:47:01 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:47:01 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:47:01 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:47:01.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:47:02 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:47:02 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:47:02 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:47:02.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:47:02 compute-1 ceph-mon[81775]: pgmap v3745: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:47:02 compute-1 nova_compute[225855]: 2026-01-20 15:47:02.858 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:47:03 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:47:03 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:47:03 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:47:03.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:47:04 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:47:04 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:47:04 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:47:04.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:47:04 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:47:04 compute-1 ceph-mon[81775]: pgmap v3746: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:47:05 compute-1 nova_compute[225855]: 2026-01-20 15:47:05.068 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:47:05 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:47:05 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:47:05 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:47:05.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:47:06 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:47:06 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:47:06 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:47:06.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:47:06 compute-1 ceph-mon[81775]: pgmap v3747: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:47:07 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:47:07 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:47:07 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:47:07.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:47:07 compute-1 ceph-mon[81775]: pgmap v3748: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:47:07 compute-1 nova_compute[225855]: 2026-01-20 15:47:07.861 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:47:08 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:47:08 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:47:08 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:47:08.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:47:08 compute-1 ceph-mon[81775]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #187. Immutable memtables: 0.
Jan 20 15:47:08 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:47:08.577481) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 20 15:47:08 compute-1 ceph-mon[81775]: rocksdb: [db/flush_job.cc:856] [default] [JOB 119] Flushing memtable with next log file: 187
Jan 20 15:47:08 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768924028577571, "job": 119, "event": "flush_started", "num_memtables": 1, "num_entries": 805, "num_deletes": 254, "total_data_size": 1549312, "memory_usage": 1576960, "flush_reason": "Manual Compaction"}
Jan 20 15:47:08 compute-1 ceph-mon[81775]: rocksdb: [db/flush_job.cc:885] [default] [JOB 119] Level-0 flush table #188: started
Jan 20 15:47:08 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768924028588691, "cf_name": "default", "job": 119, "event": "table_file_creation", "file_number": 188, "file_size": 1022693, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 90196, "largest_seqno": 90996, "table_properties": {"data_size": 1018873, "index_size": 1599, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1157, "raw_key_size": 8317, "raw_average_key_size": 18, "raw_value_size": 1011251, "raw_average_value_size": 2298, "num_data_blocks": 71, "num_entries": 440, "num_filter_entries": 440, "num_deletions": 254, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768923968, "oldest_key_time": 1768923968, "file_creation_time": 1768924028, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1539d774-8a6f-4e48-b253-137c44586344", "db_session_id": "LFF7G2OZDOU7TKQ8MKAH", "orig_file_number": 188, "seqno_to_time_mapping": "N/A"}}
Jan 20 15:47:08 compute-1 ceph-mon[81775]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 119] Flush lasted 11242 microseconds, and 4441 cpu microseconds.
Jan 20 15:47:08 compute-1 ceph-mon[81775]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 15:47:08 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:47:08.588737) [db/flush_job.cc:967] [default] [JOB 119] Level-0 flush table #188: 1022693 bytes OK
Jan 20 15:47:08 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:47:08.588756) [db/memtable_list.cc:519] [default] Level-0 commit table #188 started
Jan 20 15:47:08 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:47:08.590722) [db/memtable_list.cc:722] [default] Level-0 commit table #188: memtable #1 done
Jan 20 15:47:08 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:47:08.590772) EVENT_LOG_v1 {"time_micros": 1768924028590762, "job": 119, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 20 15:47:08 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:47:08.590801) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 20 15:47:08 compute-1 ceph-mon[81775]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 119] Try to delete WAL files size 1545139, prev total WAL file size 1545139, number of live WAL files 2.
Jan 20 15:47:08 compute-1 ceph-mon[81775]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000184.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 15:47:08 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:47:08.591623) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0033353233' seq:72057594037927935, type:22 .. '6C6F676D0033373734' seq:0, type:0; will stop at (end)
Jan 20 15:47:08 compute-1 ceph-mon[81775]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 120] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 20 15:47:08 compute-1 ceph-mon[81775]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 119 Base level 0, inputs: [188(998KB)], [186(10MB)]
Jan 20 15:47:08 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768924028591670, "job": 120, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [188], "files_L6": [186], "score": -1, "input_data_size": 12493477, "oldest_snapshot_seqno": -1}
Jan 20 15:47:08 compute-1 ceph-mon[81775]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 120] Generated table #189: 10842 keys, 12373635 bytes, temperature: kUnknown
Jan 20 15:47:08 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768924028673182, "cf_name": "default", "job": 120, "event": "table_file_creation", "file_number": 189, "file_size": 12373635, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12306393, "index_size": 39033, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 27141, "raw_key_size": 287390, "raw_average_key_size": 26, "raw_value_size": 12119494, "raw_average_value_size": 1117, "num_data_blocks": 1475, "num_entries": 10842, "num_filter_entries": 10842, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768917474, "oldest_key_time": 0, "file_creation_time": 1768924028, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1539d774-8a6f-4e48-b253-137c44586344", "db_session_id": "LFF7G2OZDOU7TKQ8MKAH", "orig_file_number": 189, "seqno_to_time_mapping": "N/A"}}
Jan 20 15:47:08 compute-1 ceph-mon[81775]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 15:47:08 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:47:08.673435) [db/compaction/compaction_job.cc:1663] [default] [JOB 120] Compacted 1@0 + 1@6 files to L6 => 12373635 bytes
Jan 20 15:47:08 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:47:08.675083) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 153.1 rd, 151.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.0, 10.9 +0.0 blob) out(11.8 +0.0 blob), read-write-amplify(24.3) write-amplify(12.1) OK, records in: 11362, records dropped: 520 output_compression: NoCompression
Jan 20 15:47:08 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:47:08.675100) EVENT_LOG_v1 {"time_micros": 1768924028675091, "job": 120, "event": "compaction_finished", "compaction_time_micros": 81586, "compaction_time_cpu_micros": 28746, "output_level": 6, "num_output_files": 1, "total_output_size": 12373635, "num_input_records": 11362, "num_output_records": 10842, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 20 15:47:08 compute-1 ceph-mon[81775]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000188.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 15:47:08 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768924028675427, "job": 120, "event": "table_file_deletion", "file_number": 188}
Jan 20 15:47:08 compute-1 ceph-mon[81775]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000186.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 15:47:08 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768924028677287, "job": 120, "event": "table_file_deletion", "file_number": 186}
Jan 20 15:47:08 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:47:08.591558) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:47:08 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:47:08.677416) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:47:08 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:47:08.677423) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:47:08 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:47:08.677425) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:47:08 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:47:08.677427) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:47:08 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:47:08.677429) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:47:09 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:47:09 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:47:09 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:47:09 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:47:09.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:47:10 compute-1 nova_compute[225855]: 2026-01-20 15:47:10.071 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:47:10 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:47:10 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:47:10 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:47:10.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:47:10 compute-1 ceph-mon[81775]: pgmap v3749: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:47:11 compute-1 podman[331847]: 2026-01-20 15:47:11.044590852 +0000 UTC m=+0.087283747 container health_status 72be95c12e041eb2cd1a16ad70dfa1391d9f03a5f3180a614ead9459dcfcd46f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 20 15:47:11 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:47:11 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:47:11 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:47:11.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:47:12 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:47:12 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:47:12 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:47:12.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:47:12 compute-1 ceph-mon[81775]: pgmap v3750: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:47:12 compute-1 nova_compute[225855]: 2026-01-20 15:47:12.898 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:47:13 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/559030029' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:47:13 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/559030029' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:47:13 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:47:13 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:47:13 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:47:13.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:47:14 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:47:14 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:47:14 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:47:14.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:47:14 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:47:14 compute-1 ceph-mon[81775]: pgmap v3751: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:47:15 compute-1 nova_compute[225855]: 2026-01-20 15:47:15.072 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:47:15 compute-1 sudo[331878]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:47:15 compute-1 sudo[331878]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:47:15 compute-1 sudo[331878]: pam_unix(sudo:session): session closed for user root
Jan 20 15:47:15 compute-1 sudo[331903]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:47:15 compute-1 sudo[331903]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:47:15 compute-1 sudo[331903]: pam_unix(sudo:session): session closed for user root
Jan 20 15:47:15 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:47:15 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:47:15 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:47:15.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:47:16 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:47:16 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:47:16 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:47:16.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:47:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:47:16.469 140354 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:47:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:47:16.469 140354 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:47:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:47:16.469 140354 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:47:16 compute-1 ceph-mon[81775]: pgmap v3752: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:47:17 compute-1 nova_compute[225855]: 2026-01-20 15:47:17.362 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 20 15:47:17 compute-1 nova_compute[225855]: 2026-01-20 15:47:17.362 225859 WARNING oslo.service.loopingcall [-] Function 'nova.servicegroup.drivers.db.DbDriver._report_state' run outlasted interval by 60.83 sec
Jan 20 15:47:17 compute-1 nova_compute[225855]: 2026-01-20 15:47:17.362 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:47:17 compute-1 nova_compute[225855]: 2026-01-20 15:47:17.364 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:47:17 compute-1 nova_compute[225855]: 2026-01-20 15:47:17.364 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:47:17 compute-1 nova_compute[225855]: 2026-01-20 15:47:17.364 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:47:17 compute-1 nova_compute[225855]: 2026-01-20 15:47:17.365 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:47:17 compute-1 nova_compute[225855]: 2026-01-20 15:47:17.365 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:47:17 compute-1 nova_compute[225855]: 2026-01-20 15:47:17.365 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 15:47:17 compute-1 nova_compute[225855]: 2026-01-20 15:47:17.365 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:47:17 compute-1 nova_compute[225855]: 2026-01-20 15:47:17.507 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:47:17 compute-1 nova_compute[225855]: 2026-01-20 15:47:17.508 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:47:17 compute-1 nova_compute[225855]: 2026-01-20 15:47:17.508 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:47:17 compute-1 nova_compute[225855]: 2026-01-20 15:47:17.508 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 15:47:17 compute-1 nova_compute[225855]: 2026-01-20 15:47:17.508 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:47:17 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:47:17 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:47:17 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:47:17.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:47:17 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/536885463' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:47:17 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:47:17 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2951300976' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:47:17 compute-1 nova_compute[225855]: 2026-01-20 15:47:17.928 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:47:17 compute-1 nova_compute[225855]: 2026-01-20 15:47:17.947 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:47:18 compute-1 podman[331951]: 2026-01-20 15:47:18.007962166 +0000 UTC m=+0.050782457 container health_status 533cd8e0c3b8b7f910f15cef49e7f879ab46c12ce646a35628c15f13725eb500 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 20 15:47:18 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:47:18 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:47:18 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:47:18.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:47:18 compute-1 nova_compute[225855]: 2026-01-20 15:47:18.126 225859 WARNING nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 15:47:18 compute-1 nova_compute[225855]: 2026-01-20 15:47:18.127 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4255MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 15:47:18 compute-1 nova_compute[225855]: 2026-01-20 15:47:18.128 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:47:18 compute-1 nova_compute[225855]: 2026-01-20 15:47:18.128 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:47:18 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:47:18.419 140354 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=91, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '12:bb:42', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '06:92:24:f7:15:56'}, ipsec=False) old=SB_Global(nb_cfg=90) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:47:18 compute-1 nova_compute[225855]: 2026-01-20 15:47:18.420 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:47:18 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:47:18.420 140354 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 20 15:47:18 compute-1 nova_compute[225855]: 2026-01-20 15:47:18.448 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 15:47:18 compute-1 nova_compute[225855]: 2026-01-20 15:47:18.449 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 15:47:18 compute-1 nova_compute[225855]: 2026-01-20 15:47:18.589 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:47:18 compute-1 ceph-mon[81775]: pgmap v3753: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:47:18 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/1886461505' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:47:18 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/2951300976' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:47:18 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/1097814839' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:47:18 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:47:18 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2122862983' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:47:19 compute-1 nova_compute[225855]: 2026-01-20 15:47:19.004 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.415s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:47:19 compute-1 nova_compute[225855]: 2026-01-20 15:47:19.012 225859 DEBUG nova.compute.provider_tree [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:47:19 compute-1 nova_compute[225855]: 2026-01-20 15:47:19.048 225859 DEBUG nova.scheduler.client.report [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:47:19 compute-1 nova_compute[225855]: 2026-01-20 15:47:19.077 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 15:47:19 compute-1 nova_compute[225855]: 2026-01-20 15:47:19.077 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.949s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:47:19 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:47:19 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:47:19 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:47:19 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:47:19.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:47:19 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/3316246032' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:47:19 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/2122862983' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:47:20 compute-1 nova_compute[225855]: 2026-01-20 15:47:20.074 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:47:20 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:47:20 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:47:20 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:47:20.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:47:20 compute-1 ceph-mon[81775]: pgmap v3754: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:47:21 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:47:21 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:47:21 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:47:21.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:47:21 compute-1 ceph-mon[81775]: pgmap v3755: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:47:22 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:47:22 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:47:22 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:47:22.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:47:22 compute-1 ceph-osd[79119]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 20 15:47:22 compute-1 ceph-osd[79119]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 6600.1 total, 600.0 interval
                                           Cumulative writes: 69K writes, 268K keys, 69K commit groups, 1.0 writes per commit group, ingest: 0.26 GB, 0.04 MB/s
                                           Cumulative WAL: 69K writes, 26K syncs, 2.63 writes per sync, written: 0.26 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 2415 writes, 9981 keys, 2415 commit groups, 1.0 writes per commit group, ingest: 10.97 MB, 0.02 MB/s
                                           Interval WAL: 2415 writes, 949 syncs, 2.54 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 20 15:47:22 compute-1 nova_compute[225855]: 2026-01-20 15:47:22.978 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:47:23 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:47:23 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:47:23 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:47:23.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:47:24 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:47:24 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:47:24 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:47:24.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:47:24 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:47:24 compute-1 ceph-mon[81775]: pgmap v3756: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:47:25 compute-1 nova_compute[225855]: 2026-01-20 15:47:25.075 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:47:25 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:47:25 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:47:25 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:47:25.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:47:25 compute-1 sudo[331997]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:47:25 compute-1 sudo[331997]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:47:25 compute-1 sudo[331997]: pam_unix(sudo:session): session closed for user root
Jan 20 15:47:26 compute-1 sudo[332022]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:47:26 compute-1 sudo[332022]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:47:26 compute-1 sudo[332022]: pam_unix(sudo:session): session closed for user root
Jan 20 15:47:26 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:47:26 compute-1 sudo[332047]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:47:26 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:47:26 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:47:26.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:47:26 compute-1 sudo[332047]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:47:26 compute-1 sudo[332047]: pam_unix(sudo:session): session closed for user root
Jan 20 15:47:26 compute-1 sudo[332072]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 20 15:47:26 compute-1 sudo[332072]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:47:26 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:47:26 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2959745194' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:47:26 compute-1 sudo[332072]: pam_unix(sudo:session): session closed for user root
Jan 20 15:47:26 compute-1 ceph-mon[81775]: pgmap v3757: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:47:26 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/2959745194' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:47:27 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:47:27.422 140354 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5ffd4ac3-9266-4927-98ad-20a17782c725, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '91'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:47:27 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:47:27 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 15:47:27 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:47:27 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 15:47:27 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 15:47:27 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:47:27 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:47:27 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:47:27 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:47:27.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:47:27 compute-1 nova_compute[225855]: 2026-01-20 15:47:27.979 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:47:28 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:47:28 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:47:28 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:47:28.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:47:28 compute-1 ceph-mon[81775]: pgmap v3758: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:47:28 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/4258930174' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:47:29 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:47:29 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:47:29 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:47:29 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:47:29.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:47:29 compute-1 ceph-mon[81775]: pgmap v3759: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:47:30 compute-1 nova_compute[225855]: 2026-01-20 15:47:30.075 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:47:30 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:47:30 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:47:30 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:47:30.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:47:31 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:47:31 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:47:31 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:47:31.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:47:32 compute-1 nova_compute[225855]: 2026-01-20 15:47:32.074 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:47:32 compute-1 nova_compute[225855]: 2026-01-20 15:47:32.074 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:47:32 compute-1 nova_compute[225855]: 2026-01-20 15:47:32.075 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 15:47:32 compute-1 nova_compute[225855]: 2026-01-20 15:47:32.075 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 20 15:47:32 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:47:32 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:47:32 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:47:32.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:47:32 compute-1 ceph-mon[81775]: pgmap v3760: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:47:32 compute-1 nova_compute[225855]: 2026-01-20 15:47:32.983 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:47:33 compute-1 sudo[332131]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:47:33 compute-1 sudo[332131]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:47:33 compute-1 sudo[332131]: pam_unix(sudo:session): session closed for user root
Jan 20 15:47:33 compute-1 sudo[332156]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 15:47:33 compute-1 sudo[332156]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:47:33 compute-1 sudo[332156]: pam_unix(sudo:session): session closed for user root
Jan 20 15:47:33 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:47:33 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:47:33 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:47:33.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:47:33 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:47:33 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:47:33 compute-1 ceph-mon[81775]: pgmap v3761: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:47:34 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:47:34 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:47:34 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:47:34.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:47:34 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:47:34 compute-1 nova_compute[225855]: 2026-01-20 15:47:34.551 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 20 15:47:34 compute-1 nova_compute[225855]: 2026-01-20 15:47:34.552 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:47:34 compute-1 nova_compute[225855]: 2026-01-20 15:47:34.553 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:47:34 compute-1 nova_compute[225855]: 2026-01-20 15:47:34.553 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:47:34 compute-1 nova_compute[225855]: 2026-01-20 15:47:34.554 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:47:34 compute-1 nova_compute[225855]: 2026-01-20 15:47:34.554 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 15:47:34 compute-1 nova_compute[225855]: 2026-01-20 15:47:34.554 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:47:34 compute-1 nova_compute[225855]: 2026-01-20 15:47:34.587 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:47:34 compute-1 nova_compute[225855]: 2026-01-20 15:47:34.588 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:47:34 compute-1 nova_compute[225855]: 2026-01-20 15:47:34.589 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:47:34 compute-1 nova_compute[225855]: 2026-01-20 15:47:34.589 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 15:47:34 compute-1 nova_compute[225855]: 2026-01-20 15:47:34.590 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:47:35 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:47:35 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3096368661' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:47:35 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/847583616' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:47:35 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/3096368661' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:47:35 compute-1 nova_compute[225855]: 2026-01-20 15:47:35.064 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:47:35 compute-1 nova_compute[225855]: 2026-01-20 15:47:35.078 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:47:35 compute-1 nova_compute[225855]: 2026-01-20 15:47:35.209 225859 WARNING nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 15:47:35 compute-1 nova_compute[225855]: 2026-01-20 15:47:35.210 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4260MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 15:47:35 compute-1 nova_compute[225855]: 2026-01-20 15:47:35.210 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:47:35 compute-1 nova_compute[225855]: 2026-01-20 15:47:35.211 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:47:35 compute-1 sudo[332204]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:47:35 compute-1 sudo[332204]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:47:35 compute-1 sudo[332204]: pam_unix(sudo:session): session closed for user root
Jan 20 15:47:35 compute-1 sudo[332229]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:47:35 compute-1 sudo[332229]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:47:35 compute-1 sudo[332229]: pam_unix(sudo:session): session closed for user root
Jan 20 15:47:35 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:47:35 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:47:35 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:47:35.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:47:36 compute-1 ceph-mon[81775]: pgmap v3762: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:47:36 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:47:36 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:47:36 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:47:36.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:47:36 compute-1 nova_compute[225855]: 2026-01-20 15:47:36.976 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 15:47:36 compute-1 nova_compute[225855]: 2026-01-20 15:47:36.977 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 15:47:37 compute-1 nova_compute[225855]: 2026-01-20 15:47:37.019 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:47:37 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:47:37 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2258595412' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:47:37 compute-1 nova_compute[225855]: 2026-01-20 15:47:37.709 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.690s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:47:37 compute-1 nova_compute[225855]: 2026-01-20 15:47:37.714 225859 DEBUG nova.compute.provider_tree [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:47:37 compute-1 nova_compute[225855]: 2026-01-20 15:47:37.731 225859 DEBUG nova.scheduler.client.report [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:47:37 compute-1 nova_compute[225855]: 2026-01-20 15:47:37.733 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 15:47:37 compute-1 nova_compute[225855]: 2026-01-20 15:47:37.733 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.522s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:47:37 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:47:37 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:47:37 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:47:37.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:47:37 compute-1 nova_compute[225855]: 2026-01-20 15:47:37.986 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:47:38 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:47:38 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:47:38 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:47:38.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:47:38 compute-1 ceph-mon[81775]: pgmap v3763: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:47:38 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/2258595412' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:47:38 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/1871560347' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:47:39 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:47:39 compute-1 nova_compute[225855]: 2026-01-20 15:47:39.519 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:47:39 compute-1 nova_compute[225855]: 2026-01-20 15:47:39.520 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:47:39 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:47:39 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:47:39 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:47:39.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:47:40 compute-1 nova_compute[225855]: 2026-01-20 15:47:40.079 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:47:40 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:47:40 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:47:40 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:47:40.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:47:40 compute-1 ceph-mon[81775]: pgmap v3764: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:47:41 compute-1 nova_compute[225855]: 2026-01-20 15:47:41.335 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:47:41 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:47:41 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:47:41 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:47:41.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:47:42 compute-1 podman[332280]: 2026-01-20 15:47:42.046900301 +0000 UTC m=+0.088145302 container health_status 72be95c12e041eb2cd1a16ad70dfa1391d9f03a5f3180a614ead9459dcfcd46f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 20 15:47:42 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:47:42 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:47:42 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:47:42.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:47:42 compute-1 ceph-mon[81775]: pgmap v3765: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:47:43 compute-1 nova_compute[225855]: 2026-01-20 15:47:43.026 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:47:43 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:47:43 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:47:43 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:47:43.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:47:44 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:47:44 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:47:44 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:47:44.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:47:44 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:47:44 compute-1 ceph-mon[81775]: pgmap v3766: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:47:45 compute-1 nova_compute[225855]: 2026-01-20 15:47:45.080 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:47:45 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:47:45 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:47:45 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:47:45.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:47:46 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:47:46 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:47:46 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:47:46.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:47:46 compute-1 ceph-mon[81775]: pgmap v3767: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:47:47 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:47:47 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:47:47 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:47:47.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:47:48 compute-1 nova_compute[225855]: 2026-01-20 15:47:48.030 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:47:48 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:47:48 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:47:48 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:47:48.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:47:48 compute-1 ceph-mon[81775]: pgmap v3768: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:47:49 compute-1 podman[332310]: 2026-01-20 15:47:49.02107224 +0000 UTC m=+0.067400566 container health_status 533cd8e0c3b8b7f910f15cef49e7f879ab46c12ce646a35628c15f13725eb500 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent)
Jan 20 15:47:49 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:47:49 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:47:49 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:47:49 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:47:49.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:47:49 compute-1 ceph-mon[81775]: pgmap v3769: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:47:50 compute-1 nova_compute[225855]: 2026-01-20 15:47:50.081 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:47:50 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:47:50 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:47:50 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:47:50.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:47:51 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:47:51 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:47:51 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:47:51.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:47:52 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:47:52 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:47:52 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:47:52.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:47:52 compute-1 ceph-mon[81775]: pgmap v3770: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:47:53 compute-1 nova_compute[225855]: 2026-01-20 15:47:53.090 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:47:53 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:47:53 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:47:53 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:47:53.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:47:54 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:47:54 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:47:54 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:47:54.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:47:54 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:47:54 compute-1 ceph-mon[81775]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 20 15:47:54 compute-1 ceph-mon[81775]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 6600.0 total, 600.0 interval
                                           Cumulative writes: 18K writes, 91K keys, 18K commit groups, 1.0 writes per commit group, ingest: 0.18 GB, 0.03 MB/s
                                           Cumulative WAL: 18K writes, 18K syncs, 1.00 writes per sync, written: 0.18 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1462 writes, 7250 keys, 1462 commit groups, 1.0 writes per commit group, ingest: 15.21 MB, 0.03 MB/s
                                           Interval WAL: 1462 writes, 1462 syncs, 1.00 writes per sync, written: 0.01 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     72.9      1.53              0.38        60    0.025       0      0       0.0       0.0
                                             L6      1/0   11.80 MB   0.0      0.7     0.1      0.6       0.6      0.0       0.0   5.5     96.5     83.0      7.38              1.96        59    0.125    474K    31K       0.0       0.0
                                            Sum      1/0   11.80 MB   0.0      0.7     0.1      0.6       0.7      0.1       0.0   6.5     79.9     81.2      8.91              2.34       119    0.075    474K    31K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   8.8     81.1     80.8      0.99              0.25        12    0.082     67K   3081       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.7     0.1      0.6       0.6      0.0       0.0   0.0     96.5     83.0      7.38              1.96        59    0.125    474K    31K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     73.0      1.53              0.38        59    0.026       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.7      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.109, interval 0.009
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.71 GB write, 0.11 MB/s write, 0.70 GB read, 0.11 MB/s read, 8.9 seconds
                                           Interval compaction: 0.08 GB write, 0.13 MB/s write, 0.08 GB read, 0.13 MB/s read, 1.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564d515a71f0#2 capacity: 304.00 MB usage: 76.40 MB table_size: 0 occupancy: 18446744073709551615 collections: 12 last_copies: 0 last_secs: 0.000442 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(4357,73.06 MB,24.0343%) FilterBlock(119,1.27 MB,0.417664%) IndexBlock(119,2.07 MB,0.681179%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Jan 20 15:47:54 compute-1 ceph-mon[81775]: pgmap v3771: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:47:55 compute-1 nova_compute[225855]: 2026-01-20 15:47:55.084 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:47:55 compute-1 sudo[332333]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:47:55 compute-1 sudo[332333]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:47:55 compute-1 sudo[332333]: pam_unix(sudo:session): session closed for user root
Jan 20 15:47:55 compute-1 sudo[332358]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:47:55 compute-1 sudo[332358]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:47:55 compute-1 sudo[332358]: pam_unix(sudo:session): session closed for user root
Jan 20 15:47:55 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:47:55 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:47:55 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:47:55.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:47:56 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:47:56 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:47:56 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:47:56.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:47:56 compute-1 ceph-mon[81775]: pgmap v3772: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:47:57 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:47:57 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:47:57 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:47:57.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:47:58 compute-1 nova_compute[225855]: 2026-01-20 15:47:58.093 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:47:58 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:47:58 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:47:58 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:47:58.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:47:58 compute-1 ceph-mon[81775]: pgmap v3773: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:47:59 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:47:59 compute-1 nova_compute[225855]: 2026-01-20 15:47:59.334 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:47:59 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:47:59 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:47:59 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:47:59.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:47:59 compute-1 ceph-mon[81775]: pgmap v3774: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:48:00 compute-1 nova_compute[225855]: 2026-01-20 15:48:00.085 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:48:00 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:48:00 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:48:00 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:48:00.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:48:01 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:48:01 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:48:01 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:48:01.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:48:02 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:48:02 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:48:02 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:48:02.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:48:02 compute-1 ceph-mon[81775]: pgmap v3775: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:48:03 compute-1 nova_compute[225855]: 2026-01-20 15:48:03.101 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:48:03 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:48:03 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:48:03 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:48:03.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:48:04 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:48:04 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:48:04 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:48:04.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:48:04 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:48:04 compute-1 ceph-mon[81775]: pgmap v3776: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:48:05 compute-1 nova_compute[225855]: 2026-01-20 15:48:05.087 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:48:05 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:48:05 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:48:05 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:48:05.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:48:06 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:48:06 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:48:06 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:48:06.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:48:06 compute-1 ceph-mon[81775]: pgmap v3777: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:48:07 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:48:07 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:48:07 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:48:07.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:48:08 compute-1 nova_compute[225855]: 2026-01-20 15:48:08.103 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:48:08 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:48:08 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:48:08 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:48:08.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:48:08 compute-1 ceph-mon[81775]: pgmap v3778: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:48:09 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:48:09 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:48:09 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:48:09 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:48:09.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:48:09 compute-1 ceph-mon[81775]: pgmap v3779: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:48:10 compute-1 nova_compute[225855]: 2026-01-20 15:48:10.089 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:48:10 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:48:10 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:48:10 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:48:10.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:48:11 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:48:11 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:48:11 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:48:11.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:48:12 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:48:12 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:48:12 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:48:12.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:48:12 compute-1 ceph-mon[81775]: pgmap v3780: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:48:13 compute-1 podman[332392]: 2026-01-20 15:48:13.020645949 +0000 UTC m=+0.072866420 container health_status 72be95c12e041eb2cd1a16ad70dfa1391d9f03a5f3180a614ead9459dcfcd46f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.license=GPLv2, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 20 15:48:13 compute-1 nova_compute[225855]: 2026-01-20 15:48:13.105 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:48:13 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:48:13 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:48:13 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:48:13.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:48:14 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:48:14 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:48:14 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:48:14.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:48:14 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:48:14 compute-1 ceph-mon[81775]: pgmap v3781: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:48:14 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/576372207' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:48:14 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/576372207' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:48:15 compute-1 nova_compute[225855]: 2026-01-20 15:48:15.090 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:48:15 compute-1 sudo[332419]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:48:15 compute-1 sudo[332419]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:48:15 compute-1 sudo[332419]: pam_unix(sudo:session): session closed for user root
Jan 20 15:48:15 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:48:15 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:48:15 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:48:15.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:48:15 compute-1 sudo[332444]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:48:15 compute-1 sudo[332444]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:48:15 compute-1 sudo[332444]: pam_unix(sudo:session): session closed for user root
Jan 20 15:48:16 compute-1 ceph-mon[81775]: pgmap v3782: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:48:16 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:48:16 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:48:16 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:48:16.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:48:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:48:16.470 140354 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:48:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:48:16.470 140354 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:48:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:48:16.471 140354 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:48:17 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:48:17 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:48:17 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:48:17.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:48:18 compute-1 nova_compute[225855]: 2026-01-20 15:48:18.107 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:48:18 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:48:18 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:48:18 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:48:18.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:48:18 compute-1 ceph-mon[81775]: pgmap v3783: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:48:19 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:48:19 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:48:19 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:48:19 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:48:19.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:48:20 compute-1 podman[332472]: 2026-01-20 15:48:20.014498125 +0000 UTC m=+0.058397162 container health_status 533cd8e0c3b8b7f910f15cef49e7f879ab46c12ce646a35628c15f13725eb500 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 15:48:20 compute-1 nova_compute[225855]: 2026-01-20 15:48:20.092 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:48:20 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:48:20 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:48:20 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:48:20.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:48:20 compute-1 ceph-mon[81775]: pgmap v3784: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:48:21 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:48:21 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:48:21 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:48:21.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:48:22 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:48:22 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:48:22 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:48:22.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:48:22 compute-1 ceph-mon[81775]: pgmap v3785: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:48:23 compute-1 nova_compute[225855]: 2026-01-20 15:48:23.111 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:48:23 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:48:23 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:48:23 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:48:23.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:48:24 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:48:24 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:48:24 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:48:24 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:48:24.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:48:24 compute-1 ceph-mon[81775]: pgmap v3786: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:48:25 compute-1 nova_compute[225855]: 2026-01-20 15:48:25.094 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:48:25 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:48:25 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:48:25 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:48:25.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:48:26 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:48:26 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:48:26 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:48:26.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:48:26 compute-1 ceph-mon[81775]: pgmap v3787: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:48:27 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:48:27 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:48:27 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:48:27.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:48:27 compute-1 ceph-mon[81775]: pgmap v3788: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:48:28 compute-1 nova_compute[225855]: 2026-01-20 15:48:28.115 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:48:28 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:48:28 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:48:28 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:48:28.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:48:28 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/4229310605' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:48:29 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:48:29 compute-1 nova_compute[225855]: 2026-01-20 15:48:29.340 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:48:29 compute-1 nova_compute[225855]: 2026-01-20 15:48:29.341 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 15:48:29 compute-1 nova_compute[225855]: 2026-01-20 15:48:29.341 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 20 15:48:29 compute-1 nova_compute[225855]: 2026-01-20 15:48:29.358 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 20 15:48:29 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:48:29 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:48:29 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:48:29.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:48:29 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/281312359' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:48:29 compute-1 ceph-mon[81775]: pgmap v3789: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:48:30 compute-1 nova_compute[225855]: 2026-01-20 15:48:30.096 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:48:30 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:48:30 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:48:30 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:48:30.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:48:31 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:48:31 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:48:31 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:48:31.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:48:32 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:48:32 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:48:32 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:48:32.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:48:32 compute-1 nova_compute[225855]: 2026-01-20 15:48:32.340 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:48:32 compute-1 nova_compute[225855]: 2026-01-20 15:48:32.340 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 15:48:32 compute-1 ceph-mon[81775]: pgmap v3790: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:48:33 compute-1 nova_compute[225855]: 2026-01-20 15:48:33.118 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:48:33 compute-1 sudo[332497]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:48:33 compute-1 sudo[332497]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:48:33 compute-1 sudo[332497]: pam_unix(sudo:session): session closed for user root
Jan 20 15:48:33 compute-1 nova_compute[225855]: 2026-01-20 15:48:33.340 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:48:33 compute-1 nova_compute[225855]: 2026-01-20 15:48:33.340 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:48:33 compute-1 sudo[332522]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:48:33 compute-1 sudo[332522]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:48:33 compute-1 sudo[332522]: pam_unix(sudo:session): session closed for user root
Jan 20 15:48:33 compute-1 sudo[332547]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:48:33 compute-1 sudo[332547]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:48:33 compute-1 sudo[332547]: pam_unix(sudo:session): session closed for user root
Jan 20 15:48:33 compute-1 sudo[332572]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 20 15:48:33 compute-1 sudo[332572]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:48:33 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:48:33 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:48:33 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:48:33.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:48:33 compute-1 sudo[332572]: pam_unix(sudo:session): session closed for user root
Jan 20 15:48:34 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:48:34 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:48:34 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:48:34 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:48:34.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:48:34 compute-1 nova_compute[225855]: 2026-01-20 15:48:34.341 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:48:34 compute-1 ceph-mon[81775]: pgmap v3791: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:48:34 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:48:34 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 15:48:34 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:48:34 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 15:48:34 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 15:48:34 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:48:35 compute-1 nova_compute[225855]: 2026-01-20 15:48:35.097 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:48:35 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:48:35 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:48:35 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:48:35.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:48:35 compute-1 sudo[332631]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:48:35 compute-1 sudo[332631]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:48:35 compute-1 sudo[332631]: pam_unix(sudo:session): session closed for user root
Jan 20 15:48:35 compute-1 sudo[332656]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:48:35 compute-1 sudo[332656]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:48:35 compute-1 sudo[332656]: pam_unix(sudo:session): session closed for user root
Jan 20 15:48:36 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:48:36 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:48:36 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:48:36.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:48:36 compute-1 nova_compute[225855]: 2026-01-20 15:48:36.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:48:36 compute-1 nova_compute[225855]: 2026-01-20 15:48:36.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:48:36 compute-1 ceph-mon[81775]: pgmap v3792: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:48:37 compute-1 nova_compute[225855]: 2026-01-20 15:48:37.017 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:48:37 compute-1 nova_compute[225855]: 2026-01-20 15:48:37.018 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:48:37 compute-1 nova_compute[225855]: 2026-01-20 15:48:37.018 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:48:37 compute-1 nova_compute[225855]: 2026-01-20 15:48:37.018 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 15:48:37 compute-1 nova_compute[225855]: 2026-01-20 15:48:37.019 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:48:37 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:48:37 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/334353735' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:48:37 compute-1 nova_compute[225855]: 2026-01-20 15:48:37.510 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.491s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:48:37 compute-1 nova_compute[225855]: 2026-01-20 15:48:37.704 225859 WARNING nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 15:48:37 compute-1 nova_compute[225855]: 2026-01-20 15:48:37.706 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4261MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 15:48:37 compute-1 nova_compute[225855]: 2026-01-20 15:48:37.706 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:48:37 compute-1 nova_compute[225855]: 2026-01-20 15:48:37.706 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:48:37 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/3518712833' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:48:37 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/334353735' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:48:37 compute-1 ceph-mon[81775]: pgmap v3793: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:48:37 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:48:37 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:48:37 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:48:37.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:48:38 compute-1 nova_compute[225855]: 2026-01-20 15:48:38.122 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:48:38 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:48:38 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:48:38 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:48:38.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:48:38 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/467179759' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:48:39 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:48:39 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:48:39 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:48:39 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:48:39.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:48:39 compute-1 ceph-mon[81775]: pgmap v3794: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:48:40 compute-1 nova_compute[225855]: 2026-01-20 15:48:40.099 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:48:40 compute-1 sudo[332705]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:48:40 compute-1 sudo[332705]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:48:40 compute-1 sudo[332705]: pam_unix(sudo:session): session closed for user root
Jan 20 15:48:40 compute-1 sudo[332730]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 15:48:40 compute-1 sudo[332730]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:48:40 compute-1 nova_compute[225855]: 2026-01-20 15:48:40.192 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 15:48:40 compute-1 nova_compute[225855]: 2026-01-20 15:48:40.192 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 15:48:40 compute-1 sudo[332730]: pam_unix(sudo:session): session closed for user root
Jan 20 15:48:40 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:48:40 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:48:40 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:48:40.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:48:40 compute-1 nova_compute[225855]: 2026-01-20 15:48:40.394 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:48:40 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:48:40 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3934550702' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:48:40 compute-1 nova_compute[225855]: 2026-01-20 15:48:40.877 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:48:40 compute-1 nova_compute[225855]: 2026-01-20 15:48:40.883 225859 DEBUG nova.compute.provider_tree [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:48:40 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:48:40 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:48:40 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/3934550702' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:48:41 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:48:41 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:48:41 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:48:41.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:48:41 compute-1 ceph-mon[81775]: pgmap v3795: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:48:42 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:48:42 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:48:42 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:48:42.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:48:43 compute-1 nova_compute[225855]: 2026-01-20 15:48:43.126 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:48:43 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:48:43 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:48:43 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:48:43.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:48:43 compute-1 nova_compute[225855]: 2026-01-20 15:48:43.915 225859 DEBUG nova.scheduler.client.report [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:48:43 compute-1 nova_compute[225855]: 2026-01-20 15:48:43.918 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 15:48:43 compute-1 nova_compute[225855]: 2026-01-20 15:48:43.918 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 6.212s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:48:44 compute-1 podman[332779]: 2026-01-20 15:48:44.031664531 +0000 UTC m=+0.079479228 container health_status 72be95c12e041eb2cd1a16ad70dfa1391d9f03a5f3180a614ead9459dcfcd46f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, managed_by=edpm_ansible)
Jan 20 15:48:44 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:48:44 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:48:44 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:48:44 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:48:44.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:48:44 compute-1 ceph-mon[81775]: pgmap v3796: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:48:44 compute-1 nova_compute[225855]: 2026-01-20 15:48:44.921 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:48:44 compute-1 nova_compute[225855]: 2026-01-20 15:48:44.921 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:48:45 compute-1 nova_compute[225855]: 2026-01-20 15:48:45.101 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:48:45 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:48:45 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:48:45 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:48:45.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:48:46 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:48:46 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:48:46 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:48:46.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:48:46 compute-1 ceph-mon[81775]: pgmap v3797: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:48:47 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:48:47 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:48:47 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:48:47.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:48:48 compute-1 nova_compute[225855]: 2026-01-20 15:48:48.129 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:48:48 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:48:48 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:48:48 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:48:48.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:48:48 compute-1 ceph-mon[81775]: pgmap v3798: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:48:49 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:48:49 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:48:49 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:48:49 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:48:49.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:48:50 compute-1 nova_compute[225855]: 2026-01-20 15:48:50.139 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:48:50 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:48:50 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:48:50 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:48:50.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:48:50 compute-1 ceph-mon[81775]: pgmap v3799: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:48:51 compute-1 podman[332811]: 2026-01-20 15:48:51.018447587 +0000 UTC m=+0.062959830 container health_status 533cd8e0c3b8b7f910f15cef49e7f879ab46c12ce646a35628c15f13725eb500 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true)
Jan 20 15:48:51 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:48:51 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:48:51 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:48:51.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:48:52 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:48:52 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:48:52 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:48:52.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:48:52 compute-1 ceph-mon[81775]: pgmap v3800: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.5 KiB/s rd, 0 B/s wr, 2 op/s
Jan 20 15:48:53 compute-1 nova_compute[225855]: 2026-01-20 15:48:53.178 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:48:53 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:48:53 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:48:53 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:48:53.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:48:54 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:48:54 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:48:54 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:48:54 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:48:54.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:48:54 compute-1 ceph-mon[81775]: pgmap v3801: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.5 KiB/s rd, 0 B/s wr, 2 op/s
Jan 20 15:48:55 compute-1 nova_compute[225855]: 2026-01-20 15:48:55.173 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:48:55 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:48:55 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:48:55 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:48:55.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:48:56 compute-1 sudo[332834]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:48:56 compute-1 sudo[332834]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:48:56 compute-1 sudo[332834]: pam_unix(sudo:session): session closed for user root
Jan 20 15:48:56 compute-1 sudo[332859]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:48:56 compute-1 sudo[332859]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:48:56 compute-1 sudo[332859]: pam_unix(sudo:session): session closed for user root
Jan 20 15:48:56 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:48:56 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:48:56 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:48:56.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:48:56 compute-1 ceph-mon[81775]: pgmap v3802: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 73 KiB/s rd, 0 B/s wr, 121 op/s
Jan 20 15:48:57 compute-1 ceph-mon[81775]: pgmap v3803: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 103 KiB/s rd, 0 B/s wr, 171 op/s
Jan 20 15:48:57 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:48:57 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:48:57 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:48:57.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:48:58 compute-1 nova_compute[225855]: 2026-01-20 15:48:58.181 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:48:58 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:48:58 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:48:58 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:48:58.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:48:59 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:48:59 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:48:59 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:48:59 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:48:59.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:49:00 compute-1 nova_compute[225855]: 2026-01-20 15:49:00.177 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:49:00 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:49:00 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:49:00 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:49:00.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:49:00 compute-1 ceph-mon[81775]: pgmap v3804: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 103 KiB/s rd, 0 B/s wr, 171 op/s
Jan 20 15:49:01 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:49:01 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:49:01 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:49:01.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:49:02 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:49:02 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:49:02 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:49:02.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:49:02 compute-1 ceph-mon[81775]: pgmap v3805: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 103 KiB/s rd, 0 B/s wr, 171 op/s
Jan 20 15:49:03 compute-1 nova_compute[225855]: 2026-01-20 15:49:03.183 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:49:03 compute-1 ceph-mon[81775]: pgmap v3806: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 101 KiB/s rd, 0 B/s wr, 169 op/s
Jan 20 15:49:03 compute-1 ceph-mon[81775]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #190. Immutable memtables: 0.
Jan 20 15:49:03 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:49:03.838206) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 20 15:49:03 compute-1 ceph-mon[81775]: rocksdb: [db/flush_job.cc:856] [default] [JOB 121] Flushing memtable with next log file: 190
Jan 20 15:49:03 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768924143838296, "job": 121, "event": "flush_started", "num_memtables": 1, "num_entries": 1349, "num_deletes": 251, "total_data_size": 3020198, "memory_usage": 3057200, "flush_reason": "Manual Compaction"}
Jan 20 15:49:03 compute-1 ceph-mon[81775]: rocksdb: [db/flush_job.cc:885] [default] [JOB 121] Level-0 flush table #191: started
Jan 20 15:49:03 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:49:03 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:49:03 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:49:03.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:49:03 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768924143862552, "cf_name": "default", "job": 121, "event": "table_file_creation", "file_number": 191, "file_size": 1982070, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 91001, "largest_seqno": 92345, "table_properties": {"data_size": 1976282, "index_size": 3118, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1605, "raw_key_size": 12420, "raw_average_key_size": 19, "raw_value_size": 1964668, "raw_average_value_size": 3153, "num_data_blocks": 139, "num_entries": 623, "num_filter_entries": 623, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768924029, "oldest_key_time": 1768924029, "file_creation_time": 1768924143, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1539d774-8a6f-4e48-b253-137c44586344", "db_session_id": "LFF7G2OZDOU7TKQ8MKAH", "orig_file_number": 191, "seqno_to_time_mapping": "N/A"}}
Jan 20 15:49:03 compute-1 ceph-mon[81775]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 121] Flush lasted 24465 microseconds, and 9735 cpu microseconds.
Jan 20 15:49:03 compute-1 ceph-mon[81775]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 15:49:03 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:49:03.862685) [db/flush_job.cc:967] [default] [JOB 121] Level-0 flush table #191: 1982070 bytes OK
Jan 20 15:49:03 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:49:03.862729) [db/memtable_list.cc:519] [default] Level-0 commit table #191 started
Jan 20 15:49:03 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:49:03.864450) [db/memtable_list.cc:722] [default] Level-0 commit table #191: memtable #1 done
Jan 20 15:49:03 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:49:03.864463) EVENT_LOG_v1 {"time_micros": 1768924143864458, "job": 121, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 20 15:49:03 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:49:03.864483) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 20 15:49:03 compute-1 ceph-mon[81775]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 121] Try to delete WAL files size 3013864, prev total WAL file size 3013864, number of live WAL files 2.
Jan 20 15:49:03 compute-1 ceph-mon[81775]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000187.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 15:49:03 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:49:03.865637) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730038303332' seq:72057594037927935, type:22 .. '7061786F730038323834' seq:0, type:0; will stop at (end)
Jan 20 15:49:03 compute-1 ceph-mon[81775]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 122] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 20 15:49:03 compute-1 ceph-mon[81775]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 121 Base level 0, inputs: [191(1935KB)], [189(11MB)]
Jan 20 15:49:03 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768924143865668, "job": 122, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [191], "files_L6": [189], "score": -1, "input_data_size": 14355705, "oldest_snapshot_seqno": -1}
Jan 20 15:49:03 compute-1 ceph-mon[81775]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 122] Generated table #192: 10950 keys, 12391529 bytes, temperature: kUnknown
Jan 20 15:49:03 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768924143959928, "cf_name": "default", "job": 122, "event": "table_file_creation", "file_number": 192, "file_size": 12391529, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12323581, "index_size": 39483, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 27397, "raw_key_size": 290348, "raw_average_key_size": 26, "raw_value_size": 12134572, "raw_average_value_size": 1108, "num_data_blocks": 1490, "num_entries": 10950, "num_filter_entries": 10950, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768917474, "oldest_key_time": 0, "file_creation_time": 1768924143, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1539d774-8a6f-4e48-b253-137c44586344", "db_session_id": "LFF7G2OZDOU7TKQ8MKAH", "orig_file_number": 192, "seqno_to_time_mapping": "N/A"}}
Jan 20 15:49:03 compute-1 ceph-mon[81775]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 15:49:03 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:49:03.960235) [db/compaction/compaction_job.cc:1663] [default] [JOB 122] Compacted 1@0 + 1@6 files to L6 => 12391529 bytes
Jan 20 15:49:03 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:49:03.961616) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 152.1 rd, 131.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.9, 11.8 +0.0 blob) out(11.8 +0.0 blob), read-write-amplify(13.5) write-amplify(6.3) OK, records in: 11465, records dropped: 515 output_compression: NoCompression
Jan 20 15:49:03 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:49:03.961636) EVENT_LOG_v1 {"time_micros": 1768924143961626, "job": 122, "event": "compaction_finished", "compaction_time_micros": 94375, "compaction_time_cpu_micros": 28004, "output_level": 6, "num_output_files": 1, "total_output_size": 12391529, "num_input_records": 11465, "num_output_records": 10950, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 20 15:49:03 compute-1 ceph-mon[81775]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000191.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 15:49:03 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768924143962135, "job": 122, "event": "table_file_deletion", "file_number": 191}
Jan 20 15:49:03 compute-1 ceph-mon[81775]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000189.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 15:49:03 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768924143964294, "job": 122, "event": "table_file_deletion", "file_number": 189}
Jan 20 15:49:03 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:49:03.865549) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:49:03 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:49:03.964346) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:49:03 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:49:03.964352) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:49:03 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:49:03.964353) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:49:03 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:49:03.964355) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:49:03 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:49:03.964356) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:49:04 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:49:04 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:49:04 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:49:04 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:49:04.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:49:05 compute-1 nova_compute[225855]: 2026-01-20 15:49:05.178 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:49:05 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:49:05 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:49:05 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:49:05.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:49:06 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:49:06 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:49:06 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:49:06.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:49:06 compute-1 ceph-mon[81775]: pgmap v3807: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 101 KiB/s rd, 0 B/s wr, 169 op/s
Jan 20 15:49:07 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:49:07 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:49:07 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:49:07.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:49:08 compute-1 nova_compute[225855]: 2026-01-20 15:49:08.187 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:49:08 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:49:08 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:49:08 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:49:08.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:49:08 compute-1 ceph-mon[81775]: pgmap v3808: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 30 KiB/s rd, 0 B/s wr, 49 op/s
Jan 20 15:49:09 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:49:09 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:49:09 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:49:09 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:49:09.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:49:10 compute-1 nova_compute[225855]: 2026-01-20 15:49:10.180 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:49:10 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:49:10 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:49:10 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:49:10.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:49:10 compute-1 ceph-mon[81775]: pgmap v3809: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 767 B/s rd, 0 op/s
Jan 20 15:49:11 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:49:11 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:49:11 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:49:11.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:49:12 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:49:12 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:49:12 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:49:12.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:49:12 compute-1 ceph-mon[81775]: pgmap v3810: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 767 B/s rd, 0 op/s
Jan 20 15:49:13 compute-1 nova_compute[225855]: 2026-01-20 15:49:13.190 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:49:13 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:49:13 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:49:13 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:49:13.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:49:14 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:49:14 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:49:14 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:49:14 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:49:14.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:49:14 compute-1 ceph-mon[81775]: pgmap v3811: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 767 B/s rd, 0 op/s
Jan 20 15:49:14 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/1297599588' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:49:14 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/1297599588' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:49:15 compute-1 podman[332893]: 2026-01-20 15:49:15.070601915 +0000 UTC m=+0.116539005 container health_status 72be95c12e041eb2cd1a16ad70dfa1391d9f03a5f3180a614ead9459dcfcd46f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible)
Jan 20 15:49:15 compute-1 nova_compute[225855]: 2026-01-20 15:49:15.182 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:49:15 compute-1 ceph-mon[81775]: pgmap v3812: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 767 B/s rd, 0 op/s
Jan 20 15:49:15 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:49:15 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:49:15 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:49:15.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:49:16 compute-1 sudo[332922]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:49:16 compute-1 sudo[332922]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:49:16 compute-1 sudo[332922]: pam_unix(sudo:session): session closed for user root
Jan 20 15:49:16 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:49:16 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:49:16 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:49:16.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:49:16 compute-1 sudo[332947]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:49:16 compute-1 sudo[332947]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:49:16 compute-1 sudo[332947]: pam_unix(sudo:session): session closed for user root
Jan 20 15:49:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:49:16.471 140354 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:49:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:49:16.472 140354 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:49:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:49:16.472 140354 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:49:17 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:49:17 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:49:17 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:49:17.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:49:18 compute-1 nova_compute[225855]: 2026-01-20 15:49:18.193 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:49:18 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:49:18 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:49:18 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:49:18.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:49:18 compute-1 ceph-mon[81775]: pgmap v3813: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 767 B/s rd, 0 op/s
Jan 20 15:49:19 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:49:19 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:49:19 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:49:19 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:49:19.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:49:20 compute-1 nova_compute[225855]: 2026-01-20 15:49:20.184 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:49:20 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:49:20 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:49:20 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:49:20.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:49:20 compute-1 ceph-mon[81775]: pgmap v3814: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 767 B/s rd, 0 op/s
Jan 20 15:49:21 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:49:21 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:49:21 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:49:21.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:49:22 compute-1 podman[332975]: 2026-01-20 15:49:22.014091288 +0000 UTC m=+0.056900039 container health_status 533cd8e0c3b8b7f910f15cef49e7f879ab46c12ce646a35628c15f13725eb500 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 20 15:49:22 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:49:22 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:49:22 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:49:22.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:49:23 compute-1 ceph-mon[81775]: pgmap v3815: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:49:23 compute-1 nova_compute[225855]: 2026-01-20 15:49:23.196 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:49:23 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:49:23 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:49:23 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:49:23.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:49:24 compute-1 ceph-mon[81775]: pgmap v3816: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:49:24 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:49:24 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:49:24 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:49:24 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:49:24.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:49:25 compute-1 nova_compute[225855]: 2026-01-20 15:49:25.187 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:49:25 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:49:25 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:49:25 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:49:25.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:49:26 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:49:26 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:49:26 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:49:26.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:49:26 compute-1 ceph-mon[81775]: pgmap v3817: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:49:27 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:49:27 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:49:27 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:49:27.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:49:28 compute-1 nova_compute[225855]: 2026-01-20 15:49:28.198 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:49:28 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:49:28 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:49:28 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:49:28.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:49:28 compute-1 ceph-mon[81775]: pgmap v3818: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:49:29 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:49:29 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:49:29 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:49:29 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:49:29.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:49:30 compute-1 nova_compute[225855]: 2026-01-20 15:49:30.189 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:49:30 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:49:30 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:49:30 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:49:30.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:49:30 compute-1 nova_compute[225855]: 2026-01-20 15:49:30.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:49:30 compute-1 nova_compute[225855]: 2026-01-20 15:49:30.340 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 15:49:30 compute-1 nova_compute[225855]: 2026-01-20 15:49:30.340 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 20 15:49:30 compute-1 nova_compute[225855]: 2026-01-20 15:49:30.373 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 20 15:49:30 compute-1 ceph-mon[81775]: pgmap v3819: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:49:30 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/2906927452' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:49:31 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/3255063098' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:49:31 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:49:31 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:49:31 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:49:31.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:49:32 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:49:32 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:49:32 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:49:32.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:49:32 compute-1 ceph-mon[81775]: pgmap v3820: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:49:32 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/1041901981' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:49:33 compute-1 nova_compute[225855]: 2026-01-20 15:49:33.202 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:49:33 compute-1 nova_compute[225855]: 2026-01-20 15:49:33.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:49:33 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/2183781135' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:49:33 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:49:33 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:49:33 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:49:33.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:49:34 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:49:34 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:49:34 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:49:34 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:49:34.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:49:34 compute-1 nova_compute[225855]: 2026-01-20 15:49:34.340 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:49:34 compute-1 nova_compute[225855]: 2026-01-20 15:49:34.341 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 15:49:34 compute-1 ceph-mon[81775]: pgmap v3821: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:49:35 compute-1 nova_compute[225855]: 2026-01-20 15:49:35.190 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:49:35 compute-1 nova_compute[225855]: 2026-01-20 15:49:35.340 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:49:35 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:49:35 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:49:35 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:49:35.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:49:36 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:49:36 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:49:36 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:49:36.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:49:36 compute-1 nova_compute[225855]: 2026-01-20 15:49:36.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:49:36 compute-1 nova_compute[225855]: 2026-01-20 15:49:36.340 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:49:36 compute-1 nova_compute[225855]: 2026-01-20 15:49:36.340 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:49:36 compute-1 sudo[333000]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:49:36 compute-1 sudo[333000]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:49:36 compute-1 sudo[333000]: pam_unix(sudo:session): session closed for user root
Jan 20 15:49:36 compute-1 nova_compute[225855]: 2026-01-20 15:49:36.406 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:49:36 compute-1 nova_compute[225855]: 2026-01-20 15:49:36.406 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:49:36 compute-1 nova_compute[225855]: 2026-01-20 15:49:36.406 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:49:36 compute-1 nova_compute[225855]: 2026-01-20 15:49:36.407 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 15:49:36 compute-1 nova_compute[225855]: 2026-01-20 15:49:36.407 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:49:36 compute-1 sudo[333025]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:49:36 compute-1 sudo[333025]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:49:36 compute-1 sudo[333025]: pam_unix(sudo:session): session closed for user root
Jan 20 15:49:37 compute-1 ceph-mon[81775]: pgmap v3822: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:49:37 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:49:37 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4206473289' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:49:37 compute-1 nova_compute[225855]: 2026-01-20 15:49:37.324 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.917s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:49:37 compute-1 nova_compute[225855]: 2026-01-20 15:49:37.489 225859 WARNING nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 15:49:37 compute-1 nova_compute[225855]: 2026-01-20 15:49:37.490 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4260MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 15:49:37 compute-1 nova_compute[225855]: 2026-01-20 15:49:37.491 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:49:37 compute-1 nova_compute[225855]: 2026-01-20 15:49:37.491 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:49:37 compute-1 nova_compute[225855]: 2026-01-20 15:49:37.620 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 15:49:37 compute-1 nova_compute[225855]: 2026-01-20 15:49:37.621 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 15:49:37 compute-1 nova_compute[225855]: 2026-01-20 15:49:37.713 225859 DEBUG nova.scheduler.client.report [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Refreshing inventories for resource provider bbb02880-a710-4ac1-8b2c-5c09765848d1 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 20 15:49:37 compute-1 nova_compute[225855]: 2026-01-20 15:49:37.742 225859 DEBUG nova.scheduler.client.report [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Updating ProviderTree inventory for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 20 15:49:37 compute-1 nova_compute[225855]: 2026-01-20 15:49:37.743 225859 DEBUG nova.compute.provider_tree [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Updating inventory in ProviderTree for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 20 15:49:37 compute-1 nova_compute[225855]: 2026-01-20 15:49:37.767 225859 DEBUG nova.scheduler.client.report [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Refreshing aggregate associations for resource provider bbb02880-a710-4ac1-8b2c-5c09765848d1, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 20 15:49:37 compute-1 nova_compute[225855]: 2026-01-20 15:49:37.805 225859 DEBUG nova.scheduler.client.report [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Refreshing trait associations for resource provider bbb02880-a710-4ac1-8b2c-5c09765848d1, traits: COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_STORAGE_BUS_SATA,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_TRUSTED_CERTS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSSE3,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_VOLUME_EXTEND,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_RESCUE_BFV,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_USB,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE42,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SOCKET_PCI_NUMA_AFFINITY _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 20 15:49:37 compute-1 nova_compute[225855]: 2026-01-20 15:49:37.831 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:49:37 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:49:37 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:49:37 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:49:37.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:49:38 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/4206473289' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:49:38 compute-1 nova_compute[225855]: 2026-01-20 15:49:38.205 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:49:38 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:49:38 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2377546314' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:49:38 compute-1 nova_compute[225855]: 2026-01-20 15:49:38.281 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:49:38 compute-1 nova_compute[225855]: 2026-01-20 15:49:38.287 225859 DEBUG nova.compute.provider_tree [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:49:38 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:49:38 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:49:38 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:49:38.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:49:38 compute-1 nova_compute[225855]: 2026-01-20 15:49:38.308 225859 DEBUG nova.scheduler.client.report [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:49:38 compute-1 nova_compute[225855]: 2026-01-20 15:49:38.309 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 15:49:38 compute-1 nova_compute[225855]: 2026-01-20 15:49:38.310 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.819s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:49:39 compute-1 ceph-mon[81775]: pgmap v3823: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:49:39 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/2377546314' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:49:39 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:49:39 compute-1 nova_compute[225855]: 2026-01-20 15:49:39.310 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:49:39 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:49:39 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:49:39 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:49:39.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:49:40 compute-1 nova_compute[225855]: 2026-01-20 15:49:40.192 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:49:40 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:49:40 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:49:40 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:49:40.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:49:40 compute-1 sudo[333096]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:49:40 compute-1 sudo[333096]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:49:40 compute-1 sudo[333096]: pam_unix(sudo:session): session closed for user root
Jan 20 15:49:40 compute-1 sudo[333121]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:49:40 compute-1 sudo[333121]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:49:40 compute-1 sudo[333121]: pam_unix(sudo:session): session closed for user root
Jan 20 15:49:40 compute-1 sudo[333146]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:49:40 compute-1 sudo[333146]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:49:40 compute-1 sudo[333146]: pam_unix(sudo:session): session closed for user root
Jan 20 15:49:40 compute-1 sudo[333171]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 20 15:49:40 compute-1 sudo[333171]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:49:40 compute-1 sudo[333171]: pam_unix(sudo:session): session closed for user root
Jan 20 15:49:41 compute-1 ceph-mon[81775]: pgmap v3824: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:49:41 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:49:41 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:49:41 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:49:41.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:49:42 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:49:42 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:49:42 compute-1 ceph-mon[81775]: pgmap v3825: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:49:42 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:49:42 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 15:49:42 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:49:42 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 15:49:42 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 15:49:42 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:49:42 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:49:42 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:49:42 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:49:42.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:49:42 compute-1 nova_compute[225855]: 2026-01-20 15:49:42.336 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:49:43 compute-1 nova_compute[225855]: 2026-01-20 15:49:43.210 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:49:43 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:49:43 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:49:43 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:49:43.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:49:44 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:49:44 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:49:44 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:49:44 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:49:44.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:49:44 compute-1 ceph-mon[81775]: pgmap v3826: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:49:45 compute-1 nova_compute[225855]: 2026-01-20 15:49:45.194 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:49:45 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:49:45 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:49:45 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:49:45.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:49:46 compute-1 podman[333230]: 2026-01-20 15:49:46.037972496 +0000 UTC m=+0.083437289 container health_status 72be95c12e041eb2cd1a16ad70dfa1391d9f03a5f3180a614ead9459dcfcd46f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 20 15:49:46 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:49:46 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:49:46 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:49:46.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:49:46 compute-1 ceph-mon[81775]: pgmap v3827: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:49:47 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:49:47 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:49:47 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:49:47.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:49:48 compute-1 nova_compute[225855]: 2026-01-20 15:49:48.212 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:49:48 compute-1 sudo[333258]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:49:48 compute-1 sudo[333258]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:49:48 compute-1 sudo[333258]: pam_unix(sudo:session): session closed for user root
Jan 20 15:49:48 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:49:48 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:49:48 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:49:48.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:49:48 compute-1 sudo[333283]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 15:49:48 compute-1 sudo[333283]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:49:48 compute-1 sudo[333283]: pam_unix(sudo:session): session closed for user root
Jan 20 15:49:49 compute-1 ceph-mon[81775]: pgmap v3828: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:49:49 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:49:49 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:49:49 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:49:49 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:49:49 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:49:49 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:49:49.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:49:50 compute-1 nova_compute[225855]: 2026-01-20 15:49:50.195 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:49:50 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:49:50 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:49:50 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:49:50.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:49:51 compute-1 ceph-mon[81775]: pgmap v3829: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:49:51 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:49:51 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:49:51 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:49:51.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:49:52 compute-1 ceph-mon[81775]: pgmap v3830: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:49:52 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:49:52 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:49:52 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:49:52.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:49:52 compute-1 podman[333310]: 2026-01-20 15:49:52.996994098 +0000 UTC m=+0.046881876 container health_status 533cd8e0c3b8b7f910f15cef49e7f879ab46c12ce646a35628c15f13725eb500 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Jan 20 15:49:53 compute-1 nova_compute[225855]: 2026-01-20 15:49:53.215 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:49:53 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:49:53 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:49:53 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:49:53.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:49:54 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:49:54 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:49:54 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:49:54 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:49:54.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:49:54 compute-1 ceph-mon[81775]: pgmap v3831: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:49:55 compute-1 nova_compute[225855]: 2026-01-20 15:49:55.198 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:49:55 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:49:55 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:49:55 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:49:55.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:49:56 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:49:56 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:49:56 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:49:56.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:49:56 compute-1 sudo[333331]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:49:56 compute-1 sudo[333331]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:49:56 compute-1 sudo[333331]: pam_unix(sudo:session): session closed for user root
Jan 20 15:49:56 compute-1 sudo[333356]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:49:56 compute-1 sudo[333356]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:49:56 compute-1 sudo[333356]: pam_unix(sudo:session): session closed for user root
Jan 20 15:49:56 compute-1 ceph-mon[81775]: pgmap v3832: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:49:57 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:49:57 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:49:57 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:49:57.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:49:58 compute-1 nova_compute[225855]: 2026-01-20 15:49:58.219 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:49:58 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:49:58 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:49:58 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:49:58.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:49:58 compute-1 nova_compute[225855]: 2026-01-20 15:49:58.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:49:58 compute-1 ceph-mon[81775]: pgmap v3833: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:49:59 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:49:59 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:49:59 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:49:59 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:49:59.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:50:00 compute-1 nova_compute[225855]: 2026-01-20 15:50:00.200 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:50:00 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:50:00 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:50:00 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:50:00.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:50:00 compute-1 nova_compute[225855]: 2026-01-20 15:50:00.351 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:50:00 compute-1 ceph-mon[81775]: pgmap v3834: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:50:00 compute-1 ceph-mon[81775]: overall HEALTH_OK
Jan 20 15:50:01 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:50:01 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:50:01 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:50:01.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:50:02 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:50:02 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:50:02 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:50:02.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:50:02 compute-1 ceph-mon[81775]: pgmap v3835: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:50:03 compute-1 nova_compute[225855]: 2026-01-20 15:50:03.222 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:50:03 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:50:03 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:50:03 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:50:03.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:50:04 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:50:04 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:50:04 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:50:04 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:50:04.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:50:04 compute-1 ceph-mon[81775]: pgmap v3836: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:50:05 compute-1 nova_compute[225855]: 2026-01-20 15:50:05.202 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:50:05 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:50:05 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:50:05 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:50:05.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:50:06 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:50:06 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:50:06 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:50:06.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:50:06 compute-1 ceph-mon[81775]: pgmap v3837: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:50:07 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:50:07 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:50:07 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:50:07.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:50:08 compute-1 nova_compute[225855]: 2026-01-20 15:50:08.225 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:50:08 compute-1 nova_compute[225855]: 2026-01-20 15:50:08.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:50:08 compute-1 nova_compute[225855]: 2026-01-20 15:50:08.340 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 20 15:50:08 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:50:08 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:50:08 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:50:08.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:50:08 compute-1 ceph-mon[81775]: pgmap v3838: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:50:09 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:50:09 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:50:09 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:50:09 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:50:09.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:50:10 compute-1 nova_compute[225855]: 2026-01-20 15:50:10.203 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:50:10 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:50:10 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:50:10 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:50:10.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:50:10 compute-1 ceph-mon[81775]: pgmap v3839: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:50:11 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:50:11 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:50:11 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:50:11.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:50:12 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:50:12 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:50:12 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:50:12.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:50:13 compute-1 ceph-mon[81775]: pgmap v3840: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:50:13 compute-1 nova_compute[225855]: 2026-01-20 15:50:13.227 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:50:13 compute-1 nova_compute[225855]: 2026-01-20 15:50:13.360 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:50:13 compute-1 nova_compute[225855]: 2026-01-20 15:50:13.360 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 20 15:50:13 compute-1 nova_compute[225855]: 2026-01-20 15:50:13.396 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 20 15:50:13 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:50:13 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:50:13 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:50:13.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:50:14 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/660523494' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:50:14 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/660523494' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:50:14 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:50:14 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:50:14 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:50:14 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:50:14.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:50:15 compute-1 ceph-mon[81775]: pgmap v3841: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:50:15 compute-1 nova_compute[225855]: 2026-01-20 15:50:15.206 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:50:15 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:50:15 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:50:15 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:50:15.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:50:16 compute-1 ceph-mon[81775]: pgmap v3842: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:50:16 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:50:16 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:50:16 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:50:16.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:50:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:50:16.473 140354 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:50:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:50:16.473 140354 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:50:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:50:16.473 140354 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:50:16 compute-1 sudo[333391]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:50:16 compute-1 sudo[333391]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:50:16 compute-1 sudo[333391]: pam_unix(sudo:session): session closed for user root
Jan 20 15:50:16 compute-1 sudo[333422]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:50:16 compute-1 sudo[333422]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:50:16 compute-1 sudo[333422]: pam_unix(sudo:session): session closed for user root
Jan 20 15:50:16 compute-1 podman[333415]: 2026-01-20 15:50:16.803326107 +0000 UTC m=+0.078083798 container health_status 72be95c12e041eb2cd1a16ad70dfa1391d9f03a5f3180a614ead9459dcfcd46f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, 
org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 20 15:50:17 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:50:17 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:50:17 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:50:17.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:50:18 compute-1 nova_compute[225855]: 2026-01-20 15:50:18.230 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:50:18 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:50:18 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:50:18 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:50:18.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:50:18 compute-1 ceph-mon[81775]: pgmap v3843: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:50:19 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:50:19 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:50:19 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:50:19 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:50:19.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:50:20 compute-1 nova_compute[225855]: 2026-01-20 15:50:20.208 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:50:20 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:50:20 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:50:20 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:50:20.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:50:20 compute-1 ceph-mon[81775]: pgmap v3844: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:50:21 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:50:21 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:50:21 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:50:21.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:50:22 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:50:22 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:50:22 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:50:22.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:50:22 compute-1 ceph-mon[81775]: pgmap v3845: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:50:23 compute-1 nova_compute[225855]: 2026-01-20 15:50:23.233 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:50:23 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:50:23 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:50:23 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:50:23.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:50:24 compute-1 podman[333471]: 2026-01-20 15:50:24.018454057 +0000 UTC m=+0.067194020 container health_status 533cd8e0c3b8b7f910f15cef49e7f879ab46c12ce646a35628c15f13725eb500 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 20 15:50:24 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:50:24 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:50:24 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:50:24 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:50:24.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:50:24 compute-1 ceph-mon[81775]: pgmap v3846: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:50:25 compute-1 nova_compute[225855]: 2026-01-20 15:50:25.211 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:50:25 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:50:25 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:50:25 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:50:25.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:50:26 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:50:26 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:50:26 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:50:26.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:50:26 compute-1 ceph-mon[81775]: pgmap v3847: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:50:27 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:50:27 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:50:27 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:50:27.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:50:28 compute-1 nova_compute[225855]: 2026-01-20 15:50:28.237 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:50:28 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:50:28 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:50:28 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:50:28.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:50:28 compute-1 ceph-mon[81775]: pgmap v3848: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:50:29 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:50:29 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:50:29 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:50:29 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:50:29.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:50:30 compute-1 nova_compute[225855]: 2026-01-20 15:50:30.211 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:50:30 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:50:30 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:50:30 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:50:30.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:50:30 compute-1 ceph-mon[81775]: pgmap v3849: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:50:30 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/3445212458' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:50:31 compute-1 nova_compute[225855]: 2026-01-20 15:50:31.377 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:50:31 compute-1 nova_compute[225855]: 2026-01-20 15:50:31.377 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 15:50:31 compute-1 nova_compute[225855]: 2026-01-20 15:50:31.378 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 20 15:50:31 compute-1 nova_compute[225855]: 2026-01-20 15:50:31.391 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 20 15:50:31 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:50:31 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:50:31 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:50:31.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:50:31 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/2651699864' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:50:32 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:50:32 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:50:32 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:50:32.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:50:32 compute-1 ceph-mon[81775]: pgmap v3850: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:50:33 compute-1 nova_compute[225855]: 2026-01-20 15:50:33.266 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:50:33 compute-1 nova_compute[225855]: 2026-01-20 15:50:33.340 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:50:33 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:50:33 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:50:33 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:50:33.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:50:34 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/3990908444' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:50:34 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:50:34 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:50:34 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:50:34 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:50:34.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:50:35 compute-1 ceph-mon[81775]: pgmap v3851: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:50:35 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/425106910' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:50:35 compute-1 nova_compute[225855]: 2026-01-20 15:50:35.213 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:50:35 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:50:35 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:50:35 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:50:35.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:50:36 compute-1 nova_compute[225855]: 2026-01-20 15:50:36.340 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:50:36 compute-1 nova_compute[225855]: 2026-01-20 15:50:36.340 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:50:36 compute-1 nova_compute[225855]: 2026-01-20 15:50:36.340 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 15:50:36 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:50:36 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:50:36 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:50:36.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:50:36 compute-1 sudo[333497]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:50:36 compute-1 sudo[333497]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:50:36 compute-1 sudo[333497]: pam_unix(sudo:session): session closed for user root
Jan 20 15:50:36 compute-1 sudo[333522]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:50:37 compute-1 sudo[333522]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:50:37 compute-1 sudo[333522]: pam_unix(sudo:session): session closed for user root
Jan 20 15:50:37 compute-1 ceph-mon[81775]: pgmap v3852: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:50:37 compute-1 nova_compute[225855]: 2026-01-20 15:50:37.340 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:50:37 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:50:37 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:50:37 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:50:37.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:50:38 compute-1 nova_compute[225855]: 2026-01-20 15:50:38.270 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:50:38 compute-1 nova_compute[225855]: 2026-01-20 15:50:38.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:50:38 compute-1 nova_compute[225855]: 2026-01-20 15:50:38.340 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:50:38 compute-1 nova_compute[225855]: 2026-01-20 15:50:38.340 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:50:38 compute-1 nova_compute[225855]: 2026-01-20 15:50:38.373 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:50:38 compute-1 nova_compute[225855]: 2026-01-20 15:50:38.373 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:50:38 compute-1 nova_compute[225855]: 2026-01-20 15:50:38.374 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:50:38 compute-1 nova_compute[225855]: 2026-01-20 15:50:38.374 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 15:50:38 compute-1 nova_compute[225855]: 2026-01-20 15:50:38.375 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:50:38 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:50:38 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:50:38 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:50:38.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:50:38 compute-1 nova_compute[225855]: 2026-01-20 15:50:38.858 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:50:39 compute-1 ceph-mon[81775]: pgmap v3853: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:50:39 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/1285300892' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:50:39 compute-1 nova_compute[225855]: 2026-01-20 15:50:39.077 225859 WARNING nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 15:50:39 compute-1 nova_compute[225855]: 2026-01-20 15:50:39.079 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4272MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 15:50:39 compute-1 nova_compute[225855]: 2026-01-20 15:50:39.079 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:50:39 compute-1 nova_compute[225855]: 2026-01-20 15:50:39.079 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:50:39 compute-1 nova_compute[225855]: 2026-01-20 15:50:39.149 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 15:50:39 compute-1 nova_compute[225855]: 2026-01-20 15:50:39.149 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 15:50:39 compute-1 nova_compute[225855]: 2026-01-20 15:50:39.184 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:50:39 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:50:39 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:50:39 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1405845245' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:50:39 compute-1 nova_compute[225855]: 2026-01-20 15:50:39.638 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:50:39 compute-1 nova_compute[225855]: 2026-01-20 15:50:39.644 225859 DEBUG nova.compute.provider_tree [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:50:39 compute-1 nova_compute[225855]: 2026-01-20 15:50:39.657 225859 DEBUG nova.scheduler.client.report [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:50:39 compute-1 nova_compute[225855]: 2026-01-20 15:50:39.658 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 15:50:39 compute-1 nova_compute[225855]: 2026-01-20 15:50:39.659 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.579s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:50:39 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:50:39 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:50:39 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:50:39.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:50:40 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/1405845245' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:50:40 compute-1 nova_compute[225855]: 2026-01-20 15:50:40.215 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:50:40 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:50:40 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:50:40 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:50:40.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:50:41 compute-1 ceph-mon[81775]: pgmap v3854: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:50:41 compute-1 nova_compute[225855]: 2026-01-20 15:50:41.208 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:50:41 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:50:41 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:50:41 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:50:41.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:50:42 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:50:42 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:50:42 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:50:42.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:50:43 compute-1 ceph-mon[81775]: pgmap v3855: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:50:43 compute-1 nova_compute[225855]: 2026-01-20 15:50:43.319 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:50:43 compute-1 nova_compute[225855]: 2026-01-20 15:50:43.360 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:50:43 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:50:43 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:50:43 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:50:43.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:50:44 compute-1 ceph-mon[81775]: pgmap v3856: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:50:44 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:50:44 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:50:44 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:50:44 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:50:44.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:50:45 compute-1 nova_compute[225855]: 2026-01-20 15:50:45.217 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:50:45 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:50:45 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:50:45 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:50:45.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:50:46 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:50:46 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:50:46 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:50:46.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:50:46 compute-1 ceph-mon[81775]: pgmap v3857: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:50:47 compute-1 podman[333596]: 2026-01-20 15:50:47.016724951 +0000 UTC m=+0.064848273 container health_status 72be95c12e041eb2cd1a16ad70dfa1391d9f03a5f3180a614ead9459dcfcd46f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251202, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, 
org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 20 15:50:47 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:50:47 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:50:47 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:50:47.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:50:48 compute-1 nova_compute[225855]: 2026-01-20 15:50:48.338 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:50:48 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:50:48 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:50:48 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:50:48.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:50:48 compute-1 sudo[333621]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:50:48 compute-1 sudo[333621]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:50:48 compute-1 sudo[333621]: pam_unix(sudo:session): session closed for user root
Jan 20 15:50:48 compute-1 sudo[333646]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:50:48 compute-1 sudo[333646]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:50:48 compute-1 sudo[333646]: pam_unix(sudo:session): session closed for user root
Jan 20 15:50:48 compute-1 sudo[333671]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:50:48 compute-1 sudo[333671]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:50:48 compute-1 sudo[333671]: pam_unix(sudo:session): session closed for user root
Jan 20 15:50:48 compute-1 ceph-mon[81775]: pgmap v3858: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:50:48 compute-1 sudo[333696]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Jan 20 15:50:48 compute-1 sudo[333696]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:50:49 compute-1 sudo[333696]: pam_unix(sudo:session): session closed for user root
Jan 20 15:50:49 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:50:49 compute-1 sudo[333741]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:50:49 compute-1 sudo[333741]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:50:49 compute-1 sudo[333741]: pam_unix(sudo:session): session closed for user root
Jan 20 15:50:49 compute-1 sudo[333766]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:50:49 compute-1 sudo[333766]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:50:49 compute-1 sudo[333766]: pam_unix(sudo:session): session closed for user root
Jan 20 15:50:49 compute-1 sudo[333791]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:50:49 compute-1 sudo[333791]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:50:49 compute-1 sudo[333791]: pam_unix(sudo:session): session closed for user root
Jan 20 15:50:49 compute-1 sudo[333816]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 20 15:50:49 compute-1 sudo[333816]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:50:49 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:50:49 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:50:49 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:50:49.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:50:50 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:50:50 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:50:50 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:50:50 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:50:50 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 20 15:50:50 compute-1 sudo[333816]: pam_unix(sudo:session): session closed for user root
Jan 20 15:50:50 compute-1 nova_compute[225855]: 2026-01-20 15:50:50.219 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:50:50 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:50:50 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:50:50 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:50:50.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:50:50 compute-1 nova_compute[225855]: 2026-01-20 15:50:50.947 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:50:51 compute-1 ceph-mon[81775]: pgmap v3859: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:50:51 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 20 15:50:51 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:50:51 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 15:50:51 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:50:51 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 15:50:51 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 15:50:51 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:50:51 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:50:51 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:50:51 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:50:51.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:50:52 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:50:52 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:50:52 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:50:52.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:50:53 compute-1 ceph-mon[81775]: pgmap v3860: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:50:53 compute-1 nova_compute[225855]: 2026-01-20 15:50:53.376 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:50:53 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:50:53 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:50:53 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:50:53.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:50:54 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:50:54 compute-1 ceph-mon[81775]: pgmap v3861: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:50:54 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:50:54 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:50:54 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:50:54.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:50:55 compute-1 podman[333875]: 2026-01-20 15:50:55.034422661 +0000 UTC m=+0.085265780 container health_status 533cd8e0c3b8b7f910f15cef49e7f879ab46c12ce646a35628c15f13725eb500 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Jan 20 15:50:55 compute-1 nova_compute[225855]: 2026-01-20 15:50:55.222 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:50:55 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:50:55 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:50:55 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:50:55.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:50:56 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:50:56 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:50:56 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:50:56.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:50:56 compute-1 sudo[333894]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:50:56 compute-1 sudo[333894]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:50:56 compute-1 sudo[333894]: pam_unix(sudo:session): session closed for user root
Jan 20 15:50:56 compute-1 sudo[333919]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 15:50:56 compute-1 sudo[333919]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:50:56 compute-1 sudo[333919]: pam_unix(sudo:session): session closed for user root
Jan 20 15:50:56 compute-1 ceph-mon[81775]: pgmap v3862: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:50:56 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:50:56 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:50:57 compute-1 sudo[333944]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:50:57 compute-1 sudo[333944]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:50:57 compute-1 sudo[333944]: pam_unix(sudo:session): session closed for user root
Jan 20 15:50:57 compute-1 sudo[333969]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:50:57 compute-1 sudo[333969]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:50:57 compute-1 sudo[333969]: pam_unix(sudo:session): session closed for user root
Jan 20 15:50:57 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:50:57 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:50:57 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:50:57.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:50:58 compute-1 nova_compute[225855]: 2026-01-20 15:50:58.410 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:50:58 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:50:58 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:50:58 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:50:58.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:50:58 compute-1 ceph-mon[81775]: pgmap v3863: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:50:59 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:50:59 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:50:59 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:50:59 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:50:59.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:51:00 compute-1 nova_compute[225855]: 2026-01-20 15:51:00.225 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:51:00 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:51:00 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:51:00 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:51:00.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:51:00 compute-1 ceph-mon[81775]: pgmap v3864: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:51:01 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:51:01 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:51:01 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:51:01.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:51:02 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:51:02 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:51:02 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:51:02.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:51:02 compute-1 ceph-mon[81775]: pgmap v3865: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:51:03 compute-1 nova_compute[225855]: 2026-01-20 15:51:03.413 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:51:03 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:51:03 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:51:03 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:51:03.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:51:04 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:51:04 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:51:04 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:51:04 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:51:04.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:51:04 compute-1 ceph-mon[81775]: pgmap v3866: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:51:05 compute-1 nova_compute[225855]: 2026-01-20 15:51:05.227 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:51:05 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:51:05 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:51:05 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:51:05.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:51:06 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:51:06 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:51:06 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:51:06.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:51:06 compute-1 ceph-mon[81775]: pgmap v3867: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:51:07 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:51:07 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:51:07 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:51:07.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:51:08 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:51:08 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:51:08 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:51:08.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:51:08 compute-1 nova_compute[225855]: 2026-01-20 15:51:08.453 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:51:09 compute-1 ceph-mon[81775]: pgmap v3868: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:51:09 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:51:09 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:51:09 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:51:09 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:51:09.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:51:10 compute-1 nova_compute[225855]: 2026-01-20 15:51:10.229 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:51:10 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:51:10 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:51:10 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:51:10.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:51:11 compute-1 ceph-mon[81775]: pgmap v3869: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:51:11 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:51:11 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:51:11 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:51:11.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:51:12 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:51:12 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:51:12 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:51:12.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:51:13 compute-1 ceph-mon[81775]: pgmap v3870: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:51:13 compute-1 nova_compute[225855]: 2026-01-20 15:51:13.510 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:51:13 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 20 15:51:13 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3537550131' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:51:13 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 20 15:51:13 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3537550131' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:51:13 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:51:13 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:51:13 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:51:13.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:51:14 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/3537550131' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:51:14 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/3537550131' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:51:14 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:51:14 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:51:14 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:51:14 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:51:14.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:51:15 compute-1 ceph-mon[81775]: pgmap v3871: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:51:15 compute-1 nova_compute[225855]: 2026-01-20 15:51:15.231 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:51:15 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:51:15 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:51:15 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:51:15.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:51:16 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:51:16 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:51:16 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:51:16.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:51:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:51:16.474 140354 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:51:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:51:16.474 140354 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:51:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:51:16.474 140354 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:51:16 compute-1 ceph-mon[81775]: pgmap v3872: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:51:17 compute-1 sudo[334004]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:51:17 compute-1 sudo[334004]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:51:17 compute-1 sudo[334004]: pam_unix(sudo:session): session closed for user root
Jan 20 15:51:17 compute-1 sudo[334030]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:51:17 compute-1 sudo[334030]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:51:17 compute-1 sudo[334030]: pam_unix(sudo:session): session closed for user root
Jan 20 15:51:17 compute-1 podman[334028]: 2026-01-20 15:51:17.42758877 +0000 UTC m=+0.109293958 container health_status 72be95c12e041eb2cd1a16ad70dfa1391d9f03a5f3180a614ead9459dcfcd46f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, managed_by=edpm_ansible)
Jan 20 15:51:17 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:51:17 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:51:17 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:51:17.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:51:18 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:51:18 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:51:18 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:51:18.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:51:18 compute-1 nova_compute[225855]: 2026-01-20 15:51:18.512 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:51:18 compute-1 ceph-mon[81775]: pgmap v3873: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:51:19 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:51:20 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:51:20 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:51:20 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:51:19.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:51:20 compute-1 nova_compute[225855]: 2026-01-20 15:51:20.233 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:51:20 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:51:20 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:51:20 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:51:20.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:51:20 compute-1 ceph-mon[81775]: pgmap v3874: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:51:22 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:51:22 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:51:22 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:51:22.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:51:22 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:51:22 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:51:22 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:51:22.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:51:22 compute-1 ceph-mon[81775]: pgmap v3875: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:51:23 compute-1 nova_compute[225855]: 2026-01-20 15:51:23.546 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:51:24 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:51:24 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:51:24 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:51:24.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:51:24 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:51:24 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:51:24 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:51:24 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:51:24.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:51:24 compute-1 ceph-mon[81775]: pgmap v3876: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:51:25 compute-1 nova_compute[225855]: 2026-01-20 15:51:25.236 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:51:26 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:51:26 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:51:26 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:51:26.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:51:26 compute-1 podman[334085]: 2026-01-20 15:51:26.053805609 +0000 UTC m=+0.085213019 container health_status 533cd8e0c3b8b7f910f15cef49e7f879ab46c12ce646a35628c15f13725eb500 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 20 15:51:26 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:51:26 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:51:26 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:51:26.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:51:26 compute-1 ceph-mon[81775]: pgmap v3877: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:51:28 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:51:28 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:51:28 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:51:28.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:51:28 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:51:28 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:51:28 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:51:28.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:51:28 compute-1 nova_compute[225855]: 2026-01-20 15:51:28.549 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:51:28 compute-1 ceph-mon[81775]: pgmap v3878: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:51:29 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:51:30 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:51:30 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:51:30 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:51:30.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:51:30 compute-1 nova_compute[225855]: 2026-01-20 15:51:30.237 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:51:30 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:51:30 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:51:30 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:51:30.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:51:30 compute-1 ceph-mon[81775]: pgmap v3879: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:51:32 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:51:32 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:51:32 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:51:32.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:51:32 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:51:32 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:51:32 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:51:32.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:51:32 compute-1 ceph-mon[81775]: pgmap v3880: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:51:32 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/3335256768' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:51:33 compute-1 nova_compute[225855]: 2026-01-20 15:51:33.343 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:51:33 compute-1 nova_compute[225855]: 2026-01-20 15:51:33.344 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 15:51:33 compute-1 nova_compute[225855]: 2026-01-20 15:51:33.344 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 20 15:51:33 compute-1 nova_compute[225855]: 2026-01-20 15:51:33.359 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 20 15:51:33 compute-1 nova_compute[225855]: 2026-01-20 15:51:33.359 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:51:33 compute-1 nova_compute[225855]: 2026-01-20 15:51:33.552 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:51:33 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/814707366' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:51:34 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:51:34 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:51:34 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:51:34.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:51:34 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:51:34 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:51:34 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:51:34 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:51:34.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:51:34 compute-1 ceph-mon[81775]: pgmap v3881: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:51:34 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/3208351070' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:51:35 compute-1 nova_compute[225855]: 2026-01-20 15:51:35.240 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:51:36 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/1180192974' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:51:36 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:51:36 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:51:36 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:51:36.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:51:36 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:51:36 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:51:36 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:51:36.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:51:37 compute-1 ceph-mon[81775]: pgmap v3882: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:51:37 compute-1 sudo[334110]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:51:37 compute-1 sudo[334110]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:51:37 compute-1 sudo[334110]: pam_unix(sudo:session): session closed for user root
Jan 20 15:51:37 compute-1 sudo[334135]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:51:37 compute-1 sudo[334135]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:51:37 compute-1 sudo[334135]: pam_unix(sudo:session): session closed for user root
Jan 20 15:51:38 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:51:38 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:51:38 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:51:38.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:51:38 compute-1 nova_compute[225855]: 2026-01-20 15:51:38.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:51:38 compute-1 nova_compute[225855]: 2026-01-20 15:51:38.340 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:51:38 compute-1 nova_compute[225855]: 2026-01-20 15:51:38.340 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:51:38 compute-1 nova_compute[225855]: 2026-01-20 15:51:38.340 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 15:51:38 compute-1 nova_compute[225855]: 2026-01-20 15:51:38.340 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:51:38 compute-1 nova_compute[225855]: 2026-01-20 15:51:38.434 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:51:38 compute-1 nova_compute[225855]: 2026-01-20 15:51:38.435 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:51:38 compute-1 nova_compute[225855]: 2026-01-20 15:51:38.435 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:51:38 compute-1 nova_compute[225855]: 2026-01-20 15:51:38.435 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 15:51:38 compute-1 nova_compute[225855]: 2026-01-20 15:51:38.436 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:51:38 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:51:38 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:51:38 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:51:38.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:51:38 compute-1 nova_compute[225855]: 2026-01-20 15:51:38.555 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:51:38 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:51:38 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/701871106' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:51:38 compute-1 nova_compute[225855]: 2026-01-20 15:51:38.873 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:51:39 compute-1 ceph-mon[81775]: pgmap v3883: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:51:39 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/701871106' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:51:39 compute-1 nova_compute[225855]: 2026-01-20 15:51:39.068 225859 WARNING nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 15:51:39 compute-1 nova_compute[225855]: 2026-01-20 15:51:39.069 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4259MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 15:51:39 compute-1 nova_compute[225855]: 2026-01-20 15:51:39.070 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:51:39 compute-1 nova_compute[225855]: 2026-01-20 15:51:39.070 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:51:39 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:51:39 compute-1 nova_compute[225855]: 2026-01-20 15:51:39.230 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 15:51:39 compute-1 nova_compute[225855]: 2026-01-20 15:51:39.231 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 15:51:39 compute-1 nova_compute[225855]: 2026-01-20 15:51:39.255 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:51:39 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:51:39 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1389155706' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:51:39 compute-1 nova_compute[225855]: 2026-01-20 15:51:39.686 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:51:39 compute-1 nova_compute[225855]: 2026-01-20 15:51:39.693 225859 DEBUG nova.compute.provider_tree [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:51:39 compute-1 nova_compute[225855]: 2026-01-20 15:51:39.851 225859 DEBUG nova.scheduler.client.report [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:51:39 compute-1 nova_compute[225855]: 2026-01-20 15:51:39.854 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 15:51:39 compute-1 nova_compute[225855]: 2026-01-20 15:51:39.854 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.784s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:51:40 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:51:40 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:51:40 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:51:40.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:51:40 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/1389155706' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:51:40 compute-1 nova_compute[225855]: 2026-01-20 15:51:40.241 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:51:40 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:51:40 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:51:40 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:51:40.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:51:40 compute-1 nova_compute[225855]: 2026-01-20 15:51:40.854 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:51:40 compute-1 nova_compute[225855]: 2026-01-20 15:51:40.855 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:51:41 compute-1 ceph-mon[81775]: pgmap v3884: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:51:42 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:51:42 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:51:42 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:51:42.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:51:42 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:51:42 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:51:42 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:51:42.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:51:43 compute-1 ceph-mon[81775]: pgmap v3885: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:51:43 compute-1 nova_compute[225855]: 2026-01-20 15:51:43.559 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:51:44 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:51:44 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:51:44 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:51:44.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:51:44 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:51:44 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:51:44 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:51:44 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:51:44.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:51:45 compute-1 ceph-mon[81775]: pgmap v3886: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:51:45 compute-1 nova_compute[225855]: 2026-01-20 15:51:45.243 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:51:45 compute-1 nova_compute[225855]: 2026-01-20 15:51:45.335 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:51:46 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:51:46 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.002000057s ======
Jan 20 15:51:46 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:51:46.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000057s
Jan 20 15:51:46 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:51:46 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:51:46 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:51:46.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:51:47 compute-1 ceph-mon[81775]: pgmap v3887: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:51:48 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:51:48 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:51:48 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:51:48.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:51:48 compute-1 podman[334211]: 2026-01-20 15:51:48.091969576 +0000 UTC m=+0.129221023 container health_status 72be95c12e041eb2cd1a16ad70dfa1391d9f03a5f3180a614ead9459dcfcd46f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Jan 20 15:51:48 compute-1 ceph-mon[81775]: pgmap v3888: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:51:48 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:51:48 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:51:48 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:51:48.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:51:48 compute-1 nova_compute[225855]: 2026-01-20 15:51:48.560 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:51:49 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:51:50 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:51:50 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:51:50 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:51:50.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:51:50 compute-1 nova_compute[225855]: 2026-01-20 15:51:50.245 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:51:50 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:51:50 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:51:50 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:51:50.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:51:50 compute-1 ceph-mon[81775]: pgmap v3889: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:51:52 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:51:52 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:51:52 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:51:52.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:51:52 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:51:52 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:51:52 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:51:52.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:51:52 compute-1 ceph-mon[81775]: pgmap v3890: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:51:53 compute-1 nova_compute[225855]: 2026-01-20 15:51:53.563 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:51:54 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:51:54 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:51:54 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:51:54.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:51:54 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:51:54 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:51:54 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:51:54 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:51:54.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:51:54 compute-1 ceph-mon[81775]: pgmap v3891: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:51:55 compute-1 nova_compute[225855]: 2026-01-20 15:51:55.247 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:51:56 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:51:56 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:51:56 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:51:56.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:51:56 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:51:56 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:51:56 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:51:56.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:51:56 compute-1 ceph-mon[81775]: pgmap v3892: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:51:57 compute-1 podman[334241]: 2026-01-20 15:51:57.020718714 +0000 UTC m=+0.065244265 container health_status 533cd8e0c3b8b7f910f15cef49e7f879ab46c12ce646a35628c15f13725eb500 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 15:51:57 compute-1 sudo[334242]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:51:57 compute-1 sudo[334242]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:51:57 compute-1 sudo[334242]: pam_unix(sudo:session): session closed for user root
Jan 20 15:51:57 compute-1 sudo[334285]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:51:57 compute-1 sudo[334285]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:51:57 compute-1 sudo[334285]: pam_unix(sudo:session): session closed for user root
Jan 20 15:51:57 compute-1 sudo[334311]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:51:57 compute-1 sudo[334311]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:51:57 compute-1 sudo[334311]: pam_unix(sudo:session): session closed for user root
Jan 20 15:51:57 compute-1 sudo[334336]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 20 15:51:57 compute-1 sudo[334336]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:51:57 compute-1 sudo[334374]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:51:57 compute-1 sudo[334374]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:51:57 compute-1 sudo[334374]: pam_unix(sudo:session): session closed for user root
Jan 20 15:51:57 compute-1 sudo[334400]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:51:57 compute-1 sudo[334400]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:51:57 compute-1 sudo[334400]: pam_unix(sudo:session): session closed for user root
Jan 20 15:51:57 compute-1 sudo[334336]: pam_unix(sudo:session): session closed for user root
Jan 20 15:51:58 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:51:58 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:51:58 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:51:58.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:51:58 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:51:58 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:51:58 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:51:58.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:51:58 compute-1 nova_compute[225855]: 2026-01-20 15:51:58.568 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:51:58 compute-1 ceph-mon[81775]: pgmap v3893: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:51:58 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 20 15:51:58 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:51:58 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 15:51:58 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:51:58 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 15:51:58 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 15:51:58 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:51:59 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:52:00 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:52:00 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:52:00 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:52:00.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:52:00 compute-1 nova_compute[225855]: 2026-01-20 15:52:00.251 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:52:00 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:52:00 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:52:00 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:52:00.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:52:00 compute-1 ceph-mon[81775]: pgmap v3894: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:52:02 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:52:02 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:52:02 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:52:02.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:52:02 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:52:02 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:52:02 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:52:02.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:52:03 compute-1 ceph-mon[81775]: pgmap v3895: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:52:03 compute-1 nova_compute[225855]: 2026-01-20 15:52:03.583 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:52:03 compute-1 sudo[334445]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:52:03 compute-1 sudo[334445]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:52:03 compute-1 sudo[334445]: pam_unix(sudo:session): session closed for user root
Jan 20 15:52:04 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:52:04 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:52:04 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:52:04.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:52:04 compute-1 sudo[334470]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 15:52:04 compute-1 sudo[334470]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:52:04 compute-1 sudo[334470]: pam_unix(sudo:session): session closed for user root
Jan 20 15:52:04 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:52:04 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:52:04 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:52:04 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:52:04.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:52:04 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:52:04 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:52:04 compute-1 ceph-mon[81775]: pgmap v3896: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:52:05 compute-1 nova_compute[225855]: 2026-01-20 15:52:05.253 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:52:05 compute-1 nova_compute[225855]: 2026-01-20 15:52:05.334 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:52:06 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:52:06 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:52:06 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:52:06.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:52:06 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:52:06 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:52:06 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:52:06.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:52:06 compute-1 ceph-mon[81775]: pgmap v3897: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:52:08 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:52:08 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:52:08 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:52:08.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:52:08 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:52:08 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:52:08 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:52:08.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:52:08 compute-1 nova_compute[225855]: 2026-01-20 15:52:08.585 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:52:08 compute-1 ceph-mon[81775]: pgmap v3898: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:52:09 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:52:10 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:52:10 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:52:10 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:52:10.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:52:10 compute-1 nova_compute[225855]: 2026-01-20 15:52:10.254 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:52:10 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:52:10 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:52:10 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:52:10.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:52:10 compute-1 ceph-mon[81775]: pgmap v3899: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:52:12 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:52:12 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:52:12 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:52:12.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:52:12 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:52:12 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:52:12 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:52:12.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:52:12 compute-1 ceph-mon[81775]: pgmap v3900: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:52:13 compute-1 nova_compute[225855]: 2026-01-20 15:52:13.588 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:52:14 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/1897007487' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:52:14 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/1897007487' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:52:14 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:52:14 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:52:14 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:52:14.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:52:14 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:52:14 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:52:14 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:52:14 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:52:14.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:52:15 compute-1 ceph-mon[81775]: pgmap v3901: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:52:15 compute-1 nova_compute[225855]: 2026-01-20 15:52:15.257 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:52:16 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:52:16 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:52:16 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:52:16.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:52:16 compute-1 ceph-mon[81775]: pgmap v3902: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:52:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:52:16.475 140354 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:52:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:52:16.475 140354 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:52:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:52:16.476 140354 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:52:16 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:52:16 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:52:16 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:52:16.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:52:17 compute-1 sudo[334501]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:52:17 compute-1 sudo[334501]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:52:17 compute-1 sudo[334501]: pam_unix(sudo:session): session closed for user root
Jan 20 15:52:17 compute-1 sudo[334526]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:52:17 compute-1 sudo[334526]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:52:17 compute-1 sudo[334526]: pam_unix(sudo:session): session closed for user root
Jan 20 15:52:18 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:52:18 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:52:18 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:52:18.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:52:18 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:52:18 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:52:18 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:52:18.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:52:18 compute-1 nova_compute[225855]: 2026-01-20 15:52:18.591 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:52:18 compute-1 ceph-mon[81775]: pgmap v3903: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:52:19 compute-1 podman[334552]: 2026-01-20 15:52:19.089830177 +0000 UTC m=+0.126619399 container health_status 72be95c12e041eb2cd1a16ad70dfa1391d9f03a5f3180a614ead9459dcfcd46f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, managed_by=edpm_ansible, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 15:52:19 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:52:20 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:52:20 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:52:20 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:52:20.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:52:20 compute-1 nova_compute[225855]: 2026-01-20 15:52:20.259 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:52:20 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:52:20 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:52:20 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:52:20.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:52:21 compute-1 ceph-mon[81775]: pgmap v3904: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:52:22 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:52:22 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:52:22 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:52:22.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:52:22 compute-1 ceph-mon[81775]: pgmap v3905: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:52:22 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:52:22 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:52:22 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:52:22.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:52:23 compute-1 nova_compute[225855]: 2026-01-20 15:52:23.594 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:52:24 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:52:24 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:52:24 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:52:24.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:52:24 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:52:24 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:52:24 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:52:24 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:52:24.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:52:25 compute-1 ceph-mon[81775]: pgmap v3906: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:52:25 compute-1 nova_compute[225855]: 2026-01-20 15:52:25.262 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:52:26 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:52:26 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:52:26 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:52:26.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:52:26 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:52:26 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:52:26 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:52:26.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:52:27 compute-1 ceph-mon[81775]: pgmap v3907: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:52:28 compute-1 podman[334585]: 2026-01-20 15:52:28.009341635 +0000 UTC m=+0.055661044 container health_status 533cd8e0c3b8b7f910f15cef49e7f879ab46c12ce646a35628c15f13725eb500 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Jan 20 15:52:28 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:52:28 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:52:28 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:52:28.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:52:28 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:52:28 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:52:28 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:52:28.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:52:28 compute-1 nova_compute[225855]: 2026-01-20 15:52:28.598 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:52:29 compute-1 ceph-mon[81775]: pgmap v3908: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:52:29 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:52:30 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:52:30 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:52:30 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:52:30.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:52:30 compute-1 nova_compute[225855]: 2026-01-20 15:52:30.263 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:52:30 compute-1 ceph-mon[81775]: pgmap v3909: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:52:30 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:52:30 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:52:30 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:52:30.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:52:32 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:52:32 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:52:32 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:52:32.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:52:32 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:52:32 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.002000056s ======
Jan 20 15:52:32 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:52:32.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000056s
Jan 20 15:52:32 compute-1 ceph-mon[81775]: pgmap v3910: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:52:32 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/2680723541' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:52:33 compute-1 nova_compute[225855]: 2026-01-20 15:52:33.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:52:33 compute-1 nova_compute[225855]: 2026-01-20 15:52:33.601 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:52:33 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/3402365577' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:52:34 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:52:34 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:52:34 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:52:34.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:52:34 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:52:34 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:52:34 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:52:34 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:52:34.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:52:34 compute-1 ceph-mon[81775]: pgmap v3911: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:52:35 compute-1 nova_compute[225855]: 2026-01-20 15:52:35.299 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:52:35 compute-1 nova_compute[225855]: 2026-01-20 15:52:35.340 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:52:35 compute-1 nova_compute[225855]: 2026-01-20 15:52:35.340 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 15:52:35 compute-1 nova_compute[225855]: 2026-01-20 15:52:35.340 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 20 15:52:35 compute-1 nova_compute[225855]: 2026-01-20 15:52:35.362 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 20 15:52:36 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:52:36 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:52:36 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:52:36.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:52:36 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:52:36 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:52:36 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:52:36.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:52:36 compute-1 ceph-mon[81775]: pgmap v3912: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:52:36 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/4065801336' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:52:37 compute-1 sudo[334610]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:52:37 compute-1 sudo[334610]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:52:37 compute-1 sudo[334610]: pam_unix(sudo:session): session closed for user root
Jan 20 15:52:37 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/579735090' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:52:38 compute-1 sudo[334635]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:52:38 compute-1 sudo[334635]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:52:38 compute-1 sudo[334635]: pam_unix(sudo:session): session closed for user root
Jan 20 15:52:38 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:52:38 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:52:38 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:52:38.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:52:38 compute-1 nova_compute[225855]: 2026-01-20 15:52:38.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:52:38 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:52:38 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:52:38 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:52:38.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:52:38 compute-1 nova_compute[225855]: 2026-01-20 15:52:38.605 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:52:38 compute-1 ceph-mon[81775]: pgmap v3913: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:52:39 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:52:39 compute-1 nova_compute[225855]: 2026-01-20 15:52:39.340 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:52:40 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:52:40 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:52:40 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:52:40.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:52:40 compute-1 nova_compute[225855]: 2026-01-20 15:52:40.300 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:52:40 compute-1 nova_compute[225855]: 2026-01-20 15:52:40.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:52:40 compute-1 nova_compute[225855]: 2026-01-20 15:52:40.340 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 15:52:40 compute-1 nova_compute[225855]: 2026-01-20 15:52:40.341 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:52:40 compute-1 nova_compute[225855]: 2026-01-20 15:52:40.374 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:52:40 compute-1 nova_compute[225855]: 2026-01-20 15:52:40.374 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:52:40 compute-1 nova_compute[225855]: 2026-01-20 15:52:40.375 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:52:40 compute-1 nova_compute[225855]: 2026-01-20 15:52:40.375 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 15:52:40 compute-1 nova_compute[225855]: 2026-01-20 15:52:40.375 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:52:40 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:52:40 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:52:40 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:52:40.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:52:40 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:52:40 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1369305377' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:52:40 compute-1 nova_compute[225855]: 2026-01-20 15:52:40.852 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:52:41 compute-1 ceph-mon[81775]: pgmap v3914: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:52:41 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/1369305377' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:52:41 compute-1 nova_compute[225855]: 2026-01-20 15:52:41.027 225859 WARNING nova.virt.libvirt.driver [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 15:52:41 compute-1 nova_compute[225855]: 2026-01-20 15:52:41.029 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4251MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 15:52:41 compute-1 nova_compute[225855]: 2026-01-20 15:52:41.030 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:52:41 compute-1 nova_compute[225855]: 2026-01-20 15:52:41.031 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:52:41 compute-1 nova_compute[225855]: 2026-01-20 15:52:41.089 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 15:52:41 compute-1 nova_compute[225855]: 2026-01-20 15:52:41.090 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 15:52:41 compute-1 nova_compute[225855]: 2026-01-20 15:52:41.115 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:52:41 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:52:41 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/405040981' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:52:41 compute-1 nova_compute[225855]: 2026-01-20 15:52:41.681 225859 DEBUG oslo_concurrency.processutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.566s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:52:41 compute-1 nova_compute[225855]: 2026-01-20 15:52:41.688 225859 DEBUG nova.compute.provider_tree [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Inventory has not changed in ProviderTree for provider: bbb02880-a710-4ac1-8b2c-5c09765848d1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:52:41 compute-1 nova_compute[225855]: 2026-01-20 15:52:41.845 225859 DEBUG nova.scheduler.client.report [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Inventory has not changed for provider bbb02880-a710-4ac1-8b2c-5c09765848d1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:52:41 compute-1 nova_compute[225855]: 2026-01-20 15:52:41.847 225859 DEBUG nova.compute.resource_tracker [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 15:52:41 compute-1 nova_compute[225855]: 2026-01-20 15:52:41.847 225859 DEBUG oslo_concurrency.lockutils [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.817s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:52:42 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:52:42 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:52:42 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:52:42.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:52:42 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/405040981' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:52:42 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:52:42 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:52:42 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:52:42.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:52:42 compute-1 nova_compute[225855]: 2026-01-20 15:52:42.848 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:52:42 compute-1 nova_compute[225855]: 2026-01-20 15:52:42.849 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:52:43 compute-1 ceph-mon[81775]: pgmap v3915: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:52:43 compute-1 nova_compute[225855]: 2026-01-20 15:52:43.609 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:52:44 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:52:44 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:52:44 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:52:44.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:52:44 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:52:44 compute-1 ceph-mon[81775]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #193. Immutable memtables: 0.
Jan 20 15:52:44 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:52:44.459587) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 20 15:52:44 compute-1 ceph-mon[81775]: rocksdb: [db/flush_job.cc:856] [default] [JOB 123] Flushing memtable with next log file: 193
Jan 20 15:52:44 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768924364459682, "job": 123, "event": "flush_started", "num_memtables": 1, "num_entries": 2347, "num_deletes": 251, "total_data_size": 5826467, "memory_usage": 5894800, "flush_reason": "Manual Compaction"}
Jan 20 15:52:44 compute-1 ceph-mon[81775]: rocksdb: [db/flush_job.cc:885] [default] [JOB 123] Level-0 flush table #194: started
Jan 20 15:52:44 compute-1 ceph-mon[81775]: pgmap v3916: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:52:44 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768924364498538, "cf_name": "default", "job": 123, "event": "table_file_creation", "file_number": 194, "file_size": 3812952, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 92350, "largest_seqno": 94692, "table_properties": {"data_size": 3803442, "index_size": 6003, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2437, "raw_key_size": 19248, "raw_average_key_size": 20, "raw_value_size": 3784588, "raw_average_value_size": 3992, "num_data_blocks": 262, "num_entries": 948, "num_filter_entries": 948, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768924145, "oldest_key_time": 1768924145, "file_creation_time": 1768924364, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1539d774-8a6f-4e48-b253-137c44586344", "db_session_id": "LFF7G2OZDOU7TKQ8MKAH", "orig_file_number": 194, "seqno_to_time_mapping": "N/A"}}
Jan 20 15:52:44 compute-1 ceph-mon[81775]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 123] Flush lasted 39001 microseconds, and 7615 cpu microseconds.
Jan 20 15:52:44 compute-1 ceph-mon[81775]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 15:52:44 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:52:44.498581) [db/flush_job.cc:967] [default] [JOB 123] Level-0 flush table #194: 3812952 bytes OK
Jan 20 15:52:44 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:52:44.498606) [db/memtable_list.cc:519] [default] Level-0 commit table #194 started
Jan 20 15:52:44 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:52:44.500048) [db/memtable_list.cc:722] [default] Level-0 commit table #194: memtable #1 done
Jan 20 15:52:44 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:52:44.500060) EVENT_LOG_v1 {"time_micros": 1768924364500056, "job": 123, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 20 15:52:44 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:52:44.500080) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 20 15:52:44 compute-1 ceph-mon[81775]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 123] Try to delete WAL files size 5816207, prev total WAL file size 5816207, number of live WAL files 2.
Jan 20 15:52:44 compute-1 ceph-mon[81775]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000190.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 15:52:44 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:52:44.501399) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730038323833' seq:72057594037927935, type:22 .. '7061786F730038353335' seq:0, type:0; will stop at (end)
Jan 20 15:52:44 compute-1 ceph-mon[81775]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 124] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 20 15:52:44 compute-1 ceph-mon[81775]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 123 Base level 0, inputs: [194(3723KB)], [192(11MB)]
Jan 20 15:52:44 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768924364501460, "job": 124, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [194], "files_L6": [192], "score": -1, "input_data_size": 16204481, "oldest_snapshot_seqno": -1}
Jan 20 15:52:44 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:52:44 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:52:44 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:52:44.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:52:44 compute-1 ceph-mon[81775]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 124] Generated table #195: 11379 keys, 14207870 bytes, temperature: kUnknown
Jan 20 15:52:44 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768924364645302, "cf_name": "default", "job": 124, "event": "table_file_creation", "file_number": 195, "file_size": 14207870, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14135506, "index_size": 42811, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 28485, "raw_key_size": 299918, "raw_average_key_size": 26, "raw_value_size": 13937506, "raw_average_value_size": 1224, "num_data_blocks": 1628, "num_entries": 11379, "num_filter_entries": 11379, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768917474, "oldest_key_time": 0, "file_creation_time": 1768924364, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1539d774-8a6f-4e48-b253-137c44586344", "db_session_id": "LFF7G2OZDOU7TKQ8MKAH", "orig_file_number": 195, "seqno_to_time_mapping": "N/A"}}
Jan 20 15:52:44 compute-1 ceph-mon[81775]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 15:52:44 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:52:44.645558) [db/compaction/compaction_job.cc:1663] [default] [JOB 124] Compacted 1@0 + 1@6 files to L6 => 14207870 bytes
Jan 20 15:52:44 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:52:44.668665) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 112.6 rd, 98.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.6, 11.8 +0.0 blob) out(13.5 +0.0 blob), read-write-amplify(8.0) write-amplify(3.7) OK, records in: 11898, records dropped: 519 output_compression: NoCompression
Jan 20 15:52:44 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:52:44.668709) EVENT_LOG_v1 {"time_micros": 1768924364668692, "job": 124, "event": "compaction_finished", "compaction_time_micros": 143931, "compaction_time_cpu_micros": 32264, "output_level": 6, "num_output_files": 1, "total_output_size": 14207870, "num_input_records": 11898, "num_output_records": 11379, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 20 15:52:44 compute-1 ceph-mon[81775]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000194.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 15:52:44 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768924364669764, "job": 124, "event": "table_file_deletion", "file_number": 194}
Jan 20 15:52:44 compute-1 ceph-mon[81775]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000192.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 15:52:44 compute-1 ceph-mon[81775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768924364672596, "job": 124, "event": "table_file_deletion", "file_number": 192}
Jan 20 15:52:44 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:52:44.501248) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:52:44 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:52:44.672672) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:52:44 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:52:44.672677) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:52:44 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:52:44.672678) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:52:44 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:52:44.672680) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:52:44 compute-1 ceph-mon[81775]: rocksdb: (Original Log Time 2026/01/20-15:52:44.672681) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:52:45 compute-1 nova_compute[225855]: 2026-01-20 15:52:45.302 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:52:46 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:52:46 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:52:46 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:52:46.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:52:46 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:52:46 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:52:46 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:52:46.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:52:47 compute-1 ceph-mon[81775]: pgmap v3917: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:52:47 compute-1 nova_compute[225855]: 2026-01-20 15:52:47.337 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:52:48 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:52:48 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:52:48 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:52:48.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:52:48 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:52:48 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:52:48 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:52:48.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:52:48 compute-1 nova_compute[225855]: 2026-01-20 15:52:48.614 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:52:48 compute-1 ceph-mon[81775]: pgmap v3918: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:52:49 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:52:50 compute-1 podman[334710]: 2026-01-20 15:52:50.101103859 +0000 UTC m=+0.142437837 container health_status 72be95c12e041eb2cd1a16ad70dfa1391d9f03a5f3180a614ead9459dcfcd46f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, config_id=ovn_controller)
Jan 20 15:52:50 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:52:50 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:52:50 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:52:50.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:52:50 compute-1 nova_compute[225855]: 2026-01-20 15:52:50.304 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:52:50 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:52:50 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:52:50 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:52:50.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:52:51 compute-1 ceph-mon[81775]: pgmap v3919: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:52:52 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:52:52 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:52:52 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:52:52.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:52:52 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:52:52 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:52:52 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:52:52.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:52:53 compute-1 ceph-mon[81775]: pgmap v3920: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:52:53 compute-1 nova_compute[225855]: 2026-01-20 15:52:53.616 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:52:54 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:52:54 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:52:54 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:52:54.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:52:54 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:52:54 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:52:54 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:52:54 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:52:54.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:52:54 compute-1 ceph-mon[81775]: pgmap v3921: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:52:55 compute-1 nova_compute[225855]: 2026-01-20 15:52:55.306 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:52:56 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:52:56 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:52:56 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:52:56.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:52:56 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:52:56 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:52:56 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:52:56.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:52:56 compute-1 ceph-mon[81775]: pgmap v3922: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:52:58 compute-1 sudo[334742]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:52:58 compute-1 sudo[334742]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:52:58 compute-1 sudo[334742]: pam_unix(sudo:session): session closed for user root
Jan 20 15:52:58 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:52:58 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:52:58 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:52:58.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:52:58 compute-1 podman[334766]: 2026-01-20 15:52:58.210729817 +0000 UTC m=+0.076468562 container health_status 533cd8e0c3b8b7f910f15cef49e7f879ab46c12ce646a35628c15f13725eb500 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 20 15:52:58 compute-1 sudo[334773]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:52:58 compute-1 sudo[334773]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:52:58 compute-1 sudo[334773]: pam_unix(sudo:session): session closed for user root
Jan 20 15:52:58 compute-1 nova_compute[225855]: 2026-01-20 15:52:58.620 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:52:58 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:52:58 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:52:58 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:52:58.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:52:58 compute-1 ceph-mon[81775]: pgmap v3923: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:52:59 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:53:00 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:53:00 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:53:00 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:53:00.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:53:00 compute-1 nova_compute[225855]: 2026-01-20 15:53:00.309 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:53:00 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:53:00 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:53:00 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:53:00.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:53:00 compute-1 ceph-mon[81775]: pgmap v3924: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:53:01 compute-1 sshd-session[334812]: Accepted publickey for zuul from 192.168.122.10 port 33522 ssh2: ECDSA SHA256:Yw0kyD5N4lqNgr1J3b5cYIIxKFrTRY8zW6kk+n6imz4
Jan 20 15:53:01 compute-1 systemd-logind[783]: New session 75 of user zuul.
Jan 20 15:53:01 compute-1 systemd[1]: Started Session 75 of User zuul.
Jan 20 15:53:01 compute-1 sshd-session[334812]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 20 15:53:02 compute-1 sudo[334817]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/bash -c 'rm -rf /var/tmp/sos-osp && mkdir /var/tmp/sos-osp && sos report --batch --all-logs --tmp-dir=/var/tmp/sos-osp  -p container,openstack_edpm,system,storage,virt'
Jan 20 15:53:02 compute-1 sudo[334817]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 15:53:02 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:53:02 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:53:02 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:53:02.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:53:02 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:53:02 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:53:02 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:53:02.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:53:02 compute-1 ceph-mon[81775]: pgmap v3925: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:53:03 compute-1 nova_compute[225855]: 2026-01-20 15:53:03.623 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:53:04 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:53:04 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:53:04 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:53:04.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:53:04 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:53:04 compute-1 sudo[334928]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:53:04 compute-1 sudo[334928]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:53:04 compute-1 sudo[334928]: pam_unix(sudo:session): session closed for user root
Jan 20 15:53:04 compute-1 sudo[334959]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:53:04 compute-1 sudo[334959]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:53:04 compute-1 sudo[334959]: pam_unix(sudo:session): session closed for user root
Jan 20 15:53:04 compute-1 sudo[334993]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:53:04 compute-1 sudo[334993]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:53:04 compute-1 sudo[334993]: pam_unix(sudo:session): session closed for user root
Jan 20 15:53:04 compute-1 sudo[335031]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Jan 20 15:53:04 compute-1 sudo[335031]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:53:04 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:53:04 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:53:04 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:53:04.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:53:04 compute-1 ceph-mon[81775]: pgmap v3926: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:53:05 compute-1 podman[335162]: 2026-01-20 15:53:05.010336572 +0000 UTC m=+0.081529795 container exec 718ebba7a543e42aad7051248d2c7dc014068c35c89c5b87f27b82d4de39c009 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-crash-compute-1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 20 15:53:05 compute-1 podman[335162]: 2026-01-20 15:53:05.130488858 +0000 UTC m=+0.201682111 container exec_died 718ebba7a543e42aad7051248d2c7dc014068c35c89c5b87f27b82d4de39c009 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-crash-compute-1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 20 15:53:05 compute-1 nova_compute[225855]: 2026-01-20 15:53:05.310 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:53:05 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "status"} v 0) v1
Jan 20 15:53:05 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1223251623' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Jan 20 15:53:06 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:53:06 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:53:06 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:53:06.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:53:06 compute-1 podman[335371]: 2026-01-20 15:53:06.172790835 +0000 UTC m=+0.480730237 container exec 25e2c3387bc15944c21038272559da5fbf75910d8dd4add0faa995fb4e0f7788 (image=quay.io/ceph/haproxy:2.3, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-haproxy-rgw-default-compute-1-uyeocq)
Jan 20 15:53:06 compute-1 ceph-mon[81775]: from='client.37089 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 15:53:06 compute-1 ceph-mon[81775]: from='client.50426 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 15:53:06 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/2586220715' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Jan 20 15:53:06 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:53:06 compute-1 podman[335371]: 2026-01-20 15:53:06.212365144 +0000 UTC m=+0.520304496 container exec_died 25e2c3387bc15944c21038272559da5fbf75910d8dd4add0faa995fb4e0f7788 (image=quay.io/ceph/haproxy:2.3, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-haproxy-rgw-default-compute-1-uyeocq)
Jan 20 15:53:06 compute-1 podman[335462]: 2026-01-20 15:53:06.494857767 +0000 UTC m=+0.063582388 container exec e27b69e4cc956b06482c80498336e112a56122514cd7345d3d4b39a4d206f962 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-keepalived-rgw-default-compute-1-cevitz, com.redhat.component=keepalived-container, name=keepalived, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1793, vcs-type=git, version=2.2.4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.display-name=Keepalived on RHEL 9, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, build-date=2023-02-22T09:23:20, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=keepalived for Ceph, distribution-scope=public, io.openshift.expose-services=, io.openshift.tags=Ceph keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.buildah.version=1.28.2, summary=Provides keepalived on RHEL 9 for Ceph.)
Jan 20 15:53:06 compute-1 podman[335462]: 2026-01-20 15:53:06.510037106 +0000 UTC m=+0.078761687 container exec_died e27b69e4cc956b06482c80498336e112a56122514cd7345d3d4b39a4d206f962 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-keepalived-rgw-default-compute-1-cevitz, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, summary=Provides keepalived on RHEL 9 for Ceph., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, build-date=2023-02-22T09:23:20, io.openshift.tags=Ceph keepalived, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1793, architecture=x86_64, version=2.2.4, vcs-type=git, com.redhat.component=keepalived-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=keepalived for Ceph, io.buildah.version=1.28.2, io.openshift.expose-services=, vendor=Red Hat, Inc., io.k8s.display-name=Keepalived on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=keepalived)
Jan 20 15:53:06 compute-1 sudo[335031]: pam_unix(sudo:session): session closed for user root
Jan 20 15:53:06 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:53:06 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:53:06 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:53:06.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:53:06 compute-1 sudo[335502]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:53:06 compute-1 sudo[335502]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:53:06 compute-1 sudo[335502]: pam_unix(sudo:session): session closed for user root
Jan 20 15:53:06 compute-1 sudo[335527]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:53:06 compute-1 sudo[335527]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:53:06 compute-1 sudo[335527]: pam_unix(sudo:session): session closed for user root
Jan 20 15:53:06 compute-1 sudo[335552]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:53:06 compute-1 sudo[335552]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:53:06 compute-1 sudo[335552]: pam_unix(sudo:session): session closed for user root
Jan 20 15:53:06 compute-1 sudo[335577]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 20 15:53:06 compute-1 sudo[335577]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:53:07 compute-1 sudo[335577]: pam_unix(sudo:session): session closed for user root
Jan 20 15:53:07 compute-1 ceph-mon[81775]: from='client.37095 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 15:53:07 compute-1 ceph-mon[81775]: from='client.50435 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 15:53:07 compute-1 ceph-mon[81775]: pgmap v3927: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:53:07 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/1223251623' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Jan 20 15:53:07 compute-1 ceph-mon[81775]: from='client.49987 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 15:53:07 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:53:07 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:53:07 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:53:07 compute-1 ceph-mon[81775]: from='client.50450 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 15:53:07 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:53:07 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:53:07 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/3700717351' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Jan 20 15:53:08 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:53:08 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:53:08 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:53:08.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:53:08 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:53:08 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 15:53:08 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:53:08 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 15:53:08 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 15:53:08 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:53:08 compute-1 ceph-mon[81775]: pgmap v3928: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:53:08 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:53:08 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:53:08 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:53:08.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:53:08 compute-1 nova_compute[225855]: 2026-01-20 15:53:08.668 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:53:09 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:53:09 compute-1 ovs-vsctl[335664]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Jan 20 15:53:10 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:53:10 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:53:10 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:53:10.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:53:10 compute-1 nova_compute[225855]: 2026-01-20 15:53:10.311 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:53:10 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:53:10 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:53:10 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:53:10.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:53:10 compute-1 ceph-mon[81775]: pgmap v3929: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:53:11 compute-1 virtqemud[225396]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Jan 20 15:53:11 compute-1 virtqemud[225396]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Jan 20 15:53:11 compute-1 virtqemud[225396]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Jan 20 15:53:11 compute-1 ceph-mds[84722]: mds.cephfs.compute-1.rtofcx asok_command: cache status {prefix=cache status} (starting...)
Jan 20 15:53:11 compute-1 ceph-mds[84722]: mds.cephfs.compute-1.rtofcx Can't run that command on an inactive MDS!
Jan 20 15:53:11 compute-1 ceph-mds[84722]: mds.cephfs.compute-1.rtofcx asok_command: client ls {prefix=client ls} (starting...)
Jan 20 15:53:11 compute-1 ceph-mds[84722]: mds.cephfs.compute-1.rtofcx Can't run that command on an inactive MDS!
Jan 20 15:53:11 compute-1 lvm[335984]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 20 15:53:11 compute-1 lvm[335984]: VG ceph_vg0 finished
Jan 20 15:53:11 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/1569318041' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Jan 20 15:53:12 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:53:12 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:53:12 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:53:12.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:53:12 compute-1 ceph-mds[84722]: mds.cephfs.compute-1.rtofcx asok_command: damage ls {prefix=damage ls} (starting...)
Jan 20 15:53:12 compute-1 ceph-mds[84722]: mds.cephfs.compute-1.rtofcx Can't run that command on an inactive MDS!
Jan 20 15:53:12 compute-1 ceph-mds[84722]: mds.cephfs.compute-1.rtofcx asok_command: dump loads {prefix=dump loads} (starting...)
Jan 20 15:53:12 compute-1 ceph-mds[84722]: mds.cephfs.compute-1.rtofcx Can't run that command on an inactive MDS!
Jan 20 15:53:12 compute-1 ceph-mds[84722]: mds.cephfs.compute-1.rtofcx asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Jan 20 15:53:12 compute-1 ceph-mds[84722]: mds.cephfs.compute-1.rtofcx Can't run that command on an inactive MDS!
Jan 20 15:53:12 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "report"} v 0) v1
Jan 20 15:53:12 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3384103310' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Jan 20 15:53:12 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:53:12 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:53:12 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:53:12.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:53:12 compute-1 ceph-mds[84722]: mds.cephfs.compute-1.rtofcx asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Jan 20 15:53:12 compute-1 ceph-mds[84722]: mds.cephfs.compute-1.rtofcx Can't run that command on an inactive MDS!
Jan 20 15:53:12 compute-1 ceph-mds[84722]: mds.cephfs.compute-1.rtofcx asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Jan 20 15:53:12 compute-1 ceph-mds[84722]: mds.cephfs.compute-1.rtofcx Can't run that command on an inactive MDS!
Jan 20 15:53:12 compute-1 ceph-mon[81775]: from='client.37122 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 15:53:12 compute-1 ceph-mon[81775]: from='client.37134 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 15:53:12 compute-1 ceph-mon[81775]: pgmap v3930: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:53:12 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/3906255464' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:53:12 compute-1 ceph-mon[81775]: from='client.50008 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 15:53:12 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/4126405177' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Jan 20 15:53:12 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/4190772805' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Jan 20 15:53:12 compute-1 ceph-mon[81775]: from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Jan 20 15:53:12 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/3384103310' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Jan 20 15:53:12 compute-1 ceph-mon[81775]: from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Jan 20 15:53:12 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/1881459806' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Jan 20 15:53:12 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/3773467630' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Jan 20 15:53:12 compute-1 ceph-mds[84722]: mds.cephfs.compute-1.rtofcx asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Jan 20 15:53:12 compute-1 ceph-mds[84722]: mds.cephfs.compute-1.rtofcx Can't run that command on an inactive MDS!
Jan 20 15:53:13 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 15:53:13 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/620047485' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:53:13 compute-1 ceph-mds[84722]: mds.cephfs.compute-1.rtofcx asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Jan 20 15:53:13 compute-1 ceph-mds[84722]: mds.cephfs.compute-1.rtofcx Can't run that command on an inactive MDS!
Jan 20 15:53:13 compute-1 ceph-mds[84722]: mds.cephfs.compute-1.rtofcx asok_command: get subtrees {prefix=get subtrees} (starting...)
Jan 20 15:53:13 compute-1 ceph-mds[84722]: mds.cephfs.compute-1.rtofcx Can't run that command on an inactive MDS!
Jan 20 15:53:13 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config log"} v 0) v1
Jan 20 15:53:13 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3593184449' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Jan 20 15:53:13 compute-1 ceph-mds[84722]: mds.cephfs.compute-1.rtofcx asok_command: ops {prefix=ops} (starting...)
Jan 20 15:53:13 compute-1 ceph-mds[84722]: mds.cephfs.compute-1.rtofcx Can't run that command on an inactive MDS!
Jan 20 15:53:13 compute-1 nova_compute[225855]: 2026-01-20 15:53:13.676 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:53:13 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0) v1
Jan 20 15:53:13 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1882969061' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Jan 20 15:53:13 compute-1 sudo[336278]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:53:13 compute-1 sudo[336278]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:53:13 compute-1 sudo[336278]: pam_unix(sudo:session): session closed for user root
Jan 20 15:53:13 compute-1 ceph-mon[81775]: from='client.50468 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 15:53:13 compute-1 ceph-mon[81775]: from='client.37155 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 15:53:13 compute-1 ceph-mon[81775]: from='client.50020 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 15:53:13 compute-1 ceph-mon[81775]: from='client.50032 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 15:53:13 compute-1 sudo[336305]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 15:53:13 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/1906101247' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:53:13 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/620047485' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:53:13 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/1129415843' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Jan 20 15:53:13 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/713696760' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Jan 20 15:53:13 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/3593184449' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Jan 20 15:53:13 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:53:13 compute-1 ceph-mon[81775]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:53:13 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/4057764480' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Jan 20 15:53:13 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/1050385293' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:53:13 compute-1 ceph-mon[81775]: from='client.? 192.168.122.10:0/1050385293' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:53:13 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/1706995726' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Jan 20 15:53:13 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/3546405552' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Jan 20 15:53:13 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/1882969061' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Jan 20 15:53:13 compute-1 sudo[336305]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:53:13 compute-1 sudo[336305]: pam_unix(sudo:session): session closed for user root
Jan 20 15:53:13 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config-key dump"} v 0) v1
Jan 20 15:53:13 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4139919592' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Jan 20 15:53:14 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:53:14 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:53:14 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:53:14.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:53:14 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Jan 20 15:53:14 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/817991170' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Jan 20 15:53:14 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:53:14 compute-1 ceph-mds[84722]: mds.cephfs.compute-1.rtofcx asok_command: session ls {prefix=session ls} (starting...)
Jan 20 15:53:14 compute-1 ceph-mds[84722]: mds.cephfs.compute-1.rtofcx Can't run that command on an inactive MDS!
Jan 20 15:53:14 compute-1 ceph-mds[84722]: mds.cephfs.compute-1.rtofcx asok_command: status {prefix=status} (starting...)
Jan 20 15:53:14 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Jan 20 15:53:14 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2827410809' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Jan 20 15:53:14 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:53:14 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:53:14 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:53:14.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:53:14 compute-1 ceph-mon[81775]: from='client.37188 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 15:53:14 compute-1 ceph-mon[81775]: from='client.50059 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 15:53:14 compute-1 ceph-mon[81775]: from='client.50504 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 15:53:14 compute-1 ceph-mon[81775]: from='client.37197 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 15:53:14 compute-1 ceph-mon[81775]: pgmap v3931: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:53:14 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/4139919592' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Jan 20 15:53:14 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/3054439434' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Jan 20 15:53:14 compute-1 ceph-mon[81775]: from='client.50104 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 15:53:14 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/2698181100' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Jan 20 15:53:14 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/817991170' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Jan 20 15:53:14 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/2148797103' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Jan 20 15:53:14 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/2388841994' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Jan 20 15:53:14 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/1121232392' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 20 15:53:14 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/2058637716' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Jan 20 15:53:14 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/2827410809' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Jan 20 15:53:14 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/829000758' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Jan 20 15:53:15 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Jan 20 15:53:15 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/893498978' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Jan 20 15:53:15 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "features"} v 0) v1
Jan 20 15:53:15 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2579756407' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Jan 20 15:53:15 compute-1 nova_compute[225855]: 2026-01-20 15:53:15.311 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:53:15 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Jan 20 15:53:15 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/22520359' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 20 15:53:15 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0) v1
Jan 20 15:53:15 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3071146643' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Jan 20 15:53:16 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr stat"} v 0) v1
Jan 20 15:53:16 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1908376492' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Jan 20 15:53:16 compute-1 ceph-mon[81775]: from='client.50552 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 15:53:16 compute-1 ceph-mon[81775]: from='client.50116 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 15:53:16 compute-1 ceph-mon[81775]: from='client.50564 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 15:53:16 compute-1 ceph-mon[81775]: from='client.37254 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 15:53:16 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/124324877' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Jan 20 15:53:16 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/1991284178' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Jan 20 15:53:16 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/893498978' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Jan 20 15:53:16 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/2579756407' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Jan 20 15:53:16 compute-1 ceph-mon[81775]: from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Jan 20 15:53:16 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/3030325856' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Jan 20 15:53:16 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/3585713236' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Jan 20 15:53:16 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/3163341667' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Jan 20 15:53:16 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/2989774423' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 20 15:53:16 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/22520359' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 20 15:53:16 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/2870537491' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Jan 20 15:53:16 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/3071146643' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Jan 20 15:53:16 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/1406338621' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Jan 20 15:53:16 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:53:16 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:53:16 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:53:16.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:53:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:53:16.476 140354 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:53:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:53:16.477 140354 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:53:16 compute-1 ovn_metadata_agent[140349]: 2026-01-20 15:53:16.477 140354 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:53:16 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Jan 20 15:53:16 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/693312015' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Jan 20 15:53:16 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0) v1
Jan 20 15:53:16 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2432965860' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Jan 20 15:53:16 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:53:16 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:53:16 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:53:16.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:53:17 compute-1 ceph-mon[81775]: from='client.50182 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 15:53:17 compute-1 ceph-mon[81775]: pgmap v3932: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:53:17 compute-1 ceph-mon[81775]: from='client.37296 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 15:53:17 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/1908376492' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Jan 20 15:53:17 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/598650118' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Jan 20 15:53:17 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/1611874348' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Jan 20 15:53:17 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/878803266' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Jan 20 15:53:17 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/693312015' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Jan 20 15:53:17 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/2810450873' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Jan 20 15:53:17 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/2432965860' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Jan 20 15:53:17 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/692267894' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Jan 20 15:53:17 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/947449568' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Jan 20 15:53:17 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0) v1
Jan 20 15:53:17 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/808794295' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:21:16.164138+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 447578112 unmapped: 72196096 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:21:17.164317+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 447578112 unmapped: 72196096 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:21:18.164556+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5051104 data_alloc: 234881024 data_used: 27340800
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 415 heartbeat osd_stat(store_statfs(0x19f901000/0x0/0x1bfc00000, data 0x5698bfa/0x58bd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1aa3f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 447578112 unmapped: 72196096 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:21:19.164702+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 415 heartbeat osd_stat(store_statfs(0x19f901000/0x0/0x1bfc00000, data 0x5698bfa/0x58bd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1aa3f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 447586304 unmapped: 72187904 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:21:20.164942+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 447586304 unmapped: 72187904 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:21:21.165081+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.583354950s of 13.874148369s, submitted: 116
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 447586304 unmapped: 72187904 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:21:22.166955+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 447586304 unmapped: 72187904 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:21:23.167292+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5051172 data_alloc: 234881024 data_used: 27344896
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 415 ms_handle_reset con 0x557dc20ef400 session 0x557dc0121e00
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 415 ms_handle_reset con 0x557dc2367c00 session 0x557dc308c960
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 447586304 unmapped: 72187904 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:21:24.168482+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 447594496 unmapped: 72179712 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:21:25.169967+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 415 heartbeat osd_stat(store_statfs(0x19f900000/0x0/0x1bfc00000, data 0x5699bfa/0x58be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1aa3f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 447594496 unmapped: 72179712 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:21:26.170681+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 447594496 unmapped: 72179712 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:21:27.171178+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 447594496 unmapped: 72179712 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:21:28.171464+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5051184 data_alloc: 234881024 data_used: 27344896
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 447594496 unmapped: 72179712 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:21:29.171826+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 447594496 unmapped: 72179712 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:21:30.172136+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 415 heartbeat osd_stat(store_statfs(0x19f8fd000/0x0/0x1bfc00000, data 0x569cbfa/0x58c1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1aa3f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 447594496 unmapped: 72179712 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:21:31.172278+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dcb8db800
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.923791885s of 10.021304131s, submitted: 7
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 415 heartbeat osd_stat(store_statfs(0x19f8fd000/0x0/0x1bfc00000, data 0x569cbfa/0x58c1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1aa3f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 415 ms_handle_reset con 0x557dcb8db800 session 0x557dc208a3c0
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc2c47400
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 415 ms_handle_reset con 0x557dc2c47400 session 0x557dc20954a0
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dbfb2fc00
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 415 handle_osd_map epochs [415,416], i have 415, src has [1,416]
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 416 ms_handle_reset con 0x557dbfb2fc00 session 0x557dc30ea780
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc20ef400
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 416 ms_handle_reset con 0x557dc20ef400 session 0x557dc0734960
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc2367c00
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 416 heartbeat osd_stat(store_statfs(0x19f8fd000/0x0/0x1bfc00000, data 0x569cbfa/0x58c1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1aa3f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 416 ms_handle_reset con 0x557dc2367c00 session 0x557dc01214a0
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 447602688 unmapped: 72171520 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:21:32.172523+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dcb8db800
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 416 ms_handle_reset con 0x557dcb8db800 session 0x557dc30fc000
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 447610880 unmapped: 72163328 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc10ab400
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 416 ms_handle_reset con 0x557dc10ab400 session 0x557dc01a5680
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:21:33.172676+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5056788 data_alloc: 234881024 data_used: 27353088
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 447610880 unmapped: 72163328 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc10ab400
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:21:34.172809+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dbfb2fc00
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 447610880 unmapped: 72163328 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:21:35.172965+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 447619072 unmapped: 72155136 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:21:36.173400+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 447627264 unmapped: 72146944 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:21:37.173662+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 416 heartbeat osd_stat(store_statfs(0x19f8f4000/0x0/0x1bfc00000, data 0x56a38b5/0x58ca000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1aa3f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 416 ms_handle_reset con 0x557dc25b1c00 session 0x557dc0eddc20
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 416 ms_handle_reset con 0x557dc5058400 session 0x557dc13f63c0
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 447627264 unmapped: 72146944 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:21:38.173879+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc20ef400
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055592 data_alloc: 234881024 data_used: 27353088
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 416 ms_handle_reset con 0x557dc20ef400 session 0x557dc138a3c0
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 447660032 unmapped: 72114176 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:21:39.174043+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 447660032 unmapped: 72114176 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 416 heartbeat osd_stat(store_statfs(0x19f6b0000/0x0/0x1bfc00000, data 0x54d9895/0x56fe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ae4f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:21:40.174212+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc2367c00
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 416 ms_handle_reset con 0x557dc2367c00 session 0x557dc30ea1e0
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dcb8db800
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 416 ms_handle_reset con 0x557dcb8db800 session 0x557dc3156960
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 447660032 unmapped: 72114176 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 416 handle_osd_map epochs [417,417], i have 416, src has [1,417]
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:21:41.174422+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 447668224 unmapped: 72105984 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:21:42.174559+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc20ef400
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.301989555s of 11.480295181s, submitted: 61
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 417 ms_handle_reset con 0x557dc20ef400 session 0x557dc0e54960
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 447676416 unmapped: 72097792 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:21:43.174837+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5038646 data_alloc: 234881024 data_used: 27234304
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 447676416 unmapped: 72097792 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:21:44.174975+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc2367c00
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 417 ms_handle_reset con 0x557dc2367c00 session 0x557dc106af00
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 417 heartbeat osd_stat(store_statfs(0x19f6ad000/0x0/0x1bfc00000, data 0x54db4e0/0x5700000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ae4f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc25b1c00
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 417 ms_handle_reset con 0x557dc25b1c00 session 0x557dc01a5c20
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 447692800 unmapped: 72081408 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:21:45.175254+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc5058400
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 417 ms_handle_reset con 0x557dc5058400 session 0x557dc01a5680
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc2c47400
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 417 ms_handle_reset con 0x557dc2c47400 session 0x557dc0734960
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 447709184 unmapped: 72065024 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:21:46.175564+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 447717376 unmapped: 72056832 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:21:47.175787+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 447717376 unmapped: 72056832 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:21:48.176009+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5042789 data_alloc: 234881024 data_used: 28385280
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 447717376 unmapped: 72056832 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:21:49.176218+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 417 heartbeat osd_stat(store_statfs(0x19f6af000/0x0/0x1bfc00000, data 0x54db4d1/0x56ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ae4f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 447717376 unmapped: 72056832 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:21:50.176370+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 447717376 unmapped: 72056832 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:21:51.176482+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 447717376 unmapped: 72056832 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:21:52.176604+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 417 handle_osd_map epochs [417,418], i have 417, src has [1,418]
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19f6aa000/0x0/0x1bfc00000, data 0x54e04d1/0x5704000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ae4f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 447733760 unmapped: 72040448 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:21:53.176720+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5049123 data_alloc: 234881024 data_used: 28393472
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:21:54.176895+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 447741952 unmapped: 72032256 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:21:55.177042+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 447741952 unmapped: 72032256 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc10ab400 session 0x557dc31572c0
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dbfb2fc00 session 0x557dc2082960
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc20ef400
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.068335533s of 13.254554749s, submitted: 70
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:21:56.177583+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 445775872 unmapped: 73998336 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc20ef400 session 0x557dc30eb0e0
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:21:57.177745+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 445784064 unmapped: 73990144 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:21:58.177972+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 445784064 unmapped: 73990144 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x1a0248000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ae4f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4921289 data_alloc: 218103808 data_used: 24088576
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:21:59.178130+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 445784064 unmapped: 73990144 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x1a0248000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ae4f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:22:00.178284+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 445784064 unmapped: 73990144 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:22:01.178677+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 445784064 unmapped: 73990144 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:22:02.178901+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 445784064 unmapped: 73990144 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:22:03.179059+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 445784064 unmapped: 73990144 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4921289 data_alloc: 218103808 data_used: 24088576
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:22:04.179243+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 445784064 unmapped: 73990144 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 445792256 unmapped: 73981952 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x1a0248000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ae4f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:22:05.430667+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x1a0248000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ae4f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 445792256 unmapped: 73981952 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:22:06.430838+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 445792256 unmapped: 73981952 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:22:07.431020+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x1a0248000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ae4f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 445792256 unmapped: 73981952 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:22:08.431168+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4921289 data_alloc: 218103808 data_used: 24088576
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 445792256 unmapped: 73981952 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:22:09.431292+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 445792256 unmapped: 73981952 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:22:10.431427+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 445792256 unmapped: 73981952 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:22:11.431601+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x1a0248000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ae4f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 445800448 unmapped: 73973760 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:22:12.431731+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 445800448 unmapped: 73973760 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:22:13.431860+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4921289 data_alloc: 218103808 data_used: 24088576
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 445800448 unmapped: 73973760 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:22:14.432050+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x1a0248000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ae4f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 445800448 unmapped: 73973760 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:22:15.432219+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 445800448 unmapped: 73973760 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:22:16.432366+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x1a0248000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ae4f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 445800448 unmapped: 73973760 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:22:17.432510+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 445808640 unmapped: 73965568 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:22:18.432689+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4921289 data_alloc: 218103808 data_used: 24088576
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 445808640 unmapped: 73965568 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:22:19.432827+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 445808640 unmapped: 73965568 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:22:20.433008+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 445816832 unmapped: 73957376 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:22:21.433196+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 445816832 unmapped: 73957376 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:22:22.433376+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x1a0248000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ae4f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 445816832 unmapped: 73957376 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:22:23.433511+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4921289 data_alloc: 218103808 data_used: 24088576
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc2367c00
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc2367c00 session 0x557dc2083a40
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc25b1c00
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc25b1c00 session 0x557dc0145a40
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dbfb2fc00
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dbfb2fc00 session 0x557dc01203c0
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 445816832 unmapped: 73957376 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc10ab400
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc10ab400 session 0x557dc30ebe00
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc20ef400
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 28.219793320s of 28.305349350s, submitted: 17
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:22:24.433639+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc20ef400 session 0x557dc138a000
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc2367c00
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc2367c00 session 0x557dc22ce1e0
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc5058400
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc5058400 session 0x557dc01b90e0
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc5058400
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc5058400 session 0x557dc2083860
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dbfb2fc00
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dbfb2fc00 session 0x557dc2330d20
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 446021632 unmapped: 73752576 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:22:25.433798+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 446021632 unmapped: 73752576 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:22:26.434012+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 446021632 unmapped: 73752576 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19fe9a000/0x0/0x1bfc00000, data 0x4cef010/0x4f14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ae4f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:22:27.438314+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19fe9a000/0x0/0x1bfc00000, data 0x4cef010/0x4f14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ae4f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 446021632 unmapped: 73752576 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19fe9a000/0x0/0x1bfc00000, data 0x4cef010/0x4f14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ae4f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:22:28.438957+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4959010 data_alloc: 218103808 data_used: 24088576
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 446021632 unmapped: 73752576 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:22:29.439401+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 446021632 unmapped: 73752576 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:22:30.439565+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 446021632 unmapped: 73752576 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19fe9a000/0x0/0x1bfc00000, data 0x4cef010/0x4f14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ae4f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:22:31.439743+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 446021632 unmapped: 73752576 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19fe9a000/0x0/0x1bfc00000, data 0x4cef010/0x4f14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ae4f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:22:32.439901+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19fe9a000/0x0/0x1bfc00000, data 0x4cef010/0x4f14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ae4f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 446021632 unmapped: 73752576 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:22:33.440017+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc10ab400
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4959010 data_alloc: 218103808 data_used: 24088576
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc10ab400 session 0x557dc30fd0e0
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19fe9a000/0x0/0x1bfc00000, data 0x4cef010/0x4f14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ae4f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc20ef400
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc2367c00
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 446021632 unmapped: 73752576 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:22:34.440163+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 446021632 unmapped: 73752576 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:22:35.440382+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 446029824 unmapped: 73744384 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:22:36.440524+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19fe9a000/0x0/0x1bfc00000, data 0x4cef010/0x4f14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ae4f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 446029824 unmapped: 73744384 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:22:37.440725+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 446029824 unmapped: 73744384 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:22:38.441017+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4977862 data_alloc: 234881024 data_used: 26775552
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 446029824 unmapped: 73744384 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:22:39.441255+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 446029824 unmapped: 73744384 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:22:40.441400+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19fe9a000/0x0/0x1bfc00000, data 0x4cef010/0x4f14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ae4f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 446029824 unmapped: 73744384 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:22:41.441608+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 446038016 unmapped: 73736192 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:22:42.441885+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 446038016 unmapped: 73736192 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:22:43.442046+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4977862 data_alloc: 234881024 data_used: 26775552
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 446038016 unmapped: 73736192 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:22:44.442229+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 446038016 unmapped: 73736192 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:22:45.442435+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 446038016 unmapped: 73736192 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:22:46.442579+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19fe9a000/0x0/0x1bfc00000, data 0x4cef010/0x4f14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ae4f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 446038016 unmapped: 73736192 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:22:47.442773+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 23.273998260s of 23.385925293s, submitted: 23
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 448405504 unmapped: 71368704 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:22:48.442935+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5060854 data_alloc: 234881024 data_used: 27189248
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 448421888 unmapped: 71352320 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:22:49.443172+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 448421888 unmapped: 71352320 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:22:50.443557+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 448421888 unmapped: 71352320 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:22:51.443705+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19f40d000/0x0/0x1bfc00000, data 0x576d010/0x5992000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ae4f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 448421888 unmapped: 71352320 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:22:52.443896+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19f40d000/0x0/0x1bfc00000, data 0x576d010/0x5992000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ae4f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 448421888 unmapped: 71352320 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:22:53.444059+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5073154 data_alloc: 234881024 data_used: 27004928
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 448421888 unmapped: 71352320 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:22:54.444254+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 448176128 unmapped: 71598080 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:22:55.444376+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 448176128 unmapped: 71598080 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:22:56.444499+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 448176128 unmapped: 71598080 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:22:57.444668+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.343052864s of 10.078209877s, submitted: 109
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc20ef400 session 0x557dc0edd0e0
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc2367c00 session 0x557dc30f0000
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19f416000/0x0/0x1bfc00000, data 0x5772010/0x5997000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ae4f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 448184320 unmapped: 71589888 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:22:58.444838+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5066250 data_alloc: 234881024 data_used: 26992640
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 448184320 unmapped: 71589888 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:22:59.445012+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 448184320 unmapped: 71589888 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Jan 20 15:53:18 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3140472892' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:23:00.445152+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19f417000/0x0/0x1bfc00000, data 0x5772010/0x5997000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ae4f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 448192512 unmapped: 71581696 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:23:01.445297+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 448192512 unmapped: 71581696 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:23:02.445429+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19f417000/0x0/0x1bfc00000, data 0x5772010/0x5997000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ae4f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 448192512 unmapped: 71581696 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:23:03.445565+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5066250 data_alloc: 234881024 data_used: 26992640
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc2367c00
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc2367c00 session 0x557dc0edc000
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19f417000/0x0/0x1bfc00000, data 0x5772010/0x5997000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ae4f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 448192512 unmapped: 71581696 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:23:04.445707+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 448192512 unmapped: 71581696 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19f417000/0x0/0x1bfc00000, data 0x5772010/0x5997000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ae4f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:23:05.445850+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dbfb2fc00
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc10ab400
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:23:06.446013+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 448176128 unmapped: 71598080 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:23:07.446159+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 448135168 unmapped: 71639040 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:23:08.446339+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 448135168 unmapped: 71639040 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5067822 data_alloc: 234881024 data_used: 27152384
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:23:09.446521+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 448135168 unmapped: 71639040 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:23:10.446758+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 448135168 unmapped: 71639040 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19f417000/0x0/0x1bfc00000, data 0x5772010/0x5997000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ae4f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:23:11.446918+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 448135168 unmapped: 71639040 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19f417000/0x0/0x1bfc00000, data 0x5772010/0x5997000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ae4f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:23:12.447039+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 448135168 unmapped: 71639040 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19f417000/0x0/0x1bfc00000, data 0x5772010/0x5997000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ae4f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:23:13.447170+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 448135168 unmapped: 71639040 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5067822 data_alloc: 234881024 data_used: 27152384
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:23:14.447369+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 448135168 unmapped: 71639040 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:23:15.462077+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 448135168 unmapped: 71639040 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:23:16.462258+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 448143360 unmapped: 71630848 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 19.337461472s of 19.394742966s, submitted: 3
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:23:17.462432+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 448192512 unmapped: 71581696 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19f417000/0x0/0x1bfc00000, data 0x5772010/0x5997000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ae4f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:23:18.462637+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 448200704 unmapped: 71573504 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19f417000/0x0/0x1bfc00000, data 0x5772010/0x5997000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ae4f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5078282 data_alloc: 234881024 data_used: 28155904
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19f417000/0x0/0x1bfc00000, data 0x5772010/0x5997000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ae4f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:23:19.462809+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 448200704 unmapped: 71573504 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:23:20.462963+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 448200704 unmapped: 71573504 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:23:21.470340+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 448200704 unmapped: 71573504 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:23:22.470539+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 448200704 unmapped: 71573504 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19f417000/0x0/0x1bfc00000, data 0x5772010/0x5997000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ae4f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:23:23.470732+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 448200704 unmapped: 71573504 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5078282 data_alloc: 234881024 data_used: 28155904
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:23:24.470934+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 448200704 unmapped: 71573504 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19f417000/0x0/0x1bfc00000, data 0x5772010/0x5997000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ae4f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:23:25.471133+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 448200704 unmapped: 71573504 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:23:26.471303+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 448200704 unmapped: 71573504 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dbfb2fc00 session 0x557dc0fe6b40
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc10ab400 session 0x557dc22ebc20
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19eff1000/0x0/0x1bfc00000, data 0x5778010/0x599d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1b26f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc20ef400
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.055052757s of 10.106260300s, submitted: 23
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:23:27.471462+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 448143360 unmapped: 71630848 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:23:28.471767+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 448143360 unmapped: 71630848 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5078616 data_alloc: 234881024 data_used: 28155904
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:23:29.471918+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 448143360 unmapped: 71630848 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc20ef400 session 0x557dc2330780
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:23:30.472085+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 448143360 unmapped: 71630848 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:23:31.472273+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 448143360 unmapped: 71630848 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:23:32.472460+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 448143360 unmapped: 71630848 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x1a1288000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x19e0f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:23:33.472605+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 448143360 unmapped: 71630848 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4933724 data_alloc: 218103808 data_used: 24088576
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:23:34.472811+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 448143360 unmapped: 71630848 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x1a1288000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x19e0f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:23:35.473006+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 448143360 unmapped: 71630848 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:23:36.473177+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 448143360 unmapped: 71630848 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x1a1288000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x19e0f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:23:37.473339+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 448143360 unmapped: 71630848 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:23:38.473550+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 448143360 unmapped: 71630848 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4933724 data_alloc: 218103808 data_used: 24088576
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:23:39.473752+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 448143360 unmapped: 71630848 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:23:40.473920+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 448151552 unmapped: 71622656 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x1a1288000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x19e0f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:23:41.474111+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 448159744 unmapped: 71614464 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:23:42.474295+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 448159744 unmapped: 71614464 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:23:43.474445+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 448159744 unmapped: 71614464 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4933724 data_alloc: 218103808 data_used: 24088576
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:23:44.474646+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 448159744 unmapped: 71614464 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x1a1288000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x19e0f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:23:45.474806+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 448159744 unmapped: 71614464 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:23:46.474952+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 448159744 unmapped: 71614464 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:23:47.475116+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 448159744 unmapped: 71614464 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:23:48.475299+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 448167936 unmapped: 71606272 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4933724 data_alloc: 218103808 data_used: 24088576
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x1a1288000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x19e0f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:23:49.475450+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 448167936 unmapped: 71606272 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x1a1288000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x19e0f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:23:50.475612+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 448167936 unmapped: 71606272 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:23:51.475752+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 448167936 unmapped: 71606272 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:23:52.475950+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 448167936 unmapped: 71606272 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:23:53.476175+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x1a1288000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x19e0f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 448167936 unmapped: 71606272 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4933724 data_alloc: 218103808 data_used: 24088576
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:23:54.476344+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc5058400
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 448167936 unmapped: 71606272 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 26.636264801s of 27.555721283s, submitted: 34
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc5058400 session 0x557dc05d9860
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dbfb2fc00
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dbfb2fc00 session 0x557dc30f0f00
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc10ab400
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc10ab400 session 0x557dc33ca960
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc20ef400
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc20ef400 session 0x557dc13f6f00
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc2367c00
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc2367c00 session 0x557dc150e1e0
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:23:55.476502+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 448266240 unmapped: 75710464 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:23:56.476708+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 448266240 unmapped: 75710464 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x1a06ec000/0x0/0x1bfc00000, data 0x54de000/0x5702000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x19e0f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:23:57.476894+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 448274432 unmapped: 75702272 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x1a06ec000/0x0/0x1bfc00000, data 0x54de000/0x5702000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x19e0f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:23:58.477101+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 448274432 unmapped: 75702272 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5023552 data_alloc: 218103808 data_used: 24088576
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:23:59.477303+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 448274432 unmapped: 75702272 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x1a06ec000/0x0/0x1bfc00000, data 0x54de000/0x5702000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x19e0f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:24:00.477501+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 448274432 unmapped: 75702272 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc2418000
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc2418000 session 0x557dc011e960
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dbfb2fc00
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dbfb2fc00 session 0x557dc13f74a0
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:24:01.477670+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 448274432 unmapped: 75702272 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc10ab400
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc10ab400 session 0x557dc30f10e0
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc20ef400
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:24:02.477889+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc20ef400 session 0x557dc1a56000
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 448274432 unmapped: 75702272 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc2367c00
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc26ca800
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:24:03.478388+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 448274432 unmapped: 75702272 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5030803 data_alloc: 218103808 data_used: 24649728
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:24:04.478466+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 448290816 unmapped: 75685888 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:24:05.478590+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 448290816 unmapped: 75685888 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x1a06ea000/0x0/0x1bfc00000, data 0x54de033/0x5704000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x19e0f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:24:06.478673+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 448290816 unmapped: 75685888 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:24:07.478786+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 448290816 unmapped: 75685888 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:24:08.478947+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 448290816 unmapped: 75685888 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5104883 data_alloc: 234881024 data_used: 34652160
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:24:09.479073+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 448290816 unmapped: 75685888 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:24:10.479198+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 448290816 unmapped: 75685888 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x1a06ea000/0x0/0x1bfc00000, data 0x54de033/0x5704000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x19e0f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:24:11.479309+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 448290816 unmapped: 75685888 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:24:12.479471+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 448290816 unmapped: 75685888 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:24:13.479597+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 448290816 unmapped: 75685888 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5104883 data_alloc: 234881024 data_used: 34652160
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:24:14.479746+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 448290816 unmapped: 75685888 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 20.314420700s of 20.434036255s, submitted: 19
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:24:15.479909+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 450478080 unmapped: 73498624 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x1a06ea000/0x0/0x1bfc00000, data 0x54de033/0x5704000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x19e0f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:24:16.480038+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 450494464 unmapped: 73482240 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:24:17.480208+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 450584576 unmapped: 73392128 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:24:18.481065+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x1a0008000/0x0/0x1bfc00000, data 0x5bb1033/0x5dd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x19e0f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 450584576 unmapped: 73392128 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5168343 data_alloc: 234881024 data_used: 34938880
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:24:19.481187+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 450584576 unmapped: 73392128 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x1a0008000/0x0/0x1bfc00000, data 0x5bb1033/0x5dd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x19e0f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:24:20.481336+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 450584576 unmapped: 73392128 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x1a0008000/0x0/0x1bfc00000, data 0x5bb1033/0x5dd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x19e0f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:24:21.481472+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 450584576 unmapped: 73392128 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:24:22.481609+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 450584576 unmapped: 73392128 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x1a0008000/0x0/0x1bfc00000, data 0x5bb1033/0x5dd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x19e0f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:24:23.481706+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 450584576 unmapped: 73392128 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5168343 data_alloc: 234881024 data_used: 34938880
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:24:24.481896+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 450584576 unmapped: 73392128 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc2367c00 session 0x557dc234be00
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc26ca800 session 0x557dc308de00
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x1a0008000/0x0/0x1bfc00000, data 0x5bb1033/0x5dd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x19e0f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:24:25.482038+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 450592768 unmapped: 73383936 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:24:26.482174+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 450592768 unmapped: 73383936 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:24:27.482335+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 450592768 unmapped: 73383936 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:24:28.482512+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 450592768 unmapped: 73383936 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5168343 data_alloc: 234881024 data_used: 34938880
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:24:29.482690+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 450592768 unmapped: 73383936 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x1a0008000/0x0/0x1bfc00000, data 0x5bb1033/0x5dd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x19e0f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:24:30.482894+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dbfb2fc00
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 450592768 unmapped: 73383936 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc10ab400
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:24:31.483062+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 450592768 unmapped: 73383936 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x1a0008000/0x0/0x1bfc00000, data 0x5bb1033/0x5dd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x19e0f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:24:32.483244+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 450592768 unmapped: 73383936 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:24:33.483416+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dbfb2fc00 session 0x557dc01b90e0
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc10ab400 session 0x557dc2082960
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 450600960 unmapped: 73375744 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc20ef400
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5168503 data_alloc: 234881024 data_used: 34942976
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.522113800s of 18.740686417s, submitted: 73
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc20ef400 session 0x557dc30f01e0
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:24:34.483564+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 450609152 unmapped: 73367552 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:24:35.483689+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 450609152 unmapped: 73367552 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:24:36.483846+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 450609152 unmapped: 73367552 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:24:37.484051+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 450617344 unmapped: 73359360 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x1a1287000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x19e0f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:24:38.484245+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 450617344 unmapped: 73359360 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4944778 data_alloc: 218103808 data_used: 24088576
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:24:39.484372+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 450617344 unmapped: 73359360 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:24:40.484541+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 450617344 unmapped: 73359360 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:24:41.484724+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 450625536 unmapped: 73351168 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:24:42.484910+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 450625536 unmapped: 73351168 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:24:43.485039+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x1a1287000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x19e0f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 450625536 unmapped: 73351168 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4944778 data_alloc: 218103808 data_used: 24088576
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:24:44.485148+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 450633728 unmapped: 73342976 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:24:45.485270+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 450633728 unmapped: 73342976 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x1a1287000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x19e0f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:24:46.485375+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 450633728 unmapped: 73342976 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:24:47.485573+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 450633728 unmapped: 73342976 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:24:48.485797+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 450633728 unmapped: 73342976 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4944778 data_alloc: 218103808 data_used: 24088576
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:24:49.485952+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 450633728 unmapped: 73342976 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:24:50.486095+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 450633728 unmapped: 73342976 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:24:51.486231+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 450633728 unmapped: 73342976 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x1a1287000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x19e0f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:24:52.486386+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 450641920 unmapped: 73334784 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:24:53.486550+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 450641920 unmapped: 73334784 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4944778 data_alloc: 218103808 data_used: 24088576
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:24:54.486674+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 450641920 unmapped: 73334784 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x1a1287000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x19e0f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:24:55.486792+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 450641920 unmapped: 73334784 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:24:56.486966+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 450641920 unmapped: 73334784 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:24:57.487095+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 450641920 unmapped: 73334784 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:24:58.487284+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 450641920 unmapped: 73334784 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4944778 data_alloc: 218103808 data_used: 24088576
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:24:59.487438+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 450641920 unmapped: 73334784 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:25:00.487594+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 450650112 unmapped: 73326592 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x1a1287000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x19e0f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:25:01.487736+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x1a1287000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x19e0f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 450650112 unmapped: 73326592 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:25:02.487982+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 450650112 unmapped: 73326592 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:25:03.488151+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 450658304 unmapped: 73318400 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x1a1287000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x19e0f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4944778 data_alloc: 218103808 data_used: 24088576
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:25:04.488314+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 450658304 unmapped: 73318400 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:25:05.488476+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 450658304 unmapped: 73318400 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:25:06.488745+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 450658304 unmapped: 73318400 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:25:07.488980+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 450666496 unmapped: 73310208 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x1a1287000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x19e0f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:25:08.489155+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 450674688 unmapped: 73302016 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4944778 data_alloc: 218103808 data_used: 24088576
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:25:09.489433+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 450674688 unmapped: 73302016 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:25:10.489617+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 450674688 unmapped: 73302016 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x1a1287000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x19e0f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:25:11.489774+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 450674688 unmapped: 73302016 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x1a1287000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x19e0f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:25:12.489901+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 450674688 unmapped: 73302016 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:25:13.490018+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 450674688 unmapped: 73302016 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4944778 data_alloc: 218103808 data_used: 24088576
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:25:14.490176+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 450674688 unmapped: 73302016 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:25:15.490327+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 450674688 unmapped: 73302016 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x1a1287000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x19e0f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:25:16.490446+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc2367c00
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc2367c00 session 0x557dc308d4a0
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc242c800
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc242c800 session 0x557dc308c960
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 450682880 unmapped: 73293824 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc242c800
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc242c800 session 0x557dc01a52c0
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dbfb2fc00
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dbfb2fc00 session 0x557dc011e1e0
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc10ab400
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 42.948684692s of 43.016662598s, submitted: 27
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc10ab400 session 0x557dc07352c0
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc20ef400
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc20ef400 session 0x557dc22ceb40
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc2367c00
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc2367c00 session 0x557dc2330780
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc2367c00
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc2367c00 session 0x557dc0edd0e0
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dbfb2fc00
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dbfb2fc00 session 0x557dc0d3e3c0
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:25:17.490562+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 451747840 unmapped: 72228864 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:25:18.490754+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 451747840 unmapped: 72228864 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4982133 data_alloc: 218103808 data_used: 24088576
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:25:19.490926+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 451747840 unmapped: 72228864 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:25:20.491057+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 451747840 unmapped: 72228864 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x1a0eb7000/0x0/0x1bfc00000, data 0x4d12062/0x4f37000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x19e0f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:25:21.491228+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 451747840 unmapped: 72228864 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:25:22.491403+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x1a0eb7000/0x0/0x1bfc00000, data 0x4d12062/0x4f37000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x19e0f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 451747840 unmapped: 72228864 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:25:23.491557+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 451747840 unmapped: 72228864 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc10ab400
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc10ab400 session 0x557dc20c61e0
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4982266 data_alloc: 218103808 data_used: 24088576
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:25:24.491692+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc20ef400
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc242c800
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 451756032 unmapped: 72220672 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:25:25.491820+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 451756032 unmapped: 72220672 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:25:26.491941+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 453722112 unmapped: 70254592 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x1a0eb6000/0x0/0x1bfc00000, data 0x4d12085/0x4f38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x19e0f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:25:27.492099+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 453722112 unmapped: 70254592 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:25:28.492258+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 453722112 unmapped: 70254592 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5009786 data_alloc: 234881024 data_used: 27885568
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:25:29.492419+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 453722112 unmapped: 70254592 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:25:30.492555+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 453722112 unmapped: 70254592 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:25:31.492688+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 453722112 unmapped: 70254592 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:25:32.492835+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x1a0eb6000/0x0/0x1bfc00000, data 0x4d12085/0x4f38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x19e0f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 453722112 unmapped: 70254592 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:25:33.492953+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 453722112 unmapped: 70254592 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5009786 data_alloc: 234881024 data_used: 27885568
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:25:34.493079+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 453722112 unmapped: 70254592 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:25:35.493273+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 453722112 unmapped: 70254592 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:25:36.493415+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 19.696350098s of 19.785596848s, submitted: 33
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 453976064 unmapped: 70000640 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:25:37.493551+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 456605696 unmapped: 67371008 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:25:38.493719+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x1a04b8000/0x0/0x1bfc00000, data 0x5708085/0x592e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x19e0f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 457121792 unmapped: 66854912 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5100816 data_alloc: 234881024 data_used: 28143616
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:25:39.493883+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 457121792 unmapped: 66854912 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:25:40.494030+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x1a046b000/0x0/0x1bfc00000, data 0x574f085/0x5975000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x19e0f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 457121792 unmapped: 66854912 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:25:41.494177+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 457121792 unmapped: 66854912 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:25:42.494334+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 457121792 unmapped: 66854912 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:25:43.494499+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 457129984 unmapped: 66846720 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5092420 data_alloc: 234881024 data_used: 28147712
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x1a0458000/0x0/0x1bfc00000, data 0x5770085/0x5996000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x19e0f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:25:44.494640+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 457129984 unmapped: 66846720 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:25:45.494779+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 457129984 unmapped: 66846720 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:25:46.494925+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x1a0458000/0x0/0x1bfc00000, data 0x5770085/0x5996000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x19e0f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 457129984 unmapped: 66846720 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:25:47.495094+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 457129984 unmapped: 66846720 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:25:48.495246+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 457138176 unmapped: 66838528 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5092420 data_alloc: 234881024 data_used: 28147712
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:25:49.495412+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 457138176 unmapped: 66838528 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:25:50.495618+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.340342522s of 13.752699852s, submitted: 111
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x1a0458000/0x0/0x1bfc00000, data 0x5770085/0x5996000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x19e0f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 457138176 unmapped: 66838528 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:25:51.495756+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 457138176 unmapped: 66838528 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:25:52.495932+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 457138176 unmapped: 66838528 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x1a044f000/0x0/0x1bfc00000, data 0x5779085/0x599f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x19e0f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:25:53.496368+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 457138176 unmapped: 66838528 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5092664 data_alloc: 234881024 data_used: 28147712
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:25:54.496527+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 457138176 unmapped: 66838528 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:25:55.496710+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 457138176 unmapped: 66838528 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:25:56.496893+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x1a044f000/0x0/0x1bfc00000, data 0x5779085/0x599f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x19e0f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 457146368 unmapped: 66830336 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:25:57.497113+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x1a044f000/0x0/0x1bfc00000, data 0x5779085/0x599f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x19e0f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 457146368 unmapped: 66830336 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:25:58.497339+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 457146368 unmapped: 66830336 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5092664 data_alloc: 234881024 data_used: 28147712
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:25:59.497507+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 457146368 unmapped: 66830336 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:26:00.497658+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 457146368 unmapped: 66830336 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:26:01.497805+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 457154560 unmapped: 66822144 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:26:02.497956+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x1a044f000/0x0/0x1bfc00000, data 0x5779085/0x599f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x19e0f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 457154560 unmapped: 66822144 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:26:03.498151+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 457154560 unmapped: 66822144 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5092664 data_alloc: 234881024 data_used: 28147712
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:26:04.498295+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 457162752 unmapped: 66813952 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:26:05.498470+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 457162752 unmapped: 66813952 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:26:06.498640+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 457162752 unmapped: 66813952 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:26:07.498833+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x1a044f000/0x0/0x1bfc00000, data 0x5779085/0x599f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x19e0f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 457162752 unmapped: 66813952 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:26:08.499072+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.195745468s of 18.203834534s, submitted: 2
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 457162752 unmapped: 66813952 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5092664 data_alloc: 234881024 data_used: 28147712
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x1a044f000/0x0/0x1bfc00000, data 0x5779085/0x599f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x19e0f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:26:09.499187+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 457162752 unmapped: 66813952 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:26:10.499331+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 457162752 unmapped: 66813952 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:26:11.499561+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 457162752 unmapped: 66813952 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:26:12.499748+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 457170944 unmapped: 66805760 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc242dc00
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x1a044f000/0x0/0x1bfc00000, data 0x5779085/0x599f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x19e0f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:26:13.499916+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc242dc00 session 0x557dc31561e0
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc4082000
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc4082000 session 0x557dc30ebe00
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dbfb2fc00
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dbfb2fc00 session 0x557dc2320d20
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc10ab400
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc10ab400 session 0x557dc2094000
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc2367c00
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc2367c00 session 0x557dc22cf680
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 458530816 unmapped: 65445888 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5114082 data_alloc: 234881024 data_used: 28147712
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:26:14.500066+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 458530816 unmapped: 65445888 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:26:15.500228+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 458530816 unmapped: 65445888 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:26:16.500365+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 458530816 unmapped: 65445888 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:26:17.500541+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 458530816 unmapped: 65445888 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x1a0260000/0x0/0x1bfc00000, data 0x5968085/0x5b8e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x19e0f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:26:18.500698+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 458530816 unmapped: 65445888 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5114082 data_alloc: 234881024 data_used: 28147712
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:26:19.500824+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 458530816 unmapped: 65445888 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:26:20.500976+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 458539008 unmapped: 65437696 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:26:21.501111+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 458547200 unmapped: 65429504 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:26:22.501210+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 458547200 unmapped: 65429504 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:26:23.501323+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x1a0260000/0x0/0x1bfc00000, data 0x5968085/0x5b8e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x19e0f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 458547200 unmapped: 65429504 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5114082 data_alloc: 234881024 data_used: 28147712
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:26:24.501452+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 458547200 unmapped: 65429504 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:26:25.501613+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 458547200 unmapped: 65429504 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:26:26.501753+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc242dc00
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc242dc00 session 0x557dc30f03c0
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc72e4800
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc2419c00
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 458547200 unmapped: 65429504 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:26:27.501889+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 458547200 unmapped: 65429504 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:26:28.502036+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 458555392 unmapped: 65421312 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x1a0260000/0x0/0x1bfc00000, data 0x5968085/0x5b8e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x19e0f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5114082 data_alloc: 234881024 data_used: 28147712
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:26:29.502181+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 458596352 unmapped: 65380352 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:26:30.502322+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 458596352 unmapped: 65380352 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:26:31.502500+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x1a0260000/0x0/0x1bfc00000, data 0x5968085/0x5b8e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x19e0f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 458596352 unmapped: 65380352 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:26:32.502607+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x1a0260000/0x0/0x1bfc00000, data 0x5968085/0x5b8e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x19e0f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 458596352 unmapped: 65380352 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:26:33.502851+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 458596352 unmapped: 65380352 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5120162 data_alloc: 234881024 data_used: 28934144
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:26:34.503040+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 458596352 unmapped: 65380352 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:26:35.503237+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x1a0260000/0x0/0x1bfc00000, data 0x5968085/0x5b8e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x19e0f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 458596352 unmapped: 65380352 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:26:36.503405+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 458604544 unmapped: 65372160 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:26:37.503569+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 28.897705078s of 28.928741455s, submitted: 14
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 458604544 unmapped: 65372160 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:26:38.503731+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 458604544 unmapped: 65372160 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5119710 data_alloc: 234881024 data_used: 28934144
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:26:39.503893+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 458612736 unmapped: 65363968 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:26:40.504063+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 459366400 unmapped: 64610304 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:26:41.504216+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19e85a000/0x0/0x1bfc00000, data 0x61c8085/0x63ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 460840960 unmapped: 63135744 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:26:42.504381+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 460840960 unmapped: 63135744 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:26:43.504602+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 460840960 unmapped: 63135744 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5187954 data_alloc: 234881024 data_used: 30027776
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:26:44.504808+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 460840960 unmapped: 63135744 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:26:45.505833+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 460840960 unmapped: 63135744 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19e841000/0x0/0x1bfc00000, data 0x61e7085/0x640d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:26:46.505987+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 460840960 unmapped: 63135744 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:26:47.506921+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 460840960 unmapped: 63135744 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:26:48.507343+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 460840960 unmapped: 63135744 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5187954 data_alloc: 234881024 data_used: 30027776
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:26:49.507566+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.731851578s of 12.032382011s, submitted: 91
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 460840960 unmapped: 63135744 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19e841000/0x0/0x1bfc00000, data 0x61e7085/0x640d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:26:50.507825+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 460849152 unmapped: 63127552 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:26:51.507951+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 5400.1 total, 600.0 interval
                                           Cumulative writes: 64K writes, 249K keys, 64K commit groups, 1.0 writes per commit group, ingest: 0.24 GB, 0.04 MB/s
                                           Cumulative WAL: 64K writes, 24K syncs, 2.64 writes per sync, written: 0.24 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 4629 writes, 15K keys, 4629 commit groups, 1.0 writes per commit group, ingest: 15.94 MB, 0.03 MB/s
                                           Interval WAL: 4629 writes, 1847 syncs, 2.51 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 460849152 unmapped: 63127552 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:26:52.508431+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc72e4800 session 0x557dc0edd2c0
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc2419c00 session 0x557dc0121680
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dbfb2fc00
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 460849152 unmapped: 63127552 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets getting new tickets!
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:26:53.508887+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _finish_auth 0
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:26:53.510166+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dbfb2fc00 session 0x557dc3122f00
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 460865536 unmapped: 63111168 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19f2a6000/0x0/0x1bfc00000, data 0x5782085/0x59a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5099508 data_alloc: 234881024 data_used: 28135424
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:26:54.509280+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19f2a6000/0x0/0x1bfc00000, data 0x5782085/0x59a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 460865536 unmapped: 63111168 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19f2a6000/0x0/0x1bfc00000, data 0x5782085/0x59a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:26:55.509585+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 460865536 unmapped: 63111168 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:26:56.509955+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 460865536 unmapped: 63111168 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:26:57.510260+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 460865536 unmapped: 63111168 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19f2a6000/0x0/0x1bfc00000, data 0x5782085/0x59a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:26:58.510545+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 460865536 unmapped: 63111168 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:26:59.510832+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5099508 data_alloc: 234881024 data_used: 28135424
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 460873728 unmapped: 63102976 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:27:00.511041+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc011b400 session 0x557dc13dd2c0
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc10ab400
Jan 20 15:53:18 compute-1 ceph-osd[79119]: mgrc ms_handle_reset ms_handle_reset con 0x557dc33fc400
Jan 20 15:53:18 compute-1 ceph-osd[79119]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/2542147622
Jan 20 15:53:18 compute-1 ceph-osd[79119]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/2542147622,v1:192.168.122.100:6801/2542147622]
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: get_auth_request con 0x557dc72e4800 auth_method 0
Jan 20 15:53:18 compute-1 ceph-osd[79119]: mgrc handle_mgr_configure stats_period=5
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 460873728 unmapped: 63102976 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:27:01.511256+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc239c000 session 0x557dc20c70e0
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc2367c00
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc217e400 session 0x557dc301b680
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc242dc00
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19f2a6000/0x0/0x1bfc00000, data 0x5782085/0x59a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 460873728 unmapped: 63102976 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:27:02.511453+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 460873728 unmapped: 63102976 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:27:03.511643+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 460873728 unmapped: 63102976 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:27:04.511821+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5099508 data_alloc: 234881024 data_used: 28135424
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.522294044s of 14.739879608s, submitted: 17
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc20ef400 session 0x557dc23303c0
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc242c800 session 0x557dc01a43c0
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 460873728 unmapped: 63102976 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:27:05.511932+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc2c46800
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc2c46800 session 0x557dc22ea960
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 455122944 unmapped: 68853760 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:27:06.512103+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x1a00e1000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 455122944 unmapped: 68853760 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:27:07.512264+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 455122944 unmapped: 68853760 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:27:08.512717+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x1a00e1000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 455122944 unmapped: 68853760 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:27:09.512904+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4963340 data_alloc: 218103808 data_used: 24088576
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x1a00e1000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 455122944 unmapped: 68853760 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:27:10.513089+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 455122944 unmapped: 68853760 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:27:11.513269+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 455122944 unmapped: 68853760 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:27:12.513434+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 455122944 unmapped: 68853760 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:27:13.513590+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 455122944 unmapped: 68853760 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:27:14.513733+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4963340 data_alloc: 218103808 data_used: 24088576
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x1a00e1000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 455122944 unmapped: 68853760 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:27:15.513915+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 455122944 unmapped: 68853760 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:27:16.514053+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 455122944 unmapped: 68853760 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:27:17.514211+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x1a00e1000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 455122944 unmapped: 68853760 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:27:18.514375+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 455122944 unmapped: 68853760 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:27:19.514507+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4963340 data_alloc: 218103808 data_used: 24088576
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 455122944 unmapped: 68853760 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x1a00e1000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:27:20.514656+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 455122944 unmapped: 68853760 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:27:21.514778+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 455122944 unmapped: 68853760 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:27:22.514905+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 455122944 unmapped: 68853760 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:27:23.515012+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 455122944 unmapped: 68853760 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:27:24.515143+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4963340 data_alloc: 218103808 data_used: 24088576
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 455122944 unmapped: 68853760 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:27:25.515256+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x1a00e1000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 455122944 unmapped: 68853760 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:27:26.515348+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 455122944 unmapped: 68853760 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:27:27.515490+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 455122944 unmapped: 68853760 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:27:28.515749+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 455122944 unmapped: 68853760 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:27:29.515947+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4963340 data_alloc: 218103808 data_used: 24088576
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x1a00e1000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x1a00e1000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 455122944 unmapped: 68853760 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:27:30.516105+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 455122944 unmapped: 68853760 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:27:31.516255+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 455122944 unmapped: 68853760 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:27:32.516412+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 455122944 unmapped: 68853760 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:27:33.516561+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x1a00e1000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 455122944 unmapped: 68853760 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:27:34.516690+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4963340 data_alloc: 218103808 data_used: 24088576
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 455122944 unmapped: 68853760 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:27:35.516857+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 455122944 unmapped: 68853760 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:27:36.517028+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 455122944 unmapped: 68853760 heap: 523976704 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:27:37.517186+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc2c46800
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc2c46800 session 0x557dc13dd2c0
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dbfb2fc00
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dbfb2fc00 session 0x557dc0f1c3c0
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc20ef400
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc20ef400 session 0x557dc3122f00
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc2419c00
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc2419c00 session 0x557dc0edd2c0
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc242c800
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 33.626598358s of 33.775032043s, submitted: 37
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc242c800 session 0x557dc30f03c0
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 455041024 unmapped: 72089600 heap: 527130624 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:27:38.517468+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc242c800
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc242c800 session 0x557dc0d3e3c0
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dbfb2fc00
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dbfb2fc00 session 0x557dc0edd0e0
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19f8b2000/0x0/0x1bfc00000, data 0x5176039/0x539c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc20ef400
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc20ef400 session 0x557dc22ceb40
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc2419c00
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc2419c00 session 0x557dc011e1e0
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 455041024 unmapped: 72089600 heap: 527130624 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:27:39.517660+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5030023 data_alloc: 218103808 data_used: 24088576
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:27:40.517934+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 455049216 unmapped: 72081408 heap: 527130624 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:27:41.518080+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 455049216 unmapped: 72081408 heap: 527130624 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc2c46800
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc2c46800 session 0x557dc308c960
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc2c46800
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc2c46800 session 0x557dc308d4a0
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:27:42.518228+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 455049216 unmapped: 72081408 heap: 527130624 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dbfb2fc00
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dbfb2fc00 session 0x557dc2082960
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc20ef400
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc20ef400 session 0x557dc01b90e0
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc2419c00
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc242c800
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:27:43.518378+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 454991872 unmapped: 72138752 heap: 527130624 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19f8b0000/0x0/0x1bfc00000, data 0x51760a5/0x539e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:27:44.518497+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 454803456 unmapped: 72327168 heap: 527130624 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5094631 data_alloc: 234881024 data_used: 32604160
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:27:45.518624+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 454803456 unmapped: 72327168 heap: 527130624 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:27:46.518752+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 454803456 unmapped: 72327168 heap: 527130624 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19f8b0000/0x0/0x1bfc00000, data 0x51760a5/0x539e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:27:47.518918+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 454803456 unmapped: 72327168 heap: 527130624 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:27:48.519071+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 454803456 unmapped: 72327168 heap: 527130624 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:27:49.519665+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 454803456 unmapped: 72327168 heap: 527130624 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5094631 data_alloc: 234881024 data_used: 32604160
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:27:50.520021+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 454803456 unmapped: 72327168 heap: 527130624 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:27:51.520338+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 454803456 unmapped: 72327168 heap: 527130624 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:27:52.520560+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 454803456 unmapped: 72327168 heap: 527130624 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19f8b0000/0x0/0x1bfc00000, data 0x51760a5/0x539e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:27:53.520657+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 454803456 unmapped: 72327168 heap: 527130624 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:27:54.520829+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 454803456 unmapped: 72327168 heap: 527130624 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5094631 data_alloc: 234881024 data_used: 32604160
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 16.654638290s of 16.778350830s, submitted: 40
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:27:55.521208+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 457662464 unmapped: 69468160 heap: 527130624 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:27:56.521423+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 457662464 unmapped: 69468160 heap: 527130624 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:27:57.521673+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 458964992 unmapped: 68165632 heap: 527130624 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19f2e9000/0x0/0x1bfc00000, data 0x57270a5/0x594f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:27:58.522028+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 458973184 unmapped: 68157440 heap: 527130624 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19f2e9000/0x0/0x1bfc00000, data 0x57270a5/0x594f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:27:59.522225+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 458973184 unmapped: 68157440 heap: 527130624 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5152737 data_alloc: 234881024 data_used: 32755712
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:28:00.522446+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 458973184 unmapped: 68157440 heap: 527130624 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:28:01.522777+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 458973184 unmapped: 68157440 heap: 527130624 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19f2e9000/0x0/0x1bfc00000, data 0x57270a5/0x594f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:28:02.522927+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 458301440 unmapped: 68829184 heap: 527130624 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:28:03.523063+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 458301440 unmapped: 68829184 heap: 527130624 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:28:04.523206+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 458301440 unmapped: 68829184 heap: 527130624 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5144273 data_alloc: 234881024 data_used: 32759808
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:28:05.523452+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 458301440 unmapped: 68829184 heap: 527130624 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:28:06.523658+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 458301440 unmapped: 68829184 heap: 527130624 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19f2fc000/0x0/0x1bfc00000, data 0x572a0a5/0x5952000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:28:07.523789+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 458301440 unmapped: 68829184 heap: 527130624 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:28:08.523929+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 458301440 unmapped: 68829184 heap: 527130624 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:28:09.524036+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 458301440 unmapped: 68829184 heap: 527130624 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5144273 data_alloc: 234881024 data_used: 32759808
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19f2fc000/0x0/0x1bfc00000, data 0x572a0a5/0x5952000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:28:10.524165+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 458301440 unmapped: 68829184 heap: 527130624 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19f2fc000/0x0/0x1bfc00000, data 0x572a0a5/0x5952000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:28:11.524319+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 458309632 unmapped: 68820992 heap: 527130624 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:28:12.524453+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 458309632 unmapped: 68820992 heap: 527130624 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:28:13.579453+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 458309632 unmapped: 68820992 heap: 527130624 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:28:14.579648+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 458309632 unmapped: 68820992 heap: 527130624 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5144273 data_alloc: 234881024 data_used: 32759808
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 19.573434830s of 19.795026779s, submitted: 93
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:28:15.579824+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 458309632 unmapped: 68820992 heap: 527130624 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:28:16.580062+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc239d400 session 0x557dc33cad20
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc011bc00
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19f2fb000/0x0/0x1bfc00000, data 0x572b0a5/0x5953000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 458309632 unmapped: 68820992 heap: 527130624 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:28:17.580279+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 458326016 unmapped: 68804608 heap: 527130624 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:28:18.580424+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 458326016 unmapped: 68804608 heap: 527130624 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc2419c00 session 0x557dc011e960
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc242c800 session 0x557dc3156780
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc242c800
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:28:19.580657+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 458358784 unmapped: 68771840 heap: 527130624 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc242c800 session 0x557dc0fe7e00
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4975866 data_alloc: 218103808 data_used: 24088576
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:28:20.580939+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 458366976 unmapped: 68763648 heap: 527130624 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19fd39000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [0,2] op hist [1,0,1])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:28:21.581063+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 459456512 unmapped: 67674112 heap: 527130624 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:28:22.581290+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 459522048 unmapped: 67608576 heap: 527130624 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:28:23.581507+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 459522048 unmapped: 67608576 heap: 527130624 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:28:24.581758+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 459522048 unmapped: 67608576 heap: 527130624 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4975514 data_alloc: 218103808 data_used: 24088576
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x1a00e8000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:28:25.581926+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 459522048 unmapped: 67608576 heap: 527130624 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:28:26.582103+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 459522048 unmapped: 67608576 heap: 527130624 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:28:27.582342+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 459522048 unmapped: 67608576 heap: 527130624 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x1a00e8000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:28:28.582669+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 459522048 unmapped: 67608576 heap: 527130624 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:28:29.582943+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 459522048 unmapped: 67608576 heap: 527130624 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4975514 data_alloc: 218103808 data_used: 24088576
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:28:30.583107+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 459522048 unmapped: 67608576 heap: 527130624 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:28:31.583321+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 459522048 unmapped: 67608576 heap: 527130624 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x1a00e8000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:28:32.583532+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 459530240 unmapped: 67600384 heap: 527130624 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:28:33.583746+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 459530240 unmapped: 67600384 heap: 527130624 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:28:34.583985+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 459530240 unmapped: 67600384 heap: 527130624 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4975514 data_alloc: 218103808 data_used: 24088576
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:28:35.584184+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 459530240 unmapped: 67600384 heap: 527130624 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x1a00e8000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:28:36.584384+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 459530240 unmapped: 67600384 heap: 527130624 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:28:37.584953+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x1a00e8000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 459530240 unmapped: 67600384 heap: 527130624 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:28:38.585934+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 459530240 unmapped: 67600384 heap: 527130624 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:28:39.586120+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 459530240 unmapped: 67600384 heap: 527130624 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4975514 data_alloc: 218103808 data_used: 24088576
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:28:40.586299+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 459530240 unmapped: 67600384 heap: 527130624 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:28:41.586537+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 459530240 unmapped: 67600384 heap: 527130624 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x1a00e8000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:28:42.586718+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x1a00e8000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 459530240 unmapped: 67600384 heap: 527130624 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:28:43.586942+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 459530240 unmapped: 67600384 heap: 527130624 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x1a00e8000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:28:44.587177+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 459530240 unmapped: 67600384 heap: 527130624 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4975514 data_alloc: 218103808 data_used: 24088576
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:28:45.587357+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 459530240 unmapped: 67600384 heap: 527130624 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:28:46.587520+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 459530240 unmapped: 67600384 heap: 527130624 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:28:47.587698+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 459530240 unmapped: 67600384 heap: 527130624 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x1a00e8000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:28:48.587921+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 459538432 unmapped: 67592192 heap: 527130624 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:28:49.588088+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x1a00e8000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 459538432 unmapped: 67592192 heap: 527130624 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4975514 data_alloc: 218103808 data_used: 24088576
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:28:50.588220+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 459538432 unmapped: 67592192 heap: 527130624 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dbfb2fc00
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 35.319103241s of 36.174694061s, submitted: 289
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dbfb2fc00 session 0x557dc2304f00
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc20ef400
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc20ef400 session 0x557dc0121680
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc2419c00
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc2419c00 session 0x557dc01214a0
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc2c46800
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc2c46800 session 0x557dc20b52c0
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc2c46800
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc2c46800 session 0x557dc20b5680
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:28:51.588345+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 459866112 unmapped: 74612736 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19f63a000/0x0/0x1bfc00000, data 0x53f0000/0x5614000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:28:52.588525+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 459866112 unmapped: 74612736 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:28:53.588758+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 459866112 unmapped: 74612736 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19f63a000/0x0/0x1bfc00000, data 0x53f0000/0x5614000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:28:54.588932+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19f63a000/0x0/0x1bfc00000, data 0x53f0000/0x5614000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 459874304 unmapped: 74604544 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5066141 data_alloc: 218103808 data_used: 24088576
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:28:55.589108+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 459882496 unmapped: 74596352 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dbfb2fc00
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dbfb2fc00 session 0x557dc07450e0
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc20ef400
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc20ef400 session 0x557dc308de00
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:28:56.589254+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 459882496 unmapped: 74596352 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc2419c00
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc2419c00 session 0x557dc01b2d20
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc242c800
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc242c800 session 0x557dc13dcd20
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:28:57.591212+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 459882496 unmapped: 74596352 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc242c800
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dbfb2fc00
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:28:58.591418+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 459882496 unmapped: 74596352 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:28:59.591572+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 462422016 unmapped: 72056832 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5147845 data_alloc: 234881024 data_used: 35192832
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:29:00.591751+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19f616000/0x0/0x1bfc00000, data 0x5414000/0x5638000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 462422016 unmapped: 72056832 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:29:01.591960+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19f616000/0x0/0x1bfc00000, data 0x5414000/0x5638000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 462422016 unmapped: 72056832 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:29:02.592191+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 462422016 unmapped: 72056832 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:29:03.592361+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 462422016 unmapped: 72056832 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:29:04.592618+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 462422016 unmapped: 72056832 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5147845 data_alloc: 234881024 data_used: 35192832
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:29:05.592748+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 462422016 unmapped: 72056832 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:29:06.592929+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 462422016 unmapped: 72056832 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:29:07.593142+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19f616000/0x0/0x1bfc00000, data 0x5414000/0x5638000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 462422016 unmapped: 72056832 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:29:08.593343+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 462430208 unmapped: 72048640 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:29:09.593619+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 462430208 unmapped: 72048640 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5149125 data_alloc: 234881024 data_used: 35225600
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.860935211s of 18.978137970s, submitted: 28
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #55. Immutable memtables: 11.
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:29:10.593813+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 466042880 unmapped: 68435968 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:29:11.594016+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 466100224 unmapped: 68378624 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19d7ba000/0x0/0x1bfc00000, data 0x60d0000/0x62f4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:29:12.594182+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 466100224 unmapped: 68378624 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:29:13.594360+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 466100224 unmapped: 68378624 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:29:14.594506+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 466100224 unmapped: 68378624 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5267683 data_alloc: 234881024 data_used: 37605376
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:29:15.594691+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 466100224 unmapped: 68378624 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:29:16.594886+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 466100224 unmapped: 68378624 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19d7ba000/0x0/0x1bfc00000, data 0x60d0000/0x62f4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:29:17.595040+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 466100224 unmapped: 68378624 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:29:18.595206+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 466100224 unmapped: 68378624 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:29:19.595403+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 466100224 unmapped: 68378624 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5268003 data_alloc: 234881024 data_used: 37613568
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:29:20.595553+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.521827698s of 10.745787621s, submitted: 103
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc242c800 session 0x557dc13f6d20
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dbfb2fc00 session 0x557dc31232c0
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 466100224 unmapped: 68378624 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc20ef400
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:29:21.595689+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 466100224 unmapped: 68378624 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc20ef400 session 0x557dc23310e0
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:29:22.595816+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19ef24000/0x0/0x1bfc00000, data 0x4966000/0x4b8a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 455770112 unmapped: 78708736 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:29:23.596010+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 455770112 unmapped: 78708736 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:29:24.596199+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 455770112 unmapped: 78708736 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4988387 data_alloc: 218103808 data_used: 24088576
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:29:25.596426+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 455770112 unmapped: 78708736 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:29:26.596595+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 455770112 unmapped: 78708736 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:29:27.596754+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19ef48000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 455770112 unmapped: 78708736 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:29:28.596952+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 455770112 unmapped: 78708736 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:29:29.597089+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 455770112 unmapped: 78708736 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4988387 data_alloc: 218103808 data_used: 24088576
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:29:30.597260+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 455770112 unmapped: 78708736 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:29:31.597392+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 455770112 unmapped: 78708736 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:29:32.597491+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 455770112 unmapped: 78708736 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:29:33.597682+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19ef48000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 455770112 unmapped: 78708736 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:29:34.597814+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 455770112 unmapped: 78708736 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4988387 data_alloc: 218103808 data_used: 24088576
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:29:35.597966+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 455770112 unmapped: 78708736 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:29:36.598098+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 455770112 unmapped: 78708736 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:29:37.598229+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19ef48000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 455770112 unmapped: 78708736 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:29:38.598410+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19ef48000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 455770112 unmapped: 78708736 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:29:39.598581+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 455770112 unmapped: 78708736 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4988387 data_alloc: 218103808 data_used: 24088576
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:29:40.598716+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 455770112 unmapped: 78708736 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:29:41.598885+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 455770112 unmapped: 78708736 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:29:42.599061+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19ef48000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 455770112 unmapped: 78708736 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:29:43.599242+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 455770112 unmapped: 78708736 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:29:44.599420+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 455770112 unmapped: 78708736 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4988387 data_alloc: 218103808 data_used: 24088576
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:29:45.599572+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 455770112 unmapped: 78708736 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:29:46.599750+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 455770112 unmapped: 78708736 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:29:47.599921+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 455770112 unmapped: 78708736 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19ef48000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:29:48.600136+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 455770112 unmapped: 78708736 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:29:49.600312+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 455770112 unmapped: 78708736 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4988387 data_alloc: 218103808 data_used: 24088576
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:29:50.600531+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 455770112 unmapped: 78708736 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:29:51.600749+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19ef48000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 455770112 unmapped: 78708736 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc2419c00
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc2419c00 session 0x557dc30fd0e0
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc2c46800
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc2c46800 session 0x557dc3123c20
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dbfb2fc00
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dbfb2fc00 session 0x557dc1370000
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc20ef400
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc20ef400 session 0x557dc0edcf00
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:29:52.600889+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc2419c00
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 31.576032639s of 32.002433777s, submitted: 36
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc2419c00 session 0x557dc30fd0e0
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc242c800
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc242c800 session 0x557dc13dcd20
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc25ad000
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc25ad000 session 0x557dc01b2d20
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc25ad000
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc25ad000 session 0x557dc308de00
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 455770112 unmapped: 78708736 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dbfb2fc00
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dbfb2fc00 session 0x557dc07450e0
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:29:53.601023+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19e539000/0x0/0x1bfc00000, data 0x5351000/0x5575000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 455770112 unmapped: 78708736 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:29:54.601172+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 455770112 unmapped: 78708736 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5068817 data_alloc: 218103808 data_used: 24088576
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:29:55.601328+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 455770112 unmapped: 78708736 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc20ef400
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc20ef400 session 0x557dc20b52c0
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19e539000/0x0/0x1bfc00000, data 0x5351000/0x5575000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:29:56.601607+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc2419c00
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc2419c00 session 0x557dc01214a0
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 455778304 unmapped: 78700544 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:29:57.601758+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 455778304 unmapped: 78700544 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:29:58.602002+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc242c800
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc242c800 session 0x557dc0121680
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc242c800
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc242c800 session 0x557dc2304f00
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 455778304 unmapped: 78700544 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dbfb2fc00
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc20ef400
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:29:59.602502+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19e538000/0x0/0x1bfc00000, data 0x5351023/0x5576000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 455778304 unmapped: 78700544 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5072547 data_alloc: 218103808 data_used: 24174592
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:30:00.602637+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 458629120 unmapped: 75849728 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:30:01.602905+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 458629120 unmapped: 75849728 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:30:02.603363+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 458629120 unmapped: 75849728 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:30:03.603507+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 458629120 unmapped: 75849728 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19e538000/0x0/0x1bfc00000, data 0x5351023/0x5576000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:30:04.603650+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 458629120 unmapped: 75849728 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5146467 data_alloc: 218103808 data_used: 34623488
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:30:05.603790+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 458629120 unmapped: 75849728 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:30:06.603959+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 458629120 unmapped: 75849728 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:30:07.604094+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 458629120 unmapped: 75849728 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:30:08.604273+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 458629120 unmapped: 75849728 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:30:09.604459+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 458629120 unmapped: 75849728 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5146467 data_alloc: 218103808 data_used: 34623488
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19e538000/0x0/0x1bfc00000, data 0x5351023/0x5576000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:30:10.604713+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.228843689s of 18.303384781s, submitted: 21
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 460292096 unmapped: 74186752 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:30:11.604885+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19dc6c000/0x0/0x1bfc00000, data 0x5c1d023/0x5e42000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 462307328 unmapped: 72171520 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:30:12.605116+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19dc4f000/0x0/0x1bfc00000, data 0x5c3a023/0x5e5f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 462315520 unmapped: 72163328 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:30:13.605284+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 462315520 unmapped: 72163328 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:30:14.605558+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 462315520 unmapped: 72163328 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5230717 data_alloc: 234881024 data_used: 35557376
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:30:15.605812+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 462315520 unmapped: 72163328 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:30:16.606039+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 462315520 unmapped: 72163328 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:30:17.606191+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 462315520 unmapped: 72163328 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19d835000/0x0/0x1bfc00000, data 0x5c44023/0x5e69000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:30:18.606379+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 462315520 unmapped: 72163328 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:30:19.606625+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 462315520 unmapped: 72163328 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5228541 data_alloc: 234881024 data_used: 35627008
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:30:20.606765+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 462315520 unmapped: 72163328 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:30:21.606906+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 462315520 unmapped: 72163328 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:30:22.607028+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 462315520 unmapped: 72163328 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:30:23.607198+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19d832000/0x0/0x1bfc00000, data 0x5c47023/0x5e6c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 462315520 unmapped: 72163328 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:30:24.607341+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.729228973s of 13.922365189s, submitted: 80
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 462315520 unmapped: 72163328 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5228769 data_alloc: 234881024 data_used: 35627008
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19d831000/0x0/0x1bfc00000, data 0x5c48023/0x5e6d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:30:25.607485+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 462315520 unmapped: 72163328 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:30:26.607620+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 462315520 unmapped: 72163328 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:30:27.607749+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 462323712 unmapped: 72155136 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:30:28.607921+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 462323712 unmapped: 72155136 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:30:29.608060+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19d831000/0x0/0x1bfc00000, data 0x5c48023/0x5e6d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19d831000/0x0/0x1bfc00000, data 0x5c48023/0x5e6d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 462323712 unmapped: 72155136 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5228769 data_alloc: 234881024 data_used: 35627008
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:30:30.608193+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc2419c00
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 462331904 unmapped: 72146944 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:30:31.608677+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc2419c00 session 0x557dc20c7680
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc25ad000
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc25ad000 session 0x557dc30fc960
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc25c6800
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc25c6800 session 0x557dc30f0b40
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dcb9f5c00
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dcb9f5c00 session 0x557dc01a4b40
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc2419c00
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc2419c00 session 0x557dc3122b40
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 462356480 unmapped: 72122368 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:30:32.609205+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 462356480 unmapped: 72122368 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:30:33.609414+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19ce82000/0x0/0x1bfc00000, data 0x65f7023/0x681c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19ce82000/0x0/0x1bfc00000, data 0x65f7023/0x681c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 462364672 unmapped: 72114176 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:30:34.609530+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 462364672 unmapped: 72114176 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5306037 data_alloc: 234881024 data_used: 35627008
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:30:35.609766+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19ce82000/0x0/0x1bfc00000, data 0x65f7023/0x681c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 462364672 unmapped: 72114176 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:30:36.609909+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 462364672 unmapped: 72114176 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:30:37.610184+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc242c800
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc242c800 session 0x557dc138b4a0
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19ce82000/0x0/0x1bfc00000, data 0x65f7023/0x681c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 462372864 unmapped: 72105984 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc25ad000
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc25c6800
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:30:38.610370+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 462381056 unmapped: 72097792 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:30:39.610480+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 467673088 unmapped: 66805760 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5377397 data_alloc: 234881024 data_used: 45633536
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:30:40.610642+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 467673088 unmapped: 66805760 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:30:41.610800+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19ce82000/0x0/0x1bfc00000, data 0x65f7023/0x681c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 467681280 unmapped: 66797568 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:30:42.610927+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.127248764s of 18.263854980s, submitted: 14
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 467681280 unmapped: 66797568 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:30:43.611056+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19ce82000/0x0/0x1bfc00000, data 0x65f7023/0x681c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 467681280 unmapped: 66797568 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:30:44.611185+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 467681280 unmapped: 66797568 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5377573 data_alloc: 234881024 data_used: 45633536
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:30:45.611434+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19ce82000/0x0/0x1bfc00000, data 0x65f7023/0x681c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 467681280 unmapped: 66797568 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:30:46.611585+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19ce82000/0x0/0x1bfc00000, data 0x65f7023/0x681c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 467681280 unmapped: 66797568 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:30:47.611774+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 467681280 unmapped: 66797568 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:30:48.611973+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19ce82000/0x0/0x1bfc00000, data 0x65f7023/0x681c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 467681280 unmapped: 66797568 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19ce82000/0x0/0x1bfc00000, data 0x65f7023/0x681c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:30:49.612189+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 467689472 unmapped: 66789376 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5377573 data_alloc: 234881024 data_used: 45633536
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:30:50.612358+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 472375296 unmapped: 62103552 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:30:51.612498+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 470654976 unmapped: 63823872 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:30:52.612655+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19c4d1000/0x0/0x1bfc00000, data 0x6fa2023/0x71c7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 472195072 unmapped: 62283776 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:30:53.612811+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 472195072 unmapped: 62283776 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:30:54.612951+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 472195072 unmapped: 62283776 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5475871 data_alloc: 234881024 data_used: 47382528
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:30:55.613107+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 472375296 unmapped: 62103552 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:30:56.613291+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19c4b1000/0x0/0x1bfc00000, data 0x6fba023/0x71df000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 472375296 unmapped: 62103552 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:30:57.613410+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 472375296 unmapped: 62103552 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:30:58.613609+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc25ad000 session 0x557dc150e5a0
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc25c6800 session 0x557dc13dc780
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc33fd400
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.607994080s of 15.813341141s, submitted: 89
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 472383488 unmapped: 62095360 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc33fd400 session 0x557dc0e8be00
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:30:59.613816+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 472383488 unmapped: 62095360 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5238241 data_alloc: 234881024 data_used: 35627008
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:31:00.613944+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 472383488 unmapped: 62095360 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:31:01.614084+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 472383488 unmapped: 62095360 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:31:02.614268+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19d831000/0x0/0x1bfc00000, data 0x5c48023/0x5e6d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 472383488 unmapped: 62095360 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:31:03.614400+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dbfb2fc00 session 0x557dc01b90e0
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc20ef400 session 0x557dc2082960
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc2419c00
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 466550784 unmapped: 67928064 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc2419c00 session 0x557dc33cb4a0
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:31:04.614650+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 466567168 unmapped: 67911680 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5009345 data_alloc: 218103808 data_used: 24088576
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:31:05.614843+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 466567168 unmapped: 67911680 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:31:06.615059+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 466567168 unmapped: 67911680 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:31:07.615264+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19eb37000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 466567168 unmapped: 67911680 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:31:08.615502+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 466567168 unmapped: 67911680 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:31:09.615639+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 466567168 unmapped: 67911680 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5009345 data_alloc: 218103808 data_used: 24088576
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:31:10.615964+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 466567168 unmapped: 67911680 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:31:11.616118+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 466575360 unmapped: 67903488 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19eb37000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:31:12.616248+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 466575360 unmapped: 67903488 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:31:13.616426+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 466575360 unmapped: 67903488 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:31:14.616624+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 466575360 unmapped: 67903488 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5009345 data_alloc: 218103808 data_used: 24088576
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:31:15.616850+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19eb37000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 466575360 unmapped: 67903488 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:31:16.617131+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 466575360 unmapped: 67903488 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:31:17.617298+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 466575360 unmapped: 67903488 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:31:18.617556+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 466575360 unmapped: 67903488 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:31:19.617792+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 466575360 unmapped: 67903488 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5009345 data_alloc: 218103808 data_used: 24088576
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:31:20.618024+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 466575360 unmapped: 67903488 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:31:21.618199+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19eb37000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 466575360 unmapped: 67903488 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:31:22.618374+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 466575360 unmapped: 67903488 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:31:23.618527+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 466575360 unmapped: 67903488 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:31:24.618678+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 466575360 unmapped: 67903488 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5009345 data_alloc: 218103808 data_used: 24088576
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:31:25.618836+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 466575360 unmapped: 67903488 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:31:26.618986+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 466575360 unmapped: 67903488 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:31:27.619180+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19eb37000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 466575360 unmapped: 67903488 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc242c800
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc242c800 session 0x557dc011e960
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:31:28.619303+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc25ad000
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc25ad000 session 0x557dc30ea780
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc25ad000
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc25ad000 session 0x557dc3156f00
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dbfb2fc00
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dbfb2fc00 session 0x557dc138a1e0
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc20ef400
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 29.595907211s of 29.741331100s, submitted: 59
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc20ef400 session 0x557dc0e554a0
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc2419c00
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc2419c00 session 0x557dc30eb680
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc242c800
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc242c800 session 0x557dc22ce960
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc242c800
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc242c800 session 0x557dc05d8b40
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dbfb2fc00
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dbfb2fc00 session 0x557dc05d81e0
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19e717000/0x0/0x1bfc00000, data 0x4d62010/0x4f87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 466575360 unmapped: 67903488 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:31:29.619459+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 466575360 unmapped: 67903488 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5047689 data_alloc: 218103808 data_used: 24088576
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:31:30.619705+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19e717000/0x0/0x1bfc00000, data 0x4d62010/0x4f87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 466583552 unmapped: 67895296 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:31:31.620023+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 466583552 unmapped: 67895296 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:31:32.620264+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 466583552 unmapped: 67895296 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:31:33.620501+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 466583552 unmapped: 67895296 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:31:34.620704+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19e717000/0x0/0x1bfc00000, data 0x4d62010/0x4f87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc20ef400
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc20ef400 session 0x557dc22ce960
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 466583552 unmapped: 67895296 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5047689 data_alloc: 218103808 data_used: 24088576
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:31:35.620841+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc2419c00
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc2419c00 session 0x557dc0e554a0
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 466583552 unmapped: 67895296 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:31:36.620986+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc25ad000
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc25ad000 session 0x557dc3156f00
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc25ad000
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc25ad000 session 0x557dc30ea780
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 466886656 unmapped: 67592192 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:31:37.621184+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dbfb2fc00
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc20ef400
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 466886656 unmapped: 67592192 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:31:38.621401+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19e6f2000/0x0/0x1bfc00000, data 0x4d86020/0x4fac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 466894848 unmapped: 67584000 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:31:39.621563+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 466894848 unmapped: 67584000 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5084440 data_alloc: 218103808 data_used: 28332032
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:31:40.621682+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 466894848 unmapped: 67584000 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:31:41.621840+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 466894848 unmapped: 67584000 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:31:42.622033+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19e6f2000/0x0/0x1bfc00000, data 0x4d86020/0x4fac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 466894848 unmapped: 67584000 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:31:43.622167+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 466894848 unmapped: 67584000 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19e6f2000/0x0/0x1bfc00000, data 0x4d86020/0x4fac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:31:44.622336+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19e6f2000/0x0/0x1bfc00000, data 0x4d86020/0x4fac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 466894848 unmapped: 67584000 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5084440 data_alloc: 218103808 data_used: 28332032
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:31:45.622469+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 466894848 unmapped: 67584000 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:31:46.622602+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19e6f2000/0x0/0x1bfc00000, data 0x4d86020/0x4fac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 466894848 unmapped: 67584000 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:31:47.622746+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 466894848 unmapped: 67584000 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:31:48.622962+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 466894848 unmapped: 67584000 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:31:49.623102+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19e6f2000/0x0/0x1bfc00000, data 0x4d86020/0x4fac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 21.227991104s of 21.312829971s, submitted: 14
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 468475904 unmapped: 66002944 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:31:50.623279+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5129090 data_alloc: 218103808 data_used: 28553216
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 467681280 unmapped: 66797568 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:31:51.623421+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 467681280 unmapped: 66797568 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:31:52.623613+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 467681280 unmapped: 66797568 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:31:53.623804+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19e177000/0x0/0x1bfc00000, data 0x5301020/0x5527000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 467681280 unmapped: 66797568 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:31:54.623977+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 467681280 unmapped: 66797568 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:31:55.624175+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5132844 data_alloc: 218103808 data_used: 28835840
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19e177000/0x0/0x1bfc00000, data 0x5301020/0x5527000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 467681280 unmapped: 66797568 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:31:56.624317+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 467681280 unmapped: 66797568 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:31:57.624470+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 467681280 unmapped: 66797568 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:31:58.624664+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19e177000/0x0/0x1bfc00000, data 0x5301020/0x5527000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 467681280 unmapped: 66797568 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:31:59.624845+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 467681280 unmapped: 66797568 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:32:00.625048+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5132860 data_alloc: 218103808 data_used: 28835840
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 467681280 unmapped: 66797568 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:32:01.625232+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 467689472 unmapped: 66789376 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:32:02.625403+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 467689472 unmapped: 66789376 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:32:03.625619+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19e177000/0x0/0x1bfc00000, data 0x5301020/0x5527000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 467697664 unmapped: 66781184 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:32:04.625795+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19e177000/0x0/0x1bfc00000, data 0x5301020/0x5527000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 467697664 unmapped: 66781184 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:32:05.625993+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5132860 data_alloc: 218103808 data_used: 28835840
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19e177000/0x0/0x1bfc00000, data 0x5301020/0x5527000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 467697664 unmapped: 66781184 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:32:06.626149+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19e177000/0x0/0x1bfc00000, data 0x5301020/0x5527000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 467697664 unmapped: 66781184 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:32:07.626340+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 467697664 unmapped: 66781184 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:32:08.626557+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 467697664 unmapped: 66781184 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:32:09.626709+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19e177000/0x0/0x1bfc00000, data 0x5301020/0x5527000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 467697664 unmapped: 66781184 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:32:10.626887+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5132860 data_alloc: 218103808 data_used: 28835840
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 467697664 unmapped: 66781184 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:32:11.627030+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 467705856 unmapped: 66772992 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:32:12.627194+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19e177000/0x0/0x1bfc00000, data 0x5301020/0x5527000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 467705856 unmapped: 66772992 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:32:13.627390+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 467705856 unmapped: 66772992 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:32:14.627573+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 467705856 unmapped: 66772992 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:32:15.627701+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5132860 data_alloc: 218103808 data_used: 28835840
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:32:16.627827+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 467705856 unmapped: 66772992 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:32:17.627995+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 467705856 unmapped: 66772992 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19e177000/0x0/0x1bfc00000, data 0x5301020/0x5527000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:32:18.628123+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 467714048 unmapped: 66764800 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc2419c00
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 28.342596054s of 28.506412506s, submitted: 48
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc2419c00 session 0x557dc308c960
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc242c800
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc242c800 session 0x557dc0120780
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc25c6800
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc25c6800 session 0x557dc20c70e0
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc72e4c00
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc72e4c00 session 0x557dc13dc3c0
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc2419c00
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc2419c00 session 0x557dc2304f00
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:32:19.628312+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 467755008 unmapped: 66723840 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:32:20.628516+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 467755008 unmapped: 66723840 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5174592 data_alloc: 218103808 data_used: 28835840
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:32:21.628676+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 467763200 unmapped: 66715648 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19dc6c000/0x0/0x1bfc00000, data 0x580b082/0x5a32000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc242c800
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc242c800 session 0x557dc05d8960
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:32:22.628821+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 467763200 unmapped: 66715648 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc25ad000
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc25ad000 session 0x557dc301b0e0
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc25c6800
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc25c6800 session 0x557dc0e8a3c0
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc7bf6c00
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc7bf6c00 session 0x557dc0734d20
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:32:23.629007+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 467763200 unmapped: 66715648 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc2419c00
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc242c800
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:32:24.629257+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 467763200 unmapped: 66715648 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19dc6b000/0x0/0x1bfc00000, data 0x580c082/0x5a33000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:32:25.629404+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 467763200 unmapped: 66715648 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5201765 data_alloc: 218103808 data_used: 32505856
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:32:26.629684+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 467763200 unmapped: 66715648 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:32:27.629832+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 467763200 unmapped: 66715648 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19dc6b000/0x0/0x1bfc00000, data 0x580c082/0x5a33000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:32:28.630084+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 467763200 unmapped: 66715648 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:32:29.630338+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 467763200 unmapped: 66715648 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:32:30.630542+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 467763200 unmapped: 66715648 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5213125 data_alloc: 218103808 data_used: 34119680
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:32:31.630796+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 467763200 unmapped: 66715648 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19dc6b000/0x0/0x1bfc00000, data 0x580c082/0x5a33000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:32:32.630934+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 467763200 unmapped: 66715648 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:32:33.631124+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 467763200 unmapped: 66715648 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:32:34.631261+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 467763200 unmapped: 66715648 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:32:35.631429+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 467763200 unmapped: 66715648 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5213125 data_alloc: 218103808 data_used: 34119680
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 17.470623016s of 17.634029388s, submitted: 41
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:32:36.631595+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 471228416 unmapped: 63250432 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:32:37.631722+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 471007232 unmapped: 63471616 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19d5a7000/0x0/0x1bfc00000, data 0x5ed0082/0x60f7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:32:38.631948+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 471072768 unmapped: 63406080 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:32:39.632143+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 471072768 unmapped: 63406080 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:32:40.632319+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 471072768 unmapped: 63406080 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5280329 data_alloc: 218103808 data_used: 34820096
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:32:41.632499+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 471138304 unmapped: 63340544 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19d5a7000/0x0/0x1bfc00000, data 0x5ed0082/0x60f7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:32:42.632629+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 471138304 unmapped: 63340544 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:32:43.632767+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 471138304 unmapped: 63340544 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:32:44.632958+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 471154688 unmapped: 63324160 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:32:45.633130+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 471154688 unmapped: 63324160 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5276949 data_alloc: 218103808 data_used: 34824192
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19d5a5000/0x0/0x1bfc00000, data 0x5ed2082/0x60f9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:32:46.633297+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 471154688 unmapped: 63324160 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:32:47.633438+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 471154688 unmapped: 63324160 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:32:48.633765+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 471154688 unmapped: 63324160 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:32:49.633954+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 471154688 unmapped: 63324160 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.511910439s of 14.040276527s, submitted: 72
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:32:50.634111+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 471154688 unmapped: 63324160 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19d5a4000/0x0/0x1bfc00000, data 0x5ed3082/0x60fa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5277177 data_alloc: 218103808 data_used: 34824192
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:32:51.634288+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 471154688 unmapped: 63324160 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc2419c00 session 0x557dc011e5a0
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc242c800 session 0x557dc0121e00
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:32:52.636527+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 471162880 unmapped: 63315968 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc25ad000
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc25ad000 session 0x557dc208a3c0
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:32:53.636723+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 468582400 unmapped: 65896448 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:32:54.636960+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 468582400 unmapped: 65896448 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:32:55.637102+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 468582400 unmapped: 65896448 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5141183 data_alloc: 218103808 data_used: 28835840
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19de30000/0x0/0x1bfc00000, data 0x5302020/0x5528000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:32:56.637261+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 468582400 unmapped: 65896448 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19de30000/0x0/0x1bfc00000, data 0x5302020/0x5528000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:32:57.637479+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 468582400 unmapped: 65896448 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:32:58.637652+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 468582400 unmapped: 65896448 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:32:59.637903+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 468582400 unmapped: 65896448 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:33:00.638093+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 468582400 unmapped: 65896448 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5141183 data_alloc: 218103808 data_used: 28835840
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:33:01.638265+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 468582400 unmapped: 65896448 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19de30000/0x0/0x1bfc00000, data 0x5302020/0x5528000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dbfb2fc00 session 0x557dc13dc780
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.882023811s of 12.038696289s, submitted: 56
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc20ef400 session 0x557dc138b4a0
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:33:02.638434+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc20ef400
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 468582400 unmapped: 65896448 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc20ef400 session 0x557dc0120d20
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:33:03.638601+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 468582400 unmapped: 65896448 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:33:04.638767+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 468582400 unmapped: 65896448 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:33:05.638912+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 468582400 unmapped: 65896448 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5026516 data_alloc: 218103808 data_used: 24023040
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19eb38000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:33:06.639053+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 468582400 unmapped: 65896448 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:33:07.639193+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19eb38000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 468582400 unmapped: 65896448 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:33:08.639402+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 468582400 unmapped: 65896448 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:33:09.639595+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 468582400 unmapped: 65896448 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:33:10.639756+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 468582400 unmapped: 65896448 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5026516 data_alloc: 218103808 data_used: 24023040
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:33:11.639918+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 468582400 unmapped: 65896448 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19eb38000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:33:12.640106+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 468582400 unmapped: 65896448 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:33:13.640251+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 468582400 unmapped: 65896448 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19eb38000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19eb38000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:33:14.640392+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 468582400 unmapped: 65896448 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:33:15.640540+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 468582400 unmapped: 65896448 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5026516 data_alloc: 218103808 data_used: 24023040
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19eb38000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:33:16.640733+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 468582400 unmapped: 65896448 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:33:17.640948+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 468582400 unmapped: 65896448 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:33:18.641175+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 468582400 unmapped: 65896448 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:33:19.641354+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 468582400 unmapped: 65896448 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:33:20.641523+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 468582400 unmapped: 65896448 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5026516 data_alloc: 218103808 data_used: 24023040
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19eb38000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:33:21.641665+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 468582400 unmapped: 65896448 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:33:22.641820+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 468582400 unmapped: 65896448 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:33:23.641964+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 468582400 unmapped: 65896448 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:33:24.642185+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 468590592 unmapped: 65888256 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:33:25.642357+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 468590592 unmapped: 65888256 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5026516 data_alloc: 218103808 data_used: 24023040
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19eb38000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:33:26.642499+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 468590592 unmapped: 65888256 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:33:27.642644+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 468590592 unmapped: 65888256 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:33:28.642932+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 468590592 unmapped: 65888256 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:33:29.643140+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19eb38000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 468598784 unmapped: 65880064 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19eb38000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:33:30.643297+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 468598784 unmapped: 65880064 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5026516 data_alloc: 218103808 data_used: 24023040
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:33:31.643503+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 468598784 unmapped: 65880064 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:33:32.643718+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 468598784 unmapped: 65880064 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19eb38000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:33:33.643872+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 468598784 unmapped: 65880064 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19eb38000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:33:34.644095+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 468598784 unmapped: 65880064 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:33:35.644314+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 468598784 unmapped: 65880064 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5026516 data_alloc: 218103808 data_used: 24023040
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:33:36.644571+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 468598784 unmapped: 65880064 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:33:37.644774+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 468598784 unmapped: 65880064 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:33:38.645488+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 468606976 unmapped: 65871872 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dbfb2fc00
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19eb38000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:33:39.645661+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 37.292110443s of 37.377944946s, submitted: 31
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 469270528 unmapped: 65208320 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dbfb2fc00 session 0x557dc31565a0
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc2419c00
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc2419c00 session 0x557dc208ad20
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc242c800
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc242c800 session 0x557dc05d9a40
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc25ad000
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc25ad000 session 0x557dc05d8000
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc25ad000
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc25ad000 session 0x557dc07352c0
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:33:40.645890+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 469278720 unmapped: 65200128 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5065424 data_alloc: 218103808 data_used: 24023040
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:33:41.646199+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 469278720 unmapped: 65200128 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:33:42.646432+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 469278720 unmapped: 65200128 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dbfb2fc00
Jan 20 15:53:18 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:53:18 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:53:18 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:53:18.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dbfb2fc00 session 0x557dc2095860
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:33:43.646646+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 469278720 unmapped: 65200128 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc20ef400
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc20ef400 session 0x557dc22cf680
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:33:44.646846+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 469278720 unmapped: 65200128 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc2419c00
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc2419c00 session 0x557dc0e552c0
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc242c800
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc242c800 session 0x557dc33cb2c0
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19e76b000/0x0/0x1bfc00000, data 0x4d0f000/0x4f33000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:33:45.647160+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc242c800
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dbfb2fc00
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 469213184 unmapped: 65265664 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5069474 data_alloc: 218103808 data_used: 24023040
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:33:46.647422+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 469213184 unmapped: 65265664 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:33:47.647666+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19e746000/0x0/0x1bfc00000, data 0x4d33010/0x4f58000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 469213184 unmapped: 65265664 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:33:48.647955+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19e746000/0x0/0x1bfc00000, data 0x4d33010/0x4f58000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 469221376 unmapped: 65257472 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:33:49.648169+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 469221376 unmapped: 65257472 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:33:50.648363+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 469221376 unmapped: 65257472 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5096514 data_alloc: 218103808 data_used: 27635712
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:33:51.648656+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 469221376 unmapped: 65257472 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:33:52.648914+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 469221376 unmapped: 65257472 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:33:53.649131+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 469221376 unmapped: 65257472 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:33:54.649359+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19e746000/0x0/0x1bfc00000, data 0x4d33010/0x4f58000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 469221376 unmapped: 65257472 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:33:55.649623+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 469221376 unmapped: 65257472 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc242c800 session 0x557dc13f74a0
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dbfb2fc00 session 0x557dc0745680
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5096514 data_alloc: 218103808 data_used: 27635712
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc20ef400
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 16.558765411s of 16.670768738s, submitted: 15
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:33:56.650996+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 466747392 unmapped: 67731456 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:33:57.651162+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 466747392 unmapped: 67731456 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc20ef400 session 0x557dc0145680
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:33:58.651380+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19eb38000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 466747392 unmapped: 67731456 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:33:59.651557+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 466747392 unmapped: 67731456 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:34:00.651799+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 466747392 unmapped: 67731456 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5033069 data_alloc: 218103808 data_used: 24023040
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:34:01.651979+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 466747392 unmapped: 67731456 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:34:02.652213+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 466747392 unmapped: 67731456 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:34:03.652497+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 466747392 unmapped: 67731456 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19eb38000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:34:04.652715+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 466747392 unmapped: 67731456 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:34:05.652926+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19eb38000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 466747392 unmapped: 67731456 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5033069 data_alloc: 218103808 data_used: 24023040
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:34:06.653078+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 466747392 unmapped: 67731456 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:34:07.653227+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 466747392 unmapped: 67731456 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:34:08.653448+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 466747392 unmapped: 67731456 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:34:09.653668+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 466755584 unmapped: 67723264 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:34:10.653848+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 466755584 unmapped: 67723264 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5033069 data_alloc: 218103808 data_used: 24023040
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:34:11.654034+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19eb38000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 466755584 unmapped: 67723264 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19eb38000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:34:12.654218+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 466755584 unmapped: 67723264 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:34:13.654430+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 466755584 unmapped: 67723264 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:34:14.654606+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc2419c00
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc2419c00 session 0x557dc30f1c20
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc25ad000
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc25ad000 session 0x557dc011e000
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc25ad000
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc25ad000 session 0x557dc011ed20
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dbfb2fc00
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dbfb2fc00 session 0x557dc138a1e0
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc20ef400
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.058893204s of 18.565244675s, submitted: 15
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 466763776 unmapped: 67715072 heap: 534478848 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:34:15.654767+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19e070000/0x0/0x1bfc00000, data 0x5409029/0x562e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,7])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc20ef400 session 0x557dc01b25a0
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc2419c00
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc2419c00 session 0x557dc0eddc20
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc242c800
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc242c800 session 0x557dc150e5a0
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 467001344 unmapped: 71680000 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc242c800
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5126150 data_alloc: 218103808 data_used: 24023040
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19e070000/0x0/0x1bfc00000, data 0x5409029/0x562e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [0,0,0,0,0,0,1])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc242c800 session 0x557dc1371e00
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dbfb2fc00
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dbfb2fc00 session 0x557dc0e8a780
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:34:16.654926+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 467001344 unmapped: 71680000 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:34:17.655081+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 467001344 unmapped: 71680000 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:34:18.655261+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 467001344 unmapped: 71680000 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19e026000/0x0/0x1bfc00000, data 0x5453062/0x5678000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:34:19.655456+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc20ef400
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 467001344 unmapped: 71680000 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc20ef400 session 0x557dc30fcb40
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc2419c00
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc25ad000
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:34:20.655600+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 467009536 unmapped: 71671808 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5127446 data_alloc: 218103808 data_used: 24150016
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:34:21.655735+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 467017728 unmapped: 71663616 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:34:22.655895+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19e026000/0x0/0x1bfc00000, data 0x5453062/0x5678000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 467017728 unmapped: 71663616 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc2419c00 session 0x557dc01b90e0
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc25ad000 session 0x557dc3157e00
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc25ad000
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:34:23.656022+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 459399168 unmapped: 79282176 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc25ad000 session 0x557dc1a563c0
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:34:24.656236+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 459399168 unmapped: 79282176 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:34:25.656444+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 459399168 unmapped: 79282176 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5040354 data_alloc: 218103808 data_used: 23826432
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:34:26.656625+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 459399168 unmapped: 79282176 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:34:27.656794+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 459399168 unmapped: 79282176 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19eb38000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:34:28.656933+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 459399168 unmapped: 79282176 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:34:29.657103+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19eb38000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 459399168 unmapped: 79282176 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:34:30.657286+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 459399168 unmapped: 79282176 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5040354 data_alloc: 218103808 data_used: 23826432
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:34:31.657510+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 459399168 unmapped: 79282176 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:34:32.657721+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 459399168 unmapped: 79282176 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:34:33.657929+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19eb38000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 459399168 unmapped: 79282176 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:34:34.658081+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 459399168 unmapped: 79282176 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:34:35.658275+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 459399168 unmapped: 79282176 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5040354 data_alloc: 218103808 data_used: 23826432
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:34:36.658471+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 459399168 unmapped: 79282176 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:34:37.658610+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19eb38000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 459399168 unmapped: 79282176 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:34:38.658836+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 459399168 unmapped: 79282176 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:34:39.659040+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 459399168 unmapped: 79282176 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:34:40.659273+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19eb38000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 459399168 unmapped: 79282176 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5040354 data_alloc: 218103808 data_used: 23826432
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:34:41.659514+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19eb38000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 459399168 unmapped: 79282176 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19eb38000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:34:42.659702+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 459399168 unmapped: 79282176 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:34:43.659904+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 459399168 unmapped: 79282176 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:34:44.660054+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 459399168 unmapped: 79282176 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:34:45.660217+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 459399168 unmapped: 79282176 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5040354 data_alloc: 218103808 data_used: 23826432
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19eb38000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:34:46.660423+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 459399168 unmapped: 79282176 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:34:47.660583+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 459399168 unmapped: 79282176 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:34:48.660775+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 459399168 unmapped: 79282176 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:34:49.660920+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 459399168 unmapped: 79282176 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19eb38000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:34:50.661066+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 459399168 unmapped: 79282176 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5040354 data_alloc: 218103808 data_used: 23826432
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:34:51.661258+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 459399168 unmapped: 79282176 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:34:52.661405+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 459399168 unmapped: 79282176 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:34:53.661672+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 459399168 unmapped: 79282176 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:34:54.661840+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 459399168 unmapped: 79282176 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:34:55.662095+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19eb38000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 459399168 unmapped: 79282176 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5040354 data_alloc: 218103808 data_used: 23826432
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:34:56.662263+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 459399168 unmapped: 79282176 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:34:57.662540+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 459399168 unmapped: 79282176 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:34:58.662772+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 459399168 unmapped: 79282176 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:34:59.662976+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 459399168 unmapped: 79282176 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:35:00.663123+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19eb38000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 459399168 unmapped: 79282176 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5040354 data_alloc: 218103808 data_used: 23826432
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:35:01.663265+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 459399168 unmapped: 79282176 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:35:02.663428+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 459399168 unmapped: 79282176 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:35:03.663564+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19eb38000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 459399168 unmapped: 79282176 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:35:04.663700+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 459399168 unmapped: 79282176 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:35:05.663917+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 459407360 unmapped: 79273984 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5040354 data_alloc: 218103808 data_used: 23826432
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:35:06.664101+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 459407360 unmapped: 79273984 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:35:07.664274+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dbfb2fc00
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dbfb2fc00 session 0x557dc01a43c0
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc20ef400
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc20ef400 session 0x557dc0121680
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 459407360 unmapped: 79273984 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc2419c00
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc2419c00 session 0x557dc1a56f00
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc242c800
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc242c800 session 0x557dc2320780
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc242c800
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 52.363605499s of 53.645584106s, submitted: 61
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:35:08.664427+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19eb38000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc242c800 session 0x557dc13dda40
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dbfb2fc00
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dbfb2fc00 session 0x557dc0edd860
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc20ef400
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc20ef400 session 0x557dc30ea1e0
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc2419c00
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 459497472 unmapped: 79183872 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19eb38000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc2419c00 session 0x557dc22ebe00
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc25ad000
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc25ad000 session 0x557dc138b2c0
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:35:09.664579+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 459497472 unmapped: 79183872 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:35:10.664773+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 459497472 unmapped: 79183872 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5080213 data_alloc: 218103808 data_used: 23830528
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:35:11.664979+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 459497472 unmapped: 79183872 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:35:12.665121+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 459497472 unmapped: 79183872 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:35:13.665295+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19e6a3000/0x0/0x1bfc00000, data 0x4dd6010/0x4ffb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 459497472 unmapped: 79183872 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dbfb2fc00
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dbfb2fc00 session 0x557dc05d8b40
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:35:14.665443+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc20ef400
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc20ef400 session 0x557dc30f01e0
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 459497472 unmapped: 79183872 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc2419c00
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc2419c00 session 0x557dc2304b40
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc242c800
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc242c800 session 0x557dc01443c0
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:35:15.665585+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 459497472 unmapped: 79183872 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5083129 data_alloc: 218103808 data_used: 23830528
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc25c6800
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc4082800
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:35:16.665791+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 459497472 unmapped: 79183872 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:35:17.665961+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 459497472 unmapped: 79183872 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:35:18.666113+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 460480512 unmapped: 78200832 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:35:19.666251+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19e67f000/0x0/0x1bfc00000, data 0x4dfa010/0x501f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 460480512 unmapped: 78200832 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:35:20.666407+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 460480512 unmapped: 78200832 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5109501 data_alloc: 218103808 data_used: 27508736
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19e67f000/0x0/0x1bfc00000, data 0x4dfa010/0x501f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:35:21.666610+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 460480512 unmapped: 78200832 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:35:22.666788+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 460480512 unmapped: 78200832 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:35:23.666969+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 460480512 unmapped: 78200832 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:35:24.667117+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 460488704 unmapped: 78192640 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:35:25.667238+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 460488704 unmapped: 78192640 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5109501 data_alloc: 218103808 data_used: 27508736
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:35:26.667380+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19e67f000/0x0/0x1bfc00000, data 0x4dfa010/0x501f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 460488704 unmapped: 78192640 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:35:27.667574+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 460488704 unmapped: 78192640 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:35:28.667778+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 20.432371140s of 20.581054688s, submitted: 20
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 460759040 unmapped: 77922304 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:35:29.667937+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 461283328 unmapped: 77398016 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19e37a000/0x0/0x1bfc00000, data 0x50ff010/0x5324000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:35:30.668081+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 462381056 unmapped: 76300288 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5189025 data_alloc: 218103808 data_used: 27648000
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:35:31.668231+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 462381056 unmapped: 76300288 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:35:32.668375+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 462381056 unmapped: 76300288 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:35:33.668539+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 462381056 unmapped: 76300288 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:35:34.668676+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 462381056 unmapped: 76300288 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:35:35.668819+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19dd8d000/0x0/0x1bfc00000, data 0x56eb010/0x5910000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 462381056 unmapped: 76300288 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5189025 data_alloc: 218103808 data_used: 27648000
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:35:36.669021+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 462381056 unmapped: 76300288 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:35:37.669193+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 462381056 unmapped: 76300288 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:35:38.669366+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 462381056 unmapped: 76300288 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:35:39.669549+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 462381056 unmapped: 76300288 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:35:40.669711+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19dd8d000/0x0/0x1bfc00000, data 0x56eb010/0x5910000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 462389248 unmapped: 76292096 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5189025 data_alloc: 218103808 data_used: 27648000
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:35:41.669853+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 462389248 unmapped: 76292096 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:35:42.670032+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19dd8d000/0x0/0x1bfc00000, data 0x56eb010/0x5910000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 462389248 unmapped: 76292096 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:35:43.670191+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.622055054s of 15.069800377s, submitted: 37
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc25c6800 session 0x557dc0edd0e0
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc4082800 session 0x557dc0e8be00
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 462389248 unmapped: 76292096 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dbfb2fc00
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:35:44.670304+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dbfb2fc00 session 0x557dc3156f00
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 462397440 unmapped: 76283904 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:35:45.670541+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 462397440 unmapped: 76283904 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5052885 data_alloc: 218103808 data_used: 23826432
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:35:46.670746+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 462397440 unmapped: 76283904 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:35:47.670939+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 462397440 unmapped: 76283904 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:35:48.671161+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19eb38000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 462397440 unmapped: 76283904 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:35:49.671343+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19eb38000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 462397440 unmapped: 76283904 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:35:50.671602+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19eb38000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19eb38000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 462405632 unmapped: 76275712 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5052885 data_alloc: 218103808 data_used: 23826432
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:35:51.671803+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 462405632 unmapped: 76275712 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:35:52.672023+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 462405632 unmapped: 76275712 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:35:53.672343+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 462405632 unmapped: 76275712 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:35:54.672593+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19eb38000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 462405632 unmapped: 76275712 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:35:55.672959+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 462405632 unmapped: 76275712 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19eb38000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5052885 data_alloc: 218103808 data_used: 23826432
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:35:56.673169+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 462405632 unmapped: 76275712 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:35:57.673407+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 462405632 unmapped: 76275712 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:35:58.673629+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 462413824 unmapped: 76267520 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:35:59.673850+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 462413824 unmapped: 76267520 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:36:00.674077+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 462413824 unmapped: 76267520 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5052885 data_alloc: 218103808 data_used: 23826432
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:36:01.674279+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19eb38000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 462422016 unmapped: 76259328 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:36:02.674456+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 462422016 unmapped: 76259328 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:36:03.674660+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 462430208 unmapped: 76251136 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:36:04.674811+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 462430208 unmapped: 76251136 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:36:05.674998+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 462430208 unmapped: 76251136 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19eb38000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5052885 data_alloc: 218103808 data_used: 23826432
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:36:06.675183+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 462430208 unmapped: 76251136 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:36:07.675368+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19eb38000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 462430208 unmapped: 76251136 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:36:08.675589+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 462430208 unmapped: 76251136 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:36:09.675757+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 462430208 unmapped: 76251136 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:36:10.676128+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 462430208 unmapped: 76251136 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5052885 data_alloc: 218103808 data_used: 23826432
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:36:11.676299+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 462438400 unmapped: 76242944 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:36:12.676479+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 462438400 unmapped: 76242944 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19eb38000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:36:13.676644+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 462446592 unmapped: 76234752 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:36:14.676849+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc20ef400
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 30.448078156s of 30.543226242s, submitted: 36
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc20ef400 session 0x557dc23043c0
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc2419c00
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc2419c00 session 0x557dc23303c0
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc242c800
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc242c800 session 0x557dc31223c0
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dbfb2fc00
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dbfb2fc00 session 0x557dc20b5680
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc20ef400
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc20ef400 session 0x557dc0edc960
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 462561280 unmapped: 76120064 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:36:15.677059+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 462561280 unmapped: 76120064 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5089967 data_alloc: 218103808 data_used: 23826432
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:36:16.677264+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 462561280 unmapped: 76120064 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:36:17.677398+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 462561280 unmapped: 76120064 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:36:18.677662+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19e6a4000/0x0/0x1bfc00000, data 0x4dd6000/0x4ffa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 462561280 unmapped: 76120064 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:36:19.677911+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19e6a4000/0x0/0x1bfc00000, data 0x4dd6000/0x4ffa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 462569472 unmapped: 76111872 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:36:20.678076+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc2419c00
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc2419c00 session 0x557dc301ba40
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 462716928 unmapped: 75964416 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5092355 data_alloc: 218103808 data_used: 23826432
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:36:21.678294+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc4082800
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc3f24000
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 462716928 unmapped: 75964416 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:36:22.678479+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 463077376 unmapped: 75603968 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:36:23.678641+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 463077376 unmapped: 75603968 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19e680000/0x0/0x1bfc00000, data 0x4dfa000/0x501e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:36:24.678797+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 463077376 unmapped: 75603968 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:36:25.678920+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 463077376 unmapped: 75603968 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5118595 data_alloc: 218103808 data_used: 27504640
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:36:26.679060+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19e680000/0x0/0x1bfc00000, data 0x4dfa000/0x501e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 463077376 unmapped: 75603968 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:36:27.679247+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 463077376 unmapped: 75603968 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:36:28.679461+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 463077376 unmapped: 75603968 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:36:29.679703+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 463077376 unmapped: 75603968 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:36:30.679951+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 463077376 unmapped: 75603968 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19e680000/0x0/0x1bfc00000, data 0x4dfa000/0x501e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:36:31.680130+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5118595 data_alloc: 218103808 data_used: 27504640
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 463077376 unmapped: 75603968 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:36:32.680299+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc2513c00
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.218767166s of 18.299646378s, submitted: 8
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc2513c00 session 0x557dc308cb40
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc2184000
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc2184000 session 0x557dc308cf00
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dbfb2fc00
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dbfb2fc00 session 0x557dc138a3c0
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc20ef400
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc20ef400 session 0x557dc05d8780
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 463110144 unmapped: 75571200 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc2419c00
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc2419c00 session 0x557dc20821e0
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:36:33.680427+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19d98d000/0x0/0x1bfc00000, data 0x5aeb062/0x5d10000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 463134720 unmapped: 75546624 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:36:34.680568+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 463134720 unmapped: 75546624 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:36:35.680710+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 463134720 unmapped: 75546624 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:36:36.680951+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5228936 data_alloc: 218103808 data_used: 27914240
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 463134720 unmapped: 75546624 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:36:37.681177+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 463134720 unmapped: 75546624 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19d987000/0x0/0x1bfc00000, data 0x5af1062/0x5d16000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:36:38.681392+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 463142912 unmapped: 75538432 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:36:39.681531+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc2513c00
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc2513c00 session 0x557dc2320d20
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 463142912 unmapped: 75538432 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:36:40.681700+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc75a1400
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc33fdc00
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 463183872 unmapped: 75497472 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:36:41.681969+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5233416 data_alloc: 218103808 data_used: 28471296
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 464429056 unmapped: 74252288 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:36:42.682117+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 464429056 unmapped: 74252288 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:36:43.682277+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19d987000/0x0/0x1bfc00000, data 0x5af1062/0x5d16000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 464429056 unmapped: 74252288 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:36:44.682434+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 464429056 unmapped: 74252288 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:36:45.682565+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 464429056 unmapped: 74252288 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:36:46.682713+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5277096 data_alloc: 218103808 data_used: 34607104
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 464429056 unmapped: 74252288 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:36:47.682994+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 464429056 unmapped: 74252288 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:36:48.683197+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19d987000/0x0/0x1bfc00000, data 0x5af1062/0x5d16000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 464429056 unmapped: 74252288 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:36:49.683345+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 464429056 unmapped: 74252288 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:36:50.683538+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 464429056 unmapped: 74252288 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:36:51.683736+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5277096 data_alloc: 218103808 data_used: 34607104
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 6000.1 total, 600.0 interval
                                           Cumulative writes: 67K writes, 258K keys, 67K commit groups, 1.0 writes per commit group, ingest: 0.24 GB, 0.04 MB/s
                                           Cumulative WAL: 67K writes, 25K syncs, 2.63 writes per sync, written: 0.24 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 2415 writes, 9201 keys, 2415 commit groups, 1.0 writes per commit group, ingest: 8.90 MB, 0.01 MB/s
                                           Interval WAL: 2415 writes, 999 syncs, 2.42 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.008       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.008       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.008       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557dbecdef30#2 capacity: 1.11 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 0.000216 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000122231%) FilterBlock(3,0.33 KB,2.82073e-05%) IndexBlock(3,0.34 KB,2.95505e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557dbecdef30#2 capacity: 1.11 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 0.000216 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000122231%) FilterBlock(3,0.33 KB,2.82073e-05%) IndexBlock(3,0.34 KB,2.95505e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557dbecdef30#2 capacity: 1.11 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 0.000216 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000122231%) FilterBlock(3,0.33 KB,2.82073e-05%) IndexBlock(3,0.34 KB,2.95505e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557dbecdef30#2 capacity: 1.11 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 0.000216 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000122231%) FilterBlock(3,0.33 KB,2.82073e-05%) IndexBlock(3,0.34 KB,2.95505e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557dbecdef30#2 capacity: 1.11 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 0.000216 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000122231%) FilterBlock(3,0.33 KB,2.82073e-05%) IndexBlock(3,0.34 KB,2.95505e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557dbecdef30#2 capacity: 1.11 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 0.000216 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000122231%) FilterBlock(3,0.33 KB,2.82073e-05%) IndexBlock(3,0.34 KB,2.95505e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557dbecdef30#2 capacity: 1.11 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 0.000216 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000122231%) FilterBlock(3,0.33 KB,2.82073e-05%) IndexBlock(3,0.34 KB,2.95505e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557dbecdf610#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557dbecdf610#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557dbecdf610#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557dbecdef30#2 capacity: 1.11 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 0.000216 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000122231%) FilterBlock(3,0.33 KB,2.82073e-05%) IndexBlock(3,0.34 KB,2.95505e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557dbecdef30#2 capacity: 1.11 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 0.000216 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000122231%) FilterBlock(3,0.33 KB,2.82073e-05%) IndexBlock(3,0.34 KB,2.95505e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 464461824 unmapped: 74219520 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:36:52.683945+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 19.664016724s of 19.928941727s, submitted: 58
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 465453056 unmapped: 73228288 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:36:53.684095+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 466599936 unmapped: 72081408 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:36:54.684270+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19d244000/0x0/0x1bfc00000, data 0x622f062/0x6454000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 466599936 unmapped: 72081408 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:36:55.684420+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:36:56.684561+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 466632704 unmapped: 72048640 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5348170 data_alloc: 234881024 data_used: 35033088
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:36:57.684733+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 466632704 unmapped: 72048640 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19d223000/0x0/0x1bfc00000, data 0x6248062/0x646d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:36:58.684923+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 466632704 unmapped: 72048640 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:36:59.685431+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 466632704 unmapped: 72048640 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19d223000/0x0/0x1bfc00000, data 0x6248062/0x646d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:37:00.685716+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 466632704 unmapped: 72048640 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19d223000/0x0/0x1bfc00000, data 0x6248062/0x646d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:37:01.685957+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 466632704 unmapped: 72048640 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5348170 data_alloc: 234881024 data_used: 35033088
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:37:02.686041+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 466632704 unmapped: 72048640 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:37:03.686159+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 466632704 unmapped: 72048640 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:37:04.686246+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 466632704 unmapped: 72048640 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19d223000/0x0/0x1bfc00000, data 0x6248062/0x646d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:37:05.686386+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 466632704 unmapped: 72048640 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:37:06.686538+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 466632704 unmapped: 72048640 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5348170 data_alloc: 234881024 data_used: 35033088
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:37:07.686692+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 466632704 unmapped: 72048640 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19d223000/0x0/0x1bfc00000, data 0x6248062/0x646d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.767245293s of 14.978918076s, submitted: 83
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc75a1400 session 0x557dc3156780
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc33fdc00 session 0x557dc33ca960
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dbfb2fc00
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dbfb2fc00 session 0x557dc05d92c0
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:37:08.686830+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 465600512 unmapped: 73080832 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:37:09.686944+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 465600512 unmapped: 73080832 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:37:10.687097+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 465600512 unmapped: 73080832 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:37:11.687230+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 465600512 unmapped: 73080832 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5186456 data_alloc: 218103808 data_used: 27914240
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19dc60000/0x0/0x1bfc00000, data 0x548f000/0x56b3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:37:12.687381+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 465600512 unmapped: 73080832 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc4082800 session 0x557dc30f03c0
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc3f24000 session 0x557dc0fe61e0
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc20ef400
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:37:13.687487+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 465600512 unmapped: 73080832 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc20ef400 session 0x557dc138a000
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:37:14.687645+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 461627392 unmapped: 77053952 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:37:15.687850+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 461627392 unmapped: 77053952 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:37:16.688056+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 461627392 unmapped: 77053952 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5070490 data_alloc: 218103808 data_used: 23826432
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19eb38000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:37:17.688390+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 461627392 unmapped: 77053952 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:37:18.688555+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 461627392 unmapped: 77053952 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:37:19.688718+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 461627392 unmapped: 77053952 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:37:20.688896+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 461627392 unmapped: 77053952 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19eb38000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:37:21.689053+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 461627392 unmapped: 77053952 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5070490 data_alloc: 218103808 data_used: 23826432
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:37:22.689217+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 461627392 unmapped: 77053952 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:37:23.689378+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 461627392 unmapped: 77053952 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:37:24.689529+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 461627392 unmapped: 77053952 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:37:25.689658+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 461627392 unmapped: 77053952 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19eb38000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:37:26.689847+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 461627392 unmapped: 77053952 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5070490 data_alloc: 218103808 data_used: 23826432
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:37:27.690059+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 461627392 unmapped: 77053952 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19eb38000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:37:28.690260+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 461627392 unmapped: 77053952 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:37:29.690921+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 461627392 unmapped: 77053952 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19eb38000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19eb38000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:37:30.691426+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 461627392 unmapped: 77053952 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:37:31.691844+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 461627392 unmapped: 77053952 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5070490 data_alloc: 218103808 data_used: 23826432
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:37:32.692235+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 461627392 unmapped: 77053952 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:37:33.692496+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 461627392 unmapped: 77053952 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19eb38000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:37:34.692637+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 461627392 unmapped: 77053952 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:37:35.692795+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19eb38000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 461627392 unmapped: 77053952 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19eb38000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:37:36.693009+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 461627392 unmapped: 77053952 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5070490 data_alloc: 218103808 data_used: 23826432
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:37:37.693477+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 461627392 unmapped: 77053952 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19eb38000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:37:38.693912+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 461627392 unmapped: 77053952 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:37:39.694174+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 461627392 unmapped: 77053952 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:37:40.694415+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 461627392 unmapped: 77053952 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:37:41.694691+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 461627392 unmapped: 77053952 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5070490 data_alloc: 218103808 data_used: 23826432
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19eb38000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:37:42.694958+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 461627392 unmapped: 77053952 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:37:43.695123+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 461627392 unmapped: 77053952 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:37:44.695409+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 461627392 unmapped: 77053952 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:37:45.695612+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 461635584 unmapped: 77045760 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19eb38000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:37:46.695786+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 461635584 unmapped: 77045760 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5070490 data_alloc: 218103808 data_used: 23826432
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:37:47.696050+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 461635584 unmapped: 77045760 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:37:48.696295+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 461635584 unmapped: 77045760 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19eb38000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:37:49.696498+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 461635584 unmapped: 77045760 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19eb38000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:37:50.696703+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 461635584 unmapped: 77045760 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19eb38000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:37:51.696939+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 461635584 unmapped: 77045760 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5070490 data_alloc: 218103808 data_used: 23826432
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19eb38000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:37:52.697101+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 461643776 unmapped: 77037568 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:37:53.697288+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 461643776 unmapped: 77037568 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19eb38000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:37:54.697471+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 461643776 unmapped: 77037568 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:37:55.697658+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 461643776 unmapped: 77037568 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19eb38000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:37:56.697913+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 461651968 unmapped: 77029376 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5070490 data_alloc: 218103808 data_used: 23826432
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:37:57.698065+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 461651968 unmapped: 77029376 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:37:58.698247+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 461651968 unmapped: 77029376 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:37:59.698408+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19eb38000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 461651968 unmapped: 77029376 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:38:00.698562+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 461651968 unmapped: 77029376 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:38:01.698709+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 461651968 unmapped: 77029376 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5070490 data_alloc: 218103808 data_used: 23826432
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:38:02.698898+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19eb38000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 461651968 unmapped: 77029376 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:38:03.699072+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 461651968 unmapped: 77029376 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:38:04.699235+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 461660160 unmapped: 77021184 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:38:05.699410+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 461660160 unmapped: 77021184 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:38:06.699571+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 461660160 unmapped: 77021184 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5070490 data_alloc: 218103808 data_used: 23826432
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc20ef400
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc20ef400 session 0x557dc33ca5a0
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dbfb2fc00
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dbfb2fc00 session 0x557dc0e8be00
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc33fdc00
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc33fdc00 session 0x557dc1a56b40
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:38:07.699717+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc3f24000
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc3f24000 session 0x557dc13f6b40
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc4082800
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 59.841094971s of 59.972972870s, submitted: 43
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 461660160 unmapped: 77021184 heap: 538681344 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19eb37000/0x0/0x1bfc00000, data 0x4942010/0x4b67000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [0,0,0,0,0,0,1])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc4082800 session 0x557dc1371e00
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dbfb2fc00
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dbfb2fc00 session 0x557dc0e54b40
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc20ef400
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc20ef400 session 0x557dc13dc780
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc33fdc00
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:38:08.699942+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc33fdc00 session 0x557dc011e5a0
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc3f24000
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc3f24000 session 0x557dc0734780
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 461668352 unmapped: 81215488 heap: 542883840 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:38:09.700121+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 461668352 unmapped: 81215488 heap: 542883840 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:38:10.700296+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 461668352 unmapped: 81215488 heap: 542883840 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:38:11.700465+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 461668352 unmapped: 81215488 heap: 542883840 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5157747 data_alloc: 218103808 data_used: 23826432
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:38:12.700651+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19e034000/0x0/0x1bfc00000, data 0x5444072/0x566a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 461676544 unmapped: 81207296 heap: 542883840 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:38:13.700802+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc2419c00
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 461676544 unmapped: 81207296 heap: 542883840 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc2419c00 session 0x557dc3122d20
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:38:14.700950+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc2419c00
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dbfb2fc00
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 461651968 unmapped: 81231872 heap: 542883840 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:38:15.701120+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 462372864 unmapped: 80510976 heap: 542883840 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19e010000/0x0/0x1bfc00000, data 0x5468072/0x568e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:38:16.701307+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19e010000/0x0/0x1bfc00000, data 0x5468072/0x568e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 463224832 unmapped: 79659008 heap: 542883840 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5242069 data_alloc: 234881024 data_used: 35377152
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19e010000/0x0/0x1bfc00000, data 0x5468072/0x568e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:38:17.701500+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 463224832 unmapped: 79659008 heap: 542883840 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:38:18.701938+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 463224832 unmapped: 79659008 heap: 542883840 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:38:19.702122+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 463224832 unmapped: 79659008 heap: 542883840 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.630780220s of 12.218190193s, submitted: 30
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:38:20.702307+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19e010000/0x0/0x1bfc00000, data 0x5468072/0x568e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 463224832 unmapped: 79659008 heap: 542883840 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:38:21.702505+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19e010000/0x0/0x1bfc00000, data 0x5468072/0x568e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 463224832 unmapped: 79659008 heap: 542883840 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5242141 data_alloc: 234881024 data_used: 35377152
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:38:22.702621+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 464347136 unmapped: 78536704 heap: 542883840 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:38:23.702842+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 464371712 unmapped: 78512128 heap: 542883840 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:38:24.703277+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 464371712 unmapped: 78512128 heap: 542883840 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:38:25.703433+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 464371712 unmapped: 78512128 heap: 542883840 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:38:26.703629+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 464617472 unmapped: 78266368 heap: 542883840 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19dee7000/0x0/0x1bfc00000, data 0x5591072/0x57b7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [0,1,3])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5280509 data_alloc: 234881024 data_used: 35397632
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:38:27.703940+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 465010688 unmapped: 77873152 heap: 542883840 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:38:28.704122+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 465084416 unmapped: 77799424 heap: 542883840 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:38:29.704561+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 465092608 unmapped: 77791232 heap: 542883840 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:38:30.704738+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 465092608 unmapped: 77791232 heap: 542883840 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:38:31.704975+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19d83f000/0x0/0x1bfc00000, data 0x5c39072/0x5e5f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 465100800 unmapped: 77783040 heap: 542883840 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5313851 data_alloc: 234881024 data_used: 36397056
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:38:32.705110+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 465100800 unmapped: 77783040 heap: 542883840 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:38:33.705307+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 465100800 unmapped: 77783040 heap: 542883840 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:38:34.705778+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 465100800 unmapped: 77783040 heap: 542883840 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19d83f000/0x0/0x1bfc00000, data 0x5c39072/0x5e5f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:38:35.705987+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 465100800 unmapped: 77783040 heap: 542883840 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:38:36.706158+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 465100800 unmapped: 77783040 heap: 542883840 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5313851 data_alloc: 234881024 data_used: 36397056
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:38:37.706376+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 465100800 unmapped: 77783040 heap: 542883840 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 17.592128754s of 18.461534500s, submitted: 292
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:38:38.706544+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc2419c00 session 0x557dc0fe7e00
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dbfb2fc00 session 0x557dc0fe6960
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc20ef400
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 465108992 unmapped: 77774848 heap: 542883840 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc20ef400 session 0x557dc011fe00
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:38:39.706848+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19d840000/0x0/0x1bfc00000, data 0x5c39062/0x5e5e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 465108992 unmapped: 77774848 heap: 542883840 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:38:40.707032+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 465108992 unmapped: 77774848 heap: 542883840 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:38:41.707225+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 465108992 unmapped: 77774848 heap: 542883840 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5082456 data_alloc: 218103808 data_used: 23826432
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19eb37000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:38:42.707461+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 465108992 unmapped: 77774848 heap: 542883840 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:38:43.707673+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19eb37000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 465108992 unmapped: 77774848 heap: 542883840 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:38:44.707834+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 465108992 unmapped: 77774848 heap: 542883840 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:38:45.708026+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19eb37000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 465108992 unmapped: 77774848 heap: 542883840 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:38:46.708222+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 465108992 unmapped: 77774848 heap: 542883840 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5082456 data_alloc: 218103808 data_used: 23826432
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19eb37000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:38:47.708406+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 465108992 unmapped: 77774848 heap: 542883840 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:38:48.708626+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 465108992 unmapped: 77774848 heap: 542883840 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:38:49.708762+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19eb37000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 465108992 unmapped: 77774848 heap: 542883840 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:38:50.708913+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 465108992 unmapped: 77774848 heap: 542883840 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:38:51.709052+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 465108992 unmapped: 77774848 heap: 542883840 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5082456 data_alloc: 218103808 data_used: 23826432
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:38:52.709187+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 465108992 unmapped: 77774848 heap: 542883840 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:38:53.709324+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 465108992 unmapped: 77774848 heap: 542883840 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:38:54.709460+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19eb37000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 465108992 unmapped: 77774848 heap: 542883840 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:38:55.709586+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 465108992 unmapped: 77774848 heap: 542883840 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:38:56.709772+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 465108992 unmapped: 77774848 heap: 542883840 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5082456 data_alloc: 218103808 data_used: 23826432
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:38:57.709945+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19eb37000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 465108992 unmapped: 77774848 heap: 542883840 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:38:58.710129+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 465108992 unmapped: 77774848 heap: 542883840 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19eb37000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:38:59.710350+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 465108992 unmapped: 77774848 heap: 542883840 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:39:00.710515+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 465117184 unmapped: 77766656 heap: 542883840 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:39:01.710728+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 465125376 unmapped: 77758464 heap: 542883840 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5082456 data_alloc: 218103808 data_used: 23826432
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19eb37000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:39:02.710931+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 465125376 unmapped: 77758464 heap: 542883840 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:39:03.711121+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 465125376 unmapped: 77758464 heap: 542883840 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:39:04.711283+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 465125376 unmapped: 77758464 heap: 542883840 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:39:05.711511+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 465125376 unmapped: 77758464 heap: 542883840 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:39:06.711776+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 465125376 unmapped: 77758464 heap: 542883840 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5082456 data_alloc: 218103808 data_used: 23826432
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:39:07.711963+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 465133568 unmapped: 77750272 heap: 542883840 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19eb37000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:39:08.712173+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 465133568 unmapped: 77750272 heap: 542883840 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:39:09.712368+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 465133568 unmapped: 77750272 heap: 542883840 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:39:10.712584+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 465133568 unmapped: 77750272 heap: 542883840 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:39:11.712817+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 465133568 unmapped: 77750272 heap: 542883840 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5082456 data_alloc: 218103808 data_used: 23826432
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:39:12.712997+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 465133568 unmapped: 77750272 heap: 542883840 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:39:13.713215+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19eb37000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 465133568 unmapped: 77750272 heap: 542883840 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc33fdc00
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 35.520748138s of 35.597747803s, submitted: 29
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc33fdc00 session 0x557dc011f0e0
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc3f24000
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc3f24000 session 0x557dc2082d20
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dbfb2fc00
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dbfb2fc00 session 0x557dc011fc20
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc20ef400
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc20ef400 session 0x557dc20954a0
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc2419c00
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc2419c00 session 0x557dc30f1c20
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:39:14.713441+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 465149952 unmapped: 81403904 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:39:15.713596+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 465149952 unmapped: 81403904 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:39:16.713782+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19dc53000/0x0/0x1bfc00000, data 0x5826062/0x5a4b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 465149952 unmapped: 81403904 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5195204 data_alloc: 218103808 data_used: 23826432
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:39:17.714076+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 465149952 unmapped: 81403904 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:39:18.714342+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 465158144 unmapped: 81395712 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:39:19.714641+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 465158144 unmapped: 81395712 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:39:20.714807+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 465158144 unmapped: 81395712 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19dc53000/0x0/0x1bfc00000, data 0x5826062/0x5a4b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:39:21.714935+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 465158144 unmapped: 81395712 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5195204 data_alloc: 218103808 data_used: 23826432
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc33fdc00
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc33fdc00 session 0x557dc2082d20
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:39:22.715190+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc2513c00
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc2513c00 session 0x557dc011fe00
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19dc53000/0x0/0x1bfc00000, data 0x5826062/0x5a4b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 465158144 unmapped: 81395712 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc2513c00
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc2513c00 session 0x557dc0fe6960
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dbfb2fc00
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:39:23.715342+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dbfb2fc00 session 0x557dc0fe7e00
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 465158144 unmapped: 81395712 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:39:24.715587+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc20ef400
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.223106384s of 10.457739830s, submitted: 40
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc2419c00
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 465174528 unmapped: 81379328 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:39:25.715796+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 465190912 unmapped: 81362944 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:39:26.716091+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 466018304 unmapped: 80535552 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5309994 data_alloc: 234881024 data_used: 38117376
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:39:27.716317+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19dc2e000/0x0/0x1bfc00000, data 0x584a072/0x5a70000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 466018304 unmapped: 80535552 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:39:28.716610+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 466018304 unmapped: 80535552 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:39:29.716954+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 466018304 unmapped: 80535552 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:39:30.717208+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 466026496 unmapped: 80527360 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:39:31.717399+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 466034688 unmapped: 80519168 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5309994 data_alloc: 234881024 data_used: 38117376
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:39:32.717644+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19dc2e000/0x0/0x1bfc00000, data 0x584a072/0x5a70000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 466034688 unmapped: 80519168 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:39:33.717839+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 466034688 unmapped: 80519168 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:39:34.718083+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19dc2e000/0x0/0x1bfc00000, data 0x584a072/0x5a70000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 466042880 unmapped: 80510976 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:39:35.718241+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 466042880 unmapped: 80510976 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:39:36.718388+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 466042880 unmapped: 80510976 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5310794 data_alloc: 234881024 data_used: 38137856
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.728159904s of 12.743644714s, submitted: 1
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:39:37.718534+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 468926464 unmapped: 77627392 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:39:38.718811+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19d2ee000/0x0/0x1bfc00000, data 0x618a072/0x63b0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 468934656 unmapped: 77619200 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:39:39.719019+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 468934656 unmapped: 77619200 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:39:40.719250+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 468934656 unmapped: 77619200 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:39:41.719420+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 468934656 unmapped: 77619200 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5401744 data_alloc: 234881024 data_used: 39141376
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:39:42.719648+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19d2e4000/0x0/0x1bfc00000, data 0x6194072/0x63ba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 468934656 unmapped: 77619200 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:39:43.719850+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 468934656 unmapped: 77619200 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:39:44.720097+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 468942848 unmapped: 77611008 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:39:45.720349+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 468942848 unmapped: 77611008 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19d2e4000/0x0/0x1bfc00000, data 0x6194072/0x63ba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:39:46.720553+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 468942848 unmapped: 77611008 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5402064 data_alloc: 234881024 data_used: 39149568
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:39:47.720730+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 468942848 unmapped: 77611008 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:39:48.720954+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 468942848 unmapped: 77611008 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:39:49.721166+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 468942848 unmapped: 77611008 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:39:50.721324+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 468942848 unmapped: 77611008 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:39:51.721503+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19d2e4000/0x0/0x1bfc00000, data 0x6194072/0x63ba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 468942848 unmapped: 77611008 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5402064 data_alloc: 234881024 data_used: 39149568
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:39:52.721678+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 468942848 unmapped: 77611008 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19d2e4000/0x0/0x1bfc00000, data 0x6194072/0x63ba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:39:53.721851+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 468942848 unmapped: 77611008 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19d2e4000/0x0/0x1bfc00000, data 0x6194072/0x63ba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:39:54.722064+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 468942848 unmapped: 77611008 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19d2e4000/0x0/0x1bfc00000, data 0x6194072/0x63ba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:39:55.722273+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 468951040 unmapped: 77602816 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:39:56.722487+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 468951040 unmapped: 77602816 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5402064 data_alloc: 234881024 data_used: 39149568
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:39:57.722699+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 468951040 unmapped: 77602816 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:39:58.722929+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 468951040 unmapped: 77602816 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:39:59.723090+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 468951040 unmapped: 77602816 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:40:00.723354+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19d2e4000/0x0/0x1bfc00000, data 0x6194072/0x63ba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 468951040 unmapped: 77602816 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:40:01.723503+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 468959232 unmapped: 77594624 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5402064 data_alloc: 234881024 data_used: 39149568
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:40:02.723636+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 468959232 unmapped: 77594624 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:40:03.723767+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc33fdc00
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 26.165599823s of 26.330036163s, submitted: 72
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19d2e4000/0x0/0x1bfc00000, data 0x6194072/0x63ba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc33fdc00 session 0x557dc301a3c0
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc72e5000
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc72e5000 session 0x557dc23303c0
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc6f24000
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc6f24000 session 0x557dc20950e0
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dbfb2fc00
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dbfb2fc00 session 0x557dc0734f00
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc2513c00
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc2513c00 session 0x557dc138b2c0
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 468983808 unmapped: 77570048 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:40:04.723925+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 468983808 unmapped: 77570048 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:40:05.724083+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 468983808 unmapped: 77570048 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:40:06.724237+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19cd90000/0x0/0x1bfc00000, data 0x66e8072/0x690e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 468983808 unmapped: 77570048 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5438075 data_alloc: 234881024 data_used: 39153664
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:40:07.724386+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 468983808 unmapped: 77570048 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:40:08.724569+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19cd90000/0x0/0x1bfc00000, data 0x66e8072/0x690e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 468983808 unmapped: 77570048 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:40:09.724724+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 468983808 unmapped: 77570048 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:40:10.724882+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19cd90000/0x0/0x1bfc00000, data 0x66e8072/0x690e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 468983808 unmapped: 77570048 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:40:11.725008+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19cd90000/0x0/0x1bfc00000, data 0x66e8072/0x690e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 468992000 unmapped: 77561856 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5438075 data_alloc: 234881024 data_used: 39153664
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:40:12.725172+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 469000192 unmapped: 77553664 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:40:13.725324+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc33fdc00
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.135008812s of 10.212894440s, submitted: 14
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc33fdc00 session 0x557dc05d8960
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 469000192 unmapped: 77553664 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:40:14.725498+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 469000192 unmapped: 77553664 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc72e5000
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:40:15.725666+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc3410000
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19cd6c000/0x0/0x1bfc00000, data 0x670c072/0x6932000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 469008384 unmapped: 77545472 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:40:16.725821+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 471605248 unmapped: 74948608 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5480787 data_alloc: 234881024 data_used: 44744704
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:40:17.725930+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 471605248 unmapped: 74948608 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:40:18.726081+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 471605248 unmapped: 74948608 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:40:19.726206+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 471605248 unmapped: 74948608 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:40:20.726355+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19cd6c000/0x0/0x1bfc00000, data 0x670c072/0x6932000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 471605248 unmapped: 74948608 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:40:21.726487+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 471605248 unmapped: 74948608 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5480787 data_alloc: 234881024 data_used: 44744704
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:40:22.726651+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 471605248 unmapped: 74948608 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:40:23.726810+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 471605248 unmapped: 74948608 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:40:24.727010+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19cd6c000/0x0/0x1bfc00000, data 0x670c072/0x6932000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.110620499s of 11.345820427s, submitted: 2
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 471605248 unmapped: 74948608 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:40:25.727198+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 471605248 unmapped: 74948608 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:40:26.727411+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 471605248 unmapped: 74948608 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5481679 data_alloc: 234881024 data_used: 44765184
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:40:27.727737+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 473464832 unmapped: 73089024 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:40:28.727907+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 473497600 unmapped: 73056256 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:40:29.728077+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19cb46000/0x0/0x1bfc00000, data 0x6932072/0x6b58000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 473497600 unmapped: 73056256 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:40:30.728287+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 473497600 unmapped: 73056256 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:40:31.728481+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 473497600 unmapped: 73056256 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5521285 data_alloc: 234881024 data_used: 45551616
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:40:32.728705+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 473497600 unmapped: 73056256 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:40:33.728889+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19cb46000/0x0/0x1bfc00000, data 0x6932072/0x6b58000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 473497600 unmapped: 73056256 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:40:34.729115+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 473497600 unmapped: 73056256 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:40:35.729338+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19cb46000/0x0/0x1bfc00000, data 0x6932072/0x6b58000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 473497600 unmapped: 73056256 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:40:36.729637+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 473497600 unmapped: 73056256 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5521285 data_alloc: 234881024 data_used: 45551616
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:40:37.729837+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 473497600 unmapped: 73056256 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:40:38.730098+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19cb46000/0x0/0x1bfc00000, data 0x6932072/0x6b58000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 473497600 unmapped: 73056256 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:40:39.730328+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.284305573s of 14.851020813s, submitted: 25
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 473497600 unmapped: 73056256 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:40:40.730574+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 473505792 unmapped: 73048064 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:40:41.730761+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 473505792 unmapped: 73048064 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5522885 data_alloc: 234881024 data_used: 45682688
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:40:42.730958+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19cb46000/0x0/0x1bfc00000, data 0x6932072/0x6b58000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 473505792 unmapped: 73048064 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:40:43.731141+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 473522176 unmapped: 73031680 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:40:44.731390+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc72e5000 session 0x557dc3157c20
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc3410000 session 0x557dc13dcf00
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc3410000
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 473522176 unmapped: 73031680 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:40:45.731518+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc3410000 session 0x557dc0edc000
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 473522176 unmapped: 73031680 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:40:46.731675+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19d2e4000/0x0/0x1bfc00000, data 0x6194072/0x63ba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 473522176 unmapped: 73031680 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:40:47.731835+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5406801 data_alloc: 234881024 data_used: 39215104
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 473522176 unmapped: 73031680 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:40:48.732156+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 473522176 unmapped: 73031680 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:40:49.732335+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 473522176 unmapped: 73031680 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:40:50.732548+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 473522176 unmapped: 73031680 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:40:51.732707+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 473522176 unmapped: 73031680 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:40:52.732932+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5406801 data_alloc: 234881024 data_used: 39215104
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19d2e4000/0x0/0x1bfc00000, data 0x6194072/0x63ba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 473522176 unmapped: 73031680 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:40:53.733099+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 473522176 unmapped: 73031680 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:40:54.733233+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 473522176 unmapped: 73031680 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:40:55.733486+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19d2e4000/0x0/0x1bfc00000, data 0x6194072/0x63ba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 473522176 unmapped: 73031680 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:40:56.733695+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 17.167324066s of 17.263990402s, submitted: 33
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc20ef400 session 0x557dc0734780
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc2419c00 session 0x557dc0e54b40
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 473522176 unmapped: 73031680 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:40:57.733856+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dbfb2fc00
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5406729 data_alloc: 234881024 data_used: 39215104
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dbfb2fc00 session 0x557dc33ca1e0
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 462045184 unmapped: 84508672 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:40:58.734165+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 462045184 unmapped: 84508672 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:40:59.734333+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19e13e000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 462045184 unmapped: 84508672 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:41:00.734543+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19e13e000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 462045184 unmapped: 84508672 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:41:01.734720+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19e13e000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:41:02.734962+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 462053376 unmapped: 84500480 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5104768 data_alloc: 218103808 data_used: 23760896
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:41:03.735200+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 462053376 unmapped: 84500480 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:41:04.735365+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 462053376 unmapped: 84500480 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19e13e000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19e13e000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:41:05.735622+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 462053376 unmapped: 84500480 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:41:06.735854+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 462053376 unmapped: 84500480 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:41:07.736093+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 462053376 unmapped: 84500480 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5104768 data_alloc: 218103808 data_used: 23760896
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:41:08.736340+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 462053376 unmapped: 84500480 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19e13e000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:41:09.736545+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 462053376 unmapped: 84500480 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19e13e000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:41:10.736750+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 462053376 unmapped: 84500480 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19e13e000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19e13e000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:41:11.736904+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 462053376 unmapped: 84500480 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:41:12.737383+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 462053376 unmapped: 84500480 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5104768 data_alloc: 218103808 data_used: 23760896
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:41:13.737594+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 462053376 unmapped: 84500480 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:41:14.737766+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 462053376 unmapped: 84500480 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19e13e000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:41:15.737946+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 462053376 unmapped: 84500480 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:41:16.738087+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 462053376 unmapped: 84500480 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:41:17.738262+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 462053376 unmapped: 84500480 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5104768 data_alloc: 218103808 data_used: 23760896
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:41:18.738468+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 462053376 unmapped: 84500480 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:41:19.738648+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 462069760 unmapped: 84484096 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:41:20.738828+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 462069760 unmapped: 84484096 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19e13e000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:41:21.739021+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 462069760 unmapped: 84484096 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:41:22.739147+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 462069760 unmapped: 84484096 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5104768 data_alloc: 218103808 data_used: 23760896
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19e13e000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:41:23.739371+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 462069760 unmapped: 84484096 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc2513c00
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 26.696624756s of 26.810197830s, submitted: 46
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc2513c00 session 0x557dc011e000
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19e13e000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dbfb2fc00
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dbfb2fc00 session 0x557dc2094b40
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc20ef400
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc20ef400 session 0x557dc208b4a0
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc2419c00
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc2419c00 session 0x557dbfac3e00
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc3410000
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc3410000 session 0x557dc31561e0
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:41:24.739553+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 462069760 unmapped: 84484096 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19e10e000/0x0/0x1bfc00000, data 0x536b062/0x5590000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:41:25.739731+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 462069760 unmapped: 84484096 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19e10e000/0x0/0x1bfc00000, data 0x536b062/0x5590000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19e10e000/0x0/0x1bfc00000, data 0x536b062/0x5590000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:41:26.739911+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 462069760 unmapped: 84484096 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:41:27.740084+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 462069760 unmapped: 84484096 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5188273 data_alloc: 218103808 data_used: 23760896
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:41:28.740256+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 462069760 unmapped: 84484096 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:41:29.740452+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 462077952 unmapped: 84475904 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:41:30.740629+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 462077952 unmapped: 84475904 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:41:31.740783+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 462077952 unmapped: 84475904 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19e10e000/0x0/0x1bfc00000, data 0x536b062/0x5590000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:41:32.740943+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 462077952 unmapped: 84475904 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5188273 data_alloc: 218103808 data_used: 23760896
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc33fdc00
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc33fdc00 session 0x557dc01443c0
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:41:33.741123+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 462077952 unmapped: 84475904 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc33fdc00
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc33fdc00 session 0x557dc2082b40
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:41:34.741289+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dbfb2fc00
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dbfb2fc00 session 0x557dc01b9860
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 462086144 unmapped: 84467712 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc20ef400
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.539037704s of 10.654681206s, submitted: 35
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc20ef400 session 0x557dc30f0b40
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:41:35.741440+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 462094336 unmapped: 84459520 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc2419c00
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc3410000
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:41:36.741576+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 462094336 unmapped: 84459520 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:41:37.741745+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19e0e9000/0x0/0x1bfc00000, data 0x538f072/0x55b5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 462102528 unmapped: 84451328 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5258851 data_alloc: 218103808 data_used: 33280000
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:41:38.741959+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 462102528 unmapped: 84451328 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:41:39.742125+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 462102528 unmapped: 84451328 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19e0e9000/0x0/0x1bfc00000, data 0x538f072/0x55b5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:41:40.742286+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 462102528 unmapped: 84451328 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:41:41.742479+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 462102528 unmapped: 84451328 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:41:42.742666+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 462102528 unmapped: 84451328 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5258851 data_alloc: 218103808 data_used: 33280000
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:41:43.742828+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 462102528 unmapped: 84451328 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:41:44.742958+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 462102528 unmapped: 84451328 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19e0e9000/0x0/0x1bfc00000, data 0x538f072/0x55b5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:41:45.743134+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 462102528 unmapped: 84451328 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19e0e9000/0x0/0x1bfc00000, data 0x538f072/0x55b5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:41:46.743309+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 462102528 unmapped: 84451328 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:41:47.743480+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 462102528 unmapped: 84451328 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5259331 data_alloc: 218103808 data_used: 33292288
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.320911407s of 13.332280159s, submitted: 2
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:41:48.743673+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 467959808 unmapped: 78594048 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:41:49.743823+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 468205568 unmapped: 78348288 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19cece000/0x0/0x1bfc00000, data 0x6199072/0x63bf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c96f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:41:50.743985+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 468205568 unmapped: 78348288 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19cecb000/0x0/0x1bfc00000, data 0x619d072/0x63c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c96f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:41:51.744197+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 468205568 unmapped: 78348288 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:41:52.744375+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 468205568 unmapped: 78348288 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5376943 data_alloc: 218103808 data_used: 34361344
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:41:53.744555+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 468205568 unmapped: 78348288 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19cecb000/0x0/0x1bfc00000, data 0x619d072/0x63c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c96f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:41:54.744704+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 468205568 unmapped: 78348288 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:41:55.744860+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 468205568 unmapped: 78348288 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:41:56.745051+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 468205568 unmapped: 78348288 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:41:57.745231+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 468205568 unmapped: 78348288 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5376923 data_alloc: 218103808 data_used: 34365440
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19cec9000/0x0/0x1bfc00000, data 0x619f072/0x63c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c96f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:41:58.745402+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 468205568 unmapped: 78348288 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:41:59.745553+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 468205568 unmapped: 78348288 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19cec9000/0x0/0x1bfc00000, data 0x619f072/0x63c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c96f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc10ab400 session 0x557dc3123e00
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc72e5000
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:42:00.745731+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 468205568 unmapped: 78348288 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.527844429s of 12.793952942s, submitted: 111
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc2367c00 session 0x557dc0e550e0
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc7489400
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:42:01.745994+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc242dc00 session 0x557dc0f1d2c0
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc2187000
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 468205568 unmapped: 78348288 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:42:02.746196+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 468205568 unmapped: 78348288 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5377151 data_alloc: 218103808 data_used: 34365440
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:42:03.746389+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 468205568 unmapped: 78348288 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:42:04.746612+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 468205568 unmapped: 78348288 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:42:05.746790+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 468205568 unmapped: 78348288 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19cec8000/0x0/0x1bfc00000, data 0x61a0072/0x63c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c96f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:42:06.746985+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 468205568 unmapped: 78348288 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:42:07.747234+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 468213760 unmapped: 78340096 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5377151 data_alloc: 218103808 data_used: 34365440
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:42:08.747459+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 468213760 unmapped: 78340096 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc7488c00
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc7488c00 session 0x557dc308cb40
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc25c7800
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc25c7800 session 0x557dc234a1e0
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dbfb2fc00
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dbfb2fc00 session 0x557dc3156b40
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc20ef400
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc20ef400 session 0x557dc0fe6b40
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc33fdc00
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc33fdc00 session 0x557dc01b90e0
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:42:09.747655+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19cec8000/0x0/0x1bfc00000, data 0x61a0072/0x63c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c96f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 468353024 unmapped: 78200832 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:42:10.747785+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 468353024 unmapped: 78200832 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:42:11.747923+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 468353024 unmapped: 78200832 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:42:12.748079+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 468353024 unmapped: 78200832 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5426809 data_alloc: 218103808 data_used: 34365440
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:42:13.748211+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 468353024 unmapped: 78200832 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc7488c00
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.988913536s of 13.046833992s, submitted: 13
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc7488c00 session 0x557dc3122000
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc25c6000
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc34c0000
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:42:14.748338+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 468353024 unmapped: 78200832 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19c90c000/0x0/0x1bfc00000, data 0x675b095/0x6982000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c96f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:42:15.748491+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 468451328 unmapped: 78102528 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:42:16.748635+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 468484096 unmapped: 78069760 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:42:17.748923+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 468484096 unmapped: 78069760 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5462955 data_alloc: 234881024 data_used: 38064128
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:42:18.749141+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 468484096 unmapped: 78069760 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:42:19.749293+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 468484096 unmapped: 78069760 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19c90c000/0x0/0x1bfc00000, data 0x675b095/0x6982000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c96f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:42:20.749436+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19c90c000/0x0/0x1bfc00000, data 0x675b095/0x6982000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c96f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 468484096 unmapped: 78069760 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:42:21.749617+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 468484096 unmapped: 78069760 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19c90c000/0x0/0x1bfc00000, data 0x675b095/0x6982000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c96f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:42:22.749760+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 468484096 unmapped: 78069760 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5463131 data_alloc: 234881024 data_used: 38064128
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:42:23.749925+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 468484096 unmapped: 78069760 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:42:24.750077+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 468484096 unmapped: 78069760 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19c90b000/0x0/0x1bfc00000, data 0x675c095/0x6983000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c96f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:42:25.750228+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 468484096 unmapped: 78069760 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.383593559s of 12.423521042s, submitted: 12
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:42:26.750362+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 469630976 unmapped: 76922880 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19c7ed000/0x0/0x1bfc00000, data 0x686c095/0x6a93000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c96f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19c7ed000/0x0/0x1bfc00000, data 0x686c095/0x6a93000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c96f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:42:27.750556+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 469630976 unmapped: 76922880 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5478097 data_alloc: 234881024 data_used: 38313984
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:42:28.750718+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 469630976 unmapped: 76922880 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:42:29.750906+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 469630976 unmapped: 76922880 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:42:30.751056+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 469630976 unmapped: 76922880 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:42:31.751260+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 469639168 unmapped: 76914688 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:42:32.751418+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19c7ce000/0x0/0x1bfc00000, data 0x6882095/0x6aa9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c96f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 469639168 unmapped: 76914688 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5485747 data_alloc: 234881024 data_used: 38129664
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:42:33.751578+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19c7ce000/0x0/0x1bfc00000, data 0x6882095/0x6aa9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c96f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 469639168 unmapped: 76914688 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:42:34.751721+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 469639168 unmapped: 76914688 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc25c6000 session 0x557dc011e3c0
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc34c0000 session 0x557dc13dcd20
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dbfb2fc00
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:42:35.751844+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dbfb2fc00 session 0x557dc0e552c0
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 469008384 unmapped: 77545472 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:42:36.752018+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19cec6000/0x0/0x1bfc00000, data 0x61a1072/0x63c7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c96f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 469008384 unmapped: 77545472 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:42:37.752223+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 469008384 unmapped: 77545472 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5385274 data_alloc: 218103808 data_used: 33284096
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:42:38.752411+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 469008384 unmapped: 77545472 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:42:39.752670+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 469008384 unmapped: 77545472 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19cec6000/0x0/0x1bfc00000, data 0x61a1072/0x63c7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c96f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:42:40.752835+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 469008384 unmapped: 77545472 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.639362335s of 14.880970955s, submitted: 82
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc2419c00 session 0x557dc13f6f00
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc3410000 session 0x557dc138a3c0
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc20ef400
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:42:41.752971+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 469032960 unmapped: 77520896 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc20ef400 session 0x557dc308cd20
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:42:42.753158+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 469032960 unmapped: 77520896 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5127069 data_alloc: 218103808 data_used: 23760896
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:42:43.753352+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 469032960 unmapped: 77520896 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:42:44.753488+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 469032960 unmapped: 77520896 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:42:45.753738+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19e044000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c96f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 469032960 unmapped: 77520896 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:42:46.753943+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 469032960 unmapped: 77520896 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:42:47.754110+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 469032960 unmapped: 77520896 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5127069 data_alloc: 218103808 data_used: 23760896
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:42:48.754319+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 469032960 unmapped: 77520896 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19e044000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c96f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:42:49.754483+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 469032960 unmapped: 77520896 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:42:50.754690+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 469032960 unmapped: 77520896 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:42:51.754845+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 469032960 unmapped: 77520896 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:42:52.755049+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 469041152 unmapped: 77512704 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5127069 data_alloc: 218103808 data_used: 23760896
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:42:53.755211+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 469041152 unmapped: 77512704 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19e044000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c96f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:42:54.755429+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 469049344 unmapped: 77504512 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:42:55.755693+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 469049344 unmapped: 77504512 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:42:56.755909+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 469049344 unmapped: 77504512 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:42:57.756105+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 469049344 unmapped: 77504512 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5127069 data_alloc: 218103808 data_used: 23760896
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:42:58.756305+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 469049344 unmapped: 77504512 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:42:59.756444+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19e044000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c96f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 469049344 unmapped: 77504512 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:43:00.756590+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 469057536 unmapped: 77496320 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:43:01.756746+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 469057536 unmapped: 77496320 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:43:02.756927+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 469057536 unmapped: 77496320 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5127069 data_alloc: 218103808 data_used: 23760896
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19e044000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c96f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:43:03.757068+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 469057536 unmapped: 77496320 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:43:04.757247+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 469057536 unmapped: 77496320 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:43:05.757463+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 469057536 unmapped: 77496320 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:43:06.757602+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 469057536 unmapped: 77496320 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:43:07.757787+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19e044000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c96f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 469057536 unmapped: 77496320 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5127069 data_alloc: 218103808 data_used: 23760896
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc20ef400
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc20ef400 session 0x557dc2304f00
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dbfb2fc00
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dbfb2fc00 session 0x557dc2304b40
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc2419c00
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc2419c00 session 0x557dc07352c0
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc3410000
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc3410000 session 0x557dc31223c0
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc34c0000
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 27.155056000s of 27.264543533s, submitted: 31
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:43:08.758037+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc34c0000 session 0x557dc0f1c3c0
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc34c0000
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc34c0000 session 0x557dc33cb4a0
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dbfb2fc00
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dbfb2fc00 session 0x557dc3123680
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc20ef400
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc20ef400 session 0x557dc20c63c0
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc2419c00
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc2419c00 session 0x557dc2320d20
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 470278144 unmapped: 76275712 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:43:09.758213+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 470278144 unmapped: 76275712 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:43:10.758385+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 470278144 unmapped: 76275712 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:43:11.758541+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 470278144 unmapped: 76275712 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:43:12.758713+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19e605000/0x0/0x1bfc00000, data 0x4a65000/0x4c89000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c96f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 470278144 unmapped: 76275712 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5143497 data_alloc: 218103808 data_used: 23760896
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:43:13.758898+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 470278144 unmapped: 76275712 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:43:14.759102+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 470278144 unmapped: 76275712 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19e605000/0x0/0x1bfc00000, data 0x4a65000/0x4c89000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c96f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:43:15.759240+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 470286336 unmapped: 76267520 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc011bc00 session 0x557dc05d9860
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dbfb2fc00
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:43:16.759415+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 470286336 unmapped: 76267520 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:43:17.759611+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc20ef400
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19e605000/0x0/0x1bfc00000, data 0x4a65000/0x4c89000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c96f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc20ef400 session 0x557dc13f72c0
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 470286336 unmapped: 76267520 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5147362 data_alloc: 218103808 data_used: 23760896
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc2419c00
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc34c0000
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:43:18.759820+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 470294528 unmapped: 76259328 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:43:19.759944+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 470294528 unmapped: 76259328 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:43:20.760080+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 470302720 unmapped: 76251136 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19e5e0000/0x0/0x1bfc00000, data 0x4a89023/0x4cae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c96f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:43:21.760229+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 470302720 unmapped: 76251136 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:43:22.760424+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 470302720 unmapped: 76251136 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5154882 data_alloc: 218103808 data_used: 24670208
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:43:23.760578+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 470302720 unmapped: 76251136 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19e5e0000/0x0/0x1bfc00000, data 0x4a89023/0x4cae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c96f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:43:24.760942+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 470302720 unmapped: 76251136 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:43:25.761080+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 470302720 unmapped: 76251136 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:43:26.761212+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 470302720 unmapped: 76251136 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:43:27.761358+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5154882 data_alloc: 218103808 data_used: 24670208
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 470302720 unmapped: 76251136 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:43:28.761652+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 470302720 unmapped: 76251136 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19e5e0000/0x0/0x1bfc00000, data 0x4a89023/0x4cae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c96f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:43:29.761927+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 470302720 unmapped: 76251136 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 21.634801865s of 21.770465851s, submitted: 16
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:43:30.762138+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 474480640 unmapped: 72073216 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:43:31.762706+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 475955200 unmapped: 70598656 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:43:32.763265+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5263596 data_alloc: 218103808 data_used: 25063424
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 475955200 unmapped: 70598656 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:43:33.763740+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 475955200 unmapped: 70598656 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:43:34.764007+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19d9e9000/0x0/0x1bfc00000, data 0x5672023/0x5897000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c96f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 475955200 unmapped: 70598656 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:43:35.764214+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 476020736 unmapped: 70533120 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:43:36.764583+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 476020736 unmapped: 70533120 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:43:37.764966+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5252004 data_alloc: 218103808 data_used: 25063424
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 474742784 unmapped: 71811072 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:43:38.765262+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 474742784 unmapped: 71811072 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:43:39.765556+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 474742784 unmapped: 71811072 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:43:40.765822+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19d9d6000/0x0/0x1bfc00000, data 0x5693023/0x58b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c96f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 474742784 unmapped: 71811072 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:43:41.765996+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 474742784 unmapped: 71811072 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:43:42.766207+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.431081772s of 12.760804176s, submitted: 122
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5252140 data_alloc: 218103808 data_used: 25063424
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 474701824 unmapped: 71852032 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:43:43.766348+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 474701824 unmapped: 71852032 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc2419c00 session 0x557dc30fc3c0
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc34c0000 session 0x557dc30eb860
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:43:44.766529+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc3410000
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19d9c8000/0x0/0x1bfc00000, data 0x56a1023/0x58c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c96f9c6), peers [0,2] op hist [0,1])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc3410000 session 0x557dc3157860
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 474710016 unmapped: 71843840 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:43:45.766855+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 474710016 unmapped: 71843840 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:43:46.767090+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 474710016 unmapped: 71843840 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:43:47.767343+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5137658 data_alloc: 218103808 data_used: 23760896
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 474710016 unmapped: 71843840 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:43:48.767579+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19e727000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c96f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 474710016 unmapped: 71843840 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:43:49.767777+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 474710016 unmapped: 71843840 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:43:50.768026+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19e727000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c96f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 474710016 unmapped: 71843840 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:43:51.768326+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 474710016 unmapped: 71843840 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:43:52.768524+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5137658 data_alloc: 218103808 data_used: 23760896
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 474710016 unmapped: 71843840 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:43:53.768668+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19e727000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c96f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 474710016 unmapped: 71843840 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:43:54.768839+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 474710016 unmapped: 71843840 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:43:55.769047+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 474718208 unmapped: 71835648 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:43:56.769217+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 474718208 unmapped: 71835648 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:43:57.769370+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19e727000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c96f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5137658 data_alloc: 218103808 data_used: 23760896
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 474718208 unmapped: 71835648 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:43:58.769596+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 474718208 unmapped: 71835648 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:43:59.769811+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 474718208 unmapped: 71835648 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:44:00.769946+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 474718208 unmapped: 71835648 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:44:01.770168+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 474718208 unmapped: 71835648 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:44:02.770389+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19e727000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c96f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5137658 data_alloc: 218103808 data_used: 23760896
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 474718208 unmapped: 71835648 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19e727000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c96f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:44:03.770604+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19e727000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c96f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 474734592 unmapped: 71819264 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:44:04.770834+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 474734592 unmapped: 71819264 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:44:05.771062+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19e727000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c96f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 474734592 unmapped: 71819264 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:44:06.771219+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 474734592 unmapped: 71819264 heap: 546553856 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc25c6000
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:44:07.771347+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 24.476037979s of 24.546947479s, submitted: 30
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc25c6000 session 0x557dc22cf2c0
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc20ef400
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc20ef400 session 0x557dc30f14a0
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc2419c00
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc2419c00 session 0x557dc0fe74a0
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc25c6000
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc25c6000 session 0x557dc234ad20
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc3410000
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc3410000 session 0x557dc301b4a0
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5245776 data_alloc: 218103808 data_used: 23760896
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 475971584 unmapped: 74260480 heap: 550232064 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:44:08.771558+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 475971584 unmapped: 74260480 heap: 550232064 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:44:09.771715+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 475971584 unmapped: 74260480 heap: 550232064 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:44:10.771885+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19d9b5000/0x0/0x1bfc00000, data 0x56b5000/0x58d9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c96f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 475971584 unmapped: 74260480 heap: 550232064 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:44:11.772063+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 475979776 unmapped: 74252288 heap: 550232064 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:44:12.772243+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5245776 data_alloc: 218103808 data_used: 23760896
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 475979776 unmapped: 74252288 heap: 550232064 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:44:13.772416+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19d9b5000/0x0/0x1bfc00000, data 0x56b5000/0x58d9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c96f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 475979776 unmapped: 74252288 heap: 550232064 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:44:14.772567+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc34c0000
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc34c0000 session 0x557dc0fe61e0
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc20ef400
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc20ef400 session 0x557dc0735860
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 475979776 unmapped: 74252288 heap: 550232064 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:44:15.772713+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 475979776 unmapped: 74252288 heap: 550232064 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc2419c00
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc2419c00 session 0x557dc20943c0
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc25c6000
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:44:16.772914+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc25c6000 session 0x557dc0d3f680
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19d9b5000/0x0/0x1bfc00000, data 0x56b5000/0x58d9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c96f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc3410000
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc34c0000
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 475987968 unmapped: 74244096 heap: 550232064 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:44:17.773177+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5245776 data_alloc: 218103808 data_used: 23760896
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 475987968 unmapped: 74244096 heap: 550232064 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:44:18.773455+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 475987968 unmapped: 74244096 heap: 550232064 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:44:19.773634+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 475922432 unmapped: 74309632 heap: 550232064 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:44:20.773799+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 476225536 unmapped: 74006528 heap: 550232064 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:44:21.774022+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19d9b5000/0x0/0x1bfc00000, data 0x56b5000/0x58d9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c96f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 476225536 unmapped: 74006528 heap: 550232064 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:44:22.774186+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5343376 data_alloc: 234881024 data_used: 37441536
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 476225536 unmapped: 74006528 heap: 550232064 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:44:23.774350+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 476225536 unmapped: 74006528 heap: 550232064 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:44:24.774508+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 476225536 unmapped: 74006528 heap: 550232064 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:44:25.774776+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 476225536 unmapped: 74006528 heap: 550232064 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:44:26.775130+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19d9b5000/0x0/0x1bfc00000, data 0x56b5000/0x58d9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c96f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 476225536 unmapped: 74006528 heap: 550232064 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:44:27.775348+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5343376 data_alloc: 234881024 data_used: 37441536
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 476225536 unmapped: 74006528 heap: 550232064 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:44:28.775541+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 476225536 unmapped: 74006528 heap: 550232064 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:44:29.775707+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 476225536 unmapped: 74006528 heap: 550232064 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:44:30.775946+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 23.191040039s of 23.264764786s, submitted: 20
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 478543872 unmapped: 71688192 heap: 550232064 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:44:31.776123+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 479035392 unmapped: 71196672 heap: 550232064 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:44:32.776271+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19cf05000/0x0/0x1bfc00000, data 0x6165000/0x6389000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c96f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5441608 data_alloc: 234881024 data_used: 39387136
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 479043584 unmapped: 71188480 heap: 550232064 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:44:33.776423+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 479043584 unmapped: 71188480 heap: 550232064 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:44:34.776592+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 479043584 unmapped: 71188480 heap: 550232064 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19cf05000/0x0/0x1bfc00000, data 0x6165000/0x6389000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c96f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:44:35.776759+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 479141888 unmapped: 71090176 heap: 550232064 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:44:36.776951+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 479141888 unmapped: 71090176 heap: 550232064 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:44:37.777110+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5432300 data_alloc: 234881024 data_used: 39387136
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 479141888 unmapped: 71090176 heap: 550232064 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:44:38.777308+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 479141888 unmapped: 71090176 heap: 550232064 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:44:39.777489+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 479141888 unmapped: 71090176 heap: 550232064 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:44:40.777667+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19cee4000/0x0/0x1bfc00000, data 0x6186000/0x63aa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c96f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 479141888 unmapped: 71090176 heap: 550232064 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:44:41.777844+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 479141888 unmapped: 71090176 heap: 550232064 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:44:42.778030+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.025492668s of 12.276173592s, submitted: 110
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5432856 data_alloc: 234881024 data_used: 39395328
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480190464 unmapped: 70041600 heap: 550232064 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:44:43.778173+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19cedd000/0x0/0x1bfc00000, data 0x618d000/0x63b1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c96f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc33fdc00
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc33fdc00 session 0x557dc150f680
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480444416 unmapped: 73990144 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:44:44.778338+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc7488c00
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc7488c00 session 0x557dc22eb4a0
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc20ef400
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc20ef400 session 0x557dc30ea3c0
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc2419c00
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc2419c00 session 0x557dc208b4a0
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc25c6000
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc25c6000 session 0x557dc308c5a0
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19c410000/0x0/0x1bfc00000, data 0x6c5a000/0x6e7e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c96f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480444416 unmapped: 73990144 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:44:45.778571+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480444416 unmapped: 73990144 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:44:46.778760+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480444416 unmapped: 73990144 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:44:47.778947+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19c410000/0x0/0x1bfc00000, data 0x6c5a000/0x6e7e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c96f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5521156 data_alloc: 234881024 data_used: 39395328
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480444416 unmapped: 73990144 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:44:48.779153+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480444416 unmapped: 73990144 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:44:49.779355+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc33fdc00
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc33fdc00 session 0x557dc33ca960
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480444416 unmapped: 73990144 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:44:50.779541+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19c410000/0x0/0x1bfc00000, data 0x6c5a000/0x6e7e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c96f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc3f25c00
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc2652c00
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:44:51.779725+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480444416 unmapped: 73990144 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19c40d000/0x0/0x1bfc00000, data 0x6c5d000/0x6e81000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c96f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:44:52.779937+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 485171200 unmapped: 69263360 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5599248 data_alloc: 234881024 data_used: 50343936
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:44:53.780068+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 485171200 unmapped: 69263360 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:44:54.780200+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 485171200 unmapped: 69263360 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19c40d000/0x0/0x1bfc00000, data 0x6c5d000/0x6e81000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c96f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:44:55.780346+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 485171200 unmapped: 69263360 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:44:56.780545+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 485171200 unmapped: 69263360 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:44:57.780689+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 485212160 unmapped: 69222400 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5599888 data_alloc: 234881024 data_used: 50405376
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:44:58.780966+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 485212160 unmapped: 69222400 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:44:59.781129+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 485212160 unmapped: 69222400 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 17.097406387s of 17.195215225s, submitted: 19
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19c40d000/0x0/0x1bfc00000, data 0x6c5d000/0x6e81000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c96f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:45:00.781279+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 485212160 unmapped: 69222400 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:45:01.781469+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 485212160 unmapped: 69222400 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:45:02.781664+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 485212160 unmapped: 69222400 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19bc56000/0x0/0x1bfc00000, data 0x740c000/0x7630000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c96f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5665978 data_alloc: 234881024 data_used: 51638272
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:45:03.781808+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 489594880 unmapped: 64839680 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:45:04.781987+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 489594880 unmapped: 64839680 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:45:05.782127+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 489897984 unmapped: 64536576 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19aa2c000/0x0/0x1bfc00000, data 0x7496000/0x76ba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:45:06.782292+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 490004480 unmapped: 64430080 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:45:07.782436+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 489332736 unmapped: 65101824 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:45:08.782609+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5669434 data_alloc: 234881024 data_used: 51773440
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 489332736 unmapped: 65101824 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:45:09.782816+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 489332736 unmapped: 65101824 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:45:10.783067+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 489332736 unmapped: 65101824 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19aa10000/0x0/0x1bfc00000, data 0x74ba000/0x76de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:45:11.783225+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 489332736 unmapped: 65101824 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19aa0f000/0x0/0x1bfc00000, data 0x74bb000/0x76df000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc3f25c00 session 0x557dc0744b40
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc2652c00 session 0x557dc234be00
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc20ef400
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.671845436s of 12.190391541s, submitted: 115
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:45:12.783365+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 489242624 unmapped: 65191936 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc20ef400 session 0x557dc011eb40
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:45:13.783549+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5444956 data_alloc: 234881024 data_used: 39456768
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 489242624 unmapped: 65191936 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:45:14.783696+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 489242624 unmapped: 65191936 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:45:15.783908+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19bd2e000/0x0/0x1bfc00000, data 0x619c000/0x63c0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 489242624 unmapped: 65191936 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:45:16.784089+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 489242624 unmapped: 65191936 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc3410000 session 0x557dc1a56b40
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc34c0000 session 0x557dc07441e0
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: handle_auth_request added challenge on 0x557dc2419c00
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:45:17.784235+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 489250816 unmapped: 65183744 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 ms_handle_reset con 0x557dc2419c00 session 0x557dc301b860
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:45:18.784387+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5157674 data_alloc: 218103808 data_used: 23760896
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 477487104 unmapped: 76947456 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:45:19.784525+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 477487104 unmapped: 76947456 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:45:20.784709+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 477487104 unmapped: 76947456 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19d588000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19d588000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:45:21.784851+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 477487104 unmapped: 76947456 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:45:22.785023+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 477487104 unmapped: 76947456 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:45:23.785154+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5157674 data_alloc: 218103808 data_used: 23760896
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 477487104 unmapped: 76947456 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:45:24.785311+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 477487104 unmapped: 76947456 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:45:25.785509+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 477487104 unmapped: 76947456 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:45:26.785693+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 477487104 unmapped: 76947456 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19d588000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:45:27.785851+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 477487104 unmapped: 76947456 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:45:28.786106+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5157674 data_alloc: 218103808 data_used: 23760896
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 477487104 unmapped: 76947456 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:45:29.786307+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 477487104 unmapped: 76947456 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:45:30.786450+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 477487104 unmapped: 76947456 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19d588000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:45:31.786582+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 477487104 unmapped: 76947456 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19d588000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:45:32.786854+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 477495296 unmapped: 76939264 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:45:33.787104+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5157674 data_alloc: 218103808 data_used: 23760896
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 477495296 unmapped: 76939264 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:45:34.787250+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 477495296 unmapped: 76939264 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:45:35.787378+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 477495296 unmapped: 76939264 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:45:36.787524+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 477495296 unmapped: 76939264 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:45:37.787708+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 477495296 unmapped: 76939264 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19d588000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:45:38.787937+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5157674 data_alloc: 218103808 data_used: 23760896
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 477495296 unmapped: 76939264 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:45:39.788067+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 477495296 unmapped: 76939264 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:45:40.788249+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 477495296 unmapped: 76939264 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:45:41.788395+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 477495296 unmapped: 76939264 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19d588000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:45:42.788525+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 477495296 unmapped: 76939264 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:45:43.788676+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5157674 data_alloc: 218103808 data_used: 23760896
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 477495296 unmapped: 76939264 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:45:44.788837+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 477495296 unmapped: 76939264 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19d588000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:45:45.789000+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 477495296 unmapped: 76939264 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:45:46.789153+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 477495296 unmapped: 76939264 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:45:47.789320+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 477495296 unmapped: 76939264 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19d588000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:45:48.789480+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5157674 data_alloc: 218103808 data_used: 23760896
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 477495296 unmapped: 76939264 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19d588000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:45:49.789666+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 477495296 unmapped: 76939264 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:45:50.789834+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 477495296 unmapped: 76939264 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:45:51.790043+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 477495296 unmapped: 76939264 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19d588000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:45:52.790194+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 477495296 unmapped: 76939264 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 sudo[337078]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:45:53.790340+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5157674 data_alloc: 218103808 data_used: 23760896
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 477495296 unmapped: 76939264 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 sudo[337078]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:45:54.790481+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19d588000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 477495296 unmapped: 76939264 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:45:55.790636+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 477495296 unmapped: 76939264 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:45:56.790790+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 477495296 unmapped: 76939264 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:45:57.790917+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 477495296 unmapped: 76939264 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19d588000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:45:58.791123+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5157674 data_alloc: 218103808 data_used: 23760896
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 477495296 unmapped: 76939264 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19d588000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:45:59.791247+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 477495296 unmapped: 76939264 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:46:00.791406+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 477495296 unmapped: 76939264 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:46:01.791612+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 477495296 unmapped: 76939264 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:46:02.791753+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19d588000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 477495296 unmapped: 76939264 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:46:03.792008+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5157674 data_alloc: 218103808 data_used: 23760896
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 477495296 unmapped: 76939264 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:46:04.793347+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 477495296 unmapped: 76939264 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:46:05.795939+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19d588000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 477495296 unmapped: 76939264 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 sudo[337078]: pam_unix(sudo:session): session closed for user root
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:46:06.797903+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 477495296 unmapped: 76939264 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:46:07.799501+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 477495296 unmapped: 76939264 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:46:08.800930+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5157674 data_alloc: 218103808 data_used: 23760896
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 477495296 unmapped: 76939264 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:46:09.802117+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 477495296 unmapped: 76939264 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19d588000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:46:10.802947+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 477495296 unmapped: 76939264 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:46:11.803820+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 477495296 unmapped: 76939264 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:46:12.804462+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 477503488 unmapped: 76931072 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:46:13.804860+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5157674 data_alloc: 218103808 data_used: 23760896
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 477503488 unmapped: 76931072 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:46:14.805270+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 477503488 unmapped: 76931072 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:46:15.805533+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 477503488 unmapped: 76931072 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19d588000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:46:16.805745+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 477503488 unmapped: 76931072 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:46:17.805921+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 477503488 unmapped: 76931072 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:46:18.806672+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5157674 data_alloc: 218103808 data_used: 23760896
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 477503488 unmapped: 76931072 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19d588000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:46:19.807489+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 477503488 unmapped: 76931072 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:46:20.808055+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 477511680 unmapped: 76922880 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:46:21.808630+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 477511680 unmapped: 76922880 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:46:22.809174+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 477511680 unmapped: 76922880 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:46:23.809672+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5157674 data_alloc: 218103808 data_used: 23760896
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 477511680 unmapped: 76922880 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:46:24.809939+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 477511680 unmapped: 76922880 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19d588000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:46:25.810210+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 477511680 unmapped: 76922880 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:46:26.810463+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19d588000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 477511680 unmapped: 76922880 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:46:27.810685+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 477511680 unmapped: 76922880 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:46:28.810940+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5157674 data_alloc: 218103808 data_used: 23760896
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 477519872 unmapped: 76914688 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:46:29.811145+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 477519872 unmapped: 76914688 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:46:30.811383+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 477519872 unmapped: 76914688 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:46:31.811641+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 477519872 unmapped: 76914688 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:46:32.811812+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19d588000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 477519872 unmapped: 76914688 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:46:33.812030+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5157674 data_alloc: 218103808 data_used: 23760896
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 477519872 unmapped: 76914688 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:46:34.812184+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 477519872 unmapped: 76914688 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:46:35.812366+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 477528064 unmapped: 76906496 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19d588000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:46:36.812545+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 477528064 unmapped: 76906496 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19d588000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:46:37.812739+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 477528064 unmapped: 76906496 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:46:38.812955+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5157674 data_alloc: 218103808 data_used: 23760896
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 477528064 unmapped: 76906496 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:46:39.813095+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 477528064 unmapped: 76906496 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:46:40.813283+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 477528064 unmapped: 76906496 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:46:41.813458+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 477536256 unmapped: 76898304 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19d588000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:46:42.813627+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 477536256 unmapped: 76898304 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:46:43.814071+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5157674 data_alloc: 218103808 data_used: 23760896
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 477536256 unmapped: 76898304 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:46:44.814211+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 477536256 unmapped: 76898304 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19d588000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:46:45.814345+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 477544448 unmapped: 76890112 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:46:46.814479+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 477544448 unmapped: 76890112 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:46:47.814696+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 477544448 unmapped: 76890112 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:46:48.814927+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5157674 data_alloc: 218103808 data_used: 23760896
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 477544448 unmapped: 76890112 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:46:49.815082+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 477544448 unmapped: 76890112 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:46:50.815216+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19d588000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 477544448 unmapped: 76890112 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:46:51.815354+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 6600.1 total, 600.0 interval
                                           Cumulative writes: 69K writes, 268K keys, 69K commit groups, 1.0 writes per commit group, ingest: 0.26 GB, 0.04 MB/s
                                           Cumulative WAL: 69K writes, 26K syncs, 2.63 writes per sync, written: 0.26 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 2415 writes, 9981 keys, 2415 commit groups, 1.0 writes per commit group, ingest: 10.97 MB, 0.02 MB/s
                                           Interval WAL: 2415 writes, 949 syncs, 2.54 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 477544448 unmapped: 76890112 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:46:52.815558+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 477552640 unmapped: 76881920 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:46:53.815708+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5157674 data_alloc: 218103808 data_used: 23760896
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 477552640 unmapped: 76881920 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19d588000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:46:54.815851+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 477552640 unmapped: 76881920 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:46:55.816030+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 477552640 unmapped: 76881920 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:46:56.816197+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 477552640 unmapped: 76881920 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19d588000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:46:57.816346+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 477552640 unmapped: 76881920 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:46:58.816723+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19d588000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5157674 data_alloc: 218103808 data_used: 23760896
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 477552640 unmapped: 76881920 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:46:59.816905+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 477552640 unmapped: 76881920 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:47:00.817072+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 477552640 unmapped: 76881920 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:47:01.817225+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19d588000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 477560832 unmapped: 76873728 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:47:02.817419+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 477560832 unmapped: 76873728 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:47:03.817577+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5157674 data_alloc: 218103808 data_used: 23760896
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 477560832 unmapped: 76873728 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:47:04.817737+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19d588000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 477560832 unmapped: 76873728 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:47:05.817917+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19d588000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 477569024 unmapped: 76865536 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:47:06.818153+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19d588000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 477569024 unmapped: 76865536 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:47:07.818279+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 477577216 unmapped: 76857344 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:47:08.818427+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5157674 data_alloc: 218103808 data_used: 23760896
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 477577216 unmapped: 76857344 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:47:09.818574+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 477577216 unmapped: 76857344 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:47:10.818775+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 477577216 unmapped: 76857344 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:47:11.818916+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 477577216 unmapped: 76857344 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:47:12.819083+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19d588000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 477577216 unmapped: 76857344 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:47:13.819239+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5157674 data_alloc: 218103808 data_used: 23760896
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 477577216 unmapped: 76857344 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:47:14.819387+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 477577216 unmapped: 76857344 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:47:15.819546+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 477593600 unmapped: 76840960 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:47:16.819773+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 477593600 unmapped: 76840960 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:47:17.819905+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19d588000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 477593600 unmapped: 76840960 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:47:18.820055+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5157674 data_alloc: 218103808 data_used: 23760896
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 477593600 unmapped: 76840960 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:47:19.820188+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 477593600 unmapped: 76840960 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:47:20.820313+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 477593600 unmapped: 76840960 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:47:21.820484+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 477593600 unmapped: 76840960 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:47:22.820620+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 477593600 unmapped: 76840960 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:47:23.820740+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19d588000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5157674 data_alloc: 218103808 data_used: 23760896
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 477601792 unmapped: 76832768 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:47:24.820911+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 477601792 unmapped: 76832768 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:47:25.821045+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 477601792 unmapped: 76832768 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:47:26.821205+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19d588000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 477618176 unmapped: 76816384 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:47:27.821332+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19d588000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 477626368 unmapped: 76808192 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:47:28.821499+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5157674 data_alloc: 218103808 data_used: 23760896
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 477626368 unmapped: 76808192 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:47:29.821641+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 477626368 unmapped: 76808192 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:47:30.821917+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 477626368 unmapped: 76808192 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:47:31.822064+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19d588000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 477634560 unmapped: 76800000 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:47:32.822217+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 477634560 unmapped: 76800000 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:47:33.822355+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5157674 data_alloc: 218103808 data_used: 23760896
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 477634560 unmapped: 76800000 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19d588000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:47:34.822491+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 477634560 unmapped: 76800000 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:47:35.822625+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 477634560 unmapped: 76800000 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:47:36.822852+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 477634560 unmapped: 76800000 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:47:37.822998+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 477634560 unmapped: 76800000 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:47:38.823161+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5157674 data_alloc: 218103808 data_used: 23760896
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 477634560 unmapped: 76800000 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:47:39.823286+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19d588000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 477650944 unmapped: 76783616 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:47:40.823440+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 477650944 unmapped: 76783616 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:47:41.823591+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19d588000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 477650944 unmapped: 76783616 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:47:42.823800+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 477650944 unmapped: 76783616 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:47:43.823942+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19d588000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5157674 data_alloc: 218103808 data_used: 23760896
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 477650944 unmapped: 76783616 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:47:44.824099+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 477650944 unmapped: 76783616 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:47:45.824267+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 477659136 unmapped: 76775424 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:47:46.824409+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19d588000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 477667328 unmapped: 76767232 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:47:47.824537+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19d588000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 477675520 unmapped: 76759040 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:47:48.824709+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5157674 data_alloc: 218103808 data_used: 23760896
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 477675520 unmapped: 76759040 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:47:49.824850+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 477675520 unmapped: 76759040 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19d588000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:47:50.825005+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 477675520 unmapped: 76759040 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:47:51.825127+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 477675520 unmapped: 76759040 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:47:52.825248+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 477675520 unmapped: 76759040 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:47:53.825381+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5157674 data_alloc: 218103808 data_used: 23760896
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 477675520 unmapped: 76759040 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:47:54.825501+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 477675520 unmapped: 76759040 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:47:55.825623+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 477683712 unmapped: 76750848 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19d588000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:47:56.825787+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 477691904 unmapped: 76742656 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:47:57.825940+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 477691904 unmapped: 76742656 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:47:58.826145+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5157674 data_alloc: 218103808 data_used: 23760896
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 477700096 unmapped: 76734464 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:47:59.826310+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 477700096 unmapped: 76734464 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:48:00.826459+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 477700096 unmapped: 76734464 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:48:01.826592+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 477700096 unmapped: 76734464 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19d588000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:48:02.826730+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 477700096 unmapped: 76734464 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:48:03.826928+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5157674 data_alloc: 218103808 data_used: 23760896
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 477716480 unmapped: 76718080 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:48:04.827077+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 477716480 unmapped: 76718080 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:48:05.827214+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 477716480 unmapped: 76718080 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:48:06.827334+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 477716480 unmapped: 76718080 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:48:07.827469+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19d588000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 477724672 unmapped: 76709888 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:48:08.827628+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5157674 data_alloc: 218103808 data_used: 23760896
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 477724672 unmapped: 76709888 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:48:09.827748+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 477724672 unmapped: 76709888 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:48:10.827855+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 477724672 unmapped: 76709888 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:48:11.827994+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19d588000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19d588000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 477732864 unmapped: 76701696 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:48:12.828181+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 477732864 unmapped: 76701696 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:48:13.828325+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19d588000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5157674 data_alloc: 218103808 data_used: 23760896
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 477732864 unmapped: 76701696 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19d588000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:48:14.828480+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 477749248 unmapped: 76685312 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:48:15.828653+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19d588000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 477749248 unmapped: 76685312 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:48:16.828805+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19d588000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 477749248 unmapped: 76685312 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:48:17.828967+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19d588000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 477749248 unmapped: 76685312 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:48:18.829194+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5157674 data_alloc: 218103808 data_used: 23760896
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 477749248 unmapped: 76685312 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19d588000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:48:19.829436+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 187.584411621s of 187.672454834s, submitted: 38
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 477749248 unmapped: 76685312 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:48:20.829589+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19d588000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 477757440 unmapped: 76677120 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:48:21.829750+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 478830592 unmapped: 75603968 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:48:22.829882+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 479985664 unmapped: 74448896 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19d588000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:48:23.830041+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5157674 data_alloc: 218103808 data_used: 23760896
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 479985664 unmapped: 74448896 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:48:24.830205+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 479985664 unmapped: 74448896 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:48:25.830359+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 479985664 unmapped: 74448896 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:48:26.830507+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19d588000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 479985664 unmapped: 74448896 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:48:27.830668+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 479985664 unmapped: 74448896 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:48:28.830840+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5157674 data_alloc: 218103808 data_used: 23760896
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 479985664 unmapped: 74448896 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:48:29.831052+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 479985664 unmapped: 74448896 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:48:30.831225+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 479985664 unmapped: 74448896 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:48:31.831474+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 479985664 unmapped: 74448896 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:48:32.831678+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19d588000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 479985664 unmapped: 74448896 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:48:33.832037+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5157674 data_alloc: 218103808 data_used: 23760896
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 479985664 unmapped: 74448896 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:48:34.832274+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 479985664 unmapped: 74448896 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:48:35.832451+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19d588000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480002048 unmapped: 74432512 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:48:36.832643+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480010240 unmapped: 74424320 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:48:37.832792+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480010240 unmapped: 74424320 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:48:38.832983+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5157674 data_alloc: 218103808 data_used: 23760896
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480010240 unmapped: 74424320 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:48:39.833129+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19d588000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480010240 unmapped: 74424320 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:48:40.833365+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480010240 unmapped: 74424320 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:48:41.833519+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480010240 unmapped: 74424320 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:48:42.833713+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480010240 unmapped: 74424320 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:48:43.833891+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5157674 data_alloc: 218103808 data_used: 23760896
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480010240 unmapped: 74424320 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:48:44.834033+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480018432 unmapped: 74416128 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:48:45.834176+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19d588000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480018432 unmapped: 74416128 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:48:46.834395+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480018432 unmapped: 74416128 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:48:47.834605+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480018432 unmapped: 74416128 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:48:48.834734+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5157674 data_alloc: 218103808 data_used: 23760896
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480018432 unmapped: 74416128 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:48:49.834850+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480018432 unmapped: 74416128 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:48:50.835021+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19d588000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480018432 unmapped: 74416128 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19d588000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:48:51.835182+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19d588000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480026624 unmapped: 74407936 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:48:52.835358+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19d588000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480026624 unmapped: 74407936 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:48:53.835454+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5157674 data_alloc: 218103808 data_used: 23760896
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480026624 unmapped: 74407936 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:48:54.835636+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19d588000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480026624 unmapped: 74407936 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:48:55.835815+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19d588000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480026624 unmapped: 74407936 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:48:56.835992+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480034816 unmapped: 74399744 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:48:57.836117+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480034816 unmapped: 74399744 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:48:58.836236+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5157674 data_alloc: 218103808 data_used: 23760896
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480034816 unmapped: 74399744 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:48:59.836382+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480043008 unmapped: 74391552 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:49:00.836535+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19d588000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480043008 unmapped: 74391552 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:49:01.836685+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480043008 unmapped: 74391552 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:49:02.836859+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480043008 unmapped: 74391552 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19d588000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:49:03.837039+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19d588000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5157674 data_alloc: 218103808 data_used: 23760896
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480043008 unmapped: 74391552 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:49:04.837181+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480043008 unmapped: 74391552 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:49:05.837328+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480043008 unmapped: 74391552 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:49:06.837486+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19d588000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480051200 unmapped: 74383360 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:49:07.837661+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480059392 unmapped: 74375168 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:49:08.837846+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5157674 data_alloc: 218103808 data_used: 23760896
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480059392 unmapped: 74375168 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:49:09.838084+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480059392 unmapped: 74375168 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:49:10.838213+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480059392 unmapped: 74375168 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:49:11.838384+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480059392 unmapped: 74375168 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:49:12.838541+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19d588000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480059392 unmapped: 74375168 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:49:13.838699+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19d588000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5157674 data_alloc: 218103808 data_used: 23760896
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480059392 unmapped: 74375168 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:49:14.838836+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480059392 unmapped: 74375168 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:49:15.838978+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19d588000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480067584 unmapped: 74366976 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:49:16.839156+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480067584 unmapped: 74366976 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:49:17.839341+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480067584 unmapped: 74366976 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:49:18.839560+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5157674 data_alloc: 218103808 data_used: 23760896
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480075776 unmapped: 74358784 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:49:19.839710+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19d588000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480083968 unmapped: 74350592 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:49:20.839982+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480083968 unmapped: 74350592 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:49:21.840205+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480083968 unmapped: 74350592 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:49:22.840396+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480083968 unmapped: 74350592 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:49:23.840547+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19d588000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5157674 data_alloc: 218103808 data_used: 23760896
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480083968 unmapped: 74350592 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:49:24.840721+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480083968 unmapped: 74350592 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:49:25.840985+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480083968 unmapped: 74350592 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:49:26.841154+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480083968 unmapped: 74350592 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:49:27.841300+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480083968 unmapped: 74350592 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:49:28.841496+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5157674 data_alloc: 218103808 data_used: 23760896
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480092160 unmapped: 74342400 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:49:29.841672+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19d588000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:49:30.841912+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480092160 unmapped: 74342400 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19d588000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19d588000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:49:31.842075+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480092160 unmapped: 74342400 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:49:32.842252+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480100352 unmapped: 74334208 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:49:33.842412+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480100352 unmapped: 74334208 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5157674 data_alloc: 218103808 data_used: 23760896
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:49:34.842561+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480100352 unmapped: 74334208 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19d588000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:49:35.842724+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480100352 unmapped: 74334208 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19d588000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:49:36.842890+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480100352 unmapped: 74334208 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:49:37.843044+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480100352 unmapped: 74334208 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:49:38.843257+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480100352 unmapped: 74334208 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19d588000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5157674 data_alloc: 218103808 data_used: 23760896
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:49:39.843430+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480100352 unmapped: 74334208 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:49:40.843604+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480108544 unmapped: 74326016 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:49:41.843745+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480108544 unmapped: 74326016 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:49:42.843994+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480108544 unmapped: 74326016 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:49:43.844162+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480108544 unmapped: 74326016 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5157674 data_alloc: 218103808 data_used: 23760896
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:49:44.844364+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480108544 unmapped: 74326016 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19d588000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:49:45.844521+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480108544 unmapped: 74326016 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:49:46.844696+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480116736 unmapped: 74317824 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:49:47.844837+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480116736 unmapped: 74317824 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:49:48.845021+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480124928 unmapped: 74309632 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19d588000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5157674 data_alloc: 218103808 data_used: 23760896
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:49:49.845260+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480124928 unmapped: 74309632 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:49:50.845700+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480124928 unmapped: 74309632 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:49:51.846705+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480124928 unmapped: 74309632 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19d588000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:49:52.847192+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480133120 unmapped: 74301440 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19d588000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:49:53.848198+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480133120 unmapped: 74301440 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19d588000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5157674 data_alloc: 218103808 data_used: 23760896
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:49:54.848776+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480133120 unmapped: 74301440 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:49:55.848957+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480133120 unmapped: 74301440 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:49:56.849192+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480149504 unmapped: 74285056 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:49:57.849399+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480149504 unmapped: 74285056 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:49:58.849618+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480149504 unmapped: 74285056 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:49:59.849837+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5157674 data_alloc: 218103808 data_used: 23760896
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480149504 unmapped: 74285056 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19d588000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:50:00.850106+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480149504 unmapped: 74285056 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:50:01.850259+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480149504 unmapped: 74285056 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:50:02.850496+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480149504 unmapped: 74285056 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:50:03.850653+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480149504 unmapped: 74285056 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:50:04.850912+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5157674 data_alloc: 218103808 data_used: 23760896
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480165888 unmapped: 74268672 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19d588000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:50:05.851061+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480174080 unmapped: 74260480 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:50:06.851240+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480174080 unmapped: 74260480 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19d588000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:50:07.851406+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480174080 unmapped: 74260480 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:50:08.851616+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480174080 unmapped: 74260480 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:50:09.851752+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5157674 data_alloc: 218103808 data_used: 23760896
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480174080 unmapped: 74260480 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:50:10.851974+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480174080 unmapped: 74260480 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:50:11.852164+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480174080 unmapped: 74260480 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19d588000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:50:12.852417+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480182272 unmapped: 74252288 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:50:13.852568+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480182272 unmapped: 74252288 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:50:14.852772+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5157674 data_alloc: 218103808 data_used: 23760896
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480182272 unmapped: 74252288 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:50:15.852958+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480182272 unmapped: 74252288 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19d588000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:50:16.853224+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480182272 unmapped: 74252288 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:50:17.853433+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480182272 unmapped: 74252288 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:50:18.853659+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480182272 unmapped: 74252288 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:50:19.853830+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5157674 data_alloc: 218103808 data_used: 23760896
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480190464 unmapped: 74244096 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19d588000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:50:20.854045+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480198656 unmapped: 74235904 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:50:21.854224+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480198656 unmapped: 74235904 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:50:22.854439+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480198656 unmapped: 74235904 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:50:23.854594+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480198656 unmapped: 74235904 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:50:24.854739+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5157674 data_alloc: 218103808 data_used: 23760896
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480198656 unmapped: 74235904 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:50:25.854923+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480206848 unmapped: 74227712 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19d588000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:50:26.855093+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480215040 unmapped: 74219520 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:50:27.855256+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480215040 unmapped: 74219520 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:50:28.855433+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480223232 unmapped: 74211328 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:50:29.855626+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5157674 data_alloc: 218103808 data_used: 23760896
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19d588000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480223232 unmapped: 74211328 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:50:30.855823+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480223232 unmapped: 74211328 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:50:31.856013+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480223232 unmapped: 74211328 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:50:32.856176+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480223232 unmapped: 74211328 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:50:33.856342+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480231424 unmapped: 74203136 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:50:34.856506+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5157674 data_alloc: 218103808 data_used: 23760896
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480231424 unmapped: 74203136 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19d588000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:50:35.856652+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480231424 unmapped: 74203136 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:50:36.856765+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480239616 unmapped: 74194944 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:50:37.856971+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480239616 unmapped: 74194944 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:50:38.857186+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480239616 unmapped: 74194944 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:50:39.857320+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19d588000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5157674 data_alloc: 218103808 data_used: 23760896
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480239616 unmapped: 74194944 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:50:40.857522+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480239616 unmapped: 74194944 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:50:41.857709+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480239616 unmapped: 74194944 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:50:42.858014+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480239616 unmapped: 74194944 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:50:43.858236+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19d588000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480239616 unmapped: 74194944 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:50:44.858425+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5157674 data_alloc: 218103808 data_used: 23760896
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480256000 unmapped: 74178560 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:50:45.858608+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480256000 unmapped: 74178560 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:50:46.858756+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480264192 unmapped: 74170368 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:50:47.858891+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480264192 unmapped: 74170368 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:50:48.859055+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480264192 unmapped: 74170368 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:50:49.859231+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19d588000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5157674 data_alloc: 218103808 data_used: 23760896
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480264192 unmapped: 74170368 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:50:50.859575+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480264192 unmapped: 74170368 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:50:51.859769+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480264192 unmapped: 74170368 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19d588000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:50:52.859962+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480280576 unmapped: 74153984 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:50:53.860140+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480280576 unmapped: 74153984 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:50:54.860346+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5157674 data_alloc: 218103808 data_used: 23760896
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480280576 unmapped: 74153984 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:50:55.860517+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480280576 unmapped: 74153984 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:50:56.860733+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19d588000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480280576 unmapped: 74153984 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:50:57.861005+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480280576 unmapped: 74153984 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:50:58.861244+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480280576 unmapped: 74153984 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:50:59.861394+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5157674 data_alloc: 218103808 data_used: 23760896
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480280576 unmapped: 74153984 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19d588000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:51:00.861556+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480288768 unmapped: 74145792 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19d588000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:51:01.861794+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480288768 unmapped: 74145792 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19d588000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:51:02.861999+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480288768 unmapped: 74145792 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:51:03.862159+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480288768 unmapped: 74145792 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:51:04.862398+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5157674 data_alloc: 218103808 data_used: 23760896
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480288768 unmapped: 74145792 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19d588000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:51:05.862578+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480288768 unmapped: 74145792 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19d588000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:51:06.862707+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480296960 unmapped: 74137600 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:51:07.862879+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480296960 unmapped: 74137600 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:51:08.863088+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480305152 unmapped: 74129408 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19d588000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:51:09.863274+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5157674 data_alloc: 218103808 data_used: 23760896
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480313344 unmapped: 74121216 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:51:10.863451+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480313344 unmapped: 74121216 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:51:11.863633+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19d588000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480313344 unmapped: 74121216 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:51:12.863841+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480313344 unmapped: 74121216 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:51:13.864118+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480313344 unmapped: 74121216 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:51:14.864325+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5157674 data_alloc: 218103808 data_used: 23760896
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480313344 unmapped: 74121216 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:51:15.864511+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19d588000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480329728 unmapped: 74104832 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:51:16.864710+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480329728 unmapped: 74104832 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19d588000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:51:17.864880+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480337920 unmapped: 74096640 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:51:18.865148+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480337920 unmapped: 74096640 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:51:19.865344+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5157674 data_alloc: 218103808 data_used: 23760896
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480337920 unmapped: 74096640 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:51:20.865549+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480337920 unmapped: 74096640 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:51:21.865765+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480337920 unmapped: 74096640 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:51:22.865961+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480337920 unmapped: 74096640 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19d588000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:51:23.866155+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480346112 unmapped: 74088448 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:51:24.866376+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5157674 data_alloc: 218103808 data_used: 23760896
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480346112 unmapped: 74088448 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:51:25.866613+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480346112 unmapped: 74088448 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:51:26.866830+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480354304 unmapped: 74080256 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:51:27.867006+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19d588000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480362496 unmapped: 74072064 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:51:28.867273+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480362496 unmapped: 74072064 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:51:29.867506+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5157674 data_alloc: 218103808 data_used: 23760896
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480362496 unmapped: 74072064 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:51:30.867671+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480362496 unmapped: 74072064 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19d588000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:51:31.867844+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480370688 unmapped: 74063872 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:51:32.868114+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480370688 unmapped: 74063872 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:51:33.868299+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19d588000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480370688 unmapped: 74063872 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:51:34.868454+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5157674 data_alloc: 218103808 data_used: 23760896
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480370688 unmapped: 74063872 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:51:35.868656+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480370688 unmapped: 74063872 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:51:36.868991+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480378880 unmapped: 74055680 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:51:37.869206+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480387072 unmapped: 74047488 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19d588000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:51:38.869398+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480387072 unmapped: 74047488 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:51:39.869576+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5157674 data_alloc: 218103808 data_used: 23760896
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480387072 unmapped: 74047488 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:51:40.869785+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480395264 unmapped: 74039296 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19d588000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:51:41.869986+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19d588000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480395264 unmapped: 74039296 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:51:42.870252+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480395264 unmapped: 74039296 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:51:43.870525+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480395264 unmapped: 74039296 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:51:44.870758+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19d588000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5157674 data_alloc: 218103808 data_used: 23760896
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480395264 unmapped: 74039296 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:51:45.870926+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19d588000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480403456 unmapped: 74031104 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:51:46.871149+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480403456 unmapped: 74031104 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:51:47.871329+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480411648 unmapped: 74022912 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:51:48.871572+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480411648 unmapped: 74022912 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19d588000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:51:49.871796+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5157674 data_alloc: 218103808 data_used: 23760896
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480411648 unmapped: 74022912 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:51:50.872039+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480411648 unmapped: 74022912 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:51:51.872251+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480411648 unmapped: 74022912 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:51:52.872420+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480411648 unmapped: 74022912 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:51:53.872673+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19d588000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480411648 unmapped: 74022912 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:51:54.873172+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5157674 data_alloc: 218103808 data_used: 23760896
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480411648 unmapped: 74022912 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:51:55.873388+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480419840 unmapped: 74014720 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:51:56.873583+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19d588000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480428032 unmapped: 74006528 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:51:57.873783+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480428032 unmapped: 74006528 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:51:58.874164+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480436224 unmapped: 73998336 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:51:59.874351+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5157674 data_alloc: 218103808 data_used: 23760896
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480436224 unmapped: 73998336 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:52:00.874551+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480436224 unmapped: 73998336 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:52:01.874756+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480436224 unmapped: 73998336 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19d588000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:52:02.874968+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480436224 unmapped: 73998336 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:52:03.875855+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480436224 unmapped: 73998336 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:52:04.876109+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5157674 data_alloc: 218103808 data_used: 23760896
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480444416 unmapped: 73990144 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:52:05.876295+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480444416 unmapped: 73990144 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:52:06.876499+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19d588000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480444416 unmapped: 73990144 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:52:07.876661+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19d588000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480444416 unmapped: 73990144 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:52:08.876958+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480444416 unmapped: 73990144 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:52:09.877180+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5157674 data_alloc: 218103808 data_used: 23760896
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480444416 unmapped: 73990144 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:52:10.877327+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480452608 unmapped: 73981952 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:52:11.877497+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19d588000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480468992 unmapped: 73965568 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:52:12.877673+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480468992 unmapped: 73965568 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:52:13.877910+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480468992 unmapped: 73965568 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:52:14.878121+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5157674 data_alloc: 218103808 data_used: 23760896
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480468992 unmapped: 73965568 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:52:15.878344+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480468992 unmapped: 73965568 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:52:16.878515+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480468992 unmapped: 73965568 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19d588000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:52:17.878706+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480468992 unmapped: 73965568 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:52:18.878923+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480468992 unmapped: 73965568 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 sudo[337115]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:52:19.879118+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5157674 data_alloc: 218103808 data_used: 23760896
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480485376 unmapped: 73949184 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:52:20.879322+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480485376 unmapped: 73949184 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:52:21.879475+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19d588000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 480485376 unmapped: 73949184 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:52:22.879639+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 475758592 unmapped: 78675968 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19d588000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:52:23.879796+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 475758592 unmapped: 78675968 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:52:24.879964+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19d588000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5157674 data_alloc: 218103808 data_used: 23760896
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 475758592 unmapped: 78675968 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 sudo[337115]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:52:25.880132+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 475758592 unmapped: 78675968 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:52:26.880352+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 475758592 unmapped: 78675968 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:52:27.880606+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 475758592 unmapped: 78675968 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:52:28.880840+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19d588000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 475758592 unmapped: 78675968 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:52:29.881062+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5157674 data_alloc: 218103808 data_used: 23760896
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 475758592 unmapped: 78675968 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:52:30.881233+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 475758592 unmapped: 78675968 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:52:31.881405+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 475758592 unmapped: 78675968 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:52:32.881538+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19d588000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 475758592 unmapped: 78675968 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:52:33.881700+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 475758592 unmapped: 78675968 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:52:34.881855+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5157674 data_alloc: 218103808 data_used: 23760896
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 475758592 unmapped: 78675968 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:52:35.882006+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 475758592 unmapped: 78675968 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 sudo[337115]: pam_unix(sudo:session): session closed for user root
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:52:36.882143+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 475758592 unmapped: 78675968 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:52:37.882354+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 475758592 unmapped: 78675968 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:52:38.882575+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19d588000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 475758592 unmapped: 78675968 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:52:39.882755+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5157674 data_alloc: 218103808 data_used: 23760896
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 475758592 unmapped: 78675968 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:52:40.882926+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 475758592 unmapped: 78675968 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:52:41.883085+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19d588000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 475758592 unmapped: 78675968 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:52:42.883220+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 475758592 unmapped: 78675968 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:52:43.883372+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 475766784 unmapped: 78667776 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:52:44.883535+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: do_command 'config diff' '{prefix=config diff}'
Jan 20 15:53:18 compute-1 ceph-osd[79119]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:18 compute-1 ceph-osd[79119]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:18 compute-1 ceph-osd[79119]: bluestore.MempoolThread(0x557dbedbdb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5157674 data_alloc: 218103808 data_used: 23760896
Jan 20 15:53:18 compute-1 ceph-osd[79119]: do_command 'config show' '{prefix=config show}'
Jan 20 15:53:18 compute-1 ceph-osd[79119]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 475308032 unmapped: 79126528 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: do_command 'counter dump' '{prefix=counter dump}'
Jan 20 15:53:18 compute-1 ceph-osd[79119]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Jan 20 15:53:18 compute-1 ceph-osd[79119]: do_command 'counter schema' '{prefix=counter schema}'
Jan 20 15:53:18 compute-1 ceph-osd[79119]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:52:45.883659+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: osd.1 418 heartbeat osd_stat(store_statfs(0x19d588000/0x0/0x1bfc00000, data 0x4942000/0x4b66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,2] op hist [])
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 474652672 unmapped: 79781888 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:52:46.883845+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: prioritycache tune_memory target: 4294967296 mapped: 473948160 unmapped: 80486400 heap: 554434560 old mem: 2845415833 new mem: 2845415833
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: tick
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_tickets
Jan 20 15:53:18 compute-1 ceph-osd[79119]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:52:47.884018+0000)
Jan 20 15:53:18 compute-1 ceph-osd[79119]: do_command 'log dump' '{prefix=log dump}'
Jan 20 15:53:18 compute-1 rsyslogd[1002]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 20 15:53:18 compute-1 ceph-mon[81775]: from='client.50612 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 15:53:18 compute-1 ceph-mon[81775]: from='client.37311 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 15:53:18 compute-1 ceph-mon[81775]: from='client.37329 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 15:53:18 compute-1 ceph-mon[81775]: from='client.50227 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 15:53:18 compute-1 ceph-mon[81775]: from='client.37350 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 20 15:53:18 compute-1 ceph-mon[81775]: from='client.50639 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 15:53:18 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/808794295' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Jan 20 15:53:18 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/2103643461' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Jan 20 15:53:18 compute-1 ceph-mon[81775]: from='client.50254 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 15:53:18 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/1551083616' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 20 15:53:18 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/2779943835' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Jan 20 15:53:18 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/2247268192' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Jan 20 15:53:18 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/2812618961' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Jan 20 15:53:18 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/2298780932' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Jan 20 15:53:18 compute-1 nova_compute[225855]: 2026-01-20 15:53:18.681 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:53:18 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:53:18 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:53:18 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:53:18.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:53:19 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:53:19 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Jan 20 15:53:19 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3089804160' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 20 15:53:19 compute-1 ceph-mon[81775]: from='client.37362 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 15:53:19 compute-1 ceph-mon[81775]: from='client.50660 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 15:53:19 compute-1 ceph-mon[81775]: from='client.50269 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 15:53:19 compute-1 ceph-mon[81775]: from='client.37371 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 20 15:53:19 compute-1 ceph-mon[81775]: from='client.50681 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 15:53:19 compute-1 ceph-mon[81775]: pgmap v3933: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:53:19 compute-1 ceph-mon[81775]: from='client.50293 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 20 15:53:19 compute-1 ceph-mon[81775]: from='client.37386 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 15:53:19 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/3140472892' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Jan 20 15:53:19 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/1364837486' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Jan 20 15:53:19 compute-1 ceph-mon[81775]: from='client.50693 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 20 15:53:19 compute-1 ceph-mon[81775]: from='client.50314 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 15:53:19 compute-1 ceph-mon[81775]: from='client.37401 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 15:53:19 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/1550185194' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 20 15:53:19 compute-1 ceph-mon[81775]: from='client.37428 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 15:53:19 compute-1 ceph-mon[81775]: from='client.50705 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 15:53:19 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/149186993' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Jan 20 15:53:19 compute-1 ceph-mon[81775]: from='client.50332 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 20 15:53:19 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/668688603' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Jan 20 15:53:19 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/440145304' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Jan 20 15:53:19 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/3089804160' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 20 15:53:19 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/1122230926' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Jan 20 15:53:19 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Jan 20 15:53:19 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3608108182' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Jan 20 15:53:19 compute-1 crontab[337345]: (root) LIST (root)
Jan 20 15:53:20 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:53:20 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:53:20 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:53:20.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:53:20 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon stat"} v 0) v1
Jan 20 15:53:20 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/858049734' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Jan 20 15:53:20 compute-1 nova_compute[225855]: 2026-01-20 15:53:20.315 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:53:20 compute-1 ceph-mon[81775]: from='client.50714 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 20 15:53:20 compute-1 ceph-mon[81775]: from='client.50350 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 15:53:20 compute-1 ceph-mon[81775]: from='client.37458 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 15:53:20 compute-1 ceph-mon[81775]: from='client.50726 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 15:53:20 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/3403921456' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Jan 20 15:53:20 compute-1 ceph-mon[81775]: from='client.50368 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 15:53:20 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/3608108182' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Jan 20 15:53:20 compute-1 ceph-mon[81775]: pgmap v3934: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:53:20 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/947707734' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Jan 20 15:53:20 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/4259400615' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Jan 20 15:53:20 compute-1 ceph-mon[81775]: from='client.50741 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 15:53:20 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/3743176531' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Jan 20 15:53:20 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/858049734' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Jan 20 15:53:20 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/3465038305' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Jan 20 15:53:20 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/4011182796' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Jan 20 15:53:20 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:53:20 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:53:20 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:53:20.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:53:21 compute-1 podman[337517]: 2026-01-20 15:53:21.085918731 +0000 UTC m=+0.120833706 container health_status 72be95c12e041eb2cd1a16ad70dfa1391d9f03a5f3180a614ead9459dcfcd46f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202)
Jan 20 15:53:21 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "node ls"} v 0) v1
Jan 20 15:53:21 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4097280376' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Jan 20 15:53:21 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd crush class ls"} v 0) v1
Jan 20 15:53:21 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2052119312' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Jan 20 15:53:21 compute-1 ceph-mon[81775]: from='client.50395 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 15:53:21 compute-1 ceph-mon[81775]: from='client.50762 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 15:53:21 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/4106595786' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Jan 20 15:53:21 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/515019539' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Jan 20 15:53:21 compute-1 ceph-mon[81775]: from='client.50434 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 15:53:21 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/685726112' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Jan 20 15:53:21 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/1113246010' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Jan 20 15:53:21 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/4097280376' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Jan 20 15:53:21 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/4293837088' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Jan 20 15:53:21 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/952246884' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Jan 20 15:53:21 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/4162090687' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Jan 20 15:53:21 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/3716471466' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Jan 20 15:53:21 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/2052119312' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Jan 20 15:53:21 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0) v1
Jan 20 15:53:21 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/502472852' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Jan 20 15:53:22 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd crush dump"} v 0) v1
Jan 20 15:53:22 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1585738903' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Jan 20 15:53:22 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:53:22 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:53:22 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:53:22.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:53:22 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0) v1
Jan 20 15:53:22 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2813367287' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Jan 20 15:53:22 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd crush rule ls"} v 0) v1
Jan 20 15:53:22 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1825010125' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Jan 20 15:53:22 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0) v1
Jan 20 15:53:22 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3075812643' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Jan 20 15:53:22 compute-1 ceph-mon[81775]: from='client.50789 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 15:53:22 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/1793993722' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Jan 20 15:53:22 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/906646828' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Jan 20 15:53:22 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/502472852' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Jan 20 15:53:22 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/2302797618' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Jan 20 15:53:22 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/728917619' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Jan 20 15:53:22 compute-1 ceph-mon[81775]: pgmap v3935: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:53:22 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/1585738903' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Jan 20 15:53:22 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/3519584347' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Jan 20 15:53:22 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/2636043298' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Jan 20 15:53:22 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/1021293115' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Jan 20 15:53:22 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/2141415677' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Jan 20 15:53:22 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/2813367287' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Jan 20 15:53:22 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/454954369' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Jan 20 15:53:22 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/1825010125' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Jan 20 15:53:22 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/3893597713' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Jan 20 15:53:22 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/2492913358' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Jan 20 15:53:22 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/3075812643' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Jan 20 15:53:22 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:53:22 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:53:22 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:53:22.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:53:22 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd crush show-tunables"} v 0) v1
Jan 20 15:53:22 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1414603985' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Jan 20 15:53:23 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr module ls", "format": "json-pretty"} v 0) v1
Jan 20 15:53:23 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1188902209' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Jan 20 15:53:23 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd crush tree", "show_shadow": true} v 0) v1
Jan 20 15:53:23 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1107210386' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Jan 20 15:53:23 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr services", "format": "json-pretty"} v 0) v1
Jan 20 15:53:23 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1482769606' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Jan 20 15:53:23 compute-1 nova_compute[225855]: 2026-01-20 15:53:23.686 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:53:23 compute-1 systemd[1]: Starting Hostname Service...
Jan 20 15:53:23 compute-1 systemd[1]: Started Hostname Service.
Jan 20 15:53:23 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd erasure-code-profile ls"} v 0) v1
Jan 20 15:53:23 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3218920685' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Jan 20 15:53:23 compute-1 ceph-mon[81775]: from='client.37587 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 15:53:23 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/722851636' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Jan 20 15:53:23 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/1414603985' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Jan 20 15:53:23 compute-1 ceph-mon[81775]: from='client.37599 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 15:53:23 compute-1 ceph-mon[81775]: from='client.37608 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 15:53:23 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/3368371534' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Jan 20 15:53:23 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/1188902209' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Jan 20 15:53:23 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/708267248' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Jan 20 15:53:23 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/1107210386' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Jan 20 15:53:23 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/2239306666' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Jan 20 15:53:23 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/1482769606' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Jan 20 15:53:23 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/4113140527' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Jan 20 15:53:24 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:53:24 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:53:24 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:53:24.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:53:24 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:53:24 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd metadata"} v 0) v1
Jan 20 15:53:24 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2496834440' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Jan 20 15:53:24 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr versions", "format": "json-pretty"} v 0) v1
Jan 20 15:53:24 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3119005333' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Jan 20 15:53:24 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:53:24 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:53:24 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:53:24.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:53:24 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd utilization"} v 0) v1
Jan 20 15:53:24 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1406159115' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Jan 20 15:53:24 compute-1 ceph-mon[81775]: from='client.37617 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 15:53:24 compute-1 ceph-mon[81775]: from='client.37638 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 15:53:24 compute-1 ceph-mon[81775]: pgmap v3936: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:53:24 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/2078948093' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Jan 20 15:53:24 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/3271178705' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Jan 20 15:53:24 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/3218920685' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Jan 20 15:53:24 compute-1 ceph-mon[81775]: from='client.37656 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 15:53:24 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/134214396' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Jan 20 15:53:24 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/1947446086' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Jan 20 15:53:24 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/2496834440' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Jan 20 15:53:24 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/3119005333' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Jan 20 15:53:24 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/1094165142' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Jan 20 15:53:24 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/1406159115' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Jan 20 15:53:25 compute-1 nova_compute[225855]: 2026-01-20 15:53:25.318 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:53:25 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 20 15:53:25 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 20 15:53:25 compute-1 ceph-mon[81775]: from='client.50614 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 15:53:25 compute-1 ceph-mon[81775]: from='client.37671 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 15:53:25 compute-1 ceph-mon[81775]: from='client.50629 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 15:53:25 compute-1 ceph-mon[81775]: from='client.37689 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 15:53:25 compute-1 ceph-mon[81775]: from='client.50638 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 15:53:25 compute-1 ceph-mon[81775]: from='client.50644 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 15:53:25 compute-1 ceph-mon[81775]: from='client.37695 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 15:53:25 compute-1 ceph-mon[81775]: from='client.50930 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 15:53:25 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/2745875369' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Jan 20 15:53:25 compute-1 ceph-mon[81775]: from='client.50656 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 15:53:25 compute-1 ceph-mon[81775]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 20 15:53:25 compute-1 ceph-mon[81775]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 20 15:53:25 compute-1 ceph-mon[81775]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 20 15:53:25 compute-1 ceph-mon[81775]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 20 15:53:25 compute-1 ceph-mon[81775]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 20 15:53:25 compute-1 ceph-mon[81775]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 20 15:53:25 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/1694053069' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Jan 20 15:53:25 compute-1 ceph-mon[81775]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 20 15:53:25 compute-1 ceph-mon[81775]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 20 15:53:25 compute-1 ceph-mon[81775]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 20 15:53:25 compute-1 ceph-mon[81775]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 20 15:53:26 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 20 15:53:26 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 20 15:53:26 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:53:26 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:53:26 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:53:26.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:53:26 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "quorum_status"} v 0) v1
Jan 20 15:53:26 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/921537409' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Jan 20 15:53:26 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:53:26 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:53:26 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:53:26.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:53:26 compute-1 ceph-mon[81775]: from='client.50945 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 15:53:26 compute-1 ceph-mon[81775]: from='client.50665 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 15:53:26 compute-1 ceph-mon[81775]: from='client.50957 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 15:53:26 compute-1 ceph-mon[81775]: from='client.50966 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 15:53:26 compute-1 ceph-mon[81775]: pgmap v3937: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:53:26 compute-1 ceph-mon[81775]: from='client.50698 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 15:53:26 compute-1 ceph-mon[81775]: from='client.50975 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 15:53:26 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/539377997' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Jan 20 15:53:26 compute-1 ceph-mon[81775]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 20 15:53:26 compute-1 ceph-mon[81775]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 20 15:53:26 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/2824071788' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Jan 20 15:53:26 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/1811606035' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Jan 20 15:53:26 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/921537409' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Jan 20 15:53:26 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/331790073' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Jan 20 15:53:26 compute-1 ceph-mon[81775]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 20 15:53:26 compute-1 ceph-mon[81775]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 20 15:53:27 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "versions"} v 0) v1
Jan 20 15:53:27 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1876167769' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Jan 20 15:53:27 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 20 15:53:27 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 20 15:53:27 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "health", "detail": "detail", "format": "json-pretty"} v 0) v1
Jan 20 15:53:27 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1931971985' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Jan 20 15:53:27 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 20 15:53:27 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 20 15:53:27 compute-1 ceph-mon[81775]: from='client.50725 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 15:53:27 compute-1 ceph-mon[81775]: from='client.51005 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 15:53:27 compute-1 ceph-mon[81775]: from='client.50746 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 15:53:27 compute-1 ceph-mon[81775]: from='client.51023 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 15:53:27 compute-1 ceph-mon[81775]: from='client.37779 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 15:53:27 compute-1 ceph-mon[81775]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 20 15:53:27 compute-1 ceph-mon[81775]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 20 15:53:27 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/1876167769' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Jan 20 15:53:27 compute-1 ceph-mon[81775]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 20 15:53:27 compute-1 ceph-mon[81775]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 20 15:53:27 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/287765366' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Jan 20 15:53:27 compute-1 ceph-mon[81775]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 20 15:53:27 compute-1 ceph-mon[81775]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 20 15:53:27 compute-1 ceph-mon[81775]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 20 15:53:27 compute-1 ceph-mon[81775]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 20 15:53:27 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/1931971985' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Jan 20 15:53:27 compute-1 ceph-mon[81775]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 20 15:53:27 compute-1 ceph-mon[81775]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 20 15:53:27 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/4139354330' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Jan 20 15:53:28 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd tree", "format": "json-pretty"} v 0) v1
Jan 20 15:53:28 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1807707528' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Jan 20 15:53:28 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:53:28 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:53:28 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:53:28.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:53:28 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:53:28 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:53:28 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:53:28.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:53:28 compute-1 nova_compute[225855]: 2026-01-20 15:53:28.721 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:53:28 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 20 15:53:28 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 20 15:53:29 compute-1 ceph-mon[81775]: from='client.51041 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 15:53:29 compute-1 ceph-mon[81775]: from='client.50812 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 15:53:29 compute-1 ceph-mon[81775]: pgmap v3938: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:53:29 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/1807707528' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Jan 20 15:53:29 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/1290319706' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Jan 20 15:53:29 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/883441442' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Jan 20 15:53:29 compute-1 ceph-mon[81775]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 20 15:53:29 compute-1 ceph-mon[81775]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 20 15:53:29 compute-1 ceph-mon[81775]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 20 15:53:29 compute-1 ceph-mon[81775]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 20 15:53:29 compute-1 ceph-mon[81775]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 20 15:53:29 compute-1 ceph-mon[81775]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 20 15:53:29 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/1440796443' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Jan 20 15:53:29 compute-1 ceph-mon[81775]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 20 15:53:29 compute-1 ceph-mon[81775]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 20 15:53:29 compute-1 ceph-mon[81775]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 20 15:53:29 compute-1 ceph-mon[81775]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 20 15:53:29 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/1389273415' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Jan 20 15:53:29 compute-1 podman[338630]: 2026-01-20 15:53:29.08232334 +0000 UTC m=+0.099364219 container health_status 533cd8e0c3b8b7f910f15cef49e7f879ab46c12ce646a35628c15f13725eb500 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-74c9debe5f9e04ff9ee42a3aa155432affc8a9cb0fab2a60d92b56b70743714b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3)
Jan 20 15:53:29 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 20 15:53:29 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 20 15:53:29 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:53:30 compute-1 ceph-mon[81775]: from='client.50845 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 15:53:30 compute-1 ceph-mon[81775]: from='client.37845 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 15:53:30 compute-1 ceph-mon[81775]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 20 15:53:30 compute-1 ceph-mon[81775]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 20 15:53:30 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/281162553' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Jan 20 15:53:30 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/3370524222' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Jan 20 15:53:30 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/2933957023' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Jan 20 15:53:30 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/2148008205' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Jan 20 15:53:30 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/2699355290' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Jan 20 15:53:30 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:53:30 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:53:30 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:53:30.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:53:30 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "fs ls"} v 0) v1
Jan 20 15:53:30 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/608709473' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Jan 20 15:53:30 compute-1 nova_compute[225855]: 2026-01-20 15:53:30.321 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:53:30 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "detail": "detail"} v 0) v1
Jan 20 15:53:30 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2207446485' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Jan 20 15:53:30 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:53:30 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:53:30 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:53:30.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:53:31 compute-1 ceph-mon[81775]: pgmap v3939: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:53:31 compute-1 ceph-mon[81775]: from='client.51179 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 15:53:31 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/608709473' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Jan 20 15:53:31 compute-1 ceph-mon[81775]: from='client.37872 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 15:53:31 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/2207446485' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Jan 20 15:53:31 compute-1 ceph-mon[81775]: from='client.50932 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 15:53:31 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/672864250' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Jan 20 15:53:31 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/684520784' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Jan 20 15:53:31 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/3125534778' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Jan 20 15:53:31 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "fs dump"} v 0) v1
Jan 20 15:53:31 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1277536710' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Jan 20 15:53:31 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "fs ls"} v 0) v1
Jan 20 15:53:31 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1619219323' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Jan 20 15:53:32 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:53:32 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:53:32 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:53:32.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:53:32 compute-1 ceph-mon[81775]: from='client.37890 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 15:53:32 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/1277536710' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Jan 20 15:53:32 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/1851347541' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Jan 20 15:53:32 compute-1 ceph-mon[81775]: from='client.37902 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 15:53:32 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/1619219323' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Jan 20 15:53:32 compute-1 ceph-mon[81775]: pgmap v3940: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:53:32 compute-1 ceph-mon[81775]: from='client.51218 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 15:53:32 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/2799835478' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Jan 20 15:53:32 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mds stat"} v 0) v1
Jan 20 15:53:32 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/510735189' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Jan 20 15:53:32 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:53:32 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:53:32 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:53:32.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:53:33 compute-1 nova_compute[225855]: 2026-01-20 15:53:33.724 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:53:34 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:53:34 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:53:34 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:53:34.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:53:34 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:53:34 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 15:53:34 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:53:34.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 15:53:34 compute-1 ceph-mon[81775]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:53:34 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump"} v 0) v1
Jan 20 15:53:34 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2672762418' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Jan 20 15:53:35 compute-1 nova_compute[225855]: 2026-01-20 15:53:35.323 225859 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:53:35 compute-1 nova_compute[225855]: 2026-01-20 15:53:35.339 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:53:35 compute-1 nova_compute[225855]: 2026-01-20 15:53:35.340 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 15:53:35 compute-1 nova_compute[225855]: 2026-01-20 15:53:35.340 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 20 15:53:35 compute-1 nova_compute[225855]: 2026-01-20 15:53:35.361 225859 DEBUG nova.compute.manager [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 20 15:53:35 compute-1 nova_compute[225855]: 2026-01-20 15:53:35.362 225859 DEBUG oslo_service.periodic_task [None req-7a792283-28fa-4bd6-942f-00a63e52f6c5 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:53:36 compute-1 ceph-mon[81775]: from='client.51227 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 15:53:36 compute-1 ceph-mon[81775]: from='client.? 192.168.122.102:0/1559989133' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Jan 20 15:53:36 compute-1 ceph-mon[81775]: from='client.? 192.168.122.100:0/2251635322' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Jan 20 15:53:36 compute-1 ceph-mon[81775]: from='client.50992 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 15:53:36 compute-1 ceph-mon[81775]: from='client.? 192.168.122.101:0/510735189' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Jan 20 15:53:36 compute-1 ceph-mon[81775]: from='client.37938 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 15:53:36 compute-1 ceph-mon[81775]: from='client.50998 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 15:53:36 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:53:36 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:53:36 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:53:36.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:53:36 compute-1 radosgw[83787]: ====== starting new request req=0x7f09c6ece6f0 =====
Jan 20 15:53:36 compute-1 radosgw[83787]: ====== req done req=0x7f09c6ece6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 20 15:53:36 compute-1 radosgw[83787]: beast: 0x7f09c6ece6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:53:36.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 20 15:53:37 compute-1 ceph-mon[81775]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls"} v 0) v1
Jan 20 15:53:37 compute-1 ceph-mon[81775]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2387994047' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
